Viacom International, Inc. v. YouTube, Inc.
Filing
468
AMICUS BRIEF, on behalf of Amicus Curiae Public Knowledge, FILED. Service date 09/27/2011 by CM/ECF.[402380] [10-3270]
10-3270-cv
UNITED STATES COURT OF APPEALS
FOR THE SECOND CIRCUIT
VIACOM INTERNATIONAL INC., COMEDY PARTNERS, COUNTRY
MUSIC TELEVISION, INC., PARAMOUNT PICTURES
CORPORATION, BLACK ENTERTAINMENT TELEVISION, LLC,
Plaintiffs-Appellants
v.
YOUTUBE, INC., YOUTUBE, LLC, GOOGLE, INC.,
Defendants-Appellees
On Appeal from the United States District Court
for the Southern District of New York
Honorable Louis L. Stanton, District Judge
(Additional Caption on Reverse)
BRIEF OF AMICUS CURIAE PUBLIC KNOWLEDGE IN SUPPORT
OF AFFIRMANCE FOR DEFENDANTS-APPELLEES
Sherwin Siy
Michael Weinberg
PUBLIC KNOWLEDGE
1818 N Street NW, Suite 410
Washington, DC 20036
(202) 861-0020
ssiy@publicknowledge.org

Benjamin Kallos
BENJAMIN KALLOS, ATTORNEY AT LAW
402 E 90th Street, Suite 3F
New York, New York 10128
(212) 600-4960
ben@kalloslaw.com
10-3342-cv
UNITED STATES COURT OF APPEALS
FOR THE SECOND CIRCUIT
THE FOOTBALL ASSOCIATION PREMIER LEAGUE LIMITED, on
behalf of themselves and others similarly situated, BOURNE CO., CAL IV
ENTERTAINMENT, LLC, CHERRY LANE MUSIC PUBLISHING
COMPANY, INC., NATIONAL MUSIC PUBLISHERS' ASSOCIATION,
THE RODGERS & HAMMERSTEIN ORGANIZATION, EDWARD B.
MARKS MUSIC COMPANY, FREDDY BIENSTOCK MUSIC
COMPANY, dba Bienstock Publishing Company, ALLEY MUSIC
CORPORATION, X-RAY DOG MUSIC, INC., FEDERATION
FRANCAISE DE TENNIS, THE MUSIC FORCE MEDIA GROUP LLC,
SIN-DROME RECORDS, LTD., on behalf of themselves and all others
similarly situated, MURBO MUSIC PUBLISHING, INC., STAGE THREE
MUSIC (US), INC., THE MUSIC FORCE LLC,
Plaintiffs-Appellants,
and
ROBERT TUR, dba Los Angeles News Service,
THE SCOTTISH PREMIER LEAGUE LIMITED,
Plaintiffs,
v.
YOUTUBE, INC., YOUTUBE, LLC, GOOGLE INC.,
Defendants-Appellees.
On Appeal from the United States District Court
for the Southern District of New York
CORPORATE DISCLOSURE STATEMENT
Pursuant to Federal Rule of Appellate Procedure 26.1, Public
Knowledge states that it has no parent corporations and no publicly held
corporation has an ownership stake in it.
TABLE OF CONTENTS
SUMMARY OF ARGUMENT .................................................................... 3
ARGUMENT ................................................................................................. 4
I. Automated Filtering Technologies Cannot Reliably Identify
Infringement. ............................................................................................. 4
A. Automated Systems Cannot Make the Legal Judgments Necessary
To Identify Infringement.......................................................................... 5
B. Given the High Volumes of Uploads, Automated Filters Would
Generate High Volumes of False Positives. ............................................ 8
II. The Language and Purpose of the DMCA Do Not Allow Safe
Harbors to be Conditioned on Filtering Implementation ................... 11
A. Service Providers' Voluntary Use of Filters Cannot Create the
Presumption of Knowledge of Infringement ......................................... 13
B. Service Providers' Use of Filters Does Not Give them the Ability to
Control Infringement ............................................................................. 15
C. Expeditious Removal Does Not Require the Use of Filters ............. 18
D. Reasonable Implementation of a Repeat Infringer Policy Does Not
Require Filters. ....................................................................................... 19
CONCLUSION ........................................................................................... 23
TABLE OF AUTHORITIES
CASES
Corbis Corp. v. Amazon.com, Inc., 351 F. Supp. 2d 1090, 1110 (W.D. Wash.
2004). ................................................................................................... 16, 21
Harper & Row Publishers v. Nation Enters., 471 U.S. 539, 564-65 (1985).. 7
Hendrickson v. eBay, Inc., 165 F. Supp. 2d 1082 (C.D. Cal. 2001) ...... 16, 18
Perfect 10, Inc. v. CCBill LLC, 488 F.3d 1102, 1112-13 (9th Cir. 2007) ... 20,
21
Perfect 10, Inc. v. Cybernet Ventures, Inc., 213 F. Supp. 2d 1146, 1173
(C.D. Cal. 2002) ........................................................................................ 17
Sandoval v. New Line Cinema Corp., 147 F.3d 215, 217-18 (2d Cir. 1998) . 8
Tur v. YouTube, 2007 U.S. Dist. LEXIS 50254 (C.D. Cal.) .................. 16, 17
UMG Recordings, Inc. v. Veoh Networks, Inc., 665 F. Supp. 2d 1099 (C.D.
Cal. 2009) ........................................................................................... passim
STATUTES
17 U.S.C. § 107 ............................................................................................... 7
17 U.S.C. § 512(c)(1)(A)(iii) ........................................................................ 18
17 U.S.C. § 512(c)(1)(B) .................................................................. 15, 16, 17
17 U.S.C. § 512(i)(1)(A) ....................................................................... 3, 4, 21
17 U.S.C. § 512(m) ................................................................................. 12, 15
OTHER AUTHORITIES
35 Hours of Video a Minute Uploaded to YouTube, AFP (Nov. 11, 2010) ... 9
Center for Democracy and Technology, Campaign Takedown Troubles:
How Meritless Copyright Claims Threaten Online Political Speech (Sept.
2010) .................................................................................................... 10, 11
Copyright Infringement Notification, YouTube. .......................................... 11
Egypt. Hosni Mubarak, uploaded by "moudy2005" on Apr. 14, 2007 .............. 10
Gamal Mubarak, uploaded by "mssewidan" on Apr. 15, 2007.................... 10
H. Rep. No. 105-796 (Oct. 8, 1998) ............................................................. 18
Jordan Golson, YouTube Rules Web Videos, PCWORLD (Dec. 13, 2008) ..... 9
Julia Angwin et al., Record Labels Turn Piracy into a Marketing
Opportunity, Wall Street Journal, Oct. 18, 2006......................................... 6
Mehan Jayasuria, et al., Forcing the Net Through a Sieve: Why Copyright
Filtering is Not a Viable Solution for US ISPs, (Public Knowledge 2009) 6
Virginia Heffernan, The Hitler Meme, New York Times Magazine, Oct. 24,
2008 ........................................................................................................... 11
INTEREST OF AMICUS CURIAE1
Public Knowledge is a nonprofit public interest advocacy organization that
represents consumers' rights in Washington, D.C. Public Knowledge works with
consumer and industry groups to promote balance in intellectual property law and
technology policy, ensuring that the public can benefit from new innovations, fast
and affordable access, and the use of content.
Public Knowledge has joined as amicus curiae in a number of cases
addressing important copyright issues. See, e.g., Eldred v. Ashcroft, 537 U.S. 186
(2003); Twentieth Century Fox Film Corp. v. Cablevision Sys., No. 07-1480-CV
(2d Cir. amicus brief filed June 6, 2008); MDY Ind., LLC v. Blizzard Ent., Inc.,
2011 U.S. App. LEXIS 3428 (9th Cir. Jun. 17, 2010); Vernor v. Autodesk, Inc.,
621 F.3d 1102 (9th Cir. 2010). Public Knowledge has also investigated the effects
of copyright filtering on networks and consumers, publishing a white paper on the
topic. Mehan Jayasuria, et al., Forcing the Net Through a Sieve: Why Copyright
Filtering is Not a Viable Solution for US ISPs, (Public Knowledge 2009). The
above-captioned case directly impacts the development of services that empower
customers to share and distribute information via the Internet, and both parties and
proposed amici have raised arguments based upon the role and effectiveness of
copyright filtering.

1 This brief was not authored in whole or in part by any party to the action, nor did
any such party or its counsel contribute money that was intended to fund preparing
or submitting this brief. There is no person other than the amicus curiae who
contributed money that was intended to fund preparing or submitting this brief.
SUMMARY OF ARGUMENT
Public Knowledge submits this brief in order to address certain specific
issues related to the use and legal significance of content filtering technology in
relation to the safe harbor provisions of the Digital Millennium Copyright Act
(DMCA). Appellants make several arguments regarding filtering, each amounting
to the theory that the existence of filtering technology and YouTube's failure to use
it in certain specific ways excludes YouTube from the safe harbor provisions of
Section 512. In addition, Amici Audible Magic and Vobile ("Filtering Amici"), in
support of no party, have filed briefs regarding the reliability of their products,
apparently in response to the district court's citation of language from a 2009
decision to the effect that filters' determinations of infringement are not reliable
enough to be dispositive for purposes of 17 U.S.C. § 512(i)(1)(A). Public
Knowledge files this brief to emphasize that (i) whatever the accuracy of
automated filtering technology in identifying content, the technology cannot make
reliable legal determinations about when and whether specific uses of that content
are infringing; and (ii) disqualifying YouTube from the safe harbor because it
declined to use filters in the ways preferred by Appellants (and perhaps Filtering
Amici) would effectively make adoption of certain technologies a new prerequisite
for the safe harbor, in direct contradiction to both the plain meaning and the
purpose of the DMCA.
ARGUMENT
I. Automated Filtering Technologies Cannot Reliably Identify Infringement.
The filtering software touted by Appellants and Filtering Amici is designed
to identify videos that may be infringing. It does this by comparing the video files
uploaded by users with an existing catalog of known copyrighted works supplied
by cooperating copyright owners. If some or all of an uploaded video matches
video footage in the database, that indicates that the user may have copied at least
some material to which a cooperating copyright holder claims ownership.
The district court cited UMG Recordings, Inc. v. Veoh Networks, Inc. for the
proposition that filtering technology "does not meet the standard of reliability and
verifiability required by the Ninth Circuit in order to justify terminating a user's
account." 665 F. Supp. 2d 1099 (C.D. Cal. 2009). Audible Magic "strenuously
disagree[s]." (Audible Magic Br. at 3.) But the district court, like the court in
Veoh, was simply pointing out that automated filters cannot identify infringement
with sufficient certainty that a service provider should be required to treat a filter's
results as dispositive for purposes of identifying repeat infringers under
512(i)(1)(A). This proposition should not be controversial; an automated,
software-based warning of content matching is a far cry from an actual court
ruling that infringement has occurred. Automated filters cannot reliably determine
when and whether specific conduct is infringing.
A. Automated Systems Cannot Make the Legal Judgments Necessary
To Identify Infringement.
Filtering Amici are eager to promote the effectiveness of their software at
matching uploaded files to known copyrighted works. However, technical accuracy
in identifying content is not the same as legal accuracy in identifying infringement.
Identifying whether a given file contains material copied from an existing,
copyrighted work is only the beginning of an infringement analysis. For instance,
the file could be uploaded with the express or implied permission of the copyright
holder, or its presence on the site could be fair or de minimis use. None of these
common scenarios yields readily to automated analysis.
In the case of permissions, the mere presence of a work within a filtering
system's database of copyrighted works does not mean that any given upload is
unauthorized. A rightsholder's antipiracy division that wishes to identify uses of its
works throughout various online services has a strong incentive to be as complete
as possible when submitting its catalog of works to Audible Magic or another
filtering service. Meanwhile, various members of public relations and marketing
departments, as well as outside contractors hired to promote works through social
media, have incentives to allow clips of upcoming movies, individual tracks from
upcoming albums, or iconic portions of past works to be uploaded for others to
comment on, analyze, promote, and share with other potential customers. See, e.g.,
YouTube Br. at 42-50; Julia Angwin et al., Record Labels Turn Piracy into a
5
Marketing Opportunity, Wall Street Journal, Oct. 18, 2006, at B1.2 Not only are
authorized, copyrighted clips released directly by rightsholders themselves, but in
some cases, rightsholders‘ agents or employees have also attempted to make the
clips resemble infringing clips, in order to build publicity for a work in progress.
Id.
Authorizations can be granted through practically any means of
communication between a rightsholder (or a rightsholder's agent) and the uploader,
whether that be in an email, a letter, a negotiated contract, or a phone conversation.
In none of these cases would the presence of a license be indicated in the contents
of an uploaded file, leaving a filtering system no way to screen out these
authorized copies.
A limitation of filtering programs even less solvable by technical means is
their inability to determine fair use. See Mehan Jayasuria, et al., Forcing the Net
Through a Sieve: Why Copyright Filtering is Not a Viable Solution for US ISPs,
(Public Knowledge 2009).3 Unauthorized uses of copyrighted works are not
infringement if they qualify for fair use, typified by commentary, criticism, news
reporting, teaching, scholarship, and research. 17 U.S.C. § 107 (2006). Even this
list is non-exhaustive, and fair use is determined by weighing at a bare minimum
four extremely open-ended factors, most of which are impossible for an automated
system to determine.4

2 http://online.wsj.com/public/article/SB116113611429796022_5EZVscJYWWFqv1AmPvXCiOjJms_20071018.html.
3 www.publicknowledge.org/pdf/pk-filtering-whitepaper-200907.pdf.
The complexity of these factors has made many determinations of fair use
notoriously difficult for legal scholars and judges, and much more so for a
formula-bound computer. Even with this complexity, however, there are many
cases of alleged infringement that require little expertise, but only simple human
judgment, in order to make a fair use determination. A movie review incorporating
a small portion of the review‘s subject, a commentary on a politician‘s speech, or a
parody of a song ridiculing the original are all easily picked out as fair use by
non-expert humans, while presenting a difficult task for computers. This is because
analysis of these factors depends not just upon the identity of the data stored within
the file, or even the sounds and images that data represents, but upon the actual
semantic meaning encoded within the language and imagery used.
Fair use is not the only limitation or exception to copyright protection that
requires subjective judgment, either. De minimis use, while it may be picked up by
an automated filter, is not copyright infringement. The ways in which a particular
use might be rendered de minimis are several. A visual work could be out of focus,
not prominently displayed in frame, visible for only a brief period of time, or some
combination of the three. See Sandoval v. New Line Cinema Corp., 147 F.3d 215,
217-18 (2d Cir. 1998). An audio work could be distorted, low in volume, brief, or
intermingled with other background sounds and noise. And any work could be
captured in passing, as opposed to placed within the allegedly infringing work
deliberately.

4 Those four factors are: the purpose and character of the use, the nature of the
copyrighted work, the amount and substantiality of the portion used, and the effect
of the use upon the potential market for the copyrighted work. Even the least
apparently subjective factor, the amount of the work used, cannot be measured
purely by numeric values, as an excerpt of a given length from a work could easily
be considered more or less the "heart" of that work. Harper & Row Publishers v.
Nation Enters., 471 U.S. 539, 564-65 (1985).
A filter's inability to correctly determine authorization, fair use, or de
minimis usage can easily result in over-identification of material as infringing, to a
far greater extent than any examination by human beings.
B. Given the High Volumes of Uploads, Automated Filters Would
Generate High Volumes of False Positives.
A common factor underlying the separate developments of the DMCA and
of automated filtering systems is the sheer volume of content that flows over the
Internet and that is submitted to user-generated content sites like YouTube. In
light of the tremendous volume, automated filters are likely to generate substantial
volumes of "false positives": content that is mistakenly flagged, either because
the filter has misidentified it or because the specific use of the content qualifies for
one of the legal defenses described above. The impact on lawful speech could
therefore be considerable, especially if filters automatically impair access to the
flagged content.
In their briefs, Audible Magic and Vobile emphasize the reliability of their
software in determining whether a file contains material copied from a known
copyrighted work. Audible Magic, for instance, notes a 99% correct identification
rate, with a false positive rate of "better than 1 in 10,000." (Audible Magic Br.
10.)5 However, this percentage must also be measured against the volume of
material to which it is being applied. YouTube claims that 35 hours of video are
uploaded each minute of every day. 35 Hours of Video a Minute Uploaded to
YouTube, AFP (Nov. 11, 2010).6 Assuming that these videos are limited to 15
minutes in length,7 each year would yield at least 73,634,400 uploaded videos. If
just 1 in 10,000 are incorrectly identified as matching submitted videos when they
do not, this could mean that every year, some 7,363 videos would be flagged as
infringing when they did not even match a clip provided by a cooperating
copyright holder.

5 Audible Magic does not state whether this reflects a percentage of random input
files or files known not to match those in its database, nor does it state what its
false-positive rates were during the period in question.
6 http://www.google.com/hostednews/afp/article/ALeqM5hL4UMqXBKBTfJ2PjHINPGpWZe82w.
7 This estimate is admittedly rough, but likely very conservative, as the 15-minute
limit (only recently lengthened from 10) is considerably longer than the average
video length of approximately 3 minutes. Jordan Golson, YouTube Rules Web
Videos, PCWORLD (Dec. 13, 2008),
http://www.pcworld.com/article/155440/youtube_rules_web_videos.html.
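The volume arithmetic above can be reproduced with a short calculation; this is an illustrative sketch only, and the 365.25-day year is an assumption that matches the brief's stated totals:

```python
# Sketch of the brief's false-positive arithmetic (assumes a 365.25-day year).
UPLOAD_HOURS_PER_MINUTE = 35       # YouTube's reported upload rate (Nov. 2010)
MAX_VIDEO_MINUTES = 15             # conservative per-video length cap
FALSE_POSITIVE_RATE = 1 / 10_000   # Audible Magic's claimed rate

minutes_per_year = 60 * 24 * 365.25
video_minutes_per_year = UPLOAD_HOURS_PER_MINUTE * 60 * minutes_per_year
videos_per_year = video_minutes_per_year / MAX_VIDEO_MINUTES
false_positives = videos_per_year * FALSE_POSITIVE_RATE

print(f"{int(videos_per_year):,}")   # 73,634,400 uploaded videos per year
print(f"{int(false_positives):,}")   # 7,363 mistakenly flagged videos per year
```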
Nor does this number account for other videos mistakenly flagged for uses
that were authorized, fair use, or de minimis. The number of videos that fall into
this category are likely to be substantial. Many YouTube users create videos to
comment on current events, critique culture, and criticize public figures. Many of
these videos will incorporate existing works under the fair use provision of
copyright law. For instance, a search of videos of Hosni Mubarak reveals a number
of critical videos uploaded by users around the world, many of them combining
copyrighted images from press sources with pop culture imagery and music. See,
e.g., Gamal Mubarak, uploaded by "mssewidan" on Apr. 15, 2007; Egypt. Hosni
Mubarak, uploaded by "moudy2005" on Apr. 14, 2007.8 Political campaigns
frequently use clips of copyrighted news coverage of their candidates or their
opponents to highlight favorable news or point out inconsistencies. See Center for
Democracy and Technology, Campaign Takedown Troubles: How Meritless
Copyright Claims Threaten Online Political Speech (Sept. 2010) ("CDT Report").9
Searches for terms such as "media bias," "movie review," "video game review,"
and "downfall"10 return thousands of examples of legal uses of copyright-protected
content.

8 http://www.youtube.com/watch?v=Ri82Lj8-tyQ;
http://www.youtube.com/watch?v=_rNK3hfeGMc.
9 www.cdt.org/files/pdfs/copyright_takedowns.pdf.
Mistakenly flagging thousands of user videos as infringing can chill free
speech and harm free expression. This is especially true in situations where
flagged videos are automatically blocked, or when being flagged multiple times
can result in being banned. See Copyright Infringement Notification, YouTube.11
If time sensitive videos making fair use of clips are automatically blocked, their
timeliness and impact are lost and they are effectively censored. CDT Report. As
such, a site relying on automated content matching, with its thousands of false
positives and inability to distinguish fair use, will inevitably reduce the free flow of
speech and expression online.
II. The Language and Purpose of the DMCA Do Not Allow Safe
Harbors to be Conditioned on Filtering Implementation
Appellants, however, attempt to read a requirement of automated filters into
several different portions of the statute. Variously, Appellants invite the Court to
accept automated filters as: providing knowledge of infringing activity, granting a
right and ability to control infringement, a requirement of expeditious removal, and
a requirement for termination policies. (Viacom Br. at 33-34, 41, 43; Premier
League Br. at 42-43, 50-52, 55-56.) The Court should decline each of these
invitations.

10 Returning examples of fan-made remixes of a scene in the movie "Downfall"
featuring Hitler's enraged response to news of losses. Countless remixes of this
scene have been used to criticize a wide range of figures and topics, from
politicians all over the world to popular media figures and outlets. See Virginia
Heffernan, The Hitler Meme, New York Times Magazine, Oct. 24, 2008,
http://www.nytimes.com/2008/10/26/magazine/26wwln-medium-t.html.
11 http://www.youtube.com/t/dmca_policy.
Apart from the above-mentioned limitations of automated filters' legal
reliability, these various arguments discount several basic facts underlying the
DMCA. One is the plain statutory language of 17 U.S.C. § 512(m), which states
that nothing shall be construed to condition the application of the safe harbor upon
"a service provider monitoring its service or affirmatively seeking facts indicating
infringing activity." 17 U.S.C. § 512(m)(1). Despite this clear ban, Appellants
attempt to exclude YouTube from the safe harbor based precisely upon its failure
to monitor its service. By mischaracterizing the duties imposed by various
provisions of the DMCA, they advance a theory of the statute that places these
several provisions directly in opposition to the plain prohibition of Section
512(m).
Another problem with Appellants' arguments concerns their objections to
YouTube's selective use of filtering. Appellants state that YouTube began using
Audible Magic only for those content owners who had entered into licensing
partnerships with YouTube, and not screening all clips with the same system.
(Premier League Br. at 22-23; Viacom Br. at 45-46.) Appellants claim that this
selective application of Audible Magic necessarily means that YouTube was
willfully blinding itself to infringement, or shows a right and ability to control
infringement. (Premier League Br. at 42-43, 51-52; Viacom Br. at 41, 45.) In
making this argument, however, Appellants object not merely to YouTube's failure
to use filtering systems; they object to the ways in which YouTube used filters. If
limited application of filters creates the inference that a service provider is
willfully blinding itself or able to control infringement, service providers who
choose to employ filters would, in order to remain within the safe harbor, be
required to apply those filters entirely, or not at all. Given that the statute explicitly
excludes a mandate on monitoring, the paradoxical upshot of this argument is that
service providers should, at the risk of facing infringement liability, avoid any
voluntary monitoring or filtering systems.
A. Service Providers’ Voluntary Use of Filters Cannot Create the
Presumption of Knowledge of Infringement
Appellants insist that YouTube's access to and occasional use of Audible
Magic gave it sufficient knowledge to either require action or face exclusion from
the safe harbor. Doing otherwise, Appellants claim, constitutes "willful blindness."
(Viacom Br. at 37; Premier League Br. at 42-43.) Viacom's brief in particular
criticizes YouTube for failing to test Audible Magic by applying it to every video
uploaded, "even though the MPAA had offered to reimburse YouTube for the cost
of testing this technology." (Viacom Br. at 37.) In other words, Viacom believes
YouTube should not have the benefit of the safe harbor because it failed to test
Audible Magic to Appellants' specifications. If this line of thought is to be
followed, then any service provider approached by a rightsholder willing to pay for
Audible Magic must either accept the offer and implement the software on the
rightsholder's terms, or lose the benefit of the safe harbor. For a sufficient sum of
money, then, a rightsholder could impose its preferred method of determining
infringement upon service providers large or small, or sue that provider for any
infringement committed by its users. Such an absurd result cannot be what was
intended by the statute, and YouTube cannot categorically be found to be willfully
blind simply based upon its particular mode of testing Audible Magic.
Furthermore, there is danger in allowing third parties to define what
constitute appropriate steps to combat infringement. Just because a rightsholder is
willing to pay for a given technical measure does not make that measure
appropriate for incorporation into a site or service. While service providers must be
mindful of the need to comply with the DMCA and copyright law, they must be
free to innovate and to build services to meet the needs of their customers. If a
filtering system will damage the lawful purpose of the service and alienate users
with burdensome processes, the fact that it is "free" for the provider to implement
should not create an obligation to do so. Radically altering a service at the
insistence of a third party is never free of cost or burden. The DMCA recognizes
this by stating that a service provider's monitoring of its service is not a
prerequisite for safe harbor protection.
B. Service Providers’ Use of Filters Does Not Give them the Ability to
Control Infringement
A similar line of reasoning appears in Appellants' arguments about
YouTube's right and ability to control infringement. Appellants argue that the
availability of Audible Magic gave YouTube the ability to control the infringing
activity of its users, thus preventing it from qualifying for the safe harbor under 17
U.S.C. § 512(c)(1)(B). (Viacom Br. at 41, 45; Premier League Br. at 50-53.) This
interpretation of the statute again runs up against the plain language of § 512(m).
The "ability to control" cannot equate to "an ability to institute monitoring
practices." To do so would place the statute in contradiction with itself.
The Premier League additionally argues that the presence of Audible Magic
creates "something more" than the mere ability to block users that Veoh found
insufficient to trigger a right and ability to control. (Premier League Br. at 52.)
According to the Premier League, Audible Magic provides "pinpoint control" over
YouTube's audiovisual inventory. Id. As noted above, the precision of this control
is at best debatable, certainly when it comes to determining whether or not activity
is infringing. In addition, this argument suffers the same problem as the "willful
blindness" argument, in that it would suggest that service providers who used no
filters at all could more easily qualify for the safe harbor than a provider who
implemented a filter more tentatively than Appellants might desire.
Furthermore, the Premier League's brief mischaracterizes the state of the
law regarding 512(c)(1)(B), stating that courts "have found it applies" where
service providers have an antecedent ability to limit or filter material; are actively
involved in the bidding, sale, and delivery of infringing items; or have the right or
ability to control vendor sales on their site, preview products prior to listing on
their website, edit product descriptions, suggest prices, or otherwise involve
themselves in vendor sales on their website. (Premier League Br. at 50-51, citing
inter alia Tur v. YouTube, 2007 U.S. Dist. LEXIS 50254 (C.D. Cal.); Hendrickson
v. eBay, Inc., 165 F. Supp. 2d 1082 (C.D. Cal. 2001); Corbis Corp. v. Amazon.com,
Inc., 351 F. Supp. 2d 1090, 1110 (W.D. Wash. 2004).)
However, none of the above-mentioned cases in fact found that the service
provider had the right and ability to control infringing activity. The practices cited
by the Premier League in Hendrickson and Amazon are at best mentioned in dicta
as practices that the service provider did not engage in. Hendrickson, 165 F. Supp.
2d at 1094; Amazon, 351 F. Supp. 2d at 1110. In neither of these cases does the
court explicitly state that engaging in such practices would necessarily remove the
safe harbor; the courts could just as easily be providing examples of borderline
practices that may or may not trigger the "control and benefit" clause, but that the
service providers in their various cases did not do.
Tur declined, absent a more-developed factual record, to rule on the
application of the "control and benefit" provision, merely citing Fonovisa for the
proposition that the provision "presupposes some antecedent ability to limit or
filter copyrighted material." Tur at *9-10. At no point does the Tur court, in its
brief opinion, hold that this factual prerequisite for finding an ability to control
represented the sum total of the necessary analysis.
The one case the Premier League cites that did in fact apply 512(c)(1)(B) to
deny the service provider the safe harbor was Perfect 10, Inc. v. Cybernet
Ventures, Inc., in which the service provider actively reviewed allegedly infringing
content before approving it. 213 F. Supp. 2d 1146, 1173 (C.D. Cal. 2002)
(discussing ability to control in the context of vicarious infringement); id. at
1181-82 (discussing ability to control in the context of the DMCA). While
Cybernet thus
indicates an amount of involvement that would deny a service provider a safe
harbor, it gives no guidance on the potential liability of a service provider less
involved in selecting infringing content than Cybernet was.
In arguing that liability should fall upon a service provider who fails to
implement filters according to their dictates, Appellants would therefore
discourage service providers from gaining what limited knowledge or control they
may from the filters' application. This cannot be the result desired by Appellants
or Filtering Amici, and certainly not by Congress. See Hendrickson, 165 F. Supp.
2d at 1093-94 (quoting H. Rep. No. 105-796 (Oct. 8, 1998)).
C. Expeditious Removal Does Not Require the Use of Filters
The brief by Viacom additionally suggests that in failing to use filters,
YouTube has failed to remove allegedly infringing content expeditiously. (Viacom
Br. at 33.) In addition to suffering the failings of the arguments detailed above, this
theory, asserted without additional argument, suggests that ―expeditious‖ removal
requires a service provider to remove content ―during the upload process,‖ i.e. to
remove content before it appears online. The expeditious removal language
appears in 17 U.S.C. § 512(c)(1)(A)(iii), which states that a service provider shall
not be liable for storing infringing content if it responds expeditiously to remove
the material after "obtaining such knowledge or awareness" of infringing activity.
While "expeditiously" is not precisely defined within the statute, it is difficult to
believe that Congress could possibly have intended prescience on the part of
service providers.
D. Reasonable Implementation of a Repeat Infringer Policy Does Not
Require Filters.
The brief by Premier League additionally argues that by failing to terminate
user accounts based on the findings of a filter, YouTube has failed to implement a
reasonable repeat infringer policy. (Premier League Br. at 55-56.) This is precisely
the line of argument rejected by Veoh, because actual determinations of
infringement, and the serious consequences of user termination, can reasonably
rest on grounds more solid than the indication of a computer program.
For its part, Audible Magic "strenuously disagree[s]" with the district court's
statement that the filter "does not meet the standard of reliability and verifiability
required by the Ninth Circuit in order to justify terminating a user's account."
(Audible Magic Br. at 3.) However, Audible Magic does not contest the actual
truth of this statement, but simply the implication that its products have been or
should be used to terminate accounts.
The district court's holding and Audible Magic's statement therefore do not
appear to be at odds. Rather, both recognize that automated filtering
software is not, and should not be, employed to make legal determinations. While the
district court's language, in isolation, may appear to suggest that Audible Magic's
software was being improperly used to make legal determinations, it is clear from
the surrounding reasoning of the opinion that the court was merely indicating that
decisions of termination require a minimum level of judgment, established in
precedent, that automated filters cannot be said to meet.
The district court's statement is presented within a citation to Veoh. In Veoh,
the plaintiff recording company claimed that the defendant video hosting site had
an inadequate repeat infringer policy because it did not terminate user accounts
after Audible Magic‘s software had flagged and blocked attempted uploads. 665 F.
Supp. 2d at 1116. The Veoh court rejected this theory not on the basis that Audible
Magic‘s software had failed some numerical test of reliability, but because the
Ninth Circuit's baseline for reasonable termination policies, set out in Perfect 10 v.
CCBill, allows termination policies that are far more selective in flagging repeat
infringers. Id. at 1117-18; see also Perfect 10, Inc. v. CCBill LLC, 488 F.3d 1102,
1112-13 (9th Cir. 2007).
The CCBill rule deemed sufficient a repeat infringer policy that was based
upon properly formed, DMCA-compliant notices. CCBill, 488 F.3d at 1110. In
CCBill, the service provider did not terminate users based on notices sent by
rightsholders unless the notices were DMCA compliant, containing a declaration
under penalty of perjury that the complainant was authorized to represent the
copyright holder and that there was a good-faith belief that the user was infringing.
Id. at 1111-12. In other words, a service provider's obligation to terminate user
accounts under 17 U.S.C. § 512(i)(1)(A) is not triggered by bare assertions of
infringement; such assertions do not make a user the "repeat infringer" specified
by the statute. Instead, those assertions must comply with DMCA requirements. A
system that requires DMCA-compliant notices to trigger termination is sufficient
to preserve the safe harbor, because requiring lower bars for termination could
have "drastic consequences":
A user could have content removed, or may have his access
terminated entirely. If the content infringes, justice has been done. But
if it does not, speech protected under the First Amendment could be
removed. We therefore do not require a service provider to start
potentially invasive proceedings if the complainant is unwilling to
state under penalty of perjury that he is an authorized representative of
the copyright owner, and that he has a good-faith belief that the
material is unlicensed.
Id. at 1112. As the Veoh court points out, a repeat infringer policy even less
stringent than the one in CCBill would still pass muster: "Indeed, as the Corbis
court pointed out, even a DMCA-compliant notice is 'not the sine qua non of
copyright liability. . . . A copyright owner may have a good faith belief that her work
is being infringed, but may still be wrong.'" Veoh, 665 F. Supp. 2d at 1117 (C.D.
Cal. 2009) (quoting Corbis Corp. v. Amazon.com, Inc., 351 F. Supp. 2d 1090, 1105
(W.D. Wash. 2004)) (emphasis in original). The district court in this case was
therefore summarizing the rule outlined in Veoh: since non-compliant
infringement notices are insufficiently reliable to trigger a
termination policy, mere flagging by an automated system cannot possibly meet
the required standard either.
None of this prevents service providers from instituting more stringent
termination policies; it merely assures that the law will not require the banning of
users absent some process. Filtering Amici may rest assured that under the district
court's opinion, service providers remain free to use their products, and may even,
if they wish, use them as the basis for terminating user accounts; however, the
DMCA's repeat-infringer-policy requirement cannot mandate their use as a condition of the safe
harbor. While automated filters may perform to increasingly tight specifications for
eliminating false matches, they cannot make the necessary judgments of
infringement, nor can their numerical reliability in file matching translate into
sufficient legal reliability for courts to require a service provider to accept a
computer's decree of infringement in creating a termination policy.
Like the arguments that would have filters categorically determine judicial
findings on knowledge, ability to control, or expeditious removal, requiring
termination upon flagging by a filter would also discourage voluntary filtering
adoption. Service providers concerned about overblocking of misidentified,
authorized, fair, or de minimis uses would not want to risk alienating their users on
the one hand and, on the other, facing denial of the safe harbor for overriding filter
recommendations.
Providers would also likely hesitate to implement a complex technical
system if they were required to roll it out across their service simultaneously
or risk expanded infringement liability.
CONCLUSION
For the above-mentioned reasons, Public Knowledge urges the court not to
require the adoption of specific technological identification measures as a
prerequisite for DMCA safe harbor.
Dated: April 7, 2011
By:
/s/ Benjamin J. Kallos
Benjamin Kallos
BENJAMIN KALLOS
ATTORNEY AT LAW
402 E 90th Street, Suite 3F
New York, New York 10128
(212) 600-4960
ben@kalloslaw.com
Sherwin Siy
Michael Weinberg
PUBLIC KNOWLEDGE
1818 N Street NW, Suite 410
Washington, DC 20036
(202) 861-0020
ssiy@publicknowledge.org
Attorney for Amicus Curiae
Public Knowledge
CERTIFICATE OF COMPLIANCE
Pursuant to Fed. R. App. P. 32(a)(7)(C), I certify as follows:
1. This brief complies with the type-volume limitation of Fed. R. App. P. 32(a)(7)(B) because this brief contains 4,958 words, excluding the parts of the brief exempted by Fed. R. App. P. 32(a)(7)(B)(iii);
2. This brief complies with the typeface requirements of Fed. R. App. P. 32(a)(5) and the type style requirements of Fed. R. App. P. 32(a)(6) because this brief has been prepared in a proportionally spaced typeface using Microsoft Word 2008 in 14-point Times New Roman font.
April 7, 2011
By:
/s/ Benjamin J. Kallos
Benjamin Kallos
BENJAMIN KALLOS
ATTORNEY AT LAW
402 E 90th Street, Suite 3F
New York, New York 10128
(212) 600-4960
ben@kalloslaw.com
Attorney for Amicus Curiae
Public Knowledge
CERTIFICATE OF SERVICE
I hereby certify that on this 7th day of April, 2011, a true and correct copy of
the foregoing Brief of Amicus Curiae Public Knowledge in Support of Affirmance
for Defendants-Appellees was served on all counsel of record in
this appeal via CM/ECF pursuant to Second Circuit Rule 25.1(h)(1)-(2).
April 7, 2011
By:
/s/ Benjamin J. Kallos
Benjamin Kallos
BENJAMIN KALLOS
ATTORNEY AT LAW
402 E 90th Street, Suite 3F
New York, New York 10128
(212) 600-4960
ben@kalloslaw.com
Attorney for Amicus Curiae
Public Knowledge