I/P Engine, Inc. v. AOL, Inc. et al
Filing
776
Memorandum in Support re 775 MOTION for Judgment as a Matter of Law of Invalidity filed by AOL Inc., Gannett Company, Inc., Google Inc., IAC Search & Media, Inc., Target Corporation. (Noona, Stephen)
UNITED STATES DISTRICT COURT
EASTERN DISTRICT OF VIRGINIA
NORFOLK DIVISION
I/P ENGINE, INC.
Plaintiff,
Civil Action No. 2:11-cv-512
v.
AOL INC., et al.,
Defendants.
MEMORANDUM IN SUPPORT OF DEFENDANTS’ MOTION FOR JUDGMENT AS A
MATTER OF LAW ON INVALIDITY
TABLE OF CONTENTS
Page
I.	INTRODUCTION ...............................................................................................................1
II.	LEGAL STANDARD ..........................................................................................................1
III.	CULLISS ANTICIPATES EACH ASSERTED CLAIM OF THE '420 AND '664 PATENTS ............................................................................................................................1
	A.	Overview of Culliss .................................................................................................2
	B.	'420 Claim 10 is Anticipated by Culliss...................................................................3
		1.	Culliss discloses a “search engine system” (claim 10 preamble). ...............3
		2.	Culliss discloses “a system for scanning a network . . .” (claim 10[a]). .....4
		3.	Culliss discloses “a content-based filter system . . .” (claim 10[b]). ..........4
			(a)	Culliss discloses content-based analysis ..........................................4
			(b)	Culliss discloses filtering .................................................................5
		4.	Culliss discloses “a feedback system for receiving collaborative feedback data . . .” (claim 10[c]). .................................................................6
		5.	Culliss discloses “the filter system combining . . .” (claim 10[d]). ............7
	C.	'420 Claims 14 and 15 are Anticipated by Culliss ...................................................7
	D.	'420 Claims 25, 27, and 28 are Anticipated by Culliss ............................................7
	E.	'664 Claim 1 is Anticipated by Culliss.....................................................................8
		1.	Culliss discloses “a search system” (claim 1 preamble). .............................8
		2.	Culliss discloses “a scanning system . . .” (claim 1[a]). ..............................8
		3.	Culliss discloses “a feedback system . . .” (claim 1[b]). ..............................8
		4.	Culliss discloses “a content-based filter system . . .” (claim 1[c])...............9
	F.	'664 Claim 5 is Anticipated by Culliss.....................................................................9
	G.	'664 Claim 6 is Anticipated by Culliss...................................................................10
	H.	'664 Claims 21 and 22 are Anticipated by Culliss.................................................10
	I.	'664 Claims 26 and 28 are Anticipated by Culliss.................................................10
	J.	'664 Claim 38 is Anticipated by Culliss.................................................................11
IV.	BOWMAN ANTICIPATES EACH ASSERTED CLAIM OF THE '420 AND '664 PATENTS ..................................................................................................................11
	A.	Overview of Bowman ............................................................................................11
	B.	'420 Claim 10 is Anticipated by Bowman .............................................................12
		1.	Bowman discloses a “search engine system” (claim 10 preamble). ..........12
		2.	Bowman discloses “a system for scanning a network . . .” (claim 10[a]). .........................................................................................................13
		3.	Bowman discloses “a content-based filter system . . .” (claim 10[b])..........................................................................................................13
			(a)	Bowman uses content analysis.......................................................13
			(b)	Bowman discloses filtering ............................................................15
		4.	Bowman discloses “a feedback system for receiving collaborative feedback data . . .” (claim 10[c]). ...............................................................15
		5.	Bowman discloses “the filter system combining . . .” (claim 10[d]). ........16
	C.	'420 Claims 14 and 15 are Anticipated by Bowman ..............................................17
	D.	'420 Claims 25, 27, and 28 are Anticipated by Bowman .......................................17
	E.	'664 Claim 1 is Anticipated by Bowman ...............................................................17
		1.	Bowman discloses “a search system” (claim 1 [preamble]). .....................17
		2.	Bowman discloses “a scanning system for searching for information . . .” (claim 1[a]). ....................................................................17
		3.	Bowman discloses “a feedback system . . .” (claim 1[b])..........................18
		4.	Bowman discloses “a content-based filter system . . .” (claim 1[c]). ........18
	F.	'664 Claim 5 is Anticipated by Bowman ...............................................................19
	G.	'664 Claim 6 is Anticipated by Bowman ...............................................................19
	H.	'664 Claims 21 and 22 are Anticipated by Bowman .............................................20
	I.	'664 Claims 26 and 28 are Anticipated by Bowman .............................................20
	J.	'664 Claim 38 is Anticipated by Bowman .............................................................20
V.	THE ASSERTED CLAIMS ARE INVALID FOR OBVIOUSNESS ..............................21
	A.	Scope and Content of the Prior Art ........................................................................21
		1.	The claim elements are found in the WebHound thesis ............................22
		2.	The claim elements are found in the Rose patent ......................................22
		3.	The elements from the dependent claims are found in the prior art ..........23
			(a)	'420 claims 14, 15, 27, and 28 .......................................................23
			(b)	'664 claims 21 and 22 ....................................................................24
			(c)	'664 claims 6 and 28 ......................................................................24
			(d)	'664 claim 5 ...................................................................................24
			(e)	'664 claim 38 .................................................................................24
	B.	Differences Between the Claims and the Prior Art ................................................25
		1.	To the extent WebHound and Rose do not disclose a “tight integration” between the search and filtering systems, adding this element would be obvious .........................................................................25
		2.	To the extent Bowman does not disclose content matching, adding this element would be obvious ...................................................................26
		3.	To the extent Bowman does not disclose filtering, adding this element would be obvious .........................................................................27
		4.	To the extent Culliss does not disclose filtering, adding this element would be obvious .........................................................................28
	C.	The Level of Ordinary Skill in the Art...................................................................29
	D.	No Secondary Considerations Can Rebut the Obviousness Showing ...................30
VI.	CONCLUSION ..................................................................................................................30
I.	INTRODUCTION
Plaintiff I/P Engine alleges that Google’s AdWords, AdSense for Search, and AdSense
for Mobile Search systems infringe claims 10, 14, 15, 25, 27, and 28 of U.S. Patent No.
6,314,420 (“the ‘420 patent”) and claims 1, 5, 6, 21, 22, 26, 28, and 38 of U.S. Patent No.
6,775,664 (“the ‘664 patent”). The undisputed evidence presented at trial shows that every
element of the asserted claims was known and used in the prior art, and that the asserted claims
are invalid for both anticipation and obviousness. Because there is no legally sufficient basis for
a reasonable jury to find that the asserted claims of the ‘420 and ‘664 patents are valid,
Defendants are entitled to judgment as a matter of law that the asserted claims are both
anticipated and obvious.
II.	LEGAL STANDARD
Judgment as a matter of law is appropriate where a party has been fully heard on an issue
and “there is no legally sufficient evidentiary basis for a reasonable jury to have found for that
party with respect to that issue.” Fed. R. Civ. P. 50(a).
“A patent is invalid for anticipation if a single prior art reference discloses each and
every limitation of the claimed invention.” Schering Corp. v. Geneva Pharms., Inc., 339 F.3d
1373, 1377 (Fed. Cir. 2003). A patent is invalid as obvious “if the differences between the
subject matter sought to be patented and the prior art are such that the subject matter as a whole
would have been obvious at the time the invention was made to a person having ordinary skill in
the art to which said subject matter pertains.” 35 U.S.C. § 103.
III.	CULLISS ANTICIPATES EACH ASSERTED CLAIM OF THE '420 AND '664 PATENTS
Although the Culliss reference (DX-058) was before the Examiner during prosecution,
Culliss still can and does invalidate the asserted claims of the ‘420 and ‘664 patents. See, e.g.,
Scanner Techs. Corp. v. ICOS Vision Sys. Corp. N.V., 528 F.3d 1365, 1380-82 (Fed. Cir. 2008)
(affirming invalidation of representative claim on obviousness based on prior art considered by
the PTO during prosecution).
A.	Overview of Culliss
Culliss is directed to a search engine system that ranks and filters search results based on
a combination of the content of the search results and feedback from prior users who had entered
the same query and viewed these search results. In Culliss, Internet articles are associated with
key terms they contain. (DX-058 at 3:60-64.) For example, two articles about museum-viewing
vacations in Paris (“Article 1” and “Article 2”) might be associated with the key terms “Paris,”
“museum,” and “vacations” if they both contained those three words. (Trial Tr. at 1344:8-11).
These articles are given a “key term score” for each of the key terms that they contain.
(DX-058 at 3:65-66.) Culliss discloses that each key term score might initially be set at 1. (Id. at
3:10-4:9.) Thus, in the above example, Article 1 would have a key term score of 1 for each of
“Paris,” “museum,” and “vacations,” and so would Article 2. Alternatively, Culliss discloses
that the key term scores might be set to reflect how many times each of the key terms appeared
in the document’s content. (See id. at 14:34-36.)
Culliss discloses that the articles are presented to the user in the order dictated by their
combined key term scores. (DX-058 at 5:7-17.) For example, if Article 1 had a key term score
of 5 for “Paris,” 3 for “museum,” and 2 for “vacations,” its aggregate score for the query “Paris
museum vacations” would be 10 (5 + 3 + 2). If Article 2 had a key term score of 4 for “Paris,” 2
for “museum,” and 3 for “vacations,” its aggregate score for the query “Paris museum vacations”
would be 9 (4 + 2 + 3). (Trial Tr. at 1344:17-22). Thus, Article 1 would be presented above
Article 2 because it had a higher aggregate score. (Id. at 1344:24-1345:1).
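The aggregate-scoring arithmetic described above can be expressed as a short illustrative sketch. The figures are the memorandum's "Paris museum vacations" hypothetical; the code merely models the described calculation and is not code from the Culliss reference:

```python
# Each article carries a key term score for each key term it contains
# (figures from the hypothetical above).
key_term_scores = {
    "Article 1": {"Paris": 5, "museum": 3, "vacations": 2},
    "Article 2": {"Paris": 4, "museum": 2, "vacations": 3},
}

def aggregate_score(article, query_terms):
    """Sum the article's key term scores for the terms in the query."""
    return sum(key_term_scores[article].get(term, 0) for term in query_terms)

query = ["Paris", "museum", "vacations"]

# Articles are presented in the order dictated by their combined key term scores.
ranking = sorted(key_term_scores, key=lambda a: aggregate_score(a, query), reverse=True)

print(aggregate_score("Article 1", query))  # 10 (5 + 3 + 2)
print(aggregate_score("Article 2", query))  # 9  (4 + 2 + 3)
print(ranking)                              # Article 1 presented above Article 2
```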
When a user selects an article whose squib is presented to him, the key term scores for
that article which correspond to the terms in the user’s query are increased. (DX-058 at 4:37-49.) This is because the user, by selecting the article in response to his query, has implicitly
indicated that these key terms from the query are appropriately matched to the article. (See id.)
For example, if a hypothetical first user who queried “Paris museum vacations” selected
Article 2, then Article 2’s key term scores for “Paris,” “museum,” and “vacations” might each
rise by +1. (DX-058 at 4:43-45; Trial Tr. at 1345:15-21.) The next user who enters the same
query would thus see a different rank of articles, based on the new key term scores that reflect
the input of the prior user. (See DX-058 at 4:66-5:1.) Sticking with the same example, Article 2
would have a new aggregate score of 12 (instead of 9) after the first user selected it, because its
key term scores for “Paris,” “museum,” and “vacations” each increased by +1 when the first user
selected it. Thus, a later user who queries “Paris museum vacations” would see Article 2 (which
has a new aggregate score of 12) presented above Article 1 (which still has its old aggregate
score of 10). (Trial Tr. at 1345:25-1346:4.) In short, the article ranking in Culliss is based on a
combination of the articles’ content and feedback from previous users who entered the same
query. This is because both factors (article content and user feedback) are used to calculate the
key term scores that determine the article ranking.
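The feedback step described above can likewise be sketched in a few lines: when a user selects an article, its key term scores for the terms in the user's query each rise by +1, changing the ranking seen by later users who enter the same query. Again, the figures are the memorandum's hypothetical and the code is an illustrative model only:

```python
# Key term scores before any feedback (aggregates of 10 and 9 for the
# query "Paris museum vacations," per the hypothetical above).
key_term_scores = {
    "Article 1": {"Paris": 5, "museum": 3, "vacations": 2},
    "Article 2": {"Paris": 4, "museum": 2, "vacations": 3},
}

def record_selection(article, query_terms):
    """Raise the selected article's score for each key term in the query by +1."""
    for term in query_terms:
        if term in key_term_scores[article]:
            key_term_scores[article][term] += 1

def aggregate(article, query_terms):
    return sum(key_term_scores[article].get(t, 0) for t in query_terms)

query = ["Paris", "museum", "vacations"]
record_selection("Article 2", query)  # the first user selects Article 2

# Article 2's aggregate rises from 9 to 12, so it now outranks Article 1 (10).
print(aggregate("Article 2", query))  # 12
print(aggregate("Article 1", query))  # 10
```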
B.	'420 Claim 10 is Anticipated by Culliss
1. Culliss discloses a “search engine system” (claim 10 preamble).
The preamble to claim 10 of the ‘420 patent describes a “search engine system.” Culliss
discloses a “search engine system” because Culliss accepts a user’s search query and returns a
set of search results. (See DX-058 at 4:10-26.) Based on these disclosures, Defendants’
invalidity expert (Dr. Ungar) opined that Culliss discloses a “search engine system.” (Trial Tr. at
1346:10-22.) Plaintiff’s validity expert (Dr. Carbonell) does not dispute that Culliss discloses
this element.
2. Culliss discloses “a system for scanning a network . . .” (claim 10[a]).
Claim 10[a] recites “a system for scanning a network to make a demand search for
informons relevant to a query from an individual user.” The Court construed “scanning a
network” as “looking for or examining items in a network” and construed “demand search” as “a
single search engine query performed upon a user request.” (Dkt. 171 at 23.)
Culliss meets this element. Specifically, Culliss looks for search results (which it calls
“articles”) in response to a single search engine query entered by a user. (See DX-058 at 4:10-25.) These articles are stored on the Internet, which is “an extensive network of computer
systems.” (Id. at 3:45-55). Based on these disclosures, Dr. Ungar opined that Culliss discloses this
element (Trial Tr. at 1346:23-1347:8), and Dr. Carbonell does not contend otherwise.
3. Culliss discloses “a content-based filter system . . .” (claim 10[b]).
Claim 10[b] recites “a content-based filter system for receiving the informons from the
scanning system and for filtering the informons on the basis of applicable content profile data for
relevance to the query.” Culliss meets this element by: (a) giving scores to articles based partly
on content analysis, and (b) using these scores to filter these articles.
(a)	Culliss discloses content-based analysis
Culliss discloses content-based analysis because an article’s key term score can be
initially set in Culliss by counting how often terms from the user’s query appear in the article.
(DX-058 at 14:34-36; see also Trial Tr. at 1347:14-19).
While Dr. Carbonell argues that Culliss does not disclose content analysis, no reasonable
jury could credit this position. Dr. Carbonell does not dispute that Culliss calculates articles’ key
term scores in part by counting how often each key term from the query appears in the article’s
content. He merely argues that this content-based metric gets diluted over time as an article’s
key term score gets repeatedly altered by user feedback, so that, for all intents and purposes,
Culliss’s scores are based only on feedback. (Trial Tr. at 1859:4-21). In other words, Dr.
Carbonell concedes that Culliss discloses content-based analysis and that that analysis
substantively contributes to the filtering score initially. His validity argument is that eventually,
the content portion of the score may be diluted by feedback data.
However, the fact that content analysis may play less and less of a role in Culliss’s
system as more and more user feedback is received does not mean that the content analysis is
ever absent. No matter how much feedback is received to alter the initial content-based scores of
Culliss’s articles, the final score for the articles will always be some combination of content and
feedback. Furthermore, the feedback received by Culliss can raise an article’s score when the
article is clicked on or lower the article’s score when the article is not clicked on. (DX-058 at
15:12-19). Thus, the positive and negative feedback adjustments can mostly cancel each other
out, leaving a significant role for the content score to play in setting the article’s overall score.
(b)	Culliss discloses filtering
Culliss also discloses “filtering” in the specific embodiment where its articles’ key terms
include “rating” key terms like X-rated, G-rated, etc. (See DX-058 at 11:8-12:41.) Like the
other key term scores, the rating key term scores can be initially set by content analysis (Id. at
14:23-25) and then altered based on user feedback. (Id. at 11:47-51.) And these rating key term
scores can be used to filter the articles – for example, articles with an X-rated key term score
above a certain threshold will be filtered out and not displayed to G-rated searchers. (Id. at 12:15.) As Dr. Ungar explained, excluding articles based on their X-rated scores is “filtering.” (Trial
Tr. at 1347:20-1348:6).
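The rating-key-term filtering embodiment described above can be sketched as follows. The threshold value and scores are hypothetical (Culliss does not fix particular numbers); the code illustrates only the exclusion mechanism:

```python
# Illustrative model of the rating filter: articles whose "X-rated" key
# term score exceeds a threshold are excluded from the results shown to
# a G-rated searcher. All numbers are hypothetical.
X_RATED_THRESHOLD = 50

articles = [
    {"name": "Article A", "x_rated_score": 75},
    {"name": "Article B", "x_rated_score": 10},
    {"name": "Article C", "x_rated_score": 0},
]

def filter_for_g_rated(items):
    """Exclude articles whose X-rated key term score exceeds the threshold."""
    return [a for a in items if a["x_rated_score"] <= X_RATED_THRESHOLD]

visible = filter_for_g_rated(articles)
print([a["name"] for a in visible])  # Article A is filtered out
```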
Culliss also states that this specific rating embodiment can be integrated with the more
traditional Culliss embodiments, so that Culliss’s articles would receive a variety of key terms,
one of which is the rating key term used for filtering. (See DX-058 at 11:39-41 (“The invention,
operating separately or in addition to the manner described above, would permit or require the
user to enter a rating key term in the search query.”) (emphasis added).)
Dr. Carbonell’s only argument against this X-rated filtering embodiment is to say that
this embodiment won’t actually work as described. (Trial Tr. at 1862:12-1864:13.) But Culliss
explains exactly how this embodiment works, and Dr. Carbonell provides no comprehensible
explanation of why this embodiment would supposedly not work.
4. Culliss discloses “a feedback system for receiving collaborative
feedback data . . .” (claim 10[c]).
Claim 10[c] recites “a feedback system for receiving collaborative feedback data from
system users relative to informons considered by such users.” The Court construed
“collaborative feedback data” as data from system users with similar interests or needs regarding
what informons such users found to be relevant. (D.I. 212 at 23.)
Culliss discloses this element by recording which articles were selected by users who
entered a given query and raising the key term scores for terms in the selected articles that match
terms in the query. (See DX-058 at 4:37-49). Plaintiff takes the position that users have
“similar interests or needs” if they entered the same query. (Trial Tr. 428:8-15). Thus, and as
Dr. Ungar opined, by receiving the selection choices of users whose queries contained the same
terms, Culliss receives “collaborative feedback data” under Plaintiff’s application of the Court’s
construction and the claim language. (Trial Tr. at 1351:5-19). Dr. Carbonell does not dispute
this fact.
5. Culliss discloses “the filter system combining . . .” (claim 10[d]).
Claim 10[d] recites “the filter system combining pertaining feedback data from the
feedback system with the content profile data in filtering each informon for relevance to the
query.” Culliss meets this element. As discussed above, Culliss ranks articles for relevance to a
query by calculating the articles’ aggregate key term scores for the terms in that query (See DX-058 at 5:2-10), and each key term score is based on a combination of feedback data and content
data. (See id. at 4:37-49; 14:35-36.) Based on these disclosures, Dr. Ungar opined that Culliss
meets this element. (Trial Tr. at 1351:23-1352:8.)
For his part, Dr. Carbonell admits that each article’s key term score is based on a
combination of content and feedback data – he just asserts that the feedback data will tend to
outweigh and dilute the content data over time. Because Dr. Carbonell disputes this element for
the same erroneous reasons that he disputes claim 10[b], Dr. Carbonell’s arguments fail.
C.	'420 Claims 14 and 15 are Anticipated by Culliss
Claim 14 depends from claim 10 and further requires “wherein the collaborative feedback
data comprises passive feedback data.” Claim 15 adds the further requirement that “the passive
feedback data is obtained by passively monitoring the actual response to a proposed informon.”
Culliss meets these limitations because Culliss’s feedback data is derived from passively
monitoring users’ actual response to articles – namely, monitoring how frequently users who had
entered the same query selected each of those articles. (DX-058 at 4:32-34). Based on these
disclosures, Dr. Ungar opined that Culliss anticipates claims 14 and 15. (Trial Tr. at 1361:5-1362:3.)
D.	'420 Claims 25, 27, and 28 are Anticipated by Culliss
Claims 25, 27, and 28 contain the same substance as claims 10, 14, and 15, respectively,
but are simply recast as method rather than system claims. Thus, as Dr. Ungar explained, Culliss
anticipates claims 25, 27, and 28 for the same reasons that it anticipates claims 10, 14, and 15.
(Trial Tr. at 1362:6-22.)
E.	'664 Claim 1 is Anticipated by Culliss
1. Culliss discloses “a search system” (claim 1 preamble).
Culliss describes “a search system” as recited by the preamble to claim 1 because Culliss
accepts a search query from a user and returns a set of search results. (See DX-058 at 4:10-26.)
Based on these disclosures, Dr. Ungar opined that Culliss meets the claim 1 preamble (Trial Tr. at
1363:3-13), and Dr. Carbonell does not contend otherwise.
2. Culliss discloses “a scanning system . . .” (claim 1[a]).
Claim 1[a] recites “a scanning system for searching for information relevant to a query
associated with a first user in a plurality of users.” The Court construed “a scanning system” as
“a system used to search for information.” (Dkt. 171 at 23.)
Culliss meets this element because it searches for articles relevant to a query associated
with a first user among a plurality of users. (See DX-058 at 4:10-26.) As noted above, Culliss
also states that its content- and feedback-based methods may be applied to traditional search
systems like Excite and Lycos to rank their search results. (Id. at 13:35-45.) Based on these
disclosures, Dr. Ungar opined that Culliss meets claim 1[a]. (Trial Tr. at 1363:14-1364:4.)
3. Culliss discloses “a feedback system . . .” (claim 1[b]).
Claim 1[b] recites “a feedback system for receiving information found to be relevant to
the query by other users.” In its infringement case, Plaintiff asserted that this element is met by
receiving click-through data about information items (e.g., ads) that users view. (Trial Tr. at
610:2-14.)
Culliss meets this element under Plaintiff’s own infringement theory because Culliss
receives feedback about which articles were selected by other users, and uses this feedback to
adjust the articles’ key term scores. (See DX-058 at 4:37-49.) For purposes of invalidity,
Plaintiff must be held to the same application of the claims that it advanced for infringement.
See Amazon.com, Inc. v. Barnesandnoble.com, Inc., 239 F.3d 1343, 1351 (Fed. Cir. 2001) (“A
patent may not, like a ‘nose of wax,’ be twisted one way to avoid anticipation and another to find
infringement.”). Thus, because Culliss meets this element under Plaintiff’s own theory of what
this element requires, Culliss meets this element for purposes of invalidity. Indeed, Dr. Ungar
opined that Culliss meets this element under Plaintiff’s application of the claims (Trial Tr. at
1364:5-1365:7), and Dr. Carbonell does not contend otherwise.
4. Culliss discloses “a content-based filter system . . .” (claim 1[c]).
Claim 1[c] recites “a content-based filter system for combining the information from the
feedback system with the information from the scanning system and for filtering the combined
information for relevance to at least one of the query and the first user.” Culliss meets this
element by giving articles key term scores that reflect both content and feedback data. (See DX-058 at 4:37-49; 14:34-36.) These scores are then used to filter the articles by, e.g., excluding
articles whose X-rated scores exceed a given threshold. (See id. at 12:1-5). Combining search
results with ranking scores that reflect content and feedback data – as disclosed by Culliss – is
precisely how Plaintiff alleges that Defendants meet this claim element. (Trial Tr. at 610:15-611:21). Thus, as Dr. Ungar opined, Culliss meets this element under Plaintiff’s own
infringement theory. (Trial Tr. at 1365:8-1367:25.) See Amazon.com, 239 F.3d at 1351.
F.	'664 Claim 5 is Anticipated by Culliss
Claim 5 depends from claim 1 and further requires the filtered information to be an
advertisement. Culliss meets this element, because Culliss explicitly states that the articles
which are filtered may be advertisements. (See DX-058 at 9:56-62.) Based on these disclosures,
Dr. Ungar opined that Culliss anticipates claim 5. (Trial Tr. at 1368:5-9.)
G.	'664 Claim 6 is Anticipated by Culliss
Claim 6 depends from claim 1 and further requires “an information delivery system for
delivering the filtered information to the first user.” Culliss discloses this element, as it recites
that the search engine displays squibs of the articles to the user. (See DX-058 at 4:25-31.) Based
on these disclosures, Dr. Ungar opined that Culliss anticipates claim 6. (Trial Tr. at 1368:10-17.)
H.	'664 Claims 21 and 22 are Anticipated by Culliss
Claim 21 depends from claim 1 and further recites “wherein the content-based filter
system filters by extracting features from the information.” Culliss discloses the additional
element of claim 21 because Culliss extracts words from the content of each article in order to
determine how often the words from the query appear in these articles. (See DX-058 at 14:34-36.)
Claim 22 depends from claim 21 and further recites “wherein the extracted features
comprise content data indicative of the relevance to the at least one of the query and the user.”
Culliss discloses this element, because the words that Culliss extracts from an article’s content
indicate how relevant the article is to the query. (See id.) Based on these disclosures, Dr. Ungar
opined that Culliss anticipates claims 21 and 22. (Trial Tr. at 1368:18-1369:10.)
I.	'664 Claims 26 and 28 are Anticipated by Culliss
Claim 26 contains essentially the same elements as claim 1, but is recast as a method
rather than system claim. Thus, as Dr. Ungar opined, Culliss anticipates claim 26 for the same
reasons that it anticipates claim 1. (Trial Tr. at 1369:11-19.) Claim 28 depends from claim 26
and further recites “the step of delivering the filtered information to the first user.” As discussed
with respect to claim 6, supra, Culliss discloses this element as well. (Trial Tr. at 1369:20-1370:2.)
J.	'664 Claim 38 is Anticipated by Culliss
Claim 38 depends from claim 26 and further recites “wherein the searching step
comprises scanning a network in response to a demand search for the information relevant to the
query associated with the first user.” Culliss meets this element because Culliss searches for
articles in response to a single user search query, and these articles are searched for on the vast
network of the Internet. (See DX-058 at 3:45-55; 4:10-26.) Based on these disclosures, Dr.
Ungar found that Culliss anticipates claim 38. (Trial Tr. at 1370:3-13.)
IV.	BOWMAN ANTICIPATES EACH ASSERTED CLAIM OF THE '420 AND '664 PATENTS
A.	Overview of Bowman
The Bowman reference (DX-059) functions similarly to a traditional search engine in that
it accepts a query from a user and generates a body of results in response. (DX-059 at 5:31-32;
claim 28[preamble-b].) Bowman then filters those results based on collaborative feedback and
content analysis. For example, if a user enters the search query “ghost stories for kids,” Bowman
would generate a body of search result items that contain the words “ghost,” “stories,” or “kids.”
(Id. at claim 28[preamble-b], Trial Tr. at 1322:22-1323:2.) Bowman would then give each of
these items a ranking score based on how often they were selected by other users who had
entered the query “ghost stories for kids.” (Id. at claim 28[c], Trial Tr. at 1323:2-6.)
Alternatively, rather than utilizing feedback from all users who entered the same query, Bowman
may cluster users into discrete groups (such as age or income groups) and use feedback from
users within the same group who entered the same query. (See id. at 3:28-33.) In this way, items
returned in response to a given query may have different ranking scores for users in different
groups.
Some Bowman embodiments further adjust the score of each item according to its
content, by analyzing how many of the terms in the query appear in the item. (See id. at claim
29.) Items that contain all the terms in the query get higher ranking scores, while items that
contain fewer of the query terms get progressively lower ranking scores. (See id.) Thus, if a
user entered the query “ghost stories for kids,” Bowman would give items that contain the terms
“ghost,” “stories,” and “kids” higher adjustments to their ranking score, while giving items with
only two of these terms a lower adjustment (and giving even lower adjustments to items that
contain only one of these terms). (Trial Tr. at 1325:9-1326:5.)
The items are finally presented to the user in ranked order. (Id. at Abstract.)
Additionally, the system may present only a subset of the items whose ranking scores exceed a
certain threshold. (See id. at 9:58-62.) In sum, the final score for each item in Bowman is
generated through a combination of collaborative feedback data and content data. This score is
then used to filter which items are presented to the user.
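The combination just described can be sketched as a simple model: a feedback-based ranking score adjusted by content analysis (how many query terms the item contains), with only items above a threshold presented. All figures, the weighting, and the threshold are hypothetical; Bowman does not specify a particular formula:

```python
# Illustrative model of Bowman's combined scoring (hypothetical numbers).
query_terms = {"ghost", "stories", "kids"}

items = [
    # (title words; selections by prior users who entered the same query)
    {"title": "classic ghost stories for kids", "selections": 40},
    {"title": "ghost stories anthology",        "selections": 25},
    {"title": "kids crafts",                    "selections": 5},
]

def ranking_score(item):
    """Feedback score adjusted by the fraction of query terms the item matches."""
    matched = sum(1 for t in query_terms if t in item["title"].split())
    return item["selections"] * (matched / len(query_terms))

# Only items whose ranking score exceeds a threshold are presented, in ranked order.
THRESHOLD = 10
presented = sorted(
    (i for i in items if ranking_score(i) >= THRESHOLD),
    key=ranking_score,
    reverse=True,
)
print([i["title"] for i in presented])  # "kids crafts" falls below the threshold
```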
B.	'420 Claim 10 is Anticipated by Bowman
1. Bowman discloses a “search engine system” (claim 10 preamble).
The preamble to claim 10 of the '420 patent describes a “search engine system.”
Bowman discloses a “search engine system” because it ranks items in a search result by
receiving a query and generating a plurality of items satisfying the query. (See DX-059 at claim
28 [preamble-b].) This is exactly how a search engine operates, and Dr. Ungar accordingly
opined that Bowman meets the claim 10 preamble. (Trial Tr. at 1326:21-1327:4.) Indeed, Dr.
Carbonell does not dispute that Bowman meets this element.
2. Bowman discloses “a system for scanning a network . . .” (claim
10[a]).
Claim 10[a] recites “a system for scanning a network to make a demand search for
informons relevant to a query from an individual user.” Bowman meets this element, as Dr.
Ungar explained and Dr. Carbonell does not dispute. (Trial Tr. at 1327:5-25.) Specifically,
Bowman discloses the steps of: “receiving a query specifying one or more terms; generating a
query result identifying a plurality of items satisfying the query.” (DX-059 at claim 28 [a-b].)
This query is submitted by a user, and thus the resulting search is “performed upon a user
request.” (See id. at 7:43-46.) Further, Bowman operates on a networked system of computers.
(See id. at 5:29-30.)
3. Bowman discloses “a content-based filter system . . .” (claim 10[b]).
Claim 10[b] recites “a content-based filter system for receiving the informons from the
scanning system and for filtering the informons on the basis of applicable content profile data for
relevance to the query.” As with Culliss, Bowman meets this element by giving items scores
based partly on content analysis and using these scores to filter the items.
(a) Bowman uses content analysis
Bowman discloses content analysis in claim 29, which requires adjusting an item’s score
based on how many query terms are “matched” by the item. (DX-059 at claim 29). Bowman
makes clear that “matching” involves content analysis – i.e., determining how many query terms
appear in an item’s content. Indeed, when discussing “matching” in connection with the prior
art, Bowman explicitly states that a query term is “matched” to a search result if it appears in that
search result’s content. For example, if the search results are books, Bowman states that a list of
books will be “matching the terms of the query” if their “titles contain some or all of the query
terms.” (See DX-059 at 1:30-38.) In that same paragraph, Bowman states that the list of books
“may be ordered based on the extent to which each identified item matches the terms of the
query.” (Id. at 1:43-44 (emphasis added).) In other words, the list of books can be ordered based
on how many of the query terms are matched to (i.e., contained within) the title of each book.
In nearly verbatim language, claim 29 of Bowman describes this technique of ranking
search results based on how many query terms they “match.” A simple comparison of claim 29
to the “matching” prior art discussion makes this clear. (Compare claim 29 (“adjusting the
ranking value produced for each item identified in the query result to reflect the number of
terms specified by the query that are matched by the item”) with 1:43-44 (“the list may be
ordered based on the extent to which each identified item matches the terms of the query”).)
Given the identity of language, the only logical interpretation is that claim 29’s “matching”
technique also involves content analysis, as Dr. Ungar explained at trial. (Trial Tr. at 1328:3-14.) No reasonable jury could find otherwise.
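The term-counting reading of “matching” described above can be illustrated with a short Python sketch. The function names and data here are hypothetical, offered only to illustrate the technique of counting how many query terms appear in an item’s content:

```python
# Hypothetical sketch of content-based "matching": count how many of the
# query's terms appear in an item's content (e.g., a book title), and adjust
# the item's ranking value to reflect that count.

def terms_matched(query_terms: list[str], item_content: str) -> int:
    """Count the query terms that appear in the item's content."""
    words = set(item_content.lower().split())
    return sum(1 for t in query_terms if t.lower() in words)

def adjust_ranking(base_score: float, query_terms: list[str], item_content: str) -> float:
    """Adjust an item's ranking value to reflect the number of matched terms."""
    return base_score + terms_matched(query_terms, item_content)

query = ["search", "engine", "history"]
print(terms_matched(query, "A History of the Search Engine"))  # 3
```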
In an effort to resist this commonsense conclusion, Dr. Carbonell argues that claim 29’s
“matching” technique instead analyzes whether an item is associated with a query term in
Bowman’s rating table, which would merely mean that at least one prior user had selected that
item in response to a query containing that term. (Trial Tr. at 1843:18-22). In purported support
of this opinion, Dr. Carbonell points to two statements from Bowman that refer to ordering
search results “in accordance with collective and individual user behavior rather than in
accordance with attributes of the items.” (Id. at 1842:19-1843:10). But this is a non sequitur.
Neither of these statements mentions, or has anything to do with, the “matching” technique
disclosed in claim 29 of Bowman. Rather, they occur when discussing more general Bowman
embodiments that rely solely on user feedback. (See DX-059 at 2:59-3:22; 4:38-48.) Given that
Bowman explicitly uses the word “matching” to refer to content analysis, these general
statements (which do not mention “matching”) could not cause a reasonable fact-finder to
conclude that the “matching” technique of Bowman claim 29 lacks content analysis.1
(b) Bowman discloses filtering
Bowman also “filters” items based on their scores, by retaining items that score above a
predetermined threshold while excluding the rest. (DX-059 at 9:58-62, claim 15). As Dr. Ungar
explained, this process of retaining some items and excluding others is “filtering.” (Trial Tr. at
1321:15-22, 1331:8-20.)
Although Dr. Carbonell admits that Bowman retains items that score above a threshold
and excludes the rest, he argues that this is not “filtering” because it is carried out with reference
to the entire ranked list of search results rather than being an item-by-item process. (Trial Tr. at
1852:11-1854:5). This argument makes no sense. By setting a threshold and presenting a user
with the items that score above the threshold, Bowman determines, on an item-by-item basis,
whether each item has scored highly enough to be retained or must be discarded. This is
“filtering” under any sensible meaning of the word.
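A brief sketch (hypothetical scores; illustrative only) makes the point: a threshold cut is inherently an item-by-item decision, because each item must individually pass or fail the threshold.

```python
# Hypothetical sketch: applying a score threshold necessarily decides, for
# each item individually, whether it is retained or discarded.

def passes(score: float, threshold: float) -> bool:
    """The retain-or-discard decision for a single item."""
    return score > threshold

scores = {"item1": 0.9, "item2": 0.4, "item3": 0.7}
threshold = 0.5
retained = [name for name, s in scores.items() if passes(s, threshold)]
print(retained)  # ['item1', 'item3']
```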
4. Bowman discloses “a feedback system for receiving collaborative
feedback data . . .” (claim 10[c]).
Claim 10[c] recites “a feedback system for receiving collaborative feedback data from
system users relative to informons considered by such users.” The Court construed
“collaborative feedback data” as data from system users with similar interests or needs regarding
what informons such users found to be relevant. (D.I. 212 at 23.)

1 Further, Dr. Carbonell’s reading of the claim 29 “matching” technique would render
claim 29 meaningless, and thus cannot be correct. Claim 29 depends from claim 28, and an
item’s claim 28 score is based on how often the item was selected in response to prior queries
containing each of the terms in the current query. (DX-059 at claim 28[c]). Thus, to get a claim
28 score, an item must be associated with every query term in Bowman’s rating table. That
being the case, it would be impossible to adjust an item’s score based on how many query terms
it “matches” under Dr. Carbonell’s interpretation of “matching” (i.e., Dr. Carbonell’s position
that an item is “matched” with a term if the term-item pair appears in Bowman’s rating table.)
Every item with a “claim 28 score” is already matched with every query term under Dr.
Carbonell’s interpretation.
Bowman meets this element under Plaintiff’s infringement theory by recording how often
users who entered the same search query selected various items. (See, e.g., DX-059 at claim
28[c]). Bowman also may cluster users into groups (such as age, income or behavioral groups)
and use feedback from users within the same group who entered the same query. (See DX-059 at
3:28-33.) Based on these disclosures, Dr. Ungar opined that Bowman meets element 10[c] (Trial
Tr. at 1331:21-1332:21), and Dr. Carbonell does not contend otherwise.
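The group-plus-query feedback recording described above can be sketched as a simple tally table. All names, groupings, and functions below are hypothetical, offered only to illustrate the kind of data structure such a disclosure describes:

```python
# Hypothetical sketch: tally how often users who entered the same query
# (optionally within the same user group, e.g., an age or income cluster)
# selected each item; the tally serves as collaborative feedback data.
from collections import defaultdict

selection_counts: defaultdict = defaultdict(int)

def record_selection(group: str, query: str, item: str) -> None:
    """Log that a user in `group` who entered `query` selected `item`."""
    selection_counts[(group, query, item)] += 1

def feedback_score(group: str, query: str, item: str) -> int:
    """Collaborative feedback: selections by similar users for the same query."""
    return selection_counts[(group, query, item)]

record_selection("age-18-25", "running shoes", "itemA")
record_selection("age-18-25", "running shoes", "itemA")
record_selection("age-18-25", "running shoes", "itemB")
print(feedback_score("age-18-25", "running shoes", "itemA"))  # 2
```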
5. Bowman discloses “the filter system combining . . .” (claim 10[d]).
Claim 10[d] recites “the filter system combining pertaining feedback data from the
feedback system with the content profile data in filtering each informon for relevance to the
query.” As Dr. Ungar explained, Bowman meets this element because Bowman combines data
regarding the content of informons with collaborative feedback data from other users to
determine the most relevant informons to a query. (Trial Tr. at 1332:22-1333:12.) Specifically,
Bowman determines each search result item’s score by combining collaborative feedback data
(showing how often the item was selected by users from the same group who entered the same
query) with content profile data (showing how many of the query terms appear in the item’s
content). (See DX-059 at claim 28[c] and claim 29.) The final score is used to determine the
item’s relevance to the query. (See id. at 2:23-24.) Bowman then filters out items whose scores
fall below a certain threshold. (Id. at 9:58-62.)
While Dr. Carbonell disputes that Bowman meets this element, Dr. Carbonell's
conclusion relies on the same unsupportable assertions about content analysis and filtering
discussed with respect to claim 10[b]. Thus, the only reasonable conclusion is that Bowman
does meet this element.
C. '420 Claims 14 and 15 are Anticipated by Bowman
Claim 14 depends from claim 10 and further requires “wherein the collaborative feedback
data comprises passive feedback data.” Claim 15 adds the requirement that “the passive
feedback data is obtained by passively monitoring the actual response to a proposed informon.”
Bowman meets both these elements, as Dr. Ungar explained, because Bowman’s feedback data
is derived from passively monitoring users’ actual responses to search result items – namely,
monitoring how often users selected each of those items. (DX-059 at 2:31-35; Trial Tr. at
1334:9-24.)
D. '420 Claims 25, 27, and 28 are Anticipated by Bowman
Claims 25, 27, and 28 contain the same substance as claims 10, 14, and 15, respectively,
but are simply recast as method rather than system claims. Thus, as Dr. Ungar explained,
Bowman anticipates claims 25, 27, and 28 for the same reasons that it anticipates claims 10, 14,
and 15. (Trial Tr. at 1335:2-14.)
E. '664 Claim 1 is Anticipated by Bowman
1. Bowman discloses “a search system” (claim 1 [preamble]).
Bowman describes “a search system” as recited by the preamble to claim 1. Specifically,
as Dr. Ungar explained, Bowman accepts a search query from a user and returns a set of search
results. (DX-059 at 5:31-32; Trial Tr. 1335:20-1336:6). Dr. Carbonell does not dispute that
Bowman meets this element.
2. Bowman discloses “a scanning system . . .” (claim 1[a]).
Claim 1[a] recites “a scanning system for searching for information relevant to a query
associated with a first user in a plurality of users.” The Court construed “a scanning system” as
“a system used to search for information.” (Dkt. 171 at 23.) Bowman meets this limitation
because it searches for information relevant to a query associated with a first user. As recited in
Claim 28 of Bowman, Bowman discloses “[a] computer-readable medium whose contents cause
a computer system to rank items in a search result by: receiving a query specifying one or more
terms; generating a query result identifying a plurality of items satisfying the query.” (DX-059
at claim 28[preamble-b]) (emphasis added). Based on these disclosures, Dr. Ungar testified that
Bowman meets claim 1[a]. (Trial Tr. 1336:7-16.)
3. Bowman discloses “a feedback system . . .” (claim 1[b]).
Claim 1[b] recites “a feedback system for receiving information found to be relevant to
the query by other users.” Again, Plaintiff asserts that this element is met by receiving click-through data about information items (e.g., ads) that users view. (Trial Tr. at 610:2-14.)
Bowman meets this element under Plaintiff’s own infringement theory because Bowman
receives feedback about information found to be relevant to the query by other users – i.e., it
receives feedback about which items were selected most often by other users who entered the
same query. (See DX-059 claim 28[c].) Further, rather than recording feedback from all users
who entered the same query, Bowman may cluster users into groups (such as age, income or
behavioral groups) and use feedback from users within the same group who entered the same
query. (See DX-059 at 3:28-33.) Based on these disclosures, Dr. Ungar explained that Bowman
meets claim 1[b] (Trial Tr. 1336:17-1337:4), and Dr. Carbonell does not contend otherwise.
4. Bowman discloses “a content-based filter system . . .” (claim 1[c]).
Claim 1[c] recites “a content-based filter system for combining the information from the
feedback system with the information from the scanning system and for filtering the combined
information for relevance to at least one of the query and the first user.”
Bowman meets this element. As described above, Bowman gives items scores that
reflect both content data and feedback data. (See DX-059 at claims 28-29.) These scores are
used to filter the items for relevance to the query. (See id. at Abstract; 2:23-24, 9:58-62.)
Combining search results with scores that reflect content and feedback data – as disclosed
by Bowman – is precisely how Plaintiff alleges that Defendants meet this claim element.
Specifically, Plaintiff alleges that Defendants meet this element by calculating a “Quality Score”
for ads using what Plaintiff alleges is based on feedback data and content data. (Trial Tr. at
610:15-611:21). Thus, Bowman meets this element under Plaintiff’s own infringement theory,
as Dr. Ungar explained. (Trial Tr. 1337:5-1338:12).
Dr. Carbonell disputes this conclusion by repeating his arguments that Bowman does not
use content analysis and does not “filter.” As explained above, Dr. Carbonell’s conclusions are
not supported by the disclosures in Bowman. No reasonable jury could find otherwise.
F. '664 Claim 5 is Anticipated by Bowman
Claim 5 depends from claim 1 and further requires the filtered information to be an
advertisement. Bowman meets this element. Specifically, Bowman discloses that system users
can purchase the products represented by the search results, such as by adding these products to
their virtual shopping carts. (DX-059 at 5:4; 9:2-3; claim 7.) Thus, the search results constitute
advertisements for the purchasable products that they represent. Based on these disclosures, Dr.
Ungar testified that Bowman anticipates claim 5. (Trial Tr. at 1339:3-15.)
G. '664 Claim 6 is Anticipated by Bowman
Claim 6 depends from claim 1 and further requires “an information delivery system for
delivering the filtered information to the first user.” Bowman discloses this element, as it recites
that the software facility displays the filtered search results to the user. (DX-059 at 9:56-58.)
Based on these disclosures, Dr. Ungar testified that Bowman anticipates claim 6. (Trial Tr. at
1339:16-1340:1.)
H. '664 Claims 21 and 22 are Anticipated by Bowman
Claim 21 depends from claim 1 and further recites “wherein the content-based filter
system filters by extracting features from the information.” Bowman discloses this element
because Bowman extracts words from the content of each item in order to determine how many
words from the query are found in the item. (DX-059 at claim 29.)
Claim 22 depends from claim 21 and further recites “wherein the extracted features
comprise content data indicative of the relevance to the at least one of the query and the user.”
Bowman discloses this element, because the words that Bowman extracts from an item’s content
indicate how relevant the item is to the query. (See id., Trial Tr. at 1340:14-21.) Based on these
disclosures, Dr. Ungar testified that Bowman anticipates claims 21 and 22. (Trial Tr. at 1340:2-24.)
I. '664 Claims 26 and 28 are Anticipated by Bowman
Claim 26 contains essentially the same elements as claim 1, but is recast as a method
rather than system claim. Thus, as Dr. Ungar explained, Bowman anticipates claim 26 for the
same reasons that it anticipates claim 1. (Trial Tr. at 1341:6-13.) Claim 28 depends from claim
26 and further recites “the step of delivering the filtered information to the first user.” As
discussed with respect to claim 6, supra, Bowman discloses this element as well. (Trial Tr. at
1341:14-23.)
J. '664 Claim 38 is Anticipated by Bowman
Claim 38 depends from claim 26 and further recites “wherein the searching step
comprises scanning a network in response to a demand search for the information relevant to the
query associated with the first user.” Bowman meets this element because Bowman looks for or
examines items in response to a single search engine query. (See DX-059 at claim 28[a-b]
(disclosing the steps of “receiving a query specifying one or more terms; generating a query
result identifying a plurality of items satisfying the query”).) Furthermore, Bowman operates on
a computer network. (See id. at 5:29-30.) Based on these disclosures, Dr. Ungar opined that
Bowman anticipates claim 38. (Trial Tr. at 1341:24-1342:12.)
V. THE ASSERTED CLAIMS ARE INVALID FOR OBVIOUSNESS
Obviousness is a question of law, though based on underlying facts. In re Gartside, 203
F.3d 1305, 1316 (Fed. Cir. 2000). To determine obviousness, a court must consider: (1) the
scope and content of the prior art; (2) the differences between the prior art and the claims at
issue; (3) the level of ordinary skill in the art; and (4) any relevant secondary considerations,
such as commercial success, long felt but unsolved needs, and the failure of others. Graham v.
John Deere Co. of Kansas City, 383 U.S. 1, 17-18 (1966). Under these so-called “Graham
factors,” all asserted claims are obvious as a matter of law.
A. Scope and Content of the Prior Art
Plaintiff has repeatedly characterized the asserted independent claims as a combination of
four color-coded elements for purposes of showing infringement: (1) yellow searching for
information relevant to a query, (2) blue content-based analysis, (3) green collaborative analysis,
and (4) purple combining the content and collaborative analysis to filter the information. (See
Trial Tr. 425:4-18; 521:16-24.) As noted above, a patentee may not interpret a claim one way
for purposes of infringement and another way for purposes of invalidity. See Amazon.com, 239
F.3d at 1351.
As explained above, the combination of the claim elements is found in Culliss and
Bowman. However, all claim elements are also found in other prior art references raised at trial –
the WebHound thesis (DX-049) and the Rose patent (DX-034).2
2 In addition, three of the four elements – content-based filtering, collaborative filtering,
and combining content-based and collaborative filtering – are found in the “Fab” prior art
reference (DX-050). (Trial Tr. 1287:16-1288:11). The very title of the Fab reference is “Fab:
Content-Based, Collaborative Recommendation.” (Id. at 66). The sub-title goes on to state: “By
combining both collaborative and content-based filtering systems, Fab may eliminate many of
the weaknesses found in each approach.” (Id.) (emphasis added).

1. The claim elements are found in the WebHound thesis
All the elements cited in Plaintiff’s infringement case are found in the WebHound thesis.
As shown in the Abstract of this reference, the WebHound thesis discloses a combination of
content-based and collaborative filtering: “This thesis claims that content-based filtering and
automated collaborative filtering are complementary techniques, and the combination of ACF
with some easily extractable features of documents is a powerful information filtering
technique.” (DX-049 at Abstract.) Thus, the WebHound Abstract alone discloses three of the
four elements that Plaintiff contends are required by the independent claims: content-based
analysis, collaborative analysis, and combining content and collaborative analysis for filtering.
(Trial Tr. at 1292:25-1293:10).
The WebHound thesis also discloses the fourth element cited in Plaintiff’s infringement
case – searching for information relevant to a query. Specifically, the WebHound thesis
discloses that its content-based/collaborative filtering can be used to filter search results obtained
by a search engine. (DX-049 at 78 (“a WEBHOUND like front-end to a popular search engine
such as Lycos, could enable users to filter the results of their searches on the extensive databases
compiled by these search engines in a personalized fashion.”); see also Trial Tr. at 1293:11-22).

2. The claim elements are found in the Rose patent
The elements from Plaintiff’s infringement case are also found in the Rose patent
(DX-034). As with the WebHound thesis, the Abstract of Rose discloses content analysis,
collaborative analysis, and combining the content and collaborative analysis. Specifically, the
Rose Abstract explains that “the prediction of relevance [for information items] is carried out by
combining data pertaining to the content of each item of information with other data regarding
correlations of interests between users.” (DX-034 at Abstract.) The Rose Abstract further
explains that “[t]he user correlation data is obtained from feedback information provided by
users when they retrieve items of information.” (Id.) Thus, Rose combines content data with
feedback data (from users with correlated interests)3 to score items. (Trial Tr. at 1297:14-18).
Rose further explains that “[t]he relevance predicting technique of the present invention .
. . can be used to filter messages provided to a user in an electronic mail system and search
results obtained through an on-line text retrieval service.” (DX-034 at 2:51-55) (emphasis added).
In other words, Rose discloses that its content/collaborative scoring method can be used to “filter
. . . search results.” (See also Trial Tr. at 1297:22-1298:7).
3. The elements from the dependent claims are found in the prior art
Moving from the independent claims to the dependent claims, the elements from the
dependent claims are also found in the prior art references discussed above.
(a) ‘420 claims 14, 15, 27, and 28
‘420 claims 14, 15, 27, and 28 add the requirements that the feedback data be passive
data reflecting a user’s actual response to an informon. This element is disclosed by, e.g.,
Culliss, which passively monitors whether users select articles and adjusts the articles’ scores
based on the user selections. (DX-058 at 4:37-45; Trial Tr. at 1309:16-1310:15). Furthermore,
given that user feedback can comprise only two basic types – active or passive – it would be
obvious to modify any “active feedback” reference to disclose passive feedback instead. See
KSR, 550 U.S. at 421 (where “there are a finite number of identified, predictable solutions, a
person of ordinary skill has good reason to pursue the known options within his or her technical
grasp”).

3 Because the user feedback data in Rose comes from users with correlated interests, it is
“collaborative feedback data” under the Court’s construction. See D.I. 212 at 4 (construing
“collaborative feedback data” as “data from system users with similar interests or needs
regarding what informons such users found to be relevant.”)
(b) ‘664 claims 21 and 22
‘664 claims 21 and 22 add the elements of extracting features from the information that
indicate the information’s relevance to the query or the user. This element is disclosed by, e.g.,
the WebHound thesis, which relies on “easily extractable features of documents” and analyzes
“the importance of [a given] feature relative to the other features for a particular user.” (DX-049
at Abstract, 38; Trial Tr. at 1307:10-1308:4).
(c) ‘664 claims 6 and 28
‘664 claims 6 and 28 require delivering the filtered information to the user. Numerous
prior art references disclose this element. For example, the WebHound thesis discloses returning
the top-rated web pages to the user. (DX-049 at 78 (“the resulting matches could be filtered
through WEBHOUND and only the top ranked ones (in terms of predicted rating) need be
returned.”); see also Trial Tr. at 1308:5-1309:5).
(d) ‘664 claim 5
‘664 claim 5 requires that the filtered information be an advertisement. This element is
disclosed by, e.g., Culliss. (See DX-058 at 9:61; Trial Tr. at 1309:6-15). Furthermore, since
advertisements are just one type of information that can be scored and filtered like any other, it
would be obvious that the other prior art references disclosed herein could filter advertisements.
(e) ‘664 claim 38
‘664 claim 38 requires scanning a network in response to a demand search. This element
is met by, e.g., WebHound, which discloses a search engine that scans the Internet for articles in
response to a single search engine query entered by a user. (See DX-049 at 78).
B. Differences Between the Claims and the Prior Art
As detailed above, there are no differences between the claims and the prior art. But even
if there were, overcoming any alleged differences would have been obvious.
1. To the extent WebHound and Rose do not disclose a “tight
integration” between the search and filtering systems, adding this
element would be obvious
Dr. Carbonell tried to distinguish the claims from prior art such as WebHound and Rose
on the ground that the prior art does not teach a “tight integration” between search and filtering.4
(See Trial Tr. at 1875:13-23). He specifically argued that the prior art such as Rose and
WebHound does not remember the search query when filtering. To use his metaphor, he argued
that these prior art systems do not throw the search query “over the wall” between their search
and filtering components the way they throw the search results “over the wall.” (Trial Tr. at
1880:10-15).
But even if Dr. Carbonell were correct, it would be obvious to throw the search query
“over the wall” along with the search results. In other words, it would be obvious to modify
WebHound and Rose to remember and use the query for filtering and thereby achieve the “tight
integration” that Dr. Carbonell contends is lacking. As Dr. Ungar explained, “If you are filtering
search results, it’s obvious to keep around the query and use that also for filtering . . . just think
about it. If you ask a query of a search engine, you get a result, you just have the query sitting
there with the result, why not use that also for filtering?” (Trial Tr. at 1317:25-1318:7.)
Furthermore, as detailed above and in Sections II and III, both Bowman and Culliss
remember the search query when scoring and filtering items, because their content scores
compare words in the query to words in the items and their feedback scores utilize feedback from
users who entered the same or a similar query. Thus, one of ordinary skill could draw upon these
Bowman and Culliss disclosures (if necessary) in order to modify Rose or WebHound to
remember and use the search query for filtering. See KSR Intern. Co. v. Teleflex Inc., 550 U.S.
398, 421 (2007) (holding that claims are obvious if they are an obvious combination of prior art
elements).

4 As an initial matter, the Asserted Patents nowhere mention “tight integration” in either
the claims or specification. (See Trial Tr. at 1900:7-19).
Dr. Carbonell’s argument that the prior art did not teach how to “tightly integrate” search
with filtering – i.e., remember the search query when filtering – is also belied by the Asserted
Patents themselves. In their discussion of the prior art, the Asserted Patents’ shared specification
states that “conventional search engines initiate a search in response to an individual user’s query
and use content-based filtering to compare the query to accessed network informons . . .” (DX-001 at 2:15-18) (emphasis added). Dr. Carbonell admitted this disclosure on cross-examination.
(Trial Tr. at 1893:24-1894:21). This disclosure underscores how remembering the search query
when filtering is hardly a leap of inventiveness.
Finally, the obviousness of “tightly integrating” search with filtering is shown by the fact
that Mr. Lang and Mr. Kosak had no search experience before going to work for Lycos in the
Spring of 1998 – yet, by December of that same year, they were able to file the ‘420 Patent
application that supposedly achieves this “tight integration.” (Trial Tr. at 1897:19-1898:16).
2. To the extent Bowman does not disclose content matching, adding this element would be obvious
As discussed above, claim 29 of Bowman discloses adjusting an item’s score to reflect
the number of terms in the query that are “matched” by the item. The only sensible reading of
this “matching” technique is that it determines how many query terms appear in the content of
the item. But even if Plaintiff were correct that this “matching” technique does not compare the
query terms to the content of the item, modifying Bowman to disclose content-based matching
would be obvious.
This is because content-based matching indisputably appears in other sections of the
same Bowman reference. For example, the Background section of Bowman discusses how a
search system can order books within a search result based on how many query terms “match” or
appear in the books’ titles. (DX-059 at 1:37-45 (“the query result is a list of books whose titles
contain some or all of the query terms . . . the list may be ordered based on the extent to which
each identified item matches the terms of the query.”)). It would be obvious to apply this
unambiguous content-based matching to the invention disclosed in claim 29 of Bowman. Given
that the content-based matching from the Bowman Background appears in the same reference as
Bowman claim 29, it is self-evident that one could apply the content-based matching from the
Background to claim 29. Thus, even if the “matching” of Bowman claim 29 did not already
embrace content-based matching, modifying this technique to disclose content-based matching
would be obvious. See Boston Sci. Scimed, Inc. v. Cordis Corp., 554 F.3d 982, 991 (Fed. Cir.
2009) (“Combining two embodiments disclosed adjacent to each other in a prior art patent does
not require a leap of inventiveness.”)
3. To the extent Bowman does not disclose filtering, adding this element would be obvious
As discussed above, Bowman “filters” because it retains items that score above a
predetermined threshold while excluding items that score below the threshold. (DX-059 at 9:58-62, claim 15). Plaintiff argues that this is not true “filtering” because it involves ranking all the
items and then retaining a subset of items which exceed the threshold, rather than passing the
items one-by-one through the filter. As an initial matter, it is unclear how assigning scores to all
items and then filtering out the items whose scores do not exceed a certain threshold is
substantively different than assigning a score to one item and filtering it based on a threshold,
then assigning a score to the next item and filtering it based on the same threshold, etc. Even if
Plaintiff were correct that “filtering” requires retaining or excluding items one-by-one,
modifying Bowman to disclose “one-by-one” filtering would be utterly trivial and obvious.
There is no dispute that Bowman gives scores to items, nor is there any dispute that Bowman sets
a threshold and excludes items that score below the threshold. So modifying Bowman to
disclose the “one-by-one” filtering that Plaintiff contends is required would simply require
scoring Item A, retaining or excluding Item A based on whether it passes the threshold, and then
moving on to Item B – rather than scoring all the items and retaining the items that exceed the
threshold in one fell swoop. Because there are only two basic ways to retain and exclude items –
“one-by-one” or “all at once” – modifying Bowman to disclose the former technique rather than
the latter would necessarily be obvious. As the Supreme Court held in KSR, a modification or
combination is likely obvious where “there are a finite number of identified, predictable
solutions” to a known problem. See KSR, 550 U.S. at 421. Such is the case here – modifying
Bowman to disclose “one-by-one” filtering instead of “all at once” filtering would necessarily be
obvious, given the limited number of ways that one could retain and exclude items based on their
scores.
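The equivalence argued above can be demonstrated with a short sketch (hypothetical scores; illustrative only): filtering items “one-by-one” against a threshold and filtering the full scored list “all at once” retain exactly the same items.

```python
# Hypothetical sketch: "one-by-one" versus "all at once" threshold filtering
# produce identical results, because each is just the same per-item test.

def filter_one_by_one(scores: dict[str, float], threshold: float) -> list[str]:
    retained = []
    for item, score in scores.items():  # score an item, retain or discard it, move on
        if score > threshold:
            retained.append(item)
    return retained

def filter_all_at_once(scores: dict[str, float], threshold: float) -> list[str]:
    return [item for item, score in scores.items() if score > threshold]  # one fell swoop

scores = {"A": 0.9, "B": 0.3, "C": 0.6}
assert filter_one_by_one(scores, 0.5) == filter_all_at_once(scores, 0.5)
print(filter_one_by_one(scores, 0.5))  # ['A', 'C']
```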
4. To the extent Culliss does not disclose filtering, adding this element would be obvious
As discussed above, Culliss discloses “filtering” in the embodiment where X-rated
articles are screened out of the search results when their X-rated scores exceed a given threshold.
(See DX-058 at 12:1-5.) But even if one ignored this specific Culliss embodiment and focused
on the other Culliss embodiments that merely rank articles, it would be obvious to modify these
other Culliss embodiments so that they disclosed filtering as well as ranking. There is no dispute
that all Culliss embodiments present articles to the user in decreasing order of their scores. (Id.
at 5:5-10). Modifying this “ranking” method into “filtering” would simply require setting a
threshold and excluding, one-by-one, the articles that score below the threshold. Bowman
discloses setting such a threshold, and there is no inventiveness in comparing items to a threshold
“one-by-one” versus “all at once.” Thus, it would be obvious to set such a threshold in Culliss
and filter out the articles that score below the threshold.
C. The Level of Ordinary Skill in the Art
As Dr. Ungar explained, a person of ordinary skill in the art for the asserted patents
would have a bachelor’s degree in computer science (or an equivalent degree) plus 2-3 years
experience in the field of information retrieval. (Trial Tr. at 1311:15-20.) Dr. Carbonell has a
very similar formulation for the person of ordinary skill in the art. (Id. at 1284:8-18.) Given the
prior art disclosures recited above and the few (if any) differences between the asserted claims
and the prior art, Dr. Ungar opined (and the undisputed evidence supports) that such a person
would have found the asserted claims to be obvious. (See id. at 1311:25-1312:4).
The elements of the asserted claims were all found in the prior art. (Id. at 1312:5-10).
Moreover, as Dr. Ungar opined, these elements were not used in any unconventional or
unpredictable way in the asserted claims, but rather did the same things they did previously in
the art. (Id. at 1312:11-22). Thus, there would have been no barriers or difficulties to a person
of ordinary skill in the art combining these elements to create the inventions in the asserted
patents. (Id. at 1312:22-1313:4). Indeed, named inventor Ken Lang himself admitted that he
was not aware of any technological barriers to creating the inventions in the asserted claims. (Id.
at 274:12-275:1). See KSR, 550 U.S. at 421.
D.
No Secondary Considerations Can Rebut the Obviousness Showing
A patentee may rebut an obviousness showing by pointing to “secondary
considerations” of non-obviousness, such as commercial success of the patented invention,
failure of others to create the patented invention, or showing that the patented invention filled a
long-felt and unsolved need. See Graham, 383 U.S. at 17-18. In this case, there are no
secondary considerations to rebut the obviousness of the ‘420 and ‘664 patents. For example,
there was no commercial success for these patents – in fact, the patents were never commercially
used at all. (Trial Tr. at 332:7-12, 339:18-341:5, 1315:4-16.) There also was no failure by others
to devise the systems or methods claimed by these patents, nor did these patents fill any long-felt
and unsolved need. (Id. at 1315:17-1316:15.) To the contrary, numerous prior art references had
already solved the problem of combining content-based and collaborative filtering, in order to
resolve the weaknesses of each individual method on its own. (Id. at 1316:10-15.) Thus, no
secondary considerations can rebut the obviousness of the asserted patents.
VI.
CONCLUSION
For the foregoing reasons, Defendants respectfully request judgment as a matter of law
that each asserted claim of the ‘420 and ‘664 patents is anticipated by Culliss, anticipated by
Bowman, and invalid for obviousness.
DATED: November 1, 2012
/s/ Stephen E. Noona
Stephen E. Noona
Virginia State Bar No. 25367
KAUFMAN & CANOLES, P.C.
150 West Main Street, Suite 2100
Norfolk, VA 23510
Telephone: (757) 624.3000
Facsimile: (757) 624.3169
senoona@kaufcan.com
David Bilsker
David A. Perlson
QUINN EMANUEL URQUHART &
SULLIVAN, LLP
50 California Street, 22nd Floor
San Francisco, California 94111
Telephone: (415) 875-6600
Facsimile: (415) 875-6700
davidbilsker@quinnemanuel.com
davidperlson@quinnemanuel.com
Counsel for Google Inc., Target Corporation,
IAC Search & Media, Inc., and Gannett Co., Inc.
By: /s/ Stephen E. Noona
Stephen E. Noona
Virginia State Bar No. 25367
KAUFMAN & CANOLES, P.C.
150 W. Main Street, Suite 2100
Norfolk, VA 23510
Telephone: (757) 624-3000
Facsimile: (757) 624-3169
Robert L. Burns
FINNEGAN, HENDERSON, FARABOW, GARRETT &
DUNNER, LLP
Two Freedom Square
11955 Freedom Drive
Reston, VA 20190
Telephone: (571) 203-2700
Facsimile: (202) 408-4400
Cortney S. Alexander
FINNEGAN, HENDERSON, FARABOW, GARRETT &
DUNNER, LLP
3500 SunTrust Plaza
303 Peachtree Street, NE
Atlanta, GA 30308
Telephone: (404) 653-6400
Facsimile: (404) 653-6444
Counsel for Defendant AOL Inc.
CERTIFICATE OF SERVICE
I hereby certify that on November 1, 2012, I will electronically file the foregoing with the
Clerk of Court using the CM/ECF system, which will send a notification of such filing (NEF) to
the following:
Jeffrey K. Sherwood
Kenneth W. Brothers
DICKSTEIN SHAPIRO LLP
1825 Eye Street NW
Washington, DC 20006
Telephone: (202) 420-2200
Facsimile: (202) 420-2201
sherwoodj@dicksteinshapiro.com
brothersk@dicksteinshapiro.com
Donald C. Schultz
W. Ryan Snow
Steven Stancliff
CRENSHAW, WARE & MARTIN, P.L.C.
150 West Main Street, Suite 1500
Norfolk, VA 23510
Telephone: (757) 623-3000
Facsimile: (757) 623-5735
dschultz@cwm-law.com
wrsnow@cwm-law.com
sstancliff@cwm-law.com
Counsel for Plaintiff, I/P Engine, Inc.
/s/ Stephen E. Noona
Stephen E. Noona
Virginia State Bar No. 25367
KAUFMAN & CANOLES, P.C.
150 West Main Street, Suite 2100
Norfolk, VA 23510
Telephone: (757) 624.3000
Facsimile: (757) 624.3169
senoona@kaufcan.com
12020850v1