I/P Engine, Inc. v. AOL, Inc. et al
Filing 238
Memorandum in Support re 237 MOTION for Summary Judgment Defendants AOL Inc., Google Inc., IAC Search & Media, Inc., Gannett Company, Inc., and Target Corporation's Motion for Summary Judgment filed by AOL Inc., Gannett Company, Inc., Google Inc., IAC Search & Media, Inc., Target Corporation. (Noona, Stephen)
UNITED STATES DISTRICT COURT
EASTERN DISTRICT OF VIRGINIA (NORFOLK DIVISION)
I/P ENGINE, INC.
Plaintiff,
Civil Action No. 2:11-cv-512
v.
AOL, INC., et al.,
Defendants.
MEMORANDUM IN SUPPORT OF DEFENDANTS’ MOTION FOR SUMMARY JUDGMENT
As construed by the Court, the Asserted Patents in this case, U.S. Patent Nos. 6,314,420
(“the ‘420 Patent”) and 6,775,664 (“the ‘664 Patent”), claim systems and methods for filtering
search results by using content data and [collaborative] feedback data. Summary Judgment is
appropriate on several grounds.
First, Plaintiff cannot show a genuine issue of material fact as to whether Defendants
infringe the asserted patents; the facts show Defendants do not.
01980.51928/4951557.4
1
Second, two prior art patents – U.S. Patent No. 6,185,558 to Bowman et al. (“Bowman”)
and U.S. Patent 6,006,222 to Culliss (“Culliss”) – describe the same purported invention, and
anticipate all asserted claims as construed by the Court and interpreted by Plaintiff. Thus,
summary judgment of invalidity for all asserted claims under 35 U.S.C. § 102(e) is appropriate.
Third, laches presumptively applies if a patentee delays bringing suit for more than six
years after it knew or should have known of the alleged infringement. In this case, public
disclosures from as early as July 2005 mirror the infringement allegations from Plaintiff’s
Complaint. A reasonably diligent patentee would have investigated such statements to uncover
the same supposed basis for Plaintiff’s claims more than six years before Plaintiff filed suit in
September 2011. Thus, a laches presumption applies in this case. As Plaintiff has come forward
with no evidence to rebut the presumption, summary judgment of laches is appropriate.
STATEMENT OF UNDISPUTED FACTS
I. THE ASSERTED PATENTS TEACH FILTERING USING CONTENT AND COLLABORATIVE FEEDBACK DATA
1. Plaintiff alleges infringement of the ‘420 and ‘664 Patents. The Asserted Patents
originally issued to Lycos, Inc. (“Lycos”), from whom Plaintiff acquired them in the summer of
2011. The ‘420 Patent issued on November 6, 2001 and the ‘664 Patent issued on August 10,
2004. The ‘664 Patent claims priority to, and shares a specification with, the ‘420 Patent. Both
are directed to the concept of filtering search results by combining content data with user
feedback data. Plaintiff asserts infringement of claims 10, 14, 15, 25, 27, and 28 from the ‘420
Patent and claims 1, 5, 6, 21, 22, 26, 28, and 38 from the ‘664 Patent.
2. Claim 10 of the ‘420 Patent recites “[a] search engine system comprising”:
[a] a system for scanning a network to make a demand search for informons relevant to a query
from an individual user;
[b] a content-based filter system for receiving the informons from the scanning system and for
filtering the informons on the basis of applicable content profile data for relevance to the query;
and
[c] a feedback system for receiving collaborative feedback data from system users relative to
informons considered by such users;
[d] the filter system combining pertaining feedback data from the feedback system with the
content profile data in filtering each informon for relevance to the query.1
‘420 Claim 25 is substantially similar, but is cast as a method claim. The asserted dependent
claims of the ‘420 Patent (claims 14, 15, 27, and 28), add limitations such as requiring that the
feedback data be passive feedback data.
3. Claim 1 of the ‘664 Patent recites “[a] search system comprising:
[a] a scanning system for searching for information relevant to a query associated with a first
user in a plurality of users;
[b] a feedback system for receiving information found to be relevant to the query by other users;
[c1] a content-based filter system for combining the information from the feedback system with
the information from the scanning system and; [c2] for filtering the combined information for
relevance to at least one of the query and the first user.
‘664 Claim 26 is substantially similar, but is cast as a method claim. The asserted dependent
claims of the ‘664 Patent (claims 5, 6, 21, 22, 28, and 38) add elements such as having the
filtered information be an advertisement and delivering the filtered information to the first user.
4. During prosecution of the Asserted Patents, the PTO did not state that filtering
information by combining content data and user feedback data was novel or patent-worthy.
Rather, the PTO appears to have allowed the Patents based on the fact that no prior art taught the
use of a “wire,” which the patents describe as a continuous query whose results are updated over
1 Throughout this brief, Defendants have added bracketed letters denoting the various claim steps or elements, for the Court’s convenience.
time. (1:57-58.)2 (Chen Decl. Ex. 1.) None of the asserted claims recite the “wire” that the PTO
recited as the alleged point of novelty.
II. PLAINTIFF’S INFRINGEMENT ALLEGATIONS AND THE FUNCTIONALITY OF THE ACCUSED SYSTEMS
5.
2 Unless otherwise noted, all specification citations are from the ‘420 Patent.
A. Google’s Advertising Services
8.
B. The Smart Ad Selection System
11.
III. THE BOWMAN PATENT DISCLOSES FILTERING SEARCH RESULTS BY COMBINING CONTENT DATA WITH COLLABORATIVE FEEDBACK DATA
16. The Bowman patent, entitled “Identifying the Items Most Relevant to a Current
Query Based on Items Selected in Connection with Similar Queries,” was filed on March 10,
1998 and claims priority to a provisional application filed one week earlier. Bowman is
accordingly prior art to the Asserted Patents under 35 U.S.C. § 102(e).
17. Bowman functions similarly to a traditional search engine in that it accepts a
query from a user and generates a body of results in response. (See Chen Decl. Ex. 2 at Abstract;
5:31-32; claim 28.) As in the asserted patents, Bowman then filters those results based on
feedback of other users and content filtering. For example, if a user enters the search query
“Paris museum vacations,” Bowman would generate a body of search result items that contain
the words “Paris,” “museum,” or “vacations.” Bowman would then give each of these items a
ranking score based on how often they were selected by other users who had entered the query
“Paris museum vacations.” (See id. at Abstract; 2:30-35; 5:32-35; claim 28.) Alternatively,
rather than utilizing feedback from all users who entered the same query, Bowman may cluster
users into discrete groups (such as age, income, or behavioral groups) and use feedback from
users within the same group who entered the same query. (See id. at 3:28-33.) In this way,
search results returned in response to a given query may have different ranking scores for users
in different groups.
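The per-group feedback scoring described in paragraph 17 can be sketched as follows. This is an illustrative sketch only: the rating-table layout, group labels, and item names are hypothetical assumptions, not Bowman’s actual data structures.

```python
# Hypothetical rating table: (query, user group) -> {item: times selected}.
# Group labels and counts are illustrative, not drawn from Bowman.
rating_table = {
    ("paris museum vacations", "age_18_34"): {"item_A": 7, "item_B": 2},
    ("paris museum vacations", "age_35_54"): {"item_A": 1, "item_B": 9},
}

def feedback_score(query, group, item):
    """How often users in the same group who entered the same query selected the item."""
    return rating_table.get((query, group), {}).get(item, 0)

# The same query yields different feedback scores for users in different groups.
assert feedback_score("paris museum vacations", "age_18_34", "item_A") == 7
assert feedback_score("paris museum vacations", "age_35_54", "item_A") == 1
```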
18. Some Bowman embodiments further adjust the ranking score of each search result
according to its content, by analyzing how many of the terms in the query appear in the search
result’s content. (See id. at 8:50-53; claim 29.) Search results whose content contains all the
terms in the query get higher ranking scores, while search results that contain fewer of the query
terms get progressively lower ranking scores. (See id.) Thus, if a user entered the query “Paris
museum vacations,” Bowman would give search results that contain the terms “Paris,”
“museum,” and “vacations” higher adjustments to their ranking score, while giving search results
with two of these terms a lower adjustment (and giving even lower adjustments to search results
that contain only one of these terms).
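The two-step scoring described in paragraphs 17 and 18 can be sketched as follows. The data, the simple additive combination, and the equal weighting are illustrative assumptions for exposition; Bowman’s actual rating tables and adjustment formula are not reproduced here.

```python
def rank_items(items, query_terms, selection_counts):
    """Score each item by prior-user selections (feedback component), then
    adjust for how many query terms appear in the item's content, so that
    items matching more query terms are favored in the rankings."""
    scored = []
    for item_id, content in items.items():
        feedback = selection_counts.get(item_id, 0)             # collaborative component
        matched = sum(1 for t in query_terms if t in content)   # content component
        scored.append((feedback + matched, item_id))
    scored.sort(reverse=True)
    return scored

items = {
    "A": "paris museum vacations guide",  # contains all three query terms
    "B": "cheap paris vacations",         # contains two query terms
}
counts = {"A": 2, "B": 5}                 # hypothetical selection frequencies
ranked = rank_items(items, ["paris", "museum", "vacations"], counts)
```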
19. The search results are finally presented to the user in ranked order. (Id. at
Abstract.) Additionally, the system may present only a subset of the search results whose
ranking scores exceed a certain threshold, or a predetermined number of search results that have
the highest ranking scores. (See id. at 9:60-64.)
20. In sum, the final ranking score for each search result in Bowman is generated
through a combination of feedback-based data and content-based data. This ranking score is
then used to filter which search results are presented to the user.
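The two filtering alternatives noted in paragraph 19 (a score threshold, or a predetermined number of top results) can be sketched as follows; the scores and cutoffs are hypothetical.

```python
def filter_by_threshold(scored, threshold):
    """Keep only results whose ranking score meets the threshold."""
    return [(score, item) for score, item in scored if score >= threshold]

def filter_top_n(scored, n):
    """Keep only the n highest-scoring results."""
    return sorted(scored, reverse=True)[:n]

# Hypothetical (score, item) pairs from a combined feedback + content ranking.
scored = [(7, "B"), (5, "A"), (2, "C")]
assert filter_by_threshold(scored, 5) == [(7, "B"), (5, "A")]
assert filter_top_n(scored, 1) == [(7, "B")]
```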
IV. THE CULLISS PATENT DISCLOSES FILTERING SEARCH RESULTS BY COMBINING CONTENT DATA WITH COLLABORATIVE FEEDBACK DATA
21. U.S. Patent No. 6,006,222 to Culliss, entitled “Method for Organizing
Information,” was filed on August 1, 1997 and issued on December 21, 1999. Culliss is
accordingly prior art to the Asserted Patents under 35 U.S.C. § 102(e).
22. Culliss, like Bowman, is directed to a search engine system that ranks search
results based on a combination of the content of the search results and feedback from prior users
who had entered the same query and viewed these search results.
23. In Culliss, Internet articles are associated with key terms they contain. (Chen
Decl. Ex. 3 at 3:60-64.) For example, two articles about museum-viewing vacations in Paris
(“Article 1” and “Article 2”) might be associated with the key terms “Paris,” “museum,” and
“vacations” if they both contained those three words.
24. These articles are given a “key term score” for each of the key terms that they
contain. (Id. at 3:65-66.) Culliss discloses that each key term score might initially be set at 1.
(Id. at 3:10-4:9.) Thus, in the above example, Article 1 would have a key term score of 1 for
each of “Paris,” “museum,” and “vacations,” and so would Article 2. Alternatively, Culliss
discloses that the key term scores might be set to reflect how many times each of the key terms
appeared in the document’s content. (See id. at 14:32-36.)
25. Culliss discloses that the articles are presented to the user in the order dictated by
their combined key term scores. (Id. at 5:7-17.) For example, if Article 1 had a key term score
of 5 for “Paris,” 3 for “museum,” and 2 for “vacations,” its aggregate score for the query “Paris
museum vacations” would be 10 (5 + 3 + 2). If Article 2 had a key term score of 4 for “Paris,” 2
for “museum,” and 3 for “vacations,” its aggregate score for the query “Paris museum vacations”
would be 9 (4 + 2 + 3). Thus, Article 1 would be presented above Article 2 because it had a
higher aggregate score.
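The aggregate-score arithmetic in paragraph 25 can be sketched directly from the brief’s example; the dictionary layout is an illustrative assumption, not Culliss’s actual data structure.

```python
# Key term scores from the brief's example in paragraph 25.
key_term_scores = {
    "Article 1": {"paris": 5, "museum": 3, "vacations": 2},
    "Article 2": {"paris": 4, "museum": 2, "vacations": 3},
}

def aggregate_score(article, query_terms):
    """Sum the article's key term scores across the terms of the query."""
    return sum(key_term_scores[article].get(t, 0) for t in query_terms)

query = ["paris", "museum", "vacations"]
assert aggregate_score("Article 1", query) == 10  # 5 + 3 + 2
assert aggregate_score("Article 2", query) == 9   # 4 + 2 + 3
# Article 1 is presented above Article 2 because its aggregate score is higher.
```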
26. When a user selects an article whose squib is presented to him, the key term
scores for that article which correspond to the terms in the user’s query are increased. (Id. at
4:37-49.) This is because the user, by selecting the article in response to his query, has implicitly
indicated that these key terms from the query are appropriately matched to the article. (See id.)
27. For example, if a hypothetical first user who queried “Paris museum vacations”
selected Article 2, then Article 2’s key term scores for “Paris,” “museum,” and “vacations”
might each rise by +1. (See id. at 4:43-45.) The next user who enters the same query would thus
see a different rank of articles, based on the new key term scores that reflect the input of the prior
user. (See id. at 4:66-5:1.) Sticking with the same example, Article 2 would have a new
aggregate score of 12 (instead of 9) after the first user selected it, because its key term scores for
“Paris,” “museum,” and “vacations” each increased by +1 when the first user selected it. Thus, a
later user who queries “Paris museum vacations” would see Article 2 (which has a new
aggregate score of 12) presented above Article 1 (which still has its old aggregate score of 10).
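The feedback update described in paragraphs 26 and 27 can be sketched as follows, reusing the same example numbers; the function and dictionary names are illustrative assumptions.

```python
def record_selection(scores, article, query_terms):
    """When a user selects an article, increase its key term scores
    for each term in the user's query by 1."""
    for t in query_terms:
        scores[article][t] = scores[article].get(t, 0) + 1

scores = {
    "Article 1": {"paris": 5, "museum": 3, "vacations": 2},  # aggregate 10
    "Article 2": {"paris": 4, "museum": 2, "vacations": 3},  # aggregate 9
}
query = ["paris", "museum", "vacations"]

# The first user selects Article 2 in response to the query.
record_selection(scores, "Article 2", query)

agg = {a: sum(s.get(t, 0) for t in query) for a, s in scores.items()}
assert agg["Article 2"] == 12  # 9 plus three +1 increments
assert agg["Article 1"] == 10  # unchanged; Article 2 now ranks first
```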
V. BOWMAN AND CULLISS ANTICIPATE ALL ASSERTED CLAIMS
The Bowman and Culliss references both anticipate every asserted claim of the ‘420 and
‘664 Patents. As detailed below, Bowman and Culliss use a combination of feedback-based
filtering and content-based filtering to rank and filter search results for relevance to a query.
These disclosures anticipate every asserted claim of the ‘420 and ‘664 Patents as construed by
the Court and interpreted by Plaintiff.
A. Plaintiff’s Own Validity Expert Disputes Very Few Elements from Bowman and Culliss
Plaintiff’s validity expert (Dr. Jaime Carbonell) does not dispute that the vast majority of
claim elements are met by Bowman and Culliss. For both references, Dr. Carbonell merely
disputes three issues: (1) whether they employ content analysis; (2) whether they “filter”
information; and (3) whether they “search for information” within the meaning of ‘664 claims 1
and 26. (See Chen Decl. Ex. 19 at pp. 17-28.) As discussed below, Dr. Carbonell’s positions are
demonstrably incorrect, and both Bowman and Culliss anticipate each asserted claim.
B. Bowman Anticipates Claim 10 of the ‘420 Patent
1. Bowman discloses a search engine system (claim 10 (preamble))
Bowman discloses a “search engine system” as recited by the claim 10 preamble.
Specifically, Bowman includes “a query server for generating query results from queries.” (Id.
Ex. 2 at 5:31-32.)
2. Bowman discloses a system for scanning a network to make a demand search for informons relevant to a query from an individual user (claim 10[a])
Claim 10[a] recites “a system for scanning a network to make a demand search for
informons relevant to a query from an individual user.” The Court construed “scanning a
network” as “looking for or examining items in a network” and construed “demand search” as “a
single search engine query performed upon a user request.” (See Dkt. 171 at 23.)
Bowman meets this element. Specifically, Bowman discloses the steps of: “receiving a
query specifying one or more terms; generating a query result identifying a plurality of items
satisfying the query.” (Chen Decl. Ex. 2 at claim 28 [a-b].) This query is submitted by a user,
and thus the resulting search is “performed upon a user request.” (See id. at 7:43-46.) Further,
Bowman operates on a networked system of computers. (See id. at 5:29-30; 7:66-67.)
3. Bowman discloses a content-based filter system for receiving the informons from the scanning system and for filtering the informons on the basis of applicable content profile data for relevance to the query (claim 10[b])
Claim 10[b] recites “a content-based filter system for receiving the informons from the
scanning system and for filtering the informons on the basis of applicable content profile data for
relevance to the query.” Bowman meets this element, as it receives informons and filters them
based on content.
After a search query is entered and search results retrieved, Bowman examines each
search result’s content profile to see how many query terms it contains. Bowman then may
adjust each search result’s ranking score so that search results containing every term in the query
receive higher adjustments than search results containing fewer terms in the query. Specifically,
Bowman explains: “The facility uses rating tables that it has generated to generate ranking values
for items in new query results . . . scores may be adjusted to more directly reflect the number of
query terms that are matched to the item, so that items that match more query terms than others
are favored in the rankings.” (Id. at 9:28-53 (emphasis added).) Claim 29 of Bowman also
recites adjusting search results’ ranking scores based on how many terms from the query are
found in each search result’s content, by “adjusting the ranking value produced for each item
identified in the query result to reflect the number of terms specified by the query that are
matched by the item.” (Id. at claim 29.)
Finally, Bowman filters out (i.e., excludes) search results whose ranking scores fall below
a certain threshold, or presents a predetermined number of search results that have the highest
ranking scores and filters out all the rest. (See id. at 9:60-64.)
(a) Dr. Carbonell’s argument that Bowman’s “matching” does not use content analysis is incorrect
Plaintiff’s expert, Dr. Carbonell, disputes whether Bowman’s “matching” technique
analyzes whether a query term appears in a search result’s content. Dr. Carbonell argues that the
matching technique analyzes whether a search result is associated with a query term in
Bowman’s rating table, which would merely mean that at least one prior user had selected that
search result in response to a query containing that term. (See Chen Decl. Ex. 19 ¶¶ 84 fn. 3, 85,
88.) In purported support of his opinion, Dr. Carbonell points to two statements from Bowman
that refer to ordering search results “in accordance with collective and individual user behavior
rather than in accordance with attributes of the items.” (Id. at ¶ 85 (citing Chen Decl. Ex. 2 at
2:59-3:22; 4:38-48).) But this is a non-sequitur. Neither of these statements mention, or have
anything to do with, the “matching” technique disclosed in claim 29 and at 9:50-53 of Bowman.
Rather, they occur when discussing more general Bowman embodiments that rely solely on user
feedback to rank and filter search results. (See Chen Decl. Ex. 2 at 2:59-3:22; 4:38-48.)
Contrary to Dr. Carbonell’s argument, Bowman makes clear that “matching” involves
content analysis. Indeed, when discussing matching in connection with the prior art, Bowman
explicitly states that a query term is “matched” to a search result if it appears in that search
result’s content. For example, if the search results are books, Bowman states that a list of books
will be “matching the terms of the query” if their “titles contain some or all of the query terms.”
(See id. at 1:30-38.) In that same paragraph, Bowman states that the list of books “may be
ordered based on the extent to which each identified item matches the terms of the query.” (Id.
at 1:43-44 (emphasis added).) In other words, the list of books can be ordered based on how
many of the query terms are matched to (i.e., contained within) the title of each book.
In nearly verbatim language, dependent claim 29 of Bowman describes this prior art
technique of ranking search results according to how many query terms are contained in their
content. A simple comparison of claim 29 to the “matching” prior art discussion makes this
clear. Compare claim 29 (“adjusting the ranking value produced for each item identified in the
query result to reflect the number of terms specified by the query that are matched by the item”)
with 1:43-44 (“the list may be ordered based on the extent to which each identified item matches
the terms of the query.”) Given the identity of language, the only logical interpretation is that
claim 29’s matching technique does involve content analysis, and no reasonable jury could find
otherwise.
Because Dr. Carbonell’s interpretation of Bowman’s “matching” technique ignores the
plain text of Bowman, Plaintiff cannot rely on Dr. Carbonell’s implausible interpretation to alter
what Bowman discloses and defeat summary judgment. See Iovate Health Sci., Inc. v. Bio-Eng.
Supp. and Nutrition, Inc., 586 F.3d 1376, 1381 (Fed. Cir. 2009) (upholding summary judgment
of anticipation despite patentee’s submission of an expert declaration, where the Court found that
the expert took implausible positions that were inconsistent with the patent specification).
(b) Dr. Carbonell’s argument that Bowman does not “filter” search results is incorrect
Although Dr. Carbonell admits that Bowman presents the user with search results that
score above a numerical threshold and excludes the rest, he argues that this is somehow not
“filtering” because it is “relative and carried out with reference to the entire ranked list of search
results” rather than being an “item-by-item process.” (Chen Decl. Ex. 19 ¶ 90.) This argument
makes no sense. By setting an absolute numerical threshold and presenting a user with the
search results that score above this threshold, Bowman determines, on a non-relative and item-by-item basis, whether each search result has scored highly enough to be presented to the user.
Furthermore, Bowman also teaches “select[ing] for prominent display items having top 3
combined scores.” (Chen Decl. Ex. 2 at Fig. 9, step 907.)
4. Bowman discloses a feedback system for receiving collaborative feedback data from system users relative to informons considered by such users (claim 10[c])
Claim 10[c] recites “a feedback system for receiving collaborative feedback data from
system users relative to informons considered by such users.” The Court construed
“collaborative feedback data” as data from system users with similar interests or needs regarding
what informons such users found to be relevant. (D.I. 212 (Revised Markman Order) at 23.)
Bowman meets this element by recording how often users in the same demographic or
behavioral group who entered the same search query selected various search results. For
example, claim 28[c] of Bowman recites: “for each item identified in the query result, combining
the relative frequencies with which users selected the item in earlier queries specifying each of
the terms in the query to producing [sic] a ranking value for the item.” (emphasis added).
Moreover, rather than recording feedback from all users who entered the same query, Bowman
may cluster users into groups (such as age, income or behavioral groups) and use feedback from
users within the same group who entered the same query. (Chen Decl. Ex. 2 at 3:28-33.)
Because Bowman receives feedback from users in the same demographic or behavioral
group, Bowman receives feedback from users “with similar interests or needs” as required by the
Court’s construction of “collaborative feedback data.” Additionally, Plaintiff takes the position
that users have “similar interests or needs” as long as they entered the same query.7 Thus,
Bowman’s feedback data qualifies as “collaborative feedback data” under Plaintiff’s
interpretation even when Bowman does not cluster users into discrete groups, because
Bowman’s feedback data still shows how often users who entered the same query selected a
given search result. (See Chen Decl. Ex. 2 at 13:42-46; Abstract.)
5. Bowman discloses the filter system combining pertaining feedback data from the feedback system with the content profile data in filtering each informon for relevance to the query (claim 10[d])
Claim 10[d] recites “the filter system combining pertaining feedback data from the
feedback system with the content profile data in filtering each informon for relevance to the
query.” Bowman meets this element, because Bowman combines data regarding the content of
informons with collaborative feedback data from other users to determine the most relevant
informons to a query. Specifically, Bowman determines each search result item’s ranking score
by combining collaborative feedback data (showing how often the item was selected by users
from the same group who entered the same query) with content profile data (showing how many
of the query terms appear in the item’s content). (See id. at claim 29.) Bowman explicitly states
that an item’s feedback score is “combined” with its content matching score to produce a final
ranking score for the item. (Id. at 9:49-53.) The final ranking score is used to determine the
item’s relevance to the query. (See id. at 2:23-24.) As noted above, Bowman then filters out
7 As Plaintiff stated at the Markman Hearing: “when we look to see who has similar needs or interests, what we are looking at is who else made that same search? Who else made that same query?” (Chen Decl. Ex. 32 at 35:14-17.)
items whose scores fall below a certain threshold, or presents a predetermined number of items
with the highest scores and filters out the rest. (Id. at 9:60-64.)
C. Bowman Anticipates Claims 14 and 15 of the ‘420 Patent
Claim 14 depends from claim 10 and further requires “wherein the collaborative feedback
data comprises passive feedback data.” Claim 15 adds the further requirement that “the passive
feedback data is obtained by passively monitoring the actual response to a proposed informon.”
Bowman meets both these elements, because Bowman’s feedback data is derived from passively
monitoring users’ actual responses to search results – namely, monitoring how often users
selected each of those search results. (See id. at 2:31-35.)
D. Bowman Anticipates Claims 25, 27, and 28 of the ‘420 Patent
Claims 25, 27, and 28 contain the same substance as claims 10, 14, and 15, respectively,
but are simply recast as method rather than system claims. Thus, Bowman anticipates claims 25,
27, and 28 for the same reasons that it anticipates claims 10, 14, and 15.
E. Bowman Anticipates Claim 1 of the ‘664 Patent
1. Bowman discloses a search system (claim 1 (preamble))
Bowman discloses “a search system” as recited by the preamble. Specifically, Bowman
accepts a search query from a user and returns a set of search results. (See id. Ex. 2 at 5:31-32
(stating that Bowman includes “a query server for generating query results from queries.”).)
2. Bowman discloses a scanning system for searching for information relevant to a query associated with a first user in a plurality of users (claim 1[a])
Claim 1[a] recites “a scanning system for searching for information relevant to a query
associated with a first user in a plurality of users.” The Court construed “a scanning system” as
“a system used to search for information.” (Dkt. 171 at 23.) Thus construed, Bowman meets
this limitation because it searches for information relevant to a query associated with a first user.
As recited in Claim 28 of Bowman, Bowman discloses “[a] computer-readable medium whose
contents cause a computer system to rank items in a search result by: receiving a query
specifying one or more terms; generating a query result identifying a plurality of items satisfying
the query.” (Chen Decl. Ex. 2 at claim 28[a-b].)
Furthermore, Bowman’s system is intended for use by a plurality of users, as evidenced
by the fact that the system records the collective preferences of multiple users. (See id. at 5:33-34; claim 28[c].) Within the plurality of users, Bowman searches for results to a query submitted
by a particular user. (See id. at 7:42-45.) Therefore, Bowman meets the “first user in a plurality
of users” aspect of this claim element.
(a) Dr. Carbonell’s position that Bowman does not “search[] for information” is incorrect
Dr. Carbonell disputes that Bowman “searches for information,” but he provides no
support for this position. He merely states that Bowman lacks this element (Chen Decl. Ex. 19 ¶
80) and later says that Bowman lacks “full search engine capabilities.” (Id. at ¶ 83.) Yet, as
shown in claim 28 of Bowman, Bowman explicitly claims the steps of “rank[ing] items in a
search result” by “receiving a query” and “generating a query result identifying a plurality of
items satisfying the query.” Because Bowman generates a query result and explicitly calls this
query result a “search result,” Bowman necessarily teaches that it has searched for these results.
Indeed, elsewhere in his report, Dr. Carbonell himself says that Bowman falls within a class of
prior art references that he calls the “ad-hoc search group.” (Id. ¶ 156.)
3. Bowman discloses a feedback system for receiving information found to be relevant to the query by other users (claim 1[b])
Claim 1[b] recites “a feedback system for receiving information found to be relevant to
the query by other users.”
See Amazon, 239 F.3d at 1351.
F. Bowman Anticipates Claim 5 of the ‘664 Patent
Claim 5 depends from claim 1 and further requires the filtered information to be an
advertisement. Bowman meets this element. Specifically, Bowman discloses that system users
can purchase the items represented by the search results, such as by adding these items to their
virtual shopping carts. (See Chen Decl. Ex. 2 at 5:4; 9:2-3; claim 7.) Thus, the search results
constitute advertisements for the purchasable items that they represent.
G. Bowman Anticipates Claim 6 of the ‘664 Patent
Claim 6 depends from claim 1 and further requires “an information delivery system for
delivering the filtered information to the first user.” Bowman discloses this element, as it recites
that the software facility displays the filtered search results to the user. (See id. at 9:56-58.)
H. Bowman Anticipates Claim 21 of the ‘664 Patent
Claim 21 depends from claim 1 and further recites “wherein the content-based filter
system filters by extracting features from the information.” Bowman discloses this element. As
discussed above, Bowman extracts words from the content of each search result in order to
determine how many words from the query are found in the search result. (See id. at 9:50-53;
claim 29.)8
I. Bowman Anticipates Claim 22 of the ‘664 Patent
Claim 22 depends from claim 21 and further recites “wherein the extracted features
comprise content data indicative of the relevance to the at least one of the query and the user.”
8 Dr. Carbonell disputes that Bowman meets this limitation, but his position appears to be entirely derivative of his position that Bowman does not use content analysis. (See Chen Decl. Ex. 19 ¶ 96.)
Bowman discloses this element, because the words that Bowman extracts from a search result’s
content indicate how relevant the search result is to the query. (See id. at 9:50-53; claim 29.)
J. Bowman Anticipates Claim 26 of the ‘664 Patent
Claim 26 contains essentially the same elements as claim 1, but is simply recast as a
method rather than system claim. Thus, Bowman anticipates claim 26 for the same reasons that
it anticipates claim 1.
K. Bowman Anticipates Claim 28 of the ‘664 Patent
Claim 28 depends from claim 26 and further recites “the step of delivering the filtered
information to the first user.” As discussed with respect to claim 6, supra, Bowman discloses
this element.
L. Bowman Anticipates Claim 38 of the ‘664 Patent
Claim 38 depends from claim 26 and further recites “wherein the searching step
comprises scanning a network in response to a demand search for the information relevant to the
query associated with the first user.” Bowman meets this element, as construed, because
Bowman looks for or examines items in response to a single search engine query. (See Chen
Decl. Ex. 2 at claim 28[a-b] (disclosing the steps of “receiving a query specifying one or more
terms; generating a query result identifying a plurality of items satisfying the query.”).) This
query is submitted by a user, and thus the resulting search is “performed upon a user request.”
(See id. at 7:43-46.) Finally, Bowman operates on a computer network. (See id. at 5:29-30;
7:66-67.)
M. Culliss Anticipates Claim 10 of the ‘420 Patent
1. Culliss discloses a search engine system (claim 10 (preamble))
Culliss discloses “a search engine system” as required by the claim 10 preamble because
Culliss accepts a user’s search query and returns a set of search results. (See Chen Decl. Ex. 3 at
4:10-26.) Culliss also discloses that its content- and feedback-based methods may be used to
rank and order the search results of traditional search engines like Excite and Lycos. (See id. at
13:35-45.)
2. Culliss discloses a system for scanning a network to make a demand search for informons relevant to a query from an individual user (claim 10[a])
Claim 10[a] recites “a system for scanning a network to make a demand search for
informons relevant to a query from an individual user.” The Court construed “scanning a
network” as “looking for or examining items in a network” and construed “demand search” as “a
single search engine query performed upon a user request.” (See Dkt. 171 at 23.)
Culliss meets this element. Specifically, Culliss looks for search results (which it calls
“articles”) in response to a single search engine query entered by a user. (See Chen Decl. Ex. 3
at 4:10-25.) These articles are housed on the Internet, which is “an extensive network of
computer systems.” (Id. at 3:45-55 (emphasis added).)
3. Culliss discloses a content-based filter system for receiving the informons from the scanning system and for filtering the informons on the basis of applicable content profile data for relevance to the query (claim 10[b])
Claim 10[b] recites “a content-based filter system for receiving the informons from the
scanning system and for filtering the informons on the basis of applicable content profile data for
relevance to the query.” Culliss meets this element, as it receives informons and filters them
based on content. Specifically, Culliss uses articles’ aggregate key term scores to rank the
articles for relevance to the query (id. at 5:2-10), and the key term scores are calculated in part by
analyzing each article’s content to determine how many times each key term from the query
appears in the article. (See id. at 14:35-36 (“the [key term] scores can be initially set to
correspond with the frequency of the term occurrence in the article.”).)
(a) Dr. Carbonell’s argument that Culliss does not disclose content analysis is incorrect
Dr. Carbonell argues that Culliss does not disclose content analysis. But he does not
dispute that Culliss calculates articles’ key term scores in part by counting how many times each
key term from the query appears in the article’s content. He merely argues that this content-based metric gets diluted over time as an article’s key term score gets repeatedly altered based on
user feedback, so that “[f]or all intents and purposes, Culliss’s rankings are based only on
popularity information.” (Chen Decl. Ex. 19 ¶ 106.) He gives a specific example of an article
whose key term score is initially set at 1 based on content analysis, and then is later clicked on
1,000 times, so that its eventual key term score is based 99.9% on feedback and only 0.1% on the
initial content analysis. (See id. at fn. 5.)
However, the fact that content analysis may play less and less of a role in Culliss’s
system as more and more user feedback is received does not mean that the content analysis is
ever absent. Even in the stylized example from Dr. Carbonell’s Report, the article’s key term
score is based on a combination of content data and feedback data – it is just based 0.1% on
content and 99.9% on feedback. Moreover, Dr. Carbonell does not dispute that content analysis
can play a dominant role in setting an article’s key term score if the term appears many times in
the article (thus yielding a high content score) but the article was selected few times by users
who queried that term (thus yielding a small feedback-based alteration to the score). Thus, Dr.
Carbonell’s analysis only confirms that Culliss relies partly on content analysis to set the key
term scores for its articles.
(b) Dr. Carbonell’s argument that Culliss does not disclose filtering is incorrect
As to the “filtering” limitation, Dr. Carbonell argues that Culliss does not “filter” articles
because it merely ranks them. (See Chen Decl. Ex. 19 ¶ 108.) Yet Culliss’s ranking determines
the position in which these articles are presented to users, because Culliss discloses that the
article with the highest score is presented to the user in the first or highest position, the article
with the second-highest score is presented in the second position, etc. (See id. Ex. 3 at 5:7-17.)
Thus, Culliss’s system – which presents articles to the user in decreasing order of their key term
scores – “filters” these articles.9
4. Culliss discloses a feedback system for receiving collaborative feedback data from system users relative to informons considered by such users (claim 10[c])
Claim 10[c] recites “a feedback system for receiving collaborative feedback data from
system users relative to informons considered by such users.” The Court construed
“collaborative feedback data” as “data from system users with similar interests or needs regarding
what informons such users found to be relevant.” (Dkt. 212 at 23.)
Culliss discloses this element by recording which articles were selected by users who
entered a given query and raising the key term scores for terms in the selected articles that match
terms in the query. (See Chen Decl. Ex. 3 at 4:37-49.) As discussed above, Plaintiff takes the
position that users have “similar interests or needs” if they entered the same query. Thus, by
receiving and recording the selection choices of users whose queries contained the same terms,
9 Alternatively, if “filtering” required some articles to be excluded altogether, it would be
obvious to modify Culliss so that articles scoring below a certain threshold would be excluded
and not presented to the user. As explained above, Bowman discloses this precise technique.
(See Chen Decl. Ex. 2 at 9:60-64). It would be obvious to modify Culliss so that it performed the
same filtering as Bowman, particularly given Dr. Carbonell’s position that Bowman and Culliss
should be grouped together as fundamentally similar references. (See id. Ex. 19 at ¶¶ 136, 156).
Culliss receives “collaborative feedback data” under the Court’s construction and Plaintiff’s
application of the claim.
5. Culliss discloses the filter system combining pertaining feedback data from the feedback system with the content profile data in filtering each informon for relevance to the query (claim 10[d])
Claim 10[d] recites “the filter system combining pertaining feedback data from the
feedback system with the content profile data in filtering each informon for relevance to the
query.” Culliss meets this element. As discussed above, Culliss ranks articles for relevance to a
query by calculating their aggregate key term scores for the terms in that query (id. at 5:2-10),
and each key term score is based on a combination of feedback data and content data. (See id. at
4:37-49; 14:35-36.) Indeed, even Dr. Carbonell admits that each article’s key term score is
based on a combination of content and feedback data – he just asserts that the feedback data will
tend to outweigh and dilute the content data over time. (See id. Ex. 19 at ¶ 106.)
N. Culliss Anticipates Claims 14 and 15 of the ‘420 Patent
Claim 14 depends from claim 10 and further requires “wherein the collaborative feedback
data comprises passive feedback data.” Claim 15 depends from claim 14 and further requires
“wherein the passive feedback data is obtained by passively monitoring the actual response to a
proposed informon.” Culliss meets these limitations because Culliss’s feedback data is derived
from passively monitoring users’ actual response to articles – namely, monitoring how
frequently users who had entered the same query selected each of those articles. (Id. at 4:32-34.)
O. Culliss Anticipates Claims 25, 27, and 28 of the ‘420 Patent
Claims 25, 27, and 28 contain the same substance as claims 10, 14, and 15,
respectively, but are simply recast as method rather than system claims. Thus, Culliss anticipates
claims 25, 27, and 28 for the same reasons that it anticipates claims 10, 14, and 15.
P. Culliss Anticipates Claim 1 of the ‘664 Patent
1. Culliss discloses a search system (claim 1 (preamble))
Culliss discloses “a search system” as recited by the claim 1 preamble because Culliss
accepts a search query from a user and returns a set of search results. (See id. at 4:10-26.)
Additionally, Culliss’s content- and feedback-based methods may be used to rank and order the
search results of traditional search engines like Excite and Lycos. (See id. at 13:35-45.)
2. Culliss discloses a scanning system for searching for information relevant to a query associated with a first user in a plurality of users (claim 1[a])
Claim 1[a] recites “a scanning system for searching for information relevant to a query
associated with a first user in a plurality of users.” The Court construed “a scanning system” as
“a system used to search for information.” (Dkt. 171 at 23.) Thus construed, Culliss meets this
claim element because it searches for articles relevant to a query associated with a first user
among a plurality of users. (See Chen Decl. Ex. 3 at 4:10-26.) Culliss also states that its
content- and feedback-based methods may be applied to traditional search engines like Excite
and Lycos to rank their search results. (See id. at 13:35-45.)
Dr. Carbonell states that Culliss does not disclose “searching for information relevant to a
query associated with a first user” (Chen Decl. Ex. 19 ¶ 104), but he provides no
explanation or support for this statement. Accordingly, Dr. Carbonell’s mere ipse dixit cannot
raise a genuine issue as to whether Culliss discloses this element.10
3. Culliss discloses a feedback system for receiving information found to be relevant to the query by other users (claim 1[b])
Claim 1[b] recites “a feedback system for receiving information found to be relevant to
the query by other users.”
10 Moreover, Dr. Carbonell himself puts Culliss within a class of references that he calls the
“ad-hoc search group.” (See Chen Decl. Ex. 19 ¶ 156) (emphasis added).
Culliss meets this element because Culliss receives feedback about which articles were selected by
other users and uses this data to adjust the articles’ key term scores. (See id. Ex. 3 at 4:37-49.)
4. Culliss discloses a content-based filter system for combining the information from the feedback system with the information from the scanning system and for filtering the combined information for relevance to at least one of the query and the first user (claim 1[c])
Claim 1[c] recites “a content-based filter system for combining the information from the
feedback system with the information from the scanning system and for filtering the combined
information for relevance to at least one of the query and the first user.” Culliss meets this
element by giving articles key term scores that reflect both content and feedback data. (See Chen
Decl. Ex. 3 at 4:37-49; 14:35-36.) These scores are used to “filter” the articles by determining
the position in which the articles are presented to users. (See id. at 5:7-17.)
As discussed above, combining search results with ranking scores that reflect content and
feedback data – as disclosed by Culliss –
Q. Culliss Anticipates Claim 5 of the ‘664 Patent
Claim 5 depends from claim 1 and further requires the filtered information to be an
advertisement. Culliss meets this element, because Culliss explicitly states that the articles
which are filtered may be advertisements. (See id. at 9:56-62.)
R. Culliss Anticipates Claim 6 of the ‘664 Patent
Claim 6 depends from claim 1 and further requires “an information delivery system for
delivering the filtered information to the first user.” Culliss discloses this element, as it recites
that the search engine displays squibs of the articles to the user. (See id. at 4:25-31.)
S. Culliss Anticipates Claim 21 of the ‘664 Patent
Claim 21 depends from claim 1 and further recites “wherein the content-based filter
system filters by extracting features from the information.” Culliss discloses this element. As
discussed above, Culliss extracts words from the content of each article in order to determine
how often the words from the query are found in these articles. (See id. at 14:34-36.)
T. Culliss Anticipates Claim 22 of the ‘664 Patent
Claim 22 depends from claim 21 and further recites “wherein the extracted features
comprise content data indicative of the relevance to the at least one of the query and the user.”
Culliss discloses this element, because the words that Culliss extracts from an article’s content
indicate how relevant the article is to the query. (See id. at 14:34-36.)
U. Culliss Anticipates Claim 26 of the ‘664 Patent
Claim 26 contains essentially the same elements as claim 1, but is recast as a method
rather than system claim. Thus, Culliss anticipates claim 26 for the same reasons that it
anticipates claim 1.
V. Culliss Anticipates Claim 28 of the ‘664 Patent
Claim 28 depends from claim 26 and further requires “delivering the filtered information
to the first user.” As discussed with respect to claim 6, supra, Culliss discloses this element.
W. Culliss Anticipates Claim 38 of the ‘664 Patent
Claim 38 depends from claim 26 and further recites “wherein the searching step
comprises scanning a network in response to a demand search for the information relevant to the
query associated with the first user.” As noted above, “scanning a network” has been construed
as looking for or examining items in a network, and “demand search” has been construed as a
single search engine query performed upon a user request. Culliss meets this element because
Culliss searches for articles in response to a single user search query, and these articles are
searched for on the vast network of the Internet. (See Chen Decl. Ex. 3 at 3:45-55; 4:10-26.)
VI. SUMMARY JUDGMENT THAT LACHES BARS PRE-FILING DAMAGES IS APPROPRIATE
The defense of laches, when proven, bars a patent plaintiff from winning any damages
that accrued before the filing of suit. See A.C. Aukerman Co. v. R.L. Chaides Constr. Co., 960
F.2d 1020, 1041 (Fed. Cir. 1992) (en banc). A laches defense has two elements: “(1) the plaintiff
delayed filing suit for an unreasonable and inexcusable length of time from the time the plaintiff
knew or reasonably should have known of its claim against the defendant; and (2) the delay
operated to the prejudice or injury of the defendant.” Id. at 1032. “A presumption of laches
arises where a patentee delays bringing suit for more than six years after the date the patentee
knew or should have known of the alleged infringer's activity.” Id. at 1037. When the
presumption applies, the laches elements of undue delay and prejudice “must be inferred, absent
rebuttal evidence.” Id. at 1038 (emphasis in original). The plaintiff then bears the burden of
rebutting the presumption by producing sufficient evidence to raise a genuine issue of material
fact as to whether unreasonable delay and prejudice actually exist. See id. at 1038.
When a patent transfers ownership, “a transferee of the patent must accept the
consequences of the dilatory conduct of immediate and remote transferors.” Donald S. Chisum,
CHISUM ON PATENTS § 19.05[2][A][ii]
(2011); accord Eastman Kodak Co. v. Goodyear Tire &
Rubber Co., 114 F.3d 1547, 1559 (Fed. Cir. 1997). Thus, if a series of patent owners
collectively delayed asserting a patent for more than six years, a defendant may invoke the six-year presumption of laches against any later attempt to assert that patent.
Under these principles, a presumption of laches applies in this case. I/P Engine (the
Asserted Patents’ present owner) and Lycos (the prior owner) had actual or constructive notice of
Google’s allegedly infringing activities no later than July 2005, which is more than six years
prior to the filing of this lawsuit on September 15, 2011. I/P Engine and Lycos nevertheless
failed to assert the Patents for over six years, thereby triggering a presumption of laches.
A. I/P Engine and Lycos Had Actual or Constructive Knowledge of Google’s Alleged Infringement Since at Least July 2005.
For purposes of triggering the six-year laches presumption, the period of delay begins
when the patentee gains actual or constructive knowledge of the alleged infringement, meaning
that patentees have a duty to police their rights. Wanlass v. General Elec. Co., 148 F.3d 1334,
1337-38 (Fed. Cir. 1998). “[I]gnorance will not insulate [a patentee] from constructive
knowledge in appropriate circumstances.” Id. at 1338. Reasonable patentees must investigate
potentially infringing “pervasive, open, and notorious activities,” including “sales, marketing,
publication, or public use of a product similar to or embodying technology similar to the patented
invention, or published descriptions of the defendant's potentially infringing activities.” Id.
A reasonably diligent company holding these asserted patents would have investigated
Google’s “open” search advertising systems more than six years before the filing of suit in
September 2011. Indeed, a reasonably diligent patentee would have become aware of the
potential infringement by July 2005, when Google publicly announced Quality Score – the
precise aspect of Google’s systems that I/P Engine ultimately accused in its Complaint.
(Compare Chen Decl. Ex. 10 (“The Quality Score is simply a new name for the predicted CTR,
which is determined based on the CTR of your keyword, the relevance of your ad text, the
historical keyword performance, and other relevancy factors”) with Dkt. 1 at ¶ 43 (“Google’s
search advertising systems filter advertisements by using ‘Quality Score’ which is a combination
of an advertisement’s content relevance to a search query (e.g., the relevance of the keyword and
the matched advertisement to the search query), and click-through-rates from prior users relative
to that advertisement (e.g., the historical click-through rate of the keyword and matched
advertisement)”).)
I/P Engine and Lycos thus should have known of Google’s alleged infringement by July
2005 at the very latest. In other words, constructive knowledge must be imputed to I/P Engine
and Lycos no later than July 2005. This is more than six years before I/P Engine filed suit on
September 15, 2011. Therefore, a presumption of laches applies.
Even beyond the public disclosures of Quality Score in Google’s advertising systems,
Lycos has long been a Google partner.
B. Plaintiff Has Offered No Evidence to Rebut the Laches Presumption.
As a result of the six-year presumption, Plaintiff bears the burden of producing sufficient
evidence to raise a genuine issue of material fact as to whether unreasonable delay and prejudice
actually exist. See Aukerman, 960 F.2d at 1038. To date, however, Plaintiff has come forward
with no evidence to rebut the presumption. Accordingly, the presumption must stand.
CONCLUSION
For the foregoing reasons, Defendants respectfully request that the Court grant summary
judgment that all asserted claims are not infringed and are invalid as anticipated by Bowman and
Culliss. In the alternative, Defendants respectfully request that the Court grant summary
judgment that Plaintiff’s pre-suit damages are barred by laches.
DATED: September 12, 2012
/s/ Stephen E. Noona
Stephen E. Noona
Virginia State Bar No. 25367
KAUFMAN & CANOLES, P.C.
150 West Main Street, Suite 2100
Norfolk, VA 23510
Telephone: (757) 624-3000
Facsimile: (757) 624-3169
senoona@kaufcan.com
David Bilsker
David A. Perlson
QUINN EMANUEL URQUHART &
SULLIVAN, LLP
50 California Street, 22nd Floor
San Francisco, California 94111
Telephone: (415) 875-6600
Facsimile: (415) 875-6700
davidbilsker@quinnemanuel.com
davidperlson@quinnemanuel.com
Counsel for Google Inc., Target Corporation,
IAC Search & Media, Inc., and
Gannett Co., Inc.
By: /s/ Stephen E. Noona
Stephen E. Noona
Virginia State Bar No. 25367
KAUFMAN & CANOLES, P.C.
150 W. Main Street, Suite 2100
Norfolk, VA 23510
Telephone: (757) 624-3000
Facsimile: (757) 624-3169
Robert L. Burns
FINNEGAN, HENDERSON, FARABOW, GARRETT &
DUNNER, LLP
Two Freedom Square
11955 Freedom Drive
Reston, VA 20190
Telephone: (571) 203-2700
Facsimile: (202) 408-4400
Cortney S. Alexander
FINNEGAN, HENDERSON, FARABOW, GARRETT &
DUNNER, LLP
3500 SunTrust Plaza
303 Peachtree Street, NE
Atlanta, GA 30308
Telephone: (404) 653-6400
Facsimile: (404) 653-6444
Counsel for Defendant AOL, Inc.
CERTIFICATE OF SERVICE
I hereby certify that on September 12, 2012, I will electronically file the foregoing with
the Clerk of Court using the CM/ECF system, which will send a notification of such filing (NEF)
to the following:
Jeffrey K. Sherwood
Kenneth W. Brothers
DICKSTEIN SHAPIRO LLP
1825 Eye Street NW
Washington, DC 20006
Telephone: (202) 420-2200
Facsimile: (202) 420-2201
sherwoodj@dicksteinshapiro.com
brothersk@dicksteinshapiro.com
Donald C. Schultz (also served by hand delivery on 9/12/12)
W. Ryan Snow
Steven Stancliff
CRENSHAW, WARE & MARTIN, P.L.C.
150 West Main Street, Suite 1500
Norfolk, VA 23510
Telephone: (757) 623-3000
Facsimile: (757) 623-5735
dschultz@cwm-law.com
wrsnow@cwm-law.com
sstancliff@cwm-law.com
Counsel for Plaintiff, I/P Engine, Inc.
/s/ Stephen E. Noona
Stephen E. Noona
Virginia State Bar No. 25367
KAUFMAN & CANOLES, P.C.
150 West Main Street, Suite 2100
Norfolk, VA 23510
Telephone: (757) 624-3000
Facsimile: (757) 624-3169
senoona@kaufcan.com