I/P Engine, Inc. v. AOL, Inc. et al

Filing 358

Memorandum in Support re 357 MOTION in Limine Plaintiff I/P Engine's Daubert Motion, and Fourth Motion in Limine, to Exclude Lyle Ungar's New Theory of Invalidity and Opinions Regarding Claim Construction filed by I/P Engine, Inc.. (Attachments: # 1 Exhibit 1 (Redacted), # 2 Exhibit 2, # 3 Proposed Order)(Sherwood, Jeffrey)

UNITED STATES DISTRICT COURT
EASTERN DISTRICT OF VIRGINIA
NORFOLK DIVISION

I/P ENGINE, INC.,

          Plaintiff,
                                                            Civ. Action No. 2:11-cv-512
v.

AOL, INC. et al.,

          Defendants.

EXHIBIT 1
FILED UNDER SEAL


IN THE UNITED STATES DISTRICT COURT
FOR THE EASTERN DISTRICT OF VIRGINIA

I/P ENGINE, INC.,

          Plaintiff,
                                                            C.A. No. 2:11-cv-512-RAJ
v.
                                                            JURY TRIAL DEMANDED
AOL, INC., GOOGLE INC., IAC SEARCH &
MEDIA, INC., TARGET CORP., and
GANNETT CO., INC.,

          Defendants.

REPORT OF DEFENDANTS’ EXPERT LYLE H. UNGAR, PH.D., CONCERNING INVALIDITY OF CLAIMS 10, 14, 15, 25, 27, AND 28 OF U.S. PATENT NO. 6,314,420 AND CLAIMS 1, 5, 6, 21, 22, 26, 28, AND 38 OF U.S. PATENT NO. 6,775,664

* * *

shows a truncated portion of the corresponding article’s content, so the user can evaluate whether he wants to select and view the full article. (Id. at 4:26-36). These articles are presented to the user in the order dictated by their combined key term scores. (Id. at 5:2-10).

125. When a user selects an article whose squib is presented to him, the key term scores for that article corresponding to terms in the user’s query are increased. (Id. at 4:37-49). This is because the user, by selecting the article in response to his query, has implicitly endorsed the idea that these key terms from the query are appropriately matched to the article. (See id.)

126. The next user who enters the same query would thus see a different rank of articles, based on the new key term scores that reflect the input of the prior user. (See id. at 4:66-5:1). Accordingly, the article ranking in Culliss is based on a combination of article content and feedback from prior users who entered the same query, because both factors (article content and user feedback) are used to calculate the key term scores that determine the article ranking.

2. Culliss anticipates claim 10 of the ‘420 Patent

127. As noted above, I/P Engine’s infringement contentions assert that the preamble of claim 10 is not a limitation. To the extent that the preamble is considered a limitation here, Culliss recites a “search engine system” as recited by the preamble to claim 10. Specifically, Culliss’s disclosed computer system accepts a search query from a user and returns a set of search results, which is the hallmark of a search engine. (See Culliss at 4:10-26 (explaining that Culliss’ system accepts a search query from a user and returns squibs of articles that match the query.))

(a) A system for scanning a network to make a demand search for informons relevant to a query from an individual user

128. As noted with respect to the Bowman reference, I/P Engine takes the position that ‘420 claim 10(a) is satisfied if a system conducts a search for information in response to a user query, including looking for advertisements stored in a distributed database. Culliss meets this element under I/P Engine’s interpretation because Culliss’ system accepts a search query from a user and returns a set of search results in response. (See Culliss at 4:10-26).
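As context for the elements discussed below, the ranking mechanism summarized in paragraphs 125 and 126 can be restated as a short sketch. The Python below is illustrative only — the names initial_key_term_scores, rank_articles, and articles are assumed, not drawn from Culliss — and shows the content side of the mechanism: key term scores initialized from term frequency, and articles ranked by their aggregate scores for the query terms.

    # Illustrative sketch only (not Culliss's code): content-based ranking by
    # aggregate key term scores. All names used here are assumed for illustration.

    def initial_key_term_scores(article_text):
        """Set each term's key term score to its frequency in the article's content."""
        scores = {}
        for term in article_text.lower().split():
            scores[term] = scores.get(term, 0) + 1
        return scores

    def rank_articles(query, articles):
        """Return (article_id, aggregate key term score) pairs, highest score first."""
        query_terms = query.lower().split()
        ranked = []
        for article_id, scores in articles.items():
            aggregate = sum(scores.get(term, 0) for term in query_terms)
            if aggregate > 0:                  # the article matches the query
                ranked.append((article_id, aggregate))
        ranked.sort(key=lambda pair: pair[1], reverse=True)
        return ranked                          # squibs would be displayed in this order

Under this sketch the ranking is purely content-based until user selections adjust the stored scores; that feedback adjustment is the subject of elements (c) and (d) below.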
(b) a content-based filter system for receiving the informons from the scanning system and for filtering the informons on the basis of applicable content profile data for relevance to the query

129. Culliss discloses this element. Specifically, Culliss uses search results’ aggregate key term scores to rank these search results for relevance to the query. (Id. at 5:2-10). The key term scores are calculated in part by analyzing each search result’s content profile to determine how many times each of the key terms from the query appear in the search result. (See id. at 14:35-36 (“the [key term] scores can be initially set to correspond with the frequency of the term occurrence in the article.”)).19 I also note that, under I/P Engine’s infringement allegations, ranking a set of search results is sufficient to meet the “filter” limitation even if no candidate search results are excluded altogether. See, e.g., 7/2/12 Infringement Contentions for Google at 11. Accordingly, Culliss discloses this element.

(c) a feedback system for receiving collaborative feedback data from system users relative to informons considered by such users

130. Culliss discloses a feedback system for receiving collaborative feedback data from system users relative to informons considered by such users. Specifically, Culliss’s feedback system records which search results were selected by users who entered a given query. Culliss then raises the key term scores for terms in the selected search results that match terms in the query. (See id. at 4:37-49).

________________
19 Alternatively, if “content profile data” were understood to require a more elaborate or thorough mapping of the informon’s content, then this element would be obvious over Culliss in view of Rose. See fn. 16, supra.

(d) The filter system combining pertaining feedback data from the feedback system with the content profile data in filtering each informon for relevance to the query

131. Culliss discloses this element. Specifically, Culliss ranks search results for relevance to a query by calculating their aggregate key term scores for the terms in that query (id. at 5:2-10), and each key term score is based on a combination of feedback data and content data. For example, a key term score for a search result may be initially determined by the content of the search result – namely, how many times the key term appears in the search result’s content. (See id. at 14:34-36). This key term score may then be altered based on feedback from other users. If users who had entered the same query had selected that search result, then the key term scores would rise for each of the key terms in that search result that match terms from the query. (See id. at 4:37-49).

132. The example cited in Section V.B.2, supra, provides a concrete illustration of how Culliss combines content data with feedback data to rank search results for relevance to a query. Namely, two articles about museum-viewing Paris vacations (“Article 1” and “Article 2”) might be given different key term scores for the terms “Paris,” “museum,” and “vacations” based on how often each of these terms appeared in Article 1’s and Article 2’s content. Thus, if “Paris” appeared five times in Article 1, then Article 1 would have a key term score of 5 for “Paris;” if “museum” appeared three times in Article 1, then Article 1 would have a key term score of 3 for “museum,” etc.

133. A user who enters the query “Paris museum vacations” would be presented with squibs of Article 1 and Article 2, and could select one or both of them. If the user selects Article 1, then Article 1’s key term scores for “Paris,” “museum,” and “vacations” would rise. The next user who enters the same query might see Article 1 listed in a higher ranked position because Article 1’s key term scores had risen based on feedback from the first user.
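The Article 1 / Article 2 example in paragraphs 132 and 133 can be worked through numerically as a sketch. Article 1’s scores of 5 for “Paris” and 3 for “museum” come from paragraph 132; every other figure, and the one-point bump applied per selection, is assumed purely for illustration.

    # Worked illustration of the Article 1 / Article 2 example above.
    # Only Article 1's "paris" (5) and "museum" (3) scores come from the example;
    # the remaining figures and the size of the selection bump are assumed.

    key_term_scores = {
        "article_1": {"paris": 5, "museum": 3, "vacations": 2},
        "article_2": {"paris": 4, "museum": 1, "vacations": 6},
    }
    query_terms = ["paris", "museum", "vacations"]

    def aggregate(article):
        return sum(key_term_scores[article].get(t, 0) for t in query_terms)

    # Initial, content-based ranking: Article 2 (11) is listed ahead of Article 1 (10).
    print(aggregate("article_1"), aggregate("article_2"))    # 10 11

    # The first user selects Article 1, so its scores for the matching query terms rise.
    for t in query_terms:
        if t in key_term_scores["article_1"]:
            key_term_scores["article_1"][t] += 1

    # The next user who enters the same query sees Article 1 ranked higher (13 vs. 11),
    # because its scores now combine content data with the first user's feedback.
    print(aggregate("article_1"), aggregate("article_2"))    # 13 11

The point of the sketch is only that a single stored score carries both a content-derived component and a feedback-derived component, which is the combination described in paragraph 134 below.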
134. In this way, the key term scores that determine Article 1’s relevance ranking for the query “Paris museum vacations” are determined by a combination of content profile data (how often these terms appeared in Article 1’s content) and feedback data (how often other users who entered the same search query had selected Article 1).

3. Culliss anticipates claims 14 and 15 of the ‘420 Patent

135. Claim 14 depends from claim 10 and further requires “wherein the collaborative feedback data comprises passive feedback data.” Claim 15 depends from claim 14 and further requires “wherein the passive feedback data is obtained by passively monitoring the actual response to a proposed informon.” Culliss meets both these limitations because Culliss’s feedback data is derived from passively monitoring users’ actual response to search results – namely, monitoring how frequently users who had entered the same query selected each of those search results. (See id. at Abstract (“As users enter search queries and select articles, the scores are altered”); 3:3-4 (same)). Specifically, the system passively monitors whether the user performs such selection actions as “opening, retrieving, reading, viewing, listening to or otherwise closely inspecting the article.” (Id. at 4:32-34).

4. Culliss anticipates claims 25, 27, and 28 of the ‘420 Patent

136. Claims 25, 27, and 28 contain the same substance as claims 10, 14, and 15, respectively, but are simply recast as method rather than system claims. Thus, Culliss anticipates claims 25, 27, and 28 for the same reasons that it anticipates claims 10, 14, and 15. I incorporate by reference my prior discussion about how Culliss anticipates claims 10, 14, and 15. I also incorporate by reference the claim chart, attached as Exhibit A-6 to this Report, showing how Culliss anticipates these claims.

5. Culliss anticipates claim 1 of the ‘664 Patent

137. As noted above, I/P Engine’s infringement contentions assert that the preamble of claim 1 is not a limitation. To the extent that the preamble is considered a limitation here, Culliss recites “a search system” as required by the preamble to ‘664 claim 1. Specifically, Culliss’s disclosed computer system accepts a search query from a user and returns a set of search results, which qualifies this system as a search system. (See Culliss at 4:10-26).

(a) a scanning system for searching for information relevant to a query associated with a first user in a plurality of users

138. The Court has construed “a scanning system” as “a system used to search for information.” Thus construed, Culliss’s disclosed system meets this claim element because it searches for information relevant to a query associated with a first user. (See id.). Furthermore, Culliss is intended for use by a plurality of users, as evidenced by the fact that the system records the collective preferences of multiple users. (See id. at Abstract, 4:37-49). However, because Culliss searches for results to a query submitted by a particular user, it meets the “first user in a plurality of users” aspect of this claim element.
(b) a feedback system for receiving information found to be relevant to the query by other users

139. Culliss discloses a feedback system for receiving information found to be relevant to the query by other users. As previously noted, Culliss records which search results were selected by users who entered a given query and raises the key term scores for terms in the selected search results that match terms in the query. (See id. at 4:37-49). Thus, Culliss contains a feedback system that receives information found to be relevant to the query by other users – i.e., it receives feedback about which search results were selected most often by other users who had entered the same query (or a query containing some of the same terms).

(c) a content-based filter system for combining the information from the feedback system with the information from the scanning system and for filtering the combined information for relevance to at least one of the query and the first user

140. Culliss discloses this element. Culliss discloses a content-based filter system because Culliss’ system ranks search results for relevance to the query in part by examining the search results’ content – i.e., examining how often the terms from the query appear as key terms in each search result’s content. (Id. at 14:34-36). I also note that, under I/P Engine’s infringement allegations, ranking a set of search results is sufficient to meet the “filter” limitation even if no candidate search results are excluded altogether. See, e.g., 7/2/12 Infringement Contentions for Google at 11.

141. Culliss’ content-based filter system also operates by combining the search results from the scanning system with the feedback information from the feedback system. This is because Culliss’ content-based key term scores – which, of course, are associated with the search results from the scanning system – are adjusted based on the feedback information from other users. If users who had entered the same query had selected a given search result, then the key term scores will rise for each of the key terms in that search result that match terms from the query. (See id. at 4:37-49).

6. Culliss anticipates claim 5 of the ‘664 Patent

142. Claim 5 depends from claim 1 and further requires “wherein the filtered information is an advertisement.” Culliss meets this element, because Culliss explicitly states that the articles which are filtered may be advertisements. (See id. at 9:56-62 (“The invention may allow a user to enter one or more category key terms in formulating a search. For example, the user may enter the category key terms ‘Apartments’ and ‘Los Angeles’ or the category key terms ‘Romantic’ and ‘Comedy’ to find articles (i.e., advertisements or movies) which fall under two or more category key terms.”) (emphasis added)).

7. Culliss anticipates claim 6 of the ‘664 Patent

143. Claim 6 depends from claim 1 and further requires “an information delivery system for delivering the filtered information to the first user.” Culliss discloses this element, as it recites that the search engine displays squibs of the search results to the user. (See id. at 4:25-31 (“As shown in FIG. 1 at 20, the search engine will then display a squib of each of the matched articles . . . the user can then scroll through the squibs of the articles and select a desired one”)).
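The category key term search quoted in paragraph 142 above (Culliss at 9:56-62) — entering two or more category key terms, such as “Apartments” and “Los Angeles,” to find articles (including advertisements) that fall under all of them — can be sketched as follows. The catalog contents and the helper name find_articles are hypothetical and are not drawn from Culliss.

    # Illustrative sketch of the category key term search quoted in paragraph 142.
    # The catalog data and the helper name are hypothetical.

    category_key_terms = {
        "ad_101": {"apartments", "los angeles"},      # an advertisement
        "ad_102": {"apartments", "san diego"},        # an advertisement
        "movie_7": {"romantic", "comedy"},            # a movie listing
    }

    def find_articles(requested_categories, catalog):
        """Return articles that fall under every requested category key term."""
        wanted = {c.lower() for c in requested_categories}
        return [article for article, cats in catalog.items() if wanted <= cats]

    print(find_articles(["Apartments", "Los Angeles"], category_key_terms))   # ['ad_101']
    print(find_articles(["Romantic", "Comedy"], category_key_terms))          # ['movie_7']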
8. Culliss anticipates claim 21 of the ‘664 Patent

144. Claim 21 depends from claim 1 and further recites “wherein the content-based filter system filters by extracting features from the information.” Culliss discloses this element. As discussed above, Culliss extracts words from the content of each search result in order to determine how often the words from the query are found in these search results. (See id. at 14:34-36).

9. Culliss anticipates claim 22 of the ‘664 Patent

145. Claim 22 depends from claim 21 and further recites “wherein the extracted features comprise content data indicative of the relevance to the at least one of the query and the user.” Culliss discloses this element, because the words that Culliss extracts from a search result’s content indicate how relevant the search result is to the query. (See id. at 14:34-36).

10. Culliss anticipates claim 26 of the ‘664 Patent

146. Claim 26 contains essentially the same elements as claim 1, but is recast as a method rather than system claim. For example, where claim 1 requires “a scanning system for searching for information relevant to a query associated with a first user in a plurality of users,” claim 26 simply requires “searching for information relevant to a query associated with a first user in a plurality of users.” Where claim 1 requires “a feedback system for receiving information found to be relevant to the query by other users,” claim 26 simply requires “receiving information found to be relevant to the query by other users.” Thus, Culliss anticipates claim 26 for the same reasons that it anticipates claim 1. I incorporate by reference my prior discussion about how Culliss anticipates claim 1, as well as the claim chart attached hereto as Exhibit A-6.

11. Culliss anticipates claim 28 of the ‘664 Patent

147. Claim 28 depends from claim 26 and further recites “the step of delivering the filtered information to the first user.” As discussed with respect to claim 6, supra, Culliss discloses this element. (See id. at 4:25-31 (“As shown in FIG. 1 at 20, the search engine will then display a squib of each of the matched articles . . . the user can then scroll through the squibs of the articles and select a desired one”)).

12. Culliss anticipates claim 38 of the ‘664 Patent

148. Claim 38 depends from claim 26 and further recites “wherein the searching step comprises scanning a network in response to a demand search for the information relevant to the query associated with the first user.” As noted above, “scanning a network” has been construed simply as looking for or examining items in a network, and “demand search” has been construed as a single search engine query performed upon a user request. Furthermore, I/P Engine has taken the position that “scanning a network” is satisfied by looking for advertisements stored on a distributed database. (See, e.g., 7/2/2012 Infringement Contentions for Google at 6-7). Under I/P Engine’s interpretation, Culliss meets this element because Culliss conducts a search for information in response to a user query. (See Culliss at 4:10-26).

C. Lashkari anticipates claims 10 and 25 of the ‘420 Patent and claims 1, 6, 21, 22, 26, 28, and 38 of the ‘664 Patent

1. Background on Lashkari

149. Lashkari discloses a general search engine system that utilizes both feature extraction and automated collaborative filtering in order to predict webpage ratings.
(Lashkari at


