I/P Engine, Inc. v. AOL, Inc. et al

Filing 427

Opposition to 237 MOTION for Summary Judgment Defendants AOL Inc., Google Inc., IAC Search & Media, Inc., Gannett Company, Inc., and Target Corporation's Motion for Summary Judgment filed by I/P Engine, Inc.. (Attachments: # 1 Exhibit 1, # 2 Exhibit 2-8 (Filed Under Seal), # 3 Exhibit 9, # 4 Exhibit 10, # 5 Exhibit 11-19 (Filed Under Seal), # 6 Exhibit 20, # 7 Exhibit 21, # 8 Exhibit 22-24 (Filed Under Seal), # 9 Exhibit 25, # 10 Exhibit 26, # 11 Exhibit 27-32 (Filed Under Seal), # 12 Exhibit 33, # 13 Exhibit 34-39 (Filed Under Seal), # 14 Exhibit 40, # 15 Exhibit 41, # 16 Exhibit 42, # 17 Exhibit 43, # 18 Exhibit 44, # 19 Exhibit 45 (Filed Under Seal), # 20 Exhibit 46, # 21 Exhibit 47, # 22 Exhibit 48, # 23 Exhibit 49, # 24 Exhibit 50, # 25 Exhibit 51, # 26 Exhibit 52, # 27 Exhibit 53-54 (Filed Under Seal), # 28 Exhibit 55, # 29 Exhibit 56 (Filed Under Seal))(Sherwood, Jeffrey)

Exhibit 52

IN THE UNITED STATES DISTRICT COURT
FOR THE EASTERN DISTRICT OF VIRGINIA

I/P ENGINE, INC.,
                              Plaintiff,
          v.
AOL, INC., GOOGLE INC., IAC SEARCH & MEDIA, INC., TARGET CORP., and GANNETT CO., INC.,
                              Defendants.

C.A. No. 2:11-cv-512-RAJ
JURY TRIAL DEMANDED

REPORT OF DEFENDANTS’ EXPERT LYLE H. UNGAR, PH.D., CONCERNING INVALIDITY OF CLAIMS 10, 14, 15, 25, 27, AND 28 OF U.S. PATENT NO. 6,314,420 AND CLAIMS 1, 5, 6, 21, 22, 26, 28, AND 38 OF U.S. PATENT NO. 6,775,664

66. Culliss discloses that the articles are presented to the user in the order dictated by their combined key term scores. (Id. at 5:2-10). For example, if Article 1 had a key term score of 5 for “Paris,” 3 for “museum,” and 2 for “vacations,” its aggregate score for the query “Paris museum vacations” would be 10 (5 + 3 + 2). If Article 2 had a key term score of 4 for “Paris,” 2 for “museum,” and 3 for “vacations,” its aggregate score for the query “Paris museum vacations” would be 9 (4 + 2 + 3). Thus, Article 1 would be presented above Article 2 because it had a higher aggregate score.

67. When a user selects an article whose squib is presented to him, the key term scores for that article which correspond to the terms in the user’s query are increased. (Id. at 4:37-49). This is because the user, by selecting the article in response to his query, has implicitly endorsed the idea that these key terms from the query are appropriately matched to the article. (See id.)

68. For example, if our hypothetical first user who queried “Paris museum vacations” selected Article 2, then Article 2’s key term scores for “Paris,” “museum,” and “vacations” might each rise by +1. (See id. at 4:43-45 (“To alter the key term scores, a positive score such as (+1) can be added to the key term scores, for example . . .”).) The next user who enters the same query would thus see a different ranking of articles, based on the new key term scores that reflect the input of the prior user. (See id. at 4:66-5:1). Sticking with our example, Article 2 would have a new aggregate score of 12 (instead of 9) after the first user selected it, because its key term scores for “Paris,” “museum,” and “vacations” each increased by +1 when the first user selected it. Thus, a later user who queries “Paris museum vacations” would see Article 2 (which has a new aggregate score of 12) presented above Article 1 (which still has its old aggregate score of 10).
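[Editor's illustration] To make the arithmetic in paragraphs 66-68 concrete, the following minimal Python sketch models a Culliss-style key term score table: articles are ranked by the sum of their key term scores for the query terms, and a selected article's scores for those terms are each incremented by +1. The article names, scores, query, and +1 increment mirror the report's hypothetical; the function names and data structures are assumptions for illustration, not Culliss's actual implementation.

```python
from collections import defaultdict

# Key term scores from the report's hypothetical (paragraph 66).
key_term_scores = {
    "Article 1": defaultdict(int, {"paris": 5, "museum": 3, "vacations": 2}),
    "Article 2": defaultdict(int, {"paris": 4, "museum": 2, "vacations": 3}),
}

def aggregate_score(article, query_terms):
    """Sum the article's key term scores over the terms in the query."""
    return sum(key_term_scores[article][t] for t in query_terms)

def rank(query):
    """Present articles in descending order of aggregate score."""
    terms = query.lower().split()
    return sorted(key_term_scores, key=lambda a: aggregate_score(a, terms), reverse=True)

def record_selection(article, query):
    """On selection, add +1 to the article's score for each query term (paragraph 68)."""
    for t in query.lower().split():
        key_term_scores[article][t] += 1

query = "Paris museum vacations"
print(rank(query))               # ['Article 1', 'Article 2']  (aggregate scores 10 vs. 9)
record_selection("Article 2", query)
print(rank(query))               # ['Article 2', 'Article 1']  (aggregate scores 12 vs. 10)
```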
item’s content, feature extraction would be a predictable element of a content-based filter system.

246. Delivering filtered information to users: As noted above, even the most sophisticated information-filtering system has little utility if the filtered information cannot be somehow delivered to the users who have a need for it. Accordingly, it would be predictable for any information-filtering method to include the element of delivering filtered information to users.

247. Filtering advertisements: There are no technical or conceptual difficulties in filtering advertisements as opposed to filtering other types of digital media. Thus, it would be predictable for an information filtering system to filter advertisements. Indeed, this would allow such a system to be used in the lucrative market for computerized advertising services – a market that was well-established by the asserted patents’ priority date of December 1998. Accordingly, it is unsurprising that many of the prior art references discussed in this Report specifically disclose filtering advertisements. (See Culliss at 9:56-62; Bowman at 5:4, 9:2-3; Ryan at 4:57-67).

D. Claims 10, 14, 15, 25, 27, and 28 of the ‘420 Patent and claim 5 of the ‘664 Patent are obvious over Rose in view of Bowman

1. Claim 10 of the ‘420 Patent is obvious over Rose in view of Bowman

248. As noted above, I/P Engine’s infringement contentions assert that the preamble of claim 10 is not a limitation. To the extent that the preamble is considered a limitation here, Rose recites a “search engine system,” as recited by the preamble to claim 10. Specifically, Rose discloses filtering “search results obtained through an online text retrieval service.” (Id. at 2:54-55) (emphasis added). Similarly, Rose discloses that its information access system may comprise “an electronic search and retrieval system.” (Claim 26).

(a) a system for scanning a network to make a demand search for informons relevant to a query from an individual user

249. As noted above, I/P Engine takes the position that ‘420 claim 10(a) is satisfied if a system conducts a search for information in response to a user query. Rose meets this element, because Rose’s system accepts a search query from a user and returns a set of search results in response. (See id. at 2:54-55; claim 26).

(b) a content-based filter system for receiving the informons from the scanning system and for filtering the informons on the basis of applicable content profile data for relevance to the query

250. Rose discloses a content-based filter system that filters informons on the basis of applicable content profile data. Specifically, the search results in Rose’s system are filtered by comparing a vector representing a document’s content to a vector representing the user’s preferences. (Id. at 6:11-58). The closer the vectors are to each other, the more relevant the document is judged to be for the user. (Id. at 6:56-58).

251. While Rose’s comparison of a document vector to a user vector filters for relevance to the user rather than relevance to the query, it would be obvious to modify Rose so that Rose filtered for relevance to the query. As Rose discloses, any sort of information content can be described by a vector. (Id. at 6:26-35). Thus, while Rose compares a search result vector against a vector representing the user profile, Rose’s method could just as easily be used to compare a search result vector against a vector representing the user’s query. Moreover, one of skill in the art would be motivated to modify Rose in this manner. As discussed above, there are a limited number of ways to filter content-based information: chiefly, one could filter for relevance to a user or to a user’s query. Furthermore, other references such as Bowman already teach the utility of filtering search results for relevance to a query. See Section VI.A, supra (explaining how Bowman combines content-based and feedback-based methods to filter search results for relevance to a query).
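[Editor's illustration] The vector comparison described in paragraphs 250-251 can be sketched in Python as follows. Cosine similarity is used here only as a familiar measure of vector closeness; whether Rose uses this particular measure is not stated in the excerpt, and the document vectors, user profile, and query weights below are invented for the example.

```python
import math

def cosine_similarity(a, b):
    """Closeness of two term-weight vectors; higher means judged more relevant."""
    terms = set(a) | set(b)
    dot = sum(a.get(t, 0.0) * b.get(t, 0.0) for t in terms)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical term-weight vectors for two search results.
documents = {
    "doc1": {"paris": 0.8, "museum": 0.5, "hotels": 0.2},
    "doc2": {"paris": 0.3, "stocks": 0.9},
}

user_profile = {"museum": 0.7, "paris": 0.6}   # Rose: filter for relevance to the user
query_vector = {"paris": 1.0, "museum": 1.0}   # Proposed modification: relevance to the query

def filter_by(reference, docs):
    """Rank documents by closeness of their vectors to the reference vector."""
    return sorted(docs, key=lambda d: cosine_similarity(docs[d], reference), reverse=True)

print(filter_by(user_profile, documents))   # ['doc1', 'doc2'] for this user profile
print(filter_by(query_vector, documents))   # ['doc1', 'doc2'] for this query vector
```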
252. One of skill in the art would be motivated to combine Bowman with Rose, such that Rose’s system would filter information for relevance to the query. Rose and Bowman are both directed to the problem of efficiently filtering large amounts of information. Moreover, they both solve this problem in similar ways – namely, by combining content data with feedback data to create a hybrid content/feedback filtering method. Because Rose and Bowman propose similar approaches to solving the same problem, one of skill in the art would be strongly motivated to combine their teachings.

(c) a feedback system for receiving collaborative feedback data from system users relative to informons considered by such users

253. Rose discloses a feedback system for receiving collaborative feedback data from system users relative to informons considered by such users. Specifically, Rose receives feedback from system users about how highly they rated search results presented to them. (See Rose at 5:31-46). For each user, Rose uses the feedback from users most similar to that user to determine how relevant a given search result will be for that user. (See id. at 6:59-7:10). In other words, Rose judges that a search result will be deemed more relevant to a given user if other users who are similar to that user had rated that search result highly.

(d) the filter system combining pertaining feedback data from the feedback system with the content profile data in filtering each informon for relevance to the query

254. Rose combines the feedback data with the content profile data to filter each document for relevance. See Rose at Abstract (“Items of information to be presented to a user are ranked according to their likely degree of relevance to that user and displayed in order of ranking. The prediction of relevance is carried out by combining data pertaining to the content of each item of information with other data regarding correlations of interests between users.”) (emphasis added); see also id. at 7:35-50.

255. While Rose uses this hybrid filtering method to filter documents for relevance to the user, it would be obvious to modify Rose so that it filtered for relevance to the query. This could be done simply by comparing each document vector to a query vector instead of to a user vector, and by recording feedback from the subset of other users who had entered the same search query (instead of recording feedback from all users). There would be no technical difficulties in modifying Rose in this manner. Moreover, one of skill in the art would be motivated to modify Rose in this manner, given the limited number of ways to do relevance filtering and the fact that other references such as Bowman already teach the utility of filtering search results for relevance to a query. See also Section VII.D.1(b), supra (explaining why one of skill in the art would be motivated to combine Rose’s and Bowman’s teachings).
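[Editor's illustration] A minimal Python sketch of the hybrid content/feedback combination discussed in paragraphs 254-255 follows. The weighted average, the 0-1 rating scale, and the per-query feedback layout are assumptions chosen for illustration; they are not Rose’s or Bowman’s actual formulas.

```python
from statistics import mean

def combined_relevance(content_score, feedback_ratings, content_weight=0.5):
    """Blend a 0-1 content-based score with the average of 0-1 feedback ratings."""
    feedback_score = mean(feedback_ratings) if feedback_ratings else 0.0
    return content_weight * content_score + (1.0 - content_weight) * feedback_score

# Feedback recorded only from the subset of users who issued the same query,
# as in the modification described in paragraph 255.
feedback_for_query = {
    "doc1": [1.0, 0.8, 0.9],   # prior users who ran this query rated doc1 highly
    "doc2": [0.2, 0.1],
}
content_scores = {"doc1": 0.7, "doc2": 0.9}   # hypothetical content-based scores

ranked = sorted(
    content_scores,
    key=lambda d: combined_relevance(content_scores[d], feedback_for_query.get(d, [])),
    reverse=True,
)
print(ranked)  # ['doc1', 'doc2'] -- doc1 outranks doc2 once feedback is blended with content
```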
2. Claims 14 and 15 of the ‘420 Patent are obvious over Rose in view of Bowman

256. Claim 14 depends from claim 10 and further requires “wherein the collaborative feedback data comprises passive feedback data.” Claim 15 depends from claim 14 and further requires “wherein the passive feedback data is obtained by passively monitoring the actual response to a proposed informon.” Rose discloses a feedback system where users actively state their level of interest in each document that they view. (See Rose at 5:8-30). However, Bowman teaches passive feedback. Bowman’s feedback data is derived from passively monitoring users’ actual response to search results – namely, monitoring how frequently users who had entered the same query selected each of those search results. (See Bowman at 2:31-35).

257. It would have been obvious to modify Rose so that it utilized passive feedback in the manner that Bowman does. There are only two basic types of user feedback that can be collected – active feedback and passive feedback. Moreover, one of skill in the art would
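[Editor's illustration] The passive feedback described in paragraphs 256-257 can be sketched as a simple per-query click counter in Python: rather than asking users to rate results (active feedback), the system records how often users who entered a given query selected each result. The data structures and function names below are illustrative assumptions, not Bowman’s actual implementation.

```python
from collections import Counter, defaultdict

# selection_counts[query][result] = number of times users who issued `query`
# clicked through to `result`.
selection_counts = defaultdict(Counter)

def record_click(query, result):
    """Passively record that a user who issued `query` selected `result`."""
    selection_counts[query.lower()][result] += 1

def passive_feedback_rank(query, results):
    """Order candidate results by how often earlier users of the same query selected them."""
    counts = selection_counts[query.lower()]
    return sorted(results, key=lambda r: counts[r], reverse=True)

record_click("paris museum vacations", "doc2")
record_click("paris museum vacations", "doc2")
record_click("paris museum vacations", "doc1")
print(passive_feedback_rank("paris museum vacations", ["doc1", "doc2"]))
# ['doc2', 'doc1'] -- doc2 was selected more often by users who ran this query
```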


