I/P Engine, Inc. v. AOL, Inc. et al

Filing 769

Opposition to Plaintiff's (Oral) Motion for Judgment as a Matter of Law on Anticipation and Obviousness filed by AOL Inc., Gannett Company, Inc., Google Inc., IAC Search & Media, Inc., Target Corporation. (Noona, Stephen)

UNITED STATES DISTRICT COURT
EASTERN DISTRICT OF VIRGINIA
NORFOLK DIVISION

I/P ENGINE, INC., Plaintiff, v. AOL INC., et al., Defendants. Civil Action No. 2:11-cv-512

DEFENDANTS’ OPPOSITION TO PLAINTIFF’S MOTION FOR JUDGMENT AS A MATTER OF LAW ON ANTICIPATION AND OBVIOUSNESS

I. INTRODUCTION

On October 30, 2012, Plaintiff I/P Engine, Inc. (“Plaintiff”) made an oral Motion for Judgment as a Matter of Law on Defendants’ anticipation and obviousness defenses. (See Trial Tr. at 1771:12-1773:17). Contrary to Plaintiff’s arguments, there is copious record evidence that the asserted claims are anticipated by the Culliss and Bowman references and rendered obvious by the prior art. Plaintiff’s Motion for Judgment as a Matter of Law should be denied.

II. LEGAL STANDARD

Judgment as a matter of law is appropriate where a party has been fully heard on an issue and “there is no legally sufficient evidentiary basis for a reasonable jury to have found for that party with respect to that issue.” Fed. R. Civ. P. 50(a); In re Outsidewall Tire Litig., No. 1:09-cv-1217, 2010 WL 2929626, at *4 (E.D. Va. July 21, 2010). “A patent is invalid for anticipation if a single prior art reference discloses each and every limitation of the claimed invention.” Schering Corp. v. Geneva Pharms., Inc., 339 F.3d 1373, 1377 (Fed. Cir. 2003). A patent is invalid as obvious “if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains.” 35 U.S.C. § 103.

III. CULLISS ANTICIPATES EACH ASSERTED CLAIM OF THE '420 AND '664 PATENTS

As a threshold matter, Plaintiff argued in its Motion that the Culliss reference (DX-058) was before the PTO during prosecution of the Asserted Patents. (Trial Tr. at 1772:14-15).
However, prior art references which were before the PTO during prosecution can still be used to invalidate. See, e.g., Scanner Techs. Corp. v. ICOS Vision Sys. Corp. N.V., 528 F.3d 1365, 1380-82 (Fed. Cir. 2008) (affirming invalidation of representative claim on obviousness based on prior art considered by the PTO during prosecution); IPXL Holdings, L.L.C. v. Amazon.com, Inc., 430 F.3d 1377, 1381 (Fed. Cir. 2005) (“[A] patent may be found to be anticipated on the basis of a reference that had properly been before the patent examiner in the [PTO] at the time of issuance.”). Thus, the fact that Culliss was before the PTO during prosecution of the Asserted Patents does not show as a matter of law that the Asserted Patents are valid over Culliss.

Plaintiff also argues that “Dr. Ungar admitted that under his view, Culliss does not invalidate the asserted claims.” (Trial Tr. at 1772:15-17). But as Dr. Ungar made clear, he applied the claims and the Court’s constructions for invalidity purposes in the same way that Dr. Frieder did for infringement, which is fair and proper, as a patent may not be interpreted inconsistently to capture infringement but avoid the prior art. (See Trial Tr. at 1513:15-1514:8; Amazon.com, Inc. v. Barnesandnoble.com, Inc., 239 F.3d 1343, 1351 (Fed. Cir. 2001) (“A patent may not, like a ‘nose of wax,’ be twisted one way to avoid anticipation and another to find infringement.”)).

Finally, Plaintiff concludes that Culliss does not disclose the content, combining, or filtering limitations of the asserted claims. (Trial Tr. at 1772:12-14). As explained below, however, Culliss does disclose all of these limitations, and all other limitations required by the asserted claims.

A. Overview of Culliss

Culliss is directed to a search engine system that ranks and filters search results based on a combination of the content of the search results and feedback from prior users who had entered the same query and viewed those search results.
In Culliss, Internet articles are associated with key terms they contain. (DX-058 at 3:60-64.) For example, two articles about museum-viewing vacations in Paris (“Article 1” and “Article 2”) might be associated with the key terms “Paris,” “museum,” and “vacations” if they both contained those three words. (Trial Tr. at 1344:8-11). These articles are given a “key term score” for each of the key terms that they contain. (DX-058 at 3:65-66.) Culliss discloses that each key term score might initially be set at 1. (Id. at 3:10-4:9.) Thus, in the above example, Article 1 would have a key term score of 1 for each of “Paris,” “museum,” and “vacations,” and so would Article 2. Alternatively, Culliss discloses that the key term scores might be set to reflect how many times each of the key terms appeared in the document’s content. (See id. at 14:34-36.)

Culliss discloses that the articles are presented to the user in the order dictated by their combined key term scores. (Id. at 5:7-17.) For example, if Article 1 had a key term score of 5 for “Paris,” 3 for “museum,” and 2 for “vacations,” its aggregate score for the query “Paris museum vacations” would be 10 (5 + 3 + 2). If Article 2 had a key term score of 4 for “Paris,” 2 for “museum,” and 3 for “vacations,” its aggregate score for the query “Paris museum vacations” would be 9 (4 + 2 + 3). (Trial Tr. at 1344:17-22). Thus, Article 1 would be presented above Article 2 because it had a higher aggregate score. (Id. at 1344:24-1345:1).

When a user selects an article whose squib is presented to him, the key term scores for that article which correspond to the terms in the user’s query are increased. (DX-058 at 4:37-49.) This is because the user, by selecting the article in response to his query, has implicitly indicated that these key terms from the query are appropriately matched to the article. (See id.)
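The key term scoring and click-feedback mechanism described above can be sketched in a few lines of code. This is an illustrative reconstruction for the reader's benefit, not code from the Culliss reference itself; the function and variable names are invented, and it uses the brief's "Paris museum vacations" example scores.

```python
# Illustrative sketch of Culliss-style scoring (names are hypothetical):
# an article carries a per-key-term score, a query's aggregate score is the
# sum of the article's scores for the query terms, and selecting an article
# raises its score for each matching query term by +1.

def aggregate_score(key_term_scores, query_terms):
    """Sum an article's key term scores over the terms in the query."""
    return sum(key_term_scores.get(term, 0) for term in query_terms)

def record_selection(key_term_scores, query_terms):
    """Raise the selected article's score for each term in the user's query."""
    for term in query_terms:
        if term in key_term_scores:
            key_term_scores[term] += 1

query = ["paris", "museum", "vacations"]
article1 = {"paris": 5, "museum": 3, "vacations": 2}  # aggregate 10
article2 = {"paris": 4, "museum": 2, "vacations": 3}  # aggregate 9

record_selection(article2, query)  # the first user selects Article 2
# Article 2's aggregate rises from 9 to 12, so it now ranks above Article 1
ranked = sorted([("Article 1", article1), ("Article 2", article2)],
                key=lambda a: aggregate_score(a[1], query), reverse=True)
print([name for name, _ in ranked])  # ['Article 2', 'Article 1']
```

On these assumed mechanics, the single click flips the ranking exactly as the example in the text describes: Article 2 moves from an aggregate of 9 to 12 and is presented above Article 1.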
For example, if a hypothetical first user who queried “Paris museum vacations” selected Article 2, then Article 2’s key term scores for “Paris,” “museum,” and “vacations” might each rise by +1. (See id. at 4:43-45; Trial Tr. at 1345:15-21). The next user who enters the same query would thus see a different ranking of articles, based on the new key term scores that reflect the input of the prior user. (See DX-058 at 4:66-5:1.) Sticking with the same example, Article 2 would have a new aggregate score of 12 (instead of 9) after the first user selected it, because its key term scores for “Paris,” “museum,” and “vacations” each increased by +1 when the first user selected it. Thus, a later user who queries “Paris museum vacations” would see Article 2 (which has a new aggregate score of 12) presented above Article 1 (which still has its old aggregate score of 10). (Trial Tr. at 1345:25-1346:4).

In short, the article ranking in Culliss is based on a combination of the articles’ content and feedback from previous users who entered the same query. This is because both factors (article content and user feedback) are used to calculate the key term scores that determine the article ranking.

B. '420 Claim 10 is Anticipated by Culliss

1. Culliss discloses a “search engine system” (claim 10 preamble).

The preamble to claim 10 of the ‘420 patent describes a “search engine system.” Culliss discloses a “search engine system” because Culliss accepts a user’s search query and returns a set of search results. (See DX-058 at 4:10-26.) Culliss also discloses that its content- and feedback-based methods may be used to rank and order the search results of traditional search engines like Excite and Lycos. (See id. at 13:35-45.) Based on these plain disclosures in Culliss, Dr. Ungar opined that Culliss discloses a “search engine system” (Trial Tr. at 1346:10-22), and Plaintiff did not dispute this point in its cross-examination of Dr. Ungar or at oral argument over its Motion.

2.
Culliss discloses “a system for scanning a network to make a demand search for informons relevant to a query from an individual user” (claim 10[a]).

Claim 10[a] recites “a system for scanning a network to make a demand search for informons relevant to a query from an individual user.” The Court construed “scanning a network” as “looking for or examining items in a network” and construed “demand search” as “a single search engine query performed upon a user request.” (Dkt. 171 at 23.)

Culliss meets this element. Specifically, Culliss looks for search results (which it calls “articles”) in response to a single search engine query entered by a user. (See DX-058 at 4:10-25.) These articles are stored on the Internet, which is “an extensive network of computer systems.” (Id. at 3:45-55). Based on these disclosures, Dr. Ungar opined that Culliss discloses this element (Trial Tr. at 1346:23-1347:8), and Plaintiff did not dispute this point in its cross-examination of Dr. Ungar or at oral argument over its Motion.

3. Culliss discloses “a content-based filter system for receiving informons from the scanning system and for filtering the informons on the basis of applicable content profile data for relevance to the query” (claim 10[b]).

Claim 10[b] recites “a content-based filter system for receiving the informons from the scanning system and for filtering the informons on the basis of applicable content profile data for relevance to the query.” Culliss meets this element by: (a) giving scores to articles based partly on content analysis, and (b) using these scores to filter these articles.

(a) Culliss discloses content-based analysis

Culliss discloses content-based analysis because an article’s key term score can be initially set in Culliss by counting how often terms from the user’s query appear in the article. (DX-058 at 14:34-36; see also Trial Tr. at 1347:14-19 (opining that Culliss uses content-based analysis)).
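The content-based initial scoring just described (seeding a key term score with the number of times the term appears in the article's text) can be sketched as follows. This is a hypothetical illustration with invented names, not code from the Culliss reference.

```python
# Hypothetical sketch of content-based initial scoring: each key term score
# starts at the number of times that term appears in the article's text.
from collections import Counter

def initial_key_term_scores(article_text, key_terms):
    """Seed each key term score with the term's frequency in the article."""
    counts = Counter(article_text.lower().split())
    return {term: counts[term] for term in key_terms}

scores = initial_key_term_scores(
    "museum guide to the museum district of Paris", ["paris", "museum"])
print(scores)  # {'paris': 1, 'museum': 2}
```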
Nor did Plaintiff dispute this point in its cross-examination of Dr. Ungar.

(b) Culliss discloses filtering

Culliss also discloses “filtering” in the specific embodiment where its articles’ key terms include “rating” key terms like X-rated, G-rated, etc. (See DX-058 at 11:8-12:41.) Like the other key term scores, the rating key term scores can be initially set by content analysis (Id. at 14:23-25) and then altered based on user feedback. (Id. at 11:47-51.) And these rating key term scores can be used to filter the articles – for example, articles with an X-rated key term score above a certain threshold will be filtered out and not displayed to G-rated searchers. (Id. at 12:1-5.) As Dr. Ungar explained, excluding articles based on their X-rated scores is “filtering.” (Trial Tr. at 1347:20-1348:6). Plaintiff did not dispute this point in its cross-examination of Dr. Ungar.

Culliss also states that this specific rating embodiment can be integrated with the more traditional Culliss embodiments, so that Culliss’s articles would receive a variety of key terms, one of which is the rating key term used for filtering. (See DX-058 at 11:39-41 (“The invention, operating separately or in addition to the manner described above, would permit or require the user to enter a rating key term in the search query.”) (emphasis added).)

4. Culliss discloses “a feedback system for receiving collaborative feedback data from system users relative to informons considered by such users” (claim 10[c]).

Claim 10[c] recites “a feedback system for receiving collaborative feedback data from system users relative to informons considered by such users.” The Court construed “collaborative feedback data” as data from system users with similar interests or needs regarding what informons such users found to be relevant. (D.I. 212 at 23.)
Culliss discloses this element by recording which articles were selected by users who entered a given query and raising the key term scores for terms in the selected articles that match terms in the query. (See DX-058 at 4:37-49). Plaintiff takes the position that users have “similar interests or needs” if they entered the same query. (Trial Tr. 428:8-15). Thus, by receiving the selection choices of users whose queries contained the same terms, Culliss receives “collaborative feedback data” under Plaintiff’s own application of the construed claim, as Dr. Ungar opined. (Trial Tr. at 1351:5-19; Amazon.com, 239 F.3d at 1351.)

5. Culliss discloses “the filter system combining pertaining feedback data from the feedback system with the content profile data in filtering each informon for relevance to the query” (claim 10[d]).

Claim 10[d] recites “the filter system combining pertaining feedback data from the feedback system with the content profile data in filtering each informon for relevance to the query.” Culliss meets this element. As discussed above, Culliss ranks articles for relevance to a query by calculating the articles’ aggregate key term scores for the terms in that query (see DX-058 at 5:2-10), and each key term score is based on a combination of feedback data and content data. (See id. at 4:37-49; 14:35-36.) Based on these disclosures, Dr. Ungar opined that Culliss meets this element. (Trial Tr. at 1351:23-1352:8).

C.
'420 Claims 14 and 15 are Anticipated by Culliss

Claim 14 depends from claim 10 and further requires “wherein the collaborative feedback data comprises passive feedback data.” Claim 15 adds the further requirement that “the passive feedback data is obtained by passively monitoring the actual response to a proposed informon.” Culliss meets these limitations because Culliss’s feedback data is derived from passively monitoring users’ actual response to articles – namely, monitoring how frequently users who had entered the same query selected each of those articles. (DX-058 at 4:32-34). Based on these disclosures, Dr. Ungar opined that Culliss anticipates claims 14 and 15. (Trial Tr. at 1361:5-1362:3).

D. '420 Claims 25, 27, and 28 are Anticipated by Culliss

Claims 25, 27, and 28 contain the same substance as claims 10, 14, and 15, respectively, but are simply recast as method rather than system claims. Thus, as Dr. Ungar explained, Culliss anticipates claims 25, 27, and 28 for the same reasons that it anticipates claims 10, 14, and 15. (Trial Tr. at 1362:6-22).

E. '664 Claim 1 is Anticipated by Culliss

1. Culliss discloses “a search system” (claim 1 preamble).

Culliss describes “a search system” as recited by the preamble to claim 1 because Culliss accepts a search query from a user and returns a set of search results. (See DX-058 at 4:10-26.) Additionally, Culliss’s content- and feedback-based methods may be used to rank and order the search results of traditional search engines like Excite and Lycos. (See id. at 13:35-45.) Based on these disclosures, Dr. Ungar opined that Culliss meets the claim 1 preamble (Trial Tr. at 1363:3-13), and Plaintiff did not dispute this point in its cross-examination of Dr. Ungar or at oral argument over its Motion.

2. Culliss discloses “a scanning system for searching for information relevant to a query associated with a first user in a plurality of users” (claim 1[a]).
Claim 1[a] recites “a scanning system for searching for information relevant to a query associated with a first user in a plurality of users.” The Court construed “a scanning system” as “a system used to search for information.” (Dkt. 171 at 23.) Culliss meets this element because it searches for articles relevant to a query associated with a first user among a plurality of users. (See DX-058 at 4:10-26). As noted above, Culliss also states that its content- and feedback-based methods may be applied to traditional search systems like Excite and Lycos to rank their search results. (Id. at 13:35-45.) Based on these disclosures, Dr. Ungar opined that Culliss meets claim 1[a] (Trial Tr. at 1363:14-1364:4), and Plaintiff did not dispute this point in its cross-examination of Dr. Ungar or at oral argument over its Motion.

3. Culliss discloses “a feedback system for receiving information found to be relevant to the query by other users” (claim 1[b]).

Claim 1[b] recites “a feedback system for receiving information found to be relevant to the query by other users.” In its infringement case, Plaintiff asserted that this element is met by receiving click-through data about information items (e.g., ads) that users view. (Trial Tr. at 610:2-14). Culliss meets this element under Plaintiff’s own infringement theory because Culliss receives feedback about which articles were selected by other users, and uses this feedback to adjust the articles’ key term scores. (See DX-058 at 4:37-49). For purposes of invalidity, Plaintiff must be held to the same interpretation of the claims that it advanced for infringement. See Amazon.com, 239 F.3d at 1351. Thus, as Dr. Ungar opined, Culliss meets this element under Plaintiff’s own theory of what this element requires. (See Trial Tr. at 1364:5-1365:7).

4.
Culliss discloses “a content-based filter system for combining the information from the feedback system with the information from the scanning system and for filtering the combined information for relevance to at least one of the query and the first user” (claim 1[c]).

Claim 1[c] recites “a content-based filter system for combining the information from the feedback system with the information from the scanning system and for filtering the combined information for relevance to at least one of the query and the first user.” Culliss meets this element by giving articles key term scores that reflect both content and feedback data. (See DX-058 at 4:37-49; 14:34-36.) These scores are then used to filter the articles by, e.g., excluding articles whose X-rated scores exceed a given threshold. (See id. at 12:1-5). Combining search results with ranking scores that reflect content and feedback data – as disclosed by Culliss – is precisely how Plaintiff alleges that Defendants meet this claim element. (Trial Tr. at 610:15-611:21). Thus, Culliss meets this element under Plaintiff’s own infringement theory, as Dr. Ungar opined. (See Trial Tr. at 1365:8-1367:25; Amazon.com, 239 F.3d at 1351.)

F. '664 Claim 5 is Anticipated by Culliss

Claim 5 depends from claim 1 and further requires the filtered information to be an advertisement. Culliss meets this element, because Culliss explicitly states that the articles which are filtered may be advertisements. (See DX-058 at 9:56-62). Based on these disclosures, Dr. Ungar opined that Culliss anticipates claim 5. (Trial Tr. at 1368:5-9).

G. '664 Claim 6 is Anticipated by Culliss

Claim 6 depends from claim 1 and further requires “an information delivery system for delivering the filtered information to the first user.” Culliss discloses this element, as it recites that the search engine displays squibs of the articles to the user. (See DX-058 at 4:25-31). Based on these disclosures, Dr. Ungar opined that Culliss anticipates claim 6.
(Trial Tr. at 1368:10-17).

H. '664 Claims 21 and 22 are Anticipated by Culliss

Claim 21 depends from claim 1 and further recites “wherein the content-based filter system filters by extracting features from the information.” Culliss discloses the additional element of claim 21 because Culliss extracts words from the content of each article in order to determine how often the words from the query appear in these articles. (See DX-058 at 14:34-36). Claim 22 depends from claim 21 and further recites “wherein the extracted features comprise content data indicative of the relevance to the at least one of the query and the user.” Culliss discloses this element, because the words that Culliss extracts from an article’s content indicate how relevant the article is to the query. (See id.). Based on these disclosures, Dr. Ungar opined that Culliss anticipates claims 21 and 22. (Trial Tr. at 1368:18-1369:10).

I. '664 Claims 26 and 28 are Anticipated by Culliss

Claim 26 contains essentially the same elements as claim 1, but is recast as a method rather than system claim. Thus, as Dr. Ungar opined, Culliss anticipates claim 26 for the same reasons that it anticipates claim 1. (Trial Tr. at 1369:11-19). Claim 28 depends from claim 26 and further recites “the step of delivering the filtered information to the first user.” As discussed with respect to claim 6, supra, Culliss discloses this element as well. (Trial Tr. at 1369:20-1370:2).

J. '664 Claim 38 is Anticipated by Culliss

Claim 38 depends from claim 26 and further recites “wherein the searching step comprises scanning a network in response to a demand search for the information relevant to the query associated with the first user.” As noted above, “scanning a network” has been construed as looking for or examining items in a network, and “demand search” has been construed as a single search engine query performed upon a user request. (Dkt. 171 at 23.)
Culliss meets this element because Culliss searches for articles in response to a single user search query, and these articles are searched for on the vast network of the Internet. (See DX-058 at 3:45-55; 4:10-26). Based on these disclosures, Dr. Ungar found that Culliss anticipates claim 38. (Trial Tr. at 1370:3-13).

IV. BOWMAN ANTICIPATES EACH ASSERTED CLAIM OF THE '420 AND '664 PATENTS

As with Culliss, Plaintiff argues that the Bowman reference (DX-059) does not disclose the content-based, combining, or filtering limitations of the asserted claims. As explained below, however, Bowman does disclose those limitations, as well as every other limitation required by the asserted claims.

A. Overview of Bowman

The Bowman reference functions similarly to a traditional search engine in that it accepts a query from a user and generates a body of results in response. (DX-059 at 5:31-32; claim 28[preamble-b]). Bowman then filters those results based on collaborative feedback and content analysis.

For example, if a user enters the search query “ghost stories for kids,” Bowman would generate a body of search result items that contain the words “ghost,” “stories,” or “kids.” (Id. at claim 28[preamble-b]; Trial Tr. at 1322:22-1323:2). Bowman would then give each of these items a ranking score based on how often they were selected by other users who had entered the query “ghost stories for kids.” (Id. at claim 28[c]; Trial Tr. at 1323:2-6). Alternatively, rather than utilizing feedback from all users who entered the same query, Bowman may cluster users into discrete groups (such as age, income, or behavioral groups) and use feedback from users within the same group who entered the same query. (See id. at 3:28-33.) In this way, items returned in response to a given query may have different ranking scores for users in different groups.
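The group-clustered feedback scoring that Bowman's overview describes can be sketched as follows. This is an illustrative reconstruction with invented names, not code from the Bowman reference: selection counts are kept per (user group, query, item), so the same item can carry different scores for users in different groups.

```python
# Hypothetical sketch of Bowman-style group-clustered feedback:
# clicks are tallied per (user group, query, item), so rankings for the
# same query can differ between groups.
from collections import defaultdict

selection_counts = defaultdict(int)  # (group, query, item) -> times selected

def record_click(group, query, item):
    """Record that a user in `group` who entered `query` selected `item`."""
    selection_counts[(group, query, item)] += 1

def feedback_score(group, query, item):
    """An item's feedback score for users in a given group and query."""
    return selection_counts[(group, query, item)]

query = "ghost stories for kids"
record_click("under-13", query, "Item A")
record_click("under-13", query, "Item A")
record_click("adults", query, "Item B")

# The same item scores differently for users in different groups:
print(feedback_score("under-13", query, "Item A"))  # 2
print(feedback_score("adults", query, "Item A"))    # 0
```

Under these assumptions, "Item A" would rank highly for the under-13 group but carry no feedback weight for the adult group, matching the overview's point that identical queries can yield different rankings per group.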
Some Bowman embodiments further adjust the score of each item according to its content, by analyzing how many of the terms in the query appear in the item. (See id. at claim 29.) Items that contain all the terms in the query get higher ranking scores, while items that contain fewer of the query terms get progressively lower ranking scores. (See id.) Thus, if a user entered the query “ghost stories for kids,” Bowman would give items that contain the terms “ghost,” “stories,” and “kids” higher adjustments to their ranking score, while giving items with only two of these terms a lower adjustment (and giving even lower adjustments to items that contain only one of these terms). (Trial Tr. at 1325:9-1326:5). The items are finally presented to the user in ranked order. (Id. at Abstract.) Additionally, the system may present only a subset of the items whose ranking scores exceed a certain threshold. (See id. at 9:58-62.)

In sum, the final score for each item in Bowman is generated through a combination of collaborative feedback data and content data. This score is then used to filter which items are presented to the user.

B. '420 Claim 10 is Anticipated by Bowman

1. Bowman discloses a “search engine system” (claim 10 preamble).

The preamble to claim 10 of the '420 patent describes a “search engine system.” Bowman discloses a “search engine system” because it ranks items in a search result by receiving a query and generating a plurality of items satisfying the query. (See DX-059 at claim 28[preamble-b]). This is exactly how a search engine operates, and Dr. Ungar accordingly opined that Bowman meets the claim 10 preamble. (Trial Tr. at 1326:21-1327:4). Nor did Plaintiff dispute this point in its cross-examination of Dr. Ungar or at oral argument over its Motion.

2. Bowman discloses “a system for scanning a network to make a demand search for informons relevant to a query from an individual user” (claim 10[a]).
Claim 10[a] recites “a system for scanning a network to make a demand search for informons relevant to a query from an individual user.” Bowman meets this element, as Dr. Ungar explained. (Trial Tr. at 1327:5-25). Specifically, Bowman discloses the steps of: “receiving a query specifying one or more terms; generating a query result identifying a plurality of items satisfying the query.” (DX-059 at claim 28[a-b].) This query is submitted by a user, and thus the resulting search is “performed upon a user request.” (See id. at 7:43-46.) Further, Bowman operates on a networked system of computers. (See id. at 5:29-30.) In its cross-examination of Dr. Ungar, Plaintiff did not dispute that Bowman meets this element either, nor did Plaintiff dispute this element in oral argument over its Motion.

3. Bowman discloses “a content-based filter system for receiving informons from the scanning system and for filtering the informons on the basis of applicable content profile data for relevance to the query” (claim 10[b]).

Claim 10[b] recites “a content-based filter system for receiving the informons from the scanning system and for filtering the informons on the basis of applicable content profile data for relevance to the query.” As with Culliss, Bowman meets this element by giving items scores based partly on content analysis and using these scores to filter the items.

(a) Bowman uses content analysis

Bowman discloses content analysis in claim 29, which requires adjusting an item’s score based on how many query terms are “matched” by the item. (DX-059 at claim 29). Bowman makes clear that “matching” involves content analysis – i.e., determining how many query terms appear in an item’s content. Indeed, when discussing “matching” in connection with the prior art, Bowman explicitly states that a query term is “matched” to a search result if it appears in that search result’s content.
For example, if the search results are books, Bowman states that a list of books will be “matching the terms of the query” if their “titles contain some or all of the query terms.” (See DX-059 at 1:30-38). In that same paragraph, Bowman states that the list of books “may be ordered based on the extent to which each identified item matches the terms of the query.” (Id. at 1:43-44 (emphasis added).) In other words, the list of books can be ordered based on how many of the query terms are matched to (i.e., contained within) the title of each book.

In nearly verbatim language, claim 29 of Bowman describes this technique of ranking search results based on how many query terms they “match.” A simple comparison of claim 29 to the “matching” prior art discussion makes this clear. (Compare claim 29 (“adjusting the ranking value produced for each item identified in the query result to reflect the number of terms specified by the query that are matched by the item”) with 1:43-44 (“the list may be ordered based on the extent to which each identified item matches the terms of the query.”).) Given the identity of language, the only logical interpretation is that claim 29’s “matching” technique also involves content analysis, as Dr. Ungar explained at trial. (Trial Tr. at 1328:3-14).

(b) Bowman discloses filtering

Bowman also filters items based on their scores, by retaining items that score above a predetermined threshold while excluding the rest. (DX-059 at 9:58-62; claim 15). As Dr. Ungar explained, this process of retaining some items and excluding others is “filtering.” (Trial Tr. at 1321:15-22, 1331:8-20). Plaintiff argues that Bowman does not disclose filtering because Bowman does not actually recite the word “filtering.” (Trial Tr. at 1772:8-10). But it is well-settled that an anticipatory reference need not use the same words as the claim as long as it fairly teaches the claimed concept. See Whitserve, LLC v. Comp. Packages, Inc., 694 F.3d 10, 21 (Fed. Cir. 2012).
Because Bowman’s method of retaining items above a threshold and discarding items below the threshold is a filtering process, it is irrelevant that Bowman does not use the word “filtering.”

4. Bowman discloses “a feedback system for receiving collaborative feedback data from system users relative to informons considered by such users” (claim 10[c]).

Claim 10[c] recites “a feedback system for receiving collaborative feedback data from system users relative to informons considered by such users.” The Court construed “collaborative feedback data” as data from system users with similar interests or needs regarding what informons such users found to be relevant. (D.I. 212 at 23.)

Bowman meets this element. First, Bowman records how often users who entered the same search query selected various items. (See, e.g., DX-059 at claim 28[c]). Second, rather than recording feedback from all users who entered the same query, Bowman may cluster users into groups (such as age, income, or behavioral groups) and use feedback from users within the same group who entered the same query. (See DX-059 at 3:28-33.) Based on these disclosures, Dr. Ungar opined that Bowman meets element 10[c]. (Trial Tr. at 1331:21-1332:21). Plaintiff did not dispute this point in its cross-examination of Dr. Ungar.

5. Bowman discloses “the filter system combining pertaining feedback data from the feedback system with the content profile data in filtering each informon for relevance to the query” (claim 10[d]).

Claim 10[d] recites “the filter system combining pertaining feedback data from the feedback system with the content profile data in filtering each informon for relevance to the query.” As Dr. Ungar explained, Bowman meets this element because Bowman combines data regarding the content of informons with collaborative feedback data from other users to determine the most relevant informons to a query. (Trial Tr. at 1332:22-1333:12).
Specifically, Bowman determines each search result item’s score by combining collaborative feedback data (showing how often the item was selected by users from the same group who entered the same query) with content profile data (showing how many of the query terms appear in the item’s content). (See DX-059 at claim 28[c] and claim 29). The final score is used to determine the item’s relevance to the query. (See id. at 2:23-24.) Bowman then filters out items whose scores fall below a certain threshold. (Id. at 9:58-62.)

C. '420 Claims 14 and 15 are Anticipated by Bowman

Claim 14 depends from claim 10 and further requires “wherein the collaborative feedback data comprises passive feedback data.” Claim 15 adds the requirement that “the passive feedback data is obtained by passively monitoring the actual response to a proposed informon.” Bowman meets both these elements, because Bowman’s feedback data is derived from passively monitoring users’ actual responses to search result items – namely, monitoring how often users selected each of those items. (Id. at 2:31-35). Based on these disclosures, Dr. Ungar opined that Bowman anticipates claims 14 and 15. (Trial Tr. at 1334:9-24).

D. '420 Claims 25, 27, and 28 are Anticipated by Bowman

Claims 25, 27, and 28 contain the same substance as claims 10, 14, and 15, respectively, but are simply recast as method rather than system claims. Thus, as Dr. Ungar explained, Bowman anticipates claims 25, 27, and 28 for the same reasons that it anticipates claims 10, 14, and 15. (Trial Tr. at 1335:2-14).

E. '664 Claim 1 is Anticipated by Bowman

1. Bowman discloses “a search system” (claim 1 [preamble]).

Bowman describes “a search system” as recited by the preamble to claim 1. Specifically, Bowman accepts a search query from a user and returns a set of search results. (DX-059 at 5:31-32 (stating that Bowman includes “a query server for generating query results from queries”).) Based on these disclosures, Dr.
Ungar explained that Bowman meets the claim 1 preamble (Trial Tr. 1335:20-1336:6), and Plaintiff did not dispute this point in its cross-examination of Dr. Ungar or in oral argument over its Motion.

2. Bowman discloses “a scanning system for searching for information relevant to a query associated with a first user in a plurality of users” (claim 1[a]).

Claim 1[a] recites “a scanning system for searching for information relevant to a query associated with a first user in a plurality of users.” The Court construed “a scanning system” as “a system used to search for information.” (Dkt. 171 at 23.) Bowman meets this limitation because it searches for information relevant to a query associated with a first user. As recited in claim 28 of Bowman, Bowman discloses “[a] computer-readable medium whose contents cause a computer system to rank items in a search result by: receiving a query specifying one or more terms; generating a query result identifying a plurality of items satisfying the query.” (DX-059 at claim 28[preamble-b]) (emphasis added). Based on these disclosures, Dr. Ungar testified that Bowman meets claim 1[a] (Trial Tr. 1336:7-16), and Plaintiff did not dispute this point in its cross-examination of Dr. Ungar or in oral argument over its Motion.

3. Bowman discloses “a feedback system for receiving information found to be relevant to the query by other users” (claim 1[b]).

Claim 1[b] recites “a feedback system for receiving information found to be relevant to the query by other users.” Again, Plaintiff asserts that this element is met by receiving clickthrough data about information items (e.g., ads) that users view. (Trial Tr. at 610:2-14.) Bowman meets this element under Plaintiff’s own infringement theory because Bowman receives feedback about information found to be relevant to the query by other users – i.e., it receives feedback about which items were selected most often by other users who entered the same query. (See DX-059 at claim 28[c].)
Based on these disclosures, Dr. Ungar explained that Bowman meets claim 1[b] (Trial Tr. 1336:17-1337:4), and Plaintiff did not dispute this point in its cross-examination of Dr. Ungar or in oral argument over its Motion.

4. Bowman discloses “a content-based filter system for combining the information from the feedback system with the information from the scanning system and for filtering the combined information for relevance to at least one of the query and the first user” (claim 1[c]).

Claim 1[c] recites “a content-based filter system for combining the information from the feedback system with the information from the scanning system and for filtering the combined information for relevance to at least one of the query and the first user.” Bowman meets this element. As described above, Bowman gives items scores that reflect both content data and feedback data. (See DX-059 at claims 28-29.) These scores are used to filter the items for relevance to the query. (See id. at Abstract; 2:23-24; 9:58-62.) Combining search results with scores that reflect content and feedback data – as disclosed by Bowman – is precisely how Plaintiff alleges that Defendants meet this claim element. Specifically, Plaintiff alleges that Defendants meet this element by calculating a “Quality Score” for ads that Plaintiff contends is based on feedback data and content data. (Trial Tr. at 610:15-611:21.) Thus, Bowman meets this element under Plaintiff’s own infringement theory, as Dr. Ungar explained. (Trial Tr. 1337:5-1338:12.)

F. '664 Claim 5 is Anticipated by Bowman

Claim 5 depends from claim 1 and further requires the filtered information to be an advertisement. Bowman meets this element. Specifically, Bowman discloses that system users can purchase the products represented by the search results, such as by adding these products to their virtual shopping carts. (DX-059 at 5:4; 9:2-3; claim 7.)
Thus, the search results constitute advertisements for the purchasable products that they represent. Based on these disclosures, Dr. Ungar testified that Bowman anticipates claim 5. (Trial Tr. at 1339:3-15.)

G. '664 Claim 6 is Anticipated by Bowman

Claim 6 depends from claim 1 and further requires “an information delivery system for delivering the filtered information to the first user.” Bowman discloses this element, as it recites that the software facility displays the filtered search results to the user. (DX-059 at 9:56-58.) Based on these disclosures, Dr. Ungar testified that Bowman anticipates claim 6. (Trial Tr. at 1339:16-1340:1.)

H. '664 Claims 21 and 22 are Anticipated by Bowman

Claim 21 depends from claim 1 and further recites “wherein the content-based filter system filters by extracting features from the information.” Bowman discloses this element because Bowman extracts words from the content of each item in order to determine how many words from the query are found in the item. (DX-059 at claim 29.) Claim 22 depends from claim 21 and further recites “wherein the extracted features comprise content data indicative of the relevance to the at least one of the query and the user.” Bowman discloses this element because the words that Bowman extracts from an item’s content indicate how relevant the item is to the query. (See id.; Trial Tr. at 1340:14-21.) Based on these disclosures, Dr. Ungar testified that Bowman anticipates claims 21 and 22. (Trial Tr. at 1340:22-24.)

I. '664 Claims 26 and 28 are Anticipated by Bowman

Claim 26 contains essentially the same elements as claim 1, but is recast as a method rather than system claim. Thus, as Dr. Ungar explained, Bowman anticipates claim 26 for the same reasons that it anticipates claim 1. (Trial Tr. at 1341:6-13.)
Claim 28 depends from claim 26 and further recites “the step of delivering the filtered information to the first user.” As discussed with respect to claim 6, supra, Bowman discloses this element as well. (Trial Tr. at 1341:14-23.)

J. '664 Claim 38 is Anticipated by Bowman

Claim 38 depends from claim 26 and further recites “wherein the searching step comprises scanning a network in response to a demand search for the information relevant to the query associated with the first user.” Bowman meets this element because Bowman looks for or examines items in response to a single search engine query. (See DX-059 at claim 28[a-b] (disclosing the steps of “receiving a query specifying one or more terms; generating a query result identifying a plurality of items satisfying the query”).) Furthermore, Bowman operates on a computer network. (See id. at 5:29-30.) Based on these disclosures, Dr. Ungar opined that Bowman anticipates claim 38. (Trial Tr. at 1341:24-1342:12.)

V. THE ASSERTED CLAIMS ARE INVALID FOR OBVIOUSNESS

Obviousness is a question of law, though based on underlying facts. In re Gartside, 203 F.3d 1305, 1316 (Fed. Cir. 2000). To determine obviousness, a court must consider: “(1) the scope and content of the prior art; (2) the differences between the prior art and the claims at issue; (3) the level of ordinary skill in the art; and (4) any relevant secondary considerations, such as commercial success, long felt but unsolved needs, and the failure of others.” Western Union Co. v. MoneyGram Payment Sys., Inc., 626 F.3d 1361, 1369 (Fed. Cir. 2010) (citing Graham v. John Deere Co. of Kansas City, 383 U.S. 1, 17-18 (1966)). Under these so-called “Graham factors,” all asserted claims are obvious. Certainly, Plaintiff cannot show that the claims are non-obvious as a matter of law. Thus, Plaintiff’s Motion must be denied.

A.
All Elements of the Asserted Claims Were Found in the Prior Art

Plaintiff has repeatedly characterized the asserted independent claims as a combination of four color-coded elements for purposes of showing infringement: (1) yellow (searching for information relevant to a query), (2) blue (content-based analysis), (3) green (collaborative analysis), and (4) purple (combining the content and collaborative analysis to filter the information). (See Trial Tr. 425:4-18; 521:16-24.) If these four elements are all that is required to show infringement, then these four elements are all that is required to show invalidity. As noted above, a patentee may not interpret a claim one way for purposes of infringement and another way for purposes of invalidity. See Amazon.com, 239 F.3d at 1351. As explained above, the combination of these four elements is found in the Culliss and Bowman references. However, this combination is also found in two other prior art references raised at trial – the WebHound thesis (DX-049) and the Rose patent (DX-034). The inventors themselves admitted that the elements of the patents existed in the prior art. (See Trial Tr. at 272:11-13; 336:10-14 (search); 238:15-16; 336:7-9 (content-based filtering); 238:17-18; 336:3-6 (collaborative filtering).)

The Fab article (DX-050) is titled “Fab: Content-Based, Collaborative Recommendation.” (Id. at 66.) The sub-title goes on to state: “By combining both collaborative and content-based filtering systems, Fab may eliminate many of the weaknesses found in each approach.” (Id.) Thus, the Fab article discloses three of the four elements that Plaintiff alleges are required in the independent claims: content-based filtering, collaborative filtering, and combining content-based and collaborative filtering.

All four of the elements cited in Plaintiff’s infringement case are found in the WebHound thesis.
As shown in the Abstract of this reference, the WebHound thesis discloses a combination of content-based and collaborative filtering: “This thesis claims that content-based filtering and automated collaborative filtering are complementary techniques, and the combination of ACF with some easily extractable features of documents is a powerful information filtering technique.” (DX-049 at Abstract.) Thus, the WebHound Abstract alone discloses three of the four elements that Plaintiff contends are required by the independent claims: content-based analysis, collaborative analysis, and combining content and collaborative analysis for filtering.

The WebHound thesis also discloses the fourth element cited in Plaintiff’s infringement case – searching for information relevant to a query. Specifically, the WebHound thesis discloses that its content-based/collaborative filtering can be used to filter search results obtained by a search engine. (DX-049 at 78 (“a WEBHOUND like front-end to a popular search engine such as Lycos, could enable users to filter the results of their searches on the extensive databases compiled by these search engines in a personalized fashion.”).)

The four elements from Plaintiff’s infringement case are also found in the Rose patent (DX-034). As with the WebHound thesis, the Abstract of the Rose patent discloses content analysis, collaborative analysis, and combining the content and collaborative analysis. Specifically, the Rose Abstract explains that “the prediction of relevance [for information items] is carried out by combining data pertaining to the content of each item of information with other data regarding correlations of interests between users.” (DX-034 at Abstract.) The Rose Abstract further explains that “[t]he user correlation data is obtained from feedback information provided by users when they retrieve items of information.” (Id.)
Thus, Rose combines content data with feedback data (from users with correlated interests)1 to score items. Rose further explains that “[t]he relevance predicting technique of the present invention . . . can be used to filter messages provided to a user in an electronic mail system and search results obtained through an on-line text retrieval service.” (DX-034 at 2:51-55) (emphasis added). In other words, Rose discloses that its content/collaborative scoring method can be used to “filter . . . search results.”

1 Because the user feedback data in Rose comes from users with correlated interests, it is “collaborative feedback data” under the Court’s construction. See D.I. 212 at 4 (construing “collaborative feedback data” as “data from system users with similar interests or needs regarding what informons such users found to be relevant.”).

5. The elements from the dependent claims are found in the prior art

Moving from the independent claims to the dependent claims, the elements from the dependent claims are also found in the prior art references discussed above.

(a) ‘420 claims 14, 15, 27, and 28

‘420 claims 14, 15, 27, and 28 add the requirements that the feedback data be passive data reflecting a user’s actual response to an informon. This element is disclosed by, e.g., Culliss, which passively monitors whether users select articles and adjusts the articles’ scores based on the user selections. (DX-058 at 4:37-45.) Furthermore, given that user feedback can comprise only two basic types – active or passive – it would be obvious to modify any “active feedback” reference to disclose passive feedback instead. See KSR, 550 U.S.
at 421 (where “there are a finite number of identified, predictable solutions, a person of ordinary skill has good reason to pursue the known options within his or her technical grasp.”).

(b) ‘664 claims 21 and 22

‘664 claims 21 and 22 add the elements of extracting features from the information that indicate the information’s relevance to the query or the user. This element is disclosed by, e.g., the WebHound thesis, which relies on “easily extractable features of documents” and analyzes “the importance of [a given] feature relative to the other features for a particular user.” (DX-049 at Abstract, 38.)

(c) ‘664 claims 6 and 28

‘664 claims 6 and 28 require delivering the filtered information to the user. Numerous prior art references disclose this element. For example, the WebHound thesis discloses returning the top-rated web pages to the user. (DX-049 at 78 (“the resulting matches could be filtered through WEBHOUND and only the top ranked ones (in terms of predicted rating) need be returned.”).)

(d) ‘664 claim 5

‘664 claim 5 requires that the filtered information be an advertisement. This element is disclosed by, e.g., Culliss. (See DX-058 at 9:61.) Furthermore, because advertisements are simply one type of information that can be scored and filtered like any other, it would be obvious that the other prior art references discussed herein could filter advertisements.

(e) ‘664 claim 38

‘664 claim 38 requires scanning a network in response to a demand search. This element is met by, e.g., WebHound, which discloses a search engine that scans the Internet for articles in response to a single search engine query entered by a user. (See DX-049 at 78.)

B. Any Differences Between the Claims and the Prior Art Would be Obvious to Overcome

1.
To the extent WebHound and Rose do not disclose a “tight integration” of their search and filtering systems, adding this element would be obvious

Plaintiff argues that Defendants’ obviousness arguments are deficient because Defendants supposedly did not show how the prior art could be combined or modified to meet all elements of the asserted claims. (Trial Tr. at 1772:18-1773:9.)2 This allegation is not true. For example, Plaintiff has alleged throughout this case that Rose and WebHound are not anticipatory art because they do not disclose a “tight integration” between their search and filtering systems. Yet Dr. Ungar testified how it would be obvious to modify WebHound or Rose to disclose this “tight integration,” by simply having these references use their search queries in the filtering process. As Dr. Ungar stated, “If you are filtering search results, it’s obvious to keep around the query and use that also for filtering . . . just think about it. If you ask a query of a search engine, you get a result, you just have the query sitting there with the result, why not use that also for filtering?” (Trial Tr. at 1317:25-1318:7.) In other words, it would be obvious to modify WebHound or Rose so that they disclosed the “tight integration” between search and filtering that Plaintiff contends is required.

Furthermore, both Bowman and Culliss remember the search query when scoring and filtering items, because their content scores compare words in the query to words in the items3

2 Plaintiff also argues that all of Defendants’ obviousness combinations are deficient because Dr. Ungar “had not tested the asserted obvious combinations to get the same results of the patents-in-suit.” (Trial Tr. at 1773:10-12.) This argument is without support – there is no case holding that an obviousness combination must undergo “testing” before being used to invalidate an asserted claim.

3 See DX-059 at claim 29; DX-058 at 4:10-15, 14:34-36.
and their feedback scores utilize feedback from users who entered the same or a similar query.4 Thus, one of ordinary skill could draw upon these Bowman and Culliss disclosures and thereby modify Rose or WebHound to remember and use the search query for filtering.

4 See DX-059 at claim 28[c]; DX-058 at 4:37-45.

2. To the extent Bowman does not disclose content matching, adding this element would be obvious

As discussed above, claim 29 of Bowman discloses adjusting an item’s score to reflect the number of terms in the query that are “matched” by the item. The only sensible reading of this “matching” technique is that it determines how many query terms appear in the content of the item. But even if this “matching” technique did not compare the query terms to the content of the item, modifying Bowman to disclose content-based matching would be obvious. This is because content-based matching indisputably appears in other sections of the same Bowman reference. For example, the Background section of Bowman discusses how a search system can order books within a search result based on how many query terms “match” or appear in the books’ titles. (DX-059 at 1:37-45 (“the query result is a list of books whose titles contain some or all of the query terms . . . the list may be ordered based on the extent to which each identified item matches the terms of the query.”).) It would be obvious to apply this unambiguous content-based matching to the invention disclosed in claim 29 of Bowman. Given that the content-based matching from the Bowman Background appears in the same reference as Bowman claim 29, it is self-evident that one could apply the content-based matching from the Background to claim 29. Thus, even if the “matching” of Bowman claim 29 did not already embrace content-based matching, modifying this technique to disclose content-based matching would be obvious. See Boston Sci. Scimed, Inc. v. Cordis Corp., 554 F.3d 982, 991 (Fed. Cir.
2009) (“Combining two embodiments disclosed adjacent to each other in a prior art patent does not require a leap of inventiveness.”).

3. To the extent Bowman does not disclose filtering, adding this element would be obvious

As discussed above, Bowman “filters” because it retains items that score above a predetermined threshold while excluding items that score below the threshold. (DX-059 at 9:58-62, claim 15.) Plaintiff has argued that this is not true “filtering” because it involves ranking all the items and then retaining a subset of items which exceed the threshold, rather than passing the items one-by-one through the filter. But even if Plaintiff were correct that “filtering” requires retaining or excluding items one-by-one, modifying Bowman to disclose “one-by-one” filtering would be utterly trivial and obvious. There is no dispute that Bowman gives scores to items, nor is there any dispute that Bowman sets a threshold and excludes items that score below the threshold. So modifying Bowman to disclose the “one-by-one” filtering that Plaintiff contends is required would simply require scoring Item A, retaining or excluding Item A based on whether it passes the threshold, and then moving on to Item B – rather than scoring all the items and retaining the items that exceed the threshold in one fell swoop. Because there are only two basic ways to retain and exclude items – “one-by-one” or “all at once” – modifying Bowman to disclose the former technique rather than the latter would necessarily be obvious. As the Supreme Court held in KSR, a modification or combination is likely obvious where “there are a finite number of identified, predictable solutions” to a known problem. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 421 (2007).
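The point that “one-by-one” and “all at once” threshold filtering are merely two forms of the same operation can be illustrated with a short sketch; the item names, scores, and threshold below are hypothetical values chosen for illustration and do not come from any reference in the record.

```python
# Illustration of the "one-by-one" versus "all at once" point: both
# approaches apply the same threshold test and necessarily retain the
# same items. Scores and threshold are invented for illustration.

def filter_one_by_one(scored_items, threshold):
    """Test each item against the threshold as it is encountered."""
    kept = []
    for item, score in scored_items:
        if score >= threshold:
            kept.append(item)
    return kept

def filter_all_at_once(scored_items, threshold):
    """Score everything first, then retain the subset above the threshold."""
    return [item for item, score in scored_items if score >= threshold]

scored = [("A", 0.9), ("B", 0.3), ("C", 0.7)]
assert filter_one_by_one(scored, 0.5) == filter_all_at_once(scored, 0.5)
```

Because the per-item test is identical, the only difference between the two functions is when the threshold comparison happens, not which items survive it.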
Such is the case here – modifying Bowman to disclose “one-by-one” filtering instead of “all at once” filtering would necessarily be obvious, given the limited number of ways that one could retain and exclude items based on their scores.

4. To the extent Culliss does not disclose filtering, adding this element would be obvious

As discussed above, Culliss discloses “filtering” in the embodiment where X-rated articles are screened out of the search results when their X-rated scores exceed a given threshold. (See DX-058 at 12:1-5.) But even if one ignored this specific Culliss embodiment and focused on the other Culliss embodiments that merely rank articles, it would be obvious to modify these other Culliss embodiments so that they disclosed filtering as well as ranking. There is no dispute that all the Culliss embodiments give scores to articles and present the articles to the user in decreasing order of their scores. (Id. at 5:5-10.) Modifying this “ranking” method into “filtering” would simply require setting a threshold and excluding, one-by-one, the articles that score below the threshold. As noted above, Bowman discloses setting such a threshold, and there is no inventiveness in comparing items to a threshold “one-by-one” versus “all at once.” Thus, it would be obvious to set such a threshold in Culliss and filter out the articles that score below the threshold.

C. The Level of Ordinary Skill in the Art

As Dr. Ungar explained at trial, a person of ordinary skill in the art for the asserted patents would have a bachelor’s degree in computer science (or an equivalent degree) plus 2-3 years’ experience in the field of information retrieval. (Trial Tr. at 1311:15-20.) Dr. Carbonell offered a very similar formulation for the person of ordinary skill in the art. (Id. at 1284:8-18.) Given the prior art disclosures recited above and the few (if any) differences between the asserted claims and the prior art, Dr.
Ungar opined that such a person would have found the asserted claims to be obvious. (See id. at 1311:25-1312:4.) The elements of the asserted claims were all found in the prior art (id. at 1312:5-10), and these elements were not used in any unconventional or unpredictable way in the asserted claims. (Id. at 1312:11-22.) Thus, there would have been no barriers or difficulties to a person of ordinary skill in the art combining these elements to create the inventions in the asserted patents. (Id. at 1312:22-1313:4.) Indeed, named inventor Ken Lang himself admitted that he was not aware of any technological barriers to creating the inventions in the asserted claims. (Id. at 274:12-275:1.) See KSR, 550 U.S. at 421.

D. No Secondary Considerations Can Rebut the Obviousness Showing

A patentee may rebut an obviousness showing by pointing to “secondary considerations” of non-obviousness, such as commercial success of the patented invention, failure of others to create the patented invention, or a showing that the patented invention filled a long-felt and unsolved need. See KSR, 550 U.S. at 406 (citing Graham, 383 U.S. at 17-18). In this case, however, there are no secondary considerations that might rebut the obviousness of the ‘420 and ‘664 patents. For example, there was no commercial success for these patents – in fact, the patents were never commercially used at all. (Trial Tr. at 332:7-12, 339:18-341:5, 1315:4-16.) There also was no failure by others to devise the systems or methods claimed by these patents, nor did these patents fill any long-felt and unsolved need. (Id. at 1315:17-1316:15.) To the contrary, numerous prior art references had already solved the problem of combining content-based and collaborative filtering, in order to resolve the weaknesses of each individual method on its own. (Id. at 1316:10-15.) In short, there are no secondary considerations that might rebut the obviousness of the ‘420 and ‘664 patents.

VI.
CONCLUSION

For the foregoing reasons, Defendants respectfully request judgment as a matter of law that each asserted claim of the ‘420 and ‘664 patents is anticipated by Culliss, anticipated by Bowman, and invalid for obviousness.

DATED: October 31, 2012

/s/ Stephen E. Noona
Stephen E. Noona
Virginia State Bar No. 25367
KAUFMAN & CANOLES, P.C.
150 West Main Street, Suite 2100
Norfolk, VA 23510
Telephone: (757) 624-3000
Facsimile: (757) 624-3169
senoona@kaufcan.com

David Bilsker
David A. Perlson
QUINN EMANUEL URQUHART & SULLIVAN, LLP
50 California Street, 22nd Floor
San Francisco, California 94111
Telephone: (415) 875-6600
Facsimile: (415) 875-6700
davidbilsker@quinnemanuel.com
davidperlson@quinnemanuel.com

Counsel for Google Inc., Target Corporation, IAC Search & Media, Inc., and Gannett Co., Inc.

/s/ Stephen E. Noona
Stephen E. Noona
Virginia State Bar No. 25367
KAUFMAN & CANOLES, P.C.
150 W. Main Street, Suite 2100
Norfolk, VA 23510
Telephone: (757) 624-3000
Facsimile: (757) 624-3169

Robert L. Burns
FINNEGAN, HENDERSON, FARABOW, GARRETT & DUNNER, LLP
Two Freedom Square
11955 Freedom Drive
Reston, VA 20190
Telephone: (571) 203-2700
Facsimile: (202) 408-4400

Cortney S. Alexander
FINNEGAN, HENDERSON, FARABOW, GARRETT & DUNNER, LLP
3500 SunTrust Plaza
303 Peachtree Street, NE
Atlanta, GA 94111
Telephone: (404) 653-6400
Facsimile: (415) 653-6444

Counsel for Defendant AOL Inc.

CERTIFICATE OF SERVICE

I hereby certify that on October 31, 2012, I will electronically file the foregoing with the Clerk of Court using the CM/ECF system, which will send a notification of such filing (NEF) to the following:

Jeffrey K. Sherwood
Kenneth W. Brothers
DICKSTEIN SHAPIRO LLP
1825 Eye Street NW
Washington, DC 20006
Telephone: (202) 420-2200
Facsimile: (202) 420-2201
sherwoodj@dicksteinshapiro.com
brothersk@dicksteinshapiro.com

Donald C. Schultz
W. Ryan Snow
Steven Stancliff
CRENSHAW, WARE & MARTIN, P.L.C.
150 West Main Street, Suite 1500
Norfolk, VA 23510
Telephone: (757) 623-3000
Facsimile: (757) 623-5735
dschultz@cwm-law.com
wrsnow@cwm-law.com
sstancliff@cwm-law.com

Counsel for Plaintiff, I/P Engine, Inc.

/s/ Stephen E. Noona
Stephen E. Noona
Virginia State Bar No. 25367
KAUFMAN & CANOLES, P.C.
150 West Main Street, Suite 2100
Norfolk, VA 23510
Telephone: (757) 624-3000
Facsimile: (757) 624-3169
senoona@kaufcan.com
