Personalized User Model LLP v. Google Inc.

Filing 136

REDACTED VERSION of 132 Claim Construction Answering Brief by Personalized User Model LLP. (Tigan, Jeremy)

IN THE UNITED STATES DISTRICT COURT FOR THE DISTRICT OF DELAWARE

PERSONALIZED USER MODEL, L.L.P., Plaintiff,
v.
GOOGLE, INC., Defendant.

C.A. No. 09-525 (LPS)
PUBLIC VERSION

PLAINTIFF'S RESPONSIVE CLAIM CONSTRUCTION BRIEF

MORRIS, NICHOLS, ARSHT & TUNNELL LLP
Karen Jacobs Louden (#2881)
Jeremy A. Tigan (#5239)
1201 N. Market Street
P.O. Box 1347
Wilmington, DE 19899-1347
(302) 658-9200
klouden@mnat.com
jtigan@mnat.com

OF COUNSEL:
Marc S. Friedman
SNR Denton US LLP
1221 Avenue of the Americas
New York, NY 10020-1089
(212) 768-6767

Mark C. Nelson
SNR Denton US LLP
2000 McKinney Ave., Ste. 1900
Dallas, TX 75201-1858
(214) 259-0900

Jimmy M. Shin
Jennifer D. Bennett
SNR Denton US LLP
1530 Page Mill Road, Ste. 200
Palo Alto, CA 94304-1125
(650) 798-0300

Confidential Version Filed: December 13, 2010
Public Version Filed: December 20, 2010

Attorneys for Personalized User Model, L.L.P.

TABLE OF CONTENTS

I. THE COURT SHOULD CONSTRUE THE LEARNING MACHINE-RELATED TERMS TO REQUIRE THAT THEY BE SPECIFIC -- BUT NOT NECESSARILY UNIQUE -- TO EACH USER ... 2
   A. "ESTIMATING PARAMETERS OF A [USER-SPECIFIC] LEARNING MACHINE" -- THE "PARAMETERS" TERM ... 3
      1. The estimated "parameters" are the values or weights of the variables, not the variables themselves ... 3
         a) The claim language supports PUM's construction ... 3
         b) The specification supports PUM's construction ... 5
         c) The extrinsic evidence is consistent with PUM's construction ... 6
         d) Defendant's inconsistent constructions of "parameters" and "estimating parameters" belie its position ... 7
      2. Defendant's improper attempt to engraft a requirement -- "to calculate a probability" -- seeks to read non-existent limitations into the term "parameters" ... 8
   B. DEFENDANT'S PROPOSED CONSTRUCTIONS OF "LEARNING MACHINE," "USER MODEL SPECIFIC TO THE USER," AND "USER-SPECIFIC LEARNING MACHINE" SHOULD BE REJECTED ... 9
      1. The learning machine should not be construed as narrowly as Defendant proposes ... 9
      2. Defendant's "User Model specific to the user" definition is incorrect ... 11
         a) The claims do not require millions of User Models, each "unique" to a user ... 11
         b) The claims do not require that the User Model be created and updated by the learning machine, or that the User Model be stored in a data structure ... 12
      3. The claims do not require millions of "user-specific learning machines," each unique to a particular user ... 13
II. DEFENDANT'S ATTEMPT TO OVERLY NARROW "USER-SPECIFIC DATA FILES," "USER," AND "DOCUMENT" SHOULD ALSO BE REJECTED ... 14
   A. THE CLAIM LANGUAGE EXPRESSLY DEFINES "USER-SPECIFIC DATA FILES" ... 14
   B. IN AN ELECTRONIC SYSTEM A "USER" IS A PERSON AS REPRESENTED BY A TAG OR IDENTIFIER ... 16
   C. A "DOCUMENT" NEED NOT BE AN ELECTRONIC FILE ... 16
III. PROBABILITY P(U|D) SHOULD NOT BE LIMITED TO PERCENTAGE CHANCE ... 17
IV. DEFENDANT'S PROPOSED DEFINITIONS OF "UNSEEN DOCUMENT" AND "PRESENTING" ARE AT ODDS WITH THE INTRINSIC EVIDENCE AND MUST BE REJECTED ... 19
V. DEFENDANT'S INDEFINITENESS ARGUMENTS ARE WITHOUT MERIT ... 21
   A. THE "DOCUMENTS [NOT] OF INTEREST" TERMS ARE DEFINITE BECAUSE THE SPECIFICATION PROVIDES EXAMPLES OF OBJECTIVE CRITERIA UPON WHICH THE USER'S INTEREST OR NON-INTEREST MAY BE JUDGED ... 21
   B. THE "USER INTEREST INFORMATION DERIVED FROM THE USER MODEL" PHRASE IS NOT INSOLUBLY AMBIGUOUS ... 23
VI. DEFENDANT'S ANTECEDENT BASIS AND ORDER OF STEPS ARGUMENTS MISS THE MARK ... 23

TABLE OF AUTHORITIES

CASES

AllVoice Computing PLC v. Nuance Commc'ns, Inc., 504 F.3d 1236 (Fed. Cir. 2007) ... 8, 21
Altiris, Inc. v. Symantec Corp., 318 F.3d 1363 (Fed. Cir. 2003) ... 25
Applied Med. Res. Corp. v. U.S. Surgical Corp., 448 F.3d 1324 (Fed. Cir. 2006) ... 21
B. Braun Melsungen AG v. Terumo Med. Corp., 2010 WL 2219667 (D. Del. June 3, 2010) ... 1, 8, 9
Cheetah Omni, LLC v. Verizon Servs. Corp., 2010 WL 4510986 (E.D. Tex. Nov. 9, 2010) ... 14
Colorquick, LLC v. Eastman Kodak Co., 2008 WL 5771324 (E.D. Tex. June 25, 2008) ... 9
Datamize LLC v. Plumtree Software, Inc., 417 F.3d 1342 (Fed. Cir. 2005) ... 22
Datatreasury Corp. v. Wells Fargo & Co., 2009 WL 1393068 (E.D. Tex. May 11, 2009) ... 24
Durr Sys., Inc. v. FANUC Ltd., 463 F. Supp. 2d 663 (E.D. Mich. 2006) ... 9
Exxon Research Eng'g Co. v. United States, 265 F.3d 1371 (Fed. Cir. 2001) ... 22
Furminator, Inc. v. Munchkin, Inc., 2009 WL 3805564 (E.D. Mo. Nov. 9, 2009) ... 14
Haemonetics Corp. v. Baxter Healthcare Corp., 607 F.3d 776 (Fed. Cir. 2010) ... 14
Halliburton Energy Servs. Inc. v. M-I LLC, 514 F.3d 1244 (Fed. Cir. 2008) ... 23
Howmedica Osteonics Corp. v. Wright Med. Tech., Inc., 540 F.3d 1337 (Fed. Cir. 2008) ... 7
Liebel-Flarsheim Co. v. Medrad, Inc., 358 F.3d 898 (Fed. Cir. 2004) ... 10, 13, 20
Markman v. Westview Instruments, Inc., 52 F.3d 967 (Fed. Cir. 1995) ... 13
Nazomi Commc'ns, Inc. v. Arm Holdings, PLC, 403 F.3d 1364 (Fed. Cir. 2005) ... 21
Novo Nordisk A/S v. Eli Lilly & Co., 1999 U.S. Dist. LEXIS 18690 (D. Del. Nov. 18, 1999) ... 6
Oakley, Inc. v. Sunglass Hut Intern., 316 F.3d 1341 (Fed. Cir. 2003) ... 23
Phillips v. AWH Corp., 415 F.3d 1303 (Fed. Cir. 2005) ... 6, 12, 17, 22
Praxair, Inc. v. ATMI, Inc., 543 F.3d 1306 (Fed. Cir. 2008) ... 8
Rambus Inc. v. Infineon Techs. AG, 318 F.3d 1081 (Fed. Cir. 2003) ... 1, 8, 9
Scriptgen Pharms., Inc. v. 3-Dimensional Pharms., Inc., 79 F. Supp. 2d 409 (D. Del. 1999) ... 6
Viskase Cos., Inc. v. World Pac Intern. AG, 714 F. Supp. 2d 878 (N.D. Ill. 2010) ... 22
Vitronics Corp. v. Conceptronics, Inc., 90 F.3d 1576 (Fed. Cir. 1996) ... 6, 12

OTHER AUTHORITIES

AMERICAN HERITAGE DICTIONARY ... 3
THE COMPUTER DESKTOP ENCYCLOPEDIA (Freedman 2d ed. 1999) ... 15
J. Hertz, A. Krogh, R. Palmer, Introduction to the Theory of Neural Computation (1991) ... 4
Vladimir S. Cherkassky and Filip M. Mulier, Learning from Data: Concepts, Theory, and Methods (1998) ... 4

In its brief, Defendant Google, Inc. ("Defendant") seeks to use claim construction to improperly redefine and narrow the scope of the invention. Defendant begins its assault on the invention by attempting to limit it to the preferred embodiment (Personal Web).1 Defendant then describes the Personal Web preferred embodiment incorrectly and in an overly limiting way that is not supported by either the claim language or the specification. By doing this, Defendant hopes to advance a litigation-inspired construction of the learning machine-related elements (e.g., learning machine, user model specific to the user, and user-specific learning machine) that requires that each user have his or her own personal learning machine/user model containing a set of variables "unique" to that user.
Contrary to Defendant's proposed construction, the specification does not require that there be millions of personal learning machines/models, but rather also describes embodiments containing a single learning machine/model where the learning machine-related elements are "specific" to each user. This user-specificity is accomplished by initializing the learning machine/model with values or weights of variables (i.e., "parameters") that are estimated and further updated for each user. PUM's constructions for the learning machine-related elements clearly are aligned with the specification, whereas Defendant's limiting constructions are designed to support its non-infringement defense. Defendant's remaining claim constructions are further impermissible attempts to unduly narrow the invention, are legally flawed, or both. Thus, for the reasons set forth below and in PUM's Opening Brief (D.I. 119), the Court should reject Defendant's constructions and adopt PUM's.2

1 To do so, Defendant quotes "the present invention, referred to as Personal Web" language from the specification. (D.I. 116, at 3). Defendant, however, ignores the paragraph directly preceding the quoted language, which states "[a]ccordingly, the following preferred embodiment of the invention is set forth without any loss of generality to, and without imposing limitations upon, the claimed invention... The present invention, referred to as Personal Web ...". 6:67-7:5 (xx:y-z refers to the '040 patent, col. xx, ll. y-z unless otherwise noted). Thus, when read in context, "the present invention" language does not limit the invention to a single embodiment, as Defendant suggests. See, e.g., B. Braun Melsungen AG v. Terumo Med. Corp., 2010 WL 2219667, at *6 n.5 (D. Del. June 3, 2010) (finding reference to "the invention" was a description of the preferred embodiment); Rambus Inc. v. Infineon Techs. AG, 318 F.3d 1081, 1094 (Fed. Cir. 2003) (use of "the present invention" language did not limit the invention).

I. THE COURT SHOULD CONSTRUE THE LEARNING MACHINE-RELATED TERMS TO REQUIRE THAT THEY BE SPECIFIC -- BUT NOT NECESSARILY UNIQUE -- TO EACH USER.

Although the parties have many disputes relating to the learning machine-related elements, the main and overarching dispute relates to the construction of "specific to the user"/"user-specific."3 Defendant incorrectly argues that the "user model specific to the user" and "user-specific learning machine" elements must be "unique" to each individual user (i.e., that each user has his or her own personal learning machine/user model, potentially resulting in millions and millions of learning machines/models with tens of millions of variables). See D.I. 116, at 9-14. In contrast, PUM construes these terms according to the claim language (i.e., the "user-specific learning machine" and "user model" must be "specific" to the user). PUM's construction contemplates that the "specific to the user"/"user-specific" aspects of the learning machine/user model occur because they are defined by "parameters," which are specific to each user.

2 Sections II-IV of Defendant's Brief purport to describe the patented technology, the prosecution history, and the accused technology. Plaintiff will address inaccuracies in these sections in the context of the various disputed claim terms below. Plaintiff notes, however, that if Defendant were correct and its models were not specific to users, but rather applied to all users (see D.I. 116, at 5), then personalization would not occur because the learning machines/models would operate the same in all circumstances. As Defendant admits, however, it does personalize search, advertisements and news. See Exs. 1-3, attached to the Declaration of Jennifer D. Bennett in support of Plaintiff's Responsive Claim Construction Brief. All Exhibits cited hereinafter are attached to the Bennett Declaration unless specifically noted.

3 The learning machine-related elements occur in steps (c) and (e) of claims 1 and 32 of the '040 patent and in steps (c) and (f) of claim 1 of the '276 patent and steps (c) and (e) of claim 23 of that patent. Claim 1 of both the '040 and '276 patents is reproduced in Appendix A for the Court's convenience. The parties' respective claim construction positions for each of the disputed terms/phrases are also set forth in tables at Appendices B-F.
The first term/phrase, therefore, to be addressed is "parameters" and "estimating parameters of a learning machine."

A. "Estimating Parameters of a [User-Specific] Learning Machine" -- The "Parameters" Term.

1. The estimated "parameters" are the values or weights of the variables, not the variables themselves.

The parties dispute whether the "parameters" are the values or weights of the variables of the learning machine/user model (PUM's position) or the variables themselves (Defendant's position).4 The answer to this question goes directly to the resolution of the overarching dispute described above.

a) The claim language supports PUM's construction.

The plain meaning of the claim language -- "estimating parameters of a learning machine" and "wherein the parameters are estimated in part from the user-specific data files" -- requires that the parameters be "estimated." The generally understood meaning of "estimate" relates to approximation or rough calculation: 1. To make a judgment as to the likely or approximate cost, quantity or extent of; calculate approximately. 2. To form a tentative opinion about; evaluate.5 The correct construction of "parameters," therefore, should encompass the mathematical concept of approximation or rough calculation. Defining "parameters" as "values or weights" of variables, as opposed to the variables themselves, is consistent with the commonly understood meaning of "estimate."

4 "Variables" in this context can be thought of as the knobs of the learning machine that specifically define it, not as the input to the specific learning machine (e.g., the property of an unseen document applied to the specific learning machine).

5 THE AMERICAN HERITAGE DICTIONARY (NEW COLLEGE EDITION), 449 (Houghton Mifflin Co. 1978), attached as Ex. 4.

The language of claims 1 and 32 of the '040 patent (see Appendix A) also requires that the "parameters define a User Model specific to the user." PUM defines the "user model" as an implementation of a learning machine, which, in turn, PUM defines as "a model and/or mathematical function ...". Thus, the parameters define a model and/or mathematical function "specific" to the user. Parameters, in the context of defining a "specific" model or function, mean the assigned values or weights of the variables; otherwise we have only a template of a function family. Consider, for example, the function family f(x) = a*x + b: only when a and b are known (e.g., if a = 3 and b = 2) is it a "specific" function (e.g., f(x) = 3*x + 2) resulting in output specific to an input. Referring to the example above, when "x" is 5 the learning machine computes an output (e.g., estimated user interest)6 for any given input x (e.g., the unseen document of claim 1 of the '040 patent) of 17 (e.g., the function is 3*5 + 2 = 17).

6 See steps 1(e) and 32(e) of the '040 patent and steps 1[f] and 23[e] of the '276 patent, reproduced in Appendix A.
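As an illustration of this distinction only, the following short Python sketch (the observation data, function names, and the least-squares fit are assumptions made for this sketch; they are not taken from the patents, the record, or either party's construction) shows that the symbols a and b are a mere template, and that it is the estimated numeric values (here 3 and 2) that yield a specific function producing the output 17 for the input 5.

    def make_specific_function(a, b):
        # Return the specific function f(x) = a*x + b once numeric values are fixed.
        return lambda x: a * x + b

    # Hypothetical user-specific observations: (input, observed interest) pairs
    # chosen to lie on the brief's worked example, f(x) = 3x + 2.
    observations = [(1.0, 5.0), (2.0, 8.0), (4.0, 14.0)]

    # Estimate a and b by an ordinary least-squares fit over the observations
    # (one simple way to "estimate parameters"; nothing here is assumed to be
    # required by the claims).
    n = len(observations)
    mean_x = sum(x for x, _ in observations) / n
    mean_y = sum(y for _, y in observations) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in observations)
    den = sum((x - mean_x) ** 2 for x, _ in observations)
    a_hat = num / den                 # estimated value: 3.0
    b_hat = mean_y - a_hat * mean_x   # estimated value: 2.0

    f = make_specific_function(a_hat, b_hat)
    print(f(5))  # 17.0 -- the brief's example: 3*5 + 2 = 17

In this sketch, swapping in a different user's observations changes only the estimated values a_hat and b_hat, not the symbols a and b or the form of the function family.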
For the "parameters" (i) to define a User Model specific to the user (or a user-specific learning machine) as required by the claim language, and (ii) for the learning machine so defined to estimate the user interest in a document, therefore, the parameters must be the specific values or weights (e.g., the 3 and the 2) and not the "a" and the "b". See, e.g., J. Hertz, A. Krogh, R. Palmer, Introduction to the Theory of Neural Computation, Addison-Wesley (1991), at 115-120, relevant pages attached as Ex. 5 (describing the back propagation algorithm initializing and updating weights to estimate a non-linear function); see also Vladimir S. Cherkassky and Filip M. Mulier, Learning from Data: Concepts, Theory, and Methods (1998), 135-139, attached as Ex. 6.

b) The specification supports PUM's construction.

The specification describes a learning machine as having "tunable parameters" and states that the user model is an implementation of a learning machine. 8:43-46. These parameters are "continually updated" based upon monitored user interactions. 8:46-50, 64-66. The specification also teaches that documents may be analyzed during the initialization stage of the user model to determine the initial parameters for the various functions. 17:48-54. The system then monitors the user's interactions (e.g., network searching, network navigation, network browsing, email activities, viewing pushed information, and searching, to name a few) and, as a result, "the parameters of each user representation in the User Model are modified." 21:63-22:7. Defining "parameters" as values or weights of variables is consistent with both their "tunability" and the ability to update and modify them. Conversely, defining "parameters" as the variables themselves, as Defendant suggests, is not consistent with the specification because tuning and updating the parameters would require actually changing them to something else (e.g., modifying the actual variables in some way or creating new ones -- neither of which is described in the specification).7

7 Additionally, the specification discusses using a nonlinear function (e.g., Multilayer Perceptron) in the user model and that a key feature of that model is that the parameters are updated based on actual user reactions to documents. 22:7-11. In a Multilayer Perceptron, it is the weights, not the variables, that are updated. See, e.g., Ex. 5, at 120 (the weights are first initialized (step 1), then updated (step 6) based on the delta between the desired output and the actual output for a given input (step 4)).
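For context, a minimal Python sketch of this kind of weight update may help show why the updated quantities are the numeric weights rather than the variables themselves. This is a simplified, single-layer delta-rule update with made-up feature values, assumed for illustration only; it is not the Multilayer Perceptron procedure cited from Ex. 5 or anything described in the patents.

    import math

    def predict(weights, features):
        # Weighted sum of document features, squashed to a 0-1 interest score.
        s = sum(w * x for w, x in zip(weights, features))
        return 1.0 / (1.0 + math.exp(-s))

    def update_weights(weights, features, desired, rate=0.1):
        # One training step: nudge each weight by the error on this example
        # (the delta between the desired output and the actual output).
        error = desired - predict(weights, features)
        return [w + rate * error * x for w, x in zip(weights, features)]

    weights = [0.0, 0.0, 0.0]          # initialize the weights
    doc_features = [1.0, 0.0, 2.0]     # hypothetical properties of one document
    weights = update_weights(weights, doc_features, desired=1.0)  # user showed interest
    # After the update the weight values differ; the feature variables and the
    # form of the model do not.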
This is not surprising because, as the specification explains, Iw and Iu are indicator variables as defined by probability theory, and are used to select the word and phrase list for each user. See generally, 11:46-12:27. Defendant's apples to oranges comparison should be rejected. c) The extrinsic evidence is consistent with PUM's construction. The claims and the specification support PUM's, not Defendant's, construction. Defendant, therefore, will likely rely on a blizzard of extrinsic evidence in an attempt to muddy the waters. Where, as here, the meaning of a claim term/phrase is clear from the intrinsic evidence, however, extrinsic evidence need not be considered. See Phillips v. AWH Corp., 415 F.3d 1303, 1322-1323 (Fed. Cir. 2005) (extrinsic evidence many be considered so long as it does not contradict the intrinsic evidence). 8 Moreover, the extrinsic evidence (to the extent the Court deems it relevant) presented thus far supports PUM's construction. See D.I. 119, at 17 and the exhibits cited therein. 9 REDACTED 8 See also Novo Nordisk A/S v. Eli Lilly & Co., No. 98-643-MMS, 1999 U.S. Dist. LEXIS 18690, at *42 (D. Del. Nov. 18, 1999) (improper to examine extrinsic evidence when definition is clear from the intrinsic evidence); Scriptgen Pharms., Inc. v. 3-Dimensional Pharms., Inc., 79 F. Supp. 2d 409, 411-12 (D. Del. 1999) (same). 9 To the extent extrinsic evidence is considered, dictionaries, such as those on which PUM relies, are preferred over opinion testimony. See Vitronics Corp. v. Conceptronics, Inc., 90 F.3d 1576, 1585 (Fed. Cir. 1996) ("dictionaries ... are accessible to the public in advance of litigation [and are therefore] to be preferred over opinion testimony..."). -6- REDACTED d) Defendant's inconsistent constructions of "parameters" and "estimating parameters" belie its position. Defendant construes "parameters ..." as "variables, having a value or weight ...," on the one hand, but construes "estimating parameters of a learning machine" as "estimating a value or weight of each of the variables ..." on the other. (D.I. 116, at 13). Defendant cannot have it both 10 REDACTED REDACTED REDACTED 11 12 See, e.g., Howmedica Osteonics Corp. v. Wright Med. Tech., Inc., 540 F.3d 1337, 1347 (Fed. Cir. 2008) (rejecting the use of inventor testimony in construing claim language because "it is not unusual for there to be a significant difference between what an inventor thinks his patented invention is and what the ultimate scope of the claims is after allowance by the PTO.") -7- ways. If the "parameters" are the variables, then "estimating parameters" should be "estimating the variables" themselves, not their values or weights. This inconsistency further demonstrates that Defendant's constructions are not correct. 2. Defendant's improper attempt to engraft a requirement -- "to calculate a probability" -- seeks to read non-existent limitations into the term "parameters". Defendant conflates steps (c) and (e) of claim 1 of the `040 patent when it proposes that "estimating parameters of a [user-specific] learning machine" must include the phrase "used by the [user-specific] learning machine to calculate a probability." (D.I. 116, at 13). Defendant's proposed construction confuses how parameters may be used with what parameters are. 
The language of step (e) of claim 1 is clear -- the "probability is estimated by applying the identified properties of the document to the learning machine having the parameters defined by the User Model."13 Because Defendant offers no justification for importing the "to calculate a probability" into the "parameters"/"estimating parameters" terms, Defendant's construction should be rejected.14 Defendant's construction should also be rejected because it incorrectly requires that the probability must be "calculated" whereas the claim language repeatedly uses the term "estimate." See AllVoice Computing PLC v. Nuance Commc'ns, Inc., 504 F.3d 1236, 1248 (Fed. Cir. 2007) (holding different words to have different meanings where "[t]he specification and the claims consistently use the terms `forming,' `updating,' and `monitoring' to denote 13 Claims 1 and 23 of the `276 patent similarly state that the properties of the retrieved document are applied "to the user-specific learning machine to estimate a probability." Defendant cites to language from the summary of the invention portion of the specification, as well as Praxair, Inc. v. ATMI, Inc., 543 F.3d 1306, 1324 (Fed. Cir. 2008), to support its argument. But, as stated previously, although the specification contains a summary of invention section and uses the phrase "the present invention" to describe certain aspects of the invention, the specification also clearly states that these are only "preferred embodiments." See, e.g., 6:63-7:6. Praxair is, therefore, distinguishable. See B. Braun, 2010 WL 2219667, at *6; Rambus, 318 F.3d at 1094. 14 -8- separate processes with different end results.") B. Defendant's Proposed Constructions of "Learning Machine," "User Model Specific to the User," and "User-Specific Learning Machine" Should be Rejected. 1. The learning machine should not be construed as narrowly as Defendant proposes. Relying on its incorrect definition of "estimating parameters," Defendant again conflates steps 1(c) and 1(e) in attempt to read the "probability" language from element 1(e) into the learning machine definition. This reliance results in Defendant confusing how a learning machine may be used (and further attempting to import a particular methodology for accomplishing that use (e.g., Defendant's "calculating a probability" language)), with what a learning machine actually is. Defendant's discussion on pages 5-6 of its brief, relating to "estimating" (not "calculating," to which Defendant suddenly jumps after identifying citations that use the word "estimate") probabilities is, therefore, not relevant to defining a "learning machine."15 As explained in PUM's opening brief, Defendant attempts to read probability into the learning machine definition so that it can then argue that the learning machine must calculate a "percentage chance" -- which is Defendant's definition of "probability" -- thereby advancing its Defendant's argument that the invention is limited to the description in the summary of invention misses the mark. As described previously, the cases Defendant cites, Honeywell and Microsoft, do not stand for the proposition that the claims must be limited to what is described whenever language such as "the present invention" is used or by statements in the summary of invention section. See, e.g., B. Braun, 2010 WL 2219667, at *6 n.5; Rambus, 318 F.3d at 1094; Durr Sys., Inc. v. FANUC Ltd., 463 F. Supp. 2d 663, 677 n.13 (E.D. Mich. 2006); Colorquick, LLC v. Eastman Kodak Co., No. 
6:06-cv-390, 2008 WL 5771324, at *7 n.9 (E.D. Tex. June 25, 2008). Microsoft is also distinguishable because there the Court found that the invention could not included a packet switch network such as the Internet based on over two dozen clear statements, including some in the summary of invention, that the invention was directed over a telephone line. Here, there are no such clear statements relating that parameters or learning machines must be limited to "calculating a probability." 15 -9- non-infringement position. (D.I. 119, at 20). It is black letter law, however, that the claims should not be read restrictively unless the patentee has demonstrated a clear intent to limit the claim scope using "words or expressions of manifest exclusion or restriction." Liebel-Flarsheim Co. v. Medrad, Inc., 358 F.3d 898, 906 (Fed. Cir. 2004). Defendant has demonstrated no reason to read the description of a use of the learning machine (e.g., "estimating the probability ... by applying the identified properties of the documents to the learning machine having the parameters defined by the User Model (element 1(e)) into the definition of what a learning machine is and the Court should decline to do so.16 The two remaining disputes relate to Defendant's proposed requirement that the predictive ability of the learning machine improve with the addition of new data, and to Defendant's criticism of PUM's proposal that the learning machine is used to make a prediction "or intelligent decision." First, as stated in its opening brief, PUM does not dispute that it is desirable that the learning machine's performance improve over time (so long as the performance does not have to improve each time it is updated). But Defendant's "new data" language is vague and confusing. PUM's "past observations/experiences" language more accurately represents that the improved performance is based on the "monitored user interactions," which comes directly from the specification -- "as defined in the art, a learning machine contains tunable parameters based on past experience." 8:44-46. Second, although Defendant criticizes PUM's inclusion of "or intelligent decision" language in the definition of learning machine, PUM included it because the term "predict" ("to 16 Similarly, because Defendant advanced no argument in support of its position that a learning machine is limited to a "program," the Court should adopt PUM's model and/or mathematical function language for the reasons set forth in its brief (D.I. 119, at 18-19). Defendant's argument relating to the learning machine and user model being separate limitations (D.I. 116, at 8) will be addressed in connection with the User Model. - 10 - state, tell about, or make known in advance, especially on the basis of special knowledge, foretell)17 does not fully capture all aspects of learning machine operation. For example, articles relating to machine learning discuss the recognition of patterns and making "intelligent decisions" based on data.18 2. Defendant's "User Model specific to the user" definition is incorrect. a) The claims do not require millions of User Models, each "unique" to a user. The meaning of "specific to the user" was discussed extensively above in connection with the "parameters" term and thus will not repeated. Here, however, Defendant begins its attempt to constrict the invention by using extrinsic evidence to improperly equate "specific" with "unique." (D.I. 116, at 9). 
Defendant is compelled to rely on extrinsic evidence, because the intrinsic evidence (i.e., the specification) does not support its position. The specification, in fact, directly rebuts Defendant's position that there exists a "unique" user model for each user. The specification states that (i) "the User Model may be initialized by selecting a set of predetermined parameters of a prototype user selected by the user" (5:18-21), (ii) "the user can temporarily use a User Model that is built from a set of predetermined parameters of a profile selected by the user," (5:24-26), or (iii) the user can try on a "hat" (i.e., a prototype user): In some cases, initialization is performed without any user-specific information. A user may not have a large bookmarks or cache, or may not want to disclose any personal information. For such users, prototype users are supplied.... [P]rototype users are trained on a set of documents selected to represent a particular interest. For this reason, prototype users are known as `hats,' as the user is trying on the hat of a prototype user. 20:28-43. 17 18 Ex. 4, at 1032. See http://en.wikipedia.org/wiki/Machine_learning, attached as Ex. 8. - 11 - In each of these examples, the specification describes a situation where the User Model is "specific" to a user, but is not "unique" to that user because more than one user can have the same User Model. Because Defendant's construction would read out these preferred embodiments, it should be rejected. Vitronics, 90 F.3d at 1583 (constructions that do not read on the preferred embodiment are "rarely, if ever, correct.").19 b) The claims do not require that the User Model be created and updated by the learning machine, or that the User Model be stored in a data structure. Defendant relies on the claim language stating that the parameters of the learning machine define the user model and sections of the specification relating to the stored and updated parameters to conclude that user model is created and updated by the learning machine (D.I. 116, at 11). PUM agrees that the parameters define the user model specific to the user. As such, the parameters define the user model as "an implementation of a learning machine that is updated in part from data specific to the user," as PUM's definition provides. 8:43-46. Contrary to Defendant's argument, this language is definitional: it comes directly from the specification. Phillips, 415 F.3d at 1321 ("the specification is the single best guide to the meaning of a disputed term" and "the specification acts as a dictionary when it expressly defines terms used in the claims"). Moreover, PUM's definition of a user model as an implementation of a learning machine is consistent with PUM's definition of a learning machine (i.e., "a model and/or mathematical function..."), and is fully supported by the specification, which also refers 19 Defendant's citations to the specification do not support its argument. For example, Defendant cites to language regarding "stor[ing] parameters that define a User Model for each user (8:46-50) (D.I. 116, at 9), but that language supports PUM's construction that it is the parameters that make the model specific to each user. Similarly, Defendant states that "the User Model represents the user interest in a document ... This estimating is unique to each user. Id. (9:35-38). This language discusses the "estimation" being unique to each user not the user model itself being unique. 
- 12 - to the user model as a function (8:32-33), a model (passim), or a dynamic entity (21:63-64).20 Nor is PUM's "implementation" language vague as Defendant contends. As explained above, the parameters estimated in part from the user-specific data files, define the user model specific to the user, which one of ordinary skill in the art would understand to be an "implementation" of a learning machine.21 3. The claims do not require millions of "user-specific learning machines," each unique to a particular user. Defendant's construction fails because, as set forth previously, (i) "specific" does not mean "unique," and (ii) the specification does not require such a construction. See supra, at pp. 2-7, 11-12. In contrast, PUM's construction is fully supported by the specification, which teaches that the user-specificity comes from the parameters. According to the step (c) of claim 1, these parameters are estimated in part from the user-specific data files, which consist of the monitored user interactions and documents associated with the user. It is, therefore, the past observations and experiences that are specific to the user, which, as previously explained, make It is anticipated that Defendant will also rely on a portion of the file history that states there are three limitations in element 1(c) "a learning machine, parameters, and a User Model... [a]ll three limitations, as well as the deterministic relationship among them (i.e., the User Model is defined by the parameters of the learning model) must be present in Breese for an anticipatory type of rejection to stand" to support its construction of User Model. See Ex. 9, at PUM 0068510. This language, however, is equally applicable to PUM's construction defining the "user model" as an "implementation of a learning machine..." based on the parameters. Because the prosecution history should not be used to "enlarge, diminish, or vary" the limitations of the claims (Markman v. Westview Instruments, Inc., 52 F.3d 967, 980 (Fed. Cir. 1995)), Defendant's argument fails. 21 20 Finally, Defendant's definition requires that the user model be stored in a data structure. Where the user model is stored, however, does not define what a user model is. Moreover, the specification makes clear that the user model is a function "that may be implemented with any desired data structure" and "that is not tied to any specific data structure or representation." 10:30-36. The "or representation" language suggests that other types of representations for the user model may also be used and, therefore, the definition should not be limited to data structures because there are no words of manifest exclusion contained in the specification. Liebel, 358 F.3d at 906. - 13 - the values/weights of the variables specific to the user. For all of the reasons set forth above, PUM's construction should be adopted.22 II. DEFENDANT'S ATTEMPT TO OVERLY NARROW "USER-SPECIFIC DATA FILES," "USER," AND "DOCUMENT" SHOULD ALSO BE REJECTED. Defendant's proposed constructions for this group of terms should be rejected because its constructions (i) ignore the definitional language set forth in the claims themselves, (ii) ignore well-understood principles of computer science, and (iii) attempt to import non-existent limitations into the claims. A. The Claim Language Expressly Defines "User-Specific Data Files." Defendant asserts that this phrase "means what it says" (D.I. 116, at 14), but then ignores the specific definition found in the claims. 
The claim language unambiguously defines "userspecific data files" as comprising "the monitored user interactions with data and a set of documents associated with the user." 32:29-32. That definition controls. See Haemonetics Corp. v. Baxter Healthcare Corp., 607 F.3d 776, 781 (Fed. Cir. 2010) (the "controlling language could hardly be clearer," where the claim recites "a centrifugal unit comprising a centrifugal component and a plurality of tubes," the "centrifugal unit" must be the "centrifugal component and a plurality of tubes.").23 PUM's definition is also fully supported by, and consistent with, the use of the phrase in the specification. (See, e.g., 8:67-9:2 -- "The user specific data files include a set of documents and products associated with the user, and monitored user interactions with 22 Defendant's reliance on Honeywell and Kinetics Concepts is incorrect (D.I. 116, at 12) for the reasons previously stated. 23 See also Furminator, Inc. v. Munchkin, Inc., 2009 WL 3805564, at *10 (E.D. Mo. Nov. 9, 2009) (finding the claim term "the blade portion comprising a leading surface and a trailing surface defining a blade edge" is unambiguous and requires no construction); Cheetah Omni, LLC v. Verizon Servs. Corp., 2010 WL 4510986, at *6-*7 (E.D. Tex. Nov. 9, 2010) (finding the term "switching element" was sufficiently defined in the claims, which recite "a switching element coupled to the input interface, wherein the switching element comprises..."). - 14 - data."). PUM's definition, moreover, does not read out "user-specific" or "data files" as Defendant suggests. First, because the monitored user interactions with data and set of documents associated with the user are "specific" to the user, PUM's construction does not eliminate the plain meaning of user-specific.24 It is Defendant who attempts to read out the plain meaning of the term "specific" by equating it with "unique" to advance its non-infringement position.25 Second, the use of the word "files" or "data files" in the context of this claim is not intended to limit structure. The specification contemplates that "any suitable data structure may be used" (e.g., the relational database/table of Figures 14 and 4A-E) to store the data. 22:32-42; 22:64-23:9. The term "data files" is broad and conveys any type of information structure: A collection of data records. This definition may refer specifically to a database file that contains records and fields in contrast to other files such as a word processing document or spreadsheet. Or, it may refer to a file that contains any type of information structure including documents and spreadsheets in contrast to a program file.26 Finally, PUM requested that the Court construe the phrases "monitored user interactions with data" and "set of documents associated with the user." (D.I. 119, at 14-15). Defendant's only response to those proposals is that the "monitoring" phrase is "tied to" the monitoring steps argument and that the parties agree on the definition of "set." (D.I. 116, at 15 n.3). Because Defendant has not meaningfully responded to PUM's arguments, PUM submits that its 24 The cases Defendant cites are inapposite. None of the cases Defendant cites address a situation where the court was construing a claim term that was explicitly defined by using "comprising" in the claim language. 25 Indeed, Defendant's proposed construction simply rearranges the words of the phrase and impermissibly replaces "specific" with "unique." 26 The COMPUTER DESKTOP ENCYCLOPEDIA, 208 (Freedman 2d ed. 
1999), attached as Ex. 10. - 15 - constructions be adopted for the reasons set forth in its opening brief. B. In an Electronic System a "User" is a Person as Represented by a Tag or Identifier. Defendant's construction of user -- "person operating a computer" -- ignores the realities of electronic systems, including those described in the patents. Defendant cannot argue that the physical person is inside the computer. So naturally the person operating the computer must be electronically represented by something -- e.g., a tag or identifier as PUM proposes And while Defendant mistakenly argues that "[a]s a matter of common sense, personalization services are provided to persons" (D.I. 116, at 18), REDACTED Defendant's true motivation thus is revealed -- to restrict the user to a physical person to advance a noninfringement argument that an identifier representing a person cannot be a "user." The intrinsic evidence does not support this view. While Defendant attempts to erase the user as identified by "u" throughout the patent as an indicator of a user (D.I. 116, at 18-19), the specification clearly contemplates that the user includes his or her associated representation "u." 9:10-14 ("the user and his or her associated representation are denoted with u"). The REDACTED specification, likewise, describes clusters of users with c and clusters of clusters of users with c(c(u)). 9:21-25. Because literally clustering physical persons operating their computers is nonsensical, the "u" in this context is an identifier of a user. PUM's construction, therefore, most closely aligns with the intrinsic and extrinsic evidence. C. A "Document" need not be an Electronic File. Defendant's proposed construction for document -- "an electronic file" -- is fatally flawed because it does not define what a document is, but instead relates to how it may be stored. - 16 - Defendant attempts to end-run this distinction by reciting a list of activities from the specification and Figure 13 to argue that these actions can only be done if a document is an electronic file (D.I. 116, at 19). All of these actions, as well as Figure 13, however, explain how documents are analyzed, not what they actually are. And, contrary to Defendant's position, all of these actions are not necessarily performed on all documents.27 PUM's construction, on the other hand, is grounded in what a "document" actually is, as defined in the specification -- "text or any type of media." 9:14-18. Where the inventors expressly define a term in the specification, the inventors' lexicography controls. See Phillips, 415 F.3d at 1316. Other portions of the specification also indicate the term "document" is meant to be a broad concept. See e.g., 12:30-48; 17:19-47; 22:27-41; 23:25-46. III. PROBABILITY P(U|D) SHOULD NOT BE LIMITED TO PERCENTAGE CHANCE. The central dispute regarding these phrases is whether probability P(u|d) should be construed narrowly to mean "percentage chance" (Defendant's view), or whether it should be construed more broadly to mean "the degree of belief or likelihood" as set forth by PUM. A review of the intrinsic and extrinsic evidence makes clear that PUM's construction is correct and should be adopted. The patents relate to machine learning and in that context "probability" is used more generally. For example, the specification defines P(u|d) as the "probability of the event that the user u is interested in the document d...". 9:38-42; 28:10-12 ("P(u|d) represents the user interest ..."). 
These statements make it clear that probability in the context of the patents is utilized in a 27 Defendant's definition is also logically inconsistent with its definition of "user" as the person operating the computer. Under Defendant's "user" logic, a document should be the physical piece of paper, book, et cetera. - 17 - broad sense reflecting user interest, as opposed to in a limited "percentage chance" sense as Defendant contends. Moreover, the specification describes a Bayesian statistics approach to estimating probabilities. 27:55-28:12. Bayesian statistics expresses probabilities as "beliefs" or "likelihoods," not as "percentage chances." See Ex. 12, at 33-35.28 That the claim language consistently and repeatedly refers to "estimating" a probability P(u|d) (as opposed to "calculating" one) further supports this conclusion because "estimating" is normally understood to be a less precise measurement (e.g., an approximation). See D.I. 119, at 24 and Exhibits cited therein.29 The extrinsic evidence also supports PUM's definition. Contemporaneous learning machine texts describe probabilities as "approximations": "The problem encountered by the learning machine is to select a function that best approximates...The quality of an approximation produced by the learning machine is measured..."30 REDACTED REDACTED A second dispute arises from Defendant's attempt to add extraneous limitations into the definition of "estimating posterior probability P(u|d,q) ...". (D.I. 116, at 15-17). Relying on a 28 Even in the Bayesian approach upon which Defendant relies (D.I. 116, at 16-17), the descriptions relate to approximating functions and estimates not "percentage chances." Ex. 6, at 45-50. 29 Defendant argues in response by stating that the specification uses the terms "estimating" and "calculating" interchangeably. D.I. 116, at 16-17. Although portions of column 5 do refer to both calculating and estimating, generally the specification refers to "estimating" probability P(u|d)" not calculating it. See, e.g., 9:38-45, 62-66; 24:60-25:3 (estimated probability arrived at by combining various calculated scores); 25:61-26:3 (multilayer perceptron estimates probabilities); 27:2; 27:55-28:31 (calculating). The claims, moreover, do not use the term "calculating." 30 31 Ex. 6, at 20-21. REDACTED - 18 - technical treatise, Defendant proposes that "estimating a posterior probability ..." further requires taking into account what is previously known about the user's interests "given new knowledge of the document d the user is considering...". This language is not found in the claim and is not supported by the specification. The specification, in fact, defines the posterior probability as the probability that a document is of interest to the user having an information need q. 27:60-28:14. This is the same language used in the claim and is fully consistent with PUM's proposed construction.32 IV. DEFENDANT'S PROPOSED DEFINITIONS OF "UNSEEN DOCUMENT" AND "PRESENTING" ARE AT ODDS WITH THE INTRINSIC EVIDENCE AND MUST BE REJECTED. Unseen Document. Relying on a misunderstanding of collaborative filtering and a flawed argument based on the prosecution history, Defendant argues that the inventors surrendered claim scope and asks the Court to conclude that an "unseen document" is a document that has not been previously seen by any user. (D.I. 116, at 21-23). Defendant's construction should be rejected. First, Defendant's construction cannot be squared with the specification. 
For example, the specification contemplates taking into account "world knowledge" (e.g., "the number of users who have accessed the document, saved it in a favorite list, or been previously interested in the document") when applying the user model to unseen documents. 25:56-59. Adopting Defendant's construction would rule out this embodiment.

Second, the inventors did not surrender claim scope, but rather distinguished collaborative filtering from generalization. As taught in the specification, the drawback of that technique was that it relied on the opinions of others (as opposed to information known about the user). See generally 2:53-3:8. Not surprisingly, documents that had never been rated by anyone could not be recommended or evaluated in a collaborative filtering scheme because there would be no opinions of others upon which to rely. 3:9-21. PUM's patents improved upon collaborative filtering by permitting the evaluation of documents (seen or unseen by others) based on the information known about the user, as opposed to based on the opinions of others. Defendant's attempt to narrow the meaning of "unseen document," and thereby unduly limit the invention to documents never before seen by anyone, fails because the statements in the specification upon which Defendant relies are unrelated to "unseen document" in a definitional sense, but rather distinguish between two techniques -- collaborative filtering (which relies on the opinions of others) and generalization (which does not require the opinions of others).33,34

32 Defendant's proposed construction of "estimating posterior probability P(u|d,q)" also suffers from the same "percentage chance" issue as its definition of probability P(u|d) and should be rejected for the same reason.

33 Such statements, moreover, are not unequivocal disclaimers of claim scope. Liebel, 358 F.3d at 906. Finally, Defendant's prosecution history argument is flawed. The Gerace reference used memorization to determine the profile of a user and defined a user interest in a fixed set of categories (called agate information, e.g., sports). Ex. 9, at PUM0067574-75. The portion of the prosecution history Defendant cites related to finding similar user(s) among the existing set of users with a fixed set of categories. Gerace teaches serving ads that were liked by one member of this similar user group to others in this group. Id. at 67575. Not surprisingly, ads that had never been seen by the set of existing users could not be evaluated until they had been shown to a random set of users to build up sufficient statistics. Id. The patentee distinguished Gerace because Gerace was not able to generalize when presented with an unseen document: "Gerace does not teach nor suggest generalization beyond the recorded history or memorized information." Id. at 67574. "[I]t is not taught nor is it suggested how the first set of users [in Gerace] or the first user are/is presented with an unseen document or an unseen Ad." Id. at 67575. Thus, contrary to Defendant's argument, the distinction the patentee made over Gerace relates to the invention's ability to generalize (one of the hallmarks of the invention) to deal with documents that had never been seen by that user. Id.

34 Defendant's cited cases do not even apply as the patentee has not and did not disavow claim scope.
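The contrast between the two techniques can be sketched in a few lines of Python. This is a toy illustration under assumptions made solely for this sketch (made-up ratings, word weights, and scoring rules); it is not the patents' method, the accused system, or either party's construction. It simply shows that a collaborative filter has nothing to say about a document no one has rated, while a model built from the user's own data can still score that document from its properties.

    # Opinions of others about documents they have already rated.
    ratings_by_others = {"doc1": [1, 1, 0], "doc2": [0, 0, 1]}

    def collaborative_score(doc_id):
        votes = ratings_by_others.get(doc_id)
        if not votes:          # a document no one has rated
            return None        # nothing to recommend or evaluate from
        return sum(votes) / len(votes)

    # Word weights learned from this user's own monitored history (hypothetical).
    user_word_weights = {"golf": 0.9, "opera": 0.1}

    def generalized_score(words):
        # Score any document, seen or unseen by others, from its own properties.
        return sum(user_word_weights.get(w, 0.0) for w in words) / max(len(words), 1)

    print(collaborative_score("doc3"))              # None: no opinions of others to rely on
    print(generalized_score(["golf", "handicap"]))  # 0.45: still scored for this user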
Presenting. Ignoring the plain language of the claims and the specification, Defendant posits that "presenting" means "displaying" in an attempt to fabricate a joint infringement argument by reading a web-browser or other display mechanism into the claims. Nothing in the claims or the specification supports such a construction. Both, in fact, contradict it. As set forth in PUM's opening brief, the claims distinguish between "presenting" and "displaying" (see, e.g., claim 24: "The method of claim 23, wherein 'presenting' ... comprises 'displaying' said collected documents..."). '276::34:19-23. "Presenting" and "displaying," therefore, mean different things. Nazomi Commc'ns, Inc. v. Arm Holdings, PLC, 403 F.3d 1364, 1370 (Fed. Cir. 2005); Applied Med. Res. Corp. v. U.S. Surgical Corp., 448 F.3d 1324, 1333 n.3 (Fed. Cir. 2006). Defendant's specification citations further reinforce PUM's construction. Throughout the specification, "presenting" is used to communicate a broad category of acts by which information is made available. '276::1:67-2:4; 8:15-21. "Displaying," on the other hand, describes a subset of situations in which links, ads, or other forms of content are shown to the user. '276::28:52-62. This consistent use of the two terms to mean different things is strong evidence that the terms do, in fact, mean different things. AllVoice Computing, 504 F.3d at 1248.

V. DEFENDANT'S INDEFINITENESS ARGUMENTS ARE WITHOUT MERIT.

A. The "Documents [Not] Of Interest" Terms Are Definite Because The Specification Provides Examples Of Objective Criteria Upon Which The User's Interest Or Non-Interest May Be Judged.

Defendant argues that the terms "documents [not] of interest" must be indefinite because whether a document is "of interest" to the user is purely a subjective question. (D.I. 116, at 23-24). The specification, however, provides numerous examples of objective criteria to determine the level of the user's interest. Examples of user interest include search results that are visited, documents saved in user "favorites" or "bookmarks," and independently visited websites. 22:15-20. The specification continues by noting that the "of interest" measure even has degrees, which may be judged by whether a document was saved in a bookmarks file, how long the user spent
36 This case, therefore, is distinguishable from Datamize LLC v. Plumtree Software, Inc., 417 F.3d 1342, 1350 (Fed. Cir. 2005). There, the court found the term "aesthetically pleasing" to be indefinite because the specification lacked any objective standard to permit the public to determine the scope of the claimed invention. - 22 - REDACTED B. The "User Interest Information Derived from the User Model" Phrase is not Insolubly Ambiguous. A patentee need not define his invention with mathematical precision to comply with the definiteness requirement. Oakley, Inc. v. Sunglass Hut Intern., 316 F.3d 1341, 1342 (Fed. Cir. 2003). Here, the specification states that user model represents the user's information and product interests, that "all information that is presented to the user has been evaluated by the User Model to be of interest to the user,"(7:31-34), and that "the User Model reflects the user's current interests and needs." 8:63-64. This language supports PUM's construction and provides support for the phrase such that one of ordinary skill in the art would understand its meaning. Defendant has not and cannot meet the high bar of showing that this phrase is insolubly ambiguous.37 VI. DEFENDANT'S ANTECEDENT BASIS AND ORDER OF STEPS ARGUMENTS MISS THE MARK. Antecedent Bases. In its opening brief, PUM identified two terms -- "a document d"/"the document" and "the probability P(u|d)"/"the estimated probability" -- that had antecedent bases despite not always referring back to the previously introduced element. (D.I., 119, at 9- Defendant's reliance on Halliburton Energy Servs. Inc. v. M-I LLC, 514 F.3d 1244 (Fed. Cir. 2008) is misplaced. In Halliburton, the court found the phrase "fragile gel" indefinite because neither plaintiff's construction or any other construction resolved the ambiguity of the phrase. Here, plaintiff's construction clearly defines this phrase. 37 - 23 - 10). Defendant, on the other hand, provides one example in its brief -- that "a user u" and "the user"/"the user u" refer to the same user. (D.I. 116, at 25). Despite the dialogue between the parties, PUM still does not understand Defendant's apparent need to construe these terms, which Defendant does not dispute have antecedent bases as written. "[A] user u" as used in the preamble indicates an exemplary user as defined according to PUM's definition. In the remainder of the claim, the steps of the method are described with respect to a particular "user."38 To the extent Defendant seeks to limit the claims to a single system, requiring each user u to have his/her own unique learning machine, as explained above, such an argument is unsupportable and should be rejected. To the extent the Court believes resolution of this dispute to be necessary, because Defendant's proposed constructions should be rejected because they contradict the ordinary meaning of some of the claim terms. The claims are clear and there is no reason to depart from their plain meaning. Order of Steps. Again, attempting to box in the invention, Defendant incorrectly argues that an order of steps should be imposed based on the logic or grammar of the claims. (D.I. 116, at 26-29). This argument should be rejected. First, Defendant confuses antecedent basis with a grammatical requirement that the claim elements must be performed in a specific order. See Datatreasury Corp. v. Wells Fargo & Co., 2009 WL 1393068, at *60 (E.D. Tex. May 11, 2009) ("[A]ntecedent basis is not a matter of logic or grammar that compels the steps to be performed in order."). 
Second, the order of steps Defendant proposes is not supported by the specification, which teaches that the invention operates in different modes -- "initialization, updating or dynamic learning, and application." 8:54-56. PUM's opening brief set forth several examples where steps could repeat and/or overlap, which would be precluded by Defendant's order-of-steps construction. (D.I. 119, at 6-8).

38 To the extent Defendant is attempting to construe the claims such that a "user" must be a singular person, PUM disagrees. As discussed above in connection with "user," the invention REDACTED personalizes to a person as represented by a tag or identifier.

Finally, Defendant criticizes PUM's statement that (i) the steps can be performed in a consecutive manner, an overlapping manner, or a combination of the two, and (ii) step 1(d) of the `276 patent must be performed before a "portion" of step 1(f). In the preferred embodiment, however, "the User Model is constantly and dynamically updated." 22:56-61. This language contemplates steps occurring in an overlapping manner -- e.g., monitoring of the user's interactions occurring while other steps are occurring, or using a previously estimated probability while the user-specific data files are being updated. Second, with regard to PUM's point that a portion of step 1(f) of the `276 patent need not occur with the remainder of the step, the specification is clear that the "properties of the retrieved documents" (e.g., key words, author, title, media type, et cetera) may be identified at any time, including before the document is retrieved in response to a search. Once the document is retrieved, the method identifies these properties and applies them to the user-specific learning machine in the remainder of the step. `276::31:54-62. In sum, neither logic nor grammar requires that the Court construe these claims to infer an order of steps. Altiris, Inc. v. Symantec Corp., 318 F.3d 1363, 1369 (Fed. Cir. 2003); D.I. 119, at 6-8.

CONCLUSION

For the foregoing reasons, Defendant's proposed claim constructions should be rejected.

MORRIS, NICHOLS, ARSHT & TUNNELL LLP

/s/ Jeremy A. Tigan
Karen Jacobs Louden (#2881)
Jeremy A. Tigan (#5239)
1201 N. Market Street
P.O. Box 1347
Wilmington, DE 19899-1347
(302) 658-9200
klouden@mnat.com
jtigan@mnat.com

Attorneys for Personalized User Model, L.L.P.

OF COUNSEL:

Marc S. Friedman
SNR Denton US LLP
1221 Avenue of the Americas
New York, NY 10020-1089
(212) 768-6700

Mark C. Nelson
SNR Denton US LLP
2000 McKinney Ave., Ste. 1900
Dallas, TX 75201-1858
(214) 259-0900

Jimmy M. Shin
Jennifer D. Bennett
SNR Denton US LLP
1530 Page Mill Road, Ste. 200
Palo Alto, CA 94304-1125
(650) 798-0300

December 13, 2010
3957570

APPENDIX A

Claim 1 of '040 Patent
1. A computer-implemented method for providing automatic, personalized information services to a user u, the method comprising:
a) transparently monitoring user interactions with data while the user is engaged in normal use of a computer;
b) updating user-specific data files, wherein the user-specific data files comprise the monitored user interactions with the data and a set of documents associated with the user;
c) estimating parameters of a learning machine, wherein the parameters define a User Model specific to the user and wherein the parameters are estimated in part from the user-specific data files;
d) analyzing a document d to identify properties of the document;
e) estimating a probability P(u|d) that an unseen document d is of interest to the user u, wherein the probability P(u|d) is estimated by applying the identified properties of the document to the learning machine having the parameters defined by the User Model; and
f) using the estimated probability to provide automatic, personalized information services to the user.

Claim 1 of '276 Patent

1. A computer-implemented method for providing personalized information services to a user, the method comprising:
[a] transparently monitoring user interactions with data while the user is engaged in normal use of a browser program running on the computer;
[b] analyzing the monitored data to determine documents of interest to the user;
[c] estimating parameters of a user-specific learning machine based at least in part on the documents of interest to the user;
[d] receiving a search query from the user;
[e] retrieving a plurality of documents based on the search query;
[f] for each retrieved document of said plurality of retrieved documents, identifying properties of the retrieved document and applying the identified properties of the retrieved document to the user-specific learning machine to estimate a probability that the retrieved document is of interest to the user; and
[g] using the estimated probabilities for the respective plurality of retrieved documents to present at least a portion of the retrieved documents to the user.

Claim 32 of '040 Patent

1. A program storage device accessible by a central computer, tangibly embodying a program of instructions executable by the central computer to perform method steps for providing automatic, personalized information services to a user u, the method comprising:
a) transparently monitoring user interactions with data while the user is engaged in normal use of a computer;
b) updating user-specific data files, wherein the user-specific data files comprise the monitored user interactions with the data and a set of documents associated with the user;
c) estimating parameters of a learning machine, wherein the parameters define a User Model specific to the user and wherein the parameters are estimated in part from the user-specific data files;
d) analyzing a document d to identify properties of the document;
e) estimating a probability P(u|d) that an unseen document d is of interest to the user u, wherein the probability P(u|d) is estimated by applying the identified properties of the document to the learning machine having the parameters defined by the User Model; and
f) using the estimated probability to provide automatic, personalized information services to the user.

Claim 23 of '276 Patent

1. A computer-implemented method f
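For orientation only -- this is not the patented implementation, any party's accused or proposed system, or either party's construction -- the following minimal Python sketch loosely mirrors the lettered steps of claim 1 of the '040 patent quoted above. The "learning machine" here is a trivial keyword-weight stand-in, the probability formula is arbitrary, and every identifier is hypothetical; the final lines simply show steps repeating and interleaving rather than running once in a fixed order, which is the point of the order-of-steps discussion above.

```python
import math
from collections import defaultdict

# A stand-in "learning machine": per-user keyword weights (hypothetical, for illustration only).
user_model = defaultdict(float)          # parameters defining a User Model for one user u
user_data_files = []                     # monitored interactions / documents associated with the user

def monitor(interaction):                # step (a): transparently monitor user interactions
    user_data_files.append(interaction)  # step (b): update the user-specific data files

def estimate_parameters():               # step (c): estimate parameters from the data files
    for doc in user_data_files:
        for word in doc.split():
            user_model[word] += 1.0

def analyze(document):                   # step (d): identify properties of a document d
    return document.split()

def estimate_probability(document):      # step (e): estimate P(u|d) from the identified properties
    score = sum(user_model[w] for w in analyze(document))
    return 1.0 / (1.0 + math.exp(-score + 2.0))   # arbitrary squashing to (0, 1)

def personalize(documents):              # step (f): use the estimates to rank documents for the user
    return sorted(documents, key=estimate_probability, reverse=True)

# Steps may repeat and interleave rather than run once in strict order:
monitor("python patent claim construction")
estimate_parameters()
print(personalize(["claim construction brief", "cooking recipes"]))
monitor("markman hearing delaware")      # monitoring continues while other steps run
estimate_parameters()                    # the model is updated again (dynamic learning)
```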
