Coho Licensing LLC v. Twitter Inc.
Filing 1
COMPLAINT FOR PATENT INFRINGEMENT filed with Jury Demand against Twitter Inc. - Magistrate Consent Notice to Pltf. ( Filing fee $ 400, receipt number 0311-1370198.) - filed by Coho Licensing LLC. (Attachments: # 1 Exhibit A, # 2 Exhibit B, # 3 Civil Cover Sheet)(cla, )
Exhibit B
US008166096B1

(12) United States Patent
Odom

(10) Patent No.: US 8,166,096 B1
(45) Date of Patent: *Apr. 24, 2012

(54) DISTRIBUTED MULTIPLE-TIER TASK ALLOCATION

(76) Inventor: Gary Odom, Portland, OR (US)

(*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 0 days.
    This patent is subject to a terminal disclaimer.

(21) Appl. No.: 13/208,404

(22) Filed: Aug. 12, 2011

Related U.S. Application Data

(63) Continuation of application No. 10/228,588, filed on Aug. 26, 2002, now Pat. No. 8,024,395.
(60) Provisional application No. 60/317,108, filed on Sep. 4, 2001.

(51) Int. Cl. G06F 15/16 (2006.01)
(52) U.S. Cl. ............ 709/201; 709/223; 709/226
(58) Field of Classification Search ............ 709/201, 709/223-226; 718/105
    See application file for complete search history.

(56) References Cited

U.S. PATENT DOCUMENTS

5,815,793 A *    9/1998  Ferguson .................. 725/131
6,112,225 A      8/2000  Kraft et al.
6,148,323 A *   11/2000  Whitner et al.
6,167,427 A     12/2000  Rabinovich et al.
6,370,560 B1     4/2002  Robertazzi et al.
6,418,462 B1 *   7/2002  Xu
6,463,457 B1 *  10/2002  Armentrout et al. ......... 709/201
6,775,831 B1     8/2004  Carrasco et al.
6,782,422 B1     8/2004  Bahl et al.
6,826,753 B1    11/2004  Dageville et al.
6,941,365 B2 *   9/2005  Sirgany
7,013,344 B2 *   3/2006  Megiddo ................... 709/232
7,085,853 B2 *   8/2006  Volkov et al.
7,103,628 B2 *   9/2006  Neiman et al.
7,155,722 B1 *  12/2006  Hilla et al.
7,188,113 B1     3/2007  Thusoo
7,243,121 B2 *   7/2007  Neiman et al. ............. 709/201
7,383,426 B2     6/2008  Chung et al.
7,647,593 B2 *   1/2010  Matsumoto
7,668,800 B2     2/2010  Motoyama et al.
7,693,931 B2 *   4/2010  Polan
7,835,022 B2 *  11/2010  Matsumoto ................. 358/1.15
7,849,178 B2    12/2010  Shen et al.
7,936,469 B2 *   5/2011  Gregory ................... 358/1.15
2003/0028640 A1 *   2/2003  Malik
2003/0028645 A1 *   2/2003  Romagnoli
2003/0050955 A1 *   3/2003  Eatough et al.
2003/0061264 A1 *   3/2003  Benhase et al.
2003/0158887 A1 *   8/2003  Megiddo
2004/0045002 A1     3/2004  Berger et al.
2004/0264503 A1    12/2004  Braves
2008/0216859 A1     9/2008  Chan
2009/0204470 A1     8/2009  Weyl et al.

OTHER PUBLICATIONS

Kao, "Subtask deadline assignment for complex distributed soft real-time tasks," Proceedings of the 14th International Conference on Distributed Computing Systems, Jun. 21-24, 1994, pp. 172-181, USA.

Lee, "Some simple task assignment problems for distributed networked agents," Fourth International Conference on Knowledge-Based Intelligent Engineering Systems and Allied Technologies, Proceedings, vol. 1, Aug. 30-Sep. 1, 2000, pp. 305-308, USA.

* cited by examiner

Primary Examiner - Faruk Hamza

(57) ABSTRACT

Described is a system and methods for multiple tier distribution of task portions for distributed processing. Essentially, a task is divided into portions by a first computer and a task portion transferred to a second participatory computer on the network, whereupon an allocated task portion is again portioned by the second computer into subtask portions, and a subtask portion transferred by the second computer to a third participatory computer on the network, whereby distributed processing transpires, and results collated as required.

20 Claims, 6 Drawing Sheets

[Front-page figure: the example network of FIG. 2: 11 ALLOCATING COMPUTER; 10 SUB-ALLOCATING 1 COMPUTER; 12 SUB-ALLOCATING 2 COMPUTER; 13 ALLOCATED COMPUTER; 14 COMPUTER; 15 COLLATING COMPUTER; 99 NETWORK]
[Drawing Sheets 1-6 (images); the recoverable reference labels follow.]

Sheet 1 of 6, FIG. 1 (block diagram of a suitable computer): 100 COMPUTER; 101 CPU; 102 STORAGE; 103 MEMORY; 104 RETENTION DEVICE(S); 105 DISPLAY DEVICE; 106 INPUT DEVICE(S); 107 POINTING DEVICE (E.G. MOUSE); 108 KEYBOARD; 109 NETWORK CONNECTION DEVICE.

Sheet 2 of 6, FIG. 2 (example computer network): 11 ALLOCATING COMPUTER; 10 SUB-ALLOCATING 1 COMPUTER; 12 SUB-ALLOCATING 2 COMPUTER; 13 ALLOCATED COMPUTER; 14 COMPUTER; 15 COLLATING COMPUTER.

Sheet 3 of 6, FIGS. 3A-3B (example tasks). FIG. 3A: 70D TASK (DATA); 71D TASK PORTION; 72D SUBTASK PORTION. FIG. 3B: 70P TASK; 71P TASK PORTION; 72P SUBTASK PORTION; processing steps 81; 82A, 82Y; 83A, 83B, 83Y; 84A, 84Y, 84Z; 85 COLLATE.

Sheet 4 of 6, FIG. 4 (distributed processing application components): 20 COORDINATOR; 21 ALLOCATOR; 22 SCHEDULER; 23 ESTIMATOR; 24 PROCESSOR; 25 INITIATOR; 26 SUSPENDER; 27 COLLATOR; 28 COMMUNICATOR; 29 ADDRESSOR. FIG. 5 (abstract of a distributed processing message): 60 (SUB)TASK PORTION, IDENTIFIABLE BY DIVISION; 61 MESSAGE TYPE, ACTION-SPECIFIC (E.G. ALLOCATE, CANCEL, RESULT, COLLATE); 62 DATA (DATA OR REFERENCE); 63 CODE (SOFTWARE OR SW REFERENCE); 64 STATUS/DIRECTIVE (DEPENDS ON MESSAGE TYPE); 65 RESULTS (DEPENDS ON MESSAGE TYPE).

Sheet 5 of 6, FIG. 6 (distributed processing steps): 1 ALLOCATING COMPUTER: ALLOCATE TASK PORTION TO A COMPUTER; 2 SET COMPLETION SCHEDULE (OPTIONAL); 3 ESTIMATE COMPLETION TIME (OPTIONAL); 4 ALLOCATED COMPUTER: SUB-ALLOCATE SUBTASK PORTION; 5 PROCESSING COMPUTERS: PROCESS TASK PORTION/SUBTASK; 6 PROCESSING COMPUTERS: TRANSFER RESULTS (OPTIONAL); 7 COLLATING COMPUTER(S): COLLATE RESULTS (OPTIONAL); 8 RESULTS COMPUTER(S): TRANSFER RESULTS (OPTIONAL).

Sheet 6 of 6, FIGS. 7A-7D (processing distribution and results collation examples): computers 10-15; (sub-)allocation arrows 40-46, 49; collation arrows 50-55, 53'; FIG. 7D (alternative embodiment): transfer 90 and (sub-)allocate directive 91.
DISTRIBUTED MULTIPLE-TIER TASK ALLOCATION

CROSS-REFERENCE TO RELATED APPLICATIONS

Compliant with 35 U.S.C. §120, this application is a continuation of U.S. patent application Ser. No. 10/228,588, filed Aug. 26, 2002, now U.S. Pat. No. 8,024,395, which claims priority benefit under 35 U.S.C. §119(e) of U.S. Provisional Application No. 60/317,108, filed Sep. 4, 2001.

STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT

Not Applicable

THE NAMES OF THE PARTIES TO A JOINT RESEARCH AGREEMENT

Not Applicable

INCORPORATION-BY-REFERENCE OF MATERIAL SUBMITTED ON A COMPACT DISC

Not Applicable

BACKGROUND OF THE INVENTION

1. Field of the Invention

The relevant technical field is computer software, specifically distributed processing in a networked environment.

2. Description of the Related Art Including Information Disclosed Under 37 CFR 1.97 and 1.98

In what is not ironically called a "network effect", the advantage of distributed processing is positively correlated to availability of powerful computers in a networked environment. This trend is especially encouraged by always-on broadband connection to the ultimate wide-area network: the Internet.

U.S. Pat. No. 6,192,388 details "detecting available computers to participate in computationally complex distributed processing problems", and switching an allocated task portion to a different computer if the one first assigned the task portion becomes occupied. 6,192,388 also describes some of the resource factors involved in determining whether to allocate a task portion to a computer.

With some content overlap to the earlier-filed 6,192,388, U.S. Pat. No. 6,112,225 describes a "task distribution processing system and the method for subscribing computers to perform computing tasks during idle time", and goes into detail as to various ways of specifying "idle time". Both 6,192,388 and 6,112,225, incorporated herein by reference, use the same computer for allocating, monitoring and re-allocating task portions.

U.S. Pat. No. 6,263,358 discloses sophisticated regimes of scheduling of distributed processing tasks using software agents. In the face of schedule slippage, such a system relies upon coordination among multiple agents to work effectively.

U.S. Pat. No. 6,370,560 discloses "a load sharing system . . . . A controller divides a divisible load or task and assigns each segment of the load or task to a processor platform based on the processor platform's resource utilization cost and data link cost."

BRIEF SUMMARY OF THE INVENTION

Multiple tier task allocation maximizes flexibility and productivity of distributed processing participatory computers.

A computer which has been allocated a distributed processing task portion may itself determine to reallocate a portion of its subtask, for example, in order to meet a schedule, or if its performance profile deteriorates below expectation. The described technology localizes further (sub)task portion allocation control to computers having been assigned task portions.

Further task processing division to other computers on the network may be extended to initial task portioning, scheduling, and results collation.

Admittedly, only those tasks capable of being subdivided in some manner may benefit from the described technology.

BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS

FIG. 1 is a block diagram of a suitable computer.
FIG. 2 depicts an example computer network.
FIG. 3 depicts example tasks.
FIG. 4 depicts relevant distributed processing application components.
FIG. 5 depicts an abstract of a distributed processing message.
FIG. 6 depicts distributed processing steps.
FIG. 7 depicts examples of processing distribution and results collation.

DETAILED DESCRIPTION OF THE INVENTION

FIG. 1 is a block diagram of a computer 100 which comprises at least a CPU 101; storage 102, which comprises memory 103 and optionally one or more devices with retention medium(s) 104 such as hard disks, diskettes, compact disks (e.g. CD-ROM), or tape; a device 109 for connection to a network 99; an optional display device 105; and optionally one or more input devices 106, examples of which include but are not exclusive to, a keyboard 108, and/or one or more pointing devices 107, such as a mouse. Such a computer 100 is suitable for the described technology.

FIG. 2 is a block diagram of distributed processing participatory computers 100 connected to each other through a network 99. Computers 100 are participatory based upon having installed required software and, optionally, meeting specified conditions for participation. Example conditions include sufficient processing power, storage, network bandwidth or reliability, or adequate security precautions, such as a particular installed operating system.

Computer 11 in FIG. 2 is depicted in the role of an allocating computer, signifying initial allocation of task portions. Likewise, other computers in FIG. 2 are signified by their roles. FIGS. 2, 6, and 7 are used for example explanation of the technology. The roles of computers are envisioned as transitory: for example, a computer initiating distributed processing and allocating task portions for its task may next have a task or sub-task portion allocated to it by another computer in a succeeding task.

A network 99 may be any means by which computers are connected for software program or data transfer. The described technology relies upon network connectivity, including inter-application messaging and data or software transfer capabilities that are well known.

Participatory computers have software installed enabling the desired distributed processing. The software may be installed by download through network 99 connection, or via a more traditional local retention medium, such as CD-ROM or floppy disk.
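As a non-limiting illustration of these participation conditions, a self-qualification check might be sketched in Python as follows; the class, field, and threshold names are hypothetical rather than taken from the disclosure.

    from dataclasses import dataclass

    @dataclass
    class HostProfile:
        """Snapshot of a would-be participatory computer (hypothetical fields)."""
        cpu_cores: int
        free_storage_gb: float
        bandwidth_mbps: float
        software_installed: bool
        os_name: str

    # Example condition thresholds; the patent leaves these unspecified.
    MIN_CORES = 1
    MIN_STORAGE_GB = 1.0
    MIN_BANDWIDTH_MBPS = 5.0
    APPROVED_OS = {"linux", "windows", "macos"}

    def is_participatory(host: HostProfile) -> bool:
        """A computer participates only if the required software is installed
        and it meets the optional resource/security conditions."""
        return (host.software_installed
                and host.cpu_cores >= MIN_CORES
                and host.free_storage_gb >= MIN_STORAGE_GB
                and host.bandwidth_mbps >= MIN_BANDWIDTH_MBPS
                and host.os_name in APPROVED_OS)

    print(is_participatory(HostProfile(4, 120.0, 100.0, True, "linux")))  # True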
The desired distributed processing may take various forms. FIG. 3 illustrates examples.

One example is a divisible and distributable chunk of data requiring a single processing, as depicted in FIG. 3a, split into portions so that the various participatory computers can process the data portions. The task data 70d is shown portioned into equal quarter task portions 71d. A task portion has been further split into subtask portions 72d.
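To make the FIG. 3a data-division example concrete, a short sketch follows; it assumes list-shaped task data, and the helper name is invented for illustration.

    def portion(data: list, parts: int) -> list[list]:
        """Split a divisible data task into near-equal portions (cf. 70d -> 71d)."""
        size, rem = divmod(len(data), parts)
        out, start = [], 0
        for i in range(parts):
            end = start + size + (1 if i < rem else 0)
            out.append(data[start:end])
            start = end
        return out

    task_data = list(range(100))           # 70d: the task data
    task_portions = portion(task_data, 4)  # 71d: equal quarter task portions
    subtask_portions = portion(task_portions[0], 2)  # 72d: subtasks of one portion
    print(len(task_portions), [len(p) for p in subtask_portions])  # 4 [13, 12]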
An example at the other end of the spectrum, depicted in FIG. 3b, is a series of processing steps which to some extent may overlap, whereby each of the participatory computers performs some portion of the task 70. Task 70p processing can be portioned into task portions 71p (82a-84a and 82y-84y/z). Further, a subtask portion 72p could be allocated at specific processing steps (83a/b or 84y/z). Note that synchronization may be an issue, such as in FIG. 3b where processing step 83b requires the output of preceding steps 82a and 82y to proceed. There may also be a results collation 85 step. Between the extreme examples lies divisible and distributable data capable of being processed in an overlap (not exclusively serial) manner.
One possible employment scenario for the described technology is a set of participatory computers running one or more applications which intermittently require excessive (to a single computer) processing. Distributed processing may be used as a remedy for those times when a singular computer may otherwise bog down or be insufficient. In this scenario, any computer with excessive processing needs may initiate shared task processing, either by direct allocation of task portions, or by directing another computer to perform task portion allocation and attendant processing.
Note that the term "allocate" and its conjugations may refer to initial allocation or subsequent sub-allocation: after all, the allocation process is self-similar. In the preferred embodiment, allocation (and sub-allocation) necessarily implies portioning of a (sub)task prior to transferring a portion to another computer. In an alternative embodiment, depicted in FIG. 7d, a task or (sub)task portion may be (sub-)allocated by transfer 90 of the (sub)task to another computer 10 prior to any portioning by the initially transferring computer 11, with a request or directive that a portion be (sub-)allocated 91 to the computer 11 initiating the transfer, thus putting the overhead of (sub-)allocation on the recipient 10 rather than the initially transferring computer 11.
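The FIG. 7d alternative reduces to a two-message exchange. A rough Python sketch follows; the message classes and computer identifiers are invented for illustration, as the patent prescribes no wire format.

    from dataclasses import dataclass

    @dataclass
    class Transfer:
        """Message 90: the whole (sub)task moves before any portioning."""
        task: list
        reply_to: str   # computer 11, the initially transferring computer

    @dataclass
    class SubAllocation:
        """Message 91: a portion is (sub-)allocated back to the initiator."""
        portion: list
        origin: str

    def handle_transfer(msg: Transfer, self_id: str = "computer-10") -> SubAllocation:
        # Recipient 10 bears the portioning overhead, then directs a
        # portion back to the transfer initiator 11.
        half = len(msg.task) // 2
        keep, give_back = msg.task[:half], msg.task[half:]
        # ... recipient would process `keep` locally ...
        return SubAllocation(portion=give_back, origin=self_id)

    reply = handle_transfer(Transfer(task=list(range(10)), reply_to="computer-11"))
    print(len(reply.portion))  # 5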
FIG. 4 depicts an exemplary embodiment of relevant components of a distributed processing program, some of which are optional, depending upon embodiment; other components, such as user interface, event handling, and the actual processing modules, likely exist. Components may have different configurations in different embodiments.

While an application program is used as the preferred embodiment, an alternative preferred embodiment may incorporate all or portions of the described distributed processing functionality in an operating system.

An overall coordinator 20 may be employed to ensure proper interaction between the relevant distributed processing modules. In one embodiment, certain modules may be missing from an application on a particular computer, in which case the coordinator 20 would know the (limited) capabilities of the application, and compensate accordingly. Operationally, that compensation may take the form of knowing, by an addressor 29 with a database tracking such capabilities, of suitable computers with adequate capabilities to take on jobs which a coordinator 20 needs to off-load.

For example, a computer with limited storage or processing power may not have a scheduler 22 or collator 27, whereby a coordinator 20 off-loads those jobs to an appropriate computer. A powerful computer with relatively poor network capacity (speed or reliability) may be shunned from communication-intensive jobs, such as collation 7. In this embodiment, the distributed processing application may be heterogeneous, comprising relative capabilities according to computer capacity.
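The addressor 29's capability database admits a very small sketch; the registry contents and module names beyond the patent's reference numerals are assumptions.

    # Hypothetical capability registry consulted by an addressor 29.
    CAPABILITIES = {
        "computer-10": {"scheduler", "estimator", "processor"},
        "computer-13": {"processor"},                      # limited host
        "computer-15": {"processor", "collator", "scheduler"},
    }

    def find_host_for(job: str) -> str | None:
        """Return some computer whose application includes the needed module,
        letting a coordinator 20 off-load jobs it cannot perform locally."""
        for host, modules in CAPABILITIES.items():
            if job in modules:
                return host
        return None

    print(find_host_for("collator"))  # computer-15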
Messages are passed as required, including, for example, the following types of messages 61: (sub)task portion allocation; data 62 or code 63 transfer; cancellation; scheduling: directives or estimation initiation or results; processing: directives (such as initiation, suspension, or collation) and results 65. FIG. 5 depicts an abstract of a distributed processing message; intended for conceptual understanding and suggestion, not specific implementation (as this message format is not particularly efficient). Not all fields shown would necessarily be used for each message type 61, and other fields may be required depending upon message type 61 or embodiment.
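A literal rendering of the FIG. 5 message abstract might look like the following sketch; the enum values echo the figure, while the class layout is an assumption, the patent itself noting this format is suggestive rather than efficient.

    from dataclasses import dataclass
    from enum import Enum
    from typing import Any, Optional

    class MessageType(Enum):   # 61: action-specific message type
        ALLOCATE = "allocate"
        CANCEL = "cancel"
        RESULT = "result"
        COLLATE = "collate"

    @dataclass
    class Message:
        portion_id: str                  # 60: (sub)task portion, identifiable by division
        type: MessageType                # 61
        data: Optional[Any] = None       # 62: data or a reference to it
        code: Optional[str] = None       # 63: software or a reference (e.g. a URL)
        status: Optional[str] = None     # 64: status/directive; depends on type
        results: Optional[Any] = None    # 65: depends on message type

    msg = Message(portion_id="2/5", type=MessageType.ALLOCATE, data=[1, 2, 3])
    print(msg.type.value, msg.portion_id)  # allocate 2/5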
(Sub)task portions may be identifiable by their division, such as, for example: 2/5-1/4-2/3, where each set of numbers indicates a (sub)task division. 2/5, for example, would be part 2 of 5 portions. The point is to allow portioning by an allocator 21 and recombination of results by a collator 27. A table or database may be kept and transferred as necessary that identifies actual and/or possible (sub)task portions.

Data 62 or executable software code 63 or references to them may be transferred via messaging. Status/directive 64 and result 65 depend on message type 61.
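The hierarchical division identifier above mechanizes readily. A minimal sketch, assuming the 2/5-1/4-2/3 notation denotes part k of n at each tier:

    def parse_division(portion_id: str) -> list[tuple[int, int]]:
        """'2/5-1/4-2/3' -> [(2, 5), (1, 4), (2, 3)]: at each tier, part k of n
        divisions, so lineage survives multi-tier sub-allocation."""
        pairs = []
        for level in portion_id.split("-"):
            part, total = level.split("/")
            pairs.append((int(part), int(total)))
        return pairs

    def sub_portion_id(parent_id: str, part: int, total: int) -> str:
        """Identifier for a subtask portion divided from a parent portion."""
        return f"{parent_id}-{part}/{total}"

    print(parse_division("2/5-1/4-2/3"))    # [(2, 5), (1, 4), (2, 3)]
    print(sub_portion_id("2/5-1/4", 2, 3))  # 2/5-1/4-2/3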
Keeping track of processing times of allocated (sub)tasks (including CPU overhead and other performance factors) by computer is recommended as a way to calibrate future allocations.

FIG. 6 outlines the steps for the described multiple tier distributed processing. FIG. 7 illustrates examples of the distribution process.

An allocating computer 11 allocates a portion of a task to another computer 10 in step 1. As depicted in FIG. 7a, an allocating computer 11 may allocate task portions to multiple computers (10 and 14). An allocator 21 may be employed for task and subtask portioning and transfer, and for tracking such (sub-)allocations and portions.

Optionally, an allocating 11 (or sub-allocating 10) or allocated 13 computer may set a completion schedule (step 2) for the time by which results should be available. Depending upon the nature of the task, a schedule may be a single completion time for an allocated portion, or for intermediate computations as well. Ostensibly, a schedule is the raison d'être for multiple tier subtask sub-allocation, but subtask sub-allocation may be driven by anticipation of available resources which later fail to appear forthcoming. For example, an allocated computer 13 may become busier than historical usage would indicate, making (sub)task portion offloading prudent.

If scheduling is a factor, an estimated completion time calculation (step 3) is advised. The availability and speed of resources, such as processor(s) 101 and storage 102, may naturally figure into such calculation. Estimation calculations may be done by any participatory computer with sufficient information.

As depicted in FIG. 4, an allocator 21 may employ a scheduler 22, which may employ an estimator 23, to perform processing steps 2 and 3 respectively.
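Step 3 invites a small model. A completion-time estimator might be sketched as follows, under the assumption of a simple work/rate calculation; nothing in the patent fixes a formula.

    def estimate_completion_seconds(work_units: float,
                                    units_per_second: float,
                                    cpu_utilization: float) -> float:
        """Estimate completion time from resource availability and speed.
        Assumes spare capacity scales with (1 - current CPU utilization)."""
        if not 0.0 <= cpu_utilization < 1.0:
            raise ValueError("utilization must be in [0, 1)")
        effective_rate = units_per_second * (1.0 - cpu_utilization)
        return work_units / effective_rate

    # A portion of 5,000 units on a host doing 100 units/s at 60% busy:
    print(round(estimate_completion_seconds(5000, 100, 0.6), 1))  # 125.0 seconds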
The overhead of distribution may be considered by an estimator 23 or scheduler 22 as a factor in (sub-)allocation. Distribution overhead includes the time and resources to portion and distribute subtask portions, and to collect and collate results. Depending on the network, communication lags may also be a factor. Excessive (sub)task portion (sub-)allocation granularity is conceivable and should be accounted for. A suggested rule is that the estimated overhead of (sub-)allocation should be a fraction of estimated processing time if processing time is the bottleneck; storage 102 capacity or other such bottlenecks necessitate similar consideration.

An estimate of processing capability may be ascertained for a computer targeted for processing prior to (sub-)allocation, so as to portion (sub)tasks accordingly.
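The suggested granularity rule can be stated directly in code. A sketch follows, with an invented 10% overhead fraction as the tunable default.

    def worth_suballocating(estimated_processing_s: float,
                            estimated_overhead_s: float,
                            max_overhead_fraction: float = 0.1) -> bool:
        """Sub-allocate only when portioning, distribution, and collation
        overhead is a small fraction of the processing time it saves."""
        return estimated_overhead_s <= max_overhead_fraction * estimated_processing_s

    print(worth_suballocating(120.0, 5.0))  # True: overhead is ~4% of processing
    print(worth_suballocating(3.0, 5.0))    # False: too granular to pay off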
For whatever reason, in step 4, a computer 10 with an allocated task portion 71 decides to sub-allocate a portion 72 of its allotted subtask to another computer 13, as depicted in FIG. 7a.

Participatory computers with (sub-)allocated (sub)task portions perform required processing per step 5. The generic processor 24 signifies the performer of step 5. An initiator 25 may synchronize with other processors 24 if necessary. A computer may be watchful (a possible coordinator 20 job) and sub-allocate after beginning processing, upon realizing sub-allocation as a prudent measure because of some unanticipated constraint, such as, for example, high CPU utilization (processing overhead) or suddenly limited storage. A suspender 26 may suspend processing, saving state as necessary for later resumption.
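The watchful behavior during step 5 amounts to a mid-flight check. A minimal sketch, with hypothetical thresholds and probe callbacks:

    CPU_LIMIT = 0.90          # invented thresholds for illustration
    MIN_FREE_BYTES = 10**9

    def should_suballocate(cpu_utilization: float, free_bytes: int) -> bool:
        """Mid-processing check: offload remaining work if an unanticipated
        constraint (high CPU load, suddenly limited storage) appears."""
        return cpu_utilization > CPU_LIMIT or free_bytes < MIN_FREE_BYTES

    def process_watchfully(units: list, cpu_probe, disk_probe):
        """Process units one by one; return (done, remainder to sub-allocate)."""
        done = []
        for i, unit in enumerate(units):
            if should_suballocate(cpu_probe(), disk_probe()):
                return done, units[i:]   # a suspender 26 would save state here
            done.append(unit * unit)     # stand-in for real work
        return done, []

    done, rest = process_watchfully(list(range(8)), lambda: 0.5, lambda: 2 * 10**9)
    print(len(done), len(rest))  # 8 0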
Depending upon embodiment, processing may occur only under specified conditions, for example, only when a computer is past a threshold state deemed idle. Other conditions, such as available storage 102, or network 99 connection speed or reliability, may also be pertinent allocation or processing criteria. If processing is conditional, temporary results may be stashed (locally or elsewhere on the network) for later resumption. A processor 24 initiator 25 and suspender 26 may, for example, respectively detect and act upon onset and termination of specified threshold conditions.
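Conditional, idle-gated processing pairs the onset and termination of a threshold state with initiation and suspension. A sketch under assumptions (the idle test is a stand-in for sampling real input activity or load):

    class IdleGate:
        """Start processing when a host enters the idle state (initiator 25);
        suspend and stash partial results when idleness ends (suspender 26)."""

        def __init__(self, idle_threshold: float = 0.2):
            self.idle_threshold = idle_threshold  # CPU load deemed 'idle' below this
            self.running = False
            self.stash = []                       # temporary results for resumption

        def on_sample(self, cpu_load: float, partial_result=None) -> bool:
            idle = cpu_load < self.idle_threshold
            if idle and not self.running:
                self.running = True               # onset: initiate or resume
            elif not idle and self.running:
                self.running = False              # termination: suspend, save state
                if partial_result is not None:
                    self.stash.append(partial_result)
            return self.running

    gate = IdleGate()
    print(gate.on_sample(0.1), gate.on_sample(0.9, partial_result="rows 0-499"))
    # True False  (started while idle, then suspended with results stashed)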
Step 6 specifies transferring results. This step may not be necessary, depending upon the task 70. Likewise, in step 7, results are optionally collated by one or more participatory computers, with results monitoring as required. Results monitoring and collation may itself become a distributed task. Collators 27 on multiple computers may collaborate to piece together and conclude the task.
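Collation per step 7 is largely bookkeeping over the division identifiers described earlier. A brief sketch, simplified to a single tier of part-k-of-n identifiers:

    def collate(result_sets: dict[str, list]) -> list | None:
        """Assemble results keyed by single-tier portion id 'k/n'; returns the
        collated whole once every part 1..n has arrived, else None."""
        totals = {int(pid.split("/")[1]) for pid in result_sets}
        if len(totals) != 1:
            raise ValueError("inconsistent division identifiers")
        n = totals.pop()
        if len(result_sets) < n:
            return None                  # still waiting on portions
        ordered = sorted(result_sets.items(), key=lambda kv: int(kv[0].split("/")[0]))
        final = []
        for _, chunk in ordered:
            final.extend(chunk)
        return final

    parts = {"2/3": [4, 5], "1/3": [1, 2, 3], "3/3": [6]}
    print(collate(parts))  # [1, 2, 3, 4, 5, 6]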
With the notable exception of 53', FIG. 7a depicts results returned to the computer which allocated (or sub-allocated) the task (subtask) portion (50, 53, 54) for collation. But, as shown by example, results may be sent 53' to the allocating computer 11 instead of or in addition to that computer 10 that (sub-)allocated a (sub)task portion.

FIG. 7c depicts results being transmitted (likely for collation) to a different computer 15 than the allocating computer 11. This strategy may make sense, for example, when a series of tasks are allocated in succession: a division of duty between an allocating computer 11 and a results-collating computer 15. Final results may be sent to the allocating computer 11 or other computers by the collating computer 15 as necessary.

FIG. 7b depicts a situation where an allocated computer 13 is processing multiple subtask portions allocated by different computers (12, 14). This is doable given identifiable portions as suggested.

Task or subtask portions may be redundantly assigned as a precaution. Redundant (sub)allocation may be sensible given scheduling constraints.

Security may be an issue. Data, results, messages, or other content may be encrypted as required.
The invention claimed is:

1. A computer-implemented method comprising:
a first computer receiving via network communication a plurality of sets of calculated results from a plurality of computers,
wherein said plurality of sets are calculated from portions of a single computing task,
wherein said first computer receiving a second set of said plurality of sets,
said second set comprising results from a second computer calculating a second task portion after said second computer received said second task portion from a third computer,
said second task portion being divided from a third task portion,
said third task portion comprising after division said second task portion and a fourth task portion,
wherein said first computer receiving a fourth set of said plurality of sets,
said fourth set comprising results from said third computer calculating a fourth task portion after said third computer receiving said third task portion from a fourth computer; and
said first computer collating said plurality of sets into a final result set.

2. The method according to claim 1, wherein said network comprises a wide-area network.

3. The method according to claim 1, further comprising: said fourth computer coordinating distribution of a plurality of task portions, including said third task portion to said third computer.

4. The method according to claim 3, further comprising: said fourth computer communicating a schedule to said third computer related to said third task portion.

5. The method according to claim 3, further comprising: said fourth computer distributing executable software to said third computer related to said third task portion.

6. The method according to claim 3, further comprising: said fourth computer dividing said single computing task into a plurality of task portions.

7. The method according to claim 1, wherein said first computer and said fourth computer comprise the same computer.

8. A computer-implemented method comprising:
a fourth computer receiving by inter-computer communication a plurality of result sets,
wherein a first result set received by said fourth computer comprises data resultant from a first computer computing a first portion of a task,
said first portion received by said first computer from a second computer via inter-computer communication,
wherein a second result set received by said fourth computer comprises data resultant from said second computer computing a second portion of said task,
wherein said first and second portions received by said second computer from a third computer via inter-computer communication,
wherein said second computer allocated said first portion to said first computer based upon a computed determination by said second computer; and
said fourth computer collating said plurality of result sets.

9. The method according to claim 8, further comprising: said third computer conditionally sending said first and second task portions to said second computer.

10. The method according to claim 8, wherein said first result set comprises data related to processing time in computing said first task portion.
11. The method according to claim 8, wherein said computed determination by said second computer comprises a scheduling consideration.

12. The method according to claim 8, wherein said computed determination by said second computer comprises consideration of available computing resources.

13. The method according to claim 8, further comprising: said fourth computer receiving data regarding processing duration related to at least one result set.

14. A computer-implemented method comprising:
a first computer receiving from a plurality of computers a plurality of results related to a task,
wherein said task comprises a plurality of task portions,
wherein at least one said task portion comprises a plurality of subtask portions,
wherein a first result received by said first computer is calculated from a first subtask portion by a fourth computer,
said first subtask portion received by said fourth computer from a third computer,
said first subtask portion being a divisible portion of a first task portion, and
wherein said third computer received said first task portion from a second computer;
said second computer dividing said task into a plurality of task portions, including said first task portion; and
wherein said receiving occurs via network communication.

15. The method according to claim 14, further comprising: said second computer conditionally sending said first task portion to said third computer.

16. The method according to claim 14, wherein said first computer and said second computer comprise the same computer.

17. The method according to claim 14, wherein said first result comprises data related to duration of calculation of said first subtask portion.

18. The method according to claim 14, further comprising: said third computer sending said first subtask portion to said fourth computer at least partly based upon a schedule associated with said first task portion.

19. The method according to claim 14, further comprising: said third computer conditionally determining to send said first subtask portion to said fourth computer.

20. The method according to claim 19, wherein said conditional determination based upon data related to said fourth computer.