Coho Licensing LLC v. Twitter Inc.
Filing
1
COMPLAINT FOR PATENT INFRINGEMENT filed with Jury Demand against Twitter Inc. - Magistrate Consent Notice to Pltf. ( Filing fee $ 400, receipt number 0311-1370198.) - filed by Coho Licensing LLC. (Attachments: # 1 Exhibit A, # 2 Exhibit B, # 3 Civil Cover Sheet)(cla, )
Exhibit A
US008024395B1

(12) United States Patent: Odom
(10) Patent No.: US 8,024,395 B1
(45) Date of Patent: Sep. 20, 2011

(54) DISTRIBUTED PROCESSING MULTIPLE TIER TASK ALLOCATION

(76) Inventor: Gary Odom, Portland, OR (US)

(*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 508 days.

(21) Appl. No.: 10/228,588

(22) Filed: Aug. 26, 2002

Related U.S. Application Data
(60) Provisional application No. 60/317,108, filed on Sep. 4, 2001.

(51) Int. Cl. G06F 15/16 (2006.01)
(52) U.S. Cl. 709/201; 709/223; 709/226
(58) Field of Classification Search: 705/37; 709/226, 201, 208, 223; 718/105. See application file for complete search history.

(56) References Cited

U.S. PATENT DOCUMENTS
3,662,401 A *  5/1972  Collins et al. ... 718/103
5,025,369 A *  6/1991  Schwartz ... 718/100
6,112,225 A *  8/2000  Kraft et al. ... 709/202
6,167,427 A * 12/2000  Rabinovich et al. ... 709/201
6,192,388 B1 *  2/2001  Cajolet ... 718/100
6,263,358 B1 *  7/2001  Lee et al. ... 718/102
6,370,560 B1 *  4/2002  Robertazzi et al. ... 718/105
6,463,457 B1 * 10/2002  Armentrout et al. ... 709/201
6,775,831 B1 *  8/2004  Carrasco et al. ... 707/200
6,782,422 B1 *  8/2004  Bahl et al. ... 709/224
6,826,753 B1 * 11/2004  Dageville et al. ... 718/102
7,188,113 B1 *  3/2007  Thusoo ... 1/1
7,383,426 B2 *  6/2008  Chung et al. ... 712/220
7,668,800 B2 *  2/2010  Motoyama et al. ... 707/999.001
7,849,178 B2 * 12/2010  Shen et al. ... 709/223
2004/0045002 A1 *  3/2004  Berger et al. ... 718/102
2004/0264503 A1 * 12/2004  Draves, Jr. ... 370/469
2008/0216859 A1 *  9/2008  Chan ... 132/224
2009/0204470 A1 *  8/2009  Weyl et al. ... 705/9

OTHER PUBLICATIONS
Kao, "Subtask deadline assignment for complex distributed soft real time tasks," Proceedings of the 14th International Conference on Distributed Computing Systems, Jun. 21-24, 1994, pp. 172-181, USA.
Lee, "Some simple task assignment problems for distributed networked agents," Fourth International Conference on Knowledge-Based Intelligent Engineering Systems and Allied Technologies, 2000, Proceedings, vol. 1, Aug. 30-Sep. 1, 2000, pp. 305-308, USA.

* cited by examiner

Primary Examiner: Faruk Hamza

(57) ABSTRACT

Described is a system and methods for multiple tier distribution of task portions for distributed processing. Essentially, a task is divided into portions by a first computer and a task portion transferred to a second participatory computer on the network, whereupon an allocated task portion is again portioned by the second computer into subtask portions, and a subtask portion transferred by the second computer to a third participatory computer on the network, whereby distributed processing transpires, and results collated as required.

20 Claims, 6 Drawing Sheets

[Representative drawing: a network of an allocating computer 11, sub-allocating computers 10 and 12, an allocated computer 13, a computer 14, and a collating computer 15.]
U.S. Patent    Sep. 20, 2011    Sheet 1 of 6    US 8,024,395 B1

[FIGURE 1: block diagram of a computer 100, comprising a CPU 101; storage 102, comprising memory 103 and retention device(s) 104; a display device 105; input device(s) 106, including a pointing device (e.g. mouse) 107 and a keyboard 108; and a network connection device 109.]
U.S. Patent    Sep. 20, 2011    Sheet 2 of 6    US 8,024,395 B1

[FIGURE 2: example network of participatory computers: allocating computer 11, sub-allocating computers 10 and 12, allocated computer 13, computer 14, and collating computer 15.]
U.S. Patent    Sep. 20, 2011    Sheet 3 of 6    US 8,024,395 B1

[FIGURE 3A: a task (data) 70d portioned into task portions 71d, with a task portion further split into subtask portions 72d.]

[FIGURE 3B: a task 70p comprising a series of processing steps (81 through 85, including parallel step chains 82a-84a and 82y-84y/z), portioned into task portions 71p, with a subtask portion 72p allocated at specific steps and a final collate step 85.]
U.S. Patent    Sep. 20, 2011    Sheet 4 of 6    US 8,024,395 B1

[FIGURE 4: distributed processing application components: coordinator 20, allocator 21, scheduler 22, estimator 23, processor 24, initiator 25, suspender 26, collator 27, communicator 28, addressor 29.]
[FIGURE 5: abstract of a distributed processing message:
60 (sub)task portion: identifiable by division;
61 message type: action-specific (e.g. allocate, cancel, result, collate);
62 data: data or data reference, depends on message type;
63 code: software or software reference, depends on message type;
64 status/directive: depends on message type;
65 results: depends on message type.]
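For illustration only (not part of the patent text), the following is a minimal Python sketch of the FIG. 5 message abstract; the class and field names are assumptions, and the patent itself notes this format is conceptual rather than an efficient wire format:

    from dataclasses import dataclass
    from enum import Enum, auto
    from typing import Optional

    class MessageType(Enum):      # 61: action-specific message type
        ALLOCATE = auto()
        CANCEL = auto()
        SCHEDULE = auto()
        ESTIMATE = auto()
        RESULT = auto()
        COLLATE = auto()

    @dataclass
    class Message:
        division: str                    # 60: (sub)task portion id, e.g. "2/5-1/4"
        msg_type: MessageType            # 61
        data: Optional[bytes] = None     # 62: data, or a reference such as a URL
        code: Optional[str] = None       # 63: software, or a reference to it
        directive: Optional[str] = None  # 64: status/directive, per message type
        results: Optional[bytes] = None  # 65: results, per message type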
U.S. Patent    Sep. 20, 2011    Sheet 5 of 6    US 8,024,395 B1

[FIGURE 6: distributed processing steps:
1 - ALLOCATING COMPUTER: ALLOCATE TASK PORTION TO A COMPUTER
2 - SET COMPLETION SCHEDULE (OPTIONAL)
3 - ESTIMATE COMPLETION TIME (OPTIONAL)
4 - ALLOCATED COMPUTER: SUB-ALLOCATE SUBTASK PORTION
5 - PROCESSING COMPUTERS: PROCESS TASK PORTION/SUBTASK
6 - PROCESSING COMPUTERS: TRANSFER RESULTS (OPTIONAL)
7 - COLLATING COMPUTER(S): COLLATE RESULTS (OPTIONAL)
8 - RESULTS COMPUTER(S): TRANSFER RESULTS (OPTIONAL)]
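As a concrete illustration only (not from the patent), a minimal single-process Python sketch of the FIG. 6 flow (steps 1, 4, 5, and 7); the function names are invented, and in-memory calls stand in for the network transfers an actual embodiment would use:

    def split(items, n):
        """Divide a task (here, a list) into n roughly equal portions."""
        k, m = divmod(len(items), n)
        return [items[i*k + min(i, m):(i+1)*k + min(i+1, m)] for i in range(n)]

    def process(portion):
        """Step 5: stand-in processing; here, summing squares."""
        return sum(x * x for x in portion)

    def allocated_computer(subtask):
        return process(subtask)

    def sub_allocating_computer(task_portion):
        # Step 4: a computer with an allocated portion itself sub-allocates
        subtasks = split(task_portion, 2)
        return [allocated_computer(s) for s in subtasks]

    def allocating_computer(task):
        # Step 1: initial allocation of task portions
        portions = split(task, 3)
        results = []
        for p in portions:
            results.extend(sub_allocating_computer(p))
        # Step 7: collate results
        return sum(results)

    assert allocating_computer(list(range(100))) == sum(x * x for x in range(100))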
U.S. Patent    Sep. 20, 2011    Sheet 6 of 6    US 8,024,395 B1

[FIGURES 7A-7D: examples of processing distribution and results collation among computers 10-15, including (sub-)allocation and results transfer (40, 50, 53, 53', 54); collation, with an alternative embodiment collating at a separate computer; and an alternative embodiment in which a (sub)task is transferred (90) before portioning, with a request (91) that a portion be allocated back.]
US 8,024,395 B1

DISTRIBUTED PROCESSING MULTIPLE TIER TASK ALLOCATION

CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims priority benefit under 35 U.S.C. §119(e) of U.S. Provisional Application No. 60/317,108, filed Sep. 4, 2001.

STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT

Not Applicable

THE NAMES OF THE PARTIES TO A JOINT RESEARCH AGREEMENT

Not Applicable

INCORPORATION-BY-REFERENCE OF MATERIAL SUBMITTED ON A COMPACT DISC

Not Applicable

BACKGROUND OF THE INVENTION

1. Field of the Invention

The relevant technical field is computer software, specifically distributed processing in a networked environment.

2. Description of the Related Art Including Information Disclosed Under 37 CFR 1.97 and 1.98

In what is not ironically called a "network effect", the advantage of distributed processing is positively correlated to availability of powerful computers in a networked environment. This trend is especially encouraged by always-on broadband connection to the ultimate wide-area network: the Internet.

U.S. Pat. No. 6,192,388 details "detecting available computers to participate in computationally complex distributed processing problems", and switching an allocated task portion to a different computer if the one first assigned the task portion becomes occupied. U.S. Pat. No. 6,192,388 also describes some of the resource factors involved in determining whether to allocate a task portion to a computer.

With some content overlap to the earlier-filed U.S. Pat. No. 6,192,388, U.S. Pat. No. 6,112,225 describes a "task distribution processing system and the method for subscribing computers to perform computing tasks during idle time", and goes into detail as to various ways of specifying "idle time". Both U.S. Pat. Nos. 6,192,388 and 6,112,225, incorporated herein by reference, use the same computer for allocating, monitoring and re-allocating task portions.

U.S. Pat. No. 6,263,358 describes sophisticated regimes of scheduling of distributed processing tasks using software agents. In the face of schedule slippage, such a system relies upon coordination among multiple agents to work effectively.

BRIEF SUMMARY OF THE INVENTION

Multiple tier task allocation maximizes flexibility and productivity of distributed processing participatory computers. A computer which has been allocated a distributed processing task portion may itself determine to reallocate a portion of its subtask, for example, in order to meet a schedule, or if its performance profile deteriorates below expectation. The described technology localizes further (sub)task portion allocation control to computers having been assigned task portions.

Further task processing division to other computers on the network may be extended to initial task portioning, scheduling, and results collation.

Admittedly, only those tasks capable of being subdivided in some manner may benefit from the described technology.

BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS

FIG. 1 is a block diagram of a suitable computer.
FIG. 2 depicts an example computer network.
FIG. 3 depicts example tasks.
FIG. 4 depicts relevant distributed processing application components.
FIG. 5 depicts an abstract of a distributed processing message.
FIG. 6 depicts distributed processing steps.
FIG. 7 depicts examples of processing distribution and results collation.

DETAILED DESCRIPTION OF THE INVENTION

FIG. 1 is a block diagram of a computer 100 which comprises at least a CPU 101; storage 102, which comprises memory 103 and optionally one or more devices with retention medium(s) 104 such as hard disks, diskettes, compact disks (e.g. CD-ROM), or tape; a device 109 for connection to a network 99; an optional display device 105; and optionally one or more input devices 106, examples of which include but are not exclusive to, a keyboard 108, and/or one or more pointing devices 107, such as a mouse. Such a computer 100 is suitable for the described technology.

FIG. 2 is a block diagram of distributed processing participatory computers 100 connected to each other through a network 99. Computers 100 are participatory based upon having installed required software and, optionally, meeting specified conditions for participation. Example conditions include sufficient processing power, storage, network bandwidth or reliability, or adequate security precautions, such as a particular installed operating system.
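By way of illustration only (not from the patent), a minimal Python sketch of such a participation check; the thresholds and the stdlib probes are assumptions:

    import os
    import shutil
    import platform

    # Illustrative participation criteria (thresholds are arbitrary assumptions).
    MIN_CPUS = 2
    MIN_FREE_STORAGE = 1 << 30          # 1 GiB
    ALLOWED_OS = {"Linux", "Darwin", "Windows"}

    def may_participate(path="/"):
        """Check example conditions for participation: processing power,
        storage, and a recognized operating system."""
        cpus = os.cpu_count() or 1
        free = shutil.disk_usage(path).free
        return (cpus >= MIN_CPUS
                and free >= MIN_FREE_STORAGE
                and platform.system() in ALLOWED_OS)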
Computer 11 in FIG. 2 is depicted in the role of an allocating computer, signifying initial allocation of task portions. Likewise, other computers in FIG. 2 are signified by their roles. FIGS. 2, 6, and 7 are used for example explanation of the technology. The roles of computers are envisioned as transitory: for example, a computer initiating distributed processing and allocating task portions for its task may next have a task or sub-task portion allocated to it by another computer in a succeeding task.

A network 99 may be any means by which computers are connected for software program or data transfer. The described technology relies upon network connectivity, including inter-application messaging and data or software transfer capabilities that are well known.

Participatory computers have software installed enabling the desired distributed processing. The software may be installed by download through network 99 connection, or via a more traditional local retention medium, such as CD-ROM or floppy disk.

The desired distributed processing may take various forms. FIG. 3 illustrates examples.

One example is a divisible and distributable chunk of data requiring a single processing, as depicted in FIG. 3a, split into portions so that the various participatory computers can process the data portions. The task data 70d is shown portioned into equal quarter task portions 71d. A task portion has been further split into subtask portions 72d.
An example at the other end of the spectrum, depicted in FIG. 3b, is a series of processing steps which to some extent may overlap, whereby each of the participatory computers performs some portion of the task 70. Task 70p processing can be portioned into task portions 71p (82a-84a and 82y-84y/z). Further, a subtask portion 72p could be allocated at specific processing steps (83a/b or 84y/z). Note that synchronization may be an issue, such as in FIG. 3b where processing step 83b requires the output of preceding steps 82a and 82y to proceed. There may also be a results collation 85 step. Between the extreme examples lies divisible and distributable data capable of being processed in an overlap (not exclusively serial) manner.

One possible employment scenario for the described technology is a set of participatory computers running one or more applications which intermittently require excessive (to a single computer) processing. Distributed processing may be used as a remedy for those times when a singular computer may otherwise bog down or be insufficient. In this scenario, any computer with excessive processing needs may initiate shared task processing, either by direct allocation of task portions, or by directing another computer to perform task portion allocation and attendant processing.

Note that the term "allocate" and its conjugations may refer to initial allocation or subsequent sub-allocation; after all, the allocation process is self-similar. In the preferred embodiment, allocation (and sub-allocation) necessarily implies portioning of a (sub)task prior to transferring a portion to another computer. In an alternative embodiment, depicted in FIG. 7d, a task or (sub)task portion may be (sub-)allocated by transfer 90 of the (sub)task to another computer 10 prior to any portioning by the initially transferring computer 11, with a request or directive that a portion be (sub-)allocated 91 to the computer 11 initiating the transfer, thus putting the overhead of (sub-)allocation on the recipient 10 rather than the initially transferring computer 11.

FIG. 4 depicts an exemplary embodiment of relevant components of a distributed processing program, some of which are optional, depending upon embodiment; other components, such as user interface, event handling, and the actual processing modules, likely exist. Components may have different configurations in different embodiments.

While an application program is used as the preferred embodiment, an alternative preferred embodiment may incorporate all or portions of the described distributed processing functionality in an operating system.

An overall coordinator 20 may be employed to ensure proper interaction between the relevant distributed processing modules. In one embodiment, certain modules may be missing from an application on a particular computer, in which case the coordinator 20 would know the (limited) capabilities of the application, and compensate accordingly. Operationally, that compensation may take the form of knowing, by an addressor 29 with a database tracking such capabilities, of suitable computers with adequate capabilities to take on jobs which a coordinator 20 needs to off-load.

For example, a computer with limited storage or processing power may not have a scheduler 22 or collator 27, whereby a coordinator 20 off-loads those jobs to an appropriate computer. A powerful computer with relatively poor network capacity (speed or reliability) may be shunned from communication-intensive jobs, such as collation 7. In this embodiment, the distributed processing application may be heterogeneous, comprising relative capabilities according to computer capacity.
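A minimal sketch (not from the patent) of how a coordinator might off-load jobs using an addressor-style capability registry; the hostnames, data layout, and first-candidate selection rule are assumptions:

    # Hypothetical capability registry an addressor 29 might keep: for each
    # participatory computer, which module roles it can perform.
    CAPABILITIES = {
        "host-a": {"allocator", "scheduler", "collator", "processor"},
        "host-b": {"processor"},                 # limited: processing only
        "host-c": {"processor", "collator"},
    }

    def offload_target(job, local_host):
        """Pick a computer able to take on a job the local computer lacks
        a module for; returns None when no candidate qualifies."""
        if job in CAPABILITIES.get(local_host, set()):
            return local_host                    # no off-load needed
        candidates = [h for h, caps in CAPABILITIES.items() if job in caps]
        return candidates[0] if candidates else None

    print(offload_target("collator", "host-b"))  # -> host-a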
Messages are passed as required, including, for example, the following types of messages 61: (sub)task portion allocation; data 62 or code 63 transfer; cancellation; scheduling: directives or estimation initiation or results; processing: directives (such as initiation, suspension, or collation) and results 65. FIG. 5 depicts an abstract of a distributed processing message; intended for conceptual understanding and suggestion, not specific implementation (as this message format is not particularly efficient). Not all fields shown would necessarily be used for each message type 61, and other fields may be required depending upon message type 61 or embodiment.

(Sub)task portions may be identifiable by their division, such as, for example: 2/5-1/4-2/3, where each set of numbers indicates a (sub)task division. 2/5, for example, would be part 2 of 5 portions. The point is to allow portioning by an allocator 21 and recombination of results by a collator 27. A table or database may be kept and transferred as necessary that identifies actual and/or possible (sub)task portions.
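To make the division notation concrete, a small Python sketch (illustrative only; the parsing rules are an assumption consistent with the patent's example 2/5-1/4-2/3):

    def parse_division(division):
        """Parse a division identifier such as '2/5-1/4-2/3' into a list of
        (part, of) pairs: part 2 of 5, then part 1 of 4, then part 2 of 3."""
        pairs = []
        for chunk in division.split("-"):
            part, of = chunk.split("/")
            pairs.append((int(part), int(of)))
        return pairs

    def child_division(division, part, of):
        """Identifier for a further sub-portion, as an allocator 21 might
        assign when portioning a received (sub)task portion."""
        return f"{division}-{part}/{of}"

    assert parse_division("2/5-1/4-2/3") == [(2, 5), (1, 4), (2, 3)]
    assert child_division("2/5", 1, 4) == "2/5-1/4"

Sorting results by such identifiers is one way a collator 27 could recombine them in task order.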
Data 62 or executable software code 63 or references to them may be transferred via messaging. Status/directive 64 and result 65 depend on message type 61.

Keeping track of processing times of allocated (sub)tasks (including CPU overhead and other performance factors) by computer is recommended as a way to calibrate future allocations.

FIG. 6 outlines the steps for the described multiple tier distributed processing. FIG. 7 illustrates examples of the distribution process.

An allocating computer 11 allocates a portion of a task to another computer 10 in step 1. As depicted in FIG. 7a, an allocating computer 11 may allocate task portions to multiple computers (10 and 14). An allocator 21 may be employed for task and subtask portioning and transfer, and for tracking such (sub-)allocations and portions.

Optionally, an allocating 11 (or sub-allocating 10) or allocated 13 computer may set a completion schedule (step 2) for the time by which results should be available. Depending upon the nature of the task, a schedule may be a single completion time for an allocated portion, or for intermediate computations as well. Ostensibly, a schedule is the raison d'être for multiple tier subtask sub-allocation, but subtask sub-allocation may be driven by anticipation of available resources which later fail to appear forthcoming. For example, an allocated computer 13 may become busier than historical usage would indicate, making (sub)task portion offloading prudent.

If scheduling is a factor, an estimated completion time calculation (step 3) is advised. The availability and speed of resources, such as processor(s) 101 and storage 102, may naturally figure into such calculation. Estimation calculations may be done by any participatory computer with sufficient information.

As depicted in FIG. 4, an allocator 21 may employ a scheduler 22, which may employ an estimator 23, to perform processing steps 2 and 3 respectively.

The overhead of distribution may be considered by an estimator 23 or scheduler 22 as a factor in (sub-)allocation. Distribution overhead includes the time and resources to portion and distribute subtask portions, and to collect and collate results. Depending on the network, communication lags may also be a factor. Excessive (sub)task portion (sub-)allocation granularity is conceivable and should be accounted for. A suggested rule is that the estimate of (sub-)allocation overhead should be a fraction of estimated processing time if processing time is the bottleneck; storage 102 capacity or other such bottlenecks necessitate similar consideration.
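A sketch of that granularity rule (not from the patent; the one-tenth fraction and the timing parameters are assumptions an estimator 23 might use):

    OVERHEAD_FRACTION = 0.1   # assumed: overhead must stay under 10% of work

    def worth_suballocating(est_processing_s, est_portioning_s,
                            est_transfer_s, est_collation_s):
        """Apply the suggested rule: only sub-allocate when distribution
        overhead (portioning + transfer + collation) is a small fraction
        of the estimated processing time."""
        overhead = est_portioning_s + est_transfer_s + est_collation_s
        return overhead < OVERHEAD_FRACTION * est_processing_s

    # 2 s of overhead against 60 s of processing is acceptable...
    assert worth_suballocating(60.0, 0.5, 1.0, 0.5)
    # ...but against 10 s of processing it is not.
    assert not worth_suballocating(10.0, 0.5, 1.0, 0.5)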
An estimate of processing capability may be ascertained for a computer targeted for processing prior to (sub-)allocation, so as to portion (sub)tasks accordingly.

For whatever reason, in step 4, a computer 10 with an allocated task portion 71 decides to sub-allocate a portion 72 of its allotted subtask to another computer 13, as depicted in FIG. 7a.

Participatory computers with (sub-)allocated (sub)task portions perform required processing per step 5. The generic processor 24 signifies the performer of step 5. An initiator 25 may synchronize with other processors 24 if necessary. A computer may be watchful (a possible coordinator 20 job) and sub-allocate after beginning processing, upon realizing sub-allocation as a prudent measure because of some unanticipated constraint, such as, for example, high CPU utilization (processing overhead) or suddenly limited storage. A suspender 26 may suspend processing, saving state as necessary for later resumption.

Depending upon embodiment, processing may occur only under specified conditions, for example, only when a computer is past a threshold state deemed idle. Other conditions, such as available storage 102, or network 99 connection speed or reliability, may also be pertinent allocation or processing criteria. If processing is conditional, temporary results may be stashed (locally or elsewhere on the network) for later resumption. A processor 24's initiator 25 and suspender 26 may, for example, respectively detect and act upon onset and termination of specified threshold conditions.
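A minimal sketch of such conditional processing (illustrative only; the load-average probe and idle threshold are assumptions, and os.getloadavg is Unix-only):

    import os
    import time

    IDLE_LOAD = 0.5   # assumed threshold: 1-minute load average deemed "idle"

    def is_idle():
        """Crude idle detection via load average (Unix-only)."""
        return os.getloadavg()[0] < IDLE_LOAD

    def process_when_idle(portion, stash):
        """Process items only while idle; stash temporary results so a
        suspender 26-style interruption can resume later (step 5)."""
        for item in portion:
            while not is_idle():
                time.sleep(1)          # suspended: wait for the idle condition
            stash.append(item * item)  # stand-in processing with saved state
        return stash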
Step 6 specifies transferring results. This step may not be necessary, depending upon the task 70. Likewise, in step 7, results are optionally collated by one or more participatory computers, with results monitoring as required. Results monitoring and collation may itself become a distributed task. Collators 27 on multiple computers may collaborate to piece together and conclude the task.

With the notable exception of 53', FIG. 7a depicts results returned to the computer which allocated (or sub-allocated) the task (subtask) portion (50, 53, 54) for collation. But, as shown by example, results may be sent 53' to the allocating computer 11 instead of or in addition to that computer 10 that (sub-)allocated a (sub)task portion.

FIG. 7c depicts results being transmitted (likely for collation) to a different computer 15 than the allocating computer 11. This strategy may make sense, for example, when a series of tasks are allocated in succession: a division of duty between an allocating computer 11 and a results-collating computer 15. Final results may be sent to the allocating computer 11 or other computers by the collating computer 15 as necessary.

FIG. 7b depicts a situation where an allocated computer 13 is processing multiple subtask portions allocated by different computers (12, 14). This is doable given identifiable portions as suggested.

Task or subtask portions may be redundantly assigned as a precaution. Redundant (sub)allocation may be sensible given scheduling constraints.

Security may be an issue. Data, results, messages, or other content may be encrypted as required.

The invention claimed is:

1. A computer-implemented method for distributed processing comprising:
dividing a task into a plurality of task portions;
an allocating computer transferring at least one said task portion to a sub-allocating computer;
said sub-allocating computer receiving said task portion;
said sub-allocating computer dividing said task portion into a plurality of subtask portions;
said sub-allocating computer transferring at least one said subtask portion to an allocated computer;
said allocated computer receiving said subtask portion;
said allocated computer processing said subtask portion, whereby producing at least one result;
said allocated computer transferring said result to a predesignated results computer;
said results computer receiving and storing said result; and
such that all foregoing transferring occurs by network connection.

2. The method according to claim 1, wherein said sub-allocating computer conditionally determines allocating said task portion to said allocated computer.

3. The method according to claim 1, further comprising:
said sub-allocating computer redundantly allocating said subtask portion.

4. The method according to claim 1, wherein said sub-allocating computer receives indicia of predetermined subtask portions of said task portion separate from receiving said task portion.

5. The method according to claim 1, further comprising:
said sub-allocating computer dividing said task portion based upon, at least in part, a schedule and estimation related to said schedule.

6. The method according to claim 5, wherein said schedule is not received from said allocating computer.

7. A computer-implemented method for distributed processing comprising:
dividing a task into a plurality of task portions,
wherein at least one first task portion comprises further divisible portions, hereinafter referred to as subtask portions,
wherein said task comprises at least one of divisible data or divisible executable instruction sets;
an allocating computer allocating said first task portion to a sub-allocating computer via network connectivity;
said sub-allocating computer receiving said first task portion;
said sub-allocating computer dividing said first task portion into a plurality of subtask portions;
said sub-allocating computer allocating at least one said subtask portion to an allocated computer via network connectivity;
said allocated computer receiving said subtask portion;
said allocated computer processing said subtask portion, whereby producing at least one result related to said subtask portion and storing said result;
said allocated computer transferring said result to a results computer;
said results computer receiving a plurality of results related to said first task; and
said results computer collating said results.

8. The method according to claim 7, further comprising:
said sub-allocating computer communicating with said allocated computer regarding subtask allocation prior to allocating said subtask portion to said allocated computer.

9. The method according to claim 7, further comprising:
said results computer receiving redundant results portions.

10. The method according to claim 7, further comprising:
said sub-allocating computer determining said subtask portion allocation by relying partly upon a schedule.
11. The method according to claim 7, wherein said allocating computer and said results computer comprise the same computer.

12. A computer-implemented method for distributed processing comprising:
dividing a task into a plurality of task portions;
an allocating computer allocating at least one said task portion to a sub-allocating computer;
said sub-allocating computer receiving said task portion;
said sub-allocating computer allocating a subtask portion to an allocated computer,
wherein said subtask portion comprises a portion of a task portion;
said allocated computer receiving said subtask portion;
a subtask processing computer processing said subtask portion,
thereby creating and storing at least one result;
said subtask processing computer transferring said result to a results computer; and
said results computer receiving and storing results related to said task from a plurality of computers.

13. The method according to claim 12, wherein said sub-allocating computer partially processes said at least a portion of said task portion prior to allocating said subtask portion to said allocated computer.

14. The method according to claim 12, further comprising:
said sub-allocating computer selecting said allocated computer based, at least in part, upon network communication with at least one other computer.

15. The method according to claim 12, further comprising:
associating a schedule with said subtask portion.

16. The method according to claim 12, wherein said allocated computer and said subtask processing computer comprise the same computer.

17. The method according to claim 12, wherein said allocated computer and said subtask processing computer comprise different computers.

18. The method according to claim 12, wherein said allocated computer processes a plurality of subtask portions received from a plurality of computers.

19. The method according to claim 12, further comprising:
conditionally determining at least one of a sub-allocating computer and an allocated computer based, at least in part, upon data received via network communication.

20. The method according to claim 12, further comprising:
said allocated computer conditionally allocating said subtask portion to said subtask processing computer.