In the United States Court of Federal Claims
BID PROTEST
No. 18-1559C
(Filed Under Seal: March 19, 2019 | Reissued: March 29, 2019)*
KSC BOSS ALLIANCE, LLC,

Plaintiff,

v.

THE UNITED STATES OF AMERICA,

Defendant,

and

PAE-SGT PARTNERS, LLC,

Defendant-Intervenor.

Keywords: Pre-Award Bid Protest; 28 U.S.C. § 1491; Exclusion from the Competitive Range; FAR 15.306; Judgment on the Administrative Record; NASA.
Aron C. Beezley, Bradley Arant Boult Cummings LLP, Washington, DC, for Plaintiff. Patrick R.
Quigley and Sarah S. Osborne, Bradley Arant Boult Cummings LLP, Washington, DC, Of
Counsel.
Elizabeth Anne Speck, Senior Trial Counsel, Commercial Litigation Branch, Civil Division, U.S.
Department of Justice, Washington, DC, for Defendant, with whom were Douglas K. Mickle,
Assistant Director, Robert E. Kirschman, Jr., Director, and Joseph H. Hunt, Assistant Attorney
General. Brian M. Stanford, Senior Attorney, and Lisette S. Washington, Attorney, NASA
Headquarters, Office of the General Counsel, Washington, DC, Of Counsel.
Anuj Vohra, Crowell & Moring LLP, Washington, DC, for Defendant-Intervenor. James G.
Peyster, Olivia L. Lynch, Michelle D. Coleman, Sarah A. Hill, and Payal P. Nanavati, Crowell &
Moring LLP, Washington, DC, Of Counsel.
* This opinion was originally issued under seal, and the parties were given the opportunity to request redactions. The opinion is now reissued with the names of three offerors redacted and denoted as Offeror A, Offeror B, and Offeror C.
OPINION AND ORDER
KAPLAN, Judge.
In this pre-award bid protest, Plaintiff KSC Boss Alliance, LLC (“KBA”) challenges as
arbitrary and capricious NASA’s decision to exclude it from the competitive range in a
procurement for “baseport operations and spaceport services” at John F. Kennedy Space Center
in Cape Canaveral, Florida.1 Currently before the Court are cross-motions for judgment on the
administrative record filed by KBA on the one hand, and the government and Defendant-Intervenor PAE-SGT Partners, LLC (“PSP”), on the other.
For the reasons discussed below, the Court finds that KBA’s arguments lack merit.
KBA’s motion is therefore DENIED and the government and PSP’s cross-motions are
GRANTED.
BACKGROUND
I. The Solicitation
NASA issued Solicitation NNK18619079R (“the Solicitation”) on November 1, 2017.
AR Tab 15 at 13589. The Solicitation requested proposals for the Kennedy Space Center Base
Operations and Spaceport Services Contract (“the BOSS Contract”). Id. at 13582, 13589. Under
the BOSS Contract, the contractor would provide “mission-focused institutional support at the
John F. Kennedy Space Center and NASA facilities on Cape Canaveral Air Force Station.” Id. at
13720. Specifically, the BOSS contractor’s duties would include managing infrastructure and
utilities at Kennedy Space Center as well as coordinating the use of various facilities by several
institutional spaceport “users” including NASA, the Air Force, private entities, and other
contractors. See generally AR Tab 15 at 13743–803; see id. at 13720 (describing support
services as including “operations, maintenance, and engineering (OM&E) of assigned facilities,
systems, equipment, and utilities (FSEU); work management and Spaceport integration
functions; mission support and launch readiness management; project management and design
engineering services; construction support services; and institutional logistics”); id. at 13906 (list
of OM&E organizational users); AR Tab 38 at 27797 (list of baseline customers). The
Solicitation defined the period of performance as an initial two-year base period followed by
three two-year option periods. AR Tab 15 at 13604.
The Solicitation contemplated a “single award, fixed-price contract with a cost-reimbursable Contract Line Item Number (‘CLIN’) and a fixed-price Indefinite Delivery
Indefinite Quantity (‘IDIQ’) component.” AR Tab 58a at 28303. The cost-reimbursable CLIN
applied to the “baseline work” set forth in the Performance Work Statement (“PWS”). That
baseline work included:
1 KBA is a joint venture of Kellogg Brown and Root Services, Inc. (“KBR”) and Yang
Enterprises, Inc. (“YEI”). Redacted Compl. ¶ 1, ECF No. 22. YEI is a subcontractor on the
incumbent contract. AR Tab 38 at 27893.
responding to Service Orders (SOs) and Baseline Repairs and Replacements
(BRRs); accomplishing recurring services such as Preventive Maintenance (PM),
Programmed Maintenance (PGM), and Predictive Testing and Inspection (PT&I)
that involve routine, periodic maintenance, and incidental repair requirements
associated with assigned FSEU [i.e., facilities, systems, equipment and utilities];
operations that require continuous presence of qualified personnel during a
specified period; work management and Spaceport integration; logistics; system
engineering and engineering services; and project management services.
AR Tab 15 at 13720–21. The IDIQ component would include “work which cannot be adequately
defined in advance for inclusion with baseline work.” Id. at 13721.
II. The Solicitation’s Source Selection Procedures and Evaluation Factors
A. Source Selection Procedures
Under the Solicitation, the award would be made to the responsible offeror whose
proposal represented the best value to the government. Id. at 13686. The Solicitation specified
that the BOSS procurement would use competitive negotiated acquisition procedures pursuant to
FAR 15.3 and NASA FAR Supplement (“NFS”) 1815.3 and that NASA would also use the
tradeoff process described in FAR 15.101-1. Id. at 13707.
The Solicitation warned prospective offerors that their “initial proposal[s] should contain
[their] best terms from a cost or price and technical standpoint,” and indicated that NASA did not
intend to hold discussions. Id. at 13686. Nonetheless, NASA reserved the right to conduct
discussions if the contracting officer “later determine[d] them to be necessary.” Id. Moreover, in
the event discussions were held—which would require the agency to establish a competitive
range pursuant to FAR 15.306(c)—the contracting officer could “limit the number of proposals
in the competitive range to the greatest number that [would] permit an efficient competition
among the most highly rated proposals.” Id.
NASA’s Source Evaluation Board (“SEB”) would conduct evaluations of the proposals
pursuant to NFS 1815.370. Id. at 13707. The SEB was composed of five voting members, six
nonvoting members (including the contracting officer, Kelly J. Boos), six evaluators, several
consultants, and five ex officio members. AR Tab 11 at 13528–29.
B. Evaluation Factors
As set forth in the Solicitation and in accordance with NFS 1815.304-70, the SEB would
evaluate proposals based on three factors: (1) Mission Suitability; (2) Past Performance; and (3)
Price. AR Tab 15 at 13707. The Price factor was more important than the Mission Suitability
factor, which was more important than the Past Performance factor. Id. The Mission Suitability
and Past Performance factors, “when combined, [were] approximately equal to the Price factor.”
Id. The Solicitation stated that, unless it was not in NASA’s best interest, NASA would evaluate price based on the total price for all options plus the total price for the base period. Id. at 13708.
1. Mission Suitability
The Mission Suitability factor comprised three subfactors: (1) Management
Approach; (2) Technical Approach; and (3) Small Business Utilization. Id. at 13708. The
subfactors would be individually evaluated, numerically scored, and weighted “using the
adjectival rating, definitions, and percentile ranges at NFS 1815.305(a)(3)(A).” Id. Those ratings,
definitions, and percentile ranges are represented in the chart below:

[Chart of the NFS 1815.305(a)(3)(A) adjectival ratings, definitions, and percentile ranges not reproduced.]

AR Tab 38 at 27802; see also NFS 1815.305(a)(3)(A). Offerors could earn up to 1,000 points as
part of the Mission Suitability evaluation—525 points for Management Approach, 375 points for
Technical Approach, and 100 points for Small Business Utilization. AR Tab 15 at 13708.
As explained by the government, under the Solicitation’s evaluation procedures, the
percentile scores assigned to each subfactor would be multiplied by the points assigned to arrive
at the total for each subfactor. See Def.’s Cross-Mot. for J. on the Admin. R. & Resp. to Pl.’s
Mot. for J. on the Admin. R. (“Def.’s Mot.”) at 4–5, ECF No. 36 (citing AR Tab 58c.6 at 28508).
“For example, a score of 91 for the Management Approach [subfactor], would be multiplied as a
percentage of 525 available points (.91 x 525 = 477.75 points).” Id.
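The subfactor weighting the government describes can be expressed as a short calculation. The sketch below is illustrative only: the point values come from the Solicitation as recounted above, but the sample percentile scores are hypothetical and do not reflect any offeror’s actual evaluation.

```python
# Subfactor point values set by the Solicitation (AR Tab 15 at 13708);
# together they total the 1,000 available Mission Suitability points.
SUBFACTOR_POINTS = {
    "Management Approach": 525,
    "Technical Approach": 375,
    "Small Business Utilization": 100,
}

def mission_suitability_total(percentile_scores: dict) -> float:
    """Apply each subfactor's percentile score (0-100) to its available points."""
    return sum(
        (percentile_scores[name] / 100) * points
        for name, points in SUBFACTOR_POINTS.items()
    )

# Hypothetical scores, for illustration only:
example = {
    "Management Approach": 91,       # 0.91 x 525 = 477.75 points
    "Technical Approach": 80,        # 0.80 x 375 = 300.00 points
    "Small Business Utilization": 70 # 0.70 x 100 =  70.00 points
}
print(mission_suitability_total(example))  # 847.75
```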
2. Past Performance
The Solicitation provided that, consistent with FAR 15.305(a)(2) and NFS
1815.305(a)(2), NASA would “evaluate [offerors’] and proposed major subcontractors’ recent
performance of work similar in size, content, and complexity to the requirements of [the
Solicitation].” AR Tab 15 at 13711. Based upon that evaluation, NASA would then assign
confidence ratings as provided in NFS 1815.305(a)(2): Very High, High, Moderate, Low, Very
Low, or Neutral. Id.; see also NFS 1815.305(a)(2). The Solicitation did not contemplate that
incumbent contractors or subcontractors on the predecessor contract would receive any particular
advantage when evaluating the Past Performance factor.
3. Price
Price—the most important factor—was to be evaluated in accordance with FAR Subpart
15.4. The Solicitation stated that NASA would perform a price analysis consistent with FAR
15.404-1(b). Id. NASA reserved the right to reject a proposal with unbalanced pricing. Id. at
13686. The Solicitation did not require NASA to conduct a price realism analysis.
III. The SEB’s Evaluation and the Competitive Range Determination
The Solicitation required that all proposals be fully submitted by December 29, 2017. Id.
at 13583. KBA and four other offerors submitted timely proposals. AR Tab 58a at 28308.2 Those
proposals were then reviewed by the SEB in accordance with the evaluation scheme described
above.
On April 11, 2018, the SEB presented the results of its initial evaluation to the Source
Selection Authority (“SSA”). AR Tab 38 at 27787. The presentation summarized the evaluation
factors as well as the ratings and scores achieved by the offerors. Id. at 27798–803. The SEB also
detailed the strengths and weaknesses assigned to each offeror as part of the Mission Suitability
evaluation and the reasons for those findings. Id. at 27804–40. The presentation further explained
the reasoning for the “Very High” confidence rating assigned for each offeror’s past performance
and provided a summary of the total evaluated price of all offerors. Id. at 27841–51.
In the “Final Summary” section of its presentation, the SEB provided a chart detailing the
offerors’ Mission Suitability adjectival ratings and scores, their Past Performance ratings, and
their total evaluated prices:
[Chart not reproduced.]
Id. at 27853.
2 The other offerors were Offeror A, Offeror B, PSP, and Offeror C. AR Tab 38 at 27790.
The contracting officer decided during the evaluation process that discussions should be
held. Accordingly, FAR 15.306(c) required the establishment of a competitive range composed
of the most highly rated proposals. AR Tab 39 at 27913. The SEB “recommended that the
Government hold discussions with Offeror A, Offeror B, and PSP, which the SEB determined to
be the most highly rated proposals” pursuant to FAR 15.306(c). Id. at 27913–14.
On April 12, 2018, the contracting officer prepared a five-page memorandum explaining
the rationale for her competitive range determination. See generally id. She summarized the
performance of each offeror’s proposal in a chart reflecting the SEB’s evaluation, as follows:
[Chart not reproduced.]
Id. at 27915.
In her memorandum, the contracting officer provided an overview of the ratings assigned
to each offeror and an explanation of her competitive range determination. She observed that
Offeror A had the highest Mission Suitability score but that it also had the highest price. Id. at
27915–16. Offeror B had the second highest Mission Suitability score, but discussions were
needed to determine whether its proposed price was accurate. Id. at 27916. PSP had the third
highest Mission Suitability score and the lowest price. Id. The contracting officer opined that
“[d]iscussions would likely enhance all three proposals, thereby increasing competition, which
maximizes the Government’s ability to obtain the best value.” Id. Although the prices offered by
Offeror B and PSP might increase if their proposals were revised, she observed, their proposals,
along with that of Offeror A, “represent[ed] the most highly rated after initial evaluation.” Id.
She found this conclusion supported by the existence of a significant gap between the Mission
Suitability scores of the top three proposals and those of Offeror C and KBA. Id.
The memorandum also included a discussion of the ratings assigned to KBA’s proposal
and an explanation for its exclusion from the competitive range. The contracting officer observed
that KBA’s Mission Suitability score “was the second lowest of the five proposals.” Id. at 27916.
She noted that “KBA was not among the most highly rated proposals when considering the
evaluation results of all factors.” Id. In particular, and as described below, she explained that the
SEB had “identified several Basis of Estimate areas that demonstrated a lack of understanding in
various resource areas.” Id. In addition, and as also explained below, she noted that the SEB had
concluded that KBA’s “approach to managing the established counts” was “inappropriate.” Id.
The contracting officer determined that it would not be worthwhile to engage in
discussions with KBA to address the weaknesses in its proposal. She opined that “[e]ven if KBA
were to correct these weaknesses as a result of discussions, without any strengths or significant
strengths in the Management and Technical subfactors, it is highly unlikely that discussions
would result in KBA substantially increasing its Mission Suitability score without significant
proposal revisions.” Id. And, the contracting officer observed, KBA’s evaluated price of $658.3
million was the second highest evaluated price, which the contracting officer concluded made the
possibility of any tradeoff unlikely. Id.
IV. KBA’s GAO Protest
On April 16, 2018, NASA notified KBA that its proposal was not among the most highly
rated proposals and was not included in the competitive range. See generally AR Tab 40; see
also AR Tab 51 at 28049. KBA requested a pre-award debriefing on April 18, 2018. AR Tab 51
at 28049; AR Tab 54 at 28093. NASA debriefed KBA on April 25, 2018. AR Tab 54 at 28093.
On May 4, 2018, KBA filed a protest with GAO. AR Tab 55. Relying upon GAO’s
decision in Pinnacle Solutions, Inc., B-414360, 2017 CPD ¶ 172 (Comp. Gen. May 19, 2017)
(“Pinnacle”), KBA contended that NASA’s competitive range determination was arbitrary and
capricious because the contracting officer, according to KBA, “did not look behind the scores or
adjectival ratings,” and because “the competitive range determination did not document a
reasoned consideration of the actual evaluation findings.” See id. at 28142; see also id. at 28145
n.7, 28146 (citing Pinnacle in support of argument that the agency failed to meaningfully
consider price).
GAO denied the protest on July 27, 2018. AR Tab 71 at 60112. It concluded that
NASA’s “underlying evaluation of KBA’s proposal was reasonable and in accordance with the
stated evaluation criteria.” Id. at 60125. It also rejected KBA’s argument that the contracting
officer made the competitive range determination “solely on the basis of point scores or rating.”
Id. “[R]ather,” GAO concluded, she “was fully aware of and compared the proposals against one
another on a qualitative basis for each evaluation criteria.” Id. The contracting officer did not
“overemphasize the significant weaknesses and weaknesses assigned to KBA’s proposal,” GAO
explained. Id. at 60125 n.14. Instead, she merely “noted these as [part of the] rationale as to why
the proposal was not among the most highly rated.” Id. In sum, GAO concluded that the
competitive range determination “was reasonable” and “within the agency’s discretion.” Id. at
60124.
V. This Action
On August 6, 2018, ten days after GAO rejected KBA’s protest, NASA awarded the
BOSS contract to PSP, Defendant-Intervenor herein. Press Release, Nat’l Aeronautics & Space
Admin., NASA Awards Base Operations, Spaceport Services Contract at Kennedy Space Center
(Aug. 6, 2018), available at https://www.nasa.gov/press-release/nasa-awards-base-operations-spaceport-services-contract-at-kennedy-space-center. Two months later, on October 9, 2018,
KBA filed the present protest. Compl., ECF No. 1.
In its pleadings, KBA challenges NASA’s competitive range determination on several
grounds. First, it contends that NASA relied too heavily on the adjectival ratings and point scores
assigned to the proposals and that it failed to adequately document the reasons for its competitive
range determination. KBA also contends that the point scores themselves were distorted and
arbitrary, id. ¶¶ 18–31, and, further, that NASA’s determination does not document a meaningful
consideration of price, id. ¶¶ 32–38. Finally, KBA alleges that several of the weaknesses that
NASA assigned to the proposal under the Management Approach and Technical Approach
subfactors were unjustified. Id. ¶¶ 39–63.
On October 10, 2018, contract awardee PSP filed a motion to intervene. ECF No. 15. The
Court granted the motion the next day. ECF No. 16. KBA filed its motion for judgment on the
administrative record on November 6, 2018 (ECF No. 31), and cross-motions from the
government (ECF No. 36) and PSP (ECF No. 35) followed. The cross-motions have been fully
briefed. Oral argument was held on February 27, 2019.
DISCUSSION
I. Subject-Matter Jurisdiction
The Court of Federal Claims has jurisdiction over bid protests in accordance with the
Tucker Act, 28 U.S.C. § 1491, as amended by the Administrative Dispute Resolution Act of
1996 § 12, 28 U.S.C. § 1491(b). Specifically, the Court has the authority “to render judgment on
an action by an interested party objecting to a solicitation by a Federal agency for bids or
proposals for a proposed contract or to a proposed award or the award of a contract or any
alleged violation of statute or regulation in connection with a procurement or a proposed
procurement.” 28 U.S.C. § 1491(b)(1); see also Sys. Application & Techs., Inc. v. United States,
691 F.3d 1374, 1380–81 (Fed. Cir. 2012) (observing that § 1491(b)(1) “grants jurisdiction over
objections to a solicitation, objections to a proposed award, objections to an award, and
objections related to a statutory or regulatory violation so long as these objections are in
connection with a procurement or proposed procurement”).
To possess standing to bring a bid protest, a plaintiff must be an “interested party”—i.e.,
an actual or prospective bidder (or offeror) who possesses a direct economic interest in the
procurement. Sys. Application & Techs., Inc., 691 F.3d at 1382 (citing Weeks Marine, Inc. v.
United States, 575 F.3d 1352, 1359 (Fed. Cir. 2009)); see also Orion Tech., Inc. v. United States,
704 F.3d 1344, 1348 (Fed. Cir. 2013). An offeror has a direct economic interest if it suffered a
competitive injury or prejudice as a result of an alleged error in the procurement process. Myers
Investigative & Sec. Servs., Inc. v. United States, 275 F.3d 1366, 1370 (Fed. Cir. 2002) (holding
that “prejudice (or injury) is a necessary element of standing”); see also Weeks Marine, Inc., 575
F.3d at 1359.
In a pre-award protest, to possess the direct economic interest necessary for standing, a
protester must have suffered a non-trivial competitive injury that can be addressed by judicial
relief. See Weeks Marine, Inc., 575 F.3d at 1361–62. Even more specifically, in a pre-award,
post-evaluation protest, the protester meets this standard if, absent the alleged errors, it would
have a “substantial chance” of receiving the award; that is, if the protester “could have likely
competed for the contract” but for the alleged errors. Orion Tech., 704 F.3d at 1348–49. The
Court assumes well-pled allegations of error to be true for purposes of the standing inquiry.
Square One Armoring Serv., Inc. v. United States, 123 Fed. Cl. 309, 323 (2015) (citing Digitalis
Educ. Sols., Inc. v. United States, 97 Fed. Cl. 89, 94 (2011), aff’d, 664 F.3d 1380 (Fed. Cir.
2012)).
KBA, an actual offeror, challenges its exclusion from the competitive range, a
procurement decision falling within the Court’s “broad grant of jurisdiction over objections to
the procurement process.” Sys. Application & Techs., Inc., 691 F.3d at 1381. Moreover, KBA
has sufficiently alleged a direct economic interest in the procurement. Taking the allegations of
error to be true—as it must to determine standing—the Court finds that KBA “could likely have
competed for the contract” in the absence of the alleged errors, because it would likely have
proceeded to the competitive range stage and engaged in discussions. Accordingly, KBA is an
interested party and the Court has subject-matter jurisdiction over its claims.
II. Motions for Judgment on the Administrative Record
Parties may move for judgment on the administrative record pursuant to Rule 52.1 of the
Rules of the Court of Federal Claims (“RCFC”). Pursuant to RCFC 52.1, the Court reviews an
agency’s procurement decision based on the administrative record. See Bannum, Inc. v. United
States, 404 F.3d 1346, 1353–54 (Fed. Cir. 2005). The court makes “factual findings under RCFC
[52.1] from the record evidence as if it were conducting a trial on the record.” Id. at 1357. Thus,
“resolution of a motion respecting the administrative record is akin to an expedited trial on the
paper record, and the Court must make fact findings where necessary.” Baird v. United States, 77
Fed. Cl. 114, 116 (2007). The Court’s inquiry is “whether, given all the disputed and undisputed
facts, a party has met its burden of proof based on the evidence in the record.” A&D Fire Prot.,
Inc. v. United States, 72 Fed. Cl. 126, 131 (2006). Unlike in a summary judgment proceeding, genuine issues of material fact will not foreclose judgment on the administrative record.
Bannum, Inc., 404 F.3d at 1356.
III. Scope of Review of Procurement Decisions
The Court reviews challenges to procurement decisions under the same standards used to
evaluate agency actions under the Administrative Procedure Act, 5 U.S.C. § 706 (“APA”). See
28 U.S.C. § 1491(b)(4) (stating that “[i]n any action under this subsection, the courts shall
review the agency’s decision pursuant to the standards set forth in section 706 of title 5”). Thus,
to successfully challenge an agency’s procurement decision, a plaintiff must show that the
agency’s decision was “arbitrary, capricious, an abuse of discretion, or otherwise not in
accordance with law.” 5 U.S.C. § 706(2)(A); see also Bannum, Inc., 404 F.3d at 1351.
This “highly deferential” standard of review “requires a reviewing court to sustain an
agency action evincing rational reasoning and consideration of relevant factors.” Advanced Data
Concepts, Inc. v. United States, 216 F.3d 1054, 1058 (Fed. Cir. 2000) (citing Bowman Transp.,
Inc. v. Arkansas-Best Freight Sys., Inc., 419 U.S. 281, 285 (1974)). As a result, where an
agency’s action has a reasonable basis, the Court cannot substitute its judgment for that of the
agency. See Honeywell, Inc. v. United States, 870 F.2d 644, 648 (Fed. Cir. 1989) (holding that
as long as there is “a reasonable basis for the agency’s action, the court should stay its hand even
though it might, as an original proposition, have reached a different conclusion”) (quoting M.
Steinthal & Co. v. Seamans, 455 F.2d 1289, 1301 (D.C. Cir. 1971)). The Court’s function is
therefore limited to “determin[ing] whether ‘the contracting agency provided a coherent and
reasonable explanation of its exercise of discretion.’” Impresa Construzioni Geom. Domenico
Garufi v. United States, 238 F.3d 1324, 1332–33 (Fed. Cir. 2001) (quoting Latecoere Int’l, Inc.
v. U.S. Dep’t of Navy, 19 F.3d 1342, 1356 (11th Cir. 1994)); see also Motor Vehicle Mfrs. Ass’n
v. State Farm Mut. Auto. Ins. Co., 463 U.S. 29, 43 (1983) (noting that a court should review
agency action to determine if the agency has “examine[d] the relevant data and articulate[d] a
satisfactory explanation for its action”).
The scope of judicial review of competitive range determinations is particularly narrow.
Thus, “a contracting officer has broad discretion in determining competitive range, and such
decisions are not disturbed unless clearly unreasonable.” Birch & Davis Int’l, Inc. v. Christopher,
4 F.3d 970, 973 (Fed. Cir. 1993); see also Impresa Construzioni Geom. Domenico Garufi v.
United States, 44 Fed. Cl. 540, 554–55 (1999), aff’d in relevant part, 238 F.3d 1324, 1340 (Fed.
Cir. 2001); Rivada Mercury, LLC v. United States, 131 Fed. Cl. 663, 677 (2017).
In short, a disappointed offeror “bears a heavy burden” in attempting to show that a
procuring agency’s decision lacked a rational basis. Impresa Construzioni, 238 F.3d at 1338. For
the agency to prevail, it need only articulate “a rational connection between the facts found and
the choice made,” and courts will “uphold a decision of less than ideal clarity if the agency’s
path may reasonably be discerned.” Motor Vehicle Mfrs. Ass’n, 463 U.S. at 43 (quotations
omitted).
In this case, KBA’s challenges to its exclusion from the competitive range in the BOSS
procurement fall into two general categories. First, it claims that NASA’s decisions to assign its
proposals certain weaknesses under the Management and Technical subfactors of the Mission
Suitability factor were arbitrary and capricious. Second, it challenges the competitive range
determination itself, alleging: 1) that it was inadequately documented; 2) that it placed excessive
reliance on adjectival ratings and point scores (which were “distorted and arbitrary”); and 3) that
the determination does not reflect a meaningful consideration of relevant factors, including price.
Pl.’s Mem. in Supp. of Mot. for J. on the Admin. R. (“Pl.’s Mem.”) at 7, ECF No. 31-1.
For the reasons set forth below, each of KBA’s protest grounds lacks merit. KBA has not
met its “heavy burden” of showing that its exclusion from the competitive range lacked a rational
basis. Therefore, its motion for judgment on the administrative record must be denied and the
government and PSP’s motions granted.
IV. KBA’s Challenges to the Assignment of Weaknesses to Its Proposal
The SEB assigned KBA’s proposal two weaknesses under the Management Approach
subfactor (one of them at the “significant” level). It assigned five weaknesses under the
Technical Approach subfactor (one of which was at the “significant” level). KBA now contends
that several of these evaluation decisions were arbitrary and capricious because they were based
on a misreading or misunderstanding of the relevant elements of KBA’s proposal.
It is well established that the evaluation of proposals for their technical quality generally
requires the special expertise of procurement officials. Reviewing courts therefore give the
greatest deference possible to these determinations. See E.W. Bliss Co. v. United States, 77 F.3d
445, 449 (Fed. Cir. 1996) (Protests concerning “the minutiae of the procurement process in such
matters as technical ratings . . . involve discretionary determinations of procurement officials that
a court will not second guess.”); see also One Largo Metro, LLC v. United States, 109 Fed. Cl.
39, 74 (2013) (observing that “the evaluation of proposals for their technical excellence or
quality is a process that often requires the special expertise of procurement officials, and thus
reviewing courts give the greatest deference possible to these determinations”) (quoting Beta
Analytics Int’l, Inc. v. United States, 67 Fed. Cl. 384, 395 (2005)). “[N]aked claims” of
disagreement with evaluations, “no matter how vigorous, fall far short of meeting the heavy
burden of demonstrating that the findings in question were the product of an irrational process
and hence were arbitrary and capricious.” Banknote Corp. of Am. v. United States, 56 Fed. Cl.
377, 384 (2003), aff’d, 365 F.3d 1345 (Fed. Cir. 2004).
Thus, to successfully challenge NASA’s determinations regarding the soundness of
KBA’s proposal, KBA must show that the agency “entirely failed to consider an important
aspect of the problem, offered an explanation for its decision that runs counter to the evidence
before [it], or [made a decision that] is so implausible that it could not be ascribed to a difference
in view or the product of agency expertise.” Motor Vehicle Mfrs. Ass’n, 463 U.S. at 43. With
these principles in mind, the Court turns to the objections KBA has raised with respect to the
agency’s evaluation of its proposal under the Management and Technical Approach subfactors.
A. NASA’s Assignment of a Significant Weakness Concerning KBA’s Proposal for Managing the Baseline Work Performed Under the Contract
1. The Solicitation’s Requirements
The Solicitation establishes that the baseline work of the contract would encompass two
types of service requests: 1) Baseline Repairs and Replacements (“BRRs”), i.e., “request[s] for
work to existing infrastructure that is essential to protect, preserve, or restore Facilities, Systems,
Equipment, and Utilities (FSEU),” AR Tab 15 at 13842; and 2) Service Orders (“SOs”), i.e.,
“request[s] for facilities-related work that is new in nature and not typically essential to protect,
preserve, or restore FSEU,” id. at 13856. The baseline work was to be compensated under a
fixed-price CLIN item. Accordingly, for purposes of contract administration, NASA uses a very
complicated system that establishes units of measurement for the tasks that are included as part
of the baseline work. Those units of measurement, or “established counts,” are “used to represent
the number of categorized [BRRs] and [SOs] in a given Government Fiscal Year.” Id. at 13845.
Under the Performance Work Statement (at PWS 1.2.1-23), the contractor was required
to “[m]anage the established counts of SOs and BRRs with the established counts exchange rate
(ECER)” and “[e]xchange the established counts of SOs and BRRs within each customer using
the ECER to meet fluctuating needs.” Id. at 13723. Offerors were required to supply a
description of their “approach to prioritize, schedule, report, and ensure completion of baseline
and IDIQ requirements in a multi-user Spaceport environment, including responding to surge
requirements and completing the total combined established units for each customer per fiscal
year.” Id. at 13696. They would also be required to “[d]evelop and direct a monthly, two hour
Surveillance Review with the [contracting officer’s representative] and other Government
representatives to discuss [topics including] established counts status.” Id. at 13725. During that
meeting, the contractor would secure the approval of the contracting officer’s representative for
exchanges between baseline customers, based on review of the established counts status. See id.
at 13724–25 (PWS 1.2.3-3).
2. NASA’s Evaluation of KBA’s Proposal for Managing Established Counts
In the section of its proposal responsive to the foregoing requirements and the
corresponding evaluation criteria, KBA stated as follows, in pertinent part:
We provide near real-time reporting on the Government management portal of the
running total of counts by work category and total combined units per customer.
We develop a budget management system in Excel that monitors and tracks the
expenditure of work units by client. We establish a set point of 85% for each type
of work unit by client. Once an 85% spend level is attained, the customer is notified
in writing of the potential spending limit infraction and a path forward is developed.
We manage and prioritize EWRs to ensure the total combined established units for
each baseline customer are not exceeded, and communicate with the Government
on recommended approaches (e.g., deferred maintenance (DM), exchanges
between customers, IDIQ) to ensure sufficient units are available throughout each
fiscal year.
AR Tab 20a at 15635.
The SEB assigned KBA a significant weakness with respect to this aspect of its proposal.
It concluded that “KBA’s approach of using a set point of 85% to monitor and track the
expenditure of ‘work units’ by customer is an inappropriate approach to managing the
established counts of [BRRs] and [SOs].” AR Tab 32a at 27668. The SEB explained that the
approach did not “demonstrate how KBA will not exceed the total combined established units,”
nor did it “demonstrate how KBA will perform PWS 1.2.1-23 requirement to exchange the
established counts of BRRs and SOs within each customer using the ECER to meet fluctuating
needs.” Id. In addition, the SEB found that KBA’s “approach of notifying the customers does not
meet with PWS 1.2.3-3 requirement to discuss established counts status at the monthly
Surveillance Review with the Contracting Officer’s Representative.” Id.
The SEB found these flaws in KBA’s proposal significant because “[m]anaging the
established counts [was] a significant portion of the contract,” and because, in the SEB’s view,
KBA “fail[ed] to describe an approach for actively managing the usage rates of the counts until
the 85% set point is reached.” Id. According to the SEB, this aspect of the proposal “appreciably
increase[d] the risk [that] facilities [would] be unavailable and cause budgetary burdens on
customers to fund purchases of additional pre-priced counts or deferral of priority work.” Id.
KBA now contends (as it did unsuccessfully before GAO) that the agency’s assignment
of a significant weakness concerning its proposal to manage established counts is based on a
misunderstanding of the proposal. Thus, while KBA concedes that the “85% set point
notification” does not satisfy the requirement set forth in PWS 1.2.1-23 that the contractor
actively manage established counts, it asserts that it was “never intended” to do so. Pl.’s Mem. at
24. Instead, according to KBA, it proposed to manage established counts by “develop[ing] a
budget management system in Excel that monitors and tracks the expenditure of work units by
client” and by “communicating with the government regarding needs for approaches to ensure
sufficient units are available, such as ‘exchanges between customers.’” Id. (citing AR Tab 20a at
15635) (emphasis omitted).
KBA’s challenge to this aspect of NASA’s evaluation lacks merit. “An offeror has the
responsibility to submit a well-written proposal with adequately detailed information that allows
for a meaningful review by the procuring agency.” Structural Assocs., Inc./Comfort Sys. USA
(Syracuse) Joint Venture v. United States, 89 Fed. Cl. 735, 744 (2009) (citation omitted).
Further, “all offerors, including incumbents, are expected to demonstrate their capabilities in
their proposals.” Int’l Resource Recovery, Inc. v. United States, 60 Fed. Cl. 1, 6 (2004).
Regardless of what KBA intended to communicate, it was not unreasonable for the agency to
have understood that the 85% notification threshold was a central feature of KBA’s approach to
managing the established counts and that, as such, KBA’s proposal did not adequately address
this important requirement.
The agency’s conclusion was particularly reasonable given the conclusory nature of the
other language in KBA’s proposal, which asserted (without explaining how) that KBA
“manage[s] and prioritize[s] [electronic work requests] to ensure the total combined established
units for each baseline customer are not exceeded, and communicate[s] with the Government on
recommended approaches . . . to ensure sufficient units are available throughout each fiscal
year.” AR Tab 20a at 15635. In fact, this portion of KBA’s proposal essentially just restates the
Solicitation requirements, which offerors were cautioned not to do. See AR Tab 15 at 13695
(“The proposal shall not simply rephrase or restate the Government’s requirements, but rather
shall provide convincing rationale to address how the Offeror intends to meet the requirements.
. . . Information shall be precise, factual, detailed, and complete.”).
The Court finds similarly unpersuasive KBA’s argument that “NASA evaluators were
incorrect in their assertion that KBA’s proposal failed to address the monthly surveillance review
meeting.” Pl.’s Mem. at 24. KBA appears to argue that the evaluators erred by failing to infer
that this requirement was satisfied by a separate section of its proposal—which assigned its
Program General Manager to participate in the surveillance review meetings. But the SEB’s
criticism was not directed at whether the appropriate personnel would be participating in the
meetings; rather, it was directed at KBA’s failure to specify that established counts would be
discussed at each meeting. Again, it was KBA’s responsibility to submit a well-written proposal
that was responsive to the requirements of the Solicitation.
Finally, KBA has challenged the agency’s view that its proposal created a risk of
imposing “budgetary burdens,” contending that budget issues are “not properly considered on a
fixed-price contract.” Pl.’s Mem. at 25–26. As the government points out, however, KBA
misunderstands the nature of the “budgetary” risks the SEB identified. The SEB concluded that
KBA’s proposed approach could “cause budgetary burdens on customers to fund purchases of
additional pre-priced counts or deferral of priority work.” AR Tab 32a at 27668. In other words,
the risk identified was that additional counts would need to be procured or work deferred
because of inadequate management of the established counts. That risk is appropriately a
consideration where, as here, the fixed price CLIN covers only a set number of counts. This
challenge to the SEB’s decision to assign a significant weakness to KBA’s proposal, therefore,
falls short.
B. Assignment of a Weakness Regarding the Outage Management Process
KBA also challenges the SEB’s decision to assign its proposal a weakness based on its
conclusion that KBA’s proposed “outage management process . . . include[d] cost type contract
assumptions,” which the SEB believed “demonstrate[d] a lack of understanding of and ability to
manage the contract as a fixed-price contract.” AR Tab 32a at 27670. According to the SEB,
these “cost type” assumptions were reflected in KBA’s proposal to assign a budget to each
outage, to use an outage cost control coordinator to track, update, and report the cost
performance against the outage budget, and to present such reports at the weekly outage meeting.
Id. These features of its proposal, according to the SEB, evinced KBA’s failure to appreciate that
“all outages [would] be accomplished either within the fixed baseline price of the contract or
through fixed-price pre-priced [IDIQ] task orders.” Id.
KBA asserts that the inferences that the SEB drew from its proposal were unreasonable.
It contends that the proposal “clearly indicated that KBA understood that outage management
was to be performed under fixed-price baseline or IDIQ work.” Pl.’s Mem. at 27. In support of
that assertion, it cites several pages in the Technical Approach 1.0 section of its proposal, in
which it referenced the concept of “baseline work” or “Baseline Repairs and Replacement” while
also discussing outages. Id. But the SEB’s concern arose out of KBA’s decision to characterize
and feature the cost management and control procedures as a “key element” of its Outage
Management Process. AR Tab 32a at 27670. The oblique references upon which KBA relies in
its motion do not persuade the Court that it was irrational for the SEB to conclude that KBA’s
proposal betrayed a lack of appreciation of the fixed-price nature of the contract.
The Court is also not persuaded by KBA's argument that the Solicitation "require[d]
that actual costs are tracked in order to actively manage categories of BRRs and SOs that
correspond to different cost levels,” and that therefore, “KBA’s proposal to monitor costs and
work scope for outages performed as part of the overall baseline fixed-price effort is consistent
with contract requirements.” Pl.’s Mem. at 27–28. As the government explains, the cost tracking
associated with BRRs and SOs “addresses a separate PWS requirement” from outage
management. Def.’s Mot. at 34; Def.’s Reply in Supp. of Mot. for J. on the Admin. R. at 13–14,
ECF No. 42.
In short, the Court is satisfied that the SEB examined the proposal in its entirety, applied
its expertise to the proposal’s interpretation, and supplied an adequate explanation of its
reasoning. Its assignment of a weakness to this aspect of KBA’s proposal was rational and
supported by the record.
C. Assignment of a Significant Weakness Concerning Failure to Identify Other Direct Costs in Basis of Estimates
The agency assigned KBA’s proposal a significant weakness under the Technical
Approach subfactor because its “basis of estimate” (“BOE”) spreadsheet “fail[ed] to identify any
other direct costs” (“ODCs”) for operations, maintenance, and engineering to perform the [PWS]
requirements. AR Tab 32a at 27672. KBA challenges this determination on the grounds that the
Solicitation did not require it to provide the information omitted from its spreadsheet and that, in
any event, the information was included elsewhere in its proposal. These contentions lack merit.
The SEB’s decision to assign the significant weakness was based on what it viewed as a
failure on KBA’s part to comply with Section L.16.2 of the Solicitation. That provision specified
the information an offeror was required to include in its proposal in response to the Technical
Approach subfactor. It stated that offerors must provide BOEs “for the requirements identified in
Attachment L-03 (BOE Template), using the format provided.” AR Tab 15 at 13698; see id. at
14984 (BOE Template). It further stated that on the template, offerors were required to include
“(a) An itemization of resources (labor and non-labor for the Offeror, including all
subcontractors) described in sufficient detail to demonstrate the Offeror’s understanding of the
requirements and the reasonableness of the proposed approaches”; and “(b) A detailed
explanation of methodologies, rationale, and other relevant information to allow the Government
to clearly understand the proposed approaches.” Id. at 13698. The Solicitation informed potential
offerors that NASA would “evaluate the Offeror’s Basis of Estimates (BOEs) for demonstration
of understanding of the requirements and the reasonableness of the proposed approaches.” Id. at
13710.
Based on these instructions, each offeror submitted a completed version of Attachment L-03 with its proposal. It is undisputed, however, that the BOE Template that KBA submitted did
not include entries for the two columns covering “other direct costs” to perform the operations,
maintenance, and engineering requirements. AR Tab 32a at 27672. The SEB observed that the
maintenance function, for example, involved lubrication and the cleaning of equipment, and yet
KBA’s spreadsheet did not include other direct costs for the materials needed to perform these
tasks. Id. Similarly, “the RFP requires a trained, certified, and licensed workforce”; nonetheless
KBA did not propose any other direct costs for engineering; nor did it supply a “rationale for
how it would obtain the required training, tools or software to provide and maintain an
appropriately qualified workforce without requiring ODCs.” Id. The SEB concluded that KBA’s
failure to include ODCs “demonstrate[d] a lack of understanding of the requirements, which
appreciably increase[d] the risk of unsuccessful performance.” Id.
KBA contends that the Solicitation did not require any itemization of ODCs for the PWS
requirements at issue. Pl.’s Mem. at 32–33. Further, it asserts that it “incorporated ODCs into its
BOE for these PWS sections,” but rather than break them out on the BOE Template, the ODCs
“instead were made part of KBA’s overall proposed cost.” Id. at 33. KBA also contends that the
BOE Template did not contain any “substantive instruction” in the “Instruction” and “Legend”
included in the attachment. Id. at 34–35; see also AR Tab 15 at 14983 (Instruction and Legend
page of Attachment L-03).
The Court finds KBA’s arguments unpersuasive. As detailed above, the Solicitation
specified that offerors were to provide BOEs for the requirements listed, and that they were to
use the format provided in Attachment L-03. AR Tab 15 at 13698. That attachment, in turn,
included two ODC columns. Id. at 14984. Further, the Solicitation required offerors’ BOEs to
include “an itemization of resources . . . described in sufficient detail to demonstrate the
Offeror’s understanding of the requirements and the reasonableness of the proposed approaches.”
Id. at 13698 (emphasis supplied). Offerors were further instructed to provide a “detailed
explanation of methodologies, rationale, and other relevant information to allow [NASA] to
clearly understand the proposed approaches.” Id. And, importantly, offerors were on notice that
their BOEs would be evaluated for “demonstration of [their] understanding of the requirements
and reasonableness of the proposed approaches.” Id. at 13710.
In light of the foregoing, it was reasonable for the agency to assign a significant weakness
to this aspect of KBA’s proposal, notwithstanding that KBA included ODCs in its “overall
proposed cost.” The agency instructed offerors to fill out the BOE Template so that it could fully
assess their understanding of the contract requirements and the reasonableness of their
approaches. The SEB’s assignment of the significant weakness based on KBA’s failure to
sufficiently demonstrate its comprehension was reasonable and consistent with the Solicitation.
The Court also rejects KBA’s argument that it was “prohibited” from providing ODCs
for the requirement to provide a trained and fully licensed workforce. The fact that the
Solicitation required training to be provided at the contractor’s expense did not obviate the
requirement that offerors list the ODCs of such training to demonstrate to the agency’s
satisfaction that they understood the training requirement and had a reasonable proposal to meet
it.
Once again, it was KBA’s responsibility to submit a “well-written proposal with
adequately detailed information that allow[ed] for a meaningful review by [NASA].” Structural
Assocs., Inc./Comfort Sys. USA (Syracuse) Joint Venture, 89 Fed. Cl. at 744. Though KBA
now maintains that it “incorporated ODCs into its BOE” for the PWS sections at issue, the
Solicitation was clear that offerors were required to demonstrate their understanding of the
requirements by “showing their work” using the BOE Template to reflect those costs. The Court
finds that NASA did not act arbitrarily or unreasonably in assigning a significant weakness for
KBA's failure to provide any ODCs for the baseline work requirements listed in Attachment L-03.
D. Assignment of a Weakness Concerning the Process for Managing Maintenance Action Requests
Finally, NASA assigned KBA a weakness under the Technical Approach subfactor based
on KBA’s failure to meet the PWS 3.1.2-2 requirement to “[d]evelop and maintain a process to
manage Maintenance Action Requests (MARs) within Maximo.” AR Tab 15 at 13767. Like its
other challenges to the SEB’s evaluation decisions, KBA’s claim that the assignment of this
weakness was improper lacks merit.
Maximo is a government-provided computerized maintenance management system that is
used at the Kennedy Space Center. Id. at 13745. An MAR is “[t]he authorizing document for
adding, changing or deleting items of equipment and maintenance procedures in Maximo in
accordance with the Maintenance Plan.” Id. at 13852. According to the SEB, KBA’s proposal
did not explain how it would “develop and maintain a continuous process to manage MARs
within Maximo”; it merely stated that it would “review and disposition MARs at the weekly
Baseline Work Integration meeting.” AR Tab 32a at 27674. Further, the SEB found KBA’s
proposed approach to managing MARs flawed because it would lead to “an inefficient use of the
Government’s time at the [weekly] meeting.” Id. As a result, the SEB predicted a “risk of
untimely MARs disposition, which increases the risk of schedule disruption and degradation of
the systems for which [the contractor] is responsible.” Id.
KBA again contends that the SEB misread its proposal. It argues that in addition to
providing for review of MARs during the Baseline Work Integration Meeting, other aspects of
its proposal included “a continuous process/work flow within Maximo.” Pl.’s Mem. at 36.
Specifically, KBA notes that its proposal made reference to related software tools and that it
stated: “We use EWRS and Maximo so the current work status of all WONs is available on
demand.” Id. at 37 (citing AR Tab 20a at 15668).
As the government points out, the separate section of the proposal cited by KBA is not
responsive to the particular PWS requirement at hand. See Def.’s Mot. at 40 (statements cited by
KBA “correspond to the requirements of PWS 2.1.1.2 and 2.1.1.3 which address configuring and
utilizing Maximo and EWRS”) (citing AR Tab 58a at 28355). Further, the SEB found the
proposal’s references to MAR processes within Maximo too general to meet the requirement of
the Solicitation that it explain how issues would be managed through Maximo. Def.’s Mot. at
39–40. It determined that the proposal’s conclusory references to the relevant software programs
and its confirmation that “[w]e use EWRS and Maximo” to keep track of work orders did not
address the Solicitation’s requirement to explain the “continuous process” KBA planned to use
to manage MARs within Maximo. The SEB’s determination appears reasonable to the Court;
accordingly, it declines KBA’s invitation to substitute its views for those expressed by the SEB
in exercising its expertise and considerable discretion to evaluate the technical merit of KBA’s
proposal.
The Court also finds rational the SEB’s concern that, under KBA’s proposal, an
excessive amount of time would be spent on MARs at weekly meetings. KBA’s proposal
specifies that it will maintain an MAR log listing work orders, which in turn is to be reviewed at
each weekly Baseline Work Integration Meeting, where “each open action/issue will be reviewed
and dispositioned in a number of ways.” AR Tab 20a at 15669. The proposal further
contemplates that “[d]isposition outcome(s) will be reviewed at the next scheduled meeting with
the expectation [that] the action will be positively addressed and closed.” Id. In other words,
KBA proposed an approach to manage MARs by discussing them at weekly meetings, not
through Maximo as the Solicitation required. The contracting officer reasonably concluded that
this would result in inefficiencies.3
3 KBA urges the Court to disregard the contracting officer's rationale concerning "inefficiencies" found in her statement because it is a "post-hoc statement" which "is not found in the actual evaluation findings [] or the contemporaneous evaluation record." Pl.'s Reply in Supp. of Mot. for J. on the Admin. R. & Resp. to Def.'s & Def.-Intervenor's Cross-Mots. for J. on the Admin. R. ("Pl.'s Reply") at 18 n.16, ECF No. 40. The Court finds that it may properly consider this aspect of the contracting officer's statement because it is "consistent with the [SEB]'s contemporaneous documentation" of its evaluation findings regarding this weakness. PlanetSpace Inc. v. United States, 96 Fed. Cl. 119, 127 (2010). This additional "inefficiencies" rationale is "credible and consistent with the underlying evaluation," and thus represents "an expanded explanation" of the agency's evaluation. Sayed Hamid Behbehani & Sons, WLL, B-288818.6, 2002 CPD ¶ 163, 2002 WL 31159457, at *4 n.2 ("Where, as here, a post-protest explanation simply fills in previously unrecorded details of contemporaneous conclusions, we will consider it in our review of the rationality of [the] selection decision so long as the explanation is credible and consistent with the contemporaneous record.").

In short, the Court cannot second-guess the SEB's conclusion that KBA's proposal did not adequately meet this technical requirement. KBA's challenge to the weakness the SEB assigned regarding this issue is therefore without merit.

V. KBA's Argument That the Agency's Competitive Range Determination Was Not Adequately Documented and Placed Undue Reliance on Adjectival Ratings and "Distorted" Numerical Point Scores

In addition to challenging NASA's assignment of weaknesses to aspects of its proposal as discussed above, KBA contends that the competitive range determination itself was arbitrary and capricious. It argues that, in making the determination, the contracting officer "improperly focused almost exclusively on high-level adjectival ratings, the number of assigned Strengths/Weaknesses, and distorted point scores," and further, that she "did not document any meaningful analysis of the evaluation findings or proposal features underlying those high-level findings." Pl.'s Mem. at 8.

As noted above, to prevail in its challenge to the contracting officer's competitive range determination, KBA must establish that it was "clearly unreasonable." For the reasons set forth below, the Court finds that KBA has failed to do so. The reasons for the contracting officer's competitive range determination were fully documented in the record and were not based solely on adjectival ratings and point scores. Instead, the determination reflects a meaningful consideration of all relevant factors. Further, and contrary to KBA's argument, NASA's point-score system did not artificially inflate the evaluated differences between the proposals. Rather, it provided a useful point of reference for distinguishing which proposals were the most highly rated and, therefore, should be included in the competitive range.

A. KBA's Argument That the Competitive Range Determination Was Not Adequately Documented

FAR 15.306(c)(1) governs the establishment of the competitive range. It states that "[b]ased on the ratings of each proposal against all evaluation criteria, the contracting officer shall establish a competitive range comprised of all of the most highly rated proposals, unless the range is further reduced for purposes of efficiency." FAR 15.306(c)(1) does not prescribe any documentation requirement or mandatory content for a contracting officer's memorandum memorializing her competitive range determination. Cf. FAR 15.308 (stating that a source selection decision "shall be documented," and that "the documentation shall include the rationale for any business judgments and tradeoffs made or relied on by the SSA, including benefits associated with additional costs"). There is thus no legal authority that supports KBA's contention that the Court is precluded from looking beyond the competitive range memorandum itself when reviewing whether the contracting officer's decision to exclude it from the
competitive range lacked a rational basis. See Pl.’s Mem. at 14 (arguing that “when making
competitive range determinations, agencies are not only required to look behind the point scores
and adjectival ratings, but such an analysis must actually be documented in the competitive range
determination”) (emphasis altered); see also id. at 7. To the contrary, as with other agency
procurement decisions, the Court may look to the entire administrative record when assessing
whether a competitive range determination was arbitrary, capricious, or an abuse of discretion.
See Med. Dev. Int’l, Inc. v. United States, 89 Fed. Cl. 691, 710–11 (2009) (finding a
determination adequately documented where the record included “a short summary of the
balancing that took place” and noting further that “such balancing is entitled to a presumption of
regularity”).4
Here, the basis for the contracting officer’s competitive range determination is well
documented in the competitive range memorandum and in the record of the SEB’s evaluation of
the proposals. Thus, the SEB’s presentation of its initial evaluation of each offeror’s proposal to
the SSA contains an explanation of the strengths and weaknesses the SEB assigned to each
offeror’s proposal for each technical subfactor under the Mission Suitability factor. These
strengths and weaknesses, which were memorialized in individual memoranda, served as the
basis for the adjectival ratings and point scores the SEB assigned. AR Tab 38 at 27804–39. The
presentation further contains an explanation of the reasoning for the “Very High” confidence
rating assigned for each offeror’s past performance and includes a summary of the total
evaluated price of all offerors. Id. at 27841–48.
In the competitive range memorandum itself, the contracting officer explained that she
had decided to place the three most highly scored and rated proposals into the competitive range.
That determination was supported by the fact that there was a natural breakpoint between the
Mission Suitability scores of the three highest-rated proposals (Offeror A, Offeror B, and PSP) as
compared to the remaining two proposals (Offeror C and KBA). See Arsenault Acquisition
Corp.; E. Mulberry, LLC, B-276959, 1997 CPD ¶ 74, 1997 WL 577522 (Comp. Gen. Aug. 12,
1997) (upholding establishment of competitive range using “natural breakpoint” among the total
scores of the proposals). The contracting officer also took the price of the top three proposals
into consideration and concluded that “[d]iscussions would likely enhance all three proposals,
thereby increasing competition, which maximizes the Government’s ability to obtain the best
value.” AR Tab 39 at 27916.
4 For this argument and others, KBA relies heavily upon GAO's Pinnacle decision. See, e.g.,
Pl.’s Mem. at 7, 11, 14–17. The Court is skeptical of Pinnacle’s relevance to the instant protest,
largely because GAO itself found its own prior decision distinguishable when applied to the facts
at issue here. AR Tab 71 at 60125. And in any event, to the extent that Pinnacle is in tension with
the Court’s findings in the present case, GAO decisions are not binding on this Court, but instead
may be used as persuasive authority when appropriate. Univ. Research Co. v. United States, 65
Fed. Cl. 500, 503 (2005). The Court thus rejects KBA’s proffer of Pinnacle as authority to
support its arguments in this case, particularly the argument that the Court must look exclusively
to the competitive range memorandum—and may not review the entire administrative record—to
discern the agency’s reasoning.
The contracting officer also provided a specific explanation in the memorandum as to
why she had found that KBA’s proposal was not among the most highly rated. She cited the
proposal's several significant weaknesses, which had resulted in it being assigned the second-lowest Mission Suitability score of the five proposals. These included, as described above,
“several Basis of Estimate areas that demonstrated a lack of understanding in various resource
areas” and an “approach to managing the established counts” that the SEB found inadequate. Id.
The contracting officer also explained that, in her view, it was unlikely that holding
discussions with KBA would be fruitful. She observed that even if KBA corrected its
weaknesses as a result of discussions, its proposal still would lack any strengths or significant
strengths in the Management and Technical subfactors. She predicted that KBA would therefore
have to make “significant proposal revisions” in order to increase its Mission Suitability score
substantially. Id. Moreover, she reasoned, KBA’s evaluated price was the second highest,
making the possibility of any tradeoff unlikely. Id.
As the foregoing demonstrates, the record reflects that the contracting officer concluded
that KBA’s proposal was not among the most highly rated because it was the second highest in
price and yet had significant weaknesses and no strengths with respect to the two most important
Mission Suitability subfactors. KBA’s adjectival ratings and point scores were inferior to those
of the three proposals that the contracting officer concluded were the most highly rated,
including two proposals that came in at a significantly lower price. The contracting officer’s
reasoning was adequately documented through the competitive range memorandum and the
supporting evaluation materials. Therefore, KBA’s contention that the competitive range
determination does not reflect a meaningful consideration of relevant factors lacks merit.
B. KBA's Argument That NASA's Point Score System Distorted the Extent of the Qualitative Differences Between the Proposals
In addition to its argument that the contracting officer did not adequately document the
basis for her competitive range determination, KBA challenges as “arbitrary and misleading” the
Mission Suitability point-scoring system that NASA employed in connection with its evaluation
process. Pl.’s Mem. at 15. KBA contends that a comparison of the total number of strengths and
weaknesses assigned to each offeror’s proposal does not portend a “large disparity” in the
“underlying Mission Suitability evaluations.” Id. at 15; see also AR Tab 38 at 27912. The point
scores, KBA argues, “distort[ed] and artificially inflate[d] the actual evaluated differences
between the offerors’ Mission Suitability proposals.” Pl.’s Mem. at 15–16.
KBA’s “distortion” argument lacks merit. It is premised on the notion that the strengths
and weaknesses that the SEB identified in each subfactor were of equal importance in scoring the
Mission Suitability factor. In fact, however, each subfactor was assigned a weight based on its
importance (525 points for Management Approach, 375 points for Technical Approach, and only
100 points for Small Business Utilization). Strengths and weaknesses assigned under the
Management Approach subfactor, therefore, carried more than five times the weight of those
assigned under the Small Business Utilization subfactor. And, as the summary chart below
shows, KBA lacked any strengths that might outweigh its weaknesses under the two most
important subfactors, Management and Technical Approach.5 When the relative importance of
the subfactors is taken into consideration, the differences between the quality of the proposals are
substantial.
Management - 525 Points
Offeror    Strengths (Sig./Other)  Weaknesses (Sig./Other)  Adjectival Rating  Percentile Score  Points
Offeror A  0 / 2                   0 / 0                    Good               68                357
Offeror B  0 / 1                   0 / 1                    Good               59                310
PSP        0 / 2                   0 / 0                    Good               69                362
KBA        0 / 0                   1 / 1                    Fair               37                194
Offeror C  0 / 1                   1 / 1                    Fair               41                215

Technical - 375 Points
Offeror    Strengths (Sig./Other)  Weaknesses (Sig./Other)  Adjectival Rating  Percentile Score  Points
Offeror A  1 / 1                   0 / 1                    Very Good          86                323
Offeror B  1 / 0                   0 / 1                    Very Good          81                304
PSP        2 / 1                   2 / 5                    Good               58                218
KBA        0 / 0                   1 / 4                    Fair               40                150
Offeror C  1 / 0                   2 / 5                    Fair               32                120

Small Business - 100 Points
Offeror    Strengths (Sig./Other)  Weaknesses (Sig./Other)  Adjectival Rating  Percentile Score  Points  Total Points
Offeror A  1 / 0                   0 / 0                    Very Good          85                85      765
Offeror B  0 / 1                   0 / 0                    Good               70                70      684
PSP        1 / 0                   0 / 0                    Very Good          82                82      662
KBA        1 / 0                   0 / 0                    Very Good          87                87      431
Offeror C  1 / 0                   0 / 0                    Very Good          83                83      418
The Court finds similarly without merit KBA’s contention that the competitive range
determination was arbitrary and capricious because there is nothing in the record that explains
“how the evaluators arrived at specific point scores within a ‘percentile point range’” before
multiplying the percentile scores by the number of available points for each subfactor. Pl.’s
Mem. at 16. Thus, KBA argues that “the underlying record still does not show . . . how the
evaluators determined whether an offeror fell within the low-end, middle, or high-end of a given
‘percentile point range.’” Id. at 16–17.
KBA’s contention that there is nothing in the record that explains how specific point
scores were assigned ignores the government’s explanation that the evaluators reached
“consensus findings” and assigned adjectival ratings and scores which were “validated” for
consistency. See AR Tab 58 at 28321, 28378. Further, while not explicitly stated, the Court can
readily infer that the point scores were assigned on the basis of the number and nature of the
strengths and weaknesses found in the proposals with respect to each subfactor.
The evidence supporting this inference is reflected in the summary chart above, which
shows each offeror’s percentile scores and total points for each Mission Suitability subfactor, as
well as the offerors’ total Mission Suitability scores. Taking the Management Approach
subfactor as an example, Offeror A and PSP received almost identical percentile scores of 68 and
69, which is rational considering their identical distribution of two “other” strengths each, with
no significant strengths or any weaknesses of either type. In contrast, although it earned the same
adjectival rating of “Good,” Offeror B received a percentile score approximately ten points
lower, which reasonably follows from the fact that it achieved only one strength and was also
assigned a weakness. Along the same lines, there is a rational relationship between Offeror A
and Offeror B’s Technical Approach scores within the “Very Good” range, because Offeror A
did slightly better than Offeror B in that it earned a strength whereas Offeror B did not.

5 Data in the chart is taken from AR Tab 38 at 27803, 27805, 27807, 27809, 27811, 27813, 27815, 27818, 27820–23, 27825, 27827, 27829, 27831, 27833, 27835–37, 27839–40.
The relative scores of KBA and Offeror C similarly reflect a rational scoring scheme.
Each of these offerors achieved the same “Fair” rating for Management Approach and Technical
Approach, but their percentile scores differed based upon underlying differences in their
respective evaluations. Offeror C fared slightly better on the scoring of its Management
Approach because it earned a strength under that subfactor and KBA did not. KBA fared better
under Technical Approach because Offeror C was assigned two significant weaknesses and five
weaknesses under that subfactor, pulling its percentile score down toward the bottom of the
“Fair” range.
These types of rational relationships hold across all of the percentile scores within the
given adjectival ranges. Proposals evaluated similarly under a given subfactor received similar
percentile scores for that subfactor, and where the percentile scores varied, the differences are
based on the relative weaknesses and strengths assigned as part of the underlying evaluation.
In sum, while the specific basis for determining the percentile scores is of “less than ideal
clarity . . . the agency’s path may reasonably be discerned.” Motor Vehicle Mfrs. Ass’n, 463 U.S.
at 43. The Court is satisfied that the point-score system, which was used as a guide to the
competitive range determination, did not “distort” or “artificially inflate” the differences among
the offerors’ proposals.6
C. KBA’s Challenges to the Contracting Officer’s Determination Regarding the Relative Quality of the Offerors’ Proposals
KBA mounts various challenges to the contracting officer’s substantive treatment of the
relative qualities of each offeror’s proposal. For example, it claims that the competitive range
memorandum “merely count[ed] the Weaknesses assigned to KBA’s proposal” and failed to
acknowledge or discuss the fact that other proposals with the same or similar weaknesses were
included in the competitive range. See Pl.’s Mem. at 8–9.
6 In addition to opposing on the merits KBA’s argument that the use of point scores artificially
distorted and inflated the differences among proposals, the government also contends that—
under the reasoning of Blue & Gold Fleet, L.P. v. United States, 492 F.3d 1308 (Fed. Cir.
2007)—KBA waived that argument by not raising it before the proposals were evaluated.
See Def.’s Mot. at 18–19. The Court finds the government’s waiver argument unpersuasive. It
does not view KBA’s challenge as one to the use of the point-score system referenced in the
Solicitation; rather, the gist of KBA’s argument is that the agency did not provide an adequate
explanation in the record as to why it assigned particular point scores within the specified ranges
when evaluating the various proposals. KBA also argues that, as applied, the point scores
assigned distorted the differences between the proposals. The Court views these as challenges to
the evaluation process rather than to a “patent error” in the terms of the Solicitation. See Blue &
Gold Fleet, 492 F.3d at 1315.
But the contracting officer did not merely count weaknesses; she considered the relative
importance of the weaknesses assigned to each offeror’s proposal and weighed them against their
strengths. KBA’s proposals were not comparable to those it cites, for the proposals of the other
offerors that had similar weaknesses were also assigned strengths and/or significant strengths
under the Management and Technical Approach subfactors. KBA’s proposal, on the other hand,
was assigned only the weaknesses, and no strengths.
The evaluation scheme required NASA to balance each proposal’s strengths and
weaknesses to determine its adjectival rating. See, e.g., AR Tab 38 at 27802 (“Fair” rating, for
example, indicates that a proposal’s “[w]eaknesses outbalance any strengths”) (emphasis
supplied). Because KBA’s weaknesses in the two most important subfactors were not
counterbalanced by any strengths or significant strengths—as noted in the competitive range
memorandum—it was reasonable for NASA to base its exclusion of KBA from the competitive
range in part on these weaknesses, as well as on the “Fair” ratings assigned to KBA’s proposal as
a result of the imbalance.
KBA also alleges that in determining the competitive range, NASA gave short shrift to its
high ratings and strong showing under the Small Business Utilization subfactor within the
Mission Suitability evaluation. Pl.’s Mem. at 9–12. KBA argues that “[h]ad NASA’s competitive
range determination actually looked behind the offerors’ adjectival ratings under the Small
Business Utilization plan subfactor and actually documented a reasoned consideration of the
actual evaluation findings, KBA’s Small Business Utilization plan clearly would have been
deemed to be superior to the other offerors’ plans.” Id. at 10 (emphasis removed).
But NASA in fact did deem KBA’s small business plan superior; it assigned KBA the
highest score of 87 out of 100 for the Small Business Utilization subfactor. AR Tab 38 at 27840.
The fact remains, however, that this subfactor was by far the least important of the three
subfactors, for the maximum number of points available under it was 100, as compared to the
525 and 375 points available for the Management and Technical Approach subfactors,
respectively. Even if KBA had been assigned the maximum Small Business Utilization score of
100 points, its overall total Mission Suitability score would have increased only from 431 to 444,
leaving it well behind PSP’s score of 662, the lowest Mission Suitability score among
competitive-range offerors.
KBA’s challenges to the contracting officer’s analysis of the Past Performance factor are
similarly unpersuasive. As explained above, all proposals were assigned a “Very High”
confidence rating under this factor. KBA contends that the contracting officer should have
treated its proposal as superior to the others with respect to past performance. It contends that “a
reasonable evaluation of KBA’s ‘exceptional’ and ‘very highly pertinent’ past performance on
the incumbent contract would have given KBA an edge over the other offerors who, although
similarly rated, did not have this unique type of highly relevant experience.” Pl.’s Mem. at 12–
13.
The record shows that—contrary to KBA’s argument—the agency engaged in a
meaningful evaluation of each offeror’s past performance under the criteria set forth in the
Solicitation. See generally AR Tabs 32b, 33b, 34b, 35b, 36b (detailed SEB analysis and findings
as to past performance of each offeror); see also AR Tab 38 at 27844–48 (SEB presentation
slides summarizing past performance evaluations); id. at 27878–900 (SEB presentation Past
Performance back-up slides). Further, KBA received credit for the exceptional performance of
joint venturer YEI on the predecessor contract. See AR Tab 32 at 27683–90.7 At the same time,
KBA actually fared slightly worse than the other four offerors when all of the contracts
submitted for the Past Performance evaluation are taken into account. KBA demonstrated
“significantly relevant” experience across four out of five PWS areas and “relevant” experience
on the fifth, whereas the other four offerors all demonstrated “significantly relevant” experience
on each of the five areas. Compare AR Tab 38 at 27847 (summary of KBA’s past performance
evaluation) with id. at 27844–46, 27848 (summaries of other offerors’ past performance
evaluations). Thus, KBA’s argument that the contracting officer was required to give it an edge
in terms of its Past Performance rating based on YEI’s work on the predecessor contract lacks
merit.
D. KBA’s Contention That NASA Did Not Meaningfully Consider Price When Determining the Competitive Range
Finally, KBA asserts that the competitive range determination “does not document a
meaningful consideration of price.” Pl.’s Mem. at 18 (emphasis omitted). More specifically,
KBA posits that the competitive range determination did not “document any sort of comparison
of the offerors’ prices to the Independent Government Cost Estimate,” nor did it “probe the
underlying reasons” for the disparity between KBA and Offeror A’s prices—$658.3 million and
$670.0 million, respectively—and the other three offerors’ prices, all of which were below $490
million. Pl.’s Mem. at 18–19 (emphasis removed); AR Tab 38 at 27851 (table showing each
offeror’s total evaluated price).
KBA’s argument that the agency did not document a comparison of the offerors’ prices is
inconsistent with the administrative record. The underlying SEB presentation explicitly reflects
that—consistent with the criteria set forth in the Solicitation—the SEB “perform[ed] a price
analysis in accordance with FAR 15.404-1(b).” AR Tab 15 at 13711 (Solicitation); AR Tab 38 at
27850 (SEB presentation).8 The SEB presentation includes a calculation and comparison of the
total evaluated price for each proposal, which included baseline annual services (CLIN 001 &
003), baseline established counts (CLIN 002 & 004), government-provided values (CLIN 005),
estimated IDIQ price (CLIN 006 & 007), and phase-in costs. AR Tab 38 at 27850–51. The
presentation reflects a comparison between each proposal’s total evaluated price and the
Independent Government Cost Estimate. Id. at 27851. The SEB also evaluated the proposals for
unbalanced pricing and found none. AR Tab 65a at 59955.

7 For example, the SEB observed that YEI was a “major” subcontractor on the predecessor contract and that, for the fifth PWS area (Logistics), YEI’s performance on that contract “mitigat[ed] the lack of relevant experience on the other cited contracts.” AR Tab 32 at 27688.

8 That provision states that in most situations “[c]omparison of proposed prices received in response to [a] solicitation” is a satisfactory price analysis technique because “[n]ormally, adequate price competition establishes a fair and reasonable price.” FAR 15.404-1(b)(2)(i) (citing FAR 15.403-1(c)(1)(i)).
KBA’s argument that NASA should have “probe[d] the underlying reasons” for the price
differences between the higher- and lower-priced proposals also lacks merit. The reasons for the
price differences would only be relevant if the Solicitation called for a price realism analysis.
“Where an award of a fixed-price contract is contemplated, a proposal’s price realism is not
ordinarily considered, since a fixed-price contract places the risk of loss on the contractor.”
NVE, Inc. v. United States, 121 Fed. Cl. 169, 180 (2015) (internal quotation marks and citation
omitted). Accordingly, “for a price realism analysis to apply in a fixed-price contract, the
solicitation must expressly or implicitly require a price realism analysis for a proposal to be
rejected for an unrealistically low price.” Id. (citation omitted).
Further, as explained above, the contracting officer explicitly noted KBA’s high price
(coupled with its relatively low Mission Suitability score) in explaining why its proposal was not
included in the competitive range. Although Offeror A’s proposed price was slightly higher than
KBA’s price, Offeror A’s proposal also contained several strengths (unlike KBA’s), and Offeror
A achieved ratings of “Good” and “Very Good” for two Mission Suitability subfactors under
which KBA achieved a rating of only “Fair.” The record therefore shows that price was
considered as one of the factors relevant to the determination of which proposals were the most
highly rated.
* * *
In summary, the record reveals that the contracting officer considered all of the
evaluation criteria, including price, and weighed the proposals holistically in determining which
to include in the competitive range. She did not rely exclusively on adjectival ratings and
numerical scores, but rather used them as points of reference or guides in making comparisons
among the offerors. Accordingly, the Court rejects KBA’s argument that the record does not
reflect that the contracting officer engaged in a meaningful analysis of all relevant factors in
making her competitive range determination.
CONCLUSION
For the reasons discussed above, KBA’s motion for judgment on the administrative
record is DENIED, and the government and PSP’s cross-motions for judgment on the
administrative record are GRANTED. The Clerk is directed to enter judgment accordingly. Each
party shall bear its own costs.
IT IS SO ORDERED.
s/ Elaine D. Kaplan
ELAINE D. KAPLAN
Judge