
How Government Proposals Are Scored


The Black Box of Government Evaluation

For most suppliers, proposal evaluation is a black box. You submit hundreds of pages of carefully crafted content, wait weeks or months, and then receive a brief notification that you were either successful or unsuccessful. If you are lucky, you get a debriefing with some general feedback. But what actually happens between submission and award?

Understanding the evaluation process from the inside gives you a significant competitive advantage. When you know how evaluators think, what they look for, and how scores are determined, you can write proposals that are fundamentally easier to evaluate well.

The Evaluation Team

Government proposals are not scored by a single person. A formal evaluation committee is assembled, typically consisting of:

  • Technical evaluators -- Subject matter experts who assess the quality and feasibility of your proposed approach. Usually 3 to 5 evaluators for the technical portion.
  • Financial evaluators -- Specialists who review pricing, verify cost reasonableness, and calculate the financial score.
  • Contracting officer -- Oversees the process, ensures compliance with procurement rules, and makes the final award recommendation.
  • Legal advisor -- Sometimes included to review compliance and contractual terms.
  • Fairness monitor -- For high-value procurements, an independent observer may be appointed to ensure the process is fair and transparent.

Evaluators sign conflict-of-interest declarations and are instructed to evaluate proposals solely based on the information contained in the submission and the published evaluation criteria.

Consensus Scoring vs. Individual Scoring

There are two primary approaches to arriving at final scores:

Individual Scoring

Each evaluator independently scores every proposal against the evaluation criteria. The final score is the average (or sometimes the median) of all individual scores. This approach is faster but can lead to wide variance if evaluators interpret criteria differently.
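
To make the aggregation concrete, here is a minimal Python sketch using hypothetical scores from four evaluators on a single 40-point criterion. The aggregation rule (average vs. median) is whatever the solicitation specifies; both are shown for illustration:

```python
from statistics import mean, median

# Hypothetical individual scores from four evaluators for one criterion,
# out of a 40-point maximum.
scores = [32, 28, 35, 30]

print(f"Average: {mean(scores)}")   # 31.25
print(f"Median:  {median(scores)}") # 31.0
```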

Consensus Scoring

More common in Canadian federal procurement and increasingly used in the United States, consensus scoring works as follows:

  1. Each evaluator reads and scores every proposal independently.
  2. The evaluation committee meets to discuss each proposal.
  3. For each criterion, evaluators share their individual scores and rationale.
  4. The committee discusses differences and reaches a consensus score that all members agree on.
  5. The consensus score becomes the official score.

Why this matters to you: In consensus scoring, proposals that are clear and unambiguous tend to maintain their scores. If evaluators disagree about what you meant, the discussion usually resolves in a conservative (lower) score. Ambiguity is your enemy.
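
As a rough illustration of that dynamic, consensus discussions tend to concentrate on criteria where individual scores diverge most, and those discussions most often settle on the conservative score. The sketch below flags high-spread criteria from hypothetical pre-meeting scores; the evaluator names, scores, and threshold are all invented for illustration, not drawn from any real evaluation guide:

```python
# Hypothetical pre-meeting scores: evaluator -> {criterion: points awarded}.
individual_scores = {
    "Evaluator A": {"3.1 Technical Approach": 34, "3.2 Team Experience": 20},
    "Evaluator B": {"3.1 Technical Approach": 36, "3.2 Team Experience": 12},
    "Evaluator C": {"3.1 Technical Approach": 33, "3.2 Team Experience": 25},
}

# Flag criteria where evaluators are far apart; these drive the longest
# consensus discussions.
SPREAD_THRESHOLD = 5  # illustrative value

for criterion in next(iter(individual_scores.values())):
    points = [scores[criterion] for scores in individual_scores.values()]
    spread = max(points) - min(points)
    if spread > SPREAD_THRESHOLD:
        print(f"{criterion}: {sorted(points)} (spread {spread}) -- needs discussion")
```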

Point Allocation and Scoring Scales

Numerical Scoring

The most common approach assigns a maximum number of points to each criterion. Evaluators assign a score from zero to the maximum based on the quality of the response.

For example, if "Technical Approach" has a maximum of 40 points, evaluators might use a rating scale like:

| Rating | Percentage (Points) | Description |
|--------|---------------------|-------------|
| Excellent | 90-100% (36-40 pts) | Significantly exceeds the requirement. Demonstrates deep understanding, innovative approach, and clearly relevant experience. |
| Good | 70-89% (28-35 pts) | Fully meets the requirement with some areas of strength. Demonstrates solid understanding and relevant experience. |
| Satisfactory | 50-69% (20-27 pts) | Meets the minimum requirement. Adequate but lacks depth or specificity. |
| Below Expectations | 25-49% (10-19 pts) | Partially meets the requirement. Notable gaps or weaknesses. |
| Non-Responsive | 0-24% (0-9 pts) | Fails to address the requirement or response is inadequate. |
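
A scale like this is mechanical to apply. The sketch below maps a raw criterion score to its adjectival band using the illustrative boundaries from the table above; real solicitations define their own bands:

```python
# Rating bands from the table above, as (lower-bound %, rating).
# Boundaries are illustrative; each solicitation defines its own.
RATING_BANDS = [
    (90, "Excellent"),
    (70, "Good"),
    (50, "Satisfactory"),
    (25, "Below Expectations"),
    (0,  "Non-Responsive"),
]

def rating_for(points_awarded: float, points_max: float) -> str:
    """Return the adjectival band for a raw score on one criterion."""
    percentage = 100 * points_awarded / points_max
    for lower_bound, rating in RATING_BANDS:
        if percentage >= lower_bound:
            return rating
    return "Non-Responsive"

print(rating_for(31, 40))  # 77.5% -> "Good"
```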

Adjectival Scoring

Some procurements use adjectival ratings (Outstanding, Good, Acceptable, Marginal, Unacceptable) without converting to numerical scores. In these cases, the evaluation committee ranks proposals based on the overall pattern of adjectival ratings and a qualitative assessment of tradeoffs between technical merit and price.

Minimum Thresholds

Many solicitations establish minimum scoring thresholds that proposals must meet to proceed to the next phase:

  • Overall minimum: For example, "Proposals must achieve a minimum overall technical score of 60% to be considered for award."
  • Per-criterion minimum: For example, "Proposals must score at least 50% on each individual criterion."
  • Mandatory minimums on key criteria: For example, "Proposals scoring below 70% on Technical Approach will be eliminated regardless of overall score."

Critical point: A proposal that scores brilliantly on most criteria but falls below the minimum threshold on a single criterion is eliminated. Balanced performance across all criteria is safer than excellence in some areas with weakness in others.
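
The elimination logic is mechanical, which is what makes it so unforgiving. Here is a minimal sketch, using hypothetical scores and thresholds, of how a proposal is checked against an overall minimum and a per-criterion minimum:

```python
# Hypothetical scores for one proposal: criterion -> (points awarded, max points).
proposal = {
    "Technical Approach": (30, 40),
    "Team Experience":    (14, 30),
    "Project Management": (25, 30),
}

OVERALL_MINIMUM = 0.60        # 60% overall technical score
PER_CRITERION_MINIMUM = 0.50  # 50% on every individual criterion

def passes_thresholds(scores: dict[str, tuple[int, int]]) -> bool:
    total = sum(awarded for awarded, _ in scores.values())
    maximum = sum(max_pts for _, max_pts in scores.values())
    if total / maximum < OVERALL_MINIMUM:
        return False
    # A single criterion below its floor eliminates the whole proposal,
    # no matter how strong the rest of the submission is.
    return all(awarded / max_pts >= PER_CRITERION_MINIMUM
               for awarded, max_pts in scores.values())

print(passes_thresholds(proposal))  # False
```

In this example the proposal clears the overall minimum (69%) but falls below the 50% floor on Team Experience, so it is eliminated despite strong scores everywhere else.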

Technical vs. Financial Envelope Opening

Many government procurements use a two-envelope system:

Envelope 1: Technical Proposal

The technical proposal is opened and evaluated first. Evaluators score the technical content without any knowledge of the bidder's pricing. This separation prevents price from influencing the technical assessment.

Envelope 2: Financial Proposal

Financial envelopes are opened only after technical evaluation is complete. In some procurements, only proposals that meet the minimum technical threshold have their financial envelopes opened. Proposals that fail technically never have their pricing revealed.

This two-stage process is designed to ensure that quality is properly assessed before cost enters the equation. It also means that your technical proposal must stand entirely on its own merits -- you cannot compensate for a weak technical score with a low price (unless the procurement uses a lowest-price-technically-acceptable methodology).
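
To see how the two envelopes combine, consider a hypothetical 70/30 weighting between technical merit and price. The price formula shown below (lowest compliant price divided by bid price, times the maximum financial points) is one common approach, but the governing formula is always the one published in the solicitation:

```python
# Illustrative 70/30 technical-to-price weighting; real weightings and the
# price formula are defined in the solicitation.
TECHNICAL_POINTS_MAX = 70
FINANCIAL_POINTS_MAX = 30

def blended_score(technical_pct: float, bid_price: float, lowest_price: float) -> float:
    technical = technical_pct * TECHNICAL_POINTS_MAX
    financial = (lowest_price / bid_price) * FINANCIAL_POINTS_MAX
    return technical + financial

# A strong technical proposal at a higher price can still beat a cheaper,
# weaker one -- which is the point of evaluating technical merit first.
print(blended_score(0.85, 1_200_000, 1_000_000))  # 59.5 + 25.0 = 84.5
print(blended_score(0.65, 1_000_000, 1_000_000))  # 45.5 + 30.0 = 75.5
```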

How Evaluators Think: What They Look For

Having participated in or studied hundreds of government evaluations, we have seen clear patterns in how evaluators approach proposals:

They Score What Is Written, Not What They Know

This is the most important principle. Even if an evaluator is personally familiar with your company and knows you can deliver, they are instructed to score based solely on the content of your proposal. If your proposal does not explicitly address a criterion, you will not receive points for it.

This is why firms with stellar reputations sometimes score poorly -- they assume the evaluator knows their track record and fail to document it in the proposal.

They Look for Direct Responses to the Criteria

Evaluators typically work through proposals with a scoring sheet that lists each criterion and sub-criterion. They are looking for specific content that maps to each evaluation factor. If you bury your response to criterion 3.2 somewhere in the middle of a section addressing criterion 3.1, the evaluator may miss it entirely.

They Value Specificity Over Generality

Compare these two statements:

  • "Our team has extensive experience in cybersecurity." -- This tells the evaluator nothing actionable. How extensive? What kind of cybersecurity? For whom?
  • "Our team has delivered cybersecurity assessments for 12 federal departments over the past 5 years, including a comprehensive security architecture review for Shared Services Canada covering 180,000 endpoints." -- This is specific, quantified, and directly relevant.

The second statement will always score higher. Evaluators are trained to reward specificity because it demonstrates genuine capability rather than marketing language.

They Penalize Ambiguity

When evaluators encounter vague or ambiguous content, they face a choice: give the benefit of the doubt or take the conservative interpretation. In a formal evaluation setting, evaluators almost always choose the conservative path. If your proposal could be interpreted two ways, the evaluator will likely choose the interpretation that is less favorable to you.

They Get Fatigued

Evaluators may read 10, 20, or even 50 proposals for a single competition. By proposal number 15, their attention span is diminished. Proposals that are well-organized, clearly structured, and easy to navigate score better than dense, poorly formatted submissions -- even if the underlying content is similar.

Improving Your Scores: Practical Strategies

Strategy 1: Map Your Response to the Evaluation Criteria

Before writing a single word, create a detailed outline that mirrors the evaluation criteria structure. Each major criterion becomes a section. Each sub-criterion becomes a sub-section. Use the same numbering scheme as the RFP. This makes the evaluator's job as easy as possible.
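
As a simple illustration, the skeleton below generates an outline directly from a hypothetical set of RFP criteria, preserving the RFP's own numbering so the evaluator can match sections to their scoring sheet at a glance:

```python
# Hypothetical criteria from an RFP's evaluation section, keyed by the
# RFP's own numbering. The proposal outline mirrors them one-for-one.
criteria = {
    "3.1": ("Technical Approach", ["3.1.1 Methodology", "3.1.2 Tools and Environment"]),
    "3.2": ("Team Experience", ["3.2.1 Key Personnel", "3.2.2 Comparable Projects"]),
}

for number, (title, sub_criteria) in criteria.items():
    print(f"Section {number}: {title}")
    for sub in sub_criteria:
        print(f"    Sub-section {sub}")
```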

Strategy 2: Lead with Proof Points

For every claim you make, immediately follow it with evidence. Use the pattern: Claim + Proof + Relevance.

For example: "We will implement an agile project management methodology (claim). On our recent engagement with the Canada Revenue Agency, this approach reduced delivery time by 22% and eliminated scope creep across a 14-month program (proof). This same methodology is directly applicable to your requirement because of the similar scale, technology stack, and stakeholder complexity (relevance)."

Strategy 3: Use Callout Boxes and Visual Aids

Evaluators notice and remember visual elements. Use tables to compare your approach to requirements. Use callout boxes to highlight key differentiators. Use diagrams to illustrate complex technical approaches. These elements break up dense text and draw attention to your strongest points.

Strategy 4: Address Weaknesses Proactively

If you have an obvious weakness (lack of experience in a specific area, smaller team than competitors), address it directly with a mitigation strategy rather than hoping the evaluator will not notice. Evaluators respect honesty and a clear plan to address gaps. Silence on a known weakness is far more damaging than an upfront acknowledgment with a credible mitigation approach.

Strategy 5: Write for Skim-Readability

Use descriptive headings, bold topic sentences, bullet points, and short paragraphs. An evaluator should be able to understand the key points of each section by skimming headings and the first sentence of each paragraph. The detail is there for when they read closely, but the structure guides them to your strongest content.

Strategy 6: Get Feedback Before Final Submission

Conduct an internal review where a colleague who was not involved in writing the proposal reads it as if they were an evaluator. Give them the scoring criteria and ask them to assign scores. This "red team" review consistently reveals gaps, ambiguities, and missed requirements that the writing team overlooked. TenderIQ's AI analysis of tender requirements can serve as a first-pass check to ensure you have not missed any key evaluation factors before your internal review.

Key Takeaways

  • Government proposals are scored by a committee of evaluators using published criteria, not by a single decision-maker using subjective judgment.
  • Consensus scoring rewards clarity and punishes ambiguity. If evaluators disagree about your intent, the score trends downward.
  • Minimum score thresholds can eliminate otherwise strong proposals. Ensure balanced performance across all criteria rather than excelling in some and underperforming in others.
  • Evaluators score only what is written in your proposal. Never assume they know your capabilities -- document everything explicitly.
  • Specificity and evidence are what separate high-scoring proposals from average ones. Every claim should be backed by quantified, relevant proof.
  • Structure your proposal to mirror the evaluation criteria exactly, use visual aids to break up dense text, and write for skim-readability to accommodate evaluator fatigue.
