#management #okrs #performance #incentives #consolidated

Performance Management: From MBOs to OKRs and Beyond

Why traditional performance reviews are broken by design, how Intel solved the innovation problem, and what actually works for modern organizations.

Most organizations spend enormous effort on performance management systems that don’t actually work. The symptoms are everywhere: engineers gaming metrics, managers avoiding difficult feedback, ambitious goals that no one wants to commit to, and annual reviews that satisfy neither the person giving them nor the person receiving them.

This isn’t because people are doing it wrong. It’s because the systems themselves are broken by design—optimized for a world of work that no longer exists. This post traces the evolution from Management by Objectives through OKRs, and explores why performance reviews remain problematic even in organizations that adopt better goal-setting frameworks.

The History: Management by Objectives

“Management by Objectives” (MBO) was the most common performance management model for decades. The concept is straightforward: establish goals, attach metrics, reward achievement of those metrics.

MBO originated in sales-driven organizations, was popularized by Peter Drucker in The Practice of Management (1954), and later gained traction at companies like HP. The model worked well for activities with high predictability: in a factory with a few underperforming people, more measurement means more understanding and clearer corrective action.

In stable, predictable environments with clear cause-and-effect relationships, MBO was genuinely effective:

  • Sales targets (more calls → more sales → measurable outcomes)
  • Manufacturing output (more units per hour → clear productivity metrics)
  • Operations work (tickets resolved, uptime percentages, etc.)

The key assumption: performance is measurable, predictable, and primarily individual.

This assumption held in industrial contexts. It breaks down in knowledge work, and especially in innovation contexts.

Intel’s Problem: The Innovation Penalty

So, what was Intel’s problem with MBO?

Here’s the killer question that exposed the flaw:

If you adopt MBO to reward a new AI project in a bank, where the risk of failure is high, how many people will want to participate in a project that might cost them their annual bonus?

Think about this carefully. Under an MBO system:

  • You set measurable objectives at the start of the year
  • Your compensation is tied to achieving those objectives
  • High-risk, high-reward innovation projects have uncertain outcomes
  • Rational employees avoid projects that might cause them to miss targets

Adopting MBO universally in an organization means people will only sign up for targets they know they can hit, and that kills innovation.

This wasn’t hypothetical. Intel was experiencing exactly this problem. The best engineers were avoiding the most important projects because the incentive system punished ambitious bets.

The Structural Problem

The issue isn’t individual behavior—it’s structural incentives. When you tie compensation directly to goal achievement:

  • People sandbag (set easily achievable goals)
  • Teams avoid risk (safer to hit 100% of a modest goal than 70% of an ambitious one)
  • Innovation projects can’t attract talent (who wants to risk their bonus?)
  • Sandbagging gets rewarded (you hit your numbers!)

This is a system that optimizes for predictability at the expense of innovation.
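The incentive math behind this can be made concrete with a toy expected-value calculation. The probabilities and bonus figure below are illustrative assumptions, not data from Intel or any real company:

```python
# Toy model: expected annual bonus under MBO, where the bonus is paid
# only if the goal is hit. All numbers are illustrative assumptions.

BONUS = 10_000  # hypothetical all-or-nothing annual bonus, in dollars

def expected_bonus(p_success: float, bonus: float = BONUS) -> float:
    """Expected payout when the bonus is all-or-nothing on goal achievement."""
    return p_success * bonus

safe_project = expected_bonus(0.95)       # modest, predictable goal
ambitious_project = expected_bonus(0.60)  # high-risk innovation project

print(f"Safe project:      ${safe_project:,.0f}")
print(f"Ambitious project: ${ambitious_project:,.0f}")
# The rational choice under MBO is the safe project, regardless of
# which project matters more to the company.
```

Under these (made-up) numbers, the safe project is worth $9,500 in expectation against $6,000 for the ambitious one. No amount of exhortation to “be bold” overcomes a 37% expected pay cut.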

The Structural Solution: OKRs

Intel needed a way to make the company capable of taking risk. Andy Grove’s answer was to build a management system that detaches goals from compensation.

The Core Insight

OKRs (Objectives and Key Results) solve the innovation problem by making it safe to be ambitious:

  • Set stretch goals that you expect to achieve 60-70% of
  • Compensation is not directly tied to OKR achievement
  • Ambitious goals are celebrated, not punished
  • Failure on an OKR doesn’t mean failure as an employee

This seems simple, but it’s a profound shift. Under OKRs:

  • You can commit to a moonshot project without risking your bonus
  • Missing an ambitious target isn’t career-damaging
  • The best people want to work on the hardest problems
  • Innovation projects become attractive, not toxic

This is why so many tech companies in the Valley adopted OKRs: they want to push people to be more ambitious in goal-setting without building a system that punishes that ambition.

When to Use What

Large organizations like Google adopt both models in different contexts:

  • MBO for sales departments, where each additional sale has a direct impact on the bottom line at the end of the year. Predictable work, measurable outcomes, clear cause-and-effect.
  • OKRs for product people. Can you tie a product owner’s compensation to “how many user stories they added to the product backlog per week”? No. Is there a metric that truly reflects their contribution to the company’s bottom line? Or is there a danger the metric misleads you, so that you promote a bad product owner while the good one leaves your organization?

This is the real essence of OKRs: a management model that allows a company to take risk, pushing people to create exceptional outcomes by making them feel safe about performance rewards.

Why Performance Reviews Are Still Broken

But here’s the problem: adopting OKRs doesn’t fix performance reviews. Many organizations implement OKRs for goal-setting but keep traditional backward-looking performance assessments. This creates a mismatch between how work happens and how it’s evaluated.

In a famous 1957 Harvard Business Review article, “An Uneasy Look at Performance Appraisal,” Douglas McGregor laid out the thinking behind what he would later call “Theory Y”: most employees want to perform well and will do so if supported properly.

Why then do we spend so much time focusing on past performance, instead of people development?

Problem 1: Individual Goals in Team-Based Work

Why do we set personal goals when most of us work in teams? The work that matters is collaborative. Individual metrics create incentives to optimize for what’s measured, not what’s valuable.

Modern software engineering is fundamentally team-based:

  • Code reviews require collaboration
  • Architecture decisions involve multiple people
  • System reliability is a team outcome
  • Product success depends on cross-functional work

Yet we evaluate individuals. This creates perverse incentives:

  • Taking credit for team achievements
  • Avoiding helping others (doesn’t count toward your goals)
  • Optimizing for visible individual contributions over valuable team contributions
  • Political maneuvering around who gets credit

Problem 2: Assessing the Past is Harder Than It Looks

Even when business cycles are stable enough to establish measurable goals, we’re all subject to cognitive biases. Hindsight makes everything look obvious. Context gets flattened. The person doing the assessment wasn’t there for most of the moments that mattered.

Hindsight bias: “Of course that project succeeded, the approach was obvious.” (It wasn’t obvious at the time.)

Recency bias: Recent work weighs more heavily than work from 6-9 months ago.

Availability bias: Visible work (presentations, launches) gets weighted more than invisible work (prevented outages, architectural improvements, mentoring).

Attribution errors: Success gets attributed to individuals; failure gets attributed to circumstances.

These aren’t failures of individual judgment—they’re inherent limitations of backward-looking assessment.

Problem 3: Unstable Environments Break the Model Entirely

What happens when business cycles aren’t stable and jobs become increasingly complex? Can you really go back and assess whether it was possible to do better, given the context an employee was in six months ago?

The honest answer is no. But we pretend otherwise.

In rapidly changing environments:

  • Goals set 6 months ago may no longer be relevant
  • The priorities that mattered in Q1 are different in Q4
  • “Failure” to hit a goal might mean correctly pivoting to something more important
  • “Success” might mean hitting an obsolete goal while missing what actually mattered

Yet we still conduct annual reviews as if we can meaningfully assess what someone “should have” achieved in January based on what we know in December.

What the Agile Manifesto Got Right

The Agile Manifesto and its underlying principles emphasize values that directly contradict traditional performance management:

  • Responding to change over following a plan
  • Collaboration over individual performance
  • Self-organizing teams over top-down direction
  • Reflecting on how to improve on a regular basis

Notice: none of these are backward-looking. They’re all about creating conditions for future performance.

Agile teams don’t do annual retrospectives. They do continuous retrospectives. They don’t wait until the end of the year to give feedback. They give feedback constantly. They don’t evaluate individuals in isolation. They focus on team outcomes.

The question is: why do we accept this for engineering practices but not for performance management?

A Better Model: OKRs + CFRs

John Doerr’s Measure What Matters provides two complementary tools that actually align with how modern work happens:

1. OKRs (Objectives and Key Results)

Set ambitious goals, get people onboard, align teams, and track progress toward outcomes that matter. The key word is progress, not judgment.

OKRs are:

  • Transparent: Everyone sees what everyone else is working toward
  • Ambitious: Stretch goals are expected and celebrated
  • Decoupled from compensation: Hitting 70% of an ambitious OKR is success
  • Regular: Set quarterly, reviewed frequently
  • Outcome-focused: Not activity-based
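The grading side of this is worth making concrete. A common convention, popularized at Google, scores each key result on a 0.0–1.0 scale and averages them into an objective grade, with roughly 0.6–0.7 on a stretch objective counting as success. The sketch below assumes that convention; the scores are illustrative:

```python
# Minimal sketch of the common 0.0-1.0 OKR grading convention:
# each key result gets a score, the objective's grade is the average,
# and ~0.7 on a stretch objective counts as success.

from statistics import mean

def grade_objective(key_result_scores: list[float]) -> float:
    """Average the key result scores into a single objective grade."""
    return mean(key_result_scores)

# Hypothetical quarter: three key results for one ambitious objective.
scores = [0.9, 0.6, 0.5]
grade = grade_objective(scores)

print(f"Objective grade: {grade:.2f}")
# A grade around 0.6-0.7 on a stretch objective is a healthy result,
# not a failure -- and, crucially, it touches no one's bonus.
```

Note what the simplicity buys you: because the grade is decoupled from compensation, a 0.67 invites a conversation about what was learned, not a negotiation about pay.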

2. CFRs (Conversations, Feedback, Recognition)

Support continuous improvement through regular dialogue. Not annual reviews. Not quarterly check-ins. Continuous.

CFRs are:

  • Conversations: Regular one-on-ones focused on growth and development
  • Feedback: Real-time, specific, actionable input on work
  • Recognition: Celebrating contributions as they happen, not once a year

The key insight: separate development from evaluation.

The Incentive Problem

Traditional performance reviews exist because organizations need to make compensation and promotion decisions. That’s legitimate.

But conflating “how do we allocate rewards” with “how do we help people improve” creates a system that does neither well.

When feedback affects compensation:

  • People optimize for looking good in reviews rather than actually improving
  • Managers avoid honest feedback because it affects compensation
  • Development conversations become negotiation sessions
  • Trust erodes (is this feedback to help me, or to justify a rating?)

The solution is to separate these functions:

  • Use lightweight, continuous feedback (CFRs) for development
  • Use OKRs for alignment and progress tracking
  • Use different processes for compensation decisions (calibration, market benchmarking, contribution assessment)

This doesn’t mean compensation decisions are easy—it means they’re not pretending to be development conversations.

The Meta-Lesson

Performance management systems are themselves complex systems. They respond to incentives, not intentions.

If you design a system that rewards individual metrics, you’ll get individual optimization—even when collaboration would produce better outcomes. If you design a system that punishes honest disclosure of problems, you’ll get hidden problems. If you design a system that ties compensation to hitting goals, you’ll get sandbagged goals.

The question isn’t “how do we assess performance better?” It’s “what behaviors does our assessment system actually incentivize?”

Usually, the answer is uncomfortable.

What Actually Works

Based on decades of experience across different organizational contexts:

For Goal-Setting:

  • Use OKRs for innovation and product work
  • Use MBOs for predictable, measurable work (sales, operations)
  • Make OKRs transparent across the organization
  • Celebrate ambitious goals, even when partially achieved
  • Review and update quarterly, not annually

For Development:

  • Continuous feedback (CFRs), not annual reviews
  • Regular one-on-ones focused on growth
  • Real-time recognition of contributions
  • Team retrospectives for collective learning
  • Coaching and mentoring as core leadership responsibilities

For Compensation:

  • Separate from day-to-day feedback
  • Use calibration across teams for fairness
  • Benchmark against market rates
  • Consider contribution and impact, not just goal achievement
  • Be transparent about the process, even if individual decisions are private

For Culture:

  • Make psychological safety a priority (people need to feel safe taking risks)
  • Reward collaboration, not just individual achievement
  • Measure what actually matters, not what’s easy to measure
  • Trust employees to do good work (Theory Y)
  • Focus on removing obstacles, not tracking activities

Conclusion

The evolution from MBOs to OKRs represents a fundamental shift in how we think about goals: from predictability to innovation, from individual achievement to team outcomes, from backward-looking assessment to forward-looking development.

But adopting OKRs alone doesn’t fix performance management. The real shift requires:

  • Separating development from evaluation
  • Moving from annual reviews to continuous feedback
  • Focusing on creating conditions for great work rather than measuring past work
  • Designing systems that incentivize the behaviors you actually want

The uncomfortable truth: most organizations don’t want to make these changes because they require giving up the illusion of control that traditional reviews provide. It’s easier to pretend that annual reviews produce useful data than to admit that they mostly produce theatre.

The organizations that figure this out—that align their goal-setting, feedback, and compensation systems with how modern work actually happens—will have an enormous advantage in attracting and retaining talent.

The question is whether your organization is willing to rethink systems that haven’t worked for decades, or whether it’s more comfortable pretending they do.
