Defense mechanisms to prevent amygdala hijacking in Software Engineering

C-Requests: amygdala hijacking in software

tl;dr

  • Urgent requests from executives can disrupt software engineering teams and lead to rushed implementation.
  • Scarcity mindset and cognitive load impact attention and decision-making in such situations.
  • Cultural differences and hierarchical leadership styles in lower-cost outsourcing destinations can further complicate decision-making processes.
  • Strategies for mitigating these challenges include having strong product managers to shield the team, maintaining a well-prioritized backlog, and physically separating external requests from the team’s ongoing work.
  • Having solid product manager principles, conducting market research, prioritizing features, and working closely with the development team are essential for effective product management.
  • Using tools like Asana, Jira, or Google Docs to maintain a well-organized backlog helps balance immediate tasks, future goals, and customer transparency.
  • Avoid letting the backlog become too large and regularly review it to ensure efficiency and focus.
  • Physically separating external requests from the team’s work creates a buffer and allows for better prioritization and evaluation of their importance.
  • Applying techniques like the Eisenhower Matrix, First Things First, and Habit 3 of The 7 Habits of Highly Effective People can aid in prioritization and decision-making.
Without strong leadership, urgent requests bottleneck a team

In Episode 145 of No Stupid Questions “Do You Have a Scarcity Mindset or an Abundance Mindset?” Steven and Angela touch on this important part:

DUBNER: One thing that really catches my ear as you’re describing the Mullainathan et al. research is about attention and how more of it is required — or more cognitive load, you might call it — when you’re under scarcity because you’re trying to solve a huge set of problems. I mean, when economists talk about scarcity, that’s the problem with scarcity. Things get more expensive. When it’s a product, when it’s a good or service, and it’s more scarce, it costs you more. But also in the mindset mode, it can take your attention.

Steven talks about scarcity impacting your attention when you’re under cognitive load

This applies well to software engineering, where knowledge workers are under cognitive load trying to design new features and solutions for their customers. When a new request comes in that seems urgent, it becomes a time scarcity problem: people drop things to pick up a super-important request from a VP (Chief-level or C-Level requests, hereafter C-Requests). Different teams have different coping mechanisms for this: one team that I worked with had an engineer on-call rotation just to handle C-Requests so they could show a great time to resolution.

The amygdala also activates the fight-or-flight response. This response can help people in immediate physical danger react quickly for their safety and security. For example, the fight-or-flight response helped early humans respond to threats to avoid injury or death. Today, that fight-or-flight response is more likely to be triggered by emotions such as stress, fear, anxiety, aggression, and anger.

Amygdala Hijack: When Emotion Takes Over

People make bad decisions when stressed

Picture this: your team manages a customer monitoring solution. You’re in the middle of a large data migration from one storage solution to another, for example Hadoop to Google BigQuery. Front-end engineers are scarce and are focused on the next generation of the UI.

A request comes in from a Vice President: they ask you to change the functionality of the legacy monitoring solution so that clicking on a specific area performs an action. Without a strong request shield in place, the request is made directly to a software engineer on the team, and because the VP wants to ensure it is fixed, they also message another lead on the team, without telling either person about the other.

Now you have a request from an executive, sent to two individuals, that appears to be urgent. What happens?
A. Both engineers talk to one another, and decide to put this in the backlog
B. The two engineers drop what they are working on to appease an executive
From what I’ve seen: Option B.

They rush to implement this feature; one of them finishes it first, and the other notices a similar code change when attempting to merge. Now they talk and realize they both had the same request. The first person to commit wins, the code is merged, and they go back to their regularly scheduled Jira story. They claim victory for the day, but no quality control, no design, no communication, and no thought for the future went into this. The feature is never rolled into the new system, and it creates edge-case bugs in the legacy system that will plague the team for months.

Cultures handle authority differently

With a focus from most companies on reducing costs, we have seen layoffs and outsourcing to places like Mexico and India (ref). What comes with this are cultural differences that don’t combine well with authoritative requests from Vice Presidents. Look at the graphic below: Erin Meyer’s research shows that the countries that tend to be lower cost also avoid confrontation as part of their cultural norms.

Meyer, E. (2014). The culture map: breaking through the invisible boundaries of global business. First edition. New York, Public Affairs. Page 119.

This means that, left unchecked, executive requests will likely make up the bulk of the feature requests that get prioritized, leaving more important features behind and potentially introducing more bugs. Once more, take a look at the next image below: it shows that these same countries are used to hierarchical leadership, where the boss calls the shots. This runs counter to the quote “Those closest to the problem are closest to the solution, but typically furthest from the resources and power to do anything about it.” (DeAnna Hoskins)

You typically want high-value knowledge workers solving problems who are the most familiar with them – not high-level executives.

Meyer, E. (2014). The culture map: breaking through the invisible boundaries of global business. First edition. New York, Public Affairs. Page 76.

This is why you might see an emphasis on buying tools instead of building them. When knowledge workers who avoid confrontation are set up to be integrators of purchased tools, all executive feature requests then go through the vendor and not the software engineering team.

Strategies for avoiding amygdala hijacking at work

When you have calmed down or feel less stressed, you can activate your frontal cortex. Begin by thinking about what activated the response, and how you felt. Then, consider responses you can and should have. These will be more thoughtful and rational responses. If you still feel emotional in the moment, give yourself more time.

Amygdala Hijack: When Emotion Takes Over
What you want: Product Managers and Engineering Managers that constantly shield the team from urgent requests that aren’t important

Product Managers (or just the mindset) can serve as a great frontal cortex by providing protection from amygdala hijacks.

Have solid product manager principles

You may not have the luxury of a full-time Product Manager for your project, either because they are split over multiple product lines or your team lacks general investment in this role. What’s critical is that someone upholds these principles to focus the team on only the features and products that will bring value to your customer.

  1. Define the product vision: This involves identifying the problem that the product will solve and the target audience for the product.
  2. Conduct market research: Gather and synthesize information about the target audience, their needs, and the competition.
  3. Develop a product roadmap: Outline the key features and milestones for the product.
  4. Prioritize features: Prioritize the features based on their importance and impact on the product.
  5. Develop a product backlog: Work with engineering teams to maintain a list of all the features that need to be developed for the product.
  6. Work with the development team: The product manager should work closely with the development team to ensure that the product is developed according to the vision, OKRs, roadmap timelines, and feature backlog.
  7. Test and launch the product: Perform hands-on, thorough testing to ensure the product meets the requirements. Once the product is ready, launch it to the target audience.
  8. Analyze metrics: Ensure features are delivering based on projections, prioritize improvements based on these metrics.

Have a well maintained backlog using tools like Asana and Jira (or even GDocs)

Having a good balance between what needs to be done now, next, and in the future is important. You’ll have to merge and adjust these lists as you think about new features. Giving your customers transparency into your backlog gives them confidence that you care. Being proactive with your backlog and release notes is a great way to preempt constant status meetings.

Being well organized also exudes outward confidence to senior leaders that you’ll work on the most impactful things. When “urgent” executive requests come in, you can speak logically against your current set of work to illustrate where the new request fits in (now, next, future). You may need to do some analysis before accepting an external request, and that is okay. Executives might think they have the best idea since LLMs, but only the data should confirm or deny this.

Don’t let your backlog get too large. The maximum number of backlog tasks should be about 50; use Google Docs or Confluence pages to describe additional work the team needs to do that is not yet in the backlog. Being prideful of a large backlog is the wrong culture to have: a large backlog signals poor hygiene and waste. Review backlogs bi-weekly.
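The backlog cap and review cadence above can be made mechanical. Here is a minimal sketch, assuming a Jira- or Asana-style export where each item is a plain dict with a title and a last-reviewed date; the field names and the `backlog_health` helper are illustrative, not any real tool’s API:

```python
# Hypothetical backlog hygiene check (illustrative field names).
from datetime import date, timedelta

MAX_BACKLOG = 50                    # cap suggested above
REVIEW_EVERY = timedelta(days=14)   # bi-weekly review cadence

def backlog_health(items, today=None):
    """Flag how far over the cap we are and which items are stale."""
    today = today or date.today()
    stale = [
        i["title"] for i in items
        if today - date.fromisoformat(i["last_reviewed"]) > REVIEW_EVERY
    ]
    return {"over_cap": max(0, len(items) - MAX_BACKLOG), "stale": stale}

items = [
    {"title": "Migrate dashboards", "last_reviewed": "2024-01-02"},
    {"title": "Fix legacy click action", "last_reviewed": "2024-03-01"},
]
print(backlog_health(items, today=date(2024, 3, 10)))
# → {'over_cap': 0, 'stale': ['Migrate dashboards']}
```

A check like this could run in CI or a weekly cron and post to the team channel, turning the hygiene rule into an automated nudge rather than a manual chore.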

Physically separate the team’s work from external requests

Don’t pollute the work the team needs to do with external requests. If partner teams and senior leaders want to contribute, give them a separate place to do so – either convert emails/DMs to updates to existing Google Docs, or maintain a distinctly separate customer feature backlog that allows for transparency and even voting for the most important new features. Review these monthly. This is your physical buffer that shields the team from unnecessary escalations that seem urgent but may not be important (also read more about The Eisenhower Matrix, First Things First, and Habit 3 of The 7 Habits of Highly Effective People).
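The Eisenhower split referenced above reduces to a tiny triage function. This is a sketch of the decision table only, with made-up bucket names; real triage of course depends on the analysis described earlier, not two booleans:

```python
# Minimal Eisenhower-style triage for incoming external requests.
def triage(urgent: bool, important: bool) -> str:
    if urgent and important:
        return "do now"              # pull into the current sprint
    if important:
        return "schedule"            # roadmap / main backlog
    if urgent:
        return "delegate or buffer"  # route to the separate request list
    return "drop"                    # politely decline

# A C-Request often *feels* urgent and important, but after analysis
# many land in the buffered external-request list:
print(triage(urgent=True, important=False))  # → delegate or buffer
```

The point of the buffer is that "urgent" alone never routes work directly onto an engineer’s plate; only the urgent-and-important quadrant interrupts the sprint.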

Silent firing is Engineering Managers who don’t give feedback

Good managers will always encourage you and tell you that you’re doing a good job. Great managers will challenge you to mature as a software engineer. In my career I’ve observed EMs who “silent fire” someone by not spending enough time as a coach. The goal of this article is to remove this veil for software engineers so they can improve their manager’s effectiveness. Managers: your team is your responsibility.

One rower in a boat can slow down the team.

Having good optics as an engineer can help you navigate through ineffective managers. Photo taken at Museo Galileo in Florence, Italy.

A great engineering manager maximizes these key traits:
1. Problem solving
2. Project management
3. Effective communication
4. Emotional intelligence
5. Attention to detail
6. Technical skills
7. Delegation
8. Time management
9. Work/life balance
10. Effective feedback
11. Trust building
12. Motivation

All of these twelve are important, but an engineering manager’s ability to be a multiplier as a coach and mentor relies on effective communication and feedback.

Have you ever received feedback like this:

1. The feedback I received from others is that you aren’t meeting the bar (but the specifics of that feedback are left out)
2. You could be doing better (but better is left up to chance)
3. We need to see an improvement in the next few weeks (without providing specifics)
4. You are doing great! Do you have any plans for the weekend? (Where a discussion about your performance is shunted to weekend chat)
5. I don’t think you’re ready for this project, so I’m going to put this other person on it

The abstract things that bad EMs say

Indirect and abstract feedback creates a negative flywheel, a downward trend 📉, that looks something like this:

1. An engineer has a clear gap in ability where coaching could help
2. During several 1:1s an engineering manager lacks radical candor (caring about someone enough to give direct feedback) to rumble and have a direct conversation about it
3. The engineer continues to fail because this gap isn’t addressed
4. Performance reviews come and a poor rating is given with abstract and unhelpful feedback
5. The engineer doesn’t feel valued
6. The manager is asked for a ranked list of talent on their team
7. The company decides to make budget cuts and sets a 10% layoff target for low performers (likely with other qualifiers like: in a high cost location, not near an office, working on a product that isn’t returning on investment or hasn’t been delivered fast enough).
8. The engineer is caught in a layoff

The keystone here is the manager. A manager who doesn’t dare to lead will eventually find the truth, but it takes years for that truth to play out, while engineers who might have contributed more to the company are shown the door.

Without any additional mentorship force, an engineer’s career will continue to fall along the same trajectory. Photo taken at Museo Galileo in Florence, Italy.

One of the clearest patterns I’ve witnessed is the “nice manager”: someone who is extremely personable, makes friends with everyone, and is mostly optimistic about the company and its people. This creates a halo effect: engineers see them as an ally, and the manager is well-liked. During upward feedback surveys, direct reports tend to give glowing feedback. During calibrations, this manager believes that every single person is performing above their level. Since they have a halo effect, people tend to take them at their word (another reason for checks and balances in a calibration system). Outside of this team, people question why milestones are missed and quality is poor; on the merits, the team may not be producing, but it’s challenging for that feedback loop to have any impact. This manager is “silent firing” because they never provide valuable feedback, for fear of damaging their kind image. Left undetected, correction can take years.

The second pattern is the “lack of trust” manager. This manager doesn’t believe in people from the start and limits their opportunities, but also doesn’t give them feedback to improve. For engineers who become managers, it’s hard to risk slowing down project deliveries by migrating work away from a known-good workhorse on the team who might have more experience in a domain. These managers “silent fire” by not coaching people who have low task-relevant maturity. In a 9-box system, this expediency bias (and these others) limits the potential the manager sees in the employee.

As a manager, how can you avoid this?

  1. Practice giving direct feedback. You can still be well-liked and give good feedback.
  2. Read Dare to Lead and Radical Candor. Check out this DevInterrupted podcast episode. Engineers: suggest that you’ve read these books and recommend them to your manager.
  3. Listen to others around you. Model your behavior after leaders who you see giving clear feedback to others. If you can’t figure this out on your own, ask another engineer: what was the last impactful feedback you received? You rely on tools to debug memory leaks, why not rely on others as a self-improvement tool?
  4. Fix your scaling problem. If your team is too large (more than 12 people) and doesn’t allow you to go deep in 1:1s with each person, look to hire another manager, or instead of load balancing over everyone, focus on specific people each week. Take notes in your 1:1s and follow up: write to disk, because trying to keep everything in memory will not scale.
  5. Debug external feedback. Did you receive feedback about someone on your team? Ask for specifics. Try to understand if it’s a pattern or a single event. Take notes for each person on your team like you’re preparing a research paper without a due date. During your next 1:1, say “hey, I received some feedback about ______ and wanted to discuss this more with you.”
  6. Ask others for 360 feedback for each person on your team. Put it through your own filter: not all feedback is valuable or actionable. Over time, identify patterns and create an action plan to address them.
  7. Give people on your team a chance. Start newcomers with smaller projects, test the water with longer tenured people by giving them projects outside of their comfort zone. If everyone on the team is extremely specialized, you cannot effectively pivot to new asks, and at some point not providing opportunity will undermine your potential as a manager. You ramp features in production, why not ramp your trust in people?
  8. Check your peers. If you see that teams aren’t consistently performing because of a “nice manager” or a “lack of trust” manager, give that feedback directly to that manager. Not providing this feedback prioritizes short-term gains; look to influence over the long term to keep that team engaged and people performing well. Ignoring deprecation warnings will eventually bite you.

Engineers, what can you do?

  1. Check yourself. Are you checked out and not contributing to the team? According to this HBR article, by checking out, you might be creating a self-fulfilling prophecy where your manager is likely to “silent fire” you. Turn on your own linter.
  2. Apply BFS to your work network. Ask for feedback outside of your team, find a mentor where you observe admirable traits.
  3. Improve your survey responses. In the upward feedback company survey, make it clear by selecting the lowest options available for questions that indicate you’re not receiving quality coaching. If you can, add text comments that validate this. Most company surveys are anonymous, provided you don’t add identifying specifics. If your manager keeps dangling carrots of “just one more thing” to justify a promotion, you may bias toward giving a great upward review; check your own biases each time you’re asked to review your management chain.
  4. Feedback horizon. If you’ve been on a team for about 4 months, you should have received some kind of feedback about the work you’ve been doing. At a minimum, after 6 months, a manager should be giving you feedback on your strengths and opportunities. You wouldn’t push code without a logging statement to validate your feature is working, why would you go months without instrumenting logging in your job?
  5. ⭐️ Ask for specific feedback. That presentation you gave, that document you wrote, that product you pushed to production – ask for some direct feedback on an artifact that you created. This is a good unit test – it should force your manager to give you feedback and can be a warning sign if this doesn’t occur. I added a star to this one because I think it’s the most important. If your manager can’t give you specific feedback for growth, ask them for more challenging projects or stretch goals. Think of this as pen testing – keep testing until you find a way in.
  6. Humble brag about yourself. In public docs (like a Wiki), maintain a table of contents that link to everything you’ve worked on. Make it easy for leaders to see that canonical document so they can jump to various artifacts and give direct feedback.
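The BFS metaphor in point 2 above can be taken literally: expand your feedback network one hop at a time, starting with the people closest to your work. A toy sketch over a made-up contact graph (names and the `feedback_candidates` helper are purely illustrative):

```python
# Toy breadth-first search over a hypothetical work network:
# nearest colleagues first, then their contacts, out to max_hops.
from collections import deque

def feedback_candidates(graph: dict, start: str, max_hops: int = 2) -> list:
    seen, order = {start}, []
    queue = deque([(start, 0)])
    while queue:
        person, hops = queue.popleft()
        if hops >= max_hops:
            continue
        for peer in graph.get(person, []):
            if peer not in seen:
                seen.add(peer)
                order.append((peer, hops + 1))
                queue.append((peer, hops + 1))
    return order  # (name, distance) pairs, nearest first

graph = {
    "me": ["teammate", "manager"],
    "teammate": ["sister-team-lead"],
    "manager": ["skip-level"],
}
print(feedback_candidates(graph, "me"))
# → [('teammate', 1), ('manager', 1), ('sister-team-lead', 2), ('skip-level', 2)]
```

The ordering matters: one-hop contacts see your day-to-day work and give the most grounded feedback, while two-hop contacts (a sister-team lead, your skip-level) are better sources for mentorship and calibration-style perspective.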

Why the McKinsey 9-Box Model Is Silently Destroying Engineering Teams (And What to Watch For)

GE-McKinsey nine-box matrix

The 9 Box Matrix is a tool developed by McKinsey in the 1970s to help GE prioritize investments across its business units. It evaluates the units on industry attractiveness and competitive strength. In recent years, Human Resources teams have adopted the model as a talent management tool, replacing the two industry axes with performance and potential to categorize employees and determine which to promote, retain, and invest in, and which to reallocate. The HR version of the model is not standardized and there are variations in circulation.

It can take years to find the goldilocks zone for performance

I only figured out these dynamics around 2015, and I’m not familiar with how calibrations were performed in my earlier orgs. So while I’ll share what worked for me, this is not a checklist of things that are guaranteed to make you successful.

The shift from technical efficiency to organizational restructuring

If your company has been in contact with a consulting company like McKinsey or Deloitte in order to reduce costs (read: “improve efficiency”), you can bet that this model is being rolled out at your company. For some companies, a theme of efficiency started around 2018 with brainstorms on how to drive down cost, mostly by improving data center efficiency. In the years that followed, a Chief of Staff might have come in from one of these consulting companies and changed the way you looked at cost savings; at some point, the 9-box model was likely introduced.

The 9-box model (taken from RapidBI)

When we began to use this for calibrations, it also came with a suggestion that only 10% of the team should be in the “Future Leader” box, Box 9. The general logic is that bonuses should pay people for high performance and RSUs (e.g. with a 3-year vest) should compensate potential. The “Under Performer” box, Box 1, also had a similar 10% guidance, which aligns with the Vitality Curve created by Jack Welch and misused ever since.

This 9-box may have worked for GE as it prioritized investments, but it doesn’t fit calibrating engineers, specifically because measuring potential in people is different from measuring potential in capital expenditure investments. This adds noise to the process.

For example, these could be used to determine potential:
– “Next level” (e.g. at the levels higher than their current role) abilities and motivation
– Skills and mastery
– Ability to determine future target state
– Thinking beyond themself
– Automation
– Team player
– Learning
– Drive

Performance likely looks at:
– What they have worked on
– How they have done it

You may have heard the saying “pay for performance”, but in practice it was more like “bonus and merit increase for performance, RSUs for potential”. In some organizations, merit increases were essentially taken away from managers in favor of an HR algorithm that looked at role, level, and location and gave managers a value that was typically less than 3%, unless someone was being promoted, in which case it was less than 9%.

The baseball card process with the McKinsey 9-box

Summarize an entire year on a PowerPoint slide. Created with stable diffusion with prompt: davinci style front of sports card modern with graphs and laptop with pcb circuitboard background

Every organization is different, but some teams may be asked to pre-fill this 9 Box before calibrations.

During calibration meetings, managers presented a PowerPoint Slide that looks similar to a baseball card.


It might have sections like:
1. Box Number (e.g. Box 3 “Trusted Professional”)
2. Outcomes – things they worked on (notice I didn’t say completed work)
3. Skill dimension attributes like customer centricity, technical skill (design, quality, requirements, expertise, proficiency, efficiency), collaboration and an icon to indicate which level they performed at: developing, proficient, accomplished, and expert.
4. Basic details, like time in role (e.g. 5 months), level (e.g. E4, L5, MTS1), and date of last promotion

Shoehorn

Now we have two things: a 9-Box rubric and a PowerPoint slide for each person.

This step of shoehorning attempts to assign each person a Box as engineering managers meet. Each manager, who doesn’t know much about software engineers on other teams, now needs to read an extreme summary of incomparable information like “They worked on a database migration from Teradata to BigQuery” or “They are doing a great job as a scrum master”.

Given twelve different skill dimensions and a baseball-card summary, managers then decide how that relates to being a “Core Employee” versus “Effective”. This is where much of the final decision rests on the manager’s ability to:
1. Present their case
2. Defend their case
3. Know what engineers worked on
4. Draw comparisons between other engineers

If someone is strong in their collaboration skills but weak in their efficiency, where does that put them? There are too many variables at play to translate them into a single box. This is different from how GE may have used the model, reviewing its portfolio of products to align on a box by comparing revenue and growth estimates.

What can you do as an employee? Humble brag about yourself, all the time, and keep a record of it. Record how significant your achievements are and make sure your manager, other managers, your skip-level, and other people know about them. Ask for a copy of the template your manager is using and fill it out for them. Use specific wording and highlight key achievements. You know the most about what you did; the goal of a calibration is for those achievements to be compared against others’. Personally, I want to control my own destiny as much as possible, and over the years I’ve filled out my own promotion slides and calibration documents several times.

Rating all employees on one dimension at a time (in this example, safety) exemplifies a noise-reduction strategy we will discuss in more detail in the next chapter: structuring a complex judgment into several dimensions. Structuring is an attempt to limit the halo effect, which usually keeps the ratings of one individual on different dimensions within a small range. (Structuring, of course, works only if the ranking is done on each dimension separately, as in this example: ranking employees on an ill-defined, aggregate judgment of “work quality” would not reduce the halo effect.)

Kahneman, Daniel; Sibony, Olivier; Sunstein, Cass R.. Noise (pp. 293-294). Little, Brown and Company. Kindle Edition.

Ranking reduces both pattern noise and level noise. You are less likely to be inconsistent (and to create pattern noise) when you compare the performance of two members of your team than when you separately give each one a grade.

Kahneman, Daniel; Sibony, Olivier; Sunstein, Cass R.. Noise (p. 294). Little, Brown and Company. Kindle Edition.

How should it be done? Engineering Managers, here is my advice to you.


1. Reduce subjective skills that are assessed. Twelve different axes is too noisy. If this can be reduced to 5, then managers can provide an assessment of [Does Not Meet Expectations, Meets Expectations, Exceeds Some Expectations, Exceeds Most Expectations, Exceeds All Expectations and Beyond]. These scores should be transparent and specific for all of the 5 attributes and should be calibrated among all engineers.
2. Transparently weigh performance more heavily than potential, or remove it all together since most potential assessments in their current form are subjective. Use promotions as the time to look at potential at the next level using historical evidence, but use annual calibrations for salary/bonus/RSUs to consider performance.
3. Be stricter about how people are represented. Move from PowerPoint to documents with strict templates for sections, headers, length, and content. PowerPoint lets people change font sizes and the sizes of sections; reduce noise by tightening the restrictions on managers. Add a total word requirement per section: Outcomes should be larger, subjective sections smaller. Give everyone a fixed amount of time to read the content for each person, and also a fixed time for discussion of each person, to keep calibrations efficient.
4. Look at some software engineering stats as directional, but not the only metric. Things like number of PRs opened, PR closure time, PR comments given may draw out outliers that will force a longer discussion if they are also in one of the top talent boxes. It may also force them down from the middle box if the justification is light. I have seen a few engineers where the number of commits dropped, their deploys dropped, and it triggered a discussion on productivity – as a manager, you need to be proactive here: don’t let this wait 6 months.
5. Have someone designated to check bias. “She was a rockstar and knocked everything out of the park” would likely bias others if a manager said that – and this kind of comment is subjective. A person who checks bias would stop this discussion from continuing. A less biased way to say that would be “She delivered all major milestones ahead of time with performance unlike anyone else within our team and at her level”.
6. Take notes. We would spend hours calibrating, but some managers would not keep their own notes. This creates a negative flywheel when they go back to the software engineer to discuss strengths and growth areas. Feedback should be normalized during this process so that everyone receives it irrespective of their manager. This is different from promotions, where the process should, by default, collect decent feedback for folks seeking promotion.
7. Get rid of the 9-box. Come up with an anchor case, and rank engineers above or below that anchor case. Refer to rankings for each attribute as part of the calibration and determine how five different attributes will be averaged to create an overall rating. If your organization didn’t plan their budget effectively, and can’t manage the budget, and can’t be creative with solutions (okay, maybe not even creative, but humane) to avoid a layoff – use this ranked list, and a set of specific algorithms determined ahead of time, to determine where to eliminate roles. Key here: the 9-box system is noisy, especially for engineers: using this as your method for layoffs is a failure.
8. Don’t keep your performance management practices a secret. Everyone should know what specific actions are taken to determine top/mid/low performance. If you’re rating someone’s potential (don’t do this unless this can be objective), they should be getting consistent feedback on how they can increase that potential.
9. Practice, especially with new managers. Don’t do just a single calibration. Create anonymized versions of each level and rating, then remove the rating and have managers rate them independently until you see a consistent final result. This has the side effect of a more performant process as people gain task-relevant maturity.

Goal Posts

Goal posts on a field don’t move; neither should your criteria for calibrations. Created with Stable Diffusion with prompt: painting of moving goal posts that are blurry

This section deserves its own discussion, as I’ve seen this happen numerous times over my 7 years as an EM.

Research suggests that a combination of improved rating formats and training of the raters can help achieve more consistency between raters in their use of the scale. At a minimum, performance rating scales must be anchored on descriptors that are sufficiently specific to be interpreted consistently.

Kahneman, Daniel; Sibony, Olivier; Sunstein, Cass R.. Noise (p. 297). Little, Brown and Company. Kindle Edition.

Here are some examples that have come up in calibration meetings:

  • Not aspirational enough: Should a senior engineer who is delivering the most challenging projects, before deadlines, and helping raise everyone around them be penalized if they aren’t aspiring to be an even more senior software engineer?
  • Critical project: Should someone who delivered a critical project be rewarded more than someone who delivered multiple projects during the same timeframe?
  • Program-managing software engineer: Should a software engineer be rewarded for scheduling meetings with the team and managing the backlog when they aren’t delivering any significant code?
  • Responsiveness: Should someone who is responsive at all hours of the day and night be rewarded when they haven’t been able to deliver a project?
  • Design over delivery: Should a senior engineer be rewarded for designing multiple systems and features when delivery of them has faltered?
  • Delayed impact: Should someone be rewarded for designing a roadmap for next year that cannot be proven out until then?

All of these questions should be answered with a sufficiently specific anchor, as Kahneman points out, for each level of performance.

For example, for a Senior Software Engineer, emphasis should be on both design and delivery of their projects, equally weighted: if the team is not delivering quality solutions fast enough, no amount of brilliant design will improve the rating. For Junior Software Engineers, design may not be as important, but consistent, quality delivery should be.
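That level-specific anchor can be made explicit with weights. The weights below are assumptions for illustration only; the point is that agreeing on them before calibration removes a whole class of moving goal posts.

```python
# Illustrative level-specific weights; agree on the real ones as a team,
# before calibration sessions start.
WEIGHTS = {
    "senior": {"design": 0.5, "delivery": 0.5},   # equally weighted
    "junior": {"design": 0.2, "delivery": 0.8},   # delivery dominates
}

def weighted_rating(level: str, scores: dict[str, float]) -> float:
    """Combine attribute scores using the agreed weights for this level."""
    w = WEIGHTS[level]
    return sum(w[k] * scores[k] for k in w)

# A senior with brilliant design but faltering delivery is capped at 3.5,
# while a junior with unremarkable design but strong delivery scores higher.
senior = weighted_rating("senior", {"design": 5, "delivery": 2})  # 3.5
junior = weighted_rating("junior", {"design": 2, "delivery": 4})
```

With the weights fixed up front, the "design over delivery" question from the list above answers itself instead of being relitigated per person.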

The key point here is that these discussions should not happen when you’re already in calibration sessions. Get together as a team and make sure you talk through these cases.

If there is an HR guideline (read: forced ranking) for who can be a top/mid/low performer, everyone should be ranked against one another so that logical lines can be drawn at the respective cut-off points. What you don’t want to see are last-minute attempts to figure out whom to move into different buckets, where goal posts get added just to justify a rating change.

Rewarding

One of the fundamental questions you should ask of your leadership team is: How are bonuses/RSUs/salary increases distributed? Does the VP make the decision, or delegate budget to Senior Directors, Directors, or Managers? If budget is distributed all the way down to each manager (ideally split equally, in a completely fair way), you could be missing out when you go the extra mile.

If budget is split among managers (instead of Directors or Senior Directors), this restricts the overall bonus that top talent can receive.

$1000 split between 5 managers allows each manager to give $200 as they see fit.

Taking that same model, but pooling the $1000 so that 80% of it goes to the 5 managers ($160 each to distribute instead of $200), leaves $200 to be distributed to top talent.
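The arithmetic behind the two models, using the numbers from the text ($1000, 5 managers, an assumed 80/20 pool split):

```python
total_budget = 1000
managers = 5

# Model A: split equally; each manager distributes their share as they see fit.
per_manager_equal = total_budget / managers                  # $200 each

# Model B: pool the budget; 80% goes to managers, 20% is reserved
# for top talent across the whole organization.
pool_share = 0.80
per_manager_pooled = total_budget * pool_share / managers    # $160 each
top_talent_reserve = total_budget - total_budget * pool_share  # $200 reserved
```

The trade-off: every manager gives up $40 of discretionary budget so that the organization can award a meaningfully larger bonus to its strongest performers.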

As a manager, I would prefer for the money to be pooled at the Senior Director level, with a fair calibration system, so that top talent can receive a larger bonus for going above and beyond. Unfortunately, at some companies, this is left up to the organization to determine on their own, so people are rewarded differently.

9-box for people or factories

“We can’t choose our fate, but we can choose others. Be careful in knowing that.” Photo taken at Harry Potter Studio Tour in UK

Factories, like the ones General Electric applied its 9-box to, have physical machines and capital. Small changes to production lines, machinery, and formulas can improve performance. New equipment or future technology can improve the potential of a factory. Engineers aren’t factories.

Engineers are curious learners: they can change their focus, improve systems, and deliver results. Communication is what helps engineers thrive, and a failure of communication doesn’t mean engineers lack potential. A poor engineering manager may not create an environment where people are curious. Senior engineers may be overworked and stressed, giving junior engineers nothing to aspire to: that doesn’t mean the junior engineer lacks potential. Withholding opportunities for engineers to thrive is a failure of leadership’s potential, not the engineer’s.

A manager who cannot accurately determine performance or potential introduces noise into the 9-box system. An industrial engineer assessing the performance and potential of a factory leverages a rubric with measurable criteria; you can’t say the same for performance reviews of software engineers. Don’t force-fit processes from one system into another because a consultant says it works. Don’t outsource your own thinking about how to measure performance: if it doesn’t work at first, iterate. That’s what great engineers do.

FAQ

Why only engineers? I have the best knowledge of day-to-day engineering expectations, but I imagine a similar case could be made for Product Managers (PM), Program Managers (PgM), User Experience (UX), and Business Operations (BusOps).

If you remove the 9-box, what should be used? Only leverage your existing career ladder (for example, Square’s). Having two different frameworks creates overhead. As discussed in this article, slim down the number of skills within that ladder to five or fewer.