
Models for Software Engineer Performance Reviews | Toptal

source link: https://www.toptal.com/software-engineers/software-engineer-performance-review

Software Engineer Performance Reviews Explained

Nermin Hajdarbegović
A veteran tech writer, Nermin helped create online publications covering everything from the semiconductor industry to cryptocurrencies.

When considering different approaches to software engineer performance reviews, one question is bound to come to mind: Why do we need to use multiple review models? The simple answer is that software development is a complex, multifaceted process that often involves dozens of individuals working in various teams.

Executives and stakeholders don’t always have intimate knowledge of each developer’s qualifications and responsibilities, especially in big organizations and teams. This is why performance reviews should be left to technically proficient professionals capable of understanding each software engineer’s responsibilities, competencies, skill set, and role in the software development process as a whole.

So, what is the right way of conducting performance reviews? The answer will depend on many factors, ranging from the organization’s size and goals to more granular aspects of an engineer’s performance.

Management Performance Reviews

Managers play a leading role in engineering performance reviews. In many smaller organizations, a direct manager may be the only person conducting a review. This is usually not the case in big companies, as their review processes are often more complex and involve more people in various roles and departments. Bigger organizations also tend to use peer reviews and self-assessments more often than smaller organizations.

Performance reviews have come a long way since major corporations adopted them in the second half of the 20th century, but the history of performance reviews is beyond the scope of this article, as is the behavioral psychology that underpins some performance review models. Instead, this piece focuses on the practical aspects of the process, starting with management’s responsibilities.

Although approaches may vary depending on the size and type of organization, some basic tenets apply to most, if not all, review situations.

How Managers Need to Approach Performance Reviews

Management needs to thoroughly plan the review process and ensure that everyone involved is aware of their responsibilities.

  • The review process needs to be defined well ahead of time, allowing managers and engineers ample time to take part and submit their feedback. Last-minute feedback might be of lesser value since it could be submitted hastily to meet a deadline.
  • Management needs to communicate the goals, objectives, and value of the review process to engineers and other stakeholders. Good communication should eliminate misgivings about the process and improve the quality of the reviews.
  • Review templates or forms need to be agreed upon in advance and they should be designed with longevity in mind. Ideally, they should not change between review cycles, ensuring the results of reviews are comparable over time.
  • The methodology should aim to minimize bias and ensure a high degree of consistency. Every manager and engineer has their own way of doing certain things, but consistency prevents individuals and their biases or preferences from unduly influencing outcomes.
  • When peer reviews and self-assessments are employed, management needs to ensure the integrity of the review process.

Mitigating Bias and Handling Questionable Reviews

Due to the outsize influence of management on the review process, managers need to be mindful of potential biases and other issues that may undermine the process. Even if the planning stage is executed well and the whole process is designed properly, management may need to eliminate certain unwelcome practices and ensure the integrity of the process.

  • Competencies and expectations need to be taken into account during all stages of the process. Reviewing every team member with a broad brush could cause managers or peers to submit overly positive or negative reviews. Suppose a peer submits a questionable review because they’re not familiar with a certain engineer’s specific competencies. In that case, management needs to intervene and ensure such a review does not skew the overall score.
  • Managers can also decline reviews. Suppose a particular manager is disconnected from the work of a small team of engineers. In that case, they should not be reviewing the team’s performance directly, as they’re likely to lack the context and knowledge needed for a balanced and detailed review.
  • Reviewers lacking in-depth knowledge of a specific individual or their duties may feel compelled to submit a review of their performance to check a box, thus generating a review that does not have much substance and does not add much value to the review process.
  • Biased and one-sided reviews can distort results, too. If a manager reviews a team member who was hired against their wishes, or a team working on a project the manager didn’t endorse, their review may not be objective. Alternatively, reviewers might “cherry-pick” specific performance indicators to make individuals or teams appear better because it would suit their interests.

Ideally, managers and executives would be able to conduct reviews with a purely objective mindset, but biases exist. Being aware of them, however, can mitigate their effects.

Bear in mind that the way a manager reviews a software engineer can offer valuable insights into the manager’s performance and professionalism.

Peer Reviews

Peer reviews offer several advantages compared to manager reviews, though there are some trade-offs to keep in mind.

Peers tend to be better positioned than managers to evaluate each other’s performance. They have much more exposure to the work of their teammates. They often work on the same projects and collaborate with the same people, and therefore tend to have a good grasp of team dynamics and individual engineers’ capabilities.

However, peer reviews can also be affected by bias. Bias can be positive, based on friendship, or negative, caused by personal issues or rivalry among team members. Groupthink can also influence the review process, especially in tightly knit teams, as people may be inclined to cover for their teammates. Given these possibilities, peer review templates and questionnaires need to be designed in ways that mitigate bias, focusing on specific competencies and objective criteria whenever possible. Asking how team members’ results track against key performance indicators tends to add more value than subjective questions about personal traits or other open-ended prompts.

The potential for bias raises a key question: Should peer reviews be anonymous?

Valid arguments can be made to support both anonymous and public reviews, but it’s important to consider different organizational schemes and team sizes. Hence, there is no definitively right or wrong answer, though most organizations favor anonymous reviews.

Anonymous vs. Public Feedback

Let’s take a closer look at the advantages of anonymous feedback:

  • Anonymity can encourage openness and original thinking. If most of the team feels positive about something or someone, dissenting opinions might be unpopular. Anonymous reviewers can offer a different perspective without antagonizing their coworkers.
  • Anonymous feedback can contain valuable information. Suppose someone is compiling anonymous and public feedback about the same person. Chances are the anonymous input will raise issues that reviewers were reluctant to raise publicly. A few additional points can have great value, especially if issues are raised before becoming apparent to the rest of the team. This early warning gives management and the reviewee a chance to address and rectify newly identified shortcomings before they escalate into something more serious.
  • Preserving relationships is another crucial aspect of anonymous feedback. People react to negative comments in different ways, so maintaining anonymity can preserve cohesion and prevent friction between team members.
  • If reviews aren’t mandatory, it’s usually easier to convince people to participate in anonymous reviews.

However, there are some disadvantages to anonymous peer reviews:

  • Anonymity cuts both ways. It encourages candid reviews, yet it can spur some people to abuse it to promote their agenda through disingenuous reviews. There’s a risk someone will use their anonymity to undermine a coworker based solely on their personal preferences. Conversely, anonymity can be used to submit positive reviews for people who don’t deserve them, as reviewers can choose to protect their longtime colleagues and friends, possibly at the expense of other team members.
  • Public reviews can carry more weight. Suppose an individual receives a few lines of negative feedback from one of dozens of anonymous reviewers. That feedback will likely not be as impactful as the same feedback coming from a trusted and respected team member. Employees are far more likely to take feedback seriously when it comes from someone close to them.
  • Anonymity can be challenging to ensure in some organizations, namely small ones. Someone who receives four reviews from the five people they work with daily will likely be able to tell who submitted which review. This can cause people to treat the reviews as anything but anonymous, thus defeating the whole point of anonymizing them.
  • While it might be more challenging to get people to submit public reviews, reviewers are more likely to take them seriously knowing their name is attached. Therefore, they could put in more time to offer detailed, objective, and balanced feedback rather than treating the review process as a formality.

Self-Assessments

Self-assessments, or self-appraisals, are another approach commonly used in performance reviews. As with other review models, they come with their own controversies.

Management typically requires staff to submit self-assessments on a regular basis, which makes sense if the goal is to use them to track progress and changes over time. Few organizations mandate monthly appraisals, but annual, semiannual, and even quarterly self-assessments are common. Asking engineers to provide feedback on a regular basis can be beneficial, especially when dealing with teams and individuals operating with a high degree of autonomy. Reviewees can use these regular assessments to communicate potential problems that need to be resolved, explain how they overcame specific challenges, detail how and why they improved their performance, and identify what’s preventing them from improving further.

Mitigating the Limitations of Self-Assessments

Unfortunately, self-assessments suffer from several serious shortcomings, bias being the most obvious one. Some people are likely to overstate their performance, downplay deficiencies in their work, or omit problems that impede their performance. Others may be too critical of themselves. In either case, the outcomes can be skewed.

How can organizations mitigate these shortcomings? Managers can design self-assessment forms and questions to account for biases and minimize their impact.

  • Avoid open-ended questions that allow for too much subjectivity.
  • Focus on tangible results instead of subjective goals and values.
  • Place a higher value on critical responsibilities handled by the reviewee.
  • Emphasize key performance indicators and quantifiable goals.
  • Reiterate the organization’s core values and assess performance accordingly.

To allow engineers to address issues that the self-assessment form may not cover, provide a comments section, as in the sketch below.
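As a rough illustration, here is a minimal sketch of what such a form could look like in code. The field names, weights, and scoring rule are assumptions, not a prescribed template; the point is simply to anchor each response to a quantifiable goal and to weight critical responsibilities more heavily.

```python
# Minimal sketch of a structured self-assessment form (hypothetical field names).
from dataclasses import dataclass, field

@dataclass
class GoalAssessment:
    goal: str            # quantifiable goal agreed at the start of the cycle
    target: float        # numeric target (e.g., 90.0 for "90% test coverage")
    achieved: float      # self-reported result for the same metric
    weight: float = 1.0  # higher weight for critical responsibilities

@dataclass
class SelfAssessment:
    engineer: str
    period: str
    goals: list[GoalAssessment] = field(default_factory=list)
    comments: str = ""   # catch-all for issues the form does not cover

    def score(self) -> float:
        """Weighted attainment across all goals, capped at 100% per goal."""
        total_weight = sum(g.weight for g in self.goals) or 1.0
        attained = sum(min(g.achieved / g.target, 1.0) * g.weight for g in self.goals)
        return attained / total_weight

# Example usage with made-up numbers.
review = SelfAssessment(
    engineer="jane.doe",
    period="2021-Q3",
    goals=[
        GoalAssessment("Unit test coverage (%)", target=90, achieved=86, weight=2.0),
        GoalAssessment("On-call incidents resolved within SLA (%)", target=95, achieved=97),
    ],
    comments="Coverage work was slowed by a mid-quarter dependency upgrade.",
)
print(f"Overall attainment: {review.score():.0%}")  # roughly 97% for this data
```

Because every answer maps to a number and a weight, results remain comparable across review cycles, while the free-form comments field still captures anything the structured questions miss.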

360-Degree Assessments

A 360-degree feedback process combines a number of the previously discussed models to provide more expansive feedback and identify the strengths and weaknesses of reviewees. In a 360-degree system, direct performance reviews, along with reviews from fellow engineers (peers), managers, clients, and other sources, are tabulated to generate a single result that is presented to the reviewee in an easy-to-understand format.

[Figure: 360-degree feedback performance review model]

As this approach ensures feedback from multiple sources and covers more than basic performance indicators and skills, it can be useful in many scenarios. It provides an overview of an engineer’s performance, allowing management to gain valuable insights at a glance. In addition, should an organization decide not to share the results of every review with each employee, it can share the results of 360-degree feedback instead.

This approach assesses basic team skills and provides team feedback on an engineer’s performance, behavior, communication, and any other desired criteria. However, it’s not ideal for assessing technical skills, skills specific to an individual project, or granular performance indicators. Since it typically involves many people with different backgrounds and levels of involvement with the reviewee, 360-degree feedback may be too subjective to assess some aspects of a software engineer’s performance.
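To make the tabulation step concrete, the sketch below combines feedback from several sources into one score. The source categories, weights, and 1-to-5 scale are assumptions for illustration; each organization defines its own.

```python
# Minimal sketch of tabulating 360-degree feedback into a single result.
from statistics import mean

# Ratings collected per source on a 1-5 scale (hypothetical data).
ratings = {
    "self":    [4.0],
    "manager": [3.5],
    "peers":   [4.5, 4.0, 3.5],
    "clients": [4.0, 5.0],
}

# Relative weight of each source in the final score (assumed values).
weights = {"self": 0.10, "manager": 0.30, "peers": 0.40, "clients": 0.20}

def tabulate(ratings: dict[str, list[float]], weights: dict[str, float]) -> float:
    """Average each source's ratings, then combine the averages by weight."""
    total_weight = sum(weights[src] for src in ratings)
    combined = sum(mean(scores) * weights[src] for src, scores in ratings.items())
    return combined / total_weight

print(f"Overall 360-degree score: {tabulate(ratings, weights):.2f} / 5")  # 3.95 here
```

Weighting peers most heavily reflects their day-to-day exposure to the reviewee’s work, but the weights are a design choice the organization has to justify, not a given.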

What to Include in Software Engineer Performance Reviews

What should be included in a performance review that generates value for stakeholders and provides them with actionable information? Should reviews be comprehensive or focus on a few items to work on in the near term?

The answer depends on the type of organization and the scope of the review, though some points should be included in most, if not all, performance reviews.

Speed and Iteration

The speed at which a developer finishes a task is an essential metric in any performance review, as is the way they handle iterative software development. Speed and iteration are critical when dealing with large teams working on a single project, individuals who often jump from one project and client to another, and firefighting efforts. A software engineer’s ability to hit the ground running can make or break a project.
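As one rough way to quantify delivery speed for a review period, the sketch below computes the median cycle time (time from starting a task to finishing it) from exported timestamps. The field names and sample data are hypothetical; most issue trackers can export equivalent information.

```python
# Minimal sketch: median cycle time across tasks completed in a review period.
from datetime import datetime
from statistics import median

tasks = [
    {"started": "2021-09-01T09:00", "finished": "2021-09-03T17:00"},
    {"started": "2021-09-06T10:00", "finished": "2021-09-07T12:00"},
    {"started": "2021-09-08T09:30", "finished": "2021-09-13T16:00"},
]

def cycle_time_days(task: dict) -> float:
    """Elapsed time between start and finish, in days."""
    start = datetime.fromisoformat(task["started"])
    end = datetime.fromisoformat(task["finished"])
    return (end - start).total_seconds() / 86400  # 86400 seconds per day

print(f"Median cycle time: {median(cycle_time_days(t) for t in tasks):.1f} days")
```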

Code Quality and Code Reviews

While speed is a key metric, it is less valuable if it comes at a high price. Code quality must remain paramount and should not be compromised to meet tight deadlines. Lower-quality code may cause headaches for the rest of the team or the organization later on.

In a code review, someone other than the author examines the code. The process, albeit time-consuming, is straightforward and a good way of ensuring and maintaining quality. Ongoing code reviews also free an organization from having to examine every line of code its developers write in one comprehensive audit. Code reviewers need to be highly skilled individuals capable of identifying various problems and critical areas that need attention, ranging from design and functionality to style and documentation.
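For illustration, a team could capture the areas mentioned above (design, functionality, style, documentation) in a simple review rubric and summarize how often submitted code passes each category. The checklist items and structure below are assumptions, not a standard.

```python
# Minimal sketch of a code review rubric and a per-review summary.
CHECKLIST = {
    "design":        ["Fits existing architecture", "No unnecessary complexity"],
    "functionality": ["Handles edge cases", "Covered by tests"],
    "style":         ["Follows team style guide", "Names are descriptive"],
    "documentation": ["Public APIs documented", "Changelog updated"],
}

def summarize(review: dict[str, dict[str, bool]]) -> dict[str, float]:
    """Fraction of checks passed per category for a single code review."""
    return {
        category: sum(results.values()) / len(results)
        for category, results in review.items()
    }

# Example: a reviewer records pass/fail for each check on one pull request.
one_review = {
    cat: {check: True for check in checks} for cat, checks in CHECKLIST.items()
}
one_review["documentation"]["Changelog updated"] = False
print(summarize(one_review))  # e.g. {'design': 1.0, ..., 'documentation': 0.5}
```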

Internal Communication and Responsibility

Communication is not a technical skill but it can profoundly impact the quality of a software engineer’s work. Engineers communicate with their peers, team leads, stakeholders, and clients routinely and need to demonstrate a high degree of responsibility and professionalism.

Poor communication can undermine the quality of their work and allow minor issues to escalate into bigger and far costlier problems. Professional and timely communication is foundational and should be subject to review. Even the most impressive technical skills matter less than the ability to take responsibility and communicate effectively.

Recruitment, Leadership, and Planning

Senior software engineers and team leads often play key roles in recruitment, so it is important to review these aspects of their performance as well. If a team lead makes poor recruitment decisions, that impacts the whole team and possibly the entire organization.

Leadership can be difficult to gauge and review, especially if team members are reluctant to provide negative feedback. Therefore, it is necessary to ensure that the review process shields them from possible reprisals for unflattering reviews of their superiors.

Planning is another subjective category. Leaders need to ensure adequate planning and execution of team goals and objectives. However, their performance in this respect depends on other team members, both subordinates and superiors. Missed targets and deadlines are obvious red flags, but the review process should consider the range of factors that may have caused them, e.g., poor management that failed to take timely action to get the project back on track, or a lack of time or resources needed to meet a deadline.

Performance Reviews Aren’t Easy - Don’t Make Them Harder

Each organization should create a performance review model tailored to its particular needs. Just because Google or Apple does something doesn’t necessarily mean it will work for another company or team.

Performance reviews require a lot of planning and careful consideration. It is necessary to strike the right balance between complexity and thoroughness on one side and practicality and usefulness on the other. Small organizations can conduct performance reviews without making the process too cumbersome and difficult. Likewise, big organizations should do their best to make the process as lean as possible.

Don’t forget to review the review process itself. Whether conducting reviews on a quarterly or annual basis, review the most recent round of reviews before proceeding with the next one. Did the process go smoothly? Did it uncover useful information? Identify any shortcomings, address them, and strive to improve the review process continuously.

