
Can You Measure Software Developer Productivity? - Slashdot

source link: https://developers.slashdot.org/story/23/08/19/2036255/can-you-measure-software-developer-productivity



Can You Measure Software Developer Productivity? (mckinsey.com) 67

Posted by EditorDavid

on Saturday August 19, 2023 @06:34PM from the measurements-of-units dept.
Long-time Slashdot reader theodp writes: "Measuring, tracking, and benchmarking developer productivity has long been considered a black box. It doesn't have to be that way." So begins global management consulting firm McKinsey in Yes, You Can Measure Software Developer Productivity... "Compared with other critical business functions such as sales or customer operations, software development is perennially undermeasured. The long-held belief by many in tech is that it's not possible to do it correctly—and that, in any case, only trained engineers are knowledgeable enough to assess the performance of their peers.

"Yet that status quo is no longer sustainable."

"All C-suite leaders who are not engineers or who have been in management for a long time will need a primer on the software development process and how it is evolving," McKinsey advises companies starting on a developer productivity initiative. "Assess your systems. Because developer productivity has not typically been measured at the level needed to identify improvement opportunities, most companies' tech stacks will require potentially extensive reconfiguration. For example, to measure test coverage (the extent to which areas of code have been adequately tested), a development team needs to equip their codebase with a tool that can track code executed during a test run."

Before getting your hopes up too high over McKinsey's 2023 developer-productivity silver bullets, consider that Googling for "a tool that can track code executed during a test run" will lead you back to COBOL test coverage tools from the 1980s that offered this kind of capability, and to 40-year-old papers that offered similar advice (1, 2, 3). A cynic might also suggest considering McKinsey's track record, which has had some notable misses.
  • by quonset ( 4839537 ) on Saturday August 19, 2023 @06:40PM (#63781040)

    McKinsey is trying to drum up business. "Here, let our consultants show you how to measure things. Only $700/hour. Should only take a year or two. Maybe three. If you're not satisfied we'll keep working til you are."

    • by rudy_wayne ( 414635 ) on Saturday August 19, 2023 @06:50PM (#63781066)

      Yes, You Can Measure Software Developer Productivity.

      If you pay McKinsey a lot of money.

      ** No refunds. Satisfaction not guaranteed.
      • by Knightman ( 142928 ) on Saturday August 19, 2023 @10:02PM (#63781406)

        It's quite easy to measure developer productivity.

        Are they doing what they are supposed to be doing and are they doing it within a reasonable timeframe and quality?

        To answer that, they need to have a boss who actually understands what being a developer entails - which is seldom the case. Adding more metrics means there will be an increase in micromanagement, which makes developers less productive since they are forced to chase lagging metrics.

        As always, YMMV...

      • Re:

        And you have to pay them again every few years to revise their measurement technique as Goodhart's Law [cna.org] takes effect. Mind you it's a great deal for McKinsey.
    • Re:

      Yeah, in addition to a weird cult-like structure, they have also recommended a lot of shady business practices and been involved in a number of scandals. https://en.wikipedia.org/wiki/... [wikipedia.org]
    • Re:

      How do I measure the productivity of the consultants doing the measuring?
    • Re:

      They're pretty productive; they got $14 million out of the University of Arizona for useless advice.
  • about fuckin McKinsey offering businesses new reasons to fire people? And then call it an "improvement" to the employer's "tech stack".
    • Re:

      Yes, a pretty meaningless figure in the big picture view of things. What makes that worse is it is probably the best quantifiable metric out there. The reality is only a person who has done the job before, and has looked at the work being done and evaluated it in detail, can make a meaningful assessment.

      Sometimes I am at my most productive while blankly staring out the window while running the problem over in my head. Sometimes I am just looking at view and wondering when I can go home. How do you measur
      • Re:

        The reality is only a person who has done the job before, and has looked the work being done and evaluated it in detail, can make a meaningful assessment

        I'm not even sure about this. Take such a person, have them evaluate the performance of another person given that task, and they'll undoubtedly hate it, say it took too long, and the solution was bad. So that person gets fired and this guy takes over. Then you hire another person who has done the job before, and have HIM evaluate the solution, and he'll s

    • Easy way to find the programmer productivity:

      Just multiply the wavelength of the programmer by the voltage drop across them, then divide by the speed of light.

      I have a bunch of pat answers to use when some damn-fool of an interviewer (or reporter) asks a nonsensical question. My favorite is "three", as in:

      Her: "How does one make an intelligent program?"
      Me: "There's no simple answer to that question."
      Her: "Can't you just give us a quick overview?"
      Me: "Okay, Three".
      Her: "Um... what do you mean, three?"
      Me: "It's th

        • Re:

          This is the first reference I've seen to Sluggy Freelance in... ever? (Or at least 10 years?)

          let me check my notes

  • When the requirements are clear, a well-written SDS is provided to the team, and there is a strong team lead, then the team lead can split the tasks, approve estimates, and at least control the deliverables. The trick is to prevent overblown estimates that concentrate on easy details but totally guesstimate the more complex ones and pull numbers out of their ass. Then the real trick is a payment structure that is based on deliverables. Give bonuses for delivering

    • If only the real world was that simple.

      Unless you are doing routine work, the same as before, the real problem is "You don't know what you don't know". Have you ever started a sentence to your boss with "I just wanted to..."?
      • Re:

        Well, I am the boss, so. I provide many of the requirements, sometimes I put together the SDS myself as well. Some things are unknown, however most of the things we do are fairly well understood.

    • Re:

      When the requirements are so well designed that their implementation can be actually measured, then a machine can probably do the job. But when the job is too complex for a machine to handle, actually measuring productivity is probably impossible. You can guesstimate things, and it's worked well so far.

  • https://en.wikipedia.org/wiki/... [wikipedia.org]

    tells us that 'When a measure becomes a target, it ceases to be a good measure'. In programming that means that if developers are measured on a specific measure, they will address that measure - to the detriment of other aspects of the software development.

    • by Echoez ( 562950 ) * on Saturday August 19, 2023 @08:32PM (#63781282)

      Exactly. If you measure me by lines of code? I'll create 100 lines of code to do something simple. Measure me by Jira tickets closed? I'll open more and then do minor things to fix them. Measure things by reliability in production? I'll spend months testing simple changes to ensure nothing goes wrong no matter what. Measure me by thorough code reviews? I'll spend a week on each one.

      • Or when they say I have to do a certain amount of training a year. Oh OK, we'll drop actual program work and do the training.

      • Re:

        "Lines of code written" has long been recognized as a terrible way to measure software developer productivity.

        If we wish to count lines of code, we should not regard them as "lines produced" but as "lines spent."

        -- Edsger Dijkstra

        See also https://www.folklore.org/Story... [folklore.org]

  • I mean, when I write code, you can tell how I'm doing by how many mugs of coffee I've poured down my zombified gullet. :P
  • They know the cost of everything, and the value of nothing.

    And I remember arguments about "lines of code added" as an argument (in both mathematical and philosophical senses) for productivity. There's the friend who took over a compiler project, refactored it, got rid of 20k SLOC. I told him, "Huh. By the productivity models, you owe your employer a couple years worth of work."

  • Depends on the task: writing new code, editing existing code, debugging, porting, testing, etc... Some people are better at some things than others, and some are *way* better. I'm also going to add: in what programming language(s) and/or on what OS(es).

    I have experience in several languages on several OSes and have been a sysadmin and software developer, mostly systems-type programming, on all the platforms I've administered and also have ported code (compiled and scripted) between many of those platf

  • by Retired Chemist ( 5039029 ) on Saturday August 19, 2023 @07:19PM (#63781144)

    McKinsey comes in, asks the employees what is needed, and then tells management as if it were a revelation. Then they make a bunch of recommendations, which the leadership pretends to follow. Their major function is to provide cover for senior management if something goes wrong. The leadership can say that they were following the advice of the leading management consulting firm.
    • Perfectly put. I have seen this exact pattern in several cases. Sometimes it takes years to run its course, but typically as soon as the "consultants" left, it was back to whatever was done before.

      The worst bit was the inevitable employee who sees the whole thing as an opportunity to self-promote by strongly embracing whatever the consultant says, after which the consultant tells management that this person is key to their transformation and should be promoted. Anyone who questions the consultant openly is "

    • Re:

      Norwegian Blue Parrots, no doubt!!

  • Sometimes you're typing a million characters per second, and sometimes you spend an hour or two (or more) thinking about what you should be typing.

    I can see no way to judge productivity other than comparing completion time and bug count against similar past projects.

    Find employees you trust, pay them enough to keep them, and unless they're surfing for porn or something... leave them the hell alone as long as milestones are being hit approximately as quickly as for similar past work.

    Give a manager a metric for judging coder performance, and you will ultimately have low morale, bad code, and high churn.

  • This reminds me, by contrast, of Waddell and Bodek's "Rebirth of American Industry", in which they compared the management practises of American and Japanese carmakers in the 1980s. The Americans were focused on breaking down every step of manufacturing in accounting terms so that they could figure out exactly how much profit could be attributed to the assembly of a steering wheel or the installation of a taillight. The Japanese were focused on responding as quickly as possible to worker productivity suggestions so that they could build better cars faster.

    It was an attitude difference between "your employees are cogs who must be measured and controlled" and "your employees are doing the actual work and probably have the best ideas about how to do it better."

    Anyway, hopefully some software developer subject to these McKinsey ideas can write up a program to easily measure the productivity of their management.

  • A lot of good software development requires sitting down and well, thinking. Be it trying to come up with a creative solution to the problem, or just "a solution". I would love to have a way to know if my sitting around and thinking is getting me closer to the solution I seek. I mean, imagine how much better my life would be if I had a progressbar indicating my progress. If I wander down the wrong path, the progress bar wouldn't move or move backwards, showing my path is wrong and I should back up and try the other way.

    So much time wasted going down the wrong way, knowing that would make me much more productive.

    And what about when I need to research the problem space more to understand the problem and potential solutions?

    Then again, why do companies waste money buying these non-solutions when for similar amounts of money, they could spend it improving the lives of their employees? Millions spent on consultants vs. those same dollars spent making people's working lives less miserable? It could be simple things - standing desks are stupidly cheap, as are nicer monitors and keyboards and even things like computers. Or even better chairs. Or better lighting. Or walls, or better ventilation, or remote work?

    I mean, I was tasked to do a relatively simple thing, but the most straightforward solution to the problem didn't work (ended up in a dependency loop in the build system). So I tried a less straightforward solution, which ended badly before I came up with a third solution that ended up being the most elegant given the restrictions I discovered during the first two attempts. I think over the 3 weeks I worked on it, I might have written maybe 2000 lines of code, but when I finally committed my solution to the repository, I really added maybe 100 lines total. Negative lines, if you consider that I got rid of some code.

    It took me 3 weeks, but I avoided adding a ton of technical debt and came up with something small and elegant along the way in that things worked the way everyone expected, the changes were self-contained and small and not exploding across the entire repository, and I removed a bunch of #ifdefs as the conditional code no longer applied, streamlining the source tree. No more special case code.

    How do you measure that? I spent nearly a month on a stupid problem that was an iceberg in disguise and ended up writing a tiny amount of code that cleaned up a lot of the code base.

    • Re:

      A lot of software development is basically just like this - you're conceptually rewiring a network of dependencies. And those dependencies go right down to between lines of code. Every system is completely different, so every rewiring is new, unexplored, unpredictable territory.

      If the problem was predictable, that means it was repeatable, which means it should have been abstracted away and packaged into some component, and the issue (bug/feature) won't arise again. The next issue will be a new issue, not
    • Re:

      Everyone here is such a craftsman, coming up with bespoke pieces of art every day, brings a tear to my eye. The reality is that there are people who come up with designs and architecture and people who implement them, and if those who have to implement it are given tasks that they have to think about for days or weeks, then those who give them the tasks are not doing their jobs.

      The way it should work, the designs should be described well in an SDS and the coding should be split among people who basically

      • Re:

        You've only moved the problem up the chain a bit. Sure, it's easy to measure the productivity of "code monkey" work. How do you measure the productivity of the people making the "well described designs" in the SDS and their partitioning of the work among the code monkeys? The hard thinking that's hard to measure has to happen somewhere or you're not really doing much of value.

        • Re:

          No, I separated the problem into pieces that can be dealt with individually. The coding and testing can be estimated, measured, and controlled. The SDS is done by people who have proven fit for their positions in the first place; their work can also be measured, if not for efficiency then for accuracy, and the results can be recorded, because if the people who are building to the SDS have to start designing, then the people creating the SDS have failed.

          • No, you are ignoring the issue. You want someone to create the SDS to pass on to the coders, while prohibiting that designer from actually examining the problem via trial implementations. Effectively, you seem to be assuming that the ancient waterfall model is the only suitable model. Far too many times have I talked to a customer about their requirements and things have gone thus:

            1. Customer asks for X.
            2. I say "OK, and where do I get the data to determine X?"
            3. Customer answers question, specifying where th

    • Re:

      My proudest moments are those when I was able to remove three or five thousand lines of code while keeping all the features.
  • If you could not measure the productivity of something, that would mean you could put a monkey to do that job. As in a literal fucking ape. And not one of the great ones. One of the stupid little ones would do fine.

    The question here isn't if you can measure software developer productivity. You absolutely can, because we can measure that said small monkey cannot do the job as well as a typical software developer. And no you racist twits, there's a reason why American Indians outearn you by such a massive margin

  • But you can tell on the timescale of projects, typically a couple of months at least.
  • You can tell they were productive if you're still using their code two, five, or even ten years later.

    You can tell they were unproductive if you're not, or if you had to fire them, or if they got frustrated for whatever reason and quit.

    • Re:

      Some unproductive people produce shitty code that gets used many years later. They get their promotion while others are stuck maintaining it.
  • Measure my productivity by lines of code. I can pump shit out at a phenomenal rate. But it's going to be exactly that: shit.

    I'm paid to solve problems. Sometimes a problem requires me to change a single variable. But finding that single variable could take hours, sometimes days. Debugging is often a time consuming process.

    measuring developer productivity has never been a particularly complex issue; measuring it in a way the developer can't fudge is the hard part.
    • Re:

      Microsoft had some way of measuring productivity.
      The number of probes inserted and retinal scans may slightly irritate the person you're measuring but as long as they didn't read the terms of service it's all good.
  • Can you measure productivity? Yeah, kind of. It's not an exact science. Whatever you measure, people will optimize for it [joelonsoftware.com], especially software engineers.

    Another thing that's maybe as or more important than productivity is trustworthiness, which is not usually measured, but I much rather work with a trustworthy but mediocre engineer than a brilliant and highly productive but unreliable one.

    This sounds like they're fishing for business.

  • Leadership Program or whatever. Louis Rossmann covered this.

  • You can measure anything. The numbers will just be bullshit and useless. You can't make reliable predictions from them.

    Software is not an assembly line. There isn't some blueprint design that gets made repeatedly. If something in software development repeats, it gets reused instead.

    This means everything in software is always new, untested output. Especially at shitty web development jobs where people continually reinvent new JavaScript frameworks. That means there is no applicable historical data that can
    • Re:

      Indeed. And even more critically, you cannot predict whether a software project will succeed or fail, whether the code will be maintainable to a reasonable degree, whether it will have potentially catastrophic security problems, whether usability will be good, etc.

      Face it, writing software is custom engineering. In any other engineering discipline, that is reserved for experienced, highly capable engineers. In software it is often done by people who do not really even understand the basics. That cannot and

  • What happens when you figure out that 10% of programmers are 10x more efficient than 90% of the programmers?

    Are you going to pay your high performers 10x more?

    Are you going to pay your low performers 10x less?

    We don't have nearly the range of compensation to deal with the consequences of truthfully measuring developer productivity.

    • Re:

      Well, you don't enforce policies like return-to-office on the 10x employees as heavily.
      HR also knows that retention is important with those employees now (in theory).
      The goal is always to pay the bare minimum for any performing employee.
      If we wanted truth, the salaries and performance would be made public, but somehow I suspect that won't happen.
      The report won't be fed back to the engineers; it's typically for management only.
      If you do see one of these reports, you'll understand why they don't share them
    • Re:

      You help the other 90% get 10x more productive. You hired those people, you know they have the capability, just figure out what's causing them to not be productive.

      Or, you really look at your hiring process.

  • Or rather not meaningfully. You need to look at quality as well, at level of insight in the solutions, at maintainability, and more important than ever, security. You can measure some of these in isolation but the result will be meaningless.

  • Ah, yes. McKinsey, where management advice goes to die (after being billed).

    I've been involved in I think 4 McKinsey "interventions" in my career. 2 were outright harmful, and the other 2 were merely a massive waste of money and time.

    Can you measure software productivity? Well, maybe, depending on how you *define* productivity. For sure, you can't apply any naive metric; the real wizards in any given organization are the ones who might spend a day making the proverbial chalk mark where the part isn't w

    • As regarding measuring the wrong thing.

      Military bases frequently have a "base exchange". In a nutshell, they're department stores where military members can purchase goods. Quite useful since many military bases are fairly isolated and junior enlisted frequently don't have cars to travel to nearby cities or towns. In any case, these stores need managers and these managers need to be evaluated. At one point some higher-up decided that a good criterion would be to see how well the shelves were stocked. After a

  • I'm sure AI will be used to measure it in the future. However, developers will also write code using AI.

  • My practice was, wherever possible: if you compile a component with RUN_TEST defined, you get an executable that tests the component. Sometimes a subdirectory needs to be provided with input data. The documentation then includes the output of a test run. The code reviewer also looks at the test. Results from a test engineer's review are then given back to the developer and their supervisor so that inadequate tests are detected and learning can take place. When UI is involved you can rely on standard U
  • A lot of good software ideas come when you aren't in the middle of writing software, but doing other things like being in a shower...

    Maybe a way to get a good measure of a person's productivity is to have them describe how many ideas they had that week that did not come while in front of a computer.

    Such a measurement would act as a proxy for how much time the brain is spending in the background thinking about development.

  • Linked not once, but twice:

    https://dl.acm.org/action/cookieAbsent

    /sarcasm

