Thoughts on Developer Productivity

Until a few years ago, I dreamed of working at a consulting giant. I have worked with a bunch of them, since my parent company decided to hire one of the big consulting firms, and they left me nothing but a mess (duh! 🙄). A while back, I stumbled upon the “Yes, you can measure software developer productivity” article from the consulting giant, and it sparked debate across tech blogs and newsletters.

I’m not going to dwell on critiques of the article, because there are things that we, as a startup, can take from the consulting approach. From 2020 onward, hyper-growth vs. profitability is no longer a conundrum; the industry, software engineering teams especially, is going through “normalization.” Engineering leaders are being asked to do more with less (read: budget, headcount) than in the hyper-growth era, which pushes them to explain, in quantifiable terms, how valuable the engineering team is to the company. I won’t go into detail on how complex it is to measure an engineering team’s impact on the company; Gergely Orosz and Kent Beck have a great article on that. Here are a few takeaways that I used to craft the productivity measurement at TipTip:

  • Measuring outcomes and impact alone is not enough; other factors matter (effort spent, deliverables)
  • Individual performance does not directly predict team performance
  • Team performance is more straightforward to measure than individual performance
  • Existing frameworks complement each other: DORA, SPACE, DVI

Going back to TipTip: as a growing startup with a sizeable number of team members, TipTip needs metrics for the engineering team to ensure everyone works effectively in hybrid mode (WFH + WFO), carries a fair workload, and gets recognition for contributions based on data. Thus, we came up with metrics we call Engineering Delivery Proxy Metrics.

These are proxy metrics; they are not meant to be a black-and-white performance rating. Engineering manager calibration and 360° feedback are still needed to justify performance assessments.

| Category | Metric | Definition | How to Track |
| --- | --- | --- | --- |
| Communication, Satisfaction & Well-Being | Engineering Satisfaction Survey | How fulfilled developers feel with their work, team, tools, and culture; how healthy and happy they are, and how their work impacts that | Periodic internal survey |
| Communication, Satisfaction & Well-Being | Async Contribution Recognition | Expressions of recognition and gratitude toward peers | Employee recognition tool (open-source HeyTaco alternative: GitHub – chralp/heyburrito) |
| Efficiency | Merge Frequency | Average number of pull or merge requests merged by one developer in one week | GitLab API |
| Efficiency | PR Pickup Time | Time a pull request waits for someone to start reviewing it; low pickup time roughly indicates a good review process | GitLab API |
| Efficiency | PR Review Time | Time it takes to complete a code review and get a pull request merged | GitLab API |
| Efficiency | Approved PRs | Number of PRs a developer contributed to (reviews, comments) and approved | GitLab API |
| Quality & Predictability | Shift-Left Metrics | Bugs that slipped past shift-left testing scenarios | QA manual tracking |
| Quality & Predictability | Planning Accuracy | Ratio of planned work vs. what was actually delivered during a sprint or iteration | TPM manual tracking |
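
Since most of the Efficiency metrics above are tracked via the GitLab API, here is a minimal sketch of how they could be pulled. This is my own illustration, not TipTip’s actual pipeline: the project ID and token handling are hypothetical, and the approximations (review time as creation-to-merge, pickup time as creation-to-first-human-comment) are assumptions, since GitLab has no single “review started” field.

```python
# Minimal sketch: compute Merge Frequency, PR Review Time, and PR Pickup Time
# from the GitLab REST API. Project ID and token are placeholders (assumptions).
import os
from collections import Counter
from datetime import datetime, timedelta, timezone

import requests

GITLAB = "https://gitlab.com/api/v4"   # assumed: gitlab.com SaaS instance
PROJECT_ID = 12345                     # hypothetical project ID
HEADERS = {"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]}


def parse_ts(ts: str) -> datetime:
    # GitLab returns ISO 8601 timestamps like "2024-01-02T03:04:05.000Z".
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))


def merged_mrs(since: datetime) -> list[dict]:
    """Fetch merge requests merged since `since` (single page, for brevity)."""
    resp = requests.get(
        f"{GITLAB}/projects/{PROJECT_ID}/merge_requests",
        headers=HEADERS,
        params={"state": "merged", "updated_after": since.isoformat(), "per_page": 100},
    )
    resp.raise_for_status()
    return resp.json()


def pickup_time(mr: dict) -> timedelta | None:
    """Proxy for PR Pickup Time: MR creation -> first human (non-system) note.
    An approximation; GitLab does not expose 'review started' directly."""
    resp = requests.get(
        f"{GITLAB}/projects/{PROJECT_ID}/merge_requests/{mr['iid']}/notes",
        headers=HEADERS,
        params={"order_by": "created_at", "sort": "asc"},
    )
    resp.raise_for_status()
    human_notes = [n for n in resp.json() if not n["system"]]
    if not human_notes:
        return None
    return parse_ts(human_notes[0]["created_at"]) - parse_ts(mr["created_at"])


if __name__ == "__main__":
    week_ago = datetime.now(timezone.utc) - timedelta(days=7)
    mrs = merged_mrs(week_ago)

    # Merge Frequency: merged MRs per author over the last week.
    per_author = Counter(mr["author"]["username"] for mr in mrs)
    for author, count in per_author.items():
        print(f"{author}: {count} MRs merged this week")

    for mr in mrs:
        # PR Review Time approximated as creation -> merge.
        review_time = parse_ts(mr["merged_at"]) - parse_ts(mr["created_at"])
        print(f"!{mr['iid']} review={review_time} pickup={pickup_time(mr)}")
```

A real job would paginate through results, cache responses, and aggregate per team rather than per individual, in keeping with the takeaway above that team performance is the more reliable unit of measurement.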

