Q&A
Rethinking Team Metrics
Are your team metrics letting you down?
Team metrics date back to the waterfall days before Agile, DevOps, Kanban, Scrum and all the rest, but they might not be providing the valuable information you need in the modern software development world.
While it's easy to rely on the same old measurements and point to rising numbers to show the team is accomplishing things, your metrics might not tell you if the things being completed are the right things -- the most valuable things that will delight customers.
That's because metrics have lagged a bit as development has changed and often rely on waterfall-style milestones and phase-gates to determine a team's effectiveness. Nowadays there are some traditional metrics you should rethink -- velocity, anyone? -- and some newer ones -- "team happiness" comes to mind -- that are more relevant and interesting.
To help sort out the modern metrics story, Angela Dugan will present a quick-hit session titled "Rethinking Team Metrics in 20 Minutes" at the upcoming Visual Studio Live! developer conference in Las Vegas in March.
Dugan, a strategic business consultant and agility coach, will discuss some of the pitfalls of commonly used metrics and make the case for not-so-commonly used measures that give you the insights you're really striving for. Attendees are promised to learn:
- Why it is so difficult to identify meaningful metrics in the software world
- How to achieve a "metrics mindset"
- The best types of quality-focused team metrics to focus on in an agile organization
To learn more about her session, we caught up with Dugan for a short Q&A.
VisualStudioMagazine: What inspired you to present a session on rethinking team metrics in 20 minutes?
Dugan: So many talks out there are about "holy grail" metrics and how they can tell you everything you need to know. That leads people to think of metrics themselves as the goal. They are not the end goal; they are a means of supporting a team in reaching that goal. I wanted to focus on more than metrics: in my 20 minutes I only mention a few key metrics AFTER we talk about mindset and traps to avoid. Without understanding those two things, ANY metric you pick could lack meaning or, worse, not be trustworthy.
For those unfamiliar, could you briefly explain team metrics, what they are and how they are used?
To me, team metrics are collected for the team by the team, and the team chooses them based on what will be helpful in experimenting, growing and improving.
"Metrics might focus on how easy it is to build and deploy code, what kinds of opportunities they have to stretch and grow within their role, and even how satisfied with and motivated they are by the work they're doing."
Angela Dugan, Strategic Business Consultant & Agility Coach
Those metrics might focus on how easy it is to build and deploy code, what kinds of opportunities they have to stretch and grow within their role, and even how satisfied they are with, and motivated by, the work they're doing. They could be used as often as daily or as infrequently as quarterly; it's up to the team to decide, with some guidance from their leader.
In your session, you mention the difficulty in identifying meaningful metrics in software development. Could you elaborate on why this is a particularly challenging area and what common pitfalls teams should avoid?
Absolutely. In my experience it can be difficult because there is a push toward overly focusing on outputs. These are often called "productivity metrics": stories completed, number of deploys, tickets closed. It can feel good to drive those numbers up and point to them as proof that the team is accomplishing things. What they don't tell you is whether the things being completed are the right things, the most valuable things that will delight customers. It is also easy to "game" these metrics if the team feels undue pressure to hit them, for instance if bonuses are tied to achieving those metrics. There are many examples in the industry of people doing all kinds of mental gymnastics, sometimes even compromising their values to hit numbers, to the detriment of the company's reputation or goodwill with its customer base. The Wells Fargo debacle in 2016 is a good example of that.
You suggest that traditional metrics like percent complete and velocity may be misleading. What are some of the limitations of these commonly used metrics, and how might they fail to accurately reflect a team's progress or success?
Great question. Percent complete is one to avoid because it can never be accurate. I like to say that building software is not simply a typing exercise, and how much work is needed to call a feature "done" is impossible to know until it is done. So any percent-complete number we throw out assumes we know precisely how long the thing will take, which is why we often see people being 95 percent done for days or even weeks. Velocity is a bit different. I don't think it is a bad metric per se; I think it can be misunderstood. Velocity is a quantitative measure of output, and it only gives the team credit for features deemed done, which is qualitative. Establishing a pattern of how much work a team can take on and complete successfully can aid in expectation setting, but if you start making an increase in velocity the thing to strive for, well, most software developers are GREAT at math. I can make velocity whatever number you need it to be, if all you're concerned about is the number. I encourage people to look at velocity as a signal, and if that number starts to vary wildly, that's a good time to have a conversation about what's impacting the team's velocity and see if they could use some support.
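To make that idea concrete, here is a minimal sketch (an editorial illustration, not something from Dugan's session) of treating velocity as a signal rather than a target: it flags sprints that deviate sharply from the team's own average so they can prompt a conversation. The sprint data and the 25 percent variance threshold are assumptions for the example.

```typescript
// Illustrative sketch: velocity as a signal, not a target.
// The Sprint shape and the 25% threshold are assumptions for this example.
interface Sprint {
  name: string;
  completedPoints: number; // only stories that met the definition of done
}

function averageVelocity(sprints: Sprint[]): number {
  const total = sprints.reduce((sum, s) => sum + s.completedPoints, 0);
  return total / sprints.length;
}

// Flag sprints whose velocity deviates sharply from the team's own average,
// as a prompt for a conversation, not a judgment.
function velocitySignals(sprints: Sprint[], threshold = 0.25): string[] {
  const avg = averageVelocity(sprints);
  return sprints
    .filter((s) => Math.abs(s.completedPoints - avg) / avg > threshold)
    .map((s) => `${s.name}: ${s.completedPoints} pts vs. ~${avg.toFixed(1)} avg -- worth a conversation`);
}

const history: Sprint[] = [
  { name: "Sprint 12", completedPoints: 34 },
  { name: "Sprint 13", completedPoints: 31 },
  { name: "Sprint 14", completedPoints: 18 }, // dipped -- was the team blocked?
];
console.log(velocitySignals(history)); // only Sprint 14 is flagged
```

The point of the sketch is that the output is a conversation starter, not a number to manage upward.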
What are some examples of not so commonly used metrics that you believe offer greater insights into team performance, and how can these metrics be implemented effectively?
WIP (Work in Progress) is chock full of metadata. It measures the amount of active work a team has going on; think of it as the number of open work items or tickets at any given point. It can seem counterintuitive in that you want to keep WIP low, which makes it appear as though less work is being done. What's actually happening is that by keeping active work low, you encourage teams to focus (ideally on the highest-priority items), they tend to swarm and collaborate, and you don't lose time to the "multi-tasking tax!" Again, if the team is using WIP, it's important that they understand how they'll use it, agree on their WIP limit, and know when to get excited and how to react as they monitor it.
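As a rough illustration (again, not a prescribed tool or part of the session), counting WIP and comparing it to a team-agreed limit could look like the sketch below; the work-item shape and the limit of 4 are assumptions for the example.

```typescript
// Illustrative sketch: WIP as the count of active work items,
// compared against a limit the team agreed on.
type Status = "todo" | "active" | "done";

interface WorkItem {
  id: number;
  title: string;
  status: Status;
}

function currentWip(items: WorkItem[]): number {
  return items.filter((i) => i.status === "active").length;
}

function wipCheck(items: WorkItem[], wipLimit: number): string {
  const wip = currentWip(items);
  return wip > wipLimit
    ? `WIP is ${wip}, over the limit of ${wipLimit} -- swarm and finish before starting new work`
    : `WIP is ${wip}, within the limit of ${wipLimit}`;
}

const board: WorkItem[] = [
  { id: 101, title: "Checkout bug", status: "active" },
  { id: 102, title: "New pricing page", status: "active" },
  { id: 103, title: "Refactor auth", status: "todo" },
  { id: 104, title: "Deploy pipeline fix", status: "done" },
];
console.log(wipCheck(board, 4)); // "WIP is 2, within the limit of 4"
```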
Another is team happiness. This can be measured in any number of ways, is very qualitative and subjective, and typically is not something you can automate. At best you can use something like SurveyMonkey, or engagement tools like Small Improvements or Lattice, to collect answers from the team. What impacts happiness is something the team can determine, and it often includes factors like "do I feel supported by my team," "do I have enough opportunities to learn and grow," "is it easy to build and deploy code" -- whatever factors into their motivation and engagement. Motivated and happy teams make better products, so why not spend some effort measuring how happy and motivated the teams are, and have conversations around those metrics to keep things moving in the right direction?
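For readers curious what working with that survey data might look like, here is a minimal sketch (an editorial illustration; the questions, the 1-5 scale, and the response format are assumptions, not any particular tool's API) that averages responses per question so low-scoring areas can prompt a conversation.

```typescript
// Illustrative sketch: aggregating team-happiness survey responses
// per question. The questions and 1-5 scale are assumed for the example.
interface SurveyResponse {
  question: string;
  score: number; // 1 (strongly disagree) to 5 (strongly agree)
}

function averageByQuestion(responses: SurveyResponse[]): Map<string, number> {
  const totals = new Map<string, { sum: number; count: number }>();
  for (const r of responses) {
    const entry = totals.get(r.question) ?? { sum: 0, count: 0 };
    entry.sum += r.score;
    entry.count += 1;
    totals.set(r.question, entry);
  }
  const averages = new Map<string, number>();
  for (const [question, { sum, count }] of totals) {
    averages.set(question, sum / count);
  }
  return averages;
}

const thisSprint: SurveyResponse[] = [
  { question: "I feel supported by my team", score: 4 },
  { question: "I feel supported by my team", score: 5 },
  { question: "It is easy to build and deploy code", score: 2 },
  { question: "It is easy to build and deploy code", score: 3 },
];
// A low average here is a prompt to talk, not a score to manage to.
console.log(averageByQuestion(thisSprint));
```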
You plan to discuss achieving a 'metrics mindset.' Can you give us a preview of what this mindset entails and how it differs from more traditional approaches to measuring team performance?
A metrics mindset encompasses a couple of concepts. One is that metrics should never be viewed in a vacuum. Echoing what I said previously, they are not the end goal; they are simply a signal of whether things are moving in the right direction. When we see a "blip" in the trend, we should be having informed conversations about what the metrics mean. That's lesson two of a metrics mindset: metrics should always be surrounded by conversation. Without context, we can take any number of conclusions away from a dashboard of metrics, and most of them will be wrong. This is why I always bristle when I am asked to build dashboards that will be used by managers to measure and judge teams in their department. They're often pasted into reports and used to communicate progress and status, without the valuable context that tells the real story. If you want to know what the other principles of a metrics mindset are, be sure to come to my talk!
In agile organizations, what types of quality-focused team metrics should be prioritized, and how do they contribute to a more accurate assessment of a team's performance?
I suppose that depends on what is meant by performance. Believe it or not, that is a pretty subjective thing to measure. In most cases, however, what really matters to the people paying the bills when they want to measure performance is "did I actually get what I wanted for the funds I invested?" This is very tricky to measure since it factors in human expectations, both explicit and implicit. Business value can be a great way to measure this, and what I espouse for business value is often different from what people are used to. In my opinion, business value should represent what is most valuable to whomever you're building for, whether it's a service, an enhancement, or a shiny new product. I've seen most organizations plopping arbitrary numbers into business value as a proxy for priority, and then that number is never revisited. A team may deliver all of the things, but if the VALUE isn't realized by the customer, it feels like all of that work was a huge waste of time and money.
In my ideal world, business value is a quantitative assessment of the ROI on some work to be done: maybe it's directly bringing in a certain amount of revenue, maybe it's saving a certain amount of money, maybe it's securing compliance to prevent the company from incurring fines or being shut down, or it's attracting thousands of new customers. Whatever the hypothesis is, we should be including it in everything we ask a team to build. It can factor into prioritization, and it may impact how a feature is designed, built, or deployed because the team understands the desired outcome. And then, after the thing is out in the world, the team can look back and determine if it met its goal. Business value hypotheses won't always be correct, but they will help inform future prioritization decisions, future designs, and how and where the team focuses its efforts. Now the rub is that we've expanded beyond the team, as business value determinations and hypotheses absolutely require a much larger definition of "team" to get right. To this end, the team's performance, to me, is less about how well those bets about business value panned out, and more about how the team interpreted business value data, pushed back on the business and asked probing questions when business value determinations deviated from past successes, and made informed decisions about how to deliver what they were being asked for.
Note: Those wishing to attend the conference can save hundreds of dollars by registering early, according to the event's pricing page. "Register for VSLive! Las Vegas by the Super Early Bird Deadline (Jan. 16) to save up to $400 and secure your seat for intensive developer training in exciting Las Vegas!" said the organizer of the developer conference.
About the Author
David Ramel is an editor and writer for Converge360.