Analytics are great. They help measure and compare information. Sometimes that information is important, sometimes it’s superfluous, but the important thing is you’re capturing it. The thing that gets me is the part afterwards – what do you do with those numbers, with those comparisons, with all that data after you’ve captured it?
I recently started trying the Pomodoro Technique as a way to better organize my day. There are three major aspects to Pomodoro – planning, recording, and reviewing. I find I’m doing a pretty good job at the planning, an okay job at the recording, and a terrible job at the reviewing. At the end of the day, I have a bunch of numbers indicating how many times I was interrupted and how many “units of work” I got done in a day. And those numbers don’t really mean anything to me.
This isn’t a knock on Pomodoro – just a statement that it doesn’t really work well for my workflow. I’m still going to use the planning and time management techniques I’ve learned as a result. But the main point here is that keeping metrics for the sake of keeping metrics is only half the battle.
I think this is part of the reason there was such a backlash on agile techniques when larger companies tried to adopt them. So many teams were doing planning poker and velocity tracking and all these other metrics without actually realizing why they were writing the numbers down. Eventually it just became busy work with no value, and “agile” started losing its credibility because nothing was actually changing.
What are you trying to fix?
When you work with metrics, the first question you should be asking yourself is “what am I trying to achieve?” Are you having issues with time management? Maybe being interrupted too many times in a day? Bugs being reported by clients? If you don’t have a target issue, you might as well not bother writing anything down.
After you’ve clarified exactly what you’re trying to solve, figure out the metric you’re going to use. Maybe it’s as simple as a tick on a piece of paper every time you’re interrupted, or as complicated as using issue tracking software to keep track of who reported what bugs.
Once you have a metric, determine some goals. What do you think your measurement will be? What would a “good result” be? What would a “bad result” be? Maybe you think you’re being interrupted 15 times a day, when 5 would be more acceptable. The main point of this exercise is to give yourself a bar to measure against, however arbitrary it may be. Odds are that after a week or two of tracking the metric you’ll want to adjust your good/bad measures, but at least you’ve got somewhere to start.
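To make the idea concrete, here’s a minimal sketch of that metric-plus-thresholds setup in Python. The threshold values (5 good, 15 bad) are the hypothetical interruption counts from the example above, and the week of tick marks is made-up data – the point is just that each day’s count gets compared against a bar you chose in advance.

```python
# Arbitrary bars from the example above: tune these after a week or two
# of real data.
GOOD_THRESHOLD = 5   # interruptions/day you'd be happy with
BAD_THRESHOLD = 15   # interruptions/day you suspect is actually happening

def rate_day(interruptions: int) -> str:
    """Compare one day's tick count against the good/bad bars."""
    if interruptions <= GOOD_THRESHOLD:
        return "good"
    if interruptions >= BAD_THRESHOLD:
        return "bad"
    return "in between"

# One tick per interruption, one entry per day (made-up numbers).
week = [12, 9, 17, 4, 11]
for day, count in enumerate(week, start=1):
    print(f"day {day}: {count} interruptions -> {rate_day(count)}")
```

Even something this crude answers the “is it good or bad?” question the moment you write a number down, instead of leaving you with a pile of ticks that don’t mean anything.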
What are you going to do about it?
Finally, come up with an action plan. What are you going to do after you’ve gathered the metric? If you find out that 40% of your bugs are being reported by live clients, what are you going to do about it? If you’re being interrupted 20 times a day by other staff, how are you going to manage your time better? This is the most important part – measuring something and realizing it’s not where you’d like it to be means nothing if you don’t have a way of addressing it.
You might not even have to implement your action plan. Maybe after a few weeks of metric gathering, you realize you actually don’t get interrupted as much as you thought. This is fine. The plan is there so that you know what to do when you do find something that needs to change.
You shouldn’t be spending time writing stuff down that you’re not going to do anything about. I’ve seen people spend days on reports, gathering information like “tasks per week” and “bugs reported”, without any actual outcome. Skip the busy work and keep track of things that actually matter.