
I've been a long-time follower of the Manager Tools way, and the concepts I've learned have helped me tremendously in my career.  I've embraced goals and metrics and am trying to extend them to more members of my group.  I've taught them to set benchmarks and to measure before and after a pipeline is developed so we can gauge its success.  I've run into some resistance, and I'm wondering what advice I can give my team members.

They believe that many of the things they do simply cannot be quantified, because all we can know is that things are better by some amount than before. For example, in the past week they put together a tech demo proving that we can easily communicate between two applications. There is no measurable success here other than that communication between the two was nonexistent and now exists. You could call that a 100% improvement in communication, but they feel that would be extremely misleading. It's a very important step in our future development, but they don't see a way to put a number on it.

Similarly, squashing a bug in a script isn't really quantifiable. It doesn't make the pipeline any faster, except that in maybe 25% of cases the pipeline would have had to be run manually, which might take 5 minutes and now takes 5 seconds. But the manual processing might also take 10 minutes, or 1 minute; there's no way to know, and it differs every time. The bug may also have caused problems that needed cleaning up, and there's no way to quantify how long that cleanup took.

Do you have more resources on examples of metrics that others have used for this kind of thing?

asteriskrntt1's picture

Hi Pace

For your first situation, you have improved communication.  The premise is that this communication leads to something getting done.  So what gets done now that you have it?  Measure that.

For situation #2, you don't need to measure the exact time spent.  What you can measure is how often you are now eliminating manual processing, and the mistakes inherent in that manual processing.  E.g., "reduced monthly manual processing adjustments from 798 to 44, freeing up an average of 5 minutes per adjustment."  I'm sure you can come up with something more specific.
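A quick sketch of the arithmetic behind this kind of claim. The 798-to-44 counts and 5-minutes-per-adjustment figure come straight from the example above; they're illustrative, not measured data:

```python
MINUTES_PER_ADJUSTMENT = 5  # assumed average, per the example above

def minutes_saved(before: int, after: int,
                  minutes_each: float = MINUTES_PER_ADJUSTMENT) -> float:
    """Monthly manual-processing minutes eliminated by automation."""
    return (before - after) * minutes_each

saved = minutes_saved(798, 44)
print(f"Eliminated {798 - 44} manual adjustments/month, "
      f"~{saved / 60:.0f} hours reclaimed")  # ~63 hours
```

Even with an uncertain per-adjustment time, the count of eliminated manual steps is hard data, and the hours figure is an honest estimate you can caveat when you report it.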

 

*RNTT

 

 

jcvolman's picture

I'm not in a technical field, so I'll offer a broad comment.  Often when folks are presented with metrics, they need to be reminded that their success and contribution are judged in part on the metrics and in part on qualitative measures and contributions, if indeed that is the case.

Mark's picture
Admin Role Badge

And as it happens, I have this great book full of metrics... ;-) LOL.

I'd be happy to hear more about exactly what your team does, and how you think it contributes. The more information we have, the more we can make concrete suggestions.

Lacking that, I think both of the comments above have merit, and I'd like to add my perspective.

You don't have to have a PURE connection to the ultimate output of the firm or your division.  It's not necessary to PROVE that reducing bugs has a definite cost or quality implication in the marketplace, or even for an internal customer.

Measures which you believe are reasonably well correlated - or, even better, causally related - to the output you're responsible for ARE EXCELLENT METRICS.  The fact that saving time (as an example) isn't specifically judged by someone doesn't make it a poor metric.

If you believe that reducing a metric or improving a metric is prima facie highly correlated to a more desired outcome, that is a legitimate metric even if there is not PROOF of its value.  This is the classic definition of a PROXY METRIC.

To address your team's concerns directly: just because something only reduces time or cost compared with how you used to do it, without some external proof of value, doesn't mean that reducing time or cost isn't good in itself.  Lacking anything better, reducing time to market without reducing quality is ALWAYS a legitimate goal and metric, as a PROXY for the broader idea that lower costs lead to more profitability.  Reducing bugs in less time is always better.

Who decides what's the right proxy?  You do.  That's part of your job.  Without externally provided and/or validated metrics, you are expected to come up with proxies.  In other words, what do you believe you could and should change/work on?  Whatever you decide, you are left to come up with metrics for it.  Those measures are proxies - you believe that if you do those things, the larger responsibilities you have will be met.

Regardless of whether bugs are big or small, lowering the average time to identify and eliminate them is ALWAYS better.  Sure, it's possible that a jerk boss would say, "I can't believe you're telling me you reduced your bug killing time to eleventy-forty minutes this quarter when your productivity and overtime numbers are so terrible!!!"  That happens, but if you don't measure your bug killing it won't get better, and you'll STILL get in trouble for the productivity and overtime numbers.

I'd say you can probably guess pretty well at what would be good proxies. Probably time and cost and quality ones are good places to start.  [I kinda like to have at least one pair of metrics that exert some tension on each other: number of bugs killed and overtime, for instance, so folks don't find it EASY to game the system by creating bugs that they get overtime to fix.  Trust me, it happens. ]

And then you simply have to say to your staff, "Look, we weren't given metrics, but I think we oughta measure our stuff because what gets measured is what gets done.  And here's the stuff that we're gonna measure. 1, 2, 3..."

And when the carping starts, and folks complain (it appears you're already there), well, it's THEN that you know that you're a manager.  And you are on the road to becoming a good manager when you listen to all the comments and say, "thanks for your input....and these are our metrics."

Keep us posted.

Mark

simonspeichert's picture

Thanks for the great post, Mark. In preparation for my ongoing job search, I've struggled to present the metrics my boss tracks in a way a recruiter would find meaningful.

Let me give you some background - I'm an administrative assistant at a quasi-judicial tribunal, and my primary focus is editing legal decisions. I also perform a number of other actions with ongoing files. Statistics are kept with respect to volume, but I found a crucial metric to be the outstanding actions that had yet to be completed at the end of the week.

Looking at my metrics, I could see that in 2008 I completed almost double the number of decisions that were assigned to me. As well, in 47 out of 52 weeks in 2008 I had no outstanding actions at the end of the week. In an interview, I made those numbers meaningful. Completing double my workload helped my whole unit increase its efficiency, which decreased one of the organization's two performance measures that are reported to the provincial government. Having no outstanding actions meant that about 90% of Mondays I walked into a clean desk, knowing the clock that would have been ticking over the weekend had been stopped.

Of course in my interview I elaborated on how I approached achieving those numbers, but hopefully my example shows that metrics that could ordinarily appear mundane can be tied to the kind of behaviours that managers look for in new hires or promotions.

rwwh's picture
Licensee Badge, Training Badge

There is a [commercial] web site that is collecting key performance indicators: http://kpilibrary.com/

This may not contain the right one for you, but it can at least serve as a source of inspiration.

Note: I am not a member of this website, nor affiliated with it.

 

kima's picture
Training Badge

Aaaah, metrics, one of my favorite topics.  Can't resist adding my two cents.  From my experience, you need to think about four basic areas of measure (in no particular order):

1.  Productivity -- what is driving the work, and is your team more productive over time?  More on this one in the paragraphs below.  The key is to think of the metric not as a static "how much did we do this month," but as a way of demonstrating to your manager, with data, that your team is consistently becoming more productive.  I like to use a KPI (Key Productivity Indicator) for that.

2. Customer -- whether your customers are internal or external, someone somewhere is the recipient of what your team creates/delivers/produces.  Are you meeting agreements and customer expectations?  Might be response time, might be number of defects, or projects on time.  Look for measures that demonstrate that you are doing what you promised your customers you would do.

3.  Business Basics -- financial metrics required in your company, headcount, volumes of product produced, counts of things like inventory. 

4.  Key Objectives or change -- measures related to your team's progress towards major objectives or change areas

OK, more on my favorite measure, Key Productivity Indicators.  Each KPI has three parts: 1. consumption driver, 2. headcount, 3. KPI ratio.

Start by thinking about what is driving the work.  If it's the number of bugs, measure that; or lines of code, or projects, or helpdesk tickets, or whatever it is that causes you to need the headcount you have to get the job done.  One way to find it: imagine you suddenly learned you had to cut your team by 10% -- what work would be impacted?  Your driver is probably there.  I guarantee there is a driver for everything your team does; you just need to figure out what it is.  That driver becomes your consumption: how many units of something your team does/produces/delivers (lines of code, bugs, tickets worked, widgets produced).  Whatever it is, track it every month.

The second part is headcount - how many people does it take to do the work. 

Third, you calculate a ratio of the number of consumption units divided by the number of people it took.  That ratio is the actual KPI measure.  For example, let's say you're measuring a team that fixes bugs or code defects: 

March defects fixed = 250, March headcount = 10, March KPI = 25:1  (on average, 1 person is addressing 25 defects).

Now comes the interesting part.  Track it for a while to get a sense of your monthly results.  It will vary from month to month; what you're looking for is the trend.  Is your team getting more productive over time?  If they are, you'll be able to see it.  Once you have a baseline, set improvement goals, and then every month compare your actual result to your goal.  Of course you'll need an action plan to achieve the improvement, but as you execute that plan you will be able to see (and show your manager) every month whether it is achieving the savings you'd hoped for.  That's the value of having the measure!
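The three-part KPI above can be sketched in a few lines. The March figures match the 250-defects / 10-people example in the post; the other months are made-up history purely to show the trend check:

```python
def kpi(consumption: int, headcount: int) -> float:
    """Units of work per person for the month (e.g. defects fixed per person)."""
    return consumption / headcount

# (month, defects fixed, headcount) -- hypothetical history; only March
# comes from the example above.
history = [
    ("Jan", 180, 10),
    ("Feb", 210, 10),
    ("Mar", 250, 10),
    ("Apr", 240, 9),
]

ratios = [(month, kpi(done, people)) for month, done, people in history]
for month, ratio in ratios:
    print(f"{month}: {ratio:.1f} defects per person")

# Trend check: compare the latest month to a simple running baseline.
baseline = sum(r for _, r in ratios[:2]) / 2   # first two months as baseline
latest = ratios[-1][1]
print(f"Latest vs baseline: {latest:.1f} vs {baseline:.1f} "
      f"({'improving' if latest > baseline else 'flat/declining'})")
```

In practice you'd feed this from whatever your bug tracker or ticket system exports each month; the point is only that the ratio, not the raw count, is what you trend.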

And once you become more advanced, you can start to see interesting relationships in the results.  If your demand (consumption) is dynamic (up one month, down the next) but your headcount is flat, your team will look more or less productive from month to month.  If you see consumption consistently trending down -- well, guess what, maybe you need to revisit your staffing plan, or work with sales to bring in more business.  Or say there was a huge spike in bugs last month and your team worked overtime: maybe that month your KPI is 40:1, and you'll have that data to show your boss.

 

Kim