
Good day, everyone.

I'm seeking guidance and would like to hear [read] your thoughts on a situation I'm facing. I was promoted recently and now have responsibility for three teams of testers. I used to manage one of the three.

I am getting a lot of resistance to goal setting and measurement from my new directs (the people who now report to me). I understand that the other two teams have not been as exposed to the use of metrics. It seems I am not getting through to them no matter how I try to explain the need for measurement.

Earlier today, one of my (new) directs said: "Why do you want to change something that has been working with something that we don't know will work?" I may be wrong, but I think this person is resistant to defining measurable goals because she received an Exceeds Expectations rating in each of the past three years (I was not her manager), despite not defining specific measures for her goals. It's possible she worries that she won't get the same rating once measures are in place.

I am baffled by the idea that an engineer would oppose the use of measurement. I don't want to be heavy-handed and use role power, but I will if I have to.

Any suggestions?

HMac

I could be completely wrong, but I suspect there's a subtext here. They're not resisting measurement; they're resisting your new authority. They are testing you, challenging you as the newly appointed leader ("maybe if I just put up some resistance, I'll be left alone").

There's lots of great advice on these forums and in the podcasts that I won't try to replicate here (but remember that, in a sense, you're a new manager, so look up the "first 90 days" casts).

The one bit of advice I will give: don't take the bait and turn this into a debate about the virtues of measuring performance.  That's really not what this is about.  It's about how you'll behave as their new manager.


Good luck!

Mark

I don't think there's enough information here to conclude that this pushback isn't about measurement.

Frankly, there are a lot of reasons why most standard measurements aren't effective in the software development arena - coders and testers are quite famous for gaming virtually any system.  It doesn't help them in the big picture, but they tend to maximize value on a smaller scale.

That said, I recommend you try to make the case for measurement in a way that makes sense to them. I don't know what you've done, but if you have done that, someone asking why you're doing something isn't inherently a big deal. People ask that all the time and still end up getting on board with the new idea.

Is the one team you previously managed using metrics? What value have the metrics shown? Do you have proof?

Further, if you've been in the role 90 days or less, wait a while. Take some time to get to know everyone; that will help you determine how best to proceed.

Mark

stephenbooth_uk

Have you started O3s yet? Whether the pushback is actually about measurements or not, there is likely an element of mistrust in there. You're a new (to them) manager who has come in, replacing a manager they at least tolerated if not liked, and you're introducing a new stick to beat them with (I have yet to see a performance measurement process that did not look, at first at least, like a stick to beat them with to the people subject to it).

O3s are a good way to build trust. Right now they probably can't see you past the neon sign your role power has put on your forehead saying "I am your boss and can sack you." Build trust and they will see you, and they may be more ready to accept the changes you make. You may still get pushback, but having built trust, it should be less, and it should be easier to get to the root of it and address it.

Having worked as a tester alongside contractor testers on a project, I would like to echo Mark's comment about standard measures not being effective in a software development/testing environment. An absolute classic: the contractors were promised a bonus if they averaged more than a certain number of scripts run per week. The result was that they wrote very short scripts testing discrete components rather than the end-to-end process.
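To make the arithmetic of that concrete, here's a quick sketch; the threshold and script counts are invented, not the actual figures from that project:

```python
# Invented numbers: how the same week of testing scores under a
# "scripts run per week" bonus metric, before and after gaming it.

BONUS_THRESHOLD = 10  # scripts/week needed to earn the bonus (assumed)

honest_week = ["order-to-invoice end-to-end"]               # one thorough script
gamed_week = [f"component step {i}" for i in range(1, 13)]  # same ground, sliced thin

for label, scripts in (("honest", honest_week), ("gamed", gamed_week)):
    count = len(scripts)
    print(f"{label}: {count} scripts/week -> bonus earned: {count >= BONUS_THRESHOLD}")
```

Same coverage, opposite outcomes: the metric rewards slicing the work thin, not testing the process.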

Stephen


--

Skype: stephenbooth_uk  | DiSC: 6137

"Start with the customer and work backwards, not with the tools and work forwards" - James Womack


maura

I manage a team of testers too. Overall, the best ones are skeptical and detail-oriented by nature, and those are great qualities to have in the job... but it can be really frustrating when you're trying to introduce change. I bet you have a lot of high C's and some high D's on your team.

As to the specific comment, maybe THAT tester doesn't know that your measurement plan will work, but YOU do, because you've used measurement in your prior role with the smaller team and have seen the results.  Maybe you need to change your delivery style to tap into their High-C-ness: show them some numbers. 

You may also want to have this debate point in your back pocket: when they evaluate their projects, they rely on metrics to determine how good the code is and whether or not the project is fit for release (defect density, scope and severity of each defect, script pass rate, etc.). They would not get very far if they logged a defect that said simply "it's broken", or if they tried to stop a project from releasing by just saying "it's buggy" without specifying the risks. They need to provide details and data to support what's going on in that feature. All you are asking for is the same level of diligence and supporting data when they report on their own performance.
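If it helps to make that concrete, here's a rough sketch of the kind of numbers I mean; every figure, and the KLOC/severity scheme, is made up for illustration:

```python
# Made-up release-readiness metrics of the sort testers already report.

defects = [
    {"id": "BUG-101", "severity": "critical"},
    {"id": "BUG-102", "severity": "major"},
    {"id": "BUG-103", "severity": "minor"},
]

kloc = 12.5            # thousand lines of code under test (assumed)
scripts_run = 200      # test scripts executed this cycle (assumed)
scripts_passed = 184

defect_density = len(defects) / kloc       # defects per KLOC
pass_rate = scripts_passed / scripts_run   # fraction of scripts passing
open_critical = sum(1 for d in defects if d["severity"] == "critical")

print(f"Defect density:   {defect_density:.2f} defects/KLOC")
print(f"Script pass rate: {pass_rate:.0%}")
print(f"Open critical defects: {open_critical}")
```

If they already trust numbers like these to judge a release, measuring their own goals is the same discipline pointed at themselves.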

And, as Mark said, even if they question it or disagree with it, they still have to get on board anyway.

VanessaUmali's picture

HMac: You're not completely wrong, but I chose not to think of it that way. Doing so gets to me sometimes and brings down my confidence. I think I'm fairly competent and shouldn't let the drama get to me. Yes, I have received comments from other people about all the drama this topic has stirred up during after-work drinks. In hindsight, I would have been better off not listening to the comment at all, even though I did not respond to it.

Mark: I agree that in the field of software development and testing, measuring performance is tricky; there is no single best way to go about it. Because of the nature of testing, and because finding the good bugs depends heavily on the area you're assigned to, I asked each person during our goals discussion how they think we can measure their goals. Was I wrong to do that? I stated that the purpose of asking was for us to come to an agreement on how I will rate them. I thought I was clear when I said I wanted things defined from the beginning so that we don't disagree on the rating I give come review time. Maybe I didn't communicate it well enough? I'm trying to remember the content of the podcast "Greetings in DISC". I ought to talk to them again and try to speak in a language they will understand.

Maura:  How right you are.  Testers can be really challenging when dealing with change.

You've all given really good feedback, and I appreciate the wisdom in your comments. I will ponder this and let the ideas stew over the weekend.

Thanks!