
As usual, the recession and budget cuts have hampered our ability to operate as effectively as we'd like. I've been faced with an interesting challenge: I have a development staff but no true QA team. Well, we've got a QA team of one. I know it would be very difficult to hire up to 3 QA testers, so I was thinking about bringing in a few interns. We would have to train them, but it's a win-win for everyone. Has anyone else faced this issue, and how did you handle it? Thanks.

basking2

There are two methods I've seen. The first is to get entry-level workers and have someone guide and mentor them. Is your existing QA engineer the type that could mentor the interns? Would he like to step up and get a little leadership/management experience under his belt? I would *jump* at the chance, personally. You also have the added benefit of growing your next-gen team to some extent if you get interns who might like to move into development over time, not unlike an AA team in baseball.

The big caveat is that you must have a good handle on the product documentation/requirements and the test plans. Otherwise the interns, if they are real go-getters (and I'll only mention the positive, because we all know how to hire great people ;) ), will waste a lot of time exploring how to test the product, distracting the developers with questions, or running invalid variations on tests. It's a nice problem in that you have hired motivated people. I would also recommend having "run times" for your tests so inexperienced hires can feel some reasonable pressure without someone looming over them asking, "Is it done?" If it takes 20 minutes to set up and 60 to run a test, ask the new hire to shoot for getting it done in 150% or 200% of that time.
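To make the expectation concrete, the buffer arithmetic works out like this (a minimal sketch; the function name is mine, and the 20/60-minute figures are just the example numbers above):

```python
# Target completion times for an inexperienced hire, based on an
# experienced tester needing 20 minutes of setup and 60 of run time.

def target_minutes(setup_min, run_min, buffer):
    """Time a new hire should shoot for, given a buffer factor."""
    return (setup_min + run_min) * buffer

print(target_minutes(20, 60, 1.5))  # 150% buffer -> 120 minutes
print(target_minutes(20, 60, 2.0))  # 200% buffer -> 160 minutes
```

Publish the target alongside the test plan so the pressure is built in rather than delivered in person.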

The other option, which is very expensive initially but which everyone loves once it's in place, is automated testing. I'm not sure this is a great option for you: the initial cost is very high, the ROI will take some time to develop, and for this to really work well you need a development team with some buy-in that test-driven development is a real win. At this point, that's too many assumptions on my part, but I thought it was worth mentioning.

I hope this helps! I've seen this work out fairly well with a strong lead in QA. I'm also very interested in any other war stories that might show up in this thread.

Sam Baskinger

mtietel

I agree with Sam that it's critical to have a strong quality lead as a mentor (regardless of whether you take the intern or entry-level route). Make sure they are someone who knows the difference between QA and QC, and between verification and validation...

However, be careful: there's a steep, slippery slope near some of Sam's remarks that you should be aware of. You absolutely do NOT want to have a developer as the mentor, any more than you'd want an electrician teaching a plumber's apprentice. Testers aren't developer "wannabes", folks who couldn't cut it as developers, or your "B" team. Make sure you do everything to ensure that testers are professionals and treated professionally (the "How to Introduce People" cast is a good start; cue war story...). They are colleagues of, rather than subordinates to, the development team. Make sure they get what they need to be effective (requirements, as Sam mentions). Without requirements, you're toast from both a development and a testing perspective; missing or poor requirements are the root cause of most of my technical war stories from 20 years in the development world and 2+ in the quality world.

Automation can be a great solution if, as Sam mentions, you can establish the ROI. (There's an 80/20 rule of thumb here as well: for manual tests, 20% of the time goes to creating the test and 80% to executing it; for automated tests, the ratio is the opposite.) Automation may be critical in some situations (e.g., the test access point is a non-GUI/UI). Keep in mind too that there are different types of tests, for different purposes, created/maintained/executed by different people, and the strategy and ROI for each is different. There's a high return on automating developer-created/maintained unit and/or integration tests (here's where you need developer buy-in, but the developers should be doing the unit testing work, NOT the testers...). There's a low return on automating tester-created/maintained functional tests, at least until those tests become part of the test catalog used for regression testing. Then the equation changes, but you've got to actively maintain the regression suite so that it doesn't grow out of control. And ideally, you'd choose different tests for the regression suite in each release, based on coding changes in the release, past defect analysis, risk, usage, complexity, etc.
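One way to sanity-check the ROI question is a simple break-even count: how many regression runs before the automated version is cheaper than running the test by hand? The hour figures below are invented for illustration, chosen to roughly match the 20/80 creation-vs-execution rule of thumb:

```python
# Break-even sketch for test automation (hypothetical hour figures).
# Manual tests are cheap to create and costly each run; automated
# tests are the opposite.

manual_create, manual_run = 1.0, 4.0   # hours: ~20/80 split per run
auto_create, auto_run = 16.0, 0.25    # high up-front, near-free to execute

def total_cost(create_hours, run_hours, executions):
    """Cumulative hours spent after a given number of executions."""
    return create_hours + run_hours * executions

# Count the regression runs needed before automation pays off.
runs = 0
while total_cost(auto_create, auto_run, runs) > total_cost(manual_create, manual_run, runs):
    runs += 1
print(f"automation breaks even after {runs} runs")
```

With these numbers the automated test pays for itself after a handful of regression cycles, which is why the return is high for suites that run every release and low for one-off functional checks.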

DrMaltz

Great comments so far. I sit back and try to figure out how to justify hiring a QA mentor; we really don't have this person in house. We have about 3,000 users who request new off-the-shelf and custom products from our group. I know our CIO will want hard facts when I try to justify this position, but I just don't know where to get the numbers. You're absolutely correct that we need this QA mentor, or else this doesn't work.

mtietel

You could start simply, with standard metrics that show the general need for a quality org/team. They may be enough to serve as a proxy for direct justification of the QA mentor position. Of course, the QA/QC mentor would be able to show you how to interpret these measures, so there's a chicken-and-egg problem...

Cost of (non) Quality - a QC measure that shows the effectiveness of your testing. There is a mountain of data that shows that it's cheaper to fix a bug earlier in the software lifecycle.

  • % of defects found post-release
  • avg cost of fixing post-release defects: (time to reproduce, analyze, fix, build, re-test, re-deploy, and acceptance test, plus lost opportunity cost) * avg loaded labor rate
  • avg cost of fixing pre-release defects: (time to reproduce, analyze, fix, build, and re-test, plus lost opportunity cost) * avg loaded labor rate

If the first one is high, you should easily be able to justify your QA/QC mentor. The other two quantify the cost savings.
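As a sketch of how these three numbers fit together (all defect counts, hours, and rates below are hypothetical, not benchmarks):

```python
# Cost-of-(non)-quality sketch using made-up figures.

def pct_post_release(pre_release_defects, post_release_defects):
    """Percentage of all defects found after release."""
    total = pre_release_defects + post_release_defects
    return 100 * post_release_defects / total

def avg_fix_cost(hours_per_defect, loaded_rate_per_hour):
    # hours cover reproduce/analyze/fix/build/re-test (plus re-deploy
    # and acceptance test for post-release) and lost opportunity cost
    return hours_per_defect * loaded_rate_per_hour

print(f"% found post-release: {pct_post_release(180, 60):.0f}%")
print(f"avg pre-release fix:  ${avg_fix_cost(4, 100):,.0f}")
print(f"avg post-release fix: ${avg_fix_cost(16, 100):,.0f}")
```

Even with rough inputs, the gap between the last two numbers is usually the easiest line item to put in front of a CIO.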

You can get finer-grained as well, with a "phase containment" metric: measure the defect cost based on the phase where the defect's root cause was introduced (e.g., requirements, design, code). This is a QA measure that reinforces that it's cheaper to find/fix defects earlier, but it will also point you toward process improvements (inspections/reviews, better techniques for gathering requirements from your customers, coding standards, static or dynamic code analysis tools, etc.). There are mountains of data regarding these metrics as well.
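A phase-containment calculation can be as simple as tagging each defect with the phase that introduced it and the phase that found it (the defect records below are invented for illustration):

```python
from collections import Counter

# Hypothetical defect records: (phase introduced, phase found).
defects = [
    ("requirements", "code"),
    ("requirements", "post-release"),
    ("design", "test"),
    ("code", "test"),
    ("code", "code"),
]

# Phase containment: fraction of defects caught in the same phase
# that introduced them.
contained = sum(1 for intro, found in defects if intro == found)
print(f"containment: {contained / len(defects):.0%}")

# Root-cause breakdown shows where process improvements would pay off.
print(Counter(intro for intro, _ in defects))
```

A low containment rate plus a root-cause histogram dominated by one phase tells you exactly which process improvement to pitch first.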

bug_girl

Remember, interns can be cheap labor, but they have hidden costs. If you take on an intern, you are making a commitment to help them learn crucial skills and behaviors, and also to provide some sort of reporting to their home institution and/or future references.

One problem with students is that sometimes they don't actually *know* they have learned something until you sit down with them and process what they've been doing. There can also be a LOT of coaching involved with interns, since they usually need more feedback than regular employees.

Personally, I find that really fun and enjoyable; some of my folks would, Like, Totally, Like, rather not have an intern.

DrMaltz's picture

bug_girl,

You're absolutely right. I've raised the idea of interns, and there are many people opposed to bringing them on board.