Showing posts with label Testing.

Thursday, December 23, 2010

The role of a tester in the era of test-friendly devs

Back in the day (okay, if I can use that term for the '90s), devs cared about one thing, and one thing only: the coolness of the code. And to hell with whether the d**b-users could figure out how to use the software. Or whether the software was crappy.

Okay, so maybe I'm generalizing a bit here. But I do remember getting this piece of code thrown over the wall. I write a one-line piece of test code that "new"s the class. I get a null-ref. And I'm sitting there thinking... "Really? The dev couldn't write one line to verify that the class could be instantiated??"
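For concreteness, here's roughly what that one-line sanity check looks like. Widget is a hypothetical stand-in for the class in question, and JUnit is just one way to host the check:

import static org.junit.Assert.assertNotNull;
import org.junit.Test;

public class WidgetSmokeTest {
    // The cheapest test there is: can the class even be constructed?
    @Test
    public void canBeInstantiated() {
        // Widget is hypothetical; in the story above, this single
        // line was enough to produce a null-ref.
        assertNotNull(new Widget());
    }
}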

Fast forward to the early 21st century. The young-un devs I meet nowadays have one totally exemplary attribute: they get the need for quality. They write unit tests without nagging, begging, or fisticuffs. They ask how they can help those of us in Test test better. They care about the quality of their code. They don't want to break the build. They want to find bugs BEFORE they check in!

This is the world we in Test have been struggling for, for so long. Hurrah!!

Except...

Now that we're here, I believe an existential crisis is at hand: why do we need Test at all? What's the role of the tester in this new world? Can't devs handle the entirety of building and shipping quality software themselves?

James Whittaker has a provocative set of ideas on this topic in a webinar titled "More Bang for your Testing".

Sunday, July 5, 2009

To automate tests, or to not automate tests?

I added my $0.02 to a discussion on LinkedIn about whether or not to automate tests.

Here's my response:

It's important not to lose sight of the primary goal of testing: to ensure that the product is released with as few bugs as possible (i.e., with the highest quality possible). Automation plays A role in that effort, not THE role.

For functional testing, bugs found via automation tend to fall into two basic categories: those found during test-case development (i.e., while coding the tests), and those found during regression test runs. The first category correlates directly with the quality of the test case being automated. The second category is the result of code changes, and can be caught before it makes it into the build, e.g. via a pre-checkin system.
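To make "pre-checkin system" concrete, here's a minimal sketch of such a gate, assuming a JUnit 4 regression suite and a hypothetical WidgetTests class; a real system would run this automatically before accepting the checkin:

import org.junit.runner.JUnitCore;
import org.junit.runner.Result;
import org.junit.runner.notification.Failure;

// Runs the regression suite and blocks the checkin (nonzero exit
// code) if anything fails, so regressions never reach the build.
public class PreCheckinGate {
    public static void main(String[] args) {
        Result result = JUnitCore.runClasses(WidgetTests.class); // hypothetical suite
        for (Failure failure : result.getFailures()) {
            System.err.println(failure);
        }
        System.exit(result.wasSuccessful() ? 0 : 1);
    }
}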

Performance and stress testing can usually only be performed with automated tests. Performance testing needs to be automated to provide repeatability. Stress testing needs to be automated to provide the ability to load and/or hammer the system.
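As a sketch of why that automation is unavoidable: hammering a system with concurrent load, and getting numbers that are comparable from run to run, isn't something a human at a keyboard can do. Everything below is illustrative; handleRequest() stands in for whatever operation is under stress:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Minimal stress harness: N threads hammer the operation under test,
// and total throughput is reported so runs are repeatable/comparable.
public class StressHarness {
    private static final int THREADS = 50;
    private static final int CALLS_PER_THREAD = 10000;

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(THREADS);
        AtomicLong failures = new AtomicLong();
        long start = System.nanoTime();

        for (int t = 0; t < THREADS; t++) {
            pool.execute(() -> {
                for (int i = 0; i < CALLS_PER_THREAD; i++) {
                    try {
                        handleRequest(); // hypothetical operation under stress
                    } catch (Exception e) {
                        failures.incrementAndGet();
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);

        long elapsedMs = (System.nanoTime() - start) / 1000000;
        long calls = (long) THREADS * CALLS_PER_THREAD;
        System.out.printf("%d calls in %d ms (%d failures)%n",
                calls, elapsedMs, failures.get());
    }

    private static void handleRequest() {
        // stand-in for the real system under test
    }
}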

For me, the key indicator is where testers are spending their time, and what that effort yields. If testers are spending significant amounts of time on automation but not discovering a whole lot of bugs, that's a red flag.

Nothing beats an intelligent, knowledgeable tester spending time figuring out how to test the feature. It is the result of that intellectual effort that drives the quality of the test effort, not the tool, the technique, or the choice of automated vs. manual.

And, as with everything else in life, a balance is needed. Overemphasizing any one testing technique will not serve the ultimate goal of shipping high-quality software.

Wednesday, June 3, 2009

Testing


Came across an interesting question on LinkedIn about code coverage and testing.
Here's my response:

Ultimately, software testing is limited by time, specifically time to ship/release. Software that is 100% bug-free is worthless if it's not in the customer's hands. Software that is 100% bug-free is also an impossibility, somewhat along the lines of Gödel's incompleteness theorems.

100% code coverage has nothing to do with flushing out all bugs. A simple and often-quoted example: take the following code -

for (int i = 0; i < 10; i++)
{
//do something
}

So you have a test that hits the above code, and you get 100% code coverage. But what if the value of the limit in the loop (10) was wrong? That's a bug, but code coverage did not expose it.
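To make that concrete, here's an illustrative version of the trap (names are made up). The first test below executes every line, so a coverage tool reports 100%, yet it asserts nothing; the second actually checks the behavior and exposes the wrong bound, assuming the loop was supposed to run 12 times:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class CoverageTrapTest {
    // Suppose this was meant to process 12 items, but the bound
    // was typed as 10. Every line still executes.
    static int processAll() {
        int processed = 0;
        for (int i = 0; i < 10; i++) { // BUG: should be i < 12
            processed++; // "do something"
        }
        return processed;
    }

    @Test
    public void coversEveryLineButMissesTheBug() {
        processAll(); // 100% coverage, zero assertions -> bug survives
    }

    @Test
    public void actuallyChecksTheBehavior() {
        assertEquals(12, processAll()); // fails, exposing the wrong limit
    }
}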

An oft-repeated observation in software engineering, usually attributed to Edsger Dijkstra, is that testing can show the presence of bugs, but never their absence.

The most pragmatic goal in testing (IMO) is to test the product to a good-enough level of quality for release. The challenge is figuring out what "good enough" means, and how to achieve it. Nailing that conundrum is what differentiates an effective test effort from one that isn't.

Saturday, May 23, 2009

James Whittaker leaves Microsoft

I was bummed to hear that James Whittaker has resigned from Microsoft: http://blogs.msdn.com/james_whittaker/archive/2009/05/21/tour-of-the-month-the-exit-stage-right-tour.aspx

James is a terrific ambassador for testing, showing the world that testing is extremely fun, highly technical, and a genuinely intellectual exercise.

[Breaking news: 5/27/2009] Per this blog, James is now a Director of Test at Google!

I first noticed James through his paper "What is software testing, and why is it so hard?" way back in 2000 (see the embedded document below). As a tester myself, I was blown away that an academic (as I then wrongly assumed James to be) knew what it was like in the trenches.

[Embedded document: "What is Software Testing and Why It is So Hard"]