wotifink

Monday, 6 October 2008

Improved scenario step monitoring with JBehave

If you are running JBehave Scenarios and you want your JUnit runner to look like this:

[screenshot of the JUnit runner view showing each scenario step reported individually]

check out my little experiment at http://code.google.com/p/jbehave-junit-monitor/

and give me some feedback!

Friday, 3 October 2008

Budget Driven Automated Acceptance Tests

Any oft-repeated manual process is a good candidate for automation.
The question for me isn't whether to automate acceptance tests or not, but how we keep them relevant and manageable over time.

One way is to acceptance test the features that are most important to you, but how do you discover which features those are?

If parts of your product enabled different revenue streams, you could bias your acceptance tests towards features that generated the most revenue.

If you had statistics on how your product was used out in the wild, you could bias your acceptance tests towards the most commonly used features.

Or I guess you could just bias your tests towards the stuff that keeps bloody breaking!

But how do we trim out the fat?

One possible answer could be to define a budget for the automated acceptance tests: perhaps a ratio of lines of code to minutes of execution time, say one minute of acceptance testing per thousand lines of code.
You now have a simple metric to help you discuss what is most important to you.
Expensive test, not much value, drop it.
Expensive test, pretty valuable, see if you can get it cheaper without losing too much value (refactor).

Monday, 29 September 2008

Write your own JUnit runner

I've been looking at how to integrate the JUnit test running model with a custom test framework.
This is a handy thing to have, because it allows your custom test framework to be used in all existing JUnit runners - and gives you IDE integration for free.

The @RunWith annotation allows you to specify a custom runner for your class.
Extend the JUnit Runner class and override the necessary methods.



import org.junit.runner.Description;
import org.junit.runner.RunWith;
import org.junit.runner.Runner;
import org.junit.runner.notification.Failure;
import org.junit.runner.notification.RunNotifier;

@RunWith(MyTargetTestClass.TheRunner.class)
public class MyTargetTestClass {

    static int count = 0;

    // A pretend piece of work: blows up on every second call and returns
    // false ("ignore me") on every third.
    public boolean doStuff() {
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {}

        count++;

        if (count % 2 == 0) { throw new RuntimeException("A Failure"); }

        return !(count % 3 == 0);
    }

    public static class TheRunner extends Runner {

        private final Class<? extends MyTargetTestClass> testClass;
        private final MyTargetTestClass testContainingInstance;
        private final Description testSuiteDescription;

        public TheRunner(Class<? extends MyTargetTestClass> testClass) {
            this.testClass = testClass;
            testContainingInstance = reflectMeATestContainingInstance(testClass);
            // Build the tree of descriptions that the IDE (or any other JUnit runner) will display
            testSuiteDescription = Description.createSuiteDescription("All my stuff is happening now dudes");
            testSuiteDescription.addChild(createTestDescription("first bit happening"));
            testSuiteDescription.addChild(createTestDescription("second bit happening"));
            testSuiteDescription.addChild(createTestDescription("third bit happening"));
        }

        @Override
        public Description getDescription() {
            return testSuiteDescription;
        }

        @Override
        public void run(RunNotifier notifier) {
            for (Description description : testSuiteDescription.getChildren()) {
                notifier.fireTestStarted(description);
                try {
                    if (testContainingInstance.doStuff()) {
                        notifier.fireTestFinished(description);
                    } else {
                        notifier.fireTestIgnored(description);
                    }
                } catch (Exception e) {
                    notifier.fireTestFailure(new Failure(description, e));
                }
            }
        }

        private MyTargetTestClass reflectMeATestContainingInstance(Class<? extends MyTargetTestClass> testClass) {
            try {
                return testClass.newInstance();
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }

        private Description createTestDescription(String description) {
            return Description.createTestDescription(testClass, description);
        }
    }
}



It seems that the ability to write custom runners was added pretty late on, and as such the RunNotifier class has some methods that are marked as being for internal use only. Just make sure you don't call them and you'll be fine!
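
As an aside, if you want to exercise a runner like this outside the IDE, JUnitCore will happily drive whatever the @RunWith annotation points at. A minimal sketch (the main class here is just for illustration, not part of the example above):

import org.junit.runner.JUnitCore;
import org.junit.runner.Result;
import org.junit.runner.notification.Failure;

public class RunMyTargetTestClass {
    public static void main(String[] args) {
        // JUnitCore honours @RunWith, so TheRunner above does all the work
        Result result = JUnitCore.runClasses(MyTargetTestClass.class);
        System.out.println("run: " + result.getRunCount()
                + ", failures: " + result.getFailureCount()
                + ", ignored: " + result.getIgnoreCount());
        for (Failure failure : result.getFailures()) {
            System.out.println(failure);
        }
    }
}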

Tuesday, 5 August 2008

Using JUnit 4 to run tests repeatedly

We had a test that would fail intermittently on continuous integration.
The best way to track the problem down was to run the test hundreds of times in a row.
The code below made that easy:


import org.junit.internal.runners.InitializationError;
import org.junit.internal.runners.JUnit4ClassRunner;
import org.junit.runner.RunWith;
import org.junit.runner.notification.RunNotifier;

@RunWith(MyRunner.class)
public class UnusualAndRareProblemTest {
    ...
    ..
    .
}

public class MyRunner extends JUnit4ClassRunner {

    public MyRunner(Class klass) throws InitializationError {
        super(klass);
    }

    @Override
    public void run(final RunNotifier notifier) {
        // Run the whole test class a thousand times to flush out the intermittent failure
        for (int i = 0; i < 1000; i++) {
            super.run(notifier);
        }
    }
}



It turned out the class under test was spawning threads, and we were getting race and deadlock conditions because of the way we were trying to retrieve the outcomes from the different threads.
PS. We fixed this by rewriting the class so that instead of spawning threads directly it made calls to an 'action factory' that we could mock, avoiding multi-threading in the test entirely.
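
To sketch what that seam looks like (the names here are illustrative, not the actual code we ended up with): the class under test asks a factory to run its work rather than constructing threads itself, so a test can supply a factory that runs everything synchronously.

// Illustrative sketch only -- ActionFactory, ThreadSpawningActionFactory and
// ReportGenerator are made-up names, not the real classes.
public interface ActionFactory {
    void start(Runnable action);
}

// Production implementation: still spawns a thread per action.
public class ThreadSpawningActionFactory implements ActionFactory {
    public void start(Runnable action) {
        new Thread(action).start();
    }
}

// The class under test no longer creates threads directly...
public class ReportGenerator {
    private final ActionFactory actionFactory;

    public ReportGenerator(ActionFactory actionFactory) {
        this.actionFactory = actionFactory;
    }

    public void generate(final Runnable work) {
        actionFactory.start(work);
    }
}

// ...so a test can plug in a synchronous factory and never worry about
// thread timing at all:
// new ReportGenerator(new ActionFactory() {
//     public void start(Runnable action) { action.run(); }
// });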

Wednesday, 9 July 2008

A little refactoring that pleased me

The other day I had to change the Page class's getBadgeTag method, and found the following. I did not like it because the logic seemed repetitive, and it was not very clear what was going on.


class PageBeforeRefactor {

    public Tag getBadgeTag() {
        Tag badgeTag = getBadgeTag(getImpliedSeries());
        if (badgeTag != null) { return badgeTag; }
        badgeTag = getFirstBadgeTag(getImpliedKeywords());
        if (badgeTag != null) { return badgeTag; }
        badgeTag = getImpliedContributor();
        if (badgeTag != null) { return badgeTag; }
        badgeTag = getBadgeTag(getImpliedBookSection());
        if (badgeTag != null) { return badgeTag; }
        badgeTag = getBadgeTag(getImpliedBook());
        if (badgeTag != null) { return badgeTag; }
        badgeTag = getFirstBadgeTag(getImpliedBlogs());
        return badgeTag;
    }

    private Tag getBadgeTag(Tag tag) {
        if (tag != null) {
            return tag.getBadge() == null ? null : tag;
        }
        return null;
    }

    private Tag getFirstBadgeTag(List<Tag> tags) {
        for (Tag tag : tags) {
            if (!keywordClassifier.isFootballClub(tag) && tag.getBadge() != null) {
                return tag;
            }
        }
        return null;
    }
}


It would have been easy to just add a new if statement to the method, but I wanted to try to make the code more readable and more descriptive of what it was doing. I ended up with this:


class PageAfterRefactor {

    public Tag getBadgeTag() {
        return new PriorityOrderedBadgeFinder()
            .check(getImpliedBlogs())
            .check(getImpliedSeries())
            .check(getImpliedKeywords())
            .check(getImpliedContributor())
            .check(getImpliedBookSection())
            .check(getImpliedBook())
            .getBadgeTag();
    }

    class PriorityOrderedBadgeFinder {

        Tag badgeTag;

        public PriorityOrderedBadgeFinder check(Tag t) {
            if (badgeTag == null && t != null && !keywordClassifier.isFootballClub(t) && t.getBadge() != null) {
                badgeTag = t;
            }
            return this;
        }

        public PriorityOrderedBadgeFinder check(List<Tag> tags) {
            for (Tag tag : tags) {
                check(tag);
            }
            return this;
        }

        public Tag getBadgeTag() {
            return badgeTag;
        }
    }
}


An added bonus of this refactoring was that I pulled some logic out of the slightly bloated (100+ line) Page class. It could now be tested separately, and I could shift six or so tests out of the PageTest class.
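
Just to illustrate the kind of test that becomes possible once the finder is pulled out, here's a sketch. It assumes the finder ends up as a top-level class that takes the keyword classifier in its constructor, that Tag exposes getBadge() returning a Badge, and that Mockito is available; those details are assumptions of mine, not the real codebase.

import static org.junit.Assert.assertNull;
import static org.junit.Assert.assertSame;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class PriorityOrderedBadgeFinderTest {

    // Assumed collaborators: Tag with a getBadge() method returning a Badge,
    // and a KeywordClassifier with isFootballClub(Tag).
    KeywordClassifier keywordClassifier = mock(KeywordClassifier.class);
    PriorityOrderedBadgeFinder finder = new PriorityOrderedBadgeFinder(keywordClassifier);

    @Test
    public void returnsTheFirstTagThatHasABadge() {
        Tag withoutBadge = mock(Tag.class);                    // getBadge() defaults to null
        Tag withBadge = mock(Tag.class);
        when(withBadge.getBadge()).thenReturn(mock(Badge.class));

        assertSame(withBadge, finder.check(withoutBadge).check(withBadge).getBadgeTag());
    }

    @Test
    public void neverPicksAFootballClub() {
        Tag club = mock(Tag.class);
        when(club.getBadge()).thenReturn(mock(Badge.class));
        when(keywordClassifier.isFootballClub(club)).thenReturn(true);

        assertNull(finder.check(club).getBadgeTag());
    }
}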

Tuesday, 8 April 2008

A Recipe for Budget Driven Bug Fixing

Our team at the Guardian is really getting into the spirit of being budget-driven. This is the recipe we are using to roll bugs into everyday development:

Take the number of bugs to be completed in an iteration.
Divide it by the number of stories to be completed in that iteration.
This number is your Bug Debt.
You must fix that many bugs before you can commence a story.
So if an iteration holds twelve bugs and four stories, the Bug Debt is three: fix three bugs, start a story, and repeat.

mmm... the sweet taste of programming new features juxtaposed with the sour taste of humble pie
