Anyone with a two-year-old knows that one of the most effective ways to test your software is to put it in front of the child: if there's any odd combination of clicks and inputs that will crash the program, the child will invariably find it.
Agitator 3.0 is certainly far more rational in its testing procedures than a toddler, but it takes a similar tack, handily testing your Java code by sending over a maelstrom of test values to ferret out errors.
The package will parse the code to look for potential problems and then build the testing code to target these dangers, choosing numbers and dates from a specific range and adjusting the range according to the constants it sees in your code. If a method seems to be using large values, the random-number generator sends large values its way; if it wants dates, it sends dates. If you have a better idea of the types of data that might cause trouble, you can also focus the selection of test data by supplying your own generator subclasses, known as factories, to the test procedures.
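Agitator's actual factory API isn't reproduced here, but the idea can be sketched generically: a small class (the name and interface below are my own invention) that narrows the pool of generated values to a range you suspect is troublesome, such as dates near a boundary.

```java
import java.util.Random;

// Hypothetical sketch of the "factory" idea: a generator that confines
// random test values to a suspect range. Agitator's real factory classes
// differ; only the concept is illustrated.
public class DateRangeFactory {
    private final Random random = new Random(42); // seeded for repeatable runs
    private final long start, end;                // range as epoch milliseconds

    DateRangeFactory(long start, long end) {
        this.start = start;
        this.end = end;
    }

    // Produce a random instant inside the suspect range [start, end).
    long next() {
        return start + (long) (random.nextDouble() * (end - start));
    }

    public static void main(String[] args) {
        DateRangeFactory f = new DateRangeFactory(0L, 1_000_000L);
        for (int i = 0; i < 5; i++) {
            long v = f.next();
            System.out.println(v >= 0 && v < 1_000_000L); // always in range
        }
    }
}
```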
The code testing is only half of the game. The software will also enforce many standard rules of thumb for developing Java code, such as closing your JDBC connections in the finally block to guarantee that the connection is truly closed. You may turn these coding rules on or off and, if your shop feels the need, add some new ones.
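The finally-block rule looks like this in practice. A stand-in resource is used here in place of a real JDBC Connection so the sketch runs without a database, but the shape is the same: the close call sits in finally, so it runs whether or not the work in the try block throws.

```java
// Sketch of the "close in finally" rule. FakeConnection stands in for
// java.sql.Connection so the example needs no database.
public class FinallyClose {
    static class FakeConnection {
        boolean closed = false;
        void query() { throw new RuntimeException("query failed"); }
        void close() { closed = true; }
    }

    static boolean demo() {
        FakeConnection conn = new FakeConnection();
        try {
            conn.query();          // may throw
        } catch (RuntimeException e) {
            // handle or log the failure
        } finally {
            conn.close();          // runs even though query() threw
        }
        return conn.closed;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // true: the connection is closed despite the exception
    }
}
```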
Agitator bundles all of this information into a development "dashboard" that displays the success or failure of the various classes and packages of code with colour-coded green and red bars. This mechanism may be ideal for a project manager who is attempting to herd developers along the same path. Running these tests daily will enforce the rules automatically.
To test Agitator, I set it on some of my old code, a process that is very easy if you happen to use Eclipse because the application is built as a set of Eclipse plug-ins. After opening up the workspace, I pressed one button and Agitator started scanning my code for errors and pushing random values at my methods. When it was done, the results appeared in a list of errors and warnings, much like the messages from the compiler complaining about errant import statements or semicolons. The software will find a host of serious and minor errors; for instance, Agitator seemed worried about catching general exceptions, wanting the code to spell out the exact type of the exception being caught.
This was a relatively small detail, but other messages were eye-opening. For example, one method was not using the equals method to test whether two strings are the same, a mistake akin to a writer substituting "they're" for "their". Another constructor was calling nonstatic, nonfinal methods, a process that can cause errors when the object is not completely initialized. (This is an area where Java's semantics need a lot of work.)
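Both warnings are easy to reproduce in a few lines (the class and method names below are my own). Comparing strings with == checks object identity rather than contents, and a constructor that calls an overridable method runs subclass code before the subclass's fields have been initialized.

```java
public class ConstructorPitfalls {
    // Pitfall 1: use equals() for string contents; "==" compares references.
    static boolean sameText(String a, String b) {
        return a.equals(b);
    }

    // Pitfall 2: a constructor calling a nonstatic, nonfinal method runs
    // the subclass override before the subclass's fields are set.
    static class Base {
        Base() { describe(); }       // the call Agitator warns about
        void describe() {}
    }

    static class Sub extends Base {
        String name = "ready";
        String seenInConstructor;
        @Override void describe() { seenInConstructor = name; } // name is still null here
    }

    public static void main(String[] args) {
        System.out.println(sameText("their", new String("their"))); // true
        Sub s = new Sub();
        System.out.println(s.seenInConstructor); // null: field init ran after Base()
        System.out.println(s.name);              // "ready"
    }
}
```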
I really enjoyed the discipline of the coding rules. Dyslexic-style mistakes happen, and structural testing is the only way to catch them. Although the implementation is gorgeous, the proliferation of detail can be a bit confusing if you're not careful. For example, each method is marked up with little numbers that indicate how many times the line was executed by Agitator -- one line was called 144 times and another was called 247 times. In the same vein, the tool for drilling down into the code and seeing what these random values generated is impressive, filled with obsessive detail.
In my code, these random tests seemed to find many null-pointer errors that never appear until the code is shipped. Its random-number generator would root out the poorly written lines that passed my own tests, smoothly addressing one of the major problems with unit tests. When we write the tests ourselves, most of our code will pass those tests because they encode all of the problems that we can predict. Agitator, however, doesn't have that bias and can pull out the errors that we didn't think to test for, such as the aforementioned null pointers.
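The bias is easy to see in miniature (this toy method is mine, not from my reviewed code): the test the author writes feeds the method the inputs the author imagined, so it passes, while a harness in the spirit of Agitator's random probing throws null at it and exposes the hole.

```java
public class NullHole {
    // Passes its author's handwritten tests, which only feed it real strings...
    static int wordCount(String s) {
        return s.split("\\s+").length; // throws NullPointerException when s is null
    }

    // ...but a random-input probe, in the spirit of Agitator, finds the hole.
    static boolean survivesNull() {
        try {
            wordCount(null);
            return true;
        } catch (NullPointerException e) {
            return false;              // the defect the handwritten tests missed
        }
    }

    public static void main(String[] args) {
        System.out.println(wordCount("one two three")); // 3: the case the author tested
        System.out.println(survivesNull());             // false: null input crashes it
    }
}
```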