ant-dev mailing list archives

From Apache Wiki <>
Subject [Ant Wiki] Update of "Proposals/EnhancedTestReports" by DanFabulich
Date Tue, 27 Nov 2007 19:20:50 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Ant Wiki" for change notification.

The following page has been changed by DanFabulich:

   1. Scales well to lots of JUnit tests on a single process
   1. Integrates with CI servers
   1. Produced by lots of tools: <junit>, antunit, maven surefire, testng
-  1. Consumed by: <junitreport>, CruiseControl, Luntbuild, Bamboo, Hudson, IntelliJ TeamCity, AntHill
+  1. Consumed by: <junitreport>, maven surefire-reports, CruiseControl, Luntbuild, Bamboo, Hudson, IntelliJ TeamCity, AntHill
   1. Includes log output, JVM properties, test names
   1. Reasonable HTML security (we sanitise all output except test names in the HTML)
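For reference, the de facto report format that these producers and consumers exchange looks roughly like this (element and attribute names from the common format; the values are invented for illustration):

```xml
<!-- Sketch of the de facto JUnit XML report; values are invented. -->
<testsuite name="org.example.FooTest" tests="2" failures="1"
           errors="0" time="0.042" hostname="build01">
  <properties>
    <property name="java.version" value="1.5.0"/>
  </properties>
  <testcase name="testOk" classname="org.example.FooTest" time="0.010"/>
  <testcase name="testBroken" classname="org.example.FooTest" time="0.032">
    <failure type="junit.framework.AssertionFailedError"
             message="expected: 1 but was: 2">stack trace as plain text</failure>
  </testcase>
  <system-out>captured stdout</system-out>
  <system-err/>
</testsuite>
```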
@@ -19, +19 @@

   * Storing summary information as root node attributes prevents us from streaming output
   * JVM crashes mid-test result in empty XML files; no opportunity for post mortem
   * No metadata about tests stored other than package/name, which we assume is always java package format
+    * The list of test cases is flat, even though the test run may have had a complex hierarchy of suites
+    * No record of test parameters (data-driven testing); the same test appears 7 times, all identical
   * No machine information recorded other than JVM properties, hostname and wall time.
-  * No direct integration with issue tracking systems (which bug is this a regression test of, who to page)
+  * No direct integration with issue tracking systems (which bug is this a regression test for, and who to page)
   * No notion of skipped tests, timed out, other failure modes.
   * Output is logged but log4j/commons-logging/java-util log output is not split up into separate events
   * Output is logged per test case, so linking it with the executed test method is difficult
@@ -30, +32 @@

  * There could be more datamining opportunities if more system state was recorded (e.g. detect which platforms/configurations trigger test failure)
   * stack traces saved as simple text, not detailed file/line data with the ability to span
   * No way to attach artifacts such as VMWare images to test results
+  * Only one failure is allowed per test case. (JUnit requires there be only one failure per test case, but other test frameworks, e.g. Selenium, allow for the possibility of multiple failures per test case.)
 Summary: it was good at the time, but as testing has grown more advanced, we need to evolve the format (carefully)
@@ -54, +57 @@

  * base metadata to include: machine info, test description, links and issue IDs (for binding to issue tracking systems)
   *  ''add new features in streaming friendly manner''
  *  ''add handling for JVM crashes'' Stream results to a different process for logging, so that a JVM crash truncates the log instead of killing it.
+  * ''make it easy to integrate'' Numerous tools already support the old format; ideally they should not need to write much new code or understand radically new concepts in order to support the new format.
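One way to realise the crash-handling goal above is to emit each test event as a self-contained line to a separate logging process, so a crash truncates the stream at the last whole event rather than leaving a half-written XML document. A sketch with invented event fields:

```python
import json

# Sketch of the "stream to a different process" idea: one JSON event per
# line. Every line is complete on its own, so a JVM crash merely truncates
# the stream instead of producing an unparseable document. (The event
# fields here are invented for illustration.)
def encode_event(name, outcome, **details):
    return json.dumps({'test': name, 'outcome': outcome, **details})

def replay(stream_text):
    # A consumer recovers everything written before the crash point.
    events = []
    for line in stream_text.splitlines():
        if not line.strip():
            continue
        try:
            events.append(json.loads(line))
        except ValueError:
            break  # half-written final line from the crash
    return events
```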
  == Interested Parties ==
@@ -91, +94 @@

   * environment variables on java5+
 The risk here is that we could massively increase XML file size by duplicating all this stuff for every test. We may only need this per test suite.
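If the per-suite route is taken, the machine and environment details could sit once under the suite element instead of being repeated per test; a possible shape (all names below are invented):

```xml
<!-- Sketch: record environment once per suite, not per test (names invented). -->
<testsuite name="org.example.FooTest" tests="200">
  <environment hostname="build01" os="Linux 2.6" arch="i386">
    <env name="PATH" value="/usr/bin:/bin"/>
    <env name="LANG" value="en_GB.UTF-8"/>
  </environment>
  <!-- testcase elements follow; none of them repeats the block above -->
</testsuite>
```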
- == Improved Fault Information ===
+ === Improved Fault Information ===
 Java faults should be grabbed and their (recursive) stack traces extracted into something that can be zoomed in on
@@ -110, +113 @@

 This would be better suited to display-on-demand and other postmortem activities. We should still render the fault in the existing format for IDEs to handle (e.g. IDEA's "analyze stack trace" feature)
+ DanFabulich: Many popular tools parse exceptions directly; they're already reasonably structured, even if they aren't really XML.
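As an illustration of how regular that structure already is, frames of a Java stack trace ("at pkg.Cls.method(File.java:123)") can be lifted into file/line records with a short parser; a sketch:

```python
import re

# Sketch: parse Java stack-trace frames into structured records instead of
# keeping the trace as one opaque text blob.
FRAME = re.compile(r'\s*at\s+([\w.$]+)\.([\w$<>]+)\(([^:)]+):?(\d+)?\)')

def parse_frames(trace):
    frames = []
    for line in trace.splitlines():
        m = FRAME.match(line)
        if m:
            cls, method, src, lineno = m.groups()
            frames.append({'class': cls, 'method': method, 'file': src,
                           'line': int(lineno) if lineno else None})
    return frames
```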
  == Open ended failure categories ==
 success and skipped are fixed, but there are many forms of failure (error, failure, timeout, out of performance bounds, and others that depend on the framework itself). Can we enumerate all possible categories of failure, or could we make it extensible?
+ DanFabulich: I recommend against attempting to enumerate failure categories. Allowing a user-defined failure type (with user-defined semantics) makes more sense to me.
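One possible shape for that extensible-outcome suggestion: a small fixed set of outcome values, plus a free-form type that the framework defines (attribute names below are invented):

```xml
<!-- Sketch: fixed outcome values plus a framework-defined subtype (names invented). -->
<testcase name="testLogin" outcome="failure" type="timeout"/>
<testcase name="testRender" outcome="failure" type="selenium:verify-failed"/>
<testcase name="testOnlyOnWindows" outcome="skipped" type="os-not-supported"/>
```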
  == Add partial-failure, not-yet-finished as test outcomes ==
