ant-dev mailing list archives

From Stephane Bailliez <>
Subject RE: [COVERAGE] jakarta-ant
Date Fri, 01 Feb 2002 10:28:10 GMT
> -----Original Message-----
> From: Diane Holt []

> But wouldn't some of these tasks be pretty hard to have tests for? For
> example, to test the Perforce tasks you'd need to have Perforce running
> on the test machine(s) -- you can get a free version of it (limited to
> 2 users), but who'd be the one to have to set something like that up?
> (I don't even want to think about someone having to deal with setting
> up Clearcase :)  And I'm not sure how you'd really test the Sound task,
> other than to verify the error-checking works right -- unless someone's
> around to listen and make sure the sound-file really played :)

You cannot test everything, but at least some tests are better than no tests
at all (mm.. sounds familiar -- maybe this one is in Martin Fowler's book
about refactoring and unit tests).

Testing fully for Clearcase or p4 tasks would be a matter of creating mock
executables with parameter validation... Short of going that far, one could
design the tasks to be much more modular and easier to test, by providing
entry points to check the values. It's non-negligible work, but in the long
run it pays off.

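To illustrate the "entry points to check the values" idea: a minimal sketch (the class name, setters, and method below are all hypothetical, not the actual Ant P4 task) that factors command-line construction and validation out of process execution, so a unit test can exercise the task's logic without a Perforce server installed:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a more testable SCM task: the command line is
// built and validated in a separate method, so tests never need to run
// the real p4 executable.
public class P4SyncSketch {
    private String view;
    private boolean force;

    public void setView(String view) { this.view = view; }
    public void setForce(boolean force) { this.force = force; }

    // Testable entry point: builds and validates the p4 command line
    // without launching any process.
    public List<String> buildCmdLine() {
        if (view == null || view.length() == 0) {
            throw new IllegalArgumentException("view is required");
        }
        List<String> cmd = new ArrayList<>();
        cmd.add("p4");
        cmd.add("sync");
        if (force) {
            cmd.add("-f");
        }
        cmd.add(view);
        return cmd;
    }

    // An execute() method would actually spawn the process; unit tests
    // would only ever call buildCmdLine().

    public static void main(String[] args) {
        P4SyncSketch task = new P4SyncSketch();
        task.setView("//depot/...");
        task.setForce(true);
        System.out.println(task.buildCmdLine());
        // prints [p4, sync, -f, //depot/...]
    }
}
```

A test can then assert on the returned list (and on the exception for a missing view) with no SCM server anywhere in sight; only execute(), which is a thin wrapper, stays uncovered.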
Like metrics, coverage gives figures you really have to 'weigh' and
analyze. It's not one number fits all... but at least it can give you an
indication of what has been done and what could be done.


To unsubscribe, e-mail:   <>
For additional commands, e-mail: <>
