panicdotal Fri 25 Feb 2011
I really like what I see in Fantom.
Is there a way to programmatically (from a Fantom program) launch the unit test framework? Sorry if it's already documented, still working my way through the docs.
I will be delving into this language over the next few weeks. If it is as good as I think it is, I'd be interested in contributing to this project. Do you need any help?
brian Fri 25 Feb 2011
@panicdotal - welcome to Fantom, glad you like it so far!
Is there a way to programmatically (from a Fantom program) launch the unit test framework?
From Java, the "harness" is in the Fant class. We don't have a Fantom harness API yet, but we definitely need one (surprised no one has asked for that yet). I will put it on my todo list for the next build.
If it is as good as I think it is, I'd be interested in contributing to this project. Do you need any help?
Always :-) Depending on your interest level, I would say the top priority for Fantom's success is IDE support - we already have an active project for Eclipse and another for NetBeans. But it depends on your interests, obviously (there are lots of other projects).
cbeust Sat 26 Feb 2011
Hi Brian,
I have always assumed that your testing needs were covered, but your message seems to imply that such is not the case.
Could you be more specific about what you currently have today and what you need?
Thanks.
-- Cédric
brian Mon 28 Feb 2011
Hi Cedric,
Here are my thoughts on testing...
I wanted a super simple, basic test framework that lives in sys - the root of all dependencies. This allows all pods to use the test framework without any additional dependencies. Obviously this is nice for the core pods, to avoid pulling in any unneeded dependencies.
The test framework is simple by design: one sys::Test class, a simple lifecycle, and test methods marked by the convention of starting with "test". We have a very simple test harness that runs the tests.
This design suits the needs of the core framework and pods, but by no means do I expect it to be the be-all, end-all of Fantom test frameworks. I expect certain projects might want more sophisticated test lifecycles, facet annotations, or more flexible use of test harnesses (such as plugging into IDEs, etc).
The only thing I would ask of potential new test frameworks is that we ensure the core sys::Test class, and tests developed with it, work in whatever whiz-bang extensions might be developed. Obviously you might want to write such a framework :-)
For this particular issue, what I want to do is ensure that enough is exposed by sys::Test that you could write new, more advanced test frameworks. What I was thinking of was writing an example script that showed how it could be done. Or maybe actually create a class in util.
So to recap: my goal with sys::Test is to define a simple, but fully functional test framework. If others would like to extend that and make more sophisticated test frameworks, I want to make sure the low level API enables that successfully.
Make sense?
DanielFath Wed 2 Mar 2011
I'm still convinced a @Test annotation would be better than convention, if for nothing else than extensibility and easier implementation for IDEs, imho.
panicdotal Thu 3 Mar 2011
How about an @IgnoreTest facet (or @NoTest to be consistent with @NoDoc)?
msl Thu 3 Mar 2011
I think that @NoTest would be a bit confusing:
internal class BlahTest {
  @Test
  @NoTest
  Void testSomething() {}
}
I'd prefer something like @Skip:
internal class BlahTest {
  @Test
  @Skip
  Void testSomething() {}
}
(You could argue that @NoTest would replace @Test - but that's effectively the same as removing the facet altogether, which makes it redundant and easily missed.)
I'm interested to hear what angle Cedric brings to the conversation, based on TestNG experiences.
Martin
alex_panchenko Thu 3 Mar 2011
JUnit: @Ignore
TestNG: @Test(enabled=false)
msl Thu 3 Mar 2011
I stand corrected - @Ignore was what I had in mind.
cbeust Thu 3 Mar 2011
I (predictably) think that facets are a superior way to mark tests. The "methods must start with test" approach was a hack for when we didn't have anything better. Testing is an orthogonal concern, which makes it the perfect target for an annotation. There are a few added benefits: you can name your methods any way you want (e.g. shouldInsertInDatabase()) and it allows you to specify additional configuration in the facet (e.g. @Test { enabled = true; timeOut = 10.seconds }).
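For reference, a minimal sketch of how such a facet could be declared in Fantom - the field names here are illustrative only, not an agreed design:
facet class Test
{
  const Bool enabled := true
  const Duration timeOut := 0sec
}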
Here are what I think are the most popular features in TestNG, which I think might benefit Fantom as well:
Data providers. Very easy to set up and universally useful.
Groups. In my opinion, it's the best way to separate the static aspect of your tests (their business logic) from the runtime part (which tests to run at a given time).
Configuration points: methods, classes, pods. I wonder if we need something between classes and pods, I've found it very useful to be able to group a set of classes together (what TestNG calls a "Test" and which is contained in <test></test> tags in testng.xml).
Parallelism. It should be easy to run all your tests in different threads, and ideally, we should provide different strategies (one thread per test method, one thread per class, one thread per pod, one thread per group, etc...)
Test dependencies. Universally frowned upon for unit testing but really useful in the functional testing world, especially for web testing (Selenium, etc...).
I'm not sure the equivalent of a testng.xml file would fit Fantom's philosophy. Since Fantom build files are .fan files, I'm guessing that description file should be a .fan as well, but I do think it should be a separate, mostly declarative file.
That's a very broad overview, there are a few additional "micro" features that I think are very useful as well (such as time outs, invoking a specific test method from a thread pool, exception support, etc...). I'm also glossing over the API aspect since Brian would like this to remain as small as possible since it might end up in sys.
brian Thu 3 Mar 2011
I (predictably) think that facets are a superior way to mark tests. The "methods must start with test" approach was a hack for when we didn't have anything better
Actually Fantom's test framework started the same way b/c it was written before facets. I personally think the convention with the "test" prefix is fine, but I don't actually care that much one way or the other. I can definitely see advantages to having a facet with additional attributes. The main problem is that there can be only one sys::Test class, and ideally you would want to use that "good name" for the facet, not the base class - so we'd have to make some big breaking changes. I can get behind that change, but a breaking change that big needs majority community support.
Groups. In my opinion, it's the best way to separate the static aspect of your tests (their business logic) from the runtime part (which tests to run at a given time).
Yeah, we have discussed this a bit in the past, and I think it's a useful generic feature for lots of things (tests, docs, other reflective stuff). Maybe something like:
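(A sketch of the idea only - the @Tags name and vals field are illustrative, nothing settled:)
facet class Tags
{
  // group names this type or slot belongs to
  const Str[] vals := Str[,]
}

class QueryTest : Test
{
  @Tags { vals = ["integration", "slow"] }
  Void testBigQuery() { verify(true) }
}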
I don't have much to comment on the other stuff, other than, like I said before: I want a simple core test framework, but I also want to enable the community who is interested in layering more powerful stuff above it.
cbeust Thu 3 Mar 2011
Brian,
Maybe it would make more sense to put these types in a sub-pod of Sys then? Sys::Test? (sorry if this is not possible, I'm not 100% familiar with the way pods work these days).
It would also provide more isolation and more freedom to create additional test related types.
brian Thu 3 Mar 2011
Maybe it would make more sense to put these types in a sub-pod of Sys then? Sys::Test? (sorry if this is not possible, I'm not 100% familiar with the way pods work these days).
I don't know if that really helps one way or the other, actually. To me the fundamental problem is defining the core model that everyone uses (even if we build extensions above that). If there was wide agreement that facets should be used instead of the test prefix, we'd probably want to ensure that design was used by all the core code too (since it serves as a primary example). What I would really like to avoid is forking how testing is done. For example, if the awesomeTest pod defined a new whiz-bang test framework, you'd like to ensure that at least all the existing tests written to use sys could be run in that test framework (so we have a common substrate).
msl Fri 4 Mar 2011
Given what seems to be a preference for using facets over a base class, the Test harness would expand quite a bit over the single sys::Test class that's there ATM.
Given current conversations, there'd end up being (off the top of my head):
@Test (replaces Void testXX())
@Before (replaces Void setup())
@After (replaces Void teardown())
Verify (contains the static Void verifyXXX() methods -- static imports, puuuhllease? :))
If you then add in some extra niceties:
@BeforeClass
@AfterClass
@Runners
mixin TestRunner { ... }
class StandardRunner : TestRunner { ... }
@Listeners
mixin TestListener { ... }
class StandardListener : TestListener { ... }
It starts looking to me like a pod in its own right, with hopefully enough room for extension (through runners and listeners) to provide a core to build lots of other stuff on top of.
Based on that extensibility, you could then start looking at some of the extra stuff Cedric has mentioned, which could well be packaged as part of the core (mythical) test pod:
@DataProvider
class DataProviderRunner : TestRunner { ... }
@RunWith
class RunWithRunner : TestRunner { ... }
class ParallelRunner : TestRunner { ... }
class SuiteRunner : TestRunner { ... }
class Suite { ... }
And it still leaves room for other custom external extensions as required.
M
msl Fri 4 Mar 2011
static imports, puuuhllease? :)
Ignore that - someone slap me around the head and remind me that mixin methods have bodies.
qualidafial Fri 4 Mar 2011
@DataProvider is an important one for me. I use parameterized tests on many of my projects. So that's an argument in favor of the JUnit4/TestNG style testing.
tactics Fri 4 Mar 2011
Perhaps a better name for a @Test facet would be @Case or @TestCase.
timclark Fri 4 Mar 2011
The main modification that I would like to see to the existing test framework is the exposure of the data structure that the verify methods update, so that I can write my own test runners. This is mainly so I can tune the chattiness of the test output when I am building: some people like to see all of the tests scrolling past as they run, some people like to see a summary. The Maven Surefire plugin is nice in this respect in that it only reports assertion errors - unfortunately it reports them in a non-helpful way that requires digging through XML files to see the assertion error.
So I would like to be able to flip from an output like this:
Run: treacle::Test1.testStickiness..
Pass: treacle::Test1.testStickiness [2]
Run: treacle::Test1.testSweetness..
TEST FAILED
blah::huge: I am a stack trace that needs to be examined
To this:
Running tests in treacle
4054 tests passed, 549229 assertions verified
TESTS FAILED - 5 tests failed:
treacle::Test1.testSweetness - expected 'sweet' received 'sour'
treacle::Test2.testUseAsGlue - expected treacle::NotGlueErr to be thrown,
nothing was thrown
treacle::Test5.testThickness - expected 0.01(+- 0.0001) received 10
So the test runner would have some sort of flag to flip its output, and Test may need to expose a richer data structure than the current verifyCount.
Additionally, I find the hamcrest matchers and libraries like jmock2, mockito, spec and rspec make you think differently about how to write tests; it would be great if the current test framework didn't stop me or others building similar frameworks on top of the existing Fantom sys::Test class.
brian Fri 4 Mar 2011
Given what seems to be a preference for using facets over a base class, the Test harness would expand quite a bit over the single sys::Test class that's there ATM.
I wasn't thinking we take it that far myself.
@DataProvider is an important one for me. I use parameterized tests on many of my projects. So that's an argument in favor of the JUnit4/TestNG style testing.
Not sure this really needs to be core - that sort of thing seems easily layered above a simple core, right?
So the test runner would have some sort of flag to flip its output and test may need to expose a richer data structure than the current verifyCount.
I am definitely going to do this, and was getting ready to do it for this build. But I sort of wanted to see how this discussion turned out before committing to any new API. My planned API is something like this (just a quick prototype, haven't thought about it much):
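(A sketch of what such a harness API might look like - the TestRunner name and its methods here are illustrative, not the actual proposal:)
** illustrative sketch only - not the actual proposed API
class TestRunner
{
  ** run every test method in a pod; return the number of failures
  Int runPod(Pod pod)
  {
    failures := 0
    pod.types.each |t| { if (t.fits(Test#)) failures += runType(t) }
    return failures
  }

  ** run the test methods of one type, honouring setup/teardown
  Int runType(Type t)
  {
    failures := 0
    t.methods.each |m|
    {
      if (!m.name.startsWith("test")) return
      test := (Test)t.make
      try { test.setup; m.callOn(test, [,]); test.teardown }
      catch (Err e) { failures++; e.trace }
    }
    return failures
  }
}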
So just start with this fundamental proposal:
Rename sys::Test to sys::TestCase
Add a sys::Test facet and require it to be used instead of relying on the "test" prefix naming convention
Is there widespread support for that change, or would you like to see things stay the same?
rfeldman Fri 4 Mar 2011
Regarding renaming, I like the idea of doing it the other way around - just add a @TestCase facet.
The methods are, after all, test cases.
timclark Fri 4 Mar 2011
I am intrigued by the need for the facet; it makes me feel like an old man moaning that we didn't do things like that in my day! I really prefer extending the test case class and naming my methods testFoo - I still write my Java tests that way. My test classes always seem to have a single purpose - that is, testing - so saying that testing is an orthogonal concern needing an annotation has never seemed to apply to my tests.
When Cedric released TestNG I watched with interest, I then watched with interest as JUnit 4 added annotations, and then I worked on projects using JUnit 4 with annotations. I then started wondering why my JUnit 3 test cases were shorter than the equivalent JUnit 4 test cases! Then I started wondering why Java (C#) needed annotations (attributes); I am still waiting for a good answer.
I am not criticising TestNG, because I know it does so much more than just allowing you to write a test by adding @Test in front of the method.
I think that the TestRunner proposal looks good. It seems to have plenty of hooks available to add custom behaviours around the tests.
cbeust Fri 4 Mar 2011
Brian, I think this early runner prototype looks good.
@DataProvider is an important one for me. I use parameterized tests on many of my projects. So that's an argument in favor of the JUnit4/TestNG style testing.
Not sure this really needs to be core, that sort of thing seems easily layered above a simple core right?
I'm not sure: the strength of @DataProvider is that it lets you invoke test methods with parameters:
// This method will provide data to any test method that declares that its
// Data Provider is named "test1"
@DataProvider(name = "test1")
public Object[][] createData1() {
  return new Object[][] {
    { "Cedric", new Integer(36) },
    { "Anne", new Integer(37) },
  };
}

// This test method declares that its data should be supplied by the Data
// Provider named "test1"
@Test(dataProvider = "test1")
public void verifyData1(String n1, Integer n2) {
  System.out.println(n1 + " " + n2);
}
Do you think it would be possible for a third party to implement your skeleton runner and provide this ability?
Also, I'd like to caution against the @RunWith approach that JUnit 4 is using: the way they designed it, it's impossible to compose runners, so for example, you can have a runner that will run your methods in parallel and one that supports parameterized testing, but not both. This is a big design flaw in my opinion, so we should either 1) make sure that runners can be composed or 2) not go down the customizable runner approach and find another way (the choice I made with TestNG).
cbeust Fri 4 Mar 2011
Tim:
I'm not sure why the JUnit 3 tests you looked at were shorter than JUnit 4, but I'm betting it probably has very little to do with annotations.
There are many, many reasons why annotations are a pretty sound idea, but if I had to give one to try to convince you, it would be this: imagine that one day you use a framework that allows you to call your methods remotely, but in order to do that, they need to start with "remote". Now how do you write a method that's both a test and remote?
How about a framework that will allow your methods to be part of a two phase commit, but they need to start with "twoPC"?
See where this is going?
This is why all these concerns are orthogonal to your code and to the way you name your methods. They belong in annotations.
panicdotal Fri 4 Mar 2011
Would it make more sense to generalize the @DataProvider concept using dependency injection?
I've been thinking that it makes a lot of sense to include rudimentary DI in the language. (This is probably a good topic for a separate thread.)
If you have DI in the language, you can make your test runner instantiate your test fixtures using the DI framework and let it supply the pre-configured dependencies.
tcolar Fri 4 Mar 2011
+1 to DI, I was thinking about that earlier this week.
timclark Sat 5 Mar 2011
If you have DI in the language, you can make your test runner instantiate your test fixtures using the DI framework and let it supply the pre-configured dependencies.
Doesn't Fantom have DI already? Surely you can pass any dependencies your class needs when it is constructed? As for testing, if your fixtures are that complicated wouldn't you factor their creation out into common methods, factories or builders?
I'd prefer a more literate style of testing similar to the style used in this book, which does implicit DI by just using the Java language.
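For example, a sketch of plain constructor injection with no framework at all (Db, MemDb, and isSticky are made-up names):
mixin Db { abstract Bool isSticky(Str name) }
class MemDb : Db { override Bool isSticky(Str name) { return true } }

class TreacleTest : Test
{
  Db db
  // default to an in-memory fake; a runner (or DI container) could pass a real Db
  new make(Db db := MemDb()) { this.db = db }
  Void testStickiness() { verify(db.isSticky("treacle")) }
}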
msl Sat 5 Mar 2011
Is there wide spread support for that change or would you like to see things stay the same?
I think the change is good as a first step - but as may have been hinted at above, I don't think it goes far enough.
I think that we need to provide two interfaces for augmenting tests:
TestRunner (with the composability Cedric mentioned above); and
TestListener (to customise the behaviour of logging, etc as Tim mentioned).
On top of that, there needs to be some way to add these on a type-by-type basis (2 new facets):
@Runners { val = [,] }; and
@Listeners { val = [,] }
As well as providing a global way and a pod-level means for configuring how tests are run. With that in mind, I've started mocking up a working example of what this looks like (in my mind). Bitbucket has the source, but a test looks something like:
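(A sketch of the general shape only - these are illustrative names, not the actual Bitbucket code:)
facet class Test {}
facet class Runners { const Type[] val := Type[,] }
facet class Listeners { const Type[] val := Type[,] }

@Runners { val = [StandardRunner#] }
@Listeners { val = [StandardListener#] }
class TreacleTest
{
  @Test Void testStickiness() {}
}
And the two components mentioned above (TestListener, TestRunner):
mixin TestRunner
{
  abstract Void run(Type testType, Method testMethod)
}

mixin TestListener
{
  virtual Void testStarted(Method m) {}
  virtual Void testPassed(Method m) {}
  virtual Void testFailed(Method m, Err e) {}
}

class StandardRunner : TestRunner
{
  override Void run(Type t, Method m) { m.callOn(t.make, [,]) }
}

class StandardListener : TestListener {}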
Obviously just a working prototype - but this is closer to the direction I'd like to see testing headed within Fantom.
Martin
timclark Sat 5 Mar 2011
I like the TestRunner and TestListener. I was thinking that there would be a need for some sort of TestResult class but your listener seems to fulfill that purpose perfectly.
Tim
brian Sat 5 Mar 2011
Maybe I don't get it, but why can't a data provider just use normal code:
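(A sketch - checkAge is a stand-in for whatever the test actually exercises:)
class AgeTest : Test
{
  // stand-in for the code under test
  static Int checkAge(Str name) { return name == "Cedric" ? 36 : 37 }

  Void testAges()
  {
    // the "data provider" is just plain code: a list we loop over
    [["Cedric", 36], ["Anne", 37]].each |Obj[] row|
    {
      verifyEq(checkAge((Str)row[0]), row[1])
    }
  }
}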
What is the justification for a whole meta-data driven layer when normal simple code works just as well? Is this design rooted in how painful Java reflection is to use? However, to answer your question Cedric, I think it would be simple to wrap in a layer above.
Likewise with BeforeTest, AfterTest, etc - why don't simple virtual methods on the test base class suffice?
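e.g. a sketch:
class FooTest : Test
{
  override Void setup() { echo("runs before each test method") }
  override Void teardown() { echo("runs after each test method") }
}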
I can see value in a facet for the test method. But this explosion of other facets doesn't seem like the simplest thing that works, but maybe I am not getting it?
so we should either 1) make sure that runners can be composed or 2) not go down the customizable runner approach and find another way
I am not sure I follow this. Can you explain it a bit more? Would the simple design I proposed work?
I think that we need to provide two interfaces for augmenting tests:
Would you ever have more than one listener?
jodastephen Sat 5 Mar 2011
The approach of simple Fantom code will test all the values as you suggest. However, what you lose is the test reporting at the per data item level. This is useful when tests fail:
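For example (illustrative output only, not an actual report):
testAges[0] ("Cedric", 36): pass
testAges[1] ("Anne", 37): FAILED - expected 37, got 36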
A data provider can also be shared between multiple test methods, which allows variations on a theme to be tested. ThreeTen/JSR310 uses this a lot.
tactics Mon 7 Mar 2011
@rfeldman
Regarding renaming, I like the idea of doing it the other way around - just add an @TestCase facet.
I agree. The naming convention should be @Test for the classes and @TestCase for the test methods. (Although @TestCase feels just slightly too long).
brian Mon 7 Mar 2011
Personally I think that, as the "good name", @Test should be used for the methods - you annotate 10 methods for every class, so it seems we would want the shorter name there. Although I still don't hear any overwhelming support for making a big breaking change...
cbeust Mon 7 Mar 2011
Whatever the facet name we end up with, I don't see any reason why it should be different depending on whether it's on a method or on a class. Or is that a restriction of Fantom's Facets?
cbeust Mon 7 Mar 2011
As Stephen pointed out, explicitly supporting parameter passing in the framework makes it easier to have more accurate reporting. Having said that, Brian is right that you can simulate this yourself, if you really want to keep complexity low.
However, I still think that the skeleton runner should allow test methods to be invoked with parameters. @DataProvider is one example where this is useful, but there are others.
For example, TestNG provides some convenient dependency injections: if you declare parameters of a certain type that TestNG understands (e.g. test context or a method type), TestNG will fill it in for you. For example, you can use this in a @BeforeMethod to know what test method is about to be invoked.
Speaking of dependency injection, the framework should also provide a hook to instantiate test objects, thereby allowing testers to create their own instances of test classes (possibly injected). For example, here is how I recently added Guice support to TestNG.
rfeldman Tue 8 Mar 2011
I don't care about the breaking change...it wouldn't be a tough one to fix. I was assuming we'd speak up if we had a problem with it.
msl Tue 8 Mar 2011
Although still don't hear any overwhelming support for making a big breaking change...
I'm bang up for updating the test harness.
However, I don't know where the line between "simple core" and "extensible framework" lies. I'm still updating the code I posted on the weekend with what I'd like to see in fan's test harness.
It supports most of the big features listed here - data providers, custom listeners, composable runners (although I'm not really happy with how this is looking and was going to request feedback once it's pushed to Bitbucket), and (if I get the chance) multiple threads of execution.
However, the result of all of that is a pretty substantial code base that's looking a lot more like TestNG or JUnit rather than something lightweight that could be included in the core.
My take on the changes that I think Brian is proposing:
Create a @Test facet which (presumably, see c10290) can be used on a type or a method
Update fanx.tools.Fant.java to look for these facets rather than methods named testXX
Rename sys::Test to something else (for the sake of discussion, sys::TestCase). This means that classes still need to extend sys::TestCase to access the various verifyXX methods.
Is that a fair review?
M
cbeust Tue 8 Mar 2011
Hi M,
This looks good. I'm assuming sys::TestCase is a mix-in, though, right?
I'm curious to see how your composable runner would work because in light of what I wrote above, I'm still not quite sure how it can be done.
For example, suppose we have a runner that supports @DataProvider, and therefore injects parameters supplied from a user method, but also supports the private injection that I mentioned, which means injecting some other data, coming from the test framework this time. The two runners would have to cooperate to fill in the parameters to pass to the test method before it can actually be invoked.
Need to think a bit more about that.
msl Tue 8 Mar 2011
Right now - it's pretty gruesome (verbose) - hence wanting feedback. The truncated-for-example version looks something like:
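(An illustrative reconstruction rather than the actual code - the runner names are made up:)
mixin DataProviderRunner
{
  virtual Void runTest(Method m) { echo("resolve parameters for $m.name") }
}

mixin ParallelRunner
{
  virtual Void runTest(Method m) { echo("schedule $m.name on a worker") }
}

class MyRunner : DataProviderRunner, ParallelRunner
{
  override Void runTest(Method m)
  {
    // explicitly pick which super-mixin versions run, and in what order;
    // miss one and that behaviour silently disappears
    DataProviderRunner.super.runTest(m)
    ParallelRunner.super.runTest(m)
  }
}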
So I guess saying "composable" is a blatant lie :). It's really just leveraging Fantom's ability to have multiple inheritance of mixins and specify which version of super.method you want to call - which ends up being verbose and error-prone if you miss adding one of your super mixin methods where it's needed.
On the other hand, it does make the control process explicit, so you don't get into issues around the system guessing which version of the various mixins executes the test (for example, when starting, listeners are added, then notified; when stopping, listeners are notified, then removed).
Martin
cbeust Tue 8 Mar 2011
I see two problems with this:
1) The runners don't really know about each other, so if one calculates a parameter list for a test method, it might get overridden by another runner.
2) Who's making the invocation?
msl Tue 8 Mar 2011
two problems
Only 2?
I agree - it's way less than ideal - but I wanted to get things working in some way/shape/form first and then look at refactoring it where necessary.
msl Tue 8 Mar 2011
I didn't realise the previous code I had up was in a private repo - sorry about that.
I've just pushed up what I currently have that is in a working state. I have some other components (including the main runner) that are still on their way - but I think there's enough here to at least request feedback and thoughts on.
Go here: https://bitbucket.org/martinlau/fan-test/src
Martin