I think what you've done is cool, but let's be clear that this is not really a testing tool per se. It's just an interface that is compatible with several browser automation tools.
Cucumber, FitNesse, and robotframework are testing frameworks. xUnit frameworks can act as one. If you look at Robot's architecture (http://robotframework.googlecode.com/hg/doc/userguide/RobotF...) what you've written would occupy the "Test Tools" tier.
A nitpick: 'acceptance testing' really refers to the purpose of the test, whether it's automated or manual. Splinter is a test tool; it would work equally well for regression testing, smoke testing, etc.
tl;dr Great library, thanks for helping the Pythonistas keep up with the rubyists, can't wait to try it.
---
I've been developing a DSL for acceptance testing (not unlike Cucumber, but more Pythonic and with a stronger bias toward web acceptance testing) while concurrently implementing a test suite for our startup's web application.
A tool like this has been seriously lacking within the Python community and I'd like to take the time out to say thanks!
I'm probably going to stick with my own Selenium + mechanize scripts for the first version of this suite, but I'm definitely going to check this library out as we move on. I'm sick of writing multiple tests for the same functionality.
As for the people complaining that this isn't a Cucumber replacement... you're right! However, the point of this library isn't to replace a framework like Cucumber, but rather to enhance it. The idea is that if you write your Cucumber/whatever tests using this library, you won't have to repeat yourself across the various kinds of test run you might want (e.g. a mechanize run is much faster than a heavyweight Selenium run).
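The write-once idea can be sketched like this. The `check_login_page` body is modeled on Splinter's `browser.visit()`/`is_text_present()` calls, but `FakeDriver` is a hypothetical stand-in so the snippet runs without any real browser back-end:

```python
def check_login_page(browser):
    """One test body, reusable across back-ends (mechanize, Selenium, ...)."""
    browser.visit("http://example.com/login")
    assert browser.is_text_present("Sign in")

class FakeDriver:
    """Minimal stand-in for a real driver, so the sketch needs no browser."""
    def __init__(self, name):
        self.name = name
        self.url = None

    def visit(self, url):
        # a real driver would actually load the page here
        self.url = url

    def is_text_present(self, text):
        # a real driver would inspect the rendered page
        return True

# The same test body runs against whichever back-end you pick:
for backend in ("mechanize", "selenium-firefox"):
    check_login_page(FakeDriver(backend))
print("login check passed on both back-ends")
```

The point is that the test body only talks to the shared interface, so swapping a fast headless run for a full browser run is a one-line change at setup time.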
You could think of it as an alternative API for Selenium WebDriver. It uses Selenium to drive Firefox and Chrome, though the idea seems to be to add other back-ends in the future. Splinter's API may or may not be better, but Selenium's major advantage is that many test cases can be written with little to no code using Selenium IDE. QA can generate test cases without programming resources.
Splinter seems to inherit some of Selenium's weaknesses. There's no way to test HTTP status codes and headers, so you can verify that the user sees the error but you can't verify that the user agent sees it. You can't test that static resources are being properly cached (or not).
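Since the browser layer can't see status codes or caching headers, one workaround (a sketch, not part of Splinter) is to pair the browser tests with a plain HTTP client. The tiny stdlib server below is a stand-in application so the snippet is self-contained; the paths and header values are illustrative:

```python
import http.server
import threading
import urllib.error
import urllib.request

class Handler(http.server.BaseHTTPRequestHandler):
    """Stand-in app: serves cached 200s everywhere except /missing."""
    def do_GET(self):
        if self.path == "/missing":
            self.send_response(404)
        else:
            self.send_response(200)
            self.send_header("Cache-Control", "max-age=3600")
        self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = "http://127.0.0.1:%d" % server.server_port

# Status codes and caching headers, invisible at the browser-automation
# layer, are easy to assert with a plain HTTP client:
resp = urllib.request.urlopen(base + "/static/app.css")
assert resp.status == 200
assert resp.headers["Cache-Control"] == "max-age=3600"

try:
    urllib.request.urlopen(base + "/missing")
    raise AssertionError("expected a 404")
except urllib.error.HTTPError as e:
    assert e.code == 404

server.shutdown()
print("ok")
```

In a real suite you'd point the client at the same server the browser tests hit, keeping the protocol-level checks next to the user-visible ones.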
It also seems to add new limitations. Selenium has complete support for drag-and-drop, moving the mouse around, etc. I don't see equivalent functionality in Splinter.
Timeouts seem to be handled worse than in Selenium. There's a default timeout of 2 seconds for requests. I guess your test is expected to assert that the content is there after each request, so that it fails on timeout. Since partial content can be delivered (on rendering errors, and possibly on timeout during rendering), I guess you need to check for the footer's presence. Timeouts seem like such an exceptional condition that I'd just throw an exception.
Perhaps because of the version of Selenium they're using, the sample sleeps for 10 seconds after doing an asynchronous file upload because it can't wait for the IFRAME to load. This means that the test case can't finish faster than 10 seconds (a problem for large suites) and the test can fail (if the file is large enough or the network is slow enough) even when the application's behavior is correct. Selenium (in trunk, at least) has waitForFrameToLoad/wait_for_frame_to_load that looks like a better solution.
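Instead of a fixed 10-second sleep, a small polling helper bounds the wait without slowing down the fast case. This is a generic sketch in plain Python, not Splinter's API, and the browser call in the usage comment is hypothetical:

```python
import time

def wait_until(condition, timeout=10.0, poll=0.25):
    """Poll `condition` until it returns True or `timeout` elapses.

    Returns True as soon as the condition holds, False on timeout.
    Unlike a fixed sleep, a fast upload lets the test finish early,
    while a slow-but-correct one still passes within the deadline."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll)
    return condition()  # one last check at the deadline

# Usage sketch: replace `time.sleep(10)` with something like
#     assert wait_until(lambda: browser.is_text_present("Upload complete"))
# where `browser` is the suite's browser object (hypothetical snippet).

start = time.monotonic()
assert wait_until(lambda: True)          # condition already true:
assert time.monotonic() - start < 1.0    # returns immediately, no fixed wait
print("ok")
```

This is essentially what Selenium's waitForFrameToLoad does internally: poll for the ready condition with an upper bound, rather than always paying the worst-case delay.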
None of these are fatal problems, especially for a young project. A good API can be a very worthwhile thing.
It's meant to provide a compatibility layer between various browser automation tools like Selenium, Windmill, etc.
To the poster: it's a neat idea, but I know of very few people who write tests with more than one of those tools. And if that's the case, what you've written is just another layer of complexity. It's a nice fluent interface, but each testing tool has enough eccentricities that I don't think it would be worth the tradeoff.
Is there a way to integrate this with a development version of Django? I.e., Django's test Client() is really useful for testing the backend, but it would be great to also use this library to test the front-end. I mean, it's a bit "late" to test the front-end if it's already live, no?
So that means you can only test read-only features of your site then? I can't imagine you want dummy test data floating around your production servers.
I don't know if this really qualifies as an acceptance test, it's more like an advanced version of Pingdom.
It really depends on how your application is structured. If only the test sees the test data, you can easily do it on the live site. If there were a hundred test accounts in Gmail sending mail back and forth to and from other services, you would never know.