You could think of it as an alternative API for Selenium WebDriver. It uses Selenium to drive Firefox and Chrome, although the idea might be to add other back-ends in the future. Splinter's API may or may not be better, but Selenium's major advantage is that many test cases can be written with little to no code using Selenium-IDE: QA can generate test cases without programming resources.
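For a feel of the API, here's a minimal sketch of driving a page with Splinter (method names as documented at the time; the URL, form fields and expected text are made up):

    # Minimal sketch of Splinter's API; URL and field names are made up.
    from splinter import Browser

    browser = Browser('firefox')                      # or 'chrome'
    browser.visit('http://localhost:8000/login')
    browser.fill('username', 'alice')                 # fills <input name="username">
    browser.fill('password', 'secret')
    browser.find_by_css('form button').first.click()
    assert browser.is_text_present('Welcome')
    browser.quit()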
Splinter seems to inherit some of Selenium's weaknesses. There's no way to test HTTP status codes or headers, so you can verify that the user sees an error page, but not that the user agent actually receives the error status. You also can't test whether static resources are being properly cached (or deliberately not cached).
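In practice you end up checking those things out of band, next to the browser test. A sketch using the requests library (the URLs and the expected Cache-Control value are hypothetical):

    # Sketch: status codes and caching headers have to be checked outside
    # the browser, since Splinter/Selenium only see the rendered page.
    # URLs and the expected Cache-Control value are hypothetical.
    import requests

    resp = requests.get('http://localhost:8000/no-such-page')
    assert resp.status_code == 404                    # the user agent really got a 404

    css = requests.get('http://localhost:8000/static/app.css')
    assert 'max-age' in css.headers.get('Cache-Control', '')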
It also seems to add new limitations. Selenium has complete support for drag-and-drop, moving the mouse around, etc. I don't see equivalent functionality in Splinter.
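For comparison, this is the kind of thing Selenium's WebDriver Python bindings expose through ActionChains (selectors are made up; newer bindings spell the locator calls differently):

    # Sketch of Selenium's mouse support, which I don't see an equivalent
    # for in Splinter. Selectors are made up.
    from selenium import webdriver
    from selenium.webdriver.common.action_chains import ActionChains

    driver = webdriver.Firefox()
    driver.get('http://localhost:8000/board')

    card = driver.find_element_by_css_selector('#card-1')
    column = driver.find_element_by_css_selector('#done-column')

    actions = ActionChains(driver)
    actions.drag_and_drop(card, column)               # drag one element onto another
    actions.move_by_offset(10, 0)                     # or just move the mouse around
    actions.perform()

    driver.quit()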
Timeouts seem to be handled worse than in Selenium. There's a default timeout of 2 seconds for requests. I guess your test is expected to assert, after each request, that the expected content is there, so a timeout surfaces as a failed assertion. Since partial content can be delivered (on rendering errors, and possibly on a timeout during rendering), I guess you need to check for something like the footer's presence. Timeouts seem like such an exceptional condition that I'd rather just have them throw.
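The pattern that seems to fall out of this, sketched (the URL and footer text are made up, and I'm assuming is_text_present behaves as documented):

    # Sketch of the "check for the footer" pattern the 2-second default
    # seems to push you toward. URL and footer text are made up.
    from splinter import Browser

    browser = Browser('firefox')
    browser.visit('http://localhost:8000/reports/slow-report')

    # If the request timed out, or the page only partially rendered,
    # the footer never shows up and the test fails here.
    assert browser.is_text_present('All rights reserved'), \
        'page looks truncated: probable timeout or rendering error'

    browser.quit()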
Perhaps because of the version of Selenium they're using, the sample sleeps for 10 seconds after doing an asynchronous file upload because it can't wait for the IFRAME to load. This means that the test case can't finish faster than 10 seconds (a problem for large suites) and the test can fail (if the file is large enough or the network is slow enough) even when the application's behavior is correct. Selenium (in trunk, at least) has waitForFrameToLoad/wait_for_frame_to_load, which looks like a better solution.
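Roughly the difference, using the Selenium RC Python client (the frame locator, timeout and upload details are made up):

    # Sketch contrasting the fixed sleep with wait_for_frame_to_load.
    # Frame locator and timeout are made up; the upload itself is elided.
    from selenium import selenium  # Selenium RC Python client

    sel = selenium('localhost', 4444, '*firefox', 'http://localhost:8000/')
    sel.start()
    sel.open('/upload')
    # ... fill in the form and trigger the asynchronous upload here ...

    # What the sample does: always pays the full 10 seconds, and still
    # fails if the upload takes longer than that.
    # time.sleep(10)

    # The alternative: return as soon as the IFRAME receiving the upload
    # result has loaded, with a 30-second ceiling (timeout in ms, as a string).
    sel.wait_for_frame_to_load('upload_result_frame', '30000')

    sel.stop()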
None of these are fatal problems, especially for a young project. A good API can be a very worthwhile thing.