I was an early adopter of RSpec, have been using it for years now, and I've come to think it's not worth it. More generally, mimicking natural language with a programming-language DSL seems like a very bad idea. It seems nice that the tests read like a specification in English, but it's a one-way correspondence: you start to write things that look like they should work because they resemble an English sentence, but they don't really work because of the limitations of the programming language's semantics; worse, they can fail silently or behave in ways other than expected. And this is after the RSpec folks inserted all this rocket science to make the DSL as elaborate as possible, often introducing tricky bugs in the process, because the code needed to accomplish it is so complex.
Since there are no tests for tests, tests should be written in the simplest possible way. The principal goals of a test framework should be simplicity, reproducibility, and reliability. In the end, non-programmers don't read the "specs" anyway, unless you're living in the fantasy world of the TDD gurus.
The biggest thing from RSpec I've missed since switching from Ruby is the describe/it blocks. I felt like they helped you, as you were writing tests, compose sentences that, in the end, should be true about the system under test. They let you abstract your thinking to a level that was more conducive to understanding the essence of the code/system. I'll agree with you on all the should/be stuff; TestUnit assertions work just fine.
I can't completely agree with either of you regarding assertions.
There are always cases where a test needs to check several properties of an object at once (validation of a Rails model attribute, for instance, when you want to test both that the model is invalid and that the right error message is present), or where the setup needed to prepare the assertions is quite heavy. I think this is where custom assertions can be quite helpful: they give you more meaningful tests, and the reported failures can be better documented than with plain asserts.
The alternative is to write simpler assertions for each of these checks, and to repeat that for every field of every model. In the end that's a lot of code duplication, which may also lead to errors (tests have to be refactored as much as the code under test).
"it's a one-way correspondence: you start to write things that look like they should work because they resemble an English sentence, but they don't really work because of the limitations of the programming language's semantics"
This is why I can't stand AppleScript. It sort of looks like English prose, but you still have to figure out precisely what syntax is valid.
So I get that I can do the following with Spectacular
itsInstance 'length', -> should equal 0
Instead of what I can currently do with Mocha+Chai.js
item.length.should.equal 0
But besides the different syntax (which doesn't matter to me) what does this framework offer me that Mocha+Chai.js doesn't?
Btw, the above is an honest, genuine question. I'm not questioning that the framework has something to offer; I just can't figure out what it is from reading the website (the DOM features don't mean anything to me, since I'm currently writing tests for a library).
I must say that I'm also a user and lover of Jasmine; all my previous libs were tested using Jasmine. However, I was missing some features from RSpec (Jasmine, like Mocha, takes a big part of its syntax from RSpec), such as implicit subjects, let blocks, self-describing matchers, etc.
If I were to list some of the advantages of Spectacular over Jasmine, I'd say there are:
- native Node.js support: Jasmine was primarily intended for browsers, and the jasmine-node module isn't guaranteed to use the latest Jasmine version.
- subject, auto-subject describe, and implicit subject in tests. This is quite handy, and when combined with CoffeeScript syntax it leads to very readable tests.
- first-class async support: asynchronous tests aren't just an edge case, but are at the root of the framework. Even matchers can be asynchronous, allowing you to write matchers such as shoulda's have_db_column matcher, which, in a JavaScript context, would have to rely on asynchronous APIs.
- built-in factories in the FactoryGirl fashion.
- test randomization
- test dependencies
- out-of-the-box PhantomJS and SlimerJS support
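To give a rough idea of the asynchronous-matcher point above, a matcher can simply resolve to a boolean. This is a sketch of the concept, not Spectacular's actual API, and `queryColumns` is a hypothetical async DB call:

```javascript
// An asynchronous matcher in the have_db_column spirit: it returns a
// promise of a boolean instead of a boolean, so the framework just
// awaits it before deciding pass/fail.
function haveDbColumn(table, column, queryColumns) {
  return queryColumns(table).then((columns) => columns.includes(column));
}

// Fake async "database" standing in for a real driver:
const queryColumns = (table) =>
  Promise.resolve(table === 'users' ? ['id', 'email'] : []);

haveDbColumn('users', 'email', queryColumns).then((ok) => {
  // ok is true for this fake schema
});
```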
As a last word, I want to say that at the beginning of this project the idea was to see how I could build a BDD/TDD framework using BDD/TDD, and, as an RSpec user, I wanted the same kind of feeling I get when writing RSpec tests: simplicity, readability, reusability, etc. In the end it grew into something that could benefit others, so I pushed the development further in order to provide a full and robust framework.
I like the async test handling better than Jasmine. It's the first thing I look for in testing frameworks nowadays.
I'm working on a simple lightweight framework at the moment that's designed to run on browsers in target environments such as VMs or cloud browsers, and a lot of my code deals with async issues.
Moving to a promise-based system is definitely something I'm considering investigating.
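The core of the promise-based convention is small enough to sketch: a test passes when the promise returned by its body resolves, and fails when it rejects (this is the convention Mocha and others adopted for promise-returning tests; the runner below is just an illustration, not any real framework's code):

```javascript
// Minimal promise-based test runner sketch: wrap the body in a resolved
// promise so sync throws and async rejections are handled uniformly.
function runTest(name, body) {
  return Promise.resolve()
    .then(body)
    .then(
      () => ({ name, passed: true }),
      () => ({ name, passed: false })
    );
}

runTest('async work succeeds', () => Promise.resolve(42))
  .then((result) => {
    // result.passed is true
  });
```

The nice property is that there are no done() callbacks to forget, and a forgotten `return` is the only remaining footgun.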
One pet peeve I have with all of these test frameworks, though, is the "kitchen sink" approach: there is always a substantial list of features. This led me to roll my own.
BUT: does the "Download" button behave weirdly for anyone else as well? It's quite slow, not really responsive, and it pushes the content below the button downwards/upwards depending on state (which might be part of it being slow).
[Safari on OS X]
Seems like the author combined the spec side of Mocha with should.js and a stub/mocking library, focused it on Node, and polished it up with some neat conventions and utilities.
I'll probably give it a shot on my next project. I'm interested in finding out whether it went the route of throwing exceptions for error reporting, like Mocha, or something else.
I didn't follow that approach; matchers (as in RSpec) return a boolean (or a promise of a boolean value). Errors raised in a matcher will be caught and, as in RSpec, will flag the test as errored (not as failed).
I've always found it more useful to be able to distinguish something going wrong in your test setup from an actual failure.
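The distinction looks roughly like this in code (just a sketch of the concept, not Spectacular's internals): a boolean-returning matcher that yields false means the test *failed*, while an exception thrown inside the matcher means it *errored*, i.e. the setup is broken rather than the result being wrong.

```javascript
// passed / failed / errored, decided from the matcher's behavior:
// - returns true  -> 'passed'
// - returns false -> 'failed'  (a genuine wrong result)
// - throws        -> 'errored' (broken matcher or setup)
function runMatcher(matcher, actual) {
  try {
    return matcher(actual) ? 'passed' : 'failed';
  } catch (e) {
    return 'errored';
  }
}

const equalZero = (n) => n === 0;
const brokenMatcher = () => { throw new Error('bad setup'); };

runMatcher(equalZero, 0);      // 'passed'
runMatcher(equalZero, 1);      // 'failed'
runMatcher(brokenMatcher, 0);  // 'errored'
```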
I'm really not a fan of having the options on window.options. Not only does that pollute the window object with single-use options, it also isn't namespaced at all. At the very least it should be window.something.options.