Why are integration specs written in QUnit?

I’m wondering: what was the reason for writing the integration tests in QUnit when RSpec is available?

The advantages of RSpec would be:

  • Runs the PhantomJS driver directly rather than through a shell command
  • Removes the layer of abstraction on top of QUnit (really a unit testing framework)
  • RSpec has a nice DSL that is easy to use; not much Ruby experience is needed
  • Far less boilerplate code (JavaScript)
  • RSpec can run single tests or groups of tests (via tags, etc.) with options like --fail-fast
  • Headless browser tests run faster
  • Native output to stdout and a correct exit code
  • Support for creating fixtures and transactional tests
  • There are already some integration specs in spec/integration
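To make the DSL point concrete, here is a rough sketch of the shape such a spec takes. The `describe`/`it`/`expect`/`eq` definitions below are tiny stand-ins so the snippet is self-contained without the rspec gem; in a real project they come from RSpec itself, and `latest_topics` is an invented example, not a real helper.

```ruby
# Tiny stand-ins so this sketch runs without the rspec gem installed;
# in a real project `describe`, `it`, `expect` and `eq` come from RSpec.
def describe(_name); yield; end
def it(_name); yield; end
def eq(expected); expected; end
def expect(actual)
  Struct.new(:actual) do
    def to(expected)
      raise "expectation failed" unless actual == expected
    end
  end.new(actual)
end

# A hypothetical spec shape -- `latest_topics` is invented for illustration.
describe "latest topics" do
  it "lists the pinned topic first" do
    latest_topics = ["Welcome to the forum", "Another topic"]
    expect(latest_topics.first).to eq("Welcome to the forum")
  end
end
```

The point is how little ceremony sits between the English description and the assertion.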

The list goes on. I’ve written complex AngularJS applications with RSpec integration tests. It’s pretty simple really.


Do you mean RSpec as in


I am actually a fan of having a set of acceptance (high level) tests that allows us to test the entire stack at the same time.

Note that acceptance tests are not the same as integration tests.

Right now, our acceptance tests for the front end use fixtures to stub the server’s responses, which means we have to update them whenever we change our backend API. Furthermore, I find that acceptance tests for complicated scenarios are really hard to write, since I have to stub every single request made to the server. That introduces a lot of friction when writing acceptance tests.

My ideal scenario would be to have the fixtures created server side, and to simply assert on the behavior of the app as we interact with it via Capybara (for example). One caveat, though: in my experience acceptance tests are usually much slower. Somehow, using Capybara to click around and wait for elements to appear takes quite a bit of time.
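To illustrate the stubbing friction being described — every endpoint the scenario touches has to be stubbed by hand. The sketch below is a deliberately crude stand-in (a plain Hash and an invented `fetch` helper); a real setup would use a stubbing library, but the maintenance burden is the same.

```ruby
# Invented stand-in for a client-side fixture store: every endpoint the UI
# touches during the scenario must be stubbed out by hand. Any backend API
# change means updating these fixtures to match.
STUBBED_RESPONSES = {
  "/categories.json"    => { "categories" => [{ "name" => "bugs" }] },
  "/latest.json"        => { "topics" => [{ "title" => "Welcome" }] },
  "/notifications.json" => { "notifications" => [] },
}

# Hypothetical request helper: anything not stubbed blows up the test.
def fetch(path)
  STUBBED_RESPONSES.fetch(path) { |p| raise "unstubbed request: #{p}" }
end

fetch("/latest.json")         # fine, stubbed
# fetch("/users/admin.json")  # would raise: unstubbed request
```

With server-side fixtures, none of this bookkeeping would exist in the test itself.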

With all that being said, writing acceptance tests for the entire stack will not be possible if we ever decide to migrate to ember-cli.


This is just a test runner. I mean something like this: https://github.com/discourse/discourse/blob/master/spec/integration/category_tag_spec.rb

Also see: http://betterspecs.org/


I am super confused here: there is no “ruby libphantomjs gem”, so any driver is going to have to run the phantomjs binary and pipe commands to it. The only way around this is to avoid the need for a real DOM and use something like mini_racer to run the specs, which is what I do in the pretty-text spec, but that is far more difficult to engineer at the application level.

Feel free to correct me if I am mistaken

The only way to do this would be by adding a different (and probably slower) layer of abstraction on top, because we have to run the app somehow while the specs are running and forward requests to it.

Personally, these days, I really hate rspec and much prefer minitest. I usually prefer qunit tests to rspec. Our bottleneck is friction around running the qunit specs, not the framework itself.

Anyway… in general our focus is on refining our current toolset rather than “flipping the chess set and choosing checkers instead”.

I strongly recommend you try out rake autospec: try editing a spec, see what happens, make it break, make it work, and so on. I would like that type of workflow for QUnit; it would make me happy.


I’m curious to know which tests you classify as integration tests. We have a lot of unit tests for our Ember models, controllers, components and widgets which can’t be written and run in RSpec.


I see the acceptance tests are stubbing out HTTP requests to the Rails app, so it’s not correct for me to call them integration specs, because they don’t test the entire stack. You could, however, write the acceptance tests in RSpec or minitest.
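As a rough sketch of the minitest flavour: the `render_topic_list` helper below is invented for this example; in a real acceptance test the HTML would come from a browser driver like Capybara against the running app, not a local function.

```ruby
require "minitest/autorun"

# Invented helper standing in for "the rendered page"; a real acceptance
# test would drive a browser against the running app instead.
def render_topic_list(topics)
  "<ul>" + topics.map { |t| "<li>#{t}</li>" }.join + "</ul>"
end

class TopicListAcceptanceTest < Minitest::Test
  def test_lists_all_topics
    html = render_topic_list(["Welcome", "FAQ"])
    assert_includes html, "<li>Welcome</li>"
    assert_includes html, "<li>FAQ</li>"
  end
end
```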

Ahh, I’m probably confused because of our different interpretations of the terms being used here. I was under the assumption that you were referring to acceptance tests as integration tests, because it would then be possible to write those in RSpec using Capybara. However, the majority of our JavaScript tests are unit tests, and I don’t see how we can replace QUnit with RSpec.


I realize I may be stepping into a quagmire with regards to terminology, but I like the term “end to end tests” to refer to and disambiguate tests that go through the client and server code together.

I usually use the term “smoke test” for those.

https://github.com/discourse/discourse/blob/b3965eb06994fbe11b5a945efc99b136714b3fe3/spec/phantom_js/smoke_test.js has got to pass for a build to be stamped with test passed.

We have “integration” tests on the server that look at end-to-end server behavior. And “integration” tests in qunit for end-to-end client behavior.

But the “smoke test” is the only test we have that glues everything together and tests a real production instance.
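The shape of that kind of end-to-end check can be sketched with nothing but the Ruby standard library. The tiny TCPServer below stands in for a running instance, and the `/latest` path and page text are invented for illustration; the real smoke test drives PhantomJS against an actual site.

```ruby
require "socket"
require "net/http"

# Stand-in "running instance": a one-shot HTTP server on a random free port.
# In the real smoke test this would be an actual production Discourse.
server = TCPServer.new("127.0.0.1", 0)
port = server.addr[1]
Thread.new do
  client = server.accept
  client.gets # read the request line; ignore the rest for this sketch
  body = "<html><body>Latest Topics</body></html>"
  client.write "HTTP/1.1 200 OK\r\nContent-Length: #{body.bytesize}\r\n\r\n#{body}"
  client.close
end

# The smoke test proper: fetch a page and assert the key content rendered.
response = Net::HTTP.get_response(URI("http://127.0.0.1:#{port}/latest"))
raise "smoke test failed" unless response.code == "200"
raise "smoke test failed" unless response.body.include?("Latest Topics")
```

Pass or fail is signalled purely through the exit status, which is what makes it usable as a build gate.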


Another nice thing about RSpec/Capybara is that these tests are framework agnostic. As long as your application has a DOM (or in the case of ReactJS, a virtual DOM), you can write tests against it. So, theoretically, you can swap out your application architecture and your integration (or end-to-end) specs should pass.

I’ve been criticised in the past for writing RSpec/Capybara tests against AngularJS because that’s not the way the AngularJS community likes to write integration tests. The same goes for EmberJS apps. But that means developers have to learn a new test framework for every different application, and often these frameworks are not as mature as RSpec, Capybara, Cucumber, etc.

I understand this application already has acceptance tests in QUnit, and I’m not criticising the excellent work that’s been put into this project. I just wondered what the reasoning was; possibly I did not fully comprehend how this application works.
