Experiments with Model-Based Testing

Hi Discourse Community! :slight_smile:

I’ve been working on tech (Colimit) that helps people apply model-based testing to their web/browser/DOM frontend, API backend, or even ORM models or view models. If you’re not familiar with the idea: you define a high-level spec of what your app is supposed to do, in terms of the actions a user can take, and from that spec you get hundreds or thousands of valid action sequences (i.e. user flows). These sequences can intelligently crawl a much larger state space than hand-written test cases and uncover hard-to-reach bugs.
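
To make that concrete, here’s a minimal sketch of what such a spec can look like, written with Python’s Hypothesis stateful testing purely for illustration (this is not Colimit’s syntax, and the toy `Forum` class is a hypothetical stand-in for the system under test):

```python
# A minimal sketch of the model-based testing idea using Hypothesis's
# stateful testing (illustrative only; this is NOT Colimit's syntax).
from hypothesis import strategies as st
from hypothesis.stateful import Bundle, RuleBasedStateMachine, invariant, rule


class Forum:
    """Toy stand-in for a hypothetical system under test."""

    def __init__(self):
        self.topics = {}  # topic id -> list of post bodies
        self._next_id = 0

    def create_topic(self, title):
        self._next_id += 1
        self.topics[self._next_id] = [title]
        return self._next_id

    def reply(self, topic_id, body):
        self.topics[topic_id].append(body)


class ForumMachine(RuleBasedStateMachine):
    """The spec: which actions exist and what state they should produce."""

    topics = Bundle("topics")

    def __init__(self):
        super().__init__()
        self.real = Forum()
        self.expected_posts = {}  # topic id -> expected post count

    @rule(target=topics, title=st.text(min_size=1))
    def create_topic(self, title):
        tid = self.real.create_topic(title)
        self.expected_posts[tid] = 1  # the opening post
        return tid

    @rule(tid=topics, body=st.text(min_size=1))
    def reply(self, tid, body):
        self.real.reply(tid, body)
        self.expected_posts[tid] += 1

    @invariant()
    def post_counts_agree(self):
        # After every step, the system must agree with the model.
        for tid, count in self.expected_posts.items():
            assert len(self.real.topics[tid]) == count


# Hypothesis generates and shrinks whole sequences of these actions:
TestForum = ForumMachine.TestCase
```

Running `TestForum` under pytest has Hypothesis generate sequences like create_topic → reply → reply, check the invariant after every step, and shrink any failing sequence down to a minimal reproduction.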

I’m coming out of the phase of trying Colimit on small examples, and Discourse struck me as a great candidate to try next: it’s a sophisticated app that runs in production, yet it’s fully open source, which makes it easy to tinker with. The cool thing is that Colimit is written in an abstracted way, so you can reuse the same specification/model (by writing adapters) for different styles of testing: e.g. the Discourse API tests via the gem, versus the integration tests using Capybara, versus the smoke tests using Puppeteer, etc.
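
To sketch what I mean by adapters (the names here are hypothetical, not Colimit’s real interface), the same abstract action stream can be dispatched to different drivers:

```python
# Hypothetical sketch of the adapter idea (illustrative names only,
# not Colimit's real interface): the model emits abstract actions,
# and each adapter interprets them against a different surface.
from abc import ABC, abstractmethod


class Adapter(ABC):
    """Turns an abstract model action into a concrete interaction."""

    @abstractmethod
    def create_topic(self, title: str) -> None: ...

    @abstractmethod
    def reply(self, topic_id: int, body: str) -> None: ...


class ApiAdapter(Adapter):
    """Would drive the app through its HTTP API."""

    def create_topic(self, title):
        ...  # e.g. POST the title to the topic-creation endpoint

    def reply(self, topic_id, body):
        ...  # e.g. POST the body to the reply endpoint


class BrowserAdapter(Adapter):
    """Would drive the app through a real browser session."""

    def create_topic(self, title):
        ...  # e.g. click "New Topic", type the title, submit

    def reply(self, topic_id, body):
        ...  # e.g. open the topic, click "Reply", type, submit


def replay(actions, adapter: Adapter):
    """Replay one generated action sequence through any adapter."""
    for name, args in actions:
        getattr(adapter, name)(*args)
```

The point is that the generated sequences are just data, so one model can feed an API-level run, a Capybara-style browser run, or anything else you write an adapter for.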

Before getting started, I was wondering: can anyone from the Discourse community think of areas of the app that are currently major sources of bugs, or that have less test coverage, and would therefore be the most valuable places to spend time on?
Also, just out of curiosity, I’d appreciate replies from anyone who thinks this sounds interesting and sees potential value in it for testing the Discourse project, or who thinks the current state of testing is already good enough (in which case there would still be at least some maintenance value in replacing manually written tests with ones that could be generated automatically from a model).

Thanks!
