Does anybody have a QA checklist they’re willing to share that they use when testing a new release? Our QA team created their own unofficial QA checklist back when we used vBulletin, adding to it as they tested or when a new plugin was installed, but Discourse does many things differently. Rather than start all over with a brand new list, perhaps you all can make this task easier for us. Crossing my fingers.
I know you are being a bit sarcastic here, and that’s OK; it was indeed a short answer. But I actually believe those three actions are the most important and essential ones to test, beyond “does the site actually load in a browser”.
Haha, I should have added a smiley face to my reply to lessen the sarcasm. Feedback much appreciated. We run some user account integration with WordPress and custom user management in general, so our QA has to cover some non-standard processes around user registration/verification and user account status changes (being assigned a mod/admin role, deletion as a spammer, anonymizing an account, impersonation, etc.).
You could add more monitoring checks beyond, e.g., checking whether the site is up and whether its content contains a specific string (I do that with Icinga on monitoring-portal.org).
If you enable the REST API, for example, there are many things you can test and check, even insights into the currently running application. Since Discourse runs in Docker, there’s a variety of APIs and checks available to go further with Redis, PostgreSQL, Nginx, Ruby on Rails, and so on.
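The basic up-and-content check could be sketched like this. It's a minimal example of the kind of probe an Icinga plugin performs; the URL and marker string are placeholders, and a real deployment would substitute its own values:

```python
# Minimal uptime + content check, similar to what a monitoring plugin does:
# fetch the page, verify the HTTP status, and look for an expected string.
import urllib.request


def body_contains(body: str, marker: str) -> bool:
    """Return True if the fetched page body contains the expected marker."""
    return marker in body


def check_site(url: str, marker: str, timeout: float = 10.0) -> bool:
    """Fetch the site and verify both the HTTP status and a content string."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            if resp.status != 200:
                return False
            body = resp.read().decode("utf-8", errors="replace")
            return body_contains(body, marker)
    except OSError:
        # Network errors count as "site down" for monitoring purposes.
        return False
```

A monitoring system would call `check_site("https://forum.example.com", "Latest topics")` on a schedule and alert when it returns `False` (both arguments here are hypothetical).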
You may also go the route of full application and UX monitoring with end-to-end tests. There are frameworks around like CasperJS, though I’m not sure whether that works with Discourse. It’s probably better to automate such things via the API.
Still, you could go the route of triggering events on the live site and expecting something specific in return.
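A trigger-and-expect check could look something like this: request one of the JSON endpoints Discourse serves alongside its HTML pages (such as `/latest.json`) and assert on the shape of the response rather than just "the page loaded". The expected keys below reflect the usual structure of that response, but treat the specifics as assumptions to verify against your own instance:

```python
# Sketch of an API-level health check: fetch a JSON endpoint and validate
# the structure of the payload, not merely the HTTP status.
import json
import urllib.request


def validate_latest(payload: dict) -> bool:
    """A healthy /latest.json response contains a non-empty topic list."""
    topics = payload.get("topic_list", {}).get("topics", [])
    return isinstance(topics, list) and len(topics) > 0


def check_latest(base_url: str, timeout: float = 10.0) -> bool:
    """Return True if the latest-topics endpoint responds with usable data."""
    try:
        with urllib.request.urlopen(f"{base_url}/latest.json",
                                    timeout=timeout) as resp:
            return resp.status == 200 and validate_latest(json.load(resp))
    except (OSError, ValueError):
        # Treat network failures and malformed JSON alike as a failed check.
        return False
```

Splitting the validation into its own function keeps the structural assertion testable without hitting the network, which is handy when this runs inside a monitoring agent.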
I don’t do that yet, but it is on my list to increase monitoring so I always know whether the site is fully operational or whether I have hit a bug or a regression.
That’s a good point to make. I should probably elaborate on why SP has its own testing document, then. We have one because we’re one of the very few instances that run off a fork, so merge issues can occur, and we don’t necessarily run all the tests to ensure functionality still passes after the merge is done.
So let that be a warning not to use a fork: it adds complexity and requires additional resources to ensure everything keeps functioning as it should.