How to deploy an SSO bridge alongside a discourse_docker deployment?

I realized I wanted to develop and deploy a Discourse SSO endpoint that wraps authentication to an OIDC provider. The endpoint is now developed as a Flask application, available as the PyPI package discourse-sso-oidc-bridge-consideratio. Now I would like to deploy it, preferably alongside my discourse_docker deployment.

Question 1 - Does it make sense to integrate the bridge deployment with my discourse deployment?

I have deployed Discourse following the tutorial. While I could deploy this Discourse SSO OIDC Bridge in a standalone manner somewhere, it would be great if it could integrate with my Discourse deployment instead. But I need to understand a lot about this deployment to conclude whether deploying the bridge alongside it makes sense, and I also have a lot to learn about how to accomplish it. Perhaps I could get some help understanding the various parts?

Starting out, does it make sense to attempt this?

Question 2 - What requirements does an integrated bridge/discourse deployment put on the Bridge Flask application?

It would be great to find out early on whether an integrated deployment imposes certain requirements on the Flask application.

About the Bridge Flask application

Endpoints declared

  • / - A redirect to Discourse
  • /sso/login
  • /sso/auth
  • /logout

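To make the route layout above concrete, here is a minimal Flask sketch of the same four endpoints. This is a hypothetical stub for illustration, not the actual code of discourse-sso-oidc-bridge-consideratio; the Discourse URL and the response bodies are assumptions.

```python
# Hypothetical sketch of the bridge's route layout; the real PyPI
# package (discourse-sso-oidc-bridge-consideratio) implements these
# routes with actual SSO payload signing and OIDC flows.
from flask import Flask, redirect

app = Flask(__name__)

# Assumption: the forum's public URL, for illustration only.
DISCOURSE_URL = "https://discourse.example.com"

@app.route("/")
def index():
    # "/" simply redirects visitors to the Discourse site.
    return redirect(DISCOURSE_URL)

@app.route("/sso/login")
def sso_login():
    # Discourse sends its signed SSO payload here; the bridge would
    # validate it and send the user on to the OIDC provider.
    return "stub: redirect to OIDC provider"

@app.route("/sso/auth")
def sso_auth():
    # OIDC callback: complete the login, build a signed SSO response,
    # and redirect the user back to Discourse.
    return "stub: redirect back to Discourse"

@app.route("/logout")
def logout():
    # Clear the bridge's session state.
    return "stub: session cleared"
```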
Reference stuff

Question 3 - So, I should add a pups .yaml template that I can consume?

It seems like I ought to make a template alongside those found in discourse/discourse_docker's templates folder. Does this make sense?
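For context on what such a template is: the files under templates/ are YAML files whose run: section lists pups commands (exec, replace, file, …) that launcher merges into the container bootstrap. A minimal hypothetical skeleton in that style might look like this (the file name and command are placeholders, not a real template):

```yaml
# templates/sso-bridge.template.yml -- hypothetical name, not shipped
# with discourse_docker. Each run entry executes in order at bootstrap.
run:
  - exec:
      cmd:
        - echo "SSO bridge template ran"
```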

Question 4 - How to augment the nginx configuration with additional locations?

I assume I also need to configure the nginx used by my Discourse to direct traffic to the SSO OIDC bridge endpoints such as /sso/login, /sso/auth, and /logout. I have seen various replace commands used in the .yaml pups templates to modify the nginx configuration, but I have not seen a template add a new location block, which may be trickier.

Also, perhaps this location should be added very early in the sequence of modifications to the nginx config, since other steps may refine it? But the key question is how to add the location entry to the nginx config at all.
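In principle the same replace mechanism could inject a new location block by anchoring on text that already exists in the shipped nginx config and re-emitting it with the new block prepended. A hypothetical sketch, where the file path, anchor regex, and upstream address are all assumptions that would need verifying against the actual shipped config:

```yaml
run:
  - replace:
      filename: "/etc/nginx/conf.d/discourse.conf"
      # Assumption: anchor on an existing location block and prepend
      # ours, then re-emit the anchor text so the config stays intact.
      from: /location @discourse {/
      to: |
        location /sso/login {
          # Assumption: the bridge is reachable from the container
          # at this address and port.
          proxy_pass http://172.17.0.1:8080;
          proxy_set_header Host $host;
        }
        location @discourse {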

If you are running a bridge I would strongly recommend you keep it as simple as possible and just run it in a cheap dedicated droplet in its own container.

If you want to run it with Discourse, the sanest thing to do here is simply to improve the official OIDC plugin to add the group feature rather than go down this path.


Thank you for the directions @sam! I’ll focus on making a Dockerfile that builds an image exposing the bridge, meant to run in a standalone manner.

Wieeee, I got it to function as a bridge hosted standalone from the Discourse stuff:

I deployed it using Kubernetes and a Helm chart that I made.


@sam do you see a reasonable way for this Python-based Dockerized bridge to be integrated on the same VM where a discourse_docker deployment resides?

It is my understanding that for the bridge to be a plugin it must be written in Ruby, and that the .yaml files in /templates of discourse_docker are snippets to build one image rather than templates of other images to deploy alongside the primary discourse image.

Hmmm, perhaps someone has demonstrated how one could run an additional docker container on the VM? I’m thinking that we could optionally start up the Dockerized bridge I made alongside the discourse container, and add some nginx rules using a .yaml file within /templates to reach the additional container, or something like that?
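For what it's worth, starting the extra container is plain Docker; something like the following hypothetical invocation (the image name, port, and localhost binding are assumptions), with the bind to 127.0.0.1 keeping the bridge off the public interface so only a reverse proxy on the host can reach it:

```shell
# Hypothetical image name and port -- adjust to the actual build.
# Binding to 127.0.0.1 keeps the bridge reachable only from the host.
docker run -d \
  --name sso-bridge \
  --restart unless-stopped \
  -p 127.0.0.1:8080:8080 \
  consideratio/discourse-sso-oidc-bridge:latest
```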


I can think of many ways, but all of them are way more complex than just sending a PR to the OIDC plugin to add the missing features :blush:

You can run HAProxy on the host and terminate SSL there, then have one container for your web and one for your bridge. With zero knowledge this would probably take you 4-12 hours to swing. My question is … is it worth the $60 a year savings at that point?
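If it helps anyone reading along, the HAProxy setup described above might look roughly like this on the host. This is a hypothetical sketch: the ports, certificate path, and path-based ACL are all assumptions.

```
global
    maxconn 256

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend https-in
    # Terminate TLS on the host; assumption: combined cert+key PEM here.
    bind *:443 ssl crt /etc/haproxy/certs/site.pem
    # Send the bridge's paths to its container, everything else to Discourse.
    acl is_sso path_beg /sso/ /logout
    use_backend bridge if is_sso
    default_backend discourse

backend discourse
    # Assumption: Discourse container published on localhost:8080.
    server web 127.0.0.1:8080

backend bridge
    # Assumption: bridge container published on localhost:8081.
    server sso 127.0.0.1:8081
```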


Ah thanks for the input @sam!

One great aspect of open source projects, to me, is that what I do can benefit many more people than just me, and that is what motivated me to come up with a smoother solution than spinning up a secondary VM or similar to host the bridge.