Enrich API Calls of AI Plugin?

Hi folks,

I'm working in an enterprise environment, and we're using Discourse as the discussion board for supporting a cloud platform.
We want to use the Discourse AI plugin for several use cases, and we even have internal AI endpoints which are OpenAI compatible.

The thing is that outgoing requests to these endpoints have to include an authentication header with an OAuth2 token coming from an internal m2m auth endpoint, and that token has to be retrieved upfront, before each call.

I thought of several approaches, like a local proxy on the EC2 instance hosting Discourse, which could enrich the request with that auth information.
Another approach is an API gateway with an authorizer Lambda getting the token.
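
To make the proxy idea concrete, here is a rough Python sketch of what I have in mind. All URLs and credentials below are placeholders for our internal setup, and token caching, expiry handling, and streaming responses are left out for brevity:

```python
# Minimal sketch: a local proxy that fetches an OAuth2 m2m token via the
# client-credentials flow and forwards the request to the internal
# OpenAI-compatible endpoint with the Authorization header added.
from http.server import BaseHTTPRequestHandler, HTTPServer
import requests

AUTH_URL = "https://auth.internal.example/oauth2/token"  # placeholder
UPSTREAM = "https://llm.internal.example"                # placeholder

def fetch_m2m_token() -> str:
    resp = requests.post(
        AUTH_URL,
        data={"grant_type": "client_credentials"},
        auth=("MY_CLIENT_ID", "MY_CLIENT_SECRET"),  # placeholder credentials
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

class EnrichingProxy(BaseHTTPRequestHandler):
    def do_POST(self):
        # read the body Discourse AI sends, e.g. a chat completion request
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        upstream = requests.post(
            UPSTREAM + self.path,
            data=body,
            headers={
                "Content-Type": self.headers.get("Content-Type", "application/json"),
                "Authorization": f"Bearer {fetch_m2m_token()}",  # the enrichment
            },
            timeout=60,
        )
        self.send_response(upstream.status_code)
        self.send_header("Content-Type",
                         upstream.headers.get("Content-Type", "application/json"))
        self.end_headers()
        self.wfile.write(upstream.content)

if __name__ == "__main__":
    # Discourse AI would then be pointed at http://127.0.0.1:8080
    # instead of the internal endpoint directly.
    HTTPServer(("127.0.0.1", 8080), EnrichingProxy).serve_forever()
```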

What I haven't understood so far are the tools you can add within the Discourse AI plugin itself.
Could those be used to achieve what I have in mind?

Many thanks for your support and have a great day!

Cheers,

WS

This is a tough one.

We generally do not like to add too many knobs because it confuses people, but I hear you that this is hard to solve now; we may need another knob.

One option would be to allow the OpenAI-compatible endpoint configuration to have a “custom headers” section.

Tools could not easily solve this, because it would create an incredibly complex workflow and we don’t have the ability to easily pass the tool all the information it needs.

@Falco thoughts?

Moving this to feature, because it is a feature request.

Hey @sam,

thanks for your reply and your thoughts on that.

A field for custom headers wouldn't be enough, because the token has to be retrieved dynamically before each API call.

Maybe rather a kind of pipeline/middleware, where someone can transmute the whole outgoing call with their own code before it gets sent out?
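
Purely as illustration (nothing like this exists in Discourse AI today, and every name here is invented), I'm picturing a hook along these lines:

```python
# Imagined middleware hook: an invented API, just to illustrate the idea.
from dataclasses import dataclass, field

@dataclass
class OutgoingLLMRequest:
    url: str
    headers: dict = field(default_factory=dict)
    body: bytes = b""

def enrich_with_m2m_token(request: OutgoingLLMRequest) -> OutgoingLLMRequest:
    # user-supplied code that runs right before the plugin sends the call;
    # fetch_m2m_token() would be the client-credentials helper from the
    # proxy sketch in my first post
    request.headers["Authorization"] = f"Bearer {fetch_m2m_token()}"
    return request

# imagined registration point in the plugin:
# register_llm_middleware(enrich_with_m2m_token)
```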

Many thanks guys and have a great day!

Cheers,

WS

Yikes, this really is quite advanced.

I guess if custom tools came with enough richness they could accomplish this… it does feel like a bit of a Rube Goldberg machine, but imagine (sketched in pseudocode after the list):

  1. IF a persona is configured such that it:
    1. Forces tool calls
    2. Has a custom tool forced, and that tool has NO params
  2. THEN we invoke no LLM and simply pass control to the tool
  3. THEN we give the tool enough infra to stream results back to the app via inversion of control in some fashion
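
In pseudocode, the imagined dispatch would be something like this (all names invented, nothing here exists in Discourse AI):

```python
# Illustrative pseudocode only; it just restates the numbered flow above.
def handle_persona_request(persona, conversation):
    tool = persona.forced_custom_tool
    if persona.forces_tool_calls and tool is not None and not tool.parameters:
        # skip the LLM entirely and hand control to the tool, which then
        # streams results back to the app itself (inversion of control)
        return tool.invoke_and_stream(conversation)
    # otherwise, the normal LLM round trip
    return call_llm(persona, conversation)
```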

It's a pretty staggering amount of change and would end up being an absolute bear to maintain.

I guess an alternative is for you to define a new custom plugin that depends on Discourse-AI and defines your own Dialect and Endpoint - it is certainly the simplest way to go about this.

It is sooooo much easier to solve this specific need via a lightweight proxy, like Nginx with Lua scripting, that I think @Wurzelseppi will be better served going that route.

Hey guys,

you're awesome for discussing the needs of a small user like me in earnest. I'm always struck by your dedication, and I mean it (no joke :slight_smile: )

Yeah, as the whole thing runs on an EC2 instance, I already decided to go the AWS API Gateway → Lambda → LLM endpoint way …
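
For anyone finding this later: the Lambda in that chain boils down to roughly the following. This assumes an API Gateway proxy integration; the URLs and credentials are placeholder environment variables, and error handling plus token caching across invocations are omitted:

```python
# Sketch of the forwarding Lambda behind API Gateway: fetch the m2m token,
# then relay the chat completion request to the internal OpenAI-compatible
# endpoint with the Authorization header attached.
import json, os, urllib.parse, urllib.request

AUTH_URL = os.environ.get("AUTH_URL", "https://auth.internal.example/oauth2/token")
LLM_URL = os.environ.get("LLM_URL", "https://llm.internal.example/v1/chat/completions")

def fetch_m2m_token() -> str:
    form = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": os.environ["CLIENT_ID"],          # placeholder env vars
        "client_secret": os.environ["CLIENT_SECRET"],
    }).encode()
    req = urllib.request.Request(AUTH_URL, data=form, method="POST")
    req.add_header("Content-Type", "application/x-www-form-urlencoded")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["access_token"]

def handler(event, context):
    # with a proxy integration, API Gateway delivers the request body
    # in event["body"]
    req = urllib.request.Request(
        LLM_URL,
        data=event["body"].encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {fetch_m2m_token()}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return {"statusCode": resp.status, "body": resp.read().decode()}
```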

Having it built into Discourse would be cooler, but I understand the complexity that would bring, of course …

Thanks a lot for your time, and for the time you spend helping all the users here!

I couldn't think of a better board software, not only because of the maturity of the software, but also because of the people who lend support.

Great week, guys … and stay as you are!

Cheers,

WS