How hard would it be for non-programmers to use the Discourse AI - AI Bot to help them create plugins and/or themes?

NOTE: If this starts a side discussion then it really needs to be moved to a new topic. I am giving you the details because they will help you understand ONE of my needs and give more information that could also help others, so please don’t take it the wrong way if I ask for replies related to this side discussion to be moved.

One of my curiosities is: how hard would it be for non-programmers to use the Discourse AI - AI Bot to help them create plugins and/or themes?

As I have been using ChatGPT for several months now to help me with coding languages like Python, JavaScript, Prolog, regular expressions, PowerShell, Bash, and many more, I knew what to expect, how to get results, and when to just walk away.

In trying to use the Discourse AI - AI Bot to create a very simple plugin, I decided to start with one that I knew existed, worked, and is simple, so I chose Cakeday.

After I looked at the code and such on the page, I was surprised at how much I would need to learn to create it. So I then took this Ruby code

and asked the AI to explain it. Having done similar things with other source code in other languages, I knew what to expect. The result was uninspiring; I needed to ask more questions to understand some of the syntax, methods, function interfaces, etc. Much of it was easy for me to understand (think read) but I knew I could not create (think write) such code, much less know what or how to ask the AI Bot to create it; I don’t know the correct terminology to prompt the AI to generate the correct text, as I know that Ruby on Rails uses terminology that I don’t use elsewhere, e.g. bake, slug.

So I also wanted to see if Python could be used instead, as that is much easier for many to understand and also for ChatGPT to create correct code in.


It’s not Ruby that you would need to learn, but Rails.

That’s a fine curiosity, but if what you want to do is write a Discourse plugin with AI, then the AI plugin doesn’t seem like the place to start. I’d want to work with an AI designed to develop code.

More than that, however, if you want to write a Discourse plugin, choose one that does something similar and change it. Cakeday doesn’t seem like an especially simple plugin, but if what you want is a plugin that puts an indicator by a user’s avatar or does something on a schedule, then it could be a good place to start.

It’s pretty complicated, though; it involves all of the following:

  • adding data to the user serializer, so the front end has access to it
  • running a job on a schedule
  • creating a route that provides new information
  • using a plugin outlet to add information to a page

And that’s just the start.
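For a feel of what the first bullet involves, here is a toy sketch with Discourse’s `add_to_serializer` stubbed out so it runs standalone. Note the real API evaluates the block inside the serializer rather than passing the user and date in explicitly, so treat this purely as an illustration of the idea:

```ruby
# Toy sketch of "adding data to the user serializer": expose a computed
# field (is it this user's cakeday?) so the front end can read it.
# Discourse's real add_to_serializer works differently internally; this
# stub just records the block so the sketch is self-contained.
require "date"

SERIALIZER_FIELDS = {}

def add_to_serializer(serializer, field, &block) # stub of the plugin API
  SERIALIZER_FIELDS[[serializer, field]] = block
end

add_to_serializer(:user, :is_cakeday) do |user, today|
  created = user[:created_at]
  created.month == today.month && created.day == today.day
end

today = Date.new(2024, 6, 1)
user  = { created_at: Date.new(2023, 6, 1) } # joined exactly one year ago
puts SERIALIZER_FIELDS[[:user, :is_cakeday]].call(user, today) # → true
```

The real plugin also has to schedule jobs and touch the front end, which is why even “simple” plugins span several moving parts.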


I expected this, so I will be asking for this side conversation to be moved to a new topic.

I wasn’t going to spell out all of the details, but yes, Rails is in the mix along with the other technologies that Discourse uses that one should know.

Did you specifically mean the Discourse AI - AI Bot here? There are many AIs, and I agree that many of them I would not use for coding, and even some that are meant for coding are not good with every programming language.

The feeling I am getting from the Discourse staff is that in the long run having a Discourse AI Bot (persona, as they are calling them at the moment) is desirable. But as they already know how to create Discourse code, having someone like me give some feedback is helpful. Granted, I know programming, which gives me a head start, but being halfway there also gives me insight into what to expect and what not to expect.

Care to suggest a simpler one?


At the moment, with current AI capabilities, it would be very hard for someone who doesn’t know how to program to come up with a complete working Discourse plugin/theme of reasonable complexity. And I hope we never implied that something like this would be possible, as it would be a frustrating experience.

That said, for someone with junior-level programming knowledge, using either a common LLM, a code-specific one, or something like GitHub Copilot can definitely make the journey easier, handling a lot of the boilerplate for you. Combining it with an existing plugin/theme and starting with small changes sounds like a good idea for someone willing to learn.


Lola’s JavaScript debugging help broke my dev Discourse instance. The token context window gives the impression of a sort of anterograde amnesia.


No, no one at Discourse implied it would be possible; it was a feature request of mine to have the AI Bot help with Discourse programming, since it uses GPT-4, which has some knowledge of programming. I just wanted to see how far I could push it. Personally, if plugins and themes were easier for me to create, I would be creating them when the need arose.

I fully agree with that!


Yeah it is a very tricky problem.

Fitting the entire world into the 6,000 or so words that GPT-4 8k is allowed to know about is a very, very hard problem.

I am borderline on just bumping Lola to use 32k tokens here, but the costs are really, really high and I want to be testing stuff that is closer to what the general public is using.

The current workaround/solution for this problem is function calling: you get GPT-4 to reason about what information it will need, and then a few round trips later it finds the right context. This can involve searching (either using embeddings or just pure keyword search).
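A rough sketch of that round-trip loop, with the model stubbed out (the document titles, the hard-coded "model" turns, and the `keyword_search` helper are all invented for illustration; a real run makes an API call at each step):

```ruby
# Function-calling round trip: the model first asks us to run a search,
# we execute it locally, then feed the results back as context so the
# final answer is grounded in the right documents.

DOCS = {
  "plugin outlets"  => "Plugin outlets let themes inject components into core templates.",
  "user serializer" => "add_to_serializer exposes extra fields to the front end."
}

def keyword_search(query)
  DOCS.select { |title, _| query.downcase.include?(title) }.values
end

def fake_llm(messages)
  # A real model decides this itself; here the two turns are hard-coded.
  if messages.none? { |m| m[:role] == "function" }
    { function_call: { name: "search", query: "how do plugin outlets work" } }
  else
    { content: "Answer grounded in: #{messages.last[:content]}" }
  end
end

messages = [{ role: "user", content: "How do I add UI near an avatar?" }]
loop do
  reply = fake_llm(messages)
  if reply[:content]
    puts reply[:content]
    break
  end
  results = keyword_search(reply[:function_call][:query])
  messages << { role: "function", content: results.join(" ") }
end
```

The point of the pattern is that the full corpus never enters the context window; only the handful of search hits the model asked for do.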

I don’t anticipate being able to solve the “I am not a programmer, make a plugin for me” problem any time soon.

That said, I can see Lola getting better at helping people who have a reasonable foundation in programming - especially stuff like semantically searching through our code base and so on.

We also have access to Anthropic Claude here, which comes with 80 thousand or so words of context, but sadly its performance is much closer to GPT-3.5 than to GPT-4 and it is just very, very hard to steer.

Making progress slowly in this uncharted territory…


I know that is a name we are unofficially using for the Discourse AI - AI Bot. Is your use of the name an official recognition that Lola will be the new norm, or will there be a contest or something? AFAIK @Lilly started using Lola or Lola Bot, so if that becomes the norm, for historical purposes she gets the credit.


You guys can call it whatever you want. GPTbot4 does not roll off the tongue well when I converse with her. If I’m going to have a personal assistant, it’s going to have a name. I think she’s cross at the shade being thrown at our collective programming skills, but I’m having fun and learning from her. She helps me simply by being my sounding board for ideas and also for critical thinking - I do enjoy pointing out when she is wrong. Lola is a great learning companion for me and I like her for building a framework, but I don’t expect her to write me theme components. She forgets about my one setting in my .yaml file by her second or third reply and will build nebulous arrays of new objects.


Personally I would not agree with that.

However your next statement I do agree with

Expanding on what Sam is noting, here is a practical workaround that I use; it even works for other tasks that one might think need large context windows but really do not.

First, for those who do not know the term, the context window refers to how many tokens the LLM can use for the prompt and completion combined. I will not go into more detail on this, but I advise others to read Learn Prompting (Welcome | Learn Prompting: Your Guide to Communicating with AI) to become familiar with the terminology.

Here is a classic question that comes up time and again on LLM sites such as OpenAI.

How do I create a book using ChatGPT when the context window is too small to hold the entire book?

The solution is not to try to get the entire book into one prompt, but to break it up into parts. The next thing users try is prompts to write the first 20 pages, then the next 20, and so on, which also is not very practical. The way to do this is top down, in chapters. First use a high-level prompt that gives a general outline or index of the book with chapter titles, then in the next prompt ask for chapter 1. For the next prompt, make a summary of chapter one and, with that, ask for chapter two. Keep creating a summary of only the information needed for the next chapter when prompting for it. It is a bit more time-consuming but allows one to create larger works with a smaller context window.
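The loop that paragraph describes can be sketched in a few lines, with the model stubbed out (the `llm` and `summarize` functions and the outline are invented placeholders; in a real run both would be API calls):

```ruby
# Chapter-by-chapter drafting: each prompt carries only the outline entry
# plus a summary of the previous chapter, never the whole book, so the
# context window stays small no matter how long the book gets.

def llm(prompt)
  "TEXT[#{prompt[0, 40]}...]" # stub for a real completion call
end

def summarize(chapter)
  chapter[0, 60] # a real run would ask the model for the summary
end

outline = ["1. The setup", "2. The twist", "3. The resolution"]
book    = []
summary = "nothing yet"

outline.each do |entry|
  prompt  = "Previous chapter summary: #{summary}\nWrite chapter: #{entry}"
  chapter = llm(prompt)
  book << chapter
  summary = summarize(chapter) # carry forward only the summary
end

puts book.size # → 3
```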

Now the same can be done when creating software, but instead of breaking the process into a sequence, break it down into a tree of function calls. Ask for the high-level function first, and then start filling in more of the supporting functions as needed. This can also be done from the bottom up if you are really sure of what is needed. For those who create parsers, the familiarity with top-down or bottom-up parsing should be jumping to mind.
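A sketch of that top-down idea: prompt one yields the high-level function with its helpers left as stubs; each later prompt fills in one helper. All the function names here are invented, and the “generated” bodies are hard-coded so the sketch runs:

```ruby
# Top-down decomposition of a program into a tree of function calls.
# Each def below stands for the output of one small prompt.

def report(data)  # prompt 1: high-level function, helpers still stubs
  format_rows(clean(data)).join("\n")
end

def clean(data)   # prompt 2: fill in one supporting function
  data.reject { |row| row[:value].nil? }
end

def format_rows(rows) # prompt 3: fill in the next supporting function
  rows.map { |row| "#{row[:name]}: #{row[:value]}" }
end

data = [{ name: "a", value: 1 }, { name: "b", value: nil }]
puts report(data) # → "a: 1"
```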

Another common programming task is code updates or modifications; again, this can easily be done with a smaller context window if the user gives the function headers instead of the full functions when creating the prompt, and only requests the code for the function needing changing.
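A sketch of that compaction step: pull only the `def` lines out of the source so the prompt carries signatures for context plus the body of just the one target function. The regex here is a simplification that only handles plain Ruby `def` lines, and the sample source is invented:

```ruby
# Build a compact prompt: function headers for context, full body only
# for the function we actually want changed.

SOURCE = <<~RUBY
  def fetch_user(id)
    db.find(:users, id)
  end

  def cakeday?(user)
    user.created_at.yday == Date.today.yday
  end
RUBY

headers = SOURCE.lines.grep(/^\s*def /).map(&:strip)
target  = SOURCE[/def cakeday\?.*?^end/m] # body of the one target function

prompt = <<~PROMPT
  Context (signatures only): #{headers.join("; ")}
  Rewrite this function to handle leap years:
  #{target}
PROMPT

puts headers.inspect # → ["def fetch_user(id)", "def cakeday?(user)"]
```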

A few other things I have learned along the way: only work with one function at a time, and don’t go over 100 lines of code. Doing this with early versions of ChatGPT, which had a relatively small context window, I was able to create some nice code; it even included Prolog, JavaScript, HTML, and JSON in the mix.

While all of this is nice, I am not expecting Discourse to offer a bot for users to create Discourse code anytime in the future.

I have not really tried that yet. As I noted in another post, I have no skills with Ruby or Ruby on Rails and the JavaScript technologies used, so I don’t even know the correct terminology to get good results, but I will keep that in mind as something to try and give feedback on.

That is a plus in my book.


Lola did a great job of helping me debug a JSON schema I am using for one of my theme component updates. I gave her a working example of one, then gave her mine, and she found my incorrect comma and bracket that would have taken me a bit longer to find myself. She is good for catching things I miss or don’t otherwise see in VS Code.
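That kind of comma-and-bracket hunt can also be automated locally; Ruby’s stdlib `JSON` parser raises an error describing the first syntax problem it hits (the schema snippet below is made up for illustration):

```ruby
require "json"

# JSON.parse raises JSON::ParserError on the first syntax error -- handy
# for tracking down a stray comma or bracket before asking an AI.

good = '{"type": "object", "properties": {"name": {"type": "string"}}}'
bad  = '{"type": "object", "properties": {"name": {"type": "string"},}}'

puts JSON.parse(good)["type"] # → object

begin
  JSON.parse(bad) # trailing comma before the closing brace
rescue JSON::ParserError => e
  puts "syntax error: #{e.message[0, 60]}"
end
```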