How hard would it be for a non-programmer to create plugins or themes using Discourse AI - AI Bot?

Personally, I would not agree with that.

However, I do agree with your next statement.

Expanding on what Sam is noting, here is a practical workaround that I use. It even works for other tasks that one might think need a large context window but really do not.

First, for those who do not know the term, the context window refers to how many tokens the LLM can use for the prompt and the completion combined. I will not go into more detail on this but advise others to read Learn Prompting (Prompt Engineering Guide) to become familiar with the terminology.

Here is a classic question that comes up time and again on LLM sites such as the OpenAI forum.

How do I create a book using ChatGPT when the context window is too small to hold the entire book?

The solution is not to try to fit the entire book into one prompt, but to break the work up into parts. The next thing users try is to prompt for the first 20 pages, then the next 20, and so on, which is also not very practical. The way to do this is top down, chapter by chapter. First use a high-level prompt that produces a general outline of the book, or an index with chapter titles, then in the next prompt ask for chapter 1. For the prompt after that, make a summary of chapter one and include it when asking for chapter two. Keep creating a summary of only the information the next chapter needs each time you prompt for that chapter. It is a bit more time consuming, but it allows one to create larger works with a smaller context window.
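To make that concrete, here is a rough Python sketch of the loop. ask_llm() is a hypothetical stand-in for whatever chat API you actually call, so treat this as an illustration of the idea rather than working tooling:

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical wrapper around whatever chat completion API you use."""
    raise NotImplementedError("plug in your chat API call here")


def write_book(topic: str, num_chapters: int) -> list[str]:
    # One prompt for the high-level outline with chapter titles.
    outline = ask_llm(f"Write a chapter-by-chapter outline for a book about {topic}.")

    chapters: list[str] = []
    summary_so_far = ""
    for n in range(1, num_chapters + 1):
        # Each chapter prompt carries only the outline and a rolling summary
        # of the earlier chapters, not their full text.
        chapter = ask_llm(
            f"Outline:\n{outline}\n\n"
            f"Summary of the chapters written so far:\n{summary_so_far}\n\n"
            f"Write chapter {n} in full."
        )
        chapters.append(chapter)

        # Compress the new chapter into a short summary so the next prompt
        # stays well inside the context window.
        summary_so_far = ask_llm(
            f"Previous summary:\n{summary_so_far}\n\nNew chapter:\n{chapter}\n\n"
            "Update the summary, keeping only what the next chapter needs."
        )
    return chapters
```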

Now the same can be done when creating software, but instead of breaking the process into a sequence, break it down into a tree of function calls. Ask for the high-level function first, and then start filling in more of the supporting functions as needed. This can also be done from the bottom up if you are really sure of what is needed. For those who write parsers, the familiarity with top-down or bottom-up parsers should be jumping to mind.
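In code form, the same top-down idea might look something like the sketch below, again with the hypothetical ask_llm() stub standing in for a real API call:

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical wrapper around whatever chat completion API you use."""
    raise NotImplementedError("plug in your chat API call here")


def generate_program(task: str) -> dict[str, str]:
    code: dict[str, str] = {}

    # Ask for the top-level function first, with its helpers left as named
    # stubs (signature plus a one-line note on what each should do).
    code["top_level"] = ask_llm(
        f"Write the top-level function for: {task}. "
        "Call helper functions where sensible, but leave them as stubs."
    )

    # Ask which stubs still need to be filled in, one name per line.
    todo = ask_llm(
        "List, one per line, the helper functions left as stubs in:\n"
        f"{code['top_level']}"
    ).splitlines()

    # Fill in one helper per prompt; each prompt only needs the top-level
    # function for context, not the whole program.
    for name in todo:
        code[name] = ask_llm(
            f"Given this top-level function:\n{code['top_level']}\n\n"
            f"Write the full implementation of the helper `{name}`."
        )
    return code
```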

Another common programming task is code updates or modifications. Again, this can easily be done with a smaller context window if the user gives the function headers instead of the full functions when creating the prompt, and only requests the code for the function that needs changing.
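A rough sketch of how such a prompt could be assembled, assuming a simple extract_signatures() helper that keeps only the def lines (a real version might use an AST parser instead):

```python
def extract_signatures(source: str) -> str:
    """Keep only the `def ...` lines from a module as lightweight context."""
    return "\n".join(
        line for line in source.splitlines() if line.lstrip().startswith("def ")
    )


def build_edit_prompt(module_source: str, target_function_source: str, change: str) -> str:
    # Headers of everything in the module, plus the full body of the one
    # function being edited, which is far fewer tokens than the whole file.
    return (
        "Function signatures in this module, for context:\n"
        f"{extract_signatures(module_source)}\n\n"
        "Full source of the function to change:\n"
        f"{target_function_source}\n\n"
        f"Change request: {change}\n"
        "Return only the updated function."
    )
```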

A few other things I have learned along the way are to work with only one function at a time and not to go over 100 lines of code. Doing this with early versions of ChatGPT, which had a relatively small context window, I was able to create some nice code; it even included Prolog, JavaScript, HTML and JSON in the mix.

While all of this is nice, I am not expecting Discourse to offer a bot for users to create Discourse code anytime in the future.

I have not really tried that yet. As I noted in another post, I have no skills with Ruby or Ruby on Rails and the JavaScript technologies used, so I don't even know the correct terminology to get good results, but I will keep that in mind as something to try and give feedback on.

That is a plus in my book.
