Seems to have done it again today… Just changed the wording from yesterday.
Yesterday you offered ‘guidance’ but today you offered ‘advice’ on post notices
I think Bert misses Jammy and might now have a crush on me or something. I don’t mind - the extra tasks that I didn’t do make me look really busy lol.
The bot’s looking at the wrong part lol.
Looks like Bert is taking a nap today
I should probably wake up the sleeping bot.
edit: the bot is awake
Some of the categories show as text and not a direct link (for example, #Integrations instead of Integrations)
You can see this in this summary
I think that is still the known problem of incorrectly mentioned (sub)categories.
#support:wordpress becomes the tag #wordpress::tag when you just write #wordpress (it renders as “wordpress”).
Other subcategories break, like #documentation:integrations, because plain #integrations doesn’t resolve on its own.
And category names with more than one word also break, like #theme-component, because the bot does not add the dash and writes “#theme component” instead (rendered as “Theme component”).
But we all know which category the bot is talking about.
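If someone wanted to patch this up after the fact, one rough approach would be to map the bare names the bot writes back to their full slugs. This is only an illustrative sketch, not anything the bot or Discourse actually does; the CATEGORY_SLUGS map and normalize_mention function are made up for the example.

```python
# Rough sketch (not Discourse's actual code) of normalizing bot-written
# category mentions back into valid hashtag slugs.
# The slug map below is hypothetical, based on the categories mentioned above.

CATEGORY_SLUGS = {
    "wordpress": "support:wordpress",         # subcategory needs its parent
    "integrations": "documentation:integrations",
    "theme component": "theme-component",     # multi-word name needs the dash
}

def normalize_mention(text: str) -> str:
    """Replace a bare category name the bot wrote with its full #slug."""
    key = text.lstrip("#").strip().lower()
    slug = CATEGORY_SLUGS.get(key)
    return f"#{slug}" if slug else text

print(normalize_mention("#integrations"))     # -> #documentation:integrations
print(normalize_mention("#theme component"))  # -> #theme-component
```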
LLMs will get better, they keep getting better.
GPT-4o has been moving quite a lot lately; I may flip the report to use it for now.
I’ve been wondering why I’m not a fan of AI generated text on this forum. I think it’s related to the excessive use of active voice.
An example from the latest daily summary:
HAWK suggested creating a custom AI persona/bot rather than customizing Discobot for specific forum topics.
NateDhaliwal engaged in discussions about simplifying Discourse installation and suggested voice messages as an alternative to phone integration.
tgxworld explained recent changes to user and group IDs in Discourse Docker to address permissions issues.
mcwumbly emphasized the importance of understanding use cases when writing feature requests.
keegan provided an update on changes to the AI composer helper interface.
The same pattern is used in AI generated topic summaries.
My understanding is that active voice is the preferred writing style for professional communication. This is obviously subjective, but the use of active voice in AI generated text feels off to me. I can’t quite put my finger on it, but it feels overly assertive… something about it rubs me the wrong way.
How would you prefer it was written?
Bert must have hired you without asking. I’ll see if it can fire you on the next update, not to worry.
It’s a tricky problem. If I were writing the summaries myself, I’d do something similar to what the LLM is doing, just better. For text written by an LLM, I’d prefer it to be as matter-of-fact as possible.
What stands out when I read the summaries is the repeated use of the subject-verb pattern: the verb following the username is essentially saying that the user posted.
Attribution sentences are common in note-taking. A name followed by a colon implies that what follows the colon can be attributed to the name. Maybe a similar approach could work for post summaries.
Seems like this was implemented!
Now the question is: was it implemented intentionally, or did Bert just happen to choose that format for this run?
I’m not sure anything has changed. Getting an LLM to summarize user generated content in an appropriate manner is a tricky problem. It’s likely that the way it’s currently being done seems fine to most people.
What “feels off” to me is that the LLM is writing in a style that I’d expect from a human user who actually knows the forum’s members.
Testing with ChatGPT 4o, I found that the following prompt gets a bit closer to a style that feels appropriate to me:
Summarize forum posts in a neutral, objective tone, emphasizing the content of the discussion. Write the summaries in the format ‘username: post content,’ avoiding any language that implies familiarity with the users. Present the information in a formal style, focusing on the topics discussed rather than the actions of the users. Limit each summary to a maximum of two sentences.
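For anyone who wants to try the same thing outside the ChatGPT UI, here is a minimal sketch of sending that prompt through the API. It assumes the openai Python package (v1+) with OPENAI_API_KEY set in the environment; the model name and the sample post text are placeholders, not what Discourse AI actually runs.

```python
# Sketch: trying the summary-style prompt against the API instead of the
# ChatGPT UI. Assumes the openai Python package (v1+) and OPENAI_API_KEY
# set in the environment; the sample post below is a placeholder.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "Summarize forum posts in a neutral, objective tone, emphasizing the "
    "content of the discussion. Write the summaries in the format "
    "'username: post content,' avoiding any language that implies "
    "familiarity with the users. Present the information in a formal style, "
    "focusing on the topics discussed rather than the actions of the users. "
    "Limit each summary to a maximum of two sentences."
)

# Placeholder forum post text to summarize.
posts = "HAWK: We could create a custom AI persona instead of customizing Discobot."

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": posts},
    ],
)
print(response.choices[0].message.content)
```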