Proofread text inserts the text twice

I have no idea why, and it doesn’t happen in all posts, but it’s reproducible in a specific post where, whenever I use proofread, the content is duplicated.


3 Likes

Hmmm, I can reproduce it on the post you linked, but I haven’t been able to find it elsewhere yet. Very odd indeed :face_with_monocle: :thinking:

I even tried with this one

but I could repro the other one with different reply text, and even with the date field taken out of the quote. I did notice that if there was no typo in the reply text, it tried to correct the quote.


2 Likes

This is so strange. :smile:

When you don't select any text, there is something to fix in the quote, and you are a non-staff user, it duplicates. :thinking:


When the quote content looks ok, it doesn’t duplicate:

2 Likes

This is a bug being triggered by Qwen, @Falco:

{
  "model": "meta-llama/Llama-3.2-11B-Vision-Instruct",
  "temperature": 0,
  "stop": [
    "\n</output>"
  ],
  "messages": [
    {
      "role": "system",
      "content": "You are a markdown proofreader. You correct egregious typos and phrasing issues but keep the user's original voice.\nYou do not touch code blocks. I will provide you with text to proofread. If nothing needs fixing, then you will echo the text back.\nYou will find the text between <input></input> XML tags.\nYou will ALWAYS return the corrected text between <output></output> XML tags.\n\n"
    },
    {
      "role": "user",
      "content": "<input>[quote=\"Arkshine, post:1, topic:339163\"]\n:information_source: This component requires Discourse to be current as of [date=2024-11-27 timezone=\"Europe/Paris\"]. \n[/quote]\nDid you update Discourse? You only receive a notification to update when a new beta is released, but new commits are added every day.</input>"
    }
  ]
}

{
  "id": "chatcmpl-752c6aacdc7f496b951592e88d485eb3",
  "object": "chat.completion",
  "created": 1733196730,
  "model": "Qwen/Qwen2.5-32B-Instruct-AWQ",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "<input>[quote=\"Arkshine, post:1, topic:339163\"]\n:information_source: This component requires Discourse to be current as of [date=2024-11-27 timezone=\"Europe/Paris\"]. \n[/quote]\nDid you update Discourse? You only receive a notification to update when a new beta is released, but new commits are added daily.</input>\n\n<output>[quote=\"Arkshine, post:1, topic:339163\"]\n:information_source: This component requires Discourse to be current as of [date=2024-11-27 timezone=\"Europe/Paris\"]. \n[/quote]\nDid you update Discourse? You only receive a notification to update when a new beta is released, but new commits are added daily.</output>",
        "tool_calls": []
      },
      "logprobs": null,
      "finish_reason": "stop",
      "stop_reason": null
    }
  ],
  "usage": {
    "prompt_tokens": 184,
    "total_tokens": 358,
    "completion_tokens": 174,
    "prompt_tokens_details": null
  },
  "prompt_logprobs": null
}

Notice how it returns BOTH <input> and <output> tags, so we have a bug here.

The sanitize regex is keeping both the input and the output.

I guess we should be more deliberate with our API: if you are proofreading, only ask for the output, or do some better prompt engineering.
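
Roughly what is happening, as a minimal sketch: assuming the scrub just deletes the tag markers (same shape as the SANITIZE_REGEX_STR quoted further down in this thread) and keeps everything between them, an echoed input block survives alongside the output block.

# Sketch of the duplication, not the shipped code: build a regex in the same
# shape as SANITIZE_REGEX_STR and apply it to a reply that echoes both tags.
scrub =
  Regexp.new(
    %w[term context topic replyTo input output result]
      .map { |tag| "<#{tag}>\\n?|\\n?</#{tag}>" }
      .join("|")
  )

reply = "<input>Did you update Discourse?</input>\n\n<output>Did you update Discourse?</output>"

puts reply.gsub(scrub, "")
# Only the tag markers are removed, so the sentence is printed twice:
# exactly the duplication reported in the OP.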

Also, interestingly, we stopped sending examples even though we have them, @Roman_Rizzi.

6 Likes

This will fix the core of the regression:

It does come with a side effect though, @Jagster: we stopped sending English examples a while back, and now we will be sending them again. Let us know if this impacts you.

That said, @Roman_Rizzi, this does not make sense to me:

SANITIZE_REGEX_STR =
  %w[term context topic replyTo input output result]
    .map { |tag| "<#{tag}>\\n?|\\n?</#{tag}>" }
    .join("|")

Should it not be:

(item is for title suggestions, but maybe it's taking a different path)

SANITIZE_REGEX_STR =
  %w[output item]
    .map { |tag| "<#{tag}>\\n?|\\n?</#{tag}>" }
    .join("|")

3 Likes

Some of the helper prompts use those tags to provide context. For example:

Some models might include them in the reply, so we remove them.

2 Likes

Not following, can you expand with a full example?

Why do we want to keep the text in input tags in the output, when we sanitise the stuff the model gives us?

(OP should be working now, btw)

1 Like

The word “sanitize” is a bit misleading here. We want to solve two different problems:

  1. Make sure we get the output and nothing else.
  2. Make sure to strip any tags that make the result look unnatural.

The problem here is that we are being too lax with (1). We need to ensure that the relevant part is always wrapped in <output> and </output>, and use nothing else. Once we have this relevant part, remove all other tags to ensure the result looks clean (2).
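
A minimal sketch of that two-step flow, assuming replies are meant to be wrapped in <output></output> (the method name and the fallback behaviour are assumptions, not the shipped code):

CONTEXT_TAGS = %w[term context topic replyTo input output result item]

def clean_reply(raw)
  # (1) Keep only the part wrapped in <output></output>; fall back to the
  #     whole reply if the model did not wrap it (assumed behaviour).
  relevant = raw[%r{<output>(.*?)</output>}m, 1] || raw

  # (2) Strip any stray context tags so the result reads naturally.
  tag_regex = Regexp.union(CONTEXT_TAGS.flat_map { |t| ["<#{t}>", "</#{t}>"] })
  relevant.gsub(tag_regex, "").strip
end

Run against the Qwen reply above, step (1) alone already drops the echoed <input> block, which is what fixes the duplication.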


To expand on the example I provided above, and explain why we currently scrub all these tags, this is what the seeded “explain” prompt looks like:

<term> and <replyTo> are used to provide context to the model, while <input> tells it we want it to focus on that specific piece of text.

Problem was that some models were using the same tags in their replies, which made the text look unnatural and weird to users. The end goal here is to remove these tags and produce “clean” text as the result.

For example, when I want to get an explanation of what “Not following” means, I don’t want to see something like this:

<term>Not following</term> in this context means that the user is having trouble understanding the explanation or the point being made. (…)
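
For illustration, stripping the context tags is roughly what turns a reply like that into the clean text users should see (a sketch, not the shipped code):

scrub =
  Regexp.union(
    %w[term context topic replyTo input output result]
      .flat_map { |t| ["<#{t}>", "</#{t}>"] }
  )

reply = "<term>Not following</term> in this context means that the user is having trouble understanding."
puts reply.gsub(scrub, "")
# => "Not following in this context means that the user is having trouble understanding."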

2 Likes