Links between topics always connect them; this has been true of Discourse since beta. So just add a link in a post pointing to the new topic.
What's the performance impact of long threads?
I have one that has escalated to 4.5K posts very rapidly and will keep escalating for at least 36 hours. I'm expecting thousands more messages, and the server is screaming.
I would recommend keeping topics below the 10k post count for now. It's worth closing the topic and starting a part 2 when it reaches that point.
Yeah, I am guessing it will stay below 10K and cool down tomorrow at midnight (trade deadline). Just wondering if it gives me any benefit to cut it now, as we are in our annual super peak.
We keep deferring this work; the bandage to stop the bleeding was a new default that auto-closes topics at 10k replies… but the underlying issue is still present.
Only ignorant brainstorming here. If the bottleneck is a large number of posts belonging to a single topic, would there be a way to insert a middle layer, something like:

- topic has many blocks
- block has many posts

similar to changing a large array into a multidimensional array, but for database queries.
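To make the analogy concrete, here is a toy Ruby sketch of the idea (purely hypothetical, not how Discourse stores posts): the flat stream is chopped into fixed-size blocks, so locating a post only touches one block.

```ruby
# Hypothetical sketch: split a flat stream of post ids into fixed-size
# "blocks", so finding the Nth post only needs one small block instead
# of scanning the whole stream. Not the actual Discourse schema.
BLOCK_SIZE = 3

post_ids = [11, 12, 13, 21, 22, 23, 31]

# topic has many blocks, block has many posts
blocks = post_ids.each_slice(BLOCK_SIZE).to_a
# => [[11, 12, 13], [21, 22, 23], [31]]

# Locate the block index and the offset within that block for the
# Nth post in the stream:
def locate(index, block_size)
  [index / block_size, index % block_size]
end

block, offset = locate(4, BLOCK_SIZE)
puts blocks[block][offset] # => 22
```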
I stopped this bleeding in
While I was looking through the client side code, I noticed that the client only cares about the posts in the response payload and doesn’t do anything with the stream (all the post ids).
Maybe only the timeline does, and @sam already optimized that yesterday (by removing date from the timeline), which should also be covered in this topic I think.
From what I’m seeing the client do with the response I’m 99% sure that we don’t need to send down all the post ids when scrolling.
Maybe @eviltrout can review?
Yup we removed the date in
However, it didn’t actually affect the performance of the query much when we tried it out. The root cause is that we’re over-selecting the ids here.
Instead of having the client send the post ids that it wants to the server while scrolling, I plan to have the client send the current position and tell the server whether it is looking up/down the stream. I’ve got it working locally but I’ll need to sort out all the other edge cases.
I’m fine with a few days spent on this because we have deferred the work for two years now…
It’s tricky though, because what about filters? For example best of mode, or when restricting to one particular poster, or when moderators have “gaps” that they can expand to see deleted posts.
There are a lot of edge cases. I’m all for optimizing it, but be prepared for a LOT of regressions and testing if you take this path.
I’m not sure about the challenges with filters at the moment, but my plan is to move most if not all of the calculations that we’re doing client side into the server side. The logic for scrolling up and down would simply be: given the current position, which is determined by the post_number, ask the server for the next segment of posts/gaps. The server has knowledge of what filters are applied and can query for the segments accordingly. The client would just render what is given by the server. I’m still in the experimental stages, but that is how I think the architecture should be.
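As a rough sketch of that direction (all names hypothetical, not the actual implementation), the server would resolve a (current position, direction) pair into the next segment of the filtered stream:

```ruby
# Hypothetical sketch of cursor-style scrolling: the client sends its
# current post_number and a direction; the server returns the next
# segment. `visible_post_numbers` stands in for the filtered DB query
# that the server would run with full knowledge of the applied filters.
SEGMENT_SIZE = 3

def next_segment(visible_post_numbers, current:, direction:)
  if direction == :down
    visible_post_numbers.select { |n| n > current }.first(SEGMENT_SIZE)
  else
    visible_post_numbers.select { |n| n < current }.last(SEGMENT_SIZE)
  end
end

stream = [1, 2, 5, 6, 9, 10, 14] # gaps where posts are deleted/filtered

puts next_segment(stream, current: 6, direction: :down).inspect # => [9, 10, 14]
puts next_segment(stream, current: 6, direction: :up).inspect   # => [1, 2, 5]
```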
Did you want to summarize here tomorrow?
There was another spot that has been fixed and should resolve the following problem:
What is the deal with performance as the size of a topic grows?
The first-load performance of a topic degrades as the topic gets larger because we have to send down every single post id when we load a topic. This would naturally raise a red flag for anyone reading it for the first time, but after reading through the code base, I’ve concluded that this is a trade-off we’re making in order to have a clean and accurate implementation of the topic timeline.
If we look at the two numbers on our topic timeline and translate them into their respective DB queries, we would get the following:
```sql
-- Index of a post within the (filtered) stream
SELECT posts.*
FROM (
  SELECT posts.*, ROW_NUMBER() OVER () AS row_number
  FROM posts
  WHERE posts.deleted_at IS NULL
    AND posts.topic_id = 1
    AND posts.post_type IN (1, 2, 3, 4)
) AS posts

-- Total count of posts for the given filters
SELECT COUNT(*)
FROM posts
WHERE posts.deleted_at IS NULL
  AND posts.topic_id = 1
  AND posts.post_type IN (1, 2, 3, 4)
```
For the first query, we basically need the DB to fetch all the posts for the given filters before we can figure out the “index” of each post in the stream.
The second query is a count which naturally becomes expensive as the topic grows. A counter cache would not work here because the count changes based on what filters are applied to the topic.
In addition, having the entire stream of post ids present on the client side makes the following features straightforward to implement:
- Jumping to a certain index in the stream
- Selecting all posts below a certain index
- Fetching excerpts while scrolling through the stream
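With the full stream available client side, the three features above reduce to plain array operations — a toy Ruby sketch:

```ruby
# Toy sketch: with the entire stream of post ids on the client, the
# features above become simple array operations.
stream = [101, 102, 105, 106, 109]

# (1) Jumping to index 3 in the stream:
stream[3]             # => 106

# (2) Selecting all posts below (after) index 2:
stream[2..]           # => [105, 106, 109]

# (3) Fetching excerpts for a window while scrolling:
window = stream[1, 3] # => [102, 105, 106]
# ...then request excerpts for exactly those ids.
```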
Attempts were made to re-implement (1) and (3) above to work without the stream of post ids, but that either made the feature inaccurate or complicated the client side code so much that I felt it wasn’t worth the trade-off.
Stopping the bleeding on MEGATOPICs (> 10_000 posts)
Megatopics are expensive to load for three main reasons:
- To calculate all the gaps within a topic, a query is run to fetch all the `posts.id` values, regardless of whether the posts have been deleted or not.
- To generate the stream of all post ids, a query is run to fetch all the `posts.id` values for a given set of filters.
- First load time suffers because the client has to download the stream of post ids which has a length that is greater than 10_000.
Our plan to stop the bleeding here is to drop/approximate certain features on Megatopics:
- The ability to display the closest date for a given index is dropped on megatopics.
- Gaps are not supported on megatopics.
- Loading excerpts while scrolling through the timeline is not supported.
- The numbers on the timeline become an approximation. Instead of the index of the post in the set of possible results, we use the `Post#post_number` of the post. Instead of a count of all the possible results, we use the
- Jumping to an index on the timeline becomes jumping to the closest post number on the timeline.
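The "closest post number" jump can be sketched in a few lines of Ruby (an illustration of the idea, not the actual implementation): post numbers can have gaps where posts were deleted, so jumping to a timeline position means finding the nearest existing post number rather than indexing into a stream.

```ruby
# Sketch: post_numbers have gaps (deleted posts), so "jump to an index"
# becomes "jump to the nearest existing post_number".
def closest_post_number(post_numbers, target)
  post_numbers.min_by { |n| (n - target).abs }
end

post_numbers = [1, 2, 3, 7, 8, 12] # posts 4..6 and 9..11 were deleted

puts closest_post_number(post_numbers, 6)  # => 7
puts closest_post_number(post_numbers, 11) # => 12
```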
This may seem like taking a step backwards, but do note that MEGATOPICs are quite rare in the wild. We’ve mainly seen them appear on sites with imported content, since normal sites would hit the `SiteSetting#auto_close_topics_post_count` guard that is in place. For more information on why we think megatopics are bad, you can read @codinghorror’s analysis about it.
Might there be a way to take advantage of the difference between COUNT(*) and COUNT(field)? COUNT(*) counts all rows, whereas COUNT(field) counts only the rows where the field is not null.
My thinking is that if each row had something like an `is_countable` field that was null for rows that shouldn’t be counted, it could simplify the query and hopefully improve performance. The cost is that the field would need to be kept up to date, which might offset any benefit it provides.
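The COUNT(*) vs COUNT(field) distinction can be illustrated in plain Ruby (the `is_countable` field here is the hypothetical column from the suggestion above):

```ruby
# Illustrates the SQL semantics: COUNT(*) counts every row, while
# COUNT(column) skips rows where that column is NULL. `is_countable`
# is the hypothetical field proposed above.
rows = [
  { id: 1, is_countable: 1 },
  { id: 2, is_countable: nil }, # a row we don't want counted
  { id: 3, is_countable: 1 },
]

count_star      = rows.size                                  # COUNT(*)
count_countable = rows.count { |r| !r[:is_countable].nil? }  # COUNT(is_countable)

puts count_star      # => 3
puts count_countable # => 2
```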
Just confirming one thing. With these megatopic-specific shortcuts in place, performance on megatopics improved by two thirds, correct? So what used to take 1000ms will now take ~333ms with these changes?
For a 70K post topic, time spent in the DB goes from 400.1ms to 137.7ms, while the first payload size decreases from 735 KB to 178 KB.
And to summarize, this is the tradeoff on megatopics (>10k posts) only:
- Dates will not be shown on the timeline
- “view x hidden replies” and other gaps will not appear in the post stream
- Excerpts will not be shown when scrolling the timeline
- Timeline post numbers and “jump to post” may use absolute post numbers rather than relative indexes
The sheer number of records that have to be fetched would still make the query expensive to run.