I noticed a couple of months ago that our major off-topic/spam thread was becoming slower than other threads. Yesterday a user finally complained about it. At 188.7k replies it consistently takes 4 seconds or longer to perform any navigation in the thread, while other threads take less than a second.
We’re used to shutting these threads down and creating new ones, as this was a problem on vBulletin as well. I’m not a forum architect, so I’m not here to say it can (or should) be fixed, and I don’t know how many other forums have topics with so many replies, but I figured I’d put the information here. This is obviously nonessential, but if you ever achieve all of your hopes for Discourse and want one last thing to work on, I guess there’s this.
Ah, we did. I guess we didn’t consider that there might be consequences. We have three other threads that are longer than 10k posts at the moment, so I guess it’d be good to keep an eye on them.
When auto-closing, is there any sort of thread re-creation that ties the new thread back to the old one? Just curious, as we might re-enable that limit if so. Otherwise we’ll just handle long threads ourselves since they don’t crop up too often.
Only ignorant brainstorming. If the bottleneck is the large number of posts belonging to a topic, would there be a way to insert a middle layer, something like:
topic has many blocks
block has many posts
similar to changing a large array into a multidimensional array but for database queries.
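As a minimal sketch of that indirection, here is the idea with hypothetical table names (`blocks` is not a real Discourse table), using sqlite3 so it runs standalone: fetching one page of posts only has to touch a single small block rather than scan the whole topic.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- Hypothetical schema: posts hang off fixed-size blocks, blocks off topics.
    CREATE TABLE blocks (id INTEGER PRIMARY KEY, topic_id INTEGER);
    CREATE TABLE posts  (id INTEGER PRIMARY KEY, block_id INTEGER);
""")

BLOCK_SIZE = 100
# Insert 250 posts for topic 1, grouped into blocks of 100 posts each.
for i in range(250):
    block_id = i // BLOCK_SIZE + 1
    con.execute("INSERT OR IGNORE INTO blocks (id, topic_id) VALUES (?, 1)", (block_id,))
    con.execute("INSERT INTO posts (block_id) VALUES (?)", (block_id,))

# Fetching one "page" now reads a single block instead of the whole topic.
rows = con.execute("""
    SELECT posts.id FROM posts
    JOIN blocks ON blocks.id = posts.block_id
    WHERE blocks.topic_id = 1 AND blocks.id = 2
""").fetchall()
print(len(rows))  # → 100
```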
However, it didn’t actually affect the performance of the query much when we tried it out. The root cause is that we’re over-selecting the ids here.
Instead of having the client send the post ids that it wants to the server while scrolling, I plan to have the client send the current position and tell the server whether it is looking up/down the stream. I’ve got it working locally but I’ll need to sort out all the other edge cases.
I’m not sure about the challenges with filters at the moment but my plan is to move most if not all of the calculations that we’re doing client side into the server side. The logic for scrolling up and down would simply be given the current position which is determined by the post_number ask the server for the next segment of posts/gaps. The server has knowledge of what filters are applied and can query for the segments accordingly. The client would just render what is given by the server. I’m still in the experimental stages but that is how I think the architecture should be.
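A rough sketch of that contract, with hypothetical names (`next_segment` is not actual Discourse code): the client sends only its current post_number and a direction, and the server resolves the next window of posts against whatever filters it already knows about.

```python
# Hypothetical server-side helper: given the current position and a scroll
# direction, return the next window of post_numbers. Filters are applied
# server-side instead of shipping the full id stream to the client.
def next_segment(post_numbers, current, direction, window=20):
    # `post_numbers` stands in for the filtered, ordered query result.
    if direction == "down":
        after = [n for n in post_numbers if n > current]
        return after[:window]
    else:  # "up"
        before = [n for n in post_numbers if n < current]
        return before[-window:]

stream = list(range(1, 101))  # post_numbers 1..100 after filtering
print(next_segment(stream, 50, "down", window=5))  # → [51, 52, 53, 54, 55]
print(next_segment(stream, 50, "up", window=5))    # → [45, 46, 47, 48, 49]
```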
There was another spot that has been fixed and should resolve the following problem:
What is the deal with performance as the size of a topic grows?
The first-load performance of a topic degrades as the topic gets larger because we have to send down every single post id when we load a topic. This would naturally raise a red flag for anyone reading it for the first time, but after reading through the code base, I’ve concluded that this is a trade-off we’re making in order to have a clean and accurate implementation of the topic timeline.
If we look at the two numbers on our topic timeline and translate them into their respective DB queries, we would get the following:
-- Query 1: the current position (the post's index in the stream)
SELECT posts.row_number
FROM (
  SELECT posts.*, ROW_NUMBER() OVER () AS row_number
  FROM posts
  WHERE posts.deleted_at IS NULL
  AND posts.topic_id = 1
  AND posts.post_type IN (1, 2, 3, 4)
) AS posts
WHERE posts.id = ?

-- Query 2: the total number of posts in the stream
SELECT COUNT(*)
FROM posts
WHERE posts.deleted_at IS NULL
AND posts.topic_id = 1
AND posts.post_type IN (1, 2, 3, 4)
For the first query, we basically need the DB to fetch all the posts for the given filters before we can figure out the “index” of each post in the stream.
The second query is a count which naturally becomes expensive as the topic grows. A counter cache would not work here because the count changes based on what filters are applied to the topic.
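To make this concrete, the two queries can be exercised against a toy table (sqlite3 here in place of Postgres; the topic_id and post_type values follow the queries above, and the specific ids are made up for the demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE posts (
    id INTEGER PRIMARY KEY, topic_id INTEGER,
    post_type INTEGER, deleted_at TEXT)""")
# 1000 visible posts plus one deleted post that must not be counted.
for _ in range(1000):
    con.execute("INSERT INTO posts (topic_id, post_type, deleted_at) VALUES (1, 1, NULL)")
con.execute("INSERT INTO posts (topic_id, post_type, deleted_at) VALUES (1, 1, '2024-01-01')")

# "Index in the stream": number every visible post, then pick one out.
(row_number,) = con.execute("""
    SELECT row_number FROM (
        SELECT posts.id, ROW_NUMBER() OVER (ORDER BY posts.id) AS row_number
        FROM posts
        WHERE posts.deleted_at IS NULL
          AND posts.topic_id = 1
          AND posts.post_type IN (1, 2, 3, 4)
    ) AS posts
    WHERE posts.id = 500
""").fetchone()

# "Total": a count that must re-apply the same filters on every call.
(total,) = con.execute("""
    SELECT COUNT(*) FROM posts
    WHERE posts.deleted_at IS NULL
      AND posts.topic_id = 1
      AND posts.post_type IN (1, 2, 3, 4)
""").fetchone()

print(row_number, total)  # → 500 1000
```

Both statements have to visit every qualifying row in the topic, which is why neither gets cheaper as a counter cache would: the answer depends on the filters in effect at query time.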
In addition, having the entire stream of post ids present on the client side makes the following features straightforward to implement:
1. Jumping to a certain index in the stream
2. Selecting all posts below a certain index
3. Fetching excerpts while scrolling through a stream
Attempts were made to re-implement (1) and (3) above to work without the stream of post ids, but that either made the feature inaccurate or complicated the client-side code so much that I felt it wasn’t worth the trade-off.
Stopping the bleeding on MEGATOPICs (> 10_000 posts)
Megatopics are expensive to load for three main reasons:
To calculate all the gaps within a topic, a query is run to fetch all the posts.id for posts regardless of whether the posts have been deleted or not.
To generate the stream of all post ids, a query is run to fetch all the posts.id for posts given a set of filters.
First load time suffers because the client has to download the stream of post ids which has a length that is greater than 10_000.
Our plan to stop the bleeding here is to drop/approximate certain features on Megatopics:
The ability to display the closest date for a given index is dropped on megatopics
Loading excerpts while scrolling through the timeline is not supported
The numbers on the timeline become an approximation. Instead of the index of the post in the set of possible results, we use the post’s Post#post_number. Instead of a count of all the possible results, we use Topic#highest_post_number.
Jumping to an index on the timeline becomes jumping to the closest post number on the timeline
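The approximation works out like this (toy numbers; the variable names mirror Post#post_number and Topic#highest_post_number):

```python
# Exact timeline numbers need a filtered query over the whole topic; the
# approximation reads two values that are already at hand. When a topic
# contains deleted or hidden posts, the two drift apart, which is the
# accepted trade-off on megatopics.
visible_post_numbers = [1, 2, 4, 5, 7]  # post_numbers 3 and 6 were deleted
current = 5

exact_index = visible_post_numbers.index(current) + 1  # needs the full stream
exact_total = len(visible_post_numbers)

approx_index = current  # Post#post_number
approx_total = 7        # Topic#highest_post_number

print(exact_index, exact_total)    # → 4 5
print(approx_index, approx_total)  # → 5 7
```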
This may seem like taking a step backwards, but do note that megatopics are quite rare in the wild. We’ve mainly seen them appear on sites with imported content, since normal sites would hit the SiteSetting#auto_close_topics_post_count guard that is in place. For more information on why we think megatopics are bad, you can read @codinghorror’s analysis about it.
Might there be a way to take advantage of the difference between count(*) and count(field_identifier)?
e.g. count(*) counts all rows, whereas count(field_identifier) counts only the rows where that field is not null.
My thinking is that if each row had something like an is_countable field that was null for rows that shouldn’t be counted, it could simplify the query and hopefully improve performance. The cost is that the field would need to be kept up to date, which might offset any benefit it provides.
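The count(*) vs. count(column) difference is easy to check (sqlite3 here; `is_countable` is the hypothetical field from the suggestion above, not an existing Discourse column):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, is_countable INTEGER)")
# Three countable rows and two rows flagged out with NULL.
con.executemany("INSERT INTO posts (is_countable) VALUES (?)",
                [(1,), (1,), (1,), (None,), (None,)])

(all_rows,) = con.execute("SELECT COUNT(*) FROM posts").fetchone()
(countable,) = con.execute("SELECT COUNT(is_countable) FROM posts").fetchone()
print(all_rows, countable)  # → 5 3
```

Note that count(column) still has to visit every row to test it for NULL, so this changes which filter is applied rather than how many rows are scanned.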
Just confirming one thing. With these megatopic-specific shortcuts in place, performance on megatopics improved by two thirds, correct? So what used to take 1000ms will now take ~333ms with these changes?