How is search performing on your large forums with over 1 million posts?

We've recently had a wave of new users and a traffic spike, and with it a sharp increase in concurrent search requests.

At around 1.2 million posts, we've started seeing search timeouts.

For those of you running larger forums, what have you done about search? Have you moved away from Discourse core search, or does it still hold up? Have you changed the search algorithm or query conditions to make them stricter or more specific? Any database tuning?

Thanks for the comments, everyone!

What's your server configuration? Which resource looks most constrained?

I'm considering migrating a forum with roughly 20 million posts (public posts + private conversations), so I'm very interested in the performance issues large forums can run into.

There's a huge gap between 1 million and 20 million posts, so very little advice will apply to both of you. For what it's worth, I run a site with 9 million posts, and performance is acceptable on a server with an i7-7500U @ 2.7GHz and 16 GB of RAM. The same server also hosts a few other low-traffic sites.

Another site with 4 million posts, on an AWS database with 8 GB of RAM, has struggled a bit, especially with search.

@Jumanji, what's your server configuration?

We're on AWS. The forum runs on 4 instances, I believe t3.large, i.e. 4 vCPUs and 16 GB of RAM. We run Kubernetes with a custom Docker image. The database runs on a separate instance, an m4.large with 2 vCPUs and 8 GB of RAM. Memory on the database instance could certainly be the bottleneck.

The queries that time out are usually more generic terms, i.e. single words. We've seen them run for over 60 seconds. On top of that, our site's traffic has grown a lot in the past month: up 40% over the month before. In the same period we also upgraded from v1.9 to v2.4. I know search gained autocomplete since 1.9, so my guess is that this puts extra load on the database server.

We did some query profiling and found the following query running for between 20 and 60 seconds:

```sql
SELECT "posts".*
FROM "posts"
JOIN (
    SELECT *, row_number() over() row_number
    FROM (
        SELECT topics.id, min(posts.post_number) post_number
        FROM "posts"
        INNER JOIN "post_search_data" ON "post_search_data"."post_id" = "posts"."id"
        INNER JOIN "topics" ON "topics"."id" = "posts"."topic_id" AND ("topics"."deleted_at" IS NULL)
        LEFT JOIN categories ON categories.id = topics.category_id
        WHERE
            ("posts"."deleted_at" IS NULL)
            AND "posts"."post_type" IN (1, 2, 3)
            AND (topics.visible)
            AND (topics.archetype <> 'private_message')
            AND (post_search_data.search_data @@ TO_TSQUERY('english', '''a'':*ABD & ''price'':*ABD'))
            AND (categories.id NOT IN (SELECT categories.id WHERE categories.search_priority = 1))
            AND ((categories.id IS NULL) OR (NOT categories.read_restricted))
        GROUP BY topics.id
        ORDER BY
            MAX((
                TS_RANK_CD(
                    post_search_data.search_data,
                    TO_TSQUERY('english', '''a'':*ABD & ''price'':*ABD'),
                    1|32
                ) * (
                    CASE categories.search_priority
                    WHEN 2
                    THEN 0.6
                    WHEN 3
                    THEN 0.8
                    WHEN 4
                    THEN 1.2
                    WHEN 5
                    THEN 1.4
                    ELSE
                    CASE
                        WHEN topics.closed
                        THEN 0.9
                        ELSE 1
                        END
                    END
                )
            )) DESC,
            topics.bumped_at DESC
        LIMIT 6
        OFFSET 0
    ) xxx
) x ON x.id = posts.topic_id AND x.post_number = posts.post_number
WHERE ("posts"."deleted_at" IS NULL)
ORDER BY row_number;
```

---

```
 Sort  (cost=495214.55..495214.56 rows=1 width=1177) (actual time=20529.725..20529.727 rows=6 loops=1)
   Sort Key: (row_number() OVER (?))
   Sort Method: quicksort  Memory: 29kB
   ->  Nested Loop  (cost=495164.08..495214.54 rows=1 width=1177) (actual time=20525.899..20529.703 rows=6 loops=1)
         ->  WindowAgg  (cost=495163.65..495163.80 rows=6 width=16) (actual time=20520.976..20521.078 rows=6 loops=1)
               ->  Limit  (cost=495163.65..495163.66 rows=6 width=24) (actual time=20520.969..20521.063 rows=6 loops=1)
                     ->  Sort  (cost=495163.65..495232.24 rows=27438 width=24) (actual time=20520.967..20520.969 rows=6 loops=1)
                           Sort Key: (max((ts_rank_cd(post_search_data.search_data, '''price'':*ABD'::tsquery, 33) * (CASE categories.search_priority WHEN 2 THEN 0.6 WHEN 3 THEN 0.8 WHEN 4 THEN 1.2 WHEN 5 THEN 1.4 ELSE CASE WHEN topics.closed THEN 0.9 ELSE '1'::numeric END END)::double precision))) DESC, topics.bumped_at DESC
                           Sort Method: top-N heapsort  Memory: 25kB
                           ->  GroupAggregate  (cost=493642.90..494671.83 rows=27438 width=24) (actual time=19082.214..20506.763 rows=32951 loops=1)
                                 Group Key: topics.id
                                 ->  Sort  (cost=493642.90..493711.50 rows=27438 width=400) (actual time=19082.184..19283.907 rows=191436 loops=1)
                                       Sort Key: topics.id
                                       Sort Method: external merge  Disk: 77632kB
                                       ->  Hash Left Join  (cost=36655.60..486646.69 rows=27438 width=400) (actual time=1562.696..18611.724 rows=191436 loops=1)
                                             Hash Cond: (topics.category_id = categories.id)
                                             Filter: (((categories.id IS NULL) OR (NOT categories.read_restricted)) AND (NOT (SubPlan 1)))
                                             ->  Gather  (cost=36645.63..486471.60 rows=58991 width=400) (actual time=1562.623..18249.349 rows=191436 loops=1)
                                                   Workers Planned: 2
                                                   Workers Launched: 2
                                                   ->  Nested Loop  (cost=35645.63..479572.50 rows=24580 width=400) (actual time=1556.547..18541.793 rows=63812 loops=3)
                                                         ->  Hash Join  (cost=35645.20..328688.61 rows=157831 width=25) (actual time=1551.912..13356.416 rows=285279 loops=3)
                                                               Hash Cond: (posts_1.topic_id = topics.id)
                                                               ->  Parallel Seq Scan on posts posts_1  (cost=0.00..286245.92 rows=504559 width=12) (actual time=0.280..11249.160 rows=404770 loops=3)
                                                                     Filter: ((deleted_at IS NULL) AND (post_type = ANY ('{1,2,3}'::integer[])))
                                                                     Rows Removed by Filter: 21884
                                                               ->  Hash  (cost=33938.80..33938.80 rows=92912 width=17) (actual time=1549.103..1549.103 rows=80351 loops=3)
                                                                     Buckets: 65536  Batches: 2  Memory Usage: 2557kB
                                                                     ->  Seq Scan on topics  (cost=0.00..33938.80 rows=92912 width=17) (actual time=0.010..1492.606 rows=80351 loops=3)
                                                                           Filter: ((deleted_at IS NULL) AND visible AND ((archetype)::text <> 'private_message'::text))
                                                                           Rows Removed by Filter: 216751
                                                         ->  Index Scan using posts_search_pkey on post_search_data  (cost=0.43..0.96 rows=1 width=383) (actual time=0.017..0.017 rows=0 loops=855836)
                                                               Index Cond: (post_id = posts_1.id)
                                                               Filter: (search_data @@ '''price'':*ABD'::tsquery)
                                                               Rows Removed by Filter: 1
                                             ->  Hash  (cost=9.43..9.43 rows=43 width=9) (actual time=0.059..0.059 rows=43 loops=1)
                                                   Buckets: 1024  Batches: 1  Memory Usage: 10kB
                                                   ->  Seq Scan on categories  (cost=0.00..9.43 rows=43 width=9) (actual time=0.007..0.045 rows=43 loops=1)
                                             SubPlan 1
                                               ->  Result  (cost=0.00..0.01 rows=1 width=4) (actual time=0.000..0.000 rows=0 loops=191436)
                                                     One-Time Filter: (categories.search_priority = 1)
         ->  Index Scan using index_posts_on_topic_id_and_post_number on posts  (cost=0.43..8.45 rows=1 width=1169) (actual time=1.435..1.435 rows=1 loops=6)
               Index Cond: ((topic_id = topics.id) AND (post_number = (min(posts_1.post_number))))
               Filter: (deleted_at IS NULL)
 Planning time: 3.508 ms
 Execution time: 20541.411 ms
(46 rows)
```
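Slow queries like this one can also be surfaced systematically instead of caught one at a time. A minimal sketch, assuming the `pg_stat_statements` extension is available on the database server (not mentioned in the thread, so treat it as an assumption):

```sql
-- Requires pg_stat_statements in shared_preload_libraries, then once per DB:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Top 5 statements by cumulative execution time. Column names are as in
-- PostgreSQL 12 and earlier; newer versions rename them to
-- total_exec_time / mean_exec_time.
SELECT calls,
       round(total_time::numeric, 1) AS total_ms,
       round(mean_time::numeric, 1)  AS mean_ms,
       left(query, 80)               AS query_start
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 5;
```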

Is your total database size under 8 GB? You want to fit as much of the data in memory as possible.
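One quick way to check how the data compares to available memory, assuming direct SQL access to the Postgres database (via psql or similar):

```sql
-- Total size of the current database, human-readable.
SELECT pg_size_pretty(pg_database_size(current_database()));

-- The largest relations, to see what is competing for cache.
SELECT relname,
       pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;
```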

Oh, that's going to be very slow. It's even worse if the disk is network-attached.
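The plan above actually spills its big sort to disk (`Sort Method: external merge  Disk: 77632kB`), which is exactly where a network-attached disk hurts most. A hedged experiment, not a recommendation (the value is illustrative and `work_mem` applies per sort node per backend, so raising it globally on an 8 GB instance is risky):

```sql
-- Session-level only: give the sort enough memory to stay off disk,
-- then re-run the slow search query with EXPLAIN (ANALYZE, BUFFERS)
-- and check whether "external merge" becomes an in-memory sort.
SET work_mem = '128MB';
```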

Also, see my reply above for tuning suggestions.

@Jumanji, can you share some high-traffic forum sites running Discourse? Links please. Traffic should be in the millions.

I'm not on the Discourse team, but I found this via search.

We're at 24 GB right now. Definitely larger than the memory available on that instance.

In some cases, running vacuum analyze after large-scale operations can improve performance.
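The slow query above touches `posts`, `post_search_data`, `topics`, and `categories`, so those are the tables worth refreshing statistics for. A sketch, assuming direct SQL access (table names taken from the query in this thread):

```sql
-- Reclaim dead tuples and refresh planner statistics after a bulk
-- import or a major upgrade:
VACUUM (VERBOSE, ANALYZE) posts;
VACUUM (VERBOSE, ANALYZE) post_search_data;
VACUUM (VERBOSE, ANALYZE) topics;
VACUUM (VERBOSE, ANALYZE) categories;
```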

Also, if you're on AWS, are you using RDS? If not... why not?

Thanks. Yes, we ran that, and we're on RDS now.