API: Getting all posts in a topic

I’m working on a script that will evaluate each student’s participation in a discussion and produce a number based on how many messages they wrote, how many likes and replies they got, and so on; that number will become the student’s grade for “participating” in the discussion.

/t/blah/TOPIC_ID.json returns only 20 posts. Is there a way to get all of them in one call, or will I need to make multiple requests?

I looked a little at what gets passed to poll, but it wasn’t immediately apparent that I could pass it something like a range or the number of posts I wanted.

2 Likes

Maybe the easiest way is to get the data I want from the data explorer plugin.

Now I’m thinking that it would be cool to write a plugin that showed people’s scores next to their profile pic in their posts in that topic.

When you GET /t/blah/TOPIC_ID.json, the output will also contain a stream array that has all the post IDs for the topic.

You can then call /t/blah/TOPIC_ID/posts.json?post_ids[]=… and pass in the IDs from the stream array.
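
A minimal sketch of that two-step flow in TypeScript (the host and topic id are placeholders; substitute your own site):

const BASE = 'https://discourse.example.com' // placeholder host
const TOPIC_ID = 1234 // placeholder topic id

async function firstBatch() {
  // The topic JSON carries post_stream.stream with every post id in the topic.
  const topic = await (await fetch(`${BASE}/t/-/${TOPIC_ID}.json`)).json()
  const ids: number[] = topic.post_stream.stream

  // Ask for a specific batch of posts by id.
  const qs = ids.slice(0, 20).map((id) => `post_ids[]=${id}`).join('&')
  const batch = await (await fetch(`${BASE}/t/${TOPIC_ID}/posts.json?${qs}`)).json()
  return batch.post_stream.posts
}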

1 Like

Thanks! And the problem with data explorer is that there is no way (that I can see quickly) to pass in the topic_id that I want.

Also, it’s probably a good idea to break up fetching all the posts into multiple requests rather than one big one. So if you have a topic with 100 posts, you should break it up into 5 smaller requests, fetching 20 posts at a time.
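
A tiny helper for that kind of batching (a generic sketch; the size of 20 follows the advice above):

function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = []
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size))
  }
  return out
}

// e.g. chunk(streamIds, 20) -> five batches of ids for a 100-post topic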

1 Like

Hi @blake @pfaffman

I don’t know why I’m getting a response from /t/blah/TOPIC_ID.json with only the last 10 posts.
I see the chunk_size parameter (in the response) set to 10, but I don’t know why it is like that or where I can change this value.

Thank you very much

1 Like

I once wrote code that downloaded all of the posts in a topic.

I don’t know if the code still works, but how many posts you get is harder to predict than you might think. You pass the first post id to control which posts you get.

See https://github.com/pfaffman/discourse-downloader/blob/master/discourse-downloader#L69

2 Likes

The best way to download all the posts in a topic via the API is to mimic what Discourse does in the web browser, so please check out How to reverse engineer the Discourse API for details. Basically, go to the topic in the browser, open up your dev tools, and watch the XHR requests as you scroll through the topic.

Here are the steps to download all the posts in the topic via the api:

  1. Hit /t/-/{id}.json. The response will contain a ‘post_stream’ hash that contains a ‘posts’ array and a ‘stream’ array. The ‘posts’ array will give you the first 20 posts.

  2. Now loop through the ‘stream’ array, which gives you all of the post ids in the topic. Remove the first 20 post ids from the stream (otherwise you are re-downloading them for no reason).

  3. In chunks of 20, pass all the post_ids to /t/{id}/posts.json (a complete sketch follows these steps), like this:
    http://localhost:3000/t/8/posts.json?post_ids[]=46&post_ids[]=47&post_ids[]=48&post_ids[]=49&post_ids[]=50&post_ids[]=51&post_ids[]=52&post_ids[]=53&post_ids[]=54&post_ids[]=55&post_ids[]=56&post_ids[]=57&post_ids[]=58&post_ids[]=59&post_ids[]=60&post_ids[]=61&post_ids[]=62&post_ids[]=63&post_ids[]=64&post_ids[]=65
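
Putting those three steps together in TypeScript (a sketch, not official client code; the host is taken from the example URL above and error handling is kept minimal):

const BASE = 'http://localhost:3000' // from the example above -- use your own site

async function downloadAllPosts(topicId: number) {
  // Step 1: the topic JSON gives the first 20 posts plus the full id stream.
  const topic = await (await fetch(`${BASE}/t/-/${topicId}.json`)).json()
  const posts = [...topic.post_stream.posts]

  // Step 2: drop the ids we already have.
  const have = new Set(posts.map((p: { id: number }) => p.id))
  const rest: number[] = topic.post_stream.stream.filter((id: number) => !have.has(id))

  // Step 3: fetch the remainder in chunks of 20.
  for (let i = 0; i < rest.length; i += 20) {
    const qs = rest
      .slice(i, i + 20)
      .map((id) => `post_ids[]=${id}`)
      .join('&')
    const r = await (await fetch(`${BASE}/t/${topicId}/posts.json?${qs}`)).json()
    posts.push(...r.post_stream.posts)
  }

  return posts
}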

8 Likes

Thanks @blake and @pfaffman for your quick responses.

I agree with @blake’s steps for getting all posts in a topic.

Regarding step 1:
I just wanted to know if there is any parameter (maybe in the request header) to set the chunk_size in the /t/blah/TOPIC_ID.json request, because if I make the request from Postman I get the first 20 posts as described previously, but if I make the request from my Angular web app, I only get the first 10.

So I think there is something in the request that changes the response from the Discourse server.

I use this request because it gives me the post stream and the first 20 posts in a single call, so I use it as the base request for getting all posts in a topic.

I know this question is not critical; I can work out a solution using multiple requests. I am just curious to know why.

For some reason your Angular app is triggering slow_chunk_size.

So that might be something to look into.

There is not a chunk_size parameter you can set, but if you pass in print=true like /t/-/{id}.json?print=true it will set the chunk size to 1000.
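
For example (the topic id is a placeholder; &page=2 selects the next chunk, as comes up later in this thread):

async function fetchBigChunks(base: string, topicId: number) {
  // print=true raises the chunk size to 1000; page selects later chunks.
  const page1 = await (await fetch(`${base}/t/-/${topicId}.json?print=true`)).json()
  const page2 = await (await fetch(`${base}/t/-/${topicId}.json?print=true&page=2`)).json()
  return [...page1.post_stream.posts, ...page2.post_stream.posts]
}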

9 Likes

Thank you so much :ok_hand:

That is the trick. I am running my app (actually it’s an Ionic v3 app) from Chrome dev tools on an Android device, and I always get the first 10. When I switch to browser mode, I get the 20.

1 Like

This saved my life!

5 Likes

The print parameter just saved me :star_struck:

4 Likes

The ?print=true parameter is great, indeed! It seems, however, that there is a rate limit of five calls per hour for ?print=true requests. Is there a way to make more API calls per hour?

1 Like

This solution doesn’t work for “large topics”. Do you have a solution for those?

Otherwise, you will need to issue multiple requests to retrieve the rest. How many does it return?

Although ?print=true (and &page=2) works, it seems to be rate-limited more heavily than without print=true. I’d like to know how many requests I can make and still be considered safe, so as to avoid hitting status code 422.

I’m trying to read roughly 9000 posts, so reading 20 posts at a time would either be very slow or get rate-limited…

I’d suggest writing your code with the assumption that you may run into rate limits.
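
One way to do that is to retry with a growing delay whenever a request comes back rate-limited (a sketch: 429 is the standard rate-limit status, 422 is the one reported in this thread, and the delays are guesses to tune for your site):

async function fetchWithBackoff(url: string, tries = 5): Promise<Response> {
  let delay = 1000
  for (let i = 0; i < tries; i++) {
    const r = await fetch(url)
    // Anything other than a rate-limit status is returned to the caller.
    if (r.status !== 429 && r.status !== 422) return r
    await new Promise((resolve) => setTimeout(resolve, delay))
    delay *= 2 // back off: 1 s, 2 s, 4 s, ...
  }
  throw new Error(`still rate-limited after ${tries} attempts: ${url}`)
}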

It’s in my UserScript.
The typescript language highlight doesn’t work here; js acts strangely too. It has to be javascript. Both ts and typescript work on StackOverflow, though.

interface IPost {
  id: number
  username: string
  post_number: number
  cooked: string
}

interface ITopicResponse {
  actions_summary: {}[]
  archetype: string
  fancy_title: string
  title: string
  post_stream: {
    posts: IPost[]
    stream: number[]
  }
  posts_count: number
  reply_count: number
}

// Stand-in for the UserScript's logger helper, so the snippet runs on its own.
const logger = (level: string, ...args: unknown[]) => console.error(level, ...args)

// GET a JSON endpoint, returning null on HTTP errors or an { errors } payload.
export async function jsonFetch<T>(url: string): Promise<T | null> {
  const r = await fetch(url)
  if (r.ok) {
    const json = await r.json()
    if (!json.errors) {
      return json
    }
  }

  logger('error', r)
  return null
}

// Download every post in a topic, up to 1000 at a time via print=true,
// pausing 1 s between pages to be gentle on the rate limit.
export async function fetchAll(urlBase: string) {
  const r0 = await jsonFetch<ITopicResponse>(urlBase + '.json?print=true')
  if (!r0) return []

  const posts: IPost[] = r0.post_stream.posts
  let page = 2
  while (posts.length < r0.posts_count) {
    const r = await jsonFetch<ITopicResponse>(
      urlBase + '.json?print=true&page=' + page++
    )
    if (!r || !r.post_stream.posts.length) {
      break
    }
    posts.push(...r.post_stream.posts)
    await new Promise((resolve) => setTimeout(resolve, 1000))
  }

  return posts
}

fetchAll('https://community.wanikani.com/t/16404').then(console.log)

Using Axios in a ts-node script,

(node:1102374) UnhandledPromiseRejectionWarning: Error: Request failed with status code 422

If I wait long enough, say 10 minutes, it fails at page 2; but if I retry right away, I can’t load any URL at all.

And it works fine without print.


Actually, I solved the problem by avoiding print=true.

// Shape of the /posts.json response (only the part used here).
interface ITopicPostResponse {
  post_stream: {
    posts: IPost[];
  };
}

export async function fetchAll(urlBase: string) {
  const r0 = await jsonFetch<ITopicResponse>(urlBase + '.json');
  if (!r0) return [];

  // Keep the first 20 posts the topic request already returned, and only
  // queue ids we don't have yet, so those posts aren't fetched twice.
  const posts: IPost[] = [...r0.post_stream.posts];
  const have = new Set(posts.map((p) => p.id));
  const stream = (r0.post_stream.stream || []).filter((id) => !have.has(id));

  // 300 ids per request...
  const chunks: number[][] = [];
  while (stream.length) {
    chunks.push(stream.splice(0, 300));
  }

  let isContinue = true;
  while (chunks.length && isContinue) {
    // ...and up to 10 requests in flight at once.
    const rs = await Promise.all(
      chunks
        .splice(0, 10)
        .map((ids) =>
          jsonFetch<ITopicPostResponse>(
            urlBase +
              '/posts.json?' +
              ids.map((id) => `post_ids[]=${id}`).join('&'),
          ),
        ),
    ).then((rs) =>
      rs.map((r) => {
        if (!r) {
          isContinue = false; // a null response here usually means we were rate-limited
          return [];
        }
        return r.post_stream.posts;
      }),
    );

    rs.forEach((r) => {
      posts.push(...r);
    });

    if (chunks.length) {
      await new Promise((r) => setTimeout(r, 1000)); // breathe between rounds
    }
  }

  // Make sure everything ends up in topic order.
  posts.sort((a, b) => a.post_number - b.post_number);

  if (!isContinue) {
    logger(
      'error',
      `Total posts: ${r0.posts_count} != real count: ${posts.length}, due to Rate Limit?`,
    );
  }

  return posts;
}

I ran into an issue where 'print' => true doesn’t work, but 'print' => 'true' does.
PHP Guzzle (presumably because Guzzle’s query serializer turns boolean true into 1, and the server only matches the literal string true).
Maybe you should also add a handler for print = 1.
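
The same pitfall exists in any client where a boolean serializes to something other than the literal string true; in TypeScript, for instance, passing the value as a string avoids it (illustrative only):

// URLSearchParams always serializes values as strings,
// so the server sees the literal print=true it expects.
const qs = new URLSearchParams({ print: 'true' })
console.log(`/t/-/1234.json?${qs}`) // -> /t/-/1234.json?print=true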

1 Like