How to avoid throttling limits with admin API key?

I am getting a “429 Too Many Requests” response for API requests to my self-hosted instance even with

  • DISCOURSE_MAX_ADMIN_API_REQS_PER_MINUTE increased to 600
    • I set this in the env section of app.yml and then ran ./launcher rebuild and confirmed the variable was set in the rebuilt container.
    • This is well over the number of requests per minute I am making.
  • an unrestricted admin API key

It seems this has been discussed before without a clear answer as to why changing DISCOURSE_MAX_ADMIN_API_REQS_PER_MINUTE doesn’t seem to work:

How can I ensure that API requests with an admin key/user are not subject to throttling?

Hi @aas,

Could you give some context?

  • How many API requests are you making? Per second, minute, hour, per day
  • Are you sure you are using an Admin API key?
  • Are all of these coming from the same IP address? Perhaps due to a reverse proxy?

Could it be nginx or another piece of software hitting you with that error?

1 Like

Hi @Bas,

Apologies for the delayed response!

I am now taking a look at this again as we have launched a Discourse integration and we want to make sure we don’t run into any issues related to rate limiting.

I tested it with a new key to make sure it isn’t limited in any way. To be clear, what exactly do you mean by an admin API key?

I created a key with the following settings:

It says, “API key has no restriction and all endpoints are accessible.”

I am testing this by making API requests from a local Python shell, so they are coming from the same IP address. We also ran into the rate limits when running a script on our server. In that case, all requests came from the same IP address.

I confirmed that the rate limit is hit with the following code:

import asyncio
import functools

import aiometer
import httpx

DISCOURSE_URL = ""  # base URL of the Discourse instance
HEADERS = {"Api-Key": "", "Api-Username": ""}

topic_ids = list(range(1, 101))


async def get_topic_post_stream(topic_id):
    url = f"{DISCOURSE_URL}/t/{topic_id}"
    async with httpx.AsyncClient(headers=HEADERS) as client:
        topic = await client.get(url)
    return topic.status_code


async def get_topic_post_streams(topic_ids):
    tasks = [functools.partial(get_topic_post_stream, topic_id) for topic_id in topic_ids]
    topics = await aiometer.run_all(
        tasks,
        # max_per_second=1,
    )
    return topics


# Just get a slice of 15 of the topics in topic_ids for testing.
topics = asyncio.run(get_topic_post_streams(topic_ids[:15]))

Note that the max_per_second parameter is commented out, so the requests are sent with no rate limiting at all.

This completes in 2.05 s and 2 out of the 15 requests return 429.

When I run it with max_per_second=1, everything completes successfully.

Let me know if I can provide any more details. Thanks!

@Bas, here’s the JavaScript equivalent of the Python code to make this easier to reproduce using the dev tools console:

const DISCOURSE_URL = '';
const HEADERS = {
    'Api-Key': '',
    'Api-Username': '',
    'Content-Type': 'application/json'
};


const topicIds = Array.from({ length: 100 }, (_, i) => i + 1);

async function getTopicPostStream(topicId) {
    const url = `${DISCOURSE_URL}/t/${topicId}`;
    const response = await fetch(url, { headers: HEADERS });
    return response.status;
}

async function getTopicPostStreams(topicIds) {
    const results = await Promise.all(topicIds.map(topicId => getTopicPostStream(topicId)));
    return results;
}

// Don't rate limit the requests and see that you get two 429s.
(async () => {
    const topics = await getTopicPostStreams(topicIds.slice(0, 15));
    console.log(topics);
})();

async function getTopicPostStreamsRateLimited(topicIds) {
    const results = [];
    for (const topicId of topicIds) {
        const result = await getTopicPostStream(topicId);
        results.push(result);
        await new Promise(resolve => setTimeout(resolve, 1000)); // Delay for 1 second
    }
    return results;
}

// 1 request per second returns all 200s
(async () => {
    const topics = await getTopicPostStreamsRateLimited(topicIds.slice(0, 15));
    console.log(topics);
})();
1 Like

If I had to hazard a guess, this is most likely the issue. You aren’t being rate limited on the API, but on the basis of your IP.

You could look at the per-IP settings here: Available settings for global rate limits and throttling

Please let me know if that actually solves your issue :slight_smile:

1 Like

Thanks, @Bas!

It seems to me that I should not be getting these 429s regardless of any settings mentioned in that post. In the example I provided, I sent 15 requests, which is under all of the default API limits. I did this using an admin API key and username.

The example doesn’t exceed the following per-IP defaults:

It doesn’t even exceed the non-admin limits:

Changing DISCOURSE_MAX_REQS_PER_IP_MODE to warn or none did not help.

Am I missing something? :thinking:

BTW, I changed the settings by editing app.yml and running ./launcher destroy app && ./launcher start app.

I can see in /var/log/nginx/access.log that the IP address is correct, so I don’t think Discourse considers all requests to be coming from the same IP.

I can also see users’ IP addresses in the admin dashboard.

These are the settings I have modified:

  DISCOURSE_MAX_ADMIN_API_REQS_PER_MINUTE: 1200
  DISCOURSE_MAX_USER_API_REQS_PER_MINUTE: 60
  DISCOURSE_MAX_REQS_PER_IP_MODE: none
  DISCOURSE_MAX_REQS_PER_IP_PER_10_SECONDS: 100
  DISCOURSE_MAX_REQS_PER_IP_PER_MINUTE: 400

EDIT: I just checked the response contents of one of the failed requests and noticed that it mentioned nginx:

<html>\r\n<head><title>429 Too Many Requests</title></head>\r\n<body>\r\n<center><h1>429 Too Many Requests</h1></center>\r\n<hr><center>nginx</center>\r\n</body>\r\n</html>\r\n

I’ll do some more investigating on the topics that mention nginx.
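
In the meantime, here is a quick heuristic for telling the two 429 sources apart programmatically (a sketch of my own; `identify_429_source` is a name I made up, and it assumes nginx’s stock error page is bare HTML naming the server while Discourse’s application-level limiter answers API calls with a JSON body):

```python
def identify_429_source(body: str, content_type: str = "") -> str:
    """Rough heuristic for deciding which layer produced a 429."""
    # nginx's built-in error page is a bare HTML document that names the server.
    if "nginx" in body and "<html>" in body.lower():
        return "nginx"
    # Discourse's application-level limiter responds to API calls with JSON.
    if content_type.startswith("application/json"):
        return "discourse"
    return "unknown"


# The exact body from the failed request above:
nginx_body = (
    "<html>\r\n<head><title>429 Too Many Requests</title></head>\r\n<body>\r\n"
    "<center><h1>429 Too Many Requests</h1></center>\r\n"
    "<hr><center>nginx</center>\r\n</body>\r\n</html>\r\n"
)
print(identify_429_source(nginx_body))  # → nginx
```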

1 Like

The two relevant sections of the nginx config appear to be:

limit_req_zone $binary_remote_addr zone=flood:10m rate=12r/s;
limit_req_zone $binary_remote_addr zone=bot:10m rate=200r/m;
limit_req_status 429;
limit_conn_zone $binary_remote_addr zone=connperip:10m;
limit_conn_status 429;
server {
  listen 80;
  return 301 https://community.ankihub.net$request_uri;
}

and

  location @discourse {
    add_header Strict-Transport-Security 'max-age=31536000'; # remember the certificate for a year and automatically connect to HTTPS for this domain
    limit_conn connperip 20;
    limit_req zone=flood burst=12 nodelay;
    limit_req zone=bot burst=100 nodelay;
    proxy_set_header Host $http_host;
    proxy_set_header X-Request-Start "t=${msec}";
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $thescheme;
    proxy_pass http://discourse;
  }
}
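
Those two `limit_req` lines would explain the numbers from my earlier test. A simplified model of nginx’s leaky bucket with `nodelay` (my own sketch, ignoring nginx’s millisecond bookkeeping and the `bot` zone, which 15 requests never exceed) reproduces the “2 out of 15” result:

```python
def simulate_limit_req(arrival_times, rate, burst):
    """Model of nginx's limit_req leaky bucket with 'nodelay' for one client IP.

    The first request creates the state with zero excess; every later request
    adds 1 to the excess, which drains at `rate` requests per second, and is
    rejected (limit_req_status, i.e. 429) once the excess would exceed `burst`.
    Returns one bool per request (True = allowed).
    """
    excess, last_t, out = None, None, []
    for t in sorted(arrival_times):
        if excess is None:
            excess, last_t = 0.0, t  # first request is always allowed
            out.append(True)
            continue
        e = excess - (t - last_t) * rate + 1
        if e > burst:
            out.append(False)        # rejected; state is left unchanged
        else:
            excess, last_t = max(e, 0.0), t
            out.append(True)
    return out


# 15 requests fired at once against zone=flood (rate=12r/s, burst=12):
burst_results = simulate_limit_req([0.0] * 15, rate=12, burst=12)
print(burst_results.count(False))  # → 2 requests get a 429

# The same 15 requests paced at one per second all pass:
paced = simulate_limit_req([float(i) for i in range(15)], rate=12, burst=12)
print(paced.count(False))          # → 0
```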

My remaining questions now are:

  • Should I edit both sections in order to match my Discourse settings? Or just the values for location @discourse?

  • What’s the correct way to modify these values and persist the changes across rebuilds?
    I assume that I can edit the nginx config directly in the container then stop/start the container. But it looks like these values originally came from templates/web.ratelimited.template.yml and may be overwritten on a rebuild?
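
For persisting the change, one approach would be a pups `replace` rule in the `run:` section of app.yml, which re-applies the edit on every rebuild (a sketch; the filename, `from` pattern, and replacement rate here are assumptions to verify against your actual generated config):

```yaml
run:
  - replace:
      filename: "/etc/nginx/conf.d/discourse.conf"
      from: /rate=12r\/s/
      to: "rate=50r/s"
```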

Thanks so much for your help! :pray:

1 Like

Oof, now we’re getting outside of my comfort zone I’m afraid.

If you are being rate-limited by nginx, then yes, fiddling with those settings and making them less restrictive makes sense. I’m not sure if nginx can whitelist IP addresses outright, but note that `limit_req` and `limit_conn` aren’t allowed inside an `if` block, so the usual trick is to map whitelisted addresses to an empty zone key (nginx skips accounting for requests whose key is empty).

Something like

map $remote_addr $limit_key {
    default $binary_remote_addr;
    192.168.1.1 "";
    192.168.1.2 "";
}

and then use $limit_key as the key in the zone definitions:

limit_req_zone $limit_key zone=flood:10m rate=12r/s;
limit_req_zone $limit_key zone=bot:10m rate=200r/m;
limit_conn_zone $limit_key zone=connperip:10m;

Yes, you should do some replacing with pups during build to make this persistent; see, for instance, web.ssl.template.yml for how to approach that.

Or you could forget about this and make your API client script run more slowly by inserting some sleeps in strategic places. ← recommended approach
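
In that spirit, a minimal retry-with-backoff wrapper (my own sketch, not from the thread; `fetch` stands in for whatever makes the actual httpx request, and the delays are arbitrary):

```python
import time


def get_with_retry(fetch, max_attempts=5, base_delay=1.0, sleep=time.sleep):
    """Call fetch() until it returns something other than a 429.

    fetch: a zero-argument callable returning an object with a .status_code
    attribute. Sleeps base_delay, 2*base_delay, 4*base_delay, ... between
    attempts (exponential backoff) and gives up after max_attempts,
    returning the last response either way.
    """
    for attempt in range(max_attempts):
        response = fetch()
        if response.status_code != 429:
            return response
        if attempt < max_attempts - 1:
            sleep(base_delay * 2 ** attempt)
    return response
```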

4 Likes

Like in a rescue when it gets rate-limited. That’s what I like to think I usually do.

1 Like