After opening the emoji selector in the post editor, Discourse locks users out once they load another page (for example, after submitting a post):
429 Too Many Requests
Your IP address has made too many requests to this service over a short amount of time. Please wait a few minutes and try again.
If you are behind a proxy or coming from a large company, you and many other users may appear to be the same person. Please let us know of recurring problems by email: firstname.lastname@example.org
It seems it was not the user trying to DDoS Discourse, but Discourse itself. If there is a way, maybe requests for static resources (like emojis) should not count toward the limit? Or all emojis could be combined into a sprite?
There is very little point in a sprite, because we have HTTP/2.
Can you clarify: is this happening on meta.discourse.org or on some other site we are hosting?
If it’s on meta then yes, we need to add a bypass for emojis to our special rate limiting here; https://github.com/discourse/discourse_docker/blob/master/templates/web.ratelimited.template.yml is too indiscriminate.
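A bypass like this usually means carving out a `location` block that skips the `limit_req` zone. The sketch below is illustrative only: the zone name, rate, and emoji path are assumptions, not the actual values in web.ratelimited.template.yml.

```nginx
# Sketch only: exempt emoji assets from the per-IP request limit.
# Zone name, rate, and paths here are hypothetical.
limit_req_zone $binary_remote_addr zone=flood:10m rate=12r/s;

server {
  location /images/emoji/ {
    # No limit_req directive here, so emoji requests
    # never touch the flood zone.
    try_files $uri =404;
  }

  location / {
    # Everything else still gets rate limited per IP.
    limit_req zone=flood burst=12 nodelay;
    # proxy_pass to the app would go here
  }
}
```

Because nginx matches the most specific `location` first, a burst of emoji image requests from one IP no longer consumes the same budget as page loads.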
One plan we have is to shift to a global rate limit in the app itself; that would sidestep this entirely, and the NGINX rate limit could then be much higher.
You will need to clarify this part first @Fredo.
It’s hosted by Discourse (https://forums.sketchup.com/). To reproduce it, you have to scroll quickly through all the emojis.
FTR I was able to reproduce it on https://forums.sketchup.com by scrolling through the emojis.
I am leaving this assigned to me for now; we plan to improve this in two ways:

- whitelist emoji requests from the CDN so they do not participate in the rate limiting
- centralize emoji storage across all our sites so there is only one CDN source for emojis
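The second point amounts to every hosted site resolving an emoji name to the same canonical URL, so the CDN and browsers cache each image exactly once. A tiny sketch, assuming a hypothetical shared host; Discourse's actual hostnames and path scheme may differ.

```python
# Hypothetical shared CDN host, for illustration only.
SHARED_EMOJI_CDN = "https://emoji-cdn.example.com"

def emoji_url(name: str, version: int = 1) -> str:
    """One canonical URL per emoji, identical across all hosted
    sites, instead of a per-site copy under each site's own CDN."""
    return f"{SHARED_EMOJI_CDN}/v{version}/{name}.png"

print(emoji_url("smile"))  # → https://emoji-cdn.example.com/v1/smile.png
```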
We need to centralize emoji paths and avatar paths (on Discourse hosting), so this is a to-do we still need to get to.
We have done a lot here to improve the situation: better rules at our CDN and looser limits for emojis.
The fundamental fix, having all our hosting use one location for all emojis, is not done yet, and I can not see us scheduling it any time soon because of the sheer scale of such a change.
Since there have been no recent occurrences of this issue, I think we can consider it fixed.