Thanks. It's DO (DigitalOcean), but I will go back and work on the SSL setup more. It's not Full (strict) but Full, which I think interacts with the Let's Encrypt cert.
I did enable bot and AI-bot mitigation, plus one very specific geo-server block.
Yeah, it is weird. Some of my sites like Full, some Full (strict), which is quite odd as they all run on the same web server. When turning CF on, check every few hours, and try a VPN so you bypass your ISP's DNS cache. I think one of my sites only works with Strict (SSL-only origin pull) and gives errors otherwise. If I get an SSL error I just change the setting to one that works and leave it there. Out of the roughly 30 websites I've put behind CF, on 100% of them I had to adjust that setting a day or so later.
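For what it's worth, the difference is that Full encrypts to the origin but accepts any certificate there, while Full (strict) requires a certificate that actually validates, which a current Let's Encrypt cert does. A minimal sketch (my own illustration, with placeholder hostname and IP) to check whether the origin certificate would pass strict validation:

```python
import socket, ssl

HOSTNAME = "example.com"    # placeholder: the site's hostname
ORIGIN_IP = "203.0.113.10"  # placeholder: the origin/droplet IP

# create_default_context() verifies the chain, expiry and hostname,
# which is essentially what Full (strict) requires of the origin cert.
ctx = ssl.create_default_context()

with socket.create_connection((ORIGIN_IP, 443), timeout=10) as sock:
    # server_hostname sends SNI for HOSTNAME even though we connect by IP;
    # an ssl.SSLCertVerificationError here means Full (strict) would fail.
    with ctx.wrap_socket(sock, server_hostname=HOSTNAME) as tls:
        print("Origin cert validates; Full (strict) should be OK.")
        print("Subject:", tls.getpeercert()["subject"])
```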
I have not come across any SSL errors while browsing, and that's hopping around on a VPN.
Just checked Qualys (SSL Labs) and got a B, four times in a row.
The IP wasn't protected all this time. That being said, once you drop CF in the middle there can be no more direct IP hits; do I have this right?
I will be changing the server IP regardless.
Kind of. They can't hit the site or database directly through Cloudflare, but the DNS history is public: https://dnshistory.org/ and other sites keep a record of it, so they could still DDoS the IP. That said, I find DDoS attacks on specific IPs rare if the attacker doesn't know what they host; it's a waste of their DDoS bandwidth because they can't see any results from what they're affecting.
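The corollary is that Cloudflare only hides the origin if the origin itself refuses non-Cloudflare traffic. A minimal sketch of one way to do that (my own illustration, assuming a Linux droplet with ufw; the URLs are Cloudflare's published range lists):

```python
# Fetch Cloudflare's published IP ranges and print ufw rules that allow
# web traffic only from Cloudflare, so the origin IP can't be hit directly.
import urllib.request

RANGE_URLS = [
    "https://www.cloudflare.com/ips-v4",
    "https://www.cloudflare.com/ips-v6",
]

for url in RANGE_URLS:
    with urllib.request.urlopen(url) as resp:
        for cidr in resp.read().decode().split():
            # allow HTTP/HTTPS from each Cloudflare range; deny everything else by default
            print(f"ufw allow proto tcp from {cidr} to any port 80,443")
```

Combined with changing the server IP (so the old DNS-history entries point at nothing), that closes off the direct-hit route.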
I have also noticed a problematic uptick that does not seem to be slowing down. It's clearly inorganic. Any advice before this becomes a real problem?
Wow. Your last dashboard graph looks remarkably like mine but you have more robust traffic volume.
Maybe the same timing too. The real increase started about 7 days ago.
It seemed like two phases as well.
First an uptick that sustained for a number of days, looking like a pleasant increase in traffic, followed by a phase-two surge.
Your geo locations are identical too.
It does look like the same wave.
Can’t post comparative graphics right now
The prime culprit appeared to be a Huawei ASN in Singapore. Once that was blocked, that traffic collapsed.
Another consequence: it also nearly immediately destroyed the AdSense metrics.
Huawei started investing in Singapore as an AI hub from 2019 onward.
The same Singapore Huawei ASN came in on the Mexico and Hong Kong traffic.
Does anyone know how that works? Spoofing location?
Also have a look at your "Other" columns on their own. It might be surprising, shocking, or both!
Even if you are not under a DDoS attack, I found this guide posted on Cloudflare's community (Discourse) forum useful for focus and possible action points to work through.
https://community.cloudflare.com/t/mitigating-an-http-ddos-attack-manually-with-cloudflare/302366
When the peak swarm traffic hit, the bounce rate jumped to over 80%; typically it sits in the low-to-mid 20% range with usual traffic.
During the swarm surge I can confirm average engagement time also dropped dramatically, just like your analytics @piffy, and there is a corresponding drop this time as well.
I do not normally look at that metric but did because you posted your graph. I think average engagement time, or bounce rate, are good flags that something is way off with the traffic quality and type.
Weirdly, I am seeing another jump to almost an 80% bounce rate on one day's traffic, but there is no surge. The traffic levels look normal, i.e. pre-surge.
Analytics were delayed; there is a bit of a surge, about 30% of the previous mega surge. Same pattern again. The only difference is that CF is managing the pressure and the server is ticking along, but this is not good as it is wrecking the user engagement metrics.
My solution was to geoblock all non-Portuguese-speaking countries, but I am still receiving high traffic from my own country, the USA, and Germany.
It just seems that Brazil is the biggest source of this type of attack, so my attempt failed. I have 20 active members but a rate of about 2M requests per month.
Unbelievably, my instance continues receiving traffic from the Fediverse even with the plugin disabled. I'm tired and I have no idea how to solve it.
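For reference, a country allowlist like the one described above can be expressed as a single WAF custom rule along these lines (a sketch on my part, not the poster's exact rule; BR and PT are example codes, and whether to Block or Managed Challenge is a matter of taste):

```
Expression (action: Block or Managed Challenge):
  (not ip.geoip.country in {"BR" "PT"})
```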
In terms of Cloudflare mitigation, what is working for me:
Bots
WAF Rule 1)
ASN - JS challenge Applied / ASN = 136907
(locations in order of most traffic)
Anyone such as @piffy, check to see if the same ASN is hitting you too. This looked like real traffic in Google Analytics; it smashed the bounce rate to over 80% and user engagement collapsed. It will mess up your AdSense RPM / CPC too, afaict.
WAF Rule 2)
Geo JS challenge Applied (in order of most traffic)
There seemed to be a new uptick in Cloudflare servings and again the same geo-regions are at the top, so now I have applied a geo JS Challenge to those top three offenders.
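For anyone who wants to replicate these, this is roughly how the two rules translate into the WAF custom rule expression editor (my own sketch rather than the exact rules; the ASN is the one named above, and SG/HK/MX stand in for the top geo offenders mentioned in this thread, so substitute your own):

```
WAF Rule 1 expression (action: JS Challenge):
  (ip.geoip.asnum eq 136907)

WAF Rule 2 expression (action: JS Challenge):
  (ip.geoip.country in {"SG" "HK" "MX"})
```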
This is an intervention mainly to restore analytics / user-engagement metric health. The traffic is not putting pressure on the server anymore, since CF handles so much via caching and bot management, but overall the metrics are being really screwed up. This is belligerent traffic.
I will update if I see improvement in analytics metrics / adsense etc.
I did it, and now I am back to blocking all countries again, with other rules enabled. For a while Managed Challenge was enough and I saw the attack sources slow down.
Now my engagement rate has grown 131% and the bounce rate is down 16%. My guess is the Play Store is boosting it, so I need to wait a couple of weeks to see if this growth is bots or legitimate traffic.
WAF Rule No. 2 has crushed the extra traffic using the geo-applied JS Challenge.
Before, the ratio was approx 4:1 Cloudflare:origin server.
Now it's closer to 1:1 serving.
These regions were still sending a ton of traffic.
Analytics was spitting out anomaly alerts and the metrics were still all over the place.
AdSense was really wrecked; page RPM was nearly at 0.00 with the surge. I guess AdSense was detecting this as suspicious traffic and pulled metering.
Dashboard
This is how the dashboard views looked; the last 5 days are lower because CF mitigations were deployed from that point on. Otherwise, there is no reason why this traffic was going to stop increasing.
For perspective, the Other (red) page-view traffic on the highest day was nearly 10 times the volume of the Anon (green) page views. Let that sink in.
Fingers crossed this balances out over the next 2/3 days.
JS Challenge doesn't have the same effect as Managed. I did almost the same thing by applying hotlink blocking to media and libraries like CSS and JS; it works off the referer (opening direct access from a tab = blocked). This helped me reduce bandwidth and CPU use.
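A sketch of that kind of referer-based hotlink rule as a WAF custom rule expression (my own illustration; example.com and the /uploads/ path are placeholders for your own domain and asset locations, and note that, as described above, direct access with no referer gets blocked too):

```
Expression (action: Block):
  (http.request.uri.path contains "/uploads/"
   or http.request.uri.path contains ".css"
   or http.request.uri.path contains ".js")
  and not http.referer contains "example.com"
```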
I will leave JS Challenge in place for some time because so far the CSR (Challenge Solved Rate) = 0%
I will experiment with Managed after more traffic is under the bridge and/or the CSR starts to fail.
Right now the CF:OS serving ratio is looking like an even tighter 1:1 ratio.
Mitigated traffic has taken its place and ballooned.
What I wonder is why the attacks continue if they are getting blocked by the JS Challenge after x period (not very sophisticated after all?), and whether there will be an attempt to come in via other ASNs and geo locations.
If this happens then I will try Managed on any new vectors of attack.
Because JS Challenge has the same objective as Managed Challenge but worse efficiency, maybe it is deprecated; that option doesn't show me a captcha, it just waves me through as legit traffic like Managed does.
I found this topic on Meta which I think is of deep interest and is probably best served as its own topic: Cloudflare Security WAF (Web Application Firewall) + Discourse?
Another resource that might be useful - https://radar.cloudflare.com/
I said I would update the topic so here is the update.
Overall Cloudflare mitigation on the free tier has worked out very well as a solution for now, i.e. the immediate short term.
In one sense I wish I had known earlier that Cloudflare was OK to use with Discourse, but I somehow missed that.
The various Cloudflare mitigations brought an almost immediate stop to the spurious traffic from the Singapore, Hong Kong and Mexico (probably spoofed) locations.
Yesterday and today showed a trend where the same traffic sources look like they have stopped trying, as the volume has dropped off a cliff. This is about how long it took for them to maybe give up.
However, it is still early days.
I can also identify other short burst attacks / bot traffic more easily.
I think Cloudflare eventually mitigates these attacks after about 30-60 minutes. These Cloudflare-served surges are the place to focus on; they make picking out source ASNs or geo locations much easier so they can be added to the block list or whatever WAF rule.
The radar link https://radar.cloudflare.com/bots/ was really useful for gauging an ASN's quality. I picked out some WOW server swarms, which I assume is a World of Warcraft server? That one kept popping up in surges.
I have noticed that the traffic, now back to more usual levels, also looks steadier, with a more even rhythm.
The AdSense metrics have improved almost back to normal, or are at least recovering (i.e. shyte RPM is better than no RPM!).
This was probably the first time I had seen traffic put the server under so much pressure that the service was struggling. Apart from other mitigations like changing the server's IP, Cloudflare has been great as a quick solution and a way to manage things in the moment, especially when you do not have time to tinker, or the skills to get under the hood with nginx etc., while under some kind of growing attack. It has allowed a min-spec droplet to run close to idle, giving it headroom beyond its specs.