Benchmarks: VPS vs dedicated/bare metal?

(ljpp) #1

Lately I have been planning a potential Discourse deployment that will need big resources. I am well aware of the benefits of virtualization, but for this particular case I find a dedicated server an interesting option. The topic is not discussed much here.

@mpalmer has mentioned earlier that he is using some dedis. UpCloud has published a benchmark comparing their $40 plan against a LeaseWeb Dell R210. According to that test, which is a couple of years old by now, the performance is very similar.

As an example, from Hetzner you get a Skylake Core i7 dedicated server with 32 GB RAM and 2x500 GB SSD for 39€ (+setup). So a quad-core CPU, far more RAM than a typical 40€/$ VPS, and a large disk. I am wondering how this kind of box, or something similar, works for Discourse? What is its real-life serving capability compared to typical VPS offerings?

With a VPS you can scale up if you run out of power, but with dedis you do not have this option. So I am trying to figure out how capable these boxes really are.

(Matt Palmer) #2

I find Hetzner’s dedicated server offerings to be excellent value for money, and their service is entirely adequate for a box-only hosting provider. I recommend them regularly to anyone who knows their way around a shell.

We (CDCK) run all our stuff on bare metal (via Docker containers, but there’s no virtualisation overhead in containers), because early on we found that both Xen and KVM virtualisation imposed something like a 20% overhead on running Rails apps in VMs – and, since all we’re doing on our boxes is running a multi-tenant hosting environment anyway, the added isolation really wasn’t worth 20% overhead on the apps.

As for exactly how that configuration of box will work, there’s only one way to find out: try it. If that Skylake is the i7-6700K, then that’s the same CPU we use in our machines, and it does the job very nicely. We run more RAM, but that’s mostly because we host a lot of sites, so we need more unicorns than a single large site would need for the same amount of traffic.

I really wouldn’t worry too much about not being able to quickly “scale up” your way out of performance problems. At a certain point, almost certainly before you hit the limits of the scale of box you’re describing, you’re doing enough traffic that you want redundancy anyway, and once you’ve got everything set up to use two boxes as webservers, you’ve got everything you need to use 20, if you need 'em. Databases benefit from “scale up” more than app servers do, but PostgreSQL will do a lot of work on that sort of hardware, and Hetzner have much bigger boxes you could use for (just) the database if you needed, while keeping a flock of those cheap screamers on hand for scale-out web tier work.

(ljpp) #3

So you are referring to this one?

So, roughly guesstimating, the dedi-server Skylake cores are about 66% faster than UpCloud’s cores (the best reported VPS performance), and you get the benefit of more RAM per $.

What kind of buffer and unicorn numbers would you apply for this kind of Hetzner server, Skylake i7 quad-core with 32GB RAM?

And trust me - the day I need to scale horizontally is the day I click the buy button at

(Matt Palmer) #4

I’d start by tuning PostgreSQL for 16 GB of available memory and provisioning 3 unicorns per core (that’s core, not hyperthread – so a total of 12). On a 32 GB box, that should leave you with about 8 GB of RAM for Redis, disk cache, overhead, and “whoops, that unicorn got a bit bigger than I was expecting”. It should also go without saying that you want 32–64 GB of swap on hand.
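In a standard discourse_docker setup, that starting point might look something like the following excerpt of `containers/app.yml`. The parameter names (`db_shared_buffers`, `db_work_mem`, `UNICORN_WORKERS`) are the stock ones from the sample template; the values simply mirror the numbers above and are a sketch to tune from, not a tested recommendation:

```yaml
# containers/app.yml (excerpt)
# 12 unicorns = 3 per physical core on a quad-core i7, and PostgreSQL
# buffers sized for a ~16 GB database memory budget on a 32 GB box.
params:
  db_shared_buffers: "4096MB"   # ~25% of the 16 GB earmarked for PostgreSQL
  db_work_mem: "40MB"           # illustrative; tune against real query load

env:
  UNICORN_WORKERS: 12           # per core, not per hyperthread
```

After editing, rebuilding the container (`./launcher rebuild app`) applies the change.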

Then, see what happens: if you seem to be bottlenecked on unicorns (and you’re not CPU bound), throw in a couple more. If your PostgreSQL queries start to chug, retune PostgreSQL to use a bit more RAM. At 39€/month, though, I’d very quickly consider splitting DB and web if that one box started to feel overloaded. It’s almost too cheap not to.

(ljpp) #5

Matt, I love your responses. Factual, down-to-earth, concrete, real value – something people can immediately apply in their projects. The web needs more Matt Palmers.