What is the best Docker storage backend to use on a VPS?

From what I have found, these are the available storage drivers:

  1. Union File Systems - aufs and overlayfs

    I’m currently using overlayfs and it seems to work OK, but I’m not sure about it as I’ve read that it can exhaust inodes, which can cause problems.

  2. Snapshot Enabled File System - btrfs

    This looks like an interesting option but it isn’t recommended for production use.

  3. Device Mapper Loopback

    This uses Device Mapper over loopback-mounted image files. From what I’ve read, this is slow and not recommended.

  4. Device Mapper using raw block devices

    This option uses Device Mapper, usually with LVM, on raw block devices. It seems to be better than the loopback option, but by how much I don’t know. Also, on a Virtual Private Server (VPS) the raw block device is actually a virtual image file, so I don’t know for sure if this helps, as the raw block device is still not really a hard disk. Then again, on a VPS no hard disks are real, so it may not matter.


aufs, or overlay if you are feeling adventurous; avoid all the others.


OK. I had thought device-mapper might only be an issue with loopback images but would be fine with raw block devices. It seems that loopback is not the main issue with the device-mapper backend.
At the moment I use overlay, as I think I would need to compile my own kernel to use aufs (which I can do, it’s just more convenient to use the pre-compiled kernel).

I have personally had lots of luck with overlay; performance is better than with aufs. You’ve got to watch out for inode counts though.
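For example, inode usage can be checked with `df -i` (assuming the default `/var/lib/docker` data root; the path may differ on your setup):

```shell
# Report inode usage (the IUse% column) for the filesystem holding
# Docker's data; /var/lib/docker is the default data root.
# Fall back to / if that directory doesn't exist yet.
df -i /var/lib/docker 2>/dev/null || df -i /
```

If IUse% creeps toward 100% while regular `df -h` still shows free space, you are hitting the overlay inode problem.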

Never, ever use device mapper. That is a one way ticket to corruption town, sooner or later. It is buggy as hell and has been for a year despite Red Hat saying “it works now” over and over.

Interesting; in theory device-mapper seems like a good option.
If in practice it just causes problems, then I wonder why it’s still an available option in Docker and preferred over the overlayfs option when both are available.

I don’t know, but you can ask the support customer whose corrupted install on CentOS (and thus devicemapper) we just had to recover from a backup…

For some context see:




I think we have made a lot of progress on improving the devicemapper graph driver. A lot of issues are because of the static binary, and I think many of the issues are configuration issues. Based on configuration, one can easily switch between a loop-based thin pool, a thin pool on block devices, or an external LVM thin pool. If such a switch happens, the data in /var/lib/docker/devicemapper/ is stale and all sorts of errors will happen.

To make things a little better and detect configuration issues early, I have created the following PR. This should help a bit, I think.
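As an illustration of the stale-state problem described above, a quick (hypothetical) sanity check for leftover devicemapper data before changing pool configuration might look like this, assuming the default `/var/lib/docker` data root:

```shell
# Hypothetical check: see whether devicemapper state already exists
# under the default Docker data root. If it does and the pool
# configuration has changed, that state is stale and should be
# cleared (after backing up) before restarting the daemon.
if [ -d /var/lib/docker/devicemapper ]; then
  echo "devicemapper state present"
else
  echo "no devicemapper state"
fi
```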


For our most recent corruption, I don’t know if the Docker binary was statically or dynamically compiled, but I do know it was running the latest Docker.

We have been burnt so many times by device mapper that my confidence is totally eroded.


That is helpful information.

I now see there are two ways to interpret what I wrote.
I was not questioning whether you were correct; I was questioning why the Docker developers would not either remove the device-mapper backend or at least move it to the bottom of the storage backend priority list.
Looking at the link quoted above, it seems someone was trying to get the overlayfs backend moved higher up in the priority list.
It appears from that link that overlayfs has some issues that still need to be fixed before it can come first, though it should at least be a higher priority than the vfs backend, as the vfs backend uses up all available hard disk space very quickly.


To answer the question I asked about a year ago: the best storage backend driver appears to be overlay2, which was added in Docker 1.12. According to the Docker page comparing overlay and overlay2, the overlay2 driver “addresses known limitations with inode exhaustion and commit performance” of the overlay driver.

It is, however, only compatible with Linux kernel 4.0 and newer, but that isn’t a problem with my VPS as it runs a recent kernel (4.7 at the moment).
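For anyone landing here later, a rough sketch of switching to overlay2 (assuming Docker 1.12+ with daemon.json support; note that existing images are not migrated between storage drivers, so export anything you need first):

```shell
# Check the running kernel is 4.0 or newer before enabling overlay2
uname -r

# Then select the driver in /etc/docker/daemon.json:
#   { "storage-driver": "overlay2" }
# and restart the Docker daemon.
```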

I didn’t really get around to working on my web site that much in the last year, so the fact that it took about a year for this driver to come into existence didn’t end up being a problem.