Errno::ENOSPC: No space left on device

Hi there,

I’m trying to rebuild Discourse but I’m running into this error:

FAILED
--------------------
RuntimeError: cd /var/www/discourse && su discourse -c 'bundle exec rake assets:precompile' failed with return #<Process::Status: pid 4266 exit 1>
Location of failure: /pups/lib/pups/exec_command.rb:105:in `spawn'
exec failed with the params {"cd"=>"$home", "hook"=>"bundle_exec", "cmd"=>["su discourse -c 'bundle install --deployment --verbose --without test --without development'", "su discourse -c 'bundle exec rake db:migrate'", "su discourse -c 'bundle exec rake assets:precompile'"]}
702e319df6b2f9a477ab0ad6eebdd689ad0b7bab97770d4c80fd82a73f3a95dc
** FAILED TO BOOTSTRAP ** please scroll up and look for earlier error messages, there may be more than one

Scrolling up I found:

Errno::ENOSPC: No space left on device - /var/www/discourse/tmp/cache/assets/production/sprockets/c88f54c1a16e5204b2aebda1caf30657

I tried locating the /var/www/discourse/ directory, but it doesn’t exist:

ubuntu@discourse:/var/discourse$ cd /var/www/discourse
-bash: cd: /var/www/discourse: No such file or directory

Any thoughts on how to recover from this?

Weirdly, I do indeed seem to be out of disk space:

ubuntu@discourse:/var/discourse$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      7.8G  6.7G  700M  91% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
udev            996M   12K  996M   1% /dev
tmpfs           201M  356K  200M   1% /run
none            5.0M     0  5.0M   0% /run/lock
none           1001M     0 1001M   0% /run/shm
none            100M     0  100M   0% /run/user

How did this happen? This is a new install with less than 50 topics.

I have built and rebuilt the app a few times; could that be the reason I ran out of disk space? If so, the question is: why? Could it be because of temporary files that were not being deleted?

I thought /var/www/discourse was a good place to start, but it seems like it isn’t.

Where should I look?

Can you start with a

./launcher cleanup

I bet you have 2 docker images going
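
For anyone following along, a quick way to check whether leftover images are piling up (standard Docker CLI commands, nothing Discourse-specific):

# List all images; untagged <none> entries are usually leftovers from old builds
sudo docker images
# Show only dangling (untagged) images
sudo docker images --filter "dangling=true"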

Thanks, @sam, but that unfortunately didn’t solve the problem:

ubuntu@discourse:/var$ cd /var/discourse/
ubuntu@discourse:/var/discourse$ sudo ./launcher cleanup

The following command will
- Delete all docker images for old containers
- Delete all stopped and orphan containers

Are you sure (Y/n): Y
Starting Cleanup (bytes free 716712)
date: invalid date '"2015-09-28 10:45:1'
scripts/docker-gc: line 69: 1443439271 - : syntax error: operand expected (error token is "- ")
Finished Cleanup (bytes free 716692)

It’s still using a ton of space:

ubuntu@discourse:/var/discourse$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      7.8G  6.7G  700M  91% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
udev            996M   12K  996M   1% /dev
tmpfs           201M  356K  200M   1% /run
none            5.0M     0  5.0M   0% /run/lock
none           1001M     0 1001M   0% /run/shm
none            100M     0  100M   0% /run/user

I need to start running cleanup from a lightweight image …

What does docker images return? Do a docker rmi on the old discourse image.

Also, 7 GB is really tight considering our images are 1.5 GB (once you factor in bootstrapping and so on).
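
For reference, removing a single old image by ID looks like this (the ID is a placeholder; substitute one from the docker images output):

# Remove one image by its ID (placeholder; this fails if a container still uses it)
sudo docker rmi <IMAGE_ID>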

ubuntu@discourse:/var/discourse$ sudo docker images
REPOSITORY             TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
local_discourse/app    latest              9698d6e190bb        49 minutes ago      1.929 GB
<none>                 <none>              48086d6c592e        2 days ago          1.928 GB
<none>                 <none>              537dec28027b        2 days ago          1.928 GB
<none>                 <none>              aa2b4af001ab        3 days ago          1.928 GB
samsaffron/discourse   1.0.13              27f52292c186        6 days ago          1.238 GB

Why are all those images there?!

do a docker rmi on those images.
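
A one-liner for clearing out all of the untagged <none> images at once (a sketch; it assumes none of them are still used by a running container):

# Remove every dangling (untagged) image in one go; xargs -r skips the rmi if the list is empty
sudo docker images --filter "dangling=true" -q | xargs -r sudo docker rmi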

Done. All deleted. Here’s the disk space situation:

ubuntu@discourse:/var/discourse$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      7.8G  4.5G  2.9G  61% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
udev            996M   12K  996M   1% /dev
tmpfs           201M  356K  200M   1% /run
none            5.0M     0  5.0M   0% /run/lock
none           1001M     0 1001M   0% /run/shm
none            100M     0  100M   0% /run/user

Question is: why did I have 5 images?

My guess is that these failed builds are somehow leaving behind an image; we should investigate that at some point, @mpalmer.

Another possibility is that you are running an old version of bash and launcher is relying on features you do not have.
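
If it helps, the versions in question can be checked with standard commands (nothing launcher-specific):

# bash version launcher runs under
bash --version | head -1
# Docker client and daemon versions
sudo docker version
# Ubuntu release
lsb_release -a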

But the first failure was caused by lack of disk space. There was no other cause for failure, which suggests to me that your assumption is not correct. Are we missing something else here?

Would you like me to provide more info on their versions? Let me know how and I’ll happily do that.

Thanks again, @sam, for your help. The forum is back up now!

This is still a problem.

Today I hit the same Errno::ENOSPC: No space left on device error.

I had rebuilt the app a few times before the error occurred. ./launcher cleanup helped.

@sam I’m trying to remove an existing image, but I can’t seem to delete it:

root@ip-172-31-27-60:/var/discourse# docker images
REPOSITORY            TAG                 IMAGE ID            CREATED             SIZE
local_discourse/app   latest              2bc4af777b30        12 days ago         2.284 GB
discourse/discourse   1.3.8               7f8853cc1cb9        4 weeks ago         1.534 GB

root@ip-172-31-27-60:/var/discourse# docker rmi 7f8853cc1cb9
Error response from daemon: conflict: unable to delete 7f8853cc1cb9 (cannot be forced) - image has dependent child images
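
For context (the usual cause, not spelled out in the thread): local_discourse/app is built on top of discourse/discourse:1.3.8, so Docker refuses to remove the base image while the child exists, and because the layers are shared, deleting it would reclaim little space anyway. The layer relationship can be inspected with:

# Show the layers that make up local_discourse/app; the lower layers come from the base image
sudo docker history local_discourse/app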

I’m not sure how else to free up space on my system (I’m not exactly a tech guru)

root@ip-172-31-27-60:/var/discourse# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            476M     0  476M   0% /dev
tmpfs           100M   11M   89M  11% /run
/dev/xvda1      7.8G  7.1G  253M  97% /
tmpfs           496M  1.3M  495M   1% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           496M     0  496M   0% /sys/fs/cgroup
none            7.8G  7.1G  253M  97% /var/lib/docker/aufs/mnt/7a2649494208457846b47f6209a368e3986d8b2bfaf3ff9b2dcb25b14f3ece0d
shm              64M  4.0K   64M   1% /var/lib/docker/containers/90f51aff6b681c0b16084c37c3934d117f006dbdbeb9465ab87fa97db6d1983b/shm
tmpfs           100M     0  100M   0% /run/user/1001

Any suggestions?

I have very limited free space on my Discourse server right now and I’m afraid it may crash or something.
I understand my total space is quite small, but that’s because I’m hosting on AWS (a t2.micro instance).
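
One generic way to see where the space is actually going (plain GNU coreutils; the paths below are the usual suspects on a Docker-based install, not confirmed from this output):

# Largest top-level directories on the root filesystem
sudo du -xh --max-depth=1 / 2>/dev/null | sort -h | tail -15
# Docker's image and container storage is usually the biggest consumer
sudo du -sh /var/lib/docker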

Also I get this error when I try to run cleanup:

root@ip-172-31-27-60:/var/discourse# sudo ./launcher cleanup

WARNING: We are about to start downloading the Discourse base image
This process may take anywhere between a few minutes to an hour, depending on your network speed

Please be patient

Unable to find image 'discourse/discourse:1.3.9' locally
1.3.9: Pulling from discourse/discourse
b87f06441b40: Pulling fs layer
69c598d5b6ca: Pulling fs layer
b87f06441b40: Verifying Checksum
b87f06441b40: Download complete
69c598d5b6ca: Verifying Checksum
69c598d5b6ca: Download complete
/usr/bin/docker: failed to register layer: Error processing tar file(exit status 1): write /var/lib/dpkg/status-old: no space left on device.
See '/usr/bin/docker run --help'.
Your Docker installation is not working correctly

See: How do I debug docker installation issues

Do a backup and then:

I wanted to log in to the admin UI from the browser just now, but I think it may be broken due to the lack of space?
(The whole layout of the page is suddenly wrong.)

Is this the guide I should follow? I already have a good amount of data in my Discourse and I don’t want to mess it up.

Managed to fix the issue:

  • expanded to a larger volume on AWS, which allowed me to run cleanup and also rebuild the app

Initially I got a 502 Bad Gateway, but that was also fixed after rebuilding the app.
It seems to be working normally now. Thanks, @Falco!
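
For anyone landing here with the same setup: after enlarging the EBS volume, the filesystem usually still needs to be grown to match. A rough sketch, assuming a volume that can be resized in place, an ext4 root filesystem on /dev/xvda1, and the cloud-guest-utils package installed (all assumptions, not confirmed in this thread):

# Grow partition 1 of the root disk to fill the enlarged EBS volume
sudo growpart /dev/xvda 1
# Resize the ext4 filesystem to use the new partition size
sudo resize2fs /dev/xvda1
# Confirm the extra space is visible
df -h /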

Hi, I’m running into the same problem. Do you mean I should ask the KVM provider for more space first?

Do you have any other method to solve the “No space left” problem?

Yours

If you are still low on space after a successful cleanup, you need to buy more space.

This topic was automatically closed after 3176 days. New replies are no longer allowed.