I wrote my own Docker backup tool

You would think that there is a solid solution for backing up Docker volumes, but there isn't. I wrote a tool to solve this problem for myself.

2023-07-11


Containerized deployments are everywhere, and Docker is a common choice for small deployments and individual users. Despite that popularity, there is no easy way to back up your containers. I wrote a tool to solve this problem.

Where we are

Docker and similar containerization tools are popular and see widespread use. You will find Docker daemons running on small ARM-based NAS devices and on large dedicated servers, run by professionals and hobbyists alike, so let’s reduce the scope of this article to a specific subset of environments. Other environments will have different restrictions and requirements and will not be covered here.

I do not manage an entire cluster of servers, nor do I have a single product that I need to maintain. Instead I run a single server at home and one or two rented servers. These servers mostly handle personal projects, but also some services that are used by a few friends or other people. This means that more often than not, I have to run whatever image is provided by the authors of the software I want to run. In some cases I can build the image myself, but I do not have the time to maintain an image for every piece of software I use. Some of these applications were never built to work in a containerized environment, and adding support is unfeasible. Similarly, while there are higher-level tools like Kubernetes or Docker Swarm, they are intended for large deployments of homogeneous hardware, usually in a data center with ample bandwidth between the nodes and unified storage. In my case, I have had great success with Docker Compose and a thin layer of shell scripts to manage updates and configuration.

This allows me to keep my entire infrastructure in a single git repository and deploy it on any server I want. What it doesn’t allow me to do, however, is the same with my data. Docker introduced a great concept for separating the image from its data: volumes. There are also bind mounts, but they are prone to permission errors and make it way too easy to shoot yourself in the foot, so I’m not discussing them further; if you think they solve this problem, they don’t. Volumes, on the other hand, are conceptually very simple: you take the same image, provide the same volume, and boom, you have restored your application. While ideally your application would connect to a database or some other storage backend with easy backup capabilities, this is almost never the case. This leaves us with volumes as the common denominator, as we can’t possibly know how the application stores its data.
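
To make this concrete, here is a minimal sketch of the idea (image, volume and path names are just examples): "restoring" the application amounts to attaching the same named volume to the same image again.

```sh
# Create a named volume and start the application with it.
docker volume create app-data
docker run -d --name app -v app-data:/var/lib/app nginx:alpine

# Restoring conceptually means nothing more than providing the same volume
# to the same image again, on this host or on any host the volume data has
# been transferred to.
docker rm -f app
docker run -d --name app -v app-data:/var/lib/app nginx:alpine
```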

The problem

Docker provides no way to back up volumes. You could theoretically just copy the entire /var/lib/docker or /var/lib/docker/volumes directory, but this removes the ability to selectively target individual volumes. You certainly don’t want to waste backup space on the cache of your webserver or the logs of your database. Additionally, and this is the main problem, you can’t back up volumes of running containers, which is also true if you use bind mounts. Assuming we have no knowledge of the application, we can’t know whether it is safe to back up the volume while the container is running. So our easiest option is to stop the container, back up the volume and then start the container again. There is an obvious problem with this approach, as it introduces downtime. However, I argue that any service too important to have downtime will also have redundancy. If it doesn’t, you should probably fix that first, and once you have, why worry about downtime? Apart from that, I don’t consider a small downtime for backups an issue.
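
A hand-rolled version of this stop/backup/start cycle looks roughly like this; container and volume names are placeholders, and plain tar stands in for whatever backup tool you prefer.

```sh
# Stop the container so the volume is not written to during the backup.
docker stop myapp

# Run a throwaway container that mounts the volume read-only and writes a
# tarball to a host directory.
docker run --rm \
  -v myapp-data:/data:ro \
  -v "$(pwd)/backups:/backup" \
  alpine tar -czf /backup/myapp-data.tar.gz -C /data .

# Bring the application back up.
docker start myapp
```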

With all of this said, the path is clear: we need to know which volumes contain important data and which containers depend on them, and then we need to stop the containers, back up the volumes and start the containers again, ideally taking down entire services at once rather than individual containers. The actual problem, and the reason I wrote my own tool, is that no one has done this before, or I’m the only person who does it this way. You could of course just write a shell script, hardcode the entire process and be done with it, but then you need to update it every time your infrastructure changes. You also need to find a way to access volume data, as /var/lib/docker/volumes is actually an implementation detail and not meant to be accessed directly, which you will learn the hard way when you need to restore SELinux contexts or use alternative storage backends. And last but not least, you also need to perform the actual backup, ideally with incremental backups and encryption to an offsite location. This step might be solved by using one of the many existing backup tools, but these are not designed to work with Docker volumes.
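
The information needed for this can be queried from the Docker daemon itself; the CLI equivalents look roughly like this (names are placeholders):

```sh
# Which containers (running or stopped) reference a given volume?
docker ps -a --filter volume=myapp-data --format '{{.Names}}'

# Which named volumes does a given container mount?
docker inspect \
  -f '{{range .Mounts}}{{if eq .Type "volume"}}{{.Name}}{{"\n"}}{{end}}{{end}}' \
  myapp
```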

Entering: salvage

As outlined above (fancy, how everything naturally falls into place), salvage solves all of these problems. salvage is a tool designed to identify the volumes you want to back up, collect the containers that depend on them, stop those containers, back up the volumes and start the containers again, restoring the previous state. salvage interfaces directly with the Docker daemon, so it can query which containers depend on which volumes and can control their state in the intended way. It is configured by labels attached to the containers, so containers can tell salvage whether they even need to be stopped, or whether they can be backed up while running. salvage also has basic knowledge of docker-compose and can identify containers that belong to the same service, so the entire service is stopped and backed up at once.
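
To give a rough idea of what label-based configuration looks like: the label names below are made up for illustration only, the real ones are documented in the salvage repository.

```sh
# Hypothetical example: mark a container as relevant for backups and tell
# salvage that it has to be stopped while its volumes are backed up.
docker run -d --name myapp \
  -v myapp-data:/var/lib/app \
  --label example.salvage.backup=true \
  --label example.salvage.stop=true \
  myorg/myapp:latest
```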

Instead of implementing the actual backup, salvage uses the Docker daemon itself to simply attach the volumes to backup containers, which then perform the actual backup. This design allows salvage to be used with any backup tool, as long as it can run in a container and read a few environment variables that salvage provides, a task that is trivial to do with a few lines of shell scripting. Note that the backup tool does not need to be container-aware; even tar will work. For more elaborate setups, salvage can also mount additional volumes into the backup container, so you can for example mount remote storage or a network share, as well as configuration files or caches.
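
A minimal backup container entrypoint could look like the sketch below. The environment variable names and mount points are assumptions standing in for whatever salvage actually provides; check the documentation for the real interface.

```sh
#!/bin/sh
# Sketch of a backup container entrypoint. VOLUME_NAME, SOURCE_DIR and
# TARGET_DIR are hypothetical names, not necessarily what salvage sets.
set -eu

VOLUME_NAME="${VOLUME_NAME:?no volume name provided}"
SOURCE_DIR="${SOURCE_DIR:-/data}"      # where the volume to back up is mounted
TARGET_DIR="${TARGET_DIR:-/backup}"    # e.g. an additionally mounted storage volume

# Any tool works here; plain tar is enough for a proof of concept.
tar -czf "${TARGET_DIR}/${VOLUME_NAME}-$(date +%F).tar.gz" -C "${SOURCE_DIR}" .
```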

Limitations

If you have read this far and are familiar with Docker, you might have noticed that salvage is not a perfect solution. First of all, it is not possible to prevent outside interference with the Docker daemon. This means that other monitoring tools or even humans can start or stop containers while salvage is running. Additionally, salvage currently has no way of preventing automated host restarts. It seems possible to use systemd inhibitors to prevent system shutdown, but only on systemd-based systems with D-Bus, which is not available everywhere, and I consider the impact of this limitation to be low. Most backup tools are already capable of handling interrupted backups, in which case a later backup will somewhat mitigate the issue.
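
For the curious, a shutdown inhibitor would look roughly like this; this is a sketch of the idea, not something salvage does today.

```sh
# Hold a systemd shutdown inhibitor lock for the duration of the backup run,
# delaying or blocking a shutdown until the command exits.
systemd-inhibit --what=shutdown --who=salvage --why="backup in progress" \
  /path/to/backup-run   # placeholder for the actual backup invocation
```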

Reliability of docker-compose

The docker-compose binary uses labels for certain tasks. These labels include the name of the project, which is the name of the directory containing the docker-compose.yml file (unless overridden). Another label contains the path of the docker-compose.yml file itself. Mapping volumes inside the volumes section to the real volume names on the Docker daemon is done by some arcane logic that only works as long as nothing changes. The extent to which docker-compose is able to handle changes is unknown to me, and I have no idea what could happen if you rename a project or move the docker-compose.yml file. As such, the entire detection of containers, the resolution of project-scoped volume names and so on relies on crude assumptions and might break in the future or in edge cases, yet it never has in over a year of use.
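
These labels can be inspected directly on any container started by docker-compose; the container name here is a placeholder, the label keys are the ones set by docker-compose itself.

```sh
docker inspect -f '{{index .Config.Labels "com.docker.compose.project"}}' myapp
docker inspect -f '{{index .Config.Labels "com.docker.compose.service"}}' myapp
docker inspect -f '{{index .Config.Labels "com.docker.compose.project.working_dir"}}' myapp
```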

Unfavorable applications

Some applications exhibit unfavorable behavior that greatly reduces the usefulness of salvage. For example, databases will constantly update binary files and write transaction logs, and sometimes even store cached data. Even when the database is stopped for the backup, those files are constantly updated and changed in between, which makes it hard or impossible for backup tools to efficiently detect changes, resulting in large backups. Poorly written applications might also write logs to the volume, which again results in large backups. Some of these cases can be solved by simply excluding certain files, provided the backup container supports this. While this is usually the case, it requires knowledge of the application, something we wanted to avoid in the first place since it breaks the black-box approach.
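
With a tar-based backup container, for example, excluding such paths is a one-liner; the paths below are examples, which ones make sense depends entirely on the application.

```sh
# Exclude caches, logs and temporary files from the archive. Excluded
# directories are skipped entirely, including their contents.
tar -czf /backup/myapp-data.tar.gz \
  --exclude='./cache' \
  --exclude='./logs' \
  --exclude='*.tmp' \
  -C /data .
```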

Restore

As shocking as it might sound, salvage does not support restoring backups. Since salvage injects additional metadata into the backup container, it would at least have to be able to extract this metadata from the backup in order to recreate the volume on the Docker daemon. The very same metadata is also used to even know which volume is contained within the backup, so salvage cannot know the volume name before invoking the backup container at least once. This puts the backup container in charge of even finding the backup in the first place, a task that ultimately requires manual intervention. While all of this is certainly possible, the majority of the work would be done by the backup container anyway; salvage would basically just recreate the volume and then pass any additional information to the backup container, making it a thin wrapper around the backup container. For this reason, using docker run directly is probably the better option, as it allows you to pass any additional information to the backup container without having to rely on salvage to support it. After all, we are already using mature backup tools, so we might as well use them to restore the backup. This is not to say that salvage will never support restoring backups, but it is not a priority for me, since I have not yet needed to restore an entire host. (I know, famous last words.)
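
A manual restore with docker run could look like the following sketch; names and the archive path are placeholders, and with a tool like borg you would run its extract command inside the container instead of tar.

```sh
# Recreate the volume, then let a throwaway container unpack the backup into it.
docker volume create myapp-data
docker run --rm \
  -v myapp-data:/data \
  -v "$(pwd)/backups:/backup:ro" \
  alpine tar -xzf /backup/myapp-data.tar.gz -C /data
```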

Access to the Docker daemon

Some people consider it a security risk to allow access to the Docker daemon, since it allows you to run arbitrary containers, which can be used to gain access to the host. I’m not discussing the implications of this here and I expect you to know what you are doing. If you know the risks but still want more security, check out cetusguard. Ideally this would be the point where I give you a ready-to-use deployment for cetusguard, but I have not yet found the time to do so.

Potential improvements

There are a few things that could be improved. First of all, integration tests would be nice and should be possible to run inside GitHub Actions. Testing more failure cases is probably a good idea too, but most of these are hard to reproduce. Larger volumes also bring the problem of long downtimes, which could be solved with a filesystem snapshot, but this is not supported by Docker and would require us to fall back to the /var/lib/docker/volumes directory.
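
As a rough sketch of the snapshot idea, assuming an LVM-backed ext4 filesystem under /var/lib/docker (volume group, logical volume and mount point names are placeholders): the container only has to be stopped for the moment it takes to create the snapshot, and the backup then reads from the snapshot instead of the live volume directory.

```sh
docker stop myapp
lvcreate --snapshot --size 5G --name docker-snap /dev/vg0/docker
docker start myapp

mkdir -p /mnt/docker-snap
mount -o ro /dev/vg0/docker-snap /mnt/docker-snap
# ... back up /mnt/docker-snap/volumes/myapp-data/_data here ...
umount /mnt/docker-snap
lvremove -y /dev/vg0/docker-snap
```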

I have also started some work on database containers. The general idea is to dump/prepare the backup into a distinct backup volume, which is then backed up by salvage instead of the actual database volume. This works with regular SQL dump files, but can even be extended to binary backups by using database-specific tools to create the backup in a similar way, or by using replication. All of these approaches have the disadvantage of requiring additional storage space.
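
For a PostgreSQL container, the dump step could be as simple as the line below; container, database and user names are placeholders, and it assumes the dedicated backup volume is mounted at /backups inside the database container, which is then the volume handed to salvage.

```sh
# Dump the database into the backup volume; salvage then backs up /backups
# instead of the live data directory.
docker exec myapp-db sh -c 'pg_dump -U app appdb > /backups/appdb.sql'
```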

Conclusion

At the time of writing, I have been using salvage for over a year. salvage has become one of those tools that just works in the background without me having to worry about it. I have not yet had any issues besides actual connection issues, which salvage handled gracefully. I also eventually added support for Discord webhooks, so I get a notification when a backup fails or succeeds. A quick check every morning gives me peace of mind, knowing that my data is safe. Enough time has also passed to see incremental backups working. I’m using a Hetzner Storage Box as offsite storage, and while the initial backup took a few days, the incremental backups are usually done in a few minutes up to an hour, depending on the amount of data that changed. I had a few cases in which I had to restore a single file, which was also possible by just running the backup container manually.

One very interesting thing for me was the large number of misconceptions I came across when talking to people about the problem and later about salvage. A friend of mine was convinced that journaling filesystems prevent reading inconsistent data, fully convinced that ext4 would not allow other processes to see a file until the write was completed. This is not true: ext4 will happily let you read the file even while it is being written to; you get whatever data was written up to that point, and if the file is being overwritten, you get a mix of old and new data.

Someone else thought that databases are crash-consistent, so you can just copy the files and be done with it. This is also not true: a crash is not the same as copying the files while the database is running. A crash leaves you with a single point in time, while copying the files gives you a mix of old and new data. This would actually be different if you took a filesystem snapshot. But even then, additional steps may be required to restore a database from a crashed state, not something you want to do as part of a restore.

Needless to say, I did not correct them, I just listened and nodded.

Links

If you want to check out salvage, you can find it on GitHub together with some documentation. Once you are familiar with the terminology, you can also check out my backup containers, which currently include borg and a simple dummy container, both of which are available here.
