Docker
Intro
There are a number of reasons why we have always shied away from adopting Docker itself too heavily in production. Those are, in no particular order:
Networking woes
Docker's networking approach creates bridged private networks that hide the containers from the rest of the infrastructure, expecting them in the default case to be exposed via TCP/UDP port forwardings on the node (and this doesn't even touch on the older, deprecated, but still present links feature). This behavior ends up making nodes routers, which isn't necessarily expected and can lead to confusion. It's pretty OK for a development workstation, but in large clusters of nodes it becomes a nuisance, especially for applications that need to gossip.
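To see this in action on a node running Docker, one can inspect the default bridge and the NAT machinery behind it; a quick sketch (read-only commands):

# the default bridge network containers attach to
docker network inspect bridge
# port forwardings are implemented as NAT rules in a Docker-managed chain
iptables -t nat -S DOCKER
# IP forwarding is enabled host-wide, which is what makes the node a router
sysctl net.ipv4.ip_forward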
Interestingly, there is no "nice" way to configure what gets exposed. EXPOSE stanzas in a Dockerfile don't mean much without a -P flag to docker run. The -P flag, on the other hand, doesn't do what you expect, as it exposes the declared ports on random ephemeral ports on the host. And for running anything in production, one needs to run docker run -p <host_port>:<container_port> to explicitly ask for what they want (note that docker run -p <port> still goes for an ephemeral port on the host), which defeats the point of the EXPOSE stanza.
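A quick illustration of the three behaviors, using nginx as an arbitrary example image:

# publishes every EXPOSEd port on a random ephemeral host port
docker run -d -P nginx
# still picks a random ephemeral host port for container port 80
docker run -d -p 80 nginx
# the only predictable form: host port 8080 to container port 80
docker run -d -p 8080:80 nginx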
Docker does support different network "drivers", so we aren't limited to bridge only. However, it turns out that once one starts to mess with Docker networks, they come into contact with Docker Swarm (Docker's effort to mimic and compete with Kubernetes), internal-only networks, ipvlans, macvlans, etc., and the mess of configuring them correctly with a UI that isn't very friendly to configuration management.
It also meddles badly with host networking, not playing well with existing firewalling solutions: depending on the order in which commands are run, you might end up with a working service or not.
Note, by the way, that a dockerd restart (e.g. during an upgrade) will also interrupt networking.
Given the above, most use cases I've seen stick to either the bridge network or no network at all.
Persistent Storage woes
Docker later added the concept of volumes. Unfortunately, the situation there isn't much better than with networking. Files in Docker volumes are hidden under /var/lib/docker, and using them requires passing -v with a syntax that isn't easy to figure out intuitively. The docs are pretty telling:

Consists of three fields, separated by colon characters (:). The fields must be in the correct order, and the meaning of each field isn't immediately obvious.

and for --mount (an alternative to -v):
Consists of multiple key-value pairs, separated by commas and each consisting of a <key>=<value> tuple. The --mount syntax is more verbose than -v or --volume, but the order of the keys isn't significant, and the value of the flag is easier to understand.
Furthermore, depending on how they are used, volumes require periodic pruning.
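For illustration, the two syntaxes side by side, plus the pruning chore (myvolume and myimage are hypothetical names):

# terse -v syntax: three colon-separated fields
docker run -d -v myvolume:/data:ro myimage
# the same mount with the more verbose but readable --mount syntax
docker run -d --mount type=volume,source=myvolume,target=/data,readonly myimage
# volumes accumulate under /var/lib/docker and need periodic cleanup
docker volume prune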
Orchestration, configuration and UX woes
This is probably the biggest issue. The standard way of interacting with Docker, that is via the docker command, isn't friendly at all to configuration management tools like Ansible, Chef, or Puppet. All of these tools adopt a declarative approach to state management, whereas the docker CLI is an imperative tool. That ends up creating the following situation: all of these tools run multiple docker commands to see if resources are how they want them to be and, if not, create those resources. Add in that the commands to create those resources can be non-intuitive and don't clean up after themselves, and one is a typo away from creating tons of needless resources in the space of a few configuration management runs.
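The pattern a configuration management tool is forced into looks roughly like this (backend is a hypothetical network name):

# probe imperatively, then create the resource if it seems absent
if [ -z "$(docker network ls -q --filter name=backend)" ]; then
    docker network create backend
fi
# a typo in either name ("backennd") silently creates an extra,
# unused network on every run instead of failing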
Docker did figure that out early enough and tried to address it with the Python-based (later rewritten in Golang) docker-compose tool, which allowed a more declarative approach. Unfortunately, that tool never played well with the various systems above, and by the time it matured (~2017) Kubernetes already provided a better alternative. It is a very useful development tool of course, but if you want to run things in WMF production, Kubernetes is a better platform.
A variety of other things, like security settings (e.g. seccomp profiles, privileged pods), health commands, capabilities, devices, environment variables, memory/CPU limits, networks, ports, and restart policies, are all passed as command-line arguments, ending up in big, convoluted command-line invocations. Add in that you also need to be explicit about image tags to avoid the latest (aka "which latest am I running after all?") hell.
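As an illustration of how quickly this adds up, a hypothetical but representative invocation (all names and paths are placeholders):

docker run -d --name myservice \
    --security-opt seccomp=/etc/docker/myservice-seccomp.json \
    --cap-drop ALL --cap-add NET_BIND_SERVICE \
    --health-cmd 'curl -f http://localhost:8080/healthz' \
    -e SERVICE_ENV=production \
    --memory 512m --cpus 1.5 \
    --network host \
    --restart unless-stopped \
    example.registry/myservice:1.2.3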
The main issue that remains is that hooking up Docker workloads to configuration management can be tedious and requires care.
Running docker in production?
In general, due to all of the above, we advise against running straight docker containers as is. The allure is great; the mess, after a while, not so much.
However...
It can be that the value from using Docker for something is too good to be ignored. The following are some recommendations to mitigate some of the pains above in such a case.
- For networking, the general answer is: always run with --net=host. That is, don't even try to reason with Docker's networking model. Use the host's networking namespace and avoid all the weird mess with the firewalling, the ports, the NATs, etc.
- Don't use Docker volumes, but rather just bind mount from the host the directories you want. Alternatively, if the application requires access to raw devices, pass the devices as is and let the application handle them. Docker volumes would have you believe that it's better to use them; however, none of the "advantages" listed in that page apply to our production workloads.
- Never ever use latest for anything production-like. Be extremely explicit about which image tag you use in your configuration management.
- Hook up Docker to configuration management via proper systemd unit files (see the sketch after this list). Trying to avoid that will only lead to an inevitable outage, as the imperative nature of Docker's CLI makes one all the more likely.
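A minimal sketch of such a unit, pulling the recommendations above together; the service name, image, tag, and paths are all hypothetical:

[Unit]
Description=myservice container
After=docker.service
Requires=docker.service

[Service]
# docker run is not idempotent: clear any leftover container first
ExecStartPre=-/usr/bin/docker rm -f myservice
ExecStart=/usr/bin/docker run --name myservice \
    --net=host \
    -v /srv/myservice/data:/data \
    example.registry/myservice:1.2.3
ExecStop=/usr/bin/docker stop myservice
Restart=always

[Install]
WantedBy=multi-user.target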
With the above, one would be removing the parts of Docker that aren't really suitable for production (unsurprisingly, Kubernetes does that exact same thing), turning it essentially into an application execution engine for binaries that are bundled in OCI images.
Registry
Wikimedia also runs its own docker-registry.
Installation
To install Docker on Debian: apt-get install docker.io (note it's NOT just "docker", that's an entirely unrelated package)
To install Docker on Mac: download the Desktop installer from the official website.
Publishing production images
Refer to Kubernetes/Images#Production images
Updating CI docker images
TODO: this should be moved somewhere else
When updating CI images follow the documentation at mw:Continuous integration/Docker MediaWiki. The basic process is:
- Update the changelog of the appropriate image to force a docker image rebuild. You can use docker-pkg (see the MediaWiki page above, and the example entry after this list)
- Update the image version in the CI job specification
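For reference, docker-pkg image changelogs follow the Debian changelog format, and bumping the version is what triggers the rebuild. A hypothetical entry (image name, version, and author are placeholders):

ci-myimage (1.2.3-1) wikimedia; urgency=medium

  * Rebuild to pick up updated base image

 -- Your Name <you@example.org>  Mon, 01 Jan 2024 00:00:00 +0000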