Swift Ring Management
Swift rings are managed by an automated process: you edit a YAML file that describes the hosts in the cluster, and then the swift_ring_manager process (run by a systemd timer) gradually adjusts the deployed rings to match the state specified in the YAML file. The aim is that the YAML file should be reasonably intuitive; some common tasks are outlined here.
The swift ring manager code is installed on one front-end per Swift cluster (see the ring_manager entry in swift_clusters in hiera); the [code repository] is checked out into /srv/deployment/swift_ring_manager, and the script /usr/local/bin/swift_ring_manager is templated by Puppet to ensure the correct version of Python is used. That repository contains a test suite, which includes [an extensive set of example changes].
The configuration file is deployed as /etc/swift/hosts.yaml by Puppet; so to change the state of the rings, edit the relevant YAML file (found in modules/swift/files/CLUSTERNAME_hosts.yaml) and put through a CR as usual.
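For orientation, the two top-level sections of the file work together: schemes describe device layouts, and hosts assigns machines to a scheme. A minimal sketch (the hostnames examplehost1 and examplehost2 are hypothetical and the values illustrative; both sections are explained below):

schemes:
  prod:
    objects: [sdc1, sdd1]
    weight:
      objects: 4000
hosts:
  prod:
    - examplehost1
    - examplehost2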
Storage Schemes
Every hosts.yaml file must define one or more schemes, which specify which devices belong to which rings, and with what weight. You typically won't have to edit these, as WMF tends to buy hardware of consistent specification. Each scheme is a mapping of ring name to the members of that ring, with an additional member "weight" which maps each ring name to the weight of that ring's members. For example:
schemes:
  prod:
    objects: [sdc1, sdd1, sde1]
    accounts: &ap [sda3, sdb3]
    containers: *ap
    ssds: &sp [sda4, sdb4]
    weight:
      objects: 4000
      accounts: &acw 100
      containers: *acw
      ssds: 300
Here, there is a single scheme, "prod". There are three devices (/dev/sdc1, /dev/sdd1, /dev/sde1) which hold objects, with weight 4000. In common with the previous Swift ring management code, the rings are given friendly names ("objects" for the "object" ring, "ssds" for the "object-1" ring, etc.); the mapping of friendly name to ring is defined in the regularize_ring_names function.
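Should hardware with a different device layout ever be added, the natural approach is a second scheme. A sketch, assuming (as the hosts examples below suggest) that hosts are listed under the key of whichever scheme applies to them; the "bigprod" scheme name, its devices, and its weights are hypothetical:

schemes:
  prod:
    objects: [sdc1, sdd1, sde1]
    weight:
      objects: 4000
  bigprod:
    # hypothetical layout for larger hosts with an extra object device
    objects: [sdc1, sdd1, sde1, sdf1]
    weight:
      objects: 8000
hosts:
  prod:
    - oldhost
  bigprod:
    - biggerhost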
Adding a host
If a host is the same type as other hosts in the cluster, simply add the host to the relevant list in hosts.yaml. So
hosts:
  prod:
    - hostname
would become
hosts:
  prod:
    - hostname
    - newhostname
This adds newhostname, a new host with the prod device scheme. Remember to also add the new node to swift::storagehosts.
Removing a host
Simply removing the host entry from hosts.yaml will cause it to be immediately removed from the rings; it is generally better to drain it first (i.e. gradually remove weight from all of its devices). This can be done thus:
hosts:
  prod:
    - hostname:
        - drain
Using failed instead of drain will cause all weights to be set to 0 immediately, rather than gradually.
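So, for example, to zero a failed host's weights immediately (the same structure as the drain example above):

hosts:
  prod:
    - hostname:
        - failed

Once the host has drained (or been failed), remove its entry from hosts.yaml to take it out of the rings entirely.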
Removing a device
Do this by setting its weight to zero. By default, this will take effect gradually; if you want to do so straight away, specify immediate as well as weight zero. As a convenience, you can also use drain for "weight 0" and failed for "weight 0, immediate". For example:
hosts:
  prod:
    - hostname:
        - sdc1: 0
        - sdd1: [0, immediate]
        - sde1: failed
Here /dev/sdc1 will have its weight gradually reduced to 0, whereas /dev/sdd1 and /dev/sde1 will both have their weight set to 0 immediately.
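The convenience forms should also work per-device; a sketch, assuming drain is accepted in the same position as a numeric weight (as failed is in the example above):

hosts:
  prod:
    - hostname:
        - sdc1: drain

which would be equivalent to sdc1: 0, i.e. gradually removing the device's weight.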
Adjusting a device weight
This is essentially the same as the previous operation, but you can specify any weight you like. You may also specify immediate for a change you do not want to be made gradually. For example:
hosts:
  prod:
    - hostname:
        - sdc1: 2000
        - sdd1: [3500, immediate]
Here /dev/sdc1 will have its weight gradually changed to 2000, and /dev/sdd1 will have its weight immediately set to 3500.
Rebalancing the rings without making other changes
Sometimes, after a lot of changes have completed, you may notice that the rings are somewhat unbalanced (the "balance" figure in swift-ring-builder output is high); you can rebalance the rings by running, on the ring manager host:
sudo /usr/local/bin/swift_ring_manager -o /var/cache/swift_rings --doit --rebalance -v
It's only useful to do this when the rings are otherwise not being changed (as a change to the rings will involve a rebalance anyway).
Invoking swift_ring_manager.py
Typically, you don't need to do this - it is run automatically on the cluster's ring manager node, which will make changes and distribute the ring files for you. You can, however, run it yourself to see what it would do - the default operation is merely to report what changes, if any, would be made. The help message details all the command-line arguments:
usage: swift_ring_manager.py [-h] [--doit] [--immediate-only]
                             [--update-tmp-rings] [-v] [-c CONF_FILE]
                             [--ring-dir RING_DIR] [-o OUT_DIR]
                             [--skip-dispersion-check]
                             [--skip-replication-check] [-s]
                             [--host-file-path HOST_FILE_PATH]
                             [--generate-host-file]

Swift Ring Manager

optional arguments:
  -h, --help            show this help message and exit
  --doit                Update ring files and deploy
  --immediate-only      Only make changes flagged as immediate
  --update-tmp-rings    Only update temporary copies of ring files
  -v, --verbose         emit more debugging information
  -c CONF_FILE, --conf-file CONF_FILE
                        path to configuration file
  --ring-dir RING_DIR   directory containing ring builder files
  -o OUT_DIR, --out-dir OUT_DIR
                        directory to put new rings into
  --skip-dispersion-check
                        don't check dispersion OK before making changes
  --skip-replication-check
                        don't check replication OK before making changes
  -s, --syslog          Log to syslog rather than stdout
  --host-file-path HOST_FILE_PATH
                        fake hosts list file path
  --generate-host-file  generate a hosts file for future testing use