Portal:Toolforge/Admin/Apt repository
Toolforge has an internal apt repository managed with aptly. The repository exists on the tools-services nodes.
Repositories are declared in Puppet, but packages have to be added to the aptly repository by hand. We usually have one repository per operating system and project, i.e.:
- stretch-tools
- buster-tools
- stretch-toolsbeta
- buster-toolsbeta
Some examples of packages stored here are:
- https://gerrit.wikimedia.org/r/#/admin/projects/labs/toollabs
- https://gerrit.wikimedia.org/r/admin/projects/operations/software/tools-webservice
- https://gerrit.wikimedia.org/r/admin/projects/operations/software/tools-manifest
(among others)
The repository data, located at /srv/packages, is stored on a mounted cinder volume.
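As a quick orientation, and assuming aptly's rootDir is configured to live on that mount (an assumption, not verified here), you can eyeball the aptly layout and the mount with something like:

ls /srv/packages        # aptly's usual db/, pool/ and published trees would live here
df -h /srv/packages     # confirm the cinder volume is mounted and has free space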
Deployment components and architecture
Information on how the setup is deployed and its different components.
Servers
Usually a VM with a cinder volume to store repository data.
Addressing, DNS and proxy
There is a Horizon web proxy called deb-tools.wmcloud.org that should point to TCP/80 on the server. This allows building Docker images that use Toolforge-internal packages (see the sketch at the end of this section).
Other than that, the servers don't have any special DNS or addressing. They don't have floating IPs.
Note that these servers in the tools Cloud VPS project may offer services for the toolsbeta project as well.
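As a rough sketch of how an image build could consume the repo through that proxy (the /repo path, the buster-tools distribution, the component and the package name are assumptions, not verified values; check the actual published layout on the server before relying on it):

# hypothetical apt source pointing at the internal repo via the deb-tools.wmcloud.org proxy
echo "deb http://deb-tools.wmcloud.org/repo buster-tools main" > /etc/apt/sources.list.d/toolforge.list
apt-get update
apt-get install -y toolforge-webservice   # illustrative package name, may differ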
Puppet
The main role in use is role::wmcs::toolforge::services.
Admin operations
Information on maintenance and administration of this setup.
Managing aptly repo
It is managed as a standard aptly repo.
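For instance, importing a locally built .deb into one of the repos and refreshing the published tree could look roughly like this (the repo/distribution name and the package filename are placeholders; check sudo aptly repo list and sudo aptly publish list for the real ones):

sudo aptly repo add buster-tools ./mypackage_1.0_all.deb   # placeholder filename
sudo aptly publish update buster-tools                     # re-publish that distribution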
Health
Some interesting bits to check if you want to know the status/health of the server.
- aptly repos are present and they contain packages, e.g.:
sudo aptly repo list
sudo aptly repo show --with-packages=true stretch-tools
- the disk is not full, e.g.:
df -h /
Failover
We don't have a specific failover mechanism other than building a new VM and re-attaching the cinder volume.
Care should be taken not to lose the aptly repo data, since regenerating it from scratch can take some time.
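A minimal sketch of the volume re-attach step with the OpenStack CLI (server and volume names are placeholders; this can also be done from Horizon):

# detach the data volume from the old VM and attach it to the replacement
openstack server remove volume tools-services-old tools-services-packages
openstack server add volume tools-services-new tools-services-packages
# then mount it on the new VM, e.g. at /srv/packages (device name depends on the attachment)
sudo mount /dev/sdb /srv/packages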
History
This setup was heavily remodeled when migrating the grid to SGE and to Stretch. Prior to the migration, the services nodes also hosted Bigbrother (deprecated) and webservicemonitor (moved to the cron servers).
Later, when migrating from Stretch to Buster, the two-VM approach was dropped in favor of storing the data in a cinder volume; see https://phabricator.wikimedia.org/T278354.