Maps
This page describes the technical aspects of deploying Maps service on Wikimedia Foundation infrastructure.
The service is being actively redesigned; documentation for the redesign can be found under Maps/v2.
Intro
The maps service consists of Kartotherian, a Node.js service that serves map tiles; Tilerator, a non-public service that prepares vector tiles (data blobs) from the OSM database and stores them in Cassandra; and TileratorUI, an interface for managing Tilerator jobs. There are 20 servers in the maps group: maps20[01-10].codfw.wmnet and maps10[01-10].eqiad.wmnet. Each runs Kartotherian (port 6533, NCPU instances), Tilerator (port 6534, NCPU/2 instances), and TileratorUI (port 6535, 1 instance). In addition, there are four Varnish servers per datacenter in the cache_maps group.
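For reference, the full host list above can be generated with bash brace expansion. The commented health check is only an assumption based on common service-runner conventions, not something documented here; adjust the host and endpoint as needed.

```shell
# All 20 maps hosts, generated with bash brace expansion
# ({01..10} keeps the leading zero).
HOSTS=$(printf '%s\n' maps20{01..10}.codfw.wmnet maps10{01..10}.eqiad.wmnet)
echo "$HOSTS"

# Hypothetical spot check of one Kartotherian instance; the /_info
# endpoint is an assumption (service-runner convention), not confirmed
# by this page.
# curl -s http://maps2001.codfw.wmnet:6533/_info
```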
The infrastructure
Miscellaneous
Development processes
Puppetization and Automation
Prerequisites
- Passwords and postgres replication configuration are set in the Ops private repo (root@puppetmaster1001:/srv/private/hieradata/role/(codfw|eqiad)/maps/server.yaml)
- Other configuration lives in puppet/hieradata/role/(codfw|common|eqiad)/maps/*.yaml
- cassandra::rack is defined in puppet/hieradata/hosts/maps*.yaml
- The role::maps::master / role::maps::slave roles are associated with the maps nodes (site.pp)
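As an illustrative sketch only (the file name and rack value here are hypothetical; the real keys live in the repositories listed above), a per-host hiera entry setting cassandra::rack might look like:

```yaml
# puppet/hieradata/hosts/maps2001.yaml (illustrative, not the real file contents)
cassandra::rack: "rack-a1"
```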
Monitoring
- KPI dashboard
- Usage dashboard
- Usage - varnish
- Kartotherian - Grafana
- Kartotherian - Logstash
- Maps Cassandra - Logstash
- Tilerator - Grafana
- Tilerator - Logstash
Subpages
- Beta Cluster setup
- ClearTables
- ClearTables/Loading
- Debugging
- Dynamic tile sources
- External usage
- Infrastructure plans
- Kartotherian
- Kartotherian packages
- Keyspace Setup
- Maintenance
- Maps experiments
- OSM
- OSM Database Legacy
- Runbook
- Services deployment
- Tile storage
- Tilerator
- v2
- v2/Architecture
- v2/Common tasks
- v2/Infrastructure
- v2/Logs
- v2/Metrics
- v2/Troubleshooting