User:ABorrero (WMF)
Kung-fu
Some random tricks I would like to persist somewhere.
openstack
Some interesting openstack tricks that I use from time to time.
- Create a VM in a given project and schedule it directly on a given hypervisor (a quick placement check follows after these examples):
root@cloudcontrol1004:~# openstack --os-project-id openstack server create --flavor 3 --image 10783e59-b30e-4426-b509-2fbef7d3103c --nic net-id=7425e328-560c-4f00-8e99-706f3fb90bb4 --availability-zone nova:cloudvirt1024:cloudvirt1024.eqiad.wmnet moritz-mds-stretch-test
[..]
- Generate a list of running VMs (with project) on a given hypervisor:
aborrero@cloudcontrol1004:~$ for i in $(sudo wmcs-openstack server list --all-projects --host cloudvirt1030 -c ID -f value) ; do VM=$(sudo wmcs-openstack server show $i -c name -f value) ; PROJ=$(sudo wmcs-openstack server show $i -c project_id -f value) ; echo "VM: $VM PROJECT: $PROJ" ; done
VM: taxonbota-b PROJECT: dwl
VM: irc-buster PROJECT: dwl
VM: janus1-1 PROJECT: analytics
VM: integration-agent-docker-1012 PROJECT: integration
VM: integration-agent-docker-1011 PROJECT: integration
[..]
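To double check the placement from the first trick above, something like this should work (a sketch; it reads the same OS-EXT-SRV-ATTR attribute that the wmcs-vm-console.py script further down uses):
root@cloudcontrol1004:~# openstack --os-project-id openstack server show moritz-mds-stretch-test -c OS-EXT-SRV-ATTR:hypervisor_hostname -f value
cloudvirt1024.eqiad.wmnet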
wmcs-hypervisor-stats.sh
Compare the number of VMs scheduled on each hypervisor with the number stored in the nova database hypervisor stats:
aborrero@cloudcontrol1004:~$ ./wmcs-hypervisor-stats.sh
cloudvirt1001 scheduled VMs: 20 DB stats: 20
cloudvirt1002 scheduled VMs: 21 DB stats: 24 <---
cloudvirt1003 scheduled VMs: 23 DB stats: 23
cloudvirt1004 scheduled VMs: 20 DB stats: 20
cloudvirt1005 scheduled VMs: 24 DB stats: 24
cloudvirt1006 scheduled VMs: 23 DB stats: 16 <---
cloudvirt1007 scheduled VMs: 19 DB stats: 19
cloudvirt1008 scheduled VMs: 21 DB stats: 21
cloudvirt1009 scheduled VMs: 28 DB stats: 15 <---
cloudvirt1012 scheduled VMs: 32 DB stats: 32
cloudvirt1013 scheduled VMs: 1 DB stats: 1
cloudvirt1014 scheduled VMs: 52 DB stats: 52
cloudvirt1015 scheduled VMs: 0 DB stats: 0
cloudvirt1016 scheduled VMs: 42 DB stats: 42
cloudvirt1017 scheduled VMs: 66 DB stats: 66
cloudvirt1018 scheduled VMs: 57 DB stats: 57
cloudvirt1019 scheduled VMs: 3 DB stats: 3
cloudvirt1020 scheduled VMs: 3 DB stats: 3
cloudvirt1021 scheduled VMs: 45 DB stats: 45
cloudvirt1022 scheduled VMs: 4 DB stats: 4
cloudvirt1023 scheduled VMs: 1 DB stats: 1
cloudvirt1024 scheduled VMs: 1 DB stats: 1
cloudvirt1025 scheduled VMs: 42 DB stats: 39 <---
cloudvirt1026 scheduled VMs: 55 DB stats: 46 <---
cloudvirt1027 scheduled VMs: 48 DB stats: 45 <---
cloudvirt1028 scheduled VMs: 51 DB stats: 50 <---
cloudvirt1029 scheduled VMs: 36 DB stats: 36
cloudvirt1030 scheduled VMs: 38 DB stats: 38
Source code:
wmcs-hypervisor-stats.sh |
---|
#!/bin/bash
# compare hypervisor stats
HYPERVISORS=$(sudo wmcs-openstack hypervisor list -f value -c "Hypervisor Hostname" | egrep 'cloudvirt[0-9]{4}' | awk -F'.' '{print $1}' | sort)
for hypervisor in ${HYPERVISORS} ; do
echo -n "$hypervisor "
scheduled=$(sudo wmcs-openstack server list --host $hypervisor --all-projects -f value -c ID 2>/dev/null | wc -l)
echo -n "scheduled VMs: ${scheduled} "
stats=$(sudo wmcs-openstack hypervisor show ${hypervisor}.eqiad.wmnet -f value -c running_vms 2>/dev/null || echo 0)
echo -n "DB stats: ${stats} "
if [ "${stats}" != "${scheduled}" ] ; then
echo "<---"
else
echo
fi
done
|
wmcs-canary-vm-refresh.sh
See Portal:Cloud_VPS/Admin/Procedures_and_operations#Canary_VM_instance_in_every_hypervisor instead.
cumin
Some interesting cumin commands I used.
Select servers using an alias and a fact:
aborrero@cumin1001:~ $ sudo cumin -x 'A:cloud-eqiad1 and P{F:lsbdistcodename = jessie}' "dpkg -l python-pysaml2"
IGNORE EXIT CODES mode enabled, all commands executed will be considered successful
15 hosts will be targeted:
cloudcontrol[1003-1004].wikimedia.org,cloudservices[1003-1004].wikimedia.org,cloudvirt[1014,1016-1017,1021-1023].eqiad.wmnet,cloudvirtan[1001-1005].eqiad.wmnet
Select servers using a puppet role and a fact:
aborrero@cumin1001:~ $ sudo cumin 'P{O:wmcs::openstack::eqiad1::virt} and P{F:lsbdistcodename = stretch}'
7 hosts will be targeted:
cloudvirt[1013,1024,1026-1030].eqiad.wmnet
DRY-RUN mode enabled, aborting
Running puppet on the install servers before installing a server:
aborrero@cumin1001:~ $ sudo cumin A:installserver run-puppet-agent
2 hosts will be targeted:
install[1002,2002].wikimedia.org
Confirm to continue [y/n]? y
Finding cloudvirts of a given vendor:
aborrero@cumin1001:~ $ sudo cumin 'P{cloudvirt1* and F:manufacturer = "Dell Inc."}'
14 hosts will be targeted:
cloudvirt[1015-1018,1021-1030].eqiad.wmnet
DRY-RUN mode enabled, aborting
Using a regexp match on the server name in CloudVPS:
aborrero@labpuppetmaster1001:~ $ sudo cumin "project:tools name:^tools-static*" uname
2 hosts will be targeted:
tools-static-[12-13].tools.eqiad.wmflabs
Confirm to continue [y/n]? y
Matching 2 different server names in CloudVPS:
aborrero@labpuppetmaster1001:~ $ sudo cumin "O{project:tools name:^tools-k8s-master*} OR O{project:tools name:^tools-docker-registry*}" ":"
3 hosts will be targeted:
tools-docker-registry-[03-04].tools.eqiad.wmflabs,tools-k8s-master-01.tools.eqiad.wmflabs
Confirm to continue [y/n]?
Very quick & basic healthcheck for VM instances in CloudVPS after draining a hypervisor:
aborrero@labpuppetmaster1002:~ $ sudo cumin -m sync F{file.txt} 'cat /etc/debian_version' 'touch /tmp/cumintest && rm -f /tmp/cumintest'
You can generate the list of hosts to check with a command like this, and then copy-paste the list to a file on the cumin server:
root@cloudcontrol1004:~# nova list --all-tenants --host cloudvirt1024 | grep ACTIVE | awk -F' ' '{print $4}'
accounts-appserver5
canary1024-01
[...]
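Putting the two steps together (a sketch: the file name is made up, and depending on how cumin resolves the hosts you may need to append the .<project>.eqiad.wmflabs suffix to the short VM names before copying the file over):
root@cloudcontrol1004:~# nova list --all-tenants --host cloudvirt1024 | grep ACTIVE | awk -F' ' '{print $4}' > cloudvirt1024-vms.txt
aborrero@labpuppetmaster1002:~ $ sudo cumin -m sync F{cloudvirt1024-vms.txt} 'cat /etc/debian_version' 'touch /tmp/cumintest && rm -f /tmp/cumintest'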
local scripts
Some local scripts I run on my laptop when working in the WMF/WMCS environment.
I copy/pasted them here so I don't lose them, and to allow others to reuse them.
I don't track them in git, and the code here may be outdated. Ping me if you need a refresh or any help!
wmf-cleanup-puppetmaster.sh
I use this script to clean up the puppet git repository in a CloudVPS project-local puppetmaster.
wmf-cleanup-puppetmaster.sh |
---|
#!/bin/bash
PUPPETMASTER="$1"
if [ -z "$PUPPETMASTER" ] ; then
echo "E: no puppetmaster specified" >&2
exit 1
fi
if [ "$PUPPETMASTER" == "tools" ] ; then
PUPPETMASTER="tools-puppetmaster-02.eqiad.wmflabs"
fi
if [ "$PUPPETMASTER" == "toolsbeta" ] ; then
PUPPETMASTER="toolsbeta-puppetmaster-04.eqiad.wmflabs"
fi
if [ "$PUPPETMASTER" == "paws" ] ; then
PUPPETMASTER="paws-puppetmaster-01.eqiad.wmflabs"
fi
SSH_COMMAND="{ cd /var/lib/git/operations/puppet ;
sudo git checkout -f ;
sudo git clean -fd ;
sudo git-sync-upstream ;}"
echo "INFO: cleaning up $PUPPETMASTER"
echo
ssh $PUPPETMASTER $SSH_COMMAND
|
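Example usage, a sketch (the "tools" alias expands to the hostname hardcoded in the script, which may be outdated):
aborrero@laptop:~ $ ./wmf-cleanup-puppetmaster.sh tools
INFO: cleaning up tools-puppetmaster-02.eqiad.wmflabs
[..]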
wmf-export-puppet-patch.sh
I use this script to export a local puppet patch to live-hack a puppetmaster (commonly CloudVPS local puppetmasters):
wmf-export-puppet-patch.sh |
---|
#!/bin/bash
PUPPETMASTER="$1"
if [ -z "$PUPPETMASTER" ] ; then
echo "E: no puppetmaster specified" >&2
exit 1
fi
PATCH="wmf-export-puppet.patch"
PATCH_PATH="../${PATCH}"
git diff origin/HEAD > ${PATCH_PATH}
scp ${PATCH_PATH} ${PUPPETMASTER}:
SSH_COMMAND="{ cd /var/lib/git/operations/puppet ;
sudo git checkout -f ;
sudo git clean -fd ;
sudo git pull --rebase ;
sudo patch -p1 <~/${PATCH} ;}"
ssh $PUPPETMASTER $SSH_COMMAND
|
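Example usage, a sketch assuming the script is somewhere in $PATH and is run from inside the local operations/puppet checkout (the patch file ends up in the parent directory, as per PATCH_PATH above):
aborrero@laptop:~/git/operations/puppet $ wmf-export-puppet-patch.sh tools-puppetmaster-02.eqiad.wmflabs
[..]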
wmf-git-review.sh
I use this script to rebase a patch before submitting to WMF's gerrit:
wmf-git-review.sh |
---|
#!/bin/bash
STG="$(which stg)"
GIT="$(which git)"
set -e
if [ "${USER}" == "root" ] ; then
echo "E: running as root?" >&2
exit 1
fi
if [ ! -x "$STG" ] ; then
echo "E: no stg binary found" >&2
exit 1
fi
if [ ! -x "$GIT" ] ; then
echo "E: no git binary found" >&2
exit 1
fi
$STG pop -a
$GIT pull --rebase
$STG push
$GIT review
|
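The workflow it assumes is stgit-based, roughly like this (a sketch; the patch name is made up): create a patch, hack, record the changes with stg refresh, then let the script pop the queue, rebase the branch, push the patch back and send it to gerrit.
aborrero@laptop:~/git/operations/puppet $ stg new my-change
aborrero@laptop:~/git/operations/puppet $ stg refresh
aborrero@laptop:~/git/operations/puppet $ wmf-git-review.sh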
wmf-puppet-class-tree.sh
I use this script to get an idea of the puppet class tree for a given puppet class.
Useful when working with complex roles/profiles that interact with each other (for example, CloudVPS/OpenStack):
wmf-puppet-class-tree.sh |
---|
#!/bin/bash
if [ "$1" == "-h" ] || [ "$1" == "--help" ] ; then
echo "Usage: $0 <class> <puppet_tree_path>"
exit 0
fi
INPUT_CLASS=$1
INPUT_PUPPET=$2
if [ "$INPUT_CLASS" == "" ] ; then
echo "E: no input class given!" >&2
exit 2
fi
if [ "$INPUT_PUPPET" == "" ] ; then
echo "W: no puppet tree specified, asuming ." >&2
INPUT_PUPPET="."
fi
cd $INPUT_PUPPET
function class2file()
{
local class=$1
local tree=$2
grep -lER '^(class|define)[[:space:]]+'"${class}[[:space:]]?(\{|\()" . 2>/dev/null | grep '\.pp$'
}
function classesinfile()
{
local file=$1
includes=$(grep -E '^[[:space:]]+(include|require|contain)[[:space:]]+[:a-zA-Z1-9_-]+' $file \
| awk -F' ' '{print $2}' | sed s/^:://g)
declarations=$(grep -E "^[[:space:]]+class[[:space:]]+\{[[:space:]]'[:a-zA-Z1-9_-]+" $file \
| awk -F\' '{print $2}' | sed s/^:://g)
echo $includes $declarations | sort | uniq
}
function print()
{
local indent=$1
local msg=$2
local separator=""
for i in $(seq 1 ${indent}) ; do separator=${separator}" " ; done
echo -e "${separator}${msg}"
}
function resolve()
{
local class=$1
local tree=$2
local recursion=$3
local file=""
if [ $recursion -gt 100 ] ; then
echo "E: recursion limit reached" >&2
exit 3
fi
file=$(class2file $class $tree)
if [ "$file" == "" ] ; then
echo "W: class $class not found" >&2
return
fi
print $recursion "${class}\t\t${file}"
classes=$(classesinfile $file)
for i in $classes ; do
resolve $i $tree $((recursion + 1))
done
}
resolve $INPUT_CLASS $INPUT_PUPPET 0
|
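Example usage, a sketch (the class name is just an example; the second argument points at a local operations/puppet checkout):
aborrero@laptop:~ $ ./wmf-puppet-class-tree.sh --help
Usage: ./wmf-puppet-class-tree.sh <class> <puppet_tree_path>
aborrero@laptop:~ $ ./wmf-puppet-class-tree.sh profile::openstack::eqiad1::nova::common ~/git/operations/puppet
[..]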
wmcs-netbox-list.py
I use this script to fetch & list WMCS server info from Netbox and generate a CSV (to later import into a spreadsheet):
wmcs-netbox-list.py |
---|
#!/usr/bin/env python3
import requests
import argparse
class Config:
""" configuration for this script.
"""
def __init__(self, api_token, netbox_url):
self.api_token = api_token
if not netbox_url:
self.netbox_url = "https://netbox.wikimedia.org"
else:
self.netbox_url = netbox_url
def generate_netbox_link(config, id):
return '{}/dcim/devices/{}/'.format(config.netbox_url, id)
def print_result_header():
print('name, purchase_date, support_until, site, netbox_link')
def print_result(config, result):
netbox_link = generate_netbox_link(config, result['id'])
# name, custom_fields.purchase_date, custom_fields.support_until, site.slug, netbox_link
print('{}, {}, {}, {}, {}'.format(result['name'], result['custom_fields']['purchase_date'],
result['custom_fields']['support_until'], result['site']['slug'],
netbox_link))
def request_query_list(config, query):
request_url = '{}/api/dcim/devices/?q={}'.format(config.netbox_url, query)
request_headers = {'Authorization': 'Token {}'.format(config.api_token)}
return requests.get(url=request_url, headers=request_headers)
def main():
parser = argparse.ArgumentParser(description="Utility to fetch and list WMCS servers from Netbox")
parser.add_argument("--token", action="store", help="Netbox API token")
parser.add_argument("--url", action="store", help="Netbox URL")
args = parser.parse_args()
config = Config(args.token, args.url)
print_result_header()
r = request_query_list(config, 'lab')
for result in r.json()['results']:
print_result(config, result)
r = request_query_list(config, 'labtest')
for result in r.json()['results']:
print_result(config, result)
r = request_query_list(config, 'cloud')
for result in r.json()['results']:
print_result(config, result)
if __name__ == "__main__":
main()
|
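Example usage, a sketch (XXXX is a placeholder for a real Netbox API token; the CSV goes to stdout, so just redirect it to a file):
aborrero@laptop:~ $ python3 wmcs-netbox-list.py --token XXXX > wmcs-servers.csv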
wmcs-generate-dmz-cidr.sh
This script can be used to generate the hiera dmz_cidr configuration for operations/puppet.git
wmcs-generate-dmz-cidr.sh |
---|
#!/bin/bash
src="172.16.128.0/24"
print_entry() {
comment=$1
addr=$2
echo " # VMs --> $comment"
echo " - \"${src} . ${addr}\""
}
DCs="eqiad codfw ulsfo eqsin esams"
SVCs="text-lb upload-lb"
for dc in $DCs ; do
for svc in $SVCs ; do
hostname="${svc}.${dc}"
fqdn="${hostname}.wikimedia.org"
addr=$(dig +short $fqdn)
print_entry "wiki ($hostname)" $addr
done
done
echo
recursors="ns-recursor1 ns-recursor0"
deployments="openstack.eqiad1 openstack.codfw1dev"
domain="wikimediacloud.org"
for deployment in $deployments ; do
for recursor in $recursors ; do
fqdn="${recursor}.${deployment}.${domain}"
addr=$(dig +short $fqdn)
print_entry $fqdn $addr
done
done
echo
extras="gerrit-replica gerrit apt1001 apt2001 sodium
cloudcontrol1003 cloudcontrol1004 cloudcontrol1005
cloudcontrol2001-dev cloudcontrol2003-dev cloudcontrol2004-dev
contint2001 kraz
ldap-ro.eqiad ldap-ro.codfw
nfs-maps labstore1007 labstore1006 cloudstore1009 cloudstore1008
"
for extra in $extras ; do
fqdn="${extra}.wikimedia.org"
addr=$(dig +short $fqdn)
print_entry $fqdn $addr
done
|
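Example usage, a sketch: run it from a host that can resolve both the production names and the wikimediacloud.org records, then paste the output into the relevant hiera file (the output file name is made up):
aborrero@laptop:~ $ ./wmcs-generate-dmz-cidr.sh > dmz_cidr.yaml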
other scripts
Other random scripts.
wmcs-vm-console.py
I use this script to help jump into the console of a given VM instance.
Example usage:
aborrero@cloudcontrol1004:~ $ python3 wmcs-vm-console.py toolsbeta-test-k8s-etcd-2 toolsbeta
VM nova ID: 59922260-bd39-4bdc-b233-feb78ddb2665
VM Hypervisor: cloudvirt1014.eqiad.wmnet
VM libvirt id: i-0000ddf4
Try this on your laptop:
ssh -t cloudvirt1014.eqiad.wmnet "sudo virsh console --devname serial1 i-0000ddf4"
(I would really love to do that myself, but SSH creds, etc...)
wmcs-vm-console.py |
---|
#!/usr/bin/env python3
import os
import sys
import argparse
import yaml
import keystoneauth1
from keystoneauth1.identity import v3
from keystoneauth1 import session as keystone_session
from novaclient import client as novaclient
def exit_with_msg(msg):
print("ERROR: {}".format(msg), file=sys.stderr)
exit(1)
def main():
parser = argparse.ArgumentParser(description="Jump by SSH to a VM console")
parser.add_argument("name", help="VM name")
parser.add_argument(
"project",
help="OpenStack project scope",
)
parser.add_argument(
"--observer-pass",
default="",
help="Password for the OpenStack observer account",
)
parser.add_argument(
"--auth-url",
default="",
help="Keystone URL -- can be obtained from novaobserver.yaml",
)
args = parser.parse_args()
if not args.observer_pass:
if os.path.isfile("/etc/novaobserver.yaml"):
with open("/etc/novaobserver.yaml") as conf_fh:
nova_observer_config = yaml.safe_load(conf_fh)
args.observer_pass = nova_observer_config["OS_PASSWORD"]
args.auth_url = nova_observer_config["OS_AUTH_URL"]
else:
exit_with_msg("The --observer-pass argument is required without /etc/novaobserver.yaml")
if not args.auth_url:
exit_with_msg("the --auth-url argument is required without /etc/novaobserver.yaml")
auth = v3.Password(
auth_url=args.auth_url,
username="novaobserver",
password=args.observer_pass,
user_domain_name='Default',
project_domain_name='Default',
project_name=args.project
)
session = keystoneauth1.session.Session(auth=auth)
mynovaclient = novaclient.Client("2.0", session=session)
try:
server = mynovaclient.servers.list(search_opts={'name': '^{}$'.format(args.name)})
except keystoneauth1.exceptions.http.Unauthorized:
exit_with_msg("couldn't query the openstack API. Operation not authorized.")
except:
raise
if len(server) > 1:
exit_with_msg("more than 1 VM found with that name.")
if len(server) == 0:
exit_with_msg("no VM found with that name.")
vm_info = server[0]._info
vm_id = server[0].id
vm_hypervisor = vm_info['OS-EXT-SRV-ATTR:hypervisor_hostname']
vm_libvirt_id = vm_info['OS-EXT-SRV-ATTR:instance_name']
print("VM nova ID: {}".format(vm_id))
print("VM hypervisor: {}".format(vm_hypervisor))
print("VM libvirt id: {}".format(vm_libvirt_id))
print()
print("Try this in your laptop:\n")
print("\tssh -t {} \"sudo virsh console --devname serial1 {}\"".format(
vm_hypervisor, vm_libvirt_id))
print("\n(I would really love to do that myself, but SSH creds etc...)")
if __name__ == "__main__":
main()
|
wmcs-instance-hard-reboot.sh
I use this script on cloudcontrol servers to hard-reboot a given Cloud VPS instance.
Example usage:
aborrero@cloudcontrol1004:~$ sudo su
root@cloudcontrol1004:/home/aborrero# cd
root@cloudcontrol1004:~# source novaenv.sh
root@cloudcontrol1004:~# cd -
/home/aborrero
root@cloudcontrol1004:/home/aborrero# bash wmcs-instance-hard-reboot.sh tools-sgeexec-0924
I: UUID is 4087f96d-7cd8-445c-9e9c-7aabe6d564d7
I: stopping instance
I: status is ACTIVE
I: status is ACTIVE
I: status is ACTIVE
I: status is ACTIVE
I: status is ACTIVE
I: status is ACTIVE
I: status is ACTIVE
I: status is ACTIVE
I: status is ACTIVE
I: status is ACTIVE
I: status is ACTIVE
I: status is ACTIVE
I: status is SHUTOFF
I: starting instance
wmcs-instance-hard-reboot.sh |
---|
#!/bin/bash
INSTANCE_NAME=$1
if [ "$INSTANCE_NAME" == "" ] ; then
echo "E: usage: $0 <instance name>" >&2
exit 1
fi
if [ "$(id -u)" != "0" ] ; then
echo "E: root required!" >&2
exit 1
fi
status()
{
openstack server show -c status -f value $1
}
UUID=$(openstack server list -c ID -c Name -f value --all-projects | grep -w "$INSTANCE_NAME" | awk -F' ' '{print $1}')
echo "I: UUID is $UUID"
status=$(status $UUID)
if [ "$status" != "ACTIVE" ] ; then
echo "W: not doing anything, status is $status" >&2
exit 0
fi
echo "I: stopping instance"
openstack server stop $UUID
while true ; do
status=$(status $UUID)
echo "I: status is $status"
if [ "$status" == "SHUTOFF" ] ; then
echo "I: starting instance"
openstack server start $UUID
break
fi
sleep 2
done
|
sssd_rollout.sh
I used this script to roll out the sssd stack to Toolforge in a controlled fashion.
This is meant to be executed on the clushmaster node, with only one input parameter pointing to a file with a list of nodes:
tools-sgeexec-0908.tools.eqiad.wmflabs tools-sgeexec-0932.tools.eqiad.wmflabs tools-sgeexec-0918.tools.eqiad.wmflabs tools-sgeexec-0927.tools.eqiad.wmflabs tools-sgeexec-0941.tools.eqiad.wmflabs tools-sgeexec-0907.tools.eqiad.wmflabs tools-sgeexec-0913.tools.eqiad.wmflabs tools-sgeexec-0933.tools.eqiad.wmflabs tools-sgeexec-0914.tools.eqiad.wmflabs [...]
sssd_rollout.sh |
---|
#!/bin/bash
LIST=$1
if [ ! -r "$LIST" ] ; then
echo "E: can't read input file with list of sgeexec nodes $LIST" >&2
exit 1
fi
CLUSH="$(which clush)"
#CLUSH="echo clush"
SGE_MASTER="tools-sgegrid-master.tools.eqiad.wmflabs"
depool()
{
echo "I: depooling $1"
$CLUSH -w $SGE_MASTER "sudo exec-manage depool $1"
}
wait_depool()
{
echo "I: waiting depool $1"
while true ; do
ret=$($CLUSH -N -w $SGE_MASTER "sudo exec-manage status $1 | head -n2 | tail -1")
if [ "$ret" != 0 ] ; then
echo -n "."
sleep 2
else
break
fi
done
echo
}
enable_puppet()
{
echo "I: enabling puppet $1"
$CLUSH -w $1 "sudo puppet agent --enable"
}
run_puppet()
{
echo "I: running puppet agent $1"
$CLUSH -w $1 "sudo puppet agent -tv"
}
reboot()
{
echo "I: rebooting $1"
$CLUSH -w $1 "sudo nohup reboot &"
}
wait_reboot()
{
echo "I: waiting reboot $1"
today=$(date --rfc-3339=date)
while true ; do
uptime=$($CLUSH -w $1 "uptime -s" 2>/dev/null | awk -F' ' '{print $2}')
if [ "$today" == "$uptime" ] ; then
break
fi
echo -n "."
sleep 2
done
echo
}
repool()
{
echo "I: repooling $1"
$CLUSH -w $SGE_MASTER "sudo exec-manage repool $1"
}
for i in $(cat $LIST) ; do
echo "I: doing $i"
depool $i
wait_depool $i
enable_puppet $i
run_puppet $i
reboot $i
wait_reboot $i
repool $i
echo "I: press any key to continue"
read x
done
|
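Example usage, a sketch (the clushmaster hostname and the node list file name are just examples; the I: messages come from the script above):
aborrero@tools-clushmaster-02:~ $ bash sssd_rollout.sh sgeexec-nodes.txt
I: doing tools-sgeexec-0908.tools.eqiad.wmflabs
I: depooling tools-sgeexec-0908.tools.eqiad.wmflabs
[..]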
wmcs-generate-legacy-redirector-list.py
This script is used to generate the list of tool webservices allowed in the legacy redirector system. The output is a Lua map that can be copy/pasted into the legacy redirector Lua script.
To simplify access/credentials, execute this on a tools-k8s-control node, for example:
aborrero@tools-k8s-control-3:~$ sudo -i python3 /home/aborrero/wmcs-generate-legacy-redirector-list.py
local allowed = Set {
'abbe98tools','abbreviso','abcgames','abricot','account-creator',
'ace2018','actrial','adas','add','add-information',
}
1014 webservices detected
wmcs-generate-legacy-redirector-list.py |
---|
#!/usr/bin/env python3
import requests
import pykube
import os
import re
# This script generates a Lua map with all the webservices running in Toolforge,
# in both the grid and k8s. The output can then be copy/pasted into the legacy redirector Lua script.
# The map should be static, I don't expect this script to be used more than once (famous last words?)
# so I didn't pay special attention to any coding quality, style or the like.
# To simplify access/credentials, execute this in a tools-k8s-control node, example:
#
# aborrero@tools-k8s-control-3:~$ sudo -i python3 /home/aborrero/wmcs-generate-legacy-redirector-list.py
def print_lua_map(webservices):
# The LUA map has this syntax:
#
# local allowed = Set {
# 'tool1', 'tool2', 'tool3', 'tool4', 'tool5', 'tool6',
# 'tool7', 'tool8', 'tool9', 'tool10', 'tool11', 'tool12',
# }
tools_per_line = 5
print(" local allowed = Set {")
i = 0
for tool in sorted(webservices):
if i == 0:
print(" ", end='')
print("'{}',".format(tool), end='')
i += 1
if i == tools_per_line:
print("")
i = 0
print("\n }")
def main():
webservices = set()
grid_url = 'http://tools.wmflabs.org:8081/list'
grid_webservices_json = requests.get(grid_url).json()
for i in grid_webservices_json:
webservices.add(i)
kubeconfig = pykube.KubeConfig.from_file(os.path.expanduser("~/.kube/config"))
api = pykube.HTTPClient(kubeconfig)
all_ingresses = pykube.Ingress.objects(api).filter(namespace=pykube.all,selector={"tools.wmflabs.org/webservice":"true"})
for ingress in all_ingresses:
if ingress.obj["spec"]["rules"][0]["host"] != "tools.wmflabs.org":
continue
toolname = re.search('^tool-(.*)', ingress.obj["metadata"]["namespace"]).group(1)
webservices.add(toolname)
print_lua_map(webservices)
print("\n{} webservices detected".format(len(webservices)))
if __name__ == "__main__":
main()
|
puppet
Some personal puppet stuff.
catalog compilation in jenkins
A Jenkins puppet-catalog-compiler friendly list of canary servers from different Cloud VPS deployments.
These are meant to be quickly copy-pasted into the compiler job.
- all the cloud hardware!
re:^cloud.*(wmnet|org)
- all Toolforge!
re:.*\.tools\.eqiad\.wmflabs
- all the cloudXXX-dev hardware!
re:^cloud.*-dev\..*(wmnet|org)
switch puppetmaster for CloudVPS VMs
I used this quick and dirty bash script to test puppetmaster enrollment for several CloudVPS VMs. It should be run from the WMCS cumin server (which is the main puppetmaster too).
cumin_puppetmaster_enroll.sh |
---|
#!/bin/bash
NODES="
arturo-k8s-test-2.openstack.eqiad.wmflabs
arturo-k8s-test-3.openstack.eqiad.wmflabs
arturo-k8s-test-4-1.openstack.eqiad.wmflabs
arturo-k8s-test-4-2.openstack.eqiad.wmflabs"
CUMIN_QUERY_NODES="O{project:openstack name:^arturo-k8s-test*}"
CUMIN_QUERY_MASTER="O{project:openstack name:^openstack-puppetmaster-01*}"
cumin_nodes_cmd()
{
sudo cumin --force -x "${CUMIN_QUERY_NODES}" "${1}"
}
cumin_master_cmd()
{
sudo cumin --force -x "${CUMIN_QUERY_MASTER}" "${1}"
}
clean_ssl_client()
{
cumin_nodes_cmd "rm -rf /var/lib/puppet/ssl"
}
run_puppet_agent_client()
{
cumin_nodes_cmd "run-puppet-agent"
}
master_sign_cert()
{
cumin_master_cmd "puppet cert --allow-dns-alt-names sign $1"
}
master_clean_cert()
{
cumin_master_cmd "puppet cert clean $1"
}
master_clean_certs()
{
for client in $NODES ; do
master_clean_cert $client
done
}
master_accept_certs()
{
for client in $NODES ; do
master_sign_cert $client
done
}
echo "I: running puppet agent"
run_puppet_agent_client
echo "I: cleanup ssl on clients"
clean_ssl_client
echo "I: cleanup ssl on master"
master_clean_certs
echo "I: request new certs"
run_puppet_agent_client
echo "I: sign new certs"
master_accept_certs
echo "I: run puppet agent again"
run_puppet_agent_client
|
Notes
See also
Disclaimer: I work for or provide services to the Wikimedia Foundation, and this is the account I try to use for edits or statements I make in that role. However, the Foundation does not vet all my activity, so edits, statements, or other contributions made by this account may not reflect the views of the Foundation.