Machine Learning/LiftWing/Inference Services
Summary
Our Machine Learning models are hosted as Inference Services (isvc), which are Custom Resource Definitions (CRDs) that extend the Kubernetes API. The isvc CRD is provided by KServe, which builds on Knative and Istio to provide serverless, asynchronous microservices designed for performing inference. These services are written in Python and use the asyncio framework (via FastAPI).
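To make this concrete, an isvc is declared with a Kubernetes manifest along the following lines. This is a minimal sketch only: the name, namespace, and image are placeholders, and real LiftWing manifests carry additional configuration.

```yaml
# Minimal, illustrative InferenceService manifest (placeholders throughout).
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: example-model            # placeholder service name
  namespace: example-namespace   # placeholder namespace
spec:
  predictor:
    containers:
      - name: kserve-container
        # placeholder image; production images live in the WMF Docker Registry
        image: docker-registry.wikimedia.org/example/example-model:stable
```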
Steps for Inference Services development
- Developing an inference service (see LiftWing/KServe)
- Developing a Blubberfile (see Production Image Development)
- Testing with Docker and/or ML-Sandbox (see KServe#Example, ML-Sandbox#Deploy)
- Configuring CI pipelines (see Pipelines)
Once the production image has been published to the WMF Docker Registry, we can proceed to deployment (see Machine Learning/LiftWing/Deploy).
Development
Clone the repository with the commit-msg hook from Gerrit:
- liftwing/inference-services - the monorepo that the ML team uses to store the inference service code.
Docker
Developing and testing a KServe inference service locally with Docker is possible, but it requires a little knowledge of how KServe works. It does not require a K8s environment, so it is an easy and convenient way to quickly test an idea or develop a new inference service. A minimal model-server sketch follows the guide link below.
- KServe Guide: Machine Learning/LiftWing/KServe
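For orientation, the code you run and test in Docker is typically a small Python class built on the kserve library. The sketch below is hypothetical: the class name, payload handling, and placeholder model are invented for illustration, and a real service fetches and loads an actual model binary in load().

```python
# Minimal, illustrative KServe custom predictor (names and logic are placeholders).
from typing import Dict

import kserve


class ExampleModel(kserve.Model):
    def __init__(self, name: str):
        super().__init__(name)
        self.model = None
        self.load()

    def load(self) -> None:
        # A real service loads its model binary from local disk here
        # (fetched by the storage-initializer in production).
        self.model = lambda features: {"score": 0.5}  # placeholder model
        self.ready = True

    async def predict(self, request: Dict, headers: Dict = None) -> Dict:
        # Request parsing and feature extraction are service-specific.
        features = request.get("features", {})
        return {"predictions": self.model(features)}


if __name__ == "__main__":
    model = ExampleModel("example-model")
    kserve.ModelServer(workers=1).start([model])
```

Once the server is running inside a container, you can POST a JSON payload to the standard KServe v1 endpoint (/v1/models/<model-name>:predict) to exercise it.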
Production Images
Each Inference Service is a K8s pod that can be composed of several containers (transformer, predictor, explainer, storage-initializer). When we are ready to deploy a service, we first need to create a production image for it and publish it to the WMF Docker Registry using the Deployment Pipeline. An illustrative Blubber configuration is sketched after the guide link below.
- Production Image Development Guide: Machine Learning/LiftWing/Inference Services/Production Image Development
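As a rough illustration only (the real structure is documented in the guide above; the base image, paths, and variant contents below are placeholders), a Blubberfile usually defines a test variant and a production variant:

```yaml
# Illustrative Blubberfile sketch; all names and paths are placeholders.
version: v4
base: docker-registry.wikimedia.org/bullseye      # placeholder base image
lives:
  in: /srv/example-model
variants:
  build:
    python:
      version: python3
      requirements: [example-model/requirements.txt]
  test:
    includes: [build]
    entrypoint: [tox]
  production:
    includes: [build]
    entrypoint: [python3, example-model/model.py]
```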
ML-Sandbox
ML-Sandbox is a development cluster running the WMF KServe stack. ML team members use the ML-Sandbox to test inference services before deploying to production.
- ML-Sandbox Guide: Machine Learning/LiftWing/ML-Sandbox
Pipelines
Since the inference service code is stored in a monorepo, we manage all individual Inference Service images using separate test and publish pipelines on Jenkins.
All pipelines are configured in the .pipeline/config.yaml file in the project root and use PipelineLib to describe what actions need to happen in the continuous integration pipeline and what to publish. Once you have created a Blubberfile and configured a pipeline, you will need to add them to the Deployment Pipeline. This requires you to define the jobs and set triggers in the Jenkins Job Builder spec in the integration/config repo.
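Before moving on to the Jenkins/Zuul side, here is a rough sketch of what a pipeline entry in .pipeline/config.yaml can look like. It is illustrative only: the pipeline name, Blubberfile path, and stage names are placeholders, and the monorepo defines one such entry per service.

```yaml
# Illustrative PipelineLib configuration sketch (placeholders throughout).
pipelines:
  example-model:
    blubberfile: example-model/blubber.yaml
    stages:
      - name: run-test           # build the test variant and run it
        build: test
        run: true
      - name: publish            # build the production variant and publish the image
        build: production
        publish:
          image: true
```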
To add those jobs and triggers, first clone the repository:
- integration/config - the Wikimedia configuration for Jenkins.
Specifically, you will need to add new entries to the following two files:
- jjb/project-pipelines.yaml
- zuul/layout.yaml
For more information about configuring CI, see PipelineLib/Guides/How to configure CI for your project.
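To give a sense of the shape of those two entries, the sketch below is heavily hedged: the project, template, and job names are placeholders, and the actual templates and pipeline names are defined in integration/config.

```yaml
# jjb/project-pipelines.yaml (illustrative entry; names are placeholders)
- project:
    name: inference-services
    pipeline:
      - test
      - publish
    jobs:
      - 'trigger-{name}-pipeline-{pipeline}'

# zuul/layout.yaml (illustrative entry under the projects section; placeholders)
- name: machinelearning/liftwing/inference-services
  test:
    - trigger-inference-services-pipeline-test
  postmerge:
    - trigger-inference-services-pipeline-publish
```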
Test/Build pipelines
Currently, our test/build pipelines are triggered whenever we edit code for a given InferenceService. When we push a new CR to Gerrit, jenkins-bot starts a job on the isvc's pipeline. This job uses the tox tool to run a test suite on our isvc code: right now it just runs flake8 and the black formatter, but it could be expanded for different model types. If the code passes the checks, we then attempt to build the full production image (as defined in the Blubberfile).
Publish pipelines
The publish pipelines are run as post-merge jobs. Whenever a CR is merged on Gerrit, the post-merge jobs run (as seen in Zuul) and attempt to rebuild the production image; if the build succeeds, the image is published to the WMF Docker Registry. After the image has been pushed, PipelineBot replies on the Gerrit CR with the newly tagged image URI.
Jenkins pipelines
Each of our pipelines runs jobs on Jenkins and is managed via Zuul: