Machine Learning/LiftWing/Inference Services

Summary

Our machine learning models are hosted as Inference Services (isvc), a Custom Resource Definition (CRD) that extends the Kubernetes API. The isvc CRD is provided by KServe, which builds on Knative and Istio to provide serverless, asynchronous microservices designed for performing inference. These services are written in Python and use the asyncio framework (via FastAPI).
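
To make the shape of such a service concrete, here is a minimal sketch of a KServe model server in Python. The class name, payload format, and scoring logic are placeholder assumptions for illustration, not an actual LiftWing service:

  import kserve

  class ExampleModel(kserve.Model):
      # A minimal KServe model: load once at startup, serve async predictions.
      def __init__(self, name: str):
          super().__init__(name)
          self.model = None
          self.load()

      def load(self):
          # In a real isvc the model binary is fetched by the
          # storage-initializer container and loaded from local disk here.
          self.model = lambda features: 0.5  # placeholder scorer
          self.ready = True

      async def predict(self, payload: dict, headers: dict = None) -> dict:
          # KServe invokes this coroutine for each inference request.
          features = payload.get("features", [])
          return {"predictions": [self.model(features)]}

  if __name__ == "__main__":
      kserve.ModelServer().start([ExampleModel("example-model")])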

Steps for Inference Services development

  1. Developing an inference service (see LiftWing/KServe)
  2. Developing a Blubberfile (see Production Image Development)
  3. Testing with Docker and/or ML-Sandbox (see KServe#Example, ML-Sandbox#Deploy)
  4. Configuring CI pipelines (see Pipelines)

Once the production image has been published to the WMF Docker Registry, we can proceed with deployment (see Machine Learning/LiftWing/Deploy).

Development

Clone the repository with the commit-msg hook from Gerrit:
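
The clone-and-hook step looks like this (the repository path machinelearning/liftwing/inference-services is assumed here; adjust it if your project lives elsewhere):

  git clone "https://gerrit.wikimedia.org/r/machinelearning/liftwing/inference-services"
  # Install Gerrit's commit-msg hook so every commit gets a Change-Id footer.
  cd inference-services
  mkdir -p .git/hooks
  curl -Lo .git/hooks/commit-msg https://gerrit.wikimedia.org/r/tools/hooks/commit-msg
  chmod +x .git/hooks/commit-msg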

Docker

Developing and testing a KServe inference service locally with Docker is possible, but it requires some knowledge of how KServe works. It does not require a Kubernetes environment, so it is an easy and convenient way to quickly test an idea or develop a new inference service.
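
A minimal local loop might look like the following; the image tag, Blubberfile path, variant name, and model name are all assumptions for illustration. KServe model servers listen on port 8080 and expose the V1 inference protocol:

  # Build the production variant of the image from the Blubberfile.
  blubber .pipeline/blubber.yaml production | docker build --tag my-isvc --file - .

  # Run the model server locally.
  docker run --rm -p 8080:8080 my-isvc

  # In another shell, send a prediction request (KServe V1 protocol).
  curl "localhost:8080/v1/models/example-model:predict" \
      -H "Content-Type: application/json" \
      -d '{"features": []}'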

Production Images

Each Inference Service is a K8s pod that can comprise several containers (transformer, predictor, explainer, storage-initializer). When we are ready to deploy a service, we first need to create a production image for each service and publish it to the WMF Docker Registry using the Deployment Pipeline.
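
For orientation, a Blubberfile with a test and a production variant might be sketched as follows; the base image, paths, and entrypoint are illustrative assumptions rather than the actual LiftWing configuration:

  version: v4
  base: docker-registry.wikimedia.org/bullseye:latest
  lives:
    in: /srv/app
  variants:
    test:
      python:
        version: python3
        requirements: [requirements-test.txt]
      copies: [local]
    production:
      python:
        version: python3
        requirements: [requirements.txt]
      copies: [local]
      entrypoint: ["python3", "model_server.py"]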

ML-Sandbox

ML-Sandbox is a development cluster running the WMF KServe stack. ML team members use the ML-Sandbox to test inference services before deploying to production.

Pipelines

Since the inference service code is stored in a monorepo, we manage all individual Inference Service images using separate test and publish pipelines on Jenkins.

All pipelines are configured in the .pipeline/config.yaml file in the project root and use PipelineLib to describe what actions need to happen in the continuous integration pipeline and what to publish. Once you have created a Blubberfile and configured a pipeline, you will need to add them to the Deployment Pipeline. This requires you to define the jobs and set triggers in the Jenkins Job Builder spec in the integration/config repo.
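
As a sketch, the PipelineLib configuration for one service's test and publish pipelines might look roughly like this (pipeline names, blubberfile paths, and variants are assumptions):

  pipelines:
    articlequality:
      blubberfile: articlequality/.pipeline/blubber.yaml
      stages:
        - name: run-test
          build: test
          run: true
    articlequality-publish:
      blubberfile: articlequality/.pipeline/blubber.yaml
      stages:
        - name: publish
          build: production
          publish:
            image: true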

First clone the repository:
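
For example:

  # Clone the CI configuration repository from Gerrit.
  git clone "https://gerrit.wikimedia.org/r/integration/config"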

Specifically, you will need to add new entries to the following two files (a sketch follows the list):

  • jjb/project-pipelines.yaml
  • zuul/layout.yaml
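
As a rough illustration only (the exact job templates and layout schema in integration/config may differ), the new entries take approximately this shape:

  # jjb/project-pipelines.yaml (illustrative sketch)
  - project:
      name: inference-services
      pipeline:
        - articlequality
        - articlequality-publish
      jobs:
        - 'trigger-{name}-pipeline-{pipeline}'

  # zuul/layout.yaml (illustrative sketch)
  - name: machinelearning/liftwing/inference-services
    test:
      - trigger-inference-services-pipeline-articlequality
    postmerge:
      - trigger-inference-services-pipeline-articlequality-publish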

For more information about configuring CI, see PipelineLib/Guides/How to configure CI for your project.

Test/Build pipelines

Currently, our test/build pipelines are triggered whenever we edit the code of a given InferenceService. When we push a new CR to Gerrit, jenkins-bot starts a job on the isvc's pipeline. This job uses the tox tool to run a test suite on our isvc code: right now it just runs flake8 and the black formatter, but it could be expanded for different model types. If the code passes the checks, we attempt to build the full production image (as defined in the Blubberfile).
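
A tox configuration matching that description might look like this minimal sketch (the env name and paths are assumptions):

  [tox]
  envlist = lint
  skipsdist = True

  [testenv:lint]
  deps =
      flake8
      black
  commands =
      flake8 .
      black --check .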

Publish pipelines

The publish pipelines run as post-merge jobs. Whenever a CR is merged on Gerrit, the post-merge jobs run (as seen in Zuul) and attempt to rebuild the production image; if the build succeeds, the image is published to the WMF Docker Registry. After the image has been pushed, PipelineBot responds with a message on the Gerrit CR containing the newly tagged image URI.
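
The URI in that message follows the registry's naming convention; as an illustrative (not actual) example:

  docker-registry.wikimedia.org/wikimedia/machinelearning-liftwing-inference-services-outlink:2022-01-01-000000-production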

Jenkins pipelines

Each of our pipelines runs jobs on Jenkins and is managed via Zuul:

Production Image        Pipelines
articlequality          articlequality, articlequality-publish
draftquality            draftquality, draftquality-publish
editquality             editquality, editquality-publish
topic                   topic, topic-publish
outlink                 outlink, outlink-publish
outlink-transformer     outlink-transformer, outlink-transformer-publish