Here we extend the standard MLflow client to manage multiple environments within a single MLflow instance, which mainly involves the model registry and experiment management. Our goal is to run several logical environments (acc, preprod, prod) in the same Databricks workspace with proper permission controls. We wrote a blog post about the combination of our MLflow client with the basic permission structure available through the Terraform Databricks provider. The client provides the following (a sketch of the scoping idea follows the list):
- abstraction for environment scoped model names
- helper function for logging a model and registering a model version
- automatic model stage assignment based on the environment
- abstraction for environment scoped experiment folders
- methods for common usage patterns (e.g. loading the latest model version of any model flavor)
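As an illustration of the scoping idea, the sketch below shows how environment scoped model names and automatic stage assignment might be wired on top of the vanilla MlflowClient. This is an assumption for illustration only, not the package's actual implementation; the class, helper names, and the env-to-stage mapping are hypothetical:

import mlflow
from mlflow.tracking import MlflowClient

# Hypothetical env-to-stage mapping, for illustration only.
ENV_TO_STAGE = {"acc": "Staging", "preprod": "Staging", "prod": "Production"}

class EnvScopedClient(MlflowClient):
    """Sketch of an environment scoped registry client (not the real EnvMlflowClient)."""

    def __init__(self, env_name: str, **kwargs):
        super().__init__(**kwargs)
        self.env_name = env_name

    def _scoped_name(self, name: str) -> str:
        # Environment scoped model name, e.g. "acc.deepar".
        return f"{self.env_name}.{name}"

    def get_latest_versions(self, name: str, stages=None):
        # Resolve the scoped registry name before delegating to MlflowClient.
        return super().get_latest_versions(self._scoped_name(name), stages=stages)

    def register_model_version(self, model_uri: str, name: str):
        # Register a new version, then move it to the stage implied by the environment.
        mv = mlflow.register_model(model_uri, self._scoped_name(name))
        self.transition_model_version_stage(
            self._scoped_name(name), mv.version, stage=ENV_TO_STAGE[self.env_name]
        )
        return mv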
PyPI repository:
https://pypi.org/project/environment-mlflow-client/
pip install environment-mlflow-client
Python:
from environment_mlflow_client import EnvMlflowClient

model_name = "deepar"
# Client scoped to the "test" logical environment.
mlflow_client = EnvMlflowClient(env_name="test")
# Latest registered versions of the model within that environment.
model_versions = mlflow_client.get_latest_versions(name=model_name)
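Because the environment is just a constructor argument, the same code can point at another logical environment, for instance prod:

prod_client = EnvMlflowClient(env_name="prod")
# Latest versions of the same model, scoped to the prod environment.
prod_versions = prod_client.get_latest_versions(name=model_name)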
Compatible with MLflow 2.x.
A fixture is included that starts a local MLflow instance and cleans it up after the test session finishes, so the unit tests run against the real MLflow API to validate our client.
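A minimal sketch of what such a fixture could look like, assuming pytest and the mlflow CLI are available; the port and storage paths are arbitrary choices, not the package's actual fixture:

import socket
import subprocess
import time

import pytest

MLFLOW_PORT = 5001  # arbitrary free port for the throwaway server

@pytest.fixture(scope="session")
def local_mlflow(tmp_path_factory):
    """Start a local MLflow tracking server and tear it down after the session."""
    store = tmp_path_factory.mktemp("mlflow")
    proc = subprocess.Popen(
        [
            "mlflow", "server",
            "--backend-store-uri", f"sqlite:///{store}/mlflow.db",
            "--default-artifact-root", str(store / "artifacts"),
            "--port", str(MLFLOW_PORT),
        ]
    )
    # Poll until the server accepts TCP connections.
    for _ in range(30):
        try:
            with socket.create_connection(("localhost", MLFLOW_PORT), timeout=1):
                break
        except OSError:
            time.sleep(1)
    else:
        proc.terminate()
        pytest.fail("local MLflow server did not start in time")
    yield f"http://localhost:{MLFLOW_PORT}"
    # Clean up the server once the testing session is finished.
    proc.terminate()
    proc.wait()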
GitHub Actions are triggered on pull requests to validate the code change against the unit tests. When a commit is tagged on main, a Python wheel is built and published to PyPI and GitHub Releases.
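The release part could look roughly like the workflow below. This is a generic sketch, not the repository's actual configuration; the trigger pattern, action versions, and secret name are assumptions:

# .github/workflows/release.yml (sketch; names and versions are assumptions)
name: release
on:
  push:
    tags: ["v*"]
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      # Build the wheel and sdist.
      - run: pip install build && python -m build
      # Publish to PyPI using an API token stored as a repository secret.
      - uses: pypa/gh-action-pypi-publish@release/v1
        with:
          password: ${{ secrets.PYPI_API_TOKEN }}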