This repository provides a comprehensive guide to setting up a complete MLOps pipeline for a Dog Breed Classifier application using Teachable Machine, TensorFlow, Flask, Docker, and CI/CD practices. The goal is to demonstrate how Machine Learning (ML) models are integrated within a continuous integration and continuous deployment (CI/CD) framework.
This tutorial will cover:
- Training a model with Google's Teachable Machine
- Setting up a Flask application to serve predictions
- Containerizing the application with Docker
- Implementing a CI/CD pipeline using GitHub Actions
Prerequisites:

- Git
- Python 3.8+
- Docker
- A GitHub account
- Basic familiarity with Flask and Docker
- Visit Teachable Machine, create a new image project, upload images of various dog breeds, train the model, and export it as a TensorFlow model.
- Download the `model.json` and `weights.bin` files.
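Teachable Machine's TensorFlow export ships the model as a `model.json` plus binary weight shards (the TensorFlow.js layers format), which can be loaded into Keras from Python with the `tensorflowjs` helper package. The snippet below is a minimal sketch under that assumption; the `model/load.py` module name and `load_model` helper are illustrative choices for this guide, not part of the export itself.

```python
# model/load.py -- hypothetical helper for loading the Teachable Machine export.
# Assumes `pip install tensorflow tensorflowjs` and the exported files under model/.
import tensorflowjs as tfjs


def load_model(model_json_path: str = "model/model.json"):
    """Load the exported model (model.json + weight shards) as a tf.keras model."""
    return tfjs.converters.load_keras_model(model_json_path)


if __name__ == "__main__":
    model = load_model()
    model.summary()  # sanity check: prints the layer stack of the exported model
```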
Create a basic Flask application to serve the model. The application will allow users to upload an image and receive the dog breed prediction.
```
Dog-Breed-classifier-MLOps/
│
├── app/
│   ├── static/
│   │   ├── css/
│   │   ├── js/
│   │   └── images/
│   ├── templates/
│   ├── __init__.py
│   ├── views.py
│   └── predict.py
│
├── model/
│   ├── model.json
│   └── group1-shard1of1.bin
│
├── tests/
│   └── test_app.py
│
├── Dockerfile
├── requirements.txt
└── README.md
```
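The Dockerfile later in this guide starts Gunicorn with the `app:create_app()` factory pattern, so `app/__init__.py` must expose a `create_app` function. The following is a minimal sketch under that assumption; the blueprint names (`views_bp`, `predict_bp`) are illustrative and should match whatever you define in `views.py` and `predict.py`.

```python
# app/__init__.py -- hypothetical application factory matching the Gunicorn command
from flask import Flask


def create_app() -> Flask:
    """Create and configure the Flask application."""
    app = Flask(__name__)

    # Register the routes defined in app/views.py and app/predict.py.
    from .views import views_bp
    from .predict import predict_bp
    app.register_blueprint(views_bp)
    app.register_blueprint(predict_bp)

    return app
```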
- Flask serves a webpage that allows users to upload images.
- Predictions are made using the TensorFlow model loaded in Flask.
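As a concrete illustration of the upload-and-predict flow, here is a sketch of what `app/predict.py` might contain. The breed labels, the 224×224 input size (Teachable Machine's default for image models), and the `load_model` helper from the earlier snippet are assumptions; adjust them to match your own export and training classes.

```python
# app/predict.py -- hypothetical prediction blueprint (sketch, not the original code)
import numpy as np
from flask import Blueprint, jsonify, request
from PIL import Image

from model.load import load_model  # helper sketched earlier in this guide

predict_bp = Blueprint("predict", __name__)

# Example labels -- replace with the classes used when training on Teachable Machine.
BREEDS = ["beagle", "bulldog", "golden_retriever", "poodle"]

model = load_model()  # loaded once at import time so requests stay fast


@predict_bp.route("/predict", methods=["POST"])
def predict():
    """Accept an uploaded image and return the most likely dog breed."""
    file = request.files.get("image")
    if file is None:
        return jsonify({"error": "no image uploaded"}), 400

    # Resize and normalize to match the model's expected 224x224 RGB input.
    image = Image.open(file.stream).convert("RGB").resize((224, 224))
    batch = np.expand_dims(np.asarray(image, dtype=np.float32) / 255.0, axis=0)

    probabilities = model.predict(batch)[0]
    return jsonify({
        "breed": BREEDS[int(np.argmax(probabilities))],
        "confidence": float(np.max(probabilities)),
    })
```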
Containerize the Flask application using Docker to ensure it can be deployed consistently across any environment.
```dockerfile
# Use a lightweight Python base image
FROM python:3.9-slim

# Copy the application code and install its Python dependencies
WORKDIR /app
COPY . /app
RUN pip install --no-cache-dir -r requirements.txt

# Expose the Flask port and serve the app with Gunicorn via the create_app() factory
EXPOSE 5000
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "app:create_app()"]
```
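To sanity-check the container locally before wiring up CI/CD, you can run `docker build -t dog-breed-classifier .` followed by `docker run -p 5000:5000 dog-breed-classifier`; the application should then be reachable at http://localhost:5000. (The `dog-breed-classifier` tag is just an example name.)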
Set up GitHub Actions to automate testing, building, and deploying the Flask application.
- Continuous Integration:
  - Run tests.
  - Build the Docker image.
- Continuous Deployment:
  - Push the Docker image to a registry.
  - Deploy the image to a hosting service like Heroku or AWS.
```yaml
name: CI/CD Pipeline

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: 3.9

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt

      - name: Run tests
        run: |
          pytest

      # Pushing an image requires registry credentials stored as repository secrets.
      - name: Log in to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Build and push Docker image
        uses: docker/build-push-action@v2
        with:
          context: .
          push: true
          tags: user/myapp:latest  # replace with your own registry namespace and image name
```
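The `Run tests` step assumes at least one pytest test exists under `tests/`. As a starting point, a minimal `tests/test_app.py` for the application factory sketched earlier might look like this; the routes it checks (`/` and `/predict`) are assumptions based on the sketches above, not requirements of the pipeline.

```python
# tests/test_app.py -- minimal smoke tests for the Flask app (sketch)
import pytest

from app import create_app


@pytest.fixture
def client():
    """Build the app via the factory and return a test client."""
    app = create_app()
    app.config["TESTING"] = True
    return app.test_client()


def test_homepage_responds(client):
    """The index page (served by app/views.py) should return HTTP 200."""
    response = client.get("/")
    assert response.status_code == 200


def test_predict_requires_image(client):
    """POSTing to /predict without a file should be rejected."""
    response = client.post("/predict", data={})
    assert response.status_code == 400
```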
This tutorial provides a basic framework for building an MLOps pipeline that combines machine learning model training, a web application, Docker containerization, and a CI/CD workflow. It aims to show how machine learning development integrates with production operations, improving automation and monitoring at every step of building an ML system.