This project aims to build a pipeline that can be used within a web or mobile app to process real-world, user-supplied images. Given an image of a dog, our algorithm produces an estimate of the dog's breed. If supplied an image of a human, the code identifies the dog breed that the person most resembles.
Along with exploring state-of-the-art CNN models for classification and localization, this project aims to highlight the challenges involved in piecing together a series of models designed to perform various tasks in a data processing pipeline. Each model has its strengths and weaknesses, and engineering a real-world application often involves solving many problems without a perfect answer.
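As a rough illustration of how such a pipeline fits together, the sketch below shows one way the top-level dispatch might look. The detector and classifier callables are hypothetical placeholders for the components built in the notebook, not the project's actual function names.

```python
def classify_image(img_path, dog_detector, face_detector, breed_predictor):
    """Route a user-supplied image through the detection/classification pipeline.

    The three callables are assumed to exist: `dog_detector` and
    `face_detector` each take an image path and return a bool, while
    `breed_predictor` returns a breed name string.
    """
    if dog_detector(img_path):
        return f"Detected a dog. Predicted breed: {breed_predictor(img_path)}"
    if face_detector(img_path):
        return f"Detected a human. Resembling dog breed: {breed_predictor(img_path)}"
    return "Neither a dog nor a human face was detected in this image."
```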
To get a better understanding of the project, you may read the Project Report.
This project mainly makes use of Python, Jupyter Notebooks, OpenCV, and PyTorch. Other packages used in this project are listed in the `requirements.txt` file and may be installed following the instructions in the Project Instructions section.
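Since OpenCV is among the dependencies, one plausible way to handle the human-face side of the pipeline is a Haar-cascade face detector. The sketch below is only an assumption about the approach, using OpenCV's bundled frontal-face cascade rather than any file specific to this repository.

```python
import cv2

# Load OpenCV's bundled frontal-face Haar cascade (an assumption; the
# notebook may use a different cascade or detector entirely).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_alt.xml"
)

def face_detector(img_path):
    """Return True if at least one human face is detected in the image."""
    img = cv2.imread(img_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray)
    return len(faces) > 0
```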
To run through the code manually, you may either follow the instructions below to run it locally, or simply open the notebook on Google Colab via this URL: https://bit.ly/3hFjpC3

- Clone the repository and navigate to the downloaded folder:

      git clone https://github.com/aaakashkumar/Dog-Breed-Classifier.git
      cd Dog-Breed-Classifier

- Download the dog dataset. Unzip the folder and place it in the repo at `path/to/dog-project/dogImages`. The `dogImages/` folder should contain 133 folders, each corresponding to a different dog breed (a data-loading sketch appears after these steps).

- Download the human dataset. Unzip the folder and place it in the repo at `path/to/dog-project/lfw`. If you are using a Windows machine, you are encouraged to use 7zip to extract the folder.

- Install the necessary Python packages:

      pip install -r requirements.txt

- Open a terminal window and navigate to the project folder. Open the notebook and follow the instructions:

      jupyter notebook dog_app.ipynb
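Once the datasets are in place, the breed sub-folders map naturally onto class labels. The sketch below assumes the `dogImages/` layout described above and uses torchvision's `ImageFolder` with standard ImageNet normalization; if the download is split into `train`/`valid`/`test` sub-folders, point `ImageFolder` at each split instead. It is an illustration of the idea, not the notebook's exact code.

```python
import torch
from torchvision import datasets, transforms

# Standard ImageNet-style preprocessing (an assumption, not necessarily the
# notebook's exact transforms).
train_transforms = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Assumed path matching the layout described in the steps above.
dog_data = datasets.ImageFolder("dogImages", transform=train_transforms)
dog_loader = torch.utils.data.DataLoader(dog_data, batch_size=32, shuffle=True)

print(f"Number of breed classes: {len(dog_data.classes)}")  # expected: 133
```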
This project was submitted as the Capstone Project for the Machine Learning Engineering Nanodegree Program at Udacity.