This is a container that simply runs the DEEP as a Service API component with the audio-classification-tf application (src: audio-classification-tf).
To run the Docker container directly from Docker Hub and start using the API, simply run the following command:
$ docker run -ti -p 5000:5000 -p 6006:6006 -p 8888:8888 deephdc/deep-oc-audio-classification-tf
This command will pull the Docker container from the Docker Hub deephdc repository and start the default command (deepaas-run --listen-ip=0.0.0.0).
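If you prefer to keep the API running in the background, you can start the same container detached and follow its logs with standard Docker options (the container name used here, deep-oc-audio, is just an illustrative choice):

$ docker run -d -p 5000:5000 -p 6006:6006 -p 8888:8888 --name deep-oc-audio deephdc/deep-oc-audio-classification-tf
$ docker logs -f deep-oc-audio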
docker-compose.yml allows you to run the application with various configurations via docker-compose.
N.B.: docker-compose.yml uses version '2.3' of the compose file format, which requires docker 17.06.0+ and docker-compose 1.16.0+; see https://docs.docker.com/compose/install/
If you want to use an Nvidia GPU, you also need nvidia-docker and docker-compose 1.19.0+; see nvidia/FAQ
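A typical docker-compose workflow could then look like the following sketch; it assumes the docker-compose.yml shipped in this repository defines the service with the port mappings shown above:

$ git clone https://github.com/deephdc/DEEP-OC-audio-classification-tf
$ cd DEEP-OC-audio-classification-tf
$ docker-compose up -d      # start the service in the background
$ docker-compose down       # stop and remove the containers when done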
If you want to build the container directly on your machine (for instance, because you want to modify the Dockerfile), follow these instructions:
Building the container:

- Get the DEEP-OC-audio-classification-tf repository (this repo):

  $ git clone https://github.com/deephdc/DEEP-OC-audio-classification-tf

- Build the container:

  $ cd DEEP-OC-audio-classification-tf
  $ docker build -t deephdc/deep-oc-audio-classification-tf .

- Run the container:

  $ docker run -ti -p 5000:5000 -p 6006:6006 -p 8888:8888 deephdc/deep-oc-audio-classification-tf
You can also run Jupyter Lab inside the container:
$ docker run -ti -p 5000:5000 -p 6006:6006 -p 8888:8888 deephdc/deep-oc-audio-classification-tf /bin/bash
root@47a6604ef008:/srv# jupyter lab --allow-root
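Note that, to reach Jupyter Lab from your host browser through the mapped 8888 port, you may need to bind it to all interfaces; whether this is required depends on the image's Jupyter configuration, so treat the extra flags as an assumption:

root@47a6604ef008:/srv# jupyter lab --allow-root --ip=0.0.0.0 --port=8888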
These three steps will download the repository from GitHub and build the Docker container locally on your machine. You can inspect and modify the Dockerfile to check what is going on. For instance, you can pass the --debug=True flag to the deepaas-run command in order to enable debug mode.
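For instance, a debug-enabled run could look like the following sketch, which simply overrides the container's default command and appends the flag (editing the CMD in the Dockerfile would work as well):

$ docker run -ti -p 5000:5000 -p 6006:6006 -p 8888:8888 deephdc/deep-oc-audio-classification-tf deepaas-run --listen-ip=0.0.0.0 --debug=True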
Once the container is up and running, browse to http://localhost:5000/ui to get the OpenAPI (Swagger) documentation page.
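Besides the web UI, you can also query the API directly from the command line. The example below is a sketch that assumes the DEEPaaS V2 endpoints; the exact path may differ depending on the deepaas version, so check the Swagger page for the routes actually exposed:

$ curl http://localhost:5000/v2/models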