This is follow-up work. You can also see the previous work at:
- https://github.com/Curt-Park/producer-consumer-fastapi-celery
- https://github.com/Curt-Park/triton-inference-server-practice (00-quick-start)
Install Anaconda and execute the following commands:
$ make env # create a conda environment (need only once)
$ source init.sh # activate the env
$ make setup # setup packages (need only once)
$ source create_model.sh # create the model under model_repository (sketched below)
$ tree model_repository
model_repository
└── mnist_cnn
├── 1
│ └── model.pt
└── config.pbtxt
2 directories, 2 files
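For reference, here is a minimal sketch of what create_model.sh ultimately produces: a TorchScript model saved to model_repository/mnist_cnn/1/model.pt. The CNN layers below are placeholders, not the repository's actual architecture:

import torch
import torch.nn as nn

# A stand-in MNIST CNN; these layers are assumptions, not the repository's model.
class MnistCnn(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3),   # 1x28x28 -> 32x26x26
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 26 * 26, 10),  # 10 digit classes
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Script the model and save it where the Triton model repository expects it.
torch.jit.script(MnistCnn()).save("model_repository/mnist_cnn/1/model.pt")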
Install Redis & Docker, and run the following commands:
$ make triton # run triton server
$ make broker # run redis broker
$ make worker # run celery worker
$ make api # run fastapi server
$ make dashboard # run dashboard that monitors celery
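Once the API server is up, you can send it an inference request. A minimal sketch, assuming the server listens on port 8000 and exposes a /predict endpoint that takes a 28x28 image as JSON; the actual route and payload schema are defined by the FastAPI app in this repository:

import numpy as np
import requests

# Dummy MNIST-sized input; the endpoint and payload shape are assumptions.
image = np.random.rand(28, 28).astype(np.float32)
response = requests.post(
    "http://localhost:8000/predict",
    json={"data": image.tolist()},
)
print(response.json())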
Install Docker & Docker Compose, and run the following command:
$ docker-compose up
You can start up additional Triton servers on other devices.
$ make triton
You can start up additional workers on other devices.
$ export BROKER_URL=redis://redis-broker-ip:6379 # default is localhost
$ export BACKEND_URL=redis://redis-backend-ip:6379 # default is localhost
$ export TRITON_SERVER_URL=triton-server-ip:9000 # default is localhost
$ make worker
- NOTE: The worker needs to run on the same machine as the Triton server due to the shared-memory settings.
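The BROKER_URL, BACKEND_URL, and TRITON_SERVER_URL variables suggest worker-side wiring along these lines (a sketch; the module layout and app name are assumptions):

import os

from celery import Celery

# Read connection targets from the environment, falling back to localhost
# as the commands above describe.
BROKER_URL = os.getenv("BROKER_URL", "redis://localhost:6379")
BACKEND_URL = os.getenv("BACKEND_URL", "redis://localhost:6379")
TRITON_SERVER_URL = os.getenv("TRITON_SERVER_URL", "localhost:9000")

# The Celery app receives tasks through the Redis broker and stores results
# in the Redis backend; tasks then forward inference requests to Triton.
app = Celery("worker", broker=BROKER_URL, backend=BACKEND_URL)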
$ make load # for load test without Triton
or
$ make load-triton # for load test with Triton
Open http://0.0.0.0:8089 and type the URL of the API server.
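Port 8089 is Locust's default web UI, so the load test presumably drives the API through a locustfile along these lines (a sketch; the endpoint and payload are assumptions to match to the actual API):

from locust import HttpUser, between, task

class ApiUser(HttpUser):
    wait_time = between(0.5, 2)  # seconds to wait between simulated requests

    @task
    def predict(self) -> None:
        # Placeholder endpoint and payload; align them with the real API.
        self.client.post("/predict", json={"data": [[0.0] * 28] * 28})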
$ ulimit -n 1024 # set the max number of open file descriptors (load tests open many connections)
We recommend using a Linux server if you would like to run docker-compose up.
$ make setup-dev # setup for developers
$ make format # format scripts
$ make lint # lint scripts
$ make utest # run unit tests
$ make cov # open unit test coverage information