A playground for all kinds of examples built with NOS.
We assume the NOS server package and all of its dependencies have already been installed. If you haven't set up the development environment yet, start with the quick-start guide in the NOS docs.
To serve the model you would like to deploy, simply run:
```shell
cd examples/MODEL_ID
nos serve up -c serve.yaml
```
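Each example directory ships its own `serve.yaml` describing the model(s) to serve. The fragment below is only an illustrative sketch of what such a file might contain; the field names here are assumptions, not the real NOS schema, so consult the `serve.yaml` inside the example directory you are deploying:

```yaml
# Hypothetical sketch only -- field names are assumptions, not the real NOS schema.
images:
  example-gpu:                      # custom runtime image (assumed)
    base: autonomi/nos:latest-gpu
models:
  YOUR-MODEL-ID:                    # the model to serve
    runtime_env: example-gpu
```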
Then you can use the NOS Python client library to run inference:
```python
from typing import List

from PIL import Image
from nos.client import Client

# Connect to the NOS server.
client = Client("[::]:50051")

# Check that the selected model has been served.
model_id = "YOUR-MODEL-ID"
models: List[str] = client.ListModels()
assert model_id in models

# Load the served model and run inference.
model = client.Module(model_id)
inputs = YOUR-MODEL-INPUT
response = model(inputs)  # Get the output as the response.
# If the model's default method is not "__call__", call the method named by
# model.DEFAULT_METHOD_NAME instead.
```
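The bare `assert model_id in models` check above fails without any explanation when the model isn't served. A small helper (hypothetical, not part of the NOS client API) can make that failure mode clearer by listing what is actually available:

```python
from typing import List

def ensure_model_served(model_id: str, served: List[str]) -> str:
    """Raise a descriptive error if model_id is not among the served models."""
    if model_id not in served:
        raise RuntimeError(
            f"Model '{model_id}' is not served; "
            f"available models: {', '.join(sorted(served))}"
        )
    return model_id

# Usage with a stand-in for the output of client.ListModels():
served = ["sd-xl-turbo", "bark"]
print(ensure_model_served("bark", served))  # → bark
```

You would call it as `ensure_model_served(model_id, client.ListModels())` in place of the assert.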
For each example, set `model_id` to the model you are serving, for instance:

- Chat (LLM): `model_id = "meta-llama/Llama-2-7b-chat-hf"`
- Audio transcription: `model_id = "m-bain/whisperx-large-v2"`
- Image generation: `model_id` as one of `"sd-xl-turbo"`, `"playground-v2"`, `"latent-consistency-model"`
- Video generation: `model_id = "animate-diff"` or `"stable-video-diffusion"`
- Text-to-3D: `model_id = "mv-dream"`
- Audio generation: `model_id = "bark"` or `"music-gen"`
- 💬 Send us an email at support@autonomi.ai or join our Discord for help.
- 📣 Follow us on Twitter and LinkedIn to stay up to date on our products.