This is a playground for various examples using NOS.
Before proceeding, ensure that the NOS server package and all its dependencies have been installed. If you haven't set up the development environment yet, refer to the quick start document from the NOS docs.
To serve the model you wish to deploy, execute the following commands:
```shell
cd examples/MODEL_ID
nos serve up -c serve.yaml
```
You can then run inference with the NOS Python client library:

```python
from typing import List

from PIL import Image

from nos.client import Client

client = Client()

model_id = "YOUR-MODEL-ID"
models: List[str] = client.ListModels()
# Check that the selected model has been served.
assert model_id in models

model = client.Module(model_id)
inputs = YOUR-MODEL-INPUT  # Replace with the inputs your model expects.
response = model(inputs)   # Get the output as `response`.
# If the model's default method is not "__call__", call it explicitly,
# e.g. response = model.<method_name>(inputs).
```
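The served-model check above can be wrapped in a small helper so a script fails with a clear message when the requested model is not being served. This is a hypothetical convenience sketch, not part of the NOS client API; in practice you would pass it the result of `client.ListModels()`:

```python
from typing import List


def require_served(model_id: str, served: List[str]) -> str:
    """Return model_id if it appears in the served-model list, else raise.

    `served` is expected to be the list returned by client.ListModels().
    """
    if model_id not in served:
        raise ValueError(
            f"Model {model_id!r} is not being served; available models: {served}"
        )
    return model_id


# Example with a stand-in list instead of a live client.ListModels() call:
print(require_served("bark", ["bark", "music-gen"]))  # → bark
```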
The examples cover the following `model_id`s:

```python
model_id: str = "meta-llama/Llama-2-7b-chat-hf"
model_id: str = "m-bain/whisperx-large-v2"
model_id: List[str] = ["sd-xl-turbo", "playground-v2", "latent-consistency-model"]
model_id: str = "animate-diff"
model_id: str = "stable-video-diffusion"
model_id: str = "mv-dream"
model_id: str = "dream-gaussian"
model_id: str = "bark"
model_id: str = "music-gen"
```
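For scripting across all of the examples, the IDs above can be collected into a single structure. This is an illustrative sketch: the dictionary keys are hypothetical labels chosen here, and only the model IDs themselves come from the list above:

```python
from typing import Dict, List, Union

# Model ids from the examples above; the keys are hypothetical labels.
EXAMPLE_MODELS: Dict[str, Union[str, List[str]]] = {
    "chat": "meta-llama/Llama-2-7b-chat-hf",
    "transcription": "m-bain/whisperx-large-v2",
    "text-to-image": ["sd-xl-turbo", "playground-v2", "latent-consistency-model"],
    "text-to-video": "animate-diff",
    "image-to-video": "stable-video-diffusion",
    "multi-view-3d": "mv-dream",
    "image-to-3d": "dream-gaussian",
    "text-to-speech": "bark",
    "text-to-music": "music-gen",
}


def all_model_ids() -> List[str]:
    """Flatten the mapping into a single list of model ids."""
    ids: List[str] = []
    for value in EXAMPLE_MODELS.values():
        ids.extend(value if isinstance(value, list) else [value])
    return ids
```

One could, for instance, loop over `all_model_ids()` and check each entry against `client.ListModels()` to see which examples are currently being served.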
- 💬 For assistance, send us an email at support@autonomi.ai or join our Discord.
- 📣 Stay updated on our products by following us on Twitter and LinkedIn.