NOS Playground 🛝

This is a playground for various examples using NOS.

Installation

Before proceeding, ensure that the NOS server package and all its dependencies have been installed. If you haven't set up the development environment yet, refer to the quick start document from the NOS docs.
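
If NOS is not installed yet, a typical setup from PyPI looks like the lines below (a sketch; the exact package extras and supported environments are documented in the NOS quick start):

pip install torch-nos            # NOS client library
pip install "torch-nos[server]"  # NOS server and its dependencies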

Quick Start

To serve the model you wish to deploy, execute the following commands:

cd examples/MODEL_ID
nos serve up -c serve.yaml
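
Each example directory ships its own serve.yaml describing the model to serve. As a rough, hypothetical sketch of what such a file contains (the names below are illustrative; the serve.yaml inside each examples/MODEL_ID directory is the authoritative reference):

models:
  my-model:                      # model_id that the client will reference
    model_cls: MyModel           # Python class implementing the model
    model_path: models/model.py  # file defining that class
    default_method: __call__     # method invoked by model(inputs)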

You can then use the NOS Python client to run inference:

from typing import List

from PIL import Image
from nos.client import Client

# Connect to the NOS server.
client = Client()

# Check that the selected model has been served.
model_id = "YOUR-MODEL-ID"
models: List[str] = client.ListModels()
assert model_id in models

# Get a handle to the served model.
model = client.Module(model_id)

# Run inference; the inputs are model-specific (e.g. a prompt string or a PIL.Image).
inputs = YOUR-MODEL-INPUT
response = model(inputs)  # Get the output as response.
# If the model's default method is not "__call__", invoke it by name instead,
# e.g. model.method_name(inputs).
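
Before issuing requests, it can also help to block until the server is up and confirm the connection is healthy; a minimal sketch using the NOS client:

from nos.client import Client

client = Client()
client.WaitForServer()       # block until the gRPC server is reachable
assert client.IsHealthy()    # verify the connection before running inference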

Available Examples

Chat Completion

model_id: str = "meta-llama/Llama-2-7b-chat-hf"

Video Transcription

model_id: str = "m-bain/whisperx-large-v2"

Text to Image

model_id: List[str] = ["sd-xl-turbo",
                        "playground-v2",
                        "latent-consistency-model"]

Text to Video

model_id: str = "animate-diff"

Image to Video

model_id: str = "stable-video-diffusion"

Text to 360-View Images

model_id: str = "mv-dream"

Image to Mesh Model

model_id: str = "dream-gaussian"

Text to Speech

model_id: str = "bark"

Text to Music

model_id: str = "music-gen"

Reach Us
