Gateway

Core infrastructure stack for building production-ready AI Applications




Introduction

🌈 A robust, cloud-native AI Gateway: the core LLMOps infrastructure stack for building production-ready AI applications. It provides a Universal API for inference across 100+ LLMs (OpenAI, Azure, Cohere, Anthropic, HuggingFace, Replicate, Stable Diffusion).

🚀 Key Features

✅  Seamless integration with a Universal API
✅  Reliable LLM routing with AI Router
✅  Load balancing across multiple models and providers
✅  Automatic retries with exponential fallbacks
✅  High availability and resiliency using production-ready LLMOps
🚧  Detailed usage analytics
🚧  PII detection and masking
🚧  Simple and semantic caching for cost reduction
🚧  No vendor lock-in observability: logging, monitoring, and tracing
✅  Enterprise-ready, with enhanced security, reliability, and scale, plus support for custom deployments

Supported Providers

Provider      Provider Name   Support   Supported Endpoints
OpenAI        openai          ✅        /chat/completions, /chat/completions:stream
Groq          groq            ✅        /chat/completions, /chat/completions:stream
Anyscale      anyscale        ✅        /chat/completions
Deepinfra     deepinfra       ✅        /chat/completions
Together AI   togetherai      ✅        /chat/completions

Not supported (yet): images, audio, files, fine-tunes, moderations
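
The Provider Name column is the value the gateway expects in the x-ms-provider request header (see the Usage section below). As a rough sketch, switching backends is a matter of changing that header and, presumably, passing that provider's API key in the Authorization header; the Groq model name here is only illustrative:

curl \
--header "Content-Type: application/json" \
--header "x-ms-provider: groq" \
--header "Authorization: Bearer {{GROQ_API_KEY}}" \
--data '{"model":"mixtral-8x7b-32768","messages":[{"role":"user","content":"hi"}]}' \
http://localhost:8080/v1/chat/completions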

Installation

The AI Gateway can be installed on macOS, Windows, Linux, OpenBSD, and FreeBSD, and runs on virtually any machine.

Binary (Cross-platform)

Download the appropriate version for your platform from the releases page. Once downloaded, the binary can be run from anywhere; ideally, install it somewhere on your PATH for easy use. /usr/local/bin is a common choice.
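
A minimal sketch of a manual install on Linux (the release tag, archive name, and binary layout below are illustrative; copy the exact asset URL for your platform from the releases page):

# download and unpack the release archive (filename is hypothetical)
curl -LO https://github.com/missingstudio/gateway/releases/download/v0.0.1/gateway_linux_amd64.tar.gz
tar -xzf gateway_linux_amd64.tar.gz

# move the binary somewhere on your PATH
sudo mv gateway /usr/local/bin/gateway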

MacOS

gateway is available via a Homebrew tap, and as a downloadable binary from the releases page:

brew install missingstudio/tap/gateway

To upgrade to the latest version:

brew upgrade gateway

Linux

gateway is available as downloadable binaries from the releases page. Download the .deb or .rpm package and install it with sudo dpkg -i or sudo rpm -i, respectively.
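
For example (the package filenames are hypothetical; use the actual asset names from the release you downloaded):

# Debian/Ubuntu
sudo dpkg -i gateway_0.0.1_linux_amd64.deb

# RHEL/Fedora
sudo rpm -i gateway_0.0.1_linux_amd64.rpm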

Windows

gateway is available via scoop, and as a downloadable binary from the releases page:

scoop bucket add gateway https://github.com/missingstudio/scoop-bucket.git
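
After adding the bucket, install the package with scoop's standard install command (assuming it is published under the name gateway, matching the Homebrew formula):

scoop install gateway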

To upgrade to the latest version:

scoop update gateway

Docker

We provide ready-to-use Docker container images. To pull the latest image:

docker pull missingstudio/gateway:latest

To pull a specific version:

docker pull missingstudio/gateway:v0.0.1
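
A minimal sketch of running the pulled image directly, assuming the gateway listens on port 8080 inside the container (as it does with the Docker Compose setup below):

docker run --rm -p 8080:8080 missingstudio/gateway:latest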

Docker compose

To start the Missing Studio AI Gateway with Docker Compose, simply run the following command:

make up

Your AI Gateway is now running on http://localhost:8080 💥

Usage

Let's make a chat completion request to OpenAI through the AI Gateway, using both the REST and gRPC protocols.

Send a request using curl

curl \
--header "Content-Type: application/json" \
--header "x-ms-provider: openai" \
--header "Authorization: Bearer {{OPENAI_API_KEY}}" \
--data '{"model":"gpt-3.5-turbo","messages":[{"role":"user","content":"who are you?"}]}' \
http://localhost:8080/v1/chat/completions
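
The provider table above also lists a /chat/completions:stream endpoint. A hedged sketch of a streaming request, assuming it accepts the same JSON body and streams the response back incrementally (-N disables curl's output buffering):

curl -N \
--header "Content-Type: application/json" \
--header "x-ms-provider: openai" \
--header "Authorization: Bearer {{OPENAI_API_KEY}}" \
--data '{"model":"gpt-3.5-turbo","messages":[{"role":"user","content":"who are you?"}]}' \
http://localhost:8080/v1/chat/completions:stream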

Send a request using grpcurl

grpcurl \
-d '{"model":"gpt-3.5-turbo","messages":[{"role":"user","content":"hi"}]}' \
-H 'x-ms-provider: openai' \
-H 'Authorization: Bearer {{OPENAI_API_KEY}}' \
-plaintext  localhost:8080  llm.v1.LLMService.ChatCompletions
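
The example above relies on gRPC server reflection (no .proto files are passed to grpcurl), so you can also use reflection to explore the API:

# list the services exposed by the gateway
grpcurl -plaintext localhost:8080 list

# describe the chat completions RPC
grpcurl -plaintext localhost:8080 describe llm.v1.LLMService.ChatCompletions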

🫢 Contributions

The AI Gateway is an open-source project, and contributions are welcome. If you want to contribute, you can create new features, fix bugs, or improve the infrastructure.

It's still early days for this project, so your mileage may vary and plenty of things will break, but almost any contribution is beneficial at this point. Check the current Issues to see where you can jump in!

If you've got an improvement, just send in a pull request!

  1. Fork it
  2. Create your feature branch (git checkout -b my-new-feature)
  3. Commit your changes (git commit -am 'feat(module): add some feature')
  4. Push to the branch (git push origin my-new-feature)
  5. Create a new Pull Request

If you've got feature ideas, simply open a new issue!

Please refer to the CONTRIBUTING.md file in the repository for more information on how to contribute.

License

The AI Gateway is Apache 2.0 licensed.