The fastest way to build a Voice AI application with Vocode and Next.js
Features · Demo · Deploy to Vercel · Clone and run locally · Feedback and issues · Learn More
Get your Voice AI application running in two simple steps:
Start by creating a new .env file in your local directory. This file should contain all the necessary environment variables required for the application to run. Below is a template you can use as a starting point. Make sure to replace the placeholder values with your actual API keys and configuration settings.
# OPENAI_API_KEY: Your OpenAI API key for accessing OpenAI services.
# You can obtain it from https://platform.openai.com/signup/
OPENAI_API_KEY=
# DEEPGRAM_API_KEY: Your Deepgram API key for accessing Deepgram's speech recognition services.
# You can create an API key at https://console.deepgram.com/signup
DEEPGRAM_API_KEY=
# AZURE_SPEECH_REGION: The region of your Azure Speech service instance.
# AZURE_SPEECH_KEY: Your Azure Speech service subscription key.
# You can find this in the Azure portal under your Speech resource's "Keys and Endpoint" section.
# For instructions on creating a speech resource, visit https://docs.microsoft.com/azure/cognitive-services/speech-service/get-started
AZURE_SPEECH_REGION=
AZURE_SPEECH_KEY=
# DOCKER_ENV: The environment setting for Docker to specify which configuration to use.
# In this case, 'all-in-one' indicates a single container setup.
DOCKER_ENV=all-in-one
# LANGSMITH_SYSTEM_PROMPT: The system prompt key for Langsmith services.
# Uncomment and set this variable if you want to use a custom system prompt from Langsmith.
# LANGSMITH_SYSTEM_PROMPT=vocode/main
# SYSTEM_PROMPT: The default system prompt message for initiating conversations.
# This message is used if LANGSMITH_SYSTEM_PROMPT is not set.
SYSTEM_PROMPT=Have a pleasant conversation about life
# INITIAL_MESSAGE: The initial message sent by the system when a conversation starts.
# This message can be customized to greet users or provide instructions.
INITIAL_MESSAGE=Hello there!
With your .env file ready, execute the following command in your terminal to start the application. This command will download the Docker image from the GitHub Container Registry and run it, starting both the frontend and backend services.
docker run --rm --env-file .env -p 3000:3000 ghcr.io/artisanlabs/vocode-next-template:latest
After running the command, the frontend will be available at http://localhost:3000.
This is a hybrid Next.js + Python application that uses Next.js for the frontend and FastAPI for the API backend. It is designed to build Voice AI applications with Vocode and Next.js. The backend is powered by FastAPI, a modern, fast (high-performance), web framework for building APIs with Python 3.6+ based on standard Python type hints. This setup allows you to write Next.js apps that use Python AI libraries on the backend, providing a powerful tool for AI application development.
- Local Development: This template is configured to work seamlessly in a local environment, using FastAPI as the Python backend.
- FastAPI Integration: The template establishes a connection between the Next.js frontend and the FastAPI Python backend.
- Vercel Deployment: While Vercel does not currently support WebSocket connections, this template can still be deployed to Vercel as a frontend application.
The Python/FastAPI server is integrated into the Next.js app under the /api/ route. This is achieved using next.config.js rewrites to map any request to /api/:path* to the FastAPI API, which is hosted in the /api folder. On localhost, the rewrite points to 127.0.0.1:8000, which is where the FastAPI server is running.
In production, the FastAPI server is hosted as Python serverless functions on Vercel.
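For reference, a rewrite rule of this kind typically looks like the sketch below. The exact source and destination values here are assumptions for illustration; check next.config.js in the repository for the real configuration.

```js
/** @type {import('next').NextConfig} */
const nextConfig = {
  async rewrites() {
    return [
      {
        source: "/api/:path*",
        destination:
          process.env.NODE_ENV === "development"
            ? "http://127.0.0.1:8000/api/:path*" // local FastAPI dev server
            : "/api/", // Python serverless functions on Vercel
      },
    ];
  },
};

module.exports = nextConfig;
```

The environment check mirrors the behavior described above: in development, /api requests are proxied to the FastAPI server on port 8000, while in production they are served by the Python serverless functions.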
The FastAPI backend uses the Vocode library to connect to AI providers for Text-to-Speech (TTS), Speech-to-Text (STT), and large language models (LLMs) over websockets. This allows for real-time, efficient communication between the application and the AI services, enabling the development of robust, interactive Voice AI applications.
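Conceptually, the browser opens a websocket to the FastAPI backend, streams audio up, and receives transcripts and synthesized speech back while Vocode orchestrates the providers. The sketch below is only illustrative: the endpoint path, port, and message shape are assumptions, and the real contract lives in the code under the /api folder.

```ts
// Hypothetical sketch only: "/conversation" and the message format are assumed names,
// not the template's actual API. Consult the /api folder for the real endpoint.
const socket = new WebSocket("ws://127.0.0.1:8000/conversation");

socket.addEventListener("open", () => {
  // A real client would negotiate audio settings and then stream microphone chunks.
  socket.send(JSON.stringify({ type: "audio_config", samplingRate: 48000 }));
});

socket.addEventListener("message", (event) => {
  // The backend streams transcripts and synthesized audio back over the same socket.
  console.log("message from backend:", event.data);
});
```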
Before you can run this application, you need to have the following installed:
- Node.js and npm
- Python (version 3.9 or higher)
- Poetry
- OpenSSL 1.1.1 (required for Azure)
- FFmpeg
Please follow the links to download and install each prerequisite.
You can clone & deploy it to Vercel with one click:
You can clone & create this repo with the following command:
npx create-next-app vocode-nextjs --example "https://github.com/vocodedev/vocode-next-template"
Then, install the dependencies:
npm install
# or
yarn install
To run the development server:
npm run dev
# or
yarn dev
Open http://localhost:3000 with your browser to see the result. The page auto-updates as you edit the file. Please note that this has been extensively tested with the latest versions of Chrome. For other browsers, if you encounter any issues, please create a GitHub issue in this repository: https://github.com/vocodedev/vocode-react-sdk.
To build and run the Docker image locally:
docker build --build-arg BUILD_DATE=$(date -u +'%Y-%m-%dT%H:%M:%SZ') \
--build-arg VCS_REF=$(git rev-parse --short HEAD) \
--build-arg VERSION=0.1.111 \
-t vocode/vocode:0.1.111 .
docker run --rm --env-file .env -p 3000:3000 vocode/vocode:0.1.111
To learn more, take a look at the following resources:
- Vocode Documentation - learn about Vocode features and API.
- Next.js Documentation - learn about Next.js features and API.
- FastAPI Documentation - learn about FastAPI features and API.
You can check out the Vocode Python GitHub repository - your feedback and contributions are welcome!
Contributions are welcome! Please read our Contributing Guide and our Code of Conduct for more information.
This project is licensed under the MIT License.
If you have any questions, feel free to open an issue or contact us directly at our GitHub.