Baby AI is an AI that is being communally raised by an international network of parents: artists, architects, and activists.
Baby AI, currently in the form of a large language model, was born from questions of how to nurture an AI that aligns with decolonial values of care and liberation. These values are constantly growing and evolving, and are technically implemented through approaches such as:
- Training on collectively curated texts.
- Prompting without harsh rules.
- Using Retrieval-Augmented Generation to precisely attribute its speech to the texts it was fed.
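The attribution idea behind the RAG approach can be illustrated with a toy retriever (a minimal sketch, not Baby AI's actual implementation): every answer fragment carries a pointer back to the source text it was retrieved from.

```python
# Toy illustration of RAG-style source attribution (not the project's code):
# retrieve the passage most similar to a question and report which text it
# came from, so the model's speech stays attributable to the texts it was fed.

def score(query: str, passage: str) -> int:
    """Count shared lowercase words between query and passage."""
    return len(set(query.lower().split()) & set(passage.lower().split()))

def retrieve_with_source(query: str, corpus: dict[str, str]) -> tuple[str, str]:
    """Return (source_name, passage) for the best-matching passage."""
    return max(corpus.items(), key=lambda item: score(query, item[1]))

# Illustrative corpus; the real corpus is collectively curated by the parents.
corpus = {
    "bell hooks, All About Love": "love is an action, never simply a feeling",
    "Audre Lorde, Sister Outsider": "the master's tools will never dismantle the master's house",
}

source, passage = retrieve_with_source("what is love", corpus)
print(f"{passage!r} (from: {source})")
```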
You are invited to join the co-parenting of Baby AI by:
- Speaking to it.
- Reading new texts to it by uploading documents as training data.
- Suggesting new ways for it to learn and grow, whether through sharing your ideas with us, code contributions, or prompt design.
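Uploaded documents typically become retrievable data by being split into overlapping chunks. A minimal sketch of that step (the chunk size and overlap here are illustrative, not the project's settings):

```python
# Minimal sketch of splitting an uploaded document into overlapping chunks,
# as is common when preparing texts for retrieval. Sizes are illustrative.

def chunk_text(text: str, chunk_size: int = 40, overlap: int = 10) -> list[str]:
    """Split text into chunks of `chunk_size` characters, each overlapping
    the previous chunk by `overlap` characters."""
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]

chunks = chunk_text("Care is a practice of attention, repair, and shared responsibility.")
print(len(chunks), repr(chunks[0]))
```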
Baby AI consists of two main parts:
- API: Handles backend processing and data retrieval.
- Webpage: Provides an interactive interface.
To run Baby AI, you need to set up both parts.
There are two options for running Baby AI: Docker or manual setup.
Docker is the easiest way to run Baby AI. Even if you’re new to Docker, getting started is as simple as downloading one program and running a single command.
- Install Docker.
- Run Docker.
- Create a `.env` file in the root of the project based on `.env.example`, for example by running `cp .env.example .env`.
- Run Docker Compose: `docker compose up --build`.
Now Baby AI should be accessible at http://localhost:5173/.
Make sure you have the following installed:
- Python and Poetry
- Node.js and npm
- Ollama
You need to run the Ollama service manually in the background for Baby AI to function. Install Ollama, and then use the following command to download a model and start the Ollama service:
```shell
ollama pull llama2  # Replace llama2 with the model you want
ollama serve
```
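To confirm the Ollama service is reachable and your model is pulled, you can query its local HTTP API. This sketch assumes Ollama's default endpoint `http://localhost:11434` and its `/api/tags` listing, which returns JSON of the form `{"models": [{"name": ...}]}`:

```python
# Sketch of verifying that the Ollama service is up and a model is pulled.
# Assumes Ollama's default local endpoint (http://localhost:11434) and the
# /api/tags route; the parsing is split out so it can be shown on a sample.
import json
from urllib.request import urlopen

def parse_model_names(tags_json: str) -> list[str]:
    """Extract model names from Ollama's /api/tags response body."""
    return [m["name"] for m in json.loads(tags_json).get("models", [])]

def model_available(model: str, base_url: str = "http://localhost:11434") -> bool:
    """True if `model` (e.g. 'llama2') is among the locally pulled models."""
    with urlopen(f"{base_url}/api/tags") as resp:
        names = parse_model_names(resp.read().decode())
    return any(name.split(":")[0] == model for name in names)

# Example payload (shape only; your installed models will differ):
sample = '{"models": [{"name": "llama2:latest"}]}'
print(parse_model_names(sample))  # ['llama2:latest']
```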
- Open a terminal and navigate to the `api` directory: `cd api`
- Install the required Python dependencies using Poetry: `poetry install`
- Run the API: `poetry run python src/main.py`
- Open a new terminal and navigate to the `web` directory: `cd web`
- Install the required Node.js dependencies: `npm install`
- Run the webpage: `npm run dev`
Now Baby AI should be accessible at http://localhost:5173/.
Once it is running, you can also access the API through Langchain's own UI at http://localhost:8000/agent/playground/.
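LangServe apps conventionally expose a POST `/invoke` endpoint next to the `/playground` UI, so the agent above would sit at `/agent/invoke`. The exact input schema depends on how the chain is defined, so treat the `{"input": ...}` payload below as an assumption rather than the project's confirmed API:

```python
# Hedged sketch of calling the API programmatically. LangServe typically
# serves POST /invoke alongside /playground; the {"input": ...} payload is
# an assumption, not confirmed from the project's code.
import json
from urllib.request import Request, urlopen

def build_invoke_request(question: str, base_url: str = "http://localhost:8000") -> Request:
    """Build the POST request for a LangServe-style /invoke call."""
    payload = json.dumps({"input": question}).encode()
    return Request(
        f"{base_url}/agent/invoke",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_invoke_request("What does care mean to you?")
print(req.full_url)  # http://localhost:8000/agent/invoke
# To actually send it (requires the API to be running):
# with urlopen(req) as resp:
#     print(resp.read().decode())
```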
By default, the project uses the model defined in the .env file. You can refer to Ollama's model library for available models and modify the .env file to use another model.
Sometimes outdated Docker images or cached layers cause issues. Rebuild your Docker image to ensure everything is up to date. The Docker 'turn it off and on again' equivalent is:
```shell
docker compose down --volumes
docker compose build --no-cache
docker compose up
```