Klama is a CLI tool that helps diagnose and troubleshoot DevOps-related issues using AI-powered assistance. It interacts with language models to interpret user queries, suggest and execute safe commands, and provide insights based on the results. Currently, Klama supports Kubernetes (K8s) debugging, with plans to expand to other DevOps domains in the future.
- Klama sends your DevOps-related query to the AI model.
- The AI, acting as a DevOps expert, interprets the query and may suggest commands to gather more information.
- If a command is suggested, Klama will ask for your approval before executing it.
- The command is executed if approved, and the output is sent back to the AI for further analysis.
- This process repeats until the AI has enough information to provide a final answer.
- Klama presents the AI's findings and any relevant information.
This approach ensures safety and gives users full control over the commands run in their environment.
- Access to a Kubernetes cluster (for K8s-related command execution)
You have several options to install Klama:
If you're on macOS, you can use Homebrew to install Klama:
```shell
brew install eliran89c/tap/klama
```
You can download pre-built binaries for Linux and Windows from the releases page on GitHub. Choose the appropriate binary for your operating system and architecture.
You can install Klama directly from GitHub using Go:
```shell
go install github.com/eliran89c/klama@latest
```
This will download the source code, compile it, and install the `klama` binary in your `$GOPATH/bin` directory. Make sure `$GOPATH/bin` is in your system's `PATH`.
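If the `klama` binary isn't found after a `go install`, you can check whether the Go binary directory is on your `PATH` and append it if not. This sketch assumes the standard Go convention of falling back to `$HOME/go` when `GOPATH` is unset:

```shell
# Default Go binary directory ($GOPATH/bin, falling back to $HOME/go/bin)
GOBIN_DIR="${GOPATH:-$HOME/go}/bin"

# Append it to PATH only if it is not already there
case ":$PATH:" in
  *":$GOBIN_DIR:"*) echo "already on PATH" ;;
  *) export PATH="$PATH:$GOBIN_DIR"; echo "added to PATH" ;;
esac
```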
Klama requires a YAML configuration file to set up the AI model. The configuration file is searched for in the following order:
- Custom location specified by the `--config` flag
- `$XDG_CONFIG_HOME/klama/config.yaml` (usually `~/.config/klama/config.yaml`)
- `$HOME/.klama.yaml`
If no configuration file is found, a default configuration will be created at `$XDG_CONFIG_HOME/klama/config.yaml`.
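The default lookup order above (ignoring the `--config` flag) can be reproduced with a small shell loop, which is handy for checking which file Klama would pick up:

```shell
# Check the default config locations in Klama's search order
for cfg in \
  "${XDG_CONFIG_HOME:-$HOME/.config}/klama/config.yaml" \
  "$HOME/.klama.yaml"; do
  if [ -f "$cfg" ]; then
    echo "found: $cfg"
    break
  fi
done
```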
The following fields are required in your configuration:
- `agent.name`: The name of the AI model
- `agent.base_url`: The API endpoint for the AI model
Klama will not run if these required fields are missing from the configuration file.
Klama requires an OpenAI or OpenAI-compatible server to function. The application has been tested with the following frameworks and services:
- OpenAI models
- Self-hosted models using vLLM
- Amazon Bedrock models via Bedrock Access Gateway
- Azure AI
While these have been specifically tested, any server that implements the OpenAI API should be compatible with Klama.
Create a file named `.klama.yaml` in your home directory or in the directory where you run Klama. Here's an example of what the file should contain:
```yaml
agent:
  name: "anthropic.claude-3-5-sonnet-20240620-v1:0" # Required
  base_url: "https://bedrock-gateway.example.com/api/v1" # Required
  auth_token: "" # Set via KLAMA_AGENT_TOKEN environment variable
  azure_api_version: "" # Required only when working with Azure AI
  pricing: # Optional, will be used to calculate session price
    input: 0.003 # Price per 1K input tokens (optional)
    output: 0.015 # Price per 1K output tokens (optional)
```
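With pricing configured, the session cost is simple per-1K-token arithmetic. As an illustration (the token counts here are made up; the prices match the example config), the cost of 12,000 input tokens and 3,500 output tokens would be:

```shell
# Hypothetical token counts; prices per 1K tokens as in the config example
awk -v in_tok=12000 -v out_tok=3500 \
    -v in_price=0.003 -v out_price=0.015 \
    'BEGIN { printf "$%.4f\n", in_tok/1000*in_price + out_tok/1000*out_price }'
# prints $0.0885
```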
You can set the authentication token using an environment variable:
- `KLAMA_AGENT_TOKEN`: Sets the authentication token for the agent model
Example:
```shell
export KLAMA_AGENT_TOKEN="your-agent-token-here"
```
You can specify a custom configuration file location using the `--config` flag:
```shell
klama --config /path/to/your/config.yaml k8s
```
Currently, Klama provides one main subcommand:
Run Klama with the `k8s` subcommand to start a Kubernetes debugging session:
```shell
klama k8s
```
This will start an interactive session where you can ask Kubernetes-related questions and get AI-powered assistance.
- `--config`: Specify a custom configuration file location
- `--debug`: Enable debug mode (saves output to a `klama.debug` file)
Example with flags:
```shell
klama k8s --debug --config /path/to/config.yaml
```
If Klama fails to start due to missing or invalid configuration, it will provide an error message indicating the issue. Ensure that your configuration file is properly formatted and contains all required fields before running Klama.
While Klama currently focuses on Kubernetes debugging, there are plans to expand its capabilities to cover other DevOps domains in the future. Stay tuned for updates!
Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the MIT License. See the LICENSE file for details.