- ✅ Build any application with complete flexibility (docs)
- ✅ Preview changes locally with a Playground UI (docs)
- ✅ Improve performance with Experiments and Evals (docs)
- ✅ Debug issues with logs and traces (docs)
- ✅ Integrate with your frontend with Client SDKs or a REST API (docs)
- ✅ Set up Continuous Integration and Pull-Request Previews (docs)
- ✅ Manage your application from a control panel (docs)
```bash
npx palico init <project-name>
```
Check out our quickstart guide.
With Palico, you have complete control over the implementation details of your LLM application. Build any application by creating a `Chat` function:
```typescript
import { Chat } from '@palico-ai/app';
import OpenAI from 'openai';

// create the OpenAI client (reads OPENAI_API_KEY from the environment)
const openai = new OpenAI();

// 1. implement the Chat type
const handler: Chat = async ({ userMessage }) => {
  // 2. implement your application logic
  const response = await openai.chat.completions.create({
    model: 'gpt-3.5-turbo-0125',
    messages: [{ role: 'user', content: userMessage }],
  });
  return {
    message: response.choices[0].message.content,
  };
};

// 3. export the handler
export default handler;
```
Learn more about building your application with Palico (docs).
| Feature | Description |
| --- | --- |
| Streaming | Stream messages, data, and intermediate steps to your client app |
| Memory Management | Store conversation state between requests without managing any storage infrastructure |
| Tool Executions | Build agents that can execute tools on the client side and server side |
| Feature Flags | Easily swap models, prompts, RAG, and custom logic at runtime |
| Monitoring | Debug issues faster with comprehensive logs and traces |
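Feature flags, for example, let one handler change behavior per request without redeploying. A minimal sketch, assuming the `Chat` handler receives the caller's `appConfig` (the same object the Client SDK example further below sends); the `model` field is illustrative:

```typescript
import { Chat } from '@palico-ai/app';
import OpenAI from 'openai';

const openai = new OpenAI();

// Sketch: swap the underlying model at runtime based on the appConfig
// sent by the client. The appConfig field names here are assumptions.
const handler: Chat = async ({ userMessage, appConfig }) => {
  const model = appConfig?.model ?? 'gpt-3.5-turbo-0125';
  const response = await openai.chat.completions.create({
    model,
    messages: [{ role: 'user', content: userMessage }],
  });
  return { message: response.choices[0].message.content };
};

export default handler;
```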
Since you own the implementation details, you can use Palico with most other external tools and libraries.
| Tools or Libraries | Supported |
| --- | --- |
| Langchain | ✅ |
| LlamaIndex | ✅ |
| Portkey | ✅ |
| OpenAI | ✅ |
| Anthropic | ✅ |
| Cohere | ✅ |
| Azure | ✅ |
| AWS Bedrock | ✅ |
| GCP Vertex | ✅ |
| Pinecone | ✅ |
| PG Vector | ✅ |
| Chroma | ✅ |
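For instance, you could hand the model call to LangChain inside your `Chat` handler. A minimal sketch, assuming the `@langchain/openai` package; the handler shape follows the earlier example:

```typescript
import { Chat } from '@palico-ai/app';
import { ChatOpenAI } from '@langchain/openai';

// Sketch: delegate the completion to LangChain instead of calling the
// OpenAI SDK directly. Palico only cares about the returned message.
const handler: Chat = async ({ userMessage }) => {
  const model = new ChatOpenAI({ model: 'gpt-4o-mini' });
  const result = await model.invoke(userMessage);
  return { message: String(result.content) };
};

export default handler;
```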
Learn more from docs.
Make a code change and instantly preview it locally on our Playground UI.
Palico helps you create an iterative loop to systematically improve the performance of your application.
Define test cases that model the expected behavior of your application:
```typescript
// Metric helpers and the TestDatasetFN type come from Palico's eval
// tooling (exact import path may differ; see the experiments docs)
import {
  TestDatasetFN,
  containsEvalMetric,
  levensteinEvalMetric,
  rougeSkipBigramSimilarityEvalMetric,
} from "@palico-ai/app";

const testCases: TestDatasetFN = async () => {
  return [
    {
      input: {
        // input to your LLM application
        userMessage: "What is the capital of France?",
      },
      tags: {
        // tags to help you categorize your test cases
        intent: "question",
      },
      metrics: [
        // example metrics
        containsEvalMetric({
          substring: "Paris",
        }),
        levensteinEvalMetric({
          expected: "Paris",
        }),
        rougeSkipBigramSimilarityEvalMetric({
          expected: "Paris",
        }),
      ],
    },
  ];
};
```
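Metrics are just values that score an output, so you can sketch your own when the built-ins don't fit. A hypothetical example — the `name`/`evaluate` shape below is illustrative, not Palico's actual metric interface:

```typescript
// Hypothetical metric shape for illustration only; Palico's real
// interface may differ (see the experiments docs).
interface EvalMetric {
  name: string;
  evaluate: (output: { message: string }) => Promise<{ score: number }>;
}

// Scores 1 when the response stays under a word budget, else 0.
const maxWordCountMetric = (limit: number): EvalMetric => ({
  name: "max-word-count",
  evaluate: async (output) => ({
    score: output.message.split(/\s+/).length <= limit ? 1 : 0,
  }),
});
```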
Learn more about experiments.
You can deploy your Palico app to any cloud provider using Docker.
Set up CI/CD and Pull-Request previews with Coolify and Palico. Learn more about deployment.
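A minimal sketch of what such a Dockerfile could look like, assuming a standard Node.js project with `build` and `start` npm scripts (adjust to your project's layout):

```dockerfile
# Sketch: containerize a Node.js-based Palico app.
FROM node:20-slim

WORKDIR /app

# Install dependencies first to take advantage of Docker layer caching
COPY package*.json ./
RUN npm ci

# Copy the rest of the source and build
COPY . .
RUN npm run build

# The port is illustrative; expose whatever your app listens on
EXPOSE 8000
CMD ["npm", "start"]
```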
Call your application from any client over the REST API or with the Client SDKs. For example, with the JavaScript client:

```typescript
// `client` is an instance of Palico's JavaScript client (see the Client SDK docs)
const response = await client.agent.chat({
  agentName: "chat",
  stream: true,
  userMessage: "What is the weather today?",
  payload: {
    location: "San Francisco",
  },
  appConfig: {
    model: "gpt-3.5-turbo",
    provider: "openai",
  },
});
```
For React apps, the `@palico-ai/react` package provides hooks:

```typescript
import { useChat } from "@palico-ai/react";

const { messages, sendMessage } = useChat({
  agentName: "assistant",
  apiURL: "/api/palico",
});
```
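A hypothetical usage sketch built on those two values — the message shape (`content`) and the `sendMessage(text)` signature are assumptions, not the documented API:

```tsx
import { useChat } from "@palico-ai/react";

// Hypothetical component: the message shape and sendMessage signature
// are assumed for illustration; check the Client SDK docs.
export function ChatBox() {
  const { messages, sendMessage } = useChat({
    agentName: "assistant",
    apiURL: "/api/palico",
  });

  return (
    <div>
      {messages.map((m: any, i: number) => (
        <p key={i}>{m.content}</p>
      ))}
      <button onClick={() => sendMessage("Hello!")}>Say hello</button>
    </div>
  );
}
```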
Learn more about Client SDKs.
The easiest way to contribute is to pick an issue with the good first issue tag 💪. Read the contribution guidelines here.
Bug Report? File here | Feature Request? File here