register_models() plugin hook #65

Merged Jul 10, 2023 · 66 commits

Commits
aed5f29
WIP register_models() plugin hook, refs #53
simonw Jun 26, 2023
e0b1431
Ran cog, showing help for llm models command
simonw Jun 26, 2023
3991356
New mechanism for handling API keys for models
simonw Jun 26, 2023
64747af
Implemented PaLM 2, to test out new plugin hook - refs #20
simonw Jun 26, 2023
74e4d43
Refactor Template into templates.py
simonw Jul 1, 2023
6f01b1d
First attempt at internal API docs, refs #65
simonw Jul 1, 2023
7eb6c61
Model.stream() and .get_key() methods
simonw Jul 1, 2023
87df6a7
CLI now uses new prompt/stream methods
simonw Jul 1, 2023
ae47f0a
Fix for --key being beaten by env variable, closes #71
simonw Jul 1, 2023
ce70b4d
llm openai models command, closes #70
simonw Jul 1, 2023
dd2de5c
Ran cog for llm openai --help, refs #70
simonw Jul 1, 2023
2b21b53
-s shortcut for --system, closes #69
simonw Jul 1, 2023
80a0a5e
Fixed a test
simonw Jul 1, 2023
f1819cd
Fix plugins tests to account for default plugins
simonw Jul 1, 2023
50c29d5
Disabled DB logging test for the moment
simonw Jul 1, 2023
e9d473e
Move default plugins into llm/default_plugins
simonw Jul 1, 2023
e6be325
Fixed import error in llm openai models
simonw Jul 1, 2023
807bbfd
Fixed timezone related test failure
simonw Jul 1, 2023
75102bd
Pass model to the Response
simonw Jul 1, 2023
d663689
Use pip install -e '.[test]'
simonw Jul 1, 2023
836a690
Fix for missing package bug
simonw Jul 1, 2023
f9b116b
Don't include tests/ in the package
simonw Jul 1, 2023
1108276
Upgrade to pydantic 2 using bump-pydantic, refs #74
simonw Jul 1, 2023
500548b
Fix for pydantic warning, refs #74
simonw Jul 1, 2023
b6413d7
Include cogapp in pip install -e '.[test]'
simonw Jul 1, 2023
5d45aa8
Removed PaLM 2 vertex model
simonw Jul 1, 2023
85b226c
.stream is now in base Response class
simonw Jul 1, 2023
e294bc2
Initial experimental response.reply() method
simonw Jul 1, 2023
d90d1f7
llm models default command, plus refactored env variables
simonw Jul 1, 2023
c9f366e
Better type hint for iter_prompt method
simonw Jul 2, 2023
3682906
Added mypy, plus some fixes to make it happy - refs #77
simonw Jul 2, 2023
2b50797
type stubs for PyYAML and requests, refs #77
simonw Jul 2, 2023
5a8cd10
Lint using Ruff, refs #78
simonw Jul 2, 2023
28149ab
Rough initial version of new logging, to log2 table
simonw Jul 2, 2023
2c1660d
just fix command
simonw Jul 3, 2023
c5e3444
-o/--option, implemented for OpenAI models - closes #63
simonw Jul 3, 2023
a1b2a8a
Drop the debug field from the logs, combine chunks from stream
simonw Jul 3, 2023
488562d
New LogMessage design, plus Response.json() method
simonw Jul 3, 2023
dbaa412
Implemented new logs database schema
simonw Jul 3, 2023
dbcbc1e
Fix column order in logs
simonw Jul 3, 2023
118f8b4
Test messages logged in new format
simonw Jul 3, 2023
8460678
Moved things into inner classes, log_message is now defined on base R…
simonw Jul 4, 2023
21dd29b
Removed .stream() method in favor of .prompt(stream=False)
simonw Jul 4, 2023
157b0e9
stream defaults to True on prompt() method
simonw Jul 4, 2023
56b586e
Better error message display
simonw Jul 5, 2023
1993119
Model.execute() model now defaults to using self.Response
simonw Jul 5, 2023
e1348e5
llm install -e/--editable option
simonw Jul 6, 2023
768c91f
Improved how keys work, execute() now has default implementation
simonw Jul 6, 2023
3cda359
Default __str__ method for models
simonw Jul 6, 2023
f9fc69b
llm logs now decodes JSON for prompt_json etc
simonw Jul 6, 2023
d05f3a1
iter_prompt() now takes prompt
simonw Jul 6, 2023
a0e2e9b
Markov plugin now lives in llm-markov repo
simonw Jul 6, 2023
e20e8c1
Options base class is now llm.Options not Model.Options
simonw Jul 6, 2023
8dd0667
Read prompt after validating options
simonw Jul 7, 2023
543443c
Detailed tutorial on writing plugins
simonw Jul 7, 2023
6fd860c
types-click
simonw Jul 7, 2023
a16d680
Fix link to Gist in tutorial
simonw Jul 7, 2023
674728c
Snappier tutorial title
simonw Jul 7, 2023
b5638f6
Fixed type hint on Prompt
simonw Jul 8, 2023
c45ccfc
Switch tutorial from setup.py to pyproject.toml
simonw Jul 8, 2023
b6f345f
Run black at end of just fix
simonw Jul 10, 2023
f4e9f17
Moved iter_prompt from Response to Model, moved a lot of other stuff
simonw Jul 10, 2023
e6c8fa7
Renamed iter_prompt() to execute() and updated tutorial
simonw Jul 10, 2023
292c836
Renamed template.execute() to template.evaluate() and added type hints
simonw Jul 10, 2023
6f3d5e3
Updated Gist example for tutorial
simonw Jul 10, 2023
d21e1b9
Show error for --continue mode, remove deleted code
simonw Jul 10, 2023
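Based on the commit messages above (the register_models() hook, llm.Model subclasses with an execute() method, and the markov example that moved to the llm-markov repo), a plugin registering a model might look roughly like this; the exact signatures are assumptions rather than verbatim code from the PR:

```python
# Hypothetical llm_markov.py: a sketch of the new plugin hook.
import random

import llm


class Markov(llm.Model):
    model_id = "markov"

    def execute(self, prompt, stream, response):
        # A real Markov chain would build transition tables from the
        # prompt text; this sketch just yields shuffled words.
        words = prompt.prompt.split()
        random.shuffle(words)
        yield " ".join(words)


@llm.hookimpl
def register_models(register):
    # Called by LLM at startup; register() makes the model available
    # to "llm -m markov" and "llm models list".
    register(Markov())
```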
12 changes: 10 additions & 2 deletions .github/workflows/test.yml
@@ -21,11 +21,19 @@ jobs:
cache-dependency-path: setup.py
- name: Install dependencies
run: |
pip install '.[test]'
pip install -e '.[test]'
- name: Run tests
run: |
pytest
- name: Check if cog needs to be run
run: |
pip install -r docs/requirements.txt
cog --check docs/*.md
- name: Run Black
run: |
black --check .
- name: Run mypy
run: |
mypy llm
- name: Run ruff
run: |
ruff .
11 changes: 11 additions & 0 deletions Justfile
@@ -1,6 +1,10 @@
# Run tests and linters
@default: test lint

# Install dependencies and test dependencies
@init:
pipenv run pip install -e '.[test]'

# Run pytest with supplied options
@test *options:
pipenv run pytest {{options}}
@@ -9,6 +13,8 @@
@lint:
pipenv run black . --check
pipenv run cog --check README.md docs/*.md
pipenv run mypy llm
pipenv run ruff .

# Rebuild docs with cog
@cog:
@@ -21,3 +27,8 @@
# Apply Black
@black:
pipenv run black .

# Run automatic fixes
@fix: cog
pipenv run ruff . --fix
pipenv run black .
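With these recipes, the local workflow mirrors the CI steps above. Illustrative invocations:

```bash
just init          # install project and test dependencies via pipenv
just test -k logs  # arguments pass straight through to pytest
just fix           # run cog, then ruff --fix, then black
```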
1 change: 1 addition & 0 deletions MANIFEST.in
@@ -0,0 +1 @@
global-exclude tests/*
86 changes: 72 additions & 14 deletions docs/help.md
@@ -60,6 +60,8 @@ Commands:
install Install packages from PyPI into the same environment as LLM
keys Manage stored API keys for different models
logs Tools for exploring logged prompts and responses
models Manage available models
openai Commands for working directly with the OpenAI API
plugins List installed plugins
templates Manage stored prompt templates
uninstall Uninstall Python packages from the LLM environment
@@ -73,17 +75,18 @@
Documentation: https://llm.datasette.io/en/stable/usage.html

Options:
--system TEXT System prompt to use
-m, --model TEXT Model to use
-t, --template TEXT Template to use
-p, --param <TEXT TEXT>... Parameters for template
--no-stream Do not stream output
-n, --no-log Don't log to database
-c, --continue Continue the most recent conversation.
--chat INTEGER Continue the conversation with the given chat ID.
--key TEXT API key to use
--save TEXT Save prompt with this template name
--help Show this message and exit.
-s, --system TEXT System prompt to use
-m, --model TEXT Model to use
-o, --option <TEXT TEXT>... key/value options for the model
-t, --template TEXT Template to use
-p, --param <TEXT TEXT>... Parameters for template
--no-stream Do not stream output
-n, --no-log Don't log to database
-c, --continue Continue the most recent conversation.
--chat INTEGER Continue the conversation with the given chat ID.
--key TEXT API key to use
--save TEXT Save prompt with this template name
--help Show this message and exit.
```
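The new `-s/--system` and `-o/--option` flags can be combined in a single call. An illustrative invocation (the model name and option value are assumptions, not from the PR):

```bash
llm "Ten fun names for a pet pelican" \
  -m gpt-3.5-turbo \
  -s "Reply in the voice of a pirate" \
  -o temperature 0.5
```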
### llm init-db --help
```
@@ -167,6 +170,37 @@ Options:
-t, --truncate Truncate long strings in output
--help Show this message and exit.
```
### llm models --help
```
Usage: llm models [OPTIONS] COMMAND [ARGS]...

Manage available models

Options:
--help Show this message and exit.

Commands:
default Show or set the default model
list List available models
```
#### llm models list --help
```
Usage: llm models list [OPTIONS]

List available models

Options:
--help Show this message and exit.
```
#### llm models default --help
```
Usage: llm models default [OPTIONS] [MODEL]

Show or set the default model

Options:
--help Show this message and exit.
```
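Together these subcommands might be used like this (the `chatgpt` alias is an assumption):

```bash
llm models list              # see every model plugins have registered
llm models default           # show the current default model
llm models default chatgpt   # set a new default
```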
### llm templates --help
```
Usage: llm templates [OPTIONS] COMMAND [ARGS]...
@@ -229,13 +263,14 @@ Options:
```
### llm install --help
```
Usage: llm install [OPTIONS] PACKAGES...
Usage: llm install [OPTIONS] [PACKAGES]...

Install packages from PyPI into the same environment as LLM

Options:
-U, --upgrade Upgrade packages to latest version
--help Show this message and exit.
-U, --upgrade Upgrade packages to latest version
-e, --editable DIRECTORY Install a project in editable mode from this path
--help Show this message and exit.
```
### llm uninstall --help
```
@@ -247,4 +282,27 @@
-y, --yes Don't ask for confirmation
--help Show this message and exit.
```
### llm openai --help
```
Usage: llm openai [OPTIONS] COMMAND [ARGS]...

Commands for working directly with the OpenAI API

Options:
--help Show this message and exit.

Commands:
models List models available to you from the OpenAI API
```
#### llm openai models --help
```
Usage: llm openai models [OPTIONS]

List models available to you from the OpenAI API

Options:
--json Output as JSON
--key TEXT OpenAI API key
--help Show this message and exit.
```
<!-- [[[end]]] -->
2 changes: 2 additions & 0 deletions docs/index.md
@@ -29,9 +29,11 @@ maxdepth: 3
---
setup
usage
python-api
templates
logging
plugins
tutorial-model-plugin
help
contributing
changelog
17 changes: 9 additions & 8 deletions docs/logging.md
@@ -18,8 +18,6 @@ On my Mac that outputs:
```
This will differ for other operating systems.

(You can customize the location of this file by setting a path in the `LLM_LOG_PATH` environment variable.)

Once that SQLite database has been created any prompts you run will be logged to that database.

To avoid logging a prompt, pass `--no-log` or `-n` to the command:
@@ -64,7 +62,7 @@ import sqlite_utils
import re
db = sqlite_utils.Database(memory=True)
migrate(db)
schema = db["log"].schema
schema = db["logs"].schema

def cleanup_sql(sql):
first_line = sql.split('(')[0]
@@ -77,16 +75,19 @@
)
]]] -->
```sql
CREATE TABLE "log" (
CREATE TABLE "logs" (
[id] INTEGER PRIMARY KEY,
[model] TEXT,
[timestamp] TEXT,
[prompt] TEXT,
[system] TEXT,
[prompt_json] TEXT,
[options_json] TEXT,
[response] TEXT,
[chat_id] INTEGER REFERENCES [log]([id]),
[debug] TEXT,
[duration_ms] INTEGER
[response_json] TEXT,
[reply_to_id] INTEGER REFERENCES [logs]([id]),
[chat_id] INTEGER REFERENCES [logs]([id]),
[duration_ms] INTEGER,
[datetime_utc] TEXT
);
```
<!-- [[[end]]] -->
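Because the logs are plain SQLite, the new `logs` table can be inspected with the same sqlite-utils library used in the cog block above. A sketch, assuming the default macOS database location shown in these docs:

```python
from pathlib import Path

import sqlite_utils

# Default location on macOS (see docs/logging.md); the filename and
# path are assumptions - adjust for your operating system.
db_path = Path(
    "~/Library/Application Support/io.datasette.llm/logs.db"
).expanduser()
db = sqlite_utils.Database(db_path)

# Print the three most recent logged prompts with their timings.
for row in db["logs"].rows_where(order_by="id desc", limit=3):
    print(row["model"], row["duration_ms"], "ms:", row["prompt"][:60])
```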
37 changes: 37 additions & 0 deletions docs/python-api.md
@@ -0,0 +1,37 @@
# Python API

LLM provides a Python API for executing prompts, in addition to the command-line interface.

Understanding this API is also important for writing plugins.

The API consists of the following key classes:

- `Model` - represents a language model against which prompts can be executed
- `Prompt` - a prompt that can be prepared and then executed against a model
- `Response` - the response from executing a prompt against a model
- `Template` - a reusable template for generating prompts

## Prompt

A prompt object represents all of the information that needs to be passed to the LLM. This could be a single prompt string, but it might also include a separate system prompt, various settings (temperature etc.) or even a JSON array of previous messages.

## Model

The `Model` class is an abstract base class that needs to be subclassed to provide a concrete implementation. Different LLMs will use different implementations of this class.

Model instances provide the following methods:

- `prompt(prompt: str, stream: bool, ...options) -> Response` - a convenience wrapper which creates a `Prompt` instance and then executes it. This is the most common way to use LLM models.
- `response(prompt: Prompt, stream: bool) -> Response` - execute a prepared Prompt instance against the model and return a `Response`.

Models usually return subclasses of `Response` that are specific to that model.

## Response

The response from an LLM. This could encapsulate a string of text, but for streaming APIs this class will be iterable, with each iteration yielding a short string of text as it is generated.

Calling `.text()` will return the full text of the response, waiting for the stream to finish if necessary.

## Template

Templates are reusable objects that can be used to generate prompts. They are used by the {ref}`prompt-templates` feature.
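Putting those classes together, calling a model from Python might look like the following sketch, which assumes a `get_model()` lookup helper and a configured OpenAI key; the names here are illustrative, not verbatim from this PR:

```python
import llm

# Look up a registered model by ID or alias (assumes a get_model() helper).
model = llm.get_model("gpt-3.5-turbo")
model.key = "sk-..."  # hypothetical key; stored keys or env vars also work

# prompt() wraps Prompt creation and execution in a single call.
response = model.prompt(
    "Describe a pelican in one sentence",
    system="You are a terse assistant",
)

# Streaming responses are iterable, yielding text chunks as they arrive.
for chunk in response:
    print(chunk, end="")

# .text() returns the complete response, blocking until the stream ends.
print(response.text())
```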
14 changes: 14 additions & 0 deletions docs/setup.md
@@ -89,3 +89,17 @@ The environment variable will be used only if no `--key` option is passed to the
If no environment variable is found, the tool will fall back to checking `keys.json`.

You can force the tool to use the key from `keys.json` even if an environment variable has also been set using `llm "prompt" --key openai`.

## Custom directory location

This tool stores various files - prompt templates, stored keys, preferences, a database of logs - in a directory on your computer.

On macOS this is `~/Library/Application Support/io.datasette.llm/`.

On Linux it may be something like `~/.config/io.datasette.llm/`.

You can set a custom location for this directory by setting the `LLM_USER_PATH` environment variable:

```bash
export LLM_USER_PATH=/path/to/my/custom/directory
```