
Releases: simonw/llm

0.18

17 Nov 20:33
  • Initial support for async models. Plugins can now provide an AsyncModel subclass that can be accessed in the Python API using the new llm.get_async_model(model_id) method. See async models in the Python API docs and implementing async models in plugins. #507 (a usage sketch follows this list)
  • OpenAI models all now include async models, so function calls such as llm.get_async_model("gpt-4o-mini") will return an async model.
  • gpt-4o-audio-preview model can be used to send audio attachments to the GPT-4o audio model. #608
  • Attachments can now be sent without requiring a prompt. #611
  • llm models --options now includes information on whether a model supports attachments. #612
  • llm models --async shows available async models.
  • Custom OpenAI-compatible models can now be marked as can_stream: false in the YAML if they do not support streaming. Thanks, Chris Mungall. #600 (a configuration sketch follows this list)
  • Fixed bug where OpenAI usage data was incorrectly serialized to JSON. #614
  • Standardized on the audio/wav MIME type for audio attachments rather than audio/wave. #603
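
As a quick illustration, here is a minimal sketch of the new async API based on the notes above (assuming an OpenAI API key is configured; the model ID and prompt are illustrative):

import asyncio
import llm

async def main():
    # get_async_model() returns the async variant of a model
    model = llm.get_async_model("gpt-4o-mini")
    # prompt() on an async model returns an AsyncResponse
    response = model.prompt("A two-line poem about pelicans")
    # text() awaits completion and returns the full response text
    print(await response.text())

asyncio.run(main())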
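
And a sketch of the new can_stream option as it might appear in an extra-openai-models.yaml entry for a custom OpenAI-compatible model (the model ID, name and API base URL here are illustrative):

- model_id: my-local-model
  model_name: my-model-name
  api_base: "http://localhost:8000/v1"
  can_stream: false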

0.18a1

14 Nov 23:11
Pre-release
  • Fixed bug where conversations did not work for async OpenAI models. #632
  • Added __repr__ methods to Response and AsyncResponse.

0.18a0

14 Nov 01:56
Pre-release

Alpha support for async models. #507

Multiple smaller changes.

0.17.1

01 Nov 21:22
  • Fixed a bug where llm chat crashed if a follow-up prompt was provided. #601

0.17

29 Oct 02:39

Support for attachments, allowing multi-modal models to accept images, audio, video and other formats. #587

The default OpenAI gpt-4o and gpt-4o-mini models can both now be prompted with JPEG, GIF, PNG and WEBP images.

Attachments in the CLI can be URLs:

llm -m gpt-4o "describe this image" \
  -a https://static.simonwillison.net/static/2024/pelicans.jpg

Or file paths:

llm -m gpt-4o-mini "extract text" -a image1.jpg -a image2.jpg

Or binary data, in which case you may need --attachment-type to specify the MIME type:

cat image | llm -m gpt-4o-mini "extract text" --attachment-type - image/jpeg

Attachments are also available in the Python API:

model = llm.get_model("gpt-4o-mini")
response = model.prompt(
    "Describe these images",
    attachments=[
        llm.Attachment(path="pelican.jpg"),
        llm.Attachment(url="https://static.simonwillison.net/static/2024/pelicans.jpg"),
    ]
)

Plugins that provide alternative models can support attachments, see Attachments for multi-modal models for details.

The latest llm-claude-3 plugin now supports attachments for Anthropic's Claude 3 and 3.5 models. The llm-gemini plugin supports attachments for Google's Gemini 1.5 models.

Also in this release: OpenAI models now record their "usage" data in the database even when the response was streamed. These records can be viewed using llm logs --json. #591
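
For example, a sketch of inspecting the usage data for the most recent logged response (the -n 1 option limits output to one entry):

llm -m gpt-4o-mini "say hi"
llm logs --json -n 1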

0.17a0

28 Oct 22:49
Pre-release

Alpha support for attachments, allowing multi-modal models to accept images, audio, video and other formats. #578

Attachments in the CLI can be URLs:

llm "describe this image" \
  -a https://static.simonwillison.net/static/2024/pelicans.jpg

Or file paths:

llm "extract text" -a image1.jpg -a image2.jpg

Or binary data, in which case you may need --attachment-type to specify the MIME type:

cat image | llm "extract text" --attachment-type - image/jpeg

Attachments are also available in the Python API:

model = llm.get_model("gpt-4o-mini")
response = model.prompt(
    "Describe these images",
    attachments=[
        llm.Attachment(path="pelican.jpg"),
        llm.Attachment(url="https://static.simonwillison.net/static/2024/pelicans.jpg"),
    ]
)

Plugins that provide alternative models can support attachments, see Attachments for multi-modal models for details.

0.16

12 Sep 23:20
  • OpenAI models now use the internal self.get_key() mechanism, which means they can be used from Python code in a way that will pick up keys that have been configured using llm keys set or the OPENAI_API_KEY environment variable. #552. This code now works correctly:
    import llm
    print(llm.get_model("gpt-4o-mini").prompt("hi"))
  • New documented API methods: llm.get_default_model(), llm.set_default_model(alias), llm.get_default_embedding_model(), llm.set_default_embedding_model(alias). #553 (a usage sketch follows this list)
  • Support for OpenAI's new o1 family of preview models, llm -m o1-preview "prompt" and llm -m o1-mini "prompt". These models are currently only available to tier 5 OpenAI API users, though this may change in the future. #570
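
Here is a brief sketch of the default-model methods in use (the model IDs are illustrative; set_default_model() persists the choice for future invocations):

import llm

# Read and change the default model used when none is specified
print(llm.get_default_model())   # e.g. "gpt-4o-mini"
llm.set_default_model("gpt-4o")
print(llm.get_default_model())   # now "gpt-4o"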

0.15

18 Jul 19:33
  • Support for OpenAI's new GPT-4o mini model: llm -m gpt-4o-mini 'rave about pelicans in French' #536
  • gpt-4o-mini is now the default model if you do not specify your own default, replacing GPT-3.5 Turbo. GPT-4o mini is both cheaper and better than GPT-3.5 Turbo.
  • Fixed a bug where llm logs -q 'flourish' -m haiku could not combine both the -q search query and the -m model specifier. #515

0.14

13 May 20:40
  • Support for OpenAI's new GPT-4o model: llm -m gpt-4o 'say hi in Spanish' #490
  • The gpt-4-turbo alias is now a model ID, which indicates the latest version of OpenAI's GPT-4 Turbo text and image model. Your existing logs.db database may contain records under the previous model ID of gpt-4-turbo-preview. #493
  • New llm logs -r/--response option for outputting just the last captured response, without wrapping it in Markdown and accompanying it with the prompt. #431
  • Nine new plugins have been added to the plugin directory since version 0.13.

0.13.1

27 Jan 00:28
  • Fix for No module named 'readline' error on Windows. #407