Releases: karthink/gptel

Version 0.9.0

24 Jun 02:07

Version 0.9.0 adds gptel-context and support for more models/backends

Backends

  • Add support for OpenAI's gpt-4o (#313, contributed by @axelknock)
  • Add support for Anthropic's claude-3-5-sonnet-20240620 (#331, contributed by @erwald)
  • Add support for the PrivateGPT backend (#312, contributed by @Aquan1412)

New feature

gptel can now include arbitrary regions, buffers or files with requests, as background context for queries. This is useful, for example, when you want to discuss the contents of code buffers or files in your project from a chat buffer. These additional contexts are "live", not "snapshots": they are scanned at the time of each query.

This feature is available via the gptel-add command or from gptel's transient menu. See the README for more details.

This feature was contributed by @daedsidog.

UI changes

  • Calling M-x gptel now asks the user to pick an existing chat buffer or name a new one. A suitable default name is chosen if the user leaves the field blank. The prefix-arg behavior of gptel has been removed for now.
  • New option gptel-track-response in non-chat buffers to control whether gptel distinguishes between the user's prompts and LLM responses. (The two are always distinguished in dedicated chat buffers.) This option can also be set from the transient menu.
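As a sketch, the new option can be set globally in an init file; the value shown here is an illustration, not the package default:

```emacs-lisp
;; Sketch: stop distinguishing prompts from LLM responses in
;; non-chat buffers, so the whole buffer is treated as the prompt.
;; (Dedicated chat buffers always track responses, regardless.)
(setq-default gptel-track-response nil)
```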

Bug fixes and other news

  • gptel-org-branching-context now requires Org 9.67 or later.
  • Fix bugs with one-shot Markdown to Org conversion.
  • Fix bugs with setting transient switches buffer-locally in gptel-menu.

Version 0.8.6

01 May 20:22

Version 0.8.6 is a bugfix release

NonGNU ELPA

gptel is now available on NonGNU ELPA. This means it is directly installable via M-x package-install without the need to add MELPA to the list of package archives.
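Since NonGNU ELPA is in package-archives by default from Emacs 28 onward, installation reduces to a single form (or the interactive M-x package-install):

```emacs-lisp
;; Install gptel directly from NonGNU ELPA (Emacs 28+).
(package-install 'gptel)
```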

Backends

  • Add support for OpenRouter (#282)
  • Add support for Gemini 1.5 (#284)
  • Add support for GPT 4 Turbo (#286)
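As an illustration, OpenRouter can be registered through gptel's OpenAI-compatible backend constructor; the key and model list below are placeholders to adapt to your account:

```emacs-lisp
;; Sketch: register OpenRouter as an OpenAI-compatible backend.
(gptel-make-openai "OpenRouter"
  :host "openrouter.ai"
  :endpoint "/api/v1/chat/completions"
  :stream t
  :key "your-openrouter-api-key"   ; placeholder; can also be a function
  :models '("openai/gpt-4-turbo")) ; placeholder model list
```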

Bug fixes

Several bugs have been fixed:

  • gptel's status in the header line now updates when sending queries from the transient menu. (#293)
  • More Org elements are supported in the Markdown to Org converter (#296)
  • gptel now supports saving and resuming chats when using Ollama (#181)
  • Ollama integration is now stateless, resolving a number of bugs (#270, #279)
  • Fix Ollama response parsing errors (#179)

Version 0.8.5

04 Apr 08:03

Version 0.8.5 adds the following features:

gptel and Org mode

Additional features are now available when using gptel in Org mode:

  • By default the context for a query is the contents of the buffer up to the cursor. You can set the context to be the lineage of the current Org heading by enabling gptel-org-branching-context. This makes each heading at a given level a different branch of the conversation.
  • Limit the conversation context to an Org heading by setting gptel-org-set-topic. (This is an alternative to branching context.)
  • Set the current gptel configuration (backend, model, system message, temperature and max tokens) as Org properties under the current heading with the command gptel-org-set-properties. When these properties are present, they override the buffer-specific or global settings. (#141)
  • On-the-fly conversion of responses to Org mode has been smoothed out further. For best results, ask the LLM to respond only in Markdown if you are using Org mode.
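For instance, branching context is enabled with a single setting:

```emacs-lisp
;; Sketch: make the lineage of the current Org heading the
;; conversation context, so sibling subtrees become independent
;; branches of the chat.
(setq gptel-org-branching-context t)
```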

UI

  • The current system prompt is now shown (in truncated form) in the header-line in gptel chat buffers (#274).

Bug fixes

Several bugs have been fixed:

  • (Anthropic) Attach additional directive correctly when interacting with Anthropic Claude models (#276)
  • (Anthropic) Handle edge cases when parsing partial responses (#261)
  • (All backends) Fix empty responses caused by json.el parsing errors when libjansson support is not available (#251, #264)
  • (Ollama) Fix parsing errors caused by the libjansson transition (#255)
  • The dry-run commands now perform an actual dry-run, i.e. everything except sending the queries (#276).
  • The cursor can now be moved by functions in gptel-post-response-hook. (#269)

Version 0.8.0

17 Mar 04:26

Version 0.8.0 adds the following features:

Backends

  • Support for the Anthropic API and Claude 3 models
  • Support for Groq
  • Updated OpenAI model list.
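A minimal sketch of using the new Anthropic support; the key and model name are placeholders:

```emacs-lisp
;; Sketch: register Claude and make it the default backend.
(setq gptel-backend (gptel-make-anthropic "Claude"
                      :stream t
                      :key "your-anthropic-api-key") ; placeholder
      gptel-model "claude-3-sonnet-20240229")        ; placeholder
```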

UI and configuration

There have been many improvements to gptel's transient menu interface:

  • From the menu, you can now attach an additional directive to the next query on top of the system message. This is useful for specific instructions that change with each query, like when refactoring code and regenerating or refining responses.
  • Some introspection commands have been added to the menu: you can see exactly what will be sent (as a Lisp or JSON object) with the next query. To enable these commands in the menu, set gptel-log-level to a non-nil value.
  • Various aspects of the menu have been tuned to be more efficient and cause less friction.
  • Model and query parameters (including the system message) are now global variables by default. This is to make it easier to work at a project level without having to set them in each buffer. You can continue to set them at a buffer-local level (the previous default) using a switch in the menu.

Other

  • gptel now uses libjansson if Emacs is compiled with support for it. This makes json parsing about 3x faster. (LLM responses are typically not large, so there is only a modest increase in parsing speed and Emacs' responsiveness when using gptel.)
  • Org mode output when streaming responses is much improved, with most edge cases resolved.

Deprecation notice

  • The dedicated "refactor" or "rewrite" menus are deprecated and will be removed in the next major release. Note that all of their functionality (including ediff-ing) is now available from the main gptel menu.

Version 0.7.0

21 Feb 08:20

Version 0.7.0 adds the following features:

Backends

  • Support for Perplexity.ai (contributed by @dbactual)
  • Updated OpenAI model list.
  • Better support for Gemini
  • Support for passing arbitrary extra arguments to Curl (for HTTP requests) when gptel does not provide a customization option for them (contributed by @r0man)

UI and configuration

  • Response regeneration and history: You can now regenerate a response at point and cycle through past versions of the response at point. These are accessible from the transient menu when the point is over a response.
  • Customizable display-buffer action to choose where gptel chat windows are placed when chat buffers are created with M-x gptel.
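A sketch of the placement option; the side-window parameters here are arbitrary choices, not defaults:

```emacs-lisp
;; Sketch: show new gptel chat windows in a side window on the right.
(setq gptel-display-buffer-action
      '(display-buffer-in-side-window
        (side . right)
        (window-width . 0.4)))
```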

Version 0.6.5

03 Feb 22:39

Version 0.6.5 adds the following features:

Backends

  • Support for the Kagi summarizer engine(s).
  • Support for Together.ai and Anyscale.
  • Updated OpenAI model list.

UI and configuration

  • gptel now handles attempts to use models unsupported by a backend gracefully (contributed by @joaotavora).
  • Easier configuration for OpenAI backends (header does not need to be specified)
  • Redirect responses to any buffer, not just gptel chat sessions.
  • Optional logging of all requests/response data.

Bug fixes

  • Persist multi-line directives in saved files correctly.
  • Org-conversion bug fixes.

Many more minor bug fixes.

Version 0.6.0

13 Jan 07:38

Version 0.6.0 adds the following features:

Backends

  • Support for Kagi's FastGPT model

UI

  • Descriptions of directives when picking them.
  • Option to choose the kill-ring contents as the prompt.

Additionally there are bug fixes involving auto-scrolling, the customize interface, and improved documentation and polish.

Version 0.5.5

22 Dec 02:17

Version 0.5.5 adds the following features:

Backends

  • Support for Google's Gemini LLMs (contributed by @mrdylanyin)

UI

  • Option to auto-scroll the window as the response is inserted
  • Option to move the point to the next prompt when the response ends
  • Minimal status indicator for gptel-mode that does not take over the header-line (contributed by @codeasone)

Additionally there are bug fixes involving Curl interaction and API key retrieval.

Version 0.5.0

17 Dec 00:05

Version 0.5.0 adds the following features:

LLM Backends

  • Support local LLMs via the Ollama and GPT4All web APIs.
  • Support Azure instances
  • Support latest GPT-3.5 and GPT-4 versions.
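A sketch of the new local-LLM support via Ollama; the host, port and model name are placeholders for your local setup:

```emacs-lisp
;; Sketch: register a local Ollama backend and select it.
(setq gptel-backend (gptel-make-ollama "Ollama"
                      :host "localhost:11434"     ; placeholder
                      :stream t
                      :models '("mistral:latest")) ; placeholder
      gptel-model "mistral:latest")
```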

UI

  • Add customizable prompt and response prefixes (Contributed by @daedsidog)

Bug fixes

  • Better support for Curl on Windows
  • Handle large payloads when using Curl
  • Better error handling.

Version 0.4.0

28 Jul 23:08
  • Fix several minor bugs
  • Add feature to save chats to a file and resume them later
  • Add crowdsourced prompts