
v0.2.1: Prompt previews, Toggleable prompt variables, Anthropic Claude 2

Released by @ianarawjo · 12 Jul 21:33

We've made several quality-of-life improvements since the 0.2 release.

Prompt previews

You can now inspect the generated prompts that will be sent off to LLMs. For a quick glance, simply hover over the 'list' icon on Prompt Nodes:

[Screenshot: hover-over-prompt-preview]

For full inspection, just click the button to bring up a popup inspector.

Thanks to @profplum700 for raising Issue #90!

Enable or disable prompt variables in Text Fields without deleting them

You can now enable/disable prompt variables selectively:

[Video: selective-field-visibility.mov]

Thanks to @profplum700 for raising Issue #93!

Anthropic model Claude-2

We've also added Anthropic's newest model, Claude 2. All prior models remain supported; however, Claude 1 and the 100k-context models have quietly disappeared from the Anthropic API documentation. So, if you are using earlier Claude models, be aware that they may stop working at some point in the future.
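For reference, the new model is addressed as claude-2 in the Anthropic API. ChainForge makes these calls for you, but as a rough illustration (a minimal sketch using the Anthropic Python SDK's completions endpoint, not ChainForge code), selecting Claude 2 corresponds to something like:

```python
from anthropic import Anthropic, HUMAN_PROMPT, AI_PROMPT

# Reads ANTHROPIC_API_KEY from the environment.
client = Anthropic()

# "claude-2" is the model identifier ChainForge now exposes.
completion = client.completions.create(
    model="claude-2",
    max_tokens_to_sample=256,
    prompt=f"{HUMAN_PROMPT} What is a prompt template?{AI_PROMPT}",
)
print(completion.completion)
```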

Bug fixes

There have also been numerous bug fixes, including:

  • Braces { and } inside Tabular Data tables are now escaped by default when data is pulled from the nodes, so they are never treated as prompt templates.
  • Escaped template braces \{ and \} now have the escape slash removed when prompts are generated for models.
  • Outputs of Prompt Nodes, when chained into other Prompt Nodes, now escape the braces in LLM responses by default. Whenever prompts are generated, the escaped braces are cleaned up to plain { and }. In response inspectors, input variables will appear with escaped braces, since input variables in ChainForge may themselves be templates. A sketch of this escaping behavior follows the list.
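To make the escaping behavior concrete: this is not ChainForge's exact implementation, but a minimal Python sketch of the general idea, with hypothetical helper names (escape_braces, unescape_braces, fill_template). Literal braces in data are escaped before substitution so they can't be mistaken for template variables, and the escape slashes are stripped when the final prompt is produced:

```python
import re

def escape_braces(text: str) -> str:
    """Escape { and } so they are not treated as template variables."""
    return text.replace("{", "\\{").replace("}", "\\}")

def unescape_braces(text: str) -> str:
    """Remove escape slashes when the final prompt is generated."""
    return text.replace("\\{", "{").replace("\\}", "}")

def fill_template(template: str, variables: dict) -> str:
    """Substitute {var} placeholders, leaving escaped \\{ ... \\} alone."""
    def replace(match: re.Match) -> str:
        return str(variables.get(match.group(1), match.group(0)))
    # Match {name} only when the opening brace is not escaped.
    filled = re.sub(r"(?<!\\)\{(\w+)\}", replace, template)
    return unescape_braces(filled)

# A value pulled from a Tabular Data node (or an LLM response) is escaped
# first, so its braces survive substitution as literal characters.
cell = escape_braces('JSON sample: {"key": 1}')
prompt = fill_template("Explain this snippet: {data}", {"data": cell})
print(prompt)  # Explain this snippet: JSON sample: {"key": 1}
```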

Future Goals

We've been running pilot studies internally at Harvard HCI and getting some informal feedback.

  • One point that keeps coming up echoes Issue #56, raised by @jjordanbaird: the ability to keep chat context and evaluate multiple chatbot turns. We are thinking of implementing this as a Chat Turn Node, where one can optionally provide "past conversation" context as input. The overall structure will be similar to Prompt Nodes, except that only Chat Models will be available. See #56 for more details.
  • Another issue we're aware of is the need for better documentation on what you can do with ChainForge, particularly on the rather unique feature of chaining prompt templates together.

As always, if you have any feedback or comments, open an Issue or start a Discussion.