This release notably includes the following changes:
- Models from the `Anthropic` and `ChatAnthropic` providers are now merged in the config UI, so all Anthropic models are shown in the same place in the "Language model" dropdown.
- Anthropic Claude v1 LLMs have been removed, as the models are retired and no longer available from the API.
- The chat system prompt has been updated to encourage the LLM to express dollar quantities in LaTeX, i.e. the LLM should prefer returning `\(\$100\)` instead of `$100`. For the latest LLMs, this generally fixes a rendering issue when multiple dollar quantities are given literally in the same sentence.
  - Note that the issue may still persist in older LLMs, which do not follow the system prompt as consistently.
- `/export` has been fixed to include streamed replies, which were previously omitted.
- Calling non-chat providers with history has been fixed to behave properly in magics.
- Remove retired models and add new `Haiku-3.5` model in Anthropic #1092 (@srdas)
- Reduced padding in cell around code icons in code toolbar #1072 (@srdas)
- Merge Anthropic language model providers #1069 (@srdas)
- Add examples of using Fields and EnvAuthStrategy to developer documentation #1056 (@alanmeeson)
- Continue to allow `$` symbols to delimit inline math in human messages #1094 (@dlqqq)
- Fix `/export` by including streamed agent messages #1077 (@mcavdar)
- Fix magic commands when using non-chat providers w/ history #1075 (@alanmeeson)
- Allow `$` to literally denote quantities of USD in chat #1068 (@dlqqq)
- Improve installation documentation and clarify provider dependencies #1087 (@srdas)
- Added Ollama to the providers table in user docs #1064 (@srdas)
(GitHub contributors page for this release)
@alanmeeson | @dlqqq | @krassowski | @mcavdar | @srdas
(GitHub contributors page for this release)
@dlqqq | @pre-commit-ci | @srdas
This release notably includes the addition of a "Stop streaming" button, which takes over the "Send" button when a reply is streaming and the chat input is empty. While Jupyternaut is streaming a reply to a user, the user has the option to click the "Stop streaming" button to interrupt Jupyternaut and stop it from streaming further. Thank you @krassowski for contributing this feature! 🎉
- Support Quarto Markdown in `/learn` #1047 (@dlqqq)
- Update requirements contributors doc #1045 (@JasonWeill)
- Remove clear_message_ids from RootChatHandler #1042 (@michaelchia)
- Migrate streaming logic to `BaseChatHandler` #1039 (@dlqqq)
- Unify message clearing & broadcast logic #1038 (@dlqqq)
- Learn from JSON files #1024 (@jlsajfj)
- Allow users to stop message streaming #1022 (@krassowski)
- Always use `username` from `IdentityProvider` #1034 (@krassowski)
- Support `jupyter-collaboration` v3 #1035 (@krassowski)
- Test Python 3.9 and 3.12 on CI, test minimum dependencies #1029 (@krassowski)
- Update requirements contributors doc #1045 (@JasonWeill)
(GitHub contributors page for this release)
@dlqqq | @JasonWeill | @jlsajfj | @krassowski | @michaelchia | @pre-commit-ci
- Export context hooks from NPM package entry point #1020 (@dlqqq)
- Add support for optional telemetry plugin #1018 (@dlqqq)
- Add back history and reset subcommand in magics #997 (@akaihola)
(GitHub contributors page for this release)
@akaihola | @dlqqq | @jtpio | @pre-commit-ci
- Make path argument required on /learn #1012 (@andrewfulton9)
(GitHub contributors page for this release)
@andrewfulton9 | @dlqqq | @hockeymomonow
This release notably introduces a new context command `@file:<file-path>` to the chat UI, which includes the content of the target file with your prompt when sent. This allows you to ask questions like:
What does @file:src/components/ActionButton.tsx do?
Can you refactor @file:src/index.ts to use async/await syntax?
How do I add an optional dependency to @file:pyproject.toml?
The context command feature also includes an autocomplete menu UI to help navigate your filesystem with fewer keystrokes.
Thank you @michaelchia for developing this feature!
- Migrate to `ChatOllama` base class in Ollama provider #1015 (@srdas)
- Add `metadata` field to agent messages #1013 (@dlqqq)
- Add OpenRouter support #996 (@akaihola)
- Framework for adding context to LLM prompt #993 (@michaelchia)
- Adds unix shell-style wildcard matching to `/learn` #989 (@andrewfulton9)
- Run mypy on CI, fix or ignore typing issues #987 (@krassowski)
(GitHub contributors page for this release)
@akaihola | @andrewfulton9 | @dlqqq | @ellisonbg | @hockeymomonow | @krassowski | @michaelchia | @srdas
- Allow unlimited LLM memory through traitlets configuration #986 (@krassowski)
- Allow to disable automatic inline completions #981 (@krassowski)
- Add ability to delete messages + start new chat session #951 (@michaelchia)
- Fix `RunnableWithMessageHistory` import #980 (@krassowski)
- Fix sort messages #975 (@michaelchia)
(GitHub contributors page for this release)
@dlqqq | @krassowski | @michaelchia | @srdas
- Add 'Generative AI' submenu #971 (@dlqqq)
- Add Gemini 1.5 to the list of chat options #964 (@trducng)
- Allow configuring a default model for cell magics (and line error magic) #962 (@krassowski)
- Make chat memory size traitlet configurable + /clear to reset memory #943 (@michaelchia)
(GitHub contributors page for this release)
@dlqqq | @krassowski | @michaelchia | @pre-commit-ci | @srdas | @trducng
- Add optional configurable message footer #942 (@dlqqq)
- Add support for Azure Open AI Embeddings to Jupyter AI #940 (@gsrikant7)
- Make help message template configurable #938 (@dlqqq)
- Add latest Bedrock models (Titan, Llama 3.1 405b, Mistral Large 2, Jamba Instruct) #923 (@gabrielkoo)
- Add support for custom/provisioned models in Bedrock #922 (@dlqqq)
- Settings section improvement #918 (@andrewfulton9)
- Bind reject method to promise, improve typing #949 (@krassowski)
- Fix sending empty input with Enter #946 (@michaelchia)
- Fix saving chat settings #935 (@dlqqq)
- Add documentation on how to use Amazon Bedrock #936 (@srdas)
- Update copyright template #925 (@srdas)
(GitHub contributors page for this release)
@andrewfulton9 | @dlqqq | @gabrielkoo | @gsrikant7 | @krassowski | @michaelchia | @srdas
- Respect selected persona in chat input placeholder #916 (@dlqqq)
- Migrate to `langchain-aws` for AWS providers #909 (@dlqqq)
- Added new Bedrock Llama 3.1 models and gpt-4o-mini #908 (@srdas)
- Rework selection inclusion; new Send button UX #905 (@dlqqq)
(GitHub contributors page for this release)
@dlqqq | @JasonWeill | @srdas
- Allow overriding the Ollama base URL #904 (@jtpio)
- Make magic aliases user-customizable #901 (@krassowski)
- Trim leading whitespace when processing #900 (@krassowski)
- Fix python<3.10 compatibility #899 (@michaelchia)
- Add notebooks to the documentation #906 (@andrewfulton9)
- Update docs to reflect Python 3.12 support #898 (@dlqqq)
(GitHub contributors page for this release)
@andrewfulton9 | @dlqqq | @jtpio | @krassowski | @michaelchia | @pre-commit-ci
This is a significant release that implements LLM response streaming in Jupyter AI, along with several other enhancements & fixes listed below. Special thanks to @krassowski for his generous contributions to this release!
- Upgrade to `langchain~=0.2.0` and `langchain_community~=0.2.0` #897 (@dlqqq)
- Rework selection replacement #895 (@dlqqq)
- Ensure all slash commands support `-h`/`--help` #878 (@krassowski)
- Add keyboard shortcut command to focus chat input #876 (@krassowski)
- Implement LLM response streaming #859 (@dlqqq)
- Add Ollama #646 (@jtpio)
- Fix streaming in `HuggingFaceHub` provider #894 (@krassowski)
- Fix removal of pending messages on error #888 (@krassowski)
- Ensuring restricted access to the `/learn` index directory #887 (@krassowski)
- Make preferred-dir the default read/write directory for slash commands #881 (@andrewfulton9)
- Fix prefix removal when streaming inline completions #879 (@krassowski)
- Limit chat input height to 20 lines #877 (@krassowski)
- Do not redefine `refreshCompleterState` on each render #875 (@krassowski)
- Remove unused toolbars/menus from schema #873 (@krassowski)
- Fix plugin ID format #872 (@krassowski)
- Address error on `/learn` after change of embedding model #870 (@srdas)
- Fix pending message overlapping text #857 (@michaelchia)
- Fixes error when allowed or blocked model list is passed in config #855 (@3coins)
- Fixed `/export` for timestamp, agent name #854 (@srdas)
- Update to `actions/checkout@v4` #893 (@jtpio)
- Upload `jupyter-releaser` built distributions #892 (@jtpio)
- Updated integration tests workflow #890 (@krassowski)
(GitHub contributors page for this release)
@3coins | @andrewfulton9 | @brichet | @dannongruver | @dlqqq | @JasonWeill | @jtpio | @krassowski | @lalanikarim | @michaelchia | @pedrogutobjj | @srdas
- Add claude sonnet 3.5 models #847 (@srdas)
- Update `clear` slash command to use `HelpChatHandler` to reinstate the help menu #846 (@srdas)
- Fix send via keyboard after sending slash command with arguments #850 (@dlqqq)
- Fix Cohere models by using new `langchain-cohere` partner package #848 (@dlqqq)
(GitHub contributors page for this release)
- Add new Cohere models #834 (@srdas)
- Group messages with their replies #832 (@michaelchia)
- Support Notebook 7 #827 (@jtpio)
- Support pending/loading message while waiting for response #821 (@michaelchia)
- Fix compatibility with Python 3.8 #844 (@krassowski)
- Updates end of maintenance messaging to be in the past tense #843 (@JasonWeill)
(GitHub contributors page for this release)
@dlqqq | @JasonWeill | @jtpio | @krassowski | @michaelchia | @srdas
- Add `/fix` slash command #828 (@dlqqq)
- Add support for MistralAI #823 (@jtpio)
- Document supported file types for /learn #816 (@JasonWeill)
- Refactor split function with tests #811 (@srdas)
- Autocomplete UI for slash commands #810 (@dlqqq)
- Prevent overriding `server_settings` on base provider class #825 (@krassowski)
- Fix import deprecations #824 (@jtpio)
- Document supported file types for /learn #816 (@JasonWeill)
- Document how to create completions using full notebook content #777 (@krassowski)
(GitHub contributors page for this release)
@dlqqq | @JasonWeill | @jtpio | @krassowski | @srdas
- Fix Azure OpenAI authentication from UI #794 (@dlqqq)
- Updated Hugging Face chat and magics processing with new APIs, clients #784 (@srdas)
(GitHub contributors page for this release)
@dlqqq | @JasonWeill | @srdas
- Add Titan embedding model v2 #778 (@srdas)
- Save chat history to Jupyter Lab's root directory #770 (@srdas)
- Add new Bedrock model IDs #764 (@srdas)
- learn arxiv tex files #742 (@srdas)
- Distinguish between completion and chat models #711 (@krassowski)
- Save chat history to Jupyter Lab's root directory #770 (@srdas)
- change `unsupported_slash_commands` default value from dict to set #768 (@michaelchia)
- Switch to langchain_community #758 (@srdas)
(GitHub contributors page for this release)
@3coins | @dlqqq | @krassowski | @michaelchia | @srdas
- Load persisted vector store by default #753 (@dlqqq)
- Remove `pypdf` from required dependencies #752 (@dlqqq)
- Fix /learn in 2.14.0 #747 (@michaelchia)
(GitHub contributors page for this release)
@3coins | @dlqqq | @michaelchia
- Handle single files, pdfs, errors from missing loader dependencies in `/learn` #733 (@srdas)
- Move methods generating completion replies to the provider #717 (@krassowski)
- Handle Single Files and also enable html, pdf file formats for /learn #712 (@srdas)
- Catch embedding model validation errors on extension init #735 (@dlqqq)
- Require `jupyter_ai_magics` 2.13.0 to fix `Persona` import #731 (@krassowski)
- Fixes help slash command. #729 (@3coins)
- Remove trailing Markdown code tags in completion suggestions #726 (@bartleusink)
- Update Azure OpenAI fields #722 (@cloutier)
- Handle Single Files and also enable html, pdf file formats for /learn #712 (@srdas)
(GitHub contributors page for this release)
@3coins | @bartleusink | @cloutier | @dlqqq | @krassowski | @srdas | @welcome
- Improve support for custom providers #713 (@dlqqq)
- Update Anthropic providers to use `langchain_anthropic` partner package #700 (@dlqqq)
- Add Claude-3-Haiku #696 (@srdas)
- Use `AZURE_OPENAI_API_KEY` for Azure OpenAI provider #691 (@aroffe99)
- /export added #658 (@apurvakhatri)
- Fix rendering of model IDs with a colon in their name #704 (@dlqqq)
- Update Anthropic providers to use `langchain_anthropic` partner package #700 (@dlqqq)
- Use new `langchain-openai` partner package #653 (@startakovsky)
(GitHub contributors page for this release)
@apurvakhatri | @aroffe99 | @dlqqq | @lumberbot-app | @srdas | @startakovsky | @welcome
- Add Anthropic Claude 3 models to providers #672 (@srdas)
- Add support for Gemini #666 (@giswqs)
- %ai version added #665 (@apurvakhatri)
- Together.ai provider added #654 (@MahdiDavari)
- Fix selecting models with a colon in their ID #682 (@dlqqq)
- Use regex in TeX replace function to catch repeating symbol occurrences #675 (@andrii-i)
- Resolves chat panel initialization error #660 (@abbott)
- fix bug: check before using the variables #656 (@ya0guang)
(GitHub contributors page for this release)
@abbott | @andrii-i | @apurvakhatri | @dlqqq | @giswqs | @lumberbot-app | @MahdiDavari | @srdas | @welcome | @ya0guang
This release notably includes a significant UI improvement for the chat side panel. The chat UI now uses the native JupyterLab frontend to render Markdown, code blocks, and TeX markup instead of a third party package. Thank you to @andrii-i for building this feature!
- Fix cookiecutter template #637 (@dlqqq)
- Add OpenAI text-embedding-3-small, -large models #628 (@JasonWeill)
- Add new OpenAI models #625 (@EduardDurech)
- Use @jupyterlab/rendermime for in-chat markdown rendering #564 (@andrii-i)
- Unifies parameters to instantiate llm while incorporating model params #632 (@JasonWeill)
- Add `nodejs=20` to the contributing docs #645 (@jtpio)
- Update docs to mention `langchain_community.llms` #642 (@jtpio)
- Fix cookiecutter template #637 (@dlqqq)
- Fix conda-forge typo in readme #626 (@droumis)
(GitHub contributors page for this release)
@andrii-i | @dlqqq | @droumis | @EduardDurech | @JasonWeill | @jtpio | @krassowski | @lalanikarim | @lumberbot-app | @welcome | @Wzixiao
This is the first public release of Jupyter AI inline completion, initially developed by @krassowski. Inline completion requires `jupyterlab==4.1.0` to work, so make sure you have that installed if you want to try it out! 🎉
- Bump `@jupyterlab/completer` resolution to `^4.1.0` #621 (@dlqqq)
- Restyles model names in Markdown to avoid wrapping model names #606 (@JasonWeill)
- Expose templates for customisation in providers #581 (@krassowski)
- Add nvidia provider #579 (@stevie-35)
- Reflect theme changes without a refresh #575 (@garsonbyte)
- Setting default model providers #421 (@aws-khatria)
- Allow usage without NVIDIA partner package #622 (@dlqqq)
- fix to conda install instructions in readme #610 (@Tom-A-Lynch)
- Uses invoke() to call custom chains. Handles dict output format. #600 (@JasonWeill)
- Removes deprecated models, adds updated models for openai #596 (@JasonWeill)
- Upgrades cohere dependency, model list #594 (@JasonWeill)
- Mentions conda install instructions in docs #611 (@JasonWeill)
- Add Kaggle to supported platforms #577 (@adriens)
(GitHub contributors page for this release)
@3coins | @adriens | @aws-khatria | @dlqqq | @garsonbyte | @JasonWeill | @krassowski | @lumberbot-app | @stevie-35 | @Tom-A-Lynch | @welcome
- Fix streaming, add minimal tests #592 (@krassowski)
(GitHub contributors page for this release)
- Inline completion support #582 (@krassowski)
(GitHub contributors page for this release)
@dlqqq | @jtpio | @krassowski
(GitHub contributors page for this release)
@dlqqq | @JasonWeill | @jtpio | @welcome
- Implement `stop_extension()` #565 (@dlqqq)
- Upgrades openai to version 1, removes openai history in magics #551 (@JasonWeill)
(GitHub contributors page for this release)
@dlqqq | @JasonWeill | @krassowski | @lumberbot-app
- Fixes lookup for custom chains #560 (@JasonWeill)
- Pin `langchain-core` dependency to prevent Settings UI crash #558 (@dlqqq)
(GitHub contributors page for this release)
@dlqqq | @JasonWeill | @krassowski
- Add gpt-4-1106-preview model from openai #540 (@jamesjun)
- Adds multi-environment variable authentication, Baidu Qianfan ERNIE-bot provider #531 (@JasonWeill)
- Clarify/fix conda instructions #547 (@krassowski)
- Main branch is compatible with Lab 4 only #536 (@JasonWeill)
- Document entry point and API for custom embedding models #533 (@krassowski)
(GitHub contributors page for this release)
@dlqqq | @ellisonbg | @jamesjun | @JasonWeill | @krassowski | @sundaraa-deshaw | @welcome | @Zsailer
- Refactor ConfigManager._init_config #527 (@andrii-i)
- Upgrades to langchain 0.0.350 #522 (@JasonWeill)
- Dynamically generate help message for slash commands in chat UI #520 (@krassowski)
- Run Python unit tests as a part of CI #519 (@andrii-i)
- Update README.md - under incubation #517 (@JasonWeill)
- Make Jupyternaut reply for API auth errors user-friendly #513 (@andrii-i)
- Respect user preferred dir and allow to configure logs dir #490 (@krassowski)
- Refactor ConfigManager._init_config #527 (@andrii-i)
- Upgrades to langchain 0.0.350 #522 (@JasonWeill)
- Run Python unit tests as a part of CI #519 (@andrii-i)
- Update README.md - under incubation #517 (@JasonWeill)
(GitHub contributors page for this release)
@andrii-i | @dlqqq | @JasonWeill | @krassowski | @lumberbot-app
- Adds new models to Bedrock provider #499 (@JasonWeill)
- Base chat handler refactor for custom slash commands #398 (@JasonWeill)
- Remove stale `@jupyterlab/collaboration` dependency #489 (@krassowski)
- Don't run check-release on release #477 (@Adithya4720)
- Remove config.json-related information #503 (@andrii-i)
- Update Users section of the docs #494 (@andrii-i)
- Update README.md #473 (@3coins)
(GitHub contributors page for this release)
@3coins | @Adithya4720 | @andrii-i | @dlqqq | @JasonWeill | @krassowski | @welcome | @Zsailer
- Pydantic v1 and v2 compatibility #466 (@JasonWeill)
- Add step to create a GPT4All cache folder to the docs #457 (@andrii-i)
- Add gpt4all local models, including an embedding provider #454 (@3coins)
- Copy edits for Jupyternaut messages #439 (@JasonWeill)
- If model_provider_id or embeddings_provider_id is not associated with models, set it to None #459 (@andrii-i)
- Add gpt4all local models, including an embedding provider #454 (@3coins)
- Ensure initials appear in collaborative mode #443 (@aychang95)
- Add step to create a GPT4All cache folder to the docs #457 (@andrii-i)
- Updated docs for config. #450 (@3coins)
- Copy edits for Jupyternaut messages #439 (@JasonWeill)
(GitHub contributors page for this release)
@3coins | @andrii-i | @aychang95 | @dlqqq | @JasonWeill | @welcome
- Model allowlist and blocklists #446 (@dlqqq)
- DOC: Render hugging face url as link #432 (@arokem)
- Log exceptions in `/generate` to a file #431 (@dlqqq)
- Model parameters option to pass in model tuning, arbitrary parameters #430 (@3coins)
- /learn skips hidden files/dirs by default, unless "-a" is specified #427 (@JasonWeill)
- Model parameters option to pass in model tuning, arbitrary parameters #430 (@3coins)
- Rename Bedrock and Bedrock chat providers in docs #429 (@JasonWeill)
- DOC: Render hugging face url as link #432 (@arokem)
- Rename Bedrock and Bedrock chat providers in docs #429 (@JasonWeill)
- Document how to add custom model providers #420 (@krassowski)
(GitHub contributors page for this release)
@3coins | @arokem | @dlqqq | @ellisonbg | @JasonWeill | @jtpio | @krassowski | @welcome | @Wzixiao
Hey Jupyternauts! We're excited to announce the 2.4.0 release of Jupyter AI, which includes better support for Bedrock Anthropic models. Thanks to @krassowski for providing a new feature in Jupyter AI that lets admins specify allowlists and blocklists to filter the list of providers available in the chat settings panel.
- Allow to define block and allow lists for providers #415 (@krassowski)
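For reference, a minimal traitlets config sketch of the allowlist/blocklist feature. The trait names below are assumptions based on the feature description; verify them against `jupyter lab --help-all` for your installed Jupyter AI version:

```python
# In a Jupyter server config file, e.g. jupyter_lab_config.py.
# NOTE: trait names are assumed from the allow/block list feature
# description; check them for your Jupyter AI version.
c.AiExtension.allowed_providers = ["anthropic", "bedrock"]  # show only these providers
# Alternatively, hide specific providers instead:
# c.AiExtension.blocked_providers = ["openai"]
```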
(GitHub contributors page for this release)
@3coins | @krassowski | @pre-commit-ci
Hey Jupyternauts! We're excited to announce the 2.3.0 release of Jupyter AI, which includes better support for Anthropic models and integration with Amazon Bedrock.
There is also a significant change to how Jupyter AI settings are handled (see #353). The most significant changes are:
- API key values can no longer be read from the client. This was taken as a security measure to prevent accidental leakage of keys. You can still update existing API keys if you do decide to change your key in the future.
- The settings cannot be updated if they were updated by somebody else after you opened the settings panel. This prevents different users connected to the same server from clobbering each other's updates.
- There is now a much better UI for updating and deleting API keys. We hope you enjoy it.
Updating to 2.3.0 shouldn't require any changes on your end. However, if you notice an error, please submit a bug report with the server logs emitted in the terminal from the `jupyter lab` process. Renaming the config file `$JUPYTER_DATA_DIR/jupyter_ai/config.json` to some other name and then restarting `jupyter lab` may fix the issue if it is a result of the new config changes.
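If you need that workaround, here is a minimal shell sketch, assuming the common default `JUPYTER_DATA_DIR` location on Linux; adjust the path for your platform:

```shell
# Back up the Jupyter AI config so a fresh one is generated on next launch.
# JUPYTER_DATA_DIR commonly defaults to ~/.local/share/jupyter on Linux.
CONFIG="${JUPYTER_DATA_DIR:-$HOME/.local/share/jupyter}/jupyter_ai/config.json"
if [ -f "$CONFIG" ]; then
  mv "$CONFIG" "$CONFIG.bak"
fi
# Then restart the server:
# jupyter lab
```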
- Adds chat anthropic provider, new models #391 (@3coins)
- Adds help text for registry model providers in chat UI settings #373 (@JasonWeill)
- jupyter_ai and jupyter_ai_magics version match #367 (@JasonWeill)
- Config V2 #353 (@dlqqq)
- Add E2E tests #350 (@andrii-i)
- Upgraded LangChain, fixed prompts for Bedrock #401 (@3coins)
- Adds chat anthropic provider, new models #391 (@3coins)
- jupyter_ai and jupyter_ai_magics version match #367 (@JasonWeill)
(GitHub contributors page for this release)
@3coins | @andrii-i | @dlqqq | @JasonWeill | @krassowski
- Loads vector store index lazily #374 (@3coins)
- Added alias for bedrock titan model #368 (@3coins)
- Update README, docs #347 (@JasonWeill)
- fix newline typo in improve_code #364 (@michaelchia)
- Upgrades LangChain to 0.0.277 #375 (@3coins)
- relax pinning on importlib_metadata, typing_extensions #363 (@minrk)
- Remove front end unit tests from code and README.md #371 (@andrii-i)
- Update README, docs #347 (@JasonWeill)
(GitHub contributors page for this release)
@3coins | @andrii-i | @JasonWeill | @krassowski | @michaelchia | @minrk | @welcome
- Add new 0613 GPT-3.5 and GPT-4 models #337 (@bjornjorgensen)
- howto add key #330 (@bjornjorgensen)
- Azure OpenAI and OpenAI proxy support #322 (@dlqqq)
- Add GPT4All local provider #209 (@krassowski)
- Update README.md #338 (@3coins)
- howto add key #330 (@bjornjorgensen)
(GitHub contributors page for this release)
@3coins | @anammari | @bjornjorgensen | @dlqqq | @JasonWeill | @krassowski | @welcome
- add claude 2 to anthropic models #314 (@jmkuebler)
- Prompt template override in BaseProvider #309 (@JasonWeill)
- Updates docs to refer to JupyterLab versions #300 (@JasonWeill)
- handle IDPs that don't return initials #316 (@dlqqq)
- Handles /clear command with selection #307 (@JasonWeill)
- Updates docs to refer to JupyterLab versions #300 (@JasonWeill)
- Update magics.ipynb #320 (@eltociear)
(GitHub contributors page for this release)
@dlqqq | @eltociear | @JasonWeill | @jmkuebler | @pre-commit-ci | @welcome
This is currently the latest major version, and supports exclusively JupyterLab 4.
Existing users who are unable to migrate to JupyterLab 4 immediately should use v1.x. However, feature releases and bug fixes will only be backported to v1.x as we deem necessary, so we highly encourage existing Jupyter AI users to migrate to JupyterLab 4 and Jupyter AI v2 as soon as possible to enjoy all of the latest features we are currently developing.
Thank you all for your support of Jupyter AI! 🎉
(GitHub contributors page for this release)
This release serves exclusively to dedicate a major version to the 1.x branch providing JupyterLab 3 support.
- Chat help message on load #277 (@JasonWeill)
(GitHub contributors page for this release)
(GitHub contributors page for this release)
- Allows specifying chunk size and overlap with /learn #267 (@3coins)
- Added Bedrock provider #263 (@3coins)
- Validate JSON for request schema #261 (@JasonWeill)
- Updates docs with reset, model lists #254 (@JasonWeill)
- Migrate to Dask #244 (@dlqqq)
- Sets font color for intro text #265 (@JasonWeill)
- Added Bedrock provider #263 (@3coins)
- Updates docs with reset, model lists #254 (@JasonWeill)
(GitHub contributors page for this release)
@3coins | @dlqqq | @JasonWeill | @pre-commit-ci
- Fixes "replace selection" behavior when nothing is selected #251 (@JasonWeill)
- Adds str method for TextWithMetadata #250 (@JasonWeill)
- Fix settings update and vertical scroll #249 (@3coins)
- Truncate chat history to last 2 conversations #240 (@3coins)
- Use pre-commit #237 (@dlqqq)
- Removes unused dialog code #234 (@JasonWeill)
- Change sagemaker example to make more sense #231 (@JasonWeill)
- add JS lint workflow #230 (@JasonWeill)
(GitHub contributors page for this release)
@3coins | @dlqqq | @JasonWeill | @pre-commit-ci
- Support SageMaker Endpoints in chat #197 (@dlqqq)
- Migrate to click #188 (@dlqqq)
- Adds %ai error magic command to explain the most recent error #170 (@JasonWeill)
- Register, update, and delete aliases #136 (@JasonWeill)
- Only attempt re-connect on abnormal closure #222 (@3coins)
- Update system prompt #221 (@JasonWeill)
- Fixes double call to cell help command #220 (@JasonWeill)
- Creates a new websocket connection in case of disconnect #219 (@3coins)
- SageMaker endpoint magic command support #215 (@JasonWeill)
- Removes comment from magic command #213 (@JasonWeill)
- Added python version to release action #223 (@3coins)
- Pinning python version to 3.10.x #212 (@3coins)
(GitHub contributors page for this release)
@3coins | @dlqqq | @JasonWeill
- Additional docs fix for 3.8 support #185 (@JasonWeill)
- Drops support for Python 3.7, mandates 3.8 or later #184 (@JasonWeill)
- SageMaker Studio support #192 (@3coins)
- fix: Correct recursion error on load in JupyterHub #178 (@mschroering)
- Additional docs fix for 3.8 support #185 (@JasonWeill)
(GitHub contributors page for this release)
@3coins | @dlqqq | @JasonWeill | @mschroering
- Adds config option to use ENTER to send message #164 (@JasonWeill)
- Changes chat messages to use absolute timestamps #159 (@JasonWeill)
- Chat UI quality of life improvements #154 (@JasonWeill)
- Fix `yarn install` in CI #174 (@dlqqq)
- Avoids using `str.removeprefix` and `str.removesuffix` #169 (@JasonWeill)
- Remove reference to now-nonexistent file #165 (@JasonWeill)
- Uses React 17, not 18, for @jupyter-ai/core dependency #157 (@JasonWeill)
- Remove reference to now-nonexistent file #165 (@JasonWeill)
(GitHub contributors page for this release)
- Documents server 2 as a requirement #158 (@JasonWeill)
- Documents server 2 as a requirement #158 (@JasonWeill)
(GitHub contributors page for this release)
- Updates docs to refer to new setup process #149 (@JasonWeill)
- Tweak font styles for code blocks in chat #148 (@dlqqq)
- Introduce Jupyternaut #147 (@dlqqq)
- Runtime model configurability #146 (@dlqqq)
- Update providers.py #145 (@thorhojhus)
- Adds helper text to chat input field #139 (@3coins)
- Additional README copy edits #132 (@JasonWeill)
- Copy edits in README #131 (@JasonWeill)
- Revise screen shots in docs #125 (@JasonWeill)
- Docs: Moves chat icon to left tab bar #120 (@JasonWeill)
- Update chat interface privacy and cost notice #116 (@JasonWeill)
- Implement better non-collaborative identity #114 (@dlqqq)
- Adds initial docs for chat UI #112 (@JasonWeill)
- Updates contributor docs with more info about prerequisites #109 (@JasonWeill)
- Adds %ai list, %ai help magic commands #100 (@JasonWeill)
- Removes version from docs config #99 (@JasonWeill)
- Format image provider #66 (@JasonWeill)
- Adds missing newline before closing code block #155 (@JasonWeill)
- Runtime model configurability #146 (@dlqqq)
- Pin LangChain version #134 (@3coins)
- Upgraded ray version, installation instructions that work with python 3.9 and 3.10 #127 (@3coins)
- Strips language indicator from start of code output #126 (@JasonWeill)
- Updates docs to refer to new setup process #149 (@JasonWeill)
- Additional README copy edits #132 (@JasonWeill)
- Copy edits in README #131 (@JasonWeill)
- Revise screen shots in docs #125 (@JasonWeill)
- Docs: Moves chat icon to left tab bar #120 (@JasonWeill)
- Update chat interface privacy and cost notice #116 (@JasonWeill)
- Adds initial docs for chat UI #112 (@JasonWeill)
- Updates contributor docs with more info about prerequisites #109 (@JasonWeill)
- Removes version from docs config #99 (@JasonWeill)
(GitHub contributors page for this release)
@3coins | @dlqqq | @ellisonbg | @JasonWeill | @thorhojhus | @welcome
- Ray based document parsing of more file types #94 (@ellisonbg)
- Create /autonotebook command for AI generated notebooks #90 (@ellisonbg)
- Added support to index py, ipynb, md, and R files #89 (@3coins)
- This creates a memory actor for sharing memory across actors #82 (@ellisonbg)
- Add a /clear command to clear the chat history #78 (@ellisonbg)
- Removes chatgpt, dalle modules #71 (@JasonWeill)
- General UI/UX improvements #70 (@ellisonbg)
- Added doc indexing, moved processing to ray actors #67 (@3coins)
- implement better chat history UI #65 (@dlqqq)
- Basic collaborative chat #58 (@dlqqq)
- Adds code format option #57 (@JasonWeill)
- make selections more robust #54 (@dlqqq)
- Adds prompt templates #53 (@JasonWeill)
- Make provider call async #51 (@3coins)
- Adds Err array with exceptions captured #34 (@JasonWeill)
- Error handling and messaging when the chat service doesn't work #88 (@3coins)
- Removed sleep that was slowing replies down #79 (@ellisonbg)
- Documents requirements to use Python 3.10, JupyterLab #74 (@JasonWeill)
- Documents special error list, updates example file #63 (@JasonWeill)
- Strips prefix and suffix #60 (@JasonWeill)
- Updates README, adds screen shots #56 (@JasonWeill)
- Moved actors to separate modules. #80 (@3coins)
- Remove old UI #77 (@ellisonbg)
- Removes chatgpt, dalle modules #71 (@JasonWeill)
- Documents requirements to use Python 3.10, JupyterLab #74 (@JasonWeill)
- Misc work on README, docs, and magic #69 (@ellisonbg)
- Documents special error list, updates example file #63 (@JasonWeill)
- Updates README, adds screen shots #56 (@JasonWeill)
(GitHub contributors page for this release)
@3coins | @dlqqq | @ellisonbg | @JasonWeill | @welcome
- use --force-publish option for lerna version #49 (@dlqqq)
- Move magics to `jupyter-ai-magics` package #48 (@dlqqq)
- Chat backend #40 (@3coins)
- Documents changes while server is running #33 (@JasonWeill)
- Implement chat UI #25 (@dlqqq)
- Documents changes while server is running #33 (@JasonWeill)
(GitHub contributors page for this release)
@3coins | @dlqqq | @ellisonbg | @JasonWeill | @welcome
- Various magic enhancements and fixes #32 (@dlqqq)
- Magic tweaks #31 (@dlqqq)
- Add magics example notebook #30 (@dlqqq)
- Removes docs about dialog, replaces with magics #29 (@JasonWeill)
- Update README.md #24 (@JasonWeill)
- Use new provider interface in magics #23 (@dlqqq)
- Initial docs #22 (@JasonWeill)
- Various magic enhancements and fixes #32 (@dlqqq)
- Update config.example.py #26 (@JasonWeill)
- Add magics example notebook #30 (@dlqqq)
- Removes docs about dialog, replaces with magics #29 (@JasonWeill)
- Update README.md #24 (@JasonWeill)
- Initial docs #22 (@JasonWeill)
(GitHub contributors page for this release)
@dlqqq | @JasonWeill | @welcome
- implement IPython magics #18 (@dlqqq)
- add tasks for AI modules #16 (@dlqqq)
- Decouple tasks from model engines and introduce modalities #15 (@dlqqq)
(GitHub contributors page for this release)
(GitHub contributors page for this release)
- bump all project versions in bump-version #10 (@dlqqq)
- fix insert-below-in-image insertion mode #9 (@dlqqq)
(GitHub contributors page for this release)
- rename NPM packages to be under @jupyter-ai org #7 (@dlqqq)
- disable check-release for PRs #6 (@dlqqq)
- Set up releaser configuration #3 (@dlqqq)
- Improve development setup #1 (@dlqqq)