Releases: OpenInterpreter/open-interpreter
v0.1.10
Bug fixes; pinned LiteLLM to prevent a printed-stream issue.
What's Changed
- Fix "depracated" typo by @jamiew in #642
- Fix issue #635 by @leifktaylor in #643
- Fix typo in setup_text_llm.py by @eltociear in #632
- Fix indentation in language_map.py by @smwyzi in #648
New Contributors
- @jamiew made their first contribution in #642
- @leifktaylor made their first contribution in #643
- @smwyzi made their first contribution in #648
Full Changelog: v0.1.9...v0.1.10
v0.1.9
The (Mini) Hackathon Update
The Open Interpreter Hackathon is on. To make OI easier to build on, we've added some developer features, such as exposing Open Procedures via `interpreter.procedures`.
This lets you use RAG (retrieval-augmented generation) to teach Open Interpreter new things.
Learn more about these new developer features via this Colab Notebook.
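For example, here is a minimal sketch of the RAG hook, assuming `interpreter.procedures` accepts a plain list of strings (the exact schema is covered in the Colab notebook above):

```python
import interpreter

# Assumed shape: a list of short how-to documents. Relevant entries are
# retrieved and injected into context as the model works (RAG).
interpreter.procedures = [
    "To convert a video to a GIF, use ffmpeg with the -vf 'fps=10' filter.",
    "To resize images in bulk, use the Pillow library's Image.thumbnail().",
]

interpreter.chat("Turn demo.mp4 into a GIF.")
```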
Full Changelog: v0.1.8...v0.1.9
v0.1.8
The Local Update (Part I)
Open Interpreter's `--local` mode is now powered by Mistral 7B.
Significantly more architectures are supported locally via `ooba`, a headless Oobabooga wrapper.
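A minimal sketch of driving local mode from Python, assuming the `interpreter.local` attribute mirrors the `--local` CLI flag:

```python
import interpreter

# Assumed Python mirror of `interpreter --local`: routes inference through
# the local backend (Mistral 7B by default as of this release) instead of
# a hosted model.
interpreter.local = True
interpreter.chat("List the five largest files in this directory.")
```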
What's Changed
- Fix bug when trying to use local non-CodeLlama model by @alexweberk in #571
- Update README_ZH.md by @orangeZSCB in #563
- chore: update test suite by @ericrallen in #594
- Fixed a bug in setup_text_llm.py by @kylehh in #560
- feat: add %tokens magic command that counts tokens via tiktoken by @ericrallen in #607
- feat: add support for loading different config.yaml files by @ericrallen in #609
- feat: add optional prompt token/cost estimate to %tokens by @ericrallen in #614
- Added powershell language by @DaveChini in #620
- Local Update by @KillianLucas in #625
New Contributors
- @alexweberk made their first contribution in #571
- @orangeZSCB made their first contribution in #563
- @kylehh made their first contribution in #560
- @DaveChini made their first contribution in #620
Full Changelog: v0.1.7...v0.1.8
v0.1.7
Generator Update (Quick Fixes II)
Particularly for Windows users and the new `--config` flag.
We also added @ericrallen's `--scan` flag, but this is not its official release; we'll highlight it in a subsequent release.
What's Changed
- Skip wrap_in_trap on Windows by @goalkeepr in #548
- fix: allow args to have choices and defaults by @ericrallen in #511
- feat: add semgrep code scanning via -safe argument by @ericrallen in #484
- fix: stop overwriting safe_mode config.yaml setting with default in args by @ericrallen in #554
New Contributors
- @goalkeepr made their first contribution in #548
Full Changelog: v0.1.6...v0.1.7
v0.1.6
Generator Update (Quick Fixes I)
What's Changed
- fix: stop overwriting boolean config values by @ericrallen in #508
- Update WINDOWS.md by @rsfutch77 in #523
- Fix ARM64 llama-cpp-python Install on Apple Silicon by @gavinmclelland in #505
- Broken empty message response by @blujus in #501
- fix crash on unknown command on call to display help message by @mocy in #493
- Update get_relevant_procedures.py by @kubla in #492
New Contributors
- @ericrallen made their first contribution in #508
- @rsfutch77 made their first contribution in #523
- @gavinmclelland made their first contribution in #505
- @blujus made their first contribution in #501
- @mocy made their first contribution in #493
- @kubla made their first contribution in #492
Full Changelog: v0.1.5...v0.1.6
v0.1.5
The Generator Update
Features
- Modular, generator-based foundation (rewrote entire codebase)
- Significantly easier to build Open Interpreter into your applications via `interpreter.chat(message)` (see JARVIS for an example implementation, and the sketch after this list)
- Run `interpreter --config` to configure `interpreter` to run with any settings by default (set your default language model, system message, etc.)
- Run `interpreter --conversations` to resume conversations
- Budget manager (thank you LiteLLM!) via `interpreter --max_budget 0.1` (sets the max budget per session in USD)
- Change the system message, temperature, max_tokens, etc. from the command line
- Central `/conversations` folder for persistent memory
- New hosted language models (thank you LiteLLM!) like Claude, Google PaLM, Cohere, and more
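A minimal Python sketch of these knobs; `interpreter.chat(message)` is named above, while the `model`, `system_message`, and `max_budget` attributes are assumed to mirror the CLI flags and config keys:

```python
import interpreter

# Assumed Python-side mirrors of the CLI flags and config.yaml keys above.
interpreter.model = "gpt-3.5-turbo"          # default language model
interpreter.system_message += "\nPrefer Python over shell when possible."
interpreter.max_budget = 0.1                 # max spend per session, in USD

# Entry point named in the release notes; that it returns the message
# history is an assumption.
messages = interpreter.chat("What operating system am I on?")
```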
What's Changed
- Fix typo 'recieved' > 'received' by @merlinfrombelgium in #361
- Pull request template created by @TanmayDoesAI in #365
- docs: move pr template to .github folder by @jordanbtucker in #373
- chore: enhance .gitignore by @jordanbtucker in #374
- chore: add vscode debug support by @jordanbtucker in #375
- discard the / as command as it will block the Mac/Linux to load the file by @moming2k in #378
- Update interpreter.py for a typo error by @YUFEIFUT in #397
- Translated Open Interpreter README into Hindi by @zeelsheladiya in #417
- Add models to pull request template by @mak448a in #423
- Retry connecting to openai after hitting rate limit to fix #442 by @mathiasrw in #452
- Handle %load_message failure in interpreter.py by @richawo in #431
- add budget manager for api calls by @krrishdholakia in #316
- The Generator Update by @KillianLucas in #482
New Contributors
- @YUFEIFUT made their first contribution in #397
- @zeelsheladiya made their first contribution in #417
- @mak448a made their first contribution in #423
- @mathiasrw made their first contribution in #452
- @richawo made their first contribution in #431
- @krrishdholakia made their first contribution in #316
Full Changelog: v0.1.4...v0.1.5
v0.1.4
What's Changed
- Add support for R language by @freestatman in #249
- Feature: Implement and Document New Interactive Mode Commands by @moming2k in #302
- Remove previous message and its responses from chat history with Undo-command. by @oliverpalonkorp in #273
- Enable resume download from HF by @jerzydziewierz in #345
- ui: Optimize welcome message by @codeacme17 in #257
- feat: Add hints to Azure model by @codeacme17 in #237
- docs: Upgrade issue templates by @jordanbtucker in #262
- docs: Separate system versions into own fields by @jordanbtucker in #264
- Docs: use x64 in WINDOWS.md and GPU.md by @jordanbtucker in #287
- Fix using litellm.api_base, litellm.api_key, litellm.api_version by @ishaan-jaff in #284
- Fix typo. by @Michael-Lfx in #292
- fix(ui): Fix the display problem of welcome message by @codeacme17 in #270
- Docs: Add security policy by @jordanbtucker in #266
- Check disk space before downloading models by @michaelzdrav in #323
- Update GPU.md by @metantonio in #335
- remove duplicate import of inquirer library in get_hf_llm.py by @lalebot in #327
- fix: merge os.environ with llama install env_vars by @jordanbtucker in #338
- docs: move CONTRIBUTING to common path by @jordanbtucker in #350
- Fix minor typo by @osanseviero in #248
New Contributors
- @freestatman made their first contribution in #249
- @osanseviero made their first contribution in #248
- @okisdev made their first contribution in #253
- @codeacme17 made their first contribution in #257
- @gijigae made their first contribution in #282
- @Michael-Lfx made their first contribution in #292
- @jjolly made their first contribution in #278
- @michaelzdrav made their first contribution in #323
- @metantonio made their first contribution in #335
- @lalebot made their first contribution in #327
- @jerzydziewierz made their first contribution in #345
Full Changelog: v0.1.3...v0.1.4
v0.1.3
What's Changed
- Quick fix for `--model tiiuae/falcon-180B` (redirect to GGUF version).
- Quick fix for #247

Update pushed to `pip` with just the fixes above. After that, I merged this commit, which will be in the next `pip` version:
- Add support for R language, update instructions for package installation by @freestatman in #249
New Contributors
- @freestatman made their first contribution in #249
Full Changelog: v0.1.2...v0.1.3
v0.1.2
What's Changed
- docs: explain GPU support by @jordanbtucker in #102
- feat: add AZURE_API_KEY that falls back to OPENAI_API_KEY by @jordanbtucker in #135
- docs: explain Windows Code-Llama build requirements by @jordanbtucker in #138
- Created contribution guidelines by @TanmayDoesAI in #101
- docs: create issue templates by @jordanbtucker in #176
- moved all markdown files to a folder, updated the readme for the same… by @TanmayDoesAI in #182
- Fix download URL for CodeLlama 7B high quality model by @merlinfrombelgium in #181
- docs: add interpreter version to template by @jordanbtucker in #190
- docs: fix example version number for interpreter by @jordanbtucker in #191
- docs: add enhancement label to feature requests by @jordanbtucker in #192
- docs: prevent blank issues by @jordanbtucker in #195
- docs: provide issue template link by @jordanbtucker in #196
- Update README.md by @macterra in #197
- Create MACOS Documentation by @ihgalis in #177
- Add option to override Azure API type by @Taik in #189
- Feature: add cli environment variable by @moming2k in #157
- Update MACOS.md by @ihgalis in #215
- Falcon // Any 🤗 model via `--model meta/llama` by @KillianLucas in #213 (see the sketch after this list)
- Update contributing.md with instructions on how to get local fork running by @oliverpalonkorp in #235
- remove redundant checks for apple silicon by @shubhe25p in #230
- Fix GPT 3.5 from failing to run commands by @Maclean-D in #96
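A short sketch of the Hugging Face routing from #213, assuming `interpreter.model` is the Python mirror of the `--model` flag:

```python
import interpreter

# Assumed mirror of `interpreter --model meta/llama`: a Hugging Face repo
# id selects a locally run model rather than a hosted API.
interpreter.model = "meta/llama"
interpreter.chat("Summarize this repository's README.")
```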
New Contributors
- @jordanbtucker made their first contribution in #102
- @merlinfrombelgium made their first contribution in #181
- @macterra made their first contribution in #197
- @ihgalis made their first contribution in #177
- @Taik made their first contribution in #189
- @moming2k made their first contribution in #157
- @oliverpalonkorp made their first contribution in #235
- @shubhe25p made their first contribution in #230
- @Maclean-D made their first contribution in #96
Full Changelog: v0.1.1...v0.1.2
v0.1.1
What's Changed
- Added Azure support by @ifsheldon in #62
- CodeLlama improvements by @KillianLucas in #87
- Rate limit error fix
New Contributors
- @ifsheldon made their first contribution in #62
Full Changelog: v0.1.0...v0.1.1