
build(deps): bump the pip group across 2 directories with 3 updates #1

Open · wants to merge 1 commit into base: main

Conversation

dependabot[bot]

@dependabot dependabot bot commented on behalf of github Aug 18, 2024

Bumps the pip group with 3 updates in the / directory: streamlit, llama-index and llama-index-core.
Bumps the pip group with 3 updates in the /dev/recursive_retrieval directory: streamlit, llama-index and llama-index-core.
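The three bumps can be sanity-checked mechanically. A minimal stdlib sketch (package names and versions taken from this PR; it assumes plain dotted numeric versions, which all six pins here are) that verifies each new version is strictly newer:

```python
def parse_version(v: str) -> tuple[int, ...]:
    # "1.37.0" -> (1, 37, 0); assumes plain dotted numeric versions.
    return tuple(int(part) for part in v.split("."))

# (old, new) pairs taken from this PR's description.
bumps = {
    "streamlit": ("1.31.1", "1.37.0"),
    "llama-index": ("0.10.11", "0.10.13"),
    "llama-index-core": ("0.10.12", "0.10.24"),
}

for name, (old, new) in bumps.items():
    assert parse_version(new) > parse_version(old), name
    print(f"{name}: {old} -> {new}")
```

Comparing integer tuples rather than strings matters for versions like llama-index-core's: lexically, "0.10.24" would sort before "0.10.3".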

Updates streamlit from 1.31.1 to 1.37.0

Release notes

Sourced from streamlit's releases.

1.37.0

What's Changed

New Features 🎉

Bug Fixes 🐛

Other Changes

New Contributors

Full Changelog: streamlit/streamlit@1.36.0...1.37.0

1.36.0

What's Changed

... (truncated)

Commits
  • e2c3c93 Up version to 1.37.0
  • 88389e3 Docstrings for 1.37.0 (#9115)
  • 898fd80 Temp solution to fix invalid material icon error rendering (#9113)
  • b2c88c6 Reset ctx.current_fragment_id to last ID instead of None (#9114)
  • 3a63985 Validate the path using Tornado before performing checks (#8990)
  • 40303e1 Move the filled star icon for feedback widget from python code to web app (#9...
  • 6296baf Update the feedback widget design (#9094)
  • b9c3521 Fixes two st.map width bugs (#9070)
  • a2ae47a Only expose selected objects in components module (#8873)
  • 340f3f7 De-experimentalize st.dialog (#9020)
  • Additional commits viewable in compare view

Updates llama-index from 0.10.11 to 0.10.13

Release notes

Sourced from llama-index's releases.

v0.10.13

New Features

  • Added a llama-pack for KodaRetriever, for on-the-fly alpha tuning (#11311)
  • Added support for mistral-large (#11398)
  • Last token pooling mode for huggingface embeddings models like SFR-Embedding-Mistral (#11373)
  • Added fsspec support to SimpleDirectoryReader (#11303)
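One item above, last-token pooling (#11373), can be sketched conceptually in plain Python. This is only an illustration of the pooling idea, not the huggingface embeddings code, which operates on batched tensors:

```python
def last_token_pool(hidden_states, attention_mask):
    """Pick the hidden vector of the last real (non-padding) token per sequence.

    hidden_states: per-sequence lists of token vectors.
    attention_mask: per-sequence lists of 0/1, where 1 marks a real token.
    Conceptual sketch only; embedding models do this on tensors.
    """
    pooled = []
    for vectors, mask in zip(hidden_states, attention_mask):
        last_index = max(i for i, m in enumerate(mask) if m == 1)
        pooled.append(vectors[last_index])
    return pooled

# Two sequences of 2-d token vectors; the second has one padding slot.
states = [[[1.0, 0.0], [2.0, 1.0]], [[0.5, 0.5], [0.0, 0.0]]]
mask = [[1, 1], [1, 0]]
print(last_token_pool(states, mask))
```

Models like SFR-Embedding-Mistral use this because, for decoder-only LLMs, the last token's hidden state summarizes the whole sequence.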

Bug Fixes / Nits

  • Fixed an issue with context window + prompt helper (#11379)
  • Moved OpenSearch vector store to BasePydanticVectorStore (#11400)
  • Fixed function calling in fireworks LLM (#11363)
  • Made cohere embedding types more automatic (#11288)
  • Improve function calling in react agent (#11280)
  • Fixed MockLLM imports (#11376)

v0.10.12

No release notes provided.

Changelog

Sourced from llama-index's changelog.

[0.10.13] - 2024-02-26

New Features

  • Added a llama-pack for KodaRetriever, for on-the-fly alpha tuning (#11311)
  • Added support for mistral-large (#11398)
  • Last token pooling mode for huggingface embeddings models like SFR-Embedding-Mistral (#11373)
  • Added fsspec support to SimpleDirectoryReader (#11303)

Bug Fixes / Nits

  • Fixed an issue with context window + prompt helper (#11379)
  • Moved OpenSearch vector store to BasePydanticVectorStore (#11400)
  • Fixed function calling in fireworks LLM (#11363)
  • Made cohere embedding types more automatic (#11288)
  • Improve function calling in react agent (#11280)
  • Fixed MockLLM imports (#11376)

[0.10.12] - 2024-02-22

New Features

  • Added llama-index-postprocessor-colbert-rerank package (#11057)
  • MyMagicAI LLM (#11263)
  • MariaTalk LLM (#10925)
  • Add retries to github reader (#10980)
  • Added FireworksAI embedding and LLM modules (#10959)

Bug Fixes / Nits

  • Fixed string formatting in weaviate (#11294)
  • Fixed off-by-one error in semantic splitter (#11295)
  • Fixed download_llama_pack for multiple files (#11272)
  • Removed BUILD files from packages (#11267)
  • Loosened python version reqs for all packages (#11267)
  • Fixed args issue with chromadb (#11104)

Commits
  • 6d642a0 Logan/release v0.10.13 (#11408)
  • 78a4c9e fix prompt helper init (#11379)
  • 52383c7 Update opensearch vectorstore to PydanticVectorStore class (#11400)
  • 4077fee Astra DB Vector store, package rename for naming consistency (#11056)
  • 65290f5 Alpha Tuning Llama Pack: KodaRetriever (#11311)
  • 70d4a5c Only firefunction is function calling (#11363)
  • 3a10235 Elastic Search retrieval : Bug Fix for Cases when No Relationships Detected (...
  • 0b13b8d Add support for mistral-large (#11398)
  • 1f48dd9 Astra DB clients identify themselves as coming through LlamaIndex usage (#11396)
  • 6024956 Last token pooling for Huggingface models like SFR-Embedding-Mistral (#11373)
  • Additional commits viewable in compare view

Updates llama-index-core from 0.10.12 to 0.10.24

Release notes

Sourced from llama-index-core's releases.

v0.10.24

No release notes provided.

v0.10.23

No release notes provided.

v0.10.22

No release notes provided.

v0.10.20

No release notes provided.

v0.10.19

llama-index-cli [0.1.9]

  • Removed chroma as a bundled dep to reduce llama-index deps

llama-index-core [0.10.19]

  • Introduce retries for rate limits in OpenAI llm class (#11867)
  • Added table comments to SQL table schemas in SQLDatabase (#11774)
  • Added LogProb type to ChatResponse object (#11795)
  • Introduced LabelledSimpleDataset (#11805)
  • Fixed insert IndexNode objects with unserializable objects (#11836)
  • Fixed stream chat type error when writing response to history in CondenseQuestionChatEngine (#11856)
  • Improve post-processing for json query engine (#11862)
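The rate-limit retries noted for the OpenAI llm class (#11867) follow the standard exponential-backoff pattern. A generic stdlib sketch of that pattern (not llama-index's actual implementation; RateLimitError here is a stand-in exception):

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for a provider's HTTP 429 rate-limit exception."""

def with_retries(fn, max_retries=3, base_delay=1.0):
    # Retry fn on rate-limit errors, sleeping with exponential backoff
    # plus a small random jitter; re-raise after the final attempt.
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries:
                raise
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```

The jitter spreads out retries from concurrent clients so they do not all hit the rate limit again at the same instant.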

llama-index-embeddings-cohere [0.1.4]

  • Fixed async kwarg error (#11822)

llama-index-embeddings-dashscope [0.1.2]

  • Fixed pydantic import (#11765)

llama-index-graph-stores-neo4j [0.1.3]

  • Properly close connection after verifying connectivity (#11821)

llama-index-llms-cohere [0.1.3]

  • Add support for new command-r model (#11852)

llama-index-llms-huggingface [0.1.4]

  • Fixed streaming decoding with special tokens (#11807)

llama-index-llms-mistralai [0.1.5]

  • Added support for latest and open models (#11792)

... (truncated)

Changelog

Sourced from llama-index-core's changelog.

llama-index-core [0.10.24]

  • pretty prints in LlamaDebugHandler (#12216)
  • stricter interpreter constraints on pandas query engine (#12278)
  • PandasQueryEngine can now execute 'pd.*' functions (#12240)
  • delete proper metadata in docstore delete function (#12276)
  • improved openai agent parsing function hook (#12062)
  • add raise_on_error flag for SimpleDirectoryReader (#12263)
  • remove un-caught openai import in core (#12262)
  • Fix download_llama_dataset and download_llama_pack (#12273)
  • Implement EvalQueryEngineTool (#11679)
  • Expand instrumentation Span coverage for AgentRunner (#12249)
  • Adding concept of function calling agent/llm (mistral supported for now) (#12222)
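The raise_on_error flag added to SimpleDirectoryReader (#12263) reflects a common reader design choice: skip unreadable files by default, fail fast on request. A generic stdlib sketch of the pattern (the read_files helper below is hypothetical, not llama-index code):

```python
from pathlib import Path

def read_files(directory, raise_on_error=False):
    """Read all text files directly under directory.

    Generic sketch of the raise_on_error pattern: by default,
    unreadable entries are silently skipped; with raise_on_error=True
    the first failure propagates to the caller.
    """
    docs = []
    for path in sorted(Path(directory).glob("*")):
        try:
            docs.append(path.read_text())
        except (OSError, UnicodeDecodeError):
            if raise_on_error:
                raise
    return docs
```

Defaulting to skip keeps bulk ingestion resilient to one bad file, while the flag lets pipelines that need completeness surface the error instead.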

llama-index-embeddings-huggingface [0.2.0]

  • Use sentence-transformers as a backend (#12277)

llama-index-postprocessor-voyageai-rerank [0.1.0]

  • Added voyageai as a reranker (#12111)

llama-index-readers-gcs [0.1.0]

  • Added google cloud storage reader (#12259)

llama-index-readers-google [0.2.1]

  • Support for different drives (#12146)
  • Remove unnecessary PyDrive dependency from Google Drive Reader (#12257)

llama-index-readers-readme [0.1.0]

  • added readme.com reader (#12246)

llama-index-packs-raft [0.1.3]

  • added pack for RAFT (#12275)

[2024-03-23]

llama-index-core [0.10.23]

  • Added (a)predict_and_call() function to base LLM class + openai + mistralai (#12188)
  • fixed bug with wait() in async agent streaming (#12187)

llama-index-embeddings-alephalpha [0.1.0]

  • Added alephalpha embeddings (#12149)

... (truncated)

Commits


Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore <dependency name> major version will close this group update PR and stop Dependabot creating any more for the specific dependency's major version (unless you unignore this specific dependency's major version or upgrade to it yourself)
  • @dependabot ignore <dependency name> minor version will close this group update PR and stop Dependabot creating any more for the specific dependency's minor version (unless you unignore this specific dependency's minor version or upgrade to it yourself)
  • @dependabot ignore <dependency name> will close this group update PR and stop Dependabot creating any more for the specific dependency (unless you unignore this specific dependency or upgrade to it yourself)
  • @dependabot unignore <dependency name> will remove all of the ignore conditions of the specified dependency
  • @dependabot unignore <dependency name> <ignore condition> will remove the ignore condition of the specified dependency and ignore conditions

You can disable automated security fix PRs for this repo from the Security Alerts page.

Summary by CodeRabbit

  • New Features

    • Upgraded streamlit to version 1.37.0, enhancing features and performance.
    • Updated llama-index to version 0.10.13 and llama-index-core to 0.10.24, introducing stability and functionality improvements.
  • Bug Fixes

    • The updates may include essential bug fixes from the newer library versions, improving overall application reliability.

Bumps the pip group with 3 updates in the / directory: [streamlit](https://github.com/streamlit/streamlit), [llama-index](https://github.com/run-llama/llama_index) and [llama-index-core](https://github.com/run-llama/llama_index).
Bumps the pip group with 3 updates in the /dev/recursive_retrieval directory: [streamlit](https://github.com/streamlit/streamlit), [llama-index](https://github.com/run-llama/llama_index) and [llama-index-core](https://github.com/run-llama/llama_index).


Updates `streamlit` from 1.31.1 to 1.37.0
- [Release notes](https://github.com/streamlit/streamlit/releases)
- [Commits](streamlit/streamlit@1.31.1...1.37.0)

Updates `llama-index` from 0.10.11 to 0.10.13
- [Release notes](https://github.com/run-llama/llama_index/releases)
- [Changelog](https://github.com/run-llama/llama_index/blob/main/CHANGELOG.md)
- [Commits](run-llama/llama_index@v0.10.11...v0.10.13)

Updates `llama-index-core` from 0.10.12 to 0.10.24
- [Release notes](https://github.com/run-llama/llama_index/releases)
- [Changelog](https://github.com/run-llama/llama_index/blob/main/CHANGELOG.md)
- [Commits](run-llama/llama_index@v0.10.12...v0.10.24)

Updates `streamlit` from 1.31.1 to 1.37.0
- [Release notes](https://github.com/streamlit/streamlit/releases)
- [Commits](streamlit/streamlit@1.31.1...1.37.0)

Updates `llama-index` from 0.10.11 to 0.10.13
- [Release notes](https://github.com/run-llama/llama_index/releases)
- [Changelog](https://github.com/run-llama/llama_index/blob/main/CHANGELOG.md)
- [Commits](run-llama/llama_index@v0.10.11...v0.10.13)

Updates `llama-index-core` from 0.10.12 to 0.10.24
- [Release notes](https://github.com/run-llama/llama_index/releases)
- [Changelog](https://github.com/run-llama/llama_index/blob/main/CHANGELOG.md)
- [Commits](run-llama/llama_index@v0.10.12...v0.10.24)

---
updated-dependencies:
- dependency-name: streamlit
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: llama-index
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: llama-index-core
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: streamlit
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: llama-index
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: llama-index-core
  dependency-type: direct:production
  dependency-group: pip
...

Signed-off-by: dependabot[bot] <support@github.com>
@dependabot dependabot bot added the "dependencies" label (Pull requests that update a dependency file) on Aug 18, 2024

coderabbitai bot commented Aug 18, 2024

Walkthrough

This PR upgrades pinned dependencies in requirements.txt, dev/recursive_retrieval/requirements.txt, and pyproject.toml: streamlit from 1.31.1 to 1.37.0, llama-index from 0.10.11 to 0.10.13, and llama-index-core from 0.10.12 to 0.10.24. The changes pull in the newer releases' features and bug fixes without touching any other dependencies or application code.

Changes

Files and change summary:

  • dev/.../requirements.txt: Upgraded streamlit from 1.31.1 to 1.37.0, llama-index from 0.10.11 to 0.10.13, and llama-index-core from 0.10.12 to 0.10.24.
  • pyproject.toml: Upgraded streamlit from 1.31.1 to 1.37.0, llama-index from 0.10.11 to 0.10.13, and llama-index-core from 0.10.12 to 0.10.24.
  • .../requirements.txt: Upgraded streamlit from 1.31.1 to 1.37.0, llama-index from 0.10.11 to 0.10.13, and llama-index-core from 0.10.12 to 0.10.24.
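Assuming the requirements files use plain `==` pins (the exact pin syntax is not shown in this view of the PR), the upgraded lines in each requirements file would read:

```
streamlit==1.37.0
llama-index==0.10.13
llama-index-core==0.10.24
```

pyproject.toml would carry the same three versions in whatever constraint style the project already uses.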

Poem

In the garden of code, I hop and play,
New versions sprout, brightening the day.
Streamlit shines, and llamas prance,
With every update, we take a chance.
Bugs vanish like dew in the morn,
In this code rabbit's world, a new dawn is born! 🐇✨


Thank you for using CodeRabbit. We offer it for free to the OSS community and would appreciate your support in helping us grow. If you find it useful, would you consider giving us a shout-out on your favorite social media?

Share
Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>.
    • Generate unit testing code for this file.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai generate unit testing code for this file.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai generate interesting stats about this repository and render them as a table.
    • @coderabbitai show all the console.log statements in this repository.
    • @coderabbitai read src/utils.ts and generate unit testing code.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (invoked as PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Additionally, you can add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json
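Concretely, the schema hint mentioned above is a single comment at the top of the file. The schema URL is taken verbatim from the note; any configuration keys below it depend on the CodeRabbit configuration documentation:

```yaml
# yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json
```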

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR, between commits 276ca06 and a410a69.

Files selected for processing (3)
  • dev/recursive_retrieval/requirements.txt (1 hunks)
  • pyproject.toml (1 hunks)
  • requirements.txt (1 hunks)
Additional comments not posted (9)
dev/recursive_retrieval/requirements.txt (3)

4-4: Upgrade Streamlit to 1.37.0.

The update from 1.31.1 to 1.37.0 includes new features and bug fixes. Ensure compatibility with existing code.


5-5: Upgrade Llama-index to 0.10.13.

The update from 0.10.11 to 0.10.13 includes enhancements and bug fixes. Verify that these changes do not introduce breaking changes in your application.


7-7: Upgrade Llama-index-core to 0.10.24.

The significant update from 0.10.12 to 0.10.24 suggests important changes. Ensure all functionalities relying on this library are tested for compatibility.

requirements.txt (3)

4-4: Upgrade Streamlit to 1.37.0.

This update aligns with the changes in dev/recursive_retrieval/requirements.txt. Ensure consistent application behavior across environments.


5-5: Upgrade Llama-index to 0.10.13.

Consistent with the update in dev/recursive_retrieval/requirements.txt. Verify compatibility with your application's functionality.


7-7: Upgrade Llama-index-core to 0.10.24.

Ensure that this significant version bump does not introduce unexpected behavior in your application.

pyproject.toml (3)

25-25: Upgrade Streamlit to 1.37.0.

This update is consistent with the requirements.txt files. Ensure compatibility with all project components.


26-26: Upgrade Llama-index to 0.10.13.

The update aligns with the requirements.txt files. Test for any integration issues.


28-28: Upgrade Llama-index-core to 0.10.24.

This significant update is consistent with the requirements.txt files. Verify that all dependencies and integrations are functioning as expected.
