
fix(deps): update machine-learning #9304

Merged
merged 3 commits into from
May 14, 2024

Conversation

renovate[bot]
Contributor

@renovate renovate bot commented May 7, 2024

Mend Renovate

This PR contains the following updates:

Package          Change
---------------  ------------------
fastapi          0.110.3 -> 0.111.0
huggingface-hub  0.22.2 -> 0.23.0

Warning

Some dependencies could not be looked up. Check the Dependency Dashboard for more information.


Release Notes

tiangolo/fastapi (fastapi)

v0.111.0

Compare Source

Features

Try it out with:

$ pip install --upgrade fastapi

$ fastapi dev main.py

 ╭────────── FastAPI CLI - Development mode ───────────╮
 │                                                     │
 │  Serving at: http://127.0.0.1:8000                  │
 │                                                     │
 │  API docs: http://127.0.0.1:8000/docs               │
 │                                                     │
 │  Running in development mode, for production use:   │
 │                                                     │
 │  fastapi run                                        │
 │                                                     │
 ╰─────────────────────────────────────────────────────╯

INFO:     Will watch for changes in these directories: ['/home/user/code/awesomeapp']
INFO:     Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO:     Started reloader process [2248755] using WatchFiles
INFO:     Started server process [2248757]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
Refactors
  • 🔧 Add configs and setup for fastapi-slim including optional extras fastapi-slim[standard], and fastapi including by default the same standard extras. PR #​11503 by @​tiangolo.
huggingface/huggingface_hub (huggingface-hub)

v0.23.0: LLMs with tools, seamless downloads, and much more!

Compare Source

📁 Seamless download to local dir

The 0.23.0 release comes with a big revamp of the download process, especially when downloading to a local directory. Previously, the process still involved the cache directory and symlinks, which led to misconceptions and a suboptimal user experience. The new workflow uses a .cache/huggingface/ folder inside the local directory, similar to the .git/ one, to keep track of download progress. The main features are:

  • no symlinks
  • no local copy
  • don't re-download when not necessary
  • same behavior on both Unix and Windows
  • unrelated to cache-system

Example: download the q4 GGUF file from microsoft/Phi-3-mini-4k-instruct-gguf:

# Download the q4 GGUF file to a local directory
huggingface-cli download microsoft/Phi-3-mini-4k-instruct-gguf Phi-3-mini-4k-instruct-q4.gguf --local-dir=data/phi3

With this addition, interrupted downloads are now resumable! This applies to downloads in both local and cache directories, and should greatly improve the experience for users with slow or unreliable connections. Accordingly, the resume_download parameter is now deprecated (it is no longer relevant).

💡 Grammar and Tools in InferenceClient

It is now possible to provide a list of tools when chatting with a model using the InferenceClient! This major improvement has been made possible thanks to TGI, which handles them natively.

>>> from huggingface_hub import InferenceClient

# Ask for weather in the next days using tools
>>> client = InferenceClient("meta-llama/Meta-Llama-3-70B-Instruct")
>>> messages = [
...     {"role": "system", "content": "Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous."},
...     {"role": "user", "content": "What's the weather like the next 3 days in San Francisco, CA?"},
... ]
>>> tools = [
...     {
...         "type": "function",
...         "function": {
...             "name": "get_current_weather",
...             "description": "Get the current weather",
...             "parameters": {
...                 "type": "object",
...                 "properties": {
...                     "location": {
...                         "type": "string",
...                         "description": "The city and state, e.g. San Francisco, CA",
...                     },
...                     "format": {
...                         "type": "string",
...                         "enum": ["celsius", "fahrenheit"],
...                         "description": "The temperature unit to use. Infer this from the users location.",
...                     },
...                 },
...                 "required": ["location", "format"],
...             },
...         },
...     },
...     ...
... ]
>>> response = client.chat_completion(
...     model="meta-llama/Meta-Llama-3-70B-Instruct",
...     messages=messages,
...     tools=tools,
...     tool_choice="auto",
...     max_tokens=500,
... )
>>> response.choices[0].message.tool_calls[0].function
ChatCompletionOutputFunctionDefinition(
    arguments={
        'location': 'San Francisco, CA',
        'format': 'fahrenheit',
        'num_days': 3
    },
    name='get_n_day_weather_forecast',
    description=None
)

It is also possible to provide grammar rules to the text_generation task. This ensures that the output follows a precise JSON Schema specification or matches a regular expression. For more details about it, check out the Guidance guide from Text-Generation-Inference docs.
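As an illustration, such a grammar pairs a grammar type with its specification. The weather schema below is invented for this example, and the `{"type": ..., "value": ...}` dict shape mirrors the payload described in the TGI Guidance docs; check the huggingface_hub reference for the exact parameter type:

```python
# Hypothetical JSON Schema the model output must conform to.
weather_schema = {
    "type": "object",
    "properties": {
        "location": {"type": "string"},
        "temperature": {"type": "number"},
        "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
    },
    "required": ["location", "temperature"],
}

# A "json" grammar constrains generation to match the schema;
# a "regex" grammar would carry a regular expression instead.
grammar = {"type": "json", "value": weather_schema}

# The payload would then be passed along the lines of:
#   client.text_generation(prompt, grammar=grammar)
```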

  • Add support for Grammar/Tools + TGI-based specs in InferenceClient by @​Wauplin in #​2237
⚙️ Other

The documentation now mentions the chat-completion task rather than the conversational one.

  • Add chat_completion and remove conversational from Inference guide by @​Wauplin in #​2215

chat-completion now relies on server-side rendering in all cases, including when the model is transformers-backed. Previously this was only the case for TGI-backed models, and templates were rendered client-side otherwise.

Improved logic to determine whether a model is served via TGI or transformers.

🌐 📚 Korean community is on fire!

The PseudoLab team is a non-profit dedicated to making AI more accessible to the Korean-speaking community. In the past few weeks, their team of contributors managed to translate (almost) the entire huggingface_hub documentation. Huge shout-out for the coordination on this task! The documentation can be accessed here.

🛠️ Misc improvements

User API

@​bilgehanertan added support for 2 new routes:

  • get_user_overview to retrieve high-level information about a user: username, avatar, number of models/datasets/Spaces, number of likes and upvotes, number of interactions in discussion, etc.
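Under the hood this kind of method wraps a plain REST route. As a sketch (the exact endpoint path is an assumption, and `user_overview_url` is an illustrative helper, not part of the huggingface_hub API):

```python
def user_overview_url(username: str, endpoint: str = "https://huggingface.co") -> str:
    """Build the URL for the user-overview route (illustrative only;
    the path is assumed, not taken from the huggingface_hub source)."""
    return f"{endpoint}/api/users/{username}/overview"

# The JSON returned by a GET on this URL is what get_user_overview would
# parse into username, avatar, model/dataset/Space counts, likes, etc.
```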
CLI tag

@​bilgehanertan added a new command to the CLI to handle tags. It is now possible to:

  • tag a repo
>>> huggingface-cli tag Wauplin/my-cool-model v1.0
You are about to create tag v1.0 on model Wauplin/my-cool-model
Tag v1.0 created on Wauplin/my-cool-model
  • retrieve the list of tags for a repo
>>> huggingface-cli tag Wauplin/gradio-space-ci -l --repo-type space
Tags for space Wauplin/gradio-space-ci:
0.2.2
0.2.1
0.2.0
0.1.2
0.0.2
0.0.1
  • delete a tag on a repo
>>> huggingface-cli tag -d Wauplin/my-cool-model v1.0
You are about to delete tag v1.0 on model Wauplin/my-cool-model
Proceed? [Y/n] y
Tag v1.0 deleted on Wauplin/my-cool-model

For more details, check out the CLI guide.

🧩 ModelHubMixin

The ModelHubMixin got a set of nice improvements to model card generation and handling of custom data types in the config.json file. More info in the integration guide.

⚙️ Other

In a shared environment, it is now possible to set the HF_TOKEN_PATH environment variable to a custom path so that each user of the cluster has their own access token.
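For example, on a shared cluster each user could point the variable at a token file in their own home directory (the path below is illustrative, not a required location):

```shell
# Per-user token file instead of the shared default location
export HF_TOKEN_PATH="$HOME/.config/huggingface/token"
```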

Thanks to @​Y4suyuki and @​lappemic, most custom errors defined in huggingface_hub are now aggregated in the same module. This makes it very easy to import them via from huggingface_hub.errors import ....
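The underlying pattern is a single errors module from which everything is importable, with specific errors deriving from a common base. A stand-in sketch of that structure (class names here are illustrative, not the exact huggingface_hub hierarchy):

```python
# One module defines the base class and every specific error,
# so callers need a single import point for all of them.
class HubError(Exception):
    """Base class for all hub-related errors (stand-in name)."""

class RepoNotFoundError(HubError):
    """Raised when a repo id cannot be resolved (stand-in name)."""

class RevisionNotFoundError(HubError):
    """Raised when a requested git revision does not exist (stand-in name)."""

def classify(exc: Exception) -> str:
    """Catching the base class handles any hub error uniformly."""
    return "hub" if isinstance(exc, HubError) else "other"
```

Catching `HubError` then covers every specific error, while narrow handlers can still target individual classes.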

Fixed HFSummaryWriter (class to seamlessly log tensorboard events to the Hub) to work with either tensorboardX or torch.utils implementation, depending on the user setup.

Listing files with HfFileSystem is now drastically faster, thanks to @​awgr. The values returned from the cache are no longer deep-copied, which was the part taking the most time in the process. Users who want to modify values returned by HfFileSystem must now copy them beforehand. This is expected to be a very limited drawback.

Progress bars in huggingface_hub got some flexibility!
It is now possible to provide a name to a tqdm bar (similar to logging.getLogger) and to enable/disable only some progress bars. More details in this guide.

>>> from huggingface_hub.utils import tqdm, disable_progress_bars
>>> disable_progress_bars("peft.foo")

# No progress bars for `peft.foo.bar` (disabled via `peft.foo`)
>>> for _ in tqdm(range(5), name="peft.foo.bar"):
...     pass

# But bars named `peft` still display
>>> for _ in tqdm(range(5), name="peft"):
...     pass
100%|█████████████████| 5/5 [00:00<00:00, 117817.53it/s]

💔 Breaking changes

--local-dir-use-symlinks and --resume-download

As part of the download process revamp, some breaking changes have been introduced. However, we believe the benefits outweigh the cost of the change. The breaking changes are:

  • a .cache/huggingface/ folder is now present at the root of the local dir. It only contains file locks, metadata, and partially downloaded files. You can safely delete this folder without corrupting the data inside the root folder, but you should expect a longer recovery time if you re-run your download command.
  • --local-dir-use-symlinks is no longer used and will be ignored. It is no longer possible to symlink your local dir to the cache directory. Thanks to the .cache/huggingface/ folder, it shouldn't be needed anyway.
  • --resume-download has been deprecated and will be ignored. Resuming failed downloads is now always enabled by default. If you need to force a new download, use --force-download.
Inference Types

As part of #​2237 (Grammar and Tools support), we've updated the return values of InferenceClient.chat_completion and InferenceClient.text_generation to exactly match TGI's output. The attributes of the returned objects did not change, but the class definitions themselves did. Expect errors if your code previously used from huggingface_hub import TextGenerationOutput. This is not the common usage, however, since those objects are instantiated by huggingface_hub directly.

Expected breaking changes

Some other breaking changes were expected (and announced since 0.19.x):

  • list_files_info is permanently removed in favor of get_paths_info and list_repo_tree
  • WebhookServer.run is permanently removed in favor of WebhookServer.launch
  • the api_endpoint parameter of ModelHubMixin's push_to_hub method is permanently removed in favor of the HF_ENDPOINT environment variable

Check #​2156 for more details.

Small fixes and maintenance

⚙️ CI optimization
⚙️ fixes
⚙️ internal

Significant community contributions

The following contributors have made significant changes to the library over the last release:

  • @​lappemic
    • Fix Typos in CONTRIBUTION.md and Formatting in README.md (#​2201)
    • Define errors in errors file (#​2202)
    • [wip] Implement hierarchical progress bar control in huggingface_hub (#​2217)
    • Update harmonized token param desc and type def (#​2252)
  • @​bilgehanertan
  • @​cjfghk5697
    • 🌐 [i18n-KO] Translated guides/repository.md to Korean (#​2124)
    • 🌐 [i18n-KO] Translated package_reference/inference_client.md to Korean (#​2178)
    • 🌐 [i18n-KO] Translated package_reference/utilities.md to Korean (#​2196)
  • @​SeungAhSon
    • 🌐 [i18n-KO] Translated guides/model_cards.md to Korean (#​2128)
    • 🌐 [i18n-KO] Translated reference/login.md to Korean (#​2151)
    • 🌐 [i18n-KO] Translated package_reference/hf_file_system.md to Korean (#​2174)
  • @​seoulsky-field
    • 🌐 [i18n-KO] Translated guides/community.md to Korean (#​2126)
  • @​Y4suyuki
  • @​harheem
    • 🌐 [i18n-KO] Translated guides/cli.md to Korean (#​2131)
    • 🌐 [i18n-KO] Translated reference/inference_endpoints.md to Korean (#​2180)
  • @​seoyoung-3060
    • 🌐 [i18n-KO] Translated guides/search.md to Korean (#​2134)
    • 🌐 [i18n-KO] Translated package_reference/file_download.md to Korean (#​2184)
    • 🌐 [i18n-KO] Translated package_reference/serialization.md to Korean (#​2233)
  • @​boyunJang
    • 🌐 [i18n-KO] Translated guides/inference.md to Korean (#​2130)
    • 🌐 [i18n-KO] Translated package_reference/collections.md to Korean (#​2214)
    • 🌐 [i18n-KO] Translated package_reference/space_runtime.md to Korean (#​2213)
    • 🌐 [i18n-KO] Translated guides/manage-spaces.md to Korean (#​2220)
  • @​nuatmochoi
    • 🌐 [i18n-KO] Translated guides/webhooks_server.md to Korean (#​2145)
    • 🌐 [i18n-KO] Translated package_reference/cache.md to Korean (#​2191)
  • @​fabxoe
    • 🌐 [i18n-KO] Translated package_reference/tensorboard.md to Korean (#​2173)
    • 🌐 [i18n-KO] Translated package_reference/inference_types.md to Korean (#​2171)
    • 🌐 [i18n-KO] Translated package_reference/hf_api.md to Korean (#​2165)
    • 🌐 [i18n-KO] Translated package_reference/mixins.md to Korean (#​2166)
  • @​junejae
    • 🌐 [i18n-KO] Translated guides/upload.md to Korean (#​2139)
    • 🌐 [i18n-KO] Translated reference/repository.md to Korean (#​2189)
  • @​heuristicwave
    • 🌐 [i18n-KO] Translating guides/hf_file_system.md to Korean (#​2146)
  • @​usr-bin-ksh
    • 🌐 [i18n-KO] Translated guides/inference_endpoints.md to Korean (#​2164)

Configuration

📅 Schedule: Branch creation - "on tuesday" (UTC), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.


  • If you want to rebase/retry this PR, check this box

This PR has been generated by Mend Renovate. View repository job log here.

@renovate renovate bot requested a review from mertalev as a code owner May 7, 2024 14:54
@renovate renovate bot added dependencies Pull requests that update a dependency file renovate labels May 7, 2024

cloudflare-workers-and-pages bot commented May 7, 2024

Deploying immich with Cloudflare Pages

Latest commit: 9293bbe
Status: ✅  Deploy successful!
Preview URL: https://826ba549.immich.pages.dev
Branch Preview URL: https://renovate-machine-learning.immich.pages.dev

View logs

@renovate renovate bot force-pushed the renovate/machine-learning branch from f29ecad to f980312 on May 8, 2024 01:07
@renovate renovate bot changed the title from fix(deps): update dependency huggingface-hub to v0.23.0 to fix(deps): update machine-learning on May 8, 2024
Contributor Author

renovate bot commented May 8, 2024

Edited/Blocked Notification

Renovate will not automatically rebase this PR, because it does not recognize the last commit author and assumes somebody else may have edited the PR.

You can manually request rebase by checking the rebase/retry box above.

⚠️ Warning: custom changes will be lost.

@mertalev
Contributor

mertalev commented May 8, 2024

Why are the linux builds for onnxruntime-openvino getting removed from the lock? 🤔

@mertalev mertalev enabled auto-merge (squash) May 14, 2024 14:42
@mertalev mertalev merged commit 09e9e91 into main May 14, 2024
23 checks passed
@mertalev mertalev deleted the renovate/machine-learning branch May 14, 2024 14:46