
chore(deps): update dependency huggingface-hub to v0.25.1 #1189

Merged
1 commit merged into main from renovate/huggingface-hub-0.x on Oct 4, 2024

Conversation

renovate[bot] (Contributor) commented Oct 4, 2024

This PR contains the following updates:

| Package         | Change                 |
| --------------- | ---------------------- |
| huggingface-hub | `==0.24.6` -> `==0.25.1` |
| huggingface-hub | `==0.24.5` -> `==0.25.1` |

Release Notes

huggingface/huggingface_hub (huggingface-hub)

v0.25.1: Raise error if encountered in chat completion SSE stream

Compare Source

Full Changelog: v0.25.0...v0.25.1
For more details, refer to the related PR #2558.

v0.25.0: Large uploads made simple + quality of life improvements

Compare Source

📂 Upload large folders

Uploading large models or datasets is challenging. We've already written some tips and tricks to facilitate the process, but something was still missing. We are now glad to release the huggingface-cli upload-large-folder command. Consider it a "please upload this no matter what, and be quick" command. Unlike huggingface-cli upload, this new command is more opinionated and will split the upload into several commits. Multiple workers are started locally to hash, pre-upload, and commit the files in a way that is resumable, resilient to connection errors, and optimized against rate limits. This feature has been stress-tested by the community over the last few months to make it as easy and convenient to use as possible.

Here is how to use it:

huggingface-cli upload-large-folder <repo-id> <local-path> --repo-type=dataset

Every minute, a report is logged with the current status of the files and workers:

---------- 2024-04-26 16:24:25 (0:00:00) ----------
Files:   hashed 104/104 (22.5G/22.5G) | pre-uploaded: 0/42 (0.0/22.5G) | committed: 58/104 (24.9M/22.5G) | ignored: 0
Workers: hashing: 0 | get upload mode: 0 | pre-uploading: 6 | committing: 0 | waiting: 0
---------------------------------------------------

You can also run it from a script:

>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.upload_large_folder(
...     repo_id="HuggingFaceM4/Docmatix",
...     repo_type="dataset",
...     folder_path="/path/to/local/docmatix",
... )

For more details about the command options, run:

huggingface-cli upload-large-folder --help

or visit the upload guide.

✨ HfApi & CLI improvements

🔍 Search API

The search API has been updated. You can now list gated models and datasets, and filter models by their inference status (warm, cold, frozen).

More complete support for the expand[] parameter is also available.
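
For illustration, here is a minimal sketch of these filters in use; the gated, inference, and expand parameter names and values are taken from the notes above, and the exact accepted values may differ:

>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> # List gated models with a "warm" inference status and request extra fields via expand[]
>>> models = api.list_models(gated=True, inference="warm", expand=["downloads", "likes"], limit=5)
>>> for model in models:
...     print(model.id, model.downloads, model.likes)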

👤 User API

Organizations are now included when retrieving the user overview.

get_user_followers and get_user_following are now paginated. This was not the case before, leading to issues for users with more than 1000 followers.
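
A minimal sketch of iterating over the now-paginated listing (the username is a placeholder):

>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> # Pagination is handled transparently: the method yields User objects lazily
>>> for follower in api.get_user_followers("some-username"):
...     print(follower.username)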

📦 Repo API

Added auth_check to easily verify whether a user has access to a repo. It raises GatedRepoError if the repo is gated and the user does not have permission, or RepositoryNotFoundError if the repo does not exist or is private. If the method does not raise an error, you can assume the user has permission to access the repo.

>>> from huggingface_hub import auth_check
>>> from huggingface_hub.utils import GatedRepoError, RepositoryNotFoundError
>>> try:
...     auth_check("user/my-cool-model")
... except GatedRepoError:
...     # Handle gated repository error
...     print("You do not have permission to access this gated repository.")
... except RepositoryNotFoundError:
...     # Handle repository not found error
...     print("The repository was not found or you do not have access.")

It is now possible to set a repo as gated from a script:

>>> from huggingface_hub import HfApi

>>> api = HfApi()
>>> api.update_repo_settings(repo_id=repo_id, gated="auto")  # Set to "auto", "manual" or False
⚡️ Inference Endpoint API

A few improvements have been made to the InferenceEndpoint API. It is now possible to set a scale_to_zero_timeout parameter and to configure secrets when creating or updating an Inference Endpoint.
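
As a rough sketch only (the endpoint name is hypothetical, and the keyword arguments assume the new options are exposed on update_inference_endpoint as described above):

>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> # scale_to_zero_timeout and secrets are the new options mentioned above (values are illustrative)
>>> api.update_inference_endpoint(
...     "my-endpoint-name",
...     scale_to_zero_timeout=15,
...     secrets={"MY_API_KEY": "super-secret-value"},
... )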

💾 Serialization

The torch serialization module now supports tensor subclasses.
We also made sure that the library is now tested with both torch 1.x and 2.x to ensure compatibility.
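
For context, a minimal sketch of the torch serialization helpers in action; tensor-subclass support applies transparently when the state dict contains such tensors, and the output path is illustrative:

>>> import torch
>>> from huggingface_hub import save_torch_state_dict
>>> # Save a state dict (possibly containing tensor subclasses) as safetensors shards
>>> state_dict = {"layer.weight": torch.randn(4, 4)}
>>> save_torch_state_dict(state_dict, save_directory="path/to/output")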

💔 Breaking changes

Breaking changes:

  • InferenceClient.conversational task has been removed in favor of InferenceClient.chat_completion. Also removed ConversationalOutput data class.
  • All InferenceClient output values are now dataclasses, not dictionaries.
  • list_repo_likers is now paginated. This means the output is now an iterator instead of a list, as shown in the sketch after this list.
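
A minimal sketch of adapting to the list_repo_likers change (the repo id is just an example):

>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> # list_repo_likers now returns an iterator; materialize it explicitly if you need a list
>>> likers = list(api.list_repo_likers("gpt2"))
>>> print(len(likers))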

Deprecation:

  • The multi_commit: bool parameter in upload_folder is now deprecated, along with create_commits_on_pr. It is now recommended to use upload_large_folder instead. Though its API and internals are different, the goal is still to upload many files in several commits.

🛠️ Small fixes and maintenance

⚡️ InferenceClient fixes

Thanks to community feedback, we've been able to improve or fix significant issues in both the InferenceClient and its async version, AsyncInferenceClient. These fixes have mainly focused on the OpenAI-compatible chat_completion method and the Inference Endpoints services.
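
For reference, a small sketch of the OpenAI-compatible chat_completion call these fixes target; the model id is only an example, and note that the output is a dataclass (per the breaking changes above):

>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient("meta-llama/Meta-Llama-3-8B-Instruct")  # example model id
>>> response = client.chat_completion(
...     messages=[{"role": "user", "content": "What is the capital of France?"}],
...     max_tokens=64,
... )
>>> print(response.choices[0].message.content)  # dataclass attributes, not dict keys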

😌 QoL improvements

When uploading a folder, the README.md file is now validated before hashing all the files, not after.
This should save precious time when uploading large files with a corrupted model card, since the error surfaces before any hashing starts.

It is also now possible to pass a --max-workers argument when uploading a folder from the CLI, as shown in the example after the list below.

  • huggingface-cli upload - Validate README.md before file hashing by @hlky in #2452
  • Solved: Need to add the max-workers argument to the huggingface-cli command by @devymex in #2500
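
For example, a sketch of the CLI invocation with the new flag (the repo id and folder are placeholders):

huggingface-cli upload my-username/my-model ./local-folder --max-workers=8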

All custom exceptions raised by huggingface_hub are now defined in the huggingface_hub.errors module. This should make it easier to import them for your try/except statements.
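
A minimal sketch of the new import path (the repo id is a placeholder):

>>> from huggingface_hub import hf_hub_download
>>> from huggingface_hub.errors import EntryNotFoundError, RepositoryNotFoundError
>>> # All custom exceptions can now be imported from huggingface_hub.errors
>>> try:
...     hf_hub_download("some-user/some-missing-repo", "config.json")
... except (RepositoryNotFoundError, EntryNotFoundError) as err:
...     print(f"Download failed: {err}")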

On the same occasion, we've reworked how errors are formatted in hf_raise_for_status so that more relevant information is printed to users.

All constants in huggingface_hub are now accessed through the constants module rather than imported individually. This makes it easier to patch their values, for example in a test pipeline.
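
For instance, a sketch of patching a constant in a test; the replacement value is purely illustrative:

>>> from unittest.mock import patch
>>> from huggingface_hub import constants
>>> # Because constants are accessed through the module, monkeypatching them is straightforward
>>> with patch.object(constants, "ENDPOINT", "https://hub-mirror.example.com"):
...     print(constants.ENDPOINT)
https://hub-mirror.example.com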

Other quality of life improvements:

🐛 fixes
🏗️ internal

Significant community contributions

The following contributors have made significant changes to the library over the last release:

v0.24.7: Fix race-condition issue when downloading from multiple threads

Compare Source

Full Changelog: huggingface/huggingface_hub@v0.24.6...v0.24.7

For more details, refer to the related PR #2534.


Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about these updates again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.

@renovate renovate bot requested a review from a team as a code owner October 4, 2024 17:07
@renovate renovate bot added the dependencies (Pull requests that update a dependency file) and tech-debt (Not a feature, but still necessary) labels on Oct 4, 2024

netlify bot commented Oct 4, 2024

Deploy Preview for leapfrogai-docs ready!

| Name | Link |
| ---- | ---- |
| 🔨 Latest commit | 1d85fef |
| 🔍 Latest deploy log | https://app.netlify.com/sites/leapfrogai-docs/deploys/670036b64971db00087fa6e5 |
| 😎 Deploy Preview | https://deploy-preview-1189--leapfrogai-docs.netlify.app |
Lighthouse
1 path audited
Performance: 41 (🟢 up 1 from production)
Accessibility: 98 (no change from production)
Best Practices: 100 (no change from production)
SEO: 92 (no change from production)
PWA: -
View the detailed breakdown and full score reports


| datasource | package         | from   | to     |
| ---------- | --------------- | ------ | ------ |
| pypi       | huggingface-hub | 0.24.6 | 0.25.1 |
| pypi       | huggingface-hub | 0.24.5 | 0.25.1 |
@renovate renovate bot force-pushed the renovate/huggingface-hub-0.x branch from d8cc98f to 1d85fef Compare October 4, 2024 18:40
@justinthelaw justinthelaw self-assigned this Oct 4, 2024
@justinthelaw justinthelaw merged commit 8129c34 into main Oct 4, 2024
24 of 26 checks passed
@justinthelaw justinthelaw deleted the renovate/huggingface-hub-0.x branch October 4, 2024 19:19