
build(deps): update transformers requirement from <4.43.0,>=4.42.3 to >=4.42.3,<4.45.0 in /requirements #2681

Conversation

dependabot[bot] (Contributor) commented on behalf of GitHub on Aug 7, 2024:

Updates the requirements on transformers to permit the latest version.
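
For context, a sketch of the effective change to the pin (the requirements file itself is not shown on this page; the version bounds are taken from the PR title):

# before
transformers >=4.42.3, <4.43.0
# after
transformers >=4.42.3, <4.45.0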

Release notes

Sourced from transformers' releases.

Release v4.44.0: End to end compile generation!!! Gemma2 (with assisted decoding), Codestral (Mistral for code), Nemotron, Efficient SFT training, CPU Offloaded KVCache, torch export for static cache

This release comes a bit early in our cycle because we wanted to ship important and requested models along with improved performance for everyone!

All of these are included with examples in the awesome https://github.com/huggingface/local-gemma repository! 🎈 We tried to share examples of what is now possible with all the shipped features! Kudos to @gante, @sanchit-gandhi and @xenova

💥 End-to-end generation compile

Generate: end-to-end compilation #30788 by @​gante: model.generate now supports compiling! There are a few limitations, but here is a small snippet:

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
import copy
model = AutoModelForCausalLM.from_pretrained(
"meta-llama/Meta-Llama-3.1-8B", torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B")
# compile generate
compiled_generate = torch.compile(model.generate, fullgraph=True, mode="reduce-overhead")
# compiled generate does NOT accept parameterization except a) model inputs b) a generation config
generation_config = copy.deepcopy(model.generation_config)
generation_config.pad_token_id = model.config.eos_token_id
model_inputs = tokenizer(["Write a poem about the market crashing in summer"], return_tensors="pt")
model_inputs = model_inputs.to(model.device)
output_compiled = compiled_generate(**model_inputs, generation_config=generation_config)
print(output_compiled)
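
As a general torch.compile note (not specific to this release): the first call pays the compilation cost, and new input shapes trigger recompilation, so any rough timing comparison needs warm-up runs. A minimal sketch, reusing the names from the snippet above:

import time

# warm-up: the first call compiles and is slow; later calls with the
# same input shapes reuse the compiled graph
for _ in range(2):
    compiled_generate(**model_inputs, generation_config=generation_config)

start = time.perf_counter()
output_compiled = compiled_generate(**model_inputs, generation_config=generation_config)
print(f"compiled generate took {time.perf_counter() - start:.2f}s")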

⚡ 3 to 5x compile speedup (compilation time 👀 not runtime)

  • 3-5x faster torch.compile forward compilation for autoregressive decoder models #32227 by @fxmarty. As documented on the PR, this makes the whole generation a lot faster when you re-use the cache! You can see this when you run model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True)
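
A minimal sketch of the pattern that bullet refers to: compile only the forward pass and decode with a cache whose shapes stay fixed, so the graph is compiled once and re-used at every step. The model name is borrowed from the snippet above, and cache_implementation="static" is the pre-existing static cache option, not something new in this release:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B")
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-8B", torch_dtype=torch.bfloat16, device_map="auto"
)
# compile only the forward pass, as in the bullet above
model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True)

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
# a static cache keeps tensor shapes fixed across decoding steps, so the
# compiled forward is re-used instead of recompiled
output = model.generate(**inputs, cache_implementation="static", max_new_tokens=32)
print(tokenizer.decode(output[0]))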

🪶 Offloaded KV cache: offload the cache to CPU when you are GPU poooooor 🚀

  • Offloaded KV Cache #31325 by @n17s: you just have to set cache_implementation="offloaded" when calling generate, or use a GenerationConfig like this:
from transformers import GenerationConfig

gen_config = GenerationConfig(
    cache_implementation="offloaded",
    # other generation options, such as:
    num_beams=4,
    num_beam_groups=2,
    num_return_sequences=4,
    diversity_penalty=1.0,
    max_new_tokens=50,
    early_stopping=True,
)
outputs = model.generate(inputs["input_ids"], generation_config=gen_config)
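
Equivalently, since kwargs passed to generate override the generation config, the same thing as a one-liner (assuming model and inputs as in the snippet above):

outputs = model.generate(
    inputs["input_ids"],
    cache_implementation="offloaded",  # same effect as the GenerationConfig above
    max_new_tokens=50,
)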

📦 Torch export for static cache

The PyTorch team gave us a great gift: you can now use torch.export in a way that is directly compatible with ExecuTorch! Find examples here.
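
The release's own examples were linked rather than inlined, so here is only a generic torch.export sketch to show what exporting produces; the toy module and shapes are illustrative and not taken from the release notes:

import torch
from torch.export import export

class Toy(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.nn.functional.relu(x) * 2.0

# export() traces the module into a self-contained, serializable graph
exported = export(Toy(), (torch.randn(2, 8),))
print(exported.graph_module.code)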

... (truncated)


Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

📚 Documentation preview 📚: https://torchmetrics--2681.org.readthedocs.build/en/2681/

dependabot[bot] added the test / CI label on Aug 7, 2024
dependabot[bot] requested a review from a team on August 7, 2024 23:12
dependabot[bot] force-pushed the dependabot-pip-requirements-transformers-gte-4.42.3-and-lt-4.45.0 branch from 259f71d to 682a016 on August 7, 2024 23:25
Borda (Member) commented on Aug 7, 2024:

@dependabot rebase

mergify[bot] added the has conflicts label on Aug 7, 2024
Updates the requirements on [transformers](https://github.com/huggingface/transformers) to permit the latest version.
- [Release notes](https://github.com/huggingface/transformers/releases)
- [Commits](huggingface/transformers@v4.42.3...v4.44.0)

---
updated-dependencies:
- dependency-name: transformers
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
dependabot[bot] force-pushed the dependabot-pip-requirements-transformers-gte-4.42.3-and-lt-4.45.0 branch from 682a016 to 5d13bd8 on August 7, 2024 23:26
mergify[bot] removed the has conflicts label on Aug 7, 2024
Borda merged commit 2d9c009 into master on Aug 8, 2024
68 checks passed
Borda deleted the dependabot-pip-requirements-transformers-gte-4.42.3-and-lt-4.45.0 branch on August 8, 2024 09:37
mergify[bot] added the ready label on Aug 8, 2024
Borda pushed a commit that referenced this pull request Sep 11, 2024
… >=4.42.3,<4.45.0 in /requirements (#2681)

build(deps): update transformers requirement in /requirements

Updates the requirements on [transformers](https://github.com/huggingface/transformers) to permit the latest version.
- [Release notes](https://github.com/huggingface/transformers/releases)
- [Commits](huggingface/transformers@v4.42.3...v4.44.0)

---
updated-dependencies:
- dependency-name: transformers
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
(cherry picked from commit 2d9c009)
Borda pushed a commit that referenced this pull request Sep 13, 2024
… >=4.42.3,<4.45.0 in /requirements (#2681)

build(deps): update transformers requirement in /requirements

Updates the requirements on [transformers](https://github.com/huggingface/transformers) to permit the latest version.
- [Release notes](https://github.com/huggingface/transformers/releases)
- [Commits](huggingface/transformers@v4.42.3...v4.44.0)

---
updated-dependencies:
- dependency-name: transformers
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
(cherry picked from commit 2d9c009)
Labels: ready, test / CI