
Improved export_meta_llama_bin.py #71

Closed
wants to merge 3 commits into from

Conversation

joey00072

Made the llama export a little better.

@karpathy
Owner

ty. this looks equivalent on a skim, but did you check by any chance?

@joey00072
Author

Fixed: #78 (comment)
Weights are loaded on CPU only.

Nope, I haven't tested this. My machine doesn't have enough RAM 😬

Can someone test this?
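For context, "loaded on CPU only" in PyTorch usually means passing `map_location="cpu"` to `torch.load`. A minimal sketch of that pattern follows; the checkpoint filename is an assumption based on Meta's consolidated checkpoint layout, not the literal contents of this PR:

```python
# Minimal sketch of CPU-only checkpoint loading (an assumption about the
# change described above, not this PR's exact code). map_location="cpu"
# keeps every tensor in host RAM instead of materializing it on a GPU.
import torch

checkpoint = torch.load("consolidated.00.pth", map_location="cpu")
```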

@Foundation42

Can someone test this?

Hmm, it's different but still not working. Did I do something wrong?

f42@formica:~/dev/llama$ torchrun --nproc_per_node 1 export_meta_llama_bin.py
> initializing model parallel with size 1
> initializing ddp with size 1
> initializing pipeline with size 1
Traceback (most recent call last):
  File "/home/f42/dev/llama/export_meta_llama_bin.py", line 159, in <module>
    generator = build(
  File "/home/f42/dev/llama/export_meta_llama_bin.py", line 85, in build
    model = llama.Transformer(model_args)
  File "/home/f42/dev/llama/llama/model.py", line 259, in __init__
    self.layers.append(TransformerBlock(layer_id, params))
  File "/home/f42/dev/llama/llama/model.py", line 221, in __init__
    self.attention = Attention(args)
  File "/home/f42/dev/llama/llama/model.py", line 135, in __init__
    ).cuda()
  File "/home/f42/.local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 229, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 2024) of binary: /usr/bin/python3
Traceback (most recent call last):
  File "/home/f42/.local/bin/torchrun", line 8, in <module>
    sys.exit(main())
  File "/home/f42/.local/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
    return f(*args, **kwargs)
  File "/home/f42/.local/lib/python3.10/site-packages/torch/distributed/run.py", line 762, in main
    run(args)
  File "/home/f42/.local/lib/python3.10/site-packages/torch/distributed/run.py", line 753, in run
    elastic_launch(
  File "/home/f42/.local/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/home/f42/.local/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 246, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: 
============================================================
export_meta_llama_bin.py FAILED
------------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2023-07-25_19:30:51
  host      : formica
  rank      : 0 (local_rank: 0)
  exitcode  : 1 (pid: 2024)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
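The traceback points at Meta's reference `llama/model.py`, which calls `.cuda()` while constructing the `Attention` module, so it is model *construction*, not checkpoint loading, that requires an NVIDIA driver here. A hypothetical CPU-only workaround (an assumption for illustration, not this PR's fix) would be to neutralize those calls before the model is built:

```python
# Hypothetical workaround sketch (not part of this PR): make .cuda() a
# no-op so Meta's reference model constructs on a machine without an
# NVIDIA driver. Must run before llama.Transformer(model_args) is built.
import torch

def _cuda_noop(self, *args, **kwargs):
    return self  # stay on CPU instead of moving to a GPU

torch.nn.Module.cuda = _cuda_noop
torch.Tensor.cuda = _cuda_noop
```

Whether this is enough depends on what else in the model or the fairscale parallel layers assumes a GPU, so it is a diagnosis aid rather than a guaranteed fix.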

@karpathy
Owner

thank you, but this is deprecated due to 5bcd19a

karpathy closed this on Jul 25, 2023