(*bias): last dimension must be contiguous #72

Open
yihp opened this issue Feb 1, 2024 · 4 comments

yihp commented Feb 1, 2024

I ran the demo on a V100 machine and hit this error; could anyone please take a look?
(xrayglm) [yhp@node45 XrayGLM]$ python cli_demo.py --from_pretrained /home/yhp/XrayGLM/XrayGLM-3000/ --prompt_zh '详细描述这张胸部X光片的诊断结果'
[2024-02-01 18:40:16,440] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2024-02-01 18:40:18,238] [WARNING] Failed to load bitsandbytes:No module named 'bitsandbytes'
[2024-02-01 18:40:18,239] [INFO] building FineTuneVisualGLMModel model ...
[2024-02-01 18:40:18,255] [INFO] [RANK 0] > initializing model parallel with size 1
[2024-02-01 18:40:18,255] [INFO] [RANK 0] You are using model-only mode.
For torch.distributed users or loading model parallel models, set environment variables RANK, WORLD_SIZE and LOCAL_RANK.
/home/yhp/.conda/envs/xrayglm/lib/python3.10/site-packages/torch/nn/init.py:452: UserWarning: Initializing zero-element tensors is a no-op
warnings.warn("Initializing zero-element tensors is a no-op")
replacing layer 0 attention with lora
replacing layer 14 attention with lora
[2024-02-01 18:40:30,087] [INFO] [RANK 0] > number of parameters on model parallel rank 0: 7811237376
[2024-02-01 18:40:31,623] [INFO] [RANK 0] global rank 0 is loading checkpoint /home/yhp/XrayGLM/XrayGLM-3000/3000/mp_rank_00_model_states.pt
[2024-02-01 18:40:41,798] [INFO] [RANK 0] > successfully loaded /home/yhp/XrayGLM/XrayGLM-3000/3000/mp_rank_00_model_states.pt
Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
欢迎使用 XrayGLM 模型,输入图像URL或本地路径读图,继续输入内容对话,clear 重新开始,stop 终止程序
请输入图像路径或URL(回车进入纯文本对话): /home/yhp/XrayGLM/data/demo/2p.png
*(bias): last dimension must be contiguous
请输入图像路径或URL(回车进入纯文本对话):
My environment:
Package Version


aiofiles 23.2.1
aiohttp 3.9.3
aiosignal 1.3.1
altair 5.2.0
annotated-types 0.6.0
anyio 4.2.0
async-timeout 4.0.3
attrs 23.2.0
boto3 1.34.32
botocore 1.34.32
certifi 2023.11.17
charset-normalizer 3.3.2
click 8.1.7
colorama 0.4.6
contourpy 1.2.0
cpm-kernels 1.0.11
cycler 0.12.1
datasets 2.16.1
deepspeed 0.13.1
dill 0.3.7
einops 0.7.0
exceptiongroup 1.2.0
fastapi 0.109.0
ffmpy 0.3.1
filelock 3.13.1
fonttools 4.47.2
frozenlist 1.4.1
fsspec 2023.10.0
gradio 3.25.0
gradio_client 0.8.1
h11 0.14.0
hjson 3.1.0
httpcore 1.0.2
httpx 0.26.0
huggingface-hub 0.20.3
idna 3.6
importlib-resources 6.1.1
Jinja2 3.1.3
jmespath 1.0.1
jsonschema 4.21.1
jsonschema-specifications 2023.12.1
kiwisolver 1.4.5
latex2mathml 3.77.0
linkify-it-py 2.0.2
Markdown 3.5.2
markdown-it-py 2.2.0
MarkupSafe 2.1.4
matplotlib 3.8.2
mdit-py-plugins 0.3.3
mdtex2html 1.3.0
mdurl 0.1.2
mpmath 1.3.0
multidict 6.0.4
multiprocess 0.70.15
networkx 3.2.1
ninja 1.11.1.1
numpy 1.26.3
nvidia-cublas-cu12 12.1.3.1
nvidia-cuda-cupti-cu12 12.1.105
nvidia-cuda-nvrtc-cu12 12.1.105
nvidia-cuda-runtime-cu12 12.1.105
nvidia-cudnn-cu12 8.9.2.26
nvidia-cufft-cu12 11.0.2.54
nvidia-curand-cu12 10.3.2.106
nvidia-cusolver-cu12 11.4.5.107
nvidia-cusparse-cu12 12.1.0.106
nvidia-nccl-cu12 2.19.3
nvidia-nvjitlink-cu12 12.3.101
nvidia-nvtx-cu12 12.1.105
orjson 3.9.12
packaging 23.2
pandas 2.2.0
pillow 10.2.0
pip 23.3.1
protobuf 4.25.2
psutil 5.9.8
py-cpuinfo 9.0.0
pyarrow 15.0.0
pyarrow-hotfix 0.6
pydantic 1.10.13
pydantic_core 2.16.1
pydub 0.25.1
Pygments 2.17.2
pynvml 11.5.0
pyparsing 3.1.1
python-dateutil 2.8.2
python-multipart 0.0.6
pytz 2023.4
PyYAML 6.0.1
referencing 0.33.0
regex 2023.12.25
requests 2.31.0
rich 13.7.0
rpds-py 0.17.1
ruff 0.1.15
s3transfer 0.10.0
safetensors 0.4.2
semantic-version 2.10.0
sentencepiece 0.1.99
setuptools 68.2.2
shellingham 1.5.4
six 1.16.0
sniffio 1.3.0
starlette 0.35.1
SwissArmyTransformer 0.4.0
sympy 1.12
tensorboardX 2.6.2.2
tokenizers 0.13.3
tomlkit 0.12.0
toolz 0.12.1
torch 2.2.0
torchvision 0.17.0
tqdm 4.66.1
transformers 4.27.1
triton 2.2.0
typer 0.9.0
typing_extensions 4.9.0
tzdata 2023.4
uc-micro-py 1.0.2
urllib3 2.0.7
uvicorn 0.27.0.post1
websockets 11.0.3
wheel 0.41.2
xxhash 3.4.1
yarl 1.9.4
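
For readers hitting the same message: this error is typically raised by PyTorch's memory-efficient scaled-dot-product-attention kernel when the attention bias/mask it receives is a view that is not contiguous in its last dimension. The snippet below is only a minimal sketch of two common mitigations, assuming the error comes from that SDPA path under torch 2.2 (not confirmed in this thread); the tensor shapes are made up for illustration.

```python
import torch
import torch.nn.functional as F

# Option 1: steer PyTorch away from the memory-efficient SDPA kernel,
# which is often the one that raises "last dimension must be contiguous".
torch.backends.cuda.enable_mem_efficient_sdp(False)

# Option 2: make the bias/mask contiguous before the fused attention call.
q = torch.randn(1, 8, 16, 64, device="cuda", dtype=torch.float16)
k = torch.randn(1, 8, 16, 64, device="cuda", dtype=torch.float16)
v = torch.randn(1, 8, 16, 64, device="cuda", dtype=torch.float16)
# A strided slice like this is non-contiguous in its last dimension.
mask = torch.zeros(1, 8, 16, 32, device="cuda", dtype=torch.float16)[..., ::2]
out = F.scaled_dot_product_attention(q, k, v, attn_mask=mask.contiguous())
```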

yihp commented Feb 2, 2024

Solved it. I downgraded torch and installed the build that matches my CUDA version.

@WangHaoyuuu

Hello, could you share which torch version solved it for you?

yihp commented Mar 21, 2024

@noviceswing I used 2.0 or 2.1. That environment is gone now, so please try it yourself.

@WangHaoyuuu

@yihp Thanks! 2.1 solved it for me.
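
For reference, the fix reported in this thread is installing a torch build that matches the local CUDA toolkit. A hedged example of what that could look like, assuming a CUDA 11.8 system (the CUDA version on the V100 node is not stated here, and the exact 2.1.x patch release is an assumption):

```
# Hypothetical install command; pick the index URL that matches your CUDA driver.
pip install torch==2.1.2 torchvision==0.16.2 --index-url https://download.pytorch.org/whl/cu118
```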
