After deploying a service with vLLM (0.6.4.post1), I found that certain specific images cause the whole service to crash. The error is shown below. Is there any way to fix this?
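For context, the failing request goes through the OpenAI-compatible /v1/chat/completions route that appears in the log below. Here is a minimal reproduction sketch of that kind of request; the endpoint URL, API key, model checkpoint name, and image path are assumed placeholders, not values from this report (the traceback only confirms that a Qwen2-VL model is being served):

```python
# Hypothetical reproduction request against a vLLM OpenAI-compatible server.
# Endpoint URL, API key, model name, and image path are assumed placeholders.
import base64
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Encode the problematic image as a base64 data URL.
with open("problem_image.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

resp = client.chat.completions.create(
    model="Qwen/Qwen2-VL-7B-Instruct",  # assumed checkpoint; adjust to the served model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(resp.choices[0].message.content)
```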
INFO 12-17 19:20:28 model_runner_base.py:120] Writing input of failed execution to /tmp/err_execute_model_input_20241217-192028.pkl...
INFO 12-17 19:20:28 model_runner_base.py:149] Completed writing input of failed execution to /tmp/err_execute_model_input_20241217-192028.pkl.
ERROR 12-17 19:20:28 engine.py:135] Traceback (most recent call last):
ERROR 12-17 19:20:28 engine.py:135]   File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 196, in run_engine_loop
ERROR 12-17 19:20:28 engine.py:135]     request_outputs = self.engine_step()
ERROR 12-17 19:20:28 engine.py:135]   File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 214, in engine_step
ERROR 12-17 19:20:28 engine.py:135]   File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 205, in engine_step
ERROR 12-17 19:20:28 engine.py:135]     return self.engine.step()
ERROR 12-17 19:20:28 engine.py:135]   File "/usr/local/lib/python3.12/dist-packages/vllm/engine/llm_engine.py", line 1454, in step
ERROR 12-17 19:20:28 engine.py:135]     outputs = self.model_executor.execute_model(
ERROR 12-17 19:20:28 engine.py:135]   File "/usr/local/lib/python3.12/dist-packages/vllm/worker/worker_base.py", line 343, in execute_model
ERROR 12-17 19:20:28 engine.py:135]     output = self.driver_worker.execute_model(execute_model_req)
ERROR 12-17 19:20:28 engine.py:135]   File "/usr/local/lib/python3.12/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
ERROR 12-17 19:20:28 engine.py:135]     return func(*args, **kwargs)
ERROR 12-17 19:20:28 engine.py:135]     hidden_or_intermediate_states = model_executable(
ERROR 12-17 19:20:28 engine.py:135]   File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
ERROR 12-17 19:20:28 engine.py:135]     return self._call_impl(*args, **kwargs)
ERROR 12-17 19:20:28 engine.py:135]   File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1747, in _call_impl
ERROR 12-17 19:20:28 engine.py:135]     return forward_call(*args, **kwargs)
ERROR 12-17 19:20:28 engine.py:135]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/qwen2_vl.py", line 1287, in forward
ERROR 12-17 19:20:28 engine.py:135]     inputs_embeds = self._merge_multimodal_embeddings(
ERROR 12-17 19:20:28 engine.py:135]     inputs_embeds[mask, :] = multimodal_embeddings
ERROR 12-17 19:20:28 engine.py:135]     ~~~~~~~~~~~~~^^^^^^^^^
ERROR 12-17 19:20:28 engine.py:135] RuntimeError: shape mismatch: value tensor of shape [8768, 3584] cannot be broadcast to indexing result of shape [4081, 3584]
ERROR 12-17 19:20:28 engine.py:135] The above exception was the direct cause of the following exception:
CRITICAL 12-17 19:20:28 launcher.py:99] MQLLMEngine is already dead, terminating server process
INFO:     172.30.192.10:55585 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error
INFO:     Waiting for application shutdown.
INFO:     Finished server process [1]
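The RuntimeError says that the vision encoder produced 8768 multimodal embedding rows, while the prompt only contains 4081 image-placeholder positions to scatter them into, so the assignment in `_merge_multimodal_embeddings` fails and takes the MQLLMEngine down with it. One thing worth trying while this is investigated is capping the image resolution on the client side before sending the request, which bounds the number of vision tokens per image; whether that avoids the crash for your specific images is not guaranteed. Below is a minimal sketch of that idea. The 28-pixel patch size and 2x2 merge factor follow Qwen2-VL's defaults; the `MAX_PIXELS` budget and the function name are assumptions for illustration, not values from this report.

```python
# Hypothetical client-side guard (not part of vLLM): downscale images before
# sending them, so the number of vision tokens per image stays bounded.
# PATCH and MERGE follow Qwen2-VL's defaults (28x28 ViT patches, 2x2 merge);
# MAX_PIXELS is an assumed budget, not a value taken from this report.
from PIL import Image

PATCH = 28                   # Qwen2-VL ViT patch size in pixels
MERGE = 2                    # 2x2 spatial merge: 4 patches -> 1 LLM token
MAX_PIXELS = 1280 * 28 * 28  # assumed pixel budget per image

def downscale_for_qwen2_vl(src_path: str, dst_path: str) -> int:
    """Resize the image if it exceeds MAX_PIXELS and return a rough
    estimate of how many vision tokens it will occupy in the prompt."""
    img = Image.open(src_path)
    w, h = img.size
    if w * h > MAX_PIXELS:
        scale = (MAX_PIXELS / (w * h)) ** 0.5
        w = max(PATCH, int(w * scale))
        h = max(PATCH, int(h * scale))
        img = img.resize((w, h), Image.LANCZOS)
    img.save(dst_path)
    # One LLM token per MERGE x MERGE block of PATCH x PATCH patches.
    return (w // PATCH) * (h // PATCH) // (MERGE * MERGE)
```

Server-side, constraining the processor's image size (for example via Qwen2-VL's `min_pixels`/`max_pixels`, if your vLLM version lets you pass processor kwargs such as `--mm-processor-kwargs`) is another option, and upgrading to a newer vLLM release may also be worth a try, since the multimodal input handling has continued to change after 0.6.4.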