[Bug]: When using tp for inference, an error occurs: Worker VllmWorkerProcess pid 3283517 died, exit code: -15. #6145
Comments
@njhill should be working on this. This is a benign error report; your program should run as normal, but the shutdown is not clean.
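Until the clean-shutdown fix lands, one common workaround is to tear the engine down explicitly before the interpreter exits. Below is a minimal sketch, assuming the multiprocessing tensor-parallel backend; the model name and `tensor_parallel_size` are placeholders, and the exact cleanup hooks differ between vLLM versions.

```python
# Hedged workaround sketch: explicitly release the engine and the distributed
# process group before interpreter shutdown so the worker processes can exit
# cleanly. Model name and tensor_parallel_size are placeholders.
import gc

import torch.distributed as dist
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m", tensor_parallel_size=2)
outputs = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=16))
for output in outputs:
    print(output.outputs[0].text)

# Drop the engine reference and force collection while the interpreter is
# still fully alive, then tear down the process group if one was created.
del llm
gc.collect()
if dist.is_available() and dist.is_initialized():
    dist.destroy_process_group()
```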
Same error, how to fix it?
I have a similar error. What is the solution?
model: AI-ModelScope/Mixtral-8x22B-Instruct-v0.1
[rank0]: AssertionError: 32768 is not divisible by 3
ERROR 07-13 02:56:30 multiproc_worker_utils.py:120] Worker VllmWorkerProcess pid 382811 died, exit code: -15
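The `32768 is not divisible by 3` assertion above is a separate configuration problem from the unclean shutdown: vLLM shards weights across tensor-parallel ranks, so dimensions such as the vocabulary size must be divisible by `tensor_parallel_size`. A minimal pre-flight check might look like the sketch below; attribute names follow Hugging Face `transformers` configs, and the Mixtral model id is an assumption.

```python
# Hedged sketch: verify that a candidate tensor_parallel_size divides the
# model dimensions that vLLM shards (which dimensions are checked depends on
# the architecture; vocab size and attention-head count are typical).
from transformers import AutoConfig


def check_tp_size(model_id: str, tp_size: int) -> None:
    cfg = AutoConfig.from_pretrained(model_id)
    for attr in ("vocab_size", "num_attention_heads"):
        value = getattr(cfg, attr, None)
        if value is not None and value % tp_size != 0:
            raise ValueError(f"{attr}={value} is not divisible by tp_size={tp_size}")


# The assertion above reports vocab_size=32768, so tp_size=3 fails;
# a divisor such as 2, 4, or 8 works.
check_tp_size("mistralai/Mixtral-8x22B-Instruct-v0.1", 4)
```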
Me too, same issue.
Same issue here.
Same issue.
…taset; Handle vLLM Benign Error (#540)

In this PR:

1. **Support Multi-Model Multi-Category Generation**:
   - The `openfunctions_evaluation.py` can now take a list of model names and a list of test categories as command line input.
   - Partially addresses #501.
2. **Handling vLLM's Error**:
   - A benign error occurs during the cleanup phase after a generation task completes, causing the pipeline to fail despite the model results having been generated. This issue stems from vLLM and is outside our control. [See this issue](vllm-project/vllm#6145) from the vLLM repo.
   - This is annoying because when users attempt category-specific generation for locally-hosted models (as supported in #512), only the first category result for the first model is generated, since the error occurs immediately afterwards.
   - To improve the user experience, we now combine all selected test categories into one task and submit that single task to vLLM, splitting the results afterwards (see the sketch after this comment).
   - Note: if multiple locally-hosted models are queued for inference, only the tasks of the first model will complete. Subsequent tasks will still fail due to the cleanup-phase error from the first model, so we recommend running the inference command for one model at a time until vLLM rolls out a fix.
3. **Adding Index to Dataset**:
   - Each test file and possible_answer file now includes an index to help match entries.

This PR **will not** affect the leaderboard score.
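For reference, the "single combined task" workaround from item 2 can be sketched generically as below: flatten every category's prompts into one `generate()` call, so the benign cleanup error can only strike after all results exist, then split the outputs back per category. The function and variable names are illustrative, not the BFCL project's actual code.

```python
# Illustrative sketch of the workaround: submit all categories to vLLM as one
# batch, then split the results by category afterwards. Names are hypothetical
# and do not reflect the BFCL codebase.
from vllm import LLM, SamplingParams


def generate_all_categories(
    llm: LLM, prompts_by_category: dict[str, list[str]]
) -> dict[str, list[str]]:
    flat_prompts: list[str] = []
    owners: list[str] = []
    for category, prompts in prompts_by_category.items():
        flat_prompts.extend(prompts)
        owners.extend([category] * len(prompts))

    # One submission to vLLM, so the cleanup-phase error cannot interrupt
    # later categories.
    outputs = llm.generate(flat_prompts, SamplingParams(max_tokens=256))

    results: dict[str, list[str]] = {c: [] for c in prompts_by_category}
    for category, output in zip(owners, outputs):
        results[category].append(output.outputs[0].text)
    return results
```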
Same issue here.
Same issue here.
🙋🏻‍♀️
Same error on 0.5.5.
Same issue on 0.6.0.
I have a similar issue when using nsys-gui to launch the server.
Same issue with vLLM 0.6.0 when running: nsys profile --stats=true python examples/offline_inference.py
Did you find any way to solve this? Would you mind sharing it, please?
Your current environment
🐛 Describe the bug
When using tensor parallel (tp) for inference, there is a certain probability of encountering the following error:
ERROR 07-04 23:23:23 multiproc_worker_utils.py:120] Worker VllmWorkerProcess pid 3283517 died, exit code: -15.
The code to reproduce the issue is as follows:
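The reporter's original snippet is not reproduced here; a minimal sketch of the kind of tensor-parallel offline-inference script that can hit this message is shown below. The model id and `tensor_parallel_size` are placeholders.

```python
# Minimal tensor-parallel offline-inference sketch (placeholder model and
# tensor_parallel_size). Generation itself completes; the "exit code: -15"
# worker message appears during process shutdown.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Meta-Llama-3-8B-Instruct", tensor_parallel_size=2)
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

outputs = llm.generate(["The capital of France is"], sampling_params)
for output in outputs:
    print(output.outputs[0].text)
```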
The complete output of the run is as follows: