Fix failing CI test using thread sanitizer #582

Closed
ggerganov opened this issue Mar 28, 2023 · 3 comments · Fixed by #682
Labels: help wanted (Extra attention is needed), high priority (Very important issue), testing (Everything test related)

Comments

ggerganov (Owner) commented Mar 28, 2023

I cannot reproduce on my machines:

https://github.com/ggerganov/llama.cpp/actions/runs/4545676297/jobs/8013336777

If someone can reproduce this, please try to fix it.

ggerganov added the help wanted and testing labels on Mar 28, 2023
cyyynthia commented

The 143 exit code (and the "The runner has received a shutdown signal." annotation left by GitHub) leads me to believe the tests are exceeding GitHub Actions' limits in some way and are getting terminated.

See actions/runner-images#6680, actions/runner-images#7188
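
For context, here is a minimal, self-contained C++ sketch (not from the thread) of why a terminated job surfaces as exit code 143: POSIX shells report 128 + signal number for a process killed by a signal, and SIGTERM is signal 15, so 128 + 15 = 143, which lines up with the shutdown-signal annotation.

```cpp
// Sketch only: demonstrates that a SIGTERM-killed process is reported by a
// shell as exit code 128 + 15 = 143, matching the CI failure described above.
#include <signal.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>

int main() {
    pid_t pid = fork();
    if (pid == 0) {
        // Child: just wait around, like a CI job that trips a runner limit.
        pause();
        return 0;
    }
    kill(pid, SIGTERM);           // what the runner effectively does on shutdown
    int status = 0;
    waitpid(pid, &status, 0);
    if (WIFSIGNALED(status)) {
        // A shell would report this child's exit code as 128 + 15 = 143.
        printf("killed by signal %d -> shell exit code %d\n",
               WTERMSIG(status), 128 + WTERMSIG(status));
    }
    return 0;
}
```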

ggerganov added the high priority label on Mar 29, 2023
slaren (Collaborator) commented Mar 31, 2023

I suspect this may be caused by high memory usage:

Maximum resident set size (kbytes): 11846496

That's nearly 12 GB (!). Does anyone know the limit for GitHub Actions?
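
As a rough sanity check (my own sketch, not from the thread), the quoted line looks like GNU time's verbose output, where "Maximum resident set size" is the peak RSS in kilobytes; 11846496 kB / 1024 / 1024 ≈ 11.3 GiB, i.e. "nearly 12 GB". The same counter can also be read from inside a process via getrusage():

```cpp
// Sketch only (not llama.cpp code): convert the reported peak RSS to GiB and
// show how a process can query its own peak RSS on Linux.
#include <sys/resource.h>
#include <cstdio>

int main() {
    // The value quoted above, in kilobytes:
    const long reported_kb = 11846496;
    // 11846496 kB / 1024 / 1024 ≈ 11.3 GiB, i.e. "nearly 12 GB".
    printf("reported peak RSS: %.1f GiB\n", reported_kb / 1024.0 / 1024.0);

    // The same counter as seen from inside the process; on Linux,
    // ru_maxrss is the peak resident set size in kilobytes.
    struct rusage ru;
    if (getrusage(RUSAGE_SELF, &ru) == 0) {
        printf("this process' peak RSS so far: %ld kB\n", ru.ru_maxrss);
    }
    return 0;
}
```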

cyyynthia commented

According to GitHub's documentation, Linux runners have 2 x86 cores and 7 GB of RAM.

GitHub's large runners might let the jobs complete, but those are sadly in limited beta and only available to organizations and enterprises.

AAbushady pushed a commit to AAbushady/llama.cpp that referenced this issue Jan 27, 2024
* Downgrade CUDA to 11.4

This helps keep the binary smaller and adds K80 support; the manual compiles we did already had this.

* Update kcpp-build-release-win-cuda.yaml

* Update kcpp-build-release-win-cuda.yaml

* Update kcpp-build-release-win-cuda.yaml

* Update kcpp-build-release-win-cuda.yaml

* Update kcpp-build-release-win-cuda.yaml

* Update kcpp-build-release-win-cuda.yaml

* Restore concedo_experimental