multi gpu docs #391

Merged: 2 commits into migrate-docs from multi-gpu on Feb 22, 2024
Conversation

ImmanuelSegol (Contributor)

Describe the changes

This PR...

Linked Issues

Resolves #

Co-authored-by: Jeremy Felder <jeremy.felder1@gmail.com>
@ImmanuelSegol ImmanuelSegol merged commit 4867b0d into migrate-docs Feb 22, 2024
@ImmanuelSegol ImmanuelSegol deleted the multi-gpu branch February 22, 2024 14:53

This approach won't let us tackle larger computation sizes, but it will allow us to run multiple computations in parallel that we wouldn't be able to load onto a single GPU together.

For example, let's say that you had to compute two MSMs of size 2^20 on a 16GB VRAM GPU; you would normally have to perform them one after the other. However, if you double the number of GPUs in your system, you can now run them in parallel.
Contributor:

2^20 will fit into 16GB RAM, and even quite a few more of these will, right? Even with precomputation: for example, for BLS12-381, 2^20 * (48+32) bytes = 80MB.

Contributor Author:

Thanks! That's a typo, will fix.


One common challenge with Zero-Knowledge computation is managing large input sizes. It's not uncommon to encounter circuits surpassing 2^25 constraints, pushing the capabilities of even advanced GPUs to their limits. To effectively scale and process such large circuits, leveraging multiple GPUs in tandem becomes a necessity.

Multi-GPU programming involves developing software to operate across multiple GPU devices. Let's first explore different approaches to Multi-GPU programming then we will cover how ICICLE allows you to easily develop your ZK computations to run across many GPUs.
Contributor:

"programming then" - double space


This approach won't let us tackle larger computation sizes, but it will allow us to run multiple computations in parallel that we wouldn't be able to load onto a single GPU together.

For example, let's say that you had to compute two MSMs of size 2^20 on a 16GB VRAM GPU; you would normally have to perform them one after the other. However, if you double the number of GPUs in your system, you can now run them in parallel.
Contributor:

I think a size 2^20 MSM on BLS12 curves should require less than 500 MB. For BLS12, 2^26 is probably the size where 1 MSM fits into 16 GB but 2 do not.
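To make the pattern concrete, here is a minimal Rust sketch of the two-GPUs-in-parallel idea from the snippet above; `msm_on_device` is a hypothetical placeholder standing in for "bind this thread to a device, then run the MSM there", not the real ICICLE call:

```rust
use std::thread;

// Hypothetical stand-in: in a real program this would call
// cudaSetDevice(device_id) for the current thread and then launch
// the MSM with a config targeting that device.
fn msm_on_device(device_id: usize, label: &str) {
    println!("running MSM {label} on GPU {device_id}");
}

fn main() {
    // Two MSMs that together don't fit on one GPU: give each its own
    // GPU and its own CPU thread so they execute in parallel.
    let a = thread::spawn(|| msm_on_device(0, "A"));
    let b = thread::spawn(|| msm_on_device(1, "B"));
    a.join().unwrap();
    b.join().unwrap();
}
```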


The approach we have taken for the moment is a GPU server approach: we assume you have a machine with multiple GPUs, and you wish to run some computation on each GPU.

To dive deeper and learn about the API, checkout the docs for our different ICICLE APIs.
Contributor:

checkout -> check out


## Device context API

The `DeviceContext` is embedded into `NTTConfig`, `MSMConfig` and `PoseidonConfig`, meaning you can simple pass a `device_id` to your existing config an the same computation will be triggered on a different device automatically.
Contributor:

simple -> simply

Contributor:

an -> and

@vhnatyk (Contributor) left a comment:

hi @ImmanuelSegol - I see the PR was merged already 😊, looks great 👍🏻 and there are some small notes to consider


- Never hardcode device IDs. If you want your software to take advantage of all GPUs on a machine, use methods such as `get_device_count` to support an arbitrary number of GPUs.

- Launch one thread per GPU. To avoid nasty errors and hard-to-read code, we suggest that for every GPU task you wish to launch you create a dedicated thread (see the sketch after the review note below). This will make your code far more manageable, easier to read, and performant.
Contributor:

umm - one CPU thread per GPU - actually you can do more tasks on that thread as long as they target the same GPU. Also the section imo needs a link to https://developer.nvidia.com/blog/cuda-pro-tip-always-set-current-device-avoid-multithreading-bugs/
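Tying the two guidelines together, here is a minimal sketch of the enumerate-then-spawn pattern; `get_device_count`, `set_device`, and `run_task` are placeholder stubs (the real equivalents are the CUDA runtime's cudaGetDeviceCount/cudaSetDevice, exposed through the ICICLE wrappers), and each CPU thread sets its current device once, per the NVIDIA pro tip linked above:

```rust
use std::thread;

// Placeholder stubs for illustration; real code would call the CUDA
// runtime (or the ICICLE wrapper around it).
fn get_device_count() -> usize { 2 }
fn set_device(_device_id: usize) { /* cudaSetDevice */ }
fn run_task(device_id: usize) { println!("task on GPU {device_id}"); }

fn main() {
    // Never hardcode the number of GPUs: discover it at runtime.
    let device_count = get_device_count();
    let handles: Vec<_> = (0..device_count)
        .map(|device_id| {
            thread::spawn(move || {
                // One dedicated CPU thread per GPU: set the current
                // device once, then every task launched from this
                // thread targets that same GPU.
                set_device(device_id);
                run_task(device_id);
            })
        })
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
}
```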


## Device context API

The `DeviceContext` is embedded into `NTTConfig`, `MSMConfig` and `PoseidonConfig`, meaning you can simple pass a `device_id` to your existing config an the same computation will be triggered on a different device automatically.
Contributor:

and typo?

Contributor:

actually the current implementation doesn't have the "automatic" part - we just check that the device_id from the config matches the current device id for the thread, so it won't be executed on the wrong device
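A minimal sketch of the semantics described in this thread; `DeviceContext`, `MsmConfig`, `msm`, and `current_device` are simplified stand-ins rather than the exact ICICLE signatures:

```rust
// Simplified stand-ins; the real DeviceContext also carries a stream
// and a memory pool, and the real MSM config carries MSM parameters.
struct DeviceContext {
    device_id: usize, // which GPU this computation is meant for
}

struct MsmConfig {
    ctx: DeviceContext,
}

// Placeholder for querying the device bound to the calling thread
// (cudaGetDevice).
fn current_device() -> usize {
    0
}

fn msm(cfg: &MsmConfig) {
    // As the review notes, nothing switches devices automatically:
    // the library only checks that the config's device_id matches
    // the device currently bound to this thread.
    assert_eq!(
        cfg.ctx.device_id,
        current_device(),
        "config targets a device this thread is not bound to"
    );
    println!("MSM running on device {}", cfg.ctx.device_id);
}

fn main() {
    // The calling thread is bound to device 0 here, so a config with
    // device_id 0 passes the check.
    let cfg = MsmConfig { ctx: DeviceContext { device_id: 0 } };
    msm(&cfg);
}
```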


#### [`DeviceContext`](https://github.com/vhnatyk/icicle/blob/eef6876b037a6b0797464e7cdcf9c1ecfcf41808/wrappers/rust/icicle-cuda-runtime/src/device_context.rs#L11)

Represents the configuration of a CUDA device, encapsulating the device's stream, ID, and memory pool. The default device is always `0`, unless configured otherwise.
Contributor:

"unless configured otherwise" should probably be removed - I doubt it's possible
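For orientation, a sketch of the shape the description above implies; the field names and types are assumptions, and the linked device_context.rs remains the authoritative definition:

```rust
// Placeholder wrapper types, standing in for the CUDA runtime wrappers.
pub struct CudaStream;
pub struct CudaMemPool;

// Sketch of the DeviceContext described above: a stream, a device ID,
// and a memory pool.
pub struct DeviceContext<'a> {
    /// Stream on which the computation is enqueued.
    pub stream: &'a CudaStream,
    /// Index of the GPU to run on; defaults to 0.
    pub device_id: usize,
    /// Memory pool used for device allocations.
    pub mempool: CudaMemPool,
}
```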


- **`device_id: usize`**

The index of the GPU currently in use. The default value is `0`, indicating the first GPU in the system.
Contributor:

umm, assuming the invocation command was prepended with CUDA_VISIBLE_DEVICES=2,3,7 in a system with 8 GPUs, device_id 0 will correspond to the GPU with id 2, so technically the third GPU in the system

@DmytroTym (Contributor):

@vhnatyk @ImmanuelSegol maybe we can fix the issues in #389

ImmanuelSegol added a commit that referenced this pull request Feb 25, 2024
* Update README.md (#385)

* refactor

* refactor

* refactor

* rename task

* update codespell

* multi gpu docs (#391)

* Refactor

* refacotr

* fix typo

* Apply suggestions from code review

Co-authored-by: Jeremy Felder <jeremy.felder1@gmail.com>

* refactor

* refactor

---------

Co-authored-by: DmytroTym <dmytrotym1@gmail.com>
Co-authored-by: ChickenLover <Romangg81@gmail.com>
Co-authored-by: Jeremy Felder <jeremy.felder1@gmail.com>
ImmanuelSegol added a commit that referenced this pull request Feb 28, 2024
migrate docs website + improved docs (#389)

* Update README.md (#385)

* refactor

* refactor

* refactor

* rename task

* update codespell

* multi gpu docs (#391)

* Refactor

* refacotr

* fix typo

* Apply suggestions from code review



* refactor

* refactor

---------

Co-authored-by: ImmanuelSegol <3ditds@gmail.com>
Co-authored-by: DmytroTym <dmytrotym1@gmail.com>
Co-authored-by: ChickenLover <Romangg81@gmail.com>