
Commit

updated docs (#602)
Otsar-Raikou authored Sep 3, 2024
1 parent 5ef9ed1 commit 0b53c1c
Showing 30 changed files with 112 additions and 124 deletions.
1 change: 0 additions & 1 deletion .github/workflows/codespell.yml
@@ -5,7 +5,6 @@ on:
branches:
- main
- V3
- yshekel/V3

jobs:
spelling-checker:
3 changes: 2 additions & 1 deletion .github/workflows/cpp.yml
@@ -4,10 +4,11 @@ on:
pull_request:
branches:
- V3
- yshekel/V3 # TODO remove when merged to V3
- main
push:
branches:
- V3
- main

concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
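Taken together, the trigger and concurrency fragments in these workflow diffs follow one pattern; a minimal sketch of the full block is below. The branch names and `group` key are from the diffs above, while `cancel-in-progress` is an assumption, since the diffs truncate after the `group` line:

```yaml
on:
  pull_request:
    branches:
      - V3
      - main
  push:
    branches:
      - V3
      - main

concurrency:
  # One group per workflow + ref, so a new push supersedes the stale run
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true  # assumed; not visible in the truncated diff
```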
1 change: 1 addition & 0 deletions .github/workflows/deploy-docs.yml
@@ -4,6 +4,7 @@ on:
push:
branches:
- main
- V3
paths:
- 'docs/**'

3 changes: 2 additions & 1 deletion .github/workflows/examples.yml
@@ -10,10 +10,11 @@ on:
pull_request:
branches:
- V3
- yshekel/V3 # TODO remove when merged to V3
- main
push:
branches:
- V3
- main

concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
9 changes: 5 additions & 4 deletions .github/workflows/golang.yml
@@ -4,10 +4,11 @@ on:
pull_request:
branches:
- V3
- yshekel/V3 # TODO remove when merged to V3
- main
push:
branches:
- V3
- V3
- main

concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
@@ -43,7 +44,7 @@ jobs:
needs: [check-changed-files, check-format, extract-cuda-backend-branch]
strategy:
matrix:
curve:
curve:
- name: bn254
build_args:
- name: bls12_381
@@ -82,7 +83,7 @@ jobs:
CURVE=$(echo ${{ matrix.curve.name }} | sed -e 's/_//g')
export ICICLE_BACKEND_INSTALL_DIR=/usr/local/lib
go test ./$CURVE/tests -count=1 -failfast -p 2 -timeout 60m -v
build-fields-linux:
name: Build and test fields on Linux
runs-on: [self-hosted, Linux, X64, icicle]
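The Go test step above derives the package directory from the matrix curve name by stripping underscores with `sed`; a standalone sketch of that transformation:

```shell
# Normalize a matrix curve name the way the workflow's test step does:
# strip underscores so "bls12_381" becomes the package dir "bls12381".
curve_name="bls12_381"
CURVE=$(echo "${curve_name}" | sed -e 's/_//g')
echo "${CURVE}"
```

For a name with no underscores, such as `bn254`, the substitution is a no-op, which is why the workflow can apply it uniformly across the whole matrix.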
2 changes: 1 addition & 1 deletion .github/workflows/release.yml
@@ -16,7 +16,7 @@ on:
jobs:
release:
name: Release
runs-on: ubuntu-latest
runs-on: [self-hosted, Linux, X64, icicle]
steps:
- name: Checkout
uses: actions/checkout@v4
3 changes: 2 additions & 1 deletion .github/workflows/rust.yml
@@ -4,10 +4,11 @@ on:
pull_request:
branches:
- V3
- yshekel/V3 # TODO remove when merged to V3
- main
push:
branches:
- V3
- main

concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
1 change: 1 addition & 0 deletions .github/workflows/test-deploy-docs.yml
@@ -4,6 +4,7 @@ on:
pull_request:
branches:
- main
- V3
paths:
- 'docs/**'

2 changes: 1 addition & 1 deletion README.md
@@ -47,7 +47,7 @@ This guide will help you get started with ICICLE in C++, Rust, and Go.
> **Developers**: We highly recommend reading our [documentation](https://dev.ingonyama.com/) for a comprehensive explanation of ICICLE’s capabilities.
> [!TIP]
> Try out ICICLE by running some [examples] available in C++, Rust, and Go bindings. Check out our install-and-use examples in [C++](https://github.com/ingonyama-zk/icicle/tree/yshekel/V3/examples/c%2B%2B/install-and-use-icicle), [Rust](https://github.com/ingonyama-zk/icicle/tree/yshekel/V3/examples/rust/install-and-use-icicle) and [Go](TODO)
> Try out ICICLE by running some [examples] available in C++, Rust, and Go bindings. Check out our install-and-use examples in [C++](https://github.com/ingonyama-zk/icicle/tree/main/examples/c%2B%2B/install-and-use-icicle), [Rust](https://github.com/ingonyama-zk/icicle/tree/main/examples/rust/install-and-use-icicle) and [Go](TODO)
### Prerequisites

23 changes: 8 additions & 15 deletions docs/docs/icicle/arch_overview.md
@@ -1,34 +1,27 @@
# Architecture Overview

## Introduction

ICICLE V3 is designed with flexibility and extensibility in mind, offering a robust framework that supports multiple compute backends and accommodates various cryptographic needs. This section provides an overview of ICICLE's architecture, highlighting its open and closed components, multi-device support, and extensibility.
ICICLE v3 is designed with flexibility and extensibility in mind, offering a robust framework that supports multiple compute backends and accommodates various cryptographic needs. This section provides an overview of ICICLE's architecture, highlighting its open and closed components, multi-device support, and extensibility.

## Open Frontend and CPU Backend
### Frontend and CPU Backend

- **Frontend (FE):** The ICICLE frontend is open-source and designed to provide a unified API across different programming languages, including C++, Rust, and Go. This frontend abstracts the complexity of working with different backends, allowing developers to write backend-agnostic code that can be deployed across various platforms.
- **CPU Backend:** ICICLE includes an open-source CPU backend that allows for development and testing on standard hardware. This backend is ideal for prototyping and for environments where specialized hardware is not available.

## Closed CUDA Backend
## CUDA Backend

- **CUDA Backend:** ICICLE also includes a high-performance CUDA backend that is closed-source. This backend is optimized for NVIDIA GPUs and provides significant acceleration for cryptographic operations.
- **Installation and Licensing:** The CUDA backend needs to be downloaded and installed. Refer to the [installation guide](./install_cuda_backend.md) for detailed instructions.

## Extensible Design

ICICLE is designed to be extensible, allowing developers to integrate new backends or customize existing ones to suit their specific needs. The architecture supports:

- **Custom Backends:** Developers can create their own backends to leverage different hardware or optimize for specific use cases. The process of building and integrating a custom backend is documented in the [Build Your Own Backend](./build_your_own_backend.md) section.
- **Pluggable Components:** ICICLE's architecture allows for easy integration of additional cryptographic primitives or enhancements, ensuring that the framework can evolve with the latest advancements in cryptography and hardware acceleration.

## Multi-Device Support

- **Scalability:** ICICLE supports multi-device configurations, enabling the distribution of workloads across multiple GPUs or other hardware accelerators. This feature allows for scaling ZK proofs and other cryptographic operations across larger data centers or high-performance computing environments.

---

### Conclusion
## Build Your Own Backend

ICICLE is designed to be extensible, allowing developers to integrate new backends or customize existing ones to suit their specific needs. The architecture supports:

The architecture of ICICLE V3 is built to be flexible, scalable, and extensible, making it a powerful tool for developers working with zero-knowledge proofs and other cryptographic operations. Whether you're working with open-source CPU backends or closed-source CUDA backends, ICICLE provides the tools and flexibility needed to achieve high performance and scalability in cryptographic computations.
- **Custom Backends:** Developers can create their own backends to leverage different hardware or optimize for specific use cases. The process of building and integrating a custom backend is documented in the [Build Your Own Backend](./build_your_own_backend.md) section.
- **Pluggable Components:** ICICLE's architecture allows for easy integration of additional cryptographic primitives or enhancements, ensuring that the framework can evolve with the latest advancements in cryptography and hardware acceleration.

Explore the following sections to learn more about building your own backend, using ICICLE across multiple devices, and integrating it into your projects.
5 changes: 2 additions & 3 deletions docs/docs/icicle/build_from_source.md
@@ -175,6 +175,5 @@ Make sure to install icicle libs when installing a library/application that depe
### Go: Build, Test, and Install (TODO)
## Install cuda backend
[Install CUDA Backend (and License)](./install_cuda_backend.md#installation)
---
**To install CUDA backend and license click [here](./install_cuda_backend.md#installation)**
18 changes: 8 additions & 10 deletions docs/docs/icicle/getting_started.md
@@ -24,21 +24,19 @@ Future releases will also include support for macOS and other systems.

Each ICICLE release includes a tar file named `icicle30-<distribution>.tar.gz`, where `icicle30` indicates version 3.0. This tar file contains ICICLE frontend build artifacts and headers for a specific distribution. The tar file structure includes:

- **`./icicle/include/`**: This directory contains all the necessary header files for using the Icicle library from C++.
- **`./icicle/include/`**: This directory contains all the necessary header files for using the ICICLE library from C++.
- **`./icicle/lib/`**:
- **Icicle Libraries**: All the core Icicle libraries are located in this directory. Applications linking to Icicle will use these libraries.
- **Icicle Libraries**: All the core ICICLE libraries are located in this directory. Applications linking to ICICLE will use these libraries.
- **Backends**: The `./icicle/lib/backend/` directory houses backend libraries, including the CUDA backend (not included in this tar).

- **CUDA backend** comes as separate tar `icicle30-<distribution>-cuda122.tar.gz`
- per distribution, for icicle-frontend V3.0 and CUDA 12.2.
- per distribution, for ICICLE-frontend v3.0 and CUDA 12.2.

## installing and using icicle
## Installing and using ICICLE

- [Full C++ example](https://github.com/ingonyama-zk/icicle/tree/yshekel/V3/examples/c++/install-and-use-icicle)
- [Full Rust example](https://github.com/ingonyama-zk/icicle/tree/yshekel/V3/examples/rust/install-and-use-icicle)
- [Full Go example](https://github.com/ingonyama-zk/icicle/tree/yshekel/V3/examples/golang/install-and-use-icicle)

*(TODO update links to main branch when merged)
- [Full C++ example](https://github.com/ingonyama-zk/icicle/tree/main/examples/c++/install-and-use-icicle)
- [Full Rust example](https://github.com/ingonyama-zk/icicle/tree/main/examples/rust/install-and-use-icicle)
- [Full Go example](https://github.com/ingonyama-zk/icicle/tree/main/examples/golang/install-and-use-icicle)

1. **Extract and install the Tar Files**:
- [Download](https://github.com/ingonyama-zk/icicle/releases) the appropriate tar files for your distribution (Ubuntu 20.04, Ubuntu 22.04, or UBI 8,9 for RHEL compatible binaries).
@@ -106,7 +104,7 @@ Each ICICLE release includes a tar file named `icicle30-<distribution>.tar.gz`,

**Rust**
- When building the ICICLE crates, ICICLE frontend libs are built from source, along with the Rust bindings. They are installed to `target/<buildtype>/deps/icicle`, and Cargo will link them correctly. Note that you still need to install the CUDA backend if you have a CUDA GPU.
- Simply use `cargo build` or `cargo run` and it should link to icicle libs.
- Simply use `cargo build` or `cargo run` and it should link to ICICLE libs.

**Go** - TODO

2 changes: 1 addition & 1 deletion docs/docs/icicle/golang-bindings/multi-gpu.md
@@ -1,6 +1,6 @@
# Multi GPU APIs

To learn more about the theory of Multi GPU programming refer to [this part](../multi-gpu.md) of documentation.
To learn more about the theory of Multi GPU programming refer to [this part](../multi-device.md) of documentation.

Here we will cover the core multi GPU apis and an [example](#a-multi-gpu-example)

4 changes: 2 additions & 2 deletions docs/docs/icicle/install_cuda_backend.md
@@ -3,7 +3,7 @@

## Overview

The CUDA backend in ICICLE V3 is a high-performance, closed-source component designed to accelerate cryptographic computations using NVIDIA GPUs. This backend includes specialized libraries optimized for various cryptographic fields and curves, providing significant speedups for operations such as MSM, NTT, and elliptic curve operations.
The CUDA backend in ICICLE v3 is a high-performance, closed-source component designed to accelerate cryptographic computations using NVIDIA GPUs. This backend includes specialized libraries optimized for various cryptographic fields and curves, providing significant speedups for operations such as MSM, NTT, and elliptic curve operations.

## Installation

@@ -12,7 +12,7 @@ The CUDA backend is a closed-source component that requires a license. [To insta
### Licensing

:::note
Currently, the CUDA backend is free to use via Ingonyama’s icicle-cuda-backend-license server. By default, the CUDA backend will attempt to access this server. For more details, please contact support@ingonyama.com.
Currently, the CUDA backend is free to use via Ingonyama’s backend license server. By default, the CUDA backend will attempt to access this server. For more details, please contact support@ingonyama.com.
:::

The CUDA backend requires a valid license to function. There are two types of CUDA backend licenses:
2 changes: 1 addition & 1 deletion docs/docs/icicle/libraries.md
@@ -8,7 +8,7 @@ ICICLE is composed of two main logical parts:

The ICICLE device library serves as an abstraction layer for interacting with various hardware devices. It provides a comprehensive interface for tasks such as setting the active device, querying device-specific information like free and total memory, determining the number of available devices, and managing memory allocation. Additionally, it offers functionality for copying data to and from devices, managing task queues (streams) for efficient device utilization, and abstracting the complexities of device management away from the user.

See programmers guide for more details. [C++](./programmers_guide/cpp#device-management), [Rust](./programmers_guide/rust#device-management), [Go TODO](./programmers_guide/go)
See programmers guide for more details. [C++](./programmers_guide/cpp#device-management), [Rust](./programmers_guide/rust#device-management), [Go](./programmers_guide/go)

## ICICLE Core

16 changes: 8 additions & 8 deletions docs/docs/icicle/migrate_from_v2.md
@@ -1,7 +1,7 @@

# Migration from Icicle V2 to V3
# Migration from ICICLE v2 to v3

Icicle V3 introduces a unified interface for high-performance computing across various devices, extending the functionality that was previously limited to GPUs in Icicle V2. This guide will assist you in transitioning from Icicle V2 to V3 by highlighting the key changes and providing examples for both C++ and Rust.
ICICLE v3 introduces a unified interface for high-performance computing across various devices, extending the functionality that was previously limited to GPUs in Icicle V2. This guide will assist you in transitioning from ICICLE v2 to v3 by highlighting the key changes and providing examples for both C++ and Rust.

## Key Conceptual Changes

@@ -10,20 +10,20 @@ Icicle V3 introduces a unified interface for high-performance computing across v
- **Unified API**: The APIs are now standardized across all devices, ensuring consistent usage and reducing the complexity of managing different hardware backends.

:::warning
When migrating from V2 to V3, it is important to note that, by default, your code now executes on the CPU. This contrasts with V2, which was exclusively a CUDA library. For details on installing and using CUDA GPUs, refer to the [CUDA backend guide](./install_cuda_backend.md).
When migrating from v2 to v3, it is important to note that, by default, your code now executes on the CPU. This contrasts with V2, which was exclusively a CUDA library. For details on installing and using CUDA GPUs, refer to the [CUDA backend guide](./install_cuda_backend.md).
:::

## Migration Guide for C++

### Replacing CUDA APIs with Icicle APIs

In Icicle V3, CUDA-specific APIs have been replaced with Icicle APIs that are designed to be backend-agnostic. This allows your code to run on different devices without requiring modifications.
In ICICLE v3, CUDA-specific APIs have been replaced with Icicle APIs that are designed to be backend-agnostic. This allows your code to run on different devices without requiring modifications.

- **Device Management**: Use Icicle's device management APIs instead of CUDA-specific functions. For example, instead of `cudaSetDevice()`, you would use `icicle_set_device()`.
- **Device Management**: Use ICICLE's device management APIs instead of CUDA-specific functions. For example, instead of `cudaSetDevice()`, you would use `icicle_set_device()`.

- **Memory Management**: Replace CUDA memory management functions such as `cudaMalloc()` and `cudaFree()` with Icicle's `icicle_malloc()` and `icicle_free()`.
- **Memory Management**: Replace CUDA memory management functions such as `cudaMalloc()` and `cudaFree()` with ICICLE's `icicle_malloc()` and `icicle_free()`.

- **Stream Management**: Replace `cudaStream_t` with `icicleStreamHandle` and use Icicle's stream management functions.
- **Stream Management**: Replace `cudaStream_t` with `icicleStreamHandle` and use ICICLE's stream management functions.

For a detailed overview and examples, please refer to the [Icicle C++ Programmer's Guide](./programmers_guide/cpp.md) for full API details.

@@ -55,7 +55,7 @@ icicle_free(device_ptr);

### Replacing `icicle_cuda_runtime` with `icicle_runtime`

In Icicle V3, the `icicle_cuda_runtime` crate is replaced with the `icicle_runtime` crate. This change reflects the broader support for different devices beyond just CUDA-enabled GPUs.
In ICICLE v3, the `icicle_cuda_runtime` crate is replaced with the `icicle_runtime` crate. This change reflects the broader support for different devices beyond just CUDA-enabled GPUs.

- **Device Management**: Use `icicle_runtime`'s device management functions instead of those in `icicle_cuda_runtime`. The `Device` struct remains central, but it's now part of a more generalized runtime.

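For the crate swap described in the Rust section above, the corresponding `Cargo.toml` change might look like the sketch below; the hyphenated package names and version numbers are assumptions for illustration, not taken from this diff:

```toml
[dependencies]
# v2 (remove): CUDA-specific runtime crate
# icicle-cuda-runtime = "2"

# v3: backend-agnostic runtime crate
icicle-runtime = "3.0"
```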
6 changes: 4 additions & 2 deletions docs/docs/icicle/multi-device.md
@@ -17,7 +17,9 @@ There are many [different strategies](https://github.com/NVIDIA/multi-gpu-progra

This approach typically involves a single or multiple CPUs managing threads that read from and write to multiple devices, such as GPUs, CPUs, or accelerators. You can think of it as a scaled-up HOST-Device model.

![Multi-Device Server Approach](image.png)
<p align="center">
<img src="image.png" alt="Multi-Device Server Approach"/>
</p>

This approach doesn't necessarily allow for tackling larger computation sizes, but it does enable the simultaneous computation of tasks that wouldn't fit on a single device.

@@ -33,7 +35,7 @@ This approach requires redesigning the algorithm at the software level to be com

Currently, ICICLE adopts a Device Server approach, where we assume you have a machine with multiple devices (GPUs, CPUs, etc.) and wish to run computations on each device.

Each thread needs to set a device. Following api calls (including memory management and compute apis) will execute on that device, for this thread.
Each thread needs to set a device. Following API calls (including memory management and compute APIs) will execute on that device, for this thread.

### C++
```cpp