feat: Allow reusing same shared IpcSharedMem for transfers #356

Open

sagudev wants to merge 2 commits into base: main

Conversation

sagudev (Member) commented Aug 20, 2024

Simple "solution" to #126, it's not exactly zero-copy, but it allows to modify inner buffer of IpcSharedMem and then send it back. This is all unsafe but we can uphold all safety variants for GPUBuffer (request-response msgs).

It works in the test, but I haven't dug into the underlying system APIs to check whether this is actually allowed.
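Concretely, judging from the example below, the new API appears to be an unsafe deref_mut method on IpcSharedMemory that yields a mutable view (&mut [u8]) of the shared region; the exact signature is in src/ipc.rs.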

Example

before:

const SIZE: usize = 50;
let (tx1, rx1) = ipc::channel().unwrap();
let (tx2, rx2) = ipc::channel().unwrap();
std::thread::spawn(move || {
    let mut data = rx1.recv().unwrap() // allocation 1
                                      .to_vec(); // allocation 2
    for i in 0..SIZE {
        data[i] = i as u8;
    }
    tx2.send(IpcSharedMemory::from_bytes(&data)).unwrap(); // allocation 3
});
tx1.send(IpcSharedMemory::from_byte(0, SIZE)).unwrap();
rx2.recv().unwrap();

after:

const SIZE: usize = 50;
let (tx1, rx1) = ipc::channel().unwrap();
let (tx2, rx2) = ipc::channel().unwrap();
std::thread::spawn(move || {
    let mut ism = rx1.recv().unwrap(); // allocation 1
    {
        // SAFETY: the IpcSharedMemory was constructed by the main thread, which
        // will not touch it again until it is sent back, so we have exclusive access.
        let data = unsafe { ism.deref_mut() };
        for i in 0..SIZE {
            data[i] = i as u8;
        }
    }
    tx2.send(ism).unwrap();
});
tx1.send(IpcSharedMemory::from_byte(0, SIZE)).unwrap();
rx2.recv().unwrap();
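Compared to the first version, this drops the to_vec copy (allocation 2) and the fresh shared memory segment built for the reply (allocation 3); only the mapping on receive (allocation 1) remains, which is why this is close to, but not exactly, zero-copy.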

Signed-off-by: sagudev <16504129+sagudev@users.noreply.github.com>
sagudev (Member, Author) commented Aug 20, 2024

Benchmarking ping_pong_shared_mem_mut_4194304_100: Collecting 100 samples in estimated 5.0000 s (
ping_pong_shared_mem_mut_4194304_100
                        time:   [0.0000 ps 0.0000 ps 0.0000 ps]
Found 13 outliers among 100 measurements (13.00%)
  5 (5.00%) high mild
  8 (8.00%) high severe

Benchmarking ping_pong_shared_mem_4194304_100: Collecting 100 samples in estimated 5.1199 s (200 
ping_pong_shared_mem_4194304_100
                        time:   [225.63 ms 226.61 ms 227.80 ms]
Found 4 outliers among 100 measurements (4.00%)
  1 (1.00%) low mild
  2 (2.00%) high mild
  1 (1.00%) high severe

Benchmarking ping_pong_shared_mem_mut_4194304_125: Collecting 100 samples in estimated 5.0000 s (
ping_pong_shared_mem_mut_4194304_125
                        time:   [0.0192 ps 0.0205 ps 0.0222 ps]
Found 12 outliers among 100 measurements (12.00%)
  4 (4.00%) high mild
  8 (8.00%) high severe

Benchmarking ping_pong_shared_mem_4194304_125: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 5.5s, or reduce sample count to 90.
Benchmarking ping_pong_shared_mem_4194304_125: Collecting 100 samples in estimated 5.5099 s (100 
ping_pong_shared_mem_4194304_125
                        time:   [558.99 ms 562.31 ms 566.19 ms]
Found 7 outliers among 100 measurements (7.00%)
  3 (3.00%) high mild
  4 (4.00%) high severe
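
For context, a rough sketch of what such a Criterion ping-pong benchmark could look like (the real benchmark lives in this crate's benches; the trailing numbers in the bench names are assumed to encode the payload size and the number of round trips per iteration):

use criterion::{criterion_group, criterion_main, Criterion};
use ipc_channel::ipc::{self, IpcSharedMemory};

const SIZE: usize = 4_194_304; // 4 MiB payload, matching the bench names above
const ROUND_TRIPS: usize = 100; // assumed meaning of the trailing "_100"

fn ping_pong_shared_mem(c: &mut Criterion) {
    c.bench_function("ping_pong_shared_mem_4194304_100", |b| {
        b.iter(|| {
            let (tx1, rx1) = ipc::channel().unwrap();
            let (tx2, rx2) = ipc::channel().unwrap();
            // Echo thread: copies the payload into a fresh segment for every reply
            // (the non-mut variant; the mut variant mutates in place via deref_mut).
            let echo = std::thread::spawn(move || {
                for _ in 0..ROUND_TRIPS {
                    let ism: IpcSharedMemory = rx1.recv().unwrap();
                    tx2.send(IpcSharedMemory::from_bytes(&ism)).unwrap();
                }
            });
            for _ in 0..ROUND_TRIPS {
                tx1.send(IpcSharedMemory::from_byte(0, SIZE)).unwrap();
                rx2.recv().unwrap();
            }
            echo.join().unwrap();
        })
    });
}

criterion_group!(benches, ping_pong_shared_mem);
criterion_main!(benches);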

@sagudev sagudev marked this pull request as ready for review August 20, 2024 13:35
@sagudev sagudev changed the title Allow reusing same shared IpcSharedMem for transfers feat: Allow reusing same shared IpcSharedMem for transfers Aug 20, 2024
@sagudev sagudev requested a review from jdm August 20, 2024 13:57
Review thread on src/ipc.rs: resolved.