
Proposal for Core Object Management Improvement #383

Open
bitboom opened this issue Oct 22, 2024 · 0 comments
bitboom commented Oct 22, 2024

1. Problem Background

Currently, our RMM manages core objects such as the RMM's Page Table, the Granule Status Table, and RTTs as static variables. This approach was chosen because these objects must remain accessible for the entire runtime of the program, and some of their addresses must never change. In traditional C programming this works well, since global data is straightforward to access. In Rust, however, it conflicts with the language's strict ownership and borrowing rules, which exist to guarantee memory safety and prevent data races, and that makes the static approach considerably harder to manage correctly.

1.1 Resource Management Issues with Static Objects

In Rust, drop is never called for static items, which contrasts with typical Rust memory management, where drop automatically releases a resource when its owner goes out of scope. This design rests on the assumption that the operating system reclaims all memory once the program terminates, so explicit cleanup of statics is unnecessary. In a privileged module like a hypervisor, however, that assumption does not hold: we allocate memory directly and cannot rely on anyone else to release it, so memory must be managed explicitly and carefully.
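As a minimal, self-contained sketch of the issue (the Resource type here is illustrative, not part of our RMM): a value owned by a local binding is dropped at the end of its scope, whereas a static item's destructor is never executed.

struct Resource;

impl Drop for Resource {
    fn drop(&mut self) {
        println!("releasing resource");
    }
}

// The destructor of a static item never runs, not even at program exit,
// so this Resource is never "released" by Rust itself.
static GLOBAL: Resource = Resource;

fn main() {
    // A locally owned value is dropped when it goes out of scope,
    // so this prints "releasing resource" exactly once (for _local).
    let _local = Resource;
}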

1.2 Race Condition Issues in Multi-threaded Environments

When using static variables mutably, race conditions can arise in a multi-threaded environment. To prevent this, locking mechanisms like mutexes are required. However, this introduces performance overhead and increases code complexity, as a lock must be acquired each time these objects are accessed.
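To make this concrete, the sketch below (illustrative only, not RMM code) shows why a mutable static forces a choice: either unsafe direct access that the compiler cannot check for data races, or a lock on every access, as in the lazy_static example later in this proposal.

// A plain mutable static: every access must be wrapped in `unsafe`,
// because the compiler cannot rule out a data race with another core.
static mut COUNTER: usize = 0;

fn bump() {
    unsafe {
        COUNTER += 1; // undefined behaviour if two cores execute this concurrently
    }
}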

2. Proposed Improvement: Using Pinning

To address these issues, we can consider utilizing Rust's concept of Pinning.

2.1 Overview of Rust Pinning

Pinning is a mechanism in Rust that guarantees an object's memory address will not change. For example, an object can be pinned behind the Pin<Box<T>> type, where T is the object's type; Box::pin(object) heap-allocates the object and returns a pinned pointer to it, after which the object cannot be moved in memory. (Pin::new only accepts pointers to types that implement Unpin, so Box::pin is the usual way to obtain a Pin<Box<T>>.) By pinning an object, we make it immovable for the rest of its lifetime, which preserves its internal state by ensuring that its memory location never changes.

Pinning is particularly useful where data structures rely on stable memory locations, such as intrusive linked lists or low-level hardware interactions that require precise control over memory. In Rust, the Pin type achieves this immovability by wrapping pointers or heap-allocated objects. Once an object is pinned, Rust's type system enforces that it cannot be moved (for types that do not implement Unpin), providing safety guarantees for systems programming tasks.
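As a short illustration of the mechanics (the Table type and its field are hypothetical), Box::pin heap-allocates a value and returns a Pin<Box<T>> through which the value can still be read, while moving it back out is rejected by the type system for types that opt out of Unpin:

use std::marker::PhantomPinned;
use std::pin::Pin;

// A hypothetical table whose physical address must stay fixed,
// e.g. because hardware registers point at it.
struct Table {
    base: usize,
    _pin: PhantomPinned, // opts out of Unpin, so Pin really forbids moving it
}

fn main() {
    // Box::pin allocates on the heap and pins in one step.
    let table: Pin<Box<Table>> = Box::pin(Table {
        base: 0x4000_0000,
        _pin: PhantomPinned,
    });

    // Reads go through Pin's Deref; the address of *table never changes.
    println!("base = {:#x} at {:p}", table.base, &*table);

    // The following would not compile, because Table is !Unpin:
    // let moved_out = Pin::into_inner(table);
}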

2.2 Specific Proposal

Instead of declaring the Page Table as a static object, we propose declaring it as a pinned object and binding it to a Monitor object to ensure its validity throughout the program's execution. Specifically:

  • Convert the Page Table into a pinned object and bind it to a Monitor object created in the main function.
  • This approach has the following advantages:
    • It resolves the memory-management issues of static objects: because the pinned object is owned by the Monitor, it is dropped when no longer needed, so its memory is actually released, unlike a static variable, whose destructor never runs. This explicit deallocation prevents memory leaks and gives us finer control over resources.
    • It reduces the cost associated with locking by leveraging Rust's memory safety rules. (While locking might still be required in a multi-core environment, its use can be minimized.)
    • It also reduces code complexity that arises from locking.

2.3 Code Complexity Comparison

To understand the reduction in code complexity, let's briefly compare the existing approach using lazy_static with Mutex versus the proposed approach using Pinning:

Existing Approach (lazy_static + Mutex)

use lazy_static::lazy_static;
use std::sync::Mutex;

lazy_static! {
    // A global page table guarded by a Mutex; every access must take the lock.
    static ref PAGE_TABLE: Mutex<PageTable> = Mutex::new(PageTable::new());
}

fn access_page_table() {
    // Locking on every access adds overhead and boilerplate, and careless
    // lock usage can lead to deadlocks.
    let mut page_table = PAGE_TABLE.lock().unwrap();
    page_table.do_something();
}

In this approach, accessing the PAGE_TABLE requires locking using a Mutex, which adds both performance overhead and increased code complexity. Every access to the page table must be wrapped in a lock operation, leading to boilerplate code and potential deadlock scenarios if not managed properly.

Proposed Approach (Pinning)

use std::pin::Pin;

struct Monitor {
    // The page table is heap-allocated and pinned, so its address stays
    // stable for as long as the Monitor owns it.
    page_table: Pin<Box<PageTable>>,
}

impl Monitor {
    fn new() -> Self {
        Self {
            // Box::pin both allocates and pins; unlike Pin::new, it does not
            // require PageTable to implement Unpin.
            page_table: Box::pin(PageTable::new()),
        }
    }

    fn access_page_table(&self) {
        // Shared access goes through Pin's Deref; no lock is taken here.
        self.page_table.do_something();
    }
}

With the proposed approach, the page_table is pinned and owned by the Monitor object, eliminating the need for explicit locking on this path. The code becomes simpler and more readable, since there are no Mutex lock and unlock operations to manage. Memory safety is guaranteed by Rust's ownership and borrowing rules, reducing the chance of runtime errors and deadlocks.
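For completeness, here is a possible usage sketch (assuming the Monitor and PageTable types above; this is not existing RMM code) showing that the pinned object lives exactly as long as the Monitor created in main and is dropped, and therefore freed, when main returns:

fn main() {
    // The Monitor created here owns the pinned PageTable for the whole run.
    let monitor = Monitor::new();

    // Accesses go through the single owner; no global lock is taken on this path.
    monitor.access_page_table();

    // When `monitor` goes out of scope, the pinned PageTable is dropped and
    // its heap allocation is released -- something that never happens for a
    // `static` PAGE_TABLE.
}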

3. Expected Benefits

This improvement will enhance the reliability of memory management for core objects and effectively reduce race condition issues in multi-threaded environments. Additionally, it will improve code readability and make maintenance easier.
