
todo-bot: touch TLS todos #415

Merged: 1 commit, Aug 12, 2019
Conversation

Orycterope (Member)

Because the TLS PR is so big, todobot bailed on it.

Make a PR that only touches every `// todo:` we opened in sunriseos#339, so todobot picks them again.
@todo

todo bot commented Aug 12, 2019

per-cpu TSSs / GDT

There are multiple things that aren't ideal about the way we handle TSSs.

Initialization

TSSs must always be initialized with an iopb_offset of `size_of::<TSS>()`, so that the TSS's data is not interpreted as the iopb.
However, because MAIN_TASK has a huge iopb (0x2001 bytes), we want it to live in the .bss and be lazily initialized (the `iopb_offset` value set, and the iopb array memset to 0xFF). `lazy_static` seems appropriate for that, and we should use it, so we cannot forget to initialize a TSS.
DOUBLE_FAULT_TASK could be statically initialized, except for the `cr3` field.

Per-cpu

But we will likely want a MAIN and a DOUBLE_FAULT TSS per core. However, they cannot trivially be put behind a `#[thread_local]`, as they are initialized with the GDT, before cpu-locals are initialized. It might be possible to make them `#[thread_local]` with some post-initialization routine that, for core 0, switches to the MAIN and DOUBLE_FAULT_TASK in the cpu-local memory area instead of the early static ones, after cpu-locals have been initialized. The early static one could do without an iopb, since we're not going to userspace with it.
For other cores, having a `#[thread_local]` inside a `lazy_static!` seems to work, but I don't yet know how cores are going to be started: whether they allocate and initialize their own GDT + MAIN + DOUBLE_FAULT TSS, or whether their parent core does it for them.
Because of these unknowns, the search for a good design for TSSs/GDT is postponed.

Locking

Since the TSSs are supposed to be cpu-local, there is no reason for them to have a mutex around them. An ideal design would be lock-less, which can be achieved either with `#[thread_local]`, or with some custom wrapper around an `UnsafeCell` just for TSSs.

DOUBLE_FAULT's cr3

The DOUBLE_FAULT TSS(s)'s cr3 must point to a page directory that remains valid (i.e. is never freed) for the entire lifetime of the kernel, and that is possibly updated when kernel page tables are modified.
For now, because we have no such hierarchy, we always make DOUBLE_FAULT's cr3 point to the current cr3, and update it when we switch page table hierarchies. However, the current way we do kernel paging is not viable for SMP. We might finally implement such a hierarchy for SMP, and we could then make the DOUBLE_FAULT TSS(s) point to it.


// TODO: per-cpu TSSs / GDT
// BODY: There are multiple things that aren't ideal about the way we handle TSSs.
// BODY:
// BODY: ## Initialization
// BODY:
// BODY: TSSs must always be initialized with an iopb_offset of `size_of::<TSS>()`,
// BODY: so that the TSS's data is not interpreted as the iopb.
// BODY:
// BODY: However, because MAIN_TASK has a huge iopb (0x2001 bytes), we want it to live in the
// BODY: .bss, and be lazy initialized (iopb_offset value, and iopb array memset to 0xFF).
// BODY: `lazy_static` seems appropriate for that, and we should use it, so we cannot *forget* to
// BODY: initialize a TSS.


This comment was generated by todo based on a TODO comment in 7333eaf in #415. cc @Orycterope.

@todo

todo bot commented Aug 12, 2019

expose current page directory's address in an arch-independent way.

pub use self::i386::{read_cr2, read_cr3}; // todo: expose current page directory's address in an arch-independent way.
pub use self::i386::lands::{KernelLand, UserLand, RecursiveTablesLand};


This comment was generated by todo based on a todo comment in 7333eaf in #415. cc @Orycterope.

@todo

todo bot commented Aug 12, 2019

permanently_disable_interrupts shouldn't be unsafe.

Disabling interrupts doesn't break any safety guarantees, and is perfectly safe as far as rustc is concerned.


// todo: permanently_disable_interrupts shouldn't be unsafe.
// body: disabling interrupts doesn't break any safety guidelines, and is perfectly safe as far as rustc is concerned.
// Disable interrupts forever!
unsafe { sync::permanently_disable_interrupts(); }
// Don't deadlock in the logger


This comment was generated by todo based on a todo comment in 7333eaf in #415. cc @Orycterope.

@todo

todo bot commented Aug 12, 2019

Kernel Stack dump update

Update the kernel stack dump functions to be compatible with the new and improved kernel panic.
Now that we know the origin (userspace or kernelspace) in the panic, this should be easy, and we can finally have userspace stack dumps that actually work.


// TODO: Kernel Stack dump update
// BODY: Update the kernel stack dump functions to be compatible with the new and improved
// BODY: kernel panic.
// BODY:
// BODY: Now that we know the origin (userspace or kernelspace) in the panic, this should
// BODY: be easy, and we can finally have userspace stack dumps that actually work.
let stackdump_source = None;
// Then print the stack


This comment was generated by todo based on a TODO comment in 7333eaf in #415. cc @Orycterope.

@todo

todo bot commented Aug 12, 2019

Libuser Thread stack guard

Currently the stack of every non-main thread is allocated on the heap, and no guard page protects against overflowing the stack and rewriting the rest of the heap.
This is of course terrible for security, as a stack overflow is then U.B.
The simplest way to fix this would be to continue allocating the stack on the heap, but remap its last page with no permissions using the yet unimplemented svcMapMemory syscall.


// todo: Libuser Thread stack guard
// body: Currently the stack of every non-main thread is allocated in the heap, and no page
// body: guard protects from stack-overflowing and rewriting all the heap.
// body:
// body: This is of course terrible for security, as with this stack overflowing is U.B.
// body:
// body: The simpler way to fix this would be to continue allocating the stack on the heap,
// body: but remap the last page with no permissions with the yet unimplemented svcMapMemory syscall.
pub fn create(entry: fn (usize) -> (), arg: usize) -> Result<Self, Error> {
let tls_elf = Once::new();


This comment was generated by todo based on a todo comment in 7333eaf in #415. cc @Orycterope.

@todo

todo bot commented Aug 12, 2019

Properly free resources after thread detach

When detaching a thread, we should ensure that the associated resources (stack, handle, context, etc.) are properly freed before the Process exits. This can be done by adding the ThreadContext to a global `Vec<>` of ThreadContexts that gets freed when the main thread (or the last thread alive?) exits.


// TODO: Properly free resource after thread detach
// BODY: When detaching a thread, we should ensure that the associated resources (stack,
// BODY: handle, context, etc...) are properly freed before the Process exits. This can be
// BODY: done by adding the ThreadContext to a global Vec<> of ThreadContext that gets freed
// BODY: when the main thread (or the last thread alive?) exits.
}
}


This comment was generated by todo based on a TODO comment in 7333eaf in #415. cc @Orycterope.

@Orycterope Orycterope requested review from roblabla and marysaka and removed request for roblabla August 12, 2019 17:34
roblabla (Member)

This case swap is a terrible hack. I love it.


3 participants