```rust
#[repr(u64)]
pub enum MemAttr {
    Device = 0,
    Normal = 1,
    NormalNonCacheable = 2,
}
```
The memory attribute index field in the descriptor, which is used to index into the MAIR_ELx (Memory Attribute Indirection Register).
Device-nGnRE memory.
Normal memory.
Normal non-cacheable memory.
AArch64 VMSAv8-64 translation table format descriptors.
pub struct A64PTE(/* private fields */);
A VMSAv8-64 translation table descriptor.
Note that the `AttrIndx[2:0]` (bits [4:2]) field is set to 0 for device memory and 1 for normal memory. The system must configure the MAIR_ELx system register accordingly.
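Since the descriptor only stores an attribute index, the MAIR_ELx register must map those indices to actual memory attributes. The following is a minimal sketch of how such a value could be composed; the attribute encodings (0x04 for Device-nGnRE, 0xFF for Normal write-back, 0x44 for Normal non-cacheable) follow the Arm architecture manual, and the helper names are illustrative, not part of this crate's API:

```rust
// Hypothetical helpers, not part of page_table_entry: pack one 8-bit
// attribute encoding per MemAttr index into a MAIR_ELx value.
const ATTR_DEVICE_NGNRE: u64 = 0x04; // Device-nGnRE memory
const ATTR_NORMAL_WB: u64 = 0xff; // Normal memory, write-back cacheable
const ATTR_NORMAL_NC: u64 = 0x44; // Normal memory, non-cacheable

/// Place an 8-bit attribute at the slot selected by its AttrIndx.
const fn mair_attr(attr: u64, index: u64) -> u64 {
    attr << (index * 8)
}

/// A MAIR value matching MemAttr::Device = 0, Normal = 1,
/// NormalNonCacheable = 2.
const MAIR_VALUE: u64 = mair_attr(ATTR_DEVICE_NGNRE, 0)
    | mair_attr(ATTR_NORMAL_WB, 1)
    | mair_attr(ATTR_NORMAL_NC, 2);

fn main() {
    assert_eq!(MAIR_VALUE, 0x44_ff_04);
}
```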
pub struct DescriptorAttr(/* private fields */);
Memory attribute fields in the VMSAv8-64 translation table format descriptors.
The descriptor gives the address of the next level of translation table or a 4 KB page (not a 2 MB or 1 GB block).
Non-secure bit. For memory accesses from Secure state, specifies whether the output address is in Secure or Non-secure memory.
Shareability: Inner or Outer Shareable (otherwise Non-shareable).
Indicates that 16 adjacent translation table entries point to contiguous memory regions.
Access permissions limit for subsequent levels of lookup: access at EL0 not permitted.
Access permissions limit for subsequent levels of lookup: write access not permitted.
Get the underlying bits value. The returned value is exactly the bits set in this flags value.
Convert from a bits value. This method will return `None` if any unknown bits are set.
Convert from a bits value, unsetting any unknown bits.
Convert from a bits value exactly.
Get a flags value with the bits of a flag with the given name set. This method will return `None` if `name` is empty or doesn't correspond to any named flag.
Whether any set bits in a source flags value are also set in a target flags value.
Whether all set bits in a source flags value are also set in a target flags value.
The intersection of a source flags value with the complement of a target flags value (`&!`). This method is not equivalent to `self & !other` when `other` has unknown bits set: `remove` and `difference` won't truncate `other`, but the `!` operator will.
The bitwise exclusive-or (`^`) of the bits in two flags values.
Call `insert` when `value` is `true` or `remove` when `value` is `false`.
The bitwise and (`&`) of the bits in two flags values.
The bitwise or (`|`) of the bits in two flags values.
The bitwise negation (`!`) of the bits in a flags value, truncating the result.
Yield a set of contained flags values. Each yielded flags value will correspond to a defined named flag; any unknown bits will be yielded together as a final flags value.
Yield a set of contained named flags values. This method is like `iter`, except it only yields bits in contained named flags; any unknown bits, or bits not corresponding to a contained flag, will not be yielded.
This crate provides the definition of page table entries for various hardware architectures.
Currently supported architectures and page table entry types:
- `x86_64::X64PTE`
- `aarch64::A64PTE`
- `riscv::Rv64PTE`
All these types implement the `GenericPTE` trait, which provides unified methods for manipulating various page table entries.
```rust
use memory_addr::PhysAddr;
use page_table_entry::{GenericPTE, MappingFlags, x86_64::X64PTE};

let paddr = PhysAddr::from(0x233000);
let pte = X64PTE::new_page(
    paddr,
    /* flags: */ MappingFlags::READ | MappingFlags::WRITE,
    /* is_huge: */ false,
);
assert!(!pte.is_unused());
assert!(pte.is_present());
assert_eq!(pte.paddr(), paddr);
assert_eq!(
    pte.bits(),
    0x8000_0000_0023_3003, // PRESENT | WRITE | NO_EXECUTE | paddr(0x233000)
);
```
pub struct PTEFlags(/* private fields */);
Page-table entry flags.
Get the underlying bits value. The returned value is exactly the bits set in this flags value.
Convert from a bits value. This method will return `None` if any unknown bits are set.
Convert from a bits value, unsetting any unknown bits.
Convert from a bits value exactly.
Get a flags value with the bits of a flag with the given name set. This method will return `None` if `name` is empty or doesn't correspond to any named flag.
Whether any set bits in a source flags value are also set in a target flags value.
Whether all set bits in a source flags value are also set in a target flags value.
The intersection of a source flags value with the complement of a target flags value (`&!`). This method is not equivalent to `self & !other` when `other` has unknown bits set: `remove` and `difference` won't truncate `other`, but the `!` operator will.
The bitwise exclusive-or (`^`) of the bits in two flags values.
Call `insert` when `value` is `true` or `remove` when `value` is `false`.
The bitwise and (`&`) of the bits in two flags values.
The bitwise or (`|`) of the bits in two flags values.
The bitwise negation (`!`) of the bits in a flags value, truncating the result.
Yield a set of contained flags values. Each yielded flags value will correspond to a defined named flag; any unknown bits will be yielded together as a final flags value.
Yield a set of contained named flags values. This method is like `iter`, except it only yields bits in contained named flags; any unknown bits, or bits not corresponding to a contained flag, will not be yielded.
pub struct Rv64PTE(/* private fields */);
Sv39 and Sv48 page table entry for RV64 systems.
pub struct MappingFlags(/* private fields */);
Generic page table entry flags that indicate the corresponding mapped memory region permissions and attributes.
Get the underlying bits value. The returned value is exactly the bits set in this flags value.
Convert from a bits value. This method will return `None` if any unknown bits are set.
Convert from a bits value, unsetting any unknown bits.
Convert from a bits value exactly.
Get a flags value with the bits of a flag with the given name set. This method will return `None` if `name` is empty or doesn't correspond to any named flag.
Whether any set bits in a source flags value are also set in a target flags value.
Whether all set bits in a source flags value are also set in a target flags value.
The intersection of a source flags value with the complement of a target flags value (`&!`). This method is not equivalent to `self & !other` when `other` has unknown bits set: `remove` and `difference` won't truncate `other`, but the `!` operator will.
The bitwise exclusive-or (`^`) of the bits in two flags values.
Call `insert` when `value` is `true` or `remove` when `value` is `false`.
The bitwise and (`&`) of the bits in two flags values.
The bitwise or (`|`) of the bits in two flags values.
The bitwise negation (`!`) of the bits in a flags value, truncating the result.
Yield a set of contained flags values. Each yielded flags value will correspond to a defined named flag; any unknown bits will be yielded together as a final flags value.
Yield a set of contained named flags values. This method is like `iter`, except it only yields bits in contained named flags; any unknown bits, or bits not corresponding to a contained flag, will not be yielded.
```rust
pub trait GenericPTE: Debug + Clone + Copy + Sync + Send + Sized {
    // Required methods
    fn new_page(paddr: PhysAddr, flags: MappingFlags, is_huge: bool) -> Self;
    fn new_table(paddr: PhysAddr) -> Self;
    fn paddr(&self) -> PhysAddr;
    fn flags(&self) -> MappingFlags;
    fn set_paddr(&mut self, paddr: PhysAddr);
    fn set_flags(&mut self, flags: MappingFlags, is_huge: bool);
    fn bits(self) -> usize;
    fn is_unused(&self) -> bool;
    fn is_present(&self) -> bool;
    fn is_huge(&self) -> bool;
    fn clear(&mut self);
}
```
A generic page table entry.
All architecture-specific page table entry types implement this trait.
Creates a page table entry that points to a terminal page or block.
Creates a page table entry that points to a next-level page table.
Returns the flags of this entry.
Sets the flags of this entry.
Returns whether the flags of this entry indicate that it is present.
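To make the shape of such an implementation concrete, here is a toy sketch of the idea behind `GenericPTE`: a simplified stand-in trait (not the crate's real definitions) implemented for an x86_64-like entry where bit 0 is "present", bit 1 is "writable", and bits 12..52 hold the frame address. All names here are illustrative:

```rust
// A simplified stand-in for the GenericPTE idea, for illustration only.
type PhysAddr = usize;

trait ToyPTE: Sized {
    fn new_page(paddr: PhysAddr, writable: bool) -> Self;
    fn paddr(&self) -> PhysAddr;
    fn is_present(&self) -> bool;
    fn clear(&mut self);
}

#[derive(Clone, Copy)]
struct ToyX64PTE(u64);

const PRESENT: u64 = 1 << 0;
const WRITABLE: u64 = 1 << 1;
const ADDR_MASK: u64 = 0x000f_ffff_ffff_f000; // bits 12..52

impl ToyPTE for ToyX64PTE {
    fn new_page(paddr: PhysAddr, writable: bool) -> Self {
        // Keep only the frame-address bits, then set the flag bits.
        let mut bits = (paddr as u64 & ADDR_MASK) | PRESENT;
        if writable {
            bits |= WRITABLE;
        }
        ToyX64PTE(bits)
    }
    fn paddr(&self) -> PhysAddr {
        (self.0 & ADDR_MASK) as PhysAddr
    }
    fn is_present(&self) -> bool {
        self.0 & PRESENT != 0
    }
    fn clear(&mut self) {
        self.0 = 0;
    }
}

fn main() {
    let mut pte = ToyX64PTE::new_page(0x233000, true);
    assert!(pte.is_present());
    assert_eq!(pte.paddr(), 0x233000);
    pte.clear();
    assert!(!pte.is_present());
}
```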
+pub struct PTF(/* private fields */);
Possible flags for a page table entry.
+Specifies whether the mapped frame or page table is loaded in memory.
+Controls whether writes to the mapped frames are allowed.
+If this bit is unset in a level 1 page table entry, the mapped frame is read-only. +If this bit is unset in a higher level page table entry the complete range of mapped +pages is read-only.
+Controls whether accesses from userspace (i.e. ring 3) are permitted.
+If this bit is set, a “write-through” policy is used for the cache, else a “write-back” +policy is used.
Disables caching for the pointed entry, if it is cacheable.
+Set by the CPU when the mapped frame or page table is accessed.
+Set by the CPU on a write to the mapped frame.
+Specifies that the entry maps a huge frame instead of a page table. Only allowed in +P2 or P3 tables.
+Indicates that the mapping is present in all address spaces, so it isn’t flushed from +the TLB on an address space switch.
Available to the OS, can be used to store additional data, e.g. custom flags. (The same description applies to each of the OS-available bits.)
Forbids code execution from the mapped frames.
Can only be used when the no-execute page protection feature is enabled in the EFER register.
Get a flags value with all bits unset.
Get a flags value with all known bits set.
Get the underlying bits value. The returned value is exactly the bits set in this flags value.
Convert from a bits value. This method will return `None` if any unknown bits are set.
Convert from a bits value, unsetting any unknown bits.
Convert from a bits value exactly.
Get a flags value with the bits of a flag with the given name set. This method will return `None` if `name` is empty or doesn't correspond to any named flag.
Whether any set bits in a source flags value are also set in a target flags value.
Whether all set bits in a source flags value are also set in a target flags value.
The intersection of a source flags value with the complement of a target flags value (`&!`). This method is not equivalent to `self & !other` when `other` has unknown bits set: `remove` and `difference` won't truncate `other`, but the `!` operator will.
The bitwise exclusive-or (`^`) of the bits in two flags values.
Call `insert` when `value` is `true` or `remove` when `value` is `false`.
The bitwise and (`&`) of the bits in two flags values.
The bitwise or (`|`) of the bits in two flags values.
The bitwise negation (`!`) of the bits in a flags value, truncating the result.
Yield a set of contained flags values. Each yielded flags value will correspond to a defined named flag; any unknown bits will be yielded together as a final flags value.
Yield a set of contained named flags values. This method is like `iter`, except it only yields bits in contained named flags; any unknown bits, or bits not corresponding to a contained flag, will not be yielded.
pub struct X64PTE(/* private fields */);
An x86_64 page table entry.
AArch64 specific page table structures.
pub struct A64PagingMetaData;
Metadata of AArch64 page tables.
pub type A64PageTable<H> = PageTable64<A64PagingMetaData, A64PTE, H>;
AArch64 VMSAv8-64 translation table.
```rust
#[repr(usize)]
pub enum PageSize {
    Size4K = 4_096,
    Size2M = 2_097_152,
    Size1G = 1_073_741_824,
}
```
The page sizes supported by the hardware page table.
Size of 4 kilobytes (2^12 bytes).
Size of 2 megabytes (2^21 bytes).
Size of 1 gigabyte (2^30 bytes).
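Because every supported page size is a power of two, alignment checks against a `PageSize` reduce to a single bit mask. A small sketch with illustrative helper names (not this crate's API):

```rust
// Page sizes are powers of two, so alignment checks are simple bit masks.
// These helpers are illustrative, not part of page_table_multiarch.
const SIZE_4K: usize = 0x1000; // 2^12
const SIZE_2M: usize = 0x20_0000; // 2^21
const SIZE_1G: usize = 0x4000_0000; // 2^30

/// An address is aligned to a power-of-two page size iff its low
/// log2(page_size) bits are all zero.
fn is_aligned(addr: usize, page_size: usize) -> bool {
    addr & (page_size - 1) == 0
}

fn main() {
    assert!(is_aligned(0x20_0000, SIZE_2M));
    assert!(!is_aligned(0x20_1000, SIZE_2M));
    assert!(is_aligned(0x8000_0000, SIZE_1G));
    assert!(is_aligned(0x233000, SIZE_4K));
}
```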
```rust
pub enum PagingError {
    NoMemory,
    NotAligned,
    NotMapped,
    AlreadyMapped,
    MappedToHugePage,
}
```
The error type for page table operation failures.
Cannot allocate memory.
The address is not aligned to the page size.
The mapping is not present.
The mapping is already present.
The page table entry represents a huge page, but the target physical frame is 4K in size.
This crate provides generic, unified, architecture-independent, and OS-free page table structures for various hardware architectures.
The core struct is `PageTable64<M, PTE, H>`. OS functions and architecture-dependent types are provided by generic parameters:
- `M`: the architecture-dependent metadata, which must implement the `PagingMetaData` trait.
- `PTE`: the architecture-dependent page table entry, which must implement the `GenericPTE` trait.
- `H`: OS functions such as physical memory allocation, which must implement the `PagingHandler` trait.
Currently supported architectures and page table structures:
- `x86_64::X64PageTable`
- `aarch64::A64PageTable`
- `riscv::Sv39PageTable`, `riscv::Sv48PageTable`
+use page_table_multiarch::x86_64::{X64PageTable};
+use page_table_multiarch::{MappingFlags, PagingHandler, PageSize};
+
+use core::alloc::Layout;
+
+extern crate alloc;
+
+struct PagingHandlerImpl;
+
+impl PagingHandler for PagingHandlerImpl {
+ fn alloc_frame() -> Option<PhysAddr> {
+ let layout = Layout::from_size_align(0x1000, 0x1000).unwrap();
+ let ptr = unsafe { alloc::alloc::alloc(layout) };
+ Some(PhysAddr::from(ptr as usize))
+ }
+
+ fn dealloc_frame(paddr: PhysAddr) {
+ let layout = Layout::from_size_align(0x1000, 0x1000).unwrap();
+ let ptr = paddr.as_usize() as *mut u8;
+ unsafe { alloc::alloc::dealloc(ptr, layout) };
+ }
+
+ fn phys_to_virt(paddr: PhysAddr) -> VirtAddr {
+ VirtAddr::from(paddr.as_usize())
+ }
+}
+
+let vaddr = VirtAddr::from(0xdead_beef_000);
+let paddr = PhysAddr::from(0x2000);
+let flags = MappingFlags::READ | MappingFlags::WRITE;
+let mut pt = X64PageTable::<PagingHandlerImpl>::try_new().unwrap();
+
+assert!(pt.root_paddr().is_aligned_4k());
+assert!(pt.map(vaddr, paddr, PageSize::Size4K, flags).is_ok());
+assert_eq!(pt.query(vaddr), Ok((paddr, flags, PageSize::Size4K)));
pub use page_table_entry::GenericPTE;
pub use page_table_entry::MappingFlags;
The specialized `Result` type for page table operations.
RISC-V specific page table structures.
pub struct Sv39MetaData;
Metadata of RISC-V Sv39 page tables.
pub struct Sv48MetaData;
Metadata of RISC-V Sv48 page tables.
pub type Sv39PageTable<H> = PageTable64<Sv39MetaData, Rv64PTE, H>;
Sv39: Page-Based 39-bit (3 levels) Virtual-Memory System.
pub type Sv48PageTable<H> = PageTable64<Sv48MetaData, Rv64PTE, H>;
Sv48: Page-Based 48-bit (4 levels) Virtual-Memory System.
pub struct PageTable64<M: PagingMetaData, PTE: GenericPTE, H: PagingHandler> { /* private fields */ }
A generic page table struct for 64-bit platforms.
It also tracks all intermediate-level tables; they will be deallocated when the `PageTable64` itself is dropped.
Creates a new page table instance, or returns an error. It will allocate a new page for the root page table.
Returns the physical address of the root page table.
Maps a virtual page to a physical frame with the given `page_size` and mapping `flags`. The virtual page starts at `vaddr`, and the physical frame starts at `target`. If the addresses are not aligned to the page size, they will be aligned down automatically. Returns `Err(PagingError::AlreadyMapped)` if the mapping is already present.
Unmaps the mapping that starts at `vaddr`. Returns `Err(PagingError::NotMapped)` if the mapping is not present.
Queries the mapping that starts at `vaddr`. Returns the physical address of the target frame, the mapping flags, and the page size. Returns `Err(PagingError::NotMapped)` if the mapping is not present.
Updates the target or flags of the mapping that starts at `vaddr`. If the corresponding argument is `None`, it will not be updated. Returns the page size of the mapping. Returns `Err(PagingError::NotMapped)` if the mapping is not present.
Maps a contiguous virtual memory region to a contiguous physical memory region with the given mapping `flags`. The virtual and physical memory regions start at `vaddr` and `paddr` respectively, and the region size is `size`. The addresses and `size` must be aligned to 4K, otherwise `Err(PagingError::NotAligned)` is returned. When `allow_huge` is true, it will try to map the region with huge pages if possible; otherwise, it maps the region with 4K pages.
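The kind of decision `allow_huge` enables can be sketched as follows: at each point of the region, pick the largest page size for which the current virtual address, physical address, and remaining size all fit. This is an illustrative helper under assumed x86_64-style sizes, not the crate's internal implementation:

```rust
#[derive(Debug, PartialEq, Clone, Copy)]
enum PageSize {
    Size4K = 0x1000,
    Size2M = 0x20_0000,
    Size1G = 0x4000_0000,
}

// Pick the largest page size usable at this point of the region: vaddr and
// paddr must be aligned to it, and at least one full page must remain.
fn best_page_size(vaddr: usize, paddr: usize, remaining: usize) -> PageSize {
    let fits = |s: usize| vaddr % s == 0 && paddr % s == 0 && remaining >= s;
    if fits(PageSize::Size1G as usize) {
        PageSize::Size1G
    } else if fits(PageSize::Size2M as usize) {
        PageSize::Size2M
    } else {
        PageSize::Size4K
    }
}

fn main() {
    // Everything 1G-aligned and large enough: use a 1G page.
    assert_eq!(
        best_page_size(0x4000_0000, 0x8000_0000, 0x4000_0000),
        PageSize::Size1G
    );
    // Only 2M alignment: fall back to 2M pages.
    assert_eq!(
        best_page_size(0x20_0000, 0x40_0000, 0x40_0000),
        PageSize::Size2M
    );
    // Unaligned physical address: 4K pages only.
    assert_eq!(
        best_page_size(0x20_0000, 0x1000, 0x20_0000),
        PageSize::Size4K
    );
}
```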
Unmaps a contiguous virtual memory region. The region must have been mapped with `PageTable64::map_region`, or unexpected behaviors may occur.
Walk the page table recursively.
+When reaching the leaf page table, call func
on the current page table
+entry. The max number of enumerations in one table is limited by limit
.
The arguments of func
are:
- the current level (starts with 0
): usize
- the index of the entry in the current-level table: usize
- the virtual address mapped by the entry: VirtAddr
- the reference of the entry: &PTE
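For intuition about what such a walk visits, here is a standalone sketch (not this crate's implementation) of how each level of a 4-level, 4 KiB-granule table indexes a virtual address with 9 bits:

```rust
/// Extracts the 9-bit table index used at `level` (0 = root) of a
/// 4-level, 4 KiB-granule page table: bits 39..48, 30..39, 21..30, 12..21.
fn pt_index(vaddr: u64, level: u32) -> u64 {
    assert!(level < 4);
    (vaddr >> (12 + 9 * (3 - level))) & 0x1ff
}

fn main() {
    let vaddr = 0xffff_0000_1234_5000_u64;
    // The four indices together select one 4 KiB page.
    let idx: Vec<u64> = (0..4).map(|l| pt_index(vaddr, l)).collect();
    println!("{idx:?}");
    assert_eq!(pt_index(0, 0), 0);
    assert_eq!(pt_index(1 << 30, 1), 1);
    assert_eq!(pt_index(1 << 21, 2), 1);
}
```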
pub trait PagingHandler: Sized {
+ // Required methods
+ fn alloc_frame() -> Option<PhysAddr>;
+ fn dealloc_frame(paddr: PhysAddr);
+ fn phys_to_virt(paddr: PhysAddr) -> VirtAddr;
+}
The low-level OS-dependent helpers that must be provided for
+PageTable64
.
Request to allocate a 4K-sized physical frame.
+Request to free an allocated physical frame.
+Returns a virtual address that maps to the given physical address.
+Used to access physical memory directly in the page table implementation.
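A common implementation strategy (hypothetical here, not mandated by the trait) maps all physical memory at a fixed linear offset, making the helper a single addition; the offset constant below is illustrative:

```rust
/// Hypothetical base of a linear ("physmap") region in kernel space.
const PHYS_VIRT_OFFSET: u64 = 0xffff_8000_0000_0000;

/// Translates a physical address into the linearly-mapped virtual region.
fn phys_to_virt(paddr: u64) -> u64 {
    PHYS_VIRT_OFFSET + paddr
}

fn main() {
    assert_eq!(phys_to_virt(0x1000), 0xffff_8000_0000_1000);
    println!("ok");
}
```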
+pub trait PagingMetaData: Sync + Send + Sized {
+ const LEVELS: usize;
+ const PA_MAX_BITS: usize;
+ const VA_MAX_BITS: usize;
+ const PA_MAX_ADDR: usize = _;
+
+ // Provided methods
+ fn paddr_is_valid(paddr: usize) -> bool { ... }
+ fn vaddr_is_valid(vaddr: usize) -> bool { ... }
+}
The architecture-dependent metadata that must be provided for
+PageTable64
.
The maximum number of bits of physical address.
+The maximum number of bits of virtual address.
+The maximum physical address.
+Whether a given physical address is valid.
+Whether a given virtual address is valid.
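These provided methods can be pictured with a standalone sketch, assuming 52-bit physical addresses and 48-bit canonical virtual addresses (the x86_64 4-level configuration):

```rust
const PA_MAX_BITS: u32 = 52;
const VA_MAX_BITS: u32 = 48;

/// A physical address is valid if it fits in PA_MAX_BITS bits.
fn paddr_is_valid(paddr: u64) -> bool {
    paddr < (1 << PA_MAX_BITS)
}

/// A virtual address is valid if it is canonical: bits 47..64 must all
/// equal bit 47 (i.e. the address sign-extends cleanly).
fn vaddr_is_valid(vaddr: u64) -> bool {
    let top = vaddr >> (VA_MAX_BITS - 1);
    top == 0 || top == (1 << (64 - VA_MAX_BITS + 1)) - 1
}

fn main() {
    assert!(paddr_is_valid(0x000f_ffff_ffff_ffff));
    assert!(!paddr_is_valid(1 << 52));
    assert!(vaddr_is_valid(0x0000_7fff_ffff_ffff)); // highest low-half address
    assert!(vaddr_is_valid(0xffff_8000_0000_0000)); // lowest high-half address
    assert!(!vaddr_is_valid(0x0001_0000_0000_0000)); // non-canonical
    println!("ok");
}
```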
+pub type PagingResult<T = ()> = Result<T, PagingError>;
The specialized Result
type for page table operations.
enum PagingResult<T = ()> {
+ Ok(T),
+ Err(PagingError),
+}
x86 specific page table structures.
+pub struct X64PagingMetaData;
Metadata of x86_64 page tables.
+pub type X64PageTable<H> = PageTable64<X64PagingMetaData, X64PTE, H>;
x86_64 page table.
+struct X64PageTable<H> { /* private fields */ }
&
) of the bits in two flags values.\nThe bitwise and (&
) of the bits in two flags values.\nThe bitwise or (|
) of the bits in two flags values.\nThe bitwise or (|
) of the bits in two flags values.\nReturns the raw bits of this entry.\nGet the underlying bits value.\nThe bitwise exclusive-or (^
) of the bits in two flags …\nThe bitwise exclusive-or (^
) of the bits in two flags …\nSet this entry to zero.\nThe bitwise negation (!
) of the bits in a flags value, …\nWhether all set bits in a source flags value are also set …\nThe intersection of a source flags value with the …\nGet a flags value with all bits unset.\nThe bitwise or (|
) of the bits in each flags value.\nReturns the flags of this entry.\nReturns the argument unchanged.\nConvert from a bits value.\nConvert from a bits value exactly.\nConvert from a bits value, unsetting any unknown bits.\nThe bitwise or (|
) of the bits in each flags value.\nGet a flags value with the bits of a flag with the given …\nThe bitwise or (|
) of the bits in two flags values.\nThe bitwise and (&
) of the bits in two flags values.\nWhether any set bits in a source flags value are also set …\nCalls U::from(self)
.\nWhether all known bits in this flags value are set.\nWhether all bits in this flags value are unset.\nFor non-last level translation, returns whether this entry …\nReturns whether this entry flag indicates present.\nReturns whether this entry is zero.\nYield a set of contained flags values.\nYield a set of contained named flags values.\nCreates a page table entry point to a terminate page or …\nCreates a page table entry point to a next level page …\nThe bitwise negation (!
) of the bits in a flags value, …\nReturns the physical address mapped by this entry.\nThe intersection of a source flags value with the …\nRISC-V page table entries.\nCall insert
when value
is true
or remove
when value
is …\nSet flags of the entry.\nSet mapped physical address of the entry.\nThe intersection of a source flags value with the …\nThe intersection of a source flags value with the …\nThe bitwise exclusive-or (^
) of the bits in two flags …\nThe bitwise exclusive-or (^
) of the bits in two flags …\nThe bitwise or (|
) of the bits in two flags values.\nx86 page table entries on 64-bit paging.\nA VMSAv8-64 translation table descriptor.\nThe Access flag.\nAccess permission: accessable at EL0.\nAccess permissions limit for subsequent levels of lookup: …\nAccess permissions limit for subsequent levels of lookup: …\nAccess permission: read-only.\nMemory attributes index field.\nIndicates that 16 adjacent translation table entries point …\nMemory attribute fields in the VMSAv8-64 translation table …\nDevice-nGnRE memory\nShareability: Inner Shareable (otherwise Outer Shareable).\nThe MAIR_ELx register should be set to this value to match …\nThe memory attributes index field in the descriptor, which …\nThe not global bit.\nThe descriptor gives the address of the next level of …\nNon-secure bit. For memory accesses from Secure state, …\nFor memory accesses from Secure state, specifies the …\nNormal memory\nNormal non-cacheable memory\nThe Privileged execute-never field.\nPXN limit for subsequent levels of lookup.\nShareability: Inner or Outer Shareable (otherwise …\nThe Execute-never or Unprivileged execute-never field.\nWhether the descriptor is valid.\nXN limit for subsequent levels of lookup.\nGet a flags value with all known bits set.\nGet a flags value with all known bits set.\nThe bitwise and (&
) of the bits in two flags values.\nThe bitwise and (&
) of the bits in two flags values.\nThe bitwise or (|
) of the bits in two flags values.\nThe bitwise or (|
) of the bits in two flags values.\nGet the underlying bits value.\nGet the underlying bits value.\nThe bitwise exclusive-or (^
) of the bits in two flags …\nThe bitwise exclusive-or (^
) of the bits in two flags …\nThe bitwise negation (!
) of the bits in a flags value, …\nThe bitwise negation (!
) of the bits in a flags value, …\nWhether all set bits in a source flags value are also set …\nWhether all set bits in a source flags value are also set …\nThe intersection of a source flags value with the …\nThe intersection of a source flags value with the …\nGet a flags value with all bits unset.\nGet a flags value with all bits unset.\nCreates an empty descriptor with all bits set to zero.\nThe bitwise or (|
) of the bits in each flags value.\nReturns the argument unchanged.\nReturns the argument unchanged.\nReturns the argument unchanged.\nConvert from a bits value.\nConvert from a bits value.\nConvert from a bits value exactly.\nConvert from a bits value exactly.\nConvert from a bits value, unsetting any unknown bits.\nConvert from a bits value, unsetting any unknown bits.\nThe bitwise or (|
) of the bits in each flags value.\nConstructs a descriptor from the memory index, leaving the …\nGet a flags value with the bits of a flag with the given …\nGet a flags value with the bits of a flag with the given …\nThe bitwise or (|
) of the bits in two flags values.\nThe bitwise or (|
) of the bits in two flags values.\nThe bitwise and (&
) of the bits in two flags values.\nThe bitwise and (&
) of the bits in two flags values.\nWhether any set bits in a source flags value are also set …\nWhether any set bits in a source flags value are also set …\nCalls U::from(self)
.\nCalls U::from(self)
.\nCalls U::from(self)
.\nWhether all known bits in this flags value are set.\nWhether all known bits in this flags value are set.\nWhether all bits in this flags value are unset.\nWhether all bits in this flags value are unset.\nYield a set of contained flags values.\nYield a set of contained flags values.\nYield a set of contained named flags values.\nYield a set of contained named flags values.\nReturns the memory attribute index field.\nThe bitwise negation (!
) of the bits in a flags value, …\nThe intersection of a source flags value with the …\nThe intersection of a source flags value with the …\nCall insert
when value
is true
or remove
when value
is …\nCall insert
when value
is true
or remove
when value
is …\nThe intersection of a source flags value with the …\nThe intersection of a source flags value with the …\nThe bitwise exclusive-or (^
) of the bits in two flags …\nThe bitwise exclusive-or (^
) of the bits in two flags …\nThe bitwise exclusive-or (^
) of the bits in two flags …\nThe bitwise exclusive-or (^
) of the bits in two flags …\nThe bitwise or (|
) of the bits in two flags values.\nThe bitwise or (|
) of the bits in two flags values.\nIndicates the virtual page has been read, written, or …\nIndicates the virtual page has been written since the last …\nDesignates a global mapping.\nPage-table entry flags.\nWhether the page is readable.\nSv39 and Sv48 page table entry for RV64 systems.\nWhether the page is accessible to user mode.\nWhether the PTE is valid.\nWhether the page is writable.\nWhether the page is executable.\nGet a flags value with all known bits set.\nGet a flags value with all known bits set.\nThe bitwise and (&
) of the bits in two flags values.\nThe bitwise and (&
) of the bits in two flags values.\nThe bitwise or (|
) of the bits in two flags values.\nThe bitwise or (|
) of the bits in two flags values.\nGet the underlying bits value.\nGet the underlying bits value.\nThe bitwise exclusive-or (^
) of the bits in two flags …\nThe bitwise exclusive-or (^
) of the bits in two flags …\nThe bitwise negation (!
) of the bits in a flags value, …\nThe bitwise negation (!
) of the bits in a flags value, …\nWhether all set bits in a source flags value are also set …\nWhether all set bits in a source flags value are also set …\nThe intersection of a source flags value with the …\nThe intersection of a source flags value with the …\nGet a flags value with all bits unset.\nGet a flags value with all bits unset.\nThe bitwise or (|
) of the bits in each flags value.\nReturns the argument unchanged.\nReturns the argument unchanged.\nConvert from a bits value.\nConvert from a bits value.\nConvert from a bits value exactly.\nConvert from a bits value exactly.\nConvert from a bits value, unsetting any unknown bits.\nConvert from a bits value, unsetting any unknown bits.\nThe bitwise or (|
) of the bits in each flags value.\nGet a flags value with the bits of a flag with the given …\nGet a flags value with the bits of a flag with the given …\nThe bitwise or (|
) of the bits in two flags values.\nThe bitwise or (|
) of the bits in two flags values.\nThe bitwise and (&
) of the bits in two flags values.\nThe bitwise and (&
) of the bits in two flags values.\nWhether any set bits in a source flags value are also set …\nWhether any set bits in a source flags value are also set …\nCalls U::from(self)
.\nCalls U::from(self)
.\nWhether all known bits in this flags value are set.\nWhether all known bits in this flags value are set.\nWhether all bits in this flags value are unset.\nWhether all bits in this flags value are unset.\nYield a set of contained flags values.\nYield a set of contained flags values.\nYield a set of contained named flags values.\nYield a set of contained named flags values.\nThe bitwise negation (!
) of the bits in a flags value, …\nThe intersection of a source flags value with the …\nThe intersection of a source flags value with the …\nCall insert
when value
is true
or remove
when value
is …\nCall insert
when value
is true
or remove
when value
is …\nThe intersection of a source flags value with the …\nThe intersection of a source flags value with the …\nThe bitwise exclusive-or (^
) of the bits in two flags …\nThe bitwise exclusive-or (^
) of the bits in two flags …\nThe bitwise exclusive-or (^
) of the bits in two flags …\nThe bitwise exclusive-or (^
) of the bits in two flags …\nThe bitwise or (|
) of the bits in two flags values.\nThe bitwise or (|
) of the bits in two flags values.\nSet by the CPU when the mapped frame or page table is …\nAvailable to the OS, can be used to store additional data, …\nAvailable to the OS, can be used to store additional data, …\nAvailable to the OS, can be used to store additional data, …\nAvailable to the OS, can be used to store additional data, …\nAvailable to the OS, can be used to store additional data, …\nAvailable to the OS, can be used to store additional data, …\nAvailable to the OS, can be used to store additional data, …\nAvailable to the OS, can be used to store additional data, …\nAvailable to the OS, can be used to store additional data, …\nAvailable to the OS, can be used to store additional data, …\nAvailable to the OS, can be used to store additional data, …\nAvailable to the OS, can be used to store additional data, …\nAvailable to the OS, can be used to store additional data, …\nAvailable to the OS, can be used to store additional data, …\nSet by the CPU on a write to the mapped frame.\nIndicates that the mapping is present in all address …\nSpecifies that the entry maps a huge frame instead of a …\nDisables caching for the pointed entry is cacheable.\nForbid code execution from the mapped frames.\nSpecifies whether the mapped frame or page table is loaded …\nPossible flags for a page table entry.\nControls whether accesses from userspace (i.e. ring 3) are …\nControls whether writes to the mapped frames are allowed.\nIf this bit is set, a “write-through” policy is used …\nAn x86_64 page table entry.\nGet a flags value with all known bits set.\nThe bitwise and (&
) of the bits in two flags values.\nThe bitwise and (&
) of the bits in two flags values.\nThe bitwise or (|
) of the bits in two flags values.\nThe bitwise or (|
) of the bits in two flags values.\nGet the underlying bits value.\nThe bitwise exclusive-or (^
) of the bits in two flags …\nThe bitwise exclusive-or (^
) of the bits in two flags …\nThe bitwise negation (!
) of the bits in a flags value, …\nWhether all set bits in a source flags value are also set …\nThe intersection of a source flags value with the …\nGet a flags value with all bits unset.\nThe bitwise or (|
) of the bits in each flags value.\nReturns the argument unchanged.\nReturns the argument unchanged.\nConvert from a bits value.\nConvert from a bits value exactly.\nConvert from a bits value, unsetting any unknown bits.\nThe bitwise or (|
) of the bits in each flags value.\nGet a flags value with the bits of a flag with the given …\nThe bitwise or (|
) of the bits in two flags values.\nThe bitwise and (&
) of the bits in two flags values.\nWhether any set bits in a source flags value are also set …\nCalls U::from(self)
.\nCalls U::from(self)
.\nWhether all known bits in this flags value are set.\nWhether all bits in this flags value are unset.\nYield a set of contained flags values.\nYield a set of contained named flags values.\nThe bitwise negation (!
) of the bits in a flags value, …\nThe intersection of a source flags value with the …\nCall insert
when value
is true
or remove
when value
is …\nThe intersection of a source flags value with the …\nThe intersection of a source flags value with the …\nThe bitwise exclusive-or (^
) of the bits in two flags …\nThe bitwise exclusive-or (^
) of the bits in two flags …\nThe bitwise or (|
) of the bits in two flags values.")
\ No newline at end of file
diff --git a/search.desc/page_table_multiarch/page_table_multiarch-desc-0-.js b/search.desc/page_table_multiarch/page_table_multiarch-desc-0-.js
new file mode 100644
index 0000000..e833a66
--- /dev/null
+++ b/search.desc/page_table_multiarch/page_table_multiarch-desc-0-.js
@@ -0,0 +1 @@
+searchState.loadedDescShard("page_table_multiarch", 0, "page_table_multiarch\nThe mapping is already present.\nContains the error value\nThe number of levels of the hardware page table.\nThe page table entry represents a huge page, but the …\nCannot allocate memory.\nThe address is not aligned to the page size.\nThe mapping is not present.\nContains the success value\nThe maximum physical address.\nThe maximum number of bits of physical address.\nThe page sizes supported by the hardware page table.\nA generic page table struct for 64-bit platforms.\nThe error type for page table operation failures.\nThe low-level OS-dependent helpers that must be provided …\nThe architecture-dependent metadata that must be provided …\nThe specialized Result
type for page table operations.\nSize of 1 gigabyte (2^30 bytes).\nSize of 2 megabytes (2^21 bytes).\nSize of 4 kilobytes (2^12 bytes).\nThe maximum number of bits of virtual address.\nAArch64 specific page table structures.\nRequest to allocate a 4K-sized physical frame.\nRequest to free an allocated physical frame.\nReturns the argument unchanged.\nReturns the argument unchanged.\nReturns the argument unchanged.\nCalls U::from(self)
.\nCalls U::from(self)
.\nCalls U::from(self)
.\nWhether this page size is considered huge (larger than 4K).\nMaps a virtual page to a physical frame with the given …\nMaps a contiguous virtual memory region to a contiguous …\nWhether a given physical address is valid.\nReturns a virtual address that maps to the given physical …\nQueries the result of the mapping that starts with vaddr
.\nRISC-V specific page table structures.\nReturns the physical address of the root page table.\nCreates a new page table instance or returns an error.\nUnmaps the mapping that starts with vaddr
.\nUnmaps a contiguous virtual memory region.\nUpdates the target or flags of the mapping that starts with …\nWhether a given virtual address is valid.\nWalks the page table recursively.\nx86 specific page table structures.\nAArch64 VMSAv8-64 translation table.\nMetadata of AArch64 page tables.\nReturns the argument unchanged.\nCalls U::from(self)
.\nMetadata of RISC-V Sv39 page tables.\nSv39: Page-Based 39-bit (3 levels) Virtual-Memory System.\nMetadata of RISC-V Sv48 page tables.\nSv48: Page-Based 48-bit (4 levels) Virtual-Memory System.\nReturns the argument unchanged.\nReturns the argument unchanged.\nCalls U::from(self)
.\nCalls U::from(self)
.\nx86_64 page table.\nmetadata of x86_64 page tables.\nReturns the argument unchanged.\nCalls U::from(self)
.")
\ No newline at end of file
diff --git a/settings.html b/settings.html
new file mode 100644
index 0000000..956605d
--- /dev/null
+++ b/settings.html
@@ -0,0 +1 @@
//! AArch64 VMSAv8-64 translation table format descriptors.
+
+use aarch64_cpu::registers::MAIR_EL1;
+use core::fmt;
+use memory_addr::PhysAddr;
+
+use crate::{GenericPTE, MappingFlags};
+
+bitflags::bitflags! {
+ /// Memory attribute fields in the VMSAv8-64 translation table format descriptors.
+ #[derive(Debug)]
+ pub struct DescriptorAttr: u64 {
+ // Attribute fields in stage 1 VMSAv8-64 Block and Page descriptors:
+
+ /// Whether the descriptor is valid.
+ const VALID = 1 << 0;
+ /// The descriptor gives the address of the next level of translation table or 4KB page.
+ /// (not a 2M, 1G block)
+ const NON_BLOCK = 1 << 1;
+ /// Memory attributes index field.
+ const ATTR_INDX = 0b111 << 2;
+ /// Non-secure bit. For memory accesses from Secure state, specifies whether the output
+ /// address is in Secure or Non-secure memory.
+ const NS = 1 << 5;
+ /// Access permission: accessible at EL0.
+ const AP_EL0 = 1 << 6;
+ /// Access permission: read-only.
+ const AP_RO = 1 << 7;
+ /// Shareability: Inner Shareable (otherwise Outer Shareable).
+ const INNER = 1 << 8;
+ /// Shareability: Inner or Outer Shareable (otherwise Non-shareable).
+ const SHAREABLE = 1 << 9;
+ /// The Access flag.
+ const AF = 1 << 10;
+ /// The not global bit.
+ const NG = 1 << 11;
+ /// Indicates that 16 adjacent translation table entries point to contiguous memory regions.
+ const CONTIGUOUS = 1 << 52;
+ /// The Privileged execute-never field.
+ const PXN = 1 << 53;
+ /// The Execute-never or Unprivileged execute-never field.
+ const UXN = 1 << 54;
+
+ // Next-level attributes in stage 1 VMSAv8-64 Table descriptors:
+
+ /// PXN limit for subsequent levels of lookup.
+ const PXN_TABLE = 1 << 59;
+ /// XN limit for subsequent levels of lookup.
+ const XN_TABLE = 1 << 60;
+ /// Access permissions limit for subsequent levels of lookup: access at EL0 not permitted.
+ const AP_NO_EL0_TABLE = 1 << 61;
+ /// Access permissions limit for subsequent levels of lookup: write access not permitted.
+ const AP_NO_WRITE_TABLE = 1 << 62;
+ /// For memory accesses from Secure state, specifies the Security state for subsequent
+ /// levels of lookup.
+ const NS_TABLE = 1 << 63;
+ }
+}
+
+/// The memory attributes index field in the descriptor, which is used to index
+/// into the MAIR (Memory Attribute Indirection Register).
+#[repr(u64)]
+#[derive(Debug, Clone, Copy, Eq, PartialEq)]
+pub enum MemAttr {
+ /// Device-nGnRE memory
+ Device = 0,
+ /// Normal memory
+ Normal = 1,
+ /// Normal non-cacheable memory
+ NormalNonCacheable = 2,
+}
+
+impl DescriptorAttr {
+ #[allow(clippy::unusual_byte_groupings)]
+ const ATTR_INDEX_MASK: u64 = 0b111_00;
+
+ /// Constructs a descriptor from the memory index, leaving the other fields
+ /// empty.
+ pub const fn from_mem_attr(idx: MemAttr) -> Self {
+ let mut bits = (idx as u64) << 2;
+ if matches!(idx, MemAttr::Normal | MemAttr::NormalNonCacheable) {
+ bits |= Self::INNER.bits() | Self::SHAREABLE.bits();
+ }
+ Self::from_bits_retain(bits)
+ }
+
+ /// Returns the memory attribute index field.
+ pub const fn mem_attr(&self) -> Option<MemAttr> {
+ let idx = (self.bits() & Self::ATTR_INDEX_MASK) >> 2;
+ Some(match idx {
+ 0 => MemAttr::Device,
+ 1 => MemAttr::Normal,
+ 2 => MemAttr::NormalNonCacheable,
+ _ => return None,
+ })
+ }
+}
+
+impl MemAttr {
+ /// The MAIR_ELx register should be set to this value to match the memory
+ /// attributes in the descriptors.
+ pub const MAIR_VALUE: u64 = {
+ // Device-nGnRE memory
+ let attr0 = MAIR_EL1::Attr0_Device::nonGathering_nonReordering_EarlyWriteAck.value;
+ // Normal memory
+ let attr1 = MAIR_EL1::Attr1_Normal_Inner::WriteBack_NonTransient_ReadWriteAlloc.value
+ | MAIR_EL1::Attr1_Normal_Outer::WriteBack_NonTransient_ReadWriteAlloc.value;
+ let attr2 = MAIR_EL1::Attr2_Normal_Inner::NonCacheable.value
+ | MAIR_EL1::Attr2_Normal_Outer::NonCacheable.value;
+ attr0 | attr1 | attr2 // 0x44_ff_04
+ };
+}
+
+impl From<DescriptorAttr> for MappingFlags {
+ fn from(attr: DescriptorAttr) -> Self {
+ if !attr.contains(DescriptorAttr::VALID) {
+ return Self::empty();
+ }
+ let mut flags = Self::READ;
+ if !attr.contains(DescriptorAttr::AP_RO) {
+ flags |= Self::WRITE;
+ }
+ if attr.contains(DescriptorAttr::AP_EL0) {
+ flags |= Self::USER;
+ if !attr.contains(DescriptorAttr::UXN) {
+ flags |= Self::EXECUTE;
+ }
+ } else if !attr.intersects(DescriptorAttr::PXN) {
+ flags |= Self::EXECUTE;
+ }
+ match attr.mem_attr() {
+ Some(MemAttr::Device) => flags |= Self::DEVICE,
+ Some(MemAttr::NormalNonCacheable) => flags |= Self::UNCACHED,
+ _ => {}
+ }
+ flags
+ }
+}
+
+impl From<MappingFlags> for DescriptorAttr {
+ fn from(flags: MappingFlags) -> Self {
+ if flags.is_empty() {
+ return Self::empty();
+ }
+ let mut attr = if flags.contains(MappingFlags::DEVICE) {
+ Self::from_mem_attr(MemAttr::Device)
+ } else if flags.contains(MappingFlags::UNCACHED) {
+ Self::from_mem_attr(MemAttr::NormalNonCacheable)
+ } else {
+ Self::from_mem_attr(MemAttr::Normal)
+ };
+ if flags.contains(MappingFlags::READ) {
+ attr |= Self::VALID;
+ }
+ if !flags.contains(MappingFlags::WRITE) {
+ attr |= Self::AP_RO;
+ }
+ if flags.contains(MappingFlags::USER) {
+ attr |= Self::AP_EL0 | Self::PXN;
+ if !flags.contains(MappingFlags::EXECUTE) {
+ attr |= Self::UXN;
+ }
+ } else {
+ attr |= Self::UXN;
+ if !flags.contains(MappingFlags::EXECUTE) {
+ attr |= Self::PXN;
+ }
+ }
+ attr
+ }
+}
+
+/// A VMSAv8-64 translation table descriptor.
+///
+/// Note that the **AttrIndx\[2:0\]** (bit\[4:2\]) field is set to `0` for device
+/// memory, and `1` for normal memory. The system must configure the MAIR_ELx
+/// system register accordingly.
+#[derive(Clone, Copy)]
+#[repr(transparent)]
+pub struct A64PTE(u64);
+
+impl A64PTE {
+ const PHYS_ADDR_MASK: u64 = 0x0000_ffff_ffff_f000; // bits 12..48
+
+ /// Creates an empty descriptor with all bits set to zero.
+ pub const fn empty() -> Self {
+ Self(0)
+ }
+}
+
+impl GenericPTE for A64PTE {
+ fn new_page(paddr: PhysAddr, flags: MappingFlags, is_huge: bool) -> Self {
+ let mut attr = DescriptorAttr::from(flags) | DescriptorAttr::AF;
+ if !is_huge {
+ attr |= DescriptorAttr::NON_BLOCK;
+ }
+ Self(attr.bits() | (paddr.as_usize() as u64 & Self::PHYS_ADDR_MASK))
+ }
+ fn new_table(paddr: PhysAddr) -> Self {
+ let attr = DescriptorAttr::NON_BLOCK | DescriptorAttr::VALID;
+ Self(attr.bits() | (paddr.as_usize() as u64 & Self::PHYS_ADDR_MASK))
+ }
+ fn paddr(&self) -> PhysAddr {
+ PhysAddr::from((self.0 & Self::PHYS_ADDR_MASK) as usize)
+ }
+ fn flags(&self) -> MappingFlags {
+ DescriptorAttr::from_bits_truncate(self.0).into()
+ }
+ fn set_paddr(&mut self, paddr: PhysAddr) {
+ self.0 = (self.0 & !Self::PHYS_ADDR_MASK) | (paddr.as_usize() as u64 & Self::PHYS_ADDR_MASK)
+ }
+ fn set_flags(&mut self, flags: MappingFlags, is_huge: bool) {
+ let mut attr = DescriptorAttr::from(flags) | DescriptorAttr::AF;
+ if !is_huge {
+ attr |= DescriptorAttr::NON_BLOCK;
+ }
+ self.0 = (self.0 & Self::PHYS_ADDR_MASK) | attr.bits();
+ }
+
+ fn bits(self) -> usize {
+ self.0 as usize
+ }
+ fn is_unused(&self) -> bool {
+ self.0 == 0
+ }
+ fn is_present(&self) -> bool {
+ DescriptorAttr::from_bits_truncate(self.0).contains(DescriptorAttr::VALID)
+ }
+ fn is_huge(&self) -> bool {
+ !DescriptorAttr::from_bits_truncate(self.0).contains(DescriptorAttr::NON_BLOCK)
+ }
+ fn clear(&mut self) {
+ self.0 = 0
+ }
+}
+
+impl fmt::Debug for A64PTE {
+ fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
+ let mut f = f.debug_struct("A64PTE");
+ f.field("raw", &self.0)
+ .field("paddr", &self.paddr())
+ .field("attr", &DescriptorAttr::from_bits_truncate(self.0))
+ .field("flags", &self.flags())
+ .finish()
+ }
+}
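The MAIR_VALUE constant above packs one attribute byte per MemAttr index. Its layout can be checked with plain byte arithmetic (a standalone sketch; the byte encodings 0x04, 0xff, and 0x44 are the Arm-defined values for Device-nGnRE, Normal write-back, and Normal non-cacheable memory):

```rust
/// Assembles a MAIR_ELx value from three per-slot attribute bytes.
/// Attr<n> occupies bits 8n..8n+8 of the register.
fn mair_value(attr0: u8, attr1: u8, attr2: u8) -> u64 {
    (attr0 as u64) | ((attr1 as u64) << 8) | ((attr2 as u64) << 16)
}

fn main() {
    // 0x04 = Device-nGnRE, 0xff = Normal write-back, 0x44 = Normal non-cacheable.
    let mair = mair_value(0x04, 0xff, 0x44);
    assert_eq!(mair, 0x44_ff_04); // matches the comment in the source above
    println!("MAIR_ELx = {mair:#x}");
}
```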
+
//! RISC-V page table entries.
+
+use core::fmt;
+use memory_addr::PhysAddr;
+
+use crate::{GenericPTE, MappingFlags};
+
+bitflags::bitflags! {
+ /// Page-table entry flags.
+ #[derive(Debug)]
+ pub struct PTEFlags: usize {
+ /// Whether the PTE is valid.
+ const V = 1 << 0;
+ /// Whether the page is readable.
+ const R = 1 << 1;
+ /// Whether the page is writable.
+ const W = 1 << 2;
+ /// Whether the page is executable.
+ const X = 1 << 3;
+ /// Whether the page is accessible to user mode.
+ const U = 1 << 4;
+ /// Designates a global mapping.
+ const G = 1 << 5;
+ /// Indicates the virtual page has been read, written, or fetched from
+ /// since the last time the A bit was cleared.
+ const A = 1 << 6;
+ /// Indicates the virtual page has been written since the last time the
+ /// D bit was cleared.
+ const D = 1 << 7;
+ }
+}
+
+impl From<PTEFlags> for MappingFlags {
+ fn from(f: PTEFlags) -> Self {
+ let mut ret = Self::empty();
+ if !f.contains(PTEFlags::V) {
+ return ret;
+ }
+ if f.contains(PTEFlags::R) {
+ ret |= Self::READ;
+ }
+ if f.contains(PTEFlags::W) {
+ ret |= Self::WRITE;
+ }
+ if f.contains(PTEFlags::X) {
+ ret |= Self::EXECUTE;
+ }
+ if f.contains(PTEFlags::U) {
+ ret |= Self::USER;
+ }
+ ret
+ }
+}
+
+impl From<MappingFlags> for PTEFlags {
+ fn from(f: MappingFlags) -> Self {
+ if f.is_empty() {
+ return Self::empty();
+ }
+ let mut ret = Self::V;
+ if f.contains(MappingFlags::READ) {
+ ret |= Self::R;
+ }
+ if f.contains(MappingFlags::WRITE) {
+ ret |= Self::W;
+ }
+ if f.contains(MappingFlags::EXECUTE) {
+ ret |= Self::X;
+ }
+ if f.contains(MappingFlags::USER) {
+ ret |= Self::U;
+ }
+ ret
+ }
+}
+
+/// Sv39 and Sv48 page table entry for RV64 systems.
+#[derive(Clone, Copy)]
+#[repr(transparent)]
+pub struct Rv64PTE(u64);
+
+impl Rv64PTE {
+ const PHYS_ADDR_MASK: u64 = (1 << 54) - (1 << 10); // bits 10..54
+}
+
+impl GenericPTE for Rv64PTE {
+ fn new_page(paddr: PhysAddr, flags: MappingFlags, _is_huge: bool) -> Self {
+ let flags = PTEFlags::from(flags) | PTEFlags::A | PTEFlags::D;
+ debug_assert!(flags.intersects(PTEFlags::R | PTEFlags::X));
+ Self(flags.bits() as u64 | ((paddr.as_usize() >> 2) as u64 & Self::PHYS_ADDR_MASK))
+ }
+ fn new_table(paddr: PhysAddr) -> Self {
+ Self(PTEFlags::V.bits() as u64 | ((paddr.as_usize() >> 2) as u64 & Self::PHYS_ADDR_MASK))
+ }
+ fn paddr(&self) -> PhysAddr {
+ PhysAddr::from(((self.0 & Self::PHYS_ADDR_MASK) << 2) as usize)
+ }
+ fn flags(&self) -> MappingFlags {
+ PTEFlags::from_bits_truncate(self.0 as usize).into()
+ }
+ fn set_paddr(&mut self, paddr: PhysAddr) {
+ self.0 = (self.0 & !Self::PHYS_ADDR_MASK)
+ | ((paddr.as_usize() as u64 >> 2) & Self::PHYS_ADDR_MASK);
+ }
+ fn set_flags(&mut self, flags: MappingFlags, _is_huge: bool) {
+ let flags = PTEFlags::from(flags) | PTEFlags::A | PTEFlags::D;
+ debug_assert!(flags.intersects(PTEFlags::R | PTEFlags::X));
+ self.0 = (self.0 & Self::PHYS_ADDR_MASK) | flags.bits() as u64;
+ }
+
+ fn bits(self) -> usize {
+ self.0 as usize
+ }
+ fn is_unused(&self) -> bool {
+ self.0 == 0
+ }
+ fn is_present(&self) -> bool {
+ PTEFlags::from_bits_truncate(self.0 as usize).contains(PTEFlags::V)
+ }
+ fn is_huge(&self) -> bool {
+ PTEFlags::from_bits_truncate(self.0 as usize).intersects(PTEFlags::R | PTEFlags::X)
+ }
+ fn clear(&mut self) {
+ self.0 = 0
+ }
+}
+
+impl fmt::Debug for Rv64PTE {
+ fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
+ let mut f = f.debug_struct("Rv64PTE");
+ f.field("raw", &self.0)
+ .field("paddr", &self.paddr())
+ .field("flags", &self.flags())
+ .finish()
+ }
+}
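The `>> 2` in new_page and paddr reflects the RV64 PTE layout: the physical page number is stored starting at bit 10, i.e. `(paddr >> 12) << 10`, which collapses to `paddr >> 2` once the low bits are masked off. A standalone round-trip sketch:

```rust
const PHYS_ADDR_MASK: u64 = (1 << 54) - (1 << 10); // PPN field, bits 10..54

/// Packs a 4K-aligned physical address into the RV64 PTE PPN field.
fn pack(paddr: u64) -> u64 {
    (paddr >> 2) & PHYS_ADDR_MASK
}

/// Recovers the physical address from the PPN field.
fn unpack(pte: u64) -> u64 {
    (pte & PHYS_ADDR_MASK) << 2
}

fn main() {
    let paddr = 0x8020_3000_u64; // 4K-aligned
    assert_eq!(unpack(pack(paddr)), paddr);
    // The low 12 bits of the address never reach the PTE.
    assert_eq!(pack(0x8020_3fff), pack(0x8020_3000));
    println!("ok");
}
```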
+
//! x86 page table entries on 64-bit paging.
+
+use core::fmt;
+use memory_addr::PhysAddr;
+
+pub use x86_64::structures::paging::page_table::PageTableFlags as PTF;
+
+use crate::{GenericPTE, MappingFlags};
+
+impl From<PTF> for MappingFlags {
+ fn from(f: PTF) -> Self {
+ if !f.contains(PTF::PRESENT) {
+ return Self::empty();
+ }
+ let mut ret = Self::READ;
+ if f.contains(PTF::WRITABLE) {
+ ret |= Self::WRITE;
+ }
+ if !f.contains(PTF::NO_EXECUTE) {
+ ret |= Self::EXECUTE;
+ }
+ if f.contains(PTF::USER_ACCESSIBLE) {
+ ret |= Self::USER;
+ }
+ if f.contains(PTF::NO_CACHE) {
+ ret |= Self::UNCACHED;
+ }
+ ret
+ }
+}
+
+impl From<MappingFlags> for PTF {
+ fn from(f: MappingFlags) -> Self {
+ if f.is_empty() {
+ return Self::empty();
+ }
+ let mut ret = Self::PRESENT;
+ if f.contains(MappingFlags::WRITE) {
+ ret |= Self::WRITABLE;
+ }
+ if !f.contains(MappingFlags::EXECUTE) {
+ ret |= Self::NO_EXECUTE;
+ }
+ if f.contains(MappingFlags::USER) {
+ ret |= Self::USER_ACCESSIBLE;
+ }
+ if f.contains(MappingFlags::DEVICE) || f.contains(MappingFlags::UNCACHED) {
+ ret |= Self::NO_CACHE | Self::WRITE_THROUGH;
+ }
+ ret
+ }
+}
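The two conversions above translate between the generic `MappingFlags` and the x86 `PageTableFlags`. A minimal sketch of the same logic, using hypothetical stand-in bit constants instead of the real `MappingFlags` and `x86_64` types (note that a *missing* EXECUTE flag is what sets the NX bit):

```rust
// Stand-in constants for illustration only; the real code uses `MappingFlags`
// and `x86_64::structures::paging::page_table::PageTableFlags`.
const READ: u64 = 1 << 0;
const WRITE: u64 = 1 << 1;
const EXECUTE: u64 = 1 << 2;

const PTF_PRESENT: u64 = 1 << 0;
const PTF_WRITABLE: u64 = 1 << 1;
const PTF_NO_EXECUTE: u64 = 1 << 63;

// Mirrors `From<MappingFlags> for PTF`: empty flags stay empty; otherwise
// PRESENT is always set, WRITE maps to WRITABLE, and absence of EXECUTE
// sets the no-execute bit.
fn to_ptf(flags: u64) -> u64 {
    if flags == 0 {
        return 0;
    }
    let mut ret = PTF_PRESENT;
    if flags & WRITE != 0 {
        ret |= PTF_WRITABLE;
    }
    if flags & EXECUTE == 0 {
        ret |= PTF_NO_EXECUTE;
    }
    ret
}

fn main() {
    // Read-only data: present, not writable, NX set.
    assert_eq!(to_ptf(READ), PTF_PRESENT | PTF_NO_EXECUTE);
    // Read/write/execute: present and writable, NX clear.
    assert_eq!(to_ptf(READ | WRITE | EXECUTE), PTF_PRESENT | PTF_WRITABLE);
}
```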
+
+/// An x86_64 page table entry.
+#[derive(Clone, Copy)]
+#[repr(transparent)]
+pub struct X64PTE(u64);
+
+impl X64PTE {
+ const PHYS_ADDR_MASK: u64 = 0x000f_ffff_ffff_f000; // bits 12..52
+}
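`PHYS_ADDR_MASK` splits a PTE into a 4K-aligned frame address (bits 12..52) and flag/reserved bits. A small sketch with a made-up entry value shows that the two halves can be packed and recovered independently:

```rust
// Same constant as `X64PTE::PHYS_ADDR_MASK` above; the address and flag
// values below are made up for illustration.
const PHYS_ADDR_MASK: u64 = 0x000f_ffff_ffff_f000; // bits 12..52

fn main() {
    let paddr: u64 = 0x1234_5000; // a 4K-aligned physical frame
    let flags: u64 = 0b11; // low flag bits (e.g. PRESENT | WRITABLE)
    let entry = flags | (paddr & PHYS_ADDR_MASK);

    // Masking recovers each half without disturbing the other.
    assert_eq!(entry & PHYS_ADDR_MASK, paddr);
    assert_eq!(entry & !PHYS_ADDR_MASK, flags);
}
```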
+
+impl GenericPTE for X64PTE {
+ fn new_page(paddr: PhysAddr, flags: MappingFlags, is_huge: bool) -> Self {
+ let mut flags = PTF::from(flags);
+ if is_huge {
+ flags |= PTF::HUGE_PAGE;
+ }
+ Self(flags.bits() | (paddr.as_usize() as u64 & Self::PHYS_ADDR_MASK))
+ }
+ fn new_table(paddr: PhysAddr) -> Self {
+ let flags = PTF::PRESENT | PTF::WRITABLE | PTF::USER_ACCESSIBLE;
+ Self(flags.bits() | (paddr.as_usize() as u64 & Self::PHYS_ADDR_MASK))
+ }
+ fn paddr(&self) -> PhysAddr {
+ PhysAddr::from((self.0 & Self::PHYS_ADDR_MASK) as usize)
+ }
+ fn flags(&self) -> MappingFlags {
+ PTF::from_bits_truncate(self.0).into()
+ }
+ fn set_paddr(&mut self, paddr: PhysAddr) {
+ self.0 = (self.0 & !Self::PHYS_ADDR_MASK) | (paddr.as_usize() as u64 & Self::PHYS_ADDR_MASK)
+ }
+ fn set_flags(&mut self, flags: MappingFlags, is_huge: bool) {
+ let mut flags = PTF::from(flags);
+ if is_huge {
+ flags |= PTF::HUGE_PAGE;
+ }
+ self.0 = (self.0 & Self::PHYS_ADDR_MASK) | flags.bits()
+ }
+
+ fn bits(self) -> usize {
+ self.0 as usize
+ }
+ fn is_unused(&self) -> bool {
+ self.0 == 0
+ }
+ fn is_present(&self) -> bool {
+ PTF::from_bits_truncate(self.0).contains(PTF::PRESENT)
+ }
+ fn is_huge(&self) -> bool {
+ PTF::from_bits_truncate(self.0).contains(PTF::HUGE_PAGE)
+ }
+ fn clear(&mut self) {
+ self.0 = 0
+ }
+}
+
+impl fmt::Debug for X64PTE {
+ fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
+ let mut f = f.debug_struct("X64PTE");
+ f.field("raw", &self.0)
+ .field("paddr", &self.paddr())
+ .field("flags", &self.flags())
+ .finish()
+ }
+}
+
#![cfg_attr(not(test), no_std)]
+#![feature(doc_auto_cfg)]
+#![feature(doc_cfg)]
+#![doc = include_str!("../README.md")]
+
+mod arch;
+
+use core::fmt::{self, Debug};
+use memory_addr::PhysAddr;
+
+pub use self::arch::*;
+
+bitflags::bitflags! {
+ /// Generic page table entry flags that indicate the corresponding mapped
+ /// memory region permissions and attributes.
+ #[derive(Clone, Copy, PartialEq)]
+ pub struct MappingFlags: usize {
+ /// The memory is readable.
+ const READ = 1 << 0;
+ /// The memory is writable.
+ const WRITE = 1 << 1;
+ /// The memory is executable.
+ const EXECUTE = 1 << 2;
+ /// The memory is user accessible.
+ const USER = 1 << 3;
+ /// The memory is device memory.
+ const DEVICE = 1 << 4;
+ /// The memory is uncached.
+ const UNCACHED = 1 << 5;
+ }
+}
+
+impl Debug for MappingFlags {
+ fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
+ Debug::fmt(&self.0, f)
+ }
+}
+
+/// A generic page table entry.
+///
+/// All architecture-specific page table entry types implement this trait.
+pub trait GenericPTE: Debug + Clone + Copy + Sync + Send + Sized {
+ /// Creates a page table entry that points to a terminal page or block.
+ fn new_page(paddr: PhysAddr, flags: MappingFlags, is_huge: bool) -> Self;
+ /// Creates a page table entry that points to a next-level page table.
+ fn new_table(paddr: PhysAddr) -> Self;
+
+ /// Returns the physical address mapped by this entry.
+ fn paddr(&self) -> PhysAddr;
+ /// Returns the flags of this entry.
+ fn flags(&self) -> MappingFlags;
+
+ /// Sets the mapped physical address of the entry.
+ fn set_paddr(&mut self, paddr: PhysAddr);
+ /// Sets the flags of the entry.
+ fn set_flags(&mut self, flags: MappingFlags, is_huge: bool);
+
+ /// Returns the raw bits of this entry.
+ fn bits(self) -> usize;
+ /// Returns whether this entry is zero.
+ fn is_unused(&self) -> bool;
+ /// Returns whether the flags of this entry indicate it is present.
+ fn is_present(&self) -> bool;
+ /// For entries at non-last translation levels, returns whether this entry
+ /// maps to a huge frame.
+ fn is_huge(&self) -> bool;
+ /// Sets this entry to zero.
+ fn clear(&mut self);
+}
+
//! AArch64 specific page table structures.
+
+use crate::{PageTable64, PagingMetaData};
+use page_table_entry::aarch64::A64PTE;
+
+/// Metadata of AArch64 page tables.
+#[derive(Copy, Clone)]
+pub struct A64PagingMetaData;
+
+impl PagingMetaData for A64PagingMetaData {
+ const LEVELS: usize = 4;
+ const PA_MAX_BITS: usize = 48;
+ const VA_MAX_BITS: usize = 48;
+
+ fn vaddr_is_valid(vaddr: usize) -> bool {
+ let top_bits = vaddr >> Self::VA_MAX_BITS;
+ top_bits == 0 || top_bits == 0xffff
+ }
+}
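The AArch64 `vaddr_is_valid` above accepts only canonical addresses: with `VA_MAX_BITS = 48`, the top 16 bits must be all zero (the low, user half) or all one (the high, kernel half). A self-contained sketch of the same check, with illustrative addresses:

```rust
// Mirrors `A64PagingMetaData::vaddr_is_valid` for VA_MAX_BITS = 48.
const VA_MAX_BITS: usize = 48;

fn vaddr_is_valid(vaddr: usize) -> bool {
    let top_bits = vaddr >> VA_MAX_BITS;
    top_bits == 0 || top_bits == 0xffff
}

fn main() {
    assert!(vaddr_is_valid(0x0000_7fff_ffff_f000)); // low (user) half
    assert!(vaddr_is_valid(0xffff_0000_0000_1000)); // high (kernel) half
    assert!(!vaddr_is_valid(0x0001_0000_0000_0000)); // non-canonical
}
```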
+
+/// AArch64 VMSAv8-64 translation table.
+pub type A64PageTable<H> = PageTable64<A64PagingMetaData, A64PTE, H>;
+
//! RISC-V specific page table structures.
+
+use crate::{PageTable64, PagingMetaData};
+use page_table_entry::riscv::Rv64PTE;
+
+/// Metadata of RISC-V Sv39 page tables.
+#[derive(Clone, Copy)]
+pub struct Sv39MetaData;
+
+/// Metadata of RISC-V Sv48 page tables.
+#[derive(Clone, Copy)]
+pub struct Sv48MetaData;
+
+impl PagingMetaData for Sv39MetaData {
+ const LEVELS: usize = 3;
+ const PA_MAX_BITS: usize = 56;
+ const VA_MAX_BITS: usize = 39;
+}
+
+impl PagingMetaData for Sv48MetaData {
+ const LEVELS: usize = 4;
+ const PA_MAX_BITS: usize = 56;
+ const VA_MAX_BITS: usize = 48;
+}
+
+/// Sv39: Page-Based 39-bit (3 levels) Virtual-Memory System.
+pub type Sv39PageTable<H> = PageTable64<Sv39MetaData, Rv64PTE, H>;
+
+/// Sv48: Page-Based 48-bit (4 levels) Virtual-Memory System.
+pub type Sv48PageTable<H> = PageTable64<Sv48MetaData, Rv64PTE, H>;
+
//! x86 specific page table structures.
+
+use crate::{PageTable64, PagingMetaData};
+use page_table_entry::x86_64::X64PTE;
+
+/// Metadata of x86_64 page tables.
+pub struct X64PagingMetaData;
+
+impl PagingMetaData for X64PagingMetaData {
+ const LEVELS: usize = 4;
+ const PA_MAX_BITS: usize = 52;
+ const VA_MAX_BITS: usize = 48;
+}
+
+/// x86_64 page table.
+pub type X64PageTable<H> = PageTable64<X64PagingMetaData, X64PTE, H>;
+
extern crate alloc;
+
+use alloc::{vec, vec::Vec};
+use core::marker::PhantomData;
+
+use memory_addr::{PhysAddr, VirtAddr, PAGE_SIZE_4K};
+
+use crate::{GenericPTE, PagingHandler, PagingMetaData};
+use crate::{MappingFlags, PageSize, PagingError, PagingResult};
+
+const ENTRY_COUNT: usize = 512;
+
+const fn p4_index(vaddr: VirtAddr) -> usize {
+ (vaddr.as_usize() >> (12 + 27)) & (ENTRY_COUNT - 1)
+}
+
+const fn p3_index(vaddr: VirtAddr) -> usize {
+ (vaddr.as_usize() >> (12 + 18)) & (ENTRY_COUNT - 1)
+}
+
+const fn p2_index(vaddr: VirtAddr) -> usize {
+ (vaddr.as_usize() >> (12 + 9)) & (ENTRY_COUNT - 1)
+}
+
+const fn p1_index(vaddr: VirtAddr) -> usize {
+ (vaddr.as_usize() >> 12) & (ENTRY_COUNT - 1)
+}
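The four index helpers above slice a virtual address into 9-bit fields, one per translation level, on top of the 12-bit page offset. A sketch that builds an address whose per-level indices are known by construction (it uses a plain `usize` instead of `VirtAddr`):

```rust
// Same index arithmetic as the `p4_index`..`p1_index` helpers above,
// operating on a raw usize for a self-contained demonstration.
const ENTRY_COUNT: usize = 512;

const fn p4_index(vaddr: usize) -> usize { (vaddr >> (12 + 27)) & (ENTRY_COUNT - 1) }
const fn p3_index(vaddr: usize) -> usize { (vaddr >> (12 + 18)) & (ENTRY_COUNT - 1) }
const fn p2_index(vaddr: usize) -> usize { (vaddr >> (12 + 9)) & (ENTRY_COUNT - 1) }
const fn p1_index(vaddr: usize) -> usize { (vaddr >> 12) & (ENTRY_COUNT - 1) }

fn main() {
    // Pack indices 1, 2, 3, 4 into the P4/P3/P2/P1 fields, plus a page offset.
    let vaddr = (1usize << 39) | (2 << 30) | (3 << 21) | (4 << 12) | 0x123;
    assert_eq!(p4_index(vaddr), 1);
    assert_eq!(p3_index(vaddr), 2);
    assert_eq!(p2_index(vaddr), 3);
    assert_eq!(p1_index(vaddr), 4);
}
```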
+
+/// A generic page table struct for 64-bit platform.
+///
+/// It also tracks all intermediate-level tables. They will be deallocated
+/// when the [`PageTable64`] itself is dropped.
+pub struct PageTable64<M: PagingMetaData, PTE: GenericPTE, H: PagingHandler> {
+ root_paddr: PhysAddr,
+ intrm_tables: Vec<PhysAddr>,
+ _phantom: PhantomData<(M, PTE, H)>,
+}
+
+impl<M: PagingMetaData, PTE: GenericPTE, H: PagingHandler> PageTable64<M, PTE, H> {
+ /// Creates a new page table instance, or returns an error.
+ ///
+ /// It will allocate a new page for the root page table.
+ pub fn try_new() -> PagingResult<Self> {
+ let root_paddr = Self::alloc_table()?;
+ Ok(Self {
+ root_paddr,
+ intrm_tables: vec![root_paddr],
+ _phantom: PhantomData,
+ })
+ }
+
+ /// Returns the physical address of the root page table.
+ pub const fn root_paddr(&self) -> PhysAddr {
+ self.root_paddr
+ }
+
+ /// Maps a virtual page to a physical frame with the given `page_size`
+ /// and mapping `flags`.
+ ///
+ /// The virtual page starts at `vaddr`, and the physical frame starts at
+ /// `target`. If the addresses are not aligned to the page size, they will be
+ /// aligned down automatically.
+ ///
+ /// Returns [`Err(PagingError::AlreadyMapped)`](PagingError::AlreadyMapped)
+ /// if the mapping is already present.
+ pub fn map(
+ &mut self,
+ vaddr: VirtAddr,
+ target: PhysAddr,
+ page_size: PageSize,
+ flags: MappingFlags,
+ ) -> PagingResult {
+ let entry = self.get_entry_mut_or_create(vaddr, page_size)?;
+ if !entry.is_unused() {
+ return Err(PagingError::AlreadyMapped);
+ }
+ *entry = GenericPTE::new_page(target.align_down(page_size), flags, page_size.is_huge());
+ Ok(())
+ }
+
+ /// Unmaps the mapping that starts at `vaddr`.
+ ///
+ /// Returns [`Err(PagingError::NotMapped)`](PagingError::NotMapped) if the
+ /// mapping is not present.
+ pub fn unmap(&mut self, vaddr: VirtAddr) -> PagingResult<(PhysAddr, PageSize)> {
+ let (entry, size) = self.get_entry_mut(vaddr)?;
+ if entry.is_unused() {
+ return Err(PagingError::NotMapped);
+ }
+ let paddr = entry.paddr();
+ entry.clear();
+ Ok((paddr, size))
+ }
+
+ /// Queries the mapping that starts at `vaddr`.
+ ///
+ /// Returns the physical address of the target frame, mapping flags, and
+ /// the page size.
+ ///
+ /// Returns [`Err(PagingError::NotMapped)`](PagingError::NotMapped) if the
+ /// mapping is not present.
+ pub fn query(&self, vaddr: VirtAddr) -> PagingResult<(PhysAddr, MappingFlags, PageSize)> {
+ let (entry, size) = self.get_entry_mut(vaddr)?;
+ if entry.is_unused() {
+ return Err(PagingError::NotMapped);
+ }
+ let off = vaddr.align_offset(size);
+ Ok((entry.paddr() + off, entry.flags(), size))
+ }
+
+ /// Updates the target or flags of the mapping that starts at `vaddr`. If the
+ /// corresponding argument is `None`, it will not be updated.
+ ///
+ /// Returns the page size of the mapping.
+ ///
+ /// Returns [`Err(PagingError::NotMapped)`](PagingError::NotMapped) if the
+ /// mapping is not present.
+ pub fn update(
+ &mut self,
+ vaddr: VirtAddr,
+ paddr: Option<PhysAddr>,
+ flags: Option<MappingFlags>,
+ ) -> PagingResult<PageSize> {
+ let (entry, size) = self.get_entry_mut(vaddr)?;
+ if let Some(paddr) = paddr {
+ entry.set_paddr(paddr);
+ }
+ if let Some(flags) = flags {
+ entry.set_flags(flags, size.is_huge());
+ }
+ Ok(size)
+ }
+
+ /// Maps a contiguous virtual memory region to a contiguous physical memory
+ /// region with the given mapping `flags`.
+ ///
+ /// The virtual and physical memory regions start with `vaddr` and `paddr`
+ /// respectively. The region size is `size`. The addresses and `size` must
+ /// be aligned to 4K, otherwise it will return [`Err(PagingError::NotAligned)`].
+ ///
+ /// When `allow_huge` is true, it will try to map the region with huge pages
+ /// if possible. Otherwise, it will map the region with 4K pages.
+ ///
+ /// [`Err(PagingError::NotAligned)`]: PagingError::NotAligned
+ pub fn map_region(
+ &mut self,
+ vaddr: VirtAddr,
+ paddr: PhysAddr,
+ size: usize,
+ flags: MappingFlags,
+ allow_huge: bool,
+ ) -> PagingResult {
+ if !vaddr.is_aligned(PageSize::Size4K)
+ || !paddr.is_aligned(PageSize::Size4K)
+ || !memory_addr::is_aligned(size, PageSize::Size4K.into())
+ {
+ return Err(PagingError::NotAligned);
+ }
+ trace!(
+ "map_region({:#x}): [{:#x}, {:#x}) -> [{:#x}, {:#x}) {:?}",
+ self.root_paddr(),
+ vaddr,
+ vaddr + size,
+ paddr,
+ paddr + size,
+ flags,
+ );
+ let mut vaddr = vaddr;
+ let mut paddr = paddr;
+ let mut size = size;
+ while size > 0 {
+ let page_size = if allow_huge {
+ if vaddr.is_aligned(PageSize::Size1G)
+ && paddr.is_aligned(PageSize::Size1G)
+ && size >= PageSize::Size1G as usize
+ {
+ PageSize::Size1G
+ } else if vaddr.is_aligned(PageSize::Size2M)
+ && paddr.is_aligned(PageSize::Size2M)
+ && size >= PageSize::Size2M as usize
+ {
+ PageSize::Size2M
+ } else {
+ PageSize::Size4K
+ }
+ } else {
+ PageSize::Size4K
+ };
+ self.map(vaddr, paddr, page_size, flags).inspect_err(|e| {
+ error!(
+ "failed to map page: {:#x?}({:?}) -> {:#x?}, {:?}",
+ vaddr, page_size, paddr, e
+ )
+ })?;
+ vaddr += page_size as usize;
+ paddr += page_size as usize;
+ size -= page_size as usize;
+ }
+ Ok(())
+ }
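The page-size selection inside `map_region` picks the largest size for which both addresses are aligned and enough bytes remain, falling back to 4K. A self-contained sketch of that decision (plain `usize` alignment checks stand in for `VirtAddr::is_aligned`):

```rust
// Same values as the `PageSize` variants in this crate.
const SIZE_4K: usize = 0x1000;
const SIZE_2M: usize = 0x20_0000;
const SIZE_1G: usize = 0x4000_0000;

// Mirrors the `allow_huge` branch of `map_region`: try 1G, then 2M, then 4K.
fn pick_page_size(vaddr: usize, paddr: usize, size: usize) -> usize {
    let aligned = |a: usize, to: usize| a % to == 0;
    if aligned(vaddr, SIZE_1G) && aligned(paddr, SIZE_1G) && size >= SIZE_1G {
        SIZE_1G
    } else if aligned(vaddr, SIZE_2M) && aligned(paddr, SIZE_2M) && size >= SIZE_2M {
        SIZE_2M
    } else {
        SIZE_4K
    }
}

fn main() {
    // 1 GiB-aligned with a full 1 GiB remaining: use a 1G block.
    assert_eq!(pick_page_size(0x4000_0000, 0x8000_0000, SIZE_1G), SIZE_1G);
    // 2 MiB-aligned but less than 1 GiB remaining: use 2M pages.
    assert_eq!(pick_page_size(0x20_0000, 0x40_0000, SIZE_1G - SIZE_2M), SIZE_2M);
    // Unaligned addresses: fall back to 4K.
    assert_eq!(pick_page_size(0x1000, 0x2000, SIZE_2M), SIZE_4K);
}
```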
+
+ /// Unmaps a contiguous virtual memory region.
+ ///
+ /// The region must have been mapped using [`PageTable64::map_region`];
+ /// otherwise unexpected behavior may occur.
+ pub fn unmap_region(&mut self, vaddr: VirtAddr, size: usize) -> PagingResult {
+ trace!(
+ "unmap_region({:#x}) [{:#x}, {:#x})",
+ self.root_paddr(),
+ vaddr,
+ vaddr + size,
+ );
+ let mut vaddr = vaddr;
+ let mut size = size;
+ while size > 0 {
+ let (_, page_size) = self
+ .unmap(vaddr)
+ .inspect_err(|e| error!("failed to unmap page: {:#x?}, {:?}", vaddr, e))?;
+ assert!(vaddr.is_aligned(page_size));
+ assert!(page_size as usize <= size);
+ vaddr += page_size as usize;
+ size -= page_size as usize;
+ }
+ Ok(())
+ }
+
+ /// Walks the page table recursively.
+ ///
+ /// `func` is called on each present entry visited, at every level. The
+ /// maximum number of entries enumerated in one table is limited by `limit`.
+ ///
+ /// The arguments of `func` are:
+ /// - Current level (starts with `0`): `usize`
+ /// - The index of the entry in the current-level table: `usize`
+ /// - The virtual address that is mapped to the entry: [`VirtAddr`]
+ /// - The reference of the entry: [`&PTE`](GenericPTE)
+ pub fn walk<F>(&self, limit: usize, func: &F) -> PagingResult
+ where
+ F: Fn(usize, usize, VirtAddr, &PTE),
+ {
+ self.walk_recursive(
+ self.table_of(self.root_paddr()),
+ 0,
+ VirtAddr::from(0),
+ limit,
+ func,
+ )
+ }
+}
+
+// Private implements.
+impl<M: PagingMetaData, PTE: GenericPTE, H: PagingHandler> PageTable64<M, PTE, H> {
+ fn alloc_table() -> PagingResult<PhysAddr> {
+ if let Some(paddr) = H::alloc_frame() {
+ let ptr = H::phys_to_virt(paddr).as_mut_ptr();
+ unsafe { core::ptr::write_bytes(ptr, 0, PAGE_SIZE_4K) };
+ Ok(paddr)
+ } else {
+ Err(PagingError::NoMemory)
+ }
+ }
+
+ fn table_of<'a>(&self, paddr: PhysAddr) -> &'a [PTE] {
+ let ptr = H::phys_to_virt(paddr).as_ptr() as _;
+ unsafe { core::slice::from_raw_parts(ptr, ENTRY_COUNT) }
+ }
+
+ fn table_of_mut<'a>(&self, paddr: PhysAddr) -> &'a mut [PTE] {
+ let ptr = H::phys_to_virt(paddr).as_mut_ptr() as _;
+ unsafe { core::slice::from_raw_parts_mut(ptr, ENTRY_COUNT) }
+ }
+
+ fn next_table_mut<'a>(&self, entry: &PTE) -> PagingResult<&'a mut [PTE]> {
+ if !entry.is_present() {
+ Err(PagingError::NotMapped)
+ } else if entry.is_huge() {
+ Err(PagingError::MappedToHugePage)
+ } else {
+ Ok(self.table_of_mut(entry.paddr()))
+ }
+ }
+
+ fn next_table_mut_or_create<'a>(&mut self, entry: &mut PTE) -> PagingResult<&'a mut [PTE]> {
+ if entry.is_unused() {
+ let paddr = Self::alloc_table()?;
+ self.intrm_tables.push(paddr);
+ *entry = GenericPTE::new_table(paddr);
+ Ok(self.table_of_mut(paddr))
+ } else {
+ self.next_table_mut(entry)
+ }
+ }
+
+ fn get_entry_mut(&self, vaddr: VirtAddr) -> PagingResult<(&mut PTE, PageSize)> {
+ let p3 = if M::LEVELS == 3 {
+ self.table_of_mut(self.root_paddr())
+ } else if M::LEVELS == 4 {
+ let p4 = self.table_of_mut(self.root_paddr());
+ let p4e = &mut p4[p4_index(vaddr)];
+ self.next_table_mut(p4e)?
+ } else {
+ unreachable!()
+ };
+ let p3e = &mut p3[p3_index(vaddr)];
+ if p3e.is_huge() {
+ return Ok((p3e, PageSize::Size1G));
+ }
+
+ let p2 = self.next_table_mut(p3e)?;
+ let p2e = &mut p2[p2_index(vaddr)];
+ if p2e.is_huge() {
+ return Ok((p2e, PageSize::Size2M));
+ }
+
+ let p1 = self.next_table_mut(p2e)?;
+ let p1e = &mut p1[p1_index(vaddr)];
+ Ok((p1e, PageSize::Size4K))
+ }
+
+ fn get_entry_mut_or_create(
+ &mut self,
+ vaddr: VirtAddr,
+ page_size: PageSize,
+ ) -> PagingResult<&mut PTE> {
+ let p3 = if M::LEVELS == 3 {
+ self.table_of_mut(self.root_paddr())
+ } else if M::LEVELS == 4 {
+ let p4 = self.table_of_mut(self.root_paddr());
+ let p4e = &mut p4[p4_index(vaddr)];
+ self.next_table_mut_or_create(p4e)?
+ } else {
+ unreachable!()
+ };
+ let p3e = &mut p3[p3_index(vaddr)];
+ if page_size == PageSize::Size1G {
+ return Ok(p3e);
+ }
+
+ let p2 = self.next_table_mut_or_create(p3e)?;
+ let p2e = &mut p2[p2_index(vaddr)];
+ if page_size == PageSize::Size2M {
+ return Ok(p2e);
+ }
+
+ let p1 = self.next_table_mut_or_create(p2e)?;
+ let p1e = &mut p1[p1_index(vaddr)];
+ Ok(p1e)
+ }
+
+ fn walk_recursive<F>(
+ &self,
+ table: &[PTE],
+ level: usize,
+ start_vaddr: VirtAddr,
+ limit: usize,
+ func: &F,
+ ) -> PagingResult
+ where
+ F: Fn(usize, usize, VirtAddr, &PTE),
+ {
+ let mut n = 0;
+ for (i, entry) in table.iter().enumerate() {
+ let vaddr = start_vaddr + (i << (12 + (M::LEVELS - 1 - level) * 9));
+ if entry.is_present() {
+ func(level, i, vaddr, entry);
+ if level < M::LEVELS - 1 && !entry.is_huge() {
+ let table_entry = self.next_table_mut(entry)?;
+ self.walk_recursive(table_entry, level + 1, vaddr, limit, func)?;
+ }
+ n += 1;
+ if n >= limit {
+ break;
+ }
+ }
+ }
+ Ok(())
+ }
+}
+
+impl<M: PagingMetaData, PTE: GenericPTE, H: PagingHandler> Drop for PageTable64<M, PTE, H> {
+ fn drop(&mut self) {
+ for frame in &self.intrm_tables {
+ H::dealloc_frame(*frame);
+ }
+ }
+}
+
#![cfg_attr(not(test), no_std)]
+#![feature(const_trait_impl)]
+#![feature(doc_auto_cfg)]
+#![doc = include_str!("../README.md")]
+
+#[macro_use]
+extern crate log;
+
+mod arch;
+mod bits64;
+
+use memory_addr::{PhysAddr, VirtAddr};
+
+pub use self::arch::*;
+pub use self::bits64::PageTable64;
+
+#[doc(no_inline)]
+pub use page_table_entry::{GenericPTE, MappingFlags};
+
+/// The error type for page table operation failures.
+#[derive(Debug, PartialEq)]
+pub enum PagingError {
+ /// Cannot allocate memory.
+ NoMemory,
+ /// The address is not aligned to the page size.
+ NotAligned,
+ /// The mapping is not present.
+ NotMapped,
+ /// The mapping is already present.
+ AlreadyMapped,
+ /// The page table entry represents a huge page, but the target physical
+ /// frame is 4K in size.
+ MappedToHugePage,
+}
+
+/// The specialized `Result` type for page table operations.
+pub type PagingResult<T = ()> = Result<T, PagingError>;
+
+/// The **architecture-dependent** metadata that must be provided for
+/// [`PageTable64`].
+pub trait PagingMetaData: Sync + Send + Sized {
+ /// The number of levels of the hardware page table.
+ const LEVELS: usize;
+ /// The maximum number of bits of physical address.
+ const PA_MAX_BITS: usize;
+ /// The maximum number of bits of virtual address.
+ const VA_MAX_BITS: usize;
+
+ /// The maximum physical address.
+ const PA_MAX_ADDR: usize = (1 << Self::PA_MAX_BITS) - 1;
+
+ /// Whether a given physical address is valid.
+ #[inline]
+ fn paddr_is_valid(paddr: usize) -> bool {
+ paddr <= Self::PA_MAX_ADDR // default
+ }
+
+ /// Whether a given virtual address is valid.
+ #[inline]
+ fn vaddr_is_valid(vaddr: usize) -> bool {
+ // default: top bits sign extended
+ let top_mask = usize::MAX << (Self::VA_MAX_BITS - 1);
+ (vaddr & top_mask) == 0 || (vaddr & top_mask) == top_mask
+ }
+}
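Note that the default `vaddr_is_valid` is slightly stricter than a plain top-bits check: it masks from bit `VA_MAX_BITS - 1` upward, so the top bit of the valid range must itself be sign-extended. A self-contained sketch for `VA_MAX_BITS = 48`:

```rust
// Mirrors the default `PagingMetaData::vaddr_is_valid` above: bits 47..64
// must be all zero or all one (a canonical, sign-extended address).
const VA_MAX_BITS: usize = 48;

fn vaddr_is_valid(vaddr: usize) -> bool {
    let top_mask = usize::MAX << (VA_MAX_BITS - 1);
    (vaddr & top_mask) == 0 || (vaddr & top_mask) == top_mask
}

fn main() {
    assert!(vaddr_is_valid(0x0000_7fff_ffff_ffff)); // highest canonical low address
    assert!(vaddr_is_valid(0xffff_8000_0000_0000)); // lowest canonical high address
    assert!(!vaddr_is_valid(0x0000_8000_0000_0000)); // bit 47 set, not sign-extended
}
```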
+
+/// The low-level **OS-dependent** helpers that must be provided for
+/// [`PageTable64`].
+pub trait PagingHandler: Sized {
+ /// Request to allocate a 4K-sized physical frame.
+ fn alloc_frame() -> Option<PhysAddr>;
+ /// Request to free an allocated physical frame.
+ fn dealloc_frame(paddr: PhysAddr);
+ /// Returns a virtual address that maps to the given physical address.
+ ///
+ /// Used to access the physical memory directly in page table implementation.
+ fn phys_to_virt(paddr: PhysAddr) -> VirtAddr;
+}
+
+/// The page sizes supported by the hardware page table.
+#[repr(usize)]
+#[derive(Debug, Copy, Clone, Eq, PartialEq)]
+pub enum PageSize {
+ /// Size of 4 kilobytes (2<sup>12</sup> bytes).
+ Size4K = 0x1000,
+ /// Size of 2 megabytes (2<sup>21</sup> bytes).
+ Size2M = 0x20_0000,
+ /// Size of 1 gigabyte (2<sup>30</sup> bytes).
+ Size1G = 0x4000_0000,
+}
+
+impl PageSize {
+ /// Whether this page size is considered huge (larger than 4K).
+ pub const fn is_huge(self) -> bool {
+ matches!(self, Self::Size1G | Self::Size2M)
+ }
+}
+
+impl From<PageSize> for usize {
+ #[inline]
+ fn from(size: PageSize) -> usize {
+ size as usize
+ }
+}
+
fn:
) to \
+ restrict the search to a given item kind.","Accepted kinds are: fn
, mod
, struct
, \
+ enum
, trait
, type
, macro
, \
+ and const
.","Search functions by type signature (e.g., vec -> usize
or \
+ -> vec
or String, enum:Cow -> bool
)","You can look for items with an exact name by putting double quotes around \
+ your request: \"string\"
","Look for functions that accept or return \
+ slices and \
+ arrays by writing \
+ square brackets (e.g., -> [u8]
or [] -> Option
)","Look for items inside another one by searching for a path: vec::Vec
",].map(x=>""+x+"
").join("");const div_infos=document.createElement("div");addClass(div_infos,"infos");div_infos.innerHTML="${value.replaceAll(" ", " ")}
`}else{error[index]=value}});output+=`Takes each element in the Iterator
: if it is an Err
, no further\nelements are taken, and the Err
is returned. Should no Err
occur, a\ncontainer with the values of each Result
is returned.
Here is an example which increments every integer in a vector,\nchecking for overflow:
\n\nlet v = vec![1, 2];\nlet res: Result<Vec<u32>, &'static str> = v.iter().map(|x: &u32|\n x.checked_add(1).ok_or(\"Overflow!\")\n).collect();\nassert_eq!(res, Ok(vec![2, 3]));
Here is another example that tries to subtract one from another list\nof integers, this time checking for underflow:
\n\nlet v = vec![1, 2, 0];\nlet res: Result<Vec<u32>, &'static str> = v.iter().map(|x: &u32|\n x.checked_sub(1).ok_or(\"Underflow!\")\n).collect();\nassert_eq!(res, Err(\"Underflow!\"));
Here is a variation on the previous example, showing that no\nfurther elements are taken from iter
after the first Err
.
let v = vec![3, 2, 1, 10];\nlet mut shared = 0;\nlet res: Result<Vec<u32>, &'static str> = v.iter().map(|x: &u32| {\n shared += x;\n x.checked_sub(2).ok_or(\"Underflow!\")\n}).collect();\nassert_eq!(res, Err(\"Underflow!\"));\nassert_eq!(shared, 6);
Since the third element caused an underflow, no further elements were taken,\nso the final value of shared
is 6 (= 3 + 2 + 1
), not 16.
try_trait_v2
)Residual
type. Read moreReturns a consuming iterator over the possibly contained value.
\nThe iterator yields one value if the result is Result::Ok
, otherwise none.
let x: Result<u32, &str> = Ok(5);\nlet v: Vec<u32> = x.into_iter().collect();\nassert_eq!(v, [5]);\n\nlet x: Result<u32, &str> = Err(\"nothing!\");\nlet v: Vec<u32> = x.into_iter().collect();\nassert_eq!(v, []);
self
and other
) and is used by the <=
\noperator. Read moreTakes each element in the Iterator
: if it is an Err
, no further\nelements are taken, and the Err
is returned. Should no Err
\noccur, the product of all elements is returned.
This multiplies each number in a vector of strings,\nif a string could not be parsed the operation returns Err
:
let nums = vec![\"5\", \"10\", \"1\", \"2\"];\nlet total: Result<usize, _> = nums.iter().map(|w| w.parse::<usize>()).product();\nassert_eq!(total, Ok(100));\nlet nums = vec![\"5\", \"10\", \"one\", \"2\"];\nlet total: Result<usize, _> = nums.iter().map(|w| w.parse::<usize>()).product();\nassert!(total.is_err());
Maps a Result<&mut T, E>
to a Result<T, E>
by copying the contents of the\nOk
part.
let mut val = 12;\nlet x: Result<&mut i32, i32> = Ok(&mut val);\nassert_eq!(x, Ok(&mut 12));\nlet copied = x.copied();\nassert_eq!(copied, Ok(12));
Maps a Result<&mut T, E>
to a Result<T, E>
by cloning the contents of the\nOk
part.
let mut val = 12;\nlet x: Result<&mut i32, i32> = Ok(&mut val);\nassert_eq!(x, Ok(&mut 12));\nlet cloned = x.cloned();\nassert_eq!(cloned, Ok(12));
Transposes a Result
of an Option
into an Option
of a Result
.
Ok(None)
will be mapped to None
.\nOk(Some(_))
and Err(_)
will be mapped to Some(Ok(_))
and Some(Err(_))
.
#[derive(Debug, Eq, PartialEq)]\nstruct SomeErr;\n\nlet x: Result<Option<i32>, SomeErr> = Ok(Some(5));\nlet y: Option<Result<i32, SomeErr>> = Some(Ok(5));\nassert_eq!(x.transpose(), y);
result_flattening
)Converts from Result<Result<T, E>, E>
to Result<T, E>
#![feature(result_flattening)]\nlet x: Result<Result<&'static str, u32>, u32> = Ok(Ok(\"hello\"));\nassert_eq!(Ok(\"hello\"), x.flatten());\n\nlet x: Result<Result<&'static str, u32>, u32> = Ok(Err(6));\nassert_eq!(Err(6), x.flatten());\n\nlet x: Result<Result<&'static str, u32>, u32> = Err(6);\nassert_eq!(Err(6), x.flatten());
Flattening only removes one level of nesting at a time:
\n\n#![feature(result_flattening)]\nlet x: Result<Result<Result<&'static str, u32>, u32>, u32> = Ok(Ok(Ok(\"hello\")));\nassert_eq!(Ok(Ok(\"hello\")), x.flatten());\nassert_eq!(Ok(\"hello\"), x.flatten().flatten());
Returns true
if the result is Ok
and the value inside of it matches a predicate.
let x: Result<u32, &str> = Ok(2);\nassert_eq!(x.is_ok_and(|x| x > 1), true);\n\nlet x: Result<u32, &str> = Ok(0);\nassert_eq!(x.is_ok_and(|x| x > 1), false);\n\nlet x: Result<u32, &str> = Err(\"hey\");\nassert_eq!(x.is_ok_and(|x| x > 1), false);
Returns true
if the result is Err
and the value inside of it matches a predicate.
use std::io::{Error, ErrorKind};\n\nlet x: Result<u32, Error> = Err(Error::new(ErrorKind::NotFound, \"!\"));\nassert_eq!(x.is_err_and(|x| x.kind() == ErrorKind::NotFound), true);\n\nlet x: Result<u32, Error> = Err(Error::new(ErrorKind::PermissionDenied, \"!\"));\nassert_eq!(x.is_err_and(|x| x.kind() == ErrorKind::NotFound), false);\n\nlet x: Result<u32, Error> = Ok(123);\nassert_eq!(x.is_err_and(|x| x.kind() == ErrorKind::NotFound), false);
Converts from Result<T, E>
to Option<E>
.
Converts self
into an Option<E>
, consuming self
,\nand discarding the success value, if any.
let x: Result<u32, &str> = Ok(2);\nassert_eq!(x.err(), None);\n\nlet x: Result<u32, &str> = Err(\"Nothing here\");\nassert_eq!(x.err(), Some(\"Nothing here\"));
Converts from &Result<T, E>
to Result<&T, &E>
.
Produces a new Result
, containing a reference\ninto the original, leaving the original in place.
let x: Result<u32, &str> = Ok(2);\nassert_eq!(x.as_ref(), Ok(&2));\n\nlet x: Result<u32, &str> = Err(\"Error\");\nassert_eq!(x.as_ref(), Err(&\"Error\"));
Converts from &mut Result<T, E>
to Result<&mut T, &mut E>
.
fn mutate(r: &mut Result<i32, i32>) {\n match r.as_mut() {\n Ok(v) => *v = 42,\n Err(e) => *e = 0,\n }\n}\n\nlet mut x: Result<i32, i32> = Ok(2);\nmutate(&mut x);\nassert_eq!(x.unwrap(), 42);\n\nlet mut x: Result<i32, i32> = Err(13);\nmutate(&mut x);\nassert_eq!(x.unwrap_err(), 0);
Maps a Result<T, E>
to Result<U, E>
by applying a function to a\ncontained Ok
value, leaving an Err
value untouched.
This function can be used to compose the results of two functions.
\nPrint the numbers on each line of a string multiplied by two.
\n\nlet line = \"1\\n2\\n3\\n4\\n\";\n\nfor num in line.lines() {\n match num.parse::<i32>().map(|i| i * 2) {\n Ok(n) => println!(\"{n}\"),\n Err(..) => {}\n }\n}
Returns the provided default (if Err
), or\napplies a function to the contained value (if Ok
).
Arguments passed to map_or
are eagerly evaluated; if you are passing\nthe result of a function call, it is recommended to use map_or_else
,\nwhich is lazily evaluated.
let x: Result<_, &str> = Ok(\"foo\");\nassert_eq!(x.map_or(42, |v| v.len()), 3);\n\nlet x: Result<&str, _> = Err(\"bar\");\nassert_eq!(x.map_or(42, |v| v.len()), 42);
Maps a Result<T, E>
to U
by applying fallback function default
to\na contained Err
value, or function f
to a contained Ok
value.
This function can be used to unpack a successful result\nwhile handling an error.
\nlet k = 21;\n\nlet x : Result<_, &str> = Ok(\"foo\");\nassert_eq!(x.map_or_else(|e| k * 2, |v| v.len()), 3);\n\nlet x : Result<&str, _> = Err(\"bar\");\nassert_eq!(x.map_or_else(|e| k * 2, |v| v.len()), 42);
Maps a Result<T, E>
to Result<T, F>
by applying a function to a\ncontained Err
value, leaving an Ok
value untouched.
This function can be used to pass through a successful result while handling\nan error.
\nfn stringify(x: u32) -> String { format!(\"error code: {x}\") }\n\nlet x: Result<u32, u32> = Ok(2);\nassert_eq!(x.map_err(stringify), Ok(2));\n\nlet x: Result<u32, u32> = Err(13);\nassert_eq!(x.map_err(stringify), Err(\"error code: 13\".to_string()));
Converts from Result<T, E>
(or &Result<T, E>
) to Result<&<T as Deref>::Target, &E>
.
Coerces the Ok
variant of the original Result
via Deref
\nand returns the new Result
.
let x: Result<String, u32> = Ok(\"hello\".to_string());\nlet y: Result<&str, &u32> = Ok(\"hello\");\nassert_eq!(x.as_deref(), y);\n\nlet x: Result<String, u32> = Err(42);\nlet y: Result<&str, &u32> = Err(&42);\nassert_eq!(x.as_deref(), y);
Converts from Result<T, E>
(or &mut Result<T, E>
) to Result<&mut <T as DerefMut>::Target, &mut E>
.
Coerces the Ok
variant of the original Result
via DerefMut
\nand returns the new Result
.
let mut s = \"HELLO\".to_string();\nlet mut x: Result<String, u32> = Ok(\"hello\".to_string());\nlet y: Result<&mut str, &mut u32> = Ok(&mut s);\nassert_eq!(x.as_deref_mut().map(|x| { x.make_ascii_uppercase(); x }), y);\n\nlet mut i = 42;\nlet mut x: Result<String, u32> = Err(42);\nlet y: Result<&mut str, &mut u32> = Err(&mut i);\nassert_eq!(x.as_deref_mut().map(|x| { x.make_ascii_uppercase(); x }), y);
Returns an iterator over the possibly contained value.
\nThe iterator yields one value if the result is Result::Ok
, otherwise none.
let x: Result<u32, &str> = Ok(7);\nassert_eq!(x.iter().next(), Some(&7));\n\nlet x: Result<u32, &str> = Err(\"nothing!\");\nassert_eq!(x.iter().next(), None);
Returns a mutable iterator over the possibly contained value.
\nThe iterator yields one value if the result is Result::Ok
, otherwise none.
let mut x: Result<u32, &str> = Ok(7);\nmatch x.iter_mut().next() {\n Some(v) => *v = 40,\n None => {},\n}\nassert_eq!(x, Ok(40));\n\nlet mut x: Result<u32, &str> = Err(\"nothing!\");\nassert_eq!(x.iter_mut().next(), None);
Returns the contained Ok
value, consuming the self
value.
Because this function may panic, its use is generally discouraged.\nInstead, prefer to use pattern matching and handle the Err
\ncase explicitly, or call unwrap_or
, unwrap_or_else
, or\nunwrap_or_default
.
Panics if the value is an Err
, with a panic message including the\npassed message, and the content of the Err
.
let x: Result<u32, &str> = Err(\"emergency failure\");\nx.expect(\"Testing expect\"); // panics with `Testing expect: emergency failure`
We recommend that expect
messages are used to describe the reason you\nexpect the Result
should be Ok
.
let path = std::env::var(\"IMPORTANT_PATH\")\n .expect(\"env variable `IMPORTANT_PATH` should be set by `wrapper_script.sh`\");
Hint: If you’re having trouble remembering how to phrase expect\nerror messages, remember to focus on the word “should” as in “env\nvariable should be set by blah” or “the given binary should be available\nand executable by the current user”.
\nFor more detail on expect message styles and the reasoning behind our recommendation please\nrefer to the section on “Common Message\nStyles” in the\nstd::error
module docs.
Returns the contained Ok
value, consuming the self
value.
Because this function may panic, its use is generally discouraged.\nInstead, prefer to use pattern matching and handle the Err
\ncase explicitly, or call unwrap_or
, unwrap_or_else
, or\nunwrap_or_default
.
Panics if the value is an Err
, with a panic message provided by the\nErr
’s value.
Basic usage:
\n\nlet x: Result<u32, &str> = Ok(2);\nassert_eq!(x.unwrap(), 2);
let x: Result<u32, &str> = Err(\"emergency failure\");\nx.unwrap(); // panics with `emergency failure`
Returns the contained Ok
value or a default.
Consumes the self
argument then, if Ok
, returns the contained\nvalue, otherwise if Err
, returns the default value for that\ntype.
Converts a string to an integer, turning poorly-formed strings\ninto 0 (the default value for integers). parse
converts\na string to any other type that implements FromStr
, returning an\nErr
on error.
let good_year_from_input = \"1909\";\nlet bad_year_from_input = \"190blarg\";\nlet good_year = good_year_from_input.parse().unwrap_or_default();\nlet bad_year = bad_year_from_input.parse().unwrap_or_default();\n\nassert_eq!(1909, good_year);\nassert_eq!(0, bad_year);
Returns the contained Err
value, consuming the self
value.
Panics if the value is an Ok
, with a panic message including the\npassed message, and the content of the Ok
.
let x: Result<u32, &str> = Ok(10);\nx.expect_err(\"Testing expect_err\"); // panics with `Testing expect_err: 10`
Returns the contained Err
value, consuming the self
value.
Panics if the value is an Ok
, with a custom panic message provided\nby the Ok
’s value.
let x: Result<u32, &str> = Ok(2);\nx.unwrap_err(); // panics with `2`
let x: Result<u32, &str> = Err(\"emergency failure\");\nassert_eq!(x.unwrap_err(), \"emergency failure\");
unwrap_infallible
)Returns the contained Ok
value, but never panics.
Unlike unwrap
, this method is known to never panic on the\nresult types it is implemented for. Therefore, it can be used\ninstead of unwrap
as a maintainability safeguard that will fail\nto compile if the error type of the Result
is later changed\nto an error that can actually occur.
\nfn only_good_news() -> Result<String, !> {\n Ok(\"this is fine\".into())\n}\n\nlet s: String = only_good_news().into_ok();\nprintln!(\"{s}\");
unwrap_infallible
)Returns the contained Err
value, but never panics.
Unlike unwrap_err
, this method is known to never panic on the\nresult types it is implemented for. Therefore, it can be used\ninstead of unwrap_err
as a maintainability safeguard that will fail\nto compile if the ok type of the Result
is later changed\nto a type that can actually occur.
\nfn only_bad_news() -> Result<!, String> {\n Err(\"Oops, it failed\".into())\n}\n\nlet error: String = only_bad_news().into_err();\nprintln!(\"{error}\");
Returns res
if the result is Ok
, otherwise returns the Err
value of self
.
Arguments passed to and
are eagerly evaluated; if you are passing the\nresult of a function call, it is recommended to use and_then
, which is\nlazily evaluated.
let x: Result<u32, &str> = Ok(2);\nlet y: Result<&str, &str> = Err(\"late error\");\nassert_eq!(x.and(y), Err(\"late error\"));\n\nlet x: Result<u32, &str> = Err(\"early error\");\nlet y: Result<&str, &str> = Ok(\"foo\");\nassert_eq!(x.and(y), Err(\"early error\"));\n\nlet x: Result<u32, &str> = Err(\"not a 2\");\nlet y: Result<&str, &str> = Err(\"late error\");\nassert_eq!(x.and(y), Err(\"not a 2\"));\n\nlet x: Result<u32, &str> = Ok(2);\nlet y: Result<&str, &str> = Ok(\"different result type\");\nassert_eq!(x.and(y), Ok(\"different result type\"));
Calls op
if the result is Ok
, otherwise returns the Err
value of self
.
This function can be used for control flow based on Result
values.
fn sq_then_to_string(x: u32) -> Result<String, &'static str> {\n x.checked_mul(x).map(|sq| sq.to_string()).ok_or(\"overflowed\")\n}\n\nassert_eq!(Ok(2).and_then(sq_then_to_string), Ok(4.to_string()));\nassert_eq!(Ok(1_000_000).and_then(sq_then_to_string), Err(\"overflowed\"));\nassert_eq!(Err(\"not a number\").and_then(sq_then_to_string), Err(\"not a number\"));
Often used to chain fallible operations that may return Err
.
use std::{io::ErrorKind, path::Path};\n\n// Note: on Windows \"/\" maps to \"C:\\\"\nlet root_modified_time = Path::new(\"/\").metadata().and_then(|md| md.modified());\nassert!(root_modified_time.is_ok());\n\nlet should_fail = Path::new(\"/bad/path\").metadata().and_then(|md| md.modified());\nassert!(should_fail.is_err());\nassert_eq!(should_fail.unwrap_err().kind(), ErrorKind::NotFound);
Returns res
if the result is Err
, otherwise returns the Ok
value of self
.
Arguments passed to or
are eagerly evaluated; if you are passing the\nresult of a function call, it is recommended to use or_else
, which is\nlazily evaluated.
let x: Result<u32, &str> = Ok(2);\nlet y: Result<u32, &str> = Err(\"late error\");\nassert_eq!(x.or(y), Ok(2));\n\nlet x: Result<u32, &str> = Err(\"early error\");\nlet y: Result<u32, &str> = Ok(2);\nassert_eq!(x.or(y), Ok(2));\n\nlet x: Result<u32, &str> = Err(\"not a 2\");\nlet y: Result<u32, &str> = Err(\"late error\");\nassert_eq!(x.or(y), Err(\"late error\"));\n\nlet x: Result<u32, &str> = Ok(2);\nlet y: Result<u32, &str> = Ok(100);\nassert_eq!(x.or(y), Ok(2));
Calls op
if the result is Err
, otherwise returns the Ok
value of self
.
This function can be used for control flow based on result values.
\nfn sq(x: u32) -> Result<u32, u32> { Ok(x * x) }\nfn err(x: u32) -> Result<u32, u32> { Err(x) }\n\nassert_eq!(Ok(2).or_else(sq).or_else(sq), Ok(2));\nassert_eq!(Ok(2).or_else(err).or_else(sq), Ok(2));\nassert_eq!(Err(3).or_else(sq).or_else(err), Ok(9));\nassert_eq!(Err(3).or_else(err).or_else(err), Err(3));
Returns the contained Ok
value or a provided default.
Arguments passed to unwrap_or
are eagerly evaluated; if you are passing\nthe result of a function call, it is recommended to use unwrap_or_else
,\nwhich is lazily evaluated.
let default = 2;\nlet x: Result<u32, &str> = Ok(9);\nassert_eq!(x.unwrap_or(default), 9);\n\nlet x: Result<u32, &str> = Err(\"error\");\nassert_eq!(x.unwrap_or(default), default);
Returns the contained Ok
value, consuming the self
value,\nwithout checking that the value is not an Err
.
Calling this method on an Err
is undefined behavior.
let x: Result<u32, &str> = Ok(2);\nassert_eq!(unsafe { x.unwrap_unchecked() }, 2);
let x: Result<u32, &str> = Err(\"emergency failure\");\nunsafe { x.unwrap_unchecked(); } // Undefined behavior!
Returns the contained Err
value, consuming the self
value,\nwithout checking that the value is not an Ok
.
Calling this method on an Ok
is undefined behavior.
let x: Result<u32, &str> = Ok(2);\nunsafe { x.unwrap_err_unchecked() }; // Undefined behavior!
let x: Result<u32, &str> = Err(\"emergency failure\");\nassert_eq!(unsafe { x.unwrap_err_unchecked() }, \"emergency failure\");
Takes each element in the Iterator
: if it is an Err
, no further\nelements are taken, and the Err
is returned. Should no Err
\noccur, the sum of all elements is returned.
This sums up every integer in a vector, rejecting the sum if a negative\nelement is encountered:
\n\nlet f = |&x: &i32| if x < 0 { Err(\"Negative element found\") } else { Ok(x) };\nlet v = vec![1, 2];\nlet res: Result<i32, _> = v.iter().map(f).sum();\nassert_eq!(res, Ok(3));\nlet v = vec![1, -2];\nlet res: Result<i32, _> = v.iter().map(f).sum();\nassert_eq!(res, Err(\"Negative element found\"));
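The `Try`-trait items summarized next are the machinery behind the `?` operator. As a stable-Rust illustration of the short-circuiting they describe (the function name `parse_and_double` is invented for this sketch):

```rust
use std::num::ParseIntError;

// On `Ok`, `?` produces the contained value in place; on `Err`, it
// returns the error to the caller immediately (short-circuits).
// `parse_and_double` is a made-up example name, not a std API.
fn parse_and_double(s: &str) -> Result<i32, ParseIntError> {
    let n: i32 = s.parse()?; // short-circuits with the `Err` on parse failure
    Ok(n * 2)
}

fn main() {
    assert_eq!(parse_and_double("21"), Ok(42));
    assert!(parse_and_double("nope").is_err());
}
```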
try_trait_v2
)?
when not short-circuiting.try_trait_v2
)FromResidual::from_residual
\nas part of ?
when short-circuiting. Read moretry_trait_v2
)Output
type. Read moretry_trait_v2
)?
to decide whether the operator should produce a value\n(because this returned ControlFlow::Continue
)\nor propagate a value back to the caller\n(because this returned ControlFlow::Break
). Read moreCreates a new page table instance or returns the error.
\nIt will allocate a new page for the root page table.
\nReturns the physical address of the root page table.
\nMaps a virtual page to a physical frame with the given page_size
\nand mapping flags
.
The virtual page starts at vaddr
, and the physical frame starts at\ntarget
. If the addresses are not aligned to the page size, they will be\naligned down automatically.
Returns Err(PagingError::AlreadyMapped)
\nif the mapping is already present.
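The align-down behavior described above can be sketched as follows; `align_down` is a local helper assuming a power-of-two page size, not part of the crate's API:

```rust
// Aligning an address down to a page boundary, assuming 4 KiB pages.
// `align_down` is a hypothetical helper for illustration only.
const PAGE_SIZE_4K: usize = 0x1000;

fn align_down(addr: usize, page_size: usize) -> usize {
    // Clear the low bits; valid only when page_size is a power of two.
    addr & !(page_size - 1)
}

fn main() {
    // An unaligned address is rounded down to the page boundary.
    assert_eq!(align_down(0x1234_5678, PAGE_SIZE_4K), 0x1234_5000);
    // An already-aligned address is unchanged.
    assert_eq!(align_down(0x1234_5000, PAGE_SIZE_4K), 0x1234_5000);
}
```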
Unmaps the mapping that starts at vaddr
.
Returns Err(PagingError::NotMapped)
if the\nmapping is not present.
Queries the mapping that starts at vaddr
.
Returns the physical address of the target frame, mapping flags, and\nthe page size.
\nReturns Err(PagingError::NotMapped)
if the\nmapping is not present.
Updates the target or flags of the mapping that starts at vaddr
. If the\ncorresponding argument is None
, it will not be updated.
Returns the page size of the mapping.
\nReturns Err(PagingError::NotMapped)
if the\nmapping is not present.
Maps a contiguous virtual memory region to a contiguous physical memory\nregion with the given mapping flags
.
The virtual and physical memory regions start at vaddr
and paddr
\nrespectively. The region size is size
. The addresses and size
must\nbe aligned to 4K, otherwise it will return Err(PagingError::NotAligned)
.
When allow_huge
is true, it will try to map the region with huge pages\nif possible. Otherwise, it will map the region with 4K pages.
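A minimal sketch of the 4K-alignment check described above; `PagingError` here is a local stand-in that mirrors the crate's error variant, and `check_region_aligned` is a hypothetical helper:

```rust
// Local stand-in for the crate's error type (illustration only).
#[derive(Debug, PartialEq)]
enum PagingError {
    NotAligned,
}

// Reject region parameters that are not 4K-aligned, as described above.
fn check_region_aligned(vaddr: usize, paddr: usize, size: usize) -> Result<(), PagingError> {
    const PAGE_SIZE_4K: usize = 0x1000;
    if vaddr % PAGE_SIZE_4K != 0 || paddr % PAGE_SIZE_4K != 0 || size % PAGE_SIZE_4K != 0 {
        return Err(PagingError::NotAligned);
    }
    Ok(())
}

fn main() {
    // A 2 MiB region with aligned addresses passes the check.
    assert_eq!(check_region_aligned(0x8000_0000, 0x4000_0000, 0x20_0000), Ok(()));
    // An unaligned virtual address is rejected.
    assert_eq!(
        check_region_aligned(0x8000_0123, 0x4000_0000, 0x1000),
        Err(PagingError::NotAligned)
    );
}
```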
Unmaps a contiguous virtual memory region.
\nThe region must have been mapped using PageTable64::map_region
, or\nunexpected behavior may occur.
Walks the page table recursively.
\nWhen it reaches the leaf page table, it calls func
on the current page table\nentry. The maximum number of entries enumerated in one table is limited by limit
.
The arguments of func
are:
Current level (starts with 0): usize
The index of the entry in the current-level table: usize
The virtual address that is mapped to the entry: VirtAddr
A reference to the entry: &PTE
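A toy sketch of the callback shape described above, visiting the entries of a single fake leaf table; the names, the flat `u64` entry type, and the fixed 4K stride are illustrative only, not the crate's real data structures:

```rust
// Visit up to `limit` entries of one (fake) leaf table, calling `func`
// with (level, index, vaddr, &entry), matching the argument list above.
fn walk_leaf(
    table: &[u64],
    level: usize,
    base_vaddr: usize,
    limit: usize,
    func: &mut dyn FnMut(usize, usize, usize, &u64),
) {
    for (index, entry) in table.iter().take(limit).enumerate() {
        let vaddr = base_vaddr + index * 0x1000; // 4 KiB per leaf entry
        func(level, index, vaddr, entry);
    }
}

fn main() {
    let table = [0x1000_0003u64, 0x2000_0003, 0x3000_0003];
    let mut visited = Vec::new();
    walk_leaf(&table, 3, 0xffff_0000, 2, &mut |lvl, idx, va, _pte| {
        visited.push((lvl, idx, va));
    });
    // `limit = 2` caps the enumeration at two entries of the table.
    assert_eq!(visited, vec![(3, 0, 0xffff_0000), (3, 1, 0xffff_1000)]);
}
```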