
Make executable memory non writable in the core #1459

Merged (8 commits) on May 19, 2017

Conversation

@etienne-lms (Contributor) commented Apr 3, 2017

Make executable memory non writable in the OP-TEE core static mapping

(P-R title changed: drop the RFC state)

-1 core reuses the MMU xlat tables when possible (allows core to map inside an already mapped xlat table)

  • Needed for non-LPAE mapping only.

-2 new core rx/ro/rw memory types, alignment of core (private?) memory on attribute boundaries, applied to the mapping (see the sketch after this list)

  • CFG_CORE_RWDATA_NOEXEC=y/n Separate executable and writable memories. Defaults to 'y'.
  • CFG_CORE_RODATA_NOEXEC=y/n Separate executable memory from rodata. Defaults to 'n'.

-3 pager specific case: try to make rx pages non-writable through the pager aliasing => postponed
-4 Last: enable write-implies-execute-never in the MMU (when applicable, with updated platform configs) => postponed
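
As an illustration only (this is not code from the P-R), the new memory types could translate to core access permissions roughly as below. The helper name area_to_attr() is hypothetical; the TEE_MATTR_* flags and MEM_AREA_TEE_RAM_* IDs are the ones discussed in this P-R, and only the permission bits are shown (cacheability/secure attributes omitted).

#include <mm/core_mmu.h>	/* enum teecore_memtypes, MEM_AREA_* IDs */
#include <mm/tee_mmu_types.h>	/* TEE_MATTR_* permission flags */

/* Hypothetical sketch: per-area permissions when CFG_CORE_RWDATA_NOEXEC=y */
static uint32_t area_to_attr(enum teecore_memtypes t)
{
	switch (t) {
	case MEM_AREA_TEE_RAM_RX:	/* code: read/execute, never writable */
		return TEE_MATTR_PR | TEE_MATTR_PX;
	case MEM_AREA_TEE_RAM_RO:	/* rodata: read-only, execute-never */
		return TEE_MATTR_PR;
	case MEM_AREA_TEE_RAM_RW:	/* data/heap/stack: read/write, XN */
		return TEE_MATTR_PR | TEE_MATTR_PW;
	default:			/* other areas keep their usual attributes */
		return 0;
	}
}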

Happy review...

@etienne-lms (Contributor Author)

Wiped the previous proposal and built a new one on the basis of #1468.

@etienne-lms (Contributor Author)

Needs a rebase on top of #1468. Waiting for it to be merged before proceeding.

@etienne-lms etienne-lms changed the title [RFC] Make executable memory non writable in the core Make executable memory non writable in the core Apr 12, 2017
@etienne-lms (Contributor Author)

Rebased (#1468 merged).
Dropped the RFC status from the P-R title. Updated the P-R description to reflect what's in these changes.

@@ -356,7 +357,7 @@ static struct tee_mmap_region *init_xlation_table(struct tee_mmap_region *mm,

assert(level <= 3);

debug_print("New xlat table (level %u):", level);
debug_print("New xlat table (level %u) [0x%8x]:", level, cur);
Contributor

What is "cur"?

Contributor Author

Oops, a bug. That's a copy-paste from the non-LPAE code. It seems I did not test LPAE with the MMU debug traces enabled.
I will fix.


/* Clear table before use */
if (*table) {
Contributor

Assigning desc with *table here would make the following code a little bit easier and the call to virt_to_phys() below could be performed only when a new table is allocated.

Contributor Author

OK, I will find a clean way.
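
For illustration, the suggested shape could look like this minimal sketch (the surrounding code is not fully shown in this excerpt; new_xlat_table() and TABLE_DESC_ATTRS are hypothetical stand-ins for the actual allocator and table-descriptor attributes):

if (*table) {
	/* Reuse the level-2 table already referenced by this entry */
	desc = *table;
	xlat = phys_to_virt(desc & ~DESC_ATTRS(desc), MEM_AREA_TEE_RAM);
} else {
	/* Allocate a new table: only here is virt_to_phys() needed */
	xlat = new_xlat_table();	/* hypothetical allocator */
	desc = virt_to_phys(xlat) | TABLE_DESC_ATTRS; /* hypothetical attrs */
	*table = desc;
}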

xlat = phys_to_virt(*table &
~DESC_ATTRS(*table),
MEM_AREA_TEE_RAM);
} else {
DMSG("xlat used: %d/%d", next_xlat, MAX_XLAT_TABLES);
Contributor

Indentation

pg_idx++;
}
desc |= ((mm->pa & ~SECTION_MASK) +
pg_idx * SMALL_PAGE_SIZE) | attr;
Contributor

Don't need to "or" in attr as it's already assigned to desc above.

Contributor Author

My fault. I forgot to clean it up after your previous comment.
I will fix that.

@@ -36,6 +36,8 @@
#define SCTLR_C BIT32(2)
#define SCTLR_SA BIT32(3)
#define SCTLR_I BIT32(12)
#define SCTLR_WXN BIT32(19)
#define SCTLR_UWXN BIT32(20)
Contributor

I can't find this bit defined in AArch64 state anywhere, where did you find it?

Contributor Author

True! It exists only in AArch32! Thanks. Maybe it is there to overcome issues when LPAE is not enabled?
I will drop SCTLR_UWXN and update generic_entry_a64.S.
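
For reference, the postponed point -4 of this P-R boils down to something like the sketch below. It assumes the AArch32 read_sctlr()/write_sctlr() and isb() helpers available in the core and the SCTLR_WXN define added above; treat it as illustrative, not as the P-R's code.

/*
 * Sketch: enable write-implies-execute-never at the current EL.
 * Must only be done once the static mapping is W^X clean, otherwise
 * the core would fault on its own writable-executable pages.
 */
static void enable_wxn(void)
{
	write_sctlr(read_sctlr() | SCTLR_WXN);
	isb();	/* make the new permission checks take effect */
}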

@@ -32,8 +37,6 @@ endif

$(call force,CFG_GENERIC_BOOT,y)
$(call force,CFG_GIC,y)
$(call force,CFG_HWSUPP_MEM_PERM_PXN,y)
Contributor

Shouldn't these be set for all the vexpress platforms?

Contributor Author

Yes. Juno has both an A53 and an A57, right? Why does it use cortex-a15 for the AArch32 build?

Shouldn't we include the cortex-a53.mk file instead? Actually, I think I should merge cortex-a53.mk and cortex-a57.mk into a cortex-armv8-0.mk and follow ARMv8 minor versions to distinguish the cores.

Another solution is to simply add here:

$(call force,CFG_HWSUPP_MEM_PERM_PXN,y)
$(call force,CFG_HWSUPP_MEM_PERM_WXN,y)

Contributor

cortex-a15 is used because gcc didn't like cortex-a53 when the initial port was made, perhaps that has changed now. Juno is actually both A53 and A57.

OUTPUT_FORMAT(CFG_KERN_LINKER_FORMAT)
OUTPUT_ARCH(CFG_KERN_LINKER_ARCH)

ENTRY(_start)
SECTIONS
{
. = CFG_TEE_LOAD_ADDR;
. = ALIGN(SMALL_PAGE_SIZE);
__core_unpg_rx_start = .;
Contributor

The "core" part isn't needed and also inconsistent with all the other symbols defined here.
Isn't __core_unpg_rx_start supposed to be exactly the same thing as __text_start or am I missing something?

Contributor Author

You're right, __core_unpg_rx_start aims at being __text_start.
I would tend to keep the definition of each __core_(unpg|init)_r(x|o|w)_(start|end|size) and add a PROVIDE(__core_unpg_rx_start = __text_start), but maybe you would prefer not to introduce the redundant __core_XXXX_rX_XXX symbols and to use the already defined ones. I felt my way helped understanding the mapping when reading 'kernel.ld.S'.

Contributor

I'm basically OK with having redundant definitions as it helps understanding, but definitions that are supposed to be the same should be assigned at the same place. I think the new locations are better so please move __text_start and friends.

@@ -50,13 +50,19 @@

#include <platform_config.h>

#ifndef SMALL_PAGE_SIZE
#define SMALL_PAGE_SIZE 4096
Contributor

Since this is introduced, please update all the other cases here where we align on small page boundary.

Contributor Author

OK, it seems I was a bit lazy this time.

@@ -250,6 +268,10 @@ SECTIONS
__text_init_end = .;
}

. = ALIGN(SMALL_PAGE_SIZE);
Contributor

Is it that harmful to mix ro-data with code in a page?
If an attacker is able to select some address to jump to, the code pages have so much more potential to contain useful code anyway.

Contributor Author

True. An alternate implementation of this P-R simply splits read-only and read-write memory. It can easily be done and could save some kbytes (dear pager!). But when the pager is disabled, we don't expect to struggle over a few kbytes.

I would not like a new configuration switch such as CFG_CORE_MAPS_READONLY_EXECUTABLE.
It's about reducing the attack surface. Should we cowardly restrict this, or nicely waste a little memory on maybe-useless extra hardening?

Contributor

I think it's silly, but I seem to be the only one that cares so never mind.

Contributor Author

I understand your feeling about wasting memory and I fully agree with it when the pager is used. I will go for a CFG_CORE_RODATA_NONEXECUTABLE switch and default it to off.

. = ALIGN(8);

/*
* Insure init sections ends on mapping page
Contributor

Parts of the code in the init section are used even after OP-TEE is fully initialized, so it will never be paged out for good.
This aligning wastes memory, with the only win being that a few lines of code can be removed from init_runtime().

Contributor Author

This alignment is required when separating executable memory from read-only memory.
We can only drop it if we discard the read-only/non-executable requirement.

Contributor Author

Again, I understand your concern about physical memory waste in the paged configuration.

I will propose a CFG_CORE_RODATA_NONEXECUTABLE=y|n.

One way to overcome this specific alignment could be to swap the paged executable text and the paged read-only data. Would you agree with this change?

Contributor

Yes, sounds good.

@etienne-lms (Contributor Author) left a comment

This P-R built on a buggy reuse of xlat tables.
I will submit the fix in a specific P-R.


/* Recurse to fill in new table */
-	mm = init_xlation_table(mm, base_va, new_table,
-				level + 1);
+	mm = init_xlation_table(mm, base_va, xlat, level + 1);
Contributor Author

This is crappy: if an mm reference leaves room for the next mm in the same xlat table, the table++ increment below breaks the later reuse of that xlat table.

@@ -712,7 +710,7 @@ static void map_memarea(struct tee_mmap_region *mm, uint32_t *ttb)
mm->attr & TEE_MATTR_SECURE ? "S" : "NS");

attr = mattr_to_desc(1, mm->attr | TEE_MATTR_TABLE);
-	pa = map_page_memarea(mm);
+	pa = map_page_memarea(mm, ttb[mm->va >> SECTION_SHIFT]);
Contributor Author

This fails if the mm reference covers several ttb entries.

@etienne-lms (Contributor Author)

This 2nd commit, "core: reuse xlat table when possible", could be simplified.
Actually, the LPAE implementation already handles reuse of xlat tables well. Only the non-LPAE code needs to be updated.

@etienne-lms (Contributor Author)

Rebased since there were no further comments. Previous comments addressed and squashed.
LPAE mapping untouched. Checkpatch and sunxi build fixed.

@@ -571,11 +571,19 @@ static void init_mem_map(struct tee_mmap_region *memory_map, size_t num_elems)
assert(end <= CFG_TEE_RAM_START + CFG_TEE_RAM_VA_SIZE);

if (core_mmu_place_tee_ram_at_top(va)) {
bool __maybe_unused va_is_secure = true; /* any init fits */
Contributor

Move outside of the block since it's also used in the else part.

Contributor Author

The two are unrelated (apart from their functional purpose). Still, OK to move the definition to the function header.

const char *str __maybe_unused)
{
if (!(mm->attr & TEE_MATTR_VALID_BLOCK))
debug_print("%s [%08" PRIxVA " %08" PRIxVA "] not-mapped",
Contributor

s/not-mapped/not mapped/

Contributor Author

ok

mm->attr & (TEE_MATTR_CACHE_CACHED <<
TEE_MATTR_CACHE_SHIFT) ? "MEM" : "DEV",
mm->attr & TEE_MATTR_PW ? "RW" : "RO",
mm->attr & TEE_MATTR_PX ? "X" : "XN",
Contributor

s/XN/NX/ maybe?

Contributor Author

"XN" is the ARM convention for eXecute-Never. I would prefer to use it instead of a "NX".
Maybe I should rename CFG_CORE_RxDATA_NOEXEC into CFG_CORE_RxDATA_XN ?

Contributor

XN is ok. And I'm fine with _NOEXEC in config settings, it's easier to read I think.

Contributor Author

OK, I'll keep xxx_NOEXEC then.


return NULL;
found:
return map_pa2va(mmap, pa);
Contributor

Would be more compact as:

mmap = find_map_by_type_and_pa(MEM_AREA_TEE_RAM, pa);
if (!mmap)
        mmap = find_map_by_type_and_pa(MEM_AREA_TEE_RAM_RW, pa);
if (!mmap)
        mmap = find_map_by_type_and_pa(MEM_AREA_TEE_RAM_RO, pa);
if (!mmap)
        mmap = find_map_by_type_and_pa(MEM_AREA_TEE_RAM_RX, pa);

return map_pa2va(mmap, pa);

Contributor Author

ok

@@ -145,6 +145,8 @@ static inline const char *teecore_memtype_name(enum teecore_memtypes type)
return names[type];
}

#define MEM_AREA_TEE_RAM_DATA MEM_AREA_TEE_RAM
Contributor

Wouldn't MEM_AREA_TEE_RAM_RW_DATA be a better name? (I mean, the TEE core has a RO data section, too.)


.rodata : ALIGN(8) {
__rodata_start = .;
Contributor

I have no strong opinion on this, and it may be a matter of personal taste, but I'd rather keep an align statement here for clarity (although it's not strictly needed since there is one in the previous block). In fact, I like the previous syntax better (it's clear and compact).
Same below of course.

Contributor Author

OK, I will restore the old way.

* __vcore_/unpg_|ro_/start ordering: | __flatmap_unpg_ro_start / +size
* \init \rw \size | __flatmap_unpg_rw_start / +size
* v __flatmap_init_rx_start / +size
* __flatmap_init_ro_start / +size
Contributor

It took me a few seconds to parse that ;) In fact, I don't think this comment has much value; it just repeats what can be seen in the file. Same for sunxi.

Contributor Author

ok



PROVIDE(__vcore_init_rx_start = 0);
PROVIDE(__vcore_init_ro_start = 0);
PROVIDE(__vcore_init_rx_size = 0);
PROVIDE(__vcore_init_ro_size = 0);
Contributor Author

Will remove these definitions as __vcore_init_xxx are not used when CFG_WITH_PAGER is disabled.




l2 = core_mmu_alloc_l2(mm->size);
else
l2 = phys_to_virt(xlat & SECTION_PT_ATTR_MASK,
MEM_AREA_TEE_RAM_DATA);
Contributor

Need to check that core_mmu_alloc_l2() or phys_to_virt() returned a valid pointer.

_end_of_ram = .;

#ifndef CFG_WITH_PAGER
. = ALIGN(SMALL_PAGE_SIZE);
Contributor

Why this?

Contributor Author

True, this is useless. I guess it would even be useless to move it before _end_of_ram = .;.
I will remove it.


#ifdef CFG_WITH_PAGER
/*
* Core init mapping shall mapped up to end of physical RAM.
Contributor

s/shall mapped/shall cover/

Contributor Author

thanks

l2 = core_mmu_alloc_l2(mm->size);
else
l2 = phys_to_virt(xlat & SECTION_PT_ATTR_MASK,
MEM_AREA_TEE_RAM_RW_DATA);
Contributor

Need to check that l2 is a valid pointer

Contributor Author

You mean something like an assert(l2);?
By construction, core_mmu_alloc_l2() returns a valid address or panics.
By construction, if xlat is not null, it already stores a valid L2 table address (+ attributes/descriptor type). map_page_memarea_in_pgdirs() will assert that the attributes are valid. Maybe, in this case, we could assert here that the attributes do match?

Contributor

An assert() would be nice, together with a short explanation why l2 can't be NULL.
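
Something like the minimal sketch below would do (illustrative, built from the diff context and the reasoning above):

	/*
	 * l2 cannot be NULL here: core_mmu_alloc_l2() panics on
	 * allocation failure, and a non-null xlat entry always stores
	 * a valid level-2 table address.
	 */
	assert(l2);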

@jenswi-linaro (Contributor)

Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>

@etienne-lms (Contributor Author)

(rebased, squash on-going...)

@etienne-lms (Contributor Author)

Oops! I have squashed 2 commits into a single one. The commit comment for "core: label the bounds of core flat mapped memory" does not reflect the changes made. I will fix and push back.

@etienne-lms (Contributor Author)

Rebased, squash issue fixed, tags applied.

@etienne-lms (Contributor Author)

Rebased,
a bit too early: I've included #1531 :( => I will wait until it is merged (it's a small, straightforward fix) and then rebase again.

@etienne-lms (Contributor Author)

rebased

@jforissier (Contributor)

FYI, works on HiKey (64 bits, 32 bits, 32 bits with CFG_WITH_PAGER=y, 64 bits with CFG_CORE_RODATA_NOEXEC=y, 64 bits with CFG_CORE_RODATA_NOEXEC=n CFG_CORE_RWDATA_NOEXEC=n).

Tested-by: Jerome Forissier <jerome.forissier@linaro.org> (HiKey)

@etienne-lms (Contributor Author)

Thanks. I'll add my "Tested-by:" on qemu_virt and qemu_armv8, with several configs: pager (when applicable), rw-xn/ro-xn, debug, lpae/no-lpae (when applicable).

@etienne-lms (Contributor Author)

Tested-by tags applied.

@etienne-lms (Contributor Author)

...rebased...

@etienne-lms (Contributor Author) commented May 17, 2017

Dear all,
Can we merge this if it has reached a satisfactory state?
I would like to propose 2 P-Rs on top of it, but I would rather have this current P-R merged before pushing the proposals, to ease the review process:

  • CFG_HWSUPP_MEM_PERM_WXN (2 commits: this + this)
  • pager-aliases not always writable (1 commit: this).

(edited: fix)

@etienne-lms (Contributor Author) commented May 18, 2017

I added a later patch to fix a weakness in the patch series.
However, since this later change relates to a few other cleanups, I proposed them as a separate PR: #1547.

EDITED: the extra later patch is not required => it is being reviewed (& hopefully merged) from #1547. The current PR only needs a rebase without any extra change (applied tags are still valid) once #1547 is merged.

Before this patch, core static mapping did not ensure that level2 tables
do not mix secure and non-secure virtual memory mappings.

Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
When a level2 translation table is already used for a virtual mapping
range, allow core to reuse it to extend the mapping in the same virtual
region. map_memarea() now maps a single given "memory map area" based
on MMU root table knowledge. Each non-LPAE level2 table is reset to zero
when allocated, since map_page_memarea() now only fills the mapped
virtual range in the target MMU table.

Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Define new memory type IDs for the core private memory:
- MEM_AREA_TEE_RAM_RX defines read-only/executable memory.
- MEM_AREA_TEE_RAM_RO defines read-only/non-executable memory.
- MEM_AREA_TEE_RAM_RW defines read/write/non-executable memory.

Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
This change prepares the split of executable and read-only memory from
read/write memory. Depending on configuration, the core private memory
"TEE RAM" (heap, stack, ...) will either be in TEE_RAM or TEE_RAM_RW.
MEM_AREA_TEE_RAM_RW_DATA abstracts the memory used for core data.

Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Use __init_start and __init_end to refer to the "init" bounds.
Remove the unused __text_init_start/_end, __rodata_init_start/_end, ...

Define SMALL_PAGE_SIZE locally in the linker file kernel.ld.S.

Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Define labels to identify the core memory layout. These labels will
later be used to define the mapping bounds between executable and
writable memories.

Update plat-sunxi/kernel.ld.S to match the support in generic_boot.c.

Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Make executable memory non-writable and writable memory non-executable.
Effective upon CFG_CORE_RWDATA_NOEXEC=y. The default configuration
enables this directive.

If CFG_CORE_RWDATA_NOEXEC is enabled, the read-only sections are
mapped read-only/executable while the read/write memories are mapped
read/write/non-executable. Potentially 4KB of secure RAM is wasted due
to the page alignment between unpaged text/rodata and unpaged
read/write data.

If CFG_CORE_RWDATA_NOEXEC is disabled, all text/rodata/data/... sections
of the core are mapped read/write/executable.

Both code and rodata are mapped together without alignment constraints.
Hence all "ro" areas are defined inside the related "rx" area:
    __vcore_init_ro_size = 0 or the effective init "ro" size.

As init sections are mapped read-only, core won't be able to fill the
trailing content of the last init page. Hence __init_end and __init_size
are page aligned.

Core must premap all physical memory as readable to allow moving the
hash tables to the allocated buffer during core initialization.

Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Tested-by: Jerome Forissier <jerome.forissier@linaro.org> (HiKey)
Tested-by: Etienne Carriere <etienne.carriere@linaro.org> (qemu_virt)
Tested-by: Etienne Carriere <etienne.carriere@linaro.org> (qemu_armv8)
Tested-by: Etienne Carriere <etienne.carriere@linaro.org> (b2260)
CFG_CORE_RODATA_NOEXEC=y/n allows mapping the read-only data with a
non-executable attribute.

This adds alignments that may waste secure memory:
- unpaged text/rodata boundary
- init text/rodata boundary

To prevent wasting at least one page, the sections text_paged and
rodata_paged are swapped in the memory layout.

Signed-off-by: Etienne Carriere <etienne.carriere@linaro.org>
Reviewed-by: Jens Wiklander <jens.wiklander@linaro.org>
Tested-by: Jerome Forissier <jerome.forissier@linaro.org> (HiKey)
Tested-by: Etienne Carriere <etienne.carriere@linaro.org> (qemu_virt)
Tested-by: Etienne Carriere <etienne.carriere@linaro.org> (qemu_armv8)
Tested-by: Etienne Carriere <etienne.carriere@linaro.org> (b2260)
@etienne-lms (Contributor Author)

Rebased, Travis happy. Ready to be merged.

@jforissier jforissier merged commit 3181c73 into OP-TEE:master May 19, 2017
@etienne-lms etienne-lms deleted the rxrorw2 branch May 30, 2017 11:20