Changelog tables in ABI #16

Closed
fpetrogalli opened this issue Jan 30, 2020 · 4 comments
Labels
abi Label for issues related to the `abi` folder.

Comments

@fpetrogalli
Contributor

I think we should convert the tables that track the changes in the ABI specifications into sections and paragraphs. At the moment, we see changes even in the rows describing the previous versions, just because we need to reformat the table. I think this could be error-prone (changes could be missed).

As an example of what I mean by "reformatting", see this PR:

https://github.com/ARM-software/software-standards/pull/15/files#diff-b590b56110000b706ad06b2250159441

@fpetrogalli added the abi label Jan 30, 2020
@stuij
Member

stuij commented Jan 31, 2020

This is a potential issue with any table. I personally think the table layout for the change log works quite well in that it gives a clear view of the changes, and I'm not sure if the effort is worth the outcome.

@fpetrogalli
Contributor Author

This is a potential issue with any table.

Yes, but the other tables will rarely change. The changelog table will change every time we release a document after changing it.

@stuij
Member

stuij commented Feb 3, 2020

The main reason we see so many tables change for this release is that the tables needed classes and therefore whole-table readjustments. On rare occasions I widened some columns so it's easier to add info. But in general the column width should stay static, whether for changelog tables or other tables. Any changes to individual rows will of course only affect those rows, and for changelog tables this should only be new release info. And if it's just the table widening, and there are no other changes around the table, most diffing tools, including the GitHub one, will show incremental changes in a deeper color, which should make it clear that we're not accidentally changing things we don't want.

@fpetrogalli
Contributor Author

@stuij, I think you have given enough arguments to leave things the way they are. Closing this. Thank you.

statham-arm added a commit to statham-arm/abi-aa that referenced this issue Jul 1, 2024
This brings AAELF64 into line with AAELF32, which already has a
similar clarification for the MOVW+MOVT pair. For the instructions
which shift their operand left (ADRP, and the shifted MOVZ and MOVK),
if the relocation addend is taken from the input value of the
immediate field, it is not treated as shifted.

The rationale is that this allows a sequence of related instructions
to consistently compute the same value (symbol + small offset), and
cooperate to load that value into the target register, one small chunk
at a time. For example, this would load `mySymbol + 0x123`:

  mov  x0, #0x123          ; R_AARCH64_MOVW_UABS_G0_NC(mySymbol)
  movk x0, #0x123, lsl #16 ; R_AARCH64_MOVW_UABS_G1_NC(mySymbol)
  movk x0, #0x123, lsl #32 ; R_AARCH64_MOVW_UABS_G2_NC(mySymbol)
  movk x0, #0x123, lsl #48 ; R_AARCH64_MOVW_UABS_G3(mySymbol)

The existing text made it unclear whether the addends were shifted or
not. If they are interpreted as shifted, then nothing useful happens,
because the first instruction would load the low 16 bits of
`mySymbol+0x123`, and the second would load the next 16 bits of
`mySymbol+0x1230000`, and so on. This doesn't reliably get you _any_
useful offset from the symbol, because the relocations are processed
independently, so that a carry out of the low 16 bits won't be taken
into account in the next 16.

If you do need to compute a large offset from the symbol, you have no
option but to use SHT_RELA and specify a full 64-bit addend: there's
no way to represent that in an SHT_REL setup. But interpreting the
SHT_REL addends in the way specified here, you can at least specify
_small_ addends successfully.
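
To make the rationale above concrete, here is a minimal Python sketch (not part of the commit; the function name and the example symbol address are made up for illustration) of how a static linker could process the four group relocations when the addend is taken unshifted from the immediate field:

  # Hypothetical model of MOVW_UABS group relocation processing, assuming
  # the addend read from the immediate field is NOT treated as shifted.
  def apply_movw_uabs_group(symbol_value, addend, group):
      # Add the addend to the symbol value first, then select the relevant
      # 16-bit chunk of the 64-bit result, so every instruction in the
      # sequence sees a consistent (symbol + addend).
      result = (symbol_value + addend) & 0xFFFFFFFFFFFFFFFF
      return (result >> (16 * group)) & 0xFFFF

  my_symbol = 0x0000123456789ABC  # example address, chosen arbitrarily
  addend = 0x123                  # taken from each instruction's immediate

  chunks = [apply_movw_uabs_group(my_symbol, addend, g) for g in range(4)]

  # Reassembling the four immediates reproduces mySymbol + 0x123 exactly,
  # including any carry out of the low 16 bits.
  assert sum(c << (16 * g) for g, c in enumerate(chunks)) == my_symbol + addend

Under the rejected "shifted" reading, instruction g would instead compute bits [16g, 16g+16) of symbol + (addend << 16g), so a carry out of one group's computation could never propagate into the next group, which is exactly the failure mode the message describes.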
statham-arm added a commit to statham-arm/abi-aa that referenced this issue Jul 2, 2024
statham-arm added a commit to statham-arm/abi-aa that referenced this issue Jul 2, 2024
smithp35 pushed a commit that referenced this issue Jul 8, 2024