Changelog tables in ABI #16
Comments
This is a potential issue with any table. I personally think the table layout for the changelog gives a clear view of the changes, and I'm not sure the effort is worth the outcome.
Yes, but the other tables will rarely change. The changelog table will change every time we do a release of a document after changing it.
The main reason we see so many tables change for this release is that the tables needed classes and therefore whole-table readjustments. On rare occasions I widened some columns so it's easier to add info. But in general the column widths should stay static, for changelog tables and other tables alike. Any changes to individual rows will of course only affect those rows, and for changelog tables this should only be new release info. And if it's just the table widening, and there are no other changes around the table, most diffing tools, including the GitHub one, will show intra-line changes in a deeper color, which should make it clear that we're not accidentally changing things we don't want.
@stuij, I think you gave enough arguments to leave things the way they are. Closing this. Thank you.
This brings AAELF64 into line with AAELF32, which already has a similar clarification for the MOVW+MOVT pair.

For the instructions which shift their operand left (ADRP, and the shifted MOVZ and MOVK), if the relocation addend is taken from the input value of the immediate field, it is not treated as shifted. The rationale is that this allows a sequence of related instructions to consistently compute the same value (symbol + small offset), and cooperate to load that value into the target register, one small chunk at a time. For example, this would load `mySymbol + 0x123`:

```
mov  x0, #0x123           ; R_AARCH64_MOVW_UABS_G0_NC(mySymbol)
movk x0, #0x123, lsl #16  ; R_AARCH64_MOVW_UABS_G1_NC(mySymbol)
movk x0, #0x123, lsl #32  ; R_AARCH64_MOVW_UABS_G2_NC(mySymbol)
movk x0, #0x123, lsl #48  ; R_AARCH64_MOVW_UABS_G3(mySymbol)
```

The existing text made it unclear whether the addends were shifted or not. If they are interpreted as shifted, then nothing useful happens: the first instruction would load the low 16 bits of `mySymbol+0x123`, the second would load the next 16 bits of `mySymbol+0x1230000`, and so on. This doesn't reliably get you _any_ useful offset from the symbol, because the relocations are processed independently, so a carry out of the low 16 bits won't be taken into account in the next 16.

If you do need to compute a large offset from the symbol, you have no option but to use SHT_RELA and specify a full 64-bit addend: there's no way to represent that in an SHT_REL setup. But interpreting the SHT_REL addends in the way specified here, you can at least specify _small_ addends successfully.
I think we should convert the tables that track the changes in the ABI specifications into sections and paragraphs. At the moment, we see changes even in the rows describing the previous versions, just because we need to reformat the table. I think this could be error-prone (changes could be missed).
As an example of what I mean for "reformatting", see this PR:
https://github.com/ARM-software/software-standards/pull/15/files#diff-b590b56110000b706ad06b2250159441