Message ID | 20190605205706.569-6-richard.henderson@linaro.org |
---|---|
State | New |
Series | linux-user/aarch64: Support PROT_BTI |
On Wed, Jun 05, 2019 at 09:57:05PM +0100, Richard Henderson wrote:
> For aarch64, this includes the GNU_PROPERTY_AARCH64_FEATURE_1_BTI bit,
> which indicates that the image should be mapped with guarded pages.

Heads-up: for arm64 I plan to move to making PT_GNU_PROPERTY
authoritative for ELF on Linux: if it is present, we use it to find the
note directly and ignore any other notes; if there is no PT_GNU_PROPERTY
entry then we assume there is no NT_GNU_PROPERTY_TYPE_0 note.

This is not quite decided yet, but to avoid fragmentation I'd prefer
qemu and Linux to apply the same policy -- I'll keep you in the loop
about the decision.  I think you can reasonably upstream this patch in
qemu and then subsequently make it stricter without an ABI break.
Upstream GNU ld generates this entry today.

>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>  linux-user/elfload.c | 83 +++++++++++++++++++++++++++++++++++++++-----
>  1 file changed, 75 insertions(+), 8 deletions(-)
>
> diff --git a/linux-user/elfload.c b/linux-user/elfload.c
> index a57b7049dd..1a12c60a33 100644
> --- a/linux-user/elfload.c
> +++ b/linux-user/elfload.c
> @@ -2253,7 +2253,7 @@ static void load_elf_image(const char *image_name, int image_fd,
>      struct elfhdr *ehdr = (struct elfhdr *)bprm_buf;
>      struct elf_phdr *phdr;
>      abi_ulong load_addr, load_bias, loaddr, hiaddr, error;
> -    int i, retval;
> +    int i, retval, prot_exec = PROT_EXEC;
>      const char *errmsg;
>
>      /* First of all, some simple consistency checks */
> @@ -2288,17 +2288,78 @@ static void load_elf_image(const char *image_name, int image_fd,
>      loaddr = -1, hiaddr = 0;
>      info->alignment = 0;
>      for (i = 0; i < ehdr->e_phnum; ++i) {
> -        if (phdr[i].p_type == PT_LOAD) {
> -            abi_ulong a = phdr[i].p_vaddr - phdr[i].p_offset;
> +        struct elf_phdr *eppnt = phdr + i;
> +
> +        if (eppnt->p_type == PT_LOAD) {
> +            abi_ulong a = eppnt->p_vaddr - eppnt->p_offset;
>              if (a < loaddr) {
>                  loaddr = a;
>              }
> -            a = phdr[i].p_vaddr + phdr[i].p_memsz;
> +            a = eppnt->p_vaddr + eppnt->p_memsz;
>              if (a > hiaddr) {
>                  hiaddr = a;
>              }
>              ++info->nsegs;
> -            info->alignment |= phdr[i].p_align;
> +            info->alignment |= eppnt->p_align;
> +        } else if (eppnt->p_type == PT_NOTE) {
> +#ifdef TARGET_AARCH64
> +            /*
> +             * Process NT_GNU_PROPERTY_TYPE_0.
> +             *
> +             * TODO: The only item that is AArch64 specific is the
> +             * GNU_PROPERTY_AARCH64_FEATURE_1_AND processing at the end.
> +             * If we were to ever process GNU_PROPERTY_X86_*, all of the
> +             * code through checking the gnu0 magic number is sharable.
> +             * But for now, since this *is* only used by AArch64, don't
> +             * process the note elsewhere.
> +             */
> +            const uint32_t gnu0_magic = const_le32('G' | 'N' << 8 | 'U' << 16);
> +            uint32_t note[7];
> +
> +            /*
> +             * The note contents are 7 words, but depending on LP64 vs ILP32
> +             * there may be an 8th padding word at the end.  Check for and
> +             * read the minimum size.  Further checks below will validate
> +             * that the sizes of everything involved are as we expect.
> +             */
> +            if (eppnt->p_filesz < sizeof(note)) {
> +                continue;
> +            }
> +            if (eppnt->p_offset + eppnt->p_filesz <= BPRM_BUF_SIZE) {
> +                memcpy(note, bprm_buf + eppnt->p_offset, sizeof(note));
> +            } else {
> +                retval = pread(image_fd, note, sizeof(note), eppnt->p_offset);
> +                if (retval != sizeof(note)) {
> +                    goto exit_perror;
> +                }
> +            }

Can we police that the segment alignment matches the ELF class (i.e., 8
for 64-bit, 4 for 32-bit)?  hjl specifies this, but it's controversial
and sometimes missed -- there's a bug right now in GNU ld where
--force-bti generates a note with alignment 1.  Due to the high chance
of screwing this up, it'd be good to police it wherever appropriate.

> +#ifdef BSWAP_NEEDED
> +            for (i = 0; i < ARRAY_SIZE(note); ++i) {
> +                bswap32s(note + i);
> +            }
> +#endif
> +            /*
> +             * Check that this is a NT_GNU_PROPERTY_TYPE_0 note.
> +             * Again, descsz includes padding.  Full size validation
> +             * awaits checking the final payload.
> +             */
> +            if (note[0] != 4 ||                      /* namesz */
> +                note[1] < 12 ||                      /* descsz */
> +                note[2] != NT_GNU_PROPERTY_TYPE_0 || /* type */
> +                note[3] != gnu0_magic) {             /* name */
> +                continue;
> +            }
> +            /*
> +             * Check for the BTI feature.  If present, this indicates
> +             * that all the executable pages of the binary should be
> +             * mapped with PROT_BTI, so that branch targets are enforced.
> +             */
> +            if (note[4] == GNU_PROPERTY_AARCH64_FEATURE_1_AND &&
> +                note[5] == 4 &&
> +                (note[6] & GNU_PROPERTY_AARCH64_FEATURE_1_BTI)) {
> +                prot_exec |= TARGET_PROT_BTI;
> +            }
> +#endif /* TARGET_AARCH64 */
>          }
>      }
>
> @@ -2358,9 +2419,15 @@ static void load_elf_image(const char *image_name, int image_fd,
>          abi_ulong vaddr, vaddr_po, vaddr_ps, vaddr_ef, vaddr_em, vaddr_len;
>          int elf_prot = 0;
>
> -        if (eppnt->p_flags & PF_R) elf_prot = PROT_READ;
> -        if (eppnt->p_flags & PF_W) elf_prot |= PROT_WRITE;
> -        if (eppnt->p_flags & PF_X) elf_prot |= PROT_EXEC;
> +        if (eppnt->p_flags & PF_R) {
> +            elf_prot |= PROT_READ;
> +        }
> +        if (eppnt->p_flags & PF_W) {
> +            elf_prot |= PROT_WRITE;
> +        }
> +        if (eppnt->p_flags & PF_X) {
> +            elf_prot |= prot_exec;
> +        }

Ack.  There are interesting subtleties with segment alignments here: the
prot flags on each segment "bleed" to an adjacent boundary.  I'm not
sure that behaviour is well specified, but we at least get away with it
due to conventions regarding the ordering of segments.

I still have to think about whether this works right for PROT_BTI, which
is a prohibition rather than a permission, suggesting that the bleed
rules might need to be inverted for it.  My current policy is that
because an ELF is all-BTI if any page is BTI, we probably get away with
it, without having to do anything special.

[...]

Cheers
---Dave