Message ID: 20170815145714.17635-4-richard.henderson@linaro.org
State: New
Series: Fixup logic for exclusive pair
On 08/15/2017 09:57 AM, Richard Henderson wrote:
> From: Alistair Francis <alistair.francis@xilinx.com>
>
> Acording to the ARM ARM exclusive loads require the same allignment as

s/Acording/According/
s/allignment/alignment/

> exclusive stores. Let's update the memops used for the load to match
> that of the store. This adds the alignment requirement to the memops.
>
> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
> Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
> [rth: Require 16-byte alignment for 64-bit LDXP.]
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---

--
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org
Hi Richard,

On 08/15/2017 11:57 AM, Richard Henderson wrote:
> From: Alistair Francis <alistair.francis@xilinx.com>
>
> Acording to the ARM ARM exclusive loads require the same allignment as
> exclusive stores. Let's update the memops used for the load to match
> that of the store. This adds the alignment requirement to the memops.
>
> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
> Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
> [rth: Require 16-byte alignment for 64-bit LDXP.]
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>  target/arm/translate-a64.c | 11 ++++++-----
>  1 file changed, 6 insertions(+), 5 deletions(-)
>
> diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
> index eac545e4f2..2200e25be0 100644
> --- a/target/arm/translate-a64.c
> +++ b/target/arm/translate-a64.c
> @@ -1861,7 +1861,7 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
>          g_assert(size >= 2);
>          if (size == 2) {
>              /* The pair must be single-copy atomic for the doubleword.  */
> -            memop |= MO_64;
> +            memop |= MO_64 | MO_ALIGN;

isn't MO_ALIGN_8 enough?

>              tcg_gen_qemu_ld_i64(cpu_exclusive_val, addr, idx, memop);
>              if (s->be_data == MO_LE) {
>                  tcg_gen_extract_i64(cpu_reg(s, rt), cpu_exclusive_val, 0, 32);
> @@ -1871,10 +1871,11 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
>                  tcg_gen_extract_i64(cpu_reg(s, rt2), cpu_exclusive_val, 0, 32);
>              }
>          } else {
> -            /* The pair must be single-copy atomic for *each* doubleword,
> -               but not the entire quadword.  */
> +            /* The pair must be single-copy atomic for *each* doubleword, not
> +               the entire quadword, however it must be quadword aligned.  */
>              memop |= MO_64;
> -            tcg_gen_qemu_ld_i64(cpu_exclusive_val, addr, idx, memop);
> +            tcg_gen_qemu_ld_i64(cpu_exclusive_val, addr, idx,
> +                                memop | MO_ALIGN_16);

ok

>
>              TCGv_i64 addr2 = tcg_temp_new_i64();
>              tcg_gen_addi_i64(addr2, addr, 8);
> @@ -1885,7 +1886,7 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
>              tcg_gen_mov_i64(cpu_reg(s, rt2), cpu_exclusive_high);
>          }
>      } else {
> -        memop |= size;
> +        memop |= size | MO_ALIGN;

MO_ALIGN_8 here too?

>          tcg_gen_qemu_ld_i64(cpu_exclusive_val, addr, idx, memop);
>          tcg_gen_mov_i64(cpu_reg(s, rt), cpu_exclusive_val);
>      }

Regards,

Phil.
On 15 August 2017 at 16:56, Philippe Mathieu-Daudé <f4bug@amsat.org> wrote:
> Hi Richard,
>
> On 08/15/2017 11:57 AM, Richard Henderson wrote:
>>
>> From: Alistair Francis <alistair.francis@xilinx.com>
>>
>> Acording to the ARM ARM exclusive loads require the same allignment as
>> exclusive stores. Let's update the memops used for the load to match
>> that of the store. This adds the alignment requirement to the memops.
>>
>> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
>> Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
>> [rth: Require 16-byte alignment for 64-bit LDXP.]
>> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
>> ---
>>  target/arm/translate-a64.c | 11 ++++++-----
>>  1 file changed, 6 insertions(+), 5 deletions(-)
>>
>> diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
>> index eac545e4f2..2200e25be0 100644
>> --- a/target/arm/translate-a64.c
>> +++ b/target/arm/translate-a64.c
>> @@ -1861,7 +1861,7 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
>>          g_assert(size >= 2);
>>          if (size == 2) {
>>              /* The pair must be single-copy atomic for the doubleword.  */
>> -            memop |= MO_64;
>> +            memop |= MO_64 | MO_ALIGN;
>
> isn't MO_ALIGN_8 enough?

MO_ALIGN is "align to size of access", ie to 8 bytes in this case.
MO_ALIGN_8 is "align to a specified size (8 bytes) which isn't the
size of the access".
In this case the access size is 8 bytes so we don't need MO_ALIGN_8.

Alignments to something other than the access size are the oddball
case, so I think it makes sense to stick with MO_ALIGN for the
common case of "just aligned to the access size" so you can
spot the odd cases when you're reading the source.

thanks
-- PMM
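Peter's distinction is easier to see in code. The following standalone sketch models the semantics he describes; the enum values are illustrative placeholders, not the actual definitions from QEMU's tcg headers, and required_alignment() is a hypothetical helper written only for this example.

/* Illustrative sketch only: the constants below are made up for this
 * example and are not QEMU's real tcg definitions.  The point is the
 * semantics: MO_ALIGN means "align to the access size", while
 * MO_ALIGN_<N> encodes an explicit alignment that may differ from the
 * access size. */
#include <stdio.h>

enum {
    MO_8  = 0,                     /* log2 of the access size in bytes */
    MO_16 = 1,
    MO_32 = 2,
    MO_64 = 3,
    MO_SIZE = 3,                   /* mask for the size field */

    MO_ASHIFT = 4,                 /* alignment field above the size bits */
    MO_AMASK  = 7 << MO_ASHIFT,
    MO_UNALN    = 0,               /* no alignment requirement */
    MO_ALIGN_2  = 1 << MO_ASHIFT,
    MO_ALIGN_4  = 2 << MO_ASHIFT,
    MO_ALIGN_8  = 3 << MO_ASHIFT,
    MO_ALIGN_16 = 4 << MO_ASHIFT,
    MO_ALIGN    = MO_AMASK,        /* align to the access size itself */
};

/* Return the alignment in bytes that a given memop demands. */
static unsigned required_alignment(unsigned memop)
{
    unsigned a = memop & MO_AMASK;

    if (a == MO_ALIGN) {
        /* Natural alignment: the access size itself. */
        return 1u << (memop & MO_SIZE);
    }
    if (a == MO_UNALN) {
        return 1;
    }
    /* Explicit alignment: MO_ALIGN_2 .. MO_ALIGN_16 and so on. */
    return 1u << (a >> MO_ASHIFT);
}

int main(void)
{
    /* LDXP of two 32-bit words: MO_64 | MO_ALIGN is naturally 8-byte aligned. */
    printf("MO_64 | MO_ALIGN    -> %u-byte alignment\n",
           required_alignment(MO_64 | MO_ALIGN));
    /* Quadword case from the patch: explicit 16-byte alignment. */
    printf("MO_64 | MO_ALIGN_16 -> %u-byte alignment\n",
           required_alignment(MO_64 | MO_ALIGN_16));
    return 0;
}

Under these assumed values the program prints 8 for MO_64 | MO_ALIGN and 16 for MO_64 | MO_ALIGN_16, matching the two LDXP cases the patch touches.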
On 08/15/2017 08:56 AM, Philippe Mathieu-Daudé wrote:
>> @@ -1885,7 +1886,7 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
>>              tcg_gen_mov_i64(cpu_reg(s, rt2), cpu_exclusive_high);
>>          }
>>      } else {
>> -        memop |= size;
>> +        memop |= size | MO_ALIGN;
>
> MO_ALIGN_8 here too?
>
>>          tcg_gen_qemu_ld_i64(cpu_exclusive_val, addr, idx, memop);
>>          tcg_gen_mov_i64(cpu_reg(s, rt), cpu_exclusive_val);

Peter already pointed out that MO_ALIGN_N should be reserved for those
cases where an explicit size is really needed. You should note that using
MO_ALIGN_8 would be actively wrong here -- it would incorrectly require
8-byte alignment for the single byte access of LDXRB.

r~
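The LDXRB case can be checked the same way. This short program reuses the illustrative encoding from the sketch above (again, assumed values, not QEMU's real ones) to show that size | MO_ALIGN degrades to a 1-byte requirement for a byte access, while a hard-coded MO_ALIGN_8 would not.

/* Illustrative constants only, not QEMU's real tcg definitions. */
#include <assert.h>

enum {
    MO_8 = 0, MO_64 = 3, MO_SIZE = 3,
    MO_ASHIFT = 4, MO_AMASK = 7 << MO_ASHIFT,
    MO_ALIGN_8 = 3 << MO_ASHIFT,
    MO_ALIGN = MO_AMASK,
};

static unsigned align_bytes(unsigned memop)
{
    unsigned a = memop & MO_AMASK;
    if (a == MO_ALIGN) {
        return 1u << (memop & MO_SIZE);   /* natural alignment */
    }
    return a ? 1u << (a >> MO_ASHIFT) : 1;
}

int main(void)
{
    /* LDXRB: size == MO_8.  MO_ALIGN gives the correct 1-byte requirement. */
    assert(align_bytes(MO_8 | MO_ALIGN) == 1);
    /* A hard-coded MO_ALIGN_8 would wrongly demand 8 bytes for a byte access. */
    assert(align_bytes(MO_8 | MO_ALIGN_8) == 8);
    /* For a doubleword LDXR the two happen to coincide. */
    assert(align_bytes(MO_64 | MO_ALIGN) == 8);
    return 0;
}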
On 08/15/2017 01:14 PM, Peter Maydell wrote:
> On 15 August 2017 at 16:56, Philippe Mathieu-Daudé <f4bug@amsat.org> wrote:
>> Hi Richard,
>>
>> On 08/15/2017 11:57 AM, Richard Henderson wrote:
>>>
>>> From: Alistair Francis <alistair.francis@xilinx.com>
>>>
>>> Acording to the ARM ARM exclusive loads require the same allignment as
>>> exclusive stores. Let's update the memops used for the load to match
>>> that of the store. This adds the alignment requirement to the memops.
>>>
>>> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
>>> Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
>>> [rth: Require 16-byte alignment for 64-bit LDXP.]
>>> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
>>> ---
>>>  target/arm/translate-a64.c | 11 ++++++-----
>>>  1 file changed, 6 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
>>> index eac545e4f2..2200e25be0 100644
>>> --- a/target/arm/translate-a64.c
>>> +++ b/target/arm/translate-a64.c
>>> @@ -1861,7 +1861,7 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
>>>          g_assert(size >= 2);
>>>          if (size == 2) {
>>>              /* The pair must be single-copy atomic for the doubleword.  */
>>> -            memop |= MO_64;
>>> +            memop |= MO_64 | MO_ALIGN;
>>
>> isn't MO_ALIGN_8 enough?
>
> MO_ALIGN is "align to size of access", ie to 8 bytes in this case.
> MO_ALIGN_8 is "align to a specified size (8 bytes) which isn't the
> size of the access".
> In this case the access size is 8 bytes so we don't need MO_ALIGN_8.
>
> Alignments to something other than the access size are the oddball
> case, so I think it makes sense to stick with MO_ALIGN for the
> common case of "just aligned to the access size" so you can
> spot the odd cases when you're reading the source.

Ok, I misunderstood "MO_ALIGN supposes the alignment size is the size
of a memory access."

thanks!

> thanks
> -- PMM
On 08/15/2017 01:32 PM, Richard Henderson wrote:
> On 08/15/2017 08:56 AM, Philippe Mathieu-Daudé wrote:
>>> @@ -1885,7 +1886,7 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
>>>              tcg_gen_mov_i64(cpu_reg(s, rt2), cpu_exclusive_high);
>>>          }
>>>      } else {
>>> -        memop |= size;
>>> +        memop |= size | MO_ALIGN;
>>
>> MO_ALIGN_8 here too?
>>
>>>          tcg_gen_qemu_ld_i64(cpu_exclusive_val, addr, idx, memop);
>>>          tcg_gen_mov_i64(cpu_reg(s, rt), cpu_exclusive_val);
>
> Peter already pointed out that MO_ALIGN_N should be reserved for those
> cases where an explicit size is really needed. You should note that using
> MO_ALIGN_8 would be actively wrong here -- it would incorrectly require
> 8-byte alignment for the single byte access of LDXRB.

Indeed I didn't think of that, thanks!
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index eac545e4f2..2200e25be0 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -1861,7 +1861,7 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
         g_assert(size >= 2);
         if (size == 2) {
             /* The pair must be single-copy atomic for the doubleword.  */
-            memop |= MO_64;
+            memop |= MO_64 | MO_ALIGN;
             tcg_gen_qemu_ld_i64(cpu_exclusive_val, addr, idx, memop);
             if (s->be_data == MO_LE) {
                 tcg_gen_extract_i64(cpu_reg(s, rt), cpu_exclusive_val, 0, 32);
@@ -1871,10 +1871,11 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
                 tcg_gen_extract_i64(cpu_reg(s, rt2), cpu_exclusive_val, 0, 32);
             }
         } else {
-            /* The pair must be single-copy atomic for *each* doubleword,
-               but not the entire quadword.  */
+            /* The pair must be single-copy atomic for *each* doubleword, not
+               the entire quadword, however it must be quadword aligned.  */
             memop |= MO_64;
-            tcg_gen_qemu_ld_i64(cpu_exclusive_val, addr, idx, memop);
+            tcg_gen_qemu_ld_i64(cpu_exclusive_val, addr, idx,
+                                memop | MO_ALIGN_16);
 
             TCGv_i64 addr2 = tcg_temp_new_i64();
             tcg_gen_addi_i64(addr2, addr, 8);
@@ -1885,7 +1886,7 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
             tcg_gen_mov_i64(cpu_reg(s, rt2), cpu_exclusive_high);
         }
     } else {
-        memop |= size;
+        memop |= size | MO_ALIGN;
         tcg_gen_qemu_ld_i64(cpu_exclusive_val, addr, idx, memop);
         tcg_gen_mov_i64(cpu_reg(s, rt), cpu_exclusive_val);
     }