Message ID: 20190925161255.1871-12-ard.biesheuvel@linaro.org
State: New
Series: crypto: wireguard using the existing crypto API
On Wed, Sep 25, 2019 at 9:14 AM Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
>
>  config ARCH_SUPPORTS_INT128
>  	bool
> +	depends on !$(cc-option,-D__SIZEOF_INT128__=0)

Hmm. Does this actually work?

If that "depends on" now ends up being 'n', afaik the people who
_enable_ it just do a

    select ARCH_SUPPORTS_INT128

and now you'll end up with the Kconfig erroring out with

    WARNING: unmet direct dependencies detected for ARCH_SUPPORTS_INT128

and then you end up with CONFIG_ARCH_SUPPORTS_INT128 anyway, instead
of the behavior you _want_ to get, which is to not get that CONFIG
defined at all.

So I heartily agree with your intent, but I don't think that model
works. I think you need to change the cases that currently do

    select ARCH_SUPPORTS_INT128

to instead have that cc-option test.

And take all the above with a pinch of salt. Maybe what you are doing
works, and I am just missing some piece of the puzzle. But I _think_
it's broken, and you didn't test with a compiler that doesn't support
that thing properly.

               Linus
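[The pitfall Linus describes can be sketched with a minimal, hypothetical Kconfig fragment; the SOME_ARCH symbol is invented here purely for illustration:]

```
# Hypothetical fragment; SOME_ARCH stands in for any architecture symbol.
config ARCH_SUPPORTS_INT128
	bool
	depends on !$(cc-option,-D__SIZEOF_INT128__=0)

config SOME_ARCH
	bool "Some architecture"
	# "select" forces the selected symbol on WITHOUT evaluating its
	# dependencies. If the cc-option test above fails, kconfig only
	# prints:
	#   WARNING: unmet direct dependencies detected for ARCH_SUPPORTS_INT128
	# and CONFIG_ARCH_SUPPORTS_INT128 still ends up set, which is
	# exactly the outcome the "depends on" was meant to prevent.
	select ARCH_SUPPORTS_INT128
```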
On Wed, 25 Sep 2019 at 23:01, Linus Torvalds <torvalds@linux-foundation.org> wrote:
>
> On Wed, Sep 25, 2019 at 9:14 AM Ard Biesheuvel
> <ard.biesheuvel@linaro.org> wrote:
> >
> >  config ARCH_SUPPORTS_INT128
> >  	bool
> > +	depends on !$(cc-option,-D__SIZEOF_INT128__=0)
>
> Hmm. Does this actually work?
>
> If that "depends on" now ends up being 'n', afaik the people who
> _enable_ it just do a
>
>     select ARCH_SUPPORTS_INT128
>
> and now you'll end up with the Kconfig erroring out with
>
>     WARNING: unmet direct dependencies detected for ARCH_SUPPORTS_INT128
>
> and then you end up with CONFIG_ARCH_SUPPORTS_INT128 anyway, instead
> of the behavior you _want_ to get, which is to not get that CONFIG
> defined at all.
>
> So I heartily agree with your intent, but I don't think that model
> works. I think you need to change the cases that currently do
>
>     select ARCH_SUPPORTS_INT128
>
> to instead have that cc-option test.
>
> And take all the above with a pinch of salt. Maybe what you are doing
> works, and I am just missing some piece of the puzzle. But I _think_
> it's broken, and you didn't test with a compiler that doesn't support
> that thing properly.
>

I think you may be right. Instead, I'll add a separate CC_HAS_INT128
symbol with the $(cc-option) test, and replace occurrences of

    select ARCH_SUPPORTS_INT128

with

    select ARCH_SUPPORTS_INT128 if CC_HAS_INT128

which is a slightly cleaner approach in any case.
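[Ard's proposed CC_HAS_INT128 scheme would look roughly like the sketch below; this is not the exact upstream patch, and SOME_ARCH is again a made-up placeholder for an architecture symbol:]

```
# Sketch only: a compiler-capability symbol that "select" sites can gate on.
config CC_HAS_INT128
	def_bool !$(cc-option,-D__SIZEOF_INT128__=0)

config ARCH_SUPPORTS_INT128
	bool

# In an architecture's Kconfig that previously did a bare select:
config SOME_ARCH
	bool "Some architecture"
	# The "if" clause is evaluated by the selecting symbol, so the
	# select simply does not happen when the compiler lacks __int128,
	# avoiding the unmet-dependency warning entirely.
	select ARCH_SUPPORTS_INT128 if CC_HAS_INT128
```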
diff --git a/crypto/ecc.c b/crypto/ecc.c
index dfe114bc0c4a..6e6aab6c987c 100644
--- a/crypto/ecc.c
+++ b/crypto/ecc.c
@@ -336,7 +336,7 @@ static u64 vli_usub(u64 *result, const u64 *left, u64 right,
 static uint128_t mul_64_64(u64 left, u64 right)
 {
 	uint128_t result;
-#if defined(CONFIG_ARCH_SUPPORTS_INT128) && defined(__SIZEOF_INT128__)
+#if defined(CONFIG_ARCH_SUPPORTS_INT128)
 	unsigned __int128 m = (unsigned __int128)left * right;
 
 	result.m_low = m;
diff --git a/init/Kconfig b/init/Kconfig
index bd7d650d4a99..e66f64a26d7d 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -785,6 +785,7 @@ config ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
 #
 config ARCH_SUPPORTS_INT128
 	bool
+	depends on !$(cc-option,-D__SIZEOF_INT128__=0)
 
 # For architectures that (ab)use NUMA to represent different memory regions
 # all cpu-local but of different latencies, such as SuperH.
diff --git a/lib/ubsan.c b/lib/ubsan.c
index e7d31735950d..b652cc14dd60 100644
--- a/lib/ubsan.c
+++ b/lib/ubsan.c
@@ -119,7 +119,7 @@ static void val_to_string(char *str, size_t size, struct type_descriptor *type,
 {
 	if (type_is_int(type)) {
 		if (type_bit_width(type) == 128) {
-#if defined(CONFIG_ARCH_SUPPORTS_INT128) && defined(__SIZEOF_INT128__)
+#if defined(CONFIG_ARCH_SUPPORTS_INT128)
 			u_max val = get_unsigned_val(type, value);
 
 			scnprintf(str, size, "0x%08x%08x%08x%08x",
diff --git a/lib/ubsan.h b/lib/ubsan.h
index b8fa83864467..7b56c09473a9 100644
--- a/lib/ubsan.h
+++ b/lib/ubsan.h
@@ -78,7 +78,7 @@ struct invalid_value_data {
 	struct type_descriptor *type;
 };
 
-#if defined(CONFIG_ARCH_SUPPORTS_INT128) && defined(__SIZEOF_INT128__)
+#if defined(CONFIG_ARCH_SUPPORTS_INT128)
 typedef __int128 s_max;
 typedef unsigned __int128 u_max;
 #else
In order to use 128-bit integer arithmetic in C code, the architecture
needs to have declared support for it by setting ARCH_SUPPORTS_INT128,
and it requires a version of the toolchain that supports this at build
time. This is why all existing tests for ARCH_SUPPORTS_INT128 also test
whether __SIZEOF_INT128__ is defined, since this is only the case for
compilers that can support 128-bit integers.

Let's fold this additional test into the Kconfig declaration of
ARCH_SUPPORTS_INT128 so that we can also use the symbol in Makefiles,
e.g., to decide whether a certain object needs to be included in the
first place.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 crypto/ecc.c | 2 +-
 init/Kconfig | 1 +
 lib/ubsan.c  | 2 +-
 lib/ubsan.h  | 2 +-
 4 files changed, 4 insertions(+), 3 deletions(-)

-- 
2.20.1