[v6,0/3] implement getrandom() in vDSO

Message ID 20221121152909.3414096-1-Jason@zx2c4.com

Message

Jason A. Donenfeld Nov. 21, 2022, 3:29 p.m. UTC
Changes v5->v6:
--------------
- Fix various build errors for odd configurations.
- Do not leak any secrets onto the stack at all, to account for the
  possibility of fork()ing in a multithreaded scenario, which would ruin
  forward secrecy. Instead provide an arch-specific implementation that
  doesn't need stack space.
- Prevent page alignment from overflowing variable, and clamp to acceptable
  limits.
- Read/write unaligned bytes using get/put_unaligned.
- Add extensive comments to vDSO function explaining subtle aspects.
- Account for fork() races when writing generation counter.

Changes v4->v5:
--------------
- Add example code to vDSO addition commit showing intended use and
  interaction with allocations.
- Reset buffer to beginning when retrying.
- Rely on generation counter never being zero for fork detection, rather than
  adding extra boolean.
- Make use of __ARCH_WANT_VGETRANDOM_ALLOC macro around new syscall so that
  it's conditional on archs that actually choose to add this, and they don't
  forget to bump __NR_syscalls.
- Separate __cvdso_getrandom() into __cvdso_getrandom() and
  __cvdso_getrandom_data() so that powerpc can make a more efficient call.

Changes v3->v4:
--------------
- Split up into small series rather than one big patch.
- Use proper ordering in generation counter reads.
- Make properly generic, not just a hairball with x86, by moving symbols into
  correct files.

Changes v2->v3:
--------------

Big changes:

Thomas' previous objection was two-fold: 1) vgetrandom
should really have the same function signature as getrandom, in
addition to all of the same behavior, and 2) having vgetrandom_alloc
be a vDSO function doesn't make sense, because it doesn't actually
need anything from the VDSO data page and it doesn't correspond to an
existing syscall.

After a discussion at Plumbers this last week, we devised the following
ways to fix these: 1) we make the opaque state argument be the last
argument of vgetrandom, rather than the first one, since the real
syscall ignores the additional argument, and that way all the registers
are the same, and no behavior changes; and 2) we make vgetrandom_alloc a
syscall, rather than a vDSO function, which also gives it added
flexibility for the future, which is good.

Making those changes also reduced the size of this patch a bit.

Smaller changes:
- Properly add buffer offset position.
- Don't EXPORT_SYMBOL for vDSO code.
- Account for timens and vvar being in swapped pages.

--------------

Two statements:

  1) Userspace wants faster cryptographically secure random numbers of
     arbitrary size, big or small.

  2) Userspace is currently unable to safely roll its own RNG with the
     same security profile as getrandom().

Statement (1) has been debated for years, with arguments ranging from
"we need faster cryptographically secure card shuffling!" to "the only
things that actually need good randomness are keys, which are few and
far between" to "actually, TLS CBC nonces are frequent" and so on. I
don't intend to wade into that debate substantially, except to note that
recently glibc added arc4random(), whose goal is to return a
cryptographically secure uint32_t, and there are real user reports of it
being too slow. So here we are.

Statement (2) is more interesting. The kernel is the nexus of all
entropic inputs that influence the RNG. It is in the best position, and
probably the only position, to decide anything at all about the current
state of the RNG and of its entropy. One of the things it uniquely knows
about is when reseeding is necessary.

For example, when a virtual machine is forked, restored, or duplicated,
it's imperative that the RNG doesn't generate the same outputs. For this
reason, there's a small protocol between hypervisors and the kernel that
indicates this has happened, alongside some ID, which the RNG uses to
immediately reseed, so as not to return the same numbers. Were userspace
to expand a getrandom() seed from time T1 for the next hour, and at some
point T2 < hour, the virtual machine forked, userspace would continue to
provide the same numbers to two (or more) different virtual machines,
resulting in potential cryptographic catastrophe. Something similar
happens on resuming from hibernation (or even suspend), with various
compromise scenarios there in mind.

There's a more general reason why userspace rolling its own RNG from a
getrandom() seed is fraught. There's a lot of attention paid to this
particular Linuxism we have of the RNG being initialized and thus
non-blocking or uninitialized and thus blocking until it is initialized.
These are our Two Big States that many hold to be the holy
differentiating factor between safe and not safe, between
cryptographically secure and garbage. The fact is, however, that the
distinction between these two states is a hand-wavy wishy-washy inexact
approximation. Outside of a few exceptional cases (e.g. a HW RNG is
available), we actually don't really ever know with any rigor at all
when the RNG is safe and ready (nor when it's compromised). We do the
best we can to "estimate" it, but entropy estimation is fundamentally
impossible in the general case. So really, we're just doing guess work,
and hoping it's good and conservative enough. Let's then assume that
there's always some potential error involved in this differentiator.

In fact, under the surface, the RNG is engineered around a different
principle, and that is trying to *use* new entropic inputs regularly and
at the right specific moments in time. For example, close to boot time,
the RNG reseeds itself more often than later. At certain events, like VM
fork, the RNG reseeds itself immediately. The various heuristics for
when the RNG will use new entropy, and how often, are really a core aspect
of what the RNG has some potential to do decently enough (and something
that will probably continue to improve in the future from random.c's
present set of algorithms). So in your mind, put away the mental
attachment to the Two Big States, which represent an approximation with
a potential margin of error. Instead keep in mind that the RNG's primary
operating heuristic is how often and exactly when it's going to reseed.

So, if userspace takes a seed from getrandom() at point T1, and uses it
for the next hour (or N megabytes or some other meaningless metric),
during that time, potential errors in the Two Big States approximation
are amplified. During that time potential reseeds are being lost,
forgotten, not reflected in the output stream. That's not good.

The simplest statement you could make is that userspace RNGs that expand
a getrandom() seed at some point T1 are nearly always *worse*, in some
way, than just calling getrandom() every time a random number is
desired.

For those reasons, after some discussion on libc-alpha, glibc's
arc4random() now just calls getrandom() on each invocation. That's
trivially safe, and gives us latitude to then make the safe thing faster
without becoming unsafe at our leisure. Card shuffling isn't
particularly fast, however.

How do we rectify this? By putting a safe implementation of getrandom()
in the vDSO, which has access to whatever information a
particular iteration of random.c is using to make its decisions. I use
that careful language of "particular iteration of random.c", because the
set of things that a vDSO getrandom() implementation might need for making
decisions as good as the kernel's will likely change over time. This
isn't just a matter of exporting certain *data* to userspace. We're not
going to commit to a "data API" where the various heuristics used are
exposed, locking in how the kernel works for decades to come, and then
leave it to various userspaces to roll something on top and shoot
themselves in the foot and have all sorts of complexity disasters.
Rather, vDSO getrandom() is supposed to be the *same exact algorithm*
that runs in the kernel, except it's been hoisted into userspace as
much as possible. And so vDSO getrandom() and kernel getrandom() will
always mirror each other hermetically.

API-wise, the vDSO gains this function:

  ssize_t vgetrandom(void *buffer, size_t len, unsigned int flags, void *opaque_state);

The return value and the first 3 arguments are the same as ordinary
getrandom(), while the last argument is a pointer to some state
allocated with vgetrandom_alloc(), explained below. Were all four
arguments passed to the getrandom syscall, nothing different would
happen, and the functions would have the exact same behavior.

Then, we introduce a new syscall:

  void *vgetrandom_alloc([inout] size_t *num, [out] size_t *size_per_each, unsigned int flags);

This takes the desired number of opaque states in `num`, and returns a
pointer to an array of opaque states, the number actually allocated back
in `num`, and the size in bytes of each one in `size_per_each`, enabling
a libc to slice up the returned array into one state per thread. (The
`flags` argument is always zero for now.) We very intentionally do *not*
leave state allocation up to the caller of vgetrandom, but provide
vgetrandom_alloc for that allocation. There are too many weird things
that can go wrong, and it's important that vDSO does not provide too
generic of a mechanism. It's not going to store its state in just any
old memory address. It'll do it only in ones it allocates.
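To make the intended slicing concrete, here's a rough userspace sketch. The
helper name and the surrounding flow are hypothetical (a real libc would of
course invoke the new syscall to obtain the base pointer and sizes); the only
point here is the pointer arithmetic over the returned array:

```c
#include <stddef.h>

/* Hypothetical libc-internal flow (not invoking the syscall itself):
 *
 *   size_t num = nthreads, size_per_each;
 *   void *grnd_states = vgetrandom_alloc(&num, &size_per_each, 0);
 *
 * Thread t then passes nth_state(grnd_states, size_per_each, t) as the
 * opaque_state argument to vgetrandom(). */
static inline void *nth_state(void *base, size_t size_per_each, size_t i)
{
	/* The syscall returns one contiguous array of opaque states;
	 * slot i starts i * size_per_each bytes from the base. */
	return (char *)base + i * size_per_each;
}
```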

Right now this means it's a mlock'd page with WIPEONFORK set. In the
future maybe there will be other interesting page flags or
anti-heartbleed measures, or other platform-specific kernel-specific
things that can be set from the syscall. Again, it's important that the
kernel has a say in how this works rather than agreeing to operate on
any old address; memory isn't neutral.

The syscall currently accomplishes this with a call to vm_mmap() and
then a call to do_madvise(). It'd be nice to do this all at once, but
I'm not sure that a helper function exists for that now, and it seems a
bit premature to add one, at least for now.

The interesting meat of the implementation is in lib/vdso/getrandom.c,
as generic C code, and it aims to mainly follow random.c's buffered fast
key erasure logic. Before the RNG is initialized, it falls back to the
syscall. Right now it uses a simple generation counter to make its decisions
on reseeding (though this could be made more extensive over time).
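As a simplified model of that decision -- the real lib/vdso/getrandom.c adds
memory ordering and fork-race handling, and the struct and field names below
are illustrative rather than the kernel's:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-in for the shared vDSO RNG data. The kernel bumps
 * the generation on every reseed; it is never zero once initialized. */
struct vdso_rng_data_model {
	uint64_t generation;
	bool is_ready;
};

/* Illustrative stand-in for the per-thread opaque state: it records the
 * generation its key material was derived under. */
struct vgetrandom_state_model {
	uint64_t generation;
};

/* True when the caller must fall back to the syscall or refresh its key:
 * either the RNG isn't initialized yet, or the kernel has reseeded since
 * this state's key was derived. */
static bool must_reseed(const struct vdso_rng_data_model *d,
			const struct vgetrandom_state_model *s)
{
	return !d->is_ready || s->generation != d->generation;
}
```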

The actual place that has the most work to do is in all of the other
files. Most of the vDSO shared page infrastructure is centered around
gettimeofday, and so the main structs are all in arrays for different
timestamp types, and attached to time namespaces, and so forth. I've
done the best I could to add onto this in an unintrusive way.

In my test results, performance is pretty stellar (around 15x for uint32_t
generation), and it seems to be working. There's an extended example in the
second commit of this series, showing how the syscall and the vDSO function
are meant to be used together.

Cc: linux-crypto@vger.kernel.org
Cc: x86@kernel.org
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Adhemerval Zanella Netto <adhemerval.zanella@linaro.org>
Cc: Carlos O'Donell <carlos@redhat.com>

Jason A. Donenfeld (3):
  random: add vgetrandom_alloc() syscall
  random: introduce generic vDSO getrandom() implementation
  x86: vdso: Wire up getrandom() vDSO implementation

 MAINTAINERS                             |   2 +
 arch/x86/Kconfig                        |   2 +
 arch/x86/entry/syscalls/syscall_64.tbl  |   1 +
 arch/x86/entry/vdso/Makefile            |   3 +-
 arch/x86/entry/vdso/vdso.lds.S          |   2 +
 arch/x86/entry/vdso/vgetrandom-chacha.S | 181 ++++++++++++++++++++++++
 arch/x86/entry/vdso/vgetrandom.c        |  18 +++
 arch/x86/include/asm/unistd.h           |   1 +
 arch/x86/include/asm/vdso/getrandom.h   |  49 +++++++
 arch/x86/include/asm/vdso/vsyscall.h    |   2 +
 arch/x86/include/asm/vvar.h             |  16 +++
 drivers/char/random.c                   |  68 +++++++++
 include/uapi/asm-generic/unistd.h       |   7 +-
 include/vdso/datapage.h                 |   6 +
 kernel/sys_ni.c                         |   3 +
 lib/vdso/Kconfig                        |   5 +
 lib/vdso/getrandom.c                    | 113 +++++++++++++++
 lib/vdso/getrandom.h                    |  23 +++
 scripts/checksyscalls.sh                |   4 +
 tools/include/uapi/asm-generic/unistd.h |   7 +-
 20 files changed, 510 insertions(+), 3 deletions(-)
 create mode 100644 arch/x86/entry/vdso/vgetrandom-chacha.S
 create mode 100644 arch/x86/entry/vdso/vgetrandom.c
 create mode 100644 arch/x86/include/asm/vdso/getrandom.h
 create mode 100644 lib/vdso/getrandom.c
 create mode 100644 lib/vdso/getrandom.h

Comments

Jason A. Donenfeld Nov. 22, 2022, 8:14 p.m. UTC | #1
Hey folks,

Exciting development: one of the glibc maintainers, Adhemerval, has
written up the beginning of an implementation for this series:
https://github.com/zatrazz/glibc/commits/azanella/arc4random-vdso

I assume it'll continue to mature while this patch stews on the
list here. But so far in my testing, it works well, and the performance
boost is there and real. I've patched it into my system's glibc and am
daily driving it.

Jason
Florian Weimer Nov. 23, 2022, 10:46 a.m. UTC | #2
* Jason A. Donenfeld:

> + * The vgetrandom() function in userspace requires an opaque state, which this
> + * function provides to userspace, by mapping a certain number of special pages
> + * into the calling process. It takes a hint as to the number of opaque states
> + * desired, and returns the number of opaque states actually allocated, the
> + * size of each one in bytes, and the address of the first state.
> + */
> +SYSCALL_DEFINE3(vgetrandom_alloc, unsigned long __user *, num,
> +		unsigned long __user *, size_per_each, unsigned int, flags)

I think you should make this __u64, so that you get a consistent
userspace interface on all architectures, without the need for compat
system calls.

Thanks,
Florian
Florian Weimer Nov. 24, 2022, 5:25 a.m. UTC | #3
* Jason A. Donenfeld:

> Hi Florian,
>
> On Wed, Nov 23, 2022 at 11:46:58AM +0100, Florian Weimer wrote:
>> * Jason A. Donenfeld:
>> 
>> > + * The vgetrandom() function in userspace requires an opaque state, which this
>> > + * function provides to userspace, by mapping a certain number of special pages
>> > + * into the calling process. It takes a hint as to the number of opaque states
>> > + * desired, and returns the number of opaque states actually allocated, the
>> > + * size of each one in bytes, and the address of the first state.
>> > + */
>> > +SYSCALL_DEFINE3(vgetrandom_alloc, unsigned long __user *, num,
>> > +		unsigned long __user *, size_per_each, unsigned int, flags)
>> 
>> I think you should make this __u64, so that you get a consistent
>> userspace interface on all architectures, without the need for compat
>> system calls.
>
> That would be quite unconventional. Most syscalls that take lengths do
> so with the native register size (`unsigned long`, `size_t`), rather
> than u64. If you can point to a recent trend away from this by
> indicating some commits that added new syscalls with u64, I'd be happy
> to be shown otherwise. But AFAIK, that's not the way it's done.

See clone3 and struct clone_args.  It's more common with pointers, which
are now 64 bits unconditionally: struct futex_waitv, struct rseq_cs and
struct rseq.

If the length or pointer is a system call argument, widening it to 64
bits is not necessary because zero-extension to the full register
eliminates the need for a compat system call.  But if you pass the
address to a size or pointer, you'll need compat syscalls if you don't
make the passed data __u64.

Thanks,
Florian
Jason A. Donenfeld Nov. 24, 2022, 12:03 p.m. UTC | #4
Hi Florian,

On Thu, Nov 24, 2022 at 06:25:39AM +0100, Florian Weimer wrote:
> * Jason A. Donenfeld:
> 
> > Hi Florian,
> >
> > On Wed, Nov 23, 2022 at 11:46:58AM +0100, Florian Weimer wrote:
> >> * Jason A. Donenfeld:
> >> 
> >> > + * The vgetrandom() function in userspace requires an opaque state, which this
> >> > + * function provides to userspace, by mapping a certain number of special pages
> >> > + * into the calling process. It takes a hint as to the number of opaque states
> >> > + * desired, and returns the number of opaque states actually allocated, the
> >> > + * size of each one in bytes, and the address of the first state.
> >> > + */
> >> > +SYSCALL_DEFINE3(vgetrandom_alloc, unsigned long __user *, num,
> >> > +		unsigned long __user *, size_per_each, unsigned int, flags)
> >> 
> >> I think you should make this __u64, so that you get a consistent
> >> userspace interface on all architectures, without the need for compat
> >> system calls.
> >
> > That would be quite unconventional. Most syscalls that take lengths do
> > so with the native register size (`unsigned long`, `size_t`), rather
> > than u64. If you can point to a recent trend away from this by
> > indicating some commits that added new syscalls with u64, I'd be happy
> > to be shown otherwise. But AFAIK, that's not the way it's done.
> 
> See clone3 and struct clone_args.

The struct is one thing. But actually, clone3 takes a `size_t`:

    SYSCALL_DEFINE2(clone3, struct clone_args __user *, uargs, size_t, size)

I take from this that I too should use `size_t` rather than `unsigned
long.` And it doesn't seem like there's any compat clone3.

Jason
Jason A. Donenfeld Nov. 24, 2022, 12:24 p.m. UTC | #5
Hi Florian,

On Thu, Nov 24, 2022 at 01:15:24PM +0100, Florian Weimer wrote:
> * Jason A. Donenfeld:
> 
> > Hi Florian,
> >
> > On Thu, Nov 24, 2022 at 06:25:39AM +0100, Florian Weimer wrote:
> >> * Jason A. Donenfeld:
> >> 
> >> > Hi Florian,
> >> >
> >> > On Wed, Nov 23, 2022 at 11:46:58AM +0100, Florian Weimer wrote:
> >> >> * Jason A. Donenfeld:
> >> >> 
> >> >> > + * The vgetrandom() function in userspace requires an opaque state, which this
> >> >> > + * function provides to userspace, by mapping a certain number of special pages
> >> >> > + * into the calling process. It takes a hint as to the number of opaque states
> >> >> > + * desired, and returns the number of opaque states actually allocated, the
> >> >> > + * size of each one in bytes, and the address of the first state.
> >> >> > + */
> >> >> > +SYSCALL_DEFINE3(vgetrandom_alloc, unsigned long __user *, num,
> >> >> > +		unsigned long __user *, size_per_each, unsigned int, flags)
> >> >> 
> >> >> I think you should make this __u64, so that you get a consistent
> >> >> userspace interface on all architectures, without the need for compat
> >> >> system calls.
> >> >
> >> > That would be quite unconventional. Most syscalls that take lengths do
> >> > so with the native register size (`unsigned long`, `size_t`), rather
> >> > than u64. If you can point to a recent trend away from this by
> >> > indicating some commits that added new syscalls with u64, I'd be happy
> >> > to be shown otherwise. But AFAIK, that's not the way it's done.
> >> 
> >> See clone3 and struct clone_args.
> >
> > The struct is one thing. But actually, clone3 takes a `size_t`:
> >
> >     SYSCALL_DEFINE2(clone3, struct clone_args __user *, uargs, size_t, size)
> >
> > I take from this that I too should use `size_t` rather than `unsigned
> > long.` And it doesn't seem like there's any compat clone3.
> 
> But vgetrandom_alloc does not use unsigned long, but unsigned long *.
> You need to look at the contents for struct clone_args for comparison.

Ah! I see what you mean; that's a good point. The usual register
clearing thing isn't going to happen because these are addresses.

I still am somewhat hesitant, though, because `size_t` is really the
"proper" type to be used. Maybe the compat syscall thing is just a
necessary evil?

The other direction would be making this a u32, since 640k ought to be
enough for anybody and such, but maybe that'd be a mistake too.

So I'm not sure. Anybody else on the list with experience adding
syscalls have an opinion?

Jason
Christian Brauner Nov. 24, 2022, 12:49 p.m. UTC | #6
On Thu, Nov 24, 2022 at 01:24:42PM +0100, Jason A. Donenfeld wrote:
> Hi Florian,
> 
> On Thu, Nov 24, 2022 at 01:15:24PM +0100, Florian Weimer wrote:
> > * Jason A. Donenfeld:
> > 
> > > Hi Florian,
> > >
> > > On Thu, Nov 24, 2022 at 06:25:39AM +0100, Florian Weimer wrote:
> > >> * Jason A. Donenfeld:
> > >> 
> > >> > Hi Florian,
> > >> >
> > >> > On Wed, Nov 23, 2022 at 11:46:58AM +0100, Florian Weimer wrote:
> > >> >> * Jason A. Donenfeld:
> > >> >> 
> > >> >> > + * The vgetrandom() function in userspace requires an opaque state, which this
> > >> >> > + * function provides to userspace, by mapping a certain number of special pages
> > >> >> > + * into the calling process. It takes a hint as to the number of opaque states
> > >> >> > + * desired, and returns the number of opaque states actually allocated, the
> > >> >> > + * size of each one in bytes, and the address of the first state.
> > >> >> > + */
> > >> >> > +SYSCALL_DEFINE3(vgetrandom_alloc, unsigned long __user *, num,
> > >> >> > +		unsigned long __user *, size_per_each, unsigned int, flags)
> > >> >> 
> > >> >> I think you should make this __u64, so that you get a consistent
> > >> >> userspace interface on all architectures, without the need for compat
> > >> >> system calls.
> > >> >
> > >> > That would be quite unconventional. Most syscalls that take lengths do
> > >> > so with the native register size (`unsigned long`, `size_t`), rather
> > >> > than u64. If you can point to a recent trend away from this by
> > >> > indicating some commits that added new syscalls with u64, I'd be happy
> > >> > to be shown otherwise. But AFAIK, that's not the way it's done.
> > >> 
> > >> See clone3 and struct clone_args.

For system calls that take structs as arguments we use u64 in the struct
for proper alignment so we can extend structs without regressing old
kernels. We have a few of those extensible struct system calls.

But we don't really have a lot system calls that pass u64 as a pointer
outside of a structure so far. Neither as register and nor as pointer
iirc. Passing them as a register arg is problematic because of 32bit
arches. But passing as pointer should be fine but it is indeed uncommon.

> > >
> > > The struct is one thing. But actually, clone3 takes a `size_t`:
> > >
> > >     SYSCALL_DEFINE2(clone3, struct clone_args __user *, uargs, size_t, size)
> > >
> > > I take from this that I too should use `size_t` rather than `unsigned
> > > long.` And it doesn't seem like there's any compat clone3.
> > 
> > But vgetrandom_alloc does not use unsigned long, but unsigned long *.
> > You need to look at the contents for struct clone_args for comparison.
> 
> Ah! I see what you mean; that's a good point. The usual register
> clearing thing isn't going to happen because these are addresses.
> 
> I still am somewhat hesitant, though, because `size_t` is really the
> "proper" type to be used. Maybe the compat syscall thing is just a
> necessary evil?

We try to avoid adding new compat-requiring syscalls like the plague
usually. (At least for new syscalls that don't need to inherit behavior
from earlier syscalls they are a revisions of.)

> 
> The other direction would be making this a u32, since 640k ought to be
> enough for anybody and such, but maybe that'd be a mistake too.

I think making this a size_t is fine. We haven't traditionally used u32
for sizes. All syscalls that pass structs versioned by size use size_t.
So I would recommend to stick with that.

Alternatively, you could also introduce a simple struct versioned by
size for this system call similar to mount_setatt() and clone3() and so
on. This way you don't need to worry about future extensibilty. Just a
thought.
Arnd Bergmann Nov. 24, 2022, 1:18 p.m. UTC | #7
On Thu, Nov 24, 2022, at 13:48, Jason A. Donenfeld wrote:
> On Thu, Nov 24, 2022 at 01:24:42PM +0100, Jason A. Donenfeld wrote:

> Looks like set_mempolicy, get_mempoliy, and migrate_pages pass an
> unsigned long pointer and I don't see any compat stuff around it:
>
>     SYSCALL_DEFINE3(set_mempolicy, int, mode, const unsigned long 
> __user *, nmask,
>                     unsigned long, maxnode)
>    
>     SYSCALL_DEFINE5(get_mempolicy, int __user *, policy,
>                     unsigned long __user *, nmask, unsigned long, maxnode,
>                     unsigned long, addr, unsigned long, flags)
>
>     SYSCALL_DEFINE4(migrate_pages, pid_t, pid, unsigned long, maxnode,
>                     const unsigned long __user *, old_nodes,
>                     const unsigned long __user *, new_nodes)

Compat handling for these is done all the way down in the
pointer access:

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/mm/mempolicy.c#n1368

This works here because it's a special bitmap but is not the
best approach if you just have a pointer to a single value.

       Arnd
Jason A. Donenfeld Nov. 24, 2022, 4:30 p.m. UTC | #8
On Thu, Nov 24, 2022 at 01:24:42PM +0100, Jason A. Donenfeld wrote:
> Hi Florian,
> 
> On Thu, Nov 24, 2022 at 01:15:24PM +0100, Florian Weimer wrote:
> > * Jason A. Donenfeld:
> > 
> > > Hi Florian,
> > >
> > > On Thu, Nov 24, 2022 at 06:25:39AM +0100, Florian Weimer wrote:
> > >> * Jason A. Donenfeld:
> > >> 
> > >> > Hi Florian,
> > >> >
> > >> > On Wed, Nov 23, 2022 at 11:46:58AM +0100, Florian Weimer wrote:
> > >> >> * Jason A. Donenfeld:
> > >> >> 
> > >> >> > + * The vgetrandom() function in userspace requires an opaque state, which this
> > >> >> > + * function provides to userspace, by mapping a certain number of special pages
> > >> >> > + * into the calling process. It takes a hint as to the number of opaque states
> > >> >> > + * desired, and returns the number of opaque states actually allocated, the
> > >> >> > + * size of each one in bytes, and the address of the first state.
> > >> >> > + */
> > >> >> > +SYSCALL_DEFINE3(vgetrandom_alloc, unsigned long __user *, num,
> > >> >> > +		unsigned long __user *, size_per_each, unsigned int, flags)
> > >> >> 
> > >> >> I think you should make this __u64, so that you get a consistent
> > >> >> userspace interface on all architectures, without the need for compat
> > >> >> system calls.
> > >> >
> > >> > That would be quite unconventional. Most syscalls that take lengths do
> > >> > so with the native register size (`unsigned long`, `size_t`), rather
> > >> > than u64. If you can point to a recent trend away from this by
> > >> > indicating some commits that added new syscalls with u64, I'd be happy
> > >> > to be shown otherwise. But AFAIK, that's not the way it's done.
> > >> 
> > >> See clone3 and struct clone_args.
> > >
> > > The struct is one thing. But actually, clone3 takes a `size_t`:
> > >
> > >     SYSCALL_DEFINE2(clone3, struct clone_args __user *, uargs, size_t, size)
> > >
> > > I take from this that I too should use `size_t` rather than `unsigned
> > > long.` And it doesn't seem like there's any compat clone3.
> > 
> > But vgetrandom_alloc does not use unsigned long, but unsigned long *.
> > You need to look at the contents for struct clone_args for comparison.
> 
> The other direction would be making this a u32

I think `unsigned int` is actually a sensible size for what these values
should be. That eliminates the problem and potential bikeshed too. So
I'll go with that for v+1.

Jason