Message ID | CACXcFmnPumpkfLLzzjqkBmxwtpMa0izNj3LOtf2ycTugAKAUwQ@mail.gmail.com
---|---
State | New
Series | random.c: Remove locking in extract_buf()
This patch is missing a Signed-off-by line. Either way, I don't think this is safe to do. We want the feed-forward there to totally separate generations of seeds.
Jason A. Donenfeld <Jason@zx2c4.com> wrote:

> Either way, I don't think this is safe to do. We want the feed-forward
> there to totally separate generations of seeds.

Yes, but the right way to do that is to lock the chacha context in the
reseed function and call extract_buf() while that lock is held. I'll send
a patch for that soon.
> > Yes, but the right way to do that is to lock the chacha context
> > in the reseed function and call extract_buf() while that lock
> > is held. I'll send a patch for that soon.
>
> extract_buf() is supposed to be able to reliably generate high quality
> randomness; that's why we use it for the chacha reseed. If extract_buf()
> can return the same value for two parallel calls to extract_buf(),
> that's a Bad Thing.

I agree completely.

> For example, suppose there were two chacha contexts reseeding using
> extract_buf(), and they were racing against each other on two different
> CPUs. Having two of them reseed with the same value would be a
> cryptographic weakness.

This confuses me a bit. Are you saying two CPUs can have different
primary chacha contexts but reseed from the same input pool? Why?
Reading the code, I thought there would be only one primary crng and
the others would reseed from it.

> NACK to both patches.

OK, but as Mike wrote in the thread he started about his proposed
lockless driver:

" It is highly unusual that /dev/random is allowed to degrade the
" performance of all other subsystems - and even bring the
" system to a halt when it runs dry. No other kernel feature
" is given this dispensation,

I don't think a completely lockless driver is at all a good idea, and I
think he overstates the point a bit in the quoted text. But I do think
he has a point.

Locking the input pool while extract_buf() reads and hashes it seems
wrong to me because it can unnecessarily block other processes.
crng_reseed() already locks the crng context. My patch (which I probably
will not now write, since it has already got a NACK) would make it call
extract_buf() while holding that lock, which prevents any problem of
duplicate outputs but avoids locking the input pool during the hash.

If my proposed patch would be unacceptable, it seems worth asking
whether there is a better way to eliminate the unnecessary lock.
diff --git a/drivers/char/random.c b/drivers/char/random.c
index 68613f0b6887..9dbf7c8c68dd 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1355,7 +1355,6 @@ static void extract_buf(u8 *out)
 	}
 
 	/* Generate a hash across the pool */
-	spin_lock_irqsave(&input_pool.lock, flags);
 	blake2s_update(&state, (const u8 *)input_pool_data, POOL_BYTES);
 	blake2s_final(&state, hash); /* final zeros out state */
 
@@ -1368,8 +1367,7 @@ static void extract_buf(u8 *out)
 	 * brute-forcing the feedback as hard as brute-forcing the
 	 * hash.
 	 */
-	__mix_pool_bytes(hash, sizeof(hash));
-	spin_unlock_irqrestore(&input_pool.lock, flags);
+	mix_pool_bytes(hash, sizeof(hash));
 
 	/* Note that EXTRACT_SIZE is half of hash size here, because above
 	 * we've dumped the full length back into mixer. By reducing the