[v2,7/9] util/qht: atomically set b->hashes

Message ID 20160922101316.13064-8-alex.bennee@linaro.org
State Superseded

Commit Message

Alex Bennée Sept. 22, 2016, 10:13 a.m. UTC
ThreadSanitizer detects a possible race between reading/writing the
hashes. As ordering semantics are already documented for qht, we just
need to ensure a race can't tear the hash value, so we can use the
relaxed atomic_set/read functions.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>

---
 util/qht.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

-- 
2.9.3
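For readers not steeped in QEMU's atomics: atomic_read()/atomic_set() from
include/qemu/atomic.h are the relaxed-ordering accessors the commit message
refers to. On a C11-capable compiler they behave roughly like this sketch
(the helper names below are illustrative, not QEMU API):

#include <stdint.h>

/* Roughly what relaxed accessors do on GCC/Clang: plain loads/stores
 * become __ATOMIC_RELAXED accesses, so concurrent readers and writers
 * are no longer a data race in the C11 sense, and ThreadSanitizer sees
 * them as intentional.  (Illustrative only; QEMU's actual macros in
 * include/qemu/atomic.h are type-generic.)
 */
static inline uint32_t hash_read_relaxed(const uint32_t *p)
{
    return __atomic_load_n(p, __ATOMIC_RELAXED);
}

static inline void hash_set_relaxed(uint32_t *p, uint32_t val)
{
    __atomic_store_n(p, val, __ATOMIC_RELAXED);
}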

Comments

Alex Bennée Sept. 27, 2016, 9:03 p.m. UTC | #1
Emilio G. Cota <cota@braap.org> writes:

> On Thu, Sep 22, 2016 at 11:13:14 +0100, Alex Bennée wrote:
>> ThreadSanitizer detects a possible race between reading/writing the
>> hashes. As ordering semantics are already documented for qht we just
>> need to ensure a race can't tear the hash value so we can use the
>> relaxed atomic_set/read functions.
>
> Just being pedantic, but I think the commit log could be improved.
> I think it would be more correct to say we're avoiding being out
> of C11's spec by using atomic_read/set, instead of tolerating concurrent
> regular loads/stores.
>
> Tearing is not really the issue, in the sense that the seqlock protects
> against that. IOW, we're not worried about tearing, we're worried about
> being out of spec, as Paolo pointed out:
>
> On Mon, Sep 19, 2016 at 20:37:06 +0200, Paolo Bonzini wrote:
>> On 19/09/2016 20:06, Emilio G. Cota wrote:
>> > On Mon, Sep 19, 2016 at 16:51:38 +0100, Alex Bennée wrote:
>> >> > ThreadSanitizer detects a possible race between reading/writing the
>> >> > hashes. As ordering semantics are already documented for qht we just
>> >> > need to ensure a race can't tear the hash value so we can use the
>> >> > relaxed atomic_set/read functions.
>> > This was discussed here:
>> >
>> > https://lists.gnu.org/archive/html/qemu-devel/2016-05/msg03658.html
>> >
>> > To reiterate: reading torn hash values is fine, since the retry will
>> > happen regardless (and all pointers[] remain valid through the RCU
>> > read-critical section).
>>
>> True, but C11 says data races are undefined, not merely unspecified.
>> seqlock-protected data requires a relaxed read and write, because they
>> are read concurrently in the read and write sides.

You are quite right. Having been in the guts of ThreadSanitizer with
the toolchain guys, I'm starting to see this is the only real way to
mark things (it doesn't actually implement stand-alone barriers).
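
To make this concrete, the read side under discussion boils down to the
following sketch (condensed from qht_lookup/qht_do_lookup in util/qht.c;
chained buckets and the comparison callback are omitted, and sketch_lookup
is an illustrative name, not a QEMU function):

/* The relaxed atomic_read() keeps the concurrent access within the C11
 * memory model; the seqlock retry discards anything read mid-update.
 */
static void *sketch_lookup(struct qht_bucket *head, uint32_t hash)
{
    unsigned version;
    void *p;
    int i;

    do {
        p = NULL;
        version = seqlock_read_begin(&head->sequence);
        for (i = 0; i < QHT_BUCKET_ENTRIES; i++) {
            if (atomic_read(&head->hashes[i]) == hash) {
                /* dereferenced before seqlock_read_retry, hence
                 * atomic_rcu_read, as in qht_do_lookup */
                p = atomic_rcu_read(&head->pointers[i]);
                break;
            }
        }
    } while (seqlock_read_retry(&head->sequence, version));

    return p;
}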

> Acknowledging in the commit log the tiny-yet-measurable perf hit would be
> good, too (I'd just copy the before/after results I posted).

Will do.

> That said,
>
>   Reviewed-by: Emilio G. Cota <cota@braap.org>
>
> Thanks,

Thanks,


--
Alex Bennée

Patch

diff --git a/util/qht.c b/util/qht.c
index 16a8d79..571639d 100644
--- a/util/qht.c
+++ b/util/qht.c
@@ -379,7 +379,7 @@ static void qht_bucket_reset__locked(struct qht_bucket *head)
             if (b->pointers[i] == NULL) {
                 goto done;
             }
-            b->hashes[i] = 0;
+            atomic_set(&b->hashes[i], 0);
             atomic_set(&b->pointers[i], NULL);
         }
         b = b->next;
@@ -444,7 +444,7 @@ void *qht_do_lookup(struct qht_bucket *head, qht_lookup_func_t func,
 
     do {
         for (i = 0; i < QHT_BUCKET_ENTRIES; i++) {
-            if (b->hashes[i] == hash) {
+            if (atomic_read(&b->hashes[i]) == hash) {
                 /* The pointer is dereferenced before seqlock_read_retry,
                  * so (unlike qht_insert__locked) we need to use
                  * atomic_rcu_read here.
@@ -538,8 +538,8 @@ static bool qht_insert__locked(struct qht *ht, struct qht_map *map,
     if (new) {
         atomic_rcu_set(&prev->next, b);
     }
-    b->hashes[i] = hash;
     /* smp_wmb() implicit in seqlock_write_begin.  */
+    atomic_set(&b->hashes[i], hash);
     atomic_set(&b->pointers[i], p);
     seqlock_write_end(&head->sequence);
     return true;
@@ -607,10 +607,10 @@ qht_entry_move(struct qht_bucket *to, int i, struct qht_bucket *from, int j)
     qht_debug_assert(to->pointers[i]);
     qht_debug_assert(from->pointers[j]);
 
-    to->hashes[i] = from->hashes[j];
+    atomic_set(&to->hashes[i], from->hashes[j]);
     atomic_set(&to->pointers[i], from->pointers[j]);
 
-    from->hashes[j] = 0;
+    atomic_set(&from->hashes[j], 0);
     atomic_set(&from->pointers[j], NULL);
 }
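
For completeness, the write side after this patch looks roughly like the
sketch below (condensed from qht_insert__locked; writers are serialised by
the bucket lock held by the caller, and sketch_publish is an illustrative
name, not a QEMU function):

/* The seqlock makes concurrent readers retry, and the stores are
 * relaxed atomics purely to stay within the C11 memory model.
 */
static void sketch_publish(struct qht_bucket *head, struct qht_bucket *b,
                           int i, uint32_t hash, void *p)
{
    seqlock_write_begin(&head->sequence);
    /* smp_wmb() implicit in seqlock_write_begin.  */
    atomic_set(&b->hashes[i], hash);
    atomic_set(&b->pointers[i], p);
    seqlock_write_end(&head->sequence);
}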