From patchwork Thu Oct 11 04:59:29 2018
X-Patchwork-Submitter: Honnappa Nagarahalli
X-Patchwork-Id: 148601
From: Honnappa Nagarahalli
To: bruce.richardson@intel.com, pablo.de.lara.guarch@intel.com
Cc: dev@dpdk.org, yipeng1.wang@intel.com, honnappa.nagarahalli@arm.com, Dharmik.Thakkar@arm.com, nd@arm.com
Date: Wed, 10 Oct 2018 23:59:29 -0500
Message-Id: <1539233972-49860-5-git-send-email-honnappa.nagarahalli@arm.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1539233972-49860-1-git-send-email-honnappa.nagarahalli@arm.com>
References: <1539233972-49860-1-git-send-email-honnappa.nagarahalli@arm.com>
Subject: [dpdk-dev] [PATCH v2 4/7] hash: add memory ordering to avoid race conditions

The only race condition that can occur is a reader using the key store
element before the key write is completed. Hence, the release memory
order is used while inserting the element. Any other race condition is
caught by the key comparison. Memory orderings are added only where
needed. For example, reads in the writer's context do not need memory
ordering as there is a single writer.

key_idx in the bucket entry and pdata in the key store element are used
for synchronisation. key_idx is used to release an inserted entry in the
bucket to the reader. Use of pdata for synchronisation is required when
an existing entry is updated, in which case only pdata is updated
without updating key_idx.

Signed-off-by: Honnappa Nagarahalli
Reviewed-by: Gavin Hu
Reviewed-by: Ola Liljedahl
Reviewed-by: Steve Capper
Reviewed-by: Yipeng Wang
---
 lib/librte_hash/rte_cuckoo_hash.c | 112 ++++++++++++++++++++++++++++----------
 1 file changed, 83 insertions(+), 29 deletions(-)

-- 
2.7.4

diff --git a/lib/librte_hash/rte_cuckoo_hash.c b/lib/librte_hash/rte_cuckoo_hash.c
index f3e95f2..e2b0260 100644
--- a/lib/librte_hash/rte_cuckoo_hash.c
+++ b/lib/librte_hash/rte_cuckoo_hash.c
@@ -1,5 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright(c) 2010-2016 Intel Corporation
+ * Copyright(c) 2018 Arm Limited
  */
 
 #include
@@ -495,7 +496,9 @@ enqueue_slot_back(const struct rte_hash *h,
 	rte_ring_sp_enqueue(h->free_slots, slot_id);
 }
 
-/* Search a key from bucket and update its data */
+/* Search a key from bucket and update its data.
+ * Writer holds the lock before calling this.
+ */
 static inline int32_t
 search_and_update(const struct rte_hash *h, void *data, const void *key,
 	struct rte_hash_bucket *bkt, hash_sig_t sig, hash_sig_t alt_hash)
@@ -509,8 +512,13 @@ search_and_update(const struct rte_hash *h, void *data, const void *key,
 		k = (struct rte_hash_key *) ((char *)keys +
 				bkt->key_idx[i] * h->key_entry_size);
 		if (rte_hash_cmp_eq(key, k->key, h) == 0) {
-			/* Update data */
-			k->pdata = data;
+			/* 'pdata' acts as the synchronization point
+			 * when an existing hash entry is updated.
+			 * Key is not updated in this case.
+			 */
+			__atomic_store_n(&k->pdata,
+				data,
+				__ATOMIC_RELEASE);
 			/*
 			 * Return index where key is stored,
 			 * subtracting the first dummy index
@@ -564,7 +572,15 @@ rte_hash_cuckoo_insert_mw(const struct rte_hash *h,
 		if (likely(prim_bkt->key_idx[i] == EMPTY_SLOT)) {
 			prim_bkt->sig_current[i] = sig;
 			prim_bkt->sig_alt[i] = alt_hash;
-			prim_bkt->key_idx[i] = new_idx;
+			/* Key can be of arbitrary length, so it is
+			 * not possible to store it atomically.
+			 * Hence the new key element's memory stores
+			 * (key as well as data) should be complete
+			 * before it is referenced.
+			 */
+			__atomic_store_n(&prim_bkt->key_idx[i],
+					 new_idx,
+					 __ATOMIC_RELEASE);
 			break;
 		}
 	}
@@ -647,8 +663,10 @@ rte_hash_cuckoo_move_insert_mw(const struct rte_hash *h,
 			prev_bkt->sig_current[prev_slot];
 		curr_bkt->sig_current[curr_slot] =
 			prev_bkt->sig_alt[prev_slot];
-		curr_bkt->key_idx[curr_slot] =
-			prev_bkt->key_idx[prev_slot];
+		/* Release the updated bucket entry */
+		__atomic_store_n(&curr_bkt->key_idx[curr_slot],
+			prev_bkt->key_idx[prev_slot],
+			__ATOMIC_RELEASE);
 
 		curr_slot = prev_slot;
 		curr_node = prev_node;
@@ -657,7 +675,10 @@ rte_hash_cuckoo_move_insert_mw(const struct rte_hash *h,
 
 	curr_bkt->sig_current[curr_slot] = sig;
 	curr_bkt->sig_alt[curr_slot] = alt_hash;
-	curr_bkt->key_idx[curr_slot] = new_idx;
+	/* Release the new bucket entry */
+	__atomic_store_n(&curr_bkt->key_idx[curr_slot],
+			 new_idx,
+			 __ATOMIC_RELEASE);
 
 	__hash_rw_writer_unlock(h);
@@ -788,8 +809,15 @@ __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key,
 	new_idx = (uint32_t)((uintptr_t) slot_id);
 	/* Copy key */
 	rte_memcpy(new_k->key, key, h->key_len);
-	new_k->pdata = data;
-
+	/* Key can be of arbitrary length, so it is not possible to store
+	 * it atomically. Hence the new key element's memory stores
+	 * (key as well as data) should be complete before it is referenced.
+	 * 'pdata' acts as the synchronization point when an existing hash
+	 * entry is updated.
+	 */
+	__atomic_store_n(&new_k->pdata,
+		data,
+		__ATOMIC_RELEASE);
 
 	/* Find an empty slot and insert */
 	ret = rte_hash_cuckoo_insert_mw(h, prim_bkt, sec_bkt, key, data,
@@ -875,21 +903,27 @@ search_one_bucket(const struct rte_hash *h, const void *key, hash_sig_t sig,
 			void **data, const struct rte_hash_bucket *bkt)
 {
 	int i;
+	uint32_t key_idx;
+	void *pdata;
 	struct rte_hash_key *k, *keys = h->key_store;
 
 	for (i = 0; i < RTE_HASH_BUCKET_ENTRIES; i++) {
-		if (bkt->sig_current[i] == sig &&
-				bkt->key_idx[i] != EMPTY_SLOT) {
+		key_idx = __atomic_load_n(&bkt->key_idx[i],
+					  __ATOMIC_ACQUIRE);
+		if (bkt->sig_current[i] == sig && key_idx != EMPTY_SLOT) {
 			k = (struct rte_hash_key *) ((char *)keys +
-					bkt->key_idx[i] * h->key_entry_size);
+					key_idx * h->key_entry_size);
+			pdata = __atomic_load_n(&k->pdata,
+					__ATOMIC_ACQUIRE);
+
 			if (rte_hash_cmp_eq(key, k->key, h) == 0) {
 				if (data != NULL)
-					*data = k->pdata;
+					*data = pdata;
 				/*
 				 * Return index where key is stored,
 				 * subtracting the first dummy index
 				 */
-				return bkt->key_idx[i] - 1;
+				return key_idx - 1;
 			}
 		}
 	}
@@ -988,21 +1022,25 @@ remove_entry(const struct rte_hash *h, struct rte_hash_bucket *bkt, unsigned i)
 	}
 }
 
-/* Search one bucket and remove the matched key */
+/* Search one bucket and remove the matched key.
+ * Writer is expected to hold the lock while calling this
+ * function.
+ */
 static inline int32_t
 search_and_remove(const struct rte_hash *h, const void *key,
 			struct rte_hash_bucket *bkt, hash_sig_t sig)
 {
 	struct rte_hash_key *k, *keys = h->key_store;
 	unsigned int i;
-	int32_t ret;
+	uint32_t key_idx;
 
 	/* Check if key is in primary location */
 	for (i = 0; i < RTE_HASH_BUCKET_ENTRIES; i++) {
-		if (bkt->sig_current[i] == sig &&
-				bkt->key_idx[i] != EMPTY_SLOT) {
+		key_idx = __atomic_load_n(&bkt->key_idx[i],
+					  __ATOMIC_ACQUIRE);
+		if (bkt->sig_current[i] == sig && key_idx != EMPTY_SLOT) {
 			k = (struct rte_hash_key *) ((char *)keys +
-					bkt->key_idx[i] * h->key_entry_size);
+					key_idx * h->key_entry_size);
 			if (rte_hash_cmp_eq(key, k->key, h) == 0) {
 				bkt->sig_current[i] = NULL_SIGNATURE;
 				bkt->sig_alt[i] = NULL_SIGNATURE;
@@ -1012,13 +1050,14 @@ search_and_remove(const struct rte_hash *h, const void *key,
 				if (h->recycle_on_del)
 					remove_entry(h, bkt, i);
 
+				__atomic_store_n(&bkt->key_idx[i],
+						 EMPTY_SLOT,
+						 __ATOMIC_RELEASE);
 				/*
 				 * Return index where key is stored,
 				 * subtracting the first dummy index
 				 */
-				ret = bkt->key_idx[i] - 1;
-				bkt->key_idx[i] = EMPTY_SLOT;
-				return ret;
+				return key_idx - 1;
 			}
 		}
 	}
@@ -1202,6 +1241,7 @@ __rte_hash_lookup_bulk(const struct rte_hash *h, const void **keys,
 	const struct rte_hash_bucket *secondary_bkt[RTE_HASH_LOOKUP_BULK_MAX];
 	uint32_t prim_hitmask[RTE_HASH_LOOKUP_BULK_MAX] = {0};
 	uint32_t sec_hitmask[RTE_HASH_LOOKUP_BULK_MAX] = {0};
+	void *pdata[RTE_HASH_LOOKUP_BULK_MAX];
 
 	/* Prefetch first keys */
 	for (i = 0; i < PREFETCH_OFFSET && i < num_keys; i++)
@@ -1271,18 +1311,25 @@ __rte_hash_lookup_bulk(const struct rte_hash *h, const void **keys,
 		while (prim_hitmask[i]) {
 			uint32_t hit_index = __builtin_ctzl(prim_hitmask[i]);
-			uint32_t key_idx = primary_bkt[i]->key_idx[hit_index];
+			uint32_t key_idx =
+			__atomic_load_n(
+				&primary_bkt[i]->key_idx[hit_index],
+				__ATOMIC_ACQUIRE);
 			const struct rte_hash_key *key_slot =
 				(const struct rte_hash_key *)(
 				(const char *)h->key_store +
 				key_idx * h->key_entry_size);
+
+			if (key_idx != EMPTY_SLOT)
+				pdata[i] = __atomic_load_n(&key_slot->pdata,
+						__ATOMIC_ACQUIRE);
 			/*
 			 * If key index is 0, do not compare key,
 			 * as it is checking the dummy slot
 			 */
 			if (!!key_idx &
				!rte_hash_cmp_eq(key_slot->key, keys[i], h)) {
 				if (data != NULL)
-					data[i] = key_slot->pdata;
+					data[i] = pdata[i];
 
 				hits |= 1ULL << i;
 				positions[i] = key_idx - 1;
@@ -1294,11 +1341,19 @@ __rte_hash_lookup_bulk(const struct rte_hash *h, const void **keys,
 		while (sec_hitmask[i]) {
 			uint32_t hit_index = __builtin_ctzl(sec_hitmask[i]);
-			uint32_t key_idx = secondary_bkt[i]->key_idx[hit_index];
+			uint32_t key_idx =
+			__atomic_load_n(
+				&secondary_bkt[i]->key_idx[hit_index],
+				__ATOMIC_ACQUIRE);
 			const struct rte_hash_key *key_slot =
 				(const struct rte_hash_key *)(
 				(const char *)h->key_store +
 				key_idx * h->key_entry_size);
+
+			if (key_idx != EMPTY_SLOT)
+				pdata[i] = __atomic_load_n(&key_slot->pdata,
+						__ATOMIC_ACQUIRE);
+
 			/*
 			 * If key index is 0, do not compare key,
 			 * as it is checking the dummy slot
@@ -1306,7 +1361,7 @@ __rte_hash_lookup_bulk(const struct rte_hash *h, const void **keys,
 			if (!!key_idx &
 				!rte_hash_cmp_eq(key_slot->key, keys[i], h)) {
 				if (data != NULL)
-					data[i] = key_slot->pdata;
+					data[i] = pdata[i];
 
 				hits |= 1ULL << i;
 				positions[i] = key_idx - 1;
@@ -1371,7 +1426,8 @@ rte_hash_iterate(const struct rte_hash *h, const void **key, void **data, uint32
 	idx = *next % RTE_HASH_BUCKET_ENTRIES;
 
 	/* If current position is empty, go to the next one */
-	while (h->buckets[bucket_idx].key_idx[idx] == EMPTY_SLOT) {
+	while ((position = __atomic_load_n(&h->buckets[bucket_idx].key_idx[idx],
+					__ATOMIC_ACQUIRE)) == EMPTY_SLOT) {
 		(*next)++;
 		/* End of table */
 		if (*next == total_entries)
@@ -1380,8 +1436,6 @@ rte_hash_iterate(const struct rte_hash *h, const void **key, void **data, uint32
 		idx = *next % RTE_HASH_BUCKET_ENTRIES;
 	}
 
 	__hash_rw_reader_lock(h);
-	/* Get position of entry in key table */
-	position = h->buckets[bucket_idx].key_idx[idx];
 	next_key = (struct rte_hash_key *) ((char *)h->key_store +
 				position * h->key_entry_size);
 	/* Return key and data */
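
The scheme described in the commit message can be illustrated outside of
DPDK with a minimal, self-contained sketch: a single writer completes all
stores to a key store element (the key bytes and pdata) and only then
publishes the entry through a release store to key_idx, while a lock-free
reader observes the entry through acquire loads before dereferencing it.
The names below (struct demo_slot and friends) are made up for the
illustration; only the GCC __atomic builtins and the release/acquire
pairing mirror what the patch does.

/* Illustrative sketch only, not DPDK code. Build with: gcc -pthread demo.c */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define EMPTY_SLOT 0

struct demo_slot {
	char key[32];     /* arbitrary-length key: cannot be stored atomically */
	void *pdata;      /* application data associated with the key */
	uint32_t key_idx; /* EMPTY_SLOT until the entry is published */
};

static struct demo_slot slot = { .key_idx = EMPTY_SLOT };
static int value = 42;

static void *writer(void *arg)
{
	(void)arg;
	/* Complete all stores to the key store element first... */
	strcpy(slot.key, "flow-1");
	__atomic_store_n(&slot.pdata, &value, __ATOMIC_RELEASE);
	/* ...then release the entry to readers via key_idx. */
	__atomic_store_n(&slot.key_idx, 1, __ATOMIC_RELEASE);
	return NULL;
}

static void *reader(void *arg)
{
	(void)arg;
	/* Acquire load of key_idx: once it is non-empty, the key bytes and
	 * pdata written before the matching release store are visible.
	 */
	while (__atomic_load_n(&slot.key_idx, __ATOMIC_ACQUIRE) == EMPTY_SLOT)
		;
	void *pdata = __atomic_load_n(&slot.pdata, __ATOMIC_ACQUIRE);
	printf("key=%s data=%d\n", slot.key, *(int *)pdata);
	return NULL;
}

int main(void)
{
	pthread_t r, w;

	pthread_create(&r, NULL, reader, NULL);
	pthread_create(&w, NULL, writer, NULL);
	pthread_join(r, NULL);
	pthread_join(w, NULL);
	return 0;
}

The release store to key_idx paired with the reader's acquire load is what
guarantees the earlier key and pdata stores are visible before the reader
uses them; it also shows why updating an existing entry must release on
pdata instead, since key_idx does not change in that path.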