| Message ID | 20200502000245.11071-3-honnappa.nagarahalli@arm.com |
|---|---|
| State | Superseded |
| Series | use c11 atomics for service core lib |
> -----Original Message-----
> From: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> Sent: Saturday, May 2, 2020 1:03 AM
> To: dev@dpdk.org; phil.yang@arm.com; Van Haaren, Harry <harry.van.haaren@intel.com>
> Cc: thomas@monjalon.net; david.marchand@redhat.com; Ananyev, Konstantin <konstantin.ananyev@intel.com>; jerinj@marvell.com; hemant.agrawal@nxp.com; Eads, Gage <gage.eads@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>; honnappa.nagarahalli@arm.com; nd@arm.com; stable@dpdk.org
> Subject: [PATCH v3 2/6] service: identify service running on another core correctly
>
> The logic to identify if the MT unsafe service is running on another
> core can return -EBUSY spuriously. In such cases, running the service
> becomes costlier than using atomic operations. Assume that the
> application passes the right parameters and reduces the number of
> instructions for all cases.
>
> Cc: stable@dpdk.org
> Fixes: 8d39d3e237c2 ("service: fix race in service on app lcore function")

Add "fix" to the title; suggestion:
service: fix identification of service running on other lcore

> Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> Reviewed-by: Phil Yang <phil.yang@arm.com>

I believe there may be some optimizations we can apply after this patchset,
as the "num_mapped_cores" variable is no longer used in a significant way
for the atomic selection; however, let's leave that optimization outside of
the 20.05 scope.

With the title (see above) and comment (see below) updated:

Acked-by: Harry van Haaren <harry.van.haaren@intel.com>

> ---
<snip some diff>

> @@ -412,24 +412,14 @@ rte_service_run_iter_on_app_lcore(uint32_t id,
> uint32_t serialize_mt_unsafe)
>
> 	SERVICE_VALID_GET_OR_ERR_RET(id, s, -EINVAL);
>
> -	/* Atomically add this core to the mapped cores first, then examine if
> -	 * we can run the service. This avoids a race condition between
> -	 * checking the value, and atomically adding to the mapped count.
> +	/* Increment num_mapped_cores to indicate that the service
> +	 * is running on a core.
> 	 */
> -	if (serialize_mt_unsafe)
> -		rte_atomic32_inc(&s->num_mapped_cores);
> +	rte_atomic32_inc(&s->num_mapped_cores);

The comment on the added lines here is a little confusing to me: the
"num_mapped_cores" count does not indicate that the service "is running
on a core", it indicates the number of lcores mapped to that service.
Suggestion below?

/* Increment num_mapped_cores to reflect that this core is
 * now mapped capable of running the service.
 */

> -	if (service_mt_safe(s) == 0 &&
> -			rte_atomic32_read(&s->num_mapped_cores) > 1) {
> -		if (serialize_mt_unsafe)
> -			rte_atomic32_dec(&s->num_mapped_cores);
> -		return -EBUSY;
> -	}
> -
> -	int ret = service_run(id, cs, UINT64_MAX, s);
> +	int ret = service_run(id, cs, UINT64_MAX, s, serialize_mt_unsafe);

<snip rest of diff>
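For context, the serialization that the reworked service_run() performs can be sketched in isolation. This is a minimal, self-contained C11 sketch of the pattern, not the DPDK code itself: `struct service`, `service_run_sketch` and the callback plumbing are illustrative stand-ins, and C11 atomics replace the rte_atomic32 calls used in the actual patch.

```c
#include <errno.h>
#include <stdatomic.h>
#include <stdint.h>

/* Illustrative stand-ins for the real rte_service_spec_impl fields. */
struct service {
	atomic_uint execute_lock;   /* plays the role of s->execute_lock */
	int mt_safe;                /* plays the role of service_mt_safe(s) */
};

/* Sketch of the post-patch behaviour: the serialization decision lives
 * inside the run function. Only when the service is MT-unsafe AND the
 * caller asked for serialization is the execute lock taken; a losing
 * core returns -EBUSY instead of running the callback concurrently.
 */
static int
service_run_sketch(struct service *s, uint32_t serialize_mt_unsafe,
		void (*cb)(void *), void *args)
{
	if (!s->mt_safe && serialize_mt_unsafe) {
		unsigned int expected = 0;

		/* Analogue of rte_atomic32_cmpset(&s->execute_lock, 0, 1). */
		if (!atomic_compare_exchange_strong(&s->execute_lock,
				&expected, 1))
			return -EBUSY;

		cb(args);
		atomic_store(&s->execute_lock, 0);
	} else {
		cb(args);
	}
	return 0;
}
```

Note how -EBUSY is now returned only when another core actually holds the execute lock, whereas the pre-patch check failed whenever more than one core was merely mapped to the service.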
```diff
diff --git a/lib/librte_eal/common/rte_service.c b/lib/librte_eal/common/rte_service.c
index b8c465eb9..c89472b83 100644
--- a/lib/librte_eal/common/rte_service.c
+++ b/lib/librte_eal/common/rte_service.c
@@ -360,7 +360,7 @@ rte_service_runner_do_callback(struct rte_service_spec_impl *s,
 /* Expects the service 's' is valid. */
 static int32_t
 service_run(uint32_t i, struct core_state *cs, uint64_t service_mask,
-	    struct rte_service_spec_impl *s)
+	    struct rte_service_spec_impl *s, uint32_t serialize_mt_unsafe)
 {
 	if (!s)
 		return -EINVAL;
@@ -374,7 +374,7 @@ service_run(uint32_t i, struct core_state *cs, uint64_t service_mask,

 	cs->service_active_on_lcore[i] = 1;

-	if (service_mt_safe(s) == 0) {
+	if ((service_mt_safe(s) == 0) && (serialize_mt_unsafe == 1)) {
 		if (!rte_atomic32_cmpset((uint32_t *)&s->execute_lock, 0, 1))
 			return -EBUSY;

@@ -412,24 +412,14 @@ rte_service_run_iter_on_app_lcore(uint32_t id, uint32_t serialize_mt_unsafe)

 	SERVICE_VALID_GET_OR_ERR_RET(id, s, -EINVAL);

-	/* Atomically add this core to the mapped cores first, then examine if
-	 * we can run the service. This avoids a race condition between
-	 * checking the value, and atomically adding to the mapped count.
+	/* Increment num_mapped_cores to indicate that the service
+	 * is running on a core.
 	 */
-	if (serialize_mt_unsafe)
-		rte_atomic32_inc(&s->num_mapped_cores);
+	rte_atomic32_inc(&s->num_mapped_cores);

-	if (service_mt_safe(s) == 0 &&
-			rte_atomic32_read(&s->num_mapped_cores) > 1) {
-		if (serialize_mt_unsafe)
-			rte_atomic32_dec(&s->num_mapped_cores);
-		return -EBUSY;
-	}
-
-	int ret = service_run(id, cs, UINT64_MAX, s);
+	int ret = service_run(id, cs, UINT64_MAX, s, serialize_mt_unsafe);

-	if (serialize_mt_unsafe)
-		rte_atomic32_dec(&s->num_mapped_cores);
+	rte_atomic32_dec(&s->num_mapped_cores);

 	return ret;
 }
@@ -449,7 +439,7 @@ rte_service_runner_func(void *arg)
 		if (!service_valid(i))
 			continue;
 		/* return value ignored as no change to code flow */
-		service_run(i, cs, service_mask, service_get(i));
+		service_run(i, cs, service_mask, service_get(i), 1);
 	}

 	cs->loops++;
```
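From the application's side the public API is unchanged; only the meaning of -EBUSY is sharpened. A hypothetical caller sketch follows: `run_service_once` is not a DPDK function, and the service id is assumed to have been obtained elsewhere, e.g. via rte_service_get_by_name() or at registration time.

```c
#include <rte_service.h>

/* Hypothetical helper driving one service iteration from an app lcore. */
static void
run_service_once(uint32_t id)
{
	/* serialize_mt_unsafe=1: ask the library to serialize MT-unsafe
	 * services against other lcores via the execute lock.
	 */
	int32_t ret = rte_service_run_iter_on_app_lcore(id, 1);
	if (ret == -EBUSY) {
		/* With this patch, -EBUSY means the service really is
		 * executing on another lcore right now, not merely that
		 * another lcore is mapped to it. Retry or skip this
		 * iteration as appropriate.
		 */
	}
}
```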