From patchwork Tue Jan 5 18:58:51 2021
X-Patchwork-Submitter: Vladimir Oltean
X-Patchwork-Id: 357294
From: Vladimir Oltean
To: "David S. Miller", Jakub Kicinski
Cc: netdev@vger.kernel.org, Andrew Lunn, Florian Fainelli,
 Paul Gortmaker, Pablo Neira Ayuso, Jiri Benc, Cong Wang,
 Jamal Hadi Salim, Stephen Hemminger, Eric Dumazet, George McCollister,
 Oleksij Rempel, Jay Vosburgh, Veaceslav Falico, Andy Gospodarek,
 Arnd Bergmann, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
 Taehee Yoo, Jiri Pirko, Björn Töpel, Paolo Abeni, Christian Brauner,
 Florian Westphal, linux-s390@vger.kernel.org,
 intel-wired-lan@lists.osuosl.org, linux-parisc@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-usb@vger.kernel.org,
 dev@openvswitch.org
Subject: [RFC PATCH v2 net-next 01/12] net: mark dev_base_lock for deprecation
Date: Tue, 5 Jan 2021 20:58:51 +0200
Message-Id: <20210105185902.3922928-2-olteanv@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210105185902.3922928-1-olteanv@gmail.com>
References: <20210105185902.3922928-1-olteanv@gmail.com>

From: Vladimir Oltean

There is a movement to eliminate the usage of dev_base_lock, which has
existed since as far back as I could track the kernel history (the
"7a2deb329241 Import changeset" commit from the bitkeeper branch).

The dev_base_lock approach has multiple issues:

- It is global and not per netns.
- Its meaning has mutated over the years, and the vast majority of its
  current users use it to protect against changes of network device
  state, as opposed to changes of the interface list.
- It overlaps with other protection mechanisms introduced in the
  meantime, which have higher performance for readers, like the RCU
  protection introduced in 2009 by Eric Dumazet for kernel 2.6.

The old comment that I just deleted made this distinction: writers
protect only against readers by holding dev_base_lock, whereas they
need to protect against other writers by holding the RTNL mutex. This
is slightly confusing/incorrect, since an rwlock_t cannot have more
than one concurrently running writer, so that explanation does not
justify why the RTNL mutex would be necessary. Instead, Stephen
Hemminger makes this clarification here:
https://lore.kernel.org/netdev/20201129211230.4d704931@hermes.local/T/#u

| There are really two different domains being covered by locks here.
|
| The first area is change of state of network devices. This has traditionally
| been covered by RTNL because there are places that depend on coordinating
| state between multiple devices. RTNL is too big and held too long but getting
| rid of it is hard because there are corner cases (like state changes from userspace
| for VPN devices).
|
| The other area is code that wants to do read access to look at list of devices.
| These pure readers can/should be converted to RCU by now. Writers should hold RTNL.

This patch edits the comment for dev_base_lock, minimizing its role in
the protection of network interface lists and clarifying that it has
other purposes as well.

Signed-off-by: Vladimir Oltean
---
 net/core/dev.c | 33 ++++++++++++++++-----------------
 1 file changed, 16 insertions(+), 17 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index a46334906c94..2aa613d22318 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -169,23 +169,22 @@ static int call_netdevice_notifiers_extack(unsigned long val,
 static struct napi_struct *napi_by_id(unsigned int napi_id);
 
 /*
- * The @dev_base_head list is protected by @dev_base_lock and the rtnl
- * semaphore.
- *
- * Pure readers hold dev_base_lock for reading, or rcu_read_lock()
- *
- * Writers must hold the rtnl semaphore while they loop through the
- * dev_base_head list, and hold dev_base_lock for writing when they do the
- * actual updates. This allows pure readers to access the list even
- * while a writer is preparing to update it.
- *
- * To put it another way, dev_base_lock is held for writing only to
- * protect against pure readers; the rtnl semaphore provides the
- * protection against other writers.
- *
- * See, for example usages, register_netdevice() and
- * unregister_netdevice(), which must be called with the rtnl
- * semaphore held.
+ * The network interface list of a netns (@net->dev_base_head) and the hash
+ * tables per ifindex (@net->dev_index_head) and per name (@net->dev_name_head)
+ * are protected using the following rules:
+ *
+ * Pure readers should hold rcu_read_lock() which should protect them against
+ * concurrent changes to the interface lists made by the writers. Pure writers
+ * must serialize by holding the RTNL mutex while they loop through the list
+ * and make changes to it.
+ *
+ * It is also possible to hold the global rwlock_t @dev_base_lock for
+ * protection (holding its read side as an alternative to rcu_read_lock, and
+ * its write side as an alternative to the RTNL mutex), however this should not
+ * be done in new code, since it is deprecated and pending removal.
+ *
+ * One other role of @dev_base_lock is to protect against changes in the
+ * operational state of a network interface.
  */
 DEFINE_RWLOCK(dev_base_lock);
 EXPORT_SYMBOL(dev_base_lock);
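
To make the rules in the new comment concrete, here is a minimal sketch
(not part of the patch) of the reader and writer patterns that new code
should follow instead of touching dev_base_lock. The helpers
print_netdev_names() and rename_netdev() are hypothetical names invented
for this example; rcu_read_lock(), for_each_netdev_rcu(), rtnl_lock(),
__dev_get_by_name() and dev_change_name() are existing kernel APIs.

/*
 * Illustrative sketch only, not part of the patch. It shows the reader
 * and writer patterns recommended by the new comment in net/core/dev.c.
 * The two functions below are hypothetical helpers invented for this
 * example; everything they call is an existing kernel API.
 */
#include <linux/netdevice.h>
#include <linux/printk.h>
#include <linux/rcupdate.h>
#include <linux/rtnetlink.h>
#include <net/net_namespace.h>

/* Pure reader: walk the per-netns interface list under RCU alone. */
static void print_netdev_names(struct net *net)
{
	struct net_device *dev;

	rcu_read_lock();
	for_each_netdev_rcu(net, dev)
		pr_info("%s: ifindex %d\n", netdev_name(dev), dev->ifindex);
	rcu_read_unlock();
}

/* Writer: serialize against other writers by holding the RTNL mutex. */
static int rename_netdev(struct net *net, const char *old, const char *new)
{
	struct net_device *dev;
	int err = -ENODEV;

	rtnl_lock();
	dev = __dev_get_by_name(net, old);	/* lookup is safe under RTNL */
	if (dev)
		err = dev_change_name(dev, new);	/* updates the name hash table */
	rtnl_unlock();

	return err;
}

The split mirrors Stephen's two domains: RCU is enough for pure readers
of the per-netns lists, while anything that mutates them (such as
register_netdevice(), dev_change_name() or unregister_netdevice())
already runs under RTNL, so new code has no reason to take
dev_base_lock at all.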