From patchwork Mon Sep 21 10:55:11 2020
X-Patchwork-Submitter: Nikolay Aleksandrov
X-Patchwork-Id: 260507
From: Nikolay Aleksandrov
To: netdev@vger.kernel.org
Cc: roopa@nvidia.com, davem@davemloft.net, bridge@lists.linux-foundation.org,
 Nikolay Aleksandrov
Subject: [PATCH net-next 01/16] net: bridge: mdb: use extack in br_mdb_parse()
Date: Mon, 21 Sep 2020 13:55:11 +0300
Message-Id: <20200921105526.1056983-2-razor@blackwall.org>
In-Reply-To: <20200921105526.1056983-1-razor@blackwall.org>
References: <20200921105526.1056983-1-razor@blackwall.org>

We can drop the pr_info() calls and just use extack to return a meaningful
error to user-space when br_mdb_parse() fails.

Signed-off-by: Nikolay Aleksandrov
---
 net/bridge/br_mdb.c | 60 +++++++++++++++++++++++++++++----------------
 1 file changed, 39 insertions(+), 21 deletions(-)

diff --git a/net/bridge/br_mdb.c b/net/bridge/br_mdb.c
index 00f1651a6aba..d4031f5554f7 100644
--- a/net/bridge/br_mdb.c
+++ b/net/bridge/br_mdb.c
@@ -629,33 +629,50 @@ void br_rtr_notify(struct net_device *dev, struct net_bridge_port *port,
 	rtnl_set_sk_err(net, RTNLGRP_MDB, err);
 }
 
-static bool is_valid_mdb_entry(struct br_mdb_entry *entry)
+static bool is_valid_mdb_entry(struct br_mdb_entry *entry,
+			       struct netlink_ext_ack *extack)
 {
-	if (entry->ifindex == 0)
+	if (entry->ifindex == 0) {
+		NL_SET_ERR_MSG_MOD(extack, "Zero entry ifindex is not allowed");
 		return false;
+	}
 
 	if (entry->addr.proto == htons(ETH_P_IP)) {
-		if (!ipv4_is_multicast(entry->addr.u.ip4))
+		if (!ipv4_is_multicast(entry->addr.u.ip4)) {
+			NL_SET_ERR_MSG_MOD(extack, "IPv4 entry group address is not multicast");
 			return false;
-		if (ipv4_is_local_multicast(entry->addr.u.ip4))
+		}
+		if (ipv4_is_local_multicast(entry->addr.u.ip4)) {
+			NL_SET_ERR_MSG_MOD(extack, "IPv4 entry group address is local multicast");
 			return false;
+		}
 #if IS_ENABLED(CONFIG_IPV6)
 	} else if (entry->addr.proto == htons(ETH_P_IPV6)) {
-		if (ipv6_addr_is_ll_all_nodes(&entry->addr.u.ip6))
+		if (ipv6_addr_is_ll_all_nodes(&entry->addr.u.ip6)) {
+			NL_SET_ERR_MSG_MOD(extack, "IPv6 entry group address is link-local all nodes");
 			return false;
+		}
 #endif
-	} else
+	} else {
+		NL_SET_ERR_MSG_MOD(extack, "Unknown entry protocol");
 		return false;
-	if (entry->state != MDB_PERMANENT && entry->state != MDB_TEMPORARY)
+	}
+
+	if (entry->state != MDB_PERMANENT && entry->state != MDB_TEMPORARY) {
+		NL_SET_ERR_MSG_MOD(extack, "Unknown entry state");
 		return false;
-	if (entry->vid >= VLAN_VID_MASK)
+	}
+	if (entry->vid >= VLAN_VID_MASK) {
+		NL_SET_ERR_MSG_MOD(extack, "Invalid entry VLAN id");
 		return false;
+	}
 
 	return true;
 }
 
 static int br_mdb_parse(struct sk_buff *skb, struct nlmsghdr *nlh,
-			struct net_device **pdev, struct br_mdb_entry **pentry)
+			struct net_device **pdev, struct br_mdb_entry **pentry,
+			struct netlink_ext_ack *extack)
 {
 	struct net *net = sock_net(skb->sk);
 	struct br_mdb_entry *entry;
@@ -671,36 +688,37 @@ static int br_mdb_parse(struct sk_buff *skb, struct nlmsghdr *nlh,
 	bpm = nlmsg_data(nlh);
 	if (bpm->ifindex == 0) {
-		pr_info("PF_BRIDGE: br_mdb_parse() with invalid ifindex\n");
+		NL_SET_ERR_MSG_MOD(extack, "Invalid bridge ifindex");
 		return -EINVAL;
 	}
 
 	dev = __dev_get_by_index(net, bpm->ifindex);
 	if (dev == NULL) {
-		pr_info("PF_BRIDGE: br_mdb_parse() with unknown ifindex\n");
+		NL_SET_ERR_MSG_MOD(extack, "Bridge device doesn't exist");
 		return -ENODEV;
 	}
 
 	if (!(dev->priv_flags & IFF_EBRIDGE)) {
-		pr_info("PF_BRIDGE: br_mdb_parse() with non-bridge\n");
+		NL_SET_ERR_MSG_MOD(extack, "Device is not a bridge");
 		return -EOPNOTSUPP;
 	}
 
 	*pdev = dev;
 
-	if (!tb[MDBA_SET_ENTRY] ||
-	    nla_len(tb[MDBA_SET_ENTRY]) != sizeof(struct br_mdb_entry)) {
-		pr_info("PF_BRIDGE: br_mdb_parse() with invalid attr\n");
+	if (!tb[MDBA_SET_ENTRY]) {
+		NL_SET_ERR_MSG_MOD(extack, "Missing MDBA_SET_ENTRY attribute");
 		return -EINVAL;
 	}
-
-	entry = nla_data(tb[MDBA_SET_ENTRY]);
-	if (!is_valid_mdb_entry(entry)) {
-		pr_info("PF_BRIDGE: br_mdb_parse() with invalid entry\n");
+	if (nla_len(tb[MDBA_SET_ENTRY]) != sizeof(struct br_mdb_entry)) {
+		NL_SET_ERR_MSG_MOD(extack, "Invalid MDBA_SET_ENTRY attribute length");
 		return -EINVAL;
 	}
+	entry = nla_data(tb[MDBA_SET_ENTRY]);
+	if (!is_valid_mdb_entry(entry, extack))
+		return -EINVAL;
 	*pentry = entry;
+
 	return 0;
 }
 
@@ -797,7 +815,7 @@ static int br_mdb_add(struct sk_buff *skb, struct nlmsghdr *nlh,
 	struct net_bridge *br;
 	int err;
 
-	err = br_mdb_parse(skb, nlh, &dev, &entry);
+	err = br_mdb_parse(skb, nlh, &dev, &entry, extack);
 	if (err < 0)
 		return err;
 
@@ -892,7 +910,7 @@ static int br_mdb_del(struct sk_buff *skb, struct nlmsghdr *nlh,
 	struct net_bridge *br;
 	int err;
 
-	err = br_mdb_parse(skb, nlh, &dev, &entry);
+	err = br_mdb_parse(skb, nlh, &dev, &entry, extack);
 	if (err < 0)
 		return err;
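For readers unfamiliar with extack reporting, the pattern this patch converts
to looks roughly like the sketch below. The helper name is hypothetical and is
not code from the patch; it only illustrates how NL_SET_ERR_MSG_MOD() attaches
a message that netlink returns to the caller alongside the errno (iproute2
typically prints it as "Error: bridge: ...").

/* Minimal sketch of the extack error-reporting pattern adopted here.
 * my_check_entry() is a hypothetical helper.
 */
#include <linux/netlink.h>
#include <linux/if_bridge.h>

static bool my_check_entry(const struct br_mdb_entry *entry,
			   struct netlink_ext_ack *extack)
{
	if (entry->ifindex == 0) {
		/* The message travels back in the netlink ACK instead of
		 * being logged with pr_info(), so user-space can show it.
		 */
		NL_SET_ERR_MSG_MOD(extack, "Zero entry ifindex is not allowed");
		return false;
	}
	return true;
}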
From patchwork Mon Sep 21 10:55:13 2020
X-Patchwork-Submitter: Nikolay Aleksandrov
X-Patchwork-Id: 260505
From: Nikolay Aleksandrov
To: netdev@vger.kernel.org
Cc: roopa@nvidia.com, davem@davemloft.net, bridge@lists.linux-foundation.org,
 Nikolay Aleksandrov
Subject: [PATCH net-next 03/16] net: bridge: mdb: use extack in br_mdb_add()
 and br_mdb_add_group()
Date: Mon, 21 Sep 2020 13:55:13 +0300
Message-Id: <20200921105526.1056983-4-razor@blackwall.org>
In-Reply-To: <20200921105526.1056983-1-razor@blackwall.org>
References: <20200921105526.1056983-1-razor@blackwall.org>

Pass and use extack all the way down to br_mdb_add_group().
Signed-off-by: Nikolay Aleksandrov --- net/bridge/br_mdb.c | 54 +++++++++++++++++++++++++++++++++++---------- 1 file changed, 42 insertions(+), 12 deletions(-) diff --git a/net/bridge/br_mdb.c b/net/bridge/br_mdb.c index 92ab7369fee0..1df62d887953 100644 --- a/net/bridge/br_mdb.c +++ b/net/bridge/br_mdb.c @@ -723,7 +723,8 @@ static int br_mdb_parse(struct sk_buff *skb, struct nlmsghdr *nlh, } static int br_mdb_add_group(struct net_bridge *br, struct net_bridge_port *port, - struct br_ip *group, struct br_mdb_entry *entry) + struct br_ip *group, struct br_mdb_entry *entry, + struct netlink_ext_ack *extack) { struct net_bridge_mdb_entry *mp; struct net_bridge_port_group *p; @@ -742,10 +743,14 @@ static int br_mdb_add_group(struct net_bridge *br, struct net_bridge_port *port, /* host join */ if (!port) { /* don't allow any flags for host-joined groups */ - if (entry->state) + if (entry->state) { + NL_SET_ERR_MSG_MOD(extack, "Flags are not allowed for host groups"); return -EINVAL; - if (mp->host_joined) + } + if (mp->host_joined) { + NL_SET_ERR_MSG_MOD(extack, "Group is already joined by host"); return -EEXIST; + } br_multicast_host_join(mp, false); br_mdb_notify(br->dev, mp, NULL, RTM_NEWMDB); @@ -756,16 +761,20 @@ static int br_mdb_add_group(struct net_bridge *br, struct net_bridge_port *port, for (pp = &mp->ports; (p = mlock_dereference(*pp, br)) != NULL; pp = &p->next) { - if (p->port == port) + if (p->port == port) { + NL_SET_ERR_MSG_MOD(extack, "Group is already joined by port"); return -EEXIST; + } if ((unsigned long)p->port < (unsigned long)port) break; } p = br_multicast_new_port_group(port, group, *pp, entry->state, NULL, MCAST_EXCLUDE); - if (unlikely(!p)) + if (unlikely(!p)) { + NL_SET_ERR_MSG_MOD(extack, "Couldn't allocate new port group"); return -ENOMEM; + } rcu_assign_pointer(*pp, p); if (entry->state == MDB_TEMPORARY) mod_timer(&p->timer, now + br->multicast_membership_interval); @@ -776,7 +785,8 @@ static int br_mdb_add_group(struct net_bridge *br, struct net_bridge_port *port, static int __br_mdb_add(struct net *net, struct net_bridge *br, struct net_bridge_port *p, - struct br_mdb_entry *entry) + struct br_mdb_entry *entry, + struct netlink_ext_ack *extack) { struct br_ip ip; int ret; @@ -784,7 +794,7 @@ static int __br_mdb_add(struct net *net, struct net_bridge *br, __mdb_entry_to_br_ip(entry, &ip); spin_lock_bh(&br->multicast_lock); - ret = br_mdb_add_group(br, p, &ip, entry); + ret = br_mdb_add_group(br, p, &ip, entry, extack); spin_unlock_bh(&br->multicast_lock); return ret; @@ -808,17 +818,37 @@ static int br_mdb_add(struct sk_buff *skb, struct nlmsghdr *nlh, br = netdev_priv(dev); - if (!netif_running(br->dev) || !br_opt_get(br, BROPT_MULTICAST_ENABLED)) + if (!netif_running(br->dev)) { + NL_SET_ERR_MSG_MOD(extack, "Bridge device is not running"); return -EINVAL; + } + + if (!br_opt_get(br, BROPT_MULTICAST_ENABLED)) { + NL_SET_ERR_MSG_MOD(extack, "Bridge's multicast processing is disabled"); + return -EINVAL; + } if (entry->ifindex != br->dev->ifindex) { pdev = __dev_get_by_index(net, entry->ifindex); - if (!pdev) + if (!pdev) { + NL_SET_ERR_MSG_MOD(extack, "Port net device doesn't exist"); return -ENODEV; + } p = br_port_get_rtnl(pdev); - if (!p || p->br != br || p->state == BR_STATE_DISABLED) + if (!p) { + NL_SET_ERR_MSG_MOD(extack, "Net device is not a bridge port"); return -EINVAL; + } + + if (p->br != br) { + NL_SET_ERR_MSG_MOD(extack, "Port belongs to a different bridge device"); + return -EINVAL; + } + if (p->state == BR_STATE_DISABLED) { + 
NL_SET_ERR_MSG_MOD(extack, "Port is in disabled state"); + return -EINVAL; + } vg = nbp_vlan_group(p); } else { vg = br_vlan_group(br); @@ -830,12 +860,12 @@ static int br_mdb_add(struct sk_buff *skb, struct nlmsghdr *nlh, if (br_vlan_enabled(br->dev) && vg && entry->vid == 0) { list_for_each_entry(v, &vg->vlan_list, vlist) { entry->vid = v->vid; - err = __br_mdb_add(net, br, p, entry); + err = __br_mdb_add(net, br, p, entry, extack); if (err) break; } } else { - err = __br_mdb_add(net, br, p, entry); + err = __br_mdb_add(net, br, p, entry, extack); } return err; From patchwork Mon Sep 21 10:55:14 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Nikolay Aleksandrov X-Patchwork-Id: 260506 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 63AFDC43464 for ; Mon, 21 Sep 2020 10:56:25 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 2201F206E5 for ; Mon, 21 Sep 2020 10:56:25 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=blackwall-org.20150623.gappssmtp.com header.i=@blackwall-org.20150623.gappssmtp.com header.b="OSP2THmw" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726584AbgIUK4Y (ORCPT ); Mon, 21 Sep 2020 06:56:24 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36488 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726333AbgIUK4P (ORCPT ); Mon, 21 Sep 2020 06:56:15 -0400 Received: from mail-wm1-x341.google.com (mail-wm1-x341.google.com [IPv6:2a00:1450:4864:20::341]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 39DE1C0613D0 for ; Mon, 21 Sep 2020 03:56:15 -0700 (PDT) Received: by mail-wm1-x341.google.com with SMTP id e17so11667498wme.0 for ; Mon, 21 Sep 2020 03:56:15 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=blackwall-org.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=v0VyLBidFMgxaEIbE0qIA152wkMYPe0lHT8kdjtZmtI=; b=OSP2THmw1TyZKAGS7yzefO2pKInkY7BvVFxJAj31KwbSw5vuyIHod9PG0W22OsIBBw EYaik+sFpNryraAU1anxdSztI+DnuMf7S6ED0uwC84M+Yl1Z+UxA++acN1YBzn/fF9fF EQyWODKHqRr9INwFmC3i8ZVtXEMTOTNcjFq3Y1zVOP9yr/ImLjcaxeSy2zXnwu+Ziq09 QV5C1mRga7XoicYsM5GxQP1dqGwYZrR22RyayOfO7tiLbghz/959wnFw2CY16ypUoZUu Me1ZZ9m3Q8No4ivqFaDchAGNV2YAtdZlyN7LZNonGt65yVcAgAhcaLvTRoJC80RHduYu 4xVw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=v0VyLBidFMgxaEIbE0qIA152wkMYPe0lHT8kdjtZmtI=; b=Vm+aWh9dExO7hlXO4PtTyDmuxqiopHoIXpMk+t92olgg+sl0E+niM5AK0BJq7OPXXG SiI+guMuYAJ2pK1bdMqczg9ZNCIEeKKkhBCPq0rJMbVvV14aV1PPTMyk5CoDnYMcxeAh 78lrloq0vMJwF6d42xKy+6DD2fOesmzqzoD61JX8y7ZPb/K/gUBncdVU9skmDSy9FMXe tA6FghK4i2EfE5TlBKG7sqLDaI+yHRtwtTii8hWTzrh44rpjLRZ32pU/2CeSzKCzvUDa /iMDEJw26lsxUX0j6Gm4bhwTlAUDZxxCfAS7Td+Wo7dJ0Ce64J6SDDhe9X17yAo6MBHb 
From patchwork Mon Sep 21 10:55:14 2020
X-Patchwork-Submitter: Nikolay Aleksandrov
X-Patchwork-Id: 260506
From: Nikolay Aleksandrov
To: netdev@vger.kernel.org
Cc: roopa@nvidia.com, davem@davemloft.net, bridge@lists.linux-foundation.org,
 Nikolay Aleksandrov
Subject: [PATCH net-next 04/16] net: bridge: add src field to br_ip
Date: Mon, 21 Sep 2020 13:55:14 +0300
Message-Id: <20200921105526.1056983-5-razor@blackwall.org>
In-Reply-To: <20200921105526.1056983-1-razor@blackwall.org>
References: <20200921105526.1056983-1-razor@blackwall.org>

Add a new src field to struct br_ip which will be used to look up S,G
entries. When the SSM option is added we will enable full br_ip lookups.

Signed-off-by: Nikolay Aleksandrov
---
 include/linux/if_bridge.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/include/linux/if_bridge.h b/include/linux/if_bridge.h
index 6479a38e52fa..4fb9c4954f3a 100644
--- a/include/linux/if_bridge.h
+++ b/include/linux/if_bridge.h
@@ -18,6 +18,12 @@ struct br_ip {
 		__be32	ip4;
 #if IS_ENABLED(CONFIG_IPV6)
 		struct in6_addr ip6;
+#endif
+	} src;
+	union {
+		__be32	ip4;
+#if IS_ENABLED(CONFIG_IPV6)
+		struct in6_addr ip6;
 #endif
 	} u;
 	__be16		proto;
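To see how the new field is meant to be used, here is a hedged sketch of
composing an IPv4 S,G key. The helper is hypothetical, and at this point in
the series the group address still lives in the "u" union (it is reworked
into "dst" by a later patch in the set):

#include <linux/if_bridge.h>
#include <linux/if_ether.h>
#include <linux/string.h>

static void my_fill_sg_key(struct br_ip *key, __be32 src, __be32 group,
			   u16 vid)
{
	memset(key, 0, sizeof(*key));	/* keep unused/pad bytes zeroed for lookups */
	key->proto = htons(ETH_P_IP);
	key->vid = vid;
	key->src.ip4 = src;		/* S: the sender being matched */
	key->u.ip4 = group;		/* G: the multicast group */
}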
From patchwork Mon Sep 21 10:55:17 2020
X-Patchwork-Submitter: Nikolay Aleksandrov
X-Patchwork-Id: 260504
From: Nikolay Aleksandrov
To: netdev@vger.kernel.org
Cc: roopa@nvidia.com, davem@davemloft.net, bridge@lists.linux-foundation.org,
 Nikolay Aleksandrov
Subject: [PATCH net-next 07/16] net: bridge: mdb: add support to extend
 add/del commands
Date: Mon, 21 Sep 2020 13:55:17 +0300
Message-Id: <20200921105526.1056983-8-razor@blackwall.org>
In-Reply-To: <20200921105526.1056983-1-razor@blackwall.org>
References: <20200921105526.1056983-1-razor@blackwall.org>

Since the MDB add/del code expects an exact struct br_mdb_entry we can't
really add any extensions, thus add a new nested attribute at the level of
MDBA_SET_ENTRY called MDBA_SET_ENTRY_ATTRS which will be used to pass all
new options via netlink attributes. This patch doesn't change anything
functionally since the new attribute is not used yet, only parsed.

Signed-off-by: Nikolay Aleksandrov
---
 include/uapi/linux/if_bridge.h | 12 ++++++++++++
 net/bridge/br_mdb.c            | 22 +++++++++++++++++++---
 2 files changed, 31 insertions(+), 3 deletions(-)

diff --git a/include/uapi/linux/if_bridge.h b/include/uapi/linux/if_bridge.h
index 75a2ac479247..dc52f8cffa0d 100644
--- a/include/uapi/linux/if_bridge.h
+++ b/include/uapi/linux/if_bridge.h
@@ -530,10 +530,22 @@ struct br_mdb_entry {
 enum {
 	MDBA_SET_ENTRY_UNSPEC,
 	MDBA_SET_ENTRY,
+	MDBA_SET_ENTRY_ATTRS,
 	__MDBA_SET_ENTRY_MAX,
 };
 #define MDBA_SET_ENTRY_MAX (__MDBA_SET_ENTRY_MAX - 1)
 
+/* [MDBA_SET_ENTRY_ATTRS] = {
+ *	[MDBE_ATTR_xxx]
+ *	...
+ * }
+ */
+enum {
+	MDBE_ATTR_UNSPEC,
+	__MDBE_ATTR_MAX,
+};
+#define MDBE_ATTR_MAX (__MDBE_ATTR_MAX - 1)
+
 /* Embedded inside LINK_XSTATS_TYPE_BRIDGE */
 enum {
 	BRIDGE_XSTATS_UNSPEC,
diff --git a/net/bridge/br_mdb.c b/net/bridge/br_mdb.c
index a1ff0a372185..907df6d695ec 100644
--- a/net/bridge/br_mdb.c
+++ b/net/bridge/br_mdb.c
@@ -670,9 +670,12 @@ static bool is_valid_mdb_entry(struct br_mdb_entry *entry,
 	return true;
 }
 
+static const struct nla_policy br_mdbe_attrs_pol[MDBE_ATTR_MAX + 1] = {
+};
+
 static int br_mdb_parse(struct sk_buff *skb, struct nlmsghdr *nlh,
 			struct net_device **pdev, struct br_mdb_entry **pentry,
-			struct netlink_ext_ack *extack)
+			struct nlattr **mdb_attrs, struct netlink_ext_ack *extack)
 {
 	struct net *net = sock_net(skb->sk);
 	struct br_mdb_entry *entry;
@@ -719,6 +722,17 @@ static int br_mdb_parse(struct sk_buff *skb, struct nlmsghdr *nlh,
 		return -EINVAL;
 	*pentry = entry;
 
+	if (tb[MDBA_SET_ENTRY_ATTRS]) {
+		err = nla_parse_nested(mdb_attrs, MDBE_ATTR_MAX,
+				       tb[MDBA_SET_ENTRY_ATTRS],
+				       br_mdbe_attrs_pol, extack);
+		if (err)
+			return err;
+	} else {
+		memset(mdb_attrs, 0,
+		       sizeof(struct nlattr *) * (MDBE_ATTR_MAX + 1));
+	}
+
 	return 0;
 }
 
@@ -803,6 +817,7 @@ static int __br_mdb_add(struct net *net, struct net_bridge *br,
 static int br_mdb_add(struct sk_buff *skb, struct nlmsghdr *nlh,
 		      struct netlink_ext_ack *extack)
 {
+	struct nlattr *mdb_attrs[MDBE_ATTR_MAX + 1];
 	struct net *net = sock_net(skb->sk);
 	struct net_bridge_vlan_group *vg;
 	struct net_bridge_port *p = NULL;
@@ -812,7 +827,7 @@ static int br_mdb_add(struct sk_buff *skb, struct nlmsghdr *nlh,
 	struct net_bridge *br;
 	int err;
 
-	err = br_mdb_parse(skb, nlh, &dev, &entry, extack);
+	err = br_mdb_parse(skb, nlh, &dev, &entry, mdb_attrs, extack);
 	if (err < 0)
 		return err;
 
@@ -921,6 +936,7 @@ static int __br_mdb_del(struct net_bridge *br, struct br_mdb_entry *entry)
 static int br_mdb_del(struct sk_buff *skb, struct nlmsghdr *nlh,
 		      struct netlink_ext_ack *extack)
 {
+	struct nlattr *mdb_attrs[MDBE_ATTR_MAX + 1];
 	struct net *net = sock_net(skb->sk);
 	struct net_bridge_vlan_group *vg;
 	struct net_bridge_port *p = NULL;
@@ -930,7 +946,7 @@ static int br_mdb_del(struct sk_buff *skb, struct nlmsghdr *nlh,
 	struct net_bridge *br;
 	int err;
 
-	err = br_mdb_parse(skb, nlh, &dev, &entry, extack);
+	err = br_mdb_parse(skb, nlh, &dev, &entry, mdb_attrs, extack);
 	if (err < 0)
 		return err;
From patchwork Mon Sep 21 10:55:21 2020
X-Patchwork-Submitter: Nikolay Aleksandrov
X-Patchwork-Id: 260500
From: Nikolay Aleksandrov
To: netdev@vger.kernel.org
Cc: roopa@nvidia.com, davem@davemloft.net, bridge@lists.linux-foundation.org,
 Nikolay Aleksandrov
Subject: [PATCH net-next 11/16] net: bridge: mcast: add sg_port rhashtable
Date: Mon, 21 Sep 2020 13:55:21 +0300
Message-Id: <20200921105526.1056983-12-razor@blackwall.org>
In-Reply-To: <20200921105526.1056983-1-razor@blackwall.org>
References: <20200921105526.1056983-1-razor@blackwall.org>

To speed up S,G forward handling we need to be able to quickly find out if
a port is a member of an S,G group. To do that, add a global S,G port
rhashtable keyed by source address, group address, protocol, vid (all
br_ip fields) and the port pointer.
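In practice, the question the table answers is "is this port joined to this
S,G?". A hedged sketch of a caller, using the key struct and the
br_sg_port_find() helper introduced in the diff below (the wrapper itself is
hypothetical):

#include <linux/string.h>
#include "br_private.h"

static bool my_port_joined_sg(struct net_bridge *br,
			      struct net_bridge_port *port,
			      const struct br_ip *sg)
{
	struct net_bridge_port_group_sg_key sg_key;

	/* caller holds br->multicast_lock, as br_sg_port_find() asserts */
	memset(&sg_key, 0, sizeof(sg_key));	/* rhashtable compares raw key bytes */
	sg_key.port = port;
	sg_key.addr = *sg;	/* src (S), group, proto and vid all take part in the key */

	return br_sg_port_find(br, &sg_key) != NULL;
}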
Signed-off-by: Nikolay Aleksandrov --- net/bridge/br_forward.c | 2 +- net/bridge/br_mdb.c | 34 +++++----- net/bridge/br_multicast.c | 130 +++++++++++++++++++++++++------------- net/bridge/br_private.h | 10 ++- 4 files changed, 111 insertions(+), 65 deletions(-) diff --git a/net/bridge/br_forward.c b/net/bridge/br_forward.c index 7629b63f6f30..4d12999e4576 100644 --- a/net/bridge/br_forward.c +++ b/net/bridge/br_forward.c @@ -281,7 +281,7 @@ void br_multicast_flood(struct net_bridge_mdb_entry *mdst, while (p || rp) { struct net_bridge_port *port, *lport, *rport; - lport = p ? p->port : NULL; + lport = p ? p->key.port : NULL; rport = hlist_entry_safe(rp, struct net_bridge_port, rlist); if ((unsigned long)lport > (unsigned long)rport) { diff --git a/net/bridge/br_mdb.c b/net/bridge/br_mdb.c index b386a5e07698..4e3a5cefc626 100644 --- a/net/bridge/br_mdb.c +++ b/net/bridge/br_mdb.c @@ -101,7 +101,7 @@ static int __mdb_fill_srcs(struct sk_buff *skb, return -EMSGSIZE; hlist_for_each_entry_rcu(ent, &p->src_list, node, - lockdep_is_held(&p->port->br->multicast_lock)) { + lockdep_is_held(&p->key.port->br->multicast_lock)) { nest_ent = nla_nest_start(skb, MDBA_MDB_SRCLIST_ENTRY); if (!nest_ent) goto out_cancel_err; @@ -156,7 +156,7 @@ static int __mdb_fill_info(struct sk_buff *skb, memset(&e, 0, sizeof(e)); if (p) { - ifindex = p->port->dev->ifindex; + ifindex = p->key.port->dev->ifindex; mtimer = &p->timer; flags = p->flags; } else { @@ -263,7 +263,7 @@ static int br_mdb_fill_info(struct sk_buff *skb, struct netlink_callback *cb, for (pp = &mp->ports; (p = rcu_dereference(*pp)) != NULL; pp = &p->next) { - if (!p->port) + if (!p->key.port) continue; if (pidx < s_pidx) goto skip_pg; @@ -423,21 +423,21 @@ static size_t rtnl_mdb_nlmsg_size(struct net_bridge_port_group *pg) /* MDBA_MDB_EATTR_RTPROT */ nlmsg_size += nla_total_size(sizeof(u8)); - switch (pg->addr.proto) { + switch (pg->key.addr.proto) { case htons(ETH_P_IP): /* MDBA_MDB_EATTR_SOURCE */ - if (pg->addr.src.ip4) + if (pg->key.addr.src.ip4) nlmsg_size += nla_total_size(sizeof(__be32)); - if (pg->port->br->multicast_igmp_version == 2) + if (pg->key.port->br->multicast_igmp_version == 2) goto out; addr_size = sizeof(__be32); break; #if IS_ENABLED(CONFIG_IPV6) case htons(ETH_P_IPV6): /* MDBA_MDB_EATTR_SOURCE */ - if (!ipv6_addr_any(&pg->addr.src.ip6)) + if (!ipv6_addr_any(&pg->key.addr.src.ip6)) nlmsg_size += nla_total_size(sizeof(struct in6_addr)); - if (pg->port->br->multicast_mld_version == 1) + if (pg->key.port->br->multicast_mld_version == 1) goto out; addr_size = sizeof(struct in6_addr); break; @@ -486,7 +486,7 @@ static void br_mdb_complete(struct net_device *dev, int err, void *priv) goto out; for (pp = &mp->ports; (p = mlock_dereference(*pp, br)) != NULL; pp = &p->next) { - if (p->port != port) + if (p->key.port != port) continue; p->flags |= MDB_PG_FLAGS_OFFLOAD; } @@ -561,21 +561,21 @@ void br_mdb_notify(struct net_device *dev, else ipv6_eth_mc_map(&mp->addr.dst.ip6, mdb.addr); #endif - mdb.obj.orig_dev = pg->port->dev; + mdb.obj.orig_dev = pg->key.port->dev; switch (type) { case RTM_NEWMDB: complete_info = kmalloc(sizeof(*complete_info), GFP_ATOMIC); if (!complete_info) break; - complete_info->port = pg->port; + complete_info->port = pg->key.port; complete_info->ip = mp->addr; mdb.obj.complete_priv = complete_info; mdb.obj.complete = br_mdb_complete; - if (switchdev_port_obj_add(pg->port->dev, &mdb.obj, NULL)) + if (switchdev_port_obj_add(pg->key.port->dev, &mdb.obj, NULL)) kfree(complete_info); break; case RTM_DELMDB: - 
switchdev_port_obj_del(pg->port->dev, &mdb.obj); + switchdev_port_obj_del(pg->key.port->dev, &mdb.obj); break; } } else { @@ -869,11 +869,11 @@ static int br_mdb_add_group(struct net_bridge *br, struct net_bridge_port *port, for (pp = &mp->ports; (p = mlock_dereference(*pp, br)) != NULL; pp = &p->next) { - if (p->port == port) { + if (p->key.port == port) { NL_SET_ERR_MSG_MOD(extack, "Group is already joined by port"); return -EEXIST; } - if ((unsigned long)p->port < (unsigned long)port) + if ((unsigned long)p->key.port < (unsigned long)port) break; } @@ -1013,10 +1013,10 @@ static int __br_mdb_del(struct net_bridge *br, struct br_mdb_entry *entry, for (pp = &mp->ports; (p = mlock_dereference(*pp, br)) != NULL; pp = &p->next) { - if (!p->port || p->port->dev->ifindex != entry->ifindex) + if (!p->key.port || p->key.port->dev->ifindex != entry->ifindex) continue; - if (p->port->state == BR_STATE_DISABLED) + if (p->key.port->state == BR_STATE_DISABLED) goto unlock; br_multicast_del_pg(mp, p, pp); diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c index b6e7b0ece422..0fec9f38787c 100644 --- a/net/bridge/br_multicast.c +++ b/net/bridge/br_multicast.c @@ -41,6 +41,13 @@ static const struct rhashtable_params br_mdb_rht_params = { .automatic_shrinking = true, }; +static const struct rhashtable_params br_sg_port_rht_params = { + .head_offset = offsetof(struct net_bridge_port_group, rhnode), + .key_offset = offsetof(struct net_bridge_port_group, key), + .key_len = sizeof(struct net_bridge_port_group_sg_key), + .automatic_shrinking = true, +}; + static void br_multicast_start_querier(struct net_bridge *br, struct bridge_mcast_own_query *query); static void br_multicast_add_router(struct net_bridge *br, @@ -60,6 +67,16 @@ static void br_ip6_multicast_leave_group(struct net_bridge *br, __u16 vid, const unsigned char *src); #endif +static struct net_bridge_port_group * +br_sg_port_find(struct net_bridge *br, + struct net_bridge_port_group_sg_key *sg_p) +{ + lockdep_assert_held_once(&br->multicast_lock); + + return rhashtable_lookup_fast(&br->sg_port_tbl, sg_p, + br_sg_port_rht_params); +} + static struct net_bridge_mdb_entry *br_mdb_ip_get_rcu(struct net_bridge *br, struct br_ip *dst) { @@ -212,7 +229,7 @@ static void br_multicast_destroy_group_src(struct net_bridge_mcast_gc *gc) static void br_multicast_del_group_src(struct net_bridge_group_src *src) { - struct net_bridge *br = src->pg->port->br; + struct net_bridge *br = src->pg->key.port->br; hlist_del_init_rcu(&src->node); src->pg->src_ents--; @@ -237,10 +254,12 @@ void br_multicast_del_pg(struct net_bridge_mdb_entry *mp, struct net_bridge_port_group *pg, struct net_bridge_port_group __rcu **pp) { - struct net_bridge *br = pg->port->br; + struct net_bridge *br = pg->key.port->br; struct net_bridge_group_src *ent; struct hlist_node *tmp; + rhashtable_remove_fast(&br->sg_port_tbl, &pg->rhnode, + br_sg_port_rht_params); rcu_assign_pointer(*pp, pg->next); hlist_del_init(&pg->mglist); hlist_for_each_entry_safe(ent, tmp, &pg->src_list, node) @@ -260,7 +279,7 @@ static void br_multicast_find_del_pg(struct net_bridge *br, struct net_bridge_mdb_entry *mp; struct net_bridge_port_group *p; - mp = br_mdb_ip_get(br, &pg->addr); + mp = br_mdb_ip_get(br, &pg->key.addr); if (WARN_ON(!mp)) return; @@ -281,7 +300,7 @@ static void br_multicast_port_group_expired(struct timer_list *t) { struct net_bridge_port_group *pg = from_timer(pg, t, timer); struct net_bridge_group_src *src_ent; - struct net_bridge *br = pg->port->br; + struct net_bridge *br = 
pg->key.port->br; struct hlist_node *tmp; bool changed; @@ -302,7 +321,7 @@ static void br_multicast_port_group_expired(struct timer_list *t) if (hlist_empty(&pg->src_list)) { br_multicast_find_del_pg(br, pg); } else if (changed) { - struct net_bridge_mdb_entry *mp = br_mdb_ip_get(br, &pg->addr); + struct net_bridge_mdb_entry *mp = br_mdb_ip_get(br, &pg->key.addr); if (WARN_ON(!mp)) goto out; @@ -330,7 +349,7 @@ static struct sk_buff *br_ip4_multicast_alloc_query(struct net_bridge *br, u8 sflag, u8 *igmp_type, bool *need_rexmit) { - struct net_bridge_port *p = pg ? pg->port : NULL; + struct net_bridge_port *p = pg ? pg->key.port : NULL; struct net_bridge_group_src *ent; size_t pkt_size, igmp_hdr_size; unsigned long now = jiffies; @@ -476,7 +495,7 @@ static struct sk_buff *br_ip6_multicast_alloc_query(struct net_bridge *br, u8 sflag, u8 *igmp_type, bool *need_rexmit) { - struct net_bridge_port *p = pg ? pg->port : NULL; + struct net_bridge_port *p = pg ? pg->key.port : NULL; struct net_bridge_group_src *ent; size_t pkt_size, mld_hdr_size; unsigned long now = jiffies; @@ -778,7 +797,7 @@ br_multicast_new_group_src(struct net_bridge_port_group *pg, struct br_ip *src_i return NULL; grp_src->pg = pg; - grp_src->br = pg->port->br; + grp_src->br = pg->key.port->br; grp_src->addr = *src_ip; grp_src->mcast_gc.destroy = br_multicast_destroy_group_src; timer_setup(&grp_src->timer, br_multicast_group_src_expired, 0); @@ -804,13 +823,21 @@ struct net_bridge_port_group *br_multicast_new_port_group( if (unlikely(!p)) return NULL; - p->addr = *group; - p->port = port; + p->key.addr = *group; + p->key.port = port; p->flags = flags; p->filter_mode = filter_mode; p->rt_protocol = rt_protocol; p->mcast_gc.destroy = br_multicast_destroy_port_group; INIT_HLIST_HEAD(&p->src_list); + + if (!br_multicast_is_star_g(group) && + rhashtable_lookup_insert_fast(&port->br->sg_port_tbl, &p->rhnode, + br_sg_port_rht_params)) { + kfree(p); + return NULL; + } + rcu_assign_pointer(p->next, next); timer_setup(&p->timer, br_multicast_port_group_expired, 0); timer_setup(&p->rexmit_timer, br_multicast_port_group_rexmit, 0); @@ -828,7 +855,7 @@ static bool br_port_group_equal(struct net_bridge_port_group *p, struct net_bridge_port *port, const unsigned char *src) { - if (p->port != port) + if (p->key.port != port) return false; if (!(port->flags & BR_MULTICAST_TO_UNICAST)) @@ -890,7 +917,7 @@ static int br_multicast_add_group(struct net_bridge *br, pp = &p->next) { if (br_port_group_equal(p, port, src)) goto found; - if ((unsigned long)p->port < (unsigned long)port) + if ((unsigned long)p->key.port < (unsigned long)port) break; } @@ -1166,7 +1193,7 @@ static void br_multicast_port_group_rexmit(struct timer_list *t) { struct net_bridge_port_group *pg = from_timer(pg, t, rexmit_timer); struct bridge_mcast_other_query *other_query = NULL; - struct net_bridge *br = pg->port->br; + struct net_bridge *br = pg->key.port->br; bool need_rexmit = false; spin_lock(&br->multicast_lock); @@ -1175,7 +1202,7 @@ static void br_multicast_port_group_rexmit(struct timer_list *t) !br_opt_get(br, BROPT_MULTICAST_QUERIER)) goto out; - if (pg->addr.proto == htons(ETH_P_IP)) + if (pg->key.addr.proto == htons(ETH_P_IP)) other_query = &br->ip4_other_query; #if IS_ENABLED(CONFIG_IPV6) else @@ -1187,11 +1214,11 @@ static void br_multicast_port_group_rexmit(struct timer_list *t) if (pg->grp_query_rexmit_cnt) { pg->grp_query_rexmit_cnt--; - __br_multicast_send_query(br, pg->port, pg, &pg->addr, - &pg->addr, false, 1, NULL); + __br_multicast_send_query(br, 
pg->key.port, pg, &pg->key.addr, + &pg->key.addr, false, 1, NULL); } - __br_multicast_send_query(br, pg->port, pg, &pg->addr, - &pg->addr, true, 0, &need_rexmit); + __br_multicast_send_query(br, pg->key.port, pg, &pg->key.addr, + &pg->key.addr, true, 0, &need_rexmit); if (pg->grp_query_rexmit_cnt || need_rexmit) mod_timer(&pg->rexmit_timer, jiffies + @@ -1325,7 +1352,7 @@ static int __grp_src_delete_marked(struct net_bridge_port_group *pg) static void __grp_src_query_marked_and_rexmit(struct net_bridge_port_group *pg) { struct bridge_mcast_other_query *other_query = NULL; - struct net_bridge *br = pg->port->br; + struct net_bridge *br = pg->key.port->br; u32 lmqc = br->multicast_last_member_count; unsigned long lmqt, lmi, now = jiffies; struct net_bridge_group_src *ent; @@ -1334,7 +1361,7 @@ static void __grp_src_query_marked_and_rexmit(struct net_bridge_port_group *pg) !br_opt_get(br, BROPT_MULTICAST_ENABLED)) return; - if (pg->addr.proto == htons(ETH_P_IP)) + if (pg->key.addr.proto == htons(ETH_P_IP)) other_query = &br->ip4_other_query; #if IS_ENABLED(CONFIG_IPV6) else @@ -1359,8 +1386,8 @@ static void __grp_src_query_marked_and_rexmit(struct net_bridge_port_group *pg) !other_query || timer_pending(&other_query->timer)) return; - __br_multicast_send_query(br, pg->port, pg, &pg->addr, - &pg->addr, true, 1, NULL); + __br_multicast_send_query(br, pg->key.port, pg, &pg->key.addr, + &pg->key.addr, true, 1, NULL); lmi = now + br->multicast_last_member_interval; if (!timer_pending(&pg->rexmit_timer) || @@ -1371,14 +1398,14 @@ static void __grp_src_query_marked_and_rexmit(struct net_bridge_port_group *pg) static void __grp_send_query_and_rexmit(struct net_bridge_port_group *pg) { struct bridge_mcast_other_query *other_query = NULL; - struct net_bridge *br = pg->port->br; + struct net_bridge *br = pg->key.port->br; unsigned long now = jiffies, lmi; if (!netif_running(br->dev) || !br_opt_get(br, BROPT_MULTICAST_ENABLED)) return; - if (pg->addr.proto == htons(ETH_P_IP)) + if (pg->key.addr.proto == htons(ETH_P_IP)) other_query = &br->ip4_other_query; #if IS_ENABLED(CONFIG_IPV6) else @@ -1389,8 +1416,8 @@ static void __grp_send_query_and_rexmit(struct net_bridge_port_group *pg) other_query && !timer_pending(&other_query->timer)) { lmi = now + br->multicast_last_member_interval; pg->grp_query_rexmit_cnt = br->multicast_last_member_count - 1; - __br_multicast_send_query(br, pg->port, pg, &pg->addr, - &pg->addr, false, 0, NULL); + __br_multicast_send_query(br, pg->key.port, pg, &pg->key.addr, + &pg->key.addr, false, 0, NULL); if (!timer_pending(&pg->rexmit_timer) || time_after(pg->rexmit_timer.expires, lmi)) mod_timer(&pg->rexmit_timer, lmi); @@ -1410,7 +1437,7 @@ static void __grp_send_query_and_rexmit(struct net_bridge_port_group *pg) static bool br_multicast_isinc_allow(struct net_bridge_port_group *pg, void *srcs, u32 nsrcs, size_t src_size) { - struct net_bridge *br = pg->port->br; + struct net_bridge *br = pg->key.port->br; struct net_bridge_group_src *ent; unsigned long now = jiffies; bool changed = false; @@ -1418,7 +1445,7 @@ static bool br_multicast_isinc_allow(struct net_bridge_port_group *pg, u32 src_idx; memset(&src_ip, 0, sizeof(src_ip)); - src_ip.proto = pg->addr.proto; + src_ip.proto = pg->key.addr.proto; for (src_idx = 0; src_idx < nsrcs; src_idx++) { memcpy(&src_ip.src, srcs, src_size); ent = br_multicast_find_group_src(pg, &src_ip); @@ -1452,7 +1479,7 @@ static void __grp_src_isexc_incl(struct net_bridge_port_group *pg, ent->flags |= BR_SGRP_F_DELETE; memset(&src_ip, 0, 
sizeof(src_ip)); - src_ip.proto = pg->addr.proto; + src_ip.proto = pg->key.addr.proto; for (src_idx = 0; src_idx < nsrcs; src_idx++) { memcpy(&src_ip.src, srcs, src_size); ent = br_multicast_find_group_src(pg, &src_ip); @@ -1475,7 +1502,7 @@ static void __grp_src_isexc_incl(struct net_bridge_port_group *pg, static bool __grp_src_isexc_excl(struct net_bridge_port_group *pg, void *srcs, u32 nsrcs, size_t src_size) { - struct net_bridge *br = pg->port->br; + struct net_bridge *br = pg->key.port->br; struct net_bridge_group_src *ent; unsigned long now = jiffies; bool changed = false; @@ -1486,7 +1513,7 @@ static bool __grp_src_isexc_excl(struct net_bridge_port_group *pg, ent->flags |= BR_SGRP_F_DELETE; memset(&src_ip, 0, sizeof(src_ip)); - src_ip.proto = pg->addr.proto; + src_ip.proto = pg->key.addr.proto; for (src_idx = 0; src_idx < nsrcs; src_idx++) { memcpy(&src_ip.src, srcs, src_size); ent = br_multicast_find_group_src(pg, &src_ip); @@ -1512,7 +1539,7 @@ static bool __grp_src_isexc_excl(struct net_bridge_port_group *pg, static bool br_multicast_isexc(struct net_bridge_port_group *pg, void *srcs, u32 nsrcs, size_t src_size) { - struct net_bridge *br = pg->port->br; + struct net_bridge *br = pg->key.port->br; bool changed = false; switch (pg->filter_mode) { @@ -1538,7 +1565,7 @@ static bool br_multicast_isexc(struct net_bridge_port_group *pg, static bool __grp_src_toin_incl(struct net_bridge_port_group *pg, void *srcs, u32 nsrcs, size_t src_size) { - struct net_bridge *br = pg->port->br; + struct net_bridge *br = pg->key.port->br; u32 src_idx, to_send = pg->src_ents; struct net_bridge_group_src *ent; unsigned long now = jiffies; @@ -1549,7 +1576,7 @@ static bool __grp_src_toin_incl(struct net_bridge_port_group *pg, ent->flags |= BR_SGRP_F_SEND; memset(&src_ip, 0, sizeof(src_ip)); - src_ip.proto = pg->addr.proto; + src_ip.proto = pg->key.addr.proto; for (src_idx = 0; src_idx < nsrcs; src_idx++) { memcpy(&src_ip.src, srcs, src_size); ent = br_multicast_find_group_src(pg, &src_ip); @@ -1580,7 +1607,7 @@ static bool __grp_src_toin_incl(struct net_bridge_port_group *pg, static bool __grp_src_toin_excl(struct net_bridge_port_group *pg, void *srcs, u32 nsrcs, size_t src_size) { - struct net_bridge *br = pg->port->br; + struct net_bridge *br = pg->key.port->br; u32 src_idx, to_send = pg->src_ents; struct net_bridge_group_src *ent; unsigned long now = jiffies; @@ -1592,7 +1619,7 @@ static bool __grp_src_toin_excl(struct net_bridge_port_group *pg, ent->flags |= BR_SGRP_F_SEND; memset(&src_ip, 0, sizeof(src_ip)); - src_ip.proto = pg->addr.proto; + src_ip.proto = pg->key.addr.proto; for (src_idx = 0; src_idx < nsrcs; src_idx++) { memcpy(&src_ip.src, srcs, src_size); ent = br_multicast_find_group_src(pg, &src_ip); @@ -1653,7 +1680,7 @@ static void __grp_src_toex_incl(struct net_bridge_port_group *pg, ent->flags = (ent->flags & ~BR_SGRP_F_SEND) | BR_SGRP_F_DELETE; memset(&src_ip, 0, sizeof(src_ip)); - src_ip.proto = pg->addr.proto; + src_ip.proto = pg->key.addr.proto; for (src_idx = 0; src_idx < nsrcs; src_idx++) { memcpy(&src_ip.src, srcs, src_size); ent = br_multicast_find_group_src(pg, &src_ip); @@ -1691,7 +1718,7 @@ static bool __grp_src_toex_excl(struct net_bridge_port_group *pg, ent->flags = (ent->flags & ~BR_SGRP_F_SEND) | BR_SGRP_F_DELETE; memset(&src_ip, 0, sizeof(src_ip)); - src_ip.proto = pg->addr.proto; + src_ip.proto = pg->key.addr.proto; for (src_idx = 0; src_idx < nsrcs; src_idx++) { memcpy(&src_ip.src, srcs, src_size); ent = br_multicast_find_group_src(pg, &src_ip); @@ -1722,7 +1749,7 @@ 
static bool __grp_src_toex_excl(struct net_bridge_port_group *pg, static bool br_multicast_toex(struct net_bridge_port_group *pg, void *srcs, u32 nsrcs, size_t src_size) { - struct net_bridge *br = pg->port->br; + struct net_bridge *br = pg->key.port->br; bool changed = false; switch (pg->filter_mode) { @@ -1755,7 +1782,7 @@ static void __grp_src_block_incl(struct net_bridge_port_group *pg, ent->flags &= ~BR_SGRP_F_SEND; memset(&src_ip, 0, sizeof(src_ip)); - src_ip.proto = pg->addr.proto; + src_ip.proto = pg->key.addr.proto; for (src_idx = 0; src_idx < nsrcs; src_idx++) { memcpy(&src_ip.src, srcs, src_size); ent = br_multicast_find_group_src(pg, &src_ip); @@ -1770,7 +1797,7 @@ static void __grp_src_block_incl(struct net_bridge_port_group *pg, __grp_src_query_marked_and_rexmit(pg); if (pg->filter_mode == MCAST_INCLUDE && hlist_empty(&pg->src_list)) - br_multicast_find_del_pg(pg->port->br, pg); + br_multicast_find_del_pg(pg->key.port->br, pg); } /* State Msg type New state Actions @@ -1789,7 +1816,7 @@ static bool __grp_src_block_excl(struct net_bridge_port_group *pg, ent->flags &= ~BR_SGRP_F_SEND; memset(&src_ip, 0, sizeof(src_ip)); - src_ip.proto = pg->addr.proto; + src_ip.proto = pg->key.addr.proto; for (src_idx = 0; src_idx < nsrcs; src_idx++) { memcpy(&src_ip.src, srcs, src_size); ent = br_multicast_find_group_src(pg, &src_ip); @@ -2496,7 +2523,7 @@ br_multicast_leave_group(struct net_bridge *br, for (p = mlock_dereference(mp->ports, br); p != NULL; p = mlock_dereference(p->next, br)) { - if (p->port != port) + if (p->key.port != port) continue; if (!hlist_unhashed(&p->mglist) && @@ -3256,7 +3283,7 @@ int br_multicast_list_adjacent(struct net_device *dev, if (!entry) goto unlock; - entry->addr = group->addr; + entry->addr = group->key.addr; list_add(&entry->list, br_ip_list); count++; } @@ -3513,10 +3540,23 @@ void br_multicast_get_stats(const struct net_bridge *br, int br_mdb_hash_init(struct net_bridge *br) { - return rhashtable_init(&br->mdb_hash_tbl, &br_mdb_rht_params); + int err; + + err = rhashtable_init(&br->sg_port_tbl, &br_sg_port_rht_params); + if (err) + return err; + + err = rhashtable_init(&br->mdb_hash_tbl, &br_mdb_rht_params); + if (err) { + rhashtable_destroy(&br->sg_port_tbl); + return err; + } + + return 0; } void br_mdb_hash_fini(struct net_bridge *br) { + rhashtable_destroy(&br->sg_port_tbl); rhashtable_destroy(&br->mdb_hash_tbl); } diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h index dae7e3526fc7..55486b4956d3 100644 --- a/net/bridge/br_private.h +++ b/net/bridge/br_private.h @@ -238,10 +238,14 @@ struct net_bridge_group_src { struct rcu_head rcu; }; -struct net_bridge_port_group { +struct net_bridge_port_group_sg_key { struct net_bridge_port *port; - struct net_bridge_port_group __rcu *next; struct br_ip addr; +}; + +struct net_bridge_port_group { + struct net_bridge_port_group __rcu *next; + struct net_bridge_port_group_sg_key key; unsigned char eth_addr[ETH_ALEN] __aligned(2); unsigned char flags; unsigned char filter_mode; @@ -254,6 +258,7 @@ struct net_bridge_port_group { struct timer_list rexmit_timer; struct hlist_node mglist; + struct rhash_head rhnode; struct net_bridge_mcast_gc mcast_gc; struct rcu_head rcu; }; @@ -441,6 +446,7 @@ struct net_bridge { unsigned long multicast_startup_query_interval; struct rhashtable mdb_hash_tbl; + struct rhashtable sg_port_tbl; struct hlist_head mcast_gc_list; struct hlist_head mdb_list; From patchwork Mon Sep 21 10:55:24 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
X-Patchwork-Submitter: Nikolay Aleksandrov
X-Patchwork-Id: 260501
From: Nikolay Aleksandrov
To: netdev@vger.kernel.org
Cc: roopa@nvidia.com, davem@davemloft.net, bridge@lists.linux-foundation.org, Nikolay Aleksandrov
Subject: [PATCH net-next 14/16] net: bridge: mcast: add support for blocked port groups
Date: Mon, 21 Sep 2020 13:55:24 +0300
Message-Id: <20200921105526.1056983-15-razor@blackwall.org>
In-Reply-To: <20200921105526.1056983-1-razor@blackwall.org>
References: <20200921105526.1056983-1-razor@blackwall.org>

From: Nikolay Aleksandrov

When excluding S,G entries we need a way to block a particular S,G,port.
The new port group flag is managed based on the source's timer as per
RFCs 3376 and 3810. When a source expires and its port group is in
EXCLUDE mode, it will be blocked.

Signed-off-by: Nikolay Aleksandrov
---
 include/uapi/linux/if_bridge.h |  1 +
 net/bridge/br_mdb.c            |  2 ++
 net/bridge/br_multicast.c      | 49 +++++++++++++++++++++++++++++-----
 net/bridge/br_private.h        |  1 +
 4 files changed, 47 insertions(+), 6 deletions(-)

diff --git a/include/uapi/linux/if_bridge.h b/include/uapi/linux/if_bridge.h
index e4bd30a25f6b..4c687686aa8f 100644
--- a/include/uapi/linux/if_bridge.h
+++ b/include/uapi/linux/if_bridge.h
@@ -519,6 +519,7 @@ struct br_mdb_entry {
 #define MDB_FLAGS_OFFLOAD	(1 << 0)
 #define MDB_FLAGS_FAST_LEAVE	(1 << 1)
 #define MDB_FLAGS_STAR_EXCL	(1 << 2)
+#define MDB_FLAGS_BLOCKED	(1 << 3)
 	__u8 flags;
 	__u16 vid;
 	struct {
diff --git a/net/bridge/br_mdb.c b/net/bridge/br_mdb.c
index 28cd35a9cf37..e15bab19a012 100644
--- a/net/bridge/br_mdb.c
+++ b/net/bridge/br_mdb.c
@@ -64,6 +64,8 @@ static void __mdb_entry_fill_flags(struct br_mdb_entry *e, unsigned char flags)
 		e->flags |= MDB_FLAGS_FAST_LEAVE;
 	if (flags & MDB_PG_FLAGS_STAR_EXCL)
 		e->flags |= MDB_FLAGS_STAR_EXCL;
+	if (flags & MDB_PG_FLAGS_BLOCKED)
+		e->flags |= MDB_FLAGS_BLOCKED;
 }
 
 static void __mdb_entry_to_br_ip(struct br_mdb_entry *entry, struct br_ip *ip,
diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c
index f39bbd733722..11d224c01914 100644
--- a/net/bridge/br_multicast.c
+++ b/net/bridge/br_multicast.c
@@ -72,7 +72,8 @@ __br_multicast_add_group(struct net_bridge *br,
			 struct br_ip *group,
			 const unsigned char *src,
			 u8 filter_mode,
-			 bool igmpv2_mldv1);
+			 bool igmpv2_mldv1,
+			 bool blocked);
 static void br_multicast_find_del_pg(struct net_bridge *br,
				     struct net_bridge_port_group *pg);
@@ -211,7 +212,7 @@ static void __fwd_add_star_excl(struct net_bridge_port_group *pg,
 		return;
 
 	src_pg = __br_multicast_add_group(br, pg->key.port, sg_ip, pg->eth_addr,
-					  MCAST_INCLUDE, false);
+					  MCAST_INCLUDE, false, false);
 	if (IS_ERR_OR_NULL(src_pg) || src_pg->rt_protocol != RTPROT_KERNEL)
 		return;
@@ -343,7 +344,7 @@ void br_multicast_sg_add_exclude_ports(struct net_bridge_mdb_entry *star_mp,
 		src_pg = __br_multicast_add_group(br, pg->key.port,
 						  &sg->key.addr,
 						  sg->eth_addr,
-						  MCAST_INCLUDE, false);
+						  MCAST_INCLUDE, false, false);
 		if (IS_ERR_OR_NULL(src_pg) || src_pg->rt_protocol != RTPROT_KERNEL)
 			continue;
@@ -364,7 +365,8 @@ static void br_multicast_fwd_src_add(struct net_bridge_group_src *src)
 	sg_ip = src->pg->key.addr;
 	sg_ip.src = src->addr.src;
 	sg = __br_multicast_add_group(src->br, src->pg->key.port, &sg_ip,
-				      src->pg->eth_addr, MCAST_INCLUDE, false);
+				      src->pg->eth_addr, MCAST_INCLUDE, false,
+				      !timer_pending(&src->timer));
 	if (IS_ERR_OR_NULL(sg))
 		return;
 	src->flags |= BR_SGRP_F_INSTALLED;
@@ -415,9 +417,38 @@ static void br_multicast_fwd_src_remove(struct net_bridge_group_src *src)
 	src->flags &= ~BR_SGRP_F_INSTALLED;
 }
 
+/* install S,G and based on src's timer enable or disable forwarding */
 static void br_multicast_fwd_src_handle(struct net_bridge_group_src *src)
 {
+	struct net_bridge_port_group_sg_key sg_key;
+	struct net_bridge_port_group *sg;
+	u8 old_flags;
+
 	br_multicast_fwd_src_add(src);
+
+	memset(&sg_key, 0, sizeof(sg_key));
+	sg_key.addr = src->pg->key.addr;
+	sg_key.addr.src = src->addr.src;
+	sg_key.port = src->pg->key.port;
+
+	sg = br_sg_port_find(src->br, &sg_key);
+	if (!sg || (sg->flags & MDB_PG_FLAGS_PERMANENT))
+		return;
+
+	old_flags = sg->flags;
+	if (timer_pending(&src->timer))
+		sg->flags &= ~MDB_PG_FLAGS_BLOCKED;
+	else
+		sg->flags |= MDB_PG_FLAGS_BLOCKED;
+
+	if (old_flags != sg->flags) {
+		struct net_bridge_mdb_entry *sg_mp;
+
+		sg_mp = br_mdb_ip_get(src->br, &sg_key.addr);
+		if (!sg_mp)
+			return;
+		br_mdb_notify(src->br->dev, sg_mp, sg, RTM_NEWMDB);
+	}
 }
 
 static void br_multicast_destroy_mdb_entry(struct net_bridge_mcast_gc *gc)
@@ -995,7 +1026,10 @@ static void br_multicast_group_src_expired(struct timer_list *t)
 		if (!hlist_empty(&pg->src_list))
 			goto out;
 		br_multicast_find_del_pg(br, pg);
+	} else {
+		br_multicast_fwd_src_handle(src);
 	}
+
 out:
 	spin_unlock(&br->multicast_lock);
 }
@@ -1131,7 +1165,8 @@ __br_multicast_add_group(struct net_bridge *br,
			 struct br_ip *group,
			 const unsigned char *src,
			 u8 filter_mode,
-			 bool igmpv2_mldv1)
+			 bool igmpv2_mldv1,
+			 bool blocked)
 {
 	struct net_bridge_port_group __rcu **pp;
 	struct net_bridge_port_group *p = NULL;
@@ -1167,6 +1202,8 @@ __br_multicast_add_group(struct net_bridge *br,
 		goto out;
 	}
 	rcu_assign_pointer(*pp, p);
+	if (blocked)
+		p->flags |= MDB_PG_FLAGS_BLOCKED;
 	br_mdb_notify(br->dev, mp, p, RTM_NEWMDB);
 
 found:
@@ -1189,7 +1226,7 @@ static int br_multicast_add_group(struct net_bridge *br,
 	spin_lock(&br->multicast_lock);
 	pg = __br_multicast_add_group(br, port, group, src, filter_mode,
-				      igmpv2_mldv1);
+				      igmpv2_mldv1, false);
 	/* NULL is considered valid for host joined groups */
 	err = IS_ERR(pg) ? PTR_ERR(pg) : 0;
 	spin_unlock(&br->multicast_lock);
diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h
index 128d2d0417a0..345118e35c42 100644
--- a/net/bridge/br_private.h
+++ b/net/bridge/br_private.h
@@ -214,6 +214,7 @@ struct net_bridge_fdb_entry {
 #define MDB_PG_FLAGS_OFFLOAD	BIT(1)
 #define MDB_PG_FLAGS_FAST_LEAVE	BIT(2)
 #define MDB_PG_FLAGS_STAR_EXCL	BIT(3)
+#define MDB_PG_FLAGS_BLOCKED	BIT(4)
 
 #define PG_SRC_ENT_LIMIT	32
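The blocked state above is driven purely by the source timer: an S,G,port entry keeps forwarding while its source timer is running and is blocked once the timer expires with the port group in EXCLUDE mode. A condensed restatement of the rule applied in br_multicast_fwd_src_handle() and br_multicast_fwd_src_add(); the helper name here is made up for illustration:

/* illustration only: the MDB_PG_FLAGS_BLOCKED state an S,G,port entry
 * should end up with, given its source entry's timer
 */
static bool br_sg_should_be_blocked(const struct net_bridge_group_src *src)
{
	/* a running source timer means the source was recently reported,
	 * so the S,G,port keeps forwarding; an expired timer under
	 * EXCLUDE mode means traffic from this source must be blocked
	 */
	return !timer_pending(&src->timer);
}

User space sees the result as the new MDB_FLAGS_BLOCKED bit in RTM_NEWMDB notifications; displaying it via "bridge -d mdb show" presumably needs a matching iproute2 update.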
From patchwork Mon Sep 21 10:55:25 2020
X-Patchwork-Submitter: Nikolay Aleksandrov
X-Patchwork-Id: 260502
From: Nikolay Aleksandrov
To: netdev@vger.kernel.org
Cc: roopa@nvidia.com, davem@davemloft.net, bridge@lists.linux-foundation.org, Nikolay Aleksandrov
Subject: [PATCH net-next 15/16] net: bridge: mcast: handle host state
Date: Mon, 21 Sep 2020 13:55:25 +0300
Message-Id: <20200921105526.1056983-16-razor@blackwall.org>
In-Reply-To: <20200921105526.1056983-1-razor@blackwall.org>
References: <20200921105526.1056983-1-razor@blackwall.org>

From: Nikolay Aleksandrov

Since host joins are considered as EXCLUDE {} joins, we need to reflect
that in all of the *,G ports' S,G entries. Since the S,Gs can have
host_joined == true only when it was set automatically, we can safely set
it to false when removing all automatically added entries upon S,G delete.

Signed-off-by: Nikolay Aleksandrov
---
 net/bridge/br_multicast.c | 58 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 58 insertions(+)

diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c
index 11d224c01914..66eb62ded192 100644
--- a/net/bridge/br_multicast.c
+++ b/net/bridge/br_multicast.c
@@ -286,6 +286,53 @@ void br_multicast_star_g_handle_mode(struct net_bridge_port_group *pg,
 	}
 }
 
+/* called when adding a new S,G with host_joined == false by default */
+static void br_multicast_sg_host_state(struct net_bridge_mdb_entry *star_mp,
+				       struct net_bridge_port_group *sg)
+{
+	struct net_bridge_mdb_entry *sg_mp;
+
+	if (WARN_ON(!br_multicast_is_star_g(&star_mp->addr)))
+		return;
+	if (!star_mp->host_joined)
+		return;
+
+	sg_mp = br_mdb_ip_get(star_mp->br, &sg->key.addr);
+	if (!sg_mp)
+		return;
+	sg_mp->host_joined = true;
+}
+
+/* set the host_joined state of all of *,G's S,G entries */
+static void br_multicast_star_g_host_state(struct net_bridge_mdb_entry *star_mp)
+{
+	struct net_bridge *br = star_mp->br;
+	struct net_bridge_mdb_entry *sg_mp;
+	struct net_bridge_port_group *pg;
+	struct br_ip sg_ip;
+
+	if (WARN_ON(!br_multicast_is_star_g(&star_mp->addr)))
+		return;
+
+	memset(&sg_ip, 0, sizeof(sg_ip));
+	sg_ip = star_mp->addr;
+	for (pg = mlock_dereference(star_mp->ports, br);
+	     pg;
+	     pg = mlock_dereference(pg->next, br)) {
+		struct net_bridge_group_src *src_ent;
+
+		hlist_for_each_entry(src_ent, &pg->src_list, node) {
+			if (!(src_ent->flags & BR_SGRP_F_INSTALLED))
+				continue;
+			sg_ip.src = src_ent->addr.src;
+			sg_mp = br_mdb_ip_get(br, &sg_ip);
+			if (!sg_mp)
+				continue;
+			sg_mp->host_joined = star_mp->host_joined;
+		}
+	}
+}
+
 static void br_multicast_sg_del_exclude_ports(struct net_bridge_mdb_entry *sgmp)
 {
 	struct net_bridge_port_group __rcu **pp;
@@ -305,6 +352,12 @@ static void br_multicast_sg_del_exclude_ports(struct net_bridge_mdb_entry *sgmp)
			       MDB_PG_FLAGS_PERMANENT)))
 		return;
 
+	/* currently the host can only have joined the *,G which means
+	 * we treat it as EXCLUDE {}, so for an S,G it's considered a
+	 * STAR_EXCLUDE entry and we can safely leave it
+	 */
+	sgmp->host_joined = false;
+
 	for (pp = &sgmp->ports;
 	     (p = mlock_dereference(*pp, sgmp->br)) != NULL;) {
 		if (!(p->flags & MDB_PG_FLAGS_PERMANENT))
@@ -326,6 +379,7 @@ void br_multicast_sg_add_exclude_ports(struct net_bridge_mdb_entry *star_mp,
 	if (WARN_ON(!br_multicast_is_star_g(&star_mp->addr)))
 		return;
 
+	br_multicast_sg_host_state(star_mp, sg);
 	memset(&sg_key, 0, sizeof(sg_key));
 	sg_key.addr = sg->key.addr;
 	/* we need to add all exclude ports to the S,G */
@@ -1143,6 +1197,8 @@ void br_multicast_host_join(struct net_bridge_mdb_entry *mp, bool notify)
 {
 	if (!mp->host_joined) {
 		mp->host_joined = true;
+		if (br_multicast_is_star_g(&mp->addr))
+			br_multicast_star_g_host_state(mp);
 		if (notify)
 			br_mdb_notify(mp->br->dev, mp, NULL, RTM_NEWMDB);
 	}
@@ -1155,6 +1211,8 @@ void br_multicast_host_leave(struct net_bridge_mdb_entry *mp, bool notify)
 		return;
 
 	mp->host_joined = false;
+	if (br_multicast_is_star_g(&mp->addr))
+		br_multicast_star_g_host_state(mp);
 	if (notify)
 		br_mdb_notify(mp->br->dev, mp, NULL, RTM_DELMDB);
 }
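In other words, a host join on the bridge device itself is treated as an EXCLUDE {} membership of the *,G, so every kernel-installed S,G derived from that *,G must report host_joined as well. A minimal sketch of the invariant the two new helpers maintain; the function name is hypothetical and the checks are simplified:

/* illustration only: an automatically installed S,G under a host-joined
 * *,G must itself report host_joined; user-added (permanent) entries are
 * not touched by this logic
 */
static void br_check_host_state_sketch(struct net_bridge *br,
				       struct net_bridge_mdb_entry *star_mp,
				       struct br_ip *sg_addr)
{
	struct net_bridge_mdb_entry *sg_mp = br_mdb_ip_get(br, sg_addr);

	if (sg_mp && star_mp->host_joined)
		WARN_ON(!sg_mp->host_joined);
}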
From patchwork Mon Sep 21 10:55:26 2020
X-Patchwork-Submitter: Nikolay Aleksandrov
X-Patchwork-Id: 260503
From: Nikolay Aleksandrov
To: netdev@vger.kernel.org
Cc: roopa@nvidia.com, davem@davemloft.net, bridge@lists.linux-foundation.org, Nikolay Aleksandrov
Subject: [PATCH net-next 16/16] net: bridge: mcast: when forwarding handle filter mode and blocked flag
Date: Mon, 21 Sep 2020 13:55:26 +0300
Message-Id: <20200921105526.1056983-17-razor@blackwall.org>
In-Reply-To: <20200921105526.1056983-1-razor@blackwall.org>
References: <20200921105526.1056983-1-razor@blackwall.org>

From: Nikolay Aleksandrov

We need to avoid forwarding to ports in MCAST_INCLUDE filter mode when the
mdst entry is a *,G or when the port has the blocked flag.

Signed-off-by: Nikolay Aleksandrov
---
 net/bridge/br_forward.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/net/bridge/br_forward.c b/net/bridge/br_forward.c
index 4d12999e4576..e28ffadd1371 100644
--- a/net/bridge/br_forward.c
+++ b/net/bridge/br_forward.c
@@ -274,10 +274,19 @@ void br_multicast_flood(struct net_bridge_mdb_entry *mdst,
 	struct net_bridge *br = netdev_priv(dev);
 	struct net_bridge_port *prev = NULL;
 	struct net_bridge_port_group *p;
+	bool allow_mode_include = true;
 	struct hlist_node *rp;
 
 	rp = rcu_dereference(hlist_first_rcu(&br->router_list));
-	p = mdst ? rcu_dereference(mdst->ports) : NULL;
+	if (mdst) {
+		p = rcu_dereference(mdst->ports);
+		if (br_multicast_should_handle_mode(br, mdst->addr.proto) &&
+		    br_multicast_is_star_g(&mdst->addr))
+			allow_mode_include = false;
+	} else {
+		p = NULL;
+	}
+
 	while (p || rp) {
 		struct net_bridge_port *port, *lport, *rport;
 
@@ -292,6 +301,10 @@ void br_multicast_flood(struct net_bridge_mdb_entry *mdst,
					   local_orig);
 			goto delivered;
 		}
+		if ((!allow_mode_include &&
+		     p->filter_mode == MCAST_INCLUDE) ||
+		    (p->flags & MDB_PG_FLAGS_BLOCKED))
+			goto delivered;
 	} else {
 		port = rport;
 	}