From patchwork Thu May 21 21:10:24 2020
X-Patchwork-Submitter: Vladimir Oltean
X-Patchwork-Id: 218760
From: Vladimir Oltean
To: andrew@lunn.ch, f.fainelli@gmail.com, vivien.didelot@gmail.com, davem@davemloft.net
Cc: jiri@resnulli.us, idosch@idosch.org, kuba@kernel.org, ivecera@redhat.com, netdev@vger.kernel.org, horatiu.vultur@microchip.com, allan.nielsen@microchip.com, nikolay@cumulusnetworks.com, roopa@cumulusnetworks.com
Subject: [PATCH RFC net-next 01/13] net: core: dev_addr_lists: add VID to device address
Date: Fri, 22 May 2020 00:10:24 +0300
Message-Id:
<20200521211036.668624-2-olteanv@gmail.com>
In-Reply-To: <20200521211036.668624-1-olteanv@gmail.com>
References: <20200521211036.668624-1-olteanv@gmail.com>

From: Ivan Khoronzhuk

Although this is primarily intended for Ethernet VLANs, any addressing
scheme with spare room for a VID can reuse it, so the VID is treated as
a generic virtual ID extension rather than strictly an Ethernet VLAN
VID. The overall change can be called individual virtual device
filtering (IVDF).

This patch appends a VID tag at the end of each address. The reserved
address size is 32 bytes, and Ethernet addresses are 6 bytes long, so
the tag can be added without growing the address size: each address has
32 - 6 = 26 spare bytes to hold additional information, such as the VID
for virtual device addresses.

Therefore, when addresses are synced to the address list of a parent
device, that list can contain separate addresses for each virtual
device. This makes it possible to keep separate address tables for
virtual devices, if present, and a virtual device can be placed
anywhere in the device tree, as its addresses are propagated down to
the real device through the *_sync()/ndo_set_rx_mode() APIs. It also
simplifies handling of VID-tagged addresses on real devices that
support IVDF.

If a parent device does not want virtual addresses in its address
space, it sets vid_len to 0, which "shrinks" its address space back to
the pre-patch behaviour. For now vid_len is 0 for every device. This
allows devices with and without IVDF to be part of the same bond
device, for instance.

A real device supporting IVDF can retrieve the VID tag from an address
and program it for the given virtual device only. By default, VID 0 is
used for real devices to distinguish their addresses from virtual ones.
See the following patches for how this is used.
Note that adding the vid_len member to struct net_device is not
intended to change the structure layout. Here is the output of pahole:

For ARM 32, one hole less:
---------------------------
before (https://pastebin.com/DG1SVpFR):
	/* size: 1344, cachelines: 21, members: 123 */
	/* sum members: 1304, holes: 5, sum holes: 28 */
	/* padding: 12 */
	/* bit_padding: 31 bits */

after (https://pastebin.com/ZUMhxGkA):
	/* size: 1344, cachelines: 21, members: 124 */
	/* sum members: 1305, holes: 5, sum holes: 27 */
	/* padding: 12 */
	/* bit_padding: 31 bits */

For ARM 64, one hole less:
---------------------------
before (https://pastebin.com/5CdTQWkc):
	/* size: 2048, cachelines: 32, members: 120 */
	/* sum members: 1972, holes: 7, sum holes: 48 */
	/* padding: 28 */
	/* bit_padding: 31 bits */

after (https://pastebin.com/32ktb1iV):
	/* size: 2048, cachelines: 32, members: 121 */
	/* sum members: 1973, holes: 7, sum holes: 47 */
	/* padding: 28 */
	/* bit_padding: 31 bits */

Signed-off-by: Ivan Khoronzhuk
Signed-off-by: Vladimir Oltean
---
 include/linux/netdevice.h |   4 ++
 net/core/dev_addr_lists.c | 127 ++++++++++++++++++++++++++++++++------
 2 files changed, 111 insertions(+), 20 deletions(-)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index a18f8fdf4260..2d11b93f3af4 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -1698,6 +1698,7 @@ enum netdev_priv_flags {
  *	@perm_addr:		Permanent hw address
  *	@addr_assign_type:	Hw address assignment type
  *	@addr_len:		Hardware address length
+ *	@vid_len:		Virtual ID length, set in case of IVDF
  *	@upper_level:		Maximum depth level of upper devices.
  *	@lower_level:		Maximum depth level of lower devices.
 *	@neigh_priv_len:	Used in neigh_alloc()
@@ -1950,6 +1951,7 @@ struct net_device {
 	unsigned char		perm_addr[MAX_ADDR_LEN];
 	unsigned char		addr_assign_type;
 	unsigned char		addr_len;
+	unsigned char		vid_len;
 	unsigned char		upper_level;
 	unsigned char		lower_level;
 	unsigned short		neigh_priv_len;
@@ -4316,8 +4318,10 @@ int dev_addr_init(struct net_device *dev);

 /* Functions used for unicast addresses handling */
 int dev_uc_add(struct net_device *dev, const unsigned char *addr);
+int dev_vid_uc_add(struct net_device *dev, const unsigned char *addr);
 int dev_uc_add_excl(struct net_device *dev, const unsigned char *addr);
 int dev_uc_del(struct net_device *dev, const unsigned char *addr);
+int dev_vid_uc_del(struct net_device *dev, const unsigned char *addr);
 int dev_uc_sync(struct net_device *to, struct net_device *from);
 int dev_uc_sync_multiple(struct net_device *to, struct net_device *from);
 void dev_uc_unsync(struct net_device *to, struct net_device *from);

diff --git a/net/core/dev_addr_lists.c b/net/core/dev_addr_lists.c
index 2f949b5a1eb9..90eaa99b19e5 100644
--- a/net/core/dev_addr_lists.c
+++ b/net/core/dev_addr_lists.c
@@ -541,6 +541,35 @@ int dev_addr_del(struct net_device *dev, const unsigned char *addr,
 }
 EXPORT_SYMBOL(dev_addr_del);

+static int get_addr_len(struct net_device *dev)
+{
+	return dev->addr_len + dev->vid_len;
+}
+
+/**
+ * set_vid_addr - Copy a device address into a new address with IVDF.
+ * @dev: device
+ * @addr: address to copy
+ * @naddr: location of new address
+ *
+ * Transform a regular device address into one with IVDF (Individual
+ * Virtual Device Filtering). If the device does not support IVDF, the
+ * original device address length is returned and no copying is done.
+ * Otherwise, the length of the IVDF address is returned.
+ * The VID is set to zero which denotes the address of a real device.
+ */
+static int set_vid_addr(struct net_device *dev, const unsigned char *addr,
+			unsigned char *naddr)
+{
+	if (!dev->vid_len)
+		return dev->addr_len;
+
+	memcpy(naddr, addr, dev->addr_len);
+	memset(naddr + dev->addr_len, 0, dev->vid_len);
+
+	return get_addr_len(dev);
+}
+
 /*
  * Unicast list handling functions
  */
@@ -552,18 +581,22 @@ EXPORT_SYMBOL(dev_addr_del);
  */
 int dev_uc_add_excl(struct net_device *dev, const unsigned char *addr)
 {
+	unsigned char naddr[MAX_ADDR_LEN];
 	struct netdev_hw_addr *ha;
-	int err;
+	int addr_len, err;
+
+	addr_len = set_vid_addr(dev, addr, naddr);
+	addr = dev->vid_len ? naddr : addr;

 	netif_addr_lock_bh(dev);
 	list_for_each_entry(ha, &dev->uc.list, list) {
-		if (!memcmp(ha->addr, addr, dev->addr_len) &&
+		if (!memcmp(ha->addr, addr, addr_len) &&
 		    ha->type == NETDEV_HW_ADDR_T_UNICAST) {
 			err = -EEXIST;
 			goto out;
 		}
 	}
-	err = __hw_addr_create_ex(&dev->uc, addr, dev->addr_len,
+	err = __hw_addr_create_ex(&dev->uc, addr, addr_len,
 				  NETDEV_HW_ADDR_T_UNICAST, true, false);
 	if (!err)
 		__dev_set_rx_mode(dev);
@@ -574,47 +607,89 @@ int dev_uc_add_excl(struct net_device *dev, const unsigned char *addr)
 EXPORT_SYMBOL(dev_uc_add_excl);

 /**
- * dev_uc_add - Add a secondary unicast address
+ * dev_vid_uc_add - Add a secondary unicast address with tag
  * @dev: device
- * @addr: address to add
+ * @addr: address to add, includes vid tag already
  *
  * Add a secondary unicast address to the device or increase
  * the reference count if it already exists.
  */
-int dev_uc_add(struct net_device *dev, const unsigned char *addr)
+int dev_vid_uc_add(struct net_device *dev, const unsigned char *addr)
 {
 	int err;

 	netif_addr_lock_bh(dev);
-	err = __hw_addr_add(&dev->uc, addr, dev->addr_len,
+	err = __hw_addr_add(&dev->uc, addr, get_addr_len(dev),
 			    NETDEV_HW_ADDR_T_UNICAST);
 	if (!err)
 		__dev_set_rx_mode(dev);
 	netif_addr_unlock_bh(dev);
 	return err;
 }
+EXPORT_SYMBOL(dev_vid_uc_add);
+
+/**
+ * dev_uc_add - Add a secondary unicast address
+ * @dev: device
+ * @addr: address to add
+ *
+ * Add a secondary unicast address to the device or increase
+ * the reference count if it already exists.
+ */
+int dev_uc_add(struct net_device *dev, const unsigned char *addr)
+{
+	unsigned char naddr[MAX_ADDR_LEN];
+	int err;
+
+	set_vid_addr(dev, addr, naddr);
+	addr = dev->vid_len ? naddr : addr;
+
+	err = dev_vid_uc_add(dev, addr);
+	return err;
+}
 EXPORT_SYMBOL(dev_uc_add);

 /**
  * dev_uc_del - Release secondary unicast address.
  * @dev: device
- * @addr: address to delete
+ * @addr: address to delete, includes vid tag already
  *
  * Release reference to a secondary unicast address and remove it
  * from the device if the reference count drops to zero.
  */
-int dev_uc_del(struct net_device *dev, const unsigned char *addr)
+int dev_vid_uc_del(struct net_device *dev, const unsigned char *addr)
 {
 	int err;

 	netif_addr_lock_bh(dev);
-	err = __hw_addr_del(&dev->uc, addr, dev->addr_len,
+	err = __hw_addr_del(&dev->uc, addr, get_addr_len(dev),
 			    NETDEV_HW_ADDR_T_UNICAST);
 	if (!err)
 		__dev_set_rx_mode(dev);
 	netif_addr_unlock_bh(dev);
 	return err;
 }
+EXPORT_SYMBOL(dev_vid_uc_del);
+
+/**
+ * dev_uc_del - Release secondary unicast address.
+ * @dev: device
+ * @addr: address to delete
+ *
+ * Release reference to a secondary unicast address and remove it
+ * from the device if the reference count drops to zero.
+ */
+int dev_uc_del(struct net_device *dev, const unsigned char *addr)
+{
+	unsigned char naddr[MAX_ADDR_LEN];
+	int err;
+
+	set_vid_addr(dev, addr, naddr);
+	addr = dev->vid_len ? naddr : addr;
+
+	err = dev_vid_uc_del(dev, addr);
+	return err;
+}
 EXPORT_SYMBOL(dev_uc_del);

 /**
@@ -638,7 +713,7 @@ int dev_uc_sync(struct net_device *to, struct net_device *from)
 		return -EINVAL;

 	netif_addr_lock(to);
-	err = __hw_addr_sync(&to->uc, &from->uc, to->addr_len);
+	err = __hw_addr_sync(&to->uc, &from->uc, get_addr_len(to));
 	if (!err)
 		__dev_set_rx_mode(to);
 	netif_addr_unlock(to);
@@ -668,7 +743,7 @@ int dev_uc_sync_multiple(struct net_device *to, struct net_device *from)
 		return -EINVAL;

 	netif_addr_lock(to);
-	err = __hw_addr_sync_multiple(&to->uc, &from->uc, to->addr_len);
+	err = __hw_addr_sync_multiple(&to->uc, &from->uc, get_addr_len(to));
 	if (!err)
 		__dev_set_rx_mode(to);
 	netif_addr_unlock(to);
@@ -692,7 +767,7 @@ void dev_uc_unsync(struct net_device *to, struct net_device *from)

 	netif_addr_lock_bh(from);
 	netif_addr_lock(to);
-	__hw_addr_unsync(&to->uc, &from->uc, to->addr_len);
+	__hw_addr_unsync(&to->uc, &from->uc, get_addr_len(to));
 	__dev_set_rx_mode(to);
 	netif_addr_unlock(to);
 	netif_addr_unlock_bh(from);
@@ -736,18 +811,22 @@ EXPORT_SYMBOL(dev_uc_init);
  */
 int dev_mc_add_excl(struct net_device *dev, const unsigned char *addr)
 {
+	unsigned char naddr[MAX_ADDR_LEN];
 	struct netdev_hw_addr *ha;
-	int err;
+	int addr_len, err;
+
+	addr_len = set_vid_addr(dev, addr, naddr);
+	addr = dev->vid_len ?
naddr : addr;

 	netif_addr_lock_bh(dev);
 	list_for_each_entry(ha, &dev->mc.list, list) {
-		if (!memcmp(ha->addr, addr, dev->addr_len) &&
+		if (!memcmp(ha->addr, addr, addr_len) &&
 		    ha->type == NETDEV_HW_ADDR_T_MULTICAST) {
 			err = -EEXIST;
 			goto out;
 		}
 	}
-	err = __hw_addr_create_ex(&dev->mc, addr, dev->addr_len,
+	err = __hw_addr_create_ex(&dev->mc, addr, addr_len,
 				  NETDEV_HW_ADDR_T_MULTICAST, true, false);
 	if (!err)
 		__dev_set_rx_mode(dev);
@@ -760,10 +839,14 @@ EXPORT_SYMBOL(dev_mc_add_excl);
 static int __dev_mc_add(struct net_device *dev, const unsigned char *addr,
 			bool global)
 {
-	int err;
+	unsigned char naddr[MAX_ADDR_LEN];
+	int addr_len, err;
+
+	addr_len = set_vid_addr(dev, addr, naddr);
+	addr = dev->vid_len ? naddr : addr;

 	netif_addr_lock_bh(dev);
-	err = __hw_addr_add_ex(&dev->mc, addr, dev->addr_len,
+	err = __hw_addr_add_ex(&dev->mc, addr, addr_len,
 			       NETDEV_HW_ADDR_T_MULTICAST, global, false, 0);
 	if (!err)
 		__dev_set_rx_mode(dev);
@@ -800,10 +883,14 @@ EXPORT_SYMBOL(dev_mc_add_global);
 static int __dev_mc_del(struct net_device *dev, const unsigned char *addr,
 			bool global)
 {
-	int err;
+	unsigned char naddr[MAX_ADDR_LEN];
+	int addr_len, err;
+
+	addr_len = set_vid_addr(dev, addr, naddr);
+	addr = dev->vid_len ?
naddr : addr;

 	netif_addr_lock_bh(dev);
-	err = __hw_addr_del_ex(&dev->mc, addr, dev->addr_len,
+	err = __hw_addr_del_ex(&dev->mc, addr, addr_len,
 			       NETDEV_HW_ADDR_T_MULTICAST, global, false);
 	if (!err)
 		__dev_set_rx_mode(dev);

From patchwork Thu May 21 21:10:26 2020
X-Patchwork-Submitter: Vladimir Oltean
X-Patchwork-Id: 218759
From: Vladimir Oltean
To: andrew@lunn.ch, f.fainelli@gmail.com, vivien.didelot@gmail.com, davem@davemloft.net
Cc: jiri@resnulli.us, idosch@idosch.org, kuba@kernel.org, ivecera@redhat.com, netdev@vger.kernel.org, horatiu.vultur@microchip.com, allan.nielsen@microchip.com,
nikolay@cumulusnetworks.com, roopa@cumulusnetworks.com
Subject: [PATCH RFC net-next 03/13] net: 8021q: vlan_dev: add vid tag for vlan device own address
Date: Fri, 22 May 2020 00:10:26 +0300
Message-Id: <20200521211036.668624-4-olteanv@gmail.com>
In-Reply-To: <20200521211036.668624-1-olteanv@gmail.com>
References: <20200521211036.668624-1-olteanv@gmail.com>

From: Ivan Khoronzhuk

The vlan device address is held separately from the uc/mc lists and is
handled differently. It is bound to the real device address only when
it is inherited at init time; in all other cases it is a separate
address entry in the uc list.

With a VID set, an address that has been set manually is no longer
treated as inherited from the real device, but it remains part of the
uc list in any case, with the appropriate VID tag set. If vid_len of
the real device is 0, the behaviour is the same as before this change,
so there should be no impact on systems without individual virtual
device filtering (IVDF) enabled.

This allows the vlan device address to be controlled and synced, and
ingress of packets for a specific vlan to be disabled while the vlan
interface is down.
Signed-off-by: Ivan Khoronzhuk
Signed-off-by: Vladimir Oltean
---
 net/8021q/vlan.c     |  3 ++
 net/8021q/vlan_dev.c | 75 +++++++++++++++++++++++++++++++++-----------
 2 files changed, 60 insertions(+), 18 deletions(-)

diff --git a/net/8021q/vlan.c b/net/8021q/vlan.c
index d4bcfd8f95bf..4cc341c191a4 100644
--- a/net/8021q/vlan.c
+++ b/net/8021q/vlan.c
@@ -298,6 +298,9 @@ static void vlan_sync_address(struct net_device *dev,
 	if (vlan_dev_inherit_address(vlandev, dev))
 		goto out;

+	if (dev->vid_len)
+		goto out;
+
 	/* vlan address was different from the old address and is equal to
 	 * the new address */
 	if (!ether_addr_equal(vlandev->dev_addr, vlan->real_dev_addr) &&

diff --git a/net/8021q/vlan_dev.c b/net/8021q/vlan_dev.c
index c2c3e5ae535c..f3f570a12ffd 100644
--- a/net/8021q/vlan_dev.c
+++ b/net/8021q/vlan_dev.c
@@ -252,12 +252,61 @@ static void vlan_dev_set_addr_vid(struct net_device *vlan_dev, u8 *addr)
 	addr[vlan_dev->addr_len + 1] = (vid >> 8) & 0xf;
 }

+static int vlan_dev_add_addr(struct net_device *dev, u8 *addr)
+{
+	struct net_device *real_dev = vlan_dev_real_dev(dev);
+	unsigned char naddr[ETH_ALEN + NET_8021Q_VID_TSIZE];
+
+	if (real_dev->vid_len) {
+		memcpy(naddr, addr, dev->addr_len);
+		vlan_dev_set_addr_vid(dev, naddr);
+		return dev_vid_uc_add(real_dev, naddr);
+	}
+
+	if (ether_addr_equal(addr, real_dev->dev_addr))
+		return 0;
+
+	return dev_uc_add(real_dev, addr);
+}
+
+static void vlan_dev_del_addr(struct net_device *dev, u8 *addr)
+{
+	struct net_device *real_dev = vlan_dev_real_dev(dev);
+	unsigned char naddr[ETH_ALEN + NET_8021Q_VID_TSIZE];
+
+	if (real_dev->vid_len) {
+		memcpy(naddr, addr, dev->addr_len);
+		vlan_dev_set_addr_vid(dev, naddr);
+		dev_vid_uc_del(real_dev, naddr);
+		return;
+	}
+
+	if (!ether_addr_equal(dev->dev_addr, real_dev->dev_addr))
+		dev_uc_del(real_dev, addr);
+}
+
+static int vlan_dev_subs_addr(struct net_device *dev, u8 *addr)
+{
+	int err;
+
+	err = vlan_dev_add_addr(dev, addr);
+	if (err < 0)
+		return err;
+
+	vlan_dev_del_addr(dev,
dev->dev_addr);
+	return err;
+}
+
 bool vlan_dev_inherit_address(struct net_device *dev,
 			      struct net_device *real_dev)
 {
 	if (dev->addr_assign_type != NET_ADDR_STOLEN)
 		return false;

+	if (real_dev->vid_len)
+		if (vlan_dev_subs_addr(dev, real_dev->dev_addr))
+			return false;
+
 	ether_addr_copy(dev->dev_addr, real_dev->dev_addr);
 	call_netdevice_notifiers(NETDEV_CHANGEADDR, dev);
 	return true;
@@ -273,9 +322,10 @@ static int vlan_dev_open(struct net_device *dev)
 	    !(vlan->flags & VLAN_FLAG_LOOSE_BINDING))
 		return -ENETDOWN;

-	if (!ether_addr_equal(dev->dev_addr, real_dev->dev_addr) &&
-	    !vlan_dev_inherit_address(dev, real_dev)) {
-		err = dev_uc_add(real_dev, dev->dev_addr);
+	if (ether_addr_equal(dev->dev_addr, real_dev->dev_addr) ||
+	    (!ether_addr_equal(dev->dev_addr, real_dev->dev_addr) &&
+	     !vlan_dev_inherit_address(dev, real_dev))) {
+		err = vlan_dev_add_addr(dev, dev->dev_addr);
 		if (err < 0)
 			goto out;
 	}
@@ -308,8 +358,7 @@ static int vlan_dev_open(struct net_device *dev)
 	if (dev->flags & IFF_ALLMULTI)
 		dev_set_allmulti(real_dev, -1);
 del_unicast:
-	if (!ether_addr_equal(dev->dev_addr, real_dev->dev_addr))
-		dev_uc_del(real_dev, dev->dev_addr);
+	vlan_dev_del_addr(dev, dev->dev_addr);
 out:
 	netif_carrier_off(dev);
 	return err;
@@ -327,8 +376,7 @@ static int vlan_dev_stop(struct net_device *dev)
 	if (dev->flags & IFF_PROMISC)
 		dev_set_promiscuity(real_dev, -1);

-	if (!ether_addr_equal(dev->dev_addr, real_dev->dev_addr))
-		dev_uc_del(real_dev, dev->dev_addr);
+	vlan_dev_del_addr(dev, dev->dev_addr);

 	if (!(vlan->flags & VLAN_FLAG_BRIDGE_BINDING))
 		netif_carrier_off(dev);
@@ -337,9 +385,7 @@ static int vlan_dev_stop(struct net_device *dev)

 static int vlan_dev_set_mac_address(struct net_device *dev, void *p)
 {
-	struct net_device *real_dev = vlan_dev_priv(dev)->real_dev;
 	struct sockaddr *addr = p;
 	int err;

 	if (!is_valid_ether_addr(addr->sa_data))
 		return -EADDRNOTAVAIL;

@@ -347,15 +393,8 @@ static int vlan_dev_set_mac_address(struct net_device *dev, void *p)
 	if (!(dev->flags &
IFF_UP))
 		goto out;

-	if (!ether_addr_equal(addr->sa_data, real_dev->dev_addr)) {
-		err = dev_uc_add(real_dev, addr->sa_data);
-		if (err < 0)
-			return err;
-	}
-
-	if (!ether_addr_equal(dev->dev_addr, real_dev->dev_addr))
-		dev_uc_del(real_dev, dev->dev_addr);
-
+	err = vlan_dev_subs_addr(dev, addr->sa_data);
+	if (err)
+		return err;
 out:
 	ether_addr_copy(dev->dev_addr, addr->sa_data);
 	return 0;

From patchwork Thu May 21 21:10:28 2020
X-Patchwork-Submitter: Vladimir Oltean
X-Patchwork-Id: 218758
From: Vladimir Oltean
To: andrew@lunn.ch, f.fainelli@gmail.com, vivien.didelot@gmail.com, davem@davemloft.net
Cc: jiri@resnulli.us, idosch@idosch.org, kuba@kernel.org, ivecera@redhat.com, netdev@vger.kernel.org, horatiu.vultur@microchip.com, allan.nielsen@microchip.com, nikolay@cumulusnetworks.com, roopa@cumulusnetworks.com
Subject: [PATCH RFC net-next 05/13] net: bridge: multicast: propagate br_mc_disabled_update() return
Date: Fri, 22 May 2020 00:10:28 +0300
Message-Id: <20200521211036.668624-6-olteanv@gmail.com>
In-Reply-To: <20200521211036.668624-1-olteanv@gmail.com>
References: <20200521211036.668624-1-olteanv@gmail.com>

From: Florian Fainelli

Some Ethernet switches might not be able to disable multicast flooding
globally, e.g. when several bridges span the same physical device.
Propagate the return value of br_mc_disabled_update() so that the error
is reported to user space correctly.

Signed-off-by: Florian Fainelli
Signed-off-by: Vladimir Oltean
---
 net/bridge/br_multicast.c | 21 +++++++++++++++------
 1 file changed, 15 insertions(+), 6 deletions(-)

diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c
index ad12fe3fca8c..9e93035b1483 100644
--- a/net/bridge/br_multicast.c
+++ b/net/bridge/br_multicast.c
@@ -809,7 +809,7 @@ static void br_ip6_multicast_port_query_expired(struct timer_list *t)
 }
 #endif

-static void br_mc_disabled_update(struct net_device *dev, bool value)
+static int br_mc_disabled_update(struct net_device *dev, bool value)
 {
 	struct switchdev_attr attr = {
 		.orig_dev = dev,
@@ -818,11 +818,13 @@ static void br_mc_disabled_update(struct net_device *dev, bool value)
 		.u.mc_disabled = !value,
 	};

-	switchdev_port_attr_set(dev, &attr);
+	return switchdev_port_attr_set(dev, &attr);
 }

 int br_multicast_add_port(struct net_bridge_port *port)
 {
+	int ret;
+
 	port->multicast_router = MDB_RTR_TYPE_TEMP_QUERY;

 	timer_setup(&port->multicast_router_timer,
@@ -833,8 +835,11 @@ int
br_multicast_add_port(struct net_bridge_port *port)
 	timer_setup(&port->ip6_own_query.timer,
 		    br_ip6_multicast_port_query_expired, 0);
 #endif
-	br_mc_disabled_update(port->dev,
-			      br_opt_get(port->br, BROPT_MULTICAST_ENABLED));
+	ret = br_mc_disabled_update(port->dev,
+				    br_opt_get(port->br,
+					       BROPT_MULTICAST_ENABLED));
+	if (ret)
+		return ret;

 	port->mcast_stats = netdev_alloc_pcpu_stats(struct bridge_mcast_stats);
 	if (!port->mcast_stats)
@@ -2049,12 +2054,16 @@ static void br_multicast_start_querier(struct net_bridge *br,
 int br_multicast_toggle(struct net_bridge *br, unsigned long val)
 {
 	struct net_bridge_port *port;
+	int err = 0;

 	spin_lock_bh(&br->multicast_lock);
 	if (!!br_opt_get(br, BROPT_MULTICAST_ENABLED) == !!val)
 		goto unlock;

-	br_mc_disabled_update(br->dev, val);
+	err = br_mc_disabled_update(br->dev, val);
+	if (err && err != -EOPNOTSUPP)
+		goto unlock;
+
 	br_opt_toggle(br, BROPT_MULTICAST_ENABLED, !!val);
 	if (!br_opt_get(br, BROPT_MULTICAST_ENABLED)) {
 		br_multicast_leave_snoopers(br);
@@ -2071,7 +2080,7 @@ int br_multicast_toggle(struct net_bridge *br, unsigned long val)
 unlock:
 	spin_unlock_bh(&br->multicast_lock);

-	return 0;
+	return err;
 }

 bool br_multicast_enabled(const struct net_device *dev)

From patchwork Thu May 21 21:10:30 2020
X-Patchwork-Submitter: Vladimir Oltean
X-Patchwork-Id: 218754
May 2020 21:11:18 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 0751320759 for ; Thu, 21 May 2020 21:11:18 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="OLaWPySr" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730623AbgEUVLE (ORCPT ); Thu, 21 May 2020 17:11:04 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33364 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730614AbgEUVLB (ORCPT ); Thu, 21 May 2020 17:11:01 -0400 Received: from mail-ej1-x641.google.com (mail-ej1-x641.google.com [IPv6:2a00:1450:4864:20::641]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 61C4FC05BD43 for ; Thu, 21 May 2020 14:11:01 -0700 (PDT) Received: by mail-ej1-x641.google.com with SMTP id x1so10539357ejd.8 for ; Thu, 21 May 2020 14:11:01 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=UxPaIBDRafmiQ0UKafviws3l8/Ted7xW1QOTsLZHkDg=; b=OLaWPySrBqnIg6Pf6BjB5H4W1LcV9h1gTbn5DGM5SU1a/bePMwWbfV9pgSfLk+7V3I 9MwQan3iuGVF0dwlEdza1OVrdj1Qlb75zl0et6wKm/iMCyY3slPffDw6mruS2zSuSQlC GDYINZRMKVsUKfBa2jA2AQpJJ6rXbVjaVkLFYh05R8TPVgZyaimo4dc90YEwIjkeGuGu EY6/KVthlNL/9pFdKUbRs9tlFSR2/4yHkF8X4KRFuKaQYNx4K8hD2hEyP55gCQiiPfEL LIR6zqdSa2hEWfT/JJZx03q4AlPxTT3IBxApnKf3IQi2rCXWpUW5bv1qbQ+e0yUSiuIQ 6ERw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=UxPaIBDRafmiQ0UKafviws3l8/Ted7xW1QOTsLZHkDg=; b=ruJ6uAFE3ic3+0U2dDfB0pSxsPtH1HErSd4XQZOtR4+qAjQ1+vKOl3fRLyzkiE6RQK rc44O+/tzYcEvCtadL59pEB3LRkIbaHxf5An+x+OmvyWdiV4vGkjSNZZw+7w6jTmpDLM 
4IDNznFw5WEidgobpLLO+tyBX3/g3JtqMPc8g+T12RYTbVJ1vIqP3Ba7g3ainlwbs362 Atzx2UgTHUi1/DlrfKHDJYf06YD+0LbBwTG9xeLm0tk0swXltGqDqVQDktJCcBuxp3l6 wj/tok9CwpfCJrBpmcwQUD4vh5u1AzDLDkbofthfGAQI8VXbQOKMQf7zd4Sr89+NQuZz TkBQ== X-Gm-Message-State: AOAM5326lvtZ9adQjisYIkMtNbg0LPaycPbsSWY6AzfSC4LNHUuznzq1 Wo9ZcdPKFelKmge56y6wa5I= X-Google-Smtp-Source: ABdhPJy+zJV7Rq+419DJvC+EYHH5Q6ELRBZ/K/HcH5/Uu37UyXnvDoTScDUEJUiJIGyTjhzTTs2Ohg== X-Received: by 2002:a17:906:6990:: with SMTP id i16mr5653026ejr.175.1590095459940; Thu, 21 May 2020 14:10:59 -0700 (PDT) Received: from localhost.localdomain ([188.25.147.193]) by smtp.gmail.com with ESMTPSA id h8sm5797637edk.72.2020.05.21.14.10.58 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 21 May 2020 14:10:59 -0700 (PDT) From: Vladimir Oltean To: andrew@lunn.ch, f.fainelli@gmail.com, vivien.didelot@gmail.com, davem@davemloft.net Cc: jiri@resnulli.us, idosch@idosch.org, kuba@kernel.org, ivecera@redhat.com, netdev@vger.kernel.org, horatiu.vultur@microchip.com, allan.nielsen@microchip.com, nikolay@cumulusnetworks.com, roopa@cumulusnetworks.com Subject: [PATCH RFC net-next 07/13] net: dsa: don't use switchdev_notifier_fdb_info in dsa_switchdev_event_work Date: Fri, 22 May 2020 00:10:30 +0300 Message-Id: <20200521211036.668624-8-olteanv@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20200521211036.668624-1-olteanv@gmail.com> References: <20200521211036.668624-1-olteanv@gmail.com> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Vladimir Oltean Currently DSA doesn't add FDB entries on the CPU port, because it only does so through switchdev, which is associated with a net_device, and there are none of those for the CPU port. But actually FDB addresses on the CPU port can be associated with RX filtering, so we can initiate switchdev operations from within the DSA layer. We need the deferred work because .ndo_set_rx_mode runs in atomic context. 
There is just one problem with the existing code: it passes a structure
in dsa_switchdev_event_work which was retrieved directly from switchdev,
so it contains a net_device. We need to generalize the contents to
something that covers the CPU port as well: the "ds, port" tuple is fine
for that.

Note that the new procedure for notifying the successful FDB offload is
inspired from the rocker model.

Also, nothing was being done if added_by_user was false. Let's check for
that a lot earlier, and don't actually bother to schedule the whole
workqueue for nothing.

Signed-off-by: Vladimir Oltean
---
 net/dsa/dsa_priv.h | 12 ++++++
 net/dsa/slave.c    | 98 +++++++++++++++++++++++-----------------------
 2 files changed, 60 insertions(+), 50 deletions(-)

diff --git a/net/dsa/dsa_priv.h b/net/dsa/dsa_priv.h
index adecf73bd608..001668007efd 100644
--- a/net/dsa/dsa_priv.h
+++ b/net/dsa/dsa_priv.h
@@ -72,6 +72,18 @@ struct dsa_notifier_mtu_info {
 	int mtu;
 };
 
+struct dsa_switchdev_event_work {
+	struct dsa_switch *ds;
+	int port;
+	struct work_struct work;
+	unsigned long event;
+	/* Specific for SWITCHDEV_FDB_ADD_TO_DEVICE and
+	 * SWITCHDEV_FDB_DEL_TO_DEVICE
+	 */
+	unsigned char addr[ETH_ALEN];
+	u16 vid;
+};
+
 struct dsa_slave_priv {
 	/* Copy of CPU port xmit for faster access in slave transmit hot path */
 	struct sk_buff *	(*xmit)(struct sk_buff *skb,

diff --git a/net/dsa/slave.c b/net/dsa/slave.c
index 886490fb203d..d2072fbd22fe 100644
--- a/net/dsa/slave.c
+++ b/net/dsa/slave.c
@@ -1914,72 +1914,60 @@ static int dsa_slave_netdevice_event(struct notifier_block *nb,
 	return NOTIFY_DONE;
 }
 
-struct dsa_switchdev_event_work {
-	struct work_struct work;
-	struct switchdev_notifier_fdb_info fdb_info;
-	struct net_device *dev;
-	unsigned long event;
-};
+static void
+dsa_fdb_offload_notify(struct dsa_switchdev_event_work *switchdev_work)
+{
+	struct dsa_switch *ds = switchdev_work->ds;
+	struct dsa_port *dp = dsa_to_port(ds, switchdev_work->port);
+	struct switchdev_notifier_fdb_info info;
+
+	if (!dsa_is_user_port(ds, dp->index))
+		return;
+
+	info.addr = switchdev_work->addr;
+	info.vid = switchdev_work->vid;
+	info.offloaded = true;
+	call_switchdev_notifiers(SWITCHDEV_FDB_OFFLOADED,
+				 dp->slave, &info.info, NULL);
+}
 
 static void dsa_slave_switchdev_event_work(struct work_struct *work)
 {
 	struct dsa_switchdev_event_work *switchdev_work =
 		container_of(work, struct dsa_switchdev_event_work, work);
-	struct net_device *dev = switchdev_work->dev;
-	struct switchdev_notifier_fdb_info *fdb_info;
-	struct dsa_port *dp = dsa_slave_to_port(dev);
+	struct dsa_switch *ds = switchdev_work->ds;
+	struct dsa_port *dp = dsa_to_port(ds, switchdev_work->port);
 	int err;
 
 	rtnl_lock();
 	switch (switchdev_work->event) {
 	case SWITCHDEV_FDB_ADD_TO_DEVICE:
-		fdb_info = &switchdev_work->fdb_info;
-		if (!fdb_info->added_by_user)
-			break;
-
-		err = dsa_port_fdb_add(dp, fdb_info->addr, fdb_info->vid);
+		err = dsa_port_fdb_add(dp, switchdev_work->addr,
+				       switchdev_work->vid);
 		if (err) {
-			netdev_dbg(dev, "fdb add failed err=%d\n", err);
+			dev_dbg(ds->dev, "port %d fdb add failed err=%d\n",
+				dp->index, err);
 			break;
 		}
-		fdb_info->offloaded = true;
-		call_switchdev_notifiers(SWITCHDEV_FDB_OFFLOADED, dev,
-					 &fdb_info->info, NULL);
+		dsa_fdb_offload_notify(switchdev_work);
 		break;
 	case SWITCHDEV_FDB_DEL_TO_DEVICE:
-		fdb_info = &switchdev_work->fdb_info;
-		if (!fdb_info->added_by_user)
-			break;
-
-		err = dsa_port_fdb_del(dp, fdb_info->addr, fdb_info->vid);
+		err = dsa_port_fdb_del(dp, switchdev_work->addr,
				       switchdev_work->vid);
 		if (err) {
-			netdev_dbg(dev, "fdb del failed err=%d\n", err);
-			dev_close(dev);
+			dev_dbg(ds->dev, "port %d fdb del failed err=%d\n",
+				dp->index, err);
+			if (dsa_is_user_port(ds, dp->index))
+				dev_close(dp->slave);
 		}
 		break;
 	}
 	rtnl_unlock();
 
-	kfree(switchdev_work->fdb_info.addr);
 	kfree(switchdev_work);
-	dev_put(dev);
-}
-
-static int
-dsa_slave_switchdev_fdb_work_init(struct dsa_switchdev_event_work *
-				  switchdev_work,
-				  const struct switchdev_notifier_fdb_info *
-				  fdb_info)
-{
-	memcpy(&switchdev_work->fdb_info, fdb_info,
-	       sizeof(switchdev_work->fdb_info));
-	switchdev_work->fdb_info.addr = kzalloc(ETH_ALEN, GFP_ATOMIC);
-	if (!switchdev_work->fdb_info.addr)
-		return -ENOMEM;
-	ether_addr_copy((u8 *)switchdev_work->fdb_info.addr,
-			fdb_info->addr);
-	return 0;
+	if (dsa_is_user_port(ds, dp->index))
+		dev_put(dp->slave);
 }
 
 /* Called under rcu_read_lock() */
@@ -1987,7 +1975,9 @@ static int dsa_slave_switchdev_event(struct notifier_block *unused,
 					unsigned long event, void *ptr)
 {
 	struct net_device *dev = switchdev_notifier_info_to_dev(ptr);
+	const struct switchdev_notifier_fdb_info *fdb_info;
 	struct dsa_switchdev_event_work *switchdev_work;
+	struct dsa_port *dp;
 	int err;
 
 	if (event == SWITCHDEV_PORT_ATTR_SET) {
@@ -2000,20 +1990,32 @@ static int dsa_slave_switchdev_event(struct notifier_block *unused,
 	if (!dsa_slave_dev_check(dev))
 		return NOTIFY_DONE;
 
+	dp = dsa_slave_to_port(dev);
+
 	switchdev_work = kzalloc(sizeof(*switchdev_work), GFP_ATOMIC);
 	if (!switchdev_work)
 		return NOTIFY_BAD;
 
 	INIT_WORK(&switchdev_work->work, dsa_slave_switchdev_event_work);
-	switchdev_work->dev = dev;
+	switchdev_work->ds = dp->ds;
+	switchdev_work->port = dp->index;
 	switchdev_work->event = event;
 
 	switch (event) {
 	case SWITCHDEV_FDB_ADD_TO_DEVICE: /* fall through */
 	case SWITCHDEV_FDB_DEL_TO_DEVICE:
-		if (dsa_slave_switchdev_fdb_work_init(switchdev_work, ptr))
-			goto err_fdb_work_init;
+		fdb_info = ptr;
+
+		if (!fdb_info->added_by_user) {
+			kfree(switchdev_work);
+			return NOTIFY_OK;
+		}
+
+		ether_addr_copy(switchdev_work->addr,
+				fdb_info->addr);
+		switchdev_work->vid = fdb_info->vid;
+
 		dev_hold(dev);
 		break;
 	default:
@@ -2023,10 +2025,6 @@ static int dsa_slave_switchdev_event(struct notifier_block *unused,
 
 	dsa_schedule_work(&switchdev_work->work);
 	return NOTIFY_OK;
-
-err_fdb_work_init:
-	kfree(switchdev_work);
-	return NOTIFY_BAD;
 }
 
 static int dsa_slave_switchdev_blocking_event(struct notifier_block *unused,

From patchwork Thu May 21 21:10:31 2020
X-Patchwork-Submitter: Vladimir Oltean
X-Patchwork-Id: 218757
From: Vladimir Oltean
Subject: [PATCH RFC net-next 08/13] net: dsa: add ability to program
 unicast and multicast filters for CPU port
Date: Fri, 22 May 2020 00:10:31 +0300
Message-Id: <20200521211036.668624-9-olteanv@gmail.com>
In-Reply-To: <20200521211036.668624-1-olteanv@gmail.com>
References: <20200521211036.668624-1-olteanv@gmail.com>

From: Florian Fainelli

When the switch ports operate as individual network devices, the switch
driver might have configured the switch to flood multicast all the way
to the CPU port. This is really undesirable as it can lead to receiving
a lot of unwanted traffic that the network stack needs to filter in
software.

For each valid multicast address, program it into the switch's MDB only
when the host is interested in receiving such traffic, e.g: running a
multicast application.

For unicast filtering, consider that termination can only be done
through the primary MAC address of each net device virtually
corresponding to a switch port, as well as through upper interfaces
(VLAN, bridge) that add their MAC address to the list of secondary
unicast addresses of the switch net devices. For each such unicast
address, install a reference-counted FDB entry towards the CPU port.

Signed-off-by: Florian Fainelli
Signed-off-by: Vladimir Oltean
---
 include/net/dsa.h |   6 ++
 net/dsa/Kconfig   |   1 +
 net/dsa/dsa2.c    |   6 ++
 net/dsa/slave.c   | 182 ++++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 195 insertions(+)

diff --git a/include/net/dsa.h b/include/net/dsa.h
index 50389772c597..7aa78884a5f2 100644
--- a/include/net/dsa.h
+++ b/include/net/dsa.h
@@ -261,6 +261,12 @@ struct dsa_switch {
 	 */
 	const struct dsa_switch_ops *ops;
 
+	/*
+	 * {MAC, VLAN} addresses that are copied to the CPU.
+	 */
+	struct netdev_hw_addr_list uc;
+	struct netdev_hw_addr_list mc;
+
 	/*
 	 * Slave mii_bus and devices for the individual ports.
 	 */

diff --git a/net/dsa/Kconfig b/net/dsa/Kconfig
index 739613070d07..d4644afdbdd7 100644
--- a/net/dsa/Kconfig
+++ b/net/dsa/Kconfig
@@ -9,6 +9,7 @@ menuconfig NET_DSA
 	tristate "Distributed Switch Architecture"
 	depends on HAVE_NET_DSA
 	depends on BRIDGE || BRIDGE=n
+	depends on VLAN_8021Q_IVDF || VLAN_8021Q_IVDF=n
 	select GRO_CELLS
 	select NET_SWITCHDEV
 	select PHYLINK

diff --git a/net/dsa/dsa2.c b/net/dsa/dsa2.c
index 076908fdd29b..cd17554a912b 100644
--- a/net/dsa/dsa2.c
+++ b/net/dsa/dsa2.c
@@ -429,6 +429,9 @@ static int dsa_switch_setup(struct dsa_switch *ds)
 		goto unregister_notifier;
 	}
 
+	__hw_addr_init(&ds->mc);
+	__hw_addr_init(&ds->uc);
+
 	ds->setup = true;
 
 	return 0;
@@ -449,6 +452,9 @@ static void dsa_switch_teardown(struct dsa_switch *ds)
 	if (!ds->setup)
 		return;
 
+	__hw_addr_flush(&ds->mc);
+	__hw_addr_flush(&ds->uc);
+
 	if (ds->slave_mii_bus && ds->ops->phy_read)
 		mdiobus_unregister(ds->slave_mii_bus);

diff --git a/net/dsa/slave.c b/net/dsa/slave.c
index d2072fbd22fe..2743d689f6b1 100644
--- a/net/dsa/slave.c
+++ b/net/dsa/slave.c
@@ -62,6 +62,158 @@ static int dsa_slave_get_iflink(const struct net_device *dev)
 	return dsa_slave_to_master(dev)->ifindex;
 }
 
+/* Add a static host MDB entry, corresponding to a slave multicast MAC address,
+ * to the CPU port. The MDB entry is reference-counted (4 slave ports listening
+ * on the same multicast MAC address will only call this function once).
+ */
+static int dsa_upstream_sync_mdb_addr(struct net_device *dev,
+				      const unsigned char *addr)
+{
+	struct switchdev_obj_port_mdb mdb;
+
+	memset(&mdb, 0, sizeof(mdb));
+	mdb.obj.id = SWITCHDEV_OBJ_ID_HOST_MDB;
+	mdb.obj.flags = SWITCHDEV_F_DEFER;
+	mdb.vid = vlan_dev_get_addr_vid(dev, addr);
+	ether_addr_copy(mdb.addr, addr);
+
+	return switchdev_port_obj_add(dev, &mdb.obj, NULL);
+}
+
+/* Delete a static host MDB entry, corresponding to a slave multicast MAC
+ * address, to the CPU port. The MDB entry is reference-counted (4 slave ports
+ * listening on the same multicast MAC address will only call this function
+ * once).
+ */
+static int dsa_upstream_unsync_mdb_addr(struct net_device *dev,
+					const unsigned char *addr)
+{
+	struct switchdev_obj_port_mdb mdb;
+
+	memset(&mdb, 0, sizeof(mdb));
+	mdb.obj.id = SWITCHDEV_OBJ_ID_HOST_MDB;
+	mdb.obj.flags = SWITCHDEV_F_DEFER;
+	mdb.vid = vlan_dev_get_addr_vid(dev, addr);
+	ether_addr_copy(mdb.addr, addr);
+
+	return switchdev_port_obj_del(dev, &mdb.obj);
+}
+
+static int dsa_slave_sync_mdb_addr(struct net_device *dev,
+				   const unsigned char *addr)
+{
+	struct dsa_port *dp = dsa_slave_to_port(dev);
+	struct dsa_switch *ds = dp->ds;
+	int err;
+
+	err = __hw_addr_add(&ds->mc, addr, dev->addr_len + dev->vid_len,
+			    NETDEV_HW_ADDR_T_MULTICAST);
+	if (err)
+		return err;
+
+	return __hw_addr_sync_dev(&ds->mc, dev, dsa_upstream_sync_mdb_addr,
+				  dsa_upstream_unsync_mdb_addr);
+}
+
+static int dsa_slave_unsync_mdb_addr(struct net_device *dev,
+				     const unsigned char *addr)
+{
+	struct dsa_port *dp = dsa_slave_to_port(dev);
+	struct dsa_switch *ds = dp->ds;
+	int err;
+
+	err = __hw_addr_del(&ds->mc, addr, dev->addr_len + dev->vid_len,
+			    NETDEV_HW_ADDR_T_MULTICAST);
+	if (err)
+		return err;
+
+	return __hw_addr_sync_dev(&ds->mc, dev, dsa_upstream_sync_mdb_addr,
+				  dsa_upstream_unsync_mdb_addr);
+}
+
+static void dsa_slave_switchdev_event_work(struct work_struct *work);
+
+static int dsa_upstream_fdb_addr(struct net_device *slave_dev,
+				 const unsigned char *addr,
+				 unsigned long event)
+{
+	int addr_len = slave_dev->addr_len + slave_dev->vid_len;
+	struct dsa_port *dp = dsa_slave_to_port(slave_dev);
+	u16 vid = vlan_dev_get_addr_vid(slave_dev, addr);
+	struct dsa_switchdev_event_work *switchdev_work;
+
+	switchdev_work = kzalloc(sizeof(*switchdev_work), GFP_ATOMIC);
+	if (!switchdev_work)
+		return -ENOMEM;
+
+	INIT_WORK(&switchdev_work->work, dsa_slave_switchdev_event_work);
+	switchdev_work->ds = dp->ds;
+	switchdev_work->port = dsa_upstream_port(dp->ds, dp->index);
+	switchdev_work->event = event;
+
+	memcpy(switchdev_work->addr, addr, addr_len);
+	switchdev_work->vid = vid;
+
+	dev_hold(slave_dev);
+	dsa_schedule_work(&switchdev_work->work);
+
+	return 0;
+}
+
+/* Add a static FDB entry, corresponding to a slave unicast MAC address,
+ * to the CPU port. The FDB entry is reference-counted (4 slave ports having
+ * the same MAC address will only call this function once).
+ */
+static int dsa_upstream_sync_fdb_addr(struct net_device *slave_dev,
+				      const unsigned char *addr)
+{
+	return dsa_upstream_fdb_addr(slave_dev, addr,
+				     SWITCHDEV_FDB_ADD_TO_DEVICE);
+}
+
+/* Remove a static FDB entry, corresponding to a slave unicast MAC address,
+ * from the CPU port. The FDB entry is reference-counted (the MAC address is
+ * only removed when there is no remaining slave port that uses it).
+ */
+static int dsa_upstream_unsync_fdb_addr(struct net_device *slave_dev,
+					const unsigned char *addr)
+{
+	return dsa_upstream_fdb_addr(slave_dev, addr,
+				     SWITCHDEV_FDB_DEL_TO_DEVICE);
+}
+
+static int dsa_slave_sync_fdb_addr(struct net_device *dev,
+				   const unsigned char *addr)
+{
+	struct dsa_port *dp = dsa_slave_to_port(dev);
+	struct dsa_switch *ds = dp->ds;
+	int err;
+
+	err = __hw_addr_add(&ds->uc, addr, dev->addr_len + dev->vid_len,
+			    NETDEV_HW_ADDR_T_UNICAST);
+	if (err)
+		return err;
+
+	return __hw_addr_sync_dev(&ds->uc, dev, dsa_upstream_sync_fdb_addr,
+				  dsa_upstream_unsync_fdb_addr);
+}
+
+static int dsa_slave_unsync_fdb_addr(struct net_device *dev,
+				     const unsigned char *addr)
+{
+	struct dsa_port *dp = dsa_slave_to_port(dev);
+	struct dsa_switch *ds = dp->ds;
+	int err;
+
+	err = __hw_addr_del(&ds->uc, addr, dev->addr_len + dev->vid_len,
+			    NETDEV_HW_ADDR_T_UNICAST);
+	if (err)
+		return err;
+
+	return __hw_addr_sync_dev(&ds->uc, dev, dsa_upstream_sync_fdb_addr,
+				  dsa_upstream_unsync_fdb_addr);
+}
+
 static int dsa_slave_open(struct net_device *dev)
 {
 	struct net_device *master = dsa_slave_to_master(dev);
@@ -76,6 +228,9 @@ static int dsa_slave_open(struct net_device *dev)
 		if (err < 0)
 			goto out;
 	}
+	err = dsa_slave_sync_fdb_addr(dev, dev->dev_addr);
+	if (err < 0)
+		goto out;
 
 	if (dev->flags & IFF_ALLMULTI) {
 		err = dev_set_allmulti(master, 1);
@@ -103,6 +258,7 @@ static int dsa_slave_open(struct net_device *dev)
 del_unicast:
 	if (!ether_addr_equal(dev->dev_addr, master->dev_addr))
 		dev_uc_del(master, dev->dev_addr);
+	dsa_slave_unsync_fdb_addr(dev, dev->dev_addr);
 out:
 	return err;
 }
@@ -116,6 +272,9 @@ static int dsa_slave_close(struct net_device *dev)
 	dev_mc_unsync(master, dev);
 	dev_uc_unsync(master, dev);
 
+	__dev_mc_unsync(dev, dsa_slave_unsync_mdb_addr);
+	__dev_uc_unsync(dev, dsa_slave_unsync_fdb_addr);
+
 	if (dev->flags & IFF_ALLMULTI)
 		dev_set_allmulti(master, -1);
 	if (dev->flags & IFF_PROMISC)
@@ -143,7 +302,17 @@ static void dsa_slave_change_rx_flags(struct net_device *dev, int change)
 static void dsa_slave_set_rx_mode(struct net_device *dev)
 {
 	struct net_device *master = dsa_slave_to_master(dev);
+	struct dsa_port *dp = dsa_slave_to_port(dev);
+
+	/* If the port is bridged, the bridge takes care of sending
+	 * SWITCHDEV_OBJ_ID_HOST_MDB to program the host's MC filter
+	 */
+	if (netdev_mc_empty(dev) || dp->bridge_dev)
+		goto out;
 
+	__dev_mc_sync(dev, dsa_slave_sync_mdb_addr, dsa_slave_unsync_mdb_addr);
+out:
+	__dev_uc_sync(dev, dsa_slave_sync_fdb_addr, dsa_slave_unsync_fdb_addr);
 	dev_mc_sync(master, dev);
 	dev_uc_sync(master, dev);
 }
@@ -165,9 +334,15 @@ static int dsa_slave_set_mac_address(struct net_device *dev, void *a)
 		if (err < 0)
 			return err;
 	}
+	err = dsa_slave_sync_fdb_addr(dev, addr->sa_data);
+	if (err < 0)
+		goto out;
 
 	if (!ether_addr_equal(dev->dev_addr, master->dev_addr))
 		dev_uc_del(master, dev->dev_addr);
+	err = dsa_slave_unsync_fdb_addr(dev, dev->dev_addr);
+	if (err < 0)
+		goto out;
 
 out:
 	ether_addr_copy(dev->dev_addr, addr->sa_data);
@@ -1752,6 +1927,8 @@ int dsa_slave_create(struct dsa_port *port)
 	else
 		eth_hw_addr_inherit(slave_dev, master);
 	slave_dev->priv_flags |= IFF_NO_QUEUE;
+	if (ds->ops->port_fdb_add && ds->ops->port_egress_floods)
+		slave_dev->priv_flags |= IFF_UNICAST_FLT;
 	slave_dev->netdev_ops = &dsa_slave_netdev_ops;
 	slave_dev->min_mtu = 0;
 	if (ds->ops->port_max_mtu)
@@ -1759,6 +1936,7 @@ int dsa_slave_create(struct dsa_port *port)
 	else
 		slave_dev->max_mtu = ETH_MAX_MTU;
 	SET_NETDEV_DEVTYPE(slave_dev, &dsa_type);
+	vlan_dev_ivdf_set(slave_dev, true);
 
 	netdev_for_each_tx_queue(slave_dev, dsa_slave_set_lockdep_class_one,
 				 NULL);
@@ -1854,6 +2032,10 @@ static int dsa_slave_changeupper(struct net_device *dev,
 
 	if (netif_is_bridge_master(info->upper_dev)) {
 		if (info->linking) {
+			/* Remove existing MC addresses that might have been
+			 * programmed
+			 */
+			__dev_mc_unsync(dev, dsa_slave_unsync_mdb_addr);
 			err = dsa_port_bridge_join(dp, info->upper_dev);
 			if (!err)
 				dsa_bridge_mtu_normalization(dp);

From patchwork Thu May 21 21:10:34 2020
X-Patchwork-Submitter: Vladimir Oltean
X-Patchwork-Id: 218756
From: Vladimir Oltean
Subject: [PATCH RFC net-next 11/13] net: dsa: deal with new flooding port
 attributes from bridge
Date: Fri, 22 May 2020 00:10:34 +0300
Message-Id: <20200521211036.668624-12-olteanv@gmail.com>
In-Reply-To: <20200521211036.668624-1-olteanv@gmail.com>
References: <20200521211036.668624-1-olteanv@gmail.com>

From: Vladimir Oltean

This refactors the DSA core handling of flooding attributes, since 3
more have been introduced (related to host flooding).

In DSA, host flooding is actually the same as egress flooding of the
CPU port.

Note that there are some switches where flooding is a decision taken
per {source port, destination port}. In DSA, it is only per egress
port. For now, let's keep it that way, which means that we need to
implement a "flood count" for the CPU port (keep it in flooding while
there is at least one user port with the BR_HOST_FLOOD flag set).

With this patch, RX filtering can be done for switch ports operating in
standalone mode and in bridge mode with no foreign interfaces.
When bridging with other net devices in the system, all unknown
destinations are allowed to go to the CPU, where they continue to be
forwarded in software.

Signed-off-by: Vladimir Oltean
---
 include/net/dsa.h  |   8 ++++
 net/dsa/dsa_priv.h |   2 +-
 net/dsa/port.c     | 113 +++++++++++++++++++++++++++++++++------------
 3 files changed, 93 insertions(+), 30 deletions(-)

diff --git a/include/net/dsa.h b/include/net/dsa.h
index 7aa78884a5f2..c256467f1f4a 100644
--- a/include/net/dsa.h
+++ b/include/net/dsa.h
@@ -198,6 +198,14 @@ struct dsa_port {
 	struct devlink_port devlink_port;
 	struct phylink *pl;
 	struct phylink_config pl_config;
+	/* Operational state of flooding */
+	int uc_flood_count;
+	int mc_flood_count;
+	bool uc_flood;
+	bool mc_flood;
+	/* Knobs from bridge */
+	unsigned long br_flags;
+	bool mrouter;
 
 	struct list_head list;

diff --git a/net/dsa/dsa_priv.h b/net/dsa/dsa_priv.h
index 001668007efd..91cbaefc56b3 100644
--- a/net/dsa/dsa_priv.h
+++ b/net/dsa/dsa_priv.h
@@ -167,7 +167,7 @@ int dsa_port_mdb_del(const struct dsa_port *dp,
 		     const struct switchdev_obj_port_mdb *mdb);
 int dsa_port_pre_bridge_flags(const struct dsa_port *dp, unsigned long flags,
 			      struct switchdev_trans *trans);
-int dsa_port_bridge_flags(const struct dsa_port *dp, unsigned long flags,
+int dsa_port_bridge_flags(struct dsa_port *dp, unsigned long flags,
 			  struct switchdev_trans *trans);
 int dsa_port_mrouter(struct dsa_port *dp, bool mrouter,
 		     struct switchdev_trans *trans);

diff --git a/net/dsa/port.c b/net/dsa/port.c
index c4032f79225a..b527740d03a8 100644
--- a/net/dsa/port.c
+++ b/net/dsa/port.c
@@ -144,10 +144,7 @@ int dsa_port_bridge_join(struct dsa_port *dp, struct net_device *br)
 	};
 	int err;
 
-	/* Set the flooding mode before joining the port in the switch */
-	err = dsa_port_bridge_flags(dp, BR_FLOOD | BR_MCAST_FLOOD, NULL);
-	if (err)
-		return err;
+	dp->cpu_dp->mrouter = br_multicast_router(br);
 
 	/* Here the interface is already bridged. Reflect the current
 	 * configuration so that drivers can program their chips accordingly.
@@ -156,12 +153,6 @@ int dsa_port_bridge_join(struct dsa_port *dp, struct net_device *br)
 
 	err = dsa_broadcast(DSA_NOTIFIER_BRIDGE_JOIN, &info);
 
-	/* The bridging is rolled back on error */
-	if (err) {
-		dsa_port_bridge_flags(dp, 0, NULL);
-		dp->bridge_dev = NULL;
-	}
-
 	return err;
 }
 
@@ -184,8 +175,12 @@ void dsa_port_bridge_leave(struct dsa_port *dp, struct net_device *br)
 	if (err)
 		pr_err("DSA: failed to notify DSA_NOTIFIER_BRIDGE_LEAVE\n");
 
-	/* Port is leaving the bridge, disable flooding */
-	dsa_port_bridge_flags(dp, 0, NULL);
+	dp->cpu_dp->mrouter = false;
+
+	/* Port is leaving the bridge, disable host flooding and enable
+	 * egress flooding
+	 */
+	dsa_port_bridge_flags(dp, BR_FLOOD | BR_MCAST_FLOOD, NULL);
 
 	/* Port left the bridge, put in BR_STATE_DISABLED by the bridge layer,
 	 * so allow it to be in BR_STATE_FORWARDING to be kept functional
@@ -289,48 +284,108 @@ int dsa_port_ageing_time(struct dsa_port *dp, clock_t ageing_clock,
 	return dsa_port_notify(dp, DSA_NOTIFIER_AGEING_TIME, &info);
 }
 
+static int dsa_port_update_flooding(struct dsa_port *dp, int uc_flood_count,
+				    int mc_flood_count)
+{
+	struct dsa_switch *ds = dp->ds;
+	bool uc_flood_changed;
+	bool mc_flood_changed;
+	int port = dp->index;
+	bool uc_flood;
+	bool mc_flood;
+	int err;
+
+	if (!ds->ops->port_egress_floods)
+		return 0;
+
+	uc_flood = !!uc_flood_count;
+	mc_flood = dp->mrouter;
+
+	uc_flood_changed = dp->uc_flood ^ uc_flood;
+	mc_flood_changed = dp->mc_flood ^ mc_flood;
+
+	if (uc_flood_changed || mc_flood_changed) {
+		err = ds->ops->port_egress_floods(ds, port, uc_flood, mc_flood);
+		if (err)
+			return err;
+	}
+
+	dp->uc_flood_count = uc_flood_count;
+	dp->mc_flood_count = mc_flood_count;
+	dp->uc_flood = uc_flood;
+	dp->mc_flood = mc_flood;
+
+	return 0;
+}
+
 int dsa_port_pre_bridge_flags(const struct dsa_port *dp, unsigned long flags,
 			      struct switchdev_trans *trans)
 {
+	const unsigned long mask = BR_FLOOD | BR_MCAST_FLOOD | BR_BCAST_FLOOD |
+				   BR_HOST_FLOOD | BR_HOST_MCAST_FLOOD |
+				   BR_HOST_BCAST_FLOOD;
 	struct dsa_switch *ds = dp->ds;
 
-	if (!ds->ops->port_egress_floods ||
-	    (flags & ~(BR_FLOOD | BR_MCAST_FLOOD)))
-		return -EINVAL;
+	if (!ds->ops->port_egress_floods || (flags & ~mask))
+		return -EOPNOTSUPP;
 
 	return 0;
 }
 
-int dsa_port_bridge_flags(const struct dsa_port *dp, unsigned long flags,
+int dsa_port_bridge_flags(struct dsa_port *dp, unsigned long flags,
 			  struct switchdev_trans *trans)
 {
-	struct dsa_switch *ds = dp->ds;
-	int port = dp->index;
+	struct dsa_port *cpu_dp = dp->cpu_dp;
+	int cpu_uc_flood_count;
+	int cpu_mc_flood_count;
+	unsigned long changed;
+	int uc_flood_count;
+	int mc_flood_count;
 	int err = 0;
 
 	if (switchdev_trans_ph_prepare(trans))
 		return 0;
 
-	if (ds->ops->port_egress_floods)
-		err = ds->ops->port_egress_floods(ds, port, flags & BR_FLOOD,
-						  flags & BR_MCAST_FLOOD);
+	uc_flood_count = dp->uc_flood_count;
+	mc_flood_count = dp->mc_flood_count;
+	cpu_uc_flood_count = cpu_dp->uc_flood_count;
+	cpu_mc_flood_count = cpu_dp->mc_flood_count;
 
-	return err;
+	changed = dp->br_flags ^ flags;
+
+	if (changed & BR_FLOOD)
+		uc_flood_count += (flags & BR_FLOOD) ? 1 : -1;
+	if (changed & BR_MCAST_FLOOD)
+		mc_flood_count += (flags & BR_MCAST_FLOOD) ? 1 : -1;
+	if (changed & BR_HOST_FLOOD)
+		cpu_uc_flood_count += (flags & BR_HOST_FLOOD) ? 1 : -1;
+	if (changed & BR_HOST_MCAST_FLOOD)
+		cpu_mc_flood_count += (flags & BR_HOST_MCAST_FLOOD) ? 1 : -1;
+
+	err = dsa_port_update_flooding(dp, uc_flood_count, mc_flood_count);
+	if (err && err != -EOPNOTSUPP)
+		return err;
+
+	err = dsa_port_update_flooding(cpu_dp, cpu_uc_flood_count,
+				       cpu_mc_flood_count);
+	if (err && err != -EOPNOTSUPP)
+		return err;
+
+	dp->br_flags = flags;
+
+	return 0;
 }
 
 int dsa_port_mrouter(struct dsa_port *dp, bool mrouter,
 		     struct switchdev_trans *trans)
 {
-	struct dsa_switch *ds = dp->ds;
-	int port = dp->index;
-
-	if (!ds->ops->port_egress_floods)
-		return -EOPNOTSUPP;
-
 	if (switchdev_trans_ph_prepare(trans))
 		return 0;
 
-	return ds->ops->port_egress_floods(ds, port, true, mrouter);
+	dp->mrouter = mrouter;
+
+	return dsa_port_update_flooding(dp, dp->uc_flood_count,
+					dp->mc_flood_count);
 }
 
 int dsa_port_mtu_change(struct dsa_port *dp, int new_mtu,

From patchwork Thu May 21 21:10:35 2020
X-Patchwork-Submitter: Vladimir Oltean
X-Patchwork-Id: 218755
-0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33388 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730566AbgEUVLI (ORCPT ); Thu, 21 May 2020 17:11:08 -0400 Received: from mail-ed1-x542.google.com (mail-ed1-x542.google.com [IPv6:2a00:1450:4864:20::542]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7C774C061A0E for ; Thu, 21 May 2020 14:11:07 -0700 (PDT) Received: by mail-ed1-x542.google.com with SMTP id l25so7726910edj.4 for ; Thu, 21 May 2020 14:11:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=RbB6CXL0qgQWxXTRExMkrz+2z7uh3PUfZ9RjInw+Vf8=; b=QFXa7pJ3teMwRJt7Zx0QraCNc5oPPIJmEA0fviwzRl3B+8vNVVKhUBrt51N4KUbN9H pz+RkmSck8H0NI2k/SlfIxYdbXx4ohEkTh8dSiaOfAzMctOSwn2LkGrWMHP5uCf/qxgZ vJoHzcym/jmOkzXr/GXrXXaUWu15a++I5lbilwt9T4p5d7R+dZ2PSe9BlrU9XCaKUqLs kQxkWKxnU2kp/twFqDViJ2k5KMFnqYa9ICY5y0VsnltUiIo+UbYj+rON16gatr0uolvA psoXzL4TKBu7IfcDPy67hKfqXdHeSgxwqRvi/6q6wgwwSipm1KOI4sNOOXq9bBzb3F7G Ndhw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=RbB6CXL0qgQWxXTRExMkrz+2z7uh3PUfZ9RjInw+Vf8=; b=MMKIKzyUTQRG8WSvbLKOfN1LyZlTGOeDhX47zOXD/vKgxkBm3jGAl2Eba+hwd1DGZP CSIsNXVlt5YqLGMoPlX4HqmrdshW/oeIOH7JbL26ntDS4DSMZ+QyOHoCl0kE1+3vQVqP lILmQhV57Nfn7mEyBJkkOG/ELgTxhHDuUz8U2H/nBWJ6omb16WEXdDGwWIoe69/onjUB cwlkBqO89seYKHvU6Ze221faY1P85LoU5m+wc2huhfsh3b0fvESaWEMIVpURacXLCD5t TYD7/7gOY3SKB7HHvWfCf8KuWZyoKaN3t1C+boOm2naGjPgREk5QNUFk/4gNz+8USnlg /W4w== X-Gm-Message-State: AOAM532DjXpo2xcCsZWJ1C6qYcAszeQ1Bq4UsuK8AGj0y/i87QHiYcQj TIR0KU5nLSw2APEu7dFINPg= X-Google-Smtp-Source: ABdhPJy1cRHIiCJ3jatnT9eOB9gzd+WRTzSMXxdaoyMRUsJQ6lYoBNi0At4MXbENZZZpm7I45E3dMg== X-Received: by 2002:a50:9f66:: with SMTP id 
b93mr547968edf.376.1590095466236; Thu, 21 May 2020 14:11:06 -0700 (PDT) Received: from localhost.localdomain ([188.25.147.193]) by smtp.gmail.com with ESMTPSA id h8sm5797637edk.72.2020.05.21.14.11.05 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 21 May 2020 14:11:05 -0700 (PDT) From: Vladimir Oltean To: andrew@lunn.ch, f.fainelli@gmail.com, vivien.didelot@gmail.com, davem@davemloft.net Cc: jiri@resnulli.us, idosch@idosch.org, kuba@kernel.org, ivecera@redhat.com, netdev@vger.kernel.org, horatiu.vultur@microchip.com, allan.nielsen@microchip.com, nikolay@cumulusnetworks.com, roopa@cumulusnetworks.com Subject: [PATCH RFC net-next 12/13] net: dsa: treat switchdev notifications for multicast router connected to port Date: Fri, 22 May 2020 00:10:35 +0300 Message-Id: <20200521211036.668624-13-olteanv@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20200521211036.668624-1-olteanv@gmail.com> References: <20200521211036.668624-1-olteanv@gmail.com> MIME-Version: 1.0 Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Vladimir Oltean Similar to the "bridge is multicast router" case, unknown multicast should be flooded by this bridge to the ports where a multicast router is connected. 
Signed-off-by: Vladimir Oltean
---
 net/dsa/slave.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/net/dsa/slave.c b/net/dsa/slave.c
index 2743d689f6b1..c023f1120736 100644
--- a/net/dsa/slave.c
+++ b/net/dsa/slave.c
@@ -467,7 +467,12 @@ static int dsa_slave_port_attr_set(struct net_device *dev,
 	case SWITCHDEV_ATTR_ID_PORT_BRIDGE_FLAGS:
 		ret = dsa_port_bridge_flags(dp, attr->u.brport_flags, trans);
 		break;
+	case SWITCHDEV_ATTR_ID_PORT_MROUTER:
+		/* A multicast router is connected to this external port */
+		ret = dsa_port_mrouter(dp, attr->u.mrouter, trans);
+		break;
 	case SWITCHDEV_ATTR_ID_BRIDGE_MROUTER:
+		/* The local bridge is a multicast router */
 		ret = dsa_port_mrouter(dp->cpu_dp, attr->u.mrouter, trans);
 		break;
 	default: