From patchwork Tue May 5 16:25:50 2020
X-Patchwork-Submitter: Julian Wiedmann
X-Patchwork-Id: 219864
From: Julian Wiedmann
To: David Miller
Cc: netdev, linux-s390, Heiko Carstens, Ursula Braun, Julian Wiedmann
Subject: [PATCH net-next 02/11] s390/qeth: process local address events
Date: Tue, 5 May 2020 18:25:50 +0200
Message-Id: <20200505162559.14138-3-jwi@linux.ibm.com>
In-Reply-To: <20200505162559.14138-1-jwi@linux.ibm.com>
References: <20200505162559.14138-1-jwi@linux.ibm.com>

In configurations where specific HW offloads are in use, OSA adapters will
raise notifications to their virtual devices about the IP addresses that
currently reside on the same adapter.

Cache these addresses in two RCU-enabled hash tables, and flush the tables
once the relevant HW offload(s) get disabled.

Signed-off-by: Julian Wiedmann
---
 drivers/s390/net/qeth_core.h      |  13 ++
 drivers/s390/net/qeth_core_main.c | 217 ++++++++++++++++++++++++++++++
 drivers/s390/net/qeth_core_mpc.h  |  25 ++++
 drivers/s390/net/qeth_l2_main.c   |   1 +
 drivers/s390/net/qeth_l3_main.c   |   1 +
 5 files changed, 257 insertions(+)

diff --git a/drivers/s390/net/qeth_core.h b/drivers/s390/net/qeth_core.h
index 2ac7771394d8..b92af3735dd4 100644
--- a/drivers/s390/net/qeth_core.h
+++ b/drivers/s390/net/qeth_core.h
@@ -21,8 +21,10 @@
 #include
 #include
 #include
+#include
 #include
 #include
+#include
 #include
 #include
@@ -356,6 +358,12 @@ static inline bool qeth_l3_same_next_hop(struct qeth_hdr_layer3 *h1,
                    &h2->next_hop.ipv6_addr);
 }
 
+struct qeth_local_addr {
+    struct hlist_node hnode;
+    struct rcu_head rcu;
+    struct in6_addr addr;
+};
+
 enum qeth_qdio_info_states {
     QETH_QDIO_UNINITIALIZED,
     QETH_QDIO_ALLOCATED,
@@ -800,6 +808,10 @@ struct qeth_card {
     wait_queue_head_t wait_q;
     DECLARE_HASHTABLE(mac_htable, 4);
     DECLARE_HASHTABLE(ip_htable, 4);
+    DECLARE_HASHTABLE(local_addrs4, 4);
+    DECLARE_HASHTABLE(local_addrs6, 4);
+    spinlock_t local_addrs4_lock;
+    spinlock_t local_addrs6_lock;
     struct mutex ip_lock;
     DECLARE_HASHTABLE(ip_mc_htable, 4);
     struct work_struct rx_mode_work;
@@ -1025,6 +1037,7 @@ void qeth_notify_cmd(struct qeth_cmd_buffer *iob, int reason);
 void qeth_put_cmd(struct qeth_cmd_buffer *iob);
 
 void qeth_schedule_recovery(struct qeth_card *);
+void qeth_flush_local_addrs(struct qeth_card *card);
 int qeth_poll(struct napi_struct *napi, int budget);
 void qeth_clear_ipacmd_list(struct qeth_card *);
 int qeth_qdio_clear_card(struct qeth_card *, int);
diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
index ef96890eea5c..6b5d42a4501c 100644
--- a/drivers/s390/net/qeth_core_main.c
+++ b/drivers/s390/net/qeth_core_main.c
@@ -26,6 +26,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -623,6 +624,187 @@ void qeth_notify_cmd(struct qeth_cmd_buffer *iob, int reason)
 }
 EXPORT_SYMBOL_GPL(qeth_notify_cmd);
 
+static void qeth_flush_local_addrs4(struct qeth_card *card)
+{
+    struct qeth_local_addr *addr;
+    struct hlist_node *tmp;
+    unsigned int i;
+
+    spin_lock_irq(&card->local_addrs4_lock);
+    hash_for_each_safe(card->local_addrs4, i, tmp, addr, hnode) {
+        hash_del_rcu(&addr->hnode);
+        kfree_rcu(addr, rcu);
+    }
+    spin_unlock_irq(&card->local_addrs4_lock);
+}
+
+static void qeth_flush_local_addrs6(struct qeth_card *card)
+{
+    struct qeth_local_addr *addr;
+    struct hlist_node *tmp;
+    unsigned int i;
+
+    spin_lock_irq(&card->local_addrs6_lock);
+    hash_for_each_safe(card->local_addrs6, i, tmp, addr, hnode) {
+        hash_del_rcu(&addr->hnode);
+        kfree_rcu(addr, rcu);
+    }
+    spin_unlock_irq(&card->local_addrs6_lock);
+}
+
+void qeth_flush_local_addrs(struct qeth_card *card)
+{
+    qeth_flush_local_addrs4(card);
+    qeth_flush_local_addrs6(card);
+}
+EXPORT_SYMBOL_GPL(qeth_flush_local_addrs);
+
+static void qeth_add_local_addrs4(struct qeth_card *card,
+                                  struct qeth_ipacmd_local_addrs4 *cmd)
+{
+    unsigned int i;
+
+    if (cmd->addr_length !=
+        sizeof_field(struct qeth_ipacmd_local_addr4, addr)) {
+        dev_err_ratelimited(&card->gdev->dev,
+                            "Dropped IPv4 ADD LOCAL ADDR event with bad length %u\n",
+                            cmd->addr_length);
+        return;
+    }
+
+    spin_lock(&card->local_addrs4_lock);
+    for (i = 0; i < cmd->count; i++) {
+        unsigned int key = ipv4_addr_hash(cmd->addrs[i].addr);
+        struct qeth_local_addr *addr;
+        bool duplicate = false;
+
+        hash_for_each_possible(card->local_addrs4, addr, hnode, key) {
+            if (addr->addr.s6_addr32[3] == cmd->addrs[i].addr) {
+                duplicate = true;
+                break;
+            }
+        }
+
+        if (duplicate)
+            continue;
+
+        addr = kmalloc(sizeof(*addr), GFP_ATOMIC);
+        if (!addr) {
+            dev_err(&card->gdev->dev,
+                    "Failed to allocate local addr object. Traffic to %pI4 might suffer.\n",
+                    &cmd->addrs[i].addr);
+            continue;
+        }
+
+        ipv6_addr_set(&addr->addr, 0, 0, 0, cmd->addrs[i].addr);
+        hash_add_rcu(card->local_addrs4, &addr->hnode, key);
+    }
+    spin_unlock(&card->local_addrs4_lock);
+}
+
+static void qeth_add_local_addrs6(struct qeth_card *card,
+                                  struct qeth_ipacmd_local_addrs6 *cmd)
+{
+    unsigned int i;
+
+    if (cmd->addr_length !=
+        sizeof_field(struct qeth_ipacmd_local_addr6, addr)) {
+        dev_err_ratelimited(&card->gdev->dev,
+                            "Dropped IPv6 ADD LOCAL ADDR event with bad length %u\n",
+                            cmd->addr_length);
+        return;
+    }
+
+    spin_lock(&card->local_addrs6_lock);
+    for (i = 0; i < cmd->count; i++) {
+        u32 key = ipv6_addr_hash(&cmd->addrs[i].addr);
+        struct qeth_local_addr *addr;
+        bool duplicate = false;
+
+        hash_for_each_possible(card->local_addrs6, addr, hnode, key) {
+            if (ipv6_addr_equal(&addr->addr, &cmd->addrs[i].addr)) {
+                duplicate = true;
+                break;
+            }
+        }
+
+        if (duplicate)
+            continue;
+
+        addr = kmalloc(sizeof(*addr), GFP_ATOMIC);
+        if (!addr) {
+            dev_err(&card->gdev->dev,
+                    "Failed to allocate local addr object. Traffic to %pI6c might suffer.\n",
+                    &cmd->addrs[i].addr);
+            continue;
+        }
+
+        addr->addr = cmd->addrs[i].addr;
+        hash_add_rcu(card->local_addrs6, &addr->hnode, key);
+    }
+    spin_unlock(&card->local_addrs6_lock);
+}
+
+static void qeth_del_local_addrs4(struct qeth_card *card,
+                                  struct qeth_ipacmd_local_addrs4 *cmd)
+{
+    unsigned int i;
+
+    if (cmd->addr_length !=
+        sizeof_field(struct qeth_ipacmd_local_addr4, addr)) {
+        dev_err_ratelimited(&card->gdev->dev,
+                            "Dropped IPv4 DEL LOCAL ADDR event with bad length %u\n",
+                            cmd->addr_length);
+        return;
+    }
+
+    spin_lock(&card->local_addrs4_lock);
+    for (i = 0; i < cmd->count; i++) {
+        struct qeth_ipacmd_local_addr4 *addr = &cmd->addrs[i];
+        unsigned int key = ipv4_addr_hash(addr->addr);
+        struct qeth_local_addr *tmp;
+
+        hash_for_each_possible(card->local_addrs4, tmp, hnode, key) {
+            if (tmp->addr.s6_addr32[3] == addr->addr) {
+                hash_del_rcu(&tmp->hnode);
+                kfree_rcu(tmp, rcu);
+                break;
+            }
+        }
+    }
+    spin_unlock(&card->local_addrs4_lock);
+}
+
+static void qeth_del_local_addrs6(struct qeth_card *card,
+                                  struct qeth_ipacmd_local_addrs6 *cmd)
+{
+    unsigned int i;
+
+    if (cmd->addr_length !=
+        sizeof_field(struct qeth_ipacmd_local_addr6, addr)) {
+        dev_err_ratelimited(&card->gdev->dev,
+                            "Dropped IPv6 DEL LOCAL ADDR event with bad length %u\n",
+                            cmd->addr_length);
+        return;
+    }
+
+    spin_lock(&card->local_addrs6_lock);
+    for (i = 0; i < cmd->count; i++) {
+        struct qeth_ipacmd_local_addr6 *addr = &cmd->addrs[i];
+        u32 key = ipv6_addr_hash(&addr->addr);
+        struct qeth_local_addr *tmp;
+
+        hash_for_each_possible(card->local_addrs6, tmp, hnode, key) {
+            if (ipv6_addr_equal(&tmp->addr, &addr->addr)) {
+                hash_del_rcu(&tmp->hnode);
+                kfree_rcu(tmp, rcu);
+                break;
+            }
+        }
+    }
+    spin_unlock(&card->local_addrs6_lock);
+}
+
 static void qeth_issue_ipa_msg(struct qeth_ipa_cmd *cmd, int rc,
         struct qeth_card *card)
 {
@@ -686,9 +868,19 @@ static struct qeth_ipa_cmd *qeth_check_ipa_data(struct qeth_card *card,
     case IPA_CMD_MODCCID:
         return cmd;
     case IPA_CMD_REGISTER_LOCAL_ADDR:
+        if (cmd->hdr.prot_version == QETH_PROT_IPV4)
+            qeth_add_local_addrs4(card, &cmd->data.local_addrs4);
+        else if (cmd->hdr.prot_version == QETH_PROT_IPV6)
+            qeth_add_local_addrs6(card, &cmd->data.local_addrs6);
+
         QETH_CARD_TEXT(card, 3, "irla");
         return NULL;
     case IPA_CMD_UNREGISTER_LOCAL_ADDR:
+        if (cmd->hdr.prot_version == QETH_PROT_IPV4)
+            qeth_del_local_addrs4(card, &cmd->data.local_addrs4);
+        else if (cmd->hdr.prot_version == QETH_PROT_IPV6)
+            qeth_del_local_addrs6(card, &cmd->data.local_addrs6);
+
         QETH_CARD_TEXT(card, 3, "urla");
         return NULL;
     default:
@@ -1376,6 +1568,10 @@ static void qeth_setup_card(struct qeth_card *card)
     qeth_init_qdio_info(card);
     INIT_DELAYED_WORK(&card->buffer_reclaim_work, qeth_buffer_reclaim_work);
     INIT_WORK(&card->close_dev_work, qeth_close_dev_handler);
+    hash_init(card->local_addrs4);
+    hash_init(card->local_addrs6);
+    spin_lock_init(&card->local_addrs4_lock);
+    spin_lock_init(&card->local_addrs6_lock);
 }
 
 static void qeth_core_sl_print(struct seq_file *m, struct service_level *slr)
@@ -6496,6 +6692,24 @@ void qeth_enable_hw_features(struct net_device *dev)
 }
 EXPORT_SYMBOL_GPL(qeth_enable_hw_features);
 
+static void qeth_check_restricted_features(struct qeth_card *card,
+                                           netdev_features_t changed,
+                                           netdev_features_t actual)
+{
+    netdev_features_t ipv6_features = NETIF_F_TSO6;
+    netdev_features_t ipv4_features = NETIF_F_TSO;
+
+    if (!card->info.has_lp2lp_cso_v6)
+        ipv6_features |= NETIF_F_IPV6_CSUM;
+    if (!card->info.has_lp2lp_cso_v4)
+        ipv4_features |= NETIF_F_IP_CSUM;
+
+    if ((changed & ipv6_features) && !(actual & ipv6_features))
+        qeth_flush_local_addrs6(card);
+    if ((changed & ipv4_features) && !(actual & ipv4_features))
+        qeth_flush_local_addrs4(card);
+}
+
 int qeth_set_features(struct net_device *dev, netdev_features_t features)
 {
     struct qeth_card *card = dev->ml_priv;
@@ -6537,6 +6751,9 @@ int qeth_set_features(struct net_device *dev, netdev_features_t features)
         changed ^= NETIF_F_TSO6;
     }
 
+    qeth_check_restricted_features(card, dev->features ^ features,
+                                   dev->features ^ changed);
+
     /* everything changed successfully? */
     if ((dev->features ^ features) == changed)
         return 0;
diff --git a/drivers/s390/net/qeth_core_mpc.h b/drivers/s390/net/qeth_core_mpc.h
index d89a04bfd8b0..9d6f39d8f9ab 100644
--- a/drivers/s390/net/qeth_core_mpc.h
+++ b/drivers/s390/net/qeth_core_mpc.h
@@ -772,6 +772,29 @@ struct qeth_ipacmd_addr_change {
     struct qeth_ipacmd_addr_change_entry entry[];
 } __packed;
 
+/* [UN]REGISTER_LOCAL_ADDRESS notifications */
+struct qeth_ipacmd_local_addr4 {
+    __be32 addr;
+    u32 flags;
+};
+
+struct qeth_ipacmd_local_addrs4 {
+    u32 count;
+    u32 addr_length;
+    struct qeth_ipacmd_local_addr4 addrs[];
+};
+
+struct qeth_ipacmd_local_addr6 {
+    struct in6_addr addr;
+    u32 flags;
+};
+
+struct qeth_ipacmd_local_addrs6 {
+    u32 count;
+    u32 addr_length;
+    struct qeth_ipacmd_local_addr6 addrs[];
+};
+
 /* Header for each IPA command */
 struct qeth_ipacmd_hdr {
     __u8 command;
@@ -803,6 +826,8 @@ struct qeth_ipa_cmd {
         struct qeth_ipacmd_setbridgeport sbp;
         struct qeth_ipacmd_addr_change addrchange;
         struct qeth_ipacmd_vnicc vnicc;
+        struct qeth_ipacmd_local_addrs4 local_addrs4;
+        struct qeth_ipacmd_local_addrs6 local_addrs6;
     } data;
 } __attribute__ ((packed));
 
diff --git a/drivers/s390/net/qeth_l2_main.c b/drivers/s390/net/qeth_l2_main.c
index 0bd5b09e7a22..47f624b37040 100644
--- a/drivers/s390/net/qeth_l2_main.c
+++ b/drivers/s390/net/qeth_l2_main.c
@@ -291,6 +291,7 @@ static void qeth_l2_stop_card(struct qeth_card *card)
     qeth_qdio_clear_card(card, 0);
     qeth_clear_working_pool_list(card);
     flush_workqueue(card->event_wq);
+    qeth_flush_local_addrs(card);
     card->info.promisc_mode = 0;
 }
 
diff --git a/drivers/s390/net/qeth_l3_main.c b/drivers/s390/net/qeth_l3_main.c
index 0742a749d26e..fec4ac41e946 100644
--- a/drivers/s390/net/qeth_l3_main.c
+++ b/drivers/s390/net/qeth_l3_main.c
@@ -1176,6 +1176,7 @@ static void qeth_l3_stop_card(struct qeth_card *card)
     qeth_qdio_clear_card(card, 0);
     qeth_clear_working_pool_list(card);
     flush_workqueue(card->event_wq);
+    qeth_flush_local_addrs(card);
     card->info.promisc_mode = 0;
 }

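[Aside, illustration only -- not part of the patch.] The cache above combines a
per-table spinlock for writers with RCU for lock-free readers, using the plain
<linux/hashtable.h> helpers. A condensed, self-contained sketch of that pattern,
with invented demo_* names standing in for the qeth structures:

#include <linux/hashtable.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct demo_entry {
    struct hlist_node hnode;
    struct rcu_head rcu;
    u32 key;
};

static DEFINE_SPINLOCK(demo_lock);
static DEFINE_HASHTABLE(demo_table, 4);    /* 2^4 buckets, like local_addrs4 */

/* Writer side: serialized by the spinlock, entries published with RCU. */
static int demo_add(u32 key)
{
    struct demo_entry *tmp, *entry;

    spin_lock(&demo_lock);
    hash_for_each_possible(demo_table, tmp, hnode, key) {
        if (tmp->key == key) {
            spin_unlock(&demo_lock);
            return 0;    /* duplicate, nothing to do */
        }
    }

    entry = kmalloc(sizeof(*entry), GFP_ATOMIC);
    if (!entry) {
        spin_unlock(&demo_lock);
        return -ENOMEM;
    }
    entry->key = key;
    hash_add_rcu(demo_table, &entry->hnode, key);
    spin_unlock(&demo_lock);
    return 0;
}

/* Reader side: lock-free under rcu_read_lock(), as the TX path does later. */
static bool demo_contains(u32 key)
{
    struct demo_entry *tmp;
    bool found = false;

    rcu_read_lock();
    hash_for_each_possible_rcu(demo_table, tmp, hnode, key) {
        if (tmp->key == key) {
            found = true;
            break;
        }
    }
    rcu_read_unlock();
    return found;
}

That the flush paths take the lock with spin_lock_irq() while the add/del
handlers use plain spin_lock() is consistent with the events being delivered
from the card's IRQ path while the flushes run from process context.
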
From patchwork Tue May 5 16:25:51 2020
X-Patchwork-Submitter: Julian Wiedmann
X-Patchwork-Id: 219859
From: Julian Wiedmann
To: David Miller
Cc: netdev, linux-s390, Heiko Carstens, Ursula Braun, Julian Wiedmann
Subject: [PATCH net-next 03/11] s390/qeth: add debugfs file for local IP addresses
Date: Tue, 5 May 2020 18:25:51 +0200
Message-Id: <20200505162559.14138-4-jwi@linux.ibm.com>
In-Reply-To: <20200505162559.14138-1-jwi@linux.ibm.com>
References: <20200505162559.14138-1-jwi@linux.ibm.com>

For debugging purposes, provide read access to the local_addr caches via
debug/qeth//local_addrs.

Signed-off-by: Julian Wiedmann
---
 drivers/s390/net/qeth_core.h      |  2 ++
 drivers/s390/net/qeth_core_main.c | 32 ++++++++++++++++++++++++++++++-
 2 files changed, 33 insertions(+), 1 deletion(-)

diff --git a/drivers/s390/net/qeth_core.h b/drivers/s390/net/qeth_core.h
index b92af3735dd4..3d8b8e0f2438 100644
--- a/drivers/s390/net/qeth_core.h
+++ b/drivers/s390/net/qeth_core.h
@@ -11,6 +11,7 @@
 #define __QETH_CORE_H__
 
 #include
+#include
 #include
 #include
 #include
@@ -797,6 +798,7 @@ struct qeth_card {
     struct qeth_channel data;
 
     struct net_device *dev;
+    struct dentry *debugfs;
     struct qeth_card_stats stats;
     struct qeth_card_info info;
     struct qeth_token token;
diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
index 6b5d42a4501c..771282cb7aef 100644
--- a/drivers/s390/net/qeth_core_main.c
+++ b/drivers/s390/net/qeth_core_main.c
@@ -61,6 +61,7 @@ EXPORT_SYMBOL_GPL(qeth_core_header_cache);
 static struct kmem_cache *qeth_qdio_outbuf_cache;
 
 static struct device *qeth_core_root_dev;
+static struct dentry *qeth_debugfs_root;
 static struct lock_class_key qdio_out_skb_queue_key;
 
 static void qeth_issue_next_read_cb(struct qeth_card *card,
@@ -805,6 +806,24 @@ static void qeth_del_local_addrs6(struct qeth_card *card,
     spin_unlock(&card->local_addrs6_lock);
 }
 
+static int qeth_debugfs_local_addr_show(struct seq_file *m, void *v)
+{
+    struct qeth_card *card = m->private;
+    struct qeth_local_addr *tmp;
+    unsigned int i;
+
+    rcu_read_lock();
+    hash_for_each_rcu(card->local_addrs4, i, tmp, hnode)
+        seq_printf(m, "%pI4\n", &tmp->addr.s6_addr32[3]);
+    hash_for_each_rcu(card->local_addrs6, i, tmp, hnode)
+        seq_printf(m, "%pI6c\n", &tmp->addr);
+    rcu_read_unlock();
+
+    return 0;
+}
+
+DEFINE_SHOW_ATTRIBUTE(qeth_debugfs_local_addr);
+
 static void qeth_issue_ipa_msg(struct qeth_ipa_cmd *cmd, int rc,
         struct qeth_card *card)
 {
@@ -1608,6 +1627,11 @@ static struct qeth_card *qeth_alloc_card(struct ccwgroup_device *gdev)
     if (!card->read_cmd)
         goto out_read_cmd;
 
+    card->debugfs = debugfs_create_dir(dev_name(&gdev->dev),
+                                       qeth_debugfs_root);
+    debugfs_create_file("local_addrs", 0400, card->debugfs, card,
+                        &qeth_debugfs_local_addr_fops);
+
     card->qeth_service_level.seq_print = qeth_core_sl_print;
     register_service_level(&card->qeth_service_level);
     return card;
@@ -5085,9 +5109,11 @@ static int qeth_qdio_establish(struct qeth_card *card)
 static void qeth_core_free_card(struct qeth_card *card)
 {
     QETH_CARD_TEXT(card, 2, "freecrd");
+
+    unregister_service_level(&card->qeth_service_level);
+    debugfs_remove_recursive(card->debugfs);
     qeth_put_cmd(card->read_cmd);
     destroy_workqueue(card->event_wq);
-    unregister_service_level(&card->qeth_service_level);
     dev_set_drvdata(&card->gdev->dev, NULL);
     kfree(card);
 }
@@ -6967,6 +6993,8 @@ static int __init qeth_core_init(void)
 
     pr_info("loading core functions\n");
 
+    qeth_debugfs_root = debugfs_create_dir("qeth", NULL);
+
     rc = qeth_register_dbf_views();
     if (rc)
         goto dbf_err;
@@ -7008,6 +7036,7 @@ static int __init qeth_core_init(void)
 register_err:
     qeth_unregister_dbf_views();
 dbf_err:
+    debugfs_remove_recursive(qeth_debugfs_root);
     pr_err("Initializing the qeth device driver failed\n");
     return rc;
 }
@@ -7021,6 +7050,7 @@ static void __exit qeth_core_exit(void)
     kmem_cache_destroy(qeth_core_header_cache);
    root_device_unregister(qeth_core_root_dev);
     qeth_unregister_dbf_views();
+    debugfs_remove_recursive(qeth_debugfs_root);
     pr_info("core functions removed\n");
 }

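[Aside, illustration only -- not part of the patch.] With debugfs mounted in
its usual place (/sys/kernel/debug), the new file can simply be read with cat
from the per-card directory; the directory name comes from dev_name() of the
ccwgroup device. The DEFINE_SHOW_ATTRIBUTE() helper used above hides the usual
seq_file boilerplate; a minimal stand-alone sketch of the same mechanism, with
invented demo_* names:

#include <linux/debugfs.h>
#include <linux/module.h>
#include <linux/seq_file.h>

static int demo_state_show(struct seq_file *m, void *v)
{
    /* m->private carries the pointer handed to debugfs_create_file(). */
    seq_printf(m, "private data at %p\n", m->private);
    return 0;
}
DEFINE_SHOW_ATTRIBUTE(demo_state);    /* generates demo_state_fops */

static struct dentry *demo_dir;

static int __init demo_init(void)
{
    demo_dir = debugfs_create_dir("demo", NULL);
    debugfs_create_file("state", 0400, demo_dir, NULL, &demo_state_fops);
    return 0;
}

static void __exit demo_exit(void)
{
    debugfs_remove_recursive(demo_dir);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");
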
From patchwork Tue May 5 16:25:53 2020
X-Patchwork-Submitter: Julian Wiedmann
X-Patchwork-Id: 219860
From: Julian Wiedmann
To: David Miller
Cc: netdev, linux-s390, Heiko Carstens, Ursula Braun, Julian Wiedmann
Subject: [PATCH net-next 05/11] s390/qeth: don't use restricted offloads for local traffic
Date: Tue, 5 May 2020 18:25:53 +0200
Message-Id: <20200505162559.14138-6-jwi@linux.ibm.com>
In-Reply-To: <20200505162559.14138-1-jwi@linux.ibm.com>
References: <20200505162559.14138-1-jwi@linux.ibm.com>

Current OSA models don't support TSO for traffic to local next-hops, and
some old models didn't offer TX CSO for such packets either.

So as part of .ndo_features_check, check if a packet's next-hop resides on
the same OSA Adapter. Opt out from affected HW offloads accordingly.

Signed-off-by: Julian Wiedmann
---
 drivers/s390/net/qeth_core_main.c | 84 +++++++++++++++++++++++++++++--
 drivers/s390/net/qeth_l2_main.c   |  1 +
 2 files changed, 81 insertions(+), 4 deletions(-)

diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
index 771282cb7aef..1f18b38047a0 100644
--- a/drivers/s390/net/qeth_core_main.c
+++ b/drivers/s390/net/qeth_core_main.c
@@ -806,6 +806,58 @@ static void qeth_del_local_addrs6(struct qeth_card *card,
     spin_unlock(&card->local_addrs6_lock);
 }
 
+static bool qeth_next_hop_is_local_v4(struct qeth_card *card,
+                                      struct sk_buff *skb)
+{
+    struct qeth_local_addr *tmp;
+    bool is_local = false;
+    unsigned int key;
+    __be32 next_hop;
+
+    if (hash_empty(card->local_addrs4))
+        return false;
+
+    rcu_read_lock();
+    next_hop = qeth_next_hop_v4_rcu(skb, qeth_dst_check_rcu(skb, 4));
+    key = ipv4_addr_hash(next_hop);
+
+    hash_for_each_possible_rcu(card->local_addrs4, tmp, hnode, key) {
+        if (tmp->addr.s6_addr32[3] == next_hop) {
+            is_local = true;
+            break;
+        }
+    }
+    rcu_read_unlock();
+
+    return is_local;
+}
+
+static bool qeth_next_hop_is_local_v6(struct qeth_card *card,
+                                      struct sk_buff *skb)
+{
+    struct qeth_local_addr *tmp;
+    struct in6_addr *next_hop;
+    bool is_local = false;
+    u32 key;
+
+    if (hash_empty(card->local_addrs6))
+        return false;
+
+    rcu_read_lock();
+    next_hop = qeth_next_hop_v6_rcu(skb, qeth_dst_check_rcu(skb, 6));
+    key = ipv6_addr_hash(next_hop);
+
+    hash_for_each_possible_rcu(card->local_addrs6, tmp, hnode, key) {
+        if (ipv6_addr_equal(&tmp->addr, next_hop)) {
+            is_local = true;
+            break;
+        }
+    }
+    rcu_read_unlock();
+
+    return is_local;
+}
+
 static int qeth_debugfs_local_addr_show(struct seq_file *m, void *v)
 {
     struct qeth_card *card = m->private;
@@ -6578,10 +6630,6 @@ static int qeth_set_csum_on(struct qeth_card *card, enum qeth_ipa_funcs cstype,
     if (lp2lp)
         *lp2lp = qeth_ipa_caps_enabled(&caps, QETH_IPA_CHECKSUM_LP2LP);
 
-    if (lp2lp && !*lp2lp)
-        dev_warn(&card->gdev->dev,
-             "Hardware checksumming is performed only if %s and its peer use different OSA Express 3 ports\n",
-             QETH_CARD_IFNAME(card));
     return 0;
 }
 
@@ -6816,6 +6864,34 @@ netdev_features_t qeth_features_check(struct sk_buff *skb,
                       struct net_device *dev,
                       netdev_features_t features)
 {
+    /* Traffic with local next-hop is not eligible for some offloads: */
+    if (skb->ip_summed == CHECKSUM_PARTIAL) {
+        struct qeth_card *card = dev->ml_priv;
+        netdev_features_t restricted = 0;
+
+        if (skb_is_gso(skb) && !netif_needs_gso(skb, features))
+            restricted |= NETIF_F_ALL_TSO;
+
+        switch (vlan_get_protocol(skb)) {
+        case htons(ETH_P_IP):
+            if (!card->info.has_lp2lp_cso_v4)
+                restricted |= NETIF_F_IP_CSUM;
+
+            if (restricted && qeth_next_hop_is_local_v4(card, skb))
+                features &= ~restricted;
+            break;
+        case htons(ETH_P_IPV6):
+            if (!card->info.has_lp2lp_cso_v6)
+                restricted |= NETIF_F_IPV6_CSUM;
+
+            if (restricted && qeth_next_hop_is_local_v6(card, skb))
+                features &= ~restricted;
+            break;
+        default:
+            break;
+        }
+    }
+
     /* GSO segmentation builds skbs with
      * a (small) linear part for the headers, and
      * page frags for the data.
diff --git a/drivers/s390/net/qeth_l2_main.c b/drivers/s390/net/qeth_l2_main.c
index 47f624b37040..da47e423e1b1 100644
--- a/drivers/s390/net/qeth_l2_main.c
+++ b/drivers/s390/net/qeth_l2_main.c
@@ -710,6 +710,7 @@ static int qeth_l2_setup_netdev(struct qeth_card *card)
 
     if (card->dev->hw_features & (NETIF_F_TSO | NETIF_F_TSO6)) {
         card->dev->needed_headroom = sizeof(struct qeth_hdr_tso);
+        netif_keep_dst(card->dev);
         netif_set_gso_max_size(card->dev,
                                PAGE_SIZE * (QDIO_MAX_ELEMENTS_PER_BUFFER - 1));
     }

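[Aside, illustration only -- not part of the patch.] The hook above relies on
two things: the core falls back to software checksumming/segmentation for any
feature bits that .ndo_features_check clears for a given skb, and the skb still
carries a dst entry at that point (hence the netif_keep_dst() call in the L2
setup path). A generic sketch of the same shape of hook, with an invented
predicate standing in for qeth_next_hop_is_local_v4/v6():

#include <linux/netdevice.h>
#include <linux/skbuff.h>

static bool demo_tx_needs_sw_fallback(const struct sk_buff *skb,
                                      const struct net_device *dev)
{
    return false;    /* placeholder for a real lookup against driver state */
}

static netdev_features_t demo_features_check(struct sk_buff *skb,
                                             struct net_device *dev,
                                             netdev_features_t features)
{
    /* Clearing bits here makes the core do checksum/GSO in software
     * before the skb reaches .ndo_start_xmit.
     */
    if (skb->ip_summed == CHECKSUM_PARTIAL &&
        demo_tx_needs_sw_fallback(skb, dev))
        features &= ~(NETIF_F_CSUM_MASK | NETIF_F_GSO_MASK);

    return features;
}

static const struct net_device_ops demo_netdev_ops = {
    .ndo_features_check = demo_features_check,
};
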
From patchwork Tue May 5 16:25:56 2020
X-Patchwork-Submitter: Julian Wiedmann
X-Patchwork-Id: 219863
From: Julian Wiedmann
To: David Miller
Cc: netdev, linux-s390, Heiko Carstens, Ursula Braun, Julian Wiedmann
Subject: [PATCH net-next 08/11] s390/qeth: set TX IRQ marker on last buffer in a group
Date: Tue, 5 May 2020 18:25:56 +0200
Message-Id: <20200505162559.14138-9-jwi@linux.ibm.com>
In-Reply-To: <20200505162559.14138-1-jwi@linux.ibm.com>
References: <20200505162559.14138-1-jwi@linux.ibm.com>

When qeth_flush_buffers() gets called for a group of TX buffers (currently
up to 2 for OSA-style devices), the code iterates over each buffer for some
final processing. During this processing, it sets the TX IRQ marker on the
leading buffer rather than the last one. This can result in delayed TX
completion of the trailing buffers.

So pull the IRQ marker code out of the loop, and apply it to the final
buffer.

Signed-off-by: Julian Wiedmann
---
 drivers/s390/net/qeth_core_main.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
index 4d1d053eebb7..164cc7f377fc 100644
--- a/drivers/s390/net/qeth_core_main.c
+++ b/drivers/s390/net/qeth_core_main.c
@@ -3617,11 +3617,11 @@ static int qeth_switch_to_nonpacking_if_needed(struct qeth_qdio_out_q *queue)
 static void qeth_flush_buffers(struct qeth_qdio_out_q *queue, int index,
                int count)
 {
+    struct qeth_qdio_out_buffer *buf = queue->bufs[index];
+    unsigned int qdio_flags = QDIO_FLAG_SYNC_OUTPUT;
     struct qeth_card *card = queue->card;
-    struct qeth_qdio_out_buffer *buf;
     int rc;
     int i;
-    unsigned int qdio_flags;
 
     for (i = index; i < index + count; ++i) {
         unsigned int bidx = QDIO_BUFNR(i);
@@ -3638,9 +3638,10 @@ static void qeth_flush_buffers(struct qeth_qdio_out_q *queue, int index,
         if (IS_IQD(card)) {
             skb_queue_walk(&buf->skb_list, skb)
                 skb_tx_timestamp(skb);
-            continue;
         }
+    }
 
+    if (!IS_IQD(card)) {
         if (!queue->do_pack) {
             if ((atomic_read(&queue->used_buffers) >=
                  (QETH_HIGH_WATERMARK_PACK -
@@ -3665,12 +3666,12 @@ static void qeth_flush_buffers(struct qeth_qdio_out_q *queue, int index,
                 buf->buffer->element[0].sflags |= SBAL_SFLAGS0_PCI_REQ;
             }
         }
+
+        if (atomic_read(&queue->set_pci_flags_count))
+            qdio_flags |= QDIO_FLAG_PCI_OUT;
     }
 
     QETH_TXQ_STAT_INC(queue, doorbell);
-    qdio_flags = QDIO_FLAG_SYNC_OUTPUT;
-    if (atomic_read(&queue->set_pci_flags_count))
-        qdio_flags |= QDIO_FLAG_PCI_OUT;
     rc = do_QDIO(CARD_DDEV(queue->card), qdio_flags, queue->queue_no,
              index, count);

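[Aside, illustration only -- not qeth code.] The fix follows a common
descriptor-ring idiom: when several buffers are flushed as one group, only the
last one needs to request a completion interrupt, since on hardware that
completes buffers in order its completion implies the earlier buffers are done
as well. A device-neutral sketch of the idea, with invented names:

#include <linux/bits.h>
#include <linux/types.h>

struct demo_desc {
    u32 flags;
#define DEMO_DESC_IRQ    BIT(0)
};

static void demo_flush_batch(struct demo_desc *ring, unsigned int index,
                             unsigned int count, bool want_irq)
{
    unsigned int i;

    for (i = index; i < index + count; i++)
        ring[i].flags = 0;            /* per-descriptor setup */

    /* Request one interrupt for the whole group, on the final descriptor. */
    if (want_irq)
        ring[index + count - 1].flags |= DEMO_DESC_IRQ;
}
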
From patchwork Tue May 5 16:25:58 2020
X-Patchwork-Submitter: Julian Wiedmann
X-Patchwork-Id: 219862
From: Julian Wiedmann
To: David Miller
Cc: netdev, linux-s390, Heiko Carstens, Ursula Braun, Julian Wiedmann
Subject: [PATCH net-next 10/11] s390/qeth: allow reset via ethtool
Date: Tue, 5 May 2020 18:25:58 +0200
Message-Id: <20200505162559.14138-11-jwi@linux.ibm.com>
In-Reply-To: <20200505162559.14138-1-jwi@linux.ibm.com>
References: <20200505162559.14138-1-jwi@linux.ibm.com>

Implement the .reset callback. Only a full reset is supported.

Signed-off-by: Julian Wiedmann
Reviewed-by: Alexandra Winter
---
 drivers/s390/net/qeth_ethtool.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/drivers/s390/net/qeth_ethtool.c b/drivers/s390/net/qeth_ethtool.c
index ebdc03210608..0d12002d0615 100644
--- a/drivers/s390/net/qeth_ethtool.c
+++ b/drivers/s390/net/qeth_ethtool.c
@@ -193,6 +193,21 @@ static void qeth_get_drvinfo(struct net_device *dev,
          CARD_RDEV_ID(card), CARD_WDEV_ID(card), CARD_DDEV_ID(card));
 }
 
+static int qeth_reset(struct net_device *dev, u32 *flags)
+{
+    struct qeth_card *card = dev->ml_priv;
+    int rc;
+
+    if (*flags != ETH_RESET_ALL)
+        return -EINVAL;
+
+    rc = qeth_schedule_recovery(card);
+    if (!rc)
+        *flags = 0;
+
+    return rc;
+}
+
 static void qeth_get_channels(struct net_device *dev,
                   struct ethtool_channels *channels)
 {
@@ -522,6 +537,7 @@ const struct ethtool_ops qeth_ethtool_ops = {
     .get_ethtool_stats = qeth_get_ethtool_stats,
     .get_sset_count = qeth_get_sset_count,
     .get_drvinfo = qeth_get_drvinfo,
+    .reset = qeth_reset,
     .get_channels = qeth_get_channels,
     .set_channels = qeth_set_channels,
     .get_ts_info = qeth_get_ts_info,
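
[Aside, illustration only -- not part of the patch.] Once the callback is wired
up, user space can trigger it either through the ethtool utility (typically
"ethtool --reset <dev> all", assuming an ethtool build with reset support) or
directly via the ETHTOOL_RESET ioctl. A small stand-alone sketch of the latter;
"eth0" is a placeholder interface name:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

int main(void)
{
    struct ethtool_value eval = { .cmd = ETHTOOL_RESET, .data = ETH_RESET_ALL };
    struct ifreq ifr;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    if (fd < 0)
        return 1;

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);
    ifr.ifr_data = (char *)&eval;

    if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) {
        perror("ETHTOOL_RESET");
        close(fd);
        return 1;
    }

    /* On return, eval.data holds the reset flags the driver did NOT handle. */
    printf("unhandled reset flags: 0x%x\n", eval.data);
    close(fd);
    return 0;
}

For qeth the callback just schedules the existing recovery path, so a
successful reset comes back with all flags cleared.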