From patchwork Mon Jun  7 18:20:11 2021
X-Patchwork-Id: 456467
From: Karsten Graul
To: David Miller, Jakub Kicinski
Cc: Heiko Carstens, Stefan Raspl,
    netdev@vger.kernel.org, linux-s390@vger.kernel.org
Subject: [PATCH net-next 1/4] net/smc: Add SMC statistics support
Date: Mon,  7 Jun 2021 20:20:11 +0200
Message-Id: <20210607182014.3384922-2-kgraul@linux.ibm.com>
In-Reply-To: <20210607182014.3384922-1-kgraul@linux.ibm.com>
References: <20210607182014.3384922-1-kgraul@linux.ibm.com>

From: Guvenc Gulce

Add the ability to collect SMC statistics. Per-CPU variables are used
to collect the statistics for better performance and to reduce
concurrency pitfalls. The statistics-gathering code is implemented in
macros to increase code reuse and readability.

Signed-off-by: Guvenc Gulce
Signed-off-by: Karsten Graul
---
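The whole series rests on the per-CPU counter pattern: every counter lives
in a per-CPU instance of struct smc_stats, writers bump only their own
CPU's copy (no lock, no shared cache line), and readers fold all copies
together when the numbers are requested. A minimal sketch of that pattern,
using hypothetical demo_* names rather than the symbols this patch
introduces:

#include <linux/types.h>
#include <linux/errno.h>
#include <linux/percpu.h>
#include <linux/cpumask.h>

struct demo_stats {
	u64 tx_cnt;
	u64 rx_cnt;
};

static struct demo_stats __percpu *demo_stats;	/* one instance per CPU */

static int demo_init(void)
{
	demo_stats = alloc_percpu(struct demo_stats);
	return demo_stats ? 0 : -ENOMEM;
}

/* writers touch only their own CPU's copy: no locks, no cache bouncing */
static void demo_inc_tx(void)
{
	this_cpu_inc(demo_stats->tx_cnt);
}

/* readers fold all per-CPU copies into one total on demand */
static u64 demo_sum_tx(void)
{
	u64 sum = 0;
	int cpu;

	for_each_possible_cpu(cpu)
		sum += per_cpu_ptr(demo_stats, cpu)->tx_cnt;
	return sum;
}

Patch 2 performs the read side exactly this way when it dumps the counters
over netlink.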
 net/smc/Makefile    |   2 +-
 net/smc/af_smc.c    |  89 ++++++++++++----
 net/smc/smc_core.c  |  13 ++-
 net/smc/smc_rx.c    |   8 ++
 net/smc/smc_stats.c |  35 ++++++
 net/smc/smc_stats.h | 253 ++++++++++++++++++++++++++++++++++++++++++++
 net/smc/smc_tx.c    |  16 ++-
 7 files changed, 395 insertions(+), 21 deletions(-)
 create mode 100644 net/smc/smc_stats.c
 create mode 100644 net/smc/smc_stats.h

diff --git a/net/smc/Makefile b/net/smc/Makefile
index 77e54fe42b1c..99a0186cba5b 100644
--- a/net/smc/Makefile
+++ b/net/smc/Makefile
@@ -2,4 +2,4 @@
 obj-$(CONFIG_SMC)	+= smc.o
 obj-$(CONFIG_SMC_DIAG)	+= smc_diag.o
 smc-y := af_smc.o smc_pnet.o smc_ib.o smc_clc.o smc_core.o smc_wr.o smc_llc.o
-smc-y += smc_cdc.o smc_tx.o smc_rx.o smc_close.o smc_ism.o smc_netlink.o
+smc-y += smc_cdc.o smc_tx.o smc_rx.o smc_close.o smc_ism.o smc_netlink.o smc_stats.o
diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
index 5eff7cccceff..efeaed384769 100644
--- a/net/smc/af_smc.c
+++ b/net/smc/af_smc.c
@@ -49,6 +49,7 @@
 #include "smc_tx.h"
 #include "smc_rx.h"
 #include "smc_close.h"
+#include "smc_stats.h"
 
 static DEFINE_MUTEX(smc_server_lgr_pending);	/* serialize link group
						 * creation on server
@@ -508,9 +509,42 @@ static void smc_link_save_peer_info(struct smc_link *link,
 	link->peer_mtu = clc->r0.qp_mtu;
 }
 
-static void smc_switch_to_fallback(struct smc_sock *smc)
+static void smc_stat_inc_fback_rsn_cnt(struct smc_sock *smc,
+				       struct smc_stats_fback *fback_arr)
+{
+	int cnt;
+
+	for (cnt = 0; cnt < SMC_MAX_FBACK_RSN_CNT; cnt++) {
+		if (fback_arr[cnt].fback_code == smc->fallback_rsn) {
+			fback_arr[cnt].count++;
+			break;
+		}
+		if (!fback_arr[cnt].fback_code) {
+			fback_arr[cnt].fback_code = smc->fallback_rsn;
+			fback_arr[cnt].count++;
+			break;
+		}
+	}
+}
+
+static void smc_stat_fallback(struct smc_sock *smc)
+{
+	mutex_lock(&smc_stat_fback_rsn);
+	if (smc->listen_smc) {
+		smc_stat_inc_fback_rsn_cnt(smc, fback_rsn.srv);
+		fback_rsn.srv_fback_cnt++;
+	} else {
+		smc_stat_inc_fback_rsn_cnt(smc, fback_rsn.clnt);
+		fback_rsn.clnt_fback_cnt++;
+	}
+	mutex_unlock(&smc_stat_fback_rsn);
+}
+
+static void smc_switch_to_fallback(struct smc_sock *smc, int reason_code)
 {
 	smc->use_fallback = true;
+	smc->fallback_rsn = reason_code;
+	smc_stat_fallback(smc);
 	if (smc->sk.sk_socket && smc->sk.sk_socket->file) {
 		smc->clcsock->file = smc->sk.sk_socket->file;
 		smc->clcsock->file->private_data = smc->clcsock;
@@ -522,8 +556,7 @@ static void smc_switch_to_fallback(struct smc_sock *smc)
 /* fall back during connect */
 static int smc_connect_fallback(struct smc_sock *smc, int reason_code)
 {
-	smc_switch_to_fallback(smc);
-	smc->fallback_rsn = reason_code;
+	smc_switch_to_fallback(smc, reason_code);
 	smc_copy_sock_settings_to_clc(smc);
 	smc->connect_nonblock = 0;
 	if (smc->sk.sk_state == SMC_INIT)
@@ -538,6 +571,7 @@ static int smc_connect_decline_fallback(struct smc_sock *smc, int reason_code,
 	int rc;
 
 	if (reason_code < 0) { /* error, fallback is not possible */
+		this_cpu_inc(smc_stats->clnt_hshake_err_cnt);
 		if (smc->sk.sk_state == SMC_INIT)
 			sock_put(&smc->sk); /* passive closing */
 		return reason_code;
@@ -545,6 +579,7 @@ static int smc_connect_decline_fallback(struct smc_sock *smc, int reason_code,
 	if (reason_code != SMC_CLC_DECL_PEERDECL) {
 		rc = smc_clc_send_decline(smc, reason_code, version);
 		if (rc < 0) {
+			this_cpu_inc(smc_stats->clnt_hshake_err_cnt);
 			if (smc->sk.sk_state == SMC_INIT)
 				sock_put(&smc->sk); /* passive closing */
 			return rc;
@@ -992,6 +1027,7 @@ static int __smc_connect(struct smc_sock *smc)
 	if (rc)
 		goto vlan_cleanup;
 
+	SMC_STAT_CLNT_SUCC_INC(aclc);
 	smc_connect_ism_vlan_cleanup(smc, ini);
 	kfree(buf);
 	kfree(ini);
@@ -1308,6 +1344,7 @@ static void smc_listen_out_err(struct smc_sock *new_smc)
 {
 	struct sock *newsmcsk = &new_smc->sk;
 
+	this_cpu_inc(smc_stats->srv_hshake_err_cnt);
 	if (newsmcsk->sk_state == SMC_INIT)
 		sock_put(&new_smc->sk); /* passive closing */
 	newsmcsk->sk_state = SMC_CLOSED;
@@ -1325,8 +1362,7 @@ static void smc_listen_decline(struct smc_sock *new_smc, int reason_code,
 		smc_listen_out_err(new_smc);
 		return;
 	}
-	smc_switch_to_fallback(new_smc);
-	new_smc->fallback_rsn = reason_code;
+	smc_switch_to_fallback(new_smc, reason_code);
 	if (reason_code && reason_code != SMC_CLC_DECL_PEERDECL) {
 		if (smc_clc_send_decline(new_smc, reason_code, version) < 0) {
 			smc_listen_out_err(new_smc);
@@ -1699,8 +1735,7 @@ static void smc_listen_work(struct work_struct *work)
 
 	/* check if peer is smc capable */
 	if (!tcp_sk(newclcsock->sk)->syn_smc) {
-		smc_switch_to_fallback(new_smc);
-		new_smc->fallback_rsn = SMC_CLC_DECL_PEERNOSMC;
+		smc_switch_to_fallback(new_smc, SMC_CLC_DECL_PEERNOSMC);
 		smc_listen_out_connected(new_smc);
 		return;
 	}
@@ -1778,6 +1813,7 @@ static void smc_listen_work(struct work_struct *work)
 	}
 	smc_conn_save_peer_info(new_smc, cclc);
 	smc_listen_out_connected(new_smc);
+	SMC_STAT_SERV_SUCC_INC(ini);
 	goto out_free;
 
 out_unlock:
@@ -1984,18 +2020,19 @@ static int smc_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
 
 	if (msg->msg_flags & MSG_FASTOPEN) {
 		if (sk->sk_state == SMC_INIT && !smc->connect_nonblock) {
-			smc_switch_to_fallback(smc);
-			smc->fallback_rsn = SMC_CLC_DECL_OPTUNSUPP;
+			smc_switch_to_fallback(smc, SMC_CLC_DECL_OPTUNSUPP);
 		} else {
 			rc = -EINVAL;
 			goto out;
 		}
 	}
-	if (smc->use_fallback)
+	if (smc->use_fallback) {
 		rc = smc->clcsock->ops->sendmsg(smc->clcsock, msg, len);
-	else
+	} else {
 		rc = smc_tx_sendmsg(smc, msg, len);
+		SMC_STAT_TX_PAYLOAD(smc, len, rc);
+	}
 out:
 	release_sock(sk);
 	return rc;
@@ -2030,6 +2067,7 @@ static int smc_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
 	} else {
 		msg->msg_namelen = 0;
 		rc = smc_rx_recvmsg(smc, msg, NULL, len, flags);
+		SMC_STAT_RX_PAYLOAD(smc, rc, rc);
 	}
 
 out:
@@ -2194,8 +2232,7 @@ static int smc_setsockopt(struct socket *sock, int level, int optname,
 	case TCP_FASTOPEN_NO_COOKIE:
 		/* option not supported by SMC */
 		if (sk->sk_state == SMC_INIT && !smc->connect_nonblock) {
-			smc_switch_to_fallback(smc);
-			smc->fallback_rsn = SMC_CLC_DECL_OPTUNSUPP;
+			smc_switch_to_fallback(smc, SMC_CLC_DECL_OPTUNSUPP);
 		} else {
 			rc = -EINVAL;
 		}
@@ -2204,18 +2241,22 @@ static int smc_setsockopt(struct socket *sock, int level, int optname,
 		if (sk->sk_state != SMC_INIT &&
 		    sk->sk_state != SMC_LISTEN &&
 		    sk->sk_state != SMC_CLOSED) {
-			if (val)
+			if (val) {
+				SMC_STAT_INC(!smc->conn.lnk, ndly_cnt);
 				mod_delayed_work(smc->conn.lgr->tx_wq,
 						 &smc->conn.tx_work, 0);
+			}
 		}
 		break;
 	case TCP_CORK:
 		if (sk->sk_state != SMC_INIT &&
 		    sk->sk_state != SMC_LISTEN &&
 		    sk->sk_state != SMC_CLOSED) {
-			if (!val)
+			if (!val) {
+				SMC_STAT_INC(!smc->conn.lnk, cork_cnt);
 				mod_delayed_work(smc->conn.lgr->tx_wq,
 						 &smc->conn.tx_work, 0);
+			}
 		}
 		break;
 	case TCP_DEFER_ACCEPT:
@@ -2338,11 +2379,13 @@ static ssize_t smc_sendpage(struct socket *sock, struct page *page,
 		goto out;
 	}
 	release_sock(sk);
-	if (smc->use_fallback)
+	if (smc->use_fallback) {
 		rc = kernel_sendpage(smc->clcsock, page, offset,
 				     size, flags);
-	else
+	} else {
+		SMC_STAT_INC(!smc->conn.lnk, sendpage_cnt);
 		rc = sock_no_sendpage(sock, page, offset, size, flags);
+	}
 
 out:
 	return rc;
@@ -2391,6 +2434,7 @@ static ssize_t smc_splice_read(struct socket *sock, loff_t *ppos,
 			flags = MSG_DONTWAIT;
 		else
 			flags = 0;
+		SMC_STAT_INC(!smc->conn.lnk, splice_cnt);
 		rc = smc_rx_recvmsg(smc, NULL, pipe, len, flags);
 	}
 out:
@@ -2514,10 +2558,16 @@ static int __init smc_init(void)
 	if (!smc_close_wq)
 		goto out_alloc_hs_wq;
 
+	rc = smc_stats_init();
+	if (rc) {
+		pr_err("%s: smc_stats_init fails with %d\n", __func__, rc);
+		goto out_alloc_wqs;
+	}
+
 	rc = smc_core_init();
 	if (rc) {
 		pr_err("%s: smc_core_init fails with %d\n", __func__, rc);
-		goto out_alloc_wqs;
+		goto out_smc_stat;
 	}
 
 	rc = smc_llc_init();
@@ -2569,6 +2619,8 @@ static int __init smc_init(void)
 	proto_unregister(&smc_proto);
 out_core:
 	smc_core_exit();
+out_smc_stat:
+	smc_stats_exit();
out_alloc_wqs:
 	destroy_workqueue(smc_close_wq);
 out_alloc_hs_wq:
@@ -2591,6 +2643,7 @@ static void __exit smc_exit(void)
 	smc_ib_unregister_client();
 	destroy_workqueue(smc_close_wq);
 	destroy_workqueue(smc_hs_wq);
+	smc_stats_exit();
 	proto_unregister(&smc_proto6);
 	proto_unregister(&smc_proto);
 	smc_pnet_exit();
diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
index 317bc2c90fab..d69f58f670a1 100644
--- a/net/smc/smc_core.c
+++ b/net/smc/smc_core.c
@@ -33,6 +33,7 @@
 #include "smc_close.h"
 #include "smc_ism.h"
 #include "smc_netlink.h"
+#include "smc_stats.h"
 
 #define SMC_LGR_NUM_INCR	256
 #define SMC_LGR_FREE_DELAY_SERV	(600 * HZ)
@@ -2029,6 +2030,7 @@ static int __smc_buf_create(struct smc_sock *smc, bool is_smcd, bool is_rmb)
 	struct smc_link_group *lgr = conn->lgr;
 	struct list_head *buf_list;
 	int bufsize, bufsize_short;
+	bool is_dgraded = false;
 	struct mutex *lock;	/* lock buffer list */
 	int sk_buf_size;
@@ -2056,6 +2058,8 @@ static int __smc_buf_create(struct smc_sock *smc, bool is_smcd, bool is_rmb)
 		/* check for reusable slot in the link group */
 		buf_desc = smc_buf_get_slot(bufsize_short, lock, buf_list);
 		if (buf_desc) {
+			SMC_STAT_RMB_SIZE(is_smcd, is_rmb, bufsize);
+			SMC_STAT_BUF_REUSE(is_smcd, is_rmb);
 			memset(buf_desc->cpu_addr, 0, bufsize);
 			break; /* found reusable slot */
 		}
@@ -2067,9 +2071,16 @@ static int __smc_buf_create(struct smc_sock *smc, bool is_smcd, bool is_rmb)
 		if (PTR_ERR(buf_desc) == -ENOMEM)
 			break;
-		if (IS_ERR(buf_desc))
+		if (IS_ERR(buf_desc)) {
+			if (!is_dgraded) {
+				is_dgraded = true;
+				SMC_STAT_RMB_DOWNGRADED(is_smcd, is_rmb);
+			}
 			continue;
+		}
 
+		SMC_STAT_RMB_ALLOC(is_smcd, is_rmb);
+		SMC_STAT_RMB_SIZE(is_smcd, is_rmb, bufsize);
 		buf_desc->used = 1;
 		mutex_lock(lock);
 		list_add(&buf_desc->list, buf_list);
diff --git a/net/smc/smc_rx.c b/net/smc/smc_rx.c
index fcfac59f8b72..ce1ae39923b1 100644
--- a/net/smc/smc_rx.c
+++ b/net/smc/smc_rx.c
@@ -21,6 +21,7 @@
 #include "smc_cdc.h"
 #include "smc_tx.h" /* smc_tx_consumer_update() */
 #include "smc_rx.h"
+#include "smc_stats.h"
 
 /* callback implementation to wakeup consumers blocked with smc_rx_wait().
  * indirectly called by smc_cdc_msg_recv_action().
@@ -227,6 +228,7 @@ static int smc_rx_recv_urg(struct smc_sock *smc, struct msghdr *msg, int len,
 	    conn->urg_state == SMC_URG_READ)
 		return -EINVAL;
 
+	SMC_STAT_INC(!conn->lnk, urg_data_cnt);
 	if (conn->urg_state == SMC_URG_VALID) {
 		if (!(flags & MSG_PEEK))
 			smc->conn.urg_state = SMC_URG_READ;
@@ -303,6 +305,12 @@ int smc_rx_recvmsg(struct smc_sock *smc, struct msghdr *msg,
 	timeo = sock_rcvtimeo(sk, flags & MSG_DONTWAIT);
 	target = sock_rcvlowat(sk, flags & MSG_WAITALL, len);
 
+	readable = atomic_read(&conn->bytes_to_rcv);
+	if (readable >= conn->rmb_desc->len)
+		SMC_STAT_RMB_RX_FULL(!conn->lnk);
+
+	if (len < readable)
+		SMC_STAT_RMB_RX_SIZE_SMALL(!conn->lnk);
 	/* we currently use 1 RMBE per RMB, so RMBE == RMB base addr */
 	rcvbuf_base = conn->rx_off + conn->rmb_desc->cpu_addr;
diff --git a/net/smc/smc_stats.c b/net/smc/smc_stats.c
new file mode 100644
index 000000000000..76e938388520
--- /dev/null
+++ b/net/smc/smc_stats.c
@@ -0,0 +1,35 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Shared Memory Communications over RDMA (SMC-R) and RoCE
+ *
+ * SMC statistics netlink routines
+ *
+ * Copyright IBM Corp. 2021
+ *
+ * Author(s):  Guvenc Gulce
+ */
+#include <linux/init.h>
+#include <linux/mutex.h>
+#include <linux/percpu.h>
+#include <linux/ctype.h>
+#include "smc_stats.h"
+
+/* serialize fallback reason statistic gathering */
+DEFINE_MUTEX(smc_stat_fback_rsn);
+struct smc_stats __percpu *smc_stats;	/* per cpu counters for SMC */
+struct smc_stats_reason fback_rsn;
+
+int __init smc_stats_init(void)
+{
+	memset(&fback_rsn, 0, sizeof(fback_rsn));
+	smc_stats = alloc_percpu(struct smc_stats);
+	if (!smc_stats)
+		return -ENOMEM;
+
+	return 0;
+}
+
+void smc_stats_exit(void)
+{
+	free_percpu(smc_stats);
+}
diff --git a/net/smc/smc_stats.h b/net/smc/smc_stats.h
new file mode 100644
index 000000000000..928372114cf1
--- /dev/null
+++ b/net/smc/smc_stats.h
@@ -0,0 +1,253 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Shared Memory Communications over RDMA (SMC-R) and RoCE
+ *
+ * Macros for SMC statistics
+ *
+ * Copyright IBM Corp. 2021
+ *
+ * Author(s):  Guvenc Gulce
+ */
+
+#ifndef NET_SMC_SMC_STATS_H_
+#define NET_SMC_SMC_STATS_H_
+#include <linux/init.h>
+#include <linux/mutex.h>
+#include <linux/percpu.h>
+#include <linux/ctype.h>
+#include <linux/smc.h>
+
+#include "smc_clc.h"
+
+#define SMC_MAX_FBACK_RSN_CNT 30
+
+extern struct smc_stats __percpu *smc_stats;	/* per cpu counters for SMC */
+extern struct smc_stats_reason fback_rsn;
+extern struct mutex smc_stat_fback_rsn;
+
+enum {
+	SMC_BUF_8K,
+	SMC_BUF_16K,
+	SMC_BUF_32K,
+	SMC_BUF_64K,
+	SMC_BUF_128K,
+	SMC_BUF_256K,
+	SMC_BUF_512K,
+	SMC_BUF_1024K,
+	SMC_BUF_G_1024K,
+	SMC_BUF_MAX,
+};
+
+struct smc_stats_fback {
+	int	fback_code;
+	u16	count;
+};
+
+struct smc_stats_reason {
+	struct	smc_stats_fback srv[SMC_MAX_FBACK_RSN_CNT];
+	struct	smc_stats_fback clnt[SMC_MAX_FBACK_RSN_CNT];
+	u64	srv_fback_cnt;
+	u64	clnt_fback_cnt;
+};
+
+struct smc_stats_rmbcnt {
+	u64	buf_size_small_peer_cnt;
+	u64	buf_size_small_cnt;
+	u64	buf_full_peer_cnt;
+	u64	buf_full_cnt;
+	u64	reuse_cnt;
+	u64	alloc_cnt;
+	u64	dgrade_cnt;
+};
+
+struct smc_stats_memsize {
+	u64	buf[SMC_BUF_MAX];
+};
+
+struct smc_stats_tech {
+	struct smc_stats_memsize tx_rmbsize;
+	struct smc_stats_memsize rx_rmbsize;
+	struct smc_stats_memsize tx_pd;
+	struct smc_stats_memsize rx_pd;
+	struct smc_stats_rmbcnt rmb_tx;
+	struct smc_stats_rmbcnt rmb_rx;
+	u64	clnt_v1_succ_cnt;
+	u64	clnt_v2_succ_cnt;
+	u64	srv_v1_succ_cnt;
+	u64	srv_v2_succ_cnt;
+	u64	sendpage_cnt;
+	u64	urg_data_cnt;
+	u64	splice_cnt;
+	u64	cork_cnt;
+	u64	ndly_cnt;
+	u64	rx_bytes;
+	u64	tx_bytes;
+	u64	rx_cnt;
+	u64	tx_cnt;
+};
+
+struct smc_stats {
+	struct smc_stats_tech	smc[2];
+	u64	clnt_hshake_err_cnt;
+	u64	srv_hshake_err_cnt;
+};
+
+#define SMC_STAT_PAYLOAD_SUB(_tech, key, _len, _rc) \
+do { \
+	typeof(_tech) t = (_tech); \
+	typeof(_len) l = (_len); \
+	int _pos = fls64((l) >> 13); \
+	typeof(_rc) r = (_rc); \
+	int m = SMC_BUF_MAX - 1; \
+	this_cpu_inc((*smc_stats).smc[t].key ## _cnt); \
+	if (r <= 0) \
+		break; \
+	_pos = (_pos < m) ? ((l == 1 << (_pos + 12)) ? _pos - 1 : _pos) : m; \
+	this_cpu_inc((*smc_stats).smc[t].key ## _pd.buf[_pos]); \
+	this_cpu_add((*smc_stats).smc[t].key ## _bytes, r); \
+} \
+while (0)
+
+#define SMC_STAT_TX_PAYLOAD(_smc, length, rcode) \
+do { \
+	typeof(_smc) __smc = _smc; \
+	typeof(length) _len = (length); \
+	typeof(rcode) _rc = (rcode); \
+	bool is_smcd = !__smc->conn.lnk; \
+	if (is_smcd) \
+		SMC_STAT_PAYLOAD_SUB(SMC_TYPE_D, tx, _len, _rc); \
+	else \
+		SMC_STAT_PAYLOAD_SUB(SMC_TYPE_R, tx, _len, _rc); \
+} \
+while (0)
+
+#define SMC_STAT_RX_PAYLOAD(_smc, length, rcode) \
+do { \
+	typeof(_smc) __smc = _smc; \
+	typeof(length) _len = (length); \
+	typeof(rcode) _rc = (rcode); \
+	bool is_smcd = !__smc->conn.lnk; \
+	if (is_smcd) \
+		SMC_STAT_PAYLOAD_SUB(SMC_TYPE_D, rx, _len, _rc); \
+	else \
+		SMC_STAT_PAYLOAD_SUB(SMC_TYPE_R, rx, _len, _rc); \
+} \
+while (0)
+
+#define SMC_STAT_RMB_SIZE_SUB(_tech, k, _len) \
+do { \
+	typeof(_len) _l = (_len); \
+	typeof(_tech) t = (_tech); \
+	int _pos = fls((_l) >> 13); \
+	int m = SMC_BUF_MAX - 1; \
+	_pos = (_pos < m) ? ((_l == 1 << (_pos + 12)) ? _pos - 1 : _pos) : m; \
+	this_cpu_inc((*smc_stats).smc[t].k ## _rmbsize.buf[_pos]); \
+} \
+while (0)
+
+#define SMC_STAT_RMB_SUB(type, t, key) \
+	this_cpu_inc((*smc_stats).smc[t].rmb ## _ ## key.type ## _cnt)
+
+#define SMC_STAT_RMB_SIZE(_is_smcd, _is_rx, _len) \
+do { \
+	typeof(_is_smcd) is_d = (_is_smcd); \
+	typeof(_is_rx) is_r = (_is_rx); \
+	typeof(_len) l = (_len); \
+	if ((is_d) && (is_r)) \
+		SMC_STAT_RMB_SIZE_SUB(SMC_TYPE_D, rx, l); \
+	if ((is_d) && !(is_r)) \
+		SMC_STAT_RMB_SIZE_SUB(SMC_TYPE_D, tx, l); \
+	if (!(is_d) && (is_r)) \
+		SMC_STAT_RMB_SIZE_SUB(SMC_TYPE_R, rx, l); \
+	if (!(is_d) && !(is_r)) \
+		SMC_STAT_RMB_SIZE_SUB(SMC_TYPE_R, tx, l); \
+} \
+while (0)
+
+#define SMC_STAT_RMB(type, _is_smcd, _is_rx) \
+do { \
+	typeof(_is_smcd) is_d = (_is_smcd); \
+	typeof(_is_rx) is_r = (_is_rx); \
+	if ((is_d) && (is_r)) \
+		SMC_STAT_RMB_SUB(type, SMC_TYPE_D, rx); \
+	if ((is_d) && !(is_r)) \
+		SMC_STAT_RMB_SUB(type, SMC_TYPE_D, tx); \
+	if (!(is_d) && (is_r)) \
+		SMC_STAT_RMB_SUB(type, SMC_TYPE_R, rx); \
+	if (!(is_d) && !(is_r)) \
+		SMC_STAT_RMB_SUB(type, SMC_TYPE_R, tx); \
+} \
+while (0)
+
+#define SMC_STAT_BUF_REUSE(is_smcd, is_rx) \
+	SMC_STAT_RMB(reuse, is_smcd, is_rx)
+
+#define SMC_STAT_RMB_ALLOC(is_smcd, is_rx) \
+	SMC_STAT_RMB(alloc, is_smcd, is_rx)
+
+#define SMC_STAT_RMB_DOWNGRADED(is_smcd, is_rx) \
+	SMC_STAT_RMB(dgrade, is_smcd, is_rx)
+
+#define SMC_STAT_RMB_TX_PEER_FULL(is_smcd) \
+	SMC_STAT_RMB(buf_full_peer, is_smcd, false)
+
+#define SMC_STAT_RMB_TX_FULL(is_smcd) \
+	SMC_STAT_RMB(buf_full, is_smcd, false)
+
+#define SMC_STAT_RMB_TX_PEER_SIZE_SMALL(is_smcd) \
+	SMC_STAT_RMB(buf_size_small_peer, is_smcd, false)
+
+#define SMC_STAT_RMB_TX_SIZE_SMALL(is_smcd) \
+	SMC_STAT_RMB(buf_size_small, is_smcd, false)
+
+#define SMC_STAT_RMB_RX_SIZE_SMALL(is_smcd) \
+	SMC_STAT_RMB(buf_size_small, is_smcd, true)
+
+#define SMC_STAT_RMB_RX_FULL(is_smcd) \
+	SMC_STAT_RMB(buf_full, is_smcd, true)
+
+#define SMC_STAT_INC(is_smcd, type) \
+do { \
+	if ((is_smcd)) \
+		this_cpu_inc(smc_stats->smc[SMC_TYPE_D].type); \
+	else \
+		this_cpu_inc(smc_stats->smc[SMC_TYPE_R].type); \
+} \
+while (0)
+
+#define SMC_STAT_CLNT_SUCC_INC(_aclc) \
+do { \
+	typeof(_aclc) acl = (_aclc); \
+	bool is_v2 = (acl->hdr.version == SMC_V2); \
+	bool is_smcd = (acl->hdr.typev1 == SMC_TYPE_D); \
+	if (is_v2 && is_smcd) \
+		this_cpu_inc(smc_stats->smc[SMC_TYPE_D].clnt_v2_succ_cnt); \
+	else if (is_v2 && !is_smcd) \
+		this_cpu_inc(smc_stats->smc[SMC_TYPE_R].clnt_v2_succ_cnt); \
+	else if (!is_v2 && is_smcd) \
+		this_cpu_inc(smc_stats->smc[SMC_TYPE_D].clnt_v1_succ_cnt); \
+	else if (!is_v2 && !is_smcd) \
+		this_cpu_inc(smc_stats->smc[SMC_TYPE_R].clnt_v1_succ_cnt); \
+} \
+while (0)
+
+#define SMC_STAT_SERV_SUCC_INC(_ini) \
+do { \
+	typeof(_ini) i = (_ini); \
+	bool is_v2 = (i->smcd_version & SMC_V2); \
+	bool is_smcd = (i->is_smcd); \
+	if (is_v2 && is_smcd) \
+		this_cpu_inc(smc_stats->smc[SMC_TYPE_D].srv_v2_succ_cnt); \
+	else if (is_v2 && !is_smcd) \
+		this_cpu_inc(smc_stats->smc[SMC_TYPE_R].srv_v2_succ_cnt); \
+	else if (!is_v2 && is_smcd) \
+		this_cpu_inc(smc_stats->smc[SMC_TYPE_D].srv_v1_succ_cnt); \
+	else if (!is_v2 && !is_smcd) \
+		this_cpu_inc(smc_stats->smc[SMC_TYPE_R].srv_v1_succ_cnt); \
+} \
+while (0)
+
+int smc_stats_init(void) __init;
+void smc_stats_exit(void);
+
+#endif /* NET_SMC_SMC_STATS_H_ */
diff --git a/net/smc/smc_tx.c b/net/smc/smc_tx.c
index 4532c16bf85e..a043544d715f 100644
--- a/net/smc/smc_tx.c
+++ b/net/smc/smc_tx.c
@@ -27,6 +27,7 @@
 #include "smc_close.h"
 #include "smc_ism.h"
 #include "smc_tx.h"
+#include "smc_stats.h"
 
 #define SMC_TX_WORK_DELAY	0
 #define SMC_TX_CORK_DELAY	(HZ >> 2)	/* 250 ms */
@@ -45,6 +46,8 @@ static void smc_tx_write_space(struct sock *sk)
 
 	/* similar to sk_stream_write_space */
 	if (atomic_read(&smc->conn.sndbuf_space) && sock) {
+		if (test_bit(SOCK_NOSPACE, &sock->flags))
+			SMC_STAT_RMB_TX_FULL(!smc->conn.lnk);
 		clear_bit(SOCK_NOSPACE, &sock->flags);
 		rcu_read_lock();
 		wq = rcu_dereference(sk->sk_wq);
@@ -151,6 +154,15 @@ int smc_tx_sendmsg(struct smc_sock *smc, struct msghdr *msg, size_t len)
 		goto out_err;
 	}
 
+	if (len > conn->sndbuf_desc->len)
+		SMC_STAT_RMB_TX_SIZE_SMALL(!conn->lnk);
+
+	if (len > conn->peer_rmbe_size)
+		SMC_STAT_RMB_TX_PEER_SIZE_SMALL(!conn->lnk);
+
+	if (msg->msg_flags & MSG_OOB)
+		SMC_STAT_INC(!conn->lnk, urg_data_cnt);
+
 	while (msg_data_left(msg)) {
 		if (sk->sk_state == SMC_INIT)
 			return -ENOTCONN;
@@ -419,8 +431,10 @@ static int smc_tx_rdma_writes(struct smc_connection *conn,
 	/* destination: RMBE */
 	/* cf. snd_wnd */
 	rmbespace = atomic_read(&conn->peer_rmbe_space);
-	if (rmbespace <= 0)
+	if (rmbespace <= 0) {
+		SMC_STAT_RMB_TX_PEER_FULL(!conn->lnk);
 		return 0;
+	}
 	smc_curs_copy(&prod, &conn->local_tx_ctrl.prod, conn);
 	smc_curs_copy(&cons, &conn->local_rx_ctrl.cons, conn);
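A note on the bucket arithmetic in SMC_STAT_PAYLOAD_SUB and
SMC_STAT_RMB_SIZE_SUB above: fls64(len >> 13) picks the histogram slot,
and the follow-up expression pulls exact powers of two down into the lower
bucket, so bucket 0 counts sizes up to 8K, bucket 1 up to 16K, and so on,
with everything above 1M capped into SMC_BUF_G_1024K. A small self-contained
sketch of the same computation for a few sizes, runnable in user space
(fls64_demo and smc_buf_bucket are illustrative stand-ins, not part of the
patch, and 1ULL is used where the macro relies on int arithmetic):

#include <stdio.h>
#include <stdint.h>

#define SMC_BUF_MAX 9	/* SMC_BUF_8K .. SMC_BUF_G_1024K, as in smc_stats.h */

/* like the kernel's fls64(): 1-based index of the highest set bit, 0 if none */
static int fls64_demo(uint64_t x)
{
	int pos = 0;

	while (x) {
		x >>= 1;
		pos++;
	}
	return pos;
}

/* mirrors the bucket computation of the macros above */
static int smc_buf_bucket(uint64_t len)
{
	int pos = fls64_demo(len >> 13);
	int m = SMC_BUF_MAX - 1;

	return (pos < m) ? ((len == 1ULL << (pos + 12)) ? pos - 1 : pos) : m;
}

int main(void)
{
	/* prints 0 1 3 8: 8192 -> 8K, 10000 -> 16K, 65536 -> 64K, 5000000 -> >1M */
	printf("%d %d %d %d\n",
	       smc_buf_bucket(8192), smc_buf_bucket(10000),
	       smc_buf_bucket(65536), smc_buf_bucket(5000000));
	return 0;
}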
#include "smc_tx.h" +#include "smc_stats.h" #define SMC_TX_WORK_DELAY 0 #define SMC_TX_CORK_DELAY (HZ >> 2) /* 250 ms */ @@ -45,6 +46,8 @@ static void smc_tx_write_space(struct sock *sk) /* similar to sk_stream_write_space */ if (atomic_read(&smc->conn.sndbuf_space) && sock) { + if (test_bit(SOCK_NOSPACE, &sock->flags)) + SMC_STAT_RMB_TX_FULL(!smc->conn.lnk); clear_bit(SOCK_NOSPACE, &sock->flags); rcu_read_lock(); wq = rcu_dereference(sk->sk_wq); @@ -151,6 +154,15 @@ int smc_tx_sendmsg(struct smc_sock *smc, struct msghdr *msg, size_t len) goto out_err; } + if (len > conn->sndbuf_desc->len) + SMC_STAT_RMB_TX_SIZE_SMALL(!conn->lnk); + + if (len > conn->peer_rmbe_size) + SMC_STAT_RMB_TX_PEER_SIZE_SMALL(!conn->lnk); + + if (msg->msg_flags & MSG_OOB) + SMC_STAT_INC(!conn->lnk, urg_data_cnt); + while (msg_data_left(msg)) { if (sk->sk_state == SMC_INIT) return -ENOTCONN; @@ -419,8 +431,10 @@ static int smc_tx_rdma_writes(struct smc_connection *conn, /* destination: RMBE */ /* cf. snd_wnd */ rmbespace = atomic_read(&conn->peer_rmbe_space); - if (rmbespace <= 0) + if (rmbespace <= 0) { + SMC_STAT_RMB_TX_PEER_FULL(!conn->lnk); return 0; + } smc_curs_copy(&prod, &conn->local_tx_ctrl.prod, conn); smc_curs_copy(&cons, &conn->local_rx_ctrl.cons, conn); From patchwork Mon Jun 7 18:20:12 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Karsten Graul X-Patchwork-Id: 455686 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A0D92C48BDF for ; Mon, 7 Jun 2021 18:20:41 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 8D9C861003 for ; Mon, 7 Jun 2021 18:20:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230306AbhFGSWc (ORCPT ); Mon, 7 Jun 2021 14:22:32 -0400 Received: from mx0a-001b2d01.pphosted.com ([148.163.156.1]:57352 "EHLO mx0a-001b2d01.pphosted.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230432AbhFGSW3 (ORCPT ); Mon, 7 Jun 2021 14:22:29 -0400 Received: from pps.filterd (m0098393.ppops.net [127.0.0.1]) by mx0a-001b2d01.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id 157I4C2l135274; Mon, 7 Jun 2021 14:20:36 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ibm.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding; s=pp1; bh=kgm3VGzvhetdSNTxDoF2a/ph/L8tdXrhcwaonGMRb3U=; b=WEDuvMXZKfxXAAVKgy/8AWYQ0+pUjwEdETJv7lS+lTZPDdv674FCa6zx+rINbLm0Jz/p DVj1iaG42pVV/dwdd1YecjOEM6uIMs46yiR4Q0wN8I78FW9VPd5uHqQ0VCBl0z/AFuyk pPbk9XnKoJY6KqEcHdV8qkfpxLASWhuPv5V1K1ayodECfRZ3mrcUbN3BoO5SuVzpfTJ+ O5P5uFwuMrslO01Nk56s7gfFh7Rt4ZfYxPYjdQxp4vFlNuY4KgAOVuMusdCtVVlNtGxL SeYpQ22+DSsveZtQwc8+82cT0fDEbX/AvJDDEZp2UDSNJmbPHSN997cHLy2W/7tPrA6t QQ== Received: from ppma02fra.de.ibm.com (47.49.7a9f.ip4.static.sl-reverse.com [159.122.73.71]) by mx0a-001b2d01.pphosted.com with ESMTP id 391r4ggx1p-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Mon, 07 Jun 2021 14:20:36 -0400 Received: from 
pps.filterd (ppma02fra.de.ibm.com [127.0.0.1]) by ppma02fra.de.ibm.com (8.16.1.2/8.16.1.2) with SMTP id 157IDMnN017342; Mon, 7 Jun 2021 18:20:34 GMT Received: from b06avi18878370.portsmouth.uk.ibm.com (b06avi18878370.portsmouth.uk.ibm.com [9.149.26.194]) by ppma02fra.de.ibm.com with ESMTP id 3900w8gkf1-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Mon, 07 Jun 2021 18:20:34 +0000 Received: from d06av23.portsmouth.uk.ibm.com (d06av23.portsmouth.uk.ibm.com [9.149.105.59]) by b06avi18878370.portsmouth.uk.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id 157IJiFO35717432 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Mon, 7 Jun 2021 18:19:44 GMT Received: from d06av23.portsmouth.uk.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id F1C96A4055; Mon, 7 Jun 2021 18:20:30 +0000 (GMT) Received: from d06av23.portsmouth.uk.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id B3993A4059; Mon, 7 Jun 2021 18:20:30 +0000 (GMT) Received: from tuxmaker.boeblingen.de.ibm.com (unknown [9.152.85.9]) by d06av23.portsmouth.uk.ibm.com (Postfix) with ESMTP; Mon, 7 Jun 2021 18:20:30 +0000 (GMT) From: Karsten Graul To: David Miller , Jakub Kicinski Cc: Heiko Carstens , Stefan Raspl , netdev@vger.kernel.org, linux-s390@vger.kernel.org Subject: [PATCH net-next 2/4] net/smc: Add netlink support for SMC statistics Date: Mon, 7 Jun 2021 20:20:12 +0200 Message-Id: <20210607182014.3384922-3-kgraul@linux.ibm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210607182014.3384922-1-kgraul@linux.ibm.com> References: <20210607182014.3384922-1-kgraul@linux.ibm.com> MIME-Version: 1.0 X-TM-AS-GCONF: 00 X-Proofpoint-GUID: xj-BqKlX6XYIL5Iix0RTe-1EsJVyMCOW X-Proofpoint-ORIG-GUID: xj-BqKlX6XYIL5Iix0RTe-1EsJVyMCOW X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.391, 18.0.761 definitions=2021-06-07_14:2021-06-04,2021-06-07 signatures=0 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 priorityscore=1501 lowpriorityscore=0 clxscore=1015 mlxscore=0 adultscore=0 impostorscore=0 spamscore=0 suspectscore=0 bulkscore=0 phishscore=0 mlxlogscore=999 malwarescore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2104190000 definitions=main-2106070125 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Guvenc Gulce Add the netlink function which collects the statistics information and delivers it to the userspace. 
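All of the fill helpers in this patch follow the same netlink shape: open a
nested attribute container, emit the u64 counters with nla_put_u64_64bit()
plus an explicit pad attribute for 64-bit alignment, and cancel the whole
nest if the skb runs out of room. A minimal sketch of that shape around one
of the real attribute IDs; demo_fill() is a hypothetical name and assumes
the SMC_NLA_STATS_* enums added to <linux/smc.h> by this patch:

#include <net/netlink.h>
#include <linux/smc.h>

static int demo_fill(struct sk_buff *skb, u64 reuse_cnt, int nest_type)
{
	struct nlattr *attrs;

	attrs = nla_nest_start(skb, nest_type);	/* open nested container */
	if (!attrs)
		return -EMSGSIZE;
	/* u64 values carry an explicit pad attribute for alignment */
	if (nla_put_u64_64bit(skb, SMC_NLA_STATS_RMB_REUSE_CNT,
			      reuse_cnt, SMC_NLA_STATS_RMB_PAD))
		goto err;
	nla_nest_end(skb, attrs);		/* commit the nest */
	return 0;
err:
	nla_nest_cancel(skb, attrs);		/* roll back the partial nest */
	return -EMSGSIZE;
}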
 include/uapi/linux/smc.h |  69 ++++++++++
 net/smc/smc_netlink.c    |   6 +
 net/smc/smc_stats.c      | 279 +++++++++++++++++++++++++++++++++++++++
 net/smc/smc_stats.h      |   1 +
 4 files changed, 355 insertions(+)

diff --git a/include/uapi/linux/smc.h b/include/uapi/linux/smc.h
index 3e68da07fba2..f32f11b30963 100644
--- a/include/uapi/linux/smc.h
+++ b/include/uapi/linux/smc.h
@@ -47,6 +47,7 @@ enum {
 	SMC_NETLINK_GET_LGR_SMCD,
 	SMC_NETLINK_GET_DEV_SMCD,
 	SMC_NETLINK_GET_DEV_SMCR,
+	SMC_NETLINK_GET_STATS,
 };
 
 /* SMC_GENL_FAMILY top level attributes */
@@ -58,6 +59,7 @@ enum {
 	SMC_GEN_LGR_SMCD,		/* nest */
 	SMC_GEN_DEV_SMCD,		/* nest */
 	SMC_GEN_DEV_SMCR,		/* nest */
+	SMC_GEN_STATS,			/* nest */
 	__SMC_GEN_MAX,
 	SMC_GEN_MAX = __SMC_GEN_MAX - 1
 };
@@ -159,4 +161,71 @@ enum {
 	SMC_NLA_DEV_MAX = __SMC_NLA_DEV_MAX - 1
 };
 
+/* SMC_NLA_STATS_T_TX(RX)_RMB_SIZE nested attributes */
+/* SMC_NLA_STATS_TX(RX)PLOAD_SIZE nested attributes */
+enum {
+	SMC_NLA_STATS_PLOAD_PAD,
+	SMC_NLA_STATS_PLOAD_8K,		/* u64 */
+	SMC_NLA_STATS_PLOAD_16K,	/* u64 */
+	SMC_NLA_STATS_PLOAD_32K,	/* u64 */
+	SMC_NLA_STATS_PLOAD_64K,	/* u64 */
+	SMC_NLA_STATS_PLOAD_128K,	/* u64 */
+	SMC_NLA_STATS_PLOAD_256K,	/* u64 */
+	SMC_NLA_STATS_PLOAD_512K,	/* u64 */
+	SMC_NLA_STATS_PLOAD_1024K,	/* u64 */
+	SMC_NLA_STATS_PLOAD_G_1024K,	/* u64 */
+	__SMC_NLA_STATS_PLOAD_MAX,
+	SMC_NLA_STATS_PLOAD_MAX = __SMC_NLA_STATS_PLOAD_MAX - 1
+};
+
+/* SMC_NLA_STATS_T_TX(RX)_RMB_STATS nested attributes */
+enum {
+	SMC_NLA_STATS_RMB_PAD,
+	SMC_NLA_STATS_RMB_SIZE_SM_PEER_CNT,	/* u64 */
+	SMC_NLA_STATS_RMB_SIZE_SM_CNT,		/* u64 */
+	SMC_NLA_STATS_RMB_FULL_PEER_CNT,	/* u64 */
+	SMC_NLA_STATS_RMB_FULL_CNT,		/* u64 */
+	SMC_NLA_STATS_RMB_REUSE_CNT,		/* u64 */
+	SMC_NLA_STATS_RMB_ALLOC_CNT,		/* u64 */
+	SMC_NLA_STATS_RMB_DGRADE_CNT,		/* u64 */
+	__SMC_NLA_STATS_RMB_MAX,
+	SMC_NLA_STATS_RMB_MAX = __SMC_NLA_STATS_RMB_MAX - 1
+};
+
+/* SMC_NLA_STATS_SMCD_TECH and _SMCR_TECH nested attributes */
+enum {
+	SMC_NLA_STATS_T_PAD,
+	SMC_NLA_STATS_T_TX_RMB_SIZE,	/* nest */
+	SMC_NLA_STATS_T_RX_RMB_SIZE,	/* nest */
+	SMC_NLA_STATS_T_TXPLOAD_SIZE,	/* nest */
+	SMC_NLA_STATS_T_RXPLOAD_SIZE,	/* nest */
+	SMC_NLA_STATS_T_TX_RMB_STATS,	/* nest */
+	SMC_NLA_STATS_T_RX_RMB_STATS,	/* nest */
+	SMC_NLA_STATS_T_CLNT_V1_SUCC,	/* u64 */
+	SMC_NLA_STATS_T_CLNT_V2_SUCC,	/* u64 */
+	SMC_NLA_STATS_T_SRV_V1_SUCC,	/* u64 */
+	SMC_NLA_STATS_T_SRV_V2_SUCC,	/* u64 */
+	SMC_NLA_STATS_T_SENDPAGE_CNT,	/* u64 */
+	SMC_NLA_STATS_T_SPLICE_CNT,	/* u64 */
+	SMC_NLA_STATS_T_CORK_CNT,	/* u64 */
+	SMC_NLA_STATS_T_NDLY_CNT,	/* u64 */
+	SMC_NLA_STATS_T_URG_DATA_CNT,	/* u64 */
+	SMC_NLA_STATS_T_RX_BYTES,	/* u64 */
+	SMC_NLA_STATS_T_TX_BYTES,	/* u64 */
+	SMC_NLA_STATS_T_RX_CNT,		/* u64 */
+	SMC_NLA_STATS_T_TX_CNT,		/* u64 */
+	__SMC_NLA_STATS_T_MAX,
+	SMC_NLA_STATS_T_MAX = __SMC_NLA_STATS_T_MAX - 1
+};
+
+/* SMC_GEN_STATS attributes */
+enum {
+	SMC_NLA_STATS_PAD,
+	SMC_NLA_STATS_SMCD_TECH,	/* nest */
+	SMC_NLA_STATS_SMCR_TECH,	/* nest */
+	SMC_NLA_STATS_CLNT_HS_ERR_CNT,	/* u64 */
+	SMC_NLA_STATS_SRV_HS_ERR_CNT,	/* u64 */
+	__SMC_NLA_STATS_MAX,
+	SMC_NLA_STATS_MAX = __SMC_NLA_STATS_MAX - 1
+};
 #endif /* _UAPI_LINUX_SMC_H */
diff --git a/net/smc/smc_netlink.c b/net/smc/smc_netlink.c
index 140419a19dbf..30e30b23370f 100644
--- a/net/smc/smc_netlink.c
+++ b/net/smc/smc_netlink.c
@@ -19,6 +19,7 @@
 #include "smc_core.h"
 #include "smc_ism.h"
 #include "smc_ib.h"
+#include "smc_stats.h"
 #include "smc_netlink.h"
 
 #define SMC_CMD_MAX_ATTR 1
@@ -55,6 +56,11 @@ static const struct genl_ops smc_gen_nl_ops[] = {
 		/* can be retrieved by unprivileged users */
 		.dumpit = smcr_nl_get_device,
 	},
+	{
+		.cmd = SMC_NETLINK_GET_STATS,
+		/* can be retrieved by unprivileged users */
+		.dumpit = smc_nl_get_stats,
+	},
 };
 
 static const struct nla_policy smc_gen_nl_policy[2] = {
diff --git a/net/smc/smc_stats.c b/net/smc/smc_stats.c
index 76e938388520..72119d3d8558 100644
--- a/net/smc/smc_stats.c
+++ b/net/smc/smc_stats.c
@@ -12,6 +12,10 @@
 #include <linux/mutex.h>
 #include <linux/percpu.h>
 #include <linux/ctype.h>
+#include <linux/smc.h>
+#include <net/genetlink.h>
+#include <net/sock.h>
+#include "smc_netlink.h"
 #include "smc_stats.h"
 
 /* serialize fallback reason statistic gathering */
@@ -33,3 +37,278 @@ void smc_stats_exit(void)
 {
 	free_percpu(smc_stats);
 }
+
+static int smc_nl_fill_stats_rmb_data(struct sk_buff *skb,
+				      struct smc_stats *stats, int tech,
+				      int type)
+{
+	struct smc_stats_rmbcnt *stats_rmb_cnt;
+	struct nlattr *attrs;
+
+	if (type == SMC_NLA_STATS_T_TX_RMB_STATS)
+		stats_rmb_cnt = &stats->smc[tech].rmb_tx;
+	else
+		stats_rmb_cnt = &stats->smc[tech].rmb_rx;
+
+	attrs = nla_nest_start(skb, type);
+	if (!attrs)
+		goto errout;
+	if (nla_put_u64_64bit(skb, SMC_NLA_STATS_RMB_REUSE_CNT,
+			      stats_rmb_cnt->reuse_cnt,
+			      SMC_NLA_STATS_RMB_PAD))
+		goto errattr;
+	if (nla_put_u64_64bit(skb, SMC_NLA_STATS_RMB_SIZE_SM_PEER_CNT,
+			      stats_rmb_cnt->buf_size_small_peer_cnt,
+			      SMC_NLA_STATS_RMB_PAD))
+		goto errattr;
+	if (nla_put_u64_64bit(skb, SMC_NLA_STATS_RMB_SIZE_SM_CNT,
+			      stats_rmb_cnt->buf_size_small_cnt,
+			      SMC_NLA_STATS_RMB_PAD))
+		goto errattr;
+	if (nla_put_u64_64bit(skb, SMC_NLA_STATS_RMB_FULL_PEER_CNT,
+			      stats_rmb_cnt->buf_full_peer_cnt,
+			      SMC_NLA_STATS_RMB_PAD))
+		goto errattr;
+	if (nla_put_u64_64bit(skb, SMC_NLA_STATS_RMB_FULL_CNT,
+			      stats_rmb_cnt->buf_full_cnt,
+			      SMC_NLA_STATS_RMB_PAD))
+		goto errattr;
+	if (nla_put_u64_64bit(skb, SMC_NLA_STATS_RMB_ALLOC_CNT,
+			      stats_rmb_cnt->alloc_cnt,
+			      SMC_NLA_STATS_RMB_PAD))
+		goto errattr;
+	if (nla_put_u64_64bit(skb, SMC_NLA_STATS_RMB_DGRADE_CNT,
+			      stats_rmb_cnt->dgrade_cnt,
+			      SMC_NLA_STATS_RMB_PAD))
+		goto errattr;
+
+	nla_nest_end(skb, attrs);
+	return 0;
+
+errattr:
+	nla_nest_cancel(skb, attrs);
+errout:
+	return -EMSGSIZE;
+}
+
+static int smc_nl_fill_stats_bufsize_data(struct sk_buff *skb,
+					  struct smc_stats *stats, int tech,
+					  int type)
+{
+	struct smc_stats_memsize *stats_pload;
+	struct nlattr *attrs;
+
+	if (type == SMC_NLA_STATS_T_TXPLOAD_SIZE)
+		stats_pload = &stats->smc[tech].tx_pd;
+	else if (type == SMC_NLA_STATS_T_RXPLOAD_SIZE)
+		stats_pload = &stats->smc[tech].rx_pd;
+	else if (type == SMC_NLA_STATS_T_TX_RMB_SIZE)
+		stats_pload = &stats->smc[tech].tx_rmbsize;
+	else if (type == SMC_NLA_STATS_T_RX_RMB_SIZE)
+		stats_pload = &stats->smc[tech].rx_rmbsize;
+	else
+		goto errout;
+
+	attrs = nla_nest_start(skb, type);
+	if (!attrs)
+		goto errout;
+	if (nla_put_u64_64bit(skb, SMC_NLA_STATS_PLOAD_8K,
+			      stats_pload->buf[SMC_BUF_8K],
+			      SMC_NLA_STATS_PLOAD_PAD))
+		goto errattr;
+	if (nla_put_u64_64bit(skb, SMC_NLA_STATS_PLOAD_16K,
+			      stats_pload->buf[SMC_BUF_16K],
+			      SMC_NLA_STATS_PLOAD_PAD))
+		goto errattr;
+	if (nla_put_u64_64bit(skb, SMC_NLA_STATS_PLOAD_32K,
+			      stats_pload->buf[SMC_BUF_32K],
+			      SMC_NLA_STATS_PLOAD_PAD))
+		goto errattr;
+	if (nla_put_u64_64bit(skb, SMC_NLA_STATS_PLOAD_64K,
+			      stats_pload->buf[SMC_BUF_64K],
+			      SMC_NLA_STATS_PLOAD_PAD))
+		goto errattr;
+	if (nla_put_u64_64bit(skb, SMC_NLA_STATS_PLOAD_128K,
+			      stats_pload->buf[SMC_BUF_128K],
+			      SMC_NLA_STATS_PLOAD_PAD))
+		goto errattr;
+	if (nla_put_u64_64bit(skb, SMC_NLA_STATS_PLOAD_256K,
+			      stats_pload->buf[SMC_BUF_256K],
+			      SMC_NLA_STATS_PLOAD_PAD))
+		goto errattr;
+	if (nla_put_u64_64bit(skb, SMC_NLA_STATS_PLOAD_512K,
+			      stats_pload->buf[SMC_BUF_512K],
+			      SMC_NLA_STATS_PLOAD_PAD))
+		goto errattr;
+	if (nla_put_u64_64bit(skb, SMC_NLA_STATS_PLOAD_1024K,
+			      stats_pload->buf[SMC_BUF_1024K],
+			      SMC_NLA_STATS_PLOAD_PAD))
+		goto errattr;
+	if (nla_put_u64_64bit(skb, SMC_NLA_STATS_PLOAD_G_1024K,
+			      stats_pload->buf[SMC_BUF_G_1024K],
+			      SMC_NLA_STATS_PLOAD_PAD))
+		goto errattr;
+
+	nla_nest_end(skb, attrs);
+	return 0;
+
+errattr:
+	nla_nest_cancel(skb, attrs);
+errout:
+	return -EMSGSIZE;
+}
+
+static int smc_nl_fill_stats_tech_data(struct sk_buff *skb,
+				       struct smc_stats *stats, int tech)
+{
+	struct smc_stats_tech *smc_tech;
+	struct nlattr *attrs;
+
+	smc_tech = &stats->smc[tech];
+	if (tech == SMC_TYPE_D)
+		attrs = nla_nest_start(skb, SMC_NLA_STATS_SMCD_TECH);
+	else
+		attrs = nla_nest_start(skb, SMC_NLA_STATS_SMCR_TECH);
+
+	if (!attrs)
+		goto errout;
+	if (smc_nl_fill_stats_rmb_data(skb, stats, tech,
+				       SMC_NLA_STATS_T_TX_RMB_STATS))
+		goto errattr;
+	if (smc_nl_fill_stats_rmb_data(skb, stats, tech,
+				       SMC_NLA_STATS_T_RX_RMB_STATS))
+		goto errattr;
+	if (smc_nl_fill_stats_bufsize_data(skb, stats, tech,
+					   SMC_NLA_STATS_T_TXPLOAD_SIZE))
+		goto errattr;
+	if (smc_nl_fill_stats_bufsize_data(skb, stats, tech,
+					   SMC_NLA_STATS_T_RXPLOAD_SIZE))
+		goto errattr;
+	if (smc_nl_fill_stats_bufsize_data(skb, stats, tech,
+					   SMC_NLA_STATS_T_TX_RMB_SIZE))
+		goto errattr;
+	if (smc_nl_fill_stats_bufsize_data(skb, stats, tech,
+					   SMC_NLA_STATS_T_RX_RMB_SIZE))
+		goto errattr;
+	if (nla_put_u64_64bit(skb, SMC_NLA_STATS_T_CLNT_V1_SUCC,
+			      smc_tech->clnt_v1_succ_cnt,
+			      SMC_NLA_STATS_PAD))
+		goto errattr;
+	if (nla_put_u64_64bit(skb, SMC_NLA_STATS_T_CLNT_V2_SUCC,
+			      smc_tech->clnt_v2_succ_cnt,
+			      SMC_NLA_STATS_PAD))
+		goto errattr;
+	if (nla_put_u64_64bit(skb, SMC_NLA_STATS_T_SRV_V1_SUCC,
+			      smc_tech->srv_v1_succ_cnt,
+			      SMC_NLA_STATS_PAD))
+		goto errattr;
+	if (nla_put_u64_64bit(skb, SMC_NLA_STATS_T_SRV_V2_SUCC,
+			      smc_tech->srv_v2_succ_cnt,
+			      SMC_NLA_STATS_PAD))
+		goto errattr;
+	if (nla_put_u64_64bit(skb, SMC_NLA_STATS_T_RX_BYTES,
+			      smc_tech->rx_bytes,
+			      SMC_NLA_STATS_PAD))
+		goto errattr;
+	if (nla_put_u64_64bit(skb, SMC_NLA_STATS_T_TX_BYTES,
+			      smc_tech->tx_bytes,
+			      SMC_NLA_STATS_PAD))
+		goto errattr;
+	if (nla_put_u64_64bit(skb, SMC_NLA_STATS_T_RX_CNT,
+			      smc_tech->rx_cnt,
+			      SMC_NLA_STATS_PAD))
+		goto errattr;
+	if (nla_put_u64_64bit(skb, SMC_NLA_STATS_T_TX_CNT,
+			      smc_tech->tx_cnt,
+			      SMC_NLA_STATS_PAD))
+		goto errattr;
+	if (nla_put_u64_64bit(skb, SMC_NLA_STATS_T_SENDPAGE_CNT,
+			      smc_tech->sendpage_cnt,
+			      SMC_NLA_STATS_PAD))
+		goto errattr;
+	if (nla_put_u64_64bit(skb, SMC_NLA_STATS_T_CORK_CNT,
+			      smc_tech->cork_cnt,
+			      SMC_NLA_STATS_PAD))
+		goto errattr;
+	if (nla_put_u64_64bit(skb, SMC_NLA_STATS_T_NDLY_CNT,
+			      smc_tech->ndly_cnt,
+			      SMC_NLA_STATS_PAD))
+		goto errattr;
+	if (nla_put_u64_64bit(skb, SMC_NLA_STATS_T_SPLICE_CNT,
+			      smc_tech->splice_cnt,
+			      SMC_NLA_STATS_PAD))
+		goto errattr;
+	if (nla_put_u64_64bit(skb, SMC_NLA_STATS_T_URG_DATA_CNT,
+			      smc_tech->urg_data_cnt,
+			      SMC_NLA_STATS_PAD))
+		goto errattr;
+
+	nla_nest_end(skb, attrs);
+	return 0;
+
+errattr:
+	nla_nest_cancel(skb, attrs);
+errout:
+	return -EMSGSIZE;
+}
+
+int smc_nl_get_stats(struct sk_buff *skb,
+		     struct netlink_callback *cb)
+{
+	struct smc_nl_dmp_ctx *cb_ctx = smc_nl_dmp_ctx(cb);
+	struct smc_stats *stats;
+	struct nlattr *attrs;
+	int cpu, i, size;
+	void *nlh;
+	u64 *src;
+	u64 *sum;
+
+	if (cb_ctx->pos[0])
+		goto errmsg;
+	nlh = genlmsg_put(skb, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq,
+			  &smc_gen_nl_family, NLM_F_MULTI,
+			  SMC_NETLINK_GET_STATS);
+	if (!nlh)
+		goto errmsg;
+
+	attrs = nla_nest_start(skb, SMC_GEN_STATS);
+	if (!attrs)
+		goto errnest;
+	stats = kzalloc(sizeof(*stats), GFP_KERNEL);
+	if (!stats)
+		goto erralloc;
+	size = sizeof(*stats) / sizeof(u64);
+	for_each_possible_cpu(cpu) {
+		src = (u64 *)per_cpu_ptr(smc_stats, cpu);
+		sum = (u64 *)stats;
+		for (i = 0; i < size; i++)
+			*(sum++) += *(src++);
+	}
+	if (smc_nl_fill_stats_tech_data(skb, stats, SMC_TYPE_D))
+		goto errattr;
+	if (smc_nl_fill_stats_tech_data(skb, stats, SMC_TYPE_R))
+		goto errattr;
+	if (nla_put_u64_64bit(skb, SMC_NLA_STATS_CLNT_HS_ERR_CNT,
+			      stats->clnt_hshake_err_cnt,
+			      SMC_NLA_STATS_PAD))
+		goto errattr;
+	if (nla_put_u64_64bit(skb, SMC_NLA_STATS_SRV_HS_ERR_CNT,
+			      stats->srv_hshake_err_cnt,
+			      SMC_NLA_STATS_PAD))
+		goto errattr;
+
+	nla_nest_end(skb, attrs);
+	genlmsg_end(skb, nlh);
+	cb_ctx->pos[0] = 1;
+	kfree(stats);
+	return skb->len;
+
+errattr:
+	kfree(stats);
+erralloc:
+	nla_nest_cancel(skb, attrs);
+errnest:
+	genlmsg_cancel(skb, nlh);
+errmsg:
+	return skb->len;
+}
diff --git a/net/smc/smc_stats.h b/net/smc/smc_stats.h
index 928372114cf1..84baaca59eaf 100644
--- a/net/smc/smc_stats.h
+++ b/net/smc/smc_stats.h
@@ -247,6 +247,7 @@ do { \
 } \
 while (0)
 
+int smc_nl_get_stats(struct sk_buff *skb, struct netlink_callback *cb);
 int smc_stats_init(void) __init;
 void smc_stats_exit(void);
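One detail of smc_nl_get_stats() above is worth spelling out: because
struct smc_stats consists entirely of u64 counters, each per-CPU copy can
be folded into a scratch total by walking source and destination as flat
u64 arrays (size = sizeof(*stats) / sizeof(u64)). A user-space sketch of
that fold, with demo_stats standing in for struct smc_stats:

#include <stdint.h>
#include <stddef.h>

struct demo_stats {
	uint64_t tx_cnt;
	uint64_t rx_cnt;
	uint64_t tx_bytes;
	uint64_t rx_bytes;
};

static void demo_fold(struct demo_stats *sum, const struct demo_stats *src)
{
	const uint64_t *s = (const uint64_t *)src;
	uint64_t *d = (uint64_t *)sum;
	size_t i, n = sizeof(*sum) / sizeof(uint64_t);

	for (i = 0; i < n; i++)
		d[i] += s[i];	/* valid only while every member is a u64 */
}

The design choice trades type safety for brevity: adding a non-u64 field to
the struct would silently corrupt the sums, which is why the struct layout
comment matters.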
From patchwork Mon Jun  7 18:20:13 2021
X-Patchwork-Id: 456468
From: Karsten Graul
To: David Miller, Jakub Kicinski
Cc: Heiko Carstens, Stefan Raspl,
    netdev@vger.kernel.org, linux-s390@vger.kernel.org
Subject: [PATCH net-next 3/4] net/smc: Add netlink support for SMC fallback statistics
Date: Mon,  7 Jun 2021 20:20:13 +0200
Message-Id: <20210607182014.3384922-4-kgraul@linux.ibm.com>
In-Reply-To: <20210607182014.3384922-1-kgraul@linux.ibm.com>
References: <20210607182014.3384922-1-kgraul@linux.ibm.com>

From: Guvenc Gulce

Add support to collect more detailed SMC fallback reason statistics and
provide these statistics to user space on the netlink interface.

Signed-off-by: Guvenc Gulce
Signed-off-by: Karsten Graul
---
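The dump routine added here has to survive running out of skb space
mid-way, so it records its progress in the dump callback context
(pos[0] = array index, pos[1] = whether the server entry at that index was
already sent, pos[2] = whether the global counters were already reported)
and resumes from there when netlink calls it again. A stripped-down sketch
of that resumable-dump idea using the generic cb->args storage; demo_dump()
and the stubbed demo_item_fill() are hypothetical:

#include <net/netlink.h>

#define DEMO_MAX 30	/* like SMC_MAX_FBACK_RSN_CNT */

static int demo_item_fill(struct sk_buff *skb, int k)
{
	/* would genlmsg_put() one message for item k; stubbed here */
	return 0;
}

static int demo_dump(struct sk_buff *skb, struct netlink_callback *cb)
{
	long *next = &cb->args[0];	/* persists across dump invocations */
	int k;

	for (k = *next; k < DEMO_MAX; k++) {
		if (demo_item_fill(skb, k) == -EMSGSIZE)
			break;		/* skb full: resume at k next time */
	}
	*next = k;	/* a pass that adds nothing ends the dump */
	return skb->len;
}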
 include/uapi/linux/smc.h | 14 ++++++
 net/smc/smc_netlink.c    |  5 +++
 net/smc/smc_netlink.h    |  2 +-
 net/smc/smc_stats.c      | 92 ++++++++++++++++++++++++++++++++++++++++
 net/smc/smc_stats.h      |  1 +
 5 files changed, 113 insertions(+), 1 deletion(-)

diff --git a/include/uapi/linux/smc.h b/include/uapi/linux/smc.h
index f32f11b30963..0f7f87c70baf 100644
--- a/include/uapi/linux/smc.h
+++ b/include/uapi/linux/smc.h
@@ -48,6 +48,7 @@ enum {
 	SMC_NETLINK_GET_DEV_SMCD,
 	SMC_NETLINK_GET_DEV_SMCR,
 	SMC_NETLINK_GET_STATS,
+	SMC_NETLINK_GET_FBACK_STATS,
 };
 
 /* SMC_GENL_FAMILY top level attributes */
@@ -60,6 +61,7 @@ enum {
 	SMC_GEN_DEV_SMCD,		/* nest */
 	SMC_GEN_DEV_SMCR,		/* nest */
 	SMC_GEN_STATS,			/* nest */
+	SMC_GEN_FBACK_STATS,		/* nest */
 	__SMC_GEN_MAX,
 	SMC_GEN_MAX = __SMC_GEN_MAX - 1
 };
@@ -228,4 +230,16 @@ enum {
 	__SMC_NLA_STATS_MAX,
 	SMC_NLA_STATS_MAX = __SMC_NLA_STATS_MAX - 1
 };
+
+/* SMC_GEN_FBACK_STATS attributes */
+enum {
+	SMC_NLA_FBACK_STATS_PAD,
+	SMC_NLA_FBACK_STATS_TYPE,	/* u8 */
+	SMC_NLA_FBACK_STATS_SRV_CNT,	/* u64 */
+	SMC_NLA_FBACK_STATS_CLNT_CNT,	/* u64 */
+	SMC_NLA_FBACK_STATS_RSN_CODE,	/* u32 */
+	SMC_NLA_FBACK_STATS_RSN_CNT,	/* u16 */
+	__SMC_NLA_FBACK_STATS_MAX,
+	SMC_NLA_FBACK_STATS_MAX = __SMC_NLA_FBACK_STATS_MAX - 1
+};
 #endif /* _UAPI_LINUX_SMC_H */
diff --git a/net/smc/smc_netlink.c b/net/smc/smc_netlink.c
index 30e30b23370f..6fb6f96c1d17 100644
--- a/net/smc/smc_netlink.c
+++ b/net/smc/smc_netlink.c
@@ -61,6 +61,11 @@ static const struct genl_ops smc_gen_nl_ops[] = {
 		/* can be retrieved by unprivileged users */
 		.dumpit = smc_nl_get_stats,
 	},
+	{
+		.cmd = SMC_NETLINK_GET_FBACK_STATS,
+		/* can be retrieved by unprivileged users */
+		.dumpit = smc_nl_get_fback_stats,
+	},
 };
 
 static const struct nla_policy smc_gen_nl_policy[2] = {
diff --git a/net/smc/smc_netlink.h b/net/smc/smc_netlink.h
index 3477265cba6c..5ce2c0a89ccd 100644
--- a/net/smc/smc_netlink.h
+++ b/net/smc/smc_netlink.h
@@ -18,7 +18,7 @@ extern struct genl_family smc_gen_nl_family;
 
 struct smc_nl_dmp_ctx {
-	int pos[2];
+	int pos[3];
 };
 
 static inline struct smc_nl_dmp_ctx *smc_nl_dmp_ctx(struct netlink_callback *c)
diff --git a/net/smc/smc_stats.c b/net/smc/smc_stats.c
index 72119d3d8558..b3d279d29c52 100644
--- a/net/smc/smc_stats.c
+++ b/net/smc/smc_stats.c
@@ -312,3 +312,95 @@ int smc_nl_get_stats(struct sk_buff *skb,
 errmsg:
 	return skb->len;
 }
+
+static int smc_nl_get_fback_details(struct sk_buff *skb,
+				    struct netlink_callback *cb, int pos,
+				    bool is_srv)
+{
+	struct smc_nl_dmp_ctx *cb_ctx = smc_nl_dmp_ctx(cb);
+	int cnt_reported = cb_ctx->pos[2];
+	struct smc_stats_fback *trgt_arr;
+	struct nlattr *attrs;
+	int rc = 0;
+	void *nlh;
+
+	if (is_srv)
+		trgt_arr = &fback_rsn.srv[0];
+	else
+		trgt_arr = &fback_rsn.clnt[0];
+	if (!trgt_arr[pos].fback_code)
+		return -ENODATA;
+	nlh = genlmsg_put(skb, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq,
+			  &smc_gen_nl_family, NLM_F_MULTI,
+			  SMC_NETLINK_GET_FBACK_STATS);
+	if (!nlh)
+		goto errmsg;
+	attrs = nla_nest_start(skb, SMC_GEN_FBACK_STATS);
+	if (!attrs)
+		goto errout;
+	if (nla_put_u8(skb, SMC_NLA_FBACK_STATS_TYPE, is_srv))
+		goto errattr;
+	if (!cnt_reported) {
+		if (nla_put_u64_64bit(skb, SMC_NLA_FBACK_STATS_SRV_CNT,
+				      fback_rsn.srv_fback_cnt,
+				      SMC_NLA_FBACK_STATS_PAD))
+			goto errattr;
+		if (nla_put_u64_64bit(skb, SMC_NLA_FBACK_STATS_CLNT_CNT,
+				      fback_rsn.clnt_fback_cnt,
+				      SMC_NLA_FBACK_STATS_PAD))
+			goto errattr;
+		cnt_reported = 1;
+	}
+
+	if (nla_put_u32(skb, SMC_NLA_FBACK_STATS_RSN_CODE,
+			trgt_arr[pos].fback_code))
+		goto errattr;
+	if (nla_put_u16(skb, SMC_NLA_FBACK_STATS_RSN_CNT,
+			trgt_arr[pos].count))
+		goto errattr;
+
+	cb_ctx->pos[2] = cnt_reported;
+	nla_nest_end(skb, attrs);
+	genlmsg_end(skb, nlh);
+	return rc;
+
+errattr:
+	nla_nest_cancel(skb, attrs);
+errout:
+	genlmsg_cancel(skb, nlh);
+errmsg:
+	return -EMSGSIZE;
+}
+
+int smc_nl_get_fback_stats(struct sk_buff *skb, struct netlink_callback *cb)
+{
+	struct smc_nl_dmp_ctx *cb_ctx = smc_nl_dmp_ctx(cb);
+	int rc_srv = 0, rc_clnt = 0, k;
+	int skip_serv = cb_ctx->pos[1];
+	int snum = cb_ctx->pos[0];
+	bool is_srv = true;
+
+	mutex_lock(&smc_stat_fback_rsn);
+	for (k = 0; k < SMC_MAX_FBACK_RSN_CNT; k++) {
+		if (k < snum)
+			continue;
+		if (!skip_serv) {
+			rc_srv = smc_nl_get_fback_details(skb, cb, k, is_srv);
+			if (rc_srv && rc_srv != ENODATA)
+				break;
+		} else {
+			skip_serv = 0;
+		}
+		rc_clnt = smc_nl_get_fback_details(skb, cb, k, !is_srv);
+		if (rc_clnt && rc_clnt != ENODATA) {
+			skip_serv = 1;
+			break;
+		}
+		if (rc_clnt == ENODATA && rc_srv == ENODATA)
+			break;
+	}
+	mutex_unlock(&smc_stat_fback_rsn);
+	cb_ctx->pos[1] = skip_serv;
+	cb_ctx->pos[0] = k;
+	return skb->len;
+}
diff --git a/net/smc/smc_stats.h b/net/smc/smc_stats.h
index 84baaca59eaf..7c35b22d9e29 100644
--- a/net/smc/smc_stats.h
+++ b/net/smc/smc_stats.h
@@ -248,6 +248,7 @@ do { \
 while (0)
 
 int smc_nl_get_stats(struct sk_buff *skb, struct netlink_callback *cb);
+int smc_nl_get_fback_stats(struct sk_buff *skb, struct netlink_callback *cb);
 int smc_stats_init(void) __init;
 void smc_stats_exit(void);

From patchwork Mon Jun  7 18:20:14 2021
X-Patchwork-Id: 455685
From: Karsten Graul
To: David Miller, Jakub Kicinski
Cc: Heiko Carstens, Stefan Raspl,
    netdev@vger.kernel.org, linux-s390@vger.kernel.org
Subject: [PATCH net-next 4/4] net/smc: Make SMC statistics network namespace aware
Date: Mon,  7 Jun 2021 20:20:14 +0200
Message-Id: <20210607182014.3384922-5-kgraul@linux.ibm.com>
In-Reply-To: <20210607182014.3384922-1-kgraul@linux.ibm.com>
References: <20210607182014.3384922-1-kgraul@linux.ibm.com>

From: Guvenc Gulce

Make the gathered SMC statistics network namespace aware; collect a
separate set of statistics for each namespace.

Signed-off-by: Guvenc Gulce
Signed-off-by: Karsten Graul
---
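Moving the counters into struct net means their lifetime must follow the
namespace, which is what pernet_operations provide: the .init hook runs
for every namespace as it is created (including init_net), and .exit runs
as it is torn down. A minimal sketch of the registration pattern this
patch applies, with hypothetical demo_* names:

#include <linux/init.h>
#include <net/net_namespace.h>

static __net_init int demo_net_init(struct net *net)
{
	/* allocate this namespace's private state, e.g. percpu counters */
	return 0;
}

static void __net_exit demo_net_exit(struct net *net)
{
	/* free whatever demo_net_init() allocated for this namespace */
}

static struct pernet_operations demo_net_ops = {
	.init = demo_net_init,
	.exit = demo_net_exit,
};

/* called once at module init; .init then runs for every existing and
 * future network namespace, .exit on each namespace's teardown
 */
static int __init demo_init(void)
{
	return register_pernet_subsys(&demo_net_ops);
}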
 include/net/net_namespace.h |   4 ++
 include/net/netns/smc.h     |  16 ++++++
 net/smc/af_smc.c            |  65 +++++++++++++--------
 net/smc/smc_core.c          |  10 ++--
 net/smc/smc_rx.c            |   6 +-
 net/smc/smc_stats.c         |  47 ++++++++-------
 net/smc/smc_stats.h         | 111 ++++++++++++++++++++----------------
 net/smc/smc_tx.c            |  12 ++--
 8 files changed, 163 insertions(+), 108 deletions(-)
 create mode 100644 include/net/netns/smc.h

diff --git a/include/net/net_namespace.h b/include/net/net_namespace.h
index fa5887143f0d..befc5b93f311 100644
--- a/include/net/net_namespace.h
+++ b/include/net/net_namespace.h
@@ -32,6 +32,7 @@
 #include <net/netns/mpls.h>
 #include <net/netns/can.h>
 #include <net/netns/xdp.h>
+#include <net/netns/smc.h>
 #include <net/netns/bpf.h>
 #include <linux/ns_common.h>
 #include <linux/idr.h>
@@ -170,6 +171,9 @@ struct net {
 	struct sock		*crypto_nlsk;
 #endif
 	struct sock		*diag_nlsk;
+#if IS_ENABLED(CONFIG_SMC)
+	struct netns_smc	smc;
+#endif
 } __randomize_layout;
 
 #include
diff --git a/include/net/netns/smc.h b/include/net/netns/smc.h
new file mode 100644
index 000000000000..ea8a9cf2619b
--- /dev/null
+++ b/include/net/netns/smc.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __NETNS_SMC_H__
+#define __NETNS_SMC_H__
+#include <linux/mutex.h>
+#include <linux/percpu.h>
+
+struct smc_stats_rsn;
+struct smc_stats;
+struct netns_smc {
+	/* per cpu counters for SMC */
+	struct smc_stats __percpu	*smc_stats;
+	/* protect fback_rsn */
+	struct mutex			mutex_fback_rsn;
+	struct smc_stats_rsn		*fback_rsn;
+};
+#endif
diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
index efeaed384769..e41fdac606d4 100644
--- a/net/smc/af_smc.c
+++ b/net/smc/af_smc.c
@@ -529,15 +529,17 @@ static void smc_stat_inc_fback_rsn_cnt(struct smc_sock *smc,
 
 static void smc_stat_fallback(struct smc_sock *smc)
 {
-	mutex_lock(&smc_stat_fback_rsn);
+	struct net *net = sock_net(&smc->sk);
+
+	mutex_lock(&net->smc.mutex_fback_rsn);
 	if (smc->listen_smc) {
-		smc_stat_inc_fback_rsn_cnt(smc, fback_rsn.srv);
-		fback_rsn.srv_fback_cnt++;
+		smc_stat_inc_fback_rsn_cnt(smc, net->smc.fback_rsn->srv);
+		net->smc.fback_rsn->srv_fback_cnt++;
 	} else {
-		smc_stat_inc_fback_rsn_cnt(smc, fback_rsn.clnt);
-		fback_rsn.clnt_fback_cnt++;
+		smc_stat_inc_fback_rsn_cnt(smc, net->smc.fback_rsn->clnt);
+		net->smc.fback_rsn->clnt_fback_cnt++;
 	}
-	mutex_unlock(&smc_stat_fback_rsn);
+	mutex_unlock(&net->smc.mutex_fback_rsn);
 }
 
 static void smc_switch_to_fallback(struct smc_sock *smc, int reason_code)
@@ -568,10 +570,11 @@ static int smc_connect_fallback(struct smc_sock *smc, int reason_code)
 static int smc_connect_decline_fallback(struct smc_sock *smc, int reason_code,
 					u8 version)
 {
+	struct net *net = sock_net(&smc->sk);
 	int rc;
 
 	if (reason_code < 0) { /* error, fallback is not possible */
-		this_cpu_inc(smc_stats->clnt_hshake_err_cnt);
+		this_cpu_inc(net->smc.smc_stats->clnt_hshake_err_cnt);
 		if (smc->sk.sk_state == SMC_INIT)
 			sock_put(&smc->sk); /* passive closing */
 		return reason_code;
@@ -579,7 +582,7 @@ static int smc_connect_decline_fallback(struct smc_sock *smc, int reason_code,
 	if (reason_code != SMC_CLC_DECL_PEERDECL) {
 		rc = smc_clc_send_decline(smc, reason_code, version);
 		if (rc < 0) {
-			this_cpu_inc(smc_stats->clnt_hshake_err_cnt);
+			this_cpu_inc(net->smc.smc_stats->clnt_hshake_err_cnt);
 			if (smc->sk.sk_state == SMC_INIT)
 				sock_put(&smc->sk); /* passive closing */
 			return rc;
@@ -1027,7 +1030,7 @@ static int __smc_connect(struct smc_sock *smc)
 	if (rc)
 		goto vlan_cleanup;
 
-	SMC_STAT_CLNT_SUCC_INC(aclc);
+	SMC_STAT_CLNT_SUCC_INC(sock_net(smc->clcsock->sk), aclc);
 	smc_connect_ism_vlan_cleanup(smc, ini);
 	kfree(buf);
 	kfree(ini);
@@ -1343,8 +1346,9 @@ static void smc_listen_out_connected(struct smc_sock *new_smc)
 static void smc_listen_out_err(struct smc_sock *new_smc)
 {
 	struct sock *newsmcsk = &new_smc->sk;
+	struct net *net = sock_net(newsmcsk);
 
-	this_cpu_inc(smc_stats->srv_hshake_err_cnt);
+	this_cpu_inc(net->smc.smc_stats->srv_hshake_err_cnt);
 	if (newsmcsk->sk_state == SMC_INIT)
 		sock_put(&new_smc->sk); /* passive closing */
 	newsmcsk->sk_state = SMC_CLOSED;
@@ -1813,7 +1817,7 @@ static void smc_listen_work(struct work_struct *work)
 	}
 	smc_conn_save_peer_info(new_smc, cclc);
 	smc_listen_out_connected(new_smc);
-	SMC_STAT_SERV_SUCC_INC(ini);
+	SMC_STAT_SERV_SUCC_INC(sock_net(newclcsock->sk), ini);
 	goto out_free;
 
 out_unlock:
@@ -2242,7 +2246,7 @@ static int smc_setsockopt(struct socket *sock, int level, int optname,
 		    sk->sk_state != SMC_LISTEN &&
 		    sk->sk_state != SMC_CLOSED) {
 			if (val) {
-				SMC_STAT_INC(!smc->conn.lnk, ndly_cnt);
+				SMC_STAT_INC(smc, ndly_cnt);
 				mod_delayed_work(smc->conn.lgr->tx_wq,
 						 &smc->conn.tx_work, 0);
 			}
@@ -2253,7 +2257,7 @@ static int smc_setsockopt(struct socket *sock, int level, int optname,
 		    sk->sk_state != SMC_LISTEN &&
 		    sk->sk_state != SMC_CLOSED) {
 			if (!val) {
-				SMC_STAT_INC(!smc->conn.lnk, cork_cnt);
+				SMC_STAT_INC(smc, cork_cnt);
 				mod_delayed_work(smc->conn.lgr->tx_wq,
 						 &smc->conn.tx_work, 0);
 			}
@@ -2383,7 +2387,7 @@ static ssize_t smc_sendpage(struct socket *sock, struct page *page,
 		rc = kernel_sendpage(smc->clcsock, page, offset,
 				     size, flags);
 	} else {
-		SMC_STAT_INC(!smc->conn.lnk, sendpage_cnt);
+		SMC_STAT_INC(smc, sendpage_cnt);
 		rc = sock_no_sendpage(sock, page, offset, size, flags);
 	}
 
@@ -2434,7 +2438,7 @@ static ssize_t smc_splice_read(struct socket *sock, loff_t *ppos,
 			flags = MSG_DONTWAIT;
 		else
 			flags = 0;
-		SMC_STAT_INC(!smc->conn.lnk, splice_cnt);
+		SMC_STAT_INC(smc, splice_cnt);
 		rc = smc_rx_recvmsg(smc, NULL, pipe, len, flags);
 	}
 out:
@@ -2523,6 +2527,16 @@ static void __net_exit smc_net_exit(struct net *net)
 	smc_pnet_net_exit(net);
 }
 
+static __net_init int smc_net_stat_init(struct net *net)
+{
+	return smc_stats_init(net);
+}
+
+static void __net_exit smc_net_stat_exit(struct net *net)
+{
+	smc_stats_exit(net);
+}
+
 static struct pernet_operations smc_net_ops = {
 	.init = smc_net_init,
 	.exit = smc_net_exit,
@@ -2530,6 +2544,11 @@ static struct pernet_operations smc_net_ops = {
 	.size = sizeof(struct smc_net),
 };
 
+static struct pernet_operations smc_net_stat_ops = {
+	.init = smc_net_stat_init,
+	.exit = smc_net_stat_exit,
+};
+
 static int __init smc_init(void)
 {
 	int rc;
@@ -2538,6 +2557,10 @@ static int __init smc_init(void)
 	if (rc)
 		return rc;
 
+	rc = register_pernet_subsys(&smc_net_stat_ops);
+	if (rc)
+		return rc;
+
 	smc_ism_init();
 	smc_clc_init();
 
@@ -2558,16 +2581,10 @@ static int __init smc_init(void)
 	if (!smc_close_wq)
 		goto out_alloc_hs_wq;
 
-	rc = smc_stats_init();
-	if (rc) {
-		pr_err("%s: smc_stats_init fails with %d\n", __func__, rc);
-		goto out_alloc_wqs;
-	}
-
 	rc = smc_core_init();
 	if (rc) {
 		pr_err("%s: smc_core_init fails with %d\n", __func__, rc);
-		goto out_smc_stat;
+		goto out_alloc_wqs;
 	}
 
 	rc = smc_llc_init();
@@ -2619,8 +2636,6 @@ static int __init smc_init(void)
 	proto_unregister(&smc_proto);
 out_core:
 	smc_core_exit();
-out_smc_stat:
-	smc_stats_exit();
 out_alloc_wqs:
 	destroy_workqueue(smc_close_wq);
 out_alloc_hs_wq:
@@ -2643,11 +2658,11 @@ static void __exit smc_exit(void)
 	smc_ib_unregister_client();
 	destroy_workqueue(smc_close_wq);
 	destroy_workqueue(smc_hs_wq);
-	smc_stats_exit();
 	proto_unregister(&smc_proto6);
 	proto_unregister(&smc_proto);
 	smc_pnet_exit();
 	smc_nl_exit();
+	unregister_pernet_subsys(&smc_net_stat_ops);
 	unregister_pernet_subsys(&smc_net_ops);
 	rcu_barrier();
 }
diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
index d69f58f670a1..cd0d7c908b2a 100644
--- a/net/smc/smc_core.c
+++ b/net/smc/smc_core.c
@@ -2058,8 +2058,8 @@ static int __smc_buf_create(struct smc_sock *smc, bool is_smcd, bool is_rmb)
 		/* check for reusable slot in the link group */
 		buf_desc = smc_buf_get_slot(bufsize_short, lock, buf_list);
 		if (buf_desc) {
-			SMC_STAT_RMB_SIZE(is_smcd, is_rmb, bufsize);
-			SMC_STAT_BUF_REUSE(is_smcd, is_rmb);
+			SMC_STAT_RMB_SIZE(smc, is_smcd, is_rmb, bufsize);
+			SMC_STAT_BUF_REUSE(smc, is_smcd, is_rmb);
 			memset(buf_desc->cpu_addr, 0, bufsize);
 			break; /* found reusable slot */
 		}
@@ -2074,13 +2074,13 @@ static int __smc_buf_create(struct smc_sock *smc, bool is_smcd, bool is_rmb)
 		if (IS_ERR(buf_desc)) {
 			if (!is_dgraded) {
 				is_dgraded = true;
-				SMC_STAT_RMB_DOWNGRADED(is_smcd, is_rmb);
+				SMC_STAT_RMB_DOWNGRADED(smc, is_smcd, is_rmb);
 			}
 			continue;
 		}
 
-		SMC_STAT_RMB_ALLOC(is_smcd, is_rmb);
-		SMC_STAT_RMB_SIZE(is_smcd, is_rmb, bufsize);
+		SMC_STAT_RMB_ALLOC(smc, is_smcd, is_rmb);
+		SMC_STAT_RMB_SIZE(smc, is_smcd, is_rmb, bufsize);
 		buf_desc->used = 1;
 		mutex_lock(lock);
 		list_add(&buf_desc->list, buf_list);
diff --git a/net/smc/smc_rx.c b/net/smc/smc_rx.c
index ce1ae39923b1..170b733bc736 100644
--- a/net/smc/smc_rx.c
+++ b/net/smc/smc_rx.c
@@ -228,7 +228,7 @@ static int smc_rx_recv_urg(struct smc_sock *smc, struct msghdr *msg, int len,
 	    conn->urg_state == SMC_URG_READ)
 		return -EINVAL;
 
-	SMC_STAT_INC(!conn->lnk, urg_data_cnt);
+	SMC_STAT_INC(smc, urg_data_cnt);
 	if (conn->urg_state == SMC_URG_VALID) {
 		if (!(flags & MSG_PEEK))
 			smc->conn.urg_state = SMC_URG_READ;
@@ -307,10 +307,10 @@ int smc_rx_recvmsg(struct smc_sock *smc, struct msghdr *msg,
 	readable = atomic_read(&conn->bytes_to_rcv);
 	if (readable >= conn->rmb_desc->len)
-		SMC_STAT_RMB_RX_FULL(!conn->lnk);
+		SMC_STAT_RMB_RX_FULL(smc, !conn->lnk);
 
 	if (len < readable)
-		SMC_STAT_RMB_RX_SIZE_SMALL(!conn->lnk);
+		SMC_STAT_RMB_RX_SIZE_SMALL(smc, !conn->lnk);
 
 	/* we currently use 1 RMBE per RMB, so RMBE == RMB base addr */
 	rcvbuf_base = conn->rx_off + conn->rmb_desc->cpu_addr;
diff --git a/net/smc/smc_stats.c b/net/smc/smc_stats.c
index b3d279d29c52..614013e3b574 100644
--- a/net/smc/smc_stats.c
+++ b/net/smc/smc_stats.c
@@ -18,24 +18,28 @@
 #include "smc_netlink.h"
 #include "smc_stats.h"
 
-/* serialize fallback reason statistic gathering */
-DEFINE_MUTEX(smc_stat_fback_rsn);
-struct smc_stats __percpu *smc_stats;	/* per cpu counters for SMC */
-struct smc_stats_reason fback_rsn;
-
-int __init smc_stats_init(void)
+int smc_stats_init(struct net *net)
 {
-	memset(&fback_rsn, 0, sizeof(fback_rsn));
-	smc_stats = alloc_percpu(struct smc_stats);
-	if (!smc_stats)
-		return -ENOMEM;
-
+	net->smc.fback_rsn = kzalloc(sizeof(*net->smc.fback_rsn), GFP_KERNEL);
+	if (!net->smc.fback_rsn)
+		goto err_fback;
+	net->smc.smc_stats = alloc_percpu(struct smc_stats);
+	if (!net->smc.smc_stats)
+		goto err_stats;
+	mutex_init(&net->smc.mutex_fback_rsn);
 	return 0;
+
+err_stats:
+	kfree(net->smc.fback_rsn);
+err_fback:
+	return -ENOMEM;
 }
 
-void smc_stats_exit(void)
+void smc_stats_exit(struct net *net)
 {
-	free_percpu(smc_stats);
+	kfree(net->smc.fback_rsn);
+	if (net->smc.smc_stats)
+		free_percpu(net->smc.smc_stats);
 }
 
 static int smc_nl_fill_stats_rmb_data(struct sk_buff *skb,
@@ -256,6 +260,7 @@ int smc_nl_get_stats(struct sk_buff *skb,
 		     struct netlink_callback *cb)
 {
 	struct smc_nl_dmp_ctx *cb_ctx = smc_nl_dmp_ctx(cb);
+	struct net *net =
sock_net(skb->sk); struct smc_stats *stats; struct nlattr *attrs; int cpu, i, size; @@ -279,7 +284,7 @@ int smc_nl_get_stats(struct sk_buff *skb, goto erralloc; size = sizeof(*stats) / sizeof(u64); for_each_possible_cpu(cpu) { - src = (u64 *)per_cpu_ptr(smc_stats, cpu); + src = (u64 *)per_cpu_ptr(net->smc.smc_stats, cpu); sum = (u64 *)stats; for (i = 0; i < size; i++) *(sum++) += *(src++); @@ -318,6 +323,7 @@ static int smc_nl_get_fback_details(struct sk_buff *skb, bool is_srv) { struct smc_nl_dmp_ctx *cb_ctx = smc_nl_dmp_ctx(cb); + struct net *net = sock_net(skb->sk); int cnt_reported = cb_ctx->pos[2]; struct smc_stats_fback *trgt_arr; struct nlattr *attrs; @@ -325,9 +331,9 @@ static int smc_nl_get_fback_details(struct sk_buff *skb, void *nlh; if (is_srv) - trgt_arr = &fback_rsn.srv[0]; + trgt_arr = &net->smc.fback_rsn->srv[0]; else - trgt_arr = &fback_rsn.clnt[0]; + trgt_arr = &net->smc.fback_rsn->clnt[0]; if (!trgt_arr[pos].fback_code) return -ENODATA; nlh = genlmsg_put(skb, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq, @@ -342,11 +348,11 @@ static int smc_nl_get_fback_details(struct sk_buff *skb, goto errattr; if (!cnt_reported) { if (nla_put_u64_64bit(skb, SMC_NLA_FBACK_STATS_SRV_CNT, - fback_rsn.srv_fback_cnt, + net->smc.fback_rsn->srv_fback_cnt, SMC_NLA_FBACK_STATS_PAD)) goto errattr; if (nla_put_u64_64bit(skb, SMC_NLA_FBACK_STATS_CLNT_CNT, - fback_rsn.clnt_fback_cnt, + net->smc.fback_rsn->clnt_fback_cnt, SMC_NLA_FBACK_STATS_PAD)) goto errattr; cnt_reported = 1; @@ -375,12 +381,13 @@ static int smc_nl_get_fback_details(struct sk_buff *skb, int smc_nl_get_fback_stats(struct sk_buff *skb, struct netlink_callback *cb) { struct smc_nl_dmp_ctx *cb_ctx = smc_nl_dmp_ctx(cb); + struct net *net = sock_net(skb->sk); int rc_srv = 0, rc_clnt = 0, k; int skip_serv = cb_ctx->pos[1]; int snum = cb_ctx->pos[0]; bool is_srv = true; - mutex_lock(&smc_stat_fback_rsn); + mutex_lock(&net->smc.mutex_fback_rsn); for (k = 0; k < SMC_MAX_FBACK_RSN_CNT; k++) { if (k < snum) continue; @@ -399,7 +406,7 @@ int smc_nl_get_fback_stats(struct sk_buff *skb, struct netlink_callback *cb) if (rc_clnt == ENODATA && rc_srv == ENODATA) break; } - mutex_unlock(&smc_stat_fback_rsn); + mutex_unlock(&net->smc.mutex_fback_rsn); cb_ctx->pos[1] = skip_serv; cb_ctx->pos[0] = k; return skb->len; diff --git a/net/smc/smc_stats.h b/net/smc/smc_stats.h index 7c35b22d9e29..84b7ecd8c05c 100644 --- a/net/smc/smc_stats.h +++ b/net/smc/smc_stats.h @@ -21,10 +21,6 @@ #define SMC_MAX_FBACK_RSN_CNT 30 -extern struct smc_stats __percpu *smc_stats; /* per cpu counters for SMC */ -extern struct smc_stats_reason fback_rsn; -extern struct mutex smc_stat_fback_rsn; - enum { SMC_BUF_8K, SMC_BUF_16K, @@ -43,7 +39,7 @@ struct smc_stats_fback { u16 count; }; -struct smc_stats_reason { +struct smc_stats_rsn { struct smc_stats_fback srv[SMC_MAX_FBACK_RSN_CNT]; struct smc_stats_fback clnt[SMC_MAX_FBACK_RSN_CNT]; u64 srv_fback_cnt; @@ -92,122 +88,135 @@ struct smc_stats { u64 srv_hshake_err_cnt; }; -#define SMC_STAT_PAYLOAD_SUB(_tech, key, _len, _rc) \ +#define SMC_STAT_PAYLOAD_SUB(_smc_stats, _tech, key, _len, _rc) \ do { \ + typeof(_smc_stats) stats = (_smc_stats); \ typeof(_tech) t = (_tech); \ typeof(_len) l = (_len); \ int _pos = fls64((l) >> 13); \ typeof(_rc) r = (_rc); \ int m = SMC_BUF_MAX - 1; \ - this_cpu_inc((*smc_stats).smc[t].key ## _cnt); \ + this_cpu_inc((*stats).smc[t].key ## _cnt); \ if (r <= 0) \ break; \ _pos = (_pos < m) ? ((l == 1 << (_pos + 12)) ? 
_pos - 1 : _pos) : m; \ - this_cpu_inc((*smc_stats).smc[t].key ## _pd.buf[_pos]); \ - this_cpu_add((*smc_stats).smc[t].key ## _bytes, r); \ + this_cpu_inc((*stats).smc[t].key ## _pd.buf[_pos]); \ + this_cpu_add((*stats).smc[t].key ## _bytes, r); \ } \ while (0) #define SMC_STAT_TX_PAYLOAD(_smc, length, rcode) \ do { \ typeof(_smc) __smc = _smc; \ + struct net *_net = sock_net(&__smc->sk); \ + struct smc_stats __percpu *_smc_stats = _net->smc.smc_stats; \ typeof(length) _len = (length); \ typeof(rcode) _rc = (rcode); \ bool is_smcd = !__smc->conn.lnk; \ if (is_smcd) \ - SMC_STAT_PAYLOAD_SUB(SMC_TYPE_D, tx, _len, _rc); \ + SMC_STAT_PAYLOAD_SUB(_smc_stats, SMC_TYPE_D, tx, _len, _rc); \ else \ - SMC_STAT_PAYLOAD_SUB(SMC_TYPE_R, tx, _len, _rc); \ + SMC_STAT_PAYLOAD_SUB(_smc_stats, SMC_TYPE_R, tx, _len, _rc); \ } \ while (0) #define SMC_STAT_RX_PAYLOAD(_smc, length, rcode) \ do { \ typeof(_smc) __smc = _smc; \ + struct net *_net = sock_net(&__smc->sk); \ + struct smc_stats __percpu *_smc_stats = _net->smc.smc_stats; \ typeof(length) _len = (length); \ typeof(rcode) _rc = (rcode); \ bool is_smcd = !__smc->conn.lnk; \ if (is_smcd) \ - SMC_STAT_PAYLOAD_SUB(SMC_TYPE_D, rx, _len, _rc); \ + SMC_STAT_PAYLOAD_SUB(_smc_stats, SMC_TYPE_D, rx, _len, _rc); \ else \ - SMC_STAT_PAYLOAD_SUB(SMC_TYPE_R, rx, _len, _rc); \ + SMC_STAT_PAYLOAD_SUB(_smc_stats, SMC_TYPE_R, rx, _len, _rc); \ } \ while (0) -#define SMC_STAT_RMB_SIZE_SUB(_tech, k, _len) \ +#define SMC_STAT_RMB_SIZE_SUB(_smc_stats, _tech, k, _len) \ do { \ typeof(_len) _l = (_len); \ typeof(_tech) t = (_tech); \ int _pos = fls((_l) >> 13); \ int m = SMC_BUF_MAX - 1; \ _pos = (_pos < m) ? ((_l == 1 << (_pos + 12)) ? _pos - 1 : _pos) : m; \ - this_cpu_inc((*smc_stats).smc[t].k ## _rmbsize.buf[_pos]); \ + this_cpu_inc((*(_smc_stats)).smc[t].k ## _rmbsize.buf[_pos]); \ } \ while (0) -#define SMC_STAT_RMB_SUB(type, t, key) \ - this_cpu_inc((*smc_stats).smc[t].rmb ## _ ## key.type ## _cnt) +#define SMC_STAT_RMB_SUB(_smc_stats, type, t, key) \ + this_cpu_inc((*(_smc_stats)).smc[t].rmb ## _ ## key.type ## _cnt) -#define SMC_STAT_RMB_SIZE(_is_smcd, _is_rx, _len) \ +#define SMC_STAT_RMB_SIZE(_smc, _is_smcd, _is_rx, _len) \ do { \ + struct net *_net = sock_net(&(_smc)->sk); \ + struct smc_stats __percpu *_smc_stats = _net->smc.smc_stats; \ typeof(_is_smcd) is_d = (_is_smcd); \ typeof(_is_rx) is_r = (_is_rx); \ typeof(_len) l = (_len); \ if ((is_d) && (is_r)) \ - SMC_STAT_RMB_SIZE_SUB(SMC_TYPE_D, rx, l); \ + SMC_STAT_RMB_SIZE_SUB(_smc_stats, SMC_TYPE_D, rx, l); \ if ((is_d) && !(is_r)) \ - SMC_STAT_RMB_SIZE_SUB(SMC_TYPE_D, tx, l); \ + SMC_STAT_RMB_SIZE_SUB(_smc_stats, SMC_TYPE_D, tx, l); \ if (!(is_d) && (is_r)) \ - SMC_STAT_RMB_SIZE_SUB(SMC_TYPE_R, rx, l); \ + SMC_STAT_RMB_SIZE_SUB(_smc_stats, SMC_TYPE_R, rx, l); \ if (!(is_d) && !(is_r)) \ - SMC_STAT_RMB_SIZE_SUB(SMC_TYPE_R, tx, l); \ + SMC_STAT_RMB_SIZE_SUB(_smc_stats, SMC_TYPE_R, tx, l); \ } \ while (0) -#define SMC_STAT_RMB(type, _is_smcd, _is_rx) \ +#define SMC_STAT_RMB(_smc, type, _is_smcd, _is_rx) \ do { \ + struct net *net = sock_net(&(_smc)->sk); \ + struct smc_stats __percpu *_smc_stats = net->smc.smc_stats; \ typeof(_is_smcd) is_d = (_is_smcd); \ typeof(_is_rx) is_r = (_is_rx); \ if ((is_d) && (is_r)) \ - SMC_STAT_RMB_SUB(type, SMC_TYPE_D, rx); \ + SMC_STAT_RMB_SUB(_smc_stats, type, SMC_TYPE_D, rx); \ if ((is_d) && !(is_r)) \ - SMC_STAT_RMB_SUB(type, SMC_TYPE_D, tx); \ + SMC_STAT_RMB_SUB(_smc_stats, type, SMC_TYPE_D, tx); \ if (!(is_d) && (is_r)) \ - SMC_STAT_RMB_SUB(type, SMC_TYPE_R, rx); \ + 
SMC_STAT_RMB_SUB(_smc_stats, type, SMC_TYPE_R, rx); \ if (!(is_d) && !(is_r)) \ - SMC_STAT_RMB_SUB(type, SMC_TYPE_R, tx); \ + SMC_STAT_RMB_SUB(_smc_stats, type, SMC_TYPE_R, tx); \ } \ while (0) -#define SMC_STAT_BUF_REUSE(is_smcd, is_rx) \ - SMC_STAT_RMB(reuse, is_smcd, is_rx) +#define SMC_STAT_BUF_REUSE(smc, is_smcd, is_rx) \ + SMC_STAT_RMB(smc, reuse, is_smcd, is_rx) -#define SMC_STAT_RMB_ALLOC(is_smcd, is_rx) \ - SMC_STAT_RMB(alloc, is_smcd, is_rx) +#define SMC_STAT_RMB_ALLOC(smc, is_smcd, is_rx) \ + SMC_STAT_RMB(smc, alloc, is_smcd, is_rx) -#define SMC_STAT_RMB_DOWNGRADED(is_smcd, is_rx) \ - SMC_STAT_RMB(dgrade, is_smcd, is_rx) +#define SMC_STAT_RMB_DOWNGRADED(smc, is_smcd, is_rx) \ + SMC_STAT_RMB(smc, dgrade, is_smcd, is_rx) -#define SMC_STAT_RMB_TX_PEER_FULL(is_smcd) \ - SMC_STAT_RMB(buf_full_peer, is_smcd, false) +#define SMC_STAT_RMB_TX_PEER_FULL(smc, is_smcd) \ + SMC_STAT_RMB(smc, buf_full_peer, is_smcd, false) -#define SMC_STAT_RMB_TX_FULL(is_smcd) \ - SMC_STAT_RMB(buf_full, is_smcd, false) +#define SMC_STAT_RMB_TX_FULL(smc, is_smcd) \ + SMC_STAT_RMB(smc, buf_full, is_smcd, false) -#define SMC_STAT_RMB_TX_PEER_SIZE_SMALL(is_smcd) \ - SMC_STAT_RMB(buf_size_small_peer, is_smcd, false) +#define SMC_STAT_RMB_TX_PEER_SIZE_SMALL(smc, is_smcd) \ + SMC_STAT_RMB(smc, buf_size_small_peer, is_smcd, false) -#define SMC_STAT_RMB_TX_SIZE_SMALL(is_smcd) \ - SMC_STAT_RMB(buf_size_small, is_smcd, false) +#define SMC_STAT_RMB_TX_SIZE_SMALL(smc, is_smcd) \ + SMC_STAT_RMB(smc, buf_size_small, is_smcd, false) -#define SMC_STAT_RMB_RX_SIZE_SMALL(is_smcd) \ - SMC_STAT_RMB(buf_size_small, is_smcd, true) +#define SMC_STAT_RMB_RX_SIZE_SMALL(smc, is_smcd) \ + SMC_STAT_RMB(smc, buf_size_small, is_smcd, true) -#define SMC_STAT_RMB_RX_FULL(is_smcd) \ - SMC_STAT_RMB(buf_full, is_smcd, true) +#define SMC_STAT_RMB_RX_FULL(smc, is_smcd) \ + SMC_STAT_RMB(smc, buf_full, is_smcd, true) -#define SMC_STAT_INC(is_smcd, type) \ +#define SMC_STAT_INC(_smc, type) \ do { \ + typeof(_smc) __smc = _smc; \ + bool is_smcd = !(__smc)->conn.lnk; \ + struct net *net = sock_net(&(__smc)->sk); \ + struct smc_stats __percpu *smc_stats = net->smc.smc_stats; \ if ((is_smcd)) \ this_cpu_inc(smc_stats->smc[SMC_TYPE_D].type); \ else \ @@ -215,11 +224,12 @@ do { \ } \ while (0) -#define SMC_STAT_CLNT_SUCC_INC(_aclc) \ +#define SMC_STAT_CLNT_SUCC_INC(net, _aclc) \ do { \ typeof(_aclc) acl = (_aclc); \ bool is_v2 = (acl->hdr.version == SMC_V2); \ bool is_smcd = (acl->hdr.typev1 == SMC_TYPE_D); \ + struct smc_stats __percpu *smc_stats = (net)->smc.smc_stats; \ if (is_v2 && is_smcd) \ this_cpu_inc(smc_stats->smc[SMC_TYPE_D].clnt_v2_succ_cnt); \ else if (is_v2 && !is_smcd) \ @@ -231,11 +241,12 @@ do { \ } \ while (0) -#define SMC_STAT_SERV_SUCC_INC(_ini) \ +#define SMC_STAT_SERV_SUCC_INC(net, _ini) \ do { \ typeof(_ini) i = (_ini); \ bool is_v2 = (i->smcd_version & SMC_V2); \ bool is_smcd = (i->is_smcd); \ + typeof(net->smc.smc_stats) smc_stats = (net)->smc.smc_stats; \ if (is_v2 && is_smcd) \ this_cpu_inc(smc_stats->smc[SMC_TYPE_D].srv_v2_succ_cnt); \ else if (is_v2 && !is_smcd) \ @@ -249,7 +260,7 @@ while (0) int smc_nl_get_stats(struct sk_buff *skb, struct netlink_callback *cb); int smc_nl_get_fback_stats(struct sk_buff *skb, struct netlink_callback *cb); -int smc_stats_init(void) __init; -void smc_stats_exit(void); +int smc_stats_init(struct net *net); +void smc_stats_exit(struct net *net); #endif /* NET_SMC_SMC_STATS_H_ */ diff --git a/net/smc/smc_tx.c b/net/smc/smc_tx.c index a043544d715f..075c4f4b41cf 100644 --- a/net/smc/smc_tx.c +++ 
b/net/smc/smc_tx.c @@ -47,7 +47,7 @@ static void smc_tx_write_space(struct sock *sk) /* similar to sk_stream_write_space */ if (atomic_read(&smc->conn.sndbuf_space) && sock) { if (test_bit(SOCK_NOSPACE, &sock->flags)) - SMC_STAT_RMB_TX_FULL(!smc->conn.lnk); + SMC_STAT_RMB_TX_FULL(smc, !smc->conn.lnk); clear_bit(SOCK_NOSPACE, &sock->flags); rcu_read_lock(); wq = rcu_dereference(sk->sk_wq); @@ -155,13 +155,13 @@ int smc_tx_sendmsg(struct smc_sock *smc, struct msghdr *msg, size_t len) } if (len > conn->sndbuf_desc->len) - SMC_STAT_RMB_TX_SIZE_SMALL(!conn->lnk); + SMC_STAT_RMB_TX_SIZE_SMALL(smc, !conn->lnk); if (len > conn->peer_rmbe_size) - SMC_STAT_RMB_TX_PEER_SIZE_SMALL(!conn->lnk); + SMC_STAT_RMB_TX_PEER_SIZE_SMALL(smc, !conn->lnk); if (msg->msg_flags & MSG_OOB) - SMC_STAT_INC(!conn->lnk, urg_data_cnt); + SMC_STAT_INC(smc, urg_data_cnt); while (msg_data_left(msg)) { if (sk->sk_state == SMC_INIT) @@ -432,7 +432,9 @@ static int smc_tx_rdma_writes(struct smc_connection *conn, /* cf. snd_wnd */ rmbespace = atomic_read(&conn->peer_rmbe_space); if (rmbespace <= 0) { - SMC_STAT_RMB_TX_PEER_FULL(!conn->lnk); + struct smc_sock *smc = container_of(conn, struct smc_sock, + conn); + SMC_STAT_RMB_TX_PEER_FULL(smc, !conn->lnk); return 0; } smc_curs_copy(&prod, &conn->local_tx_ctrl.prod, conn);