From patchwork Mon Jun 12 08:21:26 2023
X-Patchwork-Submitter: Mika Westerberg
X-Patchwork-Id: 691934
From: Mika Westerberg
To: linux-usb@vger.kernel.org
Cc: Gil Fine, Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever, Mika Westerberg
Subject: [PATCH v2 01/20] thunderbolt: Ignore data CRC mismatch for USB4 routers
Date: Mon, 12 Jun 2023 11:21:26 +0300
Message-Id: <20230612082145.62218-2-mika.westerberg@linux.intel.com>
In-Reply-To: <20230612082145.62218-1-mika.westerberg@linux.intel.com>
References: <20230612082145.62218-1-mika.westerberg@linux.intel.com>
X-Mailing-List: linux-usb@vger.kernel.org

The DROM data block CRC32 is also something that is not always updated
after the DROM contents change, so issue a warning but continue parsing,
as we already do for pre-USB4 DROMs.
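For context, the value the check below validates is a plain CRC32C over the
DROM data block. A minimal user-space sketch of the same computation (a
bitwise Castagnoli CRC assumed to match what the driver's tb_crc32() wraps;
the function name and parameters here are illustrative, not driver code):

    #include <stdint.h>
    #include <stddef.h>

    /* Bitwise CRC32C (Castagnoli, reflected); assumed equivalent to the
     * kernel's ~crc32c(~0, data, len) used by tb_crc32(). */
    static uint32_t drom_data_crc32(const uint8_t *data, size_t len)
    {
            uint32_t crc = 0xffffffff;

            while (len--) {
                    crc ^= *data++;
                    for (int i = 0; i < 8; i++)
                            crc = (crc >> 1) ^ (0x82f63b78 & -(crc & 1));
            }
            return ~crc;
    }

With this change, a mismatch between this value and the data_crc32 field of
the DROM header only produces the warning instead of failing the parse.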
Signed-off-by: Mika Westerberg
---
 drivers/thunderbolt/eeprom.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/thunderbolt/eeprom.c b/drivers/thunderbolt/eeprom.c
index 0f6099c18a94..eb241b270f79 100644
--- a/drivers/thunderbolt/eeprom.c
+++ b/drivers/thunderbolt/eeprom.c
@@ -605,9 +605,8 @@ static int usb4_drom_parse(struct tb_switch *sw)
 	crc = tb_crc32(sw->drom + TB_DROM_DATA_START, header->data_len);
 	if (crc != header->data_crc32) {
 		tb_sw_warn(sw,
-			   "DROM data CRC32 mismatch (expected: %#x, got: %#x), aborting\n",
+			   "DROM data CRC32 mismatch (expected: %#x, got: %#x), continuing\n",
 			   header->data_crc32, crc);
-		return -EINVAL;
 	}
 
 	return tb_drom_parse_entries(sw, USB4_DROM_HEADER_SIZE);

From patchwork Mon Jun 12 08:21:27 2023
X-Patchwork-Submitter: Mika Westerberg
X-Patchwork-Id: 691935
From: Mika Westerberg
To: linux-usb@vger.kernel.org
Cc: Gil Fine, Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever, Mika Westerberg
Subject: [PATCH v2 02/20] thunderbolt: Do not touch lane 1 adapter path config space
Date: Mon, 12 Jun 2023 11:21:27 +0300
Message-Id: <20230612082145.62218-3-mika.westerberg@linux.intel.com>
In-Reply-To: <20230612082145.62218-1-mika.westerberg@linux.intel.com>
References: <20230612082145.62218-1-mika.westerberg@linux.intel.com>
X-Mailing-List: linux-usb@vger.kernel.org
The lane 1 adapter path config space is not required to be implemented at
all, because USB4 does not use lane 1 for tunneling except when it is
aggregated with lane 0. For this reason, do not try to read the path config
space of USB4 lane 1 adapters.

Signed-off-by: Mika Westerberg
---
 drivers/thunderbolt/switch.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index 0c11caec7e8e..47961afdcc73 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -723,7 +723,7 @@ static int tb_init_port(struct tb_port *port)
 	 * can be read from the path config space. Legacy
 	 * devices we use hard-coded value.
 	 */
-	if (tb_switch_is_usb4(port->sw)) {
+	if (port->cap_usb4) {
 		struct tb_regs_hop hop;
 
 		if (!tb_port_read(port, &hop, TB_CFG_HOPS, 0, 2))

From patchwork Mon Jun 12 08:21:28 2023
X-Patchwork-Submitter: Mika Westerberg
X-Patchwork-Id: 693247
From: Mika Westerberg
To: linux-usb@vger.kernel.org
Cc: Gil Fine, Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever, Mika Westerberg
Subject: [PATCH v2 03/20] thunderbolt: Identify USB4 v2 routers
Date: Mon, 12 Jun 2023 11:21:28 +0300
Message-Id: <20230612082145.62218-4-mika.westerberg@linux.intel.com>
In-Reply-To: <20230612082145.62218-1-mika.westerberg@linux.intel.com>
References: <20230612082145.62218-1-mika.westerberg@linux.intel.com>
X-Mailing-List: linux-usb@vger.kernel.org

From: Gil Fine

Add a new function usb4_switch_version() that can be used to figure out
the spec version of the router, and make tb_switch_is_usb4() use it as
well. Update the uevent accordingly.

Signed-off-by: Gil Fine
Signed-off-by: Mika Westerberg
---
 drivers/thunderbolt/switch.c  |  5 +++--
 drivers/thunderbolt/tb.h      | 34 +++++++++++++++++++++++-----------
 drivers/thunderbolt/tb_regs.h |  4 ++--
 3 files changed, 28 insertions(+), 15 deletions(-)

diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index 47961afdcc73..3a1fc3e053f6 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -2056,8 +2056,9 @@ static int tb_switch_uevent(const struct device *dev, struct kobj_uevent_env *env)
 	const struct tb_switch *sw = tb_to_switch(dev);
 	const char *type;
 
-	if (sw->config.thunderbolt_version == USB4_VERSION_1_0) {
-		if (add_uevent_var(env, "USB4_VERSION=1.0"))
+	if (tb_switch_is_usb4(sw)) {
+		if (add_uevent_var(env, "USB4_VERSION=%u.0",
+				   usb4_switch_version(sw)))
 			return -ENOMEM;
 	}
 
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index 58df106aaa5e..bc91fcf5f430 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -947,17 +947,6 @@ static inline bool tb_switch_is_tiger_lake(const struct tb_switch *sw)
 	return false;
 }
 
-/**
- * tb_switch_is_usb4() - Is the switch USB4 compliant
- * @sw: Switch to check
- *
- * Returns true if the @sw is USB4 compliant router, false otherwise.
- */
-static inline bool tb_switch_is_usb4(const struct tb_switch *sw)
-{
-	return sw->config.thunderbolt_version == USB4_VERSION_1_0;
-}
-
 /**
  * tb_switch_is_icm() - Is the switch handled by ICM firmware
  * @sw: Switch to check
@@ -1198,6 +1187,29 @@ static inline struct tb_retimer *tb_to_retimer(struct device *dev)
 	return NULL;
 }
 
+/**
+ * usb4_switch_version() - Returns USB4 version of the router
+ * @sw: Router to check
+ *
+ * Returns major version of USB4 router (%1 for v1, %2 for v2 and so
+ * on). Can be called for a pre-USB4 router too, in which case it
+ * returns %0.
+ */
+static inline unsigned int usb4_switch_version(const struct tb_switch *sw)
+{
+	return FIELD_GET(USB4_VERSION_MAJOR_MASK, sw->config.thunderbolt_version);
+}
+
+/**
+ * tb_switch_is_usb4() - Is the switch USB4 compliant
+ * @sw: Switch to check
+ *
+ * Returns true if the @sw is USB4 compliant router, false otherwise.
+ */
+static inline bool tb_switch_is_usb4(const struct tb_switch *sw)
+{
+	return usb4_switch_version(sw) > 0;
+}
+
 int usb4_switch_setup(struct tb_switch *sw);
 int usb4_switch_read_uid(struct tb_switch *sw, u64 *uid);
 int usb4_switch_drom_read(struct tb_switch *sw, unsigned int address, void *buf,
diff --git a/drivers/thunderbolt/tb_regs.h b/drivers/thunderbolt/tb_regs.h
index 2636423748cd..0716d6b7701a 100644
--- a/drivers/thunderbolt/tb_regs.h
+++ b/drivers/thunderbolt/tb_regs.h
@@ -190,8 +190,8 @@ struct tb_regs_switch_header {
 	u32 thunderbolt_version:8;
 } __packed;
 
-/* USB4 version 1.0 */
-#define USB4_VERSION_1_0		0x20
+/* Used with the router thunderbolt_version */
+#define USB4_VERSION_MAJOR_MASK		GENMASK(7, 5)
 
 #define ROUTER_CS_1			0x01
 #define ROUTER_CS_4			0x04

From patchwork Mon Jun 12 08:21:29 2023
X-Patchwork-Submitter: Mika Westerberg
X-Patchwork-Id: 693244
From: Mika Westerberg
To: linux-usb@vger.kernel.org
Cc: Gil Fine, Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever, Mika Westerberg
Subject: [PATCH v2 04/20] thunderbolt: Add support for USB4 v2 80 Gb/s link
Date: Mon, 12 Jun 2023 11:21:29 +0300
Message-Id: <20230612082145.62218-5-mika.westerberg@linux.intel.com>
In-Reply-To: <20230612082145.62218-1-mika.westerberg@linux.intel.com>
References: <20230612082145.62218-1-mika.westerberg@linux.intel.com>
X-Mailing-List: linux-usb@vger.kernel.org
From: Gil Fine

USB4 v2 bumps the per-lane speed up to 40 Gb/s, and the lanes are always
bonded, which gives an 80 Gb/s symmetric link (or a 120/40 Gb/s
asymmetric one). This updates the speed and width handling of routers and
XDomain connections to support the Gen 4 link. For now we keep the link
as is, even if it is already asymmetric. While there, make
tb_port_set_lane_bonding() static (the diff below removes its declaration
from tb.h).

Signed-off-by: Gil Fine
Signed-off-by: Mika Westerberg
---
 drivers/thunderbolt/dma_test.c |  10 +-
 drivers/thunderbolt/icm.c      |   6 +-
 drivers/thunderbolt/switch.c   | 185 +++++++++++++++++++++++----------
 drivers/thunderbolt/tb.c       |  38 ++++++-
 drivers/thunderbolt/tb.h       |  14 ++-
 drivers/thunderbolt/tb_regs.h  |   1 +
 drivers/thunderbolt/xdomain.c  |  82 ++++++++++++---
 include/linux/thunderbolt.h    |  18 +++-
 8 files changed, 266 insertions(+), 88 deletions(-)

diff --git a/drivers/thunderbolt/dma_test.c b/drivers/thunderbolt/dma_test.c
index 58496f889d03..39476fc48801 100644
--- a/drivers/thunderbolt/dma_test.c
+++ b/drivers/thunderbolt/dma_test.c
@@ -412,6 +412,7 @@ static void speed_get(const struct dma_test *dt, u64 *val)
 static int speed_validate(u64 val)
 {
 	switch (val) {
+	case 40:
 	case 20:
 	case 10:
 	case 0:
@@ -489,9 +490,12 @@ static void dma_test_check_errors(struct dma_test *dt, int ret)
 	if (!dt->error_code) {
 		if (dt->link_speed && dt->xd->link_speed != dt->link_speed) {
 			dt->error_code = DMA_TEST_SPEED_ERROR;
-		} else if (dt->link_width &&
-			   dt->xd->link_width != dt->link_width) {
-			dt->error_code = DMA_TEST_WIDTH_ERROR;
+		} else if (dt->link_width) {
+			const struct tb_xdomain *xd = dt->xd;
+
+			if ((dt->link_width == 1 && xd->link_width != TB_LINK_WIDTH_SINGLE) ||
+			    (dt->link_width == 2 && xd->link_width < TB_LINK_WIDTH_DUAL))
+				dt->error_code = DMA_TEST_WIDTH_ERROR;
 		} else if (dt->packets_to_send != dt->packets_sent ||
 			   dt->packets_to_receive != dt->packets_received ||
 			   dt->crc_errors || dt->buffer_overflow_errors) {
diff --git a/drivers/thunderbolt/icm.c b/drivers/thunderbolt/icm.c
index 05274caf1466..dbdcad8d73bf 100644
--- a/drivers/thunderbolt/icm.c
+++ b/drivers/thunderbolt/icm.c
@@ -850,7 +850,8 @@ icm_fr_device_connected(struct tb *tb, const struct icm_pkg_header *hdr)
 	sw->security_level = security_level;
 	sw->boot = boot;
 	sw->link_speed = speed_gen3 ? 20 : 10;
-	sw->link_width = dual_lane ? 2 : 1;
+	sw->link_width = dual_lane ? TB_LINK_WIDTH_DUAL :
+				     TB_LINK_WIDTH_SINGLE;
 	sw->rpm = intel_vss_is_rtd3(pkg->ep_name, sizeof(pkg->ep_name));
 
 	if (add_switch(parent_sw, sw))
@@ -1272,7 +1273,8 @@ __icm_tr_device_connected(struct tb *tb, const struct icm_pkg_header *hdr,
 	sw->security_level = security_level;
 	sw->boot = boot;
 	sw->link_speed = speed_gen3 ? 20 : 10;
-	sw->link_width = dual_lane ? 2 : 1;
+	sw->link_width = dual_lane ? TB_LINK_WIDTH_DUAL :
+				     TB_LINK_WIDTH_SINGLE;
 	sw->rpm = force_rtd3;
 	if (!sw->rpm)
 		sw->rpm = intel_vss_is_rtd3(pkg->ep_name,
diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index 3a1fc3e053f6..a0451218af2a 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -903,15 +903,23 @@ int tb_port_get_link_speed(struct tb_port *port)
 	speed = (val & LANE_ADP_CS_1_CURRENT_SPEED_MASK) >>
 		LANE_ADP_CS_1_CURRENT_SPEED_SHIFT;
-	return speed == LANE_ADP_CS_1_CURRENT_SPEED_GEN3 ? 20 : 10;
+
+	switch (speed) {
+	case LANE_ADP_CS_1_CURRENT_SPEED_GEN4:
+		return 40;
+	case LANE_ADP_CS_1_CURRENT_SPEED_GEN3:
+		return 20;
+	default:
+		return 10;
+	}
 }
 
 /**
  * tb_port_get_link_width() - Get current link width
  * @port: Port to check (USB4 or CIO)
  *
- * Returns link width. Return values can be 1 (Single-Lane), 2 (Dual-Lane)
- * or negative errno in case of failure.
+ * Returns link width. Return the link width as encoded in &enum
+ * tb_link_width or negative errno in case of failure.
  */
 int tb_port_get_link_width(struct tb_port *port)
 {
@@ -926,11 +934,13 @@ int tb_port_get_link_width(struct tb_port *port)
 	if (ret)
 		return ret;
 
+	/* Matches the values in enum tb_link_width */
 	return (val & LANE_ADP_CS_1_CURRENT_WIDTH_MASK) >>
 		LANE_ADP_CS_1_CURRENT_WIDTH_SHIFT;
 }
 
-static bool tb_port_is_width_supported(struct tb_port *port, int width)
+static bool tb_port_is_width_supported(struct tb_port *port,
+				       unsigned int width_mask)
 {
 	u32 phy, widths;
 	int ret;
@@ -946,20 +956,25 @@
 	widths = (phy & LANE_ADP_CS_0_SUPPORTED_WIDTH_MASK) >>
 		 LANE_ADP_CS_0_SUPPORTED_WIDTH_SHIFT;
 
-	return !!(widths & width);
+	return widths & width_mask;
+}
+
+static bool is_gen4_link(struct tb_port *port)
+{
+	return tb_port_get_link_speed(port) > 20;
 }
 
 /**
  * tb_port_set_link_width() - Set target link width of the lane adapter
  * @port: Lane adapter
- * @width: Target link width (%1 or %2)
+ * @width: Target link width
  *
  * Sets the target link width of the lane adapter to @width. Does not
  * enable/disable lane bonding. For that call tb_port_set_lane_bonding().
  *
  * Return: %0 in case of success and negative errno in case of error
  */
-int tb_port_set_link_width(struct tb_port *port, unsigned int width)
+int tb_port_set_link_width(struct tb_port *port, enum tb_link_width width)
 {
 	u32 val;
 	int ret;
@@ -974,11 +989,14 @@
 	val &= ~LANE_ADP_CS_1_TARGET_WIDTH_MASK;
 	switch (width) {
-	case 1:
+	case TB_LINK_WIDTH_SINGLE:
+		/* Gen 4 link cannot be single */
+		if (is_gen4_link(port))
+			return -EOPNOTSUPP;
 		val |= LANE_ADP_CS_1_TARGET_WIDTH_SINGLE <<
 			LANE_ADP_CS_1_TARGET_WIDTH_SHIFT;
 		break;
-	case 2:
+	case TB_LINK_WIDTH_DUAL:
 		val |= LANE_ADP_CS_1_TARGET_WIDTH_DUAL <<
 			LANE_ADP_CS_1_TARGET_WIDTH_SHIFT;
 		break;
@@ -1000,12 +1018,9 @@
  * cases one should use tb_port_lane_bonding_enable() instead to enable
  * lane bonding.
  *
- * As a side effect sets @port->bonding accordingly (and does the same
- * for lane 1 too).
- *
  * Return: %0 in case of success and negative errno in case of error
  */
-int tb_port_set_lane_bonding(struct tb_port *port, bool bonding)
+static int tb_port_set_lane_bonding(struct tb_port *port, bool bonding)
 {
 	u32 val;
 	int ret;
@@ -1023,19 +1038,8 @@
 	else
 		val &= ~LANE_ADP_CS_1_LB;
 
-	ret = tb_port_write(port, &val, TB_CFG_PORT,
-			    port->cap_phy + LANE_ADP_CS_1, 1);
-	if (ret)
-		return ret;
-
-	/*
-	 * When lane 0 bonding is set it will affect lane 1 too so
-	 * update both.
-	 */
-	port->bonded = bonding;
-	port->dual_link_port->bonded = bonding;
-
-	return 0;
+	return tb_port_write(port, &val, TB_CFG_PORT,
+			     port->cap_phy + LANE_ADP_CS_1, 1);
 }
 
 /**
@@ -1052,36 +1056,52 @@
  */
 int tb_port_lane_bonding_enable(struct tb_port *port)
 {
+	enum tb_link_width width;
 	int ret;
 
 	/*
 	 * Enable lane bonding for both links if not already enabled by
 	 * for example the boot firmware.
 	 */
-	ret = tb_port_get_link_width(port);
-	if (ret == 1) {
-		ret = tb_port_set_link_width(port, 2);
+	width = tb_port_get_link_width(port);
+	if (width == TB_LINK_WIDTH_SINGLE) {
+		ret = tb_port_set_link_width(port, TB_LINK_WIDTH_DUAL);
 		if (ret)
 			goto err_lane0;
 	}
 
-	ret = tb_port_get_link_width(port->dual_link_port);
-	if (ret == 1) {
-		ret = tb_port_set_link_width(port->dual_link_port, 2);
+	width = tb_port_get_link_width(port->dual_link_port);
+	if (width == TB_LINK_WIDTH_SINGLE) {
+		ret = tb_port_set_link_width(port->dual_link_port,
+					     TB_LINK_WIDTH_DUAL);
 		if (ret)
 			goto err_lane0;
 	}
 
-	ret = tb_port_set_lane_bonding(port, true);
-	if (ret)
-		goto err_lane1;
+	/*
+	 * Only set bonding if the link was not already bonded. This
+	 * avoids the lane adapter re-entering the bonding state.
+	 */
+	if (width == TB_LINK_WIDTH_SINGLE) {
+		ret = tb_port_set_lane_bonding(port, true);
+		if (ret)
+			goto err_lane1;
+	}
+
+	/*
+	 * When lane 0 bonding is set it will affect lane 1 too so
+	 * update both.
+	 */
+	port->bonded = true;
+	port->dual_link_port->bonded = true;
 
 	return 0;
 
 err_lane1:
-	tb_port_set_link_width(port->dual_link_port, 1);
+	tb_port_set_link_width(port->dual_link_port, TB_LINK_WIDTH_SINGLE);
 err_lane0:
-	tb_port_set_link_width(port, 1);
+	tb_port_set_link_width(port, TB_LINK_WIDTH_SINGLE);
+
 	return ret;
 }
 
@@ -1095,27 +1115,34 @@
 void tb_port_lane_bonding_disable(struct tb_port *port)
 {
 	tb_port_set_lane_bonding(port, false);
-	tb_port_set_link_width(port->dual_link_port, 1);
-	tb_port_set_link_width(port, 1);
+	tb_port_set_link_width(port->dual_link_port, TB_LINK_WIDTH_SINGLE);
+	tb_port_set_link_width(port, TB_LINK_WIDTH_SINGLE);
+	port->dual_link_port->bonded = false;
+	port->bonded = false;
 }
 
 /**
  * tb_port_wait_for_link_width() - Wait until link reaches specific width
  * @port: Port to wait for
- * @width: Expected link width (%1 or %2)
+ * @width_mask: Expected link width mask
  * @timeout_msec: Timeout in ms how long to wait
  *
  * Should be used after both ends of the link have been bonded (or
  * bonding has been disabled) to wait until the link actually reaches
- * the expected state. Returns %-ETIMEDOUT if the @width was not reached
- * within the given timeout, %0 if it did.
+ * the expected state. Returns %-ETIMEDOUT if the width was not reached
+ * within the given timeout, %0 if it did. Can be passed a mask of
+ * expected widths and succeeds if any of the widths is reached.
  */
-int tb_port_wait_for_link_width(struct tb_port *port, int width,
+int tb_port_wait_for_link_width(struct tb_port *port, unsigned int width_mask,
 				int timeout_msec)
 {
 	ktime_t timeout = ktime_add_ms(ktime_get(), timeout_msec);
 	int ret;
 
+	/* Gen 4 link does not support single lane */
+	if ((width_mask & TB_LINK_WIDTH_SINGLE) && is_gen4_link(port))
+		return -EOPNOTSUPP;
+
 	do {
 		ret = tb_port_get_link_width(port);
 		if (ret < 0) {
@@ -1126,7 +1153,7 @@
 			 */
 			if (ret != -EACCES)
 				return ret;
-		} else if (ret == width) {
+		} else if (ret & width_mask) {
 			return 0;
 		}
 
@@ -1778,20 +1805,57 @@
 static DEVICE_ATTR(rx_speed, 0444, speed_show, NULL);
 static DEVICE_ATTR(tx_speed, 0444, speed_show, NULL);
 
-static ssize_t lanes_show(struct device *dev, struct device_attribute *attr,
-			  char *buf)
+static ssize_t rx_lanes_show(struct device *dev, struct device_attribute *attr,
+			     char *buf)
 {
 	struct tb_switch *sw = tb_to_switch(dev);
+	unsigned int width;
+
+	switch (sw->link_width) {
+	case TB_LINK_WIDTH_SINGLE:
+	case TB_LINK_WIDTH_ASYM_TX:
+		width = 1;
+		break;
+	case TB_LINK_WIDTH_DUAL:
+		width = 2;
+		break;
+	case TB_LINK_WIDTH_ASYM_RX:
+		width = 3;
+		break;
+	default:
+		WARN_ON_ONCE(1);
+		return -EINVAL;
+	}
 
-	return sysfs_emit(buf, "%u\n", sw->link_width);
+	return sysfs_emit(buf, "%u\n", width);
 }
+static DEVICE_ATTR(rx_lanes, 0444, rx_lanes_show, NULL);
 
-/*
- * Currently link has same amount of lanes both directions (1 or 2) but
- * expose them separately to allow possible asymmetric links in the future.
- */
-static DEVICE_ATTR(rx_lanes, 0444, lanes_show, NULL);
-static DEVICE_ATTR(tx_lanes, 0444, lanes_show, NULL);
+static ssize_t tx_lanes_show(struct device *dev, struct device_attribute *attr,
+			     char *buf)
+{
+	struct tb_switch *sw = tb_to_switch(dev);
+	unsigned int width;
+
+	switch (sw->link_width) {
+	case TB_LINK_WIDTH_SINGLE:
+	case TB_LINK_WIDTH_ASYM_RX:
+		width = 1;
+		break;
+	case TB_LINK_WIDTH_DUAL:
+		width = 2;
+		break;
+	case TB_LINK_WIDTH_ASYM_TX:
+		width = 3;
+		break;
+	default:
+		WARN_ON_ONCE(1);
+		return -EINVAL;
+	}
+
+	return sysfs_emit(buf, "%u\n", width);
+}
+static DEVICE_ATTR(tx_lanes, 0444, tx_lanes_show, NULL);
 
 static ssize_t nvm_authenticate_show(struct device *dev,
 	struct device_attribute *attr, char *buf)
@@ -2624,6 +2688,7 @@ int tb_switch_lane_bonding_enable(struct tb_switch *sw)
 {
 	struct tb_port *up, *down;
 	u64 route = tb_route(sw);
+	unsigned int width_mask;
 	int ret;
 
 	if (!route)
@@ -2635,8 +2700,8 @@
 	up = tb_upstream_port(sw);
 	down = tb_switch_downstream_port(sw);
-	if (!tb_port_is_width_supported(up, 2) ||
-	    !tb_port_is_width_supported(down, 2))
+	if (!tb_port_is_width_supported(up, TB_LINK_WIDTH_DUAL) ||
+	    !tb_port_is_width_supported(down, TB_LINK_WIDTH_DUAL))
 		return 0;
 
 	ret = tb_port_lane_bonding_enable(up);
@@ -2652,7 +2717,11 @@
 		return ret;
 	}
 
-	ret = tb_port_wait_for_link_width(down, 2, 100);
+	/* Any of these widths means the link is bonded */
+	width_mask = TB_LINK_WIDTH_DUAL | TB_LINK_WIDTH_ASYM_TX |
+		     TB_LINK_WIDTH_ASYM_RX;
+
+	ret = tb_port_wait_for_link_width(down, width_mask, 100);
 	if (ret) {
 		tb_port_warn(down, "timeout enabling lane bonding\n");
 		return ret;
@@ -2676,6 +2745,7 @@
 void tb_switch_lane_bonding_disable(struct tb_switch *sw)
 {
 	struct tb_port *up, *down;
+	int ret;
 
 	if (!tb_route(sw))
 		return;
@@ -2693,7 +2763,8 @@
 	 * It is fine if we get other errors as the router might have
 	 * been unplugged.
 	 */
-	if (tb_port_wait_for_link_width(down, 1, 100) == -ETIMEDOUT)
+	ret = tb_port_wait_for_link_width(down, TB_LINK_WIDTH_SINGLE, 100);
+	if (ret == -ETIMEDOUT)
 		tb_sw_warn(sw, "timeout disabling lane bonding\n");
 
 	tb_port_update_credits(down);
diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index aa6e11589c28..440693f561a4 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -570,7 +570,8 @@ static int tb_available_bandwidth(struct tb *tb, struct tb_port *src_port,
 		usb3_consumed_down = 0;
 	}
 
-	*available_up = *available_down = 40000;
+	/* Maximum possible bandwidth asymmetric Gen 4 link is 120 Gb/s */
+	*available_up = *available_down = 120000;
 
 	/* Find the minimum available bandwidth over all links */
 	tb_for_each_port_on_path(src_port, dst_port, port) {
@@ -581,18 +582,45 @@
 
 		if (tb_is_upstream_port(port)) {
 			link_speed = port->sw->link_speed;
+			/*
+			 * sw->link_width is from upstream perspective
+			 * so we use the opposite for downstream of the
+			 * host router.
+			 */
+			if (port->sw->link_width == TB_LINK_WIDTH_ASYM_TX) {
+				up_bw = link_speed * 3 * 1000;
+				down_bw = link_speed * 1 * 1000;
+			} else if (port->sw->link_width == TB_LINK_WIDTH_ASYM_RX) {
+				up_bw = link_speed * 1 * 1000;
+				down_bw = link_speed * 3 * 1000;
+			} else {
+				up_bw = link_speed * port->sw->link_width * 1000;
+				down_bw = up_bw;
+			}
 		} else {
 			link_speed = tb_port_get_link_speed(port);
 			if (link_speed < 0)
 				return link_speed;
-		}
 
-		link_width = port->bonded ? 2 : 1;
+			link_width = tb_port_get_link_width(port);
+			if (link_width < 0)
+				return link_width;
+
+			if (link_width == TB_LINK_WIDTH_ASYM_TX) {
+				up_bw = link_speed * 1 * 1000;
+				down_bw = link_speed * 3 * 1000;
+			} else if (link_width == TB_LINK_WIDTH_ASYM_RX) {
+				up_bw = link_speed * 3 * 1000;
+				down_bw = link_speed * 1 * 1000;
+			} else {
+				up_bw = link_speed * link_width * 1000;
+				down_bw = up_bw;
+			}
+		}
 
-		up_bw = link_speed * link_width * 1000; /* Mb/s */
 		/* Leave 10% guard band */
 		up_bw -= up_bw / 10;
-		down_bw = up_bw;
+		down_bw -= down_bw / 10;
 
 		tb_port_dbg(port, "link total bandwidth %d/%d Mb/s\n", up_bw,
 			    down_bw);
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index bc91fcf5f430..845e851012e5 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -135,7 +135,7 @@ struct tb_switch_tmu {
  * @vendor_name: Name of the vendor (or %NULL if not known)
  * @device_name: Name of the device (or %NULL if not known)
  * @link_speed: Speed of the link in Gb/s
- * @link_width: Width of the link (1 or 2)
+ * @link_width: Width of the upstream facing link
  * @link_usb4: Upstream link is USB4
  * @generation: Switch Thunderbolt generation
  * @cap_plug_events: Offset to the plug events capability (%0 if not found)
@@ -173,6 +173,11 @@
  * switches) you need to have domain lock held.
  *
  * In USB4 terminology this structure represents a router.
+ *
+ * Note @link_width is not the same as whether the link is bonded or not.
+ * For Gen 4 links the link is also bonded when it is asymmetric. The
+ * correct way to find out whether the link is bonded or not is to look
+ * at the @bonded field of the upstream port.
  */
 struct tb_switch {
 	struct device dev;
@@ -188,7 +193,7 @@ struct tb_switch {
 	const char *vendor_name;
 	const char *device_name;
 	unsigned int link_speed;
-	unsigned int link_width;
+	enum tb_link_width link_width;
 	bool link_usb4;
 	unsigned int generation;
 	int cap_plug_events;
@@ -1050,11 +1055,10 @@ static inline bool tb_port_use_credit_allocation(const struct tb_port *port)
 
 int tb_port_get_link_speed(struct tb_port *port);
 int tb_port_get_link_width(struct tb_port *port);
-int tb_port_set_link_width(struct tb_port *port, unsigned int width);
-int tb_port_set_lane_bonding(struct tb_port *port, bool bonding);
+int tb_port_set_link_width(struct tb_port *port, enum tb_link_width width);
 int tb_port_lane_bonding_enable(struct tb_port *port);
 void tb_port_lane_bonding_disable(struct tb_port *port);
-int tb_port_wait_for_link_width(struct tb_port *port, int width,
+int tb_port_wait_for_link_width(struct tb_port *port, unsigned int width_mask,
 				int timeout_msec);
 int tb_port_update_credits(struct tb_port *port);
 
diff --git a/drivers/thunderbolt/tb_regs.h b/drivers/thunderbolt/tb_regs.h
index 0716d6b7701a..69455eaf6351 100644
--- a/drivers/thunderbolt/tb_regs.h
+++ b/drivers/thunderbolt/tb_regs.h
@@ -346,6 +346,7 @@ struct tb_regs_port_header {
 #define LANE_ADP_CS_1_CURRENT_SPEED_SHIFT	16
 #define LANE_ADP_CS_1_CURRENT_SPEED_GEN2	0x8
 #define LANE_ADP_CS_1_CURRENT_SPEED_GEN3	0x4
+#define LANE_ADP_CS_1_CURRENT_SPEED_GEN4	0x2
 #define LANE_ADP_CS_1_CURRENT_WIDTH_MASK	GENMASK(25, 20)
 #define LANE_ADP_CS_1_CURRENT_WIDTH_SHIFT	20
 #define LANE_ADP_CS_1_PMS			BIT(30)
diff --git a/drivers/thunderbolt/xdomain.c b/drivers/thunderbolt/xdomain.c
index 8389961b2d45..5b5566862318 100644
--- a/drivers/thunderbolt/xdomain.c
+++ b/drivers/thunderbolt/xdomain.c
@@ -1290,13 +1290,16 @@ static int tb_xdomain_link_state_change(struct tb_xdomain *xd,
 
 static int tb_xdomain_bond_lanes_uuid_high(struct tb_xdomain *xd)
 {
+	unsigned int width, width_mask;
 	struct tb_port *port;
-	int ret, width;
+	int ret;
 
 	if (xd->target_link_width == LANE_ADP_CS_1_TARGET_WIDTH_SINGLE) {
-		width = 1;
+		width = TB_LINK_WIDTH_SINGLE;
+		width_mask = width;
 	} else if (xd->target_link_width == LANE_ADP_CS_1_TARGET_WIDTH_DUAL) {
-		width = 2;
+		width = TB_LINK_WIDTH_DUAL;
+		width_mask = width | TB_LINK_WIDTH_ASYM_TX | TB_LINK_WIDTH_ASYM_RX;
 	} else {
 		if (xd->state_retries-- > 0) {
 			dev_dbg(&xd->dev,
@@ -1328,15 +1331,16 @@
 		return ret;
 	}
 
-	ret = tb_port_wait_for_link_width(port, width, XDOMAIN_BONDING_TIMEOUT);
+	ret = tb_port_wait_for_link_width(port, width_mask,
+					  XDOMAIN_BONDING_TIMEOUT);
 	if (ret) {
 		dev_warn(&xd->dev, "error waiting for link width to become %d\n",
-			 width);
+			 width_mask);
 		return ret;
 	}
 
-	port->bonded = width == 2;
-	port->dual_link_port->bonded = width == 2;
+	port->bonded = width > TB_LINK_WIDTH_SINGLE;
+	port->dual_link_port->bonded = width > TB_LINK_WIDTH_SINGLE;
 
 	tb_port_update_credits(port);
 	tb_xdomain_update_link_attributes(xd);
@@ -1735,16 +1739,57 @@
 static DEVICE_ATTR(rx_speed, 0444, speed_show, NULL);
 static DEVICE_ATTR(tx_speed, 0444, speed_show, NULL);
 
-static ssize_t lanes_show(struct device *dev, struct device_attribute *attr,
-			  char *buf)
+static ssize_t rx_lanes_show(struct device *dev, struct device_attribute *attr,
+			     char *buf)
 {
 	struct tb_xdomain *xd = container_of(dev, struct tb_xdomain, dev);
+	unsigned int width;
 
-	return sysfs_emit(buf, "%u\n", xd->link_width);
+	switch (xd->link_width) {
+	case TB_LINK_WIDTH_SINGLE:
+	case TB_LINK_WIDTH_ASYM_RX:
+		width = 1;
+		break;
+	case TB_LINK_WIDTH_DUAL:
+		width = 2;
+		break;
+	case TB_LINK_WIDTH_ASYM_TX:
+		width = 3;
+		break;
+	default:
+		WARN_ON_ONCE(1);
+		return -EINVAL;
+	}
+
+	return sysfs_emit(buf, "%u\n", width);
 }
+static DEVICE_ATTR(rx_lanes, 0444, rx_lanes_show, NULL);
+
+static ssize_t tx_lanes_show(struct device *dev, struct device_attribute *attr,
+			     char *buf)
+{
+	struct tb_xdomain *xd = container_of(dev, struct tb_xdomain, dev);
+	unsigned int width;
 
-static DEVICE_ATTR(rx_lanes, 0444, lanes_show, NULL);
-static DEVICE_ATTR(tx_lanes, 0444, lanes_show, NULL);
+	switch (xd->link_width) {
+	case TB_LINK_WIDTH_SINGLE:
+	case TB_LINK_WIDTH_ASYM_TX:
+		width = 1;
+		break;
+	case TB_LINK_WIDTH_DUAL:
+		width = 2;
+		break;
+	case TB_LINK_WIDTH_ASYM_RX:
+		width = 3;
+		break;
+	default:
+		WARN_ON_ONCE(1);
+		return -EINVAL;
+	}
+
+	return sysfs_emit(buf, "%u\n", width);
+}
+static DEVICE_ATTR(tx_lanes, 0444, tx_lanes_show, NULL);
 
 static struct attribute *xdomain_attrs[] = {
 	&dev_attr_device.attr,
@@ -1974,6 +2019,7 @@ void tb_xdomain_remove(struct tb_xdomain *xd)
  */
 int tb_xdomain_lane_bonding_enable(struct tb_xdomain *xd)
 {
+	unsigned int width_mask;
 	struct tb_port *port;
 	int ret;
 
@@ -1997,7 +2043,12 @@
 		return ret;
 	}
 
-	ret = tb_port_wait_for_link_width(port, 2, XDOMAIN_BONDING_TIMEOUT);
+	/* Any of these widths means the link is bonded */
+	width_mask = TB_LINK_WIDTH_DUAL | TB_LINK_WIDTH_ASYM_TX |
+		     TB_LINK_WIDTH_ASYM_RX;
+
+	ret = tb_port_wait_for_link_width(port, width_mask,
+					  XDOMAIN_BONDING_TIMEOUT);
 	if (ret) {
 		tb_port_warn(port, "failed to enable lane bonding\n");
 		return ret;
@@ -2024,8 +2075,11 @@ void tb_xdomain_lane_bonding_disable(struct tb_xdomain *xd)
 	port = tb_xdomain_downstream_port(xd);
 	if (port->dual_link_port) {
+		int ret;
+
 		tb_port_lane_bonding_disable(port);
-		if (tb_port_wait_for_link_width(port, 1, 100) == -ETIMEDOUT)
+		ret = tb_port_wait_for_link_width(port, TB_LINK_WIDTH_SINGLE, 100);
+		if (ret == -ETIMEDOUT)
 			tb_port_warn(port, "timeout disabling lane bonding\n");
 		tb_port_disable(port->dual_link_port);
 		tb_port_update_credits(port);
diff --git a/include/linux/thunderbolt.h b/include/linux/thunderbolt.h
index 90cd08ab2f5d..02333f47c994 100644
--- a/include/linux/thunderbolt.h
+++ b/include/linux/thunderbolt.h
@@ -171,6 +171,20 @@ struct tb_property *tb_property_get_next(struct tb_property_dir *dir,
 int tb_register_property_dir(const char *key, struct tb_property_dir *dir);
 void tb_unregister_property_dir(const char *key, struct tb_property_dir *dir);
 
+/**
+ * enum tb_link_width - Thunderbolt/USB4 link width
+ * @TB_LINK_WIDTH_SINGLE: Single lane link
+ * @TB_LINK_WIDTH_DUAL: Dual lane symmetric link
+ * @TB_LINK_WIDTH_ASYM_TX: Dual lane asymmetric Gen 4 link with 3 transmitters
+ * @TB_LINK_WIDTH_ASYM_RX: Dual lane asymmetric Gen 4 link with 3 receivers
+ */
+enum tb_link_width {
+	TB_LINK_WIDTH_SINGLE = BIT(0),
+	TB_LINK_WIDTH_DUAL = BIT(1),
+	TB_LINK_WIDTH_ASYM_TX = BIT(2),
+	TB_LINK_WIDTH_ASYM_RX = BIT(3),
+};
+
 /**
  * struct tb_xdomain - Cross-domain (XDomain) connection
  * @dev: XDomain device
@@ -186,7 +200,7 @@
  * @vendor_name: Name of the vendor (or %NULL if not known)
  * @device_name: Name of the device (or %NULL if not known)
  * @link_speed: Speed of the link in Gb/s
- * @link_width: Width of the link (1 or 2)
+ * @link_width: Width of the downstream facing link
  * @link_usb4: Downstream link is USB4
  * @is_unplugged: The XDomain is unplugged
  * @needs_uuid: If the XDomain does not have @remote_uuid it will be
@@ -234,7 +248,7 @@ struct tb_xdomain {
 	const char *vendor_name;
 	const char *device_name;
 	unsigned int link_speed;
-	unsigned int link_width;
+	enum tb_link_width link_width;
 	bool link_usb4;
 	bool is_unplugged;
 	bool needs_uuid;

From patchwork Mon Jun 12 08:21:30 2023
X-Patchwork-Submitter: Mika Westerberg
X-Patchwork-Id: 693246
From: Mika Westerberg
To: linux-usb@vger.kernel.org
Cc: Gil Fine, Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever, Mika Westerberg
Subject: [PATCH v2 05/20] thunderbolt: Add the new USB4 v2 notification types
Date: Mon, 12 Jun 2023 11:21:30 +0300
Message-Id: <20230612082145.62218-6-mika.westerberg@linux.intel.com>
In-Reply-To: <20230612082145.62218-1-mika.westerberg@linux.intel.com>
References: <20230612082145.62218-1-mika.westerberg@linux.intel.com>
X-Mailing-List: linux-usb@vger.kernel.org

The USB4 v2 spec adds a number of new notifications that the connection
manager can use instead of polling. While we do not use these yet, we
need to ack the ones that routers expect to be acked.
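The ack policy the tb.c hunk below implements can be condensed into a small
sketch (standalone and illustrative only; the numeric values mirror the
tb_msgs.h additions, and the helper itself is not part of the driver):

    /* Illustrative: which of the new USB4 v2 notifications the CM acks */
    static bool cm_acks_notification(unsigned int error)
    {
            switch (error) {
            case 35: /* TB_CFG_ERROR_PCIE_WAKE */
            case 36: /* TB_CFG_ERROR_DP_CON_CHANGE */
            case 37: /* TB_CFG_ERROR_DPTX_DISCOVERY */
            case 32: /* TB_CFG_ERROR_DP_BW, also queues a bandwidth request */
                    return true;
            default: /* ROP/POP completion, link recovery, asymmetric link:
                      * ignored for now */
                    return false;
            }
    }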
Signed-off-by: Mika Westerberg
---
 drivers/thunderbolt/ctl.c     | 28 ++++++++++++++++++++++++++++
 drivers/thunderbolt/tb.c      | 17 +++++++++++++----
 drivers/thunderbolt/tb_msgs.h |  7 +++++++
 3 files changed, 48 insertions(+), 4 deletions(-)

diff --git a/drivers/thunderbolt/ctl.c b/drivers/thunderbolt/ctl.c
index 3a213322ec7a..d997a4c545f7 100644
--- a/drivers/thunderbolt/ctl.c
+++ b/drivers/thunderbolt/ctl.c
@@ -409,6 +409,13 @@ static int tb_async_error(const struct ctl_pkg *pkg)
 	case TB_CFG_ERROR_HEC_ERROR_DETECTED:
 	case TB_CFG_ERROR_FLOW_CONTROL_ERROR:
 	case TB_CFG_ERROR_DP_BW:
+	case TB_CFG_ERROR_ROP_CMPLT:
+	case TB_CFG_ERROR_POP_CMPLT:
+	case TB_CFG_ERROR_PCIE_WAKE:
+	case TB_CFG_ERROR_DP_CON_CHANGE:
+	case TB_CFG_ERROR_DPTX_DISCOVERY:
+	case TB_CFG_ERROR_LINK_RECOVERY:
+	case TB_CFG_ERROR_ASYM_LINK:
 		return true;
 
 	default:
@@ -758,6 +765,27 @@ int tb_cfg_ack_notification(struct tb_ctl *ctl, u64 route,
 	case TB_CFG_ERROR_DP_BW:
 		name = "DP_BW";
 		break;
+	case TB_CFG_ERROR_ROP_CMPLT:
+		name = "router operation completion";
+		break;
+	case TB_CFG_ERROR_POP_CMPLT:
+		name = "port operation completion";
+		break;
+	case TB_CFG_ERROR_PCIE_WAKE:
+		name = "PCIe wake";
+		break;
+	case TB_CFG_ERROR_DP_CON_CHANGE:
+		name = "DP connector change";
+		break;
+	case TB_CFG_ERROR_DPTX_DISCOVERY:
+		name = "DPTX discovery";
+		break;
+	case TB_CFG_ERROR_LINK_RECOVERY:
+		name = "link recovery";
+		break;
+	case TB_CFG_ERROR_ASYM_LINK:
+		name = "asymmetric link";
+		break;
 	default:
 		name = "unknown";
 		break;
diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index 440693f561a4..f18cb5a52f0b 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -1952,17 +1952,26 @@ static void tb_queue_dp_bandwidth_request(struct tb *tb, u64 route, u8 port)
 static void tb_handle_notification(struct tb *tb, u64 route,
 				   const struct cfg_error_pkg *error)
 {
-	if (tb_cfg_ack_notification(tb->ctl, route, error))
-		tb_warn(tb, "could not ack notification on %llx\n", route);
-
 	switch (error->error) {
+	case TB_CFG_ERROR_PCIE_WAKE:
+	case TB_CFG_ERROR_DP_CON_CHANGE:
+	case TB_CFG_ERROR_DPTX_DISCOVERY:
+		if (tb_cfg_ack_notification(tb->ctl, route, error))
+			tb_warn(tb, "could not ack notification on %llx\n",
+				route);
+		break;
+
 	case TB_CFG_ERROR_DP_BW:
+		if (tb_cfg_ack_notification(tb->ctl, route, error))
+			tb_warn(tb, "could not ack notification on %llx\n",
+				route);
 		tb_queue_dp_bandwidth_request(tb, route, error->port);
 		break;
 
 	default:
-		/* Ack is enough */
-		return;
+		/* Ignore for now */
+		break;
 	}
 }
 
diff --git a/drivers/thunderbolt/tb_msgs.h b/drivers/thunderbolt/tb_msgs.h
index 3234bff07899..cd750e4b3440 100644
--- a/drivers/thunderbolt/tb_msgs.h
+++ b/drivers/thunderbolt/tb_msgs.h
@@ -30,6 +30,13 @@ enum tb_cfg_error {
 	TB_CFG_ERROR_FLOW_CONTROL_ERROR = 13,
 	TB_CFG_ERROR_LOCK = 15,
 	TB_CFG_ERROR_DP_BW = 32,
+	TB_CFG_ERROR_ROP_CMPLT = 33,
+	TB_CFG_ERROR_POP_CMPLT = 34,
+	TB_CFG_ERROR_PCIE_WAKE = 35,
+	TB_CFG_ERROR_DP_CON_CHANGE = 36,
+	TB_CFG_ERROR_DPTX_DISCOVERY = 37,
+	TB_CFG_ERROR_LINK_RECOVERY = 38,
+	TB_CFG_ERROR_ASYM_LINK = 39,
 };
 
 /* common header */

From patchwork Mon Jun 12 08:21:31 2023
X-Patchwork-Submitter: Mika Westerberg
X-Patchwork-Id: 691933
From: Mika Westerberg
To: linux-usb@vger.kernel.org
Cc: Gil Fine, Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever, Mika Westerberg
Subject: [PATCH v2 06/20] thunderbolt: Reset USB4 v2 host router
Date: Mon, 12 Jun 2023 11:21:31 +0300
Message-Id: <20230612082145.62218-7-mika.westerberg@linux.intel.com>
In-Reply-To: <20230612082145.62218-1-mika.westerberg@linux.intel.com>
References: <20230612082145.62218-1-mika.westerberg@linux.intel.com>
X-Mailing-List: linux-usb@vger.kernel.org

USB4 v2 added a bit that can be used to reset the host router, so we use
it to trigger a reset when the driver probes. This resets the already
connected topology as well, but doing so simplifies things a lot if, for
instance, the link is already set to asymmetric. We also add a module
parameter to prevent the reset in case of problems. While there, rename
REG_HOP_COUNT to REG_CAPS to better match the USB4 spec naming.
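In practice, the new behaviour can be turned off with the module parameter
added below. Assuming the NHI driver is built into the usual thunderbolt
module (the command lines here follow the standard module parameter
convention and are given for illustration only):

    # Skip the USB4 v2 host router reset on probe
    modprobe thunderbolt host_reset=false

or, for a built-in driver, thunderbolt.host_reset=false on the kernel
command line. The parameter is read-only at runtime (mode 0444), so it
takes effect only at load/boot time.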
Signed-off-by: Mika Westerberg
---
 drivers/thunderbolt/nhi.c      | 39 +++++++++++++++++++++++++++++++++-
 drivers/thunderbolt/nhi_regs.h | 19 +++++++++++-------
 2 files changed, 50 insertions(+), 8 deletions(-)

diff --git a/drivers/thunderbolt/nhi.c b/drivers/thunderbolt/nhi.c
index a979f47109e3..116016695a6a 100644
--- a/drivers/thunderbolt/nhi.c
+++ b/drivers/thunderbolt/nhi.c
@@ -46,6 +46,10 @@
 #define QUIRK_AUTO_CLEAR_INT	BIT(0)
 #define QUIRK_E2E		BIT(1)
 
+static bool host_reset = true;
+module_param(host_reset, bool, 0444);
+MODULE_PARM_DESC(host_reset, "reset USBv2 host router (default: true)");
+
 static int ring_interrupt_index(const struct tb_ring *ring)
 {
 	int bit = ring->hop;
@@ -1217,6 +1221,37 @@ static void nhi_check_iommu(struct tb_nhi *nhi)
 		   str_enabled_disabled(port_ok));
 }
 
+static void nhi_reset(struct tb_nhi *nhi)
+{
+	ktime_t timeout;
+	u32 val;
+
+	val = ioread32(nhi->iobase + REG_CAPS);
+	/* Reset only v2 and later routers */
+	if (FIELD_GET(REG_CAPS_VERSION_MASK, val) < REG_CAPS_VERSION_2)
+		return;
+
+	if (!host_reset) {
+		dev_dbg(&nhi->pdev->dev, "skipping host router reset\n");
+		return;
+	}
+
+	iowrite32(REG_RESET_HRR, nhi->iobase + REG_RESET);
+	msleep(100);
+
+	timeout = ktime_add_ms(ktime_get(), 500);
+	do {
+		val = ioread32(nhi->iobase + REG_RESET);
+		if (!(val & REG_RESET_HRR)) {
+			dev_warn(&nhi->pdev->dev, "host router reset successful\n");
+			return;
+		}
+		usleep_range(10, 20);
+	} while (ktime_before(ktime_get(), timeout));
+
+	dev_warn(&nhi->pdev->dev, "timeout resetting host router\n");
+}
+
 static int nhi_init_msi(struct tb_nhi *nhi)
 {
 	struct pci_dev *pdev = nhi->pdev;
@@ -1317,7 +1352,7 @@ static int nhi_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	nhi->ops = (const struct tb_nhi_ops *)id->driver_data;
 	/* cannot fail - table is allocated in pcim_iomap_regions */
 	nhi->iobase = pcim_iomap_table(pdev)[0];
-	nhi->hop_count = ioread32(nhi->iobase + REG_HOP_COUNT) & 0x3ff;
+	nhi->hop_count = ioread32(nhi->iobase + REG_CAPS) & 0x3ff;
 	dev_dbg(dev, "total paths: %d\n", nhi->hop_count);
 
 	nhi->tx_rings = devm_kcalloc(&pdev->dev, nhi->hop_count,
@@ -1330,6 +1365,8 @@
 	nhi_check_quirks(nhi);
 	nhi_check_iommu(nhi);
 
+	nhi_reset(nhi);
+
 	res = nhi_init_msi(nhi);
 	if (res)
 		return dev_err_probe(dev, res, "cannot enable MSI, aborting\n");
diff --git a/drivers/thunderbolt/nhi_regs.h b/drivers/thunderbolt/nhi_regs.h
index 6ba295815477..297a3e440648 100644
--- a/drivers/thunderbolt/nhi_regs.h
+++ b/drivers/thunderbolt/nhi_regs.h
@@ -37,7 +37,7 @@ struct ring_desc {
 /* NHI registers in bar 0 */
 
 /*
- * 16 bytes per entry, one entry for every hop (REG_HOP_COUNT)
+ * 16 bytes per entry, one entry for every hop (REG_CAPS)
  * 00: physical pointer to an array of struct ring_desc
  * 08: ring tail (set by NHI)
  * 10: ring head (index of first non posted descriptor)
@@ -46,7 +46,7 @@ struct ring_desc {
 #define REG_TX_RING_BASE	0x00000
 
 /*
- * 16 bytes per entry, one entry for every hop (REG_HOP_COUNT)
+ * 16 bytes per entry, one entry for every hop (REG_CAPS)
  * 00: physical pointer to an array of struct ring_desc
  * 08: ring head (index of first not posted descriptor)
  * 10: ring tail (set by NHI)
@@ -56,7 +56,7 @@ struct ring_desc {
 #define REG_RX_RING_BASE	0x08000
 
 /*
- * 32 bytes per entry, one entry for every hop (REG_HOP_COUNT)
+ * 32 bytes per entry, one entry for every hop (REG_CAPS)
  * 00: enum_ring_flags
 * 04: isoch time stamp ?? (write 0)
  * ..: unknown
  */
 #define REG_TX_OPTIONS_BASE	0x19800
 
 /*
- * 32 bytes per entry, one entry for every hop (REG_HOP_COUNT)
+ * 32 bytes per entry, one entry for every hop (REG_CAPS)
  * 00: enum ring_flags
  *     If RING_FLAG_E2E_FLOW_CONTROL is set then bits 13-23 must be set to
  *     the corresponding TX hop id.
@@ -77,7 +77,7 @@ struct ring_desc {
 
 /*
  * three bitfields: tx, rx, rx overflow
- * Every bitfield contains one bit for every hop (REG_HOP_COUNT).
+ * Every bitfield contains one bit for every hop (REG_CAPS).
  * New interrupts are fired only after ALL registers have been
  * read (even those containing only disabled rings).
  */
@@ -87,7 +87,7 @@ struct ring_desc {
 
 /*
  * two bitfields: rx, tx
- * Both bitfields contains one bit for every hop (REG_HOP_COUNT). To
+ * Both bitfields contains one bit for every hop (REG_CAPS). To
  * enable/disable interrupts set/clear the corresponding bits.
  */
 #define REG_RING_INTERRUPT_BASE	0x38200
@@ -104,12 +104,17 @@ struct ring_desc {
 #define REG_INT_VEC_ALLOC_REGS	(32 / REG_INT_VEC_ALLOC_BITS)
 
 /* The last 11 bits contain the number of hops supported by the NHI port. */
-#define REG_HOP_COUNT		0x39640
+#define REG_CAPS		0x39640
+#define REG_CAPS_VERSION_MASK	GENMASK(23, 16)
+#define REG_CAPS_VERSION_2	0x40
 
 #define REG_DMA_MISC			0x39864
 #define REG_DMA_MISC_INT_AUTO_CLEAR	BIT(2)
 #define REG_DMA_MISC_DISABLE_AUTO_CLEAR	BIT(17)
 
+#define REG_RESET			0x39898
+#define REG_RESET_HRR			BIT(0)
+
 #define REG_INMAIL_DATA			0x39900
 
 #define REG_INMAIL_CMD			0x39904

From patchwork Mon Jun 12 08:21:32 2023
X-Patchwork-Submitter: Mika Westerberg
X-Patchwork-Id: 691930
From: Mika Westerberg
To: linux-usb@vger.kernel.org
Cc: Gil Fine, Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever, Mika Westerberg
Subject: [PATCH v2 07/20] thunderbolt: Announce USB4 v2 connection manager support
Date: Mon, 12 Jun 2023 11:21:32 +0300
Message-Id: <20230612082145.62218-8-mika.westerberg@linux.intel.com>
In-Reply-To: <20230612082145.62218-1-mika.westerberg@linux.intel.com>
References: <20230612082145.62218-1-mika.westerberg@linux.intel.com>
X-Mailing-List: linux-usb@vger.kernel.org

From: Gil Fine

Program the CMUV (Connection Manager USB4 Version) field for USB4 v2 and
v1 routers according to the spec.

Signed-off-by: Gil Fine
Signed-off-by: Mika Westerberg
---
 drivers/thunderbolt/switch.c  | 8 ++++++--
 drivers/thunderbolt/tb_regs.h | 3 +++
 2 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index a0451218af2a..ebe9559c8c79 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -2430,9 +2430,13 @@ int tb_switch_configure(struct tb_switch *sw)
 		/*
 		 * For USB4 devices, we need to program the CM version
 		 * accordingly so that it knows to expose all the
-		 * additional capabilities.
+		 * additional capabilities. Program it according to USB4
+		 * version to avoid changing existing (v1) routers' behaviour.
 		 */
-		sw->config.cmuv = USB4_VERSION_1_0;
+		if (usb4_switch_version(sw) < 2)
+			sw->config.cmuv = ROUTER_CS_4_CMUV_V1;
+		else
+			sw->config.cmuv = ROUTER_CS_4_CMUV_V2;
 		sw->config.plug_events_delay = 0xa;
 
 		/* Enumerate the switch */
diff --git a/drivers/thunderbolt/tb_regs.h b/drivers/thunderbolt/tb_regs.h
index 69455eaf6351..c8e40ef09903 100644
--- a/drivers/thunderbolt/tb_regs.h
+++ b/drivers/thunderbolt/tb_regs.h
@@ -195,6 +195,9 @@ struct tb_regs_switch_header {
 
 #define ROUTER_CS_1				0x01
 #define ROUTER_CS_4				0x04
+/* Used with the router cmuv field */
+#define ROUTER_CS_4_CMUV_V1			0x10
+#define ROUTER_CS_4_CMUV_V2			0x20
 #define ROUTER_CS_5				0x05
 #define ROUTER_CS_5_SLP				BIT(0)
 #define ROUTER_CS_5_WOP				BIT(1)

From patchwork Mon Jun 12 08:21:33 2023
X-Patchwork-Submitter: Mika Westerberg
X-Patchwork-Id: 691932
From patchwork Mon Jun 12 08:21:33 2023
X-Patchwork-Submitter: Mika Westerberg
X-Patchwork-Id: 691932
From: Mika Westerberg
To: linux-usb@vger.kernel.org
Cc: Gil Fine, Yehezkel Bernat, Michael Jamet, Lukas Wunner,
    Andreas Noever, Mika Westerberg
Subject: [PATCH v2 08/20] thunderbolt: Enable USB4 v2 PCIe TLP/DLLP extended encapsulation
Date: Mon, 12 Jun 2023 11:21:33 +0300
Message-Id: <20230612082145.62218-9-mika.westerberg@linux.intel.com>

From: Gil Fine

The USB4 v2 spec introduces a modified encapsulation of PCIe TLP and
DLLP packets. This improves the efficiency of tunneled PCIe traffic by
reducing overhead. Enable this if both sides of the link support it.
Signed-off-by: Gil Fine
Signed-off-by: Mika Westerberg
---
 drivers/thunderbolt/tb.h      |  2 ++
 drivers/thunderbolt/tb_regs.h |  2 ++
 drivers/thunderbolt/tunnel.c  | 37 ++++++++++++++++++++++++++++++++---
 drivers/thunderbolt/usb4.c    | 34 ++++++++++++++++++++++++++++++++
 4 files changed, 72 insertions(+), 3 deletions(-)

diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index 845e851012e5..002e0426a82c 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -1301,6 +1301,8 @@ int usb4_dp_port_allocated_bw(struct tb_port *port);
 int usb4_dp_port_allocate_bw(struct tb_port *port, int bw);
 int usb4_dp_port_requested_bw(struct tb_port *port);

+int usb4_pci_port_set_ext_encapsulation(struct tb_port *port, bool enable);
+
 static inline bool tb_is_usb4_port_device(const struct device *dev)
 {
 	return dev->type == &usb4_port_device_type;
diff --git a/drivers/thunderbolt/tb_regs.h b/drivers/thunderbolt/tb_regs.h
index c8e40ef09903..549cc79c7313 100644
--- a/drivers/thunderbolt/tb_regs.h
+++ b/drivers/thunderbolt/tb_regs.h
@@ -451,6 +451,8 @@ struct tb_regs_port_header {
 /* PCIe adapter registers */
 #define ADP_PCIE_CS_0			0x00
 #define ADP_PCIE_CS_0_PE		BIT(31)
+#define ADP_PCIE_CS_1			0x01
+#define ADP_PCIE_CS_1_EE		BIT(0)

 /* USB adapter registers */
 #define ADP_USB3_CS_0			0x00
diff --git a/drivers/thunderbolt/tunnel.c b/drivers/thunderbolt/tunnel.c
index 7df5f90e21d4..f1d0ab2b39a2 100644
--- a/drivers/thunderbolt/tunnel.c
+++ b/drivers/thunderbolt/tunnel.c
@@ -153,18 +153,49 @@ static struct tb_tunnel *tb_tunnel_alloc(struct tb *tb, size_t npaths,
 	return tunnel;
 }

+static int tb_pci_set_ext_encapsulation(struct tb_tunnel *tunnel, bool enable)
+{
+	int ret;
+
+	/* Only supported if both routers are at least USB4 v2 */
+	if (usb4_switch_version(tunnel->src_port->sw) < 2 ||
+	    usb4_switch_version(tunnel->dst_port->sw) < 2)
+		return 0;
+
+	ret = usb4_pci_port_set_ext_encapsulation(tunnel->src_port, enable);
+	if (ret)
+		return ret;
+
+	ret = usb4_pci_port_set_ext_encapsulation(tunnel->dst_port, enable);
+	if (ret)
+		return ret;
+
+	tb_tunnel_dbg(tunnel, "extended encapsulation %sabled\n",
+		      enable ? "en" : "dis");
+	return 0;
+}
+
 static int tb_pci_activate(struct tb_tunnel *tunnel, bool activate)
 {
 	int res;

+	if (activate) {
+		res = tb_pci_set_ext_encapsulation(tunnel, activate);
+		if (res)
+			return res;
+	}
+
 	res = tb_pci_port_enable(tunnel->src_port, activate);
 	if (res)
 		return res;

-	if (tb_port_is_pcie_up(tunnel->dst_port))
-		return tb_pci_port_enable(tunnel->dst_port, activate);
+	if (tb_port_is_pcie_up(tunnel->dst_port)) {
+		res = tb_pci_port_enable(tunnel->dst_port, activate);
+		if (res)
+			return res;
+	}

-	return 0;
+	return activate ? 0 : tb_pci_set_ext_encapsulation(tunnel, activate);
 }

 static int tb_pci_init_credits(struct tb_path_hop *hop)
diff --git a/drivers/thunderbolt/usb4.c b/drivers/thunderbolt/usb4.c
index 9f5a98347bee..b80972cb5b9d 100644
--- a/drivers/thunderbolt/usb4.c
+++ b/drivers/thunderbolt/usb4.c
@@ -2796,3 +2796,37 @@ int usb4_dp_port_requested_bw(struct tb_port *port)

 	return (val & ADP_DP_CS_8_REQUESTED_BW_MASK) * granularity;
 }
+
+/**
+ * usb4_pci_port_set_ext_encapsulation() - Enable/disable extended encapsulation
+ * @port: PCIe adapter
+ * @enable: Enable/disable extended encapsulation
+ *
+ * Can be called for any adapter. Enables or disables extended
+ * encapsulation used in PCIe tunneling. Returns %0 on success and
+ * negative errno otherwise.
+ */
+int usb4_pci_port_set_ext_encapsulation(struct tb_port *port, bool enable)
+{
+	u32 val;
+	int ret;
+
+	if (!tb_port_is_pcie_up(port) && !tb_port_is_pcie_down(port))
+		return 0;
+
+	if (usb4_switch_version(port->sw) < 2)
+		return 0;
+
+	ret = tb_port_read(port, &val, TB_CFG_PORT,
+			   port->cap_adap + ADP_PCIE_CS_1, 1);
+	if (ret)
+		return ret;
+
+	if (enable)
+		val |= ADP_PCIE_CS_1_EE;
+	else
+		val &= ~ADP_PCIE_CS_1_EE;
+
+	return tb_port_write(port, &val, TB_CFG_PORT,
+			     port->cap_adap + ADP_PCIE_CS_1, 1);
+}
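The gate at the top of tb_pci_set_ext_encapsulation() generalizes to a simple predicate; a sketch (tunnel_ends_are_v2() is hypothetical, the driver open-codes this check, and the declarations come from the driver-internal tb.h used above):

/* Both ends of the tunnel must be USB4 v2 or later; otherwise the
 * extended encapsulation bit is left alone.
 */
static bool tunnel_ends_are_v2(const struct tb_tunnel *tunnel)
{
	return usb4_switch_version(tunnel->src_port->sw) >= 2 &&
	       usb4_switch_version(tunnel->dst_port->sw) >= 2;
}

Note also the ordering tb_pci_activate() enforces: encapsulation is enabled before the PCIe adapters come up and disabled only after they are down, so the link encoding never changes underneath an active tunnel.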
From patchwork Mon Jun 12 08:21:34 2023
X-Patchwork-Submitter: Mika Westerberg
X-Patchwork-Id: 693241
From: Mika Westerberg
To: linux-usb@vger.kernel.org
Cc: Gil Fine, Yehezkel Bernat, Michael Jamet, Lukas Wunner,
    Andreas Noever, Mika Westerberg
Subject: [PATCH v2 09/20] thunderbolt: Add two additional double words for adapters TMU for USB4 v2 routers
Date: Mon, 12 Jun 2023 11:21:34 +0300
Message-Id: <20230612082145.62218-10-mika.westerberg@linux.intel.com>

From: Gil Fine

For USB4 v2 routers, the adapter's TMU capability has two additional
double words. Include them in the debugfs register dump.

Signed-off-by: Gil Fine
Signed-off-by: Mika Westerberg
---
 drivers/thunderbolt/debugfs.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/drivers/thunderbolt/debugfs.c b/drivers/thunderbolt/debugfs.c
index 40b59e662ee3..48aaba17d1db 100644
--- a/drivers/thunderbolt/debugfs.c
+++ b/drivers/thunderbolt/debugfs.c
@@ -19,7 +19,8 @@
 #define PORT_CAP_LANE_LEN	3
 #define PORT_CAP_USB3_LEN	5
 #define PORT_CAP_DP_LEN		8
-#define PORT_CAP_TMU_LEN	8
+#define PORT_CAP_TMU_V1_LEN	8
+#define PORT_CAP_TMU_V2_LEN	10
 #define PORT_CAP_BASIC_LEN	9
 #define PORT_CAP_USB4_LEN	20

@@ -1161,7 +1162,10 @@ static void port_cap_show(struct tb_port *port, struct seq_file *s,
 		break;

 	case TB_PORT_CAP_TIME1:
-		length = PORT_CAP_TMU_LEN;
+		if (usb4_switch_version(port->sw) < 2)
+			length = PORT_CAP_TMU_V1_LEN;
+		else
+			length = PORT_CAP_TMU_V2_LEN;
 		break;

 	case TB_PORT_CAP_POWER:
From patchwork Mon Jun 12 08:21:35 2023
X-Patchwork-Submitter: Mika Westerberg
X-Patchwork-Id: 691929
From: Mika Westerberg
To: linux-usb@vger.kernel.org
Cc: Gil Fine, Yehezkel Bernat, Michael Jamet, Lukas Wunner,
    Andreas Noever, Mika Westerberg
Subject: [PATCH v2 10/20] thunderbolt: Fix DisplayPort IN adapter capability length for USB4 v2 routers
Date: Mon, 12 Jun 2023 11:21:35 +0300
Message-Id: <20230612082145.62218-11-mika.westerberg@linux.intel.com>

From: Gil Fine

For USB4 v2 routers, the DisplayPort IN adapter capability length is
longer. Display the correct capability length in the debugfs register
dump.

Signed-off-by: Gil Fine
Signed-off-by: Mika Westerberg
---
 drivers/thunderbolt/debugfs.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/drivers/thunderbolt/debugfs.c b/drivers/thunderbolt/debugfs.c
index 48aaba17d1db..78bcf77831fe 100644
--- a/drivers/thunderbolt/debugfs.c
+++ b/drivers/thunderbolt/debugfs.c
@@ -18,7 +18,8 @@
 #define PORT_CAP_POWER_LEN	2
 #define PORT_CAP_LANE_LEN	3
 #define PORT_CAP_USB3_LEN	5
-#define PORT_CAP_DP_LEN		8
+#define PORT_CAP_DP_V1_LEN	9
+#define PORT_CAP_DP_V2_LEN	14
 #define PORT_CAP_TMU_V1_LEN	8
 #define PORT_CAP_TMU_V2_LEN	10
 #define PORT_CAP_BASIC_LEN	9
@@ -1175,11 +1176,13 @@ static void port_cap_show(struct tb_port *port, struct seq_file *s,
 	case TB_PORT_CAP_ADAP:
 		if (tb_port_is_pcie_down(port) || tb_port_is_pcie_up(port)) {
 			length = PORT_CAP_PCIE_LEN;
-		} else if (tb_port_is_dpin(port) || tb_port_is_dpout(port)) {
-			if (usb4_dp_port_bw_mode_supported(port))
-				length = PORT_CAP_DP_LEN + 1;
+		} else if (tb_port_is_dpin(port)) {
+			if (usb4_switch_version(port->sw) < 2)
+				length = PORT_CAP_DP_V1_LEN;
 			else
-				length = PORT_CAP_DP_LEN;
+				length = PORT_CAP_DP_V2_LEN;
+		} else if (tb_port_is_dpout(port)) {
+			length = PORT_CAP_DP_V1_LEN;
 		} else if (tb_port_is_usb3_down(port) ||
 			   tb_port_is_usb3_up(port)) {
 			length = PORT_CAP_USB3_LEN;
From patchwork Mon Jun 12 08:21:36 2023
X-Patchwork-Submitter: Mika Westerberg
X-Patchwork-Id: 693242
From: Mika Westerberg
To: linux-usb@vger.kernel.org
Cc: Gil Fine, Yehezkel Bernat, Michael Jamet, Lukas Wunner,
    Andreas Noever, Mika Westerberg
Subject: [PATCH v2 11/20] thunderbolt: Fix PCIe adapter capability length for USB4 v2 routers
Date: Mon, 12 Jun 2023 11:21:36 +0300
Message-Id: <20230612082145.62218-12-mika.westerberg@linux.intel.com>

From: Gil Fine

For USB4 v2 routers, the PCIe adapter capability length is longer.
Display the correct capability length in the debugfs register dump.

Signed-off-by: Gil Fine
Signed-off-by: Mika Westerberg
---
 drivers/thunderbolt/debugfs.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/drivers/thunderbolt/debugfs.c b/drivers/thunderbolt/debugfs.c
index 78bcf77831fe..c9ddd49138d8 100644
--- a/drivers/thunderbolt/debugfs.c
+++ b/drivers/thunderbolt/debugfs.c
@@ -14,7 +14,8 @@
 #include "tb.h"
 #include "sb_regs.h"

-#define PORT_CAP_PCIE_LEN	1
+#define PORT_CAP_V1_PCIE_LEN	1
+#define PORT_CAP_V2_PCIE_LEN	2
 #define PORT_CAP_POWER_LEN	2
 #define PORT_CAP_LANE_LEN	3
 #define PORT_CAP_USB3_LEN	5
@@ -1175,7 +1176,10 @@ static void port_cap_show(struct tb_port *port, struct seq_file *s,
 	case TB_PORT_CAP_ADAP:
 		if (tb_port_is_pcie_down(port) || tb_port_is_pcie_up(port)) {
-			length = PORT_CAP_PCIE_LEN;
+			if (usb4_switch_version(port->sw) < 2)
+				length = PORT_CAP_V1_PCIE_LEN;
+			else
+				length = PORT_CAP_V2_PCIE_LEN;
 		} else if (tb_port_is_dpin(port)) {
 			if (usb4_switch_version(port->sw) < 2)
 				length = PORT_CAP_DP_V1_LEN;
From patchwork Mon Jun 12 08:21:37 2023
X-Patchwork-Submitter: Mika Westerberg
X-Patchwork-Id: 693245
From: Mika Westerberg
To: linux-usb@vger.kernel.org
Cc: Gil Fine, Yehezkel Bernat, Michael Jamet, Lukas Wunner,
    Andreas Noever, Mika Westerberg
Subject: [PATCH v2 12/20] thunderbolt: Add Intel Barlow Ridge PCI ID
Date: Mon, 12 Jun 2023 11:21:37 +0300
Message-Id: <20230612082145.62218-13-mika.westerberg@linux.intel.com>

Intel Barlow Ridge is the first USB4 v2 controller from Intel. The
controller exposes the standard USB4 PCI class ID in typical
configurations; however, it can be configured to use a special class ID
that allows binding a different driver than the Windows inbox one. For
this reason add the Barlow Ridge PCI ID to the Linux driver too so that
the driver can attach regardless of the class ID.
Tested-by: Pengfei Xu
Signed-off-by: Mika Westerberg
---
 drivers/thunderbolt/nhi.c | 2 ++
 drivers/thunderbolt/nhi.h | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/drivers/thunderbolt/nhi.c b/drivers/thunderbolt/nhi.c
index 116016695a6a..4b7bec74e89f 100644
--- a/drivers/thunderbolt/nhi.c
+++ b/drivers/thunderbolt/nhi.c
@@ -1517,6 +1517,8 @@ static struct pci_device_id nhi_ids[] = {
 	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
 	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_MTL_P_NHI1),
 	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_BARLOW_RIDGE_HOST_80G_NHI) },
+	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_BARLOW_RIDGE_HOST_40G_NHI) },

 	/* Any USB4 compliant host */
 	{ PCI_DEVICE_CLASS(PCI_CLASS_SERIAL_USB_USB4, ~0) },
diff --git a/drivers/thunderbolt/nhi.h b/drivers/thunderbolt/nhi.h
index b0718020c6f5..c15a0c46c9cf 100644
--- a/drivers/thunderbolt/nhi.h
+++ b/drivers/thunderbolt/nhi.h
@@ -75,6 +75,8 @@ extern const struct tb_nhi_ops icl_nhi_ops;
 #define PCI_DEVICE_ID_INTEL_TITAN_RIDGE_DD_BRIDGE	0x15ef
 #define PCI_DEVICE_ID_INTEL_ADL_NHI0			0x463e
 #define PCI_DEVICE_ID_INTEL_ADL_NHI1			0x466d
+#define PCI_DEVICE_ID_INTEL_BARLOW_RIDGE_HOST_80G_NHI	0x5781
+#define PCI_DEVICE_ID_INTEL_BARLOW_RIDGE_HOST_40G_NHI	0x5784
 #define PCI_DEVICE_ID_INTEL_MTL_M_NHI0			0x7eb2
 #define PCI_DEVICE_ID_INTEL_MTL_P_NHI0			0x7ec2
 #define PCI_DEVICE_ID_INTEL_MTL_P_NHI1			0x7ec3
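Both kinds of entries coexist in nhi_ids[]: an ID-based entry binds the device no matter which class it reports (covering the special class ID configuration mentioned above), while the class-based entry catches any vendor's standard USB4 host. A sketch of the pattern (the table name and the literal device ID here are illustrative only):

#include <linux/module.h>
#include <linux/pci.h>

static const struct pci_device_id example_ids[] = {
	/* Exact vendor/device match; the reported class is irrelevant */
	{ PCI_VDEVICE(INTEL, 0x5781) },
	/* Fallback: any device exposing the standard USB4 class */
	{ PCI_DEVICE_CLASS(PCI_CLASS_SERIAL_USB_USB4, ~0) },
	{ }
};
MODULE_DEVICE_TABLE(pci, example_ids);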
From patchwork Mon Jun 12 08:21:38 2023
X-Patchwork-Submitter: Mika Westerberg
X-Patchwork-Id: 691931
From: Mika Westerberg
To: linux-usb@vger.kernel.org
Cc: Gil Fine, Yehezkel Bernat, Michael Jamet, Lukas Wunner,
    Andreas Noever, Mika Westerberg
Subject: [PATCH v2 13/20] thunderbolt: Limit Intel Barlow Ridge USB3 bandwidth
Date: Mon, 12 Jun 2023 11:21:38 +0300
Message-Id: <20230612082145.62218-14-mika.westerberg@linux.intel.com>

The Intel Barlow Ridge discrete USB4 host router has the same limitation
as the previous generations, so make sure the USB3 bandwidth limitation
quirk is applied to Barlow Ridge too.

Signed-off-by: Gil Fine
Signed-off-by: Mika Westerberg
---
 drivers/thunderbolt/nhi.h    | 2 ++
 drivers/thunderbolt/quirks.c | 8 ++++++++
 2 files changed, 10 insertions(+)

diff --git a/drivers/thunderbolt/nhi.h b/drivers/thunderbolt/nhi.h
index c15a0c46c9cf..0f029ce75882 100644
--- a/drivers/thunderbolt/nhi.h
+++ b/drivers/thunderbolt/nhi.h
@@ -77,6 +77,8 @@ extern const struct tb_nhi_ops icl_nhi_ops;
 #define PCI_DEVICE_ID_INTEL_ADL_NHI1			0x466d
 #define PCI_DEVICE_ID_INTEL_BARLOW_RIDGE_HOST_80G_NHI	0x5781
 #define PCI_DEVICE_ID_INTEL_BARLOW_RIDGE_HOST_40G_NHI	0x5784
+#define PCI_DEVICE_ID_INTEL_BARLOW_RIDGE_HUB_80G_BRIDGE	0x5786
+#define PCI_DEVICE_ID_INTEL_BARLOW_RIDGE_HUB_40G_BRIDGE	0x57a4
 #define PCI_DEVICE_ID_INTEL_MTL_M_NHI0			0x7eb2
 #define PCI_DEVICE_ID_INTEL_MTL_P_NHI0			0x7ec2
 #define PCI_DEVICE_ID_INTEL_MTL_P_NHI1			0x7ec3
diff --git a/drivers/thunderbolt/quirks.c b/drivers/thunderbolt/quirks.c
index 854d84148850..488138a28ae1 100644
--- a/drivers/thunderbolt/quirks.c
+++ b/drivers/thunderbolt/quirks.c
@@ -75,6 +75,14 @@ static const struct tb_quirk tb_quirks[] = {
 	  quirk_usb3_maximum_bandwidth },
 	{ 0x8087, PCI_DEVICE_ID_INTEL_MTL_P_NHI1, 0x0000, 0x0000,
 	  quirk_usb3_maximum_bandwidth },
+	{ 0x8087, PCI_DEVICE_ID_INTEL_BARLOW_RIDGE_HOST_80G_NHI, 0x0000, 0x0000,
+	  quirk_usb3_maximum_bandwidth },
+	{ 0x8087, PCI_DEVICE_ID_INTEL_BARLOW_RIDGE_HOST_40G_NHI, 0x0000, 0x0000,
+	  quirk_usb3_maximum_bandwidth },
+	{ 0x8087, PCI_DEVICE_ID_INTEL_BARLOW_RIDGE_HUB_80G_BRIDGE, 0x0000, 0x0000,
+	  quirk_usb3_maximum_bandwidth },
+	{ 0x8087, PCI_DEVICE_ID_INTEL_BARLOW_RIDGE_HUB_40G_BRIDGE, 0x0000, 0x0000,
+	  quirk_usb3_maximum_bandwidth },

 	/*
 	 * CLx is not supported on AMD USB4 Yellow Carp and Pink Sardine platforms.
 	 */
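For reference, the zeros in the new table rows act as wildcards: an entry matches when every non-zero field equals the corresponding router property. The sketch below mirrors how tb_check_quirks() appears to walk tb_quirks[]; treat the field layout and semantics as an assumption about the driver, not a quote of it:

/* Zero fields match anything; non-zero fields must match exactly. */
struct quirk {
	u16 hw_vendor_id;
	u16 hw_device_id;
	u16 vendor;
	u16 device;
	void (*hook)(struct tb_switch *sw);
};

static bool quirk_matches(const struct quirk *q, const struct tb_switch *sw)
{
	return (!q->hw_vendor_id || q->hw_vendor_id == sw->config.vendor_id) &&
	       (!q->hw_device_id || q->hw_device_id == sw->config.device_id) &&
	       (!q->vendor || q->vendor == sw->vendor) &&
	       (!q->device || q->device == sw->device);
}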
From patchwork Mon Jun 12 08:21:39 2023
X-Patchwork-Submitter: Mika Westerberg
X-Patchwork-Id: 693243
From: Mika Westerberg
To: linux-usb@vger.kernel.org
Cc: Gil Fine, Yehezkel Bernat, Michael Jamet, Lukas Wunner,
    Andreas Noever, Mika Westerberg
Subject: [PATCH v2 14/20] thunderbolt: Move constants related to NVM into nvm.c
Date: Mon, 12 Jun 2023 11:21:39 +0300
Message-Id: <20230612082145.62218-15-mika.westerberg@linux.intel.com>

From: Gil Fine

Move the constants related to NVM into nvm.c to make the code cleaner.
Add a separate USB4_DATA_DWORDS constant for usb4.c. No functional
changes.
Signed-off-by: Gil Fine
Signed-off-by: Mika Westerberg
---
 drivers/thunderbolt/nvm.c  |  4 ++++
 drivers/thunderbolt/tb.h   |  4 ----
 drivers/thunderbolt/usb4.c | 11 ++++++-----
 3 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/drivers/thunderbolt/nvm.c b/drivers/thunderbolt/nvm.c
index 3dd5f81bd629..b004f29ad2b6 100644
--- a/drivers/thunderbolt/nvm.c
+++ b/drivers/thunderbolt/nvm.c
@@ -12,6 +12,10 @@

 #include "tb.h"

+#define NVM_MIN_SIZE		SZ_32K
+#define NVM_MAX_SIZE		SZ_512K
+#define NVM_DATA_DWORDS		16
+
 /* Intel specific NVM offsets */
 #define INTEL_NVM_DEVID			0x05
 #define INTEL_NVM_VERSION		0x08
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index 002e0426a82c..088033726648 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -19,10 +19,6 @@
 #include "ctl.h"
 #include "dma_port.h"

-#define NVM_MIN_SIZE		SZ_32K
-#define NVM_MAX_SIZE		SZ_512K
-#define NVM_DATA_DWORDS		16
-
 /* Keep link controller awake during update */
 #define QUIRK_FORCE_POWER_LINK_CONTROLLER	BIT(0)
 /* Disable CLx if not supported */
diff --git a/drivers/thunderbolt/usb4.c b/drivers/thunderbolt/usb4.c
index b80972cb5b9d..2da43b5fc840 100644
--- a/drivers/thunderbolt/usb4.c
+++ b/drivers/thunderbolt/usb4.c
@@ -15,6 +15,7 @@
 #include "tb.h"

 #define USB4_DATA_RETRIES	3
+#define USB4_DATA_DWORDS	16

 enum usb4_sb_target {
 	USB4_SB_TARGET_ROUTER,
@@ -112,7 +113,7 @@ static int __usb4_switch_op(struct tb_switch *sw, u16 opcode, u32 *metadata,
 {
 	const struct tb_cm_ops *cm_ops = sw->tb->cm_ops;

-	if (tx_dwords > NVM_DATA_DWORDS || rx_dwords > NVM_DATA_DWORDS)
+	if (tx_dwords > USB4_DATA_DWORDS || rx_dwords > USB4_DATA_DWORDS)
 		return -EINVAL;

 	/*
@@ -702,7 +703,7 @@ int usb4_switch_credits_init(struct tb_switch *sw)
 	int max_usb3, min_dp_aux, min_dp_main, max_pcie, max_dma;
 	int ret, length, i, nports;
 	const struct tb_port *port;
-	u32 data[NVM_DATA_DWORDS];
+	u32 data[USB4_DATA_DWORDS];
 	u32 metadata = 0;
 	u8 status = 0;

@@ -1198,7 +1199,7 @@ static int usb4_port_wait_for_bit(struct tb_port *port, u32 offset, u32 bit,

 static int usb4_port_read_data(struct tb_port *port, void *data, size_t dwords)
 {
-	if (dwords > NVM_DATA_DWORDS)
+	if (dwords > USB4_DATA_DWORDS)
 		return -EINVAL;

 	return tb_port_read(port, data, TB_CFG_PORT, port->cap_usb4 + PORT_CS_2,
@@ -1208,7 +1209,7 @@ static int usb4_port_read_data(struct tb_port *port, void *data, size_t dwords)
 static int usb4_port_write_data(struct tb_port *port, const void *data,
 				size_t dwords)
 {
-	if (dwords > NVM_DATA_DWORDS)
+	if (dwords > USB4_DATA_DWORDS)
 		return -EINVAL;

 	return tb_port_write(port, data, TB_CFG_PORT, port->cap_usb4 + PORT_CS_2,
@@ -1844,7 +1845,7 @@ static int usb4_port_retimer_nvm_read_block(void *data, unsigned int dwaddress,
 	int ret;

 	metadata = dwaddress << USB4_NVM_READ_OFFSET_SHIFT;
-	if (dwords < NVM_DATA_DWORDS)
+	if (dwords < USB4_DATA_DWORDS)
 		metadata |= dwords << USB4_NVM_READ_LENGTH_SHIFT;

 	ret = usb4_port_retimer_write(port, index, USB4_SB_METADATA, &metadata,
From patchwork Mon Jun 12 08:21:40 2023
X-Patchwork-Submitter: Mika Westerberg
X-Patchwork-Id: 693239
From: Mika Westerberg
To: linux-usb@vger.kernel.org
Cc: Gil Fine, Yehezkel Bernat, Michael Jamet, Lukas Wunner,
    Andreas Noever, Mika Westerberg
Subject: [PATCH v2 15/20] thunderbolt: Increase NVM_MAX_SIZE to support Intel Barlow Ridge controller
Date: Mon, 12 Jun 2023 11:21:40 +0300
Message-Id: <20230612082145.62218-16-mika.westerberg@linux.intel.com>

From: Gil Fine

The Intel Barlow Ridge discrete USB4 controller has a larger NOR flash,
hence increase NVM_MAX_SIZE to support it.
Signed-off-by: Gil Fine
Signed-off-by: Mika Westerberg
---
 drivers/thunderbolt/nvm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/thunderbolt/nvm.c b/drivers/thunderbolt/nvm.c
index b004f29ad2b6..69fb3b0fa34f 100644
--- a/drivers/thunderbolt/nvm.c
+++ b/drivers/thunderbolt/nvm.c
@@ -13,7 +13,7 @@
 #include "tb.h"

 #define NVM_MIN_SIZE		SZ_32K
-#define NVM_MAX_SIZE		SZ_512K
+#define NVM_MAX_SIZE		SZ_1M
 #define NVM_DATA_DWORDS		16

 /* Intel specific NVM offsets */

From patchwork Mon Jun 12 08:21:41 2023
X-Patchwork-Submitter: Mika Westerberg
X-Patchwork-Id: 693238
From: Mika Westerberg
To: linux-usb@vger.kernel.org
Cc: Gil Fine, Yehezkel Bernat, Michael Jamet, Lukas Wunner,
    Andreas Noever, Mika Westerberg
Subject: [PATCH v2 16/20] thunderbolt: Add support for enhanced uni-directional TMU mode
Date: Mon, 12 Jun 2023 11:21:41 +0300
Message-Id: <20230612082145.62218-17-mika.westerberg@linux.intel.com>

This is a new TMU mode introduced with USB4 v2. This mode is simpler
than the existing ones and allows all CL states as well.
Enable this for all links where the routers on both sides are v2, and
keep the existing functionality for v1 and earlier links. Currently we
only support the MedRes rate; the HiFi rate can be added later if it
turns out to be useful.

Signed-off-by: Mika Westerberg
---
 drivers/thunderbolt/switch.c  |  16 +
 drivers/thunderbolt/tb.c      |  58 +++-
 drivers/thunderbolt/tb.h      |  75 +++--
 drivers/thunderbolt/tb_regs.h |  12 +-
 drivers/thunderbolt/tmu.c     | 595 +++++++++++++++++++++++++++-------
 drivers/thunderbolt/usb4.c    |  31 +-
 6 files changed, 614 insertions(+), 173 deletions(-)

diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index ebe9559c8c79..7ea63bb31714 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -2466,6 +2466,22 @@ int tb_switch_configure(struct tb_switch *sw)
 	return tb_plug_events_active(sw, true);
 }

+/**
+ * tb_switch_configuration_valid() - Set the tunneling configuration to be valid
+ * @sw: Router to configure
+ *
+ * Needs to be called before any tunnels can be setup through the
+ * router. Can be called for any router.
+ *
+ * Returns %0 on success and negative errno otherwise.
+ */
+int tb_switch_configuration_valid(struct tb_switch *sw)
+{
+	if (tb_switch_is_usb4(sw))
+		return usb4_switch_configuration_valid(sw);
+	return 0;
+}
+
 static int tb_switch_set_uuid(struct tb_switch *sw)
 {
 	bool uid = false;
diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index f18cb5a52f0b..ff034975a87e 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -297,11 +297,23 @@ static int tb_increase_switch_tmu_accuracy(struct device *dev, void *data)
 	struct tb_switch *sw;

 	sw = tb_to_switch(dev);
-	if (sw) {
-		tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_HIFI,
-					tb_switch_clx_is_enabled(sw, TB_CL1));
-		if (tb_switch_tmu_enable(sw))
-			tb_sw_warn(sw, "failed to increase TMU rate\n");
+	if (!sw)
+		return 0;
+
+	if (tb_switch_tmu_is_configured(sw, TB_SWITCH_TMU_MODE_LOWRES)) {
+		enum tb_switch_tmu_mode mode;
+		int ret;
+
+		if (tb_switch_clx_is_enabled(sw, TB_CL1))
+			mode = TB_SWITCH_TMU_MODE_HIFI_UNI;
+		else
+			mode = TB_SWITCH_TMU_MODE_HIFI_BI;
+
+		ret = tb_switch_tmu_configure(sw, mode);
+		if (ret)
+			return ret;
+
+		return tb_switch_tmu_enable(sw);
 	}

 	return 0;
@@ -319,6 +331,9 @@ static void tb_increase_tmu_accuracy(struct tb_tunnel *tunnel)
 	 * accuracy of first depth child routers (and the host router)
 	 * to the highest. This is needed for the DP tunneling to work
 	 * but also allows CL0s.
+	 *
+	 * If both routers are v2 then we don't need to do anything as
+	 * they are using enhanced TMU mode that allows all CLx.
 	 */
 	sw = tunnel->tb->root_switch;
 	device_for_each_child(&sw->dev, NULL, tb_increase_switch_tmu_accuracy);
@@ -329,14 +344,22 @@ static int tb_enable_tmu(struct tb_switch *sw)
 	int ret;

 	/*
-	 * If CL1 is enabled then we need to configure the TMU accuracy
-	 * level to normal. Otherwise we keep the TMU running at the
-	 * highest accuracy.
+	 * If both routers at the end of the link are v2 we simply
+	 * enable the enhanced uni-directional mode. That covers all
+	 * the CL states. For v1 and before we need to use the normal
+	 * rate to allow CL1 (when supported). Otherwise we keep the TMU
+	 * running at the highest accuracy.
*/ - if (tb_switch_clx_is_enabled(sw, TB_CL1)) - ret = tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_NORMAL, true); - else - ret = tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_HIFI, false); + ret = tb_switch_tmu_configure(sw, + TB_SWITCH_TMU_MODE_MEDRES_ENHANCED_UNI); + if (ret == -EOPNOTSUPP) { + if (tb_switch_clx_is_enabled(sw, TB_CL1)) + ret = tb_switch_tmu_configure(sw, + TB_SWITCH_TMU_MODE_LOWRES); + else + ret = tb_switch_tmu_configure(sw, + TB_SWITCH_TMU_MODE_HIFI_BI); + } if (ret) return ret; @@ -963,6 +986,12 @@ static void tb_scan_port(struct tb_port *port) if (tb_enable_tmu(sw)) tb_sw_warn(sw, "failed to enable TMU\n"); + /* + * Configuration valid needs to be set after the TMU has been + * enabled for the upstream port of the router so we do it here. + */ + tb_switch_configuration_valid(sw); + /* Scan upstream retimers */ tb_retimer_scan(upstream_port, true); @@ -2086,8 +2115,7 @@ static int tb_start(struct tb *tb) * To support highest CLx state, we set host router's TMU to * Normal mode. */ - tb_switch_tmu_configure(tb->root_switch, TB_SWITCH_TMU_RATE_NORMAL, - false); + tb_switch_tmu_configure(tb->root_switch, TB_SWITCH_TMU_MODE_LOWRES); /* Enable TMU if it is off */ tb_switch_tmu_enable(tb->root_switch); /* Full scan to discover devices added before the driver was loaded. */ @@ -2139,6 +2167,8 @@ static void tb_restore_children(struct tb_switch *sw) if (tb_enable_tmu(sw)) tb_sw_warn(sw, "failed to restore TMU configuration\n"); + tb_switch_configuration_valid(sw); + tb_switch_for_each_port(sw, port) { if (!tb_port_has_remote(port) && !port->xdomain) continue; diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h index 088033726648..68ab9b3c9580 100644 --- a/drivers/thunderbolt/tb.h +++ b/drivers/thunderbolt/tb.h @@ -73,44 +73,37 @@ enum tb_nvm_write_ops { #define USB4_SWITCH_MAX_DEPTH 5 /** - * enum tb_switch_tmu_rate - TMU refresh rate - * @TB_SWITCH_TMU_RATE_OFF: %0 (Disable Time Sync handshake) - * @TB_SWITCH_TMU_RATE_HIFI: %16 us time interval between successive - * transmission of the Delay Request TSNOS - * (Time Sync Notification Ordered Set) on a Link - * @TB_SWITCH_TMU_RATE_NORMAL: %1 ms time interval between successive - * transmission of the Delay Request TSNOS on - * a Link + * enum tb_switch_tmu_mode - TMU mode + * @TB_SWITCH_TMU_MODE_OFF: TMU is off + * @TB_SWITCH_TMU_MODE_LOWRES: Uni-directional, normal mode + * @TB_SWITCH_TMU_MODE_HIFI_UNI: Uni-directional, HiFi mode + * @TB_SWITCH_TMU_MODE_HIFI_BI: Bi-directional, HiFi mode + * @TB_SWITCH_TMU_MODE_MEDRES_ENHANCED_UNI: Enhanced Uni-directional, MedRes mode + * + * Ordering is based on TMU accuracy level (highest last). */ -enum tb_switch_tmu_rate { - TB_SWITCH_TMU_RATE_OFF = 0, - TB_SWITCH_TMU_RATE_HIFI = 16, - TB_SWITCH_TMU_RATE_NORMAL = 1000, +enum tb_switch_tmu_mode { + TB_SWITCH_TMU_MODE_OFF, + TB_SWITCH_TMU_MODE_LOWRES, + TB_SWITCH_TMU_MODE_HIFI_UNI, + TB_SWITCH_TMU_MODE_HIFI_BI, + TB_SWITCH_TMU_MODE_MEDRES_ENHANCED_UNI, }; /** - * struct tb_switch_tmu - Structure holding switch TMU configuration + * struct tb_switch_tmu - Structure holding router TMU configuration * @cap: Offset to the TMU capability (%0 if not found) * @has_ucap: Does the switch support uni-directional mode - * @rate: TMU refresh rate related to upstream switch. In case of root - * switch this holds the domain rate. Reflects the HW setting. - * @unidirectional: Is the TMU in uni-directional or bi-directional mode - * related to upstream switch. Don't care for root switch. - * Reflects the HW setting. 
- * @unidirectional_request: Is the new TMU mode: uni-directional or bi-directional - * that is requested to be set. Related to upstream switch. - * Don't care for root switch. - * @rate_request: TMU new refresh rate related to upstream switch that is - * requested to be set. In case of root switch, this holds - * the new domain rate that is requested to be set. + * @mode: TMU mode related to the upstream router. Reflects the HW + * setting. Don't care for host router. + * @mode_request: TMU mode requested to set. Related to upstream router. + * Don't care for host router. */ struct tb_switch_tmu { int cap; bool has_ucap; - enum tb_switch_tmu_rate rate; - bool unidirectional; - bool unidirectional_request; - enum tb_switch_tmu_rate rate_request; + enum tb_switch_tmu_mode mode; + enum tb_switch_tmu_mode mode_request; }; /** @@ -801,6 +794,7 @@ struct tb_switch *tb_switch_alloc(struct tb *tb, struct device *parent, struct tb_switch *tb_switch_alloc_safe_mode(struct tb *tb, struct device *parent, u64 route); int tb_switch_configure(struct tb_switch *sw); +int tb_switch_configuration_valid(struct tb_switch *sw); int tb_switch_add(struct tb_switch *sw); void tb_switch_remove(struct tb_switch *sw); void tb_switch_suspend(struct tb_switch *sw, bool runtime); @@ -975,19 +969,33 @@ int tb_switch_tmu_init(struct tb_switch *sw); int tb_switch_tmu_post_time(struct tb_switch *sw); int tb_switch_tmu_disable(struct tb_switch *sw); int tb_switch_tmu_enable(struct tb_switch *sw); -int tb_switch_tmu_configure(struct tb_switch *sw, enum tb_switch_tmu_rate rate, - bool unidirectional); +int tb_switch_tmu_configure(struct tb_switch *sw, enum tb_switch_tmu_mode mode); + +/** + * tb_switch_tmu_is_configured() - Is given TMU mode configured + * @sw: Router whose mode to check + * @mode: Mode to check + * + * Checks if given router TMU mode is configured to @mode. Note the + * router TMU might not be enabled to this mode. + */ +static inline bool tb_switch_tmu_is_configured(const struct tb_switch *sw, + enum tb_switch_tmu_mode mode) +{ + return sw->tmu.mode_request == mode; +} + /** * tb_switch_tmu_is_enabled() - Checks if the specified TMU mode is enabled * @sw: Router whose TMU mode to check * * Return true if hardware TMU configuration matches the requested - * configuration. + * configuration (and is not %TB_SWITCH_TMU_MODE_OFF). 
*/ static inline bool tb_switch_tmu_is_enabled(const struct tb_switch *sw) { - return sw->tmu.rate == sw->tmu.rate_request && - sw->tmu.unidirectional == sw->tmu.unidirectional_request; + return sw->tmu.mode != TB_SWITCH_TMU_MODE_OFF && + sw->tmu.mode == sw->tmu.mode_request; } bool tb_port_clx_is_enabled(struct tb_port *port, unsigned int clx); @@ -1211,6 +1219,7 @@ static inline bool tb_switch_is_usb4(const struct tb_switch *sw) } int usb4_switch_setup(struct tb_switch *sw); +int usb4_switch_configuration_valid(struct tb_switch *sw); int usb4_switch_read_uid(struct tb_switch *sw, u64 *uid); int usb4_switch_drom_read(struct tb_switch *sw, unsigned int address, void *buf, size_t size); diff --git a/drivers/thunderbolt/tb_regs.h b/drivers/thunderbolt/tb_regs.h index 549cc79c7313..c95fc7fe7adf 100644 --- a/drivers/thunderbolt/tb_regs.h +++ b/drivers/thunderbolt/tb_regs.h @@ -252,11 +252,13 @@ enum usb4_switch_op { #define TMU_RTR_CS_3_LOCAL_TIME_NS_MASK GENMASK(15, 0) #define TMU_RTR_CS_3_TS_PACKET_INTERVAL_MASK GENMASK(31, 16) #define TMU_RTR_CS_3_TS_PACKET_INTERVAL_SHIFT 16 -#define TMU_RTR_CS_15 0xf +#define TMU_RTR_CS_15 0x0f #define TMU_RTR_CS_15_FREQ_AVG_MASK GENMASK(5, 0) #define TMU_RTR_CS_15_DELAY_AVG_MASK GENMASK(11, 6) #define TMU_RTR_CS_15_OFFSET_AVG_MASK GENMASK(17, 12) #define TMU_RTR_CS_15_ERROR_AVG_MASK GENMASK(23, 18) +#define TMU_RTR_CS_18 0x12 +#define TMU_RTR_CS_18_DELTA_AVG_CONST_MASK GENMASK(23, 16) #define TMU_RTR_CS_22 0x16 #define TMU_RTR_CS_24 0x18 #define TMU_RTR_CS_25 0x19 @@ -322,6 +324,14 @@ struct tb_regs_port_header { #define TMU_ADP_CS_3_UDM BIT(29) #define TMU_ADP_CS_6 0x06 #define TMU_ADP_CS_6_DTS BIT(1) +#define TMU_ADP_CS_8 0x08 +#define TMU_ADP_CS_8_REPL_TIMEOUT_MASK GENMASK(14, 0) +#define TMU_ADP_CS_8_EUDM BIT(15) +#define TMU_ADP_CS_8_REPL_THRESHOLD_MASK GENMASK(25, 16) +#define TMU_ADP_CS_9 0x09 +#define TMU_ADP_CS_9_REPL_N_MASK GENMASK(7, 0) +#define TMU_ADP_CS_9_DIRSWITCH_N_MASK GENMASK(15, 8) +#define TMU_ADP_CS_9_ADP_TS_INTERVAL_MASK GENMASK(31, 16) /* Lane adapter registers */ #define LANE_ADP_CS_0 0x00 diff --git a/drivers/thunderbolt/tmu.c b/drivers/thunderbolt/tmu.c index c926fb71c43d..1269f417515b 100644 --- a/drivers/thunderbolt/tmu.c +++ b/drivers/thunderbolt/tmu.c @@ -11,23 +11,63 @@ #include "tb.h" +static const unsigned int tmu_rates[] = { + [TB_SWITCH_TMU_MODE_OFF] = 0, + [TB_SWITCH_TMU_MODE_LOWRES] = 1000, + [TB_SWITCH_TMU_MODE_HIFI_UNI] = 16, + [TB_SWITCH_TMU_MODE_HIFI_BI] = 16, + [TB_SWITCH_TMU_MODE_MEDRES_ENHANCED_UNI] = 16, +}; + +const struct { + unsigned int freq_meas_window; + unsigned int avg_const; + unsigned int delta_avg_const; + unsigned int repl_timeout; + unsigned int repl_threshold; + unsigned int repl_n; + unsigned int dirswitch_n; +} tmu_params[] = { + [TB_SWITCH_TMU_MODE_OFF] = { }, + [TB_SWITCH_TMU_MODE_LOWRES] = { 30, 4, }, + [TB_SWITCH_TMU_MODE_HIFI_UNI] = { 800, 8, }, + [TB_SWITCH_TMU_MODE_HIFI_BI] = { 800, 8, }, + [TB_SWITCH_TMU_MODE_MEDRES_ENHANCED_UNI] = { + 800, 4, 0, 3125, 25, 128, 255, + }, +}; + +static const char *tmu_mode_name(enum tb_switch_tmu_mode mode) +{ + switch (mode) { + case TB_SWITCH_TMU_MODE_OFF: + return "off"; + case TB_SWITCH_TMU_MODE_LOWRES: + return "uni-directional, LowRes"; + case TB_SWITCH_TMU_MODE_HIFI_UNI: + return "uni-directional, HiFi"; + case TB_SWITCH_TMU_MODE_HIFI_BI: + return "bi-directional, HiFi"; + case TB_SWITCH_TMU_MODE_MEDRES_ENHANCED_UNI: + return "enhanced uni-directional, MedRes"; + default: + return "unknown"; + } +} + +static bool 
tb_switch_tmu_enhanced_is_supported(const struct tb_switch *sw) +{ + return usb4_switch_version(sw) > 1; +} + static int tb_switch_set_tmu_mode_params(struct tb_switch *sw, - enum tb_switch_tmu_rate rate) + enum tb_switch_tmu_mode mode) { - u32 freq_meas_wind[2] = { 30, 800 }; - u32 avg_const[2] = { 4, 8 }; u32 freq, avg, val; int ret; - if (rate == TB_SWITCH_TMU_RATE_NORMAL) { - freq = freq_meas_wind[0]; - avg = avg_const[0]; - } else if (rate == TB_SWITCH_TMU_RATE_HIFI) { - freq = freq_meas_wind[1]; - avg = avg_const[1]; - } else { - return 0; - } + freq = tmu_params[mode].freq_meas_window; + avg = tmu_params[mode].avg_const; ret = tb_sw_read(sw, &val, TB_CFG_SWITCH, sw->tmu.cap + TMU_RTR_CS_0, 1); @@ -56,37 +96,30 @@ static int tb_switch_set_tmu_mode_params(struct tb_switch *sw, FIELD_PREP(TMU_RTR_CS_15_OFFSET_AVG_MASK, avg) | FIELD_PREP(TMU_RTR_CS_15_ERROR_AVG_MASK, avg); - return tb_sw_write(sw, &val, TB_CFG_SWITCH, - sw->tmu.cap + TMU_RTR_CS_15, 1); -} - -static const char *tb_switch_tmu_mode_name(const struct tb_switch *sw) -{ - bool root_switch = !tb_route(sw); + ret = tb_sw_write(sw, &val, TB_CFG_SWITCH, + sw->tmu.cap + TMU_RTR_CS_15, 1); + if (ret) + return ret; - switch (sw->tmu.rate) { - case TB_SWITCH_TMU_RATE_OFF: - return "off"; + if (tb_switch_tmu_enhanced_is_supported(sw)) { + u32 delta_avg = tmu_params[mode].delta_avg_const; - case TB_SWITCH_TMU_RATE_HIFI: - /* Root switch does not have upstream directionality */ - if (root_switch) - return "HiFi"; - if (sw->tmu.unidirectional) - return "uni-directional, HiFi"; - return "bi-directional, HiFi"; + ret = tb_sw_read(sw, &val, TB_CFG_SWITCH, + sw->tmu.cap + TMU_RTR_CS_18, 1); + if (ret) + return ret; - case TB_SWITCH_TMU_RATE_NORMAL: - if (root_switch) - return "normal"; - return "uni-directional, normal"; + val &= ~TMU_RTR_CS_18_DELTA_AVG_CONST_MASK; + val |= FIELD_PREP(TMU_RTR_CS_18_DELTA_AVG_CONST_MASK, delta_avg); - default: - return "unknown"; + ret = tb_sw_write(sw, &val, TB_CFG_SWITCH, + sw->tmu.cap + TMU_RTR_CS_18, 1); } + + return ret; } -static bool tb_switch_tmu_ucap_supported(struct tb_switch *sw) +static bool tb_switch_tmu_ucap_is_supported(struct tb_switch *sw) { int ret; u32 val; @@ -182,6 +215,103 @@ static bool tb_port_tmu_is_unidirectional(struct tb_port *port) return val & TMU_ADP_CS_3_UDM; } +static bool tb_port_tmu_is_enhanced(struct tb_port *port) +{ + int ret; + u32 val; + + ret = tb_port_read(port, &val, TB_CFG_PORT, + port->cap_tmu + TMU_ADP_CS_8, 1); + if (ret) + return false; + + return val & TMU_ADP_CS_8_EUDM; +} + +/* Can be called to non-v2 lane adapters too */ +static int tb_port_tmu_enhanced_enable(struct tb_port *port, bool enable) +{ + int ret; + u32 val; + + if (!tb_switch_tmu_enhanced_is_supported(port->sw)) + return 0; + + ret = tb_port_read(port, &val, TB_CFG_PORT, + port->cap_tmu + TMU_ADP_CS_8, 1); + if (ret) + return ret; + + if (enable) + val |= TMU_ADP_CS_8_EUDM; + else + val &= ~TMU_ADP_CS_8_EUDM; + + return tb_port_write(port, &val, TB_CFG_PORT, + port->cap_tmu + TMU_ADP_CS_8, 1); +} + +static int tb_port_set_tmu_mode_params(struct tb_port *port, + enum tb_switch_tmu_mode mode) +{ + u32 repl_timeout, repl_threshold, repl_n, dirswitch_n, val; + int ret; + + repl_timeout = tmu_params[mode].repl_timeout; + repl_threshold = tmu_params[mode].repl_threshold; + repl_n = tmu_params[mode].repl_n; + dirswitch_n = tmu_params[mode].dirswitch_n; + + ret = tb_port_read(port, &val, TB_CFG_PORT, + port->cap_tmu + TMU_ADP_CS_8, 1); + if (ret) + return ret; + + val &= ~TMU_ADP_CS_8_REPL_TIMEOUT_MASK; 
+	val &= ~TMU_ADP_CS_8_REPL_THRESHOLD_MASK;
+	val |= FIELD_PREP(TMU_ADP_CS_8_REPL_TIMEOUT_MASK, repl_timeout);
+	val |= FIELD_PREP(TMU_ADP_CS_8_REPL_THRESHOLD_MASK, repl_threshold);
+
+	ret = tb_port_write(port, &val, TB_CFG_PORT,
+			    port->cap_tmu + TMU_ADP_CS_8, 1);
+	if (ret)
+		return ret;
+
+	ret = tb_port_read(port, &val, TB_CFG_PORT,
+			   port->cap_tmu + TMU_ADP_CS_9, 1);
+	if (ret)
+		return ret;
+
+	val &= ~TMU_ADP_CS_9_REPL_N_MASK;
+	val &= ~TMU_ADP_CS_9_DIRSWITCH_N_MASK;
+	val |= FIELD_PREP(TMU_ADP_CS_9_REPL_N_MASK, repl_n);
+	val |= FIELD_PREP(TMU_ADP_CS_9_DIRSWITCH_N_MASK, dirswitch_n);
+
+	return tb_port_write(port, &val, TB_CFG_PORT,
+			     port->cap_tmu + TMU_ADP_CS_9, 1);
+}
+
+/* Can be called for non-v2 lane adapters too */
+static int tb_port_tmu_rate_write(struct tb_port *port, int rate)
+{
+	int ret;
+	u32 val;
+
+	if (!tb_switch_tmu_enhanced_is_supported(port->sw))
+		return 0;
+
+	ret = tb_port_read(port, &val, TB_CFG_PORT,
+			   port->cap_tmu + TMU_ADP_CS_9, 1);
+	if (ret)
+		return ret;
+
+	val &= ~TMU_ADP_CS_9_ADP_TS_INTERVAL_MASK;
+	val |= FIELD_PREP(TMU_ADP_CS_9_ADP_TS_INTERVAL_MASK, rate);
+
+	return tb_port_write(port, &val, TB_CFG_PORT,
+			     port->cap_tmu + TMU_ADP_CS_9, 1);
+}
+
 static int tb_port_tmu_time_sync(struct tb_port *port, bool time_sync)
 {
 	u32 val = time_sync ? TMU_ADP_CS_6_DTS : 0;
@@ -224,6 +354,50 @@ static int tb_switch_tmu_set_time_disruption(struct tb_switch *sw, bool set)
 	return tb_sw_write(sw, &val, TB_CFG_SWITCH, offset, 1);
 }

+static int tmu_mode_init(struct tb_switch *sw)
+{
+	bool enhanced, ucap;
+	int ret, rate;
+
+	ucap = tb_switch_tmu_ucap_is_supported(sw);
+	if (ucap)
+		tb_sw_dbg(sw, "TMU: supports uni-directional mode\n");
+	enhanced = tb_switch_tmu_enhanced_is_supported(sw);
+	if (enhanced)
+		tb_sw_dbg(sw, "TMU: supports enhanced uni-directional mode\n");
+
+	ret = tb_switch_tmu_rate_read(sw);
+	if (ret < 0)
+		return ret;
+	rate = ret;
+
+	/* Off by default */
+	sw->tmu.mode = TB_SWITCH_TMU_MODE_OFF;
+
+	if (tb_route(sw)) {
+		struct tb_port *up = tb_upstream_port(sw);
+
+		if (enhanced && tb_port_tmu_is_enhanced(up)) {
+			sw->tmu.mode = TB_SWITCH_TMU_MODE_MEDRES_ENHANCED_UNI;
+		} else if (ucap && tb_port_tmu_is_unidirectional(up)) {
+			if (tmu_rates[TB_SWITCH_TMU_MODE_LOWRES] == rate)
+				sw->tmu.mode = TB_SWITCH_TMU_MODE_LOWRES;
+			else if (tmu_rates[TB_SWITCH_TMU_MODE_HIFI_UNI] == rate)
+				sw->tmu.mode = TB_SWITCH_TMU_MODE_HIFI_UNI;
+		} else if (rate) {
+			sw->tmu.mode = TB_SWITCH_TMU_MODE_HIFI_BI;
+		}
+	} else if (rate) {
+		sw->tmu.mode = TB_SWITCH_TMU_MODE_HIFI_BI;
+	}
+
+	/* Update the initial request to match the current mode */
+	sw->tmu.mode_request = sw->tmu.mode;
+	sw->tmu.has_ucap = ucap;
+
+	return 0;
+}
+
 /**
  * tb_switch_tmu_init() - Initialize switch TMU structures
  * @sw: Switch to initialized
@@ -252,27 +426,11 @@ int tb_switch_tmu_init(struct tb_switch *sw)
 		port->cap_tmu = cap;
 	}

-	ret = tb_switch_tmu_rate_read(sw);
-	if (ret < 0)
+	ret = tmu_mode_init(sw);
+	if (ret)
 		return ret;

-	sw->tmu.rate = ret;
-
-	sw->tmu.has_ucap = tb_switch_tmu_ucap_supported(sw);
-	if (sw->tmu.has_ucap) {
-		tb_sw_dbg(sw, "TMU: supports uni-directional mode\n");
-
-		if (tb_route(sw)) {
-			struct tb_port *up = tb_upstream_port(sw);
-
-			sw->tmu.unidirectional =
-				tb_port_tmu_is_unidirectional(up);
-		}
-	} else {
-		sw->tmu.unidirectional = false;
-	}
-
-	tb_sw_dbg(sw, "TMU: current mode: %s\n", tb_switch_tmu_mode_name(sw));
+	tb_sw_dbg(sw, "TMU: current mode: %s\n", tmu_mode_name(sw->tmu.mode));

 	return 0;
 }
@@ -375,6 +533,23 @@ int tb_switch_tmu_post_time(struct tb_switch
*sw) return ret; } +static int disable_enhanced(struct tb_port *up, struct tb_port *down) +{ + int ret; + + /* + * Router may already have been disconnected so ignore errors on the + * upstream port. + */ + tb_port_tmu_rate_write(up, 0); + tb_port_tmu_enhanced_enable(up, false); + + ret = tb_port_tmu_rate_write(down, 0); + if (ret) + return ret; + return tb_port_tmu_enhanced_enable(down, false); +} + /** * tb_switch_tmu_disable() - Disable TMU of a switch * @sw: Switch whose TMU to disable @@ -384,11 +559,10 @@ int tb_switch_tmu_post_time(struct tb_switch *sw) int tb_switch_tmu_disable(struct tb_switch *sw) { /* Already disabled? */ - if (sw->tmu.rate == TB_SWITCH_TMU_RATE_OFF) + if (sw->tmu.mode == TB_SWITCH_TMU_MODE_OFF) return 0; if (tb_route(sw)) { - bool unidirectional = sw->tmu.unidirectional; struct tb_port *down, *up; int ret; @@ -405,33 +579,46 @@ int tb_switch_tmu_disable(struct tb_switch *sw) * uni-directional mode and we don't want to change its TMU * mode. */ - tb_switch_tmu_rate_write(sw, TB_SWITCH_TMU_RATE_OFF); + tb_switch_tmu_rate_write(sw, tmu_rates[TB_SWITCH_TMU_MODE_OFF]); tb_port_tmu_time_sync_disable(up); ret = tb_port_tmu_time_sync_disable(down); if (ret) return ret; - if (unidirectional) { + switch (sw->tmu.mode) { + case TB_SWITCH_TMU_MODE_LOWRES: + case TB_SWITCH_TMU_MODE_HIFI_UNI: /* The switch may be unplugged so ignore any errors */ tb_port_tmu_unidirectional_disable(up); ret = tb_port_tmu_unidirectional_disable(down); if (ret) return ret; + break; + + case TB_SWITCH_TMU_MODE_MEDRES_ENHANCED_UNI: + ret = disable_enhanced(up, down); + if (ret) + return ret; + break; + + default: + break; } } else { - tb_switch_tmu_rate_write(sw, TB_SWITCH_TMU_RATE_OFF); + tb_switch_tmu_rate_write(sw, tmu_rates[TB_SWITCH_TMU_MODE_OFF]); } - sw->tmu.unidirectional = false; - sw->tmu.rate = TB_SWITCH_TMU_RATE_OFF; + sw->tmu.mode = TB_SWITCH_TMU_MODE_OFF; tb_sw_dbg(sw, "TMU: disabled\n"); return 0; } -static void tb_switch_tmu_off(struct tb_switch *sw, bool unidirectional) +/* Called only when there is a failure enabling the requested mode */ +static void tb_switch_tmu_off(struct tb_switch *sw) { + unsigned int rate = tmu_rates[TB_SWITCH_TMU_MODE_OFF]; struct tb_port *down, *up; down = tb_switch_downstream_port(sw); @@ -445,20 +632,30 @@ static void tb_switch_tmu_off(struct tb_switch *sw, bool unidirectional) */ tb_port_tmu_time_sync_disable(down); tb_port_tmu_time_sync_disable(up); - if (unidirectional) - tb_switch_tmu_rate_write(tb_switch_parent(sw), - TB_SWITCH_TMU_RATE_OFF); - else - tb_switch_tmu_rate_write(sw, TB_SWITCH_TMU_RATE_OFF); - tb_switch_set_tmu_mode_params(sw, sw->tmu.rate); + switch (sw->tmu.mode_request) { + case TB_SWITCH_TMU_MODE_LOWRES: + case TB_SWITCH_TMU_MODE_HIFI_UNI: + tb_switch_tmu_rate_write(tb_switch_parent(sw), rate); + break; + case TB_SWITCH_TMU_MODE_MEDRES_ENHANCED_UNI: + disable_enhanced(up, down); + break; + default: + break; + } + + /* Always set the rate to 0 */ + tb_switch_tmu_rate_write(sw, rate); + + tb_switch_set_tmu_mode_params(sw, sw->tmu.mode); tb_port_tmu_unidirectional_disable(down); tb_port_tmu_unidirectional_disable(up); } /* * This function is called when the previous TMU mode was - * TB_SWITCH_TMU_RATE_OFF. + * TB_SWITCH_TMU_MODE_OFF.
*/ static int tb_switch_tmu_enable_bidirectional(struct tb_switch *sw) { @@ -476,7 +673,7 @@ static int tb_switch_tmu_enable_bidirectional(struct tb_switch *sw) if (ret) goto out; - ret = tb_switch_tmu_rate_write(sw, TB_SWITCH_TMU_RATE_HIFI); + ret = tb_switch_tmu_rate_write(sw, tmu_rates[TB_SWITCH_TMU_MODE_HIFI_BI]); if (ret) goto out; @@ -491,7 +688,7 @@ static int tb_switch_tmu_enable_bidirectional(struct tb_switch *sw) return 0; out: - tb_switch_tmu_off(sw, false); + tb_switch_tmu_off(sw); return ret; } @@ -522,7 +719,7 @@ static int tb_switch_tmu_disable_objections(struct tb_switch *sw) /* * This function is called when the previous TMU mode was - * TB_SWITCH_TMU_RATE_OFF. + * TB_SWITCH_TMU_MODE_OFF. */ static int tb_switch_tmu_enable_unidirectional(struct tb_switch *sw) { @@ -532,11 +729,11 @@ static int tb_switch_tmu_enable_unidirectional(struct tb_switch *sw) up = tb_upstream_port(sw); down = tb_switch_downstream_port(sw); ret = tb_switch_tmu_rate_write(tb_switch_parent(sw), - sw->tmu.rate_request); + tmu_rates[sw->tmu.mode_request]); if (ret) return ret; - ret = tb_switch_set_tmu_mode_params(sw, sw->tmu.rate_request); + ret = tb_switch_set_tmu_mode_params(sw, sw->tmu.mode_request); if (ret) return ret; @@ -559,12 +756,62 @@ static int tb_switch_tmu_enable_unidirectional(struct tb_switch *sw) return 0; out: - tb_switch_tmu_off(sw, true); + tb_switch_tmu_off(sw); + return ret; +} + +/* + * This function is called when the previous TMU mode was + * TB_SWITCH_TMU_MODE_OFF. + */ +static int tb_switch_tmu_enable_enhanced(struct tb_switch *sw) +{ + unsigned int rate = tmu_rates[sw->tmu.mode_request]; + struct tb_port *up, *down; + int ret; + + /* Router-specific parameters first */ + ret = tb_switch_set_tmu_mode_params(sw, sw->tmu.mode_request); + if (ret) + return ret; + + up = tb_upstream_port(sw); + down = tb_switch_downstream_port(sw); + + ret = tb_port_set_tmu_mode_params(up, sw->tmu.mode_request); + if (ret) + goto out; + + ret = tb_port_tmu_rate_write(up, rate); + if (ret) + goto out; + + ret = tb_port_tmu_enhanced_enable(up, true); + if (ret) + goto out; + + ret = tb_port_set_tmu_mode_params(down, sw->tmu.mode_request); + if (ret) + goto out; + + ret = tb_port_tmu_rate_write(down, rate); + if (ret) + goto out; + + ret = tb_port_tmu_enhanced_enable(down, true); + if (ret) + goto out; + + return 0; + +out: + tb_switch_tmu_off(sw); return ret; } static void tb_switch_tmu_change_mode_prev(struct tb_switch *sw) { + unsigned int rate = tmu_rates[sw->tmu.mode]; struct tb_port *down, *up; down = tb_switch_downstream_port(sw); @@ -575,42 +822,97 @@ static void tb_switch_tmu_change_mode_prev(struct tb_switch *sw) * In case of additional failures in the functions below, * ignore them since the caller shall already report a failure.
*/ - tb_port_tmu_set_unidirectional(down, sw->tmu.unidirectional); - if (sw->tmu.unidirectional_request) - tb_switch_tmu_rate_write(tb_switch_parent(sw), sw->tmu.rate); - else - tb_switch_tmu_rate_write(sw, sw->tmu.rate); + switch (sw->tmu.mode) { + case TB_SWITCH_TMU_MODE_LOWRES: + case TB_SWITCH_TMU_MODE_HIFI_UNI: + tb_port_tmu_set_unidirectional(down, true); + tb_switch_tmu_rate_write(tb_switch_parent(sw), rate); + break; + + case TB_SWITCH_TMU_MODE_HIFI_BI: + tb_port_tmu_set_unidirectional(down, false); + tb_switch_tmu_rate_write(sw, rate); + break; - tb_switch_set_tmu_mode_params(sw, sw->tmu.rate); - tb_port_tmu_set_unidirectional(up, sw->tmu.unidirectional); + default: + break; + } + + tb_switch_set_tmu_mode_params(sw, sw->tmu.mode); + + switch (sw->tmu.mode) { + case TB_SWITCH_TMU_MODE_LOWRES: + case TB_SWITCH_TMU_MODE_HIFI_UNI: + tb_port_tmu_set_unidirectional(up, true); + break; + + case TB_SWITCH_TMU_MODE_HIFI_BI: + tb_port_tmu_set_unidirectional(up, false); + break; + + default: + break; + } } static int tb_switch_tmu_change_mode(struct tb_switch *sw) { + unsigned int rate = tmu_rates[sw->tmu.mode_request]; struct tb_port *up, *down; int ret; up = tb_upstream_port(sw); down = tb_switch_downstream_port(sw); - ret = tb_port_tmu_set_unidirectional(down, sw->tmu.unidirectional_request); - if (ret) - goto out; - if (sw->tmu.unidirectional_request) - ret = tb_switch_tmu_rate_write(tb_switch_parent(sw), - sw->tmu.rate_request); - else - ret = tb_switch_tmu_rate_write(sw, sw->tmu.rate_request); - if (ret) - return ret; + /* Program the upstream router downstream-facing lane adapter */ + switch (sw->tmu.mode_request) { + case TB_SWITCH_TMU_MODE_LOWRES: + case TB_SWITCH_TMU_MODE_HIFI_UNI: + ret = tb_port_tmu_set_unidirectional(down, true); + if (ret) + goto out; + ret = tb_switch_tmu_rate_write(tb_switch_parent(sw), rate); + if (ret) + goto out; + break; + + case TB_SWITCH_TMU_MODE_HIFI_BI: + ret = tb_port_tmu_set_unidirectional(down, false); + if (ret) + goto out; + ret = tb_switch_tmu_rate_write(sw, rate); + if (ret) + goto out; + break; - ret = tb_switch_set_tmu_mode_params(sw, sw->tmu.rate_request); + default: + /* Not allowed to change modes other than the above */ + return -EINVAL; + } + + ret = tb_switch_set_tmu_mode_params(sw, sw->tmu.mode_request); if (ret) return ret; - ret = tb_port_tmu_set_unidirectional(up, sw->tmu.unidirectional_request); - if (ret) - goto out; + /* Program the new mode and the downstream router lane adapter */ + switch (sw->tmu.mode_request) { + case TB_SWITCH_TMU_MODE_LOWRES: + case TB_SWITCH_TMU_MODE_HIFI_UNI: + ret = tb_port_tmu_set_unidirectional(up, true); + if (ret) + goto out; + break; + + case TB_SWITCH_TMU_MODE_HIFI_BI: + ret = tb_port_tmu_set_unidirectional(up, false); + if (ret) + goto out; + break; + + default: + /* Not allowed to change modes other than the above */ + return -EINVAL; + } ret = tb_port_tmu_time_sync_enable(down); if (ret) @@ -637,13 +939,14 @@ static int tb_switch_tmu_change_mode(struct tb_switch *sw) */ int tb_switch_tmu_enable(struct tb_switch *sw) { - bool unidirectional = sw->tmu.unidirectional_request; int ret; if (tb_switch_tmu_is_enabled(sw)) return 0; - if (tb_switch_is_titan_ridge(sw) && unidirectional) { + if (tb_switch_is_titan_ridge(sw) && + (sw->tmu.mode_request == TB_SWITCH_TMU_MODE_LOWRES || + sw->tmu.mode_request == TB_SWITCH_TMU_MODE_HIFI_UNI)) { ret = tb_switch_tmu_disable_objections(sw); if (ret) return ret; @@ -659,19 +962,30 @@ int tb_switch_tmu_enable(struct tb_switch *sw) * HiFi-Uni/HiFi-BiDir/Normal-Uni
or from Normal-Uni to * HiFi-Uni. */ - if (sw->tmu.rate == TB_SWITCH_TMU_RATE_OFF) { - if (unidirectional) + if (sw->tmu.mode == TB_SWITCH_TMU_MODE_OFF) { + switch (sw->tmu.mode_request) { + case TB_SWITCH_TMU_MODE_LOWRES: + case TB_SWITCH_TMU_MODE_HIFI_UNI: ret = tb_switch_tmu_enable_unidirectional(sw); - else + break; + + case TB_SWITCH_TMU_MODE_HIFI_BI: ret = tb_switch_tmu_enable_bidirectional(sw); - if (ret) - return ret; - } else if (sw->tmu.rate == TB_SWITCH_TMU_RATE_NORMAL) { + break; + case TB_SWITCH_TMU_MODE_MEDRES_ENHANCED_UNI: + ret = tb_switch_tmu_enable_enhanced(sw); + break; + default: + ret = -EINVAL; + break; + } + } else if (sw->tmu.mode == TB_SWITCH_TMU_MODE_LOWRES || + sw->tmu.mode == TB_SWITCH_TMU_MODE_HIFI_UNI || + sw->tmu.mode == TB_SWITCH_TMU_MODE_HIFI_BI) { ret = tb_switch_tmu_change_mode(sw); - if (ret) - return ret; + } else { + ret = -EINVAL; } - sw->tmu.unidirectional = unidirectional; } else { /* * Host router port configurations are written as @@ -679,35 +993,68 @@ int tb_switch_tmu_enable(struct tb_switch *sw) * of the child node - see above. * Here only the host router's rate configuration is written. */ - ret = tb_switch_tmu_rate_write(sw, sw->tmu.rate_request); - if (ret) - return ret; + ret = tb_switch_tmu_rate_write(sw, tmu_rates[sw->tmu.mode_request]); } - sw->tmu.rate = sw->tmu.rate_request; + if (ret) { + tb_sw_warn(sw, "TMU: failed to enable mode %s: %d\n", + tmu_mode_name(sw->tmu.mode_request), ret); + } else { + sw->tmu.mode = sw->tmu.mode_request; + tb_sw_dbg(sw, "TMU: mode set to: %s\n", tmu_mode_name(sw->tmu.mode)); + } - tb_sw_dbg(sw, "TMU: mode set to: %s\n", tb_switch_tmu_mode_name(sw)); return tb_switch_tmu_set_time_disruption(sw, false); } /** - * tb_switch_tmu_configure() - Configure the TMU rate and directionality + * tb_switch_tmu_configure() - Configure the TMU mode * @sw: Router whose mode to change - * @rate: Rate to configure Off/Normal/HiFi - * @unidirectional: If uni-directional (bi-directional otherwise) + * @mode: Mode to configure * - * Selects the rate of the TMU and directionality (uni-directional or - * bi-directional). Must be called before tb_switch_tmu_enable(). + * Selects the TMU mode that is enabled when tb_switch_tmu_enable() is + * next called. * - * Returns %0 in success and negative errno otherwise. + * Returns %0 in case of success and negative errno otherwise. Specifically + * returns %-EOPNOTSUPP if the requested mode is not possible (not + * supported by the router and/or topology).
*/ -int tb_switch_tmu_configure(struct tb_switch *sw, enum tb_switch_tmu_rate rate, - bool unidirectional) +int tb_switch_tmu_configure(struct tb_switch *sw, enum tb_switch_tmu_mode mode) { - if (unidirectional && !sw->tmu.has_ucap) + switch (mode) { + case TB_SWITCH_TMU_MODE_OFF: + break; + + case TB_SWITCH_TMU_MODE_LOWRES: + case TB_SWITCH_TMU_MODE_HIFI_UNI: + if (!sw->tmu.has_ucap) + return -EOPNOTSUPP; + break; + + case TB_SWITCH_TMU_MODE_HIFI_BI: + break; + + case TB_SWITCH_TMU_MODE_MEDRES_ENHANCED_UNI: { + const struct tb_switch *parent_sw = tb_switch_parent(sw); + + if (!parent_sw || !tb_switch_tmu_enhanced_is_supported(parent_sw)) + return -EOPNOTSUPP; + if (!tb_switch_tmu_enhanced_is_supported(sw)) + return -EOPNOTSUPP; + + break; + } + + default: + tb_sw_warn(sw, "TMU: unsupported mode %u\n", mode); return -EINVAL; + } + + if (sw->tmu.mode_request != mode) { + tb_sw_dbg(sw, "TMU: mode change %s -> %s requested\n", + tmu_mode_name(sw->tmu.mode), tmu_mode_name(mode)); + sw->tmu.mode_request = mode; + } - sw->tmu.unidirectional_request = unidirectional; - sw->tmu.rate_request = rate; return 0; } diff --git a/drivers/thunderbolt/usb4.c b/drivers/thunderbolt/usb4.c index 2da43b5fc840..2d84a53996fa 100644 --- a/drivers/thunderbolt/usb4.c +++ b/drivers/thunderbolt/usb4.c @@ -232,6 +232,9 @@ static bool link_is_usb4(struct tb_port *port) * is not available for some reason (like that there is a Thunderbolt 3 * switch upstream) then the internal xHCI controller is enabled * instead. + * + * This does not set the configuration valid bit of the router. To do + * that, call usb4_switch_configuration_valid(). */ int usb4_switch_setup(struct tb_switch *sw) { @@ -288,7 +291,33 @@ int usb4_switch_setup(struct tb_switch *sw) /* TBT3 supported by the CM */ val |= ROUTER_CS_5_C3S; - /* Tunneling configuration is ready now */ + + return tb_sw_write(sw, &val, TB_CFG_SWITCH, ROUTER_CS_5, 1); +} + +/** + * usb4_switch_configuration_valid() - Set tunneling configuration to be valid + * @sw: USB4 router + * + * Sets configuration valid bit for the router. Must be called before + * any tunnels can be set through the router and after + * usb4_switch_setup() has been called. Can be called to host and device + * routers (does nothing for the former). + * + * Returns %0 in case of success and negative errno otherwise.
+ */ +int usb4_switch_configuration_valid(struct tb_switch *sw) +{ + u32 val; + int ret; + + if (!tb_route(sw)) + return 0; + + ret = tb_sw_read(sw, &val, TB_CFG_SWITCH, ROUTER_CS_5, 1); + if (ret) + return ret; + val |= ROUTER_CS_5_CV; ret = tb_sw_write(sw, &val, TB_CFG_SWITCH, ROUTER_CS_5, 1); From patchwork Mon Jun 12 08:21:42 2023 X-Patchwork-Submitter: Mika Westerberg X-Patchwork-Id: 691928 From: Mika Westerberg To: linux-usb@vger.kernel.org Cc: Gil Fine , Yehezkel Bernat , Michael Jamet , Lukas Wunner , Andreas Noever , Mika Westerberg Subject: [PATCH v2 17/20] thunderbolt: Enable CL2 low power state Date: Mon, 12 Jun 2023 11:21:42 +0300 Message-Id: <20230612082145.62218-18-mika.westerberg@linux.intel.com> In-Reply-To: <20230612082145.62218-1-mika.westerberg@linux.intel.com> References: <20230612082145.62218-1-mika.westerberg@linux.intel.com> For USB4 v2 routers we can also enable CL2, which allows better power savings and thermal management than CL0s and CL1.
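As a rough sketch of the enabling policy this patch implements in tb_enable_clx() below: ask for the deepest CLx combination first and retry with a shallower one when the topology reports -EOPNOTSUPP. The helper name example_enable_deepest_clx() is made up for illustration only; tb_switch_clx_enable() and the TB_CL* flags are the driver symbols used in the diff.

static int example_enable_deepest_clx(struct tb_switch *sw)
{
	/* Deepest combination first; CL1 always implies CL0s */
	static const unsigned int masks[] = {
		TB_CL0S | TB_CL1 | TB_CL2,	/* needs USB4 v2 on both ends */
		TB_CL0S | TB_CL1,
	};
	int i, ret;

	for (i = 0; i < ARRAY_SIZE(masks); i++) {
		ret = tb_switch_clx_enable(sw, masks[i]);
		if (ret != -EOPNOTSUPP)
			return ret;
	}
	/* CLx being unsupported by the topology is not fatal */
	return 0;
}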
Signed-off-by: Mika Westerberg --- drivers/thunderbolt/clx.c | 31 +++++++++++++++++++------------ drivers/thunderbolt/tb.c | 9 ++++++--- 2 files changed, 25 insertions(+), 15 deletions(-) diff --git a/drivers/thunderbolt/clx.c b/drivers/thunderbolt/clx.c index 604cceb23659..13d217ae98e6 100644 --- a/drivers/thunderbolt/clx.c +++ b/drivers/thunderbolt/clx.c @@ -17,17 +17,22 @@ MODULE_PARM_DESC(clx, "allow low power states on the high-speed lanes (default: true)"); static const char *clx_name(unsigned int clx) { - if (!clx) - return "disabled"; - - if (clx & TB_CL2) + switch (clx) { + case TB_CL0S | TB_CL1 | TB_CL2: return "CL0s/CL1/CL2"; - if (clx & TB_CL1) + case TB_CL1 | TB_CL2: + return "CL1/CL2"; + case TB_CL0S | TB_CL2: + return "CL0s/CL2"; + case TB_CL0S | TB_CL1: return "CL0s/CL1"; - if (clx & TB_CL0S) + case TB_CL0S: return "CL0s"; - - return "unknown"; + case 0: + return "disabled"; + default: + return "unknown"; + } } static int tb_port_pm_secondary_set(struct tb_port *port, bool secondary) @@ -104,6 +109,8 @@ static int tb_port_clx_set(struct tb_port *port, unsigned int clx, bool enable) mask |= LANE_ADP_CS_1_CL0S_ENABLE; if (clx & TB_CL1) mask |= LANE_ADP_CS_1_CL1_ENABLE; + if (clx & TB_CL2) + mask |= LANE_ADP_CS_1_CL2_ENABLE; if (!mask) return -EOPNOTSUPP; @@ -291,8 +298,6 @@ bool tb_switch_clx_is_supported(const struct tb_switch *sw) static bool validate_mask(unsigned int clx) { /* Previous states need to be enabled */ - if (clx & TB_CL2) - return (clx & (TB_CL0S | TB_CL1)) == (TB_CL0S | TB_CL1); if (clx & TB_CL1) return (clx & TB_CL0S) == TB_CL0S; return true; @@ -331,8 +336,10 @@ int tb_switch_clx_enable(struct tb_switch *sw, unsigned int clx) !tb_switch_clx_is_supported(sw)) return 0; - /* CL2 is not yet supported */ - if (clx & TB_CL2) + /* Only support CL2 for v2 routers */ + if ((clx & TB_CL2) && + (usb4_switch_version(parent_sw) < 2 || + usb4_switch_version(sw) < 2)) return -EOPNOTSUPP; ret = tb_switch_pm_secondary_resolve(sw); diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c index ff034975a87e..79efc85db38b 100644 --- a/drivers/thunderbolt/tb.c +++ b/drivers/thunderbolt/tb.c @@ -244,6 +244,7 @@ static void tb_discover_dp_resources(struct tb *tb) static int tb_enable_clx(struct tb_switch *sw) { struct tb_cm *tcm = tb_priv(sw->tb); + unsigned int clx = TB_CL0S | TB_CL1; const struct tb_tunnel *tunnel; int ret; @@ -275,10 +276,12 @@ static int tb_enable_clx(struct tb_switch *sw) } /* - * CL0s and CL1 are enabled and supported together. - * Silently ignore CLx enabling in case CLx is not supported. + * Initially try with CL2. If that's not supported by the + * topology, try with CL0s and CL1 and then give up. */ - ret = tb_switch_clx_enable(sw, TB_CL0S | TB_CL1); + ret = tb_switch_clx_enable(sw, clx | TB_CL2); + if (ret == -EOPNOTSUPP) + ret = tb_switch_clx_enable(sw, clx); return ret == -EOPNOTSUPP ?
0 : ret; } From patchwork Mon Jun 12 08:21:43 2023 X-Patchwork-Submitter: Mika Westerberg X-Patchwork-Id: 691926 From: Mika Westerberg To: linux-usb@vger.kernel.org Cc: Gil Fine , Yehezkel Bernat , Michael Jamet , Lukas Wunner , Andreas Noever , Mika Westerberg Subject: [PATCH v2 18/20] thunderbolt: Make bandwidth allocation mode function names consistent Date: Mon, 12 Jun 2023 11:21:43 +0300 Message-Id: <20230612082145.62218-19-mika.westerberg@linux.intel.com> In-Reply-To: <20230612082145.62218-1-mika.westerberg@linux.intel.com> References: <20230612082145.62218-1-mika.westerberg@linux.intel.com> Make sure the DisplayPort bandwidth allocation mode function names are consistent with the existing ones, such as USB3. No functional changes.
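To show how the renamed interface reads, here is a hypothetical caller (example_read_request() is illustrative only; the usb4_dp_port_* helpers are the functions renamed in the diff below):

static int example_read_request(struct tb_port *in)
{
	int ret;

	if (!usb4_dp_port_bandwidth_mode_enabled(in))
		return -EOPNOTSUPP;

	/* Requested bandwidth in Mb/s, -ENODATA if no request is pending */
	ret = usb4_dp_port_requested_bandwidth(in);
	if (ret == -ENODATA)
		return 0;

	return ret;
}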
Signed-off-by: Mika Westerberg --- drivers/thunderbolt/tb.c | 10 ++++----- drivers/thunderbolt/tb.h | 15 +++++++------ drivers/thunderbolt/tunnel.c | 41 ++++++++++++++++++------------------ drivers/thunderbolt/usb4.c | 32 ++++++++++++++++------------ 4 files changed, 52 insertions(+), 46 deletions(-) diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c index 79efc85db38b..62b26b7998fd 100644 --- a/drivers/thunderbolt/tb.c +++ b/drivers/thunderbolt/tb.c @@ -131,7 +131,7 @@ tb_attach_bandwidth_group(struct tb_cm *tcm, struct tb_port *in, static void tb_discover_bandwidth_group(struct tb_cm *tcm, struct tb_port *in, struct tb_port *out) { - if (usb4_dp_port_bw_mode_enabled(in)) { + if (usb4_dp_port_bandwidth_mode_enabled(in)) { int index, i; index = usb4_dp_port_group_id(in); @@ -1169,7 +1169,7 @@ tb_recalc_estimated_bandwidth_for_group(struct tb_bandwidth_group *group) struct tb_tunnel *tunnel; struct tb_port *out; - if (!usb4_dp_port_bw_mode_enabled(in)) + if (!usb4_dp_port_bandwidth_mode_enabled(in)) continue; tunnel = tb_find_tunnel(tb, TB_TUNNEL_DP, in, NULL); @@ -1217,7 +1217,7 @@ tb_recalc_estimated_bandwidth_for_group(struct tb_bandwidth_group *group) else estimated_bw = estimated_up; - if (usb4_dp_port_set_estimated_bw(in, estimated_bw)) + if (usb4_dp_port_set_estimated_bandwidth(in, estimated_bw)) tb_port_warn(in, "failed to update estimated bandwidth\n"); } @@ -1912,12 +1912,12 @@ static void tb_handle_dp_bandwidth_request(struct work_struct *work) tb_port_dbg(in, "handling bandwidth allocation request\n"); - if (!usb4_dp_port_bw_mode_enabled(in)) { + if (!usb4_dp_port_bandwidth_mode_enabled(in)) { tb_port_warn(in, "bandwidth allocation mode not enabled\n"); goto unlock; } - ret = usb4_dp_port_requested_bw(in); + ret = usb4_dp_port_requested_bandwidth(in); if (ret < 0) { if (ret == -ENODATA) tb_port_dbg(in, "no bandwidth request active\n"); diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h index 68ab9b3c9580..57a9b272cb94 100644 --- a/drivers/thunderbolt/tb.h +++ b/drivers/thunderbolt/tb.h @@ -1292,19 +1292,20 @@ int usb4_usb3_port_release_bandwidth(struct tb_port *port, int *upstream_bw, int *downstream_bw); int usb4_dp_port_set_cm_id(struct tb_port *port, int cm_id); -bool usb4_dp_port_bw_mode_supported(struct tb_port *port); -bool usb4_dp_port_bw_mode_enabled(struct tb_port *port); -int usb4_dp_port_set_cm_bw_mode_supported(struct tb_port *port, bool supported); +bool usb4_dp_port_bandwidth_mode_supported(struct tb_port *port); +bool usb4_dp_port_bandwidth_mode_enabled(struct tb_port *port); +int usb4_dp_port_set_cm_bandwidth_mode_supported(struct tb_port *port, + bool supported); int usb4_dp_port_group_id(struct tb_port *port); int usb4_dp_port_set_group_id(struct tb_port *port, int group_id); int usb4_dp_port_nrd(struct tb_port *port, int *rate, int *lanes); int usb4_dp_port_set_nrd(struct tb_port *port, int rate, int lanes); int usb4_dp_port_granularity(struct tb_port *port); int usb4_dp_port_set_granularity(struct tb_port *port, int granularity); -int usb4_dp_port_set_estimated_bw(struct tb_port *port, int bw); -int usb4_dp_port_allocated_bw(struct tb_port *port); -int usb4_dp_port_allocate_bw(struct tb_port *port, int bw); -int usb4_dp_port_requested_bw(struct tb_port *port); +int usb4_dp_port_set_estimated_bandwidth(struct tb_port *port, int bw); +int usb4_dp_port_allocated_bandwidth(struct tb_port *port); +int usb4_dp_port_allocate_bandwidth(struct tb_port *port, int bw); +int usb4_dp_port_requested_bandwidth(struct tb_port *port); int 
usb4_pci_port_set_ext_encapsulation(struct tb_port *port, bool enable); diff --git a/drivers/thunderbolt/tunnel.c b/drivers/thunderbolt/tunnel.c index f1d0ab2b39a2..64ec0cccc0df 100644 --- a/drivers/thunderbolt/tunnel.c +++ b/drivers/thunderbolt/tunnel.c @@ -640,7 +640,7 @@ static int tb_dp_xchg_caps(struct tb_tunnel *tunnel) in->cap_adap + DP_REMOTE_CAP, 1); } -static int tb_dp_bw_alloc_mode_enable(struct tb_tunnel *tunnel) +static int tb_dp_bandwidth_alloc_mode_enable(struct tb_tunnel *tunnel) { int ret, estimated_bw, granularity, tmp; struct tb_port *out = tunnel->dst_port; @@ -652,7 +652,7 @@ static int tb_dp_bw_alloc_mode_enable(struct tb_tunnel *tunnel) if (!bw_alloc_mode) return 0; - ret = usb4_dp_port_set_cm_bw_mode_supported(in, true); + ret = usb4_dp_port_set_cm_bandwidth_mode_supported(in, true); if (ret) return ret; @@ -716,12 +716,12 @@ static int tb_dp_bw_alloc_mode_enable(struct tb_tunnel *tunnel) tb_port_dbg(in, "estimated bandwidth %d Mb/s\n", estimated_bw); - ret = usb4_dp_port_set_estimated_bw(in, estimated_bw); + ret = usb4_dp_port_set_estimated_bandwidth(in, estimated_bw); if (ret) return ret; /* Initial allocation should be 0 according to the spec */ - ret = usb4_dp_port_allocate_bw(in, 0); + ret = usb4_dp_port_allocate_bandwidth(in, 0); if (ret) return ret; @@ -743,7 +743,7 @@ static int tb_dp_init(struct tb_tunnel *tunnel) if (!tb_switch_is_usb4(sw)) return 0; - if (!usb4_dp_port_bw_mode_supported(in)) + if (!usb4_dp_port_bandwidth_mode_supported(in)) return 0; tb_port_dbg(in, "bandwidth allocation mode supported\n"); @@ -752,17 +752,17 @@ static int tb_dp_init(struct tb_tunnel *tunnel) if (ret) return ret; - return tb_dp_bw_alloc_mode_enable(tunnel); + return tb_dp_bandwidth_alloc_mode_enable(tunnel); } static void tb_dp_deinit(struct tb_tunnel *tunnel) { struct tb_port *in = tunnel->src_port; - if (!usb4_dp_port_bw_mode_supported(in)) + if (!usb4_dp_port_bandwidth_mode_supported(in)) return; - if (usb4_dp_port_bw_mode_enabled(in)) { - usb4_dp_port_set_cm_bw_mode_supported(in, false); + if (usb4_dp_port_bandwidth_mode_enabled(in)) { + usb4_dp_port_set_cm_bandwidth_mode_supported(in, false); tb_port_dbg(in, "bandwidth allocation mode disabled\n"); } } @@ -826,21 +826,22 @@ static int tb_dp_nrd_bandwidth(struct tb_tunnel *tunnel, int *max_bw) return nrd_bw; } -static int tb_dp_bw_mode_consumed_bandwidth(struct tb_tunnel *tunnel, - int *consumed_up, int *consumed_down) +static int tb_dp_bandwidth_mode_consumed_bandwidth(struct tb_tunnel *tunnel, + int *consumed_up, + int *consumed_down) { struct tb_port *out = tunnel->dst_port; struct tb_port *in = tunnel->src_port; int ret, allocated_bw, max_bw; - if (!usb4_dp_port_bw_mode_enabled(in)) + if (!usb4_dp_port_bandwidth_mode_enabled(in)) return -EOPNOTSUPP; if (!tunnel->bw_mode) return -EOPNOTSUPP; /* Read what was allocated previously if any */ - ret = usb4_dp_port_allocated_bw(in); + ret = usb4_dp_port_allocated_bandwidth(in); if (ret < 0) return ret; allocated_bw = ret; @@ -875,10 +876,10 @@ static int tb_dp_allocated_bandwidth(struct tb_tunnel *tunnel, int *allocated_up * If we have already set the allocated bandwidth then use that. * Otherwise we read it from the DPRX.
*/ - if (usb4_dp_port_bw_mode_enabled(in) && tunnel->bw_mode) { + if (usb4_dp_port_bandwidth_mode_enabled(in) && tunnel->bw_mode) { int ret, allocated_bw, max_bw; - ret = usb4_dp_port_allocated_bw(in); + ret = usb4_dp_port_allocated_bandwidth(in); if (ret < 0) return ret; allocated_bw = ret; @@ -910,7 +911,7 @@ static int tb_dp_alloc_bandwidth(struct tb_tunnel *tunnel, int *alloc_up, struct tb_port *in = tunnel->src_port; int max_bw, ret, tmp; - if (!usb4_dp_port_bw_mode_enabled(in)) + if (!usb4_dp_port_bandwidth_mode_enabled(in)) return -EOPNOTSUPP; ret = tb_dp_nrd_bandwidth(tunnel, &max_bw); @@ -919,14 +920,14 @@ static int tb_dp_alloc_bandwidth(struct tb_tunnel *tunnel, int *alloc_up, if (in->sw->config.depth < out->sw->config.depth) { tmp = min(*alloc_down, max_bw); - ret = usb4_dp_port_allocate_bw(in, tmp); + ret = usb4_dp_port_allocate_bandwidth(in, tmp); if (ret) return ret; *alloc_down = tmp; *alloc_up = 0; } else { tmp = min(*alloc_up, max_bw); - ret = usb4_dp_port_allocate_bw(in, tmp); + ret = usb4_dp_port_allocate_bandwidth(in, tmp); if (ret) return ret; *alloc_down = 0; @@ -1047,8 +1048,8 @@ static int tb_dp_consumed_bandwidth(struct tb_tunnel *tunnel, int *consumed_up, * mode is enabled first and then read the bandwidth * through those registers. */ - ret = tb_dp_bw_mode_consumed_bandwidth(tunnel, consumed_up, - consumed_down); + ret = tb_dp_bandwidth_mode_consumed_bandwidth(tunnel, consumed_up, + consumed_down); if (ret < 0) { if (ret != -EOPNOTSUPP) return ret; diff --git a/drivers/thunderbolt/usb4.c b/drivers/thunderbolt/usb4.c index 2d84a53996fa..5c414a60935d 100644 --- a/drivers/thunderbolt/usb4.c +++ b/drivers/thunderbolt/usb4.c @@ -2294,13 +2294,14 @@ int usb4_dp_port_set_cm_id(struct tb_port *port, int cm_id) } /** - * usb4_dp_port_bw_mode_supported() - Is the bandwidth allocation mode supported + * usb4_dp_port_bandwidth_mode_supported() - Is the bandwidth allocation mode + * supported * @port: DP IN adapter to check * * Can be called to any DP IN adapter. Returns true if the adapter * supports USB4 bandwidth allocation mode, false otherwise. */ -bool usb4_dp_port_bw_mode_supported(struct tb_port *port) +bool usb4_dp_port_bandwidth_mode_supported(struct tb_port *port) { int ret; u32 val; @@ -2317,13 +2318,14 @@ bool usb4_dp_port_bw_mode_supported(struct tb_port *port) } /** - * usb4_dp_port_bw_mode_enabled() - Is the bandwidth allocation mode enabled + * usb4_dp_port_bandwidth_mode_enabled() - Is the bandwidth allocation mode + * enabled * @port: DP IN adapter to check * * Can be called to any DP IN adapter. Returns true if the bandwidth * allocation mode has been enabled, false otherwise. */ -bool usb4_dp_port_bw_mode_enabled(struct tb_port *port) +bool usb4_dp_port_bandwidth_mode_enabled(struct tb_port *port) { int ret; u32 val; @@ -2340,7 +2342,8 @@ bool usb4_dp_port_bw_mode_enabled(struct tb_port *port) } /** - * usb4_dp_port_set_cm_bw_mode_supported() - Set/clear CM support for bandwidth allocation mode + * usb4_dp_port_set_cm_bandwidth_mode_supported() - Set/clear CM support for + * bandwidth allocation mode * @port: DP IN adapter * @supported: Does the CM support bandwidth allocation mode * @@ -2349,7 +2352,8 @@ bool usb4_dp_port_bw_mode_enabled(struct tb_port *port) * otherwise. Specifically returns %-EOPNOTSUPP if the passed in adapter * does not support this.
*/ -int usb4_dp_port_set_cm_bw_mode_supported(struct tb_port *port, bool supported) +int usb4_dp_port_set_cm_bandwidth_mode_supported(struct tb_port *port, + bool supported) { u32 val; int ret; @@ -2623,7 +2627,7 @@ int usb4_dp_port_set_granularity(struct tb_port *port, int granularity) } /** - * usb4_dp_port_set_estimated_bw() - Set estimated bandwidth + * usb4_dp_port_set_estimated_bandwidth() - Set estimated bandwidth * @port: DP IN adapter * @bw: Estimated bandwidth in Mb/s. * @@ -2633,7 +2637,7 @@ int usb4_dp_port_set_granularity(struct tb_port *port, int granularity) * and negative errno otherwise. Specifically returns %-EOPNOTSUPP if * the adapter does not support this. */ -int usb4_dp_port_set_estimated_bw(struct tb_port *port, int bw) +int usb4_dp_port_set_estimated_bandwidth(struct tb_port *port, int bw) { u32 val, granularity; int ret; @@ -2659,14 +2663,14 @@ int usb4_dp_port_set_estimated_bw(struct tb_port *port, int bw) } /** - * usb4_dp_port_allocated_bw() - Return allocated bandwidth + * usb4_dp_port_allocated_bandwidth() - Return allocated bandwidth * @port: DP IN adapter * * Reads and returns allocated bandwidth for @port in Mb/s (taking into * account the programmed granularity). Returns negative errno in case * of error. */ -int usb4_dp_port_allocated_bw(struct tb_port *port) +int usb4_dp_port_allocated_bandwidth(struct tb_port *port) { u32 val, granularity; int ret; @@ -2752,7 +2756,7 @@ static int usb4_dp_port_wait_and_clear_cm_ack(struct tb_port *port, } /** - * usb4_dp_port_allocate_bw() - Set allocated bandwidth + * usb4_dp_port_allocate_bandwidth() - Set allocated bandwidth * @port: DP IN adapter * @bw: New allocated bandwidth in Mb/s * @@ -2760,7 +2764,7 @@ static int usb4_dp_port_wait_and_clear_cm_ack(struct tb_port *port, * driver). Takes into account the programmed granularity. Returns %0 in * success and negative errno in case of error. */ -int usb4_dp_port_allocate_bw(struct tb_port *port, int bw) +int usb4_dp_port_allocate_bandwidth(struct tb_port *port, int bw) { u32 val, granularity; int ret; @@ -2794,7 +2798,7 @@ int usb4_dp_port_allocate_bw(struct tb_port *port, int bw) } /** - * usb4_dp_port_requested_bw() - Read requested bandwidth + * usb4_dp_port_requested_bandwidth() - Read requested bandwidth * @port: DP IN adapter * * Reads the DPCD (graphics driver) requested bandwidth and returns it @@ -2803,7 +2807,7 @@ int usb4_dp_port_allocate_bw(struct tb_port *port, int bw) * the adapter does not support bandwidth allocation mode, and %ENODATA * if there is no active bandwidth request from the graphics driver. 
-int usb4_dp_port_requested_bw(struct tb_port *port) +int usb4_dp_port_requested_bandwidth(struct tb_port *port) { u32 val, granularity; int ret; From patchwork Mon Jun 12 08:21:44 2023 X-Patchwork-Submitter: Mika Westerberg X-Patchwork-Id: 693240 From: Mika Westerberg To: linux-usb@vger.kernel.org Cc: Gil Fine , Yehezkel Bernat , Michael Jamet , Lukas Wunner , Andreas Noever , Mika Westerberg Subject: [PATCH v2 19/20] thunderbolt: Add DisplayPort 2.x tunneling support Date: Mon, 12 Jun 2023 11:21:44 +0300 Message-Id: <20230612082145.62218-20-mika.westerberg@linux.intel.com> In-Reply-To: <20230612082145.62218-1-mika.westerberg@linux.intel.com> References: <20230612082145.62218-1-mika.westerberg@linux.intel.com> This adds support for the UHBR (Ultra High Bit Rate) bandwidths introduced with DisplayPort 2.0 (and refined in 2.1). These can go up to 80 Gbit/s and their support is represented in additional bits in the DP IN capability. This updates the DisplayPort tunneling to support these new rates too.
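To make the encoding overhead concrete, here is a sketch of the bandwidth calculation this patch adds (mirroring tb_dp_bandwidth() in the diff below; the function name and the worked numbers are illustrative only):

static unsigned int example_dp_bandwidth(unsigned int rate, unsigned int lanes)
{
	/* UHBR rates (10000 Mb/s per lane and up) use 128b/132b encoding */
	if (rate >= 10000)
		return rate * lanes * 128 / 132;
	/* RBR, HBR, HBR2 and HBR3 use 8b/10b encoding */
	return rate * lanes * 8 / 10;
}

/*
 * For example UHBR20 x 2 lanes: 20000 * 2 * 128 / 132 = 38787 Mb/s,
 * whereas HBR3 x 4 lanes: 8100 * 4 * 8 / 10 = 25920 Mb/s.
 */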
Signed-off-by: Mika Westerberg --- drivers/thunderbolt/tb_regs.h | 3 + drivers/thunderbolt/tunnel.c | 100 ++++++++++++++++++++++++++++------ 2 files changed, 87 insertions(+), 16 deletions(-) diff --git a/drivers/thunderbolt/tb_regs.h b/drivers/thunderbolt/tb_regs.h index c95fc7fe7adf..cf9f2370878a 100644 --- a/drivers/thunderbolt/tb_regs.h +++ b/drivers/thunderbolt/tb_regs.h @@ -450,6 +450,9 @@ struct tb_regs_port_header { #define DP_COMMON_CAP_1_LANE 0x0 #define DP_COMMON_CAP_2_LANES 0x1 #define DP_COMMON_CAP_4_LANES 0x2 +#define DP_COMMON_CAP_UHBR10 BIT(17) +#define DP_COMMON_CAP_UHBR20 BIT(18) +#define DP_COMMON_CAP_UHBR13_5 BIT(19) #define DP_COMMON_CAP_LTTPR_NS BIT(27) #define DP_COMMON_CAP_BW_MODE BIT(28) #define DP_COMMON_CAP_DPRX_DONE BIT(31) diff --git a/drivers/thunderbolt/tunnel.c b/drivers/thunderbolt/tunnel.c index 64ec0cccc0df..620dc9d240f0 100644 --- a/drivers/thunderbolt/tunnel.c +++ b/drivers/thunderbolt/tunnel.c @@ -417,6 +417,10 @@ static int tb_dp_cm_handshake(struct tb_port *in, struct tb_port *out, return -ETIMEDOUT; } +/* + * Returns maximum possible rate from capability supporting only DP 2.0 + * and below. Used when DP BW allocation mode is not enabled. + */ static inline u32 tb_dp_cap_get_rate(u32 val) { u32 rate = (val & DP_COMMON_CAP_RATE_MASK) >> DP_COMMON_CAP_RATE_SHIFT; @@ -435,6 +439,28 @@ static inline u32 tb_dp_cap_get_rate(u32 val) } } +/* + * Returns maximum possible rate from capability supporting DP 2.1 + * UHBR20, 13.5 and 10 rates as well. Use only when DP BW allocation + * mode is enabled. + */ +static inline u32 tb_dp_cap_get_rate_ext(u32 val) +{ + if (val & DP_COMMON_CAP_UHBR20) + return 20000; + else if (val & DP_COMMON_CAP_UHBR13_5) + return 13500; + else if (val & DP_COMMON_CAP_UHBR10) + return 10000; + + return tb_dp_cap_get_rate(val); +} + +static inline bool tb_dp_is_uhbr_rate(unsigned int rate) +{ + return rate >= 10000; +} + static inline u32 tb_dp_cap_set_rate(u32 val, u32 rate) { val &= ~DP_COMMON_CAP_RATE_MASK; @@ -497,7 +523,9 @@ static inline u32 tb_dp_cap_set_lanes(u32 val, u32 lanes) static unsigned int tb_dp_bandwidth(unsigned int rate, unsigned int lanes) { - /* Tunneling removes the DP 8b/10b encoding */ + /* Tunneling removes the DP 8b/10b or 128b/132b encoding */ + if (tb_dp_is_uhbr_rate(rate)) + return rate * lanes * 128 / 132; return rate * lanes * 8 / 10; } @@ -690,6 +718,19 @@ static int tb_dp_bandwidth_alloc_mode_enable(struct tb_tunnel *tunnel) if (ret) return ret; + /* + * Pick up granularity that supports maximum possible bandwidth. + * For that we use the UHBR rates too.
+ */ + in_rate = tb_dp_cap_get_rate_ext(in_dp_cap); + out_rate = tb_dp_cap_get_rate_ext(out_dp_cap); + rate = min(in_rate, out_rate); + tmp = tb_dp_bandwidth(rate, lanes); + + tb_port_dbg(in, + "maximum bandwidth through allocation mode %u Mb/s x%u = %u Mb/s\n", + rate, lanes, tmp); + for (granularity = 250; tmp / granularity > 255 && granularity <= 1000; granularity *= 2) ; @@ -805,15 +846,42 @@ static int tb_dp_activate(struct tb_tunnel *tunnel, bool active) } /* max_bw is rounded up to next granularity */ -static int tb_dp_nrd_bandwidth(struct tb_tunnel *tunnel, int *max_bw) +static int tb_dp_bandwidth_mode_maximum_bandwidth(struct tb_tunnel *tunnel, + int *max_bw) { struct tb_port *in = tunnel->src_port; int ret, rate, lanes, nrd_bw; + u32 cap; - ret = usb4_dp_port_nrd(in, &rate, &lanes); + /* + * DP IN adapter DP_LOCAL_CAP gets updated to the lowest AUX + * read parameter values so we can use this to determine + * the maximum possible bandwidth over this link. + * + * See USB4 v2 spec 1.0 10.4.4.5. + */ + ret = tb_port_read(in, &cap, TB_CFG_PORT, + in->cap_adap + DP_LOCAL_CAP, 1); if (ret) return ret; + rate = tb_dp_cap_get_rate_ext(cap); + if (tb_dp_is_uhbr_rate(rate)) { + /* + * When UHBR is used there is no reduction in lanes so + * we can use this directly. + */ + lanes = tb_dp_cap_get_lanes(cap); + } else { + /* + * If UHBR is not supported then check the + * non-reduced rate and lanes. + */ + ret = usb4_dp_port_nrd(in, &rate, &lanes); + if (ret) + return ret; + } + nrd_bw = tb_dp_bandwidth(rate, lanes); if (max_bw) { @@ -846,7 +914,7 @@ static int tb_dp_bandwidth_mode_consumed_bandwidth(struct tb_tunnel *tunnel, return ret; allocated_bw = ret; - ret = tb_dp_nrd_bandwidth(tunnel, &max_bw); + ret = tb_dp_bandwidth_mode_maximum_bandwidth(tunnel, &max_bw); if (ret < 0) return ret; if (allocated_bw == max_bw) @@ -884,7 +952,7 @@ static int tb_dp_allocated_bandwidth(struct tb_tunnel *tunnel, int *allocated_up return ret; allocated_bw = ret; - ret = tb_dp_nrd_bandwidth(tunnel, &max_bw); + ret = tb_dp_bandwidth_mode_maximum_bandwidth(tunnel, &max_bw); if (ret < 0) return ret; if (allocated_bw == max_bw) @@ -914,7 +982,7 @@ static int tb_dp_alloc_bandwidth(struct tb_tunnel *tunnel, int *alloc_up, if (!usb4_dp_port_bandwidth_mode_enabled(in)) return -EOPNOTSUPP; - ret = tb_dp_nrd_bandwidth(tunnel, &max_bw); + ret = tb_dp_bandwidth_mode_maximum_bandwidth(tunnel, &max_bw); if (ret < 0) return ret; @@ -937,6 +1005,9 @@ static int tb_dp_alloc_bandwidth(struct tb_tunnel *tunnel, int *alloc_up, /* Now we can use BW mode registers to figure out the bandwidth */ /* TODO: need to handle discovery too */ tunnel->bw_mode = true; + + tb_port_dbg(in, "allocated bandwidth through allocation mode %d Mb/s\n", + tmp); return 0; } @@ -1011,23 +1082,20 @@ static int tb_dp_maximum_bandwidth(struct tb_tunnel *tunnel, int *max_up, int *max_down) { struct tb_port *in = tunnel->src_port; - u32 rate, lanes; int ret; - /* - * DP IN adapter DP_LOCAL_CAP gets updated to the lowest AUX read - * parameter values so this so we can use this to determine the - * maximum possible bandwidth over this link.
- */ - ret = tb_dp_read_cap(tunnel, DP_LOCAL_CAP, &rate, &lanes); - if (ret) + if (!usb4_dp_port_bandwidth_mode_enabled(in)) + return -EOPNOTSUPP; + + ret = tb_dp_bandwidth_mode_maximum_bandwidth(tunnel, NULL); + if (ret < 0) return ret; if (in->sw->config.depth < tunnel->dst_port->sw->config.depth) { *max_up = 0; - *max_down = tb_dp_bandwidth(rate, lanes); + *max_down = ret; } else { - *max_up = tb_dp_bandwidth(rate, lanes); + *max_up = ret; *max_down = 0; } From patchwork Mon Jun 12 08:21:45 2023 X-Patchwork-Submitter: Mika Westerberg X-Patchwork-Id: 691927 From: Mika Westerberg To: linux-usb@vger.kernel.org Cc: Gil Fine , Yehezkel Bernat , Michael Jamet , Lukas Wunner , Andreas Noever , Mika Westerberg Subject: [PATCH v2 20/20] thunderbolt: Add test case for 3 DisplayPort tunnels Date: Mon, 12 Jun 2023 11:21:45 +0300 Message-Id: <20230612082145.62218-21-mika.westerberg@linux.intel.com> In-Reply-To: <20230612082145.62218-1-mika.westerberg@linux.intel.com> References: <20230612082145.62218-1-mika.westerberg@linux.intel.com> Intel Barlow Ridge Thunderbolt controller has 3 DP IN adapters. This allows 3 simultaneous DisplayPort tunnels through either one or two USB4 downstream ports (in any possible configuration).
Add test case for this. Signed-off-by: Mika Westerberg --- drivers/thunderbolt/test.c | 83 ++++++++++++++++++++++++++++++++++++++ 1 file changed, 83 insertions(+) diff --git a/drivers/thunderbolt/test.c b/drivers/thunderbolt/test.c index 24c06e7354cd..9475c6698c7d 100644 --- a/drivers/thunderbolt/test.c +++ b/drivers/thunderbolt/test.c @@ -170,6 +170,23 @@ static struct tb_switch *alloc_host_usb4(struct kunit *test) return sw; } +static struct tb_switch *alloc_host_br(struct kunit *test) +{ + struct tb_switch *sw; + + sw = alloc_host_usb4(test); + if (!sw) + return NULL; + + sw->ports[10].config.type = TB_TYPE_DP_HDMI_IN; + sw->ports[10].config.max_in_hop_id = 9; + sw->ports[10].config.max_out_hop_id = 9; + sw->ports[10].cap_adap = -1; + sw->ports[10].disabled = false; + + return sw; +} + static struct tb_switch *alloc_dev_default(struct kunit *test, struct tb_switch *parent, u64 route, bool bonded) @@ -1583,6 +1600,71 @@ static void tb_test_tunnel_dp_max_length(struct kunit *test) tb_tunnel_free(tunnel); } +static void tb_test_tunnel_3dp(struct kunit *test) +{ + struct tb_switch *host, *dev1, *dev2, *dev3, *dev4, *dev5; + struct tb_port *in1, *in2, *in3, *out1, *out2, *out3; + struct tb_tunnel *tunnel1, *tunnel2, *tunnel3; + + /* + * Create 3 DP tunnels from Host to Devices #2, #5 and #4. + * + * [Host] + * 3 | + * 1 | + * [Device #1] + * 3 / | 5 \ 7 + * 1 / | \ 1 + * [Device #2] | [Device #4] + * | 1 + * [Device #3] + * | 5 + * | 1 + * [Device #5] + */ + host = alloc_host_br(test); + dev1 = alloc_dev_default(test, host, 0x3, true); + dev2 = alloc_dev_default(test, dev1, 0x303, true); + dev3 = alloc_dev_default(test, dev1, 0x503, true); + dev4 = alloc_dev_default(test, dev1, 0x703, true); + dev5 = alloc_dev_default(test, dev3, 0x50503, true); + + in1 = &host->ports[5]; + in2 = &host->ports[6]; + in3 = &host->ports[10]; + + out1 = &dev2->ports[13]; + out2 = &dev5->ports[13]; + out3 = &dev4->ports[14]; + + tunnel1 = tb_tunnel_alloc_dp(NULL, in1, out1, 1, 0, 0); + KUNIT_ASSERT_TRUE(test, tunnel1 != NULL); + KUNIT_EXPECT_EQ(test, tunnel1->type, TB_TUNNEL_DP); + KUNIT_EXPECT_PTR_EQ(test, tunnel1->src_port, in1); + KUNIT_EXPECT_PTR_EQ(test, tunnel1->dst_port, out1); + KUNIT_ASSERT_EQ(test, tunnel1->npaths, 3); + KUNIT_ASSERT_EQ(test, tunnel1->paths[0]->path_length, 3); + + tunnel2 = tb_tunnel_alloc_dp(NULL, in2, out2, 1, 0, 0); + KUNIT_ASSERT_TRUE(test, tunnel2 != NULL); + KUNIT_EXPECT_EQ(test, tunnel2->type, TB_TUNNEL_DP); + KUNIT_EXPECT_PTR_EQ(test, tunnel2->src_port, in2); + KUNIT_EXPECT_PTR_EQ(test, tunnel2->dst_port, out2); + KUNIT_ASSERT_EQ(test, tunnel2->npaths, 3); + KUNIT_ASSERT_EQ(test, tunnel2->paths[0]->path_length, 4); + + tunnel3 = tb_tunnel_alloc_dp(NULL, in3, out3, 1, 0, 0); + KUNIT_ASSERT_TRUE(test, tunnel3 != NULL); + KUNIT_EXPECT_EQ(test, tunnel3->type, TB_TUNNEL_DP); + KUNIT_EXPECT_PTR_EQ(test, tunnel3->src_port, in3); + KUNIT_EXPECT_PTR_EQ(test, tunnel3->dst_port, out3); + KUNIT_ASSERT_EQ(test, tunnel3->npaths, 3); + KUNIT_ASSERT_EQ(test, tunnel3->paths[0]->path_length, 3); + + tb_tunnel_free(tunnel3); + tb_tunnel_free(tunnel2); + tb_tunnel_free(tunnel1); +} + static void tb_test_tunnel_usb3(struct kunit *test) { struct tb_switch *host, *dev1, *dev2; @@ -2790,6 +2872,7 @@ static struct kunit_case tb_test_cases[] = { KUNIT_CASE(tb_test_tunnel_dp_chain), KUNIT_CASE(tb_test_tunnel_dp_tree), KUNIT_CASE(tb_test_tunnel_dp_max_length), + KUNIT_CASE(tb_test_tunnel_3dp), KUNIT_CASE(tb_test_tunnel_port_on_path), KUNIT_CASE(tb_test_tunnel_usb3), KUNIT_CASE(tb_test_tunnel_dma),