From patchwork Fri Dec 17 01:16:38 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Gil Fine
X-Patchwork-Id: 526011
From: Gil Fine
To: andreas.noever@gmail.com, michael.jamet@intel.com, mika.westerberg@linux.intel.com,
YehezkelShB@gmail.com
Cc: gil.fine@intel.com, linux-usb@vger.kernel.org, lukas@wunner.de
Subject: [PATCH v2 1/7] thunderbolt: Add TMU unidirectional mode
Date: Fri, 17 Dec 2021 03:16:38 +0200
Message-Id: <20211217011644.21634-2-gil.fine@intel.com>
In-Reply-To: <20211217011644.21634-1-gil.fine@intel.com>
References: <20211217011644.21634-1-gil.fine@intel.com>

Up until Titan Ridge (Thunderbolt 3), routers supported only
bidirectional TMU mode. Add a unidirectional TMU mode, which is
needed to enable the low power states of the link (CLx).

Signed-off-by: Gil Fine
---
 drivers/thunderbolt/tb.c      |   9 +-
 drivers/thunderbolt/tb.h      |  24 +--
 drivers/thunderbolt/tb_regs.h |   3 +
 drivers/thunderbolt/tmu.c     | 286 ++++++++++++++++++++++++++++------
 4 files changed, 267 insertions(+), 55 deletions(-)

diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c index a231191b06c6..96e0a029d6c8 100644 --- a/drivers/thunderbolt/tb.c +++ b/drivers/thunderbolt/tb.c @@ -221,7 +221,7 @@ static int tb_enable_tmu(struct tb_switch *sw) int ret; /* If it is already enabled in correct mode, don't touch it */ - if (tb_switch_tmu_is_enabled(sw)) + if (tb_switch_tmu_hifi_is_enabled(sw, sw->tmu.unidirect_request)) return 0; ret = tb_switch_tmu_disable(sw); @@ -669,6 +669,7 @@ static void tb_scan_port(struct tb_port *port) tb_switch_lane_bonding_enable(sw); /* Set the link configured */ tb_switch_configure_link(sw); + tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_HIFI, false); if (tb_enable_tmu(sw)) tb_sw_warn(sw, "failed to enable TMU\n"); @@ -1375,6 +1376,7 @@ static int tb_start(struct tb *tb) return ret; } + tb_switch_tmu_configure(tb->root_switch, TB_SWITCH_TMU_RATE_HIFI, false); /* Enable TMU if it is off */ tb_switch_tmu_enable(tb->root_switch); /* Full scan to discover devices added before the driver was loaded.
*/ @@ -1418,6 +1420,11 @@ static void tb_restore_children(struct tb_switch *sw) if (sw->is_unplugged) return; + /* + * tb_switch_tmu_configure() was already called when the switch was + * added before entering system sleep or runtime suspend, + * so no need to call it again before enabling TMU. + */ if (tb_enable_tmu(sw)) tb_sw_warn(sw, "failed to restore TMU configuration\n"); diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h index 3fae40670b72..b57f14893f38 100644 --- a/drivers/thunderbolt/tb.h +++ b/drivers/thunderbolt/tb.h @@ -89,15 +89,24 @@ enum tb_switch_tmu_rate { * @cap: Offset to the TMU capability (%0 if not found) * @has_ucap: Does the switch support uni-directional mode * @rate: TMU refresh rate related to upstream switch. In case of root - * switch this holds the domain rate. + * switch this holds the domain rate. Reflects the HW setting. * @unidirectional: Is the TMU in uni-directional or bi-directional mode - * related to upstream switch. Don't case for root switch. + * related to upstream switch. Don't care for root switch. + * Reflects the HW setting. + * @unidirect_request: New TMU mode (uni-directional or bi-directional) + * that is requested to be set. Related to upstream switch. + * Don't care for root switch. + * @rate_request: New TMU refresh rate related to upstream switch that is + * requested to be set. In case of root switch, this holds + * the new domain rate that is requested to be set.
*/ struct tb_switch_tmu { int cap; bool has_ucap; enum tb_switch_tmu_rate rate; bool unidirectional; + bool unidirect_request; + enum tb_switch_tmu_rate rate_request; }; /** @@ -891,13 +900,10 @@ int tb_switch_tmu_init(struct tb_switch *sw); int tb_switch_tmu_post_time(struct tb_switch *sw); int tb_switch_tmu_disable(struct tb_switch *sw); int tb_switch_tmu_enable(struct tb_switch *sw); - -static inline bool tb_switch_tmu_is_enabled(const struct tb_switch *sw) -{ - return sw->tmu.rate == TB_SWITCH_TMU_RATE_HIFI && - !sw->tmu.unidirectional; -} - +bool tb_switch_tmu_hifi_is_enabled(struct tb_switch *sw, bool unidirect); +void tb_switch_tmu_configure(struct tb_switch *sw, + enum tb_switch_tmu_rate rate, + bool unidirectional); int tb_wait_for_port(struct tb_port *port, bool wait_if_unplugged); int tb_port_add_nfc_credits(struct tb_port *port, int credits); int tb_port_clear_counter(struct tb_port *port, int counter); diff --git a/drivers/thunderbolt/tb_regs.h b/drivers/thunderbolt/tb_regs.h index d8ab6c820451..89a2170032bf 100644 --- a/drivers/thunderbolt/tb_regs.h +++ b/drivers/thunderbolt/tb_regs.h @@ -246,6 +246,7 @@ enum usb4_switch_op { #define TMU_RTR_CS_3_TS_PACKET_INTERVAL_SHIFT 16 #define TMU_RTR_CS_22 0x16 #define TMU_RTR_CS_24 0x18 +#define TMU_RTR_CS_25 0x19 enum tb_port_type { TB_TYPE_INACTIVE = 0x000000, @@ -305,6 +306,8 @@ struct tb_regs_port_header { /* TMU adapter registers */ #define TMU_ADP_CS_3 0x03 #define TMU_ADP_CS_3_UDM BIT(29) +#define TMU_ADP_CS_6 0x06 +#define TMU_ADP_CS_6_DTS BIT(1) /* Lane adapter registers */ #define LANE_ADP_CS_0 0x00 diff --git a/drivers/thunderbolt/tmu.c b/drivers/thunderbolt/tmu.c index 039c42a06000..50f528211852 100644 --- a/drivers/thunderbolt/tmu.c +++ b/drivers/thunderbolt/tmu.c @@ -115,6 +115,11 @@ static inline int tb_port_tmu_unidirectional_disable(struct tb_port *port) return tb_port_tmu_set_unidirectional(port, false); } +static inline int tb_port_tmu_unidirectional_enable(struct tb_port *port) +{ + return 
tb_port_tmu_set_unidirectional(port, true); +} + static bool tb_port_tmu_is_unidirectional(struct tb_port *port) { int ret; @@ -128,6 +133,23 @@ static bool tb_port_tmu_is_unidirectional(struct tb_port *port) return val & TMU_ADP_CS_3_UDM; } +static int tb_port_tmu_time_sync(struct tb_port *port, bool time_sync) +{ + u32 val = time_sync ? TMU_ADP_CS_6_DTS : 0; + + return tb_port_tmu_write(port, TMU_ADP_CS_6, TMU_ADP_CS_6_DTS, val); +} + +static int tb_port_tmu_time_sync_disable(struct tb_port *port) +{ + return tb_port_tmu_time_sync(port, true); +} + +static int tb_port_tmu_time_sync_enable(struct tb_port *port) +{ + return tb_port_tmu_time_sync(port, false); +} + static int tb_switch_tmu_set_time_disruption(struct tb_switch *sw, bool set) { int ret; @@ -207,7 +229,8 @@ int tb_switch_tmu_init(struct tb_switch *sw) */ int tb_switch_tmu_post_time(struct tb_switch *sw) { - unsigned int post_local_time_offset, post_time_offset; + unsigned int post_time_high_offset, post_time_high = 0; + unsigned int post_local_time_offset, post_time_offset; struct tb_switch *root_switch = sw->tb->root_switch; u64 hi, mid, lo, local_time, post_time; int i, ret, retries = 100; @@ -247,6 +270,7 @@ int tb_switch_tmu_post_time(struct tb_switch *sw) post_local_time_offset = sw->tmu.cap + TMU_RTR_CS_22; post_time_offset = sw->tmu.cap + TMU_RTR_CS_24; + post_time_high_offset = sw->tmu.cap + TMU_RTR_CS_25; /* * Write the Grandmaster time to the Post Local Time registers @@ -258,17 +282,24 @@ int tb_switch_tmu_post_time(struct tb_switch *sw) goto out; /* - * Have the new switch update its local time (by writing 1 to - * the post_time registers) and wait for the completion of the - * same (post_time register becomes 0). This means the time has - * been converged properly. + * Have the new switch update its local time by: + * 1) writing 0x1 to the Post Time Low register and 0xFFFFFFFF to + * Post Time High register. 
+ 2) writing 0 to the Post Time High register and then waiting until + the Post Time register becomes 0. + This means the time has been converged properly. */ - post_time = 1; + post_time = 0xFFFFFFFF00000001ULL; ret = tb_sw_write(sw, &post_time, TB_CFG_SWITCH, post_time_offset, 2); if (ret) goto out; + ret = tb_sw_write(sw, &post_time_high, TB_CFG_SWITCH, + post_time_high_offset, 1); + if (ret) + goto out; + do { usleep_range(5, 10); ret = tb_sw_read(sw, &post_time, TB_CFG_SWITCH, @@ -297,7 +328,6 @@ */ int tb_switch_tmu_disable(struct tb_switch *sw) { - int ret; if (!tb_switch_is_usb4(sw)) return 0; @@ -306,21 +336,41 @@ if (sw->tmu.rate == TB_SWITCH_TMU_RATE_OFF) return 0; - if (sw->tmu.unidirectional) { + + if (tb_route(sw)) { + bool unidirect = tb_switch_tmu_hifi_is_enabled(sw, true); struct tb_switch *parent = tb_switch_parent(sw); - struct tb_port *up, *down; + struct tb_port *down, *up; + int ret; - up = tb_upstream_port(sw); down = tb_port_at(tb_route(sw), parent); - - /* The switch may be unplugged so ignore any errors */ - tb_port_tmu_unidirectional_disable(up); - ret = tb_port_tmu_unidirectional_disable(down); + up = tb_upstream_port(sw); + /* + * In case of unidirectional time sync, the TMU handshake is + * initiated by the upstream router. In case of bidirectional + * time sync, the TMU handshake is initiated by the downstream + * router. Therefore, we change the rate to OFF in the respective + * router.
+ */ + if (unidirect) + tb_switch_tmu_rate_write(parent, TB_SWITCH_TMU_RATE_OFF); + else + tb_switch_tmu_rate_write(sw, TB_SWITCH_TMU_RATE_OFF); + tb_port_tmu_time_sync_disable(up); + ret = tb_port_tmu_time_sync_disable(down); if (ret) return ret; - } - tb_switch_tmu_rate_write(sw, TB_SWITCH_TMU_RATE_OFF); + if (unidirect) { + /* The switch may be unplugged so ignore any errors */ + tb_port_tmu_unidirectional_disable(up); + ret = tb_port_tmu_unidirectional_disable(down); + if (ret) + return ret; + } + } else { + tb_switch_tmu_rate_write(sw, TB_SWITCH_TMU_RATE_OFF); + } sw->tmu.unidirectional = false; sw->tmu.rate = TB_SWITCH_TMU_RATE_OFF; @@ -329,55 +379,201 @@ int tb_switch_tmu_disable(struct tb_switch *sw) return 0; } -/** - * tb_switch_tmu_enable() - Enable TMU on a switch - * @sw: Switch whose TMU to enable - * - * Enables TMU of a switch to be in bi-directional, HiFi mode. In this mode - * all tunneling should work. +static void __tb_switch_tmu_off(struct tb_switch *sw, bool unidirect) +{ + struct tb_switch *parent = tb_switch_parent(sw); + struct tb_port *down, *up; + + down = tb_port_at(tb_route(sw), parent); + up = tb_upstream_port(sw); + /* + * In case of any failure in one of the steps when setting BiDir or + * Uni TMU mode, get back to the TMU configurations in OFF mode. + * In case of additional failures in the functions below, + * ignore them since the caller shall already report a failure. + */ + tb_port_tmu_time_sync_disable(down); + tb_port_tmu_time_sync_disable(up); + if (unidirect) + tb_switch_tmu_rate_write(parent, TB_SWITCH_TMU_RATE_OFF); + else + tb_switch_tmu_rate_write(sw, TB_SWITCH_TMU_RATE_OFF); + + tb_port_tmu_unidirectional_disable(down); + tb_port_tmu_unidirectional_disable(up); +} + +/* + * This function is called when the previous TMU mode was + * TB_SWITCH_TMU_RATE_OFF. 
*/ -int tb_switch_tmu_enable(struct tb_switch *sw) +static int __tb_switch_tmu_enable_bidir(struct tb_switch *sw) { + struct tb_switch *parent = tb_switch_parent(sw); + struct tb_port *up, *down; int ret; - if (!tb_switch_is_usb4(sw)) - return 0; + up = tb_upstream_port(sw); + down = tb_port_at(tb_route(sw), parent); - if (tb_switch_tmu_is_enabled(sw)) - return 0; + ret = tb_port_tmu_unidirectional_disable(up); + if (ret) + return ret; - ret = tb_switch_tmu_set_time_disruption(sw, true); + ret = tb_port_tmu_unidirectional_disable(down); + if (ret) + goto out; + + ret = tb_switch_tmu_rate_write(sw, TB_SWITCH_TMU_RATE_HIFI); + if (ret) + goto out; + + ret = tb_port_tmu_time_sync_enable(up); + if (ret) + goto out; + + ret = tb_port_tmu_time_sync_enable(down); + if (ret) + goto out; + + return 0; +out: + __tb_switch_tmu_off(sw, false); + return ret; +} + +/* + * This function is called when the previous TMU mode was + * TB_SWITCH_TMU_RATE_OFF. + */ +static int __tb_switch_tmu_enable_uni(struct tb_switch *sw) +{ + struct tb_switch *parent = tb_switch_parent(sw); + struct tb_port *up, *down; + int ret; + + up = tb_upstream_port(sw); + down = tb_port_at(tb_route(sw), parent); + ret = tb_switch_tmu_rate_write(parent, TB_SWITCH_TMU_RATE_HIFI); if (ret) return ret; - /* Change mode to bi-directional */ - if (tb_route(sw) && sw->tmu.unidirectional) { - struct tb_switch *parent = tb_switch_parent(sw); - struct tb_port *up, *down; + ret = tb_port_tmu_unidirectional_enable(up); + if (ret) + goto out; - up = tb_upstream_port(sw); - down = tb_port_at(tb_route(sw), parent); + ret = tb_port_tmu_time_sync_enable(up); + if (ret) + goto out; - ret = tb_port_tmu_unidirectional_disable(down); - if (ret) - return ret; + ret = tb_port_tmu_unidirectional_enable(down); + if (ret) + goto out; - ret = tb_switch_tmu_rate_write(sw, TB_SWITCH_TMU_RATE_HIFI); - if (ret) - return ret; + ret = tb_port_tmu_time_sync_enable(down); + if (ret) + goto out; - ret = tb_port_tmu_unidirectional_disable(up); 
- if (ret) - return ret; + return 0; +out: + __tb_switch_tmu_off(sw, true); + return ret; +} + +static int tb_switch_tmu_hifi_enable(struct tb_switch *sw) +{ + bool unidirect = sw->tmu.unidirect_request; + int ret; + + if (unidirect && !sw->tmu.has_ucap) + return -EOPNOTSUPP; + + if (!tb_switch_is_usb4(sw)) + return 0; + + if (tb_switch_tmu_hifi_is_enabled(sw, sw->tmu.unidirect_request)) + return 0; + + ret = tb_switch_tmu_set_time_disruption(sw, true); + if (ret) + return ret; + + if (tb_route(sw)) { + /* The only supported mode changes are from OFF to HiFi-Uni/HiFi-BiDir */ + if (sw->tmu.rate == TB_SWITCH_TMU_RATE_OFF) { + if (unidirect) + ret = __tb_switch_tmu_enable_uni(sw); + else + ret = __tb_switch_tmu_enable_bidir(sw); + if (ret) + return ret; + } + sw->tmu.unidirectional = unidirect; } else { + /* + * Host router port configurations are written as + * part of the configurations for the downstream port of + * its child router - see above. + * Here only the host router's rate configuration is written. + */ ret = tb_switch_tmu_rate_write(sw, TB_SWITCH_TMU_RATE_HIFI); if (ret) return ret; } - sw->tmu.unidirectional = false; sw->tmu.rate = TB_SWITCH_TMU_RATE_HIFI; - tb_sw_dbg(sw, "TMU: mode set to: %s\n", tb_switch_tmu_mode_name(sw)); + tb_sw_dbg(sw, "TMU: mode set to: %s\n", tb_switch_tmu_mode_name(sw)); return tb_switch_tmu_set_time_disruption(sw, false); } + +/** + * tb_switch_tmu_enable() - Enable TMU on a switch + * @sw: Switch whose TMU to enable + * + * Enables TMU of a switch to be in unidirectional or bidirectional HiFi mode. + * Calling tb_switch_tmu_configure() is required before calling this function, + * to select the HiFi rate and the directionality (unidirectional/bidirectional). + * In both modes all tunneling should work. Unidirectional mode is required for + * CLx (Link Low-Power) to work. LowRes mode is not used currently.
+ */ +int tb_switch_tmu_enable(struct tb_switch *sw) +{ + if (sw->tmu.rate_request == TB_SWITCH_TMU_RATE_NORMAL) + return -EOPNOTSUPP; + + return tb_switch_tmu_hifi_enable(sw); +} + +/** + * tb_switch_tmu_configure() - Configure the TMU rate and directionality + * @sw: Switch whose mode to change + * @rate: Rate to configure Off/LowRes/HiFi + * @unidirectional: Unidirectionality selection: Unidirectional or Bidirectional + * + * Selects the rate of the TMU (Off, LowRes, HiFi), and Directionality + * (Unidirectional or Bidirectional). + * Shall be called before tb_switch_tmu_enable(). + */ +void tb_switch_tmu_configure(struct tb_switch *sw, + enum tb_switch_tmu_rate rate, bool unidirectional) +{ + sw->tmu.unidirect_request = unidirectional; + sw->tmu.rate_request = rate; +} + +/** + * tb_switch_tmu_hifi_is_enabled() - Checks if the specified TMU mode + * bidir/uni enabled correctly + * @sw: Switch whose TMU mode to check + * @unidirect: Select bidirectional or unidirectional mode to check + * + * Read TMU directionality and rate from HW, and return true, + * if matches to bidirectional/unidirectional HiFi mode settings. + * Otherwise returns false. 
+ */
+bool tb_switch_tmu_hifi_is_enabled(struct tb_switch *sw, bool unidirect)
+{
+	return sw->tmu.rate == TB_SWITCH_TMU_RATE_HIFI &&
+	       sw->tmu.unidirectional == unidirect;
+}

From patchwork Fri Dec 17 01:16:39 2021
X-Patchwork-Submitter: Gil Fine
X-Patchwork-Id: 525464
From: Gil Fine
To: andreas.noever@gmail.com, michael.jamet@intel.com, mika.westerberg@linux.intel.com, YehezkelShB@gmail.com
Cc: gil.fine@intel.com, linux-usb@vger.kernel.org, lukas@wunner.de
Subject: [PATCH v2 2/7] thunderbolt: Add CL0s support for USB4
Date: Fri, 17 Dec 2021 03:16:39 +0200
Message-Id: <20211217011644.21634-3-gil.fine@intel.com>
In-Reply-To: <20211217011644.21634-1-gil.fine@intel.com>
References: <20211217011644.21634-1-gil.fine@intel.com>

Add support for CL0s, the first of the link low power states. Low power
states (collectively called CLx) reduce transmitter and receiver power
when a high-speed lane is idle. For now, enable CL0s only when both
sides of the link support it, and only for the first hop router (the
device directly connected to the host router). This is the most common
use case, intended for better thermal management, which in turn helps
improve performance.
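The "first hop router" restriction above can be sketched as a standalone check. This is a toy model for illustration only: the struct and function names are invented here, not the driver's own, and the route-string convention (0 for the host router, non-zero below it) is the only assumption carried over from the patch.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Toy model: a router's route string is 0 for the host router and
 * non-zero for everything below it, so the first hop router is the
 * one whose parent has a zero route string (depth = 1).
 */
struct toy_router {
	uint64_t route;                  /* 0 for the host router */
	const struct toy_router *parent; /* NULL for the host router */
};

static bool toy_is_first_hop(const struct toy_router *sw)
{
	/* the host router itself is never a "first hop" device */
	if (!sw->route)
		return false;
	/* first hop: the parent is the host router (route string 0) */
	return sw->parent && !sw->parent->route;
}
```

A router two levels down (non-zero route on both itself and its parent) fails the check, which mirrors why CL0s is left disabled past the first hop.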
Signed-off-by: Gil Fine --- drivers/thunderbolt/switch.c | 289 ++++++++++++++++++++++++++++++++++ drivers/thunderbolt/tb.c | 9 +- drivers/thunderbolt/tb.h | 29 ++++ drivers/thunderbolt/tb_regs.h | 6 + drivers/thunderbolt/usb4.c | 20 +++ 5 files changed, 352 insertions(+), 1 deletion(-) diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c index 13f9230104d7..2e8cb4acdf72 100644 --- a/drivers/thunderbolt/switch.c +++ b/drivers/thunderbolt/switch.c @@ -3223,3 +3223,292 @@ struct tb_port *tb_switch_find_port(struct tb_switch *sw, return NULL; } + +static int __tb_port_pm_secondary_set(struct tb_port *port, bool secondary) +{ + u32 phy; + int ret; + + ret = tb_port_read(port, &phy, TB_CFG_PORT, + port->cap_phy + LANE_ADP_CS_1, 1); + if (ret) + return ret; + + if (secondary) + phy |= LANE_ADP_CS_1_PMS; + else + phy &= ~LANE_ADP_CS_1_PMS; + + return tb_port_write(port, &phy, TB_CFG_PORT, + port->cap_phy + LANE_ADP_CS_1, 1); +} + +static int tb_port_pm_secondary_enable(struct tb_port *port) +{ + return __tb_port_pm_secondary_set(port, true); +} + +static int tb_port_pm_secondary_disable(struct tb_port *port) +{ + return __tb_port_pm_secondary_set(port, false); +} + +static int tb_switch_pm_secondary_resolve(struct tb_switch *sw) +{ + struct tb_switch *parent = tb_switch_parent(sw); + struct tb_port *up, *down; + int ret; + + if (!tb_route(sw)) + return 0; + + up = tb_upstream_port(sw); + down = tb_port_at(tb_route(sw), parent); + ret = tb_port_pm_secondary_enable(up); + if (ret) + return ret; + + ret = tb_port_pm_secondary_disable(down); + if (ret) + return ret; + + return 0; +} + +static bool tb_port_clx_supported(struct tb_port *port, enum tb_clx clx) +{ + u32 mask, val; + bool ret; + + /* Don't enable CLx in case of two single-lane links */ + if (!port->bonded && port->dual_link_port) + return false; + + /* Don't enable CLx in case of inter-domain link */ + if (port->xdomain) + return false; + + if (!usb4_port_clx_supported(port)) + return false; + + 
switch (clx) { + case TB_CL0S: + /* CL0s support requires also CL1 support */ + mask = LANE_ADP_CS_0_CL0S_SUPPORT | LANE_ADP_CS_0_CL1_SUPPORT; + break; + + /* For now we support only CL0s. Not CL1, CL2 */ + case TB_CL1: + case TB_CL2: + default: + return false; + } + + ret = tb_port_read(port, &val, TB_CFG_PORT, + port->cap_phy + LANE_ADP_CS_0, 1); + if (ret) + return false; + + return !!(val & mask); +} + +static inline bool tb_port_cl0s_supported(struct tb_port *port) +{ + return tb_port_clx_supported(port, TB_CL0S); +} + +static int __tb_port_cl0s_set(struct tb_port *port, bool enable) +{ + u32 phy, mask; + int ret; + + /* To enable CL0s, also required to enable CL1 */ + mask = LANE_ADP_CS_1_CL0S_ENABLE | LANE_ADP_CS_1_CL1_ENABLE; + ret = tb_port_read(port, &phy, TB_CFG_PORT, + port->cap_phy + LANE_ADP_CS_1, 1); + if (ret) + return ret; + + if (enable) + phy |= mask; + else + phy &= ~mask; + + return tb_port_write(port, &phy, TB_CFG_PORT, + port->cap_phy + LANE_ADP_CS_1, 1); +} + +static int tb_port_cl0s_disable(struct tb_port *port) +{ + return __tb_port_cl0s_set(port, false); +} + +static int tb_port_cl0s_enable(struct tb_port *port) +{ + return __tb_port_cl0s_set(port, true); +} + +static int tb_switch_enable_cl0s(struct tb_switch *sw) +{ + struct tb_switch *parent = tb_switch_parent(sw); + bool up_cl0s_support, down_cl0s_support; + struct tb_port *up, *down; + int ret; + + if (!tb_switch_is_usb4(sw)) + return 0; + + /* + * Enable CLx for host router's downstream port as part of the + * downstream router enabling procedure. + */ + if (!tb_route(sw)) + return 0; + + /* Enable CLx only for first hop router (depth = 1) */ + if (tb_route(parent)) + return 0; + + if (tb_switch_pm_secondary_resolve(sw)) + return -EINVAL; + + up = tb_upstream_port(sw); + down = tb_port_at(tb_route(sw), parent); + + up_cl0s_support = tb_port_cl0s_supported(up); + down_cl0s_support = tb_port_cl0s_supported(down); + + tb_port_dbg(up, "CL0s %ssupported\n", + up_cl0s_support ? 
"" : "not "); + tb_port_dbg(down, "CL0s %ssupported\n", + down_cl0s_support ? "" : "not "); + + if (!up_cl0s_support || !down_cl0s_support) + return -EOPNOTSUPP; + + ret = tb_port_cl0s_enable(up); + if (ret) + return ret; + + ret = tb_port_cl0s_enable(down); + if (ret) { + tb_port_cl0s_disable(up); + return ret; + } + + sw->clx = TB_CL0S; + + tb_sw_dbg(sw, "enabled CL0s on upstream port\n"); + return 0; +} + +/** + * tb_switch_enable_clx() - Enable CLx on upstream port of specified router + * @sw: The switch to enable CLx for + * @clx: The CLx state to enable + * + * Enable CLx state only for first hop router. That is the most common + * use case, intended for better thermal management, which helps to + * improve performance. + * CLx is enabled only if both sides of the link support CLx, and if + * both sides of the link are not configured as two single lane links + * and only if the link is not an inter-domain link. + * The complete set of conditions is described in CM Guide 1.0 section 8.1. + * + * Return: Returns 0 on success or an error code on failure. + */ +int tb_switch_enable_clx(struct tb_switch *sw, enum tb_clx clx) +{ + struct tb_switch *root_sw = sw->tb->root_switch; + + /* CLx is not enabled and validated on USB4 platforms before ADL */ + if (root_sw->generation < 4 || + tb_switch_is_tiger_lake(root_sw)) + return 0; + + switch (clx) { + case TB_CL0S: + return tb_switch_enable_cl0s(sw); + + default: + return -EOPNOTSUPP; + } +} + +static int tb_switch_disable_cl0s(struct tb_switch *sw) +{ + struct tb_switch *parent = tb_switch_parent(sw); + struct tb_port *up, *down; + int ret; + + if (!tb_switch_is_usb4(sw)) + return 0; + + /* + * Disable CLx for host router's downstream port as part of the + * downstream router enabling procedure.
+ */ + if (!tb_route(sw)) + return 0; + + /* Disable CLx only for first hop router (depth = 1) */ + if (tb_route(parent)) + return 0; + + up = tb_upstream_port(sw); + down = tb_port_at(tb_route(sw), parent); + ret = tb_port_cl0s_disable(up); + if (ret) + return ret; + + ret = tb_port_cl0s_disable(down); + if (ret) + return ret; + + sw->clx = TB_CLX_DISABLE; + + tb_sw_dbg(sw, "disabled CL0s on upstream port\n"); + return 0; +} + +/** + * tb_switch_disable_clx() - Disable CLx on upstream port of specified router + * @sw: The switch to disable CLx for + * @clx: The CLx state to disable + * + * Return: Returns 0 on success or an error code on failure. + */ +int tb_switch_disable_clx(struct tb_switch *sw, enum tb_clx clx) +{ + switch (clx) { + case TB_CL0S: + return tb_switch_disable_cl0s(sw); + + default: + return -EOPNOTSUPP; + } +} + +/** + * tb_switch_is_clx_enabled() - Checks if the CLx is enabled + * @sw: Router to check the CLx state for + * + * Checks if the CLx is enabled on the router upstream link. + * Not applicable for a host router. + */ +bool tb_switch_is_clx_enabled(struct tb_switch *sw) +{ + return sw->clx != TB_CLX_DISABLE; +} + +/** + * tb_switch_is_cl0s_enabled() - Checks if the CL0s is enabled + * @sw: Router to check the CLx state for + * + * Checks if the CL0s is enabled on the router upstream link. + * Not applicable for a host router. 
+ */ +bool tb_switch_is_cl0s_enabled(struct tb_switch *sw) +{ + return sw->clx == TB_CL0S; +} diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c index 96e0a029d6c8..2d3c8c5baf9f 100644 --- a/drivers/thunderbolt/tb.c +++ b/drivers/thunderbolt/tb.c @@ -669,7 +669,11 @@ static void tb_scan_port(struct tb_port *port) tb_switch_lane_bonding_enable(sw); /* Set the link configured */ tb_switch_configure_link(sw); - tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_HIFI, false); + if (tb_switch_enable_clx(sw, TB_CL0S)) + tb_sw_warn(sw, "failed to enable CLx on upstream port\n"); + + tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_HIFI, + tb_switch_is_clx_enabled(sw)); if (tb_enable_tmu(sw)) tb_sw_warn(sw, "failed to enable TMU\n"); @@ -1420,6 +1424,9 @@ static void tb_restore_children(struct tb_switch *sw) if (sw->is_unplugged) return; + if (tb_switch_enable_clx(sw, TB_CL0S)) + tb_sw_warn(sw, "failed to re-enable CLx on upstream port\n"); + /* * tb_switch_tmu_configure() was already called when the switch was * added before entering system sleep or runtime suspend, diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h index b57f14893f38..527b461a5c49 100644 --- a/drivers/thunderbolt/tb.h +++ b/drivers/thunderbolt/tb.h @@ -109,6 +109,13 @@ struct tb_switch_tmu { enum tb_switch_tmu_rate rate_request; }; +enum tb_clx { + TB_CLX_DISABLE, + TB_CL0S, + TB_CL1, + TB_CL2, +}; + /** * struct tb_switch - a thunderbolt switch * @dev: Device for the switch @@ -157,6 +164,7 @@ struct tb_switch_tmu { * @min_dp_main_credits: Router preferred minimum number of buffers for DP MAIN * @max_pcie_credits: Router preferred number of buffers for PCIe * @max_dma_credits: Router preferred number of buffers for DMA/P2P + * @clx: CLx state on the upstream link of the switch * * When the switch is being added or removed to the domain (other * switches) you need to have domain lock held. 
@@ -205,6 +213,7 @@ struct tb_switch { unsigned int min_dp_main_credits; unsigned int max_pcie_credits; unsigned int max_dma_credits; + enum tb_clx clx; }; /** @@ -862,6 +871,20 @@ static inline bool tb_switch_is_titan_ridge(const struct tb_switch *sw) return false; } +static inline bool tb_switch_is_tiger_lake(const struct tb_switch *sw) +{ + if (sw->config.vendor_id == PCI_VENDOR_ID_INTEL) { + switch (sw->config.device_id) { + case PCI_DEVICE_ID_INTEL_TGL_NHI0: + case PCI_DEVICE_ID_INTEL_TGL_NHI1: + case PCI_DEVICE_ID_INTEL_TGL_H_NHI0: + case PCI_DEVICE_ID_INTEL_TGL_H_NHI1: + return true; + } + } + return false; +} + /** * tb_switch_is_usb4() - Is the switch USB4 compliant * @sw: Switch to check @@ -904,6 +927,11 @@ bool tb_switch_tmu_hifi_is_enabled(struct tb_switch *sw, bool unidirect); void tb_switch_tmu_configure(struct tb_switch *sw, enum tb_switch_tmu_rate rate, bool unidirectional); + +int tb_switch_enable_clx(struct tb_switch *sw, enum tb_clx clx); +int tb_switch_disable_clx(struct tb_switch *sw, enum tb_clx clx); +bool tb_switch_is_clx_enabled(struct tb_switch *sw); + int tb_wait_for_port(struct tb_port *port, bool wait_if_unplugged); int tb_port_add_nfc_credits(struct tb_port *port, int credits); int tb_port_clear_counter(struct tb_port *port, int counter); @@ -1126,6 +1154,7 @@ static inline struct usb4_port *tb_to_usb4_port_device(struct device *dev) struct usb4_port *usb4_port_device_add(struct tb_port *port); void usb4_port_device_remove(struct usb4_port *usb4); int usb4_port_device_resume(struct usb4_port *usb4); +bool usb4_port_clx_supported(struct tb_port *port); /* Keep link controller awake during update */ #define QUIRK_FORCE_POWER_LINK_CONTROLLER BIT(0) diff --git a/drivers/thunderbolt/tb_regs.h b/drivers/thunderbolt/tb_regs.h index 89a2170032bf..1af822934f82 100644 --- a/drivers/thunderbolt/tb_regs.h +++ b/drivers/thunderbolt/tb_regs.h @@ -313,11 +313,15 @@ struct tb_regs_port_header { #define LANE_ADP_CS_0 0x00 #define 
LANE_ADP_CS_0_SUPPORTED_WIDTH_MASK GENMASK(25, 20) #define LANE_ADP_CS_0_SUPPORTED_WIDTH_SHIFT 20 +#define LANE_ADP_CS_0_CL0S_SUPPORT BIT(26) +#define LANE_ADP_CS_0_CL1_SUPPORT BIT(27) #define LANE_ADP_CS_1 0x01 #define LANE_ADP_CS_1_TARGET_WIDTH_MASK GENMASK(9, 4) #define LANE_ADP_CS_1_TARGET_WIDTH_SHIFT 4 #define LANE_ADP_CS_1_TARGET_WIDTH_SINGLE 0x1 #define LANE_ADP_CS_1_TARGET_WIDTH_DUAL 0x3 +#define LANE_ADP_CS_1_CL0S_ENABLE BIT(10) +#define LANE_ADP_CS_1_CL1_ENABLE BIT(11) #define LANE_ADP_CS_1_LD BIT(14) #define LANE_ADP_CS_1_LB BIT(15) #define LANE_ADP_CS_1_CURRENT_SPEED_MASK GENMASK(19, 16) @@ -326,6 +330,7 @@ struct tb_regs_port_header { #define LANE_ADP_CS_1_CURRENT_SPEED_GEN3 0x4 #define LANE_ADP_CS_1_CURRENT_WIDTH_MASK GENMASK(25, 20) #define LANE_ADP_CS_1_CURRENT_WIDTH_SHIFT 20 +#define LANE_ADP_CS_1_PMS BIT(30) /* USB4 port registers */ #define PORT_CS_1 0x01 @@ -341,6 +346,7 @@ struct tb_regs_port_header { #define PORT_CS_18 0x12 #define PORT_CS_18_BE BIT(8) #define PORT_CS_18_TCM BIT(9) +#define PORT_CS_18_CPS BIT(10) #define PORT_CS_18_WOU4S BIT(18) #define PORT_CS_19 0x13 #define PORT_CS_19_PC BIT(3) diff --git a/drivers/thunderbolt/usb4.c b/drivers/thunderbolt/usb4.c index ceddbe7e9f93..704820c9a042 100644 --- a/drivers/thunderbolt/usb4.c +++ b/drivers/thunderbolt/usb4.c @@ -2029,3 +2029,23 @@ int usb4_usb3_port_release_bandwidth(struct tb_port *port, int *upstream_bw, usb4_usb3_port_clear_cm_request(port); return ret; } + +/** + * usb4_port_clx_supported() - Check if CLx is supported by the link + * @port: Port to check for CLx support for + * + * PORT_CS_18_CPS bit reflects if the link supports CLx including + * active cables (if connected on the link). 
+ */ +bool usb4_port_clx_supported(struct tb_port *port) +{ + int ret; + u32 val; + + ret = tb_port_read(port, &val, TB_CFG_PORT, + port->cap_usb4 + PORT_CS_18, 1); + if (ret) + return false; + + return !!(val & PORT_CS_18_CPS); +} From patchwork Fri Dec 17 01:16:40 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gil Fine X-Patchwork-Id: 526010 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8AD3FC433EF for ; Fri, 17 Dec 2021 01:10:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231658AbhLQBKe (ORCPT ); Thu, 16 Dec 2021 20:10:34 -0500 Received: from mga09.intel.com ([134.134.136.24]:64461 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231656AbhLQBKd (ORCPT ); Thu, 16 Dec 2021 20:10:33 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1639703433; x=1671239433; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=EEZiImbWgDolWjuC92Ci/601o8UAARHgLrnguBJTLdM=; b=Fxd3JEOrMLr5HNDevfxUEkwBrRZMS0NQvzuSmcnWT1YVkDCH3XYf53MD ca518qnSddgJyHbnV69gLBklCqbCviZYGQlXI0f7CM1rz3puv4Uj6/aWA BE5YVyFjd9necoDXLAiDptj81PokHEH2fPNxEqdaiZpqrLNkNU7ASqrUq KE15AsUyXrz38+ev8f8D6LO/GJmjx0/qfG3qQNT98IM+R2O9Z0nvf89qb E+iSYm9MajHO9817BH31PC/tjI2//QuF9kPWN898RMc2FwDfgaI4y1W3f 65ihPBt3eTRwA8kb+cWxUMd00a7FzBFG+9rK5NswY5Kdbo0GQy93ZVCfG g==; X-IronPort-AV: E=McAfee;i="6200,9189,10200"; a="239460857" X-IronPort-AV: E=Sophos;i="5.88,212,1635231600"; d="scan'208";a="239460857" Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Dec 2021 17:10:21 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.88,212,1635231600"; 
d="scan'208";a="605733711" Received: from ccdjpclinux26.jer.intel.com ([10.12.48.253]) by FMSMGA003.fm.intel.com with ESMTP; 16 Dec 2021 17:10:18 -0800 From: Gil Fine To: andreas.noever@gmail.com, michael.jamet@intel.com, mika.westerberg@linux.intel.com, YehezkelShB@gmail.com Cc: gil.fine@intel.com, linux-usb@vger.kernel.org, lukas@wunner.de Subject: [PATCH v2 3/7] thunderbolt: Move usb4_switch_wait_for_bit() to switch.c Date: Fri, 17 Dec 2021 03:16:40 +0200 Message-Id: <20211217011644.21634-4-gil.fine@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20211217011644.21634-1-gil.fine@intel.com> References: <20211217011644.21634-1-gil.fine@intel.com> Precedence: bulk List-ID: X-Mailing-List: linux-usb@vger.kernel.org Currently usb4_switch_wait_for_bit() is used only in usb4.c. Move it to switch.c so that it can be called from other files. Also change the prefix to "tb_" to follow the naming convention. Signed-off-by: Gil Fine --- drivers/thunderbolt/switch.c | 34 ++++++++++++++++++++++++++++++++++ drivers/thunderbolt/tb.h | 2 ++ drivers/thunderbolt/usb4.c | 32 +++++--------------------------- 3 files changed, 41 insertions(+), 27 deletions(-) diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c index 2e8cb4acdf72..d58d78647175 100644 --- a/drivers/thunderbolt/switch.c +++ b/drivers/thunderbolt/switch.c @@ -3224,6 +3224,40 @@ struct tb_port *tb_switch_find_port(struct tb_switch *sw, return NULL; } +/** + * tb_switch_wait_for_bit() - Wait for specified value of bits in offset + * @sw: Switch to read the offset value from + * @offset: Offset in the router config space to read from + * @bit: Bit mask in the offset to wait for + * @value: Value of the bits to wait for + * @timeout_msec: Timeout in ms how long to wait + * + * Wait until the specified bits in the specified offset reach the specified value. + * Returns %0 in case of success, %-ETIMEDOUT if the @value was not reached + * within the given timeout or a negative errno in case of failure. 
+ */ +int tb_switch_wait_for_bit(struct tb_switch *sw, u32 offset, u32 bit, + u32 value, int timeout_msec) +{ + ktime_t timeout = ktime_add_ms(ktime_get(), timeout_msec); + + do { + u32 val; + int ret; + + ret = tb_sw_read(sw, &val, TB_CFG_SWITCH, offset, 1); + if (ret) + return ret; + + if ((val & bit) == value) + return 0; + + usleep_range(50, 100); + } while (ktime_before(ktime_get(), timeout)); + + return -ETIMEDOUT; +} + static int __tb_port_pm_secondary_set(struct tb_port *port, bool secondary) { u32 phy; diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h index 527b461a5c49..c7c135120a55 100644 --- a/drivers/thunderbolt/tb.h +++ b/drivers/thunderbolt/tb.h @@ -1155,6 +1155,8 @@ struct usb4_port *usb4_port_device_add(struct tb_port *port); void usb4_port_device_remove(struct usb4_port *usb4); int usb4_port_device_resume(struct usb4_port *usb4); bool usb4_port_clx_supported(struct tb_port *port); +int tb_switch_wait_for_bit(struct tb_switch *sw, u32 offset, u32 bit, + u32 value, int timeout_msec); /* Keep link controller awake during update */ #define QUIRK_FORCE_POWER_LINK_CONTROLLER BIT(0) diff --git a/drivers/thunderbolt/usb4.c b/drivers/thunderbolt/usb4.c index 704820c9a042..64613cafcbc3 100644 --- a/drivers/thunderbolt/usb4.c +++ b/drivers/thunderbolt/usb4.c @@ -50,28 +50,6 @@ enum usb4_ba_index { #define USB4_BA_VALUE_MASK GENMASK(31, 16) #define USB4_BA_VALUE_SHIFT 16 -static int usb4_switch_wait_for_bit(struct tb_switch *sw, u32 offset, u32 bit, - u32 value, int timeout_msec) -{ - ktime_t timeout = ktime_add_ms(ktime_get(), timeout_msec); - - do { - u32 val; - int ret; - - ret = tb_sw_read(sw, &val, TB_CFG_SWITCH, offset, 1); - if (ret) - return ret; - - if ((val & bit) == value) - return 0; - - usleep_range(50, 100); - } while (ktime_before(ktime_get(), timeout)); - - return -ETIMEDOUT; -} - static int usb4_native_switch_op(struct tb_switch *sw, u16 opcode, u32 *metadata, u8 *status, const void *tx_data, size_t tx_dwords, @@ -97,7 +75,7 @@ 
static int usb4_native_switch_op(struct tb_switch *sw, u16 opcode, if (ret) return ret; - ret = usb4_switch_wait_for_bit(sw, ROUTER_CS_26, ROUTER_CS_26_OV, 0, 500); + ret = tb_switch_wait_for_bit(sw, ROUTER_CS_26, ROUTER_CS_26_OV, 0, 500); if (ret) return ret; @@ -303,8 +281,8 @@ int usb4_switch_setup(struct tb_switch *sw) if (ret) return ret; - return usb4_switch_wait_for_bit(sw, ROUTER_CS_6, ROUTER_CS_6_CR, - ROUTER_CS_6_CR, 50); + return tb_switch_wait_for_bit(sw, ROUTER_CS_6, ROUTER_CS_6_CR, + ROUTER_CS_6_CR, 50); } /** @@ -480,8 +458,8 @@ int usb4_switch_set_sleep(struct tb_switch *sw) if (ret) return ret; - return usb4_switch_wait_for_bit(sw, ROUTER_CS_6, ROUTER_CS_6_SLPR, - ROUTER_CS_6_SLPR, 500); + return tb_switch_wait_for_bit(sw, ROUTER_CS_6, ROUTER_CS_6_SLPR, + ROUTER_CS_6_SLPR, 500); } /** From patchwork Fri Dec 17 01:16:41 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gil Fine X-Patchwork-Id: 525463 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1C3A7C433F5 for ; Fri, 17 Dec 2021 01:10:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231661AbhLQBKf (ORCPT ); Thu, 16 Dec 2021 20:10:35 -0500 Received: from mga09.intel.com ([134.134.136.24]:64461 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231667AbhLQBKf (ORCPT ); Thu, 16 Dec 2021 20:10:35 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1639703434; x=1671239434; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=s0Y2ZAs+U+P69zkOrpS9IkZeP7CSaDIRjStL157RiSI=; b=g0idAjxus43UpWRAJnugd+0p+1bXZI4Mxyig6da4gM0RiRtGWa43SmZS AXx11xIUpwCHWdd+jxD9CG+VOv5JGjIYHqdx3Ny8e55gD4gVGfib2E+Be 
kvn18hdvCzdbel3HKSqXnsoZ9ROexR1gYknCGbCeRFhyroFlHQFBaSjqq l/T/C/x37RR7EmgN9/6uDsKVPzYTAPjPij0fNQnI+CA52TxWzIiUkBz48 utlQ9RG3PkjHV6LxM+AfmQFj3B5PU10m6PSz+YwOZh9q3o2xRTFF5rDHj yZyOhzkn+qo4xKpYds/AWGbRpVZ/XtXwqDqza9ycelr7tZBiPRm4X/svI w==; X-IronPort-AV: E=McAfee;i="6200,9189,10200"; a="239460879" X-IronPort-AV: E=Sophos;i="5.88,212,1635231600"; d="scan'208";a="239460879" Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Dec 2021 17:10:23 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.88,212,1635231600"; d="scan'208";a="605733730" Received: from ccdjpclinux26.jer.intel.com ([10.12.48.253]) by FMSMGA003.fm.intel.com with ESMTP; 16 Dec 2021 17:10:21 -0800 From: Gil Fine To: andreas.noever@gmail.com, michael.jamet@intel.com, mika.westerberg@linux.intel.com, YehezkelShB@gmail.com Cc: gil.fine@intel.com, linux-usb@vger.kernel.org, lukas@wunner.de Subject: [PATCH v2 4/7] thunderbolt: Enable TMU for Titan Ridge device Date: Fri, 17 Dec 2021 03:16:41 +0200 Message-Id: <20211217011644.21634-5-gil.fine@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20211217011644.21634-1-gil.fine@intel.com> References: <20211217011644.21634-1-gil.fine@intel.com> Precedence: bulk List-ID: X-Mailing-List: linux-usb@vger.kernel.org In this patch we enable TMU for the Titan Ridge device. This is needed later to enable CL0s (link low power state). 
Signed-off-by: Gil Fine --- drivers/thunderbolt/switch.c | 4 ++++ drivers/thunderbolt/tb.h | 2 ++ drivers/thunderbolt/tb_regs.h | 4 ++++ drivers/thunderbolt/tmu.c | 18 +++++++++++++----- 4 files changed, 23 insertions(+), 5 deletions(-) diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c index d58d78647175..278f891e44e0 100644 --- a/drivers/thunderbolt/switch.c +++ b/drivers/thunderbolt/switch.c @@ -2194,6 +2194,10 @@ struct tb_switch *tb_switch_alloc(struct tb *tb, struct device *parent, if (ret > 0) sw->cap_plug_events = ret; + ret = tb_switch_find_vse_cap(sw, TB_VSE_CAP_TIME2); + if (ret > 0) + sw->cap_vsec_tmu = ret; + ret = tb_switch_find_vse_cap(sw, TB_VSE_CAP_LINK_CONTROLLER); if (ret > 0) sw->cap_lc = ret; diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h index c7c135120a55..f72f8f9aefda 100644 --- a/drivers/thunderbolt/tb.h +++ b/drivers/thunderbolt/tb.h @@ -138,6 +138,7 @@ enum tb_clx { * @link_usb4: Upstream link is USB4 * @generation: Switch Thunderbolt generation * @cap_plug_events: Offset to the plug events capability (%0 if not found) + * @cap_vsec_tmu: Offset to the TMU vendor specific capability (%0 if not found) * @cap_lc: Offset to the link controller capability (%0 if not found) * @is_unplugged: The switch is going away * @drom: DROM of the switch (%NULL if not found) @@ -189,6 +190,7 @@ struct tb_switch { bool link_usb4; unsigned int generation; int cap_plug_events; + int cap_vsec_tmu; int cap_lc; bool is_unplugged; u8 *drom; diff --git a/drivers/thunderbolt/tb_regs.h b/drivers/thunderbolt/tb_regs.h index 1af822934f82..0b5e4891567d 100644 --- a/drivers/thunderbolt/tb_regs.h +++ b/drivers/thunderbolt/tb_regs.h @@ -447,6 +447,10 @@ struct tb_regs_hop { u32 unknown3:3; /* set to zero */ } __packed; +/* TMU Thunderbolt 3 registers */ +#define TB_TIME_VSEC_3_CS_26 0x1a +#define TB_TIME_VSEC_3_CS_26_TD BIT(22) + /* Common link controller registers */ #define TB_LC_DESC 0x02 #define TB_LC_DESC_NLC_MASK GENMASK(3, 0) 
diff --git a/drivers/thunderbolt/tmu.c b/drivers/thunderbolt/tmu.c index 50f528211852..3a50244f7041 100644 --- a/drivers/thunderbolt/tmu.c +++ b/drivers/thunderbolt/tmu.c @@ -152,21 +152,29 @@ static int tb_port_tmu_time_sync_enable(struct tb_port *port) static int tb_switch_tmu_set_time_disruption(struct tb_switch *sw, bool set) { + u32 val, offset, bit; int ret; - u32 val; + + if (tb_switch_is_usb4(sw)) { + offset = sw->tmu.cap + TMU_RTR_CS_0; + bit = TMU_RTR_CS_0_TD; + } else { + offset = sw->cap_vsec_tmu + TB_TIME_VSEC_3_CS_26; + bit = TB_TIME_VSEC_3_CS_26_TD; + } ret = tb_sw_read(sw, &val, TB_CFG_SWITCH, - sw->tmu.cap + TMU_RTR_CS_0, 1); + offset, 1); if (ret) return ret; if (set) - val |= TMU_RTR_CS_0_TD; + val |= bit; else - val &= ~TMU_RTR_CS_0_TD; + val &= ~bit; return tb_sw_write(sw, &val, TB_CFG_SWITCH, - sw->tmu.cap + TMU_RTR_CS_0, 1); + offset, 1); } /** From patchwork Fri Dec 17 01:16:42 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gil Fine X-Patchwork-Id: 526009 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 395DFC433F5 for ; Fri, 17 Dec 2021 01:10:38 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231679AbhLQBKh (ORCPT ); Thu, 16 Dec 2021 20:10:37 -0500 Received: from mga09.intel.com ([134.134.136.24]:64461 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231668AbhLQBKg (ORCPT ); Thu, 16 Dec 2021 20:10:36 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1639703436; x=1671239436; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=+B/Pjk3rHioXV9UWFFc3k1vyEULDgaX2ibGBMc5Cf/c=; b=gc6AXL9OYzRA7mcYksqNnU+W9d9gWtSvSNSI4IDPPVojAM3hhzLH+x3x 
gVa/ZDjn5hDWvVD3yKreI1wsW88ux3XpwVwcbMCwbt5AFYlDCQsHetjtU 5StaR3gGqk8R37hZgyDPmhS9i6Gkf4obMT5dNxRm3Jy/kFCbnX2D5cewu GYZAC9vAbK1G9NKP5GYokdippeQhghVTsmKcbvXPy80rF9j+m1BXK1mdW bU1t+w0nOSpeW37KnLUJwV/9RLYZpcwUIN1Qfh2hlrpqGIBJ4TCSjlHDo iz8t4zU8Sg8lHQ8TrnP1nBrvxNL+jjsv7+ZhQLTWxgvgS9QoThzYisuQL g==; X-IronPort-AV: E=McAfee;i="6200,9189,10200"; a="239460895" X-IronPort-AV: E=Sophos;i="5.88,212,1635231600"; d="scan'208";a="239460895" Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Dec 2021 17:10:25 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.88,212,1635231600"; d="scan'208";a="605733749" Received: from ccdjpclinux26.jer.intel.com ([10.12.48.253]) by FMSMGA003.fm.intel.com with ESMTP; 16 Dec 2021 17:10:23 -0800 From: Gil Fine To: andreas.noever@gmail.com, michael.jamet@intel.com, mika.westerberg@linux.intel.com, YehezkelShB@gmail.com Cc: gil.fine@intel.com, linux-usb@vger.kernel.org, lukas@wunner.de Subject: [PATCH v2 5/7] thunderbolt: Rename Intel VSC capability Date: Fri, 17 Dec 2021 03:16:42 +0200 Message-Id: <20211217011644.21634-6-gil.fine@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20211217011644.21634-1-gil.fine@intel.com> References: <20211217011644.21634-1-gil.fine@intel.com> Precedence: bulk List-ID: X-Mailing-List: linux-usb@vger.kernel.org Rename the VSC capability TB_VSE_CAP_IECS to TB_VSE_CAP_CP_LP to follow the Intel device naming as it appears in the datasheet. This capability is used for controlling CLx (the low power states of the link). 
Signed-off-by: Gil Fine --- drivers/thunderbolt/tb_regs.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/thunderbolt/tb_regs.h b/drivers/thunderbolt/tb_regs.h index 0b5e4891567d..7a1b5a06303a 100644 --- a/drivers/thunderbolt/tb_regs.h +++ b/drivers/thunderbolt/tb_regs.h @@ -33,7 +33,7 @@ enum tb_switch_cap { enum tb_switch_vse_cap { TB_VSE_CAP_PLUG_EVENTS = 0x01, /* also EEPROM */ TB_VSE_CAP_TIME2 = 0x03, - TB_VSE_CAP_IECS = 0x04, + TB_VSE_CAP_CP_LP = 0x04, TB_VSE_CAP_LINK_CONTROLLER = 0x06, /* also IECS */ }; From patchwork Fri Dec 17 01:16:43 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Gil Fine X-Patchwork-Id: 526008 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E9D45C433F5 for ; Fri, 17 Dec 2021 01:10:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231720AbhLQBKo (ORCPT ); Thu, 16 Dec 2021 20:10:44 -0500 Received: from mga09.intel.com ([134.134.136.24]:64461 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231689AbhLQBKj (ORCPT ); Thu, 16 Dec 2021 20:10:39 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1639703439; x=1671239439; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=Ph6anMJDejJhvDxMBukwdoJ2+YMCial6FgA6S9PLhis=; b=TXFvIPYrBd1uAMsDxywDdbWxpXzUWFF299QcatdpX2kuTfKzOqvHDO43 s35XWLT6K56FCXrKHwK50ey0/IF/Z8hu9AsEcwwj5NbZrpnP8hWoKq756 qu0sBZq0oCsQRY0iDTumPHO39a/sLODaR/Ld8n/XXsaeRmi+2Gdbv4iYr XLMwWDwgWj25BS/0jqO+xg/fm6qHCYzgzuIikF4cp0SUJ68ka6SgoWh2l nfxz1qh6pD42zLw4FaLGC4qRcJqI9gfrOj42CNfmwbhMh8wt+u91hZVfL N8CDszaHkLC7Cnj0bzAzKLKElxnlgv+aMkvbS5aqfUvR3brYyyp2YNQLC w==; X-IronPort-AV: E=McAfee;i="6200,9189,10200"; a="239460919" 
X-IronPort-AV: E=Sophos;i="5.88,212,1635231600"; d="scan'208";a="239460919" Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Dec 2021 17:10:28 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.88,212,1635231600"; d="scan'208";a="605733771" Received: from ccdjpclinux26.jer.intel.com ([10.12.48.253]) by FMSMGA003.fm.intel.com with ESMTP; 16 Dec 2021 17:10:25 -0800 From: Gil Fine To: andreas.noever@gmail.com, michael.jamet@intel.com, mika.westerberg@linux.intel.com, YehezkelShB@gmail.com Cc: gil.fine@intel.com, linux-usb@vger.kernel.org, lukas@wunner.de Subject: [PATCH v2 6/7] thunderbolt: Enable CL0s for Titan Ridge device Date: Fri, 17 Dec 2021 03:16:43 +0200 Message-Id: <20211217011644.21634-7-gil.fine@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20211217011644.21634-1-gil.fine@intel.com> References: <20211217011644.21634-1-gil.fine@intel.com> Precedence: bulk List-ID: X-Mailing-List: linux-usb@vger.kernel.org In this patch we add enabling of CL0s, a low power state of the link, for the Titan Ridge device. Low power states (collectively called CLx) are used to reduce transmitter and receiver power when a high-speed lane is idle. For now, we add support only for the first low power state, CL0s. We enable it only if both sides of the link support it, and only for the first hop router (i.e. the first device connected to the host router). That is the most common use-case; it is intended for better thermal management and so helps to improve performance. 
Signed-off-by: Gil Fine --- drivers/thunderbolt/lc.c | 27 +++++ drivers/thunderbolt/switch.c | 194 +++++++++++++++++++++++++++++++++- drivers/thunderbolt/tb.c | 7 ++ drivers/thunderbolt/tb.h | 7 ++ drivers/thunderbolt/tb_regs.h | 40 ++++++- drivers/thunderbolt/tmu.c | 68 +++++++++++- 6 files changed, 336 insertions(+), 7 deletions(-) diff --git a/drivers/thunderbolt/lc.c b/drivers/thunderbolt/lc.c index c178f0d7beab..29ead7d75d9c 100644 --- a/drivers/thunderbolt/lc.c +++ b/drivers/thunderbolt/lc.c @@ -508,3 +508,30 @@ int tb_lc_force_power(struct tb_switch *sw) return tb_sw_write(sw, &in, TB_CFG_SWITCH, TB_LC_POWER, 1); } + +/** + * tb_lc_clx_supported() - Check whether CLx is supported by the link + * @port: Port to check + * + * TB_LC_LINK_ATTR_CPS bit reflects if the link supports CLx including + * active cables (if connected on the link). + */ +bool tb_lc_clx_supported(struct tb_port *port) +{ + struct tb_switch *sw = port->sw; + int cap, ret; + u32 val; + + if (!tb_switch_is_titan_ridge(sw)) + return false; + + cap = find_port_lc_cap(port); + if (cap < 0) + return false; + + ret = tb_sw_read(sw, &val, TB_CFG_SWITCH, cap + TB_LC_LINK_ATTR, 1); + if (ret) + return false; + + return !!(val & TB_LC_LINK_ATTR_CPS); +} diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c index 278f891e44e0..cd6d16417a80 100644 --- a/drivers/thunderbolt/switch.c +++ b/drivers/thunderbolt/switch.c @@ -2202,6 +2202,10 @@ struct tb_switch *tb_switch_alloc(struct tb *tb, struct device *parent, if (ret > 0) sw->cap_lc = ret; + ret = tb_switch_find_vse_cap(sw, TB_VSE_CAP_CP_LP); + if (ret > 0) + sw->cap_lp = ret; + /* Root switch is always authorized */ if (!route) sw->authorized = true; @@ -3008,6 +3012,14 @@ void tb_switch_suspend(struct tb_switch *sw, bool runtime) tb_sw_dbg(sw, "suspending switch\n"); + /* + * Actually this needs to be done only for Titan Ridge, + * but for simplicity it can be done for USB4 devices too + * as CLx is re-enabled at resume. 
+ */ + if (tb_switch_disable_clx(sw, TB_CL0S)) + tb_sw_warn(sw, "failed to disable CLx on upstream port\n"); + err = tb_plug_events_active(sw, false); if (err) return; @@ -3326,8 +3338,12 @@ static bool tb_port_clx_supported(struct tb_port *port, enum tb_clx clx) if (port->xdomain) return false; - if (!usb4_port_clx_supported(port)) + if (tb_switch_is_usb4(port->sw) && !usb4_port_clx_supported(port)) { + return false; + } else if (tb_switch_is_titan_ridge(port->sw) && + !tb_lc_clx_supported(port)) { return false; + } switch (clx) { case TB_CL0S: @@ -3393,7 +3409,7 @@ static int tb_switch_enable_cl0s(struct tb_switch *sw) struct tb_port *up, *down; int ret; - if (!tb_switch_is_usb4(sw)) + if (!tb_switch_clx_supported(sw)) return 0; /* @@ -3434,6 +3450,13 @@ static int tb_switch_enable_cl0s(struct tb_switch *sw) return ret; } + ret = tb_port_titan_ridge_clx_objection_mask(sw); + if (ret) { + tb_port_cl0s_disable(up); + tb_port_cl0s_disable(down); + return ret; + } + sw->clx = TB_CL0S; tb_sw_dbg(sw, "enabled CL0s on upstream port\n"); @@ -3479,7 +3502,7 @@ static int tb_switch_disable_cl0s(struct tb_switch *sw) struct tb_port *up, *down; int ret; - if (!tb_switch_is_usb4(sw)) + if (!tb_switch_clx_supported(sw)) return 0; /* @@ -3550,3 +3573,168 @@ bool tb_switch_is_cl0s_enabled(struct tb_switch *sw) { return sw->clx == TB_CL0S; } + +/** + * tb_switch_clx_supported() - Is CLx supported on this type of switch + * + * @sw: The switch to check CLx support for + */ +bool tb_switch_clx_supported(struct tb_switch *sw) +{ + if (tb_switch_is_usb4(sw) || tb_switch_is_titan_ridge(sw)) + return true; + return false; +} + +/** + * tb_port_titan_ridge_clx_objection_mask() - Mask CLx objections for a switch + * @sw: Switch to mask objections for + * + * Mask the objections coming from the second depth routers in order + * to stop these objections from interfering with the CL states of the first + * depth link. 
+ */ +int tb_port_titan_ridge_clx_objection_mask(struct tb_switch *sw) +{ + int up_port = sw->config.upstream_port_number; + u32 offset, val[2], mask_obj, unmask_obj; + int ret, i; + + if (!tb_switch_is_titan_ridge(sw)) + return 0; + + if (!tb_route(sw)) + return 0; + + /* + * In Titan Ridge there are only 2 dual-lane Thunderbolt ports: + * Port A consists of lane adapters 1,2 and + * Port B consists of lane adapters 3,4 + * If upstream port is A, (lanes are 1,2), we mask objections from + * port B (lanes 3,4) and unmask objections from Port A and vice-versa. + */ + if (up_port == 1) { + mask_obj = TB_LOW_PWR_C0_PORT_B_MASK; + unmask_obj = TB_LOW_PWR_C1_PORT_A_MASK; + offset = TB_LOW_PWR_C1_CL1; + } else { + mask_obj = TB_LOW_PWR_C1_PORT_A_MASK; + unmask_obj = TB_LOW_PWR_C0_PORT_B_MASK; + offset = TB_LOW_PWR_C3_CL1; + } + + ret = tb_sw_read(sw, &val, TB_CFG_SWITCH, + sw->cap_lp + offset, ARRAY_SIZE(val)); + if (ret) + return ret; + + for (i = 0; i < ARRAY_SIZE(val); i++) { + val[i] |= mask_obj; + val[i] &= ~unmask_obj; + } + + return tb_sw_write(sw, &val, TB_CFG_SWITCH, + sw->cap_lp + offset, ARRAY_SIZE(val)); +} + +static unsigned int tb_switch_titan_ridge_pcie_bridge_mask(unsigned int br) +{ + + return BIT(br + TB_PLUG_EVENTS_PCIE_CMD_BR_SHIFT); +} + +/* + * Can be used for read/write a specified PCIe bridge for any + * Thunderbolt 3 device. For now used only for Titan Ridge. + */ +static int tb_switch_titan_ridge_access_pcie_bridge(struct tb_switch *sw, + unsigned int bridge, + unsigned int pcie_offset, + unsigned int *value, + bool write) +{ + u32 offset, command, val; + int ret; + + if (sw->generation != 3) + return -EOPNOTSUPP; + + if (write) { + offset = sw->cap_plug_events + TB_PLUG_EVENTS_PCIE_WR_DATA; + ret = tb_sw_write(sw, value, TB_CFG_SWITCH, offset, 1); + if (ret) + return ret; + } + + command = pcie_offset & TB_PLUG_EVENTS_PCIE_CMD_DW_OFFSET_MASK; + command |= tb_switch_titan_ridge_pcie_bridge_mask(bridge); + command |= write ? 
TB_PLUG_EVENTS_PCIE_CMD_RD_WR_MASK : 0; + command |= TB_PLUG_EVENTS_PCIE_CMD_COMMAND_VAL << + TB_PLUG_EVENTS_PCIE_CMD_COMMAND_SHIFT; + command |= TB_PLUG_EVENTS_PCIE_CMD_REQ_ACK_MASK; + + offset = sw->cap_plug_events + TB_PLUG_EVENTS_PCIE_CMD; + + ret = tb_sw_write(sw, &command, TB_CFG_SWITCH, offset, 1); + if (ret) + return ret; + + ret = tb_switch_wait_for_bit(sw, offset, + TB_PLUG_EVENTS_PCIE_CMD_REQ_ACK_MASK, 0, 100); + if (ret) + return ret; + + ret = tb_sw_read(sw, &val, TB_CFG_SWITCH, offset, 1); + if (ret) + return ret; + + if (val & TB_PLUG_EVENTS_PCIE_CMD_TIMEOUT_MASK) + return -ETIMEDOUT; + + if (!write) { + offset = sw->cap_plug_events + TB_PLUG_EVENTS_PCIE_CMD_RD_DATA; + ret = tb_sw_read(sw, value, TB_CFG_SWITCH, offset, 1); + if (ret) + return ret; + } + + return ret; +} + +/** + * tb_switch_titan_ridge_pcie_l1_enable() - Enable PCIe link to enter L1 state + * @sw: Titan Ridge switch to enable PCIe L1 for + * + * For Titan Ridge switch to enter CLx state, its PCIe bridges + * shall enable entry to PCIe L1 state. Shall be called after + * the upstream PCIe tunnel was configured. Due to Intel platforms limitation, + * shall be called only for first hop switch. 
+ */ +int tb_switch_titan_ridge_pcie_l1_enable(struct tb_switch *sw) +{ + struct tb_switch *parent = tb_switch_parent(sw); + unsigned int value; + int ret; + + if (!tb_route(sw)) + return 0; + + if (!tb_switch_is_titan_ridge(sw)) + return 0; + + /* Enable PCIe L1 enable only for first hop router (depth = 1) */ + if (tb_route(parent)) + return 0; + + /* Write to Downstream PCIe bridge #5 aka Dn4 */ + value = 0x0C7806B1; + ret = tb_switch_titan_ridge_access_pcie_bridge(sw, 5, 0x143, + &value, true); + if (ret) + return ret; + + /* Write to Upstream PCIe bridge #0 aka Up0 */ + value = 0x0C5806B1; + return tb_switch_titan_ridge_access_pcie_bridge(sw, 0, 0x143, + &value, true); +} diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c index 2d3c8c5baf9f..08dca66ea274 100644 --- a/drivers/thunderbolt/tb.c +++ b/drivers/thunderbolt/tb.c @@ -1092,6 +1092,13 @@ static int tb_tunnel_pci(struct tb *tb, struct tb_switch *sw) return -EIO; } + /* + * In case of Titan Ridge CLx shall be enabled, enable PCIe L1 here, + * after PCIe tunnelling was enabled. 
+ */ + if (tb_switch_titan_ridge_pcie_l1_enable(sw)) + tb_sw_warn(sw, "failed to enable PCIe L1 for Titan Ridge\n"); + list_add_tail(&tunnel->list, &tcm->tunnel_list); return 0; } diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h index f72f8f9aefda..614fe0a317fa 100644 --- a/drivers/thunderbolt/tb.h +++ b/drivers/thunderbolt/tb.h @@ -140,6 +140,7 @@ enum tb_clx { * @cap_plug_events: Offset to the plug events capability (%0 if not found) * @cap_vsec_tmu: Offset to the TMU vendor specific capability (%0 if not found) * @cap_lc: Offset to the link controller capability (%0 if not found) + * @cap_lp: Offset to the low power (CLx for TBT) capability (%0 if not found) * @is_unplugged: The switch is going away * @drom: DROM of the switch (%NULL if not found) * @nvm: Pointer to the NVM if the switch has one (%NULL otherwise) @@ -192,6 +193,7 @@ struct tb_switch { int cap_plug_events; int cap_vsec_tmu; int cap_lc; + int cap_lp; bool is_unplugged; u8 *drom; struct tb_nvm *nvm; @@ -933,6 +935,10 @@ void tb_switch_tmu_configure(struct tb_switch *sw, int tb_switch_enable_clx(struct tb_switch *sw, enum tb_clx clx); int tb_switch_disable_clx(struct tb_switch *sw, enum tb_clx clx); bool tb_switch_is_clx_enabled(struct tb_switch *sw); +bool tb_switch_is_cl0s_enabled(struct tb_switch *sw); +bool tb_switch_clx_supported(struct tb_switch *sw); +int tb_switch_titan_ridge_pcie_l1_enable(struct tb_switch *sw); +int tb_port_titan_ridge_clx_objection_mask(struct tb_switch *sw); int tb_wait_for_port(struct tb_port *port, bool wait_if_unplugged); int tb_port_add_nfc_credits(struct tb_port *port, int credits); @@ -1034,6 +1040,7 @@ bool tb_lc_dp_sink_query(struct tb_switch *sw, struct tb_port *in); int tb_lc_dp_sink_alloc(struct tb_switch *sw, struct tb_port *in); int tb_lc_dp_sink_dealloc(struct tb_switch *sw, struct tb_port *in); int tb_lc_force_power(struct tb_switch *sw); +bool tb_lc_clx_supported(struct tb_port *port); static inline int tb_route_length(u64 route) { diff 
--git a/drivers/thunderbolt/tb_regs.h b/drivers/thunderbolt/tb_regs.h index 7a1b5a06303a..4e77558ea66b 100644 --- a/drivers/thunderbolt/tb_regs.h +++ b/drivers/thunderbolt/tb_regs.h @@ -448,8 +448,42 @@ struct tb_regs_hop { } __packed; /* TMU Thunderbolt 3 registers */ -#define TB_TIME_VSEC_3_CS_26 0x1a -#define TB_TIME_VSEC_3_CS_26_TD BIT(22) +#define TB_TIME_VSEC_3_CS_9 0x9 +#define TB_TIME_VSEC_3_CS_9_TMU_OBJ_MASK GENMASK(17, 16) +#define TB_TIME_VSEC_3_CS_26 0x1a +#define TB_TIME_VSEC_3_CS_26_TD BIT(22) + +/* + * Used for Titan Ridge only. Bits are part of the same register: TMU_ADP_CS_6 + * (see above) as in USB4 spec, but these specific bits used for Titan Ridge + * only and reserved in USB4 spec. + */ +#define TMU_ADP_CS_6_DISABLE_TMU_OBJ_MASK GENMASK(3, 2) +#define TMU_ADP_CS_6_DISABLE_TMU_OBJ_CL1 BIT(2) +#define TMU_ADP_CS_6_DISABLE_TMU_OBJ_CL2 BIT(3) + +/* Plug Events registers */ +#define TB_PLUG_EVENTS_PCIE_WR_DATA 0x1b +#define TB_PLUG_EVENTS_PCIE_CMD 0x1c +#define TB_PLUG_EVENTS_PCIE_CMD_DW_OFFSET_MASK GENMASK(9, 0) +#define TB_PLUG_EVENTS_PCIE_CMD_BR_SHIFT 10 +#define TB_PLUG_EVENTS_PCIE_CMD_BR_MASK GENMASK(17, 10) +#define TB_PLUG_EVENTS_PCIE_CMD_RD_WR_MASK BIT(21) +#define TB_PLUG_EVENTS_PCIE_CMD_WR 0x1 +#define TB_PLUG_EVENTS_PCIE_CMD_COMMAND_SHIFT 22 +#define TB_PLUG_EVENTS_PCIE_CMD_COMMAND_MASK GENMASK(24, 22) +#define TB_PLUG_EVENTS_PCIE_CMD_COMMAND_VAL 0x2 +#define TB_PLUG_EVENTS_PCIE_CMD_REQ_ACK_MASK BIT(30) +#define TB_PLUG_EVENTS_PCIE_CMD_TIMEOUT_MASK BIT(31) +#define TB_PLUG_EVENTS_PCIE_CMD_RD_DATA 0x1d + +/* CP Low Power registers */ +#define TB_LOW_PWR_C1_CL1 0x1 +#define TB_LOW_PWR_C1_CL1_OBJ_MASK GENMASK(4, 1) +#define TB_LOW_PWR_C1_CL2_OBJ_MASK GENMASK(4, 1) +#define TB_LOW_PWR_C1_PORT_A_MASK GENMASK(2, 1) +#define TB_LOW_PWR_C0_PORT_B_MASK GENMASK(4, 3) +#define TB_LOW_PWR_C3_CL1 0x3 /* Common link controller registers */ #define TB_LC_DESC 0x02 @@ -485,5 +519,7 @@ struct tb_regs_hop { #define TB_LC_SX_CTRL_SLI BIT(29) #define 
TB_LC_SX_CTRL_UPSTREAM BIT(30) #define TB_LC_SX_CTRL_SLP BIT(31) +#define TB_LC_LINK_ATTR 0x97 +#define TB_LC_LINK_ATTR_CPS BIT(18) #endif diff --git a/drivers/thunderbolt/tmu.c b/drivers/thunderbolt/tmu.c index 3a50244f7041..eeba8b9adae9 100644 --- a/drivers/thunderbolt/tmu.c +++ b/drivers/thunderbolt/tmu.c @@ -337,7 +337,12 @@ int tb_switch_tmu_post_time(struct tb_switch *sw) int tb_switch_tmu_disable(struct tb_switch *sw) { - if (!tb_switch_is_usb4(sw)) + /* + * No need to disable TMU on devices that don't support CLx since on + * these devices e.g. Alpine Ridge and earlier, the TMU mode HiFi-BiDir + * is enabled by default and we don't change it. + */ + if (!tb_switch_clx_supported(sw)) return 0; /* Already disabled? */ @@ -450,6 +455,53 @@ static int __tb_switch_tmu_enable_bidir(struct tb_switch *sw) return ret; } +static int tb_switch_titan_ridge_tmu_objection_mask(struct tb_switch *sw) +{ + u32 val; + int ret; + + ret = tb_sw_read(sw, &val, TB_CFG_SWITCH, + sw->cap_vsec_tmu + TB_TIME_VSEC_3_CS_9, 1); + if (ret) + return ret; + + val &= ~TB_TIME_VSEC_3_CS_9_TMU_OBJ_MASK; + + return tb_sw_write(sw, &val, TB_CFG_SWITCH, + sw->cap_vsec_tmu + TB_TIME_VSEC_3_CS_9, 1); +} + +static int tb_switch_titan_ridge_tmu_uni_enable(struct tb_switch *sw) +{ + struct tb_port *up; + + if (!sw->tmu.unidirect_request) + return 0; + + up = tb_upstream_port(sw); + return tb_port_tmu_write(up, TMU_ADP_CS_6, + TMU_ADP_CS_6_DISABLE_TMU_OBJ_MASK, + TMU_ADP_CS_6_DISABLE_TMU_OBJ_MASK); +} + +static int tb_switch_titan_ridge_tmu_enable(struct tb_switch *sw) +{ + int ret; + + if (!tb_switch_is_titan_ridge(sw)) + return 0; + + /* Titan Ridge supports only CL0s */ + if (!tb_switch_is_cl0s_enabled(sw)) + return -EOPNOTSUPP; + + ret = tb_switch_titan_ridge_tmu_objection_mask(sw); + if (ret) + return ret; + + return tb_switch_titan_ridge_tmu_uni_enable(sw); +} + /* * This function is called when the previous TMU mode was * TB_SWITCH_TMU_RATE_OFF. 
@@ -496,7 +548,12 @@ static int tb_switch_tmu_hifi_enable(struct tb_switch *sw)
 	if (unidirect && !sw->tmu.has_ucap)
 		return -EOPNOTSUPP;
 
-	if (!tb_switch_is_usb4(sw))
+	/*
+	 * No need to enable TMU on devices that don't support CLx since on
+	 * these devices e.g. Alpine Ridge and earlier, the TMU mode HiFi-BiDir
+	 * is enabled by default.
+	 */
+	if (!tb_switch_clx_supported(sw))
 		return 0;
 
 	if (tb_switch_tmu_hifi_is_enabled(sw, sw->tmu.unidirect_request))
@@ -547,9 +604,16 @@ static int tb_switch_tmu_hifi_enable(struct tb_switch *sw)
  */
 int tb_switch_tmu_enable(struct tb_switch *sw)
 {
+	int ret;
+
 	if (sw->tmu.rate_request == TB_SWITCH_TMU_RATE_NORMAL)
 		return -EOPNOTSUPP;
 
+	/* Titan Ridge specific operations to enable CL0s */
+	ret = tb_switch_titan_ridge_tmu_enable(sw);
+	if (ret)
+		return ret;
+
 	return tb_switch_tmu_hifi_enable(sw);
 }

From patchwork Fri Dec 17 01:16:44 2021
X-Patchwork-Submitter: Gil Fine
X-Patchwork-Id: 525462
From: Gil Fine
To: andreas.noever@gmail.com, michael.jamet@intel.com,
    mika.westerberg@linux.intel.com, YehezkelShB@gmail.com
Cc: gil.fine@intel.com, linux-usb@vger.kernel.org, lukas@wunner.de
Subject: [PATCH v2 7/7] thunderbolt: Add kernel param for CLx disabling
Date: Fri, 17 Dec 2021 03:16:44 +0200
Message-Id: <20211217011644.21634-8-gil.fine@intel.com>
In-Reply-To: <20211217011644.21634-1-gil.fine@intel.com>
References: <20211217011644.21634-1-gil.fine@intel.com>

Add a module parameter that allows the user to completely disable CLx
functionality in case problems are found.
Signed-off-by: Gil Fine
---
 drivers/thunderbolt/switch.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index cd6d16417a80..7224b9aa75ed 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -13,6 +13,7 @@
 #include
 #include
 #include
+#include
 
 #include "tb.h"
 
@@ -26,6 +27,10 @@ struct nvm_auth_status {
 	u32 status;
 };
 
+static bool clx_enabled = true;
+module_param_named(clx, clx_enabled, bool, 0444);
+MODULE_PARM_DESC(clx, "allow CLx on the High-Speed link (default: true)");
+
 /*
  * Hold NVM authentication failure status per switch This information
  * needs to stay around even when the switch gets power cycled so we
@@ -3482,6 +3487,9 @@ int tb_switch_enable_clx(struct tb_switch *sw, enum tb_clx clx)
 {
 	struct tb_switch *root_sw = sw->tb->root_switch;
 
+	if (!clx_enabled)
+		return 0;
+
 	/* CLx is not enabled and validated on USB4 platforms before ADL */
 	if (root_sw->generation < 4 ||
 	    tb_switch_is_tiger_lake(root_sw))
@@ -3541,6 +3549,9 @@ static int tb_switch_disable_cl0s(struct tb_switch *sw)
  */
 int tb_switch_disable_clx(struct tb_switch *sw, enum tb_clx clx)
 {
+	if (!clx_enabled)
+		return 0;
+
 	switch (clx) {
 	case TB_CL0S:
 		return tb_switch_disable_cl0s(sw);