From patchwork Tue Nov 17 13:04:26 2020
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 325469
From: Greg Kroah-Hartman
Subject: [PATCH 4.4 03/64] mm: mempolicy: fix potential pte_unmap_unlock pte error
Date: Tue, 17 Nov 2020 14:04:26 +0100
Message-Id: <20201117122106.299492637@linuxfoundation.org>

From: Shijie Luo

[ Upstream commit 3f08842098e842c51e3b97d0dcdebf810b32558e ]

When flags in queue_pages_pte_range don't have MPOL_MF_MOVE or
MPOL_MF_MOVE_ALL bits, the loop breaks early, and passing the original
pte - 1 to pte_unmap_unlock is not a good idea.

queue_pages_pte_range can run in MPOL_MF_MOVE_ALL mode which doesn't
migrate misplaced pages but returns with EIO when encountering such a
page. Since commit a7f40cfe3b7a ("mm: mempolicy: make mbind() return
-EIO when MPOL_MF_STRICT is specified"), an early break on the first
pte in the range results in pte_unmap_unlock on an underflow pte. This
can lead to lockups later on when somebody tries to lock the pte resp.
page_table_lock again.
Fixes: a7f40cfe3b7a ("mm: mempolicy: make mbind() return -EIO when MPOL_MF_STRICT is specified")
Signed-off-by: Shijie Luo
Signed-off-by: Miaohe Lin
Signed-off-by: Andrew Morton
Reviewed-by: Oscar Salvador
Acked-by: Michal Hocko
Cc: Miaohe Lin
Cc: Feilong Lin
Cc: Shijie Luo
Cc:
Link: https://lkml.kernel.org/r/20201019074853.50856-1-luoshijie1@huawei.com
Signed-off-by: Linus Torvalds
Signed-off-by: Sasha Levin
---
 mm/mempolicy.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index e101cac3d4a63..9ab7969ee7e30 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -490,14 +490,14 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
 	struct queue_pages *qp = walk->private;
 	unsigned long flags = qp->flags;
 	int nid;
-	pte_t *pte;
+	pte_t *pte, *mapped_pte;
 	spinlock_t *ptl;
 
 	split_huge_page_pmd(vma, addr, pmd);
 	if (pmd_trans_unstable(pmd))
 		return 0;
 
-	pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
+	mapped_pte = pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
 	for (; addr != end; pte++, addr += PAGE_SIZE) {
 		if (!pte_present(*pte))
 			continue;
@@ -521,7 +521,7 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
 		} else
 			break;
 	}
-	pte_unmap_unlock(pte - 1, ptl);
+	pte_unmap_unlock(mapped_pte, ptl);
 	cond_resched();
 
 	return addr != end ? -EIO : 0;
 }
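
The fix is easier to see with a small user-space sketch (illustrative only,
not kernel code): if the walk can break before its first step, "cursor - 1"
would point before the mapped range, so the pointer returned by the map/lock
call has to be remembered separately.

#include <stdio.h>

/* Analogue of queue_pages_pte_range(): walk a table with a moving
 * cursor, but remember the start pointer for the final unlock step.
 */
static void walk(const int *table, int n, int stop_early)
{
        const int *start = table;   /* analogue of mapped_pte */
        const int *cursor = table;  /* analogue of pte */
        int i;

        for (i = 0; i < n; i++, cursor++) {
                if (stop_early)
                        break;  /* early break before cursor ever advanced */
        }

        /* After an early break, cursor == start, so "cursor - 1" would
         * point one element before the table.  Releasing "start" is
         * always correct.
         */
        printf("scanned %d entries, release offset %td\n", i, cursor - start);
}

int main(void)
{
        int table[4] = { 0, 1, 2, 3 };

        walk(table, 4, 1);  /* prints: scanned 0 entries, release offset 0 */
        walk(table, 4, 0);  /* prints: scanned 4 entries, release offset 4 */
        return 0;
}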
From patchwork Tue Nov 17 13:04:27 2020
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 324581
From: Greg Kroah-Hartman
Subject: [PATCH 4.4 04/64] time: Prevent undefined behaviour in timespec64_to_ns()
Date: Tue, 17 Nov 2020 14:04:27 +0100
Message-Id: <20201117122106.344550754@linuxfoundation.org>

From: Zeng Tao

[ Upstream commit cb47755725da7b90fecbb2aa82ac3b24a7adb89b ]

UBSAN reports:

  Undefined behaviour in ./include/linux/time64.h:127:27
  signed integer overflow:
  17179869187 * 1000000000 cannot be represented in type 'long long int'
  Call Trace:
   timespec64_to_ns include/linux/time64.h:127 [inline]
   set_cpu_itimer+0x65c/0x880 kernel/time/itimer.c:180
   do_setitimer+0x8e/0x740 kernel/time/itimer.c:245
   __x64_sys_setitimer+0x14c/0x2c0 kernel/time/itimer.c:336
   do_syscall_64+0xa1/0x540 arch/x86/entry/common.c:295

Commit bd40a175769d ("y2038: itimer: change implementation to
timespec64") replaced the original conversion, which handled time
clamping correctly, with timespec64_to_ns(), which has no overflow
protection.

Fix it in timespec64_to_ns() as this is not necessarily limited to the
usage in itimers.
[ tglx: Added comment and adjusted the fixes tag ]

Fixes: 361a3bf00582 ("time64: Add time64.h header and define struct timespec64")
Signed-off-by: Zeng Tao
Signed-off-by: Thomas Gleixner
Reviewed-by: Arnd Bergmann
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/1598952616-6416-1-git-send-email-prime.zeng@hisilicon.com
Signed-off-by: Sasha Levin
---
 include/linux/time64.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/include/linux/time64.h b/include/linux/time64.h
index 367d5af899e81..10239cffd70f8 100644
--- a/include/linux/time64.h
+++ b/include/linux/time64.h
@@ -197,6 +197,10 @@ static inline bool timespec64_valid_strict(const struct timespec64 *ts)
  */
 static inline s64 timespec64_to_ns(const struct timespec64 *ts)
 {
+	/* Prevent multiplication overflow */
+	if ((unsigned long long)ts->tv_sec >= KTIME_SEC_MAX)
+		return KTIME_MAX;
+
 	return ((s64) ts->tv_sec * NSEC_PER_SEC) + ts->tv_nsec;
 }
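
For illustration, the clamping logic above can be exercised in user space.
This is a hedged sketch, not the kernel header: struct timespec64, KTIME_MAX
and KTIME_SEC_MAX are redefined locally with their usual kernel values.

#include <stdio.h>
#include <stdint.h>

/* Local stand-ins for the kernel definitions (assumed values). */
#define NSEC_PER_SEC  1000000000LL
#define KTIME_MAX     INT64_MAX
#define KTIME_SEC_MAX (KTIME_MAX / NSEC_PER_SEC)

struct timespec64 {
        int64_t tv_sec;
        long    tv_nsec;
};

static int64_t timespec64_to_ns(const struct timespec64 *ts)
{
        /* Prevent multiplication overflow, as in the patch above. */
        if ((uint64_t)ts->tv_sec >= (uint64_t)KTIME_SEC_MAX)
                return KTIME_MAX;

        return ts->tv_sec * NSEC_PER_SEC + ts->tv_nsec;
}

int main(void)
{
        /* The value from the UBSAN report: 17179869187 seconds. */
        struct timespec64 big = { .tv_sec = 17179869187LL, .tv_nsec = 0 };
        struct timespec64 ok  = { .tv_sec = 1,             .tv_nsec = 500 };

        printf("%lld\n", (long long)timespec64_to_ns(&big)); /* clamped to KTIME_MAX */
        printf("%lld\n", (long long)timespec64_to_ns(&ok));  /* 1000000500 */
        return 0;
}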
From patchwork Tue Nov 17 13:04:31 2020
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 325467
From: Greg Kroah-Hartman
Subject: [PATCH 4.4 08/64] ALSA: hda: prevent undefined shift in snd_hdac_ext_bus_get_link()
Date: Tue, 17 Nov 2020 14:04:31 +0100
Message-Id: <20201117122106.548075540@linuxfoundation.org>

From: Dan Carpenter

[ Upstream commit 158e1886b6262c1d1c96a18c85fac5219b8bf804 ]

This is harmless, but the "addr" comes from the user and it could lead
to a negative shift or to shift wrapping if it's too high.

Fixes: 0b00a5615dc4 ("ALSA: hdac_ext: add hdac extended controller")
Signed-off-by: Dan Carpenter
Link: https://lore.kernel.org/r/20201103101807.GC1127762@mwanda
Signed-off-by: Takashi Iwai
Signed-off-by: Sasha Levin
---
 sound/hda/ext/hdac_ext_controller.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/sound/hda/ext/hdac_ext_controller.c b/sound/hda/ext/hdac_ext_controller.c
index 63215b17247c8..379250dd0668e 100644
--- a/sound/hda/ext/hdac_ext_controller.c
+++ b/sound/hda/ext/hdac_ext_controller.c
@@ -221,6 +221,8 @@ struct hdac_ext_link *snd_hdac_ext_bus_get_link(struct hdac_ext_bus *ebus,
 		return NULL;
 	if (ebus->idx != bus_idx)
 		return NULL;
+	if (addr < 0 || addr > 31)
+		return NULL;
 
 	list_for_each_entry(hlink, &ebus->hlink_list, list) {
 		for (i = 0; i < HDA_MAX_CODECS; i++) {
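
The shift problem is plain C behaviour, so a short stand-alone sketch
(independent of the HDA code) shows why the bounds check matters: shifting a
32-bit value by a negative count, or by 32 or more, is undefined.

#include <stdio.h>

/* Build a codec mask bit only for addresses that can legally be used as
 * a shift count on a 32-bit value; anything else is rejected, mirroring
 * the "addr < 0 || addr > 31" check added above.
 */
static int codec_bit(int addr, unsigned int *bit)
{
        if (addr < 0 || addr > 31)
                return -1;      /* shift would be undefined behaviour */

        *bit = 1U << addr;
        return 0;
}

int main(void)
{
        unsigned int bit;

        if (codec_bit(3, &bit) == 0)
                printf("addr 3 -> mask 0x%x\n", bit);   /* 0x8 */
        if (codec_bit(40, &bit) != 0)
                printf("addr 40 rejected\n");
        if (codec_bit(-1, &bit) != 0)
                printf("addr -1 rejected\n");
        return 0;
}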
From patchwork Tue Nov 17 13:04:34 2020
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 325479
From: Greg Kroah-Hartman
Subject: [PATCH 4.4 11/64] can: can_create_echo_skb(): fix echo skb generation: always use skb_clone()
Date: Tue, 17 Nov 2020 14:04:34 +0100
Message-Id: <20201117122106.687931012@linuxfoundation.org>

From: Oleksij Rempel

[ Upstream commit 286228d382ba6320f04fa2e7c6fc8d4d92e428f4 ]

All user space generated SKBs are owned by a socket (unless injected
into the kernel via AF_PACKET). If a socket is closed, all associated
skbs will be cleaned up.

This leads to a problem when a CAN driver calls can_put_echo_skb() on
an unshared SKB. If the socket is closed prior to the TX complete
handler, can_get_echo_skb() and the subsequent delivering of the echo
SKB to all registered callbacks, a SKB with a refcount of 0 is
delivered.

To avoid the problem, in can_get_echo_skb() the original SKB is now
always cloned, regardless of shared SKB or not. If the process exits it
can now safely discard its SKBs, without disturbing the delivery of the
echo SKB.

The problem shows up in the j1939 stack, when it clones the incoming
skb, which detects the already 0 refcount.

We can easily reproduce this with following example:

  testj1939 -B -r can0: &
  cansend can0 1823ff40#0123

  WARNING: CPU: 0 PID: 293 at lib/refcount.c:25 refcount_warn_saturate+0x108/0x174
  refcount_t: addition on 0; use-after-free.
  Modules linked in: coda_vpu imx_vdoa videobuf2_vmalloc dw_hdmi_ahb_audio vcan
  CPU: 0 PID: 293 Comm: cansend Not tainted 5.5.0-rc6-00376-g9e20dcb7040d #1
  Hardware name: Freescale i.MX6 Quad/DualLite (Device Tree)
  Backtrace:
  [] (dump_backtrace) from [] (show_stack+0x20/0x24)
  [] (show_stack) from [] (dump_stack+0x8c/0xa0)
  [] (dump_stack) from [] (__warn+0xe0/0x108)
  [] (__warn) from [] (warn_slowpath_fmt+0xa8/0xcc)
  [] (warn_slowpath_fmt) from [] (refcount_warn_saturate+0x108/0x174)
  [] (refcount_warn_saturate) from [] (j1939_can_recv+0x20c/0x210)
  [] (j1939_can_recv) from [] (can_rcv_filter+0xb4/0x268)
  [] (can_rcv_filter) from [] (can_receive+0xb0/0xe4)
  [] (can_receive) from [] (can_rcv+0x48/0x98)
  [] (can_rcv) from [] (__netif_receive_skb_one_core+0x64/0x88)
  [] (__netif_receive_skb_one_core) from [] (__netif_receive_skb+0x38/0x94)
  [] (__netif_receive_skb) from [] (netif_receive_skb_internal+0x64/0xf8)
  [] (netif_receive_skb_internal) from [] (netif_receive_skb+0x34/0x19c)
  [] (netif_receive_skb) from [] (can_rx_offload_napi_poll+0x58/0xb4)

Fixes: 0ae89beb283a ("can: add destructor for self generated skbs")
Signed-off-by: Oleksij Rempel
Link: http://lore.kernel.org/r/20200124132656.22156-1-o.rempel@pengutronix.de
Acked-by: Oliver Hartkopp
Signed-off-by: Marc Kleine-Budde
Signed-off-by: Sasha Levin
---
 include/linux/can/skb.h | 20 ++++++++------------
 1 file changed, 8 insertions(+), 12 deletions(-)

diff --git a/include/linux/can/skb.h b/include/linux/can/skb.h
index 51bb6532785c3..1a2111c775ae1 100644
--- a/include/linux/can/skb.h
+++ b/include/linux/can/skb.h
@@ -60,21 +60,17 @@ static inline void can_skb_set_owner(struct sk_buff *skb, struct sock *sk)
  */
 static inline struct sk_buff *can_create_echo_skb(struct sk_buff *skb)
 {
-	if (skb_shared(skb)) {
-		struct sk_buff *nskb = skb_clone(skb, GFP_ATOMIC);
+	struct sk_buff *nskb;
 
-		if (likely(nskb)) {
-			can_skb_set_owner(nskb, skb->sk);
-			consume_skb(skb);
-			return nskb;
-		} else {
-			kfree_skb(skb);
-			return NULL;
-		}
+	nskb = skb_clone(skb, GFP_ATOMIC);
+	if (unlikely(!nskb)) {
+		kfree_skb(skb);
+		return NULL;
 	}
 
-	/* we can assume to have an unshared skb with proper owner */
-	return skb;
+	can_skb_set_owner(nskb, skb->sk);
+	consume_skb(skb);
+	return nskb;
 }
 
 #endif /* !_CAN_SKB_H */
From patchwork Tue Nov 17 13:04:36 2020
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 325478
From: Greg Kroah-Hartman
Subject: [PATCH 4.4 13/64] can: peak_usb: peak_usb_get_ts_time(): fix timestamp wrapping
Date: Tue, 17 Nov 2020 14:04:36 +0100
Message-Id: <20201117122106.787667469@linuxfoundation.org>

From: Stephane Grosjean

[ Upstream commit ecc7b4187dd388549544195fb13a11b4ea8e6a84 ]

Fabian Inostroza has discovered a potential problem in the hardware
timestamp reporting from the PCAN-USB USB CAN interface (only), related
to the fact that a timestamp of an event may precede the timestamp used
for synchronization when both records are part of the same USB packet.
However, this case was used to detect the wrapping of the time counter.

This patch details and fixes the two identified cases where this
problem can occur.
Reported-by: Fabian Inostroza
Signed-off-by: Stephane Grosjean
Link: https://lore.kernel.org/r/20201014085631.15128-1-s.grosjean@peak-system.com
Fixes: bb4785551f64 ("can: usb: PEAK-System Technik USB adapters driver core")
Signed-off-by: Marc Kleine-Budde
Signed-off-by: Sasha Levin
---
 drivers/net/can/usb/peak_usb/pcan_usb_core.c | 51 ++++++++++++++++++--
 1 file changed, 46 insertions(+), 5 deletions(-)

diff --git a/drivers/net/can/usb/peak_usb/pcan_usb_core.c b/drivers/net/can/usb/peak_usb/pcan_usb_core.c
index 8c47cc8dc8965..22deddb2dbf5a 100644
--- a/drivers/net/can/usb/peak_usb/pcan_usb_core.c
+++ b/drivers/net/can/usb/peak_usb/pcan_usb_core.c
@@ -150,14 +150,55 @@ void peak_usb_get_ts_tv(struct peak_time_ref *time_ref, u32 ts,
 	/* protect from getting timeval before setting now */
 	if (time_ref->tv_host.tv_sec > 0) {
 		u64 delta_us;
+		s64 delta_ts = 0;
+
+		/* General case: dev_ts_1 < dev_ts_2 < ts, with:
+		 *
+		 * - dev_ts_1 = previous sync timestamp
+		 * - dev_ts_2 = last sync timestamp
+		 * - ts = event timestamp
+		 * - ts_period = known sync period (theoretical)
+		 *             ~ dev_ts2 - dev_ts1
+		 * *but*:
+		 *
+		 * - time counters wrap (see adapter->ts_used_bits)
+		 * - sometimes, dev_ts_1 < ts < dev_ts2
+		 *
+		 * "normal" case (sync time counters increase):
+		 * must take into account case when ts wraps (tsw)
+		 *
+		 *      < ts_period >          <     >
+		 *     |             |                |
+		 *  ---+--------+----+-------0-+--+-->
+		 *     ts_dev_1 |    ts_dev_2     |
+		 *              ts                tsw
+		 */
+		if (time_ref->ts_dev_1 < time_ref->ts_dev_2) {
+			/* case when event time (tsw) wraps */
+			if (ts < time_ref->ts_dev_1)
+				delta_ts = 1 << time_ref->adapter->ts_used_bits;
+
+		/* Otherwise, sync time counter (ts_dev_2) has wrapped:
+		 * handle case when event time (tsn) hasn't.
+		 *
+		 *      < ts_period >          <     >
+		 *     |             |                |
+		 *  ---+--------+--0-+---------+--+-->
+		 *     ts_dev_1 |    ts_dev_2     |
+		 *              tsn               ts
+		 */
+		} else if (time_ref->ts_dev_1 < ts) {
+			delta_ts = -(1 << time_ref->adapter->ts_used_bits);
+		}
 
-		delta_us = ts - time_ref->ts_dev_2;
-		if (ts < time_ref->ts_dev_2)
-			delta_us &= (1 << time_ref->adapter->ts_used_bits) - 1;
+		/* add delay between last sync and event timestamps */
+		delta_ts += (signed int)(ts - time_ref->ts_dev_2);
 
-		delta_us += time_ref->ts_total;
+		/* add time from beginning to last sync */
+		delta_ts += time_ref->ts_total;
 
-		delta_us *= time_ref->adapter->us_per_ts_scale;
+		/* convert ticks number into microseconds */
+		delta_us = delta_ts * time_ref->adapter->us_per_ts_scale;
 		delta_us >>= time_ref->adapter->us_per_ts_shift;
 
 		*tv = time_ref->tv_host_0;
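
A minimal user-space sketch of wrap-aware timestamp deltas, assuming a 16-bit
counter width (the real adapters use adapter->ts_used_bits). It is not the
driver code, but it illustrates the two cases the patch distinguishes: the
event timestamp wrapping after the last sync, and the event recorded just
before the sync in the same packet.

#include <stdio.h>
#include <stdint.h>

#define TS_BITS 16                      /* assumed width of the device counter */
#define TS_MOD  (1u << TS_BITS)

/* Signed distance from "sync" to "ts" on a counter that wraps at 2^TS_BITS.
 * Events no further than half a period from the sync point are resolved
 * correctly, whether or not the raw values wrapped in between.
 */
static int32_t wrap_delta(uint32_t sync, uint32_t ts)
{
        uint32_t d = (ts - sync) & (TS_MOD - 1);

        return (d < TS_MOD / 2) ? (int32_t)d : (int32_t)d - (int32_t)TS_MOD;
}

int main(void)
{
        /* Event after the sync, counter wrapped in between: 0xFFF0 -> 0x0010 */
        printf("%d\n", wrap_delta(0xFFF0, 0x0010));   /* 32  */
        /* Event recorded just *before* the sync in the same packet */
        printf("%d\n", wrap_delta(0x0010, 0xFFF0));   /* -32 */
        return 0;
}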
From patchwork Tue Nov 17 13:04:38 2020
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 325477
From: Greg Kroah-Hartman
Subject: [PATCH 4.4 15/64] pinctrl: devicetree: Avoid taking direct reference to device name string
Date: Tue, 17 Nov 2020 14:04:38 +0100
Message-Id: <20201117122106.879814674@linuxfoundation.org>

From: Will Deacon

commit be4c60b563edee3712d392aaeb0943a768df7023 upstream.

When populating the pinctrl mapping table entries for a device, the
'dev_name' field for each entry is initialised to point directly at the
string returned by 'dev_name()' for the device and subsequently used by
'create_pinctrl()' when looking up the mappings for the device being
probed.

This is unreliable in the presence of calls to 'dev_set_name()', which
may reallocate the device name string leaving the pinctrl mappings with
a dangling reference. This then leads to a use-after-free every time
the name is dereferenced by a device probe:

  | BUG: KASAN: invalid-access in strcmp+0x20/0x64
  | Read of size 1 at addr 13ffffc153494b00 by task modprobe/590
  | Pointer tag: [13], memory tag: [fe]
  |
  | Call trace:
  |  __kasan_report+0x16c/0x1dc
  |  kasan_report+0x10/0x18
  |  check_memory_region
  |  __hwasan_load1_noabort+0x4c/0x54
  |  strcmp+0x20/0x64
  |  create_pinctrl+0x18c/0x7f4
  |  pinctrl_get+0x90/0x114
  |  devm_pinctrl_get+0x44/0x98
  |  pinctrl_bind_pins+0x5c/0x450
  |  really_probe+0x1c8/0x9a4
  |  driver_probe_device+0x120/0x1d8

Follow the example of sysfs, and duplicate the device name string
before stashing it away in the pinctrl mapping entries.
Cc: Linus Walleij
Reported-by: Elena Petrova
Tested-by: Elena Petrova
Signed-off-by: Will Deacon
Link: https://lore.kernel.org/r/20191002124206.22928-1-will@kernel.org
Signed-off-by: Linus Walleij
[bwh: Backported to 4.4: adjust context]
Signed-off-by: Ben Hutchings
Signed-off-by: Sasha Levin
---
 drivers/pinctrl/devicetree.c | 26 ++++++++++++++++++++------
 1 file changed, 20 insertions(+), 6 deletions(-)

diff --git a/drivers/pinctrl/devicetree.c b/drivers/pinctrl/devicetree.c
index fe04e748dfe4b..eb8c29f3e16ef 100644
--- a/drivers/pinctrl/devicetree.c
+++ b/drivers/pinctrl/devicetree.c
@@ -40,6 +40,13 @@ struct pinctrl_dt_map {
 static void dt_free_map(struct pinctrl_dev *pctldev,
 		     struct pinctrl_map *map, unsigned num_maps)
 {
+	int i;
+
+	for (i = 0; i < num_maps; ++i) {
+		kfree_const(map[i].dev_name);
+		map[i].dev_name = NULL;
+	}
+
 	if (pctldev) {
 		const struct pinctrl_ops *ops = pctldev->desc->pctlops;
 		ops->dt_free_map(pctldev, map, num_maps);
@@ -73,7 +80,13 @@ static int dt_remember_or_free_map(struct pinctrl *p, const char *statename,
 	/* Initialize common mapping table entry fields */
 	for (i = 0; i < num_maps; i++) {
-		map[i].dev_name = dev_name(p->dev);
+		const char *devname;
+
+		devname = kstrdup_const(dev_name(p->dev), GFP_KERNEL);
+		if (!devname)
+			goto err_free_map;
+
+		map[i].dev_name = devname;
 		map[i].name = statename;
 		if (pctldev)
 			map[i].ctrl_dev_name = dev_name(pctldev->dev);
@@ -81,11 +94,8 @@ static int dt_remember_or_free_map(struct pinctrl *p, const char *statename,
 	/* Remember the converted mapping table entries */
 	dt_map = kzalloc(sizeof(*dt_map), GFP_KERNEL);
-	if (!dt_map) {
-		dev_err(p->dev, "failed to alloc struct pinctrl_dt_map\n");
-		dt_free_map(pctldev, map, num_maps);
-		return -ENOMEM;
-	}
+	if (!dt_map)
+		goto err_free_map;
 
 	dt_map->pctldev = pctldev;
 	dt_map->map = map;
@@ -93,6 +103,10 @@ static int dt_remember_or_free_map(struct pinctrl *p, const char *statename,
 	list_add_tail(&dt_map->node, &p->dt_maps);
 
 	return pinctrl_register_map(map, num_maps, false);
+
+err_free_map:
+	dt_free_map(pctldev, map, num_maps);
+	return -ENOMEM;
 }
 
 struct pinctrl_dev *of_pinctrl_get(struct device_node *np)
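
The underlying rule is generic: storing a pointer returned by an accessor
whose backing string may later be reallocated leaves a dangling reference.
A small user-space sketch of the safe pattern; plain strdup()/free() stand in
for the kernel's kstrdup_const()/kfree_const().

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct mapping {
        const char *dev_name;   /* must outlive later renames of the device */
};

/* Take our own copy of the name instead of aliasing the device's buffer. */
static int mapping_init(struct mapping *map, const char *dev_name)
{
        map->dev_name = strdup(dev_name);   /* kstrdup_const() in the kernel */
        return map->dev_name ? 0 : -1;
}

static void mapping_free(struct mapping *map)
{
        free((void *)map->dev_name);        /* kfree_const() in the kernel */
        map->dev_name = NULL;
}

int main(void)
{
        char *device_name = strdup("soc:pinctrl@1000");
        struct mapping map;

        if (!device_name || mapping_init(&map, device_name) != 0)
                return 1;

        /* A later rename may free and reallocate the device name; the
         * mapping keeps working because it owns its own copy.
         */
        free(device_name);
        device_name = strdup("soc:pinctrl@2000");

        printf("mapping still refers to: %s\n", map.dev_name);

        mapping_free(&map);
        free(device_name);
        return 0;
}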
From patchwork Tue Nov 17 13:04:40 2020
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 325474
From: Greg Kroah-Hartman
Subject: [PATCH 4.4 17/64] i40e: Fix of memory leak and integer truncation in i40e_virtchnl.c
Date: Tue, 17 Nov 2020 14:04:40 +0100
Message-Id: <20201117122106.976231505@linuxfoundation.org>

From: Martyna Szapar

commit 24474f2709af6729b9b1da1c5e160ab62e25e3a4 upstream.

Fixed possible memory leak in i40e_vc_add_cloud_filter function:
cfilter is being allocated and in some error conditions the function
returns without freeing the memory.

Fix of integer truncation from u16 (type of queue_id value) to u8 when
calling i40e_vc_isvalid_queue_id function.

Signed-off-by: Martyna Szapar
Signed-off-by: Jeff Kirsher
[bwh: Backported to 4.4: i40e_vc_add_cloud_filter() does not exist but
 the integer truncation is still possible]
Signed-off-by: Ben Hutchings
Signed-off-by: Sasha Levin
---
 drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
index 18e10357f1d0b..b4b4d46da1734 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
@@ -188,7 +188,7 @@ static inline bool i40e_vc_isvalid_vsi_id(struct i40e_vf *vf, u16 vsi_id)
  * check for the valid queue id
  **/
 static inline bool i40e_vc_isvalid_queue_id(struct i40e_vf *vf, u16 vsi_id,
-					    u8 qid)
+					    u16 qid)
 {
 	struct i40e_pf *pf = vf->pf;
 	struct i40e_vsi *vsi = i40e_find_vsi_from_id(pf, vsi_id);
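
The truncation half of the fix is ordinary integer conversion: a queue id
above 255 silently loses its high bits when passed through a u8 parameter.
A stand-alone sketch with local typedefs; the comparison against a queue
count is only an illustrative stand-in for the real validation.

#include <stdio.h>
#include <stdint.h>

typedef uint8_t  u8;    /* stand-ins for the kernel typedefs */
typedef uint16_t u16;

/* Old prototype: the u16 queue id is truncated at the call boundary. */
static int isvalid_queue_id_u8(u8 qid, u16 num_queues)
{
        return qid < num_queues;
}

/* Fixed prototype: the full 16-bit value reaches the range check. */
static int isvalid_queue_id_u16(u16 qid, u16 num_queues)
{
        return qid < num_queues;
}

int main(void)
{
        u16 qid = 256;          /* out of range for a 16-queue VSI */
        u16 num_queues = 16;

        /* 256 truncates to 0, so the invalid id is wrongly accepted. */
        printf("u8 check:  %s\n",
               isvalid_queue_id_u8(qid, num_queues) ? "accepted" : "rejected");
        /* With u16 the bogus id is rejected as expected. */
        printf("u16 check: %s\n",
               isvalid_queue_id_u16(qid, num_queues) ? "accepted" : "rejected");
        return 0;
}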
From patchwork Tue Nov 17 13:04:42 2020
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 328019
From: Greg Kroah-Hartman
Subject: [PATCH 4.4 19/64] ath9k_htc: Use appropriate rs_datalen type
Date: Tue, 17 Nov 2020 14:04:42 +0100
Message-Id: <20201117122107.077116163@linuxfoundation.org>

From: Masashi Honma

commit 5024f21c159f8c1668f581fff37140741c0b1ba9 upstream.

kernel test robot says:

  drivers/net/wireless/ath/ath9k/htc_drv_txrx.c:987:20: sparse: warning: incorrect type in assignment (different base types)
  drivers/net/wireless/ath/ath9k/htc_drv_txrx.c:987:20: sparse:    expected restricted __be16 [usertype] rs_datalen
  drivers/net/wireless/ath/ath9k/htc_drv_txrx.c:987:20: sparse:    got unsigned short [usertype]
  drivers/net/wireless/ath/ath9k/htc_drv_txrx.c:988:13: sparse: warning: restricted __be16 degrades to integer
  drivers/net/wireless/ath/ath9k/htc_drv_txrx.c:1001:13: sparse: warning: restricted __be16 degrades to integer

Indeed rs_datalen has host byte order, so modify its own type.
Reported-by: kernel test robot
Fixes: cd486e627e67 ("ath9k_htc: Discard undersized packets")
Signed-off-by: Masashi Honma
Signed-off-by: Kalle Valo
Link: https://lore.kernel.org/r/20200808233258.4596-1-masashi.honma@gmail.com
Signed-off-by: Greg Kroah-Hartman
---
 drivers/net/wireless/ath/ath9k/htc_drv_txrx.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
+++ b/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
@@ -972,7 +972,7 @@ static bool ath9k_rx_prepare(struct ath9
 	struct ath_htc_rx_status *rxstatus;
 	struct ath_rx_status rx_stats;
 	bool decrypt_error = false;
-	__be16 rs_datalen;
+	u16 rs_datalen;
 	bool is_phyerr;
 
 	if (skb->len < HTC_RX_FRAME_HEADER_SIZE) {
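
The sparse warning is about mixing byte orders in one variable. A user-space
analogue, with ntohs() standing in for be16_to_cpu(): once converted, the
length is host order and belongs in a plain u16.

#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>   /* htons()/ntohs(): stand-ins for cpu_to_be16()/be16_to_cpu() */

struct rx_status_wire {
        uint16_t rs_datalen_be;   /* as it arrives on the wire: big endian */
};

int main(void)
{
        struct rx_status_wire rx = { .rs_datalen_be = htons(1500) };

        /* After conversion the value is host order, so store it in a plain
         * u16; keeping it in a big-endian typed variable and comparing it
         * against host-order limits is what sparse complained about.
         */
        uint16_t rs_datalen = ntohs(rx.rs_datalen_be);

        if (rs_datalen > 1400)
                printf("datalen %u exceeds limit\n", rs_datalen);
        return 0;
}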
From patchwork Tue Nov 17 13:04:43 2020
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 328013
From: Greg Kroah-Hartman
Subject: [PATCH 4.4 20/64] usb: gadget: goku_udc: fix potential crashes in probe
Date: Tue, 17 Nov 2020 14:04:43 +0100
Message-Id: <20201117122107.127030934@linuxfoundation.org>

From: Evgeny Novikov

[ Upstream commit 0d66e04875c5aae876cf3d4f4be7978fa2b00523 ]

goku_probe() goes to error label "err" and invokes goku_remove() in
case of failures of pci_enable_device(), pci_resource_start() and
ioremap(). goku_remove() gets a device from pci_get_drvdata(pdev) and
works with it without any checks, in particular it dereferences a
corresponding pointer. But goku_probe() did not set this device yet.
So, one can expect various crashes. The patch moves setting the device
just after allocation of memory for it.

Found by Linux Driver Verification project (linuxtesting.org).

Reported-by: Pavel Andrianov
Signed-off-by: Evgeny Novikov
Signed-off-by: Felipe Balbi
Signed-off-by: Sasha Levin
---
 drivers/usb/gadget/udc/goku_udc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/usb/gadget/udc/goku_udc.c b/drivers/usb/gadget/udc/goku_udc.c
index 1fdfec14a3ba1..5d4616061309e 100644
--- a/drivers/usb/gadget/udc/goku_udc.c
+++ b/drivers/usb/gadget/udc/goku_udc.c
@@ -1773,6 +1773,7 @@ static int goku_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 		goto err;
 	}
 
+	pci_set_drvdata(pdev, dev);
 	spin_lock_init(&dev->lock);
 	dev->pdev = pdev;
 	dev->gadget.ops = &goku_ops;
@@ -1806,7 +1807,6 @@ static int goku_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	}
 	dev->regs = (struct goku_udc_regs __iomem *) base;
 
-	pci_set_drvdata(pdev, dev);
 	INFO(dev, "%s\n", driver_desc);
 	INFO(dev, "version: " DRIVER_VERSION " %s\n", dmastr());
 	INFO(dev, "irq %d, pci mem %p\n", pdev->irq, base);
From patchwork Tue Nov 17 13:04:44 2020
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 325470
From: Greg Kroah-Hartman
Subject: [PATCH 4.4 21/64] gfs2: Free rd_bits later in gfs2_clear_rgrpd to fix use-after-free
Date: Tue, 17 Nov 2020 14:04:44 +0100
Message-Id: <20201117122107.174939565@linuxfoundation.org>

From: Bob Peterson

[ Upstream commit d0f17d3883f1e3f085d38572c2ea8edbd5150172 ]

Function gfs2_clear_rgrpd calls kfree(rgd->rd_bits) before calling
return_all_reservations, but return_all_reservations still dereferences
rgd->rd_bits in __rs_deltree. Fix that by moving the call to kfree
below the call to return_all_reservations.

Signed-off-by: Bob Peterson
Signed-off-by: Andreas Gruenbacher
Signed-off-by: Sasha Levin
---
 fs/gfs2/rgrp.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/gfs2/rgrp.c b/fs/gfs2/rgrp.c
index 2736e9cfc2ee9..99dcbdc1ff3a4 100644
--- a/fs/gfs2/rgrp.c
+++ b/fs/gfs2/rgrp.c
@@ -747,9 +747,9 @@ void gfs2_clear_rgrpd(struct gfs2_sbd *sdp)
 		}
 
 		gfs2_free_clones(rgd);
+		return_all_reservations(rgd);
 		kfree(rgd->rd_bits);
 		rgd->rd_bits = NULL;
-		return_all_reservations(rgd);
 		kmem_cache_free(gfs2_rgrpd_cachep, rgd);
 	}
 }
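
The gfs2 change is an instance of a general teardown rule: release the users
of a buffer before freeing the buffer itself. A small stand-alone sketch with
hypothetical names modelled loosely on the code above.

#include <stdio.h>
#include <stdlib.h>

struct rgrp {
        int *bits;          /* bitmap the reservations still point into */
        int  nr_reserved;
};

/* Walks the reservations and still dereferences rgd->bits. */
static void return_all_reservations(struct rgrp *rgd)
{
        if (rgd->nr_reserved)
                printf("dropping %d reservations, first bit %d\n",
                       rgd->nr_reserved, rgd->bits[0]);
        rgd->nr_reserved = 0;
}

static void clear_rgrp(struct rgrp *rgd)
{
        /* Correct order: users of ->bits first, then the buffer itself. */
        return_all_reservations(rgd);
        free(rgd->bits);
        rgd->bits = NULL;
}

int main(void)
{
        struct rgrp rgd = { .bits = calloc(8, sizeof(int)), .nr_reserved = 2 };

        if (!rgd.bits)
                return 1;
        clear_rgrp(&rgd);
        return 0;
}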
From patchwork Tue Nov 17 13:04:45 2020
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 325471
From: Greg Kroah-Hartman
Subject: [PATCH 4.4 22/64] gfs2: check for live vs. read-only file system in gfs2_fitrim
Date: Tue, 17 Nov 2020 14:04:45 +0100
Message-Id: <20201117122107.225501505@linuxfoundation.org>

From: Bob Peterson

[ Upstream commit c5c68724696e7d2f8db58a5fce3673208d35c485 ]

Before this patch, gfs2_fitrim was not properly checking for a "live"
file system. If the file system had something to trim and the file
system was read-only (or spectator) it would start the trim, but when
it starts the transaction, gfs2_trans_begin returns -EROFS (read-only
file system) and it errors out. However, if the file system was already
trimmed so there's no work to do, it never called gfs2_trans_begin.
That code is bypassed so it never returns the error. Instead, it
returns a good return code with 0 work. All this makes for inconsistent
behavior: The same fstrim command can return -EROFS in one case and 0
in another. This tripped up xfstests generic/537 which reports the
error as:

  +fstrim with unrecovered metadata just ate your filesystem

This patch adds a check for a "live" (iow, active journal, iow, RW)
file system, and if not, returns the error properly.

Signed-off-by: Bob Peterson
Signed-off-by: Andreas Gruenbacher
Signed-off-by: Sasha Levin
---
 fs/gfs2/rgrp.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/fs/gfs2/rgrp.c b/fs/gfs2/rgrp.c
index 99dcbdc1ff3a4..faa5e0e2c4493 100644
--- a/fs/gfs2/rgrp.c
+++ b/fs/gfs2/rgrp.c
@@ -1388,6 +1388,9 @@ int gfs2_fitrim(struct file *filp, void __user *argp)
 	if (!capable(CAP_SYS_ADMIN))
 		return -EPERM;
 
+	if (!test_bit(SDF_JOURNAL_LIVE, &sdp->sd_flags))
+		return -EROFS;
+
 	if (!blk_queue_discard(q))
 		return -EOPNOTSUPP;
From patchwork Tue Nov 17 13:04:49 2020
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 325468
From: Greg Kroah-Hartman
Subject: [PATCH 4.4 26/64] cfg80211: regulatory: Fix inconsistent format argument
Date: Tue, 17 Nov 2020 14:04:49 +0100
Message-Id: <20201117122107.427138656@linuxfoundation.org>

From: Ye Bin

[ Upstream commit db18d20d1cb0fde16d518fb5ccd38679f174bc04 ]

Fix the following warning:

  [net/wireless/reg.c:3619]: (warning) %d in format string (no. 2)
  requires 'int' but the argument type is 'unsigned int'.

Reported-by: Hulk Robot
Signed-off-by: Ye Bin
Link: https://lore.kernel.org/r/20201009070215.63695-1-yebin10@huawei.com
Signed-off-by: Johannes Berg
Signed-off-by: Sasha Levin
---
 net/wireless/reg.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/wireless/reg.c b/net/wireless/reg.c
index 474923175b108..dcbf5cd44bb37 100644
--- a/net/wireless/reg.c
+++ b/net/wireless/reg.c
@@ -2775,7 +2775,7 @@ static void print_rd_rules(const struct ieee80211_regdomain *rd)
 		power_rule = &reg_rule->power_rule;
 
 		if (reg_rule->flags & NL80211_RRF_AUTO_BW)
-			snprintf(bw, sizeof(bw), "%d KHz, %d KHz AUTO",
+			snprintf(bw, sizeof(bw), "%d KHz, %u KHz AUTO",
				 freq_range->max_bandwidth_khz,
				 reg_get_max_bandwidth(rd, reg_rule));
 		else
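
The warning is ordinary printf format matching: printing an unsigned value
with %d misinterprets anything above INT_MAX. A two-line stand-alone
illustration.

#include <stdio.h>

int main(void)
{
        unsigned int khz = 3000000000u;   /* larger than INT_MAX */

        printf("%d KHz\n", khz);   /* mismatched conversion: typically prints a negative value */
        printf("%u KHz\n", khz);   /* matches the unsigned argument: 3000000000 */
        return 0;
}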
(UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1605618446; bh=b4DVEnv7NChjMLM9ddUPcoDbrPCQLRPzs4KHY1O5TC0=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=0tclfLLNVuRVmdieQG4w/YAndg7Z525Th/9fCMwiSA7mdVFKWqFDe/oQhDmSpwdpE WlSuKgBfXEtiQM8c5L24y7mGVB5l0LnJtAQfDXmyVAzbETaVCQiyRJBn93zIhXCoWQ HU1Dvc2AgK3EidmfDdFFjew/zWJmhV5WNU8PWOfE= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Christoph Hellwig , "Darrick J. Wong" , Sasha Levin Subject: [PATCH 4.4 28/64] xfs: fix a missing unlock on error in xfs_fs_map_blocks Date: Tue, 17 Nov 2020 14:04:51 +0100 Message-Id: <20201117122107.528113940@linuxfoundation.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201117122106.144800239@linuxfoundation.org> References: <20201117122106.144800239@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Christoph Hellwig [ Upstream commit 2bd3fa793aaa7e98b74e3653fdcc72fa753913b5 ] We also need to drop the iolock when invalidate_inode_pages2 fails, not only on all other error or successful cases. Fixes: 527851124d10 ("xfs: implement pNFS export operations") Signed-off-by: Christoph Hellwig Reviewed-by: Darrick J. Wong Signed-off-by: Darrick J. Wong Signed-off-by: Sasha Levin --- fs/xfs/xfs_pnfs.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/fs/xfs/xfs_pnfs.c b/fs/xfs/xfs_pnfs.c index dc6221942b85f..ab66ea0a72bfb 100644 --- a/fs/xfs/xfs_pnfs.c +++ b/fs/xfs/xfs_pnfs.c @@ -162,7 +162,7 @@ xfs_fs_map_blocks( goto out_unlock; error = invalidate_inode_pages2(inode->i_mapping); if (WARN_ON_ONCE(error)) - return error; + goto out_unlock; end_fsb = XFS_B_TO_FSB(mp, (xfs_ufsize_t)offset + length); offset_fsb = XFS_B_TO_FSBT(mp, offset); From patchwork Tue Nov 17 13:04:52 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg KH X-Patchwork-Id: 328014 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 33C36C64E69 for ; Tue, 17 Nov 2020 14:07:16 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C8DA6217A0 for ; Tue, 17 Nov 2020 14:07:15 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=kernel.org header.i=@kernel.org header.b="BKbOUZv7" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729205AbgKQNH6 (ORCPT ); Tue, 17 Nov 2020 08:07:58 -0500 Received: from mail.kernel.org ([198.145.29.99]:36400 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729184AbgKQNH6 (ORCPT ); Tue, 17 Nov 2020 08:07:58 -0500 Received: from localhost (83-86-74-64.cable.dynamic.v4.ziggo.nl [83.86.74.64]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 26C31246BC; Tue, 17 Nov 2020 13:07:56 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; 
c=relaxed/simple; d=kernel.org; s=default; t=1605618477; bh=WHGapFUCYjqIwjWno05Zd5FO/X6rBRNXN9nYRsiogMg=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=BKbOUZv7Pud/NeH4N4x6/P0eS7uD0ne3yVYlxBeQtbfUkNp0Ew4F2sbJsf2wdHYXu vT1NzwXeU5F9wrgVy0STkDs4HjedKvWe0ric9Ffr8jDFwVVNXIq4LWKiR6m6bOhDGG TieulSbt0a3vpa8r9iCC6oziu52fu9/WKp0bQl3k= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Evan Nimmo , Rob Herring , Sasha Levin Subject: [PATCH 4.4 29/64] of/address: Fix of_node memory leak in of_dma_is_coherent Date: Tue, 17 Nov 2020 14:04:52 +0100 Message-Id: <20201117122107.581408797@linuxfoundation.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201117122106.144800239@linuxfoundation.org> References: <20201117122106.144800239@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Evan Nimmo [ Upstream commit a5bea04fcc0b3c0aec71ee1fd58fd4ff7ee36177 ] Commit dabf6b36b83a ("of: Add OF_DMA_DEFAULT_COHERENT & select it on powerpc") added a check to of_dma_is_coherent which returns early if OF_DMA_DEFAULT_COHERENT is enabled. This results in the of_node_put() being skipped causing a memory leak. Moved the of_node_get() below this check so we now we only get the node if OF_DMA_DEFAULT_COHERENT is not enabled. Fixes: dabf6b36b83a ("of: Add OF_DMA_DEFAULT_COHERENT & select it on powerpc") Signed-off-by: Evan Nimmo Link: https://lore.kernel.org/r/20201110022825.30895-1-evan.nimmo@alliedtelesis.co.nz Signed-off-by: Rob Herring Signed-off-by: Sasha Levin --- drivers/of/address.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/drivers/of/address.c b/drivers/of/address.c index b3bf8762f4e8c..77881432dd404 100644 --- a/drivers/of/address.c +++ b/drivers/of/address.c @@ -1014,11 +1014,13 @@ EXPORT_SYMBOL_GPL(of_dma_get_range); */ bool of_dma_is_coherent(struct device_node *np) { - struct device_node *node = of_node_get(np); + struct device_node *node; if (IS_ENABLED(CONFIG_OF_DMA_DEFAULT_COHERENT)) return true; + node = of_node_get(np); + while (node) { if (of_property_read_bool(node, "dma-coherent")) { of_node_put(node); From patchwork Tue Nov 17 13:04:55 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg KH X-Patchwork-Id: 325452 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, HEADER_FROM_DIFFERENT_DOMAINS, MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 260DBC71155 for ; Tue, 17 Nov 2020 13:08:56 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C372A2468F for ; Tue, 17 Nov 2020 13:08:55 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=kernel.org header.i=@kernel.org header.b="MP4XwT6c" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729321AbgKQNIt (ORCPT ); Tue, 17 Nov 2020 08:08:49 -0500 Received: from mail.kernel.org ([198.145.29.99]:37442 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729315AbgKQNIq (ORCPT ); Tue, 17 Nov 2020 
08:08:46 -0500 Received: from localhost (83-86-74-64.cable.dynamic.v4.ziggo.nl [83.86.74.64]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id DA1252225B; Tue, 17 Nov 2020 13:08:44 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1605618525; bh=WFDBWOgLtPYNsgKmA17pqWgI3dxOGvpaXe70blMXWHQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=MP4XwT6c44srEX7SavXe4qtCx4PkkWMSgPD7Q3ZbtpblgAZo7FFiSALT29l75lUFL 6L6dfXR+x+iMqz1W5zAgZ0gz9EI4uL/lTTc3yryawNg5o/YdJhkAk7QEOk9LISpXej ZeDyLiFYmfvR2qwXy852bQWDBZ6lkZMv4aSAceac= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Kaixu Xia , Theodore Tso , stable@kernel.org Subject: [PATCH 4.4 32/64] ext4: correctly report "not supported" for {usr, grp}jquota when !CONFIG_QUOTA Date: Tue, 17 Nov 2020 14:04:55 +0100 Message-Id: <20201117122107.737917783@linuxfoundation.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201117122106.144800239@linuxfoundation.org> References: <20201117122106.144800239@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Kaixu Xia commit 174fe5ba2d1ea0d6c5ab2a7d4aa058d6d497ae4d upstream. The macro MOPT_Q is used to indicate that the mount option is related to quota and is defined to be MOPT_NOSUPPORT when CONFIG_QUOTA is disabled. Normally the quota options are handled explicitly, so it didn't matter that the MOPT_STRING flag was missing, even though the usrjquota and grpjquota mount options take a string argument. It's important that it's present in the !CONFIG_QUOTA case, since without MOPT_STRING, the mount option matcher will match usrjquota= followed by an integer, and will otherwise skip the table entry, and so the "mount option not supported" error message is never reported. [ Fixed up the commit description to better explain why the fix works.
--TYT ] Fixes: 26092bf52478 ("ext4: use a table-driven handler for mount options") Signed-off-by: Kaixu Xia Link: https://lore.kernel.org/r/1603986396-28917-1-git-send-email-kaixuxia@tencent.com Signed-off-by: Theodore Ts'o Cc: stable@kernel.org Signed-off-by: Greg Kroah-Hartman --- fs/ext4/super.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) --- a/fs/ext4/super.c +++ b/fs/ext4/super.c @@ -1452,8 +1452,8 @@ static const struct mount_opts { MOPT_SET | MOPT_Q}, {Opt_noquota, (EXT4_MOUNT_QUOTA | EXT4_MOUNT_USRQUOTA | EXT4_MOUNT_GRPQUOTA), MOPT_CLEAR | MOPT_Q}, - {Opt_usrjquota, 0, MOPT_Q}, - {Opt_grpjquota, 0, MOPT_Q}, + {Opt_usrjquota, 0, MOPT_Q | MOPT_STRING}, + {Opt_grpjquota, 0, MOPT_Q | MOPT_STRING}, {Opt_offusrjquota, 0, MOPT_Q}, {Opt_offgrpjquota, 0, MOPT_Q}, {Opt_jqfmt_vfsold, QFMT_VFS_OLD, MOPT_QFMT}, From patchwork Tue Nov 17 13:04:56 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg KH X-Patchwork-Id: 325451 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, HEADER_FROM_DIFFERENT_DOMAINS, MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 610E5C63697 for ; Tue, 17 Nov 2020 13:08:56 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 0BF4724698 for ; Tue, 17 Nov 2020 13:08:56 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=kernel.org header.i=@kernel.org header.b="ToAop64p" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729323AbgKQNIt (ORCPT ); Tue, 17 Nov 2020 08:08:49 -0500 Received: from mail.kernel.org ([198.145.29.99]:37476 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729317AbgKQNIs (ORCPT ); Tue, 17 Nov 2020 08:08:48 -0500 Received: from localhost (83-86-74-64.cable.dynamic.v4.ziggo.nl [83.86.74.64]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id B9BC4238E6; Tue, 17 Nov 2020 13:08:47 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1605618528; bh=htQs+Zjnuk/vZn7s9XxNdJaGv3PG71BxdwzGEUq8XI4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=ToAop64pJtO/Vhq31nIJkC4jmWm2aR90jKeWCGjZDyfNukKIoRD7f4HJN78Ek4C5W 1Ih8qPDN9ug2XJ5RKaAG79u2YWBFZeZ11A6p/dZQEsGbTSz6vAj8yhirW6VwA2SA+i BeB3rv8nV86ve7plHuF5m4obobutOOnHOiAehEzI= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Dan Carpenter , Tao Ma , Joseph Qi , Andreas Dilger , Theodore Tso , stable@kernel.org Subject: [PATCH 4.4 33/64] ext4: unlock xattr_sem properly in ext4_inline_data_truncate() Date: Tue, 17 Nov 2020 14:04:56 +0100 Message-Id: <20201117122107.788461293@linuxfoundation.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201117122106.144800239@linuxfoundation.org> References: <20201117122106.144800239@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Joseph Qi commit 7067b2619017d51e71686ca9756b454de0e5826a 
upstream. It takes xattr_sem to check the inline data again, but does not unlock it in the case where the inode has no inline data. So unlock it before returning. Fixes: aef1c8513c1f ("ext4: let ext4_truncate handle inline data correctly") Reported-by: Dan Carpenter Cc: Tao Ma Signed-off-by: Joseph Qi Reviewed-by: Andreas Dilger Link: https://lore.kernel.org/r/1604370542-124630-1-git-send-email-joseph.qi@linux.alibaba.com Signed-off-by: Theodore Ts'o Cc: stable@kernel.org Signed-off-by: Greg Kroah-Hartman --- fs/ext4/inline.c | 1 + 1 file changed, 1 insertion(+) --- a/fs/ext4/inline.c +++ b/fs/ext4/inline.c @@ -1892,6 +1892,7 @@ void ext4_inline_data_truncate(struct in ext4_write_lock_xattr(inode, &no_expand); if (!ext4_has_inline_data(inode)) { + ext4_write_unlock_xattr(inode, &no_expand); *has_inline = 0; ext4_journal_stop(handle); return; From patchwork Tue Nov 17 13:04:59 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg KH X-Patchwork-Id: 328020 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, HEADER_FROM_DIFFERENT_DOMAINS, MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id AA40DC63798 for ; Tue, 17 Nov 2020 14:06:20 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 6154720773 for ; Tue, 17 Nov 2020 14:06:20 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=kernel.org header.i=@kernel.org header.b="Le2QCND9" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729317AbgKQOFx (ORCPT ); Tue, 17 Nov 2020 09:05:53 -0500 Received: from mail.kernel.org ([198.145.29.99]:37596 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729345AbgKQNI6 (ORCPT ); Tue, 17 Nov 2020 08:08:58 -0500 Received: from localhost (83-86-74-64.cable.dynamic.v4.ziggo.nl [83.86.74.64]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 5CCBD246AE; Tue, 17 Nov 2020 13:08:56 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1605618537; bh=IsN8HGGS4f5HKnsh5Eka/Pg6HmJeJb0zMBEZC3x0A9Y=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Le2QCND9dtm6+UwSMz4nqRKhjYFC0w7JIRj+FormIkP7JRf4alAftYeUBzTJWXl8S S/l9tY6wxqzpgXMxMqgMZd8tQGhptLbNmcOGdGSjmhdMUanZsfQeBLADEdE1n0fcI8 zDRgChFEsNIFQdQ4qfZrK8TAxoBNQsjfiRj33IQQ= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Wengang Wang , Andrew Morton , Joseph Qi , Mark Fasheh , Joel Becker , Junxiao Bi , Changwei Ge , Gang He , Jun Piao , Linus Torvalds Subject: [PATCH 4.4 36/64] ocfs2: initialize ip_next_orphan Date: Tue, 17 Nov 2020 14:04:59 +0100 Message-Id: <20201117122107.946763828@linuxfoundation.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201117122106.144800239@linuxfoundation.org> References: <20201117122106.144800239@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Wengang Wang commit f5785283dd64867a711ca1fb1f5bb172f252ecdf
upstream. Though the problem was found on an older 4.1.12 kernel, I think upstream has the same issue. On one node in the cluster, there is the following call trace: # cat /proc/21473/stack __ocfs2_cluster_lock.isra.36+0x336/0x9e0 [ocfs2] ocfs2_inode_lock_full_nested+0x121/0x520 [ocfs2] ocfs2_evict_inode+0x152/0x820 [ocfs2] evict+0xae/0x1a0 iput+0x1c6/0x230 ocfs2_orphan_filldir+0x5d/0x100 [ocfs2] ocfs2_dir_foreach_blk+0x490/0x4f0 [ocfs2] ocfs2_dir_foreach+0x29/0x30 [ocfs2] ocfs2_recover_orphans+0x1b6/0x9a0 [ocfs2] ocfs2_complete_recovery+0x1de/0x5c0 [ocfs2] process_one_work+0x169/0x4a0 worker_thread+0x5b/0x560 kthread+0xcb/0xf0 ret_from_fork+0x61/0x90 The above stack is not reasonable; the final iput should not happen in ocfs2_orphan_filldir(). Looking at the code, 2067 /* Skip inodes which are already added to recover list, since dio may 2068 * happen concurrently with unlink/rename */ 2069 if (OCFS2_I(iter)->ip_next_orphan) { 2070 iput(iter); 2071 return 0; 2072 } 2073 the logic assumes the inode is already on the recovery list when it sees a non-NULL ip_next_orphan, so it skips this inode after dropping the reference taken in ocfs2_iget(). However, if the inode really were on the recovery list, it would hold another reference, and the iput() at line 2070 should not be the final iput (dropping the last reference). So I don't think the inode is really on the recovery list (no vmcore to confirm). Note that ocfs2_queue_orphans(), though not shown in the call trace, is holding the cluster lock on the orphan directory while looking up unlinked inodes. The on-disk inode eviction can involve a lot of I/O, which may take a long time to finish. That means this node can hold the cluster lock for a very long time, which in turn can make lock requests (from other nodes) against the orphan directory hang for a long time. Looking further at ip_next_orphan, I found it is not initialized when a new ocfs2_inode_info structure is allocated. This causes reflink operations from some nodes to hang for a very long time while waiting for the cluster lock on the orphan directory. Fix: initialize ip_next_orphan to NULL.
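[ Illustration, not part of the patch: a minimal userspace sketch of the failure mode described above. None of this is ocfs2 code and every name in it is invented; it only shows how a recycled object whose "membership" pointer is never cleared by a constructor can look as if it were already on a list, which is what initializing ip_next_orphan in the init_once constructor prevents. Build with: cc -o stale stale.c ]

#include <stdio.h>

struct fake_inode {
	unsigned long blkno;
	struct fake_inode *next_orphan;	/* non-NULL is read as "already on the recovery list" */
};

/* Crude stand-in for a slab cache that recycles objects without zeroing them. */
static struct fake_inode slab_slot;

static struct fake_inode *alloc_inode(int run_ctor)
{
	if (run_ctor)
		slab_slot.next_orphan = NULL;	/* conceptually, the one line the fix adds */
	return &slab_slot;
}

static void recover_orphan(struct fake_inode *ino)
{
	if (ino->next_orphan) {	/* mirrors the check quoted above from ocfs2_orphan_filldir() */
		printf("blkno %lu: looks already queued, skipped (reference dropped)\n", ino->blkno);
		return;
	}
	printf("blkno %lu: queued for orphan recovery\n", ino->blkno);
}

int main(void)
{
	/* First life of the object: it ends up linked somewhere. */
	struct fake_inode *a = alloc_inode(1);
	a->blkno = 100;
	a->next_orphan = a;	/* stale value left behind when the object is released */

	/* Recycled without a constructor clearing next_orphan: wrongly skipped. */
	struct fake_inode *b = alloc_inode(0);
	b->blkno = 200;
	recover_orphan(b);

	/* Recycled with the constructor running, as the patch guarantees: handled normally. */
	struct fake_inode *c = alloc_inode(1);
	c->blkno = 300;
	recover_orphan(c);
	return 0;
}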
Signed-off-by: Wengang Wang Signed-off-by: Andrew Morton Reviewed-by: Joseph Qi Cc: Mark Fasheh Cc: Joel Becker Cc: Junxiao Bi Cc: Changwei Ge Cc: Gang He Cc: Jun Piao Cc: Link: https://lkml.kernel.org/r/20201109171746.27884-1-wen.gang.wang@oracle.com Signed-off-by: Linus Torvalds Signed-off-by: Greg Kroah-Hartman --- fs/ocfs2/super.c | 1 + 1 file changed, 1 insertion(+) --- a/fs/ocfs2/super.c +++ b/fs/ocfs2/super.c @@ -1751,6 +1751,7 @@ static void ocfs2_inode_init_once(void * oi->ip_blkno = 0ULL; oi->ip_clusters = 0; + oi->ip_next_orphan = NULL; ocfs2_resv_init_once(&oi->ip_la_data_resv); From patchwork Tue Nov 17 13:05:00 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg KH X-Patchwork-Id: 325462 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-14.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, HEADER_FROM_DIFFERENT_DOMAINS, MAILING_LIST_MULTI, MENTIONS_GIT_HOSTING, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3193FC64E7D for ; Tue, 17 Nov 2020 13:07:46 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id CDEBA246BA for ; Tue, 17 Nov 2020 13:07:45 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=kernel.org header.i=@kernel.org header.b="LoJTf0+s" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729110AbgKQNHc (ORCPT ); Tue, 17 Nov 2020 08:07:32 -0500 Received: from mail.kernel.org ([198.145.29.99]:35096 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729107AbgKQNHa (ORCPT ); Tue, 17 Nov 2020 08:07:30 -0500 Received: from localhost (83-86-74-64.cable.dynamic.v4.ziggo.nl [83.86.74.64]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 1C5832468F; Tue, 17 Nov 2020 13:07:28 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1605618449; bh=SYfwzIgWl8g1P/EmMrMx3AnqOwe83/QwaaKVAqhaHWE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=LoJTf0+s4NP4FwDl9mcs7T9qWwjmrW01D7iLrpIS2++PZIl1++3qqx9Eid1ahvlWD zHMy0luTSlPc+I7ROkUjepEUbjOn1+B83WQMChP8NfGC3or5Z1VY5XyhFaT7fX/HBw /JSyCsVumaxlYWOOwim/eGD/teKVP6WVy/nsWEro= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, "Eric W. Biederman" , Al Viro Subject: [PATCH 4.4 37/64] dont dump the threads that had been already exiting when zapped. Date: Tue, 17 Nov 2020 14:05:00 +0100 Message-Id: <20201117122107.990579548@linuxfoundation.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201117122106.144800239@linuxfoundation.org> References: <20201117122106.144800239@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Al Viro commit 77f6ab8b7768cf5e6bdd0e72499270a0671506ee upstream. Coredump logics needs to report not only the registers of the dumping thread, but (since 2.5.43) those of other threads getting killed. 
Doing that might require extra state saved on the stack in asm glue at kernel entry; signal delivery logics does that (we need to be able to save sigcontext there, at the very least) and so does seccomp. That covers all callers of do_coredump(). Secondary threads get hit with SIGKILL and caught as soon as they reach exit_mm(), which normally happens in signal delivery, so those are also fine most of the time. Unfortunately, it is possible to end up with secondary zapped when it has already entered exit(2) (or, worse yet, is oopsing). In those cases we reach exit_mm() when mm->core_state is already set, but the stack contents is not what we would have in signal delivery. At least on two architectures (alpha and m68k) it leads to infoleaks - we end up with a chunk of kernel stack written into coredump, with the contents consisting of normal C stack frames of the call chain leading to exit_mm() instead of the expected copy of userland registers. In case of alpha we leak 312 bytes of stack. Other architectures (including the regset-using ones) might have similar problems - the normal user of regsets is ptrace and the state of tracee at the time of such calls is special in the same way signal delivery is. Note that had the zapper gotten to the exiting thread slightly later, it wouldn't have been included into coredump anyway - we skip the threads that have already cleared their ->mm. So let's pretend that zapper always loses the race. IOW, have exit_mm() only insert into the dumper list if we'd gotten there from handling a fatal signal[*] As the result, the callers of do_exit() that have *not* gone through get_signal() are not seen by coredump logics as secondary threads. Which excludes voluntary exit()/oopsen/traps/etc. The dumper thread itself is unaffected by that, so seccomp is fine. [*] originally I intended to add a new flag in tsk->flags, but ebiederman pointed out that PF_SIGNALED is already doing just what we need. Cc: stable@vger.kernel.org Fixes: d89f3847def4 ("[PATCH] thread-aware coredumps, 2.5.43-C3") History-tree: https://git.kernel.org/pub/scm/linux/kernel/git/tglx/history.git Acked-by: "Eric W. Biederman" Signed-off-by: Al Viro Signed-off-by: Greg Kroah-Hartman --- kernel/exit.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) --- a/kernel/exit.c +++ b/kernel/exit.c @@ -408,7 +408,10 @@ static void exit_mm(struct task_struct * up_read(&mm->mmap_sem); self.task = tsk; - self.next = xchg(&core_state->dumper.next, &self); + if (self.task->flags & PF_SIGNALED) + self.next = xchg(&core_state->dumper.next, &self); + else + self.task = NULL; /* * Implies mb(), the result of xchg() must be visible * to core_state->dumper. 
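[ Illustration, not part of the patch or its discussion: a hypothetical, timing-dependent userspace sketch of the window described above. One thread leaves through the raw exit(2) syscall, so it reaches do_exit() without going through signal delivery and therefore without PF_SIGNALED, while the main thread aborts and triggers the core dump. Hitting the race is not guaranteed, and the infoleak itself was only visible in the dumped register state on some architectures; this only sketches the scenario. Build with: cc -pthread -o racedump racedump.c ]

#define _GNU_SOURCE
#include <pthread.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/syscall.h>

static void *leaver(void *arg)
{
	(void)arg;
	/* Thread-only exit, bypassing glibc's exit_group(): this path never
	 * goes through get_signal(), so the task never gets PF_SIGNALED. */
	syscall(SYS_exit, 0);
	return NULL;	/* not reached */
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, leaver, NULL);
	usleep(1000);	/* give the sibling a chance to be inside do_exit() (racy on purpose) */
	abort();	/* SIGABRT dumps core; before the patch, the exiting sibling could
			 * still be included in the dump with bogus "register" contents */
}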
From patchwork Tue Nov 17 13:05:01 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg KH X-Patchwork-Id: 328011 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-10.2 required=3.0 tests=BAYES_00, DATE_IN_PAST_03_06, DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C9466C63798 for ; Tue, 17 Nov 2020 16:10:41 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 628C22463D for ; Tue, 17 Nov 2020 16:10:41 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=kernel.org header.i=@kernel.org header.b="faqokhqV" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727023AbgKQQKS (ORCPT ); Tue, 17 Nov 2020 11:10:18 -0500 Received: from mail.kernel.org ([198.145.29.99]:41528 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1725767AbgKQQKS (ORCPT ); Tue, 17 Nov 2020 11:10:18 -0500 Received: from localhost (83-86-74-64.cable.dynamic.v4.ziggo.nl [83.86.74.64]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id C73F02463D; Tue, 17 Nov 2020 16:10:16 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1605629417; bh=j+J6Q8PPenbpavlCvu7NOpHFFTaf7EUG4Fhp6AZtewY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=faqokhqVsc5zPWnFPAYdCWJmJrqI9nflE7IoywhulEG7h7XymzrP4g+NSPZvfbV+A 4vxPcRuDFc4EB9QpbVruqwyIKTXaplrdkpWWgySSwMotlB9MojuBjbEu3MJs6AMy8n ZSSpbTK2jakbsK8vNfBN/kyw+4pBO5h8DVQqGrsM= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Thomas Zimmermann , Daniel Vetter , Alan Cox , Dave Airlie , Patrik Jakobsson , dri-devel@lists.freedesktop.org, stable@vger.kernel.org Subject: [PATCH 4.4 38/64] drm/gma500: Fix out-of-bounds access to struct drm_device.vblank[] Date: Tue, 17 Nov 2020 14:05:01 +0100 Message-Id: <20201117122108.042293389@linuxfoundation.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201117122106.144800239@linuxfoundation.org> References: <20201117122106.144800239@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Thomas Zimmermann commit 06ad8d339524bf94b89859047822c31df6ace239 upstream. The gma500 driver expects 3 pipelines in several it's IRQ functions. Accessing struct drm_device.vblank[], this fails with devices that only have 2 pipelines. An example KASAN report is shown below. 
[ 62.267688] ================================================================== [ 62.268856] BUG: KASAN: slab-out-of-bounds in psb_irq_postinstall+0x250/0x3c0 [gma500_gfx] [ 62.269450] Read of size 1 at addr ffff8880012bc6d0 by task systemd-udevd/285 [ 62.269949] [ 62.270192] CPU: 0 PID: 285 Comm: systemd-udevd Tainted: G E 5.10.0-rc1-1-default+ #572 [ 62.270807] Hardware name: /DN2800MT, BIOS MTCDT10N.86A.0164.2012.1213.1024 12/13/2012 [ 62.271366] Call Trace: [ 62.271705] dump_stack+0xae/0xe5 [ 62.272180] print_address_description.constprop.0+0x17/0xf0 [ 62.272987] ? psb_irq_postinstall+0x250/0x3c0 [gma500_gfx] [ 62.273474] __kasan_report.cold+0x20/0x38 [ 62.273989] ? psb_irq_postinstall+0x250/0x3c0 [gma500_gfx] [ 62.274460] kasan_report+0x3a/0x50 [ 62.274891] psb_irq_postinstall+0x250/0x3c0 [gma500_gfx] [ 62.275380] drm_irq_install+0x131/0x1f0 <...> [ 62.300751] Allocated by task 285: [ 62.301223] kasan_save_stack+0x1b/0x40 [ 62.301731] __kasan_kmalloc.constprop.0+0xbf/0xd0 [ 62.302293] drmm_kmalloc+0x55/0x100 [ 62.302773] drm_vblank_init+0x77/0x210 Resolve the issue by only handling vblank entries up to the number of CRTCs. I'm adding a Fixes tag for reference, although the bug has been present since the driver's initial commit. Signed-off-by: Thomas Zimmermann Reviewed-by: Daniel Vetter Fixes: 5c49fd3aa0ab ("gma500: Add the core DRM files and headers") Cc: Alan Cox Cc: Dave Airlie Cc: Patrik Jakobsson Cc: dri-devel@lists.freedesktop.org Cc: stable@vger.kernel.org#v3.3+ Link: https://patchwork.freedesktop.org/patch/msgid/20201105190256.3893-1-tzimmermann@suse.de Signed-off-by: Greg Kroah-Hartman --- drivers/gpu/drm/gma500/psb_irq.c | 34 ++++++++++++---------------------- 1 file changed, 12 insertions(+), 22 deletions(-) --- a/drivers/gpu/drm/gma500/psb_irq.c +++ b/drivers/gpu/drm/gma500/psb_irq.c @@ -350,6 +350,7 @@ int psb_irq_postinstall(struct drm_devic { struct drm_psb_private *dev_priv = dev->dev_private; unsigned long irqflags; + unsigned int i; spin_lock_irqsave(&dev_priv->irqmask_lock, irqflags); @@ -362,20 +363,12 @@ int psb_irq_postinstall(struct drm_devic PSB_WVDC32(dev_priv->vdc_irq_mask, PSB_INT_ENABLE_R); PSB_WVDC32(0xFFFFFFFF, PSB_HWSTAM); - if (dev->vblank[0].enabled) - psb_enable_pipestat(dev_priv, 0, PIPE_VBLANK_INTERRUPT_ENABLE); - else - psb_disable_pipestat(dev_priv, 0, PIPE_VBLANK_INTERRUPT_ENABLE); - - if (dev->vblank[1].enabled) - psb_enable_pipestat(dev_priv, 1, PIPE_VBLANK_INTERRUPT_ENABLE); - else - psb_disable_pipestat(dev_priv, 1, PIPE_VBLANK_INTERRUPT_ENABLE); - - if (dev->vblank[2].enabled) - psb_enable_pipestat(dev_priv, 2, PIPE_VBLANK_INTERRUPT_ENABLE); - else - psb_disable_pipestat(dev_priv, 2, PIPE_VBLANK_INTERRUPT_ENABLE); + for (i = 0; i < dev->num_crtcs; ++i) { + if (dev->vblank[i].enabled) + psb_enable_pipestat(dev_priv, i, PIPE_VBLANK_INTERRUPT_ENABLE); + else + psb_disable_pipestat(dev_priv, i, PIPE_VBLANK_INTERRUPT_ENABLE); + } if (dev_priv->ops->hotplug_enable) dev_priv->ops->hotplug_enable(dev, true); @@ -388,6 +381,7 @@ void psb_irq_uninstall(struct drm_device { struct drm_psb_private *dev_priv = dev->dev_private; unsigned long irqflags; + unsigned int i; spin_lock_irqsave(&dev_priv->irqmask_lock, irqflags); @@ -396,14 +390,10 @@ void psb_irq_uninstall(struct drm_device PSB_WVDC32(0xFFFFFFFF, PSB_HWSTAM); - if (dev->vblank[0].enabled) - psb_disable_pipestat(dev_priv, 0, PIPE_VBLANK_INTERRUPT_ENABLE); - - if (dev->vblank[1].enabled) - psb_disable_pipestat(dev_priv, 1, PIPE_VBLANK_INTERRUPT_ENABLE); - - if (dev->vblank[2].enabled) - 
psb_disable_pipestat(dev_priv, 2, PIPE_VBLANK_INTERRUPT_ENABLE); + for (i = 0; i < dev->num_crtcs; ++i) { + if (dev->vblank[i].enabled) + psb_disable_pipestat(dev_priv, i, PIPE_VBLANK_INTERRUPT_ENABLE); + } dev_priv->vdc_irq_mask &= _PSB_IRQ_SGX_FLAG | _PSB_IRQ_MSVDX_FLAG | From patchwork Tue Nov 17 13:05:03 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg KH X-Patchwork-Id: 328016 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, HEADER_FROM_DIFFERENT_DOMAINS, MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 61244C63798 for ; Tue, 17 Nov 2020 14:07:15 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 15D6520729 for ; Tue, 17 Nov 2020 14:07:15 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=kernel.org header.i=@kernel.org header.b="MvQMPUou" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729132AbgKQNHl (ORCPT ); Tue, 17 Nov 2020 08:07:41 -0500 Received: from mail.kernel.org ([198.145.29.99]:35326 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729126AbgKQNHi (ORCPT ); Tue, 17 Nov 2020 08:07:38 -0500 Received: from localhost (83-86-74-64.cable.dynamic.v4.ziggo.nl [83.86.74.64]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 009E724698; Tue, 17 Nov 2020 13:07:36 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1605618457; bh=E/ylpNjfTYnYrW8Ivjb7kMwyCcrIvyGzkZxCdGnehTw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=MvQMPUoud2+z7hn6EmPv3AuzMfWKHqFTjhnlWpeLr/HIPOaqq0aAdP/nESmtbP6RN WaFaJkfbAaKyU94PGC7xJ+xkVISkVqrl02wDltrz3Qcyfc2XCZRnx/Y0R+hdF+4FHw 7SBOgfR7Ijz5wuS10Im+dLmVTcaEu9jmYcIAIsck= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, stable@vger.kerne.org, Coiby Xu , Hans de Goede , Linus Walleij Subject: [PATCH 4.4 40/64] pinctrl: amd: fix incorrect way to disable debounce filter Date: Tue, 17 Nov 2020 14:05:03 +0100 Message-Id: <20201117122108.143061960@linuxfoundation.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201117122106.144800239@linuxfoundation.org> References: <20201117122106.144800239@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Coiby Xu commit 06abe8291bc31839950f7d0362d9979edc88a666 upstream. The correct way to disable debounce filter is to clear bit 5 and 6 of the register. 
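[ Illustration, not part of the patch: a standalone sketch of the masking bug fixed below, assuming the debounce control field is the two bits at offset 5, as the message above says. The macro names and values here are stand-ins for the driver's DB_CNTRl_MASK and DB_CNTRL_OFF. Clearing the unshifted mask only touches bits 0-1 and leaves the field set. Build with: cc -o mask mask.c ]

#include <stdio.h>
#include <stdint.h>

#define DB_CNTRL_OFF   5	/* field assumed to live at bits 5..6 */
#define DB_CNTRL_MASK  0x3U	/* 2-bit field mask, unshifted */

int main(void)
{
	uint32_t pin_reg = 0xffffffffU;

	/* Buggy variant: clears bits 0..1 and leaves the debounce control enabled. */
	uint32_t buggy = pin_reg & ~DB_CNTRL_MASK;

	/* Fixed variant: move the mask to the field's position before clearing. */
	uint32_t fixed = pin_reg & ~(DB_CNTRL_MASK << DB_CNTRL_OFF);

	printf("buggy: 0x%08x (bits 5-6 still set)\n", (unsigned)buggy);
	printf("fixed: 0x%08x (bits 5-6 cleared)\n", (unsigned)fixed);
	return 0;
}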
Cc: stable@vger.kerne.org Signed-off-by: Coiby Xu Reviewed-by: Hans de Goede Cc: Hans de Goede Link: https://lore.kernel.org/linux-gpio/df2c008b-e7b5-4fdd-42ea-4d1c62b52139@redhat.com/ Link: https://lore.kernel.org/r/20201105231912.69527-2-coiby.xu@gmail.com Signed-off-by: Linus Walleij Signed-off-by: Greg Kroah-Hartman --- drivers/pinctrl/pinctrl-amd.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) --- a/drivers/pinctrl/pinctrl-amd.c +++ b/drivers/pinctrl/pinctrl-amd.c @@ -154,14 +154,14 @@ static int amd_gpio_set_debounce(struct pin_reg |= BIT(DB_TMR_OUT_UNIT_OFF); pin_reg |= BIT(DB_TMR_LARGE_OFF); } else { - pin_reg &= ~DB_CNTRl_MASK; + pin_reg &= ~(DB_CNTRl_MASK << DB_CNTRL_OFF); ret = -EINVAL; } } else { pin_reg &= ~BIT(DB_TMR_OUT_UNIT_OFF); pin_reg &= ~BIT(DB_TMR_LARGE_OFF); pin_reg &= ~DB_TMR_OUT_MASK; - pin_reg &= ~DB_CNTRl_MASK; + pin_reg &= ~(DB_CNTRl_MASK << DB_CNTRL_OFF); } writel(pin_reg, gpio_dev->base + offset * 4); spin_unlock_irqrestore(&gpio_dev->lock, flags); From patchwork Tue Nov 17 13:05:07 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg KH X-Patchwork-Id: 328015 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, HEADER_FROM_DIFFERENT_DOMAINS, MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id EFDD6C6379F for ; Tue, 17 Nov 2020 14:07:15 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 8D30520729 for ; Tue, 17 Nov 2020 14:07:15 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=kernel.org header.i=@kernel.org header.b="s2Dq6b4D" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729180AbgKQNHv (ORCPT ); Tue, 17 Nov 2020 08:07:51 -0500 Received: from mail.kernel.org ([198.145.29.99]:35770 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729178AbgKQNHt (ORCPT ); Tue, 17 Nov 2020 08:07:49 -0500 Received: from localhost (83-86-74-64.cable.dynamic.v4.ziggo.nl [83.86.74.64]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 89CBA246AE; Tue, 17 Nov 2020 13:07:48 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1605618469; bh=KIjxkzgdOwMJoSiRdDNpZiJS6sgnIB69WrDcBfVwplk=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=s2Dq6b4DXju64uBXL0WPMravvgsc9wl+ensi/bcE55Vo/Z0LymSiiDmiOF3GBWlCJ o+rSDoao+oIT5i2eY7HOzq6u7fDm1fKhWsEvrXzQ7tBypud4+0erBLRzIhd5Y4wJgW pP3+DDnmE3vNm84HXEaDBGCJzNpcSou30Ia37QK8= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Martin Schiller , Xie He , Jakub Kicinski Subject: [PATCH 4.4 44/64] net/x25: Fix null-ptr-deref in x25_connect Date: Tue, 17 Nov 2020 14:05:07 +0100 Message-Id: <20201117122108.337541694@linuxfoundation.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201117122106.144800239@linuxfoundation.org> References: <20201117122106.144800239@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk 
List-ID: X-Mailing-List: stable@vger.kernel.org From: Martin Schiller [ Upstream commit 361182308766a265b6c521879b34302617a8c209 ] This fixes a regression for blocking connects introduced by commit 4becb7ee5b3d ("net/x25: Fix x25_neigh refcnt leak when x25 disconnect"). The x25->neighbour is already set to "NULL" by x25_disconnect() now, while a blocking connect is waiting in x25_wait_for_connection_establishment(). Therefore x25->neighbour must not be accessed here again and x25->state is also already set to X25_STATE_0 by x25_disconnect(). Fixes: 4becb7ee5b3d ("net/x25: Fix x25_neigh refcnt leak when x25 disconnect") Signed-off-by: Martin Schiller Reviewed-by: Xie He Link: https://lore.kernel.org/r/20201109065449.9014-1-ms@dev.tdt.de Signed-off-by: Jakub Kicinski Signed-off-by: Greg Kroah-Hartman --- net/x25/af_x25.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) --- a/net/x25/af_x25.c +++ b/net/x25/af_x25.c @@ -823,7 +823,7 @@ static int x25_connect(struct socket *so sock->state = SS_CONNECTED; rc = 0; out_put_neigh: - if (rc) { + if (rc && x25->neighbour) { read_lock_bh(&x25_list_lock); x25_neigh_put(x25->neighbour); x25->neighbour = NULL; From patchwork Tue Nov 17 13:05:09 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg KH X-Patchwork-Id: 328012 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, HEADER_FROM_DIFFERENT_DOMAINS, MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7E4CBC56202 for ; Tue, 17 Nov 2020 14:07:17 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 1805720773 for ; Tue, 17 Nov 2020 14:07:17 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=kernel.org header.i=@kernel.org header.b="VSVBWkdS" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730753AbgKQOG3 (ORCPT ); Tue, 17 Nov 2020 09:06:29 -0500 Received: from mail.kernel.org ([198.145.29.99]:36204 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729178AbgKQNH4 (ORCPT ); Tue, 17 Nov 2020 08:07:56 -0500 Received: from localhost (83-86-74-64.cable.dynamic.v4.ziggo.nl [83.86.74.64]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 624702468F; Tue, 17 Nov 2020 13:07:54 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1605618474; bh=pfmlilGwH1RotNnP6Jo0WAjnpSG/1TlG1D5/nchrlZg=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=VSVBWkdSuONV425aWv6Ef8voUeQ8tkWiJKx0HH+MRHlrRDYONj54Ib5u4qrUjvZi/ HgdXJkG3zb/HBNGtYCJz96XBx1fM9JHW8M9rpxz/ivz8bXqEsm1g18/VS2LYnTq+5O yRVXWkK0FdsHrbeqYw1C36Tdsqa+tMfwOM6nUHxk= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Amit Klein , Willy Tarreau , Eric Dumazet , "Jason A. 
Donenfeld" , Andy Lutomirski , Kees Cook , Thomas Gleixner , Peter Zijlstra , Linus Torvalds , tytso@mit.edu, Florian Westphal , Marc Plumb , George Spelvin Subject: [PATCH 4.4 46/64] random32: make prandom_u32() output unpredictable Date: Tue, 17 Nov 2020 14:05:09 +0100 Message-Id: <20201117122108.432714457@linuxfoundation.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201117122106.144800239@linuxfoundation.org> References: <20201117122106.144800239@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: George Spelvin commit c51f8f88d705e06bd696d7510aff22b33eb8e638 upstream. Non-cryptographic PRNGs may have great statistical properties, but are usually trivially predictable to someone who knows the algorithm, given a small sample of their output. An LFSR like prandom_u32() is particularly simple, even if the sample is widely scattered bits. It turns out the network stack uses prandom_u32() for some things like random port numbers which it would prefer are *not* trivially predictable. Predictability led to a practical DNS spoofing attack. Oops. This patch replaces the LFSR with a homebrew cryptographic PRNG based on the SipHash round function, which is in turn seeded with 128 bits of strong random key. (The authors of SipHash have *not* been consulted about this abuse of their algorithm.) Speed is prioritized over security; attacks are rare, while performance is always wanted. Replacing all callers of prandom_u32() is the quick fix. Whether to reinstate a weaker PRNG for uses which can tolerate it is an open question. Commit f227e3ec3b5c ("random32: update the net random state on interrupt and activity") was an earlier attempt at a solution. This patch replaces it. Reported-by: Amit Klein Cc: Willy Tarreau Cc: Eric Dumazet Cc: "Jason A. 
Donenfeld" Cc: Andy Lutomirski Cc: Kees Cook Cc: Thomas Gleixner Cc: Peter Zijlstra Cc: Linus Torvalds Cc: tytso@mit.edu Cc: Florian Westphal Cc: Marc Plumb Fixes: f227e3ec3b5c ("random32: update the net random state on interrupt and activity") Signed-off-by: George Spelvin Link: https://lore.kernel.org/netdev/20200808152628.GA27941@SDF.ORG/ [ willy: partial reversal of f227e3ec3b5c; moved SIPROUND definitions to prandom.h for later use; merged George's prandom_seed() proposal; inlined siprand_u32(); replaced the net_rand_state[] array with 4 members to fix a build issue; cosmetic cleanups to make checkpatch happy; fixed RANDOM32_SELFTEST build ] [wt: backported to 4.4 -- no latent_entropy, drop prandom_reseed_late] Signed-off-by: Willy Tarreau Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 2 include/linux/prandom.h | 36 +++ kernel/time/timer.c | 7 lib/random32.c | 463 +++++++++++++++++++++++++++++------------------- 4 files changed, 317 insertions(+), 191 deletions(-) --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -678,7 +678,6 @@ retry: r->initialized = 1; r->entropy_total = 0; if (r == &nonblocking_pool) { - prandom_reseed_late(); process_random_ready_list(); wake_up_all(&urandom_init_wait); pr_notice("random: %s pool is initialized\n", r->name); @@ -923,7 +922,6 @@ void add_interrupt_randomness(int irq, i fast_mix(fast_pool); add_interrupt_bench(cycles); - this_cpu_add(net_rand_state.s1, fast_pool->pool[cycles & 3]); if ((fast_pool->count < 64) && !time_after(now, fast_pool->last + HZ)) --- a/include/linux/prandom.h +++ b/include/linux/prandom.h @@ -16,12 +16,44 @@ void prandom_bytes(void *buf, size_t nby void prandom_seed(u32 seed); void prandom_reseed_late(void); +#if BITS_PER_LONG == 64 +/* + * The core SipHash round function. Each line can be executed in + * parallel given enough CPU resources. + */ +#define PRND_SIPROUND(v0, v1, v2, v3) ( \ + v0 += v1, v1 = rol64(v1, 13), v2 += v3, v3 = rol64(v3, 16), \ + v1 ^= v0, v0 = rol64(v0, 32), v3 ^= v2, \ + v0 += v3, v3 = rol64(v3, 21), v2 += v1, v1 = rol64(v1, 17), \ + v3 ^= v0, v1 ^= v2, v2 = rol64(v2, 32) \ +) + +#define PRND_K0 (0x736f6d6570736575 ^ 0x6c7967656e657261) +#define PRND_K1 (0x646f72616e646f6d ^ 0x7465646279746573) + +#elif BITS_PER_LONG == 32 +/* + * On 32-bit machines, we use HSipHash, a reduced-width version of SipHash. + * This is weaker, but 32-bit machines are not used for high-traffic + * applications, so there is less output for an attacker to analyze. + */ +#define PRND_SIPROUND(v0, v1, v2, v3) ( \ + v0 += v1, v1 = rol32(v1, 5), v2 += v3, v3 = rol32(v3, 8), \ + v1 ^= v0, v0 = rol32(v0, 16), v3 ^= v2, \ + v0 += v3, v3 = rol32(v3, 7), v2 += v1, v1 = rol32(v1, 13), \ + v3 ^= v0, v1 ^= v2, v2 = rol32(v2, 16) \ +) +#define PRND_K0 0x6c796765 +#define PRND_K1 0x74656462 + +#else +#error Unsupported BITS_PER_LONG +#endif + struct rnd_state { __u32 s1, s2, s3, s4; }; -DECLARE_PER_CPU(struct rnd_state, net_rand_state); - u32 prandom_u32_state(struct rnd_state *state); void prandom_bytes_state(struct rnd_state *state, void *buf, size_t nbytes); void prandom_seed_full_state(struct rnd_state __percpu *pcpu_state); --- a/kernel/time/timer.c +++ b/kernel/time/timer.c @@ -1432,13 +1432,6 @@ void update_process_times(int user_tick) #endif scheduler_tick(); run_posix_cpu_timers(p); - - /* The current CPU might make use of net randoms without receiving IRQs - * to renew them often enough. 
Let's update the net_rand_state from a - * non-constant value that's not affine to the number of calls to make - * sure it's updated when there's some activity (we don't care in idle). - */ - this_cpu_add(net_rand_state.s1, rol32(jiffies, 24) + user_tick); } /* --- a/lib/random32.c +++ b/lib/random32.c @@ -39,16 +39,6 @@ #include #include -#ifdef CONFIG_RANDOM32_SELFTEST -static void __init prandom_state_selftest(void); -#else -static inline void prandom_state_selftest(void) -{ -} -#endif - -DEFINE_PER_CPU(struct rnd_state, net_rand_state); - /** * prandom_u32_state - seeded pseudo-random number generator. * @state: pointer to state structure holding seeded state. @@ -69,25 +59,6 @@ u32 prandom_u32_state(struct rnd_state * EXPORT_SYMBOL(prandom_u32_state); /** - * prandom_u32 - pseudo random number generator - * - * A 32 bit pseudo-random number is generated using a fast - * algorithm suitable for simulation. This algorithm is NOT - * considered safe for cryptographic use. - */ -u32 prandom_u32(void) -{ - struct rnd_state *state = &get_cpu_var(net_rand_state); - u32 res; - - res = prandom_u32_state(state); - put_cpu_var(state); - - return res; -} -EXPORT_SYMBOL(prandom_u32); - -/** * prandom_bytes_state - get the requested number of pseudo-random bytes * * @state: pointer to state structure holding seeded state. @@ -118,20 +89,6 @@ void prandom_bytes_state(struct rnd_stat } EXPORT_SYMBOL(prandom_bytes_state); -/** - * prandom_bytes - get the requested number of pseudo-random bytes - * @buf: where to copy the pseudo-random bytes to - * @bytes: the requested number of bytes - */ -void prandom_bytes(void *buf, size_t bytes) -{ - struct rnd_state *state = &get_cpu_var(net_rand_state); - - prandom_bytes_state(state, buf, bytes); - put_cpu_var(state); -} -EXPORT_SYMBOL(prandom_bytes); - static void prandom_warmup(struct rnd_state *state) { /* Calling RNG ten times to satisfy recurrence condition */ @@ -147,97 +104,6 @@ static void prandom_warmup(struct rnd_st prandom_u32_state(state); } -static u32 __extract_hwseed(void) -{ - unsigned int val = 0; - - (void)(arch_get_random_seed_int(&val) || - arch_get_random_int(&val)); - - return val; -} - -static void prandom_seed_early(struct rnd_state *state, u32 seed, - bool mix_with_hwseed) -{ -#define LCG(x) ((x) * 69069U) /* super-duper LCG */ -#define HWSEED() (mix_with_hwseed ? __extract_hwseed() : 0) - state->s1 = __seed(HWSEED() ^ LCG(seed), 2U); - state->s2 = __seed(HWSEED() ^ LCG(state->s1), 8U); - state->s3 = __seed(HWSEED() ^ LCG(state->s2), 16U); - state->s4 = __seed(HWSEED() ^ LCG(state->s3), 128U); -} - -/** - * prandom_seed - add entropy to pseudo random number generator - * @seed: seed value - * - * Add some additional seeding to the prandom pool. - */ -void prandom_seed(u32 entropy) -{ - int i; - /* - * No locking on the CPUs, but then somewhat random results are, well, - * expected. - */ - for_each_possible_cpu(i) { - struct rnd_state *state = &per_cpu(net_rand_state, i); - - state->s1 = __seed(state->s1 ^ entropy, 2U); - prandom_warmup(state); - } -} -EXPORT_SYMBOL(prandom_seed); - -/* - * Generate some initially weak seeding values to allow - * to start the prandom_u32() engine. 
- */ -static int __init prandom_init(void) -{ - int i; - - prandom_state_selftest(); - - for_each_possible_cpu(i) { - struct rnd_state *state = &per_cpu(net_rand_state, i); - u32 weak_seed = (i + jiffies) ^ random_get_entropy(); - - prandom_seed_early(state, weak_seed, true); - prandom_warmup(state); - } - - return 0; -} -core_initcall(prandom_init); - -static void __prandom_timer(unsigned long dontcare); - -static DEFINE_TIMER(seed_timer, __prandom_timer, 0, 0); - -static void __prandom_timer(unsigned long dontcare) -{ - u32 entropy; - unsigned long expires; - - get_random_bytes(&entropy, sizeof(entropy)); - prandom_seed(entropy); - - /* reseed every ~60 seconds, in [40 .. 80) interval with slack */ - expires = 40 + prandom_u32_max(40); - seed_timer.expires = jiffies + msecs_to_jiffies(expires * MSEC_PER_SEC); - - add_timer(&seed_timer); -} - -static void __init __prandom_start_seed_timer(void) -{ - set_timer_slack(&seed_timer, HZ); - seed_timer.expires = jiffies + msecs_to_jiffies(40 * MSEC_PER_SEC); - add_timer(&seed_timer); -} - void prandom_seed_full_state(struct rnd_state __percpu *pcpu_state) { int i; @@ -256,51 +122,6 @@ void prandom_seed_full_state(struct rnd_ } } -/* - * Generate better values after random number generator - * is fully initialized. - */ -static void __prandom_reseed(bool late) -{ - unsigned long flags; - static bool latch = false; - static DEFINE_SPINLOCK(lock); - - /* Asking for random bytes might result in bytes getting - * moved into the nonblocking pool and thus marking it - * as initialized. In this case we would double back into - * this function and attempt to do a late reseed. - * Ignore the pointless attempt to reseed again if we're - * already waiting for bytes when the nonblocking pool - * got initialized. - */ - - /* only allow initial seeding (late == false) once */ - if (!spin_trylock_irqsave(&lock, flags)) - return; - - if (latch && !late) - goto out; - - latch = true; - prandom_seed_full_state(&net_rand_state); -out: - spin_unlock_irqrestore(&lock, flags); -} - -void prandom_reseed_late(void) -{ - __prandom_reseed(true); -} - -static int __init prandom_reseed(void) -{ - __prandom_reseed(false); - __prandom_start_seed_timer(); - return 0; -} -late_initcall(prandom_reseed); - #ifdef CONFIG_RANDOM32_SELFTEST static struct prandom_test1 { u32 seed; @@ -420,7 +241,28 @@ static struct prandom_test2 { { 407983964U, 921U, 728767059U }, }; -static void __init prandom_state_selftest(void) +static u32 __extract_hwseed(void) +{ + unsigned int val = 0; + + (void)(arch_get_random_seed_int(&val) || + arch_get_random_int(&val)); + + return val; +} + +static void prandom_seed_early(struct rnd_state *state, u32 seed, + bool mix_with_hwseed) +{ +#define LCG(x) ((x) * 69069U) /* super-duper LCG */ +#define HWSEED() (mix_with_hwseed ? __extract_hwseed() : 0) + state->s1 = __seed(HWSEED() ^ LCG(seed), 2U); + state->s2 = __seed(HWSEED() ^ LCG(state->s1), 8U); + state->s3 = __seed(HWSEED() ^ LCG(state->s2), 16U); + state->s4 = __seed(HWSEED() ^ LCG(state->s3), 128U); +} + +static int __init prandom_state_selftest(void) { int i, j, errors = 0, runs = 0; bool error = false; @@ -460,5 +302,266 @@ static void __init prandom_state_selftes pr_warn("prandom: %d/%d self tests failed\n", errors, runs); else pr_info("prandom: %d self tests passed\n", runs); + return 0; +} +core_initcall(prandom_state_selftest); +#endif + +/* + * The prandom_u32() implementation is now completely separate from the + * prandom_state() functions, which are retained (for now) for compatibility. 
+ * + * Because of (ab)use in the networking code for choosing random TCP/UDP port + * numbers, which open DoS possibilities if guessable, we want something + * stronger than a standard PRNG. But the performance requirements of + * the network code do not allow robust crypto for this application. + * + * So this is a homebrew Junior Spaceman implementation, based on the + * lowest-latency trustworthy crypto primitive available, SipHash. + * (The authors of SipHash have not been consulted about this abuse of + * their work.) + * + * Standard SipHash-2-4 uses 2n+4 rounds to hash n words of input to + * one word of output. This abbreviated version uses 2 rounds per word + * of output. + */ + +struct siprand_state { + unsigned long v0; + unsigned long v1; + unsigned long v2; + unsigned long v3; +}; + +static DEFINE_PER_CPU(struct siprand_state, net_rand_state); + +/* + * This is the core CPRNG function. As "pseudorandom", this is not used + * for truly valuable things, just intended to be a PITA to guess. + * For maximum speed, we do just two SipHash rounds per word. This is + * the same rate as 4 rounds per 64 bits that SipHash normally uses, + * so hopefully it's reasonably secure. + * + * There are two changes from the official SipHash finalization: + * - We omit some constants XORed with v2 in the SipHash spec as irrelevant; + * they are there only to make the output rounds distinct from the input + * rounds, and this application has no input rounds. + * - Rather than returning v0^v1^v2^v3, return v1+v3. + * If you look at the SipHash round, the last operation on v3 is + * "v3 ^= v0", so "v0 ^ v3" just undoes that, a waste of time. + * Likewise "v1 ^= v2". (The rotate of v2 makes a difference, but + * it still cancels out half of the bits in v2 for no benefit.) + * Second, since the last combining operation was xor, continue the + * pattern of alternating xor/add for a tiny bit of extra non-linearity. + */ +static inline u32 siprand_u32(struct siprand_state *s) +{ + unsigned long v0 = s->v0, v1 = s->v1, v2 = s->v2, v3 = s->v3; + + PRND_SIPROUND(v0, v1, v2, v3); + PRND_SIPROUND(v0, v1, v2, v3); + s->v0 = v0; s->v1 = v1; s->v2 = v2; s->v3 = v3; + return v1 + v3; +} + + +/** + * prandom_u32 - pseudo random number generator + * + * A 32 bit pseudo-random number is generated using a fast + * algorithm suitable for simulation. This algorithm is NOT + * considered safe for cryptographic use. + */ +u32 prandom_u32(void) +{ + struct siprand_state *state = get_cpu_ptr(&net_rand_state); + u32 res = siprand_u32(state); + + put_cpu_ptr(&net_rand_state); + return res; +} +EXPORT_SYMBOL(prandom_u32); + +/** + * prandom_bytes - get the requested number of pseudo-random bytes + * @buf: where to copy the pseudo-random bytes to + * @bytes: the requested number of bytes + */ +void prandom_bytes(void *buf, size_t bytes) +{ + struct siprand_state *state = get_cpu_ptr(&net_rand_state); + u8 *ptr = buf; + + while (bytes >= sizeof(u32)) { + put_unaligned(siprand_u32(state), (u32 *)ptr); + ptr += sizeof(u32); + bytes -= sizeof(u32); + } + + if (bytes > 0) { + u32 rem = siprand_u32(state); + + do { + *ptr++ = (u8)rem; + rem >>= BITS_PER_BYTE; + } while (--bytes > 0); + } + put_cpu_ptr(&net_rand_state); } +EXPORT_SYMBOL(prandom_bytes); + +/** + * prandom_seed - add entropy to pseudo random number generator + * @entropy: entropy value + * + * Add some additional seed material to the prandom pool. 
+ * The "entropy" is actually our IP address (the only caller is + * the network code), not for unpredictability, but to ensure that + * different machines are initialized differently. + */ +void prandom_seed(u32 entropy) +{ + int i; + + add_device_randomness(&entropy, sizeof(entropy)); + + for_each_possible_cpu(i) { + struct siprand_state *state = per_cpu_ptr(&net_rand_state, i); + unsigned long v0 = state->v0, v1 = state->v1; + unsigned long v2 = state->v2, v3 = state->v3; + + do { + v3 ^= entropy; + PRND_SIPROUND(v0, v1, v2, v3); + PRND_SIPROUND(v0, v1, v2, v3); + v0 ^= entropy; + } while (unlikely(!v0 || !v1 || !v2 || !v3)); + + WRITE_ONCE(state->v0, v0); + WRITE_ONCE(state->v1, v1); + WRITE_ONCE(state->v2, v2); + WRITE_ONCE(state->v3, v3); + } +} +EXPORT_SYMBOL(prandom_seed); + +/* + * Generate some initially weak seeding values to allow + * the prandom_u32() engine to be started. + */ +static int __init prandom_init_early(void) +{ + int i; + unsigned long v0, v1, v2, v3; + + if (!arch_get_random_long(&v0)) + v0 = jiffies; + if (!arch_get_random_long(&v1)) + v1 = random_get_entropy(); + v2 = v0 ^ PRND_K0; + v3 = v1 ^ PRND_K1; + + for_each_possible_cpu(i) { + struct siprand_state *state; + + v3 ^= i; + PRND_SIPROUND(v0, v1, v2, v3); + PRND_SIPROUND(v0, v1, v2, v3); + v0 ^= i; + + state = per_cpu_ptr(&net_rand_state, i); + state->v0 = v0; state->v1 = v1; + state->v2 = v2; state->v3 = v3; + } + + return 0; +} +core_initcall(prandom_init_early); + + +/* Stronger reseeding when available, and periodically thereafter. */ +static void prandom_reseed(unsigned long dontcare); + +static DEFINE_TIMER(seed_timer, prandom_reseed, 0, 0); + +static void prandom_reseed(unsigned long dontcare) +{ + unsigned long expires; + int i; + + /* + * Reinitialize each CPU's PRNG with 128 bits of key. + * No locking on the CPUs, but then somewhat random results are, + * well, expected. + */ + for_each_possible_cpu(i) { + struct siprand_state *state; + unsigned long v0 = get_random_long(), v2 = v0 ^ PRND_K0; + unsigned long v1 = get_random_long(), v3 = v1 ^ PRND_K1; +#if BITS_PER_LONG == 32 + int j; + + /* + * On 32-bit machines, hash in two extra words to + * approximate 128-bit key length. Not that the hash + * has that much security, but this prevents a trivial + * 64-bit brute force. + */ + for (j = 0; j < 2; j++) { + unsigned long m = get_random_long(); + + v3 ^= m; + PRND_SIPROUND(v0, v1, v2, v3); + PRND_SIPROUND(v0, v1, v2, v3); + v0 ^= m; + } #endif + /* + * Probably impossible in practice, but there is a + * theoretical risk that a race between this reseeding + * and the target CPU writing its state back could + * create the all-zero SipHash fixed point. + * + * To ensure that never happens, ensure the state + * we write contains no zero words. + */ + state = per_cpu_ptr(&net_rand_state, i); + WRITE_ONCE(state->v0, v0 ? v0 : -1ul); + WRITE_ONCE(state->v1, v1 ? v1 : -1ul); + WRITE_ONCE(state->v2, v2 ? v2 : -1ul); + WRITE_ONCE(state->v3, v3 ? v3 : -1ul); + } + + /* reseed every ~60 seconds, in [40 .. 80) interval with slack */ + expires = round_jiffies(jiffies + 40 * HZ + prandom_u32_max(40 * HZ)); + mod_timer(&seed_timer, expires); +} + +/* + * The random ready callback can be called from almost any interrupt. + * To avoid worrying about whether it's safe to delay that interrupt + * long enough to seed all CPUs, just schedule an immediate timer event. 
+ */ +static void prandom_timer_start(struct random_ready_callback *unused) +{ + mod_timer(&seed_timer, jiffies); +} + +/* + * Start periodic full reseeding as soon as strong + * random numbers are available. + */ +static int __init prandom_init_late(void) +{ + static struct random_ready_callback random_ready = { + .func = prandom_timer_start + }; + int ret = add_random_ready_callback(&random_ready); + + if (ret == -EALREADY) { + prandom_timer_start(&random_ready); + ret = 0; + } + return ret; +} +late_initcall(prandom_init_late); From patchwork Tue Nov 17 13:05:10 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg KH X-Patchwork-Id: 325458 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, HEADER_FROM_DIFFERENT_DOMAINS, MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B19EEC64E75 for ; Tue, 17 Nov 2020 13:08:13 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 5F0002225B for ; Tue, 17 Nov 2020 13:08:13 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=kernel.org header.i=@kernel.org header.b="TfGjB7+g" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729217AbgKQNIC (ORCPT ); Tue, 17 Nov 2020 08:08:02 -0500 Received: from mail.kernel.org ([198.145.29.99]:36512 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729215AbgKQNIB (ORCPT ); Tue, 17 Nov 2020 08:08:01 -0500 Received: from localhost (83-86-74-64.cable.dynamic.v4.ziggo.nl [83.86.74.64]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 228CD246BF; Tue, 17 Nov 2020 13:07:59 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1605618480; bh=NsSMxTTXiXeT1xzRNNesefAXSNHTiC/wjqSL1jedHOI=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=TfGjB7+gXdYGWBTPuWuLcoJz5kM55xLxyomJ8Zx8ppZtXjJ19GXA0GmBm+NVNArhp w4HPj3lSwmqLlzdWHkqC2UxjT6JpsxxU369WYAcdGYcS/ki8Q+rMph5GxqznvPwv7v 6SgpZZCKHmwIYZaF1gfdmzBCjOTPnLQwQG2zTWWU= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Anand K Mistry , Borislav Petkov , Thomas Gleixner , Tom Lendacky Subject: [PATCH 4.4 47/64] x86/speculation: Allow IBPB to be conditionally enabled on CPUs with always-on STIBP Date: Tue, 17 Nov 2020 14:05:10 +0100 Message-Id: <20201117122108.484682239@linuxfoundation.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201117122106.144800239@linuxfoundation.org> References: <20201117122106.144800239@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Anand K Mistry commit 1978b3a53a74e3230cd46932b149c6e62e832e9a upstream. On AMD CPUs which have the feature X86_FEATURE_AMD_STIBP_ALWAYS_ON, STIBP is set to on and spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED At the same time, IBPB can be set to conditional. 
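For reference, the per-task control that ib_prctl_set() and ib_prctl_get() implement is driven from user space through the speculation-control prctl() interface. A minimal sketch of such a caller (assuming PR_SET_SPECULATION_CTRL and the PR_SPEC_* constants are provided by <linux/prctl.h>; this is an illustration, not part of the patch):

#include <stdio.h>
#include <sys/prctl.h>
#include <linux/prctl.h>

int main(void)
{
	/* Ask for indirect branch speculation to be disabled, i.e.
	 * request IBPB/STIBP protection for this task.
	 */
	if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH,
		  PR_SPEC_DISABLE, 0, 0))
		perror("PR_SET_SPECULATION_CTRL");

	/* Query the effective state; the result is a PR_SPEC_* bitmask. */
	printf("indirect branch state: %d\n",
	       prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH,
		     0, 0, 0));
	return 0;
}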
However, this leads to the case where it's impossible to turn on IBPB for a process because in the PR_SPEC_DISABLE case in ib_prctl_set() the spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED condition leads to a return before the task flag is set. Similarly, ib_prctl_get() will return PR_SPEC_DISABLE even though IBPB is set to conditional. More generally, the following cases are possible: 1. STIBP = conditional && IBPB = on for spectre_v2_user=seccomp,ibpb 2. STIBP = on && IBPB = conditional for AMD CPUs with X86_FEATURE_AMD_STIBP_ALWAYS_ON The first case functions correctly today, but only because spectre_v2_user_ibpb isn't updated to reflect the IBPB mode. At a high level, this change does one thing. If either STIBP or IBPB is set to conditional, allow the prctl to change the task flag. Also, reflect that capability when querying the state. This isn't perfect since it doesn't take into account if only STIBP or IBPB is unconditionally on. But it allows the conditional feature to work as expected, without affecting the unconditional one. [ bp: Massage commit message and comment; space out statements for better readability. ] Fixes: 21998a351512 ("x86/speculation: Avoid force-disabling IBPB based on STIBP and enhanced IBRS.") Signed-off-by: Anand K Mistry Signed-off-by: Borislav Petkov Acked-by: Thomas Gleixner Acked-by: Tom Lendacky Link: https://lkml.kernel.org/r/20201105163246.v2.1.Ifd7243cd3e2c2206a893ad0a5b9a4f19549e22c6@changeid Signed-off-by: Greg Kroah-Hartman --- arch/x86/kernel/cpu/bugs.c | 52 ++++++++++++++++++++++++++++----------------- 1 file changed, 33 insertions(+), 19 deletions(-) --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -1223,6 +1223,14 @@ static int ssb_prctl_set(struct task_str return 0; } +static bool is_spec_ib_user_controlled(void) +{ + return spectre_v2_user_ibpb == SPECTRE_V2_USER_PRCTL || + spectre_v2_user_ibpb == SPECTRE_V2_USER_SECCOMP || + spectre_v2_user_stibp == SPECTRE_V2_USER_PRCTL || + spectre_v2_user_stibp == SPECTRE_V2_USER_SECCOMP; +} + static int ib_prctl_set(struct task_struct *task, unsigned long ctrl) { switch (ctrl) { @@ -1230,17 +1238,26 @@ static int ib_prctl_set(struct task_stru if (spectre_v2_user_ibpb == SPECTRE_V2_USER_NONE && spectre_v2_user_stibp == SPECTRE_V2_USER_NONE) return 0; - /* - * Indirect branch speculation is always disabled in strict - * mode. It can neither be enabled if it was force-disabled - * by a previous prctl call. + /* + * With strict mode for both IBPB and STIBP, the instruction + * code paths avoid checking this task flag and instead, + * unconditionally run the instruction. However, STIBP and IBPB + * are independent and either can be set to conditionally + * enabled regardless of the mode of the other. + * + * If either is set to conditional, allow the task flag to be + * updated, unless it was force-disabled by a previous prctl + * call. Currently, this is possible on an AMD CPU which has the + * feature X86_FEATURE_AMD_STIBP_ALWAYS_ON. In this case, if the + * kernel is booted with 'spectre_v2_user=seccomp', then + * spectre_v2_user_ibpb == SPECTRE_V2_USER_SECCOMP and + * spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED. 
*/ - if (spectre_v2_user_ibpb == SPECTRE_V2_USER_STRICT || - spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT || - spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED || + if (!is_spec_ib_user_controlled() || task_spec_ib_force_disable(task)) return -EPERM; + task_clear_spec_ib_disable(task); task_update_spec_tif(task); break; @@ -1253,10 +1270,10 @@ static int ib_prctl_set(struct task_stru if (spectre_v2_user_ibpb == SPECTRE_V2_USER_NONE && spectre_v2_user_stibp == SPECTRE_V2_USER_NONE) return -EPERM; - if (spectre_v2_user_ibpb == SPECTRE_V2_USER_STRICT || - spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT || - spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED) + + if (!is_spec_ib_user_controlled()) return 0; + task_set_spec_ib_disable(task); if (ctrl == PR_SPEC_FORCE_DISABLE) task_set_spec_ib_force_disable(task); @@ -1319,20 +1336,17 @@ static int ib_prctl_get(struct task_stru if (spectre_v2_user_ibpb == SPECTRE_V2_USER_NONE && spectre_v2_user_stibp == SPECTRE_V2_USER_NONE) return PR_SPEC_ENABLE; - else if (spectre_v2_user_ibpb == SPECTRE_V2_USER_STRICT || - spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT || - spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED) - return PR_SPEC_DISABLE; - else if (spectre_v2_user_ibpb == SPECTRE_V2_USER_PRCTL || - spectre_v2_user_ibpb == SPECTRE_V2_USER_SECCOMP || - spectre_v2_user_stibp == SPECTRE_V2_USER_PRCTL || - spectre_v2_user_stibp == SPECTRE_V2_USER_SECCOMP) { + else if (is_spec_ib_user_controlled()) { if (task_spec_ib_force_disable(task)) return PR_SPEC_PRCTL | PR_SPEC_FORCE_DISABLE; if (task_spec_ib_disable(task)) return PR_SPEC_PRCTL | PR_SPEC_DISABLE; return PR_SPEC_PRCTL | PR_SPEC_ENABLE; - } else + } else if (spectre_v2_user_ibpb == SPECTRE_V2_USER_STRICT || + spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT || + spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED) + return PR_SPEC_DISABLE; + else return PR_SPEC_NOT_AFFECTED; } From patchwork Tue Nov 17 13:05:12 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg KH X-Patchwork-Id: 328017 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, HEADER_FROM_DIFFERENT_DOMAINS, MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6E9FBC64E7D for ; Tue, 17 Nov 2020 14:06:25 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id B0DFE221FC for ; Tue, 17 Nov 2020 14:06:23 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=kernel.org header.i=@kernel.org header.b="GMkPwYYL" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729224AbgKQNIM (ORCPT ); Tue, 17 Nov 2020 08:08:12 -0500 Received: from mail.kernel.org ([198.145.29.99]:36642 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729236AbgKQNIJ (ORCPT ); Tue, 17 Nov 2020 08:08:09 -0500 Received: from localhost (83-86-74-64.cable.dynamic.v4.ziggo.nl [83.86.74.64]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 330BA2225B; Tue, 
17 Nov 2020 13:08:07 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1605618487; bh=cBaCki1wSPt+czcXSB+QD0oaExPggu2lSlynJL3wzKc=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=GMkPwYYLSZtIvHMNMNsj1Xo7NxYrRkH3cf8N8yUZSpp+6LlvcXQlr9gqX1om+yP44 XDJePKLBoT0sgaKrZ2NwnMvIxd+6NJy9m7Sqv5aNtwfsmX0lBcu84utGD3lsP9UlVi Rcz9Otz7b7P30wd2r0BVEZgKyKsOB6ONqobOsunw= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , Julien Grall , Juergen Gross , Julien Grall , Wei Liu Subject: [PATCH 4.4 49/64] xen/events: add a proper barrier to 2-level uevent unmasking Date: Tue, 17 Nov 2020 14:05:12 +0100 Message-Id: <20201117122108.587012647@linuxfoundation.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201117122106.144800239@linuxfoundation.org> References: <20201117122106.144800239@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Juergen Gross commit 4d3fe31bd993ef504350989786858aefdb877daa upstream. A follow-up patch will require certain write to happen before an event channel is unmasked. While the memory barrier is not strictly necessary for all the callers, the main one will need it. In order to avoid an extra memory barrier when using fifo event channels, mandate evtchn_unmask() to provide write ordering. The 2-level event handling unmask operation is missing an appropriate barrier, so add it. Fifo event channels are fine in this regard due to using sync_cmpxchg(). This is part of XSA-332. Cc: stable@vger.kernel.org Suggested-by: Julien Grall Signed-off-by: Juergen Gross Reviewed-by: Julien Grall Reviewed-by: Wei Liu Signed-off-by: Greg Kroah-Hartman --- drivers/xen/events/events_2l.c | 2 ++ 1 file changed, 2 insertions(+) --- a/drivers/xen/events/events_2l.c +++ b/drivers/xen/events/events_2l.c @@ -90,6 +90,8 @@ static void evtchn_2l_unmask(unsigned po BUG_ON(!irqs_disabled()); + smp_wmb(); /* All writes before unmask must be visible. 
*/ + if (unlikely((cpu != cpu_from_evtchn(port)))) do_hypercall = 1; else { From patchwork Tue Nov 17 13:05:13 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg KH X-Patchwork-Id: 325456 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, HEADER_FROM_DIFFERENT_DOMAINS, MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1F644C64E8A for ; Tue, 17 Nov 2020 13:08:15 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id BC406221EB for ; Tue, 17 Nov 2020 13:08:14 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=kernel.org header.i=@kernel.org header.b="VoZkGbAS" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729244AbgKQNIM (ORCPT ); Tue, 17 Nov 2020 08:08:12 -0500 Received: from mail.kernel.org ([198.145.29.99]:36684 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729242AbgKQNIL (ORCPT ); Tue, 17 Nov 2020 08:08:11 -0500 Received: from localhost (83-86-74-64.cable.dynamic.v4.ziggo.nl [83.86.74.64]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 15414238E6; Tue, 17 Nov 2020 13:08:09 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1605618490; bh=cFm+X33idwFBFy3k9VmCTbLnVijn82oAwo+YDVz0Upk=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=VoZkGbASsPu2rkn5bWMJ8LhQfgBjzSjoX0AFfUUiefoGdxaDsEDNHKkYPwiToD9Ko dgvh3q/EGBPA1ukHEtIyKigHoi1LFHhz+WIq/glpiLvm+cxsJgdFkvq8GPThEIBbzg aKmD+0WB9qLoROmrqJVnbHxdP3NE4BR4NdOtsduY= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , Juergen Gross , Jan Beulich Subject: [PATCH 4.4 50/64] xen/events: fix race in evtchn_fifo_unmask() Date: Tue, 17 Nov 2020 14:05:13 +0100 Message-Id: <20201117122108.636669729@linuxfoundation.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201117122106.144800239@linuxfoundation.org> References: <20201117122106.144800239@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Juergen Gross commit f01337197419b7e8a492e83089552b77d3b5fb90 upstream. Unmasking a fifo event channel can result in unmasking it twice, once directly in the kernel and once via a hypercall in case the event was pending. Fix that by doing the local unmask only if the event is not pending. This is part of XSA-332. Cc: stable@vger.kernel.org Signed-off-by: Juergen Gross Reviewed-by: Jan Beulich Signed-off-by: Greg Kroah-Hartman --- drivers/xen/events/events_fifo.c | 13 +++++++++---- 1 file changed, 9 insertions(+), 4 deletions(-) --- a/drivers/xen/events/events_fifo.c +++ b/drivers/xen/events/events_fifo.c @@ -227,19 +227,25 @@ static bool evtchn_fifo_is_masked(unsign return sync_test_bit(EVTCHN_FIFO_BIT(MASKED, word), BM(word)); } /* - * Clear MASKED, spinning if BUSY is set. + * Clear MASKED if not PENDING, spinning if BUSY is set. + * Return true if mask was cleared. 
*/ -static void clear_masked(volatile event_word_t *word) +static bool clear_masked_cond(volatile event_word_t *word) { event_word_t new, old, w; w = *word; do { + if (w & (1 << EVTCHN_FIFO_PENDING)) + return false; + old = w & ~(1 << EVTCHN_FIFO_BUSY); new = old & ~(1 << EVTCHN_FIFO_MASKED); w = sync_cmpxchg(word, old, new); } while (w != old); + + return true; } static void evtchn_fifo_unmask(unsigned port) @@ -248,8 +254,7 @@ static void evtchn_fifo_unmask(unsigned BUG_ON(!irqs_disabled()); - clear_masked(word); - if (evtchn_fifo_is_pending(port)) { + if (!clear_masked_cond(word)) { struct evtchn_unmask unmask = { .port = port }; (void)HYPERVISOR_event_channel_op(EVTCHNOP_unmask, &unmask); } From patchwork Tue Nov 17 13:05:16 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg KH X-Patchwork-Id: 325455 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, HEADER_FROM_DIFFERENT_DOMAINS, MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1B500C63697 for ; Tue, 17 Nov 2020 13:08:53 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C89B9246BB for ; Tue, 17 Nov 2020 13:08:52 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=kernel.org header.i=@kernel.org header.b="WSnNlx74" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729262AbgKQNIV (ORCPT ); Tue, 17 Nov 2020 08:08:21 -0500 Received: from mail.kernel.org ([198.145.29.99]:36822 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728412AbgKQNIU (ORCPT ); Tue, 17 Nov 2020 08:08:20 -0500 Received: from localhost (83-86-74-64.cable.dynamic.v4.ziggo.nl [83.86.74.64]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id F182C238E6; Tue, 17 Nov 2020 13:08:18 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1605618499; bh=5Q0M5ZDOWNA8OsriL905lRiV/6x7BGtnqP9SjN6UtBE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=WSnNlx74iKrgpwm0RPAuDYOnBTnjbNwTyslywWao3w1P2nx4PsePFOditXieOCVaT nYTlSib/WFX7STSjnc7PkBvTiTuNgCyxlY/wQkifR7n+WQyHelSJqqIvbcVpazvcJ5 4DjdN2E1PdzpYkASJfqBu3iPbq+WUG55rN62vFU8= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , Julien Grall , Juergen Gross , Jan Beulich , Wei Liu Subject: [PATCH 4.4 53/64] xen/netback: use lateeoi irq binding Date: Tue, 17 Nov 2020 14:05:16 +0100 Message-Id: <20201117122108.779628325@linuxfoundation.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201117122106.144800239@linuxfoundation.org> References: <20201117122106.144800239@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Juergen Gross commit 23025393dbeb3b8b3b60ebfa724cdae384992e27 upstream. 
In order to reduce the chance for the system becoming unresponsive due to event storms triggered by a misbehaving netfront use the lateeoi irq binding for netback and unmask the event channel only just before going to sleep waiting for new events. Make sure not to issue an EOI when none is pending by introducing an eoi_pending element to struct xenvif_queue. When no request has been consumed set the spurious flag when sending the EOI for an interrupt. This is part of XSA-332. Cc: stable@vger.kernel.org Reported-by: Julien Grall Signed-off-by: Juergen Gross Reviewed-by: Jan Beulich Reviewed-by: Wei Liu Signed-off-by: Greg Kroah-Hartman --- drivers/net/xen-netback/common.h | 39 +++++++++++++++++++++++ drivers/net/xen-netback/interface.c | 59 +++++++++++++++++++++++++++++++----- drivers/net/xen-netback/netback.c | 17 +++++++--- 3 files changed, 103 insertions(+), 12 deletions(-) --- a/drivers/net/xen-netback/common.h +++ b/drivers/net/xen-netback/common.h @@ -137,6 +137,20 @@ struct xenvif_queue { /* Per-queue data char name[QUEUE_NAME_SIZE]; /* DEVNAME-qN */ struct xenvif *vif; /* Parent VIF */ + /* + * TX/RX common EOI handling. + * When feature-split-event-channels = 0, interrupt handler sets + * NETBK_COMMON_EOI, otherwise NETBK_RX_EOI and NETBK_TX_EOI are set + * by the RX and TX interrupt handlers. + * RX and TX handler threads will issue an EOI when either + * NETBK_COMMON_EOI or their specific bits (NETBK_RX_EOI or + * NETBK_TX_EOI) are set and they will reset those bits. + */ + atomic_t eoi_pending; +#define NETBK_RX_EOI 0x01 +#define NETBK_TX_EOI 0x02 +#define NETBK_COMMON_EOI 0x04 + /* Use NAPI for guest TX */ struct napi_struct napi; /* When feature-split-event-channels = 0, tx_irq = rx_irq. */ @@ -317,6 +331,7 @@ void xenvif_kick_thread(struct xenvif_qu int xenvif_dealloc_kthread(void *data); +bool xenvif_have_rx_work(struct xenvif_queue *queue, bool test_kthread); void xenvif_rx_queue_tail(struct xenvif_queue *queue, struct sk_buff *skb); void xenvif_carrier_on(struct xenvif *vif); @@ -353,4 +368,28 @@ void xenvif_skb_zerocopy_complete(struct bool xenvif_mcast_match(struct xenvif *vif, const u8 *addr); void xenvif_mcast_addr_list_free(struct xenvif *vif); +#include + +static inline int xenvif_atomic_fetch_or(int i, atomic_t *v) +{ + int c, old; + + c = v->counter; + while ((old = cmpxchg(&v->counter, c, c | i)) != c) + c = old; + + return c; +} + +static inline int xenvif_atomic_fetch_andnot(int i, atomic_t *v) +{ + int c, old; + + c = v->counter; + while ((old = cmpxchg(&v->counter, c, c & ~i)) != c) + c = old; + + return c; +} + #endif /* __XEN_NETBACK__COMMON_H__ */ --- a/drivers/net/xen-netback/interface.c +++ b/drivers/net/xen-netback/interface.c @@ -76,12 +76,28 @@ int xenvif_schedulable(struct xenvif *vi !vif->disabled; } +static bool xenvif_handle_tx_interrupt(struct xenvif_queue *queue) +{ + bool rc; + + rc = RING_HAS_UNCONSUMED_REQUESTS(&queue->tx); + if (rc) + napi_schedule(&queue->napi); + return rc; +} + static irqreturn_t xenvif_tx_interrupt(int irq, void *dev_id) { struct xenvif_queue *queue = dev_id; + int old; - if (RING_HAS_UNCONSUMED_REQUESTS(&queue->tx)) - napi_schedule(&queue->napi); + old = xenvif_atomic_fetch_or(NETBK_TX_EOI, &queue->eoi_pending); + WARN(old & NETBK_TX_EOI, "Interrupt while EOI pending\n"); + + if (!xenvif_handle_tx_interrupt(queue)) { + atomic_andnot(NETBK_TX_EOI, &queue->eoi_pending); + xen_irq_lateeoi(irq, XEN_EOI_FLAG_SPURIOUS); + } return IRQ_HANDLED; } @@ -115,19 +131,46 @@ static int xenvif_poll(struct napi_struc return work_done; } 
+static bool xenvif_handle_rx_interrupt(struct xenvif_queue *queue) +{ + bool rc; + + rc = xenvif_have_rx_work(queue, false); + if (rc) + xenvif_kick_thread(queue); + return rc; +} + static irqreturn_t xenvif_rx_interrupt(int irq, void *dev_id) { struct xenvif_queue *queue = dev_id; + int old; - xenvif_kick_thread(queue); + old = xenvif_atomic_fetch_or(NETBK_RX_EOI, &queue->eoi_pending); + WARN(old & NETBK_RX_EOI, "Interrupt while EOI pending\n"); + + if (!xenvif_handle_rx_interrupt(queue)) { + atomic_andnot(NETBK_RX_EOI, &queue->eoi_pending); + xen_irq_lateeoi(irq, XEN_EOI_FLAG_SPURIOUS); + } return IRQ_HANDLED; } irqreturn_t xenvif_interrupt(int irq, void *dev_id) { - xenvif_tx_interrupt(irq, dev_id); - xenvif_rx_interrupt(irq, dev_id); + struct xenvif_queue *queue = dev_id; + int old; + + old = xenvif_atomic_fetch_or(NETBK_COMMON_EOI, &queue->eoi_pending); + WARN(old, "Interrupt while EOI pending\n"); + + /* Use bitwise or as we need to call both functions. */ + if ((!xenvif_handle_tx_interrupt(queue) | + !xenvif_handle_rx_interrupt(queue))) { + atomic_andnot(NETBK_COMMON_EOI, &queue->eoi_pending); + xen_irq_lateeoi(irq, XEN_EOI_FLAG_SPURIOUS); + } return IRQ_HANDLED; } @@ -555,7 +598,7 @@ int xenvif_connect(struct xenvif_queue * if (tx_evtchn == rx_evtchn) { /* feature-split-event-channels == 0 */ - err = bind_interdomain_evtchn_to_irqhandler( + err = bind_interdomain_evtchn_to_irqhandler_lateeoi( queue->vif->domid, tx_evtchn, xenvif_interrupt, 0, queue->name, queue); if (err < 0) @@ -566,7 +609,7 @@ int xenvif_connect(struct xenvif_queue * /* feature-split-event-channels == 1 */ snprintf(queue->tx_irq_name, sizeof(queue->tx_irq_name), "%s-tx", queue->name); - err = bind_interdomain_evtchn_to_irqhandler( + err = bind_interdomain_evtchn_to_irqhandler_lateeoi( queue->vif->domid, tx_evtchn, xenvif_tx_interrupt, 0, queue->tx_irq_name, queue); if (err < 0) @@ -576,7 +619,7 @@ int xenvif_connect(struct xenvif_queue * snprintf(queue->rx_irq_name, sizeof(queue->rx_irq_name), "%s-rx", queue->name); - err = bind_interdomain_evtchn_to_irqhandler( + err = bind_interdomain_evtchn_to_irqhandler_lateeoi( queue->vif->domid, rx_evtchn, xenvif_rx_interrupt, 0, queue->rx_irq_name, queue); if (err < 0) --- a/drivers/net/xen-netback/netback.c +++ b/drivers/net/xen-netback/netback.c @@ -670,6 +670,10 @@ void xenvif_napi_schedule_or_enable_even if (more_to_do) napi_schedule(&queue->napi); + else if (xenvif_atomic_fetch_andnot(NETBK_TX_EOI | NETBK_COMMON_EOI, + &queue->eoi_pending) & + (NETBK_TX_EOI | NETBK_COMMON_EOI)) + xen_irq_lateeoi(queue->tx_irq, 0); } static void tx_add_credit(struct xenvif_queue *queue) @@ -2010,14 +2014,14 @@ static bool xenvif_rx_queue_ready(struct return queue->stalled && prod - cons >= 1; } -static bool xenvif_have_rx_work(struct xenvif_queue *queue) +bool xenvif_have_rx_work(struct xenvif_queue *queue, bool test_kthread) { return (!skb_queue_empty(&queue->rx_queue) && xenvif_rx_ring_slots_available(queue)) || (queue->vif->stall_timeout && (xenvif_rx_queue_stalled(queue) || xenvif_rx_queue_ready(queue))) - || kthread_should_stop() + || (test_kthread && kthread_should_stop()) || queue->vif->disabled; } @@ -2048,15 +2052,20 @@ static void xenvif_wait_for_rx_work(stru { DEFINE_WAIT(wait); - if (xenvif_have_rx_work(queue)) + if (xenvif_have_rx_work(queue, true)) return; for (;;) { long ret; prepare_to_wait(&queue->wq, &wait, TASK_INTERRUPTIBLE); - if (xenvif_have_rx_work(queue)) + if (xenvif_have_rx_work(queue, true)) break; + if (xenvif_atomic_fetch_andnot(NETBK_RX_EOI | 
NETBK_COMMON_EOI, + &queue->eoi_pending) & + (NETBK_RX_EOI | NETBK_COMMON_EOI)) + xen_irq_lateeoi(queue->rx_irq, 0); + ret = schedule_timeout(xenvif_rx_queue_timeout(queue)); if (!ret) break; From patchwork Tue Nov 17 13:05:17 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg KH X-Patchwork-Id: 325454 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, HEADER_FROM_DIFFERENT_DOMAINS, MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8FD2DC63798 for ; Tue, 17 Nov 2020 13:08:53 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 4E0BF246BB for ; Tue, 17 Nov 2020 13:08:53 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=kernel.org header.i=@kernel.org header.b="l+XFc2dS" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729280AbgKQNI3 (ORCPT ); Tue, 17 Nov 2020 08:08:29 -0500 Received: from mail.kernel.org ([198.145.29.99]:36848 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729272AbgKQNIY (ORCPT ); Tue, 17 Nov 2020 08:08:24 -0500 Received: from localhost (83-86-74-64.cable.dynamic.v4.ziggo.nl [83.86.74.64]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id C69F4221EB; Tue, 17 Nov 2020 13:08:21 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1605618502; bh=8AI+wv/0W4+FExRGgwH8qDs9IV9XEX+dQkaWQqLRWMA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=l+XFc2dS0dWIpUTxl1eic84KMfvxSp+0FGUu/Pg4WI9I6C9Z5A4jdQWkkvTZDQiMk 9RQAlSZ0kLwwy5ERAdwx6eLW1C7sOcmOXP3GPdPM0Y0bDcBJvjSSR8m3fXiNrya9dn ey67/6so5U6EQ3KvdMkrt0EMac79qET5XqGnaqyU= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , Julien Grall , Juergen Gross , Jan Beulich , Wei Liu Subject: [PATCH 4.4 54/64] xen/scsiback: use lateeoi irq binding Date: Tue, 17 Nov 2020 14:05:17 +0100 Message-Id: <20201117122108.834543616@linuxfoundation.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201117122106.144800239@linuxfoundation.org> References: <20201117122106.144800239@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Juergen Gross commit 86991b6e7ea6c613b7692f65106076943449b6b7 upstream. In order to reduce the chance for the system becoming unresponsive due to event storms triggered by a misbehaving scsifront use the lateeoi irq binding for scsiback and unmask the event channel only just before leaving the event handling function. In case of a ring protocol error don't issue an EOI in order to avoid the possibility to use that for producing an event storm. This at once will result in no further call of scsiback_irq_fn(), so the ring_error struct member can be dropped and scsiback_do_cmd_fn() can signal the protocol error via a negative return value. This is part of XSA-332. 
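The handler shape that this and the neighbouring XSA-332 patches converge on is the same in each driver: consume everything pending, clear XEN_EOI_FLAG_SPURIOUS once real work has been done, and only then issue the late EOI (or none at all on a protocol error). A condensed sketch of that pattern, using hypothetical my_backend/my_process_ring names rather than the actual scsiback code:

#include <linux/interrupt.h>
#include <linux/sched.h>
#include <xen/events.h>

/*
 * my_process_ring() stands in for the per-driver ring handler: it
 * returns >0 while requests were consumed (clearing
 * XEN_EOI_FLAG_SPURIOUS in *eoi_flags), 0 once the ring is empty,
 * and <0 on a ring protocol error.
 */
static irqreturn_t my_backend_irq_fn(int irq, void *dev_id)
{
	struct my_backend *be = dev_id;
	unsigned int eoi_flags = XEN_EOI_FLAG_SPURIOUS;
	int rc;

	while ((rc = my_process_ring(be, &eoi_flags)) > 0)
		cond_resched();

	/*
	 * On a protocol error no EOI is issued at all, so the event
	 * channel stays masked and the misbehaving frontend cannot
	 * keep generating events.
	 */
	if (rc == 0)
		xen_irq_lateeoi(irq, eoi_flags);

	return IRQ_HANDLED;
}

As in the diff below, the event channel is bound with bind_interdomain_evtchn_to_irqhandler_lateeoi() rather than the plain variant, so the unmask only happens when the EOI is finally issued.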
Cc: stable@vger.kernel.org Reported-by: Julien Grall Signed-off-by: Juergen Gross Reviewed-by: Jan Beulich Reviewed-by: Wei Liu Signed-off-by: Greg Kroah-Hartman --- drivers/xen/xen-scsiback.c | 23 +++++++++++++---------- 1 file changed, 13 insertions(+), 10 deletions(-) --- a/drivers/xen/xen-scsiback.c +++ b/drivers/xen/xen-scsiback.c @@ -91,7 +91,6 @@ struct vscsibk_info { unsigned int irq; struct vscsiif_back_ring ring; - int ring_error; spinlock_t ring_lock; atomic_t nr_unreplied_reqs; @@ -698,7 +697,8 @@ static int prepare_pending_reqs(struct v return 0; } -static int scsiback_do_cmd_fn(struct vscsibk_info *info) +static int scsiback_do_cmd_fn(struct vscsibk_info *info, + unsigned int *eoi_flags) { struct vscsiif_back_ring *ring = &info->ring; struct vscsiif_request ring_req; @@ -715,11 +715,12 @@ static int scsiback_do_cmd_fn(struct vsc rc = ring->rsp_prod_pvt; pr_warn("Dom%d provided bogus ring requests (%#x - %#x = %u). Halting ring processing\n", info->domid, rp, rc, rp - rc); - info->ring_error = 1; - return 0; + return -EINVAL; } while ((rc != rp)) { + *eoi_flags &= ~XEN_EOI_FLAG_SPURIOUS; + if (RING_REQUEST_CONS_OVERFLOW(ring, rc)) break; pending_req = kmem_cache_alloc(scsiback_cachep, GFP_KERNEL); @@ -782,13 +783,16 @@ static int scsiback_do_cmd_fn(struct vsc static irqreturn_t scsiback_irq_fn(int irq, void *dev_id) { struct vscsibk_info *info = dev_id; + int rc; + unsigned int eoi_flags = XEN_EOI_FLAG_SPURIOUS; - if (info->ring_error) - return IRQ_HANDLED; - - while (scsiback_do_cmd_fn(info)) + while ((rc = scsiback_do_cmd_fn(info, &eoi_flags)) > 0) cond_resched(); + /* In case of a ring error we keep the event channel masked. */ + if (!rc) + xen_irq_lateeoi(irq, eoi_flags); + return IRQ_HANDLED; } @@ -809,7 +813,7 @@ static int scsiback_init_sring(struct vs sring = (struct vscsiif_sring *)area; BACK_RING_INIT(&info->ring, sring, PAGE_SIZE); - err = bind_interdomain_evtchn_to_irq(info->domid, evtchn); + err = bind_interdomain_evtchn_to_irq_lateeoi(info->domid, evtchn); if (err < 0) goto unmap_page; @@ -1210,7 +1214,6 @@ static int scsiback_probe(struct xenbus_ info->domid = dev->otherend_id; spin_lock_init(&info->ring_lock); - info->ring_error = 0; atomic_set(&info->nr_unreplied_reqs, 0); init_waitqueue_head(&info->waiting_to_free); info->dev = dev; From patchwork Tue Nov 17 13:05:18 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg KH X-Patchwork-Id: 328018 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, HEADER_FROM_DIFFERENT_DOMAINS, MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 54CC5C64E8A for ; Tue, 17 Nov 2020 14:06:23 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 0AAA020729 for ; Tue, 17 Nov 2020 14:06:23 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=kernel.org header.i=@kernel.org header.b="j8a/0Bk+" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729881AbgKQOGM (ORCPT ); Tue, 17 Nov 2020 09:06:12 -0500 Received: from mail.kernel.org ([198.145.29.99]:36886 "EHLO mail.kernel.org" 
rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728360AbgKQNI1 (ORCPT ); Tue, 17 Nov 2020 08:08:27 -0500 Received: from localhost (83-86-74-64.cable.dynamic.v4.ziggo.nl [83.86.74.64]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 8D86B238E6; Tue, 17 Nov 2020 13:08:24 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1605618505; bh=TFHUiVDduEy4slz4WWlcRrZ6r1dWlN3FCYyRCLmt9mU=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=j8a/0Bk+Pl+vHRNYxpWnHIAmrqGmFWa1sWywf390e1LrhmNjKkZatDBEQ3v+R6Xm2 hZtmh+FSyVdP54IiC1wu0iiX19O+DRWZsIEcPvPfQ09eFoj9nz5Iqo2Z/6QJaNIx1D fs7XZXvQFKorQsVmNT2AdV4wnq5719KSTgqw+5F0= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , Julien Grall , Juergen Gross , Jan Beulich , Wei Liu Subject: [PATCH 4.4 55/64] xen/pciback: use lateeoi irq binding Date: Tue, 17 Nov 2020 14:05:18 +0100 Message-Id: <20201117122108.885620580@linuxfoundation.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201117122106.144800239@linuxfoundation.org> References: <20201117122106.144800239@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Juergen Gross commit c2711441bc961b37bba0615dd7135857d189035f upstream. In order to reduce the chance for the system becoming unresponsive due to event storms triggered by a misbehaving pcifront use the lateeoi irq binding for pciback and unmask the event channel only just before leaving the event handling function. Restructure the handling to support that scheme. Basically an event can come in for two reasons: either a normal request for a pciback action, which is handled in a worker, or in case the guest has finished an AER request which was requested by pciback. When an AER request is issued to the guest and a normal pciback action is currently active issue an EOI early in order to be able to receive another event when the AER request has been finished by the guest. Let the worker processing the normal requests run until no further request is pending, instead of starting a new worker ion that case. Issue the EOI only just before leaving the worker. This scheme allows to drop calling the generic function xen_pcibk_test_and_schedule_op() after processing of any request as the handling of both request types is now separated more cleanly. This is part of XSA-332. Cc: stable@vger.kernel.org Reported-by: Julien Grall Signed-off-by: Juergen Gross Reviewed-by: Jan Beulich Reviewed-by: Wei Liu Signed-off-by: Greg Kroah-Hartman --- drivers/xen/xen-pciback/pci_stub.c | 14 ++++----- drivers/xen/xen-pciback/pciback.h | 12 +++++++- drivers/xen/xen-pciback/pciback_ops.c | 48 ++++++++++++++++++++++++++-------- drivers/xen/xen-pciback/xenbus.c | 2 - 4 files changed, 56 insertions(+), 20 deletions(-) --- a/drivers/xen/xen-pciback/pci_stub.c +++ b/drivers/xen/xen-pciback/pci_stub.c @@ -681,10 +681,17 @@ static pci_ers_result_t common_process(s wmb(); notify_remote_via_irq(pdev->evtchn_irq); + /* Enable IRQ to signal "request done". */ + xen_pcibk_lateeoi(pdev, 0); + ret = wait_event_timeout(xen_pcibk_aer_wait_queue, !(test_bit(_XEN_PCIB_active, (unsigned long *) &sh_info->flags)), 300*HZ); + /* Enable IRQ for pcifront request if not already active. 
*/ + if (!test_bit(_PDEVF_op_active, &pdev->flags)) + xen_pcibk_lateeoi(pdev, 0); + if (!ret) { if (test_bit(_XEN_PCIB_active, (unsigned long *)&sh_info->flags)) { @@ -698,13 +705,6 @@ static pci_ers_result_t common_process(s } clear_bit(_PCIB_op_pending, (unsigned long *)&pdev->flags); - if (test_bit(_XEN_PCIF_active, - (unsigned long *)&sh_info->flags)) { - dev_dbg(&psdev->dev->dev, - "schedule pci_conf service in " DRV_NAME "\n"); - xen_pcibk_test_and_schedule_op(psdev->pdev); - } - res = (pci_ers_result_t)aer_op->err; return res; } --- a/drivers/xen/xen-pciback/pciback.h +++ b/drivers/xen/xen-pciback/pciback.h @@ -13,6 +13,7 @@ #include #include #include +#include #include #define DRV_NAME "xen-pciback" @@ -26,6 +27,8 @@ struct pci_dev_entry { #define PDEVF_op_active (1<<(_PDEVF_op_active)) #define _PCIB_op_pending (1) #define PCIB_op_pending (1<<(_PCIB_op_pending)) +#define _EOI_pending (2) +#define EOI_pending (1<<(_EOI_pending)) struct xen_pcibk_device { void *pci_dev_data; @@ -182,12 +185,17 @@ static inline void xen_pcibk_release_dev irqreturn_t xen_pcibk_handle_event(int irq, void *dev_id); void xen_pcibk_do_op(struct work_struct *data); +static inline void xen_pcibk_lateeoi(struct xen_pcibk_device *pdev, + unsigned int eoi_flag) +{ + if (test_and_clear_bit(_EOI_pending, &pdev->flags)) + xen_irq_lateeoi(pdev->evtchn_irq, eoi_flag); +} + int xen_pcibk_xenbus_register(void); void xen_pcibk_xenbus_unregister(void); extern int verbose_request; - -void xen_pcibk_test_and_schedule_op(struct xen_pcibk_device *pdev); #endif /* Handles shared IRQs that can to device domain and control domain. */ --- a/drivers/xen/xen-pciback/pciback_ops.c +++ b/drivers/xen/xen-pciback/pciback_ops.c @@ -296,26 +296,41 @@ int xen_pcibk_disable_msix(struct xen_pc return 0; } #endif + +static inline bool xen_pcibk_test_op_pending(struct xen_pcibk_device *pdev) +{ + return test_bit(_XEN_PCIF_active, + (unsigned long *)&pdev->sh_info->flags) && + !test_and_set_bit(_PDEVF_op_active, &pdev->flags); +} + /* * Now the same evtchn is used for both pcifront conf_read_write request * as well as pcie aer front end ack. We use a new work_queue to schedule * xen_pcibk conf_read_write service for avoiding confict with aer_core * do_recovery job which also use the system default work_queue */ -void xen_pcibk_test_and_schedule_op(struct xen_pcibk_device *pdev) +static void xen_pcibk_test_and_schedule_op(struct xen_pcibk_device *pdev) { + bool eoi = true; + /* Check that frontend is requesting an operation and that we are not * already processing a request */ - if (test_bit(_XEN_PCIF_active, (unsigned long *)&pdev->sh_info->flags) - && !test_and_set_bit(_PDEVF_op_active, &pdev->flags)) { + if (xen_pcibk_test_op_pending(pdev)) { queue_work(xen_pcibk_wq, &pdev->op_work); + eoi = false; } /*_XEN_PCIB_active should have been cleared by pcifront. And also make sure xen_pcibk is waiting for ack by checking _PCIB_op_pending*/ if (!test_bit(_XEN_PCIB_active, (unsigned long *)&pdev->sh_info->flags) && test_bit(_PCIB_op_pending, &pdev->flags)) { wake_up(&xen_pcibk_aer_wait_queue); + eoi = false; } + + /* EOI if there was nothing to do. */ + if (eoi) + xen_pcibk_lateeoi(pdev, XEN_EOI_FLAG_SPURIOUS); } /* Performing the configuration space reads/writes must not be done in atomic @@ -323,10 +338,8 @@ void xen_pcibk_test_and_schedule_op(stru * use of semaphores). 
This function is intended to be called from a work * queue in process context taking a struct xen_pcibk_device as a parameter */ -void xen_pcibk_do_op(struct work_struct *data) +static void xen_pcibk_do_one_op(struct xen_pcibk_device *pdev) { - struct xen_pcibk_device *pdev = - container_of(data, struct xen_pcibk_device, op_work); struct pci_dev *dev; struct xen_pcibk_dev_data *dev_data = NULL; struct xen_pci_op *op = &pdev->op; @@ -399,16 +412,31 @@ void xen_pcibk_do_op(struct work_struct smp_mb__before_atomic(); /* /after/ clearing PCIF_active */ clear_bit(_PDEVF_op_active, &pdev->flags); smp_mb__after_atomic(); /* /before/ final check for work */ +} - /* Check to see if the driver domain tried to start another request in - * between clearing _XEN_PCIF_active and clearing _PDEVF_op_active. - */ - xen_pcibk_test_and_schedule_op(pdev); +void xen_pcibk_do_op(struct work_struct *data) +{ + struct xen_pcibk_device *pdev = + container_of(data, struct xen_pcibk_device, op_work); + + do { + xen_pcibk_do_one_op(pdev); + } while (xen_pcibk_test_op_pending(pdev)); + + xen_pcibk_lateeoi(pdev, 0); } irqreturn_t xen_pcibk_handle_event(int irq, void *dev_id) { struct xen_pcibk_device *pdev = dev_id; + bool eoi; + + /* IRQs might come in before pdev->evtchn_irq is written. */ + if (unlikely(pdev->evtchn_irq != irq)) + pdev->evtchn_irq = irq; + + eoi = test_and_set_bit(_EOI_pending, &pdev->flags); + WARN(eoi, "IRQ while EOI pending\n"); xen_pcibk_test_and_schedule_op(pdev); --- a/drivers/xen/xen-pciback/xenbus.c +++ b/drivers/xen/xen-pciback/xenbus.c @@ -124,7 +124,7 @@ static int xen_pcibk_do_attach(struct xe pdev->sh_info = vaddr; - err = bind_interdomain_evtchn_to_irqhandler( + err = bind_interdomain_evtchn_to_irqhandler_lateeoi( pdev->xdev->otherend_id, remote_evtchn, xen_pcibk_handle_event, 0, DRV_NAME, pdev); if (err < 0) { From patchwork Tue Nov 17 13:05:22 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg KH X-Patchwork-Id: 325453 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, HEADER_FROM_DIFFERENT_DOMAINS, MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5EE2BC64E8A for ; Tue, 17 Nov 2020 13:08:55 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 02A5F24698 for ; Tue, 17 Nov 2020 13:08:54 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=kernel.org header.i=@kernel.org header.b="pcb9IlVV" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729304AbgKQNIm (ORCPT ); Tue, 17 Nov 2020 08:08:42 -0500 Received: from mail.kernel.org ([198.145.29.99]:37376 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728498AbgKQNIk (ORCPT ); Tue, 17 Nov 2020 08:08:40 -0500 Received: from localhost (83-86-74-64.cable.dynamic.v4.ziggo.nl [83.86.74.64]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id 26EE72225B; Tue, 17 Nov 2020 13:08:38 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; 
d=kernel.org; s=default; t=1605618519; bh=QV3Fo4IjQA0C6tB8n8tcPRL7CO8d6BIUo2/w7gFiEac=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=pcb9IlVVGmoVJ/QdTdkIY0lcZWqs7kdfGEQ8X/cUZ/ardOGVhX+XSAaqnIuuHbMyX OTu/yXiUpNl818YZw8OHashTh2Ovlg5JKylykEgFLNY8Ih01ws1DlyvX0uVw1deip5 8+TmVWimOXqnxJnPStnKSh2rG1+WHNzO2AVPXPq8= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: Greg Kroah-Hartman , Juergen Gross , Jan Beulich , Stefano Stabellini , Wei Liu Subject: [PATCH 4.4 59/64] xen/events: block rogue events for some time Date: Tue, 17 Nov 2020 14:05:22 +0100 Message-Id: <20201117122109.095315926@linuxfoundation.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201117122106.144800239@linuxfoundation.org> References: <20201117122106.144800239@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Juergen Gross commit 5f7f77400ab5b357b5fdb7122c3442239672186c upstream. In order to avoid high dom0 load due to rogue guests sending events at high frequency, block those events in case there was no action needed in dom0 to handle the events. This is done by adding a per-event counter, which set to zero in case an EOI without the XEN_EOI_FLAG_SPURIOUS is received from a backend driver, and incremented when this flag has been set. In case the counter is 2 or higher delay the EOI by 1 << (cnt - 2) jiffies, but not more than 1 second. In order not to waste memory shorten the per-event refcnt to two bytes (it should normally never exceed a value of 2). Add an overflow check to evtchn_get() to make sure the 2 bytes really won't overflow. This is part of XSA-332. Cc: stable@vger.kernel.org Signed-off-by: Juergen Gross Reviewed-by: Jan Beulich Reviewed-by: Stefano Stabellini Reviewed-by: Wei Liu Signed-off-by: Greg Kroah-Hartman --- drivers/xen/events/events_base.c | 27 ++++++++++++++++++++++----- drivers/xen/events/events_internal.h | 3 ++- 2 files changed, 24 insertions(+), 6 deletions(-) --- a/drivers/xen/events/events_base.c +++ b/drivers/xen/events/events_base.c @@ -468,17 +468,34 @@ static void lateeoi_list_add(struct irq_ spin_unlock_irqrestore(&eoi->eoi_list_lock, flags); } -static void xen_irq_lateeoi_locked(struct irq_info *info) +static void xen_irq_lateeoi_locked(struct irq_info *info, bool spurious) { evtchn_port_t evtchn; unsigned int cpu; + unsigned int delay = 0; evtchn = info->evtchn; if (!VALID_EVTCHN(evtchn) || !list_empty(&info->eoi_list)) return; + if (spurious) { + if ((1 << info->spurious_cnt) < (HZ << 2)) + info->spurious_cnt++; + if (info->spurious_cnt > 1) { + delay = 1 << (info->spurious_cnt - 2); + if (delay > HZ) + delay = HZ; + if (!info->eoi_time) + info->eoi_cpu = smp_processor_id(); + info->eoi_time = get_jiffies_64() + delay; + } + } else { + info->spurious_cnt = 0; + } + cpu = info->eoi_cpu; - if (info->eoi_time && info->irq_epoch == per_cpu(irq_epoch, cpu)) { + if (info->eoi_time && + (info->irq_epoch == per_cpu(irq_epoch, cpu) || delay)) { lateeoi_list_add(info); return; } @@ -515,7 +532,7 @@ static void xen_irq_lateeoi_worker(struc info->eoi_time = 0; - xen_irq_lateeoi_locked(info); + xen_irq_lateeoi_locked(info, false); } if (info) @@ -544,7 +561,7 @@ void xen_irq_lateeoi(unsigned int irq, u info = info_for_irq(irq); if (info) - xen_irq_lateeoi_locked(info); + xen_irq_lateeoi_locked(info, eoi_flags & XEN_EOI_FLAG_SPURIOUS); read_unlock_irqrestore(&evtchn_rwlock, flags); } @@ -1447,7 +1464,7 @@ int evtchn_get(unsigned int evtchn) goto done; err = -EINVAL; - 
if (info->refcnt <= 0) + if (info->refcnt <= 0 || info->refcnt == SHRT_MAX) goto done; info->refcnt++; --- a/drivers/xen/events/events_internal.h +++ b/drivers/xen/events/events_internal.h @@ -33,7 +33,8 @@ enum xen_irq_type { struct irq_info { struct list_head list; struct list_head eoi_list; - int refcnt; + short refcnt; + short spurious_cnt; enum xen_irq_type type; /* type */ unsigned irq; unsigned int evtchn; /* event channel */ From patchwork Tue Nov 17 13:05:23 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg KH X-Patchwork-Id: 328023 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, HEADER_FROM_DIFFERENT_DOMAINS, MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id F410BC8300A for ; Tue, 17 Nov 2020 14:05:33 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A16D120A8B for ; Tue, 17 Nov 2020 14:05:33 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=kernel.org header.i=@kernel.org header.b="u8/PS8p/" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2387487AbgKQOFI (ORCPT ); Tue, 17 Nov 2020 09:05:08 -0500 Received: from mail.kernel.org ([198.145.29.99]:38186 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728619AbgKQNJP (ORCPT ); Tue, 17 Nov 2020 08:09:15 -0500 Received: from localhost (83-86-74-64.cable.dynamic.v4.ziggo.nl [83.86.74.64]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id D4F8924698; Tue, 17 Nov 2020 13:09:13 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1605618554; bh=G0CUy7218bVZ/qERDH8RjZBfATQhVOAGQzd3v7sKvxU=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=u8/PS8p/YKg5/J89whtKC9lTBEZigkaJYKEsoW4NU3Lbgnm8j5TqkYS2TBbhXCeTK WrKEEm+AQYgMk5LybtlHcGF0teAOaXc9rdREvbmb08Zrgpwvl+yGhOXhd/Tb5ngRls YuM2Z9ABtIkUmQfD8Rdh6VIL55EtyX8P8xcB5iE4= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Michael Petlan , Jiri Olsa , Ingo Molnar , Peter Zijlstra , Namhyung Kim , Wade Mealing , Sudip Mukherjee Subject: [PATCH 4.4 60/64] perf/core: Fix race in the perf_mmap_close() function Date: Tue, 17 Nov 2020 14:05:23 +0100 Message-Id: <20201117122109.140341844@linuxfoundation.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201117122106.144800239@linuxfoundation.org> References: <20201117122106.144800239@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Jiri Olsa commit f91072ed1b7283b13ca57fcfbece5a3b92726143 upstream. There's a possible race in perf_mmap_close() when checking ring buffer's mmap_count refcount value. The problem is that the mmap_count check is not atomic because we call atomic_dec() and atomic_read() separately. perf_mmap_close: ... atomic_dec(&rb->mmap_count); ... 
if (atomic_read(&rb->mmap_count)) goto out_put; free_uid out_put: ring_buffer_put(rb); /* could be last */ The race can happen when we have two (or more) events sharing same ring buffer and they go through atomic_dec() and then they both see 0 as refcount value later in atomic_read(). Then both will go on and execute code which is meant to be run just once. The code that detaches ring buffer is probably fine to be executed more than once, but the problem is in calling free_uid(), which will later on demonstrate in related crashes and refcount warnings, like: refcount_t: addition on 0; use-after-free. ... RIP: 0010:refcount_warn_saturate+0x6d/0xf ... Call Trace: prepare_creds+0x190/0x1e0 copy_creds+0x35/0x172 copy_process+0x471/0x1a80 _do_fork+0x83/0x3a0 __do_sys_wait4+0x83/0x90 __do_sys_clone+0x85/0xa0 do_syscall_64+0x5b/0x1e0 entry_SYSCALL_64_after_hwframe+0x44/0xa9 Using atomic decrease and check instead of separated calls. Tested-by: Michael Petlan Signed-off-by: Jiri Olsa Signed-off-by: Ingo Molnar Acked-by: Peter Zijlstra Acked-by: Namhyung Kim Acked-by: Wade Mealing Fixes: 9bb5d40cd93c ("perf: Fix mmap() accounting hole"); Link: https://lore.kernel.org/r/20200916115311.GE2301783@krava [sudip: backport to v4.9.y by using ring_buffer] Signed-off-by: Sudip Mukherjee Signed-off-by: Greg Kroah-Hartman --- kernel/events/core.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -4664,11 +4664,11 @@ static void perf_mmap_open(struct vm_are static void perf_mmap_close(struct vm_area_struct *vma) { struct perf_event *event = vma->vm_file->private_data; - struct ring_buffer *rb = ring_buffer_get(event); struct user_struct *mmap_user = rb->mmap_user; int mmap_locked = rb->mmap_locked; unsigned long size = perf_data_size(rb); + bool detach_rest = false; if (event->pmu->event_unmapped) event->pmu->event_unmapped(event); @@ -4687,7 +4687,8 @@ static void perf_mmap_close(struct vm_ar mutex_unlock(&event->mmap_mutex); } - atomic_dec(&rb->mmap_count); + if (atomic_dec_and_test(&rb->mmap_count)) + detach_rest = true; if (!atomic_dec_and_mutex_lock(&event->mmap_count, &event->mmap_mutex)) goto out_put; @@ -4696,7 +4697,7 @@ static void perf_mmap_close(struct vm_ar mutex_unlock(&event->mmap_mutex); /* If there's still other mmap()s of this buffer, we're done. 
*/ - if (atomic_read(&rb->mmap_count)) + if (!detach_rest) goto out_put; /* From patchwork Tue Nov 17 13:05:24 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg KH X-Patchwork-Id: 328021 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, HEADER_FROM_DIFFERENT_DOMAINS, MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 49341C56202 for ; Tue, 17 Nov 2020 14:06:20 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id D541820729 for ; Tue, 17 Nov 2020 14:06:19 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=kernel.org header.i=@kernel.org header.b="diKrxPdr" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729356AbgKQNJE (ORCPT ); Tue, 17 Nov 2020 08:09:04 -0500 Received: from mail.kernel.org ([198.145.29.99]:37698 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729369AbgKQNJD (ORCPT ); Tue, 17 Nov 2020 08:09:03 -0500 Received: from localhost (83-86-74-64.cable.dynamic.v4.ziggo.nl [83.86.74.64]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id EF8A8221EB; Tue, 17 Nov 2020 13:09:01 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1605618542; bh=EboRDqWR0ty8njnVhjop7TgETSHaZFTP5a+mimYOoqo=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=diKrxPdrxGfle8rjJAavnVFv/KK9V1r4GfTHVcEUYHjs+8ZePS7kzsObbOq+rW+D1 eczWpeLjmsdEdtCCBjpX+ldMHRrsIkq3fw2fnb4MzuHAkC6ixybU+Q14vukm95Iavb 5JtLi+N45HtWYrJ/U3wetm1N/hBCtvUBE/Qg1on0= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Matteo Croce , Andrew Morton , Guenter Roeck , Petr Mladek , Arnd Bergmann , Mike Rapoport , Kees Cook , Pavel Tatashin , Robin Holt , Fabian Frederick , Linus Torvalds , Sudip Mukherjee Subject: [PATCH 4.4 61/64] Revert "kernel/reboot.c: convert simple_strtoul to kstrtoint" Date: Tue, 17 Nov 2020 14:05:24 +0100 Message-Id: <20201117122109.190012835@linuxfoundation.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201117122106.144800239@linuxfoundation.org> References: <20201117122106.144800239@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Matteo Croce commit 8b92c4ff4423aa9900cf838d3294fcade4dbda35 upstream. Patch series "fix parsing of reboot= cmdline", v3. The parsing of the reboot= cmdline has two major errors: - a missing bound check can crash the system on reboot - parsing of the cpu number only works if specified last Fix both. This patch (of 2): This reverts commit 616feab753972b97. kstrtoint() and simple_strtoul() have a subtle difference which makes them non interchangeable: if a non digit character is found amid the parsing, the former will return an error, while the latter will just stop parsing, e.g. simple_strtoul("123xyx") = 123. 
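To make the difference concrete (an illustration, not part of the patch): with a command line like the "reboot=warm,s31,force" example below, the 's' handler is given the tail "31,force", and the two parsers disagree on it:

#include <linux/kernel.h>	/* kstrtoint(), simple_strtoul() */

static void __maybe_unused reboot_parse_demo(void)
{
	int cpu = 0;
	unsigned long cpu2;
	int rc;

	/* kstrtoint() rejects the whole token because of ",force". */
	rc = kstrtoint("31,force", 0, &cpu);		/* rc == -EINVAL */

	/* simple_strtoul() stops at the first non-digit: cpu2 == 31,
	 * which is the behaviour the reboot= syntax has always relied on.
	 */
	cpu2 = simple_strtoul("31,force", NULL, 0);

	pr_info("rc=%d cpu=%d cpu2=%lu\n", rc, cpu, cpu2);
}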
The kernel cmdline reboot= argument allows to specify the CPU used for rebooting, with the syntax `s####` among the other flags, e.g. "reboot=warm,s31,force", so if this flag is not the last given, it's silently ignored as well as the subsequent ones. Fixes: 616feab75397 ("kernel/reboot.c: convert simple_strtoul to kstrtoint") Signed-off-by: Matteo Croce Signed-off-by: Andrew Morton Cc: Guenter Roeck Cc: Petr Mladek Cc: Arnd Bergmann Cc: Mike Rapoport Cc: Kees Cook Cc: Pavel Tatashin Cc: Robin Holt Cc: Fabian Frederick Cc: Greg Kroah-Hartman Cc: Link: https://lkml.kernel.org/r/20201103214025.116799-2-mcroce@linux.microsoft.com Signed-off-by: Linus Torvalds [sudip: use reboot_mode instead of mode] Signed-off-by: Sudip Mukherjee Signed-off-by: Greg Kroah-Hartman --- kernel/reboot.c | 21 +++++++-------------- 1 file changed, 7 insertions(+), 14 deletions(-) --- a/kernel/reboot.c +++ b/kernel/reboot.c @@ -512,22 +512,15 @@ static int __init reboot_setup(char *str break; case 's': - { - int rc; - - if (isdigit(*(str+1))) { - rc = kstrtoint(str+1, 0, &reboot_cpu); - if (rc) - return rc; - } else if (str[1] == 'm' && str[2] == 'p' && - isdigit(*(str+3))) { - rc = kstrtoint(str+3, 0, &reboot_cpu); - if (rc) - return rc; - } else + if (isdigit(*(str+1))) + reboot_cpu = simple_strtoul(str+1, NULL, 0); + else if (str[1] == 'm' && str[2] == 'p' && + isdigit(*(str+3))) + reboot_cpu = simple_strtoul(str+3, NULL, 0); + else reboot_mode = REBOOT_SOFT; break; - } + case 'g': reboot_mode = REBOOT_GPIO; break; From patchwork Tue Nov 17 13:05:27 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg KH X-Patchwork-Id: 328022 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, HEADER_FROM_DIFFERENT_DOMAINS, MAILING_LIST_MULTI, SIGNED_OFF_BY,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B5B93C83011 for ; Tue, 17 Nov 2020 14:05:35 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 5DFBD20829 for ; Tue, 17 Nov 2020 14:05:35 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=kernel.org header.i=@kernel.org header.b="hNIcb1xf" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728832AbgKQOFR (ORCPT ); Tue, 17 Nov 2020 09:05:17 -0500 Received: from mail.kernel.org ([198.145.29.99]:38142 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729396AbgKQNJM (ORCPT ); Tue, 17 Nov 2020 08:09:12 -0500 Received: from localhost (83-86-74-64.cable.dynamic.v4.ziggo.nl [83.86.74.64]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPSA id EE741246C0; Tue, 17 Nov 2020 13:09:10 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1605618551; bh=9k1GzaQskuGl5Gm1SQOBqk+VcyKLvXW2BgKdbA76bUE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=hNIcb1xfd1MAw4djjNs6JIPlMdtGXmCFDo+9lFTTLk4SoSdfzU+/r8duTrfxG75wh kLnn5WDl68BPyuk0navJ5+pT4jCFbYsVWLEDte3v7Yetirh/7la1Q7WsL72hsnSQYs M/K8lx4dL00xEMRou3YDrqYu5VAwo5/QGilv8dQo= From: Greg 
Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Boris Protopopov , Ronnie Sahlberg , Steve French Subject: [PATCH 4.4 64/64] Convert trailing spaces and periods in path components Date: Tue, 17 Nov 2020 14:05:27 +0100 Message-Id: <20201117122109.332079931@linuxfoundation.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20201117122106.144800239@linuxfoundation.org> References: <20201117122106.144800239@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Boris Protopopov commit 57c176074057531b249cf522d90c22313fa74b0b upstream. When converting trailing spaces and periods in paths, do so for every component of the path, not just the last component. If the conversion is not done for every path component, then subsequent operations in directories with trailing spaces or periods (e.g. create(), mkdir()) will fail with ENOENT. This is because on the server, the directory will have a special symbol in its name, and the client needs to provide the same. Signed-off-by: Boris Protopopov Acked-by: Ronnie Sahlberg Signed-off-by: Steve French Signed-off-by: Greg Kroah-Hartman --- fs/cifs/cifs_unicode.c | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) --- a/fs/cifs/cifs_unicode.c +++ b/fs/cifs/cifs_unicode.c @@ -493,7 +493,13 @@ cifsConvertToUTF16(__le16 *target, const else if (map_chars == SFM_MAP_UNI_RSVD) { bool end_of_string; - if (i == srclen - 1) + /** + * Remap spaces and periods found at the end of every + * component of the path. The special cases of '.' and + * '..' do not need to be dealt with explicitly because + * they are addressed in namei.c:link_path_walk(). + **/ + if ((i == srclen - 1) || (source[i+1] == '\\')) end_of_string = true; else end_of_string = false;
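The rule the patch implements can be stated simply: a trailing space or period must be remapped to its SFM placeholder whenever the next character is either the end of the string or a '\' path separator, i.e. at the end of every component rather than only at the end of the whole path. A small user-space sketch of just that check (illustrative only; the kernel converter additionally emits the SFM_SPACE/SFM_PERIOD UTF-16 code points):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Return true when path[i] is a trailing ' ' or '.' of a component. */
static bool needs_sfm_remap(const char *path, size_t i)
{
	bool end_of_component = path[i + 1] == '\0' || path[i + 1] == '\\';

	return end_of_component && (path[i] == ' ' || path[i] == '.');
}

int main(void)
{
	const char *path = "dir. \\file.";	/* components "dir. " and "file." */
	size_t i, len = strlen(path);

	for (i = 0; i < len; i++)
		if (needs_sfm_remap(path, i))
			printf("remap '%c' at offset %zu\n", path[i], i);
	return 0;
}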