From patchwork Wed Nov 19 16:53:43 2014
X-Patchwork-Submitter: Steve Capper
X-Patchwork-Id: 41197
From: Steve Capper <steve.capper@linaro.org>
To: will.deacon@arm.com
Cc: catalin.marinas@arm.com, Steve Capper <steve.capper@linaro.org>,
	linux-arm-kernel@lists.infradead.org, ard.biesheuvel@linaro.org
Subject: [PATCH V5] arm64: percpu: Implement this_cpu operations
Date: Wed, 19 Nov 2014 16:53:43 +0000
Message-Id: <1416416023-27216-1-git-send-email-steve.capper@linaro.org>
In-Reply-To: <20141119154920.GA25504@linaro.org>
References: <20141119154920.GA25504@linaro.org>

The generic this_cpu operations disable interrupts to ensure that the
requested operation is protected from pre-emption. For arm64, this is
overkill and can hurt throughput and latency.

This patch provides arm64-specific implementations of the this_cpu
operations. Rather than disable interrupts, we use the exclusive
monitor or atomic operations as appropriate.
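For illustration (not part of the patch itself): every read-modify-write
accessor below follows the same load-exclusive/store-exclusive retry
pattern. A minimal sketch of that pattern, assuming kernel context (u64
from <linux/types.h>) and a hypothetical helper name, is:

/*
 * Sketch only: the ldxr/stxr retry loop used by the accessors in this
 * patch, shown for a 64-bit add. "example_percpu_add_u64" is a
 * hypothetical name, not part of the patch.
 */
static inline unsigned long example_percpu_add_u64(u64 *ptr,
						   unsigned long val)
{
	unsigned long loop, ret;

	do {
		asm ("ldxr	%[ret], %[ptr]\n"	/* load-exclusive */
		     "add	%[ret], %[ret], %[val]\n"
		     "stxr	%w[loop], %[ret], %[ptr]\n" /* 0 on success */
		     : [loop] "=&r" (loop), [ret] "=&r" (ret),
		       [ptr] "+Q" (*ptr)
		     : [val] "Ir" (val));
	} while (loop);		/* retry if the exclusive monitor was lost */

	return ret;
}

A context switch clears the exclusive monitor, so a store-exclusive that
races with pre-emption simply fails and the loop retries; the
read-modify-write is therefore atomic with respect to pre-emption
without masking interrupts.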
The following operations are implemented: add, add_return, and, or,
read, write, xchg. We also wire up a cmpxchg implementation from
cmpxchg.h.

Testing was performed using the percpu_test module and hackbench on a
Juno board running 3.18-rc4.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
---
Changed in V5: spacing and parentheses sanitised for the read/write
accessors.

Changed in V4: rebased to allow for merge in Will's for-next branch.
(I had erroneously changed my cmpxchg_double patch). Changed the
read/write accessors to use ACCESS_ONCE rather than asm.

Changed in V3: use cmpxchg_local rather than cmpxchg for
this_cpu_cmpxchg.

Changed in V2: corrected address constraint for __percpu_read and
__percpu_write.
---
 arch/arm64/include/asm/cmpxchg.h |   6 +-
 arch/arm64/include/asm/percpu.h  | 215 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 219 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/cmpxchg.h b/arch/arm64/include/asm/cmpxchg.h
index 89e397b..cb95930 100644
--- a/arch/arm64/include/asm/cmpxchg.h
+++ b/arch/arm64/include/asm/cmpxchg.h
@@ -246,8 +246,10 @@ static inline unsigned long __cmpxchg_mb(volatile void *ptr, unsigned long old,
 	__ret; \
 })
 
-#define this_cpu_cmpxchg_8(ptr, o, n) \
-	cmpxchg_local(raw_cpu_ptr(&(ptr)), o, n)
+#define this_cpu_cmpxchg_1(ptr, o, n) cmpxchg_local(raw_cpu_ptr(&(ptr)), o, n)
+#define this_cpu_cmpxchg_2(ptr, o, n) cmpxchg_local(raw_cpu_ptr(&(ptr)), o, n)
+#define this_cpu_cmpxchg_4(ptr, o, n) cmpxchg_local(raw_cpu_ptr(&(ptr)), o, n)
+#define this_cpu_cmpxchg_8(ptr, o, n) cmpxchg_local(raw_cpu_ptr(&(ptr)), o, n)
 
 #define this_cpu_cmpxchg_double_8(ptr1, ptr2, o1, o2, n1, n2) \
 	cmpxchg_double_local(raw_cpu_ptr(&(ptr1)), raw_cpu_ptr(&(ptr2)), \
diff --git a/arch/arm64/include/asm/percpu.h b/arch/arm64/include/asm/percpu.h
index 5279e57..09da25b 100644
--- a/arch/arm64/include/asm/percpu.h
+++ b/arch/arm64/include/asm/percpu.h
@@ -44,6 +44,221 @@ static inline unsigned long __my_cpu_offset(void)
 
 #endif /* CONFIG_SMP */
 
+#define PERCPU_OP(op, asm_op)						\
+static inline unsigned long __percpu_##op(void *ptr,			\
+			unsigned long val, int size)			\
+{									\
+	unsigned long loop, ret;					\
+									\
+	switch (size) {							\
+	case 1:								\
+		do {							\
+			asm ("//__per_cpu_" #op "_1\n"			\
+			"ldxrb	%w[ret], %[ptr]\n"			\
+			#asm_op " %w[ret], %w[ret], %w[val]\n"		\
+			"stxrb	%w[loop], %w[ret], %[ptr]\n"		\
+			: [loop] "=&r" (loop), [ret] "=&r" (ret),	\
+			  [ptr] "+Q"(*(u8 *)ptr)			\
+			: [val] "Ir" (val));				\
+		} while (loop);						\
+		break;							\
+	case 2:								\
+		do {							\
+			asm ("//__per_cpu_" #op "_2\n"			\
+			"ldxrh	%w[ret], %[ptr]\n"			\
+			#asm_op " %w[ret], %w[ret], %w[val]\n"		\
+			"stxrh	%w[loop], %w[ret], %[ptr]\n"		\
+			: [loop] "=&r" (loop), [ret] "=&r" (ret),	\
+			  [ptr] "+Q"(*(u16 *)ptr)			\
+			: [val] "Ir" (val));				\
+		} while (loop);						\
+		break;							\
+	case 4:								\
+		do {							\
+			asm ("//__per_cpu_" #op "_4\n"			\
+			"ldxr	%w[ret], %[ptr]\n"			\
+			#asm_op " %w[ret], %w[ret], %w[val]\n"		\
+			"stxr	%w[loop], %w[ret], %[ptr]\n"		\
+			: [loop] "=&r" (loop), [ret] "=&r" (ret),	\
+			  [ptr] "+Q"(*(u32 *)ptr)			\
+			: [val] "Ir" (val));				\
+		} while (loop);						\
+		break;							\
+	case 8:								\
+		do {							\
+			asm ("//__per_cpu_" #op "_8\n"			\
+			"ldxr	%[ret], %[ptr]\n"			\
+			#asm_op " %[ret], %[ret], %[val]\n"		\
+			"stxr	%w[loop], %[ret], %[ptr]\n"		\
+			: [loop] "=&r" (loop), [ret] "=&r" (ret),	\
+			  [ptr] "+Q"(*(u64 *)ptr)			\
+			: [val] "Ir" (val));				\
+		} while (loop);						\
+		break;							\
+	default:							\
+		BUILD_BUG();						\
+	}								\
+									\
+	return ret;							\
+}
+
+PERCPU_OP(add, add)
+PERCPU_OP(and, and)
+PERCPU_OP(or, orr)
+#undef PERCPU_OP
+
+static inline unsigned long __percpu_read(void *ptr, int size)
+{
+	unsigned long ret;
+
+	switch (size) {
+	case 1:
+		ret = ACCESS_ONCE(*(u8 *)ptr);
+		break;
+	case 2:
+		ret = ACCESS_ONCE(*(u16 *)ptr);
+		break;
+	case 4:
+		ret = ACCESS_ONCE(*(u32 *)ptr);
+		break;
+	case 8:
+		ret = ACCESS_ONCE(*(u64 *)ptr);
+		break;
+	default:
+		BUILD_BUG();
+	}
+
+	return ret;
+}
+
+static inline void __percpu_write(void *ptr, unsigned long val, int size)
+{
+	switch (size) {
+	case 1:
+		ACCESS_ONCE(*(u8 *)ptr) = (u8)val;
+		break;
+	case 2:
+		ACCESS_ONCE(*(u16 *)ptr) = (u16)val;
+		break;
+	case 4:
+		ACCESS_ONCE(*(u32 *)ptr) = (u32)val;
+		break;
+	case 8:
+		ACCESS_ONCE(*(u64 *)ptr) = (u64)val;
+		break;
+	default:
+		BUILD_BUG();
+	}
+}
+
+static inline unsigned long __percpu_xchg(void *ptr, unsigned long val,
+						int size)
+{
+	unsigned long ret, loop;
+
+	switch (size) {
+	case 1:
+		do {
+			asm ("//__percpu_xchg_1\n"
+			"ldxrb %w[ret], %[ptr]\n"
+			"stxrb %w[loop], %w[val], %[ptr]\n"
+			: [loop] "=&r"(loop), [ret] "=&r"(ret),
+			  [ptr] "+Q"(*(u8 *)ptr)
+			: [val] "r" (val));
+		} while (loop);
+		break;
+	case 2:
+		do {
+			asm ("//__percpu_xchg_2\n"
+			"ldxrh %w[ret], %[ptr]\n"
+			"stxrh %w[loop], %w[val], %[ptr]\n"
+			: [loop] "=&r"(loop), [ret] "=&r"(ret),
+			  [ptr] "+Q"(*(u16 *)ptr)
+			: [val] "r" (val));
+		} while (loop);
+		break;
+	case 4:
+		do {
+			asm ("//__percpu_xchg_4\n"
+			"ldxr %w[ret], %[ptr]\n"
+			"stxr %w[loop], %w[val], %[ptr]\n"
+			: [loop] "=&r"(loop), [ret] "=&r"(ret),
+			  [ptr] "+Q"(*(u32 *)ptr)
+			: [val] "r" (val));
+		} while (loop);
+		break;
+	case 8:
+		do {
+			asm ("//__percpu_xchg_8\n"
+			"ldxr %[ret], %[ptr]\n"
+			"stxr %w[loop], %[val], %[ptr]\n"
+			: [loop] "=&r"(loop), [ret] "=&r"(ret),
+			  [ptr] "+Q"(*(u64 *)ptr)
+			: [val] "r" (val));
+		} while (loop);
+		break;
+	default:
+		BUILD_BUG();
+	}
+
+	return ret;
+}
+
+#define _percpu_add(pcp, val) \
+	__percpu_add(raw_cpu_ptr(&(pcp)), val, sizeof(pcp))
+
+#define _percpu_add_return(pcp, val) (typeof(pcp)) (_percpu_add(pcp, val))
+
+#define _percpu_and(pcp, val) \
+	__percpu_and(raw_cpu_ptr(&(pcp)), val, sizeof(pcp))
+
+#define _percpu_or(pcp, val) \
+	__percpu_or(raw_cpu_ptr(&(pcp)), val, sizeof(pcp))
+
+#define _percpu_read(pcp) (typeof(pcp)) \
+	(__percpu_read(raw_cpu_ptr(&(pcp)), sizeof(pcp)))
+
+#define _percpu_write(pcp, val) \
+	__percpu_write(raw_cpu_ptr(&(pcp)), (unsigned long)(val), sizeof(pcp))
+
+#define _percpu_xchg(pcp, val) (typeof(pcp)) \
+	(__percpu_xchg(raw_cpu_ptr(&(pcp)), (unsigned long)(val), sizeof(pcp)))
+
+#define this_cpu_add_1(pcp, val) _percpu_add(pcp, val)
+#define this_cpu_add_2(pcp, val) _percpu_add(pcp, val)
+#define this_cpu_add_4(pcp, val) _percpu_add(pcp, val)
+#define this_cpu_add_8(pcp, val) _percpu_add(pcp, val)
+
+#define this_cpu_add_return_1(pcp, val) _percpu_add_return(pcp, val)
+#define this_cpu_add_return_2(pcp, val) _percpu_add_return(pcp, val)
+#define this_cpu_add_return_4(pcp, val) _percpu_add_return(pcp, val)
+#define this_cpu_add_return_8(pcp, val) _percpu_add_return(pcp, val)
+
+#define this_cpu_and_1(pcp, val) _percpu_and(pcp, val)
+#define this_cpu_and_2(pcp, val) _percpu_and(pcp, val)
+#define this_cpu_and_4(pcp, val) _percpu_and(pcp, val)
+#define this_cpu_and_8(pcp, val) _percpu_and(pcp, val)
+
+#define this_cpu_or_1(pcp, val) _percpu_or(pcp, val)
+#define this_cpu_or_2(pcp, val) _percpu_or(pcp, val)
+#define this_cpu_or_4(pcp, val) _percpu_or(pcp, val)
+#define this_cpu_or_8(pcp, val) _percpu_or(pcp, val)
+
+#define this_cpu_read_1(pcp) _percpu_read(pcp)
+#define this_cpu_read_2(pcp) _percpu_read(pcp)
+#define this_cpu_read_4(pcp) _percpu_read(pcp)
+#define this_cpu_read_8(pcp) _percpu_read(pcp)
+
+#define this_cpu_write_1(pcp, val) _percpu_write(pcp, val)
+#define this_cpu_write_2(pcp, val) _percpu_write(pcp, val)
+#define this_cpu_write_4(pcp, val) _percpu_write(pcp, val)
+#define this_cpu_write_8(pcp, val) _percpu_write(pcp, val)
+
+#define this_cpu_xchg_1(pcp, val) _percpu_xchg(pcp, val)
+#define this_cpu_xchg_2(pcp, val) _percpu_xchg(pcp, val)
+#define this_cpu_xchg_4(pcp, val) _percpu_xchg(pcp, val)
+#define this_cpu_xchg_8(pcp, val) _percpu_xchg(pcp, val)
+
 #include <asm-generic/percpu.h>
 
 #endif /* __ASM_PERCPU_H */
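
For reference, a short usage sketch (hypothetical names, not part of the
patch) showing how generic kernel code exercises these accessors through
the this_cpu_* API:

#include <linux/percpu.h>
#include <linux/cpumask.h>

static DEFINE_PER_CPU(u64, example_events);

/* Fast path: pre-emption-safe increment of this CPU's counter. */
static void example_hit(void)
{
	this_cpu_add(example_events, 1);
}

/* Slow path: fold every CPU's counter into a single total. */
static u64 example_total(void)
{
	u64 sum = 0;
	int cpu;

	for_each_possible_cpu(cpu)
		sum += per_cpu(example_events, cpu);

	return sum;
}

With this patch applied, the this_cpu_add() above compiles down to an
ldxr/stxr loop on arm64 (and this_cpu_read()/this_cpu_write() to plain
ACCESS_ONCE accesses) rather than an interrupt save/restore pair around
the operation.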