From patchwork Wed Jul  2 14:40:50 2014
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 32991
Date: Wed, 2 Jul 2014 15:40:50 +0100
From: Will Deacon <will.deacon@arm.com>
To: Andy Lutomirski
Cc: steve.capper@linaro.org, ard.biesheuvel@linaro.org, sboyd@codeaurora.org,
	Nathan Lynch, keescook@google.com, linux@arm.linux.org.uk,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH v7 8/9] ARM: vdso initialization, mapping, and synchronization
Message-ID: <20140702144050.GD24879@arm.com>
References: <1403493118-7597-1-git-send-email-nathan_lynch@mentor.com>
	<1403493118-7597-9-git-send-email-nathan_lynch@mentor.com>
	<53B1D8AC.7060104@mit.edu>
	<20140701090309.GC28164@arm.com>
	<53B2C178.30607@mentor.com>
	<20140701141541.GP28164@arm.com>
User-Agent: Mutt/1.5.23 (2014-03-12)

Hi Andy,

On Tue, Jul 01, 2014 at 03:17:23PM +0100, Andy Lutomirski wrote:
> On Tue, Jul 1, 2014 at 7:15 AM, Will Deacon wrote:
> > On Tue, Jul 01, 2014 at 03:11:04PM +0100, Nathan Lynch wrote:
> >> I believe Andy is suggesting separate VMAs (with different VM flags) for
> >> the VDSO's data and code.  So, breakpoints in code would work, but
> >> attempts to modify the data page via ptrace() would fail outright
> >> instead of silently COWing.
> >
> > Ah, yes.  That makes a lot of sense for the data page -- we should do
> > something similar on arm64 too, since the CoW will break everything for
> > the task being debugged. We could also drop the EXEC flags too.
>
> If you do this, I have a slight preference for the new vma being
> called "[vvar]" to match x86.  It'll make the CRIU people happy if and
> when they port it to ARM.

I quickly hacked something (see below) and now I see the following in
/proc/$$/maps:

7fa1574000-7fa1575000 r-xp 00000000 00:00 0                    [vdso]
7fa1575000-7fa1576000 r--p 00000000 00:00 0                    [vvar]

Is that what you're after?

Will

--->8

diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
index 50384fec56c4..84cafbc3eb54 100644
--- a/arch/arm64/kernel/vdso.c
+++ b/arch/arm64/kernel/vdso.c
@@ -138,11 +138,12 @@ int arch_setup_additional_pages(struct linux_binprm *bprm,
 				       int uses_interp)
 {
 	struct mm_struct *mm = current->mm;
-	unsigned long vdso_base, vdso_mapping_len;
+	unsigned long vdso_base, vdso_text_len, vdso_mapping_len;
 	int ret;
 
+	vdso_text_len = vdso_pages << PAGE_SHIFT;
 	/* Be sure to map the data page */
-	vdso_mapping_len = (vdso_pages + 1) << PAGE_SHIFT;
+	vdso_mapping_len = vdso_text_len + PAGE_SIZE;
 
 	down_write(&mm->mmap_sem);
 	vdso_base = get_unmapped_area(NULL, 0, vdso_mapping_len, 0, 0);
@@ -152,35 +153,52 @@ int arch_setup_additional_pages(struct linux_binprm *bprm,
 	}
 	mm->context.vdso = (void *)vdso_base;
 
-	ret = install_special_mapping(mm, vdso_base, vdso_mapping_len,
+	ret = install_special_mapping(mm, vdso_base, vdso_text_len,
 				      VM_READ|VM_EXEC|
 				      VM_MAYREAD|VM_MAYWRITE|VM_MAYEXEC,
 				      vdso_pagelist);
-	if (ret) {
-		mm->context.vdso = NULL;
+	if (ret)
+		goto up_fail;
+
+	vdso_base += vdso_text_len;
+	ret = install_special_mapping(mm, vdso_base, PAGE_SIZE,
+				      VM_READ|VM_MAYREAD,
+				      vdso_pagelist + vdso_pages);
+	if (ret)
 		goto up_fail;
-	}
 
-up_fail:
 	up_write(&mm->mmap_sem);
+	return 0;
 
+up_fail:
+	mm->context.vdso = NULL;
+	up_write(&mm->mmap_sem);
 	return ret;
 }
 
 const char *arch_vma_name(struct vm_area_struct *vma)
 {
+	unsigned long vdso_text;
+
+	if (!vma->vm_mm)
+		return NULL;
+
+	vdso_text = (unsigned long)vma->vm_mm->context.vdso;
+
 	/*
 	 * We can re-use the vdso pointer in mm_context_t for identifying
 	 * the vectors page for compat applications. The vDSO will always
 	 * sit above TASK_UNMAPPED_BASE and so we don't need to worry about
 	 * it conflicting with the vectors base.
 	 */
-	if (vma->vm_mm && vma->vm_start == (long)vma->vm_mm->context.vdso) {
+	if (vma->vm_start == vdso_text) {
 #ifdef CONFIG_COMPAT
 		if (vma->vm_start == AARCH32_VECTORS_BASE)
 			return "[vectors]";
 #endif
 		return "[vdso]";
+	} else if (vma->vm_start == (vdso_text + (vdso_pages << PAGE_SHIFT))) {
+		return "[vvar]";
 	}
 
 	return NULL;
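
For completeness, a minimal userspace sketch (not part of the patch) for
sanity-checking the result: it scans /proc/self/maps and prints the vDSO-related
entries, expecting the text and data pages to show up as two separate VMAs.
This assumes a kernel with the change above applied, so the data page is
reported as [vvar].

/*
 * Hypothetical check, not part of the patch: scan /proc/self/maps and
 * print the vDSO-related lines.  With the split mapping in place we
 * expect two separate VMAs: [vdso] (r-xp) and [vvar] (r--p).
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *maps = fopen("/proc/self/maps", "r");
	char line[256];
	int found = 0;

	if (!maps) {
		perror("fopen");
		return 1;
	}

	while (fgets(line, sizeof(line), maps)) {
		if (strstr(line, "[vdso]") || strstr(line, "[vvar]")) {
			fputs(line, stdout);
			found++;
		}
	}

	fclose(maps);

	/* Expect both mappings once the data page has its own VMA. */
	return found == 2 ? 0 : 1;
}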