From patchwork Tue Jun 24 15:51:37 2014
X-Patchwork-Submitter: Mark Rutland
X-Patchwork-Id: 32437
From: Mark Rutland
To: linux-arm-kernel@lists.infradead.org
Cc: Mark Rutland, rob.herring@linaro.org, lauraa@codeaurora.org,
	peter.maydell@linaro.org, geoff@infradead.org, Catalin.Marinas@arm.com,
	Will.Deacon@arm.com, leif.lindholm@linaro.org, Marc.Zyngier@arm.com,
	kevin.hilman@linaro.org, ijc@hellion.org.uk, trini@ti.com,
	Dave.Martin@arm.com
Subject: [PATCHv4 4/4] arm64: Enable TEXT_OFFSET fuzzing
Date: Tue, 24 Jun 2014 16:51:37 +0100
Message-Id: <1403625097-18235-5-git-send-email-mark.rutland@arm.com>
In-Reply-To: <1403625097-18235-1-git-send-email-mark.rutland@arm.com>
References: <1403625097-18235-1-git-send-email-mark.rutland@arm.com>

The arm64 Image header contains a text_offset field which bootloaders
are supposed to read to determine the offset (from a 2MB aligned "start
of memory" per booting.txt) at which to load the kernel. The offset is
not well respected by bootloaders at present, and due to the lack of
variation there is little incentive to support it. This is unfortunate
for the sake of future kernels where we may wish to vary the text
offset (even zeroing it).
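For illustration only (not part of this patch), here is a minimal C
sketch of how a loader might consume text_offset, assuming the 64-byte
Image header layout documented in booting.txt; the struct and function
names are made up for the example:

#include <stdint.h>

/* arm64 Image header, per booting.txt (illustrative subset of fields) */
struct arm64_image_header {
	uint32_t code0;		/* executable code */
	uint32_t code1;		/* executable code */
	uint64_t text_offset;	/* image load offset, little-endian */
	uint64_t res[5];	/* reserved */
	uint32_t magic;		/* 0x644d5241, "ARM\x64" */
	uint32_t res5;		/* reserved */
};

/* dram_base must be the 2MB-aligned "start of memory" from booting.txt */
static uint64_t arm64_kernel_load_addr(const struct arm64_image_header *hdr,
				       uint64_t dram_base)
{
	/* le64-to-cpu conversion of text_offset omitted for brevity */
	return dram_base + hdr->text_offset;
}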
This patch adds options to arm64 to enable fuzz-testing of text_offset.
CONFIG_ARM64_RANDOMIZE_TEXT_OFFSET forces the text offset to a random
16-byte aligned value in the range [0..2MB) on each build of the
kernel. It is recommended that distribution kernels enable
randomization to test bootloaders such that any compliance issues can
be fixed early.

Signed-off-by: Mark Rutland
Acked-by: Tom Rini
Acked-by: Will Deacon
---
 arch/arm64/Kconfig.debug        | 15 +++++++++++++++
 arch/arm64/Makefile             |  4 ++++
 arch/arm64/kernel/head.S        |  8 ++++++--
 arch/arm64/kernel/vmlinux.lds.S |  5 +++++
 4 files changed, 30 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/Kconfig.debug b/arch/arm64/Kconfig.debug
index 1c1b756..4ee8e90 100644
--- a/arch/arm64/Kconfig.debug
+++ b/arch/arm64/Kconfig.debug
@@ -28,4 +28,19 @@ config PID_IN_CONTEXTIDR
 	  instructions during context switch. Say Y here only if you are
 	  planning to use hardware trace tools with this kernel.
 
+config ARM64_RANDOMIZE_TEXT_OFFSET
+	bool "Randomize TEXT_OFFSET at build time"
+	help
+	  Say Y here if you want the image load offset (AKA TEXT_OFFSET)
+	  of the kernel to be randomized at build-time. When selected,
+	  this option will cause TEXT_OFFSET to be randomized upon any
+	  build of the kernel, and the offset will be reflected in the
+	  text_offset field of the resulting Image. This can be used to
+	  fuzz-test bootloaders which respect text_offset.
+
+	  This option is intended for bootloader and/or kernel testing
+	  only. Bootloaders must make no assumptions regarding the value
+	  of TEXT_OFFSET and platforms must not require a specific
+	  value.
+
 endmenu
diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index 8185a91..e8d025c 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -38,7 +38,11 @@ CHECKFLAGS	+= -D__aarch64__
 head-y		:= arch/arm64/kernel/head.o
 
 # The byte offset of the kernel image in RAM from the start of RAM.
+ifeq ($(CONFIG_ARM64_RANDOMIZE_TEXT_OFFSET), y)
+TEXT_OFFSET := $(shell awk 'BEGIN {srand(); printf "0x%04x0\n", int(65535 * rand())}')
+else
 TEXT_OFFSET := 0x00080000
+endif
 
 export	TEXT_OFFSET GZFLAGS
 
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 7b59c3d..8483504 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -37,8 +37,12 @@
 
 #define KERNEL_RAM_VADDR	(PAGE_OFFSET + TEXT_OFFSET)
 
-#if (KERNEL_RAM_VADDR & 0xfffff) != 0x80000
-#error KERNEL_RAM_VADDR must start at 0xXXX80000
+#if (TEXT_OFFSET & 0xf) != 0
+#error TEXT_OFFSET must be at least 16B aligned
+#elif (PAGE_OFFSET & 0xfffff) != 0
+#error PAGE_OFFSET must be at least 2MB aligned
+#elif TEXT_OFFSET > 0xfffff
+#error TEXT_OFFSET must be less than 2MB
 #endif
 
 	.macro	pgtbl, ttb0, ttb1, virt_to_phys
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index a814768..97f0c04 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -125,3 +125,8 @@ SECTIONS
  */
 ASSERT(((__hyp_idmap_text_start + PAGE_SIZE) > __hyp_idmap_text_end),
        "HYP init code too big")
+
+/*
+ * If padding is applied before .head.text, virt<->phys conversions will fail.
+ */
+ASSERT(_text == (PAGE_OFFSET + TEXT_OFFSET), "HEAD is misaligned")
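
Note that the awk expression above emits a value of the form 0xNNNN0,
i.e. a 16-byte aligned offset no larger than 0xfffe0, so any generated
TEXT_OFFSET passes the alignment and range checks added to head.S.

As a rough sanity check (again not part of the patch; the path and
program are only illustrative), the text_offset recorded by a
randomized build can be read back from byte offset 8 of the generated
Image:

#include <stdint.h>
#include <stdio.h>

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "arch/arm64/boot/Image";
	FILE *f = fopen(path, "rb");
	uint64_t text_offset;

	if (!f || fseek(f, 8, SEEK_SET) != 0 ||
	    fread(&text_offset, sizeof(text_offset), 1, f) != 1) {
		fprintf(stderr, "failed to read %s\n", path);
		return 1;
	}
	/* the field is little-endian on disk; no conversion on an LE host */
	printf("text_offset = 0x%llx\n", (unsigned long long)text_offset);
	fclose(f);
	return 0;
}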