From patchwork Fri Feb 14 17:10:09 2014
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Fri, 14 Feb 2014 17:10:09 +0000
Message-Id: <1392397809-13255-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Julien Grall, tim@xen.org, ian.campbell@citrix.com, george.dunlap@citrix.com
Subject: [Xen-devel] [PATCH] xen/arm: Correctly handle non-page aligned pointer in raw_copy_*
List-Id: xen-devel@lists.xen.org
The current implementation of the raw_copy_* helpers may lead to data corruption, and sometimes a Xen crash, when the guest virtual address is not aligned to PAGE_SIZE.

When the total length is larger than a page, the length to read is incorrectly computed with:

    min(len, (unsigned)(PAGE_SIZE - offset))

As the offset is only computed once per function, if the start address was not aligned to PAGE_SIZE, a later iteration can:
  - read across a page boundary => Xen crash
  - read the previous page      => data corruption

This issue is resolved by computing the offset on every iteration.

Signed-off-by: Julien Grall
Acked-by: Stefano Stabellini

---

This patch is a bug fix for Xen 4.4. Without it, data may be corrupted between Xen and the guest when the guest virtual address is not aligned to PAGE_SIZE; sometimes it can also crash Xen.

These functions are used in numerous places in Xen, so if the fix introduced another bug it would quickly show up, even with small amounts of data.
---
 xen/arch/arm/guestcopy.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index af0af6b..b3b54e9 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -9,12 +9,11 @@ static unsigned long raw_copy_to_guest_helper(void *to, const void *from,
                                               unsigned len, int flush_dcache)
 {
     /* XXX needs to handle faults */
-    unsigned offset = (vaddr_t)to & ~PAGE_MASK;
-
     while ( len )
     {
         paddr_t g;
         void *p;
+        unsigned offset = (vaddr_t)to & ~PAGE_MASK;
         unsigned size = min(len, (unsigned)PAGE_SIZE - offset);

         if ( gvirt_to_maddr((vaddr_t) to, &g) )
@@ -50,12 +49,12 @@ unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
 unsigned long raw_clear_guest(void *to, unsigned len)
 {
     /* XXX needs to handle faults */
-    unsigned offset = (vaddr_t)to & ~PAGE_MASK;

     while ( len )
     {
         paddr_t g;
         void *p;
+        unsigned offset = (vaddr_t)to & ~PAGE_MASK;
         unsigned size = min(len, (unsigned)PAGE_SIZE - offset);

         if ( gvirt_to_maddr((vaddr_t) to, &g) )
@@ -76,12 +75,11 @@ unsigned long raw_clear_guest(void *to, unsigned len)
 unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned len)
 {
-    unsigned offset = (vaddr_t)from & ~PAGE_MASK;
-
     while ( len )
     {
         paddr_t g;
         void *p;
+        unsigned offset = (vaddr_t)from & ~PAGE_MASK;
         unsigned size = min(len, (unsigned)(PAGE_SIZE - offset));

         if ( gvirt_to_maddr((vaddr_t) from & PAGE_MASK, &g) )