From patchwork Wed Mar 5 10:50:05 2014
X-Patchwork-Submitter: Sebastian Capella
X-Patchwork-Id: 25747
From: Sebastian Capella
To: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
	linaro-kernel@lists.linaro.org, linux-arm-kernel@lists.infradead.org
Cc: Russ Dill, "Rafael J. Wysocki", Sebastian Capella, Russell King,
	Len Brown, Nicolas Pitre, Santosh Shilimkar, Will Deacon,
	Jonathan Austin, Catalin Marinas, Uwe Kleine-König, Stephen Boyd,
	Laura Abbott, Jiang Liu, Sricharan R, Victor Kamensky,
	Stefano Stabellini, Ben Dooks
Subject: [PATCH v7 2/2] ARM hibernation / suspend-to-disk
Date: Wed, 5 Mar 2014 02:50:05 -0800
Message-Id: <1394016605-24120-3-git-send-email-sebastian.capella@linaro.org>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1394016605-24120-1-git-send-email-sebastian.capella@linaro.org>
References: <1394016605-24120-1-git-send-email-sebastian.capella@linaro.org>
X-Mailing-List: linux-pm@vger.kernel.org

From: Russ Dill

Enable hibernation for ARM architectures and provide the ARM
architecture-specific calls used during hibernation.

The swsusp hibernation framework depends on the platform first having
functional suspend/resume.

Then, in order to enable hibernation on a given platform, a
platform_hibernation_ops structure may need to be registered with the
system in order to save/restore any SoC-specific / CPU-specific state
needing (re)init over a suspend-to-disk/resume-from-disk cycle
(a minimal registration sketch is appended after the patch).  For
example:

  - "secure" SoCs that have different sets of control registers and/or
    different CR register access patterns.

  - SoCs with L2 caches, as the activation sequence there is
    SoC-dependent; a full off-on cycle for the L2 is not done by the
    hibernation support code.

  - SoCs requiring steps on wakeup _before_ the "generic" parts done by
    cpu_suspend / cpu_resume can work correctly.
  - SoCs having persistent state which is maintained during suspend and
    resume, but will be lost during the power-off cycle after
    suspend-to-disk.

This is a rebase/rework of Frank Hofmann's v5 hibernation patchset.

Acked-by: Russ Dill
Cc: "Rafael J. Wysocki"
Signed-off-by: Sebastian Capella
Acked-by: Pavel Machek
Reviewed-by: Lorenzo Pieralisi
Cc: Russell King
Cc: Len Brown
Cc: Nicolas Pitre
Cc: Santosh Shilimkar
Cc: Will Deacon
Cc: Jonathan Austin
Cc: Catalin Marinas
Cc: "Uwe Kleine-König"
Cc: Stephen Boyd
Cc: Laura Abbott
Cc: Jiang Liu
Cc: Sricharan R
Cc: Victor Kamensky
Cc: Stefano Stabellini
Cc: Ben Dooks
---
 arch/arm/include/asm/memory.h |   1 +
 arch/arm/kernel/Makefile      |   1 +
 arch/arm/kernel/hibernate.c   | 108 +++++++++++++++++++++++++++++++++++++++++
 arch/arm/mm/Kconfig           |   5 ++
 include/linux/suspend.h       |   2 +
 5 files changed, 117 insertions(+)
 create mode 100644 arch/arm/kernel/hibernate.c

diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
index 8756e4b..d32adbb 100644
--- a/arch/arm/include/asm/memory.h
+++ b/arch/arm/include/asm/memory.h
@@ -291,6 +291,7 @@ static inline void *phys_to_virt(phys_addr_t x)
  */
 #define __pa(x)			__virt_to_phys((unsigned long)(x))
 #define __va(x)			((void *)__phys_to_virt((phys_addr_t)(x)))
+#define __pa_symbol(x)		__pa((unsigned long)(x))
 #define pfn_to_kaddr(pfn)	__va((pfn) << PAGE_SHIFT)
 
 extern phys_addr_t (*arch_virt_to_idmap)(unsigned long x);
diff --git a/arch/arm/kernel/Makefile b/arch/arm/kernel/Makefile
index a30fc9b..8afa848 100644
--- a/arch/arm/kernel/Makefile
+++ b/arch/arm/kernel/Makefile
@@ -39,6 +39,7 @@ obj-$(CONFIG_ARTHUR)		+= arthur.o
 obj-$(CONFIG_ISA_DMA)		+= dma-isa.o
 obj-$(CONFIG_PCI)		+= bios32.o isa.o
 obj-$(CONFIG_ARM_CPU_SUSPEND)	+= sleep.o suspend.o
+obj-$(CONFIG_HIBERNATION)	+= hibernate.o
 obj-$(CONFIG_SMP)		+= smp.o
 ifdef CONFIG_MMU
 obj-$(CONFIG_SMP)		+= smp_tlb.o
diff --git a/arch/arm/kernel/hibernate.c b/arch/arm/kernel/hibernate.c
new file mode 100644
index 0000000..656718a
--- /dev/null
+++ b/arch/arm/kernel/hibernate.c
@@ -0,0 +1,108 @@
+/*
+ * Hibernation support specific for ARM
+ *
+ * Derived from work on ARM hibernation support by:
+ *
+ * Ubuntu project, hibernation support for mach-dove
+ * Copyright (C) 2010 Nokia Corporation (Hiroshi Doyu)
+ * Copyright (C) 2010 Texas Instruments, Inc. (Teerth Reddy et al.)
+ *	https://lkml.org/lkml/2010/6/18/4
+ *	https://lists.linux-foundation.org/pipermail/linux-pm/2010-June/027422.html
+ *	https://patchwork.kernel.org/patch/96442/
+ *
+ * Copyright (C) 2006 Rafael J. Wysocki
+ *
+ * License terms: GNU General Public License (GPL) version 2
+ */
+
+#include <linux/mm.h>
+#include <linux/suspend.h>
+#include <asm/system_misc.h>
+#include <asm/idmap.h>
+#include <asm/suspend.h>
+
+extern const void __nosave_begin, __nosave_end;
+
+int pfn_is_nosave(unsigned long pfn)
+{
+	unsigned long nosave_begin_pfn =
+		__pa_symbol(&__nosave_begin) >> PAGE_SHIFT;
+	unsigned long nosave_end_pfn =
+		PAGE_ALIGN(__pa_symbol(&__nosave_end)) >> PAGE_SHIFT;
+
+	return (pfn >= nosave_begin_pfn) && (pfn < nosave_end_pfn);
+}
+
+void notrace save_processor_state(void)
+{
+	WARN_ON(num_online_cpus() != 1);
+	local_fiq_disable();
+}
+
+void notrace restore_processor_state(void)
+{
+	local_fiq_enable();
+}
+
+/*
+ * Snapshot kernel memory and reset the system.
+ *
+ * swsusp_save() is executed in the suspend finisher so that the CPU
+ * context pointer and memory are part of the saved image, which is
+ * required by the resume kernel image to restart execution from
+ * swsusp_arch_suspend().
+ *
+ * soft_restart is not technically needed, but is used to get success
+ * returned from cpu_suspend.
+ *
+ * When soft reboot completes, the hibernation snapshot is written out.
+ */
+static int notrace arch_save_image(unsigned long unused)
+{
+	int ret;
+
+	ret = swsusp_save();
+	if (ret == 0)
+		soft_restart(virt_to_phys(cpu_resume));
+	return ret;
+}
+
+/*
+ * Save the current CPU state before suspend / poweroff.
+ */
+int notrace swsusp_arch_suspend(void)
+{
+	return cpu_suspend(0, arch_save_image);
+}
+
+/*
+ * Restore page contents for physical pages that were in use during loading
+ * hibernation image.  Switch to idmap_pgd so the physical page tables
+ * are overwritten with the same contents.
+ */
+static void notrace arch_restore_image(void *unused)
+{
+	struct pbe *pbe;
+
+	cpu_switch_mm(idmap_pgd, &init_mm);
+	for (pbe = restore_pblist; pbe; pbe = pbe->next)
+		copy_page(pbe->orig_address, pbe->address);
+
+	soft_restart(virt_to_phys(cpu_resume));
+}
+
+static u64 resume_stack[PAGE_SIZE/2/sizeof(u64)] __nosavedata;
+
+/*
+ * Resume from the hibernation image.
+ * Due to the kernel heap / data restore, stack contents change underneath
+ * and that would make function calls impossible; switch to a temporary
+ * stack within the nosave region to avoid that problem.
+ */
+int swsusp_arch_resume(void)
+{
+	extern void call_with_stack(void (*fn)(void *), void *arg, void *sp);
+	call_with_stack(arch_restore_image, 0,
+		resume_stack + ARRAY_SIZE(resume_stack));
+	return 0;
+}
diff --git a/arch/arm/mm/Kconfig b/arch/arm/mm/Kconfig
index 1f8fed9..83707702 100644
--- a/arch/arm/mm/Kconfig
+++ b/arch/arm/mm/Kconfig
@@ -611,6 +611,11 @@ config CPU_USE_DOMAINS
 config IO_36
 	bool
 
+config ARCH_HIBERNATION_POSSIBLE
+	bool
+	depends on MMU
+	default y if CPU_ARM920T || CPU_ARM926T || CPU_SA1100 || CPU_XSCALE || CPU_XSC3 || CPU_V6 || CPU_V6K || CPU_V7
+
 comment "Processor Features"
 
 config ARM_LPAE
diff --git a/include/linux/suspend.h b/include/linux/suspend.h
index f73cabf..38bbf95 100644
--- a/include/linux/suspend.h
+++ b/include/linux/suspend.h
@@ -320,6 +320,8 @@ extern unsigned long get_safe_page(gfp_t gfp_mask);
 extern void hibernation_set_ops(const struct platform_hibernation_ops *ops);
 extern int hibernate(void);
 extern bool system_entering_hibernation(void);
+asmlinkage int swsusp_save(void);
+extern struct pbe *restore_pblist;
 #else /* CONFIG_HIBERNATION */
 static inline void register_nosave_region(unsigned long b, unsigned long e) {}
 static inline void register_nosave_region_late(unsigned long b, unsigned long e) {}
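
Not part of the patch itself, only an illustration of the changelog note
above: on platforms that do need SoC-specific save/restore, a
platform_hibernation_ops structure is registered through
hibernation_set_ops() (visible in the include/linux/suspend.h context in
the last hunk).  The sketch below assumes CONFIG_HIBERNATION=y and uses
hypothetical foo_soc_* hooks; the hibernation core only accepts an ops
structure with all of its main callbacks populated, so trivial stubs are
filled in.

#include <linux/init.h>
#include <linux/suspend.h>

/* Hypothetical SoC hooks; the names are for illustration only. */
static int foo_soc_begin(void)		{ return 0; }
static void foo_soc_end(void)		{ }
static int foo_soc_pre_snapshot(void)	{ return 0; }	/* quiesce / save SoC state */
static void foo_soc_finish(void)	{ }
static int foo_soc_prepare(void)	{ return 0; }
static int foo_soc_enter(void)		{ return 0; }	/* final power-off step */
static void foo_soc_leave(void)		{ }
static int foo_soc_pre_restore(void)	{ return 0; }
static void foo_soc_restore_cleanup(void)
{
	/* re-init state lost across the power-off cycle, e.g. L2 setup */
}

static const struct platform_hibernation_ops foo_hibernation_ops = {
	.begin		 = foo_soc_begin,
	.end		 = foo_soc_end,
	.pre_snapshot	 = foo_soc_pre_snapshot,
	.finish		 = foo_soc_finish,
	.prepare	 = foo_soc_prepare,
	.enter		 = foo_soc_enter,
	.leave		 = foo_soc_leave,
	.pre_restore	 = foo_soc_pre_restore,
	.restore_cleanup = foo_soc_restore_cleanup,
};

static int __init foo_hibernation_init(void)
{
	hibernation_set_ops(&foo_hibernation_ops);
	return 0;
}
late_initcall(foo_hibernation_init);

Whether any hooks are needed at all is SoC-dependent; platforms whose
state is fully handled by cpu_suspend / cpu_resume need not register ops.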