Message ID | 20220405141954.1489782-3-sebastianene@google.com
---|---
State | New
Series | Detect stalls on guest vCPUS
Sebastian,

On Tue, Apr 05, 2022 at 02:19:55PM +0000, Sebastian Ene wrote:
> This patch adds support for a virtual watchdog which relies on the
> per-cpu hrtimers to pet at regular intervals.
>

The watchdog subsystem is not intended to detect soft and hard lockups.
It is intended to detect userspace issues. A watchdog driver requires
a userspace component which needs to ping the watchdog on a regular basis
to prevent timeouts (and watchdog drivers are supposed to use the
watchdog kernel API).

What you have here is a CPU stall detection mechanism, similar to the
existing soft/hard lockup detection mechanism. This code does not
belong in the watchdog subsystem; it is similar to the existing
hard/softlockup detection code (kernel/watchdog.c) and should reside
at the same location.

Having said that, I could imagine a watchdog driver to be used in VMs,
but that would be similar to existing watchdog drivers. The easiest way
to get there would probably be to just instantiate one of the watchdog
devices already supported by qemu.

Guenter
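For context, the watchdog kernel API referred to above centres on a struct watchdog_device and struct watchdog_ops registered with watchdog_register_device(), which creates a /dev/watchdogN node that a userspace component must ping before the timeout elapses. A minimal sketch of that shape follows; the vm_wdt_example_* names and the timeout values are illustrative assumptions, not part of the posted patch.

// Minimal sketch of a driver using the watchdog kernel API.
// All vm_wdt_example_* identifiers are hypothetical.
#include <linux/init.h>
#include <linux/module.h>
#include <linux/watchdog.h>

static int vm_wdt_example_start(struct watchdog_device *wdd)
{
	/* Arm the (virtual) hardware timer here. */
	return 0;
}

static int vm_wdt_example_stop(struct watchdog_device *wdd)
{
	/* Disarm the (virtual) hardware timer here. */
	return 0;
}

static int vm_wdt_example_ping(struct watchdog_device *wdd)
{
	/* Reload the counter; called when userspace pings /dev/watchdogN. */
	return 0;
}

static const struct watchdog_info vm_wdt_example_info = {
	.options = WDIOF_SETTIMEOUT | WDIOF_KEEPALIVEPING | WDIOF_MAGICCLOSE,
	.identity = "vm watchdog example",
};

static const struct watchdog_ops vm_wdt_example_ops = {
	.owner = THIS_MODULE,
	.start = vm_wdt_example_start,
	.stop  = vm_wdt_example_stop,
	.ping  = vm_wdt_example_ping,
};

static struct watchdog_device vm_wdt_example_wdd = {
	.info = &vm_wdt_example_info,
	.ops  = &vm_wdt_example_ops,
	.timeout = 30,		/* seconds; illustrative default */
	.min_timeout = 1,
	.max_timeout = 60,
};

static int __init vm_wdt_example_init(void)
{
	/* Exposes /dev/watchdogN; userspace must pet it before timeout. */
	return watchdog_register_device(&vm_wdt_example_wdd);
}

static void __exit vm_wdt_example_exit(void)
{
	watchdog_unregister_device(&vm_wdt_example_wdd);
}

module_init(vm_wdt_example_init);
module_exit(vm_wdt_example_exit);
MODULE_LICENSE("GPL");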
On Tue, Apr 05, 2022 at 02:15:51PM -0700, Guenter Roeck wrote:
> Sebastian,
>

Hello Guenter,

> On Tue, Apr 05, 2022 at 02:19:55PM +0000, Sebastian Ene wrote:
> > This patch adds support for a virtual watchdog which relies on the
> > per-cpu hrtimers to pet at regular intervals.
> >
>
> The watchdog subsystem is not intended to detect soft and hard lockups.
> It is intended to detect userspace issues. A watchdog driver requires
> a userspace component which needs to ping the watchdog on a regular basis
> to prevent timeouts (and watchdog drivers are supposed to use the
> watchdog kernel API).
>

Thanks for getting back! I wanted to create a mechanism to detect
stalls on vCPUs, and I am not sure if the current watchdog subsystem has a
way to create per-CPU bound watchdogs (in the same way as Power PC has
kernel/watchdog.c).
The per-CPU watchdog is needed to account for time that the guest is not
running (either scheduled out or waiting for an event) to prevent spurious
reset events caused by the watchdog.

> What you have here is a CPU stall detection mechanism, similar to the
> existing soft/hard lockup detection mechanism. This code does not
> belong in the watchdog subsystem; it is similar to the existing
> hard/softlockup detection code (kernel/watchdog.c) and should reside
> at the same location.
>

I agree that this doesn't belong in the watchdog subsystem, but the current
stall detection mechanism calls through MMIO into a virtual device
'qemu,virt-watchdog'. Isn't calling into a device from kernel/watchdog.c
something that we should avoid?

> Having said that, I could imagine a watchdog driver to be used in VMs,
> but that would be similar to existing watchdog drivers. The easiest way
> to get there would probably be to just instantiate one of the watchdog
> devices already supported by qemu.
>

I am looking forward to your response,

> Guenter

Cheers,
Sebastian
On 4/6/22 09:31, Sebastian Ene wrote:
> On Tue, Apr 05, 2022 at 02:15:51PM -0700, Guenter Roeck wrote:
>> Sebastian,
>>
>
> Hello Guenter,
>
>> On Tue, Apr 05, 2022 at 02:19:55PM +0000, Sebastian Ene wrote:
>>> This patch adds support for a virtual watchdog which relies on the
>>> per-cpu hrtimers to pet at regular intervals.
>>>
>>
>> The watchdog subsystem is not intended to detect soft and hard lockups.
>> It is intended to detect userspace issues. A watchdog driver requires
>> a userspace component which needs to ping the watchdog on a regular basis
>> to prevent timeouts (and watchdog drivers are supposed to use the
>> watchdog kernel API).
>>
>
> Thanks for getting back! I wanted to create a mechanism to detect
> stalls on vCPUs, and I am not sure if the current watchdog subsystem has a
> way to create per-CPU bound watchdogs (in the same way as Power PC has
> kernel/watchdog.c).
> The per-CPU watchdog is needed to account for time that the guest is not
> running (either scheduled out or waiting for an event) to prevent spurious
> reset events caused by the watchdog.
>
>> What you have here is a CPU stall detection mechanism, similar to the
>> existing soft/hard lockup detection mechanism. This code does not
>> belong in the watchdog subsystem; it is similar to the existing
>> hard/softlockup detection code (kernel/watchdog.c) and should reside
>> at the same location.
>>
>
> I agree that this doesn't belong in the watchdog subsystem, but the current
> stall detection mechanism calls through MMIO into a virtual device
> 'qemu,virt-watchdog'. Isn't calling into a device from kernel/watchdog.c
> something that we should avoid?
>

You are introducing qemu,virt-watchdog, so it seems to me that any argument
along that line doesn't really apply.

I think it is more a matter for core kernel developers to discuss and
decide how this functionality is best instantiated. It doesn't _have_
to be a device, after all, just like the current lockup detection
code is not a device. In either case, I am not really the right person
to discuss this since it is a matter of core kernel code which I am
not sufficiently familiar with. All I can say is that watchdog drivers
in the watchdog subsystem have a different scope.

Guenter

>> Having said that, I could imagine a watchdog driver to be used in VMs,
>> but that would be similar to existing watchdog drivers. The easiest way
>> to get there would probably be to just instantiate one of the watchdog
>> devices already supported by qemu.
>>
>
> I am looking forward to your response,
>
>> Guenter
>
> Cheers,
> Sebastian
On Wed, Apr 06, 2022 at 09:52:05AM -0700, Guenter Roeck wrote:
> On 4/6/22 09:31, Sebastian Ene wrote:
> > On Tue, Apr 05, 2022 at 02:15:51PM -0700, Guenter Roeck wrote:
> > > Sebastian,
> > >
> >
> > Hello Guenter,
> >
> > > On Tue, Apr 05, 2022 at 02:19:55PM +0000, Sebastian Ene wrote:
> > > > This patch adds support for a virtual watchdog which relies on the
> > > > per-cpu hrtimers to pet at regular intervals.
> > > >
> > >
> > > The watchdog subsystem is not intended to detect soft and hard lockups.
> > > It is intended to detect userspace issues. A watchdog driver requires
> > > a userspace component which needs to ping the watchdog on a regular basis
> > > to prevent timeouts (and watchdog drivers are supposed to use the
> > > watchdog kernel API).
> > >
> >
> > Thanks for getting back! I wanted to create a mechanism to detect
> > stalls on vCPUs, and I am not sure if the current watchdog subsystem has a
> > way to create per-CPU bound watchdogs (in the same way as Power PC has
> > kernel/watchdog.c).
> > The per-CPU watchdog is needed to account for time that the guest is not
> > running (either scheduled out or waiting for an event) to prevent spurious
> > reset events caused by the watchdog.
> >
> > > What you have here is a CPU stall detection mechanism, similar to the
> > > existing soft/hard lockup detection mechanism. This code does not
> > > belong in the watchdog subsystem; it is similar to the existing
> > > hard/softlockup detection code (kernel/watchdog.c) and should reside
> > > at the same location.
> > >
> >
> > I agree that this doesn't belong in the watchdog subsystem, but the current
> > stall detection mechanism calls through MMIO into a virtual device
> > 'qemu,virt-watchdog'. Isn't calling into a device from kernel/watchdog.c
> > something that we should avoid?
> >

Hello Guenter,

> You are introducing qemu,virt-watchdog, so it seems to me that any argument
> along that line doesn't really apply.
>

I am trying to follow your guidelines to make this work, so I would be
grateful if you have some time to share your thoughts on this.

> I think it is more a matter for core kernel developers to discuss and
> decide how this functionality is best instantiated. It doesn't _have_
> to be a device, after all, just like the current lockup detection
> code is not a device. In either case, I am not really the right person
> to discuss this since it is a matter of core kernel code which I am
> not sufficiently familiar with. All I can say is that watchdog drivers
> in the watchdog subsystem have a different scope.

This watchdog device tracks the elapsed time on a per-cpu basis, since KVM
schedules vCPUs independently. I am attempting to re-write it to use the
watchdog-core infrastructure, but doing this will lose the per-cpu watchdog
binding, and exposing it to userspace would require a strong thread affinity
setting. How can I overcome this problem?

Having it like a hard lockup detector mechanism doesn't work either, because
when the watchdog expires we rely on crosvm (not the guest kernel) to handle
this event and reset the machine. We cannot inject the reset event back into
the guest as we don't have NMI support on arm64.

>
> Guenter

Thanks,
Sebastian

> > > Having said that, I could imagine a watchdog driver to be used in VMs,
> > > but that would be similar to existing watchdog drivers. The easiest way
> > > to get there would probably be to just instantiate one of the watchdog
> > > devices already supported by qemu.
> > >
> >
> > I am looking forward to your response,
> >
> > > Guenter
> >
> > Cheers,
> > Sebastian
>
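To make the thread-affinity concern above concrete: if each per-vCPU watchdog were exposed through the watchdog core as its own /dev/watchdogN node, the userspace petting loop for a given node would have to stay pinned to the matching vCPU so that only time actually spent on that CPU keeps the counter alive. A rough userspace sketch follows; the device path, CPU number, and ping interval are assumptions for illustration only.

/*
 * Hypothetical userspace "pet" loop for one per-vCPU watchdog node.
 * The petting process pins itself to CPU 0 and pings /dev/watchdog0
 * well within the timeout window.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <sched.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/watchdog.h>

int main(void)
{
	cpu_set_t set;
	int fd;

	/* Pin this process to CPU 0 so the pings account for that vCPU only. */
	CPU_ZERO(&set);
	CPU_SET(0, &set);
	if (sched_setaffinity(0, sizeof(set), &set))
		return 1;

	fd = open("/dev/watchdog0", O_RDWR);
	if (fd < 0)
		return 1;

	for (;;) {
		ioctl(fd, WDIOC_KEEPALIVE, 0);	/* pet the watchdog */
		sleep(4);			/* assumed to be below the timeout */
	}
}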
diff --git a/drivers/watchdog/Kconfig b/drivers/watchdog/Kconfig
index 01ce3f41cc21..3304d128484e 100644
--- a/drivers/watchdog/Kconfig
+++ b/drivers/watchdog/Kconfig
@@ -351,6 +351,14 @@ config SL28CPLD_WATCHDOG
 	  To compile this driver as a module, choose M here: the
 	  module will be called sl28cpld_wdt.
 
+config VM_WATCHDOG
+	tristate "Virtual Machine Watchdog"
+	select LOCKUP_DETECTOR
+	help
+	  Detect CPU locks on the virtual machine.
+	  To compile this driver as a module, choose M here: the
+	  module will be called vm-wdt.
+
 # ALPHA Architecture
 
 # ARM Architecture
diff --git a/drivers/watchdog/Makefile b/drivers/watchdog/Makefile
index 071a2e50be98..73206cbc3835 100644
--- a/drivers/watchdog/Makefile
+++ b/drivers/watchdog/Makefile
@@ -227,3 +227,4 @@ obj-$(CONFIG_MENZ069_WATCHDOG) += menz69_wdt.o
 obj-$(CONFIG_RAVE_SP_WATCHDOG) += rave-sp-wdt.o
 obj-$(CONFIG_STPMIC1_WATCHDOG) += stpmic1_wdt.o
 obj-$(CONFIG_SL28CPLD_WATCHDOG) += sl28cpld_wdt.o
+obj-$(CONFIG_VM_WATCHDOG) += vm-wdt.o
diff --git a/drivers/watchdog/vm-wdt.c b/drivers/watchdog/vm-wdt.c
new file mode 100644
index 000000000000..ea4351754645
--- /dev/null
+++ b/drivers/watchdog/vm-wdt.c
@@ -0,0 +1,215 @@
+// SPDX-License-Identifier: GPL-2.0+
+//
+// Virtual watchdog driver.
+// Copyright (C) Google, 2022
+
+#define pr_fmt(fmt) "vm-watchdog: " fmt
+
+#include <linux/cpu.h>
+#include <linux/init.h>
+#include <linux/io.h>
+#include <linux/kernel.h>
+
+#include <linux/device.h>
+#include <linux/interrupt.h>
+#include <linux/module.h>
+#include <linux/nmi.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+#include <linux/param.h>
+#include <linux/percpu.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+
+#define DRV_NAME			"vm_wdt"
+#define DRV_VERSION			"1.0"
+
+#define VMWDT_REG_STATUS		(0x00)
+#define VMWDT_REG_LOAD_CNT		(0x04)
+#define VMWDT_REG_CURRENT_CNT		(0x08)
+#define VMWDT_REG_CLOCK_FREQ_HZ		(0x0C)
+#define VMWDT_REG_LEN			(0x10)
+
+#define VMWDT_DEFAULT_CLOCK_HZ		(10)
+#define VMWDT_DEFAULT_TIMEOT_SEC	(8)
+
+struct vm_wdt_s {
+	void __iomem *membase;
+	u32 clock_freq;
+	u32 expiration_sec;
+	u32 ping_timeout_ms;
+	struct hrtimer per_cpu_hrtimer;
+	struct platform_device *dev;
+};
+
+#define vmwdt_reg_write(wdt, reg, value) \
+	iowrite32((value), (wdt)->membase + (reg))
+#define vmwdt_reg_read(wdt, reg) \
+	ioread32((wdt)->membase + (reg))
+
+static struct platform_device *virt_dev;
+
+static enum hrtimer_restart vmwdt_timer_fn(struct hrtimer *hrtimer)
+{
+	struct vm_wdt_s *cpu_wdt;
+	u32 ticks;
+
+	cpu_wdt = container_of(hrtimer, struct vm_wdt_s, per_cpu_hrtimer);
+	ticks = cpu_wdt->clock_freq * cpu_wdt->expiration_sec;
+	vmwdt_reg_write(cpu_wdt, VMWDT_REG_LOAD_CNT, ticks);
+	hrtimer_forward_now(hrtimer, ms_to_ktime(cpu_wdt->ping_timeout_ms));
+
+	return HRTIMER_RESTART;
+}
+
+static void vmwdt_start(void *arg)
+{
+	u32 ticks;
+	int cpu = smp_processor_id();
+	struct vm_wdt_s *cpu_wdt = arg;
+	struct hrtimer *hrtimer = &cpu_wdt->per_cpu_hrtimer;
+
+	pr_info("cpu %u vmwdt start\n", cpu);
+	vmwdt_reg_write(cpu_wdt, VMWDT_REG_CLOCK_FREQ_HZ,
+			cpu_wdt->clock_freq);
+
+	/* Compute the number of ticks required for the watchdog counter
+	 * register based on the internal clock frequency and the watchdog
+	 * timeout given from the device tree.
+	 */
+	ticks = cpu_wdt->clock_freq * cpu_wdt->expiration_sec;
+	vmwdt_reg_write(cpu_wdt, VMWDT_REG_LOAD_CNT, ticks);
+
+	/* Enable the internal clock and start the watchdog */
+	vmwdt_reg_write(cpu_wdt, VMWDT_REG_STATUS, 1);
+
+	hrtimer_init(hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+	hrtimer->function = vmwdt_timer_fn;
+	hrtimer_start(hrtimer, ms_to_ktime(cpu_wdt->ping_timeout_ms),
+		      HRTIMER_MODE_REL_PINNED);
+}
+
+static void vmwdt_stop(void *arg)
+{
+	int cpu = smp_processor_id();
+	struct vm_wdt_s *cpu_wdt = arg;
+	struct hrtimer *hrtimer = &cpu_wdt->per_cpu_hrtimer;
+
+	hrtimer_cancel(hrtimer);
+
+	/* Disable the watchdog */
+	vmwdt_reg_write(cpu_wdt, VMWDT_REG_STATUS, 0);
+	pr_info("cpu %d vmwdt stop\n", cpu);
+}
+
+static int start_watchdog_on_cpu(unsigned int cpu)
+{
+	struct vm_wdt_s *vm_wdt = platform_get_drvdata(virt_dev);
+
+	vmwdt_start(this_cpu_ptr(vm_wdt));
+	return 0;
+}
+
+static int stop_watchdog_on_cpu(unsigned int cpu)
+{
+	struct vm_wdt_s *vm_wdt = platform_get_drvdata(virt_dev);
+
+	vmwdt_stop(this_cpu_ptr(vm_wdt));
+	return 0;
+}
+
+static int vmwdt_probe(struct platform_device *dev)
+{
+	int cpu, ret, err;
+	void __iomem *membase;
+	struct resource *r;
+	struct vm_wdt_s *vm_wdt;
+	u32 wdt_clock, wdt_timeout_sec = 0;
+
+	r = platform_get_resource(dev, IORESOURCE_MEM, 0);
+	if (r == NULL)
+		return -ENOENT;
+
+	vm_wdt = alloc_percpu(typeof(struct vm_wdt_s));
+	if (!vm_wdt)
+		return -ENOMEM;
+
+	membase = ioremap(r->start, resource_size(r));
+	if (!membase) {
+		ret = -ENXIO;
+		goto err_withmem;
+	}
+
+	virt_dev = dev;
+	platform_set_drvdata(dev, vm_wdt);
+	if (of_property_read_u32(dev->dev.of_node, "clock", &wdt_clock))
+		wdt_clock = VMWDT_DEFAULT_CLOCK_HZ;
+
+	if (of_property_read_u32(dev->dev.of_node, "timeout-sec",
+				 &wdt_timeout_sec))
+		wdt_timeout_sec = VMWDT_DEFAULT_TIMEOT_SEC;
+
+	for_each_cpu_and(cpu, cpu_online_mask, &watchdog_cpumask) {
+		struct vm_wdt_s *cpu_wdt = per_cpu_ptr(vm_wdt, cpu);
+
+		cpu_wdt->membase = membase + cpu * VMWDT_REG_LEN;
+		cpu_wdt->clock_freq = wdt_clock;
+		cpu_wdt->expiration_sec = wdt_timeout_sec;
+		cpu_wdt->ping_timeout_ms = wdt_timeout_sec * MSEC_PER_SEC / 2;
+		smp_call_function_single(cpu, vmwdt_start, cpu_wdt, true);
+	}
+
+	err = cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN,
+					"virt/watchdog:online",
+					start_watchdog_on_cpu,
+					stop_watchdog_on_cpu);
+	if (err < 0) {
+		pr_warn("could not be initialized");
+		ret = err;
+		goto err_withmem;
+	}
+
+	return 0;
+
+err_withmem:
+	free_percpu(vm_wdt);
+	return ret;
+}
+
+static int vmwdt_remove(struct platform_device *dev)
+{
+	int cpu;
+	struct vm_wdt_s *vm_wdt = platform_get_drvdata(dev);
+
+	for_each_cpu_and(cpu, cpu_online_mask, &watchdog_cpumask) {
+		struct vm_wdt_s *cpu_wdt = per_cpu_ptr(vm_wdt, cpu);
+
+		smp_call_function_single(cpu, vmwdt_stop, cpu_wdt, true);
+	}
+
+	free_percpu(vm_wdt);
+	return 0;
+}
+
+static const struct of_device_id vmwdt_of_match[] = {
+	{ .compatible = "qemu,vm-watchdog", },
+	{}
+};
+
+MODULE_DEVICE_TABLE(of, vmwdt_of_match);
+
+static struct platform_driver vmwdt_driver = {
+	.probe = vmwdt_probe,
+	.remove = vmwdt_remove,
+	.driver = {
+		.name = DRV_NAME,
+		.of_match_table = vmwdt_of_match,
+	},
+};
+
+module_platform_driver(vmwdt_driver);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Sebastian Ene <sebastianene@google.com>");
+MODULE_DESCRIPTION("Virtual watchdog driver");
+MODULE_VERSION(DRV_VERSION);
This patch adds support for a virtual watchdog which relies on the
per-cpu hrtimers to pet at regular intervals.

Signed-off-by: Sebastian Ene <sebastianene@google.com>
---
 drivers/watchdog/Kconfig  |   8 ++
 drivers/watchdog/Makefile |   1 +
 drivers/watchdog/vm-wdt.c | 215 ++++++++++++++++++++++++++++++++++++++
 3 files changed, 224 insertions(+)
 create mode 100644 drivers/watchdog/vm-wdt.c