https://github.com/torvalds/linux
Revision 012a45e3f4af68e86d85cce060c6c2fed56498b2 ("hrtimer: Prevent remote enqueue of leftmost timers") authored by Leon Ma on 30 April 2014, 08:43:10 UTC, committed by Thomas Gleixner on 30 April 2014, 10:34:51 UTC
If a cpu is idle and starts an hrtimer which is not pinned on that
same cpu, the nohz code might target the timer to a different cpu.

In the case that we switch the cpu base of the timer we already have a
sanity check in place, which determines whether the timer is earlier
than the current leftmost timer on the target cpu. In that case we
enqueue the timer on the current cpu because we cannot reprogram the
clock event device on the target.

If the timer's base is already the target CPU, we do not have this
sanity check in place, so we enqueue the timer as the leftmost timer in
the target cpu's rb tree, but we cannot reprogram the clock event
device on the target cpu. So the timer expires late and subsequently
prevents the reprogramming of the target cpu's clock event device until
the previously programmed event fires or a timer with an earlier
expiry time gets enqueued on the target cpu itself.

Add the same target check as we have for the switch base case and
start the timer on the current cpu if it would become the leftmost
timer on the target.
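
A conceptual sketch of the added check (illustrative only, not the actual
diff; hrtimer_check_target() is the helper already used for the switch base
case, and the surrounding variable names here are assumptions):

	/* If the timer would become the leftmost timer on the remote
	 * target, whose clock event device we cannot reprogram, fall
	 * back to enqueueing it on the current cpu.
	 */
	if (cpu != this_cpu && hrtimer_check_target(timer, new_base))
		cpu = this_cpu;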

[ tglx: Rewrote subject and changelog ]

Signed-off-by: Leon Ma <xindong.ma@intel.com>
Link: http://lkml.kernel.org/r/1398847391-5994-1-git-send-email-xindong.ma@intel.com
Cc: stable@vger.kernel.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
local_ops.txt
	     Semantics and Behavior of Local Atomic Operations

			    Mathieu Desnoyers


	This document explains the purpose of the local atomic operations, how
to implement them for any given architecture and shows how they can be used
properly. It also stresses the precautions that must be taken when reading
those local variables across CPUs when the order of memory writes matters.



* Purpose of local atomic operations

Local atomic operations are meant to provide fast and highly reentrant per CPU
counters. They minimize the performance cost of standard atomic operations by
removing the LOCK prefix and memory barriers normally required to synchronize
across CPUs.

Having fast per CPU atomic counters is useful in many cases: they do not
require disabling interrupts to protect from interrupt handlers and they permit
coherent counters in NMI handlers. They are especially useful for tracing
purposes and for various performance monitoring counters.

Local atomic operations only guarantee variable modification atomicity wrt the
CPU which owns the data. Therefore, care must be taken to make sure that only
one CPU writes to the local_t data. This is done by using per cpu data and
making sure that we modify it from within a preemption safe context. It is
however permitted to read local_t data from any CPU: it will then appear to be
written out of order wrt other memory writes by the owner CPU.


* Implementation for a given architecture

It can be done by slightly modifying the standard atomic operations: only
their UP variant must be kept. It typically means removing the LOCK prefix (on
i386 and x86_64) and any SMP synchronization barrier. If the architecture does
not have a different behavior between SMP and UP, including asm-generic/local.h
in your architecture's local.h is sufficient.

The local_t type is defined as an opaque signed long by embedding an
atomic_long_t inside a structure. This is made so a cast from this type to a
long fails. The definition looks like:

typedef struct { atomic_long_t a; } local_t;
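
As an illustration only (a sketch, not a verbatim copy of the kernel's
asm-generic/local.h or arch/x86 headers), the generic fallback simply maps
the local operations onto atomic_long_t operations, while an x86-style
implementation can use the same instruction as the atomic variant minus the
"lock" prefix, since only the owner CPU ever writes the variable:

/* Generic fallback (sketch): reuse the atomic_long_t operations. */
#define local_read(l)	atomic_long_read(&(l)->a)
#define local_inc(l)	atomic_long_inc(&(l)->a)

/* Alternatively, an x86_64-style sketch: the same instruction as
 * atomic_inc(), just without the "lock" prefix.
 */
static inline void local_inc(local_t *l)
{
	asm volatile("incq %0" : "+m" (l->a.counter));
}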


* Rules to follow when using local atomic operations

- Variables touched by local ops must be per cpu variables.
- _Only_ the CPU owner of these variables must write to them.
- This CPU can use local ops from any context (process, irq, softirq, nmi, ...)
  to update its local_t variables.
- Preemption (or interrupts) must be disabled when using local ops in
  process context to make sure the process won't be migrated to a
  different CPU between getting the per-cpu variable and doing the
  actual local op (see the sketch after this list).
- When using local ops in interrupt context, no special care needs to be
  taken on a mainline kernel, since they will run on the local CPU with
  preemption already disabled. I suggest, however, explicitly disabling
  preemption anyway to make sure it will still work correctly on
  -rt kernels.
- Reading the local cpu variable will provide the current copy of the
  variable.
- Reads of these variables can be done from any CPU, because updates to
  "long", aligned, variables are always atomic. Since no memory
  synchronization is done by the writer CPU, an outdated copy of the
  variable can be read when reading some _other_ cpu's variables.
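
As a minimal sketch of the process context rule above (using the counters
variable defined later in this document; the explicit preempt_disable()/
preempt_enable() pair is an alternative to the get_cpu_var()/put_cpu_var()
form shown in the "Counting" section):

	preempt_disable();
	local_inc(&__get_cpu_var(counters));
	preempt_enable();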


* How to use local atomic operations

#include <linux/percpu.h>
#include <asm/local.h>

static DEFINE_PER_CPU(local_t, counters) = LOCAL_INIT(0);


* Counting

Counting is done on all the bits of a signed long.

In preemptible context, use get_cpu_var() and put_cpu_var() around local atomic
operations: this makes sure that preemption is disabled around write access to
the per cpu variable. For instance:

	local_inc(&get_cpu_var(counters));
	put_cpu_var(counters);

If you are already in a preemption-safe context, you can directly use
__get_cpu_var() instead.

	local_inc(&__get_cpu_var(counters));



* Reading the counters

Those local counters can be read from foreign CPUs to sum the count. Note that
the data seen by local_read across CPUs must be considered to be out of order
relative to other memory writes happening on the CPU that owns the data.

	long sum = 0;
	for_each_online_cpu(cpu)
		sum += local_read(&per_cpu(counters, cpu));

If you want to use a remote local_read to synchronize access to a resource
between CPUs, explicit smp_wmb() and smp_rmb() memory barriers must be used
respectively on the writer and the reader CPUs. This would be the case if you
use the local_t variable as a counter of bytes written in a buffer: there
should be an smp_wmb() between the buffer write and the counter increment and
an smp_rmb() between the counter read and the buffer read.
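
As a sketch of that byte counter scenario (bytes_written stands for a
DEFINE_PER_CPU(local_t, ...) counter; buf, snapshot, c, count and cpu are
made-up names used only for illustration):

	/* Owner CPU, preemption disabled: store the payload first, then
	 * publish the new byte count. smp_wmb() orders the buffer write
	 * before the counter increment.
	 */
	buf[local_read(&__get_cpu_var(bytes_written))] = c;
	smp_wmb();
	local_inc(&__get_cpu_var(bytes_written));

	/* Any other CPU: read the count first, then only the bytes below
	 * it. smp_rmb() orders the counter read before the buffer reads.
	 */
	count = local_read(&per_cpu(bytes_written, cpu));
	smp_rmb();
	memcpy(snapshot, buf, count);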


Here is a sample module which implements a basic per cpu counter using local.h.

--- BEGIN ---
/* test-local.c
 *
 * Sample module for local.h usage.
 */


#include <asm/local.h>
#include <linux/module.h>
#include <linux/percpu.h>
#include <linux/smp.h>
#include <linux/timer.h>

static DEFINE_PER_CPU(local_t, counters) = LOCAL_INIT(0);

static struct timer_list test_timer;

/* IPI called on each CPU. */
static void test_each(void *info)
{
	/* Increment the counter from a non preemptible context */
	printk("Increment on cpu %d\n", smp_processor_id());
	local_inc(&__get_cpu_var(counters));

	/* This is what incrementing the variable would look like within a
	 * preemptible context (it disables preemption) :
	 *
	 * local_inc(&get_cpu_var(counters));
	 * put_cpu_var(counters);
	 */
}

static void do_test_timer(unsigned long data)
{
	int cpu;

	/* Increment the counters */
	on_each_cpu(test_each, NULL, 1);
	/* Read all the counters */
	printk("Counters read from CPU %d\n", smp_processor_id());
	for_each_online_cpu(cpu) {
		printk("Read : CPU %d, count %ld\n", cpu,
			local_read(&per_cpu(counters, cpu)));
	}
	/* Re-arm the timer 1000 jiffies from now */
	mod_timer(&test_timer, jiffies + 1000);
}

static int __init test_init(void)
{
	/* initialize the timer that will increment the counter */
	init_timer(&test_timer);
	test_timer.function = do_test_timer;
	test_timer.expires = jiffies + 1;
	add_timer(&test_timer);

	return 0;
}

static void __exit test_exit(void)
{
	del_timer_sync(&test_timer);
}

module_init(test_init);
module_exit(test_exit);

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Mathieu Desnoyers");
MODULE_DESCRIPTION("Local Atomic Ops");
--- END ---