Revision 63cae12bce9861cec309798d34701cf3da20bc71 authored by Peter Zijlstra on 09 December 2016, 13:59:00 UTC, committed by Ingo Molnar on 14 January 2017, 09:56:10 UTC
There is a problem with installing an event in a task that is 'stuck'
on an offline CPU.

Blocked tasks are not disassociated from offlined CPUs; after all, a
blocked task doesn't run and doesn't require a CPU etc.. Only on
wakeup do we amend the situation and place the task on an available
CPU.

If we hit such a task with perf_install_in_context() we'll loop until
either that task wakes up or the CPU comes back online. If the task's
waking depends on the event being installed, we're stuck.

While looking into this issue, I also spotted another problem: if we
hit a task with perf_install_in_context() that is in the middle of
being migrated -- that is, we observe the old CPU before sending the
IPI, but run the IPI (on the old CPU) while the task is already
running on the new CPU -- things also go sideways.

Rework things to rely on task_curr() -- outside of rq->lock -- which
is rather tricky. Imagine the following scenario where we're trying to
install the first event into our task 't':

CPU0            CPU1            CPU2

                (current == t)

t->perf_event_ctxp[] = ctx;
smp_mb();
cpu = task_cpu(t);

                switch(t, n);
                                migrate(t, 2);
                                switch(p, t);

                                ctx = t->perf_event_ctxp[]; // must not be NULL

smp_function_call(cpu, ..);

                generic_exec_single()
                  func();
                    spin_lock(ctx->lock);
                    if (task_curr(t)) // false

                    add_event_to_ctx();
                    spin_unlock(ctx->lock);

                                perf_event_context_sched_in();
                                  spin_lock(ctx->lock);
                                  // sees event

So it's CPU0's store of t->perf_event_ctxp[] that must not go 'missing'.
Because if CPU2's load of that variable were to observe NULL, it would
not try to schedule the ctx and we'd have a task running without its
counter, which would be 'bad'.

As long as we observe !NULL, we'll acquire ctx->lock. If we acquire it
first and don't see the event yet, then CPU0 must observe task_curr()
and retry. If the install happens first, then we must see the event on
sched-in and all is well.
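
To illustrate (a sketch only, not the literal patch; task_function_call()
and add_event_to_ctx() are as above, the surrounding retry loop is
paraphrased), the installer side of this protocol then becomes:

  again:
	/* IPI the CPU we last saw the task on */
	if (!task_function_call(task, __perf_install_in_context, event))
		return; /* the IPI found the task running and installed */

	raw_spin_lock_irq(&ctx->lock);
	if (task_curr(task)) {
		/*
		 * The task migrated and is running elsewhere; the IPI
		 * hit a stale CPU. Re-read task_cpu() and try again.
		 */
		raw_spin_unlock_irq(&ctx->lock);
		goto again;
	}
	/*
	 * Not running: ctx->lock keeps it from becoming so, hence we
	 * can install directly; sched-in must observe the event since
	 * it takes ctx->lock after us.
	 */
	add_event_to_ctx(event, ctx);
	raw_spin_unlock_irq(&ctx->lock);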

I think we can translate the first part (until the 'must not be NULL')
of the scenario to a litmus test like:

  C C-peterz

  {
  }

  P0(int *x, int *y)
  {
          int r1;

          WRITE_ONCE(*x, 1);
          smp_mb();
          r1 = READ_ONCE(*y);
  }

  P1(int *y, int *z)
  {
          WRITE_ONCE(*y, 1);
          smp_store_release(z, 1);
  }

  P2(int *x, int *z)
  {
          int r1;
          int r2;

          r1 = smp_load_acquire(z);
          smp_mb();
          r2 = READ_ONCE(*x);
  }

  exists
  (0:r1=0 /\ 2:r1=1 /\ 2:r2=0)

Where:
  x is perf_event_ctxp[],
  y is our task's CPU, and
  z is our task being placed on the rq of CPU2.

The P0 smp_mb() is the one added by this patch, ordering the store to
perf_event_ctxp[] from find_get_context() and the load of task_cpu()
in task_function_call().

The smp_store_release()/smp_load_acquire() pair models the RCpc locking
of rq->lock, and the smp_mb() of P2 is the context switch switching from
whatever CPU2 was running to our task 't'.

This litmus test evaluates into:

  Test C-peterz Allowed
  States 7
  0:r1=0; 2:r1=0; 2:r2=0;
  0:r1=0; 2:r1=0; 2:r2=1;
  0:r1=0; 2:r1=1; 2:r2=1;
  0:r1=1; 2:r1=0; 2:r2=0;
  0:r1=1; 2:r1=0; 2:r2=1;
  0:r1=1; 2:r1=1; 2:r2=0;
  0:r1=1; 2:r1=1; 2:r2=1;
  No
  Witnesses
  Positive: 0 Negative: 7
  Condition exists (0:r1=0 /\ 2:r1=1 /\ 2:r2=0)
  Observation C-peterz Never 0 7
  Hash=e427f41d9146b2a5445101d3e2fcaa34

And the strong and weak model agree.
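
(Assuming the herd7 tool and, nowadays, the kernel memory model under
tools/memory-model -- which postdates this commit -- such a test can be
re-checked with: herd7 -conf linux-kernel.cfg C-peterz.litmus)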

Reported-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Cc: Will Deacon <will.deacon@arm.com>
Cc: jeremy.linton@arm.com
Link: http://lkml.kernel.org/r/20161209135900.GU3174@twins.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
memneq.c
/*
 * Constant-time equality testing of memory regions.
 *
 * Authors:
 *
 *   James Yonan <james@openvpn.net>
 *   Daniel Borkmann <dborkman@redhat.com>
 *
 * This file is provided under a dual BSD/GPLv2 license.  When using or
 * redistributing this file, you may do so under either license.
 *
 * GPL LICENSE SUMMARY
 *
 * Copyright(c) 2013 OpenVPN Technologies, Inc. All rights reserved.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of version 2 of the GNU General Public License as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful, but
 * WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
 * The full GNU General Public License is included in this distribution
 * in the file called LICENSE.GPL.
 *
 * BSD LICENSE
 *
 * Copyright(c) 2013 OpenVPN Technologies, Inc. All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 *
 *   * Redistributions of source code must retain the above copyright
 *     notice, this list of conditions and the following disclaimer.
 *   * Redistributions in binary form must reproduce the above copyright
 *     notice, this list of conditions and the following disclaimer in
 *     the documentation and/or other materials provided with the
 *     distribution.
 *   * Neither the name of OpenVPN Technologies nor the names of its
 *     contributors may be used to endorse or promote products derived
 *     from this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */

#include <crypto/algapi.h>

#ifndef __HAVE_ARCH_CRYPTO_MEMNEQ
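
/*
 * Note: the OPTIMIZER_HIDE_VAR(neq) after every XOR-accumulate step below
 * keeps the compiler from tracking the value of 'neq' and short-circuiting
 * the remaining comparisons once it is known to be non-zero, which would
 * make the runtime depend on where the buffers first differ.
 */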

/* Generic path for arbitrary size */
static inline unsigned long
__crypto_memneq_generic(const void *a, const void *b, size_t size)
{
	unsigned long neq = 0;

#if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)
	while (size >= sizeof(unsigned long)) {
		neq |= *(unsigned long *)a ^ *(unsigned long *)b;
		OPTIMIZER_HIDE_VAR(neq);
		a += sizeof(unsigned long);
		b += sizeof(unsigned long);
		size -= sizeof(unsigned long);
	}
#endif /* CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS */
	while (size > 0) {
		neq |= *(unsigned char *)a ^ *(unsigned char *)b;
		OPTIMIZER_HIDE_VAR(neq);
		a += 1;
		b += 1;
		size -= 1;
	}
	return neq;
}

/* Loop-free fast-path for frequently used 16-byte size */
static inline unsigned long __crypto_memneq_16(const void *a, const void *b)
{
	unsigned long neq = 0;

#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
	if (sizeof(unsigned long) == 8) {
		neq |= *(unsigned long *)(a)   ^ *(unsigned long *)(b);
		OPTIMIZER_HIDE_VAR(neq);
		neq |= *(unsigned long *)(a+8) ^ *(unsigned long *)(b+8);
		OPTIMIZER_HIDE_VAR(neq);
	} else if (sizeof(unsigned int) == 4) {
		neq |= *(unsigned int *)(a)    ^ *(unsigned int *)(b);
		OPTIMIZER_HIDE_VAR(neq);
		neq |= *(unsigned int *)(a+4)  ^ *(unsigned int *)(b+4);
		OPTIMIZER_HIDE_VAR(neq);
		neq |= *(unsigned int *)(a+8)  ^ *(unsigned int *)(b+8);
		OPTIMIZER_HIDE_VAR(neq);
		neq |= *(unsigned int *)(a+12) ^ *(unsigned int *)(b+12);
		OPTIMIZER_HIDE_VAR(neq);
	} else
#endif /* CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS */
	{
		neq |= *(unsigned char *)(a)    ^ *(unsigned char *)(b);
		OPTIMIZER_HIDE_VAR(neq);
		neq |= *(unsigned char *)(a+1)  ^ *(unsigned char *)(b+1);
		OPTIMIZER_HIDE_VAR(neq);
		neq |= *(unsigned char *)(a+2)  ^ *(unsigned char *)(b+2);
		OPTIMIZER_HIDE_VAR(neq);
		neq |= *(unsigned char *)(a+3)  ^ *(unsigned char *)(b+3);
		OPTIMIZER_HIDE_VAR(neq);
		neq |= *(unsigned char *)(a+4)  ^ *(unsigned char *)(b+4);
		OPTIMIZER_HIDE_VAR(neq);
		neq |= *(unsigned char *)(a+5)  ^ *(unsigned char *)(b+5);
		OPTIMIZER_HIDE_VAR(neq);
		neq |= *(unsigned char *)(a+6)  ^ *(unsigned char *)(b+6);
		OPTIMIZER_HIDE_VAR(neq);
		neq |= *(unsigned char *)(a+7)  ^ *(unsigned char *)(b+7);
		OPTIMIZER_HIDE_VAR(neq);
		neq |= *(unsigned char *)(a+8)  ^ *(unsigned char *)(b+8);
		OPTIMIZER_HIDE_VAR(neq);
		neq |= *(unsigned char *)(a+9)  ^ *(unsigned char *)(b+9);
		OPTIMIZER_HIDE_VAR(neq);
		neq |= *(unsigned char *)(a+10) ^ *(unsigned char *)(b+10);
		OPTIMIZER_HIDE_VAR(neq);
		neq |= *(unsigned char *)(a+11) ^ *(unsigned char *)(b+11);
		OPTIMIZER_HIDE_VAR(neq);
		neq |= *(unsigned char *)(a+12) ^ *(unsigned char *)(b+12);
		OPTIMIZER_HIDE_VAR(neq);
		neq |= *(unsigned char *)(a+13) ^ *(unsigned char *)(b+13);
		OPTIMIZER_HIDE_VAR(neq);
		neq |= *(unsigned char *)(a+14) ^ *(unsigned char *)(b+14);
		OPTIMIZER_HIDE_VAR(neq);
		neq |= *(unsigned char *)(a+15) ^ *(unsigned char *)(b+15);
		OPTIMIZER_HIDE_VAR(neq);
	}

	return neq;
}

/* Compare two areas of memory without leaking timing information,
 * and with special optimizations for common sizes.  Users should
 * not call this function directly, but should instead use
 * crypto_memneq defined in crypto/algapi.h.
 */
noinline unsigned long __crypto_memneq(const void *a, const void *b,
				       size_t size)
{
	switch (size) {
	case 16:
		return __crypto_memneq_16(a, b);
	default:
		return __crypto_memneq_generic(a, b, size);
	}
}
EXPORT_SYMBOL(__crypto_memneq);

#endif /* __HAVE_ARCH_CRYPTO_MEMNEQ */
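
For context, callers use the crypto_memneq() wrapper from crypto/algapi.h
rather than __crypto_memneq() directly. A minimal, hypothetical caller
(tag_matches() is illustrative, not part of the kernel) verifying an
authentication tag could look like:

  #include <crypto/algapi.h>

  /* Hypothetical example: constant-time authentication-tag check. */
  static bool tag_matches(const u8 *expected, const u8 *received, size_t len)
  {
	/*
	 * Unlike memcmp(), crypto_memneq() takes the same time whether the
	 * buffers differ in the first byte or the last, so response timing
	 * leaks nothing about how much of a guessed MAC was correct.
	 */
	return !crypto_memneq(expected, received, len);
  }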