msm-4.14/lib/smp_processor_id.c
Sebastian Andrzej Siewior 14016205f1
kernel: sched: Provide a pointer to the valid CPU mask
In commit 4b53a3412d66 ("sched/core: Remove the tsk_nr_cpus_allowed()
wrapper") the tsk_nr_cpus_allowed() wrapper was removed. There was not
much difference in !RT, but in RT we used it to implement
migrate_disable(). Within a migrate_disable() section the CPU mask is
restricted to a single CPU while the "normal" CPU mask remains untouched.

As an alternative implementation Ingo suggested to use
	struct task_struct {
		const cpumask_t		*cpus_ptr;
		cpumask_t		cpus_mask;
	};
with
	t->cpus_ptr = &t->cpus_mask;

In -RT we then can switch the cpus_ptr to
	t->cpus_ptr = cpumask_of(task_cpu(p));

in a migration disabled region. The rules are simple:
- Code that 'uses' ->cpus_allowed would use the pointer.
- Code that 'modifies' ->cpus_allowed would use the direct mask.
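
To illustrate the two rules (a sketch only; the helpers below are made
up for this example and are not part of the patch):

	/* 'uses' ->cpus_allowed: read through the pointer */
	static bool allowed_on(struct task_struct *p, int cpu)
	{
		return cpumask_test_cpu(cpu, p->cpus_ptr);
	}

	/* 'modifies' ->cpus_allowed: write the direct mask */
	static void set_allowed(struct task_struct *p, const struct cpumask *new)
	{
		cpumask_copy(&p->cpus_mask, new);
	}

Outside a migration disabled region cpus_ptr points at cpus_mask, so a
reader through the pointer observes the update immediately.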

While converting the existing users I tried to stick with the rules
above, however… well, mostly CPUFREQ tries to temporarily switch the
CPU mask to do something on a certain CPU and then switches the mask
back to its original value. So in theory `cpus_ptr' could or should be
used. However, if this were invoked in a migration disabled region
(which is not the case, because that would require something like
preempt_disable(), and set_cpus_allowed_ptr() might sleep so it can't
be), then the "restore" part would restore the wrong mask. So it only
looks strange and I go for the pointer…
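
For reference, the save/switch/restore pattern in question looks
roughly like this (a sketch only; the target-CPU choice and error
handling are omitted):

	cpumask_var_t saved_mask;

	if (!alloc_cpumask_var(&saved_mask, GFP_KERNEL))
		return -ENOMEM;

	cpumask_copy(saved_mask, current->cpus_ptr);	/* save via the pointer */
	set_cpus_allowed_ptr(current, cpumask_of(cpu));	/* hop to 'cpu' */
	/* ... poke per-CPU hardware here ... */
	set_cpus_allowed_ptr(current, saved_mask);	/* restore */
	free_cpumask_var(saved_mask);

Saving through cpus_ptr would only go wrong if this ran migration
disabled (the pointer would then name the temporary single-CPU mask);
since set_cpus_allowed_ptr() may sleep, that cannot happen here.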

Some drivers copy the cpumask without cpumask_copy() and others use
cpumask_copy but without alloc_cpumask_var(). I did not fix those as
part of this, could do this as a follow up…

So is this the way we want it?
Is the usage of `cpus_ptr' vs `cpus_mask' for the set + restore part
(see the cpufreq users) what we want? At some point it looks like they
should use a different interface for this. I am not sure why switching
to a certain CPU is important, but maybe it could be done via a
workqueue from the CPUFREQ core (so we would have a comment describing
why we are doing this, and a get_online_cpus() to ensure that the CPU
does not go offline too early).
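
Such an interface could look roughly like the following sketch
(hypothetical; run_on_cpu() and its callback are illustrative and error
handling is omitted):

	static void cpufreq_on_cpu_fn(struct work_struct *work)
	{
		/* executes on the CPU it was queued on */
	}

	static void run_on_cpu(int cpu)
	{
		struct work_struct work;

		INIT_WORK_ONSTACK(&work, cpufreq_on_cpu_fn);
		get_online_cpus();	/* keep 'cpu' online until we are done */
		schedule_work_on(cpu, &work);
		flush_work(&work);
		put_online_cpus();
		destroy_work_on_stack(&work);
	}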

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
[Sultan Alsawaf: adapt to floral]
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Signed-off-by: Zlatan Radovanovic <zlatan.radovanovic@fet.ba>
Signed-off-by: azrim <mirzaspc@gmail.com>
2022-06-10 16:45:21 +07:00


// SPDX-License-Identifier: GPL-2.0
/*
 * lib/smp_processor_id.c
 *
 * DEBUG_PREEMPT variant of smp_processor_id().
 */
#include <linux/export.h>
#include <linux/kallsyms.h>
#include <linux/sched.h>

notrace static unsigned int check_preemption_disabled(const char *what1,
						      const char *what2)
{
	int this_cpu = raw_smp_processor_id();

	if (likely(preempt_count()))
		goto out;

	if (irqs_disabled())
		goto out;

	/*
	 * Kernel threads bound to a single CPU can safely use
	 * smp_processor_id():
	 */
	if (cpumask_equal(current->cpus_ptr, cpumask_of(this_cpu)))
		goto out;

	/*
	 * It is valid to assume CPU-locality during early bootup:
	 */
	if (system_state < SYSTEM_SCHEDULING)
		goto out;

	/*
	 * Avoid recursion:
	 */
	preempt_disable_notrace();

	if (!printk_ratelimit())
		goto out_enable;

	printk(KERN_ERR "BUG: using %s%s() in preemptible [%08x] code: %s/%d\n",
		what1, what2, preempt_count() - 1, current->comm, current->pid);
	print_symbol("caller is %s\n", (long)__builtin_return_address(0));
	dump_stack();

out_enable:
	preempt_enable_no_resched_notrace();
out:
	return this_cpu;
}

notrace unsigned int debug_smp_processor_id(void)
{
	return check_preemption_disabled("smp_processor_id", "");
}
EXPORT_SYMBOL(debug_smp_processor_id);

notrace void __this_cpu_preempt_check(const char *op)
{
	check_preemption_disabled("__this_cpu_", op);
}
EXPORT_SYMBOL(__this_cpu_preempt_check);
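
For context, a sketch of the kind of caller this check is meant to
catch (hypothetical driver fragment, not part of this file):

	/*
	 * With CONFIG_DEBUG_PREEMPT, smp_processor_id() resolves to
	 * debug_smp_processor_id() and this triggers the splat above:
	 * the task is preemptible, so it may migrate at any point and
	 * the returned CPU id can be stale by the time it is used.
	 */
	static void racy(void)
	{
		pr_info("on CPU %d\n", smp_processor_id());
	}

	/* get_cpu() disables preemption, so the id stays valid until put_cpu(). */
	static void safe(void)
	{
		int cpu = get_cpu();

		pr_info("on CPU %d\n", cpu);
		put_cpu();
	}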