Mirror of https://github.com/rd-stuffs/msm-4.14.git
Synced 2025-02-20 11:45:48 +08:00
Because migrations are driven by the CPU a task is running on, there is no point in tracking NUMA faults until one task runs on a new node. This patch tracks the first node used by an address space; until it changes, PTE scanning is disabled and no NUMA hinting faults are trapped. This should help workloads that are short-lived, do not care about NUMA placement, or have bound themselves to a single node.

This takes advantage of the logic in "mm: sched: numa: Implement slow start for working set sampling" to delay when the checks are made. It benefits processes that set their CPU and node bindings early in their lifetime, and it also potentially allows any initial load balancing to take place first.

Signed-off-by: Mel Gorman <mgorman@suse.de>
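Below is a minimal sketch of the check the commit message describes, assuming a first-node field on the address-space descriptor. The struct layout, the helper name mm_numa_scan_allowed, and the FIRST_NID_UNSET sentinel are illustrative assumptions rather than the patch's actual identifiers, and locking is omitted.

#include <stdbool.h>

/* Hypothetical, simplified stand-in for the kernel's struct mm_struct;
 * the real patch adds a field along these lines to the address space. */
struct mm_struct {
	int first_nid;	/* first NUMA node this address space ran on */
};

#define FIRST_NID_UNSET	(-1)

/*
 * Record the first node an mm runs on; return true only once a task in
 * this mm is observed on a different node. Until then the caller keeps
 * PTE scanning disabled, so short-lived or single-node workloads never
 * take NUMA hinting faults.
 */
static bool mm_numa_scan_allowed(struct mm_struct *mm, int cpu_nid)
{
	if (mm->first_nid == FIRST_NID_UNSET)
		mm->first_nid = cpu_nid;	/* remember the initial node */

	return mm->first_nid != cpu_nid;	/* node changed: start scanning */
}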
75 lines · 1.9 KiB · C
/*
 * Only give sleepers 50% of their service deficit. This allows
 * them to run sooner, but does not allow tons of sleepers to
 * rip the spread apart.
 */
SCHED_FEAT(GENTLE_FAIR_SLEEPERS, true)

/*
 * Place new tasks ahead so that they do not starve already running
 * tasks.
 */
SCHED_FEAT(START_DEBIT, true)

/*
 * Prefer to schedule the task we woke last (assuming it failed
 * wakeup-preemption), since it's likely going to consume data we
 * touched; this increases cache locality.
 */
SCHED_FEAT(NEXT_BUDDY, false)

/*
 * Prefer to schedule the task that ran last (when we did
 * wake-preempt), as it will likely touch the same data; this
 * increases cache locality.
 */
SCHED_FEAT(LAST_BUDDY, true)

/*
 * Consider buddies to be cache hot; this decreases the likelihood of
 * a cache buddy being migrated away and increases cache locality.
 */
SCHED_FEAT(CACHE_HOT_BUDDY, true)

/*
 * Use arch-dependent CPU power functions.
 */
SCHED_FEAT(ARCH_POWER, true)

SCHED_FEAT(HRTICK, false)
SCHED_FEAT(DOUBLE_TICK, false)
SCHED_FEAT(LB_BIAS, true)

/*
 * Spin-wait on mutex acquisition when the mutex owner is running on
 * another CPU -- assumes that when the owner is running, it will soon
 * release the lock. Decreases scheduling overhead.
 */
SCHED_FEAT(OWNER_SPIN, true)

/*
 * Decrement CPU power based on time not spent running tasks.
 */
SCHED_FEAT(NONTASK_POWER, true)

/*
 * Queue remote wakeups on the target CPU and process them
 * using the scheduler IPI. Reduces rq->lock contention/bounces.
 */
SCHED_FEAT(TTWU_QUEUE, true)

SCHED_FEAT(FORCE_SD_OVERLAP, false)
SCHED_FEAT(RT_RUNTIME_SHARE, true)
SCHED_FEAT(LB_MIN, false)

/*
 * Apply the automatic NUMA scheduling policy. Enabled automatically
 * at runtime if running on a NUMA machine. Can be controlled via
 * numa_balancing=. Allow PTE scanning to be forced on UMA machines
 * for debugging the core machinery.
 */
#ifdef CONFIG_NUMA_BALANCING
SCHED_FEAT(NUMA, false)
SCHED_FEAT(NUMA_FORCE, false)
#endif
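For context, this feature table (features.h in the kernel scheduler source) is consumed via the X-macro pattern: the file is included more than once with different definitions of SCHED_FEAT, each extracting a different column. The sketch below is a simplified illustration of that pattern, modeled loosely on kernel/sched/sched.h; the exact expansion differs across kernel versions and with CONFIG_SCHED_DEBUG.

#include <stdbool.h>

/* First inclusion: turn each feature name into an enum index. */
#define SCHED_FEAT(name, enabled)	__SCHED_FEAT_##name,
enum {
#include "features.h"
	__SCHED_FEAT_NR,
};
#undef SCHED_FEAT

/* Second inclusion: fold the default values into a bitmask. */
#define SCHED_FEAT(name, enabled)	\
	((enabled) ? (1UL << __SCHED_FEAT_##name) : 0UL) |
static const unsigned long sysctl_sched_features =
#include "features.h"
	0;
#undef SCHED_FEAT

/* A test such as sched_feat(TTWU_QUEUE) then reduces to a bit test. */
#define sched_feat(x)	(sysctl_sched_features & (1UL << __SCHED_FEAT_##x))

This is why a line like SCHED_FEAT(NUMA, false) in the table is enough to both register the feature and set its default: the name and the default value are picked up by whichever SCHED_FEAT definition is in effect at each inclusion.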