811291 Commits

Author SHA1 Message Date
Richard Raya
34331a1f71 build.sh: Sync Neutron clang version 18.0.0
Change-Id: Iaedc953706f6033ec4880f3773a93466bd78d2cc
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-09 00:01:46 -03:00
Richard Raya
2a888d5292 f2fs: Update congestion timeout for 250Hz
Change-Id: I2b2208705f5eebb7a2fc4af8e1941cabb241b09f
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-09 00:04:07 -03:00
Richard Raya
1695ed419b defconfig: Switch to 250Hz scheduler tick rate
Change-Id: I5a272b4c308d3258b9491bd2dcf8ca8a8ac33582
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-09 00:04:07 -03:00
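
For reference, on arm64 this switch is made through the Kconfig tick-rate choice; a saved defconfig would carry a fragment like the one below (option names per mainline Kconfig; the actual diff is not shown in this log):

    CONFIG_HZ_250=y
    CONFIG_HZ=250
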
Richard Raya
4043298f9a defconfig: Disable MGLRU
Change-Id: Ie1a5efd870ea5bb6142ff30e9bd3d5d8bc320e12
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-09 00:04:06 -03:00
Richard Raya
5e9ae9564b defconfig: Disable DAMON
Change-Id: I199c12d14fca974a92779d677dafe27981ca37af
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-09 00:04:06 -03:00
Richard Raya
dd6b4657ab defconfig: Bump CPU input boost
Change-Id: I72643abe944d476e195d5fce6bf44c39448a5fa8
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-09 00:01:45 -03:00
Richard Raya
00788611eb defconfig: Bump devfreq CPUBW boost
Change-Id: Id6e6abc840aca3914f5a5fba72cc1f22b9a0a5da
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-09 00:01:45 -03:00
Richard Raya
8f99654e62 defconfig: Bump boost durations to 100ms
Change-Id: I15bc0cffa7427c5f3cbbf160cf313c26e071c223
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-09 00:01:45 -03:00
Richard Raya
1683bf779b defconfig: Enable thermal limits DCVS
Change-Id: I7229679f224a4699be820e214d855967056e4053
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-09 00:01:45 -03:00
Richard Raya
8266652dd0 Revert "ARM64/dts: sdmmagpie-thermal: Switch to user_space governor"
This reverts commit 3dc59ee906e2b005ff5a547ede72a9a7095046c0.

Change-Id: Ic95d637254af23ffaa4157d5f11716f465a3d09b
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-09 00:01:45 -03:00
Yaroslav Furman
5de5097f2c thermal: cpu_cooling: Fix a throttling bug
Without this change, throttling would get stuck in the enabled state forever.
This led to a permanent regression in the Geekbench 5 single-core score,
from 760 to 620 by the second or third run.

Change-Id: I3aadc768d84f1be61f10e5ba53069d115a267ff7
Signed-off-by: Yaroslav Furman <yaro330@gmail.com>
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-09 00:01:45 -03:00
Yaroslav Furman
cd8e6ad89c thermal: cpu_cooling: Simplify cpu_limits_set_level
Change-Id: I802cd25298f98e4731b667b2db87895e1091de21
Signed-off-by: Yaroslav Furman <yaro330@gmail.com>
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-09 00:01:45 -03:00
Adithya R
3bb78dd5c3 thermal: core: Fix snprintf usage
Change-Id: I85326d382f29616ae5b3644ebcc4f1e0f45f0ad2
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-09 00:01:45 -03:00
Demon000
7fd5b5c6f0 thermal: core: Use Qualcomm DRM notifier
Change-Id: Ib184c28c0441847399df567466a04b48b9b68c46
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-09 00:01:45 -03:00
Arian
f6bfefdee9 thermal: core: Import xiaomi board_sensor and thermal_message changes
Change-Id: Ia19f1ed5fd5d9c0e71dc6f8bef53f6e7757a678f
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-09 00:01:45 -03:00
Demon000
91804282a6 thermal: core: Custom thermal limits handling
Change-Id: If9e10d0f59e268240946a18a4a3015581559ae15
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-09 00:01:45 -03:00
Richard Raya
ece7c08d1d Revert "drivers: thermal_core: add sysfs nodes to silence mi_thermald"
This reverts commit ba961c130942d852b3d99062e2d0581b5253eb00.

Change-Id: I401bd89fedcfdf32078bbba58862b963b759f34e
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-09 00:01:45 -03:00
mawrick26
3ddc64a88c cpufreq: Don't WARN_ON on non-existent cpu
Change-Id: I740ca7c1faaaa04560c65637f087af64f932280b
Signed-off-by: mawrick26 <mawrick26@gmail.com>
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-09 00:01:45 -03:00
Panchajanya1999
620f019a86 cpufreq: Avoid userspace from changing maxfreq
Change-Id: I5fb761bc43bd6434bdd67782adf812ee1b8792f8
Signed-off-by: Panchajanya1999 <panchajanya@azure-dev.live>
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-09 00:01:45 -03:00
Richard Raya
169855a012 Revert "cpufreq: Kill userspace CPU boosting entirely"
This reverts commit 67de7a9c1de067d7d58f4759dfd3191f9a2be0c9.

Change-Id: Ic59920cddfe2da1c4f5d1028eb0fdda88e44e172
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-09 00:01:45 -03:00
Richard Raya
61ef1aee03 Revert "cpu_input_boost: Rewrite update_online_cpu_policy function"
This reverts commit 0fef46c3e14cca4b7adc9f15eb5f7b0a41d74227.

Change-Id: I42d3b01a4525a81956517395060424c2ecd8bb48
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-09 00:01:45 -03:00
Sultan Alsawaf
1f4f6d2b94 sched/features: Disable TTWU_QUEUE
Queuing wake-ups is not measurably beneficial in any way. Disable it.

Change-Id: I1cb07ea6e14f030ea99e3f4db64ffc1196ff14c6
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-09 00:01:45 -03:00
Sultan Alsawaf
b0127a3d65 sched/features: Disable CACHE_HOT_BUDDY to leverage DynamIQ Shared Unit
In a similar vein to setting sysctl_sched_migration_cost to zero, disable
CACHE_HOT_BUDDY to better leverage the DynamIQ Shared Unit (DSU). With the
DSU, L2$ and L3$ locality isn't lost when a task is migrated to another
intra-DSU core.

Change-Id: I51c599ebb6701c6ff16cc4fe15a83c637ee470f2
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-09 00:01:45 -03:00
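
Both sched_features flips above amount to one-line default changes in kernel/sched/features.h; a minimal sketch of the resulting definitions:

    /* kernel/sched/features.h (sketch) */
    SCHED_FEAT(TTWU_QUEUE, false)
    SCHED_FEAT(CACHE_HOT_BUDDY, false)
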
Richard Raya
1fd6091bb5 schedutil: Drop remaining tracing
Change-Id: I059f7c58abb99c511ebb494b752e2a3be1f9f05b
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-09 00:01:44 -03:00
Richard Raya
be3df9c273 schedutil: Hardcode rate limits to 500/20000 ms
Change-Id: Ia49b8fc8322c2bf1e8c83fddf861c6e83a81bfea
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-09 00:01:44 -03:00
Jonathan Avila
893aa6dab2 schedutil: Rework the fast switch limits logic
Due to certain misunderstandings about the workings of the cpufreq driver,
the original fast switch logic had several bugs in place. Instead of
creating multiple changes to address them, redesign the fast switch limits
code properly.

Change-Id: I8eb8835d63ecd4ae6c6b406a8e2b33409e856a80
Signed-off-by: Jonathan Avila <avilaj@codeaurora.org>
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-09 00:01:44 -03:00
Vincent Guittot
9b88f1222a schedutil: Fix frequency selection for non-invariant case
Linus reported a ~50% performance regression on single-threaded
workloads on his AMD Ryzen system, and bisected it to:

When frequency invariance is not enabled, get_capacity_ref_freq(policy)
is supposed to return the current frequency and the performance margin
applied by map_util_perf(), enabling the utilization to go above the
maximum compute capacity and to select a higher frequency than the current one.

The performance margin was applied earlier in the path to take utilization
clamping into account; as a result, utilization could no longer exceed the
maximum compute capacity, and the CPU remained 'stuck' at lower frequencies.

To fix this, we must use a frequency above the current frequency to
get a chance to select a higher OPP when the current one becomes fully used.
Apply the same margin and return a frequency 25% higher than the current
one in order to switch to the next OPP before we fully use the CPU
at the current one.

Link: https://lore.kernel.org/r/20240114183600.135316-1-vincent.guittot@linaro.org
Change-Id: If0d8e9b0c0292f8d5faa8b98aabc43e5045941e1
Tested-by: Wyes Karny <wkarny@gmail.com>
Reported-by: Wyes Karny <wkarny@gmail.com>
Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Bisected-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-09 00:01:44 -03:00
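
For reference, the mainline fix reduces to returning a 25% margin over the current frequency in the non-invariant case; a sketch modeled on the upstream helper (the backport in this 4.x tree may differ in detail):

    /* kernel/sched/cpufreq_schedutil.c (sketch) */
    static unsigned long get_capacity_ref_freq(struct cpufreq_policy *policy)
    {
        if (arch_scale_freq_invariant())
            return policy->cpuinfo.max_freq;

        /*
         * Apply the same 25% margin so a higher OPP can be selected
         * before the CPU is fully busy at the current one.
         */
        return policy->cur + (policy->cur >> 2);
    }
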
Wei Wang
e6137ccf87 schedutil: Restore cached freq when next_f is not changed
We keep the raw cached frequency to reduce calls into the cpufreq
driver, which can be costly on some archs/SoCs.

Currently, the raw cached freq is reset in sugov_update_single() when
it avoids a frequency reduction (which is not always desirable), but
it is better to restore its previous value in that case, because it
may not change in the next cycle, making a CPU frequency change
unnecessary then.

Change-Id: I42dfb52f296f8b128c9f10395e6670381e1dce73
Reviewed-on: https://chromium-review.googlesource.com/c/chromiumos/third_party/kernel/+/2582584
Reviewed-by: Douglas Anderson <dianders@chromium.org>
Reviewed-by: Sean Paul <seanpaul@chromium.org>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Wei Wang <wvw@google.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Matthias Kaehlcke <mka@chromium.org>
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-09 00:01:44 -03:00
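
Mechanically, this snapshots cached_raw_freq on entry and puts it back when the reduction-avoidance path keeps the old next_freq; a sketch of the relevant lines in sugov_update_single():

    unsigned int cached_freq = sg_policy->cached_raw_freq;
    /* ... */
    next_f = get_next_freq(sg_policy, util, max);
    if (busy && next_f < sg_policy->next_freq) {
        next_f = sg_policy->next_freq;

        /* Restore cached freq as next_freq has changed */
        sg_policy->cached_raw_freq = cached_freq;
    }
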
Wei Wang
3de1de2001 schedutil: Maintain raw cache when next_f is not changed
Currently, the raw cache is reset for correctness whenever next_f is
changed after get_next_freq(). However, that can cost extra cycles in
those cases. This patch maintains the cached value instead of dropping it.

Bug: 159936782
Bug: 158863204
Change-Id: I519ca02dd2e6038e3966e1f68fee641628827c82
Signed-off-by: Wei Wang <wvw@google.com>
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-09 00:01:44 -03:00
Richard Raya
67517315bf schedutil: Drop conservative pl
Change-Id: If2425c708f97563da1868e47bd449e79c0c25ef1
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-08 23:59:43 -03:00
Vikram Mulukutla
2d76a9b243 schedutil: Ignore work_in_progress
Blindly ignoring frequency updates because of work_in_progress can leave
the CPUs at the wrong frequency for a long time. It's better to update the
frequency immediately than wait for a future event that might take a long
time to come. The irq_work code already ignores double queuing of work. So,
that part of the code is still safe when the work_in_progress flag is
ignored.

[avilaj@codeaurora.org: Port to 4.19]

Change-Id: Id0b3711314dfbfa18b5f4bce30a239ee3cf962d6
Signed-off-by: Vikram Mulukutla <markivx@codeaurora.org>
Signed-off-by: Saravana Kannan <skannan@codeaurora.org>
Signed-off-by: Rohit Gupta <rohgup@codeaurora.org>
Signed-off-by: Jonathan Avila <avilaj@codeaurora.org>
Signed-off-by: Adam W. Willis <return.of.octobot@gmail.com>
Signed-off-by: Yaroslav Furman <yaro330@gmail.com>
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-08 23:59:08 -03:00
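
In practice this means dropping the work_in_progress early-out from sugov_should_update_freq(), leaving rate limiting as the only gate; a sketch of the resulting check:

    /* kernel/sched/cpufreq_schedutil.c (sketch) */
    static bool sugov_should_update_freq(struct sugov_policy *sg_policy, u64 time)
    {
        s64 delta_ns;

        /*
         * No work_in_progress bail-out anymore: irq_work_queue()
         * already rejects double queuing, so the flag only served
         * to leave CPUs at a stale frequency.
         */
        delta_ns = time - sg_policy->last_freq_update_time;
        return delta_ns >= sg_policy->freq_update_delay_ns;
    }
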
Rafael J. Wysocki
07f6569bc9 schedutil: Avoid missing updates for one-CPU policies
Commit 152db033d775 (schedutil: Allow cpufreq requests to be made
even when kthread kicked) made changes to prevent utilization updates
from being discarded during the processing of a previous request, but it
left a small window in which that still can happen in the one-CPU
policy case.  Namely, updates coming in after setting work_in_progress
in sugov_update_commit() and clearing it in sugov_work() will still
be dropped due to the work_in_progress check in sugov_update_single().

To close that window, rearrange the code so as to acquire the update
lock around the deferred update branch in sugov_update_single()
and drop the work_in_progress check from it.

Change-Id: I6b8d20acca7181822574eab9d29e1d41dd101ac2
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Reviewed-by: Juri Lelli <juri.lelli@redhat.com>
Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Adam W. Willis <return.of.octobot@gmail.com>
Signed-off-by: Yaroslav Furman <yaro330@gmail.com>
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-08 23:59:08 -03:00
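
Per the upstream change, the single-CPU path now takes update_lock around the deferred-update branch; a sketch of the rearranged tail of sugov_update_single():

    if (sg_policy->policy->fast_switch_enabled) {
        sugov_fast_switch(sg_policy, time, next_f);
    } else {
        raw_spin_lock(&sg_policy->update_lock);
        sugov_deferred_update(sg_policy, time, next_f);
        raw_spin_unlock(&sg_policy->update_lock);
    }
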
Joel Fernandes (Google)
9e0fb7e026 schedutil: Allow cpufreq requests to be made even when kthread kicked
Currently there is a chance of a schedutil cpufreq update request to be
dropped if there is a pending update request. This pending request can
be delayed if there is a scheduling delay of the irq_work and the wake
up of the schedutil governor kthread.

A very bad scenario is when a schedutil request has just been made,
such as one to reduce the CPU frequency: a newer request to increase
the frequency (even an urgent sched-deadline frequency increase) can
be dropped, even though the rate limits suggest it's OK to process a
request. This is because of the way the work_in_progress flag is used.

This patch improves the situation by allowing new requests to happen
even though the old one is still being processed. Note that in this
approach, if an irq_work was already issued, we just update next_freq
and don't bother to queue another request, so there's no extra work
being done to make this happen.

Change-Id: Id37249190837b01b80c54a0977074f44388fdc53
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Acked-by: Juri Lelli <juri.lelli@redhat.com>
Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Adam W. Willis <return.of.octobot@gmail.com>
Signed-off-by: Yaroslav Furman <yaro330@gmail.com>
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-08 23:59:08 -03:00
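
The key piece upstream is that sugov_work() re-reads next_freq under update_lock right before programming the hardware, so a later update simply amends the pending value instead of queuing new work; a sketch:

    static void sugov_work(struct kthread_work *work)
    {
        struct sugov_policy *sg_policy = container_of(work, struct sugov_policy, work);
        unsigned int freq;
        unsigned long flags;

        /* Snapshot the latest next_freq and clear the flag atomically */
        raw_spin_lock_irqsave(&sg_policy->update_lock, flags);
        freq = sg_policy->next_freq;
        sg_policy->work_in_progress = false;
        raw_spin_unlock_irqrestore(&sg_policy->update_lock, flags);

        mutex_lock(&sg_policy->work_lock);
        __cpufreq_driver_target(sg_policy->policy, freq, CPUFREQ_RELATION_L);
        mutex_unlock(&sg_policy->work_lock);
    }
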
Maria Yu
3f49c72829 schedutil: Queue sugov irq work on policy online CPU
The frequency never gets updated if the irq work is scheduled on an
offlined CPU, as the work stays pending forever. Queue the sugov irq
work on an online CPU of the policy when the current CPU is offline.

[clingutla@codeaurora.org: Resolved minor merge conflicts]

Change-Id: I33fc691917b5866488b6aeb11ed902a2753130b2
Signed-off-by: Maria Yu <aiquny@codeaurora.org>
Signed-off-by: Lingutla Chandrasekhar <clingutla@codeaurora.org>
Signed-off-by: Adam W. Willis <return.of.octobot@gmail.com>
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-08 23:59:08 -03:00
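
A sketch of the queueing decision (irq_work_queue_on() and cpumask_any_and() are standard kernel APIs; the exact placement in the QCOM tree may differ):

    if (likely(cpu_online(raw_smp_processor_id())))
        irq_work_queue(&sg_policy->irq_work);
    else
        irq_work_queue_on(&sg_policy->irq_work,
                          cpumask_any_and(sg_policy->policy->cpus,
                                          cpu_online_mask));
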
Tyler Nijmeh
5cc6f41370 schedutil: Remove hispeed frequency boosting
Mobile devices do not benefit from hispeed frequency boosting.

Change-Id: I7823e41dd2be7187c36a53e8076fbe48eed2f6ae
Signed-off-by: Tyler Nijmeh <tylernij@gmail.com>
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-08 23:59:03 -03:00
Sultan Alsawaf
b5a4da656c schedutil: Allow CPU frequency changes to be amended before they're set
If the last CPU frequency selected isn't set before a new CPU frequency
selection arrives, then use the new selection immediately to avoid using a
stale frequency choice. This improves both performance and energy by more
closely tracking the scheduler's latest decisions.

Change-Id: I01a31626a350ece2937d98b17d40f9192b6382fc
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-08 23:15:04 -03:00
Richard Raya
1e4b44f306 Revert "cpufreq: schedutil: Allow CPU frequency changes to be amended before they're set"
This reverts commit 752a5ed0088bdb6605f8ec2c616404155e5766f2.

Change-Id: I2d2ca52aae6bae245b13e016d46c366896fef630
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-08 23:15:04 -03:00
Richard Raya
6a7d230940 Revert "sched/fair: Fine tune capacity margins"
This reverts commit af232df38ac7715ed21893470d09aacc0a02ea4e.

Change-Id: I57d805b9f37277b842c890d1035bc8d9115d37a8
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-08 23:15:04 -03:00
Richard Raya
df7aaee02b Revert "sched/cass: Skip reserved cpus"
This reverts commit 8915bf83869d911c743d78b3a17d60c700603a9a.

Change-Id: I2578b46f4a9553a5bafaedd981cf79dd8a85867a
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-08 23:15:03 -03:00
Maria Yu
16da99cd69 sched/walt: Avoid walt irq work in offlined CPU
Avoid scheduling the WALT irq work on an offlined CPU.

[clingutla@codeaurora.org: Resolved trivial merge conflicts]

Change-Id: Ia4410562f66bfa57daa15d8c0a785a2c7a95f2a0
Signed-off-by: Maria Yu <aiquny@codeaurora.org>
Signed-off-by: Lingutla Chandrasekhar <clingutla@codeaurora.org>
Signed-off-by: Adam W. Willis <return.of.octobot@gmail.com>
Signed-off-by: Yaroslav Furman <yaro330@gmail.com>
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-02 23:25:35 -03:00
Richard Raya
9b0e8fcc6d sched/walt: Provide a pointer to the valid CPU mask
Change-Id: I564bd24b0fdafdc751b483ccb632e2bf64147e4f
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-02 23:25:34 -03:00
Tashfin Shakeer Rhythm
b8ef028d5e sched/walt: Do not use check_for_migration() while using CASS
WALT's check_for_migration() calls find_energy_efficient_cpu(), which
is irrelevant under CASS. Since check_for_migration() doesn't prove to
be of much use even without that call, do not use it when CASS is in
play with WALT. There's no need for IS_ENABLED(CONFIG_SCHED_WALT) here,
since the function is already guarded by CONFIG_SCHED_WALT.

Change-Id: If1cf34bdd7ef65c9985bfefa0201d210b5317c20
Signed-off-by: Tashfin Shakeer Rhythm <tashfinshakeerrhythm@gmail.com>
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-02 23:25:34 -03:00
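
With find_energy_efficient_cpu() out of the picture, the call site reduces to a plain preprocessor guard; a sketch, where CONFIG_SCHED_CASS is an assumed name for this tree's CASS Kconfig symbol:

    #if defined(CONFIG_SCHED_WALT) && !defined(CONFIG_SCHED_CASS)
        check_for_migration(rq, curr);
    #endif
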
LibXZR
cfb18f18ff sched/walt: Set default window size to 8ms
On the stock ROM, the window size gets updated automatically when the
refresh rate changes: 8ms for 120Hz, 12ms for 90Hz and 20ms for 60Hz.
On custom ROMs there is no userspace daemon doing this, so the window
size stays stuck at 20ms.

Change-Id: I1ecaae2396b6cf1a9d2024ed166c3011648c9644
Signed-off-by: LibXZR <i@xzr.moe>
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-02 23:25:34 -03:00
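
A sketch of the default change (symbol name per msm WALT; the value is in nanoseconds):

    /* kernel/sched/walt.c (sketch) */
    __read_mostly unsigned int sched_ravg_window = 8000000;    /* 8 ms, was 20 ms */
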
Park Ju Hyung
ec1dab42cf sched/walt: Don't allocate window CPU arrays separately
These arrays are allocated extremely frequently.

Embed them as CONFIG_NR_CPUS-sized arrays inside struct ravg so they
are allocated along with it.

Note that this breaks WALT debug tracing.

Change-Id: I8f67bb00fb916e04bfc954d812a3b99a3a5495c2
Signed-off-by: Park Ju Hyung <qkrwngud825@gmail.com>
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-02 23:25:34 -03:00
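
A sketch of the struct change, with unrelated fields elided (CONFIG_NR_CPUS sizes the arrays at compile time, trading memory for allocation overhead):

    struct ravg {
        /* ... */
        u32 curr_window_cpu[CONFIG_NR_CPUS];    /* was: u32 *curr_window_cpu */
        u32 prev_window_cpu[CONFIG_NR_CPUS];    /* was: u32 *prev_window_cpu */
        /* ... */
    };
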
Alexander Winkowski
b411f7d6d6 sched/walt: Introduce rotation_ctl
This is WALT rotation logic extracted from core_ctl to avoid
CPU isolation overhead while retaining the performance gain.

Change-Id: I912d2dabf7e32eaf9da2f30b38898d1b29ff0a53
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-02 23:25:34 -03:00
Alexander Winkowski
81ba1ab8a2 sched/walt: Remove unused core_ctl.h
To avoid confusion with include/linux/sched/core_ctl.h

Change-Id: I037b1cc0fa09c06737a369b4e7dfdd89cd7ad9af
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-02 23:25:34 -03:00
Abhijeet Dharmapurikar
09a0c81e4e sched/walt: Improve the scheduler
This change is for general scheduler improvement.

[dereference23: Backport to msm-4.14]

Change-Id: Iffd4ae221581aaa4aeb244a0cddd40a8b6aac74d
Signed-off-by: Abhijeet Dharmapurikar <adharmap@codeaurora.org>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-02 23:25:34 -03:00
Abhijeet Dharmapurikar
ef5b0bbf69 sched/walt: Improve the scheduler
This change is for general scheduler improvement.

Change-Id: I7cb85ea7133a94923fae97d99f5b0027750ce189
Signed-off-by: Abhijeet Dharmapurikar <adharmap@codeaurora.org>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-02 23:25:34 -03:00
Juri Lelli
d14bc69765 sched/core: Fix hrtick reprogramming
Hung tasks and RCU stall cases were reported on systems which were not
100% busy. Investigation of such unexpected cases (no sign of potential
starvation caused by tasks hogging the system) pointed out that the
periodic sched tick timer wasn't serviced anymore after a certain point
and that caused all machinery that depends on it (timers, RCU, etc.) to
stop working as well. This issue, however, was only reproducible if
HRTICK was enabled.

Looking at core dumps it was found that the rbtree of the hrtimer base
used also for the hrtick was corrupted (i.e. next as seen from the base
root and actual leftmost obtained by traversing the tree are different).
The same base is also used for the periodic tick hrtimer, which might
get "lost" if the rbtree gets corrupted.

Much like what is described in commit 1f71addd34f4c ("tick/sched: Do not
mess with an enqueued hrtimer") there is a race window between
hrtimer_set_expires() in hrtick_start and hrtimer_start_expires() in
__hrtick_restart() in which the former might be operating on an already
queued hrtick hrtimer, which might lead to corruption of the base.

Use hrtick_start() (which removes the timer before enqueuing it back) to
ensure hrtick hrtimer reprogramming is entirely guarded by the base
lock, so that no race conditions can occur.

Link: https://lkml.kernel.org/r/20210208073554.14629-2-juri.lelli@redhat.com
Change-Id: I2f427a969a731f862e718d2bc62e682581650c4c
Signed-off-by: Juri Lelli <juri.lelli@redhat.com>
Signed-off-by: Luis Claudio R. Goncalves <lgoncalv@redhat.com>
Signed-off-by: Daniel Bristot de Oliveira <bristot@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-02 23:25:34 -03:00
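
The mainline fix stores the expiry in rq->hrtick_time and re-arms with hrtimer_start(), which dequeues an already-queued timer under the base lock; a sketch (mainline uses the hard-irq pinned mode, which a 4.x backport would drop):

    static void __hrtick_restart(struct rq *rq)
    {
        struct hrtimer *timer = &rq->hrtick_timer;
        ktime_t time = rq->hrtick_time;

        /* Re-arm from scratch rather than poking an enqueued timer */
        hrtimer_start(timer, time, HRTIMER_MODE_ABS_PINNED);
    }
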
Sultan Alsawaf
05bab47794 sched/core: Force trivial, unbound kthreads onto low-power CPUs
In order to reduce power consumption, force all non-perf-critical,
unbound kthreads onto the low-power CPUs. Note that init must be
explicitly excluded from this so that all processes forked from init
have a sane default CPU affinity mask.

Change-Id: Ic86928a058d8fb033adc834572cd71248d29e705
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Signed-off-by: Alex Winkowski <dereference23@outlook.com>
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
2024-09-02 23:27:43 -03:00
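
A heavily hedged sketch of the core idea only (cpu_lp_mask is a tree-specific low-power cluster mask, and the exact hook point and perf-critical opt-outs are assumptions, not confirmed by this log):

    /* sketch: clamp a trivial, unbound kthread to the low-power CPUs */
    if (!is_global_init(current))
        set_cpus_allowed_ptr(current, cpu_lp_mask);
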