locking/qspinlock: Use atomic_cond_read_relaxed() for slowpath spinning

The upstream version of commit 3dab30f33814 ("locking/qspinlock: Bound
spinning on pending->locked transition in slowpath") uses
atomic_cond_read_relaxed() here instead of smp_cond_load_acquire(). Our
linux-stable backport uses smp_cond_load_acquire() because 4.14 doesn't
have atomic_cond_read_relaxed() defined. Luckily, adding support for
atomic_cond_read_relaxed() is quite simple, so we can now use it instead
of smp_cond_load_acquire() in order to match upstream.
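
For reference, one way the missing helper can be provided on 4.14 is to
mirror the upstream definitions: a relaxed spin-until-condition primitive
plus an atomic_t wrapper around it. The following is a minimal sketch of
that approach (VAL is the implicit variable the condition expression reads,
as used in the hunk below); it is not necessarily the exact code added by
the backport:

/*
 * Sketch (mirrors upstream include/asm-generic/barrier.h): spin with
 * relaxed loads until cond_expr evaluates true, then return the last
 * value read. No acquire ordering is provided.
 */
#ifndef smp_cond_load_relaxed
#define smp_cond_load_relaxed(ptr, cond_expr) ({		\
	typeof(ptr) __PTR = (ptr);				\
	typeof(*ptr) VAL;					\
	for (;;) {						\
		VAL = READ_ONCE(*__PTR);			\
		if (cond_expr)					\
			break;					\
		cpu_relax();					\
	}							\
	VAL;							\
})
#endif

/*
 * atomic_t wrapper, as in upstream include/linux/atomic.h: spin on the
 * counter field of the atomic_t.
 */
#ifndef atomic_cond_read_relaxed
#define atomic_cond_read_relaxed(v, c)	smp_cond_load_relaxed(&(v)->counter, (c))
#endif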

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Signed-off-by: azrim <mirzaspc@gmail.com>

@@ -342,7 +342,7 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	 */
 	if (val == _Q_PENDING_VAL) {
 		int cnt = _Q_PENDING_LOOPS;
-		val = smp_cond_load_acquire(&lock->val.counter,
+		val = atomic_cond_read_relaxed(&lock->val,
 					       (VAL != _Q_PENDING_VAL) || !cnt--);
 	}