Change-Id: If9b885edf044a0619ec1abbc17e172c96dbe671e
Signed-off-by: Cyber Knight <cyberknight755@gmail.com>
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
This reverts commit 919384cd204127484dae1123f10a4c08ad330027.
Change-Id: I0198a1118430d5310a512bf6c14884419d247a4d
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
This reverts commit 54b2482d2138ffe4a91fdb7b481c9813481dbe7f.
Change-Id: Ic7595bb9ed064653e6805df4faae52c2b491fbb6
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
This reverts commit 6603b051ddaef7ede2300bfe3344f088881f7aa9.
Change-Id: I5ae8287718117263552e7bea47beee5b0b207632
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
Change-Id: I860c69b6b9085fbc8597924129fef2da79b5b425
Signed-off-by: Tyler Nijmeh <tylernij@gmail.com>
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
There are several blockable mmu notifiers which might sleep in
mmu_notifier_invalidate_range_start, and that is a problem for the
oom_reaper because it needs to guarantee forward progress and therefore
cannot depend on any sleepable locks.
Currently we simply back off and mark an oom victim with blockable mmu
notifiers as done after a short sleep. That can result in selecting a new
oom victim prematurely because the previous one still hasn't torn down
its memory yet.
We can do much better, though. Even if mmu notifiers use sleepable locks,
there is no reason to automatically assume those locks are held. Moreover,
the majority of notifiers only care about a portion of the address space,
and there is absolutely zero reason to fail when we are unmapping an
unrelated range. Some notifiers do genuinely block and wait for HW, which
is harder to handle, and there we still have to bail out.
This patch handles the low-hanging fruit.
__mmu_notifier_invalidate_range_start gets a blockable flag, and callbacks
are not allowed to sleep if the flag is set to false. This is achieved by
using trylock instead of the sleepable lock for most callbacks and
continuing only as long as nothing down the call chain blocks.
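For illustration, a minimal sketch of the pattern a converted callback
might follow (all names here are hypothetical; the real changes live in
the individual drivers):

    static int demo_invalidate_range_start(struct mmu_notifier *mn,
                                           struct mm_struct *mm,
                                           unsigned long start,
                                           unsigned long end,
                                           bool blockable)
    {
        struct demo_ctx *ctx = container_of(mn, struct demo_ctx, mn);

        if (blockable) {
            mutex_lock(&ctx->lock);         /* sleeping is fine here */
        } else if (!mutex_trylock(&ctx->lock)) {
            /* oom_reaper context: must not sleep, so back off */
            return -EAGAIN;
        }

        demo_invalidate(ctx, start, end);   /* hypothetical teardown */
        mutex_unlock(&ctx->lock);
        return 0;
    }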
I think we can improve that even further because there is a common pattern
to do a range lookup first and then do something about that. The first
part can be done without a sleeping lock in most cases AFAICS.
The oom_reaper end then simply retries if there is at least one notifier
which couldn't make any progress in !blockable mode. A retry loop is
already implemented to wait for the mmap_sem and this is basically the
same thing.
The simplest way for driver developers to test this code path is to wrap
userspace code which uses these notifiers into a memcg and set the hard
limit to hit the oom. This can be done, e.g., by having the test fault in
all the mmu-notifier-managed memory and then setting the hard limit to
something really small. Then we are looking for a proper process tear
down.
[akpm@linux-foundation.org: coding style fixes]
[akpm@linux-foundation.org: minor code simplification]
Link: http://lkml.kernel.org/r/20180716115058.5559-1-mhocko@kernel.org
Signed-off-by: Michal Hocko <mhocko@suse.com>
Acked-by: Christian König <christian.koenig@amd.com> # AMD notifiers
Acked-by: Leon Romanovsky <leonro@mellanox.com> # mlx and umem_odp
Reported-by: David Rientjes <rientjes@google.com>
Cc: "David (ChunMing) Zhou" <David1.Zhou@amd.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: David Airlie <airlied@linux.ie>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Doug Ledford <dledford@redhat.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Mike Marciniszyn <mike.marciniszyn@intel.com>
Cc: Dennis Dalessandro <dennis.dalessandro@intel.com>
Cc: Sudeep Dutt <sudeep.dutt@intel.com>
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
Cc: Dimitri Sivanich <sivanich@sgi.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: "Jérôme Glisse" <jglisse@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Felix Kuehling <felix.kuehling@amd.com>
Change-Id: Ibf089b0ebbbfa7182eeca314b757caf456969758
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
This reverts commit 1bc31b0295561f84d4663948d66df8a5a6b5c57b.
Change-Id: I3aabea8ba7841e045936c7d5192ed0414bb2c7f8
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
Remove the interrupt signal mask before waiting, and wake up all threads
waiting in app_block_wq.
Change-Id: I0b441c2fd8a4f829c356492a8facd56f92c97884
Signed-off-by: Zhen Kong <zkong@codeaurora.org>
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
Kernel threads are not freezable by default; change to
wait_event_interruptible() to prevent a frozen qseecom thread from being
blocked until another frozen thread is thawed.
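Roughly (a sketch only; "cond" stands in for the real wakeup condition):

    /* Before: an uninterruptible wait can block a freezing thread. */
    wait_event(qseecom.app_block_wq, cond);

    /* After: the freezer's fake signal can interrupt the wait. */
    ret = wait_event_interruptible(qseecom.app_block_wq, cond);
    if (ret)
        return ret;    /* interrupted, e.g. while being frozen */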
Change-Id: I09d374c763a316201e790d9003b0b9785debd44a
Signed-off-by: Zhen Kong <zkong@codeaurora.org>
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
For the case when cumulative mode was not inactive, scale the bus
bandwidth to LOW upon resume.
Change-Id: I67ee237efed1979be24d26617fe7767846c74b4a
Signed-off-by: Jiten Patel <jitepate@codeaurora.org>
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
As we don't have to care about any security mechanism other than SELinux,
inline its struct sk_security_struct into struct sock.
This way, the kernel no longer has to allocate and de-allocate
struct sk_security_struct, which happens extremely frequently under
Android.
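Conceptually (a sketch; the SELinux field layout is assumed here, not
quoted from the actual diff):

    struct sk_security_struct {
        u32 sid;         /* SID of this object */
        u32 peer_sid;    /* SID of the network peer */
        u16 sclass;      /* sock security class */
    };

    struct sock {
        /* ... */
        struct sk_security_struct sk_security;  /* was: void *sk_security */
        /* ... */
    };

With the struct embedded, the per-socket alloc/free pair disappears
entirely.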
Change-Id: Ic1e584a7d31bbb225cbd3079a5002542a7683818
Signed-off-by: Park Ju Hyung <qkrwngud825@gmail.com>
Signed-off-by: Yaroslav Furman <yaro330@gmail.com>
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
healthd queries this extremely frequently, and attrname is allocated
and de-allocated repeatedly.
Use stack space instead.
Change-Id: I93de0238710270022df1dd7af96528898ce0928f
Signed-off-by: Park Ju Hyung <qkrwngud825@gmail.com>
Signed-off-by: Yaroslav Furman <yaro330@gmail.com>
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
Although the maximum xattr size is too big to fit on the stack (64 KiB),
we can still fulfill most getxattr requests with a 4 KiB stack
allocation, thereby improving performance.
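The shape of the pattern, as a sketch with hypothetical names (not the
exact fs/xattr.c diff):

    ssize_t demo_getxattr(struct dentry *d, const char *name,
                          void __user *uvalue, size_t size)
    {
        char stack_kvalue[4096];    /* serves most requests */
        char *kvalue = stack_kvalue;
        ssize_t error;

        if (size > sizeof(stack_kvalue)) {
            kvalue = kvmalloc(size, GFP_KERNEL);
            if (!kvalue)
                return -ENOMEM;
        }

        error = __vfs_getxattr(d, d_inode(d), name, kvalue, size);
        if (error > 0 && copy_to_user(uvalue, kvalue, error))
            error = -EFAULT;

        if (kvalue != stack_kvalue)
            kvfree(kvalue);
        return error;
    }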
Change-Id: I36798b527d7641597c44c1d7035981a7477e5d4b
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Signed-off-by: Yaroslav Furman <yaro330@gmail.com>
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
The environment buffer isn't very big; when it's allocated on the stack,
kobject_uevent_env's stack frame size increases to just over 2 KiB,
which is safe considering that we have a 16 KiB stack.
Allocate the environment buffer on the stack instead of using the slab
allocator in order to improve performance.
Change-Id: I175eda1be09007436f77dd6f185931e6a1d9facb
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Signed-off-by: Yaroslav Furman <yaro330@gmail.com>
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
Most allocations done here are rather small and can fit on the stack,
eliminating the need to allocate them dynamically. Reserve a 1024-byte
stack buffer for this purpose to avoid the overhead of dynamic memory
allocation.
1024 bytes covers most use cases; higher values were observed to cause
stack corruption.
Change-Id: I3413ddc239bc59e87148090ce2a9d4035329ad8b
Signed-off-by: Park Ju Hyung <qkrwngud825@gmail.com>
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
Target cluster-side caches if the task would have been queued on a CPU
that doesn't share a cache with it.
Change-Id: If2aeafb0158cd33ad37fef6d279f778f9821fb1e
Signed-off-by: Tyler Nijmeh <tylernij@gmail.com>
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
CPUs that are idle are excellent candidates for latency sensitive or
high-performance tasks. Decrementing their capacity while they are idle
will result in these CPUs being chosen less, and they will prefer to
schedule smaller tasks instead of large ones. Disable this.
Change-Id: Ida93a2d65456b048662669bde70b484c3ea4a30f
Signed-off-by: Tyler Nijmeh <tylernij@gmail.com>
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
Commit [1] moved the features.h inclusion in sched.h to the top of the
file. This prevents features.h from seeing the definition of
HAVE_RT_PUSH_IPI, which in turn causes sched_feat(RT_PUSH_IPI) to always
be disabled.
Fix this by checking the parent macro of HAVE_RT_PUSH_IPI instead.
[1] ("sched: Resolve sched_feat() at compile time to improve code optimization")
Change-Id: Ia5d3a89b3985e24884d9bb1729aca2f3e7b20ef3
Signed-off-by: shygosh <shygosh@proton.me>
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
This should let brand new tasks launch marginally faster.
Change-Id: I6e552a9179f080b5e7cddbb8fcabf276f287e223
Signed-off-by: Tyler Nijmeh <tylernij@gmail.com>
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
Within a DynamIQ Shared Unit (DSU), task migration cost is optimized
through L2 and L3 cache sharing. When a task is migrated between CPUs
within the same DSU cluster, there is no loss of L2$ and L3$ locality.
Since all CPU cores are tightly interconnected within a DSU, set the task
migration cost to zero knowing that the DSU will facilitate L$ sharing via
snooping.
This leverages the DSU to improve power consumption and performance on
systems which contain a single DSU cluster.
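The change itself is tiny; as a sketch (the stock value in
kernel/sched/fair.c is 500000 ns):

    /* Migrations inside a single-DSU system stay within the shared
     * L2/L3, so treat them as free. */
    const_debug unsigned int sysctl_sched_migration_cost = 0;  /* was 500000UL */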
Change-Id: Ifce37fc6c125097c1965d12c61b4ee351b9dbd94
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
Change-Id: I70b94de11a21c1744873020e981d38912bf32a37
Signed-off-by: Park Ju Hyung <qkrwngud825@gmail.com>
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
This reverts commit 019251a006cfa34bb5663104e8cc83a08b520a06.
Change-Id: I85e565d277eb8f03db65a762b5c5ded6e4710493
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
Merge tag 'v4.14.352-openela' of https://github.com/openela/kernel-lts
This is the 4.14.352 OpenELA-Extended LTS stable release
* tag 'v4.14.352-openela' of https://github.com/openela/kernel-lts: (32 commits)
LTS: Update to 4.14.352
filelock: Fix fcntl/close race recovery compat path
jfs: don't walk off the end of ealist
ocfs2: add bounds checking to ocfs2_check_dir_entry()
net: relax socket state check at accept time.
ACPI: processor_idle: Fix invalid comparison with insertion sort for latency
ARM: 9324/1: fix get_user() broken with veneer
filelock: Remove locks reliably when fcntl/close race is detected
hfsplus: fix uninit-value in copy_name
selftests/vDSO: fix clang build errors and warnings
spi: imx: Don't expect DMA for i.MX{25,35,50,51,53} cspi devices
fs: better handle deep ancestor chains in is_subdir()
Bluetooth: hci_core: cancel all works upon hci_unregister_dev()
net: mac802154: Fix racy device stats updates by DEV_STATS_INC() and DEV_STATS_ADD()
net: usb: qmi_wwan: add Telit FN912 compositions
ALSA: dmaengine_pcm: terminate dmaengine before synchronize
s390/sclp: Fix sclp_init() cleanup on failure
Input: elantech - fix touchpad state on resume for Lenovo N24
wifi: cfg80211: wext: add extra SIOCSIWSCAN data check
mei: demote client disconnect warning on suspend to debug
...
Change-Id: I4cbdfa0321bf83d62ac62f386eb77d21c5785dec
Signed-off-by: Richard Raya <rdxzv.dev@gmail.com>
commit 92f1655aa2b2294d0b49925f3b875a634bd3b59e upstream.
__dst_negative_advice() does not enforce proper RCU rules when
sk->dst_cache must be cleared, leading to possible UAF.
RCU rules are that we must first clear sk->sk_dst_cache,
then call dst_release(old_dst).
Note that sk_dst_reset(sk) implements this protocol correctly, while
__dst_negative_advice() uses the wrong order.
Given that ip6_negative_advice() has special logic against RTF_CACHE,
each of the three existing ->negative_advice() methods must perform the
sk_dst_reset() itself.
Note that the check against a NULL dst is centralized in
__dst_negative_advice(); there is no need to duplicate it in the various
callbacks.
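For reference, the correct ordering as sk_dst_set() (used by
sk_dst_reset()) implements it, abridged:

    static inline void sk_dst_set(struct sock *sk, struct dst_entry *dst)
    {
        struct dst_entry *old_dst;

        sk_tx_queue_clear(sk);
        /* 1) atomically publish the new (possibly NULL) pointer... */
        old_dst = xchg((__force struct dst_entry **)&sk->sk_dst_cache, dst);
        /* 2) ...then release the old entry. */
        dst_release(old_dst);
    }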
Many thanks to Clement Lecigne for tracking this issue.
This old bug became visible after the blamed commit, using UDP sockets.
Fixes: a87cb3e48ee8 ("net: Facility to report route quality of connected sockets")
Reported-by: Clement Lecigne <clecigne@google.com>
Diagnosed-by: Clement Lecigne <clecigne@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Tom Herbert <tom@herbertland.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Link: https://lore.kernel.org/r/20240528114353.1794151-1-edumazet@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
[mheyne: contextual conflict in ip6_negative_advice due to missing
commit c3c14da0288d ("net/ipv6: add rcu locking to ip6_negative_advice") and
commit 93531c674315 ("net/ipv6: separate handling of FIB entries from dst based routes")]
Signed-off-by: Maximilian Heyne <mheyne@amazon.de>
commit f8138f2ad2f745b9a1c696a05b749eabe44337ea upstream.
When I wrote commit 3cad1bc01041 ("filelock: Remove locks reliably when
fcntl/close race is detected"), I missed that there are two copies of the
code I was patching: The normal version, and the version for 64-bit offsets
on 32-bit kernels.
Thanks to Greg KH for stumbling over this while doing the stable
backport...
Apply exactly the same fix to the compat path for 32-bit kernels.
Fixes: c293621bbf67 ("[PATCH] stale POSIX lock handling")
Cc: stable@kernel.org
Link: https://bugs.chromium.org/p/project-zero/issues/detail?id=2563
Signed-off-by: Jann Horn <jannh@google.com>
Link: https://lore.kernel.org/r/20240723-fs-lock-recover-compatfix-v1-1-148096719529@google.com
Signed-off-by: Christian Brauner <brauner@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit a561145f3ae973ebf3e0aee41624e92a6c5cb38d)
Signed-off-by: Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
commit d0fa70aca54c8643248e89061da23752506ec0d4 upstream.
Add a check before visiting the members of ea to
make sure each ea stays within the ealist.
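The general shape of such a walker check (names hypothetical; jfs uses
its own ea macros):

    struct demo_ea *ea;
    char *end = (char *)ealist + ealist_size;

    for (ea = first_ea(ealist); (char *)ea < end; ea = next_ea(ea)) {
        /* both the header and its name/value payload must fit */
        if ((char *)ea + sizeof(*ea) > end ||
            (char *)ea + ea_size(ea) > end)
            return -EIO;    /* ea strays past the ealist: corrupt */
        /* ... visit ea ... */
    }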
Signed-off-by: lei lu <llfamsec@gmail.com>
Signed-off-by: Dave Kleikamp <dave.kleikamp@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 7f91bd0f2941fa36449ce1a15faaa64f840d9746)
Signed-off-by: Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
commit 255547c6bb8940a97eea94ef9d464ea5967763fb upstream.
This adds sanity checks for ocfs2_dir_entry to make sure its members
don't stray beyond the valid memory region.
Link: https://lkml.kernel.org/r/20240626104433.163270-1-llfamsec@gmail.com
Signed-off-by: lei lu <llfamsec@gmail.com>
Reviewed-by: Heming Zhao <heming.zhao@suse.com>
Reviewed-by: Joseph Qi <joseph.qi@linux.alibaba.com>
Cc: Mark Fasheh <mark@fasheh.com>
Cc: Joel Becker <jlbec@evilplan.org>
Cc: Junxiao Bi <junxiao.bi@oracle.com>
Cc: Changwei Ge <gechangwei@live.cn>
Cc: Gang He <ghe@suse.com>
Cc: Jun Piao <piaojun@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 13d38c00df97289e6fba2e54193959293fd910d2)
Signed-off-by: Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
commit 233323f9b9f828cd7cd5145ad811c1990b692542 upstream.
The acpi_cst_latency_cmp() comparison function currently used for
sorting C-state latencies does not satisfy transitivity, causing
incorrect sorting results.
Specifically, if there are two valid acpi_processor_cx elements A and B
and one invalid element C, it may occur that A < B, A = C, and B = C.
Sorting algorithms assume that if A < B and A = C, then C < B, leading
to incorrect ordering.
Given the small size of the array (<=8), we replace the library sort
function with a simple insertion sort that properly ignores invalid
elements and sorts valid ones based on latency. This change ensures
correct ordering of the C-state latencies.
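The replacement is essentially the following (a sketch mirroring the
upstream fix; invalid entries are skipped outright and only latencies
are reordered):

    static void acpi_cst_latency_sort(struct acpi_processor_cx *states,
                                      size_t length)
    {
        int i, j, k;

        for (i = 1; i < length; i++) {
            if (!states[i].valid)
                continue;
            /* walk back over valid entries, keeping latencies ordered */
            for (j = i - 1, k = i; j >= 0; j--) {
                if (!states[j].valid)
                    continue;
                if (states[j].latency > states[k].latency)
                    swap(states[j].latency, states[k].latency);
                k = j;
            }
        }
    }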
Fixes: 65ea8f2c6e23 ("ACPI: processor idle: Fix up C-state latency if not ordered")
Reported-by: Julian Sikorski <belegdol@gmail.com>
Closes: https://lore.kernel.org/lkml/70674dc7-5586-4183-8953-8095567e73df@gmail.com
Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>
Tested-by: Julian Sikorski <belegdol@gmail.com>
Cc: All applicable <stable@vger.kernel.org>
Link: https://patch.msgid.link/20240701205639.117194-1-visitorckw@gmail.com
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit c9d6e349f7aad4ab9c557047d357df256c15f25e)
Signed-off-by: Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
commit 24d3ba0a7b44c1617c27f5045eecc4f34752ab03 upstream.
The 32-bit ARM kernel stops working if the kernel grows to the point
where veneers for __get_user_* are created.
AAPCS32 [1] states, "Register r12 (IP) may be used by a linker as a
scratch register between a routine and any subroutine it calls. It
can also be used within a routine to hold intermediate values between
subroutine calls."
However, bl instructions buried within the inline asm are unpredictable
for compilers; hence, "ip" must be added to the clobber list.
This becomes critical when veneers for __get_user_* are created because
veneers use the ip register since commit 02e541db0540 ("ARM: 8323/1:
force linker to use PIC veneers").
[1]: https://github.com/ARM-software/abi-aa/blob/2023Q1/aapcs32/aapcs32.rst
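As an illustration (hypothetical helper; the real fix adds "ip" to the
clobber lists of the __get_user_* wrappers in
arch/arm/include/asm/uaccess.h):

    static inline unsigned int demo_call(unsigned int x)
    {
        register unsigned int r0 asm("r0") = x;

        /* The bl may be routed through a linker veneer that uses r12,
         * so "ip" must be in the clobber list. */
        asm volatile("bl    __demo_helper"
                     : "+r" (r0)
                     :
                     : "ip", "lr", "cc", "memory");
        return r0;
    }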
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Cc: John Stultz <jstultz@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 41a5c1717bf4ad1b6084e8682de64b178eabc059)
Signed-off-by: Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
commit 3cad1bc010416c6dd780643476bc59ed742436b9 upstream.
When fcntl_setlk() races with close(), it removes the created lock with
do_lock_file_wait().
However, LSMs can allow the first do_lock_file_wait() that created the lock
while denying the second do_lock_file_wait() that tries to remove the lock.
In theory (but AFAIK not in practice), posix_lock_file() could also fail to
remove a lock due to GFP_KERNEL allocation failure (when splitting a range
in the middle).
After the bug has been triggered, use-after-free reads will occur in
lock_get_status() when userspace reads /proc/locks. This can likely be used
to read arbitrary kernel memory, but can't corrupt kernel memory.
This only affects systems with SELinux / Smack / AppArmor / BPF-LSM in
enforcing mode and only works from some security contexts.
Fix it by calling locks_remove_posix() instead, which is designed to
reliably get rid of POSIX locks associated with the given file and
files_struct and is also used by filp_flush().
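A sketch of the recovery path in fcntl_setlk() after the fix (4.14-era
fcheck(); abridged):

    if (!error && file_lock->fl_type != F_UNLCK &&
        !(file_lock->fl_flags & FL_OFDLCK)) {
        struct files_struct *files = current->files;

        spin_lock(&files->file_lock);
        f = fcheck(fd);
        spin_unlock(&files->file_lock);
        if (f != filp) {
            /* fd was closed under us: reliably drop the POSIX locks
             * instead of trusting a second do_lock_file_wait() */
            locks_remove_posix(filp, files);
            error = -EBADF;
        }
    }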
Fixes: c293621bbf67 ("[PATCH] stale POSIX lock handling")
Cc: stable@kernel.org
Link: https://bugs.chromium.org/p/project-zero/issues/detail?id=2563
Signed-off-by: Jann Horn <jannh@google.com>
Link: https://lore.kernel.org/r/20240702-fs-lock-recover-2-v1-1-edd456f63789@google.com
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Christian Brauner <brauner@kernel.org>
[stable fixup: ->c.flc_type was ->fl_type in older kernels]
Signed-off-by: Jann Horn <jannh@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit d30ff33040834c3b9eee29740acd92f9c7ba2250)
Signed-off-by: Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
[ Upstream commit 73810cd45b99c6c418e1c6a487b52c1e74edb20d ]
When building with clang, via:
make LLVM=1 -C tools/testing/selftests
...there are several warnings, and an error. This fixes all of those and
allows these tests to run and pass.
1. Fix linker error (undefined reference to memcpy) by providing a local
version of memcpy.
2. clang complains about using this form:
if (g = h & 0xf0000000)
...so factor out the assignment into a separate step.
3. The code is passing a signed const char* to elf_hash(), which expects
a const unsigned char *. There are several callers, so fix this at
the source by allowing the function to accept a signed argument, and
then converting to unsigned operations, once inside the function.
4. clang doesn't have __attribute__((externally_visible)) and generates
a warning to that effect. Fortunately, gcc 12 and gcc 13 do not seem
to require that attribute in order to build, run and pass tests here,
so remove it.
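Items 2 and 3 combined, on the classic ELF hash (sketch):

    static unsigned long elf_hash(const char *s)
    {
        /* item 3: convert once here; callers keep their signed pointers */
        const unsigned char *name = (const unsigned char *)s;
        unsigned long h = 0, g;

        while (*name) {
            h = (h << 4) + *name++;
            /* item 2: assignment factored out of the condition */
            g = h & 0xf0000000;
            if (g)
                h ^= g >> 24;
            h &= ~g;
        }
        return h;
    }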
Reviewed-by: Carlos Llamas <cmllamas@google.com>
Reviewed-by: Edward Liaw <edliaw@google.com>
Reviewed-by: Muhammad Usama Anjum <usama.anjum@collabora.com>
Tested-by: Muhammad Usama Anjum <usama.anjum@collabora.com>
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
(cherry picked from commit d5e9dddd18fdfe04772bce07d4a34e39e7b1e402)
Signed-off-by: Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
[ Upstream commit ce1dac560a74220f2e53845ec0723b562288aed4 ]
While commit 2dd33f9cec90 ("spi: imx: support DMA for imx35") claimed
that DMA works on i.MX25, i.MX31 and i.MX35, the respective device trees
don't add DMA channels. The reference manuals of i.MX31 and i.MX25 also
don't mention the CSPI core being DMA capable. (I didn't check the
others.)
Since commit e267a5b3ec59 ("spi: spi-imx: Use dev_err_probe for failed
DMA channel requests") this results in an error message
spi_imx 43fa4000.spi: error -ENODEV: can't get the TX DMA channel!
during boot. However, that isn't fatal and the driver gets loaded just
fine, just without using DMA.
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Link: https://patch.msgid.link/20240508095610.2146640-2-u.kleine-koenig@pengutronix.de
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
(cherry picked from commit 4f5e56dddabe947cc840ffb2db60d9df6ca9e8b9)
Signed-off-by: Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
[ Upstream commit 391b59b045004d5b985d033263ccba3e941a7740 ]
Jan reported that 'cd ..' may take a long time in deep directory
hierarchies under a bind mount. If concurrent renames happen, it is
possible to livelock in is_subdir() because it will keep retrying.
Change is_subdir() from simply retrying over and over to retrying once
and then acquiring the rename lock, which handles deep ancestor chains
better. The list of alternatives to this approach was less than
pleasant. Change the scope of the rcu lock to cover the whole walk while
at it.
A big thanks to Jan and Linus. Both Jan and Linus had proposed
effectively the same thing; one version just ended up being slightly
more elegant.
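The resulting shape, close to the upstream fix (abridged):

    bool is_subdir(struct dentry *new_dentry, struct dentry *old_dentry)
    {
        bool subdir;
        unsigned seq;

        if (new_dentry == old_dentry)
            return true;

        rcu_read_lock();    /* now covers the whole walk */
        seq = read_seqbegin(&rename_lock);
        subdir = d_ancestor(old_dentry, new_dentry) != NULL;
        /* try lockless once... */
        if (read_seqretry(&rename_lock, seq)) {
            /* ...then take the lock to guarantee forward progress */
            read_seqlock_excl(&rename_lock);
            subdir = d_ancestor(old_dentry, new_dentry) != NULL;
            read_sequnlock_excl(&rename_lock);
        }
        rcu_read_unlock();
        return subdir;
    }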
Reported-by: Jan Kara <jack@suse.cz>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Christian Brauner <brauner@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
(cherry picked from commit a5c4645346b0efb5a10ed28ae281a9af29037608)
Signed-off-by: Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
[ Upstream commit 0d151a103775dd9645c78c97f77d6e2a5298d913 ]
syzbot is reporting that calling hci_release_dev(), reached via
hci_dev_put() from hci_error_reset(), can cause a deadlock at
destroy_workqueue(), because hci_error_reset() is itself called from
hdev->req_workqueue, which destroy_workqueue() needs to flush.
We need to make sure that hdev->{rx_work,cmd_work,tx_work}, which are
queued into hdev->workqueue, and hdev->{power_on,error_reset}, which are
queued into hdev->req_workqueue, are no longer running by the time
destroy_workqueue(hdev->workqueue);
destroy_workqueue(hdev->req_workqueue);
are called from hci_release_dev().
Call cancel_work_sync() on these work items from hci_unregister_dev()
as soon as hdev->list is removed from hci_dev_list.
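A sketch of the resulting ordering in hci_unregister_dev() (abridged):

    void hci_unregister_dev(struct hci_dev *hdev)
    {
        /* ... */
        write_lock(&hci_dev_list_lock);
        list_del(&hdev->list);
        write_unlock(&hci_dev_list_lock);

        /* quiesce everything before hci_release_dev() can reach
         * destroy_workqueue() on the underlying workqueues */
        cancel_work_sync(&hdev->rx_work);
        cancel_work_sync(&hdev->cmd_work);
        cancel_work_sync(&hdev->tx_work);
        cancel_work_sync(&hdev->power_on);
        cancel_work_sync(&hdev->error_reset);
        /* ... */
    }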
Reported-by: syzbot <syzbot+da0a9c9721e36db712e8@syzkaller.appspotmail.com>
Closes: https://syzkaller.appspot.com/bug?extid=da0a9c9721e36db712e8
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
(cherry picked from commit 48542881997e17b49dc16b93fe910e0cfcf7a9f9)
Signed-off-by: Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
[ Upstream commit 6a7db25aad8ce6512b366d2ce1d0e60bac00a09d ]
When dmaengine supports pause function, in suspend state,
dmaengine_pause() is called instead of dmaengine_terminate_async(),
In end of playback stream, the runtime->state will go to
SNDRV_PCM_STATE_DRAINING, if system suspend & resume happen
at this time, application will not resume playback stream, the
stream will be closed directly, the dmaengine_terminate_async()
will not be called before the dmaengine_synchronize(), which
violates the call sequence for dmaengine_synchronize().
This behavior also happens for capture streams, but there is no
SNDRV_PCM_STATE_DRAINING state for capture. So use
dmaengine_tx_status() to check the DMA status if the status is
DMA_PAUSED, then call dmaengine_terminate_async() to terminate
dmaengine before dmaengine_synchronize().
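The check, roughly (a sketch; prtd is the dmaengine PCM runtime data):

    struct dma_tx_state state;
    enum dma_status status;

    status = dmaengine_tx_status(prtd->dma_chan, prtd->cookie, &state);
    if (status == DMA_PAUSED)
        /* a paused channel was never terminated: do it now... */
        dmaengine_terminate_async(prtd->dma_chan);
    /* ...so that synchronizing afterwards is legal */
    dmaengine_synchronize(prtd->dma_chan);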
Signed-off-by: Shengjiu Wang <shengjiu.wang@nxp.com>
Link: https://patch.msgid.link/1718851218-27803-1-git-send-email-shengjiu.wang@nxp.com
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>
(cherry picked from commit fe0a6e7eb38f9d5396f6ff548186a6cd62c08b1a)
Signed-off-by: Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
[ Upstream commit a69ce592cbe0417664bc5a075205aa75c2ec1273 ]
On resume, the Lenovo N24 becomes stuck in a state where it sends
incorrect packets, causing elantech_packet_check_v4 to fail. The only
way for the device to resume sending correct packets is for it to be
disabled and then re-enabled.
This change adds a DMI check to trigger this behavior on resume.
Signed-off-by: Jonathan Denose <jdenose@google.com>
Link: https://lore.kernel.org/r/20240503155020.v2.1.Ifa0e25ebf968d8f307f58d678036944141ab17e6@changeid
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
(cherry picked from commit 9b6a1cb833dc8ceab3fbc45a261a8dd37c4f8013)
Signed-off-by: Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
[ Upstream commit 1db5322b7e6b58e1b304ce69a50e9dca798ca95b ]
Change level for the "not connected" client message in the write
callback from error to debug.
The MEI driver currently disconnects all clients upon system suspend.
This behavior is by design and user-space applications with
open connections before the suspend are expected to handle errors upon
resume, by reopening their handles, reconnecting,
and retrying their operations.
However, the current driver implementation logs an error message every
time a write operation is attempted on a disconnected client.
Since this is a normal and expected flow after system resume
logging this as an error can be misleading.
Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com>
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
Link: https://lore.kernel.org/r/20240530091415.725247-1-tomas.winkler@intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
(cherry picked from commit bd2a753fa12cf3d28726a4bf067398514e52d57c)
Signed-off-by: Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>