795510 Commits

Adithya R
e461316405 wireguard: compat: Remove skb_mark_not_on_list
Already added in 4263e32f284dd23341770573709e6e9bd16998da ("UPSTREAM: net: use skb_list_del_init() to remove from RX sublists")
2022-10-01 23:55:36 +05:30
idkwhoiam322
b7f4d80fbc wireguard: compat: Adapt for upstream mm commit
3b3cecea215895b2235f2eb70e9e6859b379630a: "mm: convert totalram_pages and totalhigh_pages variables to atomic"

Signed-off-by: Adithya R <gh0strider.2k18.reborn@gmail.com>
2022-10-01 23:53:31 +05:30
Adithya R
0fce4ce724 arm64: configs: surya: Enable WireGuard driver 2022-10-01 23:50:34 +05:30
Adithya R
33e79cbd8f net: Import WireGuard from wireguard-linux-compat
v1.0.20220627

https://github.com/WireGuard/wireguard-linux-compat/releases/tag/v1.0.20220627
2022-10-01 23:46:20 +05:30
Adithya R
090619207d Revert "soc/qcom: ssr: Do not panic kernel in any case"
This hack sucks and was never necessary; simply
writing to the subsystem's sysfs node is enough.

This reverts commits 6a16747eb1c3e0c53fd06e710913f14273f831d4 and 011f53380857e4729806931f2a0a4949a9503903.
2022-09-28 15:23:57 +05:30
dianlujitao
5cce9fff1f msm: camera: Use boot clock for recording start time
* Our camera HAL uses boot time for buffer timestamps, rather than
   system monotonic time. This leads to issues, as the framework uses
   system monotonic time as the reference start time for timestamp adjustment.
 * This patch is taken from stock kernel source.

Change-Id: Ia4fac1d48e2206a7befd0030585776371dd8c3a9
Signed-off-by: Subhajeet Muhuri <subhajeet.muhuri@aosip.dev>
2022-09-28 14:12:43 +05:30
Adithya R
6382832227 Merge tag 'LA.UM.9.1.r1-12600-SMxxx0.QSSI12.0' of msm-4.14 2022-09-27 15:33:33 +05:30
wenchangliu
6923712ea8 msm: vidc: Disable decode batching feature
This feature was implemented for power optimization
with OMX but is not well compliant with Codec 2.0.
Disable it to unblock the VT frame drop issue.

Bug: 149071324
Test: VT frame rate test
Change-Id: I187958f4da10d5936c0f0fbd5060301e55ac7f29
Signed-off-by: Wen Chang Liu <wenchangliu@google.com>
2022-09-26 23:28:46 +05:30
Adithya R
0fcf749230 build.sh: Switch to AOSP clang 14.0.6 (r450784d)
This is now the default clang for AOSP 13.
2022-09-24 19:47:11 +05:30
Subhajeet Muhuri
184a3f2595 input: misc: aw8624_haptic: Rename to qti-haptics
* Reference:
   b55d97cf10

Signed-off-by: Subhajeet Muhuri <subhajeet.muhuri@aosip.dev>
Signed-off-by: Forenche <prahul2003@gmail.com>
2022-09-24 13:45:56 +05:30
Rahul Ratneshwar Mandal
a9022f8101 Revert "msm: vidc: fix msm_comm_get_vidc_buffer fd race issue"
Change-Id: I7712924bb8da895b294990f444982c398b9e7571
Signed-off-by: Rahul Ratneshwar Mandal <quic_rratnesh@quicinc.com>
2022-09-20 22:18:19 -07:00
Adithya R
ed8395ab98 build.sh: Append SHA of current git HEAD to zip name
* this makes it easier to identify kernel builds
   and track them down in history, since we already
   do this in localversion

 * example: QuicksilveR-surya-20220219-0000-638c37bb.zip
2022-09-21 01:57:01 +05:30
Hugo Lefeuvre
c6fc82aef7 sched/wait: Use freezable_schedule() when possible
Replace 'schedule(); try_to_freeze();' with a call to freezable_schedule().

Tasks calling freezable_schedule() set the PF_FREEZER_SKIP flag
before calling schedule(). Unlike tasks calling 'schedule();
try_to_freeze();', tasks calling freezable_schedule() are not awakened
by try_to_freeze_tasks(). Instead they call try_to_freeze() when they
wake up, if the freeze is still underway.

This is not a problem, since sleeping tasks can't do anything that isn't
allowed for a frozen task while sleeping.

The result is a potential performance gain during freeze, since fewer
tasks have to be awakened.

For instance, on a bare Debian VM running a 4.19 stable kernel, the
number of tasks skipped in freeze_task() went up from 12 without the
patch to 32 with the patch (out of 448), an increase of more than 2.5x.

Signed-off-by: Hugo Lefeuvre <hle@owl.eu.com>
Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Cc: Joel Fernandes <joel@joelfernandes.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20190207200352.GA27859@behemoth.owl.eu.com.local
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2022-09-21 01:45:20 +05:30
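The before/after pattern this commit describes can be sketched as a user-space C analogue (the freezer primitives are stubbed out and PF_FREEZER_SKIP is modeled as a plain flag; all names here are illustrative, not the kernel's actual implementation):

```c
#include <assert.h>
#include <stdbool.h>

/* Stubs standing in for the kernel primitives (illustrative only). */
static bool pf_freezer_skip;  /* models the PF_FREEZER_SKIP task flag */
static int  freezer_wakeups;  /* how many sleepers the freezer had to wake */

static void schedule(void)      { /* task sleeps and is later woken */ }
static void try_to_freeze(void) { /* task freezes itself if a freeze is underway */ }

/* Old pattern: the task sleeps without PF_FREEZER_SKIP set, so
 * try_to_freeze_tasks() must wake it just so it can freeze itself. */
static void old_wait(void)
{
    schedule();
    freezer_wakeups++;  /* woken by try_to_freeze_tasks() */
    try_to_freeze();
}

/* New pattern: PF_FREEZER_SKIP is set before sleeping, so the freezer
 * skips the task entirely; it calls try_to_freeze() on its own wakeup. */
static void freezable_schedule(void)
{
    pf_freezer_skip = true;
    schedule();
    pf_freezer_skip = false;
    try_to_freeze();
}
```

The gain measured above comes from the second path: the freezer never has to wake tasks sleeping via freezable_schedule().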
Adithya R
e7e50ac3a7 Revert "drm/msm: Speed up interrupt processing upon commit"
Save some juice. We already have pm-qos where necessary.

This reverts commit 68b08e6057525174f2eafcaa9ebaf7ee1848e3fb.
2022-09-21 01:44:34 +05:30
Sultan Alsawaf
c8277f7566 pinctrl: msm: Restore some barriers to prevent reordering of I/O writes
Although data dependencies and one-way, semi-permeable barriers provided by
spin locks satisfy most ordering needs here, it is still possible for some
I/O writes to be reordered with respect to one another in a dangerous way.
One such example is that the interrupt status bit could be cleared *after*
the interrupt is unmasked when enabling the IRQ, potentially leading to a
spurious interrupt if there's an interrupt pending from when the IRQ was
disabled.

To prevent dangerous I/O write reordering, restore the minimum amount of
barriers needed to ensure writes are ordered as intended.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
2022-09-21 01:43:52 +05:30
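The hazard and the fix can be sketched in user-space C, with the MMIO registers modeled as volatile variables and wmb() as a C11 release fence (register names and layout are illustrative, not the actual pinctrl-msm code):

```c
#include <assert.h>
#include <stdatomic.h>

/* Models of two pinctrl registers (illustrative). */
static volatile unsigned int intr_status = 1; /* pending bit left over from
                                                 when the IRQ was disabled */
static volatile unsigned int intr_cfg;        /* bit 0 = interrupt unmasked */

/* Without a barrier between them, the "clear status" and "unmask" writes
 * may reach the hardware in either order; if the unmask lands first, the
 * stale pending bit fires a spurious interrupt. */
static void msm_gpio_irq_enable_sketch(void)
{
    intr_status = 0;                          /* clear the stale pending bit */
    atomic_thread_fence(memory_order_release); /* wmb(): the clear must land
                                                  before the unmask below */
    intr_cfg |= 1u;                           /* unmask the interrupt */
}
```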
Sultan Alsawaf
790cd11b50 memlat: Read perf counters in parallel and reduce system jitter
Sending synchronous IPIs to other CPUs involves spinning with preemption
disabled in order to wait for each IPI to finish. Keeping preemption off
for long periods of time like this is bad for system jitter, not to mention
the perf event IPIs are sent and flushed one at a time for each event for
each CPU rather than all at once for all the CPUs.

Since the way perf events are currently read is quite naive, rewrite it to
make it exploit parallelism and go much faster. IPIs for reading each perf
event are now sent to all CPUs asynchronously so that each CPU can work on
reading the events in parallel, and the dispatching CPU now sleeps rather
than spins when waiting for the IPIs to finish. Before the dispatching CPU
starts waiting though, it works on reading events for itself and then
reading events which can be read from any CPU in order to derive further
parallelism, and then waits for the IPIs to finish afterwards if they
haven't already.

Furthermore, there's now only one IPI sent to read all of a CPU's events
rather than an IPI sent for reading each event, which significantly speeds
up the event reads and reduces the number of IPIs sent.

This also checks for active SCM calls on a per-CPU basis rather than a
global basis so that unrelated CPUs don't get their counter reads skipped
and so that some CPUs can still receive fresh counter readings.

Overall, this makes the memlat driver much faster and more efficient, and
eliminates significant system jitter previously caused by IPI abuse.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Signed-off-by: Adithya R <gh0strider.2k18.reborn@gmail.com>
2022-09-21 01:43:46 +05:30
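The "fire all IPIs asynchronously, then sleep until the last one finishes" scheme can be approximated in user-space C with threads standing in for per-CPU IPI handlers (a single-shot sketch; all names are illustrative and the real driver uses kernel IPI machinery, not pthreads):

```c
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>

#define NCPUS 4

static atomic_int pending = NCPUS;  /* single-use: one read_all cycle */
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t done = PTHREAD_COND_INITIALIZER;
static int counters[NCPUS];

/* Stands in for the per-CPU IPI handler reading that CPU's events. */
static void *read_cpu_events(void *arg)
{
    int cpu = (int)(long)arg;
    counters[cpu] = cpu * 100 + 1;            /* stand-in for a perf read */
    if (atomic_fetch_sub(&pending, 1) == 1) { /* last reader signals */
        pthread_mutex_lock(&m);
        pthread_cond_signal(&done);
        pthread_mutex_unlock(&m);
    }
    return NULL;
}

static void read_all_counters(void)
{
    pthread_t t[NCPUS];
    for (long i = 1; i < NCPUS; i++)   /* fire the "IPIs" asynchronously */
        pthread_create(&t[i], NULL, read_cpu_events, (void *)i);
    read_cpu_events((void *)0L);       /* dispatcher reads its own events */
    pthread_mutex_lock(&m);
    while (atomic_load(&pending) > 0)  /* sleep rather than spin */
        pthread_cond_wait(&done, &m);
    pthread_mutex_unlock(&m);
    for (int i = 1; i < NCPUS; i++)
        pthread_join(t[i], NULL);
}
```

The key properties the commit describes are both visible here: all readers run in parallel, and the dispatcher does useful work before blocking once instead of spinning per-IPI.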
Sultan Alsawaf
563b168da5 Revert "memlat: Optimize perf event reads when possible"
This reverts commit 1d647604a0baf169928deebffe0450132c27f24b.

There are a number of problems with this optimization attempt, such as how
a perf error code could end up as one of the counter values, how a CPU
could enter an SCM call for just one event and produce garbage
computations or a divide-by-zero when calculating the stall percentage,
and how IRQs are disabled as a heavy-handed way of preventing CPU migration.

Toss this commit out so that it can be replaced by something better.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Signed-off-by: Adithya R <gh0strider.2k18.reborn@gmail.com>
2022-09-21 01:43:39 +05:30
Sultan Alsawaf
095d8122eb Revert "soc: qcom: smp2p: Prevent suspend for threaded irq"
This reverts commit 9dcdb6f5ee0ea133f2e0d669743fcb48362ee4c5.

The IRQ subsystem already blocks suspend on waiting for IRQ threads to
finish running (in dpm_noirq_begin()). This PM wakeup does nothing but add
latency to the IRQ handler for non-RT kernels, and it isn't RT-friendly
either:
[   42.466403] BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:974
[   42.466407] in_atomic(): 1, irqs_disabled(): 128, pid: 0, name: swapper/3
[   42.466408] Preemption disabled at:
[   42.466421] [<00000000100c9f7d>] secondary_start_kernel+0xa8/0x130
[   42.466427] CPU: 3 PID: 0 Comm: swapper/3 Tainted: G S      W       4.14.212-rt102-Sultan #1
[   42.466429] Hardware name: Qualcomm Technologies, Inc. SM8150 V2 PM8150 Google Inc. MSM sm8150 Coral (DT)
[   42.466432] Call trace:
[   42.466436]  dump_backtrace+0x0/0x1ac
[   42.466439]  show_stack+0x14/0x1c
[   42.466444]  dump_stack+0x84/0xac
[   42.466448]  ___might_sleep+0x140/0x150
[   42.466452]  rt_spin_lock+0x3c/0x50
[   42.466458]  __pm_stay_awake+0x20/0x50
[   42.466462]  qcom_smp2p_isr+0x10/0x1c
[   42.466467]  __handle_irq_event_percpu+0x60/0xd4
[   42.466469]  handle_irq_event_percpu+0x58/0xb0
[   42.466471]  handle_irq_event+0x68/0xe0
[   42.466474]  handle_fasteoi_irq+0x140/0x1fc
[   42.466476]  generic_handle_irq+0x18/0x2c
[   42.466478]  __handle_domain_irq+0xf8/0xfc
[   42.466481]  gic_handle_irq+0xc8/0x164
[   42.466483]  el1_irq+0xb0/0x130
[   42.466487]  finish_task_switch+0xcc/0x1e4
[   42.466491]  __schedule+0x3f0/0x4e0
[   42.466493]  schedule_idle+0x28/0x44
[   42.466497]  do_idle+0x78/0x230
[   42.466500]  cpu_startup_entry+0x20/0x28
[   42.466502]  secondary_start_kernel+0x124/0x130

Remove it since it's useless.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
2022-09-21 01:43:39 +05:30
Sultan Alsawaf
4456cff077 drm/msm: Eliminate unnecessary snprintf() usage from hot paths
There's no reason to constantly use snprintf() to generate pretty debug
strings from hot paths. We don't need them, so remove them.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Signed-off-by: Adithya R <gh0strider.2k18.reborn@gmail.com>
2022-09-21 01:43:39 +05:30
Sultan Alsawaf
96101dc829 drm/msm: Recycle atomic state allocations to speed up atomic commits
Constantly allocating and freeing all of the data structures associated
with atomic commits adds up and incurs a lot of latency not only when
allocating, but also when freeing. Since we know what the maximum number
of CRTCs, planes, and connectors is, we can skip the constant allocation-
and-free for the same structures and instead just recycle them via a lock-
less list. This also moves the commit cleanup so that it comes after CRTC
waiters are woken up, allowing the ioctl to proceed without waiting around
for some housekeeping to finish.

Since it's difficult to audit which parameters, if any, could exceed the
defined maximums in the msm_kms driver, dynamic allocations are retained as
a fallback so that userspace can't craft a malicious ioctl that results in
buffer overflows.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
2022-09-21 01:43:39 +05:30
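The recycling idea can be sketched in self-contained C11: instead of allocating and freeing per commit, freed state objects are pushed onto a lock-free (Treiber-stack) free list and popped on the next commit, with malloc as the fallback when the list is empty. All names are illustrative; like the kernel's llist_del_first, the pop here assumes a single consumer (the general multi-consumer case has ABA hazards):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>
#include <stdlib.h>

struct state_node {
    struct state_node *next;
    /* ... per-commit atomic state would live here ... */
};

static _Atomic(struct state_node *) free_list;

/* Recycle: push onto the lock-free list instead of freeing. */
static void recycle_state(struct state_node *n)
{
    struct state_node *head = atomic_load(&free_list);
    do {
        n->next = head;
    } while (!atomic_compare_exchange_weak(&free_list, &head, n));
}

/* Acquire: pop a recycled node if available, else fall back to malloc
 * (mirroring how dynamic allocation is retained as a fallback). */
static struct state_node *get_state(void)
{
    struct state_node *head = atomic_load(&free_list);
    while (head &&
           !atomic_compare_exchange_weak(&free_list, &head, head->next))
        ;
    return head ? head : malloc(sizeof(struct state_node));
}
```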
Sultan Alsawaf
f896281951 drm/msm: Remove bogus NULL check in _msm_drm_commit_work_cb()
The work pointer will never be NULL. Remove this check.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
2022-09-21 01:43:39 +05:30
Sultan Alsawaf
aadaf594f3 Revert "drm/msm: Offload commit cleanup onto an unbound worker"
This reverts commit e3cb7f446cdf36b5e64852c18cd05dde34ecf55f.

This creates a large TTWU burden, which isn't worth it for such a small
amount of cleanup. The cleanup will become inconsequential anyway since the
memory allocations will be replaced with buffer recycling.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
2022-09-21 01:43:39 +05:30
Panchajanya1999
399e9d7db7 f2fs/gc: Reduce GC thread urgent sleep time to 50ms
Android sets the value to 50ms via vold's IdleMaint service; the
default of 500ms is too long for GC to collect all invalid segments in
time, which results in performance degradation.

On unencrypted devices, vold fails to set this value to 50ms, which
degrades performance over time.

Based on [1].

[1] https://github.com/topjohnwu/Magisk/pull/5462
Signed-off-by: Panchajanya1999 <rsk52959@gmail.com>
Change-Id: I80f2c29558393d726d5e696aaf285096c8108b23
Signed-off-by: Panchajanya1999 <rsk52959@gmail.com>
2022-09-21 01:43:39 +05:30
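What vold's IdleMaint service does amounts to a plain sysfs write, which can be approximated in C as below (the node path varies per device and is illustrative here):

```c
#include <assert.h>
#include <stdio.h>

/* Write an integer to an f2fs sysfs tunable, e.g.
 * /sys/fs/f2fs/<dev>/gc_urgent_sleep_time (path is illustrative).
 * Returns 0 on success, -1 on failure. */
static int write_sysfs_int(const char *node, int value)
{
    FILE *f = fopen(node, "w");
    if (!f)
        return -1;
    int ok = fprintf(f, "%d", value) > 0;
    return (fclose(f) == 0 && ok) ? 0 : -1;
}
```

On a device this would be called as, for example, `write_sysfs_int("/sys/fs/f2fs/dm-0/gc_urgent_sleep_time", 50)` (device node name assumed).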
Sam Tebbs
6493f7b348 arm64: lib: Import latest version of Arm Optimized Routines' strncmp
Import the latest version of the Arm Optimized Routines strncmp function based
on the upstream code of string/aarch64/strncmp.S at commit 189dfefe37d5 from:
  https://github.com/ARM-software/optimized-routines

This latest version includes MTE support.

Note that for simplicity Arm have chosen to contribute this code to Linux under
GPLv2 rather than the original MIT OR Apache-2.0 WITH LLVM-exception license.
Arm is the sole copyright holder for this code.

Co-authored-by: Joey Gouly <joey.gouly@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20220301101435.19327-3-joey.gouly@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Divyanshu-Modi <divyan.m05@gmail.com>
Signed-off-by: Adithya R <gh0strider.2k18.reborn@gmail.com>
2022-09-21 01:43:25 +05:30
Joey Gouly
48261abe57 arm64: lib: Import latest version of Arm Optimized Routines' strcmp
Import the latest version of the Arm Optimized Routines strcmp function based
on the upstream code of string/aarch64/strcmp.S at commit 189dfefe37d5 from:
  https://github.com/ARM-software/optimized-routines

This latest version includes MTE support.

Note that for simplicity Arm have chosen to contribute this code to Linux under
GPLv2 rather than the original MIT OR Apache-2.0 WITH LLVM-exception license.
Arm is the sole copyright holder for this code.

Signed-off-by: Joey Gouly <joey.gouly@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20220301101435.19327-2-joey.gouly@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Divyanshu-Modi <divyan.m05@gmail.com>
Signed-off-by: Adithya R <gh0strider.2k18.reborn@gmail.com>
2022-09-21 01:43:20 +05:30
Sultan Alsawaf
7af781c785 soc: qcom: watchdog_v2: Optimize IPI pings to reduce system jitter
Sending synchronous IPIs to other CPUs involves spinning with preemption
disabled in order to wait for each IPI to finish. Keeping preemption off
for long periods of time like this is bad for system jitter, not to mention
the watchdog's IPIs are sent and flushed one at a time for each CPU rather
than all at once for all the CPUs to be pinged.

Since the existing IPI ping machinery is quite lacking, rewrite it entirely
to address all of its performance shortcomings. This not only replaces the
synchronous IPIs with asynchronous ones, but also allows the IPIs to run in
parallel. The IPI ping and wait mechanisms are now much more efficient via
the use of generic_exec_single() (since smp_call_function_single_async()
disables preemption when all it really needs is migration disabled), and
by sleeping rather than spinning while waiting for the IPIs to finish.

This also does away with the ping_start and ping_end arrays as they don't
make sense with the parallel, asynchronous execution of the IPIs anymore.
They are instead replaced by a mask indicating which CPUs were pinged so
that a watchdog bark can still print out which CPU(s) stopped responding.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Signed-off-by: Adithya R <gh0strider.2k18.reborn@gmail.com>
2022-09-21 01:43:09 +05:30
Arnd Bergmann
eddbff5260 smp: Fix smp_call_function_single_async prototype
commit 1139aeb1c521eb4a050920ce6c64c36c4f2a3ab7 upstream.

As of commit 966a967116e6 ("smp: Avoid using two cache lines for struct
call_single_data"), the smp code prefers 32-byte aligned call_single_data
objects for performance reasons, but the block layer includes an instance
of this structure in the main 'struct request' that is more sensitive
to size than to performance here, see 4ccafe032005 ("block: unalign
call_single_data in struct request").

The result is a violation of the calling conventions that clang correctly
points out:

block/blk-mq.c:630:39: warning: passing 8-byte aligned argument to 32-byte aligned parameter 2 of 'smp_call_function_single_async' may result in an unaligned pointer access [-Walign-mismatch]
                smp_call_function_single_async(cpu, &rq->csd);

It does seem that the usage of the call_single_data without cache line
alignment should still be allowed by the smp code, so just change the
function prototype so it accepts both, but leave the default alignment
unchanged for the other users. This seems better to me than adding
a local hack to shut up an otherwise correct warning in the caller.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Jens Axboe <axboe@kernel.dk>
Link: https://lkml.kernel.org/r/20210505211300.3174456-1-arnd@kernel.org
[nc: Fix conflicts]
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-09-21 01:43:03 +05:30
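The layout at the heart of the warning can be reproduced in a few lines of self-contained C on a 64-bit target (kernel types simplified to four pointer-sized fields; the kernel's call_single_data_t is likewise a 32-byte-aligned alias of the plain struct):

```c
#include <assert.h>
#include <stdalign.h>
#include <stddef.h>

/* Simplified model: four pointer-sized fields = 32 bytes on 64-bit, and
 * the smp code uses a 32-byte-aligned alias of it for performance. */
struct call_single_data { void *node_prev, *node_next, *func, *info; };
typedef struct call_single_data call_single_data_t
        __attribute__((aligned(sizeof(struct call_single_data))));

/* The block layer embeds the *plain* struct to keep struct request
 * small, so its csd member is only pointer-aligned: */
struct request {
    char tag;                    /* pushes csd off a 32-byte boundary */
    struct call_single_data csd;
};
```

Passing `&rq->csd` (8-byte aligned) to a function whose parameter is the 32-byte-aligned typedef is exactly the mismatch clang flags, and widening the prototype to accept the plain struct is what the commit does.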
Michael Adisumarta
64d3905d30 msm: ipa: Use detach/attach netif instead of stop/wakeup
Detach and attach the netif instead of stopping and waking it up, and
also update the transfer timer.

Change-Id: I9d589b7f9f6fe98f778df509d3c16f339dfdeea1
Signed-off-by: Michael Adisumarta <madisuma@codeaurora.org>
Signed-off-by: Andrzej Perczak <linux@andrzejperczak.com>
Signed-off-by: Adithya R <gh0strider.2k18.reborn@gmail.com>
2022-09-21 01:42:17 +05:30
Jordan Crouse
19ab2d7722 msm: kgsl: Use DMA APIs for memory pool cache maintenance
After allocating and zeroing pages from system memory or the pool we need
to ensure that the cache is synchronized so that it doesn't cause problems
down the road.  Use dma_sync_sg_for_device to make sure the allocated pages
are clean. This isn't the best way to handle this but we haven't yet come
up with a better way and this does the job.

xNombre: Backport to 4.14; should help by avoiding expensive page
zeroing via memset in favor of the dedicated clear_page.

Inspired by: 2b085f4448

Change-Id: Ic0dedbade48b700015bec172cf9b64e436364b4a
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Andrzej Perczak <linux@andrzejperczak.com>
Signed-off-by: Adithya R <gh0strider.2k18.reborn@gmail.com>
2022-09-21 01:42:11 +05:30
Adithya R
829b3c10d4 techpack: audio: Merge tag 'LA.UM.9.1.r1-12300-SMxxx0.0' 2022-09-21 01:38:03 +05:30
Adithya R
ae62f2486f fw-api: Merge tag 'LA.UM.9.1.r1-12300-SMxxx0.0' 2022-09-21 01:37:43 +05:30
Adithya R
d698d6240f qca-wifi-host-cmn: Merge tag 'LA.UM.9.1.r1-12300-SMxxx0.0' 2022-09-21 01:37:04 +05:30
Adithya R
2f87a17290 qcacld-3.0: Merge tag 'LA.UM.9.1.r1-12300-SMxxx0.0' 2022-09-21 01:35:42 +05:30
Adithya R
8960a882bf Merge tag 'LA.UM.9.1.r1-12300-SMxxx0.0' of msm-4.14 2022-09-21 01:33:22 +05:30
Linux Build Service Account
557aee5185 Merge 3673f463433218d7a98aac597a4396cdc4868558 on remote branch
Change-Id: I10b2d164fba19a033c96b4ce056f43a9e9161c8b
2022-09-19 07:43:53 -07:00
qctecmdr
3673f46343 Merge "msm: camera: memmgr: update correct length in bufq" 2022-09-08 01:03:36 -07:00
Om Parkash
bd6a9de898 msm: camera: Increase the total number of camera ID's supported
Increase the total number of camera IDs supported.

Change-Id: Icf9cf2e48757d437a2b7dbbe618e886dc0203f6d
Signed-off-by: Om Parkash <quic_oparkash@quicinc.com>
2022-09-05 13:25:08 -07:00
Om Parkash
c6b6268c6a ARM: dts: msm: Add support for ToF camera sensor on trinket
Add support for ToF camera sensor on trinket.

Change-Id: I184862d787ecace17337054cbda3d32681f86c5d
Signed-off-by: Om Parkash <quic_oparkash@quicinc.com>
2022-09-06 01:22:21 +05:30
Linux Build Service Account
244639867d Merge 117cd61ffafac44ec276d60f74c8f1559a79715a on remote branch
Change-Id: Ia39e986c541415e28b9cfac3490a959a0808f768
2022-09-04 08:32:54 -07:00
Linux Build Service Account
80ca5054e7 Merge 2716194c4dd5e975fcdad5e31d0071c5bf802e76 on remote branch
Change-Id: I8848e7158c8088cb0809bd1fde04606508a9a08b
2022-09-04 08:24:46 -07:00
Vamsi Krishna Gattupalli
334d071f5e msm: ADSPRPC: Update unsigned pd support on cDSP from kernel
Query for unsigned PD support on the cDSP domain and update
the unsigned_support flag during the fastrpc_init process.

Change-Id: I61f4c748ad08155f418422183acc8473a7b0e0a8
Signed-off-by: Vamsi Krishna Gattupalli <quic_vgattupa@quicinc.com>
2022-09-01 08:29:59 +05:30
qctecmdr
f0d40e8ee2 Merge "msm: nfc: maxim nfc driver documentation" 2022-08-30 05:32:20 -07:00
qctecmdr
12a7013002 Merge "msm: ADSPRPC: Handle third party applications" 2022-08-29 22:45:10 -07:00
vinay
b11c6bb227 msm: nfc: maxim nfc driver documentation
Documentation for maxim nfc driver

Change-Id: I63a7d66380a89aaf1dbd89120f3f6a0290afe526
Signed-off-by: vinay <quic_vinak@quicinc.com>
2022-08-29 17:41:03 +05:30
Jeya R
a51fdb69fd msm: ADSPRPC: Handle third party applications
Reject the session when third-party applications
try to spawn a signed PD and the channel is configured as secure.

Change-Id: Ic450a8c7dad430dfcdc4ae7354e29e63d9fae4a3
Acked-by: Krishnaiah Tadakamalla <ktadakam@qti.qualcomm.com>
Signed-off-by: Jeya R <jeyr@codeaurora.org>
2022-08-29 02:59:00 -07:00
Vamsi krishna Gattupalli
37e71216b5 msm: ADSPRPC: Substitute vfs check with flags
To check whether the DSP is supported, we make a
VFS call for the subsystem device node. This node
is not accessible to untrusted applications.
Use the subsystem status flag instead to avoid
permission issues and return a proper error
in case the subsystem is not up.

Change-Id: Ia19e31b899600e5d765c0a3582bdf9132c9b67bf
Acked-by: Ekansh Gupta <ekangupt@qti.qualcomm.com>
Signed-off-by: Vamsi krishna Gattupalli <vgattupa@codeaurora.org>
2022-08-29 00:15:49 -07:00
qctecmdr
cf56ed61cc Merge "msm: camera: reqmgr: reader writer locks to avoid memory faults" 2022-08-26 03:11:00 -07:00
qctecmdr
2716194c4d Merge "Revert "input: touchscreen: focaltech: support FT5446DQS"" 2022-08-25 04:26:53 -07:00
Tejas Prajapati
0a072473bb msm: camera: reqmgr: reader writer locks to avoid memory faults
Shared memory is initialized by CRM and used by
other drivers; with CRM not active, other drivers
would fail to access the shared memory if the
memory manager is deinitialized. Reader-writer locks
can prevent the open/close/ioctl calls from other
drivers if CRM open/close is already being processed.

The issue was observed with the below sequence when drivers
are opened from UMD directly without this change:
CRM open successful, ICP open successful,
CRM close in progress, ICP open successful,
mem mgr deinit and CRM close successful,
ICP tries to access HFI memory and crashes.

This change helps to serialize the calls and prevents the
issue.

CRs-Fixed: 3019488
Change-Id: If260a948918ffc1e7d8e20564fe7434731c8da9e
Signed-off-by: Tejas Prajapati <quic_tpraja@quicinc.com>
2022-08-25 11:50:37 +05:30
Umang Chheda
e14a4374d2 power: supply: smb5: PMI632: Add change to support uusb and DCIN
PMI632 does not have a DCIN input. Add changes to
detect DCIN or uUSB insertion via GPIO and enable
charging accordingly.

Change-Id: I4f5600041048d2fe1bfd8ded030fb718e55beaca
Signed-off-by: Umang Chheda <quic_uchheda@quicinc.com>
2022-08-23 05:33:36 -07:00