793614 Commits

Eric Dumazet
1bb48c365e tcp_bbr: fix bbr pacing rate for internal pacing
This commit makes BBR use only the MSS (without any headers) to
calculate pacing rates when internal TCP-layer pacing is used.

This is necessary to achieve the correct pacing behavior in this case,
since tcp_internal_pacing() uses only the payload length to calculate
pacing delays.

Signed-off-by: Kevin Yang <yyd@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Danny Lin <danny@kdrag0n.dev>
2021-06-11 23:23:26 +05:30
Yousuk Seung
b3bae33754 net-tcp_bbr: set tp->snd_ssthresh to BDP upon STARTUP exit
Set tp->snd_ssthresh to BDP upon STARTUP exit. This allows us
to check if a BBR flow exited STARTUP and the BDP at the
time of STARTUP exit with SCM_TIMESTAMPING_OPT_STATS. Since BBR does not
use snd_ssthresh this fix has no impact on BBR's behavior.

Signed-off-by: Yousuk Seung <ysseung@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Priyaranjan Jha <priyarjha@google.com>
Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Danny Lin <danny@kdrag0n.dev>
2021-06-11 23:23:26 +05:30
Eric Dumazet
a3dbf070a5 tcp_bbr: remove bbr->tso_segs_goal
Its value is computed and then immediately used;
there is no need to store it.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Danny Lin <danny@kdrag0n.dev>
2021-06-11 23:23:26 +05:30
Eric Dumazet
9738ee3614 tcp_bbr: better deal with suboptimal GSO (II)
This is the second part of dealing with suboptimal device GSO parameters.
In the first patch (350c9f484bde "tcp_bbr: better deal with suboptimal GSO")
we dealt with devices having a low gso_max_segs.

Some devices lower gso_max_size from 64 KB to 16 KB (r8152 is an example).

In order to probe an optimal cwnd, we want BBR to be insensitive
to whatever GSO constraints a device may have.

This patch removes the tso_segs_goal() CC callback in favor of
min_tso_segs() for CCs wanting to override sysctl_tcp_min_tso_segs.

The next patch will remove bbr->tso_segs_goal since it does not have
to be persistent.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Danny Lin <danny@kdrag0n.dev>
2021-06-11 23:23:26 +05:30
Yuchung Cheng
80d34180d6 tcp: avoid min RTT bloat by skipping RTT from delayed-ACK in BBR
A persistent connection may send a tiny amount of data (e.g. health checks)
for a long period of time. BBR's windowed min RTT filter may only see
RTT samples from delayed ACKs, causing BBR to grossly over-estimate
the path delay depending on how much the ACK was delayed at the receiver.

This patch skips RTT samples that are likely coming from delayed ACKs. Note
that it is possible the sender never obtains a valid measure to set the
min RTT. In this case BBR will continue to set cwnd to the initial window,
which seems fine because the connection is a thin stream.

Signed-off-by: Yuchung Cheng <ycheng@google.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Acked-by: Priyaranjan Jha <priyarjha@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Danny Lin <danny@kdrag0n.dev>
Change-Id: I758447f7bb6c764ec73899a9e8d84a87c7bfe2d0
2021-06-11 23:23:25 +05:30
Yuchung Cheng
f02fa782b2 tcp: avoid min-RTT overestimation from delayed ACKs
This patch avoids having the TCP sender or congestion control
overestimate the min RTT by orders of magnitude. This happens when
all the samples in the windowed filter are one-packet transfers,
like small requests and health-check chit-chat, which is fairly
common for applications using persistent connections. This patch
conservatively labels and skips RTT samples obtained from
this type of workload.

Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Danny Lin <danny@kdrag0n.dev>
2021-06-11 23:23:25 +05:30
Erfan Abdi
06ae934a63 power: supply: Rename hvdcp 3.5 to hvdcp3 to get recognized in AOSP
Change-Id: I188e0f3728176a253ec9e03601727aa0bf22ce7c
Signed-off-by: Arian <arian.kulmer@web.de>
2021-06-11 23:21:51 +05:30
Yaroslav Furman
3c30ae8cbf simple_lmk: Do not flood the log when we're stuck
Signed-off-by: Adithya R <gh0strider.2k18.reborn@gmail.com>
2021-06-11 17:10:24 +05:30
Panchajanya1999
82b2a762d2 msm: kgsl: Place worker thread on SCHED_RR
In commit 254852b5b04c02668c9288676c9640bb1d46b2ec
(msm: kgsl: Increase worker thread priority), we increased
the thread's priority to render frames faster.

Since a lot of display-related tasks are multiplexed onto a
dedicated KGSL thread, they are serviced in FIFO order, which is
time-consuming since not all of the queued tasks have a known or
small execution period.

Switching to Round-Robin eliminates this problem by bounding
these tasks to a time slice, so every task gets executed after
a certain time, as fast as possible.

Change-Id: I2b5137a5c6fdcc7fef7cafcc3e5728eab0034045
Signed-off-by: Adithya R <gh0strider.2k18.reborn@gmail.com>
2021-06-11 17:10:24 +05:30
Daniel Jacob Chittoor
d5b48ea09d ARM64/dts: qcom: Disable CoreSight DT entries for sdmmagpie
CoreSight is a set of trace and debug hardware by Arm Ltd. used on
their SoC architectures. It is very instrumental in early bring-up or
in the development stage of boards. These tracing features are useful
in such scenarios, but they come with heavy overhead.

CoreSight tracing configs are enabled by default in perf defconfigs, but
they are recommended to be disabled on production devices per the
80-PJ035-1 article due to their heavy overhead: even when these debugging
configs are disabled, these DT entries leave clocks on, which in turn
introduces power regressions.

This commit marks the JTAG driver nodes as disabled, removes the inclusion
of the CoreSight devicetree with its properties and mappings, and also
removes other nodes that become redundant due to this change.

Signed-off-by: Daniel Jacob Chittoor <djchittoor47@gmail.com>
Signed-off-by: Forenche <prahul2003@gmail.com>
2021-06-09 19:26:35 +05:30
Jebaitedneko
a37368c132 hwtracing: coresight: Add entries from sdmmagpie-coresight
cat sdmmagpie-coresight.dtsi | grep primecell-periphid | cut -c29- | sed "s/>;//g;s/^/ETM4x_AMBA_ID(/g;s/$/),/g" | sort -u

[forenche: add clocks for sdmmagpie]
Signed-off-by: Forenche <prahul2003@gmail.com>
Signed-off-by: Adithya R <gh0strider.2k18.reborn@gmail.com>
2021-06-09 19:26:09 +05:30
Arian
578a898bcd cpufreq: Ensure the min_freq is lower than max_freq
* Libperfmgr raises the minimum frequency to 9999999 in order to boost
  the CPU to its maximum frequency. This usually works because it also
  raises the maximum frequency to 9999999 at init. However, if the
  maximum frequency is lowered afterwards, which mi_thermald does,
  setting the minimum frequency to 9999999 fails because it exceeds
  the maximum frequency.

* Allow setting a minimum frequency higher than the maximum frequency
  (and a maximum lower than the minimum) by adjusting the minimum
  frequency down whenever it exceeds the maximum frequency.

Change-Id: I25b7ccde714aac14c8fdb9910857c3bd38c0aa05
Signed-off-by: Forenche <prahul2003@gmail.com>
Signed-off-by: Adithya R <gh0strider.2k18.reborn@gmail.com>
2021-06-09 19:25:49 +05:30
Vincent Palomares
2f1b8faf8c scsi: ufs: Switch to async suspend/resume callbacks
The UFS callback is the most time-consuming callback in the dpm_resume
section of kernel resume, taking around 30 ms. Making it async
improves resume latency by around 20 ms, and helps decrease
suspend times as well.

Bug: 134704391
Change-Id: I708c8a7bc8f2250d6b2365971ccc394c7fbf8896
Signed-off-by: Vincent Palomares <paillon@google.com>
Signed-off-by: Adithya R <gh0strider.2k18.reborn@gmail.com>
2021-06-06 21:02:34 +05:30
Sultan Alsawaf
e44dcfece7 scsi: ufs: Only apply pm_qos to the CPU servicing UFS interrupts
Applying pm_qos restrictions to multiple CPUs which aren't used for UFS
processing is a waste of power. Instead, only apply the pm_qos
restrictions to the CPU that services the UFS interrupts, to save power.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
2021-06-06 21:02:34 +05:30
Sultan Alsawaf
acda0cab56 scsi: ufs: Remove 10 ms CPU idle latency unvote timeout
This forces the CPU to stay out of deep idle states for far longer than
necessary, which wastes power. Just unvote immediately when requested.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
2021-06-06 21:02:34 +05:30
Adithya R
21be0f2f41 scsi: ufs: Checkout driver to LA.UM.9.1.r1-10200-SMxxx0.0 2021-06-06 21:02:34 +05:30
Adithya R
e57f496302 Revert "kernel: Warn when an IRQ's affinity notifier gets overwritten"
This reverts commit d0a6c7c0aaa7db78e2d727e408263c8676df0627.
2021-06-06 21:02:31 +05:30
Adithya R
ae6b3e3951 mm: vmstat: Increase vmstat interval to 30s 2021-06-05 15:05:23 +05:30
Vincent Guittot
5b5a7fcf0a sched/fair: Fix unnecessary increase of balance interval
In the case of active balancing, we increase the balance interval to cover
pinned-task cases not covered by the all_pinned logic. Nevertheless, active
migration triggered by asym packing should be treated as the normal
unbalanced case and reset the interval to its default value; otherwise,
active migration for asym_packing can easily be delayed for hundreds of ms
by this pinned-task detection mechanism.

The same happens to other conditions tested in need_active_balance() like
misfit task and when the capacity of src_cpu is reduced compared to
dst_cpu (see comments in need_active_balance() for details).

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: valentin.schneider@arm.com
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Jesse Chan <jc@linux.com>
Signed-off-by: billaids <jimmy.nelle@hsw-stud.de>
2021-06-05 15:05:23 +05:30
Vincent Guittot
b5dda4ead1 sched/fair: Trigger asym_packing during idle load balance
Newly idle load balancing is not always triggered when a CPU becomes idle.
This prevents the scheduler from getting a chance to migrate the task
for asym packing.

Enable active migration during idle load balance too.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: valentin.schneider@arm.com
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Jesse Chan <jc@linux.com>
Signed-off-by: billaids <jimmy.nelle@hsw-stud.de>
2021-06-05 15:05:23 +05:30
Peter Zijlstra
d7b5284e20 sched/core: Ensure load_balance() respects the active_mask
While load_balance() masks the source CPUs against active_mask, it had
a hole against the destination CPU. Ensure the destination CPU is also
part of the 'domain-mask & active-mask' set.

Reported-by: Levin, Alexander (Sasha Levin) <alexander.levin@verizon.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: 77d1dfda0e79 ("sched/topology, cpuset: Avoid spurious/wrong domain rebuilds")
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2021-06-05 15:05:23 +05:30
Frederic Weisbecker
1f183ca327 sched: Use fair:prio_changed() instead of ad-hoc implementation
set_user_nice() implements its own version of fair::prio_changed() and
therefore misses a specific optimization for nohz_full CPUs that
avoids sending a resched IPI to a reniced task running alone. Use the
proper callback instead.

Change-Id: I51ba67826dfcec0aa423758281943c01ba267c91
Reported-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20191203160106.18806-3-frederic@kernel.org
Signed-off-by: mydongistiny <jaysonedson@gmail.com>
Signed-off-by: DennySPB <dennyspb@gmail.com>
2021-06-05 15:05:23 +05:30
Vincent Guittot
2249e7303b sched/fair: Fix the update of blocked load when newly idle
With commit:

  31e77c93e432 ("sched/fair: Update blocked load when newly idle")

... we release the rq->lock when updating blocked load of idle CPUs.

This opens a time window during which another CPU can add a task to this
CPU's cfs_rq.

The check for newly added task of idle_balance() is not in the common path.
Move the out label to include this check.

Reported-by: Heiner Kallweit <hkallweit1@gmail.com>
Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: 31e77c93e432 ("sched/fair: Update blocked load when newly idle")
Link: http://lkml.kernel.org/r/20180426103133.GA6953@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2021-06-05 15:05:23 +05:30
Brendan Jackman
106179d858 FROMLIST: sched/fair: Update blocked load from newly idle balance
We now have a NOHZ kick to avoid the load of idle CPUs becoming stale. This is
good, but it brings about CPU wakeups, which have an energy cost. As an
alternative to waking CPUs up to decay blocked load, we can sometimes do it
from newly idle balance. If the newly idle balance is on a domain that covers
all the currently nohz-idle CPUs, we push the value of nohz.next_update into
the future. That means that if such newly idle balances happen often enough,
we never need to wake up a CPU just to update load.

Since we're doing this new update inside a for_each_domain, we need to do
something to avoid doing multiple updates on the same CPU in the same
idle_balance. A tick stamp is set on the rq in update_blocked_averages as a
simple way to do this. Using a simple jiffies-based timestamp, as opposed to the
last_update_time of the root cfs_rq's sched_avg, means we can do this without
taking the rq lock.

Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Morten Rasmussen <morten.rasmussen@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Brendan Jackman <brendan.jackman@arm.com>
Change-Id: I39423091e6bf789c1579cb431930c449a3c8239a
[merge conflicts]
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
2021-06-05 15:05:23 +05:30
Wanpeng Li
6e09764af4 BACKPORT: sched/nohz: Optimize get_nohz_timer_target()
On a machine where CPU 0 is used for housekeeping, the other 39 CPUs in the
same socket are in nohz_full mode. With ftrace we can observe huge time
burned in the loop searching for the nearest busy housekeeping CPU:

  2)               |                        get_nohz_timer_target() {
  2)   0.240 us    |                          housekeeping_test_cpu();
  2)   0.458 us    |                          housekeeping_test_cpu();

  ...

  2)   0.292 us    |                          housekeeping_test_cpu();
  2)   0.240 us    |                          housekeeping_test_cpu();
  2)   0.227 us    |                          housekeeping_any_cpu();
  2) + 43.460 us   |                        }

This patch optimizes the search logic by finding the nearest housekeeping
CPU in the housekeeping cpumask; it minimizes the worst-case search time
from ~44 us to < 10 us in my testing. In addition, the last iterated busy
housekeeper can become a random candidate, while the current CPU is a
better fallback if it is a housekeeper.

Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lkml.kernel.org/r/1578876627-11938-1-git-send-email-wanpengli@tencent.com
Signed-off-by: DennySPB <dennyspb@gmail.com>
2021-06-05 15:05:23 +05:30
Tyler Nijmeh
5624867ff4 genirq: Use interruptible wait
Allow this task to be preempted in order to reduce latency.

Signed-off-by: Tyler Nijmeh <tylernij@gmail.com>
2021-06-05 15:05:23 +05:30
Tyler Nijmeh
16268bd812 mm: Increase watermark scale factor
By default, kswapd awakens once only 0.1% of memory is available. For
heavy multitasking and unpredictable operating systems such as Android,
it is difficult to predict if userspace will request a large chunk of
memory in a short period of time.

If the kernel runs out of available space and kswapd is not triggered
soon enough, direct reclamation will be required, which is taxing and
expensive for the system.

To avoid direct reclamation, we can awaken kswapd sooner. Increase the
watermark scale factor to 1% instead of 0.1% of remaining available
memory. For an 8 GB system, this would be 80 MB of remaining memory.

Referenced sources: https://source.android.com/devices/tech/perf/low-ram

Signed-off-by: Tyler Nijmeh <tylernij@gmail.com>
2021-06-05 15:05:23 +05:30
Tyler Nijmeh
c60973730e media: v4l: Use interruptible waits
Allow these tasks to be preempted in order to reduce latency.

Signed-off-by: Tyler Nijmeh <tylernij@gmail.com>
2021-06-05 15:05:23 +05:30
John Galt
fc813993d5 msm: kgsl: Make mem workqueue freezable
Freezing this workqueue when there is no interactivity may benefit power.
2021-06-05 15:05:23 +05:30
THEBOSS619
6b7418fee2 block/loop: Mark worker kthread as performance critical
Modified for new boost api, also bind to big cluster.

Signed-off-by: Adithya R <gh0strider.2k18.reborn@gmail.com>
2021-06-05 15:05:17 +05:30
THEBOSS619
ea8b24c397 rcu/update: Mark tasks kthread as performance critical
RealJohnGalt: updated for new api

Signed-off-by: Adithya R <gh0strider.2k18.reborn@gmail.com>
2021-06-05 15:03:32 +05:30
Ajay Agarwal
73cdddd795 usb: dwc3: gadget: Bail out for short packets/ZLPs only if IOC is not set
Currently the driver explicitly clears the HWO bit of the TRB
prepared for a short packet or ZLP for EPs in both directions,
and returns with a non-zero code, which breaks cleanup of completed
requests. The driver does this because it is written with the
understanding that the HW does not clear HWO for such packets.
But databook section 4.2.3.2 says that the HW does not clear
HWO for such packets on the OUT EP only.

Now consider a device-to-host (USB IN) use case like SW NCM, where
the u_ether driver looks for an interrupt on every 5th request (say).
In between these 5 requests, if there is any packet that is a
multiple of MPS, then a ZLP TRB will also be prepared. When the event
is received for the 5th request, we will only handle up to the ZLP
TRB because of the faulty logic described above. And only on the
event for the 10th request do we handle the 5th request (because of
IOC set on the 5th request). So, clearly the TRB cleanup logic is
lagging behind the HW actually completing the requests. When this
mismatch happens multiple times, there eventually comes a time
when the SW does not have any free requests anymore, and hence a
stall will be seen.

Fix this by returning with a non-zero code only if the IOC bit is
set for the short packet or ZLP TRB, so that the cleanup routine
continues up to the TRB for which the event was raised.

Change-Id: I984e6de383993fc3c2da6b74147d6f50e081de34
Signed-off-by: Ajay Agarwal <ajaya@codeaurora.org>
Signed-off-by: UtsavBalar1231 <utsavbalar1231@gmail.com>
2021-06-05 14:56:42 +05:30
Pratham Pratap
091bb9cf07 usb: gadget: Don't giveback request if ep command times out
Currently the driver gives requests back to the gadget even if
the active transfers are not stopped. This might lead the
controller to access requests which are already unmapped.
This change makes giving requests back to the gadget conditional
on the ep command status. Also, if the ep command times out
and the controller doesn't stop correctly, mark it as an
error event and restart the USB session.

Change-Id: If32cddddf0544140d5bdf68df9144702e00dc758
Signed-off-by: Pratham Pratap <prathampratap@codeaurora.org>
Signed-off-by: UtsavBalar1231 <utsavbalar1231@gmail.com>
2021-06-05 14:56:42 +05:30
Sriharsha Allenki
2b7fbd15f9 usb: gadget: gsi: Ensure the doorbell is blocked before suspend
The doorbell is blocked as part of gsi_suspend, but back-to-back
resume and suspend calls can cause a race where resume
handling overwrites the blocking of the doorbell, leading
to the controller accessing the IPA doorbell after suspend.
Here is the series of events causing the issue:
gsi_resume
	queue_work with EVT_RESUME
gsi_suspend
	Block the doorbell
	queue_work with EVT_SUSPEND
resume_work_handler
	xdci_resume
	Unblock the doorbell
suspend_work_handler
	xdci_suspend
Fix this by ensuring that the doorbell is blocked
before the suspend call to IPA as part of suspend handling.

Change-Id: I4d0254c88ed3bec6338d040480b5df2e3f81251e
Signed-off-by: Sriharsha Allenki <sallenki@codeaurora.org>
Signed-off-by: UtsavBalar1231 <utsavbalar1231@gmail.com>
2021-06-05 14:56:42 +05:30
Veera Vegivada
77b0f7153a soc: qcom: msm_bus: Use kzalloc instead of kmalloc
kzalloc works like kmalloc, but also zeroes the memory.

Change-Id: Ic1675c3e20af5f97cfefb2e746e1480932348e9b
Signed-off-by: Veera Vegivada <vvegivad@codeaurora.org>
Signed-off-by: UtsavBalar1231 <utsavbalar1231@gmail.com>
2021-06-05 14:56:41 +05:30
Veera Vegivada
439737f5c4 soc: qcom: msm_bus: Memset memory allocated by kzalloc
kzalloc may return memory with garbage values in some
cases, which can cause issues with NULL checks.
So memset it with 0.

Change-Id: I279dc365883d44e67933fef9ee509092154e37f5
Signed-off-by: Veera Vegivada <vvegivad@codeaurora.org>
Signed-off-by: UtsavBalar1231 <utsavbalar1231@gmail.com>
2021-06-05 14:56:41 +05:30
Sandeep Singh
e240ef136b icnss: Serialize the driver remove in modem graceful shutdown
Add code to post an unregister-driver event during modem
graceful shutdown instead of calling driver
remove directly.

Change-Id: Ie8b7699bf4e9e346279feede68022cda20f93a69
Signed-off-by: Sandeep Singh <sandsing@codeaurora.org>
Signed-off-by: UtsavBalar1231 <utsavbalar1231@gmail.com>
2021-06-05 14:56:41 +05:30
Can Guo
281baea79a scsi: pm: Leave runtime resume alone during system resume/thaw/restore
Runtime resume is handled by runtime PM framework, no need to forcibly
set runtime PM status to RPM_ACTIVE during system resume/thaw/restore.

Change-Id: Icc798b7ea2f5926856a2606ca1d0176093108cf6
Signed-off-by: Can Guo <cang@codeaurora.org>
Signed-off-by: UtsavBalar1231 <utsavbalar1231@gmail.com>
2021-06-05 14:56:41 +05:30
Jishnu Prakash
a2e3f1fcbd regulator: qpnp-lcdb: Fix race between SC interrupt and lcdb enable
During the handling of a SC interrupt, in the handler API, the LCDB
module is disabled and reenabled. During the reenablement, it is
possible that another SC interrupt may be triggered and it reads a
status indicating SC before enabling is complete, which would lead to
the handler API being called again.
Fix this by reading the status register in SC interrupt handler in
the same mutex used by the handler API, which is released only after
LCDB enable is complete, which can help to avoid this issue.

Change-Id: I72505f53a489e7e79e68f28aa581e5a25dffbfa8
Signed-off-by: Jishnu Prakash <jprakash@codeaurora.org>
Signed-off-by: UtsavBalar1231 <utsavbalar1231@gmail.com>
2021-06-05 14:56:41 +05:30
Kavya Nunna
24eee19ad2 power: smb5-lib: Fix race conditions for typec power role
Currently power_role is accessed by the set_prop and get_prop
functions of typec_power_role without any locking mechanism.
There can be a scenario where both function calls are invoked
simultaneously and the power_role variable is not correctly
updated, which leads to enumeration issues when connected to a PC.

Fix it by adding locking mechanism to power_role.

Change-Id: I4f5dc38a9536b535510dc2b112712a5cbd2b3f84
Signed-off-by: Kavya Nunna <knunna@codeaurora.org>
Signed-off-by: UtsavBalar1231 <utsavbalar1231@gmail.com>
2021-06-05 14:56:41 +05:30
Armaan Siddiqui
47c807a1e0 msm: gsi: Using kzalloc instead of devm_kzalloc
devm_kzalloc causes a memory allocation failure here, so use
kzalloc to avoid the failure.

Change-Id: I85befc8c2b06ce74419e4e508fc982ff4df5a343
Signed-off-by: Armaan Siddiqui <asiddiqu@codeaurora.org>
Signed-off-by: UtsavBalar1231 <utsavbalar1231@gmail.com>
2021-06-05 14:56:41 +05:30
Jiten Patel
422ceb67a8 qseecom: Scale bus bandwidth to LOW on resume
For the case when cumulative mode was not inactive,
scale bus bandwidth to LOW upon resume.

Change-Id: I67ee237efed1979be24d26617fe7767846c74b4a
Signed-off-by: Jiten Patel <jitepate@codeaurora.org>
Signed-off-by: UtsavBalar1231 <utsavbalar1231@gmail.com>
2021-06-05 14:56:41 +05:30
dpyun
5d474b68ef msm: npu: Don't allow ioctl cmds to be interrupted
When the ioctl is interrupted, it is difficult to
predict and handle the firmware state properly from
the kernel. This change is to keep ioctl cmds from
being interrupted.

Change-Id: I1377122d23e4a63438cc7a844aec56f7e8a9b4cf
Signed-off-by: dpyun <dpyun@codeaurora.org>
Signed-off-by: UtsavBalar1231 <utsavbalar1231@gmail.com>
2021-06-05 14:56:41 +05:30
Puranam V G Tejaswi
a7e4499098 msm: kgsl: Fix use-after-free and refcount imbalance issues
Take a refcount on the process private within the process list lock to
avoid a possible use-after-free of the process private pointer.
Also put the refcount before returning in the case where the page
fault is suppressed.

Change-Id: I5d05806aa22d0396c53dc7bd3f073c00136992be
Signed-off-by: Puranam V G Tejaswi <pvgtejas@codeaurora.org>
Signed-off-by: UtsavBalar1231 <utsavbalar1231@gmail.com>
2021-06-05 14:56:41 +05:30
Deepak Kumar
c1ac7dfa12 msm: kgsl: Change start variable type to int in kgsl_iommu_add_global
Variable start should be of type int instead of u32. Correct this to
ensure while loop can exit and WARN_ON statement is effective in
case global VA space doesn't have enough space for current request.

Change-Id: I0bc817abc9a16934b5c91fc31ba9c6dff3545c90
Signed-off-by: Deepak Kumar <dkumar@codeaurora.org>
Signed-off-by: UtsavBalar1231 <utsavbalar1231@gmail.com>
2021-06-05 14:56:41 +05:30
Aniket Randive
041bf2eb97 i2c: i2c-msm-geni: Selectively disable DMA and operate in FIFO mode
Currently for SSC QUP, DMA mode is not supported
on the SA8155 and SA8195 platforms. So for non-GSI transfers,
if the SE mode is DMA, switch to FIFO
mode and complete the transfer.

Change-Id: I4cbc2ed9ceff7d1c4fdfec3a1efeda80af2fc2bf
Signed-off-by: Aniket Randive <arandive@codeaurora.org>
Signed-off-by: UtsavBalar1231 <utsavbalar1231@gmail.com>
2021-06-05 14:55:45 +05:30
Vivek Golani
b2ace36f1b diag: Enable graceful transfer of transport
Avoid channel close by setting flags if socket
transport close is invoked after rpmsg transport
has been probed.

Change-Id: I5d94894e05bbbe079a566d9356eb08a6aeac7799
Signed-off-by: Vivek Golani <vgolani@codeaurora.org>
Signed-off-by: UtsavBalar1231 <utsavbalar1231@gmail.com>
2021-06-05 14:55:45 +05:30
Vivek Golani
8cbb49c395 diag: Add handling for invalid max channel size
Add handling for error case where zero or negative
max channel size is returned.

Change-Id: I43dff897595327484c153f68a6e30ca8f888c3e6
Signed-off-by: Vivek Golani <vgolani@codeaurora.org>
Signed-off-by: UtsavBalar1231 <utsavbalar1231@gmail.com>
2021-06-05 14:55:45 +05:30
Manoj Prabhu B
bd955ad3ff diag: Prevent possible out of bound copy to userspace
While copying log and msg masks to userspace, prevent possible
out-of-bound access with a check of the available buffer before copying.

Change-Id: Ic92f1efb43dae7e467830157012b4cc292669740
Signed-off-by: Manoj Prabhu B <bmanoj@codeaurora.org>
Signed-off-by: UtsavBalar1231 <utsavbalar1231@gmail.com>
2021-06-05 14:55:45 +05:30
Manoj Prabhu B
b742a8f157 diag: Do not hold cntl lock while switching transport
Holding the cntl lock while switching the transport can
lead to a deadlock with a thread that is processing the socket
feature while waiting for the cntl lock.

Change-Id: I8e8b81e1329e647fe6d46f9f7211974351fe4254
Signed-off-by: Manoj Prabhu B <bmanoj@codeaurora.org>
Signed-off-by: UtsavBalar1231 <utsavbalar1231@gmail.com>
2021-06-05 14:55:45 +05:30