mirror of
https://github.com/rd-stuffs/msm-4.14.git
synced 2025-02-20 11:45:48 +08:00
438 lines
11 KiB
C
// SPDX-License-Identifier: GPL-2.0
/*
 * bootmem - A boot-time physical memory allocator and configurator
 *
 * Copyright (C) 1999 Ingo Molnar
 *                1999 Kanoj Sarcar, SGI
 *                2008 Johannes Weiner
 *
 * Access to this subsystem has to be serialized externally (which is true
 * for the boot process anyway).
 */
#include <linux/init.h>
#include <linux/pfn.h>
#include <linux/slab.h>
#include <linux/export.h>
#include <linux/kmemleak.h>
#include <linux/range.h>
#include <linux/memblock.h>
#include <linux/bootmem.h>

#include <asm/bug.h>
#include <asm/io.h>

#include "internal.h"

#ifndef CONFIG_HAVE_MEMBLOCK
#error CONFIG_HAVE_MEMBLOCK not defined
#endif

#ifndef CONFIG_NEED_MULTIPLE_NODES
struct pglist_data __refdata contig_page_data;
EXPORT_SYMBOL(contig_page_data);
#endif

unsigned long max_low_pfn;
unsigned long min_low_pfn;
unsigned long max_pfn;
unsigned long long max_possible_pfn;

static void * __init __alloc_memory_core_early(int nid, u64 size, u64 align,
                                               u64 goal, u64 limit)
{
        void *ptr;
        u64 addr;
        ulong flags = choose_memblock_flags();

        if (limit > memblock.current_limit)
                limit = memblock.current_limit;

again:
        addr = memblock_find_in_range_node(size, align, goal, limit, nid,
                                           flags);
        if (!addr && (flags & MEMBLOCK_MIRROR)) {
                flags &= ~MEMBLOCK_MIRROR;
                pr_warn("Could not allocate %pap bytes of mirrored memory\n",
                        &size);
                goto again;
        }
        if (!addr)
                return NULL;

        if (memblock_reserve(addr, size))
                return NULL;

        ptr = phys_to_virt(addr);
        memset(ptr, 0, size);
        /*
         * The min_count is set to 0 so that bootmem allocated blocks
         * are never reported as leaks.
         */
        kmemleak_alloc(ptr, size, 0, 0);
        return ptr;
}

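The allocator above clamps the limit and, when no mirrored range can satisfy the request, clears MEMBLOCK_MIRROR and retries. A minimal userspace sketch of that degrade-and-retry pattern (the `find_range()` stub, `FAKE_MIRROR` flag, and addresses are invented for illustration, not kernel API):

```c
#include <stdint.h>

#define FAKE_MIRROR 0x1U  /* stand-in for MEMBLOCK_MIRROR */

/* Invented stub: pretend only non-mirrored memory exists. */
static uint64_t find_range(uint64_t size, unsigned int flags)
{
        if (flags & FAKE_MIRROR)
                return 0;        /* no mirrored range available */
        return 0x100000;         /* arbitrary base of a usable range */
}

/* Mirrors the again:/goto retry in __alloc_memory_core_early(). */
static uint64_t alloc_with_fallback(uint64_t size, unsigned int flags)
{
        uint64_t addr;
again:
        addr = find_range(size, flags);
        if (!addr && (flags & FAKE_MIRROR)) {
                flags &= ~FAKE_MIRROR;  /* degrade: accept non-mirrored memory */
                goto again;
        }
        return addr;
}
```

The key design point is that mirroring is a preference, not a hard requirement: the allocation degrades gracefully rather than failing outright.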
/**
 * free_bootmem_late - free bootmem pages directly to page allocator
 * @addr: starting address of the range
 * @size: size of the range in bytes
 *
 * This is only useful when the bootmem allocator has already been torn
 * down, but we are still initializing the system. Pages are given directly
 * to the page allocator, no bootmem metadata is updated because it is gone.
 */
void free_bootmem_late(unsigned long addr, unsigned long size)
{
        unsigned long cursor, end;

        kmemleak_free_part_phys(addr, size);

        cursor = PFN_UP(addr);
        end = PFN_DOWN(addr + size);

        for (; cursor < end; cursor++) {
                __free_pages_bootmem(pfn_to_page(cursor), cursor, 0);
                totalram_pages++;
        }
}

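free_bootmem_late() rounds inward with PFN_UP()/PFN_DOWN(), so only pages lying entirely inside [addr, addr + size) are released; partial pages at either edge stay reserved. A userspace sketch of that rounding, assuming 4 KiB pages (the helper names are ours, only the arithmetic matches the kernel's PFN_UP/PFN_DOWN):

```c
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* Round an address up/down to a page frame number, as PFN_UP()/PFN_DOWN() do. */
static unsigned long pfn_up(uint64_t addr)   { return (addr + PAGE_SIZE - 1) >> PAGE_SHIFT; }
static unsigned long pfn_down(uint64_t addr) { return addr >> PAGE_SHIFT; }

/* Count the whole pages inside [addr, addr + size), as the loop above does. */
static unsigned long whole_pages(uint64_t addr, uint64_t size)
{
        unsigned long cursor = pfn_up(addr);
        unsigned long end = pfn_down(addr + size);

        return end > cursor ? end - cursor : 0;
}
```

Note that a range smaller than one page, or one that merely straddles a page boundary, frees nothing at all, which is exactly the conservative behaviour the kernel loop exhibits.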
static void __init __free_pages_memory(unsigned long start, unsigned long end)
{
        int order;

        while (start < end) {
                order = min(MAX_ORDER - 1UL, __ffs(start));

                while (start + (1UL << order) > end)
                        order--;

                __free_pages_bootmem(pfn_to_page(start), start, order);

                start += (1UL << order);
        }
}

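__free_pages_memory() hands ranges to the buddy allocator in the largest naturally aligned power-of-two chunks it can: the order is capped by MAX_ORDER - 1 and by the alignment of the current pfn (its lowest set bit), then shrunk until the chunk fits in the range. A userspace sketch of that decomposition, assuming the kernel's default MAX_ORDER of 11 (`lowest_bit()` stands in for `__ffs()`):

```c
#define MAX_ORDER 11UL  /* the kernel's default buddy-order limit */

/* Index of the lowest set bit; stand-in for the kernel's __ffs(). */
static unsigned long lowest_bit(unsigned long x)
{
        unsigned long i = 0;

        while (!(x & 1UL)) {
                x >>= 1;
                i++;
        }
        return i;
}

/*
 * Decompose [start, end) into maximal aligned power-of-two blocks, as
 * __free_pages_memory() does, and return how many blocks result.
 */
static int count_blocks(unsigned long start, unsigned long end)
{
        int blocks = 0;

        while (start < end) {
                unsigned long order = MAX_ORDER - 1;

                /* __ffs(0) is undefined; pfn 0 is aligned to everything. */
                if (start && lowest_bit(start) < order)
                        order = lowest_bit(start);
                while (start + (1UL << order) > end)
                        order--;
                start += 1UL << order;
                blocks++;
        }
        return blocks;
}
```

For example, [5, 8) decomposes into a 1-page block at 5 (pfn 5 is odd) and a 2-page block at 6, while an aligned [0, 1024) range is a single order-10 block.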
static unsigned long __init __free_memory_core(phys_addr_t start,
                                               phys_addr_t end)
{
        unsigned long start_pfn = PFN_UP(start);
        unsigned long end_pfn = min_t(unsigned long,
                                      PFN_DOWN(end), max_low_pfn);

        if (start_pfn >= end_pfn)
                return 0;

        __free_pages_memory(start_pfn, end_pfn);

        return end_pfn - start_pfn;
}

static unsigned long __init free_low_memory_core_early(void)
{
        unsigned long count = 0;
        phys_addr_t start, end;
        u64 i;

        memblock_clear_hotplug(0, -1);

        for_each_reserved_mem_region(i, &start, &end)
                reserve_bootmem_region(start, end);

        /*
         * We need to use NUMA_NO_NODE instead of NODE_DATA(0)->node_id
         * because Node 0 may have no RAM installed, in which case low
         * memory ends up on Node 1 instead.
         */
        for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE, &start, &end,
                                NULL)
                count += __free_memory_core(start, end);

        return count;
}

static int reset_managed_pages_done __initdata;

void reset_node_managed_pages(pg_data_t *pgdat)
{
        struct zone *z;

        for (z = pgdat->node_zones; z < pgdat->node_zones + MAX_NR_ZONES; z++)
                z->managed_pages = 0;
}

void __init reset_all_zones_managed_pages(void)
{
        struct pglist_data *pgdat;

        if (reset_managed_pages_done)
                return;

        for_each_online_pgdat(pgdat)
                reset_node_managed_pages(pgdat);

        reset_managed_pages_done = 1;
}

/**
 * free_all_bootmem - release free pages to the buddy allocator
 *
 * Returns the number of pages actually released.
 */
unsigned long __init free_all_bootmem(void)
{
        unsigned long pages;

        reset_all_zones_managed_pages();

        pages = free_low_memory_core_early();
        totalram_pages += pages;

        return pages;
}

/**
 * free_bootmem_node - mark a page range as usable
 * @pgdat: node the range resides on
 * @physaddr: starting address of the range
 * @size: size of the range in bytes
 *
 * Partial pages will be considered reserved and left as they are.
 *
 * The range must reside completely on the specified node.
 */
void __init free_bootmem_node(pg_data_t *pgdat, unsigned long physaddr,
                              unsigned long size)
{
        memblock_free(physaddr, size);
}

/**
 * free_bootmem - mark a page range as usable
 * @addr: starting address of the range
 * @size: size of the range in bytes
 *
 * Partial pages will be considered reserved and left as they are.
 *
 * The range must be contiguous but may span node boundaries.
 */
void __init free_bootmem(unsigned long addr, unsigned long size)
{
        memblock_free(addr, size);
}

static void * __init ___alloc_bootmem_nopanic(unsigned long size,
                                              unsigned long align,
                                              unsigned long goal,
                                              unsigned long limit)
{
        void *ptr;

        if (WARN_ON_ONCE(slab_is_available()))
                return kzalloc(size, GFP_NOWAIT);

restart:

        ptr = __alloc_memory_core_early(NUMA_NO_NODE, size, align, goal, limit);

        if (ptr)
                return ptr;

        if (goal != 0) {
                goal = 0;
                goto restart;
        }

        return NULL;
}

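The nopanic helper above treats @goal as a preference: if nothing can be placed at or above the preferred address, it restarts with goal = 0 and accepts memory anywhere. A userspace sketch of that fallback (the `core_alloc()` stub and the addresses are invented for illustration):

```c
#include <stdint.h>

/* Invented stub: only placements below 0x4000 can be satisfied. */
static uint64_t core_alloc(uint64_t size, uint64_t goal)
{
        if (goal >= 0x4000)
                return 0;                /* nothing available at or above goal */
        return goal ? goal : 0x1000;     /* arbitrary successful placements */
}

/* Mirrors the restart:/goal = 0 fallback in ___alloc_bootmem_nopanic(). */
static uint64_t alloc_nopanic(uint64_t size, uint64_t goal)
{
        uint64_t ptr;
restart:
        ptr = core_alloc(size, goal);
        if (ptr)
                return ptr;
        if (goal != 0) {
                goal = 0;   /* drop the preferred address and try anywhere */
                goto restart;
        }
        return 0;
}
```

An unreachable goal thus degrades to a plain allocation instead of a failure, which is why callers only see NULL when memory is exhausted outright.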
/**
 * __alloc_bootmem_nopanic - allocate boot memory without panicking
 * @size: size of the request in bytes
 * @align: alignment of the region
 * @goal: preferred starting address of the region
 *
 * The goal is dropped if it can not be satisfied and the allocation will
 * fall back to memory below @goal.
 *
 * Allocation may happen on any node in the system.
 *
 * Returns NULL on failure.
 */
void * __init __alloc_bootmem_nopanic(unsigned long size, unsigned long align,
                                      unsigned long goal)
{
        unsigned long limit = -1UL;

        return ___alloc_bootmem_nopanic(size, align, goal, limit);
}

static void * __init ___alloc_bootmem(unsigned long size, unsigned long align,
                                      unsigned long goal, unsigned long limit)
{
        void *mem = ___alloc_bootmem_nopanic(size, align, goal, limit);

        if (mem)
                return mem;
        /*
         * Whoops, we cannot satisfy the allocation request.
         */
        pr_alert("bootmem alloc of %lu bytes failed!\n", size);
        panic("Out of memory");
        return NULL;
}

/**
 * __alloc_bootmem - allocate boot memory
 * @size: size of the request in bytes
 * @align: alignment of the region
 * @goal: preferred starting address of the region
 *
 * The goal is dropped if it can not be satisfied and the allocation will
 * fall back to memory below @goal.
 *
 * Allocation may happen on any node in the system.
 *
 * The function panics if the request can not be satisfied.
 */
void * __init __alloc_bootmem(unsigned long size, unsigned long align,
                              unsigned long goal)
{
        unsigned long limit = -1UL;

        return ___alloc_bootmem(size, align, goal, limit);
}

void * __init ___alloc_bootmem_node_nopanic(pg_data_t *pgdat,
                                            unsigned long size,
                                            unsigned long align,
                                            unsigned long goal,
                                            unsigned long limit)
{
        void *ptr;

again:
        ptr = __alloc_memory_core_early(pgdat->node_id, size, align,
                                        goal, limit);
        if (ptr)
                return ptr;

        ptr = __alloc_memory_core_early(NUMA_NO_NODE, size, align,
                                        goal, limit);
        if (ptr)
                return ptr;

        if (goal) {
                goal = 0;
                goto again;
        }

        return NULL;
}

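The node-aware variant above tries a three-step cascade: the requested node first, then any node, and only then does it drop the goal and repeat. A userspace sketch of that cascade (the `node_alloc()` stub, `NID_ANY`, and the memory layout are invented for illustration):

```c
#include <stdint.h>

#define NID_ANY (-1)  /* stand-in for NUMA_NO_NODE */

/* Invented stub: node 1 is empty; other nodes only have memory below any goal. */
static uint64_t node_alloc(int nid, uint64_t goal)
{
        if (nid == 1)
                return 0;      /* requested node has no usable memory */
        if (goal)
                return 0;      /* nothing at or above the preferred address */
        return 0x2000;         /* arbitrary success */
}

/* Mirrors the node -> any-node -> goal=0 cascade above. */
static uint64_t alloc_node_nopanic(int nid, uint64_t goal)
{
        uint64_t ptr;
again:
        ptr = node_alloc(nid, goal);
        if (ptr)
                return ptr;
        ptr = node_alloc(NID_ANY, goal);
        if (ptr)
                return ptr;
        if (goal) {
                goal = 0;
                goto again;
        }
        return 0;
}
```

Node locality is therefore only the first preference to be sacrificed; the preferred address is given up last, and both are given up before the allocation is reported as failed.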
void * __init __alloc_bootmem_node_nopanic(pg_data_t *pgdat, unsigned long size,
                                           unsigned long align, unsigned long goal)
{
        if (WARN_ON_ONCE(slab_is_available()))
                return kzalloc_node(size, GFP_NOWAIT, pgdat->node_id);

        return ___alloc_bootmem_node_nopanic(pgdat, size, align, goal, 0);
}

static void * __init ___alloc_bootmem_node(pg_data_t *pgdat, unsigned long size,
                                           unsigned long align, unsigned long goal,
                                           unsigned long limit)
{
        void *ptr;

        ptr = ___alloc_bootmem_node_nopanic(pgdat, size, align, goal, limit);
        if (ptr)
                return ptr;

        pr_alert("bootmem alloc of %lu bytes failed!\n", size);
        panic("Out of memory");
        return NULL;
}

/**
 * __alloc_bootmem_node - allocate boot memory from a specific node
 * @pgdat: node to allocate from
 * @size: size of the request in bytes
 * @align: alignment of the region
 * @goal: preferred starting address of the region
 *
 * The goal is dropped if it can not be satisfied and the allocation will
 * fall back to memory below @goal.
 *
 * Allocation may fall back to any node in the system if the specified node
 * can not hold the requested memory.
 *
 * The function panics if the request can not be satisfied.
 */
void * __init __alloc_bootmem_node(pg_data_t *pgdat, unsigned long size,
                                   unsigned long align, unsigned long goal)
{
        if (WARN_ON_ONCE(slab_is_available()))
                return kzalloc_node(size, GFP_NOWAIT, pgdat->node_id);

        return ___alloc_bootmem_node(pgdat, size, align, goal, 0);
}

void * __init __alloc_bootmem_node_high(pg_data_t *pgdat, unsigned long size,
                                        unsigned long align, unsigned long goal)
{
        return __alloc_bootmem_node(pgdat, size, align, goal);
}

/**
 * __alloc_bootmem_low - allocate low boot memory
 * @size: size of the request in bytes
 * @align: alignment of the region
 * @goal: preferred starting address of the region
 *
 * The goal is dropped if it can not be satisfied and the allocation will
 * fall back to memory below @goal.
 *
 * Allocation may happen on any node in the system.
 *
 * The function panics if the request can not be satisfied.
 */
void * __init __alloc_bootmem_low(unsigned long size, unsigned long align,
                                  unsigned long goal)
{
        return ___alloc_bootmem(size, align, goal, ARCH_LOW_ADDRESS_LIMIT);
}

void * __init __alloc_bootmem_low_nopanic(unsigned long size,
                                          unsigned long align,
                                          unsigned long goal)
{
        return ___alloc_bootmem_nopanic(size, align, goal,
                                        ARCH_LOW_ADDRESS_LIMIT);
}

/**
 * __alloc_bootmem_low_node - allocate low boot memory from a specific node
 * @pgdat: node to allocate from
 * @size: size of the request in bytes
 * @align: alignment of the region
 * @goal: preferred starting address of the region
 *
 * The goal is dropped if it can not be satisfied and the allocation will
 * fall back to memory below @goal.
 *
 * Allocation may fall back to any node in the system if the specified node
 * can not hold the requested memory.
 *
 * The function panics if the request can not be satisfied.
 */
void * __init __alloc_bootmem_low_node(pg_data_t *pgdat, unsigned long size,
                                       unsigned long align, unsigned long goal)
{
        if (WARN_ON_ONCE(slab_is_available()))
                return kzalloc_node(size, GFP_NOWAIT, pgdat->node_id);

        return ___alloc_bootmem_node(pgdat, size, align, goal,
                                     ARCH_LOW_ADDRESS_LIMIT);
}