git.kernelconcepts.de Git - karo-tx-linux.git/log
8 years agoarm64: use linux/types.h in kvm.h
Arnd Bergmann [Thu, 12 Nov 2015 14:41:08 +0000 (15:41 +0100)]
arm64: use linux/types.h in kvm.h

We should always use linux/types.h instead of asm/types.h for
consistency, and Kbuild actually warns about it:

./usr/include/asm/kvm.h:35: include of <linux/types.h> is preferred over <asm/types.h>

This patch does as Kbuild asks us.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: build vdso without libgcov
Arnd Bergmann [Thu, 12 Nov 2015 14:37:12 +0000 (15:37 +0100)]
arm64: build vdso without libgcov

On a cross-toolchain without glibc support, libgcov may not be
available, and attempting to build an arm64 kernel with GCOV
enabled then results in a build error:

/home/arnd/cross-gcc/lib/gcc/aarch64-linux/5.2.1/../../../../aarch64-linux/bin/ld: cannot find -lgcov

We don't really want to link libgcov into the vdso anyway, so
this patch just disables GCOV in the vdso directory, just as
we do for most other architectures.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: mark cpus_have_hwcap as __maybe_unused
Arnd Bergmann [Thu, 12 Nov 2015 14:20:16 +0000 (15:20 +0100)]
arm64: mark cpus_have_hwcap as __maybe_unused

cpus_have_hwcap() is defined as a 'static' function and only used in
one place that is inside an #ifdef, so we get a warning when
the only user is disabled:

arch/arm64/kernel/cpufeature.c:699:13: warning: 'cpus_have_hwcap' defined but not used [-Wunused-function]

This marks the function as __maybe_unused, so the compiler knows that
it can drop the function definition without warning about it.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Fixes: 37b01d53ceef ("arm64/HWCAP: Use system wide safe values")
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
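As a rough illustration of the pattern (the function and config names below are made up, not the actual cpufeature.c code), __maybe_unused silences -Wunused-function without wrapping the definition in yet another #ifdef:

  /* Hypothetical example: a static helper whose only caller is inside an #ifdef. */
  static bool __maybe_unused have_feature_x(void)
  {
          return false;
  }

  #ifdef CONFIG_FEATURE_X                 /* illustrative config symbol */
  void maybe_use_feature_x(void)
  {
          if (have_feature_x())
                  pr_info("feature X present\n");
  }
  #endif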
8 years agoarm64: remove redundant FRAME_POINTER kconfig option and force to select it
Yang Shi [Mon, 9 Nov 2015 18:09:55 +0000 (10:09 -0800)]
arm64: remove redundant FRAME_POINTER kconfig option and force to select it

FRAME_POINTER is defined in lib/Kconfig.debug, so it is unnecessary to
redefine it in arch/arm64/Kconfig.debug.

arm64 depends on the frame pointer to get a correct stack trace (and also
selects ARCH_WANT_FRAME_POINTERS). However, the lib/Kconfig.debug definition
allows the option to be disabled. This patch forces FRAME_POINTER to be
always on for arm64.

Signed-off-by: Yang Shi <yang.shi@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: fix R/O permissions of FDT mapping
Ard Biesheuvel [Mon, 9 Nov 2015 08:55:46 +0000 (09:55 +0100)]
arm64: fix R/O permissions of FDT mapping

The mapping permissions of the FDT are set to 'PAGE_KERNEL | PTE_RDONLY'
in an attempt to map the FDT as read-only. However, not only does this
break at build time under STRICT_MM_TYPECHECKS (since the two terms are
of different types in that case), it also results in both the PTE_WRITE
and PTE_RDONLY attributes being set, which means the region is still
writable under ARMv8.1 DBM (and an attempted write will simply clear the
PTE_RDONLY bit).

So instead, define PAGE_KERNEL_RO (which already has an established
meaning across architectures) and use that instead.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: fix STRICT_MM_TYPECHECKS issue in PTE_CONT manipulation
Ard Biesheuvel [Mon, 9 Nov 2015 08:55:45 +0000 (09:55 +0100)]
arm64: fix STRICT_MM_TYPECHECKS issue in PTE_CONT manipulation

The new page table code that manipulates the PTE_CONT flags does so
in a way that is inconsistent with STRICT_MM_TYPECHECKS. Fix it by
using the correct combination of __pgprot() and pgprot_val().

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
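For reference, pgprot_val() and __pgprot() are the standard accessors for this; a minimal sketch of the type-safe pattern the two patches above switch to (not the actual hunks):

  /* Sketch only: the type-safe way to OR a raw bit into a pgprot_t. */
  static pgprot_t with_cont_bit(pgprot_t prot)
  {
          /* "prot | PTE_CONT" fails to build under STRICT_MM_TYPECHECKS,
           * where pgprot_t is a struct, so unwrap/modify/re-wrap instead: */
          return __pgprot(pgprot_val(prot) | PTE_CONT);
  }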
8 years agoarm64: bpf: fix mod-by-zero case
Zi Shen Lim [Thu, 5 Nov 2015 04:43:59 +0000 (20:43 -0800)]
arm64: bpf: fix mod-by-zero case

It turns out that, in the case of modulo by zero in a BPF program:
A = A % X;  (X == 0)
the expected behavior is to terminate with return value 0.

The bug in JIT is exposed by a new test case [1].

[1] https://lkml.org/lkml/2015/11/4/499

Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Reported-by: Yang Shi <yang.shi@linaro.org>
Reported-by: Xi Wang <xi.wang@gmail.com>
CC: Alexei Starovoitov <ast@plumgrid.com>
Fixes: e54bcde3d69d ("arm64: eBPF JIT compiler")
Cc: <stable@vger.kernel.org> # 3.18+
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
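A sketch of the required semantics in plain C (what the JIT must emit, not the emitted arm64 instructions themselves):

  /* BPF_MOD: a zero divisor must terminate the program with return value 0. */
  static u32 run_mod(u32 a, u32 x, bool *exited)
  {
          if (x == 0) {
                  *exited = true;         /* program exits, result is 0 */
                  return 0;
          }
          return a % x;
  }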
8 years agoarm64: bpf: fix div-by-zero case
Zi Shen Lim [Wed, 4 Nov 2015 06:56:44 +0000 (22:56 -0800)]
arm64: bpf: fix div-by-zero case

In the case of division by zero in a BPF program:
A = A / X;  (X == 0)
the expected behavior is to terminate with return value 0.

This is confirmed by the test case introduced in commit 86bf1721b226
("test_bpf: add tests checking that JIT/interpreter sets A and X to 0.").

Reported-by: Yang Shi <yang.shi@linaro.org>
Tested-by: Yang Shi <yang.shi@linaro.org>
CC: Xi Wang <xi.wang@gmail.com>
CC: Alexei Starovoitov <ast@plumgrid.com>
CC: linux-arm-kernel@lists.infradead.org
CC: linux-kernel@vger.kernel.org
Fixes: e54bcde3d69d ("arm64: eBPF JIT compiler")
Cc: <stable@vger.kernel.org> # 3.18+
Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: Enable CRYPTO_CRC32_ARM64 in defconfig
Catalin Marinas [Fri, 6 Nov 2015 16:50:43 +0000 (16:50 +0000)]
arm64: Enable CRYPTO_CRC32_ARM64 in defconfig

CONFIG_CRYPTO_CRC32_ARM64 has been around since commit f6f203faa3eb
("crypto: crc32 - Add ARM64 CRC32 hw accelerated module") but defconfig
did not automatically enable it.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: cmpxchg_dbl: fix return value type
Lorenzo Pieralisi [Thu, 5 Nov 2015 14:00:56 +0000 (14:00 +0000)]
arm64: cmpxchg_dbl: fix return value type

The current arm64 __cmpxchg_double{_mb} implementations carry out the
compare exchange by first comparing the old values passed in to the
values read from the pointer provided and by stashing the cumulative
bitwise difference in a 64-bit register.

By comparing the register content against 0, it is possible to detect if
the values read differ from the old values passed in, so that the compare
exchange detects whether it has to bail out or carry on completing the
operation with the exchange.

Given the current implementation, to detect the cmpxchg operation
status, the __cmpxchg_double{_mb} functions should return the 64-bit
stashed bitwise difference so that the caller can detect cmpxchg failure
by comparing the return value content against 0. The current implementation
declares the return value as an int, which means that the 64-bit
value stashing the bitwise difference is truncated before being
returned to the __cmpxchg_double{_mb} callers, which means that
any bitwise difference present in the top 32 bits goes undetected,
triggering false positives and subsequent kernel failures.

This patch fixes the issue by declaring the arm64 __cmpxchg_double{_mb}
return values as a long, so that the bitwise difference is
properly propagated on failure, restoring the expected behaviour.

Fixes: e9a4b795652f ("arm64: cmpxchg_dbl: patch in lse instructions when supported by the CPU")
Cc: <stable@vger.kernel.org> # 4.3+
Cc: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
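A small sketch of the truncation problem described above (types and values illustrative):

  static void truncation_example(void)
  {
          unsigned long old1 = 0x0000000100000000UL;  /* differs only in bits 63:32 */
          unsigned long mem1 = 0x0000000200000000UL;

          unsigned long diff = old1 ^ mem1;  /* non-zero: the compare must fail            */
          int  as_int  = (int)diff;          /* truncated to 0: failure looks like success */
          long as_long = (long)diff;         /* stays non-zero: failure detected correctly */
  }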
8 years agoarm64/efi: fix libstub build under CONFIG_MODVERSIONS
Ard Biesheuvel [Tue, 27 Oct 2015 02:12:51 +0000 (11:12 +0900)]
arm64/efi: fix libstub build under CONFIG_MODVERSIONS

Now that we strictly forbid absolute relocations in libstub code,
make sure that we don't emit any when CONFIG_MODVERSIONS is enabled,
by stripping the kcrctab sections from the object file. This fixes
a build problem under CONFIG_MODVERSIONS=y.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoARM64: Enable multi-core scheduler support by default
Dietmar Eggemann [Mon, 19 Oct 2015 16:55:49 +0000 (17:55 +0100)]
ARM64: Enable multi-core scheduler support by default

Make sure that the task scheduler domain hierarchy is set-up correctly
on systems with single or multi-cluster topology.

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Acked-by: Punit Agrawal <punit.agrawal@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64/efi: move arm64 specific stub C code to libstub
Ard Biesheuvel [Fri, 23 Oct 2015 14:48:14 +0000 (16:48 +0200)]
arm64/efi: move arm64 specific stub C code to libstub

Now that we added special handling to the C files in libstub, move
the one remaining arm64 specific EFI stub C file to libstub as
well, so that it gets the same treatment. This should prevent future
changes from resulting in binaries that may execute incorrectly in
UEFI context.

With efi-entry.S the only remaining EFI stub source file under
arch/arm64, we can also simplify the Makefile logic somewhat.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: page-align sections for DEBUG_RODATA
Mark Rutland [Mon, 26 Oct 2015 21:42:33 +0000 (21:42 +0000)]
arm64: page-align sections for DEBUG_RODATA

A kernel built with DEBUG_RODATA && !CONFIG_DEBUG_ALIGN_RODATA doesn't
have .text aligned to a page boundary, though fixup_executable works at
page-granularity thanks to its use of create_mapping. If .text is not
page-aligned, the first page it exists in may be marked non-executable,
leading to failures when an attempt is made to execute code in said
page.

This patch upgrades ALIGN_DEBUG_RO and ALIGN_DEBUG_RO_MIN to force page
alignment for DEBUG_RODATA && !CONFIG_DEBUG_ALIGN_RODATA kernels,
ensuring that all sections with specific RWX permission requirements are
mapped with the correct permissions.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Jeremy Linton <jeremy.linton@arm.com>
Reviewed-by: Laura Abbott <laura@labbott.name>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Suzuki Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Fixes: da141706aea52c1a ("arm64: add better page protections to arm64")
Cc: <stable@vger.kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: Fix build with CONFIG_ZONE_DMA=n
Robin Murphy [Tue, 27 Oct 2015 17:40:26 +0000 (17:40 +0000)]
arm64: Fix build with CONFIG_ZONE_DMA=n

Trying to build with CONFIG_ZONE_DMA=n leaves visible references
to the now-undefined ZONE_DMA, resulting in a syntax error.

Hide the references behind an #ifdef instead of using IS_ENABLED.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
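A hypothetical sketch of the guard pattern (function and variable names illustrative; the real change is in arch/arm64/mm/init.c). IS_ENABLED() is not enough here because the ZONE_DMA enumerator itself does not exist when CONFIG_ZONE_DMA=n:

  static void zone_sizes_init_sketch(unsigned long max_pfn, phys_addr_t max_dma_phys)
  {
          unsigned long max_zone_pfns[MAX_NR_ZONES] = { 0 };

  #ifdef CONFIG_ZONE_DMA
          /* Reference compiled out entirely, not just branched around. */
          max_zone_pfns[ZONE_DMA] = PFN_DOWN(max_dma_phys);
  #endif
          max_zone_pfns[ZONE_NORMAL] = max_pfn;

          free_area_init_nodes(max_zone_pfns);
  }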
8 years agoarm64: Fix compat register mappings
Robin Murphy [Thu, 22 Oct 2015 14:41:52 +0000 (15:41 +0100)]
arm64: Fix compat register mappings

For reasons not entirely apparent, but now enshrined in history, the
architectural mapping of AArch32 banked registers to AArch64 registers
actually orders SP_<mode> and LR_<mode> backwards compared to the
intuitive r13/r14 order, for all modes except FIQ.

Fix the compat_<reg>_<mode> macros accordingly, in the hope of avoiding
subtle bugs with KVM and AArch32 guests.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: Increase the max granular size
Tirumalesh Chalamarla [Tue, 22 Sep 2015 17:59:48 +0000 (19:59 +0200)]
arm64: Increase the max granular size

Increase the standard cacheline size to avoid having locks in the same
cacheline.

Cavium's ThunderX core implements cache lines of 128 byte size. With
the current granule size of 64 bytes (L1_CACHE_SHIFT=6), two locks could
share the same cache line, leading to a performance degradation.
Increasing the size fixes that.

Increasing the size has no negative impact on cache invalidation on
systems with a smaller cache line. There is an impact on memory usage,
but that's not too important for arm64 use cases.

Signed-off-by: Tirumalesh Chalamarla <tchalamarla@cavium.com>
Signed-off-by: Robert Richter <rrichter@cavium.com>
Acked-by: Timur Tabi <timur@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
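The change boils down to raising L1_CACHE_SHIFT in asm/cache.h; a sketch of the definition and of why it separates hot locks (the struct below is illustrative):

  #define L1_CACHE_SHIFT  7
  #define L1_CACHE_BYTES  (1 << L1_CACHE_SHIFT)   /* 128 bytes */

  /* With ____cacheline_aligned_in_smp now meaning 128-byte alignment,
   * two hot locks no longer share a ThunderX cache line. */
  struct two_locks {
          spinlock_t a ____cacheline_aligned_in_smp;
          spinlock_t b ____cacheline_aligned_in_smp;
  };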
8 years agoarm64: remove bogus TASK_SIZE_64 check
Ard Biesheuvel [Mon, 26 Oct 2015 03:53:17 +0000 (12:53 +0900)]
arm64: remove bogus TASK_SIZE_64 check

The comparison between TASK_SIZE_64 and MODULES_VADDR does not
make any sense on arm64; it is simply something that has been
carried over from the ARM port on which arm64 is based. So drop it.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: make Timer Interrupt Frequency selectable
Kefeng Wang [Mon, 26 Oct 2015 03:48:16 +0000 (11:48 +0800)]
arm64: make Timer Interrupt Frequency selectable

Allow the timer interrupt frequency to be selected from 100, 250, 300 and
1000 Hz. Choosing a frequency suited to the workload can improve performance.

Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64/mm: use PAGE_ALIGNED instead of IS_ALIGNED
Alexander Kuleshov [Mon, 26 Oct 2015 11:26:57 +0000 (17:26 +0600)]
arm64/mm: use PAGE_ALIGNED instead of IS_ALIGNED

<linux/mm.h> already provides the PAGE_ALIGNED macro. Let's use this
macro instead of calling IS_ALIGNED with PAGE_SIZE passed explicitly.

Signed-off-by: Alexander Kuleshov <kuleshovmail@gmail.com>
Acked-by: Laura Abbott <laura@labbott.name>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: cachetype: fix definitions of ICACHEF_* flags
Will Deacon [Tue, 27 Oct 2015 12:05:55 +0000 (12:05 +0000)]
arm64: cachetype: fix definitions of ICACHEF_* flags

test_bit and set_bit take the bit number to operate on, rather than a
mask. This patch fixes the ICACHEF_* definitions so that they represent
the bit index in __icache_flags as opposed to the mask returned by the
BIT macro.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
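A sketch of the corrected definitions (flag names follow common arm64 cachetype.h usage; treat the exact set as illustrative):

  extern unsigned long __icache_flags;

  /* Bit numbers, not masks, as expected by set_bit()/test_bit(). */
  #define ICACHEF_ALIASING        0       /* previously BIT(0) */
  #define ICACHEF_AIVIVT          1       /* previously BIT(1) */

  static inline bool icache_is_aliasing(void)
  {
          return test_bit(ICACHEF_ALIASING, &__icache_flags);
  }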
8 years agoarm64: cpufeature: declare enable_cpu_capabilities as static
Will Deacon [Tue, 27 Oct 2015 12:05:54 +0000 (12:05 +0000)]
arm64: cpufeature: declare enable_cpu_capabilities as static

enable_cpu_capabilities is only called from within cpufeature.c, so it
can be declared static.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoMerge branch 'irq/for-arm' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Catalin Marinas [Thu, 22 Oct 2015 16:30:08 +0000 (17:30 +0100)]
Merge branch 'irq/for-arm' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

This is an incremental fix for a patch previously pulled from tip
irq/for-arm.

* 'irq/for-arm' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  genirq: Make the cpuhotplug migration code less noisy

8 years agogenirq: Make the cpuhotplug migration code less noisy
Thomas Gleixner [Thu, 22 Oct 2015 12:34:57 +0000 (14:34 +0200)]
genirq: Make the cpuhotplug migration code less noisy

The original arm code has a pr_debug() statement for the case where
the irq chip has no set_affinity() callback. That's sufficient for
debugging and we really don't want to spam dmesg with useless warnings
for the normal case.

Fixes: f1e0bb0ad473: "genirq: Introduce generic irq migration for cpu hotunplug"
Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
Requested-by: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Yang Yingliang <yangyingliang@huawei.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Jiang Liu <jiang.liu@linux.intel.com>
8 years agoarm64: Constify hwcap name string arrays
Dave Martin [Thu, 30 Jul 2015 15:36:25 +0000 (16:36 +0100)]
arm64: Constify hwcap name string arrays

The hwcap string arrays used for generating the contents of
/proc/cpuinfo are currently arrays of non-const pointers.

There's no need for these pointers to be mutable, so this patch makes
them const so that they can be moved to .rodata.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
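The constified form looks roughly like this (the entries shown are an illustrative subset):

  /* Both the pointer array and the strings are const, so it can live in .rodata. */
  static const char *const hwcap_str[] = {
          "fp",
          "asimd",
          "evtstrm",
          NULL
  };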
8 years agoarm64/kvm: Make use of the system wide safe values
Suzuki K. Poulose [Mon, 19 Oct 2015 13:24:55 +0000 (14:24 +0100)]
arm64/kvm: Make use of the system wide safe values

Use the system wide safe value from the new API for safer decisions.

Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: kvmarm@lists.cs.columbia.edu
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64/debug: Make use of the system wide safe value
Suzuki K. Poulose [Mon, 19 Oct 2015 13:24:54 +0000 (14:24 +0100)]
arm64/debug: Make use of the system wide safe value

Use the system wide value of ID_AA64DFR0 to make safer decisions

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: Move FP/ASIMD hwcap handling to common code
Suzuki K. Poulose [Mon, 19 Oct 2015 13:24:53 +0000 (14:24 +0100)]
arm64: Move FP/ASIMD hwcap handling to common code

FP/ASIMD support is detected in fpsimd_init(), which is built in
unconditionally. Let's move the hwcap handling to the central place.

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64/HWCAP: Use system wide safe values
Suzuki K. Poulose [Mon, 19 Oct 2015 13:24:52 +0000 (14:24 +0100)]
arm64/HWCAP: Use system wide safe values

Extend struct arm64_cpu_capabilities to handle the HWCAP detection
and make use of the system wide value of the feature registers for
a reliable set of HWCAPs.

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64/capabilities: Make use of system wide safe value
Suzuki K. Poulose [Mon, 19 Oct 2015 13:24:51 +0000 (14:24 +0100)]
arm64/capabilities: Make use of system wide safe value

Now that we can reliably read the system wide safe value for a
feature register, use that to compute the system capability.
This patch also replaces the 'feature-register-specific'
methods with a generic routine to check the capability.

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: Delay cpu feature capability checks
Suzuki K. Poulose [Mon, 19 Oct 2015 13:24:50 +0000 (14:24 +0100)]
arm64: Delay cpu feature capability checks

At the moment we run through the arm64_features capability list for
each CPU and set the capability if one of the CPUs supports it. This
could be problematic in a heterogeneous system with differing capabilities.
Delay the CPU feature checks until all the enabled CPUs are up (i.e.,
smp_cpus_done()), so that we can make better decisions based on the
overall system capability. Once we decide and advertise the capabilities,
the alternatives can be applied. From this state, we cannot roll back
a feature to disabled based on the values from a newly hotplugged CPU,
due to the runtime patching and other reasons. So, for all new CPUs,
we need to make sure that they have the established system capabilities.
Failing that, we bring the CPU down, preventing it from coming online.
Once the capabilities are decided, any new CPU booting up goes through
verification to ensure that it has all the enabled capabilities and also
invokes the respective enable() method on the CPU.

The CPU errata checks are not delayed; they are still executed per CPU
to detect the respective capabilities. If we ever come across a non-errata
capability that needs to be checked on each CPU, we could introduce it via
a new capability table (or introduce a flag), which can be processed per CPU.

The next patch will make the feature checks use the system wide
safe value of a feature register.

NOTE: The enable() methods associated with the capability is scheduled
on all the CPUs (which is the only use case at the moment). If we need
a different type of 'enable()' which only needs to be run once on any CPU,
we should be able to handle that when needed.

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
[catalin.marinas@arm.com: static variable and coding style fixes]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: Refactor check_cpu_capabilities
Suzuki K. Poulose [Mon, 19 Oct 2015 13:24:49 +0000 (14:24 +0100)]
arm64: Refactor check_cpu_capabilities

check_cpu_capabilities runs through a given list of caps, checks if the
system has the cap, updates the system capability bitmap and also runs
any enable() methods associated with them. All of this is not quite
obvious from the name 'check'. This patch splits check_cpu_capabilities
into two parts:

1) update_cpu_capabilities
 => Runs through the given list and updates the system
    wide capability map.
2) enable_cpu_capabilities
 => Runs through the given list and invokes enable() (if any)
    for the caps enabled on the system.

Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Suggested-by: Catalin Marinas <catalin.marinsa@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: Cleanup mixed endian support detection
Suzuki K. Poulose [Mon, 19 Oct 2015 13:24:48 +0000 (14:24 +0100)]
arm64: Cleanup mixed endian support detection

Make use of the system wide safe register to decide the support
for mixed endian.

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: Read system wide CPUID value
Suzuki K. Poulose [Mon, 19 Oct 2015 13:24:47 +0000 (14:24 +0100)]
arm64: Read system wide CPUID value

Add an API for reading the safe CPUID value across the
system from the new infrastructure.

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: Consolidate CPU Sanity check to CPU Feature infrastructure
Suzuki K. Poulose [Mon, 19 Oct 2015 13:24:46 +0000 (14:24 +0100)]
arm64: Consolidate CPU Sanity check to CPU Feature infrastructure

This patch consolidates the CPU Sanity check to the new infrastructure.

Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: Keep track of CPU feature registers
Suzuki K. Poulose [Mon, 19 Oct 2015 13:24:45 +0000 (14:24 +0100)]
arm64: Keep track of CPU feature registers

This patch adds an infrastructure to keep track of the CPU feature
registers on the system. For each register, the infrastructure keeps
track of the system wide safe value of the feature bits. It also tracks
which fields of a register should be matched strictly across all
the CPUs on the system for the SANITY check infrastructure.

The feature bits are classified into the following three types depending on
the implication of the possible values. This information is used to
decide the safe value for a feature.

LOWER_SAFE  - The smaller value is safer
HIGHER_SAFE - The bigger value is safer
EXACT       - We can't decide between the two, so a predefined safe_value is used.

This infrastructure will be later used to make better decisions for:

 - Kernel features (e.g, KVM, Debug)
 - SANITY Check
 - CPU capability
 - ELF HWCAP
 - Exposing CPU Feature register to userspace.

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
[catalin.marinas@arm.com: whitespace fix]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
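A sketch of how a safe value can be derived per field type; the enum and function names here are illustrative, not the actual cpufeature.c API:

  enum ftr_type { FTR_LOWER_SAFE, FTR_HIGHER_SAFE, FTR_EXACT };

  static s64 pick_safe_value(enum ftr_type type, s64 new, s64 cur, s64 safe_default)
  {
          switch (type) {
          case FTR_LOWER_SAFE:
                  return min(new, cur);           /* smaller value is safer */
          case FTR_HIGHER_SAFE:
                  return max(new, cur);           /* bigger value is safer  */
          case FTR_EXACT:
          default:
                  return new == cur ? cur : safe_default;
          }
  }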
8 years agoarm64: Handle width of a cpuid feature
Suzuki K. Poulose [Mon, 19 Oct 2015 13:24:44 +0000 (14:24 +0100)]
arm64: Handle width of a cpuid feature

Introduce a helper to extract cpuid feature for any given
width.

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
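The arithmetic behind such a helper is a sign-extending field extract; a sketch (the helper name and exact signature in cpufeature.h may differ):

  /* Extract the signed 'width'-bit field starting at bit 'shift' of an ID register. */
  static inline int id_feature_extract(u64 reg, unsigned int shift, unsigned int width)
  {
          return (s64)(reg << (64 - shift - width)) >> (64 - width);
  }

  /* e.g. the 4-bit TGran4 field at bits [31:28] of ID_AA64MMFR0_EL1:
   *      int tgran4 = id_feature_extract(mmfr0, 28, 4); */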
8 years agoarm64: Move /proc/cpuinfo handling code
Suzuki K. Poulose [Mon, 19 Oct 2015 13:24:43 +0000 (14:24 +0100)]
arm64: Move /proc/cpuinfo handling code

This patch moves the /proc/cpuinfo handling code:

arch/arm64/kernel/{setup.c to cpuinfo.c}

No functional changes

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: Move mixed endian support detection
Suzuki K. Poulose [Mon, 19 Oct 2015 13:24:42 +0000 (14:24 +0100)]
arm64: Move mixed endian support detection

Move the mixed endian support detection code to cpufeature.c
from cpuinfo.c. This also moves the update_cpu_features()
used by mixed endian detection code, which will get more
functionality.

Also moves the ID register field shifts to asm/sysreg.h,
where all the useful definitions will end up in later patches.

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: Move cpu feature detection code
Suzuki K. Poulose [Mon, 19 Oct 2015 13:24:41 +0000 (14:24 +0100)]
arm64: Move cpu feature detection code

This patch moves the CPU feature detection code from
 arch/arm64/kernel/{setup.c to cpufeature.c}

The plan is to consolidate all the CPU feature handling
in cpufeature.c.

Apart from changing pr_fmt from "alternatives" to "cpu features",
there are no functional changes.

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: Delay cpuinfo_store_boot_cpu
Suzuki K. Poulose [Mon, 19 Oct 2015 13:24:40 +0000 (14:24 +0100)]
arm64: Delay cpuinfo_store_boot_cpu

At the moment the boot CPU stores the cpuinfo long before the
PERCPU areas are initialised by the kernel. This could be problematic
as the non-boot CPU data structures might get copied with the data
from the boot CPU, giving us no chance to detect if a particular CPU
updated its cpuinfo. This patch delays the boot cpu store to
smp_prepare_boot_cpu().

Also kill setup_processor(), which no longer does any meaningful
work.

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: Delay ELF HWCAP initialisation until all CPUs are up
Suzuki K. Poulose [Mon, 19 Oct 2015 13:24:39 +0000 (14:24 +0100)]
arm64: Delay ELF HWCAP initialisation until all CPUs are up

Delay the ELF HWCAP initialisation until all the (enabled) CPUs are
up, i.e, smp_cpus_done(). This is in preparation for detecting the
common features across the CPUS and creating a consistent ELF HWCAP
for the system.

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: Make the CPU information more clear
Suzuki K. Poulose [Mon, 19 Oct 2015 13:24:38 +0000 (14:24 +0100)]
arm64: Make the CPU information more clear

At early boot, we print the CPU version/revision. On a heterogeneous
system, we could have different types of CPUs. Print the CPU info for
all active CPUs. Also, secondary CPUs print the message only when
they come online.

Also, remove the redundant 'revision' information which doesn't
make any sense without the 'variant' field.

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Tested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: Make 36-bit VA depend on EXPERT
Catalin Marinas [Tue, 20 Oct 2015 13:59:20 +0000 (14:59 +0100)]
arm64: Make 36-bit VA depend on EXPERT

Commit 215399392fe4 (arm64: 36 bit VA) introduced 36-bit VA support for
the arm64 kernel when the 16KB page configuration is enabled. While this
is a valid hardware configuration, it's not something we want to
encourage since it reduces the memory (and I/O) range that the kernel
can access. Make this depend on EXPERT to avoid complaints of Linux not
mapping the whole RAM, especially on platforms following the ARM
recommended memory map.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: Synchronise dump_backtrace() with perf callchain
Jungseok Lee [Sat, 17 Oct 2015 14:28:11 +0000 (14:28 +0000)]
arm64: Synchronise dump_backtrace() with perf callchain

Unlike perf callchain relying on walk_stackframe(), dump_backtrace()
has its own backtrace logic. A major difference between them is the
moment a symbol is recorded. Perf writes down a symbol *before*
calling unwind_frame(), but dump_backtrace() prints it out *after*
unwind_frame(). As a result, the last valid symbol cannot be hooked
in the dump_backtrace() case. This patch addresses the issue by
synchronising dump_backtrace() with the perf callchain logic.

A simple test and its results are as follows:

- crash trigger

 $ sudo echo c > /proc/sysrq-trigger

- current status

 Call trace:
 [<fffffe00003dc738>] sysrq_handle_crash+0x24/0x30
 [<fffffe00003dd2ac>] __handle_sysrq+0x128/0x19c
 [<fffffe00003dd730>] write_sysrq_trigger+0x60/0x74
 [<fffffe0000249fc4>] proc_reg_write+0x84/0xc0
 [<fffffe00001f2638>] __vfs_write+0x44/0x104
 [<fffffe00001f2e60>] vfs_write+0x98/0x1a8
 [<fffffe00001f3730>] SyS_write+0x50/0xb0

- with this change

 Call trace:
 [<fffffe00003dc738>] sysrq_handle_crash+0x24/0x30
 [<fffffe00003dd2ac>] __handle_sysrq+0x128/0x19c
 [<fffffe00003dd730>] write_sysrq_trigger+0x60/0x74
 [<fffffe0000249fc4>] proc_reg_write+0x84/0xc0
 [<fffffe00001f2638>] __vfs_write+0x44/0x104
 [<fffffe00001f2e60>] vfs_write+0x98/0x1a8
 [<fffffe00001f3730>] SyS_write+0x50/0xb0
 [<fffffe00000939ec>] el0_svc_naked+0x20/0x28

Note that this patch does not cover the case where the MMU is disabled. The
last stack frame of swapper, for example, has its PC in the form of a physical
address. Unfortunately, a simple conversion using phys_to_virt() cannot
cover all scenarios since the PC is retrieved from LR - 4, not LR. It is
too big a tradeoff to change both head.S and unwind_frame() for only a few
symbols in *.S. Thus, this hunk does not take care of that case.

Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
Cc: James Morse <james.morse@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Jungseok Lee <jungseoklee85@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: add cpu_idle tracepoints to arch_cpu_idle
Jisheng Zhang [Wed, 16 Sep 2015 14:23:21 +0000 (22:23 +0800)]
arm64: add cpu_idle tracepoints to arch_cpu_idle

Currently, if cpuidle is disabled or not supported, powertop reports
zero wakeups and zero events. This is because the cpu_idle tracepoints
are missing.

This patch makes the cpu_idle tracepoints always available even if
cpuidle is disabled or not supported.

Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
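The resulting arch_cpu_idle() has roughly this shape (a sketch; the exact tracepoint variant used by the patch may differ):

  void arch_cpu_idle(void)
  {
          trace_cpu_idle_rcuidle(1, smp_processor_id());
          cpu_do_idle();
          local_irq_enable();
          trace_cpu_idle_rcuidle(PWR_EVENT_EXIT, smp_processor_id());
  }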
8 years agoarm64: 36 bit VA
Suzuki K. Poulose [Mon, 19 Oct 2015 13:19:38 +0000 (14:19 +0100)]
arm64: 36 bit VA

A 36-bit VA lets us use 2-level page tables while limiting the
available address space to 64GB.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Steve Capper <steve.capper@linaro.org>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: Add 16K page size support
Suzuki K. Poulose [Mon, 19 Oct 2015 13:19:37 +0000 (14:19 +0100)]
arm64: Add 16K page size support

This patch turns on 16K page support in the kernel. We
support a 48-bit VA (4-level page tables) and a 47-bit VA (3-level
page tables).

With 16K pages we can use the contiguous bit hint on 128 entries
at level 3 to map 2M with a single TLB entry.

TODO: 16K also supports 32 contiguous entries at level 2 to give us
1G (which is not yet supported by the infrastructure). That should
be a separate patch altogether.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Jeremy Linton <jeremy.linton@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Steve Capper <steve.capper@linaro.org>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: Add page size to the kernel image header
Ard Biesheuvel [Mon, 19 Oct 2015 13:19:36 +0000 (14:19 +0100)]
arm64: Add page size to the kernel image header

This patch adds the page size to the arm64 kernel image header
so that one can infer the PAGE_SIZE used by the kernel. This will
be helpful in diagnosing failures to boot a kernel built with a page
size not supported by the CPU.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: Check for selected granule support
Suzuki K. Poulose [Mon, 19 Oct 2015 13:19:35 +0000 (14:19 +0100)]
arm64: Check for selected granule support

Ensure that the selected page size is supported by the CPU(s). If it is
not, park the CPU.

Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: Kconfig: Fix help text about AArch32 support with 64K pages
Suzuki K. Poulose [Mon, 19 Oct 2015 13:19:34 +0000 (14:19 +0100)]
arm64: Kconfig: Fix help text about AArch32 support with 64K pages

Update the help text for ARM64_64K_PAGES to reflect the reality
about AArch32 support.

Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: Simplify NR_FIX_BTMAPS calculation
Mark Rutland [Mon, 19 Oct 2015 13:19:33 +0000 (14:19 +0100)]
arm64: Simplify NR_FIX_BTMAPS calculation

We choose NR_FIX_BTMAPS such that each slot (NR_FIX_BTMAPS * PAGE_SIZE)
can address 256K.

Use division to derive NR_FIX_BTMAPS rather than defining it for each
page size.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
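The derivation described above amounts to something like the following (the slot count shown is illustrative):

  /* One slot always covers 256K, regardless of the page size. */
  #define NR_FIX_BTMAPS           (SZ_256K / PAGE_SIZE)
  #define FIX_BTMAPS_SLOTS        7
  #define TOTAL_FIX_BTMAPS        (NR_FIX_BTMAPS * FIX_BTMAPS_SLOTS)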
8 years agoarm64: Clean config usages for page size
Suzuki K. Poulose [Mon, 19 Oct 2015 13:19:32 +0000 (14:19 +0100)]
arm64: Clean config usages for page size

We use !CONFIG_ARM64_64K_PAGES for CONFIG_ARM64_4K_PAGES
(and vice versa) in code. This has worked well so far since
we only had two options. Now, with the introduction of 16K,
these checks will break. This patch cleans up the code to
use the required CONFIG symbol expression, without the assumption
that !64K => 4K (and vice versa).

Cc: Will Deacon <will.deacon@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: Handle 4 level page table for swapper
Suzuki K. Poulose [Mon, 19 Oct 2015 13:19:31 +0000 (14:19 +0100)]
arm64: Handle 4 level page table for swapper

At the moment, we only support a maximum of 3 levels of page table for
the swapper. With a 48-bit VA, 64K pages need only 3 levels and 4K pages
use section mapping. Add support for a 4-level page table for the swapper,
needed by 16K pages.

Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: Calculate size for idmap_pg_dir at compile time
Suzuki K. Poulose [Mon, 19 Oct 2015 13:19:30 +0000 (14:19 +0100)]
arm64: Calculate size for idmap_pg_dir at compile time

Now that we can calculate the number of levels required for
mapping a VA width, reserve the exact number of pages that would
be required to cover the idmap. The idmap should be able to handle
the maximum physical address size supported.

Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: Introduce helpers for page table levels
Suzuki K. Poulose [Mon, 19 Oct 2015 13:19:29 +0000 (14:19 +0100)]
arm64: Introduce helpers for page table levels

Introduce helpers for finding the number of page table
levels required for a given VA width, and the shift for a particular
page table level.

Convert the existing users to the new helpers. More users
to follow.

Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
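The arithmetic such a helper encodes: the last level resolves PAGE_SHIFT bits (the page offset) and every other level resolves (PAGE_SHIFT - 3) bits of VA. A sketch with an illustrative macro name:

  /* Number of translation levels needed for a given VA width. */
  #define PGTABLE_LEVELS_FOR(va_bits) \
          DIV_ROUND_UP((va_bits) - PAGE_SHIFT, PAGE_SHIFT - 3)

  /* 4K pages (PAGE_SHIFT = 12):  39-bit VA -> 3 levels, 48-bit VA -> 4 levels.
   * 16K pages (PAGE_SHIFT = 14): 47-bit VA -> 3 levels, 48-bit VA -> 4 levels. */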
8 years agoarm64: Handle section maps for swapper/idmap
Suzuki K. Poulose [Mon, 19 Oct 2015 13:19:28 +0000 (14:19 +0100)]
arm64: Handle section maps for swapper/idmap

We use section maps with 4K page size to create the swapper/idmaps.
So far we have used !64K or 4K checks to handle the case where we
use the section maps.
This patch adds a new symbol, ARM64_SWAPPER_USES_SECTION_MAPS, to
handle cases where we use section maps, instead of using the page size
symbols.

Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: Move swapper pagetable definitions
Suzuki K. Poulose [Mon, 19 Oct 2015 13:19:27 +0000 (14:19 +0100)]
arm64: Move swapper pagetable definitions

Move the kernel pagetable (both swapper and idmap) definitions
from the generic asm/page.h to a new file, asm/kernel-pgtable.h.

This is mostly a cosmetic change, to clean up the asm/page.h to
get rid of the arch specific details which are not needed by the
generic code.

Also rename the symbols to prevent conflicts, e.g.:
  BLOCK_SHIFT => SWAPPER_BLOCK_SHIFT

Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: debug: Fix typo in debug-monitors.c
Yang Shi [Fri, 18 Sep 2015 21:09:00 +0000 (14:09 -0700)]
arm64: debug: Fix typo in debug-monitors.c

Fix handers to handlers.

Signed-off-by: Yang Shi <yang.shi@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: AArch32 user space PC alignment exception
Mark Salyzyn [Tue, 13 Oct 2015 21:30:51 +0000 (14:30 -0700)]
arm64: AArch32 user space PC alignment exception

ARMv7 does not have a PC alignment exception. ARMv8 AArch32
user space, however, can produce a PC alignment exception. Add a
handler so that we do not dump an unexpected stack trace in
the logs.

Signed-off-by: Mark Salyzyn <salyzyn@android.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: Minor coding style fixes for kc_offset_to_vaddr and kc_vaddr_to_offset
Catalin Marinas [Fri, 16 Oct 2015 13:34:50 +0000 (14:34 +0100)]
arm64: Minor coding style fixes for kc_offset_to_vaddr and kc_vaddr_to_offset

These were introduced by commit 03875ad52fdd (arm64: add
kc_offset_to_vaddr and kc_vaddr_to_offset macro).

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoRevert "arm64: ioremap: add ioremap_cache macro"
Catalin Marinas [Tue, 13 Oct 2015 15:18:17 +0000 (16:18 +0100)]
Revert "arm64: ioremap: add ioremap_cache macro"

This reverts commit 1b6d7f8742d5d46c478f10c9e57da18d049b116d.

This patch would conflict with Dan Williams' "tree-wide convert to
memremap()" series (ioremap_cache replaced by arch_memremap)

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: add kc_offset_to_vaddr and kc_vaddr_to_offset macro
yalin wang [Mon, 12 Oct 2015 06:52:59 +0000 (14:52 +0800)]
arm64: add kc_offset_to_vaddr and kc_vaddr_to_offset macro

This patch adds kc_offset_to_vaddr() and kc_vaddr_to_offset().
The default versions don't work on arm64 because arm64 kernel addresses
are below PAGE_OFFSET; module and vmemmap addresses, for example, are
all below PAGE_OFFSET.

Signed-off-by: yalin wang <yalin.wang2010@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: ioremap: add ioremap_cache macro
yalin wang [Mon, 12 Oct 2015 02:28:18 +0000 (10:28 +0800)]
arm64: ioremap: add ioremap_cache macro

Add an ioremap_cache macro, because some code tests whether this macro
is defined and generates a generic version if it is not defined;
memremap.c, for example, does this.

Signed-off-by: yalin wang <yalin.wang2010@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: kasan: fix issues reported by sparse
Will Deacon [Tue, 13 Oct 2015 13:01:06 +0000 (14:01 +0100)]
arm64: kasan: fix issues reported by sparse

Sparse reports some new issues introduced by the kasan patches:

  arch/arm64/mm/kasan_init.c:91:13: warning: no previous prototype for
  'kasan_early_init' [-Wmissing-prototypes] void __init kasan_early_init(void)
             ^
  arch/arm64/mm/kasan_init.c:91:13: warning: symbol 'kasan_early_init'
  was not declared. Should it be static? [sparse]

This patch resolves the problem by adding a prototype for
kasan_early_init and marking the function as asmlinkage, since it's only
called from head.S.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoDocumentation/features/KASAN: arm64 supports KASAN now
Andrey Ryabinin [Mon, 12 Oct 2015 15:53:00 +0000 (18:53 +0300)]
Documentation/features/KASAN: arm64 supports KASAN now

Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoARM64: kasan: print memory assignment
Linus Walleij [Mon, 12 Oct 2015 15:52:59 +0000 (18:52 +0300)]
ARM64: kasan: print memory assignment

This prints out the virtual memory assigned to KASan in the
boot crawl along with other memory assignments, if and only
if KASan is activated.

Example dmesg from the Juno Development board:

Memory: 1691156K/2080768K available (5465K kernel code, 444K rwdata,
2160K rodata, 340K init, 217K bss, 373228K reserved, 16384K cma-reserved)
Virtual kernel memory layout:
    kasan   : 0xffffff8000000000 - 0xffffff9000000000   (    64 GB)
    vmalloc : 0xffffff9000000000 - 0xffffffbdbfff0000   (   182 GB)
    vmemmap : 0xffffffbdc0000000 - 0xffffffbfc0000000   (     8 GB maximum)
              0xffffffbdc2000000 - 0xffffffbdc3fc0000   (    31 MB actual)
    fixed   : 0xffffffbffabfd000 - 0xffffffbffac00000   (    12 KB)
    PCI I/O : 0xffffffbffae00000 - 0xffffffbffbe00000   (    16 MB)
    modules : 0xffffffbffc000000 - 0xffffffc000000000   (    64 MB)
    memory  : 0xffffffc000000000 - 0xffffffc07f000000   (  2032 MB)
      .init : 0xffffffc0007f5000 - 0xffffffc00084a000   (   340 KB)
      .text : 0xffffffc000080000 - 0xffffffc0007f45b4   (  7634 KB)
      .data : 0xffffffc000850000 - 0xffffffc0008bf200   (   445 KB)

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: add KASAN support
Andrey Ryabinin [Mon, 12 Oct 2015 15:52:58 +0000 (18:52 +0300)]
arm64: add KASAN support

This patch adds arch specific code for kernel address sanitizer
(see Documentation/kasan.txt).

1/8 of the kernel address space is reserved for shadow memory. There was
no hole big enough for this, so the virtual addresses for the shadow were
stolen from the vmalloc area.

At the early boot stage the whole shadow region is populated with just
one physical page (kasan_zero_page). Later, this page is reused as a
read-only zero shadow for some memory that KASan currently doesn't
track (vmalloc).
After mapping the physical memory, pages for shadow memory are
allocated and mapped.

Functions like memset/memmove/memcpy do a lot of memory accesses.
If a bad pointer is passed to one of these functions, it is important
to catch it. The compiler's instrumentation cannot do this since
these functions are written in assembly.
KASan replaces these memory functions with manually instrumented variants.
The original functions are declared as weak symbols so that the strong
definitions in mm/kasan/kasan.c can replace them. The original functions
have aliases with a '__' prefix in their names, so the non-instrumented
variants can be called if needed.
Some files are built without kasan instrumentation (e.g. mm/slub.c).
For such files, the original mem* functions are replaced (via #define)
with the prefixed variants, to disable memory access checks there.

Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
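The 1/8 shadow ratio mentioned above is the standard KASAN address translation, roughly as in the generic <linux/kasan.h> helper (the scale and offset constants are per-architecture choices):

  /* One shadow byte describes 8 bytes of kernel memory. */
  static inline void *kasan_mem_to_shadow(const void *addr)
  {
          return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
                  + KASAN_SHADOW_OFFSET;
  }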
8 years agoarm64: move PGD_SIZE definition to pgalloc.h
Andrey Ryabinin [Mon, 12 Oct 2015 15:52:57 +0000 (18:52 +0300)]
arm64: move PGD_SIZE definition to pgalloc.h

This will be used by KASAN latter.

Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: atomics: implement native {relaxed, acquire, release} atomics
Will Deacon [Thu, 8 Oct 2015 19:15:18 +0000 (20:15 +0100)]
arm64: atomics: implement native {relaxed, acquire, release} atomics

Commit 654672d4ba1a ("locking/atomics: Add _{acquire|release|relaxed}()
variants of some atomic operation") introduced a relaxed atomic API to
Linux that maps nicely onto the arm64 memory model, including the new
ARMv8.1 atomic instructions.

This patch hooks up the API to our relaxed atomic instructions, rather
than have them all expand to the full-barrier variants as they do
currently.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
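For callers, the API introduced by commit 654672d4ba1a looks roughly like this usage sketch (ordering comments are informal):

  static atomic_t v = ATOMIC_INIT(0);

  static void ordering_examples(void)
  {
          atomic_add_return_relaxed(1, &v);   /* no ordering guarantee                   */
          atomic_add_return_acquire(1, &v);   /* orders against later memory accesses    */
          atomic_add_return_release(1, &v);   /* orders against earlier memory accesses  */
          atomic_cmpxchg_relaxed(&v, 3, 0);   /* unordered compare-and-exchange          */
  }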
8 years agoarm64/efi: isolate EFI stub from the kernel proper
Ard Biesheuvel [Thu, 8 Oct 2015 19:02:04 +0000 (20:02 +0100)]
arm64/efi: isolate EFI stub from the kernel proper

Since arm64 does not use a builtin decompressor, the EFI stub is built
into the kernel proper. So far, this has been working fine, but actually,
since the stub is in fact a PE/COFF relocatable binary that is executed
at an unknown offset in the 1:1 mapping provided by the UEFI firmware, we
should not be seamlessly sharing code with the kernel proper, which is a
position dependent executable linked at a high virtual offset.

So instead, separate the contents of libstub and its dependencies, by
putting them into their own namespace by prefixing all of its symbols
with __efistub. This way, we have tight control over what parts of the
kernel proper are referenced by the stub.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Matt Fleming <matt.fleming@intel.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: use ENDPIPROC() to annotate position independent assembler routines
Ard Biesheuvel [Thu, 8 Oct 2015 19:02:03 +0000 (20:02 +0100)]
arm64: use ENDPIPROC() to annotate position independent assembler routines

For more control over which functions are called with the MMU off or
with the UEFI 1:1 mapping active, annotate some assembler routines as
position independent. This is done by introducing ENDPIPROC(), which
replaces the ENDPROC() declaration of those routines.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64/efi: remove /chosen/linux, uefi-stub-kern-ver DT property
Ard Biesheuvel [Thu, 8 Oct 2015 19:02:02 +0000 (20:02 +0100)]
arm64/efi: remove /chosen/linux, uefi-stub-kern-ver DT property

With the stub to kernel interface being promoted to a proper interface
so that other agents than the stub can boot the kernel proper in EFI
mode, we can remove the linux,uefi-stub-kern-ver field, considering
that its original purpose was to prevent this from happening in the
first place.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Matt Fleming <matt.fleming@intel.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: Fix missing #include in hw_breakpoint.c
Catalin Marinas [Mon, 12 Oct 2015 11:10:53 +0000 (12:10 +0100)]
arm64: Fix missing #include in hw_breakpoint.c

A prior commit, which detects the hw_breakpoint ABI behaviour based on
the target state, missed the asm/compat.h include, and the build fails
with !CONFIG_COMPAT.

Fixes: 8f48c0629049 ("arm64: hw_breakpoint: use target state to determine ABI behaviour")
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: fix a migrating irq bug when hotplug cpu
Yang Yingliang [Thu, 24 Sep 2015 09:32:14 +0000 (17:32 +0800)]
arm64: fix a migrating irq bug when hotplug cpu

When a CPU is disabled, all IRQs will be migrated to another CPU.
In some cases the new affinity is different and the old affinity needs
to be updated; but if irq_set_affinity's return value is IRQ_SET_MASK_OK_DONE,
the old affinity cannot be updated. Fix it by using irq_do_set_affinity.

Since migrating interrupts is a core code matter, use the generic
function irq_migrate_all_off_this_cpu() from kernel/irq/migration.c
to migrate the interrupts.

Cc: Jiang Liu <jiang.liu@linux.intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Russell King - ARM Linux <linux@arm.linux.org.uk>
Cc: Hanjun Guo <hanjun.guo@linaro.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoMerge branch 'irq/for-arm' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Catalin Marinas [Fri, 9 Oct 2015 15:47:34 +0000 (16:47 +0100)]
Merge branch 'irq/for-arm' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

* 'irq/for-arm' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  genirq: Introduce generic irq migration for cpu hotunplug

8 years agoarm64: Mark kernel page ranges contiguous
Jeremy Linton [Wed, 7 Oct 2015 17:00:25 +0000 (12:00 -0500)]
arm64: Mark kernel page ranges contiguous

With 64k pages, the next larger segment size is 512M. The Linux
kernel also uses different protection flags to cover its code and data.
Because of this requirement, the vast majority of the kernel code and
data structures end up being mapped with 64k pages instead of the larger
pages common with a 4k page kernel.

Recent ARM processors support a contiguous bit in the
page tables which allows a TLB entry to cover a range larger than a
single PTE if that range is mapped into physically contiguous
RAM.

So, for the kernel it's a good idea to set this flag. Some basic
micro benchmarks show it can significantly reduce the number of
L1 dTLB refills.

Add a boot option to enable/disable CONT marking, as well as fix a
bug found by Steve Capper.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
[catalin.marinas@arm.com: remove CONFIG_ARM64_CONT_PTE altogether]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: Make the kernel page dump utility aware of the CONT bit
Jeremy Linton [Wed, 7 Oct 2015 17:00:23 +0000 (12:00 -0500)]
arm64: Make the kernel page dump utility aware of the CONT bit

The kernel page dump utility needs to be aware of the CONT bit before
it will break up page ranges for display.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: Default kernel pages should be contiguous
Jeremy Linton [Wed, 7 Oct 2015 17:00:22 +0000 (12:00 -0500)]
arm64: Default kernel pages should be contiguous

The default page attributes for a PMD being broken should have the CONT bit
set. Create a new definition for an early boot range of PTEs that are
contiguous.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: Macros to check/set/unset the contiguous bit
Jeremy Linton [Wed, 7 Oct 2015 17:00:21 +0000 (12:00 -0500)]
arm64: Macros to check/set/unset the contiguous bit

Add the supporting macros to check if the contiguous bit
is set, set the bit, or clear it in a PTE entry.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: PTE/PMD contiguous bit definition
Jeremy Linton [Wed, 7 Oct 2015 17:00:20 +0000 (12:00 -0500)]
arm64: PTE/PMD contiguous bit definition

Define the bit positions in the PTE and PMD for the
contiguous bit.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: Add contiguous page flag shifts and constants
Jeremy Linton [Wed, 7 Oct 2015 17:00:19 +0000 (12:00 -0500)]
arm64: Add contiguous page flag shifts and constants

Add the number of pages required to form a contiguous range,
as well as some supporting constants.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoMAINTAINERS: add myself as arm perf reviewer
Mark Rutland [Fri, 2 Oct 2015 09:55:08 +0000 (10:55 +0100)]
MAINTAINERS: add myself as arm perf reviewer

As suggested by Will Deacon, add myself as a reviewer of the ARM PMU
profiling and debugging code.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoMAINTAINERS: update ARM PMU profiling and debugging for arm64
Mark Rutland [Fri, 2 Oct 2015 09:55:07 +0000 (10:55 +0100)]
MAINTAINERS: update ARM PMU profiling and debugging for arm64

Will Deacon maintains the profiling and debugging code under both
arch/arm and arch/arm64. Update MAINTAINERS to reflect this, in
preparation for adding myself as a reviewer of said code.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: dts: juno: describe PMUs separately
Mark Rutland [Fri, 2 Oct 2015 09:55:06 +0000 (10:55 +0100)]
arm64: dts: juno: describe PMUs separately

The A57 and A53 PMUs in Juno support different events, so describe them
separately in both the Juno and Juno R1 DTs.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Liviu Dudau <liviu.dudau@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: perf: add Cortex-A57 support
Mark Rutland [Fri, 2 Oct 2015 09:55:05 +0000 (10:55 +0100)]
arm64: perf: add Cortex-A57 support

The Cortex-A57 PMU supports a few events outside of the required PMUv3
set that are rather useful.

This patch adds the event map data for said events.
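
An event map is just a table indexed by the generic perf event IDs. The
sketch below shows the shape only; the ARMV8_PMUV3_* identifiers follow
the existing PMUv3 naming but are placeholders rather than the exact
Cortex-A57 additions:

    static const unsigned armv8_a57_perf_map[PERF_COUNT_HW_MAX] = {
        PERF_MAP_ALL_UNSUPPORTED,       /* default every entry to unsupported */
        [PERF_COUNT_HW_CPU_CYCLES]   = ARMV8_PMUV3_PERFCTR_CLOCK_CYCLES,
        [PERF_COUNT_HW_INSTRUCTIONS] = ARMV8_PMUV3_PERFCTR_INSTR_EXECUTED,
        /* ...plus the A57-specific encodings added by this patch */
    };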

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: perf: add Cortex-A53 support
Mark Rutland [Fri, 2 Oct 2015 09:55:04 +0000 (10:55 +0100)]
arm64: perf: add Cortex-A53 support

The Cortex-A53 PMU supports a few events outside of the required PMUv3
set that are rather useful.

This patch adds the event map data for said events.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: perf: move to shared arm_pmu framework
Mark Rutland [Fri, 2 Oct 2015 09:55:03 +0000 (10:55 +0100)]
arm64: perf: move to shared arm_pmu framework

Now that the arm_pmu framework has been factored out to drivers/perf we
can make use of it for arm64, gaining support for heterogeneous PMUs
and unifying the two codebases before they diverge further.

The as-yet-unused PMU name for PMUv3 is changed to armv8_pmuv3, matching
the style previously applied to the 32-bit PMUs.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: hw_breakpoint: use target state to determine ABI behaviour
Will Deacon [Wed, 7 Oct 2015 10:37:36 +0000 (11:37 +0100)]
arm64: hw_breakpoint: use target state to determine ABI behaviour

The arm64 hw_breakpoint interface is slightly less flexible than its
32-bit counterpart, thanks to some changes in the architecture rendering
unaligned watchpoint addresses obsolete for AArch64.

However, in a multi-arch environment (i.e. debugging a 32-bit target
with a 64-bit GDB under a 64-bit kernel), we need to provide a feature
compatible interface to GDB in order for debugging to function correctly.

This patch adds a new helper, is_compat_bp, to our hw_breakpoint
implementation which changes the interface behaviour based on the
architecture of the debug target as opposed to the debugger itself.
This allows debugging to function as expected for multi-arch
configurations without relying on deprecated architectural behaviours
when debugging native applications.
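
The helper presumably keys off the task being debugged (bp->hw.target)
rather than current; an approximate shape, as a sketch:

    static bool is_compat_bp(struct perf_event *bp)
    {
        struct task_struct *tsk = bp->hw.target;

        /* Follow the ABI of the debug target, not the debugger. */
        return tsk && is_compat_thread(task_thread_info(tsk));
    }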

Cc: Yao Qi <yao.qi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: mm: remove dsb from update_mmu_cache
Will Deacon [Tue, 6 Oct 2015 17:46:30 +0000 (18:46 +0100)]
arm64: mm: remove dsb from update_mmu_cache

update_mmu_cache() consists of a dsb(ishst) instruction so that new user
mappings are guaranteed to be visible to the page table walker on
exception return.

In reality this can be a very expensive operation which is rarely needed.
Removing this barrier shows a modest improvement in hackbench scores
and, in the worst case, we re-take the user fault and establish that there
was nothing to do.
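
Conceptually the change reduces the hook to a no-op (a sketch, not the
literal diff):

    static inline void update_mmu_cache(struct vm_area_struct *vma,
                                        unsigned long addr, pte_t *ptep)
    {
        /*
         * Previously: dsb(ishst) to publish the new PTE before returning
         * to userspace. Now: do nothing and accept a spurious, trivially
         * resolved fault in the rare case the walker has not yet observed
         * the update.
         */
    }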

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: tlb: remove redundant barrier from __flush_tlb_pgtable
Will Deacon [Tue, 6 Oct 2015 17:46:29 +0000 (18:46 +0100)]
arm64: tlb: remove redundant barrier from __flush_tlb_pgtable

__flush_tlb_pgtable is used to invalidate intermediate page table
entries after they have been cleared and are about to be freed. Since
the pXd_clear() helpers imply memory barriers, we don't need the extra
one here.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: mm: kill mm_cpumask usage
Will Deacon [Tue, 6 Oct 2015 17:46:28 +0000 (18:46 +0100)]
arm64: mm: kill mm_cpumask usage

mm_cpumask isn't actually used for anything on arm64, so remove all the
code trying to keep it up-to-date.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: switch_mm: simplify mm and CPU checks
Will Deacon [Tue, 6 Oct 2015 17:46:27 +0000 (18:46 +0100)]
arm64: switch_mm: simplify mm and CPU checks

switch_mm performs some checks to try and avoid entering the ASID
allocator:

  (1) If we're switching to the init_mm (no user mappings), then simply
      set a reserved TTBR0 value with no page table (the zero page)

  (2) If prev == next *and* the mm_cpumask indicates that we've run on
      this CPU before, then we can skip the allocator.

However, there is plenty of redundancy here. With the new ASID allocator,
if prev == next, then we know that our ASID is valid and do not need to
worry about re-allocation. Consequently, we can drop the mm_cpumask check
in (2) and move the prev == next check before the init_mm check, since
if prev == next == init_mm then there's nothing to do.
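
The resulting check ordering looks roughly like this (a sketch of the
switch_mm() body, not the exact patch):

    if (prev == next)
        return;                         /* ASID already valid */

    if (next == &init_mm) {
        cpu_set_reserved_ttbr0();       /* no user mappings to install */
        return;
    }

    check_and_switch_context(next, cpu);    /* may allocate a new ASID */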

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: tlbflush: avoid flushing when fullmm == 1
Will Deacon [Tue, 6 Oct 2015 17:46:26 +0000 (18:46 +0100)]
arm64: tlbflush: avoid flushing when fullmm == 1

The TLB gather code sets fullmm=1 when tearing down the entire address
space for an mm_struct on exit or execve. Given that the ASID allocator
will never re-allocate a dirty ASID, this flushing is not needed and can
simply be avoided in the flushing code.
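
A hedged sketch of the early-out in the arch tlb_flush() hook:

    static inline void tlb_flush(struct mmu_gather *tlb)
    {
        /*
         * fullmm means the whole address space is going away; the ASID
         * will never be handed out again without a flush, so skip the
         * TLB invalidation entirely.
         */
        if (tlb->fullmm)
            return;

        flush_tlb_mm(tlb->mm);      /* simplified; the real hook flushes a range */
    }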

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: tlbflush: remove redundant ASID casts to (unsigned long)
Will Deacon [Tue, 6 Oct 2015 17:46:25 +0000 (18:46 +0100)]
arm64: tlbflush: remove redundant ASID casts to (unsigned long)

The ASID macro returns a 64-bit (long long) value, so there is no need
to cast to (unsigned long) before shifting prior to a TLBI operation.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: mm: rewrite ASID allocator and MM context-switching code
Will Deacon [Tue, 6 Oct 2015 17:46:24 +0000 (18:46 +0100)]
arm64: mm: rewrite ASID allocator and MM context-switching code

Our current switch_mm implementation suffers from a number of problems:

  (1) The ASID allocator relies on IPIs to synchronise the CPUs on a
      rollover event

  (2) Because of (1), we cannot allocate ASIDs with interrupts disabled
      and therefore make use of a TIF_SWITCH_MM flag to postpone the
      actual switch to finish_arch_post_lock_switch

  (3) We run context switch with a reserved (invalid) TTBR0 value, even
      though the ASID and pgd are updated atomically

  (4) We take a global spinlock (cpu_asid_lock) during context-switch

  (5) We use h/w broadcast TLB operations when they are not required
      (e.g. in flush_context)

This patch addresses these problems by rewriting the ASID algorithm to
match the bitmap-based arch/arm/ implementation more closely. This in
turn allows us to remove much of the complications surrounding switch_mm,
including the ugly thread flag.
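
Conceptually, the bitmap-based scheme tags each mm with an
(ASID, generation) pair; the fast path only compares generations and
just the rollover path takes a lock. A rough sketch with hypothetical
names (asid_generation, asid_lock, ASID_BITS are illustrative):

    unsigned long flags;
    u64 id  = atomic64_read(&mm->context.id);
    u64 gen = atomic64_read(&asid_generation);

    if ((id ^ gen) >> ASID_BITS) {          /* stale generation */
        raw_spin_lock_irqsave(&asid_lock, flags);
        id = new_context(mm);               /* free bit in the bitmap, or roll over */
        atomic64_set(&mm->context.id, id);
        raw_spin_unlock_irqrestore(&asid_lock, flags);
    }

    cpu_switch_mm(mm->pgd, mm);             /* install pgd and ASID together */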

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: flush: use local TLB and I-cache invalidation
Will Deacon [Tue, 6 Oct 2015 17:46:23 +0000 (18:46 +0100)]
arm64: flush: use local TLB and I-cache invalidation

There are a number of places where a single CPU is running with a
private page-table and we need to perform maintenance on the TLB and
I-cache in order to ensure correctness, but do not require the operation
to be broadcast to other CPUs.

This patch adds local variants of tlb_flush_all and __flush_icache_all
to support these use-cases and updates the callers respectively.
__local_flush_icache_all also implies an isb, since it is intended to be
used synchronously.
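
The local variants presumably use non-shareable (nsh) barriers around a
non-broadcast invalidation, roughly:

    static inline void local_flush_tlb_all(void)
    {
        dsb(nshst);
        asm("tlbi vmalle1");        /* this CPU only, no IS broadcast */
        dsb(nsh);
        isb();
    }

    static inline void __local_flush_icache_all(void)
    {
        asm("ic iallu");
        dsb(nsh);
        isb();                      /* synchronise before returning */
    }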

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: David Daney <david.daney@cavium.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: proc: de-scope TLBI operation during cold boot
Will Deacon [Tue, 6 Oct 2015 17:46:22 +0000 (18:46 +0100)]
arm64: proc: de-scope TLBI operation during cold boot

When cold-booting a CPU, we must invalidate any junk entries from the
local TLB prior to enabling the MMU. This doesn't require broadcasting
within the inner-shareable domain, so de-scope the operation to apply
only to the local CPU.
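
The change amounts to dropping the inner-shareable qualifier from the
invalidate-all operation in the CPU setup path; an approximate sketch as
inline assembly:

    asm volatile(
    "   tlbi    vmalle1\n"          /* was: tlbi vmalle1is (IS broadcast) */
    "   dsb     nsh\n"              /* was: dsb ish */
    "   isb\n"
    : : : "memory");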

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: mm: remove unused cpu_set_idmap_tcr_t0sz function
Will Deacon [Tue, 6 Oct 2015 17:46:21 +0000 (18:46 +0100)]
arm64: mm: remove unused cpu_set_idmap_tcr_t0sz function

With commit b08d4640a3dc ("arm64: remove dead code"),
cpu_set_idmap_tcr_t0sz is no longer called and can therefore be removed
from the kernel.

This patch removes the function and effectively inlines the helper
function __cpu_set_tcr_t0sz into cpu_set_default_tcr_t0sz.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8 years agoarm64: introduce VA_START macro - the first kernel virtual address.
Andrey Ryabinin [Thu, 17 Sep 2015 09:38:07 +0000 (12:38 +0300)]
arm64: introduce VA_START macro - the first kernel virtual address.

Rather than use the lengthy (UL(0xffffffffffffffff) << VA_BITS) expression
everywhere, replace it with VA_START.
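
The macro simply names the expression quoted above:

    #define VA_START    (UL(0xffffffffffffffff) << VA_BITS)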

Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>