The vDSO uses arch_counter_enforce_ordering(), so it needs to work as intended in purecap (when PCuABI is selected). At the moment, the macro creates a dummy pointer from the 64-bit value of SP and then dereferences it. To make this valid in purecap, derive a valid capability from CSP instead.
This issue went unnoticed due to DDC allowing all X-based loads/stores, even in purecap. Once DDC is nullified, this patch is required to avoid a capability fault when arch_counter_enforce_ordering() is called from the purecap vDSO.
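For context, the macro's user in the vDSO is __arch_get_hw_counter() in arch/arm64/include/asm/vdso/gettimeofday.h. Slightly simplified from mainline, it reads:

	static __always_inline u64 __arch_get_hw_counter(s32 clock_mode,
							 const struct vdso_data *vd)
	{
		u64 res;

		if (clock_mode == VDSO_CLOCKMODE_NONE)
			return 0;

		/* Prevent the counter read itself from being speculated. */
		isb();
		asm volatile("mrs %0, cntvct_el0" : "=r" (res) :: "memory");
		/*
		 * EOR-ing the counter value with itself yields zero, which
		 * is added to SP (CSP in purecap) and dereferenced: a dummy
		 * load with an address dependency on the counter value,
		 * preventing subsequent accesses from being hoisted above
		 * the counter read.
		 */
		arch_counter_enforce_ordering(res);

		return res;
	}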
Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
---
 arch/arm64/include/asm/barrier.h | 13 +++++++++++++
 1 file changed, 13 insertions(+)
diff --git a/arch/arm64/include/asm/barrier.h b/arch/arm64/include/asm/barrier.h
index cf2987464c18..38bf8e0d9655 100644
--- a/arch/arm64/include/asm/barrier.h
+++ b/arch/arm64/include/asm/barrier.h
@@ -106,6 +106,18 @@ static inline unsigned long array_index_mask_nospec(unsigned long idx,
  *
  * https://lore.kernel.org/r/alpine.DEB.2.21.1902081950260.1662@nanos.tec.linut...
  */
+#ifdef __CHERI_PURE_CAPABILITY__
+#define arch_counter_enforce_ordering(val) do {				\
+	u64 tmp, _val = (val);						\
+	void *ptr;							\
+									\
+	asm volatile(							\
+	"	eor	%0, %2, %2\n"					\
+	"	add	%1, csp, %0\n"					\
+	"	ldr	xzr, [%1]"					\
+	: "=r" (tmp), "=r"(ptr) : "r" (_val));				\
+} while (0)
+#else /* __CHERI_PURE_CAPABILITY__ */
 #define arch_counter_enforce_ordering(val) do {				\
 	u64 tmp, _val = (val);						\
 									\
@@ -115,6 +127,7 @@ static inline unsigned long array_index_mask_nospec(unsigned long idx,
 	"	ldr	xzr, [%0]"					\
 	: "=r" (tmp) : "r" (_val));					\
 } while (0)
+#endif /* __CHERI_PURE_CAPABILITY__ */
 
 #define __smp_mb()	dmb(ish)
 #define __smp_rmb()	dmb(ishld)