Index: linux-5.0.19-rt11/Documentation/preempt-locking.txt
===================================================================
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:55 @ preemption must be disabled around such
 
 Note, some FPU functions are already explicitly preempt safe.  For example,
 kernel_fpu_begin and kernel_fpu_end will disable and enable preemption.
-However, fpu__restore() must be called with preemption disabled.
 
 
 RULE #3: Lock acquire and release must be performed by same task
Index: linux-5.0.19-rt11/Documentation/printk-ringbuffer.txt
===================================================================
--- /dev/null
+++ linux-5.0.19-rt11/Documentation/printk-ringbuffer.txt
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4 @
+struct printk_ringbuffer
+------------------------
+John Ogness <john.ogness@linutronix.de>
+
+Overview
+~~~~~~~~
+As the name suggests, this ring buffer was implemented specifically to serve
+the needs of the printk() infrastructure. The ring buffer itself is not
+specific to printk and could be used for other purposes. _However_, the
+requirements and semantics of printk are rather unique. If you intend to use
+this ring buffer for anything other than printk, you need to be very clear on
+its features, behavior, and pitfalls.
+
+Features
+^^^^^^^^
+The printk ring buffer has the following features:
+
+- single global buffer
+- resides in initialized data section (available at early boot)
+- lockless readers
+- supports multiple writers
+- supports multiple non-consuming readers
+- safe from any context (including NMI)
+- groups bytes into variable length blocks (referenced by entries)
+- entries tagged with sequence numbers
+
+Behavior
+^^^^^^^^
+Since the printk ring buffer readers are lockless, there exists no
+synchronization between readers and writers. Basically writers are the tasks
+in control and may overwrite any and all committed data at any time and from
+any context. For this reason, readers can miss entries if they are overwritten
+before the reader is able to access the data. The reader API implementation
+is such that reader access to entries is atomic, so there is no risk of
+readers having to deal with partial or corrupt data. Also, entries are
+tagged with sequence numbers so readers can recognize if entries were missed.
+
+Writing to the ring buffer consists of 2 steps. First a writer must reserve
+an entry of desired size. After this step the writer has exclusive access
+to the memory region. Once the data has been written to memory, it needs to
+be committed to the ring buffer. After this step the entry has been inserted
+into the ring buffer and assigned an appropriate sequence number.
+
+Once committed, a writer must no longer access the data directly. This is
+because the data may have been overwritten and no longer exists. If a
+writer must access the data, it should either keep a private copy before
+committing the entry or use the reader API to gain access to the data.
+
+Because of how the data backend is implemented, entries that have been
+reserved but not yet committed act as barriers, preventing future writers
+from filling the ring buffer beyond the location of the reserved but not
+yet committed entry region. For this reason it is *important* that writers
+perform both reserve and commit as quickly as possible. Also, be aware that
+preemption and local interrupts are disabled and writing to the ring buffer
+is processor-reentrant locked during the reserve/commit window. Writers in
+NMI contexts can still preempt any other writers, but as long as these
+writers do not write a large amount of data with respect to the ring buffer
+size, this should not become an issue.
+
+API
+~~~
+
+Declaration
+^^^^^^^^^^^
+The printk ring buffer can be instantiated as a static structure:
+
+ /* declare a static struct printk_ringbuffer */
+ #define DECLARE_STATIC_PRINTKRB(name, szbits, cpulockptr)
+
+The value of szbits specifies the size of the ring buffer as a power of 2
+(the buffer will be 2^szbits bytes). The cpulockptr argument is a pointer to
+a prb_cpulock struct that is used to
+perform processor-reentrant spin locking for the writers. It is specified
+externally because it may be used for multiple ring buffers (or other
+code) to synchronize writers without risk of deadlock.
+
+Here is an example of a declaration of a printk ring buffer specifying a
+32KB (2^15) ring buffer:
+
+....
+DECLARE_STATIC_PRINTKRB_CPULOCK(rb_cpulock);
+DECLARE_STATIC_PRINTKRB(rb, 15, &rb_cpulock);
+....
+
+If writers will be using multiple ring buffers and the ordering of that usage
+is not clear, the same prb_cpulock should be used for all of them.
+
+Writer API
+^^^^^^^^^^
+The writer API consists of 2 functions. The first is to reserve an entry in
+the ring buffer, the second is to commit that data to the ring buffer. The
+reserved entry information is stored within a provided `struct prb_handle`.
+
+ /* reserve an entry */
+ char *prb_reserve(struct prb_handle *h, struct printk_ringbuffer *rb,
+                   unsigned int size);
+
+ /* commit a reserved entry to the ring buffer */
+ void prb_commit(struct prb_handle *h);
+
+Here is an example of a function to write data to a ring buffer:
+
+....
+int write_data(struct printk_ringbuffer *rb, char *data, int size)
+{
+    struct prb_handle h;
+    char *buf;
+
+    buf = prb_reserve(&h, rb, size);
+    if (!buf)
+        return -1;
+    memcpy(buf, data, size);
+    prb_commit(&h);
+
+    return 0;
+}
+....
+
+Pitfalls
+++++++++
+Be aware that prb_reserve() can fail. A retry might be successful, but it
+depends entirely on whether or not the next part of the ring buffer to
+overwrite belongs to reserved but not yet committed entries of other writers.
+Writers can use the prb_inc_lost() function to allow readers to notice that a
+message was lost.
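+
+For example, a writer might retry the reservation once and account for the
+loss if it still fails. The following is only a minimal sketch (the helper
+name is made up for illustration), assuming the writer API above:
+
+....
+int write_or_account_loss(struct printk_ringbuffer *rb, char *data,
+                          int size)
+{
+    struct prb_handle h;
+    char *buf;
+
+    buf = prb_reserve(&h, rb, size);
+    if (!buf) {
+        /* retry once: other writers may have committed in the meantime */
+        buf = prb_reserve(&h, rb, size);
+    }
+    if (!buf) {
+        /* give up; readers will notice the gap in sequence numbers */
+        prb_inc_lost(rb);
+        return -1;
+    }
+    memcpy(buf, data, size);
+    prb_commit(&h);
+
+    return 0;
+}
+....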
+
+Reader API
+^^^^^^^^^^
+The reader API utilizes a `struct prb_iterator` to track the reader's
+position in the ring buffer.
+
+ /* declare a pre-initialized static iterator for a ring buffer */
+ #define DECLARE_STATIC_PRINTKRB_ITER(name, rbaddr)
+
+ /* initialize iterator for a ring buffer (if static macro NOT used) */
+ void prb_iter_init(struct prb_iterator *iter,
+                    struct printk_ringbuffer *rb, u64 *seq);
+
+ /* make a deep copy of an iterator */
+ void prb_iter_copy(struct prb_iterator *dest,
+                    struct prb_iterator *src);
+
+ /* non-blocking, advance to next entry (and read the data) */
+ int prb_iter_next(struct prb_iterator *iter, char *buf,
+                   int size, u64 *seq);
+
+ /* blocking, advance to next entry (and read the data) */
+ int prb_iter_wait_next(struct prb_iterator *iter, char *buf,
+                        int size, u64 *seq);
+
+ /* position iterator at the entry seq */
+ int prb_iter_seek(struct prb_iterator *iter, u64 seq);
+
+ /* read data at current position */
+ int prb_iter_data(struct prb_iterator *iter, char *buf,
+                   int size, u64 *seq);
+
+Typically prb_iter_data() is not needed because the data can be retrieved
+directly with prb_iter_next().
+
+Here is an example of a non-blocking function that will read all the data in
+a ring buffer:
+
+....
+void read_all_data(struct printk_ringbuffer *rb, char *buf, int size)
+{
+    struct prb_iterator iter;
+    u64 prev_seq = 0;
+    u64 seq;
+    int ret;
+
+    prb_iter_init(&iter, rb, NULL);
+
+    for (;;) {
+        ret = prb_iter_next(&iter, buf, size, &seq);
+        if (ret > 0) {
+            if (seq != ++prev_seq) {
+                /* "seq - prev_seq" entries missed */
+                prev_seq = seq;
+            }
+            /* process buf here */
+        } else if (ret == 0) {
+            /* hit the end, done */
+            break;
+        } else if (ret < 0) {
+            /*
+             * iterator is invalid, a writer overtook us, reset the
+             * iterator and keep going, entries were missed
+             */
+            prb_iter_init(&iter, rb, NULL);
+        }
+    }
+}
+....
+
+Pitfalls
+++++++++
+The reader's iterator can become invalid at any time because the reader was
+overtaken by a writer. Typically the reader should reset the iterator back
+to the current oldest entry (which will be newer than the entry the reader
+was at) and continue, noting the number of entries that were missed.
+
+Utility API
+^^^^^^^^^^^
+Several functions are available as convenience for external code.
+
+ /* query the size of the data buffer */
+ int prb_buffer_size(struct printk_ringbuffer *rb);
+
+ /* skip a seq number to signify a lost record */
+ void prb_inc_lost(struct printk_ringbuffer *rb);
+
+ /* processor-reentrant spin lock */
+ void prb_lock(struct prb_cpulock *cpu_lock, unsigned int *cpu_store);
+
+ /* processor-reentrant spin unlock */
+ void prb_unlock(struct prb_cpulock *cpu_lock, unsigned int cpu_store);
+
+Pitfalls
+++++++++
+Although the value returned by prb_buffer_size() does represent an absolute
+upper bound, the amount of data that can be stored within the ring buffer
+is actually less because each entry also requires storage space for a header.
+
+The prb_lock() and prb_unlock() functions can be used to synchronize between
+ring buffer writers and other external activities. The function of a
+processor-reentrant spin lock is to disable preemption and local interrupts
+and synchronize against other processors. It does *not* protect against
+multiple contexts of a single processor, i.e. NMI.
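+
+As an example, external code could hold the processor-reentrant lock while
+emitting data that must not interleave with the ring buffer writers. This is
+only a minimal sketch, assuming the rb_cpulock declared earlier and the
+prototypes listed above; emit_emergency_output() is a placeholder, not an
+existing function:
+
+....
+unsigned int cpu_store;
+
+prb_lock(&rb_cpulock, &cpu_store);
+/* writers on other processors are excluded; NMI on this CPU is not */
+emit_emergency_output();
+prb_unlock(&rb_cpulock, cpu_store);
+....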
+
+Implementation
+~~~~~~~~~~~~~~
+This section describes several of the implementation concepts and details to
+help developers better understand the code.
+
+Entries
+^^^^^^^
+All ring buffer data is stored within a single static byte array. The reason
+for this is to ensure that any pointers to the data (past and present) will
+always point to valid memory. This is important because the lockless readers
+may be preempted for long periods of time and when they resume may be working
+with expired pointers.
+
+Entries are identified by start index and size. (The start index plus size
+is the start index of the next entry.) The start index is not simply an
+offset into the byte array, but rather a logical position (lpos) that maps
+directly to byte array offsets.
+
+For example, for a byte array of size 1000, one entry may have a start index
+of 100. Another entry may have a start index of 1100, and yet another 2100.
+All of these entries point to the same memory region, but only the most
+recent entry is valid. The other entries point to valid memory, but
+represent entries that have been overwritten.
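+
+Expressed as code, the mapping from an lpos to a byte array offset is simply
+a modulo by the byte array size (and with a power-of-2 size, as declared via
+szbits, the modulo reduces to a bit mask). A purely illustrative sketch:
+
+....
+/* illustrative only, not the actual helper used by the implementation */
+static unsigned int lpos_to_offset(unsigned long lpos, unsigned int size)
+{
+    return lpos % size;  /* 100, 1100 and 2100 all map to 100 for size 1000 */
+}
+....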
+
+Note that due to overflowing, the most recent entry is not necessarily the one
+with the highest lpos value. Indeed, the printk ring buffer initializes its
+data such that an overflow happens relatively quickly in order to validate the
+handling of this situation. The implementation assumes that an lpos (unsigned
+long) will never completely wrap while a reader is preempted. If this were to
+become an issue, the seq number (which never wraps) could be used to increase
+the robustness of handling this situation.
+
+Buffer Wrapping
+^^^^^^^^^^^^^^^
+If an entry starts near the end of the byte array but would extend beyond it,
+a special terminating entry (size = -1) is inserted into the byte array and
+the real entry is placed at the beginning of the byte array. This can waste
+space at the end of the byte array, but simplifies the implementation by
+allowing writers to always work with contiguous buffers.
+
+Note that the size field is the first 4 bytes of the entry header. Also note
+that calc_next() always ensures that there are at least 4 bytes left at the
+end of the byte array to allow room for a terminating entry.
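+
+The wrapping rule can be sketched as follows (the names are illustrative;
+the real logic lives in calc_next()):
+
+....
+/* illustrative sketch: wrap the entry to the start of the byte array if it
+ * would extend beyond the end, leaving a terminating entry behind */
+static unsigned int wrap_if_needed(char *data, unsigned int buffer_size,
+                                   unsigned int offset, unsigned int size)
+{
+    if (offset + size > buffer_size) {
+        *(int *)(data + offset) = -1;  /* size field (first 4 bytes) = -1 */
+        offset = 0;                    /* real entry starts at the beginning */
+    }
+    return offset;
+}
+....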
+
+Ring Buffer Pointers
+^^^^^^^^^^^^^^^^^^^^
+Three pointers (lpos values) are used to manage the ring buffer:
+
+ - _tail_: points to the oldest entry
+ - _head_: points to where the next new committed entry will be
+ - _reserve_: points to where the next new reserved entry will be
+
+These pointers always maintain a logical ordering:
+
+ tail <= head <= reserve
+
+The reserve pointer moves forward when a writer reserves a new entry. The
+head pointer moves forward when a writer commits a new entry.
+
+The reserve pointer cannot overwrite the tail pointer in a wrap situation. In
+such a situation, the tail pointer must be "pushed forward", thus
+invalidating that oldest entry. Readers identify if they are accessing a
+valid entry by ensuring their entry pointer is `>= tail && < head`.
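+
+Because lpos values can wrap, this `>= tail && < head` test is most robustly
+written with unsigned subtraction. A sketch (not necessarily the exact code
+used by the implementation):
+
+....
+/* return non-zero iff tail <= lpos < head, even across an lpos wrap */
+static int lpos_valid(unsigned long lpos, unsigned long tail,
+                      unsigned long head)
+{
+    return (lpos - tail) < (head - tail);
+}
+....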
+
+If the tail pointer is equal to the head pointer, it cannot be pushed and any
+reserve operation will fail. The only resolution is for writers to commit
+their reserved entries.
+
+Processor-Reentrant Locking
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+The purpose of the processor-reentrant locking is to limit the interruption
+scenarios of writers to 2 contexts. This allows for a simplified
+implementation where:
+
+- The reserve/commit window only exists on 1 processor at a time. A reserve
+  can never fail due to uncommitted entries of other processors.
+
+- When committing entries, it is trivial to handle the situation when
+  subsequent entries have already been committed, i.e. managing the head
+  pointer.
+
+Performance
+~~~~~~~~~~~
+Some basic tests were performed on a quad Intel(R) Xeon(R) CPU E5-2697 v4 at
+2.30GHz (36 cores / 72 threads). All tests involved writing a total of
+32,000,000 records at an average of 33 bytes each. Each writer was pinned to
+its own CPU and would write as fast as it could until a total of 32,000,000
+records were written. All tests involved 2 readers that were both pinned
+together to another CPU. Each reader would read as fast as it could and track
+how many of the 32,000,000 records it could read. All tests used a ring buffer
+of 16KB in size, which holds around 350 records (header + data for each
+entry).
+
+The only difference between the tests is the number of writers (and thus also
+the number of records per writer). As more writers are added, the time to
+write a record increases. This is because the data pointers (modified via
+cmpxchg) and global data access in general become more contended.
+
+1 writer
+^^^^^^^^
+ runtime: 0m 18s
+ reader1: 16219900/32000000 (50%) records
+ reader2: 16141582/32000000 (50%) records
+
+2 writers
+^^^^^^^^^
+ runtime: 0m 32s
+ reader1: 16327957/32000000 (51%) records
+ reader2: 16313988/32000000 (50%) records
+
+4 writers
+^^^^^^^^^
+ runtime: 0m 42s
+ reader1: 16421642/32000000 (51%) records
+ reader2: 16417224/32000000 (51%) records
+
+8 writers
+^^^^^^^^^
+ runtime: 0m 43s
+ reader1: 16418300/32000000 (51%) records
+ reader2: 16432222/32000000 (51%) records
+
+16 writers
+^^^^^^^^^^
+ runtime: 0m 54s
+ reader1: 16539189/32000000 (51%) records
+ reader2: 16542711/32000000 (51%) records
+
+32 writers
+^^^^^^^^^^
+ runtime: 1m 13s
+ reader1: 16731808/32000000 (52%) records
+ reader2: 16735119/32000000 (52%) records
+
+Comments
+^^^^^^^^
+It is particularly interesting to compare/contrast the 1-writer and 32-writer
+tests. Despite the writing of the 32,000,000 records taking over 4 times
+longer, the readers (which perform no cmpxchg) were still unable to keep up.
+This shows that the memory contention between the increasing number of CPUs
+also has a dramatic effect on readers.
+
+It should also be noted that in all cases each reader was able to read >=50%
+of the records. This means that a single reader would have been able to keep
+up with the writer(s) in all cases, becoming slightly easier as more writers
+are added. This was the purpose of pinning 2 readers to 1 CPU: to observe how
+maximum reader performance changes.
Index: linux-5.0.19-rt11/arch/Kconfig
===================================================================
--- linux-5.0.19-rt11.orig/arch/Kconfig
+++ linux-5.0.19-rt11/arch/Kconfig
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:31 @ config OPROFILE
 	tristate "OProfile system profiling"
 	depends on PROFILING
 	depends on HAVE_OPROFILE
+	depends on !PREEMPT_RT_FULL
 	select RING_BUFFER
 	select RING_BUFFER_ALLOW_SWAP
 	help
Index: linux-5.0.19-rt11/arch/alpha/include/asm/spinlock_types.h
===================================================================
--- linux-5.0.19-rt11.orig/arch/alpha/include/asm/spinlock_types.h
+++ linux-5.0.19-rt11/arch/alpha/include/asm/spinlock_types.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:5 @
 #ifndef _ALPHA_SPINLOCK_TYPES_H
 #define _ALPHA_SPINLOCK_TYPES_H
 
-#ifndef __LINUX_SPINLOCK_TYPES_H
-# error "please don't include this file directly"
-#endif
-
 typedef struct {
 	volatile unsigned int lock;
 } arch_spinlock_t;
Index: linux-5.0.19-rt11/arch/arm/Kconfig
===================================================================
--- linux-5.0.19-rt11.orig/arch/arm/Kconfig
+++ linux-5.0.19-rt11/arch/arm/Kconfig
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:55 @ config ARM
 	select HARDIRQS_SW_RESEND
 	select HAVE_ARCH_AUDITSYSCALL if AEABI && !OABI_COMPAT
 	select HAVE_ARCH_BITREVERSE if (CPU_32v7M || CPU_32v7) && !CPU_32v6
-	select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL && !CPU_ENDIAN_BE32 && MMU
+	select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL && !CPU_ENDIAN_BE32 && MMU && !PREEMPT_RT_BASE
 	select HAVE_ARCH_KGDB if !CPU_ENDIAN_BE32 && MMU
 	select HAVE_ARCH_MMAP_RND_BITS if MMU
 	select HAVE_ARCH_SECCOMP_FILTER if AEABI && !OABI_COMPAT
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:93 @ config ARM
 	select HAVE_PERF_EVENTS
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
+	select HAVE_PREEMPT_LAZY
 	select HAVE_RCU_TABLE_FREE if SMP && ARM_LPAE
 	select HAVE_REGS_AND_STACK_ACCESS_API
 	select HAVE_RSEQ
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2136 @ config NEON
 
 config KERNEL_MODE_NEON
 	bool "Support for NEON in kernel mode"
-	depends on NEON && AEABI
+	depends on NEON && AEABI && !PREEMPT_RT_BASE
 	help
 	  Say Y to include support for NEON in kernel mode.
 
Index: linux-5.0.19-rt11/arch/arm/include/asm/irq.h
===================================================================
--- linux-5.0.19-rt11.orig/arch/arm/include/asm/irq.h
+++ linux-5.0.19-rt11/arch/arm/include/asm/irq.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:26 @
 #endif
 
 #ifndef __ASSEMBLY__
+#include <linux/cpumask.h>
+
 struct irqaction;
 struct pt_regs;
 
Index: linux-5.0.19-rt11/arch/arm/include/asm/spinlock_types.h
===================================================================
--- linux-5.0.19-rt11.orig/arch/arm/include/asm/spinlock_types.h
+++ linux-5.0.19-rt11/arch/arm/include/asm/spinlock_types.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:5 @
 #ifndef __ASM_SPINLOCK_TYPES_H
 #define __ASM_SPINLOCK_TYPES_H
 
-#ifndef __LINUX_SPINLOCK_TYPES_H
-# error "please don't include this file directly"
-#endif
-
 #define TICKET_SHIFT	16
 
 typedef struct {
Index: linux-5.0.19-rt11/arch/arm/include/asm/switch_to.h
===================================================================
--- linux-5.0.19-rt11.orig/arch/arm/include/asm/switch_to.h
+++ linux-5.0.19-rt11/arch/arm/include/asm/switch_to.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:7 @
 
 #include <linux/thread_info.h>
 
+#if defined CONFIG_PREEMPT_RT_FULL && defined CONFIG_HIGHMEM
+void switch_kmaps(struct task_struct *prev_p, struct task_struct *next_p);
+#else
+static inline void
+switch_kmaps(struct task_struct *prev_p, struct task_struct *next_p) { }
+#endif
+
 /*
  * For v7 SMP cores running a preemptible kernel we may be pre-empted
  * during a TLB maintenance operation, so execute an inner-shareable dsb
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:36 @ extern struct task_struct *__switch_to(s
 #define switch_to(prev,next,last)					\
 do {									\
 	__complete_pending_tlbi();					\
+	switch_kmaps(prev, next);					\
 	last = __switch_to(prev,task_thread_info(prev), task_thread_info(next));	\
 } while (0)
 
Index: linux-5.0.19-rt11/arch/arm/include/asm/thread_info.h
===================================================================
--- linux-5.0.19-rt11.orig/arch/arm/include/asm/thread_info.h
+++ linux-5.0.19-rt11/arch/arm/include/asm/thread_info.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:52 @ struct cpu_context_save {
 struct thread_info {
 	unsigned long		flags;		/* low level flags */
 	int			preempt_count;	/* 0 => preemptable, <0 => bug */
+	int			preempt_lazy_count; /* 0 => preemptable, <0 => bug */
 	mm_segment_t		addr_limit;	/* address limit */
 	struct task_struct	*task;		/* main task structure */
 	__u32			cpu;		/* cpu */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:146 @ extern int vfp_restore_user_hwstate(stru
 #define TIF_SYSCALL_TRACE	4	/* syscall trace active */
 #define TIF_SYSCALL_AUDIT	5	/* syscall auditing active */
 #define TIF_SYSCALL_TRACEPOINT	6	/* syscall tracepoint instrumentation */
-#define TIF_SECCOMP		7	/* seccomp syscall filtering active */
+#define TIF_SECCOMP		8	/* seccomp syscall filtering active */
+#define TIF_NEED_RESCHED_LAZY	7
 
 #define TIF_NOHZ		12	/* in adaptive nohz mode */
 #define TIF_USING_IWMMXT	17
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:157 @ extern int vfp_restore_user_hwstate(stru
 #define _TIF_SIGPENDING		(1 << TIF_SIGPENDING)
 #define _TIF_NEED_RESCHED	(1 << TIF_NEED_RESCHED)
 #define _TIF_NOTIFY_RESUME	(1 << TIF_NOTIFY_RESUME)
+#define _TIF_NEED_RESCHED_LAZY	(1 << TIF_NEED_RESCHED_LAZY)
 #define _TIF_UPROBE		(1 << TIF_UPROBE)
 #define _TIF_SYSCALL_TRACE	(1 << TIF_SYSCALL_TRACE)
 #define _TIF_SYSCALL_AUDIT	(1 << TIF_SYSCALL_AUDIT)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:173 @ extern int vfp_restore_user_hwstate(stru
  * Change these and you break ASM code in entry-common.S
  */
 #define _TIF_WORK_MASK		(_TIF_NEED_RESCHED | _TIF_SIGPENDING | \
-				 _TIF_NOTIFY_RESUME | _TIF_UPROBE)
+				 _TIF_NOTIFY_RESUME | _TIF_UPROBE | \
+				 _TIF_NEED_RESCHED_LAZY)
 
 #endif /* __KERNEL__ */
 #endif /* __ASM_ARM_THREAD_INFO_H */
Index: linux-5.0.19-rt11/arch/arm/kernel/asm-offsets.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/arm/kernel/asm-offsets.c
+++ linux-5.0.19-rt11/arch/arm/kernel/asm-offsets.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:59 @ int main(void)
   BLANK();
   DEFINE(TI_FLAGS,		offsetof(struct thread_info, flags));
   DEFINE(TI_PREEMPT,		offsetof(struct thread_info, preempt_count));
+  DEFINE(TI_PREEMPT_LAZY,	offsetof(struct thread_info, preempt_lazy_count));
   DEFINE(TI_ADDR_LIMIT,		offsetof(struct thread_info, addr_limit));
   DEFINE(TI_TASK,		offsetof(struct thread_info, task));
   DEFINE(TI_CPU,		offsetof(struct thread_info, cpu));
Index: linux-5.0.19-rt11/arch/arm/kernel/entry-armv.S
===================================================================
--- linux-5.0.19-rt11.orig/arch/arm/kernel/entry-armv.S
+++ linux-5.0.19-rt11/arch/arm/kernel/entry-armv.S
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:219 @ __irq_svc:
 
 #ifdef CONFIG_PREEMPT
 	ldr	r8, [tsk, #TI_PREEMPT]		@ get preempt count
-	ldr	r0, [tsk, #TI_FLAGS]		@ get flags
 	teq	r8, #0				@ if preempt count != 0
+	bne	1f				@ return from exception
+	ldr	r0, [tsk, #TI_FLAGS]		@ get flags
+	tst	r0, #_TIF_NEED_RESCHED		@ if NEED_RESCHED is set
+	blne	svc_preempt			@ preempt!
+
+	ldr	r8, [tsk, #TI_PREEMPT_LAZY]	@ get preempt lazy count
+	teq	r8, #0				@ if preempt lazy count != 0
 	movne	r0, #0				@ force flags to 0
-	tst	r0, #_TIF_NEED_RESCHED
+	tst	r0, #_TIF_NEED_RESCHED_LAZY
 	blne	svc_preempt
+1:
 #endif
 
 	svc_exit r5, irq = 1			@ return from exception
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:245 @ svc_preempt:
 1:	bl	preempt_schedule_irq		@ irq en/disable is done inside
 	ldr	r0, [tsk, #TI_FLAGS]		@ get new tasks TI_FLAGS
 	tst	r0, #_TIF_NEED_RESCHED
+	bne	1b
+	tst	r0, #_TIF_NEED_RESCHED_LAZY
 	reteq	r8				@ go again
-	b	1b
+	ldr	r0, [tsk, #TI_PREEMPT_LAZY]	@ get preempt lazy count
+	teq	r0, #0				@ if preempt lazy count != 0
+	beq	1b
+	ret	r8				@ go again
+
 #endif
 
 __und_fault:
Index: linux-5.0.19-rt11/arch/arm/kernel/entry-common.S
===================================================================
--- linux-5.0.19-rt11.orig/arch/arm/kernel/entry-common.S
+++ linux-5.0.19-rt11/arch/arm/kernel/entry-common.S
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:59 @ __ret_fast_syscall:
 	cmp	r2, #TASK_SIZE
 	blne	addr_limit_check_failed
 	ldr	r1, [tsk, #TI_FLAGS]		@ re-check for syscall tracing
-	tst	r1, #_TIF_SYSCALL_WORK | _TIF_WORK_MASK
+	tst	r1, #((_TIF_SYSCALL_WORK | _TIF_WORK_MASK) & ~_TIF_SECCOMP)
+	bne	fast_work_pending
+	tst	r1, #_TIF_SECCOMP
 	bne	fast_work_pending
 
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:98 @ __ret_fast_syscall:
 	cmp	r2, #TASK_SIZE
 	blne	addr_limit_check_failed
 	ldr	r1, [tsk, #TI_FLAGS]		@ re-check for syscall tracing
-	tst	r1, #_TIF_SYSCALL_WORK | _TIF_WORK_MASK
+	tst	r1, #((_TIF_SYSCALL_WORK | _TIF_WORK_MASK) & ~_TIF_SECCOMP)
+	bne	do_slower_path
+	tst	r1, #_TIF_SECCOMP
 	beq	no_work_pending
+do_slower_path:
  UNWIND(.fnend		)
 ENDPROC(ret_fast_syscall)
 
Index: linux-5.0.19-rt11/arch/arm/kernel/process.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/arm/kernel/process.c
+++ linux-5.0.19-rt11/arch/arm/kernel/process.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:331 @ unsigned long arch_randomize_brk(struct
 }
 
 #ifdef CONFIG_MMU
+/*
+ * CONFIG_SPLIT_PTLOCK_CPUS results in a page->ptl lock.  If the lock is not
+ * initialized by pgtable_page_ctor() then a coredump of the vector page will
+ * fail.
+ */
+static int __init vectors_user_mapping_init_page(void)
+{
+	struct page *page;
+	unsigned long addr = 0xffff0000;
+	pgd_t *pgd;
+	pud_t *pud;
+	pmd_t *pmd;
+
+	pgd = pgd_offset_k(addr);
+	pud = pud_offset(pgd, addr);
+	pmd = pmd_offset(pud, addr);
+	page = pmd_page(*(pmd));
+
+	pgtable_page_ctor(page);
+
+	return 0;
+}
+late_initcall(vectors_user_mapping_init_page);
+
 #ifdef CONFIG_KUSER_HELPERS
 /*
  * The vectors page is always readable from user space for the
Index: linux-5.0.19-rt11/arch/arm/kernel/signal.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/arm/kernel/signal.c
+++ linux-5.0.19-rt11/arch/arm/kernel/signal.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:655 @ do_work_pending(struct pt_regs *regs, un
 	 */
 	trace_hardirqs_off();
 	do {
-		if (likely(thread_flags & _TIF_NEED_RESCHED)) {
+		if (likely(thread_flags & (_TIF_NEED_RESCHED |
+					   _TIF_NEED_RESCHED_LAZY))) {
 			schedule();
 		} else {
 			if (unlikely(!user_mode(regs)))
Index: linux-5.0.19-rt11/arch/arm/kernel/smp.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/arm/kernel/smp.c
+++ linux-5.0.19-rt11/arch/arm/kernel/smp.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:687 @ void handle_IPI(int ipinr, struct pt_reg
 		break;
 
 	case IPI_CPU_BACKTRACE:
-		printk_nmi_enter();
 		irq_enter();
 		nmi_cpu_backtrace(regs);
 		irq_exit();
-		printk_nmi_exit();
 		break;
 
 	default:
Index: linux-5.0.19-rt11/arch/arm/mach-at91/Kconfig
===================================================================
--- linux-5.0.19-rt11.orig/arch/arm/mach-at91/Kconfig
+++ linux-5.0.19-rt11/arch/arm/mach-at91/Kconfig
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:110 @ config SOC_AT91SAM9
 	    AT91SAM9X35
 	    AT91SAM9XE
 
+comment "Clocksource driver selection"
+
+config ATMEL_CLOCKSOURCE_PIT
+	bool "Periodic Interval Timer (PIT) support"
+	depends on SOC_AT91SAM9 || SOC_SAMA5
+	default SOC_AT91SAM9 || SOC_SAMA5
+	select ATMEL_PIT
+	help
+	  Select this to get a clocksource based on the Atmel Periodic Interval
+	  Timer. It has a relatively low resolution and the TC Block clocksource
+	  should be preferred.
+
+config ATMEL_CLOCKSOURCE_TCB
+	bool "Timer Counter Blocks (TCB) support"
+	default SOC_AT91RM9200 || SOC_AT91SAM9 || SOC_SAMA5
+	select ATMEL_TCB_CLKSRC
+	help
+	  Select this to get a high precision clocksource based on a
+	  TC block with a 5+ MHz base clock rate.
+	  On platforms with 16-bit counters, two timer channels are combined
+	  to make a single 32-bit timer.
+	  It can also be used as a clock event device supporting oneshot mode.
+
 config HAVE_AT91_UTMI
 	bool
 
Index: linux-5.0.19-rt11/arch/arm/mach-imx/cpuidle-imx6q.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/arm/mach-imx/cpuidle-imx6q.c
+++ linux-5.0.19-rt11/arch/arm/mach-imx/cpuidle-imx6q.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:20 @
 #include "hardware.h"
 
 static int num_idle_cpus = 0;
-static DEFINE_SPINLOCK(cpuidle_lock);
+static DEFINE_RAW_SPINLOCK(cpuidle_lock);
 
 static int imx6q_enter_wait(struct cpuidle_device *dev,
 			    struct cpuidle_driver *drv, int index)
 {
-	spin_lock(&cpuidle_lock);
+	raw_spin_lock(&cpuidle_lock);
 	if (++num_idle_cpus == num_online_cpus())
 		imx6_set_lpm(WAIT_UNCLOCKED);
-	spin_unlock(&cpuidle_lock);
+	raw_spin_unlock(&cpuidle_lock);
 
 	cpu_do_idle();
 
-	spin_lock(&cpuidle_lock);
+	raw_spin_lock(&cpuidle_lock);
 	if (num_idle_cpus-- == num_online_cpus())
 		imx6_set_lpm(WAIT_CLOCKED);
-	spin_unlock(&cpuidle_lock);
+	raw_spin_unlock(&cpuidle_lock);
 
 	return index;
 }
Index: linux-5.0.19-rt11/arch/arm/mm/fault.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/arm/mm/fault.c
+++ linux-5.0.19-rt11/arch/arm/mm/fault.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:440 @ do_translation_fault(unsigned long addr,
 	if (addr < TASK_SIZE)
 		return do_page_fault(addr, fsr, regs);
 
+	if (interrupts_enabled(regs))
+		local_irq_enable();
+
 	if (user_mode(regs))
 		goto bad_area;
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:510 @ do_translation_fault(unsigned long addr,
 static int
 do_sect_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 {
+	if (interrupts_enabled(regs))
+		local_irq_enable();
+
 	do_bad_area(addr, fsr, regs);
 	return 0;
 }
Index: linux-5.0.19-rt11/arch/arm/mm/highmem.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/arm/mm/highmem.c
+++ linux-5.0.19-rt11/arch/arm/mm/highmem.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:37 @ static inline pte_t get_fixmap_pte(unsig
 	return *ptep;
 }
 
+static unsigned int fixmap_idx(int type)
+{
+	return FIX_KMAP_BEGIN + type + KM_TYPE_NR * smp_processor_id();
+}
+
 void *kmap(struct page *page)
 {
 	might_sleep();
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:62 @ EXPORT_SYMBOL(kunmap);
 
 void *kmap_atomic(struct page *page)
 {
+	pte_t pte = mk_pte(page, kmap_prot);
 	unsigned int idx;
 	unsigned long vaddr;
 	void *kmap;
 	int type;
 
-	preempt_disable();
+	preempt_disable_nort();
 	pagefault_disable();
 	if (!PageHighMem(page))
 		return page_address(page);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:88 @ void *kmap_atomic(struct page *page)
 
 	type = kmap_atomic_idx_push();
 
-	idx = FIX_KMAP_BEGIN + type + KM_TYPE_NR * smp_processor_id();
+	idx = fixmap_idx(type);
 	vaddr = __fix_to_virt(idx);
 #ifdef CONFIG_DEBUG_HIGHMEM
 	/*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:102 @ void *kmap_atomic(struct page *page)
 	 * in place, so the contained TLB flush ensures the TLB is updated
 	 * with the new mapping.
 	 */
-	set_fixmap_pte(idx, mk_pte(page, kmap_prot));
+#ifdef CONFIG_PREEMPT_RT_FULL
+	current->kmap_pte[type] = pte;
+#endif
+	set_fixmap_pte(idx, pte);
 
 	return (void *)vaddr;
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:118 @ void __kunmap_atomic(void *kvaddr)
 
 	if (kvaddr >= (void *)FIXADDR_START) {
 		type = kmap_atomic_idx();
-		idx = FIX_KMAP_BEGIN + type + KM_TYPE_NR * smp_processor_id();
+		idx = fixmap_idx(type);
 
 		if (cache_is_vivt())
 			__cpuc_flush_dcache_area((void *)vaddr, PAGE_SIZE);
+#ifdef CONFIG_PREEMPT_RT_FULL
+		current->kmap_pte[type] = __pte(0);
+#endif
 #ifdef CONFIG_DEBUG_HIGHMEM
 		BUG_ON(vaddr != __fix_to_virt(idx));
-		set_fixmap_pte(idx, __pte(0));
 #else
 		(void) idx;  /* to kill a warning */
 #endif
+		set_fixmap_pte(idx, __pte(0));
 		kmap_atomic_idx_pop();
 	} else if (vaddr >= PKMAP_ADDR(0) && vaddr < PKMAP_ADDR(LAST_PKMAP)) {
 		/* this address was obtained through kmap_high_get() */
 		kunmap_high(pte_page(pkmap_page_table[PKMAP_NR(vaddr)]));
 	}
 	pagefault_enable();
-	preempt_enable();
+	preempt_enable_nort();
 }
 EXPORT_SYMBOL(__kunmap_atomic);
 
 void *kmap_atomic_pfn(unsigned long pfn)
 {
+	pte_t pte = pfn_pte(pfn, kmap_prot);
 	unsigned long vaddr;
 	int idx, type;
 	struct page *page = pfn_to_page(pfn);
 
-	preempt_disable();
+	preempt_disable_nort();
 	pagefault_disable();
 	if (!PageHighMem(page))
 		return page_address(page);
 
 	type = kmap_atomic_idx_push();
-	idx = FIX_KMAP_BEGIN + type + KM_TYPE_NR * smp_processor_id();
+	idx = fixmap_idx(type);
 	vaddr = __fix_to_virt(idx);
 #ifdef CONFIG_DEBUG_HIGHMEM
 	BUG_ON(!pte_none(get_fixmap_pte(vaddr)));
 #endif
-	set_fixmap_pte(idx, pfn_pte(pfn, kmap_prot));
+#ifdef CONFIG_PREEMPT_RT_FULL
+	current->kmap_pte[type] = pte;
+#endif
+	set_fixmap_pte(idx, pte);
 
 	return (void *)vaddr;
 }
+#if defined CONFIG_PREEMPT_RT_FULL
+void switch_kmaps(struct task_struct *prev_p, struct task_struct *next_p)
+{
+	int i;
+
+	/*
+	 * Clear @prev's kmap_atomic mappings
+	 */
+	for (i = 0; i < prev_p->kmap_idx; i++) {
+		int idx = fixmap_idx(i);
+
+		set_fixmap_pte(idx, __pte(0));
+	}
+	/*
+	 * Restore @next_p's kmap_atomic mappings
+	 */
+	for (i = 0; i < next_p->kmap_idx; i++) {
+		int idx = fixmap_idx(i);
+
+		if (!pte_none(next_p->kmap_pte[i]))
+			set_fixmap_pte(idx, next_p->kmap_pte[i]);
+	}
+}
+#endif
Index: linux-5.0.19-rt11/arch/arm64/Kconfig
===================================================================
--- linux-5.0.19-rt11.orig/arch/arm64/Kconfig
+++ linux-5.0.19-rt11/arch/arm64/Kconfig
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:149 @ config ARM64
 	select HAVE_PERF_EVENTS
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
+	select HAVE_PREEMPT_LAZY
 	select HAVE_REGS_AND_STACK_ACCESS_API
 	select HAVE_RCU_TABLE_FREE
 	select HAVE_RCU_TABLE_INVALIDATE
Index: linux-5.0.19-rt11/arch/arm64/crypto/Kconfig
===================================================================
--- linux-5.0.19-rt11.orig/arch/arm64/crypto/Kconfig
+++ linux-5.0.19-rt11/arch/arm64/crypto/Kconfig
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:22 @ config CRYPTO_SHA512_ARM64
 
 config CRYPTO_SHA1_ARM64_CE
 	tristate "SHA-1 digest algorithm (ARMv8 Crypto Extensions)"
-	depends on KERNEL_MODE_NEON
+	depends on KERNEL_MODE_NEON && !PREEMPT_RT_BASE
 	select CRYPTO_HASH
 	select CRYPTO_SHA1
 
 config CRYPTO_SHA2_ARM64_CE
 	tristate "SHA-224/SHA-256 digest algorithm (ARMv8 Crypto Extensions)"
-	depends on KERNEL_MODE_NEON
+	depends on KERNEL_MODE_NEON && !PREEMPT_RT_BASE
 	select CRYPTO_HASH
 	select CRYPTO_SHA256_ARM64
 
 config CRYPTO_SHA512_ARM64_CE
 	tristate "SHA-384/SHA-512 digest algorithm (ARMv8 Crypto Extensions)"
-	depends on KERNEL_MODE_NEON
+	depends on KERNEL_MODE_NEON && !PREEMPT_RT_BASE
 	select CRYPTO_HASH
 	select CRYPTO_SHA512_ARM64
 
 config CRYPTO_SHA3_ARM64
 	tristate "SHA3 digest algorithm (ARMv8.2 Crypto Extensions)"
-	depends on KERNEL_MODE_NEON
+	depends on KERNEL_MODE_NEON && !PREEMPT_RT_BASE
 	select CRYPTO_HASH
 	select CRYPTO_SHA3
 
 config CRYPTO_SM3_ARM64_CE
 	tristate "SM3 digest algorithm (ARMv8.2 Crypto Extensions)"
-	depends on KERNEL_MODE_NEON
+	depends on KERNEL_MODE_NEON && !PREEMPT_RT_BASE
 	select CRYPTO_HASH
 	select CRYPTO_SM3
 
 config CRYPTO_SM4_ARM64_CE
 	tristate "SM4 symmetric cipher (ARMv8.2 Crypto Extensions)"
-	depends on KERNEL_MODE_NEON
+	depends on KERNEL_MODE_NEON && !PREEMPT_RT_BASE
 	select CRYPTO_ALGAPI
 	select CRYPTO_SM4
 
 config CRYPTO_GHASH_ARM64_CE
 	tristate "GHASH/AES-GCM using ARMv8 Crypto Extensions"
-	depends on KERNEL_MODE_NEON
+	depends on KERNEL_MODE_NEON && !PREEMPT_RT_BASE
 	select CRYPTO_HASH
 	select CRYPTO_GF128MUL
 	select CRYPTO_AES
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:66 @ config CRYPTO_GHASH_ARM64_CE
 
 config CRYPTO_CRCT10DIF_ARM64_CE
 	tristate "CRCT10DIF digest algorithm using PMULL instructions"
-	depends on KERNEL_MODE_NEON && CRC_T10DIF
+	depends on KERNEL_MODE_NEON && CRC_T10DIF && !PREEMPT_RT_BASE
 	select CRYPTO_HASH
 
 config CRYPTO_AES_ARM64
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:75 @ config CRYPTO_AES_ARM64
 
 config CRYPTO_AES_ARM64_CE
 	tristate "AES core cipher using ARMv8 Crypto Extensions"
-	depends on ARM64 && KERNEL_MODE_NEON
+	depends on ARM64 && KERNEL_MODE_NEON && !PREEMPT_RT_BASE
 	select CRYPTO_ALGAPI
 	select CRYPTO_AES_ARM64
 
 config CRYPTO_AES_ARM64_CE_CCM
 	tristate "AES in CCM mode using ARMv8 Crypto Extensions"
-	depends on ARM64 && KERNEL_MODE_NEON
+	depends on ARM64 && KERNEL_MODE_NEON && !PREEMPT_RT_BASE
 	select CRYPTO_ALGAPI
 	select CRYPTO_AES_ARM64_CE
 	select CRYPTO_AES_ARM64
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:89 @ config CRYPTO_AES_ARM64_CE_CCM
 
 config CRYPTO_AES_ARM64_CE_BLK
 	tristate "AES in ECB/CBC/CTR/XTS modes using ARMv8 Crypto Extensions"
-	depends on KERNEL_MODE_NEON
+	depends on KERNEL_MODE_NEON && !PREEMPT_RT_BASE
 	select CRYPTO_BLKCIPHER
 	select CRYPTO_AES_ARM64_CE
 	select CRYPTO_AES_ARM64
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:97 @ config CRYPTO_AES_ARM64_CE_BLK
 
 config CRYPTO_AES_ARM64_NEON_BLK
 	tristate "AES in ECB/CBC/CTR/XTS modes using NEON instructions"
-	depends on KERNEL_MODE_NEON
+	depends on KERNEL_MODE_NEON && !PREEMPT_RT_BASE
 	select CRYPTO_BLKCIPHER
 	select CRYPTO_AES_ARM64
 	select CRYPTO_AES
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:105 @ config CRYPTO_AES_ARM64_NEON_BLK
 
 config CRYPTO_CHACHA20_NEON
 	tristate "ChaCha20, XChaCha20, and XChaCha12 stream ciphers using NEON instructions"
-	depends on KERNEL_MODE_NEON
+	depends on KERNEL_MODE_NEON && !PREEMPT_RT_BASE
 	select CRYPTO_BLKCIPHER
 	select CRYPTO_CHACHA20
 
 config CRYPTO_NHPOLY1305_NEON
 	tristate "NHPoly1305 hash function using NEON instructions (for Adiantum)"
-	depends on KERNEL_MODE_NEON
+	depends on KERNEL_MODE_NEON && !PREEMPT_RT_BASE
 	select CRYPTO_NHPOLY1305
 
 config CRYPTO_AES_ARM64_BS
 	tristate "AES in ECB/CBC/CTR/XTS modes using bit-sliced NEON algorithm"
-	depends on KERNEL_MODE_NEON
+	depends on KERNEL_MODE_NEON && !PREEMPT_RT_BASE
 	select CRYPTO_BLKCIPHER
 	select CRYPTO_AES_ARM64_NEON_BLK
 	select CRYPTO_AES_ARM64
Index: linux-5.0.19-rt11/arch/arm64/include/asm/alternative.h
===================================================================
--- linux-5.0.19-rt11.orig/arch/arm64/include/asm/alternative.h
+++ linux-5.0.19-rt11/arch/arm64/include/asm/alternative.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:38 @ void apply_alternatives_module(void *sta
 static inline void apply_alternatives_module(void *start, size_t length) { }
 #endif
 
+#ifdef CONFIG_KVM_ARM_HOST
+void kvm_compute_layout(void);
+#else
+static inline void kvm_compute_layout(void) { }
+#endif
+
 #define ALTINSTR_ENTRY(feature,cb)					      \
 	" .word 661b - .\n"				/* label           */ \
 	" .if " __stringify(cb) " == 0\n"				      \
Index: linux-5.0.19-rt11/arch/arm64/include/asm/preempt.h
===================================================================
--- linux-5.0.19-rt11.orig/arch/arm64/include/asm/preempt.h
+++ linux-5.0.19-rt11/arch/arm64/include/asm/preempt.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:73 @ static inline bool __preempt_count_dec_a
 	 * interrupt occurring between the non-atomic READ_ONCE/WRITE_ONCE
 	 * pair.
 	 */
-	return !pc || !READ_ONCE(ti->preempt_count);
+	if (!pc || !READ_ONCE(ti->preempt_count))
+		return true;
+#ifdef CONFIG_PREEMPT_LAZY
+	if (current_thread_info()->preempt_lazy_count)
+		return false;
+	return test_thread_flag(TIF_NEED_RESCHED_LAZY);
+#else
+	return false;
+#endif
 }
 
 static inline bool should_resched(int preempt_offset)
 {
+#ifdef CONFIG_PREEMPT_LAZY
+	u64 pc = READ_ONCE(current_thread_info()->preempt_count);
+	if (pc == preempt_offset)
+		return true;
+
+	if ((pc & ~PREEMPT_NEED_RESCHED) != preempt_offset)
+		return false;
+
+	if (current_thread_info()->preempt_lazy_count)
+		return false;
+	return test_thread_flag(TIF_NEED_RESCHED_LAZY);
+#else
 	u64 pc = READ_ONCE(current_thread_info()->preempt_count);
 	return pc == preempt_offset;
+#endif
 }
 
 #ifdef CONFIG_PREEMPT
Index: linux-5.0.19-rt11/arch/arm64/include/asm/spinlock_types.h
===================================================================
--- linux-5.0.19-rt11.orig/arch/arm64/include/asm/spinlock_types.h
+++ linux-5.0.19-rt11/arch/arm64/include/asm/spinlock_types.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:19 @
 #ifndef __ASM_SPINLOCK_TYPES_H
 #define __ASM_SPINLOCK_TYPES_H
 
-#if !defined(__LINUX_SPINLOCK_TYPES_H) && !defined(__ASM_SPINLOCK_H)
-# error "please don't include this file directly"
-#endif
-
 #include <asm-generic/qspinlock_types.h>
 #include <asm-generic/qrwlock_types.h>
 
Index: linux-5.0.19-rt11/arch/arm64/include/asm/thread_info.h
===================================================================
--- linux-5.0.19-rt11.orig/arch/arm64/include/asm/thread_info.h
+++ linux-5.0.19-rt11/arch/arm64/include/asm/thread_info.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:45 @ struct thread_info {
 #ifdef CONFIG_ARM64_SW_TTBR0_PAN
 	u64			ttbr0;		/* saved TTBR0_EL1 */
 #endif
+	int			preempt_lazy_count;	/* 0 => preemptable, <0 => bug */
 	union {
 		u64		preempt_count;	/* 0 => preemptible, <0 => bug */
 		struct {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:91 @ void arch_release_task_struct(struct tas
 #define TIF_FOREIGN_FPSTATE	3	/* CPU's FP state is not current's */
 #define TIF_UPROBE		4	/* uprobe breakpoint or singlestep */
 #define TIF_FSCHECK		5	/* Check FS is USER_DS on return */
+#define TIF_NEED_RESCHED_LAZY	6
 #define TIF_NOHZ		7
 #define TIF_SYSCALL_TRACE	8
 #define TIF_SYSCALL_AUDIT	9
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:110 @ void arch_release_task_struct(struct tas
 #define _TIF_NEED_RESCHED	(1 << TIF_NEED_RESCHED)
 #define _TIF_NOTIFY_RESUME	(1 << TIF_NOTIFY_RESUME)
 #define _TIF_FOREIGN_FPSTATE	(1 << TIF_FOREIGN_FPSTATE)
+#define _TIF_NEED_RESCHED_LAZY	(1 << TIF_NEED_RESCHED_LAZY)
 #define _TIF_NOHZ		(1 << TIF_NOHZ)
 #define _TIF_SYSCALL_TRACE	(1 << TIF_SYSCALL_TRACE)
 #define _TIF_SYSCALL_AUDIT	(1 << TIF_SYSCALL_AUDIT)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:123 @ void arch_release_task_struct(struct tas
 
 #define _TIF_WORK_MASK		(_TIF_NEED_RESCHED | _TIF_SIGPENDING | \
 				 _TIF_NOTIFY_RESUME | _TIF_FOREIGN_FPSTATE | \
-				 _TIF_UPROBE | _TIF_FSCHECK)
+				 _TIF_UPROBE | _TIF_FSCHECK | _TIF_NEED_RESCHED_LAZY)
 
+#define _TIF_NEED_RESCHED_MASK	(_TIF_NEED_RESCHED | _TIF_NEED_RESCHED_LAZY)
 #define _TIF_SYSCALL_WORK	(_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | \
 				 _TIF_SYSCALL_TRACEPOINT | _TIF_SECCOMP | \
 				 _TIF_NOHZ)
Index: linux-5.0.19-rt11/arch/arm64/kernel/alternative.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/arm64/kernel/alternative.c
+++ linux-5.0.19-rt11/arch/arm64/kernel/alternative.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:227 @ static int __apply_alternatives_multi_st
 void __init apply_alternatives_all(void)
 {
 	/* better not try code patching on a live SMP system */
+	kvm_compute_layout();
 	stop_machine(__apply_alternatives_multi_stop, NULL, cpu_online_mask);
 }
 
Index: linux-5.0.19-rt11/arch/arm64/kernel/asm-offsets.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/arm64/kernel/asm-offsets.c
+++ linux-5.0.19-rt11/arch/arm64/kernel/asm-offsets.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:44 @ int main(void)
   BLANK();
   DEFINE(TSK_TI_FLAGS,		offsetof(struct task_struct, thread_info.flags));
   DEFINE(TSK_TI_PREEMPT,	offsetof(struct task_struct, thread_info.preempt_count));
+  DEFINE(TSK_TI_PREEMPT_LAZY,	offsetof(struct task_struct, thread_info.preempt_lazy_count));
   DEFINE(TSK_TI_ADDR_LIMIT,	offsetof(struct task_struct, thread_info.addr_limit));
 #ifdef CONFIG_ARM64_SW_TTBR0_PAN
   DEFINE(TSK_TI_TTBR0,		offsetof(struct task_struct, thread_info.ttbr0));
Index: linux-5.0.19-rt11/arch/arm64/kernel/entry.S
===================================================================
--- linux-5.0.19-rt11.orig/arch/arm64/kernel/entry.S
+++ linux-5.0.19-rt11/arch/arm64/kernel/entry.S
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:613 @ el1_irq:
 
 #ifdef CONFIG_PREEMPT
 	ldr	x24, [tsk, #TSK_TI_PREEMPT]	// get preempt count
-	cbnz	x24, 1f				// preempt count != 0
-	bl	el1_preempt
+	cbnz	x24, 2f				// preempt count != 0
+
+	ldr	w24, [tsk, #TSK_TI_PREEMPT_LAZY] // get preempt lazy count
+	cbnz	w24, 2f				// preempt lazy count != 0
+
+	ldr     x0, [tsk, #TSK_TI_FLAGS]	// get flags
+	tbz	x0, #TIF_NEED_RESCHED_LAZY, 2f	// needs rescheduling?
 1:
+	bl	el1_preempt
+2:
 #endif
 #ifdef CONFIG_TRACE_IRQFLAGS
 	bl	trace_hardirqs_on
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:636 @ el1_preempt:
 1:	bl	preempt_schedule_irq		// irq en/disable is done inside
 	ldr	x0, [tsk, #TSK_TI_FLAGS]	// get new tasks TI_FLAGS
 	tbnz	x0, #TIF_NEED_RESCHED, 1b	// needs rescheduling?
+	tbnz	x0, #TIF_NEED_RESCHED_LAZY, 1b	// needs rescheduling?
 	ret	x24
 #endif
 
Index: linux-5.0.19-rt11/arch/arm64/kernel/fpsimd.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/arm64/kernel/fpsimd.c
+++ linux-5.0.19-rt11/arch/arm64/kernel/fpsimd.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:162 @ static void sve_free(struct task_struct
 	__sve_free(task);
 }
 
+static void *sve_free_atomic(struct task_struct *task)
+{
+	void *sve_state = task->thread.sve_state;
+
+	WARN_ON(test_tsk_thread_flag(task, TIF_SVE));
+
+	task->thread.sve_state = NULL;
+	return sve_state;
+}
+
 /*
  * TIF_SVE controls whether a task can use SVE without trapping while
  * in userspace, and also the way a task's FPSIMD/SVE state is stored
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:560 @ int sve_set_vector_length(struct task_st
 	 * non-SVE thread.
 	 */
 	if (task == current) {
+		preempt_disable();
 		local_bh_disable();
 
 		fpsimd_save();
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:571 @ int sve_set_vector_length(struct task_st
 	if (test_and_clear_tsk_thread_flag(task, TIF_SVE))
 		sve_to_fpsimd(task);
 
-	if (task == current)
+	if (task == current) {
 		local_bh_enable();
+		preempt_enable();
+	}
 
 	/*
 	 * Force reallocation of task SVE state to the correct size
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:829 @ asmlinkage void do_sve_acc(unsigned int
 
 	sve_alloc(current);
 
+	preempt_disable();
 	local_bh_disable();
 
 	fpsimd_save();
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:843 @ asmlinkage void do_sve_acc(unsigned int
 		WARN_ON(1); /* SVE access shouldn't have trapped */
 
 	local_bh_enable();
+	preempt_enable();
 }
 
 /*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:906 @ void fpsimd_thread_switch(struct task_st
 void fpsimd_flush_thread(void)
 {
 	int vl, supported_vl;
+	void *mem = NULL;
 
 	if (!system_supports_fpsimd())
 		return;
 
+	preempt_disable();
 	local_bh_disable();
 
 	memset(&current->thread.uw.fpsimd_state, 0,
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:920 @ void fpsimd_flush_thread(void)
 
 	if (system_supports_sve()) {
 		clear_thread_flag(TIF_SVE);
-		sve_free(current);
+		mem = sve_free_atomic(current);
 
 		/*
 		 * Reset the task vector length as required.
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:956 @ void fpsimd_flush_thread(void)
 	set_thread_flag(TIF_FOREIGN_FPSTATE);
 
 	local_bh_enable();
+	preempt_enable();
+	kfree(mem);
 }
 
 /*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:969 @ void fpsimd_preserve_current_state(void)
 	if (!system_supports_fpsimd())
 		return;
 
+	preempt_disable();
 	local_bh_disable();
 	fpsimd_save();
 	local_bh_enable();
+	preempt_enable();
 }
 
 /*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1031 @ void fpsimd_restore_current_state(void)
 	if (!system_supports_fpsimd())
 		return;
 
+	preempt_disable();
 	local_bh_disable();
 
 	if (test_and_clear_thread_flag(TIF_FOREIGN_FPSTATE)) {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1040 @ void fpsimd_restore_current_state(void)
 	}
 
 	local_bh_enable();
+	preempt_enable();
 }
 
 /*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1053 @ void fpsimd_update_current_state(struct
 	if (!system_supports_fpsimd())
 		return;
 
+	preempt_disable();
 	local_bh_disable();
 
 	current->thread.uw.fpsimd_state = *state;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1066 @ void fpsimd_update_current_state(struct
 	clear_thread_flag(TIF_FOREIGN_FPSTATE);
 
 	local_bh_enable();
+	preempt_enable();
 }
 
 /*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1112 @ void kernel_neon_begin(void)
 
 	BUG_ON(!may_use_simd());
 
+	preempt_disable();
 	local_bh_disable();
 
 	__this_cpu_write(kernel_neon_busy, true);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1126 @ void kernel_neon_begin(void)
 	preempt_disable();
 
 	local_bh_enable();
+	preempt_enable();
 }
 EXPORT_SYMBOL(kernel_neon_begin);
 
Index: linux-5.0.19-rt11/arch/arm64/kernel/signal.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/arm64/kernel/signal.c
+++ linux-5.0.19-rt11/arch/arm64/kernel/signal.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:929 @ asmlinkage void do_notify_resume(struct
 		/* Check valid user FS if needed */
 		addr_limit_user_check();
 
-		if (thread_flags & _TIF_NEED_RESCHED) {
+		if (thread_flags & _TIF_NEED_RESCHED_MASK) {
 			/* Unmask Debug and SError for the next task */
 			local_daif_restore(DAIF_PROCCTX_NOIRQ);
 
Index: linux-5.0.19-rt11/arch/arm64/kvm/va_layout.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/arm64/kvm/va_layout.c
+++ linux-5.0.19-rt11/arch/arm64/kvm/va_layout.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:36 @ static u8 tag_lsb;
 static u64 tag_val;
 static u64 va_mask;
 
-static void compute_layout(void)
+__init void kvm_compute_layout(void)
 {
 	phys_addr_t idmap_addr = __pa_symbol(__hyp_idmap_text_start);
 	u64 hyp_va_msb;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:124 @ void __init kvm_update_va_mask(struct al
 
 	BUG_ON(nr_inst != 5);
 
-	if (!has_vhe() && !va_mask)
-		compute_layout();
 
 	for (i = 0; i < nr_inst; i++) {
 		u32 rd, rn, insn, oinsn;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:168 @ void kvm_patch_vector_branch(struct alt_
 		return;
 	}
 
-	if (!va_mask)
-		compute_layout();
-
 	/*
 	 * Compute HYP VA by using the same computation as kern_hyp_va()
 	 */
Index: linux-5.0.19-rt11/arch/hexagon/include/asm/spinlock_types.h
===================================================================
--- linux-5.0.19-rt11.orig/arch/hexagon/include/asm/spinlock_types.h
+++ linux-5.0.19-rt11/arch/hexagon/include/asm/spinlock_types.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:24 @
 #ifndef _ASM_SPINLOCK_TYPES_H
 #define _ASM_SPINLOCK_TYPES_H
 
-#ifndef __LINUX_SPINLOCK_TYPES_H
-# error "please don't include this file directly"
-#endif
-
 typedef struct {
 	volatile unsigned int lock;
 } arch_spinlock_t;
Index: linux-5.0.19-rt11/arch/ia64/include/asm/spinlock_types.h
===================================================================
--- linux-5.0.19-rt11.orig/arch/ia64/include/asm/spinlock_types.h
+++ linux-5.0.19-rt11/arch/ia64/include/asm/spinlock_types.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:5 @
 #ifndef _ASM_IA64_SPINLOCK_TYPES_H
 #define _ASM_IA64_SPINLOCK_TYPES_H
 
-#ifndef __LINUX_SPINLOCK_TYPES_H
-# error "please don't include this file directly"
-#endif
-
 typedef struct {
 	volatile unsigned int lock;
 } arch_spinlock_t;
Index: linux-5.0.19-rt11/arch/ia64/kernel/mca.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/ia64/kernel/mca.c
+++ linux-5.0.19-rt11/arch/ia64/kernel/mca.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1827 @ format_mca_init_stack(void *mca_data, un
 	ti->cpu = cpu;
 	p->stack = ti;
 	p->state = TASK_UNINTERRUPTIBLE;
-	cpumask_set_cpu(cpu, &p->cpus_allowed);
+	cpumask_set_cpu(cpu, &p->cpus_mask);
 	INIT_LIST_HEAD(&p->tasks);
 	p->parent = p->real_parent = p->group_leader = p;
 	INIT_LIST_HEAD(&p->children);
Index: linux-5.0.19-rt11/arch/mips/Kconfig
===================================================================
--- linux-5.0.19-rt11.orig/arch/mips/Kconfig
+++ linux-5.0.19-rt11/arch/mips/Kconfig
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2603 @ config MIPS_CRC_SUPPORT
 #
 config HIGHMEM
 	bool "High Memory Support"
-	depends on 32BIT && CPU_SUPPORTS_HIGHMEM && SYS_SUPPORTS_HIGHMEM && !CPU_MIPS32_3_5_EVA
+	depends on 32BIT && CPU_SUPPORTS_HIGHMEM && SYS_SUPPORTS_HIGHMEM && !CPU_MIPS32_3_5_EVA && !PREEMPT_RT_FULL
 
 config CPU_SUPPORTS_HIGHMEM
 	bool
Index: linux-5.0.19-rt11/arch/mips/include/asm/switch_to.h
===================================================================
--- linux-5.0.19-rt11.orig/arch/mips/include/asm/switch_to.h
+++ linux-5.0.19-rt11/arch/mips/include/asm/switch_to.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:45 @ extern struct task_struct *ll_task;
  * inline to try to keep the overhead down. If we have been forced to run on
  * a "CPU" with an FPU because of a previous high level of FP computation,
  * but did not actually use the FPU during the most recent time-slice (CU1
- * isn't set), we undo the restriction on cpus_allowed.
+ * isn't set), we undo the restriction on cpus_mask.
  *
  * We're not calling set_cpus_allowed() here, because we have no need to
  * force prompt migration - we're already switching the current CPU to a
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:60 @ do {									\
 	    test_ti_thread_flag(__prev_ti, TIF_FPUBOUND) &&		\
 	    (!(KSTK_STATUS(prev) & ST0_CU1))) {				\
 		clear_ti_thread_flag(__prev_ti, TIF_FPUBOUND);		\
-		prev->cpus_allowed = prev->thread.user_cpus_allowed;	\
+		prev->cpus_mask = prev->thread.user_cpus_allowed;	\
 	}								\
 	next->thread.emulated_fp = 0;					\
 } while(0)
Index: linux-5.0.19-rt11/arch/mips/kernel/mips-mt-fpaff.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/mips/kernel/mips-mt-fpaff.c
+++ linux-5.0.19-rt11/arch/mips/kernel/mips-mt-fpaff.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:180 @ asmlinkage long mipsmt_sys_sched_getaffi
 	if (retval)
 		goto out_unlock;
 
-	cpumask_or(&allowed, &p->thread.user_cpus_allowed, &p->cpus_allowed);
+	cpumask_or(&allowed, &p->thread.user_cpus_allowed, p->cpus_ptr);
 	cpumask_and(&mask, &allowed, cpu_active_mask);
 
 out_unlock:
Index: linux-5.0.19-rt11/arch/mips/kernel/traps.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/mips/kernel/traps.c
+++ linux-5.0.19-rt11/arch/mips/kernel/traps.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:894 @ static void mt_ase_fp_affinity(void)
 		 * restricted the allowed set to exclude any CPUs with FPUs,
 		 * we'll skip the procedure.
 		 */
-		if (cpumask_intersects(&current->cpus_allowed, &mt_fpu_cpumask)) {
+		if (cpumask_intersects(&current->cpus_mask, &mt_fpu_cpumask)) {
 			cpumask_t tmask;
 
 			current->thread.user_cpus_allowed
-				= current->cpus_allowed;
-			cpumask_and(&tmask, &current->cpus_allowed,
+				= current->cpus_mask;
+			cpumask_and(&tmask, &current->cpus_mask,
 				    &mt_fpu_cpumask);
 			set_cpus_allowed_ptr(current, &tmask);
 			set_thread_flag(TIF_FPUBOUND);
Index: linux-5.0.19-rt11/arch/powerpc/Kconfig
===================================================================
--- linux-5.0.19-rt11.orig/arch/powerpc/Kconfig
+++ linux-5.0.19-rt11/arch/powerpc/Kconfig
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:108 @ config LOCKDEP_SUPPORT
 
 config RWSEM_GENERIC_SPINLOCK
 	bool
+	default y if PREEMPT_RT_FULL
 
 config RWSEM_XCHGADD_ALGORITHM
 	bool
-	default y
+	default y if !PREEMPT_RT_FULL
 
 config GENERIC_LOCKBREAK
 	bool
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:222 @ config PPC
 	select HAVE_HARDLOCKUP_DETECTOR_PERF	if PERF_EVENTS && HAVE_PERF_EVENTS_NMI && !HAVE_HARDLOCKUP_DETECTOR_ARCH
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
+	select HAVE_PREEMPT_LAZY
 	select HAVE_RCU_TABLE_FREE		if SMP
 	select HAVE_REGS_AND_STACK_ACCESS_API
 	select HAVE_RELIABLE_STACKTRACE		if PPC64 && CPU_LITTLE_ENDIAN
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:399 @ menu "Kernel options"
 
 config HIGHMEM
 	bool "High memory support"
-	depends on PPC32
+	depends on PPC32 && !PREEMPT_RT_FULL
 
 source "kernel/Kconfig.hz"
 
Index: linux-5.0.19-rt11/arch/powerpc/include/asm/spinlock_types.h
===================================================================
--- linux-5.0.19-rt11.orig/arch/powerpc/include/asm/spinlock_types.h
+++ linux-5.0.19-rt11/arch/powerpc/include/asm/spinlock_types.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:5 @
 #ifndef _ASM_POWERPC_SPINLOCK_TYPES_H
 #define _ASM_POWERPC_SPINLOCK_TYPES_H
 
-#ifndef __LINUX_SPINLOCK_TYPES_H
-# error "please don't include this file directly"
-#endif
-
 typedef struct {
 	volatile unsigned int slock;
 } arch_spinlock_t;
Index: linux-5.0.19-rt11/arch/powerpc/include/asm/stackprotector.h
===================================================================
--- linux-5.0.19-rt11.orig/arch/powerpc/include/asm/stackprotector.h
+++ linux-5.0.19-rt11/arch/powerpc/include/asm/stackprotector.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:27 @ static __always_inline void boot_init_st
 	unsigned long canary;
 
 	/* Try to get a semi random initial value. */
+#ifdef CONFIG_PREEMPT_RT_FULL
+	canary = (unsigned long)&canary;
+#else
 	canary = get_random_canary();
+#endif
 	canary ^= mftb();
 	canary ^= LINUX_VERSION_CODE;
 	canary &= CANARY_MASK;
Index: linux-5.0.19-rt11/arch/powerpc/include/asm/thread_info.h
===================================================================
--- linux-5.0.19-rt11.orig/arch/powerpc/include/asm/thread_info.h
+++ linux-5.0.19-rt11/arch/powerpc/include/asm/thread_info.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:41 @ struct thread_info {
 	int		cpu;			/* cpu we're on */
 	int		preempt_count;		/* 0 => preemptable,
 						   <0 => BUG */
+	int		preempt_lazy_count;	/* 0 => preemptable,
+						   <0 => BUG */
 	unsigned long	local_flags;		/* private flags for thread */
 #ifdef CONFIG_LIVEPATCH
 	unsigned long *livepatch_sp;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:104 @ void arch_setup_new_exec(void);
 #define TIF_SINGLESTEP		8	/* singlestepping active */
 #define TIF_NOHZ		9	/* in adaptive nohz mode */
 #define TIF_SECCOMP		10	/* secure computing */
-#define TIF_RESTOREALL		11	/* Restore all regs (implies NOERROR) */
-#define TIF_NOERROR		12	/* Force successful syscall return */
+
+#define TIF_NEED_RESCHED_LAZY	11	/* lazy rescheduling necessary */
+#define TIF_SYSCALL_TRACEPOINT	12	/* syscall tracepoint instrumentation */
+
 #define TIF_NOTIFY_RESUME	13	/* callback before returning to user */
 #define TIF_UPROBE		14	/* breakpointed or single-stepping */
-#define TIF_SYSCALL_TRACEPOINT	15	/* syscall tracepoint instrumentation */
 #define TIF_EMULATE_STACK_STORE	16	/* Is an instruction emulation
 						for stack store? */
 #define TIF_MEMDIE		17	/* is terminating due to OOM killer */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:118 @ void arch_setup_new_exec(void);
 #endif
 #define TIF_POLLING_NRFLAG	19	/* true if poll_idle() is polling TIF_NEED_RESCHED */
 #define TIF_32BIT		20	/* 32 bit binary */
+#define TIF_RESTOREALL		21	/* Restore all regs (implies NOERROR) */
+#define TIF_NOERROR		22	/* Force successful syscall return */
+
 
 /* as above, but as bit values */
 #define _TIF_SYSCALL_TRACE	(1<<TIF_SYSCALL_TRACE)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:140 @ void arch_setup_new_exec(void);
 #define _TIF_SYSCALL_TRACEPOINT	(1<<TIF_SYSCALL_TRACEPOINT)
 #define _TIF_EMULATE_STACK_STORE	(1<<TIF_EMULATE_STACK_STORE)
 #define _TIF_NOHZ		(1<<TIF_NOHZ)
+#define _TIF_NEED_RESCHED_LAZY	(1<<TIF_NEED_RESCHED_LAZY)
 #define _TIF_FSCHECK		(1<<TIF_FSCHECK)
 #define _TIF_SYSCALL_EMU	(1<<TIF_SYSCALL_EMU)
 #define _TIF_SYSCALL_DOTRACE	(_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | \
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:150 @ void arch_setup_new_exec(void);
 #define _TIF_USER_WORK_MASK	(_TIF_SIGPENDING | _TIF_NEED_RESCHED | \
 				 _TIF_NOTIFY_RESUME | _TIF_UPROBE | \
 				 _TIF_RESTORE_TM | _TIF_PATCH_PENDING | \
-				 _TIF_FSCHECK)
+				 _TIF_FSCHECK | _TIF_NEED_RESCHED_LAZY)
 #define _TIF_PERSYSCALL_MASK	(_TIF_RESTOREALL|_TIF_NOERROR)
+#define _TIF_NEED_RESCHED_MASK	(_TIF_NEED_RESCHED | _TIF_NEED_RESCHED_LAZY)
 
 /* Bits in local_flags */
 /* Don't move TLF_NAPPING without adjusting the code in entry_32.S */
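
The reshuffle above is what makes the andi. -> andis. conversions in the
entry_32.S/entry_64.S hunks below work: TIF_RESTOREALL and TIF_NOERROR move
into the upper halfword (bits 21 and 22) so the per-syscall mask can be tested
and cleared with the @h immediate forms, while the new TIF_NEED_RESCHED_LAZY
takes bit 11 in the low halfword, still reachable by andi.. As a quick worked
example (the values follow directly from the bit numbers above):

  /* sketch of the constants implied by the new bit layout */
  #define _TIF_PERSYSCALL_MASK	((1 << 21) | (1 << 22))	/* == 0x00600000 */
  /* _TIF_PERSYSCALL_MASK@h == 0x0060, hence:  andis. r0,r9,_TIF_PERSYSCALL_MASK@h */
  #define _TIF_NEED_RESCHED_LAZY	(1 << 11)	/* == 0x00000800, low halfword */
  /* still testable with:  andi. r0,r9,_TIF_NEED_RESCHED_LAZY */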
Index: linux-5.0.19-rt11/arch/powerpc/kernel/asm-offsets.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/powerpc/kernel/asm-offsets.c
+++ linux-5.0.19-rt11/arch/powerpc/kernel/asm-offsets.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:164 @ int main(void)
 	OFFSET(TI_FLAGS, thread_info, flags);
 	OFFSET(TI_LOCAL_FLAGS, thread_info, local_flags);
 	OFFSET(TI_PREEMPT, thread_info, preempt_count);
+	OFFSET(TI_PREEMPT_LAZY, thread_info, preempt_lazy_count);
 	OFFSET(TI_TASK, thread_info, task);
 	OFFSET(TI_CPU, thread_info, cpu);
 
Index: linux-5.0.19-rt11/arch/powerpc/kernel/entry_32.S
===================================================================
--- linux-5.0.19-rt11.orig/arch/powerpc/kernel/entry_32.S
+++ linux-5.0.19-rt11/arch/powerpc/kernel/entry_32.S
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:396 @ ret_from_syscall:
 	MTMSRD(r10)
 	lwz	r9,TI_FLAGS(r12)
 	li	r8,-MAX_ERRNO
-	andi.	r0,r9,(_TIF_SYSCALL_DOTRACE|_TIF_SINGLESTEP|_TIF_USER_WORK_MASK|_TIF_PERSYSCALL_MASK)
+	lis	r0,(_TIF_SYSCALL_DOTRACE|_TIF_SINGLESTEP|_TIF_USER_WORK_MASK|_TIF_PERSYSCALL_MASK)@h
+	ori	r0,r0, (_TIF_SYSCALL_DOTRACE|_TIF_SINGLESTEP|_TIF_USER_WORK_MASK|_TIF_PERSYSCALL_MASK)@l
+	and.	r0,r9,r0
 	bne-	syscall_exit_work
 	cmplw	0,r3,r8
 	blt+	syscall_exit_cont
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:516 @ syscall_dotrace:
 	b	syscall_dotrace_cont
 
 syscall_exit_work:
-	andi.	r0,r9,_TIF_RESTOREALL
+	andis.	r0,r9,_TIF_RESTOREALL@h
 	beq+	0f
 	REST_NVGPRS(r1)
 	b	2f
 0:	cmplw	0,r3,r8
 	blt+	1f
-	andi.	r0,r9,_TIF_NOERROR
+	andis.	r0,r9,_TIF_NOERROR@h
 	bne-	1f
 	lwz	r11,_CCR(r1)			/* Load CR */
 	neg	r3,r3
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:531 @ syscall_exit_work:
 
 1:	stw	r6,RESULT(r1)	/* Save result */
 	stw	r3,GPR3(r1)	/* Update return value */
-2:	andi.	r0,r9,(_TIF_PERSYSCALL_MASK)
+2:	andis.	r0,r9,(_TIF_PERSYSCALL_MASK)@h
 	beq	4f
 
 	/* Clear per-syscall TIF flags if any are set.  */
 
-	li	r11,_TIF_PERSYSCALL_MASK
+	lis	r11,_TIF_PERSYSCALL_MASK@h
 	addi	r12,r12,TI_FLAGS
 3:	lwarx	r8,0,r12
 	andc	r8,r8,r11
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:893 @ resume_kernel:
 	cmpwi	0,r0,0		/* if non-zero, just restore regs and return */
 	bne	restore
 	andi.	r8,r8,_TIF_NEED_RESCHED
+	bne+	1f
+	lwz	r0,TI_PREEMPT_LAZY(r9)
+	cmpwi	0,r0,0		/* if non-zero, just restore regs and return */
+	bne	restore
+	lwz	r0,TI_FLAGS(r9)
+	andi.	r0,r0,_TIF_NEED_RESCHED_LAZY
 	beq+	restore
+1:
 	lwz	r3,_MSR(r1)
 	andi.	r0,r3,MSR_EE	/* interrupts off? */
 	beq	restore		/* don't schedule if so */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:911 @ resume_kernel:
 	 */
 	bl	trace_hardirqs_off
 #endif
-1:	bl	preempt_schedule_irq
+2:	bl	preempt_schedule_irq
 	CURRENT_THREAD_INFO(r9, r1)
 	lwz	r3,TI_FLAGS(r9)
-	andi.	r0,r3,_TIF_NEED_RESCHED
-	bne-	1b
+	andi.	r0,r3,_TIF_NEED_RESCHED_MASK
+	bne-	2b
 #ifdef CONFIG_TRACE_IRQFLAGS
 	/* And now, to properly rebalance the above, we tell lockdep they
 	 * are being turned back on, which will happen when we return
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1244 @ global_dbcr0:
 #endif /* !(CONFIG_4xx || CONFIG_BOOKE) */
 
 do_work:			/* r10 contains MSR_KERNEL here */
-	andi.	r0,r9,_TIF_NEED_RESCHED
+	andi.	r0,r9,_TIF_NEED_RESCHED_MASK
 	beq	do_user_signal
 
 do_resched:			/* r10 contains MSR_KERNEL here */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1265 @ recheck:
 	MTMSRD(r10)		/* disable interrupts */
 	CURRENT_THREAD_INFO(r9, r1)
 	lwz	r9,TI_FLAGS(r9)
-	andi.	r0,r9,_TIF_NEED_RESCHED
+	andi.	r0,r9,_TIF_NEED_RESCHED_MASK
 	bne-	do_resched
 	andi.	r0,r9,_TIF_USER_WORK_MASK
 	beq	restore_user
Index: linux-5.0.19-rt11/arch/powerpc/kernel/entry_64.S
===================================================================
--- linux-5.0.19-rt11.orig/arch/powerpc/kernel/entry_64.S
+++ linux-5.0.19-rt11/arch/powerpc/kernel/entry_64.S
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:256 @ system_call_exit:
 
 	ld	r9,TI_FLAGS(r12)
 	li	r11,-MAX_ERRNO
-	andi.	r0,r9,(_TIF_SYSCALL_DOTRACE|_TIF_SINGLESTEP|_TIF_USER_WORK_MASK|_TIF_PERSYSCALL_MASK)
+	lis	r0,(_TIF_SYSCALL_DOTRACE|_TIF_SINGLESTEP|_TIF_USER_WORK_MASK|_TIF_PERSYSCALL_MASK)@h
+	ori	r0,r0,(_TIF_SYSCALL_DOTRACE|_TIF_SINGLESTEP|_TIF_USER_WORK_MASK|_TIF_PERSYSCALL_MASK)@l
+	and.	r0,r9,r0
 	bne-	.Lsyscall_exit_work
 
 	andi.	r0,r8,MSR_FP
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:375 @ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
 	/* If TIF_RESTOREALL is set, don't scribble on either r3 or ccr.
 	 If TIF_NOERROR is set, just save r3 as it is. */
 
-	andi.	r0,r9,_TIF_RESTOREALL
+	andis.	r0,r9,_TIF_RESTOREALL@h
 	beq+	0f
 	REST_NVGPRS(r1)
 	b	2f
 0:	cmpld	r3,r11		/* r11 is -MAX_ERRNO */
 	blt+	1f
-	andi.	r0,r9,_TIF_NOERROR
+	andis.	r0,r9,_TIF_NOERROR@h
 	bne-	1f
 	ld	r5,_CCR(r1)
 	neg	r3,r3
 	oris	r5,r5,0x1000	/* Set SO bit in CR */
 	std	r5,_CCR(r1)
 1:	std	r3,GPR3(r1)
-2:	andi.	r0,r9,(_TIF_PERSYSCALL_MASK)
+2:	andis.	r0,r9,(_TIF_PERSYSCALL_MASK)@h
 	beq	4f
 
 	/* Clear per-syscall TIF flags if any are set.  */
 
-	li	r11,_TIF_PERSYSCALL_MASK
+	lis	r11,(_TIF_PERSYSCALL_MASK)@h
 	addi	r12,r12,TI_FLAGS
 3:	ldarx	r10,0,r12
 	andc	r10,r10,r11
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:785 @ _GLOBAL(ret_from_except_lite)
 	bl	restore_math
 	b	restore
 #endif
-1:	andi.	r0,r4,_TIF_NEED_RESCHED
+1:	andi.	r0,r4,_TIF_NEED_RESCHED_MASK
 	beq	2f
 	bl	restore_interrupts
 	SCHEDULE_USER
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:847 @ resume_kernel:
 
 #ifdef CONFIG_PREEMPT
 	/* Check if we need to preempt */
+	lwz	r8,TI_PREEMPT(r9)
+	cmpwi	0,r8,0		/* if non-zero, just restore regs and return */
+	bne	restore
 	andi.	r0,r4,_TIF_NEED_RESCHED
+	bne+	check_count
+
+	andi.	r0,r4,_TIF_NEED_RESCHED_LAZY
 	beq+	restore
+	lwz	r8,TI_PREEMPT_LAZY(r9)
+
 	/* Check that preempt_count() == 0 and interrupts are enabled */
-	lwz	r8,TI_PREEMPT(r9)
+check_count:
 	cmpwi	cr0,r8,0
 	bne	restore
 	ld	r0,SOFTE(r1)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:875 @ resume_kernel:
 	/* Re-test flags and eventually loop */
 	CURRENT_THREAD_INFO(r9, r1)
 	ld	r4,TI_FLAGS(r9)
-	andi.	r0,r4,_TIF_NEED_RESCHED
+	andi.	r0,r4,_TIF_NEED_RESCHED_MASK
 	bne	1b
 
 	/*
Index: linux-5.0.19-rt11/arch/powerpc/kernel/irq.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/powerpc/kernel/irq.c
+++ linux-5.0.19-rt11/arch/powerpc/kernel/irq.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:769 @ void irq_ctx_init(void)
 	}
 }
 
+#ifndef CONFIG_PREEMPT_RT_FULL
 void do_softirq_own_stack(void)
 {
 	struct thread_info *curtp, *irqtp;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:787 @ void do_softirq_own_stack(void)
 	if (irqtp->flags)
 		set_bits(irqtp->flags, &curtp->flags);
 }
+#endif
 
 irq_hw_number_t virq_to_hw(unsigned int virq)
 {
Index: linux-5.0.19-rt11/arch/powerpc/kernel/misc_32.S
===================================================================
--- linux-5.0.19-rt11.orig/arch/powerpc/kernel/misc_32.S
+++ linux-5.0.19-rt11/arch/powerpc/kernel/misc_32.S
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:45 @
  * We store the saved ksp_limit in the unused part
  * of the STACK_FRAME_OVERHEAD
  */
+#ifndef CONFIG_PREEMPT_RT_FULL
 _GLOBAL(call_do_softirq)
 	mflr	r0
 	stw	r0,4(r1)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:62 @ _GLOBAL(call_do_softirq)
 	stw	r10,THREAD+KSP_LIMIT(r2)
 	mtlr	r0
 	blr
+#endif
 
 /*
  * void call_do_irq(struct pt_regs *regs, struct thread_info *irqtp);
Index: linux-5.0.19-rt11/arch/powerpc/kernel/misc_64.S
===================================================================
--- linux-5.0.19-rt11.orig/arch/powerpc/kernel/misc_64.S
+++ linux-5.0.19-rt11/arch/powerpc/kernel/misc_64.S
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:35 @
 
 	.text
 
+#ifndef CONFIG_PREEMPT_RT_FULL
 _GLOBAL(call_do_softirq)
 	mflr	r0
 	std	r0,16(r1)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:46 @ _GLOBAL(call_do_softirq)
 	ld	r0,16(r1)
 	mtlr	r0
 	blr
+#endif
 
 _GLOBAL(call_do_irq)
 	mflr	r0
Index: linux-5.0.19-rt11/arch/powerpc/kernel/traps.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/powerpc/kernel/traps.c
+++ linux-5.0.19-rt11/arch/powerpc/kernel/traps.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:177 @ extern void panic_flush_kmsg_start(void)
 
 extern void panic_flush_kmsg_end(void)
 {
-	printk_safe_flush_on_panic();
 	kmsg_dump(KMSG_DUMP_PANIC);
 	bust_spinlocks(0);
 	debug_locks_off();
Index: linux-5.0.19-rt11/arch/powerpc/kernel/watchdog.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/powerpc/kernel/watchdog.c
+++ linux-5.0.19-rt11/arch/powerpc/kernel/watchdog.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:184 @ static void watchdog_smp_panic(int cpu,
 
 	wd_smp_unlock(&flags);
 
-	printk_safe_flush();
-	/*
-	 * printk_safe_flush() seems to require another print
-	 * before anything actually goes out to console.
-	 */
 	if (sysctl_hardlockup_all_cpu_backtrace)
 		trigger_allbutself_cpu_backtrace();
 
Index: linux-5.0.19-rt11/arch/powerpc/kvm/Kconfig
===================================================================
--- linux-5.0.19-rt11.orig/arch/powerpc/kvm/Kconfig
+++ linux-5.0.19-rt11/arch/powerpc/kvm/Kconfig
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:181 @ config KVM_E500MC
 config KVM_MPIC
 	bool "KVM in-kernel MPIC emulation"
 	depends on KVM && E500
+	depends on !PREEMPT_RT_FULL
 	select HAVE_KVM_IRQCHIP
 	select HAVE_KVM_IRQFD
 	select HAVE_KVM_IRQ_ROUTING
Index: linux-5.0.19-rt11/arch/powerpc/platforms/cell/spufs/sched.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/powerpc/platforms/cell/spufs/sched.c
+++ linux-5.0.19-rt11/arch/powerpc/platforms/cell/spufs/sched.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:144 @ void __spu_update_sched_info(struct spu_
 	 * runqueue. The context will be rescheduled on the proper node
 	 * if it is timesliced or preempted.
 	 */
-	cpumask_copy(&ctx->cpus_allowed, &current->cpus_allowed);
+	cpumask_copy(&ctx->cpus_allowed, current->cpus_ptr);
 
 	/* Save the current cpu id for spu interrupt routing. */
 	ctx->last_ran = raw_smp_processor_id();
Index: linux-5.0.19-rt11/arch/powerpc/platforms/ps3/device-init.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/powerpc/platforms/ps3/device-init.c
+++ linux-5.0.19-rt11/arch/powerpc/platforms/ps3/device-init.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:755 @ static int ps3_notification_read_write(s
 	}
 	pr_debug("%s:%u: notification %s issued\n", __func__, __LINE__, op);
 
-	res = wait_event_interruptible(dev->done.wait,
-				       dev->done.done || kthread_should_stop());
+	res = swait_event_interruptible_exclusive(dev->done.wait,
+						  dev->done.done || kthread_should_stop());
 	if (kthread_should_stop())
 		res = -EINTR;
 	if (res) {
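
The swait conversion above is needed because in the RT tree completions are
backed by simple wait queues, so code that (like this driver) sleeps on a
completion's internal wait queue directly has to use the swait API. A generic,
hedged sketch of that API (queue, flag and function names below are
illustrative, not taken from this driver):

  #include <linux/swait.h>
  #include <linux/kthread.h>

  static DECLARE_SWAIT_QUEUE_HEAD(demo_wait);	/* illustrative queue */
  static bool demo_done;

  /* waiter: sleeps until demo_done is set or the kthread is told to stop */
  static int demo_wait_for_event(void)
  {
  	return swait_event_interruptible_exclusive(demo_wait,
  				demo_done || kthread_should_stop());
  }

  /* waker: swait wakeups are cheap and safe from hard interrupt context */
  static void demo_complete(void)
  {
  	demo_done = true;
  	swake_up_one(&demo_wait);
  }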
Index: linux-5.0.19-rt11/arch/powerpc/platforms/pseries/iommu.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/powerpc/platforms/pseries/iommu.c
+++ linux-5.0.19-rt11/arch/powerpc/platforms/pseries/iommu.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:41 @
 #include <linux/of.h>
 #include <linux/iommu.h>
 #include <linux/rculist.h>
+#include <linux/locallock.h>
 #include <asm/io.h>
 #include <asm/prom.h>
 #include <asm/rtas.h>
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:195 @ static int tce_build_pSeriesLP(struct io
 }
 
 static DEFINE_PER_CPU(__be64 *, tce_page);
+static DEFINE_LOCAL_IRQ_LOCK(tcp_page_lock);
 
 static int tce_buildmulti_pSeriesLP(struct iommu_table *tbl, long tcenum,
 				     long npages, unsigned long uaddr,
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:216 @ static int tce_buildmulti_pSeriesLP(stru
 		                           direction, attrs);
 	}
 
-	local_irq_save(flags);	/* to protect tcep and the page behind it */
+	/* to protect tcep and the page behind it */
+	local_lock_irqsave(tcp_page_lock, flags);
 
 	tcep = __this_cpu_read(tce_page);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:228 @ static int tce_buildmulti_pSeriesLP(stru
 		tcep = (__be64 *)__get_free_page(GFP_ATOMIC);
 		/* If allocation fails, fall back to the loop implementation */
 		if (!tcep) {
-			local_irq_restore(flags);
+			local_unlock_irqrestore(tcp_page_lock, flags);
 			return tce_build_pSeriesLP(tbl, tcenum, npages, uaddr,
 					    direction, attrs);
 		}
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:262 @ static int tce_buildmulti_pSeriesLP(stru
 		tcenum += limit;
 	} while (npages > 0 && !rc);
 
-	local_irq_restore(flags);
+	local_unlock_irqrestore(tcp_page_lock, flags);
 
 	if (unlikely(rc == H_NOT_ENOUGH_RESOURCES)) {
 		ret = (int)rc;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:420 @ static int tce_setrange_multi_pSeriesLP(
 	u64 rc = 0;
 	long l, limit;
 
-	local_irq_disable();	/* to protect tcep and the page behind it */
+	/* to protect tcep and the page behind it */
+	local_lock_irq(tcp_page_lock);
 	tcep = __this_cpu_read(tce_page);
 
 	if (!tcep) {
 		tcep = (__be64 *)__get_free_page(GFP_ATOMIC);
 		if (!tcep) {
-			local_irq_enable();
+			local_unlock_irq(tcp_page_lock);
 			return -ENOMEM;
 		}
 		__this_cpu_write(tce_page, tcep);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:473 @ static int tce_setrange_multi_pSeriesLP(
 
 	/* error cleanup: caller will clear whole range */
 
-	local_irq_enable();
+	local_unlock_irq(tcp_page_lock);
 	return rc;
 }
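
The tce_page conversion above is the standard RT idiom: a per-CPU buffer that
used to be protected only by disabling interrupts gets an explicit local lock,
which still disables interrupts on !RT kernels but turns into a per-CPU
sleeping lock on PREEMPT_RT_FULL, keeping the section preemptible. A minimal
sketch of the idiom (buffer and lock names are illustrative):

  #include <linux/locallock.h>	/* RT-specific header introduced by this series */
  #include <linux/percpu.h>
  #include <linux/gfp.h>
  #include <linux/errno.h>

  static DEFINE_PER_CPU(void *, demo_page);
  static DEFINE_LOCAL_IRQ_LOCK(demo_page_lock);

  static int demo_use_page(void)
  {
  	void *page;

  	/* !RT: behaves like local_irq_disable(); RT: per-CPU lock, preemptible */
  	local_lock_irq(demo_page_lock);

  	page = __this_cpu_read(demo_page);
  	if (!page) {
  		page = (void *)__get_free_page(GFP_ATOMIC);
  		if (!page) {
  			local_unlock_irq(demo_page_lock);
  			return -ENOMEM;
  		}
  		__this_cpu_write(demo_page, page);
  	}

  	/* ... fill and use the per-CPU page ... */

  	local_unlock_irq(demo_page_lock);
  	return 0;
  }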
 
Index: linux-5.0.19-rt11/arch/s390/include/asm/spinlock_types.h
===================================================================
--- linux-5.0.19-rt11.orig/arch/s390/include/asm/spinlock_types.h
+++ linux-5.0.19-rt11/arch/s390/include/asm/spinlock_types.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:5 @
 #ifndef __ASM_SPINLOCK_TYPES_H
 #define __ASM_SPINLOCK_TYPES_H
 
-#ifndef __LINUX_SPINLOCK_TYPES_H
-# error "please don't include this file directly"
-#endif
-
 typedef struct {
 	int lock;
 } __attribute__ ((aligned (4))) arch_spinlock_t;
Index: linux-5.0.19-rt11/arch/sh/include/asm/spinlock_types.h
===================================================================
--- linux-5.0.19-rt11.orig/arch/sh/include/asm/spinlock_types.h
+++ linux-5.0.19-rt11/arch/sh/include/asm/spinlock_types.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:5 @
 #ifndef __ASM_SH_SPINLOCK_TYPES_H
 #define __ASM_SH_SPINLOCK_TYPES_H
 
-#ifndef __LINUX_SPINLOCK_TYPES_H
-# error "please don't include this file directly"
-#endif
-
 typedef struct {
 	volatile unsigned int lock;
 } arch_spinlock_t;
Index: linux-5.0.19-rt11/arch/sh/kernel/irq.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/sh/kernel/irq.c
+++ linux-5.0.19-rt11/arch/sh/kernel/irq.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:151 @ void irq_ctx_exit(int cpu)
 	hardirq_ctx[cpu] = NULL;
 }
 
+#ifndef CONFIG_PREEMPT_RT_FULL
 void do_softirq_own_stack(void)
 {
 	struct thread_info *curctx;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:179 @ void do_softirq_own_stack(void)
 		  "r5", "r6", "r7", "r8", "r9", "r15", "t", "pr"
 	);
 }
+#endif
 #else
 static inline void handle_one_irq(unsigned int irq)
 {
Index: linux-5.0.19-rt11/arch/sparc/kernel/irq_64.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/sparc/kernel/irq_64.c
+++ linux-5.0.19-rt11/arch/sparc/kernel/irq_64.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:857 @ void __irq_entry handler_irq(int pil, st
 	set_irq_regs(old_regs);
 }
 
+#ifndef CONFIG_PREEMPT_RT_FULL
 void do_softirq_own_stack(void)
 {
 	void *orig_sp, *sp = softirq_stack[smp_processor_id()];
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:872 @ void do_softirq_own_stack(void)
 	__asm__ __volatile__("mov %0, %%sp"
 			     : : "r" (orig_sp));
 }
+#endif
 
 #ifdef CONFIG_HOTPLUG_CPU
 void fixup_irqs(void)
Index: linux-5.0.19-rt11/arch/x86/Kconfig
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/Kconfig
+++ linux-5.0.19-rt11/arch/x86/Kconfig
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:186 @ config X86
 	select HAVE_PCI
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
+	select HAVE_PREEMPT_LAZY
 	select HAVE_RCU_TABLE_FREE		if PARAVIRT
 	select HAVE_RCU_TABLE_INVALIDATE	if HAVE_RCU_TABLE_FREE
 	select HAVE_REGS_AND_STACK_ACCESS_API
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:273 @ config ARCH_MAY_HAVE_PC_FDC
 	def_bool y
 	depends on ISA_DMA_API
 
+config RWSEM_GENERIC_SPINLOCK
+	def_bool PREEMPT_RT_FULL
+
 config RWSEM_XCHGADD_ALGORITHM
-	def_bool y
+	def_bool !RWSEM_GENERIC_SPINLOCK && !PREEMPT_RT_FULL
 
 config GENERIC_CALIBRATE_DELAY
 	def_bool y
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:956 @ config CALGARY_IOMMU_ENABLED_BY_DEFAULT
 config MAXSMP
 	bool "Enable Maximum number of SMP Processors and NUMA Nodes"
 	depends on X86_64 && SMP && DEBUG_KERNEL
-	select CPUMASK_OFFSTACK
+	select CPUMASK_OFFSTACK if !PREEMPT_RT_FULL
 	---help---
 	  Enable maximum number of CPUS and NUMA Nodes for this architecture.
 	  If unsure, say N.
Index: linux-5.0.19-rt11/arch/x86/crypto/aesni-intel_glue.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/crypto/aesni-intel_glue.c
+++ linux-5.0.19-rt11/arch/x86/crypto/aesni-intel_glue.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:418 @ static int ecb_encrypt(struct skcipher_r
 
 	err = skcipher_walk_virt(&walk, req, true);
 
-	kernel_fpu_begin();
 	while ((nbytes = walk.nbytes)) {
+		kernel_fpu_begin();
 		aesni_ecb_enc(ctx, walk.dst.virt.addr, walk.src.virt.addr,
 			      nbytes & AES_BLOCK_MASK);
+		kernel_fpu_end();
 		nbytes &= AES_BLOCK_SIZE - 1;
 		err = skcipher_walk_done(&walk, nbytes);
 	}
-	kernel_fpu_end();
 
 	return err;
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:440 @ static int ecb_decrypt(struct skcipher_r
 
 	err = skcipher_walk_virt(&walk, req, true);
 
-	kernel_fpu_begin();
 	while ((nbytes = walk.nbytes)) {
+		kernel_fpu_begin();
 		aesni_ecb_dec(ctx, walk.dst.virt.addr, walk.src.virt.addr,
 			      nbytes & AES_BLOCK_MASK);
+		kernel_fpu_end();
 		nbytes &= AES_BLOCK_SIZE - 1;
 		err = skcipher_walk_done(&walk, nbytes);
 	}
-	kernel_fpu_end();
 
 	return err;
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:462 @ static int cbc_encrypt(struct skcipher_r
 
 	err = skcipher_walk_virt(&walk, req, true);
 
-	kernel_fpu_begin();
 	while ((nbytes = walk.nbytes)) {
+		kernel_fpu_begin();
 		aesni_cbc_enc(ctx, walk.dst.virt.addr, walk.src.virt.addr,
 			      nbytes & AES_BLOCK_MASK, walk.iv);
+		kernel_fpu_end();
 		nbytes &= AES_BLOCK_SIZE - 1;
 		err = skcipher_walk_done(&walk, nbytes);
 	}
-	kernel_fpu_end();
 
 	return err;
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:484 @ static int cbc_decrypt(struct skcipher_r
 
 	err = skcipher_walk_virt(&walk, req, true);
 
-	kernel_fpu_begin();
 	while ((nbytes = walk.nbytes)) {
+		kernel_fpu_begin();
 		aesni_cbc_dec(ctx, walk.dst.virt.addr, walk.src.virt.addr,
 			      nbytes & AES_BLOCK_MASK, walk.iv);
+		kernel_fpu_end();
 		nbytes &= AES_BLOCK_SIZE - 1;
 		err = skcipher_walk_done(&walk, nbytes);
 	}
-	kernel_fpu_end();
 
 	return err;
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:541 @ static int ctr_crypt(struct skcipher_req
 
 	err = skcipher_walk_virt(&walk, req, true);
 
-	kernel_fpu_begin();
 	while ((nbytes = walk.nbytes) >= AES_BLOCK_SIZE) {
+		kernel_fpu_begin();
 		aesni_ctr_enc_tfm(ctx, walk.dst.virt.addr, walk.src.virt.addr,
 			              nbytes & AES_BLOCK_MASK, walk.iv);
+		kernel_fpu_end();
 		nbytes &= AES_BLOCK_SIZE - 1;
 		err = skcipher_walk_done(&walk, nbytes);
 	}
 	if (walk.nbytes) {
+		kernel_fpu_begin();
 		ctr_crypt_final(ctx, &walk);
+		kernel_fpu_end();
 		err = skcipher_walk_done(&walk, 0);
 	}
-	kernel_fpu_end();
 
 	return err;
 }
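
The aesni rework above applies the rule used throughout the crypto changes in
this series: keep every kernel_fpu_begin()/kernel_fpu_end() section down to a
single walk chunk, because the FPU section runs with preemption disabled and
must therefore stay short on RT. The resulting loop shape, as a hedged sketch
(simd_process_chunk() is a placeholder for the actual SIMD helper):

  	err = skcipher_walk_virt(&walk, req, true);

  	while ((nbytes = walk.nbytes)) {
  		kernel_fpu_begin();	/* preemption off only for this chunk */
  		simd_process_chunk(ctx, walk.dst.virt.addr, walk.src.virt.addr,
  				   nbytes & AES_BLOCK_MASK);
  		kernel_fpu_end();	/* preemptible again before walk_done() */
  		nbytes &= AES_BLOCK_SIZE - 1;
  		err = skcipher_walk_done(&walk, nbytes);
  	}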
Index: linux-5.0.19-rt11/arch/x86/crypto/cast5_avx_glue.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/crypto/cast5_avx_glue.c
+++ linux-5.0.19-rt11/arch/x86/crypto/cast5_avx_glue.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:64 @ static inline void cast5_fpu_end(bool fp
 
 static int ecb_crypt(struct skcipher_request *req, bool enc)
 {
-	bool fpu_enabled = false;
+	bool fpu_enabled;
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 	struct cast5_ctx *ctx = crypto_skcipher_ctx(tfm);
 	struct skcipher_walk walk;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:79 @ static int ecb_crypt(struct skcipher_req
 		u8 *wsrc = walk.src.virt.addr;
 		u8 *wdst = walk.dst.virt.addr;
 
-		fpu_enabled = cast5_fpu_begin(fpu_enabled, &walk, nbytes);
+		fpu_enabled = cast5_fpu_begin(false, &walk, nbytes);
 
 		/* Process multi-block batch */
 		if (nbytes >= bsize * CAST5_PARALLEL_BLOCKS) {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:108 @ static int ecb_crypt(struct skcipher_req
 		} while (nbytes >= bsize);
 
 done:
+		cast5_fpu_end(fpu_enabled);
 		err = skcipher_walk_done(&walk, nbytes);
 	}
-
-	cast5_fpu_end(fpu_enabled);
 	return err;
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:214 @ static int cbc_decrypt(struct skcipher_r
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 	struct cast5_ctx *ctx = crypto_skcipher_ctx(tfm);
-	bool fpu_enabled = false;
+	bool fpu_enabled;
 	struct skcipher_walk walk;
 	unsigned int nbytes;
 	int err;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:222 @ static int cbc_decrypt(struct skcipher_r
 	err = skcipher_walk_virt(&walk, req, false);
 
 	while ((nbytes = walk.nbytes)) {
-		fpu_enabled = cast5_fpu_begin(fpu_enabled, &walk, nbytes);
+		fpu_enabled = cast5_fpu_begin(false, &walk, nbytes);
 		nbytes = __cbc_decrypt(ctx, &walk);
+		cast5_fpu_end(fpu_enabled);
 		err = skcipher_walk_done(&walk, nbytes);
 	}
-
-	cast5_fpu_end(fpu_enabled);
 	return err;
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:293 @ static int ctr_crypt(struct skcipher_req
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 	struct cast5_ctx *ctx = crypto_skcipher_ctx(tfm);
-	bool fpu_enabled = false;
+	bool fpu_enabled;
 	struct skcipher_walk walk;
 	unsigned int nbytes;
 	int err;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:301 @ static int ctr_crypt(struct skcipher_req
 	err = skcipher_walk_virt(&walk, req, false);
 
 	while ((nbytes = walk.nbytes) >= CAST5_BLOCK_SIZE) {
-		fpu_enabled = cast5_fpu_begin(fpu_enabled, &walk, nbytes);
+		fpu_enabled = cast5_fpu_begin(false, &walk, nbytes);
 		nbytes = __ctr_crypt(&walk, ctx);
+		cast5_fpu_end(fpu_enabled);
 		err = skcipher_walk_done(&walk, nbytes);
 	}
 
-	cast5_fpu_end(fpu_enabled);
-
 	if (walk.nbytes) {
 		ctr_crypt_final(&walk, ctx);
 		err = skcipher_walk_done(&walk, 0);
Index: linux-5.0.19-rt11/arch/x86/crypto/chacha_glue.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/crypto/chacha_glue.c
+++ linux-5.0.19-rt11/arch/x86/crypto/chacha_glue.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:134 @ static int chacha_simd_stream_xor(struct
 				  struct chacha_ctx *ctx, u8 *iv)
 {
 	u32 *state, state_buf[16 + 2] __aligned(8);
-	int next_yield = 4096; /* bytes until next FPU yield */
 	int err = 0;
 
 	BUILD_BUG_ON(CHACHA_STATE_ALIGN != 16);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:146 @ static int chacha_simd_stream_xor(struct
 
 		if (nbytes < walk->total) {
 			nbytes = round_down(nbytes, walk->stride);
-			next_yield -= nbytes;
 		}
 
 		chacha_dosimd(state, walk->dst.virt.addr, walk->src.virt.addr,
 			      nbytes, ctx->nrounds);
 
-		if (next_yield <= 0) {
-			/* temporarily allow preemption */
-			kernel_fpu_end();
-			kernel_fpu_begin();
-			next_yield = 4096;
-		}
-
+		kernel_fpu_end();
 		err = skcipher_walk_done(walk, walk->nbytes - nbytes);
+		kernel_fpu_begin();
 	}
 
 	return err;
Index: linux-5.0.19-rt11/arch/x86/crypto/glue_helper.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/crypto/glue_helper.c
+++ linux-5.0.19-rt11/arch/x86/crypto/glue_helper.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:41 @ int glue_ecb_req_128bit(const struct com
 	void *ctx = crypto_skcipher_ctx(crypto_skcipher_reqtfm(req));
 	const unsigned int bsize = 128 / 8;
 	struct skcipher_walk walk;
-	bool fpu_enabled = false;
+	bool fpu_enabled;
 	unsigned int nbytes;
 	int err;
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:54 @ int glue_ecb_req_128bit(const struct com
 		unsigned int i;
 
 		fpu_enabled = glue_fpu_begin(bsize, gctx->fpu_blocks_limit,
-					     &walk, fpu_enabled, nbytes);
+					     &walk, false, nbytes);
 		for (i = 0; i < gctx->num_funcs; i++) {
 			func_bytes = bsize * gctx->funcs[i].num_blocks;
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:72 @ int glue_ecb_req_128bit(const struct com
 			if (nbytes < bsize)
 				break;
 		}
+		glue_fpu_end(fpu_enabled);
 		err = skcipher_walk_done(&walk, nbytes);
 	}
-
-	glue_fpu_end(fpu_enabled);
 	return err;
 }
 EXPORT_SYMBOL_GPL(glue_ecb_req_128bit);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:117 @ int glue_cbc_decrypt_req_128bit(const st
 	void *ctx = crypto_skcipher_ctx(crypto_skcipher_reqtfm(req));
 	const unsigned int bsize = 128 / 8;
 	struct skcipher_walk walk;
-	bool fpu_enabled = false;
+	bool fpu_enabled;
 	unsigned int nbytes;
 	int err;
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:131 @ int glue_cbc_decrypt_req_128bit(const st
 		u128 last_iv;
 
 		fpu_enabled = glue_fpu_begin(bsize, gctx->fpu_blocks_limit,
-					     &walk, fpu_enabled, nbytes);
+					     &walk, false, nbytes);
 		/* Start of the last block. */
 		src += nbytes / bsize - 1;
 		dst += nbytes / bsize - 1;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:163 @ int glue_cbc_decrypt_req_128bit(const st
 done:
 		u128_xor(dst, dst, (u128 *)walk.iv);
 		*(u128 *)walk.iv = last_iv;
+		glue_fpu_end(fpu_enabled);
 		err = skcipher_walk_done(&walk, nbytes);
 	}
 
-	glue_fpu_end(fpu_enabled);
 	return err;
 }
 EXPORT_SYMBOL_GPL(glue_cbc_decrypt_req_128bit);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:177 @ int glue_ctr_req_128bit(const struct com
 	void *ctx = crypto_skcipher_ctx(crypto_skcipher_reqtfm(req));
 	const unsigned int bsize = 128 / 8;
 	struct skcipher_walk walk;
-	bool fpu_enabled = false;
+	bool fpu_enabled;
 	unsigned int nbytes;
 	int err;
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:191 @ int glue_ctr_req_128bit(const struct com
 		le128 ctrblk;
 
 		fpu_enabled = glue_fpu_begin(bsize, gctx->fpu_blocks_limit,
-					     &walk, fpu_enabled, nbytes);
+					     &walk, false, nbytes);
 
 		be128_to_le128(&ctrblk, (be128 *)walk.iv);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:215 @ int glue_ctr_req_128bit(const struct com
 		}
 
 		le128_to_be128((be128 *)walk.iv, &ctrblk);
+		glue_fpu_end(fpu_enabled);
 		err = skcipher_walk_done(&walk, nbytes);
 	}
 
-	glue_fpu_end(fpu_enabled);
-
 	if (nbytes) {
 		le128 ctrblk;
 		u128 tmp;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:279 @ int glue_xts_req_128bit(const struct com
 {
 	const unsigned int bsize = 128 / 8;
 	struct skcipher_walk walk;
-	bool fpu_enabled = false;
+	bool fpu_enabled;
 	unsigned int nbytes;
 	int err;
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:290 @ int glue_xts_req_128bit(const struct com
 
 	/* set minimum length to bsize, for tweak_fn */
 	fpu_enabled = glue_fpu_begin(bsize, gctx->fpu_blocks_limit,
-				     &walk, fpu_enabled,
+				     &walk, false,
 				     nbytes < bsize ? bsize : nbytes);
 
 	/* calculate first value of T */
 	tweak_fn(tweak_ctx, walk.iv, walk.iv);
 
 	while (nbytes) {
+		fpu_enabled = glue_fpu_begin(bsize, gctx->fpu_blocks_limit,
+					     &walk, fpu_enabled,
+					     nbytes < bsize ? bsize : nbytes);
 		nbytes = __glue_xts_req_128bit(gctx, crypt_ctx, &walk);
 
+		glue_fpu_end(fpu_enabled);
+		fpu_enabled = false;
 		err = skcipher_walk_done(&walk, nbytes);
 		nbytes = walk.nbytes;
 	}
 
-	glue_fpu_end(fpu_enabled);
-
 	return err;
 }
 EXPORT_SYMBOL_GPL(glue_xts_req_128bit);
Index: linux-5.0.19-rt11/arch/x86/entry/common.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/entry/common.c
+++ linux-5.0.19-rt11/arch/x86/entry/common.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:34 @
 #include <asm/vdso.h>
 #include <linux/uaccess.h>
 #include <asm/cpufeature.h>
+#include <asm/fpu/api.h>
 #include <asm/nospec-branch.h>
 
 #define CREATE_TRACE_POINTS
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:138 @ static long syscall_trace_enter(struct p
 
 #define EXIT_TO_USERMODE_LOOP_FLAGS				\
 	(_TIF_SIGPENDING | _TIF_NOTIFY_RESUME | _TIF_UPROBE |	\
-	 _TIF_NEED_RESCHED | _TIF_USER_RETURN_NOTIFY | _TIF_PATCH_PENDING)
+	 _TIF_NEED_RESCHED_MASK | _TIF_USER_RETURN_NOTIFY | _TIF_PATCH_PENDING)
 
 static void exit_to_usermode_loop(struct pt_regs *regs, u32 cached_flags)
 {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:153 @ static void exit_to_usermode_loop(struct
 		/* We have work to do. */
 		local_irq_enable();
 
-		if (cached_flags & _TIF_NEED_RESCHED)
+		if (cached_flags & _TIF_NEED_RESCHED_MASK)
 			schedule();
 
+#ifdef ARCH_RT_DELAYS_SIGNAL_SEND
+		if (unlikely(current->forced_info.si_signo)) {
+			struct task_struct *t = current;
+			force_sig_info(t->forced_info.si_signo, &t->forced_info, t);
+			t->forced_info.si_signo = 0;
+		}
+#endif
 		if (cached_flags & _TIF_UPROBE)
 			uprobe_notify_resume(regs);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:208 @ __visible inline void prepare_exit_to_us
 	if (unlikely(cached_flags & EXIT_TO_USERMODE_LOOP_FLAGS))
 		exit_to_usermode_loop(regs, cached_flags);
 
+	/* Reload ti->flags; we may have rescheduled above. */
+	cached_flags = READ_ONCE(ti->flags);
+
+	fpregs_assert_state_consistent();
+	if (unlikely(cached_flags & _TIF_NEED_FPU_LOAD))
+		switch_fpu_return();
+
 #ifdef CONFIG_COMPAT
 	/*
 	 * Compat syscalls set TS_COMPAT.  Make sure we clear it before
Index: linux-5.0.19-rt11/arch/x86/entry/entry_32.S
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/entry/entry_32.S
+++ linux-5.0.19-rt11/arch/x86/entry/entry_32.S
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:772 @ END(ret_from_exception)
 ENTRY(resume_kernel)
 	DISABLE_INTERRUPTS(CLBR_ANY)
 .Lneed_resched:
+	# preempt count == 0 + NEED_RS set?
 	cmpl	$0, PER_CPU_VAR(__preempt_count)
+#ifndef CONFIG_PREEMPT_LAZY
 	jnz	restore_all_kernel
+#else
+	jz	test_int_off
+
+	# at least preempt count == 0 ?
+	cmpl $_PREEMPT_ENABLED,PER_CPU_VAR(__preempt_count)
+	jne	restore_all_kernel
+
+	movl	PER_CPU_VAR(current_task), %ebp
+	cmpl	$0,TASK_TI_preempt_lazy_count(%ebp)	# non-zero preempt_lazy_count ?
+	jnz	restore_all_kernel
+
+	testl	$_TIF_NEED_RESCHED_LAZY, TASK_TI_flags(%ebp)
+	jz	restore_all_kernel
+test_int_off:
+#endif
 	testl	$X86_EFLAGS_IF, PT_EFLAGS(%esp)	# interrupts off (exception path) ?
 	jz	restore_all_kernel
 	call	preempt_schedule_irq
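
Spelled out in C, the lazy-preemption test that the assembly above adds to the
interrupt-return path is roughly the following (a sketch only; the EFLAGS.IF
check and the retry loop are omitted, and the helpers are the generic ones from
the preempt/thread_info headers):

  	if (preempt_count() == 0 && tif_need_resched())
  		preempt_schedule_irq();			/* regular preemption */
  	else if (preempt_count() == 0 &&
  		 current_thread_info()->preempt_lazy_count == 0 &&
  		 test_thread_flag(TIF_NEED_RESCHED_LAZY))
  		preempt_schedule_irq();			/* lazy preemption */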
Index: linux-5.0.19-rt11/arch/x86/entry/entry_64.S
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/entry/entry_64.S
+++ linux-5.0.19-rt11/arch/x86/entry/entry_64.S
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:651 @ retint_kernel:
 	btl	$9, EFLAGS(%rsp)		/* were interrupts off? */
 	jnc	1f
 0:	cmpl	$0, PER_CPU_VAR(__preempt_count)
+#ifndef CONFIG_PREEMPT_LAZY
 	jnz	1f
+#else
+	jz	do_preempt_schedule_irq
+
+	# at least preempt count == 0 ?
+	cmpl $_PREEMPT_ENABLED,PER_CPU_VAR(__preempt_count)
+	jnz	1f
+
+	movq	PER_CPU_VAR(current_task), %rcx
+	cmpl	$0, TASK_TI_preempt_lazy_count(%rcx)
+	jnz	1f
+
+	btl	$TIF_NEED_RESCHED_LAZY,TASK_TI_flags(%rcx)
+	jnc	1f
+do_preempt_schedule_irq:
+#endif
 	call	preempt_schedule_irq
 	jmp	0b
 1:
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1054 @ bad_gs:
 	jmp	2b
 	.previous
 
+#ifndef CONFIG_PREEMPT_RT_FULL
 /* Call softirq on interrupt stack. Interrupts are off. */
 ENTRY(do_softirq_own_stack)
 	pushq	%rbp
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1065 @ ENTRY(do_softirq_own_stack)
 	leaveq
 	ret
 ENDPROC(do_softirq_own_stack)
+#endif
 
 #ifdef CONFIG_XEN_PV
 idtentry hypervisor_callback xen_do_hypervisor_callback has_error_code=0
Index: linux-5.0.19-rt11/arch/x86/ia32/ia32_signal.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/ia32/ia32_signal.c
+++ linux-5.0.19-rt11/arch/x86/ia32/ia32_signal.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:224 @ static void __user *get_sigframe(struct
 				 size_t frame_size,
 				 void __user **fpstate)
 {
-	struct fpu *fpu = &current->thread.fpu;
-	unsigned long sp;
+	unsigned long sp, fx_aligned, math_size;
 
 	/* Default to using normal stack */
 	sp = regs->sp;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:238 @ static void __user *get_sigframe(struct
 		 ksig->ka.sa.sa_restorer)
 		sp = (unsigned long) ksig->ka.sa.sa_restorer;
 
-	if (fpu->initialized) {
-		unsigned long fx_aligned, math_size;
-
-		sp = fpu__alloc_mathframe(sp, 1, &fx_aligned, &math_size);
-		*fpstate = (struct _fpstate_32 __user *) sp;
-		if (copy_fpstate_to_sigframe(*fpstate, (void __user *)fx_aligned,
-				    math_size) < 0)
-			return (void __user *) -1L;
-	}
+	sp = fpu__alloc_mathframe(sp, 1, &fx_aligned, &math_size);
+	*fpstate = (struct _fpstate_32 __user *) sp;
+	if (copy_fpstate_to_sigframe(*fpstate, (void __user *)fx_aligned,
+				     math_size) < 0)
+		return (void __user *) -1L;
 
 	sp -= frame_size;
 	/* Align the stack pointer according to the i386 ABI,
Index: linux-5.0.19-rt11/arch/x86/include/asm/fpu/api.h
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/include/asm/fpu/api.h
+++ linux-5.0.19-rt11/arch/x86/include/asm/fpu/api.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:13 @
 
 #ifndef _ASM_X86_FPU_API_H
 #define _ASM_X86_FPU_API_H
+#include <linux/bottom_half.h>
 
 /*
  * Use kernel_fpu_begin/end() if you intend to use FPU in kernel context. It
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:25 @
 extern void kernel_fpu_begin(void);
 extern void kernel_fpu_end(void);
 extern bool irq_fpu_usable(void);
+extern void fpregs_mark_activate(void);
+extern void kernel_fpu_resched(void);
+
+/*
+ * Use fpregs_lock() while editing CPU's FPU registers or fpu->state.
+ * A context switch will (and a softirq might) save the CPU's FPU registers to
+ * fpu->state and set TIF_NEED_FPU_LOAD, leaving the CPU's FPU registers in a
+ * random state.
+ */
+static inline void fpregs_lock(void)
+{
+	preempt_disable();
+	local_bh_disable();
+}
+
+static inline void fpregs_unlock(void)
+{
+	local_bh_enable();
+	preempt_enable();
+}
+
+#ifdef CONFIG_X86_DEBUG_FPU
+extern void fpregs_assert_state_consistent(void);
+#else
+static inline void fpregs_assert_state_consistent(void) { }
+#endif
+
+/*
+ * Load the task FPU state before returning to userspace.
+ */
+extern void switch_fpu_return(void);
 
 /*
  * Query the presence of one or more xfeatures. Works on any legacy CPU as well.
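
fpregs_lock()/fpregs_unlock() above are meant to bracket any modification of
the current task's in-memory FPU state or of the live FPU registers. A hedged
usage sketch (modify_xstate() is a placeholder for whatever edit is made):

  	struct fpu *fpu = &current->thread.fpu;

  	fpregs_lock();
  	/*
  	 * Neither a context switch nor a softirq can save or reload the
  	 * FPU registers behind our back while the lock is held.
  	 */
  	modify_xstate(&fpu->state.xsave);

  	/* If the registers are live, make them match the edited copy. */
  	if (!test_thread_flag(TIF_NEED_FPU_LOAD))
  		copy_kernel_to_fpregs(&fpu->state);
  	fpregs_unlock();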
Index: linux-5.0.19-rt11/arch/x86/include/asm/fpu/internal.h
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/include/asm/fpu/internal.h
+++ linux-5.0.19-rt11/arch/x86/include/asm/fpu/internal.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:17 @
 #include <linux/compat.h>
 #include <linux/sched.h>
 #include <linux/slab.h>
+#include <linux/mm.h>
 
 #include <asm/user.h>
 #include <asm/fpu/api.h>
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:28 @
 /*
  * High level FPU state handling functions:
  */
-extern void fpu__initialize(struct fpu *fpu);
 extern void fpu__prepare_read(struct fpu *fpu);
 extern void fpu__prepare_write(struct fpu *fpu);
 extern void fpu__save(struct fpu *fpu);
-extern void fpu__restore(struct fpu *fpu);
 extern int  fpu__restore_sig(void __user *buf, int ia32_frame);
 extern void fpu__drop(struct fpu *fpu);
-extern int  fpu__copy(struct fpu *dst_fpu, struct fpu *src_fpu);
+extern int  fpu__copy(struct task_struct *dst, struct task_struct *src);
 extern void fpu__clear(struct fpu *fpu);
 extern int  fpu__exception_code(struct fpu *fpu, int trap_nr);
 extern int  dump_fpu(struct pt_regs *ptregs, struct user_i387_struct *fpstate);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:124 @ extern void fpstate_sanitize_xstate(stru
 	err;								\
 })
 
+#define kernel_insn_err(insn, output, input...)				\
+({									\
+	int err;							\
+	asm volatile("1:" #insn "\n\t"					\
+		     "2:\n"						\
+		     ".section .fixup,\"ax\"\n"				\
+		     "3:  movl $-1,%[err]\n"				\
+		     "    jmp  2b\n"					\
+		     ".previous\n"					\
+		     _ASM_EXTABLE(1b, 3b)				\
+		     : [err] "=r" (err), output				\
+		     : "0"(0), input);					\
+	err;								\
+})
+
 #define kernel_insn(insn, output, input...)				\
 	asm volatile("1:" #insn "\n\t"					\
 		     "2:\n"						\
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:175 @ static inline void copy_kernel_to_fxregs
 	}
 }
 
+static inline int copy_kernel_to_fxregs_err(struct fxregs_state *fx)
+{
+	if (IS_ENABLED(CONFIG_X86_32))
+		return kernel_insn_err(fxrstor %[fx], "=m" (*fx), [fx] "m" (*fx));
+	else
+		return kernel_insn_err(fxrstorq %[fx], "=m" (*fx), [fx] "m" (*fx));
+}
+
 static inline int copy_user_to_fxregs(struct fxregs_state __user *fx)
 {
 	if (IS_ENABLED(CONFIG_X86_32))
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:200 @ static inline void copy_kernel_to_fregs(
 	kernel_insn(frstor %[fx], "=m" (*fx), [fx] "m" (*fx));
 }
 
+static inline int copy_kernel_to_fregs_err(struct fregs_state *fx)
+{
+	return kernel_insn_err(frstor %[fx], "=m" (*fx), [fx] "m" (*fx));
+}
+
 static inline int copy_user_to_fregs(struct fregs_state __user *fx)
 {
 	return user_insn(frstor %[fx], "=m" (*fx), [fx] "m" (*fx));
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:431 @ static inline int copy_user_to_xregs(str
 }
 
 /*
+ * Restore xstate from kernel space xsave area, return an error code instead
+ * of an exception.
+ */
+static inline int copy_kernel_to_xregs_err(struct xregs_state *xstate, u64 mask)
+{
+	u32 lmask = mask;
+	u32 hmask = mask >> 32;
+	int err;
+
+	XSTATE_OP(XRSTOR, xstate, lmask, hmask, err);
+
+	return err;
+}
+
+/*
  * These must be called with preempt disabled. Returns
  * 'true' if the FPU state is still intact and we can
  * keep registers active.
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:562 @ static inline void fpregs_activate(struc
 	trace_x86_fpu_regs_activated(fpu);
 }
 
+static inline void __fpregs_load_activate(void)
+{
+	struct fpu *fpu = &current->thread.fpu;
+	int cpu = smp_processor_id();
+
+	if (WARN_ON_ONCE(current->mm == NULL))
+		return;
+
+	if (!fpregs_state_valid(fpu, cpu)) {
+		copy_kernel_to_fpregs(&fpu->state);
+		fpregs_activate(fpu);
+		fpu->last_cpu = cpu;
+	}
+	clear_thread_flag(TIF_NEED_FPU_LOAD);
+}
+
 /*
  * FPU state switching for scheduling.
  *
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:586 @ static inline void fpregs_activate(struc
  *  - switch_fpu_prepare() saves the old state.
  *    This is done within the context of the old process.
  *
- *  - switch_fpu_finish() restores the new state as
- *    necessary.
+ *  - switch_fpu_finish() sets TIF_NEED_FPU_LOAD; the floating point state
+ *    will get loaded on return to userspace, or when the kernel needs it.
+ *
+ * The FPU context is only stored/restored for user tasks and ->mm is used to
+ * distinguish between kernel and user threads.
+ *
+ * If TIF_NEED_FPU_LOAD is cleared then the CPU's FPU registers are saved in
+ * the current thread's FPU registers state.
+ * If TIF_NEED_FPU_LOAD is set then CPU's FPU registers may not hold current()'s
+ * FPU registers. It is required to load the registers before returning to
+ * userland or using the content otherwise.
  */
 static inline void
 switch_fpu_prepare(struct fpu *old_fpu, int cpu)
 {
-	if (static_cpu_has(X86_FEATURE_FPU) && old_fpu->initialized) {
+	if (static_cpu_has(X86_FEATURE_FPU) && current->mm) {
 		if (!copy_fpregs_to_fpstate(old_fpu))
 			old_fpu->last_cpu = -1;
 		else
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:609 @ switch_fpu_prepare(struct fpu *old_fpu,
 
 		/* But leave fpu_fpregs_owner_ctx! */
 		trace_x86_fpu_regs_deactivated(old_fpu);
-	} else
-		old_fpu->last_cpu = -1;
+	}
 }
 
 /*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:617 @ switch_fpu_prepare(struct fpu *old_fpu,
  */
 
 /*
- * Set up the userspace FPU context for the new task, if the task
- * has used the FPU.
+ * Load PKRU from the FPU context if available. Delay loading of the complete
+ * FPU state until the return to userland.
  */
-static inline void switch_fpu_finish(struct fpu *new_fpu, int cpu)
+static inline void switch_fpu_finish(struct fpu *new_fpu)
 {
-	bool preload = static_cpu_has(X86_FEATURE_FPU) &&
-		       new_fpu->initialized;
+	struct pkru_state *pk;
+	u32 pkru_val = init_pkru_value;
 
-	if (preload) {
-		if (!fpregs_state_valid(new_fpu, cpu))
-			copy_kernel_to_fpregs(&new_fpu->state);
-		fpregs_activate(new_fpu);
-	}
-}
+	if (!static_cpu_has(X86_FEATURE_FPU))
+		return;
 
-/*
- * Needs to be preemption-safe.
- *
- * NOTE! user_fpu_begin() must be used only immediately before restoring
- * the save state. It does not do any saving/restoring on its own. In
- * lazy FPU mode, it is just an optimization to avoid a #NM exception,
- * the task can lose the FPU right after preempt_enable().
- */
-static inline void user_fpu_begin(void)
-{
-	struct fpu *fpu = &current->thread.fpu;
+	set_thread_flag(TIF_NEED_FPU_LOAD);
+
+	if (!cpu_feature_enabled(X86_FEATURE_OSPKE))
+		return;
 
-	preempt_disable();
-	fpregs_activate(fpu);
-	preempt_enable();
+	/*
+	 * PKRU state is switched eagerly because it needs to be valid before we
+	 * return to userland e.g. for a copy_to_user() operation.
+	 */
+	if (current->mm) {
+		pk = get_xsave_addr(&new_fpu->state.xsave, XFEATURE_PKRU);
+		if (pk)
+			pkru_val = pk->pkru;
+	}
+	__write_pkru(pkru_val);
 }
 
 /*
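
Put together, the deferred-restore scheme above means a context switch only
saves the outgoing task's registers; the incoming task is merely flagged, and
its registers are loaded on the way back to userspace. The overall flow,
sketched with the helpers introduced in this patch (simplified, not the literal
switch code):

  	/* in the context switch (__switch_to()): */
  	switch_fpu_prepare(&prev->thread.fpu, cpu);	/* save prev's live registers */
  	/* ... switch stacks, segments, ... */
  	switch_fpu_finish(&next->thread.fpu);		/* set TIF_NEED_FPU_LOAD, load PKRU */

  	/* later, in prepare_exit_to_usermode(): */
  	fpregs_assert_state_consistent();
  	if (unlikely(test_thread_flag(TIF_NEED_FPU_LOAD)))
  		switch_fpu_return();			/* load the task's FPU state from memory */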
Index: linux-5.0.19-rt11/arch/x86/include/asm/fpu/signal.h
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/include/asm/fpu/signal.h
+++ linux-5.0.19-rt11/arch/x86/include/asm/fpu/signal.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:25 @ int ia32_setup_frame(int sig, struct ksi
 
 extern void convert_from_fxsr(struct user_i387_ia32_struct *env,
 			      struct task_struct *tsk);
-extern void convert_to_fxsr(struct task_struct *tsk,
+extern void convert_to_fxsr(struct fxregs_state *fxsave,
 			    const struct user_i387_ia32_struct *env);
 
 unsigned long
Index: linux-5.0.19-rt11/arch/x86/include/asm/fpu/types.h
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/include/asm/fpu/types.h
+++ linux-5.0.19-rt11/arch/x86/include/asm/fpu/types.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:297 @ struct fpu {
 	unsigned int			last_cpu;
 
 	/*
-	 * @initialized:
-	 *
-	 * This flag indicates whether this context is initialized: if the task
-	 * is not running then we can restore from this context, if the task
-	 * is running then we should save into this context.
-	 */
-	unsigned char			initialized;
-
-	/*
 	 * @state:
 	 *
 	 * In-memory copy of all FPU registers that we save/restore
Index: linux-5.0.19-rt11/arch/x86/include/asm/fpu/xstate.h
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/include/asm/fpu/xstate.h
+++ linux-5.0.19-rt11/arch/x86/include/asm/fpu/xstate.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:8 @
 #include <linux/types.h>
 #include <asm/processor.h>
 #include <linux/uaccess.h>
+#include <asm/user.h>
 
 /* Bit 63 of XCR0 is reserved for future expansion */
 #define XFEATURE_MASK_EXTEND	(~(XFEATURE_MASK_FPSSE | (1ULL << 63)))
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:50 @ extern void __init update_regset_xstate_
 					     u64 xstate_mask);
 
 void fpu__xstate_clear_all_cpu_caps(void);
-void *get_xsave_addr(struct xregs_state *xsave, int xstate);
-const void *get_xsave_field_ptr(int xstate_field);
+void *get_xsave_addr(struct xregs_state *xsave, int xfeature_nr);
+const void *get_xsave_field_ptr(int xfeature_nr);
 int using_compacted_format(void);
 int copy_xstate_to_kernel(void *kbuf, struct xregs_state *xsave, unsigned int offset, unsigned int size);
 int copy_xstate_to_user(void __user *ubuf, struct xregs_state *xsave, unsigned int offset, unsigned int size);
Index: linux-5.0.19-rt11/arch/x86/include/asm/pgtable.h
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/include/asm/pgtable.h
+++ linux-5.0.19-rt11/arch/x86/include/asm/pgtable.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:26 @
 
 #ifndef __ASSEMBLY__
 #include <asm/x86_init.h>
+#include <asm/fpu/xstate.h>
+#include <asm/fpu/api.h>
 
 extern pgd_t early_top_pgt[PTRS_PER_PGD];
 int __init __early_make_pgtable(unsigned long address, pmdval_t pmd);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:132 @ static inline int pte_dirty(pte_t pte)
 static inline u32 read_pkru(void)
 {
 	if (boot_cpu_has(X86_FEATURE_OSPKE))
-		return __read_pkru();
+		return __read_pkru_ins();
 	return 0;
 }
 
 static inline void write_pkru(u32 pkru)
 {
-	if (boot_cpu_has(X86_FEATURE_OSPKE))
-		__write_pkru(pkru);
+	struct pkru_state *pk;
+
+	if (!boot_cpu_has(X86_FEATURE_OSPKE))
+		return;
+
+	pk = get_xsave_addr(&current->thread.fpu.state.xsave, XFEATURE_PKRU);
+	/*
+	 * The PKRU value in xstate needs to be in sync with the value that is
+	 * written to the CPU. The FPU restore on return to userland would
+	 * otherwise load the previous value again.
+	 */
+	fpregs_lock();
+	if (pk)
+		pk->pkru = pkru;
+	__write_pkru(pkru);
+	fpregs_unlock();
 }
 
 static inline int pte_young(pte_t pte)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1374 @ static inline pmd_t pmd_swp_clear_soft_d
 #define PKRU_WD_BIT 0x2
 #define PKRU_BITS_PER_PKEY 2
 
+#ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
+extern u32 init_pkru_value;
+#else
+#define init_pkru_value	0
+#endif
+
 static inline bool __pkru_allows_read(u32 pkru, u16 pkey)
 {
 	int pkru_pkey_bits = pkey * PKRU_BITS_PER_PKEY;
Index: linux-5.0.19-rt11/arch/x86/include/asm/preempt.h
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/include/asm/preempt.h
+++ linux-5.0.19-rt11/arch/x86/include/asm/preempt.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:92 @ static __always_inline void __preempt_co
  * a decrement which hits zero means we have no preempt_count and should
  * reschedule.
  */
-static __always_inline bool __preempt_count_dec_and_test(void)
+static __always_inline bool ____preempt_count_dec_and_test(void)
 {
 	return GEN_UNARY_RMWcc("decl", __preempt_count, e, __percpu_arg([var]));
 }
 
+static __always_inline bool __preempt_count_dec_and_test(void)
+{
+	if (____preempt_count_dec_and_test())
+		return true;
+#ifdef CONFIG_PREEMPT_LAZY
+	if (current_thread_info()->preempt_lazy_count)
+		return false;
+	return test_thread_flag(TIF_NEED_RESCHED_LAZY);
+#else
+	return false;
+#endif
+}
+
 /*
  * Returns true when we need to resched and can (barring IRQ state).
  */
 static __always_inline bool should_resched(int preempt_offset)
 {
+#ifdef CONFIG_PREEMPT_LAZY
+	u32 tmp;
+
+	tmp = raw_cpu_read_4(__preempt_count);
+	if (tmp == preempt_offset)
+		return true;
+
+	/* preempt count == 0 ? */
+	tmp &= ~PREEMPT_NEED_RESCHED;
+	if (tmp != preempt_offset)
+		return false;
+	if (current_thread_info()->preempt_lazy_count)
+		return false;
+	return test_thread_flag(TIF_NEED_RESCHED_LAZY);
+#else
 	return unlikely(raw_cpu_read_4(__preempt_count) == preempt_offset);
+#endif
 }
 
 #ifdef CONFIG_PREEMPT
Index: linux-5.0.19-rt11/arch/x86/include/asm/signal.h
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/include/asm/signal.h
+++ linux-5.0.19-rt11/arch/x86/include/asm/signal.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:31 @ typedef struct {
 #define SA_IA32_ABI	0x02000000u
 #define SA_X32_ABI	0x01000000u
 
+/*
+ * Because some traps use the IST stack, we must keep preemption
+ * disabled while calling do_trap(), but do_trap() may call
+ * force_sig_info() which will grab the signal spin_locks for the
+ * task, which in PREEMPT_RT_FULL are mutexes.  By defining
+ * ARCH_RT_DELAYS_SIGNAL_SEND, force_sig_info() will set
+ * TIF_NOTIFY_RESUME and set up the signal to be sent on exit of the
+ * trap.
+ */
+#if defined(CONFIG_PREEMPT_RT_FULL)
+#define ARCH_RT_DELAYS_SIGNAL_SEND
+#endif
+
 #ifndef CONFIG_COMPAT
 typedef sigset_t compat_sigset_t;
 #endif
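
The consumer side of this mechanism is the exit_to_usermode_loop() hunk in
arch/x86/entry/common.c above; the producer side lives in the RT tree's
kernel/signal.c, which is not part of this excerpt. When a trap handler running
in atomic context tries to force a signal there, it does roughly the following
(a heavily simplified sketch):

  #ifdef ARCH_RT_DELAYS_SIGNAL_SEND
  	if (in_atomic()) {
  		/* Can't take the (now sleeping) sighand lock here; defer. */
  		current->forced_info = *info;
  		set_thread_flag(TIF_NOTIFY_RESUME);
  		return 0;
  	}
  #endif
  	/* otherwise deliver the signal immediately, as mainline does */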
Index: linux-5.0.19-rt11/arch/x86/include/asm/smap.h
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/include/asm/smap.h
+++ linux-5.0.19-rt11/arch/x86/include/asm/smap.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:61 @ static __always_inline void stac(void)
 	alternative("", __stringify(__ASM_STAC), X86_FEATURE_SMAP);
 }
 
+static __always_inline unsigned long smap_save(void)
+{
+	unsigned long flags;
+
+	asm volatile (ALTERNATIVE("", "pushf; pop %0; " __stringify(__ASM_CLAC),
+				  X86_FEATURE_SMAP)
+		      : "=rm" (flags) : : "memory", "cc");
+
+	return flags;
+}
+
+static __always_inline void smap_restore(unsigned long flags)
+{
+	asm volatile (ALTERNATIVE("", "push %0; popf", X86_FEATURE_SMAP)
+		      : : "g" (flags) : "memory", "cc");
+}
+
 /* These macros can be used in asm() statements */
 #define ASM_CLAC \
 	ALTERNATIVE("", __stringify(__ASM_CLAC), X86_FEATURE_SMAP)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:89 @ static __always_inline void stac(void)
 static inline void clac(void) { }
 static inline void stac(void) { }
 
+static inline unsigned long smap_save(void) { return 0; }
+static inline void smap_restore(unsigned long flags) { }
+
 #define ASM_CLAC
 #define ASM_STAC
 
Index: linux-5.0.19-rt11/arch/x86/include/asm/special_insns.h
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/include/asm/special_insns.h
+++ linux-5.0.19-rt11/arch/x86/include/asm/special_insns.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:95 @ static inline void native_write_cr8(unsi
 #endif
 
 #ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
-static inline u32 __read_pkru(void)
+static inline u32 __read_pkru_ins(void)
 {
 	u32 ecx = 0;
 	u32 edx, pkru;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:110 @ static inline u32 __read_pkru(void)
 	return pkru;
 }
 
-static inline void __write_pkru(u32 pkru)
+static inline void __write_pkru_ins(u32 pkru)
 {
 	u32 ecx = 0, edx = 0;
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:121 @ static inline void __write_pkru(u32 pkru
 	asm volatile(".byte 0x0f,0x01,0xef\n\t"
 		     : : "a" (pkru), "c"(ecx), "d"(edx));
 }
+
+static inline void __write_pkru(u32 pkru)
+{
+	/*
+	 * WRPKRU is relatively expensive compared to RDPKRU.
+	 * Avoid WRPKRU when it would not change the value.
+	 */
+	if (pkru == __read_pkru_ins())
+		return;
+	__write_pkru_ins(pkru);
+}
+
 #else
-static inline u32 __read_pkru(void)
+static inline u32 __read_pkru_ins(void)
 {
 	return 0;
 }
Index: linux-5.0.19-rt11/arch/x86/include/asm/stackprotector.h
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/include/asm/stackprotector.h
+++ linux-5.0.19-rt11/arch/x86/include/asm/stackprotector.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:63 @
  */
 static __always_inline void boot_init_stack_canary(void)
 {
-	u64 canary;
+	u64 uninitialized_var(canary);
 	u64 tsc;
 
 #ifdef CONFIG_X86_64
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:74 @ static __always_inline void boot_init_st
 	 * of randomness. The TSC only matters for very early init,
 	 * there it already has some randomness on most systems. Later
 	 * on during the bootup the random pool has true entropy too.
+	 * For preempt-rt we need to weaken the randomness a bit, as
+	 * we can't call into the random generator from atomic context
+	 * due to locking constraints. We just leave the canary
+	 * uninitialized and use the TSC-based randomness on top of it.
 	 */
+#ifndef CONFIG_PREEMPT_RT_FULL
 	get_random_bytes(&canary, sizeof(canary));
+#endif
 	tsc = rdtsc();
 	canary += tsc + (tsc << 32UL);
 	canary &= CANARY_MASK;
Index: linux-5.0.19-rt11/arch/x86/include/asm/thread_info.h
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/include/asm/thread_info.h
+++ linux-5.0.19-rt11/arch/x86/include/asm/thread_info.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:59 @ struct task_struct;
 struct thread_info {
 	unsigned long		flags;		/* low level flags */
 	u32			status;		/* thread synchronous flags */
+	int			preempt_lazy_count;	/* 0 => lazy preemptable
+							  <0 => BUG */
 };
 
 #define INIT_THREAD_INFO(tsk)			\
 {						\
 	.flags		= 0,			\
+	.preempt_lazy_count = 0,		\
 }
 
 #else /* !__ASSEMBLY__ */
 
 #include <asm/asm-offsets.h>
 
+#define GET_THREAD_INFO(reg) \
+	_ASM_MOV PER_CPU_VAR(cpu_current_top_of_stack),reg ; \
+	_ASM_SUB $(THREAD_SIZE),reg ;
+
 #endif
 
 /*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:98 @ struct thread_info {
 #define TIF_USER_RETURN_NOTIFY	11	/* notify kernel of userspace return */
 #define TIF_UPROBE		12	/* breakpointed or singlestepping */
 #define TIF_PATCH_PENDING	13	/* pending live patching update */
+#define TIF_NEED_FPU_LOAD	14	/* load FPU on return to userspace */
 #define TIF_NOCPUID		15	/* CPUID is not accessible in userland */
 #define TIF_NOTSC		16	/* TSC is not accessible in userland */
 #define TIF_IA32		17	/* IA32 compatibility process */
+#define TIF_NEED_RESCHED_LAZY	18	/* lazy rescheduling necessary */
 #define TIF_NOHZ		19	/* in adaptive nohz mode */
 #define TIF_MEMDIE		20	/* is terminating due to OOM killer */
 #define TIF_POLLING_NRFLAG	21	/* idle is polling for TIF_NEED_RESCHED */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:129 @ struct thread_info {
 #define _TIF_USER_RETURN_NOTIFY	(1 << TIF_USER_RETURN_NOTIFY)
 #define _TIF_UPROBE		(1 << TIF_UPROBE)
 #define _TIF_PATCH_PENDING	(1 << TIF_PATCH_PENDING)
+#define _TIF_NEED_FPU_LOAD	(1 << TIF_NEED_FPU_LOAD)
 #define _TIF_NOCPUID		(1 << TIF_NOCPUID)
 #define _TIF_NOTSC		(1 << TIF_NOTSC)
 #define _TIF_IA32		(1 << TIF_IA32)
+#define _TIF_NEED_RESCHED_LAZY	(1 << TIF_NEED_RESCHED_LAZY)
 #define _TIF_NOHZ		(1 << TIF_NOHZ)
 #define _TIF_POLLING_NRFLAG	(1 << TIF_POLLING_NRFLAG)
 #define _TIF_IO_BITMAP		(1 << TIF_IO_BITMAP)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:171 @ struct thread_info {
 #define _TIF_WORK_CTXSW_PREV (_TIF_WORK_CTXSW|_TIF_USER_RETURN_NOTIFY)
 #define _TIF_WORK_CTXSW_NEXT (_TIF_WORK_CTXSW)
 
+#define _TIF_NEED_RESCHED_MASK	(_TIF_NEED_RESCHED | _TIF_NEED_RESCHED_LAZY)
+
 #define STACK_WARN		(THREAD_SIZE/8)
 
 /*
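TIF_NEED_RESCHED_LAZY and the combined _TIF_NEED_RESCHED_MASK added above are
meant to be tested together on the preemption and exit-to-user paths. A
minimal sketch of such a check (illustrative only, not taken from this patch):

	/* reschedule if either an immediate or a lazy resched was requested */
	if (current_thread_info()->flags & _TIF_NEED_RESCHED_MASK)
		schedule();
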
Index: linux-5.0.19-rt11/arch/x86/include/asm/trace/fpu.h
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/include/asm/trace/fpu.h
+++ linux-5.0.19-rt11/arch/x86/include/asm/trace/fpu.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:16 @ DECLARE_EVENT_CLASS(x86_fpu,
 
 	TP_STRUCT__entry(
 		__field(struct fpu *, fpu)
-		__field(bool, initialized)
+		__field(bool, load_fpu)
 		__field(u64, xfeatures)
 		__field(u64, xcomp_bv)
 		),
 
 	TP_fast_assign(
 		__entry->fpu		= fpu;
-		__entry->initialized	= fpu->initialized;
+		__entry->load_fpu	= test_thread_flag(TIF_NEED_FPU_LOAD);
 		if (boot_cpu_has(X86_FEATURE_OSXSAVE)) {
 			__entry->xfeatures = fpu->state.xsave.header.xfeatures;
 			__entry->xcomp_bv  = fpu->state.xsave.header.xcomp_bv;
 		}
 	),
-	TP_printk("x86/fpu: %p initialized: %d xfeatures: %llx xcomp_bv: %llx",
+	TP_printk("x86/fpu: %p load: %d xfeatures: %llx xcomp_bv: %llx",
 			__entry->fpu,
-			__entry->initialized,
+			__entry->load_fpu,
 			__entry->xfeatures,
 			__entry->xcomp_bv
 	)
Index: linux-5.0.19-rt11/arch/x86/include/asm/uaccess.h
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/include/asm/uaccess.h
+++ linux-5.0.19-rt11/arch/x86/include/asm/uaccess.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:723 @ static __must_check inline bool user_acc
 #define user_access_begin(a,b)	user_access_begin(a,b)
 #define user_access_end()	__uaccess_end()
 
+#define user_access_save()	smap_save()
+#define user_access_restore(x)	smap_restore(x)
+
 #define unsafe_put_user(x, ptr, label)	\
 	__put_user_size((__typeof__(*(ptr)))(x), (ptr), sizeof(*(ptr)), label)
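user_access_save()/user_access_restore() map onto the smap_save()/smap_restore()
helpers introduced earlier in this series. A hedged usage sketch, where
instrumentation_hook() is a made-up stand-in for code that must not run with a
user-access window open:

	unsigned long flags;

	flags = user_access_save();	/* close any open user access (CLAC) */
	instrumentation_hook();
	user_access_restore(flags);	/* put the previous SMAP state back */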
 
Index: linux-5.0.19-rt11/arch/x86/kernel/apic/io_apic.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/kernel/apic/io_apic.c
+++ linux-5.0.19-rt11/arch/x86/kernel/apic/io_apic.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1725 @ static bool io_apic_level_ack_pending(st
 	return false;
 }
 
-static inline bool ioapic_irqd_mask(struct irq_data *data)
+static inline bool ioapic_prepare_move(struct irq_data *data)
 {
 	/* If we are moving the irq we need to mask it */
 	if (unlikely(irqd_is_setaffinity_pending(data))) {
-		mask_ioapic_irq(data);
+		if (!irqd_irq_masked(data))
+			mask_ioapic_irq(data);
 		return true;
 	}
 	return false;
 }
 
-static inline void ioapic_irqd_unmask(struct irq_data *data, bool masked)
+static inline void ioapic_finish_move(struct irq_data *data, bool moveit)
 {
-	if (unlikely(masked)) {
+	if (unlikely(moveit)) {
 		/* Only migrate the irq if the ack has been received.
 		 *
 		 * On rare occasions the broadcast level triggered ack gets
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1767 @ static inline void ioapic_irqd_unmask(st
 		 */
 		if (!io_apic_level_ack_pending(data->chip_data))
 			irq_move_masked_irq(data);
-		unmask_ioapic_irq(data);
+		/* If the irq is masked in the core, leave it */
+		if (!irqd_irq_masked(data))
+			unmask_ioapic_irq(data);
 	}
 }
 #else
-static inline bool ioapic_irqd_mask(struct irq_data *data)
+static inline bool ioapic_prepare_move(struct irq_data *data)
 {
 	return false;
 }
-static inline void ioapic_irqd_unmask(struct irq_data *data, bool masked)
+static inline void ioapic_finish_move(struct irq_data *data, bool moveit)
 {
 }
 #endif
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1786 @ static void ioapic_ack_level(struct irq_
 {
 	struct irq_cfg *cfg = irqd_cfg(irq_data);
 	unsigned long v;
-	bool masked;
+	bool moveit;
 	int i;
 
 	irq_complete_move(cfg);
-	masked = ioapic_irqd_mask(irq_data);
+	moveit = ioapic_prepare_move(irq_data);
 
 	/*
 	 * It appears there is an erratum which affects at least version 0x11
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1845 @ static void ioapic_ack_level(struct irq_
 		eoi_ioapic_pin(cfg->vector, irq_data->chip_data);
 	}
 
-	ioapic_irqd_unmask(irq_data, masked);
+	ioapic_finish_move(irq_data, moveit);
 }
 
 static void ioapic_ir_ack_level(struct irq_data *irq_data)
Index: linux-5.0.19-rt11/arch/x86/kernel/asm-offsets.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/kernel/asm-offsets.c
+++ linux-5.0.19-rt11/arch/x86/kernel/asm-offsets.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:42 @ static void __used common(void)
 
 	BLANK();
 	OFFSET(TASK_TI_flags, task_struct, thread_info.flags);
+	OFFSET(TASK_TI_preempt_lazy_count, task_struct, thread_info.preempt_lazy_count);
 	OFFSET(TASK_addr_limit, task_struct, thread.addr_limit);
 
 	BLANK();
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:96 @ static void __used common(void)
 
 	BLANK();
 	DEFINE(PTREGS_SIZE, sizeof(struct pt_regs));
+	DEFINE(_PREEMPT_ENABLED, PREEMPT_ENABLED);
 
 	/* TLB state for the entry code */
 	OFFSET(TLB_STATE_user_pcid_flush_mask, tlb_state, user_pcid_flush_mask);
Index: linux-5.0.19-rt11/arch/x86/kernel/cpu/common.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/kernel/cpu/common.c
+++ linux-5.0.19-rt11/arch/x86/kernel/cpu/common.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:375 @ static bool pku_disabled;
 
 static __always_inline void setup_pku(struct cpuinfo_x86 *c)
 {
+	struct pkru_state *pk;
+
 	/* check the boot processor, plus compile options for PKU: */
 	if (!cpu_feature_enabled(X86_FEATURE_PKU))
 		return;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:387 @ static __always_inline void setup_pku(st
 		return;
 
 	cr4_set_bits(X86_CR4_PKE);
+	pk = get_xsave_addr(&init_fpstate.xsave, XFEATURE_PKRU);
+	if (pk)
+		pk->pkru = init_pkru_value;
 	/*
 	 * Seting X86_CR4_PKE will cause the X86_FEATURE_OSPKE
 	 * cpuid bit to be set.  We need to ensure that we
Index: linux-5.0.19-rt11/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
+++ linux-5.0.19-rt11/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1513 @ static int pseudo_lock_dev_mmap(struct f
 	 * may be scheduled elsewhere and invalidate entries in the
 	 * pseudo-locked region.
 	 */
-	if (!cpumask_subset(&current->cpus_allowed, &plr->d->cpu_mask)) {
+	if (!cpumask_subset(current->cpus_ptr, &plr->d->cpu_mask)) {
 		mutex_unlock(&rdtgroup_mutex);
 		return -EINVAL;
 	}
Index: linux-5.0.19-rt11/arch/x86/kernel/fpu/core.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/kernel/fpu/core.c
+++ linux-5.0.19-rt11/arch/x86/kernel/fpu/core.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:104 @ static void __kernel_fpu_begin(void)
 
 	kernel_fpu_disable();
 
-	if (fpu->initialized) {
-		/*
-		 * Ignore return value -- we don't care if reg state
-		 * is clobbered.
-		 */
-		copy_fpregs_to_fpstate(fpu);
-	} else {
-		__cpu_invalidate_fpregs_state();
+	if (current->mm) {
+		if (!test_thread_flag(TIF_NEED_FPU_LOAD)) {
+			set_thread_flag(TIF_NEED_FPU_LOAD);
+			/*
+			 * Ignore return value -- we don't care if reg state
+			 * is clobbered.
+			 */
+			copy_fpregs_to_fpstate(fpu);
+		}
 	}
+	__cpu_invalidate_fpregs_state();
 }
 
 static void __kernel_fpu_end(void)
 {
-	struct fpu *fpu = &current->thread.fpu;
-
-	if (fpu->initialized)
-		copy_kernel_to_fpregs(&fpu->state);
-
 	kernel_fpu_enable();
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:136 @ void kernel_fpu_end(void)
 }
 EXPORT_SYMBOL_GPL(kernel_fpu_end);
 
+void kernel_fpu_resched(void)
+{
+	WARN_ON_FPU(!this_cpu_read(in_kernel_fpu));
+
+	if (should_resched(PREEMPT_OFFSET)) {
+		kernel_fpu_end();
+		cond_resched();
+		kernel_fpu_begin();
+	}
+}
+EXPORT_SYMBOL_GPL(kernel_fpu_resched);
+
 /*
  * Save the FPU state (mark it for reload if necessary):
  *
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:157 @ void fpu__save(struct fpu *fpu)
 {
 	WARN_ON_FPU(fpu != &current->thread.fpu);
 
-	preempt_disable();
+	fpregs_lock();
 	trace_x86_fpu_before_save(fpu);
-	if (fpu->initialized) {
+
+	if (!test_thread_flag(TIF_NEED_FPU_LOAD)) {
 		if (!copy_fpregs_to_fpstate(fpu)) {
 			copy_kernel_to_fpregs(&fpu->state);
 		}
 	}
+
 	trace_x86_fpu_after_save(fpu);
-	preempt_enable();
+	fpregs_unlock();
 }
 EXPORT_SYMBOL_GPL(fpu__save);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:200 @ void fpstate_init(union fpregs_state *st
 }
 EXPORT_SYMBOL_GPL(fpstate_init);
 
-int fpu__copy(struct fpu *dst_fpu, struct fpu *src_fpu)
+int fpu__copy(struct task_struct *dst, struct task_struct *src)
 {
+	struct fpu *dst_fpu = &dst->thread.fpu;
+	struct fpu *src_fpu = &src->thread.fpu;
+
 	dst_fpu->last_cpu = -1;
 
-	if (!src_fpu->initialized || !static_cpu_has(X86_FEATURE_FPU))
+	if (!static_cpu_has(X86_FEATURE_FPU))
 		return 0;
 
 	WARN_ON_FPU(src_fpu != &current->thread.fpu);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:219 @ int fpu__copy(struct fpu *dst_fpu, struc
 	memset(&dst_fpu->state.xsave, 0, fpu_kernel_xstate_size);
 
 	/*
-	 * Save current FPU registers directly into the child
-	 * FPU context, without any memory-to-memory copying.
+	 * If the FPU registers are not current, just memcpy() the state.
+	 * Otherwise save current FPU registers directly into the child's FPU
+	 * context, without any memory-to-memory copying.
 	 *
 	 * ( The function 'fails' in the FNSAVE case, which destroys
-	 *   register contents so we have to copy them back. )
+	 *   register contents so we have to load them back. )
 	 */
-	if (!copy_fpregs_to_fpstate(dst_fpu)) {
-		memcpy(&src_fpu->state, &dst_fpu->state, fpu_kernel_xstate_size);
-		copy_kernel_to_fpregs(&src_fpu->state);
-	}
+	fpregs_lock();
+	if (test_thread_flag(TIF_NEED_FPU_LOAD))
+		memcpy(&dst_fpu->state, &src_fpu->state, fpu_kernel_xstate_size);
+
+	else if (!copy_fpregs_to_fpstate(dst_fpu))
+		copy_kernel_to_fpregs(&dst_fpu->state);
+
+	fpregs_unlock();
+
+	set_tsk_thread_flag(dst, TIF_NEED_FPU_LOAD);
 
 	trace_x86_fpu_copy_src(src_fpu);
 	trace_x86_fpu_copy_dst(dst_fpu);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:247 @ int fpu__copy(struct fpu *dst_fpu, struc
  * Activate the current task's in-memory FPU context,
  * if it has not been used before:
  */
-void fpu__initialize(struct fpu *fpu)
+static void fpu__initialize(struct fpu *fpu)
 {
 	WARN_ON_FPU(fpu != &current->thread.fpu);
 
-	if (!fpu->initialized) {
-		fpstate_init(&fpu->state);
-		trace_x86_fpu_init_state(fpu);
-
-		trace_x86_fpu_activate_state(fpu);
-		/* Safe to do for the current task: */
-		fpu->initialized = 1;
-	}
+	set_thread_flag(TIF_NEED_FPU_LOAD);
+	fpstate_init(&fpu->state);
+	trace_x86_fpu_init_state(fpu);
 }
-EXPORT_SYMBOL_GPL(fpu__initialize);
 
 /*
  * This function must be called before we read a task's fpstate.
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:266 @ EXPORT_SYMBOL_GPL(fpu__initialize);
  *
  * - or it's called for stopped tasks (ptrace), in which case the
  *   registers were already saved by the context-switch code when
- *   the task scheduled out - we only have to initialize the registers
- *   if they've never been initialized.
+ *   the task scheduled out.
  *
  * If the task has used the FPU before then save it.
  */
 void fpu__prepare_read(struct fpu *fpu)
 {
-	if (fpu == &current->thread.fpu) {
+	if (fpu == &current->thread.fpu)
 		fpu__save(fpu);
-	} else {
-		if (!fpu->initialized) {
-			fpstate_init(&fpu->state);
-			trace_x86_fpu_init_state(fpu);
-
-			trace_x86_fpu_activate_state(fpu);
-			/* Safe to do for current and for stopped child tasks: */
-			fpu->initialized = 1;
-		}
-	}
 }
 
 /*
  * This function must be called before we write a task's fpstate.
  *
- * If the task has used the FPU before then invalidate any cached FPU registers.
- * If the task has not used the FPU before then initialize its fpstate.
+ * Invalidate any cached FPU registers.
  *
  * After this function call, after registers in the fpstate are
  * modified and the child task has woken up, the child task will
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:296 @ void fpu__prepare_write(struct fpu *fpu)
 	 */
 	WARN_ON_FPU(fpu == &current->thread.fpu);
 
-	if (fpu->initialized) {
-		/* Invalidate any cached state: */
-		__fpu_invalidate_fpregs_state(fpu);
-	} else {
-		fpstate_init(&fpu->state);
-		trace_x86_fpu_init_state(fpu);
-
-		trace_x86_fpu_activate_state(fpu);
-		/* Safe to do for stopped child tasks: */
-		fpu->initialized = 1;
-	}
+	/* Invalidate any cached state: */
+	__fpu_invalidate_fpregs_state(fpu);
 }
 
 /*
- * 'fpu__restore()' is called to copy FPU registers from
- * the FPU fpstate to the live hw registers and to activate
- * access to the hardware registers, so that FPU instructions
- * can be used afterwards.
- *
- * Must be called with kernel preemption disabled (for example
- * with local interrupts disabled, as it is in the case of
- * do_device_not_available()).
- */
-void fpu__restore(struct fpu *fpu)
-{
-	fpu__initialize(fpu);
-
-	/* Avoid __kernel_fpu_begin() right after fpregs_activate() */
-	kernel_fpu_disable();
-	trace_x86_fpu_before_restore(fpu);
-	fpregs_activate(fpu);
-	copy_kernel_to_fpregs(&fpu->state);
-	trace_x86_fpu_after_restore(fpu);
-	kernel_fpu_enable();
-}
-EXPORT_SYMBOL_GPL(fpu__restore);
-
-/*
  * Drops current FPU state: deactivates the fpregs and
  * the fpstate. NOTE: it still leaves previous contents
  * in the fpregs in the eager-FPU case.
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:314 @ void fpu__drop(struct fpu *fpu)
 	preempt_disable();
 
 	if (fpu == &current->thread.fpu) {
-		if (fpu->initialized) {
-			/* Ignore delayed exceptions from user space */
-			asm volatile("1: fwait\n"
-				     "2:\n"
-				     _ASM_EXTABLE(1b, 2b));
-			fpregs_deactivate(fpu);
-		}
+		/* Ignore delayed exceptions from user space */
+		asm volatile("1: fwait\n"
+			     "2:\n"
+			     _ASM_EXTABLE(1b, 2b));
+		fpregs_deactivate(fpu);
 	}
 
-	fpu->initialized = 0;
-
 	trace_x86_fpu_dropped(fpu);
 
 	preempt_enable();
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:332 @ void fpu__drop(struct fpu *fpu)
  */
 static inline void copy_init_fpstate_to_fpregs(void)
 {
+	fpregs_lock();
+
 	if (use_xsave())
 		copy_kernel_to_xregs(&init_fpstate.xsave, -1);
 	else if (static_cpu_has(X86_FEATURE_FXSR))
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:343 @ static inline void copy_init_fpstate_to_
 
 	if (boot_cpu_has(X86_FEATURE_OSPKE))
 		copy_init_pkru_to_fpregs();
+
+	fpregs_mark_activate();
+	fpregs_unlock();
 }
 
 /*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:363 @ void fpu__clear(struct fpu *fpu)
 	/*
 	 * Make sure fpstate is cleared and initialized.
 	 */
-	if (static_cpu_has(X86_FEATURE_FPU)) {
-		preempt_disable();
-		fpu__initialize(fpu);
-		user_fpu_begin();
+	fpu__initialize(fpu);
+	if (static_cpu_has(X86_FEATURE_FPU))
 		copy_init_fpstate_to_fpregs();
-		preempt_enable();
-	}
 }
 
 /*
+ * Load FPU context before returning to userspace.
+ */
+void switch_fpu_return(void)
+{
+	if (!static_cpu_has(X86_FEATURE_FPU))
+		return;
+
+	__fpregs_load_activate();
+}
+EXPORT_SYMBOL_GPL(switch_fpu_return);
+
+#ifdef CONFIG_X86_DEBUG_FPU
+/*
+ * If the FPU state tracked as loaded on this CPU is not valid, then
+ * TIF_NEED_FPU_LOAD must be set so the context is loaded on return to
+ * userland.
+ */
+void fpregs_assert_state_consistent(void)
+{
+	struct fpu *fpu = &current->thread.fpu;
+
+	if (test_thread_flag(TIF_NEED_FPU_LOAD))
+		return;
+	WARN_ON_FPU(!fpregs_state_valid(fpu, smp_processor_id()));
+}
+EXPORT_SYMBOL_GPL(fpregs_assert_state_consistent);
+#endif
+
+void fpregs_mark_activate(void)
+{
+	struct fpu *fpu = &current->thread.fpu;
+
+	fpregs_activate(fpu);
+	fpu->last_cpu = smp_processor_id();
+	clear_thread_flag(TIF_NEED_FPU_LOAD);
+}
+EXPORT_SYMBOL_GPL(fpregs_mark_activate);
+
+/*
  * x87 math exception handling:
  */
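kernel_fpu_resched(), added above, lets long-running kernel-FPU users drop and
re-acquire the FPU section when a reschedule is due, instead of keeping
preemption disabled for the whole operation. A sketch of the intended usage
(process_simd_block() is a hypothetical worker, not from this patch):

	kernel_fpu_begin();
	for (i = 0; i < nr_blocks; i++) {
		process_simd_block(&blocks[i]);
		kernel_fpu_resched();	/* end/cond_resched()/begin if due */
	}
	kernel_fpu_end();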
 
Index: linux-5.0.19-rt11/arch/x86/kernel/fpu/init.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/kernel/fpu/init.c
+++ linux-5.0.19-rt11/arch/x86/kernel/fpu/init.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:242 @ static void __init fpu__init_system_ctx_
 
 	WARN_ON_FPU(!on_boot_cpu);
 	on_boot_cpu = 0;
-
-	WARN_ON_FPU(current->thread.fpu.initialized);
 }
 
 /*
Index: linux-5.0.19-rt11/arch/x86/kernel/fpu/regset.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/kernel/fpu/regset.c
+++ linux-5.0.19-rt11/arch/x86/kernel/fpu/regset.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:18 @
  */
 int regset_fpregs_active(struct task_struct *target, const struct user_regset *regset)
 {
-	struct fpu *target_fpu = &target->thread.fpu;
-
-	return target_fpu->initialized ? regset->n : 0;
+	return regset->n;
 }
 
 int regset_xregset_fpregs_active(struct task_struct *target, const struct user_regset *regset)
 {
-	struct fpu *target_fpu = &target->thread.fpu;
-
-	if (boot_cpu_has(X86_FEATURE_FXSR) && target_fpu->initialized)
+	if (boot_cpu_has(X86_FEATURE_FXSR))
 		return regset->n;
 	else
 		return 0;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:268 @ convert_from_fxsr(struct user_i387_ia32_
 		memcpy(&to[i], &from[i], sizeof(to[0]));
 }
 
-void convert_to_fxsr(struct task_struct *tsk,
+void convert_to_fxsr(struct fxregs_state *fxsave,
 		     const struct user_i387_ia32_struct *env)
 
 {
-	struct fxregs_state *fxsave = &tsk->thread.fpu.state.fxsave;
 	struct _fpreg *from = (struct _fpreg *) &env->st_space[0];
 	struct _fpxreg *to = (struct _fpxreg *) &fxsave->st_space[0];
 	int i;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:348 @ int fpregs_set(struct task_struct *targe
 
 	ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &env, 0, -1);
 	if (!ret)
-		convert_to_fxsr(target, &env);
+		convert_to_fxsr(&target->thread.fpu.state.fxsave, &env);
 
 	/*
 	 * update the header bit in the xsave header, indicating the
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:369 @ int fpregs_set(struct task_struct *targe
 int dump_fpu(struct pt_regs *regs, struct user_i387_struct *ufpu)
 {
 	struct task_struct *tsk = current;
-	struct fpu *fpu = &tsk->thread.fpu;
-	int fpvalid;
-
-	fpvalid = fpu->initialized;
-	if (fpvalid)
-		fpvalid = !fpregs_get(tsk, NULL,
-				      0, sizeof(struct user_i387_ia32_struct),
-				      ufpu, NULL);
 
-	return fpvalid;
+	return !fpregs_get(tsk, NULL, 0, sizeof(struct user_i387_ia32_struct),
+			   ufpu, NULL);
 }
 EXPORT_SYMBOL(dump_fpu);
 
Index: linux-5.0.19-rt11/arch/x86/kernel/fpu/signal.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/kernel/fpu/signal.c
+++ linux-5.0.19-rt11/arch/x86/kernel/fpu/signal.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:8 @
 
 #include <linux/compat.h>
 #include <linux/cpu.h>
+#include <linux/pagemap.h>
 
 #include <asm/fpu/internal.h>
 #include <asm/fpu/signal.h>
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:65 @ static inline int save_fsave_header(stru
 		struct user_i387_ia32_struct env;
 		struct _fpstate_32 __user *fp = buf;
 
+		fpregs_lock();
+		if (!test_thread_flag(TIF_NEED_FPU_LOAD))
+			copy_fxregs_to_kernel(&tsk->thread.fpu);
+		fpregs_unlock();
+
 		convert_from_fxsr(&env, tsk);
 
 		if (__copy_to_user(buf, &env, sizeof(env)) ||
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:153 @ static inline int copy_fpregs_to_sigfram
  *	buf == buf_fx for 64-bit frames and 32-bit fsave frame.
  *	buf != buf_fx for 32-bit frames with fxstate.
  *
- * If the fpu, extended register state is live, save the state directly
- * to the user frame pointed by the aligned pointer 'buf_fx'. Otherwise,
- * copy the thread's fpu state to the user frame starting at 'buf_fx'.
+ * Try to save it directly to the user frame with the page fault handler
+ * disabled. If this fails, fall back to the slow path where the FPU state is
+ * first saved to the task's fpu->state and then copied to the user frame
+ * pointed to by the aligned pointer 'buf_fx'.
  *
  * If this is a 32-bit frame with fxstate, put a fsave header before
  * the aligned state at 'buf_fx'.
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:166 @ static inline int copy_fpregs_to_sigfram
  */
 int copy_fpstate_to_sigframe(void __user *buf, void __user *buf_fx, int size)
 {
-	struct fpu *fpu = &current->thread.fpu;
-	struct xregs_state *xsave = &fpu->state.xsave;
 	struct task_struct *tsk = current;
 	int ia32_fxstate = (buf != buf_fx);
+	int ret;
 
 	ia32_fxstate &= (IS_ENABLED(CONFIG_X86_32) ||
 			 IS_ENABLED(CONFIG_IA32_EMULATION));
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:181 @ int copy_fpstate_to_sigframe(void __user
 			sizeof(struct user_i387_ia32_struct), NULL,
 			(struct _fpstate_32 __user *) buf) ? -1 : 1;
 
-	if (fpu->initialized || using_compacted_format()) {
-		/* Save the live register state to the user directly. */
-		if (copy_fpregs_to_sigframe(buf_fx))
-			return -1;
-		/* Update the thread's fxstate to save the fsave header. */
-		if (ia32_fxstate)
-			copy_fxregs_to_kernel(fpu);
-	} else {
-		/*
-		 * It is a *bug* if kernel uses compacted-format for xsave
-		 * area and we copy it out directly to a signal frame. It
-		 * should have been handled above by saving the registers
-		 * directly.
-		 */
-		if (boot_cpu_has(X86_FEATURE_XSAVES)) {
-			WARN_ONCE(1, "x86/fpu: saving compacted-format xsave area to a signal frame!\n");
-			return -1;
-		}
+retry:
+	fpregs_lock();
+	/*
+	 * Load the FPU registers if they are not valid for the current task.
+	 * With a valid FPU state we can attempt to save the state directly to
+	 * userland's stack frame, which will likely succeed. If it does not,
+	 * resolve the fault in the user memory and try again.
+	 */
+	if (test_thread_flag(TIF_NEED_FPU_LOAD))
+		__fpregs_load_activate();
 
-		fpstate_sanitize_xstate(fpu);
-		if (__copy_to_user(buf_fx, xsave, fpu_user_xstate_size))
-			return -1;
+	pagefault_disable();
+	ret = copy_fpregs_to_sigframe(buf_fx);
+	pagefault_enable();
+	fpregs_unlock();
+
+	if (ret) {
+		if (!fault_in_pages_writeable(buf_fx, fpu_user_xstate_size))
+			goto retry;
+		return -EFAULT;
 	}
 
 	/* Save the fsave header for the 32-bit frames. */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:214 @ int copy_fpstate_to_sigframe(void __user
 }
 
 static inline void
-sanitize_restored_xstate(struct task_struct *tsk,
+sanitize_restored_xstate(union fpregs_state *state,
 			 struct user_i387_ia32_struct *ia32_env,
 			 u64 xfeatures, int fx_only)
 {
-	struct xregs_state *xsave = &tsk->thread.fpu.state.xsave;
+	struct xregs_state *xsave = &state->xsave;
 	struct xstate_header *header = &xsave->header;
 
 	if (use_xsave()) {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:245 @ sanitize_restored_xstate(struct task_str
 		 */
 		xsave->i387.mxcsr &= mxcsr_feature_mask;
 
-		convert_to_fxsr(tsk, ia32_env);
+		if (ia32_env)
+			convert_to_fxsr(&state->fxsave, ia32_env);
 	}
 }
 
 /*
  * Restore the extended state if present. Otherwise, restore the FP/SSE state.
  */
-static inline int copy_user_to_fpregs_zeroing(void __user *buf, u64 xbv, int fx_only)
+static int copy_user_to_fpregs_zeroing(void __user *buf, u64 xbv, int fx_only)
 {
 	if (use_xsave()) {
-		if ((unsigned long)buf % 64 || fx_only) {
+		if (fx_only) {
 			u64 init_bv = xfeatures_mask & ~XFEATURE_MASK_FPSSE;
 			copy_kernel_to_xregs(&init_fpstate.xsave, init_bv);
 			return copy_user_to_fxregs(buf);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:274 @ static inline int copy_user_to_fpregs_ze
 
 static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size)
 {
+	struct user_i387_ia32_struct *envp = NULL;
 	int ia32_fxstate = (buf != buf_fx);
 	struct task_struct *tsk = current;
 	struct fpu *fpu = &tsk->thread.fpu;
 	int state_size = fpu_kernel_xstate_size;
+	struct user_i387_ia32_struct env;
 	u64 xfeatures = 0;
 	int fx_only = 0;
+	int ret = 0;
 
 	ia32_fxstate &= (IS_ENABLED(CONFIG_X86_32) ||
 			 IS_ENABLED(CONFIG_IA32_EMULATION));
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:295 @ static int __fpu__restore_sig(void __use
 	if (!access_ok(buf, size))
 		return -EACCES;
 
-	fpu__initialize(fpu);
-
 	if (!static_cpu_has(X86_FEATURE_FPU))
 		return fpregs_soft_set(current, NULL,
 				       0, sizeof(struct user_i387_ia32_struct),
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:317 @ static int __fpu__restore_sig(void __use
 		}
 	}
 
+	/*
+	 * The current state of the FPU registers does not matter. By setting
+	 * TIF_NEED_FPU_LOAD unconditionally it is ensured that our xstate is
+	 * not modified on context switch and that the xstate is considered to
+	 * be loaded again on return to userland (overriding last_cpu avoids the
+	 * optimisation).
+	 */
+	set_thread_flag(TIF_NEED_FPU_LOAD);
+	__fpu_invalidate_fpregs_state(fpu);
+
+	if ((unsigned long)buf_fx % 64)
+		fx_only = 1;
+	/*
+	 * For 32-bit frames with fxstate, copy the fxstate so it can be
+	 * reconstructed later.
+	 */
 	if (ia32_fxstate) {
-		/*
-		 * For 32-bit frames with fxstate, copy the user state to the
-		 * thread's fpu state, reconstruct fxstate from the fsave
-		 * header. Validate and sanitize the copied state.
-		 */
-		struct user_i387_ia32_struct env;
-		int err = 0;
+		ret = __copy_from_user(&env, buf, sizeof(env));
+		if (ret)
+			goto err_out;
+		envp = &env;
+	} else {
+		fpregs_lock();
+		pagefault_disable();
+		ret = copy_user_to_fpregs_zeroing(buf_fx, xfeatures, fx_only);
+		pagefault_enable();
+		if (!ret) {
+			fpregs_mark_activate();
+			fpregs_unlock();
+			return 0;
+		}
+		fpregs_unlock();
+	}
 
-		/*
-		 * Drop the current fpu which clears fpu->initialized. This ensures
-		 * that any context-switch during the copy of the new state,
-		 * avoids the intermediate state from getting restored/saved.
-		 * Thus avoiding the new restored state from getting corrupted.
-		 * We will be ready to restore/save the state only after
-		 * fpu->initialized is again set.
-		 */
-		fpu__drop(fpu);
+	if (use_xsave() && !fx_only) {
+		u64 init_bv = xfeatures_mask & ~xfeatures;
 
 		if (using_compacted_format()) {
-			err = copy_user_to_xstate(&fpu->state.xsave, buf_fx);
+			ret = copy_user_to_xstate(&fpu->state.xsave, buf_fx);
 		} else {
-			err = __copy_from_user(&fpu->state.xsave, buf_fx, state_size);
+			ret = __copy_from_user(&fpu->state.xsave, buf_fx, state_size);
 
-			if (!err && state_size > offsetof(struct xregs_state, header))
-				err = validate_xstate_header(&fpu->state.xsave.header);
+			if (!ret && state_size > offsetof(struct xregs_state, header))
+				ret = validate_xstate_header(&fpu->state.xsave.header);
 		}
+		if (ret)
+			goto err_out;
 
-		if (err || __copy_from_user(&env, buf, sizeof(env))) {
-			fpstate_init(&fpu->state);
-			trace_x86_fpu_init_state(fpu);
-			err = -1;
-		} else {
-			sanitize_restored_xstate(tsk, &env, xfeatures, fx_only);
+		sanitize_restored_xstate(&fpu->state, envp, xfeatures, fx_only);
+
+		fpregs_lock();
+		if (unlikely(init_bv))
+			copy_kernel_to_xregs(&init_fpstate.xsave, init_bv);
+		ret = copy_kernel_to_xregs_err(&fpu->state.xsave, xfeatures);
+
+	} else if (use_fxsr()) {
+		ret = __copy_from_user(&fpu->state.fxsave, buf_fx, state_size);
+		if (ret) {
+			ret = -EFAULT;
+			goto err_out;
 		}
 
-		local_bh_disable();
-		fpu->initialized = 1;
-		fpu__restore(fpu);
-		local_bh_enable();
+		sanitize_restored_xstate(&fpu->state, envp, xfeatures, fx_only);
 
-		return err;
-	} else {
-		/*
-		 * For 64-bit frames and 32-bit fsave frames, restore the user
-		 * state to the registers directly (with exceptions handled).
-		 */
-		user_fpu_begin();
-		if (copy_user_to_fpregs_zeroing(buf_fx, xfeatures, fx_only)) {
-			fpu__clear(fpu);
-			return -1;
+		fpregs_lock();
+		if (use_xsave()) {
+			u64 init_bv = xfeatures_mask & ~XFEATURE_MASK_FPSSE;
+			copy_kernel_to_xregs(&init_fpstate.xsave, init_bv);
 		}
+
+		ret = copy_kernel_to_fxregs_err(&fpu->state.fxsave);
+	} else {
+		ret = __copy_from_user(&fpu->state.fsave, buf_fx, state_size);
+		if (ret)
+			goto err_out;
+		fpregs_lock();
+		ret = copy_kernel_to_fregs_err(&fpu->state.fsave);
 	}
+	if (!ret)
+		fpregs_mark_activate();
+	fpregs_unlock();
 
-	return 0;
+err_out:
+	if (ret)
+		fpu__clear(fpu);
+	return ret;
 }
 
 static inline int xstate_sigframe_size(void)
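The rewritten copy_fpstate_to_sigframe() above uses the usual
disable-pagefaults/copy/fault-in/retry pattern. A condensed sketch of that
pattern (try_atomic_copy() stands in for copy_fpregs_to_sigframe(); this is
not literal patch code):

	int ret;

	do {
		pagefault_disable();
		ret = try_atomic_copy(buf_fx, size);
		pagefault_enable();
		if (!ret)
			break;			/* copy succeeded */
	} while (!fault_in_pages_writeable(buf_fx, size));

	if (ret)
		return -EFAULT;			/* could not fault the frame in */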
Index: linux-5.0.19-rt11/arch/x86/kernel/fpu/xstate.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/kernel/fpu/xstate.c
+++ linux-5.0.19-rt11/arch/x86/kernel/fpu/xstate.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:808 @ void fpu__resume_cpu(void)
 }
 
 /*
- * Given an xstate feature mask, calculate where in the xsave
+ * Given an xstate feature nr, calculate where in the xsave
  * buffer the state is.  Callers should ensure that the buffer
  * is valid.
  */
-static void *__raw_xsave_addr(struct xregs_state *xsave, int xstate_feature_mask)
+static void *__raw_xsave_addr(struct xregs_state *xsave, int xfeature_nr)
 {
-	int feature_nr = fls64(xstate_feature_mask) - 1;
-
-	if (!xfeature_enabled(feature_nr)) {
+	if (!xfeature_enabled(xfeature_nr)) {
 		WARN_ON_FPU(1);
 		return NULL;
 	}
 
-	return (void *)xsave + xstate_comp_offsets[feature_nr];
+	return (void *)xsave + xstate_comp_offsets[xfeature_nr];
 }
 /*
  * Given the xsave area and a state inside, this function returns the
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:833 @ static void *__raw_xsave_addr(struct xre
  *
  * Inputs:
  *	xstate: the thread's storage area for all FPU data
- *	xstate_feature: state which is defined in xsave.h (e.g.
- *	XFEATURE_MASK_FP, XFEATURE_MASK_SSE, etc...)
+ *	xfeature_nr: state which is defined in xsave.h (e.g. XFEATURE_FP,
+ *	XFEATURE_SSE, etc...)
  * Output:
  *	address of the state in the xsave area, or NULL if the
  *	field is not present in the xsave buffer.
  */
-void *get_xsave_addr(struct xregs_state *xsave, int xstate_feature)
+void *get_xsave_addr(struct xregs_state *xsave, int xfeature_nr)
 {
 	/*
 	 * Do we even *have* xsave state?
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:852 @ void *get_xsave_addr(struct xregs_state
 	 * have not enabled.  Remember that pcntxt_mask is
 	 * what we write to the XCR0 register.
 	 */
-	WARN_ONCE(!(xfeatures_mask & xstate_feature),
+	WARN_ONCE(!(xfeatures_mask & BIT_ULL(xfeature_nr)),
 		  "get of unsupported state");
 	/*
 	 * This assumes the last 'xsave*' instruction to
-	 * have requested that 'xstate_feature' be saved.
+	 * have requested that 'xfeature_nr' be saved.
 	 * If it did not, we might be seeing and old value
 	 * of the field in the buffer.
 	 *
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:865 @ void *get_xsave_addr(struct xregs_state
 	 * or because the "init optimization" caused it
 	 * to not be saved.
 	 */
-	if (!(xsave->header.xfeatures & xstate_feature))
+	if (!(xsave->header.xfeatures & BIT_ULL(xfeature_nr)))
 		return NULL;
 
-	return __raw_xsave_addr(xsave, xstate_feature);
+	return __raw_xsave_addr(xsave, xfeature_nr);
 }
 EXPORT_SYMBOL_GPL(get_xsave_addr);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:883 @ EXPORT_SYMBOL_GPL(get_xsave_addr);
  * Note that this only works on the current task.
  *
  * Inputs:
- *	@xsave_state: state which is defined in xsave.h (e.g. XFEATURE_MASK_FP,
- *	XFEATURE_MASK_SSE, etc...)
+ *	@xfeature_nr: state which is defined in xsave.h (e.g. XFEATURE_FP,
+ *	XFEATURE_SSE, etc...)
  * Output:
  *	address of the state in the xsave area or NULL if the state
  *	is not present or is in its 'init state'.
  */
-const void *get_xsave_field_ptr(int xsave_state)
+const void *get_xsave_field_ptr(int xfeature_nr)
 {
 	struct fpu *fpu = &current->thread.fpu;
 
-	if (!fpu->initialized)
-		return NULL;
 	/*
 	 * fpu__save() takes the CPU's xstate registers
 	 * and saves them off to the 'fpu memory buffer.
 	 */
 	fpu__save(fpu);
 
-	return get_xsave_addr(&fpu->state.xsave, xsave_state);
+	return get_xsave_addr(&fpu->state.xsave, xfeature_nr);
 }
 
 #ifdef CONFIG_ARCH_HAS_PKEYS
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1015 @ int copy_xstate_to_kernel(void *kbuf, st
 		 * Copy only in-use xstates:
 		 */
 		if ((header.xfeatures >> i) & 1) {
-			void *src = __raw_xsave_addr(xsave, 1 << i);
+			void *src = __raw_xsave_addr(xsave, i);
 
 			offset = xstate_offsets[i];
 			size = xstate_sizes[i];
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1101 @ int copy_xstate_to_user(void __user *ubu
 		 * Copy only in-use xstates:
 		 */
 		if ((header.xfeatures >> i) & 1) {
-			void *src = __raw_xsave_addr(xsave, 1 << i);
+			void *src = __raw_xsave_addr(xsave, i);
 
 			offset = xstate_offsets[i];
 			size = xstate_sizes[i];
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1158 @ int copy_kernel_to_xstate(struct xregs_s
 		u64 mask = ((u64)1 << i);
 
 		if (hdr.xfeatures & mask) {
-			void *dst = __raw_xsave_addr(xsave, 1 << i);
+			void *dst = __raw_xsave_addr(xsave, i);
 
 			offset = xstate_offsets[i];
 			size = xstate_sizes[i];
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1212 @ int copy_user_to_xstate(struct xregs_sta
 		u64 mask = ((u64)1 << i);
 
 		if (hdr.xfeatures & mask) {
-			void *dst = __raw_xsave_addr(xsave, 1 << i);
+			void *dst = __raw_xsave_addr(xsave, i);
 
 			offset = xstate_offsets[i];
 			size = xstate_sizes[i];
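After this change the xsave helpers take a feature number instead of a feature
mask, so callers pass the XFEATURE_* enum value directly. This mirrors the
setup_pku() hunk earlier in this patch (illustrative excerpt, not new code):

	struct pkru_state *pk;

	/* before: get_xsave_addr(&init_fpstate.xsave, XFEATURE_MASK_PKRU) */
	pk = get_xsave_addr(&init_fpstate.xsave, XFEATURE_PKRU);
	if (pk)
		pk->pkru = init_pkru_value;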
Index: linux-5.0.19-rt11/arch/x86/kernel/ima_arch.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/kernel/ima_arch.c
+++ linux-5.0.19-rt11/arch/x86/kernel/ima_arch.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:20 @ static enum efi_secureboot_mode get_sb_m
 
 	size = sizeof(secboot);
 
+	if (!efi_enabled(EFI_RUNTIME_SERVICES)) {
+		pr_info("ima: secureboot mode unknown, no efi\n");
+		return efi_secureboot_mode_unknown;
+	}
+
 	/* Get variable contents into buffer */
 	status = efi.get_variable(efi_SecureBoot_name, &efi_variable_guid,
 				  NULL, &size, &secboot);
Index: linux-5.0.19-rt11/arch/x86/kernel/irq_32.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/kernel/irq_32.c
+++ linux-5.0.19-rt11/arch/x86/kernel/irq_32.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:133 @ void irq_ctx_init(int cpu)
 	       cpu, per_cpu(hardirq_stack, cpu),  per_cpu(softirq_stack, cpu));
 }
 
+#ifndef CONFIG_PREEMPT_RT_FULL
 void do_softirq_own_stack(void)
 {
 	struct irq_stack *irqstk;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:150 @ void do_softirq_own_stack(void)
 
 	call_on_stack(__do_softirq, isp);
 }
+#endif
 
 bool handle_irq(struct irq_desc *desc, struct pt_regs *regs)
 {
Index: linux-5.0.19-rt11/arch/x86/kernel/process.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/kernel/process.c
+++ linux-5.0.19-rt11/arch/x86/kernel/process.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:104 @ int arch_dup_task_struct(struct task_str
 	dst->thread.vm86 = NULL;
 #endif
 
-	return fpu__copy(&dst->thread.fpu, &src->thread.fpu);
+	return fpu__copy(dst, src);
 }
 
 /*
Index: linux-5.0.19-rt11/arch/x86/kernel/process_32.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/kernel/process_32.c
+++ linux-5.0.19-rt11/arch/x86/kernel/process_32.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:41 @
 #include <linux/io.h>
 #include <linux/kdebug.h>
 #include <linux/syscalls.h>
+#include <linux/highmem.h>
 
 #include <asm/pgtable.h>
 #include <asm/ldt.h>
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:206 @ start_thread(struct pt_regs *regs, unsig
 }
 EXPORT_SYMBOL_GPL(start_thread);
 
+#ifdef CONFIG_PREEMPT_RT_FULL
+static void switch_kmaps(struct task_struct *prev_p, struct task_struct *next_p)
+{
+	int i;
+
+	/*
+	 * Clear @prev_p's kmap_atomic mappings
+	 */
+	for (i = 0; i < prev_p->kmap_idx; i++) {
+		int idx = i + KM_TYPE_NR * smp_processor_id();
+		pte_t *ptep = kmap_pte - idx;
+
+		kpte_clear_flush(ptep, __fix_to_virt(FIX_KMAP_BEGIN + idx));
+	}
+	/*
+	 * Restore @next_p's kmap_atomic mappings
+	 */
+	for (i = 0; i < next_p->kmap_idx; i++) {
+		int idx = i + KM_TYPE_NR * smp_processor_id();
+
+		if (!pte_none(next_p->kmap_pte[i]))
+			set_pte(kmap_pte - idx, next_p->kmap_pte[i]);
+	}
+}
+#else
+static inline void
+switch_kmaps(struct task_struct *prev_p, struct task_struct *next_p) { }
+#endif
+
 
 /*
  *	switch_to(x,y) should switch tasks from x to y.
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:274 @ __switch_to(struct task_struct *prev_p,
 
 	/* never put a printk in __switch_to... printk() calls wake_up*() indirectly */
 
-	switch_fpu_prepare(prev_fpu, cpu);
+	if (!test_thread_flag(TIF_NEED_FPU_LOAD))
+		switch_fpu_prepare(prev_fpu, cpu);
 
 	/*
 	 * Save away %gs. No need to save %fs, as it was saved on the
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:305 @ __switch_to(struct task_struct *prev_p,
 
 	switch_to_extra(prev_p, next_p);
 
+	switch_kmaps(prev_p, next_p);
+
 	/*
 	 * Leave lazy mode, flushing any hypercalls made here.
 	 * This must be done before restoring TLS segments so
-	 * the GDT and LDT are properly updated, and must be
-	 * done before fpu__restore(), so the TS bit is up
-	 * to date.
+	 * the GDT and LDT are properly updated.
 	 */
 	arch_end_context_switch(next_p);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:331 @ __switch_to(struct task_struct *prev_p,
 	if (prev->gs | next->gs)
 		lazy_load_gs(next->gs);
 
-	switch_fpu_finish(next_fpu, cpu);
-
 	this_cpu_write(current_task, next_p);
 
+	switch_fpu_finish(next_fpu);
+
 	/* Load the Intel cache allocation PQR MSR. */
 	resctrl_sched_in();
 
Index: linux-5.0.19-rt11/arch/x86/kernel/process_64.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/kernel/process_64.c
+++ linux-5.0.19-rt11/arch/x86/kernel/process_64.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:531 @ __switch_to(struct task_struct *prev_p,
 	WARN_ON_ONCE(IS_ENABLED(CONFIG_DEBUG_ENTRY) &&
 		     this_cpu_read(irq_count) != -1);
 
-	switch_fpu_prepare(prev_fpu, cpu);
+	if (!test_thread_flag(TIF_NEED_FPU_LOAD))
+		switch_fpu_prepare(prev_fpu, cpu);
 
 	/* We must save %fs and %gs before load_TLS() because
 	 * %fs and %gs may be cleared by load_TLS().
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:550 @ __switch_to(struct task_struct *prev_p,
 	/*
 	 * Leave lazy mode, flushing any hypercalls made here.  This
 	 * must be done after loading TLS entries in the GDT but before
-	 * loading segments that might reference them, and and it must
-	 * be done before fpu__restore(), so the TS bit is up to
-	 * date.
+	 * loading segments that might reference them.
 	 */
 	arch_end_context_switch(next_p);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:578 @ __switch_to(struct task_struct *prev_p,
 
 	x86_fsgsbase_load(prev, next);
 
-	switch_fpu_finish(next_fpu, cpu);
-
 	/*
 	 * Switch the PDA and FPU contexts.
 	 */
 	this_cpu_write(current_task, next_p);
 	this_cpu_write(cpu_current_top_of_stack, task_top_of_stack(next_p));
 
+	switch_fpu_finish(next_fpu);
+
 	/* Reload sp0. */
 	update_task_stack(next_p);
 
Index: linux-5.0.19-rt11/arch/x86/kernel/signal.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/kernel/signal.c
+++ linux-5.0.19-rt11/arch/x86/kernel/signal.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:248 @ get_sigframe(struct k_sigaction *ka, str
 	unsigned long sp = regs->sp;
 	unsigned long buf_fx = 0;
 	int onsigstack = on_sig_stack(sp);
-	struct fpu *fpu = &current->thread.fpu;
+	int ret;
 
 	/* redzone */
 	if (IS_ENABLED(CONFIG_X86_64))
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:267 @ get_sigframe(struct k_sigaction *ka, str
 		sp = (unsigned long) ka->sa.sa_restorer;
 	}
 
-	if (fpu->initialized) {
-		sp = fpu__alloc_mathframe(sp, IS_ENABLED(CONFIG_X86_32),
-					  &buf_fx, &math_size);
-		*fpstate = (void __user *)sp;
-	}
+	sp = fpu__alloc_mathframe(sp, IS_ENABLED(CONFIG_X86_32),
+				  &buf_fx, &math_size);
+	*fpstate = (void __user *)sp;
 
 	sp = align_sigframe(sp - frame_size);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:281 @ get_sigframe(struct k_sigaction *ka, str
 		return (void __user *)-1L;
 
 	/* save i387 and extended state */
-	if (fpu->initialized &&
-	    copy_fpstate_to_sigframe(*fpstate, (void __user *)buf_fx, math_size) < 0)
+	ret = copy_fpstate_to_sigframe(*fpstate, (void __user *)buf_fx, math_size);
+	if (ret < 0)
 		return (void __user *)-1L;
 
 	return (void __user *)sp;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:769 @ handle_signal(struct ksignal *ksig, stru
 		/*
 		 * Ensure the signal handler starts with the new fpu state.
 		 */
-		if (fpu->initialized)
-			fpu__clear(fpu);
+		fpu__clear(fpu);
 	}
 	signal_setup_done(failed, ksig, stepping);
 }
Index: linux-5.0.19-rt11/arch/x86/kernel/traps.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/kernel/traps.c
+++ linux-5.0.19-rt11/arch/x86/kernel/traps.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:458 @ dotraplinkage void do_bounds(struct pt_r
 	 * which is all zeros which indicates MPX was not
 	 * responsible for the exception.
 	 */
-	bndcsr = get_xsave_field_ptr(XFEATURE_MASK_BNDCSR);
+	bndcsr = get_xsave_field_ptr(XFEATURE_BNDCSR);
 	if (!bndcsr)
 		goto exit_trap;
 
Index: linux-5.0.19-rt11/arch/x86/kvm/lapic.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/kvm/lapic.c
+++ linux-5.0.19-rt11/arch/x86/kvm/lapic.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2302 @ int kvm_create_lapic(struct kvm_vcpu *vc
 	apic->vcpu = vcpu;
 
 	hrtimer_init(&apic->lapic_timer.timer, CLOCK_MONOTONIC,
-		     HRTIMER_MODE_ABS_PINNED);
+		     HRTIMER_MODE_ABS_PINNED_HARD);
 	apic->lapic_timer.timer.function = apic_timer_fn;
 	if (timer_advance_ns == -1) {
 		apic->lapic_timer.timer_advance_ns = 1000;
Index: linux-5.0.19-rt11/arch/x86/kvm/vmx/vmx.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/kvm/vmx/vmx.c
+++ linux-5.0.19-rt11/arch/x86/kvm/vmx/vmx.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:6638 @ static void vmx_vcpu_run(struct kvm_vcpu
 	 */
 	if (static_cpu_has(X86_FEATURE_PKU) &&
 	    kvm_read_cr4_bits(vcpu, X86_CR4_PKE)) {
-		vcpu->arch.pkru = __read_pkru();
+		vcpu->arch.pkru = __read_pkru_ins();
 		if (vcpu->arch.pkru != vmx->host_pkru)
 			__write_pkru(vmx->host_pkru);
 	}
Index: linux-5.0.19-rt11/arch/x86/kvm/x86.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/kvm/x86.c
+++ linux-5.0.19-rt11/arch/x86/kvm/x86.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3694 @ static void fill_xsave(u8 *dest, struct
 	 */
 	valid = xstate_bv & ~XFEATURE_MASK_FPSSE;
 	while (valid) {
-		u64 feature = valid & -valid;
-		int index = fls64(feature) - 1;
-		void *src = get_xsave_addr(xsave, feature);
+		u64 xfeature_mask = valid & -valid;
+		int xfeature_nr = fls64(xfeature_mask) - 1;
+		void *src = get_xsave_addr(xsave, xfeature_nr);
 
 		if (src) {
 			u32 size, offset, ecx, edx;
-			cpuid_count(XSTATE_CPUID, index,
+			cpuid_count(XSTATE_CPUID, xfeature_nr,
 				    &size, &offset, &ecx, &edx);
-			if (feature == XFEATURE_MASK_PKRU)
+			if (xfeature_nr == XFEATURE_PKRU)
 				memcpy(dest + offset, &vcpu->arch.pkru,
 				       sizeof(vcpu->arch.pkru));
 			else
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3710 @ static void fill_xsave(u8 *dest, struct
 
 		}
 
-		valid -= feature;
+		valid -= xfeature_mask;
 	}
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3737 @ static void load_xsave(struct kvm_vcpu *
 	 */
 	valid = xstate_bv & ~XFEATURE_MASK_FPSSE;
 	while (valid) {
-		u64 feature = valid & -valid;
-		int index = fls64(feature) - 1;
-		void *dest = get_xsave_addr(xsave, feature);
+		u64 xfeature_mask = valid & -valid;
+		int xfeature_nr = fls64(xfeature_mask) - 1;
+		void *dest = get_xsave_addr(xsave, xfeature_nr);
 
 		if (dest) {
 			u32 size, offset, ecx, edx;
-			cpuid_count(XSTATE_CPUID, index,
+			cpuid_count(XSTATE_CPUID, xfeature_nr,
 				    &size, &offset, &ecx, &edx);
-			if (feature == XFEATURE_MASK_PKRU)
+			if (xfeature_nr == XFEATURE_PKRU)
 				memcpy(&vcpu->arch.pkru, src + offset,
 				       sizeof(vcpu->arch.pkru));
 			else
 				memcpy(dest, src + offset, size);
 		}
 
-		valid -= feature;
+		valid -= xfeature_mask;
 	}
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:6972 @ int kvm_arch_init(void *opaque)
 		goto out;
 	}
 
+#ifdef CONFIG_PREEMPT_RT_FULL
+	if (!boot_cpu_has(X86_FEATURE_CONSTANT_TSC)) {
+		pr_err("RT requires X86_FEATURE_CONSTANT_TSC\n");
+		r = -EOPNOTSUPP;
+		goto out;
+	}
+#endif
+
 	r = -ENOMEM;
 	x86_fpu_cache = kmem_cache_create("x86_fpu", sizeof(struct fpu),
 					  __alignof__(struct fpu), SLAB_ACCOUNT,
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:7913 @ static int vcpu_enter_guest(struct kvm_v
 		wait_lapic_expire(vcpu);
 	guest_enter_irqoff();
 
+	fpregs_assert_state_consistent();
+	if (test_thread_flag(TIF_NEED_FPU_LOAD))
+		switch_fpu_return();
+
 	if (unlikely(vcpu->arch.switch_db_regs)) {
 		set_debugreg(0, 7);
 		set_debugreg(vcpu->arch.eff_db[0], 0);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:8175 @ static int complete_emulated_mmio(struct
 /* Swap (qemu) user FPU context for the guest FPU context. */
 static void kvm_load_guest_fpu(struct kvm_vcpu *vcpu)
 {
-	preempt_disable();
+	fpregs_lock();
+
 	copy_fpregs_to_fpstate(&current->thread.fpu);
 	/* PKRU is separately restored in kvm_x86_ops->run.  */
 	__copy_kernel_to_fpregs(&vcpu->arch.guest_fpu->state,
 				~XFEATURE_MASK_PKRU);
-	preempt_enable();
+
+	fpregs_mark_activate();
+	fpregs_unlock();
+
 	trace_kvm_fpu(1);
 }
 
 /* When vcpu_run ends, restore user space FPU context. */
 static void kvm_put_guest_fpu(struct kvm_vcpu *vcpu)
 {
-	preempt_disable();
+	fpregs_lock();
+
 	copy_fpregs_to_fpstate(vcpu->arch.guest_fpu);
 	copy_kernel_to_fpregs(&current->thread.fpu.state);
-	preempt_enable();
+
+	fpregs_mark_activate();
+	fpregs_unlock();
+
 	++vcpu->stat.fpu_reload;
 	trace_kvm_fpu(0);
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:8896 @ void kvm_vcpu_reset(struct kvm_vcpu *vcp
 		if (init_event)
 			kvm_put_guest_fpu(vcpu);
 		mpx_state_buffer = get_xsave_addr(&vcpu->arch.guest_fpu->state.xsave,
-					XFEATURE_MASK_BNDREGS);
+					XFEATURE_BNDREGS);
 		if (mpx_state_buffer)
 			memset(mpx_state_buffer, 0, sizeof(struct mpx_bndreg_state));
 		mpx_state_buffer = get_xsave_addr(&vcpu->arch.guest_fpu->state.xsave,
-					XFEATURE_MASK_BNDCSR);
+					XFEATURE_BNDCSR);
 		if (mpx_state_buffer)
 			memset(mpx_state_buffer, 0, sizeof(struct mpx_bndcsr));
 		if (init_event)
Index: linux-5.0.19-rt11/arch/x86/math-emu/fpu_entry.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/math-emu/fpu_entry.c
+++ linux-5.0.19-rt11/arch/x86/math-emu/fpu_entry.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:116 @ void math_emulate(struct math_emu_info *
 	unsigned long code_base = 0;
 	unsigned long code_limit = 0;	/* Initialized to stop compiler warnings */
 	struct desc_struct code_descriptor;
-	struct fpu *fpu = &current->thread.fpu;
-
-	fpu__initialize(fpu);
 
 #ifdef RE_ENTRANT_CHECKING
 	if (emulating) {
Index: linux-5.0.19-rt11/arch/x86/mm/highmem_32.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/mm/highmem_32.c
+++ linux-5.0.19-rt11/arch/x86/mm/highmem_32.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:35 @ EXPORT_SYMBOL(kunmap);
  */
 void *kmap_atomic_prot(struct page *page, pgprot_t prot)
 {
+	pte_t pte = mk_pte(page, prot);
 	unsigned long vaddr;
 	int idx, type;
 
-	preempt_disable();
+	preempt_disable_nort();
 	pagefault_disable();
 
 	if (!PageHighMem(page))
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:49 @ void *kmap_atomic_prot(struct page *page
 	idx = type + KM_TYPE_NR*smp_processor_id();
 	vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
 	BUG_ON(!pte_none(*(kmap_pte-idx)));
-	set_pte(kmap_pte-idx, mk_pte(page, prot));
+#ifdef CONFIG_PREEMPT_RT_FULL
+	current->kmap_pte[type] = pte;
+#endif
+	set_pte(kmap_pte-idx, pte);
 	arch_flush_lazy_mmu_mode();
 
 	return (void *)vaddr;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:95 @ void __kunmap_atomic(void *kvaddr)
 		 * is a bad idea also, in case the page changes cacheability
 		 * attributes or becomes a protected page in a hypervisor.
 		 */
+#ifdef CONFIG_PREEMPT_RT_FULL
+		current->kmap_pte[type] = __pte(0);
+#endif
 		kpte_clear_flush(kmap_pte-idx, vaddr);
 		kmap_atomic_idx_pop();
 		arch_flush_lazy_mmu_mode();
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:110 @ void __kunmap_atomic(void *kvaddr)
 #endif
 
 	pagefault_enable();
-	preempt_enable();
+	preempt_enable_nort();
 }
 EXPORT_SYMBOL(__kunmap_atomic);
 
Index: linux-5.0.19-rt11/arch/x86/mm/iomap_32.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/mm/iomap_32.c
+++ linux-5.0.19-rt11/arch/x86/mm/iomap_32.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:62 @ EXPORT_SYMBOL_GPL(iomap_free);
 
 void *kmap_atomic_prot_pfn(unsigned long pfn, pgprot_t prot)
 {
+	pte_t pte = pfn_pte(pfn, prot);
 	unsigned long vaddr;
 	int idx, type;
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:72 @ void *kmap_atomic_prot_pfn(unsigned long
 	type = kmap_atomic_idx_push();
 	idx = type + KM_TYPE_NR * smp_processor_id();
 	vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
-	set_pte(kmap_pte - idx, pfn_pte(pfn, prot));
+	WARN_ON(!pte_none(*(kmap_pte - idx)));
+
+#ifdef CONFIG_PREEMPT_RT_FULL
+	current->kmap_pte[type] = pte;
+#endif
+	set_pte(kmap_pte - idx, pte);
 	arch_flush_lazy_mmu_mode();
 
 	return (void *)vaddr;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:128 @ iounmap_atomic(void __iomem *kvaddr)
 		 * is a bad idea also, in case the page changes cacheability
 		 * attributes or becomes a protected page in a hypervisor.
 		 */
+#ifdef CONFIG_PREEMPT_RT_FULL
+		current->kmap_pte[type] = __pte(0);
+#endif
 		kpte_clear_flush(kmap_pte-idx, vaddr);
 		kmap_atomic_idx_pop();
 	}
Index: linux-5.0.19-rt11/arch/x86/mm/mpx.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/mm/mpx.c
+++ linux-5.0.19-rt11/arch/x86/mm/mpx.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:145 @ int mpx_fault_info(struct mpx_fault_info
 		goto err_out;
 	}
 	/* get bndregs field from current task's xsave area */
-	bndregs = get_xsave_field_ptr(XFEATURE_MASK_BNDREGS);
+	bndregs = get_xsave_field_ptr(XFEATURE_BNDREGS);
 	if (!bndregs) {
 		err = -EINVAL;
 		goto err_out;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:193 @ static __user void *mpx_get_bounds_dir(v
 	 * The bounds directory pointer is stored in a register
 	 * only accessible if we first do an xsave.
 	 */
-	bndcsr = get_xsave_field_ptr(XFEATURE_MASK_BNDCSR);
+	bndcsr = get_xsave_field_ptr(XFEATURE_BNDCSR);
 	if (!bndcsr)
 		return MPX_INVALID_BOUNDS_DIR;
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:379 @ static int do_mpx_bt_fault(void)
 	const struct mpx_bndcsr *bndcsr;
 	struct mm_struct *mm = current->mm;
 
-	bndcsr = get_xsave_field_ptr(XFEATURE_MASK_BNDCSR);
+	bndcsr = get_xsave_field_ptr(XFEATURE_BNDCSR);
 	if (!bndcsr)
 		return -EINVAL;
 	/*
Index: linux-5.0.19-rt11/arch/x86/mm/pkeys.c
===================================================================
--- linux-5.0.19-rt11.orig/arch/x86/mm/pkeys.c
+++ linux-5.0.19-rt11/arch/x86/mm/pkeys.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:21 @
 
 #include <asm/cpufeature.h>             /* boot_cpu_has, ...            */
 #include <asm/mmu_context.h>            /* vma_pkey()                   */
+#include <asm/fpu/internal.h>		/* init_fpstate			*/
 
 int __execute_only_pkey(struct mm_struct *mm)
 {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:43 @ int __execute_only_pkey(struct mm_struct
 	 * dance to set PKRU if we do not need to.  Check it
 	 * first and assume that if the execute-only pkey is
 	 * write-disabled that we do not have to set it
-	 * ourselves.  We need preempt off so that nobody
-	 * can make fpregs inactive.
+	 * ourselves.
 	 */
-	preempt_disable();
 	if (!need_to_set_mm_pkey &&
-	    current->thread.fpu.initialized &&
 	    !__pkru_allows_read(read_pkru(), execute_only_pkey)) {
-		preempt_enable();
 		return execute_only_pkey;
 	}
-	preempt_enable();
 
 	/*
 	 * Set up PKRU so that it denies access for everything
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:130 @ int __arch_override_mprotect_pkey(struct
  * in the process's lifetime will not accidentally get access
  * to data which is pkey-protected later on.
  */
-static
 u32 init_pkru_value = PKRU_AD_KEY( 1) | PKRU_AD_KEY( 2) | PKRU_AD_KEY( 3) |
 		      PKRU_AD_KEY( 4) | PKRU_AD_KEY( 5) | PKRU_AD_KEY( 6) |
 		      PKRU_AD_KEY( 7) | PKRU_AD_KEY( 8) | PKRU_AD_KEY( 9) |
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:146 @ void copy_init_pkru_to_fpregs(void)
 {
 	u32 init_pkru_value_snapshot = READ_ONCE(init_pkru_value);
 	/*
-	 * Any write to PKRU takes it out of the XSAVE 'init
-	 * state' which increases context switch cost.  Avoid
-	 * writing 0 when PKRU was already 0.
-	 */
-	if (!init_pkru_value_snapshot && !read_pkru())
-		return;
-	/*
 	 * Override the PKRU state that came from 'init_fpstate'
 	 * with the baseline from the process.
 	 */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:165 @ static ssize_t init_pkru_read_file(struc
 static ssize_t init_pkru_write_file(struct file *file,
 		 const char __user *user_buf, size_t count, loff_t *ppos)
 {
+	struct pkru_state *pk;
 	char buf[32];
 	ssize_t len;
 	u32 new_init_pkru;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:188 @ static ssize_t init_pkru_write_file(stru
 		return -EINVAL;
 
 	WRITE_ONCE(init_pkru_value, new_init_pkru);
+	pk = get_xsave_addr(&init_fpstate.xsave, XFEATURE_PKRU);
+	if (!pk)
+		return -EINVAL;
+	pk->pkru = new_init_pkru;
 	return count;
 }
 
Index: linux-5.0.19-rt11/arch/xtensa/include/asm/spinlock_types.h
===================================================================
--- linux-5.0.19-rt11.orig/arch/xtensa/include/asm/spinlock_types.h
+++ linux-5.0.19-rt11/arch/xtensa/include/asm/spinlock_types.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:5 @
 #ifndef __ASM_SPINLOCK_TYPES_H
 #define __ASM_SPINLOCK_TYPES_H
 
-#ifndef __LINUX_SPINLOCK_TYPES_H
-# error "please don't include this file directly"
-#endif
-
 typedef struct {
 	volatile unsigned int slock;
 } arch_spinlock_t;
Index: linux-5.0.19-rt11/block/blk-core.c
===================================================================
--- linux-5.0.19-rt11.orig/block/blk-core.c
+++ linux-5.0.19-rt11/block/blk-core.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:450 @ void blk_queue_exit(struct request_queue
 	percpu_ref_put(&q->q_usage_counter);
 }
 
+static void blk_queue_usage_counter_release_wrk(struct work_struct *work)
+{
+	struct request_queue *q =
+		container_of(work, struct request_queue, mq_pcpu_wake);
+
+	wake_up_all(&q->mq_freeze_wq);
+}
+
 static void blk_queue_usage_counter_release(struct percpu_ref *ref)
 {
 	struct request_queue *q =
 		container_of(ref, struct request_queue, q_usage_counter);
 
-	wake_up_all(&q->mq_freeze_wq);
+	if (wq_has_sleeper(&q->mq_freeze_wq))
+		schedule_work(&q->mq_pcpu_wake);
 }
 
 static void blk_rq_timed_out_timer(struct timer_list *t)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:536 @ struct request_queue *blk_alloc_queue_no
 	spin_lock_init(&q->queue_lock);
 
 	init_waitqueue_head(&q->mq_freeze_wq);
+	INIT_WORK(&q->mq_pcpu_wake, blk_queue_usage_counter_release_wrk);
 
 	/*
 	 * Init percpu_ref in atomic mode so that it's faster to shutdown.
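
The hunk above defers the wake-up out of the percpu_ref release callback:
the callback may run from a non-preemptible context, while waking up
mq_freeze_wq takes the waitqueue lock, a sleeping lock on RT, so the
wake-up is pushed to a work item and only scheduled when somebody is
actually waiting.  A minimal sketch of this defer-the-wakeup pattern
(struct and function names are illustrative, not the block layer's own):

	#include <linux/wait.h>
	#include <linux/workqueue.h>

	struct foo {
		wait_queue_head_t	waiters;
		struct work_struct	wake_work;
	};

	static void foo_wake_work(struct work_struct *work)
	{
		struct foo *f = container_of(work, struct foo, wake_work);

		/* process context: taking the waitqueue lock may sleep on RT */
		wake_up_all(&f->waiters);
	}

	static void foo_init(struct foo *f)
	{
		init_waitqueue_head(&f->waiters);
		INIT_WORK(&f->wake_work, foo_wake_work);
	}

	/* called from a context that must not sleep */
	static void foo_release(struct foo *f)
	{
		if (wq_has_sleeper(&f->waiters))
			schedule_work(&f->wake_work);
	}
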
Index: linux-5.0.19-rt11/block/blk-ioc.c
===================================================================
--- linux-5.0.19-rt11.orig/block/blk-ioc.c
+++ linux-5.0.19-rt11/block/blk-ioc.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:12 @
 #include <linux/blkdev.h>
 #include <linux/slab.h>
 #include <linux/sched/task.h>
+#include <linux/delay.h>
 
 #include "blk.h"
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:119 @ static void ioc_release_fn(struct work_s
 			spin_unlock(&q->queue_lock);
 		} else {
 			spin_unlock_irqrestore(&ioc->lock, flags);
-			cpu_relax();
+			cpu_chill();
 			spin_lock_irqsave_nested(&ioc->lock, flags, 1);
 		}
 	}
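
cpu_chill() is the RT-tree replacement for this busy-wait (hence the new
<linux/delay.h> include above): when the lock holder itself can be
preempted, spinning with cpu_relax() can livelock, so the waiter sleeps
for a short time instead.  A sketch of the retry-loop shape, assuming an
illustrative try_acquire_resource() helper:

	#include <linux/delay.h>	/* cpu_chill(); maps to cpu_relax() on !RT builds */

	static void wait_for_resource(void)
	{
		while (!try_acquire_resource())	/* illustrative helper */
			cpu_chill();		/* let the current holder make progress */
	}
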
Index: linux-5.0.19-rt11/block/blk-mq.c
===================================================================
--- linux-5.0.19-rt11.orig/block/blk-mq.c
+++ linux-5.0.19-rt11/block/blk-mq.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:608 @ static void __blk_mq_complete_request(st
 		return;
 	}
 
-	cpu = get_cpu();
+	cpu = get_cpu_light();
+	/*
+	 * Avoid SMP function calls for completions because they acquire
+	 * sleeping spinlocks on RT.
+	 */
+#ifdef CONFIG_PREEMPT_RT_FULL
+	shared = true;
+#else
 	if (!test_bit(QUEUE_FLAG_SAME_FORCE, &q->queue_flags))
 		shared = cpus_share_cache(cpu, ctx->cpu);
+#endif
 
 	if (cpu != ctx->cpu && !shared && cpu_online(ctx->cpu)) {
 		rq->csd.func = __blk_mq_complete_request_remote;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:628 @ static void __blk_mq_complete_request(st
 	} else {
 		q->mq_ops->complete(rq);
 	}
-	put_cpu();
+	put_cpu_light();
 }
 
 static void hctx_unlock(struct blk_mq_hw_ctx *hctx, int srcu_idx)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1470 @ static void __blk_mq_delay_run_hw_queue(
 		return;
 
 	if (!async && !(hctx->flags & BLK_MQ_F_BLOCKING)) {
-		int cpu = get_cpu();
+		int cpu = get_cpu_light();
 		if (cpumask_test_cpu(cpu, hctx->cpumask)) {
 			__blk_mq_run_hw_queue(hctx);
-			put_cpu();
+			put_cpu_light();
 			return;
 		}
 
-		put_cpu();
+		put_cpu_light();
 	}
 
 	kblockd_mod_delayed_work_on(blk_mq_hctx_next_cpu(hctx), &hctx->run_work,
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3405 @ static bool blk_mq_poll_hybrid_sleep(str
 	kt = nsecs;
 
 	mode = HRTIMER_MODE_REL;
-	hrtimer_init_on_stack(&hs.timer, CLOCK_MONOTONIC, mode);
+	hrtimer_init_sleeper_on_stack(&hs, CLOCK_MONOTONIC, mode, current);
 	hrtimer_set_expires(&hs.timer, kt);
 
-	hrtimer_init_sleeper(&hs, current);
 	do {
 		if (blk_mq_rq_state(rq) == MQ_RQ_COMPLETE)
 			break;
Index: linux-5.0.19-rt11/block/blk-mq.h
===================================================================
--- linux-5.0.19-rt11.orig/block/blk-mq.h
+++ linux-5.0.19-rt11/block/blk-mq.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:154 @ static inline struct blk_mq_ctx *__blk_m
  */
 static inline struct blk_mq_ctx *blk_mq_get_ctx(struct request_queue *q)
 {
-	return __blk_mq_get_ctx(q, get_cpu());
+	return __blk_mq_get_ctx(q, get_cpu_light());
 }
 
 static inline void blk_mq_put_ctx(struct blk_mq_ctx *ctx)
 {
-	put_cpu();
+	put_cpu_light();
 }
 
 struct blk_mq_alloc_data {
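
get_cpu_light()/put_cpu_light() are the RT-tree counterparts of
get_cpu()/put_cpu(): they only disable migration instead of preemption,
so the returned CPU number stays stable while the section is still allowed
to block on a spinlock_t (a sleeping lock on RT).  A minimal sketch of the
pattern, with do_per_cpu_work() as an illustrative placeholder:

	static void per_cpu_section(void)
	{
		int cpu = get_cpu_light();	/* pinned to this CPU, may still sleep */

		do_per_cpu_work(cpu);		/* illustrative placeholder */

		put_cpu_light();
	}
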
Index: linux-5.0.19-rt11/block/blk-softirq.c
===================================================================
--- linux-5.0.19-rt11.orig/block/blk-softirq.c
+++ linux-5.0.19-rt11/block/blk-softirq.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:56 @ static void trigger_softirq(void *data)
 		raise_softirq_irqoff(BLOCK_SOFTIRQ);
 
 	local_irq_restore(flags);
+	preempt_check_resched_rt();
 }
 
 /*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:95 @ static int blk_softirq_cpu_dead(unsigned
 			 this_cpu_ptr(&blk_cpu_done));
 	raise_softirq_irqoff(BLOCK_SOFTIRQ);
 	local_irq_enable();
+	preempt_check_resched_rt();
 
 	return 0;
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:147 @ do_local:
 		goto do_local;
 
 	local_irq_restore(flags);
+	preempt_check_resched_rt();
 }
 
 static __init int blk_softirq_init(void)
Index: linux-5.0.19-rt11/crypto/cryptd.c
===================================================================
--- linux-5.0.19-rt11.orig/crypto/cryptd.c
+++ linux-5.0.19-rt11/crypto/cryptd.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:42 @ MODULE_PARM_DESC(cryptd_max_cpu_qlen, "S
 struct cryptd_cpu_queue {
 	struct crypto_queue queue;
 	struct work_struct work;
+	spinlock_t qlock;
 };
 
 struct cryptd_queue {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:121 @ static int cryptd_init_queue(struct cryp
 		cpu_queue = per_cpu_ptr(queue->cpu_queue, cpu);
 		crypto_init_queue(&cpu_queue->queue, max_cpu_qlen);
 		INIT_WORK(&cpu_queue->work, cryptd_queue_worker);
+		spin_lock_init(&cpu_queue->qlock);
 	}
 	pr_info("cryptd: max_cpu_qlen set to %d\n", max_cpu_qlen);
 	return 0;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:146 @ static int cryptd_enqueue_request(struct
 	struct cryptd_cpu_queue *cpu_queue;
 	atomic_t *refcnt;
 
-	cpu = get_cpu();
-	cpu_queue = this_cpu_ptr(queue->cpu_queue);
+	cpu_queue = raw_cpu_ptr(queue->cpu_queue);
+	spin_lock_bh(&cpu_queue->qlock);
+	cpu = smp_processor_id();
+
 	err = crypto_enqueue_request(&cpu_queue->queue, request);
 
 	refcnt = crypto_tfm_ctx(request->tfm);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:165 @ static int cryptd_enqueue_request(struct
 	atomic_inc(refcnt);
 
 out_put_cpu:
-	put_cpu();
+	spin_unlock_bh(&cpu_queue->qlock);
 
 	return err;
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:181 @ static void cryptd_queue_worker(struct w
 	cpu_queue = container_of(work, struct cryptd_cpu_queue, work);
 	/*
 	 * Only handle one request at a time to avoid hogging crypto workqueue.
-	 * preempt_disable/enable is used to prevent being preempted by
-	 * cryptd_enqueue_request(). local_bh_disable/enable is used to prevent
-	 * cryptd_enqueue_request() being accessed from software interrupts.
 	 */
-	local_bh_disable();
-	preempt_disable();
+	spin_lock_bh(&cpu_queue->qlock);
 	backlog = crypto_get_backlog(&cpu_queue->queue);
 	req = crypto_dequeue_request(&cpu_queue->queue);
-	preempt_enable();
-	local_bh_enable();
+	spin_unlock_bh(&cpu_queue->qlock);
 
 	if (!req)
 		return;
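
The cryptd conversion above replaces the preempt/bh-disabled window with a
spinlock_t embedded in the per-CPU queue, so enqueue and the worker
serialize through a lock that may be taken from preemptible context on RT.
A compact sketch of the per-CPU-struct-with-its-own-lock pattern (names
are illustrative); as in cryptd_init_queue() above, each per-CPU instance
still needs a one-time spin_lock_init()/INIT_LIST_HEAD() at setup:

	#include <linux/list.h>
	#include <linux/percpu.h>
	#include <linux/spinlock.h>

	struct pcpu_q {
		struct list_head	items;
		spinlock_t		lock;	/* enqueue vs. worker */
	};

	static DEFINE_PER_CPU(struct pcpu_q, pcpu_queues);

	static void pcpu_q_add(struct list_head *item)
	{
		/* any CPU's queue will do, so the task does not need pinning */
		struct pcpu_q *q = raw_cpu_ptr(&pcpu_queues);

		spin_lock_bh(&q->lock);
		list_add_tail(item, &q->items);
		spin_unlock_bh(&q->lock);
	}
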
Index: linux-5.0.19-rt11/crypto/crypto_user_stat.c
===================================================================
--- linux-5.0.19-rt11.orig/crypto/crypto_user_stat.c
+++ linux-5.0.19-rt11/crypto/crypto_user_stat.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:23 @
 
 #define null_terminated(x)	(strnlen(x, sizeof(x)) < sizeof(x))
 
-static DEFINE_MUTEX(crypto_cfg_mutex);
-
 extern struct sock *crypto_nlsk;
 
 struct crypto_dump_info {
Index: linux-5.0.19-rt11/crypto/scompress.c
===================================================================
--- linux-5.0.19-rt11.orig/crypto/scompress.c
+++ linux-5.0.19-rt11/crypto/scompress.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:27 @
 #include <linux/cryptouser.h>
 #include <net/netlink.h>
 #include <linux/scatterlist.h>
+#include <linux/locallock.h>
 #include <crypto/scatterwalk.h>
 #include <crypto/internal/acompress.h>
 #include <crypto/internal/scompress.h>
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:38 @ static void * __percpu *scomp_src_scratc
 static void * __percpu *scomp_dst_scratches;
 static int scomp_scratch_users;
 static DEFINE_MUTEX(scomp_lock);
+static DEFINE_LOCAL_IRQ_LOCK(scomp_scratches_lock);
 
 #ifdef CONFIG_NET
 static int crypto_scomp_report(struct sk_buff *skb, struct crypto_alg *alg)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:148 @ static int scomp_acomp_comp_decomp(struc
 	void **tfm_ctx = acomp_tfm_ctx(tfm);
 	struct crypto_scomp *scomp = *tfm_ctx;
 	void **ctx = acomp_request_ctx(req);
-	const int cpu = get_cpu();
+	const int cpu = local_lock_cpu(scomp_scratches_lock);
 	u8 *scratch_src = *per_cpu_ptr(scomp_src_scratches, cpu);
 	u8 *scratch_dst = *per_cpu_ptr(scomp_dst_scratches, cpu);
 	int ret;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:183 @ static int scomp_acomp_comp_decomp(struc
 					 1);
 	}
 out:
-	put_cpu();
+	local_unlock_cpu(scomp_scratches_lock);
 	return ret;
 }
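
get_cpu() cannot protect the per-CPU scratch buffers here because the
(de)compression may sleep; the locallock added above provides a per-CPU
lock that is a sleeping lock on RT while still pinning the task and
handing back the CPU number.  A small usage sketch of the
<linux/locallock.h> API (the scratch_buf per-CPU pointer is illustrative):

	#include <linux/locallock.h>
	#include <linux/percpu.h>

	static DEFINE_PER_CPU(u8 *, scratch_buf);
	static DEFINE_LOCAL_IRQ_LOCK(scratch_lock);

	static void use_scratch(void)
	{
		int cpu = local_lock_cpu(scratch_lock);	/* pins the task, may sleep on RT */
		u8 *buf = per_cpu(scratch_buf, cpu);

		if (buf)
			buf[0] = 0;	/* stand-in for real work on the buffer */

		local_unlock_cpu(scratch_lock);
	}
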
 
Index: linux-5.0.19-rt11/drivers/block/zram/zcomp.c
===================================================================
--- linux-5.0.19-rt11.orig/drivers/block/zram/zcomp.c
+++ linux-5.0.19-rt11/drivers/block/zram/zcomp.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:119 @ ssize_t zcomp_available_show(const char
 
 struct zcomp_strm *zcomp_stream_get(struct zcomp *comp)
 {
-	return *get_cpu_ptr(comp->stream);
+	struct zcomp_strm *zstrm;
+
+	zstrm = *get_local_ptr(comp->stream);
+	spin_lock(&zstrm->zcomp_lock);
+	return zstrm;
 }
 
 void zcomp_stream_put(struct zcomp *comp)
 {
-	put_cpu_ptr(comp->stream);
+	struct zcomp_strm *zstrm;
+
+	zstrm = *this_cpu_ptr(comp->stream);
+	spin_unlock(&zstrm->zcomp_lock);
+	put_local_ptr(zstrm);
 }
 
 int zcomp_compress(struct zcomp_strm *zstrm,
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:182 @ int zcomp_cpu_up_prepare(unsigned int cp
 		pr_err("Can't allocate a compression stream\n");
 		return -ENOMEM;
 	}
+	spin_lock_init(&zstrm->zcomp_lock);
 	*per_cpu_ptr(comp->stream, cpu) = zstrm;
 	return 0;
 }
Index: linux-5.0.19-rt11/drivers/block/zram/zcomp.h
===================================================================
--- linux-5.0.19-rt11.orig/drivers/block/zram/zcomp.h
+++ linux-5.0.19-rt11/drivers/block/zram/zcomp.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:17 @ struct zcomp_strm {
 	/* compression/decompression buffer */
 	void *buffer;
 	struct crypto_comp *tfm;
+	spinlock_t zcomp_lock;
 };
 
 /* dynamic per-device compression frontend */
Index: linux-5.0.19-rt11/drivers/block/zram/zram_drv.c
===================================================================
--- linux-5.0.19-rt11.orig/drivers/block/zram/zram_drv.c
+++ linux-5.0.19-rt11/drivers/block/zram/zram_drv.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:58 @ static void zram_free_page(struct zram *
 static int zram_bvec_read(struct zram *zram, struct bio_vec *bvec,
 				u32 index, int offset, struct bio *bio);
 
+#ifdef CONFIG_PREEMPT_RT_BASE
+static void zram_meta_init_table_locks(struct zram *zram, size_t num_pages)
+{
+	size_t index;
+
+	for (index = 0; index < num_pages; index++)
+		spin_lock_init(&zram->table[index].lock);
+}
+
+static int zram_slot_trylock(struct zram *zram, u32 index)
+{
+	int ret;
+
+	ret = spin_trylock(&zram->table[index].lock);
+	if (ret)
+		__set_bit(ZRAM_LOCK, &zram->table[index].flags);
+	return ret;
+}
+
+static void zram_slot_lock(struct zram *zram, u32 index)
+{
+	spin_lock(&zram->table[index].lock);
+	__set_bit(ZRAM_LOCK, &zram->table[index].flags);
+}
+
+static void zram_slot_unlock(struct zram *zram, u32 index)
+{
+	__clear_bit(ZRAM_LOCK, &zram->table[index].flags);
+	spin_unlock(&zram->table[index].lock);
+}
+
+#else
+
+static void zram_meta_init_table_locks(struct zram *zram, size_t num_pages) { }
 
 static int zram_slot_trylock(struct zram *zram, u32 index)
 {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:107 @ static void zram_slot_unlock(struct zram
 {
 	bit_spin_unlock(ZRAM_LOCK, &zram->table[index].flags);
 }
+#endif
 
 static inline bool init_done(struct zram *zram)
 {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1192 @ static bool zram_meta_alloc(struct zram
 
 	if (!huge_class_size)
 		huge_class_size = zs_huge_class_size(zram->mem_pool);
+	zram_meta_init_table_locks(zram, num_pages);
 	return true;
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1255 @ static int __zram_bvec_read(struct zram
 	unsigned long handle;
 	unsigned int size;
 	void *src, *dst;
+	struct zcomp_strm *zstrm;
 
 	zram_slot_lock(zram, index);
 	if (zram_test_flag(zram, index, ZRAM_WB)) {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1286 @ static int __zram_bvec_read(struct zram
 
 	size = zram_get_obj_size(zram, index);
 
+	zstrm = zcomp_stream_get(zram->comp);
 	src = zs_map_object(zram->mem_pool, handle, ZS_MM_RO);
 	if (size == PAGE_SIZE) {
 		dst = kmap_atomic(page);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1294 @ static int __zram_bvec_read(struct zram
 		kunmap_atomic(dst);
 		ret = 0;
 	} else {
-		struct zcomp_strm *zstrm = zcomp_stream_get(zram->comp);
 
 		dst = kmap_atomic(page);
 		ret = zcomp_decompress(zstrm, src, size, dst);
 		kunmap_atomic(dst);
-		zcomp_stream_put(zram->comp);
 	}
 	zs_unmap_object(zram->mem_pool, handle);
+	zcomp_stream_put(zram->comp);
 	zram_slot_unlock(zram, index);
 
 	/* Should NEVER happen. Return bio error if it does. */
Index: linux-5.0.19-rt11/drivers/block/zram/zram_drv.h
===================================================================
--- linux-5.0.19-rt11.orig/drivers/block/zram/zram_drv.h
+++ linux-5.0.19-rt11/drivers/block/zram/zram_drv.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:66 @ struct zram_table_entry {
 		unsigned long element;
 	};
 	unsigned long flags;
+	spinlock_t lock;
 #ifdef CONFIG_ZRAM_MEMORY_TRACKING
 	ktime_t ac_time;
 #endif
Index: linux-5.0.19-rt11/drivers/char/random.c
===================================================================
--- linux-5.0.19-rt11.orig/drivers/char/random.c
+++ linux-5.0.19-rt11/drivers/char/random.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1234 @ static __u32 get_reg(struct fast_pool *f
 	return *ptr;
 }
 
-void add_interrupt_randomness(int irq, int irq_flags)
+void add_interrupt_randomness(int irq, int irq_flags, __u64 ip)
 {
 	struct entropy_store	*r;
 	struct fast_pool	*fast_pool = this_cpu_ptr(&irq_randomness);
-	struct pt_regs		*regs = get_irq_regs();
 	unsigned long		now = jiffies;
 	cycles_t		cycles = random_get_entropy();
 	__u32			c_high, j_high;
-	__u64			ip;
 	unsigned long		seed;
 	int			credit = 0;
 
 	if (cycles == 0)
-		cycles = get_reg(fast_pool, regs);
+		cycles = get_reg(fast_pool, NULL);
 	c_high = (sizeof(cycles) > 4) ? cycles >> 32 : 0;
 	j_high = (sizeof(now) > 4) ? now >> 32 : 0;
 	fast_pool->pool[0] ^= cycles ^ j_high ^ irq;
 	fast_pool->pool[1] ^= now ^ c_high;
-	ip = regs ? instruction_pointer(regs) : _RET_IP_;
+	if (!ip)
+		ip = _RET_IP_;
 	fast_pool->pool[2] ^= ip;
 	fast_pool->pool[3] ^= (sizeof(ip) > 4) ? ip >> 32 :
-		get_reg(fast_pool, regs);
+		get_reg(fast_pool, NULL);
 
 	fast_mix(fast_pool);
 	add_interrupt_bench(cycles);
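
add_interrupt_randomness() now receives the instruction pointer from its
caller: with forced interrupt threading the function no longer necessarily
runs in hard-interrupt context, where get_irq_regs() would be meaningful,
so the caller captures the value where it is still valid (the Hyper-V
hunks further down do exactly this).  A sketch of a caller under the new
prototype; the handler name is illustrative:

	#include <linux/interrupt.h>
	#include <linux/ptrace.h>
	#include <linux/random.h>
	#include <asm/irq_regs.h>

	static irqreturn_t demo_handler(int irq, void *dev_id)
	{
		struct pt_regs *regs = get_irq_regs();
		u64 ip = regs ? instruction_pointer(regs) : 0;

		/* ... handle the device interrupt ... */

		add_interrupt_randomness(irq, 0, ip);
		return IRQ_HANDLED;
	}
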
Index: linux-5.0.19-rt11/drivers/char/tpm/tpm-dev-common.c
===================================================================
--- linux-5.0.19-rt11.orig/drivers/char/tpm/tpm-dev-common.c
+++ linux-5.0.19-rt11/drivers/char/tpm/tpm-dev-common.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:28 @
 #include "tpm-dev.h"
 
 static struct workqueue_struct *tpm_dev_wq;
-static DEFINE_MUTEX(tpm_dev_wq_lock);
 
 static void tpm_async_work(struct work_struct *work)
 {
Index: linux-5.0.19-rt11/drivers/char/tpm/tpm_tis.c
===================================================================
--- linux-5.0.19-rt11.orig/drivers/char/tpm/tpm_tis.c
+++ linux-5.0.19-rt11/drivers/char/tpm/tpm_tis.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:56 @ static inline struct tpm_tis_tcg_phy *to
 	return container_of(data, struct tpm_tis_tcg_phy, priv);
 }
 
+#ifdef CONFIG_PREEMPT_RT_FULL
+/*
+ * Flushes previous write operations to the chip so that subsequent
+ * ioread*()s won't stall a CPU.
+ */
+static inline void tpm_tis_flush(void __iomem *iobase)
+{
+	ioread8(iobase + TPM_ACCESS(0));
+}
+#else
+#define tpm_tis_flush(iobase) do { } while (0)
+#endif
+
+static inline void tpm_tis_iowrite8(u8 b, void __iomem *iobase, u32 addr)
+{
+	iowrite8(b, iobase + addr);
+	tpm_tis_flush(iobase);
+}
+
+static inline void tpm_tis_iowrite32(u32 b, void __iomem *iobase, u32 addr)
+{
+	iowrite32(b, iobase + addr);
+	tpm_tis_flush(iobase);
+}
+
 static bool interrupts = true;
 module_param(interrupts, bool, 0444);
 MODULE_PARM_DESC(interrupts, "Enable interrupts");
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:178 @ static int tpm_tcg_write_bytes(struct tp
 	struct tpm_tis_tcg_phy *phy = to_tpm_tis_tcg_phy(data);
 
 	while (len--)
-		iowrite8(*value++, phy->iobase + addr);
+		tpm_tis_iowrite8(*value++, phy->iobase, addr);
 
 	return 0;
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:205 @ static int tpm_tcg_write32(struct tpm_ti
 {
 	struct tpm_tis_tcg_phy *phy = to_tpm_tis_tcg_phy(data);
 
-	iowrite32(value, phy->iobase + addr);
+	tpm_tis_iowrite32(value, phy->iobase, addr);
 
 	return 0;
 }
Index: linux-5.0.19-rt11/drivers/clocksource/Kconfig
===================================================================
--- linux-5.0.19-rt11.orig/drivers/clocksource/Kconfig
+++ linux-5.0.19-rt11/drivers/clocksource/Kconfig
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:402 @ config ARMV7M_SYSTICK
 	  This options enables support for the ARMv7M system timer unit
 
 config ATMEL_PIT
+	bool "Atmel PIT support" if COMPILE_TEST
+	depends on HAS_IOMEM
 	select TIMER_OF if OF
-	def_bool SOC_AT91SAM9 || SOC_SAMA5
+	help
+	  Support for the Periodic Interval Timer found on Atmel SoCs.
 
 config ATMEL_ST
 	bool "Atmel ST timer support" if COMPILE_TEST
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:416 @ config ATMEL_ST
 	help
 	  Support for the Atmel ST timer.
 
+config ATMEL_TCB_CLKSRC
+	bool "Atmel TC Block timer driver" if COMPILE_TEST
+	depends on HAS_IOMEM && ATMEL_TCLIB
+	select TIMER_OF if OF
+	help
+	  Support for Timer Counter Blocks on Atmel SoCs.
+
+config ATMEL_TCB_CLKSRC_USE_SLOW_CLOCK
+	bool "TC Block use 32 KiHz clock"
+	depends on ATMEL_TCB_CLKSRC
+	default y
+	help
+	  Select this to use the 32 KiHz base clock rate as the TC block clock.
+
 config CLKSRC_EXYNOS_MCT
 	bool "Exynos multi core timer driver" if COMPILE_TEST
 	depends on ARM || ARM64
Index: linux-5.0.19-rt11/drivers/clocksource/Makefile
===================================================================
--- linux-5.0.19-rt11.orig/drivers/clocksource/Makefile
+++ linux-5.0.19-rt11/drivers/clocksource/Makefile
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:6 @ obj-$(CONFIG_TIMER_OF)		+= timer-of.o
 obj-$(CONFIG_TIMER_PROBE)	+= timer-probe.o
 obj-$(CONFIG_ATMEL_PIT)		+= timer-atmel-pit.o
 obj-$(CONFIG_ATMEL_ST)		+= timer-atmel-st.o
-obj-$(CONFIG_ATMEL_TCB_CLKSRC)	+= tcb_clksrc.o
+obj-$(CONFIG_ATMEL_TCB_CLKSRC)	+= timer-atmel-tcb.o
 obj-$(CONFIG_X86_PM_TIMER)	+= acpi_pm.o
 obj-$(CONFIG_SCx200HR_TIMER)	+= scx200_hrt.o
 obj-$(CONFIG_CS5535_CLOCK_EVENT_SRC)	+= cs5535-clockevt.o
Index: linux-5.0.19-rt11/drivers/clocksource/tcb_clksrc.c
===================================================================
--- linux-5.0.19-rt11.orig/drivers/clocksource/tcb_clksrc.c
+++ /dev/null
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1 @
-// SPDX-License-Identifier: GPL-2.0
-#include <linux/init.h>
-#include <linux/clocksource.h>
-#include <linux/clockchips.h>
-#include <linux/interrupt.h>
-#include <linux/irq.h>
-
-#include <linux/clk.h>
-#include <linux/err.h>
-#include <linux/ioport.h>
-#include <linux/io.h>
-#include <linux/platform_device.h>
-#include <linux/syscore_ops.h>
-#include <linux/atmel_tc.h>
-
-
-/*
- * We're configured to use a specific TC block, one that's not hooked
- * up to external hardware, to provide a time solution:
- *
- *   - Two channels combine to create a free-running 32 bit counter
- *     with a base rate of 5+ MHz, packaged as a clocksource (with
- *     resolution better than 200 nsec).
- *   - Some chips support 32 bit counter. A single channel is used for
- *     this 32 bit free-running counter. the second channel is not used.
- *
- *   - The third channel may be used to provide a 16-bit clockevent
- *     source, used in either periodic or oneshot mode.  This runs
- *     at 32 KiHZ, and can handle delays of up to two seconds.
- *
- * A boot clocksource and clockevent source are also currently needed,
- * unless the relevant platforms (ARM/AT91, AVR32/AT32) are changed so
- * this code can be used when init_timers() is called, well before most
- * devices are set up.  (Some low end AT91 parts, which can run uClinux,
- * have only the timers in one TC block... they currently don't support
- * the tclib code, because of that initialization issue.)
- *
- * REVISIT behavior during system suspend states... we should disable
- * all clocks and save the power.  Easily done for clockevent devices,
- * but clocksources won't necessarily get the needed notifications.
- * For deeper system sleep states, this will be mandatory...
- */
-
-static void __iomem *tcaddr;
-static struct
-{
-	u32 cmr;
-	u32 imr;
-	u32 rc;
-	bool clken;
-} tcb_cache[3];
-static u32 bmr_cache;
-
-static u64 tc_get_cycles(struct clocksource *cs)
-{
-	unsigned long	flags;
-	u32		lower, upper;
-
-	raw_local_irq_save(flags);
-	do {
-		upper = readl_relaxed(tcaddr + ATMEL_TC_REG(1, CV));
-		lower = readl_relaxed(tcaddr + ATMEL_TC_REG(0, CV));
-	} while (upper != readl_relaxed(tcaddr + ATMEL_TC_REG(1, CV)));
-
-	raw_local_irq_restore(flags);
-	return (upper << 16) | lower;
-}
-
-static u64 tc_get_cycles32(struct clocksource *cs)
-{
-	return readl_relaxed(tcaddr + ATMEL_TC_REG(0, CV));
-}
-
-void tc_clksrc_suspend(struct clocksource *cs)
-{
-	int i;
-
-	for (i = 0; i < ARRAY_SIZE(tcb_cache); i++) {
-		tcb_cache[i].cmr = readl(tcaddr + ATMEL_TC_REG(i, CMR));
-		tcb_cache[i].imr = readl(tcaddr + ATMEL_TC_REG(i, IMR));
-		tcb_cache[i].rc = readl(tcaddr + ATMEL_TC_REG(i, RC));
-		tcb_cache[i].clken = !!(readl(tcaddr + ATMEL_TC_REG(i, SR)) &
-					ATMEL_TC_CLKSTA);
-	}
-
-	bmr_cache = readl(tcaddr + ATMEL_TC_BMR);
-}
-
-void tc_clksrc_resume(struct clocksource *cs)
-{
-	int i;
-
-	for (i = 0; i < ARRAY_SIZE(tcb_cache); i++) {
-		/* Restore registers for the channel, RA and RB are not used  */
-		writel(tcb_cache[i].cmr, tcaddr + ATMEL_TC_REG(i, CMR));
-		writel(tcb_cache[i].rc, tcaddr + ATMEL_TC_REG(i, RC));
-		writel(0, tcaddr + ATMEL_TC_REG(i, RA));
-		writel(0, tcaddr + ATMEL_TC_REG(i, RB));
-		/* Disable all the interrupts */
-		writel(0xff, tcaddr + ATMEL_TC_REG(i, IDR));
-		/* Reenable interrupts that were enabled before suspending */
-		writel(tcb_cache[i].imr, tcaddr + ATMEL_TC_REG(i, IER));
-		/* Start the clock if it was used */
-		if (tcb_cache[i].clken)
-			writel(ATMEL_TC_CLKEN, tcaddr + ATMEL_TC_REG(i, CCR));
-	}
-
-	/* Dual channel, chain channels */
-	writel(bmr_cache, tcaddr + ATMEL_TC_BMR);
-	/* Finally, trigger all the channels*/
-	writel(ATMEL_TC_SYNC, tcaddr + ATMEL_TC_BCR);
-}
-
-static struct clocksource clksrc = {
-	.name           = "tcb_clksrc",
-	.rating         = 200,
-	.read           = tc_get_cycles,
-	.mask           = CLOCKSOURCE_MASK(32),
-	.flags		= CLOCK_SOURCE_IS_CONTINUOUS,
-	.suspend	= tc_clksrc_suspend,
-	.resume		= tc_clksrc_resume,
-};
-
-#ifdef CONFIG_GENERIC_CLOCKEVENTS
-
-struct tc_clkevt_device {
-	struct clock_event_device	clkevt;
-	struct clk			*clk;
-	void __iomem			*regs;
-};
-
-static struct tc_clkevt_device *to_tc_clkevt(struct clock_event_device *clkevt)
-{
-	return container_of(clkevt, struct tc_clkevt_device, clkevt);
-}
-
-/* For now, we always use the 32K clock ... this optimizes for NO_HZ,
- * because using one of the divided clocks would usually mean the
- * tick rate can never be less than several dozen Hz (vs 0.5 Hz).
- *
- * A divided clock could be good for high resolution timers, since
- * 30.5 usec resolution can seem "low".
- */
-static u32 timer_clock;
-
-static int tc_shutdown(struct clock_event_device *d)
-{
-	struct tc_clkevt_device *tcd = to_tc_clkevt(d);
-	void __iomem		*regs = tcd->regs;
-
-	writel(0xff, regs + ATMEL_TC_REG(2, IDR));
-	writel(ATMEL_TC_CLKDIS, regs + ATMEL_TC_REG(2, CCR));
-	if (!clockevent_state_detached(d))
-		clk_disable(tcd->clk);
-
-	return 0;
-}
-
-static int tc_set_oneshot(struct clock_event_device *d)
-{
-	struct tc_clkevt_device *tcd = to_tc_clkevt(d);
-	void __iomem		*regs = tcd->regs;
-
-	if (clockevent_state_oneshot(d) || clockevent_state_periodic(d))
-		tc_shutdown(d);
-
-	clk_enable(tcd->clk);
-
-	/* slow clock, count up to RC, then irq and stop */
-	writel(timer_clock | ATMEL_TC_CPCSTOP | ATMEL_TC_WAVE |
-		     ATMEL_TC_WAVESEL_UP_AUTO, regs + ATMEL_TC_REG(2, CMR));
-	writel(ATMEL_TC_CPCS, regs + ATMEL_TC_REG(2, IER));
-
-	/* set_next_event() configures and starts the timer */
-	return 0;
-}
-
-static int tc_set_periodic(struct clock_event_device *d)
-{
-	struct tc_clkevt_device *tcd = to_tc_clkevt(d);
-	void __iomem		*regs = tcd->regs;
-
-	if (clockevent_state_oneshot(d) || clockevent_state_periodic(d))
-		tc_shutdown(d);
-
-	/* By not making the gentime core emulate periodic mode on top
-	 * of oneshot, we get lower overhead and improved accuracy.
-	 */
-	clk_enable(tcd->clk);
-
-	/* slow clock, count up to RC, then irq and restart */
-	writel(timer_clock | ATMEL_TC_WAVE | ATMEL_TC_WAVESEL_UP_AUTO,
-		     regs + ATMEL_TC_REG(2, CMR));
-	writel((32768 + HZ / 2) / HZ, tcaddr + ATMEL_TC_REG(2, RC));
-
-	/* Enable clock and interrupts on RC compare */
-	writel(ATMEL_TC_CPCS, regs + ATMEL_TC_REG(2, IER));
-
-	/* go go gadget! */
-	writel(ATMEL_TC_CLKEN | ATMEL_TC_SWTRG, regs +
-		     ATMEL_TC_REG(2, CCR));
-	return 0;
-}
-
-static int tc_next_event(unsigned long delta, struct clock_event_device *d)
-{
-	writel_relaxed(delta, tcaddr + ATMEL_TC_REG(2, RC));
-
-	/* go go gadget! */
-	writel_relaxed(ATMEL_TC_CLKEN | ATMEL_TC_SWTRG,
-			tcaddr + ATMEL_TC_REG(2, CCR));
-	return 0;
-}
-
-static struct tc_clkevt_device clkevt = {
-	.clkevt	= {
-		.name			= "tc_clkevt",
-		.features		= CLOCK_EVT_FEAT_PERIODIC |
-					  CLOCK_EVT_FEAT_ONESHOT,
-		/* Should be lower than at91rm9200's system timer */
-		.rating			= 125,
-		.set_next_event		= tc_next_event,
-		.set_state_shutdown	= tc_shutdown,
-		.set_state_periodic	= tc_set_periodic,
-		.set_state_oneshot	= tc_set_oneshot,
-	},
-};
-
-static irqreturn_t ch2_irq(int irq, void *handle)
-{
-	struct tc_clkevt_device	*dev = handle;
-	unsigned int		sr;
-
-	sr = readl_relaxed(dev->regs + ATMEL_TC_REG(2, SR));
-	if (sr & ATMEL_TC_CPCS) {
-		dev->clkevt.event_handler(&dev->clkevt);
-		return IRQ_HANDLED;
-	}
-
-	return IRQ_NONE;
-}
-
-static int __init setup_clkevents(struct atmel_tc *tc, int clk32k_divisor_idx)
-{
-	int ret;
-	struct clk *t2_clk = tc->clk[2];
-	int irq = tc->irq[2];
-
-	ret = clk_prepare_enable(tc->slow_clk);
-	if (ret)
-		return ret;
-
-	/* try to enable t2 clk to avoid future errors in mode change */
-	ret = clk_prepare_enable(t2_clk);
-	if (ret) {
-		clk_disable_unprepare(tc->slow_clk);
-		return ret;
-	}
-
-	clk_disable(t2_clk);
-
-	clkevt.regs = tc->regs;
-	clkevt.clk = t2_clk;
-
-	timer_clock = clk32k_divisor_idx;
-
-	clkevt.clkevt.cpumask = cpumask_of(0);
-
-	ret = request_irq(irq, ch2_irq, IRQF_TIMER, "tc_clkevt", &clkevt);
-	if (ret) {
-		clk_unprepare(t2_clk);
-		clk_disable_unprepare(tc->slow_clk);
-		return ret;
-	}
-
-	clockevents_config_and_register(&clkevt.clkevt, 32768, 1, 0xffff);
-
-	return ret;
-}
-
-#else /* !CONFIG_GENERIC_CLOCKEVENTS */
-
-static int __init setup_clkevents(struct atmel_tc *tc, int clk32k_divisor_idx)
-{
-	/* NOTHING */
-	return 0;
-}
-
-#endif
-
-static void __init tcb_setup_dual_chan(struct atmel_tc *tc, int mck_divisor_idx)
-{
-	/* channel 0:  waveform mode, input mclk/8, clock TIOA0 on overflow */
-	writel(mck_divisor_idx			/* likely divide-by-8 */
-			| ATMEL_TC_WAVE
-			| ATMEL_TC_WAVESEL_UP		/* free-run */
-			| ATMEL_TC_ACPA_SET		/* TIOA0 rises at 0 */
-			| ATMEL_TC_ACPC_CLEAR,		/* (duty cycle 50%) */
-			tcaddr + ATMEL_TC_REG(0, CMR));
-	writel(0x0000, tcaddr + ATMEL_TC_REG(0, RA));
-	writel(0x8000, tcaddr + ATMEL_TC_REG(0, RC));
-	writel(0xff, tcaddr + ATMEL_TC_REG(0, IDR));	/* no irqs */
-	writel(ATMEL_TC_CLKEN, tcaddr + ATMEL_TC_REG(0, CCR));
-
-	/* channel 1:  waveform mode, input TIOA0 */
-	writel(ATMEL_TC_XC1			/* input: TIOA0 */
-			| ATMEL_TC_WAVE
-			| ATMEL_TC_WAVESEL_UP,		/* free-run */
-			tcaddr + ATMEL_TC_REG(1, CMR));
-	writel(0xff, tcaddr + ATMEL_TC_REG(1, IDR));	/* no irqs */
-	writel(ATMEL_TC_CLKEN, tcaddr + ATMEL_TC_REG(1, CCR));
-
-	/* chain channel 0 to channel 1*/
-	writel(ATMEL_TC_TC1XC1S_TIOA0, tcaddr + ATMEL_TC_BMR);
-	/* then reset all the timers */
-	writel(ATMEL_TC_SYNC, tcaddr + ATMEL_TC_BCR);
-}
-
-static void __init tcb_setup_single_chan(struct atmel_tc *tc, int mck_divisor_idx)
-{
-	/* channel 0:  waveform mode, input mclk/8 */
-	writel(mck_divisor_idx			/* likely divide-by-8 */
-			| ATMEL_TC_WAVE
-			| ATMEL_TC_WAVESEL_UP,		/* free-run */
-			tcaddr + ATMEL_TC_REG(0, CMR));
-	writel(0xff, tcaddr + ATMEL_TC_REG(0, IDR));	/* no irqs */
-	writel(ATMEL_TC_CLKEN, tcaddr + ATMEL_TC_REG(0, CCR));
-
-	/* then reset all the timers */
-	writel(ATMEL_TC_SYNC, tcaddr + ATMEL_TC_BCR);
-}
-
-static int __init tcb_clksrc_init(void)
-{
-	static char bootinfo[] __initdata
-		= KERN_DEBUG "%s: tc%d at %d.%03d MHz\n";
-
-	struct platform_device *pdev;
-	struct atmel_tc *tc;
-	struct clk *t0_clk;
-	u32 rate, divided_rate = 0;
-	int best_divisor_idx = -1;
-	int clk32k_divisor_idx = -1;
-	int i;
-	int ret;
-
-	tc = atmel_tc_alloc(CONFIG_ATMEL_TCB_CLKSRC_BLOCK);
-	if (!tc) {
-		pr_debug("can't alloc TC for clocksource\n");
-		return -ENODEV;
-	}
-	tcaddr = tc->regs;
-	pdev = tc->pdev;
-
-	t0_clk = tc->clk[0];
-	ret = clk_prepare_enable(t0_clk);
-	if (ret) {
-		pr_debug("can't enable T0 clk\n");
-		goto err_free_tc;
-	}
-
-	/* How fast will we be counting?  Pick something over 5 MHz.  */
-	rate = (u32) clk_get_rate(t0_clk);
-	for (i = 0; i < 5; i++) {
-		unsigned divisor = atmel_tc_divisors[i];
-		unsigned tmp;
-
-		/* remember 32 KiHz clock for later */
-		if (!divisor) {
-			clk32k_divisor_idx = i;
-			continue;
-		}
-
-		tmp = rate / divisor;
-		pr_debug("TC: %u / %-3u [%d] --> %u\n", rate, divisor, i, tmp);
-		if (best_divisor_idx > 0) {
-			if (tmp < 5 * 1000 * 1000)
-				continue;
-		}
-		divided_rate = tmp;
-		best_divisor_idx = i;
-	}
-
-
-	printk(bootinfo, clksrc.name, CONFIG_ATMEL_TCB_CLKSRC_BLOCK,
-			divided_rate / 1000000,
-			((divided_rate % 1000000) + 500) / 1000);
-
-	if (tc->tcb_config && tc->tcb_config->counter_width == 32) {
-		/* use apropriate function to read 32 bit counter */
-		clksrc.read = tc_get_cycles32;
-		/* setup ony channel 0 */
-		tcb_setup_single_chan(tc, best_divisor_idx);
-	} else {
-		/* tclib will give us three clocks no matter what the
-		 * underlying platform supports.
-		 */
-		ret = clk_prepare_enable(tc->clk[1]);
-		if (ret) {
-			pr_debug("can't enable T1 clk\n");
-			goto err_disable_t0;
-		}
-		/* setup both channel 0 & 1 */
-		tcb_setup_dual_chan(tc, best_divisor_idx);
-	}
-
-	/* and away we go! */
-	ret = clocksource_register_hz(&clksrc, divided_rate);
-	if (ret)
-		goto err_disable_t1;
-
-	/* channel 2:  periodic and oneshot timer support */
-	ret = setup_clkevents(tc, clk32k_divisor_idx);
-	if (ret)
-		goto err_unregister_clksrc;
-
-	return 0;
-
-err_unregister_clksrc:
-	clocksource_unregister(&clksrc);
-
-err_disable_t1:
-	if (!tc->tcb_config || tc->tcb_config->counter_width != 32)
-		clk_disable_unprepare(tc->clk[1]);
-
-err_disable_t0:
-	clk_disable_unprepare(t0_clk);
-
-err_free_tc:
-	atmel_tc_free(tc);
-	return ret;
-}
-arch_initcall(tcb_clksrc_init);
Index: linux-5.0.19-rt11/drivers/clocksource/timer-atmel-tcb.c
===================================================================
--- /dev/null
+++ linux-5.0.19-rt11/drivers/clocksource/timer-atmel-tcb.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4 @
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/init.h>
+#include <linux/clocksource.h>
+#include <linux/clockchips.h>
+#include <linux/interrupt.h>
+#include <linux/irq.h>
+
+#include <linux/clk.h>
+#include <linux/err.h>
+#include <linux/ioport.h>
+#include <linux/io.h>
+#include <linux/of_address.h>
+#include <linux/of_irq.h>
+#include <linux/sched_clock.h>
+#include <linux/syscore_ops.h>
+#include <soc/at91/atmel_tcb.h>
+
+
+/*
+ * We're configured to use a specific TC block, one that's not hooked
+ * up to external hardware, to provide a time solution:
+ *
+ *   - Two channels combine to create a free-running 32 bit counter
+ *     with a base rate of 5+ MHz, packaged as a clocksource (with
+ *     resolution better than 200 nsec).
+ *   - Some chips support a 32 bit counter. A single channel is used for
+ *     this 32 bit free-running counter; the second channel is not used.
+ *
+ *   - The third channel may be used to provide a 16-bit clockevent
+ *     source, used in either periodic or oneshot mode.
+ *
+ * REVISIT behavior during system suspend states... we should disable
+ * all clocks and save the power.  Easily done for clockevent devices,
+ * but clocksources won't necessarily get the needed notifications.
+ * For deeper system sleep states, this will be mandatory...
+ */
+
+static void __iomem *tcaddr;
+static struct
+{
+	u32 cmr;
+	u32 imr;
+	u32 rc;
+	bool clken;
+} tcb_cache[3];
+static u32 bmr_cache;
+
+static u64 tc_get_cycles(struct clocksource *cs)
+{
+	unsigned long	flags;
+	u32		lower, upper;
+
+	raw_local_irq_save(flags);
+	do {
+		upper = readl_relaxed(tcaddr + ATMEL_TC_REG(1, CV));
+		lower = readl_relaxed(tcaddr + ATMEL_TC_REG(0, CV));
+	} while (upper != readl_relaxed(tcaddr + ATMEL_TC_REG(1, CV)));
+
+	raw_local_irq_restore(flags);
+	return (upper << 16) | lower;
+}
+
+static u64 tc_get_cycles32(struct clocksource *cs)
+{
+	return readl_relaxed(tcaddr + ATMEL_TC_REG(0, CV));
+}
+
+static void tc_clksrc_suspend(struct clocksource *cs)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(tcb_cache); i++) {
+		tcb_cache[i].cmr = readl(tcaddr + ATMEL_TC_REG(i, CMR));
+		tcb_cache[i].imr = readl(tcaddr + ATMEL_TC_REG(i, IMR));
+		tcb_cache[i].rc = readl(tcaddr + ATMEL_TC_REG(i, RC));
+		tcb_cache[i].clken = !!(readl(tcaddr + ATMEL_TC_REG(i, SR)) &
+					ATMEL_TC_CLKSTA);
+	}
+
+	bmr_cache = readl(tcaddr + ATMEL_TC_BMR);
+}
+
+static void tc_clksrc_resume(struct clocksource *cs)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(tcb_cache); i++) {
+		/* Restore registers for the channel, RA and RB are not used  */
+		writel(tcb_cache[i].cmr, tcaddr + ATMEL_TC_REG(i, CMR));
+		writel(tcb_cache[i].rc, tcaddr + ATMEL_TC_REG(i, RC));
+		writel(0, tcaddr + ATMEL_TC_REG(i, RA));
+		writel(0, tcaddr + ATMEL_TC_REG(i, RB));
+		/* Disable all the interrupts */
+		writel(0xff, tcaddr + ATMEL_TC_REG(i, IDR));
+		/* Reenable interrupts that were enabled before suspending */
+		writel(tcb_cache[i].imr, tcaddr + ATMEL_TC_REG(i, IER));
+		/* Start the clock if it was used */
+		if (tcb_cache[i].clken)
+			writel(ATMEL_TC_CLKEN, tcaddr + ATMEL_TC_REG(i, CCR));
+	}
+
+	/* Dual channel, chain channels */
+	writel(bmr_cache, tcaddr + ATMEL_TC_BMR);
+	/* Finally, trigger all the channels*/
+	writel(ATMEL_TC_SYNC, tcaddr + ATMEL_TC_BCR);
+}
+
+static struct clocksource clksrc = {
+	.rating         = 200,
+	.read           = tc_get_cycles,
+	.mask           = CLOCKSOURCE_MASK(32),
+	.flags		= CLOCK_SOURCE_IS_CONTINUOUS,
+	.suspend	= tc_clksrc_suspend,
+	.resume		= tc_clksrc_resume,
+};
+
+static u64 notrace tc_sched_clock_read(void)
+{
+	return tc_get_cycles(&clksrc);
+}
+
+static u64 notrace tc_sched_clock_read32(void)
+{
+	return tc_get_cycles32(&clksrc);
+}
+
+#ifdef CONFIG_GENERIC_CLOCKEVENTS
+
+struct tc_clkevt_device {
+	struct clock_event_device	clkevt;
+	struct clk			*clk;
+	bool				clk_enabled;
+	u32				freq;
+	void __iomem			*regs;
+};
+
+static struct tc_clkevt_device *to_tc_clkevt(struct clock_event_device *clkevt)
+{
+	return container_of(clkevt, struct tc_clkevt_device, clkevt);
+}
+
+static u32 timer_clock;
+
+static void tc_clk_disable(struct clock_event_device *d)
+{
+	struct tc_clkevt_device *tcd = to_tc_clkevt(d);
+
+	clk_disable(tcd->clk);
+	tcd->clk_enabled = false;
+}
+
+static void tc_clk_enable(struct clock_event_device *d)
+{
+	struct tc_clkevt_device *tcd = to_tc_clkevt(d);
+
+	if (tcd->clk_enabled)
+		return;
+	clk_enable(tcd->clk);
+	tcd->clk_enabled = true;
+}
+
+static int tc_shutdown(struct clock_event_device *d)
+{
+	struct tc_clkevt_device *tcd = to_tc_clkevt(d);
+	void __iomem		*regs = tcd->regs;
+
+	writel(0xff, regs + ATMEL_TC_REG(2, IDR));
+	writel(ATMEL_TC_CLKDIS, regs + ATMEL_TC_REG(2, CCR));
+	return 0;
+}
+
+static int tc_shutdown_clk_off(struct clock_event_device *d)
+{
+	tc_shutdown(d);
+	if (!clockevent_state_detached(d))
+		tc_clk_disable(d);
+
+	return 0;
+}
+
+static int tc_set_oneshot(struct clock_event_device *d)
+{
+	struct tc_clkevt_device *tcd = to_tc_clkevt(d);
+	void __iomem		*regs = tcd->regs;
+
+	if (clockevent_state_oneshot(d) || clockevent_state_periodic(d))
+		tc_shutdown(d);
+
+	tc_clk_enable(d);
+
+	/* count up to RC, then irq and stop */
+	writel(timer_clock | ATMEL_TC_CPCSTOP | ATMEL_TC_WAVE |
+		     ATMEL_TC_WAVESEL_UP_AUTO, regs + ATMEL_TC_REG(2, CMR));
+	writel(ATMEL_TC_CPCS, regs + ATMEL_TC_REG(2, IER));
+
+	/* set_next_event() configures and starts the timer */
+	return 0;
+}
+
+static int tc_set_periodic(struct clock_event_device *d)
+{
+	struct tc_clkevt_device *tcd = to_tc_clkevt(d);
+	void __iomem		*regs = tcd->regs;
+
+	if (clockevent_state_oneshot(d) || clockevent_state_periodic(d))
+		tc_shutdown(d);
+
+	/* By not making the gentime core emulate periodic mode on top
+	 * of oneshot, we get lower overhead and improved accuracy.
+	 */
+	tc_clk_enable(d);
+
+	/* count up to RC, then irq and restart */
+	writel(timer_clock | ATMEL_TC_WAVE | ATMEL_TC_WAVESEL_UP_AUTO,
+		     regs + ATMEL_TC_REG(2, CMR));
+	writel((tcd->freq + HZ / 2) / HZ, tcaddr + ATMEL_TC_REG(2, RC));
+
+	/* Enable clock and interrupts on RC compare */
+	writel(ATMEL_TC_CPCS, regs + ATMEL_TC_REG(2, IER));
+
+	/* go go gadget! */
+	writel(ATMEL_TC_CLKEN | ATMEL_TC_SWTRG, regs +
+		     ATMEL_TC_REG(2, CCR));
+	return 0;
+}
+
+static int tc_next_event(unsigned long delta, struct clock_event_device *d)
+{
+	writel_relaxed(delta, tcaddr + ATMEL_TC_REG(2, RC));
+
+	/* go go gadget! */
+	writel_relaxed(ATMEL_TC_CLKEN | ATMEL_TC_SWTRG,
+			tcaddr + ATMEL_TC_REG(2, CCR));
+	return 0;
+}
+
+static struct tc_clkevt_device clkevt = {
+	.clkevt	= {
+		.features		= CLOCK_EVT_FEAT_PERIODIC |
+					  CLOCK_EVT_FEAT_ONESHOT,
+		/* Should be lower than at91rm9200's system timer */
+#ifdef CONFIG_ATMEL_TCB_CLKSRC_USE_SLOW_CLOCK
+		.rating			= 125,
+#else
+		.rating			= 200,
+#endif
+		.set_next_event		= tc_next_event,
+		.set_state_shutdown	= tc_shutdown_clk_off,
+		.set_state_periodic	= tc_set_periodic,
+		.set_state_oneshot	= tc_set_oneshot,
+	},
+};
+
+static irqreturn_t ch2_irq(int irq, void *handle)
+{
+	struct tc_clkevt_device	*dev = handle;
+	unsigned int		sr;
+
+	sr = readl_relaxed(dev->regs + ATMEL_TC_REG(2, SR));
+	if (sr & ATMEL_TC_CPCS) {
+		dev->clkevt.event_handler(&dev->clkevt);
+		return IRQ_HANDLED;
+	}
+
+	return IRQ_NONE;
+}
+
+static int __init setup_clkevents(struct atmel_tc *tc, int divisor_idx)
+{
+	unsigned divisor = atmel_tc_divisors[divisor_idx];
+	int ret;
+	struct clk *t2_clk = tc->clk[2];
+	int irq = tc->irq[2];
+
+	ret = clk_prepare_enable(tc->slow_clk);
+	if (ret)
+		return ret;
+
+	/* try to enable t2 clk to avoid future errors in mode change */
+	ret = clk_prepare_enable(t2_clk);
+	if (ret) {
+		clk_disable_unprepare(tc->slow_clk);
+		return ret;
+	}
+
+	clk_disable(t2_clk);
+
+	clkevt.regs = tc->regs;
+	clkevt.clk = t2_clk;
+
+	timer_clock = divisor_idx;
+	if (!divisor)
+		clkevt.freq = 32768;
+	else
+		clkevt.freq = clk_get_rate(t2_clk) / divisor;
+
+	clkevt.clkevt.cpumask = cpumask_of(0);
+
+	ret = request_irq(irq, ch2_irq, IRQF_TIMER, "tc_clkevt", &clkevt);
+	if (ret) {
+		clk_unprepare(t2_clk);
+		clk_disable_unprepare(tc->slow_clk);
+		return ret;
+	}
+
+	clockevents_config_and_register(&clkevt.clkevt, clkevt.freq, 1, 0xffff);
+
+	return ret;
+}
+
+#else /* !CONFIG_GENERIC_CLOCKEVENTS */
+
+static int __init setup_clkevents(struct atmel_tc *tc, int clk32k_divisor_idx)
+{
+	/* NOTHING */
+	return 0;
+}
+
+#endif
+
+static void __init tcb_setup_dual_chan(struct atmel_tc *tc, int mck_divisor_idx)
+{
+	/* channel 0:  waveform mode, input mclk/8, clock TIOA0 on overflow */
+	writel(mck_divisor_idx			/* likely divide-by-8 */
+			| ATMEL_TC_WAVE
+			| ATMEL_TC_WAVESEL_UP		/* free-run */
+			| ATMEL_TC_ACPA_SET		/* TIOA0 rises at 0 */
+			| ATMEL_TC_ACPC_CLEAR,		/* (duty cycle 50%) */
+			tcaddr + ATMEL_TC_REG(0, CMR));
+	writel(0x0000, tcaddr + ATMEL_TC_REG(0, RA));
+	writel(0x8000, tcaddr + ATMEL_TC_REG(0, RC));
+	writel(0xff, tcaddr + ATMEL_TC_REG(0, IDR));	/* no irqs */
+	writel(ATMEL_TC_CLKEN, tcaddr + ATMEL_TC_REG(0, CCR));
+
+	/* channel 1:  waveform mode, input TIOA0 */
+	writel(ATMEL_TC_XC1			/* input: TIOA0 */
+			| ATMEL_TC_WAVE
+			| ATMEL_TC_WAVESEL_UP,		/* free-run */
+			tcaddr + ATMEL_TC_REG(1, CMR));
+	writel(0xff, tcaddr + ATMEL_TC_REG(1, IDR));	/* no irqs */
+	writel(ATMEL_TC_CLKEN, tcaddr + ATMEL_TC_REG(1, CCR));
+
+	/* chain channel 0 to channel 1*/
+	writel(ATMEL_TC_TC1XC1S_TIOA0, tcaddr + ATMEL_TC_BMR);
+	/* then reset all the timers */
+	writel(ATMEL_TC_SYNC, tcaddr + ATMEL_TC_BCR);
+}
+
+static void __init tcb_setup_single_chan(struct atmel_tc *tc, int mck_divisor_idx)
+{
+	/* channel 0:  waveform mode, input mclk/8 */
+	writel(mck_divisor_idx			/* likely divide-by-8 */
+			| ATMEL_TC_WAVE
+			| ATMEL_TC_WAVESEL_UP,		/* free-run */
+			tcaddr + ATMEL_TC_REG(0, CMR));
+	writel(0xff, tcaddr + ATMEL_TC_REG(0, IDR));	/* no irqs */
+	writel(ATMEL_TC_CLKEN, tcaddr + ATMEL_TC_REG(0, CCR));
+
+	/* then reset all the timers */
+	writel(ATMEL_TC_SYNC, tcaddr + ATMEL_TC_BCR);
+}
+
+static const u8 atmel_tcb_divisors[5] = { 2, 8, 32, 128, 0, };
+
+static const struct of_device_id atmel_tcb_of_match[] = {
+	{ .compatible = "atmel,at91rm9200-tcb", .data = (void *)16, },
+	{ .compatible = "atmel,at91sam9x5-tcb", .data = (void *)32, },
+	{ /* sentinel */ }
+};
+
+static int __init tcb_clksrc_init(struct device_node *node)
+{
+	struct atmel_tc tc;
+	struct clk *t0_clk;
+	const struct of_device_id *match;
+	u64 (*tc_sched_clock)(void);
+	u32 rate, divided_rate = 0;
+	int best_divisor_idx = -1;
+	int clk32k_divisor_idx = -1;
+	int bits;
+	int i;
+	int ret;
+
+	/* Protect against multiple calls */
+	if (tcaddr)
+		return 0;
+
+	tc.regs = of_iomap(node->parent, 0);
+	if (!tc.regs)
+		return -ENXIO;
+
+	t0_clk = of_clk_get_by_name(node->parent, "t0_clk");
+	if (IS_ERR(t0_clk))
+		return PTR_ERR(t0_clk);
+
+	tc.slow_clk = of_clk_get_by_name(node->parent, "slow_clk");
+	if (IS_ERR(tc.slow_clk))
+		return PTR_ERR(tc.slow_clk);
+
+	tc.clk[0] = t0_clk;
+	tc.clk[1] = of_clk_get_by_name(node->parent, "t1_clk");
+	if (IS_ERR(tc.clk[1]))
+		tc.clk[1] = t0_clk;
+	tc.clk[2] = of_clk_get_by_name(node->parent, "t2_clk");
+	if (IS_ERR(tc.clk[2]))
+		tc.clk[2] = t0_clk;
+
+	tc.irq[2] = of_irq_get(node->parent, 2);
+	if (tc.irq[2] <= 0) {
+		tc.irq[2] = of_irq_get(node->parent, 0);
+		if (tc.irq[2] <= 0)
+			return -EINVAL;
+	}
+
+	match = of_match_node(atmel_tcb_of_match, node->parent);
+	bits = (uintptr_t)match->data;
+
+	for (i = 0; i < ARRAY_SIZE(tc.irq); i++)
+		writel(ATMEL_TC_ALL_IRQ, tc.regs + ATMEL_TC_REG(i, IDR));
+
+	ret = clk_prepare_enable(t0_clk);
+	if (ret) {
+		pr_debug("can't enable T0 clk\n");
+		return ret;
+	}
+
+	/* How fast will we be counting?  Pick something over 5 MHz.  */
+	rate = (u32) clk_get_rate(t0_clk);
+	for (i = 0; i < ARRAY_SIZE(atmel_tcb_divisors); i++) {
+		unsigned divisor = atmel_tcb_divisors[i];
+		unsigned tmp;
+
+		/* remember 32 KiHz clock for later */
+		if (!divisor) {
+			clk32k_divisor_idx = i;
+			continue;
+		}
+
+		tmp = rate / divisor;
+		pr_debug("TC: %u / %-3u [%d] --> %u\n", rate, divisor, i, tmp);
+		if (best_divisor_idx > 0) {
+			if (tmp < 5 * 1000 * 1000)
+				continue;
+		}
+		divided_rate = tmp;
+		best_divisor_idx = i;
+	}
+
+	clksrc.name = kbasename(node->parent->full_name);
+	clkevt.clkevt.name = kbasename(node->parent->full_name);
+	pr_debug("%s at %d.%03d MHz\n", clksrc.name, divided_rate / 1000000,
+			((divided_rate % 1000000) + 500) / 1000);
+
+	tcaddr = tc.regs;
+
+	if (bits == 32) {
+		/* use appropriate function to read 32 bit counter */
+		clksrc.read = tc_get_cycles32;
+		/* setup only channel 0 */
+		tcb_setup_single_chan(&tc, best_divisor_idx);
+		tc_sched_clock = tc_sched_clock_read32;
+	} else {
+		/* we have three clocks no matter what the
+		 * underlying platform supports.
+		 */
+		ret = clk_prepare_enable(tc.clk[1]);
+		if (ret) {
+			pr_debug("can't enable T1 clk\n");
+			goto err_disable_t0;
+		}
+		/* setup both channel 0 & 1 */
+		tcb_setup_dual_chan(&tc, best_divisor_idx);
+		tc_sched_clock = tc_sched_clock_read;
+	}
+
+	/* and away we go! */
+	ret = clocksource_register_hz(&clksrc, divided_rate);
+	if (ret)
+		goto err_disable_t1;
+
+	/* channel 2:  periodic and oneshot timer support */
+#ifdef CONFIG_ATMEL_TCB_CLKSRC_USE_SLOW_CLOCK
+	ret = setup_clkevents(&tc, clk32k_divisor_idx);
+#else
+	ret = setup_clkevents(&tc, best_divisor_idx);
+#endif
+	if (ret)
+		goto err_unregister_clksrc;
+
+	sched_clock_register(tc_sched_clock, 32, divided_rate);
+
+	return 0;
+
+err_unregister_clksrc:
+	clocksource_unregister(&clksrc);
+
+err_disable_t1:
+	if (bits != 32)
+		clk_disable_unprepare(tc.clk[1]);
+
+err_disable_t0:
+	clk_disable_unprepare(t0_clk);
+
+	tcaddr = NULL;
+
+	return ret;
+}
+TIMER_OF_DECLARE(atmel_tcb_clksrc, "atmel,tcb-timer", tcb_clksrc_init);
Index: linux-5.0.19-rt11/drivers/connector/cn_proc.c
===================================================================
--- linux-5.0.19-rt11.orig/drivers/connector/cn_proc.c
+++ linux-5.0.19-rt11/drivers/connector/cn_proc.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:35 @
 #include <linux/pid_namespace.h>
 
 #include <linux/cn_proc.h>
+#include <linux/locallock.h>
 
 /*
  * Size of a cn_msg followed by a proc_event structure.  Since the
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:58 @ static struct cb_id cn_proc_event_id = {
 
 /* proc_event_counts is used as the sequence number of the netlink message */
 static DEFINE_PER_CPU(__u32, proc_event_counts) = { 0 };
+static DEFINE_LOCAL_IRQ_LOCK(send_msg_lock);
 
 static inline void send_msg(struct cn_msg *msg)
 {
-	preempt_disable();
+	local_lock(send_msg_lock);
 
 	msg->seq = __this_cpu_inc_return(proc_event_counts) - 1;
 	((struct proc_event *)msg->data)->cpu = smp_processor_id();
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:75 @ static inline void send_msg(struct cn_ms
 	 */
 	cn_netlink_send(msg, 0, CN_IDX_PROC, GFP_NOWAIT);
 
-	preempt_enable();
+	local_unlock(send_msg_lock);
 }
 
 void proc_fork_connector(struct task_struct *task)
Index: linux-5.0.19-rt11/drivers/cpufreq/Kconfig.x86
===================================================================
--- linux-5.0.19-rt11.orig/drivers/cpufreq/Kconfig.x86
+++ linux-5.0.19-rt11/drivers/cpufreq/Kconfig.x86
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:128 @ config X86_POWERNOW_K7_ACPI
 
 config X86_POWERNOW_K8
 	tristate "AMD Opteron/Athlon64 PowerNow!"
-	depends on ACPI && ACPI_PROCESSOR && X86_ACPI_CPUFREQ
+	depends on ACPI && ACPI_PROCESSOR && X86_ACPI_CPUFREQ && !PREEMPT_RT_BASE
 	help
 	  This adds the CPUFreq driver for K8/early Opteron/Athlon64 processors.
 	  Support for K10 and newer processors is now in acpi-cpufreq.
Index: linux-5.0.19-rt11/drivers/crypto/chelsio/chtls/chtls_main.c
===================================================================
--- linux-5.0.19-rt11.orig/drivers/crypto/chelsio/chtls/chtls_main.c
+++ linux-5.0.19-rt11/drivers/crypto/chelsio/chtls/chtls_main.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:33 @
  */
 static LIST_HEAD(cdev_list);
 static DEFINE_MUTEX(cdev_mutex);
-static DEFINE_MUTEX(cdev_list_lock);
 
 static DEFINE_MUTEX(notify_mutex);
 static RAW_NOTIFIER_HEAD(listen_notify_list);
Index: linux-5.0.19-rt11/drivers/firmware/efi/efi.c
===================================================================
--- linux-5.0.19-rt11.orig/drivers/firmware/efi/efi.c
+++ linux-5.0.19-rt11/drivers/firmware/efi/efi.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:91 @ struct mm_struct efi_mm = {
 
 struct workqueue_struct *efi_rts_wq;
 
-static bool disable_runtime;
+static bool disable_runtime = IS_ENABLED(CONFIG_PREEMPT_RT_BASE);
 static int __init setup_noefi(char *arg)
 {
 	disable_runtime = true;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:117 @ static int __init parse_efi_cmdline(char
 	if (parse_option_str(str, "noruntime"))
 		disable_runtime = true;
 
+	if (parse_option_str(str, "runtime"))
+		disable_runtime = false;
+
 	return 0;
 }
 early_param("efi", parse_efi_cmdline);
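
With CONFIG_PREEMPT_RT_BASE the hunk above disables EFI runtime services
by default, since their firmware calls can run for an unbounded time with
interrupts disabled; the new "runtime" option lets a user opt back in.
Booting an RT kernel with "efi=runtime" on the kernel command line
re-enables the services, while "efi=noruntime" still forces them off on
any kernel.
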
Index: linux-5.0.19-rt11/drivers/gpu/drm/i915/i915_irq.c
===================================================================
--- linux-5.0.19-rt11.orig/drivers/gpu/drm/i915/i915_irq.c
+++ linux-5.0.19-rt11/drivers/gpu/drm/i915/i915_irq.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1028 @ static bool i915_get_crtc_scanoutpos(str
 	spin_lock_irqsave(&dev_priv->uncore.lock, irqflags);
 
 	/* preempt_disable_rt() should go right here in PREEMPT_RT patchset. */
+	preempt_disable_rt();
 
 	/* Get optional system timestamp before query. */
 	if (stime)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1080 @ static bool i915_get_crtc_scanoutpos(str
 		*etime = ktime_get();
 
 	/* preempt_enable_rt() should go right here in PREEMPT_RT patchset. */
+	preempt_enable_rt();
 
 	spin_unlock_irqrestore(&dev_priv->uncore.lock, irqflags);
 
Index: linux-5.0.19-rt11/drivers/gpu/drm/i915/i915_request.c
===================================================================
--- linux-5.0.19-rt11.orig/drivers/gpu/drm/i915/i915_request.c
+++ linux-5.0.19-rt11/drivers/gpu/drm/i915/i915_request.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:281 @ static void __retire_engine_request(stru
 
 	GEM_BUG_ON(!i915_request_completed(rq));
 
-	local_irq_disable();
-
-	spin_lock(&engine->timeline.lock);
+	spin_lock_irq(&engine->timeline.lock);
 	GEM_BUG_ON(!list_is_first(&rq->link, &engine->timeline.requests));
 	list_del_init(&rq->link);
 	spin_unlock(&engine->timeline.lock);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:295 @ static void __retire_engine_request(stru
 		GEM_BUG_ON(!atomic_read(&rq->i915->gt_pm.rps.num_waiters));
 		atomic_dec(&rq->i915->gt_pm.rps.num_waiters);
 	}
-	spin_unlock(&rq->lock);
-
-	local_irq_enable();
+	spin_unlock_irq(&rq->lock);
 
 	/*
 	 * The backing object for the context is done after switching to the
Index: linux-5.0.19-rt11/drivers/gpu/drm/i915/i915_trace.h
===================================================================
--- linux-5.0.19-rt11.orig/drivers/gpu/drm/i915/i915_trace.h
+++ linux-5.0.19-rt11/drivers/gpu/drm/i915/i915_trace.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:5 @
 #if !defined(_I915_TRACE_H_) || defined(TRACE_HEADER_MULTI_READ)
 #define _I915_TRACE_H_
 
+#ifdef CONFIG_PREEMPT_RT_BASE
+#define NOTRACE
+#endif
+
 #include <linux/stringify.h>
 #include <linux/types.h>
 #include <linux/tracepoint.h>
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:686 @ DEFINE_EVENT(i915_request, i915_request_
 	    TP_ARGS(rq)
 );
 
-#if defined(CONFIG_DRM_I915_LOW_LEVEL_TRACEPOINTS)
+#if defined(CONFIG_DRM_I915_LOW_LEVEL_TRACEPOINTS) && !defined(NOTRACE)
 DEFINE_EVENT(i915_request, i915_request_submit,
 	     TP_PROTO(struct i915_request *rq),
 	     TP_ARGS(rq)
Index: linux-5.0.19-rt11/drivers/gpu/drm/i915/intel_sprite.c
===================================================================
--- linux-5.0.19-rt11.orig/drivers/gpu/drm/i915/intel_sprite.c
+++ linux-5.0.19-rt11/drivers/gpu/drm/i915/intel_sprite.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:39 @
 #include <drm/drm_rect.h>
 #include <drm/drm_atomic.h>
 #include <drm/drm_plane_helper.h>
+#include <linux/locallock.h>
 #include "intel_drv.h"
 #include "intel_frontbuffer.h"
 #include <drm/i915_drm.h>
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:65 @ int intel_usecs_to_scanlines(const struc
 #define VBLANK_EVASION_TIME_US 100
 #endif
 
+static DEFINE_LOCAL_IRQ_LOCK(pipe_update_lock);
+
 /**
  * intel_pipe_update_start() - start update of a set of display registers
  * @new_crtc_state: the new crtc state
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:116 @ void intel_pipe_update_start(const struc
 		DRM_ERROR("PSR idle timed out 0x%x, atomic update may fail\n",
 			  psr_status);
 
-	local_irq_disable();
+	local_lock_irq(pipe_update_lock);
 
 	crtc->debug.min_vbl = min;
 	crtc->debug.max_vbl = max;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:140 @ void intel_pipe_update_start(const struc
 			break;
 		}
 
-		local_irq_enable();
+		local_unlock_irq(pipe_update_lock);
 
 		timeout = schedule_timeout(timeout);
 
-		local_irq_disable();
+		local_lock_irq(pipe_update_lock);
 	}
 
 	finish_wait(wq, &wait);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:177 @ void intel_pipe_update_start(const struc
 	return;
 
 irq_disable:
-	local_irq_disable();
+	local_lock_irq(pipe_update_lock);
 }
 
 /**
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:213 @ void intel_pipe_update_end(struct intel_
 		new_crtc_state->base.event = NULL;
 	}
 
-	local_irq_enable();
+	local_unlock_irq(pipe_update_lock);
 
 	if (intel_vgpu_active(dev_priv))
 		return;
Index: linux-5.0.19-rt11/drivers/gpu/drm/radeon/radeon_display.c
===================================================================
--- linux-5.0.19-rt11.orig/drivers/gpu/drm/radeon/radeon_display.c
+++ linux-5.0.19-rt11/drivers/gpu/drm/radeon/radeon_display.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1816 @ int radeon_get_crtc_scanoutpos(struct dr
 	struct radeon_device *rdev = dev->dev_private;
 
 	/* preempt_disable_rt() should go right here in PREEMPT_RT patchset. */
+	preempt_disable_rt();
 
 	/* Get optional system timestamp before query. */
 	if (stime)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1909 @ int radeon_get_crtc_scanoutpos(struct dr
 		*etime = ktime_get();
 
 	/* preempt_enable_rt() should go right here in PREEMPT_RT patchset. */
+	preempt_enable_rt();
 
 	/* Decode into vertical and horizontal scanout position. */
 	*vpos = position & 0x1fff;
Index: linux-5.0.19-rt11/drivers/hv/hv.c
===================================================================
--- linux-5.0.19-rt11.orig/drivers/hv/hv.c
+++ linux-5.0.19-rt11/drivers/hv/hv.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:113 @ int hv_post_message(union hv_connection_
 static void hv_stimer0_isr(void)
 {
 	struct hv_per_cpu_context *hv_cpu;
+	struct pt_regs *regs = get_irq_regs();
+	u64 ip = regs ? instruction_pointer(regs) : 0;
 
 	hv_cpu = this_cpu_ptr(hv_context.cpu_context);
 	hv_cpu->clk_evt->event_handler(hv_cpu->clk_evt);
-	add_interrupt_randomness(stimer0_vector, 0);
+	add_interrupt_randomness(stimer0_vector, 0, ip);
 }
 
 static int hv_ce_set_next_event(unsigned long delta,
Index: linux-5.0.19-rt11/drivers/hv/hyperv_vmbus.h
===================================================================
--- linux-5.0.19-rt11.orig/drivers/hv/hyperv_vmbus.h
+++ linux-5.0.19-rt11/drivers/hv/hyperv_vmbus.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:34 @
 #include <linux/atomic.h>
 #include <linux/hyperv.h>
 #include <linux/interrupt.h>
+#include <linux/irq.h>
 
 #include "hv_trace.h"
 
Index: linux-5.0.19-rt11/drivers/hv/vmbus_drv.c
===================================================================
--- linux-5.0.19-rt11.orig/drivers/hv/vmbus_drv.c
+++ linux-5.0.19-rt11/drivers/hv/vmbus_drv.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1121 @ static void vmbus_isr(void)
 	void *page_addr = hv_cpu->synic_event_page;
 	struct hv_message *msg;
 	union hv_synic_event_flags *event;
+	struct pt_regs *regs = get_irq_regs();
+	u64 ip = regs ? instruction_pointer(regs) : 0;
 	bool handled = false;
 
 	if (unlikely(page_addr == NULL))
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1166 @ static void vmbus_isr(void)
 			tasklet_schedule(&hv_cpu->msg_dpc);
 	}
 
-	add_interrupt_randomness(HYPERVISOR_CALLBACK_VECTOR, 0);
+	add_interrupt_randomness(HYPERVISOR_CALLBACK_VECTOR, 0, ip);
 }
 
 /*
Index: linux-5.0.19-rt11/drivers/infiniband/hw/hfi1/affinity.c
===================================================================
--- linux-5.0.19-rt11.orig/drivers/infiniband/hw/hfi1/affinity.c
+++ linux-5.0.19-rt11/drivers/infiniband/hw/hfi1/affinity.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1040 @ int hfi1_get_proc_affinity(int node)
 	struct hfi1_affinity_node *entry;
 	cpumask_var_t diff, hw_thread_mask, available_mask, intrs_mask;
 	const struct cpumask *node_mask,
-		*proc_mask = &current->cpus_allowed;
+		*proc_mask = current->cpus_ptr;
 	struct hfi1_affinity_node_list *affinity = &node_affinity;
 	struct cpu_mask_set *set = &affinity->proc;
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1048 @ int hfi1_get_proc_affinity(int node)
 	 * check whether process/context affinity has already
 	 * been set
 	 */
-	if (cpumask_weight(proc_mask) == 1) {
+	if (current->nr_cpus_allowed == 1) {
 		hfi1_cdbg(PROC, "PID %u %s affinity set to CPU %*pbl",
 			  current->pid, current->comm,
 			  cpumask_pr_args(proc_mask));
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1059 @ int hfi1_get_proc_affinity(int node)
 		cpu = cpumask_first(proc_mask);
 		cpumask_set_cpu(cpu, &set->used);
 		goto done;
-	} else if (cpumask_weight(proc_mask) < cpumask_weight(&set->mask)) {
+	} else if (current->nr_cpus_allowed < cpumask_weight(&set->mask)) {
 		hfi1_cdbg(PROC, "PID %u %s affinity set to CPU set(s) %*pbl",
 			  current->pid, current->comm,
 			  cpumask_pr_args(proc_mask));
Index: linux-5.0.19-rt11/drivers/infiniband/hw/hfi1/sdma.c
===================================================================
--- linux-5.0.19-rt11.orig/drivers/infiniband/hw/hfi1/sdma.c
+++ linux-5.0.19-rt11/drivers/infiniband/hw/hfi1/sdma.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:858 @ struct sdma_engine *sdma_select_user_eng
 {
 	struct sdma_rht_node *rht_node;
 	struct sdma_engine *sde = NULL;
-	const struct cpumask *current_mask = &current->cpus_allowed;
 	unsigned long cpu_id;
 
 	/*
 	 * To ensure that always the same sdma engine(s) will be
 	 * selected make sure the process is pinned to this CPU only.
 	 */
-	if (cpumask_weight(current_mask) != 1)
+	if (current->nr_cpus_allowed != 1)
 		goto out;
 
 	cpu_id = smp_processor_id();
Index: linux-5.0.19-rt11/drivers/infiniband/hw/qib/qib_file_ops.c
===================================================================
--- linux-5.0.19-rt11.orig/drivers/infiniband/hw/qib/qib_file_ops.c
+++ linux-5.0.19-rt11/drivers/infiniband/hw/qib/qib_file_ops.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1145 @ static __poll_t qib_poll(struct file *fp
 static void assign_ctxt_affinity(struct file *fp, struct qib_devdata *dd)
 {
 	struct qib_filedata *fd = fp->private_data;
-	const unsigned int weight = cpumask_weight(&current->cpus_allowed);
+	const unsigned int weight = current->nr_cpus_allowed;
 	const struct cpumask *local_mask = cpumask_of_pcibus(dd->pcidev->bus);
 	int local_cpu;
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1626 @ static int qib_assign_ctxt(struct file *
 		ret = find_free_ctxt(i_minor - 1, fp, uinfo);
 	else {
 		int unit;
-		const unsigned int cpu = cpumask_first(&current->cpus_allowed);
-		const unsigned int weight =
-			cpumask_weight(&current->cpus_allowed);
+		const unsigned int cpu = cpumask_first(current->cpus_ptr);
+		const unsigned int weight = current->nr_cpus_allowed;
 
 		if (weight == 1 && !test_bit(cpu, qib_cpulist))
 			if (!find_hca(cpu, &unit) && unit >= 0)
Index: linux-5.0.19-rt11/drivers/iommu/Kconfig
===================================================================
--- linux-5.0.19-rt11.orig/drivers/iommu/Kconfig
+++ linux-5.0.19-rt11/drivers/iommu/Kconfig
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:96 @ config IOMMU_DMA
 	bool
 	select IOMMU_API
 	select IOMMU_IOVA
+	select IRQ_MSI_IOMMU
 	select NEED_SG_DMA_LENGTH
 
 config FSL_PAMU
Index: linux-5.0.19-rt11/drivers/iommu/dma-iommu.c
===================================================================
--- linux-5.0.19-rt11.orig/drivers/iommu/dma-iommu.c
+++ linux-5.0.19-rt11/drivers/iommu/dma-iommu.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:892 @ out_free_page:
 	return NULL;
 }
 
-void iommu_dma_map_msi_msg(int irq, struct msi_msg *msg)
+int iommu_dma_prepare_msi(struct msi_desc *desc, phys_addr_t msi_addr)
 {
-	struct device *dev = msi_desc_to_dev(irq_get_msi_desc(irq));
+	struct device *dev = msi_desc_to_dev(desc);
 	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
 	struct iommu_dma_cookie *cookie;
 	struct iommu_dma_msi_page *msi_page;
-	phys_addr_t msi_addr = (u64)msg->address_hi << 32 | msg->address_lo;
 	unsigned long flags;
 
-	if (!domain || !domain->iova_cookie)
-		return;
+	if (!domain || !domain->iova_cookie) {
+		desc->iommu_cookie = NULL;
+		return 0;
+	}
 
 	cookie = domain->iova_cookie;
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:916 @ void iommu_dma_map_msi_msg(int irq, stru
 	msi_page = iommu_dma_get_msi_page(dev, msi_addr, domain);
 	spin_unlock_irqrestore(&cookie->msi_lock, flags);
 
-	if (WARN_ON(!msi_page)) {
-		/*
-		 * We're called from a void callback, so the best we can do is
-		 * 'fail' by filling the message with obviously bogus values.
-		 * Since we got this far due to an IOMMU being present, it's
-		 * not like the existing address would have worked anyway...
-		 */
-		msg->address_hi = ~0U;
-		msg->address_lo = ~0U;
-		msg->data = ~0U;
-	} else {
-		msg->address_hi = upper_32_bits(msi_page->iova);
-		msg->address_lo &= cookie_msi_granule(cookie) - 1;
-		msg->address_lo += lower_32_bits(msi_page->iova);
-	}
+	msi_desc_set_iommu_cookie(desc, msi_page);
+
+	if (!msi_page)
+		return -ENOMEM;
+	return 0;
+}
+
+void iommu_dma_compose_msi_msg(struct msi_desc *desc,
+			       struct msi_msg *msg)
+{
+	struct device *dev = msi_desc_to_dev(desc);
+	const struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
+	const struct iommu_dma_msi_page *msi_page;
+
+	msi_page = msi_desc_get_iommu_cookie(desc);
+
+	if (!domain || !domain->iova_cookie || WARN_ON(!msi_page))
+		return;
+
+	msg->address_hi = upper_32_bits(msi_page->iova);
+	msg->address_lo &= cookie_msi_granule(domain->iova_cookie) - 1;
+	msg->address_lo += lower_32_bits(msi_page->iova);
 }
Index: linux-5.0.19-rt11/drivers/irqchip/irq-gic-v2m.c
===================================================================
--- linux-5.0.19-rt11.orig/drivers/irqchip/irq-gic-v2m.c
+++ linux-5.0.19-rt11/drivers/irqchip/irq-gic-v2m.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:113 @ static void gicv2m_compose_msi_msg(struc
 	if (v2m->flags & GICV2M_NEEDS_SPI_OFFSET)
 		msg->data -= v2m->spi_offset;
 
-	iommu_dma_map_msi_msg(data->irq, msg);
+	iommu_dma_compose_msi_msg(irq_data_get_msi_desc(data), msg);
 }
 
 static struct irq_chip gicv2m_irq_chip = {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:170 @ static void gicv2m_unalloc_msi(struct v2
 static int gicv2m_irq_domain_alloc(struct irq_domain *domain, unsigned int virq,
 				   unsigned int nr_irqs, void *args)
 {
+	msi_alloc_info_t *info = args;
 	struct v2m_data *v2m = NULL, *tmp;
 	int hwirq, offset, i, err = 0;
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:190 @ static int gicv2m_irq_domain_alloc(struc
 
 	hwirq = v2m->spi_start + offset;
 
+	err = iommu_dma_prepare_msi(info->desc,
+				    v2m->res.start + V2M_MSI_SETSPI_NS);
+	if (err)
+		return err;
+
 	for (i = 0; i < nr_irqs; i++) {
 		err = gicv2m_irq_gic_domain_alloc(domain, virq + i, hwirq + i);
 		if (err)
Index: linux-5.0.19-rt11/drivers/irqchip/irq-gic-v3-its.c
===================================================================
--- linux-5.0.19-rt11.orig/drivers/irqchip/irq-gic-v3-its.c
+++ linux-5.0.19-rt11/drivers/irqchip/irq-gic-v3-its.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1182 @ static void its_irq_compose_msi_msg(stru
 	msg->address_hi		= upper_32_bits(addr);
 	msg->data		= its_get_event_id(d);
 
-	iommu_dma_map_msi_msg(d->irq, msg);
+	iommu_dma_compose_msi_msg(irq_data_get_msi_desc(d), msg);
 }
 
 static int its_irq_set_irqchip_state(struct irq_data *d,
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2565 @ static int its_irq_domain_alloc(struct i
 {
 	msi_alloc_info_t *info = args;
 	struct its_device *its_dev = info->scratchpad[0].ptr;
+	struct its_node *its = its_dev->its;
 	irq_hw_number_t hwirq;
 	int err;
 	int i;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2574 @ static int its_irq_domain_alloc(struct i
 	if (err)
 		return err;
 
+	err = iommu_dma_prepare_msi(info->desc, its->get_msi_base(its_dev));
+	if (err)
+		return err;
+
 	for (i = 0; i < nr_irqs; i++) {
 		err = its_irq_gic_domain_alloc(domain, virq + i, hwirq + i);
 		if (err)
Index: linux-5.0.19-rt11/drivers/irqchip/irq-gic-v3-mbi.c
===================================================================
--- linux-5.0.19-rt11.orig/drivers/irqchip/irq-gic-v3-mbi.c
+++ linux-5.0.19-rt11/drivers/irqchip/irq-gic-v3-mbi.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:87 @ static void mbi_free_msi(struct mbi_rang
 static int mbi_irq_domain_alloc(struct irq_domain *domain, unsigned int virq,
 				   unsigned int nr_irqs, void *args)
 {
+	msi_alloc_info_t *info = args;
 	struct mbi_range *mbi = NULL;
 	int hwirq, offset, i, err = 0;
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:108 @ static int mbi_irq_domain_alloc(struct i
 
 	hwirq = mbi->spi_start + offset;
 
+	err = iommu_dma_prepare_msi(info->desc,
+				    mbi_phys_base + GICD_SETSPI_NSR);
+	if (err)
+		return err;
+
 	for (i = 0; i < nr_irqs; i++) {
 		err = mbi_irq_gic_domain_alloc(domain, virq + i, hwirq + i);
 		if (err)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:151 @ static void mbi_compose_msi_msg(struct i
 	msg[0].address_lo = lower_32_bits(mbi_phys_base + GICD_SETSPI_NSR);
 	msg[0].data = data->parent_data->hwirq;
 
-	iommu_dma_map_msi_msg(data->irq, msg);
+	iommu_dma_compose_msi_msg(irq_data_get_msi_desc(data), msg);
 }
 
 #ifdef CONFIG_PCI_MSI
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:211 @ static void mbi_compose_mbi_msg(struct i
 	msg[1].address_lo = lower_32_bits(mbi_phys_base + GICD_CLRSPI_NSR);
 	msg[1].data = data->parent_data->hwirq;
 
-	iommu_dma_map_msi_msg(data->irq, &msg[1]);
+	iommu_dma_compose_msi_msg(irq_data_get_msi_desc(data), &msg[1]);
 }
 
 /* Platform-MSI specific irqchip */
Index: linux-5.0.19-rt11/drivers/irqchip/irq-ls-scfg-msi.c
===================================================================
--- linux-5.0.19-rt11.orig/drivers/irqchip/irq-ls-scfg-msi.c
+++ linux-5.0.19-rt11/drivers/irqchip/irq-ls-scfg-msi.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:103 @ static void ls_scfg_msi_compose_msg(stru
 		msg->data |= cpumask_first(mask);
 	}
 
-	iommu_dma_map_msi_msg(data->irq, msg);
+	iommu_dma_compose_msi_msg(irq_data_get_msi_desc(data), msg);
 }
 
 static int ls_scfg_msi_set_affinity(struct irq_data *irq_data,
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:144 @ static int ls_scfg_msi_domain_irq_alloc(
 					unsigned int nr_irqs,
 					void *args)
 {
+	msi_alloc_info_t *info = args;
 	struct ls_scfg_msi *msi_data = domain->host_data;
 	int pos, err = 0;
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:161 @ static int ls_scfg_msi_domain_irq_alloc(
 	if (err)
 		return err;
 
+	err = iommu_dma_prepare_msi(info->desc, msi_data->msiir_addr);
+	if (err)
+		return err;
+
 	irq_domain_set_info(domain, virq, pos,
 			    &ls_scfg_msi_parent_chip, msi_data,
 			    handle_simple_irq, NULL, NULL);
Index: linux-5.0.19-rt11/drivers/leds/trigger/Kconfig
===================================================================
--- linux-5.0.19-rt11.orig/drivers/leds/trigger/Kconfig
+++ linux-5.0.19-rt11/drivers/leds/trigger/Kconfig
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:66 @ config LEDS_TRIGGER_BACKLIGHT
 
 config LEDS_TRIGGER_CPU
 	bool "LED CPU Trigger"
+	depends on !PREEMPT_RT_BASE
 	help
 	  This allows LEDs to be controlled by active CPUs. This shows
 	  the active CPUs across an array of LEDs so you can see which
Index: linux-5.0.19-rt11/drivers/md/bcache/Kconfig
===================================================================
--- linux-5.0.19-rt11.orig/drivers/md/bcache/Kconfig
+++ linux-5.0.19-rt11/drivers/md/bcache/Kconfig
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4 @
 
 config BCACHE
 	tristate "Block device as cache"
+	depends on !PREEMPT_RT_FULL
 	select CRC64
 	help
 	Allows a block device to be used as cache for other devices; uses
Index: linux-5.0.19-rt11/drivers/md/raid5.c
===================================================================
--- linux-5.0.19-rt11.orig/drivers/md/raid5.c
+++ linux-5.0.19-rt11/drivers/md/raid5.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2072 @ static void raid_run_ops(struct stripe_h
 	struct raid5_percpu *percpu;
 	unsigned long cpu;
 
-	cpu = get_cpu();
+	cpu = get_cpu_light();
 	percpu = per_cpu_ptr(conf->percpu, cpu);
+	spin_lock(&percpu->lock);
 	if (test_bit(STRIPE_OP_BIOFILL, &ops_request)) {
 		ops_run_biofill(sh);
 		overlap_clear++;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2133 @ static void raid_run_ops(struct stripe_h
 			if (test_and_clear_bit(R5_Overlap, &dev->flags))
 				wake_up(&sh->raid_conf->wait_for_overlap);
 		}
-	put_cpu();
+	spin_unlock(&percpu->lock);
+	put_cpu_light();
 }
 
 static void free_stripe(struct kmem_cache *sc, struct stripe_head *sh)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:6828 @ static int raid456_cpu_up_prepare(unsign
 			__func__, cpu);
 		return -ENOMEM;
 	}
+	spin_lock_init(&per_cpu_ptr(conf->percpu, cpu)->lock);
 	return 0;
 }
 
Index: linux-5.0.19-rt11/drivers/md/raid5.h
===================================================================
--- linux-5.0.19-rt11.orig/drivers/md/raid5.h
+++ linux-5.0.19-rt11/drivers/md/raid5.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:640 @ struct r5conf {
 	int			recovery_disabled;
 	/* per cpu variables */
 	struct raid5_percpu {
+		spinlock_t	lock;		/* Protection for -RT */
 		struct page	*spare_page; /* Used when checking P/Q in raid6 */
 		struct flex_array *scribble;   /* space for constructing buffer
 					      * lists and performing address
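
The raid5 changes above swap get_cpu()/put_cpu() for get_cpu_light()/put_cpu_light() and serialize the per-CPU scratch data with the new percpu->lock, since on PREEMPT_RT the long raid_run_ops() section cannot run with preemption disabled. The following is a minimal userspace sketch of that idea only - per-slot data guarded by a per-slot lock rather than by non-preemptibility - using pthreads and made-up names, not kernel code:

	/* Illustrative userspace analogue: per-"CPU" state protected by a
	 * per-slot lock instead of by disabling preemption. Assumes Linux
	 * (sched_getcpu()) and POSIX threads; build with -pthread. */
	#define _GNU_SOURCE
	#include <pthread.h>
	#include <sched.h>
	#include <stdio.h>

	#define MAX_SLOTS 64

	struct slot {
		pthread_mutex_t lock;		/* plays the role of percpu->lock */
		unsigned long scratch;		/* plays the role of spare_page etc. */
	};

	static struct slot slots[MAX_SLOTS];

	static void run_ops(void)
	{
		int cpu = sched_getcpu();
		struct slot *s = &slots[(cpu < 0 ? 0 : cpu) % MAX_SLOTS];

		/* Serialize against anyone else who lands on the same slot,
		 * instead of assuming we cannot be preempted while using it. */
		pthread_mutex_lock(&s->lock);
		s->scratch++;			/* the "long" work happens here */
		pthread_mutex_unlock(&s->lock);
	}

	int main(void)
	{
		for (int i = 0; i < MAX_SLOTS; i++)
			pthread_mutex_init(&slots[i].lock, NULL);
		run_ops();
		printf("slot work done\n");
		return 0;
	}
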
Index: linux-5.0.19-rt11/drivers/misc/Kconfig
===================================================================
--- linux-5.0.19-rt11.orig/drivers/misc/Kconfig
+++ linux-5.0.19-rt11/drivers/misc/Kconfig
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:62 @ config ATMEL_TCLIB
 	  blocks found on many Atmel processors.  This facilitates using
 	  these blocks by different drivers despite processor differences.
 
-config ATMEL_TCB_CLKSRC
-	bool "TC Block Clocksource"
-	depends on ATMEL_TCLIB
-	default y
-	help
-	  Select this to get a high precision clocksource based on a
-	  TC block with a 5+ MHz base clock rate.  Two timer channels
-	  are combined to make a single 32-bit timer.
-
-	  When GENERIC_CLOCKEVENTS is defined, the third timer channel
-	  may be used as a clock event device supporting oneshot mode
-	  (delays of up to two seconds) based on the 32 KiHz clock.
-
-config ATMEL_TCB_CLKSRC_BLOCK
-	int
-	depends on ATMEL_TCB_CLKSRC
-	default 0
-	range 0 1
-	help
-	  Some chips provide more than one TC block, so you have the
-	  choice of which one to use for the clock framework.  The other
-	  TC can be used for other purposes, such as PWM generation and
-	  interval timing.
-
 config DUMMY_IRQ
 	tristate "Dummy IRQ handler"
 	default n
Index: linux-5.0.19-rt11/drivers/misc/atmel_tclib.c
===================================================================
--- linux-5.0.19-rt11.orig/drivers/misc/atmel_tclib.c
+++ linux-5.0.19-rt11/drivers/misc/atmel_tclib.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1 @
-#include <linux/atmel_tc.h>
 #include <linux/clk.h>
 #include <linux/err.h>
 #include <linux/init.h>
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:12 @
 #include <linux/slab.h>
 #include <linux/export.h>
 #include <linux/of.h>
+#include <soc/at91/atmel_tcb.h>
 
 /*
  * This is a thin library to solve the problem of how to portably allocate
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:114 @ static int __init tc_probe(struct platfo
 	struct resource	*r;
 	unsigned int	i;
 
+	if (of_get_child_count(pdev->dev.of_node))
+		return -EBUSY;
+
 	irq = platform_get_irq(pdev, 0);
 	if (irq < 0)
 		return -EINVAL;
Index: linux-5.0.19-rt11/drivers/net/wireless/intersil/orinoco/orinoco_usb.c
===================================================================
--- linux-5.0.19-rt11.orig/drivers/net/wireless/intersil/orinoco/orinoco_usb.c
+++ linux-5.0.19-rt11/drivers/net/wireless/intersil/orinoco/orinoco_usb.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:696 @ static void ezusb_req_ctx_wait(struct ez
 			while (!ctx->done.done && msecs--)
 				udelay(1000);
 		} else {
-			wait_event_interruptible(ctx->done.wait,
-						 ctx->done.done);
+			swait_event_interruptible_exclusive(ctx->done.wait,
+							    ctx->done.done);
 		}
 		break;
 	default:
Index: linux-5.0.19-rt11/drivers/of/base.c
===================================================================
--- linux-5.0.19-rt11.orig/drivers/of/base.c
+++ linux-5.0.19-rt11/drivers/of/base.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:141 @ static u32 phandle_cache_mask;
 /*
  * Caller must hold devtree_lock.
  */
-static void __of_free_phandle_cache(void)
+static struct device_node** __of_free_phandle_cache(void)
 {
 	u32 cache_entries = phandle_cache_mask + 1;
 	u32 k;
+	struct device_node **shadow;
 
 	if (!phandle_cache)
-		return;
+		return NULL;
 
 	for (k = 0; k < cache_entries; k++)
 		of_node_put(phandle_cache[k]);
 
-	kfree(phandle_cache);
+	shadow = phandle_cache;
 	phandle_cache = NULL;
+	return shadow;
 }
 
 int of_free_phandle_cache(void)
 {
 	unsigned long flags;
+	struct device_node **shadow;
 
 	raw_spin_lock_irqsave(&devtree_lock, flags);
 
-	__of_free_phandle_cache();
+	shadow = __of_free_phandle_cache();
 
 	raw_spin_unlock_irqrestore(&devtree_lock, flags);
-
+	kfree(shadow);
 	return 0;
 }
 #if !defined(CONFIG_MODULES)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:203 @ void of_populate_phandle_cache(void)
 	u32 cache_entries;
 	struct device_node *np;
 	u32 phandles = 0;
+	struct device_node **shadow;
 
 	raw_spin_lock_irqsave(&devtree_lock, flags);
 
-	__of_free_phandle_cache();
+	shadow = __of_free_phandle_cache();
 
 	for_each_of_allnodes(np)
 		if (np->phandle && np->phandle != OF_PHANDLE_ILLEGAL)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:215 @ void of_populate_phandle_cache(void)
 
 	if (!phandles)
 		goto out;
+	raw_spin_unlock_irqrestore(&devtree_lock, flags);
 
 	cache_entries = roundup_pow_of_two(phandles);
 	phandle_cache_mask = cache_entries - 1;
 
 	phandle_cache = kcalloc(cache_entries, sizeof(*phandle_cache),
 				GFP_ATOMIC);
+	raw_spin_lock_irqsave(&devtree_lock, flags);
 	if (!phandle_cache)
 		goto out;
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:234 @ void of_populate_phandle_cache(void)
 
 out:
 	raw_spin_unlock_irqrestore(&devtree_lock, flags);
+	kfree(shadow);
 }
 
 void __init of_core_init(void)
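
In the of/base.c hunks, __of_free_phandle_cache() now only detaches the cache pointer and hands it back, so the kfree() (and the kcalloc() in of_populate_phandle_cache()) happen outside the raw devtree_lock section, where sleeping is allowed on PREEMPT_RT. A small userspace sketch of the detach-under-lock, free-after-unlock pattern, with hypothetical names:

	/* Userspace analogue of "detach under the lock, free after dropping
	 * it"; build with -pthread. */
	#include <pthread.h>
	#include <stdlib.h>

	static pthread_mutex_t cache_lock = PTHREAD_MUTEX_INITIALIZER;
	static int *cache;

	/* Caller must hold cache_lock; only detaches, never frees. */
	static int *detach_cache(void)
	{
		int *shadow = cache;

		cache = NULL;
		return shadow;
	}

	static void drop_cache(void)
	{
		int *shadow;

		pthread_mutex_lock(&cache_lock);
		shadow = detach_cache();
		pthread_mutex_unlock(&cache_lock);

		/* The possibly-sleeping release happens outside the critical
		 * section, mirroring kfree(shadow) after devtree_lock is
		 * released in the patched of_free_phandle_cache(). */
		free(shadow);
	}

	int main(void)
	{
		cache = malloc(16 * sizeof(*cache));
		drop_cache();
		return 0;
	}
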
Index: linux-5.0.19-rt11/drivers/pci/switch/switchtec.c
===================================================================
--- linux-5.0.19-rt11.orig/drivers/pci/switch/switchtec.c
+++ linux-5.0.19-rt11/drivers/pci/switch/switchtec.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:51 @ struct switchtec_user {
 
 	enum mrpc_state state;
 
-	struct completion comp;
+	wait_queue_head_t cmd_comp;
 	struct kref kref;
 	struct list_head list;
 
+	bool cmd_done;
 	u32 cmd;
 	u32 status;
 	u32 return_code;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:77 @ static struct switchtec_user *stuser_cre
 	stuser->stdev = stdev;
 	kref_init(&stuser->kref);
 	INIT_LIST_HEAD(&stuser->list);
-	init_completion(&stuser->comp);
+	init_waitqueue_head(&stuser->cmd_comp);
 	stuser->event_cnt = atomic_read(&stdev->event_cnt);
 
 	dev_dbg(&stdev->dev, "%s: %p\n", __func__, stuser);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:175 @ static int mrpc_queue_cmd(struct switcht
 	kref_get(&stuser->kref);
 	stuser->read_len = sizeof(stuser->data);
 	stuser_set_state(stuser, MRPC_QUEUED);
-	init_completion(&stuser->comp);
+	stuser->cmd_done = false;
 	list_add_tail(&stuser->list, &stdev->mrpc_queue);
 
 	mrpc_cmd_submit(stdev);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:222 @ static void mrpc_complete_cmd(struct swi
 		memcpy_fromio(stuser->data, &stdev->mmio_mrpc->output_data,
 			      stuser->read_len);
 out:
-	complete_all(&stuser->comp);
+	stuser->cmd_done = true;
+	wake_up_interruptible(&stuser->cmd_comp);
 	list_del_init(&stuser->list);
 	stuser_put(stuser);
 	stdev->mrpc_busy = 0;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:395 @ static int switchtec_dev_open(struct ino
 		return PTR_ERR(stuser);
 
 	filp->private_data = stuser;
-	nonseekable_open(inode, filp);
+	stream_open(inode, filp);
 
 	dev_dbg(&stdev->dev, "%s: %p\n", __func__, stuser);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:495 @ static ssize_t switchtec_dev_read(struct
 	mutex_unlock(&stdev->mrpc_mutex);
 
 	if (filp->f_flags & O_NONBLOCK) {
-		if (!try_wait_for_completion(&stuser->comp))
+		if (!READ_ONCE(stuser->cmd_done))
 			return -EAGAIN;
 	} else {
-		rc = wait_for_completion_interruptible(&stuser->comp);
+		rc = wait_event_interruptible(stuser->cmd_comp,
+					      stuser->cmd_done);
 		if (rc < 0)
 			return rc;
 	}
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:547 @ static __poll_t switchtec_dev_poll(struc
 	struct switchtec_dev *stdev = stuser->stdev;
 	__poll_t ret = 0;
 
-	poll_wait(filp, &stuser->comp.wait, wait);
+	poll_wait(filp, &stuser->cmd_comp, wait);
 	poll_wait(filp, &stdev->event_wq, wait);
 
 	if (lock_mutex_and_test_alive(stdev))
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:555 @ static __poll_t switchtec_dev_poll(struc
 
 	mutex_unlock(&stdev->mrpc_mutex);
 
-	if (try_wait_for_completion(&stuser->comp))
+	if (READ_ONCE(stuser->cmd_done))
 		ret |= EPOLLIN | EPOLLRDNORM;
 
 	if (stuser->event_cnt != atomic_read(&stdev->event_cnt))
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1093 @ static void stdev_kill(struct switchtec_
 
 	/* Wake up and kill any users waiting on an MRPC request */
 	list_for_each_entry_safe(stuser, tmpuser, &stdev->mrpc_queue, list) {
-		complete_all(&stuser->comp);
+		stuser->cmd_done = true;
+		wake_up_interruptible(&stuser->cmd_comp);
 		list_del_init(&stuser->list);
 		stuser_put(stuser);
 	}
Index: linux-5.0.19-rt11/drivers/pwm/pwm-atmel-tcb.c
===================================================================
--- linux-5.0.19-rt11.orig/drivers/pwm/pwm-atmel-tcb.c
+++ linux-5.0.19-rt11/drivers/pwm/pwm-atmel-tcb.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:20 @
 #include <linux/ioport.h>
 #include <linux/io.h>
 #include <linux/platform_device.h>
-#include <linux/atmel_tc.h>
 #include <linux/pwm.h>
 #include <linux/of_device.h>
 #include <linux/slab.h>
+#include <soc/at91/atmel_tcb.h>
 
 #define NPWM	6
 
Index: linux-5.0.19-rt11/drivers/scsi/fcoe/fcoe.c
===================================================================
--- linux-5.0.19-rt11.orig/drivers/scsi/fcoe/fcoe.c
+++ linux-5.0.19-rt11/drivers/scsi/fcoe/fcoe.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1461 @ err2:
 static int fcoe_alloc_paged_crc_eof(struct sk_buff *skb, int tlen)
 {
 	struct fcoe_percpu_s *fps;
-	int rc;
+	int rc, cpu = get_cpu_light();
 
-	fps = &get_cpu_var(fcoe_percpu);
+	fps = &per_cpu(fcoe_percpu, cpu);
 	rc = fcoe_get_paged_crc_eof(skb, tlen, fps);
-	put_cpu_var(fcoe_percpu);
+	put_cpu_light();
 
 	return rc;
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1652 @ static inline int fcoe_filter_frames(str
 		return 0;
 	}
 
-	stats = per_cpu_ptr(lport->stats, get_cpu());
+	stats = per_cpu_ptr(lport->stats, get_cpu_light());
 	stats->InvalidCRCCount++;
 	if (stats->InvalidCRCCount < 5)
 		printk(KERN_WARNING "fcoe: dropping frame with CRC error\n");
-	put_cpu();
+	put_cpu_light();
 	return -EINVAL;
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1697 @ static void fcoe_recv_frame(struct sk_bu
 	 */
 	hp = (struct fcoe_hdr *) skb_network_header(skb);
 
-	stats = per_cpu_ptr(lport->stats, get_cpu());
+	stats = per_cpu_ptr(lport->stats, get_cpu_light());
 	if (unlikely(FC_FCOE_DECAPS_VER(hp) != FC_FCOE_VER)) {
 		if (stats->ErrorFrames < 5)
 			printk(KERN_WARNING "fcoe: FCoE version "
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1729 @ static void fcoe_recv_frame(struct sk_bu
 		goto drop;
 
 	if (!fcoe_filter_frames(lport, fp)) {
-		put_cpu();
+		put_cpu_light();
 		fc_exch_recv(lport, fp);
 		return;
 	}
 drop:
 	stats->ErrorFrames++;
-	put_cpu();
+	put_cpu_light();
 	kfree_skb(skb);
 }
 
Index: linux-5.0.19-rt11/drivers/scsi/fcoe/fcoe_ctlr.c
===================================================================
--- linux-5.0.19-rt11.orig/drivers/scsi/fcoe/fcoe_ctlr.c
+++ linux-5.0.19-rt11/drivers/scsi/fcoe/fcoe_ctlr.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:841 @ static unsigned long fcoe_ctlr_age_fcfs(
 
 	INIT_LIST_HEAD(&del_list);
 
-	stats = per_cpu_ptr(fip->lp->stats, get_cpu());
+	stats = per_cpu_ptr(fip->lp->stats, get_cpu_light());
 
 	list_for_each_entry_safe(fcf, next, &fip->fcfs, list) {
 		deadline = fcf->time + fcf->fka_period + fcf->fka_period / 2;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:877 @ static unsigned long fcoe_ctlr_age_fcfs(
 				sel_time = fcf->time;
 		}
 	}
-	put_cpu();
+	put_cpu_light();
 
 	list_for_each_entry_safe(fcf, next, &del_list, list) {
 		/* Removes fcf from current list */
Index: linux-5.0.19-rt11/drivers/scsi/libfc/fc_exch.c
===================================================================
--- linux-5.0.19-rt11.orig/drivers/scsi/libfc/fc_exch.c
+++ linux-5.0.19-rt11/drivers/scsi/libfc/fc_exch.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:836 @ static struct fc_exch *fc_exch_em_alloc(
 	}
 	memset(ep, 0, sizeof(*ep));
 
-	cpu = get_cpu();
+	cpu = get_cpu_light();
 	pool = per_cpu_ptr(mp->pool, cpu);
 	spin_lock_bh(&pool->lock);
-	put_cpu();
+	put_cpu_light();
 
 	/* peek cache of free slot */
 	if (pool->left != FC_XID_UNKNOWN) {
Index: linux-5.0.19-rt11/drivers/staging/android/vsoc.c
===================================================================
--- linux-5.0.19-rt11.orig/drivers/staging/android/vsoc.c
+++ linux-5.0.19-rt11/drivers/staging/android/vsoc.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:440 @ static int handle_vsoc_cond_wait(struct
 			return -EINVAL;
 		wake_time = ktime_set(arg->wake_time_sec, arg->wake_time_nsec);
 
-		hrtimer_init_on_stack(&to->timer, CLOCK_MONOTONIC,
-				      HRTIMER_MODE_ABS);
+		hrtimer_init_sleeper_on_stack(to, CLOCK_MONOTONIC,
+					      HRTIMER_MODE_ABS, current);
 		hrtimer_set_expires_range_ns(&to->timer, wake_time,
 					     current->timer_slack_ns);
-
-		hrtimer_init_sleeper(to, current);
 	}
 
 	while (1) {
Index: linux-5.0.19-rt11/drivers/tty/serial/8250/8250.h
===================================================================
--- linux-5.0.19-rt11.orig/drivers/tty/serial/8250/8250.h
+++ linux-5.0.19-rt11/drivers/tty/serial/8250/8250.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:258 @ static inline int serial_index(struct ua
 {
 	return port->minor - 64;
 }
+
+void set_ier(struct uart_8250_port *up, unsigned char ier);
+void clear_ier(struct uart_8250_port *up);
+void restore_ier(struct uart_8250_port *up);
Index: linux-5.0.19-rt11/drivers/tty/serial/8250/8250_core.c
===================================================================
--- linux-5.0.19-rt11.orig/drivers/tty/serial/8250/8250_core.c
+++ linux-5.0.19-rt11/drivers/tty/serial/8250/8250_core.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:57 @ static struct uart_driver serial8250_reg
 
 static unsigned int skip_txen_test; /* force skip of txen test at init time */
 
-#define PASS_LIMIT	512
+/*
+ * On -rt we can legitimately see more delays, so don't
+ * drop work spuriously and spam the syslog:
+ */
+#ifdef CONFIG_PREEMPT_RT_FULL
+# define PASS_LIMIT	1000000
+#else
+# define PASS_LIMIT	512
+#endif
 
 #include <asm/serial.h>
 /*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:277 @ static void serial8250_timeout(struct ti
 static void serial8250_backup_timeout(struct timer_list *t)
 {
 	struct uart_8250_port *up = from_timer(up, t, timer);
-	unsigned int iir, ier = 0, lsr;
+	unsigned int iir, lsr;
 	unsigned long flags;
 
 	spin_lock_irqsave(&up->port.lock, flags);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:286 @ static void serial8250_backup_timeout(st
 	 * Must disable interrupts or else we risk racing with the interrupt
 	 * based handler.
 	 */
-	if (up->port.irq) {
-		ier = serial_in(up, UART_IER);
-		serial_out(up, UART_IER, 0);
-	}
+	if (up->port.irq)
+		clear_ier(up);
 
 	iir = serial_in(up, UART_IIR);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:310 @ static void serial8250_backup_timeout(st
 		serial8250_tx_chars(up);
 
 	if (up->port.irq)
-		serial_out(up, UART_IER, ier);
+		restore_ier(up);
 
 	spin_unlock_irqrestore(&up->port.lock, flags);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:588 @ serial8250_register_ports(struct uart_dr
 
 #ifdef CONFIG_SERIAL_8250_CONSOLE
 
+static void univ8250_console_write_atomic(struct console *co, const char *s,
+					  unsigned int count)
+{
+	struct uart_8250_port *up = &serial8250_ports[co->index];
+
+	serial8250_console_write_atomic(up, s, count);
+}
+
 static void univ8250_console_write(struct console *co, const char *s,
 				   unsigned int count)
 {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:681 @ static int univ8250_console_match(struct
 
 static struct console univ8250_console = {
 	.name		= "ttyS",
+	.write_atomic	= univ8250_console_write_atomic,
 	.write		= univ8250_console_write,
 	.device		= uart_console_device,
 	.setup		= univ8250_console_setup,
Index: linux-5.0.19-rt11/drivers/tty/serial/8250/8250_dma.c
===================================================================
--- linux-5.0.19-rt11.orig/drivers/tty/serial/8250/8250_dma.c
+++ linux-5.0.19-rt11/drivers/tty/serial/8250/8250_dma.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:39 @ static void __dma_tx_complete(void *para
 	ret = serial8250_tx_dma(p);
 	if (ret) {
 		p->ier |= UART_IER_THRI;
-		serial_port_out(&p->port, UART_IER, p->ier);
+		set_ier(p, p->ier);
 	}
 
 	spin_unlock_irqrestore(&p->port.lock, flags);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:104 @ int serial8250_tx_dma(struct uart_8250_p
 	if (dma->tx_err) {
 		dma->tx_err = 0;
 		if (p->ier & UART_IER_THRI) {
-			p->ier &= ~UART_IER_THRI;
-			serial_out(p, UART_IER, p->ier);
+			set_ier(p, p->ier);
 		}
 	}
 	return 0;
Index: linux-5.0.19-rt11/drivers/tty/serial/8250/8250_port.c
===================================================================
--- linux-5.0.19-rt11.orig/drivers/tty/serial/8250/8250_port.c
+++ linux-5.0.19-rt11/drivers/tty/serial/8250/8250_port.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:734 @ static void serial8250_set_sleep(struct
 			serial_out(p, UART_EFR, UART_EFR_ECB);
 			serial_out(p, UART_LCR, 0);
 		}
-		serial_out(p, UART_IER, sleep ? UART_IERX_SLEEP : 0);
+		set_ier(p, sleep ? UART_IERX_SLEEP : 0);
 		if (p->capabilities & UART_CAP_EFR) {
 			serial_out(p, UART_LCR, UART_LCR_CONF_MODE_B);
 			serial_out(p, UART_EFR, efr);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1436 @ static void serial8250_stop_rx(struct ua
 
 	up->ier &= ~(UART_IER_RLSI | UART_IER_RDI);
 	up->port.read_status_mask &= ~UART_LSR_DR;
-	serial_port_out(port, UART_IER, up->ier);
+	set_ier(up, up->ier);
 
 	serial8250_rpm_put(up);
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1454 @ static void __do_stop_tx_rs485(struct ua
 		serial8250_clear_and_reinit_fifos(p);
 
 		p->ier |= UART_IER_RLSI | UART_IER_RDI;
-		serial_port_out(&p->port, UART_IER, p->ier);
+		set_ier(p, p->ier);
 	}
 }
 static enum hrtimer_restart serial8250_em485_handle_stop_tx(struct hrtimer *t)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1507 @ static inline void __do_stop_tx(struct u
 {
 	if (p->ier & UART_IER_THRI) {
 		p->ier &= ~UART_IER_THRI;
-		serial_out(p, UART_IER, p->ier);
+		set_ier(p, p->ier);
 		serial8250_rpm_put_tx(p);
 	}
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1560 @ static inline void __start_tx(struct uar
 
 	if (!(up->ier & UART_IER_THRI)) {
 		up->ier |= UART_IER_THRI;
-		serial_port_out(port, UART_IER, up->ier);
+		set_ier(up, up->ier);
 
 		if (up->bugs & UART_BUG_TXEN) {
 			unsigned char lsr;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1666 @ static void serial8250_disable_ms(struct
 		return;
 
 	up->ier &= ~UART_IER_MSI;
-	serial_port_out(port, UART_IER, up->ier);
+	set_ier(up, up->ier);
 }
 
 static void serial8250_enable_ms(struct uart_port *port)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1680 @ static void serial8250_enable_ms(struct
 	up->ier |= UART_IER_MSI;
 
 	serial8250_rpm_get(up);
-	serial_port_out(port, UART_IER, up->ier);
+	set_ier(up, up->ier);
 	serial8250_rpm_put(up);
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2053 @ static void wait_for_xmitr(struct uart_8
 	}
 }
 
+static atomic_t ier_counter = ATOMIC_INIT(0);
+static atomic_t ier_value = ATOMIC_INIT(0);
+
+void set_ier(struct uart_8250_port *up, unsigned char ier)
+{
+	struct uart_port *port = &up->port;
+	unsigned int flags;
+
+	console_atomic_lock(&flags);
+	if (atomic_read(&ier_counter) > 0)
+		atomic_set(&ier_value, ier);
+	else
+		serial_port_out(port, UART_IER, ier);
+	console_atomic_unlock(flags);
+}
+
+void clear_ier(struct uart_8250_port *up)
+{
+	struct uart_port *port = &up->port;
+	unsigned int ier_cleared = 0;
+	unsigned int flags;
+	unsigned int ier;
+
+	console_atomic_lock(&flags);
+	atomic_inc(&ier_counter);
+	ier = serial_port_in(port, UART_IER);
+	if (up->capabilities & UART_CAP_UUE)
+		ier_cleared = UART_IER_UUE;
+	if (ier != ier_cleared) {
+		serial_port_out(port, UART_IER, ier_cleared);
+		atomic_set(&ier_value, ier);
+	}
+	console_atomic_unlock(flags);
+}
+EXPORT_SYMBOL_GPL(clear_ier);
+
+void restore_ier(struct uart_8250_port *up)
+{
+	struct uart_port *port = &up->port;
+	unsigned int flags;
+
+	console_atomic_lock(&flags);
+	if (atomic_fetch_dec(&ier_counter) == 1)
+		serial_port_out(port, UART_IER, atomic_read(&ier_value));
+	console_atomic_unlock(flags);
+}
+EXPORT_SYMBOL_GPL(restore_ier);
+
 #ifdef CONFIG_CONSOLE_POLL
 /*
  * Console polling routines for writing and reading from the uart while
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2132 @ out:
 static void serial8250_put_poll_char(struct uart_port *port,
 			 unsigned char c)
 {
-	unsigned int ier;
 	struct uart_8250_port *up = up_to_u8250p(port);
 
 	serial8250_rpm_get(up);
-	/*
-	 *	First save the IER then disable the interrupts
-	 */
-	ier = serial_port_in(port, UART_IER);
-	if (up->capabilities & UART_CAP_UUE)
-		serial_port_out(port, UART_IER, UART_IER_UUE);
-	else
-		serial_port_out(port, UART_IER, 0);
+	clear_ier(up);
 
 	wait_for_xmitr(up, BOTH_EMPTY);
 	/*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2148 @ static void serial8250_put_poll_char(str
 	 *	and restore the IER
 	 */
 	wait_for_xmitr(up, BOTH_EMPTY);
-	serial_port_out(port, UART_IER, ier);
+	restore_ier(up);
 	serial8250_rpm_put(up);
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2460 @ void serial8250_do_shutdown(struct uart_
 	 */
 	spin_lock_irqsave(&port->lock, flags);
 	up->ier = 0;
-	serial_port_out(port, UART_IER, 0);
+	set_ier(up, 0);
 	spin_unlock_irqrestore(&port->lock, flags);
 
 	synchronize_irq(port->irq);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2771 @ serial8250_do_set_termios(struct uart_po
 	if (up->capabilities & UART_CAP_RTOIE)
 		up->ier |= UART_IER_RTOIE;
 
-	serial_port_out(port, UART_IER, up->ier);
+	set_ier(up, up->ier);
 
 	if (up->capabilities & UART_CAP_EFR) {
 		unsigned char efr = 0;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3235 @ EXPORT_SYMBOL_GPL(serial8250_set_default
 
 #ifdef CONFIG_SERIAL_8250_CONSOLE
 
-static void serial8250_console_putchar(struct uart_port *port, int ch)
+static void serial8250_console_putchar_locked(struct uart_port *port, int ch)
 {
 	struct uart_8250_port *up = up_to_u8250p(port);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3243 @ static void serial8250_console_putchar(s
 	serial_port_out(port, UART_TX, ch);
 }
 
+static void serial8250_console_putchar(struct uart_port *port, int ch)
+{
+	struct uart_8250_port *up = up_to_u8250p(port);
+	unsigned int flags;
+
+	wait_for_xmitr(up, UART_LSR_THRE);
+
+	console_atomic_lock(&flags);
+	serial8250_console_putchar_locked(port, ch);
+	console_atomic_unlock(flags);
+}
+
 /*
  *	Restore serial console when h/w power-off detected
  */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3276 @ static void serial8250_console_restore(s
 	serial8250_out_MCR(up, UART_MCR_DTR | UART_MCR_RTS);
 }
 
+void serial8250_console_write_atomic(struct uart_8250_port *up,
+				     const char *s, unsigned int count)
+{
+	struct uart_port *port = &up->port;
+	unsigned int flags;
+
+	console_atomic_lock(&flags);
+
+	touch_nmi_watchdog();
+
+	clear_ier(up);
+
+	if (atomic_fetch_inc(&up->console_printing)) {
+		uart_console_write(port, "\n", 1,
+				   serial8250_console_putchar_locked);
+	}
+	uart_console_write(port, s, count, serial8250_console_putchar_locked);
+	atomic_dec(&up->console_printing);
+
+	wait_for_xmitr(up, BOTH_EMPTY);
+	restore_ier(up);
+
+	console_atomic_unlock(flags);
+}
+
 /*
  *	Print a string to the serial port trying not to disturb
  *	any possible real use of the port...
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3312 @ void serial8250_console_write(struct uar
 {
 	struct uart_port *port = &up->port;
 	unsigned long flags;
-	unsigned int ier;
-	int locked = 1;
 
 	touch_nmi_watchdog();
 
 	serial8250_rpm_get(up);
+	spin_lock_irqsave(&port->lock, flags);
 
-	if (oops_in_progress)
-		locked = spin_trylock_irqsave(&port->lock, flags);
-	else
-		spin_lock_irqsave(&port->lock, flags);
-
-	/*
-	 *	First save the IER then disable the interrupts
-	 */
-	ier = serial_port_in(port, UART_IER);
-
-	if (up->capabilities & UART_CAP_UUE)
-		serial_port_out(port, UART_IER, UART_IER_UUE);
-	else
-		serial_port_out(port, UART_IER, 0);
+	clear_ier(up);
 
 	/* check scratch reg to see if port powered off during system sleep */
 	if (up->canary && (up->canary != serial_port_in(port, UART_SCR))) {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3326 @ void serial8250_console_write(struct uar
 		up->canary = 0;
 	}
 
+	atomic_inc(&up->console_printing);
 	uart_console_write(port, s, count, serial8250_console_putchar);
+	atomic_dec(&up->console_printing);
 
 	/*
 	 *	Finally, wait for transmitter to become empty
 	 *	and restore the IER
 	 */
 	wait_for_xmitr(up, BOTH_EMPTY);
-	serial_port_out(port, UART_IER, ier);
+	restore_ier(up);
 
 	/*
 	 *	The receive handling will happen properly because the
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3347 @ void serial8250_console_write(struct uar
 	if (up->msr_saved_flags)
 		serial8250_modem_status(up);
 
-	if (locked)
-		spin_unlock_irqrestore(&port->lock, flags);
+	spin_unlock_irqrestore(&port->lock, flags);
 	serial8250_rpm_put(up);
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3368 @ static unsigned int probe_baud(struct ua
 
 int serial8250_console_setup(struct uart_port *port, char *options, bool probe)
 {
+	struct uart_8250_port *up = up_to_u8250p(port);
 	int baud = 9600;
 	int bits = 8;
 	int parity = 'n';
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3377 @ int serial8250_console_setup(struct uart
 	if (!port->iobase && !port->membase)
 		return -ENODEV;
 
+	atomic_set(&up->console_printing, 0);
+
 	if (options)
 		uart_parse_options(options, &baud, &parity, &bits, &flow);
 	else if (probe)
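
The set_ier()/clear_ier()/restore_ier() helpers added to 8250_port.c keep a nesting count so that several paths (console write, write_atomic, polling, the backup timer) can mask the UART's IER and later restore the most recently requested value exactly once. A stripped-down sketch of just that counting scheme, against a fake register and without the console_atomic_lock() serialization, assuming C11 atomics:

	/* Sketch of the nested mask/restore counting only; not the real
	 * 8250 code. */
	#include <stdatomic.h>
	#include <stdio.h>

	static unsigned int hw_ier;		/* stands in for UART_IER */
	static atomic_int mask_depth;		/* like ier_counter       */
	static atomic_uint saved_ier;		/* like ier_value         */

	static void set_ier(unsigned int ier)
	{
		if (atomic_load(&mask_depth) > 0)
			atomic_store(&saved_ier, ier);	/* defer while masked */
		else
			hw_ier = ier;			/* write through */
	}

	static void clear_ier(void)
	{
		atomic_fetch_add(&mask_depth, 1);
		if (hw_ier != 0) {
			atomic_store(&saved_ier, hw_ier);
			hw_ier = 0;			/* mask all interrupts */
		}
	}

	static void restore_ier(void)
	{
		if (atomic_fetch_sub(&mask_depth, 1) == 1)
			hw_ier = atomic_load(&saved_ier);
	}

	int main(void)
	{
		set_ier(0x0f);
		clear_ier();		/* e.g. an atomic console write starts */
		clear_ier();		/* nested masking from another path    */
		set_ier(0x05);		/* deferred: only the saved value changes */
		restore_ier();
		restore_ier();		/* last restore writes 0x05 back */
		printf("IER = 0x%02x\n", hw_ier);
		return 0;
	}
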
Index: linux-5.0.19-rt11/drivers/tty/serial/amba-pl011.c
===================================================================
--- linux-5.0.19-rt11.orig/drivers/tty/serial/amba-pl011.c
+++ linux-5.0.19-rt11/drivers/tty/serial/amba-pl011.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2214 @ pl011_console_write(struct console *co,
 {
 	struct uart_amba_port *uap = amba_ports[co->index];
 	unsigned int old_cr = 0, new_cr;
-	unsigned long flags;
+	unsigned long flags = 0;
 	int locked = 1;
 
 	clk_enable(uap->clk);
 
-	local_irq_save(flags);
+	/*
+	 * local_irq_save(flags);
+	 *
+	 * This local_irq_save() is nonsense. If we come in via sysrq
+	 * handling then interrupts are already disabled. Aside from
+	 * that, the port.sysrq check is racy on SMP regardless.
+	 */
 	if (uap->port.sysrq)
 		locked = 0;
 	else if (oops_in_progress)
-		locked = spin_trylock(&uap->port.lock);
+		locked = spin_trylock_irqsave(&uap->port.lock, flags);
 	else
-		spin_lock(&uap->port.lock);
+		spin_lock_irqsave(&uap->port.lock, flags);
 
 	/*
 	 *	First save the CR then disable the interrupts
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2257 @ pl011_console_write(struct console *co,
 		pl011_write(old_cr, uap, REG_CR);
 
 	if (locked)
-		spin_unlock(&uap->port.lock);
-	local_irq_restore(flags);
+		spin_unlock_irqrestore(&uap->port.lock, flags);
 
 	clk_disable(uap->clk);
 }
Index: linux-5.0.19-rt11/drivers/tty/serial/omap-serial.c
===================================================================
--- linux-5.0.19-rt11.orig/drivers/tty/serial/omap-serial.c
+++ linux-5.0.19-rt11/drivers/tty/serial/omap-serial.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1310 @ serial_omap_console_write(struct console
 
 	pm_runtime_get_sync(up->dev);
 
-	local_irq_save(flags);
-	if (up->port.sysrq)
-		locked = 0;
-	else if (oops_in_progress)
-		locked = spin_trylock(&up->port.lock);
+	if (up->port.sysrq || oops_in_progress)
+		locked = spin_trylock_irqsave(&up->port.lock, flags);
 	else
-		spin_lock(&up->port.lock);
+		spin_lock_irqsave(&up->port.lock, flags);
 
 	/*
 	 * First save the IER then disable the interrupts
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1342 @ serial_omap_console_write(struct console
 	pm_runtime_mark_last_busy(up->dev);
 	pm_runtime_put_autosuspend(up->dev);
 	if (locked)
-		spin_unlock(&up->port.lock);
-	local_irq_restore(flags);
+		spin_unlock_irqrestore(&up->port.lock, flags);
 }
 
 static int __init
Index: linux-5.0.19-rt11/drivers/tty/sysrq.c
===================================================================
--- linux-5.0.19-rt11.orig/drivers/tty/sysrq.c
+++ linux-5.0.19-rt11/drivers/tty/sysrq.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:211 @ static struct sysrq_key_op sysrq_showloc
 #endif
 
 #ifdef CONFIG_SMP
-static DEFINE_SPINLOCK(show_lock);
+static DEFINE_RAW_SPINLOCK(show_lock);
 
 static void showacpu(void *dummy)
 {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:221 @ static void showacpu(void *dummy)
 	if (idle_cpu(smp_processor_id()))
 		return;
 
-	spin_lock_irqsave(&show_lock, flags);
+	raw_spin_lock_irqsave(&show_lock, flags);
 	pr_info("CPU%d:\n", smp_processor_id());
 	show_stack(NULL, NULL);
-	spin_unlock_irqrestore(&show_lock, flags);
+	raw_spin_unlock_irqrestore(&show_lock, flags);
 }
 
 static void sysrq_showregs_othercpus(struct work_struct *dummy)
Index: linux-5.0.19-rt11/drivers/usb/gadget/function/f_fs.c
===================================================================
--- linux-5.0.19-rt11.orig/drivers/usb/gadget/function/f_fs.c
+++ linux-5.0.19-rt11/drivers/usb/gadget/function/f_fs.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1708 @ static void ffs_data_put(struct ffs_data
 		pr_info("%s(): freeing\n", __func__);
 		ffs_data_clear(ffs);
 		BUG_ON(waitqueue_active(&ffs->ev.waitq) ||
-		       waitqueue_active(&ffs->ep0req_completion.wait) ||
+		       swait_active(&ffs->ep0req_completion.wait) ||
 		       waitqueue_active(&ffs->wait));
 		destroy_workqueue(ffs->io_completion_wq);
 		kfree(ffs->dev_name);
Index: linux-5.0.19-rt11/drivers/usb/gadget/legacy/inode.c
===================================================================
--- linux-5.0.19-rt11.orig/drivers/usb/gadget/legacy/inode.c
+++ linux-5.0.19-rt11/drivers/usb/gadget/legacy/inode.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:346 @ ep_io (struct ep_data *epdata, void *buf
 	spin_unlock_irq (&epdata->dev->lock);
 
 	if (likely (value == 0)) {
-		value = wait_event_interruptible (done.wait, done.done);
+		value = swait_event_interruptible_exclusive(done.wait, done.done);
 		if (value != 0) {
 			spin_lock_irq (&epdata->dev->lock);
 			if (likely (epdata->ep != NULL)) {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:355 @ ep_io (struct ep_data *epdata, void *buf
 				usb_ep_dequeue (epdata->ep, epdata->req);
 				spin_unlock_irq (&epdata->dev->lock);
 
-				wait_event (done.wait, done.done);
+				swait_event_exclusive(done.wait, done.done);
 				if (epdata->status == -ECONNRESET)
 					epdata->status = -EINTR;
 			} else {
Index: linux-5.0.19-rt11/drivers/watchdog/watchdog_dev.c
===================================================================
--- linux-5.0.19-rt11.orig/drivers/watchdog/watchdog_dev.c
+++ linux-5.0.19-rt11/drivers/watchdog/watchdog_dev.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:150 @ static inline void watchdog_update_worke
 		ktime_t t = watchdog_next_keepalive(wdd);
 
 		if (t > 0)
-			hrtimer_start(&wd_data->timer, t, HRTIMER_MODE_REL);
+			hrtimer_start(&wd_data->timer, t, HRTIMER_MODE_REL_HARD);
 	} else {
 		hrtimer_cancel(&wd_data->timer);
 	}
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:169 @ static int __watchdog_ping(struct watchd
 	if (ktime_after(earliest_keepalive, now)) {
 		hrtimer_start(&wd_data->timer,
 			      ktime_sub(earliest_keepalive, now),
-			      HRTIMER_MODE_REL);
+			      HRTIMER_MODE_REL_HARD);
 		return 0;
 	}
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:948 @ static int watchdog_cdev_register(struct
 		return -ENODEV;
 
 	kthread_init_work(&wd_data->work, watchdog_ping_work);
-	hrtimer_init(&wd_data->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+	hrtimer_init(&wd_data->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
 	wd_data->timer.function = watchdog_timer_expired;
 
 	if (wdd->id == 0) {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:995 @ static int watchdog_cdev_register(struct
 		__module_get(wdd->ops->owner);
 		kref_get(&wd_data->kref);
 		if (handle_boot_enabled)
-			hrtimer_start(&wd_data->timer, 0, HRTIMER_MODE_REL);
+			hrtimer_start(&wd_data->timer, 0, HRTIMER_MODE_REL_HARD);
 		else
 			pr_info("watchdog%d running and kernel based pre-userspace handler disabled\n",
 				wdd->id);
Index: linux-5.0.19-rt11/fs/aio.c
===================================================================
--- linux-5.0.19-rt11.orig/fs/aio.c
+++ linux-5.0.19-rt11/fs/aio.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:130 @ struct kioctx {
 	long			nr_pages;
 
 	struct rcu_work		free_rwork;	/* see free_ioctx() */
+	struct work_struct	free_work;	/* see free_ioctx() */
 
 	/*
 	 * signals when all in-flight requests are done
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:616 @ static void free_ioctx_reqs(struct percp
  * and ctx->users has dropped to 0, so we know no more kiocbs can be submitted -
  * now it's safe to cancel any that need to be.
  */
-static void free_ioctx_users(struct percpu_ref *ref)
+static void free_ioctx_users_work(struct work_struct *work)
 {
-	struct kioctx *ctx = container_of(ref, struct kioctx, users);
+	struct kioctx *ctx = container_of(work, struct kioctx, free_work);
 	struct aio_kiocb *req;
 
 	spin_lock_irq(&ctx->ctx_lock);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:636 @ static void free_ioctx_users(struct perc
 	percpu_ref_put(&ctx->reqs);
 }
 
+static void free_ioctx_users(struct percpu_ref *ref)
+{
+	struct kioctx *ctx = container_of(ref, struct kioctx, users);
+
+	INIT_WORK(&ctx->free_work, free_ioctx_users_work);
+	schedule_work(&ctx->free_work);
+}
+
 static int ioctx_add_table(struct kioctx *ctx, struct mm_struct *mm)
 {
 	unsigned i, new_nr;
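
The fs/aio.c change moves the body of free_ioctx_users() into a work item because the percpu_ref release callback can fire from a context that must not block, while ctx->ctx_lock becomes a sleeping lock on PREEMPT_RT. Below is a rough userspace analogue of that shape: the callback only queues, and a worker thread does the part that may sleep (names and the one-slot queue are invented):

	/* Sketch only; build with -pthread. */
	#include <pthread.h>
	#include <stdbool.h>
	#include <stdio.h>
	#include <unistd.h>

	static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
	static pthread_cond_t  q_cond = PTHREAD_COND_INITIALIZER;
	static void (*pending)(void);		/* one-slot "work queue" */
	static bool stop;

	/* May be called where blocking is not allowed (compare the percpu_ref
	 * release callback): it only queues the work and wakes the worker. */
	static void release_cb(void (*fn)(void))
	{
		pthread_mutex_lock(&q_lock);
		pending = fn;
		pthread_cond_signal(&q_cond);
		pthread_mutex_unlock(&q_lock);
	}

	static void teardown(void)
	{
		/* The part that may sleep (taking ctx_lock, cancelling requests
		 * in the real code) runs here, in worker context. */
		printf("teardown ran in worker context\n");
	}

	static void *worker(void *arg)
	{
		(void)arg;
		pthread_mutex_lock(&q_lock);
		while (!stop) {
			while (!pending && !stop)
				pthread_cond_wait(&q_cond, &q_lock);
			if (pending) {
				void (*fn)(void) = pending;

				pending = NULL;
				pthread_mutex_unlock(&q_lock);
				fn();
				pthread_mutex_lock(&q_lock);
			}
		}
		pthread_mutex_unlock(&q_lock);
		return NULL;
	}

	int main(void)
	{
		pthread_t t;

		pthread_create(&t, NULL, worker, NULL);
		release_cb(teardown);
		sleep(1);			/* crude wait, sketch only */

		pthread_mutex_lock(&q_lock);
		stop = true;
		pthread_cond_signal(&q_cond);
		pthread_mutex_unlock(&q_lock);
		pthread_join(t, NULL);
		return 0;
	}
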
Index: linux-5.0.19-rt11/fs/autofs/expire.c
===================================================================
--- linux-5.0.19-rt11.orig/fs/autofs/expire.c
+++ linux-5.0.19-rt11/fs/autofs/expire.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:11 @
  * option, any later version, incorporated herein by reference.
  */
 
+#include <linux/delay.h>
 #include "autofs_i.h"
 
 /* Check if a dentry can be expired */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:157 @ again:
 			parent = p->d_parent;
 			if (!spin_trylock(&parent->d_lock)) {
 				spin_unlock(&p->d_lock);
-				cpu_relax();
+				cpu_chill();
 				goto relock;
 			}
 			spin_unlock(&p->d_lock);
Index: linux-5.0.19-rt11/fs/buffer.c
===================================================================
--- linux-5.0.19-rt11.orig/fs/buffer.c
+++ linux-5.0.19-rt11/fs/buffer.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:277 @ static void end_buffer_async_read(struct
 	 * decide that the page is now completely done.
 	 */
 	first = page_buffers(page);
-	local_irq_save(flags);
-	bit_spin_lock(BH_Uptodate_Lock, &first->b_state);
+	flags = bh_uptodate_lock_irqsave(first);
 	clear_buffer_async_read(bh);
 	unlock_buffer(bh);
 	tmp = bh;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:290 @ static void end_buffer_async_read(struct
 		}
 		tmp = tmp->b_this_page;
 	} while (tmp != bh);
-	bit_spin_unlock(BH_Uptodate_Lock, &first->b_state);
-	local_irq_restore(flags);
+	bh_uptodate_unlock_irqrestore(first, flags);
 
 	/*
 	 * If none of the buffers had errors and they are all
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:302 @ static void end_buffer_async_read(struct
 	return;
 
 still_busy:
-	bit_spin_unlock(BH_Uptodate_Lock, &first->b_state);
-	local_irq_restore(flags);
-	return;
+	bh_uptodate_unlock_irqrestore(first, flags);
 }
 
 /*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:329 @ void end_buffer_async_write(struct buffe
 	}
 
 	first = page_buffers(page);
-	local_irq_save(flags);
-	bit_spin_lock(BH_Uptodate_Lock, &first->b_state);
+	flags = bh_uptodate_lock_irqsave(first);
 
 	clear_buffer_async_write(bh);
 	unlock_buffer(bh);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:341 @ void end_buffer_async_write(struct buffe
 		}
 		tmp = tmp->b_this_page;
 	}
-	bit_spin_unlock(BH_Uptodate_Lock, &first->b_state);
-	local_irq_restore(flags);
+	bh_uptodate_unlock_irqrestore(first, flags);
 	end_page_writeback(page);
 	return;
 
 still_busy:
-	bit_spin_unlock(BH_Uptodate_Lock, &first->b_state);
-	local_irq_restore(flags);
-	return;
+	bh_uptodate_unlock_irqrestore(first, flags);
 }
 EXPORT_SYMBOL(end_buffer_async_write);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3363 @ struct buffer_head *alloc_buffer_head(gf
 	struct buffer_head *ret = kmem_cache_zalloc(bh_cachep, gfp_flags);
 	if (ret) {
 		INIT_LIST_HEAD(&ret->b_assoc_buffers);
+		buffer_head_init_locks(ret);
 		preempt_disable();
 		__this_cpu_inc(bh_accounting.nr);
 		recalc_bh_state();
Index: linux-5.0.19-rt11/fs/cifs/readdir.c
===================================================================
--- linux-5.0.19-rt11.orig/fs/cifs/readdir.c
+++ linux-5.0.19-rt11/fs/cifs/readdir.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:83 @ cifs_prime_dcache(struct dentry *parent,
 	struct inode *inode;
 	struct super_block *sb = parent->d_sb;
 	struct cifs_sb_info *cifs_sb = CIFS_SB(sb);
-	DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);
+	DECLARE_SWAIT_QUEUE_HEAD_ONSTACK(wq);
 
 	cifs_dbg(FYI, "%s: for %s\n", __func__, name->name);
 
Index: linux-5.0.19-rt11/fs/dcache.c
===================================================================
--- linux-5.0.19-rt11.orig/fs/dcache.c
+++ linux-5.0.19-rt11/fs/dcache.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2419 @ EXPORT_SYMBOL(d_rehash);
 static inline unsigned start_dir_add(struct inode *dir)
 {
 
+	preempt_disable_rt();
 	for (;;) {
-		unsigned n = dir->i_dir_seq;
-		if (!(n & 1) && cmpxchg(&dir->i_dir_seq, n, n + 1) == n)
+		unsigned n = dir->__i_dir_seq;
+		if (!(n & 1) && cmpxchg(&dir->__i_dir_seq, n, n + 1) == n)
 			return n;
 		cpu_relax();
 	}
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2430 @ static inline unsigned start_dir_add(str
 
 static inline void end_dir_add(struct inode *dir, unsigned n)
 {
-	smp_store_release(&dir->i_dir_seq, n + 2);
+	smp_store_release(&dir->__i_dir_seq, n + 2);
+	preempt_enable_rt();
 }
 
 static void d_wait_lookup(struct dentry *dentry)
 {
-	if (d_in_lookup(dentry)) {
-		DECLARE_WAITQUEUE(wait, current);
-		add_wait_queue(dentry->d_wait, &wait);
-		do {
-			set_current_state(TASK_UNINTERRUPTIBLE);
-			spin_unlock(&dentry->d_lock);
-			schedule();
-			spin_lock(&dentry->d_lock);
-		} while (d_in_lookup(dentry));
-	}
+	struct swait_queue __wait;
+
+	if (!d_in_lookup(dentry))
+		return;
+
+	INIT_LIST_HEAD(&__wait.task_list);
+	do {
+		prepare_to_swait_exclusive(dentry->d_wait, &__wait, TASK_UNINTERRUPTIBLE);
+		spin_unlock(&dentry->d_lock);
+		schedule();
+		spin_lock(&dentry->d_lock);
+	} while (d_in_lookup(dentry));
+	finish_swait(dentry->d_wait, &__wait);
 }
 
 struct dentry *d_alloc_parallel(struct dentry *parent,
 				const struct qstr *name,
-				wait_queue_head_t *wq)
+				struct swait_queue_head *wq)
 {
 	unsigned int hash = name->hash;
 	struct hlist_bl_head *b = in_lookup_hash(parent, hash);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2467 @ struct dentry *d_alloc_parallel(struct d
 
 retry:
 	rcu_read_lock();
-	seq = smp_load_acquire(&parent->d_inode->i_dir_seq);
+	seq = smp_load_acquire(&parent->d_inode->__i_dir_seq);
 	r_seq = read_seqbegin(&rename_lock);
 	dentry = __d_lookup_rcu(parent, name, &d_seq);
 	if (unlikely(dentry)) {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2495 @ retry:
 	}
 
 	hlist_bl_lock(b);
-	if (unlikely(READ_ONCE(parent->d_inode->i_dir_seq) != seq)) {
+	if (unlikely(READ_ONCE(parent->d_inode->__i_dir_seq) != seq)) {
 		hlist_bl_unlock(b);
 		rcu_read_unlock();
 		goto retry;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2568 @ void __d_lookup_done(struct dentry *dent
 	hlist_bl_lock(b);
 	dentry->d_flags &= ~DCACHE_PAR_LOOKUP;
 	__hlist_bl_del(&dentry->d_u.d_in_lookup_hash);
-	wake_up_all(dentry->d_wait);
+	swake_up_all(dentry->d_wait);
 	dentry->d_wait = NULL;
 	hlist_bl_unlock(b);
 	INIT_HLIST_NODE(&dentry->d_u.d_alias);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3080 @ __setup("dhash_entries=", set_dhash_entr
 
 static void __init dcache_init_early(void)
 {
+	unsigned int loop;
+
 	/* If hashes are distributed across NUMA nodes, defer
 	 * hash allocation until vmalloc space is available.
 	 */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3098 @ static void __init dcache_init_early(voi
 					NULL,
 					0,
 					0);
+
+	for (loop = 0; loop < (1U << d_hash_shift); loop++)
+		INIT_HLIST_BL_HEAD(dentry_hashtable + loop);
+
 	d_hash_shift = 32 - d_hash_shift;
 }
 
 static void __init dcache_init(void)
 {
+	unsigned int loop;
 	/*
 	 * A constructor could be added for stable state like the lists,
 	 * but it is probably not worth it because of the cache nature
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3131 @ static void __init dcache_init(void)
 					NULL,
 					0,
 					0);
+
+	for (loop = 0; loop < (1U << d_hash_shift); loop++)
+		INIT_HLIST_BL_HEAD(dentry_hashtable + loop);
+
 	d_hash_shift = 32 - d_hash_shift;
 }
 
Index: linux-5.0.19-rt11/fs/eventpoll.c
===================================================================
--- linux-5.0.19-rt11.orig/fs/eventpoll.c
+++ linux-5.0.19-rt11/fs/eventpoll.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:574 @ static int ep_poll_wakeup_proc(void *pri
 
 static void ep_poll_safewake(wait_queue_head_t *wq)
 {
-	int this_cpu = get_cpu();
+	int this_cpu = get_cpu_light();
 
 	ep_call_nested(&poll_safewake_ncalls,
 		       ep_poll_wakeup_proc, NULL, wq, (void *) (long) this_cpu);
 
-	put_cpu();
+	put_cpu_light();
 }
 
 #else
Index: linux-5.0.19-rt11/fs/exec.c
===================================================================
--- linux-5.0.19-rt11.orig/fs/exec.c
+++ linux-5.0.19-rt11/fs/exec.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1034 @ static int exec_mmap(struct mm_struct *m
 		}
 	}
 	task_lock(tsk);
+	preempt_disable_rt();
 	active_mm = tsk->active_mm;
 	tsk->mm = mm;
 	tsk->active_mm = mm;
 	activate_mm(active_mm, mm);
 	tsk->mm->vmacache_seqnum = 0;
 	vmacache_flush(tsk);
+	preempt_enable_rt();
 	task_unlock(tsk);
 	if (old_mm) {
 		up_read(&old_mm->mmap_sem);
Index: linux-5.0.19-rt11/fs/ext4/page-io.c
===================================================================
--- linux-5.0.19-rt11.orig/fs/ext4/page-io.c
+++ linux-5.0.19-rt11/fs/ext4/page-io.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:98 @ static void ext4_finish_bio(struct bio *
 		 * We check all buffers in the page under BH_Uptodate_Lock
 		 * to avoid races with other end io clearing async_write flags
 		 */
-		local_irq_save(flags);
-		bit_spin_lock(BH_Uptodate_Lock, &head->b_state);
+		flags = bh_uptodate_lock_irqsave(head);
 		do {
 			if (bh_offset(bh) < bio_start ||
 			    bh_offset(bh) + bh->b_size > bio_end) {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:110 @ static void ext4_finish_bio(struct bio *
 			if (bio->bi_status)
 				buffer_io_error(bh);
 		} while ((bh = bh->b_this_page) != head);
-		bit_spin_unlock(BH_Uptodate_Lock, &head->b_state);
-		local_irq_restore(flags);
+		bh_uptodate_unlock_irqrestore(head, flags);
 		if (!under_io) {
 #ifdef CONFIG_EXT4_FS_ENCRYPTION
 			if (data_page)
Index: linux-5.0.19-rt11/fs/fscache/cookie.c
===================================================================
--- linux-5.0.19-rt11.orig/fs/fscache/cookie.c
+++ linux-5.0.19-rt11/fs/fscache/cookie.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:965 @ inconsistent:
 	return -ESTALE;
 }
 EXPORT_SYMBOL(__fscache_check_consistency);
+
+void __init fscache_cookie_init(void)
+{
+	int i;
+
+	for (i = 0; i < (1 << fscache_cookie_hash_shift) - 1; i++)
+		INIT_HLIST_BL_HEAD(&fscache_cookie_hash[i]);
+}
Index: linux-5.0.19-rt11/fs/fscache/main.c
===================================================================
--- linux-5.0.19-rt11.orig/fs/fscache/main.c
+++ linux-5.0.19-rt11/fs/fscache/main.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:152 @ static int __init fscache_init(void)
 		ret = -ENOMEM;
 		goto error_cookie_jar;
 	}
+	fscache_cookie_init();
 
 	fscache_root = kobject_create_and_add("fscache", kernel_kobj);
 	if (!fscache_root)
Index: linux-5.0.19-rt11/fs/fuse/readdir.c
===================================================================
--- linux-5.0.19-rt11.orig/fs/fuse/readdir.c
+++ linux-5.0.19-rt11/fs/fuse/readdir.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:161 @ static int fuse_direntplus_link(struct f
 	struct inode *dir = d_inode(parent);
 	struct fuse_conn *fc;
 	struct inode *inode;
-	DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);
+	DECLARE_SWAIT_QUEUE_HEAD_ONSTACK(wq);
 
 	if (!o->nodeid) {
 		/*
Index: linux-5.0.19-rt11/fs/inode.c
===================================================================
--- linux-5.0.19-rt11.orig/fs/inode.c
+++ linux-5.0.19-rt11/fs/inode.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:158 @ int inode_init_always(struct super_block
 	inode->i_bdev = NULL;
 	inode->i_cdev = NULL;
 	inode->i_link = NULL;
-	inode->i_dir_seq = 0;
+	inode->__i_dir_seq = 0;
 	inode->i_rdev = 0;
 	inode->dirtied_when = 0;
 
Index: linux-5.0.19-rt11/fs/libfs.c
===================================================================
--- linux-5.0.19-rt11.orig/fs/libfs.c
+++ linux-5.0.19-rt11/fs/libfs.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:93 @ static struct dentry *next_positive(stru
 				    struct list_head *from,
 				    int count)
 {
-	unsigned *seq = &parent->d_inode->i_dir_seq, n;
+	unsigned *seq = &parent->d_inode->__i_dir_seq, n;
 	struct dentry *res;
 	struct list_head *p;
 	bool skipped;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:126 @ retry:
 static void move_cursor(struct dentry *cursor, struct list_head *after)
 {
 	struct dentry *parent = cursor->d_parent;
-	unsigned n, *seq = &parent->d_inode->i_dir_seq;
+	unsigned n, *seq = &parent->d_inode->__i_dir_seq;
 	spin_lock(&parent->d_lock);
+	preempt_disable_rt();
 	for (;;) {
 		n = *seq;
 		if (!(n & 1) && cmpxchg(seq, n, n + 1) == n)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:141 @ static void move_cursor(struct dentry *c
 	else
 		list_add_tail(&cursor->d_child, &parent->d_subdirs);
 	smp_store_release(seq, n + 2);
+	preempt_enable_rt();
 	spin_unlock(&parent->d_lock);
 }
 
Index: linux-5.0.19-rt11/fs/locks.c
===================================================================
--- linux-5.0.19-rt11.orig/fs/locks.c
+++ linux-5.0.19-rt11/fs/locks.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1061 @ static int flock_lock_inode(struct inode
 			return -ENOMEM;
 	}
 
-	percpu_down_read_preempt_disable(&file_rwsem);
+	percpu_down_read(&file_rwsem);
 	spin_lock(&ctx->flc_lock);
 	if (request->fl_flags & FL_ACCESS)
 		goto find_conflict;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1103 @ find_conflict:
 
 out:
 	spin_unlock(&ctx->flc_lock);
-	percpu_up_read_preempt_enable(&file_rwsem);
+	percpu_up_read(&file_rwsem);
 	if (new_fl)
 		locks_free_lock(new_fl);
 	locks_dispose_list(&dispose);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1141 @ static int posix_lock_inode(struct inode
 		new_fl2 = locks_alloc_lock();
 	}
 
-	percpu_down_read_preempt_disable(&file_rwsem);
+	percpu_down_read(&file_rwsem);
 	spin_lock(&ctx->flc_lock);
 	/*
 	 * New lock request. Walk all POSIX locks and look for conflicts. If
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1320 @ static int posix_lock_inode(struct inode
 	}
  out:
 	spin_unlock(&ctx->flc_lock);
-	percpu_up_read_preempt_enable(&file_rwsem);
+	percpu_up_read(&file_rwsem);
 	/*
 	 * Free any unused locks.
 	 */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1592 @ int __break_lease(struct inode *inode, u
 		return error;
 	}
 
-	percpu_down_read_preempt_disable(&file_rwsem);
+	percpu_down_read(&file_rwsem);
 	spin_lock(&ctx->flc_lock);
 
 	time_out_leases(inode, &dispose);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1644 @ restart:
 	locks_insert_block(fl, new_fl, leases_conflict);
 	trace_break_lease_block(inode, new_fl);
 	spin_unlock(&ctx->flc_lock);
-	percpu_up_read_preempt_enable(&file_rwsem);
+	percpu_up_read(&file_rwsem);
 
 	locks_dispose_list(&dispose);
 	error = wait_event_interruptible_timeout(new_fl->fl_wait,
 						!new_fl->fl_blocker, break_time);
 
-	percpu_down_read_preempt_disable(&file_rwsem);
+	percpu_down_read(&file_rwsem);
 	spin_lock(&ctx->flc_lock);
 	trace_break_lease_unblock(inode, new_fl);
 	locks_delete_block(new_fl);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1667 @ restart:
 	}
 out:
 	spin_unlock(&ctx->flc_lock);
-	percpu_up_read_preempt_enable(&file_rwsem);
+	percpu_up_read(&file_rwsem);
 	locks_dispose_list(&dispose);
 	locks_free_lock(new_fl);
 	return error;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1737 @ int fcntl_getlease(struct file *filp)
 
 	ctx = smp_load_acquire(&inode->i_flctx);
 	if (ctx && !list_empty_careful(&ctx->flc_lease)) {
-		percpu_down_read_preempt_disable(&file_rwsem);
+		percpu_down_read(&file_rwsem);
 		spin_lock(&ctx->flc_lock);
 		time_out_leases(inode, &dispose);
 		list_for_each_entry(fl, &ctx->flc_lease, fl_list) {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1747 @ int fcntl_getlease(struct file *filp)
 			break;
 		}
 		spin_unlock(&ctx->flc_lock);
-		percpu_up_read_preempt_enable(&file_rwsem);
+		percpu_up_read(&file_rwsem);
 
 		locks_dispose_list(&dispose);
 	}
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1821 @ generic_add_lease(struct file *filp, lon
 		return -EINVAL;
 	}
 
-	percpu_down_read_preempt_disable(&file_rwsem);
+	percpu_down_read(&file_rwsem);
 	spin_lock(&ctx->flc_lock);
 	time_out_leases(inode, &dispose);
 	error = check_conflicting_open(dentry, arg, lease->fl_flags);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1892 @ out_setup:
 		lease->fl_lmops->lm_setup(lease, priv);
 out:
 	spin_unlock(&ctx->flc_lock);
-	percpu_up_read_preempt_enable(&file_rwsem);
+	percpu_up_read(&file_rwsem);
 	locks_dispose_list(&dispose);
 	if (is_deleg)
 		inode_unlock(inode);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1915 @ static int generic_delete_lease(struct f
 		return error;
 	}
 
-	percpu_down_read_preempt_disable(&file_rwsem);
+	percpu_down_read(&file_rwsem);
 	spin_lock(&ctx->flc_lock);
 	list_for_each_entry(fl, &ctx->flc_lease, fl_list) {
 		if (fl->fl_file == filp &&
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1928 @ static int generic_delete_lease(struct f
 	if (victim)
 		error = fl->fl_lmops->lm_change(victim, F_UNLCK, &dispose);
 	spin_unlock(&ctx->flc_lock);
-	percpu_up_read_preempt_enable(&file_rwsem);
+	percpu_up_read(&file_rwsem);
 	locks_dispose_list(&dispose);
 	return error;
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2651 @ locks_remove_lease(struct file *filp, st
 	if (list_empty(&ctx->flc_lease))
 		return;
 
-	percpu_down_read_preempt_disable(&file_rwsem);
+	percpu_down_read(&file_rwsem);
 	spin_lock(&ctx->flc_lock);
 	list_for_each_entry_safe(fl, tmp, &ctx->flc_lease, fl_list)
 		if (filp == fl->fl_file)
 			lease_modify(fl, F_UNLCK, &dispose);
 	spin_unlock(&ctx->flc_lock);
-	percpu_up_read_preempt_enable(&file_rwsem);
+	percpu_up_read(&file_rwsem);
 
 	locks_dispose_list(&dispose);
 }
Index: linux-5.0.19-rt11/fs/namei.c
===================================================================
--- linux-5.0.19-rt11.orig/fs/namei.c
+++ linux-5.0.19-rt11/fs/namei.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1648 @ static struct dentry *__lookup_slow(cons
 {
 	struct dentry *dentry, *old;
 	struct inode *inode = dir->d_inode;
-	DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);
+	DECLARE_SWAIT_QUEUE_HEAD_ONSTACK(wq);
 
 	/* Don't go there if it's already dead */
 	if (unlikely(IS_DEADDIR(inode)))
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3138 @ static int lookup_open(struct nameidata
 	struct dentry *dentry;
 	int error, create_error = 0;
 	umode_t mode = op->mode;
-	DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);
+	DECLARE_SWAIT_QUEUE_HEAD_ONSTACK(wq);
 
 	if (unlikely(IS_DEADDIR(dir_inode)))
 		return -ENOENT;
Index: linux-5.0.19-rt11/fs/namespace.c
===================================================================
--- linux-5.0.19-rt11.orig/fs/namespace.c
+++ linux-5.0.19-rt11/fs/namespace.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:17 @
 #include <linux/mnt_namespace.h>
 #include <linux/user_namespace.h>
 #include <linux/namei.h>
+#include <linux/delay.h>
 #include <linux/security.h>
 #include <linux/cred.h>
 #include <linux/idr.h>
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:328 @ int __mnt_want_write(struct vfsmount *m)
 	 * incremented count after it has set MNT_WRITE_HOLD.
 	 */
 	smp_mb();
-	while (READ_ONCE(mnt->mnt.mnt_flags) & MNT_WRITE_HOLD)
-		cpu_relax();
+	while (READ_ONCE(mnt->mnt.mnt_flags) & MNT_WRITE_HOLD) {
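+		/*
+		 * cpu_chill() may sleep on RT, so temporarily re-enable
+		 * preemption (disabled earlier in this function) while
+		 * waiting for MNT_WRITE_HOLD to be cleared.
+		 */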
+		preempt_enable();
+		cpu_chill();
+		preempt_disable();
+	}
 	/*
 	 * After the slowpath clears MNT_WRITE_HOLD, mnt_is_readonly will
 	 * be set to match its requirements. So we must not load that until
Index: linux-5.0.19-rt11/fs/nfs/delegation.c
===================================================================
--- linux-5.0.19-rt11.orig/fs/nfs/delegation.c
+++ linux-5.0.19-rt11/fs/nfs/delegation.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:154 @ again:
 		sp = state->owner;
 		/* Block nfs4_proc_unlck */
 		mutex_lock(&sp->so_delegreturn_mutex);
-		seq = raw_seqcount_begin(&sp->so_reclaim_seqcount);
+		seq = read_seqbegin(&sp->so_reclaim_seqlock);
 		err = nfs4_open_delegation_recall(ctx, state, stateid, type);
 		if (!err)
 			err = nfs_delegation_claim_locks(state, stateid);
-		if (!err && read_seqcount_retry(&sp->so_reclaim_seqcount, seq))
+		if (!err && read_seqretry(&sp->so_reclaim_seqlock, seq))
 			err = -EAGAIN;
 		mutex_unlock(&sp->so_delegreturn_mutex);
 		put_nfs_open_context(ctx);
Index: linux-5.0.19-rt11/fs/nfs/dir.c
===================================================================
--- linux-5.0.19-rt11.orig/fs/nfs/dir.c
+++ linux-5.0.19-rt11/fs/nfs/dir.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:443 @ static
 void nfs_prime_dcache(struct dentry *parent, struct nfs_entry *entry)
 {
 	struct qstr filename = QSTR_INIT(entry->name, entry->len);
-	DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);
+	DECLARE_SWAIT_QUEUE_HEAD_ONSTACK(wq);
 	struct dentry *dentry;
 	struct dentry *alias;
 	struct inode *dir = d_inode(parent);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1493 @ int nfs_atomic_open(struct inode *dir, s
 		    struct file *file, unsigned open_flags,
 		    umode_t mode)
 {
-	DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);
+	DECLARE_SWAIT_QUEUE_HEAD_ONSTACK(wq);
 	struct nfs_open_context *ctx;
 	struct dentry *res;
 	struct iattr attr = { .ia_valid = ATTR_OPEN };
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1813 @ int nfs_rmdir(struct inode *dir, struct
 
 	trace_nfs_rmdir_enter(dir, dentry);
 	if (d_really_is_positive(dentry)) {
+#ifdef CONFIG_PREEMPT_RT_BASE
+		down(&NFS_I(d_inode(dentry))->rmdir_sem);
+#else
 		down_write(&NFS_I(d_inode(dentry))->rmdir_sem);
+#endif
 		error = NFS_PROTO(dir)->rmdir(dir, &dentry->d_name);
 		/* Ensure the VFS deletes this inode */
 		switch (error) {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1827 @ int nfs_rmdir(struct inode *dir, struct
 		case -ENOENT:
 			nfs_dentry_handle_enoent(dentry);
 		}
+#ifdef CONFIG_PREEMPT_RT_BASE
+		up(&NFS_I(d_inode(dentry))->rmdir_sem);
+#else
 		up_write(&NFS_I(d_inode(dentry))->rmdir_sem);
+#endif
 	} else
 		error = NFS_PROTO(dir)->rmdir(dir, &dentry->d_name);
 	trace_nfs_rmdir_exit(dir, dentry, error);
Index: linux-5.0.19-rt11/fs/nfs/inode.c
===================================================================
--- linux-5.0.19-rt11.orig/fs/nfs/inode.c
+++ linux-5.0.19-rt11/fs/nfs/inode.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2091 @ static void init_once(void *foo)
 	atomic_long_set(&nfsi->nrequests, 0);
 	atomic_long_set(&nfsi->commit_info.ncommit, 0);
 	atomic_set(&nfsi->commit_info.rpcs_out, 0);
+#ifdef CONFIG_PREEMPT_RT_BASE
+	sema_init(&nfsi->rmdir_sem, 1);
+#else
 	init_rwsem(&nfsi->rmdir_sem);
+#endif
 	mutex_init(&nfsi->commit_mutex);
 	nfs4_init_once(nfsi);
 }
Index: linux-5.0.19-rt11/fs/nfs/nfs4_fs.h
===================================================================
--- linux-5.0.19-rt11.orig/fs/nfs/nfs4_fs.h
+++ linux-5.0.19-rt11/fs/nfs/nfs4_fs.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:118 @ struct nfs4_state_owner {
 	unsigned long	     so_flags;
 	struct list_head     so_states;
 	struct nfs_seqid_counter so_seqid;
-	seqcount_t	     so_reclaim_seqcount;
+	seqlock_t	     so_reclaim_seqlock;
 	struct mutex	     so_delegreturn_mutex;
 };
 
Index: linux-5.0.19-rt11/fs/nfs/nfs4proc.c
===================================================================
--- linux-5.0.19-rt11.orig/fs/nfs/nfs4proc.c
+++ linux-5.0.19-rt11/fs/nfs/nfs4proc.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2895 @ static int _nfs4_open_and_get_state(stru
 	unsigned int seq;
 	int ret;
 
-	seq = raw_seqcount_begin(&sp->so_reclaim_seqcount);
+	seq = raw_seqcount_begin(&sp->so_reclaim_seqlock.seqcount);
 
 	ret = _nfs4_proc_open(opendata, ctx);
 	if (ret != 0)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2936 @ static int _nfs4_open_and_get_state(stru
 
 	if (d_inode(dentry) == state->inode) {
 		nfs_inode_attach_open_context(ctx);
-		if (read_seqcount_retry(&sp->so_reclaim_seqcount, seq))
+		if (read_seqretry(&sp->so_reclaim_seqlock, seq))
 			nfs4_schedule_stateid_recovery(server, state);
 	}
 
Index: linux-5.0.19-rt11/fs/nfs/nfs4state.c
===================================================================
--- linux-5.0.19-rt11.orig/fs/nfs/nfs4state.c
+++ linux-5.0.19-rt11/fs/nfs/nfs4state.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:513 @ nfs4_alloc_state_owner(struct nfs_server
 	nfs4_init_seqid_counter(&sp->so_seqid);
 	atomic_set(&sp->so_count, 1);
 	INIT_LIST_HEAD(&sp->so_lru);
-	seqcount_init(&sp->so_reclaim_seqcount);
+	seqlock_init(&sp->so_reclaim_seqlock);
 	mutex_init(&sp->so_delegreturn_mutex);
 	return sp;
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1618 @ static int nfs4_reclaim_open_state(struc
 	 * recovering after a network partition or a reboot from a
 	 * server that doesn't support a grace period.
 	 */
+#ifdef CONFIG_PREEMPT_RT_FULL
+	write_seqlock(&sp->so_reclaim_seqlock);
+#else
+	write_seqcount_begin(&sp->so_reclaim_seqlock.seqcount);
+#endif
 	spin_lock(&sp->so_lock);
-	raw_write_seqcount_begin(&sp->so_reclaim_seqcount);
 restart:
 	list_for_each_entry(state, &sp->so_states, open_states) {
 		if (!test_and_clear_bit(ops->state_flag_bit, &state->flags))
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1676 @ restart:
 		spin_lock(&sp->so_lock);
 		goto restart;
 	}
-	raw_write_seqcount_end(&sp->so_reclaim_seqcount);
 	spin_unlock(&sp->so_lock);
+#ifdef CONFIG_PREEMPT_RT_FULL
+	write_sequnlock(&sp->so_reclaim_seqlock);
+#else
+	write_seqcount_end(&sp->so_reclaim_seqlock.seqcount);
+#endif
 	return 0;
 out_err:
 	nfs4_put_open_state(state);
-	spin_lock(&sp->so_lock);
-	raw_write_seqcount_end(&sp->so_reclaim_seqcount);
-	spin_unlock(&sp->so_lock);
+#ifdef CONFIG_PREEMPT_RT_FULL
+	write_sequnlock(&sp->so_reclaim_seqlock);
+#else
+	write_seqcount_end(&sp->so_reclaim_seqlock.seqcount);
+#endif
 	return status;
 }
 
Index: linux-5.0.19-rt11/fs/nfs/unlink.c
===================================================================
--- linux-5.0.19-rt11.orig/fs/nfs/unlink.c
+++ linux-5.0.19-rt11/fs/nfs/unlink.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:16 @
 #include <linux/sunrpc/clnt.h>
 #include <linux/nfs_fs.h>
 #include <linux/sched.h>
-#include <linux/wait.h>
+#include <linux/swait.h>
 #include <linux/namei.h>
 #include <linux/fsnotify.h>
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:55 @ static void nfs_async_unlink_done(struct
 		rpc_restart_call_prepare(task);
 }
 
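+/*
+ * rmdir_sem is acquired in one task and released in another (the rpc_task
+ * callback), which is why the !RT variant uses the non-owner rwsem API.
+ * On PREEMPT_RT it is a plain semaphore, which has no owner semantics.
+ */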
+#ifdef CONFIG_PREEMPT_RT_BASE
+static void nfs_down_anon(struct semaphore *sema)
+{
+	down(sema);
+}
+
+static void nfs_up_anon(struct semaphore *sema)
+{
+	up(sema);
+}
+
+#else
+static void nfs_down_anon(struct rw_semaphore *rwsem)
+{
+	down_read_non_owner(rwsem);
+}
+
+static void nfs_up_anon(struct rw_semaphore *rwsem)
+{
+	up_read_non_owner(rwsem);
+}
+#endif
+
 /**
  * nfs_async_unlink_release - Release the sillydelete data.
  * @task: rpc_task of the sillydelete
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:91 @ static void nfs_async_unlink_release(voi
 	struct dentry *dentry = data->dentry;
 	struct super_block *sb = dentry->d_sb;
 
-	up_read_non_owner(&NFS_I(d_inode(dentry->d_parent))->rmdir_sem);
+	nfs_up_anon(&NFS_I(d_inode(dentry->d_parent))->rmdir_sem);
 	d_lookup_done(dentry);
 	nfs_free_unlinkdata(data);
 	dput(dentry);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:144 @ static int nfs_call_unlink(struct dentry
 	struct inode *dir = d_inode(dentry->d_parent);
 	struct dentry *alias;
 
-	down_read_non_owner(&NFS_I(dir)->rmdir_sem);
+	nfs_down_anon(&NFS_I(dir)->rmdir_sem);
 	alias = d_alloc_parallel(dentry->d_parent, &data->args.name, &data->wq);
 	if (IS_ERR(alias)) {
-		up_read_non_owner(&NFS_I(dir)->rmdir_sem);
+		nfs_up_anon(&NFS_I(dir)->rmdir_sem);
 		return 0;
 	}
 	if (!d_in_lookup(alias)) {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:169 @ static int nfs_call_unlink(struct dentry
 			ret = 0;
 		spin_unlock(&alias->d_lock);
 		dput(alias);
-		up_read_non_owner(&NFS_I(dir)->rmdir_sem);
+		nfs_up_anon(&NFS_I(dir)->rmdir_sem);
 		/*
 		 * If we'd displaced old cached devname, free it.  At that
 		 * point dentry is definitely not a root, so we won't need
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:205 @ nfs_async_unlink(struct dentry *dentry,
 
 	data->cred = get_current_cred();
 	data->res.dir_attr = &data->dir_attr;
-	init_waitqueue_head(&data->wq);
+	init_swait_queue_head(&data->wq);
 
 	status = -EBUSY;
 	spin_lock(&dentry->d_lock);
Index: linux-5.0.19-rt11/fs/ntfs/aops.c
===================================================================
--- linux-5.0.19-rt11.orig/fs/ntfs/aops.c
+++ linux-5.0.19-rt11/fs/ntfs/aops.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:109 @ static void ntfs_end_buffer_async_read(s
 				"0x%llx.", (unsigned long long)bh->b_blocknr);
 	}
 	first = page_buffers(page);
-	local_irq_save(flags);
-	bit_spin_lock(BH_Uptodate_Lock, &first->b_state);
+	flags = bh_uptodate_lock_irqsave(first);
 	clear_buffer_async_read(bh);
 	unlock_buffer(bh);
 	tmp = bh;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:124 @ static void ntfs_end_buffer_async_read(s
 		}
 		tmp = tmp->b_this_page;
 	} while (tmp != bh);
-	bit_spin_unlock(BH_Uptodate_Lock, &first->b_state);
-	local_irq_restore(flags);
+	bh_uptodate_unlock_irqrestore(first, flags);
 	/*
 	 * If none of the buffers had errors then we can set the page uptodate,
 	 * but we first have to perform the post read mst fixups, if the
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:157 @ static void ntfs_end_buffer_async_read(s
 	unlock_page(page);
 	return;
 still_busy:
-	bit_spin_unlock(BH_Uptodate_Lock, &first->b_state);
-	local_irq_restore(flags);
-	return;
+	bh_uptodate_unlock_irqrestore(first, flags);
 }
 
 /**
Index: linux-5.0.19-rt11/fs/proc/array.c
===================================================================
--- linux-5.0.19-rt11.orig/fs/proc/array.c
+++ linux-5.0.19-rt11/fs/proc/array.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:384 @ static inline void task_context_switch_c
 static void task_cpus_allowed(struct seq_file *m, struct task_struct *task)
 {
 	seq_printf(m, "Cpus_allowed:\t%*pb\n",
-		   cpumask_pr_args(&task->cpus_allowed));
+		   cpumask_pr_args(task->cpus_ptr));
 	seq_printf(m, "Cpus_allowed_list:\t%*pbl\n",
-		   cpumask_pr_args(&task->cpus_allowed));
+		   cpumask_pr_args(task->cpus_ptr));
 }
 
 static inline void task_core_dumping(struct seq_file *m, struct mm_struct *mm)
Index: linux-5.0.19-rt11/fs/proc/base.c
===================================================================
--- linux-5.0.19-rt11.orig/fs/proc/base.c
+++ linux-5.0.19-rt11/fs/proc/base.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1877 @ bool proc_fill_cache(struct file *file,
 
 	child = d_hash_and_lookup(dir, &qname);
 	if (!child) {
-		DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);
+		DECLARE_SWAIT_QUEUE_HEAD_ONSTACK(wq);
 		child = d_alloc_parallel(dir, &qname, &wq);
 		if (IS_ERR(child))
 			goto end_instantiate;
Index: linux-5.0.19-rt11/fs/proc/kmsg.c
===================================================================
--- linux-5.0.19-rt11.orig/fs/proc/kmsg.c
+++ linux-5.0.19-rt11/fs/proc/kmsg.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:21 @
 #include <linux/uaccess.h>
 #include <asm/io.h>
 
-extern wait_queue_head_t log_wait;
-
 static int kmsg_open(struct inode * inode, struct file * file)
 {
 	return do_syslog(SYSLOG_ACTION_OPEN, NULL, 0, SYSLOG_FROM_PROC);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:43 @ static ssize_t kmsg_read(struct file *fi
 
 static __poll_t kmsg_poll(struct file *file, poll_table *wait)
 {
-	poll_wait(file, &log_wait, wait);
+	poll_wait(file, printk_wait_queue(), wait);
 	if (do_syslog(SYSLOG_ACTION_SIZE_UNREAD, NULL, 0, SYSLOG_FROM_PROC))
 		return EPOLLIN | EPOLLRDNORM;
 	return 0;
Index: linux-5.0.19-rt11/fs/proc/proc_sysctl.c
===================================================================
--- linux-5.0.19-rt11.orig/fs/proc/proc_sysctl.c
+++ linux-5.0.19-rt11/fs/proc/proc_sysctl.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:680 @ static bool proc_sys_fill_cache(struct f
 
 	child = d_lookup(dir, &qname);
 	if (!child) {
-		DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);
+		DECLARE_SWAIT_QUEUE_HEAD_ONSTACK(wq);
 		child = d_alloc_parallel(dir, &qname, &wq);
 		if (IS_ERR(child))
 			return false;
Index: linux-5.0.19-rt11/fs/squashfs/decompressor_multi_percpu.c
===================================================================
--- linux-5.0.19-rt11.orig/fs/squashfs/decompressor_multi_percpu.c
+++ linux-5.0.19-rt11/fs/squashfs/decompressor_multi_percpu.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:13 @
 #include <linux/slab.h>
 #include <linux/percpu.h>
 #include <linux/buffer_head.h>
+#include <linux/locallock.h>
 
 #include "squashfs_fs.h"
 #include "squashfs_fs_sb.h"
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:29 @ struct squashfs_stream {
 	void		*stream;
 };
 
+static DEFINE_LOCAL_IRQ_LOCK(stream_lock);
+
 void *squashfs_decompressor_create(struct squashfs_sb_info *msblk,
 						void *comp_opts)
 {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:85 @ int squashfs_decompress(struct squashfs_
 {
 	struct squashfs_stream __percpu *percpu =
 			(struct squashfs_stream __percpu *) msblk->stream;
-	struct squashfs_stream *stream = get_cpu_ptr(percpu);
-	int res = msblk->decompressor->decompress(msblk, stream->stream, bh, b,
-		offset, length, output);
-	put_cpu_ptr(stream);
+	struct squashfs_stream *stream;
+	int res;
+
+	stream = get_locked_ptr(stream_lock, percpu);
+
+	res = msblk->decompressor->decompress(msblk, stream->stream, bh, b,
+			offset, length, output);
+
+	put_locked_ptr(stream_lock, stream);
 
 	if (res < 0)
 		ERROR("%s decompression failed, data probably corrupt\n",
Index: linux-5.0.19-rt11/fs/timerfd.c
===================================================================
--- linux-5.0.19-rt11.orig/fs/timerfd.c
+++ linux-5.0.19-rt11/fs/timerfd.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:474 @ static int do_timerfd_settime(int ufd, i
 				break;
 		}
 		spin_unlock_irq(&ctx->wqh.lock);
-		cpu_relax();
+
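+		/*
+		 * Instead of busy waiting, block on the expiry lock until
+		 * the running timer callback has completed.
+		 */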
+		if (isalarm(ctx))
+			hrtimer_grab_expiry_lock(&ctx->t.alarm.timer);
+		else
+			hrtimer_grab_expiry_lock(&ctx->t.tmr);
 	}
 
 	/*
Index: linux-5.0.19-rt11/include/asm-generic/percpu.h
===================================================================
--- linux-5.0.19-rt11.orig/include/asm-generic/percpu.h
+++ linux-5.0.19-rt11/include/asm-generic/percpu.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:8 @
 #include <linux/compiler.h>
 #include <linux/threads.h>
 #include <linux/percpu-defs.h>
+#include <linux/irqflags.h>
 
 #ifdef CONFIG_SMP
 
Index: linux-5.0.19-rt11/include/linux/blkdev.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/blkdev.h
+++ linux-5.0.19-rt11/include/linux/blkdev.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:16 @
 #include <linux/llist.h>
 #include <linux/timer.h>
 #include <linux/workqueue.h>
+#include <linux/kthread.h>
 #include <linux/pagemap.h>
 #include <linux/backing-dev-defs.h>
 #include <linux/wait.h>
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:553 @ struct request_queue {
 #endif
 	struct rcu_head		rcu_head;
 	wait_queue_head_t	mq_freeze_wq;
+	struct work_struct	mq_pcpu_wake;
 	struct percpu_ref	q_usage_counter;
 	struct list_head	all_q_node;
 
Index: linux-5.0.19-rt11/include/linux/bottom_half.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/bottom_half.h
+++ linux-5.0.19-rt11/include/linux/bottom_half.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:7 @
 
 #include <linux/preempt.h>
 
+#ifdef CONFIG_PREEMPT_RT_FULL
+extern void __local_bh_disable_ip(unsigned long ip, unsigned int cnt);
+#else
+
 #ifdef CONFIG_TRACE_IRQFLAGS
 extern void __local_bh_disable_ip(unsigned long ip, unsigned int cnt);
 #else
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:20 @ static __always_inline void __local_bh_d
 	barrier();
 }
 #endif
+#endif
 
 static inline void local_bh_disable(void)
 {
Index: linux-5.0.19-rt11/include/linux/buffer_head.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/buffer_head.h
+++ linux-5.0.19-rt11/include/linux/buffer_head.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:79 @ struct buffer_head {
 	struct address_space *b_assoc_map;	/* mapping this buffer is
 						   associated with */
 	atomic_t b_count;		/* users using this buffer_head */
+#ifdef CONFIG_PREEMPT_RT_BASE
+	spinlock_t b_uptodate_lock;
+#if IS_ENABLED(CONFIG_JBD2)
+	spinlock_t b_state_lock;
+	spinlock_t b_journal_head_lock;
+#endif
+#endif
 };
 
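+/*
+ * Helpers concentrating the BH_Uptodate_Lock serialization: a bit spinlock
+ * with interrupts disabled on !RT, the dedicated b_uptodate_lock spinlock
+ * on PREEMPT_RT.
+ */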
+static inline unsigned long bh_uptodate_lock_irqsave(struct buffer_head *bh)
+{
+	unsigned long flags;
+
+#ifndef CONFIG_PREEMPT_RT_BASE
+	local_irq_save(flags);
+	bit_spin_lock(BH_Uptodate_Lock, &bh->b_state);
+#else
+	spin_lock_irqsave(&bh->b_uptodate_lock, flags);
+#endif
+	return flags;
+}
+
+static inline void
+bh_uptodate_unlock_irqrestore(struct buffer_head *bh, unsigned long flags)
+{
+#ifndef CONFIG_PREEMPT_RT_BASE
+	bit_spin_unlock(BH_Uptodate_Lock, &bh->b_state);
+	local_irq_restore(flags);
+#else
+	spin_unlock_irqrestore(&bh->b_uptodate_lock, flags);
+#endif
+}
+
+static inline void buffer_head_init_locks(struct buffer_head *bh)
+{
+#ifdef CONFIG_PREEMPT_RT_BASE
+	spin_lock_init(&bh->b_uptodate_lock);
+#if IS_ENABLED(CONFIG_JBD2)
+	spin_lock_init(&bh->b_state_lock);
+	spin_lock_init(&bh->b_journal_head_lock);
+#endif
+#endif
+}
+
 /*
  * macro tricks to expand the set_buffer_foo(), clear_buffer_foo()
  * and buffer_foo() functions.
Index: linux-5.0.19-rt11/include/linux/completion.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/completion.h
+++ linux-5.0.19-rt11/include/linux/completion.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:12 @
  * See kernel/sched/completion.c for details.
  */
 
-#include <linux/wait.h>
+#include <linux/swait.h>
 
 /*
  * struct completion - structure used to maintain state for a "completion"
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:28 @
  */
 struct completion {
 	unsigned int done;
-	wait_queue_head_t wait;
+	struct swait_queue_head wait;
 };
 
 #define init_completion_map(x, m) __init_completion(x)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:37 @ static inline void complete_acquire(stru
 static inline void complete_release(struct completion *x) {}
 
 #define COMPLETION_INITIALIZER(work) \
-	{ 0, __WAIT_QUEUE_HEAD_INITIALIZER((work).wait) }
+	{ 0, __SWAIT_QUEUE_HEAD_INITIALIZER((work).wait) }
 
 #define COMPLETION_INITIALIZER_ONSTACK_MAP(work, map) \
 	(*({ init_completion_map(&(work), &(map)); &(work); }))
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:88 @ static inline void complete_release(stru
 static inline void __init_completion(struct completion *x)
 {
 	x->done = 0;
-	init_waitqueue_head(&x->wait);
+	init_swait_queue_head(&x->wait);
 }
 
 /**
Index: linux-5.0.19-rt11/include/linux/console.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/console.h
+++ linux-5.0.19-rt11/include/linux/console.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:148 @ static inline int con_debug_leave(void)
 struct console {
 	char	name[16];
 	void	(*write)(struct console *, const char *, unsigned);
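+	/* like write(), but callable from any context (used with console_atomic_lock()) */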
+	void	(*write_atomic)(struct console *, const char *, unsigned);
 	int	(*read)(struct console *, char *, unsigned);
 	struct tty_driver *(*device)(struct console *, int *);
 	void	(*unblank)(void);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:157 @ struct console {
 	short	flags;
 	short	index;
 	int	cflag;
+	unsigned long printk_seq;
+	int	wrote_history;
 	void	*data;
 	struct	 console *next;
 };
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:235 @ extern void console_init(void);
 void dummycon_register_output_notifier(struct notifier_block *nb);
 void dummycon_unregister_output_notifier(struct notifier_block *nb);
 
+extern void console_atomic_lock(unsigned int *flags);
+extern void console_atomic_unlock(unsigned int flags);
+
 #endif /* _LINUX_CONSOLE_H */
Index: linux-5.0.19-rt11/include/linux/cpu.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/cpu.h
+++ linux-5.0.19-rt11/include/linux/cpu.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:116 @ extern void cpu_hotplug_disable(void);
 extern void cpu_hotplug_enable(void);
 void clear_tasks_mm_cpumask(int cpu);
 int cpu_down(unsigned int cpu);
+extern void pin_current_cpu(void);
+extern void unpin_current_cpu(void);
 
 #else /* CONFIG_HOTPLUG_CPU */
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:129 @ static inline int  cpus_read_trylock(voi
 static inline void lockdep_assert_cpus_held(void) { }
 static inline void cpu_hotplug_disable(void) { }
 static inline void cpu_hotplug_enable(void) { }
+static inline void pin_current_cpu(void) { }
+static inline void unpin_current_cpu(void) { }
+
 #endif	/* !CONFIG_HOTPLUG_CPU */
 
 /* Wrappers which go away once all code is converted */
Index: linux-5.0.19-rt11/include/linux/dcache.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/dcache.h
+++ linux-5.0.19-rt11/include/linux/dcache.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:109 @ struct dentry {
 
 	union {
 		struct list_head d_lru;		/* LRU list */
-		wait_queue_head_t *d_wait;	/* in-lookup ones only */
+		struct swait_queue_head *d_wait;	/* in-lookup ones only */
 	};
 	struct list_head d_child;	/* child of parent list */
 	struct list_head d_subdirs;	/* our children */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:240 @ extern struct dentry * d_alloc(struct de
 extern struct dentry * d_alloc_anon(struct super_block *);
 extern struct dentry * d_alloc_pseudo(struct super_block *, const struct qstr *);
 extern struct dentry * d_alloc_parallel(struct dentry *, const struct qstr *,
-					wait_queue_head_t *);
+					struct swait_queue_head *);
 extern struct dentry * d_splice_alias(struct inode *, struct dentry *);
 extern struct dentry * d_add_ci(struct dentry *, struct inode *, struct qstr *);
 extern struct dentry * d_exact_alias(struct dentry *, struct inode *);
Index: linux-5.0.19-rt11/include/linux/delay.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/delay.h
+++ linux-5.0.19-rt11/include/linux/delay.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:67 @ static inline void ssleep(unsigned int s
 	msleep(seconds * 1000);
 }
 
+#ifdef CONFIG_PREEMPT_RT_FULL
+extern void cpu_chill(void);
+#else
+# define cpu_chill()	cpu_relax()
+#endif
+
 #endif /* defined(_LINUX_DELAY_H) */
Index: linux-5.0.19-rt11/include/linux/dma-iommu.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/dma-iommu.h
+++ linux-5.0.19-rt11/include/linux/dma-iommu.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:74 @ void iommu_dma_unmap_resource(struct dev
 		size_t size, enum dma_data_direction dir, unsigned long attrs);
 
 /* The DMA API isn't _quite_ the whole story, though... */
-void iommu_dma_map_msi_msg(int irq, struct msi_msg *msg);
+/*
+ * iommu_dma_prepare_msi() - Map the MSI page in the IOMMU device
+ *
+ * The MSI page will be stored in @desc.
+ *
+ * Return: 0 on success, otherwise an error describing the failure.
+ */
+int iommu_dma_prepare_msi(struct msi_desc *desc, phys_addr_t msi_addr);
+
+/* Update the MSI message if required. */
+void iommu_dma_compose_msi_msg(struct msi_desc *desc,
+			       struct msi_msg *msg);
+
 void iommu_dma_get_resv_regions(struct device *dev, struct list_head *list);
 
 #else
 
 struct iommu_domain;
+struct msi_desc;
 struct msi_msg;
 struct device;
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:115 @ static inline void iommu_put_dma_cookie(
 {
 }
 
-static inline void iommu_dma_map_msi_msg(int irq, struct msi_msg *msg)
+static inline int iommu_dma_prepare_msi(struct msi_desc *desc,
+					phys_addr_t msi_addr)
+{
+	return 0;
+}
+
+static inline void iommu_dma_compose_msi_msg(struct msi_desc *desc,
+					     struct msi_msg *msg)
 {
 }
 
Index: linux-5.0.19-rt11/include/linux/fs.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/fs.h
+++ linux-5.0.19-rt11/include/linux/fs.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:700 @ struct inode {
 		struct block_device	*i_bdev;
 		struct cdev		*i_cdev;
 		char			*i_link;
-		unsigned		i_dir_seq;
+		unsigned		__i_dir_seq;
 	};
 
 	__u32			i_generation;
Index: linux-5.0.19-rt11/include/linux/fscache.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/fscache.h
+++ linux-5.0.19-rt11/include/linux/fscache.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:233 @ extern void __fscache_readpages_cancel(s
 extern void __fscache_disable_cookie(struct fscache_cookie *, const void *, bool);
 extern void __fscache_enable_cookie(struct fscache_cookie *, const void *, loff_t,
 				    bool (*)(void *), void *);
+extern void fscache_cookie_init(void);
 
 /**
  * fscache_register_netfs - Register a filesystem as desiring caching services
Index: linux-5.0.19-rt11/include/linux/hardirq.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/hardirq.h
+++ linux-5.0.19-rt11/include/linux/hardirq.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:71 @ extern void irq_exit(void);
 #define nmi_enter()						\
 	do {							\
 		arch_nmi_enter();				\
-		printk_nmi_enter();				\
 		lockdep_off();					\
 		ftrace_nmi_enter();				\
 		BUG_ON(in_nmi());				\
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:87 @ extern void irq_exit(void);
 		preempt_count_sub(NMI_OFFSET + HARDIRQ_OFFSET);	\
 		ftrace_nmi_exit();				\
 		lockdep_on();					\
-		printk_nmi_exit();				\
 		arch_nmi_exit();				\
 	} while (0)
 
Index: linux-5.0.19-rt11/include/linux/highmem.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/highmem.h
+++ linux-5.0.19-rt11/include/linux/highmem.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:11 @
 #include <linux/mm.h>
 #include <linux/uaccess.h>
 #include <linux/hardirq.h>
+#include <linux/sched.h>
 
 #include <asm/cacheflush.h>
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:94 @ static inline void kunmap(struct page *p
 
 static inline void *kmap_atomic(struct page *page)
 {
-	preempt_disable();
+	preempt_disable_nort();
 	pagefault_disable();
 	return page_address(page);
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:103 @ static inline void *kmap_atomic(struct p
 static inline void __kunmap_atomic(void *addr)
 {
 	pagefault_enable();
-	preempt_enable();
+	preempt_enable_nort();
 }
 
 #define kmap_atomic_pfn(pfn)	kmap_atomic(pfn_to_page(pfn))
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:115 @ static inline void __kunmap_atomic(void
 
 #if defined(CONFIG_HIGHMEM) || defined(CONFIG_X86_32)
 
+#ifndef CONFIG_PREEMPT_RT_FULL
 DECLARE_PER_CPU(int, __kmap_atomic_idx);
+#endif
 
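+/*
+ * On PREEMPT_RT kmap_atomic() no longer disables preemption, so the atomic
+ * kmap index is tracked per task (current->kmap_idx) instead of in the
+ * per-CPU __kmap_atomic_idx counter.
+ */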
 static inline int kmap_atomic_idx_push(void)
 {
+#ifndef CONFIG_PREEMPT_RT_FULL
 	int idx = __this_cpu_inc_return(__kmap_atomic_idx) - 1;
 
-#ifdef CONFIG_DEBUG_HIGHMEM
+# ifdef CONFIG_DEBUG_HIGHMEM
 	WARN_ON_ONCE(in_irq() && !irqs_disabled());
 	BUG_ON(idx >= KM_TYPE_NR);
-#endif
+# endif
 	return idx;
+#else
+	current->kmap_idx++;
+	BUG_ON(current->kmap_idx > KM_TYPE_NR);
+	return current->kmap_idx - 1;
+#endif
 }
 
 static inline int kmap_atomic_idx(void)
 {
+#ifndef CONFIG_PREEMPT_RT_FULL
 	return __this_cpu_read(__kmap_atomic_idx) - 1;
+#else
+	return current->kmap_idx - 1;
+#endif
 }
 
 static inline void kmap_atomic_idx_pop(void)
 {
-#ifdef CONFIG_DEBUG_HIGHMEM
+#ifndef CONFIG_PREEMPT_RT_FULL
+# ifdef CONFIG_DEBUG_HIGHMEM
 	int idx = __this_cpu_dec_return(__kmap_atomic_idx);
 
 	BUG_ON(idx < 0);
-#else
+# else
 	__this_cpu_dec(__kmap_atomic_idx);
+# endif
+#else
+	current->kmap_idx--;
+# ifdef CONFIG_DEBUG_HIGHMEM
+	BUG_ON(current->kmap_idx < 0);
+# endif
 #endif
 }
 
Index: linux-5.0.19-rt11/include/linux/hrtimer.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/hrtimer.h
+++ linux-5.0.19-rt11/include/linux/hrtimer.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:41 @ enum hrtimer_mode {
 	HRTIMER_MODE_REL	= 0x01,
 	HRTIMER_MODE_PINNED	= 0x02,
 	HRTIMER_MODE_SOFT	= 0x04,
+	HRTIMER_MODE_HARD	= 0x08,
 
 	HRTIMER_MODE_ABS_PINNED = HRTIMER_MODE_ABS | HRTIMER_MODE_PINNED,
 	HRTIMER_MODE_REL_PINNED = HRTIMER_MODE_REL | HRTIMER_MODE_PINNED,
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:52 @ enum hrtimer_mode {
 	HRTIMER_MODE_ABS_PINNED_SOFT = HRTIMER_MODE_ABS_PINNED | HRTIMER_MODE_SOFT,
 	HRTIMER_MODE_REL_PINNED_SOFT = HRTIMER_MODE_REL_PINNED | HRTIMER_MODE_SOFT,
 
+	HRTIMER_MODE_ABS_HARD	= HRTIMER_MODE_ABS | HRTIMER_MODE_HARD,
+	HRTIMER_MODE_REL_HARD	= HRTIMER_MODE_REL | HRTIMER_MODE_HARD,
+
+	HRTIMER_MODE_ABS_PINNED_HARD = HRTIMER_MODE_ABS_PINNED | HRTIMER_MODE_HARD,
+	HRTIMER_MODE_REL_PINNED_HARD = HRTIMER_MODE_REL_PINNED | HRTIMER_MODE_HARD,
 };
 
 /*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:192 @ enum  hrtimer_base_type {
  * @nr_retries:		Total number of hrtimer interrupt retries
  * @nr_hangs:		Total number of hrtimer interrupt hangs
  * @max_hang_time:	Maximum time spent in hrtimer_interrupt
+ * @softirq_expiry_lock: Lock which is held while softirq based hrtimers are
+ *			 expired
  * @expires_next:	absolute time of the next event, is required for remote
  *			hrtimer enqueue; it is the total first expiry time (hard
  *			and soft hrtimer are taken into account)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:221 @ struct hrtimer_cpu_base {
 	unsigned short			nr_hangs;
 	unsigned int			max_hang_time;
 #endif
+	spinlock_t			softirq_expiry_lock;
 	ktime_t				expires_next;
 	struct hrtimer			*next_timer;
 	ktime_t				softirq_expires_next;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:373 @ DECLARE_PER_CPU(struct tick_device, tick
 /* Initialize timers: */
 extern void hrtimer_init(struct hrtimer *timer, clockid_t which_clock,
 			 enum hrtimer_mode mode);
+extern void hrtimer_init_sleeper(struct hrtimer_sleeper *sl, clockid_t clock_id,
+				 enum hrtimer_mode mode,
+				 struct task_struct *task);
 
 #ifdef CONFIG_DEBUG_OBJECTS_TIMERS
 extern void hrtimer_init_on_stack(struct hrtimer *timer, clockid_t which_clock,
 				  enum hrtimer_mode mode);
+extern void hrtimer_init_sleeper_on_stack(struct hrtimer_sleeper *sl,
+					  clockid_t clock_id,
+					  enum hrtimer_mode mode,
+					  struct task_struct *task);
 
 extern void destroy_hrtimer_on_stack(struct hrtimer *timer);
 #else
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:393 @ static inline void hrtimer_init_on_stack
 {
 	hrtimer_init(timer, which_clock, mode);
 }
+
+static inline void hrtimer_init_sleeper_on_stack(struct hrtimer_sleeper *sl,
+					    clockid_t clock_id,
+					    enum hrtimer_mode mode,
+					    struct task_struct *task)
+{
+	hrtimer_init_sleeper(sl, clock_id, mode, task);
+}
+
 static inline void destroy_hrtimer_on_stack(struct hrtimer *timer) { }
 #endif
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:425 @ static inline void hrtimer_start(struct
 
 extern int hrtimer_cancel(struct hrtimer *timer);
 extern int hrtimer_try_to_cancel(struct hrtimer *timer);
+extern void hrtimer_grab_expiry_lock(const struct hrtimer *timer);
 
 static inline void hrtimer_start_expires(struct hrtimer *timer,
 					 enum hrtimer_mode mode)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:506 @ extern long hrtimer_nanosleep(const stru
 			      const enum hrtimer_mode mode,
 			      const clockid_t clockid);
 
-extern void hrtimer_init_sleeper(struct hrtimer_sleeper *sl,
-				 struct task_struct *tsk);
-
 extern int schedule_hrtimeout_range(ktime_t *expires, u64 delta,
 						const enum hrtimer_mode mode);
 extern int schedule_hrtimeout_range_clock(ktime_t *expires,
Index: linux-5.0.19-rt11/include/linux/idr.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/idr.h
+++ linux-5.0.19-rt11/include/linux/idr.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:172 @ static inline bool idr_is_empty(const st
  * Each idr_preload() should be matched with an invocation of this
  * function.  See idr_preload() for details.
  */
-static inline void idr_preload_end(void)
-{
-	preempt_enable();
-}
+void idr_preload_end(void);
 
 /**
  * idr_for_each_entry() - Iterate over an IDR's elements of a given type.
Index: linux-5.0.19-rt11/include/linux/interrupt.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/interrupt.h
+++ linux-5.0.19-rt11/include/linux/interrupt.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:446 @ extern int irq_set_irqchip_state(unsigne
 				 bool state);
 
 #ifdef CONFIG_IRQ_FORCED_THREADING
+# ifdef CONFIG_PREEMPT_RT_BASE
+#  define force_irqthreads	(true)
+# else
 extern bool force_irqthreads;
+# endif
 #else
 #define force_irqthreads	(0)
 #endif
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:519 @ struct softirq_action
 asmlinkage void do_softirq(void);
 asmlinkage void __do_softirq(void);
 
-#ifdef __ARCH_HAS_DO_SOFTIRQ
+#if defined(__ARCH_HAS_DO_SOFTIRQ) && !defined(CONFIG_PREEMPT_RT_FULL)
 void do_softirq_own_stack(void);
 #else
 static inline void do_softirq_own_stack(void)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:534 @ extern void __raise_softirq_irqoff(unsig
 
 extern void raise_softirq_irqoff(unsigned int nr);
 extern void raise_softirq(unsigned int nr);
+extern void softirq_check_pending_idle(void);
 
 DECLARE_PER_CPU(struct task_struct *, ksoftirqd);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:599 @ static inline void tasklet_unlock(struct
 
 static inline void tasklet_unlock_wait(struct tasklet_struct *t)
 {
-	while (test_bit(TASKLET_STATE_RUN, &(t)->state)) { barrier(); }
+	while (test_bit(TASKLET_STATE_RUN, &(t)->state)) {
+		local_bh_disable();
+		local_bh_enable();
+	}
 }
 #else
 #define tasklet_trylock(t) 1
Index: linux-5.0.19-rt11/include/linux/irq_work.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/irq_work.h
+++ linux-5.0.19-rt11/include/linux/irq_work.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:21 @
 
 /* Doesn't want IPI, wait for tick: */
 #define IRQ_WORK_LAZY		BIT(2)
+/* Run in hard IRQ context, even on RT */
+#define IRQ_WORK_HARD_IRQ	BIT(3)
 
 #define IRQ_WORK_CLAIMED	(IRQ_WORK_PENDING | IRQ_WORK_BUSY)
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:57 @ static inline bool irq_work_needs_cpu(vo
 static inline void irq_work_run(void) { }
 #endif
 
+#if defined(CONFIG_IRQ_WORK) && defined(CONFIG_PREEMPT_RT_FULL)
+void irq_work_tick_soft(void);
+#else
+static inline void irq_work_tick_soft(void) { }
+#endif
+
 #endif /* _LINUX_IRQ_WORK_H */
Index: linux-5.0.19-rt11/include/linux/irqdesc.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/irqdesc.h
+++ linux-5.0.19-rt11/include/linux/irqdesc.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:74 @ struct irq_desc {
 	unsigned int		irqs_unhandled;
 	atomic_t		threads_handled;
 	int			threads_handled_last;
+	u64			random_ip;
 	raw_spinlock_t		lock;
 	struct cpumask		*percpu_enabled;
 	const struct cpumask	*percpu_affinity;
Index: linux-5.0.19-rt11/include/linux/irqflags.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/irqflags.h
+++ linux-5.0.19-rt11/include/linux/irqflags.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:46 @ do {						\
 do {						\
 	current->hardirq_context--;		\
 } while (0)
-# define lockdep_softirq_enter()		\
-do {						\
-	current->softirq_context++;		\
-} while (0)
-# define lockdep_softirq_exit()			\
-do {						\
-	current->softirq_context--;		\
-} while (0)
 #else
 # define trace_hardirqs_on()		do { } while (0)
 # define trace_hardirqs_off()		do { } while (0)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:58 @ do {						\
 # define lockdep_softirq_enter()	do { } while (0)
 # define lockdep_softirq_exit()		do { } while (0)
 #endif
+
+#if defined(CONFIG_TRACE_IRQFLAGS) && !defined(CONFIG_PREEMPT_RT_FULL)
+# define lockdep_softirq_enter()		\
+do {						\
+	current->softirq_context++;		\
+} while (0)
+# define lockdep_softirq_exit()			\
+do {						\
+	current->softirq_context--;		\
+} while (0)
+
+#else
+# define lockdep_softirq_enter()	do { } while (0)
+# define lockdep_softirq_exit()		do { } while (0)
+#endif
 
 #if defined(CONFIG_IRQSOFF_TRACER) || \
 	defined(CONFIG_PREEMPT_TRACER)
Index: linux-5.0.19-rt11/include/linux/jbd2.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/jbd2.h
+++ linux-5.0.19-rt11/include/linux/jbd2.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:350 @ static inline struct journal_head *bh2jh
 
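+/*
+ * On PREEMPT_RT the BH_State and BH_JournalHead bit spinlocks are replaced
+ * by the regular spinlocks embedded in struct buffer_head.
+ */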
 static inline void jbd_lock_bh_state(struct buffer_head *bh)
 {
+#ifndef CONFIG_PREEMPT_RT_BASE
 	bit_spin_lock(BH_State, &bh->b_state);
+#else
+	spin_lock(&bh->b_state_lock);
+#endif
 }
 
 static inline int jbd_trylock_bh_state(struct buffer_head *bh)
 {
+#ifndef CONFIG_PREEMPT_RT_BASE
 	return bit_spin_trylock(BH_State, &bh->b_state);
+#else
+	return spin_trylock(&bh->b_state_lock);
+#endif
 }
 
 static inline int jbd_is_locked_bh_state(struct buffer_head *bh)
 {
+#ifndef CONFIG_PREEMPT_RT_BASE
 	return bit_spin_is_locked(BH_State, &bh->b_state);
+#else
+	return spin_is_locked(&bh->b_state_lock);
+#endif
 }
 
 static inline void jbd_unlock_bh_state(struct buffer_head *bh)
 {
+#ifndef CONFIG_PREEMPT_RT_BASE
 	bit_spin_unlock(BH_State, &bh->b_state);
+#else
+	spin_unlock(&bh->b_state_lock);
+#endif
 }
 
 static inline void jbd_lock_bh_journal_head(struct buffer_head *bh)
 {
+#ifndef CONFIG_PREEMPT_RT_BASE
 	bit_spin_lock(BH_JournalHead, &bh->b_state);
+#else
+	spin_lock(&bh->b_journal_head_lock);
+#endif
 }
 
 static inline void jbd_unlock_bh_journal_head(struct buffer_head *bh)
 {
+#ifndef CONFIG_PREEMPT_RT_BASE
 	bit_spin_unlock(BH_JournalHead, &bh->b_state);
+#else
+	spin_unlock(&bh->b_journal_head_lock);
+#endif
 }
 
 #define J_ASSERT(assert)	BUG_ON(!(assert))
Index: linux-5.0.19-rt11/include/linux/kernel.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/kernel.h
+++ linux-5.0.19-rt11/include/linux/kernel.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:262 @ extern int _cond_resched(void);
  */
 # define might_sleep() \
 	do { __might_sleep(__FILE__, __LINE__, 0); might_resched(); } while (0)
+
+# define might_sleep_no_state_check() \
+	do { ___might_sleep(__FILE__, __LINE__, 0); might_resched(); } while (0)
 # define sched_annotate_sleep()	(current->task_state_change = 0)
 #else
   static inline void ___might_sleep(const char *file, int line,
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:272 @ extern int _cond_resched(void);
   static inline void __might_sleep(const char *file, int line,
 				   int preempt_offset) { }
 # define might_sleep() do { might_resched(); } while (0)
+# define might_sleep_no_state_check() do { might_resched(); } while (0)
 # define sched_annotate_sleep() do { } while (0)
 #endif
 
Index: linux-5.0.19-rt11/include/linux/kmsg_dump.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/kmsg_dump.h
+++ linux-5.0.19-rt11/include/linux/kmsg_dump.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:49 @ struct kmsg_dumper {
 	bool registered;
 
 	/* private state of the kmsg iterator */
-	u32 cur_idx;
-	u32 next_idx;
-	u64 cur_seq;
-	u64 next_seq;
+	u64 line_seq;
+	u64 buffer_end_seq;
 };
 
 #ifdef CONFIG_PRINTK
Index: linux-5.0.19-rt11/include/linux/kthread.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/kthread.h
+++ linux-5.0.19-rt11/include/linux/kthread.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:88 @ enum {
 
 struct kthread_worker {
 	unsigned int		flags;
-	spinlock_t		lock;
+	raw_spinlock_t		lock;
 	struct list_head	work_list;
 	struct list_head	delayed_work_list;
 	struct task_struct	*task;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:109 @ struct kthread_delayed_work {
 };
 
 #define KTHREAD_WORKER_INIT(worker)	{				\
-	.lock = __SPIN_LOCK_UNLOCKED((worker).lock),			\
+	.lock = __RAW_SPIN_LOCK_UNLOCKED((worker).lock),		\
 	.work_list = LIST_HEAD_INIT((worker).work_list),		\
 	.delayed_work_list = LIST_HEAD_INIT((worker).delayed_work_list),\
 	}
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:167 @ extern void __kthread_init_worker(struct
 #define kthread_init_delayed_work(dwork, fn)				\
 	do {								\
 		kthread_init_work(&(dwork)->work, (fn));		\
-		__init_timer(&(dwork)->timer,				\
-			     kthread_delayed_work_timer_fn,		\
-			     TIMER_IRQSAFE);				\
+		timer_setup(&(dwork)->timer,				\
+			     kthread_delayed_work_timer_fn, 0);		\
 	} while (0)
 
 int kthread_worker_fn(void *worker_ptr);
Index: linux-5.0.19-rt11/include/linux/list_bl.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/list_bl.h
+++ linux-5.0.19-rt11/include/linux/list_bl.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:6 @
 #define _LINUX_LIST_BL_H
 
 #include <linux/list.h>
+#include <linux/spinlock.h>
 #include <linux/bit_spinlock.h>
 
 /*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:37 @
 
 struct hlist_bl_head {
 	struct hlist_bl_node *first;
+#ifdef CONFIG_PREEMPT_RT_BASE
+	raw_spinlock_t lock;
+#endif
 };
 
 struct hlist_bl_node {
 	struct hlist_bl_node *next, **pprev;
 };
-#define INIT_HLIST_BL_HEAD(ptr) \
-	((ptr)->first = NULL)
+
+#ifdef CONFIG_PREEMPT_RT_BASE
+#define INIT_HLIST_BL_HEAD(h)		\
+do {					\
+	(h)->first = NULL;		\
+	raw_spin_lock_init(&(h)->lock);	\
+} while (0)
+#else
+#define INIT_HLIST_BL_HEAD(h) (h)->first = NULL
+#endif
 
 static inline void INIT_HLIST_BL_NODE(struct hlist_bl_node *h)
 {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:134 @ static inline void hlist_bl_del_init(str
 
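+/*
+ * On PREEMPT_RT the lock is a raw spinlock in the list head; bit 0 of the
+ * head pointer is still set while locked so that hlist_bl_is_locked() and
+ * the LIST_BL_LOCKMASK debug checks keep working.
+ */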
 static inline void hlist_bl_lock(struct hlist_bl_head *b)
 {
+#ifndef CONFIG_PREEMPT_RT_BASE
 	bit_spin_lock(0, (unsigned long *)b);
+#else
+	raw_spin_lock(&b->lock);
+#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
+	__set_bit(0, (unsigned long *)b);
+#endif
+#endif
 }
 
 static inline void hlist_bl_unlock(struct hlist_bl_head *b)
 {
+#ifndef CONFIG_PREEMPT_RT_BASE
 	__bit_spin_unlock(0, (unsigned long *)b);
+#else
+#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
+	__clear_bit(0, (unsigned long *)b);
+#endif
+	raw_spin_unlock(&b->lock);
+#endif
 }
 
 static inline bool hlist_bl_is_locked(struct hlist_bl_head *b)
Index: linux-5.0.19-rt11/include/linux/locallock.h
===================================================================
--- /dev/null
+++ linux-5.0.19-rt11/include/linux/locallock.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4 @
+#ifndef _LINUX_LOCALLOCK_H
+#define _LINUX_LOCALLOCK_H
+
+#include <linux/percpu.h>
+#include <linux/spinlock.h>
+
+#ifdef CONFIG_PREEMPT_RT_BASE
+
+#ifdef CONFIG_DEBUG_SPINLOCK
+# define LL_WARN(cond)	WARN_ON(cond)
+#else
+# define LL_WARN(cond)	do { } while (0)
+#endif
+
+/*
+ * per cpu lock based substitute for local_irq_*()
+ */
+struct local_irq_lock {
+	spinlock_t		lock;
+	struct task_struct	*owner;
+	int			nestcnt;
+	unsigned long		flags;
+};
+
+#define DEFINE_LOCAL_IRQ_LOCK(lvar)					\
+	DEFINE_PER_CPU(struct local_irq_lock, lvar) = {			\
+		.lock = __SPIN_LOCK_UNLOCKED((lvar).lock) }
+
+#define DECLARE_LOCAL_IRQ_LOCK(lvar)					\
+	DECLARE_PER_CPU(struct local_irq_lock, lvar)
+
+#define local_irq_lock_init(lvar)					\
+	do {								\
+		int __cpu;						\
+		for_each_possible_cpu(__cpu)				\
+			spin_lock_init(&per_cpu(lvar, __cpu).lock);	\
+	} while (0)
+
+static inline void __local_lock(struct local_irq_lock *lv)
+{
+	if (lv->owner != current) {
+		spin_lock(&lv->lock);
+		LL_WARN(lv->owner);
+		LL_WARN(lv->nestcnt);
+		lv->owner = current;
+	}
+	lv->nestcnt++;
+}
+
+#define local_lock(lvar)					\
+	do { __local_lock(&get_local_var(lvar)); } while (0)
+
+#define local_lock_on(lvar, cpu)				\
+	do { __local_lock(&per_cpu(lvar, cpu)); } while (0)
+
+static inline int __local_trylock(struct local_irq_lock *lv)
+{
+	if (lv->owner != current && spin_trylock(&lv->lock)) {
+		LL_WARN(lv->owner);
+		LL_WARN(lv->nestcnt);
+		lv->owner = current;
+		lv->nestcnt = 1;
+		return 1;
+	} else if (lv->owner == current) {
+		lv->nestcnt++;
+		return 1;
+	}
+	return 0;
+}
+
+#define local_trylock(lvar)						\
+	({								\
+		int __locked;						\
+		__locked = __local_trylock(&get_local_var(lvar));	\
+		if (!__locked)						\
+			put_local_var(lvar);				\
+		__locked;						\
+	})
+
+static inline void __local_unlock(struct local_irq_lock *lv)
+{
+	LL_WARN(lv->nestcnt == 0);
+	LL_WARN(lv->owner != current);
+	if (--lv->nestcnt)
+		return;
+
+	lv->owner = NULL;
+	spin_unlock(&lv->lock);
+}
+
+#define local_unlock(lvar)					\
+	do {							\
+		__local_unlock(this_cpu_ptr(&lvar));		\
+		put_local_var(lvar);				\
+	} while (0)
+
+#define local_unlock_on(lvar, cpu)                       \
+	do { __local_unlock(&per_cpu(lvar, cpu)); } while (0)
+
+static inline void __local_lock_irq(struct local_irq_lock *lv)
+{
+	spin_lock_irqsave(&lv->lock, lv->flags);
+	LL_WARN(lv->owner);
+	LL_WARN(lv->nestcnt);
+	lv->owner = current;
+	lv->nestcnt = 1;
+}
+
+#define local_lock_irq(lvar)						\
+	do { __local_lock_irq(&get_local_var(lvar)); } while (0)
+
+#define local_lock_irq_on(lvar, cpu)					\
+	do { __local_lock_irq(&per_cpu(lvar, cpu)); } while (0)
+
+static inline void __local_unlock_irq(struct local_irq_lock *lv)
+{
+	LL_WARN(!lv->nestcnt);
+	LL_WARN(lv->owner != current);
+	lv->owner = NULL;
+	lv->nestcnt = 0;
+	spin_unlock_irq(&lv->lock);
+}
+
+#define local_unlock_irq(lvar)						\
+	do {								\
+		__local_unlock_irq(this_cpu_ptr(&lvar));		\
+		put_local_var(lvar);					\
+	} while (0)
+
+#define local_unlock_irq_on(lvar, cpu)					\
+	do {								\
+		__local_unlock_irq(&per_cpu(lvar, cpu));		\
+	} while (0)
+
+static inline int __local_lock_irqsave(struct local_irq_lock *lv)
+{
+	if (lv->owner != current) {
+		__local_lock_irq(lv);
+		return 0;
+	} else {
+		lv->nestcnt++;
+		return 1;
+	}
+}
+
+#define local_lock_irqsave(lvar, _flags)				\
+	do {								\
+		if (__local_lock_irqsave(&get_local_var(lvar)))		\
+			put_local_var(lvar);				\
+		_flags = __this_cpu_read(lvar.flags);			\
+	} while (0)
+
+#define local_lock_irqsave_on(lvar, _flags, cpu)			\
+	do {								\
+		__local_lock_irqsave(&per_cpu(lvar, cpu));		\
+		_flags = per_cpu(lvar, cpu).flags;			\
+	} while (0)
+
+static inline int __local_unlock_irqrestore(struct local_irq_lock *lv,
+					    unsigned long flags)
+{
+	LL_WARN(!lv->nestcnt);
+	LL_WARN(lv->owner != current);
+	if (--lv->nestcnt)
+		return 0;
+
+	lv->owner = NULL;
+	spin_unlock_irqrestore(&lv->lock, lv->flags);
+	return 1;
+}
+
+#define local_unlock_irqrestore(lvar, flags)				\
+	do {								\
+		if (__local_unlock_irqrestore(this_cpu_ptr(&lvar), flags)) \
+			put_local_var(lvar);				\
+	} while (0)
+
+#define local_unlock_irqrestore_on(lvar, flags, cpu)			\
+	do {								\
+		__local_unlock_irqrestore(&per_cpu(lvar, cpu), flags);	\
+	} while (0)
+
+#define local_spin_trylock_irq(lvar, lock)				\
+	({								\
+		int __locked;						\
+		local_lock_irq(lvar);					\
+		__locked = spin_trylock(lock);				\
+		if (!__locked)						\
+			local_unlock_irq(lvar);				\
+		__locked;						\
+	})
+
+#define local_spin_lock_irq(lvar, lock)					\
+	do {								\
+		local_lock_irq(lvar);					\
+		spin_lock(lock);					\
+	} while (0)
+
+#define local_spin_unlock_irq(lvar, lock)				\
+	do {								\
+		spin_unlock(lock);					\
+		local_unlock_irq(lvar);					\
+	} while (0)
+
+#define local_spin_lock_irqsave(lvar, lock, flags)			\
+	do {								\
+		local_lock_irqsave(lvar, flags);			\
+		spin_lock(lock);					\
+	} while (0)
+
+#define local_spin_unlock_irqrestore(lvar, lock, flags)			\
+	do {								\
+		spin_unlock(lock);					\
+		local_unlock_irqrestore(lvar, flags);			\
+	} while (0)
+
+#define get_locked_var(lvar, var)					\
+	(*({								\
+		local_lock(lvar);					\
+		this_cpu_ptr(&var);					\
+	}))
+
+#define put_locked_var(lvar, var)	local_unlock(lvar)
+
+#define get_locked_ptr(lvar, var)					\
+	({								\
+		local_lock(lvar);					\
+		this_cpu_ptr(var);					\
+	})
+
+#define put_locked_ptr(lvar, var)	local_unlock(lvar)
+
+#define local_lock_cpu(lvar)						\
+	({								\
+		local_lock(lvar);					\
+		smp_processor_id();					\
+	})
+
+#define local_unlock_cpu(lvar)			local_unlock(lvar)
+
+#else /* PREEMPT_RT_BASE */
+
+#define DEFINE_LOCAL_IRQ_LOCK(lvar)		__typeof__(const int) lvar
+#define DECLARE_LOCAL_IRQ_LOCK(lvar)		extern __typeof__(const int) lvar
+
+static inline void local_irq_lock_init(int lvar) { }
+
+#define local_trylock(lvar)					\
+	({							\
+		preempt_disable();				\
+		1;						\
+	})
+
+#define local_lock(lvar)			preempt_disable()
+#define local_unlock(lvar)			preempt_enable()
+#define local_lock_irq(lvar)			local_irq_disable()
+#define local_lock_irq_on(lvar, cpu)		local_irq_disable()
+#define local_unlock_irq(lvar)			local_irq_enable()
+#define local_unlock_irq_on(lvar, cpu)		local_irq_enable()
+#define local_lock_irqsave(lvar, flags)		local_irq_save(flags)
+#define local_unlock_irqrestore(lvar, flags)	local_irq_restore(flags)
+
+#define local_spin_trylock_irq(lvar, lock)	spin_trylock_irq(lock)
+#define local_spin_lock_irq(lvar, lock)		spin_lock_irq(lock)
+#define local_spin_unlock_irq(lvar, lock)	spin_unlock_irq(lock)
+#define local_spin_lock_irqsave(lvar, lock, flags)	\
+	spin_lock_irqsave(lock, flags)
+#define local_spin_unlock_irqrestore(lvar, lock, flags)	\
+	spin_unlock_irqrestore(lock, flags)
+
+#define get_locked_var(lvar, var)		get_cpu_var(var)
+#define put_locked_var(lvar, var)		put_cpu_var(var)
+#define get_locked_ptr(lvar, var)		get_cpu_ptr(var)
+#define put_locked_ptr(lvar, var)		put_cpu_ptr(var)
+
+#define local_lock_cpu(lvar)			get_cpu()
+#define local_unlock_cpu(lvar)			put_cpu()
+
+#endif
+
+#endif
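
As an illustration of the local-lock API introduced above, here is a minimal usage sketch. It is not part of the patched headers; the lock, the per-CPU list, and the function are hypothetical names, and list initialization is omitted. On !RT kernels the calls collapse to local_irq_save()/restore(); on RT they take a per-CPU sleeping spinlock and the section stays preemptible.

  /* Hypothetical example of the locallock API; names are illustrative. */
  #include <linux/locallock.h>
  #include <linux/list.h>
  #include <linux/percpu.h>

  static DEFINE_PER_CPU(struct list_head, example_list);
  static DEFINE_LOCAL_IRQ_LOCK(example_lock);

  static void example_add(struct list_head *item)
  {
  	unsigned long flags;

  	/* RT: per-CPU spinlock; !RT: local_irq_save(). */
  	local_lock_irqsave(example_lock, flags);
  	list_add(item, this_cpu_ptr(&example_list));
  	local_unlock_irqrestore(example_lock, flags);
  }
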
Index: linux-5.0.19-rt11/include/linux/mm_types.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/mm_types.h
+++ linux-5.0.19-rt11/include/linux/mm_types.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:15 @
 #include <linux/completion.h>
 #include <linux/cpumask.h>
 #include <linux/uprobes.h>
+#include <linux/rcupdate.h>
 #include <linux/page-flags-layout.h>
 #include <linux/workqueue.h>
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:491 @ struct mm_struct {
 		bool tlb_flush_batched;
 #endif
 		struct uprobes_state uprobes_state;
+#ifdef CONFIG_PREEMPT_RT_BASE
+		struct rcu_head delayed_drop;
+#endif
 #ifdef CONFIG_HUGETLB_PAGE
 		atomic_long_t hugetlb_usage;
 #endif
Index: linux-5.0.19-rt11/include/linux/msi.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/msi.h
+++ linux-5.0.19-rt11/include/linux/msi.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:80 @ struct msi_desc {
 	struct device			*dev;
 	struct msi_msg			msg;
 	struct irq_affinity_desc	*affinity;
+#ifdef CONFIG_IRQ_MSI_IOMMU
+	const void			*iommu_cookie;
+#endif
 
 	union {
 		/* PCI MSI/X specific data */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:125 @ struct msi_desc {
 #define for_each_msi_entry_safe(desc, tmp, dev)	\
 	list_for_each_entry_safe((desc), (tmp), dev_to_msi_list((dev)), list)
 
+#ifdef CONFIG_IRQ_MSI_IOMMU
+static inline const void *msi_desc_get_iommu_cookie(struct msi_desc *desc)
+{
+	return desc->iommu_cookie;
+}
+
+static inline void msi_desc_set_iommu_cookie(struct msi_desc *desc,
+					     const void *iommu_cookie)
+{
+	desc->iommu_cookie = iommu_cookie;
+}
+#else
+static inline const void *msi_desc_get_iommu_cookie(struct msi_desc *desc)
+{
+	return NULL;
+}
+
+static inline void msi_desc_set_iommu_cookie(struct msi_desc *desc,
+					     const void *iommu_cookie)
+{
+}
+#endif
+
 #ifdef CONFIG_PCI_MSI
 #define first_pci_msi_entry(pdev)	first_msi_entry(&(pdev)->dev)
 #define for_each_pci_msi_entry(desc, pdev)	\
Index: linux-5.0.19-rt11/include/linux/mutex.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/mutex.h
+++ linux-5.0.19-rt11/include/linux/mutex.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:25 @
 
 struct ww_acquire_ctx;
 
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+# define __DEP_MAP_MUTEX_INITIALIZER(lockname) \
+		, .dep_map = { .name = #lockname }
+#else
+# define __DEP_MAP_MUTEX_INITIALIZER(lockname)
+#endif
+
+#ifdef CONFIG_PREEMPT_RT_FULL
+# include <linux/mutex_rt.h>
+#else
+
 /*
  * Simple, straightforward mutexes with strict semantics:
  *
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:132 @ do {									\
 	__mutex_init((mutex), #mutex, &__key);				\
 } while (0)
 
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
-# define __DEP_MAP_MUTEX_INITIALIZER(lockname) \
-		, .dep_map = { .name = #lockname }
-#else
-# define __DEP_MAP_MUTEX_INITIALIZER(lockname)
-#endif
-
 #define __MUTEX_INITIALIZER(lockname) \
 		{ .owner = ATOMIC_LONG_INIT(0) \
 		, .wait_lock = __SPIN_LOCK_UNLOCKED(lockname.wait_lock) \
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:236 @ mutex_trylock_recursive(struct mutex *lo
 	return mutex_trylock(lock);
 }
 
+#endif /* !PREEMPT_RT_FULL */
+
 #endif /* __LINUX_MUTEX_H */
Index: linux-5.0.19-rt11/include/linux/mutex_rt.h
===================================================================
--- /dev/null
+++ linux-5.0.19-rt11/include/linux/mutex_rt.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4 @
+#ifndef __LINUX_MUTEX_RT_H
+#define __LINUX_MUTEX_RT_H
+
+#ifndef __LINUX_MUTEX_H
+#error "Please include mutex.h"
+#endif
+
+#include <linux/rtmutex.h>
+
+/* FIXME: Just for __lockfunc */
+#include <linux/spinlock.h>
+
+struct mutex {
+	struct rt_mutex		lock;
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+	struct lockdep_map	dep_map;
+#endif
+};
+
+#define __MUTEX_INITIALIZER(mutexname)					\
+	{								\
+		.lock = __RT_MUTEX_INITIALIZER(mutexname.lock)		\
+		__DEP_MAP_MUTEX_INITIALIZER(mutexname)			\
+	}
+
+#define DEFINE_MUTEX(mutexname)						\
+	struct mutex mutexname = __MUTEX_INITIALIZER(mutexname)
+
+extern void __mutex_do_init(struct mutex *lock, const char *name, struct lock_class_key *key);
+extern void __lockfunc _mutex_lock(struct mutex *lock);
+extern void __lockfunc _mutex_lock_io(struct mutex *lock);
+extern void __lockfunc _mutex_lock_io_nested(struct mutex *lock, int subclass);
+extern int __lockfunc _mutex_lock_interruptible(struct mutex *lock);
+extern int __lockfunc _mutex_lock_killable(struct mutex *lock);
+extern void __lockfunc _mutex_lock_nested(struct mutex *lock, int subclass);
+extern void __lockfunc _mutex_lock_nest_lock(struct mutex *lock, struct lockdep_map *nest_lock);
+extern int __lockfunc _mutex_lock_interruptible_nested(struct mutex *lock, int subclass);
+extern int __lockfunc _mutex_lock_killable_nested(struct mutex *lock, int subclass);
+extern int __lockfunc _mutex_trylock(struct mutex *lock);
+extern void __lockfunc _mutex_unlock(struct mutex *lock);
+
+#define mutex_is_locked(l)		rt_mutex_is_locked(&(l)->lock)
+#define mutex_lock(l)			_mutex_lock(l)
+#define mutex_lock_interruptible(l)	_mutex_lock_interruptible(l)
+#define mutex_lock_killable(l)		_mutex_lock_killable(l)
+#define mutex_trylock(l)		_mutex_trylock(l)
+#define mutex_unlock(l)			_mutex_unlock(l)
+#define mutex_lock_io(l)		_mutex_lock_io(l)
+
+#define __mutex_owner(l)		((l)->lock.owner)
+
+#ifdef CONFIG_DEBUG_MUTEXES
+#define mutex_destroy(l)		rt_mutex_destroy(&(l)->lock)
+#else
+static inline void mutex_destroy(struct mutex *lock) {}
+#endif
+
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+# define mutex_lock_nested(l, s)	_mutex_lock_nested(l, s)
+# define mutex_lock_interruptible_nested(l, s) \
+					_mutex_lock_interruptible_nested(l, s)
+# define mutex_lock_killable_nested(l, s) \
+					_mutex_lock_killable_nested(l, s)
+# define mutex_lock_io_nested(l, s)	_mutex_lock_io_nested(l, s)
+
+# define mutex_lock_nest_lock(lock, nest_lock)				\
+do {									\
+	typecheck(struct lockdep_map *, &(nest_lock)->dep_map);		\
+	_mutex_lock_nest_lock(lock, &(nest_lock)->dep_map);		\
+} while (0)
+
+#else
+# define mutex_lock_nested(l, s)	_mutex_lock(l)
+# define mutex_lock_interruptible_nested(l, s) \
+					_mutex_lock_interruptible(l)
+# define mutex_lock_killable_nested(l, s) \
+					_mutex_lock_killable(l)
+# define mutex_lock_nest_lock(lock, nest_lock) mutex_lock(lock)
+# define mutex_lock_io_nested(l, s)	_mutex_lock_io(l)
+#endif
+
+# define mutex_init(mutex)				\
+do {							\
+	static struct lock_class_key __key;		\
+							\
+	rt_mutex_init(&(mutex)->lock);			\
+	__mutex_do_init((mutex), #mutex, &__key);	\
+} while (0)
+
+# define __mutex_init(mutex, name, key)			\
+do {							\
+	rt_mutex_init(&(mutex)->lock);			\
+	__mutex_do_init((mutex), name, key);		\
+} while (0)
+
+/**
+ * These values are chosen such that FAIL and SUCCESS match the
+ * values of the regular mutex_trylock().
+ */
+enum mutex_trylock_recursive_enum {
+	MUTEX_TRYLOCK_FAILED    = 0,
+	MUTEX_TRYLOCK_SUCCESS   = 1,
+	MUTEX_TRYLOCK_RECURSIVE,
+};
+/**
+ * mutex_trylock_recursive - trylock variant that allows recursive locking
+ * @lock: mutex to be locked
+ *
+ * This function should not be used, _ever_. It is purely for hysterical GEM
+ * raisins, and once those are gone this will be removed.
+ *
+ * Returns:
+ *  MUTEX_TRYLOCK_FAILED    - trylock failed,
+ *  MUTEX_TRYLOCK_SUCCESS   - lock acquired,
+ *  MUTEX_TRYLOCK_RECURSIVE - we already owned the lock.
+ */
+int __rt_mutex_owner_current(struct rt_mutex *lock);
+
+static inline /* __deprecated */ __must_check enum mutex_trylock_recursive_enum
+mutex_trylock_recursive(struct mutex *lock)
+{
+	if (unlikely(__rt_mutex_owner_current(&lock->lock)))
+		return MUTEX_TRYLOCK_RECURSIVE;
+
+	return mutex_trylock(lock);
+}
+
+extern int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock);
+
+#endif
Index: linux-5.0.19-rt11/include/linux/netdevice.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/netdevice.h
+++ linux-5.0.19-rt11/include/linux/netdevice.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:425 @ typedef enum rx_handler_result rx_handle
 typedef rx_handler_result_t rx_handler_func_t(struct sk_buff **pskb);
 
 void __napi_schedule(struct napi_struct *n);
+
+/*
+ * When PREEMPT_RT_FULL is defined, all device interrupt handlers
+ * run as threads and may themselves be preempted (without PREEMPT_RT,
+ * interrupt threads cannot be preempted). This means a call to
+ * __napi_schedule_irqoff() from an interrupt handler can be preempted,
+ * which can corrupt the napi->poll_list.
+ */
+#ifdef CONFIG_PREEMPT_RT_FULL
+#define __napi_schedule_irqoff(n) __napi_schedule(n)
+#else
 void __napi_schedule_irqoff(struct napi_struct *n);
+#endif
 
 static inline bool napi_disable_pending(struct napi_struct *n)
 {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:631 @ struct netdev_queue {
  * write-mostly part
  */
 	spinlock_t		_xmit_lock ____cacheline_aligned_in_smp;
+#ifdef CONFIG_PREEMPT_RT_FULL
+	struct task_struct	*xmit_lock_owner;
+#else
 	int			xmit_lock_owner;
+#endif
 	/*
 	 * Time (in jiffies) of last Tx
 	 */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2670 @ void netdev_freemem(struct net_device *d
 void synchronize_net(void);
 int init_dummy_netdev(struct net_device *dev);
 
-DECLARE_PER_CPU(int, xmit_recursion);
 #define XMIT_RECURSION_LIMIT	10
+#ifdef CONFIG_PREEMPT_RT_FULL
+static inline int dev_recursion_level(void)
+{
+	return current->xmit_recursion;
+}
+
+static inline int xmit_rec_read(void)
+{
+	return current->xmit_recursion;
+}
+
+static inline void xmit_rec_inc(void)
+{
+	current->xmit_recursion++;
+}
+
+static inline void xmit_rec_dec(void)
+{
+	current->xmit_recursion--;
+}
+
+#else
+
+DECLARE_PER_CPU(int, xmit_recursion);
 
 static inline int dev_recursion_level(void)
 {
 	return this_cpu_read(xmit_recursion);
 }
 
+static inline int xmit_rec_read(void)
+{
+	return __this_cpu_read(xmit_recursion);
+}
+
+static inline void xmit_rec_inc(void)
+{
+	__this_cpu_inc(xmit_recursion);
+}
+
+static inline void xmit_rec_dec(void)
+{
+	__this_cpu_dec(xmit_recursion);
+}
+#endif
+
 struct net_device *dev_get_by_index(struct net *net, int ifindex);
 struct net_device *__dev_get_by_index(struct net *net, int ifindex);
 struct net_device *dev_get_by_index_rcu(struct net *net, int ifindex);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3071 @ struct softnet_data {
 	unsigned int		dropped;
 	struct sk_buff_head	input_pkt_queue;
 	struct napi_struct	backlog;
+	struct sk_buff_head	tofree_queue;
 
 };
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3926 @ static inline u32 netif_msg_init(int deb
 	return (1U << debug_value) - 1;
 }
 
+#ifdef CONFIG_PREEMPT_RT_FULL
+static inline void netdev_queue_set_owner(struct netdev_queue *txq, int cpu)
+{
+	txq->xmit_lock_owner = current;
+}
+
+static inline void netdev_queue_clear_owner(struct netdev_queue *txq)
+{
+	txq->xmit_lock_owner = NULL;
+}
+
+static inline bool netdev_queue_has_owner(struct netdev_queue *txq)
+{
+	if (txq->xmit_lock_owner != NULL)
+		return true;
+	return false;
+}
+
+#else
+
+static inline void netdev_queue_set_owner(struct netdev_queue *txq, int cpu)
+{
+	txq->xmit_lock_owner = cpu;
+}
+
+static inline void netdev_queue_clear_owner(struct netdev_queue *txq)
+{
+	txq->xmit_lock_owner = -1;
+}
+
+static inline bool netdev_queue_has_owner(struct netdev_queue *txq)
+{
+	if (txq->xmit_lock_owner != -1)
+		return true;
+	return false;
+}
+#endif
+
 static inline void __netif_tx_lock(struct netdev_queue *txq, int cpu)
 {
 	spin_lock(&txq->_xmit_lock);
-	txq->xmit_lock_owner = cpu;
+	netdev_queue_set_owner(txq, cpu);
 }
 
 static inline bool __netif_tx_acquire(struct netdev_queue *txq)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3984 @ static inline void __netif_tx_release(st
 static inline void __netif_tx_lock_bh(struct netdev_queue *txq)
 {
 	spin_lock_bh(&txq->_xmit_lock);
-	txq->xmit_lock_owner = smp_processor_id();
+	netdev_queue_set_owner(txq, smp_processor_id());
 }
 
 static inline bool __netif_tx_trylock(struct netdev_queue *txq)
 {
 	bool ok = spin_trylock(&txq->_xmit_lock);
 	if (likely(ok))
-		txq->xmit_lock_owner = smp_processor_id();
+		netdev_queue_set_owner(txq, smp_processor_id());
 	return ok;
 }
 
 static inline void __netif_tx_unlock(struct netdev_queue *txq)
 {
-	txq->xmit_lock_owner = -1;
+	netdev_queue_clear_owner(txq);
 	spin_unlock(&txq->_xmit_lock);
 }
 
 static inline void __netif_tx_unlock_bh(struct netdev_queue *txq)
 {
-	txq->xmit_lock_owner = -1;
+	netdev_queue_clear_owner(txq);
 	spin_unlock_bh(&txq->_xmit_lock);
 }
 
 static inline void txq_trans_update(struct netdev_queue *txq)
 {
-	if (txq->xmit_lock_owner != -1)
+	if (netdev_queue_has_owner(txq))
 		txq->trans_start = jiffies;
 }
 
Index: linux-5.0.19-rt11/include/linux/netfilter/x_tables.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/netfilter/x_tables.h
+++ linux-5.0.19-rt11/include/linux/netfilter/x_tables.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:9 @
 #include <linux/netdevice.h>
 #include <linux/static_key.h>
 #include <linux/netfilter.h>
+#include <linux/locallock.h>
 #include <uapi/linux/netfilter/x_tables.h>
 
 /* Test a struct->invflags and a boolean for inequality */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:349 @ void xt_free_table_info(struct xt_table_
  */
 DECLARE_PER_CPU(seqcount_t, xt_recseq);
 
+DECLARE_LOCAL_IRQ_LOCK(xt_write_lock);
+
 /* xt_tee_enabled - true if x_tables needs to handle reentrancy
  *
  * Enabled if current ip(6)tables ruleset has at least one -j TEE rule.
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:371 @ static inline unsigned int xt_write_recs
 {
 	unsigned int addend;
 
+	/* RT protection */
+	local_lock(xt_write_lock);
+
 	/*
 	 * Low order bit of sequence is set if we already
 	 * called xt_write_recseq_begin().
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:404 @ static inline void xt_write_recseq_end(u
 	/* this is kind of a write_seqcount_end(), but addend is 0 or 1 */
 	smp_wmb();
 	__this_cpu_add(xt_recseq.sequence, addend);
+	local_unlock(xt_write_lock);
 }
 
 /*
Index: linux-5.0.19-rt11/include/linux/nfs_fs.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/nfs_fs.h
+++ linux-5.0.19-rt11/include/linux/nfs_fs.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:169 @ struct nfs_inode {
 
 	/* Readers: in-flight sillydelete RPC calls */
 	/* Writers: rmdir */
+#ifdef CONFIG_PREEMPT_RT_BASE
+	struct semaphore        rmdir_sem;
+#else
 	struct rw_semaphore	rmdir_sem;
+#endif
 	struct mutex		commit_mutex;
 
 #if IS_ENABLED(CONFIG_NFS_V4)
Index: linux-5.0.19-rt11/include/linux/nfs_xdr.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/nfs_xdr.h
+++ linux-5.0.19-rt11/include/linux/nfs_xdr.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1562 @ struct nfs_unlinkdata {
 	struct nfs_removeargs args;
 	struct nfs_removeres res;
 	struct dentry *dentry;
-	wait_queue_head_t wq;
+	struct swait_queue_head wq;
 	const struct cred *cred;
 	struct nfs_fattr dir_attr;
 	long timeout;
Index: linux-5.0.19-rt11/include/linux/percpu-rwsem.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/percpu-rwsem.h
+++ linux-5.0.19-rt11/include/linux/percpu-rwsem.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:32 @ static struct percpu_rw_semaphore name =
 extern int __percpu_down_read(struct percpu_rw_semaphore *, int);
 extern void __percpu_up_read(struct percpu_rw_semaphore *);
 
-static inline void percpu_down_read_preempt_disable(struct percpu_rw_semaphore *sem)
+static inline void percpu_down_read(struct percpu_rw_semaphore *sem)
 {
 	might_sleep();
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:50 @ static inline void percpu_down_read_pree
 	__this_cpu_inc(*sem->read_count);
 	if (unlikely(!rcu_sync_is_idle(&sem->rss)))
 		__percpu_down_read(sem, false); /* Unconditional memory barrier */
-	barrier();
 	/*
-	 * The barrier() prevents the compiler from
+	 * The preempt_enable() prevents the compiler from
 	 * bleeding the critical section out.
 	 */
-}
-
-static inline void percpu_down_read(struct percpu_rw_semaphore *sem)
-{
-	percpu_down_read_preempt_disable(sem);
 	preempt_enable();
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:80 @ static inline int percpu_down_read_trylo
 	return ret;
 }
 
-static inline void percpu_up_read_preempt_enable(struct percpu_rw_semaphore *sem)
+static inline void percpu_up_read(struct percpu_rw_semaphore *sem)
 {
-	/*
-	 * The barrier() prevents the compiler from
-	 * bleeding the critical section out.
-	 */
-	barrier();
+	preempt_disable();
 	/*
 	 * Same as in percpu_down_read().
 	 */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:95 @ static inline void percpu_up_read_preemp
 	rwsem_release(&sem->rw_sem.dep_map, 1, _RET_IP_);
 }
 
-static inline void percpu_up_read(struct percpu_rw_semaphore *sem)
-{
-	preempt_disable();
-	percpu_up_read_preempt_enable(sem);
-}
-
 extern void percpu_down_write(struct percpu_rw_semaphore *);
 extern void percpu_up_write(struct percpu_rw_semaphore *);
 
Index: linux-5.0.19-rt11/include/linux/percpu.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/percpu.h
+++ linux-5.0.19-rt11/include/linux/percpu.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:22 @
 #define PERCPU_MODULE_RESERVE		0
 #endif
 
+#ifdef CONFIG_PREEMPT_RT_FULL
+
+#define get_local_var(var) (*({	\
+	migrate_disable();	\
+	this_cpu_ptr(&var);	}))
+
+#define put_local_var(var) do {	\
+	(void)&(var);		\
+	migrate_enable();	\
+} while (0)
+
+# define get_local_ptr(var) ({	\
+	migrate_disable();	\
+	this_cpu_ptr(var);	})
+
+# define put_local_ptr(var) do {	\
+	(void)(var);			\
+	migrate_enable();		\
+} while (0)
+
+#else
+
+#define get_local_var(var)	get_cpu_var(var)
+#define put_local_var(var)	put_cpu_var(var)
+#define get_local_ptr(var)	get_cpu_ptr(var)
+#define put_local_ptr(var)	put_cpu_ptr(var)
+
+#endif
+
 /* minimum unit size, also is the maximum supported allocation size */
 #define PCPU_MIN_UNIT_SIZE		PFN_ALIGN(32 << 10)
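
For illustration, a small sketch of the get_local_var()/put_local_var() helpers added above; the per-CPU counter and the function are made-up names and not part of the patch. On RT the access window only disables migration, while on !RT it behaves like get_cpu_var()/put_cpu_var().

  /* Hypothetical example of get_local_var()/put_local_var(). */
  #include <linux/percpu.h>

  static DEFINE_PER_CPU(unsigned long, example_events);

  static void example_account_event(void)
  {
  	unsigned long *cnt = &get_local_var(example_events);

  	(*cnt)++;			/* safe: the task cannot migrate here */
  	put_local_var(example_events);
  }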
 
Index: linux-5.0.19-rt11/include/linux/pid.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/pid.h
+++ linux-5.0.19-rt11/include/linux/pid.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:6 @
 #define _LINUX_PID_H
 
 #include <linux/rculist.h>
+#include <linux/atomic.h>
 
 enum pid_type
 {
Index: linux-5.0.19-rt11/include/linux/posix-timers.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/posix-timers.h
+++ linux-5.0.19-rt11/include/linux/posix-timers.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:18 @ struct cpu_timer_list {
 	u64 expires, incr;
 	struct task_struct *task;
 	int firing;
+	int firing_cpu;
 };
 
 /*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:118 @ struct k_itimer {
 		struct {
 			struct alarm	alarmtimer;
 		} alarm;
-		struct rcu_head		rcu;
 	} it;
+	struct rcu_head		rcu;
 };
 
 void run_posix_cpu_timers(struct task_struct *task);
Index: linux-5.0.19-rt11/include/linux/preempt.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/preempt.h
+++ linux-5.0.19-rt11/include/linux/preempt.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:81 @
 #include <asm/preempt.h>
 
 #define hardirq_count()	(preempt_count() & HARDIRQ_MASK)
-#define softirq_count()	(preempt_count() & SOFTIRQ_MASK)
 #define irq_count()	(preempt_count() & (HARDIRQ_MASK | SOFTIRQ_MASK \
 				 | NMI_MASK))
-
 /*
  * Are we doing bottom half or hardware interrupt processing?
  *
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:97 @
  *       should not be used in new code.
  */
 #define in_irq()		(hardirq_count())
-#define in_softirq()		(softirq_count())
 #define in_interrupt()		(irq_count())
-#define in_serving_softirq()	(softirq_count() & SOFTIRQ_OFFSET)
 #define in_nmi()		(preempt_count() & NMI_MASK)
 #define in_task()		(!(preempt_count() & \
 				   (NMI_MASK | HARDIRQ_MASK | SOFTIRQ_OFFSET)))
+#ifdef CONFIG_PREEMPT_RT_FULL
+
+#define softirq_count()		((long)get_current()->softirq_count)
+#define in_softirq()		(softirq_count())
+#define in_serving_softirq()	(get_current()->softirq_count & SOFTIRQ_OFFSET)
+
+#else
+
+#define softirq_count()		(preempt_count() & SOFTIRQ_MASK)
+#define in_softirq()		(softirq_count())
+#define in_serving_softirq()	(softirq_count() & SOFTIRQ_OFFSET)
+
+#endif
 
 /*
  * The preempt_count offset after preempt_disable();
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:127 @
 /*
  * The preempt_count offset after spin_lock()
  */
+#if !defined(CONFIG_PREEMPT_RT_FULL)
 #define PREEMPT_LOCK_OFFSET	PREEMPT_DISABLE_OFFSET
+#else
+#define PREEMPT_LOCK_OFFSET	0
+#endif
 
 /*
  * The preempt_count offset needed for things like:
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:180 @ extern void preempt_count_sub(int val);
 #define preempt_count_inc() preempt_count_add(1)
 #define preempt_count_dec() preempt_count_sub(1)
 
+#ifdef CONFIG_PREEMPT_LAZY
+#define add_preempt_lazy_count(val)	do { preempt_lazy_count() += (val); } while (0)
+#define sub_preempt_lazy_count(val)	do { preempt_lazy_count() -= (val); } while (0)
+#define inc_preempt_lazy_count()	add_preempt_lazy_count(1)
+#define dec_preempt_lazy_count()	sub_preempt_lazy_count(1)
+#define preempt_lazy_count()		(current_thread_info()->preempt_lazy_count)
+#else
+#define add_preempt_lazy_count(val)	do { } while (0)
+#define sub_preempt_lazy_count(val)	do { } while (0)
+#define inc_preempt_lazy_count()	do { } while (0)
+#define dec_preempt_lazy_count()	do { } while (0)
+#define preempt_lazy_count()		(0)
+#endif
+
 #ifdef CONFIG_PREEMPT_COUNT
 
 #define preempt_disable() \
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:202 @ do { \
 	barrier(); \
 } while (0)
 
+#define preempt_lazy_disable() \
+do { \
+	inc_preempt_lazy_count(); \
+	barrier(); \
+} while (0)
+
 #define sched_preempt_enable_no_resched() \
 do { \
 	barrier(); \
 	preempt_count_dec(); \
 } while (0)
 
-#define preempt_enable_no_resched() sched_preempt_enable_no_resched()
+#ifdef CONFIG_PREEMPT_RT_BASE
+# define preempt_enable_no_resched() sched_preempt_enable_no_resched()
+# define preempt_check_resched_rt() preempt_check_resched()
+#else
+# define preempt_enable_no_resched() preempt_enable()
+# define preempt_check_resched_rt() barrier()
+#endif
 
 #define preemptible()	(preempt_count() == 0 && !irqs_disabled())
 
+#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE)
+
+extern void migrate_disable(void);
+extern void migrate_enable(void);
+
+int __migrate_disabled(struct task_struct *p);
+
+#elif !defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE)
+
+extern void migrate_disable(void);
+extern void migrate_enable(void);
+static inline int __migrate_disabled(struct task_struct *p)
+{
+	return 0;
+}
+
+#else
+#define migrate_disable()		preempt_disable()
+#define migrate_enable()		preempt_enable()
+static inline int __migrate_disabled(struct task_struct *p)
+{
+	return 0;
+}
+#endif
+
 #ifdef CONFIG_PREEMPT
 #define preempt_enable() \
 do { \
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:270 @ do { \
 		__preempt_schedule(); \
 } while (0)
 
+#define preempt_lazy_enable() \
+do { \
+	dec_preempt_lazy_count(); \
+	barrier(); \
+	preempt_check_resched(); \
+} while (0)
+
 #else /* !CONFIG_PREEMPT */
 #define preempt_enable() \
 do { \
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:284 @ do { \
 	preempt_count_dec(); \
 } while (0)
 
+#define preempt_lazy_enable() \
+do { \
+	dec_preempt_lazy_count(); \
+	barrier(); \
+} while (0)
+
 #define preempt_enable_notrace() \
 do { \
 	barrier(); \
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:328 @ do { \
 #define preempt_disable_notrace()		barrier()
 #define preempt_enable_no_resched_notrace()	barrier()
 #define preempt_enable_notrace()		barrier()
+#define preempt_check_resched_rt()		barrier()
 #define preemptible()				0
 
+#define migrate_disable()			barrier()
+#define migrate_enable()			barrier()
+
+static inline int __migrate_disabled(struct task_struct *p)
+{
+	return 0;
+}
 #endif /* CONFIG_PREEMPT_COUNT */
 
 #ifdef MODULE
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:356 @ do { \
 } while (0)
 #define preempt_fold_need_resched() \
 do { \
-	if (tif_need_resched()) \
+	if (tif_need_resched_now()) \
 		set_preempt_need_resched(); \
 } while (0)
 
+#ifdef CONFIG_PREEMPT_RT_FULL
+# define preempt_disable_rt()		preempt_disable()
+# define preempt_enable_rt()		preempt_enable()
+# define preempt_disable_nort()		barrier()
+# define preempt_enable_nort()		barrier()
+#else
+# define preempt_disable_rt()		barrier()
+# define preempt_enable_rt()		barrier()
+# define preempt_disable_nort()		preempt_disable()
+# define preempt_enable_nort()		preempt_enable()
+#endif
+
 #ifdef CONFIG_PREEMPT_NOTIFIERS
 
 struct preempt_notifier;
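
To illustrate the migrate_disable()/migrate_enable() pair added above, here is a hypothetical sketch (not part of the patch) for code that only needs a stable smp_processor_id() rather than full non-preemptibility. On RT the section remains preemptible; without RT the calls map to preempt_disable()/preempt_enable().

  /* Hypothetical example of migrate_disable()/migrate_enable(). */
  #include <linux/preempt.h>
  #include <linux/smp.h>
  #include <linux/printk.h>

  static void example_per_cpu_step(void)
  {
  	int cpu;

  	migrate_disable();
  	cpu = smp_processor_id();
  	pr_debug("pinned to CPU %d for this section\n", cpu);
  	migrate_enable();
  }
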
Index: linux-5.0.19-rt11/include/linux/printk.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/printk.h
+++ linux-5.0.19-rt11/include/linux/printk.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:62 @ static inline const char *printk_skip_he
  */
 #define CONSOLE_LOGLEVEL_DEFAULT CONFIG_CONSOLE_LOGLEVEL_DEFAULT
 #define CONSOLE_LOGLEVEL_QUIET	 CONFIG_CONSOLE_LOGLEVEL_QUIET
+#define CONSOLE_LOGLEVEL_EMERGENCY CONFIG_CONSOLE_LOGLEVEL_EMERGENCY
 
 extern int console_printk[];
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:70 @ extern int console_printk[];
 #define default_message_loglevel (console_printk[1])
 #define minimum_console_loglevel (console_printk[2])
 #define default_console_loglevel (console_printk[3])
+#define emergency_console_loglevel (console_printk[4])
 
 static inline void console_silent(void)
 {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:150 @ static inline __printf(1, 2) __cold
 void early_printk(const char *s, ...) { }
 #endif
 
-#ifdef CONFIG_PRINTK_NMI
-extern void printk_nmi_enter(void);
-extern void printk_nmi_exit(void);
-extern void printk_nmi_direct_enter(void);
-extern void printk_nmi_direct_exit(void);
-#else
-static inline void printk_nmi_enter(void) { }
-static inline void printk_nmi_exit(void) { }
-static inline void printk_nmi_direct_enter(void) { }
-static inline void printk_nmi_direct_exit(void) { }
-#endif /* PRINTK_NMI */
-
 #ifdef CONFIG_PRINTK
 asmlinkage __printf(5, 0)
 int vprintk_emit(int facility, int level,
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:194 @ __printf(1, 2) void dump_stack_set_arch_
 void dump_stack_print_info(const char *log_lvl);
 void show_regs_print_info(const char *log_lvl);
 extern asmlinkage void dump_stack(void) __cold;
-extern void printk_safe_init(void);
-extern void printk_safe_flush(void);
-extern void printk_safe_flush_on_panic(void);
+struct wait_queue_head *printk_wait_queue(void);
 #else
 static inline __printf(1, 0)
 int vprintk(const char *s, va_list args)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:258 @ static inline void show_regs_print_info(
 static inline void dump_stack(void)
 {
 }
-
-static inline void printk_safe_init(void)
-{
-}
-
-static inline void printk_safe_flush(void)
-{
-}
-
-static inline void printk_safe_flush_on_panic(void)
-{
-}
 #endif
 
 extern int kptr_restrict;
Index: linux-5.0.19-rt11/include/linux/printk_ringbuffer.h
===================================================================
--- /dev/null
+++ linux-5.0.19-rt11/include/linux/printk_ringbuffer.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4 @
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _LINUX_PRINTK_RINGBUFFER_H
+#define _LINUX_PRINTK_RINGBUFFER_H
+
+#include <linux/irq_work.h>
+#include <linux/atomic.h>
+#include <linux/percpu.h>
+#include <linux/wait.h>
+
+struct prb_cpulock {
+	atomic_t owner;
+	unsigned long __percpu *irqflags;
+};
+
+struct printk_ringbuffer {
+	void *buffer;
+	unsigned int size_bits;
+
+	u64 seq;
+	atomic_long_t lost;
+
+	atomic_long_t tail;
+	atomic_long_t head;
+	atomic_long_t reserve;
+
+	struct prb_cpulock *cpulock;
+	atomic_t ctx;
+
+	struct wait_queue_head *wq;
+	atomic_long_t wq_counter;
+	struct irq_work *wq_work;
+};
+
+struct prb_entry {
+	unsigned int size;
+	u64 seq;
+	char data[0];
+};
+
+struct prb_handle {
+	struct printk_ringbuffer *rb;
+	unsigned int cpu;
+	struct prb_entry *entry;
+};
+
+#define DECLARE_STATIC_PRINTKRB_CPULOCK(name)				\
+static DEFINE_PER_CPU(unsigned long, _##name##_percpu_irqflags);	\
+static struct prb_cpulock name = {					\
+	.owner = ATOMIC_INIT(-1),					\
+	.irqflags = &_##name##_percpu_irqflags,				\
+}
+
+#define PRB_INIT ((unsigned long)-1)
+
+#define DECLARE_STATIC_PRINTKRB_ITER(name, rbaddr)			\
+static struct prb_iterator name = {					\
+	.rb = rbaddr,							\
+	.lpos = PRB_INIT,						\
+}
+
+struct prb_iterator {
+	struct printk_ringbuffer *rb;
+	unsigned long lpos;
+};
+
+#define DECLARE_STATIC_PRINTKRB(name, szbits, cpulockptr)		\
+static char _##name##_buffer[1 << (szbits)]				\
+	__aligned(__alignof__(long));					\
+static DECLARE_WAIT_QUEUE_HEAD(_##name##_wait);				\
+static void _##name##_wake_work_func(struct irq_work *irq_work)		\
+{									\
+	wake_up_interruptible_all(&_##name##_wait);			\
+}									\
+static struct irq_work _##name##_wake_work = {				\
+	.func = _##name##_wake_work_func,				\
+	.flags = IRQ_WORK_LAZY,						\
+};									\
+static struct printk_ringbuffer name = {				\
+	.buffer = &_##name##_buffer[0],					\
+	.size_bits = szbits,						\
+	.seq = 0,							\
+	.lost = ATOMIC_LONG_INIT(0),					\
+	.tail = ATOMIC_LONG_INIT(-111 * sizeof(long)),			\
+	.head = ATOMIC_LONG_INIT(-111 * sizeof(long)),			\
+	.reserve = ATOMIC_LONG_INIT(-111 * sizeof(long)),		\
+	.cpulock = cpulockptr,						\
+	.ctx = ATOMIC_INIT(0),						\
+	.wq = &_##name##_wait,						\
+	.wq_counter = ATOMIC_LONG_INIT(0),				\
+	.wq_work = &_##name##_wake_work,				\
+}
+
+/* writer interface */
+char *prb_reserve(struct prb_handle *h, struct printk_ringbuffer *rb,
+		  unsigned int size);
+void prb_commit(struct prb_handle *h);
+
+/* reader interface */
+void prb_iter_init(struct prb_iterator *iter, struct printk_ringbuffer *rb,
+		   u64 *seq);
+void prb_iter_copy(struct prb_iterator *dest, struct prb_iterator *src);
+int prb_iter_next(struct prb_iterator *iter, char *buf, int size, u64 *seq);
+int prb_iter_wait_next(struct prb_iterator *iter, char *buf, int size,
+		       u64 *seq);
+int prb_iter_seek(struct prb_iterator *iter, u64 seq);
+int prb_iter_data(struct prb_iterator *iter, char *buf, int size, u64 *seq);
+
+/* utility functions */
+int prb_buffer_size(struct printk_ringbuffer *rb);
+void prb_inc_lost(struct printk_ringbuffer *rb);
+void prb_lock(struct prb_cpulock *cpu_lock, unsigned int *cpu_store);
+void prb_unlock(struct prb_cpulock *cpu_lock, unsigned int cpu_store);
+
+#endif /*_LINUX_PRINTK_RINGBUFFER_H */
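
A hedged sketch of the writer interface declared above follows; it is illustrative only and not part of the patch. The ring buffer name, its size, and the helper function are invented, and it assumes prb_reserve() returns NULL when the reservation cannot be satisfied.

  /* Hypothetical writer: reserve an entry, fill it, commit it. */
  #include <linux/printk_ringbuffer.h>
  #include <linux/string.h>

  DECLARE_STATIC_PRINTKRB_CPULOCK(example_cpulock);
  DECLARE_STATIC_PRINTKRB(example_rb, 10, &example_cpulock);

  static void example_write(const char *text, unsigned int len)
  {
  	struct prb_handle h;
  	char *data;

  	data = prb_reserve(&h, &example_rb, len);
  	if (!data) {
  		prb_inc_lost(&example_rb);	/* assumed failure path */
  		return;
  	}
  	memcpy(data, text, len);
  	prb_commit(&h);		/* entry becomes visible to readers */
  }
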
Index: linux-5.0.19-rt11/include/linux/radix-tree.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/radix-tree.h
+++ linux-5.0.19-rt11/include/linux/radix-tree.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:242 @ unsigned int radix_tree_gang_lookup(cons
 			unsigned int max_items);
 int radix_tree_preload(gfp_t gfp_mask);
 int radix_tree_maybe_preload(gfp_t gfp_mask);
+void radix_tree_preload_end(void);
 void radix_tree_init(void);
 void *radix_tree_tag_set(struct radix_tree_root *,
 			unsigned long index, unsigned int tag);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:260 @ unsigned int radix_tree_gang_lookup_tag_
 		unsigned int max_items, unsigned int tag);
 int radix_tree_tagged(const struct radix_tree_root *, unsigned int tag);
 
-static inline void radix_tree_preload_end(void)
-{
-	preempt_enable();
-}
-
 void __rcu **idr_get_free(struct radix_tree_root *root,
 			      struct radix_tree_iter *iter, gfp_t gfp,
 			      unsigned long max);
Index: linux-5.0.19-rt11/include/linux/random.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/random.h
+++ linux-5.0.19-rt11/include/linux/random.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:35 @ static inline void add_latent_entropy(vo
 
 extern void add_input_randomness(unsigned int type, unsigned int code,
 				 unsigned int value) __latent_entropy;
-extern void add_interrupt_randomness(int irq, int irq_flags) __latent_entropy;
+extern void add_interrupt_randomness(int irq, int irq_flags, __u64 ip) __latent_entropy;
 
 extern void get_random_bytes(void *buf, int nbytes);
 extern int wait_for_random_bytes(void);
Index: linux-5.0.19-rt11/include/linux/ratelimit.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/ratelimit.h
+++ linux-5.0.19-rt11/include/linux/ratelimit.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:62 @ static inline void ratelimit_state_exit(
 		return;
 
 	if (rs->missed) {
-		pr_warn("%s: %d output lines suppressed due to ratelimiting\n",
+		pr_info("%s: %d output lines suppressed due to ratelimiting\n",
 			current->comm, rs->missed);
 		rs->missed = 0;
 	}
Index: linux-5.0.19-rt11/include/linux/rbtree.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/rbtree.h
+++ linux-5.0.19-rt11/include/linux/rbtree.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:34 @
 
 #include <linux/kernel.h>
 #include <linux/stddef.h>
-#include <linux/rcupdate.h>
+#include <linux/rcu_assign_pointer.h>
 
 struct rb_node {
 	unsigned long  __rb_parent_color;
Index: linux-5.0.19-rt11/include/linux/rcu_assign_pointer.h
===================================================================
--- /dev/null
+++ linux-5.0.19-rt11/include/linux/rcu_assign_pointer.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4 @
+#ifndef __LINUX_RCU_ASSIGN_POINTER_H__
+#define __LINUX_RCU_ASSIGN_POINTER_H__
+#include <linux/compiler.h>
+#include <asm/barrier.h>
+
+/**
+ * RCU_INITIALIZER() - statically initialize an RCU-protected global variable
+ * @v: The value to statically initialize with.
+ */
+#define RCU_INITIALIZER(v) (typeof(*(v)) __force __rcu *)(v)
+
+/**
+ * rcu_assign_pointer() - assign to RCU-protected pointer
+ * @p: pointer to assign to
+ * @v: value to assign (publish)
+ *
+ * Assigns the specified value to the specified RCU-protected
+ * pointer, ensuring that any concurrent RCU readers will see
+ * any prior initialization.
+ *
+ * Inserts memory barriers on architectures that require them
+ * (which is most of them), and also prevents the compiler from
+ * reordering the code that initializes the structure after the pointer
+ * assignment.  More importantly, this call documents which pointers
+ * will be dereferenced by RCU read-side code.
+ *
+ * In some special cases, you may use RCU_INIT_POINTER() instead
+ * of rcu_assign_pointer().  RCU_INIT_POINTER() is a bit faster due
+ * to the fact that it does not constrain either the CPU or the compiler.
+ * That said, using RCU_INIT_POINTER() when you should have used
+ * rcu_assign_pointer() is a very bad thing that results in
+ * impossible-to-diagnose memory corruption.  So please be careful.
+ * See the RCU_INIT_POINTER() comment header for details.
+ *
+ * Note that rcu_assign_pointer() evaluates each of its arguments only
+ * once, appearances notwithstanding.  One of the "extra" evaluations
+ * is in typeof() and the other visible only to sparse (__CHECKER__),
+ * neither of which actually execute the argument.  As with most cpp
+ * macros, this execute-arguments-only-once property is important, so
+ * please be careful when making changes to rcu_assign_pointer() and the
+ * other macros that it invokes.
+ */
+#define rcu_assign_pointer(p, v)					      \
+({									      \
+	uintptr_t _r_a_p__v = (uintptr_t)(v);				      \
+									      \
+	if (__builtin_constant_p(v) && (_r_a_p__v) == (uintptr_t)NULL)	      \
+		WRITE_ONCE((p), (typeof(p))(_r_a_p__v));		      \
+	else								      \
+		smp_store_release(&p, RCU_INITIALIZER((typeof(p))_r_a_p__v)); \
+	_r_a_p__v;							      \
+})
+
+#endif
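
As an illustrative use of rcu_assign_pointer() as documented above (not part of the patch): the structure, global pointer, and update function are hypothetical, and freeing of the previously published object is omitted for brevity.

  /* Hypothetical publisher: initialize fully, then publish the pointer. */
  #include <linux/rcupdate.h>
  #include <linux/slab.h>
  #include <linux/errno.h>

  struct example_cfg {
  	int threshold;
  };

  static struct example_cfg __rcu *example_cfg_ptr;

  static int example_set_threshold(int threshold)
  {
  	struct example_cfg *new = kmalloc(sizeof(*new), GFP_KERNEL);

  	if (!new)
  		return -ENOMEM;
  	new->threshold = threshold;
  	rcu_assign_pointer(example_cfg_ptr, new);	/* publish */
  	return 0;
  }
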
Index: linux-5.0.19-rt11/include/linux/rcupdate.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/rcupdate.h
+++ linux-5.0.19-rt11/include/linux/rcupdate.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:45 @
 #include <linux/lockdep.h>
 #include <asm/processor.h>
 #include <linux/cpumask.h>
+#include <linux/rcu_assign_pointer.h>
 
 #define ULONG_CMP_GE(a, b)	(ULONG_MAX / 2 >= (a) - (b))
 #define ULONG_CMP_LT(a, b)	(ULONG_MAX / 2 < (a) - (b))
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:68 @ void __rcu_read_unlock(void);
  * types of kernel builds, the rcu_read_lock() nesting depth is unknowable.
  */
 #define rcu_preempt_depth() (current->rcu_read_lock_nesting)
+#ifndef CONFIG_PREEMPT_RT_FULL
+#define sched_rcu_preempt_depth()	rcu_preempt_depth()
+#else
+static inline int sched_rcu_preempt_depth(void) { return 0; }
+#endif
 
 #else /* #ifdef CONFIG_PREEMPT_RCU */
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:93 @ static inline int rcu_preempt_depth(void
 	return 0;
 }
 
+#define sched_rcu_preempt_depth()	rcu_preempt_depth()
+
 #endif /* #else #ifdef CONFIG_PREEMPT_RCU */
 
 /* Internal to kernel */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:298 @ static inline void rcu_preempt_sleep_che
 #define rcu_sleep_check()						\
 	do {								\
 		rcu_preempt_sleep_check();				\
-		RCU_LOCKDEP_WARN(lock_is_held(&rcu_bh_lock_map),	\
+		if (!IS_ENABLED(CONFIG_PREEMPT_RT_FULL))		\
+			RCU_LOCKDEP_WARN(lock_is_held(&rcu_bh_lock_map),\
 				 "Illegal context switch in RCU-bh read-side critical section"); \
 		RCU_LOCKDEP_WARN(lock_is_held(&rcu_sched_lock_map),	\
 				 "Illegal context switch in RCU-sched read-side critical section"); \
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:355 @ static inline void rcu_preempt_sleep_che
 })
 
 /**
- * RCU_INITIALIZER() - statically initialize an RCU-protected global variable
- * @v: The value to statically initialize with.
- */
-#define RCU_INITIALIZER(v) (typeof(*(v)) __force __rcu *)(v)
-
-/**
- * rcu_assign_pointer() - assign to RCU-protected pointer
- * @p: pointer to assign to
- * @v: value to assign (publish)
- *
- * Assigns the specified value to the specified RCU-protected
- * pointer, ensuring that any concurrent RCU readers will see
- * any prior initialization.
- *
- * Inserts memory barriers on architectures that require them
- * (which is most of them), and also prevents the compiler from
- * reordering the code that initializes the structure after the pointer
- * assignment.  More importantly, this call documents which pointers
- * will be dereferenced by RCU read-side code.
- *
- * In some special cases, you may use RCU_INIT_POINTER() instead
- * of rcu_assign_pointer().  RCU_INIT_POINTER() is a bit faster due
- * to the fact that it does not constrain either the CPU or the compiler.
- * That said, using RCU_INIT_POINTER() when you should have used
- * rcu_assign_pointer() is a very bad thing that results in
- * impossible-to-diagnose memory corruption.  So please be careful.
- * See the RCU_INIT_POINTER() comment header for details.
- *
- * Note that rcu_assign_pointer() evaluates each of its arguments only
- * once, appearances notwithstanding.  One of the "extra" evaluations
- * is in typeof() and the other visible only to sparse (__CHECKER__),
- * neither of which actually execute the argument.  As with most cpp
- * macros, this execute-arguments-only-once property is important, so
- * please be careful when making changes to rcu_assign_pointer() and the
- * other macros that it invokes.
- */
-#define rcu_assign_pointer(p, v)					      \
-({									      \
-	uintptr_t _r_a_p__v = (uintptr_t)(v);				      \
-									      \
-	if (__builtin_constant_p(v) && (_r_a_p__v) == (uintptr_t)NULL)	      \
-		WRITE_ONCE((p), (typeof(p))(_r_a_p__v));		      \
-	else								      \
-		smp_store_release(&p, RCU_INITIALIZER((typeof(p))_r_a_p__v)); \
-	_r_a_p__v;							      \
-})
-
-/**
  * rcu_swap_protected() - swap an RCU and a regular pointer
  * @rcu_ptr: RCU pointer
  * @ptr: regular pointer
Index: linux-5.0.19-rt11/include/linux/rtmutex.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/rtmutex.h
+++ linux-5.0.19-rt11/include/linux/rtmutex.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:17 @
 #define __LINUX_RT_MUTEX_H
 
 #include <linux/linkage.h>
+#include <linux/spinlock_types_raw.h>
 #include <linux/rbtree.h>
-#include <linux/spinlock_types.h>
 
 extern int max_lock_depth; /* for sysctl */
 
+#ifdef CONFIG_DEBUG_MUTEXES
+#include <linux/debug_locks.h>
+#endif
+
 /**
  * The rt_mutex structure
  *
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:38 @ struct rt_mutex {
 	raw_spinlock_t		wait_lock;
 	struct rb_root_cached   waiters;
 	struct task_struct	*owner;
-#ifdef CONFIG_DEBUG_RT_MUTEXES
 	int			save_state;
+#ifdef CONFIG_DEBUG_RT_MUTEXES
 	const char		*name, *file;
 	int			line;
 	void			*magic;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:89 @ do { \
 #define __DEP_MAP_RT_MUTEX_INITIALIZER(mutexname)
 #endif
 
-#define __RT_MUTEX_INITIALIZER(mutexname) \
-	{ .wait_lock = __RAW_SPIN_LOCK_UNLOCKED(mutexname.wait_lock) \
+#define __RT_MUTEX_INITIALIZER_PLAIN(mutexname) \
+	.wait_lock = __RAW_SPIN_LOCK_UNLOCKED(mutexname.wait_lock) \
 	, .waiters = RB_ROOT_CACHED \
 	, .owner = NULL \
 	__DEBUG_RT_MUTEX_INITIALIZER(mutexname) \
-	__DEP_MAP_RT_MUTEX_INITIALIZER(mutexname)}
+	__DEP_MAP_RT_MUTEX_INITIALIZER(mutexname)
+
+#define __RT_MUTEX_INITIALIZER(mutexname) \
+	{ __RT_MUTEX_INITIALIZER_PLAIN(mutexname) }
 
 #define DEFINE_RT_MUTEX(mutexname) \
 	struct rt_mutex mutexname = __RT_MUTEX_INITIALIZER(mutexname)
 
+#define __RT_MUTEX_INITIALIZER_SAVE_STATE(mutexname) \
+	{ __RT_MUTEX_INITIALIZER_PLAIN(mutexname)    \
+		, .save_state = 1 }
+
 /**
  * rt_mutex_is_locked - is the mutex locked
  * @lock: the mutex to be queried
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:129 @ extern void rt_mutex_lock(struct rt_mute
 #endif
 
 extern int rt_mutex_lock_interruptible(struct rt_mutex *lock);
+extern int rt_mutex_lock_killable(struct rt_mutex *lock);
 extern int rt_mutex_timed_lock(struct rt_mutex *lock,
 			       struct hrtimer_sleeper *timeout);
 
Index: linux-5.0.19-rt11/include/linux/rwlock_rt.h
===================================================================
--- /dev/null
+++ linux-5.0.19-rt11/include/linux/rwlock_rt.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4 @
+#ifndef __LINUX_RWLOCK_RT_H
+#define __LINUX_RWLOCK_RT_H
+
+#ifndef __LINUX_SPINLOCK_H
+#error Do not include directly. Use spinlock.h
+#endif
+
+extern void __lockfunc rt_write_lock(rwlock_t *rwlock);
+extern void __lockfunc rt_read_lock(rwlock_t *rwlock);
+extern int __lockfunc rt_write_trylock(rwlock_t *rwlock);
+extern int __lockfunc rt_read_trylock(rwlock_t *rwlock);
+extern void __lockfunc rt_write_unlock(rwlock_t *rwlock);
+extern void __lockfunc rt_read_unlock(rwlock_t *rwlock);
+extern int __lockfunc rt_read_can_lock(rwlock_t *rwlock);
+extern int __lockfunc rt_write_can_lock(rwlock_t *rwlock);
+extern void __rt_rwlock_init(rwlock_t *rwlock, char *name, struct lock_class_key *key);
+
+#define read_can_lock(rwlock)		rt_read_can_lock(rwlock)
+#define write_can_lock(rwlock)		rt_write_can_lock(rwlock)
+
+#define read_trylock(lock)	__cond_lock(lock, rt_read_trylock(lock))
+#define write_trylock(lock)	__cond_lock(lock, rt_write_trylock(lock))
+
+static inline int __write_trylock_rt_irqsave(rwlock_t *lock, unsigned long *flags)
+{
+	/* XXX ARCH_IRQ_ENABLED */
+	*flags = 0;
+	return rt_write_trylock(lock);
+}
+
+#define write_trylock_irqsave(lock, flags)		\
+	__cond_lock(lock, __write_trylock_rt_irqsave(lock, &(flags)))
+
+#define read_lock_irqsave(lock, flags)			\
+	do {						\
+		typecheck(unsigned long, flags);	\
+		rt_read_lock(lock);			\
+		flags = 0;				\
+	} while (0)
+
+#define write_lock_irqsave(lock, flags)			\
+	do {						\
+		typecheck(unsigned long, flags);	\
+		rt_write_lock(lock);			\
+		flags = 0;				\
+	} while (0)
+
+#define read_lock(lock)		rt_read_lock(lock)
+
+#define read_lock_bh(lock)				\
+	do {						\
+		local_bh_disable();			\
+		rt_read_lock(lock);			\
+	} while (0)
+
+#define read_lock_irq(lock)	read_lock(lock)
+
+#define write_lock(lock)	rt_write_lock(lock)
+
+#define write_lock_bh(lock)				\
+	do {						\
+		local_bh_disable();			\
+		rt_write_lock(lock);			\
+	} while (0)
+
+#define write_lock_irq(lock)	write_lock(lock)
+
+#define read_unlock(lock)	rt_read_unlock(lock)
+
+#define read_unlock_bh(lock)				\
+	do {						\
+		rt_read_unlock(lock);			\
+		local_bh_enable();			\
+	} while (0)
+
+#define read_unlock_irq(lock)	read_unlock(lock)
+
+#define write_unlock(lock)	rt_write_unlock(lock)
+
+#define write_unlock_bh(lock)				\
+	do {						\
+		rt_write_unlock(lock);			\
+		local_bh_enable();			\
+	} while (0)
+
+#define write_unlock_irq(lock)	write_unlock(lock)
+
+#define read_unlock_irqrestore(lock, flags)		\
+	do {						\
+		typecheck(unsigned long, flags);	\
+		(void) flags;				\
+		rt_read_unlock(lock);			\
+	} while (0)
+
+#define write_unlock_irqrestore(lock, flags) \
+	do {						\
+		typecheck(unsigned long, flags);	\
+		(void) flags;				\
+		rt_write_unlock(lock);			\
+	} while (0)
+
+#define rwlock_init(rwl)				\
+do {							\
+	static struct lock_class_key __key;		\
+							\
+	__rt_rwlock_init(rwl, #rwl, &__key);		\
+} while (0)
+
+/*
+ * Internal functions made global for CPU pinning
+ */
+void __read_rt_lock(struct rt_rw_lock *lock);
+int __read_rt_trylock(struct rt_rw_lock *lock);
+void __write_rt_lock(struct rt_rw_lock *lock);
+int __write_rt_trylock(struct rt_rw_lock *lock);
+void __read_rt_unlock(struct rt_rw_lock *lock);
+void __write_rt_unlock(struct rt_rw_lock *lock);
+
+#endif
Index: linux-5.0.19-rt11/include/linux/rwlock_types.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/rwlock_types.h
+++ linux-5.0.19-rt11/include/linux/rwlock_types.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4 @
 #ifndef __LINUX_RWLOCK_TYPES_H
 #define __LINUX_RWLOCK_TYPES_H
 
+#if !defined(__LINUX_SPINLOCK_TYPES_H)
+# error "Do not include directly, include spinlock_types.h"
+#endif
+
 /*
  * include/linux/rwlock_types.h - generic rwlock type definitions
  *				  and initializers
Index: linux-5.0.19-rt11/include/linux/rwlock_types_rt.h
===================================================================
--- /dev/null
+++ linux-5.0.19-rt11/include/linux/rwlock_types_rt.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4 @
+#ifndef __LINUX_RWLOCK_TYPES_RT_H
+#define __LINUX_RWLOCK_TYPES_RT_H
+
+#ifndef __LINUX_SPINLOCK_TYPES_H
+#error "Do not include directly. Include spinlock_types.h instead"
+#endif
+
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+# define RW_DEP_MAP_INIT(lockname)	.dep_map = { .name = #lockname }
+#else
+# define RW_DEP_MAP_INIT(lockname)
+#endif
+
+typedef struct rt_rw_lock rwlock_t;
+
+#define __RW_LOCK_UNLOCKED(name) __RWLOCK_RT_INITIALIZER(name)
+
+#define DEFINE_RWLOCK(name) \
+	rwlock_t name = __RW_LOCK_UNLOCKED(name)
+
+/*
+ * A reader-biased implementation, primarily for CPU pinning.
+ *
+ * Can be selected as a general replacement for the single-reader RT rwlock
+ * variant.
+ */
+struct rt_rw_lock {
+	struct rt_mutex		rtmutex;
+	atomic_t		readers;
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+	struct lockdep_map	dep_map;
+#endif
+};
+
+#define READER_BIAS	(1U << 31)
+#define WRITER_BIAS	(1U << 30)
+
+#define __RWLOCK_RT_INITIALIZER(name)					\
+{									\
+	.readers = ATOMIC_INIT(READER_BIAS),				\
+	.rtmutex = __RT_MUTEX_INITIALIZER_SAVE_STATE(name.rtmutex),	\
+	RW_DEP_MAP_INIT(name)						\
+}
+
+void __rwlock_biased_rt_init(struct rt_rw_lock *lock, const char *name,
+			     struct lock_class_key *key);
+
+#define rwlock_biased_rt_init(rwlock)					\
+	do {								\
+		static struct lock_class_key __key;			\
+									\
+		__rwlock_biased_rt_init((rwlock), #rwlock, &__key);	\
+	} while (0)
+
+#endif
Index: linux-5.0.19-rt11/include/linux/rwsem-rt.h
===================================================================
--- /dev/null
+++ linux-5.0.19-rt11/include/linux/rwsem-rt.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4 @
+#ifndef _LINUX_RWSEM_RT_H
+#define _LINUX_RWSEM_RT_H
+
+#ifndef _LINUX_RWSEM_H
+#error "Include rwsem.h"
+#endif
+
+#include <linux/rtmutex.h>
+#include <linux/swait.h>
+
+#define READER_BIAS		(1U << 31)
+#define WRITER_BIAS		(1U << 30)
+
+struct rw_semaphore {
+	atomic_t		readers;
+	struct rt_mutex		rtmutex;
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+	struct lockdep_map	dep_map;
+#endif
+};
+
+#define __RWSEM_INITIALIZER(name)				\
+{								\
+	.readers = ATOMIC_INIT(READER_BIAS),			\
+	.rtmutex = __RT_MUTEX_INITIALIZER(name.rtmutex),	\
+	RW_DEP_MAP_INIT(name)					\
+}
+
+#define DECLARE_RWSEM(lockname) \
+	struct rw_semaphore lockname = __RWSEM_INITIALIZER(lockname)
+
+extern void  __rwsem_init(struct rw_semaphore *rwsem, const char *name,
+			  struct lock_class_key *key);
+
+#define __init_rwsem(sem, name, key)			\
+do {							\
+		rt_mutex_init(&(sem)->rtmutex);		\
+		__rwsem_init((sem), (name), (key));	\
+} while (0)
+
+#define init_rwsem(sem)					\
+do {							\
+	static struct lock_class_key __key;		\
+							\
+	__init_rwsem((sem), #sem, &__key);		\
+} while (0)
+
+static inline int rwsem_is_locked(struct rw_semaphore *sem)
+{
+	return atomic_read(&sem->readers) != READER_BIAS;
+}
+
+static inline int rwsem_is_contended(struct rw_semaphore *sem)
+{
+	return atomic_read(&sem->readers) > 0;
+}
+
+extern void __down_read(struct rw_semaphore *sem);
+extern int __down_read_killable(struct rw_semaphore *sem);
+extern int __down_read_trylock(struct rw_semaphore *sem);
+extern void __down_write(struct rw_semaphore *sem);
+extern int __must_check __down_write_killable(struct rw_semaphore *sem);
+extern int __down_write_trylock(struct rw_semaphore *sem);
+extern void __up_read(struct rw_semaphore *sem);
+extern void __up_write(struct rw_semaphore *sem);
+extern void __downgrade_write(struct rw_semaphore *sem);
+
+#endif
Index: linux-5.0.19-rt11/include/linux/rwsem.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/rwsem.h
+++ linux-5.0.19-rt11/include/linux/rwsem.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:23 @
 #include <linux/osq_lock.h>
 #endif
 
+#ifdef CONFIG_PREEMPT_RT_FULL
+#include <linux/rwsem-rt.h>
+#else /* PREEMPT_RT_FULL */
+
 struct rw_semaphore;
 
 #ifdef CONFIG_RWSEM_GENERIC_SPINLOCK
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:121 @ static inline int rwsem_is_contended(str
 	return !list_empty(&sem->wait_list);
 }
 
+#endif /* !PREEMPT_RT_FULL */
+
+/*
+ * The functions below are the same for all rwsem implementations, including
+ * the RT-specific variant.
+ */
+
 /*
  * lock for reading
  */
Index: linux-5.0.19-rt11/include/linux/sched.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/sched.h
+++ linux-5.0.19-rt11/include/linux/sched.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:32 @
 #include <linux/mm_types_task.h>
 #include <linux/task_io_accounting.h>
 #include <linux/rseq.h>
+#include <asm/kmap_types.h>
 
 /* task_struct member predeclarations (sorted alphabetically): */
 struct audit_context;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:106 @ struct task_group;
 					 __TASK_TRACED | EXIT_DEAD | EXIT_ZOMBIE | \
 					 TASK_PARKED)
 
-#define task_is_traced(task)		((task->state & __TASK_TRACED) != 0)
-
 #define task_is_stopped(task)		((task->state & __TASK_STOPPED) != 0)
 
-#define task_is_stopped_or_traced(task)	((task->state & (__TASK_STOPPED | __TASK_TRACED)) != 0)
-
 #define task_contributes_to_load(task)	((task->state & TASK_UNINTERRUPTIBLE) != 0 && \
 					 (task->flags & PF_FROZEN) == 0 && \
 					 (task->state & TASK_NOLOAD) == 0)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:135 @ struct task_group;
 		smp_store_mb(current->state, (state_value));	\
 	} while (0)
 
+#define __set_current_state_no_track(state_value)		\
+	current->state = (state_value);
+
 #define set_special_state(state_value)					\
 	do {								\
 		unsigned long flags; /* may shadow */			\
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:147 @ struct task_group;
 		current->state = (state_value);				\
 		raw_spin_unlock_irqrestore(&current->pi_lock, flags);	\
 	} while (0)
+
 #else
 /*
  * set_current_state() includes a barrier so that the write of current->state
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:192 @ struct task_group;
 #define set_current_state(state_value)					\
 	smp_store_mb(current->state, (state_value))
 
+#define __set_current_state_no_track(state_value)	\
+	__set_current_state(state_value)
+
 /*
  * set_special_state() should be used for those states when the blocking task
  * can not use the regular condition based wait-loop. In that case we must
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:606 @ struct task_struct {
 #endif
 	/* -1 unrunnable, 0 runnable, >0 stopped: */
 	volatile long			state;
+	/* saved state for "spinlock sleepers" */
+	volatile long			saved_state;
 
 	/*
 	 * This begins the randomizable portion of task_struct. Only
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:668 @ struct task_struct {
 
 	unsigned int			policy;
 	int				nr_cpus_allowed;
-	cpumask_t			cpus_allowed;
+	const cpumask_t			*cpus_ptr;
+	cpumask_t			cpus_mask;
+#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE)
+	int				migrate_disable;
+	int				migrate_disable_update;
+	int				pinned_on_cpu;
+# ifdef CONFIG_SCHED_DEBUG
+	int				migrate_disable_atomic;
+# endif
+
+#elif !defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE)
+# ifdef CONFIG_SCHED_DEBUG
+	int				migrate_disable;
+	int				migrate_disable_atomic;
+# endif
+#endif
+#ifdef CONFIG_PREEMPT_RT_FULL
+	int				sleeping_lock;
+#endif
 
 #ifdef CONFIG_PREEMPT_RCU
 	int				rcu_read_lock_nesting;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:851 @ struct task_struct {
 #ifdef CONFIG_POSIX_TIMERS
 	struct task_cputime		cputime_expires;
 	struct list_head		cpu_timers[3];
+#ifdef CONFIG_PREEMPT_RT_BASE
+	struct task_struct		*posix_timer_list;
+#endif
 #endif
 
 	/* Process credentials: */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:898 @ struct task_struct {
 	/* Signal handlers: */
 	struct signal_struct		*signal;
 	struct sighand_struct		*sighand;
+	struct sigqueue			*sigqueue_cache;
+
 	sigset_t			blocked;
 	sigset_t			real_blocked;
 	/* Restored if set_restore_sigmask() was used: */
 	sigset_t			saved_sigmask;
 	struct sigpending		pending;
+#ifdef CONFIG_PREEMPT_RT_FULL
+	/* TODO: move me into ->restart_block ? */
+	struct				kernel_siginfo forced_info;
+#endif
 	unsigned long			sas_ss_sp;
 	size_t				sas_ss_size;
 	unsigned int			sas_ss_flags;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:933 @ struct task_struct {
 	raw_spinlock_t			pi_lock;
 
 	struct wake_q_node		wake_q;
+	struct wake_q_node		wake_q_sleeper;
 
 #ifdef CONFIG_RT_MUTEXES
 	/* PI waiters blocked on a rt_mutex held by this task: */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:964 @ struct task_struct {
 	int				softirqs_enabled;
 	int				softirq_context;
 #endif
+#ifdef CONFIG_PREEMPT_RT_FULL
+	int				softirq_count;
+#endif
 
 #ifdef CONFIG_LOCKDEP
 # define MAX_LOCK_DEPTH			48UL
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1215 @ struct task_struct {
 	unsigned int			sequential_io;
 	unsigned int			sequential_io_avg;
 #endif
+#ifdef CONFIG_PREEMPT_RT_BASE
+	struct rcu_head			put_rcu;
+#endif
+#ifdef CONFIG_PREEMPT_RT_FULL
+# if defined CONFIG_HIGHMEM || defined CONFIG_X86_32
+	int				kmap_idx;
+	pte_t				kmap_pte[KM_TYPE_NR];
+# endif
+#endif
 #ifdef CONFIG_DEBUG_ATOMIC_SLEEP
 	unsigned long			task_state_change;
 #endif
+#ifdef CONFIG_PREEMPT_RT_FULL
+	int				xmit_recursion;
+#endif
 	int				pagefault_disabled;
 #ifdef CONFIG_MMU
 	struct task_struct		*oom_reaper_list;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1453 @ extern struct pid *cad_pid;
 #define PF_SWAPWRITE		0x00800000	/* Allowed to write to swap */
 #define PF_MEMSTALL		0x01000000	/* Stalled due to lack of memory */
 #define PF_UMH			0x02000000	/* I'm an Usermodehelper process */
-#define PF_NO_SETAFFINITY	0x04000000	/* Userland is not allowed to meddle with cpus_allowed */
+#define PF_NO_SETAFFINITY	0x04000000	/* Userland is not allowed to meddle with cpus_mask */
 #define PF_MCE_EARLY		0x08000000      /* Early kill for mce process policy */
 #define PF_MUTEX_TESTER		0x20000000	/* Thread belongs to the rt mutex tester */
 #define PF_FREEZER_SKIP		0x40000000	/* Freezer should not count it as freezable */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1658 @ extern struct task_struct *find_get_task
 
 extern int wake_up_state(struct task_struct *tsk, unsigned int state);
 extern int wake_up_process(struct task_struct *tsk);
+extern int wake_up_lock_sleeper(struct task_struct *tsk);
 extern void wake_up_new_task(struct task_struct *tsk);
 
 #ifdef CONFIG_SMP
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1741 @ static inline int test_tsk_need_resched(
 	return unlikely(test_tsk_thread_flag(tsk,TIF_NEED_RESCHED));
 }
 
+#ifdef CONFIG_PREEMPT_LAZY
+static inline void set_tsk_need_resched_lazy(struct task_struct *tsk)
+{
+	set_tsk_thread_flag(tsk,TIF_NEED_RESCHED_LAZY);
+}
+
+static inline void clear_tsk_need_resched_lazy(struct task_struct *tsk)
+{
+	clear_tsk_thread_flag(tsk,TIF_NEED_RESCHED_LAZY);
+}
+
+static inline int test_tsk_need_resched_lazy(struct task_struct *tsk)
+{
+	return unlikely(test_tsk_thread_flag(tsk,TIF_NEED_RESCHED_LAZY));
+}
+
+static inline int need_resched_lazy(void)
+{
+	return test_thread_flag(TIF_NEED_RESCHED_LAZY);
+}
+
+static inline int need_resched_now(void)
+{
+	return test_thread_flag(TIF_NEED_RESCHED);
+}
+
+#else
+static inline void clear_tsk_need_resched_lazy(struct task_struct *tsk) { }
+static inline int need_resched_lazy(void) { return 0; }
+
+static inline int need_resched_now(void)
+{
+	return test_thread_flag(TIF_NEED_RESCHED);
+}
+
+#endif
+
+
+static inline bool __task_is_stopped_or_traced(struct task_struct *task)
+{
+	if (task->state & (__TASK_STOPPED | __TASK_TRACED))
+		return true;
+#ifdef CONFIG_PREEMPT_RT_FULL
+	if (task->saved_state & (__TASK_STOPPED | __TASK_TRACED))
+		return true;
+#endif
+	return false;
+}
+
+static inline bool task_is_stopped_or_traced(struct task_struct *task)
+{
+	bool traced_stopped;
+
+#ifdef CONFIG_PREEMPT_RT_FULL
+	unsigned long flags;
+
+	raw_spin_lock_irqsave(&task->pi_lock, flags);
+	traced_stopped = __task_is_stopped_or_traced(task);
+	raw_spin_unlock_irqrestore(&task->pi_lock, flags);
+#else
+	traced_stopped = __task_is_stopped_or_traced(task);
+#endif
+	return traced_stopped;
+}
+
+static inline bool task_is_traced(struct task_struct *task)
+{
+	bool traced = false;
+
+	if (task->state & __TASK_TRACED)
+		return true;
+#ifdef CONFIG_PREEMPT_RT_FULL
+	/* in case the task is sleeping on tasklist_lock */
+	raw_spin_lock_irq(&task->pi_lock);
+	if (task->state & __TASK_TRACED)
+		traced = true;
+	else if (task->saved_state & __TASK_TRACED)
+		traced = true;
+	raw_spin_unlock_irq(&task->pi_lock);
+#endif
+	return traced;
+}
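+
+/*
+ * On PREEMPT_RT_FULL both helpers above also consult ->saved_state, where a
+ * task blocked on a "sleeping spinlock" keeps its original state (see the
+ * saved_state member added to task_struct); ->pi_lock is taken to read the
+ * two fields consistently.
+ */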
+
 /*
  * cond_resched() and cond_resched_lock(): latency reduction via
  * explicit rescheduling in places that are safe. The return
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1876 @ static __always_inline bool need_resched
 	return unlikely(tif_need_resched());
 }
 
+#ifdef CONFIG_PREEMPT_RT_FULL
+static inline void sleeping_lock_inc(void)
+{
+	current->sleeping_lock++;
+}
+
+static inline void sleeping_lock_dec(void)
+{
+	current->sleeping_lock--;
+}
+
+#else
+
+static inline void sleeping_lock_inc(void) { }
+static inline void sleeping_lock_dec(void) { }
+#endif
+
 /*
  * Wrappers for p->thread_info->cpu access. No-op on UP.
  */
Index: linux-5.0.19-rt11/include/linux/sched/mm.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/sched/mm.h
+++ linux-5.0.19-rt11/include/linux/sched/mm.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:52 @ static inline void mmdrop(struct mm_stru
 		__mmdrop(mm);
 }
 
+#ifdef CONFIG_PREEMPT_RT_BASE
+extern void __mmdrop_delayed(struct rcu_head *rhp);
+static inline void mmdrop_delayed(struct mm_struct *mm)
+{
+	if (atomic_dec_and_test(&mm->mm_count))
+		call_rcu(&mm->delayed_drop, __mmdrop_delayed);
+}
+#else
+# define mmdrop_delayed(mm)	mmdrop(mm)
+#endif
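+
+/*
+ * On PREEMPT_RT_BASE the final mm_count drop is deferred to an RCU callback
+ * via the mm_struct delayed_drop rcu_head; otherwise mmdrop_delayed() is
+ * just a plain mmdrop().
+ */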
+
 /*
  * This has to be called after a get_task_mm()/mmget_not_zero()
  * followed by taking the mmap_sem for writing before modifying the
Index: linux-5.0.19-rt11/include/linux/sched/task.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/sched/task.h
+++ linux-5.0.19-rt11/include/linux/sched/task.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:93 @ extern void sched_exec(void);
 
 #define get_task_struct(tsk) do { atomic_inc(&(tsk)->usage); } while(0)
 
+#ifdef CONFIG_PREEMPT_RT_BASE
+extern void __put_task_struct_cb(struct rcu_head *rhp);
+
+static inline void put_task_struct(struct task_struct *t)
+{
+	if (atomic_dec_and_test(&t->usage))
+		call_rcu(&t->put_rcu, __put_task_struct_cb);
+}
+#else
 extern void __put_task_struct(struct task_struct *t);
 
 static inline void put_task_struct(struct task_struct *t)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:109 @ static inline void put_task_struct(struc
 	if (atomic_dec_and_test(&t->usage))
 		__put_task_struct(t);
 }
-
+#endif
 struct task_struct *task_rcu_dereference(struct task_struct **ptask);
 
 #ifdef CONFIG_ARCH_WANTS_DYNAMIC_TASK_STRUCT
Index: linux-5.0.19-rt11/include/linux/sched/wake_q.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/sched/wake_q.h
+++ linux-5.0.19-rt11/include/linux/sched/wake_q.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:54 @ static inline void wake_q_init(struct wa
 	head->lastp = &head->first;
 }
 
-extern void wake_q_add(struct wake_q_head *head,
-		       struct task_struct *task);
-extern void wake_up_q(struct wake_q_head *head);
+extern void __wake_q_add(struct wake_q_head *head,
+			 struct task_struct *task, bool sleeper);
+static inline void wake_q_add(struct wake_q_head *head,
+			      struct task_struct *task)
+{
+	__wake_q_add(head, task, false);
+}
+
+static inline void wake_q_add_sleeper(struct wake_q_head *head,
+				      struct task_struct *task)
+{
+	__wake_q_add(head, task, true);
+}
+
+extern void __wake_up_q(struct wake_q_head *head, bool sleeper);
+static inline void wake_up_q(struct wake_q_head *head)
+{
+	__wake_up_q(head, false);
+}
+
+static inline void wake_up_q_sleeper(struct wake_q_head *head)
+{
+	__wake_up_q(head, true);
+}
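+
+/*
+ * All four wrappers funnel into the shared __wake_q_add()/__wake_up_q()
+ * implementation; the bool distinguishes "sleeper" wakeups (cf. the
+ * wake_q_sleeper node added to task_struct) from regular ones.
+ */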
 
 #endif /* _LINUX_SCHED_WAKE_Q_H */
Index: linux-5.0.19-rt11/include/linux/seqlock.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/seqlock.h
+++ linux-5.0.19-rt11/include/linux/seqlock.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:224 @ static inline int read_seqcount_retry(co
 	return __read_seqcount_retry(s, start);
 }
 
-
-
-static inline void raw_write_seqcount_begin(seqcount_t *s)
+static inline void __raw_write_seqcount_begin(seqcount_t *s)
 {
 	s->sequence++;
 	smp_wmb();
 }
 
-static inline void raw_write_seqcount_end(seqcount_t *s)
+static inline void raw_write_seqcount_begin(seqcount_t *s)
+{
+	preempt_disable_rt();
+	__raw_write_seqcount_begin(s);
+}
+
+static inline void __raw_write_seqcount_end(seqcount_t *s)
 {
 	smp_wmb();
 	s->sequence++;
 }
 
+static inline void raw_write_seqcount_end(seqcount_t *s)
+{
+	__raw_write_seqcount_end(s);
+	preempt_enable_rt();
+}
+
 /**
  * raw_write_seqcount_barrier - do a seq write barrier
  * @s: pointer to seqcount_t
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:441 @ typedef struct {
 /*
  * Read side functions for starting and finalizing a read side section.
  */
+#ifndef CONFIG_PREEMPT_RT_FULL
 static inline unsigned read_seqbegin(const seqlock_t *sl)
 {
 	return read_seqcount_begin(&sl->seqcount);
 }
+#else
+/*
+ * Starvation-safe read side for RT
+ */
+static inline unsigned read_seqbegin(seqlock_t *sl)
+{
+	unsigned ret;
+
+repeat:
+	ret = READ_ONCE(sl->seqcount.sequence);
+	if (unlikely(ret & 1)) {
+		/*
+		 * Take the lock and let the writer proceed (i.e. possibly
+		 * boost it), otherwise we could loop here forever.
+		 */
+		spin_unlock_wait(&sl->lock);
+		goto repeat;
+	}
+	smp_rmb();
+	return ret;
+}
+#endif
 
 static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start)
 {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:482 @ static inline unsigned read_seqretry(con
 static inline void write_seqlock(seqlock_t *sl)
 {
 	spin_lock(&sl->lock);
-	write_seqcount_begin(&sl->seqcount);
+	__raw_write_seqcount_begin(&sl->seqcount);
+}
+
+static inline int try_write_seqlock(seqlock_t *sl)
+{
+	if (spin_trylock(&sl->lock)) {
+		__raw_write_seqcount_begin(&sl->seqcount);
+		return 1;
+	}
+	return 0;
 }
 
 static inline void write_sequnlock(seqlock_t *sl)
 {
-	write_seqcount_end(&sl->seqcount);
+	__raw_write_seqcount_end(&sl->seqcount);
 	spin_unlock(&sl->lock);
 }
 
 static inline void write_seqlock_bh(seqlock_t *sl)
 {
 	spin_lock_bh(&sl->lock);
-	write_seqcount_begin(&sl->seqcount);
+	__raw_write_seqcount_begin(&sl->seqcount);
 }
 
 static inline void write_sequnlock_bh(seqlock_t *sl)
 {
-	write_seqcount_end(&sl->seqcount);
+	__raw_write_seqcount_end(&sl->seqcount);
 	spin_unlock_bh(&sl->lock);
 }
 
 static inline void write_seqlock_irq(seqlock_t *sl)
 {
 	spin_lock_irq(&sl->lock);
-	write_seqcount_begin(&sl->seqcount);
+	__raw_write_seqcount_begin(&sl->seqcount);
 }
 
 static inline void write_sequnlock_irq(seqlock_t *sl)
 {
-	write_seqcount_end(&sl->seqcount);
+	__raw_write_seqcount_end(&sl->seqcount);
 	spin_unlock_irq(&sl->lock);
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:529 @ static inline unsigned long __write_seql
 	unsigned long flags;
 
 	spin_lock_irqsave(&sl->lock, flags);
-	write_seqcount_begin(&sl->seqcount);
+	__raw_write_seqcount_begin(&sl->seqcount);
 	return flags;
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:539 @ static inline unsigned long __write_seql
 static inline void
 write_sequnlock_irqrestore(seqlock_t *sl, unsigned long flags)
 {
-	write_seqcount_end(&sl->seqcount);
+	__raw_write_seqcount_end(&sl->seqcount);
 	spin_unlock_irqrestore(&sl->lock, flags);
 }
 
Index: linux-5.0.19-rt11/include/linux/serial_8250.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/serial_8250.h
+++ linux-5.0.19-rt11/include/linux/serial_8250.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:14 @
 #ifndef _LINUX_SERIAL_8250_H
 #define _LINUX_SERIAL_8250_H
 
+#include <linux/atomic.h>
 #include <linux/serial_core.h>
 #include <linux/serial_reg.h>
 #include <linux/platform_device.h>
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:130 @ struct uart_8250_port {
 #define MSR_SAVE_FLAGS UART_MSR_ANY_DELTA
 	unsigned char		msr_saved_flags;
 
+	atomic_t		console_printing;
+
 	struct uart_8250_dma	*dma;
 	const struct uart_8250_ops *ops;
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:183 @ void serial8250_init_port(struct uart_82
 void serial8250_set_defaults(struct uart_8250_port *up);
 void serial8250_console_write(struct uart_8250_port *up, const char *s,
 			      unsigned int count);
+void serial8250_console_write_atomic(struct uart_8250_port *up, const char *s,
+				     unsigned int count);
 int serial8250_console_setup(struct uart_port *port, char *options, bool probe);
 
 extern void serial8250_set_isa_configurator(void (*v)
Index: linux-5.0.19-rt11/include/linux/signal.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/signal.h
+++ linux-5.0.19-rt11/include/linux/signal.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:258 @ static inline void init_sigpending(struc
 }
 
 extern void flush_sigqueue(struct sigpending *queue);
+extern void flush_task_sigqueue(struct task_struct *tsk);
 
 /* Test if 'sig' is valid signal. Use this instead of testing _NSIG directly */
 static inline int valid_signal(unsigned long sig)
Index: linux-5.0.19-rt11/include/linux/skbuff.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/skbuff.h
+++ linux-5.0.19-rt11/include/linux/skbuff.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:292 @ struct sk_buff_head {
 
 	__u32		qlen;
 	spinlock_t	lock;
+	raw_spinlock_t	raw_lock;
 };
 
 struct sk_buff;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1752 @ static inline void skb_queue_head_init(s
 	__skb_queue_head_init(list);
 }
 
+static inline void skb_queue_head_init_raw(struct sk_buff_head *list)
+{
+	raw_spin_lock_init(&list->raw_lock);
+	__skb_queue_head_init(list);
+}
+
 static inline void skb_queue_head_init_class(struct sk_buff_head *list,
 		struct lock_class_key *class)
 {
Index: linux-5.0.19-rt11/include/linux/smp.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/smp.h
+++ linux-5.0.19-rt11/include/linux/smp.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:209 @ static inline int get_boot_cpu_id(void)
 #define get_cpu()		({ preempt_disable(); smp_processor_id(); })
 #define put_cpu()		preempt_enable()
 
+#define get_cpu_light()		({ migrate_disable(); smp_processor_id(); })
+#define put_cpu_light()		migrate_enable()
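+
+/*
+ * Like get_cpu()/put_cpu(), but only pins the task to its current CPU via
+ * migrate_disable()/migrate_enable() instead of disabling preemption.
+ */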
+
 /*
  * Callback to arch code if there's nosmp or maxcpus=0 on the
  * boot command line:
Index: linux-5.0.19-rt11/include/linux/spinlock.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/spinlock.h
+++ linux-5.0.19-rt11/include/linux/spinlock.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:301 @ static inline void do_raw_spin_unlock(ra
 })
 
 /* Include rwlock functions */
-#include <linux/rwlock.h>
+#ifdef CONFIG_PREEMPT_RT_FULL
+# include <linux/rwlock_rt.h>
+#else
+# include <linux/rwlock.h>
+#endif
 
 /*
  * Pull the _spin_*()/_read_*()/_write_*() functions/declarations:
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:316 @ static inline void do_raw_spin_unlock(ra
 # include <linux/spinlock_api_up.h>
 #endif
 
+#ifdef CONFIG_PREEMPT_RT_FULL
+# include <linux/spinlock_rt.h>
+#else /* PREEMPT_RT_FULL */
+
 /*
  * Map the spin_lock functions to the raw variants for PREEMPT_RT=n
  */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:440 @ static __always_inline int spin_is_conte
 
 #define assert_spin_locked(lock)	assert_raw_spin_locked(&(lock)->rlock)
 
+#endif /* !PREEMPT_RT_FULL */
+
 /*
  * Pull the atomic_t declaration:
  * (asm-mips/atomic.h needs above definitions)
Index: linux-5.0.19-rt11/include/linux/spinlock_api_smp.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/spinlock_api_smp.h
+++ linux-5.0.19-rt11/include/linux/spinlock_api_smp.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:190 @ static inline int __raw_spin_trylock_bh(
 	return 0;
 }
 
-#include <linux/rwlock_api_smp.h>
+#ifndef CONFIG_PREEMPT_RT_FULL
+# include <linux/rwlock_api_smp.h>
+#endif
 
 #endif /* __LINUX_SPINLOCK_API_SMP_H */
Index: linux-5.0.19-rt11/include/linux/spinlock_rt.h
===================================================================
--- /dev/null
+++ linux-5.0.19-rt11/include/linux/spinlock_rt.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4 @
+#ifndef __LINUX_SPINLOCK_RT_H
+#define __LINUX_SPINLOCK_RT_H
+
+#ifndef __LINUX_SPINLOCK_H
+#error Do not include directly. Use spinlock.h
+#endif
+
+#include <linux/bug.h>
+
+extern void
+__rt_spin_lock_init(spinlock_t *lock, const char *name, struct lock_class_key *key);
+
+#define spin_lock_init(slock)				\
+do {							\
+	static struct lock_class_key __key;		\
+							\
+	rt_mutex_init(&(slock)->lock);			\
+	__rt_spin_lock_init(slock, #slock, &__key);	\
+} while (0)
+
+extern void __lockfunc rt_spin_lock(spinlock_t *lock);
+extern unsigned long __lockfunc rt_spin_lock_trace_flags(spinlock_t *lock);
+extern void __lockfunc rt_spin_lock_nested(spinlock_t *lock, int subclass);
+extern void __lockfunc rt_spin_unlock(spinlock_t *lock);
+extern void __lockfunc rt_spin_unlock_wait(spinlock_t *lock);
+extern int __lockfunc rt_spin_trylock_irqsave(spinlock_t *lock, unsigned long *flags);
+extern int __lockfunc rt_spin_trylock_bh(spinlock_t *lock);
+extern int __lockfunc rt_spin_trylock(spinlock_t *lock);
+extern int atomic_dec_and_spin_lock(atomic_t *atomic, spinlock_t *lock);
+
+/*
+ * lockdep-less calls, for derived types like rwlock:
+ * (for trylock they can use rt_mutex_trylock() directly).
+ * Migrate disable handling must be done at the call site.
+ */
+extern void __lockfunc __rt_spin_lock(struct rt_mutex *lock);
+extern void __lockfunc __rt_spin_trylock(struct rt_mutex *lock);
+extern void __lockfunc __rt_spin_unlock(struct rt_mutex *lock);
+
+#define spin_lock(lock)			rt_spin_lock(lock)
+
+#define spin_lock_bh(lock)			\
+	do {					\
+		local_bh_disable();		\
+		rt_spin_lock(lock);		\
+	} while (0)
+
+#define spin_lock_irq(lock)		spin_lock(lock)
+
+#define spin_do_trylock(lock)		__cond_lock(lock, rt_spin_trylock(lock))
+
+#define spin_trylock(lock)			\
+({						\
+	int __locked;				\
+	__locked = spin_do_trylock(lock);	\
+	__locked;				\
+})
+
+#ifdef CONFIG_LOCKDEP
+# define spin_lock_nested(lock, subclass)		\
+	do {						\
+		rt_spin_lock_nested(lock, subclass);	\
+	} while (0)
+
+#define spin_lock_bh_nested(lock, subclass)		\
+	do {						\
+		local_bh_disable();			\
+		rt_spin_lock_nested(lock, subclass);	\
+	} while (0)
+
+# define spin_lock_irqsave_nested(lock, flags, subclass) \
+	do {						 \
+		typecheck(unsigned long, flags);	 \
+		flags = 0;				 \
+		rt_spin_lock_nested(lock, subclass);	 \
+	} while (0)
+#else
+# define spin_lock_nested(lock, subclass)	spin_lock(lock)
+# define spin_lock_bh_nested(lock, subclass)	spin_lock_bh(lock)
+
+# define spin_lock_irqsave_nested(lock, flags, subclass) \
+	do {						 \
+		typecheck(unsigned long, flags);	 \
+		flags = 0;				 \
+		spin_lock(lock);			 \
+	} while (0)
+#endif
+
+#define spin_lock_irqsave(lock, flags)			 \
+	do {						 \
+		typecheck(unsigned long, flags);	 \
+		flags = 0;				 \
+		spin_lock(lock);			 \
+	} while (0)
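+
+/*
+ * Note: on RT the irqsave/irqrestore variants do not actually disable
+ * interrupts; flags is only zeroed here and ignored on unlock, keeping the
+ * callers' calling convention intact.
+ */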
+
+static inline unsigned long spin_lock_trace_flags(spinlock_t *lock)
+{
+	unsigned long flags = 0;
+#ifdef CONFIG_TRACE_IRQFLAGS
+	flags = rt_spin_lock_trace_flags(lock);
+#else
+	spin_lock(lock); /* lock_local */
+#endif
+	return flags;
+}
+
+/* FIXME: we need rt_spin_lock_nest_lock */
+#define spin_lock_nest_lock(lock, nest_lock) spin_lock_nested(lock, 0)
+
+#define spin_unlock(lock)			rt_spin_unlock(lock)
+
+#define spin_unlock_bh(lock)				\
+	do {						\
+		rt_spin_unlock(lock);			\
+		local_bh_enable();			\
+	} while (0)
+
+#define spin_unlock_irq(lock)		spin_unlock(lock)
+
+#define spin_unlock_irqrestore(lock, flags)		\
+	do {						\
+		typecheck(unsigned long, flags);	\
+		(void) flags;				\
+		spin_unlock(lock);			\
+	} while (0)
+
+#define spin_trylock_bh(lock)	__cond_lock(lock, rt_spin_trylock_bh(lock))
+#define spin_trylock_irq(lock)	spin_trylock(lock)
+
+#define spin_trylock_irqsave(lock, flags)	\
+	rt_spin_trylock_irqsave(lock, &(flags))
+
+#define spin_unlock_wait(lock)		rt_spin_unlock_wait(lock)
+
+#ifdef CONFIG_GENERIC_LOCKBREAK
+# define spin_is_contended(lock)	((lock)->break_lock)
+#else
+# define spin_is_contended(lock)	(((void)(lock), 0))
+#endif
+
+static inline int spin_can_lock(spinlock_t *lock)
+{
+	return !rt_mutex_is_locked(&lock->lock);
+}
+
+static inline int spin_is_locked(spinlock_t *lock)
+{
+	return rt_mutex_is_locked(&lock->lock);
+}
+
+static inline void assert_spin_locked(spinlock_t *lock)
+{
+	BUG_ON(!spin_is_locked(lock));
+}
+
+#endif
Index: linux-5.0.19-rt11/include/linux/spinlock_types.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/spinlock_types.h
+++ linux-5.0.19-rt11/include/linux/spinlock_types.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:12 @
  * Released under the General Public License (GPL).
  */
 
-#if defined(CONFIG_SMP)
-# include <asm/spinlock_types.h>
-#else
-# include <linux/spinlock_types_up.h>
-#endif
-
-#include <linux/lockdep.h>
-
-typedef struct raw_spinlock {
-	arch_spinlock_t raw_lock;
-#ifdef CONFIG_DEBUG_SPINLOCK
-	unsigned int magic, owner_cpu;
-	void *owner;
-#endif
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
-	struct lockdep_map dep_map;
-#endif
-} raw_spinlock_t;
-
-#define SPINLOCK_MAGIC		0xdead4ead
-
-#define SPINLOCK_OWNER_INIT	((void *)-1L)
-
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
-# define SPIN_DEP_MAP_INIT(lockname)	.dep_map = { .name = #lockname }
-#else
-# define SPIN_DEP_MAP_INIT(lockname)
-#endif
+#include <linux/spinlock_types_raw.h>
 
-#ifdef CONFIG_DEBUG_SPINLOCK
-# define SPIN_DEBUG_INIT(lockname)		\
-	.magic = SPINLOCK_MAGIC,		\
-	.owner_cpu = -1,			\
-	.owner = SPINLOCK_OWNER_INIT,
+#ifndef CONFIG_PREEMPT_RT_FULL
+# include <linux/spinlock_types_nort.h>
+# include <linux/rwlock_types.h>
 #else
-# define SPIN_DEBUG_INIT(lockname)
+# include <linux/rtmutex.h>
+# include <linux/spinlock_types_rt.h>
+# include <linux/rwlock_types_rt.h>
 #endif
 
-#define __RAW_SPIN_LOCK_INITIALIZER(lockname)	\
-	{					\
-	.raw_lock = __ARCH_SPIN_LOCK_UNLOCKED,	\
-	SPIN_DEBUG_INIT(lockname)		\
-	SPIN_DEP_MAP_INIT(lockname) }
-
-#define __RAW_SPIN_LOCK_UNLOCKED(lockname)	\
-	(raw_spinlock_t) __RAW_SPIN_LOCK_INITIALIZER(lockname)
-
-#define DEFINE_RAW_SPINLOCK(x)	raw_spinlock_t x = __RAW_SPIN_LOCK_UNLOCKED(x)
-
-typedef struct spinlock {
-	union {
-		struct raw_spinlock rlock;
-
-#ifdef CONFIG_DEBUG_LOCK_ALLOC
-# define LOCK_PADSIZE (offsetof(struct raw_spinlock, dep_map))
-		struct {
-			u8 __padding[LOCK_PADSIZE];
-			struct lockdep_map dep_map;
-		};
-#endif
-	};
-} spinlock_t;
-
-#define __SPIN_LOCK_INITIALIZER(lockname) \
-	{ { .rlock = __RAW_SPIN_LOCK_INITIALIZER(lockname) } }
-
-#define __SPIN_LOCK_UNLOCKED(lockname) \
-	(spinlock_t ) __SPIN_LOCK_INITIALIZER(lockname)
-
-#define DEFINE_SPINLOCK(x)	spinlock_t x = __SPIN_LOCK_UNLOCKED(x)
-
-#include <linux/rwlock_types.h>
-
 #endif /* __LINUX_SPINLOCK_TYPES_H */
Index: linux-5.0.19-rt11/include/linux/spinlock_types_nort.h
===================================================================
--- /dev/null
+++ linux-5.0.19-rt11/include/linux/spinlock_types_nort.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4 @
+#ifndef __LINUX_SPINLOCK_TYPES_NORT_H
+#define __LINUX_SPINLOCK_TYPES_NORT_H
+
+#ifndef __LINUX_SPINLOCK_TYPES_H
+#error "Do not include directly. Include spinlock_types.h instead"
+#endif
+
+/*
+ * The non-RT version maps spinlocks to raw_spinlocks.
+ */
+typedef struct spinlock {
+	union {
+		struct raw_spinlock rlock;
+
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+# define LOCK_PADSIZE (offsetof(struct raw_spinlock, dep_map))
+		struct {
+			u8 __padding[LOCK_PADSIZE];
+			struct lockdep_map dep_map;
+		};
+#endif
+	};
+} spinlock_t;
+
+#define __SPIN_LOCK_INITIALIZER(lockname) \
+	{ { .rlock = __RAW_SPIN_LOCK_INITIALIZER(lockname) } }
+
+#define __SPIN_LOCK_UNLOCKED(lockname) \
+	(spinlock_t ) __SPIN_LOCK_INITIALIZER(lockname)
+
+#define DEFINE_SPINLOCK(x)	spinlock_t x = __SPIN_LOCK_UNLOCKED(x)
+
+#endif
Index: linux-5.0.19-rt11/include/linux/spinlock_types_raw.h
===================================================================
--- /dev/null
+++ linux-5.0.19-rt11/include/linux/spinlock_types_raw.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4 @
+#ifndef __LINUX_SPINLOCK_TYPES_RAW_H
+#define __LINUX_SPINLOCK_TYPES_RAW_H
+
+#include <linux/types.h>
+
+#if defined(CONFIG_SMP)
+# include <asm/spinlock_types.h>
+#else
+# include <linux/spinlock_types_up.h>
+#endif
+
+#include <linux/lockdep.h>
+
+typedef struct raw_spinlock {
+	arch_spinlock_t raw_lock;
+#ifdef CONFIG_DEBUG_SPINLOCK
+	unsigned int magic, owner_cpu;
+	void *owner;
+#endif
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+	struct lockdep_map dep_map;
+#endif
+} raw_spinlock_t;
+
+#define SPINLOCK_MAGIC		0xdead4ead
+
+#define SPINLOCK_OWNER_INIT	((void *)-1L)
+
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+# define SPIN_DEP_MAP_INIT(lockname)	.dep_map = { .name = #lockname }
+#else
+# define SPIN_DEP_MAP_INIT(lockname)
+#endif
+
+#ifdef CONFIG_DEBUG_SPINLOCK
+# define SPIN_DEBUG_INIT(lockname)		\
+	.magic = SPINLOCK_MAGIC,		\
+	.owner_cpu = -1,			\
+	.owner = SPINLOCK_OWNER_INIT,
+#else
+# define SPIN_DEBUG_INIT(lockname)
+#endif
+
+#define __RAW_SPIN_LOCK_INITIALIZER(lockname)	\
+	{					\
+	.raw_lock = __ARCH_SPIN_LOCK_UNLOCKED,	\
+	SPIN_DEBUG_INIT(lockname)		\
+	SPIN_DEP_MAP_INIT(lockname) }
+
+#define __RAW_SPIN_LOCK_UNLOCKED(lockname)	\
+	(raw_spinlock_t) __RAW_SPIN_LOCK_INITIALIZER(lockname)
+
+#define DEFINE_RAW_SPINLOCK(x)	raw_spinlock_t x = __RAW_SPIN_LOCK_UNLOCKED(x)
+
+#endif
Index: linux-5.0.19-rt11/include/linux/spinlock_types_rt.h
===================================================================
--- /dev/null
+++ linux-5.0.19-rt11/include/linux/spinlock_types_rt.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4 @
+#ifndef __LINUX_SPINLOCK_TYPES_RT_H
+#define __LINUX_SPINLOCK_TYPES_RT_H
+
+#ifndef __LINUX_SPINLOCK_TYPES_H
+#error "Do not include directly. Include spinlock_types.h instead"
+#endif
+
+#include <linux/cache.h>
+
+/*
+ * PREEMPT_RT: spinlocks - an RT mutex plus lock-break field:
+ */
+typedef struct spinlock {
+	struct rt_mutex		lock;
+	unsigned int		break_lock;
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+	struct lockdep_map	dep_map;
+#endif
+} spinlock_t;
+
+#ifdef CONFIG_DEBUG_RT_MUTEXES
+# define __RT_SPIN_INITIALIZER(name) \
+	{ \
+	.wait_lock = __RAW_SPIN_LOCK_UNLOCKED(name.wait_lock), \
+	.save_state = 1, \
+	.file = __FILE__, \
+	.line = __LINE__, \
+	}
+#else
+# define __RT_SPIN_INITIALIZER(name) \
+	{								\
+	.wait_lock = __RAW_SPIN_LOCK_UNLOCKED(name.wait_lock),		\
+	.save_state = 1, \
+	}
+#endif
+
+/*
+.wait_list = PLIST_HEAD_INIT_RAW((name).lock.wait_list, (name).lock.wait_lock)
+*/
+
+#define __SPIN_LOCK_UNLOCKED(name)			\
+	{ .lock = __RT_SPIN_INITIALIZER(name.lock),		\
+	  SPIN_DEP_MAP_INIT(name) }
+
+#define DEFINE_SPINLOCK(name) \
+	spinlock_t name = __SPIN_LOCK_UNLOCKED(name)
+
+#endif
Index: linux-5.0.19-rt11/include/linux/spinlock_types_up.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/spinlock_types_up.h
+++ linux-5.0.19-rt11/include/linux/spinlock_types_up.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3 @
 #ifndef __LINUX_SPINLOCK_TYPES_UP_H
 #define __LINUX_SPINLOCK_TYPES_UP_H
 
-#ifndef __LINUX_SPINLOCK_TYPES_H
-# error "please don't include this file directly"
-#endif
-
 /*
  * include/linux/spinlock_types_up.h - spinlock type definitions for UP
  *
Index: linux-5.0.19-rt11/include/linux/srcutree.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/srcutree.h
+++ linux-5.0.19-rt11/include/linux/srcutree.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:48 @ struct srcu_data {
 	unsigned long srcu_gp_seq_needed;	/* Furthest future GP needed. */
 	unsigned long srcu_gp_seq_needed_exp;	/* Furthest future exp GP. */
 	bool srcu_cblist_invoking;		/* Invoking these CBs? */
-	struct delayed_work work;		/* Context for CB invoking. */
+	struct timer_list delay_work;		/* Delay for CB invoking */
+	struct work_struct work;		/* Context for CB invoking. */
 	struct rcu_head srcu_barrier_head;	/* For srcu_barrier() use. */
 	struct srcu_node *mynode;		/* Leaf srcu_node. */
 	unsigned long grpmask;			/* Mask for leaf srcu_node */
Index: linux-5.0.19-rt11/include/linux/suspend.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/suspend.h
+++ linux-5.0.19-rt11/include/linux/suspend.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:199 @ struct platform_s2idle_ops {
 	void (*end)(void);
 };
 
+#if defined(CONFIG_SUSPEND) || defined(CONFIG_HIBERNATION)
+extern bool pm_in_action;
+#else
+# define pm_in_action false
+#endif
+
 #ifdef CONFIG_SUSPEND
 extern suspend_state_t mem_sleep_current;
 extern suspend_state_t mem_sleep_default;
Index: linux-5.0.19-rt11/include/linux/swait.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/swait.h
+++ linux-5.0.19-rt11/include/linux/swait.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:163 @ static inline bool swq_has_sleeper(struc
 extern void swake_up_one(struct swait_queue_head *q);
 extern void swake_up_all(struct swait_queue_head *q);
 extern void swake_up_locked(struct swait_queue_head *q);
+extern void swake_up_all_locked(struct swait_queue_head *q);
 
+extern void __prepare_to_swait(struct swait_queue_head *q, struct swait_queue *wait);
 extern void prepare_to_swait_exclusive(struct swait_queue_head *q, struct swait_queue *wait, int state);
 extern long prepare_to_swait_event(struct swait_queue_head *q, struct swait_queue *wait, int state);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:302 @ do {									\
 	__ret;								\
 })
 
+#define __swait_event_lock_irq(wq, condition, lock, cmd)		\
+	___swait_event(wq, condition, TASK_UNINTERRUPTIBLE, 0,		\
+		       raw_spin_unlock_irq(&lock);			\
+		       cmd;						\
+		       schedule();					\
+		       raw_spin_lock_irq(&lock))
+
+#define swait_event_lock_irq(wq_head, condition, lock)			\
+	do {								\
+		if (condition)						\
+			break;						\
+		__swait_event_lock_irq(wq_head, condition, lock, );	\
+	} while (0)
+
 #endif /* _LINUX_SWAIT_H */
Index: linux-5.0.19-rt11/include/linux/swap.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/swap.h
+++ linux-5.0.19-rt11/include/linux/swap.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:15 @
 #include <linux/fs.h>
 #include <linux/atomic.h>
 #include <linux/page-flags.h>
+#include <linux/locallock.h>
 #include <asm/page.h>
 
 struct notifier_block;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:332 @ extern unsigned long nr_free_pagecache_p
 
 
 /* linux/mm/swap.c */
+DECLARE_LOCAL_IRQ_LOCK(swapvec_lock);
 extern void lru_cache_add(struct page *);
 extern void lru_cache_add_anon(struct page *page);
 extern void lru_cache_add_file(struct page *page);
Index: linux-5.0.19-rt11/include/linux/thread_info.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/thread_info.h
+++ linux-5.0.19-rt11/include/linux/thread_info.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:100 @ static inline int test_ti_thread_flag(st
 #define test_thread_flag(flag) \
 	test_ti_thread_flag(current_thread_info(), flag)
 
-#define tif_need_resched() test_thread_flag(TIF_NEED_RESCHED)
+#ifdef CONFIG_PREEMPT_LAZY
+#define tif_need_resched()	(test_thread_flag(TIF_NEED_RESCHED) || \
+				 test_thread_flag(TIF_NEED_RESCHED_LAZY))
+#define tif_need_resched_now()	(test_thread_flag(TIF_NEED_RESCHED))
+#define tif_need_resched_lazy()	(test_thread_flag(TIF_NEED_RESCHED_LAZY))
+
+#else
+#define tif_need_resched()	test_thread_flag(TIF_NEED_RESCHED)
+#define tif_need_resched_now()	test_thread_flag(TIF_NEED_RESCHED)
+#define tif_need_resched_lazy()	0
+#endif
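+
+/*
+ * With PREEMPT_LAZY, tif_need_resched() reports either flag so generic
+ * "need to reschedule?" checks keep working, while the _now()/_lazy()
+ * variants let callers tell the two flags apart.
+ */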
 
 #ifndef CONFIG_HAVE_ARCH_WITHIN_STACK_FRAMES
 static inline int arch_within_stack_frames(const void * const stack,
Index: linux-5.0.19-rt11/include/linux/timer.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/timer.h
+++ linux-5.0.19-rt11/include/linux/timer.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:175 @ extern void add_timer(struct timer_list
 
 extern int try_to_del_timer_sync(struct timer_list *timer);
 
-#ifdef CONFIG_SMP
+#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT_RT_FULL)
   extern int del_timer_sync(struct timer_list *timer);
 #else
 # define del_timer_sync(t)		del_timer(t)
Index: linux-5.0.19-rt11/include/linux/trace_events.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/trace_events.h
+++ linux-5.0.19-rt11/include/linux/trace_events.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:65 @ struct trace_entry {
 	unsigned char		flags;
 	unsigned char		preempt_count;
 	int			pid;
+	unsigned short		migrate_disable;
+	unsigned short		padding;
+	unsigned char		preempt_lazy_count;
 };
 
 #define TRACE_EVENT_TYPE_MAX						\
Index: linux-5.0.19-rt11/include/linux/uaccess.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/uaccess.h
+++ linux-5.0.19-rt11/include/linux/uaccess.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:185 @ static __always_inline void pagefault_di
  */
 static inline void pagefault_disable(void)
 {
+	migrate_disable();
 	pagefault_disabled_inc();
 	/*
 	 * make sure to have issued the store before a pagefault
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:202 @ static inline void pagefault_enable(void
 	 */
 	barrier();
 	pagefault_disabled_dec();
+	migrate_enable();
 }
 
 /*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:273 @ extern long strncpy_from_unsafe(char *ds
 #define user_access_end() do { } while (0)
 #define unsafe_get_user(x, ptr, err) do { if (unlikely(__get_user(x, ptr))) goto err; } while (0)
 #define unsafe_put_user(x, ptr, err) do { if (unlikely(__put_user(x, ptr))) goto err; } while (0)
+static inline unsigned long user_access_save(void) { return 0UL; }
+static inline void user_access_restore(unsigned long flags) { }
 #endif
 
 #ifdef CONFIG_HARDENED_USERCOPY
Index: linux-5.0.19-rt11/include/linux/vmstat.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/vmstat.h
+++ linux-5.0.19-rt11/include/linux/vmstat.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:57 @ DECLARE_PER_CPU(struct vm_event_state, v
  */
 static inline void __count_vm_event(enum vm_event_item item)
 {
+	preempt_disable_rt();
 	raw_cpu_inc(vm_event_states.event[item]);
+	preempt_enable_rt();
 }
 
 static inline void count_vm_event(enum vm_event_item item)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:69 @ static inline void count_vm_event(enum v
 
 static inline void __count_vm_events(enum vm_event_item item, long delta)
 {
+	preempt_disable_rt();
 	raw_cpu_add(vm_event_states.event[item], delta);
+	preempt_enable_rt();
 }
 
 static inline void count_vm_events(enum vm_event_item item, long delta)
Index: linux-5.0.19-rt11/include/linux/wait.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/wait.h
+++ linux-5.0.19-rt11/include/linux/wait.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:13 @
 
 #include <asm/current.h>
 #include <uapi/linux/wait.h>
+#include <linux/atomic.h>
 
 typedef struct wait_queue_entry wait_queue_entry_t;
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:492 @ do {										\
 	int __ret = 0;								\
 	struct hrtimer_sleeper __t;						\
 										\
-	hrtimer_init_on_stack(&__t.timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);	\
-	hrtimer_init_sleeper(&__t, current);					\
+	hrtimer_init_sleeper_on_stack(&__t, CLOCK_MONOTONIC, HRTIMER_MODE_REL,	\
+				      current);					\
 	if ((timeout) != KTIME_MAX)						\
 		hrtimer_start_range_ns(&__t.timer, timeout,			\
 				       current->timer_slack_ns,			\
Index: linux-5.0.19-rt11/include/linux/workqueue.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/workqueue.h
+++ linux-5.0.19-rt11/include/linux/workqueue.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:458 @ __alloc_workqueue_key(const char *fmt, u
 
 extern void destroy_workqueue(struct workqueue_struct *wq);
 
-struct workqueue_attrs *alloc_workqueue_attrs(gfp_t gfp_mask);
-void free_workqueue_attrs(struct workqueue_attrs *attrs);
-int apply_workqueue_attrs(struct workqueue_struct *wq,
-			  const struct workqueue_attrs *attrs);
 int workqueue_set_unbound_cpumask(cpumask_var_t cpumask);
 
 extern bool queue_work_on(int cpu, struct workqueue_struct *wq,
Index: linux-5.0.19-rt11/include/net/gen_stats.h
===================================================================
--- linux-5.0.19-rt11.orig/include/net/gen_stats.h
+++ linux-5.0.19-rt11/include/net/gen_stats.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:9 @
 #include <linux/socket.h>
 #include <linux/rtnetlink.h>
 #include <linux/pkt_sched.h>
+#include <net/net_seq_lock.h>
 
 struct gnet_stats_basic_cpu {
 	struct gnet_stats_basic_packed bstats;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:40 @ int gnet_stats_start_copy_compat(struct
 				 spinlock_t *lock, struct gnet_dump *d,
 				 int padattr);
 
-int gnet_stats_copy_basic(const seqcount_t *running,
+int gnet_stats_copy_basic(net_seqlock_t *running,
 			  struct gnet_dump *d,
 			  struct gnet_stats_basic_cpu __percpu *cpu,
 			  struct gnet_stats_basic_packed *b);
-void __gnet_stats_copy_basic(const seqcount_t *running,
+void __gnet_stats_copy_basic(net_seqlock_t *running,
 			     struct gnet_stats_basic_packed *bstats,
 			     struct gnet_stats_basic_cpu __percpu *cpu,
 			     struct gnet_stats_basic_packed *b);
-int gnet_stats_copy_basic_hw(const seqcount_t *running,
+int gnet_stats_copy_basic_hw(net_seqlock_t *running,
 			     struct gnet_dump *d,
 			     struct gnet_stats_basic_cpu __percpu *cpu,
 			     struct gnet_stats_basic_packed *b);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:68 @ int gen_new_estimator(struct gnet_stats_
 		      struct gnet_stats_basic_cpu __percpu *cpu_bstats,
 		      struct net_rate_estimator __rcu **rate_est,
 		      spinlock_t *lock,
-		      seqcount_t *running, struct nlattr *opt);
+		      net_seqlock_t *running, struct nlattr *opt);
 void gen_kill_estimator(struct net_rate_estimator __rcu **ptr);
 int gen_replace_estimator(struct gnet_stats_basic_packed *bstats,
 			  struct gnet_stats_basic_cpu __percpu *cpu_bstats,
 			  struct net_rate_estimator __rcu **ptr,
 			  spinlock_t *lock,
-			  seqcount_t *running, struct nlattr *opt);
+			  net_seqlock_t *running, struct nlattr *opt);
 bool gen_estimator_active(struct net_rate_estimator __rcu **ptr);
 bool gen_estimator_read(struct net_rate_estimator __rcu **ptr,
 			struct gnet_stats_rate_est64 *sample);
Index: linux-5.0.19-rt11/include/net/neighbour.h
===================================================================
--- linux-5.0.19-rt11.orig/include/net/neighbour.h
+++ linux-5.0.19-rt11/include/net/neighbour.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:461 @ static inline int neigh_hh_bridge(struct
 }
 #endif
 
-static inline int neigh_hh_output(const struct hh_cache *hh, struct sk_buff *skb)
+static inline int neigh_hh_output(struct hh_cache *hh, struct sk_buff *skb)
 {
 	unsigned int hh_alen = 0;
 	unsigned int seq;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:503 @ static inline int neigh_hh_output(const
 
 static inline int neigh_output(struct neighbour *n, struct sk_buff *skb)
 {
-	const struct hh_cache *hh = &n->hh;
+	struct hh_cache *hh = &n->hh;
 
 	if ((n->nud_state & NUD_CONNECTED) && hh->hh_len)
 		return neigh_hh_output(hh, skb);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:544 @ struct neighbour_cb {
 
 #define NEIGH_CB(skb)	((struct neighbour_cb *)(skb)->cb)
 
-static inline void neigh_ha_snapshot(char *dst, const struct neighbour *n,
+static inline void neigh_ha_snapshot(char *dst, struct neighbour *n,
 				     const struct net_device *dev)
 {
 	unsigned int seq;
Index: linux-5.0.19-rt11/include/net/net_seq_lock.h
===================================================================
--- /dev/null
+++ linux-5.0.19-rt11/include/net/net_seq_lock.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4 @
+#ifndef __NET_NET_SEQ_LOCK_H__
+#define __NET_NET_SEQ_LOCK_H__
+
+#ifdef CONFIG_PREEMPT_RT_BASE
+# define net_seqlock_t			seqlock_t
+# define net_seq_begin(__r)		read_seqbegin(__r)
+# define net_seq_retry(__r, __s)	read_seqretry(__r, __s)
+
+#else
+# define net_seqlock_t			seqcount_t
+# define net_seq_begin(__r)		read_seqcount_begin(__r)
+# define net_seq_retry(__r, __s)	read_seqcount_retry(__r, __s)
+#endif
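+
+/*
+ * Read-side usage is identical for both configurations, e.g. against the
+ * Qdisc ->running field converted elsewhere in this patch (illustrative):
+ *
+ *	unsigned int seq;
+ *	do {
+ *		seq = net_seq_begin(&qdisc->running);
+ *		... read the per-qdisc counters ...
+ *	} while (net_seq_retry(&qdisc->running, seq));
+ */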
+
+#endif
Index: linux-5.0.19-rt11/include/net/sch_generic.h
===================================================================
--- linux-5.0.19-rt11.orig/include/net/sch_generic.h
+++ linux-5.0.19-rt11/include/net/sch_generic.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:13 @
 #include <linux/percpu.h>
 #include <linux/dynamic_queue_limits.h>
 #include <linux/list.h>
+#include <net/net_seq_lock.h>
 #include <linux/refcount.h>
 #include <linux/workqueue.h>
 #include <net/gen_stats.h>
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:108 @ struct Qdisc {
 	struct sk_buff_head	gso_skb ____cacheline_aligned_in_smp;
 	struct qdisc_skb_head	q;
 	struct gnet_stats_basic_packed bstats;
-	seqcount_t		running;
+	net_seqlock_t		running;
 	struct gnet_stats_queue	qstats;
 	unsigned long		state;
 	struct Qdisc            *next_sched;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:143 @ static inline bool qdisc_is_running(stru
 {
 	if (qdisc->flags & TCQ_F_NOLOCK)
 		return spin_is_locked(&qdisc->seqlock);
+#ifdef CONFIG_PREEMPT_RT_BASE
+	return spin_is_locked(&qdisc->running.lock) ? true : false;
+#else
 	return (raw_read_seqcount(&qdisc->running) & 1) ? true : false;
+#endif
 }
 
 static inline bool qdisc_run_begin(struct Qdisc *qdisc)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:158 @ static inline bool qdisc_run_begin(struc
 	} else if (qdisc_is_running(qdisc)) {
 		return false;
 	}
+#ifdef CONFIG_PREEMPT_RT_BASE
+	if (try_write_seqlock(&qdisc->running))
+		return true;
+	return false;
+#else
 	/* Variant of write_seqcount_begin() telling lockdep a trylock
 	 * was attempted.
 	 */
 	raw_write_seqcount_begin(&qdisc->running);
 	seqcount_acquire(&qdisc->running.dep_map, 0, 1, _RET_IP_);
 	return true;
+#endif
 }
 
 static inline void qdisc_run_end(struct Qdisc *qdisc)
 {
+#ifdef CONFIG_PREEMPT_RT_BASE
+	write_sequnlock(&qdisc->running);
+#else
 	write_seqcount_end(&qdisc->running);
+#endif
 	if (qdisc->flags & TCQ_F_NOLOCK)
 		spin_unlock(&qdisc->seqlock);
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:495 @ static inline spinlock_t *qdisc_root_sle
 	return qdisc_lock(root);
 }
 
-static inline seqcount_t *qdisc_root_sleeping_running(const struct Qdisc *qdisc)
+static inline net_seqlock_t *qdisc_root_sleeping_running(const struct Qdisc *qdisc)
 {
 	struct Qdisc *root = qdisc_root_sleeping(qdisc);
 
Index: linux-5.0.19-rt11/include/linux/atmel_tc.h
===================================================================
--- linux-5.0.19-rt11.orig/include/linux/atmel_tc.h
+++ /dev/null
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1 @
-/*
- * Timer/Counter Unit (TC) registers.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- */
-
-#ifndef ATMEL_TC_H
-#define ATMEL_TC_H
-
-#include <linux/compiler.h>
-#include <linux/list.h>
-
-/*
- * Many 32-bit Atmel SOCs include one or more TC blocks, each of which holds
- * three general-purpose 16-bit timers.  These timers share one register bank.
- * Depending on the SOC, each timer may have its own clock and IRQ, or those
- * may be shared by the whole TC block.
- *
- * These TC blocks may have up to nine external pins:  TCLK0..2 signals for
- * clocks or clock gates, and per-timer TIOA and TIOB signals used for PWM
- * or triggering.  Those pins need to be set up for use with the TC block,
- * else they will be used as GPIOs or for a different controller.
- *
- * Although we expect each TC block to have a platform_device node, those
- * nodes are not what drivers bind to.  Instead, they ask for a specific
- * TC block, by number ... which is a common approach on systems with many
- * timers.  Then they use clk_get() and platform_get_irq() to get clock and
- * IRQ resources.
- */
-
-struct clk;
-
-/**
- * struct atmel_tcb_config - SoC data for a Timer/Counter Block
- * @counter_width: size in bits of a timer counter register
- */
-struct atmel_tcb_config {
-	size_t	counter_width;
-};
-
-/**
- * struct atmel_tc - information about a Timer/Counter Block
- * @pdev: physical device
- * @regs: mapping through which the I/O registers can be accessed
- * @id: block id
- * @tcb_config: configuration data from SoC
- * @irq: irq for each of the three channels
- * @clk: internal clock source for each of the three channels
- * @node: list node, for tclib internal use
- * @allocated: if already used, for tclib internal use
- *
- * On some platforms, each TC channel has its own clocks and IRQs,
- * while on others, all TC channels share the same clock and IRQ.
- * Drivers should clk_enable() all the clocks they need even though
- * all the entries in @clk may point to the same physical clock.
- * Likewise, drivers should request irqs independently for each
- * channel, but they must use IRQF_SHARED in case some of the entries
- * in @irq are actually the same IRQ.
- */
-struct atmel_tc {
-	struct platform_device	*pdev;
-	void __iomem		*regs;
-	int                     id;
-	const struct atmel_tcb_config *tcb_config;
-	int			irq[3];
-	struct clk		*clk[3];
-	struct clk		*slow_clk;
-	struct list_head	node;
-	bool			allocated;
-};
-
-extern struct atmel_tc *atmel_tc_alloc(unsigned block);
-extern void atmel_tc_free(struct atmel_tc *tc);
-
-/* platform-specific ATMEL_TC_TIMER_CLOCKx divisors (0 means 32KiHz) */
-extern const u8 atmel_tc_divisors[5];
-
-
-/*
- * Two registers have block-wide controls.  These are: configuring the three
- * "external" clocks (or event sources) used by the timer channels; and
- * synchronizing the timers by resetting them all at once.
- *
- * "External" can mean "external to chip" using the TCLK0, TCLK1, or TCLK2
- * signals.  Or, it can mean "external to timer", using the TIOA output from
- * one of the other two timers that's being run in waveform mode.
- */
-
-#define ATMEL_TC_BCR	0xc0		/* TC Block Control Register */
-#define     ATMEL_TC_SYNC	(1 << 0)	/* synchronize timers */
-
-#define ATMEL_TC_BMR	0xc4		/* TC Block Mode Register */
-#define     ATMEL_TC_TC0XC0S	(3 << 0)	/* external clock 0 source */
-#define        ATMEL_TC_TC0XC0S_TCLK0	(0 << 0)
-#define        ATMEL_TC_TC0XC0S_NONE	(1 << 0)
-#define        ATMEL_TC_TC0XC0S_TIOA1	(2 << 0)
-#define        ATMEL_TC_TC0XC0S_TIOA2	(3 << 0)
-#define     ATMEL_TC_TC1XC1S	(3 << 2)	/* external clock 1 source */
-#define        ATMEL_TC_TC1XC1S_TCLK1	(0 << 2)
-#define        ATMEL_TC_TC1XC1S_NONE	(1 << 2)
-#define        ATMEL_TC_TC1XC1S_TIOA0	(2 << 2)
-#define        ATMEL_TC_TC1XC1S_TIOA2	(3 << 2)
-#define     ATMEL_TC_TC2XC2S	(3 << 4)	/* external clock 2 source */
-#define        ATMEL_TC_TC2XC2S_TCLK2	(0 << 4)
-#define        ATMEL_TC_TC2XC2S_NONE	(1 << 4)
-#define        ATMEL_TC_TC2XC2S_TIOA0	(2 << 4)
-#define        ATMEL_TC_TC2XC2S_TIOA1	(3 << 4)
-
-
-/*
- * Each TC block has three "channels", each with one counter and controls.
- *
- * Note that the semantics of ATMEL_TC_TIMER_CLOCKx (input clock selection
- * when it's not "external") is silicon-specific.  AT91 platforms use one
- * set of definitions; AVR32 platforms use a different set.  Don't hard-wire
- * such knowledge into your code, use the global "atmel_tc_divisors" ...
- * where index N is the divisor for clock N+1, else zero to indicate it uses
- * the 32 KiHz clock.
- *
- * The timers can be chained in various ways, and operated in "waveform"
- * generation mode (including PWM) or "capture" mode (to time events).  In
- * both modes, behavior can be configured in many ways.
- *
- * Each timer has two I/O pins, TIOA and TIOB.  Waveform mode uses TIOA as a
- * PWM output, and TIOB as either another PWM or as a trigger.  Capture mode
- * uses them only as inputs.
- */
-#define ATMEL_TC_CHAN(idx)	((idx)*0x40)
-#define ATMEL_TC_REG(idx, reg)	(ATMEL_TC_CHAN(idx) + ATMEL_TC_ ## reg)
-
-#define ATMEL_TC_CCR	0x00		/* Channel Control Register */
-#define     ATMEL_TC_CLKEN	(1 << 0)	/* clock enable */
-#define     ATMEL_TC_CLKDIS	(1 << 1)	/* clock disable */
-#define     ATMEL_TC_SWTRG	(1 << 2)	/* software trigger */
-
-#define ATMEL_TC_CMR	0x04		/* Channel Mode Register */
-
-/* Both modes share some CMR bits */
-#define     ATMEL_TC_TCCLKS	(7 << 0)	/* clock source */
-#define        ATMEL_TC_TIMER_CLOCK1	(0 << 0)
-#define        ATMEL_TC_TIMER_CLOCK2	(1 << 0)
-#define        ATMEL_TC_TIMER_CLOCK3	(2 << 0)
-#define        ATMEL_TC_TIMER_CLOCK4	(3 << 0)
-#define        ATMEL_TC_TIMER_CLOCK5	(4 << 0)
-#define        ATMEL_TC_XC0		(5 << 0)
-#define        ATMEL_TC_XC1		(6 << 0)
-#define        ATMEL_TC_XC2		(7 << 0)
-#define     ATMEL_TC_CLKI	(1 << 3)	/* clock invert */
-#define     ATMEL_TC_BURST	(3 << 4)	/* clock gating */
-#define        ATMEL_TC_GATE_NONE	(0 << 4)
-#define        ATMEL_TC_GATE_XC0	(1 << 4)
-#define        ATMEL_TC_GATE_XC1	(2 << 4)
-#define        ATMEL_TC_GATE_XC2	(3 << 4)
-#define     ATMEL_TC_WAVE	(1 << 15)	/* true = Waveform mode */
-
-/* CAPTURE mode CMR bits */
-#define     ATMEL_TC_LDBSTOP	(1 << 6)	/* counter stops on RB load */
-#define     ATMEL_TC_LDBDIS	(1 << 7)	/* counter disable on RB load */
-#define     ATMEL_TC_ETRGEDG	(3 << 8)	/* external trigger edge */
-#define        ATMEL_TC_ETRGEDG_NONE	(0 << 8)
-#define        ATMEL_TC_ETRGEDG_RISING	(1 << 8)
-#define        ATMEL_TC_ETRGEDG_FALLING	(2 << 8)
-#define        ATMEL_TC_ETRGEDG_BOTH	(3 << 8)
-#define     ATMEL_TC_ABETRG	(1 << 10)	/* external trigger is TIOA? */
-#define     ATMEL_TC_CPCTRG	(1 << 14)	/* RC compare trigger enable */
-#define     ATMEL_TC_LDRA	(3 << 16)	/* RA loading edge (of TIOA) */
-#define        ATMEL_TC_LDRA_NONE	(0 << 16)
-#define        ATMEL_TC_LDRA_RISING	(1 << 16)
-#define        ATMEL_TC_LDRA_FALLING	(2 << 16)
-#define        ATMEL_TC_LDRA_BOTH	(3 << 16)
-#define     ATMEL_TC_LDRB	(3 << 18)	/* RB loading edge (of TIOA) */
-#define        ATMEL_TC_LDRB_NONE	(0 << 18)
-#define        ATMEL_TC_LDRB_RISING	(1 << 18)
-#define        ATMEL_TC_LDRB_FALLING	(2 << 18)
-#define        ATMEL_TC_LDRB_BOTH	(3 << 18)
-
-/* WAVEFORM mode CMR bits */
-#define     ATMEL_TC_CPCSTOP	(1 <<  6)	/* RC compare stops counter */
-#define     ATMEL_TC_CPCDIS	(1 <<  7)	/* RC compare disables counter */
-#define     ATMEL_TC_EEVTEDG	(3 <<  8)	/* external event edge */
-#define        ATMEL_TC_EEVTEDG_NONE	(0 << 8)
-#define        ATMEL_TC_EEVTEDG_RISING	(1 << 8)
-#define        ATMEL_TC_EEVTEDG_FALLING	(2 << 8)
-#define        ATMEL_TC_EEVTEDG_BOTH	(3 << 8)
-#define     ATMEL_TC_EEVT	(3 << 10)	/* external event source */
-#define        ATMEL_TC_EEVT_TIOB	(0 << 10)
-#define        ATMEL_TC_EEVT_XC0	(1 << 10)
-#define        ATMEL_TC_EEVT_XC1	(2 << 10)
-#define        ATMEL_TC_EEVT_XC2	(3 << 10)
-#define     ATMEL_TC_ENETRG	(1 << 12)	/* external event is trigger */
-#define     ATMEL_TC_WAVESEL	(3 << 13)	/* waveform type */
-#define        ATMEL_TC_WAVESEL_UP	(0 << 13)
-#define        ATMEL_TC_WAVESEL_UPDOWN	(1 << 13)
-#define        ATMEL_TC_WAVESEL_UP_AUTO	(2 << 13)
-#define        ATMEL_TC_WAVESEL_UPDOWN_AUTO (3 << 13)
-#define     ATMEL_TC_ACPA	(3 << 16)	/* RA compare changes TIOA */
-#define        ATMEL_TC_ACPA_NONE	(0 << 16)
-#define        ATMEL_TC_ACPA_SET	(1 << 16)
-#define        ATMEL_TC_ACPA_CLEAR	(2 << 16)
-#define        ATMEL_TC_ACPA_TOGGLE	(3 << 16)
-#define     ATMEL_TC_ACPC	(3 << 18)	/* RC compare changes TIOA */
-#define        ATMEL_TC_ACPC_NONE	(0 << 18)
-#define        ATMEL_TC_ACPC_SET	(1 << 18)
-#define        ATMEL_TC_ACPC_CLEAR	(2 << 18)
-#define        ATMEL_TC_ACPC_TOGGLE	(3 << 18)
-#define     ATMEL_TC_AEEVT	(3 << 20)	/* external event changes TIOA */
-#define        ATMEL_TC_AEEVT_NONE	(0 << 20)
-#define        ATMEL_TC_AEEVT_SET	(1 << 20)
-#define        ATMEL_TC_AEEVT_CLEAR	(2 << 20)
-#define        ATMEL_TC_AEEVT_TOGGLE	(3 << 20)
-#define     ATMEL_TC_ASWTRG	(3 << 22)	/* software trigger changes TIOA */
-#define        ATMEL_TC_ASWTRG_NONE	(0 << 22)
-#define        ATMEL_TC_ASWTRG_SET	(1 << 22)
-#define        ATMEL_TC_ASWTRG_CLEAR	(2 << 22)
-#define        ATMEL_TC_ASWTRG_TOGGLE	(3 << 22)
-#define     ATMEL_TC_BCPB	(3 << 24)	/* RB compare changes TIOB */
-#define        ATMEL_TC_BCPB_NONE	(0 << 24)
-#define        ATMEL_TC_BCPB_SET	(1 << 24)
-#define        ATMEL_TC_BCPB_CLEAR	(2 << 24)
-#define        ATMEL_TC_BCPB_TOGGLE	(3 << 24)
-#define     ATMEL_TC_BCPC	(3 << 26)	/* RC compare changes TIOB */
-#define        ATMEL_TC_BCPC_NONE	(0 << 26)
-#define        ATMEL_TC_BCPC_SET	(1 << 26)
-#define        ATMEL_TC_BCPC_CLEAR	(2 << 26)
-#define        ATMEL_TC_BCPC_TOGGLE	(3 << 26)
-#define     ATMEL_TC_BEEVT	(3 << 28)	/* external event changes TIOB */
-#define        ATMEL_TC_BEEVT_NONE	(0 << 28)
-#define        ATMEL_TC_BEEVT_SET	(1 << 28)
-#define        ATMEL_TC_BEEVT_CLEAR	(2 << 28)
-#define        ATMEL_TC_BEEVT_TOGGLE	(3 << 28)
-#define     ATMEL_TC_BSWTRG	(3 << 30)	/* software trigger changes TIOB */
-#define        ATMEL_TC_BSWTRG_NONE	(0 << 30)
-#define        ATMEL_TC_BSWTRG_SET	(1 << 30)
-#define        ATMEL_TC_BSWTRG_CLEAR	(2 << 30)
-#define        ATMEL_TC_BSWTRG_TOGGLE	(3 << 30)
-
-#define ATMEL_TC_CV	0x10		/* counter Value */
-#define ATMEL_TC_RA	0x14		/* register A */
-#define ATMEL_TC_RB	0x18		/* register B */
-#define ATMEL_TC_RC	0x1c		/* register C */
-
-#define ATMEL_TC_SR	0x20		/* status (read-only) */
-/* Status-only flags */
-#define     ATMEL_TC_CLKSTA	(1 << 16)	/* clock enabled */
-#define     ATMEL_TC_MTIOA	(1 << 17)	/* TIOA mirror */
-#define     ATMEL_TC_MTIOB	(1 << 18)	/* TIOB mirror */
-
-#define ATMEL_TC_IER	0x24		/* interrupt enable (write-only) */
-#define ATMEL_TC_IDR	0x28		/* interrupt disable (write-only) */
-#define ATMEL_TC_IMR	0x2c		/* interrupt mask (read-only) */
-
-/* Status and IRQ flags */
-#define     ATMEL_TC_COVFS	(1 <<  0)	/* counter overflow */
-#define     ATMEL_TC_LOVRS	(1 <<  1)	/* load overrun */
-#define     ATMEL_TC_CPAS	(1 <<  2)	/* RA compare */
-#define     ATMEL_TC_CPBS	(1 <<  3)	/* RB compare */
-#define     ATMEL_TC_CPCS	(1 <<  4)	/* RC compare */
-#define     ATMEL_TC_LDRAS	(1 <<  5)	/* RA loading */
-#define     ATMEL_TC_LDRBS	(1 <<  6)	/* RB loading */
-#define     ATMEL_TC_ETRGS	(1 <<  7)	/* external trigger */
-#define     ATMEL_TC_ALL_IRQ	(ATMEL_TC_COVFS	| ATMEL_TC_LOVRS | \
-				 ATMEL_TC_CPAS | ATMEL_TC_CPBS | \
-				 ATMEL_TC_CPCS | ATMEL_TC_LDRAS | \
-				 ATMEL_TC_LDRBS | ATMEL_TC_ETRGS) \
-				 /* all IRQs */
-
-#endif
Index: linux-5.0.19-rt11/include/soc/at91/atmel_tcb.h
===================================================================
--- /dev/null
+++ linux-5.0.19-rt11/include/soc/at91/atmel_tcb.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4 @
+/*
+ * Timer/Counter Unit (TC) registers.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#ifndef __SOC_ATMEL_TCB_H
+#define __SOC_ATMEL_TCB_H
+
+#include <linux/compiler.h>
+#include <linux/list.h>
+
+/*
+ * Many 32-bit Atmel SOCs include one or more TC blocks, each of which holds
+ * three general-purpose 16-bit timers.  These timers share one register bank.
+ * Depending on the SOC, each timer may have its own clock and IRQ, or those
+ * may be shared by the whole TC block.
+ *
+ * These TC blocks may have up to nine external pins:  TCLK0..2 signals for
+ * clocks or clock gates, and per-timer TIOA and TIOB signals used for PWM
+ * or triggering.  Those pins need to be set up for use with the TC block,
+ * else they will be used as GPIOs or for a different controller.
+ *
+ * Although we expect each TC block to have a platform_device node, those
+ * nodes are not what drivers bind to.  Instead, they ask for a specific
+ * TC block, by number ... which is a common approach on systems with many
+ * timers.  Then they use clk_get() and platform_get_irq() to get clock and
+ * IRQ resources.
+ */
+
+struct clk;
+
+/**
+ * struct atmel_tcb_config - SoC data for a Timer/Counter Block
+ * @counter_width: size in bits of a timer counter register
+ */
+struct atmel_tcb_config {
+	size_t	counter_width;
+};
+
+/**
+ * struct atmel_tc - information about a Timer/Counter Block
+ * @pdev: physical device
+ * @regs: mapping through which the I/O registers can be accessed
+ * @id: block id
+ * @tcb_config: configuration data from SoC
+ * @irq: irq for each of the three channels
+ * @clk: internal clock source for each of the three channels
+ * @node: list node, for tclib internal use
+ * @allocated: if already used, for tclib internal use
+ *
+ * On some platforms, each TC channel has its own clocks and IRQs,
+ * while on others, all TC channels share the same clock and IRQ.
+ * Drivers should clk_enable() all the clocks they need even though
+ * all the entries in @clk may point to the same physical clock.
+ * Likewise, drivers should request irqs independently for each
+ * channel, but they must use IRQF_SHARED in case some of the entries
+ * in @irq are actually the same IRQ.
+ */
+struct atmel_tc {
+	struct platform_device	*pdev;
+	void __iomem		*regs;
+	int                     id;
+	const struct atmel_tcb_config *tcb_config;
+	int			irq[3];
+	struct clk		*clk[3];
+	struct clk		*slow_clk;
+	struct list_head	node;
+	bool			allocated;
+};
+
+extern struct atmel_tc *atmel_tc_alloc(unsigned block);
+extern void atmel_tc_free(struct atmel_tc *tc);
+
+/* platform-specific ATMEL_TC_TIMER_CLOCKx divisors (0 means 32KiHz) */
+extern const u8 atmel_tc_divisors[5];
+
+
+/*
+ * Two registers have block-wide controls.  These are: configuring the three
+ * "external" clocks (or event sources) used by the timer channels; and
+ * synchronizing the timers by resetting them all at once.
+ *
+ * "External" can mean "external to chip" using the TCLK0, TCLK1, or TCLK2
+ * signals.  Or, it can mean "external to timer", using the TIOA output from
+ * one of the other two timers that's being run in waveform mode.
+ */
+
+#define ATMEL_TC_BCR	0xc0		/* TC Block Control Register */
+#define     ATMEL_TC_SYNC	(1 << 0)	/* synchronize timers */
+
+#define ATMEL_TC_BMR	0xc4		/* TC Block Mode Register */
+#define     ATMEL_TC_TC0XC0S	(3 << 0)	/* external clock 0 source */
+#define        ATMEL_TC_TC0XC0S_TCLK0	(0 << 0)
+#define        ATMEL_TC_TC0XC0S_NONE	(1 << 0)
+#define        ATMEL_TC_TC0XC0S_TIOA1	(2 << 0)
+#define        ATMEL_TC_TC0XC0S_TIOA2	(3 << 0)
+#define     ATMEL_TC_TC1XC1S	(3 << 2)	/* external clock 1 source */
+#define        ATMEL_TC_TC1XC1S_TCLK1	(0 << 2)
+#define        ATMEL_TC_TC1XC1S_NONE	(1 << 2)
+#define        ATMEL_TC_TC1XC1S_TIOA0	(2 << 2)
+#define        ATMEL_TC_TC1XC1S_TIOA2	(3 << 2)
+#define     ATMEL_TC_TC2XC2S	(3 << 4)	/* external clock 2 source */
+#define        ATMEL_TC_TC2XC2S_TCLK2	(0 << 4)
+#define        ATMEL_TC_TC2XC2S_NONE	(1 << 4)
+#define        ATMEL_TC_TC2XC2S_TIOA0	(2 << 4)
+#define        ATMEL_TC_TC2XC2S_TIOA1	(3 << 4)
+
+
+/*
+ * Each TC block has three "channels", each with one counter and controls.
+ *
+ * Note that the semantics of ATMEL_TC_TIMER_CLOCKx (input clock selection
+ * when it's not "external") is silicon-specific.  AT91 platforms use one
+ * set of definitions; AVR32 platforms use a different set.  Don't hard-wire
+ * such knowledge into your code, use the global "atmel_tc_divisors" ...
+ * where index N is the divisor for clock N+1, else zero to indicate it uses
+ * the 32 KiHz clock.
+ *
+ * The timers can be chained in various ways, and operated in "waveform"
+ * generation mode (including PWM) or "capture" mode (to time events).  In
+ * both modes, behavior can be configured in many ways.
+ *
+ * Each timer has two I/O pins, TIOA and TIOB.  Waveform mode uses TIOA as a
+ * PWM output, and TIOB as either another PWM or as a trigger.  Capture mode
+ * uses them only as inputs.
+ */
+#define ATMEL_TC_CHAN(idx)	((idx)*0x40)
+#define ATMEL_TC_REG(idx, reg)	(ATMEL_TC_CHAN(idx) + ATMEL_TC_ ## reg)
+
+#define ATMEL_TC_CCR	0x00		/* Channel Control Register */
+#define     ATMEL_TC_CLKEN	(1 << 0)	/* clock enable */
+#define     ATMEL_TC_CLKDIS	(1 << 1)	/* clock disable */
+#define     ATMEL_TC_SWTRG	(1 << 2)	/* software trigger */
+
+#define ATMEL_TC_CMR	0x04		/* Channel Mode Register */
+
+/* Both modes share some CMR bits */
+#define     ATMEL_TC_TCCLKS	(7 << 0)	/* clock source */
+#define        ATMEL_TC_TIMER_CLOCK1	(0 << 0)
+#define        ATMEL_TC_TIMER_CLOCK2	(1 << 0)
+#define        ATMEL_TC_TIMER_CLOCK3	(2 << 0)
+#define        ATMEL_TC_TIMER_CLOCK4	(3 << 0)
+#define        ATMEL_TC_TIMER_CLOCK5	(4 << 0)
+#define        ATMEL_TC_XC0		(5 << 0)
+#define        ATMEL_TC_XC1		(6 << 0)
+#define        ATMEL_TC_XC2		(7 << 0)
+#define     ATMEL_TC_CLKI	(1 << 3)	/* clock invert */
+#define     ATMEL_TC_BURST	(3 << 4)	/* clock gating */
+#define        ATMEL_TC_GATE_NONE	(0 << 4)
+#define        ATMEL_TC_GATE_XC0	(1 << 4)
+#define        ATMEL_TC_GATE_XC1	(2 << 4)
+#define        ATMEL_TC_GATE_XC2	(3 << 4)
+#define     ATMEL_TC_WAVE	(1 << 15)	/* true = Waveform mode */
+
+/* CAPTURE mode CMR bits */
+#define     ATMEL_TC_LDBSTOP	(1 << 6)	/* counter stops on RB load */
+#define     ATMEL_TC_LDBDIS	(1 << 7)	/* counter disable on RB load */
+#define     ATMEL_TC_ETRGEDG	(3 << 8)	/* external trigger edge */
+#define        ATMEL_TC_ETRGEDG_NONE	(0 << 8)
+#define        ATMEL_TC_ETRGEDG_RISING	(1 << 8)
+#define        ATMEL_TC_ETRGEDG_FALLING	(2 << 8)
+#define        ATMEL_TC_ETRGEDG_BOTH	(3 << 8)
+#define     ATMEL_TC_ABETRG	(1 << 10)	/* external trigger is TIOA? */
+#define     ATMEL_TC_CPCTRG	(1 << 14)	/* RC compare trigger enable */
+#define     ATMEL_TC_LDRA	(3 << 16)	/* RA loading edge (of TIOA) */
+#define        ATMEL_TC_LDRA_NONE	(0 << 16)
+#define        ATMEL_TC_LDRA_RISING	(1 << 16)
+#define        ATMEL_TC_LDRA_FALLING	(2 << 16)
+#define        ATMEL_TC_LDRA_BOTH	(3 << 16)
+#define     ATMEL_TC_LDRB	(3 << 18)	/* RB loading edge (of TIOA) */
+#define        ATMEL_TC_LDRB_NONE	(0 << 18)
+#define        ATMEL_TC_LDRB_RISING	(1 << 18)
+#define        ATMEL_TC_LDRB_FALLING	(2 << 18)
+#define        ATMEL_TC_LDRB_BOTH	(3 << 18)
+
+/* WAVEFORM mode CMR bits */
+#define     ATMEL_TC_CPCSTOP	(1 <<  6)	/* RC compare stops counter */
+#define     ATMEL_TC_CPCDIS	(1 <<  7)	/* RC compare disables counter */
+#define     ATMEL_TC_EEVTEDG	(3 <<  8)	/* external event edge */
+#define        ATMEL_TC_EEVTEDG_NONE	(0 << 8)
+#define        ATMEL_TC_EEVTEDG_RISING	(1 << 8)
+#define        ATMEL_TC_EEVTEDG_FALLING	(2 << 8)
+#define        ATMEL_TC_EEVTEDG_BOTH	(3 << 8)
+#define     ATMEL_TC_EEVT	(3 << 10)	/* external event source */
+#define        ATMEL_TC_EEVT_TIOB	(0 << 10)
+#define        ATMEL_TC_EEVT_XC0	(1 << 10)
+#define        ATMEL_TC_EEVT_XC1	(2 << 10)
+#define        ATMEL_TC_EEVT_XC2	(3 << 10)
+#define     ATMEL_TC_ENETRG	(1 << 12)	/* external event is trigger */
+#define     ATMEL_TC_WAVESEL	(3 << 13)	/* waveform type */
+#define        ATMEL_TC_WAVESEL_UP	(0 << 13)
+#define        ATMEL_TC_WAVESEL_UPDOWN	(1 << 13)
+#define        ATMEL_TC_WAVESEL_UP_AUTO	(2 << 13)
+#define        ATMEL_TC_WAVESEL_UPDOWN_AUTO (3 << 13)
+#define     ATMEL_TC_ACPA	(3 << 16)	/* RA compare changes TIOA */
+#define        ATMEL_TC_ACPA_NONE	(0 << 16)
+#define        ATMEL_TC_ACPA_SET	(1 << 16)
+#define        ATMEL_TC_ACPA_CLEAR	(2 << 16)
+#define        ATMEL_TC_ACPA_TOGGLE	(3 << 16)
+#define     ATMEL_TC_ACPC	(3 << 18)	/* RC compare changes TIOA */
+#define        ATMEL_TC_ACPC_NONE	(0 << 18)
+#define        ATMEL_TC_ACPC_SET	(1 << 18)
+#define        ATMEL_TC_ACPC_CLEAR	(2 << 18)
+#define        ATMEL_TC_ACPC_TOGGLE	(3 << 18)
+#define     ATMEL_TC_AEEVT	(3 << 20)	/* external event changes TIOA */
+#define        ATMEL_TC_AEEVT_NONE	(0 << 20)
+#define        ATMEL_TC_AEEVT_SET	(1 << 20)
+#define        ATMEL_TC_AEEVT_CLEAR	(2 << 20)
+#define        ATMEL_TC_AEEVT_TOGGLE	(3 << 20)
+#define     ATMEL_TC_ASWTRG	(3 << 22)	/* software trigger changes TIOA */
+#define        ATMEL_TC_ASWTRG_NONE	(0 << 22)
+#define        ATMEL_TC_ASWTRG_SET	(1 << 22)
+#define        ATMEL_TC_ASWTRG_CLEAR	(2 << 22)
+#define        ATMEL_TC_ASWTRG_TOGGLE	(3 << 22)
+#define     ATMEL_TC_BCPB	(3 << 24)	/* RB compare changes TIOB */
+#define        ATMEL_TC_BCPB_NONE	(0 << 24)
+#define        ATMEL_TC_BCPB_SET	(1 << 24)
+#define        ATMEL_TC_BCPB_CLEAR	(2 << 24)
+#define        ATMEL_TC_BCPB_TOGGLE	(3 << 24)
+#define     ATMEL_TC_BCPC	(3 << 26)	/* RC compare changes TIOB */
+#define        ATMEL_TC_BCPC_NONE	(0 << 26)
+#define        ATMEL_TC_BCPC_SET	(1 << 26)
+#define        ATMEL_TC_BCPC_CLEAR	(2 << 26)
+#define        ATMEL_TC_BCPC_TOGGLE	(3 << 26)
+#define     ATMEL_TC_BEEVT	(3 << 28)	/* external event changes TIOB */
+#define        ATMEL_TC_BEEVT_NONE	(0 << 28)
+#define        ATMEL_TC_BEEVT_SET	(1 << 28)
+#define        ATMEL_TC_BEEVT_CLEAR	(2 << 28)
+#define        ATMEL_TC_BEEVT_TOGGLE	(3 << 28)
+#define     ATMEL_TC_BSWTRG	(3 << 30)	/* software trigger changes TIOB */
+#define        ATMEL_TC_BSWTRG_NONE	(0 << 30)
+#define        ATMEL_TC_BSWTRG_SET	(1 << 30)
+#define        ATMEL_TC_BSWTRG_CLEAR	(2 << 30)
+#define        ATMEL_TC_BSWTRG_TOGGLE	(3 << 30)
+
+#define ATMEL_TC_CV	0x10		/* counter Value */
+#define ATMEL_TC_RA	0x14		/* register A */
+#define ATMEL_TC_RB	0x18		/* register B */
+#define ATMEL_TC_RC	0x1c		/* register C */
+
+#define ATMEL_TC_SR	0x20		/* status (read-only) */
+/* Status-only flags */
+#define     ATMEL_TC_CLKSTA	(1 << 16)	/* clock enabled */
+#define     ATMEL_TC_MTIOA	(1 << 17)	/* TIOA mirror */
+#define     ATMEL_TC_MTIOB	(1 << 18)	/* TIOB mirror */
+
+#define ATMEL_TC_IER	0x24		/* interrupt enable (write-only) */
+#define ATMEL_TC_IDR	0x28		/* interrupt disable (write-only) */
+#define ATMEL_TC_IMR	0x2c		/* interrupt mask (read-only) */
+
+/* Status and IRQ flags */
+#define     ATMEL_TC_COVFS	(1 <<  0)	/* counter overflow */
+#define     ATMEL_TC_LOVRS	(1 <<  1)	/* load overrun */
+#define     ATMEL_TC_CPAS	(1 <<  2)	/* RA compare */
+#define     ATMEL_TC_CPBS	(1 <<  3)	/* RB compare */
+#define     ATMEL_TC_CPCS	(1 <<  4)	/* RC compare */
+#define     ATMEL_TC_LDRAS	(1 <<  5)	/* RA loading */
+#define     ATMEL_TC_LDRBS	(1 <<  6)	/* RB loading */
+#define     ATMEL_TC_ETRGS	(1 <<  7)	/* external trigger */
+#define     ATMEL_TC_ALL_IRQ	(ATMEL_TC_COVFS	| ATMEL_TC_LOVRS | \
+				 ATMEL_TC_CPAS | ATMEL_TC_CPBS | \
+				 ATMEL_TC_CPCS | ATMEL_TC_LDRAS | \
+				 ATMEL_TC_LDRBS | ATMEL_TC_ETRGS) \
+				 /* all IRQs */
+
+#endif
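
For orientation, a minimal sketch of how a driver might consume the header
added above; the block number, channel index, divisor and register values
are illustrative only and the error handling is abbreviated (no clock error
check, no IRQ setup), not taken from the patch:

  #include <linux/clk.h>
  #include <linux/errno.h>
  #include <linux/io.h>
  #include <soc/at91/atmel_tcb.h>

  /* Grab TC block 0, enable the clock of channel 0 and program the channel
   * for up-counting waveform mode.  A real driver would also pick a divisor
   * via atmel_tc_divisors[] and request tc->irq[0] with IRQF_SHARED. */
  static int example_tcb_setup(void)
  {
  	struct atmel_tc *tc = atmel_tc_alloc(0);

  	if (!tc)
  		return -ENODEV;

  	clk_prepare_enable(tc->clk[0]);

  	writel(ATMEL_TC_TIMER_CLOCK2 | ATMEL_TC_WAVE | ATMEL_TC_WAVESEL_UP,
  	       tc->regs + ATMEL_TC_REG(0, CMR));
  	writel(ATMEL_TC_CLKEN | ATMEL_TC_SWTRG,
  	       tc->regs + ATMEL_TC_REG(0, CCR));

  	return 0;
  }

  /* atmel_tc_free(tc) hands the block back when the driver is done. */
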
Index: linux-5.0.19-rt11/init/Kconfig
===================================================================
--- linux-5.0.19-rt11.orig/init/Kconfig
+++ linux-5.0.19-rt11/init/Kconfig
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:817 @ config CFS_BANDWIDTH
 config RT_GROUP_SCHED
 	bool "Group scheduling for SCHED_RR/FIFO"
 	depends on CGROUP_SCHED
+	depends on !PREEMPT_RT_FULL
 	default n
 	help
 	  This feature lets you explicitly allocate real CPU bandwidth
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1671 @ choice
 
 config SLAB
 	bool "SLAB"
+	depends on !PREEMPT_RT_FULL
 	select HAVE_HARDENED_USERCOPY_ALLOCATOR
 	help
 	  The regular slab allocator that is established and known to work
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1692 @ config SLUB
 config SLOB
 	depends on EXPERT
 	bool "SLOB (Simple Allocator)"
+	depends on !PREEMPT_RT_FULL
 	help
 	   SLOB replaces the stock allocator with a drastically simpler
 	   allocator. SLOB is generally more space efficient but
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1734 @ config SLAB_FREELIST_HARDENED
 
 config SLUB_CPU_PARTIAL
 	default y
-	depends on SLUB && SMP
+	depends on SLUB && SMP && !PREEMPT_RT_FULL
 	bool "SLUB per cpu partial cache"
 	help
 	  Per cpu partial caches accellerate objects allocation and freeing
Index: linux-5.0.19-rt11/init/Makefile
===================================================================
--- linux-5.0.19-rt11.orig/init/Makefile
+++ linux-5.0.19-rt11/init/Makefile
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:37 @ silent_chk_compile.h = :
 include/generated/compile.h: FORCE
 	@$($(quiet)chk_compile.h)
 	$(Q)$(CONFIG_SHELL) $(srctree)/scripts/mkcompile_h $@ \
-	"$(UTS_MACHINE)" "$(CONFIG_SMP)" "$(CONFIG_PREEMPT)" "$(CC) $(KBUILD_CFLAGS)"
+	"$(UTS_MACHINE)" "$(CONFIG_SMP)" "$(CONFIG_PREEMPT)" "$(CONFIG_PREEMPT_RT_FULL)" "$(CC) $(KBUILD_CFLAGS)"
Index: linux-5.0.19-rt11/init/init_task.c
===================================================================
--- linux-5.0.19-rt11.orig/init/init_task.c
+++ linux-5.0.19-rt11/init/init_task.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:53 @ static struct sighand_struct init_sighan
 	.signalfd_wqh	= __WAIT_QUEUE_HEAD_INITIALIZER(init_sighand.signalfd_wqh),
 };
 
+#if defined(CONFIG_POSIX_TIMERS) && defined(CONFIG_PREEMPT_RT_BASE)
+# define INIT_TIMER_LIST		.posix_timer_list = NULL,
+#else
+# define INIT_TIMER_LIST
+#endif
+
 /*
  * Set up the first task table, touch at your own risk!. Base=0,
  * limit=0x1fffff (=2MB)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:80 @ struct task_struct init_task
 	.static_prio	= MAX_PRIO - 20,
 	.normal_prio	= MAX_PRIO - 20,
 	.policy		= SCHED_NORMAL,
-	.cpus_allowed	= CPU_MASK_ALL,
+	.cpus_ptr	= &init_task.cpus_mask,
+	.cpus_mask	= CPU_MASK_ALL,
 	.nr_cpus_allowed= NR_CPUS,
 	.mm		= NULL,
 	.active_mm	= &init_mm,
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:128 @ struct task_struct init_task
 	INIT_CPU_TIMERS(init_task)
 	.pi_lock	= __RAW_SPIN_LOCK_UNLOCKED(init_task.pi_lock),
 	.timer_slack_ns = 50000, /* 50 usec default slack */
+	INIT_TIMER_LIST
 	.thread_pid	= &init_struct_pid,
 	.thread_group	= LIST_HEAD_INIT(init_task.thread_group),
 	.thread_node	= LIST_HEAD_INIT(init_signals.thread_head),
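
The cpus_allowed -> cpus_ptr/cpus_mask split above is what allows other parts
of this series to temporarily repoint a task's affinity (for example while
migration is disabled) without touching its real mask.  A hedged sketch of the
reader side; the helper name is illustrative:

  #include <linux/cpumask.h>
  #include <linux/sched.h>

  /* Readers of a task's affinity now follow cpus_ptr, which normally points
   * at the task's own cpus_mask, exactly as set up for init_task above. */
  static bool example_task_may_run_on(struct task_struct *p, int cpu)
  {
  	return cpumask_test_cpu(cpu, p->cpus_ptr);
  }
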
Index: linux-5.0.19-rt11/init/main.c
===================================================================
--- linux-5.0.19-rt11.orig/init/main.c
+++ linux-5.0.19-rt11/init/main.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:651 @ asmlinkage __visible void __init start_k
 	softirq_init();
 	timekeeping_init();
 	time_init();
-	printk_safe_init();
 	perf_event_init();
 	profile_init();
 	call_function_init();
Index: linux-5.0.19-rt11/kernel/Kconfig.locks
===================================================================
--- linux-5.0.19-rt11.orig/kernel/Kconfig.locks
+++ linux-5.0.19-rt11/kernel/Kconfig.locks
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:228 @ config ARCH_SUPPORTS_ATOMIC_RMW
 
 config MUTEX_SPIN_ON_OWNER
 	def_bool y
-	depends on SMP && ARCH_SUPPORTS_ATOMIC_RMW
+	depends on SMP && ARCH_SUPPORTS_ATOMIC_RMW && !PREEMPT_RT_FULL
 
 config RWSEM_SPIN_ON_OWNER
        def_bool y
-       depends on SMP && RWSEM_XCHGADD_ALGORITHM && ARCH_SUPPORTS_ATOMIC_RMW
+       depends on SMP && RWSEM_XCHGADD_ALGORITHM && ARCH_SUPPORTS_ATOMIC_RMW && !PREEMPT_RT_FULL
 
 config LOCK_SPIN_ON_OWNER
        def_bool y
Index: linux-5.0.19-rt11/kernel/Kconfig.preempt
===================================================================
--- linux-5.0.19-rt11.orig/kernel/Kconfig.preempt
+++ linux-5.0.19-rt11/kernel/Kconfig.preempt
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4 @
+config PREEMPT
+	bool
+	select PREEMPT_COUNT
+
+config PREEMPT_RT_BASE
+	bool
+	select PREEMPT
+
+config HAVE_PREEMPT_LAZY
+	bool
+
+config PREEMPT_LAZY
+	def_bool y if HAVE_PREEMPT_LAZY && PREEMPT_RT_FULL
 
 choice
 	prompt "Preemption Model"
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:50 @ config PREEMPT_VOLUNTARY
 
 	  Select this if you are building a kernel for a desktop system.
 
-config PREEMPT
+config PREEMPT__LL
 	bool "Preemptible Kernel (Low-Latency Desktop)"
 	depends on !ARCH_NO_PREEMPT
-	select PREEMPT_COUNT
+	select PREEMPT
 	select UNINLINE_SPIN_UNLOCK if !ARCH_INLINE_SPIN_UNLOCK
 	help
 	  This option reduces the latency of the kernel by making
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:70 @ config PREEMPT
 	  embedded system with latency requirements in the milliseconds
 	  range.
 
+config PREEMPT_RTB
+	bool "Preemptible Kernel (Basic RT)"
+	select PREEMPT_RT_BASE
+	help
+	  This option is basically the same as (Low-Latency Desktop) but
+	  additionally enables the changes that prepare for the fully
+	  preemptible RT kernel.
+
+config PREEMPT_RT_FULL
+	bool "Fully Preemptible Kernel (RT)"
+	depends on IRQ_FORCED_THREADING
+	select PREEMPT_RT_BASE
+	select PREEMPT_RCU
+	help
+	  All and everything: virtually the entire kernel becomes
+	  preemptible; spinlocks turn into priority-inheriting sleeping
+	  locks and interrupt handlers are forced into threads.
+
 endchoice
 
 config PREEMPT_COUNT
Index: linux-5.0.19-rt11/kernel/cgroup/cpuset.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/cgroup/cpuset.c
+++ linux-5.0.19-rt11/kernel/cgroup/cpuset.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:348 @ static struct cpuset top_cpuset = {
  */
 
 static DEFINE_MUTEX(cpuset_mutex);
-static DEFINE_SPINLOCK(callback_lock);
+static DEFINE_RAW_SPINLOCK(callback_lock);
 
 static struct workqueue_struct *cpuset_migrate_mm_wq;
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1223 @ static int update_parent_subparts_cpumas
 	 * Newly added CPUs will be removed from effective_cpus and
 	 * newly deleted ones will be added back to effective_cpus.
 	 */
-	spin_lock_irq(&callback_lock);
+	raw_spin_lock_irq(&callback_lock);
 	if (adding) {
 		cpumask_or(parent->subparts_cpus,
 			   parent->subparts_cpus, tmp->addmask);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1242 @ static int update_parent_subparts_cpumas
 	}
 
 	parent->nr_subparts_cpus = cpumask_weight(parent->subparts_cpus);
-	spin_unlock_irq(&callback_lock);
+	raw_spin_unlock_irq(&callback_lock);
 
 	return cmd == partcmd_update;
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1347 @ static void update_cpumasks_hier(struct
 			continue;
 		rcu_read_unlock();
 
-		spin_lock_irq(&callback_lock);
+		raw_spin_lock_irq(&callback_lock);
 
 		cpumask_copy(cp->effective_cpus, tmp->new_cpus);
 		if (cp->nr_subparts_cpus &&
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1378 @ static void update_cpumasks_hier(struct
 					= cpumask_weight(cp->subparts_cpus);
 			}
 		}
-		spin_unlock_irq(&callback_lock);
+		raw_spin_unlock_irq(&callback_lock);
 
 		WARN_ON(!is_in_v2_mode() &&
 			!cpumask_equal(cp->cpus_allowed, cp->effective_cpus));
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1496 @ static int update_cpumask(struct cpuset
 			return -EINVAL;
 	}
 
-	spin_lock_irq(&callback_lock);
+	raw_spin_lock_irq(&callback_lock);
 	cpumask_copy(cs->cpus_allowed, trialcs->cpus_allowed);
 
 	/*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1507 @ static int update_cpumask(struct cpuset
 			       cs->cpus_allowed);
 		cs->nr_subparts_cpus = cpumask_weight(cs->subparts_cpus);
 	}
-	spin_unlock_irq(&callback_lock);
+	raw_spin_unlock_irq(&callback_lock);
 
 	update_cpumasks_hier(cs, &tmp);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1701 @ static void update_nodemasks_hier(struct
 			continue;
 		rcu_read_unlock();
 
-		spin_lock_irq(&callback_lock);
+		raw_spin_lock_irq(&callback_lock);
 		cp->effective_mems = *new_mems;
-		spin_unlock_irq(&callback_lock);
+		raw_spin_unlock_irq(&callback_lock);
 
 		WARN_ON(!is_in_v2_mode() &&
 			!nodes_equal(cp->mems_allowed, cp->effective_mems));
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1771 @ static int update_nodemask(struct cpuset
 	if (retval < 0)
 		goto done;
 
-	spin_lock_irq(&callback_lock);
+	raw_spin_lock_irq(&callback_lock);
 	cs->mems_allowed = trialcs->mems_allowed;
-	spin_unlock_irq(&callback_lock);
+	raw_spin_unlock_irq(&callback_lock);
 
 	/* use trialcs->mems_allowed as a temp variable */
 	update_nodemasks_hier(cs, &trialcs->mems_allowed);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1864 @ static int update_flag(cpuset_flagbits_t
 	spread_flag_changed = ((is_spread_slab(cs) != is_spread_slab(trialcs))
 			|| (is_spread_page(cs) != is_spread_page(trialcs)));
 
-	spin_lock_irq(&callback_lock);
+	raw_spin_lock_irq(&callback_lock);
 	cs->flags = trialcs->flags;
-	spin_unlock_irq(&callback_lock);
+	raw_spin_unlock_irq(&callback_lock);
 
 	if (!cpumask_empty(trialcs->cpus_allowed) && balance_flag_changed)
 		rebuild_sched_domains_locked();
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2369 @ static int cpuset_common_seq_show(struct
 	cpuset_filetype_t type = seq_cft(sf)->private;
 	int ret = 0;
 
-	spin_lock_irq(&callback_lock);
+	raw_spin_lock_irq(&callback_lock);
 
 	switch (type) {
 	case FILE_CPULIST:
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2391 @ static int cpuset_common_seq_show(struct
 		ret = -EINVAL;
 	}
 
-	spin_unlock_irq(&callback_lock);
+	raw_spin_unlock_irq(&callback_lock);
 	return ret;
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2701 @ static int cpuset_css_online(struct cgro
 
 	cpuset_inc();
 
-	spin_lock_irq(&callback_lock);
+	raw_spin_lock_irq(&callback_lock);
 	if (is_in_v2_mode()) {
 		cpumask_copy(cs->effective_cpus, parent->effective_cpus);
 		cs->effective_mems = parent->effective_mems;
 		cs->use_parent_ecpus = true;
 		parent->child_ecpus_count++;
 	}
-	spin_unlock_irq(&callback_lock);
+	raw_spin_unlock_irq(&callback_lock);
 
 	if (!test_bit(CGRP_CPUSET_CLONE_CHILDREN, &css->cgroup->flags))
 		goto out_unlock;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2735 @ static int cpuset_css_online(struct cgro
 	}
 	rcu_read_unlock();
 
-	spin_lock_irq(&callback_lock);
+	raw_spin_lock_irq(&callback_lock);
 	cs->mems_allowed = parent->mems_allowed;
 	cs->effective_mems = parent->mems_allowed;
 	cpumask_copy(cs->cpus_allowed, parent->cpus_allowed);
 	cpumask_copy(cs->effective_cpus, parent->cpus_allowed);
-	spin_unlock_irq(&callback_lock);
+	raw_spin_unlock_irq(&callback_lock);
 out_unlock:
 	mutex_unlock(&cpuset_mutex);
 	return 0;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2793 @ static void cpuset_css_free(struct cgrou
 static void cpuset_bind(struct cgroup_subsys_state *root_css)
 {
 	mutex_lock(&cpuset_mutex);
-	spin_lock_irq(&callback_lock);
+	raw_spin_lock_irq(&callback_lock);
 
 	if (is_in_v2_mode()) {
 		cpumask_copy(top_cpuset.cpus_allowed, cpu_possible_mask);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2804 @ static void cpuset_bind(struct cgroup_su
 		top_cpuset.mems_allowed = top_cpuset.effective_mems;
 	}
 
-	spin_unlock_irq(&callback_lock);
+	raw_spin_unlock_irq(&callback_lock);
 	mutex_unlock(&cpuset_mutex);
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2818 @ static void cpuset_fork(struct task_stru
 	if (task_css_is_root(task, cpuset_cgrp_id))
 		return;
 
-	set_cpus_allowed_ptr(task, &current->cpus_allowed);
+	set_cpus_allowed_ptr(task, current->cpus_ptr);
 	task->mems_allowed = current->mems_allowed;
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2905 @ hotplug_update_tasks_legacy(struct cpuse
 {
 	bool is_empty;
 
-	spin_lock_irq(&callback_lock);
+	raw_spin_lock_irq(&callback_lock);
 	cpumask_copy(cs->cpus_allowed, new_cpus);
 	cpumask_copy(cs->effective_cpus, new_cpus);
 	cs->mems_allowed = *new_mems;
 	cs->effective_mems = *new_mems;
-	spin_unlock_irq(&callback_lock);
+	raw_spin_unlock_irq(&callback_lock);
 
 	/*
 	 * Don't call update_tasks_cpumask() if the cpuset becomes empty,
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2947 @ hotplug_update_tasks(struct cpuset *cs,
 	if (nodes_empty(*new_mems))
 		*new_mems = parent_cs(cs)->effective_mems;
 
-	spin_lock_irq(&callback_lock);
+	raw_spin_lock_irq(&callback_lock);
 	cpumask_copy(cs->effective_cpus, new_cpus);
 	cs->effective_mems = *new_mems;
-	spin_unlock_irq(&callback_lock);
+	raw_spin_unlock_irq(&callback_lock);
 
 	if (cpus_updated)
 		update_tasks_cpumask(cs);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3105 @ static void cpuset_hotplug_workfn(struct
 
 	/* synchronize cpus_allowed to cpu_active_mask */
 	if (cpus_updated) {
-		spin_lock_irq(&callback_lock);
+		raw_spin_lock_irq(&callback_lock);
 		if (!on_dfl)
 			cpumask_copy(top_cpuset.cpus_allowed, &new_cpus);
 		/*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3125 @ static void cpuset_hotplug_workfn(struct
 			}
 		}
 		cpumask_copy(top_cpuset.effective_cpus, &new_cpus);
-		spin_unlock_irq(&callback_lock);
+		raw_spin_unlock_irq(&callback_lock);
 		/* we don't mess with cpumasks of tasks in top_cpuset */
 	}
 
 	/* synchronize mems_allowed to N_MEMORY */
 	if (mems_updated) {
-		spin_lock_irq(&callback_lock);
+		raw_spin_lock_irq(&callback_lock);
 		if (!on_dfl)
 			top_cpuset.mems_allowed = new_mems;
 		top_cpuset.effective_mems = new_mems;
-		spin_unlock_irq(&callback_lock);
+		raw_spin_unlock_irq(&callback_lock);
 		update_tasks_nodemask(&top_cpuset);
 	}
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3236 @ void cpuset_cpus_allowed(struct task_str
 {
 	unsigned long flags;
 
-	spin_lock_irqsave(&callback_lock, flags);
+	raw_spin_lock_irqsave(&callback_lock, flags);
 	rcu_read_lock();
 	guarantee_online_cpus(task_cs(tsk), pmask);
 	rcu_read_unlock();
-	spin_unlock_irqrestore(&callback_lock, flags);
+	raw_spin_unlock_irqrestore(&callback_lock, flags);
 }
 
 void cpuset_cpus_allowed_fallback(struct task_struct *tsk)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3288 @ nodemask_t cpuset_mems_allowed(struct ta
 	nodemask_t mask;
 	unsigned long flags;
 
-	spin_lock_irqsave(&callback_lock, flags);
+	raw_spin_lock_irqsave(&callback_lock, flags);
 	rcu_read_lock();
 	guarantee_online_mems(task_cs(tsk), &mask);
 	rcu_read_unlock();
-	spin_unlock_irqrestore(&callback_lock, flags);
+	raw_spin_unlock_irqrestore(&callback_lock, flags);
 
 	return mask;
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3384 @ bool __cpuset_node_allowed(int node, gfp
 		return true;
 
 	/* Not hardwall and node outside mems_allowed: scan up cpusets */
-	spin_lock_irqsave(&callback_lock, flags);
+	raw_spin_lock_irqsave(&callback_lock, flags);
 
 	rcu_read_lock();
 	cs = nearest_hardwall_ancestor(task_cs(current));
 	allowed = node_isset(node, cs->mems_allowed);
 	rcu_read_unlock();
 
-	spin_unlock_irqrestore(&callback_lock, flags);
+	raw_spin_unlock_irqrestore(&callback_lock, flags);
 	return allowed;
 }
 
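
The mechanical spin_lock_* -> raw_spin_lock_* conversion above matters because
a spinlock_t becomes a sleeping lock under PREEMPT_RT_FULL, while callback_lock
is taken in paths that must stay truly atomic.  A conceptual, hedged sketch of
the semantics the raw variant preserves; the names are illustrative:

  #include <linux/spinlock.h>

  static DEFINE_RAW_SPINLOCK(example_lock);

  /* With a raw_spinlock_t, *_irqsave really disables hard interrupts and
   * preemption even on PREEMPT_RT_FULL, so the critical section must stay
   * short and must not call anything that can sleep. */
  static void example_atomic_update(unsigned long *counter)
  {
  	unsigned long flags;

  	raw_spin_lock_irqsave(&example_lock, flags);
  	(*counter)++;
  	raw_spin_unlock_irqrestore(&example_lock, flags);
  }
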
Index: linux-5.0.19-rt11/kernel/cgroup/rstat.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/cgroup/rstat.c
+++ linux-5.0.19-rt11/kernel/cgroup/rstat.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:162 @ static void cgroup_rstat_flush_locked(st
 		raw_spinlock_t *cpu_lock = per_cpu_ptr(&cgroup_rstat_cpu_lock,
 						       cpu);
 		struct cgroup *pos = NULL;
+		unsigned long flags;
 
-		raw_spin_lock(cpu_lock);
+		raw_spin_lock_irqsave(cpu_lock, flags);
 		while ((pos = cgroup_rstat_cpu_pop_updated(pos, cgrp, cpu))) {
 			struct cgroup_subsys_state *css;
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:176 @ static void cgroup_rstat_flush_locked(st
 				css->ss->css_rstat_flush(css, cpu);
 			rcu_read_unlock();
 		}
-		raw_spin_unlock(cpu_lock);
+		raw_spin_unlock_irqrestore(cpu_lock, flags);
 
 		/* if @may_sleep, play nice and yield if necessary */
 		if (may_sleep && (need_resched() ||
Index: linux-5.0.19-rt11/kernel/cpu.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/cpu.c
+++ linux-5.0.19-rt11/kernel/cpu.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:78 @ static DEFINE_PER_CPU(struct cpuhp_cpu_s
 	.fail = CPUHP_INVALID,
 };
 
+#if defined(CONFIG_HOTPLUG_CPU) && defined(CONFIG_PREEMPT_RT_FULL)
+static DEFINE_PER_CPU(struct rt_rw_lock, cpuhp_pin_lock) = \
+	__RWLOCK_RT_INITIALIZER(cpuhp_pin_lock);
+#endif
+
 #if defined(CONFIG_LOCKDEP) && defined(CONFIG_SMP)
 static struct lockdep_map cpuhp_state_up_map =
 	STATIC_LOCKDEP_MAP_INIT("cpuhp_state-up", &cpuhp_state_up_map);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:289 @ static int cpu_hotplug_disabled;
 
 #ifdef CONFIG_HOTPLUG_CPU
 
+/**
+ * pin_current_cpu - Prevent the current cpu from being unplugged
+ */
+void pin_current_cpu(void)
+{
+#ifdef CONFIG_PREEMPT_RT_FULL
+	struct rt_rw_lock *cpuhp_pin;
+	unsigned int cpu;
+	int ret;
+
+again:
+	cpuhp_pin = this_cpu_ptr(&cpuhp_pin_lock);
+	ret = __read_rt_trylock(cpuhp_pin);
+	if (ret) {
+		current->pinned_on_cpu = smp_processor_id();
+		return;
+	}
+	cpu = smp_processor_id();
+	preempt_lazy_enable();
+	preempt_enable();
+
+	__read_rt_lock(cpuhp_pin);
+
+	preempt_disable();
+	preempt_lazy_disable();
+	if (cpu != smp_processor_id()) {
+		__read_rt_unlock(cpuhp_pin);
+		goto again;
+	}
+	current->pinned_on_cpu = cpu;
+#endif
+}
+
+/**
+ * unpin_current_cpu - Allow unplug of current cpu
+ */
+void unpin_current_cpu(void)
+{
+#ifdef CONFIG_PREEMPT_RT_FULL
+	struct rt_rw_lock *cpuhp_pin = this_cpu_ptr(&cpuhp_pin_lock);
+
+	if (WARN_ON(current->pinned_on_cpu != smp_processor_id()))
+		cpuhp_pin = per_cpu_ptr(&cpuhp_pin_lock, current->pinned_on_cpu);
+
+	current->pinned_on_cpu = -1;
+	__read_rt_unlock(cpuhp_pin);
+#endif
+}
+
 DEFINE_STATIC_PERCPU_RWSEM(cpu_hotplug_lock);
 
 void cpus_read_lock(void)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:924 @ static int take_cpu_down(void *_param)
 
 static int takedown_cpu(unsigned int cpu)
 {
+#ifdef CONFIG_PREEMPT_RT_FULL
+	struct rt_rw_lock *cpuhp_pin = per_cpu_ptr(&cpuhp_pin_lock, cpu);
+#endif
 	struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
 	int err;
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:939 @ static int takedown_cpu(unsigned int cpu
 	 */
 	irq_lock_sparse();
 
+#ifdef CONFIG_PREEMPT_RT_FULL
+	__write_rt_lock(cpuhp_pin);
+#endif
+
 	/*
 	 * So now all preempt/rcu users must observe !cpu_active().
 	 */
 	err = stop_machine_cpuslocked(take_cpu_down, NULL, cpumask_of(cpu));
 	if (err) {
+#ifdef CONFIG_PREEMPT_RT_FULL
+		__write_rt_unlock(cpuhp_pin);
+#endif
 		/* CPU refused to die */
 		irq_unlock_sparse();
 		/* Unpark the hotplug thread so we can rollback there */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:969 @ static int takedown_cpu(unsigned int cpu
 	wait_for_ap_thread(st, false);
 	BUG_ON(st->state != CPUHP_AP_IDLE_DEAD);
 
+#ifdef CONFIG_PREEMPT_RT_FULL
+	__write_rt_unlock(cpuhp_pin);
+#endif
 	/* Interrupts are moved away from the dying cpu, reenable alloc/free */
 	irq_unlock_sparse();
 
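
pin_current_cpu()/unpin_current_cpu() above keep the current CPU from being
unplugged by taking the per-CPU cpuhp_pin_lock for reading, while
takedown_cpu() takes it for writing around stop_machine().  The snippet below
only illustrates the pairing contract and is not presented as a driver-facing
API by the patch; the declarations are assumed to be in scope:

  /* pin_current_cpu() may block until a concurrent unplug finishes and
   * re-pins on whatever CPU the task ends up on; unpin_current_cpu() drops
   * the pin again.  On !PREEMPT_RT_FULL both helpers are empty. */
  static void example_pinned_section(void)
  {
  	pin_current_cpu();
  	/* ... work that must not see this CPU being unplugged ... */
  	unpin_current_cpu();
  }
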
Index: linux-5.0.19-rt11/kernel/events/core.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/events/core.c
+++ linux-5.0.19-rt11/kernel/events/core.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1105 @ static void __perf_mux_hrtimer_init(stru
 	cpuctx->hrtimer_interval = ns_to_ktime(NSEC_PER_MSEC * interval);
 
 	raw_spin_lock_init(&cpuctx->hrtimer_lock);
-	hrtimer_init(timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_PINNED);
+	hrtimer_init(timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_PINNED_HARD);
 	timer->function = perf_mux_hrtimer_handler;
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:9248 @ static void perf_swevent_init_hrtimer(st
 	if (!is_sampling_event(event))
 		return;
 
-	hrtimer_init(&hwc->hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+	hrtimer_init(&hwc->hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
 	hwc->hrtimer.function = perf_swevent_hrtimer;
 
 	/*
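
The switch to the *_HARD hrtimer modes above keeps the perf mux and software
event timers expiring from hard interrupt context on PREEMPT_RT_FULL instead
of being deferred to the softirq-based expiry path.  A hedged sketch of the
same idiom for an arbitrary timer; the names are illustrative:

  #include <linux/hrtimer.h>
  #include <linux/ktime.h>

  static struct hrtimer example_timer;

  /* Runs in hard interrupt context even on RT, so it must not sleep. */
  static enum hrtimer_restart example_timer_fn(struct hrtimer *t)
  {
  	return HRTIMER_NORESTART;
  }

  static void example_timer_start(void)
  {
  	hrtimer_init(&example_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
  	example_timer.function = example_timer_fn;
  	hrtimer_start(&example_timer, ms_to_ktime(10), HRTIMER_MODE_REL_HARD);
  }
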
Index: linux-5.0.19-rt11/kernel/exit.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/exit.c
+++ linux-5.0.19-rt11/kernel/exit.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:163 @ static void __exit_signal(struct task_st
 	 * Do this under ->siglock, we can race with another thread
 	 * doing sigqueue_free() if we have SIGQUEUE_PREALLOC signals.
 	 */
-	flush_sigqueue(&tsk->pending);
+	flush_task_sigqueue(tsk);
 	tsk->sighand = NULL;
 	spin_unlock(&sighand->siglock);
 
Index: linux-5.0.19-rt11/kernel/fork.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/fork.c
+++ linux-5.0.19-rt11/kernel/fork.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:43 @
 #include <linux/hmm.h>
 #include <linux/fs.h>
 #include <linux/mm.h>
+#include <linux/kprobes.h>
 #include <linux/vmacache.h>
 #include <linux/nsproxy.h>
 #include <linux/capability.h>
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:683 @ void __mmdrop(struct mm_struct *mm)
 }
 EXPORT_SYMBOL_GPL(__mmdrop);
 
+#ifdef CONFIG_PREEMPT_RT_BASE
+/*
+ * RCU callback for delayed mm drop. Not strictly rcu, but we don't
+ * want another facility to make this work.
+ */
+void __mmdrop_delayed(struct rcu_head *rhp)
+{
+	struct mm_struct *mm = container_of(rhp, struct mm_struct, delayed_drop);
+
+	__mmdrop(mm);
+}
+#endif
+
 static void mmdrop_async_fn(struct work_struct *work)
 {
 	struct mm_struct *mm;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:730 @ static inline void put_signal_struct(str
 	if (atomic_dec_and_test(&sig->sigcnt))
 		free_signal_struct(sig);
 }
-
+#ifdef CONFIG_PREEMPT_RT_BASE
+static
+#endif
 void __put_task_struct(struct task_struct *tsk)
 {
 	WARN_ON(!tsk->exit_state);
 	WARN_ON(atomic_read(&tsk->usage));
 	WARN_ON(tsk == current);
 
+	/*
+	 * Remove function-return probe instances associated with this
+	 * task and put them back on the free list.
+	 */
+	kprobe_flush_task(tsk);
+
+	/* Task is done with its stack. */
+	put_task_stack(tsk);
+
 	cgroup_free(tsk);
 	task_numa_free(tsk);
 	security_task_free(tsk);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:758 @ void __put_task_struct(struct task_struc
 	if (!profile_handoff_task(tsk))
 		free_task(tsk);
 }
+#ifndef CONFIG_PREEMPT_RT_BASE
 EXPORT_SYMBOL_GPL(__put_task_struct);
+#else
+void __put_task_struct_cb(struct rcu_head *rhp)
+{
+	struct task_struct *tsk = container_of(rhp, struct task_struct, put_rcu);
+
+	__put_task_struct(tsk);
+}
+EXPORT_SYMBOL_GPL(__put_task_struct_cb);
+#endif
 
 void __init __weak arch_task_cache_init(void) { }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:930 @ static struct task_struct *dup_task_stru
 #ifdef CONFIG_STACKPROTECTOR
 	tsk->stack_canary = get_random_canary();
 #endif
+	if (orig->cpus_ptr == &orig->cpus_mask)
+		tsk->cpus_ptr = &tsk->cpus_mask;
 
 	/*
 	 * One for us, one for whoever does the "release_task()" (usually
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:944 @ static struct task_struct *dup_task_stru
 	tsk->splice_pipe = NULL;
 	tsk->task_frag.page = NULL;
 	tsk->wake_q.next = NULL;
+	tsk->wake_q_sleeper.next = NULL;
 
 	account_kernel_stack(tsk, 1);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1671 @ static void rt_mutex_init_task(struct ta
  */
 static void posix_cpu_timers_init(struct task_struct *tsk)
 {
+#ifdef CONFIG_PREEMPT_RT_BASE
+	tsk->posix_timer_list = NULL;
+#endif
 	tsk->cputime_expires.prof_exp = 0;
 	tsk->cputime_expires.virt_exp = 0;
 	tsk->cputime_expires.sched_exp = 0;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1876 @ static __latent_entropy struct task_stru
 	spin_lock_init(&p->alloc_lock);
 
 	init_sigpending(&p->pending);
+	p->sigqueue_cache = NULL;
 
 	p->utime = p->stime = p->gtime = 0;
 #ifdef CONFIG_ARCH_HAS_SCALED_CPUTIME
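
With __put_task_struct() made static for PREEMPT_RT_BASE and
__put_task_struct_cb() exported instead, the final free of a task_struct can
be pushed into an RCU callback rather than running in whatever (possibly
atomic) context dropped the last reference.  A hedged sketch of the kind of
wrapper this enables; the wrapper itself is illustrative, not quoted from the
patch:

  #include <linux/rcupdate.h>
  #include <linux/sched.h>
  #include <linux/sched/task.h>

  /* Drop the last reference without freeing synchronously; the actual
   * teardown then happens from __put_task_struct_cb() in RCU callback
   * context. */
  static inline void example_put_task_struct(struct task_struct *t)
  {
  	if (atomic_dec_and_test(&t->usage))
  		call_rcu(&t->put_rcu, __put_task_struct_cb);
  }
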
Index: linux-5.0.19-rt11/kernel/futex.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/futex.c
+++ linux-5.0.19-rt11/kernel/futex.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:246 @ struct futex_q {
 	struct plist_node list;
 
 	struct task_struct *task;
-	spinlock_t *lock_ptr;
+	raw_spinlock_t *lock_ptr;
 	union futex_key key;
 	struct futex_pi_state *pi_state;
 	struct rt_mutex_waiter *rt_waiter;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:267 @ static const struct futex_q futex_q_init
  */
 struct futex_hash_bucket {
 	atomic_t waiters;
-	spinlock_t lock;
+	raw_spinlock_t lock;
 	struct plist_head chain;
 } ____cacheline_aligned_in_smp;
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:833 @ static void get_pi_state(struct futex_pi
  * Drops a reference to the pi_state object and frees or caches it
  * when the last reference is gone.
  */
-static void put_pi_state(struct futex_pi_state *pi_state)
+static struct futex_pi_state *__put_pi_state(struct futex_pi_state *pi_state)
 {
 	if (!pi_state)
-		return;
+		return NULL;
 
 	if (!atomic_dec_and_test(&pi_state->refcount))
-		return;
+		return NULL;
 
 	/*
 	 * If pi_state->owner is NULL, the owner is most probably dying
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:859 @ static void put_pi_state(struct futex_pi
 		raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
 	}
 
-	if (current->pi_state_cache) {
-		kfree(pi_state);
-	} else {
+	if (!current->pi_state_cache) {
 		/*
 		 * pi_state->list is already empty.
 		 * clear pi_state->owner.
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:868 @ static void put_pi_state(struct futex_pi
 		pi_state->owner = NULL;
 		atomic_set(&pi_state->refcount, 1);
 		current->pi_state_cache = pi_state;
+		pi_state = NULL;
+	}
+	return pi_state;
+}
+
+static void put_pi_state(struct futex_pi_state *pi_state)
+{
+	kfree(__put_pi_state(pi_state));
+}
+
+static void put_pi_state_atomic(struct futex_pi_state *pi_state,
+				struct list_head *to_free)
+{
+	if (__put_pi_state(pi_state))
+		list_add(&pi_state->list, to_free);
+}
+
+static void free_pi_state_list(struct list_head *to_free)
+{
+	struct futex_pi_state *p, *next;
+
+	list_for_each_entry_safe(p, next, to_free, list) {
+		list_del(&p->list);
+		kfree(p);
 	}
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:908 @ void exit_pi_state_list(struct task_stru
 	struct futex_pi_state *pi_state;
 	struct futex_hash_bucket *hb;
 	union futex_key key = FUTEX_KEY_INIT;
+	LIST_HEAD(to_free);
 
 	if (!futex_cmpxchg_enabled)
 		return;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:942 @ void exit_pi_state_list(struct task_stru
 		}
 		raw_spin_unlock_irq(&curr->pi_lock);
 
-		spin_lock(&hb->lock);
+		raw_spin_lock(&hb->lock);
 		raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
 		raw_spin_lock(&curr->pi_lock);
 		/*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:952 @ void exit_pi_state_list(struct task_stru
 		if (head->next != next) {
 			/* retain curr->pi_lock for the loop invariant */
 			raw_spin_unlock(&pi_state->pi_mutex.wait_lock);
-			spin_unlock(&hb->lock);
-			put_pi_state(pi_state);
+			raw_spin_unlock(&hb->lock);
+			put_pi_state_atomic(pi_state, &to_free);
 			continue;
 		}
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:964 @ void exit_pi_state_list(struct task_stru
 
 		raw_spin_unlock(&curr->pi_lock);
 		raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
-		spin_unlock(&hb->lock);
+		raw_spin_unlock(&hb->lock);
 
 		rt_mutex_futex_unlock(&pi_state->pi_mutex);
 		put_pi_state(pi_state);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:972 @ void exit_pi_state_list(struct task_stru
 		raw_spin_lock_irq(&curr->pi_lock);
 	}
 	raw_spin_unlock_irq(&curr->pi_lock);
+
+	free_pi_state_list(&to_free);
 }
 
 #endif
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1510 @ static int wake_futex_pi(u32 __user *uad
 	struct task_struct *new_owner;
 	bool postunlock = false;
 	DEFINE_WAKE_Q(wake_q);
+	DEFINE_WAKE_Q(wake_sleeper_q);
 	int ret = 0;
 
 	new_owner = rt_mutex_next_owner(&pi_state->pi_mutex);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1570 @ static int wake_futex_pi(u32 __user *uad
 	pi_state->owner = new_owner;
 	raw_spin_unlock(&new_owner->pi_lock);
 
-	postunlock = __rt_mutex_futex_unlock(&pi_state->pi_mutex, &wake_q);
-
+	postunlock = __rt_mutex_futex_unlock(&pi_state->pi_mutex, &wake_q,
+					     &wake_sleeper_q);
 out_unlock:
 	raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
 
 	if (postunlock)
-		rt_mutex_postunlock(&wake_q);
+		rt_mutex_postunlock(&wake_q, &wake_sleeper_q);
 
 	return ret;
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1588 @ static inline void
 double_lock_hb(struct futex_hash_bucket *hb1, struct futex_hash_bucket *hb2)
 {
 	if (hb1 <= hb2) {
-		spin_lock(&hb1->lock);
+		raw_spin_lock(&hb1->lock);
 		if (hb1 < hb2)
-			spin_lock_nested(&hb2->lock, SINGLE_DEPTH_NESTING);
+			raw_spin_lock_nested(&hb2->lock, SINGLE_DEPTH_NESTING);
 	} else { /* hb1 > hb2 */
-		spin_lock(&hb2->lock);
-		spin_lock_nested(&hb1->lock, SINGLE_DEPTH_NESTING);
+		raw_spin_lock(&hb2->lock);
+		raw_spin_lock_nested(&hb1->lock, SINGLE_DEPTH_NESTING);
 	}
 }
 
 static inline void
 double_unlock_hb(struct futex_hash_bucket *hb1, struct futex_hash_bucket *hb2)
 {
-	spin_unlock(&hb1->lock);
+	raw_spin_unlock(&hb1->lock);
 	if (hb1 != hb2)
-		spin_unlock(&hb2->lock);
+		raw_spin_unlock(&hb2->lock);
 }
 
 /*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1630 @ futex_wake(u32 __user *uaddr, unsigned i
 	if (!hb_waiters_pending(hb))
 		goto out_put_key;
 
-	spin_lock(&hb->lock);
+	raw_spin_lock(&hb->lock);
 
 	plist_for_each_entry_safe(this, next, &hb->chain, list) {
 		if (match_futex (&this->key, &key)) {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1649 @ futex_wake(u32 __user *uaddr, unsigned i
 		}
 	}
 
-	spin_unlock(&hb->lock);
+	raw_spin_unlock(&hb->lock);
 	wake_up_q(&wake_q);
 out_put_key:
 	put_futex_key(&key);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1956 @ static int futex_requeue(u32 __user *uad
 	struct futex_hash_bucket *hb1, *hb2;
 	struct futex_q *this, *next;
 	DEFINE_WAKE_Q(wake_q);
+	LIST_HEAD(to_free);
 
 	if (nr_wake < 0 || nr_requeue < 0)
 		return -EINVAL;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2194 @ retry_private:
 				 * object.
 				 */
 				this->pi_state = NULL;
-				put_pi_state(pi_state);
+				put_pi_state_atomic(pi_state, &to_free);
 				/*
 				 * We stop queueing more waiters and let user
 				 * space deal with the mess.
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2211 @ retry_private:
 	 * in futex_proxy_trylock_atomic() or in lookup_pi_state(). We
 	 * need to drop it here again.
 	 */
-	put_pi_state(pi_state);
+	put_pi_state_atomic(pi_state, &to_free);
 
 out_unlock:
 	double_unlock_hb(hb1, hb2);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2232 @ out_put_keys:
 out_put_key1:
 	put_futex_key(&key1);
 out:
+	free_pi_state_list(&to_free);
 	return ret ? ret : task_count;
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2256 @ static inline struct futex_hash_bucket *
 
 	q->lock_ptr = &hb->lock;
 
-	spin_lock(&hb->lock);
+	raw_spin_lock(&hb->lock);
 	return hb;
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2264 @ static inline void
 queue_unlock(struct futex_hash_bucket *hb)
 	__releases(&hb->lock)
 {
-	spin_unlock(&hb->lock);
+	raw_spin_unlock(&hb->lock);
 	hb_waiters_dec(hb);
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2303 @ static inline void queue_me(struct futex
 	__releases(&hb->lock)
 {
 	__queue_me(q, hb);
-	spin_unlock(&hb->lock);
+	raw_spin_unlock(&hb->lock);
 }
 
 /**
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2319 @ static inline void queue_me(struct futex
  */
 static int unqueue_me(struct futex_q *q)
 {
-	spinlock_t *lock_ptr;
+	raw_spinlock_t *lock_ptr;
 	int ret = 0;
 
 	/* In the common case we don't take the spinlock, which is nice. */
 retry:
 	/*
-	 * q->lock_ptr can change between this read and the following spin_lock.
-	 * Use READ_ONCE to forbid the compiler from reloading q->lock_ptr and
-	 * optimizing lock_ptr out of the logic below.
+	 * q->lock_ptr can change between this read and the following
+	 * raw_spin_lock. Use READ_ONCE to forbid the compiler from reloading
+	 * q->lock_ptr and optimizing lock_ptr out of the logic below.
 	 */
 	lock_ptr = READ_ONCE(q->lock_ptr);
 	if (lock_ptr != NULL) {
-		spin_lock(lock_ptr);
+		raw_spin_lock(lock_ptr);
 		/*
 		 * q->lock_ptr can change between reading it and
-		 * spin_lock(), causing us to take the wrong lock.  This
+		 * raw_spin_lock(), causing us to take the wrong lock.  This
 		 * corrects the race condition.
 		 *
 		 * Reasoning goes like this: if we have the wrong lock,
 		 * q->lock_ptr must have changed (maybe several times)
-		 * between reading it and the spin_lock().  It can
-		 * change again after the spin_lock() but only if it was
-		 * already changed before the spin_lock().  It cannot,
+		 * between reading it and the raw_spin_lock().  It can
+		 * change again after the raw_spin_lock() but only if it was
+		 * already changed before the raw_spin_lock().  It cannot,
 		 * however, change back to the original value.  Therefore
 		 * we can detect whether we acquired the correct lock.
 		 */
 		if (unlikely(lock_ptr != q->lock_ptr)) {
-			spin_unlock(lock_ptr);
+			raw_spin_unlock(lock_ptr);
 			goto retry;
 		}
 		__unqueue_futex(q);
 
 		BUG_ON(q->pi_state);
 
-		spin_unlock(lock_ptr);
+		raw_spin_unlock(lock_ptr);
 		ret = 1;
 	}
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2369 @ retry:
 static void unqueue_me_pi(struct futex_q *q)
 	__releases(q->lock_ptr)
 {
+	struct futex_pi_state *ps;
+
 	__unqueue_futex(q);
 
 	BUG_ON(!q->pi_state);
-	put_pi_state(q->pi_state);
+	ps = __put_pi_state(q->pi_state);
 	q->pi_state = NULL;
 
-	spin_unlock(q->lock_ptr);
+	raw_spin_unlock(q->lock_ptr);
+	kfree(ps);
 }
 
 static int fixup_pi_state_owner(u32 __user *uaddr, struct futex_q *q,
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2511 @ retry:
 	 */
 handle_err:
 	raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
-	spin_unlock(q->lock_ptr);
+	raw_spin_unlock(q->lock_ptr);
 
 	switch (err) {
 	case -EFAULT:
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2529 @ handle_err:
 		break;
 	}
 
-	spin_lock(q->lock_ptr);
+	raw_spin_lock(q->lock_ptr);
 	raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
 
 	/*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2625 @ static void futex_wait_queue_me(struct f
 	/*
 	 * The task state is guaranteed to be set before another task can
 	 * wake it. set_current_state() is implemented using smp_store_mb() and
-	 * queue_me() calls spin_unlock() upon completion, both serializing
+	 * queue_me() calls raw_spin_unlock() upon completion, both serializing
 	 * access to the hash list and forcing another memory barrier.
 	 */
 	set_current_state(TASK_INTERRUPTIBLE);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2743 @ static int futex_wait(u32 __user *uaddr,
 	if (abs_time) {
 		to = &timeout;
 
-		hrtimer_init_on_stack(&to->timer, (flags & FLAGS_CLOCKRT) ?
-				      CLOCK_REALTIME : CLOCK_MONOTONIC,
-				      HRTIMER_MODE_ABS);
-		hrtimer_init_sleeper(to, current);
+		hrtimer_init_sleeper_on_stack(to, (flags & FLAGS_CLOCKRT) ?
+					      CLOCK_REALTIME : CLOCK_MONOTONIC,
+					      HRTIMER_MODE_ABS, current);
 		hrtimer_set_expires_range_ns(&to->timer, *abs_time,
 					     current->timer_slack_ns);
 	}
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2844 @ static int futex_lock_pi(u32 __user *uad
 
 	if (time) {
 		to = &timeout;
-		hrtimer_init_on_stack(&to->timer, CLOCK_REALTIME,
-				      HRTIMER_MODE_ABS);
-		hrtimer_init_sleeper(to, current);
+		hrtimer_init_sleeper_on_stack(to, CLOCK_REALTIME,
+					      HRTIMER_MODE_ABS, current);
 		hrtimer_set_expires(&to->timer, *time);
 	}
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2900 @ retry_private:
 		goto no_block;
 	}
 
-	rt_mutex_init_waiter(&rt_waiter);
+	rt_mutex_init_waiter(&rt_waiter, false);
 
 	/*
 	 * On PREEMPT_RT_FULL, when hb->lock becomes an rt_mutex, we must not
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2916 @ retry_private:
 	 * before __rt_mutex_start_proxy_lock() is done.
 	 */
 	raw_spin_lock_irq(&q.pi_state->pi_mutex.wait_lock);
-	spin_unlock(q.lock_ptr);
+	raw_spin_unlock(q.lock_ptr);
 	/*
 	 * __rt_mutex_start_proxy_lock() unconditionally enqueues the @rt_waiter
 	 * such that futex_unlock_pi() is guaranteed to observe the waiter when
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2937 @ retry_private:
 	ret = rt_mutex_wait_proxy_lock(&q.pi_state->pi_mutex, to, &rt_waiter);
 
 cleanup:
-	spin_lock(q.lock_ptr);
+	raw_spin_lock(q.lock_ptr);
 	/*
 	 * If we failed to acquire the lock (deadlock/signal/timeout), we must
 	 * first acquire the hb->lock before removing the lock from the
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3038 @ retry:
 		return ret;
 
 	hb = hash_futex(&key);
-	spin_lock(&hb->lock);
+	raw_spin_lock(&hb->lock);
 
 	/*
 	 * Check waiters first. We do not trust user space values at
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3072 @ retry:
 		 * rt_waiter. Also see the WARN in wake_futex_pi().
 		 */
 		raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
-		spin_unlock(&hb->lock);
+		raw_spin_unlock(&hb->lock);
 
 		/* drops pi_state->pi_mutex.wait_lock */
 		ret = wake_futex_pi(uaddr, uval, pi_state);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3111 @ retry:
 	 * owner.
 	 */
 	if ((ret = cmpxchg_futex_value_locked(&curval, uaddr, uval, 0))) {
-		spin_unlock(&hb->lock);
+		raw_spin_unlock(&hb->lock);
 		switch (ret) {
 		case -EFAULT:
 			goto pi_faulted;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3131 @ retry:
 	ret = (curval == uval) ? 0 : -EAGAIN;
 
 out_unlock:
-	spin_unlock(&hb->lock);
+	raw_spin_unlock(&hb->lock);
 out_putkey:
 	put_futex_key(&key);
 	return ret;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3263 @ static int futex_wait_requeue_pi(u32 __u
 
 	if (abs_time) {
 		to = &timeout;
-		hrtimer_init_on_stack(&to->timer, (flags & FLAGS_CLOCKRT) ?
-				      CLOCK_REALTIME : CLOCK_MONOTONIC,
-				      HRTIMER_MODE_ABS);
-		hrtimer_init_sleeper(to, current);
+		hrtimer_init_sleeper_on_stack(to, (flags & FLAGS_CLOCKRT) ?
+					      CLOCK_REALTIME : CLOCK_MONOTONIC,
+					      HRTIMER_MODE_ABS, current);
 		hrtimer_set_expires_range_ns(&to->timer, *abs_time,
 					     current->timer_slack_ns);
 	}
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3274 @ static int futex_wait_requeue_pi(u32 __u
 	 * The waiter is allocated on our stack, manipulated by the requeue
 	 * code while we sleep on uaddr.
 	 */
-	rt_mutex_init_waiter(&rt_waiter);
+	rt_mutex_init_waiter(&rt_waiter, false);
 
 	ret = get_futex_key(uaddr2, flags & FLAGS_SHARED, &key2, FUTEX_WRITE);
 	if (unlikely(ret != 0))
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3305 @ static int futex_wait_requeue_pi(u32 __u
 	/* Queue the futex_q, drop the hb lock, wait for wakeup. */
 	futex_wait_queue_me(hb, &q, to);
 
-	spin_lock(&hb->lock);
+	raw_spin_lock(&hb->lock);
 	ret = handle_early_requeue_pi_wakeup(hb, &q, &key2, to);
-	spin_unlock(&hb->lock);
+	raw_spin_unlock(&hb->lock);
 	if (ret)
 		goto out_put_keys;
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3327 @ static int futex_wait_requeue_pi(u32 __u
 		 * did a lock-steal - fix up the PI-state in that case.
 		 */
 		if (q.pi_state && (q.pi_state->owner != current)) {
-			spin_lock(q.lock_ptr);
+			struct futex_pi_state *ps_free;
+
+			raw_spin_lock(q.lock_ptr);
 			ret = fixup_pi_state_owner(uaddr2, &q, current);
 			if (ret && rt_mutex_owner(&q.pi_state->pi_mutex) == current) {
 				pi_state = q.pi_state;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3339 @ static int futex_wait_requeue_pi(u32 __u
 			 * Drop the reference to the pi state which
 			 * the requeue_pi() code acquired for us.
 			 */
-			put_pi_state(q.pi_state);
-			spin_unlock(q.lock_ptr);
+			ps_free = __put_pi_state(q.pi_state);
+			raw_spin_unlock(q.lock_ptr);
+			kfree(ps_free);
 		}
 	} else {
 		struct rt_mutex *pi_mutex;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3355 @ static int futex_wait_requeue_pi(u32 __u
 		pi_mutex = &q.pi_state->pi_mutex;
 		ret = rt_mutex_wait_proxy_lock(pi_mutex, to, &rt_waiter);
 
-		spin_lock(q.lock_ptr);
+		raw_spin_lock(q.lock_ptr);
 		if (ret && !rt_mutex_cleanup_proxy_lock(pi_mutex, &rt_waiter))
 			ret = 0;
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3980 @ static int __init futex_init(void)
 	for (i = 0; i < futex_hashsize; i++) {
 		atomic_set(&futex_queues[i].waiters, 0);
 		plist_head_init(&futex_queues[i].chain);
-		spin_lock_init(&futex_queues[i].lock);
+		raw_spin_lock_init(&futex_queues[i].lock);
 	}
 
 	return 0;
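
A minimal sketch of the conversion pattern applied throughout kernel/futex.c
above: the hunks release hb->lock while pi_mutex.wait_lock (a raw lock) is
held with interrupts off, which a sleeping spinlock_t cannot do on
PREEMPT_RT_FULL, so the bucket lock becomes a raw_spinlock_t. The layout
below assumes the mainline 5.0 struct futex_hash_bucket; queue_example() is
hypothetical.

struct futex_hash_bucket {
	atomic_t waiters;
	raw_spinlock_t lock;		/* was: spinlock_t lock; */
	struct plist_head chain;
} ____cacheline_aligned_in_smp;

static void queue_example(struct futex_hash_bucket *hb)
{
	raw_spin_lock(&hb->lock);	/* was: spin_lock(&hb->lock); */
	/* ... add/remove futex_q entries on hb->chain ... */
	raw_spin_unlock(&hb->lock);
}
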
Index: linux-5.0.19-rt11/kernel/irq/Kconfig
===================================================================
--- linux-5.0.19-rt11.orig/kernel/irq/Kconfig
+++ linux-5.0.19-rt11/kernel/irq/Kconfig
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:94 @ config GENERIC_MSI_IRQ_DOMAIN
 	select IRQ_DOMAIN_HIERARCHY
 	select GENERIC_MSI_IRQ
 
+config IRQ_MSI_IOMMU
+	bool
+
 config HANDLE_DOMAIN_IRQ
 	bool
 
Index: linux-5.0.19-rt11/kernel/irq/handle.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/irq/handle.c
+++ linux-5.0.19-rt11/kernel/irq/handle.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:188 @ irqreturn_t handle_irq_event_percpu(stru
 {
 	irqreturn_t retval;
 	unsigned int flags = 0;
+	struct pt_regs *regs = get_irq_regs();
+	u64 ip = regs ? instruction_pointer(regs) : 0;
 
 	retval = __handle_irq_event_percpu(desc, &flags);
 
-	add_interrupt_randomness(desc->irq_data.irq, flags);
+#ifdef CONFIG_PREEMPT_RT_FULL
+	desc->random_ip = ip;
+#else
+	add_interrupt_randomness(desc->irq_data.irq, flags, ip);
+#endif
 
 	if (!noirqdebug)
 		note_interrupt(desc, retval);
Index: linux-5.0.19-rt11/kernel/irq/manage.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/irq/manage.c
+++ linux-5.0.19-rt11/kernel/irq/manage.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:26 @
 #include "internals.h"
 
 #ifdef CONFIG_IRQ_FORCED_THREADING
+# ifndef CONFIG_PREEMPT_RT_BASE
 __read_mostly bool force_irqthreads;
 EXPORT_SYMBOL_GPL(force_irqthreads);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:36 @ static int __init setup_forced_irqthread
 	return 0;
 }
 early_param("threadirqs", setup_forced_irqthreads);
+# endif
 #endif
 
 static void __synchronize_hardirq(struct irq_desc *desc)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1039 @ static int irq_thread(void *data)
 		if (action_ret == IRQ_WAKE_THREAD)
 			irq_wake_secondary(desc, action);
 
+#ifdef CONFIG_PREEMPT_RT_FULL
+		migrate_disable();
+		add_interrupt_randomness(action->irq, 0,
+				 desc->random_ip ^ (unsigned long) action);
+		migrate_enable();
+#endif
 		wake_threads_waitq(desc);
 	}
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2240 @ EXPORT_SYMBOL_GPL(irq_get_irqchip_state)
  *	This call sets the internal irqchip state of an interrupt,
  *	depending on the value of @which.
  *
- *	This function should be called with preemption disabled if the
+ *	This function should be called with migration disabled if the
  *	interrupt controller has per-cpu registers.
  */
 int irq_set_irqchip_state(unsigned int irq, enum irqchip_irq_state which,
Index: linux-5.0.19-rt11/kernel/irq/spurious.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/irq/spurious.c
+++ linux-5.0.19-rt11/kernel/irq/spurious.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:445 @ MODULE_PARM_DESC(noirqdebug, "Disable ir
 
 static int __init irqfixup_setup(char *str)
 {
+#ifdef CONFIG_PREEMPT_RT_BASE
+	pr_warn("irqfixup boot option not supported w/ CONFIG_PREEMPT_RT_BASE\n");
+	return 1;
+#endif
 	irqfixup = 1;
 	printk(KERN_WARNING "Misrouted IRQ fixup support enabled.\n");
 	printk(KERN_WARNING "This may impact system performance.\n");
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:461 @ module_param(irqfixup, int, 0644);
 
 static int __init irqpoll_setup(char *str)
 {
+#ifdef CONFIG_PREEMPT_RT_BASE
+	pr_warn("irqpoll boot option not supported w/ CONFIG_PREEMPT_RT_BASE\n");
+	return 1;
+#endif
 	irqfixup = 2;
 	printk(KERN_WARNING "Misrouted IRQ fixup and polling support "
 				"enabled\n");
Index: linux-5.0.19-rt11/kernel/irq_work.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/irq_work.c
+++ linux-5.0.19-rt11/kernel/irq_work.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:20 @
 #include <linux/cpu.h>
 #include <linux/notifier.h>
 #include <linux/smp.h>
+#include <linux/interrupt.h>
 #include <asm/processor.h>
 
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:63 @ void __weak arch_irq_work_raise(void)
 /* Enqueue on current CPU, work must already be claimed and preempt disabled */
 static void __irq_work_queue_local(struct irq_work *work)
 {
+	struct llist_head *list;
+	bool lazy_work, realtime = IS_ENABLED(CONFIG_PREEMPT_RT_FULL);
+
+	lazy_work = work->flags & IRQ_WORK_LAZY;
+
 	/* If the work is "lazy", handle it from next tick if any */
-	if (work->flags & IRQ_WORK_LAZY) {
-		if (llist_add(&work->llnode, this_cpu_ptr(&lazy_list)) &&
-		    tick_nohz_tick_stopped())
-			arch_irq_work_raise();
-	} else {
-		if (llist_add(&work->llnode, this_cpu_ptr(&raised_list)))
+	if (lazy_work || (realtime && !(work->flags & IRQ_WORK_HARD_IRQ)))
+		list = this_cpu_ptr(&lazy_list);
+	else
+		list = this_cpu_ptr(&raised_list);
+
+	if (llist_add(&work->llnode, list)) {
+		if (!lazy_work || tick_nohz_tick_stopped())
 			arch_irq_work_raise();
 	}
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:117 @ bool irq_work_queue_on(struct irq_work *
 
 	preempt_disable();
 	if (cpu != smp_processor_id()) {
+		struct llist_head *list;
+
 		/* Arch remote IPI send/receive backend aren't NMI safe */
 		WARN_ON_ONCE(in_nmi());
-		if (llist_add(&work->llnode, &per_cpu(raised_list, cpu)))
+		if (IS_ENABLED(CONFIG_PREEMPT_RT_FULL) && !(work->flags & IRQ_WORK_HARD_IRQ))
+			list = &per_cpu(lazy_list, cpu);
+		else
+			list = &per_cpu(raised_list, cpu);
+
+		if (llist_add(&work->llnode, list))
 			arch_send_call_function_single_ipi(cpu);
 	} else {
 		__irq_work_queue_local(work);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:145 @ bool irq_work_needs_cpu(void)
 	raised = this_cpu_ptr(&raised_list);
 	lazy = this_cpu_ptr(&lazy_list);
 
-	if (llist_empty(raised) || arch_irq_work_has_interrupt())
-		if (llist_empty(lazy))
-			return false;
+	if (llist_empty(raised) && llist_empty(lazy))
+		return false;
 
 	/* All work should have been flushed before going offline */
 	WARN_ON_ONCE(cpu_is_offline(smp_processor_id()));
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:160 @ static void irq_work_run_list(struct lli
 	struct llist_node *llnode;
 	unsigned long flags;
 
+#ifndef CONFIG_PREEMPT_RT_FULL
+	/*
+	 * nort: On RT IRQ-work may run in SOFTIRQ context.
+	 */
 	BUG_ON(!irqs_disabled());
-
+#endif
 	if (llist_empty(list))
 		return;
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:197 @ static void irq_work_run_list(struct lli
 void irq_work_run(void)
 {
 	irq_work_run_list(this_cpu_ptr(&raised_list));
-	irq_work_run_list(this_cpu_ptr(&lazy_list));
+	if (IS_ENABLED(CONFIG_PREEMPT_RT_FULL)) {
+		/*
+		 * NOTE: we raise softirq via IPI for safety,
+		 * and execute in irq_work_tick() to move the
+		 * overhead from hard to soft irq context.
+		 */
+		if (!llist_empty(this_cpu_ptr(&lazy_list)))
+			raise_softirq(TIMER_SOFTIRQ);
+	} else
+		irq_work_run_list(this_cpu_ptr(&lazy_list));
 }
 EXPORT_SYMBOL_GPL(irq_work_run);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:216 @ void irq_work_tick(void)
 
 	if (!llist_empty(raised) && !arch_irq_work_has_interrupt())
 		irq_work_run_list(raised);
+
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT_FULL))
+		irq_work_run_list(this_cpu_ptr(&lazy_list));
+}
+
+#if defined(CONFIG_IRQ_WORK) && defined(CONFIG_PREEMPT_RT_FULL)
+void irq_work_tick_soft(void)
+{
 	irq_work_run_list(this_cpu_ptr(&lazy_list));
 }
+#endif
 
 /*
  * Synchronize against the irq_work @entry, ensures the entry is not
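
With the changes above, PREEMPT_RT_FULL routes every irq_work item that is
not explicitly flagged IRQ_WORK_HARD_IRQ through lazy_list and runs it from
the timer softirq via irq_work_tick_soft(). A hedged usage sketch; the
callback and variable names are hypothetical, the static initializer style
follows existing users such as kernel/printk/printk.c:

#include <linux/irq_work.h>

static void my_hard_cb(struct irq_work *work)
{
	/* must stay in hard interrupt context, even on RT */
}

static struct irq_work my_hard_work = {
	.flags	= IRQ_WORK_HARD_IRQ,
	.func	= my_hard_cb,
};

static void kick_example(void)
{
	/* the flag keeps the item on raised_list on PREEMPT_RT_FULL */
	irq_work_queue(&my_hard_work);
}
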
Index: linux-5.0.19-rt11/kernel/kexec_core.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/kexec_core.c
+++ linux-5.0.19-rt11/kernel/kexec_core.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:975 @ void crash_kexec(struct pt_regs *regs)
 	old_cpu = atomic_cmpxchg(&panic_cpu, PANIC_CPU_INVALID, this_cpu);
 	if (old_cpu == PANIC_CPU_INVALID) {
 		/* This is the 1st CPU which comes here, so go ahead. */
-		printk_safe_flush_on_panic();
 		__crash_kexec(regs);
 
 		/*
Index: linux-5.0.19-rt11/kernel/ksysfs.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/ksysfs.c
+++ linux-5.0.19-rt11/kernel/ksysfs.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:143 @ KERNEL_ATTR_RO(vmcoreinfo);
 
 #endif /* CONFIG_CRASH_CORE */
 
+#if defined(CONFIG_PREEMPT_RT_FULL)
+static ssize_t realtime_show(struct kobject *kobj,
+			     struct kobj_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%d\n", 1);
+}
+KERNEL_ATTR_RO(realtime);
+#endif
+
 /* whether file capabilities are enabled */
 static ssize_t fscaps_show(struct kobject *kobj,
 				  struct kobj_attribute *attr, char *buf)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:243 @ static struct attribute * kernel_attrs[]
 	&rcu_expedited_attr.attr,
 	&rcu_normal_attr.attr,
 #endif
+#ifdef CONFIG_PREEMPT_RT_FULL
+	&realtime_attr.attr,
+#endif
 	NULL
 };
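
The new attribute gives userspace a trivial probe for an RT kernel:
/sys/kernel/realtime only exists when CONFIG_PREEMPT_RT_FULL is enabled and
always reads as "1". A hypothetical userspace check:

#include <stdio.h>

static int kernel_is_preempt_rt_full(void)
{
	FILE *f = fopen("/sys/kernel/realtime", "r");
	int rt = 0;

	if (f) {
		if (fscanf(f, "%d", &rt) != 1)
			rt = 0;
		fclose(f);
	}
	return rt == 1;
}
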
 
Index: linux-5.0.19-rt11/kernel/kthread.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/kthread.c
+++ linux-5.0.19-rt11/kernel/kthread.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:602 @ void __kthread_init_worker(struct kthrea
 				struct lock_class_key *key)
 {
 	memset(worker, 0, sizeof(struct kthread_worker));
-	spin_lock_init(&worker->lock);
+	raw_spin_lock_init(&worker->lock);
 	lockdep_set_class_and_name(&worker->lock, key, name);
 	INIT_LIST_HEAD(&worker->work_list);
 	INIT_LIST_HEAD(&worker->delayed_work_list);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:644 @ repeat:
 
 	if (kthread_should_stop()) {
 		__set_current_state(TASK_RUNNING);
-		spin_lock_irq(&worker->lock);
+		raw_spin_lock_irq(&worker->lock);
 		worker->task = NULL;
-		spin_unlock_irq(&worker->lock);
+		raw_spin_unlock_irq(&worker->lock);
 		return 0;
 	}
 
 	work = NULL;
-	spin_lock_irq(&worker->lock);
+	raw_spin_lock_irq(&worker->lock);
 	if (!list_empty(&worker->work_list)) {
 		work = list_first_entry(&worker->work_list,
 					struct kthread_work, node);
 		list_del_init(&work->node);
 	}
 	worker->current_work = work;
-	spin_unlock_irq(&worker->lock);
+	raw_spin_unlock_irq(&worker->lock);
 
 	if (work) {
 		__set_current_state(TASK_RUNNING);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:815 @ bool kthread_queue_work(struct kthread_w
 	bool ret = false;
 	unsigned long flags;
 
-	spin_lock_irqsave(&worker->lock, flags);
+	raw_spin_lock_irqsave(&worker->lock, flags);
 	if (!queuing_blocked(worker, work)) {
 		kthread_insert_work(worker, work, &worker->work_list);
 		ret = true;
 	}
-	spin_unlock_irqrestore(&worker->lock, flags);
+	raw_spin_unlock_irqrestore(&worker->lock, flags);
 	return ret;
 }
 EXPORT_SYMBOL_GPL(kthread_queue_work);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:838 @ void kthread_delayed_work_timer_fn(struc
 	struct kthread_delayed_work *dwork = from_timer(dwork, t, timer);
 	struct kthread_work *work = &dwork->work;
 	struct kthread_worker *worker = work->worker;
+	unsigned long flags;
 
 	/*
 	 * This might happen when a pending work is reinitialized.
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:847 @ void kthread_delayed_work_timer_fn(struc
 	if (WARN_ON_ONCE(!worker))
 		return;
 
-	spin_lock(&worker->lock);
+	raw_spin_lock_irqsave(&worker->lock, flags);
 	/* Work must not be used with >1 worker, see kthread_queue_work(). */
 	WARN_ON_ONCE(work->worker != worker);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:856 @ void kthread_delayed_work_timer_fn(struc
 	list_del_init(&work->node);
 	kthread_insert_work(worker, work, &worker->work_list);
 
-	spin_unlock(&worker->lock);
+	raw_spin_unlock_irqrestore(&worker->lock, flags);
 }
 EXPORT_SYMBOL(kthread_delayed_work_timer_fn);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:912 @ bool kthread_queue_delayed_work(struct k
 	unsigned long flags;
 	bool ret = false;
 
-	spin_lock_irqsave(&worker->lock, flags);
+	raw_spin_lock_irqsave(&worker->lock, flags);
 
 	if (!queuing_blocked(worker, work)) {
 		__kthread_queue_delayed_work(worker, dwork, delay);
 		ret = true;
 	}
 
-	spin_unlock_irqrestore(&worker->lock, flags);
+	raw_spin_unlock_irqrestore(&worker->lock, flags);
 	return ret;
 }
 EXPORT_SYMBOL_GPL(kthread_queue_delayed_work);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:955 @ void kthread_flush_work(struct kthread_w
 	if (!worker)
 		return;
 
-	spin_lock_irq(&worker->lock);
+	raw_spin_lock_irq(&worker->lock);
 	/* Work must not be used with >1 worker, see kthread_queue_work(). */
 	WARN_ON_ONCE(work->worker != worker);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:967 @ void kthread_flush_work(struct kthread_w
 	else
 		noop = true;
 
-	spin_unlock_irq(&worker->lock);
+	raw_spin_unlock_irq(&worker->lock);
 
 	if (!noop)
 		wait_for_completion(&fwork.done);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1000 @ static bool __kthread_cancel_work(struct
 		 * any queuing is blocked by setting the canceling counter.
 		 */
 		work->canceling++;
-		spin_unlock_irqrestore(&worker->lock, *flags);
+		raw_spin_unlock_irqrestore(&worker->lock, *flags);
 		del_timer_sync(&dwork->timer);
-		spin_lock_irqsave(&worker->lock, *flags);
+		raw_spin_lock_irqsave(&worker->lock, *flags);
 		work->canceling--;
 	}
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1049 @ bool kthread_mod_delayed_work(struct kth
 	unsigned long flags;
 	int ret = false;
 
-	spin_lock_irqsave(&worker->lock, flags);
+	raw_spin_lock_irqsave(&worker->lock, flags);
 
 	/* Do not bother with canceling when never queued. */
 	if (!work->worker)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1066 @ bool kthread_mod_delayed_work(struct kth
 fast_queue:
 	__kthread_queue_delayed_work(worker, dwork, delay);
 out:
-	spin_unlock_irqrestore(&worker->lock, flags);
+	raw_spin_unlock_irqrestore(&worker->lock, flags);
 	return ret;
 }
 EXPORT_SYMBOL_GPL(kthread_mod_delayed_work);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1080 @ static bool __kthread_cancel_work_sync(s
 	if (!worker)
 		goto out;
 
-	spin_lock_irqsave(&worker->lock, flags);
+	raw_spin_lock_irqsave(&worker->lock, flags);
 	/* Work must not be used with >1 worker, see kthread_queue_work(). */
 	WARN_ON_ONCE(work->worker != worker);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1094 @ static bool __kthread_cancel_work_sync(s
 	 * In the meantime, block any queuing by setting the canceling counter.
 	 */
 	work->canceling++;
-	spin_unlock_irqrestore(&worker->lock, flags);
+	raw_spin_unlock_irqrestore(&worker->lock, flags);
 	kthread_flush_work(work);
-	spin_lock_irqsave(&worker->lock, flags);
+	raw_spin_lock_irqsave(&worker->lock, flags);
 	work->canceling--;
 
 out_fast:
-	spin_unlock_irqrestore(&worker->lock, flags);
+	raw_spin_unlock_irqrestore(&worker->lock, flags);
 out:
 	return ret;
 }
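
Converting worker->lock to a raw_spinlock_t keeps the kthread_work queueing
paths usable from contexts that must not take sleeping locks on RT. A
hedged sketch; my_worker, my_work and the handler are hypothetical, and the
handler is assumed to be a primary (non-threaded) interrupt handler:

#include <linux/interrupt.h>
#include <linux/kthread.h>

static struct kthread_worker *my_worker;
static struct kthread_work my_work;

static irqreturn_t my_primary_handler(int irq, void *dev_id)
{
	/* only worker->lock (raw) is taken here, so this stays legal on RT */
	kthread_queue_work(my_worker, &my_work);
	return IRQ_HANDLED;
}
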
Index: linux-5.0.19-rt11/kernel/locking/Makefile
===================================================================
--- linux-5.0.19-rt11.orig/kernel/locking/Makefile
+++ linux-5.0.19-rt11/kernel/locking/Makefile
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:6 @
 # and is generally not a function of system call inputs.
 KCOV_INSTRUMENT		:= n
 
-obj-y += mutex.o semaphore.o rwsem.o percpu-rwsem.o
+obj-y += semaphore.o percpu-rwsem.o
 
 ifdef CONFIG_FUNCTION_TRACER
 CFLAGS_REMOVE_lockdep.o = $(CC_FLAGS_FTRACE)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:15 @ CFLAGS_REMOVE_mutex-debug.o = $(CC_FLAGS
 CFLAGS_REMOVE_rtmutex-debug.o = $(CC_FLAGS_FTRACE)
 endif
 
+ifneq ($(CONFIG_PREEMPT_RT_FULL),y)
+obj-y += mutex.o
 obj-$(CONFIG_DEBUG_MUTEXES) += mutex-debug.o
+endif
+obj-y += rwsem.o
 obj-$(CONFIG_LOCKDEP) += lockdep.o
 ifeq ($(CONFIG_PROC_FS),y)
 obj-$(CONFIG_LOCKDEP) += lockdep_proc.o
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:32 @ obj-$(CONFIG_RT_MUTEXES) += rtmutex.o
 obj-$(CONFIG_DEBUG_RT_MUTEXES) += rtmutex-debug.o
 obj-$(CONFIG_DEBUG_SPINLOCK) += spinlock.o
 obj-$(CONFIG_DEBUG_SPINLOCK) += spinlock_debug.o
+ifneq ($(CONFIG_PREEMPT_RT_FULL),y)
 obj-$(CONFIG_RWSEM_GENERIC_SPINLOCK) += rwsem-spinlock.o
 obj-$(CONFIG_RWSEM_XCHGADD_ALGORITHM) += rwsem-xadd.o
+endif
+obj-$(CONFIG_PREEMPT_RT_FULL) += mutex-rt.o rwsem-rt.o rwlock-rt.o
 obj-$(CONFIG_QUEUED_RWLOCKS) += qrwlock.o
 obj-$(CONFIG_LOCK_TORTURE_TEST) += locktorture.o
 obj-$(CONFIG_WW_MUTEX_SELFTEST) += test-ww_mutex.o
Index: linux-5.0.19-rt11/kernel/locking/lockdep.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/locking/lockdep.c
+++ linux-5.0.19-rt11/kernel/locking/lockdep.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:702 @ look_up_lock_class(const struct lockdep_
 			 * Huh! same key, different name? Did someone trample
 			 * on some memory? We're most confused.
 			 */
-			WARN_ON_ONCE(class->name != lock->name);
+			WARN_ON_ONCE(class->name != lock->name &&
+				     lock->key != &__lockdep_no_validate__);
 			return class;
 		}
 	}
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3771 @ static void check_flags(unsigned long fl
 		}
 	}
 
+#ifndef CONFIG_PREEMPT_RT_FULL
 	/*
 	 * We dont accurately track softirq state in e.g.
 	 * hardirq contexts (such as on 4KSTACKS), so only
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3786 @ static void check_flags(unsigned long fl
 			DEBUG_LOCKS_WARN_ON(!current->softirqs_enabled);
 		}
 	}
+#endif
 
 	if (!debug_locks)
 		print_irqtrace_events(current);
Index: linux-5.0.19-rt11/kernel/locking/locktorture.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/locking/locktorture.c
+++ linux-5.0.19-rt11/kernel/locking/locktorture.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:32 @
 #include <linux/kthread.h>
 #include <linux/sched/rt.h>
 #include <linux/spinlock.h>
-#include <linux/rwlock.h>
 #include <linux/mutex.h>
 #include <linux/rwsem.h>
 #include <linux/smp.h>
Index: linux-5.0.19-rt11/kernel/locking/mutex-rt.c
===================================================================
--- /dev/null
+++ linux-5.0.19-rt11/kernel/locking/mutex-rt.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4 @
+/*
+ * kernel/locking/mutex-rt.c (formerly kernel/rt.c)
+ *
+ * Real-Time Preemption Support
+ *
+ * started by Ingo Molnar:
+ *
+ *  Copyright (C) 2004-2006 Red Hat, Inc., Ingo Molnar <mingo@redhat.com>
+ *  Copyright (C) 2006, Timesys Corp., Thomas Gleixner <tglx@timesys.com>
+ *
+ * historic credit for proving that Linux spinlocks can be implemented via
+ * RT-aware mutexes goes to many people: The Pmutex project (Dirk Grambow
+ * and others) who prototyped it on 2.4 and did lots of comparative
+ * research and analysis; TimeSys, for proving that you can implement a
+ * fully preemptible kernel via the use of IRQ threading and mutexes;
+ * Bill Huey for persuasively arguing on lkml that the mutex model is the
+ * right one; and to MontaVista, who ported pmutexes to 2.6.
+ *
+ * This code is a from-scratch implementation and is not based on pmutexes,
+ * but the idea of converting spinlocks to mutexes is used here too.
+ *
+ * lock debugging, locking tree, deadlock detection:
+ *
+ *  Copyright (C) 2004, LynuxWorks, Inc., Igor Manyilov, Bill Huey
+ *  Released under the General Public License (GPL).
+ *
+ * Includes portions of the generic R/W semaphore implementation from:
+ *
+ *  Copyright (c) 2001   David Howells (dhowells@redhat.com).
+ *  - Derived partially from idea by Andrea Arcangeli <andrea@suse.de>
+ *  - Derived also from comments by Linus
+ *
+ * Pending ownership of locks and ownership stealing:
+ *
+ *  Copyright (C) 2005, Kihon Technologies Inc., Steven Rostedt
+ *
+ *   (also by Steven Rostedt)
+ *    - Converted single pi_lock to individual task locks.
+ *
+ * By Esben Nielsen:
+ *    Doing priority inheritance with help of the scheduler.
+ *
+ *  Copyright (C) 2006, Timesys Corp., Thomas Gleixner <tglx@timesys.com>
+ *  - major rework based on Esben Nielsen's initial patch
+ *  - replaced thread_info references by task_struct refs
+ *  - removed task->pending_owner dependency
+ *  - BKL drop/reacquire for semaphore style locks to avoid deadlocks
+ *    in the scheduler return path as discussed with Steven Rostedt
+ *
+ *  Copyright (C) 2006, Kihon Technologies Inc.
+ *    Steven Rostedt <rostedt@goodmis.org>
+ *  - debugged and patched Thomas Gleixner's rework.
+ *  - added back the cmpxchg to the rework.
+ *  - turned atomic require back on for SMP.
+ */
+
+#include <linux/spinlock.h>
+#include <linux/rtmutex.h>
+#include <linux/sched.h>
+#include <linux/delay.h>
+#include <linux/module.h>
+#include <linux/kallsyms.h>
+#include <linux/syscalls.h>
+#include <linux/interrupt.h>
+#include <linux/plist.h>
+#include <linux/fs.h>
+#include <linux/futex.h>
+#include <linux/hrtimer.h>
+
+#include "rtmutex_common.h"
+
+/*
+ * struct mutex functions
+ */
+void __mutex_do_init(struct mutex *mutex, const char *name,
+		     struct lock_class_key *key)
+{
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+	/*
+	 * Make sure we are not reinitializing a held lock:
+	 */
+	debug_check_no_locks_freed((void *)mutex, sizeof(*mutex));
+	lockdep_init_map(&mutex->dep_map, name, key, 0);
+#endif
+	mutex->lock.save_state = 0;
+}
+EXPORT_SYMBOL(__mutex_do_init);
+
+void __lockfunc _mutex_lock(struct mutex *lock)
+{
+	mutex_acquire(&lock->dep_map, 0, 0, _RET_IP_);
+	__rt_mutex_lock_state(&lock->lock, TASK_UNINTERRUPTIBLE);
+}
+EXPORT_SYMBOL(_mutex_lock);
+
+void __lockfunc _mutex_lock_io(struct mutex *lock)
+{
+	int token;
+
+	token = io_schedule_prepare();
+	_mutex_lock(lock);
+	io_schedule_finish(token);
+}
+EXPORT_SYMBOL_GPL(_mutex_lock_io);
+
+int __lockfunc _mutex_lock_interruptible(struct mutex *lock)
+{
+	int ret;
+
+	mutex_acquire(&lock->dep_map, 0, 0, _RET_IP_);
+	ret = __rt_mutex_lock_state(&lock->lock, TASK_INTERRUPTIBLE);
+	if (ret)
+		mutex_release(&lock->dep_map, 1, _RET_IP_);
+	return ret;
+}
+EXPORT_SYMBOL(_mutex_lock_interruptible);
+
+int __lockfunc _mutex_lock_killable(struct mutex *lock)
+{
+	int ret;
+
+	mutex_acquire(&lock->dep_map, 0, 0, _RET_IP_);
+	ret = __rt_mutex_lock_state(&lock->lock, TASK_KILLABLE);
+	if (ret)
+		mutex_release(&lock->dep_map, 1, _RET_IP_);
+	return ret;
+}
+EXPORT_SYMBOL(_mutex_lock_killable);
+
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+void __lockfunc _mutex_lock_nested(struct mutex *lock, int subclass)
+{
+	mutex_acquire_nest(&lock->dep_map, subclass, 0, NULL, _RET_IP_);
+	__rt_mutex_lock_state(&lock->lock, TASK_UNINTERRUPTIBLE);
+}
+EXPORT_SYMBOL(_mutex_lock_nested);
+
+void __lockfunc _mutex_lock_io_nested(struct mutex *lock, int subclass)
+{
+	int token;
+
+	token = io_schedule_prepare();
+
+	mutex_acquire_nest(&lock->dep_map, subclass, 0, NULL, _RET_IP_);
+	__rt_mutex_lock_state(&lock->lock, TASK_UNINTERRUPTIBLE);
+
+	io_schedule_finish(token);
+}
+EXPORT_SYMBOL_GPL(_mutex_lock_io_nested);
+
+void __lockfunc _mutex_lock_nest_lock(struct mutex *lock, struct lockdep_map *nest)
+{
+	mutex_acquire_nest(&lock->dep_map, 0, 0, nest, _RET_IP_);
+	__rt_mutex_lock_state(&lock->lock, TASK_UNINTERRUPTIBLE);
+}
+EXPORT_SYMBOL(_mutex_lock_nest_lock);
+
+int __lockfunc _mutex_lock_interruptible_nested(struct mutex *lock, int subclass)
+{
+	int ret;
+
+	mutex_acquire_nest(&lock->dep_map, subclass, 0, NULL, _RET_IP_);
+	ret = __rt_mutex_lock_state(&lock->lock, TASK_INTERRUPTIBLE);
+	if (ret)
+		mutex_release(&lock->dep_map, 1, _RET_IP_);
+	return ret;
+}
+EXPORT_SYMBOL(_mutex_lock_interruptible_nested);
+
+int __lockfunc _mutex_lock_killable_nested(struct mutex *lock, int subclass)
+{
+	int ret;
+
+	mutex_acquire(&lock->dep_map, subclass, 0, _RET_IP_);
+	ret = __rt_mutex_lock_state(&lock->lock, TASK_KILLABLE);
+	if (ret)
+		mutex_release(&lock->dep_map, 1, _RET_IP_);
+	return ret;
+}
+EXPORT_SYMBOL(_mutex_lock_killable_nested);
+#endif
+
+int __lockfunc _mutex_trylock(struct mutex *lock)
+{
+	int ret = __rt_mutex_trylock(&lock->lock);
+
+	if (ret)
+		mutex_acquire(&lock->dep_map, 0, 1, _RET_IP_);
+
+	return ret;
+}
+EXPORT_SYMBOL(_mutex_trylock);
+
+void __lockfunc _mutex_unlock(struct mutex *lock)
+{
+	mutex_release(&lock->dep_map, 1, _RET_IP_);
+	__rt_mutex_unlock(&lock->lock);
+}
+EXPORT_SYMBOL(_mutex_unlock);
+
+/**
+ * atomic_dec_and_mutex_lock - return holding mutex if we dec to 0
+ * @cnt: the atomic which we are to dec
+ * @lock: the mutex to return holding if we dec to 0
+ *
+ * return true and hold lock if we dec to 0, return false otherwise
+ */
+int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock)
+{
+	/* dec if we can't possibly hit 0 */
+	if (atomic_add_unless(cnt, -1, 1))
+		return 0;
+	/* we might hit 0, so take the lock */
+	mutex_lock(lock);
+	if (!atomic_dec_and_test(cnt)) {
+		/* when we actually did the dec, we didn't hit 0 */
+		mutex_unlock(lock);
+		return 0;
+	}
+	/* we hit 0, and we hold the lock */
+	return 1;
+}
+EXPORT_SYMBOL(atomic_dec_and_mutex_lock);
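
A short usage sketch for atomic_dec_and_mutex_lock() as implemented above;
the object and lock names are hypothetical:

static atomic_t obj_refs = ATOMIC_INIT(1);
static DEFINE_MUTEX(obj_lock);

static void obj_put(void)
{
	if (atomic_dec_and_mutex_lock(&obj_refs, &obj_lock)) {
		/* the count reached zero and obj_lock is held */
		/* ... release the resources protected by obj_lock ... */
		mutex_unlock(&obj_lock);
	}
}
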
Index: linux-5.0.19-rt11/kernel/locking/rtmutex.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/locking/rtmutex.c
+++ linux-5.0.19-rt11/kernel/locking/rtmutex.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:10 @
  *  Copyright (C) 2005-2006 Timesys Corp., Thomas Gleixner <tglx@timesys.com>
  *  Copyright (C) 2005 Kihon Technologies Inc., Steven Rostedt
  *  Copyright (C) 2006 Esben Nielsen
+ *  Adaptive Spinlocks:
+ *  Copyright (C) 2008 Novell, Inc., Gregory Haskins, Sven Dietrich,
+ *				     and Peter Morreale,
+ * Adaptive Spinlocks simplification:
+ *  Copyright (C) 2008 Red Hat, Inc., Steven Rostedt <srostedt@redhat.com>
  *
  *  See Documentation/locking/rt-mutex-design.txt for details.
  */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:26 @
 #include <linux/sched/wake_q.h>
 #include <linux/sched/debug.h>
 #include <linux/timer.h>
+#include <linux/ww_mutex.h>
+#include <linux/blkdev.h>
 
 #include "rtmutex_common.h"
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:238 @ static inline bool unlock_rt_mutex_safe(
  * Only use with rt_mutex_waiter_{less,equal}()
  */
 #define task_to_waiter(p)	\
-	&(struct rt_mutex_waiter){ .prio = (p)->prio, .deadline = (p)->dl.deadline }
+	&(struct rt_mutex_waiter){ .prio = (p)->prio, .deadline = (p)->dl.deadline, .task = (p) }
 
 static inline int
 rt_mutex_waiter_less(struct rt_mutex_waiter *left,
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:278 @ rt_mutex_waiter_equal(struct rt_mutex_wa
 	return 1;
 }
 
+#define STEAL_NORMAL  0
+#define STEAL_LATERAL 1
+
+static inline int
+rt_mutex_steal(struct rt_mutex *lock, struct rt_mutex_waiter *waiter, int mode)
+{
+	struct rt_mutex_waiter *top_waiter = rt_mutex_top_waiter(lock);
+
+	if (waiter == top_waiter || rt_mutex_waiter_less(waiter, top_waiter))
+		return 1;
+
+	/*
+	 * Note that RT tasks are excluded from lateral-steals
+	 * to prevent the introduction of an unbounded latency.
+	 */
+	if (mode == STEAL_NORMAL || rt_task(waiter->task))
+		return 0;
+
+	return rt_mutex_waiter_equal(waiter, top_waiter);
+}
+
 static void
 rt_mutex_enqueue(struct rt_mutex *lock, struct rt_mutex_waiter *waiter)
 {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:403 @ static bool rt_mutex_cond_detect_deadloc
 	return debug_rt_mutex_detect_deadlock(waiter, chwalk);
 }
 
+static void rt_mutex_wake_waiter(struct rt_mutex_waiter *waiter)
+{
+	if (waiter->savestate)
+		wake_up_lock_sleeper(waiter->task);
+	else
+		wake_up_process(waiter->task);
+}
+
 /*
  * Max number of times we'll walk the boosting chain:
  */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:735 @ static int rt_mutex_adjust_prio_chain(st
 	 * follow here. This is the end of the chain we are walking.
 	 */
 	if (!rt_mutex_owner(lock)) {
+		struct rt_mutex_waiter *lock_top_waiter;
+
 		/*
 		 * If the requeue [7] above changed the top waiter,
 		 * then we need to wake the new top waiter up to try
 		 * to get the lock.
 		 */
-		if (prerequeue_top_waiter != rt_mutex_top_waiter(lock))
-			wake_up_process(rt_mutex_top_waiter(lock)->task);
+		lock_top_waiter = rt_mutex_top_waiter(lock);
+		if (prerequeue_top_waiter != lock_top_waiter)
+			rt_mutex_wake_waiter(lock_top_waiter);
 		raw_spin_unlock_irq(&lock->wait_lock);
 		return 0;
 	}
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:846 @ static int rt_mutex_adjust_prio_chain(st
  * @task:   The task which wants to acquire the lock
  * @waiter: The waiter that is queued to the lock's wait tree if the
  *	    callsite called task_blocked_on_lock(), otherwise NULL
+ * @mode:   Lock steal mode (STEAL_NORMAL, STEAL_LATERAL)
  */
-static int try_to_take_rt_mutex(struct rt_mutex *lock, struct task_struct *task,
-				struct rt_mutex_waiter *waiter)
+static int __try_to_take_rt_mutex(struct rt_mutex *lock,
+				  struct task_struct *task,
+				  struct rt_mutex_waiter *waiter, int mode)
 {
 	lockdep_assert_held(&lock->wait_lock);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:886 @ static int try_to_take_rt_mutex(struct r
 	 */
 	if (waiter) {
 		/*
-		 * If waiter is not the highest priority waiter of
-		 * @lock, give up.
+		 * If waiter is not the highest priority waiter of @lock,
+		 * or its peer when lateral steal is allowed, give up.
 		 */
-		if (waiter != rt_mutex_top_waiter(lock))
+		if (!rt_mutex_steal(lock, waiter, mode))
 			return 0;
-
 		/*
 		 * We can acquire the lock. Remove the waiter from the
 		 * lock waiters tree.
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:908 @ static int try_to_take_rt_mutex(struct r
 		 */
 		if (rt_mutex_has_waiters(lock)) {
 			/*
-			 * If @task->prio is greater than or equal to
-			 * the top waiter priority (kernel view),
-			 * @task lost.
+			 * If @task->prio is greater than the top waiter
+			 * priority (kernel view), or equal to it when a
+			 * lateral steal is forbidden, @task lost.
 			 */
-			if (!rt_mutex_waiter_less(task_to_waiter(task),
-						  rt_mutex_top_waiter(lock)))
+			if (!rt_mutex_steal(lock, task_to_waiter(task), mode))
 				return 0;
-
 			/*
 			 * The current top waiter stays enqueued. We
 			 * don't have to change anything in the lock
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:960 @ takeit:
 	return 1;
 }
 
+#ifdef CONFIG_PREEMPT_RT_FULL
+/*
+ * preemptible spin_lock functions:
+ */
+static inline void rt_spin_lock_fastlock(struct rt_mutex *lock,
+					 void  (*slowfn)(struct rt_mutex *lock))
+{
+	might_sleep_no_state_check();
+
+	if (likely(rt_mutex_cmpxchg_acquire(lock, NULL, current)))
+		return;
+	else
+		slowfn(lock);
+}
+
+static inline void rt_spin_lock_fastunlock(struct rt_mutex *lock,
+					   void  (*slowfn)(struct rt_mutex *lock))
+{
+	if (likely(rt_mutex_cmpxchg_release(lock, current, NULL)))
+		return;
+	else
+		slowfn(lock);
+}
+#ifdef CONFIG_SMP
+/*
+ * Note that owner is a speculative pointer and dereferencing relies
+ * on rcu_read_lock() and the check against the lock owner.
+ */
+static int adaptive_wait(struct rt_mutex *lock,
+			 struct task_struct *owner)
+{
+	int res = 0;
+
+	rcu_read_lock();
+	for (;;) {
+		if (owner != rt_mutex_owner(lock))
+			break;
+		/*
+		 * Ensure that owner->on_cpu is dereferenced _after_
+		 * checking the above to be valid.
+		 */
+		barrier();
+		if (!owner->on_cpu) {
+			res = 1;
+			break;
+		}
+		cpu_relax();
+	}
+	rcu_read_unlock();
+	return res;
+}
+#else
+static int adaptive_wait(struct rt_mutex *lock,
+			 struct task_struct *orig_owner)
+{
+	return 1;
+}
+#endif
+
+static int task_blocks_on_rt_mutex(struct rt_mutex *lock,
+				   struct rt_mutex_waiter *waiter,
+				   struct task_struct *task,
+				   enum rtmutex_chainwalk chwalk);
+/*
+ * Slow path lock function spin_lock style: this variant is very
+ * careful not to miss any non-lock wakeups.
+ *
+ * We store the current state under p->pi_lock in p->saved_state and
+ * the try_to_wake_up() code handles this accordingly.
+ */
+void __sched rt_spin_lock_slowlock_locked(struct rt_mutex *lock,
+					  struct rt_mutex_waiter *waiter,
+					  unsigned long flags)
+{
+	struct task_struct *lock_owner, *self = current;
+	struct rt_mutex_waiter *top_waiter;
+	int ret;
+
+	if (__try_to_take_rt_mutex(lock, self, NULL, STEAL_LATERAL))
+		return;
+
+	BUG_ON(rt_mutex_owner(lock) == self);
+
+	/*
+	 * We save whatever state the task is in and we'll restore it
+	 * after acquiring the lock taking real wakeups into account
+	 * as well. We are serialized via pi_lock against wakeups. See
+	 * try_to_wake_up().
+	 */
+	raw_spin_lock(&self->pi_lock);
+	self->saved_state = self->state;
+	__set_current_state_no_track(TASK_UNINTERRUPTIBLE);
+	raw_spin_unlock(&self->pi_lock);
+
+	ret = task_blocks_on_rt_mutex(lock, waiter, self, RT_MUTEX_MIN_CHAINWALK);
+	BUG_ON(ret);
+
+	for (;;) {
+		/* Try to acquire the lock again. */
+		if (__try_to_take_rt_mutex(lock, self, waiter, STEAL_LATERAL))
+			break;
+
+		top_waiter = rt_mutex_top_waiter(lock);
+		lock_owner = rt_mutex_owner(lock);
+
+		raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
+
+		debug_rt_mutex_print_deadlock(waiter);
+
+		if (top_waiter != waiter || adaptive_wait(lock, lock_owner))
+			schedule();
+
+		raw_spin_lock_irqsave(&lock->wait_lock, flags);
+
+		raw_spin_lock(&self->pi_lock);
+		__set_current_state_no_track(TASK_UNINTERRUPTIBLE);
+		raw_spin_unlock(&self->pi_lock);
+	}
+
+	/*
+	 * Restore the task state to current->saved_state. We set it
+	 * to the original state above and the try_to_wake_up() code
+	 * has possibly updated it when a real (non-rtmutex) wakeup
+	 * happened while we were blocked. Clear saved_state so
+	 * try_to_wake_up() does not get confused.
+	 */
+	raw_spin_lock(&self->pi_lock);
+	__set_current_state_no_track(self->saved_state);
+	self->saved_state = TASK_RUNNING;
+	raw_spin_unlock(&self->pi_lock);
+
+	/*
+	 * try_to_take_rt_mutex() sets the waiter bit
+	 * unconditionally. We might have to fix that up:
+	 */
+	fixup_rt_mutex_waiters(lock);
+
+	BUG_ON(rt_mutex_has_waiters(lock) && waiter == rt_mutex_top_waiter(lock));
+	BUG_ON(!RB_EMPTY_NODE(&waiter->tree_entry));
+}
+
+static void noinline __sched rt_spin_lock_slowlock(struct rt_mutex *lock)
+{
+	struct rt_mutex_waiter waiter;
+	unsigned long flags;
+
+	rt_mutex_init_waiter(&waiter, true);
+
+	raw_spin_lock_irqsave(&lock->wait_lock, flags);
+	rt_spin_lock_slowlock_locked(lock, &waiter, flags);
+	raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
+	debug_rt_mutex_free_waiter(&waiter);
+}
+
+static bool __sched __rt_mutex_unlock_common(struct rt_mutex *lock,
+					     struct wake_q_head *wake_q,
+					     struct wake_q_head *wq_sleeper);
+/*
+ * Slow path to release a rt_mutex spin_lock style
+ */
+void __sched rt_spin_lock_slowunlock(struct rt_mutex *lock)
+{
+	unsigned long flags;
+	DEFINE_WAKE_Q(wake_q);
+	DEFINE_WAKE_Q(wake_sleeper_q);
+	bool postunlock;
+
+	raw_spin_lock_irqsave(&lock->wait_lock, flags);
+	postunlock = __rt_mutex_unlock_common(lock, &wake_q, &wake_sleeper_q);
+	raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
+
+	if (postunlock)
+		rt_mutex_postunlock(&wake_q, &wake_sleeper_q);
+}
+
+void __lockfunc rt_spin_lock(spinlock_t *lock)
+{
+	sleeping_lock_inc();
+	migrate_disable();
+	spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);
+	rt_spin_lock_fastlock(&lock->lock, rt_spin_lock_slowlock);
+}
+EXPORT_SYMBOL(rt_spin_lock);
+
+void __lockfunc __rt_spin_lock(struct rt_mutex *lock)
+{
+	rt_spin_lock_fastlock(lock, rt_spin_lock_slowlock);
+}
+
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+void __lockfunc rt_spin_lock_nested(spinlock_t *lock, int subclass)
+{
+	sleeping_lock_inc();
+	migrate_disable();
+	spin_acquire(&lock->dep_map, subclass, 0, _RET_IP_);
+	rt_spin_lock_fastlock(&lock->lock, rt_spin_lock_slowlock);
+}
+EXPORT_SYMBOL(rt_spin_lock_nested);
+#endif
+
+void __lockfunc rt_spin_unlock(spinlock_t *lock)
+{
+	/* NOTE: we always pass in '1' for nested, for simplicity */
+	spin_release(&lock->dep_map, 1, _RET_IP_);
+	rt_spin_lock_fastunlock(&lock->lock, rt_spin_lock_slowunlock);
+	migrate_enable();
+	sleeping_lock_dec();
+}
+EXPORT_SYMBOL(rt_spin_unlock);
+
+void __lockfunc __rt_spin_unlock(struct rt_mutex *lock)
+{
+	rt_spin_lock_fastunlock(lock, rt_spin_lock_slowunlock);
+}
+EXPORT_SYMBOL(__rt_spin_unlock);
+
+/*
+ * Wait for the lock to get unlocked: instead of polling for an unlock
+ * (like raw spinlocks do), we lock and unlock, to force the kernel to
+ * schedule if there's contention:
+ */
+void __lockfunc rt_spin_unlock_wait(spinlock_t *lock)
+{
+	spin_lock(lock);
+	spin_unlock(lock);
+}
+EXPORT_SYMBOL(rt_spin_unlock_wait);
+
+int __lockfunc rt_spin_trylock(spinlock_t *lock)
+{
+	int ret;
+
+	sleeping_lock_inc();
+	migrate_disable();
+	ret = __rt_mutex_trylock(&lock->lock);
+	if (ret) {
+		spin_acquire(&lock->dep_map, 0, 1, _RET_IP_);
+	} else {
+		migrate_enable();
+		sleeping_lock_dec();
+	}
+	return ret;
+}
+EXPORT_SYMBOL(rt_spin_trylock);
+
+int __lockfunc rt_spin_trylock_bh(spinlock_t *lock)
+{
+	int ret;
+
+	local_bh_disable();
+	ret = __rt_mutex_trylock(&lock->lock);
+	if (ret) {
+		sleeping_lock_inc();
+		migrate_disable();
+		spin_acquire(&lock->dep_map, 0, 1, _RET_IP_);
+	} else
+		local_bh_enable();
+	return ret;
+}
+EXPORT_SYMBOL(rt_spin_trylock_bh);
+
+int __lockfunc rt_spin_trylock_irqsave(spinlock_t *lock, unsigned long *flags)
+{
+	int ret;
+
+	*flags = 0;
+	ret = __rt_mutex_trylock(&lock->lock);
+	if (ret) {
+		sleeping_lock_inc();
+		migrate_disable();
+		spin_acquire(&lock->dep_map, 0, 1, _RET_IP_);
+	}
+	return ret;
+}
+EXPORT_SYMBOL(rt_spin_trylock_irqsave);
+
+void
+__rt_spin_lock_init(spinlock_t *lock, const char *name, struct lock_class_key *key)
+{
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+	/*
+	 * Make sure we are not reinitializing a held lock:
+	 */
+	debug_check_no_locks_freed((void *)lock, sizeof(*lock));
+	lockdep_init_map(&lock->dep_map, name, key, 0);
+#endif
+}
+EXPORT_SYMBOL(__rt_spin_lock_init);
+
+#endif /* PREEMPT_RT_FULL */
+
+#ifdef CONFIG_PREEMPT_RT_FULL
+static inline int __sched
+__mutex_lock_check_stamp(struct rt_mutex *lock, struct ww_acquire_ctx *ctx)
+{
+	struct ww_mutex *ww = container_of(lock, struct ww_mutex, base.lock);
+	struct ww_acquire_ctx *hold_ctx = READ_ONCE(ww->ctx);
+
+	if (!hold_ctx)
+		return 0;
+
+	if (unlikely(ctx == hold_ctx))
+		return -EALREADY;
+
+	if (ctx->stamp - hold_ctx->stamp <= LONG_MAX &&
+	    (ctx->stamp != hold_ctx->stamp || ctx > hold_ctx)) {
+#ifdef CONFIG_DEBUG_MUTEXES
+		DEBUG_LOCKS_WARN_ON(ctx->contending_lock);
+		ctx->contending_lock = ww;
+#endif
+		return -EDEADLK;
+	}
+
+	return 0;
+}
+#else
+static inline int __sched
+__mutex_lock_check_stamp(struct rt_mutex *lock, struct ww_acquire_ctx *ctx)
+{
+	BUG();
+	return 0;
+}
+
+#endif
+
+static inline int
+try_to_take_rt_mutex(struct rt_mutex *lock, struct task_struct *task,
+		     struct rt_mutex_waiter *waiter)
+{
+	return __try_to_take_rt_mutex(lock, task, waiter, STEAL_NORMAL);
+}
+
 /*
  * Task blocks on lock.
  *
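
With the functions added in the hunk above, a plain spinlock_t ends up in
rt_spin_lock()/rt_spin_unlock() on PREEMPT_RT_FULL: the critical section
only disables migration, remains preemptible, and blocks on the underlying
rt_mutex under contention. Code that needs a genuinely non-sleeping,
interrupts-off section must use raw_spinlock_t. A minimal sketch of the
distinction; the lock names are hypothetical:

static DEFINE_SPINLOCK(soft_lock);	/* rt_mutex-backed on RT */
static DEFINE_RAW_SPINLOCK(hard_lock);	/* always a spinning lock */

static void lock_example(void)
{
	spin_lock(&soft_lock);		/* -> rt_spin_lock() on RT */
	/* preemptible; migration disabled; may block on contention */
	spin_unlock(&soft_lock);

	raw_spin_lock_irq(&hard_lock);	/* true atomic section */
	raw_spin_unlock_irq(&hard_lock);
}
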
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1389 @ static int task_blocks_on_rt_mutex(struc
  * Called with lock->wait_lock held and interrupts disabled.
  */
 static void mark_wakeup_next_waiter(struct wake_q_head *wake_q,
+				    struct wake_q_head *wake_sleeper_q,
 				    struct rt_mutex *lock)
 {
 	struct rt_mutex_waiter *waiter;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1429 @ static void mark_wakeup_next_waiter(stru
 	 * Pairs with preempt_enable() in rt_mutex_postunlock();
 	 */
 	preempt_disable();
-	wake_q_add(wake_q, waiter->task);
+	if (waiter->savestate)
+		wake_q_add_sleeper(wake_sleeper_q, waiter->task);
+	else
+		wake_q_add(wake_q, waiter->task);
 	raw_spin_unlock(&current->pi_lock);
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1514 @ void rt_mutex_adjust_pi(struct task_stru
 		return;
 	}
 	next_lock = waiter->lock;
-	raw_spin_unlock_irqrestore(&task->pi_lock, flags);
 
 	/* gets dropped in rt_mutex_adjust_prio_chain()! */
 	get_task_struct(task);
 
+	raw_spin_unlock_irqrestore(&task->pi_lock, flags);
 	rt_mutex_adjust_prio_chain(task, RT_MUTEX_MIN_CHAINWALK, NULL,
 				   next_lock, NULL, task);
 }
 
-void rt_mutex_init_waiter(struct rt_mutex_waiter *waiter)
+void rt_mutex_init_waiter(struct rt_mutex_waiter *waiter, bool savestate)
 {
 	debug_rt_mutex_init_waiter(waiter);
 	RB_CLEAR_NODE(&waiter->pi_tree_entry);
 	RB_CLEAR_NODE(&waiter->tree_entry);
 	waiter->task = NULL;
+	waiter->savestate = savestate;
 }
 
 /**
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1545 @ void rt_mutex_init_waiter(struct rt_mute
 static int __sched
 __rt_mutex_slowlock(struct rt_mutex *lock, int state,
 		    struct hrtimer_sleeper *timeout,
-		    struct rt_mutex_waiter *waiter)
+		    struct rt_mutex_waiter *waiter,
+		    struct ww_acquire_ctx *ww_ctx)
 {
 	int ret = 0;
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1555 @ __rt_mutex_slowlock(struct rt_mutex *loc
 		if (try_to_take_rt_mutex(lock, current, waiter))
 			break;
 
-		/*
-		 * TASK_INTERRUPTIBLE checks for signals and
-		 * timeout. Ignored otherwise.
-		 */
-		if (likely(state == TASK_INTERRUPTIBLE)) {
-			/* Signal pending? */
-			if (signal_pending(current))
-				ret = -EINTR;
-			if (timeout && !timeout->task)
-				ret = -ETIMEDOUT;
+		if (timeout && !timeout->task) {
+			ret = -ETIMEDOUT;
+			break;
+		}
+		if (signal_pending_state(state, current)) {
+			ret = -EINTR;
+			break;
+		}
+
+		if (ww_ctx && ww_ctx->acquired > 0) {
+			ret = __mutex_lock_check_stamp(lock, ww_ctx);
 			if (ret)
 				break;
 		}
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1604 @ static void rt_mutex_handle_deadlock(int
 	}
 }
 
-/*
- * Slow path lock function:
- */
-static int __sched
-rt_mutex_slowlock(struct rt_mutex *lock, int state,
-		  struct hrtimer_sleeper *timeout,
-		  enum rtmutex_chainwalk chwalk)
+static __always_inline void ww_mutex_lock_acquired(struct ww_mutex *ww,
+						   struct ww_acquire_ctx *ww_ctx)
 {
-	struct rt_mutex_waiter waiter;
-	unsigned long flags;
-	int ret = 0;
+#ifdef CONFIG_DEBUG_MUTEXES
+	/*
+	 * If this WARN_ON triggers, you used ww_mutex_lock to acquire,
+	 * but released with a normal mutex_unlock in this call.
+	 *
+	 * This should never happen, always use ww_mutex_unlock.
+	 */
+	DEBUG_LOCKS_WARN_ON(ww->ctx);
+
+	/*
+	 * Not quite done after calling ww_acquire_done() ?
+	 */
+	DEBUG_LOCKS_WARN_ON(ww_ctx->done_acquire);
 
-	rt_mutex_init_waiter(&waiter);
+	if (ww_ctx->contending_lock) {
+		/*
+		 * After -EDEADLK you tried to
+		 * acquire a different ww_mutex? Bad!
+		 */
+		DEBUG_LOCKS_WARN_ON(ww_ctx->contending_lock != ww);
+
+		/*
+		 * You called ww_mutex_lock after receiving -EDEADLK,
+		 * but 'forgot' to unlock everything else first?
+		 */
+		DEBUG_LOCKS_WARN_ON(ww_ctx->acquired > 0);
+		ww_ctx->contending_lock = NULL;
+	}
 
 	/*
-	 * Technically we could use raw_spin_[un]lock_irq() here, but this can
-	 * be called in early boot if the cmpxchg() fast path is disabled
-	 * (debug, no architecture support). In this case we will acquire the
-	 * rtmutex with lock->wait_lock held. But we cannot unconditionally
-	 * enable interrupts in that early boot case. So we need to use the
-	 * irqsave/restore variants.
+	 * Naughty, using a different class will lead to undefined behavior!
 	 */
-	raw_spin_lock_irqsave(&lock->wait_lock, flags);
+	DEBUG_LOCKS_WARN_ON(ww_ctx->ww_class != ww->ww_class);
+#endif
+	ww_ctx->acquired++;
+}
+
+#ifdef CONFIG_PREEMPT_RT_FULL
+static void ww_mutex_account_lock(struct rt_mutex *lock,
+				  struct ww_acquire_ctx *ww_ctx)
+{
+	struct ww_mutex *ww = container_of(lock, struct ww_mutex, base.lock);
+	struct rt_mutex_waiter *waiter, *n;
+
+	/*
+	 * This branch gets optimized out for the common case,
+	 * and is only important for ww_mutex_lock.
+	 */
+	ww_mutex_lock_acquired(ww, ww_ctx);
+	ww->ctx = ww_ctx;
+
+	/*
+	 * Give any possible sleeping processes the chance to wake up,
+	 * so they can recheck if they have to back off.
+	 */
+	rbtree_postorder_for_each_entry_safe(waiter, n, &lock->waiters.rb_root,
+					     tree_entry) {
+		/* XXX debug rt mutex waiter wakeup */
+
+		BUG_ON(waiter->lock != lock);
+		rt_mutex_wake_waiter(waiter);
+	}
+}
+
+#else
+
+static void ww_mutex_account_lock(struct rt_mutex *lock,
+				  struct ww_acquire_ctx *ww_ctx)
+{
+	BUG();
+}
+#endif
+
+int __sched rt_mutex_slowlock_locked(struct rt_mutex *lock, int state,
+				     struct hrtimer_sleeper *timeout,
+				     enum rtmutex_chainwalk chwalk,
+				     struct ww_acquire_ctx *ww_ctx,
+				     struct rt_mutex_waiter *waiter)
+{
+	int ret;
+
+#ifdef CONFIG_PREEMPT_RT_FULL
+	if (ww_ctx) {
+		struct ww_mutex *ww;
+
+		ww = container_of(lock, struct ww_mutex, base.lock);
+		if (unlikely(ww_ctx == READ_ONCE(ww->ctx)))
+			return -EALREADY;
+	}
+#endif
 
 	/* Try to acquire the lock again: */
 	if (try_to_take_rt_mutex(lock, current, NULL)) {
-		raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
+		if (ww_ctx)
+			ww_mutex_account_lock(lock, ww_ctx);
 		return 0;
 	}
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1711 @ rt_mutex_slowlock(struct rt_mutex *lock,
 	if (unlikely(timeout))
 		hrtimer_start_expires(&timeout->timer, HRTIMER_MODE_ABS);
 
-	ret = task_blocks_on_rt_mutex(lock, &waiter, current, chwalk);
+	ret = task_blocks_on_rt_mutex(lock, waiter, current, chwalk);
 
-	if (likely(!ret))
+	if (likely(!ret)) {
 		/* sleep on the mutex */
-		ret = __rt_mutex_slowlock(lock, state, timeout, &waiter);
+		ret = __rt_mutex_slowlock(lock, state, timeout, waiter,
+					  ww_ctx);
+	} else if (ww_ctx) {
+		/* ww_mutex received EDEADLK, let it become EALREADY */
+		ret = __mutex_lock_check_stamp(lock, ww_ctx);
+		BUG_ON(!ret);
+	}
 
 	if (unlikely(ret)) {
 		__set_current_state(TASK_RUNNING);
-		remove_waiter(lock, &waiter);
-		rt_mutex_handle_deadlock(ret, chwalk, &waiter);
+		remove_waiter(lock, waiter);
+		/* ww_mutex wants to report EDEADLK/EALREADY, let it */
+		if (!ww_ctx)
+			rt_mutex_handle_deadlock(ret, chwalk, waiter);
+	} else if (ww_ctx) {
+		ww_mutex_account_lock(lock, ww_ctx);
 	}
 
 	/*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1738 @ rt_mutex_slowlock(struct rt_mutex *lock,
 	 * unconditionally. We might have to fix that up.
 	 */
 	fixup_rt_mutex_waiters(lock);
+	return ret;
+}
+
+/*
+ * Slow path lock function:
+ */
+static int __sched
+rt_mutex_slowlock(struct rt_mutex *lock, int state,
+		  struct hrtimer_sleeper *timeout,
+		  enum rtmutex_chainwalk chwalk,
+		  struct ww_acquire_ctx *ww_ctx)
+{
+	struct rt_mutex_waiter waiter;
+	unsigned long flags;
+	int ret = 0;
+
+	rt_mutex_init_waiter(&waiter, false);
+
+	/*
+	 * Technically we could use raw_spin_[un]lock_irq() here, but this can
+	 * be called in early boot if the cmpxchg() fast path is disabled
+	 * (debug, no architecture support). In this case we will acquire the
+	 * rtmutex with lock->wait_lock held. But we cannot unconditionally
+	 * enable interrupts in that early boot case. So we need to use the
+	 * irqsave/restore variants.
+	 */
+	raw_spin_lock_irqsave(&lock->wait_lock, flags);
+
+	ret = rt_mutex_slowlock_locked(lock, state, timeout, chwalk, ww_ctx,
+				       &waiter);
 
 	raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1828 @ static inline int rt_mutex_slowtrylock(s
  * Return whether the current task needs to call rt_mutex_postunlock().
  */
 static bool __sched rt_mutex_slowunlock(struct rt_mutex *lock,
-					struct wake_q_head *wake_q)
+					struct wake_q_head *wake_q,
+					struct wake_q_head *wake_sleeper_q)
 {
 	unsigned long flags;
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1883 @ static bool __sched rt_mutex_slowunlock(
 	 *
 	 * Queue the next waiter for wakeup once we release the wait_lock.
 	 */
-	mark_wakeup_next_waiter(wake_q, lock);
+	mark_wakeup_next_waiter(wake_q, wake_sleeper_q, lock);
 	raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
 
 	return true; /* call rt_mutex_postunlock() */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1897 @ static bool __sched rt_mutex_slowunlock(
  */
 static inline int
 rt_mutex_fastlock(struct rt_mutex *lock, int state,
+		  struct ww_acquire_ctx *ww_ctx,
 		  int (*slowfn)(struct rt_mutex *lock, int state,
 				struct hrtimer_sleeper *timeout,
-				enum rtmutex_chainwalk chwalk))
+				enum rtmutex_chainwalk chwalk,
+				struct ww_acquire_ctx *ww_ctx))
 {
 	if (likely(rt_mutex_cmpxchg_acquire(lock, NULL, current)))
 		return 0;
 
-	return slowfn(lock, state, NULL, RT_MUTEX_MIN_CHAINWALK);
+	/*
+	 * If the rt_mutex blocks, sched_submit_work() will not call
+	 * blk_schedule_flush_plug() (because tsk_is_pi_blocked() would be
+	 * true). We must call blk_schedule_flush_plug() here ourselves;
+	 * otherwise an I/O deadlock may happen.
+	 */
+	if (unlikely(blk_needs_flush_plug(current)))
+		blk_schedule_flush_plug(current);
+
+	return slowfn(lock, state, NULL, RT_MUTEX_MIN_CHAINWALK, ww_ctx);
 }
 
 static inline int
 rt_mutex_timed_fastlock(struct rt_mutex *lock, int state,
 			struct hrtimer_sleeper *timeout,
 			enum rtmutex_chainwalk chwalk,
+			struct ww_acquire_ctx *ww_ctx,
 			int (*slowfn)(struct rt_mutex *lock, int state,
 				      struct hrtimer_sleeper *timeout,
-				      enum rtmutex_chainwalk chwalk))
+				      enum rtmutex_chainwalk chwalk,
+				      struct ww_acquire_ctx *ww_ctx))
 {
 	if (chwalk == RT_MUTEX_MIN_CHAINWALK &&
 	    likely(rt_mutex_cmpxchg_acquire(lock, NULL, current)))
 		return 0;
 
-	return slowfn(lock, state, timeout, chwalk);
+	if (unlikely(blk_needs_flush_plug(current)))
+		blk_schedule_flush_plug(current);
+
+	return slowfn(lock, state, timeout, chwalk, ww_ctx);
 }
 
 static inline int
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1951 @ rt_mutex_fasttrylock(struct rt_mutex *lo
 /*
  * Performs the wakeup of the the top-waiter and re-enables preemption.
  */
-void rt_mutex_postunlock(struct wake_q_head *wake_q)
+void rt_mutex_postunlock(struct wake_q_head *wake_q,
+			 struct wake_q_head *wake_sleeper_q)
 {
 	wake_up_q(wake_q);
+	wake_up_q_sleeper(wake_sleeper_q);
 
 	/* Pairs with preempt_disable() in rt_mutex_slowunlock() */
 	preempt_enable();
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1964 @ void rt_mutex_postunlock(struct wake_q_h
 static inline void
 rt_mutex_fastunlock(struct rt_mutex *lock,
 		    bool (*slowfn)(struct rt_mutex *lock,
-				   struct wake_q_head *wqh))
+				   struct wake_q_head *wqh,
+				   struct wake_q_head *wq_sleeper))
 {
 	DEFINE_WAKE_Q(wake_q);
+	DEFINE_WAKE_Q(wake_sleeper_q);
 
 	if (likely(rt_mutex_cmpxchg_release(lock, current, NULL)))
 		return;
 
-	if (slowfn(lock, &wake_q))
-		rt_mutex_postunlock(&wake_q);
+	if (slowfn(lock, &wake_q, &wake_sleeper_q))
+		rt_mutex_postunlock(&wake_q, &wake_sleeper_q);
 }
 
-static inline void __rt_mutex_lock(struct rt_mutex *lock, unsigned int subclass)
+int __sched __rt_mutex_lock_state(struct rt_mutex *lock, int state)
 {
 	might_sleep();
+	return rt_mutex_fastlock(lock, state, NULL, rt_mutex_slowlock);
+}
+
+/**
+ * rt_mutex_lock_state - lock a rt_mutex with a given state
+ *
+ * @lock:      The rt_mutex to be locked
+ * @state:     The state to set when blocking on the rt_mutex
+ */
+static inline int __sched rt_mutex_lock_state(struct rt_mutex *lock,
+					      unsigned int subclass, int state)
+{
+	int ret;
 
 	mutex_acquire(&lock->dep_map, subclass, 0, _RET_IP_);
-	rt_mutex_fastlock(lock, TASK_UNINTERRUPTIBLE, rt_mutex_slowlock);
+	ret = __rt_mutex_lock_state(lock, state);
+	if (ret)
+		mutex_release(&lock->dep_map, 1, _RET_IP_);
+	return ret;
+}
+
+static inline void __rt_mutex_lock(struct rt_mutex *lock, unsigned int subclass)
+{
+	rt_mutex_lock_state(lock, subclass, TASK_UNINTERRUPTIBLE);
 }
 
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2044 @ EXPORT_SYMBOL_GPL(rt_mutex_lock);
  */
 int __sched rt_mutex_lock_interruptible(struct rt_mutex *lock)
 {
-	int ret;
-
-	might_sleep();
-
-	mutex_acquire(&lock->dep_map, 0, 0, _RET_IP_);
-	ret = rt_mutex_fastlock(lock, TASK_INTERRUPTIBLE, rt_mutex_slowlock);
-	if (ret)
-		mutex_release(&lock->dep_map, 1, _RET_IP_);
-
-	return ret;
+	return rt_mutex_lock_state(lock, 0, TASK_INTERRUPTIBLE);
 }
 EXPORT_SYMBOL_GPL(rt_mutex_lock_interruptible);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2062 @ int __sched __rt_mutex_futex_trylock(str
 }
 
 /**
+ * rt_mutex_lock_killable - lock a rt_mutex killable
+ *
+ * @lock:              the rt_mutex to be locked
+ *
+ * Returns:
+ *  0          on success
+ * -EINTR      when interrupted by a signal
+ */
+int __sched rt_mutex_lock_killable(struct rt_mutex *lock)
+{
+	return rt_mutex_lock_state(lock, 0, TASK_KILLABLE);
+}
+EXPORT_SYMBOL_GPL(rt_mutex_lock_killable);
+
+/**
  * rt_mutex_timed_lock - lock a rt_mutex interruptible
  *			the timeout structure is provided
  *			by the caller
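
A hedged caller sketch for the killable variant added above;
do_protected_op() is hypothetical:

static int do_protected_op(struct rt_mutex *lock)
{
	int ret = rt_mutex_lock_killable(lock);

	if (ret)		/* -EINTR: a fatal signal arrived */
		return ret;
	/* ... critical section ... */
	rt_mutex_unlock(lock);
	return 0;
}
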
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2100 @ rt_mutex_timed_lock(struct rt_mutex *loc
 	mutex_acquire(&lock->dep_map, 0, 0, _RET_IP_);
 	ret = rt_mutex_timed_fastlock(lock, TASK_INTERRUPTIBLE, timeout,
 				       RT_MUTEX_MIN_CHAINWALK,
+				       NULL,
 				       rt_mutex_slowlock);
 	if (ret)
 		mutex_release(&lock->dep_map, 1, _RET_IP_);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2109 @ rt_mutex_timed_lock(struct rt_mutex *loc
 }
 EXPORT_SYMBOL_GPL(rt_mutex_timed_lock);
 
+int __sched __rt_mutex_trylock(struct rt_mutex *lock)
+{
+#ifdef CONFIG_PREEMPT_RT_FULL
+	if (WARN_ON_ONCE(in_irq() || in_nmi()))
+#else
+	if (WARN_ON_ONCE(in_irq() || in_nmi() || in_serving_softirq()))
+#endif
+		return 0;
+
+	return rt_mutex_fasttrylock(lock, rt_mutex_slowtrylock);
+}
+
 /**
  * rt_mutex_trylock - try to lock a rt_mutex
  *
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2136 @ int __sched rt_mutex_trylock(struct rt_m
 {
 	int ret;
 
-	if (WARN_ON_ONCE(in_irq() || in_nmi() || in_serving_softirq()))
-		return 0;
-
-	ret = rt_mutex_fasttrylock(lock, rt_mutex_slowtrylock);
+	ret = __rt_mutex_trylock(lock);
 	if (ret)
 		mutex_acquire(&lock->dep_map, 0, 1, _RET_IP_);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2144 @ int __sched rt_mutex_trylock(struct rt_m
 }
 EXPORT_SYMBOL_GPL(rt_mutex_trylock);
 
+void __sched __rt_mutex_unlock(struct rt_mutex *lock)
+{
+	rt_mutex_fastunlock(lock, rt_mutex_slowunlock);
+}
+
 /**
  * rt_mutex_unlock - unlock a rt_mutex
  *
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2157 @ EXPORT_SYMBOL_GPL(rt_mutex_trylock);
 void __sched rt_mutex_unlock(struct rt_mutex *lock)
 {
 	mutex_release(&lock->dep_map, 1, _RET_IP_);
-	rt_mutex_fastunlock(lock, rt_mutex_slowunlock);
+	__rt_mutex_unlock(lock);
 }
 EXPORT_SYMBOL_GPL(rt_mutex_unlock);
 
-/**
- * Futex variant, that since futex variants do not use the fast-path, can be
- * simple and will not need to retry.
- */
-bool __sched __rt_mutex_futex_unlock(struct rt_mutex *lock,
-				    struct wake_q_head *wake_q)
+static bool __sched __rt_mutex_unlock_common(struct rt_mutex *lock,
+					     struct wake_q_head *wake_q,
+					     struct wake_q_head *wq_sleeper)
 {
 	lockdep_assert_held(&lock->wait_lock);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2180 @ bool __sched __rt_mutex_futex_unlock(str
 	 * avoid inversion prior to the wakeup.  preempt_disable()
 	 * therein pairs with rt_mutex_postunlock().
 	 */
-	mark_wakeup_next_waiter(wake_q, lock);
+	mark_wakeup_next_waiter(wake_q, wq_sleeper, lock);
 
 	return true; /* call postunlock() */
 }
 
+/**
+ * Futex variant, that since futex variants do not use the fast-path, can be
+ * simple and will not need to retry.
+ */
+bool __sched __rt_mutex_futex_unlock(struct rt_mutex *lock,
+				     struct wake_q_head *wake_q,
+				     struct wake_q_head *wq_sleeper)
+{
+	return __rt_mutex_unlock_common(lock, wake_q, wq_sleeper);
+}
+
 void __sched rt_mutex_futex_unlock(struct rt_mutex *lock)
 {
 	DEFINE_WAKE_Q(wake_q);
+	DEFINE_WAKE_Q(wake_sleeper_q);
 	unsigned long flags;
 	bool postunlock;
 
 	raw_spin_lock_irqsave(&lock->wait_lock, flags);
-	postunlock = __rt_mutex_futex_unlock(lock, &wake_q);
+	postunlock = __rt_mutex_futex_unlock(lock, &wake_q, &wake_sleeper_q);
 	raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
 
 	if (postunlock)
-		rt_mutex_postunlock(&wake_q);
+		rt_mutex_postunlock(&wake_q, &wake_sleeper_q);
 }
 
 /**
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2247 @ void __rt_mutex_init(struct rt_mutex *lo
 	if (name && key)
 		debug_rt_mutex_init(lock, name, key);
 }
-EXPORT_SYMBOL_GPL(__rt_mutex_init);
+EXPORT_SYMBOL(__rt_mutex_init);
 
 /**
  * rt_mutex_init_proxy_locked - initialize and lock a rt_mutex on behalf of a
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2267 @ void rt_mutex_init_proxy_locked(struct r
 				struct task_struct *proxy_owner)
 {
 	__rt_mutex_init(lock, NULL, NULL);
+#ifdef CONFIG_DEBUG_SPINLOCK
+	/*
+	 * Get another key class for the wait_lock. LOCK_PI and UNLOCK_PI
+	 * hold the ->wait_lock of the proxy_lock while unlocking a sleeping
+	 * lock.
+	 */
+	raw_spin_lock_init(&lock->wait_lock);
+#endif
 	debug_rt_mutex_proxy_lock(lock, proxy_owner);
 	rt_mutex_set_owner(lock, proxy_owner);
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2422 @ int rt_mutex_wait_proxy_lock(struct rt_m
 			       struct hrtimer_sleeper *to,
 			       struct rt_mutex_waiter *waiter)
 {
+	struct task_struct *tsk = current;
 	int ret;
 
 	raw_spin_lock_irq(&lock->wait_lock);
 	/* sleep on the mutex */
 	set_current_state(TASK_INTERRUPTIBLE);
-	ret = __rt_mutex_slowlock(lock, TASK_INTERRUPTIBLE, to, waiter);
+	ret = __rt_mutex_slowlock(lock, TASK_INTERRUPTIBLE, to, waiter, NULL);
 	/*
 	 * try_to_take_rt_mutex() sets the waiter bit unconditionally. We might
 	 * have to fix that up.
 	 */
 	fixup_rt_mutex_waiters(lock);
+	/*
+	 * RT has a problem here when the wait got interrupted by a timeout
+	 * or a signal. task->pi_blocked_on is still set. The task must
+	 * acquire the hash bucket lock when returning from this function.
+	 *
+	 * If the hash bucket lock is contended then the
+	 * BUG_ON(rt_mutex_real_waiter(task->pi_blocked_on)) in
+	 * task_blocks_on_rt_mutex() will trigger. This can be avoided by
+	 * clearing task->pi_blocked_on which removes the task from the
+	 * boosting chain of the rtmutex. That's correct because the task
+	 * is no longer blocked on it.
+	 */
+	if (ret) {
+		raw_spin_lock(&tsk->pi_lock);
+		tsk->pi_blocked_on = NULL;
+		raw_spin_unlock(&tsk->pi_lock);
+	}
+
 	raw_spin_unlock_irq(&lock->wait_lock);
 
 	return ret;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2513 @ bool rt_mutex_cleanup_proxy_lock(struct
 
 	return cleanup;
 }
+
+static inline int
+ww_mutex_deadlock_injection(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
+{
+#ifdef CONFIG_DEBUG_WW_MUTEX_SLOWPATH
+	unsigned tmp;
+
+	if (ctx->deadlock_inject_countdown-- == 0) {
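+		/*
+		 * Back off: grow the injection interval by roughly 3.5x,
+		 * clamping it to UINT_MAX once it gets large enough to
+		 * risk overflowing.
+		 */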
+		tmp = ctx->deadlock_inject_interval;
+		if (tmp > UINT_MAX/4)
+			tmp = UINT_MAX;
+		else
+			tmp = tmp*2 + tmp + tmp/2;
+
+		ctx->deadlock_inject_interval = tmp;
+		ctx->deadlock_inject_countdown = tmp;
+		ctx->contending_lock = lock;
+
+		ww_mutex_unlock(lock);
+
+		return -EDEADLK;
+	}
+#endif
+
+	return 0;
+}
+
+#ifdef CONFIG_PREEMPT_RT_FULL
+int __sched
+ww_mutex_lock_interruptible(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
+{
+	int ret;
+
+	might_sleep();
+
+	mutex_acquire_nest(&lock->base.dep_map, 0, 0,
+			   ctx ? &ctx->dep_map : NULL, _RET_IP_);
+	ret = rt_mutex_slowlock(&lock->base.lock, TASK_INTERRUPTIBLE, NULL, 0,
+				ctx);
+	if (ret)
+		mutex_release(&lock->base.dep_map, 1, _RET_IP_);
+	else if (!ret && ctx && ctx->acquired > 1)
+		return ww_mutex_deadlock_injection(lock, ctx);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(ww_mutex_lock_interruptible);
+
+int __sched
+ww_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
+{
+	int ret;
+
+	might_sleep();
+
+	mutex_acquire_nest(&lock->base.dep_map, 0, 0,
+			   ctx ? &ctx->dep_map : NULL, _RET_IP_);
+	ret = rt_mutex_slowlock(&lock->base.lock, TASK_UNINTERRUPTIBLE, NULL, 0,
+				ctx);
+	if (ret)
+		mutex_release(&lock->base.dep_map, 1, _RET_IP_);
+	else if (!ret && ctx && ctx->acquired > 1)
+		return ww_mutex_deadlock_injection(lock, ctx);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(ww_mutex_lock);
+
+void __sched ww_mutex_unlock(struct ww_mutex *lock)
+{
+	int nest = !!lock->ctx;
+
+	/*
+	 * The unlocking fastpath is the 0->1 transition from 'locked'
+	 * into 'unlocked' state:
+	 */
+	if (nest) {
+#ifdef CONFIG_DEBUG_MUTEXES
+		DEBUG_LOCKS_WARN_ON(!lock->ctx->acquired);
+#endif
+		if (lock->ctx->acquired > 0)
+			lock->ctx->acquired--;
+		lock->ctx = NULL;
+	}
+
+	mutex_release(&lock->base.dep_map, nest, _RET_IP_);
+	__rt_mutex_unlock(&lock->base.lock);
+}
+EXPORT_SYMBOL(ww_mutex_unlock);
+
+int __rt_mutex_owner_current(struct rt_mutex *lock)
+{
+	return rt_mutex_owner(lock) == current;
+}
+EXPORT_SYMBOL(__rt_mutex_owner_current);
+#endif
Index: linux-5.0.19-rt11/kernel/locking/rtmutex_common.h
===================================================================
--- linux-5.0.19-rt11.orig/kernel/locking/rtmutex_common.h
+++ linux-5.0.19-rt11/kernel/locking/rtmutex_common.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:18 @
 
 #include <linux/rtmutex.h>
 #include <linux/sched/wake_q.h>
+#include <linux/sched/debug.h>
 
 /*
  * This is the control structure for tasks blocked on a rt_mutex,
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:33 @ struct rt_mutex_waiter {
 	struct rb_node          pi_tree_entry;
 	struct task_struct	*task;
 	struct rt_mutex		*lock;
+	bool			savestate;
 #ifdef CONFIG_DEBUG_RT_MUTEXES
 	unsigned long		ip;
 	struct pid		*deadlock_task_pid;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:140 @ extern void rt_mutex_init_proxy_locked(s
 				       struct task_struct *proxy_owner);
 extern void rt_mutex_proxy_unlock(struct rt_mutex *lock,
 				  struct task_struct *proxy_owner);
-extern void rt_mutex_init_waiter(struct rt_mutex_waiter *waiter);
+extern void rt_mutex_init_waiter(struct rt_mutex_waiter *waiter, bool savestate);
 extern int __rt_mutex_start_proxy_lock(struct rt_mutex *lock,
 				     struct rt_mutex_waiter *waiter,
 				     struct task_struct *task);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:158 @ extern int __rt_mutex_futex_trylock(stru
 
 extern void rt_mutex_futex_unlock(struct rt_mutex *lock);
 extern bool __rt_mutex_futex_unlock(struct rt_mutex *lock,
-				 struct wake_q_head *wqh);
+				 struct wake_q_head *wqh,
+				 struct wake_q_head *wq_sleeper);
 
-extern void rt_mutex_postunlock(struct wake_q_head *wake_q);
+extern void rt_mutex_postunlock(struct wake_q_head *wake_q,
+				struct wake_q_head *wake_sleeper_q);
+
+/* RW semaphore special interface */
+struct ww_acquire_ctx;
+
+extern int __rt_mutex_lock_state(struct rt_mutex *lock, int state);
+extern int __rt_mutex_trylock(struct rt_mutex *lock);
+extern void __rt_mutex_unlock(struct rt_mutex *lock);
+int __sched rt_mutex_slowlock_locked(struct rt_mutex *lock, int state,
+				     struct hrtimer_sleeper *timeout,
+				     enum rtmutex_chainwalk chwalk,
+				     struct ww_acquire_ctx *ww_ctx,
+				     struct rt_mutex_waiter *waiter);
+void __sched rt_spin_lock_slowlock_locked(struct rt_mutex *lock,
+					  struct rt_mutex_waiter *waiter,
+					  unsigned long flags);
+void __sched rt_spin_lock_slowunlock(struct rt_mutex *lock);
 
 #ifdef CONFIG_DEBUG_RT_MUTEXES
 # include "rtmutex-debug.h"
Index: linux-5.0.19-rt11/kernel/locking/rwlock-rt.c
===================================================================
--- /dev/null
+++ linux-5.0.19-rt11/kernel/locking/rwlock-rt.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4 @
+/*
+ */
+#include <linux/sched/debug.h>
+#include <linux/export.h>
+
+#include "rtmutex_common.h"
+#include <linux/rwlock_types_rt.h>
+
+/*
+ * RT-specific reader/writer locks
+ *
+ * write_lock()
+ *  1) Lock lock->rtmutex
+ *  2) Remove the reader BIAS to force readers into the slow path
+ *  3) Wait until all readers have left the critical region
+ *  4) Mark it write locked
+ *
+ * write_unlock()
+ *  1) Remove the write locked marker
+ *  2) Set the reader BIAS so readers can use the fast path again
+ *  3) Unlock lock->rtmutex to release blocked readers
+ *
+ * read_lock()
+ *  1) Try fast path acquisition (reader BIAS is set)
+ *  2) Take lock->rtmutex.wait_lock which protects the writelocked flag
+ *  3) If !writelocked, acquire it for read
+ *  4) If writelocked, block on lock->rtmutex
+ *  5) unlock lock->rtmutex, goto 1)
+ *
+ * read_unlock()
+ *  1) Try fast path release (reader count != 1)
+ *  2) Wake the writer waiting in write_lock()#3
+ *
+ * read_lock()#3 has the consequence that rw locks on RT are not writer
+ * fair, but writers, which should be avoided in RT tasks (think tasklist
+ * lock), are subject to the rtmutex priority/DL inheritance mechanism.
+ *
+ * It's possible to make the rw locks writer fair by keeping a list of
+ * active readers. A blocked writer would force all newly incoming readers
+ * to block on the rtmutex, but the rtmutex would have to be proxy locked
+ * for one reader after the other. We can't use multi-reader inheritance
+ * because there is no way to support that with
+ * SCHED_DEADLINE. Implementing the one by one reader boosting/handover
+ * mechanism is a major surgery for a very dubious value.
+ *
+ * The risk of writer starvation is there, but the pathological use cases
+ * which trigger it are not necessarily the typical RT workloads.
+ */
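+
+/*
+ * Rough summary of the lock->readers encoding used below (derived from
+ * the code, not a formal contract):
+ *
+ *  readers < 0            : reader BIAS set, the read fast path is open
+ *  readers >= 0           : a writer removed the BIAS; the value is the
+ *                           number of readers still in the critical region
+ *  readers == WRITER_BIAS : write locked
+ */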
+
+void __rwlock_biased_rt_init(struct rt_rw_lock *lock, const char *name,
+			     struct lock_class_key *key)
+{
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+	/*
+	 * Make sure we are not reinitializing a held semaphore:
+	 */
+	debug_check_no_locks_freed((void *)lock, sizeof(*lock));
+	lockdep_init_map(&lock->dep_map, name, key, 0);
+#endif
+	atomic_set(&lock->readers, READER_BIAS);
+	rt_mutex_init(&lock->rtmutex);
+	lock->rtmutex.save_state = 1;
+}
+
+int __read_rt_trylock(struct rt_rw_lock *lock)
+{
+	int r, old;
+
+	/*
+	 * Increment reader count, if lock->readers < 0, i.e. READER_BIAS is
+	 * set.
+	 */
+	for (r = atomic_read(&lock->readers); r < 0;) {
+		old = atomic_cmpxchg(&lock->readers, r, r + 1);
+		if (likely(old == r))
+			return 1;
+		r = old;
+	}
+	return 0;
+}
+
+void __sched __read_rt_lock(struct rt_rw_lock *lock)
+{
+	struct rt_mutex *m = &lock->rtmutex;
+	struct rt_mutex_waiter waiter;
+	unsigned long flags;
+
+	if (__read_rt_trylock(lock))
+		return;
+
+	raw_spin_lock_irqsave(&m->wait_lock, flags);
+	/*
+	 * Allow readers as long as the writer has not completely
+	 * acquired the semaphore for write.
+	 */
+	if (atomic_read(&lock->readers) != WRITER_BIAS) {
+		atomic_inc(&lock->readers);
+		raw_spin_unlock_irqrestore(&m->wait_lock, flags);
+		return;
+	}
+
+	/*
+	 * Call into the slow lock path with the rtmutex->wait_lock
+	 * held, so this can't result in the following race:
+	 *
+	 * Reader1		Reader2		Writer
+	 *			read_lock()
+	 *					write_lock()
+	 *					rtmutex_lock(m)
+	 *					swait()
+	 * read_lock()
+	 * unlock(m->wait_lock)
+	 *			read_unlock()
+	 *			swake()
+	 *					lock(m->wait_lock)
+	 *					lock->writelocked=true
+	 *					unlock(m->wait_lock)
+	 *
+	 *					write_unlock()
+	 *					lock->writelocked=false
+	 *					rtmutex_unlock(m)
+	 *			read_lock()
+	 *					write_lock()
+	 *					rtmutex_lock(m)
+	 *					swait()
+	 * rtmutex_lock(m)
+	 *
+	 * That would put Reader1 behind the writer waiting on
+	 * Reader2 to call read_unlock() which might be unbound.
+	 */
+	rt_mutex_init_waiter(&waiter, true);
+	rt_spin_lock_slowlock_locked(m, &waiter, flags);
+	/*
+	 * The slowlock() above is guaranteed to return with the rtmutex
+	 * now held, so there can't be a writer active. Increment the reader
+	 * count and immediately drop the rtmutex again.
+	 */
+	atomic_inc(&lock->readers);
+	raw_spin_unlock_irqrestore(&m->wait_lock, flags);
+	rt_spin_lock_slowunlock(m);
+
+	debug_rt_mutex_free_waiter(&waiter);
+}
+
+void __read_rt_unlock(struct rt_rw_lock *lock)
+{
+	struct rt_mutex *m = &lock->rtmutex;
+	struct task_struct *tsk;
+
+	/*
+	 * lock->readers can only hit 0 when a writer is waiting for the
+	 * active readers to leave the critical region.
+	 */
+	if (!atomic_dec_and_test(&lock->readers))
+		return;
+
+	raw_spin_lock_irq(&m->wait_lock);
+	/*
+	 * Wake the writer, i.e. the rtmutex owner. It might release the
+	 * rtmutex concurrently in the fast path, but to clean up the rw
+	 * lock it needs to acquire m->wait_lock. The worst case which can
+	 * happen is a spurious wakeup.
+	 */
+	tsk = rt_mutex_owner(m);
+	if (tsk)
+		wake_up_process(tsk);
+
+	raw_spin_unlock_irq(&m->wait_lock);
+}
+
+static void __write_unlock_common(struct rt_rw_lock *lock, int bias,
+				  unsigned long flags)
+{
+	struct rt_mutex *m = &lock->rtmutex;
+
+	atomic_add(READER_BIAS - bias, &lock->readers);
+	raw_spin_unlock_irqrestore(&m->wait_lock, flags);
+	rt_spin_lock_slowunlock(m);
+}
+
+void __sched __write_rt_lock(struct rt_rw_lock *lock)
+{
+	struct rt_mutex *m = &lock->rtmutex;
+	struct task_struct *self = current;
+	unsigned long flags;
+
+	/* Take the rtmutex as a first step */
+	__rt_spin_lock(m);
+
+	/* Force readers into slow path */
+	atomic_sub(READER_BIAS, &lock->readers);
+
+	raw_spin_lock_irqsave(&m->wait_lock, flags);
+
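+	/*
+	 * The caller may enter with a task state already set (rwlocks stand
+	 * in for spinning locks on RT). Preserve it in saved_state and
+	 * restore it once all readers have left the critical region.
+	 */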
+	raw_spin_lock(&self->pi_lock);
+	self->saved_state = self->state;
+	__set_current_state_no_track(TASK_UNINTERRUPTIBLE);
+	raw_spin_unlock(&self->pi_lock);
+
+	for (;;) {
+		/* Have all readers left the critical region? */
+		if (!atomic_read(&lock->readers)) {
+			atomic_set(&lock->readers, WRITER_BIAS);
+			raw_spin_lock(&self->pi_lock);
+			__set_current_state_no_track(self->saved_state);
+			self->saved_state = TASK_RUNNING;
+			raw_spin_unlock(&self->pi_lock);
+			raw_spin_unlock_irqrestore(&m->wait_lock, flags);
+			return;
+		}
+
+		raw_spin_unlock_irqrestore(&m->wait_lock, flags);
+
+		if (atomic_read(&lock->readers) != 0)
+			schedule();
+
+		raw_spin_lock_irqsave(&m->wait_lock, flags);
+
+		raw_spin_lock(&self->pi_lock);
+		__set_current_state_no_track(TASK_UNINTERRUPTIBLE);
+		raw_spin_unlock(&self->pi_lock);
+	}
+}
+
+int __write_rt_trylock(struct rt_rw_lock *lock)
+{
+	struct rt_mutex *m = &lock->rtmutex;
+	unsigned long flags;
+
+	if (!__rt_mutex_trylock(m))
+		return 0;
+
+	atomic_sub(READER_BIAS, &lock->readers);
+
+	raw_spin_lock_irqsave(&m->wait_lock, flags);
+	if (!atomic_read(&lock->readers)) {
+		atomic_set(&lock->readers, WRITER_BIAS);
+		raw_spin_unlock_irqrestore(&m->wait_lock, flags);
+		return 1;
+	}
+	__write_unlock_common(lock, 0, flags);
+	return 0;
+}
+
+void __write_rt_unlock(struct rt_rw_lock *lock)
+{
+	struct rt_mutex *m = &lock->rtmutex;
+	unsigned long flags;
+
+	raw_spin_lock_irqsave(&m->wait_lock, flags);
+	__write_unlock_common(lock, WRITER_BIAS, flags);
+}
+
+/* Map the reader biased implementation */
+static inline int do_read_rt_trylock(rwlock_t *rwlock)
+{
+	return __read_rt_trylock(rwlock);
+}
+
+static inline int do_write_rt_trylock(rwlock_t *rwlock)
+{
+	return __write_rt_trylock(rwlock);
+}
+
+static inline void do_read_rt_lock(rwlock_t *rwlock)
+{
+	__read_rt_lock(rwlock);
+}
+
+static inline void do_write_rt_lock(rwlock_t *rwlock)
+{
+	__write_rt_lock(rwlock);
+}
+
+static inline void do_read_rt_unlock(rwlock_t *rwlock)
+{
+	__read_rt_unlock(rwlock);
+}
+
+static inline void do_write_rt_unlock(rwlock_t *rwlock)
+{
+	__write_rt_unlock(rwlock);
+}
+
+static inline void do_rwlock_rt_init(rwlock_t *rwlock, const char *name,
+				     struct lock_class_key *key)
+{
+	__rwlock_biased_rt_init(rwlock, name, key);
+}
+
+int __lockfunc rt_read_can_lock(rwlock_t *rwlock)
+{
+	return  atomic_read(&rwlock->readers) < 0;
+}
+
+int __lockfunc rt_write_can_lock(rwlock_t *rwlock)
+{
+	return atomic_read(&rwlock->readers) == READER_BIAS;
+}
+
+/*
+ * The common functions which get wrapped into the rwlock API.
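+ *
+ * Note: every acquisition below bumps the sleeping-lock count and disables
+ * migration before taking the lock; both are undone on release (or when a
+ * trylock fails), so the task stays on its CPU while the lock is held.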
+ */
+int __lockfunc rt_read_trylock(rwlock_t *rwlock)
+{
+	int ret;
+
+	sleeping_lock_inc();
+	migrate_disable();
+	ret = do_read_rt_trylock(rwlock);
+	if (ret) {
+		rwlock_acquire_read(&rwlock->dep_map, 0, 1, _RET_IP_);
+	} else {
+		migrate_enable();
+		sleeping_lock_dec();
+	}
+	return ret;
+}
+EXPORT_SYMBOL(rt_read_trylock);
+
+int __lockfunc rt_write_trylock(rwlock_t *rwlock)
+{
+	int ret;
+
+	sleeping_lock_inc();
+	migrate_disable();
+	ret = do_write_rt_trylock(rwlock);
+	if (ret) {
+		rwlock_acquire(&rwlock->dep_map, 0, 1, _RET_IP_);
+	} else {
+		migrate_enable();
+		sleeping_lock_dec();
+	}
+	return ret;
+}
+EXPORT_SYMBOL(rt_write_trylock);
+
+void __lockfunc rt_read_lock(rwlock_t *rwlock)
+{
+	sleeping_lock_inc();
+	migrate_disable();
+	rwlock_acquire_read(&rwlock->dep_map, 0, 0, _RET_IP_);
+	do_read_rt_lock(rwlock);
+}
+EXPORT_SYMBOL(rt_read_lock);
+
+void __lockfunc rt_write_lock(rwlock_t *rwlock)
+{
+	sleeping_lock_inc();
+	migrate_disable();
+	rwlock_acquire(&rwlock->dep_map, 0, 0, _RET_IP_);
+	do_write_rt_lock(rwlock);
+}
+EXPORT_SYMBOL(rt_write_lock);
+
+void __lockfunc rt_read_unlock(rwlock_t *rwlock)
+{
+	rwlock_release(&rwlock->dep_map, 1, _RET_IP_);
+	do_read_rt_unlock(rwlock);
+	migrate_enable();
+	sleeping_lock_dec();
+}
+EXPORT_SYMBOL(rt_read_unlock);
+
+void __lockfunc rt_write_unlock(rwlock_t *rwlock)
+{
+	rwlock_release(&rwlock->dep_map, 1, _RET_IP_);
+	do_write_rt_unlock(rwlock);
+	migrate_enable();
+	sleeping_lock_dec();
+}
+EXPORT_SYMBOL(rt_write_unlock);
+
+void __rt_rwlock_init(rwlock_t *rwlock, char *name, struct lock_class_key *key)
+{
+	do_rwlock_rt_init(rwlock, name, key);
+}
+EXPORT_SYMBOL(__rt_rwlock_init);
Index: linux-5.0.19-rt11/kernel/locking/rwsem-rt.c
===================================================================
--- /dev/null
+++ linux-5.0.19-rt11/kernel/locking/rwsem-rt.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4 @
+/*
+ */
+#include <linux/blkdev.h>
+#include <linux/rwsem.h>
+#include <linux/sched/debug.h>
+#include <linux/sched/signal.h>
+#include <linux/export.h>
+
+#include "rtmutex_common.h"
+
+/*
+ * RT-specific reader/writer semaphores
+ *
+ * down_write()
+ *  1) Lock sem->rtmutex
+ *  2) Remove the reader BIAS to force readers into the slow path
+ *  3) Wait until all readers have left the critical region
+ *  4) Mark it write locked
+ *
+ * up_write()
+ *  1) Remove the write locked marker
+ *  2) Set the reader BIAS so readers can use the fast path again
+ *  3) Unlock sem->rtmutex to release blocked readers
+ *
+ * down_read()
+ *  1) Try fast path acquisition (reader BIAS is set)
+ *  2) Take sem->rtmutex.wait_lock which protects the writelocked flag
+ *  3) If !writelocked, acquire it for read
+ *  4) If writelocked, block on sem->rtmutex
+ *  5) unlock sem->rtmutex, goto 1)
+ *
+ * up_read()
+ *  1) Try fast path release (reader count != 1)
+ *  2) Wake the writer waiting in down_write()#3
+ *
+ * down_read()#3 has the consequence that rw semaphores on RT are not writer
+ * fair, but writers, which should be avoided in RT tasks (think mmap_sem),
+ * are subject to the rtmutex priority/DL inheritance mechanism.
+ *
+ * It's possible to make the rw semaphores writer fair by keeping a list of
+ * active readers. A blocked writer would force all newly incoming readers to
+ * block on the rtmutex, but the rtmutex would have to be proxy locked for one
+ * reader after the other. We can't use multi-reader inheritance because there
+ * is no way to support that with SCHED_DEADLINE. Implementing the one by one
+ * reader boosting/handover mechanism is a major surgery for a very dubious
+ * value.
+ *
+ * The risk of writer starvation is there, but the pathological use cases
+ * which trigger it are not necessarily the typical RT workloads.
+ */
+
+void __rwsem_init(struct rw_semaphore *sem, const char *name,
+		  struct lock_class_key *key)
+{
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+	/*
+	 * Make sure we are not reinitializing a held semaphore:
+	 */
+	debug_check_no_locks_freed((void *)sem, sizeof(*sem));
+	lockdep_init_map(&sem->dep_map, name, key, 0);
+#endif
+	atomic_set(&sem->readers, READER_BIAS);
+}
+EXPORT_SYMBOL(__rwsem_init);
+
+int __down_read_trylock(struct rw_semaphore *sem)
+{
+	int r, old;
+
+	/*
+	 * Increment reader count, if sem->readers < 0, i.e. READER_BIAS is
+	 * set.
+	 */
+	for (r = atomic_read(&sem->readers); r < 0;) {
+		old = atomic_cmpxchg(&sem->readers, r, r + 1);
+		if (likely(old == r))
+			return 1;
+		r = old;
+	}
+	return 0;
+}
+
+static int __sched __down_read_common(struct rw_semaphore *sem, int state)
+{
+	struct rt_mutex *m = &sem->rtmutex;
+	struct rt_mutex_waiter waiter;
+	int ret;
+
+	if (__down_read_trylock(sem))
+		return 0;
+	/*
+	 * If rt_mutex blocks, the function sched_submit_work will not call
+	 * blk_schedule_flush_plug (because tsk_is_pi_blocked would be true).
+	 * We must call blk_schedule_flush_plug() here; if we don't, a
+	 * deadlock in I/O may happen.
+	 */
+	if (unlikely(blk_needs_flush_plug(current)))
+		blk_schedule_flush_plug(current);
+
+	might_sleep();
+	raw_spin_lock_irq(&m->wait_lock);
+	/*
+	 * Allow readers as long as the writer has not completely
+	 * acquired the semaphore for write.
+	 */
+	if (atomic_read(&sem->readers) != WRITER_BIAS) {
+		atomic_inc(&sem->readers);
+		raw_spin_unlock_irq(&m->wait_lock);
+		return 0;
+	}
+
+	/*
+	 * Call into the slow lock path with the rtmutex->wait_lock
+	 * held, so this can't result in the following race:
+	 *
+	 * Reader1		Reader2		Writer
+	 *			down_read()
+	 *					down_write()
+	 *					rtmutex_lock(m)
+	 *					swait()
+	 * down_read()
+	 * unlock(m->wait_lock)
+	 *			up_read()
+	 *			swake()
+	 *					lock(m->wait_lock)
+	 *					sem->writelocked=true
+	 *					unlock(m->wait_lock)
+	 *
+	 *					up_write()
+	 *					sem->writelocked=false
+	 *					rtmutex_unlock(m)
+	 *			down_read()
+	 *					down_write()
+	 *					rtmutex_lock(m)
+	 *					swait()
+	 * rtmutex_lock(m)
+	 *
+	 * That would put Reader1 behind the writer waiting on
+	 * Reader2 to call up_read() which might be unbound.
+	 */
+	rt_mutex_init_waiter(&waiter, false);
+	ret = rt_mutex_slowlock_locked(m, state, NULL, RT_MUTEX_MIN_CHAINWALK,
+				       NULL, &waiter);
+	/*
+	 * The slowlock() above is guaranteed to return with the rtmutex (for
+	 * ret = 0) now held, so there can't be a writer active. Increment
+	 * the reader count and immediately drop the rtmutex again.
+	 * For ret != 0 we don't hold the rtmutex; we only need to unlock the
+	 * wait_lock, since we don't own the lock in that case.
+	 */
+	if (!ret)
+		atomic_inc(&sem->readers);
+	raw_spin_unlock_irq(&m->wait_lock);
+	if (!ret)
+		__rt_mutex_unlock(m);
+
+	debug_rt_mutex_free_waiter(&waiter);
+	return ret;
+}
+
+void __down_read(struct rw_semaphore *sem)
+{
+	int ret;
+
+	ret = __down_read_common(sem, TASK_UNINTERRUPTIBLE);
+	WARN_ON_ONCE(ret);
+}
+
+int __down_read_killable(struct rw_semaphore *sem)
+{
+	int ret;
+
+	ret = __down_read_common(sem, TASK_KILLABLE);
+	if (likely(!ret))
+		return ret;
+	WARN_ONCE(ret != -EINTR, "Unexpected state: %d\n", ret);
+	return -EINTR;
+}
+
+void __up_read(struct rw_semaphore *sem)
+{
+	struct rt_mutex *m = &sem->rtmutex;
+	struct task_struct *tsk;
+
+	/*
+	 * sem->readers can only hit 0 when a writer is waiting for the
+	 * active readers to leave the critical region.
+	 */
+	if (!atomic_dec_and_test(&sem->readers))
+		return;
+
+	might_sleep();
+	raw_spin_lock_irq(&m->wait_lock);
+	/*
+	 * Wake the writer, i.e. the rtmutex owner. It might release the
+	 * rtmutex concurrently in the fast path (due to a signal), but to
+	 * clean up the rwsem it needs to acquire m->wait_lock. The worst
+	 * case which can happen is a spurious wakeup.
+	 */
+	tsk = rt_mutex_owner(m);
+	if (tsk)
+		wake_up_process(tsk);
+
+	raw_spin_unlock_irq(&m->wait_lock);
+}
+
+static void __up_write_unlock(struct rw_semaphore *sem, int bias,
+			      unsigned long flags)
+{
+	struct rt_mutex *m = &sem->rtmutex;
+
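+	/*
+	 * Put the reader BIAS back (adjusted by @bias) so readers can use
+	 * the fast path again, then release the rtmutex to let blocked
+	 * waiters through.
+	 */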
+	atomic_add(READER_BIAS - bias, &sem->readers);
+	raw_spin_unlock_irqrestore(&m->wait_lock, flags);
+	__rt_mutex_unlock(m);
+}
+
+static int __sched __down_write_common(struct rw_semaphore *sem, int state)
+{
+	struct rt_mutex *m = &sem->rtmutex;
+	unsigned long flags;
+
+	/* Take the rtmutex as a first step */
+	if (__rt_mutex_lock_state(m, state))
+		return -EINTR;
+
+	/* Force readers into slow path */
+	atomic_sub(READER_BIAS, &sem->readers);
+	might_sleep();
+
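+	/*
+	 * Wait for the readers that are still in the critical region to
+	 * leave. The last reader to leave wakes us via __up_read(); a
+	 * pending fatal signal (for the killable variant) aborts the wait
+	 * and restores the reader BIAS.
+	 */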
+	set_current_state(state);
+	for (;;) {
+		raw_spin_lock_irqsave(&m->wait_lock, flags);
+		/* Have all readers left the critical region? */
+		if (!atomic_read(&sem->readers)) {
+			atomic_set(&sem->readers, WRITER_BIAS);
+			__set_current_state(TASK_RUNNING);
+			raw_spin_unlock_irqrestore(&m->wait_lock, flags);
+			return 0;
+		}
+
+		if (signal_pending_state(state, current)) {
+			__set_current_state(TASK_RUNNING);
+			__up_write_unlock(sem, 0, flags);
+			return -EINTR;
+		}
+		raw_spin_unlock_irqrestore(&m->wait_lock, flags);
+
+		if (atomic_read(&sem->readers) != 0) {
+			schedule();
+			set_current_state(state);
+		}
+	}
+}
+
+void __sched __down_write(struct rw_semaphore *sem)
+{
+	__down_write_common(sem, TASK_UNINTERRUPTIBLE);
+}
+
+int __sched __down_write_killable(struct rw_semaphore *sem)
+{
+	return __down_write_common(sem, TASK_KILLABLE);
+}
+
+int __down_write_trylock(struct rw_semaphore *sem)
+{
+	struct rt_mutex *m = &sem->rtmutex;
+	unsigned long flags;
+
+	if (!__rt_mutex_trylock(m))
+		return 0;
+
+	atomic_sub(READER_BIAS, &sem->readers);
+
+	raw_spin_lock_irqsave(&m->wait_lock, flags);
+	if (!atomic_read(&sem->readers)) {
+		atomic_set(&sem->readers, WRITER_BIAS);
+		raw_spin_unlock_irqrestore(&m->wait_lock, flags);
+		return 1;
+	}
+	__up_write_unlock(sem, 0, flags);
+	return 0;
+}
+
+void __up_write(struct rw_semaphore *sem)
+{
+	struct rt_mutex *m = &sem->rtmutex;
+	unsigned long flags;
+
+	raw_spin_lock_irqsave(&m->wait_lock, flags);
+	__up_write_unlock(sem, WRITER_BIAS, flags);
+}
+
+void __downgrade_write(struct rw_semaphore *sem)
+{
+	struct rt_mutex *m = &sem->rtmutex;
+	unsigned long flags;
+
+	raw_spin_lock_irqsave(&m->wait_lock, flags);
+	/* Release it and account current as reader */
+	__up_write_unlock(sem, WRITER_BIAS - 1, flags);
+}
Index: linux-5.0.19-rt11/kernel/locking/spinlock.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/locking/spinlock.c
+++ linux-5.0.19-rt11/kernel/locking/spinlock.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:120 @ void __lockfunc __raw_##op##_lock_bh(loc
  *         __[spin|read|write]_lock_bh()
  */
 BUILD_LOCK_OPS(spin, raw_spinlock);
+
+#ifndef CONFIG_PREEMPT_RT_FULL
 BUILD_LOCK_OPS(read, rwlock);
 BUILD_LOCK_OPS(write, rwlock);
+#endif
 
 #endif
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:208 @ void __lockfunc _raw_spin_unlock_bh(raw_
 EXPORT_SYMBOL(_raw_spin_unlock_bh);
 #endif
 
+#ifndef CONFIG_PREEMPT_RT_FULL
+
 #ifndef CONFIG_INLINE_READ_TRYLOCK
 int __lockfunc _raw_read_trylock(rwlock_t *lock)
 {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:354 @ void __lockfunc _raw_write_unlock_bh(rwl
 EXPORT_SYMBOL(_raw_write_unlock_bh);
 #endif
 
+#endif /* !PREEMPT_RT_FULL */
+
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 
 void __lockfunc _raw_spin_lock_nested(raw_spinlock_t *lock, int subclass)
Index: linux-5.0.19-rt11/kernel/locking/spinlock_debug.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/locking/spinlock_debug.c
+++ linux-5.0.19-rt11/kernel/locking/spinlock_debug.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:34 @ void __raw_spin_lock_init(raw_spinlock_t
 
 EXPORT_SYMBOL(__raw_spin_lock_init);
 
+#ifndef CONFIG_PREEMPT_RT_FULL
 void __rwlock_init(rwlock_t *lock, const char *name,
 		   struct lock_class_key *key)
 {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:52 @ void __rwlock_init(rwlock_t *lock, const
 }
 
 EXPORT_SYMBOL(__rwlock_init);
+#endif
 
 static void spin_dump(raw_spinlock_t *lock, const char *msg)
 {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:140 @ void do_raw_spin_unlock(raw_spinlock_t *
 	arch_spin_unlock(&lock->raw_lock);
 }
 
+#ifndef CONFIG_PREEMPT_RT_FULL
 static void rwlock_bug(rwlock_t *lock, const char *msg)
 {
 	if (!debug_locks_off())
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:230 @ void do_raw_write_unlock(rwlock_t *lock)
 	debug_write_unlock(lock);
 	arch_write_unlock(&lock->raw_lock);
 }
+
+#endif
Index: linux-5.0.19-rt11/kernel/panic.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/panic.c
+++ linux-5.0.19-rt11/kernel/panic.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:226 @ void panic(const char *fmt, ...)
 	 * Bypass the panic_cpu check and call __crash_kexec directly.
 	 */
 	if (!_crash_kexec_post_notifiers) {
-		printk_safe_flush_on_panic();
 		__crash_kexec(NULL);
 
 		/*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:249 @ void panic(const char *fmt, ...)
 	 */
 	atomic_notifier_call_chain(&panic_notifier_list, 0, buf);
 
-	/* Call flush even twice. It tries harder with a single online CPU */
-	printk_safe_flush_on_panic();
 	kmsg_dump(KMSG_DUMP_PANIC);
 
 	/*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:510 @ static u64 oops_id;
 
 static int init_oops_id(void)
 {
+#ifndef CONFIG_PREEMPT_RT_FULL
 	if (!oops_id)
 		get_random_bytes(&oops_id, sizeof(oops_id));
 	else
+#endif
 		oops_id++;
 
 	return 0;
Index: linux-5.0.19-rt11/kernel/power/hibernate.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/power/hibernate.c
+++ linux-5.0.19-rt11/kernel/power/hibernate.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:684 @ static int load_image_and_restore(void)
 	return error;
 }
 
+#ifndef CONFIG_SUSPEND
+bool pm_in_action;
+#endif
+
 /**
  * hibernate - Carry out system hibernation, including saving the image.
  */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:701 @ int hibernate(void)
 		return -EPERM;
 	}
 
+	pm_in_action = true;
+
 	lock_system_sleep();
 	/* The snapshot device should not be opened while we're running */
 	if (!atomic_add_unless(&snapshot_device_available, -1, 0)) {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:781 @ int hibernate(void)
 	atomic_inc(&snapshot_device_available);
  Unlock:
 	unlock_system_sleep();
+	pm_in_action = false;
 	pr_info("hibernation exit\n");
 
 	return error;
Index: linux-5.0.19-rt11/kernel/power/suspend.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/power/suspend.c
+++ linux-5.0.19-rt11/kernel/power/suspend.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:603 @ static int enter_state(suspend_state_t s
 	return error;
 }
 
+bool pm_in_action;
+
 /**
  * pm_suspend - Externally visible function for suspending the system.
  * @state: System sleep state to enter.
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:619 @ int pm_suspend(suspend_state_t state)
 	if (state <= PM_SUSPEND_ON || state >= PM_SUSPEND_MAX)
 		return -EINVAL;
 
+	pm_in_action = true;
 	pr_info("suspend entry (%s)\n", mem_sleep_labels[state]);
 	error = enter_state(state);
 	if (error) {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:629 @ int pm_suspend(suspend_state_t state)
 		suspend_stats.success++;
 	}
 	pr_info("suspend exit\n");
+	pm_in_action = false;
 	return error;
 }
 EXPORT_SYMBOL(pm_suspend);
Index: linux-5.0.19-rt11/kernel/printk/Makefile
===================================================================
--- linux-5.0.19-rt11.orig/kernel/printk/Makefile
+++ linux-5.0.19-rt11/kernel/printk/Makefile
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1 @
 obj-y	= printk.o
-obj-$(CONFIG_PRINTK)	+= printk_safe.o
 obj-$(CONFIG_A11Y_BRAILLE_CONSOLE)	+= braille.o
Index: linux-5.0.19-rt11/kernel/printk/internal.h
===================================================================
--- linux-5.0.19-rt11.orig/kernel/printk/internal.h
+++ /dev/null
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1 @
-/*
- * internal.h - printk internal definitions
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2
- * of the License, or (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, see <http://www.gnu.org/licenses/>.
- */
-#include <linux/percpu.h>
-
-#ifdef CONFIG_PRINTK
-
-#define PRINTK_SAFE_CONTEXT_MASK	 0x3fffffff
-#define PRINTK_NMI_DIRECT_CONTEXT_MASK	 0x40000000
-#define PRINTK_NMI_CONTEXT_MASK		 0x80000000
-
-extern raw_spinlock_t logbuf_lock;
-
-__printf(5, 0)
-int vprintk_store(int facility, int level,
-		  const char *dict, size_t dictlen,
-		  const char *fmt, va_list args);
-
-__printf(1, 0) int vprintk_default(const char *fmt, va_list args);
-__printf(1, 0) int vprintk_deferred(const char *fmt, va_list args);
-__printf(1, 0) int vprintk_func(const char *fmt, va_list args);
-void __printk_safe_enter(void);
-void __printk_safe_exit(void);
-
-#define printk_safe_enter_irqsave(flags)	\
-	do {					\
-		local_irq_save(flags);		\
-		__printk_safe_enter();		\
-	} while (0)
-
-#define printk_safe_exit_irqrestore(flags)	\
-	do {					\
-		__printk_safe_exit();		\
-		local_irq_restore(flags);	\
-	} while (0)
-
-#define printk_safe_enter_irq()		\
-	do {					\
-		local_irq_disable();		\
-		__printk_safe_enter();		\
-	} while (0)
-
-#define printk_safe_exit_irq()			\
-	do {					\
-		__printk_safe_exit();		\
-		local_irq_enable();		\
-	} while (0)
-
-void defer_console_output(void);
-
-#else
-
-__printf(1, 0) int vprintk_func(const char *fmt, va_list args) { return 0; }
-
-/*
- * In !PRINTK builds we still export logbuf_lock spin_lock, console_sem
- * semaphore and some of console functions (console_unlock()/etc.), so
- * printk-safe must preserve the existing local IRQ guarantees.
- */
-#define printk_safe_enter_irqsave(flags) local_irq_save(flags)
-#define printk_safe_exit_irqrestore(flags) local_irq_restore(flags)
-
-#define printk_safe_enter_irq() local_irq_disable()
-#define printk_safe_exit_irq() local_irq_enable()
-
-#endif /* CONFIG_PRINTK */
Index: linux-5.0.19-rt11/kernel/printk/printk.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/printk/printk.c
+++ linux-5.0.19-rt11/kernel/printk/printk.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:47 @
 #include <linux/irq_work.h>
 #include <linux/ctype.h>
 #include <linux/uio.h>
+#include <linux/kthread.h>
+#include <linux/clocksource.h>
+#include <linux/printk_ringbuffer.h>
 #include <linux/sched/clock.h>
 #include <linux/sched/debug.h>
 #include <linux/sched/task_stack.h>
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:63 @
 
 #include "console_cmdline.h"
 #include "braille.h"
-#include "internal.h"
 
-int console_printk[4] = {
+int console_printk[5] = {
 	CONSOLE_LOGLEVEL_DEFAULT,	/* console_loglevel */
 	MESSAGE_LOGLEVEL_DEFAULT,	/* default_message_loglevel */
 	CONSOLE_LOGLEVEL_MIN,		/* minimum_console_loglevel */
 	CONSOLE_LOGLEVEL_DEFAULT,	/* default_console_loglevel */
+	CONSOLE_LOGLEVEL_EMERGENCY,	/* emergency_console_loglevel */
 };
 
 atomic_t ignore_console_lock_warning __read_mostly = ATOMIC_INIT(0);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:213 @ static int nr_ext_console_drivers;
 
 static int __down_trylock_console_sem(unsigned long ip)
 {
-	int lock_failed;
-	unsigned long flags;
-
-	/*
-	 * Here and in __up_console_sem() we need to be in safe mode,
-	 * because spindump/WARN/etc from under console ->lock will
-	 * deadlock in printk()->down_trylock_console_sem() otherwise.
-	 */
-	printk_safe_enter_irqsave(flags);
-	lock_failed = down_trylock(&console_sem);
-	printk_safe_exit_irqrestore(flags);
-
-	if (lock_failed)
+	if (down_trylock(&console_sem))
 		return 1;
 	mutex_acquire(&console_lock_dep_map, 0, 1, ip);
 	return 0;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:222 @ static int __down_trylock_console_sem(un
 
 static void __up_console_sem(unsigned long ip)
 {
-	unsigned long flags;
-
 	mutex_release(&console_lock_dep_map, 1, ip);
 
-	printk_safe_enter_irqsave(flags);
 	up(&console_sem);
-	printk_safe_exit_irqrestore(flags);
 }
 #define up_console_sem() __up_console_sem(_RET_IP_)
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:239 @ static void __up_console_sem(unsigned lo
 static int console_locked, console_suspended;
 
 /*
- * If exclusive_console is non-NULL then only this console is to be printed to.
- */
-static struct console *exclusive_console;
-
-/*
  *	Array of consoles built from command line options (console=)
  */
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:335 @ enum log_flags {
 
 struct printk_log {
 	u64 ts_nsec;		/* timestamp in nanoseconds */
+	u16 cpu;		/* cpu that generated record */
 	u16 len;		/* length of entire record */
 	u16 text_len;		/* length of text buffer */
 	u16 dict_len;		/* length of dictionary buffer */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:348 @ __packed __aligned(4)
 #endif
 ;
 
-/*
- * The logbuf_lock protects kmsg buffer, indices, counters.  This can be taken
- * within the scheduler's rq lock. It must be released before calling
- * console_unlock() or anything else that might wake up a process.
- */
-DEFINE_RAW_SPINLOCK(logbuf_lock);
-
-/*
- * Helper macros to lock/unlock logbuf_lock and switch between
- * printk-safe/unsafe modes.
- */
-#define logbuf_lock_irq()				\
-	do {						\
-		printk_safe_enter_irq();		\
-		raw_spin_lock(&logbuf_lock);		\
-	} while (0)
-
-#define logbuf_unlock_irq()				\
-	do {						\
-		raw_spin_unlock(&logbuf_lock);		\
-		printk_safe_exit_irq();			\
-	} while (0)
-
-#define logbuf_lock_irqsave(flags)			\
-	do {						\
-		printk_safe_enter_irqsave(flags);	\
-		raw_spin_lock(&logbuf_lock);		\
-	} while (0)
-
-#define logbuf_unlock_irqrestore(flags)		\
-	do {						\
-		raw_spin_unlock(&logbuf_lock);		\
-		printk_safe_exit_irqrestore(flags);	\
-	} while (0)
+DECLARE_STATIC_PRINTKRB_CPULOCK(printk_cpulock);
 
 #ifdef CONFIG_PRINTK
-DECLARE_WAIT_QUEUE_HEAD(log_wait);
-/* the next printk record to read by syslog(READ) or /proc/kmsg */
+/* record buffer */
+DECLARE_STATIC_PRINTKRB(printk_rb, CONFIG_LOG_BUF_SHIFT, &printk_cpulock);
+
+static DEFINE_MUTEX(syslog_lock);
+DECLARE_STATIC_PRINTKRB_ITER(syslog_iter, &printk_rb);
+
+/* the last printk record read by syslog(READ) or /proc/kmsg */
 static u64 syslog_seq;
-static u32 syslog_idx;
 static size_t syslog_partial;
 static bool syslog_time;
 
-/* index and sequence number of the first record stored in the buffer */
-static u64 log_first_seq;
-static u32 log_first_idx;
-
-/* index and sequence number of the next record to store in the buffer */
-static u64 log_next_seq;
-static u32 log_next_idx;
-
-/* the next printk record to write to the console */
-static u64 console_seq;
-static u32 console_idx;
-static u64 exclusive_console_stop_seq;
-
-/* the next printk record to read after the last 'clear' command */
+/* the last printk record at the last 'clear' command */
 static u64 clear_seq;
-static u32 clear_idx;
 
 #define PREFIX_MAX		32
 #define LOG_LINE_MAX		(1024 - PREFIX_MAX)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:371 @ static u32 clear_idx;
 #define LOG_LEVEL(v)		((v) & 0x07)
 #define LOG_FACILITY(v)		((v) >> 3 & 0xff)
 
-/* record buffer */
-#define LOG_ALIGN __alignof__(struct printk_log)
-#define __LOG_BUF_LEN (1 << CONFIG_LOG_BUF_SHIFT)
-#define LOG_BUF_LEN_MAX (u32)(1 << 31)
-static char __log_buf[__LOG_BUF_LEN] __aligned(LOG_ALIGN);
-static char *log_buf = __log_buf;
-static u32 log_buf_len = __LOG_BUF_LEN;
-
 /* Return log buffer address */
 char *log_buf_addr_get(void)
 {
-	return log_buf;
+	return printk_rb.buffer;
 }
 
 /* Return log buffer size */
 u32 log_buf_len_get(void)
 {
-	return log_buf_len;
+	return (1 << printk_rb.size_bits);
 }
 
 /* human readable text of the record */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:395 @ static char *log_dict(const struct print
 	return (char *)msg + sizeof(struct printk_log) + msg->text_len;
 }
 
-/* get record by index; idx must point to valid msg */
-static struct printk_log *log_from_idx(u32 idx)
-{
-	struct printk_log *msg = (struct printk_log *)(log_buf + idx);
-
-	/*
-	 * A length == 0 record is the end of buffer marker. Wrap around and
-	 * read the message at the start of the buffer.
-	 */
-	if (!msg->len)
-		return (struct printk_log *)log_buf;
-	return msg;
-}
-
-/* get next record; idx must point to valid msg */
-static u32 log_next(u32 idx)
-{
-	struct printk_log *msg = (struct printk_log *)(log_buf + idx);
-
-	/* length == 0 indicates the end of the buffer; wrap */
-	/*
-	 * A length == 0 record is the end of buffer marker. Wrap around and
-	 * read the message at the start of the buffer as *this* one, and
-	 * return the one after that.
-	 */
-	if (!msg->len) {
-		msg = (struct printk_log *)log_buf;
-		return msg->len;
-	}
-	return idx + msg->len;
-}
-
-/*
- * Check whether there is enough free space for the given message.
- *
- * The same values of first_idx and next_idx mean that the buffer
- * is either empty or full.
- *
- * If the buffer is empty, we must respect the position of the indexes.
- * They cannot be reset to the beginning of the buffer.
- */
-static int logbuf_has_space(u32 msg_size, bool empty)
-{
-	u32 free;
-
-	if (log_next_idx > log_first_idx || empty)
-		free = max(log_buf_len - log_next_idx, log_first_idx);
-	else
-		free = log_first_idx - log_next_idx;
-
-	/*
-	 * We need space also for an empty header that signalizes wrapping
-	 * of the buffer.
-	 */
-	return free >= msg_size + sizeof(struct printk_log);
-}
-
-static int log_make_free_space(u32 msg_size)
-{
-	while (log_first_seq < log_next_seq &&
-	       !logbuf_has_space(msg_size, false)) {
-		/* drop old messages until we have enough contiguous space */
-		log_first_idx = log_next(log_first_idx);
-		log_first_seq++;
-	}
-
-	if (clear_seq < log_first_seq) {
-		clear_seq = log_first_seq;
-		clear_idx = log_first_idx;
-	}
-
-	/* sequence numbers are equal, so the log buffer is empty */
-	if (logbuf_has_space(msg_size, log_first_seq == log_next_seq))
-		return 0;
-
-	return -ENOMEM;
-}
-
-/* compute the message size including the padding bytes */
-static u32 msg_used_size(u16 text_len, u16 dict_len, u32 *pad_len)
-{
-	u32 size;
-
-	size = sizeof(struct printk_log) + text_len + dict_len;
-	*pad_len = (-size) & (LOG_ALIGN - 1);
-	size += *pad_len;
-
-	return size;
-}
-
-/*
- * Define how much of the log buffer we could take at maximum. The value
- * must be greater than two. Note that only half of the buffer is available
- * when the index points to the middle.
- */
-#define MAX_LOG_TAKE_PART 4
-static const char trunc_msg[] = "<truncated>";
-
-static u32 truncate_msg(u16 *text_len, u16 *trunc_msg_len,
-			u16 *dict_len, u32 *pad_len)
-{
-	/*
-	 * The message should not take the whole buffer. Otherwise, it might
-	 * get removed too soon.
-	 */
-	u32 max_text_len = log_buf_len / MAX_LOG_TAKE_PART;
-	if (*text_len > max_text_len)
-		*text_len = max_text_len;
-	/* enable the warning message */
-	*trunc_msg_len = strlen(trunc_msg);
-	/* disable the "dict" completely */
-	*dict_len = 0;
-	/* compute the size again, count also the warning message */
-	return msg_used_size(*text_len + *trunc_msg_len, 0, pad_len);
-}
+static void printk_emergency(char *buffer, int level, u64 ts_nsec, u16 cpu,
+			     char *text, u16 text_len);
 
 /* insert record into the buffer, discard old ones, update heads */
 static int log_store(int facility, int level,
-		     enum log_flags flags, u64 ts_nsec,
+		     enum log_flags flags, u64 ts_nsec, u16 cpu,
 		     const char *dict, u16 dict_len,
 		     const char *text, u16 text_len)
 {
 	struct printk_log *msg;
-	u32 size, pad_len;
-	u16 trunc_msg_len = 0;
-
-	/* number of '\0' padding bytes to next message */
-	size = msg_used_size(text_len, dict_len, &pad_len);
+	struct prb_handle h;
+	char *rbuf;
+	u32 size;
 
-	if (log_make_free_space(size)) {
-		/* truncate the message if it is too long for empty buffer */
-		size = truncate_msg(&text_len, &trunc_msg_len,
-				    &dict_len, &pad_len);
-		/* survive when the log buffer is too small for trunc_msg */
-		if (log_make_free_space(size))
-			return 0;
-	}
+	size = sizeof(*msg) + text_len + dict_len;
 
-	if (log_next_idx + size + sizeof(struct printk_log) > log_buf_len) {
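+	/*
+	 * Reserve space in the ring buffer. The record only becomes
+	 * visible to readers after prb_commit() below.
+	 */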
+	rbuf = prb_reserve(&h, &printk_rb, size);
+	if (!rbuf) {
 		/*
-		 * This message + an additional empty header does not fit
-		 * at the end of the buffer. Add an empty header with len == 0
-		 * to signify a wrap around.
+		 * An emergency message would have been printed, but
+		 * it cannot be stored in the log.
 		 */
-		memset(log_buf + log_next_idx, 0, sizeof(struct printk_log));
-		log_next_idx = 0;
+		prb_inc_lost(&printk_rb);
+		return 0;
 	}
 
 	/* fill message */
-	msg = (struct printk_log *)(log_buf + log_next_idx);
+	msg = (struct printk_log *)rbuf;
 	memcpy(log_text(msg), text, text_len);
 	msg->text_len = text_len;
-	if (trunc_msg_len) {
-		memcpy(log_text(msg) + text_len, trunc_msg, trunc_msg_len);
-		msg->text_len += trunc_msg_len;
-	}
 	memcpy(log_dict(msg), dict, dict_len);
 	msg->dict_len = dict_len;
 	msg->facility = facility;
 	msg->level = level & 7;
 	msg->flags = flags & 0x1f;
-	if (ts_nsec > 0)
-		msg->ts_nsec = ts_nsec;
-	else
-		msg->ts_nsec = local_clock();
-	memset(log_dict(msg) + dict_len, 0, pad_len);
+	msg->ts_nsec = ts_nsec;
+	msg->cpu = cpu;
 	msg->len = size;
 
 	/* insert message */
-	log_next_idx += msg->len;
-	log_next_seq++;
+	prb_commit(&h);
 
 	return msg->text_len;
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:496 @ static ssize_t msg_print_ext_header(char
 
 	do_div(ts_usec, 1000);
 
-	return scnprintf(buf, size, "%u,%llu,%llu,%c;",
+	return scnprintf(buf, size, "%u,%llu,%llu,%c,%hu;",
 		       (msg->facility << 3) | msg->level, seq, ts_usec,
-		       msg->flags & LOG_CONT ? 'c' : '-');
+		       msg->flags & LOG_CONT ? 'c' : '-', msg->cpu);
 }
 
 static ssize_t msg_print_ext_body(char *buf, size_t size,
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:549 @ static ssize_t msg_print_ext_body(char *
 	return p - buf;
 }
 
+#define PRINTK_SPRINT_MAX (LOG_LINE_MAX + PREFIX_MAX)
+#define PRINTK_RECORD_MAX (sizeof(struct printk_log) + \
+				CONSOLE_EXT_LOG_MAX + PRINTK_SPRINT_MAX)
+
 /* /dev/kmsg - userspace message inject/listen interface */
 struct devkmsg_user {
 	u64 seq;
-	u32 idx;
+	struct prb_iterator iter;
 	struct ratelimit_state rs;
 	struct mutex lock;
 	char buf[CONSOLE_EXT_LOG_MAX];
+	char msgbuf[PRINTK_RECORD_MAX];
 };
 
 static __printf(3, 4) __cold
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:643 @ static ssize_t devkmsg_read(struct file
 			    size_t count, loff_t *ppos)
 {
 	struct devkmsg_user *user = file->private_data;
+	struct prb_iterator backup_iter;
 	struct printk_log *msg;
-	size_t len;
 	ssize_t ret;
+	size_t len;
+	u64 seq;
 
 	if (!user)
 		return -EBADF;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:656 @ static ssize_t devkmsg_read(struct file
 	if (ret)
 		return ret;
 
-	logbuf_lock_irq();
-	while (user->seq == log_next_seq) {
-		if (file->f_flags & O_NONBLOCK) {
-			ret = -EAGAIN;
-			logbuf_unlock_irq();
-			goto out;
-		}
+	/* make a backup copy in case there is a problem */
+	prb_iter_copy(&backup_iter, &user->iter);
 
-		logbuf_unlock_irq();
-		ret = wait_event_interruptible(log_wait,
-					       user->seq != log_next_seq);
-		if (ret)
-			goto out;
-		logbuf_lock_irq();
+	if (file->f_flags & O_NONBLOCK) {
+		ret = prb_iter_next(&user->iter, &user->msgbuf[0],
+				      sizeof(user->msgbuf), &seq);
+	} else {
+		ret = prb_iter_wait_next(&user->iter, &user->msgbuf[0],
+					   sizeof(user->msgbuf), &seq);
 	}
-
-	if (user->seq < log_first_seq) {
-		/* our last seen message is gone, return error and reset */
-		user->idx = log_first_idx;
-		user->seq = log_first_seq;
+	if (ret == 0) {
+		/* end of list */
+		ret = -EAGAIN;
+		goto out;
+	} else if (ret == -EINVAL) {
+		/* iterator invalid, return error and reset */
 		ret = -EPIPE;
-		logbuf_unlock_irq();
+		prb_iter_init(&user->iter, &printk_rb, &user->seq);
+		goto out;
+	} else if (ret < 0) {
+		/* interrupted by signal */
 		goto out;
 	}
 
-	msg = log_from_idx(user->idx);
+	if (user->seq == 0) {
+		user->seq = seq;
+	} else {
+		user->seq++;
+		if (user->seq < seq) {
+			ret = -EPIPE;
+			goto restore_out;
+		}
+	}
+
+	msg = (struct printk_log *)&user->msgbuf[0];
 	len = msg_print_ext_header(user->buf, sizeof(user->buf),
 				   msg, user->seq);
 	len += msg_print_ext_body(user->buf + len, sizeof(user->buf) - len,
 				  log_dict(msg), msg->dict_len,
 				  log_text(msg), msg->text_len);
 
-	user->idx = log_next(user->idx);
-	user->seq++;
-	logbuf_unlock_irq();
-
 	if (len > count) {
 		ret = -EINVAL;
-		goto out;
+		goto restore_out;
 	}
 
 	if (copy_to_user(buf, user->buf, len)) {
 		ret = -EFAULT;
-		goto out;
+		goto restore_out;
 	}
+
 	ret = len;
+	goto out;
+restore_out:
+	/*
+	 * There was an error, but this message should not be
+	 * lost because of it. Restore the backup and set up
+	 * seq so that it will work with the next read.
+	 */
+	prb_iter_copy(&user->iter, &backup_iter);
+	user->seq = seq - 1;
 out:
 	mutex_unlock(&user->lock);
 	return ret;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:725 @ out:
 static loff_t devkmsg_llseek(struct file *file, loff_t offset, int whence)
 {
 	struct devkmsg_user *user = file->private_data;
-	loff_t ret = 0;
+	loff_t ret;
+	u64 seq;
 
 	if (!user)
 		return -EBADF;
 	if (offset)
 		return -ESPIPE;
 
-	logbuf_lock_irq();
+	ret = mutex_lock_interruptible(&user->lock);
+	if (ret)
+		return ret;
+
 	switch (whence) {
 	case SEEK_SET:
 		/* the first record */
-		user->idx = log_first_idx;
-		user->seq = log_first_seq;
+		prb_iter_init(&user->iter, &printk_rb, &user->seq);
 		break;
 	case SEEK_DATA:
 		/*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:748 @ static loff_t devkmsg_llseek(struct file
 		 * like issued by 'dmesg -c'. Reading /dev/kmsg itself
 		 * changes no global state, and does not clear anything.
 		 */
-		user->idx = clear_idx;
-		user->seq = clear_seq;
+		for (;;) {
+			prb_iter_init(&user->iter, &printk_rb, &seq);
+			ret = prb_iter_seek(&user->iter, clear_seq);
+			if (ret > 0) {
+				/* seeked to clear seq */
+				user->seq = clear_seq;
+				break;
+			} else if (ret == 0) {
+				/*
+				 * The end of the list was hit without
+				 * ever seeing the clear seq. Just
+				 * seek to the beginning of the list.
+				 */
+				prb_iter_init(&user->iter, &printk_rb,
+						&user->seq);
+				break;
+			}
+			/* iterator invalid, start over */
+
+			/* reset clear_seq if it is no longer available */
+			if (seq > clear_seq)
+				clear_seq = 0;
+		}
+		ret = 0;
 		break;
 	case SEEK_END:
 		/* after the last record */
-		user->idx = log_next_idx;
-		user->seq = log_next_seq;
+		for (;;) {
+			ret = prb_iter_next(&user->iter, NULL, 0, &user->seq);
+			if (ret == 0)
+				break;
+			else if (ret > 0)
+				continue;
+			/* iterator invalid, start over */
+			prb_iter_init(&user->iter, &printk_rb, &user->seq);
+		}
+		ret = 0;
 		break;
 	default:
 		ret = -EINVAL;
 	}
-	logbuf_unlock_irq();
+
+	mutex_unlock(&user->lock);
 	return ret;
 }
 
+struct wait_queue_head *printk_wait_queue(void)
+{
+	/* FIXME: using prb internals! */
+	return printk_rb.wq;
+}
+
 static __poll_t devkmsg_poll(struct file *file, poll_table *wait)
 {
 	struct devkmsg_user *user = file->private_data;
+	struct prb_iterator iter;
 	__poll_t ret = 0;
+	int rbret;
+	u64 seq;
 
 	if (!user)
 		return EPOLLERR|EPOLLNVAL;
 
-	poll_wait(file, &log_wait, wait);
+	poll_wait(file, printk_wait_queue(), wait);
 
-	logbuf_lock_irq();
-	if (user->seq < log_next_seq) {
-		/* return error when data has vanished underneath us */
-		if (user->seq < log_first_seq)
-			ret = EPOLLIN|EPOLLRDNORM|EPOLLERR|EPOLLPRI;
-		else
-			ret = EPOLLIN|EPOLLRDNORM;
-	}
-	logbuf_unlock_irq();
+	mutex_lock(&user->lock);
+
+	/* use copy so no actual iteration takes place */
+	prb_iter_copy(&iter, &user->iter);
+
+	rbret = prb_iter_next(&iter, &user->msgbuf[0],
+				sizeof(user->msgbuf), &seq);
+	if (rbret == 0)
+		goto out;
+
+	ret = EPOLLIN|EPOLLRDNORM;
+
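+	/* flag an error if the iterator is invalid or records were missed */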
+	if (rbret < 0 || (seq - user->seq) != 1)
+		ret |= EPOLLERR|EPOLLPRI;
+out:
+	mutex_unlock(&user->lock);
 
 	return ret;
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:858 @ static int devkmsg_open(struct inode *in
 
 	mutex_init(&user->lock);
 
-	logbuf_lock_irq();
-	user->idx = log_first_idx;
-	user->seq = log_first_seq;
-	logbuf_unlock_irq();
+	prb_iter_init(&user->iter, &printk_rb, &user->seq);
 
 	file->private_data = user;
 	return 0;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:898 @ const struct file_operations kmsg_fops =
  */
 void log_buf_vmcoreinfo_setup(void)
 {
-	VMCOREINFO_SYMBOL(log_buf);
-	VMCOREINFO_SYMBOL(log_buf_len);
-	VMCOREINFO_SYMBOL(log_first_idx);
-	VMCOREINFO_SYMBOL(clear_idx);
-	VMCOREINFO_SYMBOL(log_next_idx);
 	/*
 	 * Export struct printk_log size and field offsets. User space tools can
 	 * parse it and detect any changes to structure down the line.
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:910 @ void log_buf_vmcoreinfo_setup(void)
 }
 #endif
 
+/* FIXME: no support for buffer resizing */
+#if 0
 /* requested log_buf_len from kernel cmdline */
 static unsigned long __initdata new_log_buf_len;
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:977 @ static void __init log_buf_add_cpu(void)
 #else /* !CONFIG_SMP */
 static inline void log_buf_add_cpu(void) {}
 #endif /* CONFIG_SMP */
+#endif /* 0 */
 
 void __init setup_log_buf(int early)
 {
+/* FIXME: no support for buffer resizing */
+#if 0
 	unsigned long flags;
 	char *new_log_buf;
 	unsigned int free;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1021 @ void __init setup_log_buf(int early)
 	pr_info("log_buf_len: %u bytes\n", log_buf_len);
 	pr_info("early log buf free: %u(%u%%)\n",
 		free, (free * 100) / __LOG_BUF_LEN);
+#endif
 }
 
 static bool __read_mostly ignore_loglevel;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1102 @ static inline void boot_delay_msec(int l
 static bool printk_time = IS_ENABLED(CONFIG_PRINTK_TIME);
 module_param_named(time, printk_time, bool, S_IRUGO | S_IWUSR);
 
+static size_t print_cpu(u16 cpu, char *buf)
+{
+	return sprintf(buf, "%03hu: ", cpu);
+}
+
 static size_t print_syslog(unsigned int level, char *buf)
 {
 	return sprintf(buf, "<%u>", level);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1129 @ static size_t print_prefix(const struct
 		len = print_syslog((msg->facility << 3) | msg->level, buf);
 	if (time)
 		len += print_time(msg->ts_nsec, buf + len);
+	len += print_cpu(msg->cpu, buf + len);
 	return len;
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1174 @ static size_t msg_print_text(const struc
 	return len;
 }
 
-static int syslog_print(char __user *buf, int size)
+static int syslog_print(char __user *buf, int size, char *text,
+			char *msgbuf, int *locked)
 {
-	char *text;
+	struct prb_iterator iter;
 	struct printk_log *msg;
 	int len = 0;
-
-	text = kmalloc(LOG_LINE_MAX + PREFIX_MAX, GFP_KERNEL);
-	if (!text)
-		return -ENOMEM;
+	u64 seq;
+	int ret;
 
 	while (size > 0) {
 		size_t n;
 		size_t skip;
 
-		logbuf_lock_irq();
-		if (syslog_seq < log_first_seq) {
-			/* messages are gone, move to first one */
-			syslog_seq = log_first_seq;
-			syslog_idx = log_first_idx;
-			syslog_partial = 0;
+		for (;;) {
+			prb_iter_copy(&iter, &syslog_iter);
+			ret = prb_iter_next(&iter, msgbuf,
+					    PRINTK_RECORD_MAX, &seq);
+			if (ret < 0) {
+				/* messages are gone, move to first one */
+				prb_iter_init(&syslog_iter, &printk_rb,
+					      &syslog_seq);
+				syslog_partial = 0;
+				continue;
+			}
+			break;
 		}
-		if (syslog_seq == log_next_seq) {
-			logbuf_unlock_irq();
+		if (ret == 0)
 			break;
+
+		/*
+		 * If messages have been missed, the partial tracker
+		 * is no longer valid and must be reset.
+		 */
+		if (syslog_seq > 0 && seq - 1 != syslog_seq) {
+			syslog_seq = seq - 1;
+			syslog_partial = 0;
 		}
 
 		/*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1219 @ static int syslog_print(char __user *buf
 		if (!syslog_partial)
 			syslog_time = printk_time;
 
+		msg = (struct printk_log *)msgbuf;
+
 		skip = syslog_partial;
-		msg = log_from_idx(syslog_idx);
 		n = msg_print_text(msg, true, syslog_time, text,
-				   LOG_LINE_MAX + PREFIX_MAX);
+				   PRINTK_SPRINT_MAX);
 		if (n - syslog_partial <= size) {
 			/* message fits into buffer, move forward */
-			syslog_idx = log_next(syslog_idx);
-			syslog_seq++;
+			prb_iter_next(&syslog_iter, NULL, 0, &syslog_seq);
 			n -= syslog_partial;
 			syslog_partial = 0;
-		} else if (!len){
+		} else if (!len) {
 			/* partial read(), remember position */
 			n = size;
 			syslog_partial += n;
 		} else
 			n = 0;
-		logbuf_unlock_irq();
 
 		if (!n)
 			break;
 
+		mutex_unlock(&syslog_lock);
 		if (copy_to_user(buf, text + skip, n)) {
 			if (!len)
 				len = -EFAULT;
+			*locked = 0;
 			break;
 		}
+		ret = mutex_lock_interruptible(&syslog_lock);
 
 		len += n;
 		size -= n;
 		buf += n;
+
+		if (ret) {
+			if (!len)
+				len = ret;
+			*locked = 0;
+			break;
+		}
 	}
 
-	kfree(text);
 	return len;
 }
 
-static int syslog_print_all(char __user *buf, int size, bool clear)
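+/*
+ * Count how much data remains in the ring buffer from the position of
+ * @iter up to (but not including) @until_seq (0 means no limit). If
+ * @records is true the count is in records, otherwise in formatted text
+ * bytes. A local copy of the iterator is used, so @iter is not advanced.
+ */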
+static int count_remaining(struct prb_iterator *iter, u64 until_seq,
+			   char *msgbuf, int size, bool records, bool time)
 {
-	char *text;
+	struct prb_iterator local_iter;
+	struct printk_log *msg;
 	int len = 0;
-	u64 next_seq;
 	u64 seq;
-	u32 idx;
+	int ret;
+
+	prb_iter_copy(&local_iter, iter);
+	for (;;) {
+		ret = prb_iter_next(&local_iter, msgbuf, size, &seq);
+		if (ret == 0) {
+			break;
+		} else if (ret < 0) {
+			/* the iter is invalid, restart from head */
+			prb_iter_init(&local_iter, &printk_rb, NULL);
+			len = 0;
+			continue;
+		}
+
+		if (until_seq && seq >= until_seq)
+			break;
+
+		if (records) {
+			len++;
+		} else {
+			msg = (struct printk_log *)msgbuf;
+			len += msg_print_text(msg, true, time, NULL, 0);
+		}
+	}
+
+	return len;
+}
+
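+/* Advance clear_seq past all records currently in the ring buffer. */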
+static void syslog_clear(void)
+{
+	struct prb_iterator iter;
+	int ret;
+
+	prb_iter_init(&iter, &printk_rb, &clear_seq);
+	for (;;) {
+		ret = prb_iter_next(&iter, NULL, 0, &clear_seq);
+		if (ret == 0)
+			break;
+		else if (ret < 0)
+			prb_iter_init(&iter, &printk_rb, &clear_seq);
+	}
+}
+
+static int syslog_print_all(char __user *buf, int size, bool clear)
+{
+	struct prb_iterator iter;
+	struct printk_log *msg;
+	char *msgbuf = NULL;
+	char *text = NULL;
+	int textlen;
+	u64 seq = 0;
+	int len = 0;
 	bool time;
+	int ret;
 
-	text = kmalloc(LOG_LINE_MAX + PREFIX_MAX, GFP_KERNEL);
+	text = kmalloc(PRINTK_SPRINT_MAX, GFP_KERNEL);
 	if (!text)
 		return -ENOMEM;
+	msgbuf = kmalloc(PRINTK_RECORD_MAX, GFP_KERNEL);
+	if (!msgbuf) {
+		kfree(text);
+		return -ENOMEM;
+	}
 
 	time = printk_time;
-	logbuf_lock_irq();
+
 	/*
-	 * Find first record that fits, including all following records,
-	 * into the user-provided buffer for this dump.
+	 * Set up iter to the last event before clear. Clear may
+	 * be lost, but keep going with a best effort.
 	 */
-	seq = clear_seq;
-	idx = clear_idx;
-	while (seq < log_next_seq) {
-		struct printk_log *msg = log_from_idx(idx);
-
-		len += msg_print_text(msg, true, time, NULL, 0);
-		idx = log_next(idx);
-		seq++;
-	}
-
-	/* move first record forward until length fits into the buffer */
-	seq = clear_seq;
-	idx = clear_idx;
-	while (len > size && seq < log_next_seq) {
-		struct printk_log *msg = log_from_idx(idx);
+	prb_iter_init(&iter, &printk_rb, NULL);
+	prb_iter_seek(&iter, clear_seq);
 
+	/* count the total bytes after clear */
+	len = count_remaining(&iter, 0, msgbuf, PRINTK_RECORD_MAX,
+			      false, time);
+
+	/* move iter forward until length fits into the buffer */
+	while (len > size) {
+		ret = prb_iter_next(&iter, msgbuf,
+				    PRINTK_RECORD_MAX, &seq);
+		if (ret == 0) {
+			break;
+		} else if (ret < 0) {
+			/*
+			 * The iter is now invalid so clear will
+			 * also be invalid. Restart from the head.
+			 */
+			prb_iter_init(&iter, &printk_rb, NULL);
+			len = count_remaining(&iter, 0, msgbuf,
+					      PRINTK_RECORD_MAX, false, time);
+			continue;
+		}
+
+		msg = (struct printk_log *)msgbuf;
 		len -= msg_print_text(msg, true, time, NULL, 0);
-		idx = log_next(idx);
-		seq++;
-	}
 
-	/* last message fitting into this dump */
-	next_seq = log_next_seq;
+		if (clear)
+			clear_seq = seq;
+	}
 
+	/* copy messages to buffer */
 	len = 0;
-	while (len >= 0 && seq < next_seq) {
-		struct printk_log *msg = log_from_idx(idx);
-		int textlen = msg_print_text(msg, true, time, text,
-					     LOG_LINE_MAX + PREFIX_MAX);
+	while (len >= 0 && len < size) {
+		if (clear)
+			clear_seq = seq;
+
+		ret = prb_iter_next(&iter, msgbuf,
+				    PRINTK_RECORD_MAX, &seq);
+		if (ret == 0) {
+			break;
+		} else if (ret < 0) {
+			/*
+			 * The iter is now invalid. Make a best
+			 * effort to grab the rest of the log
+			 * from the new head.
+			 */
+			prb_iter_init(&iter, &printk_rb, NULL);
+			continue;
+		}
 
-		idx = log_next(idx);
-		seq++;
+		msg = (struct printk_log *)msgbuf;
+		textlen = msg_print_text(msg, true, time, text,
+					 PRINTK_SPRINT_MAX);
+		if (textlen < 0) {
+			len = textlen;
+			break;
+		}
 
-		logbuf_unlock_irq();
 		if (copy_to_user(buf + len, text, textlen))
 			len = -EFAULT;
 		else
 			len += textlen;
-		logbuf_lock_irq();
-
-		if (seq < log_first_seq) {
-			/* messages are gone, move to next one */
-			seq = log_first_seq;
-			idx = log_first_idx;
-		}
 	}
 
-	if (clear) {
-		clear_seq = log_next_seq;
-		clear_idx = log_next_idx;
-	}
-	logbuf_unlock_irq();
+	if (clear && !seq)
+		syslog_clear();
 
-	kfree(text);
+	if (text)
+		kfree(text);
+	if (msgbuf)
+		kfree(msgbuf);
 	return len;
 }
 
-static void syslog_clear(void)
-{
-	logbuf_lock_irq();
-	clear_seq = log_next_seq;
-	clear_idx = log_next_idx;
-	logbuf_unlock_irq();
-}
-
 int do_syslog(int type, char __user *buf, int len, int source)
 {
 	bool clear = false;
 	static int saved_console_loglevel = LOGLEVEL_DEFAULT;
+	struct prb_iterator iter;
+	char *msgbuf = NULL;
+	char *text = NULL;
+	int locked;
 	int error;
+	int ret;
 
 	error = check_syslog_permissions(type, source);
 	if (error)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1442 @ int do_syslog(int type, char __user *buf
 			return 0;
 		if (!access_ok(buf, len))
 			return -EFAULT;
-		error = wait_event_interruptible(log_wait,
-						 syslog_seq != log_next_seq);
+
+		text = kmalloc(PRINTK_SPRINT_MAX, GFP_KERNEL);
+		msgbuf = kmalloc(PRINTK_RECORD_MAX, GFP_KERNEL);
+		if (!text || !msgbuf) {
+			error = -ENOMEM;
+			goto out;
+		}
+
+		error = mutex_lock_interruptible(&syslog_lock);
 		if (error)
-			return error;
-		error = syslog_print(buf, len);
+			goto out;
+
+		/*
+		 * Wait until a message is available. Wait on a copy of the
+		 * iterator so that the syslog iterator itself is not advanced.
+		 */
+		for (;;) {
+			prb_iter_copy(&iter, &syslog_iter);
+
+			mutex_unlock(&syslog_lock);
+			ret = prb_iter_wait_next(&iter, NULL, 0, NULL);
+			if (ret == -ERESTARTSYS) {
+				error = ret;
+				goto out;
+			}
+			error = mutex_lock_interruptible(&syslog_lock);
+			if (error)
+				goto out;
+
+			if (ret == -EINVAL) {
+				prb_iter_init(&syslog_iter, &printk_rb,
+					      &syslog_seq);
+				syslog_partial = 0;
+				continue;
+			}
+			break;
+		}
+
+		/* print as much as will fit in the user buffer */
+		locked = 1;
+		error = syslog_print(buf, len, text, msgbuf, &locked);
+		if (locked)
+			mutex_unlock(&syslog_lock);
 		break;
 	/* Read/clear last kernel messages */
 	case SYSLOG_ACTION_READ_CLEAR:
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1529 @ int do_syslog(int type, char __user *buf
 		break;
 	/* Number of chars in the log buffer */
 	case SYSLOG_ACTION_SIZE_UNREAD:
-		logbuf_lock_irq();
-		if (syslog_seq < log_first_seq) {
-			/* messages are gone, move to first one */
-			syslog_seq = log_first_seq;
-			syslog_idx = log_first_idx;
-			syslog_partial = 0;
-		}
+		msgbuf = kmalloc(PRINTK_RECORD_MAX, GFP_KERNEL);
+		if (!msgbuf)
+			return -ENOMEM;
+
+		error = mutex_lock_interruptible(&syslog_lock);
+		if (error)
+			goto out;
+
 		if (source == SYSLOG_FROM_PROC) {
 			/*
 			 * Short-cut for poll(/"proc/kmsg") which simply checks
 			 * for pending data, not the size; return the count of
 			 * records, not the length.
 			 */
-			error = log_next_seq - syslog_seq;
+			error = count_remaining(&syslog_iter, 0, msgbuf,
+						PRINTK_RECORD_MAX, true,
+						printk_time);
 		} else {
-			u64 seq = syslog_seq;
-			u32 idx = syslog_idx;
-			bool time = syslog_partial ? syslog_time : printk_time;
-
-			while (seq < log_next_seq) {
-				struct printk_log *msg = log_from_idx(idx);
-
-				error += msg_print_text(msg, true, time, NULL,
-							0);
-				time = printk_time;
-				idx = log_next(idx);
-				seq++;
-			}
+			error = count_remaining(&syslog_iter, 0, msgbuf,
+						PRINTK_RECORD_MAX, false,
+						printk_time);
 			error -= syslog_partial;
 		}
-		logbuf_unlock_irq();
+
+		mutex_unlock(&syslog_lock);
 		break;
 	/* Size of the log buffer */
 	case SYSLOG_ACTION_SIZE_BUFFER:
-		error = log_buf_len;
+		error = prb_buffer_size(&printk_rb);
 		break;
 	default:
 		error = -EINVAL;
 		break;
 	}
-
+out:
+	if (msgbuf)
+		kfree(msgbuf);
+	if (text)
+		kfree(text);
 	return error;
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1576 @ SYSCALL_DEFINE3(syslog, int, type, char
 	return do_syslog(type, buf, len, SYSLOG_FROM_READER);
 }
 
-/*
- * Special console_lock variants that help to reduce the risk of soft-lockups.
- * They allow to pass console_lock to another printk() call using a busy wait.
- */
+int printk_delay_msec __read_mostly;
 
-#ifdef CONFIG_LOCKDEP
-static struct lockdep_map console_owner_dep_map = {
-	.name = "console_owner"
-};
-#endif
+static inline void printk_delay(int level)
+{
+	boot_delay_msec(level);
+	if (unlikely(printk_delay_msec)) {
+		int m = printk_delay_msec;
 
-static DEFINE_RAW_SPINLOCK(console_owner_lock);
-static struct task_struct *console_owner;
-static bool console_waiter;
+		while (m--) {
+			mdelay(1);
+			touch_nmi_watchdog();
+		}
+	}
+}
 
-/**
- * console_lock_spinning_enable - mark beginning of code where another
- *	thread might safely busy wait
- *
- * This basically converts console_lock into a spinlock. This marks
- * the section where the console_lock owner can not sleep, because
- * there may be a waiter spinning (like a spinlock). Also it must be
- * ready to hand over the lock at the end of the section.
- */
-static void console_lock_spinning_enable(void)
-{
-	raw_spin_lock(&console_owner_lock);
-	console_owner = current;
-	raw_spin_unlock(&console_owner_lock);
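+/* Write a "messages dropped" notice directly to the given console. */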
+static void print_console_dropped(struct console *con, u64 count)
+{
+	char text[64];
+	int len;
 
-	/* The waiter may spin on us after setting console_owner */
-	spin_acquire(&console_owner_dep_map, 0, 0, _THIS_IP_);
+	len = sprintf(text, "** %llu printk message%s dropped **\n",
+		      count, count > 1 ? "s" : "");
+	con->write(con, text, len);
 }
 
-/**
- * console_lock_spinning_disable_and_check - mark end of code where another
- *	thread was able to busy wait and check if there is a waiter
- *
- * This is called at the end of the section where spinning is allowed.
- * It has two functions. First, it is a signal that it is no longer
- * safe to start busy waiting for the lock. Second, it checks if
- * there is a busy waiter and passes the lock rights to her.
- *
- * Important: Callers lose the lock if there was a busy waiter.
- *	They must not touch items synchronized by console_lock
- *	in this case.
- *
- * Return: 1 if the lock rights were passed, 0 otherwise.
- */
-static int console_lock_spinning_disable_and_check(void)
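+/*
+ * Format a record for console output. Produces the plain text output
+ * (@text/@len) and, if extended consoles are registered, the extended
+ * output (@ext_text/@ext_len). Records above the console loglevel are
+ * suppressed by returning zero lengths.
+ */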
+static void format_text(struct printk_log *msg, u64 seq,
+			char *ext_text, size_t *ext_len,
+			char *text, size_t *len, bool time)
 {
-	int waiter;
-
-	raw_spin_lock(&console_owner_lock);
-	waiter = READ_ONCE(console_waiter);
-	console_owner = NULL;
-	raw_spin_unlock(&console_owner_lock);
+	if (suppress_message_printing(msg->level)) {
+		/*
+		 * Skip records that have a level above the console
+		 * loglevel and update each console's local seq.
+		 */
+		*len = 0;
+		*ext_len = 0;
+		return;
+	}
 
-	if (!waiter) {
-		spin_release(&console_owner_dep_map, 1, _THIS_IP_);
-		return 0;
+	*len = msg_print_text(msg, console_msg_format & MSG_FORMAT_SYSLOG,
+			      time, text, PRINTK_SPRINT_MAX);
+	if (nr_ext_console_drivers) {
+		*ext_len = msg_print_ext_header(ext_text, CONSOLE_EXT_LOG_MAX,
+						msg, seq);
+		*ext_len += msg_print_ext_body(ext_text + *ext_len,
+					       CONSOLE_EXT_LOG_MAX - *ext_len,
+					       log_dict(msg), msg->dict_len,
+					       log_text(msg), msg->text_len);
+	} else {
+		*ext_len = 0;
 	}
+}
 
-	/* The waiter is now free to continue */
-	WRITE_ONCE(console_waiter, false);
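+/*
+ * Replay the existing ring buffer records up to @master_seq on a console
+ * that has not yet printed any history (see call_console_drivers()).
+ */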
+static void printk_write_history(struct console *con, u64 master_seq)
+{
+	struct prb_iterator iter;
+	bool time = printk_time;
+	static char *ext_text;
+	static char *text;
+	static char *buf;
+	u64 seq;
 
-	spin_release(&console_owner_dep_map, 1, _THIS_IP_);
+	ext_text = kmalloc(CONSOLE_EXT_LOG_MAX, GFP_KERNEL);
+	text = kmalloc(PRINTK_SPRINT_MAX, GFP_KERNEL);
+	buf = kmalloc(PRINTK_RECORD_MAX, GFP_KERNEL);
+	if (!ext_text || !text || !buf)
+		return;
 
-	/*
-	 * Hand off console_lock to waiter. The waiter will perform
-	 * the up(). After this, the waiter is the console_lock owner.
-	 */
-	mutex_release(&console_lock_dep_map, 1, _THIS_IP_);
-	return 1;
-}
+	if (!(con->flags & CON_ENABLED))
+		goto out;
 
-/**
- * console_trylock_spinning - try to get console_lock by busy waiting
- *
- * This allows to busy wait for the console_lock when the current
- * owner is running in specially marked sections. It means that
- * the current owner is running and cannot reschedule until it
- * is ready to lose the lock.
- *
- * Return: 1 if we got the lock, 0 othrewise
- */
-static int console_trylock_spinning(void)
-{
-	struct task_struct *owner = NULL;
-	bool waiter;
-	bool spin = false;
-	unsigned long flags;
+	if (!con->write)
+		goto out;
 
-	if (console_trylock())
-		return 1;
+	if (!cpu_online(raw_smp_processor_id()) &&
+	    !(con->flags & CON_ANYTIME))
+		goto out;
 
-	printk_safe_enter_irqsave(flags);
+	prb_iter_init(&iter, &printk_rb, NULL);
 
-	raw_spin_lock(&console_owner_lock);
-	owner = READ_ONCE(console_owner);
-	waiter = READ_ONCE(console_waiter);
-	if (!waiter && owner && owner != current) {
-		WRITE_ONCE(console_waiter, true);
-		spin = true;
-	}
-	raw_spin_unlock(&console_owner_lock);
-
-	/*
-	 * If there is an active printk() writing to the
-	 * consoles, instead of having it write our data too,
-	 * see if we can offload that load from the active
-	 * printer, and do some printing ourselves.
-	 * Go into a spin only if there isn't already a waiter
-	 * spinning, and there is an active printer, and
-	 * that active printer isn't us (recursive printk?).
-	 */
-	if (!spin) {
-		printk_safe_exit_irqrestore(flags);
-		return 0;
-	}
+	for (;;) {
+		struct printk_log *msg;
+		size_t ext_len;
+		size_t len;
+		int ret;
 
-	/* We spin waiting for the owner to release us */
-	spin_acquire(&console_owner_dep_map, 0, 0, _THIS_IP_);
-	/* Owner will clear console_waiter on hand off */
-	while (READ_ONCE(console_waiter))
-		cpu_relax();
-	spin_release(&console_owner_dep_map, 1, _THIS_IP_);
+		ret = prb_iter_next(&iter, buf, PRINTK_RECORD_MAX, &seq);
+		if (ret == 0) {
+			break;
+		} else if (ret < 0) {
+			prb_iter_init(&iter, &printk_rb, NULL);
+			continue;
+		}
 
-	printk_safe_exit_irqrestore(flags);
-	/*
-	 * The owner passed the console lock to us.
-	 * Since we did not spin on console lock, annotate
-	 * this as a trylock. Otherwise lockdep will
-	 * complain.
-	 */
-	mutex_acquire(&console_lock_dep_map, 0, 1, _THIS_IP_);
+		if (seq > master_seq)
+			break;
 
-	return 1;
+		con->printk_seq++;
+		if (con->printk_seq < seq) {
+			print_console_dropped(con, seq - con->printk_seq);
+			con->printk_seq = seq;
+		}
+
+		msg = (struct printk_log *)buf;
+		format_text(msg, master_seq, ext_text, &ext_len, text,
+			    &len, time);
+
+		if (len == 0 && ext_len == 0)
+			continue;
+
+		if (con->flags & CON_EXTENDED)
+			con->write(con, ext_text, ext_len);
+		else
+			con->write(con, text, len);
+
+		printk_delay(msg->level);
+	}
+out:
+	con->wrote_history = 1;
+	kfree(ext_text);
+	kfree(text);
+	kfree(buf);
 }
 
 /*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1705 @ static int console_trylock_spinning(void
  * log_buf[start] to log_buf[end - 1].
  * The console_lock must be held.
  */
-static void call_console_drivers(const char *ext_text, size_t ext_len,
-				 const char *text, size_t len)
+static void call_console_drivers(u64 seq, const char *ext_text, size_t ext_len,
+				 const char *text, size_t len, int level,
+				 int facility)
 {
 	struct console *con;
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1717 @ static void call_console_drivers(const c
 		return;
 
 	for_each_console(con) {
-		if (exclusive_console && con != exclusive_console)
-			continue;
 		if (!(con->flags & CON_ENABLED))
 			continue;
+		if (!con->wrote_history) {
+			if (con->flags & CON_PRINTBUFFER) {
+				printk_write_history(con, seq);
+				continue;
+			}
+			con->wrote_history = 1;
+			con->printk_seq = seq - 1;
+		}
+		if (con->write_atomic && level < emergency_console_loglevel &&
+		    facility == 0) {
+			/* skip emergency messages, already printed */
+			if (con->printk_seq < seq)
+				con->printk_seq = seq;
+			continue;
+		}
+		if (con->flags & CON_BOOT && facility == 0) {
+			/* skip emergency messages, already printed */
+			if (con->printk_seq < seq)
+				con->printk_seq = seq;
+			continue;
+		}
 		if (!con->write)
 			continue;
-		if (!cpu_online(smp_processor_id()) &&
+		if (!cpu_online(raw_smp_processor_id()) &&
 		    !(con->flags & CON_ANYTIME))
 			continue;
+		if (con->printk_seq >= seq)
+			continue;
+
+		con->printk_seq++;
+		if (con->printk_seq < seq) {
+			print_console_dropped(con, seq - con->printk_seq);
+			con->printk_seq = seq;
+		}
+
+		/* for suppressed messages, only seq is updated */
+		if (len == 0 && ext_len == 0)
+			continue;
+
 		if (con->flags & CON_EXTENDED)
 			con->write(con, ext_text, ext_len);
 		else
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1765 @ static void call_console_drivers(const c
 	}
 }
 
-int printk_delay_msec __read_mostly;
-
-static inline void printk_delay(void)
-{
-	if (unlikely(printk_delay_msec)) {
-		int m = printk_delay_msec;
-
-		while (m--) {
-			mdelay(1);
-			touch_nmi_watchdog();
-		}
-	}
-}
-
 /*
  * Continuation lines are buffered, and not committed to the record buffer
  * until the line is complete, or a race forces it. The line fragments
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1774 @ static inline void printk_delay(void)
 static struct cont {
 	char buf[LOG_LINE_MAX];
 	size_t len;			/* length == 0 means unused buffer */
-	struct task_struct *owner;	/* task of first print*/
+	int cpu_owner;			/* cpu of first print */
 	u64 ts_nsec;			/* time of first print */
 	u8 level;			/* log level of first message */
 	u8 facility;			/* log facility of first message */
 	enum log_flags flags;		/* prefix, newline flags */
-} cont;
+} cont[2];
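+/* cont[0] is used outside NMI context, cont[1] within NMI (see vprintk_emit()) */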
 
-static void cont_flush(void)
+static void cont_flush(int ctx)
 {
-	if (cont.len == 0)
+	struct cont *c = &cont[ctx];
+
+	if (c->len == 0)
 		return;
 
-	log_store(cont.facility, cont.level, cont.flags, cont.ts_nsec,
-		  NULL, 0, cont.buf, cont.len);
-	cont.len = 0;
+	log_store(c->facility, c->level, c->flags, c->ts_nsec, c->cpu_owner,
+		  NULL, 0, c->buf, c->len);
+	c->len = 0;
 }
 
-static bool cont_add(int facility, int level, enum log_flags flags, const char *text, size_t len)
+static void cont_add(int ctx, int cpu, int facility, int level,
+		     enum log_flags flags, const char *text, size_t len)
 {
+	struct cont *c = &cont[ctx];
+
+	if (cpu != c->cpu_owner || !(flags & LOG_CONT))
+		cont_flush(ctx);
+
 	/* If the line gets too long, split it up in separate records. */
-	if (cont.len + len > sizeof(cont.buf)) {
-		cont_flush();
-		return false;
-	}
+	while (c->len + len > sizeof(c->buf))
+		cont_flush(ctx);
 
-	if (!cont.len) {
-		cont.facility = facility;
-		cont.level = level;
-		cont.owner = current;
-		cont.ts_nsec = local_clock();
-		cont.flags = flags;
+	if (!c->len) {
+		c->facility = facility;
+		c->level = level;
+		c->cpu_owner = cpu;
+		c->ts_nsec = local_clock();
+		c->flags = flags;
 	}
 
-	memcpy(cont.buf + cont.len, text, len);
-	cont.len += len;
+	memcpy(c->buf + c->len, text, len);
+	c->len += len;
 
-	// The original flags come from the first line,
-	// but later continuations can add a newline.
+	/*
+	 * The original flags come from the first line,
+	 * but later continuations can add a newline.
+	 */
 	if (flags & LOG_NEWLINE) {
-		cont.flags |= LOG_NEWLINE;
-		cont_flush();
+		c->flags |= LOG_NEWLINE;
+		cont_flush(ctx);
 	}
-
-	return true;
 }
 
-static size_t log_output(int facility, int level, enum log_flags lflags, const char *dict, size_t dictlen, char *text, size_t text_len)
+/* ring buffer used as memory allocator for temporary sprint buffers */
+DECLARE_STATIC_PRINTKRB(sprint_rb,
+			ilog2(PRINTK_RECORD_MAX + sizeof(struct prb_entry) +
+			      sizeof(long)) + 2, &printk_cpulock);
+
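+/*
+ * Format the message into a temporary buffer reserved from sprint_rb,
+ * print emergency messages (facility 0) directly via printk_emergency(),
+ * then store the record in the ring buffer or buffer it as a
+ * continuation fragment.
+ */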
+asmlinkage int vprintk_emit(int facility, int level,
+			    const char *dict, size_t dictlen,
+			    const char *fmt, va_list args)
 {
-	/*
-	 * If an earlier line was buffered, and we're a continuation
-	 * write from the same process, try to add it to the buffer.
-	 */
-	if (cont.len) {
-		if (cont.owner == current && (lflags & LOG_CONT)) {
-			if (cont_add(facility, level, lflags, text, text_len))
-				return text_len;
-		}
-		/* Otherwise, make sure it's flushed */
-		cont_flush();
-	}
+	enum log_flags lflags = 0;
+	int ctx = !!in_nmi();
+	int printed_len = 0;
+	struct prb_handle h;
+	size_t text_len;
+	u64 ts_nsec;
+	char *text;
+	char *rbuf;
+	int cpu;
 
-	/* Skip empty continuation lines that couldn't be added - they just flush */
-	if (!text_len && (lflags & LOG_CONT))
-		return 0;
+	ts_nsec = local_clock();
 
-	/* If it doesn't end in a newline, try to buffer the current line */
-	if (!(lflags & LOG_NEWLINE)) {
-		if (cont_add(facility, level, lflags, text, text_len))
-			return text_len;
+	rbuf = prb_reserve(&h, &sprint_rb, PRINTK_SPRINT_MAX);
+	if (!rbuf) {
+		prb_inc_lost(&printk_rb);
+		return printed_len;
 	}
 
-	/* Store it in the record log */
-	return log_store(facility, level, lflags, 0, dict, dictlen, text, text_len);
-}
-
-/* Must be called under logbuf_lock. */
-int vprintk_store(int facility, int level,
-		  const char *dict, size_t dictlen,
-		  const char *fmt, va_list args)
-{
-	static char textbuf[LOG_LINE_MAX];
-	char *text = textbuf;
-	size_t text_len;
-	enum log_flags lflags = 0;
+	cpu = raw_smp_processor_id();
 
 	/*
-	 * The printf needs to come first; we need the syslog
-	 * prefix which might be passed-in as a parameter.
+	 * If this turns out to be an emergency message, there
+	 * may need to be a prefix added. Leave room for it.
 	 */
-	text_len = vscnprintf(text, sizeof(textbuf), fmt, args);
+	text = rbuf + PREFIX_MAX;
+	text_len = vscnprintf(text, PRINTK_SPRINT_MAX - PREFIX_MAX, fmt, args);
 
-	/* mark and strip a trailing newline */
+	/* strip and flag a trailing newline */
 	if (text_len && text[text_len-1] == '\n') {
 		text_len--;
 		lflags |= LOG_NEWLINE;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1896 @ int vprintk_store(int facility, int leve
 	if (dict)
 		lflags |= LOG_PREFIX|LOG_NEWLINE;
 
-	return log_output(facility, level, lflags,
-			  dict, dictlen, text, text_len);
-}
-
-asmlinkage int vprintk_emit(int facility, int level,
-			    const char *dict, size_t dictlen,
-			    const char *fmt, va_list args)
-{
-	int printed_len;
-	bool in_sched = false, pending_output;
-	unsigned long flags;
-	u64 curr_log_seq;
-
-	if (level == LOGLEVEL_SCHED) {
-		level = LOGLEVEL_DEFAULT;
-		in_sched = true;
+	/*
+	 * NOTE:
+	 * - rbuf points to beginning of allocated buffer
+	 * - text points to beginning of text
+	 * - there is room before text for prefix
+	 */
+	if (facility == 0) {
+		/* only the kernel can create emergency messages */
+		printk_emergency(rbuf, level & 7, ts_nsec, cpu,
+				 text, text_len);
 	}
 
-	boot_delay_msec(level);
-	printk_delay();
-
-	/* This stops the holder of console_sem just where we want him */
-	logbuf_lock_irqsave(flags);
-	curr_log_seq = log_next_seq;
-	printed_len = vprintk_store(facility, level, dict, dictlen, fmt, args);
-	pending_output = (curr_log_seq != log_next_seq);
-	logbuf_unlock_irqrestore(flags);
-
-	/* If called from the scheduler, we can not call up(). */
-	if (!in_sched && pending_output) {
-		/*
-		 * Disable preemption to avoid being preempted while holding
-		 * console_sem which would prevent anyone from printing to
-		 * console
-		 */
-		preempt_disable();
-		/*
-		 * Try to acquire and then immediately release the console
-		 * semaphore.  The release will print out buffers and wake up
-		 * /dev/kmsg and syslog() users.
-		 */
-		if (console_trylock_spinning())
-			console_unlock();
-		preempt_enable();
+	if ((lflags & LOG_CONT) || !(lflags & LOG_NEWLINE)) {
+		cont_add(ctx, cpu, facility, level, lflags, text, text_len);
+		printed_len = text_len;
+	} else {
+		if (cpu == cont[ctx].cpu_owner)
+			cont_flush(ctx);
+		printed_len = log_store(facility, level, lflags, ts_nsec, cpu,
+					dict, dictlen, text, text_len);
 	}
 
-	if (pending_output)
-		wake_up_klogd();
+	prb_commit(&h);
 	return printed_len;
 }
 EXPORT_SYMBOL(vprintk_emit);
 
+static __printf(1, 0) int vprintk_func(const char *fmt, va_list args)
+{
+	return vprintk_emit(0, LOGLEVEL_DEFAULT, NULL, 0, fmt, args);
+}
+
 asmlinkage int vprintk(const char *fmt, va_list args)
 {
 	return vprintk_func(fmt, args);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1984 @ asmlinkage __visible int printk(const ch
 	return r;
 }
 EXPORT_SYMBOL(printk);
-
-#else /* CONFIG_PRINTK */
-
-#define LOG_LINE_MAX		0
-#define PREFIX_MAX		0
-#define printk_time		false
-
-static u64 syslog_seq;
-static u32 syslog_idx;
-static u64 console_seq;
-static u32 console_idx;
-static u64 exclusive_console_stop_seq;
-static u64 log_first_seq;
-static u32 log_first_idx;
-static u64 log_next_seq;
-static char *log_text(const struct printk_log *msg) { return NULL; }
-static char *log_dict(const struct printk_log *msg) { return NULL; }
-static struct printk_log *log_from_idx(u32 idx) { return NULL; }
-static u32 log_next(u32 idx) { return 0; }
-static ssize_t msg_print_ext_header(char *buf, size_t size,
-				    struct printk_log *msg,
-				    u64 seq) { return 0; }
-static ssize_t msg_print_ext_body(char *buf, size_t size,
-				  char *dict, size_t dict_len,
-				  char *text, size_t text_len) { return 0; }
-static void console_lock_spinning_enable(void) { }
-static int console_lock_spinning_disable_and_check(void) { return 0; }
-static void call_console_drivers(const char *ext_text, size_t ext_len,
-				 const char *text, size_t len) {}
-static size_t msg_print_text(const struct printk_log *msg, bool syslog,
-			     bool time, char *buf, size_t size) { return 0; }
-static bool suppress_message_printing(int level) { return false; }
-
 #endif /* CONFIG_PRINTK */
 
 #ifdef CONFIG_EARLY_PRINTK
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2214 @ int is_console_locked(void)
 }
 EXPORT_SYMBOL(is_console_locked);
 
-/*
- * Check if we have any console that is capable of printing while cpu is
- * booting or shutting down. Requires console_sem.
- */
-static int have_callable_console(void)
-{
-	struct console *con;
-
-	for_each_console(con)
-		if ((con->flags & CON_ENABLED) &&
-				(con->flags & CON_ANYTIME))
-			return 1;
-
-	return 0;
-}
-
-/*
- * Can we actually use the console at this time on this cpu?
- *
- * Console drivers may assume that per-cpu resources have been allocated. So
- * unless they're explicitly marked as being able to cope (CON_ANYTIME) don't
- * call them until this CPU is officially up.
- */
-static inline int can_use_console(void)
-{
-	return cpu_online(raw_smp_processor_id()) || have_callable_console();
-}
-
 /**
  * console_unlock - unlock the console system
  *
  * Releases the console_lock which the caller holds on the console system
  * and the console driver list.
  *
- * While the console_lock was held, console output may have been buffered
- * by printk().  If this is the case, console_unlock(); emits
- * the output prior to releasing the lock.
- *
- * If there is output waiting, we wake /dev/kmsg and syslog() users.
- *
  * console_unlock(); may be called from any context.
  */
 void console_unlock(void)
 {
-	static char ext_text[CONSOLE_EXT_LOG_MAX];
-	static char text[LOG_LINE_MAX + PREFIX_MAX];
-	unsigned long flags;
-	bool do_cond_resched, retry;
-
 	if (console_suspended) {
 		up_console_sem();
 		return;
 	}
 
-	/*
-	 * Console drivers are called with interrupts disabled, so
-	 * @console_may_schedule should be cleared before; however, we may
-	 * end up dumping a lot of lines, for example, if called from
-	 * console registration path, and should invoke cond_resched()
-	 * between lines if allowable.  Not doing so can cause a very long
-	 * scheduling stall on a slow console leading to RCU stall and
-	 * softlockup warnings which exacerbate the issue with more
-	 * messages practically incapacitating the system.
-	 *
-	 * console_trylock() is not able to detect the preemptive
-	 * context reliably. Therefore the value must be stored before
-	 * and cleared after the the "again" goto label.
-	 */
-	do_cond_resched = console_may_schedule;
-again:
-	console_may_schedule = 0;
-
-	/*
-	 * We released the console_sem lock, so we need to recheck if
-	 * cpu is online and (if not) is there at least one CON_ANYTIME
-	 * console.
-	 */
-	if (!can_use_console()) {
-		console_locked = 0;
-		up_console_sem();
-		return;
-	}
-
-	for (;;) {
-		struct printk_log *msg;
-		size_t ext_len = 0;
-		size_t len;
-
-		printk_safe_enter_irqsave(flags);
-		raw_spin_lock(&logbuf_lock);
-		if (console_seq < log_first_seq) {
-			len = sprintf(text,
-				      "** %llu printk messages dropped **\n",
-				      log_first_seq - console_seq);
-
-			/* messages are gone, move to first one */
-			console_seq = log_first_seq;
-			console_idx = log_first_idx;
-		} else {
-			len = 0;
-		}
-skip:
-		if (console_seq == log_next_seq)
-			break;
-
-		msg = log_from_idx(console_idx);
-		if (suppress_message_printing(msg->level)) {
-			/*
-			 * Skip record we have buffered and already printed
-			 * directly to the console when we received it, and
-			 * record that has level above the console loglevel.
-			 */
-			console_idx = log_next(console_idx);
-			console_seq++;
-			goto skip;
-		}
-
-		/* Output to all consoles once old messages replayed. */
-		if (unlikely(exclusive_console &&
-			     console_seq >= exclusive_console_stop_seq)) {
-			exclusive_console = NULL;
-		}
-
-		len += msg_print_text(msg,
-				console_msg_format & MSG_FORMAT_SYSLOG,
-				printk_time, text + len, sizeof(text) - len);
-		if (nr_ext_console_drivers) {
-			ext_len = msg_print_ext_header(ext_text,
-						sizeof(ext_text),
-						msg, console_seq);
-			ext_len += msg_print_ext_body(ext_text + ext_len,
-						sizeof(ext_text) - ext_len,
-						log_dict(msg), msg->dict_len,
-						log_text(msg), msg->text_len);
-		}
-		console_idx = log_next(console_idx);
-		console_seq++;
-		raw_spin_unlock(&logbuf_lock);
-
-		/*
-		 * While actively printing out messages, if another printk()
-		 * were to occur on another CPU, it may wait for this one to
-		 * finish. This task can not be preempted if there is a
-		 * waiter waiting to take over.
-		 */
-		console_lock_spinning_enable();
-
-		stop_critical_timings();	/* don't trace print latency */
-		call_console_drivers(ext_text, ext_len, text, len);
-		start_critical_timings();
-
-		if (console_lock_spinning_disable_and_check()) {
-			printk_safe_exit_irqrestore(flags);
-			return;
-		}
-
-		printk_safe_exit_irqrestore(flags);
-
-		if (do_cond_resched)
-			cond_resched();
-	}
-
 	console_locked = 0;
-
-	raw_spin_unlock(&logbuf_lock);
-
 	up_console_sem();
-
-	/*
-	 * Someone could have filled up the buffer again, so re-check if there's
-	 * something to flush. In case we cannot trylock the console_sem again,
-	 * there's a new owner and the console_unlock() from them will do the
-	 * flush, no worries.
-	 */
-	raw_spin_lock(&logbuf_lock);
-	retry = console_seq != log_next_seq;
-	raw_spin_unlock(&logbuf_lock);
-	printk_safe_exit_irqrestore(flags);
-
-	if (retry && console_trylock())
-		goto again;
 }
 EXPORT_SYMBOL(console_unlock);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2280 @ void console_unblank(void)
 void console_flush_on_panic(void)
 {
 	/*
-	 * If someone else is holding the console lock, trylock will fail
-	 * and may_schedule may be set.  Ignore and proceed to unlock so
-	 * that messages are flushed out.  As this can be called from any
-	 * context and we don't want to get preempted while flushing,
-	 * ensure may_schedule is cleared.
+	 * FIXME: This is currently a NOP. Emergency messages will have been
+	 * printed, but what if write_atomic is not available on the
+	 * console? What if the printk kthread is still alive?
 	 */
-	console_trylock();
-	console_may_schedule = 0;
-	console_unlock();
 }
 
 /*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2361 @ early_param("keep_bootcon", keep_bootcon
 void register_console(struct console *newcon)
 {
 	int i;
-	unsigned long flags;
 	struct console *bcon = NULL;
 	struct console_cmdline *c;
 	static bool has_preferred;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2476 @ void register_console(struct console *ne
 	if (newcon->flags & CON_EXTENDED)
 		nr_ext_console_drivers++;
 
-	if (newcon->flags & CON_PRINTBUFFER) {
-		/*
-		 * console_unlock(); will print out the buffered messages
-		 * for us.
-		 */
-		logbuf_lock_irqsave(flags);
-		console_seq = syslog_seq;
-		console_idx = syslog_idx;
-		/*
-		 * We're about to replay the log buffer.  Only do this to the
-		 * just-registered console to avoid excessive message spam to
-		 * the already-registered consoles.
-		 *
-		 * Set exclusive_console with disabled interrupts to reduce
-		 * race window with eventual console_flush_on_panic() that
-		 * ignores console_lock.
-		 */
-		exclusive_console = newcon;
-		exclusive_console_stop_seq = console_seq;
-		logbuf_unlock_irqrestore(flags);
-	}
 	console_unlock();
 	console_sysfs_notify();
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2485 @ void register_console(struct console *ne
 	 * boot consoles, real consoles, etc - this is to ensure that end
 	 * users know there might be something in the kernel's log buffer that
 	 * went to the bootconsole (that they do not see on the real console)
+	 *
+	 * This message is also important because it will trigger the
+	 * printk kthread to begin dumping the log buffer to the newly
+	 * registered console.
 	 */
 	pr_info("%sconsole [%s%d] enabled\n",
 		(newcon->flags & CON_BOOT) ? "boot" : "" ,
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2632 @ static int __init printk_late_init(void)
 late_initcall(printk_late_init);
 
 #if defined CONFIG_PRINTK
-/*
- * Delayed printk version, for scheduler-internal messages:
- */
-#define PRINTK_PENDING_WAKEUP	0x01
-#define PRINTK_PENDING_OUTPUT	0x02
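+/*
+ * The printk kthread waits for new ring buffer records and writes them
+ * out to the registered consoles under console_lock.
+ */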
+static int printk_kthread_func(void *data)
+{
+	struct prb_iterator iter;
+	struct printk_log *msg;
+	size_t ext_len;
+	char *ext_text;
+	u64 master_seq;
+	size_t len;
+	char *text;
+	char *buf;
+	int ret;
 
-static DEFINE_PER_CPU(int, printk_pending);
+	ext_text = kmalloc(CONSOLE_EXT_LOG_MAX, GFP_KERNEL);
+	text = kmalloc(PRINTK_SPRINT_MAX, GFP_KERNEL);
+	buf = kmalloc(PRINTK_RECORD_MAX, GFP_KERNEL);
+	if (!ext_text || !text || !buf)
+		return -1;
 
-static void wake_up_klogd_work_func(struct irq_work *irq_work)
-{
-	int pending = __this_cpu_xchg(printk_pending, 0);
+	prb_iter_init(&iter, &printk_rb, NULL);
 
-	if (pending & PRINTK_PENDING_OUTPUT) {
-		/* If trylock fails, someone else is doing the printing */
-		if (console_trylock())
-			console_unlock();
+	/* the printk kthread never exits */
+	for (;;) {
+		ret = prb_iter_wait_next(&iter, buf,
+					 PRINTK_RECORD_MAX, &master_seq);
+		if (ret == -ERESTARTSYS) {
+			continue;
+		} else if (ret < 0) {
+			/* iterator invalid, start over */
+			prb_iter_init(&iter, &printk_rb, NULL);
+			continue;
+		}
+
+		msg = (struct printk_log *)buf;
+		format_text(msg, master_seq, ext_text, &ext_len, text,
+			    &len, printk_time);
+
+		console_lock();
+		call_console_drivers(master_seq, ext_text, ext_len, text, len,
+				     msg->level, msg->facility);
+		if (len > 0 || ext_len > 0)
+			printk_delay(msg->level);
+		console_unlock();
 	}
 
-	if (pending & PRINTK_PENDING_WAKEUP)
-		wake_up_interruptible(&log_wait);
-}
+	kfree(ext_text);
+	kfree(text);
+	kfree(buf);
 
-static DEFINE_PER_CPU(struct irq_work, wake_up_klogd_work) = {
-	.func = wake_up_klogd_work_func,
-	.flags = IRQ_WORK_LAZY,
-};
+	return 0;
+}
 
-void wake_up_klogd(void)
+static int __init init_printk_kthread(void)
 {
-	preempt_disable();
-	if (waitqueue_active(&log_wait)) {
-		this_cpu_or(printk_pending, PRINTK_PENDING_WAKEUP);
-		irq_work_queue(this_cpu_ptr(&wake_up_klogd_work));
+	struct task_struct *thread;
+
+	thread = kthread_run(printk_kthread_func, NULL, "printk");
+	if (IS_ERR(thread)) {
+		pr_err("printk: unable to create printing thread\n");
+		return PTR_ERR(thread);
 	}
-	preempt_enable();
-}
 
-void defer_console_output(void)
-{
-	preempt_disable();
-	__this_cpu_or(printk_pending, PRINTK_PENDING_OUTPUT);
-	irq_work_queue(this_cpu_ptr(&wake_up_klogd_work));
-	preempt_enable();
+	return 0;
 }
+late_initcall(init_printk_kthread);
 
-int vprintk_deferred(const char *fmt, va_list args)
+static int vprintk_deferred(const char *fmt, va_list args)
 {
-	int r;
-
-	r = vprintk_emit(0, LOGLEVEL_SCHED, NULL, 0, fmt, args);
-	defer_console_output();
-
-	return r;
+	return vprintk_emit(0, LOGLEVEL_DEFAULT, NULL, 0, fmt, args);
 }
 
 int printk_deferred(const char *fmt, ...)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2821 @ module_param_named(always_kmsg_dump, alw
  */
 void kmsg_dump(enum kmsg_dump_reason reason)
 {
+	struct kmsg_dumper dumper_local;
 	struct kmsg_dumper *dumper;
-	unsigned long flags;
 
 	if ((reason > KMSG_DUMP_OOPS) && !always_kmsg_dump)
 		return;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2832 @ void kmsg_dump(enum kmsg_dump_reason rea
 		if (dumper->max_reason && reason > dumper->max_reason)
 			continue;
 
-		/* initialize iterator with data about the stored records */
-		dumper->active = true;
+		/*
+		 * use a local copy to avoid modifying the
+		 * iterator used by any other cpus/contexts
+		 */
+		memcpy(&dumper_local, dumper, sizeof(dumper_local));
 
-		logbuf_lock_irqsave(flags);
-		dumper->cur_seq = clear_seq;
-		dumper->cur_idx = clear_idx;
-		dumper->next_seq = log_next_seq;
-		dumper->next_idx = log_next_idx;
-		logbuf_unlock_irqrestore(flags);
+		/* initialize iterator with data about the stored records */
+		dumper_local.active = true;
+		kmsg_dump_rewind(&dumper_local);
 
 		/* invoke dumper which will iterate over records */
-		dumper->dump(dumper, reason);
-
-		/* reset iterator */
-		dumper->active = false;
+		dumper_local.dump(&dumper_local, reason);
 	}
 	rcu_read_unlock();
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2870 @ void kmsg_dump(enum kmsg_dump_reason rea
 bool kmsg_dump_get_line_nolock(struct kmsg_dumper *dumper, bool syslog,
 			       char *line, size_t size, size_t *len)
 {
+	struct prb_iterator iter;
 	struct printk_log *msg;
-	size_t l = 0;
-	bool ret = false;
+	struct prb_handle h;
+	bool cont = false;
+	char *msgbuf;
+	char *rbuf;
+	size_t l;
+	u64 seq;
+	int ret;
 
 	if (!dumper->active)
-		goto out;
+		return cont;
 
-	if (dumper->cur_seq < log_first_seq) {
-		/* messages are gone, move to first available one */
-		dumper->cur_seq = log_first_seq;
-		dumper->cur_idx = log_first_idx;
+	rbuf = prb_reserve(&h, &sprint_rb, PRINTK_RECORD_MAX);
+	if (!rbuf)
+		return cont;
+	msgbuf = rbuf;
+retry:
+	for (;;) {
+		prb_iter_init(&iter, &printk_rb, &seq);
+
+		if (dumper->line_seq == seq) {
+			/* already where we want to be */
+			break;
+		} else if (dumper->line_seq < seq) {
+			/* messages are gone, move to first available one */
+			dumper->line_seq = seq;
+			break;
+		}
+
+		ret = prb_iter_seek(&iter, dumper->line_seq);
+		if (ret > 0) {
+			/* sought to line_seq */
+			break;
+		} else if (ret == 0) {
+			/*
+			 * The end of the list was hit without ever seeing
+			 * line_seq. Reset it to the beginning of the list.
+			 */
+			prb_iter_init(&iter, &printk_rb, &dumper->line_seq);
+			break;
+		}
+		/* iterator invalid, start over */
 	}
 
-	/* last entry */
-	if (dumper->cur_seq >= log_next_seq)
+	ret = prb_iter_next(&iter, msgbuf, PRINTK_RECORD_MAX,
+			    &dumper->line_seq);
+	if (ret == 0)
 		goto out;
+	else if (ret < 0)
+		goto retry;
 
-	msg = log_from_idx(dumper->cur_idx);
+	msg = (struct printk_log *)msgbuf;
 	l = msg_print_text(msg, syslog, printk_time, line, size);
 
-	dumper->cur_idx = log_next(dumper->cur_idx);
-	dumper->cur_seq++;
-	ret = true;
-out:
 	if (len)
 		*len = l;
-	return ret;
+	cont = true;
+out:
+	prb_commit(&h);
+	return cont;
 }
 
 /**
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2953 @ out:
 bool kmsg_dump_get_line(struct kmsg_dumper *dumper, bool syslog,
 			char *line, size_t size, size_t *len)
 {
-	unsigned long flags;
 	bool ret;
 
-	logbuf_lock_irqsave(flags);
 	ret = kmsg_dump_get_line_nolock(dumper, syslog, line, size, len);
-	logbuf_unlock_irqrestore(flags);
 
 	return ret;
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2983 @ EXPORT_SYMBOL_GPL(kmsg_dump_get_line);
 bool kmsg_dump_get_buffer(struct kmsg_dumper *dumper, bool syslog,
 			  char *buf, size_t size, size_t *len)
 {
-	unsigned long flags;
-	u64 seq;
-	u32 idx;
-	u64 next_seq;
-	u32 next_idx;
-	size_t l = 0;
-	bool ret = false;
+	struct prb_iterator iter;
 	bool time = printk_time;
+	struct printk_log *msg;
+	u64 new_end_seq = 0;
+	struct prb_handle h;
+	bool cont = false;
+	char *msgbuf;
+	u64 end_seq;
+	int textlen;
+	u64 seq = 0;
+	char *rbuf;
+	int l = 0;
+	int ret;
 
 	if (!dumper->active)
-		goto out;
+		return cont;
 
-	logbuf_lock_irqsave(flags);
-	if (dumper->cur_seq < log_first_seq) {
-		/* messages are gone, move to first available one */
-		dumper->cur_seq = log_first_seq;
-		dumper->cur_idx = log_first_idx;
-	}
+	rbuf = prb_reserve(&h, &sprint_rb, PRINTK_RECORD_MAX);
+	if (!rbuf)
+		return cont;
+	msgbuf = rbuf;
 
-	/* last entry */
-	if (dumper->cur_seq >= dumper->next_seq) {
-		logbuf_unlock_irqrestore(flags);
-		goto out;
-	}
+	prb_iter_init(&iter, &printk_rb, NULL);
 
-	/* calculate length of entire buffer */
-	seq = dumper->cur_seq;
-	idx = dumper->cur_idx;
-	while (seq < dumper->next_seq) {
-		struct printk_log *msg = log_from_idx(idx);
+	/*
+	 * seek to the start record, which is set/modified
+	 * by kmsg_dump_get_line_nolock()
+	 */
+	ret = prb_iter_seek(&iter, dumper->line_seq);
+	if (ret <= 0)
+		prb_iter_init(&iter, &printk_rb, &seq);
 
-		l += msg_print_text(msg, true, time, NULL, 0);
-		idx = log_next(idx);
-		seq++;
+	/* work with a local end seq to have a constant value */
+	end_seq = dumper->buffer_end_seq;
+	if (!end_seq) {
+		/* initialize end seq to "infinity" */
+		end_seq = -1;
+		dumper->buffer_end_seq = end_seq;
 	}
+retry:
+	if (seq >= end_seq)
+		goto out;
 
-	/* move first record forward until length fits into the buffer */
-	seq = dumper->cur_seq;
-	idx = dumper->cur_idx;
-	while (l > size && seq < dumper->next_seq) {
-		struct printk_log *msg = log_from_idx(idx);
+	/* count the total bytes after seq */
+	textlen = count_remaining(&iter, end_seq, msgbuf,
+				  PRINTK_RECORD_MAX, 0, time);
+
+	/* move iter forward until length fits into the buffer */
+	while (textlen > size) {
+		ret = prb_iter_next(&iter, msgbuf, PRINTK_RECORD_MAX, &seq);
+		if (ret == 0) {
+			break;
+		} else if (ret < 0) {
+			prb_iter_init(&iter, &printk_rb, &seq);
+			goto retry;
+		}
 
-		l -= msg_print_text(msg, true, time, NULL, 0);
-		idx = log_next(idx);
-		seq++;
+		msg = (struct printk_log *)msgbuf;
+		textlen -= msg_print_text(msg, true, time, NULL, 0);
 	}
 
-	/* last message in next interation */
-	next_seq = seq;
-	next_idx = idx;
+	/* save end seq for the next iteration */
+	new_end_seq = seq + 1;
+
+	/* copy messages to buffer */
+	while (l < size) {
+		ret = prb_iter_next(&iter, msgbuf, PRINTK_RECORD_MAX, &seq);
+		if (ret == 0) {
+			break;
+		} else if (ret < 0) {
+			/*
+			 * iterator (and thus also the start position)
+			 * invalid, start over from beginning of list
+			 */
+			prb_iter_init(&iter, &printk_rb, NULL);
+			continue;
+		}
 
-	l = 0;
-	while (seq < dumper->next_seq) {
-		struct printk_log *msg = log_from_idx(idx);
+		if (seq >= end_seq)
+			break;
 
-		l += msg_print_text(msg, syslog, time, buf + l, size - l);
-		idx = log_next(idx);
-		seq++;
+		msg = (struct printk_log *)msgbuf;
+		textlen = msg_print_text(msg, syslog, time, buf + l, size - l);
+		if (textlen > 0)
+			l += textlen;
+		cont = true;
 	}
 
-	dumper->next_seq = next_seq;
-	dumper->next_idx = next_idx;
-	ret = true;
-	logbuf_unlock_irqrestore(flags);
-out:
-	if (len)
+	if (cont && len)
 		*len = l;
-	return ret;
+out:
+	prb_commit(&h);
+	if (new_end_seq)
+		dumper->buffer_end_seq = new_end_seq;
+	return cont;
 }
 EXPORT_SYMBOL_GPL(kmsg_dump_get_buffer);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3093 @ EXPORT_SYMBOL_GPL(kmsg_dump_get_buffer);
  */
 void kmsg_dump_rewind_nolock(struct kmsg_dumper *dumper)
 {
-	dumper->cur_seq = clear_seq;
-	dumper->cur_idx = clear_idx;
-	dumper->next_seq = log_next_seq;
-	dumper->next_idx = log_next_idx;
+	dumper->line_seq = 0;
+	dumper->buffer_end_seq = 0;
 }
 
 /**
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3107 @ void kmsg_dump_rewind_nolock(struct kmsg
  */
 void kmsg_dump_rewind(struct kmsg_dumper *dumper)
 {
-	unsigned long flags;
-
-	logbuf_lock_irqsave(flags);
 	kmsg_dump_rewind_nolock(dumper);
-	logbuf_unlock_irqrestore(flags);
 }
 EXPORT_SYMBOL_GPL(kmsg_dump_rewind);
 
+static bool console_can_emergency(int level)
+{
+	struct console *con;
+
+	for_each_console(con) {
+		if (!(con->flags & CON_ENABLED))
+			continue;
+		if (con->write_atomic && level < emergency_console_loglevel)
+			return true;
+		if (con->write && (con->flags & CON_BOOT))
+			return true;
+	}
+	return false;
+}
+
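+/*
+ * Write @text directly to every console that can take it in this context:
+ * via ->write_atomic() for messages below the emergency loglevel, or via
+ * ->write() for boot consoles.
+ */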
+static void call_emergency_console_drivers(int level, const char *text,
+					   size_t text_len)
+{
+	struct console *con;
+
+	for_each_console(con) {
+		if (!(con->flags & CON_ENABLED))
+			continue;
+		if (con->write_atomic && level < emergency_console_loglevel) {
+			con->write_atomic(con, text, text_len);
+			continue;
+		}
+		if (con->write && (con->flags & CON_BOOT)) {
+			con->write(con, text, text_len);
+			continue;
+		}
+	}
+}
+
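+/*
+ * Print an emergency message immediately from the caller's context,
+ * bypassing the printk kthread. The prefix is constructed in @buffer and
+ * moved in front of @text, so @text must have PREFIX_MAX bytes of
+ * headroom.
+ */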
+static void printk_emergency(char *buffer, int level, u64 ts_nsec, u16 cpu,
+			     char *text, u16 text_len)
+{
+	struct printk_log msg;
+	size_t prefix_len;
+
+	if (!console_can_emergency(level))
+		return;
+
+	msg.level = level;
+	msg.ts_nsec = ts_nsec;
+	msg.cpu = cpu;
+	msg.facility = 0;
+
+	/* "text" must have PREFIX_MAX preceding bytes available */
+
+	prefix_len = print_prefix(&msg,
+				  console_msg_format & MSG_FORMAT_SYSLOG,
+				  printk_time, buffer);
+	/* move the prefix forward to the beginning of the message text */
+	text -= prefix_len;
+	memmove(text, buffer, prefix_len);
+	text_len += prefix_len;
+
+	text[text_len++] = '\n';
+
+	call_emergency_console_drivers(level, text, text_len);
+
+	touch_softlockup_watchdog_sync();
+	clocksource_touch_watchdog();
+	rcu_cpu_stall_reset();
+	touch_nmi_watchdog();
+
+	printk_delay(level);
+}
 #endif
+
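+/* Exported wrappers around the printk CPU lock (prb_lock()/prb_unlock()). */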
+void console_atomic_lock(unsigned int *flags)
+{
+	prb_lock(&printk_cpulock, flags);
+}
+EXPORT_SYMBOL(console_atomic_lock);
+
+void console_atomic_unlock(unsigned int flags)
+{
+	prb_unlock(&printk_cpulock, flags);
+}
+EXPORT_SYMBOL(console_atomic_unlock);
Index: linux-5.0.19-rt11/kernel/printk/printk_safe.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/printk/printk_safe.c
+++ /dev/null
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1 @
-/*
- * printk_safe.c - Safe printk for printk-deadlock-prone contexts
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2
- * of the License, or (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, see <http://www.gnu.org/licenses/>.
- */
-
-#include <linux/preempt.h>
-#include <linux/spinlock.h>
-#include <linux/debug_locks.h>
-#include <linux/smp.h>
-#include <linux/cpumask.h>
-#include <linux/irq_work.h>
-#include <linux/printk.h>
-
-#include "internal.h"
-
-/*
- * printk() could not take logbuf_lock in NMI context. Instead,
- * it uses an alternative implementation that temporary stores
- * the strings into a per-CPU buffer. The content of the buffer
- * is later flushed into the main ring buffer via IRQ work.
- *
- * The alternative implementation is chosen transparently
- * by examinig current printk() context mask stored in @printk_context
- * per-CPU variable.
- *
- * The implementation allows to flush the strings also from another CPU.
- * There are situations when we want to make sure that all buffers
- * were handled or when IRQs are blocked.
- */
-static int printk_safe_irq_ready __read_mostly;
-
-#define SAFE_LOG_BUF_LEN ((1 << CONFIG_PRINTK_SAFE_LOG_BUF_SHIFT) -	\
-				sizeof(atomic_t) -			\
-				sizeof(atomic_t) -			\
-				sizeof(struct irq_work))
-
-struct printk_safe_seq_buf {
-	atomic_t		len;	/* length of written data */
-	atomic_t		message_lost;
-	struct irq_work		work;	/* IRQ work that flushes the buffer */
-	unsigned char		buffer[SAFE_LOG_BUF_LEN];
-};
-
-static DEFINE_PER_CPU(struct printk_safe_seq_buf, safe_print_seq);
-static DEFINE_PER_CPU(int, printk_context);
-
-#ifdef CONFIG_PRINTK_NMI
-static DEFINE_PER_CPU(struct printk_safe_seq_buf, nmi_print_seq);
-#endif
-
-/* Get flushed in a more safe context. */
-static void queue_flush_work(struct printk_safe_seq_buf *s)
-{
-	if (printk_safe_irq_ready)
-		irq_work_queue(&s->work);
-}
-
-/*
- * Add a message to per-CPU context-dependent buffer. NMI and printk-safe
- * have dedicated buffers, because otherwise printk-safe preempted by
- * NMI-printk would have overwritten the NMI messages.
- *
- * The messages are flushed from irq work (or from panic()), possibly,
- * from other CPU, concurrently with printk_safe_log_store(). Should this
- * happen, printk_safe_log_store() will notice the buffer->len mismatch
- * and repeat the write.
- */
-static __printf(2, 0) int printk_safe_log_store(struct printk_safe_seq_buf *s,
-						const char *fmt, va_list args)
-{
-	int add;
-	size_t len;
-	va_list ap;
-
-again:
-	len = atomic_read(&s->len);
-
-	/* The trailing '\0' is not counted into len. */
-	if (len >= sizeof(s->buffer) - 1) {
-		atomic_inc(&s->message_lost);
-		queue_flush_work(s);
-		return 0;
-	}
-
-	/*
-	 * Make sure that all old data have been read before the buffer
-	 * was reset. This is not needed when we just append data.
-	 */
-	if (!len)
-		smp_rmb();
-
-	va_copy(ap, args);
-	add = vscnprintf(s->buffer + len, sizeof(s->buffer) - len, fmt, ap);
-	va_end(ap);
-	if (!add)
-		return 0;
-
-	/*
-	 * Do it once again if the buffer has been flushed in the meantime.
-	 * Note that atomic_cmpxchg() is an implicit memory barrier that
-	 * makes sure that the data were written before updating s->len.
-	 */
-	if (atomic_cmpxchg(&s->len, len, len + add) != len)
-		goto again;
-
-	queue_flush_work(s);
-	return add;
-}
-
-static inline void printk_safe_flush_line(const char *text, int len)
-{
-	/*
-	 * Avoid any console drivers calls from here, because we may be
-	 * in NMI or printk_safe context (when in panic). The messages
-	 * must go only into the ring buffer at this stage.  Consoles will
-	 * get explicitly called later when a crashdump is not generated.
-	 */
-	printk_deferred("%.*s", len, text);
-}
-
-/* printk part of the temporary buffer line by line */
-static int printk_safe_flush_buffer(const char *start, size_t len)
-{
-	const char *c, *end;
-	bool header;
-
-	c = start;
-	end = start + len;
-	header = true;
-
-	/* Print line by line. */
-	while (c < end) {
-		if (*c == '\n') {
-			printk_safe_flush_line(start, c - start + 1);
-			start = ++c;
-			header = true;
-			continue;
-		}
-
-		/* Handle continuous lines or missing new line. */
-		if ((c + 1 < end) && printk_get_level(c)) {
-			if (header) {
-				c = printk_skip_level(c);
-				continue;
-			}
-
-			printk_safe_flush_line(start, c - start);
-			start = c++;
-			header = true;
-			continue;
-		}
-
-		header = false;
-		c++;
-	}
-
-	/* Check if there was a partial line. Ignore pure header. */
-	if (start < end && !header) {
-		static const char newline[] = KERN_CONT "\n";
-
-		printk_safe_flush_line(start, end - start);
-		printk_safe_flush_line(newline, strlen(newline));
-	}
-
-	return len;
-}
-
-static void report_message_lost(struct printk_safe_seq_buf *s)
-{
-	int lost = atomic_xchg(&s->message_lost, 0);
-
-	if (lost)
-		printk_deferred("Lost %d message(s)!\n", lost);
-}
-
-/*
- * Flush data from the associated per-CPU buffer. The function
- * can be called either via IRQ work or independently.
- */
-static void __printk_safe_flush(struct irq_work *work)
-{
-	static raw_spinlock_t read_lock =
-		__RAW_SPIN_LOCK_INITIALIZER(read_lock);
-	struct printk_safe_seq_buf *s =
-		container_of(work, struct printk_safe_seq_buf, work);
-	unsigned long flags;
-	size_t len;
-	int i;
-
-	/*
-	 * The lock has two functions. First, one reader has to flush all
-	 * available message to make the lockless synchronization with
-	 * writers easier. Second, we do not want to mix messages from
-	 * different CPUs. This is especially important when printing
-	 * a backtrace.
-	 */
-	raw_spin_lock_irqsave(&read_lock, flags);
-
-	i = 0;
-more:
-	len = atomic_read(&s->len);
-
-	/*
-	 * This is just a paranoid check that nobody has manipulated
-	 * the buffer an unexpected way. If we printed something then
-	 * @len must only increase. Also it should never overflow the
-	 * buffer size.
-	 */
-	if ((i && i >= len) || len > sizeof(s->buffer)) {
-		const char *msg = "printk_safe_flush: internal error\n";
-
-		printk_safe_flush_line(msg, strlen(msg));
-		len = 0;
-	}
-
-	if (!len)
-		goto out; /* Someone else has already flushed the buffer. */
-
-	/* Make sure that data has been written up to the @len */
-	smp_rmb();
-	i += printk_safe_flush_buffer(s->buffer + i, len - i);
-
-	/*
-	 * Check that nothing has got added in the meantime and truncate
-	 * the buffer. Note that atomic_cmpxchg() is an implicit memory
-	 * barrier that makes sure that the data were copied before
-	 * updating s->len.
-	 */
-	if (atomic_cmpxchg(&s->len, len, 0) != len)
-		goto more;
-
-out:
-	report_message_lost(s);
-	raw_spin_unlock_irqrestore(&read_lock, flags);
-}
-
-/**
- * printk_safe_flush - flush all per-cpu nmi buffers.
- *
- * The buffers are flushed automatically via IRQ work. This function
- * is useful only when someone wants to be sure that all buffers have
- * been flushed at some point.
- */
-void printk_safe_flush(void)
-{
-	int cpu;
-
-	for_each_possible_cpu(cpu) {
-#ifdef CONFIG_PRINTK_NMI
-		__printk_safe_flush(&per_cpu(nmi_print_seq, cpu).work);
-#endif
-		__printk_safe_flush(&per_cpu(safe_print_seq, cpu).work);
-	}
-}
-
-/**
- * printk_safe_flush_on_panic - flush all per-cpu nmi buffers when the system
- *	goes down.
- *
- * Similar to printk_safe_flush() but it can be called even in NMI context when
- * the system goes down. It does the best effort to get NMI messages into
- * the main ring buffer.
- *
- * Note that it could try harder when there is only one CPU online.
- */
-void printk_safe_flush_on_panic(void)
-{
-	/*
-	 * Make sure that we could access the main ring buffer.
-	 * Do not risk a double release when more CPUs are up.
-	 */
-	if (raw_spin_is_locked(&logbuf_lock)) {
-		if (num_online_cpus() > 1)
-			return;
-
-		debug_locks_off();
-		raw_spin_lock_init(&logbuf_lock);
-	}
-
-	printk_safe_flush();
-}
-
-#ifdef CONFIG_PRINTK_NMI
-/*
- * Safe printk() for NMI context. It uses a per-CPU buffer to
- * store the message. NMIs are not nested, so there is always only
- * one writer running. But the buffer might get flushed from another
- * CPU, so we need to be careful.
- */
-static __printf(1, 0) int vprintk_nmi(const char *fmt, va_list args)
-{
-	struct printk_safe_seq_buf *s = this_cpu_ptr(&nmi_print_seq);
-
-	return printk_safe_log_store(s, fmt, args);
-}
-
-void notrace printk_nmi_enter(void)
-{
-	this_cpu_or(printk_context, PRINTK_NMI_CONTEXT_MASK);
-}
-
-void notrace printk_nmi_exit(void)
-{
-	this_cpu_and(printk_context, ~PRINTK_NMI_CONTEXT_MASK);
-}
-
-/*
- * Marks a code that might produce many messages in NMI context
- * and the risk of losing them is more critical than eventual
- * reordering.
- *
- * It has effect only when called in NMI context. Then printk()
- * will try to store the messages into the main logbuf directly
- * and use the per-CPU buffers only as a fallback when the lock
- * is not available.
- */
-void printk_nmi_direct_enter(void)
-{
-	if (this_cpu_read(printk_context) & PRINTK_NMI_CONTEXT_MASK)
-		this_cpu_or(printk_context, PRINTK_NMI_DIRECT_CONTEXT_MASK);
-}
-
-void printk_nmi_direct_exit(void)
-{
-	this_cpu_and(printk_context, ~PRINTK_NMI_DIRECT_CONTEXT_MASK);
-}
-
-#else
-
-static __printf(1, 0) int vprintk_nmi(const char *fmt, va_list args)
-{
-	return 0;
-}
-
-#endif /* CONFIG_PRINTK_NMI */
-
-/*
- * Lock-less printk(), to avoid deadlocks should the printk() recurse
- * into itself. It uses a per-CPU buffer to store the message, just like
- * NMI.
- */
-static __printf(1, 0) int vprintk_safe(const char *fmt, va_list args)
-{
-	struct printk_safe_seq_buf *s = this_cpu_ptr(&safe_print_seq);
-
-	return printk_safe_log_store(s, fmt, args);
-}
-
-/* Can be preempted by NMI. */
-void __printk_safe_enter(void)
-{
-	this_cpu_inc(printk_context);
-}
-
-/* Can be preempted by NMI. */
-void __printk_safe_exit(void)
-{
-	this_cpu_dec(printk_context);
-}
-
-__printf(1, 0) int vprintk_func(const char *fmt, va_list args)
-{
-	/*
-	 * Try to use the main logbuf even in NMI. But avoid calling console
-	 * drivers that might have their own locks.
-	 */
-	if ((this_cpu_read(printk_context) & PRINTK_NMI_DIRECT_CONTEXT_MASK) &&
-	    raw_spin_trylock(&logbuf_lock)) {
-		int len;
-
-		len = vprintk_store(0, LOGLEVEL_DEFAULT, NULL, 0, fmt, args);
-		raw_spin_unlock(&logbuf_lock);
-		defer_console_output();
-		return len;
-	}
-
-	/* Use extra buffer in NMI when logbuf_lock is taken or in safe mode. */
-	if (this_cpu_read(printk_context) & PRINTK_NMI_CONTEXT_MASK)
-		return vprintk_nmi(fmt, args);
-
-	/* Use extra buffer to prevent a recursion deadlock in safe mode. */
-	if (this_cpu_read(printk_context) & PRINTK_SAFE_CONTEXT_MASK)
-		return vprintk_safe(fmt, args);
-
-	/* No obstacles. */
-	return vprintk_default(fmt, args);
-}
-
-void __init printk_safe_init(void)
-{
-	int cpu;
-
-	for_each_possible_cpu(cpu) {
-		struct printk_safe_seq_buf *s;
-
-		s = &per_cpu(safe_print_seq, cpu);
-		init_irq_work(&s->work, __printk_safe_flush);
-
-#ifdef CONFIG_PRINTK_NMI
-		s = &per_cpu(nmi_print_seq, cpu);
-		init_irq_work(&s->work, __printk_safe_flush);
-#endif
-	}
-
-	/*
-	 * In the highly unlikely event that a NMI were to trigger at
-	 * this moment. Make sure IRQ work is set up before this
-	 * variable is set.
-	 */
-	barrier();
-	printk_safe_irq_ready = 1;
-
-	/* Flush pending messages that did not have scheduled IRQ works. */
-	printk_safe_flush();
-}
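
The flush loop deleted above is built around a simple lockless handshake: writers only ever grow s->len, while the single flushing reader copies the committed bytes and then tries to reset the length to zero with cmpxchg, retrying from the new offset if a writer appended more in the meantime. The sketch below is a minimal user-space model of that retry logic only; all names are hypothetical, and the memory-ordering barriers the real code pairs with smp_rmb() are intentionally omitted.

  /*
   * Minimal user-space model of the flush/cmpxchg retry loop removed above.
   * All names are hypothetical; ordering barriers are intentionally omitted.
   */
  #include <stdatomic.h>
  #include <stdio.h>
  #include <string.h>

  #define BUF_SZ 128

  struct seq_buf {
          atomic_int len;                 /* bytes committed by writers */
          char buffer[BUF_SZ];
  };

  /* Writer side: reserve space by growing len, then copy the message in. */
  static void buf_write(struct seq_buf *s, const char *msg)
  {
          int add = (int)strlen(msg);
          int old = atomic_load(&s->len);

          while (old + add <= BUF_SZ &&
                 !atomic_compare_exchange_weak(&s->len, &old, old + add))
                  ;                       /* lost a race: retry with new old */
          if (old + add <= BUF_SZ)
                  memcpy(s->buffer + old, msg, (size_t)add);
  }

  /* Reader side: flush everything, retrying if writers appended meanwhile. */
  static void buf_flush(struct seq_buf *s)
  {
          int i = 0, len;

  more:
          len = atomic_load(&s->len);
          if (!len)
                  return;                 /* nothing (left) to flush */
          fwrite(s->buffer + i, 1, (size_t)(len - i), stdout);
          i = len;
          /* Truncate only if nothing was added since len was sampled. */
          if (!atomic_compare_exchange_strong(&s->len, &len, 0))
                  goto more;
          putchar('\n');
  }

  int main(void)
  {
          struct seq_buf s = { .len = 0 };

          buf_write(&s, "hello ");
          buf_write(&s, "world");
          buf_flush(&s);
          return 0;
  }

Built with any C11 compiler, the demo prints the two concatenated writes as a single flushed line.
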
Index: linux-5.0.19-rt11/kernel/ptrace.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/ptrace.c
+++ linux-5.0.19-rt11/kernel/ptrace.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:179 @ static bool ptrace_freeze_traced(struct
 
 	spin_lock_irq(&task->sighand->siglock);
 	if (task_is_traced(task) && !__fatal_signal_pending(task)) {
-		task->state = __TASK_TRACED;
+		unsigned long flags;
+
+		raw_spin_lock_irqsave(&task->pi_lock, flags);
+		if (task->state & __TASK_TRACED)
+			task->state = __TASK_TRACED;
+		else
+			task->saved_state = __TASK_TRACED;
+		raw_spin_unlock_irqrestore(&task->pi_lock, flags);
 		ret = true;
 	}
 	spin_unlock_irq(&task->sighand->siglock);
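
On PREEMPT_RT a task that blocks on a sleeping spinlock parks its real state in ->saved_state while the rtmutex code owns ->state, which is why the hunk above must put __TASK_TRACED into whichever of the two fields currently carries it. A small user-space model of that two-slot bookkeeping (hypothetical names, no locking) is sketched below.

  /*
   * User-space model of the ->state / ->saved_state split used on
   * PREEMPT_RT.  Hypothetical names; no locking is modelled.
   */
  #include <stdio.h>

  #define TASK_RUNNING            0x0
  #define TASK_UNINTERRUPTIBLE    0x2
  #define __TASK_TRACED           0x8

  struct task {
          unsigned int state;        /* what the scheduler currently sees */
          unsigned int saved_state;  /* real state while on a sleeping lock */
  };

  /* Blocking on an rtmutex-based spinlock stashes the current state. */
  static void block_on_sleeping_lock(struct task *t)
  {
          t->saved_state = t->state;
          t->state = TASK_UNINTERRUPTIBLE;
  }

  /*
   * Model of the ptrace_freeze_traced() hunk: update whichever slot
   * currently carries the traced state.
   */
  static void freeze_traced(struct task *t)
  {
          if (t->state & __TASK_TRACED)
                  t->state = __TASK_TRACED;
          else
                  t->saved_state = __TASK_TRACED;
  }

  int main(void)
  {
          struct task t = { .state = __TASK_TRACED, .saved_state = TASK_RUNNING };

          block_on_sleeping_lock(&t);     /* tracee sleeps on a spinlock */
          freeze_traced(&t);              /* the freeze lands in saved_state */
          printf("state=%#x saved_state=%#x\n", t.state, t.saved_state);
          return 0;
  }
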
Index: linux-5.0.19-rt11/kernel/rcu/Kconfig
===================================================================
--- linux-5.0.19-rt11.orig/kernel/rcu/Kconfig
+++ linux-5.0.19-rt11/kernel/rcu/Kconfig
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:193 @ config RCU_FAST_NO_HZ
 
 config RCU_BOOST
 	bool "Enable RCU priority boosting"
-	depends on RT_MUTEXES && PREEMPT_RCU && RCU_EXPERT
-	default n
+	depends on (RT_MUTEXES && PREEMPT_RCU && RCU_EXPERT) || PREEMPT_RT_FULL
+	default y if PREEMPT_RT_FULL
 	help
 	  This option boosts the priority of preempted RCU readers that
 	  block the current preemptible RCU grace period for too long.
Index: linux-5.0.19-rt11/kernel/rcu/srcutree.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/rcu/srcutree.c
+++ linux-5.0.19-rt11/kernel/rcu/srcutree.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:41 @
 #include <linux/delay.h>
 #include <linux/module.h>
 #include <linux/srcu.h>
+#include <linux/locallock.h>
 
 #include "rcu.h"
 #include "rcu_segcblist.h"
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:62 @ static bool __read_mostly srcu_init_done
 static void srcu_invoke_callbacks(struct work_struct *work);
 static void srcu_reschedule(struct srcu_struct *ssp, unsigned long delay);
 static void process_srcu(struct work_struct *work);
+static void srcu_delay_timer(struct timer_list *t);
 
 /* Wrappers for lock acquisition and release, see raw_spin_lock_rcu_node(). */
 #define spin_lock_rcu_node(p)					\
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:161 @ static void init_srcu_struct_nodes(struc
 			snp->grphi = cpu;
 		}
 		sdp->cpu = cpu;
-		INIT_DELAYED_WORK(&sdp->work, srcu_invoke_callbacks);
+		INIT_WORK(&sdp->work, srcu_invoke_callbacks);
+		timer_setup(&sdp->delay_work, srcu_delay_timer, 0);
 		sdp->ssp = ssp;
 		sdp->grpmask = 1 << (cpu - sdp->mynode->grplo);
 		if (is_static)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:392 @ void _cleanup_srcu_struct(struct srcu_st
 	} else {
 		flush_delayed_work(&ssp->work);
 	}
-	for_each_possible_cpu(cpu)
+	for_each_possible_cpu(cpu) {
+		struct srcu_data *sdp = per_cpu_ptr(ssp->sda, cpu);
+
 		if (quiesced) {
-			if (WARN_ON(delayed_work_pending(&per_cpu_ptr(ssp->sda, cpu)->work)))
+			if (WARN_ON(timer_pending(&sdp->delay_work)))
+				return; /* Just leak it! */
+			if (WARN_ON(work_pending(&sdp->work)))
 				return; /* Just leak it! */
 		} else {
-			flush_delayed_work(&per_cpu_ptr(ssp->sda, cpu)->work);
+			del_timer_sync(&sdp->delay_work);
+			flush_work(&sdp->work);
 		}
+	}
 	if (WARN_ON(rcu_seq_state(READ_ONCE(ssp->srcu_gp_seq)) != SRCU_STATE_IDLE) ||
 	    WARN_ON(srcu_readers_active(ssp))) {
 		pr_info("%s: Active srcu_struct %p state: %d\n",
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:475 @ static void srcu_gp_start(struct srcu_st
 	WARN_ON_ONCE(state != SRCU_STATE_SCAN1);
 }
 
-/*
- * Track online CPUs to guide callback workqueue placement.
- */
-DEFINE_PER_CPU(bool, srcu_online);
 
-void srcu_online_cpu(unsigned int cpu)
+static void srcu_delay_timer(struct timer_list *t)
 {
-	WRITE_ONCE(per_cpu(srcu_online, cpu), true);
-}
+	struct srcu_data *sdp = container_of(t, struct srcu_data, delay_work);
 
-void srcu_offline_cpu(unsigned int cpu)
-{
-	WRITE_ONCE(per_cpu(srcu_online, cpu), false);
+	queue_work_on(sdp->cpu, rcu_gp_wq, &sdp->work);
 }
 
-/*
- * Place the workqueue handler on the specified CPU if online, otherwise
- * just run it whereever.  This is useful for placing workqueue handlers
- * that are to invoke the specified CPU's callbacks.
- */
-static bool srcu_queue_delayed_work_on(int cpu, struct workqueue_struct *wq,
-				       struct delayed_work *dwork,
+static void srcu_queue_delayed_work_on(struct srcu_data *sdp,
 				       unsigned long delay)
 {
-	bool ret;
+	if (!delay) {
+		queue_work_on(sdp->cpu, rcu_gp_wq, &sdp->work);
+		return;
+	}
 
-	preempt_disable();
-	if (READ_ONCE(per_cpu(srcu_online, cpu)))
-		ret = queue_delayed_work_on(cpu, wq, dwork, delay);
-	else
-		ret = queue_delayed_work(wq, dwork, delay);
-	preempt_enable();
-	return ret;
+	timer_reduce(&sdp->delay_work, jiffies + delay);
 }
 
 /*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:500 @ static bool srcu_queue_delayed_work_on(i
  */
 static void srcu_schedule_cbs_sdp(struct srcu_data *sdp, unsigned long delay)
 {
-	srcu_queue_delayed_work_on(sdp->cpu, rcu_gp_wq, &sdp->work, delay);
+	srcu_queue_delayed_work_on(sdp, delay);
 }
 
 /*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:756 @ static void srcu_flip(struct srcu_struct
 	smp_mb(); /* D */  /* Pairs with C. */
 }
 
+static DEFINE_LOCAL_IRQ_LOCK(sp_llock);
 /*
  * If SRCU is likely idle, return true, otherwise return false.
  *
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:786 @ static bool srcu_might_be_idle(struct sr
 	unsigned long t;
 
 	/* If the local srcu_data structure has callbacks, not idle.  */
-	local_irq_save(flags);
+	local_lock_irqsave(sp_llock, flags);
 	sdp = this_cpu_ptr(ssp->sda);
 	if (rcu_segcblist_pend_cbs(&sdp->srcu_cblist)) {
-		local_irq_restore(flags);
+		local_unlock_irqrestore(sp_llock, flags);
 		return false; /* Callbacks already present, so not idle. */
 	}
-	local_irq_restore(flags);
+	local_unlock_irqrestore(sp_llock, flags);
 
 	/*
 	 * No local callbacks, so probabalistically probe global state.
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:872 @ void __call_srcu(struct srcu_struct *ssp
 	}
 	rhp->func = func;
 	idx = srcu_read_lock(ssp);
-	local_irq_save(flags);
+	local_lock_irqsave(sp_llock, flags);
 	sdp = this_cpu_ptr(ssp->sda);
 	spin_lock_rcu_node(sdp);
 	rcu_segcblist_enqueue(&sdp->srcu_cblist, rhp, false);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:888 @ void __call_srcu(struct srcu_struct *ssp
 		sdp->srcu_gp_seq_needed_exp = s;
 		needexp = true;
 	}
-	spin_unlock_irqrestore_rcu_node(sdp, flags);
+	spin_unlock_rcu_node(sdp);
+	local_unlock_irqrestore(sp_llock, flags);
 	if (needgp)
 		srcu_funnel_gp_start(ssp, sdp, s, do_norm);
 	else if (needexp)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1184 @ static void srcu_invoke_callbacks(struct
 	struct srcu_data *sdp;
 	struct srcu_struct *ssp;
 
-	sdp = container_of(work, struct srcu_data, work.work);
+	sdp = container_of(work, struct srcu_data, work);
+
 	ssp = sdp->ssp;
 	rcu_cblist_init(&ready_cbs);
 	spin_lock_irq_rcu_node(sdp);
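
The srcutree.c hunks above split the former per-CPU delayed_work into a plain work item plus a timer whose callback queues that work: a zero delay queues the work directly, and timer_reduce() only ever moves an already-pending timer earlier. Assuming that reading of the change, the "earliest deadline wins" decision can be modelled in user space as follows; all names are illustrative.

  /*
   * User-space model of srcu_queue_delayed_work_on() above: a zero delay
   * queues the work directly, otherwise a single timer is armed and only
   * ever moved earlier (timer_reduce() semantics).  Names are illustrative.
   */
  #include <stdbool.h>
  #include <stdio.h>

  struct delayed {
          bool armed;
          unsigned long expires;          /* absolute time in "jiffies" */
  };

  static void run_work(void)
  {
          puts("work queued for immediate execution");
  }

  /* Arm the timer unless it already fires earlier. */
  static void timer_reduce(struct delayed *d, unsigned long expires)
  {
          if (!d->armed || expires < d->expires) {
                  d->expires = expires;
                  d->armed = true;
          }
  }

  static void queue_delayed(struct delayed *d, unsigned long now,
                            unsigned long delay)
  {
          if (!delay) {
                  run_work();             /* models queue_work_on() */
                  return;
          }
          timer_reduce(d, now + delay);
  }

  int main(void)
  {
          struct delayed d = { .armed = false };

          queue_delayed(&d, 100, 10);     /* arms the timer for t=110 */
          queue_delayed(&d, 100, 50);     /* later deadline: ignored */
          queue_delayed(&d, 100, 5);      /* earlier deadline: moved to t=105 */
          printf("timer expires at t=%lu\n", d.expires);
          queue_delayed(&d, 100, 0);      /* zero delay: queue right away */
          return 0;
  }
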
Index: linux-5.0.19-rt11/kernel/rcu/tree.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/rcu/tree.c
+++ linux-5.0.19-rt11/kernel/rcu/tree.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:65 @
 #include <linux/suspend.h>
 #include <linux/ftrace.h>
 #include <linux/tick.h>
+#include <linux/gfp.h>
+#include <linux/oom.h>
+#include <linux/smpboot.h>
+#include <linux/jiffies.h>
+#include <linux/sched/isolation.h>
+#include "../time/tick-internal.h"
 
 #include "tree.h"
 #include "rcu.h"
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1155 @ static int rcu_implicit_dynticks_qs(stru
 		    !rdp->rcu_iw_pending && rdp->rcu_iw_gp_seq != rnp->gp_seq &&
 		    (rnp->ffmask & rdp->grpmask)) {
 			init_irq_work(&rdp->rcu_iw, rcu_iw_handler);
+			rdp->rcu_iw.flags = IRQ_WORK_HARD_IRQ;
 			rdp->rcu_iw_pending = true;
 			rdp->rcu_iw_gp_seq = rnp->gp_seq;
 			irq_work_queue_on(&rdp->rcu_iw, rdp->cpu);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2735 @ EXPORT_SYMBOL_GPL(rcu_fwd_progress_check
  * structures.  This may be called only from the CPU to whom the rdp
  * belongs.
  */
-static __latent_entropy void rcu_process_callbacks(struct softirq_action *unused)
+static __latent_entropy void rcu_process_callbacks(void)
 {
 	unsigned long flags;
 	struct rcu_data *rdp = raw_cpu_ptr(&rcu_data);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2777 @ static __latent_entropy void rcu_process
 	trace_rcu_utilization(TPS("End RCU core"));
 }
 
+static DEFINE_PER_CPU(struct task_struct *, rcu_cpu_kthread_task);
+
 /*
  * Schedule RCU callback invocation.  If the running implementation of RCU
  * does not support RCU priority boosting, just do a direct call, otherwise
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2790 @ static void invoke_rcu_callbacks(struct
 {
 	if (unlikely(!READ_ONCE(rcu_scheduler_fully_active)))
 		return;
-	if (likely(!rcu_state.boost)) {
-		rcu_do_batch(rdp);
+	rcu_do_batch(rdp);
+}
+
+static void rcu_wake_cond(struct task_struct *t, int status)
+{
+	/*
+	 * If the thread is yielding, only wake it when this
+	 * is invoked from idle
+	 */
+	if (t && (status != RCU_KTHREAD_YIELDING || is_idle_task(current)))
+		wake_up_process(t);
+}
+
+/*
+ * Wake up this CPU's rcuc kthread to do RCU core processing.
+ */
+static void invoke_rcu_core(void)
+{
+	unsigned long flags;
+	struct task_struct *t;
+
+	if (!cpu_online(smp_processor_id()))
 		return;
+	local_irq_save(flags);
+	__this_cpu_write(rcu_cpu_has_work, 1);
+	t = __this_cpu_read(rcu_cpu_kthread_task);
+	if (t != NULL && current != t)
+		rcu_wake_cond(t, __this_cpu_read(rcu_cpu_kthread_status));
+	local_irq_restore(flags);
+}
+
+static void rcu_cpu_kthread_park(unsigned int cpu)
+{
+	per_cpu(rcu_cpu_kthread_status, cpu) = RCU_KTHREAD_OFFCPU;
+}
+
+static int rcu_cpu_kthread_should_run(unsigned int cpu)
+{
+	return __this_cpu_read(rcu_cpu_has_work);
+}
+
+/*
+ * Per-CPU kernel thread that invokes RCU callbacks.  This replaces
+ * the RCU softirq used in configurations of RCU that do not support RCU
+ * priority boosting.
+ */
+static void rcu_cpu_kthread(unsigned int cpu)
+{
+	unsigned int *statusp = this_cpu_ptr(&rcu_cpu_kthread_status);
+	char work, *workp = this_cpu_ptr(&rcu_cpu_has_work);
+	int spincnt;
+
+	for (spincnt = 0; spincnt < 10; spincnt++) {
+		trace_rcu_utilization(TPS("Start CPU kthread@rcu_wait"));
+		local_bh_disable();
+		*statusp = RCU_KTHREAD_RUNNING;
+		this_cpu_inc(rcu_cpu_kthread_loops);
+		local_irq_disable();
+		work = *workp;
+		*workp = 0;
+		local_irq_enable();
+		if (work)
+			rcu_process_callbacks();
+		local_bh_enable();
+		if (*workp == 0) {
+			trace_rcu_utilization(TPS("End CPU kthread@rcu_wait"));
+			*statusp = RCU_KTHREAD_WAITING;
+			return;
+		}
 	}
-	invoke_rcu_callbacks_kthread();
+	*statusp = RCU_KTHREAD_YIELDING;
+	trace_rcu_utilization(TPS("Start CPU kthread@rcu_yield"));
+	schedule_timeout_interruptible(2);
+	trace_rcu_utilization(TPS("End CPU kthread@rcu_yield"));
+	*statusp = RCU_KTHREAD_WAITING;
 }
 
-static void invoke_rcu_core(void)
+static struct smp_hotplug_thread rcu_cpu_thread_spec = {
+	.store			= &rcu_cpu_kthread_task,
+	.thread_should_run	= rcu_cpu_kthread_should_run,
+	.thread_fn		= rcu_cpu_kthread,
+	.thread_comm		= "rcuc/%u",
+	.setup			= rcu_cpu_kthread_setup,
+	.park			= rcu_cpu_kthread_park,
+};
+
+/*
+ * Spawn per-CPU RCU core processing kthreads.
+ */
+static int __init rcu_spawn_core_kthreads(void)
 {
-	if (cpu_online(smp_processor_id()))
-		raise_softirq(RCU_SOFTIRQ);
+	int cpu;
+
+	for_each_possible_cpu(cpu)
+		per_cpu(rcu_cpu_has_work, cpu) = 0;
+	WARN_ONCE(smpboot_register_percpu_thread(&rcu_cpu_thread_spec), "%s: Could not start rcub kthread, OOM is now expected behavior\n", __func__);
+	return 0;
 }
+early_initcall(rcu_spawn_core_kthreads);
 
 /*
  * Handle any core-RCU processing required by a call_rcu() invocation.
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3437 @ int rcutree_online_cpu(unsigned int cpu)
 	raw_spin_lock_irqsave_rcu_node(rnp, flags);
 	rnp->ffmask |= rdp->grpmask;
 	raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
-	if (IS_ENABLED(CONFIG_TREE_SRCU))
-		srcu_online_cpu(cpu);
 	if (rcu_scheduler_active == RCU_SCHEDULER_INACTIVE)
 		return 0; /* Too early in boot for scheduler work. */
 	sync_sched_exp_online_cleanup(cpu);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3461 @ int rcutree_offline_cpu(unsigned int cpu
 	raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 
 	rcutree_affinity_setting(cpu, cpu);
-	if (IS_ENABLED(CONFIG_TREE_SRCU))
-		srcu_offline_cpu(cpu);
 	return 0;
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3881 @ void __init rcu_init(void)
 	rcu_init_one();
 	if (dump_tree)
 		rcu_dump_rcu_node_tree();
-	open_softirq(RCU_SOFTIRQ, rcu_process_callbacks);
 
 	/*
 	 * We don't need protection against CPU-hotplug here because
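
The tree.c changes above move RCU core processing out of RCU_SOFTIRQ and into per-CPU rcuc kthreads: each invocation consumes a per-CPU work flag, makes up to ten passes while new work keeps arriving, and then yields briefly. The control flow of one invocation is modelled below in user space; the per-CPU variables, local_bh/irq handling and tracing of the real kthread are deliberately left out, and the names are illustrative.

  /*
   * User-space model of the rcu_cpu_kthread() loop added above.  Only the
   * control flow is modelled; per-CPU data, local_bh/irq handling and
   * tracing are left out.
   */
  #include <stdio.h>

  static int has_work;                    /* models rcu_cpu_has_work */
  static int batches = 12;                /* work keeps arriving for a while */

  static void process_callbacks(void)
  {
          puts("processing a batch of RCU callbacks");
          if (--batches > 0)
                  has_work = 1;           /* new work arrived while running */
  }

  /* One invocation: up to ten passes, then yield if work keeps coming. */
  static void rcu_cpu_kthread(void)
  {
          int spincnt;

          for (spincnt = 0; spincnt < 10; spincnt++) {
                  int work = has_work;

                  has_work = 0;
                  if (work)
                          process_callbacks();
                  if (!has_work) {
                          puts("queue drained: back to waiting");
                          return;
                  }
          }
          puts("still busy after ten passes: yield briefly");
  }

  int main(void)
  {
          has_work = 1;
          rcu_cpu_kthread();
          return 0;
  }
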
Index: linux-5.0.19-rt11/kernel/rcu/tree.h
===================================================================
--- linux-5.0.19-rt11.orig/kernel/rcu/tree.h
+++ linux-5.0.19-rt11/kernel/rcu/tree.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:405 @ static const char *tp_rcu_varname __used
 
 int rcu_dynticks_snap(struct rcu_data *rdp);
 
-#ifdef CONFIG_RCU_BOOST
 DECLARE_PER_CPU(unsigned int, rcu_cpu_kthread_status);
 DECLARE_PER_CPU(int, rcu_cpu_kthread_cpu);
 DECLARE_PER_CPU(unsigned int, rcu_cpu_kthread_loops);
 DECLARE_PER_CPU(char, rcu_cpu_has_work);
-#endif /* #ifdef CONFIG_RCU_BOOST */
 
 /* Forward declarations for rcutree_plugin.h */
 static void rcu_bootup_announce(void);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:426 @ void call_rcu(struct rcu_head *head, rcu
 static void dump_blkd_tasks(struct rcu_node *rnp, int ncheck);
 static void rcu_initiate_boost(struct rcu_node *rnp, unsigned long flags);
 static void rcu_preempt_boost_start_gp(struct rcu_node *rnp);
-static void invoke_rcu_callbacks_kthread(void);
 static bool rcu_is_callbacks_kthread(void);
+static void rcu_cpu_kthread_setup(unsigned int cpu);
 static void __init rcu_spawn_boost_kthreads(void);
 static void rcu_prepare_kthreads(int cpu);
 static void rcu_cleanup_after_idle(void);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:463 @ static void rcu_bind_gp_kthread(void);
 static bool rcu_nohz_full_cpu(void);
 static void rcu_dynticks_task_enter(void);
 static void rcu_dynticks_task_exit(void);
-
-#ifdef CONFIG_SRCU
-void srcu_online_cpu(unsigned int cpu);
-void srcu_offline_cpu(unsigned int cpu);
-#else /* #ifdef CONFIG_SRCU */
-void srcu_online_cpu(unsigned int cpu) { }
-void srcu_offline_cpu(unsigned int cpu) { }
-#endif /* #else #ifdef CONFIG_SRCU */
Index: linux-5.0.19-rt11/kernel/rcu/tree_plugin.h
===================================================================
--- linux-5.0.19-rt11.orig/kernel/rcu/tree_plugin.h
+++ linux-5.0.19-rt11/kernel/rcu/tree_plugin.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:27 @
  *	   Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  */
 
-#include <linux/delay.h>
-#include <linux/gfp.h>
-#include <linux/oom.h>
-#include <linux/sched/debug.h>
-#include <linux/smpboot.h>
-#include <linux/sched/isolation.h>
-#include <uapi/linux/sched/types.h>
-#include "../time/tick-internal.h"
-
-#ifdef CONFIG_RCU_BOOST
-
 #include "../locking/rtmutex_common.h"
 
 /*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:37 @ DEFINE_PER_CPU(unsigned int, rcu_cpu_kth
 DEFINE_PER_CPU(unsigned int, rcu_cpu_kthread_loops);
 DEFINE_PER_CPU(char, rcu_cpu_has_work);
 
-#else /* #ifdef CONFIG_RCU_BOOST */
-
-/*
- * Some architectures do not define rt_mutexes, but if !CONFIG_RCU_BOOST,
- * all uses are in dead code.  Provide a definition to keep the compiler
- * happy, but add WARN_ON_ONCE() to complain if used in the wrong place.
- * This probably needs to be excluded from -rt builds.
- */
-#define rt_mutex_owner(a) ({ WARN_ON_ONCE(1); NULL; })
-#define rt_mutex_futex_unlock(x) WARN_ON_ONCE(1)
-
-#endif /* #else #ifdef CONFIG_RCU_BOOST */
-
 #ifdef CONFIG_RCU_NOCB_CPU
 static cpumask_var_t rcu_nocb_mask; /* CPUs to have callbacks offloaded. */
 static bool __read_mostly rcu_nocb_poll;    /* Offload kthread are to poll. */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:309 @ void rcu_note_context_switch(bool preemp
 	struct task_struct *t = current;
 	struct rcu_data *rdp = this_cpu_ptr(&rcu_data);
 	struct rcu_node *rnp;
+	int sleeping_l = 0;
 
 	barrier(); /* Avoid RCU read-side critical sections leaking down. */
 	trace_rcu_utilization(TPS("Start context switch"));
 	lockdep_assert_irqs_disabled();
-	WARN_ON_ONCE(!preempt && t->rcu_read_lock_nesting > 0);
+#if defined(CONFIG_PREEMPT_RT_FULL)
+	sleeping_l = t->sleeping_lock;
+#endif
+	WARN_ON_ONCE(!preempt && t->rcu_read_lock_nesting > 0 && !sleeping_l);
 	if (t->rcu_read_lock_nesting > 0 &&
 	    !t->rcu_read_unlock_special.b.blocked) {
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:635 @ static void rcu_read_unlock_special(stru
 		/* Need to defer quiescent state until everything is enabled. */
 		if (irqs_were_disabled) {
 			/* Enabling irqs does not reschedule, so... */
-			raise_softirq_irqoff(RCU_SOFTIRQ);
+			invoke_rcu_core();
 		} else {
 			/* Enabling BH or preempt does reschedule, so... */
 			set_tsk_need_resched(current);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1133 @ dump_blkd_tasks(struct rcu_node *rnp, in
 
 #endif /* #else #ifdef CONFIG_PREEMPT_RCU */
 
+/*
+ * If boosting, set rcuc kthreads to realtime priority.
+ */
+static void rcu_cpu_kthread_setup(unsigned int cpu)
+{
 #ifdef CONFIG_RCU_BOOST
+	struct sched_param sp;
 
-static void rcu_wake_cond(struct task_struct *t, int status)
-{
-	/*
-	 * If the thread is yielding, only wake it when this
-	 * is invoked from idle
-	 */
-	if (status != RCU_KTHREAD_YIELDING || is_idle_task(current))
-		wake_up_process(t);
+	sp.sched_priority = kthread_prio;
+	sched_setscheduler_nocheck(current, SCHED_FIFO, &sp);
+#endif /* #ifdef CONFIG_RCU_BOOST */
 }
 
+#ifdef CONFIG_RCU_BOOST
+
 /*
  * Carry out RCU priority boosting on the task indicated by ->exp_tasks
  * or ->boost_tasks, advancing the pointer to the next task in the
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1286 @ static void rcu_initiate_boost(struct rc
 }
 
 /*
- * Wake up the per-CPU kthread to invoke RCU callbacks.
- */
-static void invoke_rcu_callbacks_kthread(void)
-{
-	unsigned long flags;
-
-	local_irq_save(flags);
-	__this_cpu_write(rcu_cpu_has_work, 1);
-	if (__this_cpu_read(rcu_cpu_kthread_task) != NULL &&
-	    current != __this_cpu_read(rcu_cpu_kthread_task)) {
-		rcu_wake_cond(__this_cpu_read(rcu_cpu_kthread_task),
-			      __this_cpu_read(rcu_cpu_kthread_status));
-	}
-	local_irq_restore(flags);
-}
-
-/*
  * Is the current CPU running the RCU-callbacks kthread?
  * Caller must have preemption disabled.
  */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1338 @ static int rcu_spawn_one_boost_kthread(s
 	return 0;
 }
 
-static void rcu_kthread_do_work(void)
-{
-	rcu_do_batch(this_cpu_ptr(&rcu_data));
-}
-
-static void rcu_cpu_kthread_setup(unsigned int cpu)
-{
-	struct sched_param sp;
-
-	sp.sched_priority = kthread_prio;
-	sched_setscheduler_nocheck(current, SCHED_FIFO, &sp);
-}
-
-static void rcu_cpu_kthread_park(unsigned int cpu)
-{
-	per_cpu(rcu_cpu_kthread_status, cpu) = RCU_KTHREAD_OFFCPU;
-}
-
-static int rcu_cpu_kthread_should_run(unsigned int cpu)
-{
-	return __this_cpu_read(rcu_cpu_has_work);
-}
-
-/*
- * Per-CPU kernel thread that invokes RCU callbacks.  This replaces
- * the RCU softirq used in configurations of RCU that do not support RCU
- * priority boosting.
- */
-static void rcu_cpu_kthread(unsigned int cpu)
-{
-	unsigned int *statusp = this_cpu_ptr(&rcu_cpu_kthread_status);
-	char work, *workp = this_cpu_ptr(&rcu_cpu_has_work);
-	int spincnt;
-
-	for (spincnt = 0; spincnt < 10; spincnt++) {
-		trace_rcu_utilization(TPS("Start CPU kthread@rcu_wait"));
-		local_bh_disable();
-		*statusp = RCU_KTHREAD_RUNNING;
-		this_cpu_inc(rcu_cpu_kthread_loops);
-		local_irq_disable();
-		work = *workp;
-		*workp = 0;
-		local_irq_enable();
-		if (work)
-			rcu_kthread_do_work();
-		local_bh_enable();
-		if (*workp == 0) {
-			trace_rcu_utilization(TPS("End CPU kthread@rcu_wait"));
-			*statusp = RCU_KTHREAD_WAITING;
-			return;
-		}
-	}
-	*statusp = RCU_KTHREAD_YIELDING;
-	trace_rcu_utilization(TPS("Start CPU kthread@rcu_yield"));
-	schedule_timeout_interruptible(2);
-	trace_rcu_utilization(TPS("End CPU kthread@rcu_yield"));
-	*statusp = RCU_KTHREAD_WAITING;
-}
-
 /*
  * Set the per-rcu_node kthread's affinity to cover all CPUs that are
  * served by the rcu_node in question.  The CPU hotplug lock is still
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1368 @ static void rcu_boost_kthread_setaffinit
 	free_cpumask_var(cm);
 }
 
-static struct smp_hotplug_thread rcu_cpu_thread_spec = {
-	.store			= &rcu_cpu_kthread_task,
-	.thread_should_run	= rcu_cpu_kthread_should_run,
-	.thread_fn		= rcu_cpu_kthread,
-	.thread_comm		= "rcuc/%u",
-	.setup			= rcu_cpu_kthread_setup,
-	.park			= rcu_cpu_kthread_park,
-};
-
 /*
  * Spawn boost kthreads -- called as soon as the scheduler is running.
  */
 static void __init rcu_spawn_boost_kthreads(void)
 {
 	struct rcu_node *rnp;
-	int cpu;
 
-	for_each_possible_cpu(cpu)
-		per_cpu(rcu_cpu_has_work, cpu) = 0;
-	if (WARN_ONCE(smpboot_register_percpu_thread(&rcu_cpu_thread_spec), "%s: Could not start rcub kthread, OOM is now expected behavior\n", __func__))
-		return;
 	rcu_for_each_leaf_node(rnp)
 		(void)rcu_spawn_one_boost_kthread(rnp);
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1397 @ static void rcu_initiate_boost(struct rc
 	raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 }
 
-static void invoke_rcu_callbacks_kthread(void)
-{
-	WARN_ON_ONCE(1);
-}
-
 static bool rcu_is_callbacks_kthread(void)
 {
 	return false;
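
rcu_cpu_kthread_setup(), added earlier in this file, raises the rcuc kthreads to SCHED_FIFO at kthread_prio when boosting is configured. The closest user-space analogue, shown below purely for illustration, is pthread_setschedparam() on the current thread; it needs CAP_SYS_NICE or root, so the sketch reports failure instead of assuming the policy change succeeded.

  /*
   * User-space analogue of rcu_cpu_kthread_setup(): raise the calling
   * thread to SCHED_FIFO.  Needs CAP_SYS_NICE or root, so failure is
   * reported rather than assumed away.  Illustrative only.
   */
  #include <pthread.h>
  #include <sched.h>
  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
          struct sched_param sp = { .sched_priority = 1 };
          int err = pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp);

          if (err)
                  fprintf(stderr, "SCHED_FIFO not granted: %s\n", strerror(err));
          else
                  puts("now running as SCHED_FIFO priority 1");
          return 0;
  }
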
Index: linux-5.0.19-rt11/kernel/rcu/update.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/rcu/update.c
+++ linux-5.0.19-rt11/kernel/rcu/update.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:71 @ extern int rcu_expedited; /* from sysctl
 module_param(rcu_expedited, int, 0);
 extern int rcu_normal; /* from sysctl */
 module_param(rcu_normal, int, 0);
-static int rcu_normal_after_boot;
+static int rcu_normal_after_boot = IS_ENABLED(CONFIG_PREEMPT_RT_FULL);
+#ifndef CONFIG_PREEMPT_RT_FULL
 module_param(rcu_normal_after_boot, int, 0);
+#endif
 #endif /* #ifndef CONFIG_TINY_RCU */
 
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
Index: linux-5.0.19-rt11/kernel/sched/completion.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/sched/completion.c
+++ linux-5.0.19-rt11/kernel/sched/completion.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:32 @ void complete(struct completion *x)
 {
 	unsigned long flags;
 
-	spin_lock_irqsave(&x->wait.lock, flags);
+	raw_spin_lock_irqsave(&x->wait.lock, flags);
 
 	if (x->done != UINT_MAX)
 		x->done++;
-	__wake_up_locked(&x->wait, TASK_NORMAL, 1);
-	spin_unlock_irqrestore(&x->wait.lock, flags);
+	swake_up_locked(&x->wait);
+	raw_spin_unlock_irqrestore(&x->wait.lock, flags);
 }
 EXPORT_SYMBOL(complete);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:61 @ void complete_all(struct completion *x)
 {
 	unsigned long flags;
 
-	spin_lock_irqsave(&x->wait.lock, flags);
+	raw_spin_lock_irqsave(&x->wait.lock, flags);
 	x->done = UINT_MAX;
-	__wake_up_locked(&x->wait, TASK_NORMAL, 0);
-	spin_unlock_irqrestore(&x->wait.lock, flags);
+	swake_up_all_locked(&x->wait);
+	raw_spin_unlock_irqrestore(&x->wait.lock, flags);
 }
 EXPORT_SYMBOL(complete_all);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:73 @ do_wait_for_common(struct completion *x,
 		   long (*action)(long), long timeout, int state)
 {
 	if (!x->done) {
-		DECLARE_WAITQUEUE(wait, current);
+		DECLARE_SWAITQUEUE(wait);
 
-		__add_wait_queue_entry_tail_exclusive(&x->wait, &wait);
 		do {
 			if (signal_pending_state(state, current)) {
 				timeout = -ERESTARTSYS;
 				break;
 			}
+			__prepare_to_swait(&x->wait, &wait);
 			__set_current_state(state);
-			spin_unlock_irq(&x->wait.lock);
+			raw_spin_unlock_irq(&x->wait.lock);
 			timeout = action(timeout);
-			spin_lock_irq(&x->wait.lock);
+			raw_spin_lock_irq(&x->wait.lock);
 		} while (!x->done && timeout);
-		__remove_wait_queue(&x->wait, &wait);
+		__finish_swait(&x->wait, &wait);
 		if (!x->done)
 			return timeout;
 	}
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:103 @ __wait_for_common(struct completion *x,
 
 	complete_acquire(x);
 
-	spin_lock_irq(&x->wait.lock);
+	raw_spin_lock_irq(&x->wait.lock);
 	timeout = do_wait_for_common(x, action, timeout, state);
-	spin_unlock_irq(&x->wait.lock);
+	raw_spin_unlock_irq(&x->wait.lock);
 
 	complete_release(x);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:294 @ bool try_wait_for_completion(struct comp
 	if (!READ_ONCE(x->done))
 		return false;
 
-	spin_lock_irqsave(&x->wait.lock, flags);
+	raw_spin_lock_irqsave(&x->wait.lock, flags);
 	if (!x->done)
 		ret = false;
 	else if (x->done != UINT_MAX)
 		x->done--;
-	spin_unlock_irqrestore(&x->wait.lock, flags);
+	raw_spin_unlock_irqrestore(&x->wait.lock, flags);
 	return ret;
 }
 EXPORT_SYMBOL(try_wait_for_completion);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:325 @ bool completion_done(struct completion *
 	 * otherwise we can end up freeing the completion before complete()
 	 * is done referencing it.
 	 */
-	spin_lock_irqsave(&x->wait.lock, flags);
-	spin_unlock_irqrestore(&x->wait.lock, flags);
+	raw_spin_lock_irqsave(&x->wait.lock, flags);
+	raw_spin_unlock_irqrestore(&x->wait.lock, flags);
 	return true;
 }
 EXPORT_SYMBOL(completion_done);
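
The completion.c hunks swap the wait_queue_head and its sleeping spinlock for a simple waitqueue under a raw spinlock, so complete() stays callable from truly atomic contexts on RT; the done-counter semantics (UINT_MAX meaning complete_all(), anything else consumed one waiter at a time) are untouched. A pthread-based user-space model of just those counter semantics is sketched below, with hypothetical names and none of the kernel's swait machinery.

  /*
   * Pthread-based user-space model of the completion done-counter kept by
   * the hunks above (UINT_MAX == complete_all()).  Hypothetical names; the
   * kernel's swait/raw-spinlock machinery is not modelled.
   */
  #include <limits.h>
  #include <pthread.h>
  #include <stdio.h>

  struct completion {
          unsigned int done;
          pthread_mutex_t lock;
          pthread_cond_t wait;
  };

  static void complete(struct completion *x)
  {
          pthread_mutex_lock(&x->lock);
          if (x->done != UINT_MAX)
                  x->done++;
          pthread_cond_signal(&x->wait);      /* models swake_up_locked() */
          pthread_mutex_unlock(&x->lock);
  }

  static void complete_all(struct completion *x)
  {
          pthread_mutex_lock(&x->lock);
          x->done = UINT_MAX;
          pthread_cond_broadcast(&x->wait);   /* models swake_up_all_locked() */
          pthread_mutex_unlock(&x->lock);
  }

  static void wait_for_completion(struct completion *x)
  {
          pthread_mutex_lock(&x->lock);
          while (!x->done)
                  pthread_cond_wait(&x->wait, &x->lock);
          if (x->done != UINT_MAX)
                  x->done--;                  /* consume one completion */
          pthread_mutex_unlock(&x->lock);
  }

  int main(void)
  {
          static struct completion x = {
                  .done = 0,
                  .lock = PTHREAD_MUTEX_INITIALIZER,
                  .wait = PTHREAD_COND_INITIALIZER,
          };

          complete(&x);
          wait_for_completion(&x);            /* returns without blocking */
          printf("done=%u\n", x.done);
          complete_all(&x);
          wait_for_completion(&x);            /* UINT_MAX is never consumed */
          printf("done=%u\n", x.done);
          return 0;
  }

Link with -lpthread; the single-threaded demo only shows that a prior complete() lets wait_for_completion() return without blocking.
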
Index: linux-5.0.19-rt11/kernel/sched/core.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/sched/core.c
+++ linux-5.0.19-rt11/kernel/sched/core.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:47 @ const_debug unsigned int sysctl_sched_fe
  * Number of tasks to iterate in a single balance run.
  * Limited because this is done with IRQs disabled.
  */
+#ifdef CONFIG_PREEMPT_RT_FULL
+const_debug unsigned int sysctl_sched_nr_migrate = 8;
+#else
 const_debug unsigned int sysctl_sched_nr_migrate = 32;
+#endif
 
 /*
  * period over which we measure -rt task CPU usage in us.
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:322 @ static void hrtick_rq_init(struct rq *rq
 	rq->hrtick_csd.info = rq;
 #endif
 
-	hrtimer_init(&rq->hrtick_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+	hrtimer_init(&rq->hrtick_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
 	rq->hrtick_timer.function = hrtick;
 }
 #else	/* CONFIG_SCHED_HRTICK */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:416 @ static bool set_nr_if_polling(struct tas
  * This function must be used as-if it were wake_up_process(); IOW the task
  * must be ready to be woken at this location.
  */
-void wake_q_add(struct wake_q_head *head, struct task_struct *task)
+void __wake_q_add(struct wake_q_head *head, struct task_struct *task,
+		  bool sleeper)
 {
-	struct wake_q_node *node = &task->wake_q;
+	struct wake_q_node *node;
+
+	if (sleeper)
+		node = &task->wake_q_sleeper;
+	else
+		node = &task->wake_q;
 
 	/*
 	 * Atomically grab the task, if ->wake_q is !nil already it means
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:447 @ void wake_q_add(struct wake_q_head *head
 	head->lastp = &node->next;
 }
 
-void wake_up_q(struct wake_q_head *head)
+void __wake_up_q(struct wake_q_head *head, bool sleeper)
 {
 	struct wake_q_node *node = head->first;
 
 	while (node != WAKE_Q_TAIL) {
 		struct task_struct *task;
 
-		task = container_of(node, struct task_struct, wake_q);
+		if (sleeper)
+			task = container_of(node, struct task_struct, wake_q_sleeper);
+		else
+			task = container_of(node, struct task_struct, wake_q);
 		BUG_ON(!task);
 		/* Task can safely be re-inserted now: */
 		node = node->next;
-		task->wake_q.next = NULL;
-
+		if (sleeper)
+			task->wake_q_sleeper.next = NULL;
+		else
+			task->wake_q.next = NULL;
 		/*
 		 * wake_up_process() executes a full barrier, which pairs with
 		 * the queueing in wake_q_add() so as not to miss wakeups.
 		 */
-		wake_up_process(task);
+		if (sleeper)
+			wake_up_lock_sleeper(task);
+		else
+			wake_up_process(task);
 		put_task_struct(task);
 	}
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:508 @ void resched_curr(struct rq *rq)
 		trace_sched_wake_idle_without_ipi(cpu);
 }
 
+#ifdef CONFIG_PREEMPT_LAZY
+
+static int tsk_is_polling(struct task_struct *p)
+{
+#ifdef TIF_POLLING_NRFLAG
+	return test_tsk_thread_flag(p, TIF_POLLING_NRFLAG);
+#else
+	return 0;
+#endif
+}
+
+void resched_curr_lazy(struct rq *rq)
+{
+	struct task_struct *curr = rq->curr;
+	int cpu;
+
+	if (!sched_feat(PREEMPT_LAZY)) {
+		resched_curr(rq);
+		return;
+	}
+
+	lockdep_assert_held(&rq->lock);
+
+	if (test_tsk_need_resched(curr))
+		return;
+
+	if (test_tsk_need_resched_lazy(curr))
+		return;
+
+	set_tsk_need_resched_lazy(curr);
+
+	cpu = cpu_of(rq);
+	if (cpu == smp_processor_id())
+		return;
+
+	/* NEED_RESCHED_LAZY must be visible before we test polling */
+	smp_mb();
+	if (!tsk_is_polling(curr))
+		smp_send_reschedule(cpu);
+}
+#endif
+
 void resched_cpu(int cpu)
 {
 	struct rq *rq = cpu_rq(cpu);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:959 @ static inline bool is_per_cpu_kthread(st
  */
 static inline bool is_cpu_allowed(struct task_struct *p, int cpu)
 {
-	if (!cpumask_test_cpu(cpu, &p->cpus_allowed))
+	if (!cpumask_test_cpu(cpu, p->cpus_ptr))
 		return false;
 
-	if (is_per_cpu_kthread(p))
+	if (is_per_cpu_kthread(p) || __migrate_disabled(p))
 		return cpu_online(cpu);
 
 	return cpu_active(cpu);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1054 @ static int migration_cpu_stop(void *data
 	local_irq_disable();
 	/*
 	 * We need to explicitly wake pending tasks before running
-	 * __migrate_task() such that we will not miss enforcing cpus_allowed
+	 * __migrate_task() such that we will not miss enforcing cpus_ptr
 	 * during wakeups, see set_cpus_allowed_ptr()'s TASK_WAKING test.
 	 */
 	sched_ttwu_pending();
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1085 @ static int migration_cpu_stop(void *data
  */
 void set_cpus_allowed_common(struct task_struct *p, const struct cpumask *new_mask)
 {
-	cpumask_copy(&p->cpus_allowed, new_mask);
+	cpumask_copy(&p->cpus_mask, new_mask);
 	p->nr_cpus_allowed = cpumask_weight(new_mask);
 }
 
-void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
+#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE)
+int __migrate_disabled(struct task_struct *p)
+{
+	return p->migrate_disable;
+}
+EXPORT_SYMBOL_GPL(__migrate_disabled);
+#endif
+
+static void __do_set_cpus_allowed_tail(struct task_struct *p,
+				       const struct cpumask *new_mask)
 {
 	struct rq *rq = task_rq(p);
 	bool queued, running;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1127 @ void do_set_cpus_allowed(struct task_str
 		set_curr_task(rq, p);
 }
 
+void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
+{
+#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE)
+	if (__migrate_disabled(p)) {
+		lockdep_assert_held(&p->pi_lock);
+
+		cpumask_copy(&p->cpus_mask, new_mask);
+		p->migrate_disable_update = 1;
+		return;
+	}
+#endif
+	__do_set_cpus_allowed_tail(p, new_mask);
+}
+
 /*
  * Change a given task's CPU affinity. Migrate the thread to a
  * proper CPU and schedule it away if the CPU it's executing on
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1178 @ static int __set_cpus_allowed_ptr(struct
 		goto out;
 	}
 
-	if (cpumask_equal(&p->cpus_allowed, new_mask))
+	if (cpumask_equal(p->cpus_ptr, new_mask))
 		goto out;
 
 	if (!cpumask_intersects(new_mask, cpu_valid_mask)) {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1199 @ static int __set_cpus_allowed_ptr(struct
 	}
 
 	/* Can the task run on the task's current CPU? If so, we're done */
-	if (cpumask_test_cpu(task_cpu(p), new_mask))
+	if (cpumask_test_cpu(task_cpu(p), new_mask) || __migrate_disabled(p))
+		goto out;
+
+#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE)
+	if (__migrate_disabled(p)) {
+		p->migrate_disable_update = 1;
 		goto out;
+	}
+#endif
 
 	dest_cpu = cpumask_any_and(cpu_valid_mask, new_mask);
 	if (task_running(rq, p) || p->state == TASK_WAKING) {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1348 @ static int migrate_swap_stop(void *data)
 	if (task_cpu(arg->src_task) != arg->src_cpu)
 		goto unlock;
 
-	if (!cpumask_test_cpu(arg->dst_cpu, &arg->src_task->cpus_allowed))
+	if (!cpumask_test_cpu(arg->dst_cpu, arg->src_task->cpus_ptr))
 		goto unlock;
 
-	if (!cpumask_test_cpu(arg->src_cpu, &arg->dst_task->cpus_allowed))
+	if (!cpumask_test_cpu(arg->src_cpu, arg->dst_task->cpus_ptr))
 		goto unlock;
 
 	__migrate_swap_task(arg->src_task, arg->dst_cpu);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1393 @ int migrate_swap(struct task_struct *cur
 	if (!cpu_active(arg.src_cpu) || !cpu_active(arg.dst_cpu))
 		goto out;
 
-	if (!cpumask_test_cpu(arg.dst_cpu, &arg.src_task->cpus_allowed))
+	if (!cpumask_test_cpu(arg.dst_cpu, arg.src_task->cpus_ptr))
 		goto out;
 
-	if (!cpumask_test_cpu(arg.src_cpu, &arg.dst_task->cpus_allowed))
+	if (!cpumask_test_cpu(arg.src_cpu, arg.dst_task->cpus_ptr))
 		goto out;
 
 	trace_sched_swap_numa(cur, arg.src_cpu, p, arg.dst_cpu);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1407 @ out:
 }
 #endif /* CONFIG_NUMA_BALANCING */
 
+static bool check_task_state(struct task_struct *p, long match_state)
+{
+	bool match = false;
+
+	raw_spin_lock_irq(&p->pi_lock);
+	if (p->state == match_state || p->saved_state == match_state)
+		match = true;
+	raw_spin_unlock_irq(&p->pi_lock);
+
+	return match;
+}
+
 /*
  * wait_task_inactive - wait for a thread to unschedule.
  *
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1463 @ unsigned long wait_task_inactive(struct
 		 * is actually now running somewhere else!
 		 */
 		while (task_running(rq, p)) {
-			if (match_state && unlikely(p->state != match_state))
+			if (match_state && !check_task_state(p, match_state))
 				return 0;
 			cpu_relax();
 		}
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1478 @ unsigned long wait_task_inactive(struct
 		running = task_running(rq, p);
 		queued = task_on_rq_queued(p);
 		ncsw = 0;
-		if (!match_state || p->state == match_state)
+		if (!match_state || p->state == match_state ||
+		    p->saved_state == match_state)
 			ncsw = p->nvcsw | LONG_MIN; /* sets MSB */
 		task_rq_unlock(rq, p, &rf);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1554 @ void kick_process(struct task_struct *p)
 EXPORT_SYMBOL_GPL(kick_process);
 
 /*
- * ->cpus_allowed is protected by both rq->lock and p->pi_lock
+ * ->cpus_ptr is protected by both rq->lock and p->pi_lock
  *
  * A few notes on cpu_active vs cpu_online:
  *
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1594 @ static int select_fallback_rq(int cpu, s
 		for_each_cpu(dest_cpu, nodemask) {
 			if (!cpu_active(dest_cpu))
 				continue;
-			if (cpumask_test_cpu(dest_cpu, &p->cpus_allowed))
+			if (cpumask_test_cpu(dest_cpu, p->cpus_ptr))
 				return dest_cpu;
 		}
 	}
 
 	for (;;) {
 		/* Any allowed, online CPU? */
-		for_each_cpu(dest_cpu, &p->cpus_allowed) {
+		for_each_cpu(dest_cpu, p->cpus_ptr) {
 			if (!is_cpu_allowed(p, dest_cpu))
 				continue;
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1645 @ out:
 }
 
 /*
- * The caller (fork, wakeup) owns p->pi_lock, ->cpus_allowed is stable.
+ * The caller (fork, wakeup) owns p->pi_lock, ->cpus_ptr is stable.
  */
 static inline
 int select_task_rq(struct task_struct *p, int cpu, int sd_flags, int wake_flags)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1655 @ int select_task_rq(struct task_struct *p
 	if (p->nr_cpus_allowed > 1)
 		cpu = p->sched_class->select_task_rq(p, cpu, sd_flags, wake_flags);
 	else
-		cpu = cpumask_any(&p->cpus_allowed);
+		cpu = cpumask_any(p->cpus_ptr);
 
 	/*
 	 * In order not to call set_task_cpu() on a blocking task we need
-	 * to rely on ttwu() to place the task on a valid ->cpus_allowed
+	 * to rely on ttwu() to place the task on a valid ->cpus_ptr
 	 * CPU.
 	 *
 	 * Since this is common to all placement strategies, this lives here.
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1762 @ static inline void ttwu_activate(struct
 {
 	activate_task(rq, p, en_flags);
 	p->on_rq = TASK_ON_RQ_QUEUED;
-
-	/* If a worker is waking up, notify the workqueue: */
-	if (p->flags & PF_WQ_WORKER)
-		wq_worker_waking_up(p, cpu_of(rq));
 }
 
 /*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2080 @ try_to_wake_up(struct task_struct *p, un
 	 */
 	raw_spin_lock_irqsave(&p->pi_lock, flags);
 	smp_mb__after_spinlock();
-	if (!(p->state & state))
+	if (!(p->state & state)) {
+		/*
+		 * The task might be running due to a spinlock sleeper
+		 * wakeup. Check the saved state and set it to running
+		 * if the wakeup condition is true.
+		 */
+		if (!(wake_flags & WF_LOCK_SLEEPER)) {
+			if (p->saved_state & state) {
+				p->saved_state = TASK_RUNNING;
+				success = 1;
+			}
+		}
 		goto out;
+	}
+
+	/*
+	 * If this is a regular wakeup, then we can unconditionally
+	 * clear the saved state of a "lock sleeper".
+	 */
+	if (!(wake_flags & WF_LOCK_SLEEPER))
+		p->saved_state = TASK_RUNNING;
 
 	trace_sched_waking(p);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2199 @ out:
 }
 
 /**
- * try_to_wake_up_local - try to wake up a local task with rq lock held
- * @p: the thread to be awakened
- * @rf: request-queue flags for pinning
- *
- * Put @p on the run-queue if it's not already there. The caller must
- * ensure that this_rq() is locked, @p is bound to this_rq() and not
- * the current task.
- */
-static void try_to_wake_up_local(struct task_struct *p, struct rq_flags *rf)
-{
-	struct rq *rq = task_rq(p);
-
-	if (WARN_ON_ONCE(rq != this_rq()) ||
-	    WARN_ON_ONCE(p == current))
-		return;
-
-	lockdep_assert_held(&rq->lock);
-
-	if (!raw_spin_trylock(&p->pi_lock)) {
-		/*
-		 * This is OK, because current is on_cpu, which avoids it being
-		 * picked for load-balance and preemption/IRQs are still
-		 * disabled avoiding further scheduler activity on it and we've
-		 * not yet picked a replacement task.
-		 */
-		rq_unlock(rq, rf);
-		raw_spin_lock(&p->pi_lock);
-		rq_relock(rq, rf);
-	}
-
-	if (!(p->state & TASK_NORMAL))
-		goto out;
-
-	trace_sched_waking(p);
-
-	if (!task_on_rq_queued(p)) {
-		if (p->in_iowait) {
-			delayacct_blkio_end(p);
-			atomic_dec(&rq->nr_iowait);
-		}
-		ttwu_activate(rq, p, ENQUEUE_WAKEUP | ENQUEUE_NOCLOCK);
-	}
-
-	ttwu_do_wakeup(rq, p, 0, rf);
-	ttwu_stat(p, smp_processor_id(), 0);
-out:
-	raw_spin_unlock(&p->pi_lock);
-}
-
-/**
  * wake_up_process - Wake up a specific process
  * @p: The process to be woken up.
  *
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2215 @ int wake_up_process(struct task_struct *
 }
 EXPORT_SYMBOL(wake_up_process);
 
+/**
+ * wake_up_lock_sleeper - Wake up a specific process blocked on a "sleeping lock"
+ * @p: The process to be woken up.
+ *
+ * Same as wake_up_process() above, but passes wake_flags=WF_LOCK_SLEEPER to
+ * indicate the nature of the wakeup.
+ */
+int wake_up_lock_sleeper(struct task_struct *p)
+{
+	return try_to_wake_up(p, TASK_UNINTERRUPTIBLE, WF_LOCK_SLEEPER);
+}
+
 int wake_up_state(struct task_struct *p, unsigned int state)
 {
 	return try_to_wake_up(p, state, 0);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2464 @ int sched_fork(unsigned long clone_flags
 	p->on_cpu = 0;
 #endif
 	init_task_preempt_count(p);
+#ifdef CONFIG_HAVE_PREEMPT_LAZY
+	task_thread_info(p)->preempt_lazy_count = 0;
+#endif
 #ifdef CONFIG_SMP
 	plist_node_init(&p->pushable_tasks, MAX_PRIO);
 	RB_CLEAR_NODE(&p->pushable_dl_tasks);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2507 @ void wake_up_new_task(struct task_struct
 #ifdef CONFIG_SMP
 	/*
 	 * Fork balancing, do it here and not earlier because:
-	 *  - cpus_allowed can change in the fork path
+	 *  - cpus_ptr can change in the fork path
 	 *  - any previously selected CPU might disappear through hotplug
 	 *
 	 * Use __set_task_cpu() to avoid calling sched_class::migrate_task_rq,
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2795 @ static struct rq *finish_task_switch(str
 	 *   provided by mmdrop(),
 	 * - a sync_core for SYNC_CORE.
 	 */
+	/*
+	 * We use mmdrop_delayed() here so we don't have to do the
+	 * full __mmdrop() when we are the last user.
+	 */
 	if (mm) {
 		membarrier_mm_sync_core_before_usermode(mm);
-		mmdrop(mm);
+		mmdrop_delayed(mm);
 	}
 	if (unlikely(prev_state == TASK_DEAD)) {
 		if (prev->sched_class->task_dead)
 			prev->sched_class->task_dead(prev);
 
-		/*
-		 * Remove function-return probe instances associated with this
-		 * task and put them back on the free list.
-		 */
-		kprobe_flush_task(prev);
-
-		/* Task is done with its stack. */
-		put_task_stack(prev);
-
 		put_task_struct(prev);
 	}
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3521 @ static void __sched notrace __schedule(b
 				atomic_inc(&rq->nr_iowait);
 				delayacct_blkio_start();
 			}
-
-			/*
-			 * If a worker went to sleep, notify and ask workqueue
-			 * whether it wants to wake up a task to maintain
-			 * concurrency.
-			 */
-			if (prev->flags & PF_WQ_WORKER) {
-				struct task_struct *to_wakeup;
-
-				to_wakeup = wq_worker_sleeping(prev);
-				if (to_wakeup)
-					try_to_wake_up_local(to_wakeup, &rf);
-			}
 		}
 		switch_count = &prev->nvcsw;
 	}
 
 	next = pick_next_task(rq, prev, &rf);
 	clear_tsk_need_resched(prev);
+	clear_tsk_need_resched_lazy(prev);
 	clear_preempt_need_resched();
 
 	if (likely(prev != next)) {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3579 @ void __noreturn do_task_dead(void)
 
 static inline void sched_submit_work(struct task_struct *tsk)
 {
-	if (!tsk->state || tsk_is_pi_blocked(tsk))
+	if (!tsk->state)
+		return;
+
+	/*
+	 * If a worker went to sleep, notify and ask workqueue whether
+	 * it wants to wake up a task to maintain concurrency.
+	 * As this function is called inside the schedule() context,
+	 * we disable preemption so that a possible wakeup of a kworker
+	 * does not recurse into schedule() again.
+	 */
+	if (tsk->flags & PF_WQ_WORKER) {
+		preempt_disable();
+		wq_worker_sleeping(tsk);
+		preempt_enable_no_resched();
+	}
+
+	if (tsk_is_pi_blocked(tsk))
 		return;
+
 	/*
 	 * If we are going to sleep and we have plugged IO queued,
 	 * make sure to submit it to avoid deadlocks.
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3606 @ static inline void sched_submit_work(str
 		blk_schedule_flush_plug(tsk);
 }
 
+static void sched_update_worker(struct task_struct *tsk)
+{
+	if (tsk->flags & PF_WQ_WORKER)
+		wq_worker_running(tsk);
+}
+
 asmlinkage __visible void __sched schedule(void)
 {
 	struct task_struct *tsk = current;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3622 @ asmlinkage __visible void __sched schedu
 		__schedule(false);
 		sched_preempt_enable_no_resched();
 	} while (need_resched());
+	sched_update_worker(tsk);
 }
 EXPORT_SYMBOL(schedule);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3711 @ static void __sched notrace preempt_sche
 	} while (need_resched());
 }
 
+#ifdef CONFIG_PREEMPT_LAZY
+/*
+ * If TIF_NEED_RESCHED is set then we allow being scheduled away, since this
+ * is set by an RT task. Otherwise we try to avoid being scheduled out as long
+ * as the preempt_lazy_count counter is > 0.
+ */
+static __always_inline int preemptible_lazy(void)
+{
+	if (test_thread_flag(TIF_NEED_RESCHED))
+		return 1;
+	if (current_thread_info()->preempt_lazy_count)
+		return 0;
+	return 1;
+}
+
+#else
+
+static inline int preemptible_lazy(void)
+{
+	return 1;
+}
+
+#endif
+
 #ifdef CONFIG_PREEMPT
 /*
  * this is the entry point to schedule() from in-kernel preemption
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3749 @ asmlinkage __visible void __sched notrac
 	 */
 	if (likely(!preemptible()))
 		return;
-
+	if (!preemptible_lazy())
+		return;
 	preempt_schedule_common();
 }
 NOKPROBE_SYMBOL(preempt_schedule);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3777 @ asmlinkage __visible void __sched notrac
 	if (likely(!preemptible()))
 		return;
 
+	if (!preemptible_lazy())
+		return;
+
 	do {
 		/*
 		 * Because the function tracer can trace preempt_count_sub()
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4408 @ change:
 			 * the entire root_domain to become SCHED_DEADLINE. We
 			 * will also fail if there's no bandwidth available.
 			 */
-			if (!cpumask_subset(span, &p->cpus_allowed) ||
+			if (!cpumask_subset(span, p->cpus_ptr) ||
 			    rq->rd->dl_bw.bw == 0) {
 				task_rq_unlock(rq, p, &rf);
 				return -EPERM;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:5007 @ long sched_getaffinity(pid_t pid, struct
 		goto out_unlock;
 
 	raw_spin_lock_irqsave(&p->pi_lock, flags);
-	cpumask_and(mask, &p->cpus_allowed, cpu_active_mask);
+	cpumask_and(mask, &p->cpus_mask, cpu_active_mask);
 	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
 
 out_unlock:
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:5546 @ void init_idle(struct task_struct *idle,
 
 	/* Set the preempt count _outside_ the spinlocks! */
 	init_idle_preempt_count(idle, cpu);
-
+#ifdef CONFIG_HAVE_PREEMPT_LAZY
+	task_thread_info(idle)->preempt_lazy_count = 0;
+#endif
 	/*
 	 * The idle tasks have their own, simple scheduling class:
 	 */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:5587 @ int task_can_attach(struct task_struct *
 	 * allowed nodes is unnecessary.  Thus, cpusets are not
 	 * applicable for such threads.  This prevents checking for
 	 * success of set_cpus_allowed_ptr() on all attached tasks
-	 * before cpus_allowed may be changed.
+	 * before cpus_mask may be changed.
 	 */
 	if (p->flags & PF_NO_SETAFFINITY) {
 		ret = -EINVAL;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:5614 @ int migrate_task_to(struct task_struct *
 	if (curr_cpu == target_cpu)
 		return 0;
 
-	if (!cpumask_test_cpu(target_cpu, &p->cpus_allowed))
+	if (!cpumask_test_cpu(target_cpu, p->cpus_ptr))
 		return -EINVAL;
 
 	/* TODO: This is not properly updating schedstats */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:5653 @ void sched_setnuma(struct task_struct *p
 #endif /* CONFIG_NUMA_BALANCING */
 
 #ifdef CONFIG_HOTPLUG_CPU
+static DEFINE_PER_CPU(struct mm_struct *, idle_last_mm);
+
 /*
  * Ensure that the idle task is using init_mm right before its CPU goes
  * offline.
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:5670 @ void idle_task_exit(void)
 		current->active_mm = &init_mm;
 		finish_arch_post_lock_switch();
 	}
-	mmdrop(mm);
+	/*
+	 * Defer the cleanup to a live CPU. On RT we can neither
+	 * call mmdrop() nor mmdrop_delayed() from here.
+	 */
+	per_cpu(idle_last_mm, smp_processor_id()) = mm;
 }
 
 /*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:5758 @ static void migrate_tasks(struct rq *dea
 		put_prev_task(rq, next);
 
 		/*
-		 * Rules for changing task_struct::cpus_allowed are holding
+		 * Rules for changing task_struct::cpus_mask are holding
 		 * both pi_lock and rq->lock, such that holding either
 		 * stabilizes the mask.
 		 *
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:5986 @ int sched_cpu_dying(unsigned int cpu)
 	update_max_interval();
 	nohz_balance_exit_idle(rq);
 	hrtick_clear(rq);
+	if (per_cpu(idle_last_mm, cpu)) {
+		mmdrop_delayed(per_cpu(idle_last_mm, cpu));
+		per_cpu(idle_last_mm, cpu) = NULL;
+	}
 	return 0;
 }
 #endif
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:6226 @ void __init sched_init(void)
 #ifdef CONFIG_DEBUG_ATOMIC_SLEEP
 static inline int preempt_count_equals(int preempt_offset)
 {
-	int nested = preempt_count() + rcu_preempt_depth();
+	int nested = preempt_count() + sched_rcu_preempt_depth();
 
 	return (nested == preempt_offset);
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:7212 @ const u32 sched_prio_to_wmult[40] = {
 };
 
 #undef CREATE_TRACE_POINTS
+
+#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE)
+
+static inline void
+update_nr_migratory(struct task_struct *p, long delta)
+{
+	if (unlikely((p->sched_class == &rt_sched_class ||
+		      p->sched_class == &dl_sched_class) &&
+		      p->nr_cpus_allowed > 1)) {
+		if (p->sched_class == &rt_sched_class)
+			task_rq(p)->rt.rt_nr_migratory += delta;
+		else
+			task_rq(p)->dl.dl_nr_migratory += delta;
+	}
+}
+
+static inline void
+migrate_disable_update_cpus_allowed(struct task_struct *p)
+{
+	struct rq *rq;
+	struct rq_flags rf;
+
+	p->cpus_ptr = cpumask_of(smp_processor_id());
+
+	rq = task_rq_lock(p, &rf);
+	update_nr_migratory(p, -1);
+	p->nr_cpus_allowed = 1;
+	task_rq_unlock(rq, p, &rf);
+}
+
+static inline void
+migrate_enable_update_cpus_allowed(struct task_struct *p)
+{
+	struct rq *rq;
+	struct rq_flags rf;
+
+	p->cpus_ptr = &p->cpus_mask;
+
+	rq = task_rq_lock(p, &rf);
+	p->nr_cpus_allowed = cpumask_weight(&p->cpus_mask);
+	update_nr_migratory(p, 1);
+	task_rq_unlock(rq, p, &rf);
+}
+
+void migrate_disable(void)
+{
+	struct task_struct *p = current;
+
+	if (in_atomic() || irqs_disabled()) {
+#ifdef CONFIG_SCHED_DEBUG
+		p->migrate_disable_atomic++;
+#endif
+		return;
+	}
+#ifdef CONFIG_SCHED_DEBUG
+	if (unlikely(p->migrate_disable_atomic)) {
+		tracing_off();
+		WARN_ON_ONCE(1);
+	}
+#endif
+
+	if (p->migrate_disable) {
+		p->migrate_disable++;
+		return;
+	}
+
+	preempt_disable();
+	preempt_lazy_disable();
+	pin_current_cpu();
+
+	migrate_disable_update_cpus_allowed(p);
+	p->migrate_disable = 1;
+
+	preempt_enable();
+}
+EXPORT_SYMBOL(migrate_disable);
+
+void migrate_enable(void)
+{
+	struct task_struct *p = current;
+
+	if (in_atomic() || irqs_disabled()) {
+#ifdef CONFIG_SCHED_DEBUG
+		p->migrate_disable_atomic--;
+#endif
+		return;
+	}
+
+#ifdef CONFIG_SCHED_DEBUG
+	if (unlikely(p->migrate_disable_atomic)) {
+		tracing_off();
+		WARN_ON_ONCE(1);
+	}
+#endif
+
+	WARN_ON_ONCE(p->migrate_disable <= 0);
+	if (p->migrate_disable > 1) {
+		p->migrate_disable--;
+		return;
+	}
+
+	preempt_disable();
+
+	p->migrate_disable = 0;
+	migrate_enable_update_cpus_allowed(p);
+
+	if (p->migrate_disable_update) {
+		struct rq *rq;
+		struct rq_flags rf;
+
+		rq = task_rq_lock(p, &rf);
+		update_rq_clock(rq);
+
+		__do_set_cpus_allowed_tail(p, &p->cpus_mask);
+		task_rq_unlock(rq, p, &rf);
+
+		p->migrate_disable_update = 0;
+
+		WARN_ON(smp_processor_id() != task_cpu(p));
+		if (!cpumask_test_cpu(task_cpu(p), &p->cpus_mask)) {
+			const struct cpumask *cpu_valid_mask = cpu_active_mask;
+			struct migration_arg arg;
+			unsigned int dest_cpu;
+
+			if (p->flags & PF_KTHREAD) {
+				/*
+				 * Kernel threads are allowed on online && !active CPUs
+				 */
+				cpu_valid_mask = cpu_online_mask;
+			}
+			dest_cpu = cpumask_any_and(cpu_valid_mask, &p->cpus_mask);
+			arg.task = p;
+			arg.dest_cpu = dest_cpu;
+
+			unpin_current_cpu();
+			preempt_lazy_enable();
+			preempt_enable();
+			stop_one_cpu(task_cpu(p), migration_cpu_stop, &arg);
+			tlb_migrate_finish(p->mm);
+
+			return;
+		}
+	}
+	unpin_current_cpu();
+	preempt_lazy_enable();
+	preempt_enable();
+}
+EXPORT_SYMBOL(migrate_enable);
+
+#elif !defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE)
+void migrate_disable(void)
+{
+#ifdef CONFIG_SCHED_DEBUG
+	struct task_struct *p = current;
+
+	if (in_atomic() || irqs_disabled()) {
+		p->migrate_disable_atomic++;
+		return;
+	}
+
+	if (unlikely(p->migrate_disable_atomic)) {
+		tracing_off();
+		WARN_ON_ONCE(1);
+	}
+
+	p->migrate_disable++;
+#endif
+	barrier();
+}
+EXPORT_SYMBOL(migrate_disable);
+
+void migrate_enable(void)
+{
+#ifdef CONFIG_SCHED_DEBUG
+	struct task_struct *p = current;
+
+	if (in_atomic() || irqs_disabled()) {
+		p->migrate_disable_atomic--;
+		return;
+	}
+
+	if (unlikely(p->migrate_disable_atomic)) {
+		tracing_off();
+		WARN_ON_ONCE(1);
+	}
+
+	WARN_ON_ONCE(p->migrate_disable <= 0);
+	p->migrate_disable--;
+#endif
+	barrier();
+}
+EXPORT_SYMBOL(migrate_enable);
+#endif
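
Most of the scheduler additions above implement migrate_disable()/migrate_enable() as a per-task nesting counter: only the outermost disable narrows cpus_ptr to the current CPU, and only the outermost enable restores the full mask and replays any affinity change that was deferred while the task was pinned. The nesting bookkeeping on its own is modelled below in user space; the strings stand in for the real cpumask handling and nothing of the scheduler is simulated.

  /*
   * User-space model of the migrate_disable()/migrate_enable() nesting
   * added above: only the outermost pair changes the effective CPU mask,
   * and an affinity change requested while pinned is replayed on the
   * outermost enable.  Hypothetical names; no scheduler state is modelled.
   */
  #include <stdio.h>

  struct task {
          int migrate_disable;            /* nesting depth */
          int migrate_disable_update;     /* affinity change deferred */
          const char *cpus_ptr;           /* stands in for the real cpumask */
  };

  static void migrate_disable(struct task *p)
  {
          if (p->migrate_disable++)
                  return;                 /* already pinned: just nest */
          p->cpus_ptr = "pinned to the current CPU";
  }

  static void migrate_enable(struct task *p)
  {
          if (--p->migrate_disable)
                  return;                 /* still nested: nothing to do */
          p->cpus_ptr = "full cpus_mask";
          if (p->migrate_disable_update) {
                  p->migrate_disable_update = 0;
                  puts("replaying deferred set_cpus_allowed / migration");
          }
  }

  int main(void)
  {
          struct task p = { 0, 0, "full cpus_mask" };

          migrate_disable(&p);
          migrate_disable(&p);            /* nested section */
          p.migrate_disable_update = 1;   /* affinity changed while pinned */
          printf("while pinned: %s\n", p.cpus_ptr);
          migrate_enable(&p);             /* inner enable: still pinned */
          migrate_enable(&p);             /* outer enable: unpin + replay */
          printf("after enable: %s\n", p.cpus_ptr);
          return 0;
  }
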
Index: linux-5.0.19-rt11/kernel/sched/cpudeadline.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/sched/cpudeadline.c
+++ linux-5.0.19-rt11/kernel/sched/cpudeadline.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:127 @ int cpudl_find(struct cpudl *cp, struct
 	const struct sched_dl_entity *dl_se = &p->dl;
 
 	if (later_mask &&
-	    cpumask_and(later_mask, cp->free_cpus, &p->cpus_allowed)) {
+	    cpumask_and(later_mask, cp->free_cpus, p->cpus_ptr)) {
 		return 1;
 	} else {
 		int best_cpu = cpudl_maximum(cp);
 
 		WARN_ON(best_cpu != -1 && !cpu_present(best_cpu));
 
-		if (cpumask_test_cpu(best_cpu, &p->cpus_allowed) &&
+		if (cpumask_test_cpu(best_cpu, p->cpus_ptr) &&
 		    dl_time_before(dl_se->deadline, cp->elements[0].dl)) {
 			if (later_mask)
 				cpumask_set_cpu(best_cpu, later_mask);
Index: linux-5.0.19-rt11/kernel/sched/cpupri.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/sched/cpupri.c
+++ linux-5.0.19-rt11/kernel/sched/cpupri.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:101 @ int cpupri_find(struct cpupri *cp, struc
 		if (skip)
 			continue;
 
-		if (cpumask_any_and(&p->cpus_allowed, vec->mask) >= nr_cpu_ids)
+		if (cpumask_any_and(p->cpus_ptr, vec->mask) >= nr_cpu_ids)
 			continue;
 
 		if (lowest_mask) {
-			cpumask_and(lowest_mask, &p->cpus_allowed, vec->mask);
+			cpumask_and(lowest_mask, p->cpus_ptr, vec->mask);
 
 			/*
 			 * We have to ensure that we have at least one bit
Index: linux-5.0.19-rt11/kernel/sched/deadline.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/sched/deadline.c
+++ linux-5.0.19-rt11/kernel/sched/deadline.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:541 @ static struct rq *dl_task_offline_migrat
 		 * If we cannot preempt any rq, fall back to pick any
 		 * online CPU:
 		 */
-		cpu = cpumask_any_and(cpu_active_mask, &p->cpus_allowed);
+		cpu = cpumask_any_and(cpu_active_mask, p->cpus_ptr);
 		if (cpu >= nr_cpu_ids) {
 			/*
 			 * Failed to find any suitable CPU.
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1056 @ void init_dl_task_timer(struct sched_dl_
 {
 	struct hrtimer *timer = &dl_se->dl_timer;
 
-	hrtimer_init(timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+	hrtimer_init(timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
 	timer->function = dl_task_timer;
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1827 @ static void set_curr_task_dl(struct rq *
 static int pick_dl_task(struct rq *rq, struct task_struct *p, int cpu)
 {
 	if (!task_running(rq, p) &&
-	    cpumask_test_cpu(cpu, &p->cpus_allowed))
+	    cpumask_test_cpu(cpu, p->cpus_ptr))
 		return 1;
 	return 0;
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1977 @ static struct rq *find_lock_later_rq(str
 		/* Retry if something changed. */
 		if (double_lock_balance(rq, later_rq)) {
 			if (unlikely(task_rq(task) != rq ||
-				     !cpumask_test_cpu(later_rq->cpu, &task->cpus_allowed) ||
+				     !cpumask_test_cpu(later_rq->cpu, task->cpus_ptr) ||
 				     task_running(rq, task) ||
 				     !dl_task(task) ||
 				     !task_on_rq_queued(task))) {
Index: linux-5.0.19-rt11/kernel/sched/debug.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/sched/debug.c
+++ linux-5.0.19-rt11/kernel/sched/debug.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:985 @ void proc_sched_show_task(struct task_st
 		P(dl.runtime);
 		P(dl.deadline);
 	}
+#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE)
+	P(migrate_disable);
+#endif
+	P(nr_cpus_allowed);
 #undef PN_SCHEDSTAT
 #undef PN
 #undef __PN
Index: linux-5.0.19-rt11/kernel/sched/fair.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/sched/fair.c
+++ linux-5.0.19-rt11/kernel/sched/fair.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1611 @ static void task_numa_compare(struct tas
 	 * be incurred if the tasks were swapped.
 	 */
 	/* Skip this swap candidate if cannot move to the source cpu */
-	if (!cpumask_test_cpu(env->src_cpu, &cur->cpus_allowed))
+	if (!cpumask_test_cpu(env->src_cpu, cur->cpus_ptr))
 		goto unlock;
 
 	/*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1708 @ static void task_numa_find_cpu(struct ta
 
 	for_each_cpu(cpu, cpumask_of_node(env->dst_nid)) {
 		/* Skip this CPU if the source task cannot migrate */
-		if (!cpumask_test_cpu(cpu, &env->p->cpus_allowed))
+		if (!cpumask_test_cpu(cpu, env->p->cpus_ptr))
 			continue;
 
 		env->dst_cpu = cpu;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4036 @ check_preempt_tick(struct cfs_rq *cfs_rq
 	ideal_runtime = sched_slice(cfs_rq, curr);
 	delta_exec = curr->sum_exec_runtime - curr->prev_sum_exec_runtime;
 	if (delta_exec > ideal_runtime) {
-		resched_curr(rq_of(cfs_rq));
+		resched_curr_lazy(rq_of(cfs_rq));
 		/*
 		 * The current task ran long enough, ensure it doesn't get
 		 * re-elected due to buddy favours.
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4060 @ check_preempt_tick(struct cfs_rq *cfs_rq
 		return;
 
 	if (delta > ideal_runtime)
-		resched_curr(rq_of(cfs_rq));
+		resched_curr_lazy(rq_of(cfs_rq));
 }
 
 static void
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4202 @ entity_tick(struct cfs_rq *cfs_rq, struc
 	 * validating it and just reschedule.
 	 */
 	if (queued) {
-		resched_curr(rq_of(cfs_rq));
+		resched_curr_lazy(rq_of(cfs_rq));
 		return;
 	}
 	/*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4386 @ static void __account_cfs_rq_runtime(str
 	 * hierarchy can be throttled
 	 */
 	if (!assign_cfs_rq_runtime(cfs_rq) && likely(cfs_rq->curr))
-		resched_curr(rq_of(cfs_rq));
+		resched_curr_lazy(rq_of(cfs_rq));
 }
 
 static __always_inline
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4572 @ static u64 distribute_cfs_runtime(struct
 		struct rq *rq = rq_of(cfs_rq);
 		struct rq_flags rf;
 
-		rq_lock(rq, &rf);
+		rq_lock_irqsave(rq, &rf);
 		if (!cfs_rq_throttled(cfs_rq))
 			goto next;
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4589 @ static u64 distribute_cfs_runtime(struct
 			unthrottle_cfs_rq(cfs_rq);
 
 next:
-		rq_unlock(rq, &rf);
+		rq_unlock_irqrestore(rq, &rf);
 
 		if (!remaining)
 			break;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4605 @ next:
  * period the timer is deactivated until scheduling resumes; cfs_b->idle is
  * used to track this state.
  */
-static int do_sched_cfs_period_timer(struct cfs_bandwidth *cfs_b, int overrun)
+static int do_sched_cfs_period_timer(struct cfs_bandwidth *cfs_b, int overrun, unsigned long flags)
 {
 	u64 runtime, runtime_expires;
 	int throttled;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4647 @ static int do_sched_cfs_period_timer(str
 	while (throttled && cfs_b->runtime > 0 && !cfs_b->distribute_running) {
 		runtime = cfs_b->runtime;
 		cfs_b->distribute_running = 1;
-		raw_spin_unlock(&cfs_b->lock);
+		raw_spin_unlock_irqrestore(&cfs_b->lock, flags);
 		/* we can't nest cfs_b->lock while distributing bandwidth */
 		runtime = distribute_cfs_runtime(cfs_b, runtime,
 						 runtime_expires);
-		raw_spin_lock(&cfs_b->lock);
+		raw_spin_lock_irqsave(&cfs_b->lock, flags);
 
 		cfs_b->distribute_running = 0;
 		throttled = !list_empty(&cfs_b->throttled_cfs_rq);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4760 @ static __always_inline void return_cfs_r
 static void do_sched_cfs_slack_timer(struct cfs_bandwidth *cfs_b)
 {
 	u64 runtime = 0, slice = sched_cfs_bandwidth_slice();
+	unsigned long flags;
 	u64 expires;
 
 	/* confirm we're still not at a refresh boundary */
-	raw_spin_lock(&cfs_b->lock);
+	raw_spin_lock_irqsave(&cfs_b->lock, flags);
 	if (cfs_b->distribute_running) {
-		raw_spin_unlock(&cfs_b->lock);
+		raw_spin_unlock_irqrestore(&cfs_b->lock, flags);
 		return;
 	}
 
 	if (runtime_refresh_within(cfs_b, min_bandwidth_expiration)) {
-		raw_spin_unlock(&cfs_b->lock);
+		raw_spin_unlock_irqrestore(&cfs_b->lock, flags);
 		return;
 	}
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4782 @ static void do_sched_cfs_slack_timer(str
 	if (runtime)
 		cfs_b->distribute_running = 1;
 
-	raw_spin_unlock(&cfs_b->lock);
+	raw_spin_unlock_irqrestore(&cfs_b->lock, flags);
 
 	if (!runtime)
 		return;
 
 	runtime = distribute_cfs_runtime(cfs_b, runtime, expires);
 
-	raw_spin_lock(&cfs_b->lock);
+	raw_spin_lock_irqsave(&cfs_b->lock, flags);
 	if (expires == cfs_b->runtime_expires)
 		lsub_positive(&cfs_b->runtime, runtime);
 	cfs_b->distribute_running = 0;
-	raw_spin_unlock(&cfs_b->lock);
+	raw_spin_unlock_irqrestore(&cfs_b->lock, flags);
 }
 
 /*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4873 @ static enum hrtimer_restart sched_cfs_pe
 {
 	struct cfs_bandwidth *cfs_b =
 		container_of(timer, struct cfs_bandwidth, period_timer);
+	unsigned long flags;
 	int overrun;
 	int idle = 0;
 	int count = 0;
 
-	raw_spin_lock(&cfs_b->lock);
+	raw_spin_lock_irqsave(&cfs_b->lock, flags);
 	for (;;) {
 		overrun = hrtimer_forward_now(timer, cfs_b->period);
 		if (!overrun)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4906 @ static enum hrtimer_restart sched_cfs_pe
 			count = 0;
 		}
 
-		idle = do_sched_cfs_period_timer(cfs_b, overrun);
+		idle = do_sched_cfs_period_timer(cfs_b, overrun, flags);
 	}
 	if (idle)
 		cfs_b->period_active = 0;
-	raw_spin_unlock(&cfs_b->lock);
+	raw_spin_unlock_irqrestore(&cfs_b->lock, flags);
 
 	return idle ? HRTIMER_NORESTART : HRTIMER_RESTART;
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:5082 @ static void hrtick_start_fair(struct rq
 
 		if (delta < 0) {
 			if (rq->curr == p)
-				resched_curr(rq);
+				resched_curr_lazy(rq);
 			return;
 		}
 		hrtick_start(rq, delta);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:5792 @ find_idlest_group(struct sched_domain *s
 
 		/* Skip over this group if it has no CPUs allowed */
 		if (!cpumask_intersects(sched_group_span(group),
-					&p->cpus_allowed))
+					p->cpus_ptr))
 			continue;
 
 		local_group = cpumask_test_cpu(this_cpu,
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:5924 @ find_idlest_group_cpu(struct sched_group
 		return cpumask_first(sched_group_span(group));
 
 	/* Traverse only the allowed CPUs */
-	for_each_cpu_and(i, sched_group_span(group), &p->cpus_allowed) {
+	for_each_cpu_and(i, sched_group_span(group), p->cpus_ptr) {
 		if (available_idle_cpu(i)) {
 			struct rq *rq = cpu_rq(i);
 			struct cpuidle_state *idle = idle_get_state(rq);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:5964 @ static inline int find_idlest_cpu(struct
 {
 	int new_cpu = cpu;
 
-	if (!cpumask_intersects(sched_domain_span(sd), &p->cpus_allowed))
+	if (!cpumask_intersects(sched_domain_span(sd), p->cpus_ptr))
 		return prev_cpu;
 
 	/*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:6081 @ static int select_idle_core(struct task_
 	if (!test_idle_cores(target, false))
 		return -1;
 
-	cpumask_and(cpus, sched_domain_span(sd), &p->cpus_allowed);
+	cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
 
 	for_each_cpu_wrap(core, cpus, target) {
 		bool idle = true;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:6115 @ static int select_idle_smt(struct task_s
 		return -1;
 
 	for_each_cpu(cpu, cpu_smt_mask(target)) {
-		if (!cpumask_test_cpu(cpu, &p->cpus_allowed))
+		if (!cpumask_test_cpu(cpu, p->cpus_ptr))
 			continue;
 		if (available_idle_cpu(cpu))
 			return cpu;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:6178 @ static int select_idle_cpu(struct task_s
 	for_each_cpu_wrap(cpu, sched_domain_span(sd), target) {
 		if (!--nr)
 			return -1;
-		if (!cpumask_test_cpu(cpu, &p->cpus_allowed))
+		if (!cpumask_test_cpu(cpu, p->cpus_ptr))
 			continue;
 		if (available_idle_cpu(cpu))
 			break;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:6215 @ static int select_idle_sibling(struct ta
 	    recent_used_cpu != target &&
 	    cpus_share_cache(recent_used_cpu, target) &&
 	    available_idle_cpu(recent_used_cpu) &&
-	    cpumask_test_cpu(p->recent_used_cpu, &p->cpus_allowed)) {
+	    cpumask_test_cpu(p->recent_used_cpu, p->cpus_ptr)) {
 		/*
 		 * Replace recent_used_cpu with prev as it is a potential
 		 * candidate for the next wake:
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:6561 @ static int find_energy_efficient_cpu(str
 		int max_spare_cap_cpu = -1;
 
 		for_each_cpu_and(cpu, perf_domain_span(pd), sched_domain_span(sd)) {
-			if (!cpumask_test_cpu(cpu, &p->cpus_allowed))
+			if (!cpumask_test_cpu(cpu, p->cpus_ptr))
 				continue;
 
 			/* Skip CPUs that will be overutilized. */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:6650 @ select_task_rq_fair(struct task_struct *
 		}
 
 		want_affine = !wake_wide(p) && !wake_cap(p, cpu, prev_cpu) &&
-			      cpumask_test_cpu(cpu, &p->cpus_allowed);
+			      cpumask_test_cpu(cpu, p->cpus_ptr);
 	}
 
 	rcu_read_lock();
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:6911 @ static void check_preempt_wakeup(struct
 	return;
 
 preempt:
-	resched_curr(rq);
+	resched_curr_lazy(rq);
 	/*
 	 * Only set the backward buddy when the current task is still
 	 * on the rq. This can happen when a wakeup gets interleaved
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:7400 @ int can_migrate_task(struct task_struct
 	/*
 	 * We do not migrate tasks that are:
 	 * 1) throttled_lb_pair, or
-	 * 2) cannot be migrated to this CPU due to cpus_allowed, or
+	 * 2) cannot be migrated to this CPU due to cpus_ptr, or
 	 * 3) running (obviously), or
 	 * 4) are cache-hot on their current CPU.
 	 */
 	if (throttled_lb_pair(task_group(p), env->src_cpu, env->dst_cpu))
 		return 0;
 
-	if (!cpumask_test_cpu(env->dst_cpu, &p->cpus_allowed)) {
+	if (!cpumask_test_cpu(env->dst_cpu, p->cpus_ptr)) {
 		int cpu;
 
 		schedstat_inc(p->se.statistics.nr_failed_migrations_affine);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:7427 @ int can_migrate_task(struct task_struct
 
 		/* Prevent to re-select dst_cpu via env's CPUs: */
 		for_each_cpu_and(cpu, env->dst_grpmask, env->cpus) {
-			if (cpumask_test_cpu(cpu, &p->cpus_allowed)) {
+			if (cpumask_test_cpu(cpu, p->cpus_ptr)) {
 				env->flags |= LBF_DST_PINNED;
 				env->new_dst_cpu = cpu;
 				break;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:8024 @ check_cpu_capacity(struct rq *rq, struct
 
 /*
  * Group imbalance indicates (and tries to solve) the problem where balancing
- * groups is inadequate due to ->cpus_allowed constraints.
+ * groups is inadequate due to ->cpus_ptr constraints.
  *
  * Imagine a situation of two groups of 4 CPUs each and 4 tasks each with a
  * cpumask covering 1 CPU of the first group and 3 CPUs of the second group.
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:8695 @ static struct sched_group *find_busiest_
 	/*
 	 * If the busiest group is imbalanced the below checks don't
 	 * work because they assume all things are equal, which typically
-	 * isn't true due to cpus_allowed constraints and the like.
+	 * isn't true due to cpus_ptr constraints and the like.
 	 */
 	if (busiest->group_type == group_imbalanced)
 		goto force_balance;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:9123 @ more_balance:
 			 * if the curr task on busiest CPU can't be
 			 * moved to this_cpu:
 			 */
-			if (!cpumask_test_cpu(this_cpu, &busiest->curr->cpus_allowed)) {
+			if (!cpumask_test_cpu(this_cpu, busiest->curr->cpus_ptr)) {
 				raw_spin_unlock_irqrestore(&busiest->lock,
 							    flags);
 				env.flags |= LBF_ALL_PINNED;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:10111 @ static void task_fork_fair(struct task_s
 		 * 'current' within the tree based on its new key value.
 		 */
 		swap(curr->vruntime, se->vruntime);
-		resched_curr(rq);
+		resched_curr_lazy(rq);
 	}
 
 	se->vruntime -= cfs_rq->min_vruntime;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:10135 @ prio_changed_fair(struct rq *rq, struct
 	 */
 	if (rq->curr == p) {
 		if (p->prio > oldprio)
-			resched_curr(rq);
+			resched_curr_lazy(rq);
 	} else
 		check_preempt_curr(rq, p, 0);
 }
Index: linux-5.0.19-rt11/kernel/sched/features.h
===================================================================
--- linux-5.0.19-rt11.orig/kernel/sched/features.h
+++ linux-5.0.19-rt11/kernel/sched/features.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:49 @ SCHED_FEAT(LB_BIAS, false)
  */
 SCHED_FEAT(NONTASK_CAPACITY, true)
 
+#ifdef CONFIG_PREEMPT_RT_FULL
+SCHED_FEAT(TTWU_QUEUE, false)
+# ifdef CONFIG_PREEMPT_LAZY
+SCHED_FEAT(PREEMPT_LAZY, true)
+# endif
+#else
+
 /*
  * Queue remote wakeups on the target CPU and process them
  * using the scheduler IPI. Reduces rq->lock contention/bounces.
  */
 SCHED_FEAT(TTWU_QUEUE, true)
+#endif
 
 /*
  * When doing wakeups, attempt to limit superfluous scans of the LLC domain.
Index: linux-5.0.19-rt11/kernel/sched/rt.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/sched/rt.c
+++ linux-5.0.19-rt11/kernel/sched/rt.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:48 @ void init_rt_bandwidth(struct rt_bandwid
 
 	raw_spin_lock_init(&rt_b->rt_runtime_lock);
 
-	hrtimer_init(&rt_b->rt_period_timer,
-			CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+	hrtimer_init(&rt_b->rt_period_timer, CLOCK_MONOTONIC,
+		     HRTIMER_MODE_REL_HARD);
 	rt_b->rt_period_timer.function = sched_rt_period_timer;
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1617 @ static void put_prev_task_rt(struct rq *
 static int pick_rt_task(struct rq *rq, struct task_struct *p, int cpu)
 {
 	if (!task_running(rq, p) &&
-	    cpumask_test_cpu(cpu, &p->cpus_allowed))
+	    cpumask_test_cpu(cpu, p->cpus_ptr))
 		return 1;
 
 	return 0;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1754 @ static struct rq *find_lock_lowest_rq(st
 			 * Also make sure that it wasn't scheduled on its rq.
 			 */
 			if (unlikely(task_rq(task) != rq ||
-				     !cpumask_test_cpu(lowest_rq->cpu, &task->cpus_allowed) ||
+				     !cpumask_test_cpu(lowest_rq->cpu, task->cpus_ptr) ||
 				     task_running(rq, task) ||
 				     !rt_task(task) ||
 				     !task_on_rq_queued(task))) {
Index: linux-5.0.19-rt11/kernel/sched/sched.h
===================================================================
--- linux-5.0.19-rt11.orig/kernel/sched/sched.h
+++ linux-5.0.19-rt11/kernel/sched/sched.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1575 @ static inline int task_on_rq_migrating(s
 #define WF_SYNC			0x01		/* Waker goes to sleep after wakeup */
 #define WF_FORK			0x02		/* Child wakeup after fork */
 #define WF_MIGRATED		0x4		/* Internal use, task got migrated */
+#define WF_LOCK_SLEEPER		0x08		/* wakeup spinlock "sleeper" */
 
 /*
  * To aid in avoiding the subversion of "niceness" due to uneven distribution
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1770 @ extern void reweight_task(struct task_st
 extern void resched_curr(struct rq *rq);
 extern void resched_cpu(int cpu);
 
+#ifdef CONFIG_PREEMPT_LAZY
+extern void resched_curr_lazy(struct rq *rq);
+#else
+static inline void resched_curr_lazy(struct rq *rq)
+{
+	resched_curr(rq);
+}
+#endif
+
 extern struct rt_bandwidth def_rt_bandwidth;
 extern void init_rt_bandwidth(struct rt_bandwidth *rt_b, u64 period, u64 runtime);
 
Index: linux-5.0.19-rt11/kernel/sched/swait.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/sched/swait.c
+++ linux-5.0.19-rt11/kernel/sched/swait.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:35 @ void swake_up_locked(struct swait_queue_
 }
 EXPORT_SYMBOL(swake_up_locked);
 
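+/*
+ * Wake every waiter while q->lock is already held. This is used by the
+ * RT swait based completion code; a complete_all() with more than a
+ * couple of waiters is unexpected, hence the WARN() below.
+ */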
+void swake_up_all_locked(struct swait_queue_head *q)
+{
+	struct swait_queue *curr;
+	int wakes = 0;
+
+	while (!list_empty(&q->task_list)) {
+
+		curr = list_first_entry(&q->task_list, typeof(*curr),
+					task_list);
+		wake_up_process(curr->task);
+		list_del_init(&curr->task_list);
+		wakes++;
+	}
+	if (pm_in_action)
+		return;
+	WARN(wakes > 2, "complete_all() with %d waiters\n", wakes);
+}
+EXPORT_SYMBOL(swake_up_all_locked);
+
 void swake_up_one(struct swait_queue_head *q)
 {
 	unsigned long flags;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:73 @ void swake_up_all(struct swait_queue_hea
 	struct swait_queue *curr;
 	LIST_HEAD(tmp);
 
+	WARN_ON(irqs_disabled());
 	raw_spin_lock_irq(&q->lock);
 	list_splice_init(&q->task_list, &tmp);
 	while (!list_empty(&tmp)) {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:92 @ void swake_up_all(struct swait_queue_hea
 }
 EXPORT_SYMBOL(swake_up_all);
 
-static void __prepare_to_swait(struct swait_queue_head *q, struct swait_queue *wait)
+void __prepare_to_swait(struct swait_queue_head *q, struct swait_queue *wait)
 {
 	wait->task = current;
 	if (list_empty(&wait->task_list))
Index: linux-5.0.19-rt11/kernel/sched/topology.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/sched/topology.c
+++ linux-5.0.19-rt11/kernel/sched/topology.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:476 @ static int init_rootdomain(struct root_d
 	rd->rto_cpu = -1;
 	raw_spin_lock_init(&rd->rto_lock);
 	init_irq_work(&rd->rto_push_work, rto_push_irq_work_func);
+	rd->rto_push_work.flags |= IRQ_WORK_HARD_IRQ;
 #endif
 
 	init_dl_bw(&rd->dl_bw);
Index: linux-5.0.19-rt11/kernel/signal.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/signal.c
+++ linux-5.0.19-rt11/kernel/signal.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:22 @
 #include <linux/sched/task.h>
 #include <linux/sched/task_stack.h>
 #include <linux/sched/cputime.h>
+#include <linux/sched/rt.h>
 #include <linux/fs.h>
 #include <linux/tty.h>
 #include <linux/binfmts.h>
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:397 @ void task_join_group_stop(struct task_st
 	}
 }
 
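+/*
+ * Each task can cache a single sigqueue entry (t->sigqueue_cache).
+ * The slot is claimed and refilled with cmpxchg so it can be used
+ * without taking locks; it is filled on free for RT-priority tasks
+ * (see sigqueue_free_current()) so that signal delivery does not
+ * need to touch the slab allocator.
+ */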
+static inline struct sigqueue *get_task_cache(struct task_struct *t)
+{
+	struct sigqueue *q = t->sigqueue_cache;
+
+	if (cmpxchg(&t->sigqueue_cache, q, NULL) != q)
+		return NULL;
+	return q;
+}
+
+static inline int put_task_cache(struct task_struct *t, struct sigqueue *q)
+{
+	if (cmpxchg(&t->sigqueue_cache, NULL, q) == NULL)
+		return 0;
+	return 1;
+}
+
 /*
  * allocate a new signal queue record
  * - this may be called without locks if and only if t == current, otherwise an
  *   appropriate lock must be held to stop the target task from exiting
  */
 static struct sigqueue *
-__sigqueue_alloc(int sig, struct task_struct *t, gfp_t flags, int override_rlimit)
+__sigqueue_do_alloc(int sig, struct task_struct *t, gfp_t flags,
+		    int override_rlimit, int fromslab)
 {
 	struct sigqueue *q = NULL;
 	struct user_struct *user;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:437 @ __sigqueue_alloc(int sig, struct task_st
 	if (override_rlimit ||
 	    atomic_read(&user->sigpending) <=
 			task_rlimit(t, RLIMIT_SIGPENDING)) {
-		q = kmem_cache_alloc(sigqueue_cachep, flags);
+		if (!fromslab)
+			q = get_task_cache(t);
+		if (!q)
+			q = kmem_cache_alloc(sigqueue_cachep, flags);
 	} else {
 		print_dropped_signal(sig);
 	}
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:457 @ __sigqueue_alloc(int sig, struct task_st
 	return q;
 }
 
+static struct sigqueue *
+__sigqueue_alloc(int sig, struct task_struct *t, gfp_t flags,
+		 int override_rlimit)
+{
+	return __sigqueue_do_alloc(sig, t, flags, override_rlimit, 0);
+}
+
 static void __sigqueue_free(struct sigqueue *q)
 {
 	if (q->flags & SIGQUEUE_PREALLOC)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:473 @ static void __sigqueue_free(struct sigqu
 	kmem_cache_free(sigqueue_cachep, q);
 }
 
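+/*
+ * Free a sigqueue entry from the dequeue path of the current task.
+ * For RT-priority tasks try to park it in the per-task cache instead
+ * of returning it to the slab allocator.
+ */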
+static void sigqueue_free_current(struct sigqueue *q)
+{
+	struct user_struct *up;
+
+	if (q->flags & SIGQUEUE_PREALLOC)
+		return;
+
+	up = q->user;
+	if (rt_prio(current->normal_prio) && !put_task_cache(current, q)) {
+		atomic_dec(&up->sigpending);
+		free_uid(up);
+	} else
+		__sigqueue_free(q);
+}
+
 void flush_sigqueue(struct sigpending *queue)
 {
 	struct sigqueue *q;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:501 @ void flush_sigqueue(struct sigpending *q
 }
 
 /*
+ * Called from __exit_signal. Flush tsk->pending and
+ * tsk->sigqueue_cache
+ */
+void flush_task_sigqueue(struct task_struct *tsk)
+{
+	struct sigqueue *q;
+
+	flush_sigqueue(&tsk->pending);
+
+	q = get_task_cache(tsk);
+	if (q)
+		kmem_cache_free(sigqueue_cachep, q);
+}
+
+/*
  * Flush all pending signals for this kthread.
  */
 void flush_signals(struct task_struct *t)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:639 @ still_pending:
 			(info->si_code == SI_TIMER) &&
 			(info->si_sys_private);
 
-		__sigqueue_free(first);
+		sigqueue_free_current(first);
 	} else {
 		/*
 		 * Ok, it wasn't in the queue.  This must be
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:676 @ int dequeue_signal(struct task_struct *t
 	bool resched_timer = false;
 	int signr;
 
+	WARN_ON_ONCE(tsk != current);
+
 	/* We only dequeue private signals from ourselves, we don't let
 	 * signalfd steal them
 	 */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1331 @ int do_send_sig_info(int sig, struct ker
  * We don't want to have recursive SIGSEGV's etc, for example,
  * that is why we also clear SIGNAL_UNKILLABLE.
  */
-int
-force_sig_info(int sig, struct kernel_siginfo *info, struct task_struct *t)
+static int
+do_force_sig_info(int sig, struct kernel_siginfo *info, struct task_struct *t)
 {
 	unsigned long int flags;
 	int ret, blocked, ignored;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1361 @ force_sig_info(int sig, struct kernel_si
 	return ret;
 }
 
+int force_sig_info(int sig, struct kernel_siginfo *info, struct task_struct *t)
+{
+/*
+ * On some archs, PREEMPT_RT has to delay sending a signal from a trap
+ * since it cannot enable preemption, and the signal code's spin_locks
+ * turn into mutexes. Instead, it must set TIF_NOTIFY_RESUME which will
+ * send the signal on exit of the trap.
+ */
+#ifdef ARCH_RT_DELAYS_SIGNAL_SEND
+	if (in_atomic()) {
+		if (WARN_ON_ONCE(t != current))
+			return 0;
+		if (WARN_ON_ONCE(t->forced_info.si_signo))
+			return 0;
+
+		if (is_si_special(info)) {
+			WARN_ON_ONCE(info != SEND_SIG_PRIV);
+			t->forced_info.si_signo = sig;
+			t->forced_info.si_errno = 0;
+			t->forced_info.si_code = SI_KERNEL;
+			t->forced_info.si_pid = 0;
+			t->forced_info.si_uid = 0;
+		} else {
+			t->forced_info = *info;
+		}
+
+		set_tsk_thread_flag(t, TIF_NOTIFY_RESUME);
+		return 0;
+	}
+#endif
+	return do_force_sig_info(sig, info, t);
+}
+
 /*
  * Nuke all other threads in the group.
  */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1813 @ EXPORT_SYMBOL(kill_pid);
  */
 struct sigqueue *sigqueue_alloc(void)
 {
-	struct sigqueue *q = __sigqueue_alloc(-1, current, GFP_KERNEL, 0);
+	/* Preallocated sigqueue objects always come from the slab cache! */
+	struct sigqueue *q = __sigqueue_do_alloc(-1, current, GFP_KERNEL, 0, 1);
 
 	if (q)
 		q->flags |= SIGQUEUE_PREALLOC;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2194 @ static void ptrace_stop(int exit_code, i
 		if (gstop_done && ptrace_reparented(current))
 			do_notify_parent_cldstop(current, false, why);
 
-		/*
-		 * Don't want to allow preemption here, because
-		 * sys_ptrace() needs this task to be inactive.
-		 *
-		 * XXX: implement read_unlock_no_resched().
-		 */
-		preempt_disable();
 		read_unlock(&tasklist_lock);
-		preempt_enable_no_resched();
 		freezable_schedule();
 	} else {
 		/*
Index: linux-5.0.19-rt11/kernel/softirq.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/softirq.c
+++ linux-5.0.19-rt11/kernel/softirq.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:29 @
 #include <linux/smpboot.h>
 #include <linux/tick.h>
 #include <linux/irq.h>
+#include <linux/locallock.h>
 
 #define CREATE_TRACE_POINTS
 #include <trace/events/irq.h>
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:106 @ static bool ksoftirqd_running(unsigned l
  * softirq and whether we just have bh disabled.
  */
 
+#ifdef CONFIG_PREEMPT_RT_FULL
+static DEFINE_LOCAL_IRQ_LOCK(bh_lock);
+static DEFINE_PER_CPU(long, softirq_counter);
+
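+/*
+ * On RT, BH-disabled sections are preemptible: bh_lock is a per-CPU
+ * local lock taken by local_bh_disable() when called from preemptible
+ * context, and softirq_counter tracks the BH-disable nesting depth on
+ * this CPU.
+ */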
+void __local_bh_disable_ip(unsigned long ip, unsigned int cnt)
+{
+	unsigned long __maybe_unused flags;
+	long soft_cnt;
+
+	WARN_ON_ONCE(in_irq());
+	if (!in_atomic())
+		local_lock(bh_lock);
+	soft_cnt = this_cpu_inc_return(softirq_counter);
+	WARN_ON_ONCE(soft_cnt == 0);
+	current->softirq_count += SOFTIRQ_DISABLE_OFFSET;
+
+#ifdef CONFIG_TRACE_IRQFLAGS
+	local_irq_save(flags);
+	if (soft_cnt == 1)
+		trace_softirqs_off(ip);
+	local_irq_restore(flags);
+#endif
+}
+EXPORT_SYMBOL(__local_bh_disable_ip);
+
+static void local_bh_disable_rt(void)
+{
+	local_bh_disable();
+}
+
+void _local_bh_enable(void)
+{
+	unsigned long __maybe_unused flags;
+	long soft_cnt;
+
+	soft_cnt = this_cpu_dec_return(softirq_counter);
+	WARN_ON_ONCE(soft_cnt < 0);
+
+#ifdef CONFIG_TRACE_IRQFLAGS
+	local_irq_save(flags);
+	if (soft_cnt == 0)
+		trace_softirqs_on(_RET_IP_);
+	local_irq_restore(flags);
+#endif
+
+	current->softirq_count -= SOFTIRQ_DISABLE_OFFSET;
+	if (!in_atomic())
+		local_unlock(bh_lock);
+}
+
+void _local_bh_enable_rt(void)
+{
+	_local_bh_enable();
+}
+
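+/*
+ * Re-enable BHs. When this drops the last BH-disable reference on this
+ * CPU, process any pending softirqs directly (or wake ksoftirqd if we
+ * cannot) before releasing bh_lock.
+ */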
+void __local_bh_enable_ip(unsigned long ip, unsigned int cnt)
+{
+	u32 pending;
+	long count;
+
+	WARN_ON_ONCE(in_irq());
+	lockdep_assert_irqs_enabled();
+
+	local_irq_disable();
+	count = this_cpu_read(softirq_counter);
+
+	if (unlikely(count == 1)) {
+		pending = local_softirq_pending();
+		if (pending && !ksoftirqd_running(pending)) {
+			if (!in_atomic())
+				__do_softirq();
+			else
+				wakeup_softirqd();
+		}
+		trace_softirqs_on(ip);
+	}
+	count = this_cpu_dec_return(softirq_counter);
+	WARN_ON_ONCE(count < 0);
+	local_irq_enable();
+
+	if (!in_atomic())
+		local_unlock(bh_lock);
+
+	current->softirq_count -= SOFTIRQ_DISABLE_OFFSET;
+	preempt_check_resched();
+}
+EXPORT_SYMBOL(__local_bh_enable_ip);
+
+#else
+static void local_bh_disable_rt(void) { }
+static void _local_bh_enable_rt(void) { }
+
 /*
  * This one is for softirq.c-internal use,
  * where hardirqs are disabled legitimately:
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:292 @ void __local_bh_enable_ip(unsigned long
 	preempt_check_resched();
 }
 EXPORT_SYMBOL(__local_bh_enable_ip);
+#endif
 
 /*
  * We restart softirq processing for at most MAX_SOFTIRQ_RESTART times,
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:363 @ asmlinkage __visible void __softirq_entr
 	pending = local_softirq_pending();
 	account_irq_enter_time(current);
 
+#ifdef CONFIG_PREEMPT_RT_FULL
+	current->softirq_count |= SOFTIRQ_OFFSET;
+#else
 	__local_bh_disable_ip(_RET_IP_, SOFTIRQ_OFFSET);
+#endif
 	in_hardirq = lockdep_softirq_start();
 
 restart:
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:401 @ restart:
 		h++;
 		pending >>= softirq_bit;
 	}
-
+#ifndef CONFIG_PREEMPT_RT_FULL
 	if (__this_cpu_read(ksoftirqd) == current)
 		rcu_softirq_qs();
+#endif
 	local_irq_disable();
 
 	pending = local_softirq_pending();
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:418 @ restart:
 
 	lockdep_softirq_end(in_hardirq);
 	account_irq_exit_time(current);
+#ifdef CONFIG_PREEMPT_RT_FULL
+	current->softirq_count &= ~SOFTIRQ_OFFSET;
+#else
 	__local_bh_enable(SOFTIRQ_OFFSET);
+#endif
 	WARN_ON_ONCE(in_interrupt());
 	current_restore_flags(old_flags, PF_MEMALLOC);
 }
 
+#ifndef CONFIG_PREEMPT_RT_FULL
 asmlinkage __visible void do_softirq(void)
 {
 	__u32 pending;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:445 @ asmlinkage __visible void do_softirq(voi
 
 	local_irq_restore(flags);
 }
+#endif
 
 /*
  * Enter an interrupt context.
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:466 @ void irq_enter(void)
 	__irq_enter();
 }
 
+#ifdef CONFIG_PREEMPT_RT_FULL
+
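+/*
+ * On RT, softirqs raised from hard interrupt context are handled by
+ * ksoftirqd. Only wake it if no BH-disabled section is active on this
+ * CPU; an active section takes care of the pending work itself when it
+ * re-enables BHs.
+ */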
+static inline void invoke_softirq(void)
+{
+	if (this_cpu_read(softirq_counter) == 0)
+		wakeup_softirqd();
+}
+
+#else
+
 static inline void invoke_softirq(void)
 {
 	if (ksoftirqd_running(local_softirq_pending()))
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:501 @ static inline void invoke_softirq(void)
 		wakeup_softirqd();
 	}
 }
+#endif
 
 static inline void tick_irq_exit(void)
 {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:539 @ void irq_exit(void)
 /*
  * This function must run with irqs disabled!
  */
+#ifdef CONFIG_PREEMPT_RT_FULL
+void raise_softirq_irqoff(unsigned int nr)
+{
+	__raise_softirq_irqoff(nr);
+
+	/*
+	 * If we're in a hard interrupt we let the irq return code deal
+	 * with the wakeup of ksoftirqd.
+	 */
+	if (in_irq())
+		return;
+	/*
+	 * If we are not in a BH-disabled section then we have to wake
+	 * ksoftirqd.
+	 */
+	if (this_cpu_read(softirq_counter) == 0)
+		wakeup_softirqd();
+}
+
+#else
+
 inline void raise_softirq_irqoff(unsigned int nr)
 {
 	__raise_softirq_irqoff(nr);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:577 @ inline void raise_softirq_irqoff(unsigne
 		wakeup_softirqd();
 }
 
+#endif
+
 void raise_softirq(unsigned int nr)
 {
 	unsigned long flags;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:706 @ void tasklet_kill(struct tasklet_struct
 
 	while (test_and_set_bit(TASKLET_STATE_SCHED, &t->state)) {
 		do {
-			yield();
+			local_bh_disable();
+			local_bh_enable();
 		} while (test_bit(TASKLET_STATE_SCHED, &t->state));
 	}
 	tasklet_unlock_wait(t);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:788 @ static int ksoftirqd_should_run(unsigned
 
 static void run_ksoftirqd(unsigned int cpu)
 {
+	local_bh_disable_rt();
 	local_irq_disable();
 	if (local_softirq_pending()) {
 		/*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:797 @ static void run_ksoftirqd(unsigned int c
 		 */
 		__do_softirq();
 		local_irq_enable();
+		_local_bh_enable_rt();
 		cond_resched();
 		return;
 	}
 	local_irq_enable();
+	_local_bh_enable_rt();
 }
 
 #ifdef CONFIG_HOTPLUG_CPU
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:876 @ static struct smp_hotplug_thread softirq
 
 static __init int spawn_ksoftirqd(void)
 {
+#ifdef CONFIG_PREEMPT_RT_FULL
+	int cpu;
+
+	for_each_possible_cpu(cpu)
+		lockdep_set_novalidate_class(per_cpu_ptr(&bh_lock.lock, cpu));
+#endif
+
 	cpuhp_setup_state_nocalls(CPUHP_SOFTIRQ_DEAD, "softirq:dead", NULL,
 				  takeover_tasklets);
 	BUG_ON(smpboot_register_percpu_thread(&softirq_threads));
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:891 @ static __init int spawn_ksoftirqd(void)
 }
 early_initcall(spawn_ksoftirqd);
 
+#ifdef CONFIG_PREEMPT_RT_FULL
+
+/*
+ * On preempt-rt, a context running softirqs might be blocked on a
+ * lock. There might be no other runnable task on this CPU because the
+ * lock owner runs on some other CPU. So we have to go into idle with
+ * the pending bit set. Therefore we need to check for this, otherwise we
+ * would warn about false positives, which confuses users and defeats the
+ * whole purpose of this test.
+ *
+ * This code is called with interrupts disabled.
+ */
+void softirq_check_pending_idle(void)
+{
+	struct task_struct *tsk = __this_cpu_read(ksoftirqd);
+	static int rate_limit;
+	bool okay = false;
+	u32 warnpending;
+
+	if (rate_limit >= 10)
+		return;
+
+	warnpending = local_softirq_pending() & SOFTIRQ_STOP_IDLE_MASK;
+	if (!warnpending)
+		return;
+
+	if (!tsk)
+		return;
+	/*
+	 * If ksoftirqd is blocked on a lock then we may go idle with pending
+	 * softirq.
+	 */
+	raw_spin_lock(&tsk->pi_lock);
+	if (tsk->pi_blocked_on || tsk->state == TASK_RUNNING ||
+	    (tsk->state == TASK_UNINTERRUPTIBLE && tsk->sleeping_lock)) {
+		okay = true;
+	}
+	raw_spin_unlock(&tsk->pi_lock);
+	if (okay)
+		return;
+	/*
+	 * The softirq lock is held in non-atomic context and the owner is
+	 * blocking on a lock. It will schedule softirqs once the counter goes
+	 * back to zero.
+	 */
+	if (this_cpu_read(softirq_counter) > 0)
+		return;
+
+	printk(KERN_ERR "NOHZ: local_softirq_pending %02x\n",
+	       warnpending);
+	rate_limit++;
+}
+
+#else
+
+void softirq_check_pending_idle(void)
+{
+	static int ratelimit;
+
+	if (ratelimit < 10 &&
+	    (local_softirq_pending() & SOFTIRQ_STOP_IDLE_MASK)) {
+		pr_warn("NOHZ: local_softirq_pending %02x\n",
+			(unsigned int) local_softirq_pending());
+		ratelimit++;
+	}
+}
+
+#endif
+
 /*
  * [ These __weak aliases are kept in a separate compilation unit, so that
  *   GCC does not inline them incorrectly. ]
Index: linux-5.0.19-rt11/kernel/time/alarmtimer.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/time/alarmtimer.c
+++ linux-5.0.19-rt11/kernel/time/alarmtimer.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:436 @ int alarm_cancel(struct alarm *alarm)
 		int ret = alarm_try_to_cancel(alarm);
 		if (ret >= 0)
 			return ret;
-		cpu_relax();
+		hrtimer_grab_expiry_lock(&alarm->timer);
 	}
 }
 EXPORT_SYMBOL_GPL(alarm_cancel);
Index: linux-5.0.19-rt11/kernel/time/hrtimer.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/time/hrtimer.c
+++ linux-5.0.19-rt11/kernel/time/hrtimer.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:933 @ u64 hrtimer_forward(struct hrtimer *time
 }
 EXPORT_SYMBOL_GPL(hrtimer_forward);
 
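+/*
+ * Wait for a possibly running softirq-based expiry callback of @timer
+ * to complete: the softirq expiry code holds softirq_expiry_lock while
+ * it runs the callbacks, so lock/unlock here cannot return before a
+ * currently executing callback has finished. Used instead of
+ * cpu_relax() busy-waiting when cancelling timers on RT.
+ */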
+void hrtimer_grab_expiry_lock(const struct hrtimer *timer)
+{
+	struct hrtimer_clock_base *base = timer->base;
+
+	if (base && base->cpu_base) {
+		spin_lock(&base->cpu_base->softirq_expiry_lock);
+		spin_unlock(&base->cpu_base->softirq_expiry_lock);
+	}
+}
+
 /*
  * enqueue_hrtimer - internal function to (re)start a timer
  *
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1112 @ void hrtimer_start_range_ns(struct hrtim
 	 * Check whether the HRTIMER_MODE_SOFT bit and hrtimer.is_soft
 	 * match.
 	 */
+#ifndef CONFIG_PREEMPT_RT_BASE
 	WARN_ON_ONCE(!(mode & HRTIMER_MODE_SOFT) ^ !timer->is_soft);
+#endif
 
 	base = lock_hrtimer_base(timer, &flags);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1177 @ int hrtimer_cancel(struct hrtimer *timer
 
 		if (ret >= 0)
 			return ret;
-		cpu_relax();
+		hrtimer_grab_expiry_lock(timer);
 	}
 }
 EXPORT_SYMBOL_GPL(hrtimer_cancel);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1274 @ static inline int hrtimer_clockid_to_bas
 static void __hrtimer_init(struct hrtimer *timer, clockid_t clock_id,
 			   enum hrtimer_mode mode)
 {
-	bool softtimer = !!(mode & HRTIMER_MODE_SOFT);
-	int base = softtimer ? HRTIMER_MAX_CLOCK_BASES / 2 : 0;
+	bool softtimer;
+	int base;
 	struct hrtimer_cpu_base *cpu_base;
 
+	softtimer = !!(mode & HRTIMER_MODE_SOFT);
+#ifdef CONFIG_PREEMPT_RT_FULL
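+	/*
+	 * On RT, timers that were not explicitly marked HARD expire in
+	 * softirq context so that their callbacks may use sleeping locks.
+	 */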
+	if (!softtimer && !(mode & HRTIMER_MODE_HARD))
+		softtimer = true;
+#endif
+	base = softtimer ? HRTIMER_MAX_CLOCK_BASES / 2 : 0;
+
 	memset(timer, 0, sizeof(struct hrtimer));
 
 	cpu_base = raw_cpu_ptr(&hrtimer_bases);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1481 @ static __latent_entropy void hrtimer_run
 	unsigned long flags;
 	ktime_t now;
 
+	spin_lock(&cpu_base->softirq_expiry_lock);
 	raw_spin_lock_irqsave(&cpu_base->lock, flags);
 
 	now = hrtimer_update_base(cpu_base);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1491 @ static __latent_entropy void hrtimer_run
 	hrtimer_update_softirq_timer(cpu_base, true);
 
 	raw_spin_unlock_irqrestore(&cpu_base->lock, flags);
+	spin_unlock(&cpu_base->softirq_expiry_lock);
 }
 
 #ifdef CONFIG_HIGH_RES_TIMERS
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1663 @ static enum hrtimer_restart hrtimer_wake
 	return HRTIMER_NORESTART;
 }
 
-void hrtimer_init_sleeper(struct hrtimer_sleeper *sl, struct task_struct *task)
-{
+static void __hrtimer_init_sleeper(struct hrtimer_sleeper *sl,
+				   clockid_t clock_id,
+				   enum hrtimer_mode mode,
+				   struct task_struct *task)
+{
+#ifdef CONFIG_PREEMPT_RT_FULL
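+	/*
+	 * RT: if the caller did not pick an expiry context, sleepers of
+	 * realtime tasks (and timers started before the system is fully
+	 * up) expire in hard interrupt context; everything else goes
+	 * through the softirq based expiry.
+	 */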
+	if (!(mode & (HRTIMER_MODE_SOFT | HRTIMER_MODE_HARD))) {
+		if (task_is_realtime(current) || system_state != SYSTEM_RUNNING)
+			mode |= HRTIMER_MODE_HARD;
+		else
+			mode |= HRTIMER_MODE_SOFT;
+	}
+#endif
+	__hrtimer_init(&sl->timer, clock_id, mode);
 	sl->timer.function = hrtimer_wakeup;
 	sl->task = task;
 }
+
+/**
+ * hrtimer_init_sleeper - initialize sleeper to the given clock
+ * @sl:		sleeper to be initialized
+ * @clock_id:	the clock to be used
+ * @mode:	timer mode abs/rel
+ * @task:	the task to wake up
+ */
+void hrtimer_init_sleeper(struct hrtimer_sleeper *sl, clockid_t clock_id,
+			  enum hrtimer_mode mode, struct task_struct *task)
+{
+	debug_init(&sl->timer, clock_id, mode);
+	__hrtimer_init_sleeper(sl, clock_id, mode, task);
+}
 EXPORT_SYMBOL_GPL(hrtimer_init_sleeper);
 
+#ifdef CONFIG_DEBUG_OBJECTS_TIMERS
+void hrtimer_init_sleeper_on_stack(struct hrtimer_sleeper *sl,
+				   clockid_t clock_id,
+				   enum hrtimer_mode mode,
+				   struct task_struct *task)
+{
+	debug_object_init_on_stack(&sl->timer, &hrtimer_debug_descr);
+	__hrtimer_init_sleeper(sl, clock_id, mode, task);
+}
+EXPORT_SYMBOL_GPL(hrtimer_init_sleeper_on_stack);
+#endif
+
 int nanosleep_copyout(struct restart_block *restart, struct timespec64 *ts)
 {
 	switch(restart->nanosleep.type) {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1732 @ static int __sched do_nanosleep(struct h
 {
 	struct restart_block *restart;
 
-	hrtimer_init_sleeper(t, current);
-
 	do {
 		set_current_state(TASK_INTERRUPTIBLE);
 		hrtimer_start_expires(&t->timer, mode);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1739 @ static int __sched do_nanosleep(struct h
 		if (likely(t->task))
 			freezable_schedule();
 
+		__set_current_state(TASK_RUNNING);
 		hrtimer_cancel(&t->timer);
 		mode = HRTIMER_MODE_ABS;
 
 	} while (t->task && !signal_pending(current));
 
-	__set_current_state(TASK_RUNNING);
 
 	if (!t->task)
 		return 0;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1768 @ static long __sched hrtimer_nanosleep_re
 	struct hrtimer_sleeper t;
 	int ret;
 
-	hrtimer_init_on_stack(&t.timer, restart->nanosleep.clockid,
-				HRTIMER_MODE_ABS);
+	hrtimer_init_sleeper_on_stack(&t, restart->nanosleep.clockid,
+				      HRTIMER_MODE_ABS, current);
 	hrtimer_set_expires_tv64(&t.timer, restart->nanosleep.expires);
-
 	ret = do_nanosleep(&t, HRTIMER_MODE_ABS);
 	destroy_hrtimer_on_stack(&t.timer);
 	return ret;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1788 @ long hrtimer_nanosleep(const struct time
 	if (dl_task(current) || rt_task(current))
 		slack = 0;
 
-	hrtimer_init_on_stack(&t.timer, clockid, mode);
+	hrtimer_init_sleeper_on_stack(&t, clockid, mode, current);
 	hrtimer_set_expires_range_ns(&t.timer, timespec64_to_ktime(*rqtp), slack);
 	ret = do_nanosleep(&t, mode);
 	if (ret != -ERESTART_RESTARTBLOCK)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1848 @ COMPAT_SYSCALL_DEFINE2(nanosleep, struct
 }
 #endif
 
+#ifdef CONFIG_PREEMPT_RT_FULL
+/*
+ * Sleep for 1 ms in the hope that whoever holds what we want lets it go.
+ */
+void cpu_chill(void)
+{
+	unsigned int freeze_flag = current->flags & PF_NOFREEZE;
+	struct task_struct *self = current;
+	ktime_t chill_time;
+
+	raw_spin_lock_irq(&self->pi_lock);
+	self->saved_state = self->state;
+	__set_current_state_no_track(TASK_UNINTERRUPTIBLE);
+	raw_spin_unlock_irq(&self->pi_lock);
+
+	chill_time = ktime_set(0, NSEC_PER_MSEC);
+
+	current->flags |= PF_NOFREEZE;
+	sleeping_lock_inc();
+	schedule_hrtimeout(&chill_time, HRTIMER_MODE_REL_HARD);
+	sleeping_lock_dec();
+	if (!freeze_flag)
+		current->flags &= ~PF_NOFREEZE;
+
+	raw_spin_lock_irq(&self->pi_lock);
+	__set_current_state_no_track(self->saved_state);
+	self->saved_state = TASK_RUNNING;
+	raw_spin_unlock_irq(&self->pi_lock);
+}
+EXPORT_SYMBOL(cpu_chill);
+#endif
+
 /*
  * Functions related to boot-time initialization:
  */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1901 @ int hrtimers_prepare_cpu(unsigned int cp
 	cpu_base->softirq_next_timer = NULL;
 	cpu_base->expires_next = KTIME_MAX;
 	cpu_base->softirq_expires_next = KTIME_MAX;
+	spin_lock_init(&cpu_base->softirq_expiry_lock);
 	return 0;
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2020 @ schedule_hrtimeout_range_clock(ktime_t *
 		return -EINTR;
 	}
 
-	hrtimer_init_on_stack(&t.timer, clock_id, mode);
+	hrtimer_init_sleeper_on_stack(&t, clock_id, mode, current);
 	hrtimer_set_expires_range_ns(&t.timer, *expires, delta);
 
-	hrtimer_init_sleeper(&t, current);
-
 	hrtimer_start_expires(&t.timer, mode);
 
 	if (likely(t.task))
Index: linux-5.0.19-rt11/kernel/time/itimer.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/time/itimer.c
+++ linux-5.0.19-rt11/kernel/time/itimer.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:216 @ again:
 		/* We are sharing ->siglock with it_real_fn() */
 		if (hrtimer_try_to_cancel(timer) < 0) {
 			spin_unlock_irq(&tsk->sighand->siglock);
+			hrtimer_grab_expiry_lock(timer);
 			goto again;
 		}
 		expires = timeval_to_ktime(value->it_value);
Index: linux-5.0.19-rt11/kernel/time/jiffies.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/time/jiffies.c
+++ linux-5.0.19-rt11/kernel/time/jiffies.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:61 @ static struct clocksource clocksource_ji
 	.max_cycles	= 10,
 };
 
-__cacheline_aligned_in_smp DEFINE_SEQLOCK(jiffies_lock);
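+/*
+ * The jiffies seqlock is split into a raw spinlock for the writer side
+ * and a plain seqcount for lockless readers, so that updating jiffies
+ * never involves a sleeping lock on PREEMPT_RT.
+ */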
+__cacheline_aligned_in_smp DEFINE_RAW_SPINLOCK(jiffies_lock);
+__cacheline_aligned_in_smp seqcount_t jiffies_seq;
 
 #if (BITS_PER_LONG < 64)
 u64 get_jiffies_64(void)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:71 @ u64 get_jiffies_64(void)
 	u64 ret;
 
 	do {
-		seq = read_seqbegin(&jiffies_lock);
+		seq = read_seqcount_begin(&jiffies_seq);
 		ret = jiffies_64;
-	} while (read_seqretry(&jiffies_lock, seq));
+	} while (read_seqcount_retry(&jiffies_seq, seq));
 	return ret;
 }
 EXPORT_SYMBOL(get_jiffies_64);
Index: linux-5.0.19-rt11/kernel/time/posix-cpu-timers.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/time/posix-cpu-timers.c
+++ linux-5.0.19-rt11/kernel/time/posix-cpu-timers.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:6 @
  * Implement CPU time clocks for the POSIX clock interface.
  */
 
+#include <uapi/linux/sched/types.h>
 #include <linux/sched/signal.h>
 #include <linux/sched/cputime.h>
+#include <linux/sched/rt.h>
 #include <linux/posix-timers.h>
 #include <linux/errno.h>
 #include <linux/math64.h>
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:20 @
 #include <linux/workqueue.h>
 #include <linux/compat.h>
 #include <linux/sched/deadline.h>
+#include <linux/smpboot.h>
 
 #include "posix-timers.h"
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:792 @ check_timers_list(struct list_head *time
 			return t->expires;
 
 		t->firing = 1;
+		t->firing_cpu = smp_processor_id();
 		list_move_tail(&t->entry, firing);
 	}
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1135 @ static inline int fastpath_timer_check(s
 	return 0;
 }
 
+static DEFINE_PER_CPU(spinlock_t, cpu_timer_expiry_lock) = __SPIN_LOCK_UNLOCKED(cpu_timer_expiry_lock);
+
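+/*
+ * Wait until the CPU that fired @timer has finished processing its
+ * firing list: __run_posix_cpu_timers() holds that CPU's
+ * cpu_timer_expiry_lock for the duration, so lock/unlock here blocks
+ * until the expiry handling is done.
+ */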
+void cpu_timers_grab_expiry_lock(struct k_itimer *timer)
+{
+	int cpu = timer->it.cpu.firing_cpu;
+
+	if (cpu >= 0) {
+		spinlock_t *expiry_lock = per_cpu_ptr(&cpu_timer_expiry_lock, cpu);
+
+		spin_lock_irq(expiry_lock);
+		spin_unlock_irq(expiry_lock);
+	}
+}
+
 /*
  * This is called from the timer interrupt handler.  The irq handler has
  * already updated our counts.  We need to check if any timers fire now.
  * Interrupts are disabled.
  */
-void run_posix_cpu_timers(struct task_struct *tsk)
+static void __run_posix_cpu_timers(struct task_struct *tsk)
 {
 	LIST_HEAD(firing);
 	struct k_itimer *timer, *next;
 	unsigned long flags;
-
-	lockdep_assert_irqs_disabled();
+	spinlock_t *expiry_lock;
 
 	/*
 	 * The fast path checks that there are no expired thread or thread
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1168 @ void run_posix_cpu_timers(struct task_st
 	if (!fastpath_timer_check(tsk))
 		return;
 
+	expiry_lock = this_cpu_ptr(&cpu_timer_expiry_lock);
+	spin_lock(expiry_lock);
+
 	if (!lock_task_sighand(tsk, &flags))
 		return;
 	/*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1205 @ void run_posix_cpu_timers(struct task_st
 		list_del_init(&timer->it.cpu.entry);
 		cpu_firing = timer->it.cpu.firing;
 		timer->it.cpu.firing = 0;
+		timer->it.cpu.firing_cpu = -1;
 		/*
 		 * The firing flag is -1 if we collided with a reset
 		 * of the timer, which already reported this
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1215 @ void run_posix_cpu_timers(struct task_st
 			cpu_timer_fire(timer);
 		spin_unlock(&timer->it_lock);
 	}
+	spin_unlock(expiry_lock);
 }
 
+#ifdef CONFIG_PREEMPT_RT_BASE
+#include <linux/kthread.h>
+#include <linux/cpu.h>
+DEFINE_PER_CPU(struct task_struct *, posix_timer_task);
+DEFINE_PER_CPU(struct task_struct *, posix_timer_tasklist);
+DEFINE_PER_CPU(bool, posix_timer_th_active);
+
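+/*
+ * On PREEMPT_RT the expiry work cannot run from the timer interrupt:
+ * it takes sighand->siglock and friends, which are sleeping locks.
+ * Tasks with expired timers are therefore queued on a per-CPU list and
+ * handed to a per-CPU SCHED_FIFO kthread (posixcputmr/<cpu>), which
+ * performs the actual expiry via __run_posix_cpu_timers().
+ */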
+static void posix_cpu_kthread_fn(unsigned int cpu)
+{
+	struct task_struct *tsk = NULL;
+	struct task_struct *next = NULL;
+
+	BUG_ON(per_cpu(posix_timer_task, cpu) != current);
+
+	/* grab task list */
+	raw_local_irq_disable();
+	tsk = per_cpu(posix_timer_tasklist, cpu);
+	per_cpu(posix_timer_tasklist, cpu) = NULL;
+	raw_local_irq_enable();
+
+	/* it's possible the list is empty, just return */
+	if (!tsk)
+		return;
+
+	/* Process task list */
+	while (1) {
+		/* save next */
+		next = tsk->posix_timer_list;
+
+		/*
+		 * Run the task's timers, clear its list pointer and
+		 * drop the reference.
+		 */
+		__run_posix_cpu_timers(tsk);
+		tsk->posix_timer_list = NULL;
+		put_task_struct(tsk);
+
+		/* check if this is the last on the list */
+		if (next == tsk)
+			break;
+		tsk = next;
+	}
+}
+
+static inline int __fastpath_timer_check(struct task_struct *tsk)
+{
+	/* tsk == current, ensure it is safe to use ->signal/sighand */
+	if (unlikely(tsk->exit_state))
+		return 0;
+
+	if (!task_cputime_zero(&tsk->cputime_expires))
+		return 1;
+
+	if (!task_cputime_zero(&tsk->signal->cputime_expires))
+		return 1;
+
+	return 0;
+}
+
+void run_posix_cpu_timers(struct task_struct *tsk)
+{
+	unsigned int cpu = smp_processor_id();
+	struct task_struct *tasklist;
+
+	BUG_ON(!irqs_disabled());
+
+	if (per_cpu(posix_timer_th_active, cpu) != true)
+		return;
+
+	/* get per-cpu references */
+	tasklist = per_cpu(posix_timer_tasklist, cpu);
+
+	/* check to see if we're already queued */
+	if (!tsk->posix_timer_list && __fastpath_timer_check(tsk)) {
+		get_task_struct(tsk);
+		if (tasklist) {
+			tsk->posix_timer_list = tasklist;
+		} else {
+			/*
+			 * The list is terminated by a self-pointing
+			 * task_struct
+			 */
+			tsk->posix_timer_list = tsk;
+		}
+		per_cpu(posix_timer_tasklist, cpu) = tsk;
+
+		wake_up_process(per_cpu(posix_timer_task, cpu));
+	}
+}
+
+static int posix_cpu_kthread_should_run(unsigned int cpu)
+{
+	return __this_cpu_read(posix_timer_tasklist) != NULL;
+}
+
+static void posix_cpu_kthread_park(unsigned int cpu)
+{
+	this_cpu_write(posix_timer_th_active, false);
+}
+
+static void posix_cpu_kthread_unpark(unsigned int cpu)
+{
+	this_cpu_write(posix_timer_th_active, true);
+}
+
+static void posix_cpu_kthread_setup(unsigned int cpu)
+{
+	struct sched_param sp;
+
+	sp.sched_priority = MAX_RT_PRIO - 1;
+	sched_setscheduler_nocheck(current, SCHED_FIFO, &sp);
+	posix_cpu_kthread_unpark(cpu);
+}
+
+static struct smp_hotplug_thread posix_cpu_thread = {
+	.store			= &posix_timer_task,
+	.thread_should_run	= posix_cpu_kthread_should_run,
+	.thread_fn		= posix_cpu_kthread_fn,
+	.thread_comm		= "posixcputmr/%u",
+	.setup			= posix_cpu_kthread_setup,
+	.park			= posix_cpu_kthread_park,
+	.unpark			= posix_cpu_kthread_unpark,
+};
+
+static int __init posix_cpu_thread_init(void)
+{
+	/* Start one for boot CPU. */
+	unsigned long cpu;
+	int ret;
+
+	/* init the per-cpu posix_timer_tasklist pointers */
+	for_each_possible_cpu(cpu)
+		per_cpu(posix_timer_tasklist, cpu) = NULL;
+
+	ret = smpboot_register_percpu_thread(&posix_cpu_thread);
+	WARN_ON(ret);
+
+	return 0;
+}
+early_initcall(posix_cpu_thread_init);
+#else /* CONFIG_PREEMPT_RT_BASE */
+void run_posix_cpu_timers(struct task_struct *tsk)
+{
+	lockdep_assert_irqs_disabled();
+	__run_posix_cpu_timers(tsk);
+}
+#endif /* CONFIG_PREEMPT_RT_BASE */
+
 /*
  * Set one of the process-wide special case CPU timers or RLIMIT_CPU.
  * The tsk->sighand->siglock must be held by the caller.
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1481 @ static int do_cpu_nanosleep(const clocki
 		spin_unlock_irq(&timer.it_lock);
 
 		while (error == TIMER_RETRY) {
+
+			cpu_timers_grab_expiry_lock(&timer);
 			/*
 			 * We need to handle case when timer was or is in the
 			 * middle of firing. In other cases we already freed
Index: linux-5.0.19-rt11/kernel/time/posix-timers.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/time/posix-timers.c
+++ linux-5.0.19-rt11/kernel/time/posix-timers.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:445 @ static struct k_itimer * alloc_posix_tim
 
 static void k_itimer_rcu_free(struct rcu_head *head)
 {
-	struct k_itimer *tmr = container_of(head, struct k_itimer, it.rcu);
+	struct k_itimer *tmr = container_of(head, struct k_itimer, rcu);
 
 	kmem_cache_free(posix_timers_cache, tmr);
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:462 @ static void release_posix_timer(struct k
 	}
 	put_pid(tmr->it_pid);
 	sigqueue_free(tmr->sigq);
-	call_rcu(&tmr->it.rcu, k_itimer_rcu_free);
+	call_rcu(&tmr->rcu, k_itimer_rcu_free);
 }
 
 static int common_timer_create(struct k_itimer *new_timer)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:808 @ static int common_hrtimer_try_to_cancel(
 	return hrtimer_try_to_cancel(&timr->it.real.timer);
 }
 
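+/*
+ * Wait for a concurrently running expiry handler of @timer by grabbing
+ * the expiry lock that matches the timer's backing clock, before a set
+ * or delete operation is retried.
+ */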
+static void timer_wait_for_callback(const struct k_clock *kc, struct k_itimer *timer)
+{
+	if (kc->timer_arm == common_hrtimer_arm)
+		hrtimer_grab_expiry_lock(&timer->it.real.timer);
+	else if (kc == &alarm_clock)
+		hrtimer_grab_expiry_lock(&timer->it.alarm.alarmtimer.timer);
+	else
+		/* posix-cpu-timers */
+		cpu_timers_grab_expiry_lock(timer);
+}
+
 /* Set a POSIX.1b interval timer. */
 int common_timer_set(struct k_itimer *timr, int flags,
 		     struct itimerspec64 *new_setting,
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:884 @ retry:
 	else
 		error = kc->timer_set(timr, flags, new_spec64, old_spec64);
 
-	unlock_timer(timr, flag);
 	if (error == TIMER_RETRY) {
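+		/*
+		 * The timer is unlocked while waiting for the running expiry
+		 * handler; hold the RCU read lock so the k_itimer cannot be
+		 * freed (k_itimer_rcu_free()) underneath us.
+		 */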
+		rcu_read_lock();
+		unlock_timer(timr, flag);
+		timer_wait_for_callback(kc, timr);
+		rcu_read_unlock();
 		old_spec64 = NULL;	// We already got the old time...
 		goto retry;
 	}
+	unlock_timer(timr, flag);
 
 	return error;
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:954 @ int common_timer_del(struct k_itimer *ti
 	return 0;
 }
 
-static inline int timer_delete_hook(struct k_itimer *timer)
+static int timer_delete_hook(struct k_itimer *timer)
 {
 	const struct k_clock *kc = timer->kclock;
+	int ret;
 
 	if (WARN_ON_ONCE(!kc || !kc->timer_del))
 		return -EINVAL;
-	return kc->timer_del(timer);
+	ret = kc->timer_del(timer);
+	if (ret == TIMER_RETRY) {
+		rcu_read_lock();
+		spin_unlock_irq(&timer->it_lock);
+		timer_wait_for_callback(kc, timer);
+		rcu_read_unlock();
+	}
+	return ret;
 }
 
 /* Delete a POSIX.1b interval timer. */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:982 @ retry_delete:
 	if (!timer)
 		return -EINVAL;
 
-	if (timer_delete_hook(timer) == TIMER_RETRY) {
-		unlock_timer(timer, flags);
+	if (timer_delete_hook(timer) == TIMER_RETRY)
 		goto retry_delete;
-	}
 
 	spin_lock(&current->sighand->siglock);
 	list_del(&timer->list);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1009 @ static void itimer_delete(struct k_itime
 retry_delete:
 	spin_lock_irqsave(&timer->it_lock, flags);
 
-	if (timer_delete_hook(timer) == TIMER_RETRY) {
-		unlock_timer(timer, flags);
+	if (timer_delete_hook(timer) == TIMER_RETRY)
 		goto retry_delete;
-	}
+
 	list_del(&timer->list);
 	/*
 	 * This keeps any tasks waiting on the spin lock from thinking
Index: linux-5.0.19-rt11/kernel/time/posix-timers.h
===================================================================
--- linux-5.0.19-rt11.orig/kernel/time/posix-timers.h
+++ linux-5.0.19-rt11/kernel/time/posix-timers.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:35 @ extern const struct k_clock clock_proces
 extern const struct k_clock clock_thread;
 extern const struct k_clock alarm_clock;
 
+extern void cpu_timers_grab_expiry_lock(struct k_itimer *timer);
+
 int posix_timer_event(struct k_itimer *timr, int si_private);
 
 void common_timer_get(struct k_itimer *timr, struct itimerspec64 *cur_setting);
Index: linux-5.0.19-rt11/kernel/time/tick-broadcast-hrtimer.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/time/tick-broadcast-hrtimer.c
+++ linux-5.0.19-rt11/kernel/time/tick-broadcast-hrtimer.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:107 @ static enum hrtimer_restart bc_handler(s
 
 void tick_setup_hrtimer_broadcast(void)
 {
-	hrtimer_init(&bctimer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
+	hrtimer_init(&bctimer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_HARD);
 	bctimer.function = bc_handler;
 	clockevents_register_device(&ce_broadcast_hrtimer);
 }
Index: linux-5.0.19-rt11/kernel/time/tick-common.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/time/tick-common.c
+++ linux-5.0.19-rt11/kernel/time/tick-common.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:78 @ int tick_is_oneshot_available(void)
 static void tick_periodic(int cpu)
 {
 	if (tick_do_timer_cpu == cpu) {
-		write_seqlock(&jiffies_lock);
+		raw_spin_lock(&jiffies_lock);
+		write_seqcount_begin(&jiffies_seq);
 
 		/* Keep track of the next tick event */
 		tick_next_period = ktime_add(tick_next_period, tick_period);
 
 		do_timer(1);
-		write_sequnlock(&jiffies_lock);
+		write_seqcount_end(&jiffies_seq);
+		raw_spin_unlock(&jiffies_lock);
 		update_wall_time();
 	}
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:158 @ void tick_setup_periodic(struct clock_ev
 		ktime_t next;
 
 		do {
-			seq = read_seqbegin(&jiffies_lock);
+			seq = read_seqcount_begin(&jiffies_seq);
 			next = tick_next_period;
-		} while (read_seqretry(&jiffies_lock, seq));
+		} while (read_seqcount_retry(&jiffies_seq, seq));
 
 		clockevents_switch_state(dev, CLOCK_EVT_STATE_ONESHOT);
 
Index: linux-5.0.19-rt11/kernel/time/tick-sched.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/time/tick-sched.c
+++ linux-5.0.19-rt11/kernel/time/tick-sched.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:67 @ static void tick_do_update_jiffies64(kti
 		return;
 
 	/* Reevaluate with jiffies_lock held */
-	write_seqlock(&jiffies_lock);
+	raw_spin_lock(&jiffies_lock);
+	write_seqcount_begin(&jiffies_seq);
 
 	delta = ktime_sub(now, last_jiffies_update);
 	if (delta >= tick_period) {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:91 @ static void tick_do_update_jiffies64(kti
 		/* Keep the tick_next_period variable up to date */
 		tick_next_period = ktime_add(last_jiffies_update, tick_period);
 	} else {
-		write_sequnlock(&jiffies_lock);
+		write_seqcount_end(&jiffies_seq);
+		raw_spin_unlock(&jiffies_lock);
 		return;
 	}
-	write_sequnlock(&jiffies_lock);
+	write_seqcount_end(&jiffies_seq);
+	raw_spin_unlock(&jiffies_lock);
 	update_wall_time();
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:107 @ static ktime_t tick_init_jiffy_update(vo
 {
 	ktime_t period;
 
-	write_seqlock(&jiffies_lock);
+	raw_spin_lock(&jiffies_lock);
+	write_seqcount_begin(&jiffies_seq);
 	/* Did we start the jiffies update yet ? */
 	if (last_jiffies_update == 0)
 		last_jiffies_update = tick_next_period;
 	period = last_jiffies_update;
-	write_sequnlock(&jiffies_lock);
+	write_seqcount_end(&jiffies_seq);
+	raw_spin_unlock(&jiffies_lock);
 	return period;
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:232 @ static void nohz_full_kick_func(struct i
 
 static DEFINE_PER_CPU(struct irq_work, nohz_full_kick_work) = {
 	.func = nohz_full_kick_func,
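+	/* the kick must run from hard interrupt context, even on PREEMPT_RT */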
+	.flags = IRQ_WORK_HARD_IRQ,
 };
 
 /*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:658 @ static ktime_t tick_nohz_next_event(stru
 
 	/* Read jiffies and the time when jiffies were updated last */
 	do {
-		seq = read_seqbegin(&jiffies_lock);
+		seq = read_seqcount_begin(&jiffies_seq);
 		basemono = last_jiffies_update;
 		basejiff = jiffies;
-	} while (read_seqretry(&jiffies_lock, seq));
+	} while (read_seqcount_retry(&jiffies_seq, seq));
 	ts->last_jiffies = basejiff;
 	ts->timer_expires_base = basemono;
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:892 @ static bool can_stop_idle_tick(int cpu,
 		return false;
 
 	if (unlikely(local_softirq_pending())) {
-		static int ratelimit;
-
-		if (ratelimit < 10 &&
-		    (local_softirq_pending() & SOFTIRQ_STOP_IDLE_MASK)) {
-			pr_warn("NOHZ: local_softirq_pending %02x\n",
-				(unsigned int) local_softirq_pending());
-			ratelimit++;
-		}
+		softirq_check_pending_idle();
 		return false;
 	}
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1304 @ void tick_setup_sched_timer(void)
 	/*
 	 * Emulate tick processing via per-CPU hrtimers:
 	 */
-	hrtimer_init(&ts->sched_timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
+	hrtimer_init(&ts->sched_timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_HARD);
 	ts->sched_timer.function = tick_sched_timer;
 
 	/* Get the next period (per-CPU) */
Index: linux-5.0.19-rt11/kernel/time/timekeeping.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/time/timekeeping.c
+++ linux-5.0.19-rt11/kernel/time/timekeeping.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2383 @ EXPORT_SYMBOL(hardpps);
  */
 void xtime_update(unsigned long ticks)
 {
-	write_seqlock(&jiffies_lock);
+	raw_spin_lock(&jiffies_lock);
+	write_seqcount_begin(&jiffies_seq);
 	do_timer(ticks);
-	write_sequnlock(&jiffies_lock);
+	write_seqcount_end(&jiffies_seq);
+	raw_spin_unlock(&jiffies_lock);
 	update_wall_time();
 }
Index: linux-5.0.19-rt11/kernel/time/timekeeping.h
===================================================================
--- linux-5.0.19-rt11.orig/kernel/time/timekeeping.h
+++ linux-5.0.19-rt11/kernel/time/timekeeping.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:28 @ static inline void sched_clock_resume(vo
 extern void do_timer(unsigned long ticks);
 extern void update_wall_time(void);
 
-extern seqlock_t jiffies_lock;
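+/*
+ * The jiffies seqlock is split: a raw spinlock serializes writers (usable
+ * with interrupts disabled on RT) and a seqcount keeps readers lockless.
+ */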
+extern raw_spinlock_t jiffies_lock;
+extern seqcount_t jiffies_seq;
 
 #define CS_NAME_LEN	32
 
Index: linux-5.0.19-rt11/kernel/time/timer.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/time/timer.c
+++ linux-5.0.19-rt11/kernel/time/timer.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:199 @ EXPORT_SYMBOL(jiffies_64);
 struct timer_base {
 	raw_spinlock_t		lock;
 	struct timer_list	*running_timer;
+	spinlock_t		expiry_lock;
 	unsigned long		clk;
 	unsigned long		next_expiry;
 	unsigned int		cpu;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1210 @ int del_timer(struct timer_list *timer)
 }
 EXPORT_SYMBOL(del_timer);
 
-/**
- * try_to_del_timer_sync - Try to deactivate a timer
- * @timer: timer to delete
- *
- * This function tries to deactivate a timer. Upon successful (ret >= 0)
- * exit the timer is not queued and the handler is not running on any CPU.
- */
-int try_to_del_timer_sync(struct timer_list *timer)
+static int __try_to_del_timer_sync(struct timer_list *timer,
+				   struct timer_base **basep)
 {
 	struct timer_base *base;
 	unsigned long flags;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1219 @ int try_to_del_timer_sync(struct timer_l
 
 	debug_assert_init(timer);
 
-	base = lock_timer_base(timer, &flags);
+	*basep = base = lock_timer_base(timer, &flags);
 
 	if (base->running_timer != timer)
 		ret = detach_if_pending(timer, base, true);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1228 @ int try_to_del_timer_sync(struct timer_l
 
 	return ret;
 }
+
+/**
+ * try_to_del_timer_sync - Try to deactivate a timer
+ * @timer: timer to delete
+ *
+ * This function tries to deactivate a timer. Upon successful (ret >= 0)
+ * exit the timer is not queued and the handler is not running on any CPU.
+ */
+int try_to_del_timer_sync(struct timer_list *timer)
+{
+	struct timer_base *base;
+
+	return __try_to_del_timer_sync(timer, &base);
+}
 EXPORT_SYMBOL(try_to_del_timer_sync);
 
-#ifdef CONFIG_SMP
+#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT_RT_FULL)
+static int __del_timer_sync(struct timer_list *timer)
+{
+	struct timer_base *base;
+	int ret;
+
+	for (;;) {
+		ret = __try_to_del_timer_sync(timer, &base);
+		if (ret >= 0)
+			return ret;
+
+		/*
+		 * expiry_lock is held while a timer callback runs, so once it
+		 * can be acquired here the callback has finished and the
+		 * timer is no longer running.
+		 */
+		spin_lock(&base->expiry_lock);
+		spin_unlock(&base->expiry_lock);
+	}
+}
+
 /**
  * del_timer_sync - deactivate a timer and wait for the handler to finish.
  * @timer: the timer to be deactivated
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1319 @ int del_timer_sync(struct timer_list *ti
 	 * could lead to deadlock.
 	 */
 	WARN_ON(in_irq() && !(timer->flags & TIMER_IRQSAFE));
-	for (;;) {
-		int ret = try_to_del_timer_sync(timer);
-		if (ret >= 0)
-			return ret;
-		cpu_relax();
-	}
+
+	return __del_timer_sync(timer);
 }
 EXPORT_SYMBOL(del_timer_sync);
 #endif
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1380 @ static void expire_timers(struct timer_b
 
 		fn = timer->function;
 
-		if (timer->flags & TIMER_IRQSAFE) {
+		if (!IS_ENABLED(CONFIG_PREEMPT_RT_FULL) &&
+		    timer->flags & TIMER_IRQSAFE) {
 			raw_spin_unlock(&base->lock);
 			call_timer_fn(timer, fn);
+			base->running_timer = NULL;
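+			/*
+			 * Cycle expiry_lock so a __del_timer_sync() waiter
+			 * can get in now that this callback has finished.
+			 */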
+			spin_unlock(&base->expiry_lock);
+			spin_lock(&base->expiry_lock);
 			raw_spin_lock(&base->lock);
 		} else {
 			raw_spin_unlock_irq(&base->lock);
 			call_timer_fn(timer, fn);
+			base->running_timer = NULL;
+			spin_unlock(&base->expiry_lock);
+			spin_lock(&base->expiry_lock);
 			raw_spin_lock_irq(&base->lock);
 		}
 	}
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1688 @ static inline void __run_timers(struct t
 	if (!time_after_eq(jiffies, base->clk))
 		return;
 
+	spin_lock(&base->expiry_lock);
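+	/*
+	 * expiry_lock (taken above) is held across the whole expiry run;
+	 * __del_timer_sync() waits on it for a running callback instead of
+	 * polling with cpu_relax().
+	 */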
 	raw_spin_lock_irq(&base->lock);
 
 	/*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1715 @ static inline void __run_timers(struct t
 		while (levels--)
 			expire_timers(base, heads + levels);
 	}
-	base->running_timer = NULL;
 	raw_spin_unlock_irq(&base->lock);
+	spin_unlock(&base->expiry_lock);
 }
 
 /*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1726 @ static __latent_entropy void run_timer_s
 {
 	struct timer_base *base = this_cpu_ptr(&timer_bases[BASE_STD]);
 
+	irq_work_tick_soft();
+
 	__run_timers(base);
 	if (IS_ENABLED(CONFIG_NO_HZ_COMMON))
 		__run_timers(this_cpu_ptr(&timer_bases[BASE_DEF]));
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1963 @ static void __init init_timer_cpu(int cp
 		base->cpu = cpu;
 		raw_spin_lock_init(&base->lock);
 		base->clk = jiffies;
+		spin_lock_init(&base->expiry_lock);
 	}
 }
 
Index: linux-5.0.19-rt11/kernel/trace/trace.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/trace/trace.c
+++ linux-5.0.19-rt11/kernel/trace/trace.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2140 @ tracing_generic_entry_update(struct trac
 	struct task_struct *tsk = current;
 
 	entry->preempt_count		= pc & 0xff;
+	entry->preempt_lazy_count	= preempt_lazy_count();
 	entry->pid			= (tsk) ? tsk->pid : 0;
 	entry->flags =
 #ifdef CONFIG_TRACE_IRQFLAGS_SUPPORT
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2151 @ tracing_generic_entry_update(struct trac
 		((pc & NMI_MASK    ) ? TRACE_FLAG_NMI     : 0) |
 		((pc & HARDIRQ_MASK) ? TRACE_FLAG_HARDIRQ : 0) |
 		((pc & SOFTIRQ_OFFSET) ? TRACE_FLAG_SOFTIRQ : 0) |
-		(tif_need_resched() ? TRACE_FLAG_NEED_RESCHED : 0) |
+		(tif_need_resched_now() ? TRACE_FLAG_NEED_RESCHED : 0) |
+		(need_resched_lazy() ? TRACE_FLAG_NEED_RESCHED_LAZY : 0) |
 		(test_preempt_need_resched() ? TRACE_FLAG_PREEMPT_RESCHED : 0);
+
+	entry->migrate_disable = (tsk) ? __migrate_disabled(tsk) & 0xFF : 0;
 }
 EXPORT_SYMBOL_GPL(tracing_generic_entry_update);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3355 @ get_total_entries(struct trace_buffer *b
 
 static void print_lat_help_header(struct seq_file *m)
 {
-	seq_puts(m, "#                  _------=> CPU#            \n"
-		    "#                 / _-----=> irqs-off        \n"
-		    "#                | / _----=> need-resched    \n"
-		    "#                || / _---=> hardirq/softirq \n"
-		    "#                ||| / _--=> preempt-depth   \n"
-		    "#                |||| /     delay            \n"
-		    "#  cmd     pid   ||||| time  |   caller      \n"
-		    "#     \\   /      |||||  \\    |   /         \n");
+	seq_puts(m, "#                  _--------=> CPU#              \n"
+		    "#                 / _-------=> irqs-off          \n"
+		    "#                | / _------=> need-resched      \n"
+		    "#                || / _-----=> need-resched_lazy \n"
+		    "#                ||| / _----=> hardirq/softirq   \n"
+		    "#                |||| / _---=> preempt-depth     \n"
+		    "#                ||||| / _--=> preempt-lazy-depth\n"
+		    "#                |||||| / _-=> migrate-disable   \n"
+		    "#                ||||||| /     delay             \n"
+		    "# cmd     pid    |||||||| time   |  caller       \n"
+		    "#     \\   /      ||||||||   \\    |  /            \n");
 }
 
 static void print_event_info(struct trace_buffer *buf, struct seq_file *m)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3403 @ static void print_func_help_header_irq(s
 		   tgid ? tgid_space : space);
 	seq_printf(m, "#                          %s / _----=> need-resched\n",
 		   tgid ? tgid_space : space);
-	seq_printf(m, "#                          %s| / _---=> hardirq/softirq\n",
+	seq_printf(m, "#                          %s| / _---=> need-resched_lazy\n",
+		   tgid ? tgid_space : space);
+	seq_printf(m, "#                          %s|| / _--=> hardirq/softirq\n",
 		   tgid ? tgid_space : space);
-	seq_printf(m, "#                          %s|| / _--=> preempt-depth\n",
+	seq_printf(m, "#                          %s||| /     preempt-depth\n",
 		   tgid ? tgid_space : space);
-	seq_printf(m, "#                          %s||| /     delay\n",
+	seq_printf(m, "#                          %s|||| /    delay\n",
 		   tgid ? tgid_space : space);
-	seq_printf(m, "#           TASK-PID %sCPU#  ||||    TIMESTAMP  FUNCTION\n",
+	seq_printf(m, "#           TASK-PID %sCPU#  |||||   TIMESTAMP  FUNCTION\n",
 		   tgid ? "   TGID   " : space);
-	seq_printf(m, "#              | |   %s  |   ||||       |         |\n",
+	seq_printf(m, "#              | |   %s  |   |||||      |         |\n",
 		   tgid ? "     |    " : space);
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:8385 @ void ftrace_dump(enum ftrace_dump_mode o
 	tracing_off();
 
 	local_irq_save(flags);
-	printk_nmi_direct_enter();
 
 	/* Simulate the iterator */
 	trace_init_global_iter(&iter);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:8465 @ void ftrace_dump(enum ftrace_dump_mode o
 		atomic_dec(&per_cpu_ptr(iter.trace_buffer->data, cpu)->disabled);
 	}
 	atomic_dec(&dump_running);
-	printk_nmi_direct_exit();
 	local_irq_restore(flags);
 }
 EXPORT_SYMBOL_GPL(ftrace_dump);
Index: linux-5.0.19-rt11/kernel/trace/trace.h
===================================================================
--- linux-5.0.19-rt11.orig/kernel/trace/trace.h
+++ linux-5.0.19-rt11/kernel/trace/trace.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:130 @ struct kretprobe_trace_entry_head {
  *  NEED_RESCHED	- reschedule is requested
  *  HARDIRQ		- inside an interrupt handler
  *  SOFTIRQ		- inside a softirq handler
+ *  NEED_RESCHED_LAZY	- lazy reschedule is requested
  */
 enum trace_flag_type {
 	TRACE_FLAG_IRQS_OFF		= 0x01,
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:140 @ enum trace_flag_type {
 	TRACE_FLAG_SOFTIRQ		= 0x10,
 	TRACE_FLAG_PREEMPT_RESCHED	= 0x20,
 	TRACE_FLAG_NMI			= 0x40,
+	TRACE_FLAG_NEED_RESCHED_LAZY	= 0x80,
 };
 
 #define TRACE_BUF_SIZE		1024
Index: linux-5.0.19-rt11/kernel/trace/trace_events.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/trace/trace_events.c
+++ linux-5.0.19-rt11/kernel/trace/trace_events.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:191 @ static int trace_define_common_fields(vo
 	__common_field(unsigned char, flags);
 	__common_field(unsigned char, preempt_count);
 	__common_field(int, pid);
+	__common_field(unsigned short, migrate_disable);
+	__common_field(unsigned short, padding);
 
 	return ret;
 }
Index: linux-5.0.19-rt11/kernel/trace/trace_hwlat.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/trace/trace_hwlat.c
+++ linux-5.0.19-rt11/kernel/trace/trace_hwlat.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:280 @ static void move_to_next_cpu(void)
 	 * of this thread, then stop migrating for the duration
 	 * of the current test.
 	 */
-	if (!cpumask_equal(current_mask, &current->cpus_allowed))
+	if (!cpumask_equal(current_mask, current->cpus_ptr))
 		goto disable;
 
 	get_online_cpus();
Index: linux-5.0.19-rt11/kernel/trace/trace_output.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/trace/trace_output.c
+++ linux-5.0.19-rt11/kernel/trace/trace_output.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:429 @ int trace_print_lat_fmt(struct trace_seq
 {
 	char hardsoft_irq;
 	char need_resched;
+	char need_resched_lazy;
 	char irqs_off;
 	int hardirq;
 	int softirq;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:460 @ int trace_print_lat_fmt(struct trace_seq
 		break;
 	}
 
+	need_resched_lazy =
+		(entry->flags & TRACE_FLAG_NEED_RESCHED_LAZY) ? 'L' : '.';
+
 	hardsoft_irq =
 		(nmi && hardirq)     ? 'Z' :
 		nmi                  ? 'z' :
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:471 @ int trace_print_lat_fmt(struct trace_seq
 		softirq              ? 's' :
 		                       '.' ;
 
-	trace_seq_printf(s, "%c%c%c",
-			 irqs_off, need_resched, hardsoft_irq);
+	trace_seq_printf(s, "%c%c%c%c",
+			 irqs_off, need_resched, need_resched_lazy,
+			 hardsoft_irq);
 
 	if (entry->preempt_count)
 		trace_seq_printf(s, "%x", entry->preempt_count);
 	else
 		trace_seq_putc(s, '.');
 
+	if (entry->preempt_lazy_count)
+		trace_seq_printf(s, "%x", entry->preempt_lazy_count);
+	else
+		trace_seq_putc(s, '.');
+
+	if (entry->migrate_disable)
+		trace_seq_printf(s, "%x", entry->migrate_disable);
+	else
+		trace_seq_putc(s, '.');
+
 	return !trace_seq_has_overflowed(s);
 }
 
Index: linux-5.0.19-rt11/kernel/watchdog.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/watchdog.c
+++ linux-5.0.19-rt11/kernel/watchdog.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:486 @ static void watchdog_enable(unsigned int
 	 * Start the timer first to prevent the NMI watchdog triggering
 	 * before the timer has a chance to fire.
 	 */
-	hrtimer_init(hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+	hrtimer_init(hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
 	hrtimer->function = watchdog_timer_fn;
 	hrtimer_start(hrtimer, ns_to_ktime(sample_period),
 		      HRTIMER_MODE_REL_PINNED);
Index: linux-5.0.19-rt11/kernel/workqueue.c
===================================================================
--- linux-5.0.19-rt11.orig/kernel/workqueue.c
+++ linux-5.0.19-rt11/kernel/workqueue.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:130 @ enum {
  *
  * PL: wq_pool_mutex protected.
  *
- * PR: wq_pool_mutex protected for writes.  Sched-RCU protected for reads.
+ * PR: wq_pool_mutex protected for writes.  RCU protected for reads.
  *
  * PW: wq_pool_mutex and wq->mutex protected for writes.  Either for reads.
  *
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:139 @ enum {
  *
  * WQ: wq->mutex protected.
  *
- * WR: wq->mutex protected for writes.  Sched-RCU protected for reads.
+ * WR: wq->mutex protected for writes.  RCU protected for reads.
  *
  * MD: wq_mayday_lock protected.
  */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:147 @ enum {
 /* struct worker is defined in workqueue_internal.h */
 
 struct worker_pool {
-	spinlock_t		lock;		/* the pool lock */
+	raw_spinlock_t		lock;		/* the pool lock */
 	int			cpu;		/* I: the associated cpu */
 	int			node;		/* I: the associated node ID */
 	int			id;		/* I: pool ID */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:186 @ struct worker_pool {
 	atomic_t		nr_running ____cacheline_aligned_in_smp;
 
 	/*
-	 * Destruction of pool is sched-RCU protected to allow dereferences
+	 * Destruction of pool is RCU protected to allow dereferences
 	 * from get_work_pool().
 	 */
 	struct rcu_head		rcu;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:215 @ struct pool_workqueue {
 	/*
 	 * Release of unbound pwq is punted to system_wq.  See put_pwq()
 	 * and pwq_unbound_release_workfn() for details.  pool_workqueue
-	 * itself is also sched-RCU protected so that the first pwq can be
+	 * itself is also RCU protected so that the first pwq can be
 	 * determined without grabbing wq->mutex.
 	 */
 	struct work_struct	unbound_release_work;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:300 @ static struct workqueue_attrs *wq_update
 
 static DEFINE_MUTEX(wq_pool_mutex);	/* protects pools and workqueues list */
 static DEFINE_MUTEX(wq_pool_attach_mutex); /* protects worker attach/detach */
-static DEFINE_SPINLOCK(wq_mayday_lock);	/* protects wq->maydays list */
-static DECLARE_WAIT_QUEUE_HEAD(wq_manager_wait); /* wait for manager to go away */
+static DEFINE_RAW_SPINLOCK(wq_mayday_lock);	/* protects wq->maydays list */
+static DECLARE_SWAIT_QUEUE_HEAD(wq_manager_wait); /* wait for manager to go away */
 
 static LIST_HEAD(workqueues);		/* PR: list of all workqueues */
 static bool workqueue_freezing;		/* PL: have wqs started freezing? */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:360 @ static void workqueue_sysfs_unregister(s
 #include <trace/events/workqueue.h>
 
 #define assert_rcu_or_pool_mutex()					\
-	RCU_LOCKDEP_WARN(!rcu_read_lock_sched_held() &&			\
+	RCU_LOCKDEP_WARN(!rcu_read_lock_held() &&			\
 			 !lockdep_is_held(&wq_pool_mutex),		\
-			 "sched RCU or wq_pool_mutex should be held")
+			 "RCU or wq_pool_mutex should be held")
 
 #define assert_rcu_or_wq_mutex(wq)					\
-	RCU_LOCKDEP_WARN(!rcu_read_lock_sched_held() &&			\
+	RCU_LOCKDEP_WARN(!rcu_read_lock_held() &&			\
 			 !lockdep_is_held(&wq->mutex),			\
-			 "sched RCU or wq->mutex should be held")
+			 "RCU or wq->mutex should be held")
 
 #define assert_rcu_or_wq_mutex_or_pool_mutex(wq)			\
-	RCU_LOCKDEP_WARN(!rcu_read_lock_sched_held() &&			\
+	RCU_LOCKDEP_WARN(!rcu_read_lock_held() &&			\
 			 !lockdep_is_held(&wq->mutex) &&		\
 			 !lockdep_is_held(&wq_pool_mutex),		\
-			 "sched RCU, wq->mutex or wq_pool_mutex should be held")
+			 "RCU, wq->mutex or wq_pool_mutex should be held")
 
 #define for_each_cpu_worker_pool(pool, cpu)				\
 	for ((pool) = &per_cpu(cpu_worker_pools, cpu)[0];		\
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:385 @ static void workqueue_sysfs_unregister(s
  * @pool: iteration cursor
  * @pi: integer used for iteration
  *
- * This must be called either with wq_pool_mutex held or sched RCU read
+ * This must be called either with wq_pool_mutex held or RCU read
  * locked.  If the pool needs to be used beyond the locking in effect, the
  * caller is responsible for guaranteeing that the pool stays online.
  *
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:417 @ static void workqueue_sysfs_unregister(s
  * @pwq: iteration cursor
  * @wq: the target workqueue
  *
- * This must be called either with wq->mutex held or sched RCU read locked.
+ * This must be called either with wq->mutex held or RCU read locked.
  * If the pwq needs to be used beyond the locking in effect, the caller is
  * responsible for guaranteeing that the pwq stays online.
  *
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:553 @ static int worker_pool_assign_id(struct
  * @wq: the target workqueue
  * @node: the node ID
  *
- * This must be called with any of wq_pool_mutex, wq->mutex or sched RCU
+ * This must be called with any of wq_pool_mutex, wq->mutex or RCU
  * read locked.
  * If the pwq needs to be used beyond the locking in effect, the caller is
  * responsible for guaranteeing that the pwq stays online.
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:697 @ static struct pool_workqueue *get_work_p
  * @work: the work item of interest
  *
  * Pools are created and destroyed under wq_pool_mutex, and allows read
- * access under sched-RCU read lock.  As such, this function should be
- * called under wq_pool_mutex or with preemption disabled.
+ * access under RCU read lock.  As such, this function should be
+ * called under wq_pool_mutex or inside of a rcu_read_lock() region.
  *
  * All fields of the returned pool are accessible as long as the above
  * mentioned locking is in effect.  If the returned pool needs to be used
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:831 @ static struct worker *first_idle_worker(
  * Wake up the first idle worker of @pool.
  *
  * CONTEXT:
- * spin_lock_irq(pool->lock).
+ * raw_spin_lock_irq(pool->lock).
  */
 static void wake_up_worker(struct worker_pool *pool)
 {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:842 @ static void wake_up_worker(struct worker
 }
 
 /**
- * wq_worker_waking_up - a worker is waking up
+ * wq_worker_running - a worker is running again
  * @task: task waking up
- * @cpu: CPU @task is waking up to
  *
- * This function is called during try_to_wake_up() when a worker is
- * being awoken.
- *
- * CONTEXT:
- * spin_lock_irq(rq->lock)
+ * This function is called when a worker returns from schedule()
  */
-void wq_worker_waking_up(struct task_struct *task, int cpu)
+void wq_worker_running(struct task_struct *task)
 {
 	struct worker *worker = kthread_data(task);
 
-	if (!(worker->flags & WORKER_NOT_RUNNING)) {
-		WARN_ON_ONCE(worker->pool->cpu != cpu);
+	if (!worker->sleeping)
+		return;
+	if (!(worker->flags & WORKER_NOT_RUNNING))
 		atomic_inc(&worker->pool->nr_running);
-	}
+	worker->sleeping = 0;
 }
 
 /**
  * wq_worker_sleeping - a worker is going to sleep
  * @task: task going to sleep
  *
- * This function is called during schedule() when a busy worker is
- * going to sleep.  Worker on the same cpu can be woken up by
- * returning pointer to its task.
- *
- * CONTEXT:
- * spin_lock_irq(rq->lock)
- *
- * Return:
- * Worker task on @cpu to wake up, %NULL if none.
+ * This function is called from schedule() when a busy worker is
+ * going to sleep.
  */
-struct task_struct *wq_worker_sleeping(struct task_struct *task)
+void wq_worker_sleeping(struct task_struct *task)
 {
-	struct worker *worker = kthread_data(task), *to_wakeup = NULL;
+	struct worker *next, *worker = kthread_data(task);
 	struct worker_pool *pool;
 
 	/*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:876 @ struct task_struct *wq_worker_sleeping(s
 	 * checking NOT_RUNNING.
 	 */
 	if (worker->flags & WORKER_NOT_RUNNING)
-		return NULL;
+		return;
 
 	pool = worker->pool;
 
-	/* this can only happen on the local cpu */
-	if (WARN_ON_ONCE(pool->cpu != raw_smp_processor_id()))
-		return NULL;
+	if (WARN_ON_ONCE(worker->sleeping))
+		return;
+
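+	/*
+	 * ->sleeping pairs the nr_running decrement below with the
+	 * increment done in wq_worker_running().
+	 */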
+	worker->sleeping = 1;
+	raw_spin_lock_irq(&pool->lock);
 
 	/*
 	 * The counterpart of the following dec_and_test, implied mb,
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:898 @ struct task_struct *wq_worker_sleeping(s
 	 * lock is safe.
 	 */
 	if (atomic_dec_and_test(&pool->nr_running) &&
-	    !list_empty(&pool->worklist))
-		to_wakeup = first_idle_worker(pool);
-	return to_wakeup ? to_wakeup->task : NULL;
+	    !list_empty(&pool->worklist)) {
+		next = first_idle_worker(pool);
+		if (next)
+			wake_up_process(next->task);
+	}
+	raw_spin_unlock_irq(&pool->lock);
 }
 
 /**
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:913 @ struct task_struct *wq_worker_sleeping(s
  * the scheduler to get a worker's last known identity.
  *
  * CONTEXT:
- * spin_lock_irq(rq->lock)
+ * raw_spin_lock_irq(rq->lock)
  *
  * Return:
  * The last work function %current executed as a worker, NULL if it
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:934 @ work_func_t wq_worker_last_func(struct t
  * Set @flags in @worker->flags and adjust nr_running accordingly.
  *
  * CONTEXT:
- * spin_lock_irq(pool->lock)
+ * raw_spin_lock_irq(pool->lock)
  */
 static inline void worker_set_flags(struct worker *worker, unsigned int flags)
 {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:959 @ static inline void worker_set_flags(stru
  * Clear @flags in @worker->flags and adjust nr_running accordingly.
  *
  * CONTEXT:
- * spin_lock_irq(pool->lock)
+ * raw_spin_lock_irq(pool->lock)
  */
 static inline void worker_clr_flags(struct worker *worker, unsigned int flags)
 {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1007 @ static inline void worker_clr_flags(stru
  * actually occurs, it should be easy to locate the culprit work function.
  *
  * CONTEXT:
- * spin_lock_irq(pool->lock).
+ * raw_spin_lock_irq(pool->lock).
  *
  * Return:
  * Pointer to worker which is executing @work if found, %NULL
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1042 @ static struct worker *find_worker_execut
  * nested inside outer list_for_each_entry_safe().
  *
  * CONTEXT:
- * spin_lock_irq(pool->lock).
+ * raw_spin_lock_irq(pool->lock).
  */
 static void move_linked_works(struct work_struct *work, struct list_head *head,
 			      struct work_struct **nextp)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1117 @ static void put_pwq_unlocked(struct pool
 {
 	if (pwq) {
 		/*
-		 * As both pwqs and pools are sched-RCU protected, the
+		 * As both pwqs and pools are RCU protected, the
 		 * following lock operations are safe.
 		 */
-		spin_lock_irq(&pwq->pool->lock);
+		raw_spin_lock_irq(&pwq->pool->lock);
 		put_pwq(pwq);
-		spin_unlock_irq(&pwq->pool->lock);
+		raw_spin_unlock_irq(&pwq->pool->lock);
 	}
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1155 @ static void pwq_activate_first_delayed(s
  * decrement nr_in_flight of its pwq and handle workqueue flushing.
  *
  * CONTEXT:
- * spin_lock_irq(pool->lock).
+ * raw_spin_lock_irq(pool->lock).
  */
 static void pwq_dec_nr_in_flight(struct pool_workqueue *pwq, int color)
 {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1245 @ static int try_to_grab_pending(struct wo
 	if (!test_and_set_bit(WORK_STRUCT_PENDING_BIT, work_data_bits(work)))
 		return 0;
 
+	rcu_read_lock();
 	/*
 	 * The queueing is in progress, or it is already queued. Try to
 	 * steal it from ->worklist without clearing WORK_STRUCT_PENDING.
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1254 @ static int try_to_grab_pending(struct wo
 	if (!pool)
 		goto fail;
 
-	spin_lock(&pool->lock);
+	raw_spin_lock(&pool->lock);
 	/*
 	 * work->data is guaranteed to point to pwq only while the work
 	 * item is queued on pwq->wq, and both updating work->data to point
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1283 @ static int try_to_grab_pending(struct wo
 		/* work->data points to pwq iff queued, point to pool */
 		set_work_pool_and_keep_pending(work, pool->id);
 
-		spin_unlock(&pool->lock);
+		raw_spin_unlock(&pool->lock);
+		rcu_read_unlock();
 		return 1;
 	}
-	spin_unlock(&pool->lock);
+	raw_spin_unlock(&pool->lock);
 fail:
+	rcu_read_unlock();
 	local_irq_restore(*flags);
 	if (work_is_canceling(work))
 		return -ENOENT;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1308 @ fail:
  * work_struct flags.
  *
  * CONTEXT:
- * spin_lock_irq(pool->lock).
+ * raw_spin_lock_irq(pool->lock).
  */
 static void insert_work(struct pool_workqueue *pwq, struct work_struct *work,
 			struct list_head *head, unsigned int extra_flags)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1403 @ static void __queue_work(int cpu, struct
 	if (unlikely(wq->flags & __WQ_DRAINING) &&
 	    WARN_ON_ONCE(!is_chained_work(wq)))
 		return;
+	rcu_read_lock();
 retry:
 	if (req_cpu == WORK_CPU_UNBOUND)
 		cpu = wq_select_unbound_cpu(raw_smp_processor_id());
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1423 @ retry:
 	if (last_pool && last_pool != pwq->pool) {
 		struct worker *worker;
 
-		spin_lock(&last_pool->lock);
+		raw_spin_lock(&last_pool->lock);
 
 		worker = find_worker_executing_work(last_pool, work);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1431 @ retry:
 			pwq = worker->current_pwq;
 		} else {
 			/* meh... not running there, queue here */
-			spin_unlock(&last_pool->lock);
-			spin_lock(&pwq->pool->lock);
+			raw_spin_unlock(&last_pool->lock);
+			raw_spin_lock(&pwq->pool->lock);
 		}
 	} else {
-		spin_lock(&pwq->pool->lock);
+		raw_spin_lock(&pwq->pool->lock);
 	}
 
 	/*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1448 @ retry:
 	 */
 	if (unlikely(!pwq->refcnt)) {
 		if (wq->flags & WQ_UNBOUND) {
-			spin_unlock(&pwq->pool->lock);
+			raw_spin_unlock(&pwq->pool->lock);
 			cpu_relax();
 			goto retry;
 		}
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1460 @ retry:
 	/* pwq determined, queue */
 	trace_workqueue_queue_work(req_cpu, pwq, work);
 
-	if (WARN_ON(!list_empty(&work->entry))) {
-		spin_unlock(&pwq->pool->lock);
-		return;
-	}
+	if (WARN_ON(!list_empty(&work->entry)))
+		goto out;
 
 	pwq->nr_in_flight[pwq->work_color]++;
 	work_flags = work_color_to_flags(pwq->work_color);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1479 @ retry:
 
 	insert_work(pwq, work, worklist, work_flags);
 
-	spin_unlock(&pwq->pool->lock);
+out:
+	raw_spin_unlock(&pwq->pool->lock);
+	rcu_read_unlock();
 }
 
 /**
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1516 @ EXPORT_SYMBOL(queue_work_on);
 void delayed_work_timer_fn(struct timer_list *t)
 {
 	struct delayed_work *dwork = from_timer(dwork, t, timer);
+	unsigned long flags;
 
-	/* should have been called from irqsafe timer with irq already off */
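+	/*
+	 * On PREEMPT_RT the callback runs with interrupts enabled (see
+	 * expire_timers()), but __queue_work() needs them disabled.
+	 */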
+	local_irq_save(flags);
 	__queue_work(dwork->cpu, dwork->wq, &dwork->work);
+	local_irq_restore(flags);
 }
 EXPORT_SYMBOL(delayed_work_timer_fn);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1667 @ EXPORT_SYMBOL(queue_rcu_work);
  * necessary.
  *
  * LOCKING:
- * spin_lock_irq(pool->lock).
+ * raw_spin_lock_irq(pool->lock).
  */
 static void worker_enter_idle(struct worker *worker)
 {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1707 @ static void worker_enter_idle(struct wor
  * @worker is leaving idle state.  Update stats.
  *
  * LOCKING:
- * spin_lock_irq(pool->lock).
+ * raw_spin_lock_irq(pool->lock).
  */
 static void worker_leave_idle(struct worker *worker)
 {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1845 @ static struct worker *create_worker(stru
 	worker_attach_to_pool(worker, pool);
 
 	/* start the newly created worker */
-	spin_lock_irq(&pool->lock);
+	raw_spin_lock_irq(&pool->lock);
 	worker->pool->nr_workers++;
 	worker_enter_idle(worker);
 	wake_up_process(worker->task);
-	spin_unlock_irq(&pool->lock);
+	raw_spin_unlock_irq(&pool->lock);
 
 	return worker;
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1868 @ fail:
  * be idle.
  *
  * CONTEXT:
- * spin_lock_irq(pool->lock).
+ * raw_spin_lock_irq(pool->lock).
  */
 static void destroy_worker(struct worker *worker)
 {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1894 @ static void idle_worker_timeout(struct t
 {
 	struct worker_pool *pool = from_timer(pool, t, idle_timer);
 
-	spin_lock_irq(&pool->lock);
+	raw_spin_lock_irq(&pool->lock);
 
 	while (too_many_workers(pool)) {
 		struct worker *worker;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1912 @ static void idle_worker_timeout(struct t
 		destroy_worker(worker);
 	}
 
-	spin_unlock_irq(&pool->lock);
+	raw_spin_unlock_irq(&pool->lock);
 }
 
 static void send_mayday(struct work_struct *work)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1943 @ static void pool_mayday_timeout(struct t
 	struct worker_pool *pool = from_timer(pool, t, mayday_timer);
 	struct work_struct *work;
 
-	spin_lock_irq(&pool->lock);
-	spin_lock(&wq_mayday_lock);		/* for wq->maydays */
+	raw_spin_lock_irq(&pool->lock);
+	raw_spin_lock(&wq_mayday_lock);		/* for wq->maydays */
 
 	if (need_to_create_worker(pool)) {
 		/*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1957 @ static void pool_mayday_timeout(struct t
 			send_mayday(work);
 	}
 
-	spin_unlock(&wq_mayday_lock);
-	spin_unlock_irq(&pool->lock);
+	raw_spin_unlock(&wq_mayday_lock);
+	raw_spin_unlock_irq(&pool->lock);
 
 	mod_timer(&pool->mayday_timer, jiffies + MAYDAY_INTERVAL);
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1977 @ static void pool_mayday_timeout(struct t
  * may_start_working() %true.
  *
  * LOCKING:
- * spin_lock_irq(pool->lock) which may be released and regrabbed
+ * raw_spin_lock_irq(pool->lock) which may be released and regrabbed
  * multiple times.  Does GFP_KERNEL allocations.  Called only from
  * manager.
  */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1986 @ __releases(&pool->lock)
 __acquires(&pool->lock)
 {
 restart:
-	spin_unlock_irq(&pool->lock);
+	raw_spin_unlock_irq(&pool->lock);
 
 	/* if we don't make progress in MAYDAY_INITIAL_TIMEOUT, call for help */
 	mod_timer(&pool->mayday_timer, jiffies + MAYDAY_INITIAL_TIMEOUT);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2002 @ restart:
 	}
 
 	del_timer_sync(&pool->mayday_timer);
-	spin_lock_irq(&pool->lock);
+	raw_spin_lock_irq(&pool->lock);
 	/*
 	 * This is necessary even after a new worker was just successfully
 	 * created as @pool->lock was dropped and the new worker might have
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2025 @ restart:
  * and may_start_working() is true.
  *
  * CONTEXT:
- * spin_lock_irq(pool->lock) which may be released and regrabbed
+ * raw_spin_lock_irq(pool->lock) which may be released and regrabbed
  * multiple times.  Does GFP_KERNEL allocations.
  *
  * Return:
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2048 @ static bool manage_workers(struct worker
 
 	pool->manager = NULL;
 	pool->flags &= ~POOL_MANAGER_ACTIVE;
-	wake_up(&wq_manager_wait);
+	swake_up_one(&wq_manager_wait);
 	return true;
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2064 @ static bool manage_workers(struct worker
  * call this function to process a work.
  *
  * CONTEXT:
- * spin_lock_irq(pool->lock) which is released and regrabbed.
+ * raw_spin_lock_irq(pool->lock) which is released and regrabbed.
  */
 static void process_one_work(struct worker *worker, struct work_struct *work)
 __releases(&pool->lock)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2146 @ __acquires(&pool->lock)
 	 */
 	set_work_pool_and_clear_pending(work, pool->id);
 
-	spin_unlock_irq(&pool->lock);
+	raw_spin_unlock_irq(&pool->lock);
 
 	lock_map_acquire(&pwq->wq->lockdep_map);
 	lock_map_acquire(&lockdep_map);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2201 @ __acquires(&pool->lock)
 	 */
 	cond_resched();
 
-	spin_lock_irq(&pool->lock);
+	raw_spin_lock_irq(&pool->lock);
 
 	/* clear cpu intensive status */
 	if (unlikely(cpu_intensive))
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2227 @ __acquires(&pool->lock)
  * fetches a work from the top and executes it.
  *
  * CONTEXT:
- * spin_lock_irq(pool->lock) which may be released and regrabbed
+ * raw_spin_lock_irq(pool->lock) which may be released and regrabbed
  * multiple times.
  */
 static void process_scheduled_works(struct worker *worker)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2269 @ static int worker_thread(void *__worker)
 	/* tell the scheduler that this is a workqueue worker */
 	set_pf_worker(true);
 woke_up:
-	spin_lock_irq(&pool->lock);
+	raw_spin_lock_irq(&pool->lock);
 
 	/* am I supposed to die? */
 	if (unlikely(worker->flags & WORKER_DIE)) {
-		spin_unlock_irq(&pool->lock);
+		raw_spin_unlock_irq(&pool->lock);
 		WARN_ON_ONCE(!list_empty(&worker->entry));
 		set_pf_worker(false);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2339 @ sleep:
 	 */
 	worker_enter_idle(worker);
 	__set_current_state(TASK_IDLE);
-	spin_unlock_irq(&pool->lock);
+	raw_spin_unlock_irq(&pool->lock);
 	schedule();
 	goto woke_up;
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2393 @ repeat:
 	should_stop = kthread_should_stop();
 
 	/* see whether any pwq is asking for help */
-	spin_lock_irq(&wq_mayday_lock);
+	raw_spin_lock_irq(&wq_mayday_lock);
 
 	while (!list_empty(&wq->maydays)) {
 		struct pool_workqueue *pwq = list_first_entry(&wq->maydays,
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2405 @ repeat:
 		__set_current_state(TASK_RUNNING);
 		list_del_init(&pwq->mayday_node);
 
-		spin_unlock_irq(&wq_mayday_lock);
+		raw_spin_unlock_irq(&wq_mayday_lock);
 
 		worker_attach_to_pool(rescuer, pool);
 
-		spin_lock_irq(&pool->lock);
+		raw_spin_lock_irq(&pool->lock);
 
 		/*
 		 * Slurp in all works issued via this workqueue and
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2438 @ repeat:
 			 * incur MAYDAY_INTERVAL delay inbetween.
 			 */
 			if (need_to_create_worker(pool)) {
-				spin_lock(&wq_mayday_lock);
+				raw_spin_lock(&wq_mayday_lock);
 				get_pwq(pwq);
 				list_move_tail(&pwq->mayday_node, &wq->maydays);
-				spin_unlock(&wq_mayday_lock);
+				raw_spin_unlock(&wq_mayday_lock);
 			}
 		}
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2459 @ repeat:
 		if (need_more_worker(pool))
 			wake_up_worker(pool);
 
-		spin_unlock_irq(&pool->lock);
+		raw_spin_unlock_irq(&pool->lock);
 
 		worker_detach_from_pool(rescuer);
 
-		spin_lock_irq(&wq_mayday_lock);
+		raw_spin_lock_irq(&wq_mayday_lock);
 	}
 
-	spin_unlock_irq(&wq_mayday_lock);
+	raw_spin_unlock_irq(&wq_mayday_lock);
 
 	if (should_stop) {
 		__set_current_state(TASK_RUNNING);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2546 @ static void wq_barrier_func(struct work_
  * underneath us, so we can't reliably determine pwq from @target.
  *
  * CONTEXT:
- * spin_lock_irq(pool->lock).
+ * raw_spin_lock_irq(pool->lock).
  */
 static void insert_wq_barrier(struct pool_workqueue *pwq,
 			      struct wq_barrier *barr,
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2633 @ static bool flush_workqueue_prep_pwqs(st
 	for_each_pwq(pwq, wq) {
 		struct worker_pool *pool = pwq->pool;
 
-		spin_lock_irq(&pool->lock);
+		raw_spin_lock_irq(&pool->lock);
 
 		if (flush_color >= 0) {
 			WARN_ON_ONCE(pwq->flush_color != -1);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2650 @ static bool flush_workqueue_prep_pwqs(st
 			pwq->work_color = work_color;
 		}
 
-		spin_unlock_irq(&pool->lock);
+		raw_spin_unlock_irq(&pool->lock);
 	}
 
 	if (flush_color >= 0 && atomic_dec_and_test(&wq->nr_pwqs_to_flush))
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2850 @ reflush:
 	for_each_pwq(pwq, wq) {
 		bool drained;
 
-		spin_lock_irq(&pwq->pool->lock);
+		raw_spin_lock_irq(&pwq->pool->lock);
 		drained = !pwq->nr_active && list_empty(&pwq->delayed_works);
-		spin_unlock_irq(&pwq->pool->lock);
+		raw_spin_unlock_irq(&pwq->pool->lock);
 
 		if (drained)
 			continue;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2881 @ static bool start_flush_work(struct work
 
 	might_sleep();
 
-	local_irq_disable();
+	rcu_read_lock();
 	pool = get_work_pool(work);
 	if (!pool) {
-		local_irq_enable();
+		rcu_read_unlock();
 		return false;
 	}
 
-	spin_lock(&pool->lock);
+	raw_spin_lock_irq(&pool->lock);
 	/* see the comment in try_to_grab_pending() with the same code */
 	pwq = get_work_pwq(work);
 	if (pwq) {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2904 @ static bool start_flush_work(struct work
 	check_flush_dependency(pwq->wq, work);
 
 	insert_wq_barrier(pwq, barr, work, worker);
-	spin_unlock_irq(&pool->lock);
+	raw_spin_unlock_irq(&pool->lock);
 
 	/*
 	 * Force a lock recursion deadlock when using flush_work() inside a
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2920 @ static bool start_flush_work(struct work
 		lock_map_acquire(&pwq->wq->lockdep_map);
 		lock_map_release(&pwq->wq->lockdep_map);
 	}
-
+	rcu_read_unlock();
 	return true;
 already_gone:
-	spin_unlock_irq(&pool->lock);
+	raw_spin_unlock_irq(&pool->lock);
+	rcu_read_unlock();
 	return false;
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3236 @ EXPORT_SYMBOL_GPL(execute_in_process_con
  *
  * Undo alloc_workqueue_attrs().
  */
-void free_workqueue_attrs(struct workqueue_attrs *attrs)
+static void free_workqueue_attrs(struct workqueue_attrs *attrs)
 {
 	if (attrs) {
 		free_cpumask_var(attrs->cpumask);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3246 @ void free_workqueue_attrs(struct workque
 
 /**
  * alloc_workqueue_attrs - allocate a workqueue_attrs
- * @gfp_mask: allocation mask to use
  *
  * Allocate a new workqueue_attrs, initialize with default settings and
  * return it.
  *
  * Return: The allocated new workqueue_attr on success. %NULL on failure.
  */
-struct workqueue_attrs *alloc_workqueue_attrs(gfp_t gfp_mask)
+static struct workqueue_attrs *alloc_workqueue_attrs(void)
 {
 	struct workqueue_attrs *attrs;
 
-	attrs = kzalloc(sizeof(*attrs), gfp_mask);
+	attrs = kzalloc(sizeof(*attrs), GFP_KERNEL);
 	if (!attrs)
 		goto fail;
-	if (!alloc_cpumask_var(&attrs->cpumask, gfp_mask))
+	if (!alloc_cpumask_var(&attrs->cpumask, GFP_KERNEL))
 		goto fail;
 
 	cpumask_copy(attrs->cpumask, cpu_possible_mask);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3316 @ static bool wqattrs_equal(const struct w
  */
 static int init_worker_pool(struct worker_pool *pool)
 {
-	spin_lock_init(&pool->lock);
+	raw_spin_lock_init(&pool->lock);
 	pool->id = -1;
 	pool->cpu = -1;
 	pool->node = NUMA_NO_NODE;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3337 @ static int init_worker_pool(struct worke
 	pool->refcnt = 1;
 
 	/* shouldn't fail above this point */
-	pool->attrs = alloc_workqueue_attrs(GFP_KERNEL);
+	pool->attrs = alloc_workqueue_attrs();
 	if (!pool->attrs)
 		return -ENOMEM;
 	return 0;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3370 @ static void rcu_free_pool(struct rcu_hea
  * put_unbound_pool - put a worker_pool
  * @pool: worker_pool to put
  *
- * Put @pool.  If its refcnt reaches zero, it gets destroyed in sched-RCU
+ * Put @pool.  If its refcnt reaches zero, it gets destroyed in RCU
  * safe manner.  get_unbound_pool() calls this function on its failure path
  * and this function should be able to release pools which went through,
  * successfully or not, init_worker_pool().
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3402 @ static void put_unbound_pool(struct work
 	 * @pool's workers from blocking on attach_mutex.  We're the last
 	 * manager and @pool gets freed with the flag set.
 	 */
-	spin_lock_irq(&pool->lock);
-	wait_event_lock_irq(wq_manager_wait,
+	raw_spin_lock_irq(&pool->lock);
+	swait_event_lock_irq(wq_manager_wait,
 			    !(pool->flags & POOL_MANAGER_ACTIVE), pool->lock);
 	pool->flags |= POOL_MANAGER_ACTIVE;
 
 	while ((worker = first_idle_worker(pool)))
 		destroy_worker(worker);
 	WARN_ON(pool->nr_workers || pool->nr_idle);
-	spin_unlock_irq(&pool->lock);
+	raw_spin_unlock_irq(&pool->lock);
 
 	mutex_lock(&wq_pool_attach_mutex);
 	if (!list_empty(&pool->workers))
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3564 @ static void pwq_adjust_max_active(struct
 		return;
 
 	/* this function can be called during early boot w/ irq disabled */
-	spin_lock_irqsave(&pwq->pool->lock, flags);
+	raw_spin_lock_irqsave(&pwq->pool->lock, flags);
 
 	/*
 	 * During [un]freezing, the caller is responsible for ensuring that
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3587 @ static void pwq_adjust_max_active(struct
 		pwq->max_active = 0;
 	}
 
-	spin_unlock_irqrestore(&pwq->pool->lock, flags);
+	raw_spin_unlock_irqrestore(&pwq->pool->lock, flags);
 }
 
 /* initialize newly alloced @pwq which is associated with @wq and @pool */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3760 @ apply_wqattrs_prepare(struct workqueue_s
 
 	ctx = kzalloc(struct_size(ctx, pwq_tbl, nr_node_ids), GFP_KERNEL);
 
-	new_attrs = alloc_workqueue_attrs(GFP_KERNEL);
-	tmp_attrs = alloc_workqueue_attrs(GFP_KERNEL);
+	new_attrs = alloc_workqueue_attrs();
+	tmp_attrs = alloc_workqueue_attrs();
 	if (!ctx || !new_attrs || !tmp_attrs)
 		goto out_free;
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3897 @ static int apply_workqueue_attrs_locked(
  *
  * Return: 0 on success and -errno on failure.
  */
-int apply_workqueue_attrs(struct workqueue_struct *wq,
+static int apply_workqueue_attrs(struct workqueue_struct *wq,
 			  const struct workqueue_attrs *attrs)
 {
 	int ret;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3908 @ int apply_workqueue_attrs(struct workque
 
 	return ret;
 }
-EXPORT_SYMBOL_GPL(apply_workqueue_attrs);
 
 /**
  * wq_update_unbound_numa - update NUMA affinity of a wq for CPU hot[un]plug
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3985 @ static void wq_update_unbound_numa(struc
 
 use_dfl_pwq:
 	mutex_lock(&wq->mutex);
-	spin_lock_irq(&wq->dfl_pwq->pool->lock);
+	raw_spin_lock_irq(&wq->dfl_pwq->pool->lock);
 	get_pwq(wq->dfl_pwq);
-	spin_unlock_irq(&wq->dfl_pwq->pool->lock);
+	raw_spin_unlock_irq(&wq->dfl_pwq->pool->lock);
 	old_pwq = numa_pwq_tbl_install(wq, node, wq->dfl_pwq);
 out_unlock:
 	mutex_unlock(&wq->mutex);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4106 @ struct workqueue_struct *__alloc_workque
 		return NULL;
 
 	if (flags & WQ_UNBOUND) {
-		wq->unbound_attrs = alloc_workqueue_attrs(GFP_KERNEL);
+		wq->unbound_attrs = alloc_workqueue_attrs();
 		if (!wq->unbound_attrs)
 			goto err_free_wq;
 	}
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4333 @ bool workqueue_congested(int cpu, struct
 	struct pool_workqueue *pwq;
 	bool ret;
 
-	rcu_read_lock_sched();
+	rcu_read_lock();
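+	/*
+	 * rcu_read_lock() does not imply disabled preemption, so disable it
+	 * explicitly for the smp_processor_id() call below.
+	 */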
+	preempt_disable();
 
 	if (cpu == WORK_CPU_UNBOUND)
 		cpu = smp_processor_id();
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4345 @ bool workqueue_congested(int cpu, struct
 		pwq = unbound_pwq_by_node(wq, cpu_to_node(cpu));
 
 	ret = !list_empty(&pwq->delayed_works);
-	rcu_read_unlock_sched();
+	preempt_enable();
+	rcu_read_unlock();
 
 	return ret;
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4372 @ unsigned int work_busy(struct work_struc
 	if (work_pending(work))
 		ret |= WORK_BUSY_PENDING;
 
-	local_irq_save(flags);
+	rcu_read_lock();
 	pool = get_work_pool(work);
 	if (pool) {
-		spin_lock(&pool->lock);
+		raw_spin_lock_irqsave(&pool->lock, flags);
 		if (find_worker_executing_work(pool, work))
 			ret |= WORK_BUSY_RUNNING;
-		spin_unlock(&pool->lock);
+		raw_spin_unlock_irqrestore(&pool->lock, flags);
 	}
-	local_irq_restore(flags);
+	rcu_read_unlock();
 
 	return ret;
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4564 @ void show_workqueue_state(void)
 	unsigned long flags;
 	int pi;
 
-	rcu_read_lock_sched();
+	rcu_read_lock();
 
 	pr_info("Showing busy workqueues and worker pools:\n");
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4584 @ void show_workqueue_state(void)
 		pr_info("workqueue %s: flags=0x%x\n", wq->name, wq->flags);
 
 		for_each_pwq(pwq, wq) {
-			spin_lock_irqsave(&pwq->pool->lock, flags);
+			raw_spin_lock_irqsave(&pwq->pool->lock, flags);
 			if (pwq->nr_active || !list_empty(&pwq->delayed_works))
 				show_pwq(pwq);
-			spin_unlock_irqrestore(&pwq->pool->lock, flags);
+			raw_spin_unlock_irqrestore(&pwq->pool->lock, flags);
 			/*
 			 * We could be printing a lot from atomic context, e.g.
 			 * sysrq-t -> show_workqueue_state(). Avoid triggering
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4601 @ void show_workqueue_state(void)
 		struct worker *worker;
 		bool first = true;
 
-		spin_lock_irqsave(&pool->lock, flags);
+		raw_spin_lock_irqsave(&pool->lock, flags);
 		if (pool->nr_workers == pool->nr_idle)
 			goto next_pool;
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4620 @ void show_workqueue_state(void)
 		}
 		pr_cont("\n");
 	next_pool:
-		spin_unlock_irqrestore(&pool->lock, flags);
+		raw_spin_unlock_irqrestore(&pool->lock, flags);
 		/*
 		 * We could be printing a lot from atomic context, e.g.
 		 * sysrq-t -> show_workqueue_state(). Avoid triggering
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4629 @ void show_workqueue_state(void)
 		touch_nmi_watchdog();
 	}
 
-	rcu_read_unlock_sched();
+	rcu_read_unlock();
 }
 
 /* used to show worker information through /proc/PID/{comm,stat,status} */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4650 @ void wq_worker_comm(char *buf, size_t si
 		struct worker_pool *pool = worker->pool;
 
 		if (pool) {
-			spin_lock_irq(&pool->lock);
+			raw_spin_lock_irq(&pool->lock);
 			/*
 			 * ->desc tracks information (wq name or
 			 * set_worker_desc()) for the latest execution.  If
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4664 @ void wq_worker_comm(char *buf, size_t si
 					scnprintf(buf + off, size - off, "-%s",
 						  worker->desc);
 			}
-			spin_unlock_irq(&pool->lock);
+			raw_spin_unlock_irq(&pool->lock);
 		}
 	}
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4695 @ static void unbind_workers(int cpu)
 
 	for_each_cpu_worker_pool(pool, cpu) {
 		mutex_lock(&wq_pool_attach_mutex);
-		spin_lock_irq(&pool->lock);
+		raw_spin_lock_irq(&pool->lock);
 
 		/*
 		 * We've blocked all attach/detach operations. Make all workers
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4709 @ static void unbind_workers(int cpu)
 
 		pool->flags |= POOL_DISASSOCIATED;
 
-		spin_unlock_irq(&pool->lock);
+		raw_spin_unlock_irq(&pool->lock);
 		mutex_unlock(&wq_pool_attach_mutex);
 
 		/*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4735 @ static void unbind_workers(int cpu)
 		 * worker blocking could lead to lengthy stalls.  Kick off
 		 * unbound chain execution of currently pending work items.
 		 */
-		spin_lock_irq(&pool->lock);
+		raw_spin_lock_irq(&pool->lock);
 		wake_up_worker(pool);
-		spin_unlock_irq(&pool->lock);
+		raw_spin_unlock_irq(&pool->lock);
 	}
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4764 @ static void rebind_workers(struct worker
 		WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
 						  pool->attrs->cpumask) < 0);
 
-	spin_lock_irq(&pool->lock);
+	raw_spin_lock_irq(&pool->lock);
 
 	pool->flags &= ~POOL_DISASSOCIATED;
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4803 @ static void rebind_workers(struct worker
 		WRITE_ONCE(worker->flags, worker_flags);
 	}
 
-	spin_unlock_irq(&pool->lock);
+	raw_spin_unlock_irq(&pool->lock);
 }
 
 /**
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:5016 @ bool freeze_workqueues_busy(void)
 		 * nr_active is monotonically decreasing.  It's safe
 		 * to peek without lock.
 		 */
-		rcu_read_lock_sched();
+		rcu_read_lock();
 		for_each_pwq(pwq, wq) {
 			WARN_ON_ONCE(pwq->nr_active < 0);
 			if (pwq->nr_active) {
 				busy = true;
-				rcu_read_unlock_sched();
+				rcu_read_unlock();
 				goto out_unlock;
 			}
 		}
-		rcu_read_unlock_sched();
+		rcu_read_unlock();
 	}
 out_unlock:
 	mutex_unlock(&wq_pool_mutex);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:5220 @ static ssize_t wq_pool_ids_show(struct d
 	const char *delim = "";
 	int node, written = 0;
 
-	rcu_read_lock_sched();
+	get_online_cpus();
+	rcu_read_lock();
 	for_each_node(node) {
 		written += scnprintf(buf + written, PAGE_SIZE - written,
 				     "%s%d:%d", delim, node,
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:5229 @ static ssize_t wq_pool_ids_show(struct d
 		delim = " ";
 	}
 	written += scnprintf(buf + written, PAGE_SIZE - written, "\n");
-	rcu_read_unlock_sched();
+	rcu_read_unlock();
+	put_online_cpus();
 
 	return written;
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:5255 @ static struct workqueue_attrs *wq_sysfs_
 
 	lockdep_assert_held(&wq_pool_mutex);
 
-	attrs = alloc_workqueue_attrs(GFP_KERNEL);
+	attrs = alloc_workqueue_attrs();
 	if (!attrs)
 		return NULL;
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:5677 @ static void __init wq_numa_init(void)
 		return;
 	}
 
-	wq_update_unbound_numa_attrs_buf = alloc_workqueue_attrs(GFP_KERNEL);
+	wq_update_unbound_numa_attrs_buf = alloc_workqueue_attrs();
 	BUG_ON(!wq_update_unbound_numa_attrs_buf);
 
 	/*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:5752 @ int __init workqueue_init_early(void)
 	for (i = 0; i < NR_STD_WORKER_POOLS; i++) {
 		struct workqueue_attrs *attrs;
 
-		BUG_ON(!(attrs = alloc_workqueue_attrs(GFP_KERNEL)));
+		BUG_ON(!(attrs = alloc_workqueue_attrs()));
 		attrs->nice = std_nice[i];
 		unbound_std_wq_attrs[i] = attrs;
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:5761 @ int __init workqueue_init_early(void)
 		 * guaranteed by max_active which is enforced by pwqs.
 		 * Turn off NUMA so that dfl_pwq is used for all nodes.
 		 */
-		BUG_ON(!(attrs = alloc_workqueue_attrs(GFP_KERNEL)));
+		BUG_ON(!(attrs = alloc_workqueue_attrs()));
 		attrs->nice = std_nice[i];
 		attrs->no_numa = true;
 		ordered_wq_attrs[i] = attrs;
Index: linux-5.0.19-rt11/kernel/workqueue_internal.h
===================================================================
--- linux-5.0.19-rt11.orig/kernel/workqueue_internal.h
+++ linux-5.0.19-rt11/kernel/workqueue_internal.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:47 @ struct worker {
 	unsigned long		last_active;	/* L: last active timestamp */
 	unsigned int		flags;		/* X: flags */
 	int			id;		/* I: worker id */
+	int			sleeping;	/* None */
 
 	/*
 	 * Opaque string set with work_set_desc().  Printed out with task
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:76 @ static inline struct worker *current_wq_
  * Scheduler hooks for concurrency managed workqueue.  Only to be used from
  * sched/ and workqueue.c.
  */
-void wq_worker_waking_up(struct task_struct *task, int cpu);
-struct task_struct *wq_worker_sleeping(struct task_struct *task);
+void wq_worker_running(struct task_struct *task);
+void wq_worker_sleeping(struct task_struct *task);
 work_func_t wq_worker_last_func(struct task_struct *task);
 
 #endif /* _KERNEL_WORKQUEUE_INTERNAL_H */
Index: linux-5.0.19-rt11/lib/Kconfig
===================================================================
--- linux-5.0.19-rt11.orig/lib/Kconfig
+++ linux-5.0.19-rt11/lib/Kconfig
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:455 @ config CHECK_SIGNATURE
 
 config CPUMASK_OFFSTACK
 	bool "Force CPU masks off stack" if DEBUG_PER_CPU_MAPS
+	depends on !PREEMPT_RT_FULL
 	help
 	  Use dynamic allocation for cpumask_var_t, instead of putting
 	  them on the stack.  This is a bit more expensive, but avoids
Index: linux-5.0.19-rt11/lib/Kconfig.debug
===================================================================
--- linux-5.0.19-rt11.orig/lib/Kconfig.debug
+++ linux-5.0.19-rt11/lib/Kconfig.debug
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:46 @ config CONSOLE_LOGLEVEL_QUIET
 	  will be used as the loglevel. IOW passing "quiet" will be the
 	  equivalent of passing "loglevel=<CONSOLE_LOGLEVEL_QUIET>"
 
+config CONSOLE_LOGLEVEL_EMERGENCY
+	int "Emergency console loglevel (1-15)"
+	range 1 15
+	default "5"
+	help
+	  The loglevel used to determine whether a console message is an
+	  emergency message.
+
+	  If supported by the console driver, emergency messages are flushed
+	  to the console immediately. This can cause significant system
+	  latencies, so the value should be set such that only truly critical
+	  messages are classified as emergency messages.
+
+	  Setting a default here is equivalent to passing
+	  emergency_loglevel=<x> on the kernel command line. If the boot
+	  parameter is given, it overrides the value specified here.
+
 config MESSAGE_LOGLEVEL_DEFAULT
 	int "Default message log level (1-7)"
 	range 1 7
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1242 @ config DEBUG_ATOMIC_SLEEP
 
 config DEBUG_LOCKING_API_SELFTESTS
 	bool "Locking API boot-time self-tests"
-	depends on DEBUG_KERNEL
+	depends on DEBUG_KERNEL && !PREEMPT_RT_FULL
 	help
 	  Say Y here if you want the kernel to run a short self-test during
 	  bootup. The self-test checks whether common types of locking bugs
Index: linux-5.0.19-rt11/lib/Makefile
===================================================================
--- linux-5.0.19-rt11.orig/lib/Makefile
+++ linux-5.0.19-rt11/lib/Makefile
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:33 @ endif
 
 lib-y := ctype.o string.o vsprintf.o cmdline.o \
 	 rbtree.o radix-tree.o timerqueue.o xarray.o \
-	 idr.o int_sqrt.o extable.o \
+	 idr.o int_sqrt.o extable.o printk_ringbuffer.o \
 	 sha1.o chacha.o irq_regs.o argv_split.o \
 	 flex_proportions.o ratelimit.o show_mem.o \
 	 is_single_threaded.o plist.o decompress.o kobject_uevent.o \
Index: linux-5.0.19-rt11/lib/bust_spinlocks.c
===================================================================
--- linux-5.0.19-rt11.orig/lib/bust_spinlocks.c
+++ linux-5.0.19-rt11/lib/bust_spinlocks.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:29 @ void bust_spinlocks(int yes)
 		unblank_screen();
 #endif
 		console_unblank();
-		if (--oops_in_progress == 0)
-			wake_up_klogd();
+		--oops_in_progress;
 	}
 }
Index: linux-5.0.19-rt11/lib/debugobjects.c
===================================================================
--- linux-5.0.19-rt11.orig/lib/debugobjects.c
+++ linux-5.0.19-rt11/lib/debugobjects.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:379 @ __debug_object_init(void *addr, struct d
 	struct debug_obj *obj;
 	unsigned long flags;
 
-	fill_pool();
+#ifdef CONFIG_PREEMPT_RT_FULL
+	if (preempt_count() == 0 && !irqs_disabled())
+#endif
+		fill_pool();
 
 	db = get_bucket((unsigned long) addr);
 
Index: linux-5.0.19-rt11/lib/irq_poll.c
===================================================================
--- linux-5.0.19-rt11.orig/lib/irq_poll.c
+++ linux-5.0.19-rt11/lib/irq_poll.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:40 @ void irq_poll_sched(struct irq_poll *iop
 	list_add_tail(&iop->list, this_cpu_ptr(&blk_cpu_iopoll));
 	__raise_softirq_irqoff(IRQ_POLL_SOFTIRQ);
 	local_irq_restore(flags);
+	preempt_check_resched_rt();
 }
 EXPORT_SYMBOL(irq_poll_sched);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:76 @ void irq_poll_complete(struct irq_poll *
 	local_irq_save(flags);
 	__irq_poll_complete(iop);
 	local_irq_restore(flags);
+	preempt_check_resched_rt();
 }
 EXPORT_SYMBOL(irq_poll_complete);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:101 @ static void __latent_entropy irq_poll_so
 		}
 
 		local_irq_enable();
+		preempt_check_resched_rt();
 
 		/* Even though interrupts have been re-enabled, this
 		 * access is safe because interrupts can only add new
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:139 @ static void __latent_entropy irq_poll_so
 		__raise_softirq_irqoff(IRQ_POLL_SOFTIRQ);
 
 	local_irq_enable();
+	preempt_check_resched_rt();
 }
 
 /**
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:203 @ static int irq_poll_cpu_dead(unsigned in
 			 this_cpu_ptr(&blk_cpu_iopoll));
 	__raise_softirq_irqoff(IRQ_POLL_SOFTIRQ);
 	local_irq_enable();
+	preempt_check_resched_rt();
 
 	return 0;
 }
Index: linux-5.0.19-rt11/lib/locking-selftest.c
===================================================================
--- linux-5.0.19-rt11.orig/lib/locking-selftest.c
+++ linux-5.0.19-rt11/lib/locking-selftest.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:745 @ GENERATE_TESTCASE(init_held_rtmutex);
 #include "locking-selftest-spin-hardirq.h"
 GENERATE_PERMUTATIONS_2_EVENTS(irqsafe1_hard_spin)
 
+#ifndef CONFIG_PREEMPT_RT_FULL
+
 #include "locking-selftest-rlock-hardirq.h"
 GENERATE_PERMUTATIONS_2_EVENTS(irqsafe1_hard_rlock)
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:762 @ GENERATE_PERMUTATIONS_2_EVENTS(irqsafe1_
 #include "locking-selftest-wlock-softirq.h"
 GENERATE_PERMUTATIONS_2_EVENTS(irqsafe1_soft_wlock)
 
+#endif
+
 #undef E1
 #undef E2
 
+#ifndef CONFIG_PREEMPT_RT_FULL
 /*
  * Enabling hardirqs with a softirq-safe lock held:
  */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:800 @ GENERATE_PERMUTATIONS_2_EVENTS(irqsafe2A
 #undef E1
 #undef E2
 
+#endif
+
 /*
  * Enabling irqs with an irq-safe lock held:
  */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:825 @ GENERATE_PERMUTATIONS_2_EVENTS(irqsafe2A
 #include "locking-selftest-spin-hardirq.h"
 GENERATE_PERMUTATIONS_2_EVENTS(irqsafe2B_hard_spin)
 
+#ifndef CONFIG_PREEMPT_RT_FULL
+
 #include "locking-selftest-rlock-hardirq.h"
 GENERATE_PERMUTATIONS_2_EVENTS(irqsafe2B_hard_rlock)
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:842 @ GENERATE_PERMUTATIONS_2_EVENTS(irqsafe2B
 #include "locking-selftest-wlock-softirq.h"
 GENERATE_PERMUTATIONS_2_EVENTS(irqsafe2B_soft_wlock)
 
+#endif
+
 #undef E1
 #undef E2
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:875 @ GENERATE_PERMUTATIONS_2_EVENTS(irqsafe2B
 #include "locking-selftest-spin-hardirq.h"
 GENERATE_PERMUTATIONS_3_EVENTS(irqsafe3_hard_spin)
 
+#ifndef CONFIG_PREEMPT_RT_FULL
+
 #include "locking-selftest-rlock-hardirq.h"
 GENERATE_PERMUTATIONS_3_EVENTS(irqsafe3_hard_rlock)
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:892 @ GENERATE_PERMUTATIONS_3_EVENTS(irqsafe3_
 #include "locking-selftest-wlock-softirq.h"
 GENERATE_PERMUTATIONS_3_EVENTS(irqsafe3_soft_wlock)
 
+#endif
+
 #undef E1
 #undef E2
 #undef E3
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:927 @ GENERATE_PERMUTATIONS_3_EVENTS(irqsafe3_
 #include "locking-selftest-spin-hardirq.h"
 GENERATE_PERMUTATIONS_3_EVENTS(irqsafe4_hard_spin)
 
+#ifndef CONFIG_PREEMPT_RT_FULL
+
 #include "locking-selftest-rlock-hardirq.h"
 GENERATE_PERMUTATIONS_3_EVENTS(irqsafe4_hard_rlock)
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:944 @ GENERATE_PERMUTATIONS_3_EVENTS(irqsafe4_
 #include "locking-selftest-wlock-softirq.h"
 GENERATE_PERMUTATIONS_3_EVENTS(irqsafe4_soft_wlock)
 
+#endif
+
 #undef E1
 #undef E2
 #undef E3
 
+#ifndef CONFIG_PREEMPT_RT_FULL
+
 /*
  * read-lock / write-lock irq inversion.
  *
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1014 @ GENERATE_PERMUTATIONS_3_EVENTS(irq_inver
 #undef E2
 #undef E3
 
+#endif
+
+#ifndef CONFIG_PREEMPT_RT_FULL
+
 /*
  * read-lock / write-lock recursion that is actually safe.
  */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1056 @ GENERATE_PERMUTATIONS_3_EVENTS(irq_read_
 #undef E2
 #undef E3
 
+#endif
+
 /*
  * read-lock / write-lock recursion that is unsafe.
  */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2087 @ void locking_selftest(void)
 
 	printk("  --------------------------------------------------------------------------\n");
 
+#ifndef CONFIG_PREEMPT_RT_FULL
 	/*
 	 * irq-context testcases:
 	 */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2100 @ void locking_selftest(void)
 
 	DO_TESTCASE_6x2("irq read-recursion", irq_read_recursion);
 //	DO_TESTCASE_6x2B("irq read-recursion #2", irq_read_recursion2);
+#else
+	/* On -rt, we only do the hardirq context tests for raw spinlocks */
+	DO_TESTCASE_1B("hard-irqs-on + irq-safe-A", irqsafe1_hard_spin, 12);
+	DO_TESTCASE_1B("hard-irqs-on + irq-safe-A", irqsafe1_hard_spin, 21);
+
+	DO_TESTCASE_1B("hard-safe-A + irqs-on", irqsafe2B_hard_spin, 12);
+	DO_TESTCASE_1B("hard-safe-A + irqs-on", irqsafe2B_hard_spin, 21);
+
+	DO_TESTCASE_1B("hard-safe-A + unsafe-B #1", irqsafe3_hard_spin, 123);
+	DO_TESTCASE_1B("hard-safe-A + unsafe-B #1", irqsafe3_hard_spin, 132);
+	DO_TESTCASE_1B("hard-safe-A + unsafe-B #1", irqsafe3_hard_spin, 213);
+	DO_TESTCASE_1B("hard-safe-A + unsafe-B #1", irqsafe3_hard_spin, 231);
+	DO_TESTCASE_1B("hard-safe-A + unsafe-B #1", irqsafe3_hard_spin, 312);
+	DO_TESTCASE_1B("hard-safe-A + unsafe-B #1", irqsafe3_hard_spin, 321);
+
+	DO_TESTCASE_1B("hard-safe-A + unsafe-B #2", irqsafe4_hard_spin, 123);
+	DO_TESTCASE_1B("hard-safe-A + unsafe-B #2", irqsafe4_hard_spin, 132);
+	DO_TESTCASE_1B("hard-safe-A + unsafe-B #2", irqsafe4_hard_spin, 213);
+	DO_TESTCASE_1B("hard-safe-A + unsafe-B #2", irqsafe4_hard_spin, 231);
+	DO_TESTCASE_1B("hard-safe-A + unsafe-B #2", irqsafe4_hard_spin, 312);
+	DO_TESTCASE_1B("hard-safe-A + unsafe-B #2", irqsafe4_hard_spin, 321);
+#endif
 
 	ww_tests();
 
Index: linux-5.0.19-rt11/lib/nmi_backtrace.c
===================================================================
--- linux-5.0.19-rt11.orig/lib/nmi_backtrace.c
+++ linux-5.0.19-rt11/lib/nmi_backtrace.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:78 @ void nmi_trigger_cpumask_backtrace(const
 		touch_softlockup_watchdog();
 	}
 
-	/*
-	 * Force flush any remote buffers that might be stuck in IRQ context
-	 * and therefore could not run their irq_work.
-	 */
-	printk_safe_flush();
-
 	clear_bit_unlock(0, &backtrace_flag);
 	put_cpu();
 }
Index: linux-5.0.19-rt11/lib/printk_ringbuffer.c
===================================================================
--- /dev/null
+++ linux-5.0.19-rt11/lib/printk_ringbuffer.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4 @
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/sched.h>
+#include <linux/smp.h>
+#include <linux/string.h>
+#include <linux/errno.h>
+#include <linux/printk_ringbuffer.h>
+
+#define PRB_SIZE(rb) (1 << rb->size_bits)
+#define PRB_SIZE_BITMASK(rb) (PRB_SIZE(rb) - 1)
+#define PRB_INDEX(rb, lpos) (lpos & PRB_SIZE_BITMASK(rb))
+#define PRB_WRAPS(rb, lpos) (lpos >> rb->size_bits)
+#define PRB_WRAP_LPOS(rb, lpos, xtra) \
+	((PRB_WRAPS(rb, lpos) + xtra) << rb->size_bits)
+#define PRB_DATA_SIZE(e) (e->size - sizeof(struct prb_entry))
+#define PRB_DATA_ALIGN sizeof(long)
+
+static bool __prb_trylock(struct prb_cpulock *cpu_lock,
+			  unsigned int *cpu_store)
+{
+	unsigned long *flags;
+	unsigned int cpu;
+
+	cpu = get_cpu();
+
+	*cpu_store = atomic_read(&cpu_lock->owner);
+	/* memory barrier to ensure the current lock owner is visible */
+	smp_rmb();
+	if (*cpu_store == -1) {
+		flags = per_cpu_ptr(cpu_lock->irqflags, cpu);
+		local_irq_save(*flags);
+		if (atomic_try_cmpxchg_acquire(&cpu_lock->owner,
+					       cpu_store, cpu)) {
+			return true;
+		}
+		local_irq_restore(*flags);
+	} else if (*cpu_store == cpu) {
+		return true;
+	}
+
+	put_cpu();
+	return false;
+}
+
+/*
+ * prb_lock: Perform a processor-reentrant spin lock.
+ * @cpu_lock: A pointer to the lock object.
+ * @cpu_store: A "flags" pointer to store lock status information.
+ *
+ * If no processor has the lock, the calling processor takes the lock and
+ * becomes the owner. If the calling processor is already the owner of the
+ * lock, this function succeeds immediately. If the lock is held by another
+ * processor, this function spins until the calling processor becomes the
+ * owner.
+ *
+ * It is safe to call this function from any context and state.
+ */
+void prb_lock(struct prb_cpulock *cpu_lock, unsigned int *cpu_store)
+{
+	for (;;) {
+		if (__prb_trylock(cpu_lock, cpu_store))
+			break;
+		cpu_relax();
+	}
+}
+
+/*
+ * prb_unlock: Perform a processor-reentrant spin unlock.
+ * @cpu_lock: A pointer to the lock object.
+ * @cpu_store: A "flags" object storing lock status information.
+ *
+ * Release the lock. The calling processor must be the owner of the lock.
+ *
+ * It is safe to call this function from any context and state.
+ */
+void prb_unlock(struct prb_cpulock *cpu_lock, unsigned int cpu_store)
+{
+	unsigned long *flags;
+	unsigned int cpu;
+
+	cpu = atomic_read(&cpu_lock->owner);
+	atomic_set_release(&cpu_lock->owner, cpu_store);
+
+	if (cpu_store == -1) {
+		flags = per_cpu_ptr(cpu_lock->irqflags, cpu);
+		local_irq_restore(*flags);
+	}
+
+	put_cpu();
+}
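
A note on usage: the pair above is what gives writers their processor-reentrant
serialization. A minimal sketch of how the lock is meant to be used (an
editorial illustration, not part of the patch; the function name is invented
and the prb_cpulock instance is assumed to have been declared and initialized
elsewhere):

static void emit_atomically(struct prb_cpulock *lock)
{
	unsigned int cpu_store;

	prb_lock(lock, &cpu_store);
	/* output emitted here cannot interleave with other CPUs */
	prb_unlock(lock, cpu_store);
}

A nested prb_lock() on the same CPU (for example from NMI context) succeeds
immediately; only the outermost prb_unlock() drops ownership and restores
interrupts, as can be seen from the cpu_store handling above.
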
+
+static struct prb_entry *to_entry(struct printk_ringbuffer *rb,
+				  unsigned long lpos)
+{
+	char *buffer = rb->buffer;
+	buffer += PRB_INDEX(rb, lpos);
+	return (struct prb_entry *)buffer;
+}
+
+static int calc_next(struct printk_ringbuffer *rb, unsigned long tail,
+		     unsigned long lpos, int size, unsigned long *calced_next)
+{
+	unsigned long next_lpos;
+	int ret = 0;
+again:
+	next_lpos = lpos + size;
+	if (next_lpos - tail > PRB_SIZE(rb))
+		return -1;
+
+	if (PRB_WRAPS(rb, lpos) != PRB_WRAPS(rb, next_lpos)) {
+		lpos = PRB_WRAP_LPOS(rb, next_lpos, 0);
+		ret |= 1;
+		goto again;
+	}
+
+	*calced_next = next_lpos;
+	return ret;
+}
+
+static bool push_tail(struct printk_ringbuffer *rb, unsigned long tail)
+{
+	unsigned long new_tail;
+	struct prb_entry *e;
+	unsigned long head;
+
+	if (tail != atomic_long_read(&rb->tail))
+		return true;
+
+	e = to_entry(rb, tail);
+	if (e->size != -1)
+		new_tail = tail + e->size;
+	else
+		new_tail = PRB_WRAP_LPOS(rb, tail, 1);
+
+	/* make sure the new tail does not overtake the head */
+	head = atomic_long_read(&rb->head);
+	if (head - new_tail > PRB_SIZE(rb))
+		return false;
+
+	atomic_long_cmpxchg(&rb->tail, tail, new_tail);
+	return true;
+}
+
+/*
+ * prb_commit: Commit a reserved entry to the ring buffer.
+ * @h: An entry handle referencing the data entry to commit.
+ *
+ * Commit data that has been reserved using prb_reserve(). Once the data
+ * block has been committed, it can be invalidated at any time. If a writer
+ * is interested in using the data after committing, the writer should make
+ * its own copy first or use the prb_iter_ reader functions to access the
+ * data in the ring buffer.
+ *
+ * It is safe to call this function from any context and state.
+ */
+void prb_commit(struct prb_handle *h)
+{
+	struct printk_ringbuffer *rb = h->rb;
+	bool changed = false;
+	struct prb_entry *e;
+	unsigned long head;
+	unsigned long res;
+
+	for (;;) {
+		if (atomic_read(&rb->ctx) != 1) {
+			/* the interrupted context will fixup head */
+			atomic_dec(&rb->ctx);
+			break;
+		}
+		/* assign sequence numbers before moving head */
+		head = atomic_long_read(&rb->head);
+		res = atomic_long_read(&rb->reserve);
+		while (head != res) {
+			e = to_entry(rb, head);
+			if (e->size == -1) {
+				head = PRB_WRAP_LPOS(rb, head, 1);
+				continue;
+			}
+			while (atomic_long_read(&rb->lost)) {
+				atomic_long_dec(&rb->lost);
+				rb->seq++;
+			}
+			e->seq = ++rb->seq;
+			head += e->size;
+			changed = true;
+		}
+		atomic_long_set_release(&rb->head, res);
+
+		atomic_dec(&rb->ctx);
+
+		if (atomic_long_read(&rb->reserve) == res)
+			break;
+		atomic_inc(&rb->ctx);
+	}
+
+	prb_unlock(rb->cpulock, h->cpu);
+
+	if (changed) {
+		atomic_long_inc(&rb->wq_counter);
+		if (wq_has_sleeper(rb->wq)) {
+#ifdef CONFIG_IRQ_WORK
+			irq_work_queue(rb->wq_work);
+#else
+			if (!in_nmi())
+				wake_up_interruptible_all(rb->wq);
+#endif
+		}
+	}
+}
+
+/*
+ * prb_reserve: Reserve an entry within a ring buffer.
+ * @h: An entry handle to be set up to reference an entry.
+ * @rb: A ring buffer to reserve data within.
+ * @size: The number of bytes to reserve.
+ *
+ * Reserve an entry of at least @size bytes to be used by the caller. If
+ * successful, the data region of the entry belongs to the caller and cannot
+ * be invalidated by any other task/context. For this reason, the caller
+ * should call prb_commit() as quickly as possible in order to avoid preventing
+ * other tasks/contexts from reserving data in the case that the ring buffer
+ * has wrapped.
+ *
+ * It is safe to call this function from any context and state.
+ *
+ * Returns a pointer to the reserved entry (and @h is set up to reference that
+ * entry) or NULL if it was not possible to reserve data.
+ */
+char *prb_reserve(struct prb_handle *h, struct printk_ringbuffer *rb,
+		  unsigned int size)
+{
+	unsigned long tail, res1, res2;
+	int ret;
+
+	if (size == 0)
+		return NULL;
+	size += sizeof(struct prb_entry);
+	size += PRB_DATA_ALIGN - 1;
+	size &= ~(PRB_DATA_ALIGN - 1);
+	if (size >= PRB_SIZE(rb))
+		return NULL;
+
+	h->rb = rb;
+	prb_lock(rb->cpulock, &h->cpu);
+
+	atomic_inc(&rb->ctx);
+
+	do {
+		for (;;) {
+			tail = atomic_long_read(&rb->tail);
+			res1 = atomic_long_read(&rb->reserve);
+			ret = calc_next(rb, tail, res1, size, &res2);
+			if (ret >= 0)
+				break;
+			if (!push_tail(rb, tail)) {
+				prb_commit(h);
+				return NULL;
+			}
+		}
+	} while (!atomic_long_try_cmpxchg_acquire(&rb->reserve, &res1, res2));
+
+	h->entry = to_entry(rb, res1);
+
+	if (ret) {
+		/* handle wrap */
+		h->entry->size = -1;
+		h->entry = to_entry(rb, PRB_WRAP_LPOS(rb, res2, 0));
+	}
+
+	h->entry->size = size;
+
+	return &h->entry->data[0];
+}
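
Together with prb_commit() above, this is the complete writer side of the API.
A hedged sketch of the canonical reserve/copy/commit pattern (editorial
illustration only, not part of the patch; @rb is assumed to point at an
already-initialized ring buffer and the function name is invented):

static void example_write(struct printk_ringbuffer *rb,
			  const char *text, unsigned int len)
{
	struct prb_handle h;
	char *buf;

	buf = prb_reserve(&h, rb, len);
	if (!buf) {
		/* reservation failed: leave a gap in the sequence numbers */
		prb_inc_lost(rb);
		return;
	}

	memcpy(buf, text, len);	/* exclusive access until commit */
	prb_commit(&h);		/* data may be overwritten afterwards */
}

Keeping the window between prb_reserve() and prb_commit() short matters:
prb_reserve() takes the processor-reentrant lock (disabling preemption and
local interrupts via prb_lock()) and that lock is only dropped again in
prb_commit().
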
+
+/*
+ * prb_iter_copy: Copy an iterator.
+ * @dest: The iterator to copy to.
+ * @src: The iterator to copy from.
+ *
+ * Make a deep copy of an iterator. This is particularly useful for making
+ * backup copies of an iterator in case a form of rewinding is needed.
+ *
+ * It is safe to call this function from any context and state. But
+ * note that this function is not atomic. Callers should not make copies
+ * to/from iterators that can be accessed by other tasks/contexts.
+ */
+void prb_iter_copy(struct prb_iterator *dest, struct prb_iterator *src)
+{
+	memcpy(dest, src, sizeof(*dest));
+}
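
Because iterators cannot be rewound, prb_iter_copy() is the natural way to peek
ahead without losing the current position. A small sketch (editorial
illustration only; the helper name is invented):

/* Peek at the next record without consuming the current position. */
static int example_peek(struct prb_iterator *iter, char *buf, int size, u64 *seq)
{
	struct prb_iterator backup;
	int ret;

	prb_iter_copy(&backup, iter);		/* snapshot the position */
	ret = prb_iter_next(iter, buf, size, seq);
	if (ret == 1)
		prb_iter_copy(iter, &backup);	/* restore the snapshot */
	return ret;
}
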
+
+/*
+ * prb_iter_init: Initialize an iterator for a ring buffer.
+ * @iter: The iterator to initialize.
+ * @rb: The ring buffer that @iter should iterate.
+ * @seq: The sequence number of the position preceding the first record.
+ *       May be NULL.
+ *
+ * Initialize an iterator to be used with a specified ring buffer. If @seq
+ * is non-NULL, it will be set such that prb_iter_next() will provide a
+ * sequence value of "@seq + 1" if no records were missed.
+ *
+ * It is safe to call this function from any context and state.
+ */
+void prb_iter_init(struct prb_iterator *iter, struct printk_ringbuffer *rb,
+		   u64 *seq)
+{
+	memset(iter, 0, sizeof(*iter));
+	iter->rb = rb;
+	iter->lpos = PRB_INIT;
+
+	if (!seq)
+		return;
+
+	for (;;) {
+		struct prb_iterator tmp_iter;
+		int ret;
+
+		prb_iter_copy(&tmp_iter, iter);
+
+		ret = prb_iter_next(&tmp_iter, NULL, 0, seq);
+		if (ret < 0)
+			continue;
+
+		if (ret == 0)
+			*seq = 0;
+		else
+			(*seq)--;
+		break;
+	}
+}
+
+static bool is_valid(struct printk_ringbuffer *rb, unsigned long lpos)
+{
+	unsigned long head, tail;
+
+	tail = atomic_long_read(&rb->tail);
+	head = atomic_long_read(&rb->head);
+	head -= tail;
+	lpos -= tail;
+
+	if (lpos >= head)
+		return false;
+	return true;
+}
+
+/*
+ * prb_iter_data: Retrieve the record data at the current position.
+ * @iter: Iterator tracking the current position.
+ * @buf: A buffer to store the data of the record. May be NULL.
+ * @size: The size of @buf. (Ignored if @buf is NULL.)
+ * @seq: The sequence number of the record. May be NULL.
+ *
+ * If @iter is at a record, provide the data and/or sequence number of that
+ * record (if specified by the caller).
+ *
+ * It is safe to call this function from any context and state.
+ *
+ * Returns >=0 if the current record contains valid data (returns 0 if @buf
+ * is NULL or returns the size of the data block if @buf is non-NULL) or
+ * -EINVAL if @iter is now invalid.
+ */
+int prb_iter_data(struct prb_iterator *iter, char *buf, int size, u64 *seq)
+{
+	struct printk_ringbuffer *rb = iter->rb;
+	unsigned long lpos = iter->lpos;
+	unsigned int datsize = 0;
+	struct prb_entry *e;
+
+	if (buf || seq) {
+		e = to_entry(rb, lpos);
+		if (!is_valid(rb, lpos))
+			return -EINVAL;
+		/* memory barrier to ensure valid lpos */
+		smp_rmb();
+		if (buf) {
+			datsize = PRB_DATA_SIZE(e);
+			/* memory barrier to ensure load of datsize */
+			smp_rmb();
+			if (!is_valid(rb, lpos))
+				return -EINVAL;
+			if (PRB_INDEX(rb, lpos) + datsize >
+			    PRB_SIZE(rb) - PRB_DATA_ALIGN) {
+				return -EINVAL;
+			}
+			if (size > datsize)
+				size = datsize;
+			memcpy(buf, &e->data[0], size);
+		}
+		if (seq)
+			*seq = e->seq;
+		/* memory barrier to ensure loads of entry data */
+		smp_rmb();
+	}
+
+	if (!is_valid(rb, lpos))
+		return -EINVAL;
+
+	return datsize;
+}
+
+/*
+ * prb_iter_next: Advance to the next record.
+ * @iter: Iterator tracking the current position.
+ * @buf: A buffer to store the data of the next record. May be NULL.
+ * @size: The size of @buf. (Ignored if @buf is NULL.)
+ * @seq: The sequence number of the next record. May be NULL.
+ *
+ * If a next record is available, @iter is advanced and (if specified)
+ * the data and/or sequence number of that record are provided.
+ *
+ * It is safe to call this function from any context and state.
+ *
+ * Returns 1 if @iter was advanced, 0 if @iter is at the end of the list, or
+ * -EINVAL if @iter is now invalid.
+ */
+int prb_iter_next(struct prb_iterator *iter, char *buf, int size, u64 *seq)
+{
+	struct printk_ringbuffer *rb = iter->rb;
+	unsigned long next_lpos;
+	struct prb_entry *e;
+	unsigned int esize;
+
+	if (iter->lpos == PRB_INIT) {
+		next_lpos = atomic_long_read(&rb->tail);
+	} else {
+		if (!is_valid(rb, iter->lpos))
+			return -EINVAL;
+		/* memory barrier to ensure valid lpos */
+		smp_rmb();
+		e = to_entry(rb, iter->lpos);
+		esize = e->size;
+		/* memory barrier to ensure load of size */
+		smp_rmb();
+		if (!is_valid(rb, iter->lpos))
+			return -EINVAL;
+		next_lpos = iter->lpos + esize;
+	}
+	if (next_lpos == atomic_long_read(&rb->head))
+		return 0;
+	if (!is_valid(rb, next_lpos))
+		return -EINVAL;
+	/* memory barrier to ensure valid lpos */
+	smp_rmb();
+
+	iter->lpos = next_lpos;
+	e = to_entry(rb, iter->lpos);
+	esize = e->size;
+	/* memory barrier to ensure load of size */
+	smp_rmb();
+	if (!is_valid(rb, iter->lpos))
+		return -EINVAL;
+	if (esize == -1)
+		iter->lpos = PRB_WRAP_LPOS(rb, iter->lpos, 1);
+
+	if (prb_iter_data(iter, buf, size, seq) < 0)
+		return -EINVAL;
+
+	return 1;
+}
+
+/*
+ * prb_iter_wait_next: Advance to the next record, blocking if none available.
+ * @iter: Iterator tracking the current position.
+ * @buf: A buffer to store the data of the next record. May be NULL.
+ * @size: The size of @buf. (Ignored if @buf is NULL.)
+ * @seq: The sequence number of the next record. May be NULL.
+ *
+ * If a next record is already available, this function works like
+ * prb_iter_next(). Otherwise block interruptibly until a next record is
+ * available.
+ *
+ * When a next record is available, @iter is advanced and (if specified)
+ * the data and/or sequence number of that record are provided.
+ *
+ * This function might sleep.
+ *
+ * Returns 1 if @iter was advanced, -EINVAL if @iter is now invalid, or
+ * -ERESTARTSYS if interrupted by a signal.
+ */
+int prb_iter_wait_next(struct prb_iterator *iter, char *buf, int size, u64 *seq)
+{
+	unsigned long last_seen;
+	int ret;
+
+	for (;;) {
+		last_seen = atomic_long_read(&iter->rb->wq_counter);
+
+		ret = prb_iter_next(iter, buf, size, seq);
+		if (ret != 0)
+			break;
+
+		ret = wait_event_interruptible(*iter->rb->wq,
+			last_seen != atomic_long_read(&iter->rb->wq_counter));
+		if (ret < 0)
+			break;
+	}
+
+	return ret;
+}
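
A typical blocking reader built on the iterator functions above could look like
the following sketch (editorial illustration, not part of the patch; the buffer
size and the re-initialize-on-miss policy are arbitrary choices):

static int example_read_loop(struct printk_ringbuffer *rb)
{
	struct prb_iterator iter;
	char buf[128];
	u64 seq;
	int ret;

	prb_iter_init(&iter, rb, NULL);

	for (;;) {
		ret = prb_iter_wait_next(&iter, buf, sizeof(buf), &seq);
		if (ret == -ERESTARTSYS)
			return ret;		/* interrupted by a signal */
		if (ret == -EINVAL) {
			/* position was overwritten; restart from the tail */
			prb_iter_init(&iter, rb, NULL);
			continue;
		}
		/* buf now holds (a possibly truncated copy of) record @seq */
	}
}

The -EINVAL branch is where overwriting by writers becomes visible to a reader:
the iterator position is no longer valid, so the reader re-initializes and
continues from whatever data is still in the buffer.
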
+
+/*
+ * prb_iter_seek: Seek forward to a specific record.
+ * @iter: Iterator to advance.
+ * @seq: Record number to advance to.
+ *
+ * Advance @iter such that a following call to prb_iter_data() will provide
+ * the contents of the specified record. If a record is specified that does
+ * not yet exist, advance @iter to the end of the record list.
+ *
+ * Note that iterators cannot be rewound. So if a record is requested that
+ * exists but is previous to @iter in position, @iter is considered invalid.
+ *
+ * It is safe to call this function from any context and state.
+ *
+ * Returns 1 on success, 0 if the specified record does not yet exist (@iter is
+ * now at the end of the list), or -EINVAL if @iter is now invalid.
+ */
+int prb_iter_seek(struct prb_iterator *iter, u64 seq)
+{
+	u64 cur_seq;
+	int ret;
+
+	/* first check if the iterator is already at the wanted seq */
+	if (seq == 0) {
+		if (iter->lpos == PRB_INIT)
+			return 1;
+		else
+			return -EINVAL;
+	}
+	if (iter->lpos != PRB_INIT) {
+		if (prb_iter_data(iter, NULL, 0, &cur_seq) >= 0) {
+			if (cur_seq == seq)
+				return 1;
+			if (cur_seq > seq)
+				return -EINVAL;
+		}
+	}
+
+	/* iterate to find the wanted seq */
+	for (;;) {
+		ret = prb_iter_next(iter, NULL, 0, &cur_seq);
+		if (ret <= 0)
+			break;
+
+		if (cur_seq == seq)
+			break;
+
+		if (cur_seq > seq) {
+			ret = -EINVAL;
+			break;
+		}
+	}
+
+	return ret;
+}
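
prb_iter_seek() allows a reader to resume at a remembered sequence number, for
example after being restarted. A brief sketch (illustrative only; the function
name is invented):

static int example_resume(struct prb_iterator *iter,
			  struct printk_ringbuffer *rb, u64 last_seq)
{
	int ret;

	prb_iter_init(iter, rb, NULL);

	ret = prb_iter_seek(iter, last_seq);
	if (ret < 0)
		return ret;	/* record was already overwritten */

	/* ret == 1: positioned on @last_seq; ret == 0: not yet written */
	return 0;
}
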
+
+/*
+ * prb_buffer_size: Get the size of the ring buffer.
+ * @rb: The ring buffer to get the size of.
+ *
+ * Return the number of bytes used for the ring buffer entry storage area.
+ * Note that this area stores both entry header and entry data. Therefore
+ * this represents an upper bound to the amount of data that can be stored
+ * in the ring buffer.
+ *
+ * It is safe to call this function from any context and state.
+ *
+ * Returns the size in bytes of the entry storage area.
+ */
+int prb_buffer_size(struct printk_ringbuffer *rb)
+{
+	return PRB_SIZE(rb);
+}
+
+/*
+ * prb_inc_lost: Increment the seq counter to signal a lost record.
+ * @rb: The ring buffer to increment the seq of.
+ *
+ * Increment the seq counter so that a seq number is intentionally missing
+ * for the readers. This allows readers to identify that a record is
+ * missing. A writer will typically use this function if prb_reserve()
+ * fails.
+ *
+ * It is safe to call this function from any context and state.
+ */
+void prb_inc_lost(struct printk_ringbuffer *rb)
+{
+	atomic_long_inc(&rb->lost);
+}
Index: linux-5.0.19-rt11/lib/radix-tree.c
===================================================================
--- linux-5.0.19-rt11.orig/lib/radix-tree.c
+++ linux-5.0.19-rt11/lib/radix-tree.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:42 @
 #include <linux/slab.h>
 #include <linux/string.h>
 #include <linux/xarray.h>
-
+#include <linux/locallock.h>
 
 /*
  * Radix tree node cache.
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:88 @ struct radix_tree_preload {
 	struct radix_tree_node *nodes;
 };
 static DEFINE_PER_CPU(struct radix_tree_preload, radix_tree_preloads) = { 0, };
+static DEFINE_LOCAL_IRQ_LOCK(radix_tree_preloads_lock);
 
 static inline struct radix_tree_node *entry_to_node(void *ptr)
 {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:286 @ radix_tree_node_alloc(gfp_t gfp_mask, st
 		 * succeed in getting a node here (and never reach
 		 * kmem_cache_alloc)
 		 */
-		rtp = this_cpu_ptr(&radix_tree_preloads);
+		rtp = &get_locked_var(radix_tree_preloads_lock, radix_tree_preloads);
 		if (rtp->nr) {
 			ret = rtp->nodes;
 			rtp->nodes = ret->parent;
 			rtp->nr--;
 		}
+		put_locked_var(radix_tree_preloads_lock, radix_tree_preloads);
 		/*
 		 * Update the allocation stack trace as this is more useful
 		 * for debugging.
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:358 @ static __must_check int __radix_tree_pre
 	 */
 	gfp_mask &= ~__GFP_ACCOUNT;
 
-	preempt_disable();
+	local_lock(radix_tree_preloads_lock);
 	rtp = this_cpu_ptr(&radix_tree_preloads);
 	while (rtp->nr < nr) {
-		preempt_enable();
+		local_unlock(radix_tree_preloads_lock);
 		node = kmem_cache_alloc(radix_tree_node_cachep, gfp_mask);
 		if (node == NULL)
 			goto out;
-		preempt_disable();
+		local_lock(radix_tree_preloads_lock);
 		rtp = this_cpu_ptr(&radix_tree_preloads);
 		if (rtp->nr < nr) {
 			node->parent = rtp->nodes;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:407 @ int radix_tree_maybe_preload(gfp_t gfp_m
 	if (gfpflags_allow_blocking(gfp_mask))
 		return __radix_tree_preload(gfp_mask, RADIX_TREE_PRELOAD_SIZE);
 	/* Preloading doesn't help anything with this gfp mask, skip it */
-	preempt_disable();
+	local_lock(radix_tree_preloads_lock);
 	return 0;
 }
 EXPORT_SYMBOL(radix_tree_maybe_preload);
 
+void radix_tree_preload_end(void)
+{
+	local_unlock(radix_tree_preloads_lock);
+}
+EXPORT_SYMBOL(radix_tree_preload_end);
+
 static unsigned radix_tree_load_root(const struct radix_tree_root *root,
 		struct radix_tree_node **nodep, unsigned long *maxindex)
 {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1502 @ EXPORT_SYMBOL(radix_tree_tagged);
 void idr_preload(gfp_t gfp_mask)
 {
 	if (__radix_tree_preload(gfp_mask, IDR_PRELOAD_SIZE))
-		preempt_disable();
+		local_lock(radix_tree_preloads_lock);
 }
 EXPORT_SYMBOL(idr_preload);
 
+void idr_preload_end(void)
+{
+	local_unlock(radix_tree_preloads_lock);
+}
+EXPORT_SYMBOL(idr_preload_end);
+
 void __rcu **idr_get_free(struct radix_tree_root *root,
 			      struct radix_tree_iter *iter, gfp_t gfp,
 			      unsigned long max)
Index: linux-5.0.19-rt11/lib/scatterlist.c
===================================================================
--- linux-5.0.19-rt11.orig/lib/scatterlist.c
+++ linux-5.0.19-rt11/lib/scatterlist.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:779 @ void sg_miter_stop(struct sg_mapping_ite
 			flush_kernel_dcache_page(miter->page);
 
 		if (miter->__flags & SG_MITER_ATOMIC) {
-			WARN_ON_ONCE(preemptible());
+			WARN_ON_ONCE(!pagefault_disabled());
 			kunmap_atomic(miter->addr);
 		} else
 			kunmap(miter->page);
Index: linux-5.0.19-rt11/lib/smp_processor_id.c
===================================================================
--- linux-5.0.19-rt11.orig/lib/smp_processor_id.c
+++ linux-5.0.19-rt11/lib/smp_processor_id.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:25 @ notrace static unsigned int check_preemp
 	 * Kernel threads bound to a single CPU can safely use
 	 * smp_processor_id():
 	 */
-	if (cpumask_equal(&current->cpus_allowed, cpumask_of(this_cpu)))
+	if (cpumask_equal(current->cpus_ptr, cpumask_of(this_cpu)))
 		goto out;
 
 	/*
Index: linux-5.0.19-rt11/localversion-rt
===================================================================
--- /dev/null
+++ linux-5.0.19-rt11/localversion-rt
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1 @
+-rt16
Index: linux-5.0.19-rt11/mm/Kconfig
===================================================================
--- linux-5.0.19-rt11.orig/mm/Kconfig
+++ linux-5.0.19-rt11/mm/Kconfig
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:375 @ config NOMMU_INITIAL_TRIM_EXCESS
 
 config TRANSPARENT_HUGEPAGE
 	bool "Transparent Hugepage Support"
-	depends on HAVE_ARCH_TRANSPARENT_HUGEPAGE
+	depends on HAVE_ARCH_TRANSPARENT_HUGEPAGE && !PREEMPT_RT_FULL
 	select COMPACTION
 	select XARRAY_MULTI
 	help
Index: linux-5.0.19-rt11/mm/compaction.c
===================================================================
--- linux-5.0.19-rt11.orig/mm/compaction.c
+++ linux-5.0.19-rt11/mm/compaction.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1661 @ check_drain:
 				block_start_pfn(cc->migrate_pfn, cc->order);
 
 			if (cc->last_migrated_pfn < current_block_start) {
-				cpu = get_cpu();
+				cpu = get_cpu_light();
+				local_lock_irq(swapvec_lock);
 				lru_add_drain_cpu(cpu);
+				local_unlock_irq(swapvec_lock);
 				drain_local_pages(zone);
-				put_cpu();
+				put_cpu_light();
 				/* No more flushing until we migrate again */
 				cc->last_migrated_pfn = 0;
 			}
Index: linux-5.0.19-rt11/mm/highmem.c
===================================================================
--- linux-5.0.19-rt11.orig/mm/highmem.c
+++ linux-5.0.19-rt11/mm/highmem.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:33 @
 #include <linux/kgdb.h>
 #include <asm/tlbflush.h>
 
-
+#ifndef CONFIG_PREEMPT_RT_FULL
 #if defined(CONFIG_HIGHMEM) || defined(CONFIG_X86_32)
 DEFINE_PER_CPU(int, __kmap_atomic_idx);
+EXPORT_PER_CPU_SYMBOL(__kmap_atomic_idx);
+#endif
 #endif
 
 /*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:113 @ static inline wait_queue_head_t *get_pkm
 atomic_long_t _totalhigh_pages __read_mostly;
 EXPORT_SYMBOL(_totalhigh_pages);
 
-EXPORT_PER_CPU_SYMBOL(__kmap_atomic_idx);
-
 unsigned int nr_free_highpages (void)
 {
 	struct zone *zone;
Index: linux-5.0.19-rt11/mm/kmemleak.c
===================================================================
--- linux-5.0.19-rt11.orig/mm/kmemleak.c
+++ linux-5.0.19-rt11/mm/kmemleak.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:29 @
  *
  * The following locks and mutexes are used by kmemleak:
  *
- * - kmemleak_lock (rwlock): protects the object_list modifications and
+ * - kmemleak_lock (raw spinlock): protects the object_list modifications and
  *   accesses to the object_tree_root. The object_list is the main list
  *   holding the metadata (struct kmemleak_object) for the allocated memory
  *   blocks. The object_tree_root is a red black tree used to look-up
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:202 @ static LIST_HEAD(gray_list);
 /* search tree for object boundaries */
 static struct rb_root object_tree_root = RB_ROOT;
 /* rw_lock protecting the access to object_list and object_tree_root */
-static DEFINE_RWLOCK(kmemleak_lock);
+static DEFINE_RAW_SPINLOCK(kmemleak_lock);
 
 /* allocation caches for kmemleak internal data */
 static struct kmem_cache *object_cache;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:518 @ static struct kmemleak_object *find_and_
 	struct kmemleak_object *object;
 
 	rcu_read_lock();
-	read_lock_irqsave(&kmemleak_lock, flags);
+	raw_spin_lock_irqsave(&kmemleak_lock, flags);
 	object = lookup_object(ptr, alias);
-	read_unlock_irqrestore(&kmemleak_lock, flags);
+	raw_spin_unlock_irqrestore(&kmemleak_lock, flags);
 
 	/* check whether the object is still available */
 	if (object && !get_object(object))
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:540 @ static struct kmemleak_object *find_and_
 	unsigned long flags;
 	struct kmemleak_object *object;
 
-	write_lock_irqsave(&kmemleak_lock, flags);
+	raw_spin_lock_irqsave(&kmemleak_lock, flags);
 	object = lookup_object(ptr, alias);
 	if (object) {
 		rb_erase(&object->rb_node, &object_tree_root);
 		list_del_rcu(&object->object_list);
 	}
-	write_unlock_irqrestore(&kmemleak_lock, flags);
+	raw_spin_unlock_irqrestore(&kmemleak_lock, flags);
 
 	return object;
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:621 @ static struct kmemleak_object *create_ob
 	/* kernel backtrace */
 	object->trace_len = __save_stack_trace(object->trace);
 
-	write_lock_irqsave(&kmemleak_lock, flags);
+	raw_spin_lock_irqsave(&kmemleak_lock, flags);
 
 	untagged_ptr = (unsigned long)kasan_reset_tag((void *)ptr);
 	min_addr = min(min_addr, untagged_ptr);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:653 @ static struct kmemleak_object *create_ob
 
 	list_add_tail_rcu(&object->object_list, &object_list);
 out:
-	write_unlock_irqrestore(&kmemleak_lock, flags);
+	raw_spin_unlock_irqrestore(&kmemleak_lock, flags);
 	return object;
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1340 @ static void scan_block(void *_start, voi
 	unsigned long flags;
 	unsigned long untagged_ptr;
 
-	read_lock_irqsave(&kmemleak_lock, flags);
+	raw_spin_lock_irqsave(&kmemleak_lock, flags);
 	for (ptr = start; ptr < end; ptr++) {
 		struct kmemleak_object *object;
 		unsigned long pointer;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1398 @ static void scan_block(void *_start, voi
 			spin_unlock(&object->lock);
 		}
 	}
-	read_unlock_irqrestore(&kmemleak_lock, flags);
+	raw_spin_unlock_irqrestore(&kmemleak_lock, flags);
 }
 
 /*
Index: linux-5.0.19-rt11/mm/memcontrol.c
===================================================================
--- linux-5.0.19-rt11.orig/mm/memcontrol.c
+++ linux-5.0.19-rt11/mm/memcontrol.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:72 @
 #include <net/sock.h>
 #include <net/ip.h>
 #include "slab.h"
+#include <linux/locallock.h>
 
 #include <linux/uaccess.h>
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:98 @ int do_swap_account __read_mostly;
 #define do_swap_account		0
 #endif
 
+static DEFINE_LOCAL_IRQ_LOCK(event_lock);
+
 /* Whether legacy memory+swap accounting is active */
 static bool do_memsw_account(void)
 {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2078 @ static void drain_all_stock(struct mem_c
 	 * as well as workers from this path always operate on the local
 	 * per-cpu data. CPU up doesn't touch memcg_stock at all.
 	 */
-	curcpu = get_cpu();
+	curcpu = get_cpu_light();
 	for_each_online_cpu(cpu) {
 		struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu);
 		struct mem_cgroup *memcg;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2098 @ static void drain_all_stock(struct mem_c
 		}
 		css_put(&memcg->css);
 	}
-	put_cpu();
+	put_cpu_light();
 	mutex_unlock(&percpu_charge_mutex);
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4875 @ static int mem_cgroup_move_account(struc
 
 	ret = 0;
 
-	local_irq_disable();
+	local_lock_irq(event_lock);
 	mem_cgroup_charge_statistics(to, page, compound, nr_pages);
 	memcg_check_events(to, page);
 	mem_cgroup_charge_statistics(from, page, compound, -nr_pages);
 	memcg_check_events(from, page);
-	local_irq_enable();
+	local_unlock_irq(event_lock);
 out_unlock:
 	unlock_page(page);
 out:
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:5999 @ void mem_cgroup_commit_charge(struct pag
 
 	commit_charge(page, memcg, lrucare);
 
-	local_irq_disable();
+	local_lock_irq(event_lock);
 	mem_cgroup_charge_statistics(memcg, page, compound, nr_pages);
 	memcg_check_events(memcg, page);
-	local_irq_enable();
+	local_unlock_irq(event_lock);
 
 	if (do_memsw_account() && PageSwapCache(page)) {
 		swp_entry_t entry = { .val = page_private(page) };
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:6071 @ static void uncharge_batch(const struct
 		memcg_oom_recover(ug->memcg);
 	}
 
-	local_irq_save(flags);
+	local_lock_irqsave(event_lock, flags);
 	__mod_memcg_state(ug->memcg, MEMCG_RSS, -ug->nr_anon);
 	__mod_memcg_state(ug->memcg, MEMCG_CACHE, -ug->nr_file);
 	__mod_memcg_state(ug->memcg, MEMCG_RSS_HUGE, -ug->nr_huge);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:6079 @ static void uncharge_batch(const struct
 	__count_memcg_events(ug->memcg, PGPGOUT, ug->pgpgout);
 	__this_cpu_add(ug->memcg->stat_cpu->nr_page_events, nr_pages);
 	memcg_check_events(ug->memcg, ug->dummy_page);
-	local_irq_restore(flags);
+	local_unlock_irqrestore(event_lock, flags);
 
 	if (!mem_cgroup_is_root(ug->memcg))
 		css_put_many(&ug->memcg->css, nr_pages);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:6242 @ void mem_cgroup_migrate(struct page *old
 
 	commit_charge(newpage, memcg, false);
 
-	local_irq_save(flags);
+	local_lock_irqsave(event_lock, flags);
 	mem_cgroup_charge_statistics(memcg, newpage, compound, nr_pages);
 	memcg_check_events(memcg, newpage);
-	local_irq_restore(flags);
+	local_unlock_irqrestore(event_lock, flags);
 }
 
 DEFINE_STATIC_KEY_FALSE(memcg_sockets_enabled_key);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:6437 @ void mem_cgroup_swapout(struct page *pag
 	struct mem_cgroup *memcg, *swap_memcg;
 	unsigned int nr_entries;
 	unsigned short oldid;
+	unsigned long flags;
 
 	VM_BUG_ON_PAGE(PageLRU(page), page);
 	VM_BUG_ON_PAGE(page_count(page), page);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:6483 @ void mem_cgroup_swapout(struct page *pag
 	 * important here to have the interrupts disabled because it is the
 	 * only synchronisation we have for updating the per-CPU variables.
 	 */
+	local_lock_irqsave(event_lock, flags);
+#ifndef CONFIG_PREEMPT_RT_BASE
 	VM_BUG_ON(!irqs_disabled());
+#endif
 	mem_cgroup_charge_statistics(memcg, page, PageTransHuge(page),
 				     -nr_entries);
 	memcg_check_events(memcg, page);
 
 	if (!mem_cgroup_is_root(memcg))
 		css_put_many(&memcg->css, nr_entries);
+	local_unlock_irqrestore(event_lock, flags);
 }
 
 /**
Index: linux-5.0.19-rt11/mm/mmu_context.c
===================================================================
--- linux-5.0.19-rt11.orig/mm/mmu_context.c
+++ linux-5.0.19-rt11/mm/mmu_context.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:28 @ void use_mm(struct mm_struct *mm)
 	struct task_struct *tsk = current;
 
 	task_lock(tsk);
+	preempt_disable_rt();
 	active_mm = tsk->active_mm;
 	if (active_mm != mm) {
 		mmgrab(mm);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:36 @ void use_mm(struct mm_struct *mm)
 	}
 	tsk->mm = mm;
 	switch_mm(active_mm, mm, tsk);
+	preempt_enable_rt();
 	task_unlock(tsk);
 #ifdef finish_arch_post_lock_switch
 	finish_arch_post_lock_switch();
Index: linux-5.0.19-rt11/mm/page_alloc.c
===================================================================
--- linux-5.0.19-rt11.orig/mm/page_alloc.c
+++ linux-5.0.19-rt11/mm/page_alloc.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:63 @
 #include <linux/hugetlb.h>
 #include <linux/sched/rt.h>
 #include <linux/sched/mm.h>
+#include <linux/locallock.h>
 #include <linux/page_owner.h>
 #include <linux/kthread.h>
 #include <linux/memcontrol.h>
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:312 @ EXPORT_SYMBOL(nr_node_ids);
 EXPORT_SYMBOL(nr_online_nodes);
 #endif
 
+static DEFINE_LOCAL_IRQ_LOCK(pa_lock);
+
+#ifdef CONFIG_PREEMPT_RT_BASE
+# define cpu_lock_irqsave(cpu, flags)		\
+	local_lock_irqsave_on(pa_lock, flags, cpu)
+# define cpu_unlock_irqrestore(cpu, flags)	\
+	local_unlock_irqrestore_on(pa_lock, flags, cpu)
+#else
+# define cpu_lock_irqsave(cpu, flags)		local_irq_save(flags)
+# define cpu_unlock_irqrestore(cpu, flags)	local_irq_restore(flags)
+#endif
+
 int page_group_by_mobility_disabled __read_mostly;
 
 #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1141 @ static inline void prefetch_buddy(struct
 }
 
 /*
- * Frees a number of pages from the PCP lists
+ * Frees a number of pages which have been collected from the pcp lists.
  * Assumes all pages on list are in same zone, and of same order.
  * count is the number of pages to free.
  *
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1151 @ static inline void prefetch_buddy(struct
  * And clear the zone's pages_scanned counter, to hold off the "all pages are
  * pinned" detection logic.
  */
-static void free_pcppages_bulk(struct zone *zone, int count,
-					struct per_cpu_pages *pcp)
+static void free_pcppages_bulk(struct zone *zone, struct list_head *head,
+			       bool zone_retry)
+{
+	bool isolated_pageblocks;
+	struct page *page, *tmp;
+	unsigned long flags;
+
+	spin_lock_irqsave(&zone->lock, flags);
+	isolated_pageblocks = has_isolate_pageblock(zone);
+
+	/*
+	 * Use safe version since after __free_one_page(),
+	 * page->lru.next will not point to original list.
+	 */
+	list_for_each_entry_safe(page, tmp, head, lru) {
+		int mt = get_pcppage_migratetype(page);
+
+		if (page_zone(page) != zone) {
+			/*
+			 * free_unref_page_list() sorts pages by zone. If we end
+			 * up with pages from different NUMA nodes belonging
+			 * to the same ZONE index then we need to redo with the
+			 * correct ZONE pointer. Skip the page for now, redo it
+			 * on the next iteration.
+			 */
+			WARN_ON_ONCE(zone_retry == false);
+			if (zone_retry)
+				continue;
+		}
+
+		/* MIGRATE_ISOLATE page should not go to pcplists */
+		VM_BUG_ON_PAGE(is_migrate_isolate(mt), page);
+		/* Pageblock could have been isolated meanwhile */
+		if (unlikely(isolated_pageblocks))
+			mt = get_pageblock_migratetype(page);
+
+		list_del(&page->lru);
+		__free_one_page(page, page_to_pfn(page), zone, 0, mt);
+		trace_mm_page_pcpu_drain(page, 0, mt);
+	}
+	spin_unlock_irqrestore(&zone->lock, flags);
+}
+
+static void isolate_pcp_pages(int count, struct per_cpu_pages *pcp,
+			      struct list_head *dst)
+
 {
 	int migratetype = 0;
 	int batch_free = 0;
 	int prefetch_nr = 0;
-	bool isolated_pageblocks;
-	struct page *page, *tmp;
-	LIST_HEAD(head);
+	struct page *page;
 
 	while (count) {
 		struct list_head *list;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1233 @ static void free_pcppages_bulk(struct zo
 			if (bulkfree_pcp_prepare(page))
 				continue;
 
-			list_add_tail(&page->lru, &head);
+			list_add_tail(&page->lru, dst);
 
 			/*
 			 * We are going to put the page back to the global
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1248 @ static void free_pcppages_bulk(struct zo
 				prefetch_buddy(page);
 		} while (--count && --batch_free && !list_empty(list));
 	}
-
-	spin_lock(&zone->lock);
-	isolated_pageblocks = has_isolate_pageblock(zone);
-
-	/*
-	 * Use safe version since after __free_one_page(),
-	 * page->lru.next will not point to original list.
-	 */
-	list_for_each_entry_safe(page, tmp, &head, lru) {
-		int mt = get_pcppage_migratetype(page);
-		/* MIGRATE_ISOLATE page should not go to pcplists */
-		VM_BUG_ON_PAGE(is_migrate_isolate(mt), page);
-		/* Pageblock could have been isolated meanwhile */
-		if (unlikely(isolated_pageblocks))
-			mt = get_pageblock_migratetype(page);
-
-		__free_one_page(page, page_to_pfn(page), zone, 0, mt);
-		trace_mm_page_pcpu_drain(page, 0, mt);
-	}
-	spin_unlock(&zone->lock);
 }
 
 static void free_one_page(struct zone *zone,
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1348 @ static void __free_pages_ok(struct page
 		return;
 
 	migratetype = get_pfnblock_migratetype(page, pfn);
-	local_irq_save(flags);
+	local_lock_irqsave(pa_lock, flags);
 	__count_vm_events(PGFREE, 1 << order);
 	free_one_page(page_zone(page), page, pfn, order, migratetype);
-	local_irq_restore(flags);
+	local_unlock_irqrestore(pa_lock, flags);
 }
 
 static void __init __free_pages_boot_core(struct page *page, unsigned int order)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2659 @ void drain_zone_pages(struct zone *zone,
 {
 	unsigned long flags;
 	int to_drain, batch;
+	LIST_HEAD(dst);
 
-	local_irq_save(flags);
+	local_lock_irqsave(pa_lock, flags);
 	batch = READ_ONCE(pcp->batch);
 	to_drain = min(pcp->count, batch);
 	if (to_drain > 0)
-		free_pcppages_bulk(zone, to_drain, pcp);
-	local_irq_restore(flags);
+		isolate_pcp_pages(to_drain, pcp, &dst);
+
+	local_unlock_irqrestore(pa_lock, flags);
+
+	if (to_drain > 0)
+		free_pcppages_bulk(zone, &dst, false);
 }
 #endif
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2686 @ static void drain_pages_zone(unsigned in
 	unsigned long flags;
 	struct per_cpu_pageset *pset;
 	struct per_cpu_pages *pcp;
+	LIST_HEAD(dst);
+	int count;
 
-	local_irq_save(flags);
+	cpu_lock_irqsave(cpu, flags);
 	pset = per_cpu_ptr(zone->pageset, cpu);
 
 	pcp = &pset->pcp;
-	if (pcp->count)
-		free_pcppages_bulk(zone, pcp->count, pcp);
-	local_irq_restore(flags);
+	count = pcp->count;
+	if (count)
+		isolate_pcp_pages(count, pcp, &dst);
+
+	cpu_unlock_irqrestore(cpu, flags);
+
+	if (count)
+		free_pcppages_bulk(zone, &dst, false);
 }
 
 /*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2735 @ void drain_local_pages(struct zone *zone
 		drain_pages(cpu);
 }
 
+#ifndef CONFIG_PREEMPT_RT_BASE
 static void drain_local_pages_wq(struct work_struct *work)
 {
 	struct pcpu_drain *drain;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2753 @ static void drain_local_pages_wq(struct
 	drain_local_pages(drain->zone);
 	preempt_enable();
 }
+#endif
 
 /*
  * Spill all the per-cpu pages from all CPUs back into the buddy allocator.
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2821 @ void drain_all_pages(struct zone *zone)
 			cpumask_clear_cpu(cpu, &cpus_with_pcps);
 	}
 
+#ifdef CONFIG_PREEMPT_RT_BASE
+	for_each_cpu(cpu, &cpus_with_pcps) {
+		if (zone)
+			drain_pages_zone(cpu, zone);
+		else
+			drain_pages(cpu);
+	}
+#else
 	for_each_cpu(cpu, &cpus_with_pcps) {
 		struct pcpu_drain *drain = per_cpu_ptr(&pcpu_drain, cpu);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2838 @ void drain_all_pages(struct zone *zone)
 	}
 	for_each_cpu(cpu, &cpus_with_pcps)
 		flush_work(&per_cpu_ptr(&pcpu_drain, cpu)->work);
+#endif
 
 	mutex_unlock(&pcpu_drain_mutex);
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2910 @ static bool free_unref_page_prepare(stru
 	return true;
 }
 
-static void free_unref_page_commit(struct page *page, unsigned long pfn)
+static void free_unref_page_commit(struct page *page, unsigned long pfn,
+				   struct list_head *dst)
 {
 	struct zone *zone = page_zone(page);
 	struct per_cpu_pages *pcp;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2940 @ static void free_unref_page_commit(struc
 	pcp->count++;
 	if (pcp->count >= pcp->high) {
 		unsigned long batch = READ_ONCE(pcp->batch);
-		free_pcppages_bulk(zone, batch, pcp);
+
+		isolate_pcp_pages(batch, pcp, dst);
 	}
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2952 @ void free_unref_page(struct page *page)
 {
 	unsigned long flags;
 	unsigned long pfn = page_to_pfn(page);
+	struct zone *zone = page_zone(page);
+	LIST_HEAD(dst);
 
 	if (!free_unref_page_prepare(page, pfn))
 		return;
 
-	local_irq_save(flags);
-	free_unref_page_commit(page, pfn);
-	local_irq_restore(flags);
+	local_lock_irqsave(pa_lock, flags);
+	free_unref_page_commit(page, pfn, &dst);
+	local_unlock_irqrestore(pa_lock, flags);
+	if (!list_empty(&dst))
+		free_pcppages_bulk(zone, &dst, false);
 }
 
 /*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2973 @ void free_unref_page_list(struct list_he
 	struct page *page, *next;
 	unsigned long flags, pfn;
 	int batch_count = 0;
+	struct list_head dsts[__MAX_NR_ZONES];
+	int i;
+
+	for (i = 0; i < __MAX_NR_ZONES; i++)
+		INIT_LIST_HEAD(&dsts[i]);
 
 	/* Prepare pages for freeing */
 	list_for_each_entry_safe(page, next, list, lru) {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2987 @ void free_unref_page_list(struct list_he
 		set_page_private(page, pfn);
 	}
 
-	local_irq_save(flags);
+	local_lock_irqsave(pa_lock, flags);
 	list_for_each_entry_safe(page, next, list, lru) {
 		unsigned long pfn = page_private(page);
+		enum zone_type type;
 
 		set_page_private(page, 0);
 		trace_mm_page_free_batched(page);
-		free_unref_page_commit(page, pfn);
+		type = page_zonenum(page);
+		free_unref_page_commit(page, pfn, &dsts[type]);
 
 		/*
 		 * Guard against excessive IRQ disabled times when we get
 		 * a large list of pages to free.
 		 */
 		if (++batch_count == SWAP_CLUSTER_MAX) {
-			local_irq_restore(flags);
+			local_unlock_irqrestore(pa_lock, flags);
 			batch_count = 0;
-			local_irq_save(flags);
+			local_lock_irqsave(pa_lock, flags);
 		}
 	}
-	local_irq_restore(flags);
+	local_unlock_irqrestore(pa_lock, flags);
+
+	for (i = 0; i < __MAX_NR_ZONES; ) {
+		struct page *page;
+		struct zone *zone;
+
+		if (list_empty(&dsts[i])) {
+			i++;
+			continue;
+		}
+
+		page = list_first_entry(&dsts[i], struct page, lru);
+		zone = page_zone(page);
+
+		free_pcppages_bulk(zone, &dsts[i], true);
+	}
 }
 
 /*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3158 @ static struct page *rmqueue_pcplist(stru
 	struct page *page;
 	unsigned long flags;
 
-	local_irq_save(flags);
+	local_lock_irqsave(pa_lock, flags);
 	pcp = &this_cpu_ptr(zone->pageset)->pcp;
 	list = &pcp->lists[migratetype];
 	page = __rmqueue_pcplist(zone,  migratetype, alloc_flags, pcp, list);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3166 @ static struct page *rmqueue_pcplist(stru
 		__count_zid_vm_events(PGALLOC, page_zonenum(page), 1 << order);
 		zone_statistics(preferred_zone, zone);
 	}
-	local_irq_restore(flags);
+	local_unlock_irqrestore(pa_lock, flags);
 	return page;
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3193 @ struct page *rmqueue(struct zone *prefer
 	 * allocate greater than order-1 page units with __GFP_NOFAIL.
 	 */
 	WARN_ON_ONCE((gfp_flags & __GFP_NOFAIL) && (order > 1));
-	spin_lock_irqsave(&zone->lock, flags);
+	local_spin_lock_irqsave(pa_lock, &zone->lock, flags);
 
 	do {
 		page = NULL;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3213 @ struct page *rmqueue(struct zone *prefer
 
 	__count_zid_vm_events(PGALLOC, page_zonenum(page), 1 << order);
 	zone_statistics(preferred_zone, zone);
-	local_irq_restore(flags);
+	local_unlock_irqrestore(pa_lock, flags);
 
 out:
 	/* Separate test+clear to avoid unnecessary atomics */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3226 @ out:
 	return page;
 
 failed:
-	local_irq_restore(flags);
+	local_unlock_irqrestore(pa_lock, flags);
 	return NULL;
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:7462 @ void __init free_area_init(unsigned long
 
 static int page_alloc_cpu_dead(unsigned int cpu)
 {
-
+	local_lock_irq_on(swapvec_lock, cpu);
 	lru_add_drain_cpu(cpu);
+	local_unlock_irq_on(swapvec_lock, cpu);
 	drain_pages(cpu);
 
 	/*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:8397 @ void zone_pcp_reset(struct zone *zone)
 	struct per_cpu_pageset *pset;
 
 	/* avoid races with drain_pages()  */
-	local_irq_save(flags);
+	local_lock_irqsave(pa_lock, flags);
 	if (zone->pageset != &boot_pageset) {
 		for_each_online_cpu(cpu) {
 			pset = per_cpu_ptr(zone->pageset, cpu);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:8406 @ void zone_pcp_reset(struct zone *zone)
 		free_percpu(zone->pageset);
 		zone->pageset = &boot_pageset;
 	}
-	local_irq_restore(flags);
+	local_unlock_irqrestore(pa_lock, flags);
 }
 
 #ifdef CONFIG_MEMORY_HOTREMOVE
Index: linux-5.0.19-rt11/mm/slab.c
===================================================================
--- linux-5.0.19-rt11.orig/mm/slab.c
+++ linux-5.0.19-rt11/mm/slab.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:236 @ static void kmem_cache_node_init(struct
 	parent->shared = NULL;
 	parent->alien = NULL;
 	parent->colour_next = 0;
-	spin_lock_init(&parent->list_lock);
+	raw_spin_lock_init(&parent->list_lock);
 	parent->free_objects = 0;
 	parent->free_touched = 0;
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:590 @ static noinline void cache_free_pfmemall
 	page_node = page_to_nid(page);
 	n = get_node(cachep, page_node);
 
-	spin_lock(&n->list_lock);
+	raw_spin_lock(&n->list_lock);
 	free_block(cachep, &objp, 1, page_node, &list);
-	spin_unlock(&n->list_lock);
+	raw_spin_unlock(&n->list_lock);
 
 	slabs_destroy(cachep, &list);
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:721 @ static void __drain_alien_cache(struct k
 	struct kmem_cache_node *n = get_node(cachep, node);
 
 	if (ac->avail) {
-		spin_lock(&n->list_lock);
+		raw_spin_lock(&n->list_lock);
 		/*
 		 * Stuff objects into the remote nodes shared array first.
 		 * That way we could avoid the overhead of putting the objects
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:732 @ static void __drain_alien_cache(struct k
 
 		free_block(cachep, ac->entry, ac->avail, node, list);
 		ac->avail = 0;
-		spin_unlock(&n->list_lock);
+		raw_spin_unlock(&n->list_lock);
 	}
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:805 @ static int __cache_free_alien(struct kme
 		slabs_destroy(cachep, &list);
 	} else {
 		n = get_node(cachep, page_node);
-		spin_lock(&n->list_lock);
+		raw_spin_lock(&n->list_lock);
 		free_block(cachep, &objp, 1, page_node, &list);
-		spin_unlock(&n->list_lock);
+		raw_spin_unlock(&n->list_lock);
 		slabs_destroy(cachep, &list);
 	}
 	return 1;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:848 @ static int init_cache_node(struct kmem_c
 	 */
 	n = get_node(cachep, node);
 	if (n) {
-		spin_lock_irq(&n->list_lock);
+		raw_spin_lock_irq(&n->list_lock);
 		n->free_limit = (1 + nr_cpus_node(node)) * cachep->batchcount +
 				cachep->num;
-		spin_unlock_irq(&n->list_lock);
+		raw_spin_unlock_irq(&n->list_lock);
 
 		return 0;
 	}
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:930 @ static int setup_kmem_cache_node(struct
 		goto fail;
 
 	n = get_node(cachep, node);
-	spin_lock_irq(&n->list_lock);
+	raw_spin_lock_irq(&n->list_lock);
 	if (n->shared && force_change) {
 		free_block(cachep, n->shared->entry,
 				n->shared->avail, node, &list);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:948 @ static int setup_kmem_cache_node(struct
 		new_alien = NULL;
 	}
 
-	spin_unlock_irq(&n->list_lock);
+	raw_spin_unlock_irq(&n->list_lock);
 	slabs_destroy(cachep, &list);
 
 	/*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:987 @ static void cpuup_canceled(long cpu)
 		if (!n)
 			continue;
 
-		spin_lock_irq(&n->list_lock);
+		raw_spin_lock_irq(&n->list_lock);
 
 		/* Free limit for this kmem_cache_node */
 		n->free_limit -= cachep->batchcount;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1000 @ static void cpuup_canceled(long cpu)
 		}
 
 		if (!cpumask_empty(mask)) {
-			spin_unlock_irq(&n->list_lock);
+			raw_spin_unlock_irq(&n->list_lock);
 			goto free_slab;
 		}
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1014 @ static void cpuup_canceled(long cpu)
 		alien = n->alien;
 		n->alien = NULL;
 
-		spin_unlock_irq(&n->list_lock);
+		raw_spin_unlock_irq(&n->list_lock);
 
 		kfree(shared);
 		if (alien) {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1198 @ static void __init init_list(struct kmem
 	/*
 	 * Do not assume that spinlocks can be initialized via memcpy:
 	 */
-	spin_lock_init(&ptr->list_lock);
+	raw_spin_lock_init(&ptr->list_lock);
 
 	MAKE_ALL_LISTS(cachep, ptr, nodeid);
 	cachep->node[nodeid] = ptr;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1369 @ slab_out_of_memory(struct kmem_cache *ca
 	for_each_kmem_cache_node(cachep, node, n) {
 		unsigned long total_slabs, free_slabs, free_objs;
 
-		spin_lock_irqsave(&n->list_lock, flags);
+		raw_spin_lock_irqsave(&n->list_lock, flags);
 		total_slabs = n->total_slabs;
 		free_slabs = n->free_slabs;
 		free_objs = n->free_objects;
-		spin_unlock_irqrestore(&n->list_lock, flags);
+		raw_spin_unlock_irqrestore(&n->list_lock, flags);
 
 		pr_warn("  node %d: slabs: %ld/%ld, objs: %ld/%ld\n",
 			node, total_slabs - free_slabs, total_slabs,
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2168 @ static void check_spinlock_acquired(stru
 {
 #ifdef CONFIG_SMP
 	check_irq_off();
-	assert_spin_locked(&get_node(cachep, numa_mem_id())->list_lock);
+	assert_raw_spin_locked(&get_node(cachep, numa_mem_id())->list_lock);
 #endif
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2176 @ static void check_spinlock_acquired_node
 {
 #ifdef CONFIG_SMP
 	check_irq_off();
-	assert_spin_locked(&get_node(cachep, node)->list_lock);
+	assert_raw_spin_locked(&get_node(cachep, node)->list_lock);
 #endif
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2216 @ static void do_drain(void *arg)
 	check_irq_off();
 	ac = cpu_cache_get(cachep);
 	n = get_node(cachep, node);
-	spin_lock(&n->list_lock);
+	raw_spin_lock(&n->list_lock);
 	free_block(cachep, ac->entry, ac->avail, node, &list);
-	spin_unlock(&n->list_lock);
+	raw_spin_unlock(&n->list_lock);
 	slabs_destroy(cachep, &list);
 	ac->avail = 0;
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2236 @ static void drain_cpu_caches(struct kmem
 			drain_alien_cache(cachep, n->alien);
 
 	for_each_kmem_cache_node(cachep, node, n) {
-		spin_lock_irq(&n->list_lock);
+		raw_spin_lock_irq(&n->list_lock);
 		drain_array_locked(cachep, n->shared, node, true, &list);
-		spin_unlock_irq(&n->list_lock);
+		raw_spin_unlock_irq(&n->list_lock);
 
 		slabs_destroy(cachep, &list);
 	}
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2260 @ static int drain_freelist(struct kmem_ca
 	nr_freed = 0;
 	while (nr_freed < tofree && !list_empty(&n->slabs_free)) {
 
-		spin_lock_irq(&n->list_lock);
+		raw_spin_lock_irq(&n->list_lock);
 		p = n->slabs_free.prev;
 		if (p == &n->slabs_free) {
-			spin_unlock_irq(&n->list_lock);
+			raw_spin_unlock_irq(&n->list_lock);
 			goto out;
 		}
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2276 @ static int drain_freelist(struct kmem_ca
 		 * to the cache.
 		 */
 		n->free_objects -= cache->num;
-		spin_unlock_irq(&n->list_lock);
+		raw_spin_unlock_irq(&n->list_lock);
 		slab_destroy(cache, page);
 		nr_freed++;
 	}
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2730 @ static void cache_grow_end(struct kmem_c
 	INIT_LIST_HEAD(&page->lru);
 	n = get_node(cachep, page_to_nid(page));
 
-	spin_lock(&n->list_lock);
+	raw_spin_lock(&n->list_lock);
 	n->total_slabs++;
 	if (!page->active) {
 		list_add_tail(&page->lru, &(n->slabs_free));
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2740 @ static void cache_grow_end(struct kmem_c
 
 	STATS_INC_GROWN(cachep);
 	n->free_objects += cachep->num - page->active;
-	spin_unlock(&n->list_lock);
+	raw_spin_unlock(&n->list_lock);
 
 	fixup_objfreelist_debug(cachep, &list);
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2908 @ static struct page *get_first_slab(struc
 {
 	struct page *page;
 
-	assert_spin_locked(&n->list_lock);
+	assert_raw_spin_locked(&n->list_lock);
 	page = list_first_entry_or_null(&n->slabs_partial, struct page, lru);
 	if (!page) {
 		n->free_touched = 1;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2934 @ static noinline void *cache_alloc_pfmema
 	if (!gfp_pfmemalloc_allowed(flags))
 		return NULL;
 
-	spin_lock(&n->list_lock);
+	raw_spin_lock(&n->list_lock);
 	page = get_first_slab(n, true);
 	if (!page) {
-		spin_unlock(&n->list_lock);
+		raw_spin_unlock(&n->list_lock);
 		return NULL;
 	}
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2946 @ static noinline void *cache_alloc_pfmema
 
 	fixup_slab_list(cachep, n, page, &list);
 
-	spin_unlock(&n->list_lock);
+	raw_spin_unlock(&n->list_lock);
 	fixup_objfreelist_debug(cachep, &list);
 
 	return obj;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3005 @ static void *cache_alloc_refill(struct k
 	if (!n->free_objects && (!shared || !shared->avail))
 		goto direct_grow;
 
-	spin_lock(&n->list_lock);
+	raw_spin_lock(&n->list_lock);
 	shared = READ_ONCE(n->shared);
 
 	/* See if we can refill from the shared array */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3029 @ static void *cache_alloc_refill(struct k
 must_grow:
 	n->free_objects -= ac->avail;
 alloc_done:
-	spin_unlock(&n->list_lock);
+	raw_spin_unlock(&n->list_lock);
 	fixup_objfreelist_debug(cachep, &list);
 
 direct_grow:
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3254 @ static void *____cache_alloc_node(struct
 	BUG_ON(!n);
 
 	check_irq_off();
-	spin_lock(&n->list_lock);
+	raw_spin_lock(&n->list_lock);
 	page = get_first_slab(n, false);
 	if (!page)
 		goto must_grow;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3272 @ static void *____cache_alloc_node(struct
 
 	fixup_slab_list(cachep, n, page, &list);
 
-	spin_unlock(&n->list_lock);
+	raw_spin_unlock(&n->list_lock);
 	fixup_objfreelist_debug(cachep, &list);
 	return obj;
 
 must_grow:
-	spin_unlock(&n->list_lock);
+	raw_spin_unlock(&n->list_lock);
 	page = cache_grow_begin(cachep, gfp_exact_node(flags), nodeid);
 	if (page) {
 		/* This slab isn't counted yet so don't update free_objects */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3453 @ static void cache_flusharray(struct kmem
 
 	check_irq_off();
 	n = get_node(cachep, node);
-	spin_lock(&n->list_lock);
+	raw_spin_lock(&n->list_lock);
 	if (n->shared) {
 		struct array_cache *shared_array = n->shared;
 		int max = shared_array->limit - shared_array->avail;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3482 @ free_done:
 		STATS_SET_FREEABLE(cachep, i);
 	}
 #endif
-	spin_unlock(&n->list_lock);
+	raw_spin_unlock(&n->list_lock);
 	slabs_destroy(cachep, &list);
 	ac->avail -= batchcount;
 	memmove(ac->entry, &(ac->entry[batchcount]), sizeof(void *)*ac->avail);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3890 @ static int __do_tune_cpucache(struct kme
 
 		node = cpu_to_mem(cpu);
 		n = get_node(cachep, node);
-		spin_lock_irq(&n->list_lock);
+		raw_spin_lock_irq(&n->list_lock);
 		free_block(cachep, ac->entry, ac->avail, node, &list);
-		spin_unlock_irq(&n->list_lock);
+		raw_spin_unlock_irq(&n->list_lock);
 		slabs_destroy(cachep, &list);
 	}
 	free_percpu(prev);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4017 @ static void drain_array(struct kmem_cach
 		return;
 	}
 
-	spin_lock_irq(&n->list_lock);
+	raw_spin_lock_irq(&n->list_lock);
 	drain_array_locked(cachep, ac, node, false, &list);
-	spin_unlock_irq(&n->list_lock);
+	raw_spin_unlock_irq(&n->list_lock);
 
 	slabs_destroy(cachep, &list);
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4103 @ void get_slabinfo(struct kmem_cache *cac
 
 	for_each_kmem_cache_node(cachep, node, n) {
 		check_irq_on();
-		spin_lock_irq(&n->list_lock);
+		raw_spin_lock_irq(&n->list_lock);
 
 		total_slabs += n->total_slabs;
 		free_slabs += n->free_slabs;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4112 @ void get_slabinfo(struct kmem_cache *cac
 		if (n->shared)
 			shared_avail += n->shared->avail;
 
-		spin_unlock_irq(&n->list_lock);
+		raw_spin_unlock_irq(&n->list_lock);
 	}
 	num_objs = total_slabs * cachep->num;
 	active_slabs = total_slabs - free_slabs;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4328 @ static int leaks_show(struct seq_file *m
 		for_each_kmem_cache_node(cachep, node, n) {
 
 			check_irq_on();
-			spin_lock_irq(&n->list_lock);
+			raw_spin_lock_irq(&n->list_lock);
 
 			list_for_each_entry(page, &n->slabs_full, lru)
 				handle_slab(x, cachep, page);
 			list_for_each_entry(page, &n->slabs_partial, lru)
 				handle_slab(x, cachep, page);
-			spin_unlock_irq(&n->list_lock);
+			raw_spin_unlock_irq(&n->list_lock);
 		}
 	} while (!is_store_user_clean(cachep));
 
Index: linux-5.0.19-rt11/mm/slab.h
===================================================================
--- linux-5.0.19-rt11.orig/mm/slab.h
+++ linux-5.0.19-rt11/mm/slab.h
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:456 @ static inline void slab_post_alloc_hook(
  * The slab lists for all objects.
  */
 struct kmem_cache_node {
-	spinlock_t list_lock;
+	raw_spinlock_t list_lock;
 
 #ifdef CONFIG_SLAB
 	struct list_head slabs_partial;	/* partial list first, better asm code */
Index: linux-5.0.19-rt11/mm/slub.c
===================================================================
--- linux-5.0.19-rt11.orig/mm/slub.c
+++ linux-5.0.19-rt11/mm/slub.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1187 @ static noinline int free_debug_processin
 	unsigned long uninitialized_var(flags);
 	int ret = 0;
 
-	spin_lock_irqsave(&n->list_lock, flags);
+	raw_spin_lock_irqsave(&n->list_lock, flags);
 	slab_lock(page);
 
 	if (s->flags & SLAB_CONSISTENCY_CHECKS) {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1222 @ out:
 			 bulk_cnt, cnt);
 
 	slab_unlock(page);
-	spin_unlock_irqrestore(&n->list_lock, flags);
+	raw_spin_unlock_irqrestore(&n->list_lock, flags);
 	if (!ret)
 		slab_fix(s, "Object at 0x%p not freed", object);
 	return ret;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1390 @ static inline void dec_slabs_node(struct
 
 #endif /* CONFIG_SLUB_DEBUG */
 
+struct slub_free_list {
+	raw_spinlock_t		lock;
+	struct list_head	list;
+};
+static DEFINE_PER_CPU(struct slub_free_list, slub_free_list);
+
 /*
  * Hooks for other subsystems that check memory allocations. In a typical
  * production configuration these hooks all should produce no code at all.
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1633 @ static struct page *allocate_slab(struct
 	void *start, *p, *next;
 	int idx, order;
 	bool shuffle;
+	bool enableirqs = false;
 
 	flags &= gfp_allowed_mask;
 
 	if (gfpflags_allow_blocking(flags))
+		enableirqs = true;
+
+#ifdef CONFIG_PREEMPT_RT_FULL
+	if (system_state > SYSTEM_BOOTING)
+		enableirqs = true;
+#endif
+	if (enableirqs)
 		local_irq_enable();
 
 	flags |= s->allocflags;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1704 @ static struct page *allocate_slab(struct
 	page->frozen = 1;
 
 out:
-	if (gfpflags_allow_blocking(flags))
+	if (enableirqs)
 		local_irq_disable();
 	if (!page)
 		return NULL;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1762 @ static void __free_slab(struct kmem_cach
 	__free_pages(page, order);
 }
 
+static void free_delayed(struct list_head *h)
+{
+	while (!list_empty(h)) {
+		struct page *page = list_first_entry(h, struct page, lru);
+
+		list_del(&page->lru);
+		__free_slab(page->slab_cache, page);
+	}
+}
+
 static void rcu_free_slab(struct rcu_head *h)
 {
 	struct page *page = container_of(h, struct page, rcu_head);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1783 @ static void free_slab(struct kmem_cache
 {
 	if (unlikely(s->flags & SLAB_TYPESAFE_BY_RCU)) {
 		call_rcu(&page->rcu_head, rcu_free_slab);
+	} else if (irqs_disabled()) {
+		struct slub_free_list *f = this_cpu_ptr(&slub_free_list);
+
+		raw_spin_lock(&f->lock);
+		list_add(&page->lru, &f->list);
+		raw_spin_unlock(&f->lock);
 	} else
 		__free_slab(s, page);
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1896 @ static void *get_partial_node(struct kme
 	if (!n || !n->nr_partial)
 		return NULL;
 
-	spin_lock(&n->list_lock);
+	raw_spin_lock(&n->list_lock);
 	list_for_each_entry_safe(page, page2, &n->partial, lru) {
 		void *t;
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1921 @ static void *get_partial_node(struct kme
 			break;
 
 	}
-	spin_unlock(&n->list_lock);
+	raw_spin_unlock(&n->list_lock);
 	return object;
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2167 @ redo:
 			 * that acquire_slab() will see a slab page that
 			 * is frozen
 			 */
-			spin_lock(&n->list_lock);
+			raw_spin_lock(&n->list_lock);
 		}
 	} else {
 		m = M_FULL;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2178 @ redo:
 			 * slabs from diagnostic functions will not see
 			 * any frozen slabs.
 			 */
-			spin_lock(&n->list_lock);
+			raw_spin_lock(&n->list_lock);
 		}
 	}
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2202 @ redo:
 		goto redo;
 
 	if (lock)
-		spin_unlock(&n->list_lock);
+		raw_spin_unlock(&n->list_lock);
 
 	if (m == M_PARTIAL)
 		stat(s, tail);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2241 @ static void unfreeze_partials(struct kme
 		n2 = get_node(s, page_to_nid(page));
 		if (n != n2) {
 			if (n)
-				spin_unlock(&n->list_lock);
+				raw_spin_unlock(&n->list_lock);
 
 			n = n2;
-			spin_lock(&n->list_lock);
+			raw_spin_lock(&n->list_lock);
 		}
 
 		do {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2273 @ static void unfreeze_partials(struct kme
 	}
 
 	if (n)
-		spin_unlock(&n->list_lock);
+		raw_spin_unlock(&n->list_lock);
 
 	while (discard_page) {
 		page = discard_page;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2310 @ static void put_cpu_partial(struct kmem_
 			pobjects = oldpage->pobjects;
 			pages = oldpage->pages;
 			if (drain && pobjects > s->cpu_partial) {
+				struct slub_free_list *f;
 				unsigned long flags;
+				LIST_HEAD(tofree);
 				/*
 				 * partial array is full. Move the existing
 				 * set to the per node partial list.
 				 */
 				local_irq_save(flags);
 				unfreeze_partials(s, this_cpu_ptr(s->cpu_slab));
+				f = this_cpu_ptr(&slub_free_list);
+				raw_spin_lock(&f->lock);
+				list_splice_init(&f->list, &tofree);
+				raw_spin_unlock(&f->lock);
 				local_irq_restore(flags);
+				free_delayed(&tofree);
 				oldpage = NULL;
 				pobjects = 0;
 				pages = 0;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2392 @ static bool has_cpu_slab(int cpu, void *
 
 static void flush_all(struct kmem_cache *s)
 {
+	LIST_HEAD(tofree);
+	int cpu;
+
 	on_each_cpu_cond(has_cpu_slab, flush_cpu_slab, s, 1, GFP_ATOMIC);
+	for_each_online_cpu(cpu) {
+		struct slub_free_list *f;
+
+		if (!has_cpu_slab(cpu, s))
+			continue;
+
+		f = &per_cpu(slub_free_list, cpu);
+		raw_spin_lock_irq(&f->lock);
+		list_splice_init(&f->list, &tofree);
+		raw_spin_unlock_irq(&f->lock);
+		free_delayed(&tofree);
+	}
 }
 
 /*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2462 @ static unsigned long count_partial(struc
 	unsigned long x = 0;
 	struct page *page;
 
-	spin_lock_irqsave(&n->list_lock, flags);
+	raw_spin_lock_irqsave(&n->list_lock, flags);
 	list_for_each_entry(page, &n->partial, lru)
 		x += get_count(page);
-	spin_unlock_irqrestore(&n->list_lock, flags);
+	raw_spin_unlock_irqrestore(&n->list_lock, flags);
 	return x;
 }
 #endif /* CONFIG_SLUB_DEBUG || CONFIG_SYSFS */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2605 @ static inline void *get_freelist(struct
  * already disabled (which is the case for bulk allocation).
  */
 static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
-			  unsigned long addr, struct kmem_cache_cpu *c)
+			  unsigned long addr, struct kmem_cache_cpu *c,
+			  struct list_head *to_free)
 {
+	struct slub_free_list *f;
 	void *freelist;
 	struct page *page;
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2664 @ load_freelist:
 	VM_BUG_ON(!c->page->frozen);
 	c->freelist = get_freepointer(s, freelist);
 	c->tid = next_tid(c->tid);
+
+out:
+	f = this_cpu_ptr(&slub_free_list);
+	raw_spin_lock(&f->lock);
+	list_splice_init(&f->list, to_free);
+	raw_spin_unlock(&f->lock);
+
 	return freelist;
 
 new_slab:
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2686 @ new_slab:
 
 	if (unlikely(!freelist)) {
 		slab_out_of_memory(s, gfpflags, node);
-		return NULL;
+		goto out;
 	}
 
 	page = c->page;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2699 @ new_slab:
 		goto new_slab;	/* Slab failed checks. Next slab needed */
 
 	deactivate_slab(s, page, get_freepointer(s, freelist), c);
-	return freelist;
+	goto out;
 }
 
 /*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2711 @ static void *__slab_alloc(struct kmem_ca
 {
 	void *p;
 	unsigned long flags;
+	LIST_HEAD(tofree);
 
 	local_irq_save(flags);
 #ifdef CONFIG_PREEMPT
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2723 @ static void *__slab_alloc(struct kmem_ca
 	c = this_cpu_ptr(s->cpu_slab);
 #endif
 
-	p = ___slab_alloc(s, gfpflags, node, addr, c);
+	p = ___slab_alloc(s, gfpflags, node, addr, c, &tofree);
 	local_irq_restore(flags);
+	free_delayed(&tofree);
 	return p;
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2911 @ static void __slab_free(struct kmem_cach
 
 	do {
 		if (unlikely(n)) {
-			spin_unlock_irqrestore(&n->list_lock, flags);
+			raw_spin_unlock_irqrestore(&n->list_lock, flags);
 			n = NULL;
 		}
 		prior = page->freelist;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2943 @ static void __slab_free(struct kmem_cach
 				 * Otherwise the list_lock will synchronize with
 				 * other processors updating the list of slabs.
 				 */
-				spin_lock_irqsave(&n->list_lock, flags);
+				raw_spin_lock_irqsave(&n->list_lock, flags);
 
 			}
 		}
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2985 @ static void __slab_free(struct kmem_cach
 		add_partial(n, page, DEACTIVATE_TO_TAIL);
 		stat(s, FREE_ADD_PARTIAL);
 	}
-	spin_unlock_irqrestore(&n->list_lock, flags);
+	raw_spin_unlock_irqrestore(&n->list_lock, flags);
 	return;
 
 slab_empty:
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3000 @ slab_empty:
 		remove_full(s, n, page);
 	}
 
-	spin_unlock_irqrestore(&n->list_lock, flags);
+	raw_spin_unlock_irqrestore(&n->list_lock, flags);
 	stat(s, FREE_SLAB);
 	discard_slab(s, page);
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3203 @ int kmem_cache_alloc_bulk(struct kmem_ca
 			  void **p)
 {
 	struct kmem_cache_cpu *c;
+	LIST_HEAD(to_free);
 	int i;
 
 	/* memcg and kmem_cache debug support */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3227 @ int kmem_cache_alloc_bulk(struct kmem_ca
 			 * of re-populating per CPU c->freelist
 			 */
 			p[i] = ___slab_alloc(s, flags, NUMA_NO_NODE,
-					    _RET_IP_, c);
+					    _RET_IP_, c, &to_free);
 			if (unlikely(!p[i]))
 				goto error;
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3239 @ int kmem_cache_alloc_bulk(struct kmem_ca
 	}
 	c->tid = next_tid(c->tid);
 	local_irq_enable();
+	free_delayed(&to_free);
 
 	/* Clear memory outside IRQ disabled fastpath loop */
 	if (unlikely(flags & __GFP_ZERO)) {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3254 @ int kmem_cache_alloc_bulk(struct kmem_ca
 	return i;
 error:
 	local_irq_enable();
+	free_delayed(&to_free);
 	slab_post_alloc_hook(s, flags, i, p);
 	__kmem_cache_free_bulk(s, i, p);
 	return 0;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3390 @ static void
 init_kmem_cache_node(struct kmem_cache_node *n)
 {
 	n->nr_partial = 0;
-	spin_lock_init(&n->list_lock);
+	raw_spin_lock_init(&n->list_lock);
 	INIT_LIST_HEAD(&n->partial);
 #ifdef CONFIG_SLUB_DEBUG
 	atomic_long_set(&n->nr_slabs, 0);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3743 @ static void list_slab_objects(struct kme
 							const char *text)
 {
 #ifdef CONFIG_SLUB_DEBUG
+#ifdef CONFIG_PREEMPT_RT_BASE
+	/* XXX move out of irq-off section */
+	slab_err(s, page, text, s->name);
+#else
+
 	void *addr = page_address(page);
 	void *p;
 	unsigned long *map = bitmap_zalloc(page->objects, GFP_ATOMIC);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3767 @ static void list_slab_objects(struct kme
 	slab_unlock(page);
 	bitmap_free(map);
 #endif
+#endif
 }
 
+
 /*
  * Attempt to free all partial slabs on a node.
  * This is called from __kmem_cache_shutdown(). We must take list_lock
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3782 @ static void free_partial(struct kmem_cac
 	struct page *page, *h;
 
 	BUG_ON(irqs_disabled());
-	spin_lock_irq(&n->list_lock);
+	raw_spin_lock_irq(&n->list_lock);
 	list_for_each_entry_safe(page, h, &n->partial, lru) {
 		if (!page->inuse) {
 			remove_partial(n, page);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3792 @ static void free_partial(struct kmem_cac
 			"Objects remaining in %s on __kmem_cache_shutdown()");
 		}
 	}
-	spin_unlock_irq(&n->list_lock);
+	raw_spin_unlock_irq(&n->list_lock);
 
 	list_for_each_entry_safe(page, h, &discard, lru)
 		discard_slab(s, page);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4066 @ int __kmem_cache_shrink(struct kmem_cach
 		for (i = 0; i < SHRINK_PROMOTE_MAX; i++)
 			INIT_LIST_HEAD(promote + i);
 
-		spin_lock_irqsave(&n->list_lock, flags);
+		raw_spin_lock_irqsave(&n->list_lock, flags);
 
 		/*
 		 * Build lists of slabs to discard or promote.
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4097 @ int __kmem_cache_shrink(struct kmem_cach
 		for (i = SHRINK_PROMOTE_MAX - 1; i >= 0; i--)
 			list_splice(promote + i, &n->partial);
 
-		spin_unlock_irqrestore(&n->list_lock, flags);
+		raw_spin_unlock_irqrestore(&n->list_lock, flags);
 
 		/* Release empty slabs */
 		list_for_each_entry_safe(page, t, &discard, lru)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4310 @ void __init kmem_cache_init(void)
 {
 	static __initdata struct kmem_cache boot_kmem_cache,
 		boot_kmem_cache_node;
+	int cpu;
+
+	for_each_possible_cpu(cpu) {
+		raw_spin_lock_init(&per_cpu(slub_free_list, cpu).lock);
+		INIT_LIST_HEAD(&per_cpu(slub_free_list, cpu).list);
+	}
 
 	if (debug_guardpage_minorder())
 		slub_max_order = 0;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4517 @ static int validate_slab_node(struct kme
 	struct page *page;
 	unsigned long flags;
 
-	spin_lock_irqsave(&n->list_lock, flags);
+	raw_spin_lock_irqsave(&n->list_lock, flags);
 
 	list_for_each_entry(page, &n->partial, lru) {
 		validate_slab_slab(s, page, map);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4539 @ static int validate_slab_node(struct kme
 		       s->name, count, atomic_long_read(&n->nr_slabs));
 
 out:
-	spin_unlock_irqrestore(&n->list_lock, flags);
+	raw_spin_unlock_irqrestore(&n->list_lock, flags);
 	return count;
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4725 @ static int list_locations(struct kmem_ca
 		if (!atomic_long_read(&n->nr_slabs))
 			continue;
 
-		spin_lock_irqsave(&n->list_lock, flags);
+		raw_spin_lock_irqsave(&n->list_lock, flags);
 		list_for_each_entry(page, &n->partial, lru)
 			process_slab(&t, s, page, alloc, map);
 		list_for_each_entry(page, &n->full, lru)
 			process_slab(&t, s, page, alloc, map);
-		spin_unlock_irqrestore(&n->list_lock, flags);
+		raw_spin_unlock_irqrestore(&n->list_lock, flags);
 	}
 
 	for (i = 0; i < t.count; i++) {
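
The slub hunks above add a per-CPU slub_free_list: whenever a slab page would have to be freed with interrupts disabled, free_slab() parks it on that list and free_delayed() releases the pages later, once interrupts are enabled again (see the callers in __slab_alloc(), flush_all() and kmem_cache_alloc_bulk()). A minimal user-space sketch of the same defer-then-drain idea follows; the names (struct deferred, free_later, drain_deferred) are made up for illustration.

/*
 * User-space sketch of the deferred-free pattern (slub_free_list /
 * free_delayed): work that must not run inside the atomic section is queued
 * and completed after the section ends.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct deferred {
	struct deferred *next;
	void *obj;
};

static pthread_mutex_t deferred_lock = PTHREAD_MUTEX_INITIALIZER;
static struct deferred *deferred_head;	/* plays the role of slub_free_list */

/* Called from a context where the real free would be illegal. */
static void free_later(void *obj)
{
	struct deferred *d = malloc(sizeof(*d));

	if (!d)
		return;			/* drop on allocation failure in this toy */
	d->obj = obj;
	pthread_mutex_lock(&deferred_lock);
	d->next = deferred_head;
	deferred_head = d;
	pthread_mutex_unlock(&deferred_lock);
}

/* Analogue of free_delayed(): splice the list, then free outside the lock. */
static void drain_deferred(void)
{
	struct deferred *list;

	pthread_mutex_lock(&deferred_lock);
	list = deferred_head;
	deferred_head = NULL;
	pthread_mutex_unlock(&deferred_lock);

	while (list) {
		struct deferred *d = list;

		list = d->next;
		free(d->obj);
		free(d);
	}
}

int main(void)
{
	free_later(malloc(32));
	free_later(malloc(64));
	drain_deferred();
	printf("deferred objects drained\n");
	return 0;
}
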
Index: linux-5.0.19-rt11/mm/swap.c
===================================================================
--- linux-5.0.19-rt11.orig/mm/swap.c
+++ linux-5.0.19-rt11/mm/swap.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:35 @
 #include <linux/memcontrol.h>
 #include <linux/gfp.h>
 #include <linux/uio.h>
+#include <linux/locallock.h>
 #include <linux/hugetlb.h>
 #include <linux/page_idle.h>
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:54 @ static DEFINE_PER_CPU(struct pagevec, lr
 #ifdef CONFIG_SMP
 static DEFINE_PER_CPU(struct pagevec, activate_page_pvecs);
 #endif
+static DEFINE_LOCAL_IRQ_LOCK(rotate_lock);
+DEFINE_LOCAL_IRQ_LOCK(swapvec_lock);
 
 /*
  * This path almost never happens for VM activity - pages are normally
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:258 @ void rotate_reclaimable_page(struct page
 		unsigned long flags;
 
 		get_page(page);
-		local_irq_save(flags);
+		local_lock_irqsave(rotate_lock, flags);
 		pvec = this_cpu_ptr(&lru_rotate_pvecs);
 		if (!pagevec_add(pvec, page) || PageCompound(page))
 			pagevec_move_tail(pvec);
-		local_irq_restore(flags);
+		local_unlock_irqrestore(rotate_lock, flags);
 	}
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:312 @ void activate_page(struct page *page)
 {
 	page = compound_head(page);
 	if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
-		struct pagevec *pvec = &get_cpu_var(activate_page_pvecs);
+		struct pagevec *pvec = &get_locked_var(swapvec_lock,
+						       activate_page_pvecs);
 
 		get_page(page);
 		if (!pagevec_add(pvec, page) || PageCompound(page))
 			pagevec_lru_move_fn(pvec, __activate_page, NULL);
-		put_cpu_var(activate_page_pvecs);
+		put_locked_var(swapvec_lock, activate_page_pvecs);
 	}
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:340 @ void activate_page(struct page *page)
 
 static void __lru_cache_activate_page(struct page *page)
 {
-	struct pagevec *pvec = &get_cpu_var(lru_add_pvec);
+	struct pagevec *pvec = &get_locked_var(swapvec_lock, lru_add_pvec);
 	int i;
 
 	/*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:362 @ static void __lru_cache_activate_page(st
 		}
 	}
 
-	put_cpu_var(lru_add_pvec);
+	put_locked_var(swapvec_lock, lru_add_pvec);
 }
 
 /*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:404 @ EXPORT_SYMBOL(mark_page_accessed);
 
 static void __lru_cache_add(struct page *page)
 {
-	struct pagevec *pvec = &get_cpu_var(lru_add_pvec);
+	struct pagevec *pvec = &get_locked_var(swapvec_lock, lru_add_pvec);
 
 	get_page(page);
 	if (!pagevec_add(pvec, page) || PageCompound(page))
 		__pagevec_lru_add(pvec);
-	put_cpu_var(lru_add_pvec);
+	put_locked_var(swapvec_lock, lru_add_pvec);
 }
 
 /**
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:587 @ void lru_add_drain_cpu(int cpu)
 		unsigned long flags;
 
 		/* No harm done if a racing interrupt already did this */
-		local_irq_save(flags);
+#ifdef CONFIG_PREEMPT_RT_BASE
+		local_lock_irqsave_on(rotate_lock, flags, cpu);
 		pagevec_move_tail(pvec);
-		local_irq_restore(flags);
+		local_unlock_irqrestore_on(rotate_lock, flags, cpu);
+#else
+		local_lock_irqsave(rotate_lock, flags);
+		pagevec_move_tail(pvec);
+		local_unlock_irqrestore(rotate_lock, flags);
+#endif
 	}
 
 	pvec = &per_cpu(lru_deactivate_file_pvecs, cpu);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:627 @ void deactivate_file_page(struct page *p
 		return;
 
 	if (likely(get_page_unless_zero(page))) {
-		struct pagevec *pvec = &get_cpu_var(lru_deactivate_file_pvecs);
+		struct pagevec *pvec = &get_locked_var(swapvec_lock,
+						       lru_deactivate_file_pvecs);
 
 		if (!pagevec_add(pvec, page) || PageCompound(page))
 			pagevec_lru_move_fn(pvec, lru_deactivate_file_fn, NULL);
-		put_cpu_var(lru_deactivate_file_pvecs);
+		put_locked_var(swapvec_lock, lru_deactivate_file_pvecs);
 	}
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:647 @ void mark_page_lazyfree(struct page *pag
 {
 	if (PageLRU(page) && PageAnon(page) && PageSwapBacked(page) &&
 	    !PageSwapCache(page) && !PageUnevictable(page)) {
-		struct pagevec *pvec = &get_cpu_var(lru_lazyfree_pvecs);
+		struct pagevec *pvec = &get_locked_var(swapvec_lock,
+						       lru_lazyfree_pvecs);
 
 		get_page(page);
 		if (!pagevec_add(pvec, page) || PageCompound(page))
 			pagevec_lru_move_fn(pvec, lru_lazyfree_fn, NULL);
-		put_cpu_var(lru_lazyfree_pvecs);
+		put_locked_var(swapvec_lock, lru_lazyfree_pvecs);
 	}
 }
 
 void lru_add_drain(void)
 {
-	lru_add_drain_cpu(get_cpu());
-	put_cpu();
+	lru_add_drain_cpu(local_lock_cpu(swapvec_lock));
+	local_unlock_cpu(swapvec_lock);
 }
 
 #ifdef CONFIG_SMP
 
+#ifdef CONFIG_PREEMPT_RT_BASE
+static inline void remote_lru_add_drain(int cpu, struct cpumask *has_work)
+{
+	local_lock_on(swapvec_lock, cpu);
+	lru_add_drain_cpu(cpu);
+	local_unlock_on(swapvec_lock, cpu);
+}
+
+#else
+
 static DEFINE_PER_CPU(struct work_struct, lru_add_drain_work);
 
 static void lru_add_drain_per_cpu(struct work_struct *dummy)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:682 @ static void lru_add_drain_per_cpu(struct
 	lru_add_drain();
 }
 
+static inline void remote_lru_add_drain(int cpu, struct cpumask *has_work)
+{
+	struct work_struct *work = &per_cpu(lru_add_drain_work, cpu);
+
+	INIT_WORK(work, lru_add_drain_per_cpu);
+	queue_work_on(cpu, mm_percpu_wq, work);
+	cpumask_set_cpu(cpu, has_work);
+}
+#endif
+
 /*
  * Doesn't need any cpu hotplug locking because we do rely on per-cpu
  * kworkers being shut down before our page_alloc_cpu_dead callback is
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:716 @ void lru_add_drain_all(void)
 	cpumask_clear(&has_work);
 
 	for_each_online_cpu(cpu) {
-		struct work_struct *work = &per_cpu(lru_add_drain_work, cpu);
 
 		if (pagevec_count(&per_cpu(lru_add_pvec, cpu)) ||
 		    pagevec_count(&per_cpu(lru_rotate_pvecs, cpu)) ||
 		    pagevec_count(&per_cpu(lru_deactivate_file_pvecs, cpu)) ||
 		    pagevec_count(&per_cpu(lru_lazyfree_pvecs, cpu)) ||
-		    need_activate_page_drain(cpu)) {
-			INIT_WORK(work, lru_add_drain_per_cpu);
-			queue_work_on(cpu, mm_percpu_wq, work);
-			cpumask_set_cpu(cpu, &has_work);
-		}
+		    need_activate_page_drain(cpu))
+			remote_lru_add_drain(cpu, &has_work);
 	}
 
+#ifndef CONFIG_PREEMPT_RT_BASE
 	for_each_cpu(cpu, &has_work)
 		flush_work(&per_cpu(lru_add_drain_work, cpu));
+#endif
 
 	mutex_unlock(&lock);
 }
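
In swap.c the get_cpu_var()/put_cpu_var() pairs around the pagevecs become get_locked_var()/put_locked_var() on swapvec_lock, and on RT lru_add_drain_all() drains a remote CPU directly by taking that CPU's lock (remote_lru_add_drain() via local_lock_on()) instead of queueing a work item on it. Below is a user-space analogue of the remote-drain idea, using invented names (worker_cache, cache_add, cache_drain); it is not kernel code.

/*
 * User-space sketch: each worker owns a small buffer plus a lock, and a
 * maintenance thread may flush any worker's buffer directly by taking that
 * worker's lock, rather than asking the worker to do it.
 */
#include <pthread.h>
#include <stdio.h>

#define NWORKERS 4
#define BATCH    8

struct worker_cache {
	pthread_mutex_t lock;		/* plays the role of swapvec_lock */
	int items[BATCH];
	int count;
};

static struct worker_cache caches[NWORKERS];

static void cache_add(int cpu, int item)
{
	struct worker_cache *c = &caches[cpu];

	pthread_mutex_lock(&c->lock);
	if (c->count < BATCH)
		c->items[c->count++] = item;
	pthread_mutex_unlock(&c->lock);
}

/* Remote drain: any thread may empty cpu's cache by taking cpu's lock. */
static int cache_drain(int cpu)
{
	struct worker_cache *c = &caches[cpu];
	int drained;

	pthread_mutex_lock(&c->lock);
	drained = c->count;
	c->count = 0;
	pthread_mutex_unlock(&c->lock);
	return drained;
}

int main(void)
{
	int cpu;

	for (cpu = 0; cpu < NWORKERS; cpu++)
		pthread_mutex_init(&caches[cpu].lock, NULL);

	cache_add(2, 42);
	printf("drained %d item(s) from cpu 2\n", cache_drain(2));
	return 0;
}
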
Index: linux-5.0.19-rt11/mm/vmalloc.c
===================================================================
--- linux-5.0.19-rt11.orig/mm/vmalloc.c
+++ linux-5.0.19-rt11/mm/vmalloc.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:855 @ static void *new_vmap_block(unsigned int
 	struct vmap_block *vb;
 	struct vmap_area *va;
 	unsigned long vb_idx;
-	int node, err;
+	int node, err, cpu;
 	void *vaddr;
 
 	node = numa_node_id();
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:898 @ static void *new_vmap_block(unsigned int
 	BUG_ON(err);
 	radix_tree_preload_end();
 
-	vbq = &get_cpu_var(vmap_block_queue);
+	cpu = get_cpu_light();
+	vbq = this_cpu_ptr(&vmap_block_queue);
 	spin_lock(&vbq->lock);
 	list_add_tail_rcu(&vb->free_list, &vbq->free);
 	spin_unlock(&vbq->lock);
-	put_cpu_var(vmap_block_queue);
+	put_cpu_light();
 
 	return vaddr;
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:972 @ static void *vb_alloc(unsigned long size
 	struct vmap_block *vb;
 	void *vaddr = NULL;
 	unsigned int order;
+	int cpu;
 
 	BUG_ON(offset_in_page(size));
 	BUG_ON(size > PAGE_SIZE*VMAP_MAX_ALLOC);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:987 @ static void *vb_alloc(unsigned long size
 	order = get_order(size);
 
 	rcu_read_lock();
-	vbq = &get_cpu_var(vmap_block_queue);
+	cpu = get_cpu_light();
+	vbq = this_cpu_ptr(&vmap_block_queue);
 	list_for_each_entry_rcu(vb, &vbq->free, free_list) {
 		unsigned long pages_off;
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1011 @ static void *vb_alloc(unsigned long size
 		break;
 	}
 
-	put_cpu_var(vmap_block_queue);
+	put_cpu_light();
 	rcu_read_unlock();
 
 	/* Allocate new block if nothing was found */
Index: linux-5.0.19-rt11/mm/vmstat.c
===================================================================
--- linux-5.0.19-rt11.orig/mm/vmstat.c
+++ linux-5.0.19-rt11/mm/vmstat.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:323 @ void __mod_zone_page_state(struct zone *
 	long x;
 	long t;
 
+	preempt_disable_rt();
 	x = delta + __this_cpu_read(*p);
 
 	t = __this_cpu_read(pcp->stat_threshold);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:333 @ void __mod_zone_page_state(struct zone *
 		x = 0;
 	}
 	__this_cpu_write(*p, x);
+	preempt_enable_rt();
 }
 EXPORT_SYMBOL(__mod_zone_page_state);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:345 @ void __mod_node_page_state(struct pglist
 	long x;
 	long t;
 
+	preempt_disable_rt();
 	x = delta + __this_cpu_read(*p);
 
 	t = __this_cpu_read(pcp->stat_threshold);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:355 @ void __mod_node_page_state(struct pglist
 		x = 0;
 	}
 	__this_cpu_write(*p, x);
+	preempt_enable_rt();
 }
 EXPORT_SYMBOL(__mod_node_page_state);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:388 @ void __inc_zone_state(struct zone *zone,
 	s8 __percpu *p = pcp->vm_stat_diff + item;
 	s8 v, t;
 
+	preempt_disable_rt();
 	v = __this_cpu_inc_return(*p);
 	t = __this_cpu_read(pcp->stat_threshold);
 	if (unlikely(v > t)) {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:397 @ void __inc_zone_state(struct zone *zone,
 		zone_page_state_add(v + overstep, zone, item);
 		__this_cpu_write(*p, -overstep);
 	}
+	preempt_enable_rt();
 }
 
 void __inc_node_state(struct pglist_data *pgdat, enum node_stat_item item)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:406 @ void __inc_node_state(struct pglist_data
 	s8 __percpu *p = pcp->vm_node_stat_diff + item;
 	s8 v, t;
 
+	preempt_disable_rt();
 	v = __this_cpu_inc_return(*p);
 	t = __this_cpu_read(pcp->stat_threshold);
 	if (unlikely(v > t)) {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:415 @ void __inc_node_state(struct pglist_data
 		node_page_state_add(v + overstep, pgdat, item);
 		__this_cpu_write(*p, -overstep);
 	}
+	preempt_enable_rt();
 }
 
 void __inc_zone_page_state(struct page *page, enum zone_stat_item item)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:436 @ void __dec_zone_state(struct zone *zone,
 	s8 __percpu *p = pcp->vm_stat_diff + item;
 	s8 v, t;
 
+	preempt_disable_rt();
 	v = __this_cpu_dec_return(*p);
 	t = __this_cpu_read(pcp->stat_threshold);
 	if (unlikely(v < - t)) {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:445 @ void __dec_zone_state(struct zone *zone,
 		zone_page_state_add(v - overstep, zone, item);
 		__this_cpu_write(*p, overstep);
 	}
+	preempt_enable_rt();
 }
 
 void __dec_node_state(struct pglist_data *pgdat, enum node_stat_item item)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:454 @ void __dec_node_state(struct pglist_data
 	s8 __percpu *p = pcp->vm_node_stat_diff + item;
 	s8 v, t;
 
+	preempt_disable_rt();
 	v = __this_cpu_dec_return(*p);
 	t = __this_cpu_read(pcp->stat_threshold);
 	if (unlikely(v < - t)) {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:463 @ void __dec_node_state(struct pglist_data
 		node_page_state_add(v - overstep, pgdat, item);
 		__this_cpu_write(*p, overstep);
 	}
+	preempt_enable_rt();
 }
 
 void __dec_zone_page_state(struct page *page, enum zone_stat_item item)
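
The vmstat.c hunks wrap the per-CPU counter updates in preempt_disable_rt()/preempt_enable_rt() because on RT the callers' spinlocks no longer disable preemption, and the read-modify-write of the per-CPU diff plus the threshold fold must complete without another task on the same CPU interleaving. The fold keeps the invariant "global counter + per-CPU diffs = true count". The snippet below re-implements that arithmetic in plain, runnable C, modelled on __inc_zone_state(); the variable names are invented for the example.

/*
 * Plain C rendering of the threshold fold that the preempt_disable_rt()
 * sections protect (modelled on __inc_zone_state in the hunk above).
 */
#include <stdio.h>

static long global_count;		/* zone/node wide counter */
static signed char cpu_diff;		/* one per-CPU vm_stat_diff[] slot */
static const signed char threshold = 32;

static void inc_state(void)
{
	signed char v = ++cpu_diff;

	if (v > threshold) {
		signed char overstep = threshold >> 1;

		/* Fold the batched updates into the global counter ... */
		global_count += v + overstep;
		/* ... and leave headroom so the next fold is not immediate. */
		cpu_diff = -overstep;
	}
}

int main(void)
{
	int i;

	for (i = 0; i < 100; i++)
		inc_state();
	printf("global=%ld cpu_diff=%d (sum=%ld)\n",
	       global_count, cpu_diff, global_count + cpu_diff);
	return 0;
}

Running it performs 100 increments and the printed sum is 100: the folds never lose updates. The RT annotations only guarantee that a fold cannot be torn in the middle by preemption on the same CPU.
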
Index: linux-5.0.19-rt11/mm/workingset.c
===================================================================
--- linux-5.0.19-rt11.orig/mm/workingset.c
+++ linux-5.0.19-rt11/mm/workingset.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:371 @ static struct list_lru shadow_nodes;
 
 void workingset_update_node(struct xa_node *node)
 {
+	struct address_space *mapping;
+
 	/*
 	 * Track non-empty nodes that contain only shadow entries;
 	 * unlink those that contain pages or are being freed.
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:381 @ void workingset_update_node(struct xa_no
 	 * already where they should be. The list_empty() test is safe
 	 * as node->private_list is protected by the i_pages lock.
 	 */
-	VM_WARN_ON_ONCE(!irqs_disabled());  /* For __inc_lruvec_page_state */
+	mapping = container_of(node->array, struct address_space, i_pages);
+	lockdep_assert_held(&mapping->i_pages.xa_lock);
 
 	if (node->count && node->count == node->nr_values) {
 		if (list_empty(&node->private_list)) {
Index: linux-5.0.19-rt11/mm/zsmalloc.c
===================================================================
--- linux-5.0.19-rt11.orig/mm/zsmalloc.c
+++ linux-5.0.19-rt11/mm/zsmalloc.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:58 @
 #include <linux/migrate.h>
 #include <linux/pagemap.h>
 #include <linux/fs.h>
+#include <linux/locallock.h>
 
 #define ZSPAGE_MAGIC	0x58
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:76 @
  */
 #define ZS_MAX_ZSPAGE_ORDER 2
 #define ZS_MAX_PAGES_PER_ZSPAGE (_AC(1, UL) << ZS_MAX_ZSPAGE_ORDER)
-
 #define ZS_HANDLE_SIZE (sizeof(unsigned long))
 
+#ifdef CONFIG_PREEMPT_RT_FULL
+
+struct zsmalloc_handle {
+	unsigned long addr;
+	struct mutex lock;
+};
+
+#define ZS_HANDLE_ALLOC_SIZE (sizeof(struct zsmalloc_handle))
+
+#else
+
+#define ZS_HANDLE_ALLOC_SIZE (sizeof(unsigned long))
+#endif
+
 /*
  * Object location (<PFN>, <obj_idx>) is encoded as
  * as single (unsigned long) handle value.
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:337 @ static void SetZsPageMovable(struct zs_p
 
 static int create_cache(struct zs_pool *pool)
 {
-	pool->handle_cachep = kmem_cache_create("zs_handle", ZS_HANDLE_SIZE,
+	pool->handle_cachep = kmem_cache_create("zs_handle", ZS_HANDLE_ALLOC_SIZE,
 					0, 0, NULL);
 	if (!pool->handle_cachep)
 		return 1;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:361 @ static void destroy_cache(struct zs_pool
 
 static unsigned long cache_alloc_handle(struct zs_pool *pool, gfp_t gfp)
 {
-	return (unsigned long)kmem_cache_alloc(pool->handle_cachep,
-			gfp & ~(__GFP_HIGHMEM|__GFP_MOVABLE));
+	void *p;
+
+	p = kmem_cache_alloc(pool->handle_cachep,
+			     gfp & ~(__GFP_HIGHMEM|__GFP_MOVABLE));
+#ifdef CONFIG_PREEMPT_RT_FULL
+	if (p) {
+		struct zsmalloc_handle *zh = p;
+
+		mutex_init(&zh->lock);
+	}
+#endif
+	return (unsigned long)p;
 }
 
+#ifdef CONFIG_PREEMPT_RT_FULL
+static struct zsmalloc_handle *zs_get_pure_handle(unsigned long handle)
+{
+	return (void *)(handle & ~((1 << OBJ_TAG_BITS) - 1));
+}
+#endif
+
 static void cache_free_handle(struct zs_pool *pool, unsigned long handle)
 {
 	kmem_cache_free(pool->handle_cachep, (void *)handle);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:400 @ static void cache_free_zspage(struct zs_
 
 static void record_obj(unsigned long handle, unsigned long obj)
 {
+#ifdef CONFIG_PREEMPT_RT_FULL
+	struct zsmalloc_handle *zh = zs_get_pure_handle(handle);
+
+	WRITE_ONCE(zh->addr, obj);
+#else
 	/*
 	 * lsb of @obj represents handle lock while other bits
 	 * represent object value the handle is pointing so
 	 * updating shouldn't do store tearing.
 	 */
 	WRITE_ONCE(*(unsigned long *)handle, obj);
+#endif
 }
 
 /* zpool driver */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:493 @ MODULE_ALIAS("zpool-zsmalloc");
 
 /* per-cpu VM mapping areas for zspage accesses that cross page boundaries */
 static DEFINE_PER_CPU(struct mapping_area, zs_map_area);
+static DEFINE_LOCAL_IRQ_LOCK(zs_map_area_lock);
 
 static bool is_zspage_isolated(struct zspage *zspage)
 {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:923 @ static unsigned long location_to_obj(str
 
 static unsigned long handle_to_obj(unsigned long handle)
 {
+#ifdef CONFIG_PREEMPT_RT_FULL
+	struct zsmalloc_handle *zh = zs_get_pure_handle(handle);
+
+	return zh->addr;
+#else
 	return *(unsigned long *)handle;
+#endif
 }
 
 static unsigned long obj_to_head(struct page *page, void *obj)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:943 @ static unsigned long obj_to_head(struct
 
 static inline int testpin_tag(unsigned long handle)
 {
+#ifdef CONFIG_PREEMPT_RT_FULL
+	struct zsmalloc_handle *zh = zs_get_pure_handle(handle);
+
+	return mutex_is_locked(&zh->lock);
+#else
 	return bit_spin_is_locked(HANDLE_PIN_BIT, (unsigned long *)handle);
+#endif
 }
 
 static inline int trypin_tag(unsigned long handle)
 {
+#ifdef CONFIG_PREEMPT_RT_FULL
+	struct zsmalloc_handle *zh = zs_get_pure_handle(handle);
+
+	return mutex_trylock(&zh->lock);
+#else
 	return bit_spin_trylock(HANDLE_PIN_BIT, (unsigned long *)handle);
+#endif
 }
 
 static void pin_tag(unsigned long handle)
 {
+#ifdef CONFIG_PREEMPT_RT_FULL
+	struct zsmalloc_handle *zh = zs_get_pure_handle(handle);
+
+	mutex_lock(&zh->lock);
+#else
 	bit_spin_lock(HANDLE_PIN_BIT, (unsigned long *)handle);
+#endif
 }
 
 static void unpin_tag(unsigned long handle)
 {
+#ifdef CONFIG_PREEMPT_RT_FULL
+	struct zsmalloc_handle *zh = zs_get_pure_handle(handle);
+
+	mutex_unlock(&zh->lock);
+#else
 	bit_spin_unlock(HANDLE_PIN_BIT, (unsigned long *)handle);
+#endif
 }
 
 static void reset_page(struct page *page)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1408 @ void *zs_map_object(struct zs_pool *pool
 	class = pool->size_class[class_idx];
 	off = (class->size * obj_idx) & ~PAGE_MASK;
 
-	area = &get_cpu_var(zs_map_area);
+	area = &get_locked_var(zs_map_area_lock, zs_map_area);
 	area->vm_mm = mm;
 	if (off + class->size <= PAGE_SIZE) {
 		/* this object is contained entirely within a page */
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1462 @ void zs_unmap_object(struct zs_pool *poo
 
 		__zs_unmap_object(area, pages, off, class->size);
 	}
-	put_cpu_var(zs_map_area);
+	put_locked_var(zs_map_area_lock, zs_map_area);
 
 	migrate_read_unlock(zspage);
 	unpin_tag(handle);
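
For zsmalloc the RT variant stops using the low bit of the handle word as a bit spinlock (a busy-wait taken with preemption disabled) and instead sizes the handle cache for a small structure carrying the object address plus a mutex, so pin_tag()/unpin_tag() may sleep. The sketch below shows the same indirection in user space; struct handle and the helper names are invented here and only illustrate the layout/locking split.

/*
 * User-space sketch of the handle indirection: instead of hiding a lock bit
 * inside the value itself, the handle points to a small structure that
 * carries the value and a real lock.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct handle {
	unsigned long addr;		/* what record_obj()/handle_to_obj() access */
	pthread_mutex_t lock;		/* what pin_tag()/unpin_tag() take */
};

static unsigned long alloc_handle(void)
{
	struct handle *h = malloc(sizeof(*h));

	if (!h)
		return 0;
	h->addr = 0;
	pthread_mutex_init(&h->lock, NULL);
	return (unsigned long)h;
}

static void record(unsigned long handle, unsigned long obj)
{
	((struct handle *)handle)->addr = obj;
}

static unsigned long lookup(unsigned long handle)
{
	return ((struct handle *)handle)->addr;
}

static void pin(unsigned long handle)
{
	pthread_mutex_lock(&((struct handle *)handle)->lock);
}

static void unpin(unsigned long handle)
{
	pthread_mutex_unlock(&((struct handle *)handle)->lock);
}

int main(void)
{
	unsigned long h = alloc_handle();

	if (!h)
		return 1;
	pin(h);
	record(h, 0xdeadbeef);
	unpin(h);
	printf("handle resolves to %#lx\n", lookup(h));
	free((void *)h);
	return 0;
}
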
Index: linux-5.0.19-rt11/mm/zswap.c
===================================================================
--- linux-5.0.19-rt11.orig/mm/zswap.c
+++ linux-5.0.19-rt11/mm/zswap.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:30 @
 #include <linux/highmem.h>
 #include <linux/slab.h>
 #include <linux/spinlock.h>
+#include <linux/locallock.h>
 #include <linux/types.h>
 #include <linux/atomic.h>
 #include <linux/frontswap.h>
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:994 @ static void zswap_fill_page(void *ptr, u
 	memset_l(page, value, PAGE_SIZE / sizeof(unsigned long));
 }
 
+/* protect zswap_dstmem from concurrency */
+static DEFINE_LOCAL_IRQ_LOCK(zswap_dstmem_lock);
 /*********************************
 * frontswap hooks
 **********************************/
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1072 @ static int zswap_frontswap_store(unsigne
 	}
 
 	/* compress */
-	dst = get_cpu_var(zswap_dstmem);
-	tfm = *get_cpu_ptr(entry->pool->tfm);
+	dst = get_locked_var(zswap_dstmem_lock, zswap_dstmem);
+	tfm = *this_cpu_ptr(entry->pool->tfm);
 	src = kmap_atomic(page);
 	ret = crypto_comp_compress(tfm, src, PAGE_SIZE, dst, &dlen);
 	kunmap_atomic(src);
-	put_cpu_ptr(entry->pool->tfm);
 	if (ret) {
 		ret = -EINVAL;
 		goto put_dstmem;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1099 @ static int zswap_frontswap_store(unsigne
 	memcpy(buf, &zhdr, hlen);
 	memcpy(buf + hlen, dst, dlen);
 	zpool_unmap_handle(entry->pool->zpool, handle);
-	put_cpu_var(zswap_dstmem);
+	put_locked_var(zswap_dstmem_lock, zswap_dstmem);
 
 	/* populate entry */
 	entry->offset = offset;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1127 @ insert_entry:
 	return 0;
 
 put_dstmem:
-	put_cpu_var(zswap_dstmem);
+	put_locked_var(zswap_dstmem_lock, zswap_dstmem);
 	zswap_pool_put(entry->pool);
 freepage:
 	zswap_entry_cache_free(entry);
Index: linux-5.0.19-rt11/net/Kconfig
===================================================================
--- linux-5.0.19-rt11.orig/net/Kconfig
+++ linux-5.0.19-rt11/net/Kconfig
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:282 @ config CGROUP_NET_CLASSID
 
 config NET_RX_BUSY_POLL
 	bool
-	default y
+	default y if !PREEMPT_RT_FULL
 
 config BQL
 	bool
Index: linux-5.0.19-rt11/net/core/dev.c
===================================================================
--- linux-5.0.19-rt11.orig/net/core/dev.c
+++ linux-5.0.19-rt11/net/core/dev.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:202 @ static unsigned int napi_gen_id = NR_CPU
 static DEFINE_READ_MOSTLY_HASHTABLE(napi_hash, 8);
 
 static seqcount_t devnet_rename_seq;
+static DEFINE_MUTEX(devnet_rename_mutex);
 
 static inline void dev_base_seq_inc(struct net *net)
 {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:225 @ static inline struct hlist_head *dev_ind
 static inline void rps_lock(struct softnet_data *sd)
 {
 #ifdef CONFIG_RPS
-	spin_lock(&sd->input_pkt_queue.lock);
+	raw_spin_lock(&sd->input_pkt_queue.raw_lock);
 #endif
 }
 
 static inline void rps_unlock(struct softnet_data *sd)
 {
 #ifdef CONFIG_RPS
-	spin_unlock(&sd->input_pkt_queue.lock);
+	raw_spin_unlock(&sd->input_pkt_queue.raw_lock);
 #endif
 }
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:928 @ retry:
 	strcpy(name, dev->name);
 	rcu_read_unlock();
 	if (read_seqcount_retry(&devnet_rename_seq, seq)) {
-		cond_resched();
+		mutex_lock(&devnet_rename_mutex);
+		mutex_unlock(&devnet_rename_mutex);
 		goto retry;
 	}
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1206 @ int dev_change_name(struct net_device *d
 	    likely(!(dev->priv_flags & IFF_LIVE_RENAME_OK)))
 		return -EBUSY;
 
-	write_seqcount_begin(&devnet_rename_seq);
+	mutex_lock(&devnet_rename_mutex);
+	__raw_write_seqcount_begin(&devnet_rename_seq);
 
-	if (strncmp(newname, dev->name, IFNAMSIZ) == 0) {
-		write_seqcount_end(&devnet_rename_seq);
-		return 0;
-	}
+	if (strncmp(newname, dev->name, IFNAMSIZ) == 0)
+		goto outunlock;
 
 	memcpy(oldname, dev->name, IFNAMSIZ);
 
 	err = dev_get_valid_name(net, dev, newname);
-	if (err < 0) {
-		write_seqcount_end(&devnet_rename_seq);
-		return err;
-	}
+	if (err < 0)
+		goto outunlock;
 
 	if (oldname[0] && !strchr(oldname, '%'))
 		netdev_info(dev, "renamed from %s\n", oldname);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1229 @ rollback:
 	if (ret) {
 		memcpy(dev->name, oldname, IFNAMSIZ);
 		dev->name_assign_type = old_assign_type;
-		write_seqcount_end(&devnet_rename_seq);
-		return ret;
+		err = ret;
+		goto outunlock;
 	}
 
-	write_seqcount_end(&devnet_rename_seq);
+	__raw_write_seqcount_end(&devnet_rename_seq);
+	mutex_unlock(&devnet_rename_mutex);
 
 	netdev_adjacent_rename_links(dev, oldname);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1255 @ rollback:
 		/* err >= 0 after dev_alloc_name() or stores the first errno */
 		if (err >= 0) {
 			err = ret;
-			write_seqcount_begin(&devnet_rename_seq);
+			mutex_lock(&devnet_rename_mutex);
+			__raw_write_seqcount_begin(&devnet_rename_seq);
 			memcpy(dev->name, oldname, IFNAMSIZ);
 			memcpy(oldname, newname, IFNAMSIZ);
 			dev->name_assign_type = old_assign_type;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:1269 @ rollback:
 	}
 
 	return err;
+
+outunlock:
+	__raw_write_seqcount_end(&devnet_rename_seq);
+	mutex_unlock(&devnet_rename_mutex);
+	return err;
 }
 
 /**
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2763 @ static void __netif_reschedule(struct Qd
 	sd->output_queue_tailp = &q->next_sched;
 	raise_softirq_irqoff(NET_TX_SOFTIRQ);
 	local_irq_restore(flags);
+	preempt_check_resched_rt();
 }
 
 void __netif_schedule(struct Qdisc *q)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2826 @ void __dev_kfree_skb_irq(struct sk_buff
 	__this_cpu_write(softnet_data.completion_queue, skb);
 	raise_softirq_irqoff(NET_TX_SOFTIRQ);
 	local_irq_restore(flags);
+	preempt_check_resched_rt();
 }
 EXPORT_SYMBOL(__dev_kfree_skb_irq);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3509 @ static inline int __dev_xmit_skb(struct
 	 * This permits qdisc->running owner to get the lock more
 	 * often and dequeue packets faster.
 	 */
+#ifdef CONFIG_PREEMPT_RT_FULL
+	contended = true;
+#else
 	contended = qdisc_is_running(q);
+#endif
 	if (unlikely(contended))
 		spin_lock(&q->busylock);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3585 @ static void skb_update_prio(struct sk_bu
 #define skb_update_prio(skb)
 #endif
 
+#ifndef CONFIG_PREEMPT_RT_FULL
 DEFINE_PER_CPU(int, xmit_recursion);
 EXPORT_SYMBOL(xmit_recursion);
+#endif
 
 /**
  *	dev_loopback_xmit - loop back @skb
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3879 @ static int __dev_queue_xmit(struct sk_bu
 	if (dev->flags & IFF_UP) {
 		int cpu = smp_processor_id(); /* ok because BHs are off */
 
+#ifdef CONFIG_PREEMPT_RT_FULL
+		if (txq->xmit_lock_owner != current) {
+#else
 		if (txq->xmit_lock_owner != cpu) {
-			if (unlikely(__this_cpu_read(xmit_recursion) >
-				     XMIT_RECURSION_LIMIT))
+#endif
+			if (unlikely(xmit_rec_read() > XMIT_RECURSION_LIMIT))
 				goto recursion_alert;
 
 			skb = validate_xmit_skb(skb, dev, &again);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:3894 @ static int __dev_queue_xmit(struct sk_bu
 			HARD_TX_LOCK(dev, txq, cpu);
 
 			if (!netif_xmit_stopped(txq)) {
-				__this_cpu_inc(xmit_recursion);
+				xmit_rec_inc();
 				skb = dev_hard_start_xmit(skb, dev, txq, &rc);
-				__this_cpu_dec(xmit_recursion);
+				xmit_rec_dec();
 				if (dev_xmit_complete(rc)) {
 					HARD_TX_UNLOCK(dev, txq);
 					goto out;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4315 @ drop:
 	rps_unlock(sd);
 
 	local_irq_restore(flags);
+	preempt_check_resched_rt();
 
 	atomic_long_inc(&skb->dev->rx_dropped);
 	kfree_skb(skb);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4542 @ static int netif_rx_internal(struct sk_b
 		struct rps_dev_flow voidflow, *rflow = &voidflow;
 		int cpu;
 
-		preempt_disable();
+		migrate_disable();
 		rcu_read_lock();
 
 		cpu = get_rps_cpu(skb->dev, skb, &rflow);
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4552 @ static int netif_rx_internal(struct sk_b
 		ret = enqueue_to_backlog(skb, cpu, &rflow->last_qtail);
 
 		rcu_read_unlock();
-		preempt_enable();
+		migrate_enable();
 	} else
 #endif
 	{
 		unsigned int qtail;
 
-		ret = enqueue_to_backlog(skb, get_cpu(), &qtail);
-		put_cpu();
+		ret = enqueue_to_backlog(skb, get_cpu_light(), &qtail);
+		put_cpu_light();
 	}
 	return ret;
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:4598 @ int netif_rx_ni(struct sk_buff *skb)
 
 	trace_netif_rx_ni_entry(skb);
 
-	preempt_disable();
+	local_bh_disable();
 	err = netif_rx_internal(skb);
-	if (local_softirq_pending())
-		do_softirq();
-	preempt_enable();
+	local_bh_enable();
 	trace_netif_rx_ni_exit(err);
 
 	return err;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:5341 @ static void flush_backlog(struct work_st
 	skb_queue_walk_safe(&sd->input_pkt_queue, skb, tmp) {
 		if (skb->dev->reg_state == NETREG_UNREGISTERING) {
 			__skb_unlink(skb, &sd->input_pkt_queue);
-			kfree_skb(skb);
+			__skb_queue_tail(&sd->tofree_queue, skb);
 			input_queue_head_incr(sd);
 		}
 	}
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:5351 @ static void flush_backlog(struct work_st
 	skb_queue_walk_safe(&sd->process_queue, skb, tmp) {
 		if (skb->dev->reg_state == NETREG_UNREGISTERING) {
 			__skb_unlink(skb, &sd->process_queue);
-			kfree_skb(skb);
+			__skb_queue_tail(&sd->tofree_queue, skb);
 			input_queue_head_incr(sd);
 		}
 	}
+	if (!skb_queue_empty(&sd->tofree_queue))
+		raise_softirq_irqoff(NET_RX_SOFTIRQ);
 	local_bh_enable();
+
 }
 
 static void flush_all_backlogs(void)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:5921 @ static void net_rps_action_and_irq_enabl
 		sd->rps_ipi_list = NULL;
 
 		local_irq_enable();
+		preempt_check_resched_rt();
 
 		/* Send pending IPI's to kick RPS processing on remote cpus. */
 		net_rps_send_ipi(remsd);
 	} else
 #endif
 		local_irq_enable();
+	preempt_check_resched_rt();
 }
 
 static bool sd_has_rps_ipi_waiting(struct softnet_data *sd)
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:5958 @ static int process_backlog(struct napi_s
 	while (again) {
 		struct sk_buff *skb;
 
+		local_irq_disable();
 		while ((skb = __skb_dequeue(&sd->process_queue))) {
+			local_irq_enable();
 			rcu_read_lock();
 			__netif_receive_skb(skb);
 			rcu_read_unlock();
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:5968 @ static int process_backlog(struct napi_s
 			if (++work >= quota)
 				return work;
 
+			local_irq_disable();
 		}
 
-		local_irq_disable();
 		rps_lock(sd);
 		if (skb_queue_empty(&sd->input_pkt_queue)) {
 			/*
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:6008 @ void __napi_schedule(struct napi_struct
 	local_irq_save(flags);
 	____napi_schedule(this_cpu_ptr(&softnet_data), n);
 	local_irq_restore(flags);
+	preempt_check_resched_rt();
 }
 EXPORT_SYMBOL(__napi_schedule);
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:6045 @ bool napi_schedule_prep(struct napi_stru
 }
 EXPORT_SYMBOL(napi_schedule_prep);
 
+#ifndef CONFIG_PREEMPT_RT_FULL
 /**
  * __napi_schedule_irqoff - schedule for receive
  * @n: entry to schedule
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:6057 @ void __napi_schedule_irqoff(struct napi_
 	____napi_schedule(this_cpu_ptr(&softnet_data), n);
 }
 EXPORT_SYMBOL(__napi_schedule_irqoff);
+#endif
 
 bool napi_complete_done(struct napi_struct *n, int work_done)
 {
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:6437 @ static __latent_entropy void net_rx_acti
 	unsigned long time_limit = jiffies +
 		usecs_to_jiffies(netdev_budget_usecs);
 	int budget = netdev_budget;
+	struct sk_buff_head tofree_q;
+	struct sk_buff *skb;
 	LIST_HEAD(list);
 	LIST_HEAD(repoll);
 
+	__skb_queue_head_init(&tofree_q);
+
 	local_irq_disable();
+	skb_queue_splice_init(&sd->tofree_queue, &tofree_q);
 	list_splice_init(&sd->poll_list, &list);
 	local_irq_enable();
 
+	while ((skb = __skb_dequeue(&tofree_q)))
+		kfree_skb(skb);
+
 	for (;;) {
 		struct napi_struct *n;
 
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:8521 @ static void netdev_init_one_queue(struct
 	/* Initialize queue lock */
 	spin_lock_init(&queue->_xmit_lock);
 	netdev_set_xmit_lockdep_class(&queue->_xmit_lock, dev->type);
-	queue->xmit_lock_owner = -1;
+	netdev_queue_clear_owner(queue);
 	netdev_queue_numa_node_write(queue, NUMA_NO_NODE);
 	queue->dev = dev;
 #ifdef CONFIG_BQL
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:9455 @ static int dev_cpu_dead(unsigned int old
 
 	raise_softirq_irqoff(NET_TX_SOFTIRQ);
 	local_irq_enable();
+	preempt_check_resched_rt();
 
 #ifdef CONFIG_RPS
 	remsd = oldsd->rps_ipi_list;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:9469 @ static int dev_cpu_dead(unsigned int old
 		netif_rx_ni(skb);
 		input_queue_head_incr(oldsd);
 	}
-	while ((skb = skb_dequeue(&oldsd->input_pkt_queue))) {
+	while ((skb = __skb_dequeue(&oldsd->input_pkt_queue))) {
 		netif_rx_ni(skb);
 		input_queue_head_incr(oldsd);
 	}
+	while ((skb = __skb_dequeue(&oldsd->tofree_queue))) {
+		kfree_skb(skb);
+	}
 
 	return 0;
 }
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:9784 @ static int __init net_dev_init(void)
 
 		INIT_WORK(flush, flush_backlog);
 
-		skb_queue_head_init(&sd->input_pkt_queue);
-		skb_queue_head_init(&sd->process_queue);
+		skb_queue_head_init_raw(&sd->input_pkt_queue);
+		skb_queue_head_init_raw(&sd->process_queue);
+		skb_queue_head_init_raw(&sd->tofree_queue);
 #ifdef CONFIG_XFRM_OFFLOAD
 		skb_queue_head_init(&sd->xfrm_backlog);
 #endif
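
In dev.c the devnet_rename_seq seqcount keeps its readers, but writers now also hold devnet_rename_mutex around the __raw_write_seqcount_begin()/end() pair, and a reader that observes a write in progress waits by taking and dropping that mutex instead of calling cond_resched(), so on RT it blocks on (and priority-boosts) a preempted writer rather than spinning. The sketch below reproduces the read/retry discipline with C11 atomics and a pthread mutex, purely as an illustration of the pattern, not the kernel seqcount implementation.

/*
 * Sketch of the seqcount-plus-writer-mutex pattern used for
 * devnet_rename_seq, written with C11 atomics and a pthread mutex.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <string.h>

static pthread_mutex_t rename_mutex = PTHREAD_MUTEX_INITIALIZER;
static atomic_uint rename_seq;		/* even: stable, odd: write in progress */
static char name[16] = "eth0";

static void change_name(const char *newname)
{
	pthread_mutex_lock(&rename_mutex);	/* serialize writers */
	atomic_fetch_add_explicit(&rename_seq, 1, memory_order_release);
	strncpy(name, newname, sizeof(name) - 1);
	name[sizeof(name) - 1] = '\0';
	atomic_fetch_add_explicit(&rename_seq, 1, memory_order_release);
	pthread_mutex_unlock(&rename_mutex);
}

static void get_name(char *buf, size_t len)
{
	unsigned int seq;

	/*
	 * Note: for brevity the name buffer is copied as plain memory; a
	 * strictly race-free C11 version would copy it with relaxed atomic
	 * accesses or have readers take the mutex as well.
	 */
	do {
		seq = atomic_load_explicit(&rename_seq, memory_order_acquire);
		if (seq & 1) {
			/*
			 * Writer in progress: wait for it by taking and
			 * dropping the writer's mutex, as the RT variant of
			 * netdev_get_name() does, instead of spinning.
			 */
			pthread_mutex_lock(&rename_mutex);
			pthread_mutex_unlock(&rename_mutex);
			continue;
		}
		strncpy(buf, name, len - 1);
		buf[len - 1] = '\0';
	} while (atomic_load_explicit(&rename_seq, memory_order_acquire) != seq);
}

int main(void)
{
	char buf[16];

	change_name("wan0");
	get_name(buf, sizeof(buf));
	printf("name is %s\n", buf);
	return 0;
}
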
Index: linux-5.0.19-rt11/net/core/filter.c
===================================================================
--- linux-5.0.19-rt11.orig/net/core/filter.c
+++ linux-5.0.19-rt11/net/core/filter.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2005 @ static inline int __bpf_tx_skb(struct ne
 {
 	int ret;
 
-	if (unlikely(__this_cpu_read(xmit_recursion) > XMIT_RECURSION_LIMIT)) {
+	if (unlikely(xmit_rec_read() > XMIT_RECURSION_LIMIT)) {
 		net_crit_ratelimited("bpf: recursion limit reached on datapath, buggy bpf program?\n");
 		kfree_skb(skb);
 		return -ENETDOWN;
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:2013 @ static inline int __bpf_tx_skb(struct ne
 
 	skb->dev = dev;
 
-	__this_cpu_inc(xmit_recursion);
+	xmit_rec_inc();
 	ret = dev_queue_xmit(skb);
-	__this_cpu_dec(xmit_recursion);
+	xmit_rec_dec();
 
 	return ret;
 }
Index: linux-5.0.19-rt11/net/core/gen_estimator.c
===================================================================
--- linux-5.0.19-rt11.orig/net/core/gen_estimator.c
+++ linux-5.0.19-rt11/net/core/gen_estimator.c
@ linux-5.0.19-rt11/Documentation/preempt-locking.txt:49 @
 struct net_rate_estimator {
 	struct gnet_stats_basic_packed	*bstats;
 	spinlock_t		*stats_lock;
-	seqcount_t		*running;
+	net_seqlock_t		*running;
 	struct gnet_stats_basic_cpu __percpu *cpu_bstats;
 	u8			ewma_log;
 	u8			intvl_log; /* period : (250ms << intvl_log) */
@ linux-5.0.19-rt11/net/core/gen_estimator.c:132 @ int gen_new_estimator(struct gnet_stats_
 		      struct gnet_stats_basic_cpu __percpu *cpu_bstats,
 		      struct net_rate_estimator __rcu **rate_est,
 		      spinlock_t *lock,
-		      seqcount_t *running,
+		      net_seqlock_t *running,
 		      struct nlattr *opt)
 {
 	struct gnet_estimator *parm = nla_data(opt);
@ linux-5.0.19-rt11/net/core/gen_estimator.c:230 @ int gen_replace_estimator(struct gnet_st
 			  struct gnet_stats_basic_cpu __percpu *cpu_bstats,
 			  struct net_rate_estimator __rcu **rate_est,
 			  spinlock_t *lock,
-			  seqcount_t *running, struct nlattr *opt)
+			  net_seqlock_t *running, struct nlattr *opt)
 {
 	return gen_new_estimator(bstats, cpu_bstats, rate_est,
 				 lock, running, opt);
Index: linux-5.0.19-rt11/net/core/gen_stats.c
===================================================================
--- linux-5.0.19-rt11.orig/net/core/gen_stats.c
+++ linux-5.0.19-rt11/net/core/gen_stats.c
@ linux-5.0.19-rt11/net/core/gen_stats.c:145 @ __gnet_stats_copy_basic_cpu(struct gnet_
 }
 
 void
-__gnet_stats_copy_basic(const seqcount_t *running,
+__gnet_stats_copy_basic(net_seqlock_t *running,
 			struct gnet_stats_basic_packed *bstats,
 			struct gnet_stats_basic_cpu __percpu *cpu,
 			struct gnet_stats_basic_packed *b)
@ linux-5.0.19-rt11/net/core/gen_stats.c:158 @ __gnet_stats_copy_basic(const seqcount_t
 	}
 	do {
 		if (running)
-			seq = read_seqcount_begin(running);
+			seq = net_seq_begin(running);
 		bstats->bytes = b->bytes;
 		bstats->packets = b->packets;
-	} while (running && read_seqcount_retry(running, seq));
+	} while (running && net_seq_retry(running, seq));
 }
 EXPORT_SYMBOL(__gnet_stats_copy_basic);
 
 static int
-___gnet_stats_copy_basic(const seqcount_t *running,
+___gnet_stats_copy_basic(net_seqlock_t *running,
 			 struct gnet_dump *d,
 			 struct gnet_stats_basic_cpu __percpu *cpu,
 			 struct gnet_stats_basic_packed *b,
@ linux-5.0.19-rt11/net/core/gen_stats.c:207 @ ___gnet_stats_copy_basic(const seqcount_
  * if the room in the socket buffer was not sufficient.
  */
 int
-gnet_stats_copy_basic(const seqcount_t *running,
+gnet_stats_copy_basic(net_seqlock_t *running,
 		      struct gnet_dump *d,
 		      struct gnet_stats_basic_cpu __percpu *cpu,
 		      struct gnet_stats_basic_packed *b)
@ linux-5.0.19-rt11/net/core/gen_stats.c:231 @ EXPORT_SYMBOL(gnet_stats_copy_basic);
  * if the room in the socket buffer was not sufficient.
  */
 int
-gnet_stats_copy_basic_hw(const seqcount_t *running,
+gnet_stats_copy_basic_hw(net_seqlock_t *running,
 			 struct gnet_dump *d,
 			 struct gnet_stats_basic_cpu __percpu *cpu,
 			 struct gnet_stats_basic_packed *b)
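
Note: the gen_estimator/gen_stats hunks above switch the bare seqcount_t to the
net_seqlock_t wrapper and its net_seq_begin()/net_seq_retry() helpers, which
are introduced elsewhere in this series, presumably so that on PREEMPT_RT the
write side holds a real lock and readers never spin against a preempted writer,
while on !RT it stays a plain sequence count. The reader loop itself is the
usual sequence-count retry pattern. Below is a minimal, self-contained
userspace sketch of that pattern only; the struct, function names and
memory-ordering choices are illustrative, not the kernel's.

	#include <stdatomic.h>
	#include <stdio.h>

	struct stats {
		atomic_uint seq;	/* odd while a writer is mid-update */
		unsigned long bytes;
		unsigned long packets;
	};

	static void writer_update(struct stats *s, unsigned long b, unsigned long p)
	{
		atomic_fetch_add_explicit(&s->seq, 1, memory_order_release);	/* odd */
		s->bytes = b;
		s->packets = p;
		atomic_fetch_add_explicit(&s->seq, 1, memory_order_release);	/* even */
	}

	static void reader_snapshot(struct stats *s, unsigned long *b, unsigned long *p)
	{
		unsigned int start;

		do {
			start = atomic_load_explicit(&s->seq, memory_order_acquire);
			*b = s->bytes;
			*p = s->packets;
			/* retry if a writer was active or finished meanwhile */
		} while ((start & 1) ||
			 atomic_load_explicit(&s->seq, memory_order_acquire) != start);
	}

	int main(void)
	{
		struct stats s;
		unsigned long b, p;

		atomic_init(&s.seq, 0);
		s.bytes = 0;
		s.packets = 0;

		writer_update(&s, 1500, 1);
		reader_snapshot(&s, &b, &p);
		printf("bytes=%lu packets=%lu\n", b, p);
		return 0;
	}
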
Index: linux-5.0.19-rt11/net/core/pktgen.c
===================================================================
--- linux-5.0.19-rt11.orig/net/core/pktgen.c
+++ linux-5.0.19-rt11/net/core/pktgen.c
@ linux-5.0.19-rt11/net/core/pktgen.c:2163 @ static void spin(struct pktgen_dev *pkt_
 	s64 remaining;
 	struct hrtimer_sleeper t;
 
-	hrtimer_init_on_stack(&t.timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
+	hrtimer_init_sleeper_on_stack(&t, CLOCK_MONOTONIC, HRTIMER_MODE_ABS,
+				      current);
 	hrtimer_set_expires(&t.timer, spin_until);
 
 	remaining = ktime_to_ns(hrtimer_expires_remaining(&t.timer));
@ linux-5.0.19-rt11/net/core/pktgen.c:2179 @ static void spin(struct pktgen_dev *pkt_
 		} while (ktime_compare(end_time, spin_until) < 0);
 	} else {
 		/* see do_nanosleep */
-		hrtimer_init_sleeper(&t, current);
 		do {
 			set_current_state(TASK_INTERRUPTIBLE);
 			hrtimer_start_expires(&t.timer, HRTIMER_MODE_ABS);
Index: linux-5.0.19-rt11/net/core/skbuff.c
===================================================================
--- linux-5.0.19-rt11.orig/net/core/skbuff.c
+++ linux-5.0.19-rt11/net/core/skbuff.c
@ linux-5.0.19-rt11/net/core/skbuff.c:66 @
 #include <linux/errqueue.h>
 #include <linux/prefetch.h>
 #include <linux/if_vlan.h>
+#include <linux/locallock.h>
 
 #include <net/protocol.h>
 #include <net/dst.h>
@ linux-5.0.19-rt11/net/core/skbuff.c:337 @ struct napi_alloc_cache {
 
 static DEFINE_PER_CPU(struct page_frag_cache, netdev_alloc_cache);
 static DEFINE_PER_CPU(struct napi_alloc_cache, napi_alloc_cache);
+static DEFINE_LOCAL_IRQ_LOCK(netdev_alloc_lock);
+static DEFINE_LOCAL_IRQ_LOCK(napi_alloc_cache_lock);
 
 static void *__netdev_alloc_frag(unsigned int fragsz, gfp_t gfp_mask)
 {
@ linux-5.0.19-rt11/net/core/skbuff.c:346 @ static void *__netdev_alloc_frag(unsigne
 	unsigned long flags;
 	void *data;
 
-	local_irq_save(flags);
+	local_lock_irqsave(netdev_alloc_lock, flags);
 	nc = this_cpu_ptr(&netdev_alloc_cache);
 	data = page_frag_alloc(nc, fragsz, gfp_mask);
-	local_irq_restore(flags);
+	local_unlock_irqrestore(netdev_alloc_lock, flags);
 	return data;
 }
 
@ linux-5.0.19-rt11/net/core/skbuff.c:370 @ EXPORT_SYMBOL(netdev_alloc_frag);
 
 static void *__napi_alloc_frag(unsigned int fragsz, gfp_t gfp_mask)
 {
-	struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
+	struct napi_alloc_cache *nc;
+	void *data;
 
-	return page_frag_alloc(&nc->page, fragsz, gfp_mask);
+	nc = &get_locked_var(napi_alloc_cache_lock, napi_alloc_cache);
+	data =  page_frag_alloc(&nc->page, fragsz, gfp_mask);
+	put_locked_var(napi_alloc_cache_lock, napi_alloc_cache);
+	return data;
 }
 
 void *napi_alloc_frag(unsigned int fragsz)
@ linux-5.0.19-rt11/net/core/skbuff.c:425 @ struct sk_buff *__netdev_alloc_skb(struc
 	if (sk_memalloc_socks())
 		gfp_mask |= __GFP_MEMALLOC;
 
-	local_irq_save(flags);
+	local_lock_irqsave(netdev_alloc_lock, flags);
 
 	nc = this_cpu_ptr(&netdev_alloc_cache);
 	data = page_frag_alloc(nc, len, gfp_mask);
 	pfmemalloc = nc->pfmemalloc;
 
-	local_irq_restore(flags);
+	local_unlock_irqrestore(netdev_alloc_lock, flags);
 
 	if (unlikely(!data))
 		return NULL;
@ linux-5.0.19-rt11/net/core/skbuff.c:472 @ EXPORT_SYMBOL(__netdev_alloc_skb);
 struct sk_buff *__napi_alloc_skb(struct napi_struct *napi, unsigned int len,
 				 gfp_t gfp_mask)
 {
-	struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
+	struct napi_alloc_cache *nc;
 	struct sk_buff *skb;
 	void *data;
+	bool pfmemalloc;
 
 	len += NET_SKB_PAD + NET_IP_ALIGN;
 
@ linux-5.0.19-rt11/net/core/skbuff.c:493 @ struct sk_buff *__napi_alloc_skb(struct
 	if (sk_memalloc_socks())
 		gfp_mask |= __GFP_MEMALLOC;
 
+	nc = &get_locked_var(napi_alloc_cache_lock, napi_alloc_cache);
 	data = page_frag_alloc(&nc->page, len, gfp_mask);
+	pfmemalloc = nc->page.pfmemalloc;
+	put_locked_var(napi_alloc_cache_lock, napi_alloc_cache);
 	if (unlikely(!data))
 		return NULL;
 
@ linux-5.0.19-rt11/net/core/skbuff.c:507 @ struct sk_buff *__napi_alloc_skb(struct
 	}
 
 	/* use OR instead of assignment to avoid clearing of bits in mask */
-	if (nc->page.pfmemalloc)
+	if (pfmemalloc)
 		skb->pfmemalloc = 1;
 	skb->head_frag = 1;
 
@ linux-5.0.19-rt11/net/core/skbuff.c:736 @ void __consume_stateless_skb(struct sk_b
 
 void __kfree_skb_flush(void)
 {
-	struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
+	struct napi_alloc_cache *nc;
 
+	nc = &get_locked_var(napi_alloc_cache_lock, napi_alloc_cache);
 	/* flush skb_cache if containing objects */
 	if (nc->skb_count) {
 		kmem_cache_free_bulk(skbuff_head_cache, nc->skb_count,
 				     nc->skb_cache);
 		nc->skb_count = 0;
 	}
+	put_locked_var(napi_alloc_cache_lock, napi_alloc_cache);
 }
 
 static inline void _kfree_skb_defer(struct sk_buff *skb)
 {
-	struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
+	struct napi_alloc_cache *nc;
 
 	/* drop skb->head and call any destructors for packet */
 	skb_release_all(skb);
 
+	nc = &get_locked_var(napi_alloc_cache_lock, napi_alloc_cache);
 	/* record skb to CPU local list */
 	nc->skb_cache[nc->skb_count++] = skb;
 
@ linux-5.0.19-rt11/net/core/skbuff.c:770 @ static inline void _kfree_skb_defer(stru
 				     nc->skb_cache);
 		nc->skb_count = 0;
 	}
+	put_locked_var(napi_alloc_cache_lock, napi_alloc_cache);
 }
 void __kfree_skb_defer(struct sk_buff *skb)
 {
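
Note: the skbuff.c hunks above replace local_irq_save()/this_cpu_ptr() around
the per-CPU frag and skb caches with local-lock helpers (locallock.h is added
earlier in this series). The point of the pattern is that every call site keeps
one "lock this CPU's data, use it, unlock" shape, while what the lock actually
is depends on the configuration: on !RT it can simply disable interrupts or
preemption, on RT it is a real per-CPU lock so the section stays preemptible.
The userspace analogue below only mimics that call-site shape; the names and
the pthread mutex are stand-ins, not the locallock.h code.

	#include <pthread.h>
	#include <stdio.h>

	#define NR_CPUS 4

	struct frag_cache {
		pthread_mutex_t lock;	/* stands in for the per-CPU local lock */
		int used;
	};

	static struct frag_cache cache[NR_CPUS];

	/* get_locked_var() analogue: lock "this CPU's" cache and hand it out */
	static struct frag_cache *get_locked_cache(int cpu)
	{
		pthread_mutex_lock(&cache[cpu].lock);
		return &cache[cpu];
	}

	/* put_locked_var() analogue */
	static void put_locked_cache(int cpu)
	{
		pthread_mutex_unlock(&cache[cpu].lock);
	}

	int main(void)
	{
		struct frag_cache *nc;
		int cpu;

		for (cpu = 0; cpu < NR_CPUS; cpu++)
			pthread_mutex_init(&cache[cpu].lock, NULL);

		/* call-site shape mirrors __napi_alloc_frag() above */
		nc = get_locked_cache(0);
		nc->used++;
		put_locked_cache(0);

		printf("cpu0 cache used %d times\n", cache[0].used);
		return 0;
	}
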
Index: linux-5.0.19-rt11/net/netfilter/core.c
===================================================================
--- linux-5.0.19-rt11.orig/net/netfilter/core.c
+++ linux-5.0.19-rt11/net/netfilter/core.c
@ linux-5.0.19-rt11/net/netfilter/core.c:23 @
 #include <linux/inetdevice.h>
 #include <linux/proc_fs.h>
 #include <linux/mutex.h>
+#include <linux/locallock.h>
 #include <linux/mm.h>
 #include <linux/rcupdate.h>
 #include <net/net_namespace.h>
@ linux-5.0.19-rt11/net/netfilter/core.c:31 @
 
 #include "nf_internals.h"
 
+#ifdef CONFIG_PREEMPT_RT_BASE
+DEFINE_LOCAL_IRQ_LOCK(xt_write_lock);
+EXPORT_PER_CPU_SYMBOL(xt_write_lock);
+#endif
+
 const struct nf_ipv6_ops __rcu *nf_ipv6_ops __read_mostly;
 EXPORT_SYMBOL_GPL(nf_ipv6_ops);
 
Index: linux-5.0.19-rt11/net/packet/af_packet.c
===================================================================
--- linux-5.0.19-rt11.orig/net/packet/af_packet.c
+++ linux-5.0.19-rt11/net/packet/af_packet.c
@ linux-5.0.19-rt11/net/packet/af_packet.c:66 @
 #include <linux/if_packet.h>
 #include <linux/wireless.h>
 #include <linux/kernel.h>
+#include <linux/delay.h>
 #include <linux/kmod.h>
 #include <linux/slab.h>
 #include <linux/vmalloc.h>
@ linux-5.0.19-rt11/net/packet/af_packet.c:671 @ static void prb_retire_rx_blk_timer_expi
 	if (BLOCK_NUM_PKTS(pbd)) {
 		while (atomic_read(&pkc->blk_fill_in_prog)) {
 			/* Waiting for skb_copy_bits to finish... */
-			cpu_relax();
+			cpu_chill();
 		}
 	}
 
@ linux-5.0.19-rt11/net/packet/af_packet.c:933 @ static void prb_retire_current_block(str
 		if (!(status & TP_STATUS_BLK_TMO)) {
 			while (atomic_read(&pkc->blk_fill_in_prog)) {
 				/* Waiting for skb_copy_bits to finish... */
-				cpu_relax();
+				cpu_chill();
 			}
 		}
 		prb_close_block(pkc, pbd, po, status);
Index: linux-5.0.19-rt11/net/rds/ib_rdma.c
===================================================================
--- linux-5.0.19-rt11.orig/net/rds/ib_rdma.c
+++ linux-5.0.19-rt11/net/rds/ib_rdma.c
@ linux-5.0.19-rt11/net/rds/ib_rdma.c:37 @
 #include <linux/slab.h>
 #include <linux/rculist.h>
 #include <linux/llist.h>
+#include <linux/delay.h>
 
 #include "rds_single_path.h"
 #include "ib_mr.h"
@ linux-5.0.19-rt11/net/rds/ib_rdma.c:226 @ static inline void wait_clean_list_grace
 	for_each_online_cpu(cpu) {
 		flag = &per_cpu(clean_list_grace, cpu);
 		while (test_bit(CLEAN_LIST_BUSY_BIT, flag))
-			cpu_relax();
+			cpu_chill();
 	}
 }
 
Index: linux-5.0.19-rt11/net/sched/sch_api.c
===================================================================
--- linux-5.0.19-rt11.orig/net/sched/sch_api.c
+++ linux-5.0.19-rt11/net/sched/sch_api.c
@ linux-5.0.19-rt11/net/sched/sch_api.c:1245 @ static struct Qdisc *qdisc_create(struct
 		rcu_assign_pointer(sch->stab, stab);
 	}
 	if (tca[TCA_RATE]) {
-		seqcount_t *running;
+		net_seqlock_t *running;
 
 		err = -EOPNOTSUPP;
 		if (sch->flags & TCQ_F_MQROOT) {
Index: linux-5.0.19-rt11/net/sched/sch_generic.c
===================================================================
--- linux-5.0.19-rt11.orig/net/sched/sch_generic.c
+++ linux-5.0.19-rt11/net/sched/sch_generic.c
@ linux-5.0.19-rt11/net/sched/sch_generic.c:573 @ struct Qdisc noop_qdisc = {
 	.ops		=	&noop_qdisc_ops,
 	.q.lock		=	__SPIN_LOCK_UNLOCKED(noop_qdisc.q.lock),
 	.dev_queue	=	&noop_netdev_queue,
+#ifdef CONFIG_PREEMPT_RT_BASE
+	.running	=	__SEQLOCK_UNLOCKED(noop_qdisc.running),
+#else
 	.running	=	SEQCNT_ZERO(noop_qdisc.running),
+#endif
 	.busylock	=	__SPIN_LOCK_UNLOCKED(noop_qdisc.busylock),
 	.gso_skb = {
 		.next = (struct sk_buff *)&noop_qdisc.gso_skb,
@ linux-5.0.19-rt11/net/sched/sch_generic.c:878 @ struct Qdisc *qdisc_alloc(struct netdev_
 	lockdep_set_class(&sch->busylock,
 			  dev->qdisc_tx_busylock ?: &qdisc_tx_busylock);
 
+#ifdef CONFIG_PREEMPT_RT_BASE
+	seqlock_init(&sch->running);
+	lockdep_set_class(&sch->running.seqcount,
+			  dev->qdisc_running_key ?: &qdisc_running_key);
+	lockdep_set_class(&sch->running.lock,
+			  dev->qdisc_running_key ?: &qdisc_running_key);
+#else
 	seqcount_init(&sch->running);
 	lockdep_set_class(&sch->running,
 			  dev->qdisc_running_key ?: &qdisc_running_key);
+#endif
 
 	sch->ops = ops;
 	sch->flags = ops->static_flags;
@ linux-5.0.19-rt11/net/sched/sch_generic.c:1238 @ void dev_deactivate_many(struct list_hea
 	/* Wait for outstanding qdisc_run calls. */
 	list_for_each_entry(dev, head, close_list) {
 		while (some_qdisc_is_busy(dev))
-			yield();
+			msleep(1);
 		/* The new qdisc is assigned at this point so we can safely
 		 * unwind stale skb lists and qdisc statistics
 		 */
Index: linux-5.0.19-rt11/net/sunrpc/svc_xprt.c
===================================================================
--- linux-5.0.19-rt11.orig/net/sunrpc/svc_xprt.c
+++ linux-5.0.19-rt11/net/sunrpc/svc_xprt.c
@ linux-5.0.19-rt11/net/sunrpc/svc_xprt.c:395 @ void svc_xprt_do_enqueue(struct svc_xprt
 	if (test_and_set_bit(XPT_BUSY, &xprt->xpt_flags))
 		return;
 
-	cpu = get_cpu();
+	cpu = get_cpu_light();
 	pool = svc_pool_for_cpu(xprt->xpt_server, cpu);
 
 	atomic_long_inc(&pool->sp_stats.packets);
@ linux-5.0.19-rt11/net/sunrpc/svc_xprt.c:419 @ void svc_xprt_do_enqueue(struct svc_xprt
 	rqstp = NULL;
 out_unlock:
 	rcu_read_unlock();
-	put_cpu();
+	put_cpu_light();
 	trace_svc_xprt_do_enqueue(xprt, rqstp);
 }
 EXPORT_SYMBOL_GPL(svc_xprt_do_enqueue);
Index: linux-5.0.19-rt11/samples/trace_events/trace-events-sample.c
===================================================================
--- linux-5.0.19-rt11.orig/samples/trace_events/trace-events-sample.c
+++ linux-5.0.19-rt11/samples/trace_events/trace-events-sample.c
@ linux-5.0.19-rt11/samples/trace_events/trace-events-sample.c:36 @ static void simple_thread_func(int cnt)
 
 	/* Silly tracepoints */
 	trace_foo_bar("hello", cnt, array, random_strings[len],
-		      &current->cpus_allowed);
+		      current->cpus_ptr);
 
 	trace_foo_with_template_simple("HELLO", cnt);
 
Index: linux-5.0.19-rt11/scripts/mkcompile_h
===================================================================
--- linux-5.0.19-rt11.orig/scripts/mkcompile_h
+++ linux-5.0.19-rt11/scripts/mkcompile_h
@ linux-5.0.19-rt11/scripts/mkcompile_h:8 @ TARGET=$1
 ARCH=$2
 SMP=$3
 PREEMPT=$4
-CC=$5
+RT=$5
+CC=$6
 
 vecho() { [ "${quiet}" = "silent_" ] || echo "$@" ; }
 
@ linux-5.0.19-rt11/scripts/mkcompile_h:57 @ UTS_VERSION="#$VERSION"
 CONFIG_FLAGS=""
 if [ -n "$SMP" ] ; then CONFIG_FLAGS="SMP"; fi
 if [ -n "$PREEMPT" ] ; then CONFIG_FLAGS="$CONFIG_FLAGS PREEMPT"; fi
+if [ -n "$RT" ] ; then CONFIG_FLAGS="$CONFIG_FLAGS RT"; fi
 UTS_VERSION="$UTS_VERSION $CONFIG_FLAGS $TIMESTAMP"
 
 # Truncate to maximum length
Index: linux-5.0.19-rt11/security/apparmor/include/path.h
===================================================================
--- linux-5.0.19-rt11.orig/security/apparmor/include/path.h
+++ linux-5.0.19-rt11/security/apparmor/include/path.h
@ linux-5.0.19-rt11/security/apparmor/include/path.h:43 @ struct aa_buffers {
 
 #include <linux/percpu.h>
 #include <linux/preempt.h>
+#include <linux/locallock.h>
 
 DECLARE_PER_CPU(struct aa_buffers, aa_buffers);
+DECLARE_LOCAL_IRQ_LOCK(aa_buffers_lock);
 
 #define ASSIGN(FN, A, X, N) ((X) = FN(A, N))
 #define EVAL1(FN, A, X) ASSIGN(FN, A, X, 0) /*X = FN(0)*/
@ linux-5.0.19-rt11/security/apparmor/include/path.h:56 @ DECLARE_PER_CPU(struct aa_buffers, aa_bu
 
 #define for_each_cpu_buffer(I) for ((I) = 0; (I) < MAX_PATH_BUFFERS; (I)++)
 
-#ifdef CONFIG_DEBUG_PREEMPT
+#ifdef CONFIG_PREEMPT_RT_BASE
+static inline void AA_BUG_PREEMPT_ENABLED(const char *s)
+{
+	struct local_irq_lock *lv;
+
+	lv = this_cpu_ptr(&aa_buffers_lock);
+	WARN_ONCE(lv->owner != current,
+		  "__get_buffer without aa_buffers_lock\n");
+}
+
+#elif defined(CONFIG_DEBUG_PREEMPT)
 #define AA_BUG_PREEMPT_ENABLED(X) AA_BUG(preempt_count() <= 0, X)
 #else
 #define AA_BUG_PREEMPT_ENABLED(X) /* nop */
@ linux-5.0.19-rt11/security/apparmor/include/path.h:82 @ DECLARE_PER_CPU(struct aa_buffers, aa_bu
 
 #define get_buffers(X...)						\
 do {									\
-	struct aa_buffers *__cpu_var = get_cpu_ptr(&aa_buffers);	\
+	struct aa_buffers *__cpu_var;					\
+	__cpu_var = get_locked_ptr(aa_buffers_lock, &aa_buffers);	\
 	__get_buffers(__cpu_var, X);					\
 } while (0)
 
 #define put_buffers(X, Y...)		\
 do {					\
 	__put_buffers(X, Y);		\
-	put_cpu_ptr(&aa_buffers);	\
+	put_locked_ptr(aa_buffers_lock, &aa_buffers);	\
 } while (0)
 
 #endif /* __AA_PATH_H */
Index: linux-5.0.19-rt11/security/apparmor/lsm.c
===================================================================
--- linux-5.0.19-rt11.orig/security/apparmor/lsm.c
+++ linux-5.0.19-rt11/security/apparmor/lsm.c
@ linux-5.0.19-rt11/security/apparmor/lsm.c:51 @
 int apparmor_initialized;
 
 DEFINE_PER_CPU(struct aa_buffers, aa_buffers);
-
+DEFINE_LOCAL_IRQ_LOCK(aa_buffers_lock);
 
 /*
  * LSM hook functions
Index: linux-5.0.19-rt11/virt/kvm/arm/arm.c
===================================================================
--- linux-5.0.19-rt11.orig/virt/kvm/arm/arm.c
+++ linux-5.0.19-rt11/virt/kvm/arm/arm.c
@ linux-5.0.19-rt11/virt/kvm/arm/arm.c:712 @ int kvm_arch_vcpu_ioctl_run(struct kvm_v
 		 * involves poking the GIC, which must be done in a
 		 * non-preemptible context.
 		 */
-		preempt_disable();
+		migrate_disable();
 
 		kvm_pmu_flush_hwstate(vcpu);
 
@ linux-5.0.19-rt11/virt/kvm/arm/arm.c:761 @ int kvm_arch_vcpu_ioctl_run(struct kvm_v
 				kvm_timer_sync_hwstate(vcpu);
 			kvm_vgic_sync_hwstate(vcpu);
 			local_irq_enable();
-			preempt_enable();
+			migrate_enable();
 			continue;
 		}
 
@ linux-5.0.19-rt11/virt/kvm/arm/arm.c:839 @ int kvm_arch_vcpu_ioctl_run(struct kvm_v
 		/* Exit types that need handling before we can be preempted */
 		handle_exit_early(vcpu, run, ret);
 
-		preempt_enable();
+		migrate_enable();
 
 		ret = handle_exit(vcpu, run, ret);
 	}