Real Time Linux Workshops

13th Real-Time Linux Workshop, October 20 to 22, 2011, at the Faculty of Electrical Engineering, Czech Technical University in Prague

Experience with Sporadic Server Scheduling in Linux: Theory vs. Practice

Mark Stanovich, Florida State University, USA
Theodore Baker, Florida State University, USA
An-I Wang, Florida State University, USA

Aperiodic server algorithms were devised to schedule the execution of threads that serve a stream of jobs whose exact arrival and execution times are not known a priori, in a way that supports schedulability analysis. Well-known examples of such algorithms include the periodic polling server, deferrable server, sporadic server, and constant bandwidth server.

The primary goal of an aperiodic server scheduling algorithm is to enforce a demand bound for each thread – that is, an upper bound on the amount of CPU time the thread may request in any time interval of a given length. This demand bound determines a lower bound on the amount of CPU time that is guaranteed to remain for other threads.  Enforcement of such temporal isolation is an essential requirement for guaranteed resource reservations and compositional schedulability analysis in open real-time systems. A secondary goal of an aperiodic server is to minimize the worst-case and/or average response time while enforcing the demand bound. The theoretical aperiodic server algorithms meet both goals to varying degrees.
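
For concreteness, one standard way to write such a bound (the symbols Q, T, and Delta are ours, not the paper's): a server granted a budget Q per replenishment period T is commonly analyzed like a periodic task with the same parameters, and a back-to-back-safe form of its demand bound over any time window of length Delta >= Q is, in LaTeX notation:

    \mathrm{dbf}(\Delta) \;\le\; Q + Q \left\lceil \frac{\Delta - Q}{T} \right\rceil

Temporal isolation then guarantees the remaining threads at least \Delta - \mathrm{dbf}(\Delta) units of CPU time in any such window.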

An implementation of an aperiodic server can perform significantly worse than its theoretical counterpart, and can even fail to enforce temporal isolation, due to factors that are absent from or not considered in the theoretical algorithm. These factors include context-switching overheads, preemption delays (e.g., overruns), and replenishment limits (e.g., the limited data structures available for bookkeeping).
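
As a rough, illustrative calculation of the first factor (the quantities below are ours, not the paper's): suppose a server's budget Q is consumed in n separate slices, each incurring a context-switch cost of roughly \delta on entry and \delta on exit. If that overhead is not charged to the server, the interference seen by lower-priority tasks in one period grows from Q to about

    Q + 2 n \delta

whereas if it is charged to the server, the useful work the server can deliver shrinks to about Q - 2 n \delta. Either way, the more finely the budget is sliced, the further the implementation drifts from the theoretical algorithm.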

This paper reports our experience implementing, in Linux, a number of different variations on the original Sporadic Server scheduling algorithm proposed by Sprunt, Sha, and Lehoczky. We chose to work with sporadic server scheduling because it fits well into the traditional Unix priority model, and is the only scheduling policy recognized by the Unix/POSIX standard that provides the temporal isolation property. While this paper only considers sporadic server, several of the lessons learned extend to other aperiodic servers including those based on deadline scheduling.
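
To make the POSIX interface concrete, the sketch below shows how a sporadic-server thread would be configured through the standard (optional) SCHED_SPORADIC policy and the sched_ss_* fields of struct sched_param. POSIX defines these only where the Thread Sporadic Server option (_POSIX_THREAD_SPORADIC_SERVER) is supported; mainline Linux does not provide it, which is precisely why the paper describes its own implementation. The parameter values below are arbitrary examples, not values from the paper.

    /* Hedged sketch: configure a thread under the optional POSIX
     * SCHED_SPORADIC policy. Falls back to a message on systems
     * (such as mainline Linux) that lack _POSIX_THREAD_SPORADIC_SERVER. */
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    static void *server_thread(void *arg)
    {
        (void)arg;
        /* Serve the incoming stream of jobs here. */
        return NULL;
    }

    int main(void)
    {
    #if defined(_POSIX_THREAD_SPORADIC_SERVER) && _POSIX_THREAD_SPORADIC_SERVER > 0
        pthread_attr_t attr;
        struct sched_param sp = { 0 };

        sp.sched_priority        = 50;  /* priority while budget remains (example value)   */
        sp.sched_ss_low_priority = 10;  /* background priority once the budget is exhausted */
        sp.sched_ss_init_budget  = (struct timespec){ .tv_sec = 0, .tv_nsec = 5000000 };   /* 5 ms budget */
        sp.sched_ss_repl_period  = (struct timespec){ .tv_sec = 0, .tv_nsec = 50000000 };  /* 50 ms replenishment period */
        sp.sched_ss_max_repl     = 8;   /* cap on pending replenishment operations */

        pthread_attr_init(&attr);
        pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
        pthread_attr_setschedpolicy(&attr, SCHED_SPORADIC);
        pthread_attr_setschedparam(&attr, &sp);

        pthread_t tid;
        if (pthread_create(&tid, &attr, server_thread, NULL) != 0)
            perror("pthread_create");
        else
            pthread_join(tid, NULL);
    #else
        puts("SCHED_SPORADIC is not available on this system");
    #endif
        return 0;
    }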

Through our Linux implementation experience, we show that an implemented sporadic server can perform worse than simpler, more naive aperiodic servers such as the polling server. To overcome the deficiencies of an implemented sporadic server, we propose and demonstrate several techniques that bring its performance closer to that of the theoretical sporadic server algorithm. In particular, we demonstrate that a sporadic server implementation differs from the theoretical model in its inability to divide CPU time into infinitely small slices and to use those time slices with no overhead. To address this implementation constraint, our approaches limit the degree to which CPU time can be divided, thereby increasing the amount of effective work the server can perform within its allocated CPU time budget while bounding the short- and long-term impact experienced by other tasks on the system. To address overruns, we provide techniques that limit their impact on other tasks. Finally, we consider improvements to sporadic server's replenishment mechanism. Given that supporting an unbounded number of replenishments is infeasible, our approach uses a fixed maximum number of replenishments that maintains the bound on interference while improving the likelihood that the server will provide CPU time in more continuous allocations.
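
As an illustration of the last point (our own sketch, not the authors' code, with all names and constants invented for the example): one way to cap replenishment bookkeeping is to keep at most a fixed number of pending replenishments and, when the list is full, merge a new replenishment into the most recent entry at the later of the two replenishment times. Deferring, and never advancing, replenishment times cannot increase the server's demand in any window, so the interference bound is preserved, while the merged amounts tend to be delivered in larger, more continuous chunks.

    /* Sketch of fixed-size replenishment bookkeeping with coalescing
     * (illustrative only; not the paper's implementation). */
    #include <stdio.h>

    #define MAX_REPL 4

    struct replenishment {
        long long time_ns;    /* when this amount of budget becomes available again */
        long long amount_ns;  /* how much budget is restored at that time           */
    };

    struct sporadic_server {
        struct replenishment repl[MAX_REPL];
        int nr_repl;
    };

    /* Record budget consumed during an activation that began at act_time_ns;
     * the replenishment is due one period after the activation time. */
    static void post_replenishment(struct sporadic_server *ss,
                                   long long act_time_ns,
                                   long long consumed_ns,
                                   long long period_ns)
    {
        long long repl_time = act_time_ns + period_ns;

        if (ss->nr_repl < MAX_REPL) {
            ss->repl[ss->nr_repl].time_ns   = repl_time;
            ss->repl[ss->nr_repl].amount_ns = consumed_ns;
            ss->nr_repl++;
        } else {
            /* List full: coalesce into the most recent entry, at the later
             * of the two times, so budget is only ever deferred. */
            struct replenishment *last = &ss->repl[ss->nr_repl - 1];
            if (repl_time > last->time_ns)
                last->time_ns = repl_time;
            last->amount_ns += consumed_ns;
        }
    }

    int main(void)
    {
        struct sporadic_server ss = { .nr_repl = 0 };
        long long period_ns = 50000000;  /* 50 ms replenishment period */

        /* Six small consumptions of 0.5 ms each, 1 ms apart: the last two
         * are merged into the fourth (final) replenishment entry. */
        for (int i = 0; i < 6; i++)
            post_replenishment(&ss, i * 1000000LL, 500000, period_ns);

        for (int i = 0; i < ss.nr_repl; i++)
            printf("replenishment %d: t = %lld ns, amount = %lld ns\n",
                   i, ss.repl[i].time_ns, ss.repl[i].amount_ns);
        return 0;
    }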

Through a network packet service example, we show that sporadic server scheduling can be used effectively to bound CPU time consumption. Further, the service that jobs receive from the sporadic server can be improved in terms of both reduced average response time and increased throughput.