{
"responseHeader":{
"status":0,
"QTime":0,
"params":{
"q":"{!q.op=AND}id:\"708005\"",
"fq":"!embargo_i:1",
"wt":"json"}},
"response":{"numFound":1,"start":0,"numFoundExact":true,"docs":[
{
"thumb_s":"/34/01/3401abdd99f3d4a5ec18e91a0bb14f489055758d.jpg",
"description_t":"This paper introduces and shows how to schedule two novel scheduling abstractions that overcome limitations of existing work on preemption threshold scheduling. The abstractions are task clusters, groups of tasks that are mutually non-preemptible by design, and task barriers, which partition the task set into subsets that must be mapped to different threads. Barriers prevent the preemption threshold logic that runs multiple design-time tasks in the same run-time thread from violating architectural constraints, e.g. by merging an interrupt handler and a user-level thread. We show that the preemption threshold logic for mapping tasks to as few threads as possible can rule out the schedules with the highest critical scaling factors - these schedules are the least likely to miss deadlines under timing faults. We have developed a framework for robust CPU scheduling and three novel algorithms: an optimal algorithm for maximizing the critical scaling factor of a task set under restricted conditions, a more generally applicable heuristic that finds schedules with approximately maximal critical scaling factors, and a heuristic search that jointly maximizes the critical scaling factor of computed schedules and minimizes the number of threads required to run a task set. We demonstrate that our techniques for robust scheduling are applicable in a wide variety of situations where static priority scheduling is used.",
"metadata_cataloger_t":"CLR",
"restricted_i":0,
"rights_management_t":"(c) 2002 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.",
"ark_t":"ark:/87278/s64j0zwh",
"identifier_t":"uspace,17505",
"creator_t":"Regehr, John",
"parent_i":0,
"format_medium_t":"application/pdf",
"first_page_t":"1",
"publisher_t":"Institute of Electrical and Electronics Engineers (IEEE)",
"file_s":"/d1/e5/d1e5bd950a6a8c775a1e1d1befcf63d466e8a1da.pdf",
"date_t":"2002-01-01",
"type_t":"Text",
"created_tdt":"2012-08-01T00:00:00Z",
"publication_type_t":"Journal Article",
"format_extent_t":"146,789 bytes",
"mass_i":1515011812,
"title_t":"Scheduling Tasks with mixed preemption relations for robustness to timing faults",
"setname_s":"ir_uspace",
"department_t":"Computing, School of",
"bibliographic_citation_t":"Regehr, J. (2002). Scheduling Tasks with mixed preemption relations for robustness to timing faults. In Proceedings of the 23rd IEEE Real-Time Systems Symposium (RTSS 2002), 1-12. December 3-5.",
"dissertation_institution_t":"University of Utah",
"language_t":"eng",
"id":708005,
"oldid_t":"uspace 5935",
"format_t":"application/pdf",
"modified_tdt":"2021-05-06T23:38:40Z",
"last_page_t":"12",
"school_or_college_t":"College of Engineering",
"filesize_i":146789,
"_version_":1699754372151377920,
"ocr_t":"In Proceedings of the 23rd IEEE Real-Time Systems Symposium (RTSS 2002), pages 315-326, Austin, TX, December 3-5 2002. c 2002 IEEE. Scheduling Tasks with Mixed Preemption Relations for Robustness to Timing Faults John Regehr School of Computing, University of Utah regehr@cs.utah.edu Abstract This paper introduces and shows how to schedule two novel scheduling abstractions that overcome limitations of existing work on preemption threshold scheduling. The ab-stractions are task clusters, groups of tasks that are mutu-ally non-preemptible by design, and task barriers, which partition the task set into subsets that must be mapped to different threads. Barriers prevent the preemption threshold logic that runs multiple design-time tasks in the same run-time thread from violating architectural constraints, e.g. by merging an interrupt handler and a user-level thread. We show that the preemption threshold logic for mapping tasks to as few threads as possible can rule out the schedules with the highest critical scaling factors - these schedules are the least likely to miss deadlines under timing faults. We have developed a framework for robust CPU schedul-ing and three novel algorithms: an optimal algorithm for maximizing the critical scaling factor of a task set under restricted conditions, a more generally applicable heuris-tic that finds schedules with approximately maximal criti-cal scaling factors, and a heuristic search that jointly maxi-mizes the critical scaling factor of computed schedules and minimizes the number of threads required to run a task set. We demonstrate that our techniques for robust scheduling are applicable in a wide variety of situations where static priority scheduling is used. 1. Introduction To rapidly create reliable, reusable embedded and real-time systems software, it is important to begin with the right abstractions. 
This paper describes two novel abstractions that overcome limitations of existing work on preemption threshold scheduling [20, 26]. Saksena and Wang showed that task sets scheduled with preemption thresholds can have significant schedulability improvements over task sets using fixed priorities. They also showed that it is possible to transform a design model consisting of a feasible set of real-time tasks into a feasible implementation model containing (often significantly) fewer threads than the task set contains tasks. These implementation models use less memory and cause fewer context switches than the standard implementation model that maps each task to its own thread. Although the introduction of non-preemptive scheduling is a useful performance optimization, the developers of real-time systems are still presented with a problem that is known to be very difficult: creating correct and predictable software in the presence of concurrent, synchronizing tasks. This paper introduces the task cluster, a collection of real-time tasks that are designed to be mutually non-preemptible. A task cluster could be used, for example, to embed a Click [15] graph in a real-time system. Click is an architecture for creating flexible routing software; in a distributed real-time system its components would have real-time requirements. Click gains significant ease of use through its restricted programming model, including the restriction that its components cannot preempt each other. Task clusters provide a natural way to express this requirement without forcing a system to be globally non-preemptive, which can result in a serious loss of effective processor utilization. The preemption threshold model provides no way to express architectural constraints that rule out some implementation models. For example, it cannot ensure that a task representing an interrupt handler is not mapped to the same implementation thread as a task representing user-level code. 
Our second new abstraction, the task barrier, provides first-class support for this type of constraint. Abstractly speaking, it might be useful to transparently migrate code into interrupt handlers when this does not impact schedulability, but in practice this is probably undesirable: the migrated code might attempt to access a blocking resource, crashing the system. We describe task clusters and barriers in Section 2; in Section 3 we show how to find feasible schedules for task sets containing them, both for operating systems with support for preemption threshold scheduling and for systems that have purely static priorities at run time. The potential to create implementation models with many fewer threads than tasks begs the question: Is a schedule with fewer threads always better than a schedule with more threads? We found that if there is uncertainty about task worst-case execution times (WCETs), the answer is no. This is because the additional constraints on schedules with fewer threads can rule out the schedules with the highest critical scaling factors [16]. The critical scaling factor is the largest constant by which the execution time of each task can be multiplied without rendering a schedule infeasible. Intuitively, a schedule with critical scaling factor not much larger than one is \"barely feasible\" in the sense that a minor perturbation could cause it to miss deadlines - this may not be acceptable for important real-time systems. We have developed a framework for reasoning about the robustness of task sets that are subject to timing faults, and we have developed new algorithms for finding highly robust schedules including a search algorithm for jointly minimizing the number of threads in an implementation model and maximizing the critical scaling factor among schedules mapping to a particular number of threads. 
This algorithm can permit system developers to make an informed trade-off between memory use and robustness when picking an implementation model; it is described in Section 4. Furthermore, our algorithms for increasing the critical scaling factor have broader applicability than just to systems using preemption thresholds. For example, for randomly generated fully preemptible task sets with five members and an average utilization of 0.78, Audsley's optimal scheduling algorithm [2] creates schedules that, on average, permit a 10% increase in task execution time before deadlines start to be missed. One of our algorithms finds schedules that, on average, tolerate an 18% increase without missing any deadlines. Section 5 contains an evaluation of the new algorithms presented in this paper. In Appendix A we correct an error in the existing response time analysis for task sets with preemption thresholds. We also extend the analysis to support tasks with release jitter. 2. Two New Scheduling Abstractions This section formally defines task clusters and task barriers, but first provides an overview of the scheduling model and its notation, as well as some background on preemption threshold scheduling. Our work begins with a standard task model: tasks are scheduled on a uniprocessor and have deadlines, periods, worst-case execution times, and jitter. Throughout this paper we assume that tasks are scheduled using fixed priorities (or almost-fixed priorities, since we make use of preemption thresholds). Furthermore, we always make a distinction between tasks, which are design-time entities with real-time requirements, and threads, which are run-time control flows scheduled by the RTOS (real-time operating system). A real-time task set is T = {τ0, ..., τn−1} where τi = (Ti, Ci, Di, Ji). The elements of a task tuple respectively represent the period, worst-case execution time, deadline, and maximum release jitter. 
Jitter, as defined by Tindell [24, §4], \"occurs when the worst-case time between successive releases of a task is shorter than the worst-case time between arrivals of a task.\" When there is no danger of ambiguity we write ∀i rather than ∀i ∈ T. A schedule for T is a set of priority and preemption threshold assignments P = {(P0, PT0), ..., (Pn−1, PTn−1)}. Zero is the highest priority, and throughout this paper we reverse the sense of the ordering relations for priorities so they have the intuitive meaning (e.g. x > y means x has higher priority than y) rather than the numerical meaning. Priorities are assumed to be assigned uniquely, and so the following predicate must hold for a schedule to be valid: ΦU ≝ ∀i, j : i ≠ j ⇒ Pi ≠ Pj. Preemption thresholds are not assigned uniquely, but the preemption threshold of each task must be at least as high as its priority. Therefore, the following predicate is also true of valid schedules: ΦP ≝ ∀i : PTi ≥ Pi. Finally, for a given schedule, each task in a set has a worst-case response time Ri. Valid schedules must not permit a task to complete after its deadline: ΦS ≝ ∀i : Ri ≤ Di. 2.1. Background: Preemption Thresholds Preemption thresholds were introduced in ThreadX [9], a commercial RTOS, and were first studied academically by Saksena and Wang [20, 26] who developed a response time analysis and a number of useful associated algorithms. The idea behind preemption thresholds is simple. Task instances compete for processor time based on their priorities, but a task instance that has started running may only be preempted by tasks with priorities higher than the running task's preemption threshold. 
Preemption threshold scheduling subsumes both preemptive and non-preemptive fixed priority scheduling: purely preemptive scheduling behavior is obtained when each task's preemption threshold is equal to its priority, and purely non-preemptive behavior is obtained when all preemption thresholds are set to the maximum priority. The dominance of preemption threshold scheduling is not merely theoretical; it has been shown to improve schedulability in practice [20]. Intuitively, the source of the improvement is the addition of a limited form of dynamic priorities: if rate-monotonic scheduling is viewed as a first-order approximation to optimal dynamic priority scheduling, then preemption threshold scheduling can be viewed as a second-order approximation. There is no known optimal algorithm for finding a feasible assignment of priorities and preemption thresholds that takes less than exponential time. However, efficient approximate algorithms exist. Once a feasible assignment of priorities and thresholds for a given task set is found, Saksena and Wang provide efficient algorithms for assigning maximal preemption thresholds (the largest threshold assignment for each task such that all tasks remain schedulable), and also for optimally dividing a task set into non-preemptible groups. Two tasks are mutually non-preemptible if the priority of the first is not higher than the preemption threshold of the second, and vice versa. A non-preemptible group is a collection of tasks within which each pair is non-preemptible; a non-preemptible group of tasks can be run in a single run-time thread. An implementation model is a set of threads, each of which is responsible for running some non-empty set of tasks from T. Let M = {M0, ..., Mm−1} be the sets of design-time tasks that map to each of the m implementation threads. Given a feasible schedule Saksena and Wang [20, Fig. 
3] provide an algorithm that can be used to find a corresponding implementation model that is schedulable and satisfies the following predicate: ΦM ≝ ∀Mi ∈ M : ∀j, k ∈ Mi : PTj ≥ Pk. Saksena and Wang showed that for task sets with random attributes, the number of maximal non-preemptible groups increases much more slowly than the number of tasks: this has important implications for memory-limited embedded systems since each thread has significant memory overhead. 2.2. Task Clusters A task cluster is a subset of a task set within which each pair of tasks must be mutually non-preemptible. Task clusters are different than Saksena and Wang's non-preemptible task groups: the latter are used as a performance optimization while the former are a first-class part of the programming model. In other words, task clusters are visible to, and can be specified by, real-time system developers. Task sets containing clusters have the important benefit of often permitting the high resource utilizations associated with preemptive scheduling while also permitting the ease of programming that comes from non-preemptive scheduling. Furthermore, task clusters facilitate synchronization elimination: the removal of locks that have become superfluous because the resources they protect are only accessed by tasks within a single cluster. When synchronization elimination is applied retroactively, it is an optimization that does not provide software engineering benefits. Rather, it merely eliminates the CPU overhead of acquiring and releasing locks, and the memory overhead of functions supporting, e.g., the priority ceiling protocol. 
On the other hand, if synchronization elimination is applied at design time it potentially has enormous software engineering benefits: developers, who are often domain experts rather than skilled concurrent system programmers, can completely ignore the dangers of race conditions and deadlocks with respect to resources that are accessed within a single task cluster. A task set is augmented with a set G of task clusters, where Gi ⊆ T. Valid schedules for T must satisfy: ΦG ≝ ∀Gi ∈ G : ∃Mj ∈ M : Gi ⊆ Mj. Task clusters can have overlapping membership, and not every task need belong to a cluster. If a task cluster Gi = T exists, the only valid schedules will be fully non-preemptive. 2.3. Task Barriers Priority and preemption relations are often hardwired into the design of a system. To support these relations we require an additional abstraction, the task barrier, which is the dual of the task cluster: it isolates groups of tasks that inherently run at different priorities, preventing the thread minimization logic from creating an impossible schedule. A task set T is augmented with a set X ⊆ {0, ..., n−1} of task barriers where valid schedules must satisfy: ΦX ≝ ∀x ∈ X : ∀i : (i > x ⇒ (Pi < x ∧ PTi < x)) ∧ (i ≤ x ⇒ (Pi ≥ x ∧ PTi ≥ x)). For example, a task barrier at y forces tasks 0..y to have both priority and threshold at least as high as y, while tasks y+1..n must have priority and threshold lower than y. Clearly there can be no feasible schedule if there exists a task barrier that \"splits\" a task cluster. We use task barriers to model the inherent relationships between implementation artifacts such as interrupts, bottom-half kernel routines, and ordinary threads. For example, consider a task set where tasks 0..3 represent hardware interrupt handlers and 4..9 represent standard tasks. In this case barriers X = {0, 1, 2, 3} must exist to preserve the separate identities of the interrupt handlers. 2.4. 
Summary of the New Scheduling Model We have introduced two new scheduling abstractions: the task cluster, which guarantees that a collection of tasks will be mutually non-preemptible in the implementation model, and the task barrier, which partitions the set of tasks into subsets that cannot be mapped to the same implementation thread. PT-ANL(Porig) { Pmax = enforce_preds(Porig); Bmax = badness(Pmax); while (max iterations not exceeded) { Pnew = enforce_preds(permute(Pmax)); Bnew = badness(Pnew); if (Bnew == 0) return Pnew; if (Bnew < Bmax) { Pmax = Pnew; Bmax = Bnew } } return FAILURE } Figure 1. PT-ANL schedules task sets containing task clusters and barriers We define an overall schedulability function: S(G, X, T, P, M) ≝ ΦU ∧ ΦP ∧ ΦS ∧ ΦM ∧ ΦG ∧ ΦX. In general, G, X, and T can be considered to be fixed for a given task set. On the other hand, P and M are derived terms and there may be many valid choices for them. 3. Scheduling with Task Clusters and Barriers The previous section defined two new abstractions; in this section we present two complementary techniques for scheduling task sets containing them. Although both techniques make essential use of the response time analysis for preemption threshold scheduling [26], only one of them requires run-time support for preemption thresholds - the other permits threads to have strictly static priorities at run-time. In Section 5 we quantitatively compare the two approaches. 3.1. Targeting Systems with Run-Time Support for Preemption Thresholds Task clusters and barriers can be scheduled on operating systems that support preemption thresholds using a technique similar to the one proposed by Saksena and Wang for the assignment of priorities and preemption thresholds [20]. Our algorithm, shown in Figure 1, greedily attempts to minimize the \"badness\" of a schedule using a randomized search through the space of possible priority and preemption threshold assignments. 
Our badness function is the same as Saksena and Wang's energy function [20, §4.3]: it is the sum of the lateness of each task where lateness is max(Ri − Di, 0). The algorithm is finished when a schedule with badness zero is found, since this means that no task's response time is later than its deadline. The permute function randomly either swaps the priorities of two tasks, or either increments or decrements the preemption threshold of a task. The enforce_preds function ensures that a schedule does not violate any of the predicates (other than ΦS) defined in the previous section. It does this, for example, first by noticing that ΦM is violated, and second by appropriately adjusting the priority and/or preemption threshold assignments of the offending tasks. These adjustments are repeated until all predicates are satisfied; this is possible because we test for a conflict between predicates, e.g. a task cluster that is split by a barrier, before starting the randomized search. For simplicity, the algorithms presented in this paper are randomized greedy algorithms. In practice, better results can often be obtained using simulated annealing. Converting a greedy search to one that uses simulated annealing is a straightforward matter of adding logic to probabilistically accept inferior solutions [18, §10.9]. 3.2. Targeting Systems without Run-Time Support for Preemption Thresholds A straightforward implementation of task clusters on a standard RTOS is to have each instance of a task belonging to a cluster acquire a lock associated with the cluster before performing any computation, and to release the lock just before terminating. 
If the lock implements the stack resource policy [3] or the priority ceiling protocol [21], then the lock protocols themselves introduce a form of dynamic priorities not unlike preemption thresholds - the difference being that the purpose of the priority change is to bound priority inversion and prevent deadlock, rather than to improve schedulability. As Gai et al. [10] have observed, there is considerable synergy between these synchronization protocols and preemption threshold scheduling. A lock-based implementation of task clusters, however, seems inelegant. It adds the time and space overhead of a lock, does not help minimize threads, and does not help support task barriers. Rather, we develop two solutions that fit into our existing framework; both perform better than the lock-based implementation, as we demonstrate in Section 5. Let maxp(Mi) denote the maximum of the highest priority or preemption threshold of any task in Mi. Similarly, let minp(Mi) denote the minimum of the lowest priority or preemption threshold of any task in Mi. Define ΦF as follows: ΦF ≝ ∀Mi, Mj ∈ M, Mi ≠ Mj : maxp(Mi) < minp(Mj) ∨ minp(Mi) > maxp(Mj). This predicate ensures that the priorities and preemption thresholds of tasks mapped to each thread do not overlap the priorities and preemption thresholds of tasks mapped to any other thread. Since there is no overlap, any priority and preemption threshold in the range minp(Mi)..maxp(Mi) can be chosen for thread i. By choosing the priority and threshold to be the same value we create a run-time schedule that is equivalent to purely preemptive thread scheduling - no preemption threshold support is required and a standard RTOS can be used. Furthermore, since only a single priority level is required for each thread, as opposed to the technique from the previous section that requires up to two priority levels per task, this technique is ideal for targeting a small RTOS that supports a limited number of priorities. 
To satisfy ΦF as well as the other predicates comprising the previously defined schedulability function S, we have developed a modified version of Audsley's optimal priority assignment algorithm for pure preemptive [2] and non-preemptive [12] scheduling. Audsley's algorithm reduces the space of priority assignments that must be searched from n! to n² by exploiting the property that although the response time of a task depends on the set of tasks that has higher priority, it does not depend on the particular priority ordering among those tasks. The natural algorithm, then, is to find a task that is schedulable at the lowest priority, then the second-lowest priority, etc. Once a task is found to meet its deadline at a given priority, this property will not be broken by priority assignments made to tasks with higher priority. To support task clusters and barriers within this framework we have designed a three-level hierarchical version of Audsley's algorithm, called SP-3, that operates as follows. At the outermost level the partitions created by task barriers are processed in order from lowest to highest priority. For example, a task set with 6 tasks and a barrier at 2 would be treated in two parts: first, tasks 3-5, and second, tasks 0-2. Within each partition task clusters are treated separately. For purposes of this algorithm we assume that each task belongs to a unique cluster: this can be easily accomplished by merging clusters that have tasks in common and by creating singleton clusters for tasks not initially belonging to a cluster. Task clusters within a partition are scheduled in a manner analogous to Audsley's algorithm for tasks. We try to schedule each cluster at the lowest priority in the partition; as priority assignments are found that meet the response time requirements of all tasks within the cluster, we progress to higher priorities. 
Finally, within a cluster, individual tasks are scheduled using the version of Audsley's algorithm that is optimal for non-preemptive scheduling. SP-3 will find a feasible schedule if one exists that does not introduce any extra non-preemption beyond what is specified by the task clusters. We have developed a second algorithm, SP-ANL, for scheduling task sets with clusters and barriers that, given enough time, outperforms SP-3 in the sense that it finds feasible schedules more often. This performance is quantified in Section 5. SP-ANL is identical to PT-ANL (Figure 1) except that the permute function operates at a higher level. Instead of randomly permuting a priority or preemption threshold, it randomly either swaps the priorities of two tasks within a cluster, swaps the priority ordering of two entire clusters, or attempts to run two clusters in the same implementation thread. It is this final permutation that provides additional non-preemption beyond what is specified by task clusters, permitting SP-ANL to schedule more task sets than SP-3. 4. Robust Scheduling A timing fault occurs when a task instance runs for too long, but eventually produces the correct result. Real-time systems that are robust with respect to timing faults are desirable for several reasons. First, analytic worst-case execution time (WCET) tools are not in widespread use, and it is not clear that tight bounds on WCET can be found for complex software running on aggressively designed processors. Second, even if accurate WCETs are available with respect to the CPU, it may be difficult to ensure the absence of interference from bus contention, unexpected or too-frequent interrupts, or a processor that is forced to run in a low-power mode due to energy constraints. Finally, it is just sound engineering to avoid building systems that are sensitive to minor perturbations. 
The rate monotonic algorithm, the deadline monotonic algorithm, and Audsley's priority assignment algorithm belong to the class of algorithms that we call FEAS-OPTIMAL: they are guaranteed to find, for different classes of task sets, a feasible schedule if any exist. In this section we define the ROBUST-OPTIMAL class of scheduling algorithms: they are guaranteed to produce a schedule that maximizes some robustness metric of interest. 4.1. A Framework for Robust Scheduling A transformation Z is an arbitrary function from task sets to task sets. Transformations of interest will model a class of changes that should be \"tolerated\" by a task set. For example, ZJ(T, α) ≝ {(Ti, Ci, Di, αJi)} is the transformation that models an increase in release jitter; α is a scaling factor. The critical value of α for a given priority assignment and transformation, denoted αc(G, X, T, Z, P, M), is the largest value of α such that the transformed task set remains schedulable: ∀α ∈ R : S(G, X, Z(T, α), P, M) ⇒ α ≤ αc. Let P be the set of all possible priority and preemption threshold assignments for a task set. Note that the size of P can be large even for modestly sized task sets since it contains n!·n! elements. The maximal critical value of the scaling factor, α*, has the following property: ∀P ∈ P : αc(G, X, T, Z, P, M) ≤ α*(G, X, T, Z). The set of priority assignments of maximal robustness is Pmax where: Pmax ⊆ P : P ∈ Pmax ⇒ αc(G, X, T, Z, P, M) = α*(G, X, T, Z). A ROBUST-OPTIMAL scheduling algorithm is one that can find a member of Pmax. We usually abbreviate αc(G, X, T, Z, P, M) as αc; it is to be understood that αc is a function of a task set, a transformation, and a schedule. Similarly, α*(G, X, T, Z) is a function of a transformation and a task set, including its associated clusters and barriers; we usually abbreviate it as α*. 4.2. 
The Critical Scaling Factor Throughout the rest of this paper we use a transformation ZC that multiplies the WCET of each task in a set by the scaling factor: ZC(T, α) ≝ {(Ti, αCi, Di, Ji)}. This is the transformation defined by Lehoczky et al. [16], but generalized slightly to support tasks with release jitter and arbitrary deadlines. ZC models generic uncertainty about WCET and also uniform expansion of task run-times due to interference from memory cycle stealing or brief, unanticipated interrupts. A useful property of this transformation is that S(G, X, ZC(T, α), P, M) is monotonic in α and therefore αc can be efficiently computed using a binary search. For the remainder of this paper when we say that a task set is robust, we mean \"robust with respect to uniform expansion in WCET.\" Also, we restrict the meaning of a ROBUST-OPTIMAL scheduling algorithm to be one that finds a schedule maximizing the scaling factor of ZC. Although our focus is on uniform expansion of task WCETs, the algorithms that we present are general and could easily support other transformations such as those that: scale only a single task or a subset of the tasks (this family of transformations is examined by Vestal [25]); reduce the period of a task representing a hardware interrupt whose minimum interarrival time is not precisely known; scale task execution times by a weighted factor reflecting the degree of uncertainty in WCET estimates; or, scale tasks with smaller run-times by a larger factor to model interference from a long-running, unanticipated interrupt handler. 4.3. A Simple Example Consider the following task set: τ0: C = 400, T, D = 1999, J = 0; τ1: C = 400, T, D = 2000, J = 1200. (Figure 2. Comparing the behavior of two schedules in the presence of timing faults; axes: maximum percent overrun vs. percent deadlines missed, for the rate monotonic and robust schedules.) There are only two possible fully preemptive schedules, and both of them are feasible. 
When scheduled using the rate-monotonic priority assignment, the worst-case response time of τ0 is 400 and of τ1 is 2000. A little experimentation will show that if the WCET of either task is increased, the task set ceases to be schedulable. When the non-rate-monotonic priority assignment is used, the worst-case response times are 800 and 1600, respectively, and the WCET of both tasks can be scaled by 1.67 before the task set becomes infeasible. In other words, by avoiding the rate-monotonic priority assignment, we increase the critical scaling factor of the task set from approximately 1.0 to 1.67. Clearly the non-rate-monotonic priority assignment is preferable: a mispredicted worst-case execution time is far less likely to make it miss a deadline. This is demonstrated in Figure 2, which compares the propensity of the two schedules to miss deadlines under overload. Each data point was generated by simulating 50 million time units. A \"maximum percent overload\" of 25 means that the execution time of each task instance is uniformly distributed between the nominal WCET and 1.25 times the WCET. 4.4. Properties of Some FEAS-OPTIMAL Algorithms Theorem 1. For the class of task sets where the deadline monotonic (DM) algorithm is FEAS-OPTIMAL (i.e. fully preemptive scheduling, no release jitter, deadline not greater than period), it is also ROBUST-OPTIMAL. Proof. Let T be a member of the class of task sets for which DM is an optimal scheduling algorithm. Based on T, define a set of scaled task sets ZC ≝ {ZC(T, α) : α ∈ R} that differ only in their WCETs. Let PDM be the deadline monotonic schedule for T. Since the deadline monotonic schedule for a task set is independent of the WCET of tasks in the set it follows that for each member of ZC, PDM is a feasible schedule if any exist. 
Therefore, it is impossible that there exists a schedule P_max ≠ P_DM such that λ(P_max) > λ(P_DM), since this would imply that there is a member of Z_C for which P_DM is infeasible but a different schedule is feasible.

Theorem 2. Audsley's FEAS-OPTIMAL algorithm for priority assignment is not ROBUST-OPTIMAL for preemptive [2] or for non-preemptive [12] scheduling.

Proof. In both versions of the algorithm, if all tests of task response time versus deadline succeed, then the first task in the set is assigned the lowest priority, the second task the second-lowest priority, etc. Therefore, we can feed tasks to the algorithm in such a way that a non-robust-optimal schedule is produced. For example, if τ1 and then τ0 from Section 4.3 were given to Audsley's algorithm, it would generate the rate-monotonic priority assignment that we know not to be ROBUST-OPTIMAL. It is straightforward to construct an analogous example for non-preemptive scheduling.

4.5. Finding Robust Schedules

For classes of task sets that have an efficient FEAS-OPTIMAL scheduling algorithm, and for transformations where the schedulability function is monotonic in the scaling factor, an efficient ROBUST-OPTIMAL algorithm can be created by invoking the FEAS-OPTIMAL algorithm in a binary search. This strategy can be used to maximize the critical scaling factor, for example, of a task set scheduled by either the preemptive or non-preemptive version of Audsley's algorithm for priority assignment. We call these algorithms ROB-OPT.

For classes of task sets that lack an efficient FEAS-OPTIMAL algorithm (e.g., task sets with preemption thresholds), or for transformations where schedulability is not monotonic in the scaling factor, we require an alternative to ROB-OPT. We have developed ROB-ANL, shown in Figure 3. It is a randomized heuristic search that can efficiently compute an approximate member of P_max.
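The ROB-OPT construction can be sketched concretely. Below, a minimal Audsley-style lowest-priority-first assignment (fully preemptive, jitter-aware response-time test with D ≤ T) is wrapped in a binary search on λ. This is our illustration, not the paper's implementation; function names are ours, and the greedy "first feasible task gets the lowest priority" rule is exactly the order-dependence exploited in Theorem 2.

```python
import math

def wcrt(task, higher):
    """Response time of `task` = (C, T, D, J) given a list of
    higher-priority tasks; standard recurrence with release jitter."""
    C, T, D, J = task
    w = C
    while True:
        w_next = C + sum(math.ceil((w + Jj) / Tj) * Cj
                         for (Cj, Tj, Dj, Jj) in higher)
        if w_next == w:
            return w + J
        if w_next > 10 * D:          # diverging: report unschedulable
            return math.inf
        w = w_next

def audsley(tasks):
    """Audsley's lowest-priority-first assignment [2]: repeatedly give the
    lowest remaining priority to the FIRST task (in list order) that meets
    its deadline there. Returns a priority order (highest first) or None."""
    remaining = list(tasks)
    low_to_high = []
    while remaining:
        for t in remaining:
            higher = [u for u in remaining if u is not t]
            if wcrt(t, higher) <= t[2]:
                low_to_high.append(t)
                remaining = higher
                break
        else:
            return None              # no feasible assignment at this level
    return low_to_high[::-1]

def rob_opt(tasks, hi=8.0, eps=1e-6):
    """ROB-OPT: binary search on the scaling factor, invoking the
    FEAS-OPTIMAL algorithm at every probe."""
    lo = 0.0
    while hi - lo > eps:
        mid = (lo + hi) / 2
        scaled = [(mid * C, T, D, J) for (C, T, D, J) in tasks]
        lo, hi = (mid, hi) if audsley(scaled) is not None else (lo, mid)
    return lo

t0 = (400, 1999, 1999, 0)     # tau_0 from Section 4.3
t1 = (400, 2000, 2000, 1200)  # tau_1 from Section 4.3
```

Fed the tasks in the order (τ1, τ0), plain audsley() returns the rate-monotonic assignment, reproducing Theorem 2's example, while rob_opt() on the same input converges to λ ≈ 1.67 because at any λ > 1 only the non-rate-monotonic order survives the feasibility test.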
ROB-ANL is similar to PT-ANL (shown in Figure 1) except that (1) instead of minimizing the degree to which task response times exceed their deadlines, we maximize the critical scaling factor, and (2) in the version of the algorithm that uses simulated annealing we never accept an infeasible schedule, although we must sometimes accept a solution that has an inferior critical scaling factor.

An advantage of using a heuristic search is that the details of the parameter being optimized do not matter. For example, if the cost of acquiring and releasing locks were modeled in the schedulability function, then the heuristic would naturally attempt to merge synchronizing tasks, since these schedules would have lower CPU overhead and consequently are good candidates for being highly robust. In the same vein, we would like to extend the response time analysis for preemption threshold scheduling to accurately model the costs of preemptive and non-preemptive context switches. This would cause the search heuristic to find schedules with low numbers of context switches, again because the reduced overhead would leave more room for timing faults. In summary, searching for robust schedules permits many schedule optimizations to be treated uniformly; we believe this is a significant advantage.

    ROB-ANL (P_orig) {
        P_max = P_orig
        λ_max = critical_scaling_factor (P_max)
        while (max iterations not exceeded) {
            P_new = permute (P_max)
            λ_new = critical_scaling_factor (P_new)
            if (λ_new > λ_max) {
                P_max = P_new
                λ_max = λ_new
            }
        }
        return P_max
    }

Figure 3. ROB-ANL approximately maximizes the critical scaling factor of a task set

4.6. Maximizing the Critical Scaling Factor and Minimizing Implementation Threads

Minimizing the number of threads required to run a task set can conflict with maximizing robustness. To see this, notice that the fewer implementation threads required to run a schedule, the more constraints there are on the priority and preemption threshold assignments.
Sometimes these constraints hurt schedulability because they rule out the most robust schedules. Instead of optimizing a composite value function, i.e., one based on some weighting of maximizing robustness and minimizing implementation threads, we believe that developers should be permitted to make an informed decision using a table that presents the largest critical scaling factor that could be achieved for each number of threads.

The algorithm for the joint minimization of implementation threads and maximization of the critical scaling factor is MIN-THR; it appears in Figure 4. This algorithm uses a heuristic search to find a schedule mapping to as few implementation threads as possible. Whenever a schedule is found that maps to a number of threads that has not yet been seen, it forks off an optimization to attempt to find the schedule that maximizes the critical scaling factor over schedules mapping to that number of threads. In other words, it calls a slightly modified version of ROB-ANL (from Figure 3) that only accepts schedules that map to a particular number of threads.

    MIN-THR (P_orig) {
        P_min = P_orig
        T_min = impl_threads (P_orig)
        while (max iterations not exceeded) {
            P_new = permute (P_min)
            T_new = impl_threads (P_new)
            if (not_yet_seen (T_new))
                ROB-ANL-T (P_new, T_new)
            if (T_new < T_min) {
                P_min = P_new
                T_min = T_new
            }
        }
    }

Figure 4. MIN-THR approximately minimizes threads and maximizes robustness

5. Experimental Evaluation

This section provides a brief survey of the performance of the new techniques presented in this paper. Our procedure for generating random task sets is as follows, where all random numbers are taken from a uniform distribution. The period of each task is a random value between 1 and 1000 time units. The utilization is chosen by generating a random number in the range 0.1-2.0 and dividing that number by the number of tasks in the set.
(Scaling utilization by the inverse of the number of tasks is merely a heuristic to avoid generating too many infeasible task sets.) The deadline for each task is either set to be the same as the period or is an independently chosen random value between 1 and 1000, depending on the experiment. Tasks were assigned release jitter in some experiments; see below. Finally, any task set with utilization greater than one is immediately discarded.

5.1. Task Clusters and Barriers

In this section we compare the different algorithms that we have developed for finding feasible schedules for task sets containing task clusters and barriers.

We compare five algorithms for scheduling task clusters. The first is NP-OPT, the optimal algorithm for assigning priorities for fully non-preemptible scheduling [12]. Recall that in the presence of non-trivial task clusters a fully preemptive schedule is never valid (because members of a cluster must be mutually non-preemptible), while a fully non-preemptive schedule is always valid. The second algorithm is SP-LOCK, the strawman algorithm that we proposed in Section 3.2 for implementing task clusters by forcing tasks in each cluster to always hold a lock associated with the cluster. The third algorithm is SP-3, the hierarchical version of Audsley's algorithm for priority assignment, and the fourth is SP-ANL, the heuristic search for priority and preemption threshold assignments for task sets that are to have purely static priorities at run time (Section 3.2).

     n   NP-OPT   SP-LOCK   SP-3   SP-ANL   PT-ANL
     5     34        65      73      88      100
    10     35        49      61      77      100
    15     25        48      53      63      100
    20     29        41      49      58      100
    25     41        39      43      47      100

Figure 5. Relative performance of algorithms for scheduling task clusters

     n   SP-3   SP-ANL   PT-ANL
     5    98      99      100
    10    77      89      100
    15    75      79      100
    20    71      70      100
    25    59      57      100

Figure 6. Relative performance of algorithms for scheduling task barriers
Finally, the fifth algorithm is PT-ANL, the heuristic search for priority and preemption threshold assignments for task sets containing clusters, for use when preemption threshold support is available on the target RTOS.

Figure 5 shows the results of an experiment where random task sets were passed to each of the five algorithms listed above. The experiment terminated when any algorithm successfully scheduled 100 task sets, and therefore the results are automatically normalized with respect to the best algorithm, which always has score 100. The experiment was repeated for task sets containing 5, 10, 15, 20, and 25 tasks. For every task set: there was a single task cluster containing between 2 and n/2 randomly selected tasks; each task's deadline was equal to its period; and each task had a 50% chance of being assigned release jitter up to half its period.

Figure 6 shows the results of an experiment similar to the previous one, except that instead of containing a task cluster, each task set was assigned a single randomly placed task barrier. The algorithms tested were the same as in the previous experiment, except that NP-OPT and SP-LOCK had to be dropped since they may produce invalid schedules for task sets containing task barriers.

These experiments show that PT-ANL consistently outperforms the other algorithms, and that the gap between it and the others increases for larger task sets. This can be taken as a corroboration of Saksena and Wang's results [20] about the practical dominance of preemption threshold scheduling over static priority scheduling. Of the three algorithms that generate static-priority schedules, SP-ANL, the heuristic search, outperforms SP-3, although the gap narrows with increasing numbers of tasks. We believe that this is because the extra non-preemptibility available to SP-ANL becomes less valuable for larger task sets. Also, notice that in Figure 6 SP-3 slightly outperforms SP-ANL for task sets with 20 and 25 members. We speculate that this happens because the size of the priority assignment space for large task sets overwhelms the search heuristic.

           Preemptive                Non-Preemptive
     n   P-OPT   ROB-OPT  % inc.   NP-OPT  ROB-OPT  % inc.
     5   1.11     1.18     63%      1.09    1.16     84%
    10   1.06     1.13    109%      1.05    1.12    131%
    15   1.05     1.10    110%      1.04    1.10    138%
    20   1.05     1.09     97%      1.04    1.09    136%
    25   1.04     1.08     92%      1.04    1.08    108%

Figure 7. Improving the critical scaling factor

          no jitter      1 task w/J     50% tasks w/J
     n   T=D   T≠D       T=D   T≠D       T=D   T≠D
     5   0%     62%      29%    61%      30%    63%
    10   0%    101%      51%    93%      50%   109%
    15   0%    100%      50%    87%      59%   110%
    20   0%    100%      50%   100%      61%    97%
    25   0%     81%      54%    77%      54%    92%

Figure 8. Headroom increases due to ROB-OPT

5.2. Improving the Robustness of Schedules

Figure 7 shows the increase in critical scaling factor that ROB-OPT can achieve relative to Audsley's FEAS-OPTIMAL algorithms for fully preemptive (P-OPT) and fully non-preemptive (NP-OPT) scheduling. As before, task sets are randomly generated and have 5-25 members. Each task's deadline and period are unrelated, and each task has a 50% chance of being assigned random release jitter up to half its period. Values in the table represent the median critical scaling factor over 500 feasible task sets, and "% inc." indicates the percent increase in the distance of the critical scaling factor from 1.0 under optimization by ROB-OPT. For example, if the FEAS-OPTIMAL scheduling algorithm produces a schedule where λ = 1.10 and ROB-OPT produces a schedule where λ = 1.13, then we say that we have increased the amount of "headroom" that the task set has before missing deadlines by 30%.

Figure 8 shows another way to evaluate ROB-OPT's ability to increase λ.
For task sets containing different numbers of tasks, it shows the increase in headroom for task sets where (1) the deadline of each task is equal to its period, and (2) the period and deadline are unrelated. The other parameter that is adjusted is the amount of jitter: task sets either have no release jitter, have a single task with jitter randomly distributed between zero and half its period, or have each task carry a 50% chance of being assigned jitter up to half its period. The failure to increase λ for task sets without jitter and where T = D is a direct consequence of Theorem 1.

6. Related Work

Hybrid preemptive/non-preemptive schedulers are an old idea, and in fact they can be found in the kernel of almost every general-purpose operating system: interrupts are scheduled preemptively, bottom-half kernel routines are scheduled non-preemptively, and threads are scheduled preemptively. The real-time analysis of non-preemptive sections caused by critical regions [3, 21] is more recent. The real-time analysis of mixed preemption for its own sake was pioneered by Saksena and Wang [20, 26] and by Davis et al. [7]. Our work builds directly on Saksena and Wang's, adding several new capabilities.

Synchronization elimination has been addressed by both the real-time and programming-language communities. For example, the Spring system [22] used static scheduling and was capable of recognizing situations where contention for a shared resource was impossible, in which case a lock was not used at run time. Aldrich et al. [1] show how to remove unnecessary synchronization operations from Java programs. The difference between previous work and the work presented in this paper is that synchronization elimination has until now been treated as a compile-time or run-time performance optimization.
We believe that using task clusters to give the programmer explicit control over the elimination of synchronization between (logically) concurrent tasks can result in significant software engineering benefits in addition to the previously realized performance benefits.

Starting with Lehoczky et al. [16], a number of researchers have used the critical scaling factor as a metric for schedulability, including Katcher et al. [14], Vestal [25], Yerraballi et al. [27], and Punnekkat et al. [19]. However, as far as we know it has not been previously recognized that it is possible to search for schedules with higher critical scaling factors, and that these schedules are inherently preferable when there is generic uncertainty about task WCET.

Existing techniques for tolerating timing faults - task instances that run for too long but eventually produce a correct result - can be divided into those that change the task model from the developer's point of view and those that do not. A number of scheduling techniques for dealing with timing faults have been proposed that change the task model, including robust earliest deadline [4], time redundancy [6], rate adaptation [5], user-defined timing failure handlers [23], and (m, k)-firm deadlines [13]. Our method for increasing robustness does not change the task model. It is complementary to, and can be used independently of or in combination with, essentially all of the other known techniques for dealing with timing faults in systems using static priority scheduling. Another technique that is transparent to developers is isolation- or enforcement-based scheduling [11, 17], where tasks are preempted when they exceed their execution time budgets. Although this technique cannot prevent missed deadlines, it can isolate deadline misses to the tasks that overrun.

Edgar and Burns [8] have developed a method for statistically estimating task WCET based on measurements.
They also show how to statistically estimate the feasibility of a task set, but do not address the problem of finding highly or maximally robust schedules. Our work, on the other hand, directly addresses the problem of finding robust schedules, but permits the statistical nature of unreliable WCET estimates to remain implicit. It may be useful to integrate the two models.

7. Software

All numerical results in this paper were generated using SPAK, a static priority analysis kit that we have developed. SPAK is a collection of portable, efficient functions for creating and manipulating task sets, for analyzing their response times, and for simulating their execution. A variety of existing analyses with different tradeoffs between speed and generality are available, as is the corrected and extended preemption threshold analysis presented in Appendix A. SPAK is open source software and can be downloaded from http://www.cs.utah.edu/~regehr/spak.

8. Future Work

Currently, a task barrier is defined to split the task set into two parts based on task indices. This is useful when there are inherent priority relations between tasks, e.g. when some tasks model interrupt handlers. However, a more general abstraction is probably desirable: one that permits the specification of subsets of the task set that must be isolated from each other, e.g. by a CPU reservation, but between which there is no inherent priority ordering.

Although we currently do not use CPU reservations or any other kind of enforcement-based scheduling, in the future we plan to use them to create temporal partitions between task clusters. Partitions inside clusters probably do not make sense, because clusters are internally non-preemptible, and because tasks in clusters are assumed to be part of a subsystem and therefore semantically related, reducing the utility of isolating them from each other's timing faults.

9. Conclusions

The paper has described a number of practical additions to existing work on fixed-priority real-time scheduling.

First, we have introduced two novel abstractions: task clusters and task barriers. Task clusters make non-preemptive scheduling into a first-class part of the real-time programming model. We claim that clusters provide significant software engineering benefits, such as the elimination of the possibility of race conditions and deadlocks within a cluster, as well as performance benefits due to reduced preemptions, reduced memory overhead for threads, and reduced lock acquisitions. These benefits are achieved without sacrificing the higher utilizations that can usually be achieved through preemptive scheduling. Task barriers restore an important advantage of static priority scheduling - support for integrated schedulability analysis of interrupts, kernel tasks, and user-level threads - to preemption threshold scheduling when the objective is to minimize the number of implementation threads onto which design tasks are mapped.

Second, we have developed three novel algorithms for finding feasible schedules for task sets containing clusters and barriers. The first targets systems with run-time support for preemption thresholds, while the others permit thread priorities to be strictly static at run time. By "compiling" task sets containing task clusters and barriers to target a static-priority environment, we have shown that while run-time support for preemption thresholds is often not necessary, the response time analysis for preemption thresholds is an important building block for real-time systems.
Third, we have characterized a framework within which it is possible to analyze the robustness of task sets under a given class of timing faults, and we have developed two algorithms that can often find a schedule for a given task set that has a higher critical scaling factor than the schedule generated by the appropriate FEAS-OPTIMAL scheduling algorithm. This extra resilience to timing faults is essentially free: it is cheap at design time and imposes no cost at run time.

Finally, we have corrected an error in the response time analysis for task sets with preemption thresholds.

Acknowledgments

The author would like to thank Luca Abeni, Eric Eide, Jay Lepreau, Rob Morelli, Alastair Reid, Manas Saksena, Jack Stankovic, and the reviewers for providing valuable feedback on drafts of this paper.

This work was supported, in part, by the National Science Foundation under award CCR-0209185 and by the Defense Advanced Research Projects Agency and the Air Force Research Laboratory under agreements F30602-99-1-0503 and F33615-00-C-1696.

References

[1] Jonathan Aldrich, Craig Chambers, Emin Gün Sirer, and Susan Eggers. Eliminating unnecessary synchronization from Java programs. In Proc. of the Static Analysis Symp., Venezia, Italy, September 1999.
[2] Neil Audsley, Alan Burns, Mike Richardson, Ken Tindell, and Andy Wellings. Applying new scheduling theory to static priority pre-emptive scheduling. Software Engineering Journal, 8(5):284-292, September 1993.
[3] Theodore P. Baker. A stack-based resource allocation policy for realtime processes. In Proc. of the 11th IEEE Real-Time Systems Symp., pages 191-200, Lake Buena Vista, FL, December 1990.
[4] Giorgio Buttazzo and John A. Stankovic. RED: Robust earliest deadline scheduling. In Proc. of the 3rd International Workshop on Responsive Computing Systems, pages 100-111, Lincoln, NH, September 1993.
[5] Marco Caccamo, Giorgio Buttazzo, and Lui Sha. Handling execution overruns in hard real-time control systems. IEEE Transactions on Computers, 51(5), May 2002.
[6] Marco Caccamo and Giorgio Buttazzo. Optimal scheduling for fault-tolerant and firm real-time systems. In Proc. of the 5th Intl. Workshop on Real-Time Computing Systems and Applications, Hiroshima, Japan, October 1998.
[7] Robert Davis, Nick Merriam, and Nigel Tracey. How embedded applications using an RTOS can stay within on-chip memory limits. In Proc. of the Work in Progress and Industrial Experience Sessions, 12th Euromicro Workshop on Real-Time Systems, pages 43-50, Stockholm, Sweden, June 2000.
[8] Stewart Edgar and Alan Burns. Statistical analysis of WCET for scheduling. In Proc. of the 22nd IEEE Real-Time Systems Symp., London, UK, December 2001.
[9] Express Logic Inc. ThreadX Technical Features, version 4. http://www.expresslogic.com/txtech.html.
[10] Paolo Gai, Giuseppe Lipari, and Marco di Natale. Minimizing memory utilization of real-time task sets in single and multi-processor systems-on-a-chip. In Proc. of the 22nd IEEE Real-Time Systems Symp., London, UK, December 2001.
[11] Mark K. Gardner and Jane W. S. Liu. Performance of algorithms for scheduling real-time systems with overrun and overload. In Proc. of the 11th Euromicro Workshop on Real-Time Systems, York, UK, June 1999.
[12] Laurent George, Nicolas Rivierre, and Marco Spuri. Preemptive and non-preemptive real-time uni-processor scheduling. Technical Report 2966, INRIA, Rocquencourt, France, September 1996.
[13] Moncef Hamdaoui and Parameswaran Ramanathan. A dynamic priority assignment technique for streams with (m, k)-firm deadlines. IEEE Transactions on Computers, 44(12):1443-1451, December 1995.
[14] Daniel I. Katcher, Hiroshi Arakawa, and Jay K. Strosnider. Engineering and analysis of fixed priority schedulers. IEEE Transactions on Software Engineering, 19(9):920-934, September 1993.
[15] Eddie Kohler, Robert Morris, Benjie Chen, John Jannotti, and M. Frans Kaashoek. The Click modular router. ACM Transactions on Computer Systems, 18(3):263-297, August 2000.
[16] John Lehoczky, Lui Sha, and Ye Ding. The rate monotonic scheduling algorithm: Exact characterization and average case behavior. In Proc. of the 10th IEEE Real-Time Systems Symp., pages 166-171, Santa Monica, CA, December 1989.
[17] Clifford W. Mercer, Stefan Savage, and Hideyuki Tokuda. Processor capacity reserves for multimedia operating systems. In Proc. of the IEEE Intl. Conf. on Multimedia Computing and Systems, May 1994.
[18] William H. Press, Saul A. Teukolsky, William T. Vetterling, and Brian P. Flannery. Numerical Recipes in C. Cambridge University Press, second edition, 1992.
[19] Sasikumar Punnekkat, Rob Davis, and Alan Burns. Sensitivity analysis of real-time task sets. In Proc. of the Asian Computing Science Conference, pages 72-82, Kathmandu, Nepal, December 1997.
[20] Manas Saksena and Yun Wang. Scalable real-time system design using preemption thresholds. In Proc. of the 21st IEEE Real-Time Systems Symp., Orlando, FL, November 2000.
[21] Lui Sha, Ragunathan Rajkumar, and John Lehoczky. Priority inheritance protocols: An approach to real-time synchronization. IEEE Transactions on Computers, 39(9):1175-1185, September 1990.
[22] John A. Stankovic, Krithi Ramamritham, Douglas Niehaus, Marty Humphrey, and Gary Wallace. The Spring system: Integrated support for complex real-time systems. Real-Time Systems Journal, 16(2/3):223-251, May 1999.
[23] David B. Stewart and Pradeep K. Khosla. Mechanisms for detecting and handling timing errors. Communications of the ACM, 40(1):87-93, January 1997.
[24] Ken Tindell, Alan Burns, and Andy J. Wellings. An extendible approach for analysing fixed priority hard real-time tasks. Real-Time Systems Journal, 6(2):133-151, March 1994.
[25] Steve Vestal. Fixed-priority sensitivity analysis for linear compute time models. IEEE Transactions on Software Engineering, 20(4):308-317, April 1994.
[26] Yun Wang and Manas Saksena. Scheduling fixed-priority tasks with preemption threshold. In Proc. of the 6th Intl. Workshop on Real-Time Computing Systems and Applications, Hong Kong, December 1999.
[27] Ramesh Yerraballi, Ravi Mukkamala, Kurt Maly, and Hussein Abdel-Wahab. Issues in schedulability analysis of real-time systems. In Proc. of the 7th Euromicro Workshop on Real-Time Systems, Odense, Denmark, June 1995.

A. Correcting the Response Time Analysis for Preemption Threshold Scheduling

The original response time analysis for task sets scheduled using preemption thresholds [26] contains an error: it sometimes examines too few previous task invocations, resulting in the potential for underestimated response times. Figure 9 shows the difference between the previous preemption threshold analysis and the corrected version presented in this section. The old analysis predicts that the task set is feasible, while the new analysis predicts that τ1 may not meet its deadline. Figure 10 is a trace of a simulated execution of the task set. It proves the infeasibility of the task set by counterexample: τ1 misses its second deadline.

The following response time analysis differs from the one presented by Wang and Saksena in two major ways. First, it has a different termination condition for the loop that takes previous invocations of a task into account when computing its response time. Second, it adds support for tasks with release jitter. We have also changed the notation to match that used in this paper.

The worst-case blocking time for a task is:

    B_i = max_{∀j : P̄_j ≥ P_i > P_j} C_j

In other words, the worst-case blocking for τ_i happens when the task with the longest WCET that has lower priority and higher preemption threshold is dispatched infinitesimally earlier than τ_i is able to run. S_i(q), the worst-case start time of task τ_i, is:

    S_i(q) = B_i + qC_i + Σ_{∀j : P_j > P_i} (1 + ⌊(S_i(q) + J_j) / T_j⌋) C_j

Our only change to this equation is the addition of a term accounting for release jitter.
F_i(q), the worst-case finish time of task τ_i, is:

    F_i(q) = S_i(q) + C_i + Σ_{∀j : P_j > P̄_i} (⌈(F_i(q) + J_j) / T_j⌉ − (1 + ⌊(S_i(q) + J_j) / T_j⌋)) C_j

Again, we have only added the jitter terms. The response time of task τ_i is:

    r_i = max_{∀q : 0 ≤ q ≤ Q} (F_i(q) + J_i − qT_i)

where Q is ⌊L_i / T_i⌋. L_i, the longest level-i busy period for preemption threshold scheduling, is:

    L_i = B_i + Σ_{∀j : P_j ≥ P_i} ⌈(L_i + J_j) / T_j⌉ C_j

    Task   C_i   T_i   D_i   J_i   P_i   P̄_i   r_i (old)   r_i (new)
     0      40    70    70    0     0     0       60          60
     1      20    90    90    0     2     0       80         120
     2      20   100   100    0     1     0       80          80

Figure 9. Response times computed using the original and fixed analyses

[Figure 10 (plot): execution trace over 250 time units showing tasks τ0, τ1, and τ2.] Figure 10. Simulated execution trace of the task set from Figure 9. The second instance of task τ1 misses its deadline.

The computation of Q was adapted from George et al. [12], and is the core of the difference between our analysis and the previously published one, which iterated only until the first q = m for which F_i(m) ≤ (m + 1)T_i. By working through the response time calculation for τ1 in the example task set in Figure 9, this termination condition can be seen to be the source of the error.

Whenever a variable appears on both sides of an equation (i.e., S_i, F_i, and L_i), its value can be found by iterating until the value converges. Zero is a safe initial value for S_i and F_i, but L_i needs to start at one.

Finally, we do not believe that the discrepancy between the old and new response time analyses affects any of the qualitative results reported by Saksena and Wang. For randomly generated task sets with 10 members and no release jitter, the two analyses agree on the response times of all tasks about 99% of the time.
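The corrected analysis can be sanity-checked by transcribing the equations above directly into Python. This is a sketch of ours (not SPAK): we adopt the convention that a numerically larger P means higher priority, so the Figure 9 task set - where priority 0 is the highest and every threshold is at the top level - is remapped accordingly, and old_response_time encodes our reading of the original termination condition.

```python
import math

# Each task is (C, T, D, J, P, pt): WCET, period, deadline, release jitter,
# priority, preemption threshold, with larger P meaning higher priority and
# pt >= P. Total utilization must be below 1 for busy periods to converge.

def blocking(tasks, i):
    # Longest WCET among lower-priority tasks whose threshold reaches P_i.
    C, T, D, J, P, pt = tasks[i]
    return max((Cj for (Cj, Tj, Dj, Jj, Pj, ptj) in tasks
                if ptj >= P > Pj), default=0)

def busy_period(tasks, i):
    # L_i = B_i + sum over P_j >= P_i of ceil((L_i + J_j)/T_j) * C_j
    C, T, D, J, P, pt = tasks[i]
    L = blocking(tasks, i) + C           # positive start below the fixpoint
    while True:
        L_next = blocking(tasks, i) + sum(
            math.ceil((L + Jj) / Tj) * Cj
            for (Cj, Tj, Dj, Jj, Pj, ptj) in tasks if Pj >= P)
        if L_next == L:
            return L
        L = L_next

def start_time(tasks, i, q):
    C, T, D, J, P, pt = tasks[i]
    S = 0                                # zero is a safe initial value
    while True:
        S_next = blocking(tasks, i) + q * C + sum(
            (1 + math.floor((S + Jj) / Tj)) * Cj
            for (Cj, Tj, Dj, Jj, Pj, ptj) in tasks if Pj > P)
        if S_next == S:
            return S
        S = S_next

def finish_time(tasks, i, q):
    # After tau_i starts, only tasks above its threshold pt can preempt it.
    C, T, D, J, P, pt = tasks[i]
    S = start_time(tasks, i, q)
    F = S + C
    while True:
        F_next = S + C + sum(
            (math.ceil((F + Jj) / Tj) - (1 + math.floor((S + Jj) / Tj))) * Cj
            for (Cj, Tj, Dj, Jj, Pj, ptj) in tasks if Pj > pt)
        if F_next == F:
            return F
        F = F_next

def response_time(tasks, i):
    # Corrected termination: examine q = 0 .. floor(L_i / T_i).
    C, T, D, J, P, pt = tasks[i]
    Q = busy_period(tasks, i) // T
    return max(finish_time(tasks, i, q) + J - q * T for q in range(Q + 1))

def old_response_time(tasks, i):
    """Our reading of the original stopping rule: stop at the first q whose
    invocation finishes before the next release. This is the buggy part."""
    C, T, D, J, P, pt = tasks[i]
    q, worst = 0, 0
    while True:
        F = finish_time(tasks, i, q)
        worst = max(worst, F + J - q * T)
        if F <= (q + 1) * T:
            return worst
        q += 1

# Figure 9 task set, priorities remapped so that larger = higher:
fig9 = [(40,  70,  70, 0, 2, 2),   # tau_0 (highest priority)
        (20,  90,  90, 0, 0, 2),   # tau_1 (lowest priority)
        (20, 100, 100, 0, 1, 2)]   # tau_2
```

Under these assumptions the corrected analysis on fig9 yields responses (60, 120, 80), matching the "r_i (new)" column of Figure 9, while the old stopping rule yields (60, 80, 80), reproducing the underestimate for τ1: it stops at q = 0 because F_1(0) = 80 fits before the next release at 90, never reaching the q = 4 invocation whose response is 120.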