Real-time scheduling: rate-monotonic & the priority-inversion fix
Three periodic tasks share one CPU. Rate-monotonic gives the shortest-period task the highest static priority — that part is easy. The hard part is what happens when a low-priority task locks a shared resource just before a high-priority task wakes up. Watch a medium-priority task that doesn’t even touch the resource starve the high-priority one indirectly — then turn on the highest-locker fix and watch the inversion vanish.
The trick in two paragraphs
Rate-monotonic scheduling (RMS, Liu & Layland 1973). Sort the periodic tasks by period. The shortest period gets the highest fixed priority. Whenever multiple tasks are ready, the CPU runs the highest-priority one and preempts everyone else. For a set of n tasks, RMS is guaranteed to meet all deadlines if total utilization U = Σ Cᵢ/Tᵢ ≤ n·(2^(1/n) − 1) — the bound that drops to ln 2 ≈ 0.693 as n → ∞. Above the bound, run response-time analysis instead.
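The utilization test is a one-liner; a minimal sketch (the function names are mine, not the simulator's):

```python
import math

def liu_layland_bound(n):
    """Sufficient RMS utilization bound for n tasks: n*(2^(1/n) - 1)."""
    return n * (2 ** (1 / n) - 1)

def rms_schedulable(tasks):
    """tasks: list of (C, T) pairs. Returns (U, bound, passes_test)."""
    u = sum(c / t for c, t in tasks)
    bound = liu_layland_bound(len(tasks))
    return u, bound, u <= bound
```

Remember the test is only sufficient: failing it does not mean deadlines will be missed, it just means the cheap check can no longer promise they won't be.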
Highest locker’s priority (HLP, a.k.a. immediate priority-ceiling protocol). Each shared resource gets a ceiling equal to the priority of the highest-priority task that ever locks it. The instant a task acquires the resource, its effective priority is bumped to the ceiling. Now no task that could someday want the resource — nor any task of equal-or-lower priority — can preempt the holder. The holder gets through its critical section quickly, releases, and drops back. Bounded blocking, no deadlocks, no Mars Pathfinder.
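The whole protocol fits in a few lines. A sketch, assuming numeric priorities where smaller means higher and a mutable `task.priority` field (both my conventions, not the simulator's):

```python
class HLPLock:
    """Immediate-ceiling lock: the holder runs at the resource's ceiling."""

    def __init__(self, ceiling):
        # Ceiling = priority of the highest-priority task that ever locks this.
        self.ceiling = ceiling

    def acquire(self, task):
        # Under HLP on one CPU the lock is always free when a running task asks
        # for it (nobody who could want it can preempt a holder), so acquisition
        # never blocks; the only work is the priority bump.
        task.saved_priority = task.priority
        task.priority = min(task.priority, self.ceiling)

    def release(self, task):
        # Drop straight back to the base priority on leaving the critical section.
        task.priority = task.saved_priority
```

Note what the absence of a wait queue implies: the deadlock-freedom and bounded-blocking guarantees fall out of the bump happening at lock time, not at contention time.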
Tasks (rate-monotonic order)
Edit any number. T = period, C = worst-case execution time, phase = first release time. Set CS len to 0 to make a task that doesn’t touch the shared resource.
Shared resource & protocol
Schedule
Analysis
Things to try
- Press Play with the defaults under No protocol. Watch what happens around t=2…7: the high-priority Sensor wakes up, runs for one tick, then asks for the resource the low-priority Display is still holding. Sensor blocks. Logger then arrives, doesn’t need the resource, and happily preempts Display for three full ticks — even though Sensor has higher priority. That delay is the priority inversion. It’s the medium-priority task indirectly starving the high-priority one through a resource it doesn’t even touch.
- Switch to Highest locker (HLP) and reset. The instant Display locks the resource at t=0, its effective priority jumps to the ceiling (= Sensor’s priority). Now neither Sensor nor Logger can preempt it. Display blasts through its critical section, releases, drops back. Sensor finishes a few ticks earlier; Logger a bit later. You traded a tiny bit of Logger’s slack for a strict bound on Sensor’s blocking time.
- Crank Display’s execution time C up until the total utilization passes the Liu–Layland bound (the marker on the bar). RMS is no longer guaranteed schedulable by the simple test, but the simulator may still meet all deadlines — that’s the gap between sufficient and necessary conditions. Push further until you see a red missed deadline marker.
- Set Logger’s CS len to a non-zero value so it also locks the resource. The ceiling is still Sensor’s priority (it’s the highest user), but now there’s a richer interaction: under HLP, Logger holding the resource also blocks Sensor — bounded by Logger’s critical section length. Compare to the no-protocol case where chained inversions can occur.
- Make all three periods harmonic (e.g. 4, 8, 16). For harmonic task sets RMS is schedulable all the way up to U = 1, well above the Liu–Layland bound — the bound is conservative in this special case.
- Set Sensor’s phase to 0 instead of 1. Now Sensor runs first (it has highest priority), grabs the resource, releases it, and is gone before Display ever starts. Inversion can’t happen if the high-priority task gets in first — but you can’t count on phasing to save you in general.
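The no-protocol vs. HLP experiment above can be reproduced in a few dozen lines. A sketch of a tick-based simulator with one resource; the task parameters below are my own guesses at the page's defaults (Sensor/Display/Logger names come from the page, the numbers don't):

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    T: int         # period (list order = rate-monotonic priority, index 0 highest)
    C: int         # worst-case execution time
    phase: int     # first release time
    cs_start: int  # execution offset where the critical section begins
    cs_len: int    # critical-section length; 0 = never locks the resource

def simulate(tasks, horizon, hlp=False):
    """One CPU, one resource, integer ticks. Returns one char per tick ('.' = idle)."""
    users = [i for i, t in enumerate(tasks) if t.cs_len > 0]
    ceiling = min(users) if users else None  # priority of the highest-priority locker
    rem  = [0] * len(tasks)   # remaining execution of the current job
    done = [0] * len(tasks)   # ticks executed so far in the current job
    holder, last, out = None, None, []
    # Blocked = sitting at the critical-section entry while someone else holds the lock.
    blocked = lambda i: (tasks[i].cs_len > 0 and done[i] == tasks[i].cs_start
                         and holder not in (None, i))
    # Effective priority: under HLP the holder is bumped to the ceiling.
    eff = lambda i: ceiling if hlp and holder == i else i
    for now in range(horizon):
        for i, t in enumerate(tasks):        # periodic job releases
            if now >= t.phase and (now - t.phase) % t.T == 0:
                rem[i], done[i] = t.C, 0
        runnable = [i for i in range(len(tasks)) if rem[i] > 0 and not blocked(i)]
        if not runnable:
            out.append('.'); last = None
            continue
        # Highest effective priority wins; ties go to the task already running.
        run = min(runnable, key=lambda i: (eff(i), i != last))
        t = tasks[run]
        if t.cs_len > 0 and done[run] == t.cs_start and holder is None:
            holder = run                     # lock on entering the critical section
        rem[run] -= 1; done[run] += 1
        if holder == run and done[run] == t.cs_start + t.cs_len:
            holder = None                    # unlock on leaving it
        last = run
        out.append(t.name[0])
    return ''.join(out)

# Hypothetical task set loosely modeled on the page's narrative (values assumed):
TASKS = [
    Task("Sensor",  12, 3, 1, 1, 1),   # high priority, needs the resource at offset 1
    Task("Logger",  16, 3, 2, 0, 0),   # medium priority, never locks
    Task("Display", 20, 5, 0, 0, 3),   # low priority, locks for its first 3 ticks
]
print(simulate(TASKS, 11, hlp=False))  # → DSLLLDDSSDD  (Sensor stalls behind Logger)
print(simulate(TASKS, 11, hlp=True))   # → DDDSSSLLLDD  (Display rushes through its CS)
```

In the no-protocol timeline Sensor's response time is 8 ticks because Logger runs during the inversion; under HLP it drops to 5, and Logger simply finishes later — the same trade the bullet list describes.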
What this simulator skips
One CPU, one shared resource, integer time, zero context-switch overhead, deterministic worst-case execution times, no jitter on releases, and critical sections that always start at the same execution offset within a job. Real-world RTOSes also have to deal with tick interrupts, priority queue maintenance, kernel-mode/user-mode transitions, and cache effects on Ci. This page treats C as a single integer.
The protocol toggle is between no protocol at all (a task that holds a resource keeps its base priority) and highest locker / immediate priority-ceiling (priority bumped on lock). The closely related priority inheritance protocol (PIP) only bumps the holder’s priority when a higher-priority task actually blocks on the resource — it also bounds inversion, but doesn’t prevent deadlock the way HLP/PCP do, and gives worse blocking bounds in chained-resource cases. The full Priority Ceiling Protocol (Sha, Rajkumar, Lehoczky 1990) only allows a new lock when the requesting task’s priority is strictly higher than the system ceiling, which is subtly different from HLP but gives identical guarantees with one resource.
Schedulability shown here is the Liu–Layland utilization bound (sufficient, not necessary). Tighter tests — the hyperbolic bound (still sufficient, but sharper) and response-time analysis (exact) — aren’t evaluated; you just watch the simulator run and see whether deadlines slip.
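Response-time analysis is a short fixed-point iteration. A sketch for independent tasks (no blocking term), assuming deadlines equal periods:

```python
import math

def response_time(i, C, T):
    """Worst-case response time of task i (index 0 = highest priority):
    iterate R = C[i] + sum over higher-priority j of ceil(R / T[j]) * C[j]
    until it stops changing. Returns None once R exceeds the period."""
    r = C[i]
    while True:
        nxt = C[i] + sum(math.ceil(r / T[j]) * C[j] for j in range(i))
        if nxt == r:
            return r            # fixed point reached: exact response time
        if nxt > T[i]:
            return None         # past the deadline (assumed = period): miss
        r = nxt

# Illustrative numbers (mine, not the page's): C = 3,3,5 over T = 12,16,20.
C, T = [3, 3, 5], [12, 16, 20]
print([response_time(i, C, T) for i in range(3)])  # → [3, 6, 11]
```

The test is exact because the iteration walks the worst-case busy period directly, rather than summing utilizations — which is why it can bless task sets the Liu–Layland bound rejects. Adding a blocking term B_i to the sum extends it to cover HLP's bounded blocking.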