Cheddar tutorial: an example of real-time scheduling analysis with Cheddar

Frank Singhoff (singhoff@univ-brest.fr), Hai Nam Tran (hai-nam.tran@univ-brest.fr)
Summer school "École d’Été Temps Réel/ETR", Poitiers, Futuroscope, 20-24 September, 2021
Also given during ETR 2015 and 2017.

The objective of this lab is to experiment with several classical algorithms and analysis methods of real-time scheduling theory. For this purpose, we will use Cheddar. The first exercise of this tutorial shows you the basics of using Cheddar: how to model a software architecture and how to analyse it. In the sequel:
  1. Exercises 2 and 3 present and compare several classical real-time scheduling algorithms.
  2. Exercises 4 and 5 address shared resources.
  3. Finally, exercises 6 and 7 propose full software architectures to be modeled and verified.
The ideas for these exercises were taken from [COT 00, BUR 97, DEM 99, KRI 97]. You can refer to those books for extra exercises.

I.1 How to set up your workspace

I.2 Edit the model to analyse

In this section, we will design our first Cheddar model.

Let us describe the architecture of the real-time system we want to verify. For this purpose, we will use the Cheddar built-in ADL (ADL stands for Architecture Description Language), called Cheddar ADL. Cheddar ADL is an architecture description language dedicated to real-time scheduling analysis. A Cheddar ADL model is composed of a description of both the hardware and the software parts of the system to analyse. Those models can be written with any text editor, but in the sequel, we will use the graphical editor of Cheddar as follows:

  1. First, the hardware components composing the execution environment running the software must be described. With Cheddar, the execution platform can be composed of one or several processor components, each embedding one or several core components with optional cache components. A core component models a unit of computation, i.e. an entity providing the support to run one or several tasks.
  2. To describe a core component, use the menu Edit/Hardware/Core, which opens the window shown in Figure 2:
    Fig. 2: Editing a core
  3. A core component is defined by the following attributes: All the windows you may use to edit Cheddar ADL components have the same interface:
  4. Once core1 is defined, we can define processor components that use core1 with the Edit/Hardware/Processor menu, which displays the following window:
    Fig. 3: Editing a processor component
    In Figure 3, we define a processor named cpu1 which is composed of the core1 component. Adding or deleting a core inside a processor can be achieved with the Add and Delete buttons of the subwindow called Cores Table. In this lab, we restrict ourselves to uniprocessor architecture models. Each processor therefore contains only one core component, and we need to set the Processor Type and Migration Type attributes to the Monocore Type and No Migration Type values respectively.
  5. In the sequel, we express the software part of the system to analyse. For this lab, this software part is mainly composed of task and shared resource components (the task and resource Cheddar ADL component types). Other software components exist in Cheddar ADL, but they are not required here (see the Cheddar ADL reference guide for further details). The task and resource components have to be defined inside Address Space components. An Address Space component can be seen as an entity modelling the concept of an application, a Unix process address space, or even an ARINC 653 partition.
  6. The window of Figure 4 (menu Edit/Software/Address space) shows you how to define an Address Space for this tutorial. The only attributes we have to define are Name and Cpu Name. For the next exercises, we have to define one Address Space component for each processor component. However, for specific analyses such as ARINC 653 or, more generally, hierarchical architecture models, more attributes have to be specified for this kind of component.
    Fig. 4: Editing an address space component
  7. The last step to edit a Cheddar ADL model is to specify the task parameters, which can be achieved with the menu Edit/Software/Task:
    Fig. 5: Editing a task component
    Again, this component type has numerous attributes. The most important ones are: In the example of Figure 5, we have defined 3 tasks, which are periodic, synchronous, with deadlines on request, and which are defined with the following attributes:
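
To keep track of what such a model contains, the following sketch mirrors the components and attributes discussed above as plain Python records. This is only an illustrative aid, not Cheddar code nor the Cheddar ADL XML syntax; the names and attributes are assumptions to be checked against the Cheddar ADL reference guide.

  from dataclasses import dataclass
  from typing import List

  # Illustrative, simplified mirror of the Cheddar ADL components used in this lab.
  @dataclass
  class Core:
      name: str
      scheduler: str            # e.g. a preemptive fixed priority or EDF policy

  @dataclass
  class Processor:
      name: str
      cores: List[Core]         # uniprocessor models in this lab: exactly one core

  @dataclass
  class AddressSpace:
      name: str
      cpu_name: str             # the processor hosting this address space

  @dataclass
  class Task:
      name: str
      address_space_name: str
      cpu_name: str
      capacity: int             # worst case execution time
      period: int
      deadline: int
      priority: int

  core1 = Core(name="core1", scheduler="preemptive fixed priority")
  cpu1 = Processor(name="cpu1", cores=[core1])
  addr1 = AddressSpace(name="addr1", cpu_name="cpu1")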

I.3 Load/Save a model

Cheddar provides features to load/save a model in the Cheddar ADL format.

I.4 Analyze a model

In this last section, we show how to perform an analysis of the architecture model previously designed.

From the menu Tools and for a given architecture model, we can perform 3 types of analysis with Cheddar:

  1. We can use simulation tools to compute the scheduling of a task set and then, from this schedule, various performance criteria (worst/average/best response times, missed deadlines, blocking times on shared resources, deadlocks, ...).
  2. We can also call various feasibility tests that allow designers to check deadlines without computing the scheduling of the task set.
  3. If the previous tools have shown that some deadlines cannot be met, we can also improve the architecture model by various means: priority assignment, task partitioning, task clustering, ...

Below the menus, some buttons give quick access to a summary of those Cheddar analysis features (all analysis features remain accessible from the Tools menu). One button starts a scheduling simulation while another computes a sample of feasibility tests for a basic periodic task set. Basically, this feasibility test button computes worst case task response times following the Joseph and Pandya [JOS 86] method, extended to any deadline, together with classical CPU utilization bound tests. Again, for specific feasibility tests (e.g. transactions), see the Tools menu.
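
As a side note, the worst case response times computed by this feasibility test follow the classical fixed point recurrence Ri = Ci + sum over the higher priority tasks j of ceiling(Ri / Tj) * Cj. The following Python sketch reproduces this computation for independent, periodic, synchronous tasks with deadlines not larger than periods; it is a textbook illustration, not the Cheddar implementation, and the task parameters are assumptions.

  import math

  def response_time(tasks, i):
      """Worst case response time of tasks[i] by fixed point iteration,
      in the spirit of the Joseph and Pandya analysis: independent periodic
      tasks, deadlines <= periods, preemptive fixed priority scheduling.
      Here a larger priority value means a higher priority."""
      me = tasks[i]
      hp = [t for t in tasks if t["priority"] > me["priority"]]
      r = me["capacity"]
      while True:
          new_r = me["capacity"] + sum(math.ceil(r / t["period"]) * t["capacity"]
                                       for t in hp)
          if new_r == r:
              return r
          if new_r > me["period"]:      # divergence guard for overloaded task sets
              return None
          r = new_r

  # Illustrative task set (capacities, periods and priorities are assumptions):
  tasks = [
      {"name": "T1", "capacity": 2, "period": 10, "priority": 3},
      {"name": "T2", "capacity": 3, "period": 20, "priority": 2},
      {"name": "T3", "capacity": 5, "period": 40, "priority": 1},
  ]
  for i, t in enumerate(tasks):
      print(t["name"], response_time(tasks, i))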

Figures 6 and 7 show screenshots of Cheddar for the model edited in section I.2 and for the results produced by those two buttons. We can see the worst case response times computed by the Joseph and Pandya [JOS 86] method and also by the scheduling simulation over the feasibility interval. The feasibility test on the processor utilization factor cannot be applied here because the tool cannot be sure that priorities have been assigned according to Rate Monotonic. To apply this test, one must change the scheduling policy of the core1 component to the Rate_Monotonic_Protocol value.
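
For reference, the processor utilization test that becomes applicable once Rate Monotonic priorities are used is the classical Liu and Layland bound. A small sketch, assuming independent periodic tasks with deadlines equal to periods:

  def rm_utilization_test(tasks):
      """Sufficient (but not necessary) schedulability test for Rate Monotonic:
      U = sum(Ci / Ti) <= n * (2 ** (1 / n) - 1)."""
      n = len(tasks)
      u = sum(c / t for (c, t) in tasks)
      bound = n * (2 ** (1 / n) - 1)
      return u, bound, u <= bound

  # Tasks given as (capacity, period) pairs; illustrative values.
  print(rm_utilization_test([(2, 10), (3, 20), (5, 40)]))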

We can also notice in the time lines that vertical red rectangles show task release times, while horizontal rectangles show when tasks are running. Many other display options exist and can be modified with the Tools/Scheduling/Scheduling Options menu.

Fig. 6: A scheduling simulation
Fig. 7: Compute schedulability feasibility tests

Let us now see our first real exercise.

Exercise 1

  1. Edit a Cheddar model for the system analyzed in figures 6 and 7 and then, with the two buttons presented above, check that you get the results of figures 6 and 7.
  2. From the scheduling simulation, read the different response times of each task (recall that response time = completion time - release time). For a given task, can you see the response time vary during the scheduling simulation? Why? When should we see the shortest response time? Why?
  3. Update the parameters of the tasks as follows: This task set is said to be harmonic as, for any two of its periods, one evenly divides the other (a short sketch of this notion is given at the end of this exercise). For such a task set, we know that we can meet task deadlines even if the processor is busy up to 100%, provided a preemptive fixed priority scheduling policy and a Rate Monotonic priority assignment are applied.
  4. With Cheddar, compute the scheduling simulation over the feasibility interval (scheduling simulation button). From this simulation, check the schedulability of this task set.
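
A short sketch of the two notions used in this exercise, the harmonic property and the feasibility interval, assuming synchronous periodic tasks with deadlines equal to periods; the periods below are illustrative.

  from math import gcd
  from functools import reduce

  def is_harmonic(periods):
      """A task set is harmonic when, for every pair of periods,
      one evenly divides the other."""
      return all(a % b == 0 or b % a == 0 for a in periods for b in periods)

  def hyperperiod(periods):
      """For synchronous periodic tasks, the scheduling simulation can be
      limited to the feasibility interval [0, lcm of the periods)."""
      return reduce(lambda a, b: a * b // gcd(a, b), periods)

  periods = [10, 20, 40]      # illustrative harmonic periods
  print(is_harmonic(periods), hyperperiod(periods))
  # For a harmonic set scheduled with preemptive fixed priorities assigned by
  # Rate Monotonic, deadlines are met as long as sum(Ci / Ti) <= 1.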

In the sequel, we will experiment and compare 3 classical scheduling policies.

Exercise 2

We consider two periodic tasks, synchronous and with deadlines on request: tasks T1 and T2. They are defined as follows:
  1. Open the model for exercise 2: ex2_base.xmlv3. It consists of a processor composed of one core.
  2. Edit this Cheddar model for the task set running on the processor.
  3. Edit the core to host a preemptive fixed priority scheduler.
  4. With Cheddar, compute a scheduling simulation for the feasibility interval (scheduling simulation button).
  5. From this simulation, what are the worst case response times of each task? Can we see missed deadlines?
  6. Change your Cheddar model in order to use a preemptive EDF instead (value Earliest_Deadline_First_Protocol for the Scheduler type attribute). Do again questions 2 and 3 (a toy simulator contrasting fixed priority and EDF scheduling is sketched at the end of this exercise).
  7. Compare both analysis results. What can you see? Is it surprising?
  8. Change your model again: we now consider a preemptive EDF with the same task set, but you must now add an aperiodic task. We do not require the deadline of this aperiodic task to be met. To define such an aperiodic task, change the value of the Task Type attribute (see Figure 5):
  9. Compute again the scheduling simulation for the first 30 units of time (scheduling simulation button). What can you see now?
  10. What can we conclude about the suitability of EDF for critical real-time systems? Is it also the case with fixed priority scheduling?
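
To get an intuition of what the scheduling simulation computes in this exercise, here is a very small discrete time scheduling simulator supporting preemptive fixed priority and EDF scheduling. It is a simplified sketch (synchronous releases, deadlines equal to periods, independent tasks), not Cheddar's simulation engine, and the task parameters below are assumptions.

  def simulate(tasks, horizon, policy="fp"):
      """Tasks: dicts with name, capacity, period, priority (larger = higher).
      Returns the timeline and the deadlines missed during [0, horizon).
      Simplified sketch: synchronous releases, deadline = period."""
      jobs, misses, timeline = [], [], []
      for t in range(horizon):
          for task in tasks:
              if t % task["period"] == 0:           # new periodic release
                  jobs.append({"task": task, "remaining": task["capacity"],
                               "deadline": t + task["period"]})
          for j in list(jobs):                      # detect missed deadlines
              if t >= j["deadline"]:
                  misses.append((j["task"]["name"], j["deadline"]))
                  jobs.remove(j)
          if jobs:
              if policy == "fp":
                  j = max(jobs, key=lambda j: j["task"]["priority"])
              else:                                 # EDF: earliest absolute deadline
                  j = min(jobs, key=lambda j: j["deadline"])
              j["remaining"] -= 1
              timeline.append((t, j["task"]["name"]))
              if j["remaining"] == 0:
                  jobs.remove(j)
          else:
              timeline.append((t, "idle"))
      return timeline, misses

  # Illustrative two-task set, not the one of this exercise.
  tasks = [{"name": "T1", "capacity": 2, "period": 5, "priority": 2},
           {"name": "T2", "capacity": 4, "period": 10, "priority": 1}]
  print(simulate(tasks, 20, policy="edf")[1])       # -> [] when no deadline is missed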

Exercise 3

We consider two periodic tasks, synchronous, with deadlines on request: tasks T1 and T2. They are defined as follows: We now investigate another scheduling policy: LLF (Least Laxity First). This policy selects the task to run among the ready tasks according to a dynamic priority called 'laxity': the task with the smallest laxity runs first. The laxity of a task i at time t can be computed as Li(t) = (di - t) - remainingi(t), where di is the absolute deadline of the current activation of task i and remainingi(t) is its remaining capacity at time t.
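
A minimal sketch of this laxity computation (the function and its arguments are illustrative):

  def laxity(t, absolute_deadline, remaining_capacity):
      """Dynamic laxity of a task at time t: how long the task can still be
      delayed without missing the deadline of its current activation."""
      return (absolute_deadline - t) - remaining_capacity

  # Example: at t = 4, a task whose current activation must complete by time 10
  # and which still has 3 units of capacity left has a laxity of 3.
  print(laxity(4, 10, 3))       # -> 3

At each scheduling decision, LLF simply runs the ready task with the smallest laxity.
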
  1. Open the model for exercise 3: ex3_base.xmlv3. It consists of a processor composed of one core.
  2. Edit a Cheddar model for the task set running on the processor.
  3. Edit the core to host a preemptive EDF scheduler.
  4. With Cheddar, compute a scheduling simulation for the feasibility interval (scheduling simulation button).
  5. From this simulation, what are the worst case response times of each task? Can we see the missed deadline?
  6. Change your Cheddar model in order to use an LLF policy instead (value Least_Laxity_First_Protocol for the Scheduler type attribute). Do again questions 2 and 3.
  7. What can we see about the results produced by both those scheduling policies? What are the common points and differences?
  8. Conclude about the choice between those two scheduling policies.

In the previous exercises, we assumed that the tasks were independent. In the sequel, we illustrate the use of Cheddar with dependent tasks: tasks sharing resources according to classical protocols such as PCP or PIP.

Exercise 4

We consider three periodic tasks, synchronous and with deadlines on request: tasks T1, T2 and T3. They are defined as follows:

We assume a preemptive fixed priority scheduling policy and we apply Rate Monotonic to assign priorities.

We also assume that tasks T1 and T3 share a resource named S. T1 and T3 access S in mutual exclusion:

  1. Open the model for exercise 4: ex4_base.xmlv3. It consists of a processor composed of one core and the task set described above.
  2. Edit your Cheddar model to add the resource S. To define S, use the Edit/Software/Resource menu, which displays the following window:
    Fig. 8: Editing a resource component
    In this window, for this lab, we only need to set the following attributes:
  3. Compute the scheduling simulation on the feasibility interval (scheduling simulation button). You should get the scheduling simulation of Figure 9.
    Fig. 9: Scheduling simulation with a shared resource
  4. On the resource time line, horizontal rectangles show when the resource is used. Those rectangles have the same colors as the tasks (see the options in the Tools/Scheduling/Scheduling Options menu). Furthermore, blue vertical rectangles show when the resource is locked while red vertical rectangles show when it is unlocked.
  5. In this time line, find when tasks T1 and T3 lock and unlock S.
  6. From this time line, can we see priority inversions? Where?
  7. Change the protocol of S to Priority Inheritance Protocol (or PIP) and compute the scheduling simulation again. With PIP, a task that blocks a higher priority task runs its critical section with the priority of the blocked task.
  8. What can we see in this new scheduling simulation about the shared resource?
  9. Say at which time the priorities of the tasks change due to PIP.
  10. With the feasibility test button, compute the worst case response times with the Joseph and Pandya method and compare those results with the ones computed during the scheduling simulation (a blocking-aware version of this computation is sketched at the end of this exercise). What can you see? Explain.
  11. From the scheduling simulation, give the blocking time of each task due to the shared resource S.
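
When tasks share resources, the worst case response time recurrence gains a blocking term Bi: Ri = Ci + Bi + sum over the higher priority tasks j of ceiling(Ri / Tj) * Cj. The sketch below extends the response time function given in section I.4; it is a textbook illustration, not Cheddar's implementation, and the value of Bi depends on the protocol (with PIP a task can be blocked by at most one critical section per lower priority task and per resource, while classical bounds for PCP limit the blocking to a single critical section).

  import math

  def response_time_with_blocking(tasks, i, blocking):
      """Worst case response time with a blocking term due to shared resources.
      Same assumptions as before: periodic, synchronous, deadlines <= periods,
      preemptive fixed priorities (larger value = higher priority)."""
      me = tasks[i]
      hp = [t for t in tasks if t["priority"] > me["priority"]]
      r = me["capacity"] + blocking
      while True:
          new_r = (me["capacity"] + blocking +
                   sum(math.ceil(r / t["period"]) * t["capacity"] for t in hp))
          if new_r == r:
              return r
          if new_r > me["period"]:      # divergence guard
              return None
          r = new_r

  # The blocking times read from the scheduling simulation (question 11) can be
  # plugged in as the 'blocking' argument.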

Exercise 5

We now see a Cheddar model with two shared resources. We assume two periodic tasks with the following parameters:

We use a fixed priority preemptive scheduling policy for this task set.

T1 and T2 require access to the shared resources R1 and R2 as follows:

We also assume that both R1 and R2 apply PIP (Priority Inheritance Protocol).
  1. Open the model for exercise 5: exo5_base.xmlv3. It consists of a processor composed of one core and the task set described above.
  2. Edit a Cheddar model for resources R1 and R2
  3. Compute the scheduling simulation during the first 30 units of time (scheduling simulation button).
  4. Find in the time lines where T1 and T2 lock and unlock R1 and R2.
  5. What can you see? Is it surprising? Explain.
  6. We now apply the PCP protocol (Priority Ceiling Protocol in Figure 8). For such a purpose, you have to update both R1 and R2. PCP works as follows (a short reminder of the protocol is also sketched at the end of this exercise):
  7. Compute again the scheduling simulation during the first 30 units of time with the PCP resources.
  8. Show in the time lines when T1 and T2 lock and unlock R1 and R2.
  9. Say when the priorities of the tasks change due to PCP.
  10. Compare those results with those of question 1.
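
As a reminder, in the classical Priority Ceiling Protocol each resource is assigned a ceiling equal to the highest priority of the tasks that may lock it, and a task is allowed to lock a resource only if its priority is strictly higher than the ceilings of all the resources currently locked by other tasks (the Cheddar ADL reference guide describes the exact variant implemented by Cheddar). A small sketch of the ceiling computation, with illustrative priorities (larger value = higher priority):

  def priority_ceilings(resources, priorities):
      """Ceiling of a resource = highest priority among the tasks that may
      lock it. 'resources' maps a resource name to the set of tasks using it,
      'priorities' maps a task name to its priority."""
      return {r: max(priorities[t] for t in users) for r, users in resources.items()}

  # Illustrative data in the spirit of exercise 5: T1 and T2 both use R1 and R2.
  priorities = {"T1": 2, "T2": 1}
  resources = {"R1": {"T1", "T2"}, "R2": {"T1", "T2"}}
  print(priority_ceilings(resources, priorities))   # -> {'R1': 2, 'R2': 2}
  # The ceiling rule is what prevents deadlocks when critical sections on
  # several resources are nested in opposite orders.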

Exercise 6

We investigate a system embedded in a car which displays various data to the driver. This system is composed of several tasks and we want to verify their timing behavior.

The system is composed of 5 tasks:

  1. Task SENSOR_1 reads every 10 ms the speed of the car.
  2. Task SENSOR_2 reads every 10 ms the temperature inside the car.
  3. Task SENSOR_3 reads every 40 ms the GPS position of the car.
  4. Task DISPLAY_1 produces on a screen every 12 ms a summary of the data retrieved by the three sensor tasks.
  5. Finally, task DISPLAY_2 displays on a second screen the map of the current location of the car. This display is made on request by the driver every 6 ms.

An engineer implements all those tasks and then runs each of them alone on the targeted system. After several measurements, he shows that:
  1. The execution times of tasks SENSOR_2 and DISPLAY_1 never exceed 2 ms.
  2. Task SENSOR_3 needs between 2 and 4 ms to successfully run its code.
  3. Finally, tasks SENSOR_1 and DISPLAY_2 have an execution time of less than 1 ms.

We assume that all tasks start their first activation at the same time. Furthermore, all tasks must complete their current activation before they have to start the next one.

We assume an execution platform composed of one processor which provides a preemptive fixed priority scheduler. Priorities range from 0 to 31. 0 is the lowest priority value. The system does not allow designers to assign the same priority value to several tasks.

Question 1:

Question 2:
With Cheddar, from the priority assignment of the previous question, compute the worst case response times with the Joseph and Pandya method for tasks SENSOR_1, SENSOR_2, and SENSOR_3.
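
As a cross check of this question, the task set of this exercise can be encoded directly, assuming that the priority assignment asked for in question 1 follows Rate Monotonic (shorter period means higher priority, with distinct priority values) and taking the largest measured execution times as capacities. This sketch reuses the textbook response time iteration from section I.4, not Cheddar itself.

  import math

  def rta(tasks, i):
      # Fixed point iteration Ri = Ci + sum(ceil(Ri/Tj) * Cj) over the higher
      # priority tasks j, as in the response time sketch of section I.4.
      me = tasks[i]
      hp = [t for t in tasks if t["priority"] > me["priority"]]
      r = me["capacity"]
      while True:
          nr = me["capacity"] + sum(math.ceil(r / t["period"]) * t["capacity"] for t in hp)
          if nr == r or nr > me["period"]:
              return nr if nr == r else None
          r = nr

  # Task set of exercise 6, worst measured execution times used as capacities.
  tasks = [
      {"name": "DISPLAY_2", "capacity": 1, "period": 6},
      {"name": "SENSOR_1",  "capacity": 1, "period": 10},
      {"name": "SENSOR_2",  "capacity": 2, "period": 10},
      {"name": "DISPLAY_1", "capacity": 2, "period": 12},
      {"name": "SENSOR_3",  "capacity": 4, "period": 40},
  ]
  # Rate Monotonic: shorter period => higher priority; distinct values, 0 is lowest.
  for prio, task in enumerate(sorted(tasks, key=lambda t: t["period"], reverse=True)):
      task["priority"] = prio

  for i, t in enumerate(tasks):
      print(t["name"], rta(tasks, i))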

Question 3:
In fact, tasks SENSOR_1, SENSOR_2, and SENSOR_3 share a memory area protected by a semaphore (i.e. accessed in mutual exclusion). SENSOR_1 and SENSOR_2 need this memory during their whole capacity while SENSOR_3 only needs it during the first 2 units of time of its capacity. The semaphore protecting the memory applies the PIP protocol. Explain why the worst case response times computed in the previous question are no longer valid.

Question 4:
With Cheddar, compute the scheduling of this task set over the first 60 units of time. Say when the shared resource is allocated and released. Say also if some priority inversions occur. Finally, say when the priorities of the tasks change due to PIP.

Question 5:
For questions 5 and 6, we now assume that all tasks are independent.

In order to activate the tasks more frequently, we experiment with the scheduling of those tasks on a two processor platform: processors a and b. Task migrations are not allowed: we therefore apply a partitioning approach.

The tasks are defined with the following parameters:

The tasks are scheduled according to a preemptive fixed priority scheduling policy. Task priorities are assigned according to Rate Monotonic.

Show, without any scheduling simulation, that there exists at least one task which cannot meet its deadline.

Question 6:
In fact, tasks were assigned to processors randomly. All tasks can run on both processors. Processors a and b only differ in their execution speed: on processor b, a task runs twice as fast as on processor a.

Assign tasks to processors a and b so that all task deadlines can be met. Explain your choice.

Exercise 7

This exercise is extracted from [COT 00] and is about a simplified architecture model of the Mars Pathfinder mission. In this exercise, you must look for a design mistake and propose a solution for it. In 1997, Mars Pathfinder landed a mobile robot called Sojourner on Mars. This mobile robot is controlled by multitask software running on a VxWorks target. This software is composed of the following tasks:

Tasks       Priorities  Periods/Deadlines  WCET
SCHED_BUS   1           125 ms             25 ms
DATA        2           125 ms             25 ms
CONTROL     3           250 ms             25 ms
RADIO       4           250 ms             25 ms
VIDEO       5           250 ms             25 ms
MESURE      6           5000 ms            50 ms
FORECAST    7           5000 ms            between 50 ms and 75 ms
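
Before answering the questions below, the task set of this table can be encoded and its processor load computed; this small sketch only gives the utilization of the tasks taken as independent and does not, by itself, answer question 1.

  # Task set of the Mars Pathfinder exercise; the WCET of FORECAST is taken
  # at its maximum value of 75 ms.
  tasks = {
      "SCHED_BUS": (25, 125), "DATA": (25, 125), "CONTROL": (25, 250),
      "RADIO": (25, 250), "VIDEO": (25, 250), "MESURE": (50, 5000),
      "FORECAST": (75, 5000),
  }
  utilization = sum(wcet / period for (wcet, period) in tasks.values())
  print(f"processor utilization = {utilization:.3f}")
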
  1. During the Mars Pathfinder mission, operators noticed that some deadlines were missed, leading to a reboot of the hardware and some loss of data. What are the missed deadlines? Why?
  2. How can this issue be solved? Apply your solution to your Cheddar ADL model.