{"config":{"indexing":"full","lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"Cheddar 3.3 User Guide Lab-STICC technical report Frank Singhoff, Hai Nam Tran, Ill-ham Atchadam, Christian Fotsing, St\u00e9phane Rubini, Mourad Dridi Cheddar is a free real time scheduling framework. Cheddar is designed for checking task temporal constraints of a real time application/system. It can also help you for quick prototyping of real time schedulers. Finally, it can be used for educational purpose. Cheddar is a free software, and you are welcome to redistribute it under certain conditions; See the GNU General Public License for details. The Cheddar project was started in 2002 by Frank Singhoff, Lab-STICC Team, University of Brest. Since 2008, Ellidiss technologies also contributes to the development of Cheddar and provides industrial support. WARNING : this user's guide supposes that you have a minimum background on real-time scheduling. If it's not your case, take a look on this link which includes some very basic articles or book references . This link also provides a description of the analytical methods implemented into Cheddar and gives some publications that show how to use Cheddar. To completed this user guide, you also have in this technical report , a precise description of all entities used in Cheddar. 1. Basic features: scheduling simulation and feasibility tests for independent tasks 1.1 First step : a simple scheduling simulation 1.2 Other available schedulers and task arrival patterns 1.3 Scheduling options 2. Cheddar project files (XML and AADL files) 3. Cheddar command line 4. Scheduling with dependencies 4.1 Shared resources analysis tools 4.2 Task precedencies 4.2.1 Editing Task precedencies 4.2.2 How to transform a dependent task set into an independent task set : the Chetto/Blazewicz modification rules 4.2.3 Computing end to end response time : the Holistic approach 4.3 Buffer analysis tools 4.4 Message scheduling services 5. Multiprocessor scheduling services 5.1 Global multiprocessor scheduling 5.2 Cache interference analysis 5.3 Memory interferences analysis 5.4 Network-On-Chip interferences analysis 5.5 Partitionning algorithms 6. User-defined simulation code : how to run simulations of specific systems 6.1 Defining new schedulers or task activation patterns. 6.2 Examples of user-defined schedulers 6.2.1 Low-level statements versus High-level statements 6.2.2 User-defined scheduler built with User's defined Task Parameters 6.3 Scheduling with user-defined task arrival patterns 6.4 Running a simulation with a user-defined scheduler 6.5 Looking for user-defined properties during a scheduling simulation 6.6 List of predefined variables and available statements for user-defined code 7. Using Cheddar within OSATE 2 7.1 How to install Cheddar plugin within OSATE 2 7.2 Cheddar plugins simple example of use 7.3 Cheddar AADL properties 8. Hierarchical scheduling 8.1 ARINC 653 scheduling 8.1.1 How to model an ARINC 653 two-levels scheduling 8.1.2 Example of an ARINC 653 scheduling 8.2 Aperiodic server hierarchical scheduling 9. MILS and security services","title":"Home - Table of Contents"},{"location":"pages/basics/","text":"Basic features: scheduling simulation and feasibility tests for independent tasks In this chapter, you find a description of the most important scheduling and feasibility services provided by Cheddar in the case of independent tasks. 
First step: a simple scheduling simulation This section shows you how to call the simplest features of Cheddar. Cheddar provides tools to check temporal constraints of real time tasks. These tools are based on classical results from real time scheduling theory. Before calling such tools, you have to define a system which is mainly composed of several processors and tasks . To define a processor, you should first define one or multiple cores. For that, choose the \"Edit/Hardware/Core\" submenu. The window below is then displayed: Figure 1.1 Adding a core A core is defined by the following fields (see Figure 1.1): The name of the core. A core name can be any combination of literal characters including underscore. Space is forbidden. Each core must have a unique name. The scheduler hosted by the core. You can choose from a varied set of schedulers such as (to get a detailed description of these schedulers, see section Other Available schedulers and task arrival patterns ): \"Earliest Deadline First\" (or EDF). Tasks can be periodic or not and are scheduled according to their deadline. \"Least Laxity First\" (or LLF). Tasks can be periodic or not and are scheduled according to their laxity. The laxity is computed by : L_i = D_i - C'_i in which L_i is the laxity of the task, D_i is the deadline, and C'_i is the remaining capacity. \"Least Runtime Laxity First\" (a second interpretation of LLF). Tasks can be periodic or not and are scheduled according to their laxity. The laxity is computed by : L_i = D_i - (C'_i + t_i) in which L_i is the laxity of the task, D_i is the deadline, C'_i is the remaining capacity, and t_i is the time elapsed since the release time of the task. \"Rate Monotonic\" (or RM, or RMA, or RMS). Tasks have to be periodic, and the deadline must be equal to the period. Tasks are scheduled according to their period. Be aware that the value of the priority field of the tasks is ignored here. \"Deadline Monotonic\" (or DM). Tasks have to be periodic and are scheduled according to their deadline. Be aware that the value of the priority field of the tasks is ignored here. \"Posix 1003 Highest Priority First\". Tasks can be periodic or not. Tasks are scheduled according to the priority and the policy of the tasks. (Rate Monotonic and Deadline Monotonic use the same scheduler engine except that priorities are automatically computed from task period or deadline.) The POSIX 1003.1b scheduler supports the SCHED_RR, SCHED_FIFO and SCHED_OTHERS queueing policies. SCHED_OTHERS is a time sharing policy. SCHED_RR and SCHED_FIFO tasks must have priorities ranging from 1 to 255. Priority level 0 is reserved for SCHED_OTHERS tasks. The highest priority level is 255. \"Time sharing based on wait time\" (which is a Linux-like scheduler) and \"Time sharing based on cpu usage\". These two schedulers provide a way to share the processor as on a time sharing operating system. With the first scheduler, the more a ready task waits for the processor, the more its priority increases. With the second scheduler, the more a ready task uses the processor, the more its priority decreases. \"Round robin\" (with quantum). The processor is regularly shared between all the tasks. A quantum (which is a bound on the time a task keeps the processor) can be given. \"Maximum Urgency First based on laxity\" and \"Maximum Urgency First based on deadline\". Such schedulers are based on a hybrid priority assignment : a task priority is made of a fixed part and a dynamic part.
\"D-Over\". This scheduler is an EDF like but which is work fine when the processor is over-loaded. When the processor is over-loaded, D-Over is always able to predict which tasks will miss its deadline (in contrary to EDF). User-defined schedulers (\"Pipeline user-defined scheduler\", \"Automata user-defined scheduler\" or \"Compiled user-defined scheduler\"). These schedulers allow users to define their own scheduler into Cheddar (see section User Defined Scheduler for details). If the scheduler is preemptive or not . By default, the scheduler is set to be preemptive. The quantum value associated with the scheduler. This information is useful if a scheduler has to manage several tasks with the same dynamic or static priority : in this case, the simulator has to choose how to share the processor between these tasks. The quantum is a bound on the delay a task can hold the processor (if the quantum is equal to zero, there is no bound on the processor holding time). At the time we're speaking, the quantum value can be used with the POSIX 1003.1b scheduler (only with SCHED_RR tasks) and the round robin scheduler. With POSIX 1003.1b, two SCHED_RR tasks with the same priority level should share the processor with a POSIX round-robin policy. In this case, the quantum value is the time slot of this round-robin scheduler. Finally, the quantum value could also be used for user-defined scheduler (see User Defined Scheduler for details). Automaton name : user-defined scheduler can be expressed as an automaton. In this case, the this attribute stores the name of the automaton for the given core. Capacity , Period , and Priority : These attributes are used to perform scheduling analysis with a polling server, for more information see Hierachical Scheduler The User Defined Scheduler Source File Name is the name of a file which contains the source code of a user-defined scheduler (see section User Defined Scheduler for details). Start time : time of the first release of the task Speed . This attribute is the speed of the core. Default value is 1 and only positive non null values are accepted for this attribute. When the value of this attribute is equal to n, it means that task are executed n times quicker. L1 Cache system name : This attribute indicate which cache is used to the core unit. Warning : with Cheddar, to add a core (or any object), you have to push the Add button before pushing the Close button. That allows you to define several objects quickly without closing the window (you should then push Add for each defined object). Then you can define a processor. For that choose the \"Edit/Entities/Hardware/Processor\" submenu. The window below is then displayed: Figure 1.2 Adding a processor A processor is defined by the following fields (see Figure 1.2) : The name of the processor. A processor name can be any combination of literal characters including underscore. Space is forbidden. Each processor must have a unique name. At the time we're speaking, the network field is not used (planned to be used in order to simulate message scheduling). Processor type 4 kinds of processor exists in Cheddar: Monocore type . It contains only one core and can run only one task at a time. Identical multicores type . The processor contains several cores that are identical, i.e. have the same scheduling protocol (but with potentionnaly different parameters). All core of such processor have the same speed. Uniform multicores type . The processor contains cores that have different speeds. 
However, speeds have proportional values. All cores of the same processor have to run the same scheduling protocol. Unrelated multicores type . The processor contains cores with different speeds. Speeds have unrelated values. Again, all cores of the same processor have to run the same scheduling protocol. Migration type . This attribute specifies how the tasks are allowed to move from one core to another. No migration type . A task cannot move from one core to another. This is typically the case of multicore ARINC 653 architectures, or of architectures with the concept of core affinity (i.e. the POSIX standard). Job level migration type . A task running on a core can move to another core only when its current job is completed. Running the same job on two different cores is not allowed. Time unit migration type . A task can migrate to any core at any time. Cores table , which contains the list of cores previously defined. The user should select exactly one core in the monocore processor case, and at least one core in the other cases. Figure 1.3 Adding an address space The next step in order to run a simulation is to define an address space. Choose the \"Edit/Entities/Software/Address space\" submenu. An address space models a piece of memory which contains tasks, buffers or shared resources. Figure 1.3 shows the widget used to define such a feature. At the time of writing, the information you have to provide is: A name. An address space name can be any combination of literal characters including underscore. Space is forbidden. Each address space name has to be unique. A processor name. This is the processor which hosts the address space. Some fields related to the size of the address space memory: the text memory size , the heap memory size , the stack memory size and the data memory size . The fields related to memory size will be used in the next Cheddar release in order to perform a global memory analysis. Figure 1.4 Adding a task Let us now see how to define a task, the last feature required to perform the simplest performance analysis. Choose the \"Edit/Entities/Software/Task\" submenu. The window of Figure 1.4 is then displayed. This window is composed of 3 sub-parts : the \"main part\", the \"offset part\" and the \"user's defined parameters part\". The main part contains the following information : At a minimum, a task is defined by a name (the task name should be unique), a capacity (bound on its execution time) and a place to run it (a processor name and an address space name ). The other parameters are optional but can be required for a particular scheduler. A type of task . It describes the way the task is activated. An aperiodic task is only activated once. A periodic task is activated many times and the delay between two activations is fixed. A poisson process task is activated many times and the delay between two activations is random : the random law used to generate these delays is an exponential one (poisson process). A sporadic task is a task which is activated many times with a minimal delay between two successive activations. If the task type is \"user-defined\", the task activation law is defined by the user (see section User Defined Scheduler of this user's guide). The period . It is the time between two task activations. The period is a constant delay for a periodic task. It is an average delay for a poisson process task.
If you have selected a processor that owns a Rate Monotonic or a Deadline Monotonic scheduler, you have to give a period for each of its tasks. A start time . It is the time when the task arrives in the system (its first activation time). A deadline . The task must end its activation before its deadline. A deadline is a relative value : to get the absolute date at which a task must end an activation, you should add the time when the task was awoken/activated to the task deadline. Warning : the deadline must be equal to the period if you define a Rate Monotonic scheduler. A priority and a policy . These parameters are dedicated to the POSIX 1003.1b/Highest Priority First scheduler. Priority is the fixed priority of a task. Policy can be SCHED_RR, SCHED_FIFO or SCHED_OTHERS and describes how the scheduler chooses a task when several tasks have the same priority level. Warning : the priority and the policy are ignored by a Rate Monotonic and a Deadline Monotonic scheduler. A jitter . The jitter is a maximum lateness on the task wake up time. This information can be used to express task precedencies and to apply methods such as the Holistic task response time method. A blocking time . It is a bound on shared resource waiting time. This delay can be set by the user but can also be computed by Cheddar if you describe how shared resources are accessed. An activation rule . The name of the rule which defines the way the task should be activated. Only used with user-defined tasks (see section User Defined Scheduler for details). A criticality level . This field indicates how critical the task is. Currently used by the MUF scheduler or any user-defined schedulers. A seed . If you define a poisson process task or a user-defined task, you can set here how random activation delays should be generated (in a deterministic way or not). The \"Seed\" button proposes a randomly generated seed value but, of course, you can give any seed value. This seed value is used only if the Predictable option is selected. If the Unpredictable option is selected, the seed is initialized at simulation time with \"gettimeofday\". The text memory size and stack memory size . The fields related to task memory size will be used in the next Cheddar release in order to perform memory requirement analysis. The second and the third parts store task information which is less frequently used. The offsets part is a table. Each entry of the table stores two pieces of information : an activation number and a value. The offset part allows the user to change the wake up time of a task on a given activation number. For each activation number stored in the \"Activations:\" fields, the task wake up time will be delayed by the amount of time given in the \"Values\" fields. Finally, the third part (the \"User's defined parameters\" part) contains task parameters (similar to the deadline, the period, the capacity ...) used by user-defined schedulers. With this part, a user can define new task parameters. A user-defined task parameter has a value, a name and a type. The types currently available to define user-defined task parameters are : string, integer, boolean and double. Warning : when you create tasks, in most cases, Cheddar does not check if your task parameters are erroneous according to the scheduler you previously selected : these checks are done at task analysis/scheduling time. Of course, you can always change task and processor parameters with the \"Edit\" menus. When tasks and processors are defined, we can start the task analysis.
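To make the role of these parameters concrete, here is a small sketch in plain Python (this is not Cheddar's API; the task names and values are invented for the example). It models a task set with the main parameters described above and computes the processor utilization factor, one of the quantities reported by the feasibility tools presented next.

```python
# Illustrative sketch only, not Cheddar's API: task names and values are invented.
# The fields mirror the task parameters described above.
from dataclasses import dataclass

@dataclass
class Task:
    name: str        # unique task name
    capacity: int    # bound on the execution time (C)
    period: int      # delay between two activations (T)
    deadline: int    # relative deadline (D)
    start_time: int = 0
    priority: int = 1

task_set = [
    Task("T1", capacity=2, period=10, deadline=10),
    Task("T2", capacity=3, period=12, deadline=12),
    Task("T3", capacity=5, period=20, deadline=20),
]

# Processor utilization factor: U = sum of Ci / Ti.
u = sum(t.capacity / t.period for t in task_set)
print(f"U = {u:.3f}")                      # 0.700 for this task set

# Sufficient Rate Monotonic feasibility test of [LIU 73]: U <= n * (2^(1/n) - 1).
n = len(task_set)
print(u <= n * (2 ** (1 / n) - 1))         # True: the bound is about 0.780 for n = 3
```

The last two lines apply the sufficient utilization bound of [LIU 73] for Rate Monotonic with deadlines equal to periods: if U does not exceed n(2^(1/n) - 1), the task set is schedulable; when U exceeds the bound, the test is simply inconclusive.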
Cheddar provides two kinds of analysis tools: Feasibility analysis tools : these tools compute information without scheduling the set of tasks. The references of the equations used to compute this feasibility information are always provided with the results. Feasibility services are provided for tasks and buffers. Simulation analysis tools : with these tools, a scheduling has to be computed first. When the scheduling is computed (of course, this step can take a long time ...), the resulting scheduling is drawn in the top part of the window and information is computed and displayed in the bottom part of the window. Information retrieved here is only valid for the computed scheduling. The simplest tools provided by Cheddar check if a set of tasks meets its temporal constraints. Simulation services are also provided for other resources (for buffers for instance). All these tools can be called from the \"Tools\" menu and from some toolbar buttons : From the submenu Tools/Scheduling/Customized scheduling simulation , the scheduling of each processor is drawn on the top of the Cheddar main window (see below). From the drawn scheduling, missed deadlines are shown and some statistics are displayed (the number of preemptions for instance). From the submenu Tools/Scheduling/Customize scheduling feasibility , response time, base period and processor utilization level are computed and displayed on the bottom of the Cheddar main window (see Figure 1.5). Figure 1.5 The Cheddar main window In the top part of this window, each resource, buffer, message and task is shown by a time line: For a task time line: Each vertical red line means that the task is activated (woken up) at this time. Each horizontal rectangle means that the task is running at this time. The horizontal rectangle can have a task specific color. This horizontal colored rectangle can also be found on the core time line, which shows how the core is shared by the tasks of the architecture model. Task specific colors can be deactivated, i.e. set to black for all tasks, with the options window. For a resource time line: Each vertical blue line means that the resource is allocated by a task at this time. Each vertical red line means that the resource is released by a task at this time. Each horizontal rectangle means that the resource is used by a task which is running at this time. The color of this horizontal rectangle is the same color used in the task time line. For a message time line: Each vertical blue rectangle means that the message is sent at this time. Each vertical red rectangle means that the message is received at this time. To find the task sending or receiving a message, users have to check the core unit time line or the task time lines to find the related tasks. To produce such a display, users have to define for each message the corresponding dependencies that are used to compute the related events. For a buffer time line: Each horizontal blue rectangle means that a task writes data into the buffer. Each horizontal red rectangle means that a task reads data from the buffer. To find the task writing or reading data in/from the buffer, users have to check the core unit time line or the task time lines to find the related tasks. To produce such a display, users have to define for each buffer the corresponding dependencies that are used to compute the related events. The scheduling result can also be saved in an XML file. This allows users to run their own tools on Cheddar scheduling results.
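To give an idea of the kind of computation hidden behind the feasibility tests mentioned above (a textbook sketch, not Cheddar's implementation), the following Python code computes classical worst case response times for independent, periodic, fixed priority, preemptive tasks released simultaneously, by iterating R_i = C_i + sum over higher priority tasks j of ceiling(R_i / T_j) * C_j until a fixed point is reached. The task set reuses the invented values of the previous sketch.

```python
import math

def response_times(tasks):
    """Worst-case response times for tasks given as (name, C, T, D) tuples,
    listed from highest to lowest priority. Returns None when R exceeds D."""
    results = {}
    for i, (name, c, _t, d) in enumerate(tasks):
        higher = tasks[:i]          # tasks with a higher priority than task i
        r = c
        while True:
            new_r = c + sum(math.ceil(r / tj) * cj for (_, cj, tj, _) in higher)
            if new_r > d:
                results[name] = None    # worst-case response time exceeds the deadline
                break
            if new_r == r:
                results[name] = r       # fixed point reached: R_i = r
                break
            r = new_r
    return results

# Hypothetical task set (same invented values as the previous sketch), in Rate
# Monotonic priority order (smallest period = highest priority).
print(response_times([("T1", 2, 10, 10), ("T2", 3, 12, 12), ("T3", 5, 20, 20)]))
# {'T1': 2, 'T2': 5, 'T3': 10}
```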
The scheduling result of Cheddar is an event table that gives, for each time unit, the set of events produced by the scheduling simulator. The event table is the data structure which is used by the simulator engine to perform analysis on the scheduling. For each event, extra data related to the event is also stored. Here are the main produced events and their data: Start_Of_Task_Capacity . This event is generated when a task runs the first unit of time of its capacity. The event stores the name of the started task. End_Of_Task_Capacity . This event is generated when a task runs the last unit of time of its capacity. The event stores the name of the completed task. Write_To_Buffer . This event is generated when a task writes data into a buffer. The event stores the name of the buffer, the name of the task and the size of the written data. Read_From_Buffer . This event is generated when a task reads data from a buffer. The event stores the name of the buffer, the name of the task and the size of the read data. Running_Task . This event is generated when a task gets the processor. The event stores the name of the running task, its current priority, the core on which it runs, its CRPD value and the state of the associated cache. Task_Activation . This event is generated when a task is waking up. The event stores the name of the awoken task. Send_Message . This event is generated when a task is sending a message. The event stores the name of the message and the name of the task. Receive_Message . This event is generated when a task is receiving a message. The event stores the name of the message and the name of the task. Allocate_Resource . This event is generated when a task takes a resource. The event stores the name of the resource and the name of the task. Release_Resource . This event is generated when a task releases a resource. The event stores the name of the resource and the name of the task. Wait_For_Resource . This event is generated when a task waits for the access to a resource. The event stores the name of the resource and the name of the task. Address_Space_Activation . This event is generated with hierarchical scheduling such as ARINC 653, when an address space is activated. This event stores the name of the activated address space and the activation duration, i.e. the amount of time the address space will stay activated. Buffer_Overflow . This event is generated when running scheduling simulations with buffers and a task tries to write to a buffer which is full. Buffer_Underflow . This event is generated when running scheduling simulations with buffers and a task tries to read from a buffer which is empty. Context_Switch_Overhead . This event is generated when there is a context switch, i.e. a change of the running task. Preemption . This event is generated whenever there is a preemption. Be aware that, for scalability, not all events are generated by Cheddar by default. Please refer to the options window to select which events the simulator will produce or not. Here are some examples of event tables produced by Cheddar: event_table.xml : this simple event table is produced from a set of independent tasks scheduled with EDF. The file event_table_large.xml is similar except for its size (it is a large file produced with a 200 task set). event_table_fixed_priority.xml : this event table is produced from a fixed priority scheduler. This scheduler provides an extra piece of information for the event Running_Task. This extra information is the current priority of the running task.
event_table_buffer.xml : this event table is produced from a set of tasks sharing a buffer. event_table_shared_resource.xml : this event table is produced from a set of tasks sharing a PCP resource. event_table_message.xml : this event table is produced from a set of tasks sending/receiving messages. To get a summary of the tools provided by Cheddar, see section User Defined Scheduler . Other available schedulers and task arrival patterns In Cheddar, you will find several schedulers. Some of them are directly implemented into the framework; others can be defined by the user. The list below describes the built-in schedulers you may find in the current release: Rate Monotonic : run the task with the smallest period first. The priority field of the tasks is ignored here. All tasks have to be periodic. Deadline Monotonic : run the task with the smallest static deadline first. The priority field of the tasks is ignored here. All tasks have to be periodic. Earliest Deadline First : run the task with the smallest dynamic deadline first. Tasks can be periodic or not. Least Laxity First and Least Runtime Laxity First : run the task with the smallest laxity first. The laxity is computed according to the two definitions given above. Posix 1003.1b Highest Priority First scheduler : run the task with the highest fixed priority first. Supports the SCHED_RR, SCHED_FIFO and SCHED_OTHERS policies. SCHED_OTHERS is a time sharing scheduler. SCHED_RR and SCHED_FIFO are policies which enforce real time scheduling. Tasks can be periodic or not. Tasks are scheduled according to the priority and the policy of the tasks. (Rate Monotonic and Deadline Monotonic use the same scheduler engine except that priorities are automatically computed from task period or deadline.) The POSIX 1003.1b scheduler supports the SCHED_RR, SCHED_FIFO and SCHED_OTHERS queueing policies. SCHED_OTHERS is a time sharing policy. SCHED_RR and SCHED_FIFO tasks must have priorities ranging from 1 to 255. Priority level 0 is reserved for SCHED_OTHERS tasks. The highest priority level is 255. Maximum Urgency First scheduler [STE 91] : run the tasks according to a mixed static and dynamic priority. The task to run is the task with the highest criticality level. If two tasks have the same criticality level, the scheduler then chooses the one with the smallest laxity. If two tasks have the same criticality level and the same laxity, the scheduler chooses the one with the highest fixed priority. D-over dynamic scheduler [KOR 92] : run the tasks as EDF but with a safe policy in case of transient overload. Round robin scheduler : give the processor to each task for a fixed delay, in a fixed order. It allows the use of a given quantum : in this case, a task stays on the processor until the quantum becomes exhausted. Time sharing scheduler based on task waiting time (scheduler similar to the one provided by Linux): run the task which has been waiting for the longest time. Time sharing scheduler based on cpu usage: run the task which has consumed the least cpu time. Earliest Deadline First Energy Harvesting : a deadline oriented scheduler that takes care of the energy harvested during execution. See [CHE 14] . AMC and EDF VD , which are two uniprocessor mixed criticality schedulers. DAG HLFET , a multicore scheduler that uses a DAG of task dependencies. See [ADA 74]. RUN (Reduction to Uniprocessor), an optimal multicore global scheduler, both online and offline. See [REG 11] . Three implementations of multicore global Proportionate Fair scheduling: PF , PD and PD2 . See [AND 04] .
EDZL , for Earliest Deadline Zero Laxity, which is a deadline oriented global multicore scheduler. See [CIR 07] . LLREF , Largest Local Remaining Execution First, which is a laxity oriented global multicore scheduler. Hierarchical schedulers for uniprocessor architectures to support the scheduling of aperiodic tasks jointly with periodic tasks [SPR 90] : Hierarchical Polling Aperiodic Server , which implements a uniprocessor fixed priority scheduler with an aperiodic task server. The aperiodic task server is a periodic task running the polling protocol. Hierarchical Priority Exchange Aperiodic Server , which implements a uniprocessor fixed priority scheduler with an aperiodic task server. The aperiodic task server is a periodic task running the priority exchange protocol. Hierarchical Sporadic Aperiodic Server , which implements a uniprocessor fixed priority scheduler with an aperiodic task server. The aperiodic task server is a periodic task running the sporadic protocol. Hierarchical Deferrable Aperiodic Server , which implements a uniprocessor fixed priority scheduler with an aperiodic task server. The aperiodic task server is a periodic task running the deferrable protocol. Hierarchical schedulers for uniprocessor architectures to support time-and-space partitioned architectures such as ARINC 653. This hierarchical scheduling has 2 levels of scheduling: 1) a scheduler inside each address space to select the task among the ones of the related address space; 2) a scheduler at the processor level to select the address space to activate. The following protocols have been implemented: Hierarchical Offline : address spaces are activated/scheduled according to an offline address space scheduling stored in an XML file. This scheduler models the ARINC 653 MAF partition scheduling. Hierarchical Cyclic : address spaces are activated/scheduled cyclically. Hierarchical Round : address spaces are activated/scheduled with a round robin policy. Hierarchical Fixed : address spaces are activated/scheduled according to their fixed priority. Besides the implemented scheduling protocols listed above, Cheddar provides a means to define your own scheduling protocols. The current Cheddar release provides examples of user-defined schedulers stored in some .sc files (see the project_examples sub-directory and section VI ). These scheduler examples are: arinc.sc : modeling of an ARINC 653 partition and task scheduler. schedule_according_to_criticity.parametric-cpu.sc : schedule tasks according to a task criticality level. non_preemptive_llf.sc : example of an LLF scheduler with no preemption when tasks have the same laxity value. ts.sc : the processor is given to the task which ran the least frequently. fcfs.sc : first come/first served scheduling policy. short.sc : schedule the shortest task first (with the smallest capacity). dvd0.parametric-cpu.sc : Dynamic value density scheduler from the University of York [ALD 98] . mllf.sc : Modified Least Laxity First scheduler with f=0.5 [OVE 97] . muf.sc : Maximum Urgency First scheduler [STE 91] . In the same way, Cheddar provides a set of built-in task models. The built-in task models are: Aperiodic tasks : this kind of task arrives in the system at a given time (the start time, see the \"Update Tasks\" widget), runs a job and leaves the system. Periodic tasks : this kind of task periodically runs a job. A periodic task has a start time. The period of the task stores the fixed delay between two successive task wake-up times. See [LIU 73] . Sporadic tasks : this kind of task cyclically runs a job.
A sporadic task has a start time. The period field stores the minimum delay between two successive task wake-up times. Poisson process tasks : this kind of task periodically runs a job. A poisson process task has a start time. The period of the task stores the average delay between two successive task wake-up times. The effective delay between two wake-up times is computed with an exponential random generator. Frame Task : this model implements the multiframe task model of [BAR 99] . Scheduling task : this model is planned to be used for hierarchical scheduling. Periodic inner periodic : a task model to specify bursts of periodic releases separated by a fixed amount of time. This task model then uses 2 periods: an inner period for the delay between two task releases during a burst and a second period to express the delay between two bursts. See [AUD 93] . Sporadic inner periodic : this task model is similar to periodic inner periodic except that the delay between two bursts is sporadic (we specify the minimum delay between two bursts). See [AUD 93] . Again, you can define your own task model with user-defined code. Examples of user-defined tasks provided with this Cheddar release can be found in these files: sporadic.sc : tasks are woken up with a minimal delay between two successive wake-ups. The minimum delay is stored in the period field and the wake-up delay is randomly generated (exponential distribution). random_capacity.sc : tasks with a randomly generated capacity. increasing_capacity.sc : tasks with a growing capacity. activations.sc : various task models. Scheduling options Figure 1.6 Scheduling options windows The submenu \"Tools/Scheduling/Options\" allows you to tune the way all subsequent scheduling simulations will be done (see Figure 1.6) : If you push the Offsets button, the simulation engine takes care of the task offsets given at task definition time : task activations can then be delayed if you provide offset values at task definition time. If you push the Precedencies button, task scheduling will be done so that task precedencies are met. By default, task precedencies are ignored. If you push the Resources button, accesses to shared resources will be done during the simulation. By default, all shared resources are ignored. Cheddar allows you to activate tasks randomly . If you want to do simulations with this kind of task, the simulator engine has to compute some random values. From this window, you can tune the way random activation delays are generated. A seed value can be associated with each task but you can also use only one seed for all tasks. In both cases, you can do \"predictable\" or \"unpredictable\" simulations. If you choose \"predictable\" simulation, the seed will be initialized with a given value. In the other case, the seed is initialized with \"gettimeofday\". Pushing the Predictable for all tasks radio button leads the simulator to use the seed value of the Option window for all tasks during simulation. If the Task specific seed radio button is pushed instead, the seed of each task is used to generate task activation delays. You should notice that, by default, the seed value is 0, but of course, you can choose any value. Pushing the Seed button gives you a random value for the seed. The check buttons on the right side of the window allow the user to define which events will be generated into the event table at simulation time (see section Multiprocessor scheduling service ).
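As a complement to the seed options described above, the sketch below (plain Python, not the simulator's code; the helper name is invented) shows how the random activation dates of a poisson process task can be generated: inter-activation delays follow an exponential distribution whose average is the task period, a fixed seed gives a \"predictable\" simulation, and seeding from the current time mimics the \"unpredictable\" case initialized with \"gettimeofday\".

```python
import random
import time

def activation_times(period, count, seed=None):
    """Generate successive activation dates of a poisson process task.
    The mean inter-activation delay is the task period. A fixed seed gives a
    "predictable" simulation; seed=None mimics the "unpredictable" case by
    seeding from the current time (similar in spirit to gettimeofday)."""
    rng = random.Random(seed if seed is not None else time.time())
    t, dates = 0.0, []
    for _ in range(count):
        t += rng.expovariate(1.0 / period)   # exponential delay, mean = period
        dates.append(round(t, 2))
    return dates

print(activation_times(period=10, count=5, seed=0))   # same dates on every run
print(activation_times(period=10, count=5))           # different dates each run
```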
Figure 1.7 Scheduling options windows (both feasibility and simulation) The submenu Tools/Scheduling/Scheduling simulation allows you to tune the way the next scheduling simulation and the next feasibility test will be done (see Figure 1.7). Options related to which information the engine has to compute when the scheduling sequence is built are : Pushing the Schedule all processors check button implies that the scheduling simulation will be computed on all defined processors. If this button stays unchecked, the user has to choose a given processor. Pushing the Number of context switch button implies computing the number of context switches from the computed scheduling sequence. Pushing the Number of preemption button implies computing the number of preemptions from the computed scheduling sequence. Pushing the Task response time button implies computing the worst/best/average task response times from the computed scheduling sequence. Pushing the Blocking time button implies computing the worst/best/average task blocking times on shared resources from the computed scheduling sequence. Pushing the Run event analyzers button implies running the user-defined code (see section V) on the computed scheduling sequence. The Display event table, Automatically export event table and Event table file name options are related to the computed scheduling sequence. These options allow you to save the computed scheduling into a file in an XML format or to display it on the screen. Options related to which information the feasibility tests will compute are : Pushing the Feasibility on all processors check button implies that the feasibility tests will be computed on all defined processors. If this button stays unchecked, the user has to choose a given processor. Pushing the Feasibility test based on the processor utilization factor button implies computing such a test. Pushing the Feasibility test based on worst case task response time button implies computing such a test.","title":"1 - Basic Features"},{"location":"pages/basics/#basic-features-scheduling-simulation-and-feasibility-tests-for-independent-tasks","text":"In this chapter, you will find a description of the most important scheduling and feasibility services provided by Cheddar in the case of independent tasks.","title":"Basic features: scheduling simulation and feasibility tests for independent tasks"},{"location":"pages/basics/#first-step-a-simple-scheduling-simulation","text":"This section shows you how to call the simplest features of Cheddar. Cheddar provides tools to check temporal constraints of real time tasks. These tools are based on classical results from real time scheduling theory. Before calling such tools, you have to define a system which is mainly composed of several processors and tasks . To define a processor, you should first define one or multiple cores. For that, choose the \"Edit/Hardware/Core\" submenu. The window below is then displayed: Figure 1.1 Adding a core A core is defined by the following fields (see Figure 1.1): The name of the core. A core name can be any combination of literal characters including underscore. Space is forbidden. Each core must have a unique name. The scheduler hosted by the core. You can choose from a varied set of schedulers such as (to get a detailed description of these schedulers, see section Other Available schedulers and task arrival patterns ): \"Earliest Deadline First\" (or EDF). Tasks can be periodic or not and are scheduled according to their deadline. \"Least Laxity First\" (or LLF).
Tasks can be periodic or not and are scheduled according to their laxity. The laxity is computed by : L_i = D_i - C'_i in which L_i is the laxity of the task, D_i is the deadline, and C'_i is the remaining capacity. \"Least Runtime Laxity First\" (a second interpretation of LLF). Tasks can be periodic or not and are scheduled according to their laxity. The laxity is computed by : L_i = D_i - (C'_i + t_i) in which L_i is the laxity of the task, D_i is the deadline, C'_i is the remaining capacity, and t_i is the time passed since the release time of the task. \"Rate Monotonic\" (or RM, or RMA, or RMS). Tasks have to be periodic, and deadline must be equal to period. Tasks are scheduled according to their period. You have to be aware that the value of the priority field of the tasks is ignored here. \"Deadline Monotonic\" (or DM). Tasks have to be periodic and are scheduled according to their deadline. You have to be aware that the value of the priority field of the tasks is ignored here. \"Posix 1003 Highest Priority First\". Tasks can be periodic or not. Tasks are scheduled according to the priority and the policy of the tasks. (Rate Monotonic and Deadline Monotonic use the same scheduler engine except that priorities are automatically computed from task period or deadline). POSIX 1003.1b scheduler supports SCHED_RR, SCHED_FIFO and SCHED_OTHERS queueing policies. SCHED_OTHERS is a time sharing policy. SCHED_RR and SCHED_FIFO tasks must have priorities ranging between 255 and 1. Priority level 0 is reserved for SCHED_OTHERS tasks. The highiest priority level is 255. \"Time sharing based on wait time\" (which is a Linux-like scheduler) and \"Time sharing based on cpu usage\". These two schedulers provide a way to share the processor as on a time sharing operatong system. With the first scheduler, the more a ready task waits for the processor and the more its priority increases. With the second scheduler, the more a ready task uses the processor and the more its priority decreases. \"Round robin\" (with quantum). The processor is regulary shared between all the tasks. A quantum (which is a bound on the time a task keeps the processor) can be given. \"Maximum Urgency First based on laxity\" and \"Maximum Urgency First based on deadline\". Such schedulers are based on an hybrid priority assignment : a task priority is made of a fixed part and a dynamic part (see ). \"D-Over\". This scheduler is an EDF like but which is work fine when the processor is over-loaded. When the processor is over-loaded, D-Over is always able to predict which tasks will miss its deadline (in contrary to EDF). User-defined schedulers (\"Pipeline user-defined scheduler\", \"Automata user-defined scheduler\" or \"Compiled user-defined scheduler\"). These schedulers allow users to define their own scheduler into Cheddar (see section User Defined Scheduler for details). If the scheduler is preemptive or not . By default, the scheduler is set to be preemptive. The quantum value associated with the scheduler. This information is useful if a scheduler has to manage several tasks with the same dynamic or static priority : in this case, the simulator has to choose how to share the processor between these tasks. The quantum is a bound on the delay a task can hold the processor (if the quantum is equal to zero, there is no bound on the processor holding time). At the time we're speaking, the quantum value can be used with the POSIX 1003.1b scheduler (only with SCHED_RR tasks) and the round robin scheduler. 
With POSIX 1003.1b, two SCHED_RR tasks with the same priority level should share the processor with a POSIX round-robin policy. In this case, the quantum value is the time slot of this round-robin scheduler. Finally, the quantum value could also be used for user-defined scheduler (see User Defined Scheduler for details). Automaton name : user-defined scheduler can be expressed as an automaton. In this case, the this attribute stores the name of the automaton for the given core. Capacity , Period , and Priority : These attributes are used to perform scheduling analysis with a polling server, for more information see Hierachical Scheduler The User Defined Scheduler Source File Name is the name of a file which contains the source code of a user-defined scheduler (see section User Defined Scheduler for details). Start time : time of the first release of the task Speed . This attribute is the speed of the core. Default value is 1 and only positive non null values are accepted for this attribute. When the value of this attribute is equal to n, it means that task are executed n times quicker. L1 Cache system name : This attribute indicate which cache is used to the core unit. Warning : with Cheddar, to add a core (or any object), you have to push the Add button before pushing the Close button. That allows you to define several objects quickly without closing the window (you should then push Add for each defined object). Then you can define a processor. For that choose the \"Edit/Entities/Hardware/Processor\" submenu. The window below is then displayed: Figure 1.2 Adding a processor A processor is defined by the following fields (see Figure 1.2) : The name of the processor. A processor name can be any combination of literal characters including underscore. Space is forbidden. Each processor must have a unique name. At the time we're speaking, the network field is not used (planned to be used in order to simulate message scheduling). Processor type 4 kinds of processor exists in Cheddar: Monocore type . It contains only one core and can run only one task at a time. Identical multicores type . The processor contains several cores that are identical, i.e. have the same scheduling protocol (but with potentionnaly different parameters). All core of such processor have the same speed. Uniform multicores type . The processor contains cores that have different speeds. However speeds have proportional values. All cores of the same processor have to run the same scheduling protocol. Unrelated multicores type . The processor contains cores with differents speeds. Speeds have unrelated values. Again, all cores of the same processor have to run the same scheduling protocol. Migration type . This attributes specifies how the tasks are allowed to move from one core to another. No migration type . Task cannot move from one core to another. This is typically the case of Multicore ARINC 653 architectures, or also of architectures with the concepts of core affinity (i.e. POSIX standard). Job level migration type . A task running on a core can move to another core only when its current job is completed. Running the smae job on two different cores is not allowed. Time unit migration type . A task can migrate at any core at any time. Cores table which contain the list of cores initially defined. The user should select one core in the monocore processor case, and almost one core in other case. Figure 1.3 Adding an address space The next step in order to run a simulation, is to define an address space. 
Choose the \"Edit/Entities/Software/Address space\" submenu. An address space models a piece of memory which contain tasks, buffers or shared resources. The Figure 1.3 shows the widget used to define such a feature. At the time we are speaking, the information you have to provide is: A name. An address space name can be any combination of literal characters including underscore. Space is forbidden. Each address space name has to be unique. A processor name. This is the processor which hosts the address space. Some fields related to the size of the address space memory: the text memory size , the heap memory size , the stack memory size and the data memory size . The fields related to memory size will be used in the next Cheddar's release in order to perform a global memory analysis. Figure 1.4 Adding a task Let see now, how to define a task, the last feature required to perform the most simpliest performance analysis. Choose the \"Edit/Entities/Software/Task\" submenu. The window of Figure 1.4 is then displayed. This window is composed of 3 sub-parts : the \"main part\", the \"offset part\" and the \"user's defined parameters part\". The main part contains the following informations : At least, a task is defined by a name (the task name should be unique), a capacity (bound on its execution time) and a place to run it (a processor name and an address space name ). The other parameters are optional but can be required for a particular scheduler A type of task . It describes the way the task is activated. An aperiodic task is only activated once. A periodic task is activated many times and the delay between two activations is a fixed one. A poisson process task is activated many times and the delay between two activations is a random delay : the random law used to generated these delays is an exponential one (poisson process). a sporadic task is a task which is activated many times with a minimal delay between two succesive activations. If the task type is \"user-defined\", the task activation law is defined by the user (see section User Defined Scheduler of this user's guide). The period . It is the time between two task activations. The period is a constant delay for a periodic task. It's an average delay for a poisson process task. If you have selected a processor that owns a Rate Monotonic or a Deadline Monotonic scheduler, you have to give a period for each of its tasks. A start time . It is the time when the task arrives in the system (its first activation time). A deadline . The task must end its activation before its deadline. A deadline is a relative information : to get the absolute date at which a task must end an activation, you should add the time when the task was awoken/activated to the task deadline. Warning : the deadline must be equal to the period if you define a Rate Monotonic scheduler. A priority and a policy . These parameters are dedicated to the POSIX 1003.1b/Highest Priority First scheduler. Priority is the fixed priority of a task. Policy can be SCHED RR, SCHED FIFO or SCHED OTHERS and describes how the scheduler chooses a task when several tasks have the same priority level. Warning : the priority and the policy are ignored by a Rate Monotonic and a Deadline Monotonic scheduler. A jitter . The jitter is a maximum lateness on the task wake up time. This information can be used to express task precedencies and to applied method such as the Holistic task response time method. A blocking time . It's a bound on shared resource waiting time. 
This delay could be set by the user but could also be computed by Cheddar if you described how shared resources are accessed. An activation rule . The name of the rule which defines the way the task should be activated. Only used with user-defined task. (see section User Defined Scheduler for details). A criticality level . The field indicates how the task is critical. Currently used by the MUF scheduler or any user-defined schedulers. A seed . If you define a poisson process task or a user-defined task, you can set here how random activation delay should be generated (in a deterministic way or not). The \"Seed\" button proposes you a randomly generated seed value but of course, you can give any seed value. This seed value is used only if the Predictable option is selected. If the Unpredictable option is selected, the seed is initialized at simulation time with \"gettimeofday\". The text memory size and stack memory size . The fields related to task memory size will be used in the next Cheddar's release in order to perform memory requirement analysis. The second and the third parts store task information which are less used by users. The offsets part is a table. Each entry of the table stores two informations : an activation number and a value. The offset part allows the user to change the wake up time of a task on a given activation number. For each activation number stored in the \"Activations:\" fields, the task wake up time will be delayed by the amount of time given in the \"Values\" fields. Finally, the third part (the \"User's defined parameters\" part) contains task parameters (similar to the deadline, the period, the capacity ...) used by user-defined schedulers. With this part, a user can define new task parameters. A user-defined task parameter has a value, a name and a type. The types currently available to defined user-defined task parameters are : string, integer boolean and double. Warning : when you create tasks, in most of cases, Cheddar does not check if your task parameters are erronous according to the scheduler you previously selected : these checks are done at task analysis/scheduling. Of course, you can always change task and processor parameters with \"Edit menus. When tasks and processors are defined, we can start the task analysis. Cheddar provides two kind of analysis tools: Feasibility analysis tools : these tools compute much information without scheduling the set of tasks. Equation references used to compute this feasibility information are always provided with the results. Feasibility services are provided for tasks and buffers. Simulation analysis tools : With these tools, scheduling has to be computed first. When the scheduling is computed (of course, this step can be long to proceed ...), the resulting scheduling is drawn in the top part of the window and information is computed and displayed in the bottom part of the window. Information retrieved here is only valid in the computed scheduling.The simpliest tools provided by Cheddar check if a set of tasks meet their temporal constraints. Simulation services are also provided for other resources (for buffers for instance). All these tools can be called from the \"Tools\" Menu and from some toolbar Buttons : From the submenu Tools/Scheduling/Customized scheduling simulation , the scheduling of each processor is drawn on the top of the Cheddar main window (see below). From the drawn scheduling, missed deadlines are shown and some statistics are displayed (number of preemption for instance). 
From the submenu Tools/Scheduling/Customize scheduling feasibility , response time, base period and processor utilization level are computed and displayed on the bottom of the Cheddar main window (see Figure 1.5). Figure 1.5 The Cheddar's main window In the top part of this window, each resource, buffer, message and task is shown by a time line: For a task time line: Each vertical red line means that the task is activated (woken up) at this time. Each horizontal rectangle means that the task is running at this time. The horizontal rectangle can have a task specific color. This horizontal colored rectangle can be found also on the core time line, which shows how the core is shared by the tasks of the architecture model. Task specific color can be deactivated, i.e. set to black for all tasks with the options windows. For a resource time line: Each vertical blue line means that the resource is allocated by a task at this time. Each vertical red line means that the resource is relaesed by a task at this time. Each horizontal rectangle means that the resource is used by a task which is running at this time. The color of this horizontal rectangle is set with the same color used in the task time line. For a message time line: Each vertical blue rectangle means that the message is sent at this time. Each vertical read rectangle means that the message is received at this time. To find the task sending or receiving a message, users have to check the core unit time line of the task time lines to find the related tasks. To produce such a display, users have to define for each message the corresponding dependencies that are used to computed the related events. For a buffer time line: Each horizontal blue rectangle means that a task writes data into a buffer. Each horizontal red rectangle means that a task reads data from a buffer.To find the task writing or readning a data in/from the buffer, users have to check the core unit time line of the task time lines to find the related tasks. To produce such a display, users have to define for each buffer the corresponding dependencies that are used to computed the related events. The scheduling result can also be saved in XML file. This allows user to run tools on Cheddar scheduling results. The scheduling result of Cheddar is an event table that gives for each time unit the set of events produced by the scheduling simulator. The event table is the data structure which is used by the simulator engine to perform analysis on scheduling. For each event, extra data related to the event is also stored. Here is the main produced events and their data: Start_Of_Task_Capacity . This event is generated when a task run the fist unit of time of its capacity. The event stores the started name of the task. End_Of_Task_Capacity . This event is generated when a task run the last unit of time of its capacity. The event stores the name of the completed task. Write_To_Buffer . This event is generated when a task write data into a buffer. The event stores the name of the buffer, the name of the task and the size of the written data. Read_From_Buffer . This event is generated when a task read data from a buffer. The event stores the name of the buffer, the name of the task and the size of the read data. Running_Task . This event is generated when a task get the processor. The event stores the name of the running task, its current priority, the core on which it runs, its CRPD value and the state of the associated cache. Task_Activation . 
This event is generated when a task is waking up. The event stores the name of the awoken task. Send_Message . This event is generated when a task is sending a message. The event stores the name of the message and the name of the task. Receive_Message . This event is generated when a task is receiving a message. The event stores the name of the message and the name of the task. Allocate_Resource . This event is generated when a task takes a resource. The event stores the name of the resource and the name of the task. Release_Resource . This event is generated when a task releases a resource. The event stores the name of the resource and the name of the task. Wait_For_Resource . This event is generated when a task waits for the access to a resource. The event stores the name of the resource and the name of the task. Address_Space_Activation . This event is generated with hierarchical scheduling such as ARINC 653 and when an address space is activated. This event stores the name of the activated address space and the activation duration, i.e. the amount of time the address space will stay activated. Buffer_Overflow . This event is generated when running scheduling simulations with buffer and a task tries to write to a buffer which is full. Buffer_Underflow .This event is generated when running scheduling simulations with buffer and a task tries to read from a buffer which is empty. Context_Switch_Overhead . This event is generate when there is context switch - a change in running task. Preemption . This event is generated whenever there is a preemptions. Be aware that for scalability, no all events are by default generated by Cheddar. Please refer to the option windows to select which events the simulator will produce or not. Here is an example of event table produced by Cheddar: event_table.xml : this simple event table is produced from a set of independent task scheduled with EDF. The file event_table_large.xml is similar except the size (it is a large file produced with a 200 task set). event_table_fixed_priority.xml : this event table is produced from a fixed priority scheduler.This scheduler provide an extra information for the event Running_Task. This extra information is the current priority of the running task. event_table_buffer.xml : this event table is produced from a set of tasks sharing a buffer. event_table_shared_resource.xml : this event table is produced from a set of tasks sharing a PCP resource. event_table_message.xml : this event table is produced from a set of tasks sending/receiving messages. To get a summary of the tools provided by Cheddar, see section User Defined Scheduler .","title":"First step: a simple scheduling simulation"},{"location":"pages/basics/#other-available-schedulers-and-task-arrival-patterns","text":"In Cheddar, you will find several schedulers. Some of them are directly implemented into the framework; others can be defined by the user. The list below describes the currently built-in schedulers you may find in the current release: Rate Monotonic : run the task with the smallest period first. The priority field of the tasks is ignored here. All tasks have to be periodic. `Deadline Monotonic : run the task with the smallest static deadline first. The priority field of the tasks is ignored here. All taks have to be periodic. Earliest Deadline First : run the task with the smallest dynamic deadline first. Tasks can be periodic or not. Least Laxity First and Least Runtime Laxity First : run the task with the smallest laxity first. 
The laxity can be computed in two different ways. Posix 1003.1b Highest Priority First scheduler : run the task with the highest fixed priority first. Supports SCHED_RR, SCHED_FIFO and SCHED_OTHERS policies. SCHED_OTHERS is a time sharing scheduler. SCHED_RR and SCHED_FIFO are policies which enforce real time scheduling. Tasks can be periodic or not. Tasks are scheduled according to the priority and the policy of the tasks. (Rate Monotonic and Deadline Monotonic use the same scheduler engine except that priorities are automatically computed from task period or deadline). POSIX 1003.1b scheduler supports SCHED_RR, SCHED_FIFO and SCHED_OTHERS queueing policies. SCHED_OTHERS is a time sharing policy. SCHED_RR and SCHED_FIFO tasks must have priorities ranging from 255 to 1. Priority level 0 is reserved for SCHED_OTHERS tasks. The highest priority level is 255. Maximum Urgency First scheduler [STE 91] : run the tasks according to a mixed static and dynamic priority. The task to run is the task with the highest criticality level. If two tasks have the same criticality level, the scheduler then chooses the one with the smallest laxity. If two tasks have the same criticality level and the same laxity, the scheduler chooses the one with the highest fixed priority. D-over dynamic scheduler [KOR 92] : run the tasks as EDF but with a safe policy in case of transient overload. Round robin scheduler : give the processor to each task for a fixed delay, in a fixed order. It allows the use of a given quantum : in this case, a task stays on the processor until the quantum becomes exhausted. Time sharing scheduler based on task waiting time (scheduler similar to the one provided by Linux): run the task which has been waiting the longest. Time sharing scheduler based on cpu usage: run the task which has consumed the least cpu time. Earliest Deadline First Energy Harvesting : a deadline oriented scheduler that takes care of the energy harvested during execution. See [CHE 14] . AMC and EDF VD , which are 2 uniprocessor mixed criticality schedulers. DAG HLFET , a multicore scheduler that uses a DAG of task dependencies. See [ADA 74]. RUN (Reduction to Uniprocessor), an optimal multicore global scheduler, both online and offline. See [REG 11] . 3 implementations of the multicore global Proportionate Fair scheduling: PF , PD and PD2 . See [AND 04] . EDZL , for Earliest Deadline Zero Laxity, which is a deadline oriented global multicore scheduler. See [CIR 07] . LLREF , Largest Local Remaining Execution First, which is a laxity oriented global multicore scheduler. Hierarchical schedulers for uniprocessor architectures to support the scheduling of aperiodic tasks jointly with periodic tasks [SPR 90] : Hierarchical Polling Aperiodic Server , which implements a uniprocessor fixed priority scheduler with an aperiodic task server. The aperiodic task server is a periodic task running the polling protocol. Hierarchical Priority Exchange Aperiodic Server , which implements a uniprocessor fixed priority scheduler with an aperiodic task server. The aperiodic task server is a periodic task running the priority exchange protocol. Hierarchical Sporadic Aperiodic Server , which implements a uniprocessor fixed priority scheduler with an aperiodic task server. The aperiodic task server is a periodic task running the sporadic protocol. Hierarchical Deferrable Aperiodic Server , which implements a uniprocessor fixed priority scheduler with an aperiodic task server.
The aperiodic task server is a periodic task running the deferrable protocol. Hierarchical schedulers for uniprocessor architectures to support time-and-space partitioned architectures such as ARINC 653. This hierarchical scheduling has two levels of scheduling: 1) A scheduler inside each address space to select the task among the ones of the related address space. 2) A scheduler at the processor level to select the address space to activate. The following protocols have been implemented: Hierarchical Offline : address spaces are activated/scheduled according to an offline address space scheduling stored in an XML file. This scheduler models the ARINC 653 MAF partition scheduling. Hierarchical Cyclic : address spaces are activated/scheduled cyclically. Hierarchical Round : address spaces are activated/scheduled with a round robin policy. Hierarchical Fixed : address spaces are activated/scheduled according to their fixed priority. Besides the implemented scheduling protocols listed above, Cheddar provides a means to define your own scheduling protocols. The current Cheddar release provides examples of user-defined schedulers stored in some .sc files (see the project_examples sub-directory and section VI ). These scheduler examples are: arinc.sc : modeling of an ARINC 653 partition and task scheduler schedule_according_to_criticity.parametric-cpu.sc : schedule tasks according to a task criticality level non_preemptive_llf.sc : example of an LLF scheduler with no preemption when tasks have the same laxity value ts.sc : the processor is given to the task which ran the least frequently. fcfs.sc : first come/first served scheduling policy. short.sc : schedule the shortest task first (with the smallest capacity) dvd0.parametric-cpu.sc : Dynamic value density scheduler of the York University [ALD 98] . mllf.sc : Modified Least Laxity First scheduler with f=0.5 [OVE 97] . muf.sc : Maximum Urgency First scheduler [STE 91] . In the same way, Cheddar provides a set of built-in task models. The built-in task models are: Aperiodic tasks : this kind of task arrives in the system at a given time (the start time, see the "Update Tasks" widget), runs a job and leaves the system. Periodic tasks : this kind of task periodically runs a job. A periodic task has a start time. The period of the task stores the fixed delay between two successive task wake-up times. See [LIU 73] . Sporadic tasks : this kind of task cyclically runs a job. A sporadic task has a start time. The period field stores the minimum delay between two successive task wake-up times. Poisson process tasks : this kind of task periodically runs a job. A periodic task has a start time. The period of the task stores the average delay between two successive task wake-up times. The effective delay between two wake-up times is computed with an exponential random generator. Frame Task : this model implements the multiframe task model of [BAR 99] . Scheduling task : is planned to be used for hierarchical scheduling. Periodic inner periodic : a task model to specify bursts of periodic releases separated by a fixed amount of time. This task model then uses two periods: an inner period for the delay between two task releases during a burst and a second period to express the delay between two bursts. See [AUD 93] . Sporadic inner periodic : this task model is similar to periodic inner periodic except that the delay between two bursts is sporadic (we specify the minimum delay between two bursts). See [AUD 93] . Again, you can define your own task model with user-defined code.
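To illustrate what such a random arrival pattern computes, here is a small sketch, written in Python purely for illustration: it generates Poisson-process release times with an exponential inter-arrival delay and a fixed seed, in the spirit of the predictable simulations described in the scheduling options. The function name and parameters are hypothetical and do not reflect Cheddar's user-defined .sc syntax.

```python
import random

def poisson_release_times(start_time, mean_period, horizon, seed=0):
    # Illustrative only: the period of a Poisson process task is the
    # *average* delay between two wake-up times; each effective delay is
    # drawn from an exponential distribution. A fixed seed gives a
    # predictable (reproducible) sequence of releases.
    rng = random.Random(seed)
    releases = []
    t = start_time
    while t < horizon:
        releases.append(round(t, 2))
        t += rng.expovariate(1.0 / mean_period)  # exponential inter-arrival delay
    return releases

# Example: a task with start time 0 and an average period of 10,
# observed over 100 units of time.
print(poisson_release_times(0, 10, 100))
```

Using one seed per task, instead of a single shared seed, corresponds to the task specific seeding mode described in the scheduling options below.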
Examples of user-defined tasks provided with this Cheddar release can be found in these files: sporadic.sc : tasks are woken up with a minimal inter-waking up period delay. The minimum delay is stored in the period field and the wake-up delay is randomly generated (exponential distribution). random_capacity.sc : tasks with a randomly generated capacity. increasing_capacity.sc : tasks with a growing capacity. activations.sc : various task models.","title":"Other available schedulers and task arrival patterns"},{"location":"pages/basics/#scheduling-options","text":"Figure 1.6 Scheduling options windows The submenu "Tools/Scheduling/Options" allows you to tune the way all next scheduling simulations will be done (see Figure 1.6) : If you push the Offsets button, the simulation engine takes care of the task offsets given at task definition time : task activations can then be delayed if you provide offset values at task definition time. If you push the Precedencies button, task scheduling will be done so that task precedencies will be met. By default, task precedencies are ignored. If you push the Resources button, access to shared resources will be done during simulation. By default, all shared resources are ignored. Cheddar allows you to activate tasks randomly . If you want to do simulations with this kind of task, the simulator engine has to compute some random values. From this window, you can tune the way random activation delays are generated. A seed value can be associated with each task but you can also use only one seed for all tasks. In both cases, you can do "predictable" or "unpredictable" simulations. If you choose "predictable" simulation, the seed will be initialized by a given value. In the other case, the seed is initialized with "gettimeofday". Pushing the Predictable for all tasks radio button leads the simulation to use the seed value of the Options window for all tasks. If the Task specific seed radio button is pushed instead, the seed of each task is used to generate task activation delays. You should notice that by default, 0 is given to the seed value, but of course, you can choose any value. Pushing the Seed button gives you a random value for the seed. The check button of the window on the right side allows the user to define which events will be generated into the event table at simulation time (see section Multiprocessor scheduling service ). Figure 1.7 Scheduling options windows (both feasibility and simulation) The submenu Tools/Scheduling/Scheduling simulation allows you to tune the way the next scheduling simulation and the next feasibility test will be done (see Figure 1.7). Options related to which information the engine has to compute when the scheduling sequence is built are : Pushing the Schedule all processors check button implies that the scheduling simulation will be computed on all defined processors. If this button stays unchecked, the user has to choose a given processor. Pushing the Number of context switch button implies computing the number of context switches from the computed scheduling sequence. Pushing the Number of preemption button implies computing the number of preemptions from the computed scheduling sequence. Pushing the Task response time button implies computing the worst/best/average task response times from the computed scheduling sequence. Pushing the Blocking time button implies computing the worst/best/average task blocking times on shared resources from the computed scheduling sequence.
Pushing the Run event analyzers button implies running the user-defined code (see section V) on the computed scheduling sequence. The Display event table, Automatically export event table and Event table file name options are related to the computed scheduling sequence. These options allow you to save the computed scheduling into a file in an XML format or display it on the screen. Options related to which information the feasibility tests will compute are : Pushing the Feasibility on all processors check button implies that the feasibility tests will be computed on all defined processors. If this button stays unchecked, the user has to choose a given processor. Pushing the Feasibility test based on the processor utilization factor button implies computing such a test. Pushing the Feasibility test based on worst case task response time button implies computing such a test.","title":"Scheduling options"},{"location":"pages/command_line/","text":"Cheddar command line The basic command line of cheddar is $cheddar [switches] foo1 foo2 ... where foo1 foo2 can be a single XML file or one or several AADL files. Switches can be : -u : get the help. -l : select a language: "fr" for French, "en" for English. The default language is English. -a : the file names given to cheddar contain AADL descriptions instead of an XML file. Only one XML file can be provided to Cheddar but several AADL files can be sent to Cheddar. -i : directory-name gives an extra directory name where to look for XML or AADL files. By default, Cheddar only looks for project files in Cheddar's current directory. -d : activate Cheddar's debug mode : provides extra information on the way Cheddar works. -f : font-name : to select a new font to be used with the Cheddar editor. -c : print the current Cheddar configuration on the screen : useful to check that the Cheddar binary you're using is correctly tuned according to the models you would like to analyze.","title":"3 - Cheddar Command Line"},{"location":"pages/command_line/#cheddar-command-line","text":"The basic command line of cheddar is $cheddar [switches] foo1 foo2 ... where foo1 foo2 can be a single XML file or one or several AADL files. Switches can be : -u : get the help. -l : select a language: "fr" for French, "en" for English. The default language is English. -a : the file names given to cheddar contain AADL descriptions instead of an XML file. Only one XML file can be provided to Cheddar but several AADL files can be sent to Cheddar. -i : directory-name gives an extra directory name where to look for XML or AADL files. By default, Cheddar only looks for project files in Cheddar's current directory. -d : activate Cheddar's debug mode : provides extra information on the way Cheddar works. -f : font-name : to select a new font to be used with the Cheddar editor. -c : print the current Cheddar configuration on the screen : useful to check that the Cheddar binary you're using is correctly tuned according to the models you would like to analyze.","title":"Cheddar command line"},{"location":"pages/dependencies/","text":"Scheduling with dependencies This chapter describes services provided by Cheddar when the system you want to study has task dependencies. By task dependencies, we mean resources shared by several tasks (ex : semaphores) or precedency relationships between several tasks (due to buffer access or message exchange or also constraints between the end of a task and the start of another one).
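The Chetto/Blazewicz modification rules presented later in this chapter give an idea of how precedencies can be handled: they turn a dependent task set into an independent one by propagating release times forward and deadlines backward along the precedence relations. The sketch below is purely illustrative (hypothetical data structures, not Cheddar's implementation) and assumes an acyclic precedence graph:

```python
def chetto_blazewicz(tasks, precedences):
    # Illustrative sketch of the classical Chetto/Blazewicz rules.
    # tasks: dict name -> {'R': release time, 'C': capacity, 'D': deadline}
    # precedences: list of (predecessor, successor) pairs, assumed acyclic.
    pred = {n: [] for n in tasks}
    succ = {n: [] for n in tasks}
    for a, b in precedences:
        succ[a].append(b)
        pred[b].append(a)

    new_r, new_d = {}, {}

    def release(n):
        # R*(i) = max(R(i), max over predecessors p of R*(p) + C(p))
        if n not in new_r:
            new_r[n] = max([tasks[n]['R']] +
                           [release(p) + tasks[p]['C'] for p in pred[n]])
        return new_r[n]

    def deadline(n):
        # D*(i) = min(D(i), min over successors s of D*(s) - C(s))
        if n not in new_d:
            new_d[n] = min([tasks[n]['D']] +
                           [deadline(s) - tasks[s]['C'] for s in succ[n]])
        return new_d[n]

    return {n: (release(n), deadline(n)) for n in tasks}

# T1 precedes T2: T2 gets a later release time, T1 an earlier deadline.
tasks = {'T1': {'R': 0, 'C': 2, 'D': 10}, 'T2': {'R': 0, 'C': 2, 'D': 10}}
print(chetto_blazewicz(tasks, [('T1', 'T2')]))  # {'T1': (0, 8), 'T2': (2, 10)}
```

Cheddar also offers a variant that modifies task priorities instead of deadlines; both services are available from the Precedency sub-menu described later in this guide.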
Shared resources analysis tools With Cheddar, you can define shared resources. Shared resources can be seen as semaphores. They can be accessed by several tasks. Tasks that require access to an already allocated semaphore are blocked (and then, unscheduled). To define a shared resource in a Cheddar project, call the submenu "Edit/Entities/Softwares/Resource". The window below is then displayed : Figure 4.1 Add a new shared resource Before adding a shared resource, at least one processor and one task must already exist in your project. A resource is defined by the following information : A unique name . An initial value/state (similar to a semaphore initial value). During a scheduling simulation, at a given time, if a resource value is less than or equal to zero, the requesting tasks are blocked until the semaphore/shared resource is released. An initial value equal to 1 allows you to design a shared resource that is initially free and that can be used by only one task at a given time. A protocol. Currently, you can choose between PCP (for Priority Ceiling Protocol), PIP (for Priority Inheritance Protocol) or "No protocol". With PCP or PIP, accessing shared resources may change task priorities [SHA 90] . The "No protocol" value just means that no task priority will be changed when accessing the shared resource. A processor name : each shared resource has to be hosted by a given processor. A priority : defines the ceiling priority of the resource. A priority assignment : characterizes the way Cheddar assigns the ceiling priority to the resource: automatic assignment: assigns ceiling priorities to resources automatically. The attribute priority is ignored during simulation. manual assignment: assigns ceiling priorities to resources manually. The attribute priority is used during simulation. Finally, we must give information on tasks that need the resource. Tasks hold resources in critical sections. Each critical section has to be defined by: The task name requiring the shared resource. The start time of the critical section. The end time of the critical section. Of course, you can define several critical sections for a given task of a given shared resource. By default, shared resources analysis tools are not included in the scheduling simulation engine of Cheddar. See "Tools/Scheduling/Options" if you want to take care of shared resources during scheduling simulation and if you want to display shared resource time lines. Blocking time on shared resources can be computed from scheduling simulation analysis if the scheduling simulation is invoked from the sub-menu "Tools/Scheduling/Scheduling Simulation". Finally, from the "Tools/Resources/Bound on Blocking time" sub-menu, you will find services to compute bounds on the blocking time of each task. These bounds are computed without any assumption on the scheduling actually generated for the analyzed system. To compute blocking time bounds, shared resources have to use PCP or PIP protocols. Task precedencies With Cheddar, dependencies are links between at least two tasks. There are three different types of dependencies : precedencies, message and buffer dependencies. Precedencies express order constraints between end or beginning of task execution. Message dependencies express relationships between a sender and a receiver task of a given message. Buffer dependencies express relationships between producer and consumer of data in a given buffer. Editing Task precedencies To create a dependency, choose "Edit/Entities/Softwares/Dependencies".
The window of figure 4.2 is then displayed : Figure 4.2 Add a new dependency A dependency is characterized by: The type of dependency . We distinguish: precedence dependency ... precedence_sink ... precedence_source ... queuing buffer dependency ... buffer dependent task ... buffer orientation ... buffer dependency object ... communication dependency ... communication dependent task ... communication orientation ... communication dependency object ... time triggered communication dependency ... sampled timing ... immediate timing ... delayed timing ... resource dependency ... resource dependency resource ... resource dependency task ... black board buffer dependency ... black board dependent task ... black board orientation ... black board dependency object ... How to transform a dependent task set into an independent task set: the Chetto/Blazewicz modification rules Computing end to end response time: the Holistic approach Buffer analysis tools Cheddar allows you to define buffers shared by tasks. If you want to define a buffer, a processor, an address space and at least one task have to be defined before. A buffer can be added to a Cheddar project with the submenu "Edit/Entities/Softwares/Buffer". The window below is then displayed : Figure 4.3 Add a new buffer A buffer has a unique name , size , initial data size and is hosted by a processor and an address space . A queueing system model is assigned to each buffer. This queueing system model describes the way buffer read and write operations will be done at simulation time. This information is also used to apply buffer feasibility tests. A list of tasks which access the buffer (read or write operations). Two types of tasks can access a buffer : producers and consumers . We suppose that a producer/consumer writes/reads a fixed size of information in the buffer. For each producer or consumer, the size of the information produced or consumed has to be defined. The time of the read/write operation is also given : this time is relative to the task capacity. Buffer Underflow : an underflow event occurs when a task reads from a buffer and the read data size is greater than the current data size in the buffer. When it happens, the task does not read the buffer and the current data in the buffer is not consumed. Buffer Overflow : an overflow event occurs when a task writes to a buffer and the write data size plus the current data size in the buffer is greater than the buffer size. When it happens, the task does not write any data to the buffer. Like tasks, two kinds of tools can be invoked by the user from a buffer : simulation and feasibility tools. At first, the simulation of the task scheduling can help the user to see how the buffer is filled or not with messages (see the "Tools/Buffer/Buffer simulation" submenu). In this case, a scheduling simulation must be previously run. The result is then displayed in a window as below : Buffer Feasibility mainly consists of computing buffer bounds. Bounds computed here suppose that each task that is defined as a "producer" produces one message per periodic activation. In the same manner, each "consumer" extracts one message during each of its periodic activations. Figure 4.4 Display buffer utilization factor computed from scheduling simulation The picture shows the buffer utilization level at each time. Second, the feasibility tool provides a way to compute bounds on the buffer utilization level. At the time we write this User's guide, bounds do not depend on the type of the scheduler.
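To give an intuition of what such a bound represents, here is a small and deliberately pessimistic sketch. It is not Cheddar's feasibility test, only an illustration that counts, under the one message per activation assumption stated above, how many messages can be pending when producers write as soon as they are released while consumers are only guaranteed to have read by the end of each of their periods:

```python
import math

def buffer_occupancy_estimate(producer_periods, consumer_periods, horizon):
    # Pessimistic estimate of the maximum number of queued messages,
    # under the illustrative assumptions stated in the text above.
    worst = 0
    for t in range(1, horizon + 1):
        produced = sum(math.ceil(t / p) for p in producer_periods)  # writes before time t
        consumed = sum(t // c for c in consumer_periods)            # reads guaranteed by time t
        worst = max(worst, produced - consumed)
    return worst

# One producer of period 5 and one consumer of period 7, observed over
# their hyperperiod (35 units of time).
print(buffer_occupancy_estimate([5], [7], 35))
```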
Bounds can be computed from the \"Tools/Buffer/Buffer feasibility\" submenu. Message scheduling services","title":"4 - Scheduling with Dependencies"},{"location":"pages/dependencies/#scheduling-with-dependencies","text":"This chapter describes services provided by Cheddar when the system you want to study has task dependencies. By task dependencies, we mean resources shared by several tasks (ex : semaphores) or precedency relationships between several tasks (due to buffer access or message exchange or also constraints between the end of a task and the start of another one).","title":"Scheduling with dependencies"},{"location":"pages/dependencies/#shared-resources-analysis-tools","text":"With Cheddar, you can define shared resources. Shared resources can be seen as semaphores. They can be accessed by several tasks. Tasks that require access to an already allocated semaphore are blocked (and then, unscheduled). To define a shared resource in a Cheddar project, call the submenu \"Edit/Entities/Softwares/Resource\". The window below is then displayed : Figure 4.1 Add a new shared resource Before adding a shared resource, at least one processor and one task must already exist in your project. A resource is defined by the following information : An unique name . An initial value/state (simular to a semaphore initial value). During a scheduling simulation, at a given time, if a resource value is equal or less than zero, the requesting tasks are blocked until the semaphore/shared resource is released. An initial value equal to 1 allows you to design a shared resource that is initially free and that can be used by only one task at a given time. A protocol. Currently, you can choose between PCP (for Priority Ceiling Protocol), PIP (for Priority Inheritance Protocol) or \"No protocol\". With PCP or PIP, accessing shared resources may change task priorities [SHA 90] . The \"No protocol\" just means that no task prioriy will be changed at accessing the shared resource. A processor name : Each shared resource has to be hosted by a given processor. A priority : defines the ceiling priority of the resource A priority assignment : characterize the way that Cheddar assigns ceiling priority to resource automatic assignment: assigns automatically ceiling priorities to resources. The attribute priority is ignored during simulation. manual assignment: assigns manually ceiling priorities to resources. The attribute priority is used during simulation Finally, we must give information on tasks that need the resource. Tasks hold resources in critical section. Each critical section has to be defined by: The task name requiring the shared resource. The start time of the critical section. The end time of the critical section. Of course, you can define several critical sections for a given task of a given shared resource. By default, shared resources analysis tools are not included in the scheduling simulation engine of Cheddar. See \"Tools/Scheduling/Options\" if you want to take care of shared resources during scheduling simulation and if you want to display shared resources time line. Blocking time on shared resources can be computed from scheduling simulation analysis if scheduling simulation is invoked from the sub-menu \"Tools/Scheduling/Scheduling Simulation\". Finally, from the \"Tools/Resources/Bound on Blocking time\" sub-menu, you will find services to compute bounds on blocking time of each tasks. These bounds are computed without assumption on the scheduling actually generated for the analyzed system. 
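As an illustration of the kind of bound computed by this service: under PCP, a task can be blocked by at most one critical section of a lower priority task that uses a resource whose ceiling priority is higher than or equal to the task's own priority [SHA 90]. The sketch below applies that rule to critical sections described, as above, by a task name and start/end times; it is an illustration only, not Cheddar's implementation:

```python
def pcp_blocking_bounds(task_priorities, resources):
    # task_priorities: dict task name -> fixed priority (higher = more urgent).
    # resources: list of dicts with a 'ceiling' priority and 'critical_sections',
    #            a list of (task, start, end) tuples; the length is end - start.
    bounds = {}
    for name, prio in task_priorities.items():
        candidates = [
            end - start
            for res in resources
            if res['ceiling'] >= prio
            for (owner, start, end) in res['critical_sections']
            if task_priorities[owner] < prio
        ]
        # Under PCP, at most one such critical section can block the task.
        bounds[name] = max(candidates, default=0)
    return bounds

task_priorities = {'T1': 3, 'T2': 2, 'T3': 1}
resources = [{'ceiling': 3, 'critical_sections': [('T3', 1, 4), ('T1', 0, 2)]}]
print(pcp_blocking_bounds(task_priorities, resources))  # {'T1': 3, 'T2': 3, 'T3': 0}
```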
To compute blocking time bound, shared resources have to used PCP or PIP protocols.","title":"Shared resources analysis tools"},{"location":"pages/dependencies/#task-precedencies","text":"With Cheddar, dependencies are links between at least two tasks. There are three different types of dependencies : precedencies, message and buffer dependencies. Precendencies express order constraints between end or beginning of task execution. Message dependencies express relationships between a sender and a receiver task of a given message. Buffer dependencies express relationships between producer and consumer of data in a given buffer.","title":"Task precedencies"},{"location":"pages/dependencies/#editing-task-precedencies","text":"To create a dependency, choose \"Edit/Entities/Softwares/Dependencies\". The window of figure 4.2 is then displayed : Figure 4.2 Add a new dependency A dependency is characterized by: The type of dependency . We distinguish: precedence dependency ... precedence_sink ... precedence_source ... queuing buffer dependency ... buffer dependent task ... buffer orientation ... buffer dependency object ... communication dependency ... communication dependent task ... communication orientation ... communication dependency object ... time triggered communication dependency ... sampled timing ... immediate timing ... delayed timing ... resource dependency ... resource dependency resource ... resource dependency task ... black board buffer dependency ... black board dependent task ... black board orientation ... black board dependency object ...","title":"Editing Task precedencies"},{"location":"pages/dependencies/#how-to-transform-a-dependent-task-set-into-an-independent-task-set-the-chettoblazewicz-modification-rules","text":"","title":"How to transform a dependent task set into an independent task set: the Chetto/Blazewicz modification rules"},{"location":"pages/dependencies/#computing-end-to-end-response-time-the-holistic-approach","text":"","title":"Computing end to end response time: the Holistic approach"},{"location":"pages/dependencies/#buffer-analysis-tools","text":"Cheddar allows you to define buffers shared by tasks. If you want to define a buffer, a processor, an address space and a least one task have to be defined before. A buffer can be added to a Cheddar project with the submenu \"Edit/Entities/Softwares/Buffer\". The window below is then displayed : Figure 4.3 Add a new buffer A buffer has a unique name , size , initial data size and is hosted by a processor and an address space . A queueing system model is assigned to each buffer. This queueing system model describes the way buffer read and write operations will be done at simulation time. This information is also used to apply buffer feasibility tests. A list of tasks which access to the buffer (read or write operations). Two type of tasks can access a buffer : producers and consumers . We suppose that a producer/consumer writes/reads a fixed size of information in the buffer. For each producer or consumer, the size of the information produced or consummed have to be defined. The time of the read/write operation is also given : this time is relative to the task capacity. Buffer Underflow : Underflow event occurs when a task reads from a buffer and the read data size is greater than the current data size in the buffer. When it happens, a task does not read the buffer and current data in a buffer is not consumed. 
Buffer Overflow : Overflow event occurs when a task writes to a buffer and the write data size plus the current data size in the buffer is greater than Buffer Size. When it happens, a task does not write any data to the buffer. Like tasks, two kinds of tools can be invoked by the user from a buffer : simulation and feasibility tools. At first, the simulation of the task scheduling can help the user to see how the buffer is filled or not with messages (see \"Tools/Buffer/Buffer simulation\" submenu). In this case, a scheduling simulation must be previously run. The result is then displayed in a window as below : Buffer Feasibility mainly consists of computing buffer bounds. Bounds computed here suppose that each task that is defined as \"producer\", produces one message per periodic activation. In the same manner, each \"consumer\" extracts one message during each of its periodic activation. Figure 4.4 Display buffer utilization factor computed from scheduling simulation The picture contains the buffer utilization level for each time. Second, the feasibility tool provides a way to compute bounds on buffer utilization level. At the time we write this User's guide, bounds do not depend on the type of the scheduler. Bounds can be computed from the \"Tools/Buffer/Buffer feasibility\" submenu.","title":"Buffer analysis tools"},{"location":"pages/dependencies/#message-scheduling-services","text":"","title":"Message scheduling services"},{"location":"pages/download_compile/","text":"Download and Compile Cheddar In this page, you can find instructions to download and compile Cheddar from the source code on Linux and Windows. Linux Required software To checkout the source code and compile Cheddar, the following software are required: GNAT compiler GPL, we recommend using GNAT 2021 available on the AdaCore website https://www.adacore.com/download/more GtkAda 2021, also available on the AdaCore website above A svn client is necessary. The default subversion works fine on Linux The instructions have been applied to compile Cheddar on Ubuntu 20.04 64 bits. Please contact us if you encounter problems on other Linux distributions. Steps to follow Checkout Cheddar source code - the trunk folder on the svn repository http://beru.univ-brest.fr/svn/CHEDDAR/trunk/ Install GNAT 2021 Install GtkAda 2021 Move to [CHEDDAR]/trunk/src with [CHEDDAR] is the folder used to checkout the Cheddar source code Edit \"script/compilelinux.bash\" in the script folder according to: Cheddar source code location \u2013 CHEDDAR_DIR Your GNAT installation \u2013 variable GNAT_DIR Your GtkAda installation \u2013 variable GTKADA_DIR Run \u201csource script/compilelinux.bash\u201d Run \u201cmake cheddar\u201d to produce cheddar binary. One compiled, you should have a binary either called \u201ccheddar\u201d and you can then run it from the directory where the binary is stored Example This is just an example, you need to adapt CHEDDAR_DIR, GNAT_DIR, and GTKADA according to your installation The trunk repository is checked out at /home/user/cheddar/trunk GNAT is installed at /opt/gnat2021 GtkAda is installed at /opt/gtkada2021 File: compilelinux.bash export CHEDDAR_DIR=/home/user/cheddar/trunk export GNAT_DIR=/opt/gnat2021 export GTKADA_DIR=/opt/gtkada2021 Commands to compile and run Cheddar in the terminal $: cd /home/user/cheddar/trunk/src $: source script/compilelinux.bash $: make cheddar (...compilation, can take a while) $ : ./cheddar Windows To compile Cheddar with Windows: Checkout Cheddar source code from its SVN repository. 
Official source code is in the \"trunk\" folder. Install GNAT 2021. Install GtkAda for GNAT 2021. Launch windows \"cmd\" command Move to CHEDDAR/trunk/src/scripts Edit \"compilewindows.bat\" according to your GNAT, GtkAda and Cheddar source code installation location Run \"compilewindows.bat\" Go back to CHEDDAR/trunk/src Run \"gnatmake \u2013Pgpr/gprfilename\" where gprfilename is the gpr file you want to compile. To produce cheddar, the gpr file for windows is \"gpr/cheddar.gpr\" Virtual Box To compile Cheddar in a prepared Virtual Box image: Install Virtual Box Download the Cheddar virtual hard disk file: http://beru.univ-brest.fr/vbox/CHEDDAR_DEV.vdi Create a new Ubuntu 64-bit virtual machine, and load the cheddar_dev.vdi Start the virtual machine Select keyboard layout (french by default) and connect: password and login are both \"cheddar\" Open a terminal and go to the directory contained the Cheddar source (~/cheddar/trunk/src) Do \"source script/compilelinux.bash\" to setup the environment Do \"make\" to compile (or \"make cheddar\" if you just want the Cheddar GUI binary)","title":"Download and Compile"},{"location":"pages/download_compile/#download-and-compile-cheddar","text":"In this page, you can find instructions to download and compile Cheddar from the source code on Linux and Windows.","title":"Download and Compile Cheddar"},{"location":"pages/download_compile/#linux","text":"","title":"Linux"},{"location":"pages/download_compile/#required-software","text":"To checkout the source code and compile Cheddar, the following software are required: GNAT compiler GPL, we recommend using GNAT 2021 available on the AdaCore website https://www.adacore.com/download/more GtkAda 2021, also available on the AdaCore website above A svn client is necessary. The default subversion works fine on Linux The instructions have been applied to compile Cheddar on Ubuntu 20.04 64 bits. Please contact us if you encounter problems on other Linux distributions.","title":"Required software"},{"location":"pages/download_compile/#steps-to-follow","text":"Checkout Cheddar source code - the trunk folder on the svn repository http://beru.univ-brest.fr/svn/CHEDDAR/trunk/ Install GNAT 2021 Install GtkAda 2021 Move to [CHEDDAR]/trunk/src with [CHEDDAR] is the folder used to checkout the Cheddar source code Edit \"script/compilelinux.bash\" in the script folder according to: Cheddar source code location \u2013 CHEDDAR_DIR Your GNAT installation \u2013 variable GNAT_DIR Your GtkAda installation \u2013 variable GTKADA_DIR Run \u201csource script/compilelinux.bash\u201d Run \u201cmake cheddar\u201d to produce cheddar binary. 
One compiled, you should have a binary either called \u201ccheddar\u201d and you can then run it from the directory where the binary is stored","title":"Steps to follow"},{"location":"pages/download_compile/#example","text":"This is just an example, you need to adapt CHEDDAR_DIR, GNAT_DIR, and GTKADA according to your installation The trunk repository is checked out at /home/user/cheddar/trunk GNAT is installed at /opt/gnat2021 GtkAda is installed at /opt/gtkada2021 File: compilelinux.bash export CHEDDAR_DIR=/home/user/cheddar/trunk export GNAT_DIR=/opt/gnat2021 export GTKADA_DIR=/opt/gtkada2021 Commands to compile and run Cheddar in the terminal $: cd /home/user/cheddar/trunk/src $: source script/compilelinux.bash $: make cheddar (...compilation, can take a while) $ : ./cheddar","title":"Example"},{"location":"pages/download_compile/#windows","text":"To compile Cheddar with Windows: Checkout Cheddar source code from its SVN repository. Official source code is in the \"trunk\" folder. Install GNAT 2021. Install GtkAda for GNAT 2021. Launch windows \"cmd\" command Move to CHEDDAR/trunk/src/scripts Edit \"compilewindows.bat\" according to your GNAT, GtkAda and Cheddar source code installation location Run \"compilewindows.bat\" Go back to CHEDDAR/trunk/src Run \"gnatmake \u2013Pgpr/gprfilename\" where gprfilename is the gpr file you want to compile. To produce cheddar, the gpr file for windows is \"gpr/cheddar.gpr\"","title":"Windows"},{"location":"pages/download_compile/#virtual-box","text":"To compile Cheddar in a prepared Virtual Box image: Install Virtual Box Download the Cheddar virtual hard disk file: http://beru.univ-brest.fr/vbox/CHEDDAR_DEV.vdi Create a new Ubuntu 64-bit virtual machine, and load the cheddar_dev.vdi Start the virtual machine Select keyboard layout (french by default) and connect: password and login are both \"cheddar\" Open a terminal and go to the directory contained the Cheddar source (~/cheddar/trunk/src) Do \"source script/compilelinux.bash\" to setup the environment Do \"make\" to compile (or \"make cheddar\" if you just want the Cheddar GUI binary)","title":"Virtual Box"},{"location":"pages/download_install/","text":"Download and install Cheddar Download Cheddar binaries (current version is Cheddar-3.3, release date 25/09/2023): Windows: Cheddar-3.3-Windows-bin.zip (this file includes all required DLL). Linux: Cheddar-3.3-Linux-bin.tar.gz (this file includes all required libraries). Otherwise, to run Cheddar on any computer, you can also use VirtualBox. You can download the cheddar_dev.vdi file in this case. Users can also can get the Cheddar source code and the previous releases here . For installation procedures, please read the \"How to install\" section in README.md See the ChangesLog.pdf file to have history of modifications. The file REQUESTED_FEATURES.pdf contains the new features required by users and that we plan to implement in the next releases. Cheddar is written in Ada, with GNAT and GtkAda Adacore products. Cheddar is known to run on Linux and Windows, but should run on any Adacore supported platforms ( see AdaCore web site for details).","title":"Download and Install"},{"location":"pages/download_install/#download-and-install-cheddar","text":"Download Cheddar binaries (current version is Cheddar-3.3, release date 25/09/2023): Windows: Cheddar-3.3-Windows-bin.zip (this file includes all required DLL). Linux: Cheddar-3.3-Linux-bin.tar.gz (this file includes all required libraries). 
Otherwise, to run Cheddar on any computer, you can also use VirtualBox. You can download the cheddar_dev.vdi file in this case. Users can also can get the Cheddar source code and the previous releases here . For installation procedures, please read the \"How to install\" section in README.md See the ChangesLog.pdf file to have history of modifications. The file REQUESTED_FEATURES.pdf contains the new features required by users and that we plan to implement in the next releases. Cheddar is written in Ada, with GNAT and GtkAda Adacore products. Cheddar is known to run on Linux and Windows, but should run on any Adacore supported platforms ( see AdaCore web site for details).","title":"Download and install Cheddar"},{"location":"pages/editor_menu/","text":"Summary of Cheddar's editor menus and sub-menus All Cheddar analysis tools are called from the \"Tools\" menu. This section gives a short description of them. Some of them compute tasks parameters, and then are composed of two submenus : \"Compute and update tasks set\" and \"Compute and display\". Choose \"Compute and update tasks set\" submenu if you want to save computed parameters into your project tasks set. Choose \"Compute and display\" if you only want to display computed parameters on the bottom of the main Cheddar window. Menus and Sub-menus of the Cheddar's editor : File Menu : New sub-menu : creates a new XML project. Open sub-menu : loads a XML project file into the editor. Save sub-menu : saves the current XML project into a file with the current XML project file name. Save as sub-menu : saves the current XML project into a file with a new XML project file name. AADL sub-menu : provides ane features related to AADL specifications. AADL import : reads an AADL specification into Cheddar. AADL export : translates a Cheddar specification towards an AADL specification. Export property sets used by Cheddar : writes the Cheddar's property sets into files of the current directory. Export standard AADL property set : writes the standard AADL property set into files of the current directory. Customize how AADL services work : allows the user to set some options related to the AADL services provided by Cheddar. Exit sub-menu : Quit the Cheddar's editor. Edit menu : creates/updates/deletes entities of the current architecture to analyse (entities of the current XML project). Entities can be a processor, a task, a message, a buffer, a network or an event analyzer. Tools menu : Clear work space sub-menu : cleans the working area (main window). Does not change anything on the project itself. Scheduling sub-menu : Customized scheduling simulation sub-menu : computes and draws scheduling simulation. This sub-menu allows the user to customize the way the scheduling is computed. Customized scheduling feasibility sub-menu : computes some basics feasibility tests on all processors. The feasibility tests computed there are the utilization factor test and the response time test. Set priorities according to Rate Monotonic sub-menu : change the task priority according to its period (Tasks with the smallest period become tasks with the highest priority). Set priorities according to Deadline Monotonic sub-menu : changes the task priority according to its deadline (Tasks with the smallest deadline become tasks with the highest priority). Partition sub-menu : provides some services to assign tasks on a set of processors. With Best Fit sub-sub-menu : assigns tasks on the set of processors according to the Best Fit algorithm. 
With General Task sub-sub-menu : assigns tasks on the set of processors according to the General Task algorithm. With Next Fit sub-sub-menu : assigns tasks on the set of processors according to the Next Fit algorithm. With First Fit sub-sub-menu : assigns tasks on the set of processors according to the First Fit algorithm. With Small Task sub-sub-menu : assigns tasks on the set of processors according to the Small Task algorithm. Event table services sub-menu : provides some basic services on event tables. Compute scheduling and generate event table sub-sub-menu : computes the scheduling and produces the event table. Draw time line from event table sub-sub-menu : draws time line from the last computed or loaded scheduling/event table. Run analysis on event table sub-sub-menu : performs analysis on the last computed or loaded scheduling/event table. Export event table sub-sub-menu : saves the last scheduling/event table into a file with a XML format. Options sub-menu : describes how the scheduling simulation will be carried out. Resource sub-menu : Bound on blocking time sub-sub-menu : computes bound on shared resources blocking time according to PCP and PIP protocols without computing the scheduling Looking for priority inversion from simulation sub-sub-menu : runs analysis on a previously computed scheduling to look for high priority tasks blocked by lower priority task at shared resource access. Looking for priority inversion from simulation sub-sub-menu : runs analysis on a previously computed scheduling to look for tasks blocked forever on shared resources. Buffer sub-menu : this submenu can help you to study buffers shared by tasks. Buffer simulation sub-sub-menu : computes buffer utilization factor and message waiting time from a given scheduling simulation. Buffer feasibility tests sub-sub-menu : computes bound on buffer utilization factor and message waiting time without computing scheduling. Precedency sub-menu : . You will find here some heuristics/algorithms that can schedule or check feasibility of a tasks set with dependencies. Chetto/Blazewicz modifications on priorities sub-sub-menu : This service creates an independent task set from a dependent task set by modifying task priorities according to precedency constraints. Chetto/Blazewicz modifications on deadlines sub-sub-menu : This service creates an independent task set from a dependent task set by modifying task deadlines according to precedency constraints. End to End response time : computes response time from a set of task (which have precendency relationships) with the Holistic method. Random sub-menu : this submenu should provide necessary tools to carry out simulations with random events. Compute response time density sub-menu : compute statistic distribution of task response time from a scheduling simulation. Help Menu : About Cheddar sub-menu : provides version number of the Cheddar's binaries. Manual sub-menu : contains the text given in this section. Scheduling references sub-menu : gives all paper references used to compute feasibility tests and simulation results.","title":"10 - Editor Menu Summary"},{"location":"pages/editor_menu/#summary-of-cheddars-editor-menus-and-sub-menus","text":"All Cheddar analysis tools are called from the \"Tools\" menu. This section gives a short description of them. Some of them compute tasks parameters, and then are composed of two submenus : \"Compute and update tasks set\" and \"Compute and display\". 
Choose \"Compute and update tasks set\" submenu if you want to save computed parameters into your project tasks set. Choose \"Compute and display\" if you only want to display computed parameters on the bottom of the main Cheddar window. Menus and Sub-menus of the Cheddar's editor : File Menu : New sub-menu : creates a new XML project. Open sub-menu : loads a XML project file into the editor. Save sub-menu : saves the current XML project into a file with the current XML project file name. Save as sub-menu : saves the current XML project into a file with a new XML project file name. AADL sub-menu : provides ane features related to AADL specifications. AADL import : reads an AADL specification into Cheddar. AADL export : translates a Cheddar specification towards an AADL specification. Export property sets used by Cheddar : writes the Cheddar's property sets into files of the current directory. Export standard AADL property set : writes the standard AADL property set into files of the current directory. Customize how AADL services work : allows the user to set some options related to the AADL services provided by Cheddar. Exit sub-menu : Quit the Cheddar's editor. Edit menu : creates/updates/deletes entities of the current architecture to analyse (entities of the current XML project). Entities can be a processor, a task, a message, a buffer, a network or an event analyzer. Tools menu : Clear work space sub-menu : cleans the working area (main window). Does not change anything on the project itself. Scheduling sub-menu : Customized scheduling simulation sub-menu : computes and draws scheduling simulation. This sub-menu allows the user to customize the way the scheduling is computed. Customized scheduling feasibility sub-menu : computes some basics feasibility tests on all processors. The feasibility tests computed there are the utilization factor test and the response time test. Set priorities according to Rate Monotonic sub-menu : change the task priority according to its period (Tasks with the smallest period become tasks with the highest priority). Set priorities according to Deadline Monotonic sub-menu : changes the task priority according to its deadline (Tasks with the smallest deadline become tasks with the highest priority). Partition sub-menu : provides some services to assign tasks on a set of processors. With Best Fit sub-sub-menu : assigns tasks on the set of processors according to the Best Fit algorithm. With General Task sub-sub-menu : assigns tasks on the set of processors according to the General Task algorithm. With Next Fit sub-sub-menu : assigns tasks on the set of processors according to the Next Fit algorithm. With First Fit sub-sub-menu : assigns tasks on the set of processors according to the First Fit algorithm. With Small Task sub-sub-menu : assigns tasks on the set of processors according to the Small Task algorithm. Event table services sub-menu : provides some basic services on event tables. Compute scheduling and generate event table sub-sub-menu : computes the scheduling and produces the event table. Draw time line from event table sub-sub-menu : draws time line from the last computed or loaded scheduling/event table. Run analysis on event table sub-sub-menu : performs analysis on the last computed or loaded scheduling/event table. Export event table sub-sub-menu : saves the last scheduling/event table into a file with a XML format. Options sub-menu : describes how the scheduling simulation will be carried out. 
Resource sub-menu : Bound on blocking time sub-sub-menu : computes bounds on shared resource blocking times according to the PCP and PIP protocols without computing the scheduling. Looking for priority inversion from simulation sub-sub-menu : runs analysis on a previously computed scheduling to look for high priority tasks blocked by lower priority tasks at shared resource access. Looking for priority inversion from simulation sub-sub-menu : runs analysis on a previously computed scheduling to look for tasks blocked forever on shared resources. Buffer sub-menu : this submenu can help you to study buffers shared by tasks. Buffer simulation sub-sub-menu : computes buffer utilization factor and message waiting time from a given scheduling simulation. Buffer feasibility tests sub-sub-menu : computes bounds on buffer utilization factor and message waiting time without computing the scheduling. Precedency sub-menu : you will find here some heuristics/algorithms that can schedule or check feasibility of a task set with dependencies. Chetto/Blazewicz modifications on priorities sub-sub-menu : This service creates an independent task set from a dependent task set by modifying task priorities according to precedency constraints. Chetto/Blazewicz modifications on deadlines sub-sub-menu : This service creates an independent task set from a dependent task set by modifying task deadlines according to precedency constraints. End to End response time : computes response times for a set of tasks (which have precedency relationships) with the Holistic method. Random sub-menu : this submenu provides the necessary tools to carry out simulations with random events. Compute response time density sub-menu : computes the statistical distribution of task response times from a scheduling simulation. Help Menu : About Cheddar sub-menu : provides the version number of the Cheddar binaries. Manual sub-menu : contains the text given in this section. Scheduling references sub-menu : gives all paper references used to compute feasibility tests and simulation results.","title":"Summary of Cheddar's editor menus and sub-menus"},{"location":"pages/hierarchical/","text":"Hierarchical scheduling Here we briefly present two kinds of hierarchical scheduling currently available in Cheddar: ARINC 653 scheduling and classical aperiodic task servers. ARINC 653 scheduling In this section, we first show how to model an ARINC 653 system, and then we present what we can expect from Cheddar in terms of schedulability analysis for such a system. How to model an ARINC 653 two-levels scheduling ARINC 653 is an avionic standard characterized by the concept of partitioning. Partitioning ensures time and space isolation in order to improve safety. The ultimate goal is to guarantee that if a failure occurs on one partition, it will not impact the other partitions running on the same hardware. In order to enforce isolation, each partition executes in a fixed time frame and has a specific address space to store code and data. For multi-core systems, the ARINC 653 standard requires that the operating system supports the following possibilities: Tasks within a partition are assigned to different cores. Then they are scheduled concurrently on the concerned cores. Partitions are assigned to cores. Then, the partitions are scheduled concurrently on the concerned cores. In ARINC 653, each task in a multi-core system has a task core affinity attribute that links it to a specific core. Then during scheduling, the task is limited to the assigned core.
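The two scheduling levels described in this section can be summarized by the following purely illustrative sketch (hypothetical data structures, not Cheddar's code): an offline table of partition windows, i.e. the major time frame detailed just below, decides which partition owns a core at a given time, and a fixed priority choice is then made among the ready tasks of that partition whose core affinity matches the core.

```python
def active_partition(maf, t):
    # maf: ordered list of (partition, duration) windows, repeated cyclically.
    frame = sum(duration for _, duration in maf)
    t = t % frame
    for partition, duration in maf:
        if t < duration:
            return partition
        t -= duration

def elect_task(core, maf, tasks, t):
    # First level: the offline partition schedule selects the active partition.
    partition = active_partition(maf, t)
    # Second level: fixed priority choice among ready tasks of that partition
    # whose core affinity matches the core.
    candidates = [task for task in tasks
                  if task['ready']
                  and task['partition'] == partition
                  and task['affinity'] == core]
    return max(candidates, key=lambda task: task['priority']) if candidates else None

maf = [('addr1', 250), ('addr2', 250)]  # similar to the examples given below
tasks = [
    {'name': 'T9', 'partition': 'addr1', 'affinity': 'core1', 'priority': 5, 'ready': True},
    {'name': 'T1', 'partition': 'addr2', 'affinity': 'core1', 'priority': 9, 'ready': True},
]
print(elect_task('core1', maf, tasks, 10)['name'])  # 'T9': addr2 is not active at t=10
```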
Moreover, each core in the system has a specific set of tasks that will be scheduled on it. A partition contains at least one task and two levels of scheduling are applied: Partition scheduling level: for each core, the partitions of the tasks assigned to the concerned core are scheduled according to a cyclic scheduling designed off-line. Each partition may be cyclically released for a given duration. Partitions have to execute during a cyclic interval called a major time frame (MAF). A MAF is then composed of MIFs and partition windows. The set of partition windows gives the sequence of partition execution. Each partition window is defined by a duration. Task scheduling level: each partition has to schedule its tasks. Inside a partition, tasks are usually scheduled according to a fixed priority policy. With Cheddar, such a two-level scheduling is modeled as follows: Partitions are modelled by Cheddar address spaces. In Cheddar, a scheduler is optional in each address space. To model an ARINC 653 system, each address space must host a scheduler. Cheddar supports ARINC 653 scheduling for single-core and multi-core systems, i.e. each processor can have one or several cores. For this purpose, monocore, identical multi-core, uniform multi-core and unrelated multi-core processors are proposed in Cheddar. The scheduler assigned to the Cheddar core entity models the partition scheduling. When designing an ARINC 653 system, the core must host the Hierarchical_Offline_Protocol value. With Hierarchical_Offline_Protocol, the address spaces are scheduled off-line, as ARINC 653 partitions. The ARINC 653 MAF of each core, i.e. the sequence of partitions, is stored in an XML file. This XML file specifies, for the associated core, in which order the Cheddar address spaces which model ARINC partitions are run. Cheddar provides other address space schedulers such as Hierarchical_Cyclic_Protocol, Hierarchical_Round_Robin_Protocol or Hierarchical_Fixed_Priority_Protocol. They are not ARINC 653 compliant as they are not based on an offline partition scheduling, but they may be used to build such a partition scheduling. Hierarchical_Cyclic_Protocol and Hierarchical_Round_Robin_Protocol run partitions/address spaces in a cyclic way while Hierarchical_Fixed_Priority_Protocol runs partitions/address spaces according to their fixed priority. Cheddar tasks are both assigned to a core and to an address space, which actually specifies in which partition the task must be run and to which partition/address space offline scheduling the task belongs. To be compliant with ARINC 653, each Cheddar address space modeling an ARINC 653 partition must have at least one task. Examples of an ARINC 653 scheduling In this section we present two examples of ARINC 653 scheduling with Cheddar. The first example models a single core system and the second one models a multi-core system. The first example is stored in the file rosace.xmlv3 . This model contains 2 partitions modeled by 2 Cheddar address spaces. The partition MAF is described below: Partition addr1 is run from time 0 to 250 and then, partition addr2 is run during 250 time units. Such scheduling is cyclically repeated. This offline partition scheduling is stored in an XML file. The one shown above is stored in the file partition_scheduling_rosace.xml . The rosace.xmlv3 file also contains 15 tasks assigned to the two partitions/address spaces. The scheduling of this model is shown in the screenshot below. In this screenshot, we see partition addr1 which is run first.
We also see T9 run in the beginning of the addr1 time slot. No task from addr2 runs before addr1 is completed. The second example is stored in the file arinc653_multicore.xmlv3 . This model contains 4 partitions modeled by 4 Cheddar address spaces whose tasks are split across 2 cores. The partition MAF of the first core is described below: Partition addr1 is run from time 0 to 250 and then, partition addr2 is run during 250 time units. Such scheduling is cyclically repeated. This offline partition scheduling is stored in an XML file. The one shown above is stored in the file core1_scheduling.xml . The same scheduling is applied on the second core, which hosts the tasks of addr3 and addr4, as shown in the following figure: The arinc653_multicore.xmlv3 file also contains 18 tasks assigned to the 4 partitions/address spaces. The scheduling of this model is shown in the screenshot below. In this screenshot, we see partitions addr1 (resp. addr2) and addr3 (resp. addr4) running in parallel. Aperiodic server hierarchical scheduling Cheddar implements several classical aperiodic servers. This paragraph illustrates this with the polling server. Aperiodic servers are a means to adapt the scheduling of aperiodic tasks to fixed priority scheduling and Rate Monotonic priority assignment. Several approaches have been proposed to handle aperiodic tasks in this context. The polling server is one of these approaches. Basically, a polling server is a periodic task defined by a period, a priority and a capacity. When aperiodic tasks are released in the system, they are stored in a queue. At each periodic release time of the polling server, it runs the aperiodic tasks which are waiting for the processor in the queue. The polling server runs aperiodic tasks for the duration of its capacity. If the aperiodic task queue is empty at the polling server's release times, the server does nothing. It waits for its next periodic release time. This mechanism allows the processor to run aperiodic tasks as soon as possible but without delaying periodic tasks that are critical. Notice that a polling server has a fixed priority: if this priority is high, it may reduce the latency of aperiodic tasks. Whatever this priority and the number of arriving aperiodic tasks, a polling server can be taken into account when the schedulability of the system is verified. Then, as the polling server never exceeds its capacity, schedulability is never jeopardized. To play with a polling server in Cheddar, the main element to configure properly is a core. For such a scheduling algorithm, the core must be used in a monoprocessor. Here is an example of valid parameters for a polling server: The core c has a high priority value (value of 100), which means that aperiodic tasks will be run quickly. Low scheduling latency of aperiodic tasks is also brought by the small period of the polling server, which is 5 here. Finally, we give 20 percent of the processor to aperiodic tasks as the capacity of the server is 1. Note also the scheduler type specified here: Hierarchical_Polling_Aperiodic_Server_Protocol. The screenshot above shows how the previous core can be used to perform a scheduling simulation. We have two periodic tasks (T1 and T2) and two aperiodic tasks (TA1 and TA2) respectively released at times 7 and 9, with a capacity of respectively 1 and 3. Notice that before time 7, the aperiodic server does nothing, but after time 10, i.e.
after its next release at which an aperiodic task is ready to run, the polling server uses its 1 unit of capacity to spread the execution of the aperiodic tasks present in its queue. Aperiodic tasks are then run at times 10, 15 and 20 in this figure, while never jeopardizing the deadlines of T1 and T2 (a short sketch reproducing this behaviour is given below).","title":"8 - Hierarchical Scheduling"},{"location":"pages/hierarchical/#hierarchical-scheduling","text":"Here we shortly present two kinds of hierarchical scheduling currently available in Cheddar: ARINC 653 scheduling and classical aperiodic task servers.","title":"Hierarchical scheduling"},{"location":"pages/hierarchical/#arinc-653-scheduling","text":"In this section, we first show how to model an ARINC 653 system, and then we present what we can expect from Cheddar in terms of schedulability analysis for such a system.","title":"ARINC 653 scheduling"},{"location":"pages/hierarchical/#how-to-model-an-arinc-653-two-levels-scheduling","text":"","title":"How to model an ARINC 653 two-levels scheduling"},
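To make the two-level ARINC 653 model above concrete, here is a minimal, illustrative Python sketch (not Cheddar code) of how a MAF made of partition windows can be interpreted. The window values (addr1 for 250 time units, then addr2 for 250 time units) come from the rosace.xmlv3 example; the data layout and function are assumptions for illustration only.

```python
# Illustrative sketch only: interpreting an ARINC 653 MAF made of partition windows.
# The window table mirrors the rosace.xmlv3 example: addr1 for 250 units, then addr2 for 250 units.

MAF = [("addr1", 250), ("addr2", 250)]  # (partition / address space, window duration)


def active_partition(t, maf=MAF):
    """Return the partition whose window is active at time t; the MAF is cyclically repeated."""
    major_frame = sum(duration for _, duration in maf)
    t = t % major_frame                   # the MAF repeats cyclically
    for partition, duration in maf:
        if t < duration:
            return partition
        t -= duration


if __name__ == "__main__":
    for t in (0, 100, 250, 499, 500):
        print(t, active_partition(t))     # addr1, addr1, addr2, addr2, addr1
```

Inside the window returned for time t, only tasks of that partition are eligible, and the partition's own scheduler (usually fixed priority) picks among them.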
{"location":"pages/hierarchical/#examples-of-an-arinc-653-scheduling","text":"","title":"Examples of an ARINC 653 scheduling"},{"location":"pages/hierarchical/#aperiodic-server-hierarchical-scheduling","text":"","title":"Aperiodic server hierarchical scheduling"},
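The polling server behaviour described above (period 5, capacity 1, aperiodic arrivals at times 7 and 9 with capacities 1 and 3) can be reproduced with a small, illustrative Python sketch. This is not Cheddar's implementation; the task parameters come from the example above, everything else is an assumption for illustration.

```python
# Illustrative sketch of a polling server with period 5 and capacity 1 per period.
# Aperiodic arrivals follow the example above: TA1 at time 7 (1 unit), TA2 at time 9 (3 units).

SERVER_PERIOD = 5
SERVER_CAPACITY = 1
arrivals = [(7, 1), (9, 3)]  # (release time, execution units)


def polling_server_service(arrivals, horizon=30):
    """Return the server release times at which aperiodic work is served."""
    pending = 0
    served_at = []
    events = dict(arrivals)
    for t in range(horizon):
        pending += events.get(t, 0)                  # aperiodic work enters the queue
        if t % SERVER_PERIOD == 0 and pending > 0:   # at each server release, serve up to its capacity
            pending -= min(SERVER_CAPACITY, pending)
            served_at.append(t)
    return served_at


if __name__ == "__main__":
    print(polling_server_service(arrivals))  # [10, 15, 20, 25] -- the guide's figure shows 10, 15 and 20
    print(SERVER_CAPACITY / SERVER_PERIOD)   # 0.2 -> 20 percent of the processor for aperiodic tasks
```

The capacity/period ratio is what the text above calls "20 percent of the processor" given to aperiodic tasks.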
{"location":"pages/multiprocessor/","text":"Multiprocessor scheduling services Global multiprocessor scheduling Cache interference analysis Cache interference in Cheddar is based on the concept of Cache Related Preemption Delay (CRPD). The bounds on the CRPD are computed by using: - Useful Cache Blocks (UCB) [ LEE 98 ] - Evicting Cache Blocks (ECB) [ BUS 95 ] The following process is required to run the CRPD analyses implemented in Cheddar. 1. Control Flow Graph (CFG) computation. Cheddar provides support for modeling the CFG of a task. An external tool is used to parse the object file of a task and then to compute and generate the CFG in a Cheddar-compatible format. Examples of the CFGs computed for tasks in the Malardalen benchmark are available on the Cheddar svn repository at this link . 2. Import/create a Cheddar-ADL system model and import its CFGs. The users can import an existing system model with several tasks in Cheddar or choose to create a new model. After that, the CFGs of these tasks can be imported by Cheddar: Menu --> Tools --> Cache --> Import Control Flow Graph If the users import a Cheddar-ADL system model, the CFGs must be located in the same folder as the XML file.
If the users create a new system model, the CFGs must be located in the same folder as the Cheddar executable file. The attribute \"cfg_name\" is used to associate a CFG to a task. The XML file containing a CFG must have the name given by this attribute, as Cheddar automatically searches for the CFGs to be imported. The current GUI of Cheddar does not provide an interactive way to work with CFGs (add/modify/delete). These features are to be implemented in a next version. 3. UCBs and ECBs computation. The sets of UCBs and ECBs of a program are computed by Cheddar from its CFG. The sets of UCBs and ECBs can also be computed by external tools and put directly in Cheddar without going through this step. In order to do this, the users need to modify the XML file of a Cheddar-ADL system model with the information about the UCBs and ECBs of tasks. A small sketch illustrating how UCBs and ECBs lead to a CRPD bound is given at the end of this section. After this step, the following analyses are available in Cheddar. 4.1 CRPD-Aware WCRT analysis. In order to perform CRPD-aware WCRT analysis in Cheddar, the following steps are required: The set of UCBs and ECBs of all tasks must be computed (Step 3). Menu --> Tools --> Scheduling --> Feasibility Test Options The following tests are available in Cheddar: ECB-Only [ LEE 98 ] ECB-Union Multi-set [ ALT 12 ] UCB-Union Multi-set [ ALT 12 ] Combined Multi-set [ ALT 12 ] 4.2 CRPD-Aware Scheduling Simulation [ TRA 16a ] In order to perform CRPD-aware scheduling simulation in Cheddar, the following steps are required: The set of UCBs and ECBs of all tasks must be computed (Step 3). Check the CRPD check box in Tools --> Scheduling --> Scheduling Options Run the simulation. The CRPD added to the capacities of tasks is represented by the red blocks (as shown in the image below). 4.3 CRPD-Aware Priority Assignment [ TRA 16b ] Our implementation of the CRPD-aware priority assignment is based on Audsley's Optimal Priority Assignment (OPA) algorithm. We extended this algorithm in order to take the CRPD into account. In order to perform CRPD-aware priority assignment in Cheddar, the following steps are required: The set of UCBs and ECBs of all tasks must be computed (Step 3). Tools --> Scheduling --> Scheduling Options --> Tasks Priority Assignment The following CRPD-aware priority assignment algorithms, which are detailed in the referenced article, are available in Cheddar: CRPD OPA-PT CRPD OPA-PT Simplified CRPD OPA-PT Tree Memory interference analysis Network-On-Chip interference analysis Partitionning algorithms Multiprocessor systems are good for heavy computing demands. They are sometimes the only way to provide sufficient processing power to meet critical real-time deadlines. In general, multiprocessor systems are also more reliable than uni-processor systems. Scheduling of multiprocessor systems is proven to be an NP-hard (Non-deterministic Polynomial-time) problem [LEU 82] . The complexity class NP is the set of decision problems that can be solved by a non-deterministic machine in polynomial time. Complexity theory is the part of the theory of computation dealing with the resources required during computation to solve a given problem. The most common resources are time (how many steps does it take to solve a problem) and space (how much memory does it take to solve a problem). Other resources can also be considered, such as how many parallel processors are needed to solve a problem in parallel. There are many scheduling heuristics to address this problem.
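As a rough illustration of what the UCB/ECB based analyses above compute, here is a minimal Python sketch of a per-preemption CRPD bound obtained by intersecting the useful cache blocks of a preempted task with the evicting cache blocks of a preempting task. The sets and the block reload time are made-up values; the actual tests listed above (ECB-Only, ECB-Union Multi-set, UCB-Union Multi-set, Combined Multi-set) are more elaborate.

```python
# Illustrative sketch only: a simple per-preemption CRPD bound.
# A useful cache block (UCB) of the preempted task that is also an evicting cache
# block (ECB) of the preempting task may have to be reloaded after the preemption.

BLOCK_RELOAD_TIME = 8  # made-up cost (in time units) to reload one cache block

# Made-up UCB/ECB sets, indexed by cache set number.
ucb = {"T1": {0, 1, 2, 5}, "T2": {3, 4}}
ecb = {"T1": {0, 1, 2, 3, 5, 6}, "T2": {1, 2, 3, 4}}


def crpd_bound(preempted, preempting):
    """Bound on the CRPD paid by `preempted` when it is preempted once by `preempting`."""
    evicted_useful_blocks = ucb[preempted].intersection(ecb[preempting])
    return BLOCK_RELOAD_TIME * len(evicted_useful_blocks)


if __name__ == "__main__":
    print(crpd_bound("T1", "T2"))  # 2 useful blocks ({1, 2}) are evicted -> 2 * 8 = 16
```

Such per-preemption terms are what the feasibility tests and the simulation add to the response times or to the task capacities (the red blocks mentioned above).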
Rate-monotonic scheduling is good for numerous reasons: the Rate-monotonic algorithm is optimal for fixed priority assignment of periodic tasks on a processor, so it is easy to design a predictable real-time system. It is also easy to implement and it incurs minimal scheduling overhead. Cheddar has five algorithms: RMNF, RMFF, RMBF, RMST and RMGT. Each of these is an off-line scheme, so the entire task set must be known before starting task assignment. The upper bounds quoted below compare the number of processors required by each heuristic with the number of processors required by an optimal assignment. Rate-Monotonic-Next-Fit [SON 93] The upper bound for this algorithm is 2.67. Tasks are sorted in non-decreasing order of periods. Then tasks are placed on processors according to the IP Condition (Increasing Period). The first task is placed on the first processor. Then the second task is placed on the first processor if it meets the IP Condition; otherwise it is placed on a new processor. This continues until all tasks are scheduled. Rate-Monotonic-First-Fit [SON 93] The upper bound for this algorithm is 2.33 [SON 93] (the original study by Dhall and Liu had a wrong bound of 2.23). Tasks are sorted in non-decreasing order of periods. The IP Condition is used to verify the schedulability of tasks on processors. The first task is placed on the first processor. Then the second task is placed on the first processor if it meets the IP Condition; otherwise it is placed on a new processor. The algorithm then tries to place the third task on the first processor according to the IP Condition; if the condition is not met there, it tries the second processor, and so on, selecting a new processor only when no existing processor fits. Rate-Monotonic-Best-Fit [SON 93] The upper bound for this algorithm is 2.33. Tasks are sorted in non-decreasing order of periods. The first task is placed on the first processor. For the second task, the algorithm checks all processors to see whether they meet the IP Condition. For the processors that satisfy the condition, the algorithm checks the number kj of tasks already assigned to each processor j and computes Uj, the total utilization of these kj tasks. The task is then assigned to the processor that has the smallest such value. If no processor meets the condition, a new processor is selected for the task. Rate-Monotonic Small-Tasks [BUR 94] The upper bound for RMST depends on α = max Ui, i = 1,…,K (the largest task utilization), U being the utilization of all tasks. Tasks are sorted by increasing Si, with Si = log2(Ti). The main idea of RMST is to minimize the value of β for each processor, where β = max Si - min Si, 1 <= i <= K. Rate-Monotonic General-Tasks [BUR 94] The upper bound for RMGT is 1.75. RMGT uses the RMST algorithm for the tasks with utilization at most 1/3 and a First-Fit heuristic for the rest of the tasks. Example of use: First, define the cores of the processors; they have to use the Rate Monotonic scheduler. Second, define the processors the tasks will run on. Third, define the address spaces used by the tasks. Then, define the tasks (tasks can be hosted on any processor and address space). Finally, compute the partitioning with the submenu \"Tool/Scheduling/Partition/With Small Task\".","title":"5 - Multiprocessor Scheduling Services"},{"location":"pages/multiprocessor/#multiprocessor-scheduling-services","text":"","title":"Multiprocessor scheduling services"},{"location":"pages/multiprocessor/#global-multiprocessor-scheduling","text":"","title":"Global multiprocessor scheduling"},{"location":"pages/multiprocessor/#cache-interference-analysis","text":"","title":"Cache interference analysis"},
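The sketch below illustrates the flavour of these bin-packing heuristics with a simplified Rate-Monotonic First-Fit assignment in Python. Instead of the exact IP Condition of [SON 93], it uses the classical Liu and Layland utilization bound as the per-processor admission test, so it is an illustration of the idea rather than a reimplementation of Cheddar's algorithms; the task set is made up.

```python
# Illustrative sketch: Rate-Monotonic First-Fit partitioning with a simplified
# admission test (Liu & Layland bound n(2^(1/n) - 1)) instead of the IP Condition.

def rm_schedulable(utilizations):
    """Sufficient test: the task set fits on one processor under Rate Monotonic."""
    n = len(utilizations)
    return sum(utilizations) <= n * (2 ** (1.0 / n) - 1)


def rm_first_fit(tasks):
    """tasks: list of (capacity, period) pairs, sorted in non-decreasing order of period."""
    tasks = sorted(tasks, key=lambda t: t[1])
    processors = []                      # each processor is a list of task utilizations
    for capacity, period in tasks:
        u = capacity / period
        for proc in processors:          # first fit: try existing processors in order
            if rm_schedulable(proc + [u]):
                proc.append(u)
                break
        else:                            # no processor fits: allocate a new one
            processors.append([u])
    return processors


if __name__ == "__main__":
    task_set = [(1, 4), (2, 5), (1, 10), (4, 10), (3, 20)]  # made-up (capacity, period) pairs
    print(rm_first_fit(task_set))        # two processors are enough for this task set
```

RMNF, RMBF, RMST and RMGT differ mainly in the order in which processors are tried and in the admission test used, as described above.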
{"location":"pages/multiprocessor/#memory-interference-analysis","text":"","title":"Memory interference analysis"},{"location":"pages/multiprocessor/#network-on-chip-interference-analysis","text":"","title":"Network-On-Chip interference analysis"},{"location":"pages/multiprocessor/#partitionning-algorithms","text":"","title":"Partitionning algorithms"},
{"location":"pages/osate/","text":"Using Cheddar within OSATE 2 In the sequel, we first explain how to install the Cheddar plugins within OSATE 2. Then a small example of use is shown and finally, the set of Cheddar properties used to customize the analysis is shortly explained. How to install Cheddar plugin within OSATE 2 To install the Cheddar OSATE 2 plugin, you have to follow the installation procedure for additional OSATE components that is explained at http://osate.org/ . Basically, once OSATE 2 is started, you have to: Open the OSATE Help menu. Select from there the Install additional OSATE components menu. Then a window is opened: check the Non SEI Components section, select Cheddar and follow the installation instructions. After installation and once OSATE 2 has been restarted, make sure you have a Cheddar binary distribution available elsewhere. Basically, three plugins are provided: The first plugin, displayed with the \"XML\" string in the toolbar, only transforms an AADL instance model into a Cheddar ADL XML file. Once this plugin has completed, it displays the location of the generated file. No Cheddar binary is required there. The second plugin (which runs the Cheddar cheddar.exe binary on Windows) both generates the Cheddar XML file and also calls the Cheddar binary if it is available. This plugin assumes Cheddar was previously installed in a folder called Cheddar_bin inside your home Desktop folder. Cheddar_bin must contain both the executables and all the required libraries or DLLs. Figure 7.1 shows a typical folder with such binaries for a Windows target. This folder can be populated by simply unzipping a Cheddar binary distribution ( see there for downloading one ). Figure 7.1 Cheddar_bin folder example If the Cheddar binary is stored in a specific location, the actual Cheddar binary location has to be specified by AADL properties in the root system component of the AADL model instance. For such a purpose, the required AADL properties are Cheddar_Install_Folder and Cheddar_Working_Folder. These properties are part of the Cheddar_Parameter_Properties set. Here is a sample of this set: -- Configure the Cheddar working folder. -- The property Cheddar\\_Working\\_Folder will be used by Cheddar to store any -- of its working files.
-- Cheddar\\_Working\\_Folder : aadlstring applies to (system); -- Configure what is the folder where Cheddar binaries and libraries are installed -- Cheddar\\_Install\\_Folder : aadlstring applies to (system); The last plugin (which runs the Cheddar response_time.exe binary on Windows), displayed with the \"RT\" string in the toolbar, allows users to compute the worst/best/average response time of each thread. Again, the Cheddar XML model file is generated and a dedicated Cheddar program performs the response time analysis. This program is an example that shows how to design a specific Cheddar plugin providing a specific schedulability analysis picked up from the Cheddar facade, i.e. the Cheddar framework interface. Cheddar plugins simple example of use Once the Cheddar OSATE plugins are installed, in order to use them, you must: Create and populate a new OSATE project, or open any existing AADL project. From a root AADL component of the declarative AADL model, build the instance model. The declarative AADL model cannot be used for scheduling analysis. Analyses are run on the instance model only. To create an instance model from a declarative model with OSATE, in the list of components on the right side of the OSATE window, select a top system component and open the context menu (with the right mouse button), then select the \"Instantiate\" menu item. The instance model appears in the middle of the OSATE window. From this instance model, we can then generate the Cheddar XML model. From the project explorer, select the instantiated model and generate the Cheddar XML model by pushing one of the 3 Cheddar plugin toolbar buttons. In the window above, you can see a screenshot of OSATE. The generated Cheddar XML model is saved in the location given by the dialog box opened when the transformation is over. The OSATE toolbar also shows the 3 Cheddar plugins, from left to right: the first produces the Cheddar XML model, the second generates the Cheddar XML model and launches Cheddar, the third generates the Cheddar ADL model and launches the Cheddar tool to compute thread response times. Figure 7.2 OSATE 2 running the Cheddar plugin When you launch the Cheddar tool (middle toolbar button), you can call any analysis feature that is compliant with the generated Cheddar XML model. The response time tool (right toolbar button) can compute response times either from a scheduling simulation or from a feasibility test. To parametrize how the response time will be computed, one can set AADL properties on the root system. The sequel defines and explains the possible values for these properties. Cheddar AADL properties Basically, Cheddar properties are organized in 4 sets: Cheddar_Properties.aadl. This first set is the original Cheddar AADL property set proposed in 2007 for AADL 2.1. Some of those properties are now part of the AADL standard. AADL models can be designed either with such standardized properties or with Cheddar_Properties.aadl properties, i.e. the Cheddar plugin should behave similarly. See there for the description of these original properties . Cheddar_Parameters_Properties.aadl. This set provides parameters to the Cheddar plugins or to the analysis features called by the plugins. Most of the AADL properties of this set have to be associated with the root system component of the AADL model instance.
Parameters to configure how to run Cheddar are: Cheddar_Working_Folder. This property specifies the folder the plugins can use to store any Cheddar working files. Cheddar_Install_Folder. This property gives the path to the folder that is supposed to contain the Cheddar binaries and any components required to run them, i.e. Windows DLLs or Linux shared libraries. Several parameters also exist to configure how to compute the response time of the threads. With the right Cheddar plugin, response times can be computed either by scheduling simulation or by feasibility tests. With feasibility tests, Cheddar is only able to compute worst case response times. From a scheduling simulation, worst, best and average case thread response times can all be computed. During a scheduling simulation, various events can be activated or not with AADL properties. The most important parameter is the interval of time over which the scheduling simulation has to be computed: in the best case, one should compute the scheduling simulation over the feasibility interval, i.e. the interval of time which captures all possible events of the analyzed model. Finally, for response times computed by feasibility tests, it is possible to take into account latencies due to interferences on shared resources (both software and hardware). Here are the properties available to configure such an analysis: -- Allows user to select how to compute WCRT : with scheduling simulation or with -- feasibility tests -- Response\\_Time\\_From\\_Scheduling\\_Simulation : aadlboolean applies to (system); Response\\_Time\\_From\\_Feasibility\\_Test : aadlboolean applies to (system); -- Set the time interval on which scheduling simulation has to be computed -- by Cheddar tools -- Scheduling\\_Feasibility\\_Interval : aadlinteger applies to (system); -- Select the type of interference we apply when computing WCRT with -- feasibility tests -- Interferences can be computed on memory shared resources such as cache or -- memory bank -- CRPD\\_Interference\\_Type : type enumeration (No\\_CRPD, ECB\\_Only, ECB\\_Union\\_Multiset, UCB\\_Union\\_Multiset, Combined\\_Multiset); CRPD\\_Interference : Cheddar\\_Transformation\\_Properties::CRPD\\_Interference\\_Type applies to (system); Memory\\_Interference\\_Type : type enumeration (No\\_Memory\\_Interference, DRAM\\_Single\\_Arbiter, Kalray\\_Multi\\_Arbiter); Memory\\_Interference : Cheddar\\_Transformation\\_Properties::Memory\\_Interference\\_Type applies to (system); -- Properties to customize the scheduling simulations -- if True, those parameters allow the scheduling simulator to take into account -- Offsets, jitters, CRPD, Precendencies and resources -- Those parameters have the following default values : -- Scheduling\\_With\\_Offsets => False -- Scheduling\\_With\\_Jitters => True -- Scheduling\\_With\\_CRPD => False -- Scheduling\\_With\\_Precendencies => True -- Scheduling\\_With\\_Resources => True -- Scheduling\\_With\\_Offsets : aadlboolean applies to (system); Scheduling\\_With\\_Jitters : aadlboolean applies to (system); Scheduling\\_With\\_CRPD : aadlboolean applies to (system); Scheduling\\_With\\_Precendencies : aadlboolean applies to (system); Scheduling\\_With\\_Resources : aadlboolean applies to (system); Cheddar_Transformation_Properties.aadl. The properties defined there are used to drive the transformation of AADL models towards Cheddar XML models. There is not a unique way to produce the Cheddar model from a given AADL model instance.
How each entity of the AADL instance model must be mapped to Cheddar entities depends on the analysis method users expect to apply. Most of the AADL properties of this set have to be associated with the root system component of the AADL model instance. First, Cheddar only handles uniform timing data, i.e. timing data which are all expressed in the same unit. In fact, in Cheddar, there is no unit at all. On the contrary, AADL models may mix various units, and it is then mandatory to transform all timing data from the AADL instance model to a unique unit. By default the Cheddar plugins generate millisecond values, but this behavior can be changed with the Exported_Attribute_Time_Units property. The value of the Debug_Level property can be changed to fetch debug data from the Cheddar plugin. In many cases, designers expect to run a worst case analysis by designing a Cheddar model composed of periodic tasks. By default the Cheddar plugin does not change the thread dispatching model of the analyzed AADL instance model. If one expects to change Poisson or Sporadic AADL threads into periodic Cheddar ADL tasks, one can set the properties Transform_Sporadic_To_Periodic or Transform_Poisson_To_Periodic. Finally, data ports can be handled by different means with Cheddar. Data port connections can be mapped either to Cheddar \"precedence\" dependencies or to \"time triggered\" Cheddar dependencies. The AADL property Data_Port_To_Time_Triggered_Dependency can be used to customize such a mapping. Here is the definition of the properties available to configure the transformation from AADL to Cheddar ADL: -- Configure the level of debug data produced by the plug in during -- transformation -- Debug\\_Type : type enumeration (No\\_Debug, Minimal, Verbose, Very\\_Verbose); Debug\\_Level : Cheddar\\_Transformation\\_Properties::Debug\\_Type applies to (system); -- All attributes of a Cheddar XML file are homogenous from -- a units point of view. -- For time units, by default, OSATE 2 Cheddar plugin generates millisecond values -- The following property allows designers to select a different -- time unit for the generated values -- Time\\_Units: type enumeration (MicroSecond, MilliSecond, Second); Exported\\_Attribute\\_Time\\_Units : Cheddar\\_Transformation\\_Properties::Time\\_Units applies to (system); -- For analysis motivations, one may want to produce Cheddar periodic tasks -- from AADL sporadic threads ; this is typically the case when from an AADL model -- one expect to run a worst case analysis -- If true, the following properties express that for a given system, its sporadic threads must -- be transformed to periodic Cheddar tasks, to sporadic Cheddar tasks otherwise. -- The same mechanism also exists for AADL threads with a poisson process dispatching law. -- By default, we do not produce periodic Cheddar tasks for those kinds of AADL threads -- Transform\\_Sporadic\\_To\\_Periodic : aadlboolean applies to (system); Transform\\_Poisson\\_To\\_Periodic : aadlboolean applies to (system); -- Data port connections can be mapped either by Cheddar \"precedence\" -- dependency or by \"time triggered\" Cheddar dependency -- if Data\\_Port\\_To\\_Time\\_Triggered\\_Dependency is false, Cheddar precedencies are generated -- otherwise time triggered entities are generated -- Data\\_Port\\_To\\_Time\\_Triggered\\_Dependency : aadlboolean applies to (system); Cheddar_Multicore_Properties.aadl. This last property set contains experimental properties to model multi/manycore architectures.
The main focus of these properties is to describe the hardware resources and the potential interferences/delays they may introduce. This last property set may change and it is advised to check its definition in OSATE 2 before using it.","title":"7 - Using Cheddar within OSATE 2"},{"location":"pages/osate/#using-cheddar-within-osate-2","text":"","title":"Using Cheddar within OSATE 2"},{"location":"pages/osate/#how-to-install-cheddar-plugin-within-osate-2","text":"","title":"How to install Cheddar plugin within OSATE 2"},
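The Scheduling_Feasibility_Interval parameter discussed above says that, ideally, the simulation should cover an interval capturing all possible events. A minimal Python sketch of one common choice follows; the assumption is a set of independent periodic tasks released simultaneously, for which the hyperperiod (the LCM of the periods) is a classical simulation interval. This is an illustration, not Cheddar's computation of the feasibility interval.

```python
# Illustrative sketch: a common choice for the scheduling simulation interval.
# Assumption: independent periodic tasks all released at time 0; the hyperperiod
# (LCM of the periods) is then a classical candidate for Scheduling_Feasibility_Interval.

from math import lcm


def hyperperiod(periods):
    value = 1
    for p in periods:
        value = lcm(value, p)
    return value


if __name__ == "__main__":
    print(hyperperiod([4, 5, 20]))   # 20
    print(hyperperiod([7, 12, 30]))  # 420
```

With offsets, jitters or aperiodic events enabled, a longer interval may be required, which is why the property is left to the designer.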
{"location":"pages/osate/#cheddar-plugins-simple-example-of-use","text":"","title":"Cheddar plugins simple example of use"},{"location":"pages/osate/#cheddar-aadl-properties","text":"","title":"Cheddar AADL properties"},
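The transformation rules above stress that Cheddar attribute values are unit-less, so all AADL timing values must be converted to a single unit (milliseconds by default, configurable through Exported_Attribute_Time_Units). A tiny Python sketch of such a normalization follows; the unit names and input values are made up for illustration and do not reflect the plugin's internals.

```python
# Illustrative sketch: normalizing mixed AADL timing values to one unit before they
# are written into a unit-less Cheddar XML model (milliseconds by default).

TO_MILLISECONDS = {"us": 0.001, "ms": 1.0, "sec": 1000.0}  # made-up unit table


def normalize(value, unit, target_factor=1.0):
    """Convert (value, unit) to the target time unit; target_factor is the target's size in ms."""
    return value * TO_MILLISECONDS[unit] / target_factor


if __name__ == "__main__":
    # A period of 2 sec and a WCET of 500 us, both expressed in milliseconds:
    print(normalize(2, "sec"))   # 2000.0
    print(normalize(500, "us"))  # 0.5
```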
{"location":"pages/project_files/","text":"Cheddar Project File (XML and AADL) Information stored during a simulation can be saved into project files .
A project file is an XML file defined by this DTD . By the way, you do not need a deep understanding of the layout of Cheddar project files unless you want to edit project files by hand. If so, you should check that your project files are correctly structured with the tool xml2xml ( xml2xml just reads, parses and displays the content of an XML Cheddar project file on the screen ). All Cheddar XML files can be displayed with an Internet browser if you put the following XSLT file and the following CSS file in the directory hosting your XML Cheddar files. To do so, you should use a recent release of Internet Explorer (version 6.0 or later), Netscape (version 7.0 or later) or Mozilla (version 1.0 or later). From Cheddar, there are two ways to load a project file: First, a project file can be loaded from the File/Open XML project submenu. Just click on the Open button, and give the file name of your project. Second, a project can be loaded from the command line. For instance, to start Cheddar and load the project file my_project.xml , just run Cheddar with my_project.xml as the first argument: $cheddar my_project.xml Saving a project can be done with the same \"File\" menu. Cheddar can also import AADL specifications [SAE 04] . This service can be accessed through the submenu \"File/AADL/Import AADL\". In the same way, an XML project can be exported towards an AADL specification (see the \"File/AADL/Export AADL\" sub-menu). WARNING : the AADL parser included in Cheddar is only compliant with AADL V1 models. It is clearly deprecated and only kept for legacy reasons. If you plan to use Cheddar with AADL, please see AADLInspector or the Cheddar OSATE 2 plugin. As with XML files, you can launch Cheddar with an AADL file given from the command line. To launch Cheddar and automatically read the foo.aadl AADL specification file, do: $cheddar -a foo.aadl Finally, XML or AADL files can be loaded from any directory and a project can be saved in several project files. For example, to load a project saved in two AADL files called bar1.aadl and bar2.aadl, which are stored in the directory /home/foo, you must use the following command line: $cheddar -I/home/foo -a bar1.aadl bar2.aadl By default, Cheddar automatically loads the standard AADL files AADL_Project.aadl, AADL_Properties.aadl, Cheddar_Properties.aadl and User_Defined_Cheddar_Properties.aadl. The -I option can also be used to give the directory storing these standard AADL files. Otherwise, these files are supposed to be in the current directory. A copy of them can be generated from the File/AADL/Export property sets used by Cheddar and File/AADL/Export standard AADL property set submenus.","title":"2 - Cheddar Project File"},{"location":"pages/project_files/#cheddar-project-file-xml-and-aadl","text":"","title":"Cheddar Project File (XML and AADL)"},
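If you edit project files by hand, the guide above suggests checking them with xml2xml. As a rough stand-in, the following Python sketch (not part of Cheddar) parses an XML project file and prints it back, which at least catches well-formedness errors; the default file name is only an assumption, and no validation against the Cheddar DTD is performed.

```python
# Illustrative sketch: a rough stand-in for xml2xml, which reads, parses and displays
# an XML Cheddar project file. This only checks XML well-formedness, not the Cheddar DTD.

import sys
import xml.etree.ElementTree as ET


def dump_project(path):
    tree = ET.parse(path)        # raises xml.etree.ElementTree.ParseError on malformed XML
    ET.dump(tree.getroot())      # print the parsed content back to stdout


if __name__ == "__main__":
    dump_project(sys.argv[1] if len(sys.argv) > 1 else "my_project.xml")
```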
{"location":"pages/security/","text":"MILS and security services MILS is a high-assurance security architecture characterized by untrusted and trusted components and based on security models. This chapter describes how to model an RTCS (real-time critical system) based on such a security architecture, and the services available to perform security analysis. MILS (Multi Independent Levels of security) architecture MILS uses the divide and conquer approach to reduce the effort needed for the security evaluation of a system. MILS adopts a classification level for subjects and objects that guides information flow control. MILS introduces many concepts that are represented in Cheddar. A real-time critical system based on the MILS security architecture is defined as follows: At least a processor and a core. Cheddar address spaces that represent MILS partitions. Cheddar tasks that represent MILS processes. Each task has to be assigned to an address space. Cheddar shared resources such as messages and buffers that represent MILS objects. For each task, object, and address space, there are attributes that define their degree of sensitivity. The Confidentiality_Level attribute can be Unclassified, Classified, Secret, or Top_Secret. The Integrity_Level attribute can be Low, Medium, or High. Each task and address space is characterized by the attribute MILS_component_type that specifies its classification in the MILS architecture.
It can be SLS for a Single Level Secure component, MLS for a Multi-Level Secure component, or MSLS for a Multi Single-Level Secure component. Cheddar dependencies that represent MILS communications. MILS messages can be modeled by asynchronous communication Cheddar dependencies. In a MILS architecture, information flow control is performed through trustworthy components such as access control guards, the MILS Message Router (MMR), downgraders, collators, etc. These components are represented by tasks in Cheddar. Each task is characterized by an attribute MILS_Task_Type that specifies whether the task is a regular application or a security monitor. It can be Application , MMR , Guard , Collator , Downgrader , or Upgrader . Users should declare the above information in the XML file of a Cheddar-ADL system model. Security services This section explains how to perform security analysis based on several security models. A security model describes the security strategy for a system to ensure security objectives. Bell-La Padula, Biba, and Chinese Wall are examples of security models. They are implemented in Cheddar to verify Cheddar ADL models, which are their main entry point. Bell-La Padula concerns confidentiality, which is the assurance of preventing the system from disclosing information. It is based on the principle \"No read up/No write down\". The implemented method \"bell_lapadula (Sys: System )\" checks if the Cheddar ADL model \"Sys\" complies with the Bell-La Padula rules by returning the number of violations of the \"No read up/No write down\" rule. Biba concerns data integrity, meaning the protection of the system against unauthorized modifications. It is based on the principle \"No read down, no write up\". The implemented method \"biba (Sys: System)\" checks if the Cheddar ADL model \"Sys\" conforms to the Biba rules by returning how many times the Read down and Write up rules are violated. The Chinese Wall model addresses conflicts of interest. The implemented method \"chinese_wall (Sys : System; COIs : array_tasks_set; M: in out matrice)\" returns, for the Cheddar ADL model \"Sys\", the number of information flows that cause a conflict of interest, based on the defined conflict of interest classes \"COIs\". All these methods are available on the Cheddar svn repository at this link . Example of security analysis In this section, we present an example of security analysis with Cheddar. This example is stored in the file security.xmlv3 . It contains tasks with different security levels that lead to some security violations. Some of these violations are already solved by the use of downgraders. The security analysis of this model is shown in the screenshot below. In this screenshot, there are 3 communications that violate integrity and 0 violations of confidentiality. The confidentiality issues have been solved by downgraders in the model.","title":"9 - MILS and Security Services"},{"location":"pages/security/#mils-and-security-services","text":"MILS is a high-assurance security architecture characterized by untrusted and trusted components and based on security models. This chapter describes how to model an RTCS based on a security architecture, and the services available to perform security analysis.","title":"MILS and security services"},{"location":"pages/security/#mils-multi-independent-levels-of-security-architecture","text":"MILS uses a divide-and-conquer approach to reduce the effort for security evaluation of a system.
MILS adopts a classification level for subjects and objects that guides information flow control. MILS introduces many concepts that are represented in Cheddar. A real-time critical system based on a MILS security architecture is defined as follows: At least a processor and a core. Cheddar address spaces that represent MILS partitions. Cheddar tasks that represent MILS processes. Each task has to be assigned to an address space. Cheddar shared resources such as messages and buffers that represent MILS Objects. For each task, object, and address space, there are attributes that define their degree of sensitivity. The Confidentiality_Level attribute can be Unclassified , or Classified , or Secret , or Top_Secret . The Integrity_Level attribute can be Low , or Medium , or High . Each task and address space is characterized by the attribute MILS_component_type that specifies its classification in the MILS architecture. It can be SLS for a Single Level Secure component, MLS for a Multi-Level Secure component, or MSLS for a Multi Single-Level Secure component. Cheddar dependencies that represent MILS communications. MILS messages can be modeled by asynchronous communication Cheddar dependencies. In a MILS architecture, information flow control is performed through trustworthy components such as access control guards, the MILS Message Router (MMR), downgraders, collators, etc. These components are represented by tasks in Cheddar. Each task is characterized by an attribute MILS_Task_Type that specifies whether the task is a regular application or a security monitor. It can be Application , MMR , Guard , Collator , Downgrader , or Upgrader . Users should declare the above information in the XML file of a Cheddar-ADL system model.","title":"MILS (Multi Independent Levels of security) architecture"},{"location":"pages/security/#security-services","text":"This section explains how to perform security analysis based on several security models. A security model describes the security strategy for a system to ensure security objectives. Bell-La Padula, Biba, and Chinese Wall are examples of security models. They are implemented in Cheddar to verify Cheddar ADL models, which are their main entry point. Bell-La Padula concerns confidentiality, which is the assurance of preventing the system from disclosing information. It is based on the principle \"No read up/No write down\". The implemented method \"bell_lapadula (Sys: System )\" checks if the Cheddar ADL model \"Sys\" complies with the Bell-La Padula rules by returning the number of violations of the \"No read up/No write down\" rule. Biba concerns data integrity, meaning the protection of the system against unauthorized modifications. It is based on the principle \"No read down, no write up\". The implemented method \"biba (Sys: System)\" checks if the Cheddar ADL model \"Sys\" conforms to the Biba rules by returning how many times the Read down and Write up rules are violated. The Chinese Wall model addresses conflicts of interest. The implemented method \"chinese_wall (Sys : System; COIs : array_tasks_set; M: in out matrice)\" returns, for the Cheddar ADL model \"Sys\", the number of information flows that cause a conflict of interest, based on the defined conflict of interest classes \"COIs\". All these methods are available on the Cheddar svn repository at this link .","title":"Security services"},{"location":"pages/security/#example-of-security-analysis","text":"In this section, we present an example of security analysis with Cheddar.
This example is stored in the file security.xmlv3 . It contains tasks with different security levels that lead to some security violations. Some of these violations are already solved by the use of downgraders. The security analysis of this model is shown in the screenshot below. In this screenshot, there are 3 communications that violate integrity and 0 violations of confidentiality. The confidentiality issues have been solved by downgraders in the model.","title":"Example of security analysis"},{"location":"pages/user_defined/","text":"User-defined simulation code : how to run simulations of specific systems. Usual feasibility tests are limited to only a few task models (mainly periodic tasks) and to only a few schedulers. When an application built with a particular task activation pattern or scheduled with a particular scheduler has to be checked, feasibility tests are not necessarily available. In this case, the only solution consists in analyzing the scheduling simulation. Cheddar allows the user to design and easily build framework extensions to run simulations of user-defined schedulers or task activation patterns. By easy, we mean that one can quickly write and test framework extensions without a deep understanding of the framework design and of the Ada language. We propose the use of a simple language to describe framework extensions. Framework extensions are interpreted at simulation time. As a consequence, they can be changed and tested without recompiling the framework itself. Figure 6.1 How user-defined code is run by the scheduling engine Figure 6.1 gives an idea of the way the simulation engine is implemented in the framework. Running a simulation with Cheddar is a three-step process. The first step consists of computing the scheduling : we have to decide which events occur at each unit of time. Events can be allocating/releasing shared resources, writing/reading buffers, sending/receiving messages and of course running a task at a given time. At the end of this step, a table is built which stores all the generated events. The event table is built according to the XML description file of the studied application and according to a set of task activation patterns and schedulers. Usual task activation patterns and schedulers are predefined in the Cheddar framework, but users can add their own schedulers and task activation patterns. In the second step, the analysis of the event table is performed. The table is scanned by \"event analyzers\" to find properties of the studied system. At this step, some standard information can be extracted by predefined event analyzers (worst/best/average blocking time, missed deadlines, ...) but users can also define their own event analyzers to look for ad-hoc properties (ex : synchronization constraints between two tasks, shared resource access order, ...). The results produced during this step are XML formatted and can be exported to other programs. Finally, the last step consists of displaying the XML results in the Cheddar main window (see Figure 1.4). Defining new schedulers or task activation patterns. Now, let's see how user-defined schedulers or task activation patterns can be added to the framework. Basically, all tasks are stored in a set of arrays. Each array stores a given piece of information for all tasks (ex : deadline, capacity, start time, ...). The job of a scheduler is to find a task to run from a set of ready tasks.
To achieve this job, Cheddar models a scheduler with a 3-stage pipeline which is similar to the POSIX 1003.1b scheduler (see [GAL 95]) . These 3 stages are : The priority stage. For each ready task, a priority is computed. The queueing stage. Ready tasks are inserted into different queues. There is one queue per priority level. Each queue contains all the ready tasks with the same priority value. Queues are managed like POSIX scheduling queues : if a quantum is associated with the scheduler, queues work like the SCHED_RR scheduling queueing policy. Otherwise, the SCHED_FIFO queueing policy is applied. The election stage. The scheduler looks for the non-empty queue with the highest priority level and allocates the processor to the task at the head of this queue. The elected task keeps the processor for one unit of time if the designed scheduler is preemptive, or for its whole capacity if the scheduler is not preemptive. Defining a new scheduler simply consists in giving pieces of code for some of the pipeline stages described above. Each of these stages can be defined by a user without the need for a deep knowledge of the way the scheduling simulator works. User-defined schedulers are stored in text files. These files are organized in several sections : The start section. In this section, you may declare variables needed to schedule your tasks. Many variables are already predefined in Cheddar. Some of them are those defined at task/processor/buffer/message definition (ex : period, deadline, capacity ...). This set of predefined variables can be extended with the \"Edit/Update Tasks\" submenu (see user-defined parameters). The others are managed by the simulator engine and describe the state of tasks/processors/buffers/messages at simulation time. See section VI.5 for a list of all predefined variables. All variables used in a scheduler should have a type. The framework provides two type families : scalar types and arrays. One can define variables with a scalar type of double, integer, boolean, string and also random (a random is a type which allows the user to generate random values). An array is a type which stores one scalar value per task, message, buffer or shared resource. Arrays are declared like usual Ada arrays. Vectorial operations can be done on this kind of variable. The priority section. This section contains the code necessary to compute task priorities. The code given here is called each time a scheduling decision has to be made (at each unit of time for a preemptive scheduler, and when a task has run its whole capacity for a non-preemptive scheduler). The code given here can be composed of the different statements described in section VI.5. The election section. This section just decides which task should receive the processor for the next units of time. This section should only contain one return statement. The task activation section. This section describes how tasks can be activated during a simulation. In Cheddar, 3 kinds of tasks exist : aperiodic tasks, which are activated only once, and periodic or Poisson process tasks, which are activated several times. In the case of periodic tasks, two successive task activations are separated by a fixed amount of time called the period. In the case of Poisson process tasks, two successive task activations are separated by an exponential random delay. The task activation section allows you to define new kinds of task activation patterns (ex : sporadic task, randomly activated task, burst of activations, ...).
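To give a first idea of how these sections fit together, here is a minimal sketch of a complete .sc file. It is only an illustrative skeleton, not a scheduler shipped with Cheddar, and the activation rule name twice_period_rule is an arbitrary name chosen for this example; the syntax follows the examples and the BNF given later in this chapter. The sketch elects, among the ready tasks, the one with the smallest deadline, which gives a Deadline Monotonic like behaviour : start_section: dynamic_priority : array (tasks_range) of integer; end section; priority_section: dynamic_priority := tasks.deadline; end section; election_section: return min_to_index(dynamic_priority); end section; task_activation_section: set twice_period_rule 2\*tasks.period; end section; Sections that are not needed can simply be omitted, as the examples below show.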
In the sequel, we first give you some simple examples of user-defined schedulers. Then, we explain how to use this kind of scheduler to run scheduling simulations with Cheddar. The list of statements and the list of predefined variables are given at the end of this section. Examples of user-defined schedulers. In this section, we give some user-defined scheduler examples. We first show that a user-defined scheduler can be built with two kinds of statements : high-level and low-level statements. Second, we present how to add new task parameters with User's defined task parameters. Low-level statements versus High-level statements Now let's see some very simple user-defined schedulers. The simplest user-defined scheduler can be defined as below : election_section: return min_to_index(tasks.period); end section; Figure 6.2 A simple Rate Monotonic scheduler This first example shows you how to give the processor to the task with the smallest period. This scheduler is equivalent to the Rate Monotonic scheduler implemented in Cheddar. tasks.period is a predefined variable initialized at task definition time by the user. To implement a Rate Monotonic scheduler, no dynamic priority is computed and no variable is necessary. Thus, the scheduler designer does not have to define the start and priority sections. The only section which is defined is the election one. The election section contains a unique return statement to inform the scheduling simulator engine which task should be run for the next unit of time. The return statement uses the high-level min_to_index operator. This operator scans the task array to find the ready task with the minimum value for the variable tasks.period . In Cheddar, the scheduler designer can use two kinds of statements : high-level and low-level statements. High-level statements like min_to_index hide the data type organization of the scheduling simulator engine. For example, the scheduler designer does not need to write statements in the user-defined scheduler to manually scan the task array. Writing a scheduler with high-level statements is then quite easy. On the contrary, low-level statements assume that the user has a deeper understanding of the design of the scheduling simulator engine. However, these statements are sometimes necessary when the scheduler designer wants to code a very specific scheduler. Now let's see how to define an EDF-like scheduler : start_section: dynamic_priority : array (tasks_range) of integer; end section; priority_section: dynamic_priority := tasks.start_time + tasks.deadline + ((tasks.activation_number-1)\*tasks.period); end section; election_section: return min_to_index(dynamic_priority); end section; Figure 6.3 An EDF-like scheduler using vectorial operators EDF is a dynamic scheduler which computes a dynamic priority for each task. This dynamic priority is in fact a deadline. EDF just gives the processor to the task with the earliest deadline. In our example, this deadline is stored in a variable called dynamic_priority . Since we need one value per task, the type of this variable is integer array . With this example, the priority_section is no longer empty and contains (lines 5 to 7) the necessary code to compute the EDF dynamic priorities.
You should notice that the code in line 6/7 is in fact a vectorial operation : the arithmetic operation to compute the deadline is done for each item of the table dynamic_priority ranging from 1 to nb_tasks ( nb_tasks is a static predefined variable initialized to the number of tasks on the current processor). To compute the dynamic priorities of our example, we used several predefined variables : tasks.deadline, tasks.start_time and tasks.period : they are the deadline, start time and period values given by the user at task definition time (in the window Edit/Update tasks). tasks.activation_number : it is a variable updated by the simulation engine. The simulator increments this variable each time a periodic or a Poisson process task starts a new activation. For instance, if tasks.activation_number(i) is equal to 3, it means that the task i has started its 4th activation. You can find in VI.5 a list of all predefined variables and all available statements you can use to build your user-defined scheduler. The example of Figure 6.3 is built with vectorial operators : each arithmetic operation is done for all the tasks of the system. The scheduler designer does not need to take care of the task array and just gives the rules to compute the EDF dynamic deadline. Like max_to_index/min_to_index , these statements are high-level ones because they do not require direct access to the data type organization of the scheduling engine of Cheddar (mainly the task arrays). Now, let's see a third example: start_section: to_run : integer; current_priority : integer; end section; priority_section: current_priority:=0; for i in tasks_range loop if (tasks.ready(i) = true) and (tasks.priority(i)>current_priority) then to_run:=i; current_priority:=tasks.priority(i); end if; end loop; end section; election_section: return to_run; end section; Figure 6.4 Building a user-defined scheduler with low-level statements This scheduler looks for the highest priority ready task of a processor and is fully equivalent to the scheduler described by : election_section: return max_to_index(tasks.priority); end section; Figure 6.5 An HPF scheduler built with high-level statements but, in the example of Figure 6.4, the code itself scans the task array to find a ready task to run. To achieve this, the example of Figure 6.4 is built with low-level instructions : a for loop and an if statement. The priority_section is then composed of a loop that tests each task entry. This loop is made with a for statement, a loop that runs the inner statements for each task defined in the task array. Contrary to a high-level implementation, a scheduler made of low-level statements has to carry out more tests. For instance, the example of Figure 6.4 checks, with the ready dynamic variable, whether tasks are ready at the time the scheduler is called. Low-level schedulers are thus more complicated and more difficult to test. The reader will find some tips to help test complicated user-defined schedulers in section 6.3 . User-defined scheduler built with User's defined Task Parameters In the previous examples, the data used to build user-defined schedulers were either static variables initialized at task definition time, or dynamic variables predefined or declared in the start section. A last type of data exists in Cheddar : User's defined task parameters. This kind of data is static and is defined at task definition time. User's defined task parameters allow the user to extend the set of static variables.
Since they describe new task parameters, User's defined task parameters are table type. User's defined task parameters can be boolean, integer, double or string table type. To define User's defined task parameters, you have to update the third part of the entity task. Use the submenu \"Edit/Entities/Software/Task\" : Figure 6.6 Adding an Users's Defined Task Parameter The example above shows you a system composed of 3 tasks (T1, T2 and T3) where a criticity level is defined. Like usual task parameters, you should give a value to a User's defined task parameter (ex : the criticity level for task T1 is 1) but you also have to set a type to the parameter ( integer in our example). When tasks are created, as usually, you can call the scheduling simulation services of Cheddar. The next window is a snapshoot of the resulting scheduling of our example composed of 3 tasks scheduled according to their criticity level. (T2 is the most critical task and T1 the less critical). Figure 6.7 Scheduling according to a criticity level To conclude this chapter, let's have a look to a more complex example of user-defined scheduler which summarises all the features presented before. This example is an ARINC 653 scheduler (see [ARI 97]). An ARINC 653 system is composed of several partitions. A partition is a unit of software and is itself composed of processes and memory spaces. A processor can host several partitions so that two levels of scheduling exist in an ARINC653 system : partition scheduling and process scheduling. Process scheduling. In one partition, process are scheduled according to their fixed priority. The scheduler is preemptive and always gives the processor to the highiest fixed priority task of the partition which is ready to run. When several tasks of a partition have the same priority level, the oldest one is elected. Partition scheduling. Partitions share the processor in a predefined way. On each processor partitions are activated according to an activation table. This table is built at design time and defines a cycle of partition scheduling. The table describes for each partition when it has to be activated and how much time it has to run for each of its activation. Figure 6.8 An example of ARINC 653 scheduling The Figure 6.8 displays an example of ARINC 653 scheduling (see the XML project file project_examples/arinc653.xml). The studied system is made of 3 tasks hosted by one processor. The processor owns 2 partitions : partition number P0 and partition number P1. The task T1 runs in partition P0 and the two others run in partition P1. Each task has a fixed priority level : the T1 priority is 1, the T2 priority is 5 and the T3 priority is 4. The cyclic partition scheduling should be done so that P0 runs before P1. In each cycle, P0 should be run during 2 units of time and P1 should run during 4 units of time. 
The user-defined scheduler source code used to compute the scheduling displayed in Figure 6.8 is given below : start_section: partition_duration : array (tasks_range) of integer; dynamic_priority : array (tasks_range) of integer; number_of_partition : integer :=2; current_partition : integer :=0; time_partition : integer :=0; i : integer; partition_duration(0):=2; partition_duration(1):=4; time_partition:=partition_duration(current_partition); end section; priority_section: if time_partition=0 then current_partition:=(current_partition+1) mod number_of_partition; time_partition:=partition_duration(current_partition); end if; for i in tasks_range loop if tasks.task_partition(i)=current_partition then dynamic_priority(i):=tasks.priority(i); else dynamic_priority(i):=0; tasks.ready(i):=false; end if; end loop; time_partition:=time_partition-1; end section; election_section: return max_to_index(dynamic_priority); end section; Figure 6.9 Processes and partitions scheduling in an ARINC 653 system In this code, tasks.task_partition is a User's defined task parameter. tasks.task_partition stores the partition number hosting the associated task. The variable partition_duration stores the partition cyclic activation table. Scheduling with specific task models. In the same way that you can define specific schedulers, you can also define specific task activation patterns. By default, 3 kinds of task activation patterns are defined in Cheddar : Periodic task : a fixed amount of time exists between two successive task activations. Aperiodic task : the task is activated only once at a given time. Poisson process task : tasks are activated several times and the delay between two successive activations is a random delay. The static variable period in this case is the average time between two successive activations. The delay between activations is generated according to a random Poisson process. If the application you want to study cannot be modeled with the 3 kinds of activation rules above, a possible solution is to express your own task activation pattern with a user-defined scheduler. The description of task activation patterns is done in .sc files, in a particular section called task_activation_section . In this section, you can define named activation rules with set statements. The set statement just links a name/identifier (the left part of the set statement) and an expression (the right part of the set statement). The expression gives the amount of time the scheduling simulator engine has to wait between two activations of a given task. start_section: gen1 : random; gen2 : random; exponential(gen1, 200); uniform(gen2, 0, 100); end section; election_section: return max_to_index(tasks.priority); end section; task_activation_section: set activation_rule1 10; set activation_rule2 2\*tasks.capacity; set activation_rule3 gen1\*20; set activation_rule4 gen2; end section; Figure 6.10 Defining new task activation patterns : how to run simulations with specific task models The example of Figure 6.10 describes a Highest Priority First scheduler which hosts tasks activated with different patterns. Each pattern is described by a set statement : The pattern activation_rule1 describes periodic tasks with a period equal to 10. The pattern activation_rule2 describes periodic tasks with a period equal to twice their capacity. The pattern activation_rule3 describes randomly activated tasks. Two successive activations are delayed by an amount of time which is randomly computed.
Delays are computed according to a random exponential distribution pattern with a mean value of 400 . 400 is then the average period value of the tasks. The seed used during random delay generation depends on the scheduling options set at simulation time (see section I.3 ) : the user can choose to associate a seed per task or a seed for all the tasks. Seeds can be initialized in a predictable way or in an unpredictable way. In the case of a predictable seed, the random generator is initialized with the seed value given at task definition time or in the scheduling option window. In the case of an unpredictable seed, the seed is initialized with \"gettimeofday\" at simulation time. The pattern activation_rule4 describes randomly activated tasks. Two successive activations are delayed by an amount of time that is randomly computed. Delays are computed according to a random uniform distribution pattern with a mean value of 50 . At each periodic task activation, the period can have a value between 0 and 100. The seed used during random delay generation is managed in the same way as for activation_rule3. When task activation rules are defined, task activation names (ex : activation_rule1) have to be associated with \"real\" tasks. The picture below shows you an \"Edit/Entities/Software/Task\" window : Figure 6.11 Assigning activation rules to tasks In this example, the task activation rule activation_rule1 is associated with task T1. The task activation rule activation_rule2 is associated with task T2. The task activation rule activation_rule3 is associated with task T3. Running a simulation with a user-defined scheduler. Let's see how to run a simulation with one or several user-defined schedulers. First, you have to add a scheduler by selecting the submenu \"Edit/Entities/Hardware/Core\". The following window is then launched : Figure 6.13 Define a core with a user-defined scheduler To add a user-defined scheduler into a Cheddar project, select the right item of the Combo Box and give a name to your scheduler. You should then provide the code of your user-defined scheduler. This operation can be done by pushing the \"Read\" button. In this case, the following window is spawned and you should give the name of the file containing the code of your user-defined scheduler : Figure 6.14 Selecting the .sc file which contains the user-defined scheduler By convention, files that contain user-defined scheduler code should be suffixed by .sc . For example, the file rm.sc in our example should at least contain an election section and, of course, can also contain a start and a priority section. When a processor is defined, you have to add tasks on it. To do so, select the submenu \"Edit/Entities/Software/Task\" like in section I . Just place the task on the previously defined processor. Finally, you can run scheduling simulations as in the usual case. Since a user-defined scheduler is also a piece of code, you sometimes need to debug it. To do so, you can use the following tips : First, a special instruction can be used to display the value of a variable on the screen : the put statement.
For instance, running the following user-defined code will display the value of the dynamic variable to_run each time the scheduler is called : \\--!TRACE start_section: to_run : integer; current_priority : integer; end section; priority_section: current_priority:=0; for i in tasks_range loop if (tasks.ready(i)=true) and (tasks.priority(i)>current_priority) then to_run:=i; put(to_run); current_priority:=tasks.priority(i); end if; end loop; end section; election_section: return to_run; end section; Figure 6.15 Using the put statement A second tip can help you to test if the syntax of your user-defined scheduler is correct. In all .sc file, you can add the line --!TRACE anywhere. If you add this line, the parser will give extra information during the syntax analysis of your user-defined scheduler. It's useful if you want to test a .sc file before using it in a Cheddar project file. You can also test it with sc , a program designed to read, parse and check .sc files. Looking for user-defined properties during a scheduling simulation. start_section: i : integer; nb_T2 : integer; nb_T1 : integer; bound_on_jitter : integer; max_delay : integer; min_delay : integer; tmp : integer; T1_end_time : array (time_units_range) of integer; T2_end_time : array (time_units_range) of integer; min_delay:=integer'last; max_delay:=integer'first; i:=0; nb_T1:=0; nb_T2:=0; end section; gather_event_analyzer_section: if (events.type = \"end_of_task_capacity\") then if (events.task_name = \"T1\") then T1_end_time(nb_T1):=events.time; nb_T1:=nb_T1+1; end if; if (events.task_name = \"T2\") then T2_end_time(nb_T2):=events.time; nb_T2:=nb_T2+1; end if; end if; end section; display_event_analyzer_section: while (i < nb_T1) and (i < nb_T2) loop tmp:=abs(T1_end_time(i)-T2_end_time(i)); min_delay:=min(tmp, min_delay); max_delay:=max(tmp, max_delay); i:=i+1; end loop; bound_on_jitter:=abs(max_delay-min_delay); put(min_delay); put(max_delay); put(bound_on_jitter); end section; Figure 6.16 Example of user-defined event analyzer : computing task termination jitter bound In the same way that users can define new schedulers, Cheddar makes it possible to create user-defined event analyzers. These event analyzers are also writen with an Ada-like language and interpreted at simulation time. The event table produced by the simulator records events related to task execution and related to objects that tasks access. Event examples stored in this table can be : Events produced when a task becomes ready to run (event task_activation), when a task starts or ends running its capacity (events start_of_task_capacity and end_of_task_capacity), Events produced when a task reads or writes data from/to a buffer (events write_to_buffer and read_from_buffer), Events produced when a task sends or receives a message (events send_message and receive_message), Events produced when a task starts waiting for a busy resource (event wait_for_a_resource), allocates or releases a given resource (events allocate_resource and release_resource). Each of these events is stored with the time it occurs and with information related to the event itself (eg. name of the resource, of the buffer, of the message, of the task ...). The event table is scanned sequentially by event analyzers. User-defined event analyzers are composed of several sections : a start section, a data gathering section and an analyze and display section. As user-defined schedulers, the start section is devoted to variable declarations and initializations. 
The gathering section contains code which is called for each item of the event table. Most of the time, this section contains statements which extract useful data from the event table, and store them for the event analyzer. Finally, the display section performs analysis on data previously saved by the gathering section and displays the results in the main window of the Cheddar Editor. Figure 6.16 gives an example of user-defined event analyzer. From an ARINC 653 scheduling this event analyzer computes the minimum, the maximum and the jitter on the delay between end times of two tasks owned by different partitions (tasks T1_P0 and T2_P1 ; see Figure 6.9). List of predefined variables and available statements. The tables below list all predefined variables that are available when you write a user-defined code. The columns from left to right are : Name : Variable name Type : Variable type Update : Is updated by the simulator engine Changeble : Can be changed by user code Meaning : Explaination of the variable Note: Use the scroll bar at the bottom of the table the see the entire content Name Type Update Changeble Meaning Variables related to processors nb_processors integer no no Gives the number of processors of the current analyzed system. processors.speed integer yes yes Gives the speed of the processor hosting the scheduler. Variables related to tasks tasks.period array (tasks_range) of integer yes yes Stores the value of the parameter given at task definition time. For the meaning of this variable, see section I . tasks.name array (tasks_range) of string no no Name of the task tasks.type array (tasks_range) of string no no Type of the task (periodic, aperiodic, sporadic, poisson_process or userd_defined) tasks.processor_name array (tasks_range) of string no no Stores the processor name of the cpu hosting the corresponding task. tasks.blocking_time array (tasks_range) of integer no yes Stores the sum of the bounded times the task has to wait on shared resource accesses. tasks.deadline array (tasks_range) of integer yes yes Stores the value of the parameter given at task definition time. For the meaning of this variable, see section I . tasks.capacity array (tasks_range) of integer yes yes Stores the value of the parameter given at task definition time. For the meaning of this variable, see section I . tasks.start_time array (tasks_range) of integer yes yes Stores the value of the parameter given at task definition time. For the meaning of this variable, see section I . tasks.used_cpu array (tasks_range) of integer yes no Stores the amount of processor time wasted by the associated task. tasks.activation_number array (tasks_range) of integer yes no Stores the activation number of the associated task. Of course, using this variable is meaningless for aperiodic tasks. tasks.jitter array (tasks_range) of integer yes yes Stores the value of the parameter given at task definition time. For the meaning of this variable, see section I . tasks.priority array (tasks_range) of integer yes yes Stores the value of the parameter given at task definition time. For the meaning of this variable, see section I . tasks.used_capacity array (tasks_range) of integer yes no This variable stores the umount of time unit the task had consumed since its last activation. 
When tasks.used_capacity reaches tasks.capacity, the task stops to run and waits its next activation tasks.rest_of_capacity array (tasks_range) of integer yes no For each task activation, this variable is initialized to the task capacity each time the task starts a new activation. If rest_of_capacity is equal to zero, the task has over its its current activation and then task is blocked upto its next activation. tasks.suspended array (tasks_range) of integer yes yes This variable can be used by scheduler programmers to block a task : remove a task from schedulable tasks. nb_tasks integer no no Gives the number of tasks of the current analyzed system. tasks.ready array (tasks_range) of boolean yes no Stores the state of the task : this boolean is true if the task is ready ; it means the task has a capacity to run, does not wait for a shared resource, does not wait for a delay, does not wait for a offset constraint and does not wait for a precedency constraint. Variables related to messages nb_messages integer no no Gives the number of messages of the current analyzed system. messages.name array (messages_range) of string no no Gives the names of each message. messages.jitter array (messages_range) of integer no no Jitter on the time the periodic message becomes ready to be sent. messages.period array (messages_range) of integer no no Gives the sending period if the message is a periodic one. messages.delay array (messages_range) of integer no no time needed by a message to go from the sendrer to the receiver node. messages.deadline array (messages_range) of integer no no Stores the deadline if the message has to meet one. messages.size array (messages_range) of integer no no Stores the size of the message. messages.users.time array (messages_range) of integer no no Stores the time when the task should send or receive the message. messages.users.task_name array (messages_range) of string no no Stores the task name that sends/receives the message. messages.users.type array (messages_range) of string no no Stores sender if the corresponding task sends the message or stores receiver if the task receives it. Variables related to buffers nb_buffers integer no no Gives the number of buffers of the current analyzed system. buffers.max_size array (buffers_range) of integer no no The maximum size of a given buffer. buffers.processor_name array (buffers_range) of string no no Gives the processor name that owns the buffer. buffers.name array (buffers_range) of string no no Unique name of the buffer. buffers.users.time array (buffers_range) of integer no no Stores the time a given task consumes/produces a message from/into a buffer. buffers.users.size array (buffers_range) of integer no no Stores the size of the message produced/consumed into/from a buffer by a given task. buffers.users.task_name array (buffers_range) of string no no Stores the task name that procudes/consumes messages into/from a given buffer. buffers.users.type array (buffers_range) of string no no Stores consumer if the corresponding task consumes messages from the buffer or stores producer if the task produces messages. Variables related to shared resources nb_resources integer no no Gives the number of shared resources of the current analyzed system. resources.initial_state array (resources_range) of integer no no Stores the state of the resource when the simulation is started. If this integer is equal of less than zero, the first allocation request will block the requesting task. 
resources.current_state array (resources_range) of integer no no Stores the current state of the resource. If this integer is equal of less than zero, the first allocation request will block the requesting task. After an allocation of the resource, this counter is decremented. After the task has released the resource, this counter is incremented. resources.processor_name array (resources_range) of string no no Stores the name of the processors hosting the shared resource. resources.protocol array (resources_range) of string no no Contains the protocol name used to manage the resource allocation request. Could be either no_protocol, priority_ceiling_protocol or priority_inheritance_protocol resources.name array (resources_range) of integer no no Unique name of the shared resource resources.users.task_name array (resources_range) of string no no Gives the name of a task that can access the shared resource. resources.users.start_time array (resources_range) of integer no no Gives the time the task starts accessing the shared resource during its capacity. resources.users.end_time array (resources_range) of integer no no Gives the time the task ends accessing the shared resource during its capacity. Variables related to the scheduling simulation previously_elected integer yes no At the time the user-defined scheduler runs, this variable stores the TCB index of the task elected at the previous simulation time simulation_time integer yes no Stores the current simulation time . Variables related to the event table events.type string no no Type of event on the current index table. Can be task_activation , running_task , write_to_buffer , read_from_buffer , send_message , receive_message , start_of_task_capacity , end_of_task_capacity , allocate_resource , release_resource , wait_for_resource . events.time integer no no The time when the event occurs. events.processor_name string no no The processor name hosting the task/resource/buffer related to the current event. events.task_name string no no The task name related to the current event. events.message_name string no no The message name related to the current event. events.buffer_name string no no The buffer name related to the current event. events.resource_name string no no The resource name related to the current event. 
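As a small illustration of how these predefined variables can be combined, the following one-line sketch (not part of the Cheddar distribution) elects, among the ready tasks, the one with the smallest remaining capacity, using the dynamic variable tasks.rest_of_capacity listed above : election_section: return min_to_index(tasks.rest_of_capacity); end section; Since min_to_index only considers ready tasks, this sketch behaves like a shortest remaining processing time policy during the simulation.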
The BNF syntax of a .sc file is given below : entry := start_rule priority_rule election_rule task_activation_rule gather_event_analyzer display_event_analyzer declare_rule := \"start_section:\" statements priority_rule := \"priority_section:\" statements election_rule := \"election_section:\" statements task_activation_rule := \"task_activation_section\" statements gather_event_analyzer := \"gather_event_analyzer_section\" statements display_event_analyzer:= \"display_event_analyzer_section\" statements statements := statement {statement} statement := \"put\" \"(\" identifier \\[, integer\\] \\[, integer\\]\")\" \";\" | identifier \":\" data_type \\[ \":=\" expression \\] \";\" | identifier \":=\" expression \";\" | \"if\" expression \"then\" statements \\[ \"else\" statements \\] \"end\" \"if\" \";\" | \"return\" expr \";\" | \"for\" identifier \"in\" ranges \"loop\" statements \"end\" \"loop\" \";\" | \"while\" expression \"loop\" statements \"end\" \"loop\" \";\" | \"set\" identifier expression \";\" | \"uniform\" \"(\" identifier \",\" expression \",\" expression \")\" \";\" | \"exponential\" \"(\" identifier \",\" expression \")\" \";\" data_type := scalar_data_type | \"array\" \"(\" ranges \")\" \"of\" scalar_data_type ranges := \"tasks_range\" | \"buffers_range\" | \"messages_range\" | \"resources_range\" | \"processors_range\" | \"time_units_range\" scalar_data_type := \"double\" | \"integer\" | \"boolean\" | \"string\" | \"random\" operator := \"and\" | \"or\" | \"mod\" | \"<\" | \">\" | \"<=\" | \">=\" | \"/=\" | \"=\" | \"+\" | \"/\" | \"-\" | \"\\*\" | \"\\*\\*\" expression := expression operator expression | \"(\" expression \")\" | \"not\" expression | \"-\" expression | \"max_to_index\" \"(\" expression \")\" | \"min_to_index\" \"(\" expression \")\" | \"max\" \"(\" expression \",\" expression \")\" | \"min\" \"(\" expression \",\" expression \")\" | \"lcm\" \"(\" expression \",\" expression \")\" | \"abs\" \"(\" expression \")\" | identifier \"\\[\" expression \"\\]\" | identifier | integer_value | double_value | boolean_value Notes on the BNF of .sc file syntax : entry is the entry point of the grammar. The data_type rule describes all data types available in a .sc file The operator rule lists all binary operators. The expression rule gives all possible expressions that you can use to define your scheduler. The statement rule contains all statements that can be used in a .sc file. identifier is a string constant. integer_value is a integer constant. double_value is a double constant. boolean_value is a boolean constant. Two kinds of statements exist to build your user-defined scheduler : low-level and high-level statements. high-level statements operate on all task information. low-level statements operate only on one information of a task at a time. all these statements work as follows : The if statement : works like in Ada or most of programming languages : run the else or the then statement branch according to the value of the if expression . The while statement : works like in Ada or most of programming languages : run the statements enclosed in the loop/end loop block until the while condition becomes false. The for statement : it's an Ada loop with a predefined iterator index. With a for statement, the statements enclosed in the loop are run for each task defined in the TCB table. At each iteration, the variable defined in the for statement is incremented. 
Then, in the case of task loop for instance (use keyword tasks_range in this case), its value ranges from 1 to nb_tasks ( nb_tasks is a predefined static variable initiliazed to the number of tasks hosted by the currently analyzed processor). The return statement. You can use a return statement in two cases : With any argument in any section except in the election_section . In this case, the return statement just end the code of the section. With a integer argument and only in the election_section . Then, the return statement give the task number to be run. When the return statement returns the -1 value, it means that no task has to be run at the nuext unit of time. The put(p,a,b) statement : displays the value of the variable p on the screen. This statement is useful to debug your user-defined scheduler. If a and b are not equal to zero and if p is an array type, put(p,a,b) displays entries of the table with index between a and b . If a and b are equal to zero and if p is an array, all entries of the array are displayed. The delete_precedence \"a/b\" statement : remove the dependency between task a and b ( a is the source task while b is the destination/sink task). The add_precedence \"a/b\" statement : add a dependency between task a and b ( a is the source task while b is the destination/sink task). The exponential(a,b) statement : intializes the random generator a to generate exponential random values with an average value of b . The uniform(a,b,c) statement : intializes the random generator a to generate uniformly random values between b and c . The set statement : description of new task activation model : assign an expression which shows how to compute task wake up time with an identifier. The predefined operators and subprograms are the following: abs(a) : returns the unsigned value of a . lcm(a,b) : returns the last common multiplier of a and b . max(a,b) : returns the maximum value between a and b . min(a,b) : returns the minimum value between a and b . max_to_index (v) : firstly finds the task in the TCB with the maximum value of v ,and then returns its position in the TCB table. Only ready tasks are considered by this operator. min_to_index(v) : firstly finds the task in the TCB with the minimum value of v , and then returns its position in the TCB table Only ready tasks are considered by this operator. a mod b : computes the modulo of a on b (rest of the integer division). to_integer(a) : cast a from double to integer. a must be a double. to_double(a) : cast a from integer to double. a must be an integer. integer'last : return the largest value for the integer type. integer'first : return the smallest value for the integer type. double'last : return the largest value for the double type. double'first : return the smallest value for the double type. get_task_index (a) : return the index in the task table for the task named a . get_buffer_index (a) : return the index in the buffer table for the buffer named a . get_resource_index (a) : return the index in the resource table for the resource named a . get_message_index (a) : return the index in the message table for the message named a .","title":"6 - User Defined Scheduler"},{"location":"pages/user_defined/#user-defined-simulation-code-how-to-run-simulations-of-specific-systems","text":"Usual feasibility tests are limited to only few task models (mainly periodic tasks) and to only few schedulers. 
When an application built with a particular task activation pattern or scheduled with a particular scheduler has to be checked, feasibility tests are not necessarily available. In this case, the only solution consists in analyzing the scheduling simulation. Cheddar allows the user to design and easily build framework extensions to do simulation of user-defined schedulers or task activation patterns. By easy, we mean quickly write and test framework extensions without a deep understanding of the framework design and of the Ada language. We propose the use of a simple language to describe framework extensions. Framework extensions are interpreted at simulation time. As a consequence, they can be changed and tested without recompiling the framework itself. Figure 6.1 How a user-defined code is run by the scheduling engine Figure 6.1 gives an idea on the way the simulation engine is implemented in the framework. Running a simulation with Cheddar is a three-step process. The first step consists of computing the scheduling : we have to decide which events occur for each unit of time. Events can be allocating/releasing shared resources, writing/reading buffers, sending/receiving messages and of course running a task at a given time. At the end of this step, a table is built which stores all the generated events. The event table is built according to the XML description file of the studied application and according to a set of task activation patterns and schedulers. Usual task activation patterns and schedulers are predefined in the Cheddar framework but users can add their own schedulers and task activation patterns. In the second step, the analysis of the event table is performed. The table is scanned by \"event analyzers\" to find properties on the studied system. At this step, some standard information can be extracted by predefined event analyzers (worst/best/average blocking time, missed deadlines ..) but users can also define their own event analyzers to look for ad-hoc properties (ex : synchronization constraints between two tasks, shared resources access order, ...). The results produced during this step are XML formatted and can be exported towards other programs. Finally, the last step consists of displaying XML results in the Cheddar main window (see Figure 1.4).","title":"User-defined simulation code : how to run simulations of specific systems."},{"location":"pages/user_defined/#defining-new-schedulers-or-task-activation-patterns","text":"Now, let's see how user-defined schedulers or task activation patterns can be added into the framework. Basically, all tasks are stored in a set of arrays. Each array stores a given information for all tasks (ex : deadline, capacity, start time, ...). The job of a scheduler is to find a task to run from a set of ready tasks. To achieve this job, Cheddar models a scheduler with a 3 stages pipe-line which is similar to the POSIX 1003.1b scheduler (see [GAL 95]) . These 3 stages are : The priority stage. For each ready task, a priority is computed. The queueing stage. Ready tasks are inserted into different queues. There is one queue per priority level. Each queue contains all the ready tasks with the same priority value. Queues are managed like POSIX scheduling queues : if a quantum is associated with the scheduler, queues work like the SCHED_RR scheduling queueing policy. Otherwise, the SCHED_FIFO queueing policy is applied. The election stage. 
The scheduler looks for the non empty queue with the highest priority level and allocates the processor to the task at the head of this queue. The elected task keeps the processor during one unit of time if the designed scheduler is preemptive or during all its capacity if the scheduler is not preemptive. Defining a new scheduler is simply giving piece of code for some of the pipe-line stages we described above. Each of these stages can be defined by a user without the need to have a deep knowledge of the way the scheduling simulator works. User-defined schedulers are stored in text files. These files are organized in several sections : The start section. In this section, you may declare variables needed to schedule your tasks. Many variables are already predefined in Cheddar. Some of them are those defined at task/processor/buffer/message definition (ex : period, deadline, capacity ...). This set of predefined variables can be extended with the \"Edit/Update Tasks\" submenu (see user-defined parameters). The others are managed by the simulator engine and describe the state of tasks/processors/buffers/messages at simulation time. See section VI.5 for a list of all predefined variables. All variables used in a scheduler should have a type. The framework provides two type families : scalar types and arrays. One can define variable with scalar type of double, integer, boolean, string and also of random (a random is a type which allows the user to generate ramdom values). An array is a type which stores one scalar data per task, message, buffer or shared resource. Arrays are declared as usual Ada Table. Vectorial operations can be done on this kind of variable. The priority section. The section contains the code necessary to compute task priorities. The code given here is called each time a scheduling decision has to be made (at each unit of time for preemptive scheduler and when a task has run during all its capacity for non preemptive scheduler). The code given here can be composed of many differents statements described in section VI.5 The election section. This section just decides which task should receive the processor for next units of time. This section should only contain one return statement. The task activation section. This section describes how tasks could be activated during a simulation. In Cheddar, 3 kinds of tasks exists : aperiodic tasks which are activated only one time and periodic or poissons process tasks which are activated several times. In the case of periodic tasks, two successive task activations are delayed by an amount of fixed time called period. In the case of poisson process tasks, two successive task activations are delayed by an exponential random delay. The task activation section allows you to define new kinds of task activation patterns (ex : sporadic task, randomly activated task, burst of activations, ...). . In the sequel, we first give you some simple examples of user-defined schedulers. Then, we explain how to use this kind of scheduler to do scheduling simulation with Cheddar. The list of statements and the list of predefined variables is given at the end of this section.","title":"Defining new schedulers or task activation patterns."},{"location":"pages/user_defined/#examples-of-user-defined-schedulers","text":"In this section, we give some user-defined scheduler examples. We first show that a user-defined scheduler can be built with two kinds of statements : high-level and low-level statements. 
Second, we present how to add new task parameters with User's defined task parameters.","title":"Examples of user-defined schedulers."},{"location":"pages/user_defined/#low-level-statements-versus-high-level-statements","text":"Now let's see some very simple user-defined schedulers. The most simple user-defined scheduler can be defined like below : election_section: return min_to_index(tasks.period); end section; Figure 6.2 A simple Rate Monotonic scheduler This first example shows you how to give the processor to the task with the smallest period. This scheduler is equivalent to the Rate monotonic implemented into Cheddar. tasks.period is a predefined variable initialized at task definition time by the user. To implement a Rate Monotonic scheduler, no dynamic priorities are computed and no variable is necessary. Then, the scheduler designer does not have to redefine the start and priority sections. The only section which is defined is the election one. The election section contains an unique return statement to inform the scheduling simulator engine which task should be run for the next unit of time. The return statement uses the high level min_to_index operator. This operator scans the task array to find the ready task with the minimum value for the variable tasks.period . In Cheddar, the scheduler designer can use two kinds of statements : high-level and low-level statements. High level statements like min_to_index , hides the data type organization of the scheduling simulator engine. For example, the scheduler designer do not need to give statement into its user-defined scheduler to scan manually the task array. Writing a scheduler with high-level statements is then easy work. On the contrary, low-level statements assume that the user has a deeper idea of the design of the scheduling engine simulator. By the way, these statements are sometimes necessary when the scheduler designer wants to code a too much specific scheduler. Now let's see how to define an EDF like scheduler : start_section: dynamic_priority : array (tasks_range) of integer; end section; priority_section: dynamic_priority := tasks.start_time + tasks.deadline + ((tasks.activation_number-1)\\*tasks.period); end section; election_section: return min_to_index(dynamic_priority); end section; Figure 6.3 An EDF like scheduler using vectorial operators EDF is a dynamic scheduler which computes a dynamic priority for each task. This dynamic priority is in fact a deadline. EDF just gives the processor to the task with the shortest deadline. In our example, this deadline is stored in a variable called dynamic_deadline . Since we need one value per task, the type of this variable is integer array . With this example the priority_section is not empty any more and contains (lines 5 to 7) the necessary code to compute EDF dynamic priorities. You should notice that the code in line 6/7 is in fact a vectorial operation : the arithmetic operation to compute the deadline is done for each item of the table dynamic_priority ranging from 1 to nb_tasks ( nb_tasks is a static predefined variable initialized by the number of tasks in the current processor). To compute the dynamic priorities of our example, we used many predefined variables : tasks.deadline, tasks.start_time and tasks.period : they are the deadline, start time and period values given by the user at task definition time (in the window Edit/Update tasks). tasks.activation_number : it's a variable updated by the simulation engine. 
The simulator increments this variable each time a periodic or a Poisson process task starts a new activation. For instance, if tasks.activation_number(i) is equal to 3, it means that the task i has started its 3rd activation. You can find in VI.5 a list of all predefined variables and all available statements you can use to build your user-defined scheduler. The example of Figure 6.3 is built with vectorial operators: each arithmetic operation is done for all the tasks of the system. The scheduler designer does not need to take care of the task array and just gives the rules to compute the EDF dynamic deadlines. Like max_to_index/min_to_index , these statements are high-level ones because they do not require direct access to the data type organization of the Cheddar scheduling engine (mainly the task arrays). Now, let's see a third example: start_section: to_run : integer; current_priority : integer; end section; priority_section: current_priority:=0; for i in tasks_range loop if (tasks.ready(i) = true) and (tasks.priority(i)>current_priority) then to_run:=i; current_priority:=tasks.priority(i); end if; end loop; end section; election_section: return to_run; end section; Figure 6.4 Building a user-defined scheduler with low-level statements This scheduler looks for the highest priority ready task of a processor and is fully equivalent to the scheduler described by: election_section: return max_to_index(tasks.priority); end section; Figure 6.5 An HPF scheduler built with high-level statements However, in the example of Figure 6.4, the code itself scans the task array to find a ready task to run. To achieve this, the example of Figure 6.4 is built with low-level statements: a for loop and an if statement. The priority_section is then composed of a loop that tests each task entry. This loop is made with a for statement, a loop that runs the inner statements for each task defined in the task array. Contrary to a high-level implementation, a scheduler made of low-level statements has to carry out more tests. For instance, the example of Figure 6.4 uses the ready dynamic variable to check whether tasks are ready at the time the scheduler is called. Low-level schedulers are thus more complicated and more difficult to test. The reader will find some tips to help test complicated user-defined schedulers in section 6.3 .","title":"Low-level statements versus High-level statements"},{"location":"pages/user_defined/#user-defined-scheduler-built-with-users-defined-task-parameters","text":"In the previous examples, the data used to build user-defined schedulers were either static variables initialized at task definition time, or dynamic variables, predefined or declared in the start section. A last type of data exists in Cheddar: user-defined task parameters. This kind of data is static and is defined at task definition time. User-defined task parameters allow the user to extend the set of static variables. Since they describe new task parameters, user-defined task parameters are of array type. They can be boolean, integer, double or string arrays. To define user-defined task parameters, you have to update the third part of the task entity. Use the submenu \"Edit/Entities/Software/Task\": Figure 6.6 Adding a user-defined task parameter The example above shows a system composed of 3 tasks (T1, T2 and T3) for which a criticality level is defined. Like usual task parameters, you have to give a value to a user-defined task parameter (ex: the criticality level of task T1 is 1), but you also have to set the type of the parameter ( integer in our example). Once the tasks are created, you can call the scheduling simulation services of Cheddar as usual. The next window is a snapshot of the resulting schedule of our example: the 3 tasks are scheduled according to their criticality level (T2 being the most critical task and T1 the least critical). Figure 6.7 Scheduling according to a criticality level
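The user-defined scheduler that produces the schedule of Figure 6.7 is not reproduced in this guide. Assuming, for illustration only, that the parameter added in Figure 6.6 is declared under the name criticity (it is then visible to the scheduler code as tasks.criticity ) and that a larger value means a more critical task, a minimal sketch of such a scheduler could be: election_section: return max_to_index(tasks.criticity); end section; As in the Rate Monotonic example of Figure 6.2, a single high-level election statement is enough here, because the election is driven by a static per-task value.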
To conclude this chapter, let's have a look at a more complex example of user-defined scheduler which summarises all the features presented before. This example is an ARINC 653 scheduler (see [ARI 97]). An ARINC 653 system is composed of several partitions. A partition is a unit of software and is itself composed of processes and memory spaces. A processor can host several partitions, so that two levels of scheduling exist in an ARINC 653 system: partition scheduling and process scheduling. Process scheduling. In a partition, processes are scheduled according to their fixed priority. The scheduler is preemptive and always gives the processor to the ready task of the partition with the highest fixed priority. When several tasks of a partition have the same priority level, the oldest one is elected. Partition scheduling. Partitions share the processor in a predefined way. On each processor, partitions are activated according to an activation table. This table is built at design time and defines a cycle of partition scheduling. The table describes, for each partition, when it has to be activated and how long it runs for each of its activations. Figure 6.8 An example of ARINC 653 scheduling Figure 6.8 displays an example of ARINC 653 scheduling (see the XML project file project_examples/arinc653.xml). The studied system is made of 3 tasks hosted by one processor. The processor owns 2 partitions: partition P0 and partition P1. Task T1 runs in partition P0 and the two other tasks run in partition P1. Each task has a fixed priority level: the T1 priority is 1, the T2 priority is 5 and the T3 priority is 4. The cyclic partition scheduling is defined so that P0 runs before P1. In each cycle, P0 runs during 2 units of time and P1 runs during 4 units of time. The user-defined scheduler source code used to compute the scheduling displayed in Figure 6.8 is given below: start_section: partition_duration : array (tasks_range) of integer; dynamic_priority : array (tasks_range) of integer; number_of_partition : integer :=2; current_partition : integer :=0; time_partition : integer :=0; i : integer; partition_duration(0):=2; partition_duration(1):=4; time_partition:=partition_duration(current_partition); end section; priority_section: if time_partition=0 then current_partition:=(current_partition+1) mod number_of_partition; time_partition:=partition_duration(current_partition); end if; for i in tasks_range loop if tasks.task_partition(i)=current_partition then dynamic_priority(i):=tasks.priority(i); else dynamic_priority(i):=0; tasks.ready(i):=false; end if; end loop; time_partition:=time_partition-1; end section; election_section: return max_to_index(dynamic_priority); end section; Figure 6.9 Process and partition scheduling in an ARINC 653 system In this code, tasks.task_partition is a user-defined task parameter: it stores the number of the partition hosting the associated task. The variable partition_duration stores the cyclic partition activation table.
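This scheduler is easy to adapt to other partition configurations. For instance, assuming a third partition P2 that should run during 3 units of time per cycle (an illustrative figure, not taken from the example project), only the start_section has to change, while the priority and election sections stay exactly as above: start_section: partition_duration : array (tasks_range) of integer; dynamic_priority : array (tasks_range) of integer; number_of_partition : integer :=3; current_partition : integer :=0; time_partition : integer :=0; i : integer; partition_duration(0):=2; partition_duration(1):=4; partition_duration(2):=3; time_partition:=partition_duration(current_partition); end section; Tasks assigned to the new partition would then simply carry the value 2 in their tasks.task_partition user-defined parameter.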
","title":"User-defined scheduler built with User's defined Task Parameters"},{"location":"pages/user_defined/#scheduling-with-specific-task-models","text":"In the same way as you can define specific schedulers, you can also define specific task activation patterns. By default, 3 kinds of task activation patterns are defined in Cheddar: Periodic task: a fixed amount of time exists between two successive task activations. Aperiodic task: the task is activated only once, at a given time. Poisson process task: tasks are activated several times and the delay between two successive activations is random. The static variable period is, in this case, the average time between two successive activations. The delay between activations is generated according to a Poisson process. If the application you want to study cannot be modeled with the 3 kinds of activation rules above, a possible solution is to express your own task activation pattern with a user-defined scheduler. Task activation patterns are described in .sc files, in a specific section called task_activation_section . In this section, you can define named activation rules with set statements. A set statement simply links a name/identifier (the left part of the set statement) to an expression (the right part of the set statement). The expression gives the amount of time the scheduling simulator engine has to wait between two activations of a given task. start_section: gen1 : random; gen2 : random; exponential(gen1, 200); uniform(gen2, 0, 100); end section; election_section: return max_to_index(tasks.priority); end section; task_activation_section: set activation_rule1 10; set activation_rule2 2\\*tasks.capacity; set activation_rule3 gen1\\*20; set activation_rule4 gen2; end section; Figure 6.10 Defining new task activation patterns: how to run simulations with specific task models The example of Figure 6.10 describes a Highest Priority First scheduler which hosts tasks activated with different patterns. Each pattern is described by a set statement: The pattern activation_rule1 describes periodic tasks with a period equal to 10. The pattern activation_rule2 describes periodic tasks with a period equal to twice their capacity. The pattern activation_rule3 describes randomly activated tasks. Two successive activations are delayed by an amount of time which is randomly computed. Delays are computed according to an exponential random distribution with a mean value of 400 . 400 is then the average period value of the tasks. The seed used during random delay generation depends on the scheduling options set at simulation time (see section I.3 ): the user can choose to associate one seed per task or one seed for all the tasks. Seeds can be initialized in a predictable way or in an unpredictable way. In the case of a predictable seed, the random generator is initialized with the seed value given at task definition time or in the scheduling option window. In the case of an unpredictable seed, the seed is initialized with \"gettimeofday\" at simulation time. The pattern activation_rule4 describes randomly activated tasks. Two successive activations are delayed by an amount of time that is randomly computed. Delays are computed according to a uniform random distribution with a mean value of 50 . At each task activation, the period can take a value between 0 and 100. The seed used during random delay generation is managed in the same way as for activation_rule3.
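As an additional illustration (not part of the project file above), a sporadic-like pattern, in which two successive activations are separated by at least the task period plus a small random extra delay, could be sketched as follows; the generator gen3 , the bound of 5 units and the rule name sporadic_rule are purely illustrative: start_section: gen3 : random; uniform(gen3, 0, 5); end section; election_section: return max_to_index(tasks.priority); end section; task_activation_section: set sporadic_rule tasks.period + gen3; end section; As for the other rules, sporadic_rule then has to be associated with the tasks that use it, as explained below.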
Once task activation rules are defined, the activation rule names (ex: activation_rule1) have to be associated with actual tasks. The picture below shows an \"Edit/Entities/Software/Task\" window: Figure 6.11 Assigning activation rules to tasks In this example, the task activation rule activation_rule1 is associated with task T1, the task activation rule activation_rule2 is associated with task T2 and the task activation rule activation_rule3 is associated with task T3.","title":"Scheduling with specific task models."},{"location":"pages/user_defined/#running-a-simulation-with-a-user-defined-scheduler","text":"Let's see how to run a simulation with one or several user-defined schedulers. First, you have to add a scheduler by selecting the submenu \"Edit/Entities/Hardware/Core\". The following window is then launched: Figure 6.13 Defining a core with a user-defined scheduler To add a user-defined scheduler to a Cheddar project, select the corresponding item of the combo box and give a name to your scheduler. You should then provide the code of your user-defined scheduler. This operation can be done by pushing the \"Read\" button. In this case, the following window is spawned and you should give the name of the file containing the code of your user-defined scheduler: Figure 6.14 Selecting the .sc file which contains the user-defined scheduler By convention, files that contain user-defined scheduler code should have the .sc suffix. For example, the file rm.sc of our example must at least contain an election section and, of course, can also contain a start and a priority section. When the processor is defined, you have to add tasks to it. To do so, select the submenu \"Edit/Entities/Software/Task\" as in section I . Just place the tasks on the previously defined processor. Finally, you can run scheduling simulations as in the usual case. Since a user-defined scheduler is also a piece of code, you sometimes need to debug it. To do so, you can use the following tips: First, a special instruction can be used to display the value of a variable on the screen: the put statement. For instance, running the following user-defined code will display the value of the dynamic variable to_run each time the scheduler is called: \\--!TRACE start_section: to_run : integer; current_priority : integer; end section; priority_section: current_priority:=0; for i in tasks_range loop if (tasks.ready(i)=true) and (tasks.priority(i)>current_priority) then to_run:=i; put(to_run); current_priority:=tasks.priority(i); end if; end loop; end section; election_section: return to_run; end section; Figure 6.15 Using the put statement A second tip can help you check that the syntax of your user-defined scheduler is correct. In any .sc file, you can add the line --!TRACE anywhere. If you add this line, the parser will give extra information during the syntax analysis of your user-defined scheduler. This is useful if you want to test a .sc file before using it in a Cheddar project file.
You can also test it with sc , a program designed to read, parse and check .sc files.","title":"Running a simulation with a user-defined scheduler."},{"location":"pages/user_defined/#looking-for-user-defined-properties-during-a-scheduling-simulation","text":"start_section: i : integer; nb_T2 : integer; nb_T1 : integer; bound_on_jitter : integer; max_delay : integer; min_delay : integer; tmp : integer; T1_end_time : array (time_units_range) of integer; T2_end_time : array (time_units_range) of integer; min_delay:=integer'last; max_delay:=integer'first; i:=0; nb_T1:=0; nb_T2:=0; end section; gather_event_analyzer_section: if (events.type = \"end_of_task_capacity\") then if (events.task_name = \"T1\") then T1_end_time(nb_T1):=events.time; nb_T1:=nb_T1+1; end if; if (events.task_name = \"T2\") then T2_end_time(nb_T2):=events.time; nb_T2:=nb_T2+1; end if; end if; end section; display_event_analyzer_section: while (i < nb_T1) and (i < nb_T2) loop tmp:=abs(T1_end_time(i)-T2_end_time(i)); min_delay:=min(tmp, min_delay); max_delay:=max(tmp, max_delay); i:=i+1; end loop; bound_on_jitter:=abs(max_delay-min_delay); put(min_delay); put(max_delay); put(bound_on_jitter); end section; Figure 6.16 Example of a user-defined event analyzer: computing a task termination jitter bound In the same way that users can define new schedulers, Cheddar makes it possible to create user-defined event analyzers. These event analyzers are also written in an Ada-like language and interpreted at simulation time. The event table produced by the simulator records events related to task execution and to the objects that tasks access. Examples of events stored in this table are: Events produced when a task becomes ready to run (event task_activation), when a task starts or ends running its capacity (events start_of_task_capacity and end_of_task_capacity), Events produced when a task reads or writes data from/to a buffer (events write_to_buffer and read_from_buffer), Events produced when a task sends or receives a message (events send_message and receive_message), Events produced when a task starts waiting for a busy resource (event wait_for_resource), allocates or releases a given resource (events allocate_resource and release_resource). Each of these events is stored with the time it occurs and with information related to the event itself (e.g. name of the resource, of the buffer, of the message, of the task ...). The event table is scanned sequentially by event analyzers. User-defined event analyzers are composed of several sections: a start section, a data gathering section and an analysis and display section. As for user-defined schedulers, the start section is devoted to variable declarations and initializations. The gathering section contains code which is called for each item of the event table. Most of the time, this section contains statements which extract useful data from the event table and store them for the event analyzer. Finally, the display section performs analysis on the data previously saved by the gathering section and displays the results in the main window of the Cheddar editor. Figure 6.16 gives an example of a user-defined event analyzer: from an ARINC 653 scheduling simulation, it computes the minimum, the maximum and the jitter of the delay between the end times of two tasks owned by different partitions (tasks T1_P0 and T2_P1; see Figure 6.9).
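Following the same structure, simpler analyzers can also be written. The sketch below, which is not part of the Cheddar distribution and is given for illustration only, computes the largest observed response time of a task named T1, under the assumption that each activation of T1 completes before its next activation starts: start_section: last_activation : integer; worst_response : integer; tmp : integer; last_activation:=0; worst_response:=0; end section; gather_event_analyzer_section: if (events.type = \"task_activation\") then if (events.task_name = \"T1\") then last_activation:=events.time; end if; end if; if (events.type = \"end_of_task_capacity\") then if (events.task_name = \"T1\") then tmp:=events.time-last_activation; worst_response:=max(tmp, worst_response); end if; end if; end section; display_event_analyzer_section: put(worst_response); end section; The gathering section records the latest activation time of T1 and updates the maximum observed response time each time T1 completes its capacity; the display section simply prints the result.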
","title":"Looking for user-defined properties during a scheduling simulation."},{"location":"pages/user_defined/#list-of-predefined-variables-and-available-statements","text":"The tables below list all predefined variables that are available when you write user-defined code. The columns, from left to right, are: Name : variable name Type : variable type Update : whether the variable is updated by the simulator engine Changeable : whether the variable can be changed by user code Meaning : explanation of the variable Note: use the scroll bar at the bottom of the table to see the entire content Name Type Update Changeable Meaning Variables related to processors nb_processors integer no no Gives the number of processors of the currently analyzed system. processors.speed integer yes yes Gives the speed of the processor hosting the scheduler. Variables related to tasks tasks.period array (tasks_range) of integer yes yes Stores the value of the parameter given at task definition time. For the meaning of this variable, see section I . tasks.name array (tasks_range) of string no no Name of the task tasks.type array (tasks_range) of string no no Type of the task (periodic, aperiodic, sporadic, poisson_process or user_defined) tasks.processor_name array (tasks_range) of string no no Stores the name of the processor hosting the corresponding task. tasks.blocking_time array (tasks_range) of integer no yes Stores the sum of the bounded times the task has to wait on shared resource accesses. tasks.deadline array (tasks_range) of integer yes yes Stores the value of the parameter given at task definition time. For the meaning of this variable, see section I . tasks.capacity array (tasks_range) of integer yes yes Stores the value of the parameter given at task definition time. For the meaning of this variable, see section I . tasks.start_time array (tasks_range) of integer yes yes Stores the value of the parameter given at task definition time. For the meaning of this variable, see section I . tasks.used_cpu array (tasks_range) of integer yes no Stores the amount of processor time used by the associated task. tasks.activation_number array (tasks_range) of integer yes no Stores the activation number of the associated task. Of course, using this variable is meaningless for aperiodic tasks. tasks.jitter array (tasks_range) of integer yes yes Stores the value of the parameter given at task definition time. For the meaning of this variable, see section I . tasks.priority array (tasks_range) of integer yes yes Stores the value of the parameter given at task definition time. For the meaning of this variable, see section I . tasks.used_capacity array (tasks_range) of integer yes no This variable stores the amount of time units the task has consumed since its last activation. When tasks.used_capacity reaches tasks.capacity, the task stops running and waits for its next activation tasks.rest_of_capacity array (tasks_range) of integer yes no This variable is initialized to the task capacity each time the task starts a new activation. If rest_of_capacity is equal to zero, the task has completed its current activation and is then blocked until its next activation. tasks.suspended array (tasks_range) of integer yes yes This variable can be used by scheduler programmers to block a task, i.e. remove it from the set of schedulable tasks.
nb_tasks integer no no Gives the number of tasks of the currently analyzed system. tasks.ready array (tasks_range) of boolean yes no Stores the state of the task: this boolean is true if the task is ready; it means the task has some capacity left to run, does not wait for a shared resource, does not wait for a delay, does not wait for an offset constraint and does not wait for a precedence constraint. Variables related to messages nb_messages integer no no Gives the number of messages of the currently analyzed system. messages.name array (messages_range) of string no no Gives the name of each message. messages.jitter array (messages_range) of integer no no Jitter on the time the periodic message becomes ready to be sent. messages.period array (messages_range) of integer no no Gives the sending period if the message is a periodic one. messages.delay array (messages_range) of integer no no Time needed by a message to go from the sender node to the receiver node. messages.deadline array (messages_range) of integer no no Stores the deadline if the message has to meet one. messages.size array (messages_range) of integer no no Stores the size of the message. messages.users.time array (messages_range) of integer no no Stores the time when the task should send or receive the message. messages.users.task_name array (messages_range) of string no no Stores the name of the task that sends/receives the message. messages.users.type array (messages_range) of string no no Stores sender if the corresponding task sends the message or stores receiver if the task receives it. Variables related to buffers nb_buffers integer no no Gives the number of buffers of the currently analyzed system. buffers.max_size array (buffers_range) of integer no no The maximum size of a given buffer. buffers.processor_name array (buffers_range) of string no no Gives the name of the processor that owns the buffer. buffers.name array (buffers_range) of string no no Unique name of the buffer. buffers.users.time array (buffers_range) of integer no no Stores the time a given task consumes/produces a message from/into a buffer. buffers.users.size array (buffers_range) of integer no no Stores the size of the message produced/consumed into/from a buffer by a given task. buffers.users.task_name array (buffers_range) of string no no Stores the name of the task that produces/consumes messages into/from a given buffer. buffers.users.type array (buffers_range) of string no no Stores consumer if the corresponding task consumes messages from the buffer or stores producer if the task produces messages. Variables related to shared resources nb_resources integer no no Gives the number of shared resources of the currently analyzed system. resources.initial_state array (resources_range) of integer no no Stores the state of the resource when the simulation is started. If this integer is equal to or less than zero, the first allocation request will block the requesting task. resources.current_state array (resources_range) of integer no no Stores the current state of the resource. If this integer is equal to or less than zero, the next allocation request will block the requesting task. After an allocation of the resource, this counter is decremented. After the task has released the resource, this counter is incremented. resources.processor_name array (resources_range) of string no no Stores the name of the processor hosting the shared resource. resources.protocol array (resources_range) of string no no Contains the protocol name used to manage the resource allocation requests.
Could be either no_protocol, priority_ceiling_protocol or priority_inheritance_protocol resources.name array (resources_range) of string no no Unique name of the shared resource resources.users.task_name array (resources_range) of string no no Gives the name of a task that can access the shared resource. resources.users.start_time array (resources_range) of integer no no Gives the time the task starts accessing the shared resource during its capacity. resources.users.end_time array (resources_range) of integer no no Gives the time the task ends accessing the shared resource during its capacity. Variables related to the scheduling simulation previously_elected integer yes no At the time the user-defined scheduler runs, this variable stores the TCB index of the task elected at the previous simulation time. simulation_time integer yes no Stores the current simulation time. Variables related to the event table events.type string no no Type of the event at the current index of the table. Can be task_activation , running_task , write_to_buffer , read_from_buffer , send_message , receive_message , start_of_task_capacity , end_of_task_capacity , allocate_resource , release_resource , wait_for_resource . events.time integer no no The time when the event occurs. events.processor_name string no no The name of the processor hosting the task/resource/buffer related to the current event. events.task_name string no no The task name related to the current event. events.message_name string no no The message name related to the current event. events.buffer_name string no no The buffer name related to the current event. events.resource_name string no no The resource name related to the current event. The BNF syntax of a .sc file is given below: entry := start_rule priority_rule election_rule task_activation_rule gather_event_analyzer display_event_analyzer start_rule := \"start_section:\" statements priority_rule := \"priority_section:\" statements election_rule := \"election_section:\" statements task_activation_rule := \"task_activation_section:\" statements gather_event_analyzer := \"gather_event_analyzer_section:\" statements display_event_analyzer := \"display_event_analyzer_section:\" statements statements := statement {statement} statement := \"put\" \"(\" identifier \\[, integer\\] \\[, integer\\]\")\" \";\" | identifier \":\" data_type \\[ \":=\" expression \\] \";\" | identifier \":=\" expression \";\" | \"if\" expression \"then\" statements \\[ \"else\" statements \\] \"end\" \"if\" \";\" | \"return\" expr \";\" | \"for\" identifier \"in\" ranges \"loop\" statements \"end\" \"loop\" \";\" | \"while\" expression \"loop\" statements \"end\" \"loop\" \";\" | \"set\" identifier expression \";\" | \"uniform\" \"(\" identifier \",\" expression \",\" expression \")\" \";\" | \"exponential\" \"(\" identifier \",\" expression \")\" \";\" data_type := scalar_data_type | \"array\" \"(\" ranges \")\" \"of\" scalar_data_type ranges := \"tasks_range\" | \"buffers_range\" | \"messages_range\" | \"resources_range\" | \"processors_range\" | \"time_units_range\" scalar_data_type := \"double\" | \"integer\" | \"boolean\" | \"string\" | \"random\" operator := \"and\" | \"or\" | \"mod\" | \"<\" | \">\" | \"<=\" | \">=\" | \"/=\" | \"=\" | \"+\" | \"/\" | \"-\" | \"\\*\" | \"\\*\\*\" expression := expression operator expression | \"(\" expression \")\" | \"not\" expression | \"-\" expression | \"max_to_index\" \"(\" expression \")\" | \"min_to_index\" \"(\" expression \")\" | \"max\" \"(\" expression \",\" expression \")\" | \"min\"
\"(\" expression \",\" expression \")\" | \"lcm\" \"(\" expression \",\" expression \")\" | \"abs\" \"(\" expression \")\" | identifier \"\\[\" expression \"\\]\" | identifier | integer_value | double_value | boolean_value Notes on the BNF of .sc file syntax : entry is the entry point of the grammar. The data_type rule describes all data types available in a .sc file The operator rule lists all binary operators. The expression rule gives all possible expressions that you can use to define your scheduler. The statement rule contains all statements that can be used in a .sc file. identifier is a string constant. integer_value is a integer constant. double_value is a double constant. boolean_value is a boolean constant. Two kinds of statements exist to build your user-defined scheduler : low-level and high-level statements. high-level statements operate on all task information. low-level statements operate only on one information of a task at a time. all these statements work as follows : The if statement : works like in Ada or most of programming languages : run the else or the then statement branch according to the value of the if expression . The while statement : works like in Ada or most of programming languages : run the statements enclosed in the loop/end loop block until the while condition becomes false. The for statement : it's an Ada loop with a predefined iterator index. With a for statement, the statements enclosed in the loop are run for each task defined in the TCB table. At each iteration, the variable defined in the for statement is incremented. Then, in the case of task loop for instance (use keyword tasks_range in this case), its value ranges from 1 to nb_tasks ( nb_tasks is a predefined static variable initiliazed to the number of tasks hosted by the currently analyzed processor). The return statement. You can use a return statement in two cases : With any argument in any section except in the election_section . In this case, the return statement just end the code of the section. With a integer argument and only in the election_section . Then, the return statement give the task number to be run. When the return statement returns the -1 value, it means that no task has to be run at the nuext unit of time. The put(p,a,b) statement : displays the value of the variable p on the screen. This statement is useful to debug your user-defined scheduler. If a and b are not equal to zero and if p is an array type, put(p,a,b) displays entries of the table with index between a and b . If a and b are equal to zero and if p is an array, all entries of the array are displayed. The delete_precedence \"a/b\" statement : remove the dependency between task a and b ( a is the source task while b is the destination/sink task). The add_precedence \"a/b\" statement : add a dependency between task a and b ( a is the source task while b is the destination/sink task). The exponential(a,b) statement : intializes the random generator a to generate exponential random values with an average value of b . The uniform(a,b,c) statement : intializes the random generator a to generate uniformly random values between b and c . The set statement : description of new task activation model : assign an expression which shows how to compute task wake up time with an identifier. The predefined operators and subprograms are the following: abs(a) : returns the unsigned value of a . lcm(a,b) : returns the last common multiplier of a and b . max(a,b) : returns the maximum value between a and b . 
min(a,b) : returns the minimum value between a and b . max_to_index(v) : finds the task in the TCB table with the maximum value of v and then returns its position in the TCB table. Only ready tasks are considered by this operator. min_to_index(v) : finds the task in the TCB table with the minimum value of v and then returns its position in the TCB table. Only ready tasks are considered by this operator. a mod b : computes the modulo of a by b (remainder of the integer division). to_integer(a) : casts a from double to integer. a must be a double. to_double(a) : casts a from integer to double. a must be an integer. integer'last : returns the largest value of the integer type. integer'first : returns the smallest value of the integer type. double'last : returns the largest value of the double type. double'first : returns the smallest value of the double type. get_task_index(a) : returns the index in the task table of the task named a . get_buffer_index(a) : returns the index in the buffer table of the buffer named a . get_resource_index(a) : returns the index in the resource table of the resource named a . get_message_index(a) : returns the index in the message table of the message named a .","title":"List of predefined variables and available statements."}]}