A Sensitivity Analysis for Mixed Criticality: Trading Criticality with Computational Resource

Mixing workloads with multiple criticality levels raises challenges in both timing analysis and schedulability analysis. The timing models have to characterize the different behaviors that real-time tasks can exhibit under the various criticality modes. The schedulability analysis, in turn, has to combine every task and task interaction, providing several guarantees depending on the criticality level demanded at runtime. In this work, we first propose representations that model every possible system criticality mode as a combination of task criticality modes. A set of bounding functions is obtained, one bound for each mode combination, thus corresponding to a system criticality level. Secondly, we develop a schedulability analysis that applies such sets and derives schedulability conditions with mixed criticalities. The tasks are scheduled with fixed priority and earliest deadline first, and various levels of schedulability are defined from the mode combinations. Finally, we make use of sensitivity analysis to evaluate the impact that multi-mode task behaviors have on schedulability. Trade-offs between schedulability, criticality levels, and resource availability are explored. A mixed-criticality real-time system case study validates the proposed framework.


I. INTRODUCTION
An increasingly important trend in developing real-time systems is the integration of applications with different levels of criticality. The criticality designates the level of assurance needed for a system element against failure, e.g., in the standards ISO 26262, DO-178C, and IEC 61508.
A Mixed Criticality (MC) real-time system is one that has two or more distinct criticality levels, e.g., safety critical, mission critical, and/or low critical. Such systems are defined to execute in a number of criticality modes, each mode specifying execution conditions and system criticalities. All the possible modes have to be characterized and analyzed in order to guarantee the predictability of the system. Please refer to [15] for a good overview of MC problems in real-time systems.
Safety critical applications have to account for the worst-case behaviors that can possibly happen. The 'best' modeling of task parameters has to assure coverage of any execution condition, including the worst cases [30], [18]. Mission critical or low critical applications rely on less constrained/demanding models, and the guarantees on them are not as strict as those for safety critical applications [30], [18].
With respect to schedulability, the MC problem consists of multiple correctness criteria: timing constraints of safety critical tasks are guaranteed first, and then less critical tasks are eventually accommodated in the scheduling. Today's research on MC schedulability relies on mode changes in order to provide different assurance levels to the possible execution conditions [19]. Mode changes can be triggered by execution length, processor speed [5], [22], [21], [4], task release patterns [3], and combinations of those [23]. The resource is utilized such that all tasks are allowed to execute in a fairer manner under low-criticality modes, while priority is given to more critical tasks in a dedicated manner upon a mode switch. Such a mode-based correctness definition is welcomed by industry, yet provides many research challenges [2].
We believe that some of the challenges to MC modeling and MC schedulability analysis can be addressed with sensitivity analysis. Sensitivity analysis applies to task models, and it studies the impact that task parameters have on system schedulability [27], [12]. The goal of this work is to effectively make use of sensitivity analysis for MC problems in order to study the costs of guaranteeing certain criticality levels at runtime.

Contribution: With this paper, we apply sensitivity analysis to models and schedulability analysis with MC. The MC task modeling is laid out with multiple bounding functions, such as resource bound functions and workloads. Those functions are defined in order to bound task behaviors under the possible execution modes that can happen at runtime. Each bound represents a criticality level as well as an execution mode for the task set. The set of bounding functions is applied to develop the schedulability conditions with MC. Different levels of schedulability are defined from the possible criticality levels for fixed priority and earliest deadline first scheduling. Finally, the sensitivity analysis is applied to evaluate the impact that MC task behaviors have on schedulability. Trade-offs between schedulability, criticality levels, and resource availability are explored.

Organization of the paper: Section II presents real-time models that make use of bounding functions to characterize both resource requests and resource provisioning. Section III illustrates how we instantiate the real-time modeling into MC modeling. Mode combinations are defined according to the scheduling policies applied, and result in multiple possible bounds for the whole application. Section IV provides a detailed notion of scheduling with MC, and the sensitivity analysis we develop to study the effects of criticality levels on schedulability and resource usage.
Section V validates the modeling and analysis framework we propose with a test case of a realistic real-time application. Section VI concludes the work and points out future research directions.

A. Related work
MC systems are typically defined to execute in a number of criticality modes. According to Vestal's definition [30], a mode switch can be defined as follows: if any task attempts to execute for a longer time or more frequently, as in the case of faults, then a mode change occurs, imposing high-criticality behavior on the tasks and the system [7], [16]. Under the classic MC model, all low-criticality tasks could be dropped from the system upon a mode switch, which may be the result of a single high-criticality task overrunning by 1 ms, or a 1 ms speed drop of one of the many processors. Obviously, huge pessimism is involved in such modeling, even with the recent developments in providing multiple assurance levels to the possible execution conditions [19], [10], [8], [9]. We hereby apply real-time calculus basics [29] to derive bounds on resource request and resource provisioning in the interval domain. These models have to be adapted to the MC problem, with multiple bounding curves and different execution conditions. Resource usage can be further improved by allowing MC applications to share computation without jeopardizing high-criticality tasks.
The industry perspective on the MC problem focuses more on partitioning and separating applications by their criticality level. Safety critical applications would be temporally and/or spatially separated from mission critical applications [18]. Both models and schedulability are guaranteed within each partition, which allows for compositional approaches, especially handy for application qualification. Our work aims at studying all the mode configurations which are possible at runtime, with and without partitioning. We do not propose an alternative MC scheduling algorithm. It seems to us that this could be a way to close the gap between the academic and the industrial perspectives on MC problems.
Traditional sensitivity analysis applies to real-time systems in order to study the impact that task parameters have on schedulability. It translates into abstract representations, such as the (α, ∆)-space [27] and the C-space [12], [25], [20], where parameter values are mapped into schedulability conditions. Effective sensitivity analysis has yet to be built for MC problems. We hereby focus on the (α, ∆)-space to model schedulability conditions with MC, and we apply sensitivity analysis to it. In particular, the sensitivity analysis is hereby used to evaluate the different possible mode configurations, and to quantify the cost of changing the computational resource or the schedulability conditions.

II. COMPUTATIONAL MODELS

A real-time task τ_i consists of a sequence of recurring jobs, each to be executed before a given deadline. In the periodic case, it is τ_i = (T_i, D_i, C_i), with T_i as the period, D_i as the deadline (it is assumed D_i ≤ T_i), and C_i as the worst-case execution time (WCET). Tasks are grouped into task sets Γ = {τ_1, . . . , τ_n}, equivalently real-time applications. Any executing task can be seen as a trace of events [13] with the cumulative function R(t) defining the amount of computational resource requested by the task within [0, t]. For τ_i activated at time t = 0, its resource bound function (rbf) upper bounds the resource request: rbf_i(t) = max{0, ⌈t/T_i⌉} · C_i, for all t. An example of rbf is represented in Figure 1(a), where the rbf is linked to task arrivals and immediate executions.
The computational resource is provided by reservation mechanisms, also known as servers. Although with different peculiarities, many servers can be modeled as periodic servers, which guarantee to provide Q (server capacity) units of time/resource in each period P (server period) [1]. With S(t) the amount of resource made available up to time t by server S, the resource provisioning can be lower bounded with the supply bound function sbf: sbf_S(t) = min_{x ≥ 0} {S(x + t) − S(x)} is the minimum amount of time (computational resource) provided by S in any interval of length t [26], [28]. The bounded-delay function lsbf(t) = max{0, α(t − ∆)} is the linear approximation that lower bounds sbf, where α is the resource provisioning rate, and ∆ def= inf{q | α(t − q) ≤ sbf(t) ∀t} is the longest interval with no resource provisioning [26], [28]. Figure 1(b) illustrates this and compares sbf with its linear approximation lsbf.
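As a minimal sketch of these bounds (not from the paper's artifacts), the periodic rbf and the bounded-delay lsbf can be computed in a few lines; the parameters α = Q/P and ∆ = 2(P − Q) are the standard periodic-server assumptions:

```python
import math

def rbf(t, T, C):
    """Resource bound function of a periodic task: max(0, ceil(t/T)) * C."""
    return max(0, math.ceil(t / T)) * C

def lsbf(t, Q, P):
    """Bounded-delay lower bound on a periodic server's supply:
    rate alpha = Q/P, longest interval with no supply Delta = 2*(P - Q)."""
    alpha = Q / P
    delta = 2 * (P - Q)
    return max(0.0, alpha * (t - delta))

# A task with period 10 and WCET 2 requests 6 units of resource in (0, 25].
print(rbf(25, T=10, C=2))   # -> 6
# A server with Q=3, P=5 supplies at least 6 units in any interval of length 14.
print(lsbf(14, Q=3, P=5))   # -> 6.0
```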
From lsbf, it is possible to define an (α, ∆)-space where to represent resource provisioning as well as resource requests [27]. An application Γ can be mapped into the (α, ∆)-space with its feasibility region Φ_Γ. Φ_Γ depends on the scheduling policy applied, and collects all the service supply pairs (α, ∆) that guarantee the schedulability of Γ.

A. Real-time schedule
Two common preemptive scheduling policies are the Fixed Priority (FP) and the Earliest Deadline First (EDF) [11], [6]. In this work we apply both.
The FP schedulability is guaranteed if each task in Γ, with static priority ordering, has enough resource to execute within its deadline. A task set Γ executing within a server S can be guaranteed under FP iff: ∀i ∃t ∈ SchedP_i : wbf_i(t) ≤ sbf_S(t). The tasks are ordered by priority, from higher to lower, where hp(i) = {τ_1, τ_2, . . . , τ_i} denotes the sub-set of all tasks with a priority higher than or equal to τ_i; SchedP_i defines the set of time instants where FP schedulability has to be verified [11], [14]. The level-i workload wbf_i is the resource request of τ_i including the contributions of all tasks with higher priority than τ_i: wbf_i(t) = Σ_{τ_j ∈ hp(i)} ⌈t/T_j⌉ · C_j. The feasibility region Φ_Γ in the (α, ∆)-space for Γ is defined from the FP schedulability condition and the definition of lsbf; thus, ∀i ∃t ∈ SchedP_i : wbf_i(t) ≤ lsbf(t).

The EDF schedulability is guaranteed if the computational resource that Γ requires to execute is less than or equal to the available computational resource. A task set Γ within a server S can be guaranteed iff: ∀t, dbf_Γ(t) ≤ sbf_S(t). In particular, the set of time instants where to check EDF schedulability can be reduced to the set D of deadlines within the task set hyperperiod [6]. The demand bound function dbf_i of τ_i is the resource requested by τ_i to fully execute by its deadline: dbf_i(t) = max{0, ⌊(t − D_i)/T_i⌋ + 1} · C_i. It is the minimum possible resource request needed to execute the task by its deadline. dbf_Γ is the resource demand of the whole task set Γ: dbf_Γ(t) = Σ_{τ_i ∈ Γ} dbf_i(t).
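The two conditions can be sketched as follows; the task tuples are hypothetical, sbf is passed as a function (a dedicated processor is sbf(t) = t), and SchedP_i is approximated with the deadlines and period multiples of the higher-priority tasks:

```python
import math

def wbf(i, t, tasks):
    """Level-i workload: requests of tau_i and all higher-priority tasks (indices 0..i)."""
    return sum(math.ceil(t / T) * C for (T, D, C) in tasks[: i + 1])

def dbf(t, tasks):
    """Task-set demand bound: jobs with both release and deadline inside [0, t]."""
    return sum(max(0, (t - D) // T + 1) * C for (T, D, C) in tasks)

def fp_schedulable(tasks, sbf):
    """FP test: for each i, some point t in SchedP_i satisfies wbf_i(t) <= sbf(t)."""
    for i, (Ti, Di, Ci) in enumerate(tasks):
        points = {Di} | {k * T for (T, D, C) in tasks[: i + 1]
                         for k in range(1, Di // T + 1)}
        if not any(wbf(i, t, tasks) <= sbf(t) for t in points):
            return False
    return True

def edf_schedulable(tasks, sbf):
    """EDF test: dbf(t) <= sbf(t) at every deadline within the hyperperiod."""
    hyper = math.lcm(*(T for (T, D, C) in tasks))
    deadlines = {k * T + D for (T, D, C) in tasks for k in range(hyper // T)}
    return all(dbf(t, tasks) <= sbf(t) for t in deadlines)

# Tasks (T, D, C) by decreasing priority, on a dedicated processor sbf(t) = t.
tasks = [(5, 5, 1), (10, 10, 2)]
print(fp_schedulable(tasks, lambda t: t))   # -> True
print(edf_schedulable(tasks, lambda t: t))  # -> True
```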

III. BOUNDING WITH MIXED CRITICALITY
In MC real-time systems, task parameters such as WCETs depend on criticality levels. Safety critical applications have to be assured against any possible execution condition, faults included. A way to do that is to consider WCETs large enough to account for such conditions. Instead, if the task is mission critical or non-critical, its WCET requirement would be smaller, as the task demands less in terms of resource and assurance [30], [24], [17]. We restrict our modeling and analysis to two criticality levels: the high criticality HI and the low criticality LO. Nonetheless, our reasoning generalizes to any number of criticality levels.
HI-criticality tasks, i.e. safety critical tasks, can have two execution modes: the HI-criticality mode, represented with C_i(HI), and the LO-criticality mode, represented with C_i(LO). C_i(HI) models the HI-criticality behavior of the task τ_i (most critical), thus the worst possible conditions it can suffer [24]. C_i(LO) models the LO-criticality conditions for τ_i. It does not assure against faults, at least not against all of them [24]: worst cases are not included. It has to be C_i(HI) ≥ C_i(LO) [30].
The model of a HI-criticality task is: τ_i = (T_i, D_i, {C_i(LO), C_i(HI)}, χ_i). (1) For it, there exist two resource bound functions rbf, depending on the criticality mode active: rbf^HI_{HI,i}(t) = max{0, ⌈t/T_i⌉} · C_i(HI), which models the resource request in HI-criticality mode, and rbf^LO_{HI,i}(t) = max{0, ⌈t/T_i⌉} · C_i(LO), which models the resource request in LO-criticality mode. χ_i indicates the task's actual (runtime) criticality mode, χ_i ∈ {HI, LO}.
LO-criticality tasks are tasks that can only execute in LO-criticality mode. C_i(LO) is sufficient to model the task behavior. The LO-criticality task model is: τ_i = (T_i, D_i, C_i(LO)), (2) with only the LO-criticality mode possible, and rbf_{LO,i}(t) = max{0, ⌈t/T_i⌉} · C_i(LO) models the resource request of the LO-criticality task τ_i.
The MC real-time application Γ is composed of a HI-criticality part Γ_HI, which includes all and only the HI-criticality tasks, and a LO-criticality part Γ_LO, which includes all and only the LO-criticality tasks; Γ = Γ_HI ∪ Γ_LO.
At runtime, there exist different possible combinations of tasks executing in their criticality modes. For example, there could exist combinations of only HI-criticality tasks executing in HI-criticality mode. There could also exist combinations where some HI-criticality tasks execute in HI-criticality mode, others execute in LO-criticality mode, and LO-criticality tasks execute as well. It is also possible to have all the HI-criticality tasks executing in LO-criticality mode together with some or all LO-criticality tasks. Each combination k is a schedule that can happen at runtime, and has an associated criticality level χ_k resulting from the combination of the criticality modes applied in the schedule/combination.
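The enumeration of such combinations can be sketched as follows; the task names are hypothetical, and each combination records which HI-criticality tasks are in HI-criticality mode and which LO-criticality tasks are admitted:

```python
from itertools import chain, combinations

def powerset(xs):
    """All subsets of xs, from the empty set to xs itself."""
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def mode_combinations(hi_tasks, lo_tasks):
    """Every runtime combination: a choice of HI tasks running in HI mode,
    paired with a choice of LO-criticality tasks admitted alongside."""
    combos = []
    for hi_in_hi in powerset(hi_tasks):       # HI tasks in HI-criticality mode
        for lo_in in powerset(lo_tasks):      # LO-criticality tasks admitted
            combos.append({"HI_mode": set(hi_in_hi), "LO_tasks": set(lo_in)})
    return combos

combos = mode_combinations(["t1", "t2"], ["t3"])
print(len(combos))  # -> 8: 2^2 HI-mode choices times 2^1 LO admissions
```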
The system criticality level χ describes the combination of tasks and task modes at runtime, χ ∈ {χ_1, χ_2, . . .}. The purpose of this work is to model all the possible combinations, and to apply them in the schedulability analysis.

A. Bounding resource request
From the MC task modeling, Equation (1) and Equation (2), we define multiple bounds to the task set resource request. They represent execution conditions as combinations of tasks and task modes that can happen at runtime. i) rbf^HI_HI is the resource request from all and only the HI-criticality tasks in HI-criticality mode; ii) rbf^{HI−LO} is the set of resource requests from combinations where some HI-criticality tasks execute in HI-criticality mode together with LO-criticality tasks, with k the index representing which LO-criticality tasks are added; iii) rbf^LO is the resource request from only LO-criticality tasks. The resource requests of all the combinations between tasks and task modes can be grouped into the set rbf. To each resource request there is an associated system criticality level χ_k, χ_k ∈ χ.

B. MC Bounding for FP and EDF
With the MC model there exists a set of level-i workload bounds, each obtained from a combination of HI- and LO-criticality tasks in their respective modes. Only the tasks with higher priority than τ_i are combined for the level-i workload, and χ^k_i is the criticality level associated with the level-i workload combination k.
The bounds in Equation (4) can be ordered increasingly, wbf_j ≤ wbf_{j+1}; the set χ_i from Equation (5) is ordered accordingly. The bounds in Equation (6) can be ordered increasingly, dbf_j ≤ dbf_{j+1}; the set χ from Equation (7) is ordered accordingly, such that χ_j is for dbf_j and χ_{j+1} is for dbf_{j+1}. Within the HI−LO case, envelope bounds can be defined such that: dbf*_{HI−LO,k} def= max_j {dbf_{HI−LO,j}}, with k the number of HI-criticality tasks in HI-criticality mode; dbf*^{HI−LO}_HI collects them all and can be applied in dbf instead of dbf^{HI−LO}_HI. This allows reducing the number of possible combinations and criticality levels, in turn reducing the complexity of the modeling. Figure 2 illustrates an example of some level-i bounds wbf_i ∈ wbf, while Figure 3 is an example of some demand bounds dbf ∈ dbf from different task mode combinations. To each bound there is an associated criticality level; only a few of the bounds are represented in the figures.
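The envelope bounding can be sketched as a pointwise maximum over all combinations with exactly k HI-criticality tasks in HI-criticality mode; the task parameters below are hypothetical:

```python
from itertools import combinations

def dbf_task(t, T, D, C):
    """Demand bound of one periodic task."""
    return max(0, (t - D) // T + 1) * C

def dbf_combo(t, hi_tasks, hi_subset, lo_tasks):
    """Demand of one combination: HI tasks in hi_subset use C(HI), the
    others C(LO); all the given LO-criticality tasks are included."""
    total = 0
    for idx, (T, D, c_lo, c_hi) in enumerate(hi_tasks):
        total += dbf_task(t, T, D, c_hi if idx in hi_subset else c_lo)
    for (T, D, c_lo) in lo_tasks:
        total += dbf_task(t, T, D, c_lo)
    return total

def dbf_star(t, k, hi_tasks, lo_tasks):
    """Envelope dbf*_{HI-LO,k}(t): pointwise max over all combinations with
    exactly k HI-criticality tasks in HI-criticality mode."""
    return max(dbf_combo(t, hi_tasks, set(s), lo_tasks)
               for s in combinations(range(len(hi_tasks)), k))

# Hypothetical tasks: HI tasks as (T, D, C(LO), C(HI)), LO tasks as (T, D, C(LO)).
hi = [(10, 10, 1, 3), (20, 20, 2, 5)]
lo = [(40, 40, 4)]
print(dbf_star(40, 1, hi, lo))  # -> 20
```

A scheduler designer checks the single envelope dbf* against sbf instead of every individual combination, which is exactly the complexity reduction the text describes.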

IV. SCHEDULING WITH MIXED CRITICALITY
We propose two schedulability analyses, based on FP and EDF, which apply the MC task models of Equation (1) and Equation (2). They are off-line analyses which account for all the criticality mode combinations that can happen at runtime, and they embed criticality levels into the schedulability conditions. These analyses focus on finding which criticality levels (criticality mode combinations) can be assured schedulable. They also allow evaluating the resource applied to execute Γ_HI, and thus the remaining resource left to execute Γ_LO without harming the execution of HI-criticality tasks.

A. FP and EDF scheduling with mixed criticality
To guarantee schedulability, the resource provisioning sbf is compared with the resource requests (workloads) in the case of FP, or with the resource demand in the case of EDF. Figure 2 and Figure 3 illustrate the comparison between bounds and some available sbf. The way to compare depends on the scheduling policy.
Theorem 1 (FP schedulability with MC): Consider a mixed-criticality task set Γ = {τ_1, τ_2, . . . , τ_n} of n tasks ordered by decreasing priority, i.e., τ_1 is assigned the highest priority whereas τ_n is assigned the lowest priority. Γ is composed of HI-criticality tasks Γ_HI defined as in Equation (1), and LO-criticality tasks Γ_LO defined as in Equation (2). ∀τ_i ∈ Γ, hp(i) = {τ_1, τ_2, . . . , τ_i} is the set of tasks with priority higher than or equal to τ_i; tasks in hp(i) belong to Γ_HI and Γ_LO. wbf_i is the set of level-i workloads from Equation (4), and χ_i defines the set of criticality levels for the level-i workloads, Equation (5). Γ is FP schedulable under resource provisioning sbf with system criticality level χ = min_i {χ_i}, χ_i being the level-i criticality level in χ_i, if for all i ∈ {1, 2, . . . , n} ∃t_0 ∈ SchedP_i such that: wbf^{χ_i}_i(t_0) ≤ sbf(t_0). (8) SchedP_i is the set of deadlines of all τ_j ∈ hp(i).
Proof: The schedulability of each task τ_i in Γ is guaranteed with the largest level-i workload which is smaller than sbf [11]; χ_i, corresponding to the largest schedulable level-i workload, is the schedulability criticality level for τ_i. The task set is schedulable if all tasks are schedulable, and the criticality level is the minimum among the schedulable criticality levels that satisfy all the conditions, χ = min_i {χ_i}. Equation (8) in Theorem 1 defines the FP schedulability conditions which apply the MC models. It proposes different degrees of schedulability for MC tasks.
Theorem 2 (EDF schedulability with MC): Consider a mixed-criticality task set Γ = {τ_1, τ_2, . . . , τ_n} with HI-criticality tasks Γ_HI defined as in Equation (1) and LO-criticality tasks Γ_LO defined as in Equation (2); dbf is the set of demand bounds from Equation (6), with χ defining the ordered set of criticality levels, Equation (7). Γ is EDF schedulable under resource provisioning sbf with system criticality level χ if ∀t_0 ∈ D: dbf^χ(t_0) ≤ sbf(t_0), (9) with χ ∈ χ and dbf^χ ∈ dbf. Proof: For Γ, with HI-criticality tasks combined with LO-criticality tasks, the largest demand bound function dbf^χ ∈ dbf which is smaller than sbf assures schedulability for the task combination that it represents [6]. χ describes the criticality level of the application up to which EDF schedulability is guaranteed. Equation (9) in Theorem 2 defines the EDF schedulability conditions which apply the MC models. It proposes different degrees of schedulability for MC tasks.
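The level-selection logic behind the theorems can be sketched as follows; the bounds, deadlines, and sbf below are hypothetical stand-ins for the ordered sets dbf and χ:

```python
def highest_guaranteed_level(bounds, deadlines, sbf):
    """Walk the bounds ordered by increasing demand (and criticality level),
    returning the label of the largest one satisfying dbf(t) <= sbf(t) at
    every checking point, in the spirit of Theorem 2."""
    guaranteed = None
    for label, dbf in bounds:            # ordered: dbf_j <= dbf_{j+1}
        if all(dbf(t) <= sbf(t) for t in deadlines):
            guaranteed = label           # this criticality level is schedulable
        else:
            break                        # larger bounds only demand more
    return guaranteed

# Hypothetical ordered bounds: LO demand, a HI-LO mix, and full HI demand.
bounds = [("LO",    lambda t: 0.3 * t),
          ("HI-LO", lambda t: 0.6 * t + 2),
          ("HI",    lambda t: 0.9 * t + 4)]
deadlines = [10, 20, 40]
print(highest_guaranteed_level(bounds, deadlines, sbf=lambda t: 0.7 * t))  # -> LO
```

With a richer provisioning, e.g. sbf(t) = t, the HI-LO bound also passes, so the guaranteed criticality level rises accordingly.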

B. Feasibility regions with mixed criticality
The FP scheduling condition parametrized with χ_i, Equation (8), translates into comparing feasibility regions and points within the (α, ∆)-space. A feasibility region Φ_{χ_i} is defined such that: Φ_{χ_i} = {(α, ∆) | ∀i ∃t ∈ SchedP_i : wbf^{χ_i}_i(t) ≤ α(t − ∆)}. It has an associated criticality level χ_i such that all the wbf_i, for all i applied, are from the same criticality level χ_i, Theorem 1.
For EDF it is the same with the scheduling condition in Equation (9). A feasibility region Φ_χ is defined such that: Φ_χ = {(α, ∆) | ∀t ∈ D : dbf^χ(t) ≤ α(t − ∆)}, and is parametrized with χ.
For both FP and EDF, there exists a set Φ of feasibility regions for the possible criticality levels resulting from the mode combinations, one set per scheduling policy, Equation (10), with the associated criticality levels, Equation (11). The regions in Equation (10) can be ordered increasingly (from the smallest region to the largest), with the consequent ordering of the criticality levels in Equation (11). Here it is possible to compare regions with each other (ordering between regions, Φ_k ≤ Φ_j), and to compare each region with the available resource sbf (if sbf is inside Φ_j, then χ_j is guaranteed). LO-criticality conditions (LO) are more prone to schedulability since they require less computational resource, hence larger feasibility regions. The more HI-criticality tasks are scheduled in HI-criticality mode, or the more LO-criticality tasks are scheduled together with HI-criticality tasks, the larger the resource required to schedule, hence smaller feasibility regions.
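Membership of a provisioning (α, ∆) in an EDF feasibility region reduces to checking α(t − ∆) ≥ dbf(t) at every deadline, equivalently ∆ ≤ min_t (t − dbf(t)/α). A sketch with a hypothetical demand:

```python
def in_region(alpha, delta, dbf, deadlines):
    """(alpha, delta) belongs to the EDF feasibility region iff
    alpha * (t - delta) >= dbf(t) at every deadline."""
    return all(alpha * (t - delta) >= dbf(t) for t in deadlines)

def max_delta(alpha, dbf, deadlines):
    """Region border at rate alpha: the largest feasible delay
    Delta = min_t (t - dbf(t) / alpha)."""
    return min(t - dbf(t) / alpha for t in deadlines)

# Hypothetical demand dbf(t) = 0.5 * t checked at deadlines 10, 20, 40.
dbf = lambda t: 0.5 * t
D = [10, 20, 40]
print(in_region(0.8, 3, dbf, D))   # -> True
print(max_delta(0.8, dbf, D))      # -> 3.75
```

Sweeping alpha through max_delta traces the region border, which is how the figures of Section IV-C can be reproduced for any bound in dbf.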

C. Sensitivity analysis with mixed criticality
We intend to use sensitivity analysis to investigate multiple elements which can impact the design of MC real-time systems. In particular, we apply sensitivity analysis to the schedulability conditions parametrized with criticality levels, Theorem 1 and Theorem 2. Our proposal is illustrated with three questions.

Q1) Which criticality level can be assured with the available resource provisioning? This is a critical question for MC scheduling, as it focuses on how to enhance computational resource usage by scheduling both HI-criticality and LO-criticality tasks together. With the (α, ∆)-space representation, the sensitivity analysis answers Q1 by finding the combinations that can be scheduled with a given resource. Considering the (k, m) formalization, k LO-criticality tasks out of m total tasks executing, Q1 becomes seeking how many LO-criticality tasks can be executed together with the m − k HI-criticality tasks in HI-criticality mode. k is the parameter to be studied in order to find the largest value that can be guaranteed with the available resource. With the proposed MC modeling, in combination with the (α, ∆) representation, this can be solved by seeking the largest feasibility region that includes the given sbf; it amounts to exploring an index in Φ.

Q2) What is the cost to guarantee a certain criticality level schedulable? The cost is in terms of computational resource. The sensitivity analysis can be used to define the resource change necessary to guarantee schedulability up to a specific criticality level. This is very helpful in defining and evaluating trade-offs between resource and criticality/schedulability. The distance dist(sbf_2, sbf_1) between two points in the (α, ∆)-space, defined as: dist(sbf_2, sbf_1) = sbf_2 − sbf_1 = (α_2 − α_1, ∆_2 − ∆_1), (12) quantifies the difference between two resource provisionings. The cost here is the resource provisioning change necessary to move from sbf_1 to sbf_2.
Note that, in order to increase the resource provisioning, α has to increase and ∆ has to decrease. There exists also the distance between a point and a feasibility region, dist(sbf_k, Φ_j). We define it as: dist(sbf_k, Φ_j) def= ± min {(|α_j − α_k|, |∆_j − ∆_k|) | (α_j, ∆_j) ∈ border(Φ_j)}. (13) Metric (13) quantifies the resource change necessary to guarantee schedulable the configuration represented by Φ_j. With all positive δs, the sign of the minimum between the absolute values |·| is +; with all negative δs, the sign is −. The αs and ∆s for Φ_j are taken from the region border, and the min is over the δs between sbf_k and all those points.

Q3) What is the cost to change a system criticality mode? With this, we intend the possibility in the (α, ∆)-space to quantify the computational resource difference between two criticality levels. The sensitivity analysis quantifies that difference as the distance between the two regions, defined analogously over the region borders. (14) Metric (14) is applied at iso-parameter, which means computing δ∆ with the same α, and δα with the same ∆. With all positive δs, the sign of the minimum between the absolute values |·| is +; with all negative δs, the sign is −; with both negative and positive δs, the min is 0, as the regions intersect. The αs and the ∆s are taken from the region borders. Metric (13) and Metric (14) are computed differently to signal the resource difference that exists between the two cases.

Figure 4 is an example of sensitivity analysis for evaluating the resource necessary to guarantee schedulability, Q1 and Q2 with Metric (12) and Metric (13). There are 6 regions grouped in 4 different classes: LO for only LO-criticality modes combined; HI for only HI-criticality modes combined; HI−LO for some HI-criticality tasks in HI-criticality mode combined with some LO-criticality tasks; HI,LO for all HI-criticality tasks in HI-criticality mode combined with LO-criticality tasks. There are three cases which define three resource provisioning changes from an initial resource sbf_1.
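Metrics (12) and (13) can be sketched numerically; the border samples below are hypothetical, and the sign convention of Metric (13) is simplified to the raw component-wise deltas of the closest sampled border point:

```python
def dist_points(sbf2, sbf1):
    """Metric (12): component-wise (delta_alpha, delta_Delta) between two
    provisionings given as (alpha, Delta) pairs."""
    return (sbf2[0] - sbf1[0], sbf2[1] - sbf1[1])

def dist_point_region(sbf, border):
    """Metric (13), sampled: the smallest component-wise change from the
    point sbf = (alpha, Delta) to any sampled border point of the region."""
    return min(((a - sbf[0], d - sbf[1]) for (a, d) in border),
               key=lambda v: abs(v[0]) + abs(v[1]))

print(dist_points((0.5, 10), (0.25, 15)))   # -> (0.25, -5)

# Hypothetical sampled border of a feasibility region.
border = [(0.5, 2.0), (1.0, 4.0), (1.25, 5.0)]
print(dist_point_region((0.75, 6.0), border))  # -> (0.5, -1.0)
```

A positive δα with a negative δ∆ reads as "provide more rate, with less delay", i.e. a genuine resource increase, matching the note above that α must grow while ∆ shrinks.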
With sbf_1 available, it is not possible to guarantee any of the schedulability levels represented, since sbf_1 is not included in any of those feasibility regions. The difference between change 2 and change 3 is in terms of changing either the α's or the ∆'s. Distances where one dimension is 0 are advantageous, since the resource change necessary is easier to apply (fewer constraints). Figure 4 also presents an example of cost evaluation with sensitivity analysis, Q3 and Metric (14). From HI to one HI−LO configuration k, it is: dist(Φ_HI, Φ_{HI−LO,k}) = (0.17, −0.2). It quantifies the resource difference between Φ_HI and Φ_{HI−LO,k}, equivalently the resource increase necessary to schedule Φ_{HI−LO,k} from a schedulable Φ_HI. This also translates into the resource necessary to add LO-criticality tasks into the scheduling.

V. EVALUATION
The case study applies our MC modeling, our schedulability analysis, and our sensitivity analysis.
The MC real-time application Γ considered is a robotic application which combines HI-criticality tasks and LO-criticality tasks. It is composed of a total of 14 tasks, 10 HI-criticality tasks and 4 LO-criticality tasks, which implement the main functionalities that a robot could have, e.g., drivers, SLAM, navigation, control. It is inspired by the MAUVE project https://forge.onera.fr/projects/mauve, to which τ_9 and τ_10 are added for two extra safety critical functionalities, while the LO-criticality tasks represent functionally important, but non-critical, jobs such as image processing. Table I recaps the parameters for each task with the specifics of the MC task model considered, Equation (1) and Equation (2). Tasks are assumed with T_i = D_i. We note that Γ is made of harmonic tasks, thus the utilization criterion could be applied for schedulability analysis [13]. Instead, we use Theorem 1 and Theorem 2 in combination with the (α, ∆)-space; the utilization is used only for considerations on how to partition Γ. In order to guarantee the scheduling of some combinations, at least two resource partitions would be required, each with U_j ≤ 1.
The industry approach to MC would consist of separating tasks by their criticality levels: HI-criticality tasks separated from LO-criticality tasks. This would result in 3 different resource partitions, two for HI-criticality tasks, since one would not be enough to guarantee schedulability, being U^HI_HI = 333/200, and one for LO-criticality tasks, U_LO = 100/200. This partitioning choice is not optimal in terms of resource usage, since it wastes a large amount of computational resource, approximately 167/200 + 100/200. Other partitioning solutions could be applied with equivalent guarantees on scheduling both HI- and LO-criticality tasks. We propose the following, based on two partitions with almost evenly distributed utilizations. Partition P_1 is such that P_1 = {τ_1, τ_2, τ_3, τ_5, τ_10, τ_11, τ_13}; Table I illustrates the resource provisioning details.

P_1 sensitivity analysis. For P_1 there are: i) 5 dbfs (and Φs) from HI; HI,LO,1 (τ_11); HI,LO,2 (τ_13); HI,LO,3 (τ_11 + τ_13); and LO; ii) 95 possible dbfs (and Φs) from HI−LO combinations: 15 from only one HI-criticality task in HI-criticality mode, which can be reduced to three dbf*_{HI−LO−1} with the envelope bounding; 30 from only two HI-criticality tasks in HI-criticality mode, which can be reduced to three bounds dbf*_{HI−LO−2}; and so on. Figure 5 represents the set of feasibility regions for P_1 compared with the four possible resources available. The cases LO + LOs represent the bounds with all HI-criticality tasks in LO-criticality mode combined with LO-criticality tasks, while onlyLO is for only HI-criticality tasks in LO-criticality mode. The case with only LO-criticality tasks scheduled is not depicted. With sbf^1_1 it is not possible to guarantee any of the criticality combinations for P_1; with sbf^2_1 it is possible to guarantee onlyLO and LO + LOs with only τ_11 added.
Note that it is not possible to guarantee schedulable the combination with all HI-criticality tasks in HI-criticality mode together with all LO-criticality tasks, since U^HI_{1,HI} + U_{1,LO} > 1. This is illustrated in the (α, ∆)-space representation with only two feasibility regions Φ_{HI,LO}. The sensitivity analysis quantifies all those costs with Metric (12). For example, while designing the system and deciding to change the resource provisioning in order to guarantee the HI,LO cases, sbf^3_1 would be necessary, which requires an increase of 0.15 in α and a decrease of 5 in ∆ from an initial sbf^1_1. As another example of sensitivity analysis for Q2, the cost for including a second LO-criticality task with all HI-criticality tasks in LO-criticality mode (LO + LOs) can be quantified in the same way.

P_2 sensitivity analysis. Figure 6 illustrates the feasibility regions for the different task mode combinations in P_2: cases HI; HI,LO; HI−LO; and LO. In particular, for HI−LO the regions from wbf*^{HI−LO}_{HI,i} are represented. With sbf^1_2, it is possible to schedule the onlyLO case, all the LO + LOs cases, and some HI−LO cases. The schedulability of all the criticality levels can be achieved only with sbf^4_2; sbf^2_2 and sbf^3_2 allow the schedulability of intermediate combinations. Note that with sbf^4_2 there is also some resource margin for eventually including new tasks. That margin can be quantified with dist(sbf^4_2, Φ^{HI−LO}_HI) = (−0.7, 16) as a resource reduction. In P_2, an example of Metric (14) applied to evaluate the difference between scheduling HI−LO (four HI-criticality modes and all the LO-criticality tasks) and scheduling HI−LO (three HI-criticality modes and all the LO-criticality tasks) is dist(Φ*^{HI−LO,4}_HI, Φ*^{HI−LO,3}_HI) = (0.1, −4). In order to schedule four HI-criticality modes instead of three, the resource has to be increased by δα = 0.1 and δ∆ = −4.

VI. CONCLUSION
We have developed MC models with workloads and demand bound functions that bound criticality mode combinations and define multiple system criticality levels. The schedulability analyses we proposed make use of the MC models and apply them to FP and EDF. There, the scheduling conditions are parametrized with the criticality levels: combinations are guaranteed to be schedulable or not, depending on the available computational resource sbf. We also formalized the (α, ∆)-space for the MC problem, where MC models and MC scheduling conditions translate into feasibility regions: a criticality level is guaranteed to be schedulable if the resource availability sbf belongs to the corresponding feasibility region. The sensitivity analysis is applied to evaluate the MC schedulability conditions, and the costs necessary to guarantee some combinations and not others. The sensitivity analysis identified trade-offs between criticality levels and resource provisioning which can be handy while designing systems.
Future work will focus on improving resource usage and on optimally combining HI-criticality tasks with LO-criticality tasks. In particular, policies will be developed to explore the proposed trade-offs; they will be implemented to define the best resource provisioning changes with respect to the criticality levels to be assured.