The classic approach to the design of real-time systems is to determine worst-case scenarios for the system statically and manually, and then to build the system with sufficient resources to meet its deadlines and goals. This approach has worked well for traditional real-time systems, which operate in relatively simple, well-characterized environments. However, the emerging generation of complex, dynamic, and uncertain real-time application domains accentuates the growing need for flexible, adaptable design approaches. As real-time systems grow more complex, it is becoming infeasible to build systems with sufficient resources to meet the functional and timing requirements of all application tasks at all times. What is becoming increasingly important in the new paradigm of real-time computing is the need to meet deadlines with sufficient system solution quality without designing the system to support worst-case program execution. In this thesis, we explore the possibility of exploiting the “soft computing” properties of kernels to meet this objective. The chief characteristic of “soft computations” is that they can provide cruder results before they complete, or may execute for a long time refining an already adequate result. In other words, such computations provide useful, incremental results before fully completing execution; more specifically, they offer a trade-off between computation time and algorithm solution quality.

  This thesis addresses the design issues involved in building a system that exploits the “soft computing” properties of kernels to optimize real-time performance. In this context, we make the following contributions. First, we build a prototype of a real-time situational-assessment system. Second, we identify “soft computations” in the system and characterize the computation-time/solution-quality trade-off opportunities they provide using performance profiles. Third, we introduce a method that uses performance-profile-based models at run time to determine the optimal composition of different “soft computations” in order to meet real-time deadlines with sufficient system solution quality. We quantify the gains from our method in terms of both the functional correctness of the system and CPU utilization, relative to conventional real-time scheduling techniques. We observe that, for various design points that do not support worst-case behavior, our dynamic scheduling scheme on average meets the system goals with 39% higher accuracy and no missed deadlines compared to conventional real-time scheduling techniques. In addition, our method meets the system objective while keeping the system highly utilized. Most importantly, our scheme exploits the soft-computing properties of kernels to allow the system to be designed at less aggressive design points while meeting deadlines and system goals at the same level as conventional real-time design methodology. Finally, we perform an experimental study of the sensitivity of performance profiles to various input-data parameters and identify the potential for online learning of performance profiles.

Introduction and Motivation

Real-time computing systems are defined as those systems in which the correctness of the system depends not only on the logical results of the computation, but also on the time at which those results are produced. The objective of real-time computing is to meet the timing and functional requirements of individual tasks. Additionally, it is desirable that real-time systems achieve their functional correctness and timeliness while remaining highly utilized.

Real-time systems are specified by a set of timing constraints, called deadlines. The objective of the system is to provide solutions with the requisite functional correctness by the specified deadlines. Based on the nature of the timing constraints they must satisfy, real-time applications and systems can be characterized as “hard” or “soft” real-time systems. In a hard real-time system, the system fails if one or more activities miss a deadline or timing constraint. In contrast, a soft real-time system does have timing requirements, but occasionally missing them has negligible effect as long as the application requirements as a whole continue to be met.

When activities have timing constraints, as is typical of real-time computing systems, scheduling these activities to meet their timing constraints is a major design issue that needs to be addressed. Traditional real-time scheduling algorithms fall into two categories: static and dynamic. A static (pre-runtime) approach calculates, or predetermines, schedules for the system off-line; it requires prior knowledge of all task characteristics (arrival times, deadlines, release times, and worst-case execution times). In a dynamic (runtime) approach, the order of execution of tasks is decided at runtime based on priorities attached to tasks. These priorities may in turn be determined off-line (as in the Rate Monotonic Algorithm [1], where priorities are based on the frequencies/periods of tasks) or online (as in the Earliest Deadline First (EDF) algorithm, where the task with the earliest deadline is given the highest priority). These scheduling techniques are discussed in detail in Chapter 2.
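The contrast between the two priority policies can be sketched in a few lines of Python. This is a minimal illustration only; the task names and numbers are hypothetical and do not come from the prototype described in this thesis.

```python
# Hypothetical periodic tasks; names and numbers are illustrative only.
tasks = {
    "sensor_read":  {"period": 10, "next_deadline": 10},
    "control_loop": {"period": 25, "next_deadline": 25},
    "logging":      {"period": 50, "next_deadline": 50},
}

def rm_priorities(tasks):
    """Rate Monotonic: fixed priorities computed off-line;
    the shorter the period, the higher the priority (0 = highest)."""
    ordered = sorted(tasks, key=lambda name: tasks[name]["period"])
    return {name: rank for rank, name in enumerate(ordered)}

def edf_pick(ready, tasks):
    """Earliest Deadline First: at each scheduling point, run the
    ready task whose absolute deadline is nearest."""
    return min(ready, key=lambda name: tasks[name]["next_deadline"])

print(rm_priorities(tasks))
# -> {'sensor_read': 0, 'control_loop': 1, 'logging': 2}
print(edf_pick(["control_loop", "logging"], tasks))
# -> control_loop
```

Note that `rm_priorities` runs once, before the system starts, while `edf_pick` must run at every scheduling point as deadlines change.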

Real time systems: The current paradigm

Traditionally, the real-time computing paradigm has included applications from the arenas of digital control, digital signal processing, multimedia, and database transactions.

Traditional real-time systems such as digital controllers operate in relatively simple, well-characterized environments. Such systems have a set of repeated tasks with known execution times and arrival patterns, and the primary challenge in building them is to schedule these independent periodic tasks so that they meet their deadlines. The classic approach is to determine worst-case scenarios statically and manually, then build the system with sufficient resources to meet its goals.

 This approach works well only when the following conditions hold:

 It is possible to develop an accurate workload model of the environment in which the real-time system operates. This ensures that Worst Case Execution Time (WCET) estimates, arrival times, and release times of tasks are accurate, and hence guarantees that all timing constraints of tasks are met.

 The variance of the actual execution times of tasks relative to the WCET estimates is low. This ensures that, in addition to meeting the timing and functional constraints of the system, the real-time system is not underutilized.
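The first condition is typically verified with a schedulability test over the WCET estimates. For Rate Monotonic scheduling, the classic utilization bound gives a sufficient (though not necessary) condition: total utilization must not exceed n(2^(1/n) − 1) for n tasks. A minimal sketch, with purely illustrative WCET/period pairs:

```python
def rm_schedulable(tasks):
    """Sufficient (not necessary) utilization-bound test for Rate
    Monotonic scheduling of n independent periodic tasks:
    sum(WCET_i / period_i) <= n * (2**(1/n) - 1)."""
    n = len(tasks)
    utilization = sum(wcet / period for wcet, period in tasks)
    bound = n * (2 ** (1.0 / n) - 1)
    return utilization <= bound

# (WCET, period) pairs; numbers are illustrative.
print(rm_schedulable([(1, 10), (2, 25), (5, 50)]))    # U = 0.28 -> True
print(rm_schedulable([(5, 10), (10, 25), (20, 50)]))  # U = 1.30 -> False
```

Because the WCETs feed directly into this test, pessimistic WCET estimates force the designer toward over-provisioned hardware, which is precisely the second condition above.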

However, many complex real-world applications such as real-time database systems, agile manufacturing, robotics, and various command and control systems work in unpredictable, dynamic environments. Accurate knowledge of the resource and data requirements of many of their tasks is unavailable, because the execution times and resource requirements of these tasks may depend on input data (information and decision support systems) or on sensor values (manufacturing plants, command and control systems). Dynamic priority-driven scheduling algorithms such as Earliest Deadline First (EDF) have been shown to be well suited to such systems. However, because of the unpredictability of dynamic schemes under overload conditions, current commercial real-time applications executing in dynamic environments use static priority-driven algorithms such as Rate Monotonic (RMA) and hence are generally overdesigned and underutilized.
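The overload unpredictability of EDF can be seen in a toy discrete-time simulation (all numbers here are illustrative, not drawn from any real workload): when one job cannot possibly meet its deadline, EDF still runs it first and thereby starves a job that could have finished in time, whereas fixed priorities confine the miss to the lowest-priority job.

```python
def simulate(jobs, policy, horizon):
    """Unit-time simulation of one processor. Each job is a dict with
    'release', 'wcet', 'deadline', 'prio'; returns the set of job
    indices that miss their deadlines."""
    remaining = {i: j["wcet"] for i, j in enumerate(jobs)}
    missed = set()
    for t in range(horizon):
        ready = [i for i in remaining
                 if jobs[i]["release"] <= t and remaining[i] > 0]
        if not ready:
            continue
        i = policy(ready, jobs)          # pick one job to run this tick
        remaining[i] -= 1
        if remaining[i] == 0 and t + 1 > jobs[i]["deadline"]:
            missed.add(i)                # finished, but too late
    missed |= {i for i, r in remaining.items() if r > 0}  # never finished
    return missed

edf = lambda ready, jobs: min(ready, key=lambda i: jobs[i]["deadline"])
fixed = lambda ready, jobs: min(ready, key=lambda i: jobs[i]["prio"])

jobs = [
    {"release": 0, "wcet": 3, "deadline": 3, "prio": 0},  # feasible, urgent
    {"release": 0, "wcet": 3, "deadline": 6, "prio": 1},  # feasible under fixed prio
    {"release": 0, "wcet": 6, "deadline": 5, "prio": 2},  # infeasible on its own
]
print(sorted(simulate(jobs, edf, 12)))    # -> [1, 2]: hopeless job drags job 1 down
print(sorted(simulate(jobs, fixed, 12)))  # -> [2]: only the low-priority job misses
```

Under fixed priorities the designer knows in advance *which* work is sacrificed in an overload; under EDF the set of missed deadlines depends on the runtime interleaving, which is one reason commercial systems favor static schemes.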

Real time computing: The next generation

The next generation of real-time systems will have much more complex system requirements and will work in uncertain, dynamic environments. Adaptable, flexible behavior in such systems will therefore be of primary importance. This need is accentuated by the following emerging trends in the field:

 The convergence of two major areas of computer science and engineering: Artificial Intelligence (AI) and real-time systems. AI systems are moving towards more realistic domains requiring real-time responses, and real-time systems are moving towards more complex applications requiring intelligent behavior.

 The growing need for real-time fusion of huge amounts of data into synthesized information in domains such as avionics, medicine, defense, integrated vision, robotics, and finance/business, to facilitate real-time decision support and diagnosis.