Project Activities

Schedulability Analysis of Real-Time Systems

Mission-critical systems, e.g., those used in the aerospace, automotive, and healthcare domains, often consist of many software tasks that run in parallel. The correctness of such systems depends not only on the outputs they produce but also on the time they take to produce them. Schedulability analysis determines whether a given set of real-time software tasks is schedulable, i.e., whether task executions always complete before their specified deadlines. It is an important activity at both the early design and late development stages of mission-critical real-time systems. This project aims at developing an analysis framework that helps engineers in three ways. First, the framework allows engineers to test the schedulability of a set of tasks while accounting for uncertain task arrivals and worst-case execution times (WCET). Second, it enables engineers to estimate safe WCET ranges under which tasks are likely to be schedulable with a given level of confidence, and offers ways to achieve different trade-offs among tasks' WCET values. Last, it aims to find a robust priority assignment that can tolerate unexpected execution time overruns due to uncertainties in system operations, such as the overheads of a real-time operating system and cycle stealing by peripheral devices. The proposed framework customizes and extends metaheuristic search, simulation, and machine learning techniques.
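
The deadline check at the heart of schedulability analysis can be illustrated with the classical response-time analysis for fixed-priority periodic tasks. This is a textbook technique, not the framework described above (which additionally handles uncertain arrivals and WCETs via search, simulation, and learning); it is only a sketch of the basic schedulability question.

```python
import math

def response_times(tasks):
    """Worst-case response-time analysis for fixed-priority periodic tasks.

    `tasks` is a list of (period, deadline, wcet) tuples ordered from highest
    to lowest priority. For each task i, the classical fixed-point iteration
        R = C_i + sum over higher-priority j of ceil(R / T_j) * C_j
    is computed; the result entry is None if the task can miss its deadline.
    """
    results = []
    for i, (_period, deadline_i, wcet_i) in enumerate(tasks):
        r = wcet_i
        while True:
            # Interference from higher-priority tasks released before time r.
            interference = sum(math.ceil(r / period_j) * wcet_j
                               for period_j, _, wcet_j in tasks[:i])
            r_next = wcet_i + interference
            if r_next > deadline_i:
                results.append(None)   # deadline can be missed
                break
            if r_next == r:
                results.append(r)      # fixed point: worst-case response time
                break
            r = r_next
    return results
```

For instance, three tasks with periods/deadlines 10, 20, 40 and WCETs 3, 4, 10 are schedulable with worst-case response times 3, 7, and 20.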

Scientists: Lionel Briand, Jaekwon Lee, Shiva Nejati, Seung Yeob Shin (lead).
Industry partner: Luxspace

Mining Assumptions for CPS Software Components

The ultimate goal of software verification techniques is to provably demonstrate the correctness of the system under analysis with respect to a given requirement. However, this problem is generally undecidable for cyber-physical or hybrid systems, and out of reach for many industrial systems that are far too complex to be verified exhaustively and in their entirety. As a result, in practice, exhaustive verification techniques can only be applied to selected individual components or algorithms that are amenable to such verification and are used within a larger system. However, some requirements may fail to hold when individual components are considered in isolation, while the same requirements hold when the components are evaluated within the larger model they are extracted from. This is because, in the latter case, the components' inputs are constrained by the values that can be generated within the larger model, which is not the case when the components run independently. We therefore need to verify whether the conditions under which a component works are acceptable given the input values that its larger model can generate. In this project, we investigate how to automatically synthesize environment assumptions for a component under analysis, i.e., conditions on the component's inputs under which the component is guaranteed to satisfy its requirements. Our goal is to provide systematic techniques that allow developers to understand the conditions, i.e., the environment assumptions, under which their components operate properly.
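
As a toy illustration of the idea, and not the project's actual technique, one can approximate an environment assumption by sampling a component's inputs and retaining the input region over which the requirement is observed to hold. The function names below are hypothetical, and real approaches synthesize much richer constraints than a single interval.

```python
import random

def mine_assumption(component, requirement, input_range, samples, rng):
    """Approximate an environment assumption by sampling: return the
    tightest input interval over which the component was observed to
    satisfy the requirement, or None if no sampled input passed.
    """
    passing = [x for x in (rng.uniform(*input_range) for _ in range(samples))
               if requirement(component(x))]
    if not passing:
        return None
    # The mined assumption: inputs within this interval were seen to be safe.
    return (min(passing), max(passing))
```

For a squaring component with the requirement "output at most 25", the mined assumption approaches the true safe input region [-5, 5] as the number of samples grows.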

Scientists: Lionel Briand, Khouloud Gaaloul, Claudio Menghi, Shiva Nejati (lead).
Industry partner: Luxspace

Testing Compute-Intensive Cyber-Physical Models

Testing is extensively applied to verify whether Cyber-Physical System (CPS) models violate their requirements. Testing, however, usually requires executing the model under test for a large number of candidate test inputs. This poses a challenge for a large and practically important category of CPS models, known as compute-intensive CPS models. Compute-intensive CPS models require substantial computational power to execute since they include complex computations, such as dynamic, non-linear, and non-algebraic mathematics, and, further, they have to be executed for long durations to thoroughly exercise interactions between the CPS and its environment. For example, a single non-trivial simulation of an industrial model of a satellite system, capturing the satellite's behavior over 24 hours, takes more than one hour. The sheer amount of time required for just a single simulation of a compute-intensive CPS model significantly impedes testing and verification, since many testing and verification strategies require executing the model under test for hundreds or thousands of test inputs. This project aims at designing effective approaches to test compute-intensive CPS models. Our goal is to provide systematic testing techniques for such models, including, for example, approaches to generate automated and online test oracles, and effective search-based testing procedures.

Scientists: Lionel Briand, Khouloud Gaaloul, Claudio Menghi, Shiva Nejati (lead).
Industry partner: Luxspace

Trace-Checking CPS Properties

Cyber-physical systems (CPSs) combine cyber and physical capabilities. Cyber capabilities are typically provided by software components that sense and act on the physical environment, while physical capabilities are provided by the environment in which the software is deployed. CPSs therefore combine software and physical dynamics. Physical dynamics are typically modeled through formalisms that capture the continuous evolution, according to physical laws, of the environment over time (e.g., differential equations); the corresponding behaviors are typically represented as continuous signals. Software (i.e., cyber) dynamics are typically modeled with discrete event systems (e.g., finite state machines), whose behavior is typically represented by a sequence of events. Because they combine both kinds of capabilities, CPSs exhibit hybrid dynamics. This hybrid nature introduces new challenges when engineers need to assess whether the traces (a.k.a. logs) capturing instances of CPS behavior comply with the CPS requirements. For example, trace-checking tools for CPS need support for signal-based temporal properties, i.e., properties that characterize the behavior of a system whose inputs and outputs are signals over time. The goal of this project is to provide effective trace-checking techniques for CPS and approaches that support their usage within industrial contexts.
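
A minimal sketch of what checking one signal-based temporal property over a finite trace can look like is given below. It assumes a discretely sampled signal and finite-trace semantics (an episode still open when the trace ends is not reported), and is far simpler than the property languages and tools developed in this project.

```python
def check_bounded_recovery(trace, threshold, tau):
    """Check a signal-based temporal property over a sampled trace.

    `trace` is a list of (timestamp, value) pairs sorted by timestamp.
    Property checked: whenever the signal rises above `threshold`, it must
    return to `threshold` or below within `tau` time units. Returns
    (True, None) on success, or (False, t) where t is the start time of the
    first over-threshold episode that violates the bound.
    """
    exceeded_since = None
    for t, value in trace:
        if value > threshold:
            if exceeded_since is None:
                exceeded_since = t              # over-threshold episode starts
            elif t - exceeded_since > tau:
                return (False, exceeded_since)  # recovery deadline missed
        else:
            exceeded_since = None               # signal recovered in time
    return (True, None)
```

A trace that spikes above the threshold but recovers within the bound satisfies the property; one that stays above it longer than tau violates it.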

Scientists: Lionel Briand, Domenico Bianculli (lead), Claudio Menghi, Chaima Boufaied.
Industry partner: Luxspace

Model-based Simulation of Satellite Systems

To ensure that complex satellite systems behave as intended and meet their functional, performance, and robustness requirements, these systems are subject to extensive Verification and Validation (V&V). A key aspect of V&V for such systems is simulation, which is aimed at detecting defects at design time, before the systems are operationalized. The aim of this project is to reduce the cost and increase the effectiveness of design-time simulation for satellite systems. The project started in 2017 and is being conducted in collaboration with the Luxembourg-based satellite services company SES.

Scientists: Lionel Briand, Shiva Nejati, Mehrdad Sabetzadeh (lead), Seung Yeob Shin.
Industry partner: SES

Log-Driven Analysis for Software Systems

Behavior models of software systems play an essential role in many software engineering tasks, such as program comprehension, test data generation, and model checking. However, such models are often not available in practice, mainly because of the time and cost involved in generating and maintaining them. To overcome the lack of software system models, model inference techniques can be used to extract models, typically in the form of a Finite State Machine (FSM), from software logs (i.e., semi-structured text printed by logging statements, such as printf() in the source code) recorded during the execution of the software systems. However, using model inference techniques in practice raises several challenges. First, the problem of inferring a minimal FSM is NP-complete, and model inference techniques, even though they are polynomial-time approximations, suffer from scalability issues. Second, the quality of inferred models largely relies on the quality of the input logs, while real-world logs often contain noise that, if not properly removed, can be wrongly captured by the inferred models. This project aims to (1) advance the scalability of state-of-the-art model inference techniques and (2) develop an automated approach to identify and remove noise in logs.
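
The first step of many FSM inference techniques can be sketched as building a prefix tree acceptor (PTA) from the logged event sequences; algorithms such as k-tails then merge similar states to generalize. The sketch below stops at the PTA and performs no merging, so it is only the starting point of real model inference.

```python
def infer_fsm(traces):
    """Build a prefix tree acceptor (PTA) from a set of log event sequences.

    Each trace is a list of event names. States are integers, with 0 as the
    initial state. Returns the transition map {(state, event): next_state}.
    """
    transitions = {}
    next_state = 1
    for trace in traces:
        state = 0
        for event in trace:
            if (state, event) not in transitions:
                transitions[(state, event)] = next_state  # new branch in the tree
                next_state += 1
            state = transitions[(state, event)]
    return transitions

def follows_model(transitions, trace):
    """Check whether a trace only takes transitions present in the model."""
    state = 0
    for event in trace:
        if (state, event) not in transitions:
            return False
        state = transitions[(state, event)]
    return True
```

From the two traces `open read close` and `open write close`, the PTA shares the `open` transition and then branches, yielding five transitions in total.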

Scientists: Lionel Briand, Domenico Bianculli (lead), Donghwan Shin, Salma Messaoudi.
Industry partner: SES

Testing Automated Driving Systems with DNN Components

With recent advances in machine learning, Deep Neural Networks (DNNs) have been increasingly adopted in Automated Driving Systems (ADS), such as vision-based autonomous braking and lane-keeping systems. However, ensuring the reliability and safety of such DNN-based systems is extremely challenging for several reasons. First, the test input space is too large to be explored exhaustively; for example, a vision-based autonomous braking system must be tested under various road topologies, weather conditions, and many other environmental factors. Second, test data generation in the real world (e.g., driving a car in extreme environments to collect driving scenes) is expensive, time-consuming, and even dangerous. Third, DNNs are often black boxes in practice because they are provided by third parties with machine-learning expertise, without access to the DNNs' internals. This project aims to address these challenges by developing automated testing techniques for DNN-based systems in the context of ADS. Specifically, we identify the different testing phases in the process of DNN development and testing. Then, for each testing phase, we develop automated test data generation techniques that effectively and efficiently search the large test input space and generate critical test inputs with the help of high-fidelity simulators.

Scientists: Lionel Briand, Donghwan Shin (lead), Fitash Ul Haq.
Industry partner: IEE

Supporting DNN Safety-Analysis with DNN Explanation Approaches

Deep neural networks (DNNs) are increasingly crucial in safety-critical systems, for example, in their perception layer to analyze images. Unfortunately, there is a lack of methods to ensure the functional safety of DNN-based components. We observe three major challenges with existing practices regarding DNNs in safety-critical systems: (1) scenarios that are underrepresented in the test set may lead to serious safety violation risks but may nevertheless remain unnoticed; (2) characterizing such high-risk scenarios, which is critical for safety analysis, is not well supported; (3) retraining DNNs to address these risks is poorly supported when the causes of violations are difficult to determine.
To address these problems in the context of DNNs analyzing images, we aim to automatically identify the root causes of DNN errors (e.g., by identifying similar images leading to DNN errors) and rely on such root causes to retrain the DNNs. Preliminary results obtained by applying clustering algorithms to neuron relevance analysis show that such an approach can detect distinct root causes of DNN errors, thus supporting safety analysis. Also, retraining based on the identified root cause clusters has been shown to be more effective at improving DNN accuracy than state-of-the-art approaches.

Scientists: Lionel Briand, Hazem Fahmy, Fabrizio Pastore (lead).
Industry partner: IEE

Automated Requirements-based Testing for Embedded Systems

The complexity of embedded software in safety-critical domains, such as automotive and avionics, has significantly increased over the years. For most embedded systems, standards (e.g., ISO 26262) require system testing to explicitly demonstrate that the software meets its functional and safety requirements. In these domains, system test cases are often manually derived from functional requirements in natural language plus other design artifacts, like UML statecharts. The definition of system test cases is therefore time-consuming and error-prone, especially given the quickly rising complexity of embedded systems.

This project aims to reduce the manual effort required by automating the generation of test cases from requirements specifications. The resulting test suites are intended to be both effective at detecting faults and capable of demonstrating the connection between the verification process and the system requirements, thus satisfying verification guidelines. Project outcomes include the generation of test models and executable test cases from natural language requirements, plus the capability of verifying non-deterministic behaviors induced by uncertainty in timing specifications.

Scientists: Lionel Briand, Arda Goknil, Fabrizio Pastore (lead), Chunhui Wang.
Industry partner: IEE

Use Case Centric Development and Testing for Product Lines

In the automotive domain, embedded software suppliers provide multiple automotive manufacturers with customized versions of the same product. To identify commonalities and variability early in requirements analysis, companies need to adopt product line engineering concepts (e.g., variation points and variants). However, there is a lack of methods that fit development contexts in which requirements are described by means of use case specifications. To fill this gap, the project aims to (1) define a modeling method for capturing variability information in product line use case models, (2) automate the configuration of product-specific use case models, (3) enable change impact analysis in the presence of evolving configuration decisions, and (4) support efficient testing by enabling the reuse of test cases across different products.

Scientists: Lionel Briand, Arda Goknil (lead), Ines Hajri.
Industry partner: IEE

Requirements-based Security Testing

Modern internet-based services, like home banking, music streaming, food delivery, and personal training, are delivered through multi-device software systems that process private end-user data collected and stored by different devices. Therefore, security and privacy have become crucial concerns in the development of software systems, from requirements analysis through testing.

This project aims to develop a methodology to clearly capture security and privacy requirements including compliance with standards; furthermore, it aims to support automated generation of executable security test cases. Our motivation is to have a systematic way to identify threats, to test whether they can be exploited, and to automate testing relying exclusively on artifacts that can be realistically expected in most environments. The project focuses on security requirements captured using natural language since these are often preferred by engineers because they enable discussion among stakeholders.

Scientists: Lionel Briand, Arda Goknil, Phu X. Mai, Fabrizio Pastore (lead).
Industry partner: EDLAH2 consortium

Security Vulnerability Testing

According to security organizations such as OWASP and SANS, the most frequent security risks depend on specific vulnerabilities such as SQL injection (SQLi) and XML injection (XMLi) vulnerabilities. Unfortunately, effectively testing for such vulnerabilities within time and resource constraints is a challenge because of the large input space that characterizes modern software systems. Indeed, modern systems are complex, interact with other systems through multiple protocols while exchanging huge amounts of data, and receive inputs from users who can use and misuse them in unexpected ways. For these reasons, we are developing a set of novel black-box testing approaches that address scalability, reachability, and executability issues in modern systems, with an initial focus on Web applications.

We have developed approaches that combine machine learning and evolutionary algorithms to automatically detect XMLi and SQLi vulnerabilities in both Web application frontends and Web application firewalls (WAF). The approaches automatically generate a diverse set of attacks, send them to the system under test, and check if they are correctly identified. Ongoing work concerns the automated generation of inputs and oracles for other highly risky vulnerabilities not addressed by existing approaches.
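
The attack-generation idea can be illustrated with a toy mutation loop over injection payloads. This is only a sketch: the approaches developed in this project combine machine learning with evolutionary search and use far richer mutation operators and fitness functions, and the operators below are merely illustrative examples of common obfuscations.

```python
import random

# Toy mutation operators over SQL injection payloads (illustrative only).
MUTATIONS = [
    lambda p: p.replace(" ", "/**/"),   # comment-based whitespace obfuscation
    lambda p: p.replace("OR", "||"),    # logical-operator substitution
    lambda p: p.upper(),                # case normalization
    lambda p: p + " -- ",               # append a trailing SQL comment
]

def generate_attacks(seed, rounds, rng):
    """Derive a diverse set of attack strings by repeatedly mutating a seed
    payload; each attack would then be sent to the system under test (or its
    WAF) to check whether it is correctly identified."""
    attacks = {seed}
    for _ in range(rounds):
        base = rng.choice(sorted(attacks))        # pick any known attack
        attacks.add(rng.choice(MUTATIONS)(base))  # apply a random mutation
    return attacks
```

In the real approaches, the set of candidate attacks is not grown blindly: a fitness function (e.g., learned from the WAF's responses) guides the search toward payloads likely to bypass the defenses.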

Scientists: Lionel Briand, Annibale Panichella (lead), Sadeeq Jan, Dennis Appelt.
Industry partners: CETREL, HITEC, Westerdals

Model Testing and Model-based Testing of Satellite Control Systems

Cyber-Physical Systems (CPS) are becoming increasingly ubiquitous. Examples include cars, medical instruments, traffic control systems, and satellites. These systems are often subject to rigorous Verification and Validation due to their criticality. However, they are largely regarded as untestable systems, meaning that traditional testing methods are expensive, time-consuming, or infeasible to apply due to factors such as the systems' continuous interactions with the environment and the deep intertwining of software with hardware. Model testing has been proposed as a vision to tackle this challenge: by means of executable models that represent and simulate the system at the design level, numerous test scenarios can be run, and faults can be captured at that level before going deep into the implementation. In this project, we explore model testing in the area of space engineering. The Attitude Determination and Control Sub-System (ADCS), which monitors and controls satellite systems, is an example of an untestable system: it receives sensor data and produces control commands for actuators, and it further has to implement remote, real-time interactions between the satellite and the ground station. Our goal is to provide systematic testing techniques for ADCS functional requirements that take into account complex factors such as environment uncertainties, the complex interaction of hardware and software, and the time-based behaviors of CPS.

Scientists: Lionel Briand, Shiva Nejati (lead), Carlos Alberto Gonzalez Perez, Mojtaba Varmazyar.
Industry partner: Luxspace

Quality Assurance and Verification of Requirements

In industry, software requirements are prevalently written in natural language. This project aims to capitalize on the benefits of natural language for specifying requirements, and yet provide automation for complex and laborious requirements quality assurance tasks. These tasks notably include vagueness and ambiguity detection, enforcing requirements best practices, change impact analysis, and completeness and consistency checking. Even with good requirements in place, a great deal of effort must still go into verifying that the requirements have been properly realized by the system design. Given the complexity of modern-day systems, this task is beyond human capacity. A second goal of the project is to build automated verification techniques that can cope with such challenges. The project started in 2018 and is being conducted with the Canada-based QRA Corp, a developer of systems and requirements engineering technologies.

Scientists: Sallam Abualhaija, Lionel Briand, Khouloud Gaaloul, Shiva Nejati, Mehrdad Sabetzadeh (lead).

Effective Testing of Advanced Driver Assistance Systems

With the challenge of developing software for autonomous vehicles comes the challenge of testing this software to ensure vehicles' safety and reliability. Simulation, i.e., design-time testing of system models, is arguably the most effective way of testing the control and perception algorithms developed for autonomous driving. Rich simulation environments are able to replicate all kinds of weather conditions, different types of roads, vehicles and pedestrians, intersections, buildings, landscapes, etc. They enable engineers to generate and execute scenarios capturing various ways in which pedestrians interact with road traffic or vehicles interact with one another. Recent years have seen major improvements in the computational and modeling accuracy of simulators. Their full exploitation, however, is hampered by the lack of guidance for identifying the most critical simulation scenarios among the sheer number of scenarios that can possibly be generated, and by the time it takes to generate these scenarios.
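
The search for critical scenarios can be sketched as an optimization over simulation parameters. The sketch below uses plain random sampling against a stand-in `simulate` function; the actual work in this area relies on guided metaheuristic search to spend the scarce simulation budget on the most promising scenarios, and the parameter names are hypothetical.

```python
import random

def critical_scenario_search(simulate, param_ranges, budget, rng):
    """Random search for a critical simulation scenario.

    `simulate` stands in for an expensive simulator run: it maps a scenario
    (a dict of environment parameters) to a fitness value, e.g., the minimum
    distance between the ego vehicle and a pedestrian during the simulated
    drive (lower = more critical). Returns the most critical scenario found
    within the given simulation budget.
    """
    best_scenario, best_fitness = None, float("inf")
    for _ in range(budget):
        # Sample one scenario uniformly from the parameter ranges.
        scenario = {name: rng.uniform(lo, hi)
                    for name, (lo, hi) in param_ranges.items()}
        fitness = simulate(scenario)
        if fitness < best_fitness:
            best_scenario, best_fitness = scenario, fitness
    return best_scenario, best_fitness
```

Because every `simulate` call is costly in practice, the design goal is to find low-fitness (i.e., critical) scenarios with as few calls as possible, which is what motivates guided search over random sampling.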

Scientists: Raja Ben Abdessalem, Lionel Briand, Shiva Nejati (lead).
Industry partner: IEE

Logical Constraint Solving

A key technical challenge in model-based simulation and testing is the ability to solve logical constraints that characterize the invariants, preconditions and transition guards in the models. When UML is used for analysis and design, these constraints are typically expressed in the Object Constraint Language (OCL). The goal of this project is to develop a generalizable and yet scalable infrastructure for OCL constraint solving using a combination of search-based and SMT techniques. The project is transversal and expected to lead to technologies that can be used in a variety of other projects in the SVV lab.
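
A minimal sketch of the search-based half of this idea is given below, assuming a toy constraint rather than actual OCL: the classical branch-distance heuristic turns a constraint into a fitness function that is zero exactly when the constraint is satisfied, so any minimizer (here, a naive hill climber) can act as a solver. The project's infrastructure combines such search with SMT solving and handles full OCL.

```python
def branch_distance(x, y):
    """Fitness for the example constraint 'x > 10 and x + y = 100': zero
    means satisfied; positive values measure how far a candidate is from
    satisfying it (the classical branch-distance heuristic)."""
    d1 = 0 if x > 10 else (10 - x) + 1   # distance to making x > 10 true
    d2 = abs((x + y) - 100)              # distance to making x + y = 100 true
    return d1 + d2

def hill_climb(fitness, start, steps=1000):
    """Minimal neighborhood search: greedily apply +/-1 moves per variable."""
    current, best = start, fitness(*start)
    for _ in range(steps):
        improved = False
        for i in range(len(current)):
            for delta in (-1, 1):
                candidate = list(current)
                candidate[i] += delta
                value = fitness(*candidate)
                if value < best:
                    current, best, improved = tuple(candidate), value, True
        if not improved:
            break                        # local optimum reached
    return current, best
```

Starting from (0, 0), the climber drives the fitness to zero and returns a satisfying assignment; real constraints, of course, have plateaus and local optima that demand stronger search operators or an SMT fallback.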

Scientists: Lionel Briand, Ghanem Soltana, Mehrdad Sabetzadeh (lead).