Ziccardi M.,University of Padua | Mezzetti E.,University of Padua | Vardanega T.,University of Padua | Abella J.,Barcelona Supercomputing Center | Cazorla F.J.,Spanish National Research Council IIIA CSIC
Proceedings - Real-Time Systems Symposium | Year: 2016

Measurement-based probabilistic timing analysis (MBPTA) computes trustworthy upper bounds to the execution time of software programs. MBPTA has the connotation, typical of measurement-based techniques, that the bounds computed with it only relate to what is observed in actual program traversals, which may not include the effective worst-case phenomena. To overcome this limitation, we propose Extended Path Coverage (EPC), a novel technique that allows extending the representativeness of the bounds computed by MBPTA. We make the observation data probabilistically path-independent by modifying the probability distribution of the observed timing behaviour so as to negatively compensate for any benefits that a basic block may draw from a path leading to it. This enables the derivation of trustworthy upper bounds to the probabilistic execution time of all paths in the program, even when the user-provided input vectors do not exercise the worst-case path. Our results confirm that using MBPTA with EPC produces fully trustworthy upper bounds with competitively small overestimation in comparison to state-of-the-art MBPTA techniques. © 2015 IEEE.
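MBPTA-style analyses typically fit an extreme value distribution to measured execution times and read a probabilistic worst-case execution time (pWCET) bound off its tail. The sketch below is a simplified illustration of that general idea, not the paper's EPC method: it fits a Gumbel distribution by the method of moments to synthetic execution-time observations and queries a bound at a target exceedance probability. All numbers and names are invented for illustration.

```python
# Illustrative sketch: pWCET bound from a Gumbel tail fit (method of moments).
import math
import random

def gumbel_fit(samples):
    """Method-of-moments Gumbel fit: returns location mu and scale beta."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    beta = math.sqrt(6.0 * var) / math.pi
    mu = mean - 0.5772156649 * beta   # Euler-Mascheroni constant
    return mu, beta

def pwcet_bound(mu, beta, exceedance_prob):
    """Execution time exceeded with the given probability per run."""
    return mu - beta * math.log(-math.log(1.0 - exceedance_prob))

random.seed(42)
# Synthetic "observed" execution times (cycles), e.g. from randomised hardware.
observations = [10_000 + random.expovariate(1 / 150) for _ in range(1000)]
mu, beta = gumbel_fit(observations)
bound = pwcet_bound(mu, beta, 1e-9)   # bound exceeded with prob 1e-9 per run
```

Note how a smaller target exceedance probability yields a larger (more conservative) bound; EPC's contribution is making such bounds valid for paths the measurements did not exercise.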


Luque C.,Polytechnic University of Catalonia | Luque C.,Barcelona Supercomputing Center | Moreto M.,Polytechnic University of Catalonia | Moreto M.,Barcelona Supercomputing Center | And 6 more authors.
IEEE Transactions on Computers | Year: 2012

In single-threaded processors and Symmetric Multiprocessors the execution time of a task depends on the other tasks it runs with (the workload), since the Operating System (OS) time shares the CPU(s) between tasks in the workload. However, the time accounted to a task is roughly the same regardless of the workload in which the task runs, since the OS takes into account those periods in which the task is not scheduled onto a CPU. Chip Multiprocessors (CMPs) introduce complexities when accounting CPU utilization, since the CPU time to account to a task not only depends on the time that the task is scheduled onto a CPU, but also on the amount of hardware resources it receives during that period. And given that in a CMP hardware resources are dynamically shared between tasks, the CPU time accounted to a task in a CMP depends on the workload it executes in. This is undesirable because the same task with the same input data set may be accounted differently depending on the workload in which it executes. In this paper, we identify how an inaccurate measurement of the CPU utilization affects several key aspects of the system such as OS statistics or the charging mechanism in data centers. We propose a new hardware CPU accounting mechanism to improve the accuracy when measuring the CPU utilization in CMPs and compare it with the previous accounting mechanisms. Our results show that currently known mechanisms lead to a 16 percent average error when it comes to CPU utilization accounting. Our proposal reduces this error to less than 2.8 percent in a modeled 8-core processor system. © 2006 IEEE.
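The accounting problem above can be made concrete with a toy model. The sketch below contrasts classic OS accounting, which charges a task every cycle it holds a core, with an idealised contention-aware scheme that subtracts cycles lost to inter-task interference in shared resources. The cycle counts, the 16 percent inflation, and both function names are invented placeholders, not the paper's mechanism.

```python
# Toy model of CPU-time accounting in a CMP with shared resources.
def naive_accounting(scheduled_cycles):
    # Classic OS accounting: charge every cycle the task occupies a core.
    return scheduled_cycles

def contention_aware_accounting(scheduled_cycles, stall_cycles_due_to_sharing):
    # Idealised hardware-assisted accounting: discount cycles lost to
    # interference in shared resources (caches, buses, ...).
    return scheduled_cycles - stall_cycles_due_to_sharing

isolated = 1_000_000        # cycles the task needs when running alone
with_workload = 1_160_000   # cycles observed when co-running (16% inflation)

naive = naive_accounting(with_workload)
aware = contention_aware_accounting(with_workload, with_workload - isolated)

error_naive = abs(naive - isolated) / isolated   # workload-dependent charge
error_aware = abs(aware - isolated) / isolated   # workload-independent charge
```

The point of the model: the naive charge varies with the co-runners, whereas the contention-aware charge converges to what the task costs in isolation, which is what billing and OS statistics actually want to know.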


Kosmidis L.,Polytechnic University of Catalonia | Kosmidis L.,Barcelona Supercomputing Center | Abella J.,Polytechnic University of Catalonia | Quinones E.,Polytechnic University of Catalonia | And 2 more authors.
Proceedings -Design, Automation and Test in Europe, DATE | Year: 2013

Caches provide significant performance improvements, though their use in the real-time industry is low because current WCET analysis tools require detailed knowledge of a program's cache accesses to provide tight WCET estimates. Probabilistic Timing Analysis (PTA) has emerged as a solution to reduce the amount of information needed to provide tight WCET estimates, although it imposes new requirements on hardware design. At cache level, so far only fully-associative random-replacement caches have been proven to fulfill the needs of PTA, but they are expensive in size and energy. In this paper, we propose a cache design that allows set-associative and direct-mapped caches to be analysed with PTA techniques. In particular, we propose a novel parametric random placement suitable for PTA that is proven to have low hardware complexity and energy consumption while providing comparable performance to that of conventional modulo placement. © 2013 EDAA.
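The essence of random placement is that the set an address maps to depends on a random seed drawn at run start, rather than fixed address bits as in modulo placement. The sketch below illustrates that idea with a simple seed-dependent multiplicative hash; the concrete hash is an invented simplification for illustration, not the low-complexity circuit proposed in the paper.

```python
# Illustrative seed-dependent (random) placement for a set-associative cache.
import random

NUM_SETS = 64    # power of two
SET_BITS = 6     # log2(NUM_SETS)
LINE_BITS = 6    # 64-byte cache lines

def random_placement(address, seed):
    """Map an address to a cache set via a seed-dependent hash, so the set
    chosen for a given address changes whenever a new seed is drawn."""
    line = address >> LINE_BITS
    # Mix line number and seed (multiplicative hash sketch); keep top bits,
    # which are the best-mixed in this scheme.
    h = ((line ^ seed) * 0x9E3779B1) & 0xFFFFFFFF
    return h >> (32 - SET_BITS)

seed = random.getrandbits(32)   # drawn once per run in a PTA setting
example_set = random_placement(0x1000, seed)
```

With modulo placement the same addresses collide in the same set on every run; with a fresh seed per run, conflicts become probabilistic events, which is what makes the cache amenable to PTA.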


Radojkovic P.,Barcelona Supercomputing Center | Girbal S.,Thales Alenia | Girbal S.,Ecole Polytechnique - Palaiseau | Grasset A.,Thales Alenia | And 6 more authors.
Transactions on Architecture and Code Optimization | Year: 2012

Commercial Off-The-Shelf (COTS) processors are now commonly used in real-time embedded systems. The characteristics of these processors fulfill system requirements in terms of time-to-market, low cost, and high performance-per-watt ratio. However, multithreaded (MT) processors are still not widely used in real-time systems because the timing analysis is too complex. In MT processors, simultaneously-running tasks share and compete for processor resources, so the timing analysis has to estimate the possible impact that the inter-task interferences have on the execution time of the applications. In this paper, we propose a method that quantifies the slowdown that simultaneously-running tasks may experience due to collision in shared processor resources. To that end, we designed benchmarks that stress specific processor resources and we used them to (1) estimate the upper limit of a slowdown that simultaneously-running tasks may experience because of collision in different shared processor resources, and (2) quantify the sensitivity of time-critical applications to collision in these resources. We used the presented method to determine if a given MT processor is a good candidate for systems with timing requirements. We also present a case study in which the method is used to analyze three multithreaded architectures exhibiting different configurations of resource sharing. Finally, we show that measuring the slowdown that real applications experience when simultaneously-running with resource-stressing benchmarks is an important step in measurement-based timing analysis. This information is a base for incremental verification of MT COTS architectures. © 2012 ACM.
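The quantification step described above reduces to a simple ratio: run each time-critical task alone, then co-run it with a benchmark that stresses one shared resource, and report the slowdown. The sketch below shows that bookkeeping; the task names, the stressed resource, and all timing values are invented placeholders.

```python
# Slowdown bookkeeping for measurement-based interference analysis.
def slowdown(time_with_stressor, time_alone):
    """Execution-time inflation when co-running with a resource stressor."""
    return time_with_stressor / time_alone

# Hypothetical measurements (ms): (co-run with L2-stressing kernel, alone).
measurements = {
    "taskA": (12.0, 10.0),
    "taskB": (33.0, 30.0),
}

per_task = {name: slowdown(w, a) for name, (w, a) in measurements.items()}
worst = max(per_task.values())   # upper bound observed for this resource
```

Repeating this per shared resource (caches, buses, functional units) yields the per-resource sensitivity profile the paper uses to judge whether an MT processor suits systems with timing requirements.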


Abella J.,Barcelona Supercomputing Center | Quinones E.,Barcelona Supercomputing Center | Wartel F.,Airbus | Vardanega T.,University of Padua | And 2 more authors.
Proceedings - Euromicro Conference on Real-Time Systems | Year: 2014

Measurement-Based Probabilistic Timing Analysis (MBPTA) has been recently proposed as a viable method to compute probabilistic worst-case execution time (pWCET) bounds for programs with hard real-time constraints. As a key trait, MBPTA needs a comparatively small number of observation runs, made on execution platforms to which MBPTA can be applied, to project the tail of the probability of occurrence of worst-case execution time durations of individual programs. In order for the use of MBPTA to fit the bill of industrial-quality development, it is imperative to understand what factors might threaten the trustworthiness of the pWCET computation. This paper addresses that important question by: (i) identifying the combined characteristics of applications and hardware resources that might lead to optimistic pWCET bounds, (ii) describing why this may occur, and (iii) providing the user with means to detect those cases so that trustworthiness is restored. In particular, we present a method for detecting risk scenarios for time-randomised caches, based on principles that apply to any other time-randomised resource which may challenge the application of MBPTA. © 2014 IEEE.
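One well-known risk scenario for time-randomised caches is a group of addresses slightly larger than the associativity: the event that they all land in the same set causes systematic evictions, yet its probability may be too small to show up in the observation runs while still exceeding the target exceedance probability. The sketch below is a simplified, invented check along those lines (uniform independent placement assumed), not the paper's detection method; all thresholds are illustrative.

```python
# Illustrative risk check for a random-placement cache with S sets, W ways.
def same_set_probability(num_addresses, num_sets):
    # P(all K addresses map to the set of the first one) = (1/S)^(K-1),
    # assuming independent uniform placement.
    return (1.0 / num_sets) ** (num_addresses - 1)

def risky(num_addresses, num_ways, num_sets, runs, target_prob):
    """Flag scenarios likely unseen in `runs` observations yet relevant
    at the target exceedance probability."""
    if num_addresses <= num_ways:
        return False   # the group fits in one set: no eviction risk
    p = same_set_probability(num_addresses, num_sets)
    expected_sightings = runs * p
    return expected_sightings < 1.0 and p > target_prob

# 5 conflicting lines, 4 ways, 32 sets, 1000 runs, target 1e-12 per run.
flag = risky(num_addresses=5, num_ways=4, num_sets=32,
             runs=1000, target_prob=1e-12)
```

When such a scenario is flagged, the user can either collect enough runs to make the event observable or treat the computed pWCET bound as untrustworthy, which is the kind of restoration of trust the paper argues for.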
