Chakraborty S., Max Planck Institute for Software Systems (Kaiserslautern) | Vafeiadis V., Max Planck Institute for Software Systems (Kaiserslautern)
CGO 2017 - Proceedings of the 2017 International Symposium on Code Generation and Optimization | Year: 2017

The LLVM compiler closely follows the concurrency model of C/C++ 2011, but with a crucial difference. While in C/C++ a data race between a non-atomic read and a write is declared to be undefined behavior, in LLVM such a race has defined behavior: the read returns the special 'undef' value. This subtle difference in the semantics of racy programs has profound consequences for the set of allowed program transformations, but it has not been formally studied before. This work closes the gap by providing a formal memory model for a substantial fragment of LLVM and showing that it is correct as a concurrency model for a compiler intermediate language: (1) it is stronger than the C/C++ model, (2) weaker than the known hardware models, and (3) supports the expected program transformations. In order to support LLVM's semantics for racy accesses, our formal model does not work on the level of single executions, as the hardware and the C/C++ models do, but instead uses more elaborate structures called event structures. © 2017 IEEE.
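
The distinction can be seen on a two-line racy program. Below is a minimal C++ sketch (racy by design; the variable names are illustrative): C/C++11 declares the whole execution undefined, whereas LLVM IR merely makes the racing read return 'undef'.

    #include <cstdio>
    #include <thread>

    int data = 0;  // plain, non-atomic shared variable

    int main() {
        std::thread writer([] { data = 42; });  // non-atomic write
        int r = data;  // racing non-atomic read: undefined behavior in C/C++11;
                       // in LLVM IR the read just yields the value 'undef' and
                       // the rest of the execution remains well-defined
        std::printf("r = %d\n", r);
        writer.join();
    }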


Acar U.A., Carnegie Mellon University | Chargueraud A., University Paris-Sud | Rainey M., Max Planck Institute for Software Systems (Kaiserslautern)
Proceedings of the ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, PPOPP | Year: 2013

Work stealing has proven to be an effective method for scheduling parallel programs on multicore computers. To achieve high performance, work stealing distributes tasks between concurrent queues, called deques, which are assigned to each processor. Each processor operates on its deque locally except when performing load balancing via steals. Unfortunately, concurrent deques suffer from two limitations: 1) local deque operations require expensive memory fences on modern weak-memory architectures, and 2) they can be very difficult to extend to support various optimizations and the flexible task distribution strategies needed by many applications, e.g., those that do not fit nicely into the divide-and-conquer, nested data parallel paradigm. For these reasons, there has been much recent interest in implementations of work stealing with non-concurrent deques, where deques remain entirely private to each processor and load balancing is performed via message passing. Private deques eliminate the need for memory fences in local operations and enable the design and implementation of efficient techniques for reducing task-creation overheads and improving task distribution. These advantages, however, come at the cost of communication. It is not known whether work stealing with private deques enjoys the theoretical guarantees of concurrent deques and whether it can be effective in practice. In this paper, we propose two work-stealing algorithms with private deques and prove that the algorithms guarantee theoretical bounds similar to those of work stealing with concurrent deques. For the analysis, we use a probabilistic model and consider a new parameter, the branching depth of the computation. We present an implementation of the algorithm as a C++ library and show that it compares well to Cilk on a range of benchmarks. Since our approach relies on private deques, it enables implementing flexible task creation and distribution strategies. As a specific example, we show how to implement task coalescing and steal-half strategies, which can be important in fine-grain, non-divide-and-conquer algorithms such as graph algorithms, and apply them to the depth-first-search problem. © 2013 ACM.
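
A minimal sketch of the private-deque idea, assuming two workers and a one-slot mailbox per worker (illustrative C++, not the authors' library): each deque is touched only by its owner, so local pushes and pops need no fences, and work moves between processors only through the atomic mailboxes.

    #include <atomic>
    #include <cstdio>
    #include <deque>
    #include <functional>
    #include <thread>
    #include <vector>

    using Task = std::function<void()>;

    struct Worker {
        std::deque<Task> tasks;             // private: only the owner touches it
        std::atomic<int> thief{-1};         // id of a worker asking for work, or -1
        std::atomic<Task*> reply{nullptr};  // a victim's answer to our steal request
    };

    std::vector<Worker> workers(2);
    std::atomic<int> remaining{0};          // tasks not yet executed
    Task no_work;                           // sentinel reply: "nothing to steal"

    void run(int id) {
        Worker& me = workers[id];
        bool requested = false;             // is a steal request of ours pending?
        while (remaining.load() > 0) {
            if (!me.tasks.empty()) {
                Task t = std::move(me.tasks.front());  // local pop: no fence needed
                me.tasks.pop_front();
                t();
                remaining.fetch_sub(1);
                int thief = me.thief.exchange(-1);     // poll for a steal request
                if (thief >= 0) {                      // always answer it
                    Task* r = &no_work;
                    if (!me.tasks.empty()) {
                        r = new Task(std::move(me.tasks.back()));
                        me.tasks.pop_back();
                    }
                    workers[thief].reply.store(r);
                }
            } else if (Task* r = me.reply.exchange(nullptr)) {
                requested = false;                     // answer arrived by "message"
                if (r != &no_work) {
                    me.tasks.push_back(std::move(*r));
                    delete r;
                }
            } else if (!requested) {
                int expected = -1;                     // ask the only other worker
                requested = workers[1 - id].thief.compare_exchange_strong(expected, id);
            }
        }
    }

    int main() {
        remaining = 100;
        for (int i = 0; i < 100; ++i)
            workers[0].tasks.push_back([] { /* one unit of work */ });
        std::thread a(run, 0), b(run, 1);
        a.join();
        b.join();
        std::puts("all tasks done");
    }

A real implementation would also answer requests while idle and support richer replies such as steal-half; the point of the sketch is only that the owner's deque operations are ordinary sequential code.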


Cha M., KAIST | Benevenuto F., Federal University of Ouro Preto | Haddadi H., Queen Mary, University of London | Gummadi K., Max Planck Institute for Software Systems (Kaiserslautern)
IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans | Year: 2012

Information propagation in online social networks like Twitter is unique in that word-of-mouth propagation and traditional media sources coexist. We collect a large amount of data from Twitter to compare the relative roles different types of users play in information flow. Using empirical data on the spread of news about major international headlines as well as minor topics, we investigate the relative roles of three types of information spreaders: 1) mass media sources like the BBC; 2) grassroots, consisting of ordinary users; and 3) evangelists, consisting of opinion leaders, politicians, celebrities, and local businesses. Mass media sources play a vital role in reaching the majority of the audience for any major topic. Evangelists, however, introduce both major and minor topics to audiences who are further away from the core of the network and would otherwise be unreachable. Grassroots users are relatively passive in helping spread the news, although they account for 98% of the network. Our results offer insights into what contributes to rapid information propagation at different levels of topic popularity, which we believe are useful to the designers of social search and recommendation engines. © 2012 IEEE.


Kulshrestha J., Max Planck Institute for Software Systems (Kaiserslautern)
Proceedings of the ACM Conference on Computer Supported Cooperative Work, CSCW | Year: 2016

The widespread adoption of social media like Twitter and Facebook has led to a paradigm shift in the way our society produces and consumes information, from broadcast mass media to online social media. To study the effects of this paradigm shift, we define the concept of an information diet, which is the composition of a set of information items being produced or consumed. Information diets can be constructed along many aspects, such as topics (e.g., politics, sports, or science), perspectives (e.g., politically left-leaning or right-leaning), or sources (e.g., originating from different parts of the world). We use information diets to measure the diversity and bias in the information produced or consumed, and to study how recommendation and search systems are shaping the diets of social media users. We leverage the insights gained from analysing social media users' diets to design better information discovery and exchange systems over social media.
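
Concretely, along the topic aspect a diet is just the normalized composition of the consumed items. A toy C++ sketch, with made-up items and categories for illustration:

    #include <cstdio>
    #include <map>
    #include <string>
    #include <vector>

    int main() {
        // Topic labels of items one user consumed (hypothetical data).
        std::vector<std::string> consumed = {"politics", "sports", "politics",
                                             "science", "politics"};
        std::map<std::string, int> counts;
        for (const auto& topic : consumed) ++counts[topic];
        // The information diet along the topic aspect: per-topic fractions.
        for (const auto& [topic, n] : counts)
            std::printf("%-10s %.2f\n", topic.c_str(),
                        double(n) / consumed.size());
    }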


Catalano D., University of Catania | Fiore D., Max Planck Institute for Software Systems (Kaiserslautern)
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2013

Homomorphic message authenticators allow the holder of a (public) evaluation key to perform computations over previously authenticated data, in such a way that the produced tag σ can be used to certify the authenticity of the computation. More precisely, a user knowing the secret key sk used to authenticate the original data can verify that σ authenticates the correct output of the computation. This primitive was recently formalized by Gennaro and Wichs, who also showed how to realize it from fully homomorphic encryption. In this paper, we show new constructions of this primitive that, while supporting a smaller set of functionalities (i.e., polynomially bounded arithmetic circuits as opposed to Boolean ones), are much more efficient and easy to implement. Moreover, our schemes can tolerate any number of (malicious) verification queries. Our first construction relies on the sole assumption that one-way functions exist and allows for arbitrary composition (i.e., outputs of previously authenticated computations can be used as inputs for new ones), but has the drawback that the size of the produced tags grows with the degree of the circuit. Our second solution, relying on the D-Diffie-Hellman Inversion assumption, offers somewhat orthogonal features, as it allows for very short tags (a single group element!) but poses some restrictions on the composition side. © 2013 International Association for Cryptologic Research.
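
A toy rendition of the polynomial trick behind PRF-based homomorphic MACs may help: a tag for message m under label τ is a polynomial y with y(0) = m and y(α) = F_K(τ) for a secret point α, gates act on tags as polynomial arithmetic, and the tag grows with the circuit degree, matching the abstract's first construction. Everything below (the field size, the 'PRF', the API) is an insecure illustration, not the paper's exact scheme.

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    const uint64_t P = 2147483647ULL;  // prime 2^31 - 1: products fit in 64 bits

    uint64_t mul(uint64_t a, uint64_t b) { return a % P * (b % P) % P; }
    uint64_t add(uint64_t a, uint64_t b) { return (a % P + b % P) % P; }

    uint64_t modpow(uint64_t a, uint64_t e) {
        uint64_t r = 1;
        for (a %= P; e; e >>= 1, a = mul(a, a))
            if (e & 1) r = mul(r, a);
        return r;
    }
    uint64_t inv(uint64_t a) { return modpow(a, P - 2); }  // Fermat: P is prime

    // Insecure stand-in for the PRF F_K(label).
    uint64_t prf(uint64_t key, uint64_t label) {
        return mul(add(mul(key, 1442695040888963407ULL), label),
                   6364136223846793005ULL);
    }

    // A tag is a polynomial over Z_P (low-degree coefficients first) with
    // y(0) = message and y(alpha) = prf(key, label).
    using Tag = std::vector<uint64_t>;

    uint64_t eval(const Tag& y, uint64_t x) {  // Horner evaluation
        uint64_t r = 0;
        for (size_t i = y.size(); i-- > 0;) r = add(mul(r, x), y[i]);
        return r;
    }

    Tag auth(uint64_t key, uint64_t alpha, uint64_t label, uint64_t msg) {
        // The line through (0, msg) and (alpha, prf(key, label)).
        uint64_t slope = mul(add(prf(key, label), P - msg % P), inv(alpha));
        return {msg % P, slope};
    }

    Tag tag_mul(const Tag& a, const Tag& b) {  // multiplication gate on tags
        Tag c(a.size() + b.size() - 1, 0);
        for (size_t i = 0; i < a.size(); ++i)
            for (size_t j = 0; j < b.size(); ++j)
                c[i + j] = add(c[i + j], mul(a[i], b[j]));
        return c;  // the tag grows with the degree of the computation
    }

    int main() {
        uint64_t key = 123456789, alpha = 987654321;  // secret key (key, alpha)
        uint64_t m1 = 7, m2 = 5;
        Tag t1 = auth(key, alpha, 1, m1), t2 = auth(key, alpha, 2, m2);
        Tag tf = tag_mul(t1, t2);  // evaluator computes f(m1, m2) = m1 * m2 blindly
        // Verifier recomputes f over the PRF values and checks both points.
        bool ok = eval(tf, alpha) == mul(prf(key, 1), prf(key, 2)) &&
                  eval(tf, 0) == mul(m1, m2);
        std::printf("tag size %zu, verifies: %s\n", tf.size(), ok ? "yes" : "no");
    }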


Brandenburg B.B., Max Planck Institute for Software Systems (Kaiserslautern)
Proceedings - Real-Time Systems Symposium | Year: 2015

In mixed-criticality systems, highly critical tasks must be temporally and logically isolated from faults in lower-criticality tasks. Such strict isolation, however, is difficult to ensure even for independent tasks, and has not yet been attained when low- and high-criticality tasks share resources subject to mutual exclusion constraints (e.g., shared data structures, peripheral I/O devices, or OS services), as is often the case in practical systems. Taking a pragmatic, systems-oriented point of view, this paper argues that traditional real-time locking approaches are unsuitable in a mixed-criticality context: locking is a cooperative activity and requires trust, which is inherently in conflict with the paramount isolation requirements. Instead, a solution based on resource servers (in the microkernel sense) is proposed, and MC-IPC, a novel synchronous multiprocessor IPC protocol for invoking such servers, is presented. The MC-IPC protocol enables strict temporal and logical isolation among mutually untrusted tasks and thus can be used to share resources among tasks of different criticalities. It is shown to be practically viable with a prototype implementation in LITMUS^RT and validated with a case study involving several antagonistic failure modes. Finally, MC-IPC is shown to offer analytical benefits in the context of Vestal's mixed-criticality task model. © 2014 IEEE.
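
The resource-server pattern itself is easy to sketch: instead of locking, clients send synchronous requests to a server thread that alone touches the resource. The C++ below illustrates only that pattern (the real MC-IPC is a kernel-level IPC protocol with scheduling and budget support, not a user-space queue):

    #include <condition_variable>
    #include <cstdio>
    #include <functional>
    #include <future>
    #include <mutex>
    #include <queue>
    #include <thread>

    class ResourceServer {
        int state_ = 0;  // the shared resource: touched only by the server thread
        std::queue<std::packaged_task<int()>> q_;
        std::mutex m_;
        std::condition_variable cv_;
        bool stop_ = false;
        std::thread worker_{[this] { serve(); }};

    public:
        ~ResourceServer() {
            { std::lock_guard<std::mutex> l(m_); stop_ = true; }
            cv_.notify_one();
            worker_.join();
        }
        // Clients never lock the resource: they send a request and block on
        // the reply, as in synchronous IPC.
        std::future<int> invoke(std::function<int(int&)> op) {
            std::packaged_task<int()> task([this, op] { return op(state_); });
            std::future<int> fut = task.get_future();
            { std::lock_guard<std::mutex> l(m_); q_.push(std::move(task)); }
            cv_.notify_one();
            return fut;
        }

    private:
        void serve() {
            for (;;) {
                std::packaged_task<int()> task;
                {
                    std::unique_lock<std::mutex> l(m_);
                    cv_.wait(l, [this] { return stop_ || !q_.empty(); });
                    if (q_.empty()) return;  // stopped and drained
                    task = std::move(q_.front());
                    q_.pop();
                }
                task();  // one request at a time: mutual exclusion by construction
            }
        }
    };

    int main() {
        ResourceServer server;
        std::future<int> f = server.invoke([](int& s) { return ++s; });
        std::printf("server replied: %d\n", f.get());
    }

Because requests are serialized by the server rather than by a lock held in the client's context, a misbehaving low-criticality client cannot corrupt the resource or stall others while holding a lock, which is the kind of isolation the paper targets.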


Haeberlen A., Max Planck Institute for Software Systems (Kaiserslautern)
Operating Systems Review (ACM) | Year: 2010

For many companies, clouds are becoming an interesting alternative to a dedicated IT infrastructure. However, cloud computing also carries certain risks for both the customer and the cloud provider. The customer places his computation and data on machines he cannot directly control; the provider agrees to run a service whose details he does not know. If something goes wrong - for example, data leaks to a competitor, or the computation returns incorrect results - it can be difficult for customer and provider to determine which of them has caused the problem, and, in the absence of solid evidence, it is nearly impossible for them to hold each other responsible for the problem if a dispute arises. In this paper, we propose that the cloud should be made accountable to both the customer and the provider. Both parties should be able to check whether the cloud is running the service as agreed. If a problem appears, they should be able to determine which of them is responsible, and to prove the presence of the problem to a third party, such as an arbitrator or a judge. We outline the technical requirements for an accountable cloud, and we describe several challenges that are not yet met by current accountability techniques.
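
One standard building block for the kind of 'solid evidence' the paper calls for is a tamper-evident, hash-chained log of the service's actions. The sketch below is a generic illustration of that technique, not the paper's proposal, and uses std::hash as an insecure placeholder for a cryptographic hash:

    #include <cstdio>
    #include <functional>
    #include <string>
    #include <vector>

    struct Entry {
        std::string event;
        size_t prev_hash;  // links each entry to its predecessor
    };

    size_t chain_hash(size_t prev, const std::string& event) {
        // Placeholder for a cryptographic hash such as SHA-256.
        return std::hash<std::string>{}(std::to_string(prev) + "|" + event);
    }

    int main() {
        std::vector<Entry> log;
        size_t head = 0;  // latest hash; signing it commits to the whole log
        for (const std::string& e : {"start vm", "read blob", "return result"}) {
            log.push_back({e, head});
            head = chain_hash(head, e);
        }
        // An auditor replays the chain: modifying any entry changes every
        // later hash, so tampering is detectable given the signed head.
        size_t check = 0;
        for (const Entry& en : log) check = chain_hash(check, en.event);
        std::printf("log %s\n", check == head ? "verified" : "tampered");
    }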


Vafeiadis V., Max Planck Institute for Software Systems (Kaiserslautern)
Electronic Notes in Theoretical Computer Science | Year: 2011

This paper presents a new soundness proof for concurrent separation logic (CSL) in terms of a standard operational semantics. The proof gives a direct meaning to CSL judgments, and it can easily be adapted to accommodate extensions of CSL, such as permissions and storable locks, as well as more advanced program logics, such as RGSep. Further, it explains clearly why resource invariants should be 'precise' in proofs using the conjunction rule. © 2011 Elsevier B.V. All rights reserved.
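
For context, the rule in question and the usual definition of precision can be written as follows (standard CSL notation, not necessarily the paper's):

    % The conjunction rule, which is sound only if resource invariants are precise:
    \[
    \frac{\{P\}\; C\; \{Q_1\} \qquad \{P\}\; C\; \{Q_2\}}
         {\{P\}\; C\; \{Q_1 \land Q_2\}}
    \;(\textsc{Conj})
    \]
    % An assertion R is precise if every heap has at most one subheap satisfying R,
    % so the state owned by a resource invariant is always unambiguous:
    \[
    \mathrm{precise}(R) \;\triangleq\;
    \forall h.\; \exists\,\text{at most one}\; h' \subseteq h \;\text{with}\; h' \models R
    \]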


Brandenburg B.B., Max Planck Institute for Software Systems (Kaiserslautern)
Proceedings - Euromicro Conference on Real-Time Systems | Year: 2013

Independence preservation, a property of real-time locking protocols that isolates latency-sensitive tasks from delays due to unrelated critical sections, is identified, formalized, and studied in detail. The key to independence preservation is to ensure that tasks remain fully preemptable at all times. For example, on uniprocessors, the classic priority inheritance protocol is independence-preserving. It is shown that, on multiprocessors, independence preservation is impossible if job migrations are disallowed. The O(m) independence-preserving protocol (OMIP), a new, asymptotically optimal binary semaphore protocol based on migratory priority inheritance, is proposed and analyzed. The OMIP is the first independence-preserving, suspension-based real-time locking protocol for clustered job-level fixed-priority scheduling. It is shown to benefit latency-sensitive workloads, both analytically by means of schedulability experiments, and empirically using response-time measurements in LITMUS^RT. © 2013 IEEE.


Wieder A., Max Planck Institute for Software Systems (Kaiserslautern) | Brandenburg B.B., Max Planck Institute for Software Systems (Kaiserslautern)
Proceedings - Real-Time Systems Symposium | Year: 2013

Motivated by the widespread use of spin locks in embedded multiprocessor real-time systems, the worst-case blocking in spin locks is analyzed using mixed-integer linear programming. Four queue orders and two preemption models are studied: (i) FIFO-ordered spin locks, (ii) unordered spin locks, (iii) priority-ordered spin locks with unordered tie-breaking, and (iv) priority-ordered spin locks with FIFO-ordered tie-breaking, each analyzed assuming both preemptable and non-preemptable spinning. Of the eight lock types, seven have not been analyzed in prior work. Concerning the sole exception (non-preemptable FIFO spin locks), the new analysis is asymptotically less pessimistic and typically much more accurate, since no critical section is accounted for more than once. The eight lock types are empirically compared in schedulability experiments. While the presented analysis is generic in nature and applicable to real-time systems in general, it is specifically motivated by the recent inclusion of spin locks in the AUTOSAR standard, and four concrete suggestions for an improved AUTOSAR spin lock API are derived from the results. © 2013 IEEE.
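
For contrast, the classic coarse bound for non-preemptable FIFO spin locks that the MILP-based analysis improves on can be stated in one line (the symbols here are illustrative): on m processors, a job J_i issuing N_{i,q} requests for lock q spins for at most

    \[
    S_i \;\le\; \sum_{q} N_{i,q} \cdot (m - 1) \cdot L^{\max}_{q}
    \]

where L^max_q bounds the longest critical section protecting q, since each FIFO request waits behind at most one earlier request per other processor. The pessimism is that the same remote critical section may be charged once per request; an analysis that counts each critical section at most once, as the abstract describes, can be substantially tighter.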
