Kulshrestha J., Max Planck Institute for Software Systems (Kaiserslautern)
Proceedings of the ACM Conference on Computer Supported Cooperative Work, CSCW | Year: 2016
The widespread adoption of social media like Twitter and Facebook has led to a paradigm shift in the way our society is producing and consuming information, from the broadcast mass media to online social media. To study the effects of this paradigm shift, we define the concept of information diet, which is the composition of a set of information items being produced or consumed. Information diets can be constructed along many aspects like topics (e.g., politics, sports, science), perspectives (e.g., politically left leaning or right leaning), or sources (e.g., originating from different parts of the world). We use information diets to measure the diversity and bias in the information produced or consumed, and to study how recommendation and search systems are shaping the diets of social media users. We leverage the insights we gain from analysing social media users' diets to design better information discovery and exchange systems over social media.
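The "composition along an aspect" can be made concrete as a normalized distribution over that aspect's values. The sketch below is illustrative (the field names and data are hypothetical, not from the paper):

```python
from collections import Counter

def information_diet(items, aspect="topic"):
    """Compute an information diet: the fractional composition of a set
    of information items along one aspect (e.g. topic or source)."""
    counts = Counter(item[aspect] for item in items)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

# Hypothetical items consumed by one user
consumed = [
    {"topic": "politics"}, {"topic": "politics"},
    {"topic": "sports"}, {"topic": "science"},
]
diet = information_diet(consumed)
# e.g. {"politics": 0.5, "sports": 0.25, "science": 0.25}
```

Comparing such distributions for items produced versus items consumed (or recommended) is one way to quantify the bias a system introduces.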
Brandenburg B.B., Max Planck Institute for Software Systems (Kaiserslautern)
Proceedings - Real-Time Systems Symposium | Year: 2015
In mixed-criticality systems, highly critical tasks must be temporally and logically isolated from faults in lower-criticality tasks. Such strict isolation, however, is difficult to ensure even for independent tasks, and has not yet been attained if low- and high-criticality tasks share resources subject to mutual exclusion constraints (e.g., shared data structures, peripheral I/O devices, or OS services), as is often the case in practical systems. Taking a pragmatic, systems-oriented point of view, this paper argues that traditional real-time locking approaches are unsuitable in a mixed-criticality context: locking is a cooperative activity and requires trust, which is inherently in conflict with the paramount isolation requirements. Instead, a solution based on resource servers (in the microkernel sense) is proposed, and MC-IPC, a novel synchronous multiprocessor IPC protocol for invoking such servers, is presented. The MC-IPC protocol enables strict temporal and logical isolation among mutually untrusted tasks and thus can be used to share resources among tasks of different criticalities. It is shown to be practically viable with a prototype implementation in LITMUS^RT and validated with a case study involving several antagonistic failure modes. Finally, MC-IPC is shown to offer analytical benefits in the context of Vestal's mixed-criticality task model. © 2014 IEEE.
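The core idea of the resource-server approach can be illustrated with a minimal sketch (an assumption for illustration, not the actual MC-IPC protocol): instead of taking a lock, clients of any criticality invoke a single trusted server via synchronous message passing, so a misbehaving client can never hold the shared state hostage the way a lock holder can:

```python
import queue
import threading

def resource_server(requests):
    """One trusted thread serializes all access to the shared resource."""
    state = {"counter": 0}
    while True:
        op, reply = requests.get()
        if op == "stop":
            break
        state["counter"] += 1          # the protected critical section
        reply.put(state["counter"])    # synchronous reply unblocks the client

def client_invoke(requests):
    """A client's 'IPC call': send a request, block until the reply."""
    reply = queue.Queue(maxsize=1)
    requests.put(("increment", reply))
    return reply.get()

requests = queue.Queue()
server = threading.Thread(target=resource_server, args=(requests,))
server.start()
results = [client_invoke(requests) for _ in range(3)]
requests.put(("stop", None))
server.join()
```

The isolation argument in the paper rests on scheduling and budgeting the server independently of its clients, which this user-level sketch does not capture.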
Vafeiadis V., Max Planck Institute for Software Systems (Kaiserslautern)
Electronic Notes in Theoretical Computer Science | Year: 2011
This paper presents a new soundness proof for concurrent separation logic (CSL) in terms of a standard operational semantics. The proof gives a direct meaning to CSL judgments, which can easily be adapted to accommodate extensions of CSL, such as permissions and storable locks, as well as more advanced program logics, such as RGSep. Further, it explains clearly why resource invariants should be 'precise' in proofs using the conjunction rule. © 2011 Elsevier B.V. All rights reserved.
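For context, a soundness proof for CSL must in particular validate its parallel-composition rule; in its standard textbook form (the paper's exact notation may differ), the rule is:

```latex
\frac{\{P_1\}\; C_1\; \{Q_1\} \qquad \{P_2\}\; C_2\; \{Q_2\}}
     {\{P_1 * P_2\}\; C_1 \parallel C_2\; \{Q_1 * Q_2\}}
```

where $*$ is separating conjunction, and side conditions prevent each thread from modifying variables the other relies on. The separating conjunction in the conclusion is what lets each thread be verified against its own disjoint portion of the heap.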
Haeberlen A., Max Planck Institute for Software Systems (Kaiserslautern)
Operating Systems Review (ACM) | Year: 2010
For many companies, clouds are becoming an interesting alternative to a dedicated IT infrastructure. However, cloud computing also carries certain risks for both the customer and the cloud provider. The customer places his computation and data on machines he cannot directly control; the provider agrees to run a service whose details he does not know. If something goes wrong - for example, data leaks to a competitor, or the computation returns incorrect results - it can be difficult for customer and provider to determine which of them has caused the problem, and, in the absence of solid evidence, it is nearly impossible for them to hold each other responsible for the problem if a dispute arises. In this paper, we propose that the cloud should be made accountable to both the customer and the provider. Both parties should be able to check whether the cloud is running the service as agreed. If a problem appears, they should be able to determine which of them is responsible, and to prove the presence of the problem to a third party, such as an arbitrator or a judge. We outline the technical requirements for an accountable cloud, and we describe several challenges that are not yet met by current accountability techniques.
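One common building block for producing evidence a third party can check (included here as an illustration; the abstract describes requirements rather than a specific construction) is a tamper-evident log, where each entry is hash-chained to its predecessor so neither party can silently rewrite history after a dispute arises:

```python
import hashlib
import json

def append_entry(log, event):
    """Append an event, chaining it to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    log.append({"event": event, "prev": prev_hash,
                "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify_log(log):
    """A third party (e.g. an arbitrator) recomputes the hash chain."""
    prev_hash = "0" * 64
    for entry in log:
        body = json.dumps({"event": entry["event"], "prev": prev_hash},
                          sort_keys=True)
        if (entry["prev"] != prev_hash or
                entry["hash"] != hashlib.sha256(body.encode()).hexdigest()):
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, "customer: uploaded dataset v1")
append_entry(log, "provider: ran job, returned result")
ok_before = verify_log(log)      # chain is intact
log[0]["event"] = "tampered"     # any rewrite breaks the chain
ok_after = verify_log(log)
```

Such a log establishes what happened, but by itself it does not establish which party is at fault; that is part of what the paper's requirements go beyond.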
Acar U.A., Carnegie Mellon University |
Chargueraud A., University Paris - Sud |
Rainey M., Max Planck Institute for Software Systems (Kaiserslautern)
Proceedings of the ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, PPOPP | Year: 2013
Work stealing has proven to be an effective method for scheduling parallel programs on multicore computers. To achieve high performance, work stealing distributes tasks between concurrent queues, called deques, which are assigned to each processor. Each processor operates on its deque locally except when performing load balancing via steals. Unfortunately, concurrent deques suffer from two limitations: 1) local deque operations require expensive memory fences on modern weak-memory architectures, 2) they can be very difficult to extend to support various optimizations and flexible task distribution strategies needed by many applications, e.g., those that do not fit nicely into the divide-and-conquer, nested data parallel paradigm. For these reasons, there has been much recent interest in implementations of work stealing with non-concurrent deques, where deques remain entirely private to each processor and load balancing is performed via message passing. Private deques eliminate the need for memory fences in local operations and enable the design and implementation of efficient techniques for reducing task-creation overheads and improving task distribution. These advantages, however, come at the cost of communication. It is not known whether work stealing with private deques enjoys the theoretical guarantees of concurrent deques and whether it can be effective in practice. In this paper, we propose two work-stealing algorithms with private deques and prove that the algorithms provide theoretical bounds similar to those of work stealing with concurrent deques. For the analysis, we use a probabilistic model and consider a new parameter, the branching depth of the computation. We present an implementation of the algorithm as a C++ library and show that it compares well to Cilk on a range of benchmarks. Since our approach relies on private deques, it enables implementing flexible task creation and distribution strategies.
As a specific example, we show how to implement task coalescing and steal-half strategies, which can be important in fine-grain, non-divide-and-conquer algorithms such as graph algorithms, and apply them to the depth-first-search problem. © 2013 ACM.
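The steal-half strategy mentioned above can be sketched as follows (a minimal illustration, not the paper's C++ library): each worker owns a private deque that only it touches, so local pushes and pops need no fences, and an idle worker obtains work only by messaging a victim, which replies with half of its deque:

```python
import collections

def steal_half(victim_deque):
    """Victim's response to a steal request: transfer the older half
    of its private deque to the thief."""
    k = len(victim_deque) // 2
    return [victim_deque.popleft() for _ in range(k)]

# Two workers, each with a private deque of task IDs; worker 1 is idle.
deques = [collections.deque(range(8)), collections.deque()]

# Worker 1 "messages" worker 0, which responds with half of its tasks.
stolen = steal_half(deques[0])
deques[1].extend(stolen)
# deques[0] now holds [4, 5, 6, 7]; deques[1] holds [0, 1, 2, 3]
```

Transferring half rather than one task amortizes the cost of each steal message, which matters for fine-grain workloads such as the graph algorithms mentioned above.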