Jung R.,MPI SWS | Jung R.,Saarland University | Swasey D.,MPI SWS | Sieczkowski F.,University of Aarhus | And 4 more authors.
Conference Record of the Annual ACM Symposium on Principles of Programming Languages | Year: 2015

We present Iris, a concurrent separation logic with a simple premise: monoids and invariants are all you need. Partial commutative monoids enable us to express, and invariants enable us to enforce, user-defined protocols on shared state, which are at the conceptual core of most recent program logics for concurrency. Furthermore, through a novel extension of the concept of a view shift, Iris supports the encoding of logically atomic specifications, i.e., Hoare-style specs that permit the client of an operation to treat the operation essentially as if it were atomic, even if it is not. Copyright © 2015 by the Association for Computing Machinery, Inc. (ACM).
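
For instance, a logically atomic specification for an increment on a shared location ℓ might be rendered schematically as below (simplified notation, not the paper's exact formulation); the angle brackets play the role of Hoare braces and bind the value n observed at the instant the operation appears to take effect:

    \forall n.\ \langle \ell \mapsto n \rangle \;\ \mathit{inc}(\ell) \;\ \langle v.\ \ell \mapsto n + 1 \land v = n \rangle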


St-Amour V.,Northeastern University | Guo S.-Y.,Mozilla Research
Leibniz International Proceedings in Informatics, LIPIcs | Year: 2015

The performance of dynamic object-oriented programming languages such as JavaScript depends heavily on highly optimizing just-in-time compilers. Such compilers, like all compilers, can silently fall back to generating conservative, low-performance code during optimization. As a result, programmers may inadvertently cause performance issues on users' systems by making seemingly inoffensive changes to programs. This paper shows how to solve the problem of silent optimization failures. It specifically explains how to create a so-called optimization coach for an object-oriented just-in-time-compiled programming language. The development and evaluation build on the SpiderMonkey JavaScript engine, but the results should generalize to a variety of similar platforms. © Vincent St-Amour and Shu-yu Guo.
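
As an illustration of the kind of silent failure an optimization coach surfaces, consider the following TypeScript sketch. The specific heuristics are engine-dependent, and this example is only assumed to be representative; it is not drawn from the paper's evaluation:

    // A seemingly inoffensive change (letting some points carry an extra
    // field) can make the property accesses below polymorphic, at which
    // point a JIT such as SpiderMonkey may quietly fall back to slower
    // generic code. An optimization coach reports the bailout and its cause.
    type Point = { x: number; y: number; z?: number };

    function magnitude(p: Point): number {
      // The fast path wants every object reaching this access site to share
      // one hidden shape; mixing {x, y} and {x, y, z} objects defeats that.
      return Math.sqrt(p.x * p.x + p.y * p.y);
    }

    const pts: Point[] = [];
    for (let i = 0; i < 1000; i++) {
      pts.push(i % 2 === 0 ? { x: i, y: i } : { x: i, y: i, z: i });
    }
    console.log(pts.reduce((sum, p) => sum + magnitude(p), 0));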


Holk E.,Indiana University Bloomington | Pathirage M.,Indiana University Bloomington | Chauhan A.,Indiana University Bloomington | Lumsdaine A.,Indiana University Bloomington | Matsakis N.D.,Mozilla Research
Proceedings - IEEE 27th International Parallel and Distributed Processing Symposium Workshops and PhD Forum, IPDPSW 2013 | Year: 2013

Graphics processing units (GPUs) have the potential to greatly accelerate many applications, yet programming models remain too low level. Many language-based solutions to date have addressed this problem by creating embedded domain-specific languages that compile to CUDA or OpenCL. These targets are meant for human programmers and thus are less than ideal compilation targets. LLVM recently gained a compilation target for PTX, NVIDIA's low-level virtual instruction set for GPUs. This lower-level representation is more expressive than CUDA and OpenCL, making it easier to support advanced language features such as abstract data types or even certain closures. We demonstrate the effectiveness of this approach by extending the Rust programming language with support for GPU kernels. At the most basic level, our extensions provide functionality that is similar to that of CUDA. However, our approach seamlessly integrates with many of Rust's features, making it easy to build a library of ergonomic abstractions for data parallel computing. This approach provides the expressiveness of a high level GPU language like Copperhead or Accelerate, yet also provides the programmer the power needed to create new abstractions when those we have provided are insufficient. © 2013 IEEE.


Matsakis N.D.,Mozilla Research | Herman D.,Mozilla Research | Lomov D.,Google
DLS 2014 - Proceedings of the 10th Symposium on Dynamic Languages, Part of SPLASH 2014 | Year: 2014

JavaScript's typed arrays have proven to be a crucial API for many JS applications, particularly those working with large amounts of data or emulating other languages. Unfortunately, the current typed array API offers no means of abstraction. Programmers are supplied with a simple byte buffer that can be viewed as an array of integers or floats, but nothing more. This paper presents a generalization of the typed arrays API entitled typed objects. The typed objects API is slated for inclusion in the upcoming ES7 standard. The API gives users the ability to define named types, making typed arrays much easier to work with. In particular, it is often trivial to replace uses of existing JavaScript objects with typed objects, resulting in better memory consumption and more predictable performance. The advantages of the typed object specification go beyond convenience, however. By supporting opacity, that is, the ability to deny access to the raw bytes of a typed object, the new typed object specification makes it possible to store objects as well as scalar data and also enables more optimization by JIT compilers. Copyright © 2014 ACM.
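
The flavor of the API can be sketched as follows. The StructType and float64 names are modeled on the typed objects proposal the paper describes; since no shipping engine exposes them under exactly these names, an ambient declaration stands in for the engine here:

    // Hypothetical sketch of the typed objects API described above.
    declare const TypedObject: any;
    const { StructType, float64 } = TypedObject;

    // A named struct type: instances are stored as packed float64 pairs
    // with a fixed layout, rather than as ordinary JavaScript objects.
    const Point = new StructType({ x: float64, y: float64 });

    // Instances read like plain objects but have predictable memory use,
    // and arrays of them can be backed by a single contiguous buffer.
    const p = new Point({ x: 3.0, y: 4.0 });
    console.log(p.x * p.x + p.y * p.y); // 25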


Bocchino R.,Jet Propulsion Laboratory | Matsakis N.,Mozilla Research | Taft T.,AdaCore | Larson B.,Kansas State University | Seidewitz E.,Model Driven Solutions
HILT 2014 - Proceedings of the ACM Conference on High Integrity Language Technology | Year: 2014

This panel brings together designers of traditional programming languages and designers of behavioral specification languages for modeling systems, in each case with a concern for the challenges of multicore programming. Furthermore, several of these efforts have attempted to provide data-race-free programming models, so that multicore programmers need not face the added burden of debugging race conditions on top of the existing challenges of building reliable systems. Copyright is held by the owner/author(s).


McCutchan J.,Google | Feng H.,Intel Corporation | Matsakis N.D.,Mozilla Research | Anderson Z.,Google | Jensen P.,Intel Corporation
WPMVP 2014 - Proceedings of the 2014 ACM SIGPLAN Workshop on Programming Models for SIMD/Vector Processing, Co-located with PPoPP 2014 | Year: 2014

Dynamically typed scripting languages have so far been unable to take advantage of the SIMD co-processors available in all x86 and most ARM processors shipping today. Web browsers have become a mainstream platform for delivering large and complex applications with feature sets and performance comparable to native applications, and programmers must choose between Dart and JavaScript when writing web programs. This paper introduces an explicit SIMD programming model for Dart and JavaScript, and we show that it can be compiled to efficient x86/SSE or ARM/Neon code by both the Dart and JavaScript virtual machines, achieving a 300%-600% speed increase across a variety of benchmarks. The result of this work is that more sophisticated and performant applications can be built to run in web browsers. The ideas introduced in this paper can also be used in other dynamically typed scripting languages to provide a similarly performant interface to SIMD co-processors. Copyright © 2014 ACM.
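
The programming model can be illustrated with a short sketch. The SIMD.Float32x4 names follow the SIMD.js-style proposal the paper describes; because the API was never standardized in JavaScript, an ambient declaration stands in for the engine-provided global:

    // Explicit vector operations: one add over four float lanes at a time,
    // which the Dart and JavaScript VMs map to SSE or NEON instructions.
    declare const SIMD: any;

    const a = SIMD.Float32x4(1, 2, 3, 4);
    const b = SIMD.Float32x4(5, 6, 7, 8);
    const sum = SIMD.Float32x4.add(a, b);             // lanes: (6, 8, 10, 12)
    console.log(SIMD.Float32x4.extractLane(sum, 0));  // 6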


Chugh R.,University of California at San Diego | Herman D.,Mozilla Research | Jhala R.,University of California at San Diego
Proceedings of the Conference on Object-Oriented Programming Systems, Languages, and Applications, OOPSLA | Year: 2012

We present Dependent JavaScript (DJS), a statically typed dialect of the imperative, object-oriented, dynamic language JavaScript. DJS supports particularly challenging features such as run-time type tests, higher-order functions, extensible objects, prototype inheritance, and arrays through a combination of nested refinement types, strong updates to the heap, and heap unrolling to precisely track prototype hierarchies. With our implementation of DJS, we demonstrate that the type system is expressive enough to reason about a variety of tricky idioms found in small examples drawn from several sources, including the popular book JavaScript: The Good Parts and the SunSpider benchmark suite.
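
The function below (written in TypeScript for familiarity) is representative of the run-time type-test idioms DJS is designed to check; DJS expresses the branch-sensitive invariant with nested refinement types rather than TypeScript's union types:

    // A function whose behavior depends on a run-time type test: it negates
    // numbers arithmetically and booleans logically. A checker must track
    // that x is a number in the first branch and a boolean in the second.
    function negate(x: number | boolean): number | boolean {
      if (typeof x === "number") {
        return 0 - x;
      } else {
        return !x;
      }
    }

    console.log(negate(5));    // -5
    console.log(negate(true)); // false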


Bergstrom L.,Mozilla Research | Fluet M.,Rochester Institute of Technology | Le M.,Rochester Institute of Technology | Reppy J.,University of Chicago | Sandler N.,University of Chicago
ACM SIGPLAN Notices | Year: 2014

Inlining is an optimization that replaces a call to a function with that function's body. This optimization not only reduces the overhead of a function call, but can expose additional optimization opportunities to the compiler, such as removing redundant operations or unused conditional branches. Another optimization, copy propagation, replaces a redundant copy of a still-live variable with the original. Copy propagation can reduce the total number of live variables, reducing register pressure and memory usage, and possibly eliminating redundant memory-to-memory copies. In practice, both of these optimizations are implemented in nearly every modern compiler. These two optimizations are practical to implement and effective in first-order languages, but in languages with lexically-scoped first-class functions (i.e., closures), these optimizations are not available to code written in a higher-order style. With higher-order functions, the analysis challenge has been that the environment at the call site must be the same as at the closure capture location, up to the free variables, or the meaning of the program may change. Olin Shivers' 1991 dissertation called this family of optimizations Super-β, and he proposed one analysis technique, called reflow, to support them. Unfortunately, reflow has proven too expensive to implement in practice. Because these higher-order optimizations are not available in functional-language compilers, programmers studiously avoid uses of higher-order values that cannot be optimized (particularly in compiler benchmarks). This paper provides the first practical and effective technique for Super-β (higher-order) inlining and copy propagation, which we call unchanged variable analysis. We show that this technique is practical by implementing it in the context of a real compiler for an ML-family language and showing that the required analyses have costs below 3% of the total compilation time. This technique's effectiveness is shown through a set of benchmarks and example programs, where this analysis exposes additional potential optimization sites. © Copyright 2014 ACM.
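
The optimizations in question can be pictured with a small higher-order example, written here in TypeScript rather than the ML-family language used in the paper:

    // `sumWith` calls through the function value `step`. If the analysis can
    // prove that the environment at this call site matches the environment
    // where the closure was captured (up to free variables), the call can be
    // replaced by the closure's body; this is Super-beta inlining.
    function sumWith(step: (x: number) => number, xs: number[]): number {
      let acc = 0;
      for (const x of xs) {
        acc += step(x); // candidate call site for higher-order inlining
      }
      return acc;
    }

    const double = (x: number) => x * 2;
    // Higher-order copy propagation: `f` is just a copy of `double`, so uses
    // of `f` can be replaced by `double`, exposing the inlining opportunity.
    const f = double;
    console.log(sumWith(f, [1, 2, 3])); // 12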


Herman D.,Mozilla Research | Tomb A.,University of California at Santa Cruz | Flanagan C.,University of California at Santa Cruz
Higher-Order and Symbolic Computation | Year: 2010

Gradual type systems offer a smooth continuum between static and dynamic typing by permitting the free mixture of typed and untyped code. The runtime systems for these languages, and other languages with hybrid type checking, typically enforce function types by dynamically generating function proxies. This approach can result in unbounded growth in the number of proxies, however, which drastically impacts space efficiency and destroys tail recursion. We present a semantics for gradual typing that is based on coercions instead of function proxies, and which combines adjacent coercions at runtime to limit their space consumption. We prove bounds on the space consumed by coercions as well as soundness of the type system, demonstrating that programmers can safely mix typing disciplines without incurring unreasonable overheads. Our approach also detects certain errors earlier than prior work. © Springer Science+Business Media, LLC 2011.
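
A minimal sketch of the idea, with an assumed representation of coercions rather than the paper's formal calculus, shows why adjacent coercions can be merged at runtime while stacked function proxies cannot:

    // Each coercion is a small, bounded description of a runtime check.
    type Coercion =
      | { kind: "id" }
      | { kind: "check"; tag: "number" | "boolean" | "function" }
      | { kind: "fail" };

    // Composing two adjacent coercions yields a single coercion of bounded
    // size, so repeatedly crossing the typed/untyped boundary (for example
    // in a tail-recursive loop) does not accumulate unbounded wrappers.
    function compose(c1: Coercion, c2: Coercion): Coercion {
      if (c1.kind === "id") return c2;
      if (c2.kind === "id") return c1;
      if (c1.kind === "fail" || c2.kind === "fail") return { kind: "fail" };
      // Identical checks collapse into one; conflicting checks must fail.
      return c1.tag === c2.tag ? c1 : { kind: "fail" };
    }

    console.log(compose({ kind: "check", tag: "number" },
                        { kind: "check", tag: "number" })); // one merged check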


Matsakis N.,Mozilla Research | Klock F.S.,Mozilla Research
HILT 2014 - Proceedings of the ACM Conference on High Integrity Language Technology | Year: 2014

Rust is a new programming language for developing reliable and efficient systems. It is designed to support concurrency and parallelism in building applications and libraries that take full advantage of modern hardware. Rust's static type system is safe and expressive and provides strong guarantees about isolation, concurrency, and memory safety. Rust also offers a clear performance model, making it easier to predict and reason about program efficiency. One important way it accomplishes this is by allowing fine-grained control over memory representations, with direct support for stack allocation and contiguous record storage. The language balances such controls with the absolute requirement for safety: Rust's type system and runtime guarantee the absence of data races, buffer overflows, stack overflows, and accesses to uninitialized or deallocated memory. Copyright 2014 ACM.
