Center for Computing Sciences
Bowie, MD, United States

Mardziel P., University of Maryland University College | Mardziel P., Center for Computing Sciences | Hicks M., University of Maryland University College | Srivatsa M., IBM
Journal of Computer Security | Year: 2013

This paper explores the idea of knowledge-based security policies, which are used to decide whether to answer queries over secret data based on an estimation of the querier's (possibly increased) knowledge given the results. Limiting knowledge is the goal of existing information release policies that employ mechanisms such as noising, anonymization, and redaction. Knowledge-based policies are more general: they increase flexibility by not fixing the means to restrict information flow. We enforce a knowledge-based policy by explicitly tracking a model of a querier's belief about secret data, represented as a probability distribution, and denying any query that could increase knowledge above a given threshold. We implement query analysis and belief tracking via abstract interpretation, which allows us to trade off precision and performance through the use of abstraction. We have developed an approach to augment standard abstract domains to include probabilities, and thus define distributions. We focus on developing probabilistic polyhedra in particular, to support numeric programs. While probabilistic abstract interpretation has been considered before, our domain is the first whose design supports sound conditioning, which is required to ensure that estimates of a querier's knowledge are accurate. Experiments with our implementation show that several useful queries can be handled efficiently, particularly compared to exact (i.e., sound) inference involving sampling. We also show that, for our benchmarks, restricting constraints to octagons or intervals, rather than full polyhedra, can dramatically improve performance while incurring little to no loss in precision. © 2013 IOS Press and the authors.
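
To make the enforcement idea concrete, here is a minimal sketch using finite discrete distributions rather than the paper's probabilistic-polyhedra domain (all function names here are illustrative, not from the paper): the monitor conditions a modeled belief on each possible answer and denies the query if any answer would let the querier ascribe more than a threshold probability to some secret value. Quantifying over every possible answer before consulting the real secret reflects the abstract's "could increase knowledge" criterion.

```python
from fractions import Fraction  # exact probabilities for the toy model

def condition(belief, query, answer):
    """Bayesian revision: keep only secrets consistent with the answer."""
    post = {s: p for s, p in belief.items() if query(s) == answer}
    total = sum(post.values())
    return {s: p / total for s, p in post.items()}

def safe_to_answer(belief, query, threshold):
    """Deny the query if some possible answer would let the querier
    ascribe more than `threshold` probability to a single secret value."""
    for answer in {query(s) for s in belief}:
        post = condition(belief, query, answer)
        if max(post.values()) > threshold:
            return False
    return True

# Secret: a birthday, modeled as a day 0..364 under a uniform prior belief.
belief = {d: Fraction(1, 365) for d in range(365)}
in_week = lambda d: 100 <= d < 107   # "is the birthday in this one week?"
exact = lambda d: d == 260           # "is the birthday exactly day 260?"

threshold = Fraction(1, 5)
print(safe_to_answer(belief, in_week, threshold))  # True: worst posterior is 1/7
print(safe_to_answer(belief, exact, threshold))    # False: a "yes" pins the day
```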

Case J., University of Delaware | Moelius III S.E., Center for Computing Sciences
Theory of Computing Systems | Year: 2012

Intuitively, a recursion theorem asserts the existence of self-referential programs. Two well-known recursion theorems are Kleene's Recursion Theorem (krt) and Rogers' Fixpoint Recursion Theorem (fprt). Does one of these two theorems better capture the notion of program self-reference than the other? In the context of the partial computable functions over the natural numbers (PC), fprt is strictly weaker than krt, in that fprt holds in any effective numbering of PC in which krt holds, but not vice versa. It is shown that, in this context, the existence of self-reproducing programs (a.k.a. quines) is assured by krt, but not by fprt. Most would surely agree that a self-reproducing program is self-referential. Thus, this result suggests that krt is better than fprt at capturing the notion of program self-reference in PC. A generalization of krt to arbitrary constructive Scott subdomains is then given. (For fprt, a similar generalization was already known.) Surprisingly, for some such subdomains, the two theorems turn out to be equivalent. A precise characterization is given of those constructive Scott subdomains in which this occurs. For such subdomains, the two theorems capture the notion of program self-reference equally well. © 2011 Springer Science+Business Media, LLC.
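
The abstract's link between krt and self-reproducing programs can be made tangible outside the numbering-theoretic setting: in any practical language one can exhibit a quine directly. Below is the classic Python quine, offered only as an illustration of the phenomenon, not of the paper's constructions.

```python
# Running this program prints its own source text, character for character.
s = 's = %r\nprint(s %% s)'
print(s % s)
```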

Harris D.G., University of Maryland University College | Sullivan F., Center for Computing Sciences
Leibniz International Proceedings in Informatics, LIPIcs | Year: 2015

The all-terminal reliability polynomial of a graph counts its connected subgraphs of various sizes. Algorithms based on sequential importance sampling (SIS) have been proposed to estimate a graph's reliability polynomial. We show upper bounds on the relative error of three sequential importance sampling algorithms. We use these to create a hybrid algorithm, which selects the best SIS algorithm for a particular graph G and particular coefficient of the polynomial. This hybrid algorithm is particularly effective when G has low degree. For graphs of average degree ≤ 11, it is the fastest known algorithm; for graphs of average degree ≤ 45, it is the fastest known polynomial-space algorithm. For example, when a graph has average degree 3, this algorithm estimates to error ε in time O(1.26^n ε^-2). Although the algorithm may take exponential time, in practice it can have good performance even on medium-scale graphs. We provide experimental results that show quite practical performance on graphs with hundreds of vertices and thousands of edges. By contrast, alternative algorithms are either not rigorous or are completely impractical for such large graphs. © David G. Harris and Francis Sullivan.
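
As a point of reference for what is being estimated, the sketch below (my own construction, not one of the paper's SIS algorithms) targets a single coefficient N_k, the number of connected spanning subgraphs with exactly k edges, by plain Monte Carlo with a uniform proposal: sample k-edge subsets and count how many are connected. The SIS algorithms the paper analyzes can be viewed as replacing this uniform proposal with a sequential, edge-by-edge one to reduce variance.

```python
import math
import random

def is_connected(n, edges):
    """Depth-first check that `edges` connect all of vertices 0..n-1."""
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n

def estimate_coefficient(n, edges, k, samples=50_000):
    """Estimate N_k as C(m, k) * Pr[a uniform k-edge subset is connected]."""
    m = len(edges)
    hits = sum(is_connected(n, random.sample(edges, k)) for _ in range(samples))
    return math.comb(m, k) * hits / samples

# Example: the 4-cycle has exactly 4 connected spanning 3-edge subgraphs
# (drop any one edge and a Hamiltonian path remains), so N_3 = 4.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(estimate_coefficient(4, cycle, 3))  # -> 4.0
```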

Nau D.S., University of Maryland University College | Lustrek M., Jozef Stefan Institute | Parker A., University of Maryland University College | Parker A., Center for Computing Sciences | And 2 more authors.
Artificial Intelligence | Year: 2010

In situations where one needs to make a sequence of decisions, it is often believed that looking ahead will help produce better decisions. However, it was shown 30 years ago that there are "pathological" situations in which looking ahead is counterproductive. Two long-standing open questions are (a) what combinations of factors have the biggest influence on whether lookahead pathology occurs, and (b) whether it occurs in real-world decision-making. This paper includes simulation results for several synthetic game-tree models, and experimental results for three well-known board games: two chess endgames, kalah (with some modifications to facilitate experimentation), and the 8-puzzle. The simulations show the interplay between lookahead pathology and several factors that affect it; and the experiments confirm the trends predicted by the simulation models. The experiments also show that lookahead pathology is more common than has been thought: all three games contain situations where it occurs. © 2010 Elsevier B.V. All rights reserved.
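
A toy version of such a simulation (my construction with illustrative parameters, not the paper's exact models): generate random game trees with independent leaf values, search them with a noisy evaluation at increasing depths, and measure how often the searched move matches the truly best move. Pathology is present wherever deeper search agrees less often; whether it appears depends on branching, noise, and the leaf-value distribution, exactly the interplay the paper studies.

```python
import random

BRANCH, HEIGHT, NOISE = 2, 10, 0.3  # illustrative parameters

def make_tree(h):
    """Random game tree; leaves carry independent true values in [0, 1]."""
    if h == 0:
        return random.random()
    return [make_tree(h - 1) for _ in range(BRANCH)]

def true_value(node, maximize):
    """Exact minimax value of a subtree."""
    if not isinstance(node, list):
        return node
    vals = [true_value(c, not maximize) for c in node]
    return max(vals) if maximize else min(vals)

def search(node, depth, maximize):
    """Minimax to `depth`, then a noisy heuristic estimate at the frontier."""
    if not isinstance(node, list) or depth == 0:
        return true_value(node, maximize) + random.gauss(0, NOISE)
    vals = [search(c, depth - 1, not maximize) for c in node]
    return max(vals) if maximize else min(vals)

def agreement(depth, trials=300):
    """How often depth-limited search picks the truly best root move."""
    hits = 0
    for _ in range(trials):
        root = make_tree(HEIGHT)
        best = max(range(BRANCH), key=lambda i: true_value(root[i], False))
        pick = max(range(BRANCH), key=lambda i: search(root[i], depth - 1, False))
        hits += best == pick
    return hits / trials

for d in (1, 2, 4, 6):
    print(f"depth {d}: agrees with the true best move {agreement(d):.0%} of the time")
```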

Moelius III S.E., Center for Computing Sciences | Zilles S., University of Regina
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2010

Iterative learning is a model of language learning from positive data, due to Wiehagen. When compared to a learner in Gold's original model of language learning from positive data, an iterative learner can be thought of as memory-limited. However, an iterative learner can memorize some input elements by coding them into the syntax of its hypotheses. A main concern of this paper is: to what extent are such coding tricks necessary? One means of preventing some such coding tricks is to require that the hypothesis space used be free of redundancy, i.e., that it be 1-1. By extending a result of Lange & Zeugmann, we show that many interesting and non-trivial classes of languages can be iteratively identified in this manner. On the other hand, we show that there exists a class of languages that cannot be iteratively identified using any 1-1 effective numbering as the hypothesis space. We also consider an iterative-like learning model in which the computational component of the learner is modeled as an enumeration operator, as opposed to a partial computable function. In this new model, there are no hypotheses, and, thus, no syntax in which the learner can encode what elements it has or has not yet seen. We show that there exists a class of languages that can be identified under this new model, but that cannot be iteratively identified. On the other hand, we show that there exists a class of languages that cannot be identified under this new model, but that can be iteratively identified using a Friedberg numbering as the hypothesis space. © 2010 Springer-Verlag.
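
A minimal concrete instance of an iterative learner (my own toy encoding, not a construction from the paper): for the class {L_n = {0, 1, ..., n} : n ∈ ℕ}, learned from positive data, the hypothesis itself stores the only memory the learner needs, and the hypothesis space is 1-1 in the paper's redundancy-free sense, since distinct indices name distinct languages.

```python
# Hypothesis n denotes the language L_n = {0, 1, ..., n}.
def iterative_learner(prev_hyp, datum):
    """One step: the new hypothesis depends only on the previous
    hypothesis and the current datum, never on earlier data."""
    return max(prev_hyp, datum)

def run(text):
    hyp = 0  # initial hypothesis L_0
    for x in text:
        hyp = iterative_learner(hyp, x)
    return hyp

# Any presentation of L_5 eventually stabilizes on hypothesis 5.
print(run([0, 3, 1, 5, 2, 5, 4]))  # -> 5
```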
