Jersey City, NJ, United States

Chen H.,Purdue University | Chen H.,Knight Capital Group | Li N.,Purdue University | Gates C.S.,Purdue University | And 2 more authors.
Proceedings of ACM Symposium on Access Control Models and Technologies, SACMAT | Year: 2010

An operating system relies heavily on its access control mechanisms to defend against local and remote attacks. The complexity of modern access control mechanisms and the scale of possible configurations are often overwhelming to system administrators and software developers. Misconfigurations are therefore common, and their security consequences are serious. Given the popularity and uniqueness of Microsoft Windows systems, it is critical to have a tool that comprehensively examines the access control configurations. However, current studies on Windows access control mechanisms are mostly based on known attack patterns. We propose a tool, WACCA, to systematically analyze Windows configurations. Given the attacker's initial abilities and goals, WACCA generates an attack graph based on interaction rules. The tool then automatically generates attack patterns from the attack graph. Each attack pattern represents attacks of the same nature. The attack subgraphs and instances are also generated for each pattern. Compared to existing solutions, WACCA is more comprehensive and does not rely on manually defined attack patterns. It also has a unique feature in that it models software vulnerabilities and can therefore find attacks that rely on exploiting these vulnerabilities. We study two attack cases on a Windows Vista host and discuss the analysis results.
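The abstract does not describe WACCA's internals, but the core idea of growing an attack graph from initial abilities and interaction rules can be sketched as a breadth-first closure over attacker states. The rules below (weak service-binary ACL leading to SYSTEM) are hypothetical examples, not taken from the paper:

```python
from collections import deque

def build_attack_graph(initial_abilities, rules):
    """Breadth-first closure of attacker abilities under interaction rules.

    Each rule is (frozenset_of_preconditions, gained_ability, label).
    Returns edges (state, rule_label, new_state) of the attack graph.
    """
    start = frozenset(initial_abilities)
    seen = {start}
    edges = []
    queue = deque([start])
    while queue:
        state = queue.popleft()
        for pre, gain, label in rules:
            if pre <= state and gain not in state:
                nxt = frozenset(state | {gain})
                edges.append((state, label, nxt))
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return edges

# Hypothetical rule set: a local user who can overwrite a service binary
# gains SYSTEM privileges when the service restarts.
rules = [
    (frozenset({"local_user"}), "write_service_binary",
     "weak ACL on service image"),
    (frozenset({"local_user", "write_service_binary"}), "run_as_system",
     "service restarts with attacker binary"),
]
edges = build_attack_graph({"local_user"}, rules)
```

Paths in `edges` from the initial state to a state containing the attacker's goal ability correspond to attack instances; grouping them by rule sequence yields pattern-like summaries.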

Khandekar R.,Knight Capital Group | Hildrum K.,IBM | Rajan D.,Lawrence Livermore National Laboratory | Wolf J.,IBM
Leibniz International Proceedings in Informatics, LIPIcs | Year: 2012

We consider single-processor preemptive scheduling with job-dependent setup times. In this model, a job-dependent setup time is incurred when a job is started for the first time and each time it is restarted after preemption. This model is a common generalization of preemptive scheduling, and in fact of non-preemptive scheduling as well. The objective is to minimize the sum of arbitrary non-negative, non-decreasing cost functions of the completion times of the jobs; this generalizes the objectives of minimizing weighted flow time, flow time squared, tardiness, or the number of tardy jobs, among many others. Our main result is a randomized polynomial-time O(1)-speed O(1)-approximation algorithm for this problem. Without speedup, no polynomial-time finite multiplicative approximation is possible unless ℙ = ℕℙ. We extend the approach of Bansal et al. (FOCS 2007) of rounding a linear programming relaxation which accounts for costs incurred due to the non-preemptive nature of the schedule. A key new idea used in the rounding is that a point in the intersection polytope of two matroids can be decomposed as a convex combination of incidence vectors of sets that are independent in both matroids. In fact, we use this for the intersection of a partition matroid and a laminar matroid, in which case the decomposition can be found efficiently using network flows. Our approach also gives a randomized polynomial-time offline O(1)-speed O(1)-approximation algorithm for the broadcast scheduling problem with general cost functions. © R. Khandekar and K. Hildrum and D. Rajan and J. Wolf.
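To make the cost model concrete, the sketch below (not from the paper) evaluates a single-machine preemptive schedule: every time a job starts or resumes, its setup time is paid before processing, and each job's cost function is applied to its final completion time:

```python
def evaluate_schedule(segments, setup, cost):
    """Compute the total cost of a single-machine preemptive schedule.

    segments: ordered list of (job, processing_amount) pieces.
    setup[j]: setup time paid every time job j starts or resumes.
    cost[j]:  non-decreasing cost function of job j's completion time.
    """
    t = 0.0
    completion = {}
    prev = None
    for job, amount in segments:
        if job != prev:          # a (re)start incurs the setup time
            t += setup[job]
        t += amount
        completion[job] = t      # last piece determines completion time
        prev = job
    return sum(cost[j](c) for j, c in completion.items())

setup = {"a": 1.0, "b": 2.0}
# Weighted flow time with all release times 0: cost_j(C) = w_j * C.
cost = {"a": lambda c: c, "b": lambda c: 2 * c}
# Run a, preempt for b, then finish a: job a pays its setup time twice.
total = evaluate_schedule([("a", 3), ("b", 4), ("a", 2)], setup, cost)
```

The repeated setup charge on resumption is what makes naive preemption costly and motivates the rounding-based algorithm in the paper.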

Khandekar R.,Knight Capital Group | Schieber B.,IBM | Shachnai H.,Technion - Israel Institute of Technology | Tamir T.,The Interdisciplinary Center
Journal of Scheduling | Year: 2015

Consider the following scheduling problem. We are given a set of jobs, each having a release time, a due date, a processing time, and a demand for machine capacity. The goal is to schedule all jobs non-preemptively, each within its release-time-to-deadline window, on machines that can process multiple jobs simultaneously, subject to machine capacity constraints, with the objective of minimizing the total busy time of the machines. Our problem naturally arises in power-aware scheduling, optical network design, and customer service systems, among others. The problem is APX-hard by a simple reduction from the subset sum problem. A main result of this paper is a 5-approximation algorithm for general instances. While the algorithm is simple, its analysis involves a non-trivial charging scheme which bounds the total busy time in terms of work and span lower bounds on the optimum. This improves and extends the results of Flammini et al. (Theor Comput Sci 411(40–42):3553–3562, 2010). We extend this approximation to the case of moldable jobs, where the algorithm must also choose, for each job, one of several processing-time versus demand configurations. Better bounds and exact algorithms are derived for several special cases, including proper interval graphs, intervals forming a clique, and laminar families of intervals. © 2014, Springer Science+Business Media New York.
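The objective being minimized is easy to state in code: per machine, the busy time is the length of the union of the intervals of the jobs assigned to it. The following sketch (an illustration, not the paper's algorithm) computes it, and the example shows why batching overlapping jobs onto one machine reduces busy time:

```python
def total_busy_time(assignment):
    """Sum, over machines, of the length of the union of job intervals.

    assignment: {machine: [(start, end), ...]} after jobs are placed
    in their windows subject to capacity constraints.
    """
    busy = 0.0
    for intervals in assignment.values():
        cur_s = cur_e = None
        for s, e in sorted(intervals):
            if cur_e is None or s > cur_e:     # gap: flush current run
                if cur_e is not None:
                    busy += cur_e - cur_s
                cur_s, cur_e = s, e
            else:                              # overlap: extend the run
                cur_e = max(cur_e, e)
        if cur_e is not None:
            busy += cur_e - cur_s
    return busy

# Two overlapping jobs batched on one machine vs. spread over two:
packed   = {"m1": [(0, 5), (3, 8)]}            # busy time 8
separate = {"m1": [(0, 5)], "m2": [(3, 8)]}    # busy time 5 + 5 = 10
```

The "work" lower bound mentioned in the abstract is total demand-weighted processing time divided by capacity, and the "span" bound is the length of the union of all job windows; the charging scheme compares the algorithm's busy time against both.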

Khandekar R.,Knight Capital Group | Kortsarz G.,Rutgers University | Mirrokni V.,Google
Algorithmica | Year: 2014

Graph clustering is an important problem with applications to bioinformatics, community discovery in social networks, distributed computing, and more. While most of the research in this area has focused on clustering using disjoint clusters, many real datasets have inherently overlapping clusters. We compare overlapping and non-overlapping clusterings in graphs in the context of minimizing their conductance. It is known that allowing clusters to overlap gives better results in practice. We prove that overlapping clustering may be significantly better than non-overlapping clustering with respect to conductance, even in a theoretical setting. For minimizing the maximum conductance over the clusters, we give examples demonstrating that allowing overlaps can yield significantly better clusterings, namely, ones with a much smaller optimum. In addition, for the min-max variant, the overlapping version admits a simple approximation algorithm, while our algorithm for the non-overlapping version is complex and yields a worse approximation ratio due to the presence of the additional constraint. Somewhat surprisingly, for the problem of minimizing the sum of conductances, we find that allowing overlap does not help. We show how to apply a general technique to transform any overlapping clustering into a non-overlapping one with only a modest increase in the sum of conductances. This uncrossing technique is of independent interest and may find further applications in the future. We consider this work a step toward a rigorous comparison of overlapping and non-overlapping clusterings and hope that it stimulates further research in this area. © 2013 Springer Science+Business Media New York.
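For readers unfamiliar with the objective: under one common definition, the conductance of a vertex set S is the number of edges crossing from S to the rest of the graph, divided by the smaller of the two sides' edge volumes. A minimal sketch (illustrative only, not the paper's algorithm):

```python
def conductance(adj, cluster):
    """cut(S, V\\S) / min(vol(S), vol(V\\S)) for an undirected graph.

    adj: {node: set_of_neighbors}; cluster: the vertex set S.
    """
    S = set(cluster)
    cut = sum(1 for u in S for v in adj[u] if v not in S)
    vol_S = sum(len(adj[u]) for u in S)          # sum of degrees in S
    vol_rest = sum(len(adj[u]) for u in adj) - vol_S
    return cut / min(vol_S, vol_rest)

# A 4-cycle split in half: 2 crossing edges, each side has volume 4.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}
phi = conductance(adj, {0, 1})
```

A clustering's cost is then either the maximum or the sum of the conductances of its clusters, which are exactly the two variants the abstract contrasts.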

Wolf J.,IBM | Balmin A.,IBM | Rajan D.,Lawrence Livermore National Laboratory | Hildrum K.,IBM | And 4 more authors.
VLDB Journal | Year: 2012

We consider MapReduce clusters designed to support multiple concurrent jobs, concentrating on environments in which the number of distinct datasets is modest relative to the number of jobs. In such scenarios, many individual datasets are likely to be scanned concurrently by multiple Map-phase jobs. As has been noticed previously, this scenario provides an opportunity for Map-phase jobs to cooperate, sharing the scans of these datasets and thus reducing their costs. Our paper makes three main contributions over previous work. First, we present a novel and highly general method for sharing scans and thus amortizing their costs. This concept, which we call cyclic piggybacking, has a number of advantages over the more traditional batching scheme described in the literature. Second, we observe that the various subjobs generated in this manner can be assumed, in an optimal schedule, to respect a natural chain precedence ordering. Third, we describe a significant but natural generalization of the recently introduced FLEX scheduler for optimizing schedules within this cyclic piggybacking paradigm, which can be tailored to a variety of cost metrics. Such cost metrics include average response time, average stretch, and any minimax-type metric: 11 separate, standard metrics in all. Moreover, most of this carries over to the more general case of overlapping rather than identical datasets, employing what we call semi-shared scans. In such scenarios, chain precedence is replaced by arbitrary precedence, but we can still handle 8 of the original 11 metrics. The overall approach, including both cyclic piggybacking and the FLEX scheduling generalization, is called CIRCUMFLEX. We describe practical implementation strategies and evaluate the performance of CIRCUMFLEX via a variety of simulation and real benchmark experiments. © 2012 Springer-Verlag.
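The intuition behind cyclic piggybacking can be sketched in a simplified model (my own illustration, not the CIRCUMFLEX implementation): a dataset's blocks are read in a repeating cycle, and a job arriving mid-cycle joins the scan immediately, finishing once the cycle wraps back to where it joined. Every job then completes within one cycle of its arrival, while the cluster pays for only one concurrent scan of the dataset:

```python
import math

def cyclic_piggyback_finish_times(arrivals, num_blocks, block_time):
    """Finish time of each job's scan when all jobs share one cyclic scan.

    A job arriving at time a joins at the next block boundary and is done
    after the scan has visited all num_blocks blocks, i.e. one full cycle
    later. With separate scans, each job would pay a full scan's worth of
    I/O on its own.
    """
    cycle = num_blocks * block_time
    finish = []
    for a in arrivals:
        start = math.ceil(a / block_time) * block_time
        finish.append(start + cycle)
    return finish

# Three jobs piggyback on one cyclic scan of 10 blocks (1 time unit each).
times = cyclic_piggyback_finish_times([0.0, 2.5, 7.0], 10, 1.0)
```

Unlike batching, where a late-arriving job must wait for the next batch to start, a job here never waits more than one block time to begin; the FLEX generalization then orders the resulting subjobs to optimize the chosen cost metric.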
