Leibniz International Proceedings in Informatics, LIPIcs | Year: 2017
Security of cryptographic applications is typically defined by security games. The adversary, within certain resource bounds, cannot win with probability much better than 0 (for unpredictability applications, like one-way functions) or much better than 1/2 (for indistinguishability applications, for instance encryption schemes). In so-called square-friendly applications the winning probability of the adversary, over different values of the application's secret randomness, is not only close to 0 or 1/2 on average, but also concentrated, in the sense that its second central moment is small. The class of square-friendly applications, which contains all unpredictability applications and many indistinguishability applications, is particularly important for key derivation. Barak et al. observed that for square-friendly applications one can beat the "RT-bound", extracting secure keys with significantly smaller entropy loss. In turn, Dodis and Yu showed that in square-friendly applications one can directly use a "weak" key, which has only high entropy, as a secure key. In this paper we give sharp lower bounds on square security assuming security for "weak" keys. We show that any application which is either (a) secure with weak keys or (b) allows for entropy savings for keys derived by universal hashing must be square-friendly. Quantitatively, our lower bounds match the positive results of Dodis and Yu and Barak et al. (TCC'13, CRYPTO'11). Hence, they can be understood as a general characterization of square-friendly applications. While the positive results on square-friendly applications were derived by one clever application of the Cauchy-Schwarz Inequality, for tight lower bounds we need more machinery. In our approach we use convex optimization techniques and some theory of circulant matrices. © Maciej Skorski.
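The second-moment condition can be made concrete with a toy computation. The following sketch is purely illustrative (the numbers and the function name are ours, not from the paper): two applications with the same average advantage, of which only one is square-friendly.

```python
# Toy sketch of "square-friendly" security.  For an indistinguishability
# application, the adversary's advantage on key r is
#   adv(r) = Pr[win | key = r] - 1/2.
# Plain security bounds the first moment E[adv(R)]; square security
# additionally bounds the second moment E[adv(R)^2].

def moments(win_probs):
    """Average advantage and average squared advantage over uniform keys."""
    advs = [p - 0.5 for p in win_probs]
    n = len(advs)
    first = sum(advs) / n
    second = sum(a * a for a in advs) / n
    return first, second

# A "concentrated" application: every key gives advantage close to 0.
conc = [0.51, 0.49, 0.50, 0.52]
# A "spread" application: similar average advantage, but a few bad keys.
spread = [0.99, 0.01, 0.51, 0.57]

print(moments(conc))    # small first AND second moment
print(moments(spread))  # small first moment, large second moment
```

Both lists have a tiny average advantage, but the second one concentrates all the adversary's success on a few keys, which is exactly what the second central moment detects.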
Agency: European Commission | Branch: FP7 | Program: CSA-SA | Phase: SiS-2010-1.0.1 | Award Amount: 5.14M | Year: 2011
The PACITA Mobilisation and Mutual Learning Action Plan will build capacity and enhance the institutional foundation for knowledge-based policy-making on issues involving science, technology and innovation, mainly based upon the practices of Parliamentary Technology Assessment (PTA). PTA supports the processes of democratic policy-making on issues involving science, technology and innovation by providing comprehensive insight into knowledge on opportunities and consequences, by facilitating democratic processes of debate and clarification, and by formulating policy options. PACITA will a) document these practices, b) describe schemes for using them nationally and at the European level, c) establish a set of training schemes for users and practitioners, d) establish a Web Portal to European TA expertise, e) create a debate on such practices in countries that do not have them formally established, f) involve experts, societal actors and politicians in European debates on these practices, g) provide three large example projects on expert-based praxis, stakeholder involvement and citizen consultation, h) support this with a strong dissemination strategy towards policy-makers, the scientific community, the media and countries that can benefit from the mobilisation and mutual learning created by the Action Plan, and i) have an independent evaluator monitor the progress and results. The consortium has 15 partners from: national/regional parliamentary offices for science and technology; science academies; research institutions; universities; and civil society organisations. The Coordinator is a PTA institution, highly experienced in project management, which has taken part in more than 10 EU projects including an ERA-Net, and has coordinated two FP projects and a global citizen consultation project involving 55 partners in 38 countries.
Agency: European Commission | Branch: FP7 | Program: CP-FP | Phase: SSH-2007-5.1-01;SSH-2007-7.4-01 | Award Amount: 909.57K | Year: 2008
The CIVISTI project will identify new emerging issues for European science and technology by uncovering European citizens' visions of the future and transforming these into long-term science, technology and innovation issues relevant to European S&T policies and to the development of FP8. The CIVISTI project will do this by a) consulting national citizen panels through an informed deliberation process, focusing on the long-term visions, needs and concerns of the citizens; b) developing an analytical model for the transformation of the visions into relevant issues for future science and technology; c) using the analytical model, through stakeholder and expert participation processes, to analyse the citizen visions and transform them into possible priorities for research programmes; and d) validating the priorities through a second round of citizen consultation. The project will develop a novel citizen participation process with the aim of making cost-effective citizen participation possible in foresight processes. CIVISTI will include new European actors in the foresight processes in order to expand the experience and capacity of foresight among the member states, institutions and researchers.
Agency: National Science Foundation | Branch: | Program: STTR | Phase: Phase II | Award Amount: 500.00K | Year: 2010
This Small Business Technology Transfer (STTR) Phase II project is a study to extend the high-speed addressing work conducted under Phase I from monochrome Plasma-spheres to color Plasma-spheres. Plasma-spheres are hollow transparent shells that encapsulate a selected pressurized gas. When a voltage is applied across the shell, the gas ionizes and glows. Plasma-spheres are applied to flexible, electrically addressable arrays to form Plasma-sphere arrays for use as large-area plasma displays. Plasma-sphere arrays, like standard plasma displays, require secondary electron-emitting materials to increase addressing speeds. Under Phase II, the team will continue to investigate both thin-film and thick-film techniques for applying these materials to color Plasma-spheres. The proposed research presents a novel approach to producing video-speed, large-area plasma displays. The Plasma-sphere array differs from other display technologies in that it allows for low-cost displays that are flexible, ultra-large, and capable of full color and full-motion video. The broader impact/commercial potential of this project is a breakthrough display technology. It moves away from the traditional semiconductor fabrication processes as practiced by many display manufacturers in Asia and replaces them with low-cost plastic, glass, and printing processes practiced and well understood by US-based companies. The successful development of high-speed addressing will help move this product toward commercialization in the large and growing market of dynamic signage. Commercialization of this technology will lead to job creation and commercial opportunities in the United States. Furthermore, Plasma-sphere arrays are an order of magnitude lower in production cost when compared with ultra-large LED displays. Lower material and manufacturing costs provide a social benefit in that fewer natural resources are required, with a less taxing effect on the environment.
The Plasma-sphere array can be made large like an LED display, while retaining many of the exceptional features of a conventional, rigid plasma display, including good viewing angle, high brightness, excellent contrast, and full-motion video.
Fried M.W., University of North Carolina at Chapel Hill | Hadziyannis S.J., Henry Dunant Hospital | Shiffman M.L., Health News | Messinger D., IST | Zeuzem S., Goethe University Frankfurt
Journal of Hepatology | Year: 2011
Background & Aims: The probability of response to peginterferon and ribavirin is associated with numerous host and virological factors. Attainment of a rapid virological response (RVR), defined as undetectable HCV RNA at week 4 during treatment with peginterferon and ribavirin, is highly predictive of sustained virological response (SVR). The aim of the present study was to determine the relative importance of the kinetics of antiviral response compared to baseline host and virological factors for predicting SVR. Methods: A retrospective analysis of 1383 patients, encompassing genotypes 1-4, treated with peginterferon alfa-2a and ribavirin, was performed. Baseline characteristics were compared across HCV genotypes and pretreatment factors associated with RVR were identified. The relative significance of RVR compared to other baseline factors for predicting SVR was analyzed by multiple logistic regression analysis. Results: RVR was achieved by 16% of patients with genotype 1 and by 71% and 60% of those with genotypes 2 and 3, respectively. Among patients who achieved RVR, the rate of SVR was high across all genotypes and ranged from 88% to 100% (genotypes 1-4). Baseline factors predictive of RVR included genotype, younger age, lower initial viral load, higher ALT ratio, and absence of advanced fibrosis. Notably, the presence of RVR generated the highest odds ratio (5.47, 95% confidence interval 3.97-7.52) for predicting SVR in multiple logistic regression analysis of these factors. Conclusions: Attainment of RVR varies by genotype and is associated with several baseline factors. Patients who achieve RVR have the highest rates of SVR, regardless of genotype. These findings have important implications for predicting and managing response-guided combination antiviral therapies. © 2010 European Association for the Study of the Liver. Published by Elsevier B.V. All rights reserved.
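The odds-ratio computation behind such a finding can be sketched in a few lines. The counts below are hypothetical, not from the study, and this is the unadjusted 2x2-table version; the paper's odds ratio of 5.47 comes from a multiple logistic regression that adjusts for the other baseline factors.

```python
# Hedged sketch: unadjusted odds ratio for RVR predicting SVR from a 2x2
# contingency table, with a 95% CI via the log-OR normal approximation.
import math

def odds_ratio(svr_with_rvr, no_svr_with_rvr, svr_without_rvr, no_svr_without_rvr):
    """Return (odds ratio, (CI low, CI high)) for a 2x2 table."""
    a, b, c, d = svr_with_rvr, no_svr_with_rvr, svr_without_rvr, no_svr_without_rvr
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Illustrative counts only (patients with/without RVR, attaining SVR or not):
print(odds_ratio(180, 20, 400, 783))
```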
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2014
We study two-player concurrent games on finite-state graphs played for an infinite number of rounds, where in each round, the two players (player 1 and player 2) choose their moves independently and simultaneously; the current state and the two moves determine the successor state. The objectives are ω-regular winning conditions specified as parity objectives. We consider the qualitative analysis problems: the computation of the almost-sure and limit-sure winning set of states, where player 1 can ensure to win with probability 1 and with probability arbitrarily close to 1, respectively. In general, the almost-sure and limit-sure winning strategies require both infinite memory and infinite precision (to describe probabilities). While the qualitative analysis problem for concurrent parity games with infinite-memory, infinite-precision randomized strategies was studied before, we study the bounded-rationality problem for qualitative analysis of concurrent parity games, where the strategy set for player 1 is restricted to bounded-resource strategies. In terms of precision, strategies can be deterministic, uniform, finite-precision, or infinite-precision; and in terms of memory, strategies can be memoryless, finite-memory, or infinite-memory. We present a precise and complete characterization of the qualitative winning sets for all combinations of classes of strategies. In particular, we show that uniform memoryless strategies are as powerful as finite-precision infinite-memory strategies, and infinite-precision memoryless strategies are as powerful as infinite-precision finite-memory strategies. We show that the winning sets can be computed in O(n^(2d+3)) time, where n is the size of the game structure and 2d is the number of priorities (or colors), and our algorithms are symbolic. The membership problem of whether a state belongs to a winning set can be decided in NP ∩ coNP.
Our symbolic algorithms are based on a characterization of the winning sets as μ-calculus formulas; however, our μ-calculus formulas are crucially different from the ones for concurrent parity games (without bounded rationality), and our memoryless witness strategy constructions are significantly different from the infinite-memory witness strategy constructions for concurrent parity games. © 2014 Springer-Verlag.
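The gap between almost-sure and limit-sure winning that motivates the precision hierarchy can be seen in the classic "hide-or-run" concurrent game. The sketch below is our own illustration of that textbook example, not a construction from the paper.

```python
# Illustrative sketch: limit-sure vs. almost-sure winning in a concurrent game.
# Each round, player 1 runs (prob. eps) or hides; player 2 simultaneously
# throws or waits:
#   run & wait   -> player 1 wins        run & throw -> player 1 loses
#   hide & throw -> the throw is wasted and player 1 wins afterwards
#   hide & wait  -> the round repeats
def win_prob(eps, q):
    """Player 1's winning probability when she runs w.p. eps per round and
    player 2 throws w.p. q per round (closed form via a geometric series)."""
    end = eps + q - eps * q              # probability the game ends this round
    win = eps * (1 - q) + (1 - eps) * q  # ...ending with a player-1 win
    return win / end

def value_against_best_response(eps, grid=1000):
    """Player 2 minimizes over stationary throw probabilities q."""
    return min(win_prob(eps, i / grid) for i in range(grid + 1))

# The value is 1 - eps: as eps -> 0, player 1 wins with probability
# arbitrarily close to 1 (limit-sure), but no fixed eps > 0 wins with
# probability exactly 1 (not almost-sure).
print(value_against_best_response(0.1))
```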
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2012
Unsatisfiability proofs find many applications in verification. Today, many SAT solvers are capable of producing resolution proofs of unsatisfiability. For efficiency, smaller proofs are preferred over larger ones. Solvers apply proof reduction methods to remove redundant parts of the proofs both during and after proof generation. One method of reducing resolution proofs is redundant resolution reduction, i.e., removing repeated pivots along the paths of resolution proofs (also known as pivot recycling). The known single-pass algorithm only tries to remove redundancies in the parts of the proof that are trees. In this paper, we present three modifications that improve the algorithm so that redundancies can also be found in the parts of the proofs that are DAGs. The first modified algorithm covers a greater number of redundancies than the known algorithm without incurring any additional cost. The second modified algorithm covers an even greater number of redundancies, but it may have longer run times. Our third modified algorithm is parametrized and can trade off between run times and the coverage of redundancies. We have implemented our algorithms in OpenSMT and applied them to unsatisfiability proofs of 198 examples from the plain MUS track of the SAT11 competition. Compared to the original algorithm, the first and second algorithms additionally remove 0.89% and 10.57% of clauses, respectively. For a certain value of the parameter, the third algorithm removes almost as many clauses as the second algorithm but is significantly faster. © 2012 Springer-Verlag.
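The redundancy that pivot recycling targets is a pivot variable resolved twice along one root-to-leaf path. A minimal sketch of detecting such repeats (class and function names are ours, not OpenSMT's):

```python
# Hedged sketch: finding repeated pivots along paths of a resolution proof.
class Node:
    def __init__(self, clause, pivot=None, left=None, right=None):
        self.clause = frozenset(clause)  # literals, e.g. 'x' and '-x'
        self.pivot = pivot               # variable resolved on (None for leaves)
        self.left, self.right = left, right

def repeated_pivot_paths(node, seen=frozenset()):
    """Yield (node, pivot) pairs whose pivot already occurred on the
    root-to-node path -- candidates for the reduction to remove."""
    if node is None or node.pivot is None:
        return
    if node.pivot in seen:
        yield node, node.pivot
    seen = seen | {node.pivot}
    yield from repeated_pivot_paths(node.left, seen)
    yield from repeated_pivot_paths(node.right, seen)

# Example: pivot 'x' is resolved at the root and again deeper on the same path.
a = Node({'x', 'y'})
b = Node({'-x'})
c = Node({'y'}, 'x', a, b)             # resolve a, b on x -> {y}
d = Node({'x', '-y'})
e = Node({'x'}, 'y', c, d)             # resolve on y -> {x}
g = Node(set(), 'x', e, Node({'-x'}))  # resolve on x again -> empty clause
print([p for _, p in repeated_pivot_paths(g)])  # → ['x']
```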
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2012
The (decisional) learning with errors problem (LWE) asks to distinguish "noisy" inner products of a secret vector with random vectors from uniform. The learning parities with noise problem (LPN) is the special case where the elements of the vectors are bits. In recent years, the LWE and LPN problems have found many applications in cryptography. In this paper we introduce a (seemingly) much stronger adaptive assumption, called "subspace LWE" (SLWE), where the adversary can learn the inner product of the secret and random vectors after they were projected into an adaptively and adversarially chosen subspace. We prove that, surprisingly, the SLWE problem mapping into subspaces of dimension d is almost as hard as LWE using secrets of length d (the other direction is trivial). This result immediately implies that several existing cryptosystems whose security is based on the hardness of the LWE/LPN problems are provably secure in a much stronger sense than anticipated. As an illustrative example we show that the standard way of using LPN for symmetric CPA-secure encryption is even secure against a very powerful class of related-key attacks. © 2012 Springer-Verlag.
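To fix ideas, here is a toy rendering of LPN samples and of a subspace query in the spirit of SLWE. This is a simplified sketch with our own parameter names, not the paper's formal game: the adversary supplies a projection matrix, and the sample is computed on the projected secret.

```python
# Toy sketch of LPN samples over GF(2) and a "subspace" query.
import random

def lpn_sample(secret, tau=0.1):
    """One LPN sample: (a, <a, s> + e mod 2) with Bernoulli(tau) noise e."""
    n = len(secret)
    a = [random.randrange(2) for _ in range(n)]
    noise = 1 if random.random() < tau else 0
    b = (sum(ai * si for ai, si in zip(a, secret)) + noise) % 2
    return a, b

def project(matrix, vec):
    """Multiply a 0/1 matrix (list of rows) by a vector over GF(2)."""
    return [sum(m * v for m, v in zip(row, vec)) % 2 for row in matrix]

def slwe_style_sample(secret, proj, tau=0.1):
    """Subspace query: an LPN sample on the adversarially projected secret."""
    return lpn_sample(project(proj, secret), tau)
```

The hardness result says, roughly, that samples on a d-dimensional projection of a long secret are almost as hard to distinguish as ordinary LPN samples with a length-d secret.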
Computer Graphics Forum | Year: 2016
This paper generalizes the well-known Diffusion Curve Images (DCI), which are composed of a set of Bézier curves with colors specified on either side. These colors are diffused as Laplace functions over the image domain, resulting in smooth color gradients interrupted by the Bézier curves. Our new formulation allows for more color control away from the boundary, providing expressive power similar to that of recent Bilaplace image models without introducing the associated issues and computational costs. The new model is based on a special Laplace function blending and a new edge blur formulation. We demonstrate that, given some user-defined boundary curves over an input raster image, fitting colors and edge blur from the image to the new model, and subsequently editing and animating it, is just as convenient as with DCIs. Numerous examples and comparisons to DCIs are presented. © 2016 The Author(s) Computer Graphics Forum © 2016 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.
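The diffusion step underlying DCIs can be illustrated in miniature. Below is a much simplified, one-dimensional stand-in (our own sketch, not the paper's solver): colors are fixed on two boundary "curves" and the interior is filled in as the harmonic (Laplace) solution.

```python
# Hedged sketch: Dirichlet boundary colors diffused via Jacobi iteration
# for the 1D Laplace equation (the 2D image-domain version is analogous).
def diffuse(left_color, right_color, n=9, iters=5000):
    """Return n samples: boundary colors at the ends, Laplace fill between."""
    u = [0.0] * n
    u[0], u[-1] = left_color, right_color
    for _ in range(iters):
        # Jacobi update: each interior sample becomes the mean of its neighbors.
        u = [u[0]] + [(u[i - 1] + u[i + 1]) / 2 for i in range(1, n - 1)] + [u[-1]]
    return u

# In 1D the solution is simply the linear ramp between the boundary colors:
print(diffuse(0.0, 1.0))
```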
Gridchyn I., IST
Proceedings of the IEEE International Conference on Computer Vision | Year: 2013
The problem of minimizing the Potts energy function frequently occurs in computer vision applications. One way to tackle this NP-hard problem was proposed by Kovtun [19, 20]. It identifies a part of an optimal solution by running k maxflow computations, where k is the number of labels. The number of 'labeled' pixels can be significant in some applications, e.g. 50-93% in our tests for stereo. We show how to reduce the runtime to O(log k) maxflow computations (or one parametric maxflow computation). Furthermore, the output of our algorithm makes it possible to speed up the subsequent alpha expansion for the unlabeled part, or it can be used as-is for time-critical applications. To derive our technique, we generalize the algorithm of Felzenszwalb et al. for tree metrics. We also show a connection to k-submodular functions from combinatorial optimization, and discuss k-submodular relaxations for general energy functions. © 2013 IEEE.
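For readers unfamiliar with the objective, the Potts energy itself is simple to state: unary data costs plus a constant penalty for every pair of neighboring pixels taking different labels. A generic sketch (our own illustration, not the paper's code):

```python
# Hedged sketch: evaluating the Potts energy of a labeling.
def potts_energy(labels, unary, edges, lam=1.0):
    """labels[p] in {0..k-1}; unary[p][l] is the data cost of giving pixel p
    label l; edges lists neighboring pixel pairs; lam is the Potts penalty."""
    data = sum(unary[p][labels[p]] for p in range(len(labels)))
    smooth = sum(lam for p, q in edges if labels[p] != labels[q])
    return data + smooth

# A 1x3 "image" with k = 2 labels and a chain neighborhood:
unary = [[0.0, 2.0], [1.0, 1.0], [2.0, 0.0]]
edges = [(0, 1), (1, 2)]
print(potts_energy([0, 0, 1], unary, edges))  # → 2.0
```

Minimizing this energy over all label assignments is the NP-hard problem that the k (or O(log k)) maxflow computations partially solve.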