Manno, Switzerland

Rafiey A.,IDSIA
Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms | Year: 2011

The Dichotomy Conjecture for Constraint Satisfaction Problems has been verified for conservative problems (or, equivalently, for list homomorphism problems) by Andrei Bulatov. An earlier case of this dichotomy, for list homomorphisms to undirected graphs, came with an elegant structural distinction between the tractable and intractable cases. Such a structural characterization is absent in Bulatov's classification, and Bulatov asked whether one can be found. We provide an answer in the case of digraphs. In the process we give forbidden-structure characterizations of the existence of certain polymorphisms relevant to Bulatov's dichotomy classification. The key concept we introduce is that of a digraph asteroidal triple (DAT). The dichotomy then takes the following form. If a digraph H has a DAT, then the list homomorphism problem for H is NP-complete; and a DAT-free digraph H has a polynomial-time solvable list homomorphism problem. DAT-free digraphs can be recognized in polynomial time. It follows from our results that the list homomorphism problem for a DAT-free digraph H can be solved by a local consistency algorithm (of width (2,3)).
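As a concrete illustration of the consistency machinery the abstract refers to, here is a minimal sketch of arc consistency, the width-1 pruning step that underlies higher-width local consistency algorithms for list homomorphism: a candidate image for a vertex of the input digraph G is deleted whenever it cannot be extended along some arc of G into the list of the neighbouring vertex. Names and the graph encoding are illustrative, not from the paper.

```python
def arc_consistency(G_arcs, H_arcs, lists):
    """G_arcs, H_arcs: sets of (tail, head) pairs; lists: dict v -> set of H-vertices."""
    changed = True
    while changed:
        changed = False
        for (v, w) in G_arcs:
            # u in lists[v] must have an out-neighbour of u inside lists[w]
            ok_v = {u for u in lists[v]
                    if any((u, x) in H_arcs for x in lists[w])}
            # x in lists[w] must have an in-neighbour of x inside lists[v]
            ok_w = {x for x in lists[w]
                    if any((u, x) in H_arcs for u in lists[v])}
            if ok_v != lists[v] or ok_w != lists[w]:
                lists[v], lists[w] = ok_v, ok_w
                changed = True
    return lists  # an empty list proves that no list homomorphism exists

# Example: G is a single arc a -> b, H is a directed edge 0 -> 1.
lists = arc_consistency({("a", "b")}, {(0, 1)}, {"a": {0, 1}, "b": {0, 1}})
```

Arc consistency alone does not decide the problem for every DAT-free target, but it is the basic building block of the width-(2,3) algorithm mentioned above.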

De Cooman G.,Ghent University | Miranda E.,University of Oviedo | Zaffalon M.,IDSIA
Artificial Intelligence | Year: 2011

There is no unique extension of the standard notion of probabilistic independence to the case where probabilities are indeterminate or imprecisely specified. Epistemic independence is an extension that formalises the intuitive idea of mutual irrelevance between different sources of information. This gives epistemic independence very wide scope as well as appeal: this interpretation of independence is often taken as natural also in precise-probabilistic contexts. Nevertheless, epistemic independence has received little attention so far. This paper develops the foundations of this notion for variables assuming values in finite spaces. We define (epistemically) independent products of marginals (or possibly conditionals) and show that there always is a unique least-committal such independent product, which we call the independent natural extension. We supply an explicit formula for it, and study some of its properties, such as associativity, marginalisation and external additivity, which are basic tools to work with the independent natural extension. Additionally, we consider a number of ways in which the standard factorisation formula for independence can be generalised to an imprecise-probabilistic context. We show, under some mild conditions, that when the focus is on least-committal models, using the independent natural extension is equivalent to imposing a so-called strong factorisation property. This is an important outcome for applications as it gives a simple tool to make sure that inferences are consistent with epistemic independence judgements. We discuss the potential of our results for applications in Artificial Intelligence by recalling recent work by some of us, where the independent natural extension was applied to graphical models. It has allowed, for the first time, the development of an exact linear-time algorithm for the imprecise probability updating of credal trees. © 2011 Elsevier B.V. All rights reserved.
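To make the factorisation discussion concrete, the following toy sketch computes the lower expectation of a joint gamble under the *strong product* of two finitely-generated credal sets, i.e. the minimum expectation over all product measures built from extreme points of the marginals. Under the paper's mild conditions, focusing on least-committal models makes the independent natural extension equivalent to imposing strong factorisation; the encoding and names here are illustrative, not the paper's notation.

```python
from itertools import product

def lower_expectation(ext1, ext2, f, xs, ys):
    """ext1/ext2: extreme points of each marginal credal set,
    given as dicts mapping outcomes to probabilities."""
    return min(
        sum(p1[x] * p2[y] * f(x, y) for x in xs for y in ys)
        for p1, p2 in product(ext1, ext2)
    )

# Two binary variables, each with a credal set spanned by two extreme
# points; the gamble f rewards agreement of the two variables.
ext1 = [{0: 0.3, 1: 0.7}, {0: 0.6, 1: 0.4}]
ext2 = [{0: 0.5, 1: 0.5}, {0: 0.2, 1: 0.8}]
f = lambda x, y: 1.0 if x == y else 0.0
lo = lower_expectation(ext1, ext2, f, (0, 1), (0, 1))
```

Because expectations are linear in each marginal, the minimum over the whole credal sets is attained at a pair of extreme points, which is what makes this enumeration sufficient.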

Zaffalon M.,IDSIA | Corani G.,IDSIA | Mauá D.,IDSIA
International Journal of Approximate Reasoning | Year: 2012

Predictions made by imprecise-probability models are often indeterminate (that is, set-valued). Measuring the quality of an indeterminate prediction by a single number is important to fairly compare different models, but a principled approach to this problem is currently missing. In this paper we derive, from a set of assumptions, a metric to evaluate the predictions of credal classifiers. These are supervised learning models that issue set-valued predictions. The metric turns out to be made of an objective component, and another that is related to the decision-maker's degree of risk aversion to the variability of predictions. We discuss when the measure can be rendered independent of such a degree, and provide insights as to how the comparison of classifiers based on the new measure changes with the number of predictions to be made. Finally, we make extensive empirical tests of credal, as well as precise, classifiers by using the new metric. This shows the practical usefulness of the metric, while yielding a first insightful and extensive comparison of credal classifiers. © 2012 Elsevier Inc. All rights reserved.
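A sketch of the kind of metric described here: discounted accuracy credits 1/|S| when the prediction set S contains the true class, and a concave utility is applied on top to encode risk aversion to the variability of set-valued (random-guess-like) predictions. The quadratic below follows the common "u65" convention (u(0)=0, u(1/2)=0.65, u(1)=1); treat the coefficients and names as illustrative rather than as the paper's exact formulation.

```python
def u65(x):
    """Quadratic utility with u65(0)=0, u65(0.5)=0.65, u65(1)=1."""
    return -0.6 * x * x + 1.6 * x

def utility_discounted_accuracy(predictions, truths, u=u65):
    """predictions: list of sets of classes; truths: list of true classes."""
    scores = [u(1.0 / len(S)) if y in S else u(0.0)
              for S, y in zip(predictions, truths)]
    return sum(scores) / len(scores)

# One determinate hit, one indeterminate prediction containing the
# truth, one miss.
score = utility_discounted_accuracy([{"a"}, {"a", "b"}, {"c"}],
                                    ["a", "a", "b"])
```

With u(x) = x this reduces to plain discounted accuracy; the concave term rewards a classifier that trades a little expected accuracy for lower variability, which is the decision-theoretic component discussed above.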

Ambühl C.,University of Liverpool | Mastrolilli M.,IDSIA | Svensson O.,KTH Royal Institute of Technology
SIAM Journal on Computing | Year: 2011

We consider the Minimum Linear Arrangement problem and the (Uniform) Sparsest Cut problem. So far, these two notorious NP-hard graph problems have resisted all attempts to prove inapproximability results. We show that they have no polynomial time approximation scheme, unless NP-complete problems can be solved in randomized subexponential time. Furthermore, we show that the same techniques can be used for the Maximum Edge Biclique problem, for which we obtain a hardness factor similar to previous results but under a more standard assumption. Copyright © by SIAM.

Chalermsook P.,IDSIA | Laekhanukit B.,McGill University | Nanongkai D.,University of Vienna
Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms | Year: 2013

Graph product is a fundamental tool with rich applications in both graph theory and theoretical computer science. It is usually studied in the form f(G * H) where G and H are graphs, * is a graph product and f is a graph property. For example, if f is the independence number and * is the disjunctive product, then the product is known to be multiplicative: f(G * H) = f(G)f(H). In this paper, we study graph products in the following non-standard form: f((G⊕H)*J) where G, H and J are graphs, ⊕ and * are two different graph products and f is a graph property. We show that if f is the induced and semi-induced matching number, then for some products ⊕ and *, it is subadditive in the sense that f((G⊕H) * J) ≤ f(G * J) + f(H * J). Moreover, when f is the poset dimension number, it is almost subadditive. As applications of this result (we only need J = K2 here), we obtain tight hardness of approximation for various problems in discrete mathematics and computer science: bipartite induced and semi-induced matching (a.k.a. maximum expanding sequences), poset dimension, maximum feasible subsystem with 0/1 coefficients, unit-demand min-buying and single-minded pricing, donation center location, boxicity, cubicity, threshold dimension and independent packing. Copyright © SIAM.
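The multiplicativity mentioned at the start of the abstract can be checked directly on small instances. The sketch below builds the disjunctive product ((g,h) adjacent to (g',h') iff g is adjacent to g' in G or h is adjacent to h' in H) and verifies by brute force that the independence number multiplies; the graph encodings are illustrative.

```python
from itertools import combinations, product

def independence_number(vertices, edges):
    """Brute-force maximum independent set size (fine for tiny graphs)."""
    adj = {frozenset(e) for e in edges}
    for k in range(len(vertices), 0, -1):
        for S in combinations(vertices, k):
            if all(frozenset((u, v)) not in adj
                   for u, v in combinations(S, 2)):
                return k
    return 0

def disjunctive_product(V1, E1, V2, E2):
    """(g,h) ~ (g2,h2) iff g ~ g2 in the first graph or h ~ h2 in the second."""
    adj1 = {frozenset(e) for e in E1}
    adj2 = {frozenset(e) for e in E2}
    V = list(product(V1, V2))
    E = [(a, b) for a, b in combinations(V, 2)
         if frozenset((a[0], b[0])) in adj1 or frozenset((a[1], b[1])) in adj2]
    return V, E

# Path on 3 vertices (independence number 2) and a single edge (number 1).
V1, E1 = [0, 1, 2], [(0, 1), (1, 2)]
V2, E2 = ["x", "y"], [("x", "y")]
Vp, Ep = disjunctive_product(V1, E1, V2, E2)
alpha_product = independence_number(Vp, Ep)
```

The paper's contribution concerns the harder, non-standard form f((G⊕H)*J), where only subadditivity (rather than an exact product formula) is available.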

Miranda E.,University of Oviedo | Zaffalon M.,IDSIA | De Cooman G.,Ghent University
International Journal of Approximate Reasoning | Year: 2012

At the foundations of probability theory lies a question that has been open since de Finetti framed it in 1930: whether or not an uncertainty model should be required to be conglomerable. Conglomerability is related to accepting infinitely many conditional bets. Walley is one of the authors who have argued in favor of conglomerability, while de Finetti rejected the idea. In this paper we study the extension of the conglomerability condition to two types of uncertainty models that are more general than the ones envisaged by de Finetti: sets of desirable gambles and coherent lower previsions. We focus in particular on the weakest (i.e., the least-committal) of those extensions, which we call the conglomerable natural extension. The weakest extension that does not take conglomerability into account is simply called the natural extension. We show that taking the natural extension of assessments after imposing conglomerability - the procedure adopted in Walley's theory - does not yield, in general, the conglomerable natural extension (but it does so in the case of the marginal extension). Iterating this process of imposing conglomerability and taking the natural extension produces a sequence of models that approach the conglomerable natural extension, although it is not known, at this point, whether this sequence converges to it. We give sufficient conditions for this to happen in some special cases, and study the differences between working with coherent sets of desirable gambles and coherent lower previsions. Our results indicate that it is necessary to rethink the foundations of Walley's theory of coherent lower previsions for infinite partitions of conditioning events. © 2012 Elsevier Inc. All rights reserved.
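For reference, the conglomerative principle at issue can be stated as follows for a conditional prevision P and a partition of the possibility space: non-negativity of the conditional previsions on every element of the partition should imply non-negativity of the unconditional prevision. This is a common formulation, not necessarily the paper's exact notation; the paper studies its extension to sets of desirable gambles and coherent lower previsions.

```latex
% Conglomerability of P with respect to a partition \mathcal{B}:
% conditional non-negativity on every element of the partition
% must imply unconditional non-negativity.
\[
  \bigl(\forall B \in \mathcal{B}:\ P(f \mid B) \ge 0\bigr)
  \;\Longrightarrow\; P(f) \ge 0 .
\]
```

For finite partitions this follows from coherence alone; the disagreement between de Finetti and Walley concerns infinite partitions, which is exactly the setting of the results above.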

Lockett A.J.,IDSIA
2013 IEEE Congress on Evolutionary Computation, CEC 2013 | Year: 2013

The performance of evolutionary algorithms has been studied extensively, but it has been difficult to answer many basic theoretical questions using the existing theoretical frameworks and approaches. In this paper, the performance of evolutionary algorithms is studied from a measure-theoretic point of view, and a framework is offered that can address some difficult theoretical questions in an abstract and general setting. It is proven that the performance of continuous optimizers is in general nonlinear and continuous for finitely determined performance criteria. Since most common optimizers are continuous, it follows that in general there is substantial reason to expect that mixtures of optimization algorithms can outperform pure algorithms on many if not most problems. The methodology demonstrated in this paper rigorously connects performance analysis of evolutionary algorithms and other optimization methods to functional analysis, which is expected to enable new and important theoretical results by leveraging prior work in these fields. © 2013 IEEE.

Cuccu G.,IDSIA | Gomez F.,IDSIA
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2011

The idea of evolving novel rather than fit solutions has recently been offered as a way to automatically discover the kind of complex solutions that exhibit truly intelligent behavior. So far, novelty search has only been studied in the context of problems where the number of possible "different" solutions has been limited. In this paper, we show, using a task with a much larger solution space, that selecting for novelty alone does not offer an advantage over fitness-based selection. In addition, we examine how the idea of novelty search can be used to sustain diversity and improve the performance of standard, fitness-based search. © 2011 Springer-Verlag.
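The standard novelty score used in novelty search is the mean distance from a candidate's behaviour descriptor to its k nearest neighbours in the current population plus an archive of past behaviours. A minimal sketch follows; the function and parameter names are illustrative, not taken from the paper.

```python
import math

def novelty(behaviour, others, k=3):
    """behaviour: tuple of floats; others: descriptors of the
    population and archive. Mean distance to the k nearest."""
    dists = sorted(math.dist(behaviour, o) for o in others)
    nearest = dists[:k]
    return sum(nearest) / len(nearest)

archive = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0)]
n_close = novelty((0.1, 0.1), archive, k=2)   # near known behaviours
n_far = novelty((9.0, 9.0), archive, k=2)     # far from everything seen
```

A common way to combine this with fitness-based search, in the spirit of sustaining diversity as discussed above, is to select on a convex combination of normalised fitness and novelty scores.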

Miranda E.,University of Oviedo | Zaffalon M.,IDSIA
International Journal of Approximate Reasoning | Year: 2013

We contrast Williams' and Walley's theories of coherent lower previsions in the light of conglomerability. These are two of the most credited approaches to a behavioural theory of imprecise probability. Conglomerability is the notion that distinguishes them most: Williams' theory does not consider it, while Walley aims at embedding it in his theory. This question is important, as conglomerability has been a major point of disagreement at the foundations of probability ever since de Finetti first defined it in 1930. We show that Walley's notion of joint coherence (which is the single axiom of his theory) for conditional lower previsions does not take all the implications of conglomerability into account. Considering also some previous results in the literature, we deduce that Williams' theory should be the one to use when conglomerability is not required; for the opposite case, we define the new theory of conglomerably coherent lower previsions, which is arguably the one to use, and of which Walley's theory can be understood as an approximation. We show that this approximation is exact in two important cases: when all conditioning events have positive lower probability, and when conditioning partitions are nested. © 2013 Elsevier Inc. All rights reserved.

News Article | December 21, 2016

SGS is pleased to announce the acquisition of a controlling stake in C-Labs SA, Chiasso, Switzerland.

Chiasso, Switzerland, 21-Dec-2016 — /EuropaWire/ — Founded in 2016, C-Labs is an Industry 4.0 startup developing solutions for transforming food regulatory compliance. It adopts the latest machine learning techniques with the support of the Swiss Artificial Intelligence Lab IDSIA (Istituto Dalle Molle di Studi sull'Intelligenza Artificiale).

The C-Labs platform is being developed as an integrated solution within SGS. It creates a new paradigm, using both human and technology elements to deliver enhanced, scaled, actionable insights from data.

"This acquisition is a valuable contribution to our TIC 4.0 strategic initiative on digitalization and data, and an excellent complement to the partnership initiated earlier this year with Transparency-One," said Frankie Ng, CEO of SGS. "We will continue to partner with selected technology providers to broaden our service portfolio and enhance our value proposition."

For further information, please contact:

SGS is the world's leading inspection, verification, testing and certification company. SGS is recognized as the global benchmark for quality and integrity. With more than 85,000 employees, SGS operates a network of over 1,800 offices and laboratories around the world.
