Bisquert P.,French National Institute for Agricultural Research |
Croitoru M.,Montpellier University |
De Saint-Cyr F.D.,IRIT
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2015
In this paper we are interested in the computational and formal analysis of the persuasive impact that an argument can produce on a human agent. We propose a dual-process cognitive computational model based on the highly influential work of Kahneman and investigate its reasoning mechanisms in the context of argument evaluation. This formal model is a first attempt to take greater account of human reasoning and a first step toward a better understanding of persuasion processes as well as human argumentative strategies, which is crucial in the domain of collective decision making. © Springer International Publishing Switzerland 2015.
Herzig A.,French National Center for Scientific Research |
Lorini E.,French National Center for Scientific Research |
Moisan F.,IRIT |
Troquard N.,University of Essex
IJCAI International Joint Conference on Artificial Intelligence | Year: 2011
We propose a logical framework to represent and reason about agent interactions in normative systems. Our starting point is a dynamic logic of propositional assignments whose satisfiability problem is PSPACE-complete. We show that it embeds Coalition Logic of Propositional Control (CL-PC) and that various notions of ability and capability can be captured in it. We illustrate the framework on a water resource management case study. Finally, we show how the logic can easily be extended to represent constitutive rules, which are also an essential component of the modelling of social reality.
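A dynamic logic of propositional assignments interprets programs built from atomic assignments (set an atom to true or false) as relations on valuations. The following is a minimal Python sketch of that general idea, not the paper's formalism or its complexity results:

```python
# Minimal sketch (not the paper's formalism): interpreting assignment
# programs over valuations, represented as frozensets of true atoms.

def assign(p, value):
    """Atomic program p := value, as a relation on valuations."""
    def step(v):
        w = set(v)
        (w.add if value else w.discard)(p)
        return [frozenset(w)]
    return step

def seq(pi1, pi2):
    """Sequential composition pi1 ; pi2."""
    return lambda v: [w2 for w1 in pi1(v) for w2 in pi2(w1)]

def choice(pi1, pi2):
    """Nondeterministic choice pi1 + pi2."""
    return lambda v: pi1(v) + pi2(v)

def diamond(pi, phi):
    """<pi>phi holds at v iff some execution of pi reaches a phi-state."""
    return lambda v: any(phi(w) for w in pi(v))

# After (p := true ; q := false) or (q := true), is p or q true?
prog = choice(seq(assign("p", True), assign("q", False)),
              assign("q", True))
holds = diamond(prog, lambda v: "p" in v or "q" in v)
print(holds(frozenset()))  # True
```

Both executions of `prog` from the empty valuation make the formula true, so the diamond modality holds; the box modality would be `all` instead of `any`.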
Bayoudh M.,IRD Center of Guyane |
Prade H.,IRIT
Knowledge-Based Systems | Year: 2012
In this paper, we try to identify analogical proportions, i.e., statements of the form "a is to b as c is to d", expressed in linguistic terms. While it is conceivable to use an algebraic model for testing proportions such as "2 is to 4 as 5 is to 10", or even such as "read is to reader as lecture is to lecturer", there is no algebraic framework to support statements such as "engine is to car as heart is to human" or "wine is to France as beer is to England", helping to recognize them as meaningful analogical proportions. The idea is then to rely on text corpora, or even on the Web itself, where one may expect to find the pragmatics and the semantics of the words, in their common use. In that context, in order to attach a numerical value to the "analogical ratio" corresponding to the phrase "a is to b", we start from the work of Kolmogorov on complexity theory. This is the basis for a universal measure of the information content of a word a, or of a word a with respect to another one b, which, in practice, is estimated in a statistical manner. We investigate the link between a purely logical, recently introduced view of analogical proportions and its counterpart based on Kolmogorov theory. The criteria proposed for testing candidate proportions fit with the expected properties (symmetry, central permutation) of analogical proportions. This leads to a new computational method to define, and ultimately to try to detect, analogical proportions in natural language. Experiments with classifiers based on these ideas are reported, and the results are rather encouraging with respect to the recognition of common-sense linguistic analogies. The approach is also compared with existing works on similar problems. © 2011 Elsevier B.V. All rights reserved.
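Two of the ingredients above can be sketched concretely: the logical (Boolean) view of an analogical proportion over feature vectors, and a compression-based stand-in for a Kolmogorov-style information measure. Both are illustrative assumptions: the paper estimates information content statistically from corpora, whereas `zlib` is used here purely as a toy approximation:

```python
import zlib

# Boolean view of "a is to b as c is to d": a differs from b exactly
# as c differs from d, componentwise over Boolean feature vectors.
def boolean_proportion(a, b, c, d):
    return all((ai and not bi) == (ci and not di) and
               (not ai and bi) == (not ci and di)
               for ai, bi, ci, di in zip(a, b, c, d))

print(boolean_proportion([1, 0, 1], [1, 1, 1], [0, 0, 0], [0, 1, 0]))  # True

# Toy stand-in for Kolmogorov complexity via compression (an assumption,
# not the paper's corpus-based statistical estimator).
def C(s):
    return len(zlib.compress(s.encode()))

def cond(b, a):
    """Rough estimate of K(b | a) as C(ab) - C(a)."""
    return C(a + b) - C(a)

# "Analogical ratio" for "a is to b": extra information b adds given a.
print(cond("reader", "read"), cond("lecturer", "lecture"))
```

The Boolean test directly satisfies symmetry (swapping (a, b) with (c, d) preserves it), one of the expected properties mentioned above.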
Fargier H.,IRIT |
Rollon E.,University of Barcelona
Constraints | Year: 2010
Many computational problems linked to uncertainty and preference management can be expressed in terms of computing the marginal(s) of a combination of a collection of valuation functions. Shenoy and Shafer showed how such a computation can be performed using a local computation scheme. A major strength of this work is that it is based on an algebraic description: what is proved is the correctness of the local computation algorithm under a few axioms on the algebraic structure. The instantiations of the framework in practice make use of totally ordered scales. The present paper focuses on the use of partially ordered scales and examines how such scales can be cast in the Shafer-Shenoy framework and thus benefit from local computation algorithms. It also provides several examples of such scales, thus showing that each of the algebraic structures explored here is of interest. © 2010 Springer Science+Business Media, LLC.
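The combination/marginalization setting above can be illustrated on one of the simplest totally ordered instances, the (min, +) cost semiring. The encoding below (Boolean variables, (scope, table) pairs) is a hypothetical toy of ours; the partially ordered scales studied in the paper would, roughly, replace the single `min` by a set of minimal elements:

```python
from itertools import product

# Sketch under assumptions: Boolean variables and the (min, +) cost
# semiring; a valuation is a (scope, table) pair over its variables.

def combine(f, g):
    """Pointwise combination: add the costs of f and g."""
    scope = tuple(dict.fromkeys(f[0] + g[0]))
    table = {}
    for vals in product([0, 1], repeat=len(scope)):
        env = dict(zip(scope, vals))
        table[vals] = (f[1][tuple(env[v] for v in f[0])] +
                       g[1][tuple(env[v] for v in g[0])])
    return scope, table

def marginalize(f, keep):
    """Project onto `keep`, eliminating other variables by min."""
    scope = tuple(v for v in f[0] if v in keep)
    table = {}
    for vals, cost in f[1].items():
        key = tuple(val for var, val in zip(f[0], vals) if var in keep)
        table[key] = min(table.get(key, float("inf")), cost)
    return scope, table

# Two local cost functions; marginal of their combination on {"x"}.
f = (("x", "y"), {(0, 0): 3, (0, 1): 0, (1, 0): 1, (1, 1): 5})
g = (("y", "z"), {(0, 0): 2, (0, 1): 1, (1, 0): 0, (1, 1): 4})
print(marginalize(combine(f, g), {"x"}))  # (('x',), {(0,): 0, (1,): 2})
```

Local computation schemes exploit the fact that, under the Shenoy-Shafer axioms, such marginals can be obtained without ever materializing the full combination.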
Luo Z.,Royal Holloway, University of London |
Soloviev S.,IRIT |
Xue T.,Royal Holloway, University of London
Information and Computation | Year: 2013
Coercive subtyping is a useful and powerful framework of subtyping for type theories. The key idea of coercive subtyping is subtyping as abbreviation. In this paper, we give a new and adequate formulation of T[C], the system that extends a type theory T with coercive subtyping based on a set C of basic subtyping judgements, and show that coercive subtyping is a conservative extension and, in a more general sense, a definitional extension. We introduce an intermediate system, the star-calculus T[C]*, in which the positions that require coercion insertions are marked, and show that T[C]* is a conservative extension of T and that T[C]* is equivalent to T[C]. This makes clear what we mean by coercive subtyping being a conservative extension, on the one hand, and amends a technical problem that had led to a gap in an earlier conservativity proof, on the other. We also compare coercive subtyping with the 'ordinary' notion of subtyping, subsumptive subtyping, and show that the former is adequate for type theories with canonical objects while the latter is not. An improved implementation of coercive subtyping has been carried out in the proof assistant Plastic. © 2012 Elsevier Inc. All rights reserved.
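The slogan "subtyping as abbreviation" can be illustrated with a toy elaborator that inserts a basic coercion from the set C wherever an argument's type differs from the expected one. The type names and helpers here are hypothetical and checked dynamically; the paper's T[C] is a full type-theoretic calculus, not a runtime mechanism:

```python
# Toy elaborator for "subtyping as abbreviation": the set C of basic
# coercions maps (subtype, supertype) to a function, and applications
# are elaborated by inserting the coercion where the argument's type
# differs from the expected one. Hypothetical names, dynamic checks.

coercions = {("Int", "Float"): float}  # C = {Int <= Float}

def apply_with_coercion(fn, expected, arg, actual):
    if actual == expected:
        return fn(arg)
    c = coercions.get((actual, expected))
    if c is None:
        raise TypeError(f"no coercion from {actual} to {expected}")
    return fn(c(arg))  # "f a" abbreviates "f (c a)"

half = lambda x: x / 2.0
print(apply_with_coercion(half, "Float", 3, "Int"))  # 1.5
```

The marked positions of the star-calculus correspond, in this loose analogy, to the call sites where `c` gets inserted.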
De Saint-Cyr F.D.,IRIT
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2011
This paper is a first attempt to define a framework for handling enthymemes in a time-limited persuasion dialog. The notion of incomplete argument is made explicit, and a protocol is proposed to regulate the utterances of a persuasion dialog with respect to three criteria: consistency, non-redundancy and listening. This protocol allows the use of enthymemes concerning the support or conclusion of an argument, and enables an agent to retract or re-specify an argument. The system is illustrated on a small example and some of its properties are outlined. © 2011 Springer-Verlag.
Gossler G.,French Institute for Research in Computer Science and Automation |
Le Metayer D.,French Institute for Research in Computer Science and Automation |
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2010
Establishing liabilities in component-based systems is a challenging task, as it requires establishing convincing evidence of the occurrence of a fault and of the causality relation between the fault and a damage. The second issue is especially complex when several faults are detected and their impact on the occurrence of the failure has to be assessed. In this paper we propose a formal framework for reasoning about logical causality between contract violations. © 2010 Springer-Verlag.
Cayrol C.,IRIT
International Journal of Intelligent Systems | Year: 2010
Bipolar argumentation frameworks make it possible to represent two kinds of interaction between arguments: support and conflict. In this paper, we turn a bipolar argumentation framework into a meta-argumentation framework in which conflicts occur between sets of arguments, characterized as coalitions of supporting arguments. Dung's well-known semantics can then be used on this meta-argumentation framework to select the acceptable arguments. © 2009 Wiley Periodicals, Inc.
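One simplified reading of the construction can be sketched as follows: take support-connected components as coalitions, lift attacks between member arguments to conflicts between coalitions, and select conflict-free sets of coalitions Dung-style. This is an illustrative approximation on a made-up example, not the paper's exact definition of coalitions or its semantics:

```python
from itertools import combinations

# Hypothetical toy framework: a supports b; b attacks c; c attacks d.
args = {"a", "b", "c", "d"}
support = {("a", "b")}
attack = {("b", "c"), ("c", "d")}

def coalitions():
    """Connected components of the (undirected) support relation."""
    und = {frozenset(e) for e in support}
    seen, groups = set(), []
    for x in sorted(args):
        if x in seen:
            continue
        comp, stack = set(), [x]
        while stack:
            y = stack.pop()
            if y not in comp:
                comp.add(y)
                stack += [z for z in args if frozenset((y, z)) in und]
        seen |= comp
        groups.append(frozenset(comp))
    return groups

cs = coalitions()
# Lift attacks between members to conflicts between coalitions.
meta_attack = {(g, h) for g in cs for h in cs if g != h
               and any((x, y) in attack for x in g for y in h)}
# Conflict-free sets of coalitions, as in Dung's semantics.
cf = [set(s) for r in range(len(cs) + 1) for s in combinations(cs, r)
      if not any((g, h) in meta_attack for g in s for h in s)]
print(max(cf, key=len))  # the coalitions {a, b} and {d}
```

Here the coalition {a, b} attacks {c}, and {c} attacks {d}, so the largest conflict-free set of coalitions keeps {a, b} together with {d}.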
Calvet L.,IRIT
Proceedings of the IEEE International Conference on Computer Vision | Year: 2013
This work introduces a new unified Structure from Motion (SfM) paradigm in which images of circular point-pairs can be combined with images of natural points. An imaged circular point-pair encodes the 2D Euclidean structure of a world plane and can easily be derived from the image of a planar shape, especially one that includes circles. A classical SfM method generally runs in two steps: first, a projective factorization of all matched image points (into projective cameras and points); second, a camera self-calibration that upgrades the reconstruction from projective to Euclidean. This work shows how to introduce images of circular points into these two SfM steps; its key contribution is to provide the theoretical foundations for combining 'classical' linear self-calibration constraints with additional ones derived from such images. We show that the two proposed SfM steps clearly yield better results than the classical approach. We validate our contributions on synthetic and real images. © 2013 IEEE.
Bach C.,IRIT |
Scapin D.L.,French Institute for Research in Computer Science and Automation
International Journal of Human-Computer Interaction | Year: 2010
This article describes an experiment comparing three Usability Evaluation Methods: User Testing (UT), Document-based Inspection (DI), and Expert Inspection (EI) for evaluating Virtual Environments (VEs). Twenty-nine individuals (10 end-users and 19 junior usability experts) participated during 1 hr each in the evaluation of two VEs (a training VE and a 3D map). Quantitative results of the comparison show that the effectiveness of UT and DI is significantly better than the effectiveness of EI. For each method, results show their problem coverage: DI- and UT-based diagnoses lead to more problem diversity than EI. The overlap of identified problems amounts to 22% between UT and DI, 20% between DI and EI, and 12% between EI and UT for both virtual environments. The identification impact of the whole set of usability problems is 60% for DI, 57% for UT, and only 36% for EI for both virtual environments. The reliability of UT and DI is also significantly better than that of EI. In addition, a qualitative analysis identified 35 classes describing the profile of usability problems found with each method. It shows that UT seems particularly efficient for the diagnosis of problems that require a particular state of interaction to be detectable. On the other hand, DI supports the identification of directly observable problems, often related to learnability and basic usability. This study shows that DI could be viewed as a "4-wheel-drive SUV evaluation type" (less powerful under certain conditions but able to go everywhere, with any driver), whereas UT could be viewed as a "Formula 1 car evaluation type" (more powerful but requiring an adequate road and a very skilled driver). EI is found (considering all metrics) not to be efficient enough to evaluate the usability of VEs. © Taylor & Francis Group, LLC.