News Article | April 25, 2017
The ELO system, which most chess federations use today, ranks players by the results of their games. Although simple and efficient, it overlooks relevant criteria such as the quality of the moves players actually make. To overcome these limitations, Jean-Marc Alliot of the Institut de recherche en informatique de Toulouse (IRIT - CNRS/INP Toulouse/Université Toulouse Paul Sabatier/Université Toulouse Jean Jaurès/Université Toulouse Capitole) demonstrates a new system, published on 24 April 2017 in the International Computer Games Association Journal.
Since the 1970s, the system designed by Hungarian physicist Arpad Elo has ranked chess players according to the results of their games. The best players have the highest ratings, and the difference in ELO points between two players predicts the probability of either player winning a given game. Players who perform better or worse than predicted gain or lose points accordingly. Because this method does not take the quality of the moves played during a game into account, it cannot reliably rank players who played in different periods.

Jean-Marc Alliot instead proposes ranking players directly on the quality of their actual moves. His system computes the difference between the move actually played and the move that would have been selected by the strongest chess program to date, Stockfish. Running on the OSIRIM supercomputer, this program plays almost perfect moves. Starting with those of Wilhelm Steinitz (1836-1900), all 26,000 games played by chess world champions since then have been processed to build a probabilistic model for each player. For each position, the model estimates the probability of making a mistake and the likely magnitude of that mistake. These models can then be used to compute win/draw/lose probabilities for any match between two players.
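For reference, the baseline the article is contrasting against, the standard Elo prediction and update rule, can be sketched in a few lines. The formula below is the usual Elo formulation; the K-factor of 20 is an illustrative choice, since federations use different values:

```python
def elo_expected_score(r_a, r_b):
    """Expected score of player A against player B under the Elo model.

    A 400-point rating gap corresponds to roughly 10:1 odds.
    """
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a, r_b, score_a, k=20):
    """Post-game rating update for player A (score_a: 1 win, 0.5 draw, 0 loss)."""
    return r_a + k * (score_a - elo_expected_score(r_a, r_b))
```

For example, a player rated 2800 facing a 2700-rated opponent has an expected score of about 0.64; Alliot's system replaces this result-only signal with move-quality statistics.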
The predictions have proven extremely close to the actual results when the players in question actually met over the board, and they fare better than predictions based on ELO scores. The results also show that the level of chess players has been steadily increasing: the current world champion, Magnus Carlsen, tops the list, while Bobby Fischer is third. Under current conditions, this new ranking method cannot immediately replace the ELO system, which is easier to set up and run. However, increases in computing power will make it possible to extend the new method to an ever-growing pool of players in the near future. (OSIRIM stands for Open Services for Indexing and Research Information in Multimedia contents, one of the platforms of the IRIT laboratory.)
News Article | April 25, 2017
Between ever-higher consumer expectations (appetite for novelty, fairness, traceability, safety, etc.), new regulatory and environmental constraints, and the rising cost of raw materials (seeds, fertilizers, pesticides, water, etc.), the agricultural sector is under pressure. Atos relies on the latest generation of Earth-observation satellites, Sentinel-2, designed by the European Space Agency (ESA) and able to provide imagery in visible and near-infrared light at a resolution of 10 to 60 m over a 290 km swath, with a five-day revisit frequency that is ideal for monitoring crops. For one of its major clients in the United States, Atos analyzes more than 100,000 hectares of crops across 6,000 parcels (planted with wheat, corn, soybean and vegetables) to provide biophysical indicators of plant development, such as chlorophyll content or the fraction of green foliage. This monitoring makes it possible to detect anomalies or differences within and between parcels. By connecting this information to field data, and using the Atos Codex suite of business-analytics solutions, experts can then deliver diagnoses and suggest adjustments to farming practices. Philippe Miltin, Atos Group Vice-President and Head of Manufacturing and Retail, explains: "Thanks to the power and flexibility of the new cognitive and analytics tools, we can fully exploit the data. It enables our clients to improve their practices and adapt to their new environmental, social, regulatory and economic requirements. In cooperation with the start-up TerraNIS, we meet these multiple challenges by combining advanced imaging and agronomy skills with the features of Atos Codex."
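The article does not name the exact indicators Atos Codex derives, but a standard example of a biophysical index computed from the red and near-infrared bands that Sentinel-2 provides is NDVI. The sketch below, with illustrative function names, shows the index and a naive per-parcel anomaly flag of the kind the monitoring described above could rely on:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from near-infrared and red
    reflectance. Values near 1 indicate dense green foliage; near 0, bare soil."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + 1e-10)  # epsilon avoids division by zero

def flag_anomalies(index, n_sigma=2.0):
    """Flag pixels whose index deviates from the parcel mean by more than
    n_sigma standard deviations (a deliberately simple anomaly criterion)."""
    mu, sigma = index.mean(), index.std()
    return np.abs(index - mu) > n_sigma * sigma
```

Real crop-monitoring pipelines use calibrated reflectances, cloud masking and agronomic models; this is only the shape of the computation.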
Find the use cases (in English): "Thanks to Atos and TerraNIS, La Coop Fédérée is turning to precision agriculture" and "Data Analytics - Transforming the future of agriculture". *The consortium, named SparkInData, has several members: Atos as consortium leader; the SMEs TerraNIS, Geomatys and Geosigweb; Mercator-Ocean; CNES (Centre National d'Études Spatiales); IGN (Institut National de l'Information Géographique et Forestière); BRGM (Bureau de Recherches Géologiques et Minières); IRIT (Institut de Recherche en Informatique de Toulouse, Université Paul-Sabatier); the École d'ingénieurs de Purpan; and the Aerospace Valley competitiveness cluster.

About Atos Codex: Atos Codex is the Atos brand for Analytics, Internet of Things and cognitive solutions. Atos Codex comprises a methodology, Design Labs, an open industrial platform and high-performance Data Analytics, providing a full range of products and services for designing and running digital business platforms. Atos Codex is one of the four pillars of the Atos Digital Transformation Factory (alongside Atos Canopy, the Atos Orchestrated Hybrid Cloud; SAP HANA by Atos; and the Atos Digital Workplace) supporting clients' digital transformation.

About Atos: Atos is a global leader in digital transformation with approximately 100,000 employees in 72 countries and annual revenue of around €12 billion. The European number one in Big Data, Cybersecurity, High Performance Computing and Digital Workplace, the Group provides Cloud services, Infrastructure & Data Management, and Business & Platform solutions, as well as transactional services through Worldline, the European leader in the payments industry.
With its cutting-edge technologies, digital expertise and industry knowledge, Atos supports the digital transformation of its clients across various business sectors: Defense, Financial Services, Health, Manufacturing, Media, Energy & Utilities, Public sector, Retail, Telecommunications and Transportation. The Group is the Worldwide Information Technology Partner for the Olympic & Paralympic Games and operates under the brands Atos, Atos Consulting, Atos Worldgrid, Bull, Canopy, Unify and Worldline. Atos SE (Societas Europaea) is listed on the CAC40 Paris stock index.
Herzig A., French National Center for Scientific Research |
Lorini E., French National Center for Scientific Research |
Moisan F., IRIT |
Troquard N., University of Essex
IJCAI International Joint Conference on Artificial Intelligence | Year: 2011
We propose a logical framework to represent and reason about agent interactions in normative systems. Our starting point is a dynamic logic of propositional assignments whose satisfiability problem is PSPACE-complete. We show that it embeds the Coalition Logic of Propositional Control (CL-PC) and that various notions of ability and capability can be captured in it. We illustrate the framework on a water resource management case study. Finally, we show how the logic can easily be extended to represent constitutive rules, which are also an essential component of the modelling of social reality.
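In a dynamic logic of propositional assignments, atomic programs set a propositional variable to true or false, and programs compose by sequence, nondeterministic choice and tests. As a rough illustration of these semantics (a toy interpreter, not the paper's formal system), programs can be modeled as maps from a valuation to the set of reachable valuations:

```python
# A valuation is a frozenset containing exactly the atoms that are true.

def assign(atom, value):
    """Atomic program: deterministically set `atom` to True or False."""
    return lambda v: {v | {atom}} if value else {v - {atom}}

def seq(p1, p2):
    """Sequential composition p1 ; p2."""
    return lambda v: {w for u in p1(v) for w in p2(u)}

def choice(p1, p2):
    """Nondeterministic choice between p1 and p2."""
    return lambda v: p1(v) | p2(v)

def dl_test(cond):
    """Test cond? : keeps the valuation unchanged if cond holds, else fails."""
    return lambda v: {v} if cond(v) else set()
```

For example, running `seq(assign('p', True), choice(assign('q', True), assign('q', False)))` from the empty valuation reaches both {p} and {p, q}.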
Bayoudh M., IRD Center of Guyane |
Prade H., IRIT |
Knowledge-Based Systems | Year: 2012
In this paper, we try to identify analogical proportions, i.e., statements of the form "a is to b as c is to d", expressed in linguistic terms. While it is conceivable to use an algebraic model for testing proportions such as "2 is to 4 as 5 is to 10", or even such as "read is to reader as lecture is to lecturer", there is no algebraic framework to support statements such as "engine is to car as heart is to human" or "wine is to France as beer is to England", helping to recognize them as meaningful analogical proportions. The idea is then to rely on text corpora, or even on the Web itself, where one may expect to find the pragmatics and the semantics of the words, in their common use. In that context, in order to attach a numerical value to the "analogical ratio" corresponding to the phrase "a is to b", we start from the works of Kolmogorov on complexity theory. This is the basis for a universal measure of the information content of a word a, or of a word a with respect to another one b, which, in practice, is estimated in a statistical manner. We investigate the link between a purely logical, recently introduced view of analogical proportions and its counterpart based on Kolmogorov theory. The criteria proposed for testing candidate proportions fit with the expected properties (symmetry, central permutation) of analogical proportions. This leads to a new computational method to define, and ultimately to try to detect, analogical proportions in natural language. Experiments with classifiers based on these ideas are reported, and results are rather encouraging with respect to the recognition of common sense linguistic analogies. The approach is also compared with existing works on similar problems. © 2011 Elsevier B.V. All rights reserved.
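The paper estimates Kolmogorov-style information content statistically from corpora or the Web; a common practical stand-in for such estimates is compressed length. The sketch below, whose function names and zlib proxy are illustrative assumptions rather than the paper's method, shows how a symmetry test on conditional information content could score a candidate proportion "a is to b as c is to d":

```python
import zlib

def K(s):
    """Crude Kolmogorov-complexity estimate: length of the compressed string."""
    return len(zlib.compress(s.encode('utf-8')))

def cond_K(a, b):
    """Estimate K(a | b), the information left in a once b is known."""
    return K(b + ' ' + a) - K(b)

def proportion_score(a, b, c, d):
    """Smaller scores suggest the 'analogical ratios' a:b and c:d match."""
    return abs(cond_K(a, b) - cond_K(c, d)) + abs(cond_K(b, a) - cond_K(d, c))
```

By construction the score respects the symmetry property expected of analogical proportions: swapping the pairs (a, b) and (c, d) leaves it unchanged. Corpus-based estimators capture semantics far better than compression on the bare words.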
Fargier H., IRIT |
Rollon E., University of Barcelona
Constraints | Year: 2010
Many computational problems linked to uncertainty and preference management can be expressed in terms of computing the marginal(s) of a combination of a collection of valuation functions. Shenoy and Shafer showed how such a computation can be performed using a local computation scheme. A major strength of this work is that it is based on an algebraic description: what is proved is the correctness of the local computation algorithm under a few axioms on the algebraic structure. The instantiations of the framework in practice make use of totally ordered scales. The present paper focuses on the use of partially ordered scales and examines how such scales can be cast in the Shafer-Shenoy framework and thus benefit from local computation algorithms. It also provides several examples of such scales, thus showing that each of the algebraic structures explored here is of interest. © 2010 Springer Science+Business Media, LLC.
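In the Shenoy-Shafer setting, a valuation over variables supports two operations, combination and marginalization, and the local computation theorem rests on axioms about them. A minimal sketch of the two operations in the familiar totally ordered probability instantiation (the common case the paper generalizes beyond; the table encoding here is an illustrative choice):

```python
from itertools import product

# A valuation: (tuple of variable names, table mapping 0/1 assignment tuples to values).

def combine(v1, v2):
    """Combination: pointwise product on the union of the variable sets."""
    (x1, t1), (x2, t2) = v1, v2
    xs = tuple(dict.fromkeys(x1 + x2))  # order-preserving union of variables
    table = {}
    for a in product((0, 1), repeat=len(xs)):
        env = dict(zip(xs, a))
        table[a] = t1[tuple(env[v] for v in x1)] * t2[tuple(env[v] for v in x2)]
    return xs, table

def marginalize(val, keep):
    """Marginalization: sum out every variable not in `keep`."""
    xs, t = val
    ks = tuple(v for v in xs if v in keep)
    out = {}
    for a, p in t.items():
        key = tuple(a[xs.index(v)] for v in ks)
        out[key] = out.get(key, 0.0) + p
    return ks, out
```

For example, combining P(A) with P(B|A) and marginalizing onto B yields P(B); with a partially ordered scale, the product and sum would be replaced by the scale's own operations.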
Luo Z., Royal Holloway, University of London |
Soloviev S., IRIT |
Xue T., Royal Holloway, University of London
Information and Computation | Year: 2013
Coercive subtyping is a useful and powerful framework of subtyping for type theories. The key idea of coercive subtyping is subtyping as abbreviation. In this paper, we give a new and adequate formulation of T[C], the system that extends a type theory T with coercive subtyping based on a set C of basic subtyping judgements, and show that coercive subtyping is a conservative extension and, in a more general sense, a definitional extension. We introduce an intermediate system, the star-calculus T[C]*, in which the positions that require coercion insertions are marked, and show that T[C]* is a conservative extension of T and that T[C]* is equivalent to T[C]. This makes clear what we mean by coercive subtyping being a conservative extension, on the one hand, and amends a technical problem that has led to a gap in the earlier conservativity proof, on the other. We also compare coercive subtyping with the 'ordinary' notion of subtyping (subsumptive subtyping) and show that the former is adequate for type theories with canonical objects while the latter is not. An improved implementation of coercive subtyping is done in the proof assistant Plastic. © 2012 Elsevier Inc. All rights reserved.
De Saint-Cyr F.D., IRIT
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2011
This paper is a first attempt to define a framework for handling enthymemes in a time-limited persuasion dialog. The notion of an incomplete argument is made explicit, and a protocol is proposed to regulate the utterances of a persuasion dialog with respect to three criteria: consistency, non-redundancy and listening. This protocol allows enthymemes concerning the support or conclusion of an argument, and enables an agent to retract or re-specify an argument. The system is illustrated on a small example and some of its properties are outlined. © 2011 Springer-Verlag.
Cayrol C., IRIT |
International Journal of Intelligent Systems | Year: 2010
Bipolar argumentation frameworks make it possible to represent two kinds of interaction between arguments: support and conflict. In this paper, we turn a bipolar argumentation framework into a meta-argumentation framework in which conflicts occur between sets of arguments, characterized as coalitions of supporting arguments. Dung's well-known semantics can then be used on this meta-argumentation framework to select the acceptable arguments. © 2009 Wiley Periodicals, Inc.
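Once support is compiled away into coalitions, the meta-framework is a conflict-only framework of the kind Dung's semantics operate on. As an illustration of one such semantics (a generic sketch, not the paper's coalition construction), the grounded extension can be computed by iterating the characteristic function from the empty set:

```python
def grounded_extension(args, attacks):
    """Dung's grounded extension of an attack graph.

    Iterates F(S) = {a : every attacker of a is attacked by S}
    from the empty set until a fixed point is reached.
    """
    attackers = {a: {b for (b, c) in attacks if c == a} for a in args}
    ext = set()
    while True:
        attacked = {c for (b, c) in attacks if b in ext}  # arguments ext defends against
        new = {a for a in args if attackers[a] <= attacked}
        if new == ext:
            return ext
        ext = new
```

For instance, with arguments a, b, c and attacks a→b, b→c, the grounded extension is {a, c}: a is unattacked, and a defends c by attacking b.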
Bach C., IRIT |
Scapin D.L., French Institute for Research in Computer Science and Automation
International Journal of Human-Computer Interaction | Year: 2010
This article describes an experiment comparing three Usability Evaluation Methods: User Testing (UT), Document-based Inspection (DI), and Expert Inspection (EI) for evaluating Virtual Environments (VEs). Twenty-nine individuals (10 end-users and 19 junior usability experts) participated during 1 hr each in the evaluation of two VEs (a training VE and a 3D map). Quantitative results of the comparison show that the effectiveness of UT and DI is significantly better than the effectiveness of EI. For each method, results show their problem coverage: DI- and UT-based diagnoses lead to more problem diversity than EI. The overlap of identified problems amounts to 22% between UT and DI, 20% between DI and EI, and 12% between EI and UT for both virtual environments. The identification impact of the whole set of usability problems is 60% for DI, 57% for UT, and only 36% for EI for both virtual environments. The reliability of UT and DI is also significantly better than that of EI. In addition, a qualitative analysis identified 35 classes describing the profile of usability problems found with each method. It shows that UT seems particularly efficient for the diagnosis of problems that require a particular state of interaction to be detectable. On the other hand, DI supports the identification of problems that are directly observable, often related to learnability and basic usability. This study shows that DI could be viewed as a "4-wheel drive SUV evaluation type" (less powerful under certain conditions but able to go everywhere, with any driver), whereas UT could be viewed as a "Formula 1 car evaluation type" (more powerful but requiring an adequate road and a very skilled driver). EI is found (considering all metrics) to be not efficient enough to evaluate the usability of VEs. © Taylor & Francis Group, LLC.