CNRS Toulouse Institute in Information Technology

Toulouse, France


Adje A.,CNRS Toulouse Institute in Information Technology
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2017

In this paper, we present a model based on copositive programming, and semidefinite relaxations of it, to prove properties of discrete-time piecewise affine systems. We consider invariance properties, i.e., properties represented by a set and formulated as: all reachable values are included in the set. We also restrict the analysis to sublevel sets of quadratic forms, i.e., ellipsoids. In this case, checking the property is equivalent to solving a quadratic maximization problem under the constraint that the decision variable belongs to the set of reachable values. This maximization problem is relaxed using an abstraction of the reachable values set by a union of truncated ellipsoids. © Springer International Publishing AG 2017.
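As a minimal sketch of the kind of invariance check described above, the snippet below tests whether sampled trajectories of a toy two-mode piecewise affine system stay inside a candidate ellipsoidal sublevel set {x : xᵀPx ≤ 1}. The matrices `A1`, `A2`, `P` and the dynamics are invented for illustration; the paper's copositive-programming and semidefinite-relaxation machinery is not reproduced here.

```python
import numpy as np

def in_ellipsoid(P, x, level=1.0):
    """True if x lies in the sublevel set {x : x^T P x <= level}."""
    return float(x @ P @ x) <= level

# Toy piecewise affine system (hypothetical): x' = A1 x if x[0] >= 0, else A2 x.
A1 = np.array([[0.5, 0.1], [0.0, 0.4]])
A2 = np.array([[0.3, -0.2], [0.1, 0.5]])
P = np.eye(2)  # unit ball as candidate invariant ellipsoid

x = np.array([0.6, -0.3])
ok = True
for _ in range(50):
    x = A1 @ x if x[0] >= 0 else A2 @ x
    ok = ok and in_ellipsoid(P, x)
```

Sampling trajectories like this can only falsify an invariant; proving it for all reachable values is exactly what the relaxed maximization problem in the paper addresses.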

Eches O.,CNRS Toulouse Institute in Information Technology | Dobigeon N.,CNRS Toulouse Institute in Information Technology | Tourneret J.-Y.,CNRS Toulouse Institute in Information Technology
IEEE Transactions on Geoscience and Remote Sensing | Year: 2011

This paper describes a new algorithm for hyperspectral image unmixing. Most unmixing algorithms proposed in the literature do not take into account the possible spatial correlations between the pixels. In this paper, a Bayesian model is introduced to exploit these correlations. The image to be unmixed is assumed to be partitioned into regions (or classes) where the statistical properties of the abundance coefficients are homogeneous. A Markov random field is then proposed to model the spatial dependencies between the pixels within any class. Conditionally upon a given class, each pixel is modeled by using the classical linear mixing model with additive white Gaussian noise. For this model, the posterior distributions of the unknown parameters and hyperparameters allow the parameters of interest to be inferred. These parameters include the abundances for each pixel, the means and variances of the abundances for each class, as well as a classification map indicating the classes of all pixels in the image. To overcome the complexity of the posterior distribution, we consider a Markov chain Monte Carlo method that generates samples asymptotically distributed according to the posterior. The generated samples are then used for parameter and hyperparameter estimation. The accuracy of the proposed algorithms is illustrated on synthetic and real data. © 2006 IEEE.
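The per-pixel linear mixing model mentioned above can be sketched as y = Ma + n, with endmember matrix M, abundance vector a on the simplex, and Gaussian noise n. The dimensions, spectra, and abundances below are synthetic placeholders, and the unconstrained least-squares inversion stands in for the paper's Bayesian MCMC estimation purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
R, L = 3, 50                        # R endmembers, L spectral bands (illustrative)
M = rng.uniform(0.0, 1.0, (L, R))   # endmember spectra (one per column)
a = np.array([0.6, 0.3, 0.1])       # abundances: nonnegative, summing to 1
sigma = 0.01
y = M @ a + rng.normal(0.0, sigma, L)  # observed pixel spectrum

# Least-squares abundance estimate (ignoring the simplex constraint),
# just to illustrate inversion of the mixing model:
a_hat, *_ = np.linalg.lstsq(M, y, rcond=None)
```

The Bayesian model in the paper goes further by enforcing the abundance constraints and coupling neighboring pixels through the Markov random field prior.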

Cabanac G.,CNRS Toulouse Institute in Information Technology
Scientometrics | Year: 2011

Scientific literature recommender systems (SLRSs) provide papers to researchers according to their scientific interests. These systems rely on inter-researcher similarity measures that are usually computed from publication contents (i.e., by extracting paper topics and citations). We highlight two major issues related to this design. The required full-text access and processing are expensive and hardly feasible. Moreover, clues about meetings, encounters, and informal exchanges between researchers (which are related to a social dimension) have not been exploited to date. In order to tackle these issues, we propose an original SLRS based on a threefold contribution. First, we argue the case for defining inter-researcher similarity measures built on publicly available metadata. Second, we define topical and social measures that we combine to issue socio-topical recommendations. Third, we conduct an evaluation with 71 volunteer researchers to check researchers' perceptions against socio-topical similarities. Experimental results show a significant 11.21% accuracy improvement of socio-topical recommendations compared to baseline topical recommendations. © 2011 Akadémiai Kiadó, Budapest, Hungary.
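The idea of combining topical and social similarities into one socio-topical score can be sketched as a simple weighted combination. The function, weights, and candidate scores below are hypothetical; the paper defines its own measures from publicly available metadata.

```python
# Hypothetical combination of a topical and a social inter-researcher
# similarity (both assumed normalized to [0, 1]) into one score.
def socio_topical(topical: float, social: float, alpha: float = 0.5) -> float:
    """Weighted combination of two similarities in [0, 1]."""
    return alpha * topical + (1 - alpha) * social

# Rank candidate researchers by their socio-topical similarity
# to a target researcher (scores here are made up):
candidates = {"r1": (0.9, 0.1), "r2": (0.5, 0.8), "r3": (0.2, 0.2)}
ranked = sorted(candidates, key=lambda r: socio_topical(*candidates[r]),
                reverse=True)
```

Note how a researcher with moderate topical similarity but strong social ties ("r2") can outrank one with high topical similarity alone ("r1"), which is precisely the effect a socio-topical recommender exploits.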

Lorini E.,CNRS Toulouse Institute in Information Technology | Schwarzentruber F.,CNRS Toulouse Institute in Information Technology
Artificial Intelligence | Year: 2011

The aim of this work is to propose a logical framework for the specification of cognitive emotions that are based on counterfactual reasoning about agents' choices. The prototypical counterfactual emotion is regret. In order to meet this objective, we exploit the well-known STIT logic (Belnap et al. (2001) [9], Horty (2001) [30], Horty and Belnap (1995) [31]). STIT logic was proposed in the domain of formal philosophy in the nineties and, more recently, has been imported into the field of theoretical computer science, where its formal relationships with other logics for multi-agent systems such as ATL and Coalition Logic (CL) have been studied. STIT is a very suitable formalism to reason about choices and capabilities of agents and groups of agents. Unfortunately, the version of STIT with agents and groups has recently been proved to be undecidable and not finitely axiomatizable. In this work we study a decidable and finitely axiomatizable fragment of STIT with agents and groups which is sufficiently expressive for our purpose of formalizing counterfactual emotions. We call this STIT fragment dfSTIT. After having extended dfSTIT with knowledge modalities, in the second part of the article we exploit it in order to formalize four types of counterfactual emotions: regret, rejoicing, disappointment, and elation. At the end of the article we present an application of our formalization of counterfactual emotions to a concrete example. © 2010 Elsevier B.V. All rights reserved.

Dubois D.,CNRS Toulouse Institute in Information Technology
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2010

The notion of uncertainty has been a controversial issue for a long time. In particular the prominence of probability theory in the scientific arena has blurred some distinctions that were present from its inception, namely between uncertainty due to the variability of physical phenomena, and uncertainty due to a lack of information. The Bayesian school claims that whatever its origin, uncertainty can be modeled by single probability distributions [18]. © 2010 Springer-Verlag Berlin Heidelberg.

Destercke S.,Montpellier SupAgro | Dubois D.,CNRS Toulouse Institute in Information Technology
Information Sciences | Year: 2011

When conjunctively merging two belief functions concerning a single variable but coming from different sources, Dempster's rule of combination is justified only when information sources can be considered as independent. When dependencies between sources are ill-known, it is usual to require the property of idempotence for the merging of belief functions, as this property captures the possible redundancy of dependent sources. To study idempotent merging, different strategies can be followed. One strategy is to rely on idempotent rules used in either more general or more specific frameworks and to study, respectively, their particularization or extension to belief functions. In this paper, we study the feasibility of extending the idempotent fusion rule of possibility theory (the minimum) to belief functions. We first investigate how comparisons of information content, in the form of inclusion and least-commitment, can be exploited to relate idempotent merging in possibility theory to evidence theory. We reach the conclusion that unless we accept the idea that the result of the fusion process can be a family of belief functions, such an extension is not always possible. As handling such families seems impractical, we then turn our attention to a more quantitative criterion and consider those combinations that maximize the expected cardinality of the joint belief functions, among the least committed ones, taking advantage of the fact that the expected cardinality of a belief function only depends on its contour function. © 2011 Elsevier Inc. All rights reserved.
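The fact exploited at the end of the abstract, that the expected cardinality of a belief function depends only on its contour function, can be checked numerically on a small example: the expected cardinality Σ_A m(A)·|A| equals the sum over singletons of the contour (singleton plausibility) values. The frame and mass assignment below are made up for the sketch.

```python
# Toy belief function on the frame {a, b, c}: focal sets as frozensets,
# with masses summing to 1.
omega = {"a", "b", "c"}
mass = {frozenset({"a"}): 0.5,
        frozenset({"a", "b"}): 0.3,
        frozenset(omega): 0.2}

# Expected cardinality: sum of mass * cardinality over focal sets.
expected_card = sum(m * len(A) for A, m in mass.items())

def contour(w):
    """Contour function: plausibility of the singleton {w}."""
    return sum(m for A, m in mass.items() if w in A)

# Summing the contour over the frame gives the same value:
# 0.5*1 + 0.3*2 + 0.2*3 = 1.7
sum_contour = sum(contour(w) for w in omega)
```

This identity is what makes maximizing expected cardinality tractable: it reduces a set-valued criterion to one computed pointwise on the frame.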

Cabanac G.,CNRS Toulouse Institute in Information Technology
Journal of the American Society for Information Science and Technology | Year: 2012

Characteristics of the Journal of the American Society for Information Science and Technology and 76 other journals listed in the Information Systems category of the Journal Citation Reports-Science edition 2009 were analyzed. Besides reporting usual bibliographic indicators, we investigated the human cornerstone of any peer-reviewed journal: its editorial board. Demographic data about the 2,846 gatekeepers serving on information systems (IS) editorial boards were collected. We discuss various scientometric indicators supported by descriptive statistics. Our findings reflect the great variety of IS journals in terms of research output, author communities, editorial boards, and gatekeeper demographics (e.g., diversity in gender and location), seniority, authority, and degree of involvement in editorial boards. We believe that these results may help the general public and scholars (e.g., readers, authors, journal gatekeepers, policy makers) to revise and increase their knowledge of scholarly communication in the IS field. The EB-IS-2009 dataset supporting this scientometric study is released as online supplementary material to this article to foster further research on editorial boards. © 2012 ASIS&T.

Demolombe R.,CNRS Toulouse Institute in Information Technology
Journal of Logic and Computation | Year: 2014

Segerberg's Dynamic Deontic Logic is a dynamic logic in which, among the set of all possible histories, those fulfilling the norms are distinguished. An extension of this logic to obligations (respectively, permissions and prohibitions) to do an action before a given deadline or during a given time interval is defined. These temporal constraints are defined by events which may have several occurrences (like the obligation to update a given file before midnight). Violations of these kinds of norms are defined in this logical framework. © 2012 The Author. Published by Oxford University Press.

Cabanac G.,CNRS Toulouse Institute in Information Technology
Scientometrics | Year: 2014

Eponyms are known to praise leading scientists for their contributions to science. Some are so widespread that they are even known by laypeople (e.g., Alzheimer's disease, Darwinism). However, there is no systematic way to discover the distributions of eponyms in scientific domains. Prior work has tackled this issue but has failed to address it completely. Early attempts involved the manual labelling of all eponyms found in a few textbooks of given domains, such as chemistry. Others relied on search engines to probe bibliographic records seeking a single eponym at a time, such as Nash Equilibrium. Nonetheless, we failed to find any attempt at eponym quantification in a large volume of full-text publications. This article introduces a semi-automatic text mining approach to extracting eponyms and quantifying their use in such datasets. Candidate eponyms are matched programmatically by regular expressions, and then validated manually. As a case study, the processing of 821 recent Scientometrics articles reveals a mixture of established and emerging eponyms. The results stress the value of text mining for the rapid extraction and quantification of eponyms that may have substantial implications for research evaluation. © 2013 Akadémiai Kiadó, Budapest, Hungary.
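The regular-expression matching step described above can be sketched as follows. The pattern and the closed list of head nouns are simplified placeholders, since the actual patterns used in the study are more elaborate and their matches are validated manually.

```python
import re

# Illustrative pattern for candidate eponyms of the form
# "Name term" or "Name's term", with a capitalized (possibly
# hyphenated) surname and a small set of head nouns.
EPONYM = re.compile(
    r"\b([A-Z][a-z]+(?:-[A-Z][a-z]+)?)('s)?\s+"
    r"(equilibrium|disease|index|law|number)\b"
)

text = ("We compare the Nash equilibrium with Alzheimer's disease "
        "incidence and Hirsch's index.")
candidates = [m.group(0) for m in EPONYM.finditer(text)]
# candidates: ["Nash equilibrium", "Alzheimer's disease", "Hirsch's index"]
```

As in the paper's semi-automatic pipeline, such matches are only candidates: false positives (e.g., a capitalized sentence-initial word followed by a head noun) still require manual validation.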

Cabanac G.,CNRS Toulouse Institute in Information Technology
Scientometrics | Year: 2013

Schubert introduced the partnership ability φ-index, relying on a researcher's number of co-authors and collaboration rate. As a Hirsch-type index, φ was expected to be consistent with the Schubert-Glänzel model of the h-index. Schubert demonstrated this relationship with the 34 awardees of the Hevesy medal in the field of nuclear and radiochemistry (r² = 0.8484). In this paper, we upscale this study by testing the φ-index on a million researchers in computer science. We found that the Schubert-Glänzel model correlates with the million empirical φ values (r² = 0.8695). In addition, machine learning through symbolic regression produces models whose accuracy gain does not exceed 6.1 % (r² = 0.9227). These results suggest that the Schubert-Glänzel model of the φ-index is accurate and robust on the domain-wide bibliographic dataset of computer science. © 2012 Akadémiai Kiadó, Budapest, Hungary.
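The r² values reported above are coefficients of determination for a model's fit to empirical index values. As a minimal sketch, the snippet below computes r² for a toy linear model on synthetic data; the actual φ-index model and the million-researcher dataset are not reproduced here.

```python
import numpy as np

def r_squared(y, y_hat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(1)
x = np.linspace(1, 10, 100)
y = 2.0 * x + rng.normal(0, 0.5, 100)  # noisy linear relation (synthetic)
coef = np.polyfit(x, y, 1)             # fit a degree-1 model
r2 = r_squared(y, np.polyval(coef, x))
```

An r² near 1 means the model explains almost all the variance of the empirical values, which is the sense in which the Schubert-Glänzel model is called accurate on the computer science dataset.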
