Le Bardo, Tunisia

Ishak M.B., LINA Laboratory | Leray P., LINA Laboratory | Amor N.B., LARODEC Laboratory
Proceedings - International Conference on Tools with Artificial Intelligence, ICTAI

Probabilistic relational models (PRMs) extend Bayesian networks (BNs) to a relational data mining context. Even though many works have focused, separately, on the random generation of Bayesian networks and of relational databases, no such work has been identified for PRMs. This paper provides an algorithmic approach for generating random PRMs from scratch, filling the absence of a generation process. The proposed method generates PRMs, as well as synthetic relational data, from a randomly generated relational schema and a random set of probabilistic dependencies. This can be of interest both for machine learning researchers, to evaluate their proposals in a common framework, and for database designers, to evaluate the effectiveness of the components of a database management system. © 2014 IEEE.
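As an illustration of what such a generation process can look like, here is a minimal Python sketch that draws a random relational schema (classes, attributes, and foreign keys) and a random DAG of probabilistic dependencies over the attributes. This is not the authors' algorithm; all names and parameters (`random_schema`, `edge_prob`, and so on) are assumptions made for the example.

```python
import random

def random_schema(n_classes=4, max_attrs=3, seed=0):
    """Toy relational schema: classes with random attributes, plus
    foreign-key links forming a tree (hence no referential cycles)."""
    rng = random.Random(seed)
    classes = {f"C{i}": [f"C{i}.a{j}" for j in range(rng.randint(1, max_attrs))]
               for i in range(n_classes)}
    # Link every class (except the first) to a previously created one.
    fks = {f"C{i}": f"C{rng.randrange(i)}" for i in range(1, n_classes)}
    return classes, fks

def random_dag(attributes, edge_prob=0.3, seed=0):
    """Random DAG over the attributes: draw a random topological order,
    then add each forward edge independently with probability edge_prob.
    Forward-only edges guarantee acyclicity by construction."""
    rng = random.Random(seed)
    order = attributes[:]
    rng.shuffle(order)
    return [(order[i], order[j])
            for i in range(len(order)) for j in range(i + 1, len(order))
            if rng.random() < edge_prob]

classes, fks = random_schema()
attrs = [a for attr_list in classes.values() for a in attr_list]
print(classes, fks, random_dag(attrs), sep="\n")
```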

Bounhas M., LARODEC Laboratory | Prade H., Toulouse 1 University Capitole | Prade H., University of Technology, Sydney | Richard G., Toulouse 1 University Capitole
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

An approach to classification based on a formal modeling of analogical proportions linking the features of four items has recently been shown to be surprisingly successful on difficult benchmarks. Analogical proportions are homogeneous logical proportions, where homogeneous refers both to the structure of their logical expression and to the specificity of their truth tables. In contrast, heterogeneous proportions express that there is an intruder among four truth values, which is forbidden to appear in a specific position. The two types of proportions are of an opposite nature. However, heterogeneous proportions can also serve as a basis for classification, by considering that a new item can be added to a class only if its addition leaves the class as even as possible: the new item should rarely be an intruder with respect to any triple of items known to be in the class. Experiments show that this new evenness-based classifier obtains good results on a number of representative benchmarks. Its accuracy is compared both to that of well-known classifiers and to that of previous analogy-based classifiers. A discussion investigates on which particular types of benchmarks the evenness-based classifiers outperform the analogical ones, and when the opposite holds. © Springer International Publishing Switzerland 2015.
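A minimal sketch of the evenness idea, under simplifying assumptions: items are Boolean vectors, an "intruder" test is applied attribute-wise, and the candidate is assigned to the class where it is least often the odd one out. The functions (`is_intruder`, `intruder_rate`) and the decision rule are illustrative, not the paper's exact definitions.

```python
from itertools import combinations

def is_intruder(a, b, c, d):
    """Attribute-wise intruder test: among the four values, exactly one
    differs from the other three, and that odd value is d (the new item)."""
    vals = [a, b, c, d]
    return vals.count(d) == 1 and len(set(vals)) == 2

def intruder_rate(triples, x):
    """Fraction of (triple, attribute) combinations in which x shows up
    as the odd one out; lower means x fits the class more evenly."""
    hits = total = 0
    for a, b, c in triples:
        for ai, bi, ci, xi in zip(a, b, c, x):
            total += 1
            hits += is_intruder(ai, bi, ci, xi)
    return hits / total if total else 0.0

def classify(train, x):
    """Assign x to the class in which it is least often an intruder."""
    return min(train,
               key=lambda label: intruder_rate(combinations(train[label], 3), x))

train = {"pos": [(1, 1, 0), (1, 1, 1), (1, 0, 0), (1, 1, 0)],
         "neg": [(0, 0, 1), (0, 1, 1), (0, 0, 0), (0, 0, 1)]}
print(classify(train, (1, 1, 1)))  # fits "pos" more evenly
```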

Bounhas M., LARODEC Laboratory | Bounhas M., P.A. College | Prade H., French National Center for Scientific Research | Richard G., French National Center for Scientific Research
Communications in Computer and Information Science

Analogical proportion-based classification methods were introduced a few years ago. They look in the training set for suitable triples of examples that are in an analogical proportion with the item to be classified on a maximal set of attributes. This can be viewed as a lazy classification technique since, like k-NN algorithms, no static model is built from the set of examples. The amazing results (at least in terms of accuracy) obtained with such techniques are not easy to justify from a theoretical viewpoint. In this paper, we show that there exists an alternative way to build analogical proportion-based learners, by statically building a set of inference rules during a preliminary training step. This gives birth to a new classification algorithm that deals with pairs rather than triples of examples. Experiments on classical benchmarks from the UC Irvine repository are reported, showing that we obtain comparable results. © Springer International Publishing Switzerland 2014.
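For readers unfamiliar with the triple-based baseline this paper departs from, the following sketch shows the Boolean analogical proportion a:b::c:d and a naive lazy classifier that votes over training triples. It is a simplified illustration (it requires the proportion to hold on all attributes rather than on a maximal subset), not the authors' pair-based algorithm.

```python
from itertools import product

def ap(a, b, c, d):
    """Boolean analogical proportion a:b::c:d ('a is to b as c is to d')."""
    return (a == b and c == d) or (a == c and b == d)

def solve(a, b, c):
    """Solve a:b::c:x for x when a solution exists, else None."""
    for x in (0, 1):
        if ap(a, b, c, x):
            return x
    return None

def classify(train, x):
    """Lazy analogical classifier: for each training triple whose
    attribute-wise proportions with x all hold, solve the proportion
    on the labels and vote for the resulting class."""
    votes = {}
    for (a, la), (b, lb), (c, lc) in product(train, repeat=3):
        if all(ap(ai, bi, ci, xi) for ai, bi, ci, xi in zip(a, b, c, x)):
            pred = solve(la, lb, lc)
            if pred is not None:
                votes[pred] = votes.get(pred, 0) + 1
    return max(votes, key=votes.get) if votes else None

train = [((1, 0, 1), 1), ((1, 0, 0), 1), ((0, 1, 1), 0), ((0, 1, 0), 0)]
print(classify(train, (1, 0, 1)))  # expected 1
```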

Ben-Romdhane H., LARODEC Laboratory | Ben Jouida S., LARODEC Laboratory | Krichen S., FSJEG de Jendouba
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

The online knapsack problem (OKP) generalizes the 0-1 knapsack problem (0-1KP) to a setting in which the problem inputs are revealed over time. Whereas the 0-1KP involves maximizing the value of the knapsack contents without exceeding its capacity, the OKP adds the following requirements: items are presented one at a time, their features are only revealed on arrival, and an immediate and irrevocable decision on the current item is required before observing the next one. This problem is known to be non-approximable in its general case. Accordingly, we study a relaxed variant of the OKP in which delaying items is allowed: we assume that the decision maker may retain observed items until a given deadline before deciding definitively on them. The main objective is to load the subset of items that maximizes the expected value of the knapsack without exceeding its capacity. We propose an online algorithm, based on dynamic programming, that builds up the solution in several stages. Our approach incorporates a decision rule that identifies the most desirable items at each stage and then places the fittest ones in the knapsack. Our experimental study shows that the proposed algorithm approaches the optimal solution within a small error margin. © Springer International Publishing Switzerland 2014.
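The following Python sketch illustrates one possible reading of the delayed setting: arriving items are buffered up to a fixed delay, and at each stage a standard 0-1 knapsack dynamic program selects which buffered items to commit irrevocably. The buffering policy and the `delay` parameter are assumptions made for the example; the authors' decision rule is more elaborate.

```python
def knapsack(items, cap):
    """Standard 0-1 knapsack DP over (weight, value) pairs;
    returns the indices of the chosen items."""
    n = len(items)
    dp = [[0] * (cap + 1) for _ in range(n + 1)]
    for i, (w, v) in enumerate(items, 1):
        for c in range(cap + 1):
            dp[i][c] = dp[i - 1][c]
            if w <= c and dp[i - 1][c - w] + v > dp[i][c]:
                dp[i][c] = dp[i - 1][c - w] + v
    chosen, c = [], cap
    for i in range(n, 0, -1):  # backtrack to recover the chosen set
        if dp[i][c] != dp[i - 1][c]:
            chosen.append(i - 1)
            c -= items[i - 1][0]
    return chosen

def online_with_delay(stream, cap, delay=3):
    """Buffer up to `delay` items; when the buffer is full, solve a
    knapsack over the buffered items under the remaining capacity,
    commit the winners irrevocably, and reject the rest."""
    loaded, buffer = [], []
    for item in stream:
        buffer.append(item)
        if len(buffer) == delay:
            for i in knapsack(buffer, cap):
                cap -= buffer[i][0]
                loaded.append(buffer[i])
            buffer = []
    for i in knapsack(buffer, cap):  # flush the final partial buffer
        cap -= buffer[i][0]
        loaded.append(buffer[i])
    return loaded

stream = [(3, 10), (2, 4), (5, 9), (4, 12), (1, 2), (6, 11)]
print(online_with_delay(stream, cap=10))
```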

Bounhas M., LARODEC Laboratory | Bounhas M., P.A. College | Ghasemi Hamed M., French National Center for Scientific Research | Ghasemi Hamed M., DTI Detector Trade International | And 4 more authors.
Fuzzy Sets and Systems

In real-world problems, input data may be pervaded with uncertainty. In this paper, we investigate the behavior of naive possibilistic classifiers, as a counterpart to naive Bayesian ones, for dealing with classification tasks in the presence of uncertainty. For this purpose, we extend possibilistic classifiers, which have recently been adapted to numerical data, in order to cope with uncertainty in the data representation. The possibility distributions used here are supposed to encode the family of Gaussian probability distributions compatible with the considered dataset. We consider two types of uncertainty: (i) the uncertainty associated with the class in the training set, modeled by a possibility distribution over class labels, and (ii) the imprecision pervading attribute values in the testing set, represented in the form of intervals for continuous data. Moreover, the approach takes into account the uncertainty about the estimation of the Gaussian distribution parameters due to the limited amount of available data. We first adapt the possibilistic classification model, previously proposed for the certain case, to accommodate uncertainty about class labels. Then, we propose an algorithm based on the extension principle to deal with imprecise attribute values. The reported experiments show the interest of possibilistic classifiers for handling uncertainty in data. In particular, the probability-to-possibility transform-based classifier shows robust behavior when dealing with imperfect data. © 2013 Elsevier B.V. All rights reserved.
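To make the probability-to-possibility idea concrete, here is a sketch of a naive possibilistic classifier for certain (precise) numerical data, using the standard optimal transform of a Gaussian: pi(x) = P(|X - mu| >= |x - mu|) = 1 - erf(|x - mu| / (sigma * sqrt(2))). The uncertain-label and interval-valued extensions that are the paper's actual contribution are omitted, and all function names are illustrative.

```python
import math
from statistics import mean, stdev

def gaussian_possibility(x, mu, sigma):
    """Optimal probability-to-possibility transform of a Gaussian:
    pi(x) = P(|X - mu| >= |x - mu|) = 1 - erf(|x - mu| / (sigma*sqrt(2)))."""
    return 1.0 - math.erf(abs(x - mu) / (sigma * math.sqrt(2.0)))

def fit(train):
    """Per class and per attribute, estimate (mu, sigma) from the data."""
    params = {}
    for label, rows in train.items():
        cols = list(zip(*rows))
        params[label] = [(mean(c), stdev(c)) for c in cols]
    return params

def classify(params, x):
    """Naive possibilistic rule: combine attribute possibilities with min
    and return the class with the highest resulting possibility."""
    def poss(label):
        return min(gaussian_possibility(xi, mu, sd)
                   for xi, (mu, sd) in zip(x, params[label]))
    return max(params, key=poss)

train = {"A": [(1.0, 2.1), (1.2, 1.9), (0.9, 2.0), (1.1, 2.2)],
         "B": [(3.0, 0.9), (3.2, 1.1), (2.9, 1.0), (3.1, 0.8)]}
print(classify(fit(train), (1.05, 2.0)))  # expected "A"
```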
