Zaidi N.A.,Monash University | Cerquides J.,Artificial Intelligence Research Institute | Carman M.J.,Monash University | Webb G.I.,Monash University
Journal of Machine Learning Research | Year: 2013

Despite the simplicity of the naive Bayes classifier, it has continued to perform well against more sophisticated newcomers and has remained, therefore, of great interest to the machine learning community. Of the numerous approaches to refining the naive Bayes classifier, attribute weighting has received less attention than it warrants. Most approaches, perhaps influenced by attribute weighting in other machine learning algorithms, use weighting to place greater emphasis on highly predictive attributes than on less predictive ones. In this paper, we argue that for naive Bayes, attribute weighting should instead be used to alleviate the conditional independence assumption. Based on this premise, we propose a weighted naive Bayes algorithm, called WANBIA, that selects weights to minimize either the negative conditional log-likelihood or the mean squared error objective function. We perform extensive evaluations and find that WANBIA is a competitive alternative to state-of-the-art classifiers like Random Forest, Logistic Regression and A1DE. © 2013 Nayyar A. Zaidi, Jesus Cerquides, Mark J. Carman and Geoffrey I. Webb.
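
A minimal sketch of attribute-weighted naive Bayes classification in the spirit of WANBIA: each attribute's conditional probability is raised to a weight w_i, so setting every w_i = 1 recovers standard naive Bayes, while smaller weights discount attributes whose redundancy violates the conditional independence assumption. The toy data, fixed weights and Laplace smoothing below are illustrative assumptions; WANBIA itself learns the weights by optimizing the objectives named above.

```python
import math
from collections import Counter, defaultdict

def fit_counts(X, y):
    """Estimate class counts and per-(attribute, class) value counts."""
    class_counts = Counter(y)
    cond_counts = defaultdict(Counter)  # (attribute index, class) -> Counter of values
    for xi, yi in zip(X, y):
        for a, v in enumerate(xi):
            cond_counts[(a, yi)][v] += 1
    return class_counts, cond_counts

def weighted_log_posteriors(x, class_counts, cond_counts, weights, alpha=1.0):
    """Unnormalised scores log P(c) + sum_i w_i * log P(x_i | c), Laplace-smoothed."""
    n = sum(class_counts.values())
    scores = {}
    for c, nc in class_counts.items():
        s = math.log(nc / n)
        for a, v in enumerate(x):
            counts = cond_counts[(a, c)]
            p = (counts[v] + alpha) / (nc + alpha * max(1, len(counts)))
            s += weights[a] * math.log(p)
        scores[c] = s
    return scores

# Toy usage: the second attribute duplicates the first, so down-weighting it
# compensates for the violated conditional independence assumption.
X = [("a", "a"), ("a", "a"), ("b", "b"), ("b", "a")]
y = ["pos", "pos", "neg", "neg"]
class_counts, cond_counts = fit_counts(X, y)
print(weighted_log_posteriors(("a", "a"), class_counts, cond_counts, weights=[1.0, 0.3]))
```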


Garcia-Cerdana A.,Artificial Intelligence Research Institute | Armengol E.,Artificial Intelligence Research Institute | Esteva F.,Artificial Intelligence Research Institute
International Journal of Approximate Reasoning | Year: 2010

Description Logics (DLs) are knowledge representation languages built on the basis of classical logic. DLs allow the creation of knowledge bases and provide ways to reason about the contents of these bases. Fuzzy Description Logics (FDLs) are natural extensions of DLs for dealing with vague concepts, which are common in real applications. Hájek proposed to base FDLs on t-norm based fuzzy logics, with the aim of enriching the expressive possibilities of FDLs and of capitalizing on recent developments in the field of Mathematical Fuzzy Logic. From this perspective we define a family of description languages, denoted by ALC*(S), which includes truth constants for representing truth degrees. Having truth constants in the language allows us to define the axioms of the knowledge bases as sentences of a predicate language in much the same way as in classical DLs. On the other hand, taking advantage of the expressive power provided by these truth constants, we define graded notions of satisfiability, validity and subsumption of DL concepts as the satisfiability, validity and subsumption of evaluated formulas. In the last section we summarize some results concerning the fuzzy logics associated with these new description languages, analyze aspects relative to general and canonical semantics, and prove some results on canonical standard completeness for some of the FDLs considered in the paper. © 2010 Elsevier Inc. All rights reserved.
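
Schematically (in my own notation, which may differ in detail from the paper's), a truth constant lets graded assertions and graded subsumption be written as evaluated formulas of the underlying t-norm based fuzzy predicate logic:

```latex
% Schematic only: with a truth constant \overline{r} in the language, a graded
% assertion and a graded subsumption become evaluated formulas.
\begin{align*}
  \text{``$a$ is an instance of $C$ to degree at least $r$'':}\quad
      & \overline{r} \rightarrow C(a) \\
  \text{``$C$ is subsumed by $D$ to degree at least $r$'':}\quad
      & \overline{r} \rightarrow (\forall x)\,\bigl(C(x) \rightarrow D(x)\bigr)
\end{align*}
% With a residuated implication, such an evaluated formula takes value 1 in an
% interpretation exactly when the right-hand formula holds to degree at least r,
% which is what makes the graded notions reducible to ordinary satisfiability.
```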


Torra V.,Artificial Intelligence Research Institute
International Journal of Intelligent Systems | Year: 2010

Several extensions and generalizations of fuzzy sets have been introduced in the literature, for example, Atanassov's intuitionistic fuzzy sets, type-2 fuzzy sets, and fuzzy multisets. In this paper, we propose hesitant fuzzy sets. Although from a formal point of view they can be seen as fuzzy multisets, we show that their interpretation differs from the two existing approaches to fuzzy multisets. For this reason, together with their definition, we also introduce some basic operations. In addition, we study their relationship with intuitionistic fuzzy sets. We prove that the envelope of a hesitant fuzzy set is an intuitionistic fuzzy set, and that the operations we propose are consistent with the ones of intuitionistic fuzzy sets when applied to the envelope of the hesitant fuzzy sets. © 2010 Wiley Periodicals, Inc.
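
The envelope construction can be sketched directly: for a hesitant fuzzy element given by a set of possible membership degrees, the intuitionistic pair is (minimum of the set, 1 minus the maximum of the set). A minimal Python sketch, with illustrative values:

```python
# Envelope of a hesitant fuzzy element: a hesitant fuzzy set assigns to each
# element a set of possible membership degrees in [0, 1]; its envelope is the
# intuitionistic pair (membership, non-membership) = (min of the set, 1 - max).

def envelope(hesitant_degrees):
    """Map a non-empty set of membership degrees to an intuitionistic pair (mu, nu)."""
    mu = min(hesitant_degrees)          # guaranteed membership
    nu = 1.0 - max(hesitant_degrees)    # guaranteed non-membership
    assert mu >= 0.0 and nu >= 0.0 and mu + nu <= 1.0 + 1e-12  # IFS condition
    return mu, nu

# An expert hesitating between three membership degrees for the same element.
print(envelope({0.3, 0.5, 0.6}))  # -> (0.3, 0.4), a valid intuitionistic pair
```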


Ontanon S.,Artificial Intelligence Research Institute | Plaza E.,Artificial Intelligence Research Institute
Machine Learning | Year: 2012

Similarity assessment plays a key role in lazy learning methods such as k-nearest neighbor or case-based reasoning, and similarity also plays a crucial role in support vector machines. In this paper we show how refinement graphs, which were originally introduced for inductive learning, can be employed to assess and reason about similarity. We define and analyze two similarity measures, Sλ and Sπ, based on refinement graphs. The anti-unification-based similarity, Sλ, assesses similarity by finding the anti-unification of two instances, which is a description capturing all the information common to these two instances. The property-based similarity, Sπ, is based on a process of disintegrating the instances into a set of properties and then analyzing these property sets. Moreover, these similarity measures are applicable to any representation language for which a refinement graph satisfying the requirements we identify can be defined. Specifically, we present a refinement graph for feature terms, in which several languages of increasing expressiveness can be defined. The similarity measures are empirically evaluated on relational data sets belonging to languages of different expressiveness. © 2011 The Author(s).
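
As a rough illustration of the property-based intuition behind Sπ (a deliberate simplification that ignores feature terms and refinement graphs), one can decompose instances into sets of atomic properties and score similarity by how much is shared relative to what is specific to each instance:

```python
# Toy illustration of the property-based idea: shared information counted once,
# instance-specific information penalised. Not the paper's feature-term machinery.
def property_similarity(props_a, props_b):
    shared = props_a & props_b
    only_a = props_a - props_b
    only_b = props_b - props_a
    return len(shared) / (len(shared) + len(only_a) + len(only_b))

train_a = {"engine=diesel", "doors=4", "colour=red"}
train_b = {"engine=diesel", "doors=2", "colour=red"}
print(property_similarity(train_a, train_b))  # 2 shared of 4 distinct properties -> 0.5
```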


Vinyals M.,Artificial Intelligence Research Institute | Rodriguez-Aguilar J.A.,Artificial Intelligence Research Institute | Cerquides J.,University of Barcelona
Autonomous Agents and Multi-Agent Systems | Year: 2011

In this paper we propose a novel message-passing algorithm, the so-called Action-GDL, as an extension of the generalized distributive law (GDL) to efficiently solve distributed constraint optimization problems (DCOPs). Action-GDL provides a unifying perspective on several dynamic programming DCOP algorithms that are based on GDL, such as the DPOP and DCPOP algorithms. We empirically show how Action-GDL, using a novel distributed post-processing heuristic, can outperform DCPOP, and by extension DPOP, even when the latter uses the best arrangement provided by multiple state-of-the-art heuristics. © 2010 The Author(s).
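
The dynamic-programming flavour that Action-GDL shares with DPOP can be seen on a toy two-variable problem; the UTIL/VALUE phases sketched below follow the standard DPOP pattern and are not specific to Action-GDL's post-processing heuristic.

```python
# Minimal sketch of GDL-style utility propagation on a two-variable tree:
# the child projects its constraint onto the parent's variable (UTIL phase),
# the parent picks its best value and the child's best response follows (VALUE phase).
DOMAIN = ["a", "b"]

def utility(x1, x2):
    """Utility of the single binary constraint between parent x1 and child x2."""
    table = {("a", "a"): 3, ("a", "b"): 1, ("b", "a"): 0, ("b", "b"): 5}
    return table[(x1, x2)]

# UTIL phase: the child eliminates its own variable, keeping the best utility per x1.
util_message = {x1: max(utility(x1, x2) for x2 in DOMAIN) for x1 in DOMAIN}

# VALUE phase: the root picks the value of x1 maximising the received message,
# then the child reconstructs its optimal assignment given that choice.
best_x1 = max(DOMAIN, key=lambda x1: util_message[x1])
best_x2 = max(DOMAIN, key=lambda x2: utility(best_x1, x2))
print(best_x1, best_x2, util_message[best_x1])  # -> b b 5
```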


Vinyals M.,Artificial Intelligence Research Institute | Rodriguez-Aguilar J.A.,Artificial Intelligence Research Institute | Cerquides J.,University of Barcelona
Computer Journal | Year: 2011

Sensor networks (SNs) have arisen as one of the most promising technologies for the coming decades. The recent emergence of small and inexpensive sensors based upon microelectromechanical systems eases the development and proliferation of this kind of network in a wide range of real-world applications. Multiagent systems (MAS) have been identified as one of the most suitable technologies to contribute to the deployment of SNs that exhibit flexibility, robustness and autonomy. The purpose of this survey is twofold. On the one hand, we review the most relevant contributions of agent technologies to this emerging application domain. On the other hand, we identify the challenges that researchers must address to establish MAS as the key enabling technology for SNs. © The Author 2010. Published by Oxford University Press on behalf of The British Computer Society. All rights reserved.


Navarro-Arribas G.,Autonomous University of Barcelona | Torra V.,Artificial Intelligence Research Institute
Information Fusion | Year: 2012

In this paper, we review the role of information fusion in data privacy. To that end, we introduce data privacy and describe how information and data fusion are used in some of its fields. Our study focuses on the use of aggregation for privacy protection and on record linkage techniques. © 2011 Elsevier B.V. All rights reserved.
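
As one concrete example of aggregation used for privacy protection (a standard masking method chosen here for illustration, not an algorithm described in the survey), univariate microaggregation replaces each value by the mean of a group of at least k nearby values, so that every published value is shared by at least k records:

```python
# Minimal univariate microaggregation sketch with fixed group size k;
# the data and k are illustrative.
def microaggregate(values, k=3):
    order = sorted(range(len(values)), key=lambda i: values[i])
    groups = [order[i:i + k] for i in range(0, len(order), k)]
    if len(groups) > 1 and len(groups[-1]) < k:   # merge a short tail into the previous group
        groups[-2].extend(groups.pop())
    masked = [0.0] * len(values)
    for group in groups:
        mean = sum(values[i] for i in group) / len(group)
        for i in group:
            masked[i] = mean
    return masked

salaries = [21, 23, 25, 40, 41, 43, 90]
print(microaggregate(salaries, k=3))  # -> [23.0, 23.0, 23.0, 53.5, 53.5, 53.5, 53.5]
```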


Atencia M.,French Institute for Research in Computer Science and Automation | Atencia M.,Joseph Fourier University | Schorlemmer M.,Artificial Intelligence Research Institute | Schorlemmer M.,University of Barcelona
Journal of Web Semantics | Year: 2012

We tackle the problem of semantic heterogeneity in the context of agent communication and argue that solutions based solely on ontologies and ontology matching do not adequately capture the richness of semantics as it arises in dynamic and open multiagent systems. Current solutions to the semantic heterogeneity problem in distributed systems usually do not address the contextual nuances of the interaction underlying agent communication. The meaning an agent attaches to its utterances is, in our view, relative to the particular dialogue in which it may be engaged, and the interaction model specifying that dialogue's structure and unfolding should not be left out of the semantic alignment mechanism. In this article we provide the formal foundation of a novel, interaction-based approach to semantic alignment, drawing on a mathematical construct inspired by category theory that we call the communication product. In addition, we describe a simple alignment protocol which, combined with a probabilistic matching mechanism, endows an agent with the capacity to bootstrap, by repeated successful interaction, the basic semantic relationship between its local vocabulary and that of another agent. We have also implemented the alignment technique based on this approach and demonstrate its viability by means of an abstract experimentation and a thorough statistical analysis. © 2011 Elsevier B.V. All rights reserved.
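
The bootstrapping intuition can be caricatured with a simple frequency-counting learner; this is a generic illustration, not the communication product or the paper's actual protocol: an agent records which foreign term co-occurs with each of its local terms in interactions that succeed, and over time the relative frequencies single out the most plausible mapping.

```python
from collections import defaultdict

class AlignmentLearner:
    """Bootstraps local->foreign vocabulary mappings from successful interactions."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))  # local -> foreign -> successes

    def observe_success(self, local_term, foreign_term):
        self.counts[local_term][foreign_term] += 1

    def best_match(self, local_term):
        candidates = self.counts[local_term]
        if not candidates:
            return None, 0.0
        total = sum(candidates.values())
        foreign, hits = max(candidates.items(), key=lambda kv: kv[1])
        return foreign, hits / total   # empirical confidence in the mapping

learner = AlignmentLearner()
for _ in range(8):
    learner.observe_success("price", "prix")
learner.observe_success("price", "prise")      # one misleading success
print(learner.best_match("price"))             # -> ('prix', 0.888...)
```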


Schorlemmer M.,Artificial Intelligence Research Institute | Robertson D.,University of Edinburgh
IEEE Transactions on Knowledge and Data Engineering | Year: 2011

We address the problem of how to reason about properties of knowledge transformations as they occur in distributed and decentralized interactions between large and complex artifacts, such as databases, web services, and ontologies. Based on the conceptual distinction between specifications of interactions and properties of the knowledge transformations that follow from these interactions, we explore a novel mixture of process calculus and property inference by connecting interaction models with knowledge transformation rules. We aim to be generic in our exploration, hence our emphasis on abstract knowledge transformations, although we exemplify the approach using a lightweight specification language for interaction modeling (for which an executable peer-to-peer environment already exists) and provide a formal semantics for knowledge transformation rules using the theory of institutions. Consequently, our exploration is also an example of the gain obtained by linking current state-of-the-art distributed knowledge engineering, based on web services and peer-based architectures, with formal methods drawn from a long tradition in algebraic specification. © 2006 IEEE.


Ontanon S.,Artificial Intelligence Research Institute | Meseguer P.,Artificial Intelligence Research Institute
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2012

Feature terms are a generalization of first-order terms that were introduced in theoretical computer science in order to formalize object-oriented capabilities of declarative languages, and they have recently received increased attention for their usefulness in structured machine learning applications. The main obstacle with feature terms (as well as with other formal representation languages like Horn clauses or Description Logics) is that basic operations such as subsumption have a very high computational cost. In this paper we model subsumption, anti-unification and unification using constraint programming (CP), solving these operations more efficiently than with traditional methods. © 2012 Springer-Verlag Berlin Heidelberg.
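
To give a flavour of the constraint view of subsumption (heavily simplified: flat sorts, single-valued features, and brute-force search standing in for a CP solver), one can look for a root-preserving mapping from the subsumer's nodes to the subsumee's nodes that respects sorts and feature edges:

```python
# Toy sketch: each node of the subsumer (pattern) gets a variable whose domain
# is the set of nodes of the subsumee (instance); constraints require sorts to
# match and every feature edge to be preserved. The terms are illustrative.
from itertools import product

# A term is {node: (sort, {feature: target_node})}.
PATTERN = {"p0": ("person", {"lives_in": "p1"}), "p1": ("city", {})}
INSTANCE = {"i0": ("person", {"lives_in": "i1", "works_in": "i2"}),
            "i1": ("city", {}), "i2": ("city", {})}

def subsumes(pattern, instance, pattern_root="p0", instance_root="i0"):
    p_nodes, i_nodes = list(pattern), list(instance)
    for assignment in product(i_nodes, repeat=len(p_nodes)):   # candidate mappings
        m = dict(zip(p_nodes, assignment))
        if m[pattern_root] != instance_root:
            continue
        ok = all(pattern[p][0] == instance[m[p]][0] and                   # sorts agree
                 all(f in instance[m[p]][1] and instance[m[p]][1][f] == m[t]
                     for f, t in pattern[p][1].items())                   # features preserved
                 for p in p_nodes)
        if ok:
            return True
    return False

print(subsumes(PATTERN, INSTANCE))  # -> True: the pattern generalises the instance
```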
