University of Computer Science of Chile

www.ucinf.cl
Santiago, Chile

Donoso-Guzman I.,University of Computer Science of Chile
International Conference on Intelligent User Interfaces, Proceedings IUI | Year: 2017

Evidence-based health care (EBHC) is an important practice of medicine which provides systematic scientific evidence to answer clinical questions. Epistemonikos is one of the most important online systems in the field. Currently, many tasks within this system require a large amount of manual effort, which could be reduced by leveraging human-in-the-loop machine learning techniques. In this article we propose a system called EpistAid, which combines machine learning, relevance feedback and an interactive user interface to support Epistemonikos users in EBHC information filtering tasks. Copyright is held by the owner/author(s).
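
The relevance-feedback component described above can be illustrated with a minimal Rocchio-style update, a classic technique for folding user judgments back into a query; the vector representation and the alpha/beta/gamma weights below are illustrative assumptions, not EpistAid's actual design.

```python
# Minimal sketch of Rocchio-style relevance feedback: move the query vector
# toward documents the user marked relevant and away from non-relevant ones.
# All weights and the toy vocabulary are illustrative assumptions.
import numpy as np

def rocchio_update(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Return an updated query vector given user feedback."""
    q = alpha * query
    if len(relevant):
        q += beta * np.mean(relevant, axis=0)
    if len(nonrelevant):
        q -= gamma * np.mean(nonrelevant, axis=0)
    return q

# Toy usage: 3-term vocabulary, one relevant and one non-relevant abstract.
query = np.array([1.0, 0.0, 0.0])
relevant = np.array([[0.9, 0.4, 0.0]])
nonrelevant = np.array([[0.0, 0.1, 0.8]])
print(rocchio_update(query, relevant, nonrelevant))
```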


Robbes R.,University of Chile | Rothlisberger D.,University of Computer Science of Chile
IEEE International Working Conference on Mining Software Repositories | Year: 2013

The expertise of a software developer is said to be a crucial factor for the development time required to complete a task. Although this hypothesis is intuitive, research has not yet quantified the effect of developer expertise on development time. A related problem is that the design space for expertise metrics is large; out of the various automated expertise metrics proposed, we do not know which metric most reliably captures expertise. What prevents a proper evaluation of expertise metrics and their relation to development time is the lack of data on development tasks, such as their precise duration. Fortunately, such data is starting to become available in the form of growing developer interaction repositories. We show that applying MSR techniques to these developer interaction repositories gives us the necessary tools to perform such an evaluation. © 2013 IEEE.
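
One simple automated expertise metric of the kind the paper evaluates can be sketched directly from an interaction log: count how often a developer previously touched the files involved in a task. The event format and the counting rule below are assumptions for illustration.

```python
# Hedged sketch of a file-familiarity expertise metric computed from a
# developer interaction log; the (developer, file) event format is assumed.
from collections import Counter

def expertise(dev, task_files, interaction_log):
    """interaction_log: iterable of (developer, file) interaction events."""
    counts = Counter(f for d, f in interaction_log if d == dev)
    return sum(counts[f] for f in task_files)

log = [("alice", "Parser.java"), ("alice", "Parser.java"), ("bob", "UI.java")]
print(expertise("alice", ["Parser.java", "Lexer.java"], log))  # -> 2
```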


Barcelo P.,University of Chile | Libkin L.,University of Edinburgh | Reutter J.L.,University of Computer Science of Chile
Journal of the ACM | Year: 2014

Graph data appears in a variety of application domains, and many uses of it, such as querying, matching, and transforming data, naturally result in incompletely specified graph data, that is, graph patterns. While queries need to be posed against such data, techniques for querying patterns are generally lacking, and properties of such queries are not well understood. Our goal is to study the basics of querying graph patterns. The key features of patterns we consider here are node and label variables and edges specified by regular expressions. We provide a classification of patterns, and study standard graph queries on graph patterns. We give precise characterizations of both data and combined complexity for each class of patterns. If complexity is high, we do further analysis of features that lead to intractability, as well as lower-complexity restrictions. Since our patterns are based on regular expressions, query answering for them can be captured by a new automata model. These automata have two modes of acceptance: one captures queries returning nodes, and the other queries returning paths. We study properties of such automata, and the key computational tasks associated with them. Finally, we provide additional restrictions for tractability, and show that some intractable cases can be naturally cast as instances of constraint satisfaction problems. © 2014 ACM.
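
The regular-expression edges the abstract mentions are the basis of regular path queries, which can be evaluated by exploring the product of the graph and an automaton for the expression. The sketch below hand-codes a tiny NFA for the illustrative expression a b* (an assumption, not taken from the paper) and runs a BFS over graph-node/automaton-state pairs.

```python
# Sketch of regular path query evaluation: BFS over the product of a
# labeled graph and an NFA. Graph, expression, and NFA are toy assumptions.
from collections import deque

graph = {1: [("a", 2)], 2: [("b", 2), ("b", 3)], 3: []}
# NFA for a b*: state 0 --a--> 1, state 1 --b--> 1; state 1 is accepting.
nfa = {(0, "a"): {1}, (1, "b"): {1}}
accepting = {1}

def rpq(source):
    """Return graph nodes reachable from source along a path matching a b*."""
    seen, queue, answers = {(source, 0)}, deque([(source, 0)]), set()
    while queue:
        node, state = queue.popleft()
        if state in accepting:
            answers.add(node)
        for label, succ in graph[node]:
            for nxt in nfa.get((state, label), ()):
                if (succ, nxt) not in seen:
                    seen.add((succ, nxt))
                    queue.append((succ, nxt))
    return answers

print(rpq(1))  # -> {2, 3}
```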


Gutierrez C.,University of Chile | Hurtado C.A.,Adolfo Ibáñez University | Perez J.,University of Computer Science of Chile
Journal of Computer and System Sciences | Year: 2011

The Semantic Web is based on the idea of a common and minimal language to enable large quantities of existing data to be analyzed and processed. This triggers the need to develop the database foundations of this basic language, which is the Resource Description Framework (RDF). This paper addresses this challenge by: 1) developing an abstract model and query language suitable to formalize and prove properties about the RDF data and query language; 2) studying the RDF data model, minimal and maximal representations, as well as normal forms; 3) studying systematically the complexity of entailment in the model, and proving complexity bounds for the main problems; 4) studying the notions of query answering and containment arising in the RDF data model; and 5) proving complexity bounds for query answering and query containment. © 2010 Elsevier Inc. All rights reserved.
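
At its core, querying RDF amounts to finding variable bindings that map a set of triple patterns into the data, and a naive matcher makes that semantics concrete. The data, the "?"-prefix variable convention, and the backtracking strategy below are illustrative assumptions, not the paper's formal model.

```python
# Naive sketch of basic graph pattern matching over RDF-style triples.
# Terms starting with "?" are variables; everything else must match exactly.
def match(pattern, triples, binding=None):
    """Yield variable bindings that make every pattern triple a data triple."""
    binding = binding or {}
    if not pattern:
        yield binding
        return
    (s, p, o), rest = pattern[0], pattern[1:]
    for ts, tp, to in triples:
        b = dict(binding)
        ok = True
        for term, value in ((s, ts), (p, tp), (o, to)):
            if term.startswith("?"):
                if b.setdefault(term, value) != value:
                    ok = False
            elif term != value:
                ok = False
        if ok:
            yield from match(rest, triples, b)

data = [("alice", "knows", "bob"), ("bob", "knows", "carol")]
print(list(match([("?x", "knows", "?y"), ("?y", "knows", "?z")], data)))
```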


Aldunate R.,Catholic University of Temuco | Nussbaum M.,University of Computer Science of Chile
Computers in Human Behavior | Year: 2013

Technology adoption is usually modeled as a process with dynamic transitions between costs and benefits. Nevertheless, school teachers do not generally make effective use of technology in their teaching. This article describes a study designed to examine the interplay between two variables: the type of technology, in terms of its complexity of use, and the type of teacher, in terms of attitude towards innovation. The results from this study include: (a) elaboration of a characteristic teacher technology adoption process, based on an existing learning curve for new technology proposed for software development; and (b) presentation of exit points during the technology adoption process. This paper concludes that teachers who are early technology adopters and commit a significant portion of their time to incorporating educational technology into their teaching are more likely to adopt new technology, regardless of its complexity. However, teachers who are not early technology adopters and commit a small portion of their time to integrating educational technology are less likely to adopt new technology and are prone to abandoning the adoption at identified points in the process. © 2012 Elsevier Ltd. All rights reserved.


Garrido P.,University of Computer Science of Chile | Riff M.C.,University of Computer Science of Chile
Journal of Heuristics | Year: 2010

In this paper we propose and evaluate an evolutionary-based hyper-heuristic approach, called EH-DVRP, for solving hard instances of the dynamic vehicle routing problem. A hyper-heuristic is a high-level algorithm that generates or chooses a set of low-level heuristics in a common framework to solve the problem at hand. In our collaborative framework, we have included three different types of low-level heuristics: constructive, perturbative, and noise heuristics. Basically, the hyper-heuristic manages and evolves a sophisticated sequence of combinations of these low-level heuristics, which are sequentially applied in order to construct and improve partial solutions, i.e., partial routes. Our design considerations allow proper cooperation and communication among the low-level heuristics, so that the most promising sequence can be found to tackle partial states of the (dynamic) problem. Our approach has been evaluated using Kilby's benchmarks, which comprise a large number of instances with different topologies and degrees of dynamism, and we have compared it with some well-known methods proposed in the literature. The experimental results show that, due to the dynamic nature of the hyper-heuristic, our proposed approach is able to adapt to dynamic scenarios more naturally than low-level heuristics. Furthermore, the hyper-heuristic obtains high-quality solutions when compared with other (meta) heuristic-based methods. These findings justify the employment of hyper-heuristic techniques in such changing environments, and we believe that further contributions could be successfully made in related dynamic problems. © 2010 Springer Science+Business Media, LLC.
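
The control loop of such a hyper-heuristic can be sketched at a high level: apply an evolving sequence of low-level heuristics to the current solution and mutate the sequence when it stops paying off. The toy heuristics, objective, and acceptance rule below are assumptions for illustration, not the EH-DVRP operators.

```python
# High-level sketch of a hyper-heuristic loop over constructive,
# perturbative, and noise low-level heuristics. All operators and the
# toy objective are illustrative assumptions.
import random

def constructive(sol):  return sol + [max(0, 10 - len(sol))]  # extend solution
def perturbative(sol):  return sorted(sol)                    # local improvement
def noise(sol):         return sol[:-1] if sol and random.random() < 0.3 else sol

LOW_LEVEL = [constructive, perturbative, noise]

def hyper_heuristic(steps=50, seq_len=4):
    best, best_cost = [], float("inf")
    sequence = [random.choice(LOW_LEVEL) for _ in range(seq_len)]
    solution = []
    for _ in range(steps):
        for h in sequence:                     # apply the evolved sequence
            solution = h(solution)
        cost = abs(sum(solution) - 25)         # toy objective to minimize
        if cost < best_cost:
            best, best_cost = list(solution), cost
        else:                                  # no improvement: mutate sequence
            sequence[random.randrange(seq_len)] = random.choice(LOW_LEVEL)
    return best, best_cost

print(hyper_heuristic())
```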


Mery D.,University of Computer Science of Chile
IEEE/ASME Transactions on Mechatronics | Year: 2015

This paper presents a new methodology for identifying parts of interest inside a complex object using multiple X-ray views. The proposed method consists of five steps: 1) image acquisition, which acquires an image sequence where the parts of the object are captured from different viewpoints; 2) geometric model estimation, which establishes a multiple-view geometric model used to find the correct correspondence among different views; 3) single-view detection, which segments potential regions of interest in each view; 4) multiple-view detection, which matches and tracks potential regions based on similarity and geometric multiple-view constraints; and 5) analysis, which analyzes the tracked regions using multiple-view information, filtering out false alarms without eliminating existing parts of interest. In order to evaluate the effectiveness of the proposed method, the algorithm was tested on 32 cases (five applications using different segmentation approaches), yielding promising results: precision and recall were 95.7% and 93.9%, respectively. Additionally, the multiple-view information obtained from the tracked parts was effectively used for recognition purposes. In our recognition experiments, we obtained an accuracy of 96.5%. Validation experiments show that our approach achieves better performance than other representative methods in the literature. © 1996-2012 IEEE.
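
The five-step data flow can be made concrete with a skeleton in which each step is a stub; real implementations would involve image processing and multiple-view geometry (e.g., epipolar constraints). Everything below is a placeholder sketch, not the paper's algorithm.

```python
# Skeleton of the five-step multiple-view pipeline; all bodies are stubs
# shown only to make the data flow between the steps concrete.
from collections import Counter

def acquire_views():
    return ["view%d" % i for i in range(4)]               # step 1

def estimate_geometry(views):
    return {"model": "pairwise geometric relations"}      # step 2 (stub)

def detect_single_view(view):
    return [("candidate-part", view)]                     # step 3 (stub)

def match_and_track(regions, geometry):
    # Step 4 stub: keep labels detected in at least two views; a real
    # implementation would test similarity and epipolar consistency.
    counts = Counter(label for label, _ in regions)
    return [r for r in regions if counts[r[0]] >= 2]

def analyze(tracks):
    return tracks                                         # step 5 (stub)

views = acquire_views()
geometry = estimate_geometry(views)
regions = [r for v in views for r in detect_single_view(v)]
print(len(analyze(match_and_track(regions, geometry))), "tracked candidates")
```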


Montabone S.,University of Computer Science of Chile | Soto A.,University of Computer Science of Chile
Image and Vision Computing | Year: 2010

Human detection is a key ability for an increasing number of applications that operate in human-inhabited environments or need to interact with a human user. Currently, most successful approaches to human detection are based on background subtraction techniques that apply only to the case of static cameras or cameras with highly constrained motions. Furthermore, many applications rely on features derived from specific human poses, such as systems based on features of the human face, which is only visible when a person is facing the detecting camera. In this work, we present a new computer vision algorithm designed to operate with moving cameras and to detect humans in different poses under partial or complete views of the human body. We follow a standard pattern recognition approach based on four main steps: (i) preprocessing to achieve color constancy and stereo pair calibration, (ii) segmentation using depth continuity information, (iii) feature extraction based on visual saliency, and (iv) classification using a neural network. The main novelty of our approach lies in the feature extraction step, where we propose novel features derived from a visual saliency mechanism. In contrast to previous works, we do not use a pyramidal decomposition to run the saliency algorithm, but instead implement it at the original image resolution using the so-called integral image. Our results indicate that our method: (i) outperforms state-of-the-art techniques for human detection based on face detectors, (ii) outperforms state-of-the-art techniques for complete human body detection based on different sets of visual features, and (iii) operates in real time onboard a mobile platform, such as a mobile robot (15 fps). © 2009 Elsevier B.V. All rights reserved.
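
The integral-image trick mentioned above is what lets saliency be computed at the original resolution: after one cumulative-sum pass, any rectangular box sum costs O(1), so center-surround differences need no image pyramid. A minimal sketch of the standard summed-area table (not the paper's exact feature set) follows.

```python
# Sketch of the integral image (summed-area table): one cumulative-sum
# pass, then constant-time box sums for center-surround responses.
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row/column prepended."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(0).cumsum(1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in constant time."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

img = np.arange(16, dtype=float).reshape(4, 4)
ii = integral_image(img)
assert box_sum(ii, 1, 1, 3, 3) == img[1:3, 1:3].sum()
# A center-surround response at one location: center box minus surround ring.
center = box_sum(ii, 1, 1, 3, 3)
surround = box_sum(ii, 0, 0, 4, 4) - center
print(center, surround)
```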


Peralta B.,University of Computer Science of Chile | Soto A.,University of Computer Science of Chile
Information Sciences | Year: 2014

A useful strategy for dealing with complex classification scenarios is the "divide and conquer" approach. The mixture of experts (MoE) technique makes use of this strategy by jointly training a set of classifiers, or experts, that are specialized in different regions of the input space. A global model, or gate function, complements the experts by learning a function that weighs their relevance in different parts of the input space. Local feature selection appears as an attractive alternative to improve the specialization of experts and gate function, particularly in the case of high-dimensional data. In general, subsets of dimensions, or subspaces, are usually more appropriate to classify instances located in different regions of the input space. Accordingly, this work contributes a regularized variant of MoE that incorporates an embedded process for local feature selection using L1 regularization. Experiments using artificial and real-world datasets provide evidence that the proposed method improves on the classical MoE technique, in terms of accuracy and sparseness of the solution. Furthermore, our results indicate that the advantages of the proposed technique increase with the dimensionality of the data. © 2014 Elsevier Inc. All rights reserved.
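
The core idea, gate-weighted expert predictions plus an L1 penalty that drives per-expert sparsity, can be written down in a few lines. The toy linear experts, softmax gate, and lambda value below are illustrative assumptions rather than the paper's training procedure.

```python
# Hedged sketch of an L1-regularized mixture-of-experts objective:
# gate-weighted expert outputs plus a sparsity penalty on all weights.
import numpy as np

rng = np.random.default_rng(0)
D, K = 5, 3                               # input dims, number of experts
W_exp = rng.normal(size=(K, D))           # one linear expert per row
W_gate = rng.normal(size=(K, D))          # gate scores, softmax over experts

def moe_predict(x):
    gate = np.exp(W_gate @ x)
    gate /= gate.sum()                    # softmax gating weights
    return gate @ (W_exp @ x)             # weighted sum of expert outputs

def l1_regularized_loss(X, y, lam=0.1):
    preds = np.array([moe_predict(x) for x in X])
    mse = np.mean((preds - y) ** 2)
    l1 = np.abs(W_exp).sum() + np.abs(W_gate).sum()
    return mse + lam * l1                 # sparsity promotes local selection

X, y = rng.normal(size=(8, D)), rng.normal(size=8)
print(l1_regularized_loss(X, y))
```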


Alvarez C.,University of Computer Science of Chile | Brown C.,University of Computer Science of Chile | Nussbaum M.,University of Computer Science of Chile
Computers in Human Behavior | Year: 2011

With the recent appearance of netbooks and low-cost tablet PCs, a study was undertaken to explore their potential in the classroom and determine which of the two device types is more suitable in this setting. A collaborative learning activity based on these devices was implemented in 5 sessions of a graduate engineering course of 20 students, most of whom were aged 22-25 and enrolled in undergraduate computer science and information technology engineering programs. Student behavior attributes indicating oral and gesture-based communication were observed and evaluated. Our findings indicate that in the context in which this study was undertaken, tablet PCs strengthen collective discourse capabilities and facilitate a richer and more natural body language. The students preferred tablet PCs to netbooks and also indicated greater self-confidence in expressing their ideas with the tablet's digital ink and paper technology than with the netbooks' traditional vertical screen and keyboard arrangement. © 2010 Elsevier Ltd. All rights reserved.
