Martinez R.G.,University of Lisbon |
Lopes A.,LASIGE |
Rodrigues L.,University of Lisbon
Proceedings of the ACM Symposium on Applied Computing | Year: 2017
Cloud computing has enabled a myriad of applications to benefit from the dynamic allocation of resources. Yet, as resource offerings and pricing schemes vary, selecting the best configuration to adapt to environmental conditions and business goals becomes too hard a problem to solve manually. Humans may fail to anticipate conditions that arise at run-time or to elaborate complex adaptation plans. Hence, experts' know-how should be complemented with intelligent tools. Automated planning presents itself as a promising solution to complex decision-making when strategic plans must be found over a multidimensional space. We address the challenges of relying on automated planning for the generation of high-level adaptation policies, namely: representing the adaptation problem in a planning language, taking advantage of off-the-shelf planners to generate and select adequate plans, and incorporating techniques to scan initial system states and merge plans under common conditions. © 2017 ACM.
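The core idea of automated planning over system states can be illustrated with a minimal forward state-space search. This is only a sketch: the facts and actions (`high_load`, `scale_out`, `rebalance`) are hypothetical examples, not taken from the paper, and real planners use richer languages such as PDDL.

```python
from collections import deque

def plan(initial, goal, actions):
    """Breadth-first forward search over system states.

    States are frozensets of facts; each action is a tuple
    (name, preconditions, add_facts, del_facts).
    Returns the shortest action sequence reaching the goal, or None.
    """
    frontier = deque([(initial, [])])
    seen = {initial}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:          # all goal facts hold
            return steps
        for name, pre, add, delete in actions:
            if pre <= state:       # action is applicable
                nxt = (state - delete) | add
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

# Hypothetical adaptation actions for a cloud deployment.
ACTIONS = [
    ("scale_out", frozenset({"high_load"}),
                  frozenset({"extra_vm"}), frozenset()),
    ("rebalance", frozenset({"extra_vm"}),
                  frozenset({"latency_ok"}), frozenset({"high_load"})),
]

initial = frozenset({"high_load"})
goal = frozenset({"latency_ok"})
print(plan(initial, goal, ACTIONS))  # ['scale_out', 'rebalance']
```

An off-the-shelf planner performs essentially this search, with heuristics, over a declaratively specified adaptation problem.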
Pesquita C.,LASIGE |
Pesquita C.,University of Lisbon |
Ferreira J.D.,LASIGE |
Ferreira J.D.,University of Lisbon |
And 4 more authors.
Journal of Biomedical Semantics | Year: 2014
Background: Epidemiology is a data-intensive and multi-disciplinary subject, where data integration, curation and sharing are becoming increasingly relevant, given its global context and time constraints. The semantic annotation of epidemiology resources is a cornerstone to effectively support such activities. Although several ontologies cover some of the subdomains of epidemiology, we identified a lack of semantic resources for epidemiology-specific terms. This paper addresses this need by proposing the Epidemiology Ontology (EPO) and by describing its integration with other related ontologies into a semantically enabled platform for sharing epidemiology resources. Results: The EPO follows the OBO Foundry guidelines and uses the Basic Formal Ontology (BFO) as an upper ontology. The first version of EPO models several epidemiology and demography parameters, as well as the transmission of infection processes, participants and related procedures. It currently has nearly 200 classes and is designed to support the semantic annotation of epidemiology resources and data integration, as well as information retrieval and knowledge discovery activities. Conclusions: EPO is under active development and is freely available at https://code.google.com/p/epidemiology-ontology/. We believe that the annotation of epidemiology resources with EPO will help researchers to gain a better understanding of global epidemiological events by enhancing data integration and sharing. © 2014 Pesquita et al.; licensee BioMed Central Ltd.
Teixeira A.L.,LaSIGE |
Teixeira A.L.,University of Lisbon |
Leal J.P.,University of Lisbon |
Journal of Cheminformatics | Year: 2013
Background: One of the main topics in the development of quantitative structure-property relationship (QSPR) predictive models is the identification of the subset of variables that represent the structure of a molecule and are predictors for a given property. There are several automated feature selection methods, ranging from backward, forward or stepwise procedures to more elaborate methodologies such as evolutionary programming. The problem lies in selecting the minimum subset of descriptors that can predict a certain property with good performance, computational efficiency and robustness, since the presence of irrelevant or redundant features can cause poor generalization. In this paper, an alternative selection method, based on Random Forests to determine variable importance, is proposed in the context of QSPR regression problems, with an application to a manually curated dataset for predicting the standard enthalpy of formation. The subsequent predictive models are trained with support vector machines, introducing the variables sequentially from a list ranked by variable importance. Results: The model generalizes well even with a high-dimensional dataset and in the presence of highly correlated variables. The feature selection step was shown to yield lower prediction errors, with RMSE values 23% lower than without feature selection, albeit using only 6% of the total number of variables (89 of the original 1485). The proposed approach also compared favourably with other feature selection methods and with dimension reduction of the feature space. The predictive model was selected using a 10-fold cross-validation procedure and, after selection, was validated with an independent set to assess its performance on new data; the results were similar to those obtained for the training set, supporting the robustness of the proposed approach.
Conclusions: The proposed methodology improves the prediction of the standard enthalpy of formation of hydrocarbons using a limited set of molecular descriptors, providing faster and more cost-effective descriptor calculation by reducing their number, and offering a better understanding of the underlying relationship between the molecular structure, as represented by the descriptors, and the property of interest. © 2013 Teixeira et al.; licensee Chemistry Central Ltd.
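The sequential-introduction step described above can be sketched as a greedy loop over an importance-ranked feature list. This is a toy illustration, not the paper's implementation: the scorer below is a synthetic stand-in for cross-validated SVM error, and the feature names are invented.

```python
def select_by_ranked_importance(ranked_features, score_subset):
    """Greedy sequential selection: introduce features in order of a
    precomputed importance ranking (e.g. from Random Forests) and keep
    the prefix with the lowest validation error.

    score_subset(features) should return a cross-validated error
    (lower is better), e.g. the RMSE of an SVM trained on that subset.
    """
    best_error, best_subset = float("inf"), []
    current = []
    for feat in ranked_features:
        current.append(feat)
        err = score_subset(tuple(current))
        if err < best_error:
            best_error, best_subset = err, list(current)
    return best_subset, best_error

# Toy scorer: error drops while informative features are added,
# then rises as redundant ones dilute the model.
informative = {"f1", "f2", "f3"}
def toy_score(subset):
    hits = sum(f in informative for f in subset)
    return 1.0 - 0.2 * hits + 0.05 * (len(subset) - hits)

ranking = ["f1", "f2", "f3", "f4", "f5"]  # as ranked by importance
subset, err = select_by_ranked_importance(ranking, toy_score)
print(subset)  # ['f1', 'f2', 'f3']
```

The ranking prunes the search from an exponential subset space to a linear scan, which is why 89 of 1485 descriptors can be found efficiently.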
Alves F.,LaSIGE |
Cogo V.,University of Lisbon |
Wandelt S.,Humboldt University of Berlin |
Leser U.,Humboldt University of Berlin |
Bessani A.,University of Lisbon
PLoS ONE | Year: 2015
The decreasing cost of genome sequencing is creating a demand for scalable storage and processing tools and techniques to deal with the large amounts of generated data. Referential compression is one such technique, in which the similarity between the DNA of organisms of the same or an evolutionarily close species is exploited to reduce the storage demands of genome sequences by up to 700 times. The general idea is to store in the compressed file only the differences between the to-be-compressed sequence and a well-known reference sequence. In this paper, we propose a method for improving the performance of referential compression by removing the most costly phase of the process, the complete indexing of the reference. Our approach, called On-Demand Indexing (ODI), compresses human chromosomes five to ten times faster than other state-of-the-art tools (on average), while achieving similar compression ratios. Copyright: © 2015 Alves et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
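The referential idea, and the flavor of building the reference index lazily rather than up front, can be shown in a small sketch. This is not the ODI algorithm itself, just a toy illustration under simplifying assumptions (greedy left-to-right matching, a fixed seed length, literal fallback for unmatched tails).

```python
K = 8  # seed length for matching

def compress(target, reference):
    """Referential compression sketch: emit (ref_pos, length) matches
    against the reference, falling back to literal characters.
    The k-mer index of the reference is built lazily ("on demand"):
    positions are indexed only until the needed k-mer is found,
    instead of indexing the whole reference first.
    """
    index = {}          # k-mer -> first position seen in reference
    indexed_upto = 0    # how far the reference has been indexed
    out, i = [], 0
    while i < len(target):
        kmer = target[i:i + K]
        while kmer not in index and indexed_upto + K <= len(reference):
            index.setdefault(reference[indexed_upto:indexed_upto + K],
                             indexed_upto)
            indexed_upto += 1
        if len(kmer) == K and kmer in index:
            pos, length = index[kmer], K
            # Greedily extend the seed match as far as it goes.
            while (i + length < len(target)
                   and pos + length < len(reference)
                   and target[i + length] == reference[pos + length]):
                length += 1
            out.append((pos, length))
            i += length
        else:
            out.append(target[i])   # literal (e.g. around a SNP)
            i += 1
    return out

def decompress(ops, reference):
    parts = []
    for op in ops:
        if isinstance(op, tuple):
            pos, length = op
            parts.append(reference[pos:pos + length])
        else:
            parts.append(op)
    return "".join(parts)

reference = "ACGTACGTGGTTAACCGGAT"
target    = "ACGTACGTGGTTCACCGGAT"   # one substitution vs. reference
ops = compress(target, reference)
print(ops[0])                        # (0, 12): long match before the SNP
```

The compressed output is dominated by a few long (position, length) pairs when target and reference are similar, which is where the large compression ratios come from.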
Da Rosa Righi R.,Applied Computer Grad. Program |
Rodrigues V.F.,Applied Computer Grad. Program |
Da Costa C.A.,Applied Computer Grad. Program |
Chiwiacowsky L.,Applied Computer Grad. Program |
And 2 more authors.
Proceedings of IEEE/ACS International Conference on Computer Systems and Applications, AICCSA | Year: 2015
Electronic transactions have become the mainstream mechanism for performing commerce activities in our daily lives. The most common approach to processing them uses a switch that dispatches transactions to processing machines with the so-called Round-Robin scheduler. Considering this electronic funds transfer (EFT) scenario, we developed a framework called GetLB, which comprises not only a new and efficient scheduler but also a cooperative communication infrastructure for handling heterogeneous and dynamic environments. The GetLB scheduler uses a heuristic that combines static data from transactions with dynamic information from the processing nodes to overcome the limitations of Round-Robin-based scheduling. Scheduling efficiency results from the periodic interaction between the switching node and the processing machines, enabling local decision making with up-to-date information about the environment. Besides describing the model in detail, this article also presents a prototype evaluation using traces and configurations obtained from a real EFT company. The results show improvements in transaction makespan when comparing our approach with the traditional one over both homogeneous and heterogeneous clusters. © 2014 IEEE.
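The contrast with Round-Robin can be sketched as a dispatcher that picks the node with the lowest estimated completion time, combining a static per-transaction cost with dynamic node state. This is a simplified stand-in for the GetLB heuristic, with invented node names and capacities.

```python
def pick_node(nodes, tx_cost, load, capacity):
    """Load-aware dispatch sketch: choose the node with the lowest
    estimated completion time, combining the transaction's static cost
    estimate with the node's current queued work and relative speed.
    Round-Robin, by contrast, would ignore both load and capacity.
    """
    return min(nodes, key=lambda n: (load[n] + tx_cost) / capacity[n])

nodes = ["n1", "n2", "n3"]
capacity = {"n1": 1.0, "n2": 2.0, "n3": 1.0}   # n2 is twice as fast
load = {n: 0.0 for n in nodes}

# Dispatch a burst of transactions with heterogeneous costs.
for cost in [4, 1, 3, 2, 5, 1]:
    n = pick_node(nodes, cost, load, capacity)
    load[n] += cost

print(load)   # the fast node n2 absorbs most of the work
```

Keeping `load` up to date is where the periodic switch-to-node interaction described in the abstract comes in: stale load information degrades the decision back toward blind dispatching.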
Caires L.,New University of Lisbon |
De Nicola R.,University of Florence |
Pugliese R.,University of Florence |
Vasconcelos V.T.,LaSIGE |
Zavattaro G.,University of Bologna
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2011
Core calculi have been adopted in the Sensoria project with three main aims. First, they have been used to clarify and formally define the basic concepts that characterize the Sensoria approach to modeling service-oriented applications. Second, they are the formal models on which the Sensoria analysis techniques have been developed. Finally, they have been used to drive the implementation of prototypes of the Sensoria languages for programming actual service-based systems. This chapter reports on the Sensoria core calculi, presenting their syntax and intuitive semantics and describing their main features by means of a common running example, namely a Credit Request scenario taken from the Sensoria Finance case study. © 2011 Springer-Verlag Berlin Heidelberg.
Bessani A.,LaSIGE |
Neves N.F.,LaSIGE |
Verissimo P.,University of Luxembourg |
Dantas W.,Federal University of Santa Catarina |
And 4 more authors.
Computer Networks | Year: 2016
The paper addresses the problem of providing message latency and reliability assurances for control traffic in wide-area IP networks. This is an important problem for cloud services and other geo-distributed information infrastructures that entail inter-datacenter real-time communication. We present the design and validation of JITeR (Just-In-Time Routing), an algorithm that routes messages in a timely manner at the application layer using overlay networking and multihoming, leveraging the natural redundancy of wide-area IP networks. We implemented a prototype of JITeR that we evaluated experimentally by placing nodes in several regions of Amazon EC2. We also present a scenario-based (geo-distributed utility network) evaluation comparing JITeR with alternative overlay/multihoming routing algorithms, showing that it provides better timeliness and reliability guarantees. © 2016 Elsevier B.V. All rights reserved.
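The key idea of exploiting path redundancy under deadlines can be sketched as selecting, among the candidate routes that multihoming and overlay hops provide, the ones that meet the message deadline. This is only an illustration in the spirit of JITeR, with invented path names and numbers, not the published algorithm.

```python
def choose_routes(paths, deadline, k=2):
    """Deadline-aware route selection sketch: from the redundant paths
    offered by multihoming and overlay relays, pick up to k paths whose
    estimated latency meets the message deadline, preferring the most
    reliable (and, on ties, the fastest) ones.

    paths: list of (name, est_latency_ms, reliability) tuples.
    """
    feasible = [p for p in paths if p[1] <= deadline]
    feasible.sort(key=lambda p: (-p[2], p[1]))   # reliable first, then fast
    return [p[0] for p in feasible[:k]]

# Hypothetical measurements between two datacenters.
paths = [
    ("direct",     40, 0.95),
    ("overlay-eu", 70, 0.99),
    ("overlay-us", 120, 0.99),
    ("backup-isp", 60, 0.90),
]
print(choose_routes(paths, deadline=100))  # ['overlay-eu', 'direct']
```

Sending over more than one feasible path at once trades bandwidth for reliability, which matches the control-traffic setting where messages are small but deadlines are hard.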
Kreutz D.,University of Luxembourg |
Malichevskyy O.,LaSIGE |
Feitosa E.,IComp |
Cunha H.,IComp |
And 2 more authors.
Journal of Network and Computer Applications | Year: 2016
Authentication and authorization are two of the most important services of any IT infrastructure. Given the current state of cyber warfare, the security and dependability of such services is a first-class priority. For instance, the correct and continuous operation of identity providers (e.g., OpenID) and authentication, authorization and accounting services (e.g., RADIUS) is essential for all sorts of systems and infrastructures. As a step in this direction, we introduce a functional architecture and system design artifacts for prototyping fault- and intrusion-tolerant identification and authentication services. The feasibility and applicability of the proposed elements are evaluated through two distinct prototypes. Our findings indicate that building and deploying resilient and reliable critical services is achievable through a set of system design artifacts based on well-established concepts from the fields of security and dependability. Additionally, we provide an extensive evaluation of both the resilient RADIUS (R-RADIUS) and OpenID (R-OpenID) prototypes. We show that our solution makes services resilient against attacks without affecting their correct operation. Our results also indicate that the prototypes are capable of meeting the needs of small- to large-scale systems (e.g., IT infrastructures with 800k to 10M users). © 2016 Elsevier Ltd. All rights reserved.
Faria D.,LASIGE |
Pesquita C.,LASIGE |
Santos E.,LASIGE |
Cruz I.F.,University of Illinois at Chicago |
CEUR Workshop Proceedings | Year: 2013
AgreementMakerLight (AML) is an automated ontology matching framework based on element-level matching and the use of external resources as background knowledge. This paper describes the configuration of AML for the OAEI 2013 competition and discusses its results. As AML was a newly developed and still incomplete system, our focus in this year's OAEI was on the anatomy and large biomedical ontology tracks, wherein background knowledge plays a critical role. Nevertheless, AML was fairly successful in other tracks as well, showing that in many ontology matching tasks a lightweight approach based solely on element-level matching can compete with more complex approaches.
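Element-level matching means comparing classes by their own annotations (labels, synonyms) rather than by ontology structure. A minimal sketch using token-set Jaccard similarity on labels illustrates the idea; the class names and threshold are invented, and AML's actual matchers and use of background knowledge are richer than this.

```python
def tokens(label):
    """Normalize an ontology class label into a token set."""
    return frozenset(label.lower().replace("_", " ").split())

def match(onto_a, onto_b, threshold=0.4):
    """Element-level matching sketch: align classes whose label token
    sets overlap enough (Jaccard similarity). Real systems like AML
    also draw synonyms from background knowledge; this toy version
    compares labels only.
    """
    alignment = []
    for a in onto_a:
        best, best_sim = None, threshold
        for b in onto_b:
            ta, tb = tokens(a), tokens(b)
            sim = len(ta & tb) / len(ta | tb)
            if sim > best_sim:
                best, best_sim = b, sim
        if best is not None:
            alignment.append((a, best, round(best_sim, 2)))
    return alignment

A = ["heart valve", "blood vessel"]
B = ["valve of heart", "vessel", "nerve"]
print(match(A, B))
```

Because each comparison touches only the two labels, this style of matcher scales to the large biomedical ontologies mentioned above far more easily than structural approaches do.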
Kreutz D.,LaSIGE |
Feitosa E.,IComp |
Cunha H.,IComp |
Niedermayer H.,Institute of Informatics |
Kinkelin H.,Institute of Informatics
Proceedings - 9th International Conference on Availability, Reliability and Security, ARES 2014 | Year: 2014
We introduce a set of tools and techniques for increasing the resilience and trustworthiness of identity providers (IdPs) based on OpenID. To this end, we propose an architecture of specialized components capable of fulfilling the essential requirements for high availability, integrity and stronger confidentiality guarantees for sensitive data and operations. Additionally, we discuss how trusted components (e.g., TPMs, smart cards) can be used to provide remote attestation on the client and server sides, i.e., how to measure the trustworthiness of the system. The proposed solution outperforms related work in different respects, such as the countermeasures provided for different security issues, throughput, and the ability to tolerate arbitrary faults without compromising system operation. We evaluate the system's behavior under different circumstances, such as continuous faults and attacks. Furthermore, initial performance evaluations show that the system is capable of supporting environments with thousands of users. © 2014 IEEE.