Valencia, Spain

Petcu D., West University of Timisoara | Gonzalez-Velez H., National College of Ireland | Nicolae B., IBM | Garcia-Gomez J.M., Polytechnic University of Valencia | And 2 more authors.
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2014

In spite of the rapid growth of Infrastructure-as-a-Service offerings, support for running large-scale data-intensive and scientific applications is still limited. On the user side, existing features and programming models are insufficiently developed to express an application in such a way that it can benefit from an elastic infrastructure that dynamically adapts to its requirements, which often leads to unnecessary over-provisioning and extra costs. On the provider side, key performance and scalability issues arise when dealing with the large groups of tightly coupled virtualized resources such applications need; this is especially challenging in the multi-tenant dimension, where sharing of physical resources introduces interference both inside and across large virtual machine deployments. This paper contributes a holistic vision of a tight integration between programming models, runtime middleware and the virtualization infrastructure, providing a framework that transparently handles the allocation and utilization of heterogeneous resources while dealing with performance and elasticity issues. © Springer International Publishing Switzerland 2014.
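The over-provisioning problem described in the abstract can be made concrete with a toy autoscaling heuristic. The function below is purely illustrative; its name, thresholds and growth policy are our own assumptions, not part of the paper's framework. It grows a VM pool when a window of utilisation samples is saturated and shrinks it when the pool is idle.

```python
# Toy elasticity heuristic (illustrative only, not from the paper):
# decide a new VM count from a window of per-VM utilisation samples,
# aiming to avoid the unnecessary over-provisioning the abstract describes.

def scale_decision(utilisations, current_vms, low=0.3, high=0.8):
    """Return the new VM count for a window of utilisation samples (0..1)."""
    avg = sum(utilisations) / len(utilisations)
    if avg > high:                       # saturated: grow the pool by ~25%
        return current_vms + max(1, current_vms // 4)
    if avg < low and current_vms > 1:    # mostly idle: release one VM
        return current_vms - 1
    return current_vms                   # within band: hold steady

print(scale_decision([0.9, 0.85, 0.95], 8))  # saturated window -> 10
print(scale_decision([0.1, 0.2, 0.15], 8))   # idle window -> 7
```

A real elastic runtime would of course also weigh VM start-up latency and the interference effects the abstract mentions, not just average utilisation.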


Marco-Ruiz L., University of Tromsø | Pedrinaci C., Open University Milton Keynes | Maldonado J.A., Polytechnic University of Valencia | Maldonado J.A., VeraTech for Health | And 3 more authors.
Journal of Biomedical Informatics | Year: 2016

Background: The high costs involved in the development of Clinical Decision Support Systems (CDSS) make it necessary to share their functionality across different systems and organizations. Service Oriented Architectures (SOA) have been proposed to allow reuse of CDSS by encapsulating them in a Web service. However, strong barriers to sharing CDS functionality remain as a consequence of the lack of expressiveness of service interfaces. Linked Services are the evolution of the Semantic Web Services paradigm to process Linked Data. They aim to provide semantic descriptions over SOA implementations to overcome the limitations derived from the syntactic nature of Web service technologies. Objective: To facilitate the publication, discovery and interoperability of CDS services by evolving them into Linked Services that expose their interfaces as Linked Data. Materials and methods: We developed methods and models to enhance CDS SOA as Linked Services that define a rich semantic layer, based on machine-interpretable ontologies, which powers their interoperability and reuse. These ontologies provide unambiguous descriptions of the properties of CDS services in order to expose them to the Web of Data. Results: We developed models compliant with Linked Data principles to create a semantic representation of the components that compose CDS services. To evaluate our approach, we implemented a set of CDS Linked Services using a Web service definition ontology. The definitions of the Web services were linked to the models developed in order to attach unambiguous semantics to the service components. All models were bound to SNOMED-CT and public ontologies (e.g. Dublin Core) in order to provide a lingua franca for exploring them. Discovery and analysis of CDS services based on machine-interpretable models was performed by reasoning over the ontologies built. Discussion: Linked Services can be used effectively to expose CDS services to the Web of Data by building on current CDS standards. This allows building shared Linked Knowledge Bases that provide machine-interpretable semantics for CDS service descriptions, alleviating the challenges of interoperability and reuse. Linked Services allow building 'digital libraries' of distributed CDS services that can be hosted and maintained in different organizations. © 2016 Elsevier Inc.
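As a rough illustration of what exposing a CDS service interface as Linked Data might look like, the sketch below builds a minimal JSON-LD-style service description in Python. The service name, IRIs and concept codes are invented placeholders (not verified SNOMED-CT identifiers), and the structure only loosely mirrors the kind of Dublin Core and SNOMED-CT bindings the paper describes.

```python
import json

# Minimal JSON-LD-style description of a hypothetical CDS Linked Service.
# All identifiers below are placeholders invented for illustration.
service = {
    "@context": {
        "dc": "http://purl.org/dc/terms/",     # Dublin Core terms namespace
        "sct": "http://snomed.info/id/",       # SNOMED-CT identifier namespace
    },
    "@id": "http://example.org/cds/diabetes-risk",
    "dc:title": "Diabetes risk assessment service",
    # Input/output bound to coded concepts so machines can discover the
    # service by what it consumes and produces (codes are placeholders):
    "input": {"concept": "sct:000000001", "label": "Fasting blood glucose"},
    "output": {"concept": "sct:000000002", "label": "Diabetes risk level"},
}

doc = json.dumps(service, indent=2)
print(doc)
```

The point of such a description is that a client can match the `input`/`output` concept bindings against its own terminology instead of parsing a purely syntactic WSDL-style interface.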


Bosca D., Instituto Universitario de Aplicaciones de las Tecnologías de la Información y de las Comunicaciones Avanzadas | Bosca D., Polytechnic University of Valencia | Moner D., Instituto Universitario de Aplicaciones de las Tecnologías de la Información y de las Comunicaciones Avanzadas | Maldonado J.A., Instituto Universitario de Aplicaciones de las Tecnologías de la Información y de las Comunicaciones Avanzadas | And 2 more authors.
Studies in Health Technology and Informatics | Year: 2015

Messaging standards, and specifically HL7 v2, are heavily used for the communication and interoperability of Health Information Systems. HL7 FHIR was created as an evolution of the messaging standards to achieve semantic interoperability. FHIR is somewhat similar to other approaches, such as the dual-model methodology, in that both are based on the precise modeling of clinical information. In this paper, we demonstrate how the dual-model methodology can be applied to standards like FHIR. We show the usefulness of this approach for data transformation between FHIR and other specifications such as HL7 CDA, EN ISO 13606, and openEHR. We also discuss the advantages and disadvantages of defining archetypes over FHIR, and the consequences and outcomes of this approach. Finally, we exemplify the approach by creating a testing data server that supports both FHIR resources and archetypes. © 2015 European Federation for Medical Informatics (EFMI).
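To give a flavour of the kind of data transformation the dual-model approach enables, the sketch below flattens a hand-written, heavily simplified FHIR-style Observation into archetype-like path/value pairs. Field names only loosely follow FHIR, and this is not a conformant implementation of any of the standards mentioned.

```python
# Flatten a nested, FHIR-style resource (simplified, hand-written) into
# slash-separated path/value pairs, the shape in which archetype-based
# systems typically address individual data points.

def flatten(resource, prefix=""):
    """Recursively flatten nested dicts into slash-separated paths."""
    items = {}
    for key, value in resource.items():
        path = f"{prefix}/{key}"
        if isinstance(value, dict):
            items.update(flatten(value, path))
        else:
            items[path] = value
    return items

observation = {
    "resourceType": "Observation",
    "code": {"text": "Body temperature"},
    "valueQuantity": {"value": 37.2, "unit": "Cel"},
}

paths = flatten(observation)
print(paths["/valueQuantity/value"])  # 37.2
```

A real transformation between FHIR, CDA, EN ISO 13606 and openEHR also has to reconcile terminologies and datatypes, not just structure, but path/value normalization is the common first step.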


Gonzalez-Ferrer A., Instituto de Investigación Sanitaria San Carlos (IdISSC) | Peleg M., University of Haifa | Marcos M., Jaume I University | Maldonado J.A., Polytechnic University of Valencia | Maldonado J.A., VeraTech for Health
Journal of Medical Systems | Year: 2016

Delivering patient-specific decision support based on computer-interpretable guidelines (CIGs) requires mapping CIG clinical statements (data items, clinical recommendations) onto patients' data. This is most effectively done via intermediate data schemas, which enable querying the data according to the semantics of a shared standard intermediate schema. This study evaluates the use of the HL7 virtual medical record (vMR) and openEHR archetypes as intermediate schemas for capturing clinical statements from CIGs that are mappable to electronic health records (EHRs) containing patient data and patient-specific recommendations. Using qualitative research methods, we analyzed the encoding of ten representative clinical statements, taken from two CIGs used in real decision-support systems, into two health information models (openEHR archetypes and HL7 vMR instances) by four experienced informaticians. Discussion among the modelers about each case-study example greatly increased our understanding of the capabilities of these standards, which we share in this educational paper. The two encodings differ in content and structure: the openEHR archetypes contain a greater level of representational detail and structure, while the vMR representations took fewer steps to complete. The use of openEHR in the encoding of CIG clinical statements could potentially facilitate applications other than decision support, including intelligent data analysis and integration of additional properties of data items from existing EHRs. On the other hand, due to their smaller size and fewer details, vMR representations potentially support quicker mapping of EHR data into clinical statements. © 2016, Springer Science+Business Media New York.
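The contrast the study observed can be caricatured with two hand-written dictionaries; neither is actual vMR or openEHR syntax, and the clinical statement, archetype id and field names are invented for illustration. Nesting depth stands in for the "greater level of representational detail" found in the openEHR encodings.

```python
# The same invented clinical statement, "serum creatinine 1.4 mg/dL",
# as a flat vMR-like observation versus a more deeply nested
# archetype-like entry (both shapes are illustrative, not standard syntax).

vmr_like = {
    "observationFocus": "serum creatinine",
    "value": 1.4,
    "unit": "mg/dL",
}

openehr_like = {
    "archetype_id": "openEHR-EHR-OBSERVATION.lab_test.v1",  # placeholder id
    "data": {
        "events": [{
            "data": {"items": [{
                "name": "serum creatinine",
                "value": {"magnitude": 1.4, "units": "mg/dL"},
            }]},
        }],
    },
}

def depth(node):
    """Maximum nesting depth of dicts/lists; scalars count as 0."""
    if isinstance(node, dict):
        return 1 + max((depth(v) for v in node.values()), default=0)
    if isinstance(node, list):
        return 1 + max((depth(v) for v in node), default=0)
    return 0

print(depth(vmr_like), depth(openehr_like))  # 1 8
```

The flat shape is quicker to fill in; the nested shape carries the event and item structure that makes secondary uses (querying, analysis) easier, matching the trade-off the abstract describes.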


Juan-Albarracin J., Polytechnic University of Valencia | Fuster-Garcia E., Polytechnic University of Valencia | Fuster-Garcia E., VeraTech for Health | Manjon J.V., Polytechnic University of Valencia | And 4 more authors.
PLoS ONE | Year: 2015

Automatic brain tumour segmentation has become a key component of future brain tumour treatment. Currently, most brain tumour segmentation approaches take a supervised learning standpoint, which requires a labelled training dataset from which to infer the models of the classes. The performance of these models is directly determined by the size and quality of the training corpus, whose retrieval is a tedious and time-consuming task. Unsupervised approaches avoid these limitations but often do not reach results comparable to those of supervised methods. In this context, we propose an automated unsupervised method for brain tumour segmentation based on anatomical Magnetic Resonance (MR) images. Four unsupervised classification algorithms, grouped by their structured or non-structured condition, were evaluated within our pipeline. As non-structured algorithms we evaluated K-means, Fuzzy K-means and Gaussian Mixture Models (GMM), whereas as a structured classification algorithm we evaluated the Gaussian Hidden Markov Random Field (GHMRF). An automated post-process based on a statistical approach, supported by tissue probability maps, is proposed to automatically identify the tumour classes after segmentation. We evaluated our brain tumour segmentation method with the public Brain Tumor Segmentation (BRATS) 2013 Test and Leaderboard datasets. Our approach based on the GMM improves the results obtained by most of the supervised methods evaluated on the Leaderboard set and reaches second position in the ranking. Our variant based on the GHMRF achieves first position in the Test ranking of the unsupervised approaches and seventh position in the overall Test ranking, which confirms the method as a viable alternative for brain tumour segmentation. © 2015 Juan-Albarracín et al.
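As a toy sketch of the unsupervised clustering at the core of such a pipeline, the snippet below runs a minimal 1-D K-means on synthetic "voxel intensities". The paper's pipeline works on multi-channel MR data with K-means, Fuzzy K-means, GMM and GHMRF; everything here (the data, the initial centroids, the two-cluster setup) is invented for illustration only.

```python
# Minimal 1-D K-means (Lloyd's algorithm) on synthetic intensities, as a
# sketch of unsupervised tissue clustering. Real pipelines cluster
# multi-channel voxel features, not scalars.

def kmeans_1d(values, centroids, iters=20):
    """Run Lloyd's algorithm on scalars from fixed initial centroids."""
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for v in values:
            # assign each value to its nearest centroid
            idx = min(range(len(centroids)), key=lambda i: abs(v - centroids[i]))
            clusters[idx].append(v)
        # recompute each centroid as the mean of its cluster
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

# Two well-separated synthetic intensity groups ("tissue" vs "lesion"):
intensities = [0.1, 0.12, 0.09, 0.11, 0.8, 0.82, 0.79, 0.85]
print(kmeans_1d(intensities, [0.0, 1.0]))  # converges near [0.105, 0.815]
```

The GMM variant the paper favours replaces the hard nearest-centroid assignment with per-cluster Gaussian likelihoods, and the GHMRF variant additionally makes neighbouring voxels prefer the same label.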
