Halpin T., LogicBlox | Halpin T., INTI Education Group
International Journal of Information System Modeling and Design | Year: 2010

Object-Role Modeling (ORM) is an approach for modeling and querying information at the conceptual level, and for transforming ORM models and queries to or from other representations. Unlike attribute-based approaches such as Entity-Relationship (ER) modeling and class modeling within the Unified Modeling Language (UML), ORM is fact-oriented, where all facts and rules are modeled in terms of natural sentences easily understood and validated by nontechnical business users. ORM's modeling procedure facilitates validation by verbalization and population with concrete examples. ORM's graphical notation is far more expressive than that of ER diagrams or UML class diagrams, and its attribute-free nature makes it more stable and adaptable to changing business requirements. This article explains the fundamentals of ORM, illustrates some of its advantages as a data modeling approach, and outlines some recent research to extend ORM, with special attention to mappings to deductive databases. Copyright © 2010, IGI Global.
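As a rough illustration of the fact-oriented style this abstract describes, the sketch below represents a single fact type with a uniqueness constraint and produces the kind of natural-language verbalization a business user could validate. The names, classes, and output are invented for illustration and do not reflect ORM's, NORMA's, or LogicBlox's actual syntax or APIs; the mapping of such constraints to deductive databases is outside its scope.

```python
# Minimal sketch, assuming invented names (not actual ORM tooling):
# a binary fact type with a uniqueness constraint on one role, and its
# verbalization in the "validation by verbalization" spirit described above.

from dataclasses import dataclass

@dataclass
class Role:
    object_type: str          # e.g. "Person" (entity type) playing this role

@dataclass
class FactType:
    reading: str              # predicate reading, e.g. "works for"
    roles: tuple              # roles played in this fact type
    unique_role: int          # index of the role spanned by the uniqueness constraint

    def verbalize_constraint(self) -> str:
        subject = self.roles[self.unique_role].object_type
        others = [r.object_type for i, r in enumerate(self.roles)
                  if i != self.unique_role]
        return f"Each {subject} {self.reading} at most one {others[0]}."

works_for = FactType(reading="works for",
                     roles=(Role("Person"), Role("Company")),
                     unique_role=0)

print(works_for.verbalize_constraint())
# -> Each Person works for at most one Company.
```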


Karvounarakis G., LogicBlox | Ives Z.G., University of Pennsylvania | Tannen V., University of Pennsylvania
Proceedings of the ACM SIGMOD International Conference on Management of Data | Year: 2010

Many advanced data management operations (e.g., incremental maintenance, trust assessment, debugging schema mappings, keyword search over databases, or query answering in probabilistic databases), involve computations that look at how a tuple was produced, e.g., to determine its score or existence. This requires answers to queries such as, "Is this data derivable from trusted tuples?"; "What tuples are derived from this relation?"; or "What score should this answer receive, given initial scores of the base tuples?". Such questions can be answered by consulting the provenance of query results. In recent years there has been significant progress on formal models for provenance. However, the issues of provenance storage, maintenance, and querying have not yet been addressed in an application-independent way. In this paper, we adopt the most general formalism for tuple-based provenance, semiring provenance. We develop a query language for provenance, which can express all of the aforementioned types of queries, as well as many more; we propose storage, processing and indexing schemes for data provenance in support of these queries; and we experimentally validate the feasibility of provenance querying and the benefits of our indexing techniques across a variety of application classes and queries. © 2010 ACM. Source
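To make the semiring idea concrete, the sketch below (tuple ids and annotations are invented; this is not the paper's query language or storage scheme) keeps a result tuple's provenance as an expression over base-tuple identifiers and evaluates it in two different semirings, answering two of the example questions from the abstract.

```python
# A minimal sketch of semiring provenance: + combines alternative derivations,
# * combines tuples used jointly, and the same expression can be evaluated
# under different semirings. All tuple ids and annotations are illustrative.

class Prov:
    """Provenance expression tree over base-tuple identifiers."""
    def __init__(self, op, children=(), leaf=None):
        self.op, self.children, self.leaf = op, tuple(children), leaf

    def eval(self, zero, one, plus, times, annotation):
        if self.op == "leaf":
            return annotation[self.leaf]
        acc = zero if self.op == "+" else one
        combine = plus if self.op == "+" else times
        for child in self.children:
            acc = combine(acc, child.eval(zero, one, plus, times, annotation))
        return acc

def t(tid):  return Prov("leaf", leaf=tid)      # base tuple
def p(*cs):  return Prov("+", cs)               # alternative derivations
def m(*cs):  return Prov("*", cs)               # joint use in one derivation

# A result tuple derivable either by joining t1 with t2, or directly from t3:
prov = p(m(t("t1"), t("t2")), t("t3"))

# "Is this answer derivable from trusted tuples only?"  (boolean trust semiring)
trusted = {"t1": True, "t2": False, "t3": True}
print(prov.eval(False, True, lambda a, b: a or b, lambda a, b: a and b, trusted))
# -> True (derivable via t3 alone)

# "How many derivations does this answer have?"  (counting semiring)
counts = {"t1": 1, "t2": 1, "t3": 1}
print(prov.eval(0, 1, lambda a, b: a + b, lambda a, b: a * b, counts))
# -> 2
```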


Halpin T., LogicBlox | Halpin T., INTI International University
Lecture Notes in Business Information Processing | Year: 2011

A conceptual data model for an information system specifies the fact structures of interest as well as the constraints and derivation rules that apply to the business domain being modeled. The languages for specifying these models may be graphical or textual, and may be based upon approaches such as Entity-Relationship modeling, class diagramming in the Unified Modeling Language, fact orientation (e.g. Object-Role Modeling), Semantic Web modeling (e.g. the Web Ontology Language), or deductive databases (e.g. datalog). Although these languages share many features, they also differ in fundamental ways that impact not only how, but which, aspects of a business domain may be specified. This paper provides a logical analysis and critical comparison of how such modeling languages deal with three main structural aspects: the entity/value distinction; existential facts; and entity reference schemes. The analysis has practical implications for modeling within a specific language and for transforming between languages. © 2011 Springer-Verlag.
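As a brief illustration of two of the structural aspects analyzed here, the following sketch (invented names, not drawn from any of the compared languages) contrasts a self-identifying value type with an entity type that must be identified through a reference scheme.

```python
# Illustrative sketch only: values identify themselves, whereas entities are
# identified via a reference scheme, i.e. an injective association with a value.

from dataclasses import dataclass

@dataclass(frozen=True)
class CountryCode:            # value type: the constant 'AU' denotes itself
    code: str

@dataclass(frozen=True)
class Country:                # entity type: identified only via its reference scheme
    surrogate: int            # internal identifier with no business meaning

# Reference scheme: each Country is identified by exactly one CountryCode.
country_by_code = {CountryCode("AU"): Country(1), CountryCode("NZ"): Country(2)}

# An existential fact ("there is a Country with CountryCode 'AU'") is what the
# mapping records, without asserting anything further about that country.
assert CountryCode("AU") in country_by_code
```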


Halpin T., LogicBlox | Halpin T., INTI Education Group | Wijbenga J.P., University of Groningen
Lecture Notes in Business Information Processing | Year: 2010

A conceptual schema of an information system specifies the fact structures of interest as well as the business rules that apply to the business domain being modeled. These rules, which may be complex, are best validated with subject matter experts, since they best understand the business domain. In practice, business domain experts often lack expertise in the technical languages used by modelers to capture or query the information model. Controlled natural languages offer a potential solution to this problem, by allowing business experts to validate models and queries expressed in language they understand, while still being executable, with automated generation of implementation code. This paper describes FORML 2, a controlled natural language based on ORM 2 (second generation Object-Role Modeling), featuring rich expressive power, intelligibility, and semantic stability. Design guidelines are discussed, as well as a prototype implemented as an extension to the open source NORMA (Natural ORM Architect) tool. © 2010 Springer-Verlag Berlin Heidelberg.
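The sketch below hints at how a verbalized rule can be made executable by checking it against a sample fact population, in the spirit of validation with concrete examples. The rule pattern, sample facts, and function are invented for illustration and are not actual FORML 2 syntax or NORMA behavior.

```python
# Hedged sketch: checking a rule of the form "Each A <predicate> at most one B"
# against a sample population of facts. All data below is invented.

from collections import defaultdict

facts = [
    ("Alice", "was born in", "Australia"),
    ("Bob",   "was born in", "Norway"),
    ("Bob",   "was born in", "Sweden"),   # violates the constraint checked below
]

def check_at_most_one(population, predicate):
    """Return the subjects violating 'Each subject <predicate> at most one object'."""
    objects = defaultdict(set)
    for subj, pred, obj in population:
        if pred == predicate:
            objects[subj].add(obj)
    return {subj for subj, objs in objects.items() if len(objs) > 1}

print(check_at_most_one(facts, "was born in"))   # -> {'Bob'}
```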


Theoharis Y., Institute of Computer Science, FORTH-ICS | Karvounarakis G., LogicBlox | Christophides V., University of Crete
IEEE Internet Computing | Year: 2011

Capturing trustworthiness, reputation, and reliability of Semantic Web data manipulated by SPARQL requires researchers to represent adequate provenance information, usually modeled as source data annotations and propagated to query results during query evaluation. Alternatively, abstract provenance models can capture the relationship between query results and source data by taking into account the employed query operators. The authors argue for the benefits of the latter in settings where query results are materialized in several repositories and analyzed by multiple users. They also investigate how relational provenance models can be leveraged for SPARQL queries, and advocate for new provenance models. © 2011 IEEE.
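The sketch below illustrates the contrast the article draws: rather than propagating a concrete trust score at query time, each source triple carries an abstract annotation (here, the name of the graph it came from), and a join result keeps the symbolic combination so that different consumers can interpret it later under their own trust model. The triples, graph names, and query pattern are invented for illustration and are not SPARQL syntax.

```python
# Sketch of annotation propagation for a SPARQL-like basic graph pattern
# { ?x knows ?y . ?y worksAt ?z }, keeping abstract (symbolic) provenance.
# All data below is invented.

triples = [
    ("alice", "knows",   "bob",  "g1"),   # (subject, predicate, object, source graph)
    ("bob",   "worksAt", "acme", "g2"),
    ("bob",   "worksAt", "acme", "g3"),
]

results = {}
for s1, p1, o1, a1 in triples:
    if p1 != "knows":
        continue
    for s2, p2, o2, a2 in triples:
        if p2 == "worksAt" and s2 == o1:
            # Joint use of two triples -> pair of annotations;
            # alternative derivations -> list of such pairs.
            results.setdefault((s1, o1, o2), []).append((a1, a2))

print(results)
# {('alice', 'bob', 'acme'): [('g1', 'g2'), ('g1', 'g3')]}
# A consumer trusting only g1 and g2 accepts the answer via the first derivation.
```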
