Phoenix, AZ, United States

Kawano S.,Database Systems | Watanabe T.,CrossEdge Systems Inc. | Mizuguchi S.,Kumamoto University | Araki N.,Kumamoto University | And 2 more authors.
Nucleic Acids Research | Year: 2014

TogoTable (http://togotable.dbcls.jp/) is a web tool that adds user-specified annotations to a table that a user uploads. Annotations are drawn from several biological databases that use the Resource Description Framework (RDF) data model. TogoTable uses database identifiers (IDs) in the table as query keys for searching. RDF data, which form a network called Linked Open Data (LOD), can be searched from SPARQL endpoints using the SPARQL query language. Because TogoTable uses RDF, it can integrate annotations not only from the reference database to which the IDs originally belong, but also from externally linked databases via the LOD network. For example, annotations in the Protein Data Bank can be retrieved using a GeneID through links provided by the UniProt RDF. Because RDF has been standardized by the World Wide Web Consortium, any database with annotations based on the RDF data model can be easily incorporated into this tool. We believe that TogoTable is a valuable web tool, particularly for experimental biologists who need to process huge amounts of data such as high-throughput experimental output. © 2014 The Author(s).
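
Below is a minimal sketch, not TogoTable's own code, of the underlying idea: sending a SPARQL query over HTTP so that identifiers in a table can be enriched with annotations reachable through RDF links. The endpoint URL and the ex: predicates are illustrative placeholders, not the vocabulary TogoTable actually uses.

```python
# Minimal sketch: enrich a list of IDs with annotations via a SPARQL endpoint.
# ENDPOINT and the ex: predicates are placeholders, not TogoTable's vocabulary.
import requests

ENDPOINT = "https://example.org/sparql"   # placeholder SPARQL endpoint

def fetch_annotations(ids):
    values = " ".join(f'"{i}"' for i in ids)
    query = f"""
    PREFIX ex:   <http://example.org/vocab/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?id ?linked ?label WHERE {{
        VALUES ?id {{ {values} }}
        ?entry  ex:identifier ?id .        # placeholder predicate for the DB identifier
        ?entry  rdfs:seeAlso  ?linked .    # follow a link into another RDF dataset
        ?linked rdfs:label    ?label .
    }}
    """
    resp = requests.post(
        ENDPOINT,
        data={"query": query},
        headers={"Accept": "application/sparql-results+json"},
        timeout=60,
    )
    resp.raise_for_status()
    rows = resp.json()["results"]["bindings"]
    return [(r["id"]["value"], r["linked"]["value"], r["label"]["value"]) for r in rows]
```

The same pattern works against any endpoint that follows the SPARQL 1.1 protocol, which is what makes RDF-based annotation sources easy to plug in.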


Lehner W.,TU Dresden | Sattler K.-U.,Database Systems
Proceedings - International Conference on Data Engineering | Year: 2010

Modern Web or "Eternal-Beta" applications necessitate a flexible and easy-to-use data management platform that allows the evolutionary development of databases and applications. The classical approach of relational database systems, which strictly follows the ACID properties, has to be extended with an extensible and easy-to-use persistency layer offering specialized DB features. Using the underlying concept of Software as a Service (SaaS) also yields an economic advantage based on economies of scale, where application and system environments need to be provided only once but can be used by thousands of users. In this tutorial, we look at the current state of the art from different perspectives. We outline foundations and techniques to build database services based on the SaaS paradigm. We discuss requirements from a programming perspective, show different dimensions in the context of consistency and reliability, and also describe different non-functional properties under the umbrella of Service-Level Agreements (SLAs). © 2010 IEEE.
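
As one concrete illustration of such non-functional properties, an SLA can be represented as a small machine-checkable object. The sketch below is our own illustration; the field names and thresholds are assumptions, not taken from the tutorial.

```python
# Minimal sketch: an SLA for a database service as a machine-checkable object.
# Field names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ServiceLevelAgreement:
    availability: float            # e.g. 0.999 -> "three nines"
    max_latency_ms: float          # 95th-percentile response-time bound
    consistency: str               # e.g. "strong", "eventual"
    penalty_per_violation: float   # credit owed to the consumer per breach

    def is_violated(self, observed_availability, observed_p95_latency_ms):
        return (observed_availability < self.availability
                or observed_p95_latency_ms > self.max_latency_ms)

sla = ServiceLevelAgreement(0.999, 200.0, "eventual", 50.0)
print(sla.is_violated(0.9995, 180.0))   # False: both targets are met
```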


Naito Y.,Database Systems | Bono H.,Database Systems
Nucleic Acids Research | Year: 2012

GGRNA (http://GGRNA.dbcls.jp/) is a Google-like, ultrafast search engine for genes and transcripts. The web server accepts arbitrary words and phrases, such as gene names, IDs, gene descriptions, gene annotations and even nucleotide/amino acid sequences, through one simple search box, and quickly returns relevant RefSeq transcripts. A typical search takes just a few seconds, which dramatically enhances the usability of routine searching. In particular, GGRNA can search sequences as short as 10 nt or 4 amino acids, which cannot be handled easily by popular sequence analysis tools. Nucleotide sequences can be searched allowing up to three mismatches, or the query sequences may contain degenerate nucleotide codes (e.g. N, R, Y, S). Furthermore, Gene Ontology annotations, Enzyme Commission numbers and probe sequences of catalog microarrays are also incorporated into GGRNA, which may help users to conduct searches with various types of keywords. The GGRNA web server provides a simple and powerful interface for finding genes and transcripts for a wide range of users. All services at GGRNA are provided free of charge to all users. © 2012 The Author(s).
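
The degenerate nucleotide codes mentioned above follow the standard IUPAC convention, and this kind of short-query matching can be illustrated independently of GGRNA's internal implementation. The sketch below simply expands IUPAC codes into a regular expression and scans a sequence.

```python
# Minimal sketch, not GGRNA's implementation: match short degenerate queries
# (standard IUPAC nucleotide codes) against a transcript sequence.
import re

IUPAC = {
    "A": "A", "C": "C", "G": "G", "T": "T",
    "R": "[AG]", "Y": "[CT]", "S": "[GC]", "W": "[AT]",
    "K": "[GT]", "M": "[AC]",
    "B": "[CGT]", "D": "[AGT]", "H": "[ACT]", "V": "[ACG]",
    "N": "[ACGT]",
}

def degenerate_to_regex(query):
    """Translate a degenerate nucleotide query into a plain regex."""
    return "".join(IUPAC[base] for base in query.upper())

def find_sites(sequence, query):
    pattern = re.compile(degenerate_to_regex(query))
    return [(m.start(), m.group()) for m in pattern.finditer(sequence.upper())]

# Example: find GGTNCC sites (degenerate middle base) in a short sequence.
print(find_sites("atatGGTACCggttGGTGCCaa", "GGTNCC"))
# -> [(4, 'GGTACC'), (14, 'GGTGCC')]
```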


Zulkernine F.H.,Database Systems | Martin P.,Database Systems
IEEE Transactions on Services Computing | Year: 2011

The effective use of services to compose business processes in services computing demands that the Quality of Service (QoS) meet consumers' expectations. Automated web-based negotiation of Service Level Agreements (SLAs) can help define the QoS requirements of critical service-based processes. We propose a novel trusted Negotiation Broker (NB) framework that performs adaptive and intelligent bilateral bargaining of SLAs between a service provider and a service consumer based on each party's high-level business requirements. We define mathematical models to map business-level requirements to low-level parameters of the decision function, which hides the complexity of the system from the parties. We also define an algorithm for adapting the decision functions during an ongoing negotiation to comply with an opponent's offers or with updated consumer preferences. The NB uses intelligent agents to conduct the negotiation locally by selecting the most appropriate time-based decision functions. The negotiation outcomes are validated by an extensive experimental study of Exponential, Polynomial and Sigmoid time-based decision functions using simulations on our prototype framework. Results are compared in terms of the total utility value of the negotiating parties to demonstrate the efficiency of our proposed approach. © 2008 IEEE.
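
The Exponential, Polynomial and Sigmoid decision functions evaluated in the paper belong to the family of classic time-dependent concession tactics. The sketch below shows that general pattern; the formulations and parameter names are our own assumptions, not the paper's exact models.

```python
# Minimal sketch of time-dependent concession tactics (not the NB framework's
# exact decision functions): the offer moves from an initial value toward a
# reserve value as the negotiation deadline approaches.
import math

def alpha_polynomial(t, deadline, beta, k=0.0):
    """Concession degree in [k, 1]; beta < 1 concedes late, beta > 1 concedes early."""
    return k + (1.0 - k) * (min(t, deadline) / deadline) ** (1.0 / beta)

def alpha_exponential(t, deadline, beta, k=0.01):
    """Exponential variant; k > 0 is the initial concession degree."""
    return math.exp((1.0 - min(t, deadline) / deadline) ** beta * math.log(k))

def provider_offer(t, deadline, reserve, initial, alpha_fn, beta):
    """A provider concedes downward from its preferred price toward its reserve."""
    a = alpha_fn(t, deadline, beta)
    return initial - a * (initial - reserve)

# Example: a provider starting at 100 and willing to go down to 60 over 10 rounds.
for t in (0, 5, 10):
    print(round(provider_offer(t, 10, 60, 100, alpha_polynomial, 0.5), 2))
# -> 100.0, 90.0, 60.0
```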


Wu H.,Database Systems | Yamaguchi A.,Database Systems
BioScience Trends | Year: 2014

Driven by breakthroughs in science and technology, the life sciences are entering an era of big data, and more and more big data-related projects and activities are being carried out worldwide. Life sciences data generated by new technologies continue to grow rapidly, not only in size but also in variety and complexity. To ensure that big data has a major influence in the life sciences, comprehensive data analysis across multiple data sources, and even across disciplines, is indispensable. The increasing volume of data and its heterogeneous, complex variety are the two principal issues discussed in life science informatics. The ever-evolving next-generation Web, characterized as the Semantic Web, is an extension of the current Web that aims to provide information not only for humans but also for computers, so that large-scale data can be processed semantically. This paper presents a survey of big data in the life sciences, big data-related projects and Semantic Web technologies. It introduces the main Semantic Web technologies and their current state, and provides a detailed analysis of how they address the heterogeneous variety of life sciences big data. The paper helps to clarify the role of Semantic Web technologies in the big data era and how they offer a promising solution for big data in the life sciences.
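
To make concrete why RDF helps with heterogeneous sources, the following sketch (assuming the rdflib Python library; the URIs and predicates are invented for illustration) merges two small graphs that describe the same resource with different vocabularies and queries them together.

```python
# Minimal sketch: integrating two heterogeneous RDF sources with rdflib and
# querying the merged graph with SPARQL. All URIs/predicates are made up.
from rdflib import Graph

source_a = """
@prefix ex: <http://example.org/exprdb/> .
ex:gene42 ex:expressionLevel 12.5 .
"""

source_b = """
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
<http://example.org/exprdb/gene42> rdfs:label "hypothetical kinase X" .
"""

g = Graph()
g.parse(data=source_a, format="turtle")
g.parse(data=source_b, format="turtle")   # triples about the same URI merge naturally

results = g.query("""
    PREFIX ex:   <http://example.org/exprdb/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?gene ?label ?level WHERE {
        ?gene rdfs:label ?label ;
              ex:expressionLevel ?level .
    }
""")
for gene, label, level in results:
    print(gene, label, level)
```

Because both sources use globally unique URIs, no schema mapping step is needed before the combined query can run, which is the property the survey highlights for heterogeneous life sciences data.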


Pham M.C.,Database Systems | Klamma R.,Database Systems
Proceedings - 2010 International Conference on Advances in Social Network Analysis and Mining, ASONAM 2010 | Year: 2010

How is our knowledge organized? What research fields exist in computer science? How are they interconnected? Previous work on knowledge mapping focused on building maps of all of science, or of a particular domain, based on the ISI JCR (Journal Citation Report) dataset. Although this dataset covers most of the important journals, it lacks computer science conference and workshop proceedings, which results in an imprecise and incomplete map of computer science knowledge. This paper presents an analysis of the computer science knowledge network with the aim of understanding its structure and answering the above questions. Based on the combination of two important digital libraries for computer science (DBLP and CiteSeerX), knowledge networks are created at the venue level (journals, conferences and workshops), and social network analysis is applied to determine clusters of similar venues, interdisciplinary venues and high-prestige venues. © 2010 IEEE.
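
The venue-level analysis described above can be illustrated with standard tooling. The sketch below is our own, not the authors' pipeline, and uses a made-up edge list: it builds a weighted venue citation graph, then applies PageRank for prestige and modularity-based clustering for communities.

```python
# Minimal sketch: venue-level citation network analysis with networkx.
# The citation pairs are illustrative, not data from DBLP/CiteSeerX.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Each pair means: a paper published in the first venue cites a paper in the second.
citations = [
    ("SIGMOD", "VLDB"), ("VLDB", "SIGMOD"), ("ICDE", "SIGMOD"),
    ("ICDE", "VLDB"), ("WWW", "SIGIR"), ("SIGIR", "WWW"), ("WWW", "VLDB"),
]

G = nx.DiGraph()
for citing, cited in citations:
    if G.has_edge(citing, cited):
        G[citing][cited]["weight"] += 1
    else:
        G.add_edge(citing, cited, weight=1)

# Prestige of venues: PageRank over the weighted citation graph.
prestige = nx.pagerank(G, weight="weight")

# Clusters of similar venues: modularity-based communities on the undirected view.
clusters = greedy_modularity_communities(G.to_undirected(), weight="weight")

print(sorted(prestige.items(), key=lambda kv: -kv[1]))
print([sorted(c) for c in clusters])
```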


Mian R.,Database Systems | Martin P.,Database Systems
Proceedings - 12th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, CCGrid 2012 | Year: 2012

The promise of "infinite" resources given by the cloud computing paradigm has led to recent interest in exploiting clouds for large-scale data-intensive computing. Given this supposedly infinite resource set, we need a management function that regulates application workload on these resources. This doctoral research focuses on two aspects of workload management, namely scheduling and provisioning. We propose a novel framework for workload execution and resource provisioning, and associated models, algorithms, and protocols. © 2012 IEEE.
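
As a concrete, if simplified, illustration of the provisioning side of workload management, the following sketch shows one deadline-driven heuristic for sizing a pool of cloud instances. It is our own illustration, not the framework proposed in the paper.

```python
# Minimal sketch: a deadline-driven provisioning heuristic (illustrative only).
import math

def instances_needed(pending_tasks, avg_task_seconds, deadline_seconds, max_instances):
    """Number of instances so the queued work can finish before the deadline."""
    if pending_tasks == 0:
        return 0
    total_work = pending_tasks * avg_task_seconds
    needed = math.ceil(total_work / deadline_seconds)
    return min(max(needed, 1), max_instances)

# Example: 500 tasks of ~30 s each, one hour to finish, at most 20 instances.
print(instances_needed(500, 30, 3600, 20))   # -> 5
```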


Woltran S.,Database Systems
Proceedings - 17th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing, SYNASC 2015 | Year: 2015

Many prominent NP-hard problems have been shown to be tractable for bounded treewidth. To put these results into practice, dynamic programming (DP) via a traversal of the nodes of a tree decomposition is the standard technique. However, the concrete implementation of suitable, efficient algorithms is often tedious. To this end, we have developed the D-FLAT system [1]-[4], which allows the user to specify the DP algorithm in a declarative fashion and delegates the computation of intermediate solutions to a logic-programming system. In this talk, we first give a brief introduction to the D-FLAT system and demonstrate its usage on some standard DP algorithms. In the second part of the talk, we reflect on some of our experiences. © 2015 IEEE.
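
To illustrate the kind of dynamic programming over tree decompositions that D-FLAT lets users specify declaratively, the sketch below implements the standard table-passing DP for Maximum Weight Independent Set in Python, using networkx's min-degree heuristic to obtain a decomposition. It is a hand-written illustration of the technique, not the D-FLAT system, which delegates this work to a logic-programming solver.

```python
# Minimal sketch: DP over a tree decomposition for Maximum Weight Independent Set.
import networkx as nx
from networkx.algorithms.approximation import treewidth_min_degree
from itertools import combinations

def independent_subsets(bag, G):
    """Yield all subsets of a bag that are independent in G."""
    bag = list(bag)
    for r in range(len(bag) + 1):
        for combo in combinations(bag, r):
            if all(not G.has_edge(u, v) for u, v in combinations(combo, 2)):
                yield frozenset(combo)

def max_weight_independent_set(G, weight=None):
    w = lambda v: G.nodes[v].get(weight, 1) if weight else 1
    _, T = treewidth_min_degree(G)        # T's nodes are frozensets (the bags)
    root = next(iter(T.nodes))
    # c[t][S] = best weight of an independent set in the subtree below bag t
    # whose intersection with t is exactly S.
    c = {}
    for t in nx.dfs_postorder_nodes(T, root):
        children = [u for u in T[t] if u in c]
        c[t] = {}
        for S in independent_subsets(t, G):
            total = sum(w(v) for v in S)
            for child in children:
                best = None
                for S2, val in c[child].items():
                    if S2 & t != S & child:          # child table must agree on shared vertices
                        continue
                    if any(G.has_edge(u, v) for u in S for v in S2 - t):
                        continue                     # keep the combined set independent
                    cand = val - sum(w(v) for v in S2 & t)   # avoid double-counting shared vertices
                    if best is None or cand > best:
                        best = cand
                total += best
            c[t][S] = total
    return max(c[root].values())

# Example: a 6-cycle has a maximum independent set of size 3.
print(max_weight_independent_set(nx.cycle_graph(6)))   # -> 3
```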


Barker K.,Database Systems
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2012

Protecting data privacy is an area of increasing concern, as the general public has become increasingly affected by the inability to guarantee that those things we wish to hold private remain private. The concept of privacy is found in the earliest writings of mankind and continues to be highly valued in modern society today. Unfortunately, challenges to privacy have always existed, and at times these challenges have become so strong that any real sense of personal privacy was assumed to be unattainable. This occurred in the Middle Ages, when communal living was the norm, at least among the poor, so the lack of privacy had to be accepted as a simple matter of fact. This does not mean that privacy was not valued at the time, simply that it was assumed to be unachievable, so its absence was accepted. A poll in a recent undergraduate/graduate class at the University of Calgary revealed that over half of the students felt that there was no way to protect their privacy in online systems. It was not that they did not value their privacy; they simply felt, much like those in the Middle Ages, that there was nothing they could do about it. In addition, about half of the students felt that their private information had value and that they would consider trading it for an economic return under certain conditions, which varied widely from individual to individual. © 2012 Springer-Verlag.

