SBA Research

Austria

Miksa T., SBA Research | Rauber A., Vienna University of Technology
Journal of Web Semantics | Year: 2017

Scientific experiments performed in the eScience domain require special tooling, software, and workflows that allow researchers to link, transform, visualise and interpret data. Recent studies report that such experiments often cannot be replicated due to differences in the underlying infrastructure. Provenance collection mechanisms were built into workflow engines to increase research replicability. However, the resulting traces do not capture the execution context, consisting of the software, hardware and external services used to produce the result, which may change between executions. The problem thus remains of how to identify such context and how to store such data. To address this challenge we propose a context model that integrates ontologies describing a workflow and its environment. It includes not only a high-level description of workflow steps and services but also low-level technical details on infrastructure, including hardware, software, and files. In this paper we discuss which of the ontologies composing the context model must be instantiated to enable verification of a workflow re-execution. We use a tool that monitors a workflow execution and automatically creates the context model. We also authored the VPlan ontology, which enables modelling of validation requirements and contains a controlled vocabulary of metrics that can be used to quantify those requirements. We evaluate the proposed ontologies on five Taverna workflows that differ in the degree to which they depend on additional software and services. The results show that the proposed ontologies are necessary and can be used for verification and validation of scientific workflow re-executions in different environments, without requiring simultaneous access to the original environment. Thus scientists can state whether a scientific experiment is replicable. © 2017 Elsevier B.V.
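
As a rough illustration of what such a context model could look like in practice, the following Python sketch records a few execution-context facts as RDF triples with rdflib. The namespace and property names (CTX, WorkflowExecution, usesFile, ...) are invented placeholders, not the vocabulary of the paper's actual context model or VPlan ontology.

```python
# Minimal sketch of recording workflow execution context as RDF triples.
# All namespace and property names here are illustrative placeholders,
# not the actual vocabulary of the paper's context model.
import hashlib
import platform

from rdflib import Graph, Literal, Namespace, RDF

CTX = Namespace("http://example.org/context#")

def file_sha256(path: str) -> str:
    """Hash an input file so a re-execution can verify it is unchanged."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

g = Graph()
run = CTX["run-001"]
g.add((run, RDF.type, CTX.WorkflowExecution))

# Low-level infrastructure details: the OS and runtime the workflow ran on.
g.add((run, CTX.operatingSystem, Literal(platform.platform())))
g.add((run, CTX.pythonVersion, Literal(platform.python_version())))

# Files consumed by the workflow could be identified by content hash, e.g.:
# g.add((run, CTX.usesFile, Literal(file_sha256("input.csv"))))

print(g.serialize(format="turtle"))
```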


Heurix J., SBA Research | Neubauer T., Vienna University of Technology
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2011

E-health allows better communication between health care providers and higher availability of medical data. However, the downside of interconnected systems is the increased probability of unauthorized access to highly sensitive records, which could result in serious discrimination against the patient. This article provides an overview of current privacy threats and presents a pseudonymization approach that preserves the patient's privacy and data confidentiality. It allows (direct care) primary use of medical records by authorized health care providers and privacy-preserving (non-direct care) secondary use by researchers. The solution also addresses the identifying nature of genetic data by extending the basic pseudonymization approach with queryable encryption. © 2011 Springer-Verlag.
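
One simple way to approximate equality-queryable pseudonyms, sketched below in Python, is deterministic keyed hashing: the same identifier always maps to the same pseudonym, so records remain linkable and queryable without exposing the identity. This illustrates the general idea only, not the paper's actual queryable encryption scheme, and the key handling shown is purely hypothetical.

```python
# Sketch: deterministic, keyed pseudonyms that still support equality
# queries, one simple way to approximate "queryable" pseudonymization
# for identifying attributes. Illustration only, not the paper's scheme.
import hashlib
import hmac

SECRET_KEY = b"held-by-a-trusted-party-only"  # hypothetical key management

def pseudonym(identifier: str) -> str:
    """Map a patient identifier to a stable pseudonym.

    The same input always yields the same output, so records can be
    linked and queried by pseudonym without revealing the identity.
    """
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

# Health records are stored under pseudonyms only.
records = {pseudonym("patient-42"): {"diagnosis": "J45", "year": 2011}}

# An authorized party holding the key can re-derive the pseudonym to query.
print(records[pseudonym("patient-42")])
```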


Fenz S., Vienna University of Technology | Ekelhart A., SBA Research
IEEE Security and Privacy | Year: 2011

Over the past four decades, various information security risk management (ISRM) approaches have emerged. However, there is a lack of sound verification, validation, and evaluation methods for these approaches. Although restrictions, such as the impossibility of measuring exact values for probabilities and follow-up costs, obviously exist, verification, validation, and evaluation of research is essential in any field, and ISRM is no exception. So far, there is no systematic overview of the available methods. In this article, the authors survey verification, validation, and evaluation methods referenced in the ISRM literature and discuss in which ISRM phase each method should be applied. They then demonstrate how to select appropriate methods using a real-world example. This systematic analysis draws conclusions on the current status of ISRM verification, validation, and evaluation and can serve as a reference for ISRM researchers and users who aim to establish trust in their results. © 2011 IEEE.


Bozic J., University of Graz | Simos D.E., SBA Research | Wotawa F., University of Graz
9th International Workshop on Automation of Software Test, AST 2014 - Proceedings | Year: 2014

The number of potential security threats rises with the increasing number of web applications, with tremendous financial and existential implications for developers and users alike. The biggest challenge for security testing is to specify and implement ways to detect potential vulnerabilities of the developed system, in a never-ending quest against new security threats, but also to cover already known ones so that a program is hardened against typical attack vectors. For these purposes, many approaches have been developed in the area of model-based security testing to provide solutions for real-world application problems. These approaches provide theoretical background as well as practical solutions for certain security issues. In this paper, we partially rely on previous work but focus on the representation of attack patterns using UML state diagrams. We extend previous work by combining the attack pattern models with combinatorial testing in order to provide concrete test inputs, which are submitted to the system under test. With combinatorial testing we capture different combinations of inputs and thus increase the likelihood of finding exploitable weaknesses in the implementation under test. Besides the foundations of our approach, we report on first experiments that indicate its practical use.
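
The sketch below illustrates the combinatorial side of the approach: a naive greedy heuristic that selects concrete test inputs until every pairwise (2-way) combination of parameter values is covered. The attack parameters and values are invented for illustration; real test inputs would be derived from the UML attack pattern models.

```python
# Sketch: generating 2-way (pairwise) combinatorial test inputs for an
# attack pattern's parameters with a simple greedy heuristic. Parameter
# names and values are invented; real payloads would come from the
# modeled attack pattern.
from itertools import combinations, product

params = {
    "quote":    ["'", '"'],
    "payload":  ["OR 1=1", "UNION SELECT", "--"],
    "encoding": ["plain", "url"],
}

names = list(params)
# All value pairs that a pairwise test suite must cover.
uncovered = {
    ((a, va), (b, vb))
    for a, b in combinations(names, 2)
    for va in params[a]
    for vb in params[b]
}

tests = []
for candidate in product(*params.values()):
    test = dict(zip(names, candidate))
    newly = {
        pair for pair in uncovered
        if test[pair[0][0]] == pair[0][1] and test[pair[1][0]] == pair[1][1]
    }
    if newly:  # keep the candidate only if it covers something new
        tests.append(test)
        uncovered -= newly

print(len(tests), "tests cover all pairs")
```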


Pröll S., SBA Research | Rauber A., Vienna University of Technology
Proceedings - 2013 IEEE International Conference on Big Data, Big Data 2013 | Year: 2013

Uniquely and precisely identifying and citing arbitrary subsets of data is essential in many settings, e.g. to facilitate experiment validation and data re-use in meta-studies. Current approaches relying on pointers to entire data collections or on explicit copies of data do not scale. We propose a novel approach relying on persistent, timestamped, adapted queries against versioned and timestamped data sources. Result set hashes are used to validate the correctness of later re-executions. The proposed method works for static as well as dynamically growing or changing data. Alternative implementation styles for relational databases are presented and evaluated with regard to performance and impact on existing applications, aiming at minimal to no additional effort for data users. The approach is validated in an infrastructure monitoring domain relying on sensor data networks. © 2013 IEEE.
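
The core mechanism can be illustrated in a few lines of Python: persist the query together with an execution timestamp and a hash over the deterministically ordered result set, and compare hashes on re-execution. Table, column and field names below are illustrative, and a real deployment would use versioned, timestamped tables rather than this toy schema.

```python
# Sketch of the data citation idea: store a timestamped query together
# with a hash of its (deterministically ordered) result set, so a later
# re-execution can be verified. Schema and names are illustrative.
import hashlib
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (sensor TEXT, value REAL, valid_from REAL)")
conn.execute("INSERT INTO readings VALUES ('s1', 20.5, 0), ('s2', 21.1, 0)")

def result_hash(query: str) -> str:
    """Hash the sorted result rows so the hash is order-independent."""
    rows = sorted(map(repr, conn.execute(query)))
    return hashlib.sha256("\n".join(rows).encode()).hexdigest()

query = "SELECT sensor, value FROM readings WHERE value > 20"
citation = {
    "query": query,
    "executed_at": time.time(),
    "result_hash": result_hash(query),
}

# On re-execution against the versioned source, compare hashes.
assert result_hash(query) == citation["result_hash"]
```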


Neubauer T., Vienna University of Technology | Heurix J., SBA Research
International Journal of Medical Informatics | Year: 2011

Purpose: E-health enables the sharing of patient-related data whenever and wherever necessary. Electronic health records (EHRs) promise to improve communication between health care providers, thus leading to better quality of patients' treatment and reduced costs. However, as highly sensitive patient information is a promising target for attackers and is also frequently demanded by insurance companies and employers, there is increasing social and political pressure to prevent the misuse of health data. This work addresses this problem and introduces a methodology that protects health records from unauthorized access and lets the patient, as data owner, decide who the authorized persons are, i.e., to whom the patient discloses her health information. The methodology thereby prevents data disclosure that negatively influences the patient's life (e.g., by being denied health insurance or employment). Methods: This research uses a combination of conceptual-analytical, artifact-building and artifact-evaluating research approaches. The article starts with a detailed exploration of existing privacy protection mechanisms, such as encryption, anonymization and pseudonymization, by comparing and analyzing related work (conceptual-analytical approach). Based on these results and the identified shortcomings, a pseudonymization methodology is defined and evaluated by means of a threat analysis. Finally, the research results are validated with the design and implementation of a prototype (artifact building and artifact evaluation). Results: This paper presents a new methodology for the pseudonymization of medical data that stores health data decoupled from the corresponding patient-identifying information, allowing privacy-preserving secondary use of the health records in clinical studies without additional anonymization steps. In contrast to clinical studies, where it is not necessary to identify the individual participants, insurance companies and employers are interested in the health status of individuals such as potential insurance or job applicants. In this case, pseudonymized records are practically useless for these parties, as the patient controls who is able to re-establish the link between health records and patient for primary use, usually only trusted health care providers. Conclusions: The framework provides health care providers with a unique solution that guarantees data privacy (e.g., according to HIPAA) and allows primary and secondary use of the data at the same time. The security analysis showed that the methodology is secure and protected against common intruder scenarios. © 2010 Elsevier Ireland Ltd.
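
A minimal sketch of the decoupling idea follows: health data is stored under random pseudonyms, and only a patient-controlled mapping can re-link records to their owner. All names and data structures are illustrative and greatly simplified compared to the actual methodology, which involves trusted parties, encryption and key management.

```python
# Sketch of the decoupling idea: health data is stored under random
# pseudonyms, and only the patient-controlled mapping can re-link a
# record to its owner. Names and structures are illustrative only.
import secrets

health_records = {}   # pseudonym -> medical data (no identifying info)
patient_keyring = {}  # held and controlled by the patient, not the store

def store_record(patient_id: str, record: dict) -> None:
    pseudonym = secrets.token_hex(16)  # fresh pseudonym per record
    health_records[pseudonym] = record
    patient_keyring.setdefault(patient_id, []).append(pseudonym)

store_record("patient-42", {"icd10": "J45", "year": 2010})

# Secondary use: researchers see pseudonymized records only.
print(list(health_records.values()))

# Primary use: the patient discloses her pseudonyms to a trusted provider.
for p in patient_keyring["patient-42"]:
    print(health_records[p])
```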


Ullrich J., SBA Research | Weippl E., SBA Research
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2016

Rate limits, i.e., throttling of network bandwidth, are considered a means of protection: they guarantee fair bandwidth distribution among virtual machines residing on the same Xen hypervisor. In the absence of rate limits, a single virtual machine would be able to (unintentionally or maliciously) exhaust all resources and cause a denial of service for its neighbors. In this paper, we show that rate limits backfire and become attack vectors themselves. Our analysis highlights that Xen's rate limiting throttles only outbound traffic and is prone to burst transmissions, making rate-limited virtual machines vulnerable to externally-launched attacks. In particular, we propose two attacks: a side channel that allows an adversary to infer all configuration parameters related to the rate limiting functionality, and a denial-of-service attack that causes up to 88.3% packet drops, or up to 13.8 s of packet delay. © Springer International Publishing Switzerland 2016.
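
The timing observation underlying such a side channel can be illustrated with a toy simulation: against a token-bucket-style limiter, an observer can read off the burst size from how many packets leave back to back, and the rate from the spacing of subsequent packets. The parameters below are invented, and Xen's actual rate limiting mechanism differs in its details.

```python
# Toy simulation of the observation behind the side channel: a token
# bucket rate limiter lets an observer infer its rate and burst size
# from transmission timing. Parameters are invented; Xen's actual
# (credit/time-window based) rate limiting differs in detail.
RATE = 100        # tokens (packets) replenished per second
BURST = 50        # bucket capacity
tokens, clock, sent_at = BURST, 0.0, []

for _ in range(200):           # try to send 200 back-to-back packets
    while tokens < 1:          # wait for replenishment when throttled
        clock += 1.0 / RATE
        tokens += 1
    tokens -= 1
    sent_at.append(clock)

# The initial burst goes out instantly; afterwards the spacing reveals RATE.
burst_estimate = sum(1 for t in sent_at if t == 0.0)
rate_estimate = 1.0 / (sent_at[-1] - sent_at[-2])
print(f"inferred burst ~ {burst_estimate}, inferred rate ~ {rate_estimate:.0f}/s")
```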


Stefanidis K., Industrial Systems Institute RC Athena | Voyiatzis A.G., SBA Research
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2016

We describe the architecture of an anomaly detection system based on the Hidden Markov Model (HMM) for intrusion detection in Industrial Control Systems (ICS), and especially in SCADA systems interconnected using TCP/IP. The proposed system exploits the unique characteristics of ICS networks and protocols to efficiently detect multiple attack vectors. We evaluate the proposed system in terms of detection accuracy using reference datasets made available by other researchers. These datasets refer to real industrial networks and contain a variety of identified attack vectors. We benchmark our findings against a large set of machine learning algorithms and demonstrate that our proposal exhibits superior performance characteristics. © IFIP International Federation for Information Processing 2016.
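
A minimal sketch of the general HMM detection scheme, assuming the hmmlearn library: fit an HMM on feature vectors extracted from normal traffic, calibrate a likelihood threshold, and flag windows that score below it. The synthetic Gaussian features below stand in for real SCADA protocol and traffic features, which are the actual substance of the proposed system.

```python
# Sketch of HMM-based anomaly detection: fit an HMM on feature vectors
# from normal traffic, then flag windows whose log-likelihood falls
# below a calibrated threshold. Synthetic features stand in for real
# SCADA protocol/traffic features.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
normal_traffic = rng.normal(loc=0.0, scale=1.0, size=(1000, 3))

model = GaussianHMM(n_components=4, covariance_type="diag", n_iter=50)
model.fit(normal_traffic)

# Calibrate a threshold from per-window scores on normal data.
window = 50
scores = [model.score(normal_traffic[i:i + window])
          for i in range(0, 1000 - window, window)]
threshold = min(scores) - 3 * np.std(scores)

suspicious = rng.normal(loc=5.0, scale=1.0, size=(window, 3))
print("anomaly!" if model.score(suspicious) < threshold else "ok")
```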


Kampel L., SBA Research | Simos D.E., SBA Research
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2016

Testing is an important and expensive part of software and hardware development. In recent years, the construction of combinatorial interaction tests has come to play an important role in making testing more cost-efficient. Covering arrays are the key element of combinatorial interaction testing and a means to provide abstract test sets. In this paper, we present a family of set-based algorithms for generating covering arrays and thus combinatorial test sets. Our algorithms build upon an existing mathematical method for constructing independent families of sets, which we extend in terms of algorithmic design. We compare our algorithms against commonly used greedy methods for producing 3-way combinatorial test sets, and these initial evaluation results favor our approach in terms of generating smaller test sets. © IFIP International Federation for Information Processing 2016.
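
For intuition, the defining property of a covering array can be checked directly: every selection of t columns must contain every possible t-tuple of values. The Python sketch below verifies this for a small binary strength-2 example; the construction algorithms of the paper are, of course, considerably more involved.

```python
# Sketch: check whether an array is a t-way covering array, i.e. every
# t-column projection contains every possible value combination. This
# is the verification side of what construction algorithms guarantee;
# the example array below is illustrative.
from itertools import combinations

def is_covering_array(array, t, v):
    """array: list of rows over symbols 0..v-1; t: strength."""
    k = len(array[0])
    for cols in combinations(range(k), t):
        seen = {tuple(row[c] for c in cols) for row in array}
        if len(seen) < v ** t:     # some t-way interaction is missing
            return False
    return True

# A binary strength-2 covering array CA(5; 2, 4, 2) with 5 rows.
ca = [
    (0, 0, 0, 0),
    (0, 1, 1, 1),
    (1, 0, 1, 1),
    (1, 1, 0, 1),
    (1, 1, 1, 0),
]
print(is_covering_array(ca, t=2, v=2))  # True
```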


Simos D.E., SBA Research
Computational Methods in Applied Sciences | Year: 2015

Response surface methodology is widely used for developing, improving and optimizing processes in various fields. In this paper, we present a general algorithmic method for constructing 2^q-level design matrices in order to explore and optimize response surfaces where the predictor variables are each at 2^q equally spaced levels, by utilizing a genetic algorithm. We emphasize various properties that arise from the implementation of the genetic algorithm, such as symmetries in the different objective functions used and the representation of the 2^q levels of the design with a q-bit Gray code. We executed the genetic algorithm for q = 2, 3; the resulting four- and eight-level designs achieve both near-rotatability and estimation efficiency, demonstrating the effectiveness of the proposed heuristic. © Springer International Publishing Switzerland 2015.
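
The q-bit Gray code representation mentioned above is easy to make concrete: each of the 2^q equally spaced levels is encoded so that adjacent levels differ in exactly one bit. The sketch below shows the encoding for q = 3; the numeric level spacing chosen is an illustrative convention, not necessarily the one used in the paper.

```python
# Sketch of the q-bit Gray code encoding: each of the 2^q equally
# spaced factor levels is represented by a Gray code word, so adjacent
# levels differ in a single bit.
def gray_code(i: int) -> int:
    return i ^ (i >> 1)

q = 3
n_levels = 2 ** q
# Map level index -> (Gray code bits, equally spaced numeric level).
for i in range(n_levels):
    bits = format(gray_code(i), f"0{q}b")
    level = 2 * i - (n_levels - 1)   # -7, -5, ..., 5, 7 for q = 3
    print(i, bits, level)
```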
