SBA Research gGmbH

Vienna, Austria

Buhov D.,SBA Research gGmbH | Huber M.,St. Pölten University of Applied Sciences | Merzdovnik G.,SBA Research gGmbH | Weippl E.,SBA Research gGmbH | Dimitrova V.,Ss. Cyril and Methodius University of Skopje
Proceedings - 10th International Conference on Availability, Reliability and Security, ARES 2015 | Year: 2015

The digital world is in a constant battle for improvement, especially in the security field. Following the revelations by Edward Snowden about the mass surveillance programs conducted by governmental authorities, the number of security-aware users is constantly increasing. More and more users agree that additional steps must be taken to ensure that communication remains private, as intended in the first place. Considering the ongoing transition in the digital world, there are already more mobile phones than people on this planet: according to recent statistics, there were around 7 billion active cell phones by 2014, of which nearly 2 billion were smartphones. The use of smartphones by itself can open a significant security hole. The most common problem in Android applications is the misuse of the HTTPS protocol. With this in mind, this paper addresses the current issues around misuse of the HTTPS protocol and proposes possible solutions to overcome this common problem. We evaluate the SSL implementation in a recent set of Android applications and present some of the most common misuses. The goal of this paper is to raise awareness among current and new developers so that they treat security as one of their main goals during the application development life cycle. © 2015 IEEE.
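
To make the misuse patterns concrete: the flaws most frequently reported in such studies are custom trust managers that accept every certificate and hostname verifiers that always return true. The paper does not publish its analysis tooling, so the Python sketch below only illustrates how decompiled application sources could be scanned for these well-known insecure patterns; the regular expressions, directory layout and file names are assumptions made for this example.

```python
import re
from pathlib import Path

# Textbook-insecure SSL patterns often flagged in audits of decompiled Android
# apps; the regexes are illustrative, not the tooling used in the paper.
INSECURE_PATTERNS = {
    "trust-all TrustManager": re.compile(
        r"checkServerTrusted\s*\([^)]*\)\s*(?:throws[^{]*)?\{\s*\}", re.S),
    "allow-all hostname verifier constant": re.compile(
        r"ALLOW_ALL_HOSTNAME_VERIFIER"),
    "hostname verifier always returns true": re.compile(
        r"verify\s*\(\s*String\b[^)]*\)\s*\{\s*return\s+true\s*;", re.S),
}

def scan_sources(root: str):
    """Flag decompiled Java sources containing known HTTPS misuse patterns."""
    findings = []
    for path in Path(root).rglob("*.java"):
        text = path.read_text(errors="ignore")
        for label, pattern in INSECURE_PATTERNS.items():
            if pattern.search(text):
                findings.append((str(path), label))
    return findings

if __name__ == "__main__":
    # "decompiled_app/" is a hypothetical directory of decompiled sources.
    for path, label in scan_sources("decompiled_app/"):
        print(f"{path}: {label}")
```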


Kieseberg P.,SBA Research gGmbH | Schrittwieser S.,St. Pölten University of Applied Sciences | Mulazzani M.,SBA Research gGmbH | Echizen I.,Chiyoda Corporation | Weippl E.,SBA Research gGmbH
Electronic Markets | Year: 2014

The collection, processing, and selling of personal data is an integral part of today's electronic markets, either as a means of operating a business or as an asset in itself. However, the exchange of sensitive information between companies is limited by two major issues: firstly, regulatory compliance with laws such as SOX requires anonymization of personal data prior to transmission to other parties; secondly, transmission always implies some loss of control over the data, since further dissemination is possible without the knowledge of the owner. In this paper, we extend an approach based on k-anonymity that aims at solving both concerns in a single step: anonymization and fingerprinting of microdata such as database records. Furthermore, we develop criteria to achieve detectability of colluding attackers, as well as an anonymization strategy that resists combined efforts of colluding attackers to reduce the anonymization level. Based on these results, we propose an algorithm for the generation of collusion-resistant fingerprints for microdata. © Institute of Information Management, University of St. Gallen 2014.
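
As a brief illustration of the anonymity property every fingerprinted release must preserve (the fingerprint embedding itself, i.e. choosing among equivalent generalization strategies per recipient, is the paper's contribution and is not reproduced here), the following Python sketch checks whether a set of microdata records is k-anonymous with respect to chosen quasi-identifiers. The records and attribute names are invented for the example.

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """Check that every combination of quasi-identifier values occurs at
    least k times -- the property a fingerprinted release must retain."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

# Illustrative generalized microdata; 'zip' and 'age' act as quasi-identifiers.
records = [
    {"zip": "104*", "age": "30-39", "diagnosis": "flu"},
    {"zip": "104*", "age": "30-39", "diagnosis": "asthma"},
    {"zip": "104*", "age": "30-39", "diagnosis": "flu"},
    {"zip": "105*", "age": "40-49", "diagnosis": "diabetes"},
    {"zip": "105*", "age": "40-49", "diagnosis": "flu"},
    {"zip": "105*", "age": "40-49", "diagnosis": "asthma"},
]
print(is_k_anonymous(records, ["zip", "age"], k=3))  # True
```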


Herzberg A.,Bar-Ilan University | Shulman H.,TU Darmstadt | Ullrich J.,SBA Research gGmbH | Weippl E.,SBA Research gGmbH
Proceedings of the ACM Conference on Computer and Communications Security | Year: 2013

We define and study cloudoscopy, i.e., exposing sensitive information about the location of (victim) cloud services and/or about the internal organisation of the cloud network, in spite of location-hiding efforts by cloud providers. A typical cloudoscopy attack is composed of a number of steps: first expose the internal IP address of a victim instance, then measure its hop-count distance from adversarial cloud instances, and finally test to find a specific instance which is close enough to the victim (e.g., co-resident) to allow (denial-of-service or side-channel) attacks. We refer to the three steps/modules involved in such a cloudoscopy attack as IP address deanonymisation, hop-count measuring, and co-residence testing. We present specific methods for these three cloudoscopy modules and report on the results of our experimental validation on popular cloud platform providers. Our techniques can be used for attacking (victim) servers, as well as for benign goals, e.g., optimisation of instance placement and communication, or comparing clouds and validating cloud-provider placement guarantees. © 2013 ACM.
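
The hop-count measuring step can be illustrated with a common TTL-based heuristic: operating systems send packets with one of a few well-known initial TTL values, so the TTL observed in a reply roughly reveals the number of hops in between. The sketch below uses scapy and an invented internal target address; it shows the general technique only and is not the probing code used in the paper (it typically requires root privileges to send raw packets).

```python
# A sketch of TTL-based hop-count estimation, assuming scapy is installed.
from scapy.all import IP, ICMP, sr1

COMMON_INITIAL_TTLS = (64, 128, 255)

def probe_ttl(target_ip, timeout=2):
    """Send one ICMP echo request and return the TTL of the reply, if any."""
    reply = sr1(IP(dst=target_ip) / ICMP(), timeout=timeout, verbose=False)
    return reply.ttl if reply is not None else None

def estimate_hops(observed_ttl):
    """Assume the sender used a common initial TTL and infer the hop count."""
    initial = min(t for t in COMMON_INITIAL_TTLS if t >= observed_ttl)
    return initial - observed_ttl

if __name__ == "__main__":
    ttl = probe_ttl("10.0.0.5")  # hypothetical internal address of another instance
    if ttl is not None:
        print(f"observed TTL {ttl}, estimated hops: {estimate_hops(ttl)}")
```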


Winkler D.,SBA Research gGmbH | Winkler D.,Vienna University of Technology | Musil J.,Vienna University of Technology | Musil A.,Vienna University of Technology | Biffl S.,Vienna University of Technology
Communications in Computer and Information Science | Year: 2016

In Multi-Disciplinary Engineering (MDE) environments, engineers coming from different disciplines have to collaborate. Typically, individual engineers apply isolated tools with heterogeneous data models and strong limitations for collaboration and data exchange. Thus, projects become more error-prone and risky. Although Quality Assurance (QA) methods help to improve individual engineering artifacts, results and experiences from previous activities remain unused. This paper describes a Collective Intelligence-Based Quality Assurance (CI-Based QA) approach that combines two established QA approaches, i.e., (Software) Inspection and Failure Mode and Effect Analysis (FMEA), supported by a Collective Intelligence System (CIS), to improve engineering artifacts and processes based on reusable experience. A CIS can help to bridge the gap between inspection and FMEA by collecting and exchanging previously isolated knowledge and experience. The conceptual evaluation with industry partners showed promising results for reusing experience and improving quality assurance performance as a foundation for engineering process improvement. © Springer International Publishing Switzerland 2016.


Guttenbrunner M.,SBA Research gGmbH | Rauber A.,Vienna University of Technology
ACM Transactions on Information Systems | Year: 2012

Accessible emulation is often the method of choice for maintaining digital objects, specifically complex ones such as applications, business processes, or electronic art. However, validating the emulator's ability to faithfully reproduce the original behavior of digital objects is complicated. This article presents an evaluation framework and a set of tests that allow assessment of the degree to which system emulation preserves the original characteristics, and thus the significant properties, of digital artifacts. The original system, hardware, and software properties are described. An identical environment is then recreated via emulation. Automated user input is used to eliminate potential confounders. The properties of a rendered form of the object are then extracted, automatically or manually, either in a target state, in a series of states, or as a continuous stream. The concepts described in this article enable preservation planners to evaluate how emulation affects the behavior of digital objects compared to their behavior in the original environment. We also review how these principles can and should be applied to the evaluation of migration and other preservation strategies, as a general principle of evaluating the invocation and faithful rendering of digital objects and systems. The article concludes with design requirements for emulators developed for digital preservation tasks. © 2012 ACM.
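
The core of such an evaluation is a comparison of significant properties extracted from the original rendering and the emulated one. The minimal Python sketch below shows only that comparison step on an invented property set; the property names and their values are placeholders, not the framework's actual extraction or data model.

```python
def compare_renderings(original_props, emulated_props):
    """Return significant properties whose values differ between the
    original environment and the emulated one (toy comparison)."""
    differences = {}
    for prop in original_props.keys() | emulated_props.keys():
        orig, emu = original_props.get(prop), emulated_props.get(prop)
        if orig != emu:
            differences[prop] = (orig, emu)
    return differences

# Hypothetical extracted properties of one rendered object in both environments.
original = {"resolution": "640x480", "colour_depth": 8, "frame_rate": 25.0}
emulated = {"resolution": "640x480", "colour_depth": 8, "frame_rate": 24.7}
print(compare_renderings(original, emulated))  # {'frame_rate': (25.0, 24.7)}
```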


Malle B.,Medical University of Graz | Malle B.,SBA Research gGmbH | Kieseberg P.,Medical University of Graz | Kieseberg P.,SBA Research gGmbH | And 2 more authors.
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2016

Today’s increasingly complex information infrastructures represent the basis of the data-driven industries that are rapidly becoming the 21st century’s economic backbone. The sensitivity of those infrastructures to disturbances in their knowledge bases is therefore of crucial interest to companies, organizations, customers and regulating bodies. This holds true with respect to the direct provisioning of such information in crucial applications like clinical settings or the energy industry, but also when considering additional insights, predictions and personalized services that are enabled by the automatic processing of those data. In light of the new EU data protection regulation applying from 2018 onwards, which gives customers the right to have their data deleted on request, information processing bodies will have to react to these changing jurisdictional (and therefore economic) conditions. Their choices include a re-design of their data infrastructure as well as preventive actions like anonymization of databases by default. Therefore, insights into the effects of perturbed/anonymized knowledge bases on the quality of machine learning results are a crucial basis for successfully facing those future challenges. In this paper we introduce a series of experiments in which we applied four different classifiers to an established dataset, as well as to several distorted versions of it, and present our initial results. © IFIP International Federation for Information Processing 2016.
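
The experimental setup lends itself to a compact sketch: train several classifiers on the original data and on a perturbed variant, then compare accuracy. The dataset, the four classifiers and the crude rounding-based "anonymization" below are placeholders chosen for brevity, not the configuration used in the paper.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)

def coarsen(features, decimals=0):
    """Crude stand-in for anonymization: round attributes to reduce precision."""
    return np.round(features, decimals=decimals)

classifiers = {
    "logreg": LogisticRegression(max_iter=5000),
    "tree": DecisionTreeClassifier(random_state=0),
    "knn": KNeighborsClassifier(),
    "forest": RandomForestClassifier(random_state=0),
}

for label, features in [("original", X), ("coarsened", coarsen(X))]:
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, y, test_size=0.3, random_state=0, stratify=y)
    for name, clf in classifiers.items():
        clf.fit(X_tr, y_tr)
        acc = accuracy_score(y_te, clf.predict(X_te))
        print(f"{label:9s} {name:7s} accuracy={acc:.3f}")
```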


Fruhwirt P.,SBA Research gGmbH | Kieseberg P.,SBA Research gGmbH | Krombholz K.,SBA Research gGmbH | Weippl E.,SBA Research gGmbH
Digital Investigation | Year: 2014

Databases contain an enormous amount of structured data. While the use of forensic analysis on the file system level for creating (partial) timelines, recovering deleted data and revealing concealed activities is very popular and multiple forensic toolsets exist, the systematic analysis of database management systems has only recently begun. Databases contain a large amount of temporary data files and metadata which are used by internal mechanisms. These data structures are maintained in order to ensure transaction authenticity, to perform rollbacks, or to set back the database to a predefined earlier state in case of, e.g., an inconsistent state or a hardware failure. However, these data structures are intended to be used by the internal system methods only and are in general not human-readable. In this work we present a novel approach for a forensic-aware database management system using transaction and replication sources. We use these internal data structures as a vital baseline to reconstruct evidence during a forensic investigation. The overall benefit of our method is that no additional logs (such as administrator logs) are needed. Furthermore, our approach is invariant to retroactive malicious modifications by an attacker. This assures the authenticity of the evidence and strengthens the chain of custody. To evaluate our approach, we present a formal description, a prototype implementation in MySQL, and a comprehensive security evaluation with respect to the most relevant attack scenarios. © 2014 Elsevier Ltd. All rights reserved.
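
The reconstruction idea can be illustrated independently of the internal binary formats: given low-level log records that carry before and after images of rows, the modification history of any row can be rebuilt and used as evidence. The record layout in the Python sketch below is an abstract stand-in invented for this illustration, not the MySQL transaction and replication structures the paper actually parses.

```python
# Toy log records: an abstract stand-in for internal redo/replication entries.
log_records = [
    {"lsn": 101, "op": "INSERT", "table": "accounts", "pk": 7,
     "after": {"balance": 100}},
    {"lsn": 158, "op": "UPDATE", "table": "accounts", "pk": 7,
     "before": {"balance": 100}, "after": {"balance": 40}},
    {"lsn": 203, "op": "DELETE", "table": "accounts", "pk": 7,
     "before": {"balance": 40}},
]

def timeline(records, table, pk):
    """Return the ordered modification history of one row as evidence."""
    history = [r for r in records if r["table"] == table and r["pk"] == pk]
    return sorted(history, key=lambda r: r["lsn"])

for entry in timeline(log_records, "accounts", 7):
    print(entry["lsn"], entry["op"], entry.get("before"), "->", entry.get("after"))
```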


Hudic A.,SBA Research gGmbH | Islam S.,University of East London | Kieseberg P.,SBA Research gGmbH | Rennert S.,SBA Research gGmbH | Weippl E.R.,SBA Research gGmbH
International Journal of Pervasive Computing and Communications | Year: 2013

Purpose: The aim of this research is to secure sensitive outsourced data with minimal encryption within the cloud provider. Untrustworthy solutions for providing privacy and security, along with the performance issues caused by encrypting outsourced data, are the main motivations for this research. Design/methodology/approach: This paper presents a method for secure and confidential storage of data in the cloud environment based on fragmentation. The method supports minimal encryption to minimize the computational overhead due to encryption. The proposed method uses normalization of relational databases; tables are categorized based on user requirements relating to performance, availability and serviceability, and exported to XML as fragments. After defining the fragments and assigning the appropriate confidentiality levels, the lowest number of Cloud Service Providers (CSPs) required to store all fragments is used, with fragments that must remain unlinkable kept in separate locations. Findings: In the cloud in particular, databases are sometimes de-normalised (their normal form is decreased to a lower level) to increase performance. Originality/value: The paper proposes a methodology to minimize the need for encryption and instead focus on making data entities unlinkable so that, even in the case of a security breach for one set of data, the privacy impact on the whole is limited. The paper is relevant to those whose main concern is to preserve data privacy in distributed systems. © Emerald Group Publishing Limited.
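
The distribution step can be viewed as a graph colouring problem: fragments that must remain unlinkable may not share a provider, and the number of providers should be as small as possible. The greedy heuristic below is a simple Python illustration of that idea under invented fragment names and unlinkability constraints; it is not the assignment procedure from the paper.

```python
def assign_providers(fragments, unlinkable_pairs):
    """Greedily assign each fragment a provider index so that no unlinkable
    pair shares a provider; returns {fragment: provider_index}."""
    conflicts = {f: set() for f in fragments}
    for a, b in unlinkable_pairs:
        conflicts[a].add(b)
        conflicts[b].add(a)
    assignment = {}
    # Handle the most constrained fragments first (classic greedy colouring).
    for frag in sorted(fragments, key=lambda f: -len(conflicts[f])):
        used = {assignment[n] for n in conflicts[frag] if n in assignment}
        provider = 0
        while provider in used:
            provider += 1
        assignment[frag] = provider
    return assignment

# Hypothetical fragments and unlinkability constraints.
fragments = ["patients", "diagnoses", "billing", "contacts"]
unlinkable = [("patients", "diagnoses"), ("patients", "billing"),
              ("diagnoses", "contacts")]
print(assign_providers(fragments, unlinkable))  # two providers suffice here
```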


Hudic A.,SBA Research gGmbH | Zechner L.,SBA Research gGmbH | Islam S.,SBA Research gGmbH | Krieg C.,SBA Research gGmbH | And 3 more authors.
Proceedings - 2012 ASE/IEEE International Conference on Privacy, Security, Risk and Trust and 2012 ASE/IEEE International Conference on Social Computing, SocialCom/PASSAT 2012 | Year: 2012

Penetration testing is a time-consuming process which combines different mechanisms (security standards, protocols, best practices, vulnerability databases, techniques and guidelines) to evaluate computer system and network vulnerabilities. Its main goal is to identify security weaknesses by using methods and procedures that are commonly used by malicious attackers. Furthermore, leading companies employ certified penetration testers to increase the quality and efficiency of their work. However, rapid technological evolution increases complexity and decreases security, raising the question of whether these support mechanisms are adequate and up to date. To provide an efficient and broad quality assessment of the penetration testing process and its mechanisms, our work uses the developed framework to derive a taxonomy covering the technical and non-technical aspects of the penetration testing process. © 2012 IEEE.


Islam S.,University of East London | Mouratidis H.,University of East London | Weippl E.R.,SBA Research gGmbH
Information and Software Technology | Year: 2014

Context: Building a quality software product in the shortest possible time to satisfy the global market demand gives an enterprise a competitive advantage. However, uncertainties and risks exist at every stage of a software development project, and these can have an extremely high influence on the success of the final software product. Early risk management practice is effective at managing such risks and contributes to project success. Objective: Despite existing risk management approaches, a detailed guideline that explains where to integrate risk management activities into the project is still missing, and little effort has been directed towards the evaluation of the overall impact of a risk management method. We present a Goal-driven Software Development Risk Management Model (GSRM), its explicit integration into the requirements engineering phase, and the results of an empirical investigation of applying GSRM to a project. Method: We combine the case study method with action research so that the results from the case study directly contribute to managing the studied project's risks and to identifying ways to improve the proposed methodology. The data is collected from multiple sources and analysed both qualitatively and quantitatively. Results: When risk factors are beyond the control of the project manager and the project environment, these risks are difficult to control. The project scope affects all dimensions of risk. GSRM is a reasonable risk management method that can be employed in an industrial context. The study results have been compared against other study results in order to generalise findings and identify contextual factors. Conclusion: A formal early-stage risk management practice provides early warning of the problems that exist in a project, and it contributes to overall project success. It is not always necessary to consider budget and schedule constraints as the top priority; issues such as requirements, change management, and user satisfaction can influence these constraints. © 2013 Elsevier B.V. All rights reserved.
