Hasso Plattner Institute for Software Systems Engineering

Potsdam, Germany


Cattania C., German Research Center for Geosciences | Khalid F., Hasso Plattner Institute for Software Systems Engineering
Computers and Geosciences | Year: 2016

The estimation of space- and time-dependent earthquake probabilities, including aftershock sequences, has received increased attention in recent years, and Operational Earthquake Forecasting systems are currently being implemented in various countries. Physics-based earthquake forecasting models compute time-dependent earthquake rates from Coulomb stress changes, coupled with seismicity evolution laws derived from rate-state friction. While early implementations of such models typically performed poorly compared to statistical models, recent studies indicate that significant performance improvements can be achieved by considering the spatial heterogeneity of the stress field and secondary sources of stress. However, the major drawback of these methods is a rapid increase in computational cost. Here we present a code to calculate seismicity induced by time-dependent stress changes. An important feature of the code is the possibility to include aleatoric uncertainties, due to the existence of multiple receiver faults and to the finite grid size, as well as epistemic uncertainties due to the choice of input slip model. To compensate for the growth in computational requirements, we have parallelized the code for shared-memory systems (using OpenMP) and distributed-memory systems (using MPI). Performance tests indicate that these parallelization strategies lead to a significant speedup for problems with different degrees of complexity, ranging from those which can be solved on a standard multicore desktop computer, to those requiring a small cluster, to a large simulation run on up to 1500 cores. © 2016 Elsevier Ltd.
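The rate-state seismicity law underlying such codes can be illustrated with the standard Dieterich (1994) rate response to a single Coulomb stress step. This is a minimal sketch of the general technique, not the paper's implementation; the parameter values (`a_sigma`, the aftershock decay time `t_a`) are illustrative assumptions.

```python
import math

def seismicity_rate(t, delta_cff, r0=1.0, a_sigma=0.04, t_a=100.0):
    """Dieterich (1994) seismicity rate at time t after a single
    Coulomb stress step delta_cff (MPa), relative to the background
    rate r0. a_sigma (MPa) and the decay time t_a (days) are
    illustrative values, not those used in the paper."""
    gamma = math.exp(-delta_cff / a_sigma)
    return r0 / ((gamma - 1.0) * math.exp(-t / t_a) + 1.0)

# A positive stress change raises the rate, which then decays back
# towards the background level over a few multiples of t_a.
rates = [seismicity_rate(t, 0.1) for t in (0.0, 10.0, 1000.0)]
```

Evaluating this expression independently for every grid cell and receiver-fault orientation is embarrassingly parallel, which is why loop-level OpenMP and domain-decomposed MPI parallelization pay off as described above.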

Asche H., Hasso Plattner Institute for Software Systems Engineering
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2017

Positioning, orientation, and targeted movement in a 3D environment are fundamental human needs that shape our perception of the world around us. Navigation and route guidance for pedestrians require geospatial data specifically dedicated to pedestrian navigation. In a feasibility study, existing geospatial databases were investigated for their suitability for seamless, cost-effective routing and navigation of pedestrians in public space. This research was guided by the assumption that such dedicated databases with seamless national coverage are lacking in Germany. To validate this hypothesis, the status quo was assessed. In addition, methods and techniques facilitating the cost-effective generation and maintenance of a uniform, seamless database for pedestrian navigation were investigated. The availability of spatial objects relevant to pedestrian navigation and of data acquisition concepts was determined in two study regions, representative of urban and rural areas respectively. This paper therefore deals with the geospatial data requirements of pedestrian navigation, the evaluation of existing databases, and cost-effective acquisition strategies for missing pedestrian navigation data. © Springer International Publishing AG 2017.

Ludwig N., Hasso Plattner Institute for Software Systems Engineering | Sack H., Hasso Plattner Institute for Software Systems Engineering
Proceedings - International Workshop on Database and Expert Systems Applications, DEXA | Year: 2011

Content-based multimedia retrieval on non-textual documents is often constrained by the available metadata. User-generated tags constitute an important source of information about a resource. To enable search scenarios that exceed traditional text-based search, such as exploratory and semantic search, this textual information must be complemented with semantic entities. Due to tag ambiguities and creative neologisms, automatic semantic annotation based on user tags represents a major challenge. In this work, we show how to adopt context information and ontological knowledge to automatically assign semantic entities to user-generated tags for video data, enabling sophisticated semantic search over semantic entities. The algorithm combines co-occurrence and link graph analysis using Linked Data. A definition of context reliability in audio-visual content is also described. © 2011 IEEE.
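The co-occurrence idea can be sketched in a few lines: an ambiguous tag is resolved to the candidate entity whose related terms (e.g. labels of neighboring Linked Data resources) overlap most with the tags it co-occurs with. The data structures and scoring here are hypothetical simplifications, not the paper's algorithm.

```python
def disambiguate(tag, cooccurring_tags, candidates):
    """Pick the candidate entity for an ambiguous tag whose related
    terms overlap most with its co-occurring tags. `candidates` maps
    an entity name to a set of related terms (hypothetical schema)."""
    def score(entity):
        return len(candidates[entity] & set(cooccurring_tags))
    return max(candidates, key=score)

entities = {
    "Jaguar (animal)": {"cat", "wildlife", "rainforest"},
    "Jaguar (car)":    {"vehicle", "engine", "racing"},
}
best = disambiguate("jaguar", ["engine", "racing", "vintage"], entities)
# → "Jaguar (car)"
```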

Hentschel C., Hasso Plattner Institute for Software Systems Engineering | Sack H., Hasso Plattner Institute for Software Systems Engineering
ACM International Conference Proceeding Series | Year: 2015

Recent advances in visual concept detection based on deep convolutional neural networks have only been successful because of the availability of huge training datasets provided by benchmarking initiatives such as ImageNet. Assembling reliably annotated training data is still a largely manual effort and can only be approached efficiently as crowdworking tasks. On the other hand, user-generated photos and annotations are available at almost no cost in social photo communities such as Flickr. Leveraging the information available in these communities may help to extend existing datasets as well as to create new ones for completely different classification scenarios. However, user-generated annotations of photos are known to be incomplete, subjective, and not necessarily related to the depicted content. In this paper, we therefore present an approach to reliably identify photos relevant for a given visual concept category. We have downloaded additional metadata for 1 million Flickr images and have trained a language model based on user-generated annotations. Relevance estimation is based on the accordance of an image's annotation data with our language model and on subsequent visual re-ranking. Experimental results demonstrate the potential of the proposed method: comparison with a baseline approach based on single tag matching shows significant improvements. © 2015 ACM.
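A minimal sketch of the tag-based relevance step, assuming a simple unigram language model with add-one smoothing over the concept's training annotations; this is a hypothetical simplification of the model described above, and the visual re-ranking stage is omitted.

```python
import math
from collections import Counter

def train_tag_lm(tag_lists):
    """Unigram language model over user tags, add-one smoothed.
    Returns a scorer giving the average per-tag log-probability."""
    counts = Counter(t for tags in tag_lists for t in tags)
    total, vocab = sum(counts.values()), len(counts) + 1
    def avg_logprob(tags):
        return sum(math.log((counts[t] + 1) / (total + vocab))
                   for t in tags) / max(len(tags), 1)
    return avg_logprob

# Annotations that fit the concept's training tags score higher than
# unrelated ones; low-scoring photos would be filtered out before the
# visual re-ranking step.
lm = train_tag_lm([["beach", "sea", "sand"], ["sea", "waves", "beach"]])
on_topic, off_topic = lm(["beach", "sea"]), lm(["office", "meeting"])
```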

Hentschel C., Hasso Plattner Institute for Software Systems Engineering | Sack H., Hasso Plattner Institute for Software Systems Engineering
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2015

Bag-of-Visual-Words (BoVW) features, which quantize and count local gradient distributions in images much as words are counted in texts, have proven to be powerful image representations. In combination with supervised machine learning approaches, models for nearly every visual concept can be learned. BoVW feature extraction, however, is performed by cascading multiple stages of local feature detection and extraction, vector quantization, and nearest neighbor assignment, which makes interpretation of the obtained image features, and thus of the overall classification results, very difficult. In this work, we present an approach for providing an intuitive heat map-like visualization of the influence each image pixel has on the overall classification result. We compare three different classifiers (AdaBoost, Random Forest, and linear SVM) trained on the Caltech-101 benchmark dataset, based on their individual classification performance and the generated model visualizations. The obtained visualizations not only allow for intuitive interpretation of the classification results but also help to identify sources of misclassification due to badly chosen training examples. © Springer International Publishing Switzerland 2015.
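For a linear model, the core of such a visualization can be sketched by mapping each visual word's classifier weight back to the image locations where its keypoints were detected. This is a coarse illustration of the idea only, not the paper's exact method; the data layout is assumed.

```python
def influence_map(keypoints, word_weights, shape):
    """Coarse pixel-influence heat map for a linear classifier:
    at each keypoint location (x, y), accumulate the weight of the
    visual word assigned to that keypoint. `keypoints` is a list of
    (x, y, word_id); `word_weights` maps word_id to a coefficient."""
    h, w = shape
    grid = [[0.0] * w for _ in range(h)]
    for x, y, word in keypoints:
        grid[y][x] += word_weights[word]
    return grid

# Two keypoints of a positively weighted word light up one cell.
heat = influence_map([(1, 0, "wheel"), (1, 0, "wheel")],
                     {"wheel": 0.5}, (2, 3))
# heat[0][1] == 1.0; all other cells stay 0.0
```

Smoothing the accumulated grid (e.g. with a Gaussian) would give the heat map-like rendering described in the abstract.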

Giese H., Hasso Plattner Institute for Software Systems Engineering | Hildebrandt S., Hasso Plattner Institute for Software Systems Engineering | Neumann S., Hasso Plattner Institute for Software Systems Engineering
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2010

During the development of complex engineering systems, different modeling notations are employed. In the automotive domain, for example, system engineering models are employed quite early to capture the requirements and basic structure of the entire system, while software engineering models are used later on to describe the concrete software architecture. Each model helps address a specific design issue with appropriate notations and at a suitable level of abstraction. However, when stepping forward from system design to software design, engineers have to ensure that all decisions captured in the system design model are correctly transferred to the software engineering model. Even worse, when changes occur later in either model, consistency today has to be re-established in a cumbersome manual step. In this paper, we present how model synchronization and consistency rules can be applied to automate this task and ensure that the different models are kept consistent. We also introduce a general approach for model synchronization that, besides synchronization itself, consists of tool adapters as well as consistency rules covering the overlap between the synchronized parts of a model and the rest. We present the model synchronization algorithm based on triple graph grammars in detail and exemplify the general approach by means of a model synchronization solution between system engineering models in SysML and software engineering models in AUTOSAR, developed for an industrial partner. © 2010 Springer-Verlag Berlin Heidelberg.
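The propagation step that triple graph grammars automate can be sketched abstractly: correspondence links connect source and target elements, and per-type rules carry changes across them. Everything here (the dict-based models, the `rules` table) is a hypothetical stand-in for generated TGG operational rules, not the paper's algorithm.

```python
def synchronize(source, target, correspondences, rules):
    """Propagate changes along correspondence links, the role the
    correspondence graph plays in a triple graph grammar. `rules`
    maps a source element type to an update function (hypothetical
    stand-in for generated operational rules)."""
    for src_id, tgt_id in correspondences:
        elem = source[src_id]
        rules[elem["type"]](elem, target[tgt_id])

# A renamed SysML block drags the corresponding AUTOSAR component along.
sysml = {"b1": {"type": "Block", "name": "BrakeCtrl"}}
autosar = {"c1": {"name": "old"}}
synchronize(sysml, autosar, [("b1", "c1")],
            {"Block": lambda s, t: t.update(name=s["name"])})
# autosar["c1"]["name"] == "BrakeCtrl"
```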

Heinze T., Hasso Plattner Institute for Software Systems Engineering
Proceedings of the IASTED International Conference on Signal and Image Processing, SIP 2011 | Year: 2011

This paper introduces a robust super-resolution algorithm for joint color multi-frame demosaicing. We show that our algorithm, although fast and simple, produces convincing results not only within the modeling assumptions but also for real raw data series. The ultimate goal is its application to telemedical patient monitoring through mobile devices with limited computing power and low-quality imaging hardware.
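Since the abstract gives no algorithmic detail, here is only a hedged sketch of the general multi-frame idea (classic shift-and-add on a single channel), not the paper's joint demosaicing method: low-resolution samples from registered frames are scattered onto a finer grid and averaged.

```python
def shift_and_add(frames, offsets, scale):
    """Classic shift-and-add multi-frame super-resolution for one
    color channel (illustrative only). frames: equally sized 2-D
    lists; offsets: per-frame (dx, dy) shifts in high-resolution
    pixels; scale: upsampling factor."""
    h, w = len(frames[0]) * scale, len(frames[0][0]) * scale
    acc = [[0.0] * w for _ in range(h)]
    cnt = [[0] * w for _ in range(h)]
    for frame, (dx, dy) in zip(frames, offsets):
        for y, row in enumerate(frame):
            for x, v in enumerate(row):
                yy, xx = y * scale + dy, x * scale + dx
                if 0 <= yy < h and 0 <= xx < w:
                    acc[yy][xx] += v
                    cnt[yy][xx] += 1
    return [[acc[y][x] / cnt[y][x] if cnt[y][x] else 0.0
             for x in range(w)] for y in range(h)]

# One 1x1 frame upsampled 2x: the single sample lands at (0, 0).
sr = shift_and_add([[[1.0]]], [(0, 0)], 2)
# sr[0][0] == 1.0; unobserved high-resolution pixels remain 0.0
```

The absence of any deconvolution or outlier handling is what keeps this sketch cheap, in the spirit of the mobile, low-power target platform mentioned above.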

Steinmetz N., Hasso Plattner Institute for Software Systems Engineering | Sack H., Hasso Plattner Institute for Software Systems Engineering
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2013

Semantic analysis and annotation of textual information with appropriate semantic entities is an essential task to enable content-based search on the annotated data. For video resources, textual information is scarce at first sight, but technologies for automatically extracting textual information from audio-visual content have advanced in recent years. Additionally, video portals allow videos to be annotated with tags and comments by authors as well as users. Taken together, all this information forms video metadata that is heterogeneous in several ways. By exploiting the characteristics of the different metadata types, context can be determined to enable sound and reliable semantic analysis and to support an accurate understanding of the video's content. This paper proposes a description model of video metadata for semantic analysis that takes various contextual factors into account. © 2013 Springer-Verlag Berlin Heidelberg.

Panchenko O., Hasso Plattner Institute for Software Systems Engineering
Proceedings - Working Conference on Reverse Engineering, WCRE | Year: 2011

Software engineers must deal with a large amount of information about source code. Appropriate tools could help handle it, but existing tools are not capable of processing and presenting such a large amount of information sufficiently well. With the advent of in-memory column-oriented databases, the performance of some data-intensive applications can be improved significantly. This has resulted in a completely new user experience for those applications and enabled new use cases. This PhD thesis investigates the applicability of in-memory column-oriented databases for supporting daily software engineering activities. The major research question addressed in this thesis is: does in-memory column-oriented database technology provide the necessary performance advantages for working interactively with large amounts of fine-grained structural information about source code? To investigate this question, two scenarios have been selected that particularly suffer from low performance. The first scenario is source code search. Existing source code repositories contain a large amount of structural data; interface definitions, abstract syntax trees, and call graphs are examples of such data. Existing tools have addressed the performance problems either by reducing the amount of data through a coarse-grained representation, by preparing answers to developers' questions in advance, or by reducing the scope of search. All currently existing alternatives result in a loss of developer productivity. The second scenario is source code analytics. To complete reverse engineering tasks, software engineers often have to analyze a number of atomic facts extracted from source code, such as occurrences of certain syntactic patterns, software product metrics, or violations of development guidelines. Each fact typically has several characteristics, such as its type, the location in code where it was found, and some attributes. In particular, the analysis of large software systems requires the ability to process a large number of such facts efficiently. Industrial experiments conducted for this thesis showed that in-memory technology provides performance gains that improve developer productivity and enable scenarios previously not possible. This thesis spans both software engineering and database technology. From the viewpoint of software engineering, it seeks a way to support developers in dealing with large amounts of structural data. From the viewpoint of database technology, source code search and analytics are domains for studying fundamental issues of storing and querying structural data. © 2011 IEEE.
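The columnar layout central to the thesis can be sketched with plain lists: one list per attribute, so a filter touches only the column it needs. The schema below is illustrative, not the thesis prototype's.

```python
# Column-oriented fact store: one list per attribute, mirroring how an
# in-memory column database lays out fine-grained code facts
# (illustrative schema, not the thesis prototype's).
facts = {
    "kind": ["call", "metric", "violation", "call"],
    "file": ["a.py", "a.py", "b.py", "c.py"],
    "line": [10, 0, 42, 7],
}

def scan(store, column, value):
    """Filter by a full scan of a single column, returning matching
    row indices. Only the touched column is read, which is what makes
    columnar layouts fast for analytics over many facts."""
    return [i for i, v in enumerate(store[column]) if v == value]

rows = scan(facts, "kind", "call")          # → [0, 3]
files = [facts["file"][i] for i in rows]    # → ["a.py", "c.py"]
```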

Panchenko O., Hasso Plattner Institute for Software Systems Engineering | Plattner H., Hasso Plattner Institute for Software Systems Engineering | Zeier A., Hasso Plattner Institute for Software Systems Engineering
Proceedings - International Conference on Software Engineering | Year: 2011

Source code search is an important tool for software engineers. However, until now relatively little has been known about what developers search for in source code and why. This paper addresses this knowledge gap. We present the results of a log file analysis of a source code search engine; the data from the log file was analyzed together with the change history of four development and maintenance systems. The results show that most search targets were not changed after being downloaded, from which we conclude that developers conducted searches to find reusable components, to obtain coding examples, or to perform impact analysis. In contrast, maintainers often change the code they have downloaded. Moreover, we automatically categorized the search queries: the most popular categories were method name, structural pattern, and keyword, and the major search target was a statement. Although the selected data set was small, the deviations between the systems were negligible; we therefore conclude that our results are valid. © 2011 ACM.
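A query categorization of this kind might look as follows. The rules here are heuristic stand-ins of my own: the abstract names the categories but not the classification rules, so both regexes and thresholds are assumptions.

```python
import re

def categorize(query):
    """Heuristic stand-in for the paper's automatic query
    categorization; the actual rules are not given in the abstract."""
    if re.fullmatch(r"[a-z]\w*[A-Z]\w*(\(\))?", query):
        return "method name"          # camelCase identifier
    if any(ch in query for ch in "*?"):
        return "structural pattern"   # contains wildcards
    return "keyword"

categorize("getCustomerName")   # → "method name"
categorize("SELECT * FROM")     # → "structural pattern"
categorize("authorization")     # → "keyword"
```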
