Source Type
Dittrich J.,Saarland University | Quiane-Ruiz J.-A.,Saarland University | Jindal A.,Saarland University | Jindal A.,International Max Planck Research School for Computer Science | And 3 more authors.
Proceedings of the VLDB Endowment | Year: 2010

MapReduce is a computing paradigm that has gained a lot of attention in recent years from industry and research. Unlike parallel DBMSs, MapReduce allows non-expert users to run complex analytical tasks over very large data sets on very large clusters and clouds. However, this comes at a price: MapReduce processes tasks in a scan-oriented fashion. Hence, the performance of Hadoop, an open-source implementation of MapReduce, often does not match that of a well-configured parallel DBMS. In this paper we propose a new type of system named Hadoop++: it boosts task performance without changing the Hadoop framework at all (Hadoop does not even 'notice' it). To reach this goal, rather than changing a working system (Hadoop), we inject our technology at the right places through UDFs only and affect Hadoop from the inside. This has three important consequences: First, Hadoop++ significantly outperforms Hadoop. Second, any future changes to Hadoop may be used directly with Hadoop++ without rewriting any glue code. Third, Hadoop++ does not need to change the Hadoop interface. Our experiments show the superiority of Hadoop++ over both Hadoop and HadoopDB for tasks related to indexing and join processing. © 2010 VLDB Endowment.
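The map/shuffle/reduce pipeline the abstract refers to can be sketched in a few lines of plain Python. This is a generic illustration of the paradigm itself, not of Hadoop or the Hadoop++ UDF injection; the word-count UDFs are hypothetical examples.

```python
from collections import defaultdict

def map_phase(records, map_fn):
    """Apply the user-defined map function to every input record."""
    for record in records:
        yield from map_fn(record)

def shuffle(pairs):
    """Group intermediate (key, value) pairs by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups, reduce_fn):
    """Apply the user-defined reduce function to each key group."""
    return {key: reduce_fn(key, values) for key, values in groups.items()}

# Example UDFs: word count over text lines.
def word_map(line):
    for word in line.split():
        yield (word.lower(), 1)

def word_reduce(word, counts):
    return sum(counts)

lines = ["MapReduce scans data", "MapReduce groups data"]
result = reduce_phase(shuffle(map_phase(lines, word_map)), word_reduce)
# result: {"mapreduce": 2, "scans": 1, "data": 2, "groups": 1}
```

Because the framework only ever calls the user-supplied `map_fn` and `reduce_fn`, those UDF hooks are the natural place to inject extra behavior without touching the framework itself, which is the spirit of the Hadoop++ approach described above.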

Hanser C.,University of Graz | Rabkin M.,Saarland University | Rabkin M.,International Max Planck Research School for Computer Science | Schroder D.,Saarland University
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2015

In structure-preserving signatures on equivalence classes (SPS-EQ-R), introduced at Asiacrypt 2014, each message M in (G ∗)ℓ is associated with its projective equivalence class, and a signature commits to the equivalence class: anybody can transfer the signature to a new, scaled representative. In this work, we give the first black-box construction of a public-key encryption scheme from any SPS-EQ-R satisfying a simple new property which we call perfect composition. The construction does not involve any non-black-box technique, and the implication is that such SPS-EQ-R cannot be constructed from one-way functions in a black-box way. The main idea of our scheme is to build a verifiable encrypted signature (VES) first and then apply the general transformation suggested by Calderon et al. (CT-RSA 2014). The original definition of VES requires that the underlying signature scheme be correct and secure, in addition to other security properties. The latter have been extended in subsequent literature, but the former requirements have sometimes been neglected, leaving a hole in the security notion. We show that Calderon et al.'s notion of resolution independence fills this gap. © Springer International Publishing Switzerland 2015.
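The projective equivalence classes at the heart of SPS-EQ-R can be illustrated with a toy computation. The sketch below works in the multiplicative group of integers modulo a small prime as a stand-in for the pairing-friendly group G∗ a real scheme uses; the modulus and the message vectors are arbitrary assumptions for the example.

```python
# Toy illustration of projective equivalence classes over Z_p* as a
# stand-in for a pairing-friendly group G*; a real SPS-EQ-R scheme
# would use elliptic-curve points, not small integers.
P = 101  # small prime modulus (assumption for the sketch)

def scale(msg, s):
    """Move to another representative of the same class: s * M."""
    return tuple((s * m) % P for m in msg)

def same_class(m1, m2):
    """Check whether two message vectors are projectively equivalent."""
    # Recover the candidate scalar from the first coordinates, then verify
    # that the same scalar maps every coordinate of m1 onto m2.
    s = (m2[0] * pow(m1[0], -1, P)) % P
    return all((s * a) % P == b for a, b in zip(m1, m2))

M = (3, 7, 12)
N = scale(M, 5)              # a new representative of the class [M]
assert same_class(M, N)      # same equivalence class
assert not same_class(M, (3, 7, 13))  # different class
```

A signature that commits only to the class [M] remains valid for any such scaled representative N, which is what allows anyone to re-randomize the signed message.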

Chen S.,Zhejiang University of Technology | Wang Z.,International Max Planck Research School for Computer Science | Tong H.,Zhejiang University of Technology | Liu S.,Zhejiang University of Technology | Zhang B.,Nanjing University of Finance and Economics
Intelligent Automation and Soft Computing | Year: 2011

For feature matching in 3D computer vision, there are two main kinds of methods, i.e. global-based and local-based algorithms. Although both have achieved useful results, each has its own disadvantages. This paper proposes a novel method which combines global and local information so as to take advantage of both. A series of sub-pixel window correlation methods is employed, guided by a fronto-parallel result, to produce local results. These local results are then repeatedly merged by quadratic pseudo-boolean optimization under the guidance of global information. After several sub-pixel local optimizations, the error rates at high resolution are greatly reduced. By combining global and local traits, the third optimization step reduces the low-resolution error while keeping the high-resolution error low. Compared with other existing algorithms, the proposed approach performs well when the scene is composed of planar or curved surfaces. Practical experiments are carried out in this research to illustrate the method and typical results. © 2011, TSI® Press.
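The window-correlation step the abstract mentions can be sketched as a basic sum-of-squared-differences (SSD) disparity search along one scanline. This is a minimal, hypothetical illustration of window correlation in general; it includes none of the paper's sub-pixel refinement or QPBO merging.

```python
# Minimal window-correlation (SSD) sketch for stereo matching along a
# single scanline; the image rows below are made-up toy data.
def ssd(window_a, window_b):
    """Sum of squared differences between two intensity windows."""
    return sum((a - b) ** 2 for a, b in zip(window_a, window_b))

def best_disparity(left, right, x, half=1, max_disp=4):
    """Pick the disparity minimizing SSD between windows around pixel x."""
    win_l = left[x - half : x + half + 1]
    best, best_cost = 0, float("inf")
    for d in range(min(max_disp, x - half) + 1):
        win_r = right[x - d - half : x - d + half + 1]
        cost = ssd(win_l, win_r)
        if cost < best_cost:
            best, best_cost = d, cost
    return best

left  = [10, 10, 80, 90, 80, 10, 10, 10]
right = [10, 80, 90, 80, 10, 10, 10, 10]  # scene shifted by one pixel
assert best_disparity(left, right, x=3) == 1
```

A purely local matcher like this is ambiguous in textureless regions, which is exactly why the paper merges such local results under global guidance.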

Chen S.,Zhejiang University of Technology | Wang Z.,International Max Planck Research School for Computer Science
IEEE Transactions on Industrial Informatics | Year: 2012

Generalized belief propagation is a popular algorithm for performing inference on large-scale Markov random field (MRF) networks. This paper proposes accelerated generalized belief propagation, with three strategies to reduce the computational effort. First, a min-sum messaging scheme and a caching technique are used to speed up message access. Second, a direction set method is used to reduce the complexity of computing clique messages from quartic to cubic. Finally, a coarse-to-fine hierarchical state-space reduction method is presented to eliminate redundant states. The results show that a combination of these strategies can greatly accelerate the inference process in large-scale MRFs. For common stereo matching, it results in a speed-up of about 200 times. © 2006 IEEE.
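Min-sum message passing, the scheme named in the first strategy, can be shown on the simplest possible MRF: a chain. The sketch below is a generic min-sum (Viterbi-style) pass with made-up unary costs and a Potts pairwise cost; it is a simplified stand-in for the clique-based generalized BP in the paper.

```python
# Sketch of min-sum message passing on a small chain MRF with a Potts
# smoothness term; unary costs are hypothetical toy data.
def min_sum_chain(unary, pairwise):
    """MAP labeling of a chain via a forward min-sum pass + backtracking."""
    n, k = len(unary), len(unary[0])
    msg = [unary[0][:]]   # msg[i][l]: best prefix cost ending in label l
    back = []             # back-pointers for recovering the labeling
    for i in range(1, n):
        cur, arg = [], []
        for l in range(k):
            costs = [msg[-1][m] + pairwise(m, l) for m in range(k)]
            arg.append(min(range(k), key=costs.__getitem__))
            cur.append(unary[i][l] + costs[arg[-1]])
        msg.append(cur)
        back.append(arg)
    # Backtrack from the cheapest final label.
    labels = [min(range(k), key=msg[-1].__getitem__)]
    for arg in reversed(back):
        labels.append(arg[labels[-1]])
    return labels[::-1]

potts = lambda a, b: 0 if a == b else 2   # Potts smoothness cost
unary = [[0, 5], [4, 1], [5, 0]]          # per-node data costs
assert min_sum_chain(unary, potts) == [0, 1, 1]
```

Working with summed costs instead of multiplied probabilities is what makes min-sum numerically cheap, and repeated sub-expressions like the `costs` vector are natural targets for the caching the paper describes.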

PubMed | Leibniz Research Center for Working Environment and Human Factors, German Rheumatism Research Center, Max Delbrück Center for Molecular Medicine, Saarland University and 4 more.
Type: Journal Article | Journal: Nucleic acids research | Year: 2016

The binding and contribution of transcription factors (TFs) to cell-specific gene expression is often deduced from open-chromatin measurements to avoid costly TF ChIP-seq assays. Thus, it is important to develop computational methods for accurate TF binding prediction in open-chromatin regions (OCRs). Here, we report a novel segmentation-based method, TEPIC, to predict TF binding by combining sets of OCRs with position weight matrices. TEPIC can be applied to various open-chromatin data, e.g. DNaseI-seq and NOMe-seq. Additionally, histone marks (HMs) can be used to identify candidate TF binding sites. TEPIC computes TF affinities and uses open-chromatin/HM signal intensity as a quantitative measure of TF binding strength. Using machine learning, we find that including low-affinity binding sites improves our ability to explain gene expression variability compared to the standard presence/absence classification of binding sites. Further, we show that both footprints and peaks capture essential TF binding events and lead to good prediction performance. In our application, gene-based scores computed by TEPIC with one open-chromatin assay nearly reach the quality of several TF ChIP-seq data sets. Finally, these scores correctly predict known transcriptional regulators, as illustrated by the application to novel DNaseI-seq and NOMe-seq data for primary human hepatocytes and CD4+ T-cells, respectively.
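The idea of scoring soft TF affinities with a position weight matrix, rather than calling hard binding hits, can be sketched as follows. The PWM and sequences below are entirely hypothetical, and this is a generic affinity scan, not the TEPIC implementation.

```python
# Toy position-weight-matrix scan: sum soft (exponentiated) match scores
# over every window of a region, so low-affinity sites still contribute.
# The log-odds PWM below is a made-up 3-bp motif, not a real TF motif.
import math

PWM = [  # log-odds scores for A, C, G, T at each motif position
    {"A": 1.2, "C": -1.0, "G": -1.0, "T": -0.5},
    {"A": -1.0, "C": 1.0, "G": -0.8, "T": -1.0},
    {"A": -0.5, "C": -1.0, "G": 1.1, "T": -1.0},
]

def region_affinity(seq, pwm, scale=1.0):
    """Aggregate affinity of a TF for a region (e.g. an open-chromatin peak)."""
    width = len(pwm)
    total = 0.0
    for i in range(len(seq) - width + 1):
        score = sum(col[base] for col, base in zip(pwm, seq[i : i + width]))
        total += math.exp(scale * score)  # soft contribution per site
    return total

strong = region_affinity("TTACGTT", PWM)  # contains a good 'ACG' match
weak = region_affinity("TTTTTTT", PWM)    # no good match anywhere
assert strong > weak
```

Summing exponentiated scores over all windows is what distinguishes an affinity-based score from a presence/absence call: a region full of weak sites still accumulates signal, which matches the abstract's finding that low-affinity sites help explain expression variability.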
