Vienna, Austria

The Vienna University of Technology is one of the major universities in Vienna, the capital of Austria. Founded in 1815 as the "Imperial-Royal Polytechnic Institute", it currently has about 26,200 students, eight faculties, and about 4,000 staff members. The university's teaching and research are focused on engineering and the natural sciences. (Wikipedia)


Petrushevski F.,Vienna University of Technology
UbiComp'12 - Proceedings of the 2012 ACM Conference on Ubiquitous Computing | Year: 2012

This research focuses on personalization of lighting conditions in office buildings. A lighting control agent is proposed that uses spatial context retrieved from a space model, as well as other context data, to address the challenges of personalized lighting control. Benefits include improved user satisfaction, productivity and minimized energy use. A user scenario is presented to illustrate the envisioned concept of personalized lighting control. Requirements are derived from this and related scenarios. A system design is proposed that meets these requirements. A first version of a system prototype has been implemented and validated against the user scenario. Copyright 2012 ACM.


Barany G.,Vienna University of Technology
Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI) | Year: 2014

The Python programming language is known for performing poorly on many tasks. While to some extent this is to be expected from a dynamic language, it is not clear how much each dynamic feature contributes to the costs of interpreting Python. In this study we attempt to quantify the costs of language features such as dynamic typing, reference counting for memory management, boxing of numbers, and late binding of function calls. We use an experimental compilation framework for Python that can make use of type annotations provided by the user to specialize the program as well as elide unnecessary reference counting operations and function lookups. The compiled programs run within the Python interpreter and use its internal API to implement language semantics. By separately enabling and disabling compiler optimizations, we can thus measure how much each language feature contributes to total execution time in the interpreter. We find that a boxed representation of numbers as heap objects is the single most costly language feature on numeric codes, accounting for up to 43% of total execution time in our benchmark set. On symbolic object-oriented code, late binding of function and method calls costs up to 30%. Redundant reference counting, dynamic type checks, and Python's elaborate function calling convention have comparatively smaller costs. © 2014 ACM.
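As a quick illustration of one of the costs discussed above, the minimal sketch below times late binding of a global/builtin name against a hoisted local binding using only the standard library. It is not the paper's compilation framework, and the function names are invented for the example:

```python
# Minimal sketch (assumed names, not from the paper): measure the cost of
# late binding by timing a loop that looks up 'abs' on every call against
# one where the lookup is hoisted into a fast local variable.
import timeit

def late_bound(n=10**5):
    total = 0.0
    for i in range(n):
        total += abs(-1.0)      # 'abs' resolved in globals/builtins each iteration
    return total

def early_bound(n=10**5, _abs=abs):
    total = 0.0
    for i in range(n):
        total += _abs(-1.0)     # lookup done once, at function definition time
    return total

print("late :", min(timeit.repeat(late_bound, number=20)))
print("early:", min(timeit.repeat(early_bound, number=20)))
```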


Linhardt P.,Vienna University of Technology
Materials and Corrosion | Year: 2010

Manganese oxidizing microorganisms are known as ubiquitous species in soil and fresh water. Their ability to extract dissolved manganese from the water even at minute concentrations and to biomineralize it as manganese(III/IV) oxides makes them potentially relevant for corrosion processes in technical systems carrying freshwater. These oxides are known as strong oxidants and may act as a catalyst for the oxygen reduction reaction. Thus, they are cathodically active, possibly driving anodic metal dissolution processes. Two decades of personal experience in failure analysis related to these organisms have indeed shown that manganese oxidizers may appear in all kinds of freshwater systems. This paper summarizes observations and conclusions drawn from these cases and provides an overview of the methods found useful in their investigation. Copyright © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.


Bliem M.,Institute of Advanced Studies Carinthia | Getzner M.,Vienna University of Technology
Environmental Economics and Policy Studies | Year: 2012

The European Water Framework Directive (WFD) includes an article on the mandatory provision for environmental and resource costs and benefits in pricing water services. Valuing water resources (e.g., regarding water quality, water availability, ecology, and biodiversity) is therefore an increasingly important topic for all water-related policies, such as the provision of drinking water, waste-water treatment, hydrological engineering, and ship transport. The current study provides empirical evidence on a specific river restoration project in the Danube National Park (Austria) combining improvements in water quality, the reduction of flood risks, and ecological benefits in terms of providing improved groundwater and flooding dynamics in the adjacent wetlands. Our study allows us to test whether willingness-to-pay (WTP) bids of respondents for such programs are different between two identical surveys employed in different years, and between two scenarios differing in scope. The results are encouraging regarding the (short-term) temporal stability of preferences for river restoration. Except for minor differences that are not statistically significant, we find empirical (econometric) indications that WTP bids were roughly of the same order of magnitude in the two surveys. The results of the paper suggest that from the viewpoint of temporal stability, WTP bids may be reasonably transferred over time. © 2012 Springer.


Yang W.,Chalmers University of Technology | Durisi G.,Chalmers University of Technology | Riegler E.,Vienna University of Technology
IEEE Journal on Selected Areas in Communications | Year: 2013

We characterize the capacity of Rayleigh block-fading multiple-input multiple-output (MIMO) channels in the noncoherent setting where transmitter and receiver have no a priori knowledge of the realizations of the fading channel. We prove that unitary space-time modulation (USTM) is not capacity-achieving in the high signal-to-noise ratio (SNR) regime when the total number of antennas exceeds the coherence time of the fading channel (expressed in multiples of the symbol duration), a situation that is relevant for MIMO systems with large antenna arrays (large-MIMO systems). This result settles a conjecture by Zheng & Tse (2002) in the affirmative. The capacity-achieving input signal, which we refer to as Beta-variate space-time modulation (BSTM), turns out to be the product of a unitary isotropically distributed random matrix, and a diagonal matrix whose nonzero entries are distributed as the square-root of the eigenvalues of a Beta-distributed random matrix of appropriate size. Numerical results illustrate that using BSTM instead of USTM in large-MIMO systems yields a rate gain as large as 13% for SNR values of practical interest. © 2012 IEEE.
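In generic notation (assumed here, not taken verbatim from the paper), the BSTM input structure described above can be written as:

```latex
% U: isotropically distributed unitary matrix; lambda_i: eigenvalues of a
% Beta-distributed random matrix of appropriate size. USTM corresponds to
% the special case where D is proportional to the identity.
\[
  \mathbf{X} = \mathbf{U}\,\mathbf{D}, \qquad
  \mathbf{D} = \operatorname{diag}\!\bigl(\sqrt{\lambda_1},\dots,\sqrt{\lambda_M}\bigr).
\]
```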


Feinerer I.,Vienna University of Technology
Artificial Intelligence for Engineering Design, Analysis and Manufacturing: AIEDAM | Year: 2013

Configuration of large-scale applications in an engineering context requires a modeling environment that allows the design engineer to draft the configuration problem in a natural way, together with efficient methods that can process the modeled setting and scale with the number of components. Existing configuration methods in artificial intelligence typically perform quite well in certain subareas but are hard to use for general-purpose modeling without a mathematical or logic background (the so-called knowledge acquisition bottleneck) and/or have scalability issues. As a remedy to this important issue, both in theory and in practical applications, we use a standard modeling environment, the Unified Modeling Language, which has been proposed by the configuration community as a suitable object-oriented formalism for configuration problems. We provide a translation of key concepts of class diagrams to inequalities, identify relevant configuration aspects, and show how they are treated as an integer linear program. Solving an integer linear program can be done efficiently, and integer linear programming scales well to large configurations consisting of several thousand components and interactions. We conduct an empirical study in the context of package management for operating systems and the Linux kernel configuration. We evaluate our methodology by a benchmark and obtain convincing results in support of using integer linear programming for configuration applications of realistic size and complexity. Copyright © 2013 Cambridge University Press.
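A hypothetical example of the kind of class-diagram-to-inequality translation sketched above (the symbols are invented for illustration, not taken from the paper): an association in which each instance of class A links to between l and u instances of B, and each instance of B to between l' and u' instances of A, with x_A and x_B counting instances and y counting links:

```latex
% All variables are nonnegative integers, which is what makes the encoding
% an integer linear program rather than a plain LP.
\[
  l\,x_A \;\le\; y \;\le\; u\,x_A, \qquad
  l'\,x_B \;\le\; y \;\le\; u'\,x_B, \qquad
  x_A,\, x_B,\, y \in \mathbb{Z}_{\ge 0}.
\]
```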


Wasicek A.,Vienna University of Technology
Lecture Notes in Electrical Engineering | Year: 2014

In this chapter we discuss the application of integrity models in a mixed-criticality system to enable the secure sharing of information. The sharing of resources and information in computer systems enables cost savings. The major technical challenge of these systems is simple: low-criticality applications must be prevented from interfering with high-criticality ones that execute in the same system. An example of such an integrated architecture is the ACROSS MPSoC architecture, which facilitates the implementation of hard real-time systems. We present an integrity model for the secure exchange of information between different levels of criticality within ACROSS. Our approach is based on Totel's integrity model, which proposes to upgrade information from low to high by rigorously validating this information. We were able to show that the encapsulation mechanisms of the ACROSS architecture support the implementation of the proposed integrity model. © 2014 Springer Science+Business Media Dordrecht.


Suter G.,Vienna University of Technology
CAD Computer Aided Design | Year: 2013

Network-based space layouts are schematic models of whole spaces, subspaces, and related physical elements. They address diverse space modeling needs in building and product design. A schema (data model) for network-based space layouts is defined that is influenced by existing space schemas. Layout elements and selected spatial relations form a geometric network. The network is embedded in 3-space and facilitates analysis with graph and network algorithms. Spatial constraints on layout elements and spatial relations extend the schema to support spatial consistency checking. Spatially consistent layouts are required for reliable network analysis and desirable for layout modification operations. An operation is introduced that evaluates spatial constraints to detect and semi- or fully-automatically resolve spatial inconsistencies in a layout. A layout modeling system prototype that includes proof-of-concept implementations of the layout schema extended by spatial constraints and the inconsistency resolution operation is described. Layouts of a floor of an office building and a rack server cabinet have been modeled with the system prototype. © 2013 Elsevier Ltd. All rights reserved.


Chen C.,Vienna University of Technology | Freedman D.,Hewlett - Packard
Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms | Year: 2010

We address the problem of localizing homology classes, namely, finding the cycle representing a given class with the most concise geometric measure. We focus on the volume measure, that is, the 1-norm of a cycle. Two main results are presented. First, we prove the problem is NP-hard to approximate within any constant factor. Second, we prove that for homology of dimension two or higher, the problem is NP-hard to approximate even when the Betti number is O(1). A side effect is the inapproximability of the problem of computing the nonbounding cycle with the smallest volume, and computing cycles representing a homology basis with the minimal total volume. We also discuss other geometric measures (diameter and radius) and show their disadvantages in homology localization. Our work is restricted to homology over the ℤ2 field. Copyright © by SIAM.
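One standard way to pose the localization problem above, in notation assumed for this listing rather than quoted from the paper: given a d-cycle z_0 representing the class of interest over ℤ2, search its homology class for the representative with the fewest d-simplices (its 1-norm):

```latex
% z ranges over d-chains and w over (d+1)-chains, both with Z_2 coefficients;
% adding a boundary keeps z in the same homology class as z_0.
\[
  \min_{z,\,w}\; \lVert z \rVert_1
  \quad \text{s.t.} \quad
  z = z_0 + \partial_{d+1} w \pmod{2}.
\]
```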


Retscher G.,Vienna University of Technology
Journal of Applied Geodesy | Year: 2015

Location-based Services (LBS) nowadays influence every individual's life due to the growing market penetration of smartphones and other mobile devices. For smartphone apps, localization technologies are being developed, ranging from GNSS to alternative ubiquitous positioning methods as well as the use of built-in inertial sensors such as accelerometers, gyroscopes, magnetometers, and barometers. Moreover, signals-of-opportunity, which are not intended for positioning at first sight but are receivable in many environments such as buildings and public spaces, are increasingly utilized for positioning and navigation. The use of Wi-Fi (Wireless Fidelity) is a typical example. These technologies, however, have become very powerful tools, as they enable tracking an individual or even a group of users. Most technical researchers assume that the field is mainly about further enhancing technologies and algorithms, including the development of new advanced apps, to improve personal navigation and to deliver location-oriented information just in time to a single LBS user or a group of users. The authors claim that ethical and political issues need to be addressed within our research community from the very beginning. Although there is a lot of research going on in developing algorithms to keep one's data and LBS search requests private, researchers can no longer keep their credibility without cooperating with ethical experts or an ethics committee. In a study called InKoPoMoVer (Cooperative Positioning for Real-time User Assistance and Guidance at Multi-modal Public Transit Junctions), a cooperation with social scientists was initiated for the first time at the Vienna University of Technology, Austria, in this context. The major aims of this study in relation to ethical questions are addressed in this paper. © 2015 Walter de Gruyter GmbH.


Tresch R.,Telecommunications Research Center Vienna | Guillaud M.,Vienna University of Technology
IEEE International Symposium on Information Theory - Proceedings | Year: 2010

Spatial interference alignment among a finite number of users is investigated as a technique to increase the probability of successful transmission in an interference-limited clustered wireless ad hoc network. Using techniques from stochastic geometry, we build on the work of Ganti and Haenggi dealing with Poisson cluster processes with a fixed number of cluster points and provide a numerically integrable expression for the outage probability using an intra-cluster interference alignment strategy with multiplexing gain one. For a special network setting we derive a closed-form upper bound. We demonstrate significant performance gains compared to single-antenna systems without local cooperation. © 2010 IEEE.


Hanbury A.,Vienna University of Technology
SIGIR'12 - Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval | Year: 2012

Due to an explosion in the amount of medical information available, search techniques are gaining importance in the medical domain. This tutorial discusses recent results on search in the medical domain, including the outcome of surveys on end user requirements, research relevant to the field, and current medical and health search applications available. Finally, the extent to which available techniques meet user requirements is discussed, and open challenges in the field are identified. © 2012 Author.


Yao H.,University of California at San Diego | Gerstoft P.,University of California at San Diego | Shearer P.M.,University of California at San Diego | Mecklenbrauker C.,Vienna University of Technology
Geophysical Research Letters | Year: 2011

Compressive sensing (CS) is a technique for finding sparse signal representations to underdetermined linear measurement equations. We use CS to locate seismic sources during the rupture of the 2011 Tohoku-Oki Mw 9.0 earthquake in Japan from teleseismic P waves recorded by an array of stations in the United States. The seismic sources are located by minimizing the ℓ2-norm of the difference between the observed and modeled waveforms penalized by the ℓ1-norm of the seismic source vector. The resulting minimization problem is convex and can be solved efficiently. Our results show clear frequency-dependent rupture modes with high-frequency energy radiation dominant in the down-dip region and low-frequency radiation in the updip region, which may be caused by differences in rupture behavior (more intermittent or continuous) at the slab interface due to heterogeneous frictional properties. © 2011 by the American Geophysical Union.
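The penalized inversion described above, written in generic compressive-sensing notation (symbols assumed, not quoted from the paper): b holds the observed teleseismic waveforms, A maps candidate source locations to modeled waveforms, and λ controls the sparsity of the source vector x:

```latex
% The l1 penalty promotes a sparse set of active sources; the l2 term
% measures waveform misfit. The problem is convex, as the abstract notes.
\[
  \hat{x} \;=\; \arg\min_{x}\; \lVert b - A x \rVert_2^2 \;+\; \lambda \lVert x \rVert_1 .
\]
```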


Velik R.,Vienna University of Technology
Minds and Machines | Year: 2010

For a long time, emotions have been ignored in the attempt to model intelligent behavior. However, within the last years, evidence has come from neuroscience that emotions are an important facet of intelligent behavior, being involved in cognitive problem solving, decision making, the establishment of social behavior, and even conscious experience. Also in research communities like software agents and robotics, an increasing number of researchers have started to believe that computational models of emotions will be needed to design intelligent systems. Nevertheless, modeling emotions in technical terms poses many difficulties and has often been regarded as simply not feasible. In this article, the main problems that occur when attempting to implement emotions in machines are identified. By pointing out these problems, the aim is to avoid repeating the mistakes committed in earlier computational models of emotions and thus to speed up future development in this area. The identified issues are not derived from abstract reflections on this topic but from an actual attempt to implement emotions in a technical system based on neuroscientific research findings. It is argued that besides focusing on the cognitive aspects of emotions, a consideration of the bodily aspects of emotions, their grounding in a visceral body, is of crucial importance, especially when a system shall be able to learn correlations between environmental objects and events and their "emotional meaning". © 2010 Springer Science+Business Media B.V.


Zdun U.,Vienna University of Technology
Information and Software Technology | Year: 2010

A number of mature toolkits and language workbenches for DSL-based design have been proposed, making DSL-based design attractive for many projects. These toolkits preselect many architectural decision options. However, in many cases it would be beneficial for DSL-based design to decide on the DSL's architecture later in a DSL project, once the requirements and the domain have been sufficiently understood. We propose a language and a number of DSLs for DSL-based design and development that combine important benefits of different DSL toolkits in a unique way. Our approach specifically targets deferring architectural decisions in DSL-based design. As a consequence, the architect can choose, even late in a DSL project, among options such as whether to provide the DSL as one or more external or embedded DSLs and whether to use an explicit language model or not. © 2010 Elsevier B.V. All rights reserved.


Collinucci A.,Vienna University of Technology | Wyder T.,Catholic University of Leuven
Journal of High Energy Physics | Year: 2010

We analyze a mixed ensemble of low charge D4-D2-D0 brane states on the quintic and show that these can be successfully enumerated using attractor flow tree techniques and Donaldson-Thomas invariants. In this low charge regime one needs to take into account worldsheet instanton corrections to the central charges, which is accomplished by making use of mirror symmetry. All the charges considered can be realized as fluxed D6-D2-D0 and D6̄-D2-D0 pairs which we enumerate using DT invariants. Our procedure uses the low charge counterpart of the picture developed by Denef and Moore. By establishing the existence of flow trees numerically and refining the index factorization scheme, we reproduce and improve some results obtained by Gaiotto, Strominger and Yin. Our results provide appealing evidence that the strong split flow tree conjecture holds and allow us to compute exact results for an important sector of the theory. Our refined scheme for computing indices might shed some light on how to improve index computations for systems with larger charges. © SISSA 2010.


Hofkirchner W.,Vienna University of Technology
TripleC | Year: 2013

Gregory Bateson's famous saying about information can be looked upon as a good foundation of a Unified Theory of Information (UTI). Section one discusses the hard and the soft science approaches to information. It is argued that a UTI approach needs to overcome the divide between these approaches and can do so by adopting an historical and logical account of information. Section two gives a system-theoretical sketch of such an information concept. It is based upon assuming a co-extension of self-organisation and information. Information is defined as a tripartite relation such that (1) Bateson's "making a difference" is the build-up of the self-organised order; (2) Bateson's "difference" that makes the difference is the perturbation that triggers the build-up; (3) Bateson's difference that is made is made to the system because the perturbation serves a function for the system's self-organisation. In semiotic terms, (1) a sign (= the self-organised order) relates (2) a signified (= the perturbation) (3) to a signmaker (= the system). A third section focuses on the consequences of this concept for knowledge about techno-social information processes and information structures.


Collinucci A.,Vienna University of Technology
Journal of High Energy Physics | Year: 2010

In this paper, a procedure is developed to construct compact F-theory fourfolds corresponding to perturbative IIB O7/O3 models on CICY threefolds with permutation involutions. The method is explained in generality, and then applied to specific examples where the involution permutes two Del Pezzo surfaces. The fourfold construction is successfully tested by comparing the D3 charges predicted by F-theory and IIB string theory. The constructed fourfolds are then taken to the locus in moduli space where they have enhanced SU(5) singularities. A general, intuitive method is developed for engineering the desired singularities in Weierstrass models for complicated D7-brane setups. © SISSA 2010.


Buric M.,University of Belgrade | Wohlgenannt M.,Vienna University of Technology
Journal of High Energy Physics | Year: 2010

We analyze properties of a family of finite-matrix spaces obtained by a truncation of the Heisenberg algebra and we show that it has a three-dimensional, noncommutative and curved geometry. Further, we demonstrate that the Heisenberg algebra can be described as a two-dimensional hyperplane embedded in this space. As a consequence of the given construction we show that the Grosse-Wulkenhaar (renormalizable) action can be interpreted as the action for the scalar field on a curved background space. We discuss the generalization to four dimensions. © 2010 SISSA.


Pitschmann M.,Vienna University of Technology | Seng C.-Y.,University of Massachusetts Amherst | Roberts C.D.,Argonne National Laboratory | Schmidt S.M.,Julich Research Center
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2015

A symmetry-preserving Dyson-Schwinger equation treatment of a vector-vector contact interaction is used to compute dressed-quark-core contributions to the nucleon σ-term and tensor charges. The latter enable one to directly determine the effect of dressed-quark electric dipole moments (EDMs) on neutron and proton EDMs. The presence of strong scalar and axial-vector diquark correlations within ground-state baryons is a prediction of this approach. These correlations are active participants in all scattering events and thereby modify the contribution of the singly represented valence quark relative to that of the doubly represented quark. Regarding the proton σ-term and that part of the proton mass which owes to explicit chiral symmetry breaking, with a realistic d-u mass splitting, the singly represented d quark contributes 37% more than the doubly represented u quark; and in connection with the proton's tensor charges, δTu, δTd, the ratio δTd/δTu is 18% larger than anticipated from simple quark models. Of particular note, the size of δTu is a sensitive measure of the strength of dynamical chiral symmetry breaking; and δTd measures the amount of axial-vector diquark correlation within the proton, vanishing if such correlations are absent. © 2015 American Physical Society.


Guillaud M.,Vienna University of Technology
2010 48th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2010 | Year: 2010

We consider interference alignment (IA) over the K-user Gaussian MIMO interference channel (MIMO-IC) when the SNR is not asymptotically high. We introduce a generalization of IA which enables receive diversity inside the interference-free subspace. We generalize the existence criterion of an IA solution proposed by Yetis et al. to this case, thereby establishing a multi-user diversity-multiplexing trade-off (DMT) for the interference channel. Furthermore, we derive a closed-form tight lower-bound for the ergodic mutual information achievable using IA over a Gaussian MIMO-IC with Gaussian i.i.d. channel coefficients at arbitrary SNR, when the transmitted signals are white inside the subspace defined by IA. Finally, as an application of the previous results, we compare the performance achievable by IA at various operating points allowed by the DMT, to a recently introduced distributed method based on game theory. © 2010 IEEE.
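For reference, the standard IA feasibility conditions that such generalizations start from (notation assumed): with channel matrices H_kj, precoders V_j, and receive filters U_k, interference is forced out of each receiver's desired subspace of dimension d_k:

```latex
% First condition: all cross-user interference lands outside U_k's subspace.
% Second condition: the desired signal still spans d_k usable dimensions.
\[
  \mathbf{U}_k^{\mathsf H} \mathbf{H}_{kj} \mathbf{V}_j = \mathbf{0}
  \;\; (j \neq k),
  \qquad
  \operatorname{rank}\!\left( \mathbf{U}_k^{\mathsf H} \mathbf{H}_{kk} \mathbf{V}_k \right) = d_k .
\]
```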


Mathieson L.,University of Newcastle | Szeider S.,Vienna University of Technology
Journal of Computer and System Sciences | Year: 2012

We study a wide class of graph editing problems that ask whether a given graph can be modified to satisfy certain degree constraints, using a limited number of vertex deletions, edge deletions, or edge additions. The problems generalize several well-studied problems such as the General Factor Problem and the Regular Subgraph Problem. We classify the parameterized complexity of the considered problems taking upper bounds on the number of editing steps and the maximum degree of the resulting graph as parameters. © 2011 Elsevier Inc. All rights reserved.


Doblinger G.,Vienna University of Technology
IEEE Transactions on Signal Processing | Year: 2012

In this correspondence, we present a new and fast design algorithm for perfect-reconstruction (PR), maximally decimated, uniform, cosine-modulated filter banks. Perfect reconstruction is obtained within arithmetic machine precision. The new design does not need numerical optimization routines and is significantly faster than a competing method based on second-order cone programming (SOCP). The proposed design algorithm finds the optimum solution by iteratively solving a quadratic programming problem with linear equality constraints. By a special modification of the basic algorithm, we obtain PR filter banks with high stopband attenuations. In addition, fast convergence is verified by designing PR filter banks with up to 128 channels. © 2012 IEEE.
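For orientation, the sketch below generates the cosine-modulated filter family this design operates on, using one common modulation convention; it is not the paper's design algorithm, and the prototype shown is a crude windowed sinc rather than an optimized PR prototype:

```python
# Minimal sketch (assumed convention, not from the paper): derive M analysis
# filters by cosine-modulating a single lowpass prototype.
import numpy as np

def cmfb_analysis(prototype: np.ndarray, M: int) -> np.ndarray:
    """Return an (M, N) array of analysis filters h_k built from prototype p."""
    N = len(prototype)
    n = np.arange(N)
    k = np.arange(M)[:, None]
    phase = (np.pi / M) * (k + 0.5) * (n - (N - 1) / 2) + (-1.0) ** k * np.pi / 4
    return 2.0 * prototype * np.cos(phase)

# Example: a crude windowed-sinc prototype for an 8-channel bank.
M, L = 8, 96
n = np.arange(L)
p = np.sinc((n - (L - 1) / 2) / (2 * M)) * np.hamming(L) / (2 * M)
H = cmfb_analysis(p, M)
print(H.shape)  # (8, 96)
```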


Rupp M.,Vienna University of Technology
IEEE Transactions on Signal Processing | Year: 2012

Although equalizers promise to improve the signal-to-noise energy ratio, zero-forcing equalizers are derived classically in a deterministic setting minimizing intersymbol interference, while minimum mean square error (MMSE) equalizer solutions are derived in a stochastic context based on quadratic Wiener cost functions. In this paper, we show that it is possible, and in our opinion even simpler, to derive the classical results in a purely deterministic setup, interpreting both equalizer types as least squares solutions. This, in turn, allows the introduction of a simple linear reference model for equalizers, which supports the exact derivation of a family of iterative and recursive algorithms with robust behavior. The framework applies equally to multiuser transmissions and multiple-input multiple-output (MIMO) channels. A major contribution is that, due to the reference approach, the adaptive equalizer problem can equivalently be treated as an adaptive system identification problem, for which very precise statements are possible with respect to convergence, robustness and ℓ2-stability. Robust adaptive equalizers are much more desirable as they guarantee a much stronger form of stability than conventional convergence in the mean-square sense. Even some blind channel estimation schemes can now be included in the form of recursive algorithms and treated under this general framework. © 2011 IEEE.
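A compact statement of this least-squares view, in notation assumed for this listing: with channel convolution matrix H, a delayed unit pulse e_d as the target response, and noise variance σ², both classical equalizers arise as (regularized) least-squares solutions:

```latex
% ZF: plain least squares against the ideal delayed pulse.
% MMSE: the same problem with a ridge-type regularization by sigma^2.
\[
  f_{\mathrm{ZF}} = \arg\min_f \lVert H f - e_d \rVert_2^2
                  = (H^{\mathsf H} H)^{-1} H^{\mathsf H} e_d,
  \qquad
  f_{\mathrm{MMSE}} = (H^{\mathsf H} H + \sigma^2 I)^{-1} H^{\mathsf H} e_d .
\]
```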


Aleksic S.,Vienna University of Technology
2010 IEEE Photonics Society Winter Topicals Meeting Series, WTM 2010 | Year: 2010

Four realization options of large switching fabrics are evaluated with respect to total electrical power consumption. The considered options include circuit and packet switches realized using either electronic (CMOS) or optical (SOA, MEMS) technologies. © 2010 IEEE.


Ederer N.,Vienna University of Technology
Renewable Energy | Year: 2016

With a total installed capacity of 5.1 GW and an expansion pipeline of 11.9 GW, offshore wind constitutes a success story in the UK. The necessary foundation for this outstanding attainment is an energy policy that offered entities enough incentive, in the form of profit and certainty, that investing in a rather immature technology became attractive. In this article, the profitability of 14 early-stage offshore wind farms (1.7 GW) is assessed with the objective of reviewing at what price this rapid expansion occurred. Within the framework of a developed standardised financial model, the data from the offshore wind farms' original annual reports were extrapolated, which made it possible to simulate their profitability individually. The results reveal a return on capital in the range of more than 15% and a decreasing trend. This implies that the levelised cost of electricity from the first offshore wind farms was underestimated in the past. In addition, a stress test revealed that the operation of some farms might become unprofitable towards the end of their planned lifetimes. The particularly reliable data basis and novel modelling approach presented in this article make this study of high interest for offshore wind stakeholders. © 2016 Elsevier Ltd.
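For readers unfamiliar with the levelised-cost concept referenced above, the generic textbook definition (not the paper's proprietary financial model) is discounted lifetime cost per discounted lifetime electricity output:

```latex
% C_t: costs in year t; E_t: electricity generated in year t;
% r: discount rate; T: project lifetime.
\[
  \mathrm{LCOE} \;=\; \frac{\sum_{t=0}^{T} C_t \,(1+r)^{-t}}
                           {\sum_{t=1}^{T} E_t \,(1+r)^{-t}} .
\]
```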


Uzunova E.L.,Bulgarian Academy of Science | Mikosch H.,Vienna University of Technology
Microporous and Mesoporous Materials | Year: 2013

The structure of zeolite clinoptilolite and the coordination of extraframework cations (Ca2+, Ba2+, Na+ and K+) by framework oxygen atoms are examined by density functional theory (DFT) using two methods: the ONIOM two-layer model (DFT/MM) and periodic DFT. Both ONIOM and periodic model calculations predict correctly the clinoptilolite structure and find configurations with Al → Si substitution at the T2 site which interconnects the xz layers via its apical oxygen atom as the most stable ones, in agreement with experimental data. The configurations with Al → Si substitution at the T1 sites are favorable for migration of cations into the large channel A from the eight-member rings which are side rings of channel A and opened in channel C (c-rings); the process is energetically most favorable for the Na+ cations. K+ and Ba2+ cations occupy sites in the vicinity of the c-rings: M1, M1a and M3, among which K+ prefers the M3 site inside the c-rings. The molecular electrostatic potential (MEP) maps reveal areas of increased nucleophilic properties inside the large channel A, to which cations can migrate upon heating, or incoming cations can be retained. The minimal interatomic distances between two equivalent cations residing in channel A are 4.850 Å (Ca2+-Ca2+); 5.025 Å (Na+-Na+) and 4.305 Å (K+-K+). The ONIOM method should be preferred over periodic models for describing cations in the c-rings: the smaller c-parameter of the unit cell replicates the cations in adjacent rings along the [001] direction. © 2013 Elsevier Inc. All rights reserved.


Wubben D.,ITG | Seethaler D.,Vienna University of Technology | Jalden J.,KTH Royal Institute of Technology | Matz G.,European Union
IEEE Signal Processing Magazine | Year: 2011

Lattice reduction is a powerful concept for solving diverse problems involving point lattices. Signal processing applications where lattice reduction has been successfully used include global positioning system (GPS), frequency estimation, color space estimation in JPEG pictures, and particularly data detection and precoding in wireless communication systems. In this article, we first provide some background on point lattices and then give a tutorial-style introduction to the theoretical and practical aspects of lattice reduction. We describe the most important lattice reduction algorithms and comment on their performance and computational complexity. © 2006 IEEE.
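As an anchor for the algorithms such a tutorial covers, the best-known notion of reduction is the LLL-reduced basis, defined by two conditions on the Gram-Schmidt coefficients μ_{k,j} and vectors b*_k (standard definition, stated here for convenience):

```latex
% Size-reduction condition plus the Lovász condition with parameter
% delta in (1/4, 1]; delta = 3/4 is the classical choice.
\[
  |\mu_{k,j}| \le \tfrac{1}{2} \;\; (1 \le j < k),
  \qquad
  \delta\,\lVert \mathbf{b}^{*}_{k-1} \rVert^2
  \;\le\;
  \lVert \mathbf{b}^{*}_{k} \rVert^2 + \mu_{k,k-1}^2\, \lVert \mathbf{b}^{*}_{k-1} \rVert^2 .
\]
```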


Laskowski R.,Institute of High Performance Computing | Blaha P.,Vienna University of Technology
Journal of Physical Chemistry C | Year: 2015

Density functional theory (DFT) calculations of the magnetic shielding in solid state nuclear magnetic resonance (NMR) experiments provide an important contribution for the understanding of the experimentally observed chemical shifts. In this work we focus on the relation between atomic and orbital character of the valence and conduction band wave functions and the 33S NMR shielding in sulfides and sulfates. This allows us to understand the origin of the observed large (over 1000 ppm) variation of the chemical shifts measured at the sulfur nucleus. We show that the variation of the NMR chemical shifts in sulfides is mostly related to the presence of metal d states and their variation in the energy position within the conduction bands. © 2014 American Chemical Society.


Matthes D.,Vienna University of Technology | Toscani G.,University of Pavia
Nonlinearity | Year: 2010

A class of Kac-like kinetic equations on the real line is considered, with general smoothing transforms as collisional kernels. These equations have been introduced recently, e.g., in the context of econophysics (Cordier et al 2005 J. Stat. Phys. 120 253-77) or as models for granular gases with a background heat bath (Carrillo et al 2009 Discrete Contin. Dyn. Syst. 24 59-81). We show that the stationary solutions to these equations are not smooth in general, and we characterize their (finite) Sobolev regularity in dependence of the properties of the collisional kernel. Moreover, we prove that any initial Sobolev regularity below a well-defined threshold is uniformly propagated in time by the transient weak solutions, implying their strong convergence to the steady state. The applied techniques differ from the classical ones developed for the Kac equation as the models at hand neither dissipate the entropy nor the Fisher information. Instead, the proof relies on direct estimates on the collisional operator. © 2010 IOP Publishing Ltd & London Mathematical Society.


Grumiller D.,Vienna University of Technology | Hohm O.,Massachusetts Institute of Technology
Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics | Year: 2010

We calculate 2-point correlators for New Massive Gravity at the chiral point and find that they behave precisely as those of a logarithmic conformal field theory, which is characterized, in addition to the central charges cL = cR = 0, by 'new anomalies' bL = bR = -12σℓ/GN, where σ is the sign of the Einstein-Hilbert term, ℓ the AdS radius and GN Newton's constant. © 2010 Elsevier B.V. All rights reserved.


Rupp M.,Vienna University of Technology
IEEE Transactions on Signal Processing | Year: 2011

In this paper, we provide a thorough stability analysis of two well-known adaptive algorithms for equalization based on a novel least squares reference model that allows us to treat the equalizer problem equivalently as a system identification problem. While, not surprisingly, the adaptive minimum mean-square error (MMSE) equalizer algorithm is ℓ2-stable for a wide range of step-sizes, the even older zero-forcing (ZF) algorithm behaves very differently. We prove that the ZF algorithm generally does not belong to the class of robust algorithms but can be convergent in the mean-square sense. We furthermore provide conditions on the upper step-size bound that guarantee such mean-square convergence. We specifically show how the variance of added channel noise and the channel impulse response influence this bound. Simulation examples validate our findings. © 2011 IEEE.


Hsu H.,University of Minnesota | Blaha P.,Vienna University of Technology | Wentzcovitch R.M.,University of Minnesota
Physical Review B - Condensed Matter and Materials Physics | Year: 2012

With local density approximation+Hubbard U (LDA+U) calculations, we show that the ferromagnetic (FM) insulating state observed in tensile-strained LaCoO3 epitaxial thin films is most likely a mixture of low-spin (LS) and high-spin (HS) Co, namely, a HS/LS mixture state. Compared with other FM states, including the intermediate-spin (IS) state (metallic within LDA+U), which consists of IS Co only, and the insulating IS/LS mixture state, the HS/LS state is the most favorable one. The FM order in the HS/LS state is stabilized via the superexchange interactions between adjacent LS and HS Co. We also show that the Co spin state can be identified by measuring the electric field gradient at the Co nucleus via nuclear magnetic resonance spectroscopy. © 2012 American Physical Society.


Rupp M.,Vienna University of Technology
IEEE Transactions on Signal Processing | Year: 2011

The so-called Affine Projection (AP) algorithm is of large interest in many adaptive filter applications due to its considerable speed-up in convergence compared to its simpler version, the LMS algorithm. While the original AP algorithm is well understood, gradient-type variants of lower complexity with relaxed step-size conditions, called pseudo affine projection, still pose unresolved problems. This contribution shows i) local robustness properties of such algorithms, ii) global properties, concluding with ℓ2-stability conditions that are independent of the input signal statistics, as well as iii) steady-state values of moderate to high accuracy given by relatively simple terms when applied to long filters. Of particular interest is the existence of a lower step-size bound for one of the variants, a bound that has not been observed before. © 2011 IEEE.
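For reference, the classical AP update that the pseudo-AP variants relax, in one common real-valued convention (notation assumed): X_k stacks the P most recent input vectors, e_k is the a priori error vector, μ the step-size, and ε > 0 a small regularization constant:

```latex
% For P = 1 this reduces to (regularized) NLMS; larger P speeds up
% convergence at higher per-iteration cost.
\[
  w_{k+1} \;=\; w_k \;+\; \mu\, X_k \left( X_k^{\mathsf T} X_k + \epsilon I \right)^{-1} e_k .
\]
```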


Svozil K.,Vienna University of Technology
Natural Computing | Year: 2012

The amount of contextuality is quantified in terms of the probability of the necessary violations of noncontextual assignments to counterfactual elements of physical reality. © Springer Science+Business Media B.V. 2012.


Taubock G.,Vienna University of Technology
IEEE Transactions on Information Theory | Year: 2012

Recent research has demonstrated significant achievable performance gains by exploiting circularity/noncircularity or properness/improperness of complex-valued signals. In this paper, we investigate the influence of these properties on important information theoretic quantities such as entropy, divergence, and capacity. We prove two maximum entropy theorems that strengthen previously known results. The proof of the first maximum entropy theorem is based on the so-called circular analog of a given complex-valued random vector. The introduction of the circular analog is additionally supported by a characterization theorem that employs a minimum Kullback-Leibler divergence criterion. In the proof of the second maximum entropy theorem, results about the second-order structure of complex-valued random vectors are exploited. Furthermore, we address the capacity of multiple-input multiple-output (MIMO) channels. Regardless of the specific distribution of the channel parameters (noise vector and channel matrix, if modeled as random), we show that the capacity-achieving input vector is circular for a broad range of MIMO channels (including coherent and noncoherent scenarios). Finally, we investigate the situation of an improper and Gaussian distributed noise vector. We compute both capacity and capacity-achieving input vector and show that improperness increases capacity, provided that the complementary covariance matrix is exploited. Otherwise, a capacity loss occurs, for which we derive an explicit expression. © 2011 IEEE.
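The classical bound that such maximum entropy theorems strengthen (standard result, stated here for context): among complex random vectors with covariance matrix C, differential entropy is maximized by the circularly symmetric (proper) Gaussian:

```latex
% Equality holds iff x is zero-mean circularly symmetric complex Gaussian
% with covariance C, i.e., with vanishing complementary covariance.
\[
  h(\mathbf{x}) \;\le\; \log\det\left( \pi e\, C \right).
\]
```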


Joksimovic G.M.,University of Montenegro | Riger J.,University of Montenegro | Wolbank T.M.,Vienna University of Technology | Peric N.,University of Zagreb | Vasak M.,University of Zagreb
IEEE Transactions on Industrial Electronics | Year: 2013

Before applying current-signature-analysis-based monitoring methods, it is necessary to thoroughly analyze the existence of the various harmonics on healthy machines. As such an analysis is only done in very few papers, the objective of this paper is to make a clear and rigorous characterization and classification of the harmonics present in a healthy cage rotor induction motor spectrum as a starting point for diagnosis. Magnetomotive force space harmonics, slot permeance harmonics, and saturation of main magnetic flux path through the virtual air-gap permeance variation are taken into analytical consideration. General rules are introduced giving a connection between the number of stator slots, rotor bars, and pole pairs and the existence of rotor slot harmonics as well as saturation-related harmonics in the current spectrum. For certain combinations of stator and rotor slots, saturation-related harmonics are shown to be most prominent in motors with a pole pair number of two or more. A comparison of predicted and measured current harmonics is given for several motors with different numbers of pole pairs, stator slots, and rotor bars. © 1982-2012 IEEE.
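For orientation, the textbook expression for the rotor-slot-harmonic frequencies that such an analysis classifies (standard form from the motor-current-signature-analysis literature, not quoted from the paper):

```latex
% f_1: supply frequency; R: number of rotor bars; p: pole pairs;
% s: slip; k = 1, 2, ...; nu: order of the interacting MMF harmonic.
\[
  f_{\mathrm{sh}} \;=\; f_1 \left[ \frac{kR}{p}\,(1-s) \pm \nu \right].
\]
```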


Giouroudi I.,Vienna University of Technology
Technical Proceedings of the 2013 NSTI Nanotechnology Conference and Expo, NSTI-Nanotech 2013 | Year: 2013

This paper presents a method for in vitro detection of a bioanalyte using conductive microstructures to move magnetic nanoparticles (MAPs) in an integrated microfluidic system. The fundamental idea behind this biosensing system is that the induced velocity of suspended MAPs exposed to a magnetic field gradient is inversely proportional to their volume [1-2]. Therefore, the volumetric increase of MAPs due to binding of bioanalyte onto their surface consequently changes their velocity. The resulting compounds, called loaded MAPs (LMAPs), which consist of the MAPs and the attached bioanalyte, need more time to travel the same distance compared to bare (smaller) MAPs. Thus, when a liquid sample is analyzed and a change in the velocity of the MAPs occurs, the presence of the bioanalyte in the liquid under examination is demonstrated.
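A generic force balance makes the size dependence plausible (symbols assumed, not from the paper): the magnetic force on a particle of magnetic volume V and susceptibility contrast Δχ balances Stokes drag on its hydrodynamic radius r_h in a fluid of viscosity η:

```latex
% A bioanalyte shell increases r_h without adding magnetic volume,
% so the terminal velocity v drops; that drop is the detected signal.
\[
  F_m \;=\; \frac{V \,\Delta\chi}{\mu_0}\,(\mathbf{B}\cdot\nabla)\mathbf{B},
  \qquad
  v \;=\; \frac{F_m}{6 \pi \eta \, r_h}.
\]
```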


Krebs N.,Ludwig Maximilians University of Munich | Pugliesi I.,Ludwig Maximilians University of Munich | Hauer J.,Vienna University of Technology | Riedle E.,Ludwig Maximilians University of Munich
New Journal of Physics | Year: 2013

Experimental realizations of two-dimensional (2D) electronic spectroscopy in the ultraviolet (UV) must so far contend with a limited bandwidth in both the excitation and particularly the probe frequency. The pump bandwidth is at best 1500 cm-1 (full width at half maximum) at a fixed wavelength of 267 nm or 400 cm-1 for tunable pulses. The use of a replica of the pump pulse as a probe limits the observation of photochemical processes to the excitation region and makes the disentanglement of overlapping signal contributions difficult. We show that 2D Fourier transform spectroscopy can be conducted in a shaper-assisted collinear setup comprising fully tunable UV pulse pairs and a supercontinuum probe spanning 250-720 nm. The pump pulses are broadened up to a useable spectral coverage of 2000 cm-1 (25 nm at 316 nm) by self-phase modulation in bulk CaF2 and compressed to 18 fs. By referencing the white light probe and eliminating pump stray light contributions, high signal-to-noise ratios even for weak probe intensities are achieved. Data acquisition times as short as 4 min for a selected population time allow the rapid recording of 2D spectra for photolabile biological samples even with the employed 1 kHz laser system. The potential of the setup is demonstrated on two representative molecules: pyrene and 2,2-diphenyl-5,6-benzo(2H)chromene. Well-resolved cross-peaks are observed and the excitation energy dependence of the relaxation processes is revealed. © IOP Publishing and Deutsche Physikalische Gesellschaft.


Balzer J.,University of California at Los Angeles | Morwald T.,Vienna University of Technology
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition | Year: 2012

Inverse problems are abundant in vision. A common way to deal with their inherent ill-posedness is reformulating them within the framework of the calculus of variations. This always leads to partial differential equations as conditions of (local) optimality. In this paper, we propose solving such equations numerically by isogeometric analysis, a special kind of finite-element method. We will expose its main advantages including superior computational performance, a natural ability to facilitate multi-scale reconstruction, and a high degree of compatibility with the spline geometries encountered in modern computer-aided design systems. To animate these fairly general arguments, their impact on the well-known depth-from-gradients problem is discussed, which amounts to solving a Poisson equation on the image plane. Experiments suggest that, by the isogeometry principle, reconstructions of unprecedented quality can be obtained without any prefiltering of the data. © 2012 IEEE.
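The depth-from-gradients problem mentioned above, in its usual variational form (standard notation): recover depth z from measured gradient fields (p, q) by least squares, whose optimality condition is exactly the Poisson equation on the image plane:

```latex
% Euler-Lagrange condition of the quadratic functional yields the PDE.
\[
  \min_{z} \iint \left( z_x - p \right)^2 + \left( z_y - q \right)^2 \, dx\,dy
  \quad\Longrightarrow\quad
  \Delta z \;=\; p_x + q_y .
\]
```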


Graichen K.,University of Ulm | Kugi A.,Vienna University of Technology
IEEE Transactions on Automatic Control | Year: 2010

The stability of suboptimal model predictive control (MPC) without terminal constraints is investigated for continuous-time nonlinear systems under input constraints. Exponential stability and decay of the optimization error are guaranteed if the number of optimization steps in each sampling instant satisfies a lower bound that depends on the convergence ratio of the underlying optimization algorithm. The decay of the optimization error shows the incremental improvement of the suboptimal MPC scheme. © 2006 IEEE.


Einhorn M.,AIT Austrian Institute of Technology | Conte F.V.,AIT Austrian Institute of Technology | Kral C.,AIT Austrian Institute of Technology | Fleig J.,Vienna University of Technology
IEEE Transactions on Industry Applications | Year: 2012

In this paper, a method to estimate the capacity of individual lithium ion battery cells during operation is presented. Given two different states of charge of a battery cell as well as the transferred charge between these two states, the capacity of the battery cell can be estimated. The method is described in detail and validated on a battery cell with a current pulse test cycle. It is then applied to a real-life cycle; the accuracy is analyzed and discussed. © 2011 IEEE.
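The estimation rule follows directly from the abstract; the minimal sketch below implements it with invented function and variable names (not the paper's code): integrate current between two SoC readings and divide the transferred charge by the SoC difference:

```python
# Minimal sketch (assumed names): capacity from charge transferred between
# two states of charge, with discharge current taken as positive.
def estimate_capacity_ah(currents_a, dt_s, soc_start, soc_end):
    """Capacity in Ah from the charge moved between two states of charge."""
    charge_ah = sum(currents_a) * dt_s / 3600.0   # rectangle-rule integration
    return charge_ah / (soc_start - soc_end)

# Example: discharging at 10 A for 0.5 h takes the cell from 80% to 60% SoC,
# so the estimated capacity is 5 Ah / 0.2 = 25 Ah.
samples = [10.0] * 1800                            # 10 A, sampled once per second
print(estimate_capacity_ah(samples, 1.0, 0.80, 0.60))  # -> 25.0
```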


Blaizot J.-P.,CEA Saclay Nuclear Research Center | Ipp A.,Vienna University of Technology | Wschebor N.,University of the Republic of Uruguay
Nuclear Physics A | Year: 2011

We apply a method that has been recently developed to solve the Non-Perturbative Renormalization Group to the calculation of the pressure of a hot scalar field theory. This method yields an accurate determination of the momentum dependence of n-point functions over the entire momentum range, from the low momentum, possibly critical, region up to the perturbative, high momentum region. It therefore has the potential to account well for the contributions of modes of all wavelengths to the thermodynamical functions, as well as for the effects of the mixing of quasiparticles with multi-particle states. We compare the thermodynamical functions obtained with this method to those of the so-called Local Potential Approximation, and we find extremely small corrections. This result points to the robustness of the quasiparticle picture in this system. It also demonstrates the stability of the overall approximation scheme, and this up to the largest values of the coupling constant that can be used in a scalar theory in 3+1 dimensions. This is in sharp contrast to perturbation theory, which shows no sign of convergence up to the highest orders that have been recently calculated. © 2010 Elsevier B.V.


El-Sayed A.-M.,University College London | Watkins M.B.,University College London | Grasser T.,Vienna University of Technology | Afanas'Ev V.V.,Catholic University of Leuven | Shluger A.L.,University College London
Physical Review Letters | Year: 2015

Using ab initio modeling we demonstrate that H atoms can break strained Si-O bonds in continuous amorphous silicon dioxide (a-SiO2) networks, resulting in a new defect consisting of a threefold-coordinated Si atom with an unpaired electron facing a hydroxyl group, adding to the density of dangling bond defects, such as E′ centers. The energy barriers to form this defect from interstitial H atoms range between 0.5 and 1.3 eV. This discovery of unexpected reactivity of atomic hydrogen may have significant implications for our understanding of processes in silica glass and nanoscaled silica, e.g., in porous low-permittivity insulators, and strained variants of a-SiO2. © 2015 American Physical Society.


Schall D.,Vienna University of Technology
International Journal of Communication Networks and Distributed Systems | Year: 2013

Over the past years, the web has transformed from a pool of statically linked information to a people-centric web. Various web-based tools and social services have become available enabling people to communicate, coordinate, and collaborate in a distributed manner. In this work, we consider social crowdsourcing environments that are based on the capabilities of human-provided services (HPS) and software-based services (SBS). Unlike traditional crowdsourcing platforms, we consider collaborative environments where people and software services jointly perform tasks. We propose formation and interaction patterns that are based on social principles. The idea of our social crowdsourcing approach is that interactions emerge dynamically at runtime based on social preferences among actors. The evolution of interactions is guided by a monitoring and control feedback cycle to recommend competitive compositions and to adjust interaction behaviour. Here we present fundamental formation patterns including link mesh and broker-based formations. We discuss the prototype implementation of our service-oriented crowdsourcing framework. Copyright © 2013 Inderscience Enterprises Ltd.


Ilo A.,Vienna University of Technology
Electric Power Systems Research | Year: 2016

This paper presents for the first time the Smart Grid paradigm of the Link. Having a standardized structure, the Link can be applied to any partition of the power system: an electricity production entity, a storage entity, the grid, or even the customer plant. Three architecture components are extracted from this paradigm: the "Grid-Link", the "Producer-Link", and the "Storage-Link". The distributed Link-based architecture is designed. It allows a flat business model across the electrical industry and minimizes the amount of data that needs to be exchanged. It also takes into account the electricity market rules and the rigorous cyber security and privacy requirements. The interfaces between all three architecture components are defined. Power system operation processes such as load-generation balance, dynamic security, and demand response are outlined to demonstrate the architecture's applicability. To complete the big picture, the operator role, the corresponding information and communication architecture, and the market accommodation are also described. © 2015 Elsevier B.V. All rights reserved.


Fink M.,Vienna University of Technology
Theory and Practice of Logic Programming | Year: 2011

Different notions of equivalence, such as the prominent notions of strong and uniform equivalence, have been studied in Answer-Set Programming, mainly for the purpose of identifying programs that can serve as substitutes without altering the semantics, for instance in program optimization. Such semantic comparisons are usually characterized by various selections of models in the logic of Here-and-There (HT). For uniform equivalence however, correct characterizations in terms of HT-models can only be obtained for finite theories and programs, respectively. In this paper, we show that a selection of countermodels in HT captures uniform equivalence also for infinite theories. This result is turned into coherent characterizations of the different notions of equivalence by countermodels, as well as by a mixture of HT-models and countermodels (so-called equivalence interpretations). Moreover, we generalize the so-called notion of relativized hyperequivalence for programs to propositional theories, and apply the same methodology in order to obtain a semantic characterization which is amenable to infinite settings. This allows for a lifting of the results to first-order theories under a very general semantics given in terms of a quantified version of HT. We thus obtain a general framework for the study of various notions of equivalence for theories under answer-set semantics. Moreover, we prove an expedient property that allows for a simplified treatment of extended signatures, and provide further results for non-ground logic programs. In particular, uniform equivalence coincides under open and ordinary answer-set semantics, and for finite non-ground programs under these semantics, also the usual characterization of uniform equivalence in terms of maximal and total HT-models of the grounding is correct, even for infinite domains, when corresponding ground programs are infinite. © 2011 Cambridge University Press.


Antic C.,Vienna University of Technology
Theory and Practice of Logic Programming | Year: 2014

Describing complex objects by elementary ones is a common strategy in mathematics and science in general. In their seminal 1965 paper, Kenneth Krohn and John Rhodes showed that every finite deterministic automaton can be represented (or emulated) by a cascade product of very simple automata. This led to an elegant algebraic theory of automata based on finite semigroups (Krohn-Rhodes Theory). Surprisingly, by relating logic programs and automata, we can show in this paper that the Krohn-Rhodes Theory is applicable in Answer Set Programming (ASP). More precisely, we recast the concept of a cascade product to ASP, and prove that every program can be represented by a product of very simple programs, the reset and standard programs. Roughly, this implies that the reset and standard programs are the basic building blocks of ASP with respect to the cascade product. In a broader sense, this paper is a first step towards an algebraic theory of products and networks of nonmonotonic reasoning systems based on Krohn-Rhodes Theory, aiming at important open issues in ASP and AI in general. © 2014 Cambridge University Press.


Hsu H.,University of Minnesota | Umemoto K.,University of Minnesota | Blaha P.,Vienna University of Technology | Wentzcovitch R.M.,University of Minnesota
Earth and Planetary Science Letters | Year: 2010

With the guidance of first-principles phonon calculations, we have searched and found several metastable equilibrium sites for substitutional ferrous iron in MgSiO3 perovskite. In the relevant energy range, there are two distinct sites for high-spin, one for low-spin, and one for intermediate-spin iron. Because of variable d-orbital occupancy across these sites, the two competing high-spin sites have different iron quadrupole splittings (QS). At low pressure, the high-spin iron with QS of 2.3-2.5 mm/s is more stable, while the high-spin iron with QS of 3.3-3.6 mm/s is more favorable at higher pressure. The crossover occurs between 4 and 24 GPa, depending on the choice of exchange-correlation functional and the inclusion of on-site Coulomb interaction (Hubbard U). Our calculation supports the notion that the transition observed in recent Mössbauer spectra corresponds to an atomic-site change rather than a spin-state crossover. Our result also helps to explain the lack of anomaly in the compression curve of iron-bearing silicate perovskite in the presence of a large change of quadrupole splitting, and provides important guidance for future studies of thermodynamic properties of this phase. © 2010 Elsevier B.V.


Blum C.,Polytechnic University of Catalonia | Puchinger J.,AIT Austrian Institute of Technology | Raidl G.R.,Vienna University of Technology | Roli A.,University of Bologna
Applied Soft Computing Journal | Year: 2011

Research in metaheuristics for combinatorial optimization problems has lately experienced a noteworthy shift towards the hybridization of metaheuristics with other techniques for optimization. At the same time, the focus of research has changed from being rather algorithm-oriented to being more problem-oriented. Nowadays the focus is on solving the problem at hand in the best way possible, rather than promoting a certain metaheuristic. This has led to an enormously fruitful cross-fertilization of different areas of optimization. This cross-fertilization is documented by a multitude of powerful hybrid algorithms that were obtained by combining components from several different optimization techniques. Here, hybridization is not restricted to the combination of different metaheuristics but includes, for example, the combination of exact algorithms and metaheuristics. In this work we provide a survey of some of the most important lines of hybridization. The literature review is accompanied by the presentation of illustrative examples. © 2010 Elsevier B.V. All rights reserved. Source


Unterlass M.M.,Vienna University of Technology
European Journal of Inorganic Chemistry | Year: 2016

The term "inorganic-organic hybrid materials" designates inorganic building blocks in the colloidal domain (1-1000 nm) embedded in an organic, typically polymeric, matrix. Owing to their outstanding properties, hybrid materials have the potential to improve human life significantly. In the last two decades, the importance of reorienting chemical syntheses in the direction of more sustainable, less harmful and energy-consuming procedures, referred to as green chemistry, has been much emphasized and worked on. This review deals with the application of green chemistry to the synthesis of inorganic-organic hybrid materials. The origin and preparation both of the inorganic components and of the organic polymer matrix are critically analyzed for various examples. The development of more sustainable syntheses for hybrid materials still poses an open challenge. Potential options to tackle this task are discussed. Inorganic-organic hybrids combine, for example, the mechanical and temperature stability of inorganic colloids with the light weight and processability of organic polymers. For their green synthesis, one has to combine benign approaches towards both constituents. This microreview summarizes green syntheses of both components and reviews the implementation of green chemistry for hybrid materials. Copyright © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. Source


Carrington M.E.,Brandon University | Carrington M.E.,Winnipeg Institute for Theoretical Physics | Rebhan A.,Vienna University of Technology
European Physical Journal C | Year: 2011

In numerical simulations of nonabelian plasma instabilities in the hard-loop approximation, a turbulent spectrum has been observed that is characterized by a phase-space density of particles n(p) ∼ p^(−ν) with exponent ν ≈ 2, which is larger than expected from relativistic 2 ↔ 2 scatterings. Using the approach of Zakharov, L'vov and Falkovich, we analyze possible Kolmogorov coefficients for relativistic (m ≥ 4)-particle processes, which give at most ν = 5/3 perturbatively for an energy cascade. We discuss non-perturbative scenarios which lead to larger values. As an extreme limit we find the result ν = 5 generically in an inherently non-perturbative effective field theory situation, which coincides with results obtained by Berges et al. in large-N scalar field theory. If we instead assume that scaling behavior is determined by Schwinger-Dyson resummations such that the different scaling of bare and dressed vertices matters, we find that intermediate values are possible. We present one simple scenario, which would single out ν = 2. © Springer-Verlag / Società Italiana di Fisica 2011. Source


Bennett S.D.,Harvard University | Yao N.Y.,Harvard University | Otterbach J.,Harvard University | Zoller P.,Austrian Academy of Sciences | And 3 more authors.
Physical Review Letters | Year: 2013

We propose and analyze a novel mechanism for long-range spin-spin interactions in diamond nanostructures. The interactions between electronic spins, associated with nitrogen-vacancy centers in diamond, are mediated by their coupling via strain to the vibrational mode of a diamond mechanical nanoresonator. This coupling results in phonon-mediated effective spin-spin interactions that can be used to generate squeezed states of a spin ensemble. We show that spin dephasing and relaxation can be largely suppressed, allowing for substantial spin squeezing under realistic experimental conditions. Our approach has implications for spin-ensemble magnetometry, as well as phonon-mediated quantum information processing with spin qubits. © 2013 American Physical Society. Source


Liu L.-M.,Princeton University | Li S.-C.,Tulane University | Cheng H.,Princeton University | Diebold U.,Tulane University | And 2 more authors.
Journal of the American Chemical Society | Year: 2011

Anatase TiO2 is a widely used photocatalytic material, and catechol (1,2-benzenediol) is a model organic sensitizer for dye-sensitized solar cells. The growth and the organization of a catecholate monolayer on the anatase (101) surface were investigated with scanning tunneling microscopy and density functional theory calculations. Isolated molecules adsorb preferentially at steps. On anatase terraces, monodentate ("D1") and bidentate ("D2") conformations are both present in the dilute limit, and frequent interconversions can take place between these two species. A D1 catechol is mobile at room temperature and can explore the most favorable surface adsorption sites, whereas D2 is essentially immobile. When a D1 molecule arrives in proximity of another adsorbed catechol in an adjacent row, it is energetically convenient for them to pair up in nearest-neighbor positions taking a D2-D2 or D2-D1 configuration. This intermolecular interaction, which is largely substrate mediated, causes the formation of one-dimensional catecholate islands that can change in shape but are stable to break-up. The change between D1 and D2 conformations drives both the dynamics and the energetics of this model system and is possibly of importance in the functionalization of dye-sensitized solar cells. © 2011 American Chemical Society. Source


Einhorn M.,AIT Austrian Institute of Technology | Conte F.V.,AIT Austrian Institute of Technology | Kral C.,AIT Austrian Institute of Technology | Fleig J.,Vienna University of Technology
IEEE Transactions on Power Electronics | Year: 2013

This paper describes the comparison and parameterization process of dynamic battery models for cell and system simulation. Three commonly used equivalent circuit battery models are parameterized using a numeric optimization method and basic electrical tests with a lithium-ion polymer battery cell. The maximum model performance is investigated, and the parameterized models are compared regarding the parameterization effort and the model accuracy. For the model with the best tradeoff between the parameterization effort and the model accuracy, a reasonable simplification of the parameterization process is presented. This model is parameterized with the simplified parameterization process and, finally, validated by using a current profile obtained from an electric vehicle simulation performing a real-life driving cycle. © 2012 IEEE. Source
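
To make the parameterization idea concrete, here is a minimal sketch (not the authors' actual procedure) of fitting a first-order Thevenin equivalent circuit, one commonly used model structure, to a measured voltage response with a numeric least-squares optimizer; the pulse profile, parameter values, and noise level are hypothetical:

    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(0)

    def simulate(params, t, i, ocv):
        # First-order Thevenin model: v = OCV - i*R0 - u_rc, with one RC branch
        # du_rc/dt = i/C1 - u_rc/(R1*C1), discretized exactly per time step.
        r0, r1, c1 = params
        u_rc = np.zeros_like(t)
        dt = np.diff(t, prepend=t[0])
        for k in range(1, len(t)):
            a = np.exp(-dt[k] / (r1 * c1))
            u_rc[k] = a * u_rc[k - 1] + r1 * (1.0 - a) * i[k]
        return ocv - i * r0 - u_rc

    def residuals(params, t, i, v_meas, ocv):
        return simulate(params, t, i, ocv) - v_meas

    # Hypothetical pulse-discharge test: 20 A pulse on a 3.7 V cell
    t = np.linspace(0.0, 600.0, 601)
    i = np.where((t > 60) & (t < 360), 20.0, 0.0)
    ocv = 3.7
    v_meas = simulate((0.005, 0.010, 3000.0), t, i, ocv) + rng.normal(0, 1e-4, t.size)

    fit = least_squares(residuals, x0=(0.01, 0.02, 1000.0),
                        bounds=([1e-6, 1e-6, 1.0], [1.0, 1.0, 1e5]),
                        args=(t, i, v_meas, ocv))
    print("estimated R0 [ohm], R1 [ohm], C1 [F]:", fit.x)

Higher-order models add further RC branches to the same fitting loop, which is exactly where the tradeoff between parameterization effort and accuracy discussed in the paper arises.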


Li S.-W.,The Interdisciplinary Center | Schmitt A.,Vienna University of Technology | Wang Q.,The Interdisciplinary Center
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2015

Quantum chromodynamics is notoriously difficult to solve at nonzero baryon density, and most models or effective theories of dense quark or nuclear matter are restricted to a particular density regime and/or a particular form of matter. Here we study dense (and mostly cold) matter within the holographic Sakai-Sugimoto model, aiming at a strong-coupling framework in the wide density range between nuclear saturation density and ultrahigh quark matter densities. The model contains only three parameters, and we ask whether it fulfills two basic requirements of real-world cold and dense matter, a first-order onset of nuclear matter and a chiral phase transition at high density to quark matter. Such a model would be extremely useful for astrophysical applications because it would provide a single equation of state for all densities relevant in a compact star. Our calculations are based on two approximations for baryonic matter - first, an instanton gas and, second, a homogeneous ansatz for the non-Abelian gauge fields on the flavor branes of the model. While the instanton gas shows chiral restoration at high densities but an unrealistic second-order baryon onset, the homogeneous ansatz behaves exactly the other way around. Our study, thus, provides all ingredients that are necessary for a more realistic model and allows for systematic improvements of the applied approximations. © 2015 American Physical Society. Source


Nawratil G.,Vienna University of Technology
Mechanisms and Machine Science | Year: 2010

Parallel manipulators which are singular with respect to the Schönflies motion group X(a) are called Schönflies-singular, or more precisely X(a)-singular, where a denotes the rotary axis. A special class of such manipulators are architecturally singular ones, because they are singular with respect to any Schönflies group. Another remarkable set of Schönflies-singular planar Stewart Gough platforms was already presented by the author in [5]. Moreover, the main theorem on these manipulators was given in [6]. In this paper we give a complete discussion of the remaining special cases, which also include so-called Cartesian-singular planar manipulators as a byproduct. © Springer Science+Business Media B.V. 2010. Source


Korjenic A.,Vienna University of Technology | Petranek V.,Brno University of Technology | Zach J.,Brno University of Technology | Hroudova J.,Brno University of Technology
Energy and Buildings | Year: 2011

Because the energy efficiency of buildings is evaluated not only on the basis of heating demand but also of primary energy demand, the ecological properties of building materials have become essential to the overall assessment. The demand for green building materials is rising sharply, especially for insulating materials from renewable resources. The application of natural materials has become increasingly important as a consequence of the growing need to conserve energy, to incorporate architecture and construction into sustainable development processes, and to address the recent discussions on the appropriate disposal of used insulation materials such as polystyrene (EPS). Because natural materials are more sensitive to moisture and to decomposition factors such as temperature, material moisture content, and attack by microorganisms, and may therefore decompose or exhibit shorter durability, it is necessary to evaluate the degradation rate of built-in materials and to determine their real in situ hygrothermal properties as a function of moisture content and volume changes. This paper describes the results of a research project carried out at the Vienna University of Technology and Brno University of Technology. The objective is to use jute, flax, and hemp to develop a new insulating material from renewable resources with building-physics and mechanical properties comparable to those of commonly used insulation materials. All input components are varied in the tests. The impact of moisture content changes in relation to the rate of change of other properties was the focus of the investigation. The test results show that the correct combination of natural materials is fully comparable with conventional materials. © 2011 Elsevier B.V. All rights reserved. Source


Palensky P.,AIT Austrian Institute of Technology | Dietrich D.,Vienna University of Technology
IEEE Transactions on Industrial Informatics | Year: 2011

Energy management means optimizing one of the most complex and important technical creations that we know: the energy system. While there is plenty of experience in optimizing energy generation and distribution, it is the demand side that receives increasing attention from research and industry. Demand Side Management (DSM) is a portfolio of measures to improve the energy system on the consumption side. It ranges from improving energy efficiency by using better materials, through smart energy tariffs with incentives for certain consumption patterns, up to sophisticated real-time control of distributed energy resources. This paper gives an overview and a taxonomy for DSM, analyzes the various types of DSM, and gives an outlook on the latest demonstration projects in this domain. © 2011 IEEE. Source


Lukasiewicz T.,University of Oxford | Lukasiewicz T.,Vienna University of Technology
IEEE Transactions on Knowledge and Data Engineering | Year: 2010

We present a novel combination of disjunctive programs under the answer set semantics with description logics for the Semantic Web. The combination is based on a well-balanced interface between disjunctive programs and description logics, which guarantees the decidability of the resulting formalism without assuming syntactic restrictions. We show that the new formalism has very nice semantic properties. In particular, it faithfully extends both disjunctive programs and description logics. Furthermore, we describe algorithms for reasoning in the new formalism, and we give a precise picture of its computational complexity. We also define the well-founded semantics for the normal case, where normal programs are combined with tractable description logics, and we explore its semantic and computational properties. In particular, we show that the well-founded semantics approximates the answer set semantics. We also describe algorithms for the problems of consistency checking and literal entailment under the well-founded semantics, and we give a precise picture of their computational complexity. As a crucial property, in the normal case, consistency checking and literal entailment under the well-founded semantics are both tractable in the data complexity, and even first-order rewritable (and thus can be done in LogSpace in the data complexity) in a special case that is especially useful for representing mappings between ontologies. © 2010 IEEE. Source


Kramer O.,University of Oldenburg | Gieseke F.,University of Oldenburg | Satzger B.,Vienna University of Technology
Neurocomputing | Year: 2013

Wind energy has an important part to play as a renewable energy resource in a sustainable world. For a reliable integration of wind energy, high-dimensional wind time-series have to be analyzed. Fault analysis and prediction are an important aspect in this context. The objective of this work is to show how methods from neural computation can serve as forecasting and monitoring techniques, contributing to a successful integration of wind into sustainable and smart energy grids. We employ support vector regression as prediction method for wind energy time-series. Furthermore, we use dimension reduction techniques like self-organizing maps for monitoring of high-dimensional wind time-series. The methods are briefly introduced, related work is presented, and experimental case studies are exemplarily described. The experimental parts are based on real wind energy time-series data from the National Renewable Energy Laboratory (NREL) western wind resource data set. © 2012 Elsevier B.V.. Source
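
As an illustration of the prediction part, the following sketch applies support vector regression to lagged values of a wind power series, in the spirit of the paper; the synthetic series and hyperparameters are placeholders for the NREL data and a proper model selection:

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    # Placeholder series; the paper uses NREL western wind resource data instead.
    power = np.sin(0.02 * np.arange(2000)) + 0.2 * rng.normal(size=2000)

    def lagged(series, n_lags):
        # Supervised reformulation: predict x[t] from the n_lags preceding values.
        X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
        return X, series[n_lags:]

    X, y = lagged(power, n_lags=12)
    split = int(0.8 * len(y))
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
    model.fit(X[:split], y[:split])
    pred = model.predict(X[split:])
    print("test RMSE:", np.sqrt(np.mean((pred - y[split:]) ** 2)))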


Leckner B.,Chalmers University of Technology | Szentannai P.,Budapest University of Technology and Economics | Winter F.,Vienna University of Technology
Fuel | Year: 2011

Methods for scaling of fluidized-bed combustors are reviewed. It is found that a general scaling methodology, including simultaneous fluid-dynamic and combustion scaling, cannot be applied in practical scaling tests. Simplifications are needed. The approach followed here is to differentiate between fluid-dynamic scaling and combustion scaling, both related to the basic equations describing the phenomena, and boiler scaling, that is, scale-up from one boiler size to another, where established design elements can be utilized in the scaling procedure. © 2011 Elsevier Ltd. All rights reserved. Source


Kirnbauer F.,Bioenergy 2020+ GmbH | Hofbauer H.,Vienna University of Technology
Energy and Fuels | Year: 2011

Bed material coating in fluidized biomass combustion plants is a precursor for bed agglomeration. While bed agglomeration is a well-described problem in connection with biomass combustion plants, the literature on bed agglomeration or bed material coating in gasification plants is sparse. Recently developed biomass gasification plants face similar ash-related problems, but inorganic matter is also linked to their catalytic activity to reduce the tar concentration in the product gas. This paper summarizes recent ash-related research activities at a dual fluidized bed steam gasification plant located in Güssing, Austria. The fuel used is forestry residues; the bed material is olivine. The setup of inorganic flows and loops is described. Bed material analyses were carried out and are presented, such as X-ray fluorescence, X-ray diffraction, and scanning electron microscopy with energy-dispersive X-ray spectroscopy. The analyses show the formation of two calcium-rich layers around the bed particles. The inner layer is homogeneous, composed mainly of calcium and silicate, while the outer layer has a similar composition to the fly ash of the plant. Analyses of the crystal structure of the used bed material show the formation of calcium silicates that were not detected in the fresh bed material. This has consequences for the performance of the plant concerning the catalytic activity of the bed material and the tendency for fouling in the plant. © 2011 American Chemical Society. Source


Stoger-Pollach M.,Vienna University of Technology
Ultramicroscopy | Year: 2014

The present work is a short note on the performance of a conventional transmission electron microscope (TEM) operated at very low beam energies (below 20 keV). We discuss the high-tension stability and resolving power of this uncorrected TEM. We find that the theoretical lens performance can nearly be achieved in practice. We also demonstrate that electron energy loss spectra can be recorded at these low beam energies with standard equipment. The signal-to-noise ratio is sufficiently good for further data treatment like multiple scattering deconvolution and Kramers-Kronig analysis. © 2014 Elsevier B.V. Source


Libisch F.,Vienna University of Technology | Huang C.,Los Alamos National Laboratory | Carter E.A.,Andlinger Center for Energy and the Environment
Accounts of Chemical Research | Year: 2014

Conspectus: Ab initio modeling of matter has become a pillar of chemical research: with ever-increasing computational power, simulations can be used to accurately predict, for example, chemical reaction rates, electronic and mechanical properties of materials, and dynamical properties of liquids. Many competing quantum mechanical methods have been developed over the years that vary in computational cost, accuracy, and scalability: density functional theory (DFT), the workhorse of solid-state electronic structure calculations, features a good compromise between accuracy and speed. However, approximate exchange-correlation functionals limit DFT's ability to treat certain phenomena or states of matter, such as charge-transfer processes or strongly correlated materials. Furthermore, conventional DFT is purely a ground-state theory: electronic excitations are beyond its scope. Excitations in molecules are routinely calculated using time-dependent DFT linear response; however, applications to condensed matter are still limited. By contrast, many-electron wavefunction methods aim for a very accurate treatment of electronic exchange and correlation. Unfortunately, the associated computational cost renders treatment of more than a handful of heavy atoms challenging. On the other side of the accuracy spectrum, parametrized approaches like tight-binding can treat millions of atoms. In view of the different (dis-)advantages of each method, the simulation of complex systems seems to force a compromise: one is limited to the most accurate method that can still handle the problem size. For many interesting problems, however, compromise proves insufficient. A possible solution is to break up the system into manageable subsystems that may be treated by different computational methods. The interaction between subsystems may be handled by an embedding formalism. In this Account, we review embedded correlated wavefunction (CW) approaches and some applications. We first discuss our density functional embedding theory, which is formally exact. We show how to determine the embedding potential, which replaces the interaction between subsystems, at the DFT level. CW calculations are performed using a fixed embedding potential, that is, a non-self-consistent embedding scheme. We demonstrate this embedding theory for two challenging electron transfer phenomena: (1) initial oxidation of an aluminum surface and (2) hot-electron-mediated dissociation of hydrogen molecules on a gold surface. In both cases, the interaction between gas molecules and metal surfaces was treated by sophisticated CW techniques, with the remainder of the extended metal surface being treated by DFT. Our embedding approach overcomes the limitations of conventional Kohn-Sham DFT in describing charge transfer, multiconfigurational character, and excited states. From these embedding simulations, we gained important insights into fundamental processes that are crucial aspects of fuel cell catalysis (i.e., O2 reduction at metal surfaces) and plasmon-mediated photocatalysis by metal nanoparticles. Moreover, our findings agree very well with experimental observations, while offering new views into the chemistry. We finally discuss our recently formulated potential-functional embedding theory that provides a seamless, first-principles way to include back-action onto the environment from the embedded region. © 2014 American Chemical Society. Source


Jungel A.,Vienna University of Technology
Mathematical and Computer Modelling of Dynamical Systems | Year: 2010

The modelling, analysis and numerical approximation of energy-transport models for semiconductor devices are reviewed. The derivation of the partial differential equations from the semiconductor Boltzmann equation is sketched. Furthermore, the main ideas for the analytical treatment of the equations, employing thermodynamic principles, are given. A new result is the proof of the weak sequential stability of approximate solutions to some time-dependent energy-transport equations with physical transport coefficients. The discretization of the stationary model using mixed finite elements is explained, and some numerical results in two and three space dimensions are presented. Finally, energy-transport models with lattice heating or quantum corrections are reviewed. © 2010 Taylor & Francis. Source


Fischer F.D.,University of Leoben | Svoboda J.,Academy of Sciences of the Czech Republic | Appel F.,Helmholtz Center Geesthacht | Kozeschnik E.,Vienna University of Technology
Acta Materialia | Year: 2011

The equilibrium site fraction of vacancies increases with temperature and, thus, annealing and rapid quenching may lead to states with a significant vacancy supersaturation. Excess vacancies can then gradually annihilate at available sinks represented by jogs at dislocations, by grain boundaries or by free surfaces. Significant supersaturation by vacancies may also lead to the nucleation and growth of Frank loops acting as additional sinks. Three models corresponding to three different annihilation mechanisms are developed in this paper. They refer to annihilation of excess vacancies at jogs at dislocations with a constant density, at homogeneously distributed Frank loops with a constant density, and at grain boundaries. The simulations based on the models are performed for individual annihilation mechanisms under isothermal and non-isothermal conditions, as well as for simultaneous annihilation of vacancies at Frank loops, dislocation jogs and grain boundaries using different cooling conditions. © 2011 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved. Source


Kaltenbacher B.,Klagenfurt University | Kaltenbacher M.,Vienna University of Technology | Sim I.,Klagenfurt University
Journal of Computational Physics | Year: 2013

We consider the second order wave equation in an unbounded domain and propose an advanced perfectly matched layer (PML) technique for its efficient and reliable simulation. In doing so, we concentrate on the time domain case and use the finite-element (FE) method for the space discretization. Our unsplit-PML formulation requires four auxiliary variables within the PML region in three space dimensions. For a reduced version (rPML), we present a long-time stability proof based on an energy analysis. The numerical case studies and an application example demonstrate the good performance and long-time stability of our formulation for treating open domain problems. © 2012 Elsevier Inc. Source


Sichani M.T.,University of Aalborg | Nielsen S.R.K.,University of Aalborg | Bucher C.,Vienna University of Technology
Probabilistic Engineering Mechanics | Year: 2011

An efficient method for estimating low first-passage probabilities of high-dimensional nonlinear systems, based on asymptotic estimation of low probabilities, is presented. The method does not require any a priori knowledge of the system, i.e. it is a black-box method, and it places very low demands on system memory. Consequently, high-dimensional problems can be handled, and nonlinearities in the model neither bring any difficulty in applying it nor lead to considerable reduction of its efficiency. These characteristics suggest that the method is a powerful candidate for complicated problems. First, the failure probabilities of three well-known nonlinear systems are estimated. Next, a reduced degree-of-freedom model of a wind turbine is developed and exposed to a turbulent wind field. The model incorporates very high dimensions and strong nonlinearities simultaneously. The failure probability of the wind turbine model is estimated down to very low values; this demonstrates the efficiency and power of the method on a realistic high-dimensional, highly nonlinear system. © 2011 Elsevier Ltd. All rights reserved. Source
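
For orientation, the sketch below sets up the kind of first-passage problem addressed, using plain Monte Carlo on a randomly excited Duffing-type oscillator; all parameters are illustrative, and the paper's asymptotic-sampling estimator (which extrapolates from inflated excitation levels to reach very low probabilities) is not reproduced here:

    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical Duffing-type oscillator under Gaussian white noise.
    dt, t_end, n_sim = 0.01, 10.0, 2000
    omega, zeta, eps, sigma, barrier = 2 * np.pi, 0.05, 0.1, 1.0, 0.55

    def fails():
        # Euler-Maruyama integration of
        # x'' + 2*zeta*omega*x' + omega^2 * (x + eps*x^3) = w(t)
        x, v = 0.0, 0.0
        for _ in range(int(t_end / dt)):
            w = sigma * rng.normal() / np.sqrt(dt)
            a = w - 2 * zeta * omega * v - omega**2 * (x + eps * x**3)
            x, v = x + v * dt, v + a * dt
            if abs(x) > barrier:
                return True        # first passage out of the safe domain
        return False

    n_fail = sum(fails() for _ in range(n_sim))
    print("crude Monte Carlo first-passage probability:", n_fail / n_sim)

Crude Monte Carlo becomes infeasible once the target probability drops far below 1/n_sim, which is precisely the regime the asymptotic method in the paper is designed for.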


Durakbasa M.N.,Vienna University of Technology
Key Engineering Materials | Year: 2014

In the present time of worldwide competition in industry and production engineering, it is extremely important to save time on the one hand and, on the other, to keep an eye on the increasingly high costs of energy and raw materials. Comprehensive knowledge in the areas of market requirements, product and process development and design, intelligent metrology, and end-of-life management is an important presupposition for achieving rapid, agile, waste-free and cost-effective production of innovative, customized complex products using next-generation materials, as well as for protecting the environment through zero emissions, improved environmental sustainability, and reduced energy use by means of intelligent manufacturing systems. © (2014) Trans Tech Publications, Switzerland. Source


Barth S.,Tyndall National Institute | Barth S.,Trinity College Dublin | Barth S.,Vienna University of Technology | Boland J.J.,Trinity College Dublin | And 2 more authors.
Nano Letters | Year: 2011

Metal-seeded growth of one-dimensional (1D) semiconductor nanostructures is still a very active field of research, despite the huge progress which has been made in understanding this fundamental phenomenon. Liquid growth promoters allow control of the aspect ratio, diameter, and structure of 1D crystals via external parameters, such as precursor feedstock, temperature, and operating pressure. However, the transfer of crystallographic information from a catalytic nanoparticle seed to a growing nanowire has not been described in the literature. Here we define the theoretical requirements for transferring defects from nanoparticle seeds to growing semiconductor nanowires and describe why Ag nanoparticles are ideal candidates for this purpose. We detail in this paper the influence of solid Ag growth seeds on the crystal quality of Ge nanowires, synthesized using a supercritical fluid growth process. Significantly, under certain reaction conditions {111} stacking faults in the Ag seeds can be directly transferred to a high percentage of 〈112〉-oriented Ge nanowires, in the form of radial twins in the semiconductor crystals. Defect transfer from nanoparticles to nanowires could open up the possibility of engineering 1D nanostructures with new and tunable physical properties and morphologies. © 2011 American Chemical Society. Source


Marquez-Sillero I.,University of Cordoba, Spain | Aguilera-Herrador E.,Vienna University of Technology | Cardenas S.,University of Cordoba, Spain | Valcarcel M.,University of Cordoba, Spain
TrAC - Trends in Analytical Chemistry | Year: 2011

Advances and changes in practical aspects of ion-mobility spectrometry (IMS) have led to its widespread use for applications of environmental concern due to its unique characteristics, which include portability, ruggedness, relatively low acquisition costs and speed of analysis. However, limitations regarding the complexity of environmental samples and strict requirements on limits of detection have to be overcome. This article critically reviews existing environmental applications using IMS for the determination of different families of compounds. We also consider the analytical tools developed to solve the limitations regarding selectivity and sensitivity, including those approaches that have led to advances in the instrumentation of IMS and its combination with other techniques for extraction and pre-concentration of analytes, pre-separation of analytes and its coupling to other detection systems. Finally, we discuss current trends that facilitate the deployment of IMS for on-site or in-field analysis. © 2011 Elsevier Ltd. Source


Muldoon F.,Vienna University of Technology
International Journal for Numerical Methods in Fluids | Year: 2013

The problem of controlling the hydrothermal waves in a thermocapillary flow is addressed using a gradient-based control strategy. The state equations are the two-dimensional unsteady incompressible Navier-Stokes and energy equations under the Boussinesq approximation. The modeled problem is the 'open boat' process of crystal growth, the flow which is driven by Marangoni and buoyancy effects. The control is a spatially and temporally varying heat flux boundary condition at the free surface. The control that minimizes the hydrothermal waves is found using a conjugate gradient method, where the gradient of the objective function with respect to the control variables is obtained from solving a set of adjoint equations. The effectiveness of choices of the parameters governing the control algorithm is examined. Almost complete suppression of the hydrothermal waves is obtained for certain choices of the parameters governing the control algorithm. The numerical issues involved with finding the control using the optimizer are discussed, and the features of the resulting control are analyzed with the goal of understanding how it affects the flow. © 2012 John Wiley & Sons, Ltd. Source
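
The control loop can be pictured with a toy linear-quadratic analogue: below, the state depends linearly on the control, the objective penalizes deviation from a target state plus control effort, and a conjugate gradient optimizer consumes the analytically available gradient (the paper instead obtains the gradient of its flow objective by solving adjoint equations); all quantities here are hypothetical:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    # Hypothetical linear control-to-state map and desired state profile.
    A = rng.normal(size=(50, 20))
    target = rng.normal(size=50)
    alpha = 1e-3                      # penalty on control effort

    def objective(u):
        r = A @ u - target            # deviation of the state from the target
        return 0.5 * r @ r + 0.5 * alpha * u @ u

    def gradient(u):
        # For a linear state map the adjoint-based gradient reduces to this;
        # for the Navier-Stokes problem it comes from solving adjoint PDEs.
        return A.T @ (A @ u - target) + alpha * u

    res = minimize(objective, x0=np.zeros(20), jac=gradient, method="CG")
    print("cost:", res.fun, "control norm:", np.linalg.norm(res.x))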


Fermuller C.G.,Vienna University of Technology
Journal of Multiple-Valued Logic and Soft Computing | Year: 2016

Hintikka's game theoretic semantics for classical connectives and quantifiers has been generalized to many-valued logics in various ways. We introduce a new type of semantic games, so-called backtracking games, where a stack of formulas is used to store information on how to continue the game after reaching an atomic formula. This mechanism allows one to avoid the explicit reference to truth values that is characteristic of some evaluation games. Moreover, the indeterminism due to the multiplicity of still-to-be-analyzed formulas that can be observed in Giles's game for Łukasiewicz logic is dissolved. We present backtracking games for the three fundamental t-norm based logics: Łukasiewicz, Gödel, and Product logic, and provide corresponding adequateness theorems. © 2016 Old City Publishing, Inc. Source


Schwanninger M.,University of Natural Resources and Life Sciences, Vienna | Rodrigues J.C.,Tropical Research Institute of Portugal | Fackler K.,Vienna University of Technology
Journal of Near Infrared Spectroscopy | Year: 2011

Near infrared (NIR) spectra of wood and wood products contain information regarding their chemical composition and molecular structure. Both influence physical properties and performance; however, at present, this information is under-utilised in research and industry. Presently NIR spectroscopy is mainly used following the explorative approach, by which the contents of chemical components and physico-chemical as well as mechanical properties of the samples of interest are determined by applying multivariate statistical methods to the spectral data. Concrete hypotheses or prior knowledge of the chemistry and structure of the sample, beyond that of the reference data, are not necessary to build such multivariate models. However, to understand the underlying chemistry, knowledge of the chemical/functional groups that absorb at distinct wavelengths is indispensable and the assignment of NIR bands is necessary. Band assignment is an interesting and important part of spectroscopy that allows conclusions to be drawn on the chemistry and physico-chemical properties of samples. To summarise current knowledge on this topic, 70 years of NIR band assignment literature for wood and wood components were reviewed. In addition, preliminary results of ongoing investigations that also led to new assignments were included for discussion. Furthermore, some basic considerations on the interactions of NIR radiation with the inhomogeneous, anisotropic and porous structure of wood, and what impact this structure has on the information contained in the spectra, are presented. In addition, the influence of common data (pre-)processing methods on the position of NIR bands is discussed. For more conclusive band assignments, it is recommended that wood be separated into its components. However, this approach may lead to misinterpretations when evaluation methods other than direct comparison of spectra are used, because isolation and purification of wood components is difficult and may lead to chemical and structural alterations compared to the native state. Furthermore, "pure" components have more distinct and symmetric bands that influence the shape of the spectra. This extended review provides the reader with a comprehensive summary of NIR bands, as well as some practical considerations important for the application of NIR to wood. © IM Publications LLP 2011. Source


Hlinka O.,Slovak University of Technology in Bratislava | Hlawatsch F.,Vienna University of Technology | Djuric P.M.,State University of New York at Stony Brook
IEEE Signal Processing Magazine | Year: 2013

Distributed particle filters (DPFs) are a powerful and versatile approach to decentralized state estimation in agent networks (ANs), and they are especially suited to large-scale, nonlinear, and non-Gaussian systems. Most distributed discrete-time sequential estimation algorithms presuppose synchronization, i.e., the availability of a common clock or time base at each agent, whereas some approaches relax this requirement. The existing DPF algorithms differ in aspects such as the type and amount of data communicated between the agents, communication range, local processing, computational complexity, memory requirements, estimation accuracy, robustness, scalability, and latency. In fusion center (FC) based DPFs, each agent uses a local PF to convert its own measurement into a local posterior, which is then transmitted to an FC. In leader agent (LA) DPFs, information accumulates along a path formed by a sequence of adjacent agents. With consensus-based DPFs, all agents perform particle filtering simultaneously and possess a particle representation of a posterior. Source
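
As background for the distributed schemes, here is a minimal single-agent bootstrap particle filter on a standard scalar benchmark model; in an FC-, LA-, or consensus-based DPF, each agent would run a local filter of this kind and exchange posterior information. The model and parameters are illustrative:

    import numpy as np

    rng = np.random.default_rng(0)
    n_steps, n_particles = 50, 1000

    # Scalar nonlinear/non-Gaussian benchmark (hypothetical here):
    # x[k] = 0.5*x + 25*x/(1+x^2) + v,  y[k] = x^2/20 + w,  v, w ~ N(0, 1)
    x, xs, ys = 0.1, [], []
    for _ in range(n_steps):
        x = 0.5 * x + 25 * x / (1 + x**2) + rng.normal()
        xs.append(x)
        ys.append(x**2 / 20 + rng.normal())

    particles = rng.normal(0.0, 2.0, n_particles)
    est = []
    for y in ys:
        # Propagate particles through the state transition (prior sampling)
        particles = (0.5 * particles + 25 * particles / (1 + particles**2)
                     + rng.normal(size=n_particles))
        # Weight by measurement likelihood, normalize, estimate, resample
        w = np.exp(-0.5 * (y - particles**2 / 20) ** 2) + 1e-300
        w /= w.sum()
        est.append(np.sum(w * particles))
        particles = particles[rng.choice(n_particles, n_particles, p=w)]

    print("RMSE:", np.sqrt(np.mean((np.array(est) - np.array(xs)) ** 2)))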


Barrabes N.,Vienna University of Technology | Sa J.,ETH Zurich
Applied Catalysis B: Environmental | Year: 2011

Catalytic hydrogenation of nitrates has been studied since the beginning of the nineties. Despite the encouraging initial results, the problem of ammonium by-product formation is yet to be solved. This manuscript aims to be an overview of past and present research in the field and proposes some key areas which should be addressed to improve current and newly developed systems. © 2011 Elsevier B.V. Source


Kroyer G.,Vienna University of Technology
Journal fur Verbraucherschutz und Lebensmittelsicherheit | Year: 2010

The stability of the natural sweetener stevioside under different processing and storage conditions, the effects of its interaction with water-soluble vitamins, food-relevant organic acids and other common low-calorie sweeteners, and its application in coffee and tea beverages were evaluated. Incubation of the solid sweetener stevioside at elevated temperatures for 1 h showed good stability up to 120°C, whilst at temperatures exceeding 140°C forced decomposition was noticed. In aqueous solutions stevioside is remarkably stable in a pH range of 2-10 under thermal treatment up to 80°C; however, under strongly acidic conditions (pH 1) a significant decrease in the stevioside concentration was detected. Up to 4 h of incubation of stevioside with individual water-soluble vitamins in aqueous solution at 80°C showed no significant changes in regard to stevioside and the B-vitamins, whereas a protective effect of stevioside on the degradation of ascorbic acid was observed, resulting in a significantly delayed degradation rate. In the presence of other individual low-calorie sweeteners practically no interaction was found at room temperature after 4 months of incubation in aqueous media. Stability studies of stevioside in solutions of organic acids showed a tendency towards enhanced decomposition of the sweetener at lower pH values, depending on the acidic medium. In stevioside-sweetened coffee and tea beverages, practically no significant changes in either caffeine content or stevioside content could be noticed. Furthermore, an overview of previously published studies on the Stevia sweeteners stevioside and rebaudioside A is given. © Birkhäuser Verlag, Basel/Switzerland 2010. Source


Egele M.,Vienna University of Technology | Scholte T.,SAP | Kirda E.,Eurecom | Kruegel C.,University of California at Santa Barbara
ACM Computing Surveys | Year: 2012

Anti-virus vendors are confronted with a multitude of potentially malicious samples today. Receiving thousands of new samples every day is not uncommon. The signatures that detect confirmed malicious threats are mainly still created manually, so it is important to discriminate between samples that pose a new unknown threat and those that are mere variants of known malware. This survey article provides an overview of techniques based on dynamic analysis that are used to analyze potentially malicious samples. It also covers analysis programs that employ these techniques to assist human analysts in assessing, in a timely and appropriate manner, whether a given sample deserves closer manual inspection due to its unknown malicious behavior. © 2012 ACM. Source


Ries M.,Vienna University of Technology | Gardlo B.,University of Zilina
IEEE Journal on Selected Areas in Communications | Year: 2010

Provisioning of mobile video services is rather challenging since in mobile environments, bandwidth and processing resources are limited. Audiovisual content is present in most multimedia services; however, the user expectation of perceived audiovisual quality differs for speech and non-speech contents. The majority of recently proposed metrics for audiovisual quality estimation assume only one continuous medium, either audio or video. In order to accurately predict the audiovisual quality of a multimedia system it is necessary to apply a metric that takes audio as well as video quality into account simultaneously. When assessing a multi-modal system, one cannot model it only as a simple combination of mono-modal models, because the pure combination of audio and video models does not give a robust perceived-quality performance metric. We show the importance of taking into account the cross-modal interaction between audio and video modes, as well as the mutual compensation effect. In this contribution we report on measuring the cross-modal interaction and propose a content-adaptive audiovisual metric for video sequences that distinguishes between speech and non-speech audio. Furthermore, the proposed method allows for a reference-free audiovisual quality estimation, which reduces computational complexity and extends applicability. © 2010 IEEE. Source


Treiblmaier H.,Vienna University of Economics and Business | Filzmoser P.,Vienna University of Technology
Information and Management | Year: 2010

Exploratory factor analysis is commonly used in IS research to detect multivariate data structures. Frequently, the method is blindly applied without checking if the data fulfill the requirements of the method. We investigated the influence of sample size, data transformation, factor extraction method, rotation, and number of factors on the outcome. We compared classical exploratory factor analysis with a robust counterpart which is less influenced by data outliers and data heterogeneities. Our analyses revealed that robust exploratory factor analysis is more stable than the classical method. Copyright © 2010 Published by Elsevier B.V. All rights reserved. Source
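
The classical/robust contrast can be sketched as follows: estimate the covariance matrix either classically or with a robust estimator (here the minimum covariance determinant), then extract principal-factor loadings from its eigendecomposition; this is a simplified stand-in for the full robust exploratory factor analysis used in the paper, with synthetic data and a deliberate block of outliers:

    import numpy as np
    from sklearn.covariance import EmpiricalCovariance, MinCovDet

    rng = np.random.default_rng(0)
    # Synthetic two-factor data for six items, plus a block of gross outliers.
    loadings_true = rng.normal(size=(6, 2))
    X = rng.normal(size=(200, 2)) @ loadings_true.T + 0.3 * rng.normal(size=(200, 6))
    X[:10] += 8.0

    def principal_factors(cov, n_factors=2):
        # Loadings from the leading eigenpairs of the covariance estimate.
        vals, vecs = np.linalg.eigh(cov)
        idx = np.argsort(vals)[::-1][:n_factors]
        return vecs[:, idx] * np.sqrt(vals[idx])

    for name, est in [("classical", EmpiricalCovariance()),
                      ("robust (MCD)", MinCovDet(random_state=0))]:
        L = principal_factors(est.fit(X).covariance_)
        print(name, "first-factor loadings:", np.round(L[:, 0], 2))

Comparing the two loading vectors shows how a handful of outliers can distort the classical solution while the robust estimate stays close to the true structure, which is the stability effect the paper reports.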


Lovesey S.W.,ISIS Facility | Lovesey S.W.,Diamond Light Source | Balcar E.,Vienna University of Technology
Journal of the Physical Society of Japan | Year: 2013

The practice of replacing matrix elements in atomic calculations by those of convenient operators with strong physical appeal has a long history, and in condensed matter physics it is perhaps best known through the use of operator equivalents in electron resonance by Elliott and Stevens. Likewise, electronic multipoles, created with irreducible spherical tensors to represent charge-like and magnetic-like quantities, are widespread in modern physics. Examples in recent headlines include a magnetic charge (a monopole), an anapole (a dipole) and a triakontadipole (a magnetic-like atomic multipole of rank 5). In this communication, we aim to guide the reader through the use of atomic, spherical multipoles in photon scattering, and in resonant Bragg diffraction and dichroic signals in particular. Applications to copper oxide (CuO) and neptunium dioxide (NpO2) are described. In keeping with its nature as a simple guide, algebra is used sparingly, and expressions are gathered from the published literature rather than derived, even when central to the exposition. An exception is a thorough grounding, contained in an Appendix, for an appropriate version of the photon scattering length based on quantum electrodynamics. A theme of the guide is the application of symmetry in scattering, in particular the constraints imposed on results by symmetry in crystals. To this end, a second Appendix catalogues constraints on multipoles imposed by symmetry in crystal point groups. Copyright © 2013 The Physical Society of Japan. Source


Marcos D.,Austrian Academy of Sciences | Rabl P.,Vienna University of Technology | Rico E.,University of Ulm | Zoller P.,Austrian Academy of Sciences | Zoller P.,University of Innsbruck
Physical Review Letters | Year: 2013

We describe a superconducting-circuit lattice design for the implementation and simulation of dynamical lattice gauge theories. We illustrate our proposal by analyzing a one-dimensional U(1) quantum-link model, where superconducting qubits play the role of matter fields on the lattice sites and the gauge fields are represented by two coupled microwave resonators on each link between neighboring sites. A detailed analysis of a minimal experimental protocol for probing the physics related to string breaking effects shows that, despite the presence of decoherence in these systems, distinctive phenomena from condensed-matter and high-energy physics can be visualized with state-of-the-art technology in small superconducting-circuit arrays. © 2013 American Physical Society. Source


Brewka G.,University of Leipzig | Eiter T.,Vienna University of Technology | Truszczynski M.,University of Kentucky
Communications of the ACM | Year: 2011

Can solving hard computational problems be made easy? If we restrict the scope of the question to computational problems that can be stated in terms of constraints over binary domains, and if we understand "easy" as "using a simple and intuitive modeling language that comes with software for processing programs in the language," then the answer is Yes! Answer Set Programming (ASP, for short) fits the bill. While already well represented at research conferences and workshops, ASP has been around for barely more than a decade. Its origins, however, go back a long time; it is an outcome of years of research in knowledge representation, logic programming, and constraint satisfaction, areas that sought and studied declarative languages to model domain knowledge, as well as general-purpose computational tools for processing programs and theories that represent problem specifications in these languages. ASP borrows from each of these areas. © 2011 ACM. Source


Liserre M.,Polytechnic of Bari | Sauter T.,Vienna University of Technology | Hung J.Y.,Auburn University
IEEE Industrial Electronics Magazine | Year: 2010

Industrialization and economic development have historically been associated with man's ability to harness natural energy resources to improve his condition. On this basis, two industrial revolutions occurred in the 18th and 19th centuries, in which natural resources such as coal (first revolution) and petroleum (second revolution) were widely exploited to produce levels of energy far beyond what could be achieved by human or animal muscle power. Furthermore, modern power distribution systems made abundant energy reliably available and relatively independent of the plant location. However, more than two centuries of industrialization exploited nonrenewable energy resources, often with undesirable side effects such as pollution and other damage to the natural environment. In the second half of the 20th century, extraction of energy from nuclear processes grew in popularity, relieving some demands on limited fossil fuel reserves but at the same time raising safety and political problems. Meeting the global demand for energy is now the key challenge to sustained industrialization. © IEEE. Source


Fulton C.,Florida Institute of Technology | Langer H.,Vienna University of Technology
Complex Analysis and Operator Theory | Year: 2010

The Titchmarsh-Weyl function, which was introduced in Fulton (Math Nachr 281(10):1418-1475, 2008) for the Sturm-Liouville equation with a hydrogen-like potential on (0, ∞), is shown to belong to a generalized Nevanlinna class Nk. As a consequence, also in the case of two singular endpoints, there exists a scalar spectral function for the Fourier transformation defined by means of Frobenius solutions. This spectral function is given explicitly for such potentials. © Birkhäuser Verlag Basel/Switzerland 2009. Source


Inferring gene regulatory networks from expression data is difficult, but it is common and often useful. Most network problems are under-determined (there are more parameters than data points), and therefore data or parameter set reduction is often necessary. Correlation between variables in the model also confounds network coefficient inference. In this paper, we present an algorithm that uses integrated, probabilistic clustering to ease the problems of under-determination and correlated variables within a fully Bayesian framework. Specifically, ours is a dynamic Bayesian network with integrated Gaussian mixture clustering, which we fit using variational Bayesian methods. We show, using public, simulated time-course data sets from the DREAM4 Challenge, that our algorithm outperforms non-clustering methods in many cases (7 out of 25) with fewer samples, rarely underperforming (1 out of 25), and often selects a non-clustering model if it better describes the data. Source code (GNU Octave) for BAyesian Clustering Over Networks (BACON) and sample data are available at: http://code.google.com/p/bacon-for-genetic-networks. © 2013 Brian Godsey. Source
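
The clustering ingredient can be illustrated in isolation: a variational-Bayes Gaussian mixture (here scikit-learn's BayesianGaussianMixture, not the authors' GNU Octave code) groups genes with similar time-course profiles, which is how the parameter space is reduced before network inference; the expression data below are synthetic:

    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    rng = np.random.default_rng(0)
    # Synthetic time-course expression data: 30 genes x 8 time points drawn
    # from three underlying profiles plus noise.
    profiles = rng.normal(size=(3, 8))
    labels = rng.integers(0, 3, size=30)
    X = profiles[labels] + 0.2 * rng.normal(size=(30, 8))

    # Variational-Bayes mixture; surplus components are pruned automatically.
    gmm = BayesianGaussianMixture(n_components=10,
                                  weight_concentration_prior=0.1,
                                  random_state=0).fit(X)
    print("cluster assignments:", gmm.predict(X))
    print("effective clusters:", int(np.sum(gmm.weights_ > 0.01)))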


Walter H.,Vienna University of Technology | Hofmann R.,Josef Bertsch Gesm.b.H. and Co KG
Applied Thermal Engineering | Year: 2011

This paper presents the results of a theoretical investigation of the influence of different heat transfer correlations for finned tubes on the dynamic behavior of a heat recovery steam generator (HRSG). The investigation was done for a vertical-type natural circulation HRSG with 3 pressure stages under hot start-up and shutdown conditions. For the calculation of the flue gas-side heat transfer coefficient, the well-known correlations for segmented finned tubes according to Schmidt, VDI and ESCOA (traditional and revised), as well as a new correlation developed at the Institute for Energy Systems and Thermodynamics, are used. The simulation results show a good agreement in the overall behavior of the boiler between the different correlations. However, some important differences are still found in the detailed analysis of the boiler behavior. © 2010 Elsevier Ltd. All rights reserved. Source
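
To show why the choice of correlation matters, the sketch below evaluates a generic gas-side finned-tube correlation of the power-law form Nu = C·Re^m·Pr^(1/3)·f(geometry) for two coefficient sets; the coefficients are purely illustrative placeholders, not the Schmidt, VDI, or ESCOA values from the paper:

    import numpy as np

    # Generic power-law shape shared by gas-side finned-tube correlations.
    # Coefficients below are illustrative placeholders only.
    def nusselt(re, pr, c, m, geom=1.0):
        return c * re**m * pr ** (1.0 / 3.0) * geom

    re = np.logspace(3, 5, 5)      # flue-gas Reynolds number range
    pr = 0.7
    for name, (c, m) in {"correlation A": (0.30, 0.625),
                         "correlation B": (0.25, 0.650)}.items():
        print(name, np.round(nusselt(re, pr, c, m), 1))

Even small differences in C and m translate into different heat transfer coefficients across the Reynolds range, which propagates into the simulated transient behavior of the boiler.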


Christensson N.,University of Vienna | Kauffmann H.F.,University of Vienna | Kauffmann H.F.,Vienna University of Technology | Pullerits T.,Lund University | Mancal T.,Charles University
Journal of Physical Chemistry B | Year: 2012

A vibronic exciton model is applied to explain the long-lived oscillatory features in the two-dimensional (2D) electronic spectra of the Fenna-Matthews-Olson (FMO) complex. Using experimentally determined parameters and uncorrelated site energy fluctuations, the model predicts oscillations with dephasing times of 1.3 ps at 77 K, which is in a good agreement with the experimental results. These long-lived oscillations originate from the coherent superposition of vibronic exciton states with dominant contributions from vibrational excitations on the same pigment. The oscillations obtain a large amplitude due to excitonic intensity borrowing, which gives transitions with strong vibronic character a significant intensity despite the small Huang-Rhys factor. Purely electronic coherences are found to decay on a 200 fs time scale. © 2012 American Chemical Society. Source


Das P.,Dibrugarh University | Linert W.,Vienna University of Technology
Coordination Chemistry Reviews | Year: 2016

The ligand-assisted palladium (Pd)-catalyzed Suzuki-Miyaura cross-coupling reaction is one of the most attractive protocols in organic chemistry and phosphines have been established as the best ligand system for this transformation. However, these phosphines have significant limitations, such as high toxicity, sensitivity to air and moisture, handling problems, and high costs. Recently, Schiff bases have been recognized as excellent alternatives to phosphines in Suzuki-Miyaura reactions. Similar to phosphines, the steric and electronic characteristics of Schiff bases can be manipulated by selecting suitable condensing aldehydes and amines. Many Schiff base-derived homogeneous and heterogeneous Pd catalysts have been reported for Suzuki-Miyaura reactions and this review provides insights into the state-of-the-art in applications of these Schiff base-derived Pd catalysts in the Suzuki-Miyaura reaction. © 2015 Elsevier B.V. Source


Wallner G.,University of Applied Arts Vienna | Kriglstein S.,Vienna University of Technology
Entertainment Computing | Year: 2013

As video games are becoming more and more complex and are reaching a broader audience, there is an increasing interest in procedures to analyze player behavior and the impact of design decisions. Game companies traditionally relied on user-testing methods, like playtesting, surveys or videotaping, to obtain player feedback. However, these qualitative methods for data collection are time-consuming and the obtained data is often incomplete or subjective. Therefore, instrumentation became popular in recent years to unobtrusively obtain the detailed data required to thoroughly evaluate player behavior. To make sense of the large amount of data, appropriate tools and visualizations have been developed. This article reviews literature on visualization-based analysis of game metric data in order to give an overview of the current state of this emerging field of research. We discuss issues related to gameplay analysis, propose a broad categorization of visualization techniques and discuss their characteristics. Furthermore, we point out open problems to promote future research in this area. © 2013 International Federation for Information Processing. Source


Fitzpatrick G.,Vienna University of Technology | Ellingsen G.,University of Tromso
Computer Supported Cooperative Work: CSCW: An International Journal | Year: 2013

CSCW as a field has been concerned since its early days with healthcare, studying how healthcare work is collaboratively and practically achieved and designing systems to support that work. Reviewing literature from the CSCW Journal and related conferences where CSCW work is published, we reflect on the contributions that have emerged from this work. The analysis illustrates a rich range of concepts and findings towards understanding the work of healthcare, but work on the larger policy level is lacking. We argue that this presents a number of challenges for CSCW research moving forward: having a greater impact on larger-scale health IT projects; broadening the scope of settings and perspectives that are studied; and reflecting on the relevance of the traditional methods in this field, namely workplace studies, to meet these challenges. © 2012 Springer. Source


Braun A.P.,Vienna University of Technology | Collinucci A.,Arnold Sommerfeld Center for Theoretical Physics | Valandro R.,University of Hamburg
Nuclear Physics B | Year: 2012

We construct explicit G4 fluxes in F-theory compactifications. Our method relies on identifying algebraic cycles in the Weierstrass equation of elliptic Calabi-Yau fourfolds. We show how to compute the D3-brane tadpole and the induced chirality indices directly in F-theory. Whenever a weak coupling limit is available, we compare and successfully match our findings to the corresponding results in type IIB string theory. Finally, we present some generalizations of our results which hint at a unified description of the elliptic Calabi-Yau fourfold together with the four-form flux G4 as a coherent sheaf. In this description the close link between G4 fluxes and algebraic cycles is manifest. © 2011 Elsevier B.V. Source


Van Der Aalst W.M.P.,TU Eindhoven | Dustdar S.,Vienna University of Technology
IEEE Internet Computing | Year: 2012

Process mining techniques help organizations discover and analyze business processes based on raw event data. The recently released "Process Mining Manifesto" presents guiding principles and challenges for process mining. Here, the authors summarize the manifesto's main points and argue that analysts should take into account the context in which events occur when analyzing processes. © 2006 IEEE. Source
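
As a taste of what process mining does with raw event data, the following sketch tallies the directly-follows relation from a toy event log, the starting point of many discovery algorithms; the log is hypothetical:

    from collections import Counter

    # Hypothetical event log: case id -> ordered sequence of activities.
    event_log = {
        "case1": ["register", "check", "decide", "pay"],
        "case2": ["register", "check", "check", "decide", "reject"],
        "case3": ["register", "decide", "pay"],
    }

    # Tally the directly-follows relation, the raw material for discovery
    # algorithms such as the alpha or heuristic miner.
    dfg = Counter()
    for trace in event_log.values():
        for a, b in zip(trace, trace[1:]):
            dfg[(a, b)] += 1

    for (a, b), n in sorted(dfg.items(), key=lambda kv: -kv[1]):
        print(f"{a} -> {b}: {n}")

Context-aware analysis, as argued in the article, would additionally attach attributes (time, resource, case data) to each event rather than looking at activity sequences alone.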


Grasserbauer M.,Vienna University of Technology
Analytical and Bioanalytical Chemistry | Year: 2010

In this paper the major elements of the European Union's policy on environmental protection and sustainable development, and the resulting challenges for the analytical sciences, are presented. The priority issues dealt with are: sustainable management of natural resources (air, water and soil); climate change and clean energy; and global development cooperation. The analytical sciences are required to provide policy-relevant information for the development and implementation of European Union legislation and form a strong pillar for a sustainable evolution of our region and our planet. The paper shows what information needs to be provided, how the necessary quality levels can be achieved, and what new approaches, e.g. combining measurements and modelling, or earth observations with in situ chemical/physical measurements, need to be taken to achieve an integrated assessment of the state of the environment and to develop approaches for sustainable development. © 2009 Springer-Verlag. Source


Nawratil G.,Vienna University of Technology
Computer Aided Geometric Design | Year: 2014

By means of bond theory, we study Stewart Gough (SG) platforms with n-dimensional self-motions for n > 2. It turns out that only architecturally singular manipulators can possess these self-motions. Based on this result, we present a complete list of all SG platforms which have n-dimensional self-motions. This paper therefore also solves the famous Borel-Bricard problem for n-dimensional motions. We also give some remarks and a new result on SG platforms with 2-dimensional self-motions; nevertheless, a full discussion of this case remains open. © 2014 Elsevier Ltd. All rights reserved. Source


Wasicek A.,Vienna University of Technology
IEEE International Conference on Industrial Informatics (INDIN) | Year: 2012

Protection of intellectual property rights is a vital aspect of the future automotive supplier market, in particular the aftersales market for ECUs. Computer security can deliver the required protection mechanisms and sustain the corresponding business models. We propose an approach to facilitate the rigorous checking of components for originality in a vehicle. In our system model, a security controller receives special messages (i.e., the authenticity heartbeat signal) from relevant ECUs and performs subsequent authentication and plausibility checks. As a result, the security controller can tell whether the current setup of components in the vehicle is original. We evaluate our authentication architecture for the Battery Management System (BMS) of a hybrid car. Here, the security controller reliably detects whether the BMS is an original component and whether an attacker has modified the operational limits of the battery. In this paper, we reason that an effective copy protection scheme needs to fuse relevant information from different sources. Therefore, various security techniques have to be combined in a sound architectural approach. The distinctive feature of our architecture is that it takes into account application-specific knowledge of the real-time entities under control. © 2012 IEEE. Source
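The following Python sketch illustrates the general idea of an authenticity heartbeat with subsequent authentication and plausibility checks. The message layout, key handling and the battery limit range are assumptions made for illustration, not the paper's actual protocol.

    # Hedged sketch: MAC-authenticated heartbeat plus a plausibility check.
    import hmac, hashlib, struct

    SHARED_KEY = b"per-ECU secret provisioned at manufacture"  # assumption

    def make_heartbeat(ecu_id: int, counter: int, voltage_limit: float) -> bytes:
        payload = struct.pack(">IIf", ecu_id, counter, voltage_limit)
        tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()[:8]
        return payload + tag

    def check_heartbeat(msg: bytes, expected_counter: int) -> bool:
        payload, tag = msg[:-8], msg[-8:]
        good = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()[:8]
        if not hmac.compare_digest(tag, good):
            return False  # authentication failed: not an original component
        ecu_id, counter, voltage_limit = struct.unpack(">IIf", payload)
        # plausibility check: operational limit must lie in an assumed safe range
        return counter == expected_counter and 3.0 <= voltage_limit <= 4.2

    msg = make_heartbeat(ecu_id=7, counter=42, voltage_limit=4.1)
    print(check_heartbeat(msg, expected_counter=42))  # True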


Haubner R.,Vienna University of Technology
International Journal of Refractory Metals and Hard Materials | Year: 2013

The history of chemical vapour deposition (CVD) started in the 19th century with the production of lamp filaments and with the Mond process for nickel production. In the 20th century Van Arkel deposited metals from the gas phase for applications in the lamp industry. TiC was the first hard coating deposited by CVD in the 1950s. Nearly 20 years later Krupp Widia introduced the first commercial TiC coating on hardmetal tools. Prof. Richard Kieffer started with TiN deposition by the CVD process in the 1970s at the "Technische Hochschule Wien", and Prof. Benno Lux continued with Al2O3 and diamond coatings. In the following years CVD processes for TiN, Ti(C,N), ZrC, (Ti,Zr)C, TiB2, Al2O3, TaxC, CrxCy, diamond, BN and BCN were investigated at the University of Technology Vienna. The deposition of new crystalline solid solutions (mixed crystals), nano-crystalline materials and nano-crystalline mixtures of phases have been research topics since then. © 2013 Elsevier Ltd. All rights reserved. Source


Fackler K.,Vienna University of Technology | Thygesen L.G.,Copenhagen University
Wood Science and Technology | Year: 2013

Microspectroscopy gives access to spatially resolved information on the molecular structure and chemical composition of a material. For a highly heterogeneous and anisotropic material like wood, such information is essential when assessing structure/property relationships such as moisture-induced dimensional changes, decay resistance or mechanical properties. It is, however, important to choose the right technique for the purpose at hand and to apply it in a suitable way if any new insights are to be gained. This review presents and compares three different microspectroscopic techniques: infrared, Raman and ultraviolet. Issues such as sample preparation, spatial resolution, data acquisition and extraction of knowledge from the spectral data are discussed. Additionally, an overview of applications in wood science is given for each method. Lastly, current trends and challenges within microspectroscopy of wood are discussed. © 2012 The Author(s). Source


Serrat C.,Polytechnic University of Catalonia | Roca D.,Polytechnic University of Catalonia | Seres J.,Vienna University of Technology
Optics Express | Year: 2015

We present a theoretical study of coherent extreme ultraviolet (XUV) attosecond pulse amplification mediated by nonlinear parametric enhanced forward scattering, occurring when a strong femtosecond infrared (IR) laser pulse combined with a weak attosecond XUV pulse train interacts with an atom. We predict large amplification of XUV radiation when the strong IR pulse and the weak XUV pulse are optimally phased. We study high-order harmonic generation (HHG) in He, He+ and Ne++, and show that, although the HHG yield is largely affected by the particular atom used as target, nonlinear parametric XUV amplification is only weakly affected. We conclude that XUV nonlinear parametric attosecond pulse amplification can be most efficiently observed by using atoms with a high ionization potential, and that the nonlinear amplification is robust at high photon energies where HHG is not efficient, such as in the water-window spectral region. © 2015 Optical Society of America. Source


Fenz S.,Vienna University of Technology
VINE | Year: 2012

Purpose: Collaborative ontology editing tools enable distributed user groups to build and maintain ontologies. Enterprises that use these tools simply to capture knowledge for a given ontological structure face the following problems: an isolated software solution requiring its own user management; a user interface that often does not provide a look-and-feel familiar to users; additional security issues; difficult integration into existing electronic workflows; and additional deployment and training costs. This paper aims to investigate these issues. Design/methodology/approach: To address these problems, the author designed, developed, and validated a plug-in concept for widely used enterprise content and collaboration portals. The prototype is implemented as a Microsoft SharePoint web part and was validated in the risk and compliance management domain. Findings: The research results enable enterprises to capture knowledge efficiently within given organizational and ontological structures. Considerable cost and time savings were realized in the conducted case study. Originality/value: According to the results of the literature survey, this work represents the first research effort to provide a generic approach to supporting and increasing the efficiency of ontological knowledge capturing processes by means of enterprise portals. © Emerald Group Publishing Limited. Source


Jiresch E.,Vienna University of Technology
Electronic Proceedings in Theoretical Computer Science, EPTCS | Year: 2014

We present ingpu, a GPU-based evaluator for interaction nets that heavily utilizes their potential for parallel evaluation. We discuss advantages and challenges of the ongoing implementation of ingpu and compare its performance to existing interaction net evaluators. © E. Jiresch. This work is licensed under the Creative Commons Attribution License. Source


Wisser F.,Vienna University of Technology
IJCAI International Joint Conference on Artificial Intelligence | Year: 2015

Despite some success of Perfect Information Monte Carlo Sampling (PIMC) in imperfect information games in the past, it has been eclipsed by other approaches in recent years. Standard PIMC has well-known shortcomings in the accuracy of its decisions, but has the advantage of being simple, fast, robust and scalable, making it well-suited for imperfect information games with large state-spaces. We propose Presumed Value PIMC, which resolves the problem of overestimating the opponent's knowledge of hidden information in future game states. The resulting AI agent was tested against human experts in Schnapsen, a Central European 2-player trick-taking card game, and performs above human expert level. Source
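For readers unfamiliar with the baseline, the sketch below shows standard PIMC (the method the paper improves upon, not the proposed Presumed Value variant): sample determinizations of the hidden cards, evaluate each candidate move with a perfect-information solver, and pick the move with the best average value. The game model and solver here are placeholders.

    # Minimal sketch of standard Perfect Information Monte Carlo sampling.
    import random

    def pimc_move(my_hand, candidate_moves, sample_world, solve_perfect_info, n_samples=100):
        scores = {m: 0.0 for m in candidate_moves}
        for _ in range(n_samples):
            world = sample_world(my_hand)  # deal opponents' hidden cards at random
            for move in candidate_moves:
                scores[move] += solve_perfect_info(world, move)  # e.g. minimax value
        return max(scores, key=scores.get)  # move with best average value

    # tiny demo with stub world model and solver
    best = pimc_move(my_hand=["A", "10"], candidate_moves=["play_ace", "play_ten"],
                     sample_world=lambda hand: None,
                     solve_perfect_info=lambda world, move: random.random(),
                     n_samples=50)
    print(best)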


Qureshi N.,National United University | Friedl A.,Vienna University of Technology | Maddox I.S.,Massey University
Applied Microbiology and Biotechnology | Year: 2014

In these studies, butanol (acetone butanol ethanol or ABE) was produced from concentrated lactose/whey permeate containing 211 g L−1 lactose. Fermentation of such a highly concentrated lactose solution was possible due to simultaneous product removal using a pervaporation membrane. In this system, a productivity of 0.43 g L−1 h−1 was obtained, which is 307% of that achieved in a batch reactor without product removal (0.14 g L−1 h−1), where approximately 60 g L−1 whey permeate lactose was fermented. The productivity obtained in this system is much higher than that achieved in other product removal systems (perstraction 0.21 g L−1 h−1 and gas stripping 0.32 g L−1 h−1). The membrane was also used to concentrate butanol from approximately 2.50 g L−1 in the reactor to 755 g L−1. Using this membrane, ABE selectivities of 24.4-44.3 and fluxes of 0.57-4.05 g m−2 h−1 were obtained. Pervaporation restricts removal of water from the reaction mixture and thus requires significantly less energy for product recovery compared to gas stripping. © 2014, Springer-Verlag Berlin Heidelberg (outside the USA). Source


Abou-Hussein A.A.,Ain Shams University | Linert W.,Vienna University of Technology
Spectrochimica Acta - Part A: Molecular and Biomolecular Spectroscopy | Year: 2015

Two series of new mono- and binuclear complexes with a Schiff base ligand derived from the condensation of 3-acetylcoumarin and diethylenetriamine in the molar ratio 2:1 have been prepared. The ligand was characterized by elemental analysis, IR, UV-visible, 1H-NMR and mass spectra. The reaction of the Schiff base ligand with cobalt(II), nickel(II), copper(II), zinc(II) and oxovanadium(IV) leads to mono- or binuclear species of cyclic or macrocyclic complexes, depending on the mole ratio of metal to ligand as well as on the method of preparation. The Schiff base ligand behaves as a cyclic bidentate, tetradentate or pentadentate ligand. The formation of macrocyclic complexes depends significantly on the dimension of the internal cavity, the rigidity of the macrocycles, the nature of the donor atoms and the complexing properties of the anion involved in the coordination. Electronic spectra and magnetic moments of the complexes indicate that the geometries of the metal centers are either square pyramidal or octahedral for acyclic or macrocyclic complexes. The structures are consistent with the IR, UV-visible, ESR, 1H-NMR and mass spectra as well as with conductivity and magnetic moment measurements. The Schiff base ligand and its metal complexes were tested against two pathogenic bacteria, one Gram-positive and one Gram-negative, as well as one kind of fungus. Most of the complexes exhibit mild antibacterial and antifungal activities against these organisms. © 2015 Elsevier B.V. All rights reserved. Source


Stachel H.,Vienna University of Technology
Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences | Year: 2014

In kinematics, a framework is called overconstrained if its continuous flexibility is caused by particular dimensions; in the generic case, a framework of this type is rigid. Famous examples of overconstrained structures are the Bricard octahedra, the Bennett isogram, the Grünbaum framework, Bottema's 16-bar mechanism, Chasles' body-bar framework, Burmester's focal mechanism and flexible quad meshes. The aim of this paper is to present some examples in detail and to focus on their symmetry properties. It turns out that only for a few is a global symmetry a necessary condition for flexibility. Sometimes there is a hidden symmetry, and in some cases, for example at the flexible type-3 octahedra or at discrete Voss surfaces, there is only a local symmetry. However, there remain overconstrained frameworks where the underlying algebraic conditions for flexibility have no relation to symmetry at all. © 2013 The Author(s) Published by the Royal Society. Source


Weil M.,Vienna University of Technology
Crystal Growth and Design | Year: 2016

During systematic phase formation studies in the system BaO/As2O5/H2O, the phases with composition Ba3(AsO4)2·16.3H2O, Ba3(AsO4)2·17H2O, Ba(H2AsO4)2·H2O, Ba(H2AsO4)2, Ba3(HAs2O7)2, Ba3As4O13, Ba2As2O7, and Ba2As4O12 were isolated and structurally characterized for the first time, using either X-ray powder diffraction data (Ba3As4O13) or single crystal X-ray diffraction data (all other phases). Of the eight phases investigated, three (Ba(H2AsO4)2, Ba3As4O13, and Ba2As2O7) crystallize in known structure types and the remaining ones in novel structure types. In the crystal structures, the coordination numbers of the Ba2+ cations span a range from 8 to 11, and the different arsenate anions are built up from tetrahedral AsO4 groups, except for Ba2As4O12, which contains a novel type of catena-metaarsenate anion consisting of condensed AsO4 tetrahedra and AsO6 octahedra. Another remarkable structural feature is the hydrogendiarsenate anion present in Ba3(HAs2O7)2, with the longest bridging As-O distance of a diarsenate group observed so far. Temperature-dependent X-ray diffraction measurements on selected phases show anhydrous Ba3(AsO4)2 to be the only phase stable above 1000 °C. © 2015 American Chemical Society. Source


Pedersen U.R.,Vienna University of Technology
Journal of Chemical Physics | Year: 2013

Computing phase diagrams of model systems is an essential part of computational condensed matter physics. In this paper, we discuss in detail the interface pinning (IP) method for calculation of the Gibbs free energy difference between a solid and a liquid. This is done in a single equilibrium simulation by applying a harmonic field that biases the system towards two-phase configurations. The Gibbs free energy difference between the phases is determined from the average force that the applied field exerts on the system. As a test system, we study the Lennard-Jones model. It is shown that the coexistence line can be computed efficiently to a high precision when the IP method is combined with the Newton-Raphson method for finding roots. Statistical and systematic errors are investigated. Advantages and drawbacks of the IP method are discussed. The high pressure part of the temperature-density coexistence region is outlined by isomorphs. © 2013 Author(s). Source
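Schematically, and in notation of our own choosing, the IP method adds a harmonic bias in an order parameter Q that distinguishes the two phases; the free-energy difference is then read off the mean pinning force:

    % Biased potential energy (kappa: spring constant, a: anchor point):
    \[ U'(\mathbf{R}) \;=\; U(\mathbf{R}) \;+\; \tfrac{\kappa}{2}\,\big(Q(\mathbf{R}) - a\big)^2 \]
    % In equilibrium the average force exerted by the bias field,
    \[ \langle F \rangle \;=\; -\,\kappa\,\big(\langle Q \rangle - a\big), \]
    % balances the thermodynamic driving force between the phases, so the
    % solid-liquid Gibbs free-energy difference follows (up to the change of Q
    % per transformed particle) from the measured shift of <Q> away from a.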


The catalytic properties in methanol steam reforming (MSR) of PdZn/ZnO and Pd2Ga/Ga2O3 were investigated and related to the actual surface composition. MSR selectivities of around 90% were observed on both catalysts. In situ FTIR spectroscopy was utilized to clarify the origin of the CO formed as a by-product. Exposure to CO at room temperature caused a degradation of both intermetallic surfaces, which was more severe on Pd2Ga. Likely, the strong Pd-CO interaction led to enrichment of Pd at the surface, as indicated by CO spectra resembling those of CO on metallic Pd. A limited stability of the PdZn and Pd2Ga surfaces was also observed in methanol/water, most likely due to the CO formed as a by-product in MSR. Therefore, the surface under reaction conditions consists of domains of metallic Pd (or a Pd-rich alloy) in addition to the intermetallic surface. This effect was more pronounced at lower temperatures. A faster in situ regeneration of the intermetallic surface by the hydrogen produced in MSR at elevated temperatures (>500 K) is proposed. Degradation by CO and re-formation of the alloy by H2 likely leads to a certain steady state of the surface. X-ray absorption spectroscopy measurements indicate that the effect of CO apparently goes beyond the surface only. © 2013 Elsevier B.V. Source


Werner W.S.M.,Vienna University of Technology
Journal of Electron Spectroscopy and Related Phenomena | Year: 2010

A survey is presented on modeling the effects of electron transport on the energy and angular spectra of electrons emitted or reflected from non-crystalline solid surfaces and nanostructures. This is intended to aid in the quantitative interpretation of such spectra and should also provide a useful guideline for experiment design. A brief review of the most significant characteristics of the electron-solid interaction is given and the theory describing the energy dissipation and momentum relaxation of electrons in solids is outlined, which is based on the so-called Landau-Goudsmit-Saunderson (LGS) loss function. It is shown that the basis for true quantitative spectrum interpretation is provided by the collision statistics, i.e. the numbers of electrons arriving at the detector after participating in a given number of inelastic collisions, which are equal to the partial intensities. By introducing an appropriate stochastic process for multiple scattering, the validity of the partial intensity approach (PIA) can be extended to the true slowing down regime, making it possible, in a very simple way, to fully account for energy fluctuations in the limit of large energy losses. The LGS loss function thus provides a unified theoretical basis for electron spectroscopy and microscopy. The usefulness of the concept of the collision statistics, or partial intensities, for quantitative spectrum interpretation is illustrated by considering various examples of practical significance, including elastic peak electron spectroscopy (EPES), reflection electron energy loss spectroscopy (REELS), (hard) X-ray photoelectron emission ((HA)XPS), electron coincidence spectroscopy, the Auger electron backscattering factor and the ionization depth distribution. Finally, the relationship between the partial intensities and the emission depth is discussed, which allows one to combine the unique features of electron spectroscopy for investigation of chemical, electronic and magnetic properties of surfaces with a depth selectivity within the first few atomic layers of a solid. © 2009 Elsevier B.V. All rights reserved. Source
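The partial intensity approach mentioned here can be summarized in one formula (a schematic statement in our notation): the measured spectrum is a weighted sum of n-fold self-convolutions of the normalized single-scattering loss distribution L(E):

    \[ J(E) \;=\; \sum_{n=0}^{\infty} c_n\, L^{(n)}(E),
       \qquad L^{(0)}(E) = \delta(E), \quad L^{(n)} = L \ast L^{(n-1)}, \]
    % where the partial intensities c_n are the numbers of detected electrons
    % that have undergone exactly n inelastic collisions.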


Koppens F.H.L.,ICFO - Institute of Photonic Sciences | Mueller T.,Vienna University of Technology | Avouris P.,IBM | Ferrari A.C.,Cambridge Graphene Center | And 3 more authors.
Nature Nanotechnology | Year: 2014

Graphene and other two-dimensional materials, such as transition metal dichalcogenides, have rapidly established themselves as intriguing building blocks for optoelectronic applications, with a strong focus on various photodetection platforms. The versatility of these material systems enables their application in areas including ultrafast and ultrasensitive detection of light in the ultraviolet, visible, infrared and terahertz frequency ranges. These detectors can be integrated with other photonic components based on the same material, as well as with silicon photonic and electronic technologies. Here, we provide an overview and evaluation of state-of-the-art photodetectors based on graphene, other two-dimensional materials, and hybrid systems based on the combination of different two-dimensional crystals or of two-dimensional crystals and other (nano)materials, such as plasmonic nanoparticles, semiconductors, quantum dots, or their integration with (silicon) waveguides. © 2014 Macmillan Publishers Limited. Source


Nilsson T.,Chalmers University of Technology | Nilsson T.,Vienna University of Technology | Haas R.,Chalmers University of Technology
Journal of Geophysical Research: Solid Earth | Year: 2010

We assess the impact of atmospheric turbulence on geodetic very long baseline interferometry (VLBI) through simulations of atmospheric delays. VLBI observations are simulated for the two best existing VLBI data sets: the continuous VLBI campaigns CONT05 and CONT08. We test different methods to determine the magnitude of the turbulence above each VLBI station, i.e., the refractive index structure constant C_n^2. The results from the analysis of the simulated data and the actually observed VLBI data are compared. We find that atmospheric turbulence today is the largest error source for geodetic VLBI. Accurate modeling of atmospheric turbulence is necessary to reach the highest accuracy with geodetic VLBI. Copyright 2010 by the American Geophysical Union. Source
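As background for the quantity C_n^2: in Kolmogorov turbulence theory it is defined through the structure function of the refractive index n (the standard definition, not a result of this paper):

    \[ D_n(r) \;=\; \big\langle\, \big[\, n(\mathbf{x}+\mathbf{r}) - n(\mathbf{x}) \,\big]^2 \,\big\rangle
       \;=\; C_n^2\, r^{2/3}, \]
    % valid for separations r within the inertial range; a larger C_n^2 means
    % stronger turbulence and hence noisier tropospheric delays.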


Schallenberg-Rodriguez J.,University of Las Palmas de Gran Canaria | Haas R.,Vienna University of Technology
Renewable and Sustainable Energy Reviews | Year: 2012

Since 1998 the Spanish Government has operated a feed-in system in which RES-E generators can choose between two alternatives: a fixed feed-in tariff and a premium. Nowadays, all RES-E can be sold in the electricity market (receiving an additional premium) except for solar photovoltaics. One important novelty, established in 2007, is a cap and floor system for facilities under the premium option. The aim of this paper is to analyze and compare these two alternative options, fixed FIT and premium, which coexist at the same time in Spain, to describe the evolution of both systems and to evaluate their performance. The introduction of this support system in Spain led to very good results in terms of RES-E deployment. The main advantage of the premium option is that it is a scheme integrated in the electricity market. One disadvantage is that it can occasionally lead to overcompensation; one way to try to avoid this is to set a cap value. In order to evaluate the performance of this dual support system, not only has RES-E deployment been assessed, but also policy stability, the adequacy of RES-E production to the electricity demand pattern and changes in investors' behaviour. © 2011 Elsevier Ltd. All rights reserved. Source


Munekane H.,Geographical Survey Institute | Boehm J.,Vienna University of Technology
Journal of Geodesy | Year: 2010

Troposphere-induced errors in GPS-derived geodetic time series, namely, height and zenith total delays (ZTDs), over Japan are quantitatively evaluated through the analyses of simulated GPS data using realistic cumulative tropospheric delays and observed GPS data. The numerical simulations show that the use of a priori zenith hydrostatic delays (ZHDs) derived from the European Centre for Medium-Range Weather Forecasts (ECMWF) numerical weather model data and gridded Vienna mapping function 1 (gridded VMF1) results in smaller spurious annual height errors and height repeatabilities (0.45 and 2.55 mm on average, respectively) as compared to those derived from the global pressure and temperature (GPT) model and global mapping function (GMF) (1.08 and 3.22 mm on average, respectively). On the other hand, the use of a priori ZHDs derived from the GPT and GMF would be sufficient for applications involving ZTDs, given the current discrepancies between GPS-derived ZTDs and those derived from numerical weather models. The numerical simulations reveal that the use of mapping functions constructed with fine-scale numerical weather models will potentially improve height repeatabilities as compared to the gridded VMF1 (2.09 mm against 2.55 mm on average). However, they do not presently outperform the gridded VMF1 with the observed GPS data (6.52 mm against 6.50 mm on average). Finally, the commonly observed colored components in GPS-derived height time series are not primarily the result of troposphere-induced errors, since they become white in numerical simulations with the proper choice of a priori ZHDs and mapping functions. © 2010 Springer-Verlag. Source


Schneider W.,Vienna University of Technology
European Physical Journal: Special Topics | Year: 2015

Negative surface heat capacities are observed for many liquids, at least in certain temperature regimes. Since thermodynamic stability of a system requires positive heat capacities, it is usually argued that the surface must not be considered as an autonomous system. This, however, is not possible when the energy balance of the surface plays the role of a boundary condition for the field equations, e.g. the heat diffusion equation. A heat pulse supplied to the surface of a liquid and the stretching of a liquid film provide two examples to demonstrate that negative surface heat capacities may lead to unbounded and unconfined growth of the temperature disturbances in the liquid. To deal with the instabilities associated with negative surface heat capacities it is proposed to introduce a surface layer of small, but finite, thickness that is defined solely in terms of macroscopic thermodynamic quantities. By considering the energy balance of the surface layer, which is an open system, it is shown that the isobaric heat capacity of the liquid contained in the surface layer is to be added to the (possibly negative) surface heat capacity to obtain a positive total heat capacity of the surface layer. © 2015, EDP Sciences and Springer. Source
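The stabilizing bookkeeping in the last sentence can be written schematically (our notation, assuming a surface layer containing liquid of mass m_l with isobaric specific heat c_p):

    \[ C_{\text{layer}} \;=\; C_{\sigma} \;+\; m_{\ell}\, c_p \;>\; 0, \]
    % C_sigma: the (possibly negative) surface heat capacity; choosing the layer
    % thickness small but finite makes the sum positive and restores stability.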


Hofko B.,Vienna University of Technology
Construction and Building Materials | Year: 2015

As roads are subjected to high traffic loads due to the strong growth in heavy vehicle traffic and new trends in the automotive and tire industries, the traditional asphalt mix tests are often inadequate for a reliable prediction of the in-service performance of flexible road pavements. With performance-based test methods (PBT) at hand, the thermo-rheological properties of hot mix asphalt can be obtained. This paper presents results of a research project in which 4-point bending beam (4-PBB) tests are carried out on different AC mixes for base layers at various temperatures and frequencies to obtain stiffness and fatigue behavior. At the same time, linear elastic finite element simulations are performed with input data for the materials differing from the 4-PBB tests. These simulations are carried out on two different pavement structures, different tire types (twin-tires and wide base super-single tires) and wheel configurations (tire load and pressure). Loading data for the tires are obtained from stress-in-motion measurements using the Vehicle-Road Surface Pressure Transducer Array (VRSPTA). The strains at the bottom of the bituminous-bound layers are taken from the simulations and used in combination with the fatigue functions to evaluate the lifetime in permissible load cycles for different tire configurations. The main findings are that super-single tires lead to significantly lower pavement lifetimes than the standard twin-tire configuration and that the relative difference increases with decreasing thickness of the pavement structure. Also, the tire pressure has a strong impact on the pavement lifetime; an increase in tire pressure by 60% decreases the lifetime by 25-52% (super-single) and 15-38% (twin-tire), respectively. © 2015 Elsevier Ltd. All rights reserved. Source


Szeider S.,Vienna University of Technology
Proceedings of the National Conference on Artificial Intelligence | Year: 2011

We present a first theoretical analysis of the power of polynomial-time preprocessing for important combinatorial problems from various areas in AI. We consider problems from Constraint Satisfaction, Global Constraints, Satisfiability, Nonmonotonic and Bayesian Reasoning. We show that, subject to a complexity theoretic assumption, none of the considered problems can be reduced by polynomial-time preprocessing to a problem kernel whose size is polynomial in a structural problem parameter of the input, such as induced width or backdoor size. Our results provide a firm theoretical boundary for the performance of polynomial-time preprocessing algorithms for the considered problems. Copyright © 2011, Association for the Advancement of Artificial Intelligence. All rights reserved. Source


Li S.-C.,Tulane University | Losovyj Y.,Louisiana State University | Diebold U.,Tulane University | Diebold U.,Vienna University of Technology
Langmuir | Year: 2011

The adsorption of catechol (1,2-benzenediol) on the anatase TiO2(101) surface was studied with synchrotron-based ultraviolet photoemission spectroscopy (UPS), X-ray photoemission spectroscopy (XPS), and scanning tunneling microscopy (STM). Catechol adsorbs with a unity sticking coefficient and with the phenyl ring intact. STM reveals preferred nucleation at step edges and subsurface point defects, followed by 1D growth and the formation of a 2 × 1 superstructure at full coverage. A gap state ∼1 eV above the valence band maximum is observed for dosages in excess of ∼0.4 Langmuir, but such a state is absent at lower coverages. The formation of the band gap states thus correlates with adsorption at regular lattice sites and the onset of self-assembled superstructures. © 2011 American Chemical Society. Source


Brownnutt M.,University of Innsbruck | Brownnutt M.,University of Hong Kong | Kumph M.,University of Innsbruck | Rabl P.,Vienna University of Technology | And 2 more authors.
Reviews of Modern Physics | Year: 2015

Electric-field noise near surfaces is a common problem in diverse areas of physics and a limiting factor for many precision measurements. There are multiple mechanisms by which such noise is generated, many of which are poorly understood. Laser-cooled, trapped ions provide one of the most sensitive systems to probe electric-field noise at MHz frequencies and over a distance range 30-3000 μm from a surface. Over recent years numerous experiments have reported spectral densities of electric-field noise inferred from ion heating-rate measurements and several different theoretical explanations for the observed noise characteristics have been proposed. This paper provides an extensive summary and critical review of electric-field noise measurements in ion traps and compares these experimental findings with known and conjectured mechanisms for the origin of this noise. This reveals that the presence of multiple noise sources, as well as the different scalings added by geometrical considerations, complicates the interpretation of these results. It is thus the purpose of this review to assess which conclusions can be reasonably drawn from the existing data, and which important questions are still open. In so doing it provides a framework for future investigations of surface-noise processes. © 2015 American Physical Society. Source
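For reference, ion heating-rate measurements are commonly converted to electric-field noise via a standard relation of the following form (we quote the generic textbook expression; prefactor conventions vary between works):

    \[ \dot{\bar{n}} \;=\; \frac{e^2}{4\, m\, \hbar\, \omega}\; S_E(\omega), \]
    % quanta gained per second by a single ion of mass m and charge e in a trap
    % of angular frequency omega, given the electric-field noise spectral
    % density S_E evaluated at the trap frequency.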


Fackler K.,Vienna University of Technology | Schwanninger M.,University of Natural Resources and Life Sciences, Vienna
Applied Microbiology and Biotechnology | Year: 2012

Nuclear magnetic resonance, mid and near infrared, and ultraviolet (UV) spectra of wood contain information on its chemistry and composition. When solid wood samples are analysed, information on the molecular structure of the lignocellulose complex of wood, e.g. the crystallinity of polysaccharides and the orientation of the polymers in wood cell walls, can also be gained. UV and infrared spectroscopy also allow for spatially resolved measurements, and state-of-the-art mapping and imaging systems have been able to provide local information on wood chemistry and structure at the level of wood cells (with IR) or cell wall layers (with UV). During the last decades, these methods have also proven useful to follow alterations of the composition, chemistry and physics of the substrate wood after fungi had grown on it, as well as changes of the interactions between the wood polymers within the lignocellulose complex caused by decay fungi. This review provides an overview of how molecular spectroscopic methods can contribute to understanding these degradation processes and how they have been used to characterise and localise fungal wood decay in its various stages, starting from the incipient and early ones, even if the major share of research has focussed on advanced decay. Practical issues such as requirements in terms of sample preparation and sample form, as well as examples of optimised data analysis, are also addressed, with a view to detecting and characterising the generally highly variable microbial degradation processes within their highly variable substrate wood. © The Author(s) 2012. Source


Hirschi M.,ETH Zurich | Mueller B.,ETH Zurich | Mueller B.,Environment Canada | Dorigo W.,Vienna University of Technology | Seneviratne S.I.,ETH Zurich
Remote Sensing of Environment | Year: 2014

Hot extremes have been shown to be induced by antecedent surface moisture deficits in several regions. While most previous studies on this topic relied on modeling results or precipitation-based surface moisture information (particularly the standardized precipitation index, SPI), we use here a new merged remote sensing soil moisture product that combines active and passive microwave sensors to investigate the relation between the number of hot days (NHD) and preceding soil moisture deficits. Along with analyses of the temporal variabilities of surface vs. root-zone soil moisture, this sheds light on the role of different soil depths for soil moisture-temperature coupling. The global patterns of soil moisture-NHD correlations from remote sensing data and from SPI as used in previous studies are comparable. Nonetheless, the strength of the relationship appears underestimated with remote sensing-based soil moisture compared to SPI-based estimates, particularly in regions of strong soil moisture-temperature coupling. This is mainly due to the fact that the temporal hydrological variability is less pronounced in the remote sensing data than in the SPI estimates in these regions, and large dry/wet anomalies appear underestimated. Comparing temporal variabilities of surface and root-zone soil moisture in in-situ observations reveals a drop of surface-layer variability below that of the root zone when dry conditions are considered. This feature is a plausible explanation for the observed weaker relationship of remote sensing-based soil moisture (representing the surface layer) with NHD, as it leads to a gradual decoupling of the surface layer from temperature under dry conditions, while root-zone soil moisture sustains more of its temporal variability. © 2014. Source


Ajanovic A.,Vienna University of Technology
Energy | Year: 2011

Rapidly growing fossil energy consumption in the transport sector over the last two centuries has caused problems such as increasing greenhouse gas emissions, growing energy dependency and supply insecurity. One approach to solving these problems could be to increase the use of biofuels. Preferred feedstocks for current 1st generation biofuel production are corn, wheat, sugarcane, soybean, rapeseed and sunflowers. The major problem is that these feedstocks are also used for food and feed production. The core objective of this paper is to investigate whether the recent increase in biofuel production has had a significant impact on the development of agricultural commodity (feedstock) prices. The most important impact factors, such as biofuel production, land use, yields, feedstock and crude oil prices, are analysed. The major conclusions of this analysis are: in recent years the share of bioenergy-based fuels has increased moderately but continuously, and so have feedstock production and yields. So far, no significant impact of biofuel production on feedstock prices can be observed. Hence, a co-existence of biofuel and food production seems possible, especially for 2nd generation biofuels. However, sustainability criteria should be seriously considered. But even if all crops, forests and grasslands currently not used were used for biofuel production, it would be impossible to substitute all fossil fuels used today in transport. © 2010 Elsevier Ltd. Source


Fiorani M.,University of Modena and Reggio Emilia | Casoni M.,University of Modena and Reggio Emilia | Aleksic S.,Vienna University of Technology
Journal of Optical Communications and Networking | Year: 2011

Hybrid optical switching (HOS) is a switching paradigm that aims to combine optical circuit switching, optical burst switching, and optical packet switching in the same network. This paper proposes a novel integrated control plane for an HOS core node. The control plane makes use of a unified control packet able to carry the control information for all the different data formats and employs an appropriate scheduling algorithm for each incoming data type. Three possible node architectures are presented and an analytical model is introduced to analyze their power consumption. Also, the concept of increase in power efficiency is introduced to compare the considered architectures. The performance and power consumption analysis of the node has been carried out using a simulation model developed specifically for this purpose. The obtained results show the effectiveness of HOS networks. © 2011 Optical Society of America. Source


Tisch D.,Vienna University of Technology | Schmoll M.,AIT Austrian Institute of Technology
BMC Genomics | Year: 2013

Background: The tropical ascomycete Trichoderma reesei (Hypocrea jecorina) represents one of the most efficient plant cell wall degraders. Regulation of the enzymes required for this process is affected by nutritional signals as well as other environmental signals including light. Results: Our transcriptome analysis of strains lacking the photoreceptors BLR1 and BLR2 as well as ENV1 revealed a considerable increase in the number of genes showing significantly different transcript levels in light and darkness compared to wild-type. We show that members of all glycoside hydrolase families can be subject to light dependent regulation, hence confirming nutrient utilization including plant cell wall degradation as a major output pathway of light signalling. In contrast to N. crassa, photoreceptor mediated regulation of carbon metabolism in T. reesei occurs primarily by BLR1 and BLR2 via their positive effect on induction of env1 transcription, rather than by a presumed negative effect of ENV1 on the function of the BLR complex. Nevertheless, genes consistently regulated by photoreceptors in N. crassa and T. reesei are significantly enriched in carbon metabolic functions. Hence, different regulatory mechanisms are operative in these two fungi, while the light dependent regulation of plant cell wall degradation appears to be conserved. Analysis of growth on different carbon sources revealed that the oxidoreductive D-galactose and pentose catabolism is influenced by light and ENV1. Transcriptional regulation of the target enzymes in these pathways is enhanced by light and influenced by ENV1, BLR1 and/or BLR2. Additionally we detected an ENV1-regulated genomic cluster of 9 genes including the D-mannitol dehydrogenase gene lxr1, with two genes of this cluster showing consistent regulation in N. crassa. Conclusions: We show that one major output pathway of light signalling in Trichoderma reesei is regulation of glycoside hydrolase genes and the degradation of hemicellulose building blocks. Targets of ENV1 and BLR1/BLR2 are for the most part distinct and indicate individual functions for ENV1 and the BLR complex besides their postulated regulatory interrelationship. © 2013 Tisch and Schmoll; licensee BioMed Central Ltd. Source


Drmota M.,Vienna University of Technology | Szpankowski W.,Purdue University
Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms | Year: 2011

Divide-and-conquer recurrences are among the most studied equations in computer science. Yet, discrete versions of these recurrences, namely $T(n) = a_n + \sum_{j=1}^{m} b_j T(\lfloor p_j n + \delta_j \rfloor)$ for some known sequence $a_n$ and given $b_j$, $p_j$ and $\delta_j$, present some challenges. The discrete nature of this recurrence (represented by the floor function) introduces certain oscillations not captured by the traditional Master Theorem, for example due to Akra and Bazzi, who primarily studied the continuous version of the recurrence. We apply powerful techniques such as Dirichlet series, the Mellin-Perron formula, and (extended) Tauberian theorems of Wiener-Ikehara to provide a complete and precise solution to this basic computer science recurrence. We illustrate the applicability of our results on several examples, including a popular and fast arithmetic coding algorithm due to Boncelet, for which we estimate its average redundancy. To the best of our knowledge, discrete divide-and-conquer recurrences have not been studied in this generality and in such detail; in particular, this allows us to compare the redundancy of Boncelet's algorithm to the (asymptotically) optimal Tunstall scheme. Source
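The recurrence itself is easy to evaluate numerically, which is a quick way to see the floor-induced oscillations the paper analyzes. The Python sketch below uses illustrative parameter values of our own choosing:

    # Direct memoized evaluation of the discrete divide-and-conquer recurrence
    # T(n) = a_n + sum_j b_j * T(floor(p_j * n + delta_j)), with a_n = n here.
    import math
    from functools import lru_cache

    B = [1, 1]          # b_j
    P = [0.5, 0.5]      # p_j
    D = [0.0, 1.0]      # delta_j  ->  T(n) = n + T(floor(n/2)) + T(floor(n/2 + 1))

    @lru_cache(maxsize=None)
    def T(n: int) -> float:
        if n <= 2:      # small cases anchor the recursion
            return 0.0
        return n + sum(b * T(math.floor(p * n + d)) for b, p, d in zip(B, P, D))

    # The floor function introduces small periodic fluctuations around the
    # smooth Akra-Bazzi growth order; inspect e.g. T(n)/(n log2 n) over n.
    for n in (2**10, 3**7, 10**5):
        print(n, T(n) / (n * math.log2(n)))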


Samwald M.,Medical University of Vienna | Samwald M.,Vienna University of Technology | Adlassnig K.-P.,Medical University of Vienna | Adlassnig K.-P.,Medexter Healthcare GmbH
Journal of the American Medical Informatics Association | Year: 2013

A sizable fraction of patients experiences adverse drug events or lack of drug efficacy. Part of this variability in drug response can be explained by genetic differences between patients. However, pharmacogenomic data as well as computational clinical decision support systems for interpreting such data are still unavailable in most healthcare settings. We address this problem by introducing the medicine safety code (MSC), which captures compressed pharmacogenomic data in a two-dimensional barcode that can be carried in a patient's wallet. We successfully encoded data about 385 genetic polymorphisms in the MSC and were able to decode and interpret the MSC quickly with common mobile devices. The MSC could make individual pharmacogenomic data and decision support available in a wide variety of healthcare settings without the setup of large-scale infrastructures or centralized databases. Source
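Purely as an illustration of the general idea (compact genotype payload rendered as a 2D barcode), the sketch below packs a few star-allele calls into a string and renders it as a QR code. The third-party qrcode package is assumed to be installed, and the payload format is invented; it is not the MSC specification.

    # Illustrative only: compact genotype string -> 2D barcode image.
    import qrcode

    calls = {"CYP2C9": "*1/*3", "CYP2D6": "*4/*4", "VKORC1": "-1639G>A het"}  # hypothetical
    payload = "PGX1|" + "|".join(f"{gene}:{allele}" for gene, allele in sorted(calls.items()))

    img = qrcode.make(payload)   # returns a PIL image
    img.save("medicine_safety_code.png")
    print(payload)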


Asai S.,Japan Atomic Energy Agency | Limbeck A.,Vienna University of Technology
Talanta | Year: 2015

Rare earth elements (REE) concentrated on cation-exchange resin particles were measured with laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) to obtain chondrite-normalized REE plots. The sensitivity for the REE increased in ascending order of atomic number, in accordance with the sensitivity trend in pneumatic nebulization ICP-MS (PN-ICP-MS). The signal intensities of the REE were nearly proportional to their concentrations in the immersion solution used for particle preparation. The minimum measurable concentration, calculated from the net REE signals, was approximately 1 ng/g, corresponding to 0.1 ng in the particle-preparation solution. In the LA analysis, the formation of oxides and hydroxides of the light REE and Ba, which causes spectral interferences in the measurement of the heavy REE, was effectively attenuated thanks to the solvent-free measurement capability, compared to conventional PN-ICP-MS. To evaluate the applicability of the proposed method, REE-adsorbed particles prepared by immersion in a U-bearing solution (a commercially available U standard solution) were measured with LA-ICP-MS. Aside from the LA analysis, the concentration of each REE in the same U standard solution was determined with conventional PN-ICP-MS after separating the REE by cation-exchange chromatography. The concentrations of the REE ranged from 0.04 (Pr) to 1.08 (Dy) μg/g-U. The chondrite-normalized plot obtained through LA-ICP-MS analysis of the U standard sample exhibited close agreement, within the uncertainties, with that obtained through PN-ICP-MS of the REE-separated solution. © 2014 Elsevier B.V. All rights reserved. Source


Viglione A.,Vienna University of Technology
Hydrology and Earth System Sciences | Year: 2010

The coefficient of L-variation (L-CV) is commonly used in statistical hydrology, in particular in regional frequency analysis, as a measure of steepness for the frequency curve of the hydrological variable of interest. As opposed to the point estimation of the L-CV, in this work we are interested in the estimation of the interval of values (confidence interval) in which the L-CV is included at a given level of probability (confidence level). Several candidate distributions are compared in terms of their suitability to provide valid estimators of confidence intervals for the population L-CV. Monte-Carlo simulations of synthetic samples from distributions frequently used in hydrology are used as a basis for the comparison. The best estimator proves to be provided by the log-Student t distribution, whose parameters are estimated without any assumption on the underlying parent distribution of the hydrological variable of interest. This estimator is shown to also outperform the non-parametric bias-corrected and accelerated bootstrap method. An illustrative example of how this result can be used in hydrology is presented, namely in the comparison of methods for regional flood frequency analysis. In particular, it is shown that the confidence intervals for the L-CV can be used to assess the amount of spatial heterogeneity of flood data not explained by regionalization models. © 2010 Author(s). Source
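For concreteness, the sketch below computes the sample L-CV from unbiased sample L-moments and attaches a plain percentile-bootstrap interval as a simple baseline. Note that the paper recommends a log-Student t based interval instead; that construction is not reproduced here.

    # Sample L-CV (t2 = l2 / l1) with a percentile-bootstrap confidence interval.
    import random

    def lcv(sample):
        x = sorted(sample)
        n = len(x)
        b0 = sum(x) / n                                   # first probability-weighted moment
        b1 = sum(i * x[i] for i in range(n)) / (n * (n - 1))
        l1, l2 = b0, 2 * b1 - b0                          # sample L-moments
        return l2 / l1

    def bootstrap_ci(sample, level=0.90, reps=2000, seed=1):
        rng = random.Random(seed)
        stats = sorted(lcv(rng.choices(sample, k=len(sample))) for _ in range(reps))
        lo_i = int((1 - level) / 2 * reps)
        hi_i = int((1 + level) / 2 * reps) - 1
        return stats[lo_i], stats[hi_i]

    rng = random.Random(7)
    data = [rng.expovariate(1.0) for _ in range(50)]      # synthetic skewed sample
    print(lcv(data), bootstrap_ci(data))                  # exponential: population L-CV = 0.5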


Ghobadi N.,University of Tehran | Pourfath M.,University of Tehran | Pourfath M.,Vienna University of Technology
IEEE Transactions on Electron Devices | Year: 2014

In this paper, the device characteristics of field-effect tunneling transistors based on a vertical graphene-hBN heterostructure (VTGFET) and a vertical graphene nanoribbon (GNR)-hBN heterostructure (VTGNRFET) are theoretically investigated and compared for the first time. An atomistic simulation based on the nonequilibrium Green's function (NEGF) formalism is employed. The results indicate that due to the presence of an energy gap in GNRs, the I_ON/I_OFF ratio of the VTGNRFET can be much larger than that of the VTGFET, which renders VTGNRFETs promising candidates for future electronic applications. Furthermore, it can be inferred from the results that due to the smaller density of states, and as a result the smaller quantum capacitance, of GNRs in comparison with graphene, better switching and frequency response can be achieved for VTGNRFETs. © 2013 IEEE. Source


Hartmann M.,Schneider Electric | Ertl H.,Vienna University of Technology | Kolar J.W.,ETH Zurich
IEEE Transactions on Power Electronics | Year: 2012

Due to their basic physical properties, power MOSFETs exhibit an output capacitance C_oss that is dependent on the drain-source voltage. In rectifier applications, this (nonlinear) parasitic capacitance has to be charged by the drain-source current at turn-off of the MOSFET, which yields input current distortions. A detailed analysis shows that the nonlinear behavior of this capacitance is even more pronounced for modern super junction MOSFET devices. Whereas C_oss increases with increasing chip area, the on-state resistance of the MOSFET decreases accordingly. Hence, a tradeoff between efficiency and input current distortions exists. A detailed analysis of this effect considering different semiconductor technologies is given in this study and a Pareto curve in the η-THD_I space is drawn that clearly highlights this relationship. It is further shown that the distortions can be reduced considerably by the application of a proper feedforward control signal counteracting the nonlinear switching delay due to C_oss. The theoretical considerations are verified by experimental results taken from 10-kW laboratory prototypes with switching frequencies of 250 kHz and 1 MHz. © 1986-2012 IEEE. Source
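The charging effect described above can be made explicit (schematic notation of ours):

    \[ Q_{oss}(V) \;=\; \int_0^{V} C_{oss}(v)\, dv,
       \qquad i_C \;=\; C_{oss}(v_{DS})\,\frac{dv_{DS}}{dt}, \]
    % at turn-off the drain-source current must supply the displacement current
    % i_C; since C_oss(v) of super junction devices falls steeply with voltage,
    % most of this charge is concentrated at low v_DS, which delays the voltage
    % transition and distorts the input current near the zero crossings.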


Matyas K.,Vienna University of Technology | Auer S.,Fraunhofer Austria Research GmbH
CIRP Annals - Manufacturing Technology | Year: 2012

Medium-term sales and operations planning as well as medium- to short-term production planning in customer-order-driven production processes are performed using a cascading planning process. A lack of coordination and feedback between the different planning phases causes problems that originate from infeasible production programs and have a negative effect on production costs. Based on a system for the classification of planning restrictions, the planning process is controlled using a newly developed combination of the methods of Linear Programming and Constraint Programming. The result is a formal logic that combines the different planning horizons and the two sets of planning methods. © 2012 CIRP. Source
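A toy sketch of the LP/CP split described above (our own construction, not the authors' system; the third-party pulp package is assumed for the LP part): an LP chooses lot sizes against a capacity restriction, then a CP-style rule enforces a discrete restriction the LP cannot express, feeding back into the plan.

    # LP step (aggregate program) followed by a constraint-style feasibility check.
    from pulp import LpProblem, LpMaximize, LpVariable, lpSum, value

    products = {"A": 30, "B": 45}          # contribution margin per unit (invented)
    hours = {"A": 2, "B": 3}               # machine hours per unit (invented)
    capacity = 240

    prob = LpProblem("weekly_program", LpMaximize)
    x = {p: LpVariable(f"x_{p}", lowBound=0) for p in products}
    prob += lpSum(products[p] * x[p] for p in products)            # objective
    prob += lpSum(hours[p] * x[p] for p in products) <= capacity   # capacity restriction
    prob.solve()

    plan = {p: value(x[p]) for p in products}

    # CP-style rule: product B needs a setup batch of at least 20 units (assumed)
    if 0 < plan["B"] < 20:
        plan["B"] = 0   # feedback step: reject the infeasible lot and re-plan
    print(plan)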


Haessler S.,Vienna University of Technology | Caillat J.,University Pierre and Marie Curie | Salieres P.,CEA Saclay Nuclear Research Center
Journal of Physics B: Atomic, Molecular and Optical Physics | Year: 2011

This tutorial presents the most important aspects of the molecular self-probing paradigm, which views the process of high harmonic generation as 'a molecule being probed by one of its own electrons'. Since the properties of the electron wavepacket acting as a probe allow a combination of attosecond and ångström resolutions in measurements, this idea bears great potential for the observation, and possibly control, of ultrafast quantum dynamics in molecules at the electronic level. Theoretical as well as experimental methods and concepts at the basis of self-probing measurements are introduced. Many of these are discussed using the example of molecular orbital tomography. © 2011 IOP Publishing Ltd. Source


Azadbeh M.,Sahand University of Technology | Mohammadzadeh A.,Sahand University of Technology | Danninger H.,Vienna University of Technology
Materials and Design | Year: 2014

In this study, an experimental investigation using response surface methodology has been undertaken in order to model and evaluate the physical and mechanical properties of Cr-Mo prealloyed sintered steels with respect to the variation of powder metallurgy process parameters such as compacting pressure, sintering temperature and the Cr content of the prealloyed steel powder. Mathematical models were developed at the 95% confidence level to predict physical properties such as sintered density and electrical resistivity, and mechanical properties such as transverse rupture strength, apparent (macro-)hardness and impact energy. Analysis of variance was used to validate the adequacy of the developed models. The obtained mathematical models are useful not only for predicting the physical and mechanical properties with high accuracy but also for selecting optimum manufacturing parameters to achieve the desired properties. © 2013 Elsevier Ltd. Source
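For context, response surface methodology typically fits the generic second-order polynomial below to each measured response (the standard form; the specific coefficient values belong to the study itself):

    \[ y \;=\; \beta_0 \;+\; \sum_i \beta_i x_i \;+\; \sum_i \beta_{ii} x_i^2
       \;+\; \sum_{i<j} \beta_{ij} x_i x_j \;+\; \varepsilon, \]
    % y: a response such as sintered density or transverse rupture strength;
    % x_i: the coded factors (compacting pressure, sintering temperature, Cr
    % content); coefficients are fit by least squares and screened by ANOVA.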


Schmitt A.,Vienna University of Technology
Lecture Notes in Physics | Year: 2010

The QCD phase diagram collects the equilibrium phases of QCD in the plane of quark (or baryon) chemical potential μ and temperature T. We show a sketch of this phase diagram in Fig. 1.1. In this introduction, we are not concerned with the details of this diagram. We observe that compact stars, on the scales of this diagram, live in the region of small temperatures and intermediate densities. They may live in the region where quarks are confined, i.e., in the hadronic phase. This would imply that they are neutron stars. They may also live in the deconfined region which would make them quark stars. A compact star may also contain both deconfined and confined quark matter because the star actually has a density profile rather than a homogeneous density. In the interior, we expect the density to be larger than at the surface. Therefore, the third possibility is a hybrid star with a quark core and a nuclear mantle. © 2010 Springer-Verlag Berlin Heidelberg. Source


Jakoby B.,Johannes Kepler University | Vellekoop M.J.,Vienna University of Technology
IEEE Sensors Journal | Year: 2011

In this contribution, an overview is given of physical sensors for the determination and monitoring of the state of liquids. Basic liquid properties in different energy domains (electrical, mechanical, thermal, and optical) are discussed and their application in sensors is reviewed. Such sensors are useful for bulk fluids, but also for very small liquid volumes such as used in microfluidic devices. In addition, the capability of physical chemosensors for the retrieval of chemical or biochemical information of the liquid is explicated. © 2011 IEEE. Source


Chen C.-M.,Vienna University of Technology | Chung Y.-C.,Texas A&M University
Journal of High Energy Physics | Year: 2011

We approach the Minimal Supersymmetric Standard Model (MSSM) from an E6 GUT by using the spectral cover construction and non-abelian gauge fluxes in F-theory. We start with an E6 singularity unfolded from an E8 singularity and obtain E6 GUTs by using an SU(3) spectral cover. By turning on SU(2) × U(1)^2 gauge fluxes, we obtain a rank 5 model with the gauge group SU(3) × SU(2) × U(1)^2. Based on the well-studied geometric backgrounds in the literature, we demonstrate several models and discuss their phenomenology. © SISSA 2011. Source


Filzmoser M.,Vienna University of Technology
International Journal of Artificial Intelligence | Year: 2010

Automated negotiation, in which software agents assume the negotiation tasks of their human users, is argued to improve the benefits of e-business transactions. Higher benefits result on the one hand from reduced transaction costs due to the avoidance of human intervention; on the other hand, software agents are supposed to achieve better outcomes. While the former argument is straightforward, the latter lacks empirical evidence. The few studies that compare human and software agent performance in automated negotiations come only to inconclusive results. We model and simulate automated negotiation systems and compare the output of the simulation runs to the outcomes of negotiation experiments with human subjects. The automated negotiation systems consist of software agents that follow concession strategies proposed in the negotiation literature, and appropriate protocols that allow these agents to interrupt their strategy to avoid exploitation and unfavorable agreements. The negotiation problems used in the simulations are derived from the experiments, so that the outcomes of human and automated negotiation are directly comparable. The outcome dimensions considered in our analysis are the proportion of agreements, dyadic and individual performance, and fairness. Only a subset of the systems managed to significantly outperform human negotiations in all outcome dimensions. These systems consist of software agents that systematically propose offers of monotonically decreasing utility and make first concession steps if the opponent reciprocated previous concessions. The protocols of these systems make it possible to reject unfavorable offers, to avoid exploitation or unfavorable agreements, without immediately aborting the negotiation. Copyright © 2010-11 by IJAI (CESER Publications). Source
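The agent behaviour singled out above (offers of monotonically decreasing own utility, conceding further only after the opponent reciprocated) can be sketched in a few lines of Python. The utility bookkeeping and step logic are illustrative assumptions, not the paper's exact simulation model.

    # Sketch of a reciprocity-gated monotone concession agent.
    def make_agent(my_offers_best_to_worst):
        """my_offers_best_to_worst: offers ordered from best to worst for the agent."""
        state = {"i": 0, "opp_last_utility": None}

        def respond(opp_offer_utility):
            # concede one step only if the opponent's latest offer improved on
            # their previous one (i.e. they reciprocated); else hold position.
            if state["opp_last_utility"] is None or opp_offer_utility > state["opp_last_utility"]:
                state["i"] = min(state["i"] + 1, len(my_offers_best_to_worst) - 1)
            state["opp_last_utility"] = opp_offer_utility
            return my_offers_best_to_worst[state["i"]]

        return respond

    agent = make_agent(["offer_1", "offer_2", "offer_3"])
    print(agent(0.3), agent(0.2), agent(0.4))  # holds at offer_2 until 0.4 reciprocates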


Bartsch A.,Vienna University of Technology
Remote Sensing | Year: 2010

The scatterometer SeaWinds on QuikSCAT provided regular measurements at Ku-band from 1999 to 2009. Although it was designed for ocean applications, it has frequently been used for the assessment of seasonal snowmelt patterns, alongside other terrestrial applications such as ice cap monitoring, phenology and urban mapping. This paper discusses general data characteristics of SeaWinds and reviews relevant change detection algorithms. Depending on the complexity of the method, parameters such as long-term noise and multiple-event analyses are incorporated. Temporal averaging is a commonly accepted preprocessing step, with consideration of diurnal, multi-day or seasonal averages. © 2010 by the author; licensee Molecular Diversity Preservation International, Basel, Switzerland. Source
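As an illustration of the change-detection and temporal-averaging ideas mentioned here, the sketch below smooths a synthetic Ku-band backscatter series with a multi-day average and flags melt onset by a fixed drop below a winter reference. The window length and the 3 dB threshold are assumptions for illustration, not values from the review.

    # Toy melt-onset detection on synthetic daily backscatter (dB).
    import numpy as np

    rng = np.random.default_rng(0)
    days = np.arange(120)
    sigma0 = -8.0 + rng.normal(0, 0.3, days.size)   # synthetic dry-snow winter level
    sigma0[80:] -= 4.0                               # abrupt drop when melt sets in

    window = 5
    smooth = np.convolve(sigma0, np.ones(window) / window, mode="same")  # multi-day average
    winter_ref = smooth[:30].mean()
    melt = smooth < winter_ref - 3.0                 # assumed 3 dB drop threshold
    print("melt onset on day", int(np.argmax(melt)))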


Koudriavtsev A.B.,Mendeleev University of Chemical Technology | Linert W.,Vienna University of Technology
Journal of Structural Chemistry | Year: 2010

The nature and theoretical models of spin crossover equilibrium between high-spin and low-spin forms of transition metal complexes are reviewed. Spin crossover compounds are promising materials for information storage and display devices. In the solid state spin crossover is accompanied by several phenomena related to phase transitions. A critical analysis of theoretical models proposed for the explanation of these phenomena is given. The paper mainly focuses on two models that provide for adequate descriptions of the majority of experimental data, viz. the model of the Ising-like Hamiltonian and the molecular statistical model. Descriptions of spin crossover yielded by these two models are formally similar but not identical and they are based on fundamentally different concepts of molecular interactions and ordering, the latter being the origin of the two-step spin crossover. The Ising-like Hamiltonian model approaches spin crossover from the point of view of properties of lattices whereas the molecular statistical model explains this phenomenon starting from molecules. The latter approach provides for the elucidation of the molecular nature of cooperative phenomena observed in spin crossover which is important for developing the synthetic strategy of promising spin crossover compounds. © 2010 Pleiades Publishing, Ltd. Source
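The Ising-like Hamiltonian referred to here is commonly written in the following form (a Wajnflasz-Pick-type model; our transcription of the standard expression, with sigma = +1 for the high-spin and -1 for the low-spin state):

    \[ \mathcal{H} \;=\; \frac{\Delta - k_B T \ln g}{2}\, \sum_i \sigma_i
       \;-\; J \sum_{\langle i,j \rangle} \sigma_i \sigma_j, \]
    % Delta: energy gap between HS and LS states; g: ratio of their effective
    % degeneracies (the entropic term that drives the crossover with rising T);
    % J: the phenomenological cooperative interaction responsible for
    % hysteresis and multi-step transitions.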


Haubner R.,Vienna University of Technology | Kalss W.,Balzers Ag
International Journal of Refractory Metals and Hard Materials | Year: 2010

Diamond deposition on various hardmetal tools is widely used. For applications where the mechanical forces are low, diamond coatings have long lifetimes, but especially for heavy-duty applications the reproducibility of diamond coating adhesion is not adequate. Wear and lifetime of diamond-coated tools are determined by the diamond microstructure, the coating thickness, and the adhesion of the coating. The diamond-substrate interface is important for layer adhesion, but in the case of diamond deposition on hardmetal tools, the interface can change during deposition. For this reason, surface pre-treatments are important, not only for better diamond nucleation, but also to create a stable interface that allows good coating adhesion. The various aspects of different surface pre-treatments of hardmetal tools are discussed. © 2010 Elsevier Ltd. All rights reserved. Source


Rauch H.,Vienna University of Technology
Journal of Physics: Conference Series | Year: 2012

Neutron interferometry provides a powerful tool to investigate particle and wave features in quantum physics. Single-particle interference phenomena can be observed with neutrons, and the entanglement of degrees of freedom, i.e., contextuality, can be verified and used in further experiments. Entanglement of two photons, or atoms, is analogous to double-slit diffraction of a single photon, neutron or atom. Neutrons are proper tools for testing quantum mechanics because they are massive, they couple to electromagnetic fields due to their magnetic moment, they are subject to all basic interactions, and they are sensitive to topological effects as well. The 4π-symmetry of spinor wave functions, the spin-superposition law and many topological phenomena can be made visible, thus showing interesting intrinsic features of quantum physics. Related experiments will be discussed. Deterministic and stochastic partial absorption experiments can be described by Bell-type inequalities. Neutron interferometry experiments based on post-selection methods renewed the discussion about quantum non-locality and the quantum measuring process. It has been shown that interference phenomena can be revived even when the overall interference pattern has lost its contrast. This indicates a persisting coupling in phase space even in cases of spatially separated Schrödinger cat-like situations. These states are extremely fragile and sensitive to any kind of fluctuations and other decoherence processes. More complete quantum experiments also show that a complete retrieval of quantum states behind an interaction volume becomes impossible in principle, but where and when a collapse of the wave-field occurs depends on the level of the experiment. Source


Grandits P.,Vienna University of Technology
Mathematics of Operations Research | Year: 2016

We give an algorithmic solution of the optimal consumption problem $\sup_C \mathbb{E}\big[\int_0^\tau e^{-\beta t}\, dC_t\big]$, where $C_t$ denotes the accumulated consumption until time $t$, and $\tau$ denotes the time of ruin. Moreover, the endowment process $X_t$ is modeled by $X_t = x + \int_0^t \mu(X_s)\, ds - C_t$. We solve the problem by showing that the function provided by the algorithm solves the Hamilton-Jacobi (HJ) equation in a viscosity sense and that the same is true for the value function of the problem. The argument is finished by a uniqueness result. It turns out that one has to change the optimal strategy at a sequence of endowment values, described by a free boundary value problem. Finally we give an illustrative example. © 2016 INFORMS. Source


Balan A.,RWTH Aachen | May G.,RWTH Aachen | Schoberl J.,Vienna University of Technology
Journal of Computational Physics | Year: 2012

Numerical schemes using piecewise polynomial approximation are very popular for high order discretization of conservation laws. While the most widely used numerical scheme under this paradigm appears to be the Discontinuous Galerkin method, the Spectral Difference scheme has often been found attractive as well, because of its simplicity of formulation and implementation. However, recently it has been shown that the scheme is not linearly stable on triangles. In this paper we present an alternate formulation of the scheme, featuring a new flux interpolation technique using Raviart-Thomas spaces, which proves stable under a similar linear analysis in which the standard scheme failed. We demonstrate viability of the concept by showing linear stability both in the semi-discrete sense and for time stepping schemes of the SSP Runge-Kutta type. Furthermore, we present convergence studies, as well as case studies in compressible flow simulation using the Euler equations. © 2011 Elsevier Inc. Source


Bointner R.,Vienna University of Technology
Energy Policy | Year: 2014

Long time series from the IEA and international patent offices offer a huge potential for scientific investigations of the energy innovation process. Thus, this paper presents a broad literature review on innovation drivers and barriers, and an analysis of the knowledge induced by public research and development (R&D) expenditures and patents in the energy sector. The cumulative knowledge stock induced by public R&D expenditures in the 14 investigated IEA countries is 102.3 bn EUR in 2013. Nuclear energy has the largest share with 43.9 bn EUR, followed by energy efficiency accounting for 14.9 bn EUR, fossil fuels with 13.5 bn EUR, and renewable energy with 12.1 bn EUR. A regression analysis indicates a linear relation between GDP and cumulative knowledge, with each billion EUR of GDP leading to additional knowledge of 3.1 mil EUR. However, linearity is not given for single energy technologies. Further, the results show that appropriate public R&D funding, associated with a subsequent promotion of the market diffusion of a niche technology, may lead to a breakthrough of the respective technology. © 2014 Elsevier Ltd. Source
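The abstract does not state the accumulation rule behind the reported knowledge stock. Purely as an assumption for illustration, a perpetual-inventory rule with annual depreciation is one common way to turn an R&D expenditure series into such a stock; a minimal sketch:

```python
def knowledge_stock(annual_rd_bn_eur, delta=0.03):
    """Cumulative knowledge via an assumed perpetual-inventory rule:
    K_t = (1 - delta) * K_{t-1} + R_t, with annual depreciation delta.
    The depreciation rate and the rule itself are illustrative assumptions,
    not taken from the paper."""
    K = 0.0
    for rd in annual_rd_bn_eur:
        K = (1.0 - delta) * K + rd
    return K

print(knowledge_stock([1.2, 1.3, 1.1, 1.4]))  # bn EUR, illustrative values
```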


Zemann R.,Vienna University of Technology
Materials Today: Proceedings | Year: 2016

This paper describes the manufacturing of threads directly into a carbon fibre reinforced polymer. For the investigation, a specimen of prepreg is made and used; curing is done via an autoclave cycle. For the manufacturing of the thread, a CNC machining center is used. The process starts with the drilling of a start hole, and the thread geometry is then produced with a specific end mill. The manufactured threads are tested on a tensile test machine. The results show that direct tapping into a carbon fibre reinforced polymer is possible. The two tested threads both showed good load capacities, averaging 4.42 kN for the M5 and 6.56 kN for the M8 thread. © 2016 The Authors. Source


Schmoll M.,Vienna University of Technology | Esquivel-Naranjo E.U.,CINVESTAV | Herrera-Estrella A.,CINVESTAV
Fungal Genetics and Biology | Year: 2010

In recent years, considerable progress has been made in the elucidation of photoresponses and the mechanisms responsible for their induction in species of the genus Trichoderma. Although an influence of light on these fungi had already been reported five decades ago, their response is not limited to photoconidiation. While early studies on the molecular level concentrated on signaling via the secondary messenger cAMP, a more comprehensive scheme is available today. The photoreceptor-orthologs BLR1 and BLR2 are known to mediate almost all known light responses in these fungi and another light-regulatory protein, ENVOY, is suggested to establish the connection between light response and nutrient signaling. As a central regulatory mechanism, this light signaling machinery impacts diverse downstream pathways including vegetative growth, reproduction, carbon and sulfur metabolism, response to oxidative stress and biosynthesis of peptaibols. These responses involve several signaling cascades, for example the heterotrimeric G-protein and MAP-kinase cascades, resulting in an integrated response to environmental conditions. © 2010 Elsevier Inc. Source


Gabor F.,Vienna University of Technology
Handbook of Experimental Pharmacology | Year: 2010

It is estimated that 90% of all medicines are oral formulations and their market share is still increasing, due to sound advantages for the patient, the pharmaceutical industry and healthcare systems. Considering biopharmaceutical issues such as physicochemical requirements of the drug and physiological conditions, however, oral delivery is one of the most challenging routes. Recognising solubility, permeability and residence time in the gastrointestinal milieu as key parameters, different characteristics of drugs and their delivery systems such as size, pH, density, diffusion, swelling, adhesion, degradation and permeability can be adjusted to improve oral delivery. Future developments will focus on further improvement in patient compliance as well as the feasibility of administering biotech drugs via the oral route. © 2009 Springer-Verlag Berlin Heidelberg. Source


Jenei S.,Vienna University of Technology | Jenei S.,University of Pecs
Fuzzy Sets and Systems | Year: 2010

By analogy with the usual extension of the group operation from the positive cone of an ordered Abelian group into the whole group, a construction-called symmetrization-is defined and it is related to the rotation construction [Jenei, On the structure of rotation-invariant semigroups, Archive for Mathematical Logic 42 (2003) 489-514]. Symmetrization turns out to be a kind of dualized rotation. A characterization is given for the left-continuous t-conorms for which their symmetrization is a uninorm. As a by-product a new family of involutive uninorms is introduced. © 2009 Elsevier B.V. All rights reserved. Source


Uzunova E.L.,Bulgarian Academy of Science | Mikosch H.,Vienna University of Technology
ACS Catalysis | Year: 2013

Ethene adsorption in transition-metal-exchanged clinoptilolite with cations of the d elements (Fe-Cu) and also Pd was examined by density functional theory with the B3LYP functional, using the embedded cluster method in ONIOM. The preferred extraframework cation sites for the divalent cations are those in the large channel A. The monovalent cations Ni+ and Cu+ show little preference toward the available cation sites: they approach a center of negative framework charge, forming shorter M-O bonds as compared with the divalent cations. Periodic model calculations were applied to validate the extraframework site preference of Cu+ cations, and they confirm the ONIOM results, though periodic calculations systematically predict smaller energy gaps between the different cation site occupancies. A dominant component in the formation of the metal cation-adsorbate π-complexes is the electron charge transfer from the filled d orbitals of the transition metal cations to the unoccupied π* orbitals of ethene. Significant contribution of the framework oxygen atoms as electron donors to the cations was revealed. This results in lengthening of the C=C bond and a red shift of the corresponding stretching vibration, which is most pronounced in the adsorption complexes with monovalent cations (Ni+, Cu+, Pd+). The hydrogen atoms in the ethene molecule become nonequivalent upon adsorption. The C-H bond lengthening is more significant for the adsorption complexes in channel B and on Ni+ cations in channel A. The deformation density maps, derived from the B3LYP-calculated densities, provide insight into the role of the framework in the charge transfer from the metal cations to the ethene molecule. © 2013 American Chemical Society. Source


Hüppe A.,Klagenfurt University | Kaltenbacher M.,Vienna University of Technology
Journal of Computational Acoustics | Year: 2012

This paper addresses the application of the spectral finite element (FE) method to problems in the field of computational aeroacoustics (CAA). We apply a mixed finite element approximation to the acoustic perturbation equations, in which the flow induced sound is modeled by assessing the impact of a mean flow field on the acoustic wave propagation. We show the properties of the approximation by numerical benchmarks and an application to the CAA problem of sound generated by an airfoil. © 2012 IMACS. Source


Halbwirth H.,Vienna University of Technology
International Journal of Molecular Sciences | Year: 2010

Flavonoids and biochemically-related chalcones are important secondary metabolites, which are ubiquitously present in plants and therefore also in human food. They fulfill a broad range of physiological functions in planta and there are numerous reports about their physiological relevance for humans. Flavonoids have in common a basic C6-C3-C6 skeleton structure consisting of two aromatic rings (A and B) and a heterocyclic ring (C) containing one oxygen atom, whereas chalcones, as the intermediates in the formation of flavonoids, have not yet established the heterocyclic C-ring. Flavonoids are grouped into eight different classes, according to the oxidative status of the C-ring. The large number of divergent chalcone and flavonoid structures results from the extensive modification of the basic molecules. The hydroxylation pattern influences physiological properties such as light absorption and antioxidative activity, which is the basis for many beneficial health effects of flavonoids; in some cases anti-infective properties are also observed. © 2010 by the author; licensee Molecular Diversity Preservation International. Source


Mahmood A.,Vienna University of Technology | Exel R.,Danube University Krems | Sauter T.,Danube University Krems
IEEE Transactions on Industrial Informatics | Year: 2014

In distributed systems, clock synchronization performance is hampered by delays and jitter accumulated not only in the network, but also in the timestamping procedures of the devices being synchronized. This is particularly critical in software timestamp-based synchronization where both software- and hardware-related sources contribute to this behavior. Usually, these synchronization impairments are collapsed into a black-box performance figure without quantifying the impact of each individual source, which obscures the picture and reduces the possibility to find optimized remedies. In this study, for the first time, the individual sources of delay and jitter are investigated for an IEEE 802.11 wireless local area network (WLAN) synchronization system using the IEEE 1588 protocol and software timestamps. Novel measurement techniques are proposed to quantify the hardware- and software-related delay and jitter mechanisms. It is shown that the delays and their associated jitter originate from both the WLAN chipset and the host computer. Moreover, the delay from the chipset cannot be considered symmetric and any such assumption inevitably leads to a residual offset, and thus to synchronization inaccuracy. Therefore, a calibration-based approach is proposed to compensate for these delays and to improve the performance of WLAN synchronization. Experimental results show that with optimal error compensation, a similar synchronization performance as software-based synchronization in Ethernet networks can be achieved. © 2012 IEEE. Source
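As background for the synchronization mechanism discussed above: in IEEE 1588, the slave estimates offset and path delay from four timestamps per Sync/Delay_Req exchange, and the standard formulas assume a symmetric path — exactly the assumption the paper shows to be violated by WLAN chipsets. A minimal sketch of those formulas follows; the function name and the calibration parameter are mine, and an actual calibration value would come from measurements like those proposed in the paper:

```python
def ptp_offset_delay(t1, t2, t3, t4, tx_rx_asym=0.0):
    """Offset and mean path delay from one IEEE 1588 exchange.
    t1: master Sync send, t2: slave receive,
    t3: slave Delay_Req send, t4: master receive.
    'tx_rx_asym' is a device-specific calibration term standing in for the
    paper's chipset-delay compensation (value assumed, not from the paper)."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0 - tx_rx_asym
    delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, delay

# Example: symmetric 50 us path, slave clock running 10 us ahead of master.
print(ptp_offset_delay(t1=0.0, t2=60e-6, t3=100e-6, t4=140e-6))
```

With an asymmetric chipset delay, the first formula yields a residual offset unless the calibration term absorbs it, which is the core observation of the study.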


Bagchi A.,University of Edinburgh | Detournay S.,Harvard University | Grumiller D.,Vienna University of Technology
Physical Review Letters | Year: 2012

We provide the first evidence for a holographic correspondence between a gravitational theory in flat space and a specific unitary field theory in one dimension lower. The gravitational theory is a flat-space limit of topologically massive gravity in three dimensions at a Chern-Simons level of k=1. The field theory is a chiral two-dimensional conformal field theory with a central charge of c=24. © 2012 American Physical Society. Source


Traff J.L.,Vienna University of Technology
Parallel Computing | Year: 2012

In both the regular and the irregular MPI (Message-Passing Interface) collective communication and reduction interfaces there is a correspondence between the argument lists and certain MPI derived datatypes. As a means to address and alleviate well-known memory and performance scalability problems in the irregular (or vector) collective interface definitions of MPI we propose to push this correspondence to its natural limit, and replace the interfaces of the MPI collectives with a different set of interfaces that specify all data sizes and displacements solely by means of derived datatypes. This reduces the number of collective (communication and reduction) interfaces from 16 to 10, significantly generalizes the operations, unifies regular and irregular collective interfaces, makes it possible to decouple certain algorithmic decisions from the collective operation, and moves the interface scalability issue from the collective interfaces to the MPI derived datatypes. To complete the proposal we discuss the memory scalability of the derived datatypes and suggest a number of alternative datatypes for MPI, some of which should be of independent interest. A running example illustrates the benefits of this alternative set of collective interfaces. Implementation issues are discussed showing that an implementation can be undertaken within any reasonable MPI library implementation. © 2011 Elsevier B.V. All rights reserved. Source
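To make the direction of the proposal concrete — describing sizes and displacements through a derived datatype rather than through count/displacement arrays — here is a small sketch using the standard MPI_Type_indexed constructor via mpi4py. It shows only a point-to-point transfer of an irregular layout, not the paper's proposed collective interfaces; buffer contents and layout are arbitrary choices of mine:

```python
# Run with: mpiexec -n 2 python indexed_send.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD

# One derived datatype describes three irregular blocks (lengths 1, 2, 3 at
# displacements 0, 4, 9), replacing separate count/displacement arrays.
layout = MPI.DOUBLE.Create_indexed([1, 2, 3], [0, 4, 9]).Commit()

if comm.rank == 0:
    buf = np.arange(12, dtype='d')
    comm.Send([buf, 1, layout], dest=1)   # 1 element of the derived type
elif comm.rank == 1:
    out = np.empty(6, dtype='d')          # the 6 doubles arrive contiguously
    comm.Recv([out, 6, MPI.DOUBLE], source=0)
    print(out)                            # [ 0.  4.  5.  9. 10. 11.]
layout.Free()
```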


Krivic P.,Vienna University of Technology
Advanced Materials Research | Year: 2013

Planar microcoils have been widely used for decades in nuclear magnetic resonance (NMR) analysis of volume-limited chemical and biological samples, since these microcoils achieve high sensitivity and resolution in localized volumes. Low-temperature co-fired ceramic (LTCC) materials, in turn, exhibit highly reliable and advantageous properties in the radio frequency (RF) working range. In this work, the author incorporates this promising material technology into the design of high-quality NMR microcoils, which so far have been fabricated on glass or polymer substrates. A set of ceramic-substrate microcoils is fabricated and characterized, and the fabrication process is described in detail. © (2013) Trans Tech Publications, Switzerland. Source


Troster A.,Vienna University of Technology
Physics Procedia | Year: 2014

The Fourier Monte Carlo algorithm represents a powerful tool to study criticality in lattice spin systems. In particular, the algorithm constitutes an interesting alternative to other simulation approaches for models with microscopic or effective long-ranged interactions. However, due to the somewhat involved implementation of the basic algorithmic machinery, many researchers still refrain from using Fourier Monte Carlo. It is the aim of the present article to lower this barrier. Thus, the basic Fourier Monte Carlo algorithm is presented in great detail, with emphasis on providing ready-to-use formulas for the reader's own implementation. © 2014 Elsevier B.V. Source
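The article's ready-to-use formulas are not reproduced in this summary, so the following is only a minimal sketch of the core idea under assumptions of my own: Metropolis updates applied directly to the Fourier amplitudes of a real field, for an assumed quadratic (Gaussian) lattice Hamiltonian whose energy is diagonal in k:

```python
import numpy as np

rng = np.random.default_rng(0)
N, m2, beta, step = 64, 0.1, 1.0, 0.5
k = 2.0 * np.pi * np.fft.fftfreq(N)
A = 4.0 * np.sin(k / 2.0) ** 2 + m2      # lattice dispersion ~ k^2 + m^2
phi = np.zeros(N, dtype=complex)         # Fourier amplitudes phi_k

def mode_energy(mode, value):
    # |phi_k|^2 enters for the +k/-k pair of the assumed quadratic model
    return A[mode] * abs(value) ** 2

for sweep in range(5000):
    mode = rng.integers(1, N // 2)       # pick one independent mode
    prop = phi[mode] + step * (rng.normal() + 1j * rng.normal())
    dE = mode_energy(mode, prop) - mode_energy(mode, phi[mode])
    if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
        phi[mode] = prop
        phi[-mode] = np.conj(prop)       # keep the real-space field real
```

For long-ranged interactions the dispersion A(k) simply changes, which is why working directly in Fourier space is attractive; the paper's detailed formulas go well beyond this toy Gaussian case.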


Skarke H.,Vienna University of Technology
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2014

A refined version of a recently introduced method for analyzing the dynamics of an inhomogeneous irrotational dust universe is presented. A fully nonperturbative numerical computation of the time dependence of volume in this framework leads to the following results. If the initial state of the universe is Einstein-de Sitter with small Gaussian perturbations, then there is no acceleration even though the inhomogeneities strongly affect the evolution. A universe with a positive background curvature can exhibit acceleration, but not in conjunction with reasonable values for the Hubble rate. Thus the correct values for both quantities can be achieved only by introducing a positive cosmological constant. Possible loopholes to this conclusion are discussed; in particular, acceleration as an illusion created by peculiarities of light propagation in an inhomogeneous universe is still possible. Independently of the cosmological constant question, the present formalism should provide an important tool for precision cosmology. © 2014 American Physical Society. Source


Bath D.E.,Research Institute of Molecular Pathology | Bath D.E.,Howard Hughes Medical Institute | Stowers J.R.,Research Institute of Molecular Pathology | Stowers J.R.,Vienna University of Technology | And 5 more authors.
Nature Methods | Year: 2014

Rapidly and selectively modulating the activity of defined neurons in unrestrained animals is a powerful approach in investigating the circuit mechanisms that shape behavior. In Drosophila melanogaster, temperature-sensitive silencers and activators are widely used to control the activities of genetically defined neuronal cell types. A limitation of these thermogenetic approaches, however, has been their poor temporal resolution. Here we introduce FlyMAD (the fly mind-altering device), which allows thermogenetic silencing or activation within seconds or even fractions of a second. Using computer vision, FlyMAD targets an infrared laser to freely walking flies. As a proof of principle, we demonstrated the rapid silencing and activation of neurons involved in locomotion, vision and courtship. The spatial resolution of the focused beam enabled preferential targeting of neurons in the brain or ventral nerve cord. Moreover, the high temporal resolution of FlyMAD allowed us to discover distinct timing relationships for two neuronal cell types previously linked to courtship song. Source


Steinacker H.,Vienna University of Technology | Steinacker H.,City University of New York
Journal of High Energy Physics | Year: 2012

A mechanism for emergent gravity on brane solutions in Yang-Mills matrix models is exhibited. Gravity and a partial relation between the Einstein tensor and the energy-momentum tensor can arise from the basic matrix model action, without invoking an Einstein-Hilbert-type term. The key requirements are compactified extra dimensions with extrinsic curvature $\mathcal{M}^4 \times \mathcal{K} \subset \mathbb{R}^D$ and split noncommutativity, with a Poisson tensor $\theta^{ab}$ linking the compact with the noncompact directions. The moduli of the compactification provide the dominant degrees of freedom for gravity, which are transmitted to the 4 noncompact directions via the Poisson tensor. The effective Newton constant is determined by the scale of noncommutativity and the compactification. This gravity theory is well suited for quantization, and argued to be perturbatively finite for the IKKT model. Since no compactification of the target space is needed, it might provide a way to avoid the landscape problem in string theory. © SISSA 2012. Source


Rawassizadeh R.,Vienna University of Technology
Behaviour and Information Technology | Year: 2012

We are living in an era of social media such as online communities and social networking sites. Exposing or sharing personal information with these communities has risks as well as benefits, and there is always a trade-off between the risks and the benefits of using these technologies. Life-logs are pervasive tools or systems which sense and capture contextual information from the user's environment in a continuous manner. A life-log produces a dataset which consists of continuous streams of sensor data. Sharing this information has a wide range of advantages for both the user and society. On the other hand, in terms of individual privacy, life-log information is very sensitive. Although social media enable users to share their information, current sharing models are not capable of handling life-log information while maintaining user privacy, due to the structure of life-log data. Our approach here is to describe the sharing of life-log information with society based on the identification of associated risks and benefits. Subsequently, based on the identified risks, we propose a data model for sharing life-log information. This data model has been designed to reduce the potential risks of life-logs. Furthermore, ethics for providing and using life-logs are discussed. These ethics focus on reducing risks as much as possible while sharing life-log information. © 2012 Copyright Taylor and Francis Group, LLC. Source


Franck G.,Vienna University of Technology
Angewandte Chemie - International Edition | Year: 2012

Your attention please: phenomenal consciousness, that is, how something feels, does not exist for an observer. As science relies on observations, it is not aware of the nature of subjectivity, and thus science is not often regarded as a collective intelligence. In this Essay, the roles of intelligence and attention are discussed, and scientific communication and citation are analyzed, in order to evaluate whether science is a case of collective intelligence. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. Source


Haid S.,University of Ulm | Mishra A.,University of Ulm | Weil M.,Vienna University of Technology | Uhrich C.,Heliatek | And 2 more authors.
Advanced Functional Materials | Year: 2012

The convergent synthesis of a series of acceptor-donor-acceptor (A-D-A) type dicyanovinyl (DCV)-substituted oligoselenophenes DCVnS (n = 3-5) is presented. Trends in thermal and optoelectronic properties are studied in dependence on the length of the conjugated backbone. Optical measurements reveal red-shifted absorption spectra and electrochemical investigations show lowering of the lowest unoccupied molecular orbital (LUMO) energy levels for DCVnS compared to the corresponding thiophene analogs DCVnT. As a consequence, a lowering of the bandgap is observed. Single-crystal X-ray structure analysis of tetramer DCV4S provides important insight into the packing features and intermolecular interactions of the molecules, further corroborating the importance of the DCV acceptor groups for the molecular ordering. DCV4S and DCV5S are used as donor materials in planar heterojunction (PHJ) and bulk-heterojunction (BHJ) organic solar cells. The devices show very high fill factors (FF), a high open-circuit voltage, and power conversion efficiencies (PCE) of up to 3.4% in PHJ solar cells and slightly reduced PCEs of up to 2.6% in BHJ solar cells. In PHJ devices, the PCE for DCV4S almost doubles compared to the PCE reported for the oligothiophene analog DCV4T, while DCV5S shows an about 30% higher PCE than DCV5T. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. Source


Collado-Ruiz D.,Polytechnic University of Valencia | Ostad-Ahmad-Ghorabi H.,Vienna University of Technology
Resources, Conservation and Recycling | Year: 2010

In order for products to be comparable in different life cycle assessments, functional units need to be defined. Nevertheless, their definitions tend to be simplified or ambiguous. There is thus a need to standardize these functional units so that they can be properly used for comparison of the environmental performance of products. This paper introduces a systematic approach to define standardized functional units: the concept of fuons. Fuons are defined as an abstraction of a product, based on its essential function and representing the whole set of products that share the parameters for this function's flows. The use of fuons, and by these means the correct definition of the functional unit, should then help to retrieve a suitable product family for life cycle comparison, i.e. a set of products whose LCAs share a common behavior. This will allow comparing the environmental performance of a new product in development with the products in that family. © 2010 Elsevier B.V. All rights reserved. Source


Fichte J.K.,Vienna University of Technology
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2012

Backdoors of answer-set programs are sets of atoms that represent "clever reasoning shortcuts" through the search space. Assignments to backdoor atoms reduce the given program to several programs that belong to a tractable target class. Previous research has considered target classes based on notions of acyclicity where various types of cycles (good and bad cycles) are excluded from graph representations of programs. We generalize the target classes by taking the parity of the number of negative edges on bad cycles into account and consider backdoors for such classes. We establish new hardness results and non-uniform polynomial-time tractability relative to directed or undirected cycles. © 2012 Springer-Verlag. Source


Egly U.,Vienna University of Technology
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2012

Quantified Boolean formulas generalize propositional formulas by admitting quantifications over propositional variables. We compare proof systems with different quantifier handling paradigms for quantified Boolean formulas (QBFs) with respect to their ability to allow succinct proofs. We analyze cut-free sequent systems extended by different quantifier rules and show that some rules are better than others. Q-resolution is an elegant extension of propositional resolution to QBFs and is applicable to formulas in prenex conjunctive normal form. In Q-resolution, there is no explicit handling of quantifiers by specific rules. Instead, the forall reduction rule, which operates on single clauses, inspects the global quantifier prefix. We show that there are classes of formulas for which there are short cut-free tree proofs in a sequent system, but any Q-resolution refutation of the negation of the formula is exponential. © 2012 Springer-Verlag. Source


Hetzl S.,Vienna University of Technology
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2012

We introduce a new connection between formal language theory and proof theory. One of the most fundamental proof transformations in a class of formal proofs is shown to correspond exactly to the computation of the language of a certain class of tree grammars. Translations in both directions, from proofs to grammars and from grammars to proofs, are provided. This correspondence allows theoretical as well as practical applications. © 2012 Springer-Verlag. Source


Giouroudi I.,Vienna University of Technology | Kosel J.,King Abdullah University of Science and Technology
Recent Patents on Nanotechnology | Year: 2010

Magnetic nanoparticles have been proposed for biomedical applications for several years. Various research groups worldwide have focused on improving their synthesis, their characterization techniques and the specific tailoring of their properties. Yet, it is the recent, impressive advances in nanotechnology and biotechnology which caused the breakthrough in their successful application in biomedicine. This paper aims at reviewing some current biomedical applications of magnetic nanoparticles as well as some recent patents in this field. Special emphasis is placed on i) hyperthermia, ii) therapeutics, and iii) diagnostics. Future prospects are also discussed. © 2010 Bentham Science Publishers Ltd. Source


Retscher G.,Vienna University of Technology
Journal of Applied Geodesy | Year: 2015

The study 'Cooperative Positioning for Real-time User Assistance and Guidance at Multi-modal Public Transit Junctions' (InKoPoMoVer) aims at a better understanding of passenger movement in multi-modal transit situations in order to provide improved passenger assistance and guidance. Using a novel Differential Wi-Fi (DWi-Fi) approach based on intelligent Cooperative Positioning (CP), algorithms can be developed that considerably increase the accuracy of person tracking and allow for the derivation of movement patterns. This user support enables smooth transit at the stations, closing the gap in current multimodal transport information systems, where routing is not performed at the junction itself. Addressing ethical and usability aspects will ensure user-friendly results. In this article, the concept of transit assistance is introduced, followed by a comprehensive discussion of suitable CP localization techniques and sensors as well as the implementation strategy. © 2015 Walter de Gruyter GmbH, Berlin/Munich/Boston. Source


Salotti J.-M.,French National Center for Scientific Research | Suhir E.,Vienna University of Technology
Acta Astronautica | Year: 2014

Some major risk-of-failure issues for future manned missions to Mars are discussed, with the objective of addressing criteria for making such missions possible, successful, safe and cost-effective. The following astronautical and instrumentation-and-equipment-reliability related aspects of the missions are considered: redundancies and backup strategies; costs; the assessed probability of failure as a suitable reliability criterion for the instrumentation (equipment); and probabilistic assessment of the likelihood of mission success and safety. It is concluded that parametric risk modeling is a must for a risk-driven decision-making process. © 2013 IAA. Source


Kuehn C.,Vienna University of Technology
Nonlinearity | Year: 2014

This work is motivated by mathematical questions arising in differential equation models for autocatalytic reactions. We extend the local theory of singularities in fast-slow polynomial vector fields to classes of unbounded manifolds which lose normal hyperbolicity due to an alignment of the tangent and normal bundles. A projective transformation is used to localize the unbounded problem. Then the blow-up method is employed to characterize the loss of normal hyperbolicity for the transformed slow manifolds. Our analysis yields a rigorous scaling law for all unbounded manifolds which exhibit a power-law decay for the alignment with a fast subsystem domain. Furthermore, the proof also provides a technical extension of the blow-up method itself by augmenting the analysis with an optimality criterion for the blow-up exponents. © 2014 IOP Publishing Ltd & London Mathematical Society. Source


Xie X.,Vienna University of Technology
Physical Review Letters | Year: 2015

We propose a two-dimensional interferometry scheme based on electron wave-packet interference, using a cycle-shaped orthogonally polarized two-color laser field. With such a method, the subcycle and intercycle interferences can be disentangled into different directions in the measured photoelectron momentum spectra. The Coulomb influence can be minimized, and the overlapping of interference fringes with the complicated low-energy structures can be avoided as well. The contributions of the excitation effect and the long-range Coulomb potential can be traced in the Fourier domain of the photoelectron distribution. Because of these advantages, precise information on valence electron dynamics of atoms or molecules with attosecond temporal resolution, and additional spatial information with angstrom resolution, can be obtained with the two-dimensional electron wave-packet interferometry. © 2015 American Physical Society. Source


Grumiller D.,Vienna University of Technology | Grumiller D.,Massachusetts Institute of Technology | Sachs I.,Arnold Sommerfeld Center for Theoretical Physics
Journal of High Energy Physics | Year: 2010

For cosmological topologically massive gravity at the chiral point we calculate momentum-space 2- and 3-point correlators of operators in the postulated dual CFT on the cylinder. These operators are sourced by the bulk and boundary gravitons. Our correlators are fully consistent with the proposal that cosmological topologically massive gravity at the chiral point is dual to a logarithmic CFT. In the process we give a complete classification of normalizable and non-normalizable left, right and logarithmic solutions to the linearized equations of motion in global AdS3. © SISSA 2010. Source


Laaha G.,University of Natural Resources and Life Sciences, Vienna | Skoien J.O.,European Commission - Joint Research Center Ispra | Bloschl G.,Vienna University of Technology
Hydrological Processes | Year: 2014

Top-kriging is a method for estimating stream flow-related variables on a river network. Top-kriging treats these variables as emerging from a two-dimensional spatially continuous process in the landscape. The top-kriging weights are estimated by regularising the point variogram over the catchment area (kriging support), which accounts for the nested nature of the catchments. We test the top-kriging method for a comprehensive Austrian data set of low stream flows. We compare it with the regional regression approach where linear regression models between low stream flow and catchment characteristics are fitted independently for sub-regions of the study area that are deemed to be homogeneous in terms of flow processes. Leave-one-out cross-validation results indicate that top-kriging outperforms the regional regression on average over the entire study domain. The coefficients of determination (cross-validation) of specific low stream flows are 0.75 and 0.68 for the top-kriging and regional regression methods, respectively. For locations without upstream data points, the performances of the two methods are similar. For locations with upstream data points, top-kriging performs much better than regional regression as it exploits the low flow information of the neighbouring locations. © 2012 John Wiley & Sons, Ltd. Source
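For orientation, the weights in any kriging variant come from a linear system built from (semi)variogram values; in top-kriging, those values are regularized over the catchment areas rather than taken at points. The sketch below shows only the generic ordinary-kriging solve — the catchment regularization that distinguishes top-kriging is omitted, and all names are mine:

```python
import numpy as np

def ordinary_kriging_weights(gamma_obs, gamma_target):
    """Solve the ordinary kriging system.
    gamma_obs: (n x n) semivariances between observation locations;
    gamma_target: length-n semivariances observation -> prediction point.
    In top-kriging these entries would be variogram values regularized
    over catchment areas (not shown here)."""
    n = len(gamma_target)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma_obs
    A[n, n] = 0.0                    # Lagrange-multiplier corner
    b = np.r_[gamma_target, 1.0]     # weights constrained to sum to one
    sol = np.linalg.solve(A, b)
    return sol[:n], sol[n]           # kriging weights, Lagrange multiplier
```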


Filzmoser P.,Vienna University of Technology | Hron K.,Palacky University | Reimann C.,Geological Survey of Norway
Science of the Total Environment | Year: 2010

Environmental sciences usually deal with compositional (closed) data. Whenever the concentration of chemical elements is measured, the data will be closed, i.e. the relevant information is contained in the ratios between the variables rather than in the data values reported for the variables. Data closure has severe consequences for statistical data analysis. Most classical statistical methods are based on the usual Euclidean geometry - compositional data, however, do not plot into Euclidean space because they have their own geometry which is not linear but curved in the Euclidean sense. This has severe consequences for bivariate statistical analysis: correlation coefficients computed in the traditional way are likely to be misleading, and the information contained in scatterplots must be used and interpreted differently from sets of non-compositional data. As a solution, the ilr transformation applied to a variable pair can be used to display the relationship and to compute a measure of stability. This paper discusses how this measure is related to the usual correlation coefficient and how it can be used and interpreted. Moreover, recommendations are provided for how the scatterplot can still be used, and which alternatives exist for displaying the relationship between two variables. © 2010 Elsevier B.V. Source
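To illustrate the suggested bivariate approach: for a pair of parts, the ilr transformation reduces to a scaled log-ratio, and its variance can serve as a stability indicator (a stable ratio gives a small variance). The sketch below follows that idea; treating the ilr variance directly as the stability measure is my simplification, and the paper's exact definition may differ:

```python
import numpy as np

def ilr_pair(x, y):
    """ilr coordinate of the two-part subcomposition (x, y)."""
    return np.log(np.asarray(x, float) / np.asarray(y, float)) / np.sqrt(2.0)

# Closed data: only ratios carry information, so inspect the ilr coordinate
# of the pair instead of a raw correlation coefficient.
x = np.array([52.1, 48.7, 50.3, 55.0])   # illustrative concentrations
y = np.array([26.4, 24.1, 25.6, 27.2])
z = ilr_pair(x, y)
print(z.var())   # small variance = stable ratio between the two parts
```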


Abou-Hussein A.A.A.,Ain Shams University | Linert W.,Vienna University of Technology
Spectrochimica Acta - Part A: Molecular and Biomolecular Spectroscopy | Year: 2012

Mono- and bi-nuclear acyclic and macrocyclic complexes with the hard-soft Schiff base ligand H2L, derived from the reaction of 4,6-diacetylresorcinol and thiocarbohydrazide in the molar ratio 1:2, have been prepared. The H2L ligand reacts with Co(II), Ni(II), Cu(II), Zn(II), Mn(II) and UO2(VI) nitrates, VO(IV) sulfate and Ru(III) chloride to give acyclic binuclear complexes, except for VO(IV) and Ru(III), which gave acyclic mononuclear complexes. Reaction of the acyclic mononuclear VO(IV) and Ru(III) complexes with 4,6-diacetylresorcinol afforded the corresponding macrocyclic mononuclear VO(IV) and Ru(III) complexes. Template reactions of 4,6-diacetylresorcinol and thiocarbohydrazide with either VO(IV) or Ru(III) salts afforded the macrocyclic binuclear VO(IV) and Ru(III) complexes. The Schiff base ligand H2L acts as a dibasic ligand with two NSO-tridentate sites and can coordinate with two metal ions to form binuclear complexes after deprotonation of the hydrogen atoms of the phenolic groups in all the complexes, except in the case of the acyclic mononuclear Ru(III) and VO(IV) complexes, where the Schiff base behaves as a neutral tetradentate chelate with N2S2 donor atoms. The ligands and the metal complexes were characterized by elemental analysis, IR, UV-vis, 1H-NMR, thermal gravimetric analysis (TGA) and ESR, as well as by measurements of conductivity and magnetic moments at room temperature. Electronic spectra and magnetic moments of the complexes indicate that the geometries of the metal centers are either tetrahedral, square planar or octahedral. Kinetic and thermodynamic parameters were calculated using the Coats-Redfern equation for the different thermal decomposition steps of the complexes. The ligands and the metal complexes were screened for their antimicrobial activity against Staphylococcus aureus as Gram-positive bacteria and Pseudomonas fluorescens as Gram-negative bacteria, in addition to the fungus Fusarium oxysporum. Most of the complexes exhibit mild antibacterial and antifungal activities against these organisms. © 2012 Elsevier B.V. All rights reserved. Source
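For reference, the Coats-Redfern linearization mentioned above has, for an assumed first-order decomposition step, the familiar form below (α: conversion, β: heating rate, E: activation energy, A: pre-exponential factor, R: gas constant); plotting the left-hand side against 1/T yields E from the slope:

```latex
\ln\!\left[\frac{-\ln(1-\alpha)}{T^{2}}\right]
  = \ln\!\left[\frac{AR}{\beta E}\left(1-\frac{2RT}{E}\right)\right]
  - \frac{E}{RT}
```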


Gulwani S.,Microsoft | Zuleger F.,Vienna University of Technology
Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI) | Year: 2010

We define the reachability-bound problem to be the problem of finding a symbolic worst-case bound on the number of times a given control location inside a procedure is visited in terms of the inputs to that procedure. This has applications in bounding resources consumed by a program such as time, memory, network-traffic, power, as well as estimating quantitative properties (as opposed to boolean properties) of data in programs, such as information leakage or uncertainty propagation. Our approach to solving the reachability-bound problem brings together two different techniques for reasoning about loops in an effective manner. One of these techniques is an abstract-interpretation based iterative technique for computing precise disjunctive invariants (to summarize nested loops). The other technique is a non-iterative proof-rules based technique (for loop bound computation) that takes over the role of doing inductive reasoning, while deriving its power from the use of SMT solvers to reason about abstract loop-free fragments. Our solution to the reachability-bound problem allows us to compute precise symbolic complexity bounds for several loops in .Net base-class libraries for which earlier techniques fail. We also illustrate the precision of our algorithm for disjunctive invariant computation (which has a more general applicability beyond the reachability-bound problem) on a set of benchmark examples. © 2010 ACM. Source
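As a toy illustration of the problem statement (not of the paper's algorithm): a symbolic reachability bound expresses the worst-case visit count of a control location in terms of the procedure inputs. In the following assumed example, the loop back-edge is visited at most max(0, n - x) times:

```python
def visits(x, n):
    """Counts back-edge visits for concrete inputs; the symbolic
    reachability bound for this location, in terms of the inputs,
    is max(0, n - x)."""
    count = 0
    while x < n:
        x += 1
        count += 1
    return count

assert visits(3, 10) == max(0, 10 - 3)
assert visits(12, 10) == max(0, 10 - 12)
```

The paper's contribution is computing such bounds statically, for far less regular loops, by combining disjunctive invariants with proof-rule-based bound computation.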


Ohrhallinger S.,Concordia University at Montreal | Mudur S.,Vienna University of Technology
Computer Graphics Forum | Year: 2013

We present an efficient algorithm for determining an aesthetically pleasing shape boundary connecting all the points in a given unorganized set of 2D points, with no other information than point coordinates. By posing shape construction as a minimisation problem which follows the Gestalt laws, our desired shape Bmin is non-intersecting, interpolates all points and minimizes a criterion related to these laws. The basis for our algorithm is an initial graph, an extension of the Euclidean minimum spanning tree but with no leaf nodes, called the minimum boundary complex BCmin. BCmin and Bmin can be expressed similarly by parametrizing a topological constraint. A close approximation of BCmin, termed BC0, can be computed fast using a greedy algorithm. BC0 is then transformed into a closed interpolating boundary Bout in two steps to satisfy Bmin's topological and minimization requirements. Computing Bmin exactly is an NP-hard problem, whereas Bout is computed in linearithmic time. We present many examples showing considerable improvement over previous techniques, especially for shapes with sharp corners. Source code is available online. © 2013 The Authors Computer Graphics Forum © 2013 The Eurographics Association and John Wiley & Sons Ltd. Source
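The minimum boundary complex builds on an EMST-like graph; as a small sketch of that ingredient only (the greedy construction of BC0 and the transformation to Bout are not reproduced here), the EMST of a point set can be computed with SciPy:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def emst_edges(points):
    """Edges of the Euclidean minimum spanning tree of a 2D point set."""
    D = squareform(pdist(points))           # dense pairwise distance matrix
    T = minimum_spanning_tree(D).tocoo()    # sparse EMST
    return list(zip(T.row.tolist(), T.col.tolist()))

pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5]])
print(emst_edges(pts))  # degree-1 nodes here are what BCmin disallows
```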


Neouze M.-A.,Vienna University of Technology | Neouze M.-A.,CEA Saclay Nuclear Research Center
Journal of Materials Science | Year: 2013

Nanoparticle assemblies are emerging, highly promising materials which aim at making use of the collective properties of nanoparticles. The synthesis pathway used to create nanoparticle assemblies ensures control over the distance between the nanoparticles. Various pathways for the formation of nanoparticle assemblies are presented, such as template-assisted or pressure-induced synthesis, layer-by-layer or evaporation-induced deposition, and the introduction of a molecular linker. Nanoparticle assemblies can address many cutting-edge applications such as plasmonics, sensing, or catalysis. © 2013 Springer Science+Business Media New York. Source


Szeider S.,Vienna University of Technology
Discrete Optimization | Year: 2011

Sat and Max Sat are among the most prominent problems for which local search algorithms have been successfully applied. A fundamental task for such an algorithm is to increase the number of clauses satisfied by a given truth assignment by flipping the truth values of at most k variables (k-flip local search). For a total number of n variables the size of the search space is of order n^k and grows quickly in k; hence most practical algorithms use 1-flip local search only. In this paper we investigate the worst-case complexity of k-flip local search, considering k as a parameter: is it possible to search significantly faster than the trivial n^k bound? In addition to the unbounded case we consider instances with a bounded number of literals per clause and instances where each variable occurs in a bounded number of clauses. We also consider the related problem that asks whether we can satisfy all clauses by flipping the truth values of at most k variables. © 2010 Elsevier B.V. All rights reserved. Source
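To make the trivial bound concrete, a brute-force k-flip improvement step simply enumerates all subsets of at most k variables — an O(n^k) search. This sketch is my own baseline illustration, not an algorithm from the paper:

```python
from itertools import combinations

def num_sat(clauses, assign):
    # clause: iterable of signed ints, DIMACS-style (positive = var true)
    return sum(any((lit > 0) == assign[abs(lit)] for lit in c) for c in clauses)

def k_flip_improve(clauses, assign, k):
    """Try all <= k flips; return an improved assignment or None. O(n^k)."""
    base = num_sat(clauses, assign)
    for r in range(1, k + 1):
        for subset in combinations(list(assign), r):
            cand = dict(assign)
            for v in subset:
                cand[v] = not cand[v]
            if num_sat(clauses, cand) > base:
                return cand
    return None

clauses = [(1, 2), (-1, 3), (-2, -3)]
print(k_flip_improve(clauses, {1: False, 2: False, 3: False}, k=2))
```

The paper's question is precisely whether this exhaustive enumeration can be beaten in the worst case when k is treated as a parameter.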


Nawratil G.,Vienna University of Technology
Computer Aided Geometric Design | Year: 2010

We present a set of planar parallel manipulators of Stewart Gough type which are singular with respect to the Schönflies group X(a) without being architecturally singular. This set of so-called Schönflies-singular planar parallel manipulators is characterized by the property that the carrier plane of the platform or of the base anchor points is orthogonal to the rotational axis a of the Schönflies group X(a). By giving the necessary and sufficient conditions we provide a complete classification of this set. Beside this algebraic characterization we also present a geometric one. Moreover we discuss the self-motional behavior of these manipulators and prove that they possess a quadratic singularity surface. © 2010 Elsevier B.V. Source


Stachel H.,Vienna University of Technology
Computer Aided Geometric Design | Year: 2010

A Kokotsakis mesh is a polyhedral structure consisting of an n-sided central polygon P0 surrounded by a belt of quadrangles or triangles in the following way: Each side ai of P0 is shared by an adjacent polygon Pi, and the relative motion between cyclically consecutive neighbor polygons is a spherical coupler motion. Hence, each vertex of P0 is the meeting point of four faces. In the case n=3 the mesh is part of an octahedron. These structures with rigid faces and variable dihedral angles were first studied in the 1930s. However, in recent years there has been a renaissance: the question under which conditions such meshes are infinitesimally or continuously flexible has gained high actuality in discrete differential geometry. The goal of this paper is to revisit the well-known continuously flexible examples (Bricard, Graf, Sauer, Kokotsakis) from the kinematic point of view and to extend their list by a new family. © 2010 Elsevier B.V. All rights reserved. Source


Schoeber M.,Vienna University of Technology
Real-Time Systems | Year: 2010

Automatic memory management or garbage collection greatly simplifies development of large systems. However, garbage collection is usually not used in real-time systems due to the unpredictable temporal behavior of current implementations of a garbage collector. In this paper we propose a real-time garbage collector that can be scheduled like a normal real-time thread with a deadline monotonic assigned priority. We provide an upper bound for the collector period so that the application threads will never run out of memory. Furthermore, we show that the restricted execution model of the Safety Critical Java standard simplifies root scanning and reduces copying of static data. Our proposal has been implemented and evaluated in the context of the Java processor JOP. © Springer Science+Business Media, LLC 2010. Source
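The paper's actual period bound depends on its collector and allocation model and is not given in this summary. Purely as an assumed back-of-the-envelope illustration of the idea — choose the collector period so that worst-case allocation cannot exhaust free memory between collections — consider:

```python
def max_collector_period(free_heap_bytes, max_alloc_rate_bytes_per_s,
                         safety_factor=2.0):
    """Illustrative assumption only, not the paper's bound: pick the period
    so that worst-case allocation during 'safety_factor' periods (e.g. the
    current cycle plus the next) still fits into the free heap."""
    return free_heap_bytes / (safety_factor * max_alloc_rate_bytes_per_s)

print(max_collector_period(8 * 2**20, 512 * 2**10))  # 8 MiB free, 512 KiB/s
```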


Kazianka H.,Vienna University of Technology
Computational Statistics and Data Analysis | Year: 2012

The issue of objective prior specification for the parameters in the normal compositional model is considered within the context of statistical analysis of linearly mixed structures in image processing. In particular, the Jeffreys prior for the vector of fractional abundances in case of a known covariance matrix is derived. If an additional unknown variance parameter is present, the Jeffreys prior and the reference prior are computed and it is proven that the resulting posterior distributions are proper. Markov chain Monte Carlo strategies are proposed to efficiently sample from the posterior distributions and the priors are compared on the grounds of the frequentist properties of the resulting Bayesian inferences. The default Bayesian analysis is illustrated by a dataset taken from fluorescence spectroscopy. © 2011 Elsevier B.V. All rights reserved. Source


Linsbichler T.,Vienna University of Technology
Frontiers in Artificial Intelligence and Applications | Year: 2014

Among the abundance of generalizations of abstract argumentation frameworks, the formalism of abstract dialectical frameworks (ADFs) has proved to be powerful in modelling various argumentation problems. Implementations of reasoning tasks for ADFs struggle with their high computational complexity, so methods simplifying the evaluation process are required. One such method is splitting, which was shown to be an effective optimization technique in other nonmonotonic formalisms. We apply this approach to ADFs by providing suitable techniques for directional splitting (allowing links only from the first to the second part of the splitting) under all the standard semantics of ADFs, as well as preliminary results on general splitting. © 2014 The authors and IOS Press. All rights reserved. Source


Polberg S.,Vienna University of Technology
Frontiers in Artificial Intelligence and Applications | Year: 2014

One of the most prominent tools for abstract argumentation is Dung's framework, AF for short. Although powerful, AFs have their shortcomings, which has led to the development of numerous enrichments. Among the most general ones are the abstract dialectical frameworks, also known as ADFs. They make use of so-called acceptance conditions to represent arbitrary relations. This level of abstraction brings not only new challenges, but also requires addressing existing problems in the field. One of the most controversial issues, recognized not only in argumentation, concerns support or positive dependency cycles. In this paper we introduce a new method to ensure the acyclicity of arguments and present a family of extension-based semantics built on it, along with their classification w.r.t. cycles. Finally, we provide ADF versions of the properties known from the Dung setting. © 2014 The Authors and IOS Press. Source


Harms J.,Vienna University of Technology
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2013

Forms have been static, document-like user interfaces (UIs) for centuries. This work proposes to evolve the 'form' UI metaphor towards more interactivity. Related work has proposed interactive form elements such as autocompleting or otherwise assistive input fields. But a unified concept and scientific reflection on the topic are missing. Methodologically, this work first provides a deeper understanding of forms as UI metaphor. It then presents relevant research goals for improved usability, including collaborative form filling, easier navigation in long forms, and combined input fields for comfortable data entry. Taken together, the contributions of this work are to provide a deeper understanding of forms, systematically highlight relevant research topics, and hopefully foster a scientific discussion in form design. © 2013 Springer-Verlag. Source


Kuehn C.,Vienna University of Technology
Journal of Nonlinear Science | Year: 2013

Critical transitions occur in a wide variety of applications including mathematical biology, climate change, human physiology and economics. Therefore it is highly desirable to find early-warning signs. We show that it is possible to classify critical transitions by using bifurcation theory and normal forms in the singular limit. Based on this elementary classification, we analyze stochastic fluctuations and calculate scaling laws of the variance of stochastic sample paths near critical transitions for fast-subsystem bifurcations up to codimension two. The theory is applied to several models: the Stommel-Cessi box model for the thermohaline circulation from geoscience, an epidemic-spreading model on an adaptive network, an activator-inhibitor switch from systems biology, a predator-prey system from ecology and to the Euler buckling problem from classical mechanics. For the Stommel-Cessi model we compare different detrending techniques to calculate early-warning signs. In the epidemics model we show that link densities could be better variables for prediction than population densities. The activator-inhibitor switch demonstrates effects in three time-scale systems and points out that excitable cells and molecular units have information for subthreshold prediction. In the predator-prey model explosive population growth near a codimension-two bifurcation is investigated and we show that early-warnings from normal forms can be misleading in this context. In the biomechanical model we demonstrate that early-warning signs for buckling depend crucially on the control strategy near the instability which illustrates the effect of multiplicative noise. © 2012 Springer Science+Business Media New York. Source
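The variance-based early-warning idea can be reproduced numerically: simulate the fold normal form with a slowly drifting parameter and watch a rolling variance grow as the transition is approached. The sketch below uses the standard fold normal form with arbitrary parameter values of my own choosing, not the paper's models:

```python
import numpy as np

rng = np.random.default_rng(1)
eps, sigma, dt = 0.01, 0.02, 1e-3
p, x = -1.0, 1.0                 # start on the stable branch x = sqrt(-p)
xs = []
while p < -0.01:                 # stop just before the fold at p = 0
    # fast variable: fold normal form dx = (-p - x^2) dt + sigma dW
    x += (-p - x * x) * dt + sigma * np.sqrt(dt) * rng.normal()
    p += eps * dt                # slow parameter drift toward the fold
    xs.append(x)

# Rolling variance as an early-warning sign: it grows near the transition.
w = 5000
rolling_var = [np.var(xs[i - w:i]) for i in range(w, len(xs), w)]
print(rolling_var[0], rolling_var[-1])
```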


Klotzsch E.,Vienna University of Technology
Philosophical transactions of the Royal Society of London. Series B, Biological sciences | Year: 2013

The plasma membrane is still one of the enigmatic cellular structures. Although the microscopic structure is getting clearer, not much is known about the organization at the nanometre level. Experimental difficulties have precluded unambiguous approaches, making the current picture rather fuzzy. In consequence, a variety of different membrane models has been proposed over the years, on the basis of different experimental strategies. Recent data obtained via high-resolution single-molecule microscopy shed new light on the existing hypotheses. We thus think it is a good time for reviewing the consistency of the existing models with the new data. In this paper, we summarize the available models in ten propositions, each of which is discussed critically with respect to the applied technologies and the strengths and weaknesses of the approaches. Our aim is to provide the reader with a sound basis for his own assessment. We close this chapter by presenting our picture of the membrane organization at the nanoscale. Source


Ederer N.,Vienna University of Technology
Renewable and Sustainable Energy Reviews | Year: 2015

A current growth rate of more than 30% indicates that offshore wind is a reasonable alternative to other energy sources. The industry today faces the challenge of becoming competitive and thus significantly reducing the cost of electricity from offshore wind. This situation implies that the evaluation of costs incurred during development, installation and operation is one of the most pressing issues in this industry at the moment. Unfortunately, current cost analyses suffer from input data of limited reliability and the application of overly simple methodologies. Therefore, the objective of this study was to elevate the discussion by providing stakeholders with a sophisticated methodology and representative benchmark figures. The use of Data Envelopment Analysis (DEA) allowed plants to be modelled as entities and costs to be related to the main site specifics, such as distance to shore and water depth, ensuring the necessary comparability. Moreover, a particularly reliable database was established using cost data from annual reports. Offshore wind capacity of 3.6 GW was benchmarked with regard to capital and operating cost efficiency, best-practice cost frontiers were determined, and the effects of learning-by-doing and economies of scale were investigated, making this article of significant interest for the offshore wind industry. © 2014 Elsevier Ltd. All rights reserved. Source
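
For readers unfamiliar with DEA, the sketch below computes an input-oriented CCR efficiency score as a linear program. The plant data are invented placeholders, and the paper's actual model (choice of inputs, outputs and orientation) may differ:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y, o):
    """Input-oriented CCR efficiency of unit o.
    X: (n_units, n_inputs), Y: (n_units, n_outputs)."""
    n = X.shape[0]
    c = np.zeros(n + 1); c[0] = 1.0          # minimize theta
    A_ub, b_ub = [], []
    for i in range(X.shape[1]):              # sum_j lam_j * x_ij <= theta * x_io
        A_ub.append(np.r_[-X[o, i], X[:, i]]); b_ub.append(0.0)
    for r in range(Y.shape[1]):              # sum_j lam_j * y_rj >= y_ro
        A_ub.append(np.r_[0.0, -Y[:, r]]); b_ub.append(-Y[o, r])
    bounds = [(0, None)] * (n + 1)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
    return res.x[0]                          # efficiency score in (0, 1]

# Hypothetical wind farms: inputs = (capex in M-EUR, annual opex in M-EUR),
# output = capacity (MW). All numbers are made up for illustration only.
X = np.array([[1200., 40.], [900., 35.], [1500., 60.]])
Y = np.array([[400.], [300.], [450.]])
for o in range(3):
    print(f"farm {o}: efficiency = {dea_ccr_input(X, Y, o):.3f}")
```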


Ederer N.,Vienna University of Technology
Energy | Year: 2014

Although early experience indicates that the maturity of the deployed technology might not be sufficient for operating wind farms at large scale far from shore, the rapid development of offshore wind energy is in full progress. Driven by customer demand and the pressure to keep pace with competitors, offshore wind turbine manufacturers continuously develop larger wind turbines instead of improving the present ones, which would ensure reliability in the harsh offshore environment. Following the logic that larger turbines generate a higher energy yield and therefore achieve higher efficiency, this trend is also supported by governmental subsidies in the expectation of bringing down the cost of electricity from offshore wind. The aim of this article is to demonstrate that, primarily due to the limited wind resource, upscaling offshore wind turbines beyond a size of 10 MW is not reasonable. This thesis is substantiated by applying the planning methodology of an offshore wind project developer to a case-study wind farm in the German North Sea and assessing energy yield, lifetime project profitability and levelized cost of electricity. The result is highly relevant for all stakeholders in the offshore wind industry and questions current subsidy policies supporting projects that develop turbines of up to 20 MW. © 2014 Elsevier Ltd. Source
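
The levelized cost of electricity mentioned here is, in its simplest form, discounted lifetime cost divided by discounted lifetime energy. A minimal sketch with invented numbers follows (the paper uses a far more detailed developer methodology):

```python
# Minimal LCOE sketch (simple cash-flow model, all numbers invented):
# discounted lifetime cost divided by discounted lifetime energy yield.
def lcoe(capex, opex_per_year, energy_per_year_mwh, years, rate):
    disc = [(1 + rate) ** -t for t in range(1, years + 1)]
    cost = capex + sum(opex_per_year * d for d in disc)
    energy = sum(energy_per_year_mwh * d for d in disc)
    return cost / energy        # EUR per MWh

# Illustrative comparison of two turbine sizes: upscaling raises the yield,
# but capex may grow faster at a wind-resource-limited site.
print(lcoe(capex=4.0e9, opex_per_year=1.2e8, energy_per_year_mwh=3.2e6,
           years=20, rate=0.08))
print(lcoe(capex=5.5e9, opex_per_year=1.3e8, energy_per_year_mwh=3.5e6,
           years=20, rate=0.08))
```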


Thurner P.J.,Vienna University of Technology | Thurner P.J.,University of Southampton | Katsamenis O.L.,University of Southampton
Current Osteoporosis Reports | Year: 2014

Strength is the most widely reported parameter with regard to bone failure. However, bone contains pre-existing damage and stress concentration sites, perhaps making measures of fracture toughness more indicative of the tissue's resistance to fracture. Several toughening mechanisms have been identified in bone, most prominently at the microscale. More recently, nanoscale toughening mechanisms, such as sacrificial bonds and hidden length or dilatational band formation, mediated by noncollagenous proteins, have been reported. The absence of specific noncollagenous proteins results in lowered fracture toughness in animal models. Further, roles have been proposed for several other putative influencing factors, such as closely bound water, collagen cross-linking and citrate bonds in bone mineral. Yet, it is still not clear whether and which mechanisms are hallmarks of osteoporosis and how they influence fracture risk. Further insights into the workings of such influencing factors are of high importance for developing complementary diagnostic and therapeutic strategies. © 2014 Springer Science+Business Media. Source


Piatkowska E.,AIT Austrian Institute of Technology | Belbachir A.N.,AIT Austrian Institute of Technology | Gelautz M.,Vienna University of Technology
Proceedings of the IEEE International Conference on Computer Vision | Year: 2013

This paper presents an adaptive cooperative approach to 3D reconstruction tailored to a bio-inspired depth camera: the stereo dynamic vision sensor (DVS). The DVS consists of self-spiking pixels that asynchronously generate events upon relative light-intensity changes. These sensors have the advantage of simultaneously allowing high temporal resolution (better than 10 μs) and wide dynamic range (>120 dB) with a sparse data representation, which is not possible with frame-based cameras. In order to exploit the potential of the DVS and benefit from its features, depth calculation should take into account the spatiotemporal and asynchronous nature of the data provided by the sensor. This work deals with developing an appropriate approach for an asynchronous, event-driven stereo algorithm. We propose a modification of the cooperative network in which the history of recent activity in the scene is stored to serve as spatiotemporal context used in the disparity calculation for each incoming event. The network constantly evolves in time as events are generated. In our work, not only is the spatiotemporal aspect of the data preserved but the matching is also performed asynchronously. The results of the experiments show that the proposed approach is well suited to DVS data and can be successfully used for our efficient passive depth camera. © 2013 IEEE. Source
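
To make the event-driven matching idea tangible, here is a heavily simplified Python sketch, not the paper's cooperative network: each sensor keeps a "time surface" of the most recent event timestamp per pixel, and an incoming event is matched along its row by comparing exponentially decayed recent activity. Rectified sensors are assumed, and all names and parameter values are illustrative:

```python
import numpy as np

# Heavily simplified event-driven stereo sketch (illustrative only).
H, W, D_MAX, TAU, R = 128, 128, 32, 10e-3, 2
ts_left = np.full((H, W), -np.inf)   # time surfaces: latest event time per pixel
ts_right = np.full((H, W), -np.inf)

def on_right_event(x, y, t):
    ts_right[y, x] = t               # right events only update their time surface

def on_left_event(x, y, t):
    """Update the left time surface and estimate a disparity for this event."""
    ts_left[y, x] = t
    if not (R <= y < H - R and R <= x < W - R):
        return None                  # skip events too close to the border
    # Exponentially decayed recent activity around the event (its context):
    ctx_l = np.exp((ts_left[y - R:y + R + 1, x - R:x + R + 1] - t) / TAU)
    best_d, best_score = 0, -np.inf
    for d in range(min(D_MAX, x - R)):           # candidate disparities, same row
        ctx_r = np.exp((ts_right[y - R:y + R + 1, x - d - R:x - d + R + 1] - t) / TAU)
        score = -np.abs(ctx_l - ctx_r).sum()     # spatiotemporal similarity
        if score > best_score:
            best_score, best_d = score, d
    return best_d
```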


Stricker S.A.,Vienna University of Technology
European Physical Journal C | Year: 2014

We investigate the behavior of energy-momentum tensor correlators in holographic N = 4 super Yang-Mills plasma, taking finite coupling corrections into account. In the thermal limit we determine the flow of quasinormal modes as a function of the 't Hooft coupling. Then we use a specific model of holographic thermalization to study the deviation of the spectral densities from their thermal limit in an out-of-equilibrium situation. The main focus lies on the thermalization pattern with which the plasma constituents approach their thermal distribution as the coupling constant decreases from the infinite coupling limit. All obtained results point towards the weakening of the usual top-down thermalization pattern. © 2014 The Author(s). Source


Dall'Ara E.,Vienna University of Technology | Schmidt R.,Medical University of Vienna | Zysset P.,University of Bern
Bone | Year: 2012

Bone mineral density and microarchitecture were found to predict 70-95% of bone strength. Microdamage, as a factor of bone quality, might help to explain the remaining uncertainty. The goal of this study was to investigate whether microindentation can discriminate between intact and severely damaged human vertebral bone tissue in vitro. One portion from each human vertebral slice (N = 35) tested in compression in a previous study was embedded, polished and tested in wet conditions by means of microindentation. The indentation moduli and hardness (HV) of trabecular, osteonal and interstitial bone structural units were computed along the cranio-caudal direction. Each indented region was classified as damaged or intact as seen under a light microscope. A total of 1190 indentations were performed. While both hardness and indentation modulus were independent of gender, both mechanical properties were affected by damage and microstructure. The damaged regions showed 50% lower stiffness and hardness compared to undamaged ones. Interstitial bone was stiffer and harder (13.2 ± 4.4 GPa and 44.7 ± 20.3 HV) than osteonal bone (10.9 ± 3.8 GPa and 37.8 ± 17.3 HV), which was in turn stiffer and harder than trabecular bone (8.1 ± 3.0 GPa and 28.8 ± 11.2 HV) indented in the transverse direction. Moreover, along the axial direction intact trabecular bone (11.4 ± 4.3 GPa) was 16% less stiff than intact interstitial bone and as stiff as intact osteonal bone. In conclusion, microindentation was found to discriminate between highly damaged and intact tissue in both trabecular and cortical bone tested in vitro. It remains to be investigated whether this technique can also detect the damage induced by physiological loads in vivo. © 2012 Elsevier Inc. Source
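
For context, depth-sensing microindentation typically extracts an indentation modulus from the unloading stiffness via the Oliver-Pharr relations. The sketch below assumes a diamond Berkovich tip; the study's exact protocol and tip geometry may differ, and all input values are invented:

```python
import numpy as np

# Oliver-Pharr sketch (assumed diamond Berkovich tip; the study's protocol
# may differ). S: unloading stiffness dP/dh [N/m]; A_c: contact area [m^2].
def indentation_modulus(S, A_c, E_i=1141e9, nu_i=0.07, beta=1.034):
    E_r = np.sqrt(np.pi) * S / (2.0 * beta * np.sqrt(A_c))  # reduced modulus
    return 1.0 / (1.0 / E_r - (1.0 - nu_i**2) / E_i)        # specimen modulus

# Plausible but invented values for wet trabecular bone tissue:
print(indentation_modulus(S=2.4e4, A_c=4.0e-12) / 1e9, "GPa")  # ~10 GPa
```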


The deployment and retrieval of satellites from a space station are demanding tasks during the operation of tethered satellite systems. The satellite should be steered into its working state within a reasonable amount of time and without excessive control effort. For the pure in-plane oscillation we have found time-optimal solutions with a bang-bang control strategy for the deployment and retrieval process. In our working group we have also investigated different stabilization methods for the vertical equilibrium configuration, for example parametric swing control and chaotic control. In this article we concentrate on the final stage of the operation, when the oscillations around the vertical configuration should be brought to a halt. While this task is quite simple for a motion of the satellite in the orbital plane, it is considerably more difficult if the satellite has been perturbed out of that plane. We first analyze the control for a purely out-of-plane oscillation, which is governed by a Hamiltonian Hopf bifurcation, and then investigate the combined control for the spatial dynamics. Using a center manifold ansatz for the in-plane oscillations, we can show that it is possible to diminish the oscillations of the tethered satellite in both directions, but the decay is extremely slow. © 2014 Springer Science+Business Media Dordrecht. Source


Nawratil G.,Vienna University of Technology
Transactions of the Canadian Society for Mechanical Engineering | Year: 2013

It has previously been shown that non-architecturally singular parallel manipulators of Stewart-Gough type, where the planar platform and the planar base are related by a projectivity, have either so-called elliptic self-motions or pure translational self-motions. As the geometry of all manipulators with translational self-motions is already known, we focus on elliptic self-motions. We show that these necessarily one-parametric self-motions have a second, instantaneously local, degree of freedom in each pose of the self-motion. Moreover, we introduce a geometrically motivated classification of elliptic self-motions and study the so-called orthogonal ones in detail. Source


Wielemaker J.,VU University Amsterdam | Schrijvers T.,Ghent University | Triska M.,Vienna University of Technology | Lager T.,Gothenburg University
Theory and Practice of Logic Programming | Year: 2012

SWI-Prolog is neither a commercial Prolog system nor a purely academic enterprise, but increasingly a community project. The core system has been shaped to its current form while being used as a tool for building research prototypes, primarily for knowledge-intensive and interactive systems. Community contributions have added several interfaces and the constraint logic programming (CLP) libraries. Commercial involvement has created the initial garbage collector and added several interfaces and two development tools: PlDoc (a literate programming documentation system) and PlUnit (a unit testing environment). In this article, we present SWI-Prolog as an integrating tool, supporting a wide range of ideas developed in the Prolog community and acting as glue between foreign resources. This article itself is the glue between technical articles on SWI-Prolog, providing context and experience in applying them over a longer period. © Copyright Cambridge University Press 2011. Source


Aleksic S.,Vienna University of Technology
Telecommunication Systems | Year: 2013

High-capacity optical transmission technologies have made possible very high data rates and a large number of wavelength channels. Further, optical network functionality has progressed from simple point-to-point WDM links to automatically switched optical networks. In the future, dynamic burst-switched and packet-switched photonic networks may be expected. This paper describes a novel architecture for a transparent WDM metropolitan area network (MAN) that is capable of switching on both a packet-by-packet and a burst-by-burst basis, thereby having the potential to achieve high throughput efficiency. The optically transparent MAN also includes a large part of the access network infrastructure. It is scalable, flexible, easily upgradeable and able to support heterogeneous network traffic. Some results of a preliminary study on network performance are shown. © 2011 Springer Science+Business Media, LLC. Source


Kopacek P.,Vienna University of Technology
Elektrotechnik und Informationstechnik | Year: 2013

Robotics has been a fast-growing field, especially in recent years. In the late 1970s the first industrial applications of stationary, unintelligent industrial robots were realised. Since the beginning of the 1990s a new generation of mobile, intelligent, cooperative robots has emerged. This new generation opens new application areas, e.g. in production automation, in agriculture, in the food industry, in the household, for medical and rehabilitation applications, in the entertainment industry, as well as for leisure and hobby. Current development trends are humanoid robots and robots supporting people in everyday life. Other intensive research areas are cooperative robots, bio-inspired robots, ubiquitous robots and cloud robots. The following paper is divided into three parts: current state, future development trends, and visions in robotics. © 2013 Springer Verlag Wien. Source


Galabov V.,Vienna University of Technology
Journal of Electrical Engineering | Year: 2013

By means of the rise-of-temperature method, the regional distribution of the local building factor BF was measured at 23 positions in a model transformer core assembled from grain-oriented SiFe. In a systematic way, the case of mere AC excitation was compared with that of additional DC excitation in the middle limb. The mere AC case showed the lowest BF in the central regions of the outer limbs and the highest in the corners and, in particular, in the T-joints, due to rotational magnetization (RM). DC bias yielded strongly increased BF in regions of alternating magnetization and lower values in regions of RM, a tendency which is interpreted through domain theory. Energetic relevance is not expected for the case of geomagnetically induced currents, as strong effects are restricted in time. On the other hand, long-term weak bias may deteriorate the performance of 5-limb 3-phase cores and, in particular, that of 1-phase cores. © 2010 FEI STU. Source


Kirnbauer F.,Bioenergy 2020+ GmbH | Wilk V.,Bioenergy 2020+ GmbH | Hofbauer H.,Vienna University of Technology
Fuel | Year: 2013

To meet the aims of the worldwide effort to reduce greenhouse gas emissions, product gas from biomass steam gasification in DFB (dual fluidized bed) gasification plants can play an important role in the production of electricity, transportation fuels and chemicals. Using a catalytically active bed material, such as olivine, brings advantages concerning tar reduction in the product gas. Experience from industrial-scale gasification plants has shown that a modification of the olivine occurs during operation due to the interaction of the bed material with ash components from the biomass and additives. This interaction leads to a calcium-rich layer on the bed material particles, which influences the gasification properties and reduces the tar concentration in the product gas. In this paper, the influence of a reduction of the gasification temperature on gasification performance, product gas composition and tar formation is studied. A variation of the gasification temperature from 870 °C to 750 °C was carried out in a 100 kW pilot plant. Reducing the gasification temperature to 750 °C lowers the concentration of hydrogen and carbon monoxide in the product gas and increases the concentration of carbon dioxide and methane. The product gas volume produced per kg of fuel is reduced at lower gasification temperatures, but the calorific value of the product gas increases. The volumetric concentration of tars in the product gas increases slightly down to 800 °C and nearly doubles when the gasification temperature is decreased to 750 °C. The tars detected by gas chromatography-mass spectrometry (GC-MS) were classified into substance groups and related to the fuel input to the gasifier; they showed a decrease in naphthalenes and polycyclic aromatic hydrocarbons (PAHs) and an increase in phenols, aromatic compounds and furans when the gasification temperature was reduced. The comparison with results from an earlier study, in which the gasification properties of unused fresh olivine were compared with used olivine, underlines the importance of a long retention time of the bed material in the gasifier, ensuring the formation of a calcium-rich layer on the bed material. © 2012 Elsevier Ltd. All rights reserved. Source
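
As a back-of-the-envelope illustration of why the calorific value can rise while the gas yield falls, the product gas lower heating value can be estimated from its composition using standard volumetric LHVs of the combustible species. The compositions below are invented, not the paper's measurements:

```python
# Rounded standard lower heating values in MJ/Nm3: H2 10.8, CO 12.6,
# CH4 35.8, C2H4 59.5. Example compositions are invented for illustration.
LHV = {"H2": 10.8, "CO": 12.6, "CH4": 35.8, "C2H4": 59.5}

def gas_lhv(vol_frac):
    """vol_frac: dict of volume fractions of combustible species."""
    return sum(LHV[s] * f for s, f in vol_frac.items())

hot = {"H2": 0.40, "CO": 0.25, "CH4": 0.10, "C2H4": 0.02}   # ~870 C (invented)
cold = {"H2": 0.35, "CO": 0.20, "CH4": 0.13, "C2H4": 0.03}  # ~750 C (invented)
print(gas_lhv(hot), gas_lhv(cold))  # lower T: less H2/CO, more CH4 -> higher LHV
```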


Fenz S.,Vienna University of Technology
Proceedings of the ACM Symposium on Applied Computing | Year: 2010

Legal regulations and industry standards require organizations to measure and maintain a specified IT-security level. Although several IT-security metrics approaches have been developed, a methodology for automatically generating ISO 27001-based IT-security metrics based on concrete organization-specific control implementation knowledge is missing. Based on the security ontology by Fenz et al., including information security domain knowledge and the necessary structures to incorporate organization-specific facts into the ontology, this paper proposes a methodology for automatically generating ISO 27001-based IT-security metrics. The conducted validation has shown that the research results are a first step towards increasing the degree of automation in the field of IT-security metrics. Using the introduced methodology, organizations are enabled to evaluate their compliance with information security standards, and to evaluate control implementations' effectiveness at the same time. © 2010 ACM. Source


Neophytou N.,University of Warwick | Kosina H.,Vienna University of Technology
Applied Physics Letters | Year: 2014

We investigate the effect of electrostatic gating on the thermoelectric power factor of p-type Si nanowires (NWs) of up to 20 nm in diameter in the [100], [110], and [111] crystallographic transport orientations. We use atomistic tight-binding simulations for the calculation of the NW electronic structure, coupled to the linearized Boltzmann transport equation for the calculation of the thermoelectric coefficients. We show that gated NW structures can provide a ∼5× larger thermoelectric power factor compared to doped channels, attributed to their high hole phonon-limited mobility, as well as to gating-induced bandstructure modifications which further improve mobility. Despite the fact that gating shifts the charge carriers towards the NW surface, surface roughness scattering is not strong enough to degrade the transport properties of the accumulated hole layer. The highest power factor is achieved for the [111] NW, followed by the [110], and finally by the [100] NW. As the NW diameter increases, the advantage of the gated channel is reduced. We show, however, that even at 20 nm diameters (the largest ones that we were able to simulate), a ∼3× higher power factor for gated channels is observed. Our simulations suggest that the advantage of gating could still be present in NWs with diameters of up to ∼40 nm. © 2014 AIP Publishing LLC. Source
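
For reference, the power factor optimized in this work is the standard thermoelectric quantity (a textbook definition, not specific to this paper), which also enters the figure of merit ZT:

```latex
\mathrm{PF} = S^{2}\sigma ,
\qquad
ZT = \frac{S^{2}\sigma\,T}{\kappa_{e} + \kappa_{\ell}}
```

Here S is the Seebeck coefficient, σ the electrical conductivity, T the temperature, and κ_e, κ_ℓ the electronic and lattice thermal conductivities.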


Wilk V.,Bioenergy2020 GmbH | Hofbauer H.,Vienna University of Technology
Fuel | Year: 2013

During gasification, fuel nitrogen is converted into gaseous species such as NH3, HCN and others. Several materials are gasified in the dual fluidized bed gasification pilot plant in order to assess the conversion of fuel nitrogen. The fuels tested in this study are different kinds of waste wood, bark and plastic residues. The nitrogen content of these materials ranges from 0.05 to 2.70 wt.-%. Detailed measurements of NH3, N2, HCN, NO and nitrogenous tars are carried out during the test runs. It is found that the vast majority of the nitrogen is present in the form of NH3, and there is a highly accurate linear relationship between fuel nitrogen and NH3 in the producer gas. The nitrogen balance of the dual fluidized bed gasification system shows the distribution of nitrogen in the two coupled reactors of the gasification system. Nitrogen conversion is found to occur almost exclusively in the gasification reactor. Only minor amounts of nitrogen are found in the char, which is transported to the combustion reactor and converted to NO there. This result provides important information on the gas cleaning requirements when nitrogen-rich fuels are gasified. © 2012 Elsevier Ltd. All rights reserved. Source
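
The reported linear relation between fuel nitrogen and NH3 is the kind of fit sketched below; the data points are invented placeholders and only the fitting procedure is illustrated:

```python
import numpy as np

# Invented placeholder data: fuel nitrogen [wt.-%] vs NH3 in the producer
# gas [ppmv]. Only the least-squares fitting procedure is illustrated.
fuel_N = np.array([0.05, 0.4, 1.1, 1.8, 2.7])
nh3 = np.array([120., 900., 2600., 4200., 6400.])

slope, intercept = np.polyfit(fuel_N, nh3, 1)
pred = slope * fuel_N + intercept
r2 = 1 - ((nh3 - pred)**2).sum() / ((nh3 - nh3.mean())**2).sum()
print(f"NH3 ~ {slope:.0f} * fuel_N + {intercept:.0f}, R^2 = {r2:.3f}")
```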


Wilk V.,Bioenergy2020 GmbH | Hofbauer H.,Vienna University of Technology
Energy and Fuels | Year: 2013

Co-gasification of biomass and plastics was investigated in a 100 kW dual fluidized-bed pilot plant using four types of plastic material of different origins together with soft wood pellets. The proportion of plastics was varied within a broad range to assess the interaction of the materials. The product gas composition was considerably influenced by co-gasification, and the changes were nonlinear. More CO and CO2 were measured in the product gas from co-gasification than would be expected from linear interpolation between mono-gasification of the individual materials. Less CH4 and C2H4 were formed, and the tar content in the product gas was considerably lower than expected. Along with the generation of more product gas than expected, co-gasification of wood and plastic materials had further beneficial effects. Because of the fuel mixtures, more radicals of different types were available, which interacted with each other and with the fluidization steam, enhancing the reforming reactions. Wood char had a positive effect on polymer decomposition, steam reforming and tar reduction. As a result of the more active splash zone during co-gasification of wood and plastics, contact between gas and bed material was enhanced, which is crucial for catalytic tar removal. © 2013 American Chemical Society. Source


Taricco G.,Polytechnic University of Turin | Riegler E.,Vienna University of Technology
IEEE Transactions on Information Theory | Year: 2011

An asymptotic approach to derive the ergodic-capacity-achieving covariance matrix for a multiple-input multiple-output (MIMO) channel is presented. The method is applicable to MIMO channels affected by separately correlated Rician fading and co-channel interference. It is assumed that the numbers of transmit, receive and interfering antennas grow asymptotically while their ratios, as well as the SNR and the SIR, approach finite constants. Nevertheless, it is shown that the asymptotic results represent an accurate approximation in the case of finitely many antennas and can be used to derive the ergodic channel capacity. This is accomplished by using an iterative power allocation algorithm based on a water-filling approach. The convergence of a similar algorithm (nicknamed frozen water-filling) was conjectured in a work by Dumont. Here, we show that, in the Rayleigh case, the frozen water-filling algorithm may fail to converge, whereas our proposed algorithm converges in those cases. Finally, numerical results are included in order to assess the accuracy of the proposed asymptotic method, which is compared to equivalent results obtained via Monte Carlo simulations. © 2011 IEEE. Source
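
The classic water-filling step underlying such power-allocation algorithms can be sketched in a few lines; this is the textbook building block, not the authors' full iterative algorithm over channel statistics:

```python
import numpy as np

# Textbook water-filling power allocation: P_k = max(0, mu - 1/g_k), with the
# water level mu chosen by bisection so that the total power budget P is met.
def water_fill(g, P, tol=1e-12):
    lo, hi = 0.0, P + (1.0 / g).max()    # bracket for the water level
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        used = np.maximum(0.0, mu - 1.0 / g).sum()
        lo, hi = (mu, hi) if used < P else (lo, mu)
    return np.maximum(0.0, mu - 1.0 / g)

g = np.array([2.0, 1.0, 0.3, 0.05])      # illustrative eigen-channel gains
p = water_fill(g, P=1.0)
print(p, p.sum())                        # weak channels may receive zero power
```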


Bergshoeff E.,University of Groningen | Rosseel J.,Vienna University of Technology | Zojer T.,University of Groningen
Classical and Quantum Gravity | Year: 2015

We define a procedure that, starting from a relativistic theory of supergravity, leads to a consistent, non-relativistic version thereof. As a first application we use this limiting procedure to show how the Newton-Cartan formulation of non-relativistic gravity can be obtained from general relativity. Then we apply it in a supersymmetric case and derive a novel, non-relativistic, off-shell formulation of three-dimensional Newton-Cartan supergravity. © 2015 IOP Publishing Ltd. Source


Kallosh R.,Stanford University | Wrase T.,Vienna University of Technology
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2015

We present the explicit de Sitter supergravity action describing the interaction of supergravity with an arbitrary number of chiral and vector multiplets as well as one nilpotent chiral multiplet. The action has a non-Gaussian dependence on the auxiliary field of the nilpotent multiplet; however, it can be integrated out for an arbitrary matter-coupled supergravity. The general supergravity action with a given Kähler potential K, superpotential W and vector matrix f_AB interacting with a nilpotent chiral multiplet consists of the standard supergravity action defined by K, W and f_AB where the scalar in the nilpotent multiplet has to be replaced by a bilinear combination of the fermion in the nilpotent multiplet divided by the Gaussian value of the auxiliary field. All additional contributions to the action start with terms quartic and higher order in the fermion of the nilpotent multiplet. These are given by a simple universal closed form expression. © 2015 American Physical Society. Source


Weidow J.,Chalmers University of Technology | Weidow J.,Vienna University of Technology
Ultramicroscopy | Year: 2013

A tantalum-doped tungsten carbide powder, (W,Ta)C, was prepared with the purpose of maximising the amount of Ta in the hexagonal mixed-crystal carbide. Atom probe tomography (APT) was considered the best technique to quantitatively measure the amount of Ta within this carbide. As the carbide powder consisted of very small particles (<1 μm), a method to produce APT specimens of such a powder was developed. The powder was first embedded in copper, and a FIB-SEM workstation was used to make an in-situ lift-out from a selected powder particle. The powder particle was then deposited on a post made from a WC-Co based cemented carbide specimen. With the use of a laser-assisted atom probe, it was shown that the method works and that the Ta content of the (W,Ta)C can be measured quantitatively. © 2013 Elsevier B.V. Source


Wolfram U.,University of Ulm | Wilke H.-J.,University of Ulm | Zysset P.K.,Vienna University of Technology
Bone | Year: 2010

For understanding the fracture risk of vertebral bodies, the macroscopic mechanical properties of the cancellous core are of major interest. Due to the hierarchical nature of bone, these depend in turn on the micromechanical properties of the bone extracellular matrix, which is at least linear elastic transverse isotropic. The experimental determination of local elastic properties of bone ex vivo necessitates a high spatial resolution, which can be provided by depth-sensing indentation techniques. Using microindentation, this study investigated the effects of rehydration on the transverse isotropic elastic properties of vertebral trabecular bone matrix obtained from two orthogonal directions, with a view to microanatomical location, age, gender, vertebral level and anatomic direction in a conjoint statistical analysis. Biopsies were obtained from 104 human vertebrae (T1-L3) with a median age of 65 years (21-94). Wet elastic moduli were 29% lower (p < 0.05) than dry elastic moduli. For wet indentation, the ratio of mean elastic moduli tested in the axial direction to those tested in the transverse direction was 1.13 to 1.23 times higher than for dry indentation. The ratio of elastic moduli tested in the core to those tested in the periphery of trabeculae was 1.05 to 1.16 times higher when testing wet. Age and gender did not show any influence on the elastic moduli for wet and dry measurements. The correlation between vertebral level and elastic moduli became weaker after rehydration (p_wet < 0.09, r²_wet = 0.14 versus p_dry < 0.01, r²_dry = 0.38). Elastic and dissipated energies were affected by rehydration similarly to the elastic modulus. No significant difference in the energies could be found for gender (p > 0.05). Significant differences in the energies were found for age (p < 0.05) after rehydration. Qualitative and quantitative insights into the transverse isotropic elastic properties of trabecular bone matrix under two testing conditions could be given over a broad spectrum of vertebrae. This study could help to further improve understanding of the mechanical properties of vertebral trabecular bone. © 2009 Elsevier Inc. All rights reserved. Source


Andringa R.,University of Groningen | Bergshoeff E.A.,University of Groningen | Rosseel J.,Vienna University of Technology | Sezgin E.,Texas A&M University
Classical and Quantum Gravity | Year: 2013

We construct a supersymmetric extension of three-dimensional Newton-Cartan gravity by gauging a super-Bargmann algebra. In order to obtain a non-trivial supersymmetric extension of the Bargmann algebra one needs at least two supersymmetries, leading to an N = 2 super-Bargmann algebra. Due to the fact that there is a universal Newtonian time, only one of the two supersymmetries can be gauged. The other supersymmetry is realized as a fermionic Stueckelberg symmetry and only survives as a global supersymmetry. We explicitly show how, in the frame of a Galilean observer, the system reduces to a supersymmetric extension of the Newton potential. The corresponding supersymmetry rules can only be defined provided we also introduce a dual Newton potential. We comment on the four-dimensional case. © 2013 IOP Publishing Ltd Printed in the UK and the USA. Source


Hobler G.,Vienna University of Technology
Nuclear Instruments and Methods in Physics Research, Section B: Beam Interactions with Materials and Atoms | Year: 2015

Many experiments indicate the importance of stress and stress relaxation upon ion implantation. In this paper, a model is proposed that is capable of describing ballistic effects as well as stress relaxation by viscous flow. It combines atomistic binary collision simulation with continuum mechanics. The only parameters that enter the continuum model are the bulk modulus and the radiation-induced viscosity. The shear modulus can also be considered but shows only minor effects. A boundary-fitted grid is proposed that is usable both during the binary collision simulation and for the spatial discretization of the force balance equations. As an application, the milling of a slit into an amorphous silicon membrane with a 30 keV focused Ga beam is studied, which demonstrates the relevance of the new model compared to a more heuristic approach used in previous work. © 2014 Elsevier B.V. All rights reserved. Source
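
The continuum half of such a model can be pictured as Maxwell-type viscous relaxation of deviatoric stress with a radiation-induced viscosity. The toy integrator below (all values illustrative, and uncoupled from any collision simulation) shows the basic relaxation behavior:

```python
# Toy Maxwell-type relaxation of deviatoric stress with a radiation-induced
# viscosity (illustrative values; a real model couples this to the binary
# collision simulation on a boundary-fitted grid).
G = 30e9                 # shear modulus [Pa] (the paper finds only minor effects)
eta = 1e12               # radiation-induced viscosity [Pa s] under the beam
tau = eta / G            # relaxation time scale [s]
dt = tau / 100.0

sigma = 1.0e9            # initial deviatoric stress from ballistic damage [Pa]
for _ in range(500):
    sigma -= sigma / tau * dt       # explicit update of d(sigma)/dt = -sigma/tau
print(f"{sigma:.3e} Pa remaining")  # ~ exp(-5) of the initial stress
```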


Nawratil G.,Vienna University of Technology
Applied Mechanics and Materials | Year: 2012

In this paper we give a detailed review of Stewart Gough (SG) platforms with self-motions and the related Borel-Bricard problem. Moreover, we report on recent results achieved by the author on this topic (SG platforms with type II DM self-motions). In the context of these results, we also present two new theorems, which open the way for future work. Finally, we give some remarks on planar SG platforms with type I DM self-motions and formulate a central conjecture. © (2012) Trans Tech Publications, Switzerland. Source


Aleksic S.,Vienna University of Technology
Journal of Lightwave Technology | Year: 2012

Current high-capacity and long-reach optical fiber links would not be possible without optical amplification. Especially the use of erbium-doped fiber amplifiers (EDFAs) has revolutionized optical communication systems during the last two decades. Although the amplification process and the various effects occurring in rare-earth-doped amplifiers have already been well understood and accurately modeled, the evolution of thermodynamic entropy and other thermodynamic aspects have not been sufficiently considered in the past. This paper analyzes the amplification process in EDFAs from the thermodynamic point of view and proposes a novel modeling approach to evaluate both energy and entropy dynamics. The model is described in detail and some exemplary numerical results are presented. © 2012 IEEE. Source


Nawratil G.,Vienna University of Technology
Journal of Mechanisms and Robotics | Year: 2013

We transfer the basic idea of bonds, introduced by Hegedüs, Schicho, and Schröcker for overconstrained closed chains with rotational joints, to the theory of self-motions of parallel manipulators of Stewart Gough (SG) type. Moreover, we present some basic facts and results on bonds and demonstrate the potential of this theory on the basis of several examples. As a by-product we give a geometric characterization of all SG platforms with a pure translational self-motion and of all spherical three-degree-of-freedom (DOF) RPR manipulators with self-motions. © 2014 by ASME. Source


Lindberg E.,Swedish University of Agricultural Sciences | Hollaus M.,Vienna University of Technology
Remote Sensing | Year: 2012

This study compares methods to estimate stem volume, stem number and basal area from Airborne Laser Scanning (ALS) data for 68 field plots in a hemi-boreal, spruce dominated forest (Lat. 58°N, Long. 13°E). The stem volume was estimated with five different regression models: one model based on height and density metrics from the ALS data derived from the whole field plot, two models based on similar combinations derived from 0.5 m raster cells, and two models based on canopy volumes from the ALS data. The best result was achieved with a model based on height and density metrics derived from 0.5 m raster cells (Root Mean Square Error or RMSE 37.3%) and the worst with a model based on height and density metrics derived from the whole field plot (RMSE 41.9%). The stem number and the basal area were estimated with: (i) area-based regression models using height and density metrics from the ALS data; and (ii) single tree-based information derived from local maxima in a normalized digital surface model (nDSM) mean filtered with different conditions. The estimates from the regression model were more accurate (RMSE 52.7% for stem number and 21.5% for basal area) than those derived from the nDSM (RMSE 63.4%-91.9% and 57.0%-175.5%, respectively). The accuracy of the estimates from the nDSM varied depending on the filter size and the conditions of the applied filter. This suggests that conditional filtering is useful but sensitive to the conditions. © 2012 by the authors. Source
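
The area-based approach referred to in (i) boils down to regressing plot-level forest variables on ALS height and density metrics and reporting the relative RMSE. A synthetic-data sketch follows (metrics, coefficients and noise levels are placeholders, not the study's data):

```python
import numpy as np

# Synthetic sketch of the area-based approach: regress plot-level stem volume
# on ALS height/density metrics, then report relative RMSE.
rng = np.random.default_rng(1)
n = 68                                   # number of field plots
h90 = rng.uniform(10, 30, n)             # e.g. 90th height percentile [m]
dens = rng.uniform(0.3, 0.9, n)          # e.g. fraction of returns above 2 m
vol = 8.0 * h90 + 150.0 * dens + rng.normal(0, 25, n)   # synthetic stem volume

X = np.column_stack([np.ones(n), h90, dens])
beta, *_ = np.linalg.lstsq(X, vol, rcond=None)
pred = X @ beta
rmse_pct = 100 * np.sqrt(((vol - pred) ** 2).mean()) / vol.mean()
print(f"relative RMSE = {rmse_pct:.1f} %")
```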


Peel M.C.,University of Melbourne | Bloschl G.,Vienna University of Technology
Progress in Physical Geography | Year: 2011

Changing hydrological conditions due to climate, land use and infrastructure pose significant ongoing challenges to the hydrological research and water management communities. While, traditionally, hydrological models have assumed stationary conditions, there has been much progress since 2005 on model parameter estimation under unknown or changed conditions and on techniques for modelling in those conditions. There is an analogy between extrapolation in space (termed Prediction in Ungauged Basins, PUB), and extrapolation in time (termed Prediction in Ungauged Climates, PUC) that can be exploited for estimating model parameters. Methods for modelling changing hydrological conditions need to progress beyond the current scenario approach, which is reliant upon precalibrated models. Top-down methods and analysis of spatial gradients of a variable of interest, instead of temporal gradients (a method termed 'Trading space for time') show much promise for validating more complex model projections. Understanding hydrological processes and how they respond to change, along with quantification of parameter estimation and modelling process uncertainty will continue to be active areas of research within hydrology. Contributions from these areas will not only help inform future climate change impact studies about what will change and by how much, but also provide insight into why any changes may occur, what changes we are able to predict in a realistic manner, and what changes are beyond the current predictability of hydrological systems. © The Author(s) 2011. Source


Gruber P.M.,Vienna University of Technology
Discrete and Computational Geometry | Year: 2011

John's ellipsoid criterion characterizes the unique ellipsoid of globally maximum volume contained in a given convex body C. In this article local and global maximum properties of the volume on the space of all ellipsoids in C are studied, where ultra maximality is a stronger version of maximality: the volume is nowhere stationary. The ellipsoids for which the volume is locally maximum, resp. locally ultra maximum are characterized. The global maximum is the only local maximum and for generic C it is an ultra maximum. The characterizations make use of notions originating from the geometric theory of positive quadratic forms. Part of these results generalize to the case where the ellipsoids are replaced by affine copies of a convex body D. In contrast to the ellipsoid case, there are convex bodies C and D, such that on the space of all affine images of D in C the volume has countably many local maxima. All results have dual counterparts. Extensions to the surface area and, more generally, to intrinsic volumes are mentioned. © 2011 Springer Science+Business Media, LLC. Source


Jenei S.,Vienna University of Technology | Jenei S.,University of Pecs
Journal of Logic and Computation | Year: 2011

The main 'philosophical' outcome of this article is to demonstrate that the structural description of residuated lattices requires the use of the co-residuated setting. A construction, called skew symmetrization, which generalizes the well-known representation of an ordered Abelian group obtained from the positive (or negative) cone of the algebra is introduced here. Its definition requires leaving the accustomed residuated setting and entering the co-residuated setting. It is shown that every uninorm on [0, 1] with an involution defined by the residual complement with respect to the unit and having the unit as the fixed point of the involution can be described as the skew symmetrization of its underlying t-norm or underlying t-conorm. © 2009 The Author. Source


Weller D.,Vienna University of Technology
Theoretical Computer Science | Year: 2011

When investigating the complexity of cut-elimination in first-order logic, a natural subproblem is the elimination of quantifier-free cuts. So far, the problem has only been considered in the context of general cut-elimination, and the upper bounds that have been obtained are essentially double exponential. In this note, we observe that a method due to Dale Miller can be applied to obtain an exponential upper bound. © 2011 Elsevier B.V. All rights reserved. Source


Basu R.,Saha Institute of Nuclear Physics | Riegler M.,Vienna University of Technology
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2016

In this paper we present in more detail a construction using Wilson lines and the corresponding dual Galilean conformal field theory calculations for analytically determining holographic entanglement entropy for flat space in 2+1 dimensions first presented in A. Bagchi, R. Basu, D. Grumiller, and M. Riegler, Phys. Rev. Lett. 114, 111602 (2015). In addition, we show how the construction using Wilson lines can be expanded to flat space higher-spin theories and determine the thermal entropy of (spin-3 charged) flat space cosmologies using this approach. © 2016 American Physical Society. Source


Fritz H.,Vienna University of Technology | Garcia-Escudero L.A.,University of Valladolid | Mayo-Iscar A.,University of Valladolid
Information Sciences | Year: 2013

It is well-known that outliers and noisy data can be very harmful when applying clustering methods. Several fuzzy clustering methods which are able to handle the presence of noise have been proposed. In this work, we propose a robust clustering approach called F-TCLUST based on trimming a fixed proportion of observations that are ("impartially") determined by the data set itself. The proposed approach also considers an eigenvalue ratio constraint that makes it a mathematically well-defined problem and serves to control the allowed differences among cluster scatters. A computationally feasible algorithm is proposed for its practical implementation. Some guidelines about how to choose the parameters controlling the performance of the fuzzy clustering procedure are also given. © 2012 Elsevier Inc. All rights reserved. Source
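
A minimal sketch of the trimming idea (not the full F-TCLUST algorithm: the eigenvalue-ratio constraint on cluster scatters is omitted) can be written as a fuzzy c-means loop that discards the worst-fitting fraction alpha of points in each iteration:

```python
import numpy as np

# Trimming-based fuzzy clustering sketch in the spirit of F-TCLUST: a fixed
# proportion alpha of points with the worst fit is discarded every iteration
# ("impartial trimming"). Scatter constraints are omitted for brevity.
def trimmed_fcm(X, k=3, alpha=0.1, m=2.0, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]          # initial centers
    n_keep = int(np.ceil((1 - alpha) * len(X)))
    for _ in range(iters):
        d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1) + 1e-12
        keep = np.argsort(d2.min(axis=1))[:n_keep]       # trim worst-fitting points
        u = 1.0 / (d2[keep] ** (1 / (m - 1)))            # fuzzy memberships
        u /= u.sum(axis=1, keepdims=True)
        w = u ** m
        C = (w.T @ X[keep]) / w.sum(axis=0)[:, None]     # weighted center update
    return C, keep

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(c, 0.3, (50, 2)) for c in ([0, 0], [3, 3], [0, 3])]
              + [[[10.0, 10.0]]])                        # one gross outlier
C, keep = trimmed_fcm(X)
print(C)          # the outlier at (10, 10) is trimmed rather than absorbed
```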