Eindhoven, Netherlands

The Eindhoven University of Technology (TU/e) is a university of technology located in Eindhoven, Netherlands. Its motto is Mens agitat molem. The university was the second of its kind in the Netherlands; only Delft University of Technology existed previously. Until the mid-1980s it was known as the Technische Hogeschool Eindhoven. In 2011 the QS World University Rankings placed Eindhoven 146th overall and 61st for Engineering & IT. In the 2011 Academic Ranking of World Universities, TU/e was placed in the 52-75 band internationally in the Engineering/Technology and Computer Science category and 34th internationally in the field of Computer Science. In 2003 a European Commission report ranked TU/e third among all European research universities, making it the highest-ranked technical university in Europe. (Source: Wikipedia)


Su R.,Nanyang Technological University | Van Schuppen J.H.,Centrum voor Wiskunde en Informatica CWI | Rooda J.E.,TU Eindhoven
IEEE Transactions on Automatic Control | Year: 2012

In many practical applications, we need to compute a nonblocking supervisor that not only complies with pre-specified safety requirements but also achieves a certain time optimal performance such as maximum throughput. In this paper, we first present a minimum-makespan supervisor synthesis problem. Then we show that the problem can be solved by a terminable algorithm, where the execution time of each string is computable by the theory of heaps-of-pieces. We also provide a timed supervisory control map that can implement the synthesized minimum-makespan sublanguage. © 2006 IEEE.


Parsa S.,Wesleyan University | Calzavarini E.,Lille Laboratory of Mechanics | Toschi F.,TU Eindhoven | Voth G.A.,Wesleyan University
Physical Review Letters | Year: 2012

The rotational dynamics of anisotropic particles advected in a turbulent fluid flow are important in many industrial and natural settings. Particle rotations are controlled by small scale properties of turbulence that are nearly universal, and so provide a rich system where experiments can be directly compared with theory and simulations. Here we report the first three-dimensional experimental measurements of the orientation dynamics of rodlike particles as they are advected in a turbulent fluid flow. We also present numerical simulations that show good agreement with the experiments and allow extension to a wide range of particle shapes. Anisotropic tracer particles preferentially sample the flow since their orientations become correlated with the velocity gradient tensor. The rotation rate is heavily influenced by this preferential alignment, and the alignment depends strongly on particle shape. © 2012 American Physical Society.


Van Der Vaart A.,VU University Amsterdam | Van Zanten H.,TU Eindhoven
Journal of Machine Learning Research | Year: 2011

We consider the quality of learning a response function by a nonparametric Bayesian approach using a Gaussian process (GP) prior on the response function. We upper bound the quadratic risk of the learning procedure, which in turn is an upper bound on the Kullback-Leibler information between the predictive and true data distribution. The upper bound is expressed in small ball probabilities and concentration measures of the GP prior. We illustrate the computation of the upper bound for the Matérn and squared exponential kernels. For these priors the risk, and hence the information criterion, tends to zero for all continuous response functions. However, the rate at which this happens depends on the combination of true response function and Gaussian prior, and is expressible in a certain concentration function. In particular, the results show that for good performance, the regularity of the GP prior should match the regularity of the unknown response function. © 2011 Aad van der Vaart and Harry van Zanten.
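
As a rough illustration of the role the GP prior plays in such a procedure (a generic regression sketch, not the paper's risk-bound computation; the data and hyperparameters are invented), one can fit the same hypothetical response function under a Matérn prior and a squared exponential prior and compare the fitted models:

```python
# Minimal sketch (generic GP regression, not the paper's risk-bound
# computation): fit a hypothetical response function under a Matérn prior
# and a squared exponential (RBF) prior and compare the fitted models.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, RBF

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(40, 1))
y = np.sin(6 * X[:, 0]) + 0.1 * rng.standard_normal(40)   # hypothetical response

X_test = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
for kernel in (Matern(length_scale=0.2, nu=1.5), RBF(length_scale=0.2)):
    gp = GaussianProcessRegressor(kernel=kernel, alpha=0.1**2, normalize_y=True)
    gp.fit(X, y)
    mean, std = gp.predict(X_test, return_std=True)
    print(f"{type(kernel).__name__:6s}  log marginal likelihood = "
          f"{gp.log_marginal_likelihood_value_:.2f}, max posterior std = {std.max():.3f}")
```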


De Waele A.T.A.M.,TU Eindhoven
Cryogenics | Year: 2012

This paper deals with the influence of the finite heat capacity of the regenerator matrix on the performance of cryocoolers. The dynamics of the various parameters are treated in the harmonic approximation, focusing on finite heat-capacity effects, real-gas effects, and heat conduction. It is assumed that the flow resistance is zero, that the heat contact between the gas and the matrix is perfect, and that there is no mass storage in the matrix. Based on an energy-flow analysis, the limiting temperature, temperature profiles in the regenerator, and cooling powers are calculated. The discussion refers to pulse-tube refrigerators, but it is equally relevant for Stirling coolers and GM coolers. © 2011 Elsevier Ltd. All rights reserved.


Verburg J.M.,Harvard University | Verburg J.M.,TU Eindhoven | Seco J.,Harvard University
Physics in Medicine and Biology | Year: 2014

We present an experimental study of a novel method to verify the range of proton therapy beams. Differential cross sections were measured for 15 prompt gamma-ray lines from proton-nuclear interactions with ¹²C and ¹⁶O at proton energies up to 150 MeV. These cross sections were used to model discrete prompt gamma-ray emissions along proton pencil-beams. By fitting detected prompt gamma-ray counts to these models, we simultaneously determined the beam range and the oxygen and carbon concentration of the irradiated matter. The performance of the method was assessed in two phantoms with different elemental concentrations, using a small-scale prototype detector. Based on five pencil-beams with different ranges delivering 5×10^8 protons and without prior knowledge of the elemental composition at the measurement point, the absolute range was determined with a standard deviation of 1.0-1.4 mm. Relative range shifts at the same dose level were detected with a standard deviation of 0.3-0.5 mm. The determined oxygen and carbon concentrations also agreed well with the actual values. These results show that quantitative prompt gamma-ray measurements enable knowledge of nuclear reaction cross sections to be used for precise proton range verification in the presence of tissue with an unknown composition. © 2014 Institute of Physics and Engineering in Medicine.


van Schijndel A.W.M.,TU Eindhoven
Building Simulation | Year: 2011

The paper presents an overview of Multiphysics applications using a Multiphysics modeling package for the simulation of building physical constructions. The overview includes the three main transport phenomena for building physical constructions: (1) heat transfer, (2) heat and moisture transfer, and (3) heat, air and moisture (HAM) transfer. It is concluded that full 3D transient coupled HAM models for building physical constructions can be built using a Multiphysics modeling package. Regarding heat transport, neither difficulties nor limitations are expected. Concerning combined heat and moisture transport, the main difficulties are related to the material properties, but this appears to be no fundamental limitation. Regarding HAM modeling inside solid constructions, there is at least one limitation: validation is almost impossible because of the difficulty of measuring ultra-low air velocities on the order of μm/s. © Tsinghua University Press and Springer-Verlag Berlin Heidelberg 2011.


Bastiaans M.J.,TU Eindhoven
Journal of the Franklin Institute | Year: 2011

It is shown that the recently introduced T-class of time-frequency distributions is a subclass of the S-method distributions. From the generalization of the S-method distribution by rotating it in the time-frequency plane, a similar generalization of the T-class distribution follows readily. The generalized T-class distribution is then applicable to signals that behave chirp-like, with their instantaneous frequency slowly varying around the slope of the chirp; this slope no longer needs to be zero, as is the case for the original T-class distribution, but may take an arbitrary value. © 2011 The Franklin Institute.


Van Der Aalst W.M.P.,TU Eindhoven
Lecture Notes in Business Information Processing | Year: 2011

Due to the availability of more and more event data and mature process mining techniques, it has become possible to discover the actual processes within an organization. Process mining techniques use event logs to automatically construct process models that explain the behavior observed. Existing process models can be validated using conformance checking techniques. Moreover, the link between real-life events and model elements allows for the projection of additional information onto process models (e.g., showing bottlenecks and the flow of work within an organization). Although process mining has been mainly used within individual organizations, this new technology can also be applied in cross-organizational settings. In this paper, we identify such settings and highlight some of the challenges and opportunities. In particular, we show that cross-organizational processes can be partitioned along two orthogonal dimensions. This helps us to identify relevant process mining challenges involving multiple organizations. © 2011 IFIP International Federation for Information Processing.


Van Soestbergen M.,Materials Innovation Institute M2i | Van Soestbergen M.,TU Eindhoven
Electrochemistry Communications | Year: 2012

Theory predicts that ionic currents through electrochemical cells at nanometer scale can exceed the diffusion limitation due to an expansion of the interfacial electrostatic double layer. Corresponding voltammetry experiments revealed a clear absence of a plateau for the current, which cannot be described by the classical Butler-Volmer approach using realistic values for the transfer coefficient. We show that extending the classical approach by considering the double layer structure using the Frumkin correction leads to an accurate description of the anomalous experimental data. © 2012 Elsevier B.V. All rights reserved.
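
For reference, the classical Butler-Volmer current-overpotential relation that the authors extend reads i = i0[exp(αa fη) − exp(−αc fη)] with f = F/RT. The sketch below merely evaluates this textbook expression for assumed parameter values; it does not include the Frumkin double-layer correction developed in the paper.

```python
# Illustrative evaluation of the classical Butler-Volmer relation only;
# parameter values are assumed and the Frumkin double-layer correction
# developed in the paper is not included.
import numpy as np

F = 96485.3          # Faraday constant, C/mol
R = 8.314            # gas constant, J/(mol K)
T = 298.15           # temperature, K
i0 = 1.0e-2          # exchange current density, A/m^2 (assumed)
alpha_a, alpha_c = 0.5, 0.5   # transfer coefficients (assumed)

def butler_volmer(eta):
    """Current density i(eta) = i0 * (exp(a_a*f*eta) - exp(-a_c*f*eta))."""
    f = F / (R * T)
    return i0 * (np.exp(alpha_a * f * eta) - np.exp(-alpha_c * f * eta))

for eta in (0.01, 0.05, 0.10, 0.20):     # overpotential in volts
    print(f"eta = {eta:4.2f} V  ->  i = {butler_volmer(eta):.3e} A/m^2")
```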


Akhtar N.,TU Eindhoven
Chemical Engineering Research and Design | Year: 2012

Our previously developed numerical model has been used to study the flow, species and temperature distribution in a micro-tubular, single-chamber solid oxide fuel cell stack. The stack consists of three cells, spaced equally inside the gas-chamber. Two different configurations of the gas-chamber have been investigated: a bare gas-chamber and a gas-chamber filled with porous material. The results show that the porous-material-filled gas-chamber is advantageous in improving the cell performance, as it forces the flow to pass through the cells, which improves mass transport via convection and enhances the reaction rate. In the case of a bare gas-chamber, the cell performance follows the order cell 1 > cell 2 > cell 3; for the porous gas-chamber this order is reversed, because the flow is increasingly forced through the downstream cells along the length of the gas-chamber. © 2011 The Institution of Chemical Engineers.


Buchin K.,TU Eindhoven | Mulzer W.,Free University of Berlin
Journal of the ACM | Year: 2011

We present several results about Delaunay triangulations (DTs) and convex hulls in transdichotomous and hereditary settings: (i) the DT of a planar point set can be computed in expected time O(sort(n)) on a word RAM, where sort(n) is the time to sort n numbers; we assume that the word RAM supports the shuffle operation in constant time; (ii) if we know the ordering of a planar point set in x- and in y-direction, its DT can be found by a randomized algebraic computation tree of expected linear depth; (iii) given a universe U of points in the plane, we construct a data structure D for Delaunay queries: for any P ⊆ U, D can find the DT of P in expected time O(|P| log log |U|); (iv) given a universe U of points in 3-space in general convex position, there is a data structure D for convex hull queries: for any P ⊆ U, D can find the convex hull of P in expected time O(|P| (log log |U|)²); (v) given a convex polytope in 3-space with n vertices which are colored with χ ≥ 2 colors, we can split it into the convex hulls of the individual color classes in expected time O(n (log log n)²). The results (i)-(iii) generalize to higher dimensions, where the expected running time now also depends on the complexity of the resulting DT. We need a wide range of techniques. Most prominently, we describe a reduction from DTs to nearest-neighbor graphs that relies on a new variant of randomized incremental constructions using dependent sampling. © 2011 ACM.
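
The data-structure results above are about preprocessing a fixed universe U so that the DT of any subset can be reported quickly. A naive baseline, which simply recomputes the triangulation of the subset from scratch and therefore does not achieve the stated bound, can be sketched as follows; the point sets are hypothetical:

```python
# Naive baseline for "Delaunay queries": recompute the triangulation of each
# subset P of a fixed universe U from scratch with scipy. This does not
# implement the paper's data structure or its O(|P| log log |U|) bound.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(1)
U = rng.random((1000, 2))                 # hypothetical universe of planar points

def delaunay_query(indices):
    """Return the Delaunay triangulation of the subset P = U[indices]."""
    return Delaunay(U[indices])

P_idx = rng.choice(len(U), size=200, replace=False)
tri = delaunay_query(P_idx)
print("triangles in DT(P):", len(tri.simplices))
```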


Hill M.T.,TU Eindhoven
Journal of the Optical Society of America B: Optical Physics | Year: 2010

A remarkable miniaturization of lasers has occurred in just the past few years by employing metals to form the laser resonator. From minimum laser dimensions of at least several wavelengths of the emitted light, many devices have now been demonstrated where the laser size is a wavelength or less. Additionally, some devices show lasing in structures significantly smaller than the wavelength of light in several dimensions, with an optical mode far smaller than allowed by the diffraction limit. In this article we review what has been achieved and then look forward to the directions development could take and where possible applications could lie. In particular, we show that devices with an optical size slightly larger than or near the diffraction limit could soon be employed in many applications requiring coherent light sources. Application of devices with dimensions far below the diffraction limit is also on the horizon, but may take more time. © 2010 Optical Society of America.


Prieto G.,University Utrecht | Zecevic J.,University Utrecht | Friedrich H.,TU Eindhoven | De Jong K.P.,University Utrecht | De Jongh P.E.,University Utrecht
Nature Materials | Year: 2013

Supported metal nanoparticles play a pivotal role in areas such as nanoelectronics, energy storage/conversion and as catalysts for the sustainable production of fuels and chemicals. However, the tendency of nanoparticles to grow into larger crystallites is an impediment for stable performance. Exemplarily, loss of active surface area by metal particle growth is a major cause of deactivation for supported catalysts. In specific cases particle growth might be mitigated by tuning the properties of individual nanoparticles, such as size, composition and interaction with the support. Here we present an alternative strategy based on control over collective properties, revealing the pronounced impact of the three-dimensional nanospatial distribution of metal particles on catalyst stability. We employ silica-supported copper nanoparticles as catalysts for methanol synthesis as a showcase. Achieving near-maximum interparticle spacings, as accessed quantitatively by electron tomography, slows down deactivation up to an order of magnitude compared with a catalyst with a non-uniform nanoparticle distribution, or a reference Cu/ZnO/Al2O3 catalyst. Our approach paves the way towards the rational design of practically relevant catalysts and other nanomaterials with enhanced stability and functionality, for applications such as sensors, gas storage, batteries and solar fuel production.


Bohm C.,University of Stuttgart | Lazar M.,TU Eindhoven | Allgower F.,University of Stuttgart
Automatica | Year: 2012

This paper proposes a novel approach to stability analysis of discrete-time nonlinear periodically time-varying systems. The contributions are as follows. Firstly, a relaxation of standard Lyapunov conditions is derived. This leads to a less conservative Lyapunov function that is required to decrease at each period rather than at each time instant. Secondly, for linear periodic systems with constraints, it is shown that compared to standard Lyapunov theory, the novel concept of periodic Lyapunov functions allows for the calculation of a larger estimate of the region of attraction. An example illustrates the effectiveness of the developed theory. © 2012 Elsevier Ltd. All rights reserved.
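
The relaxation can be illustrated numerically on a hypothetical period-2 linear system (not taken from the paper): the candidate function V(x) = ||x||² grows during part of the period but decreases over every full period, which is exactly what the periodic Lyapunov condition permits.

```python
# Hypothetical period-2 discrete-time system (not from the paper):
# V(x) = ||x||^2 increases during part of the period but decreases over
# every full period, which the periodic Lyapunov condition allows.
import numpy as np

A = [np.diag([1.5, 0.4]),    # applied at even steps
     np.diag([0.5, 0.9])]    # applied at odd steps; monodromy = diag(0.75, 0.36)

V = lambda z: float(z @ z)
x = np.array([1.0, 1.0])
for k in range(6):
    x_next = A[k % 2] @ x
    trend = "up" if V(x_next) > V(x) else "down"
    print(f"k={k}: V={V(x):.4f} -> {V(x_next):.4f} ({trend} within the period)")
    x = x_next
```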


Van Oijen J.A.,TU Eindhoven
Proceedings of the Combustion Institute | Year: 2013

MILD combustion is a new combustion technology which promises an enhanced efficiency and reduced emission of pollutants. It is characterized by a high degree of preheating and dilution of the reactants. Since the temperature of the reactants is higher than that of autoignition, a complex interplay between turbulent mixing, molecular transport and chemical kinetics occurs. In order to reveal the fundamental reaction structures of MILD combustion, the process of a cold methane-hydrogen fuel jet issuing in a hot diluted coflow and the subsequent ignition process is modeled by direct numerical simulation of autoigniting mixing layers using detailed chemistry and transport models. Detailed analysis of one-dimensional laminar mixing layers shows that the ignition process is dominated by hydrogen chemistry and that non-unity Lewis number effects are of the utmost importance for modeling of autoignition. High scalar dissipation rates in mixing layers delay the autoignition time, but have a negligible effect on the chemical pathway followed during ignition. This supports the idea of using homogeneous reactor simulations for the construction of chemistry look-up tables. Simulations of two-dimensional turbulent mixing layers confirm the effect of scalar dissipation rate on autoignition time. The turbulence-chemistry interaction is limited under the investigated conditions, because the reaction layer lies at the edge of the mixing layer due to the very small value of the stoichiometric mixture fraction. When the oxidizer stream is more diluted, the autoignition time is delayed, allowing the developing turbulence to interact more with the ignition chemistry. The results of these direct numerical simulations employing a detailed reaction mechanism are expected to be used for the development of tabulated chemistry models and sub-grid scale models for large-eddy simulations of MILD combustion. © 2012 The Combustion Institute.


Lenstra D.,TU Eindhoven | Yousefi M.,Photonic Sensing Solutions
Optics Express | Year: 2014

We present a set of rate equations for the modal amplitudes and carrier-inversion moments that describe the deterministic multi-mode dynamics of a semiconductor laser due to spatial hole burning. Mutual interactions among the lasing modes, induced by high-frequency modulations of the carrier distribution, are included by carrier-inversion moments for which rate equations are given as well. We derive the Bogatov effect of asymmetric gain suppression in semiconductor lasers and illustrate the potential of the model for two- and three-mode lasers by numerical and analytical methods. © 2014 Optical Society of America.


Chen H.,TU Eindhoven
Discrete and Computational Geometry | Year: 2016

We investigate in this paper the relation between Apollonian d-ball packings and stacked (d+1)-polytopes for dimension d ≥ 3. For d = 3, the relation is fully described: we prove that the 1-skeleton of a stacked 4-polytope is the tangency graph of an Apollonian 3-ball packing if and only if there are no six 4-cliques sharing a 3-clique. For higher dimensions, we have some partial results. © 2016 The Author(s)


The paper investigates greenhouse gas (GHG) emissions from land use change associated with the introduction of large-scale Jatropha curcas cultivation on Miombo Woodland, using data from extant forestry and ecology studies about this ecosystem. Its results support the notion that Jatropha can help sequester atmospheric carbon when grown on complete wastelands and in severely degraded conditions. Conversely, when introduced on tropical woodlands with substantial biomass and medium/high organic soil carbon content, Jatropha will induce significant emissions that offset any GHG savings from the rest of the biofuel production chain. A carbon debt of more than 30 years is projected. On semi-degraded Miombo the overall GHG balance of Jatropha is found to hinge a lot on the extent of carbon depletion of the soil, more than on the state of the biomass. This finding points to the urgent need for detailed measurements of soil carbon in a range of Miombo sub-regions and similar tropical dryland ecosystems in Asia and Latin America. Efforts should be made to clarify concepts such as 'degraded lands' and 'wastelands' and to refine land allocation criteria and official GHG calculation methodologies for biofuels on that basis. © 2010 Elsevier Ltd.


According to Austro-British philosopher Karl Popper, a system of theoretical claims is scientific only if it is methodologically falsifiable, i.e., only if systematic attempts to falsify or severely test the system are being carried out [Popper, 2005, pp. 20, 62]. He holds that a test of a theoretical system is severe if and only if it is a test of the applicability of the system to a case in which the system's failure is likely in light of background knowledge, i.e., in light of scientific assumptions other than those of the system being tested [Popper, 2002, p. 150]. Popper counts the 1919 tests of general relativity's then unlikely predictions of the deflection of light in the Sun's gravitational field as severe. An implication of Popper's above condition for being a scientific theoretical system is the injunction to assess theoretical systems in light of how well they have withstood severe testing. Applying this injunction to assessing the quality of climate model predictions (CMPs), including climate model projections, would involve assigning a quality to each CMP as a function of how well it has withstood severe tests allowed by its implications for past, present, and near-future climate or, alternatively, as a function of how well the models that generated the CMP have withstood severe tests of their suitability for generating the CMP.


Janssen P.J.A.,University of Wisconsin - Madison | Anderson P.D.,TU Eindhoven
Macromolecular Materials and Engineering | Year: 2011

A proper description of coalescence of viscous drops is challenging from an experimental, numerical, and theoretical point of view. Although the problem seems easy at first sight, consensus in the literature has still not been reached on how to predict a realistic coalescence rate given flow type, capillary number and viscosity ratio. Despite advances in algorithms and computational power, and the emergence of fully-closed analytical results, a match between theory, experiment and simulation for drainage rates only appears in a severely limited number of cases. In this paper, several recent developments are reviewed, and a summary is made of several challenges that still lie ahead. © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.


Speetjens M.F.M.,TU Eindhoven
International Journal of Thermal Sciences | Year: 2012

Heat transfer in essence is the transport of thermal energy along certain paths in a similar way as fluid motion is the transport of fluid parcels along fluid paths. This similarity admits Lagrangian heat-transfer analyses by the geometry of such "thermal paths" analogous to well-known Lagrangian mixing analyses. Essential to Lagrangian heat-transfer formalisms is the reference state for the convective flux. Existing approaches admit only uniform references. However, for convective heat transfer, a case of great practical relevance, the conductive state that sets in for vanishing fluid motion is the more natural reference. This typically is an inhomogeneous state and thus beyond the existing formalism. The present study closes this gap by its generalisation to non-uniform references and thus substantially strengthens Lagrangian methods for thermal analyses by (i) greatly extending their applicability, (ii) resolving the fundamental ambiguity concerning arbitrariness of the reference state that limits the original formalism, (iii) facilitating accessible physical interpretation of heat fluxes and thermal paths and (iv) enabling subtler distinction of (Lagrangian) heat-transfer phenomena. The generalised Lagrangian formalism is elaborated for laminar convective heat transfer, which can be done without loss of generality, and completed by a comprehensive geometrical framework for the composition and organisation of thermal paths. This ansatz is demonstrated by way of 2D (un)steady case studies and offers new fundamental insight into thermal transport that is complementary to the Eulerian picture based on temperature. Highlights: generalisation of the Lagrangian heat-transfer formalism to non-uniform reference states; clear definition and physical interpretation of convective flux and thermal paths; resolution of the fundamental ambiguity of the reference state in existing formalisms; formulation of a comprehensive geometrical framework for the composition of thermal paths; illustrative Lagrangian thermal analysis using concepts from mixing studies. © 2012 Elsevier Masson SAS. All rights reserved.


Heertjes M.,TU Eindhoven | Van Engelen A.,TMC
Control Engineering Practice | Year: 2011

To minimize cross-talk in high-precision motion systems, the possibilities of data-based dynamic decoupling are studied. Atop a model-based and static decoupling, a multi-input multi-output (MIMO) and finite impulse response (FIR) dynamic decoupling structure is considered for machine-specific and performance-driven fine tunings. The coefficients of the FIR filters are obtained via data-based optimization, whilst the machine operates under nominal and closed-loop conditions. The FIR filters provide the ability to generate zeros outside the origin. These zeros are needed in the description of the low-frequency inverted plant dynamics. In addition, a low-pass filter structure supports the ability to generate poles outside the origin as to account for plant zeros. Both filter structures are effectively used in the high-precision motion control of a state-of-the-art scanning stage system and an industrial vibration isolation system. © 2011 Elsevier Ltd.


Pham K.,TU Eindhoven | Marigo J.-J.,Ecole Polytechnique - Palaiseau
Continuum Mechanics and Thermodynamics | Year: 2013

We propose a construction method of non-homogeneous solutions for the traction problem of an elastic damaging bar. This bar has a softening behavior that obeys a gradient damaged model. The method is applicable for a wide range of brittle materials. For sufficiently long bars, we show that localization arises on sets whose length is proportional to the material internal length and with a profile that is also a material characteristic. From its onset until the rupture, the damage profile is obtained either in a closed form or after a simple numerical integration depending on the model. Thus, the proposed method provides definitions for the critical stress and fracture energy that can be compared with experimental results. We finally discuss some features of the global behavior of the bar such as the possibility of a snapback at the onset of damage. We point out the sensitivity of the responses to the parameters of the damage law. All these theoretical considerations are illustrated by numerical examples. © 2012 Springer-Verlag.


Lakens D.,TU Eindhoven
Journal of Experimental Psychology: Learning Memory and Cognition | Year: 2012

Previous research has shown that words presented on metaphor congruent locations (e.g., positive words UP on the screen and negative words DOWN on the screen) are categorized faster than words presented on metaphor incongruent locations (e.g., positive words DOWN and negative words UP). These findings have been explained in terms of an interference effect: The meaning associated with UP and DOWN vertical space can automatically interfere with the categorization of words with a metaphorically incongruent meaning. The current studies test an alternative explanation for the interaction between the vertical position of abstract concepts and the speed with which these stimuli are categorized. Research on polarity differences (basic asymmetries in the way dimensions are processed) predicts that +polar endpoints of dimensions (e.g., positive, moral, UP) are categorized faster than -polar endpoints of dimensions (e.g., negative, immoral, DOWN). Furthermore, the polarity correspondence principle predicts that stimuli where polarities correspond (e.g., positive words presented UP) provide an additional processing benefit compared to stimuli where polarities do not correspond (e.g., negative words presented UP). A meta-analysis (Study 1) shows that a polarity account provides a better explanation of reaction time patterns in previous studies than an interference explanation. An experiment (Study 2) reveals that controlling for the polarity benefit of +polar words compared to -polar words did not only remove the main effect of word polarity but also the interaction between word meaning and vertical position due to polarity correspondence. These results reveal that metaphor congruency effects should not be interpreted as automatic associations between vertical locations and word meaning but instead are more parsimoniously explained by their structural overlap in polarities. © 2011 American Psychological Association.


Ozcelebi T.,TU Eindhoven
Signal Processing: Image Communication | Year: 2011

In state-of-the-art adaptive streaming solutions, to cope with varying network conditions, the client side can switch between several video copies encoded at different bit-rates during streaming. Each video copy is divided into chunks of equal duration. To achieve continuous video playback, each chunk needs to arrive at the client before its playback deadline. The perceptual quality of a chunk increases with the chunk size in bits, whereas bigger chunks require more transmission time and, as a result, have a higher risk of missing transmission deadline. Therefore, there is a trade-off between the overall video quality and continuous playback, which can be optimized by proper selection of the next chunk from the encoded versions. This paper proposes a method to compute a set of optimal client strategies for this purpose. © 2011 Elsevier B.V.
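
A much simpler greedy rule than the optimal strategies computed in the paper conveys the underlying trade-off: pick the highest bit-rate whose download can complete while enough video remains buffered to avoid missing the playback deadline. The sketch below is only such a heuristic, with invented bit-rates and bandwidth:

```python
# Greedy next-chunk selection heuristic (an illustration of the trade-off,
# not the optimal client strategies derived in the paper). Bit-rates are in
# kbit/s; all numbers are hypothetical.
def pick_bitrate(bitrates_kbps, chunk_duration_s, buffer_s,
                 est_bandwidth_kbps, safety_margin_s=1.0):
    best = min(bitrates_kbps)
    for rate in sorted(bitrates_kbps):
        download_time_s = rate * chunk_duration_s / est_bandwidth_kbps
        # Accept a higher bit-rate only if the download finishes while at
        # least safety_margin_s of video is still buffered.
        if download_time_s <= buffer_s - safety_margin_s:
            best = rate
    return best

# Hypothetical usage: 4 s chunks, 6 s of buffered video, ~3 Mbit/s estimated.
print(pick_bitrate([400, 800, 1500, 3000, 6000], 4.0, 6.0, 3000.0))  # -> 3000
```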


Westergaard M.,TU Eindhoven
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2011

This paper introduces Access/CPN 2.0, which extends Access/CPN with high-level primitives for interacting with coloured Petri net (CPN) models in Java programs. The primitives allow Java programs to monitor and interact with places and transitions during execution, and to embed entire programs as subpages of CPN models or embed CPN models as parts of programs. This facilitates building environments for systematic testing of program components using CPN models. We illustrate the use of Access/CPN 2.0 in the context of business processes by embedding a workflow system into a CPN model. © 2011 Springer-Verlag.


Galagan Y.,Holst Center | Debije M.G.,TU Eindhoven | Blom P.W.M.,Holst Center
Applied Physics Letters | Year: 2011

Semitransparent organic solar cells employing solution-processable organic wavelength dependent reflectors of chiral nematic (cholesteric) liquid crystals are demonstrated. The cholesteric liquid crystal (CLC) reflects only in a narrow band of the solar spectrum and remains transparent for the remaining wavelengths. The reflective band is matched to the absorption spectrum of the organic solar cell such that only unabsorbed photons that can contribute to the photocurrent are reflected to pass through the active layer a second time. In this way, the efficiency of semitransparent organic solar cells can be enhanced without significant transparency losses. An efficiency increase of 6% was observed when a CLC reflector with a reflection band of 540-620 nm was used, whereas the transparency of the organic solar cells is only suppressed in the 80 nm narrow bandwidth. © 2011 American Institute of Physics.


Markovski J.,TU Eindhoven
Proceedings - International Conference on Application of Concurrency to System Design, ACSD | Year: 2011

We propose a model-based systems engineering framework for supervisory control of stochastic discrete-event systems with unrestricted nondeterminism. We intend to develop the proposed framework in four phases outlined in this paper. Here, we study in detail the first step which comprises investigation of the underlying model and development of a corresponding notion of controllability. The model of choice is termed Interactive Markov Chains, which is a natural semantic model for stochastic variants of process calculi and Petri nets, and it requires a process-theoretic treatment of supervisory control theory. To this end, we define a new behavioral preorder, termed Markovian partial bisimulation, that captures the notion of controllability while preserving correct stochastic behavior. We provide a sound and ground-complete axiomatic characterization of the preorder and, based on it, we define two notions of controllability. The first notion conforms to the traditional way of reasoning about supervision and control requirements, whereas in the second proposal we abstract from the stochastic behavior of the system. For the latter, we intend to separate the concerns regarding synthesis of an optimal supervisor. The control requirements cater only for controllability, whereas we ensure that the stochastic behavior of the supervised plant meets the performance specification by extracting directive optimal supervisors. © 2011 IEEE.


Vaesen K.,TU Eindhoven
Behavioral and Brain Sciences | Year: 2012

This article has two goals. The first is to assess, in the face of accruing reports on the ingenuity of great ape tool use, whether and in what sense human tool use still evidences unique, higher cognitive ability. To that effect, I offer a systematic comparison between humans and nonhuman primates with respect to nine cognitive capacities deemed crucial to tool use: enhanced hand-eye coordination, body schema plasticity, causal reasoning, function representation, executive control, social learning, teaching, social intelligence, and language. Since striking differences between humans and great apes stand firm in eight out of nine of these domains, I conclude that human tool use still marks a major cognitive discontinuity between us and our closest relatives. As a second goal of the paper, I address the evolution of human technologies. In particular, I show how the cognitive traits reviewed help to explain why technological accumulation evolved so markedly in humans, and so modestly in apes. © 2012 Cambridge University Press.


Bellouard Y.,TU Eindhoven | Hongler M.-O.,Ecole Polytechnique Federale de Lausanne
Optics Express | Year: 2011

By continuously scanning a femtosecond laser beam across a fused silica specimen, we demonstrate the formation of self-organized bubbles buried in the material. Rather than using high intensity pulses and high numerical aperture to induce explosions in the material, here bubbles form as a consequence of cumulative energy deposits. We observe a transition between chaotic and self-organized patterns at high scanning rate (above 10 mm/s). Through modeling the energy exchange, we outline the similarities of this phenomenon with other non-linear dynamical systems. Furthermore, we demonstrate with this method the high-speed writing of two- and three-dimensional bubble "crystals" in bulk silica. © 2011 Optical Society of America.


Vreman A.W.,Akzo Nobel | Kuerten J.G.M.,TU Eindhoven | Kuerten J.G.M.,University of Twente
Physics of Fluids | Year: 2014

Direct numerical simulation (DNS) databases are compared to assess the accuracy and reproducibility of standard and non-standard turbulence statistics of incompressible plane channel flow at Reτ = 180. Two fundamentally different DNS codes are shown to produce maximum relative deviations below 0.2% for the mean flow, below 1% for the root-mean-square velocity and pressure fluctuations, and below 2% for the three components of the turbulent dissipation. Relatively fine grids and long statistical averaging times are required. An analysis of dissipation spectra demonstrates that the enhanced resolution is necessary for an accurate representation of the smallest physical scales in the turbulent dissipation. The results are related to the physics of turbulent channel flow in several ways. First, the reproducibility supports the hitherto unproven theoretical hypothesis that the statistically stationary state of turbulent channel flow is unique. Second, the peaks of dissipation spectra provide information on length scales of the small-scale turbulence. Third, the computed means and fluctuations of the convective, pressure, and viscous terms in the momentum equation show the importance of the different forces in the momentum equation relative to each other. The Galilean transformation that leads to minimum peak fluctuation of the convective term is determined. Fourth, an analysis of higher-order statistics is performed. The skewness of the longitudinal derivative of the streamwise velocity is stronger than expected (-1.5 at y+ =30). This skewness and also the strong near-wall intermittency of the normal velocity are related to coherent structures. © 2014 AIP Publishing LLC.
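
The skewness statistic quoted above is the normalised third moment of the longitudinal velocity derivative, S = ⟨(∂u/∂x)³⟩ / ⟨(∂u/∂x)²⟩^(3/2). The snippet below computes it for a synthetic one-dimensional signal, purely to make the definition concrete; it does not reproduce the DNS value of -1.5.

```python
# Definition of the quoted diagnostic made concrete on synthetic data:
# S = <(du/dx)^3> / <(du/dx)^2>^(3/2). This does not reproduce the DNS
# value of about -1.5 reported in the paper.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 2.0 * np.pi, 4096)
u = np.cumsum(rng.standard_normal(x.size)) * (x[1] - x[0])   # synthetic "velocity"

dudx = np.gradient(u, x)
S = np.mean(dudx**3) / np.mean(dudx**2) ** 1.5
print(f"skewness of du/dx: {S:+.3f}")
```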


Katzav J.,TU Eindhoven
Studies in History and Philosophy of Science Part B - Studies in History and Philosophy of Modern Physics | Year: 2014

I bring out the limitations of four important views of what the target of useful climate model assessment is. Three of these views are drawn from philosophy. They include the views of Elisabeth Lloyd and Wendy Parker, and an application of Bayesian confirmation theory. The fourth view I criticise is based on the actual practice of climate model assessment. In bringing out the limitations of these four views, I argue that an approach to climate model assessment that neither demands too much of such assessment nor threatens to be unreliable will, in typical cases, have to aim at something other than the confirmation of claims about how the climate system actually is. This means, I suggest, that the Intergovernmental Panel on Climate Change's (IPCC's) focus on establishing confidence in climate model explanations and predictions is misguided. So too, it means that standard epistemologies of science with pretensions to generality, e.g., Bayesian epistemologies, fail to illuminate the assessment of climate models. I go on to outline a view that neither demands too much nor threatens to be unreliable, a view according to which useful climate model assessment typically aims to show that certain climatic scenarios are real possibilities and, when the scenarios are determined to be real possibilities, partially to determine how remote they are. © 2014 Elsevier Ltd.


Kaptein M.,TU Eindhoven
Journal of Ambient Intelligence and Smart Environments | Year: 2012

On March 29, 2012 the author successfully defended his PhD thesis entitled Personalized persuasion in Ambient Intelligence. The PhD degree was awarded with honors. © 2012 - IOS Press and the authors. All rights reserved.


Kraemer F.,TU Eindhoven
Bioethics | Year: 2013

This article deals with the euthanasia debate in light of new life-sustaining technologies such as the left ventricular assist device (LVAD). The question arises: does the switching off of a LVAD by a doctor upon the request of a patient amount to active or passive euthanasia, i.e. to 'killing' or to 'letting die'? The answer hinges on whether the device is to be regarded as a proper part of the patient's body or as something external. We usually regard the switching off of an internal device as killing, whereas the deactivation of an external device is seen as 'letting die'. The case is notoriously difficult to decide for hybrid devices such as LVADs, which are partly inside and partly outside the patient's body. Additionally, on a methodological level, I will argue that the 'ontological' arguments from analogy given for both sides are problematic. Given the impasse facing the ontological arguments, complementary phenomenological arguments deserve closer inspection. In particular, we should consider whether phenomenologically the LVAD is perceived as a body part or as an external device. I will support the thesis that the deactivation of a LVAD is to be regarded as passive euthanasia if the device is not perceived by the patient as a part of the body proper. © 2011 Blackwell Publishing Ltd.


De Waele A.T.A.M.,TU Eindhoven
Journal of Low Temperature Physics | Year: 2011

This paper deals with the basics of cryocoolers and related thermodynamic systems. The treatment is based on the first and second law of thermodynamics for inhomogeneous, open systems using enthalpy flow, entropy flow, and entropy production. Various types of machines, which use an oscillating gas flow, are discussed such as: Stirling refrigerators, GM coolers, pulse-tube refrigerators, and thermoacoustic coolers and engines. Furthermore the paper deals with Joule-Thomson and dilution refrigerators which use a constant flow of the working medium. © 2011 The Author(s).


Janssen A.J.E.M.,TU Eindhoven
Journal of the European Optical Society | Year: 2011

Several quantities related to the Zernike circle polynomials admit an expression, via the basic identity in the diffraction theory of Nijboer and Zernike, as an infinite integral involving the product of two or three Bessel functions. In this paper these integrals are identified and evaluated explicitly for the cases of (a) the expansion coefficients of scaled-and-shifted circle polynomials, (b) the expansion coefficients of the correlation of two circle polynomials, (c) the Fourier coefficients occurring in the cosine representation of the radial part of the circle polynomials.
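
As a small numerical illustration of such Bessel-product integrals (a generic check against a known closed form, not one of the specific expansion-coefficient results of the paper), one can verify ∫₀^∞ J₁(at)J₁(bt)/t dt = b/(2a) for 0 < b < a:

```python
# Rough numerical check of one infinite integral of a product of two Bessel
# functions against its known closed form:
#   int_0^inf J_1(a t) J_1(b t) / t dt = b / (2 a)   for 0 < b < a.
# The oscillatory tail is truncated at a large finite limit, so the result
# is only approximate (quad may also emit an accuracy warning).
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

a, b = 1.0, 0.5
integrand = lambda t: jv(1, a * t) * jv(1, b * t) / t
value, _ = quad(integrand, 1e-12, 2000.0, limit=2000)
print(f"numerical: {value:.4f}   closed form: {b / (2 * a):.4f}")
```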


Bouyahyi M.,SABIC | Duchateau R.,SABIC | Duchateau R.,TU Eindhoven
Macromolecules | Year: 2014

This contribution describes our recent results regarding the metal-catalyzed ring-opening polymerization of pentadecalactone and its copolymerization with ε-caprolactone involving single-site metal complexes based on aluminum, zinc, and calcium. Under the right conditions (i.e., monomer concentration, catalyst type, catalyst/initiator ratio, reaction time, etc.), high molecular weight polypentadecalactone with Mn up to 130,000 g mol⁻¹ could be obtained. The copolymerization of a mixture of ε-caprolactone and pentadecalactone yielded random copolymers. Zinc- and calcium-catalyzed copolymerization using a sequential feed of pentadecalactone followed by ε-caprolactone afforded perfect block copolymers. The blocky structure was retained even for prolonged times at 100 °C after full conversion of the monomers, indicating that transesterification is negligible. On the other hand, in the presence of the aluminum catalyst, the initially formed block copolymers gradually randomized as a result of intra- and intermolecular transesterification reactions. The formation of homopolymers and copolymers with different architectures has been evidenced by HT-SEC chromatography, NMR, DSC and MALDI-ToF-MS. © 2014 American Chemical Society.


Hwang W.R.,Gyeongsang National University | Hulsen M.A.,TU Eindhoven
Macromolecular Materials and Engineering | Year: 2011

The alignment and the aggregation of particles in a viscoelastic fluid in simple shear flow are qualitatively analyzed using a two-dimensional direct numerical simulation. Depending on the shear thinning, solvent viscosity, and Weissenberg number, a typical sequence in structural transitions from random particle configuration to string formation is found with clustering and clustered string formation in between. The solvent viscosity and the Weissenberg number, the ratio of normal stress to shear stress, are found to be the most influential parameters for the onset of string formation. The influence of shear thinning is less clear. More shear thinning seems to promote string formation if the stress ratio is constant. The angular velocity of the particles is reduced by approximately 60% when particles form a string, independent of the parameters used. © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.


Borden M.J.,University of Texas at Austin | Hughes T.J.R.,University of Texas at Austin | Landis C.M.,University of Texas at Austin | Verhoosel C.V.,TU Eindhoven
Computer Methods in Applied Mechanics and Engineering | Year: 2014

Phase-field models based on the variational formulation for brittle fracture have recently been gaining popularity. These models have proven capable of accurately and robustly predicting complex crack behavior in both two and three dimensions. In this work we propose a fourth-order model for the phase-field approximation of the variational formulation for brittle fracture. We derive the thermodynamically consistent governing equations for the fourth-order phase-field model by way of a variational principle based on energy balance assumptions. The resulting model leads to higher regularity in the exact phase-field solution, which can be exploited by the smooth spline function spaces utilized in isogeometric analysis. This increased regularity improves the convergence rate of the numerical solution and opens the door to higher-order convergence rates for fracture problems. We present an analysis of our proposed theory and numerical examples that support this claim. We also demonstrate the robustness of the model in capturing complex three-dimensional crack behavior. © 2014 Elsevier B.V.


Loos J.,University of Glasgow | Loos J.,TU Eindhoven | Loos J.,Dutch Polymer Institute
Materials Today | Year: 2010

Printable polymer or hybrid solar cells (PSCs) have the potential to become one of the leading technologies of the 21st century in the conversion of sunlight to electrical energy. Because of their ease of processing from solution, fast and low-cost mass production of devices is possible in a roll-to-roll printing fashion. The performance of such printed devices, in turn, is determined to a large extent by the three-dimensional organization of the photoactive layer, i.e. the layer where light is absorbed and converted into free electrical charges, and by its contacts with the charge-collecting electrodes. In this review I briefly introduce our current understanding of morphology-performance relationships in PSCs, with specific focus on electron tomography as an analytical tool providing volume information with nanometer resolution. © 2010 Elsevier Ltd.


Janssen A.J.E.M.,TU Eindhoven
Journal of the European Optical Society | Year: 2011

The integrals occurring in optical diffraction theory under conditions of partial coherence have the form of an incomplete autocorrelation integral of the pupil function of the optical system. The incompleteness is embodied by a spatial coherence function of limited extent. In the case of circular optical systems and coherence functions supported by a disk, this gives rise to Hopkins' 3-circle integrals. In this paper, a computation scheme for these integrals (initially with coherence functions that are constant on their disks) is proposed where the required integral is expressed semi-analytically in the Zernike expansion coefficients of the pupil function. To this end, the Zernike expansion coefficients of a shifted pupil function restricted to the coherence disk are expressed in terms of the pupil function's Zernike expansion coefficients. Next, the required integral is expressed as an infinite series involving two sets of Zernike coefficients of restricted pupils using Parseval's theorem for orthogonal series. Due to a convenient separation of the radial parameters and the spatial variables, the method avoids a cumbersome administration involving separate consideration of various overlap situations. The computation method is extended to the case of coherence functions that are not necessarily constant on their supporting disks by using a result on linearization of the product of two Zernike circle polynomials involving Wigner coefficients.


Brouwers J.J.H.,TU Eindhoven
Physica D: Nonlinear Phenomena | Year: 2011

A theoretical analysis is presented of the response of a lightly and nonlinearly damped mass-spring system in which the spring constant contains a small randomly fluctuating component. Damping is represented by a combination of linear and nonlinear power-law damping. System response to some initial disturbance at time zero is described by a sinusoidal wave whose amplitude and phase vary slowly and randomly with time. Leading order formulations for the equations of amplitude and phase are obtained through the application of methods of stochastic averaging of Stratonovich. The equations of amplitude and phase are given in two versions: Fokker-Planck equations for transient probability and Langevin equations for response in the time-domain. Solutions in closed-form of these equations are derived by methods of mathematical and theoretical physics involving higher transcendental functions. They are used to study the behavior of system response for ever increasing time applying asymptotic methods of analysis such as the method of steepest descent or saddle-point method. It is found that system behavior depends on the power density of the parametric excitation at twice the natural frequency and on the magnitude and form of the damping. Depending on these parameters different types of system behavior are found to be possible: response which decays exponentially to zero, response which leads to a stationary state of random behavior, and response which can either grow unboundedly or which approaches zero in a finite time. © 2011 Elsevier B.V. All rights reserved.
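
A direct time-domain illustration of the system being analysed (a crude Euler-Maruyama integration with assumed parameter values, not the closed-form stochastic-averaging results of the paper) can be obtained as follows:

```python
# Crude Euler-Maruyama time-domain integration of
#   x'' + 2*beta*x' + gamma*x'*|x'|^2 + omega0^2*(1 + sigma*xi(t))*x = 0
# with assumed parameter values; this illustrates the system, not the
# closed-form stochastic-averaging results of the paper.
import numpy as np

rng = np.random.default_rng(3)
omega0, beta, gamma, sigma = 1.0, 0.005, 0.02, 0.1
dt, n_steps = 1.0e-3, 200_000

x, v = 1.0, 0.0
envelope = []
for k in range(n_steps):
    xi = rng.standard_normal() / np.sqrt(dt)          # white-noise approximation
    a = -2.0*beta*v - gamma*v*abs(v)**2 - omega0**2*(1.0 + sigma*xi)*x
    x, v = x + v*dt, v + a*dt
    if k % 1000 == 0:
        envelope.append(np.hypot(x, v / omega0))      # slowly varying amplitude

print(f"amplitude at start: {envelope[0]:.3f}, at end: {envelope[-1]:.3f}")
```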


Leijtens X.,TU Eindhoven
IET Optoelectronics | Year: 2011

JePPIX is the European platform aiming to offer access to indium-phosphide-based technology for the manufacturing of photonic integrated circuits. This is enabled by a generic integration technology. The authors outline the current status and developments. © 2011 The Institution of Engineering and Technology.


Van De Vosse F.N.,TU Eindhoven | Stergiopulos N.,Ecole Polytechnique Federale de Lausanne
Annual Review of Fluid Mechanics | Year: 2011

The beating heart creates blood pressure and flow pulsations that propagate as waves through the arterial tree that are reflected at transitions in arterial geometry and elasticity. Waves carry information about the matter in which they propagate. Therefore, modeling of arterial wave propagation extends our knowledge about the functioning of the cardiovascular system and provides a means to diagnose disorders and predict the outcome of medical interventions. In this review we focus on the physical and mathematical modeling of pulse wave propagation, based on general fluid dynamical principles. In addition we present potential applications in cardiovascular research and clinical practice. Models of short- and long-term adaptation of the arterial system and methods that deal with uncertainties in personalized model parameters and boundary conditions are briefly discussed, as they are believed to be major topics for further study and will boost the significance of arterial pulse wave modeling even more. © 2011 by Annual Reviews. All rights reserved.


Etman L.F.P.,TU Eindhoven
Structural and Multidisciplinary Optimization | Year: 2010

We reflect on the convergence and termination of optimization algorithms based on convex and separable approximations using two recently proposed strategies, namely a trust region with filtered acceptance of the iterates, and conservatism. We then propose a new strategy for convergence and termination, denoted filtered conservatism, in which the acceptance or rejection of an iterate is determined using the nonlinear acceptance filter. However, if an iterate is rejected, we increase the conservatism of every unconservative approximation, rather than reducing the trust region. Filtered conservatism aims to combine the salient features of trust region strategies with nonlinear acceptance filters on the one hand, and conservatism on the other. In filtered conservatism, the nonlinear acceptance filter is used to decide if an iterate is accepted or rejected. This allows for the acceptance of infeasible iterates, which would not be accepted in a method based on conservatism. If, however, an iterate is rejected, the trust region need not be decreased; it may be kept constant. Convergence is then effected by increasing the conservatism of only the unconservative approximations in the (large, constant) trust region, until the iterate becomes acceptable to the filter. Numerical results corroborate the accuracy and robustness of the method. © The Author(s) 2010.


Shivamoggi B.K.,TU Eindhoven
European Physical Journal D | Year: 2011

Beltrami states in several models of plasma dynamics are considered: the incompressible magnetohydrodynamic (MHD) model, the barotropic compressible MHD model, the incompressible Hall MHD model, the barotropic compressible Hall MHD model, the electron MHD model, and the barotropic compressible Hall MHD model with electron inertia. Notwithstanding the diversity of the physics underlying the various models, the Beltrami states are shown to exhibit some common features, such as a certain robustness with respect to plasma compressibility effects (albeit under the barotropy assumption) and the Bernoulli condition. The Beltrami states for these models are deduced by minimizing the appropriate total energy while keeping the appropriate total helicity constant. © 2011 EDP Sciences, SIF, Springer-Verlag Berlin Heidelberg.


Darvishian M.,University of Groningen | Bijlsma M.J.,Unit of PharmacoEpidemiology and PharmacoEconomics PE2 | Hak E.,University of Groningen | van den Heuvel E.R.,University of Groningen | van den Heuvel E.R.,TU Eindhoven
The Lancet Infectious Diseases | Year: 2014

Background: The application of test-negative design case-control studies to assess the effectiveness of influenza vaccine has increased substantially in the past few years. The validity of these studies is predicated on the assumption that confounding bias by risk factors is limited by design. We aimed to assess the effectiveness of influenza vaccine in a high-risk group of elderly people. Methods: We searched the Cochrane library, Medline, and Embase up to July 13, 2014, for test-negative design case-control studies that assessed the effectiveness of seasonal influenza vaccine against laboratory confirmed influenza in community-dwelling people aged 60 years or older. We used generalised linear mixed models, adapted for test-negative design case-control studies, to estimate vaccine effectiveness according to vaccine match and epidemic conditions. Findings: 35 test-negative design case-control studies with 53 datasets met inclusion criteria. Seasonal influenza vaccine was not significantly effective during local virus activity, irrespective of vaccine match or mismatch to the circulating viruses. Vaccination was significantly effective against laboratory confirmed influenza during sporadic activity (odds ratio [OR] 0.69, 95% CI 0.48-0.99) only when the vaccine matched. Additionally, vaccination was significantly effective during regional (match: OR 0.42, 95% CI 0.30-0.60; mismatch: OR 0.57, 95% CI 0.41-0.79) and widespread (match: 0.54, 0.46-0.62; mismatch: OR 0.72, 95% CI 0.60-0.85) outbreaks. Interpretation: Our findings show that in elderly people, irrespective of vaccine match, seasonal influenza vaccination is effective against laboratory confirmed influenza during epidemic seasons. Efforts should be renewed worldwide to further increase uptake of the influenza vaccine in the elderly population. Funding: None. © 2014 Elsevier Ltd.
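
For readers unfamiliar with the effect measure, the odds ratio in a test-negative design is computed from the 2x2 table of vaccination status against laboratory result, with vaccine effectiveness VE = 1 - OR. The worked example below uses invented counts purely to show the arithmetic, including the usual log-normal 95% confidence interval:

```python
# Worked example with invented counts: odds ratio, 95% CI and vaccine
# effectiveness (VE = 1 - OR) from a test-negative 2x2 table.
import math

# rows: vaccinated / unvaccinated; columns: influenza-positive / test-negative
a, b = 120, 380    # vaccinated: cases, controls
c, d = 180, 320    # unvaccinated: cases, controls

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo, hi = (math.exp(math.log(odds_ratio) + z * se_log_or) for z in (-1.96, 1.96))
print(f"OR = {odds_ratio:.2f} (95% CI {lo:.2f}-{hi:.2f}), VE = {1 - odds_ratio:.1%}")
```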


Nakano Y.,Nara Institute of Science and Technology | Nakano Y.,TU Eindhoven | Fujiki M.,Nara Institute of Science and Technology
Macromolecules | Year: 2011

Circularly polarized (CP) light may play key roles in the migration and delocalization of photoexcited energy in optically active macroscopic aggregates of chiral chlorophylls surrounded by an aqueous fluid in the chloroplasts under incoherent unpolarized sunlight. Learning from the chiral fluid biosystem, we designed artificial polymer aggregates of three highly luminescent helical polysilanes, 1-S, 2-S, and 2-R (Chart 1). Under specific conditions (molecular weights and good-and-poor solvent ratio), 1-S aggregates with ∼5 μm in organic fluid generated an efficient circularly polarized luminescence (CPL) with gCPL = -0.7 at 330 nm while retaining a high quantum efficiency (φPL) ∼53% at room temperature under incoherent unpolarized photoexcitation at 290 nm. This huge gCPL value was the consequence of the intense bisignate circularly dichroism (CD) signals (gCD = -0.35 at 325 nm and +0.31 at 313 nm) due to coupled oscillators with electric-dipole-allowed-transition origin. Also, 2-S and 2-R aggregates gave almost identical intense CD and CPL amplitudes of 1-S. The most critical factors for the CD/CPL enhancements were the molecular weights of 1-S, 2-S, and 2-R and a refractive index of good/poor cosolvents. The former was connected to a long persistence length of ∼70 nm, characteristic of rod-like helical polysilanes. The latter was due to an efficient photoexcited energy confinement effect of slow CP-light in the aggregate. © 2011 American Chemical Society.


Koroglu H.,TU Eindhoven
Proceedings of the IEEE Conference on Decision and Control | Year: 2010

Attenuation of sinusoidal disturbances with uncertain and arbitrarily time-varying frequencies is considered for a plant that depends on online measurable parameters. The disturbances are modeled as the outputs of a neutrally stable exogenous system that depends on measurable as well as unmeasurable parameters. Solvability conditions are then derived in the form of parameter-dependent matrix inequalities, based on which a linear parameter-varying controller synthesis procedure is outlined. Alternative conditions are provided for the synthesis of a controller that has no dependence on the derivatives of the parameters. It is also clarified how the transient behavior of the controller can be improved. ©2010 IEEE.


Martens T.,University of Antwerp | Bogaerts A.,University of Antwerp | Van Dijk J.,TU Eindhoven
Applied Physics Letters | Year: 2010

In this letter we compare the effect of a radio-frequency sine, a low-frequency sine, a rectangular, and a pulsed dc voltage profile on the calculated electron production and power consumption in the dielectric barrier discharge. Using calculated potential distribution profiles with high temporal and spatial resolution, we also demonstrate how the pulsed dc discharge generates a secondary discharge pulse when the power supply is deactivated. © 2010 American Institute of Physics.


Irmscher M.,TU Eindhoven
Journal of the Royal Society, Interface / the Royal Society | Year: 2013

The internalization of matter by phagocytosis is of key importance in the defence against bacterial pathogens and in the control of cancerous tumour growth. Despite the fact that phagocytosis is an inherently mechanical process, little is known about the forces and energies that a cell requires for internalization. Here, we use functionalized magnetic particles as phagocytic targets and track their motion while actuating them in an oscillating magnetic field, in order to measure the translational and rotational stiffnesses of the phagocytic cup as a function of time. The measured evolution of stiffness reveals a characteristic pattern with a pronounced peak preceding the finalization of uptake. The measured stiffness values and their time dependence can be interpreted with a model that describes the phagocytic cup as a prestressed membrane connected to an elastically deformable actin cortex. In the context of this model, the stiffness peak is a direct manifestation of a previously described mechanical bottleneck, and a comparison of model and data suggests that the membrane advances around the particle at a speed of about 20 nm s⁻¹. This approach is a novel way of measuring the progression of emerging phagocytic cups and their mechanical properties in situ and in real time.


Geilen M.,TU Eindhoven
Transactions on Embedded Computing Systems | Year: 2010

The Synchronous Dataflow (SDF) model of computation by Lee and Messerschmitt has become popular for modeling concurrent applications on a multiprocessor platform. It is used to obtain a guaranteed, predictable performance. The model, on the other hand, is quite restrictive in its expressivity, making it less applicable to many modern, more dynamic applications. A common technique to deal with dynamic behavior is to consider different scenarios in separation. This analysis is, however, currently limited mainly to sequential applications. In this article, we present a new analysis approach that allows analysis of synchronous dataflow models across different scenarios of operation. The dataflow graphs corresponding to the different scenarios can be completely different. Execution times, consumption and production rates, and the structure of the SDF may change. Our technique allows one to derive or prove worst-case performance guarantees for the resulting model and as such extends the model-driven approach to designing predictable systems to significantly more dynamic applications and platforms. The approach is illustrated with three MP3 and MPEG-4 related case studies. © 2010 ACM.
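
The scenario-based analysis builds on standard SDF machinery; in particular, each scenario graph must be consistent, i.e. its balance equations must admit a repetition vector. The sketch below computes a repetition vector for a connected SDF graph; it is illustrative only and is not the worst-case analysis algorithm of the article.

    from fractions import Fraction
    from math import lcm

    def repetition_vector(edges):
        """Smallest integer repetition vector of a connected, consistent SDF graph.

        `edges` is a list of (producer, prod_rate, consumer, cons_rate) tuples;
        each edge imposes the balance equation prod_rate * q[producer] ==
        cons_rate * q[consumer].  Raises ValueError if the graph is inconsistent.
        """
        ratios = {}
        for p, rp, c, rc in edges:
            ratios.setdefault(p, []).append((c, Fraction(rp, rc)))   # q[c] = q[p] * rp/rc
            ratios.setdefault(c, []).append((p, Fraction(rc, rp)))   # q[p] = q[c] * rc/rp
        start = edges[0][0]
        q = {start: Fraction(1)}
        stack = [start]
        while stack:
            a = stack.pop()
            for b, ratio in ratios[a]:
                val = q[a] * ratio
                if b not in q:
                    q[b] = val
                    stack.append(b)
                elif q[b] != val:
                    raise ValueError("inconsistent SDF graph")
        scale = lcm(*(f.denominator for f in q.values()))
        return {actor: int(f * scale) for actor, f in q.items()}

    # A -2/3-> B -1/2-> C: firing A three times, B twice and C once returns all
    # buffers to their initial state, so q = {'A': 3, 'B': 2, 'C': 1}.
    print(repetition_vector([("A", 2, "B", 3), ("B", 1, "C", 2)]))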


Fabre B.,CNRS Jean Le Rond dAlembert Institute | Gilbert J.,CNRS Acoustic Lab of Du Maine University | Hirschberg A.,TU Eindhoven | Pelorson X.,CNRS GIPSA Laboratory
Annual Review of Fluid Mechanics | Year: 2011

We are interested in the quality of sound produced by musical instruments and their playability. In wind instruments, a hydrodynamic source of sound is coupled to an acoustic resonator. Linear acoustics can predict the pitch of an instrument. This can significantly reduce the trial-and-error process in the design of a new instrument. We consider deviations from the linear acoustic behavior and the fluid mechanics of the sound production. Real-time numerical solution of the nonlinear physical models is used for sound synthesis in so-called virtual instruments. Although reasonable analytical models are available for reeds, lips, and vocal folds, the complex behavior of flue instruments escapes a simple universal description. Furthermore, to predict the playability of real instruments and help phoneticians or surgeons analyze voice quality, we need more complex models.


Moodera J.S.,Massachusetts Institute of Technology | Koopmans B.,TU Eindhoven | Oppeneer P.M.,Uppsala University
MRS Bulletin | Year: 2014

Organic materials provide a unique platform for exploiting the spin of the electron - a field dubbed organic spintronics. Originally, this was mostly motivated by the notion that organic matter typically displays a very long electron spin coherence time, owing to the weak spin-orbit coupling of the light elements in organics and the small hyperfine coupling. More recently, however, it was found that organics provide a special class of spintronic materials for many other reasons - several of which are discussed throughout this issue. Over the past decade, there has been a growing interest in utilizing the molecular spin state as a quantum of information, aiming to develop multifunctional molecular spintronics for memory, sensing, and logic applications. The aim of this issue is to stimulate the interest of researchers by bringing to their attention the vast possibilities not only for unexpected science but also for the enormous potential for developing new functionalities and applications. The six articles in this issue deal with some of the breakthrough work that has been ongoing in this field in recent years. © Materials Research Society 2014.


Van Beurden M.C.,TU Eindhoven
Journal of the Optical Society of America A: Optics and Image Science, and Vision | Year: 2011

For block-shaped dielectric gratings with two-dimensional periodicity, a spectral-domain volume integral equation is derived in which explicit Fourier factorization rules are employed. The Fourier factorization rules are derived from a projection-operator framework and enhance the numerical accuracy of the method, while maintaining a low computational complexity of O(N log N) or better and a low memory demand of O(N). © 2011 Optical Society of America.


Van Brummelen E.H.,TU Eindhoven
International Journal for Numerical Methods in Fluids | Year: 2011

The basic subiteration method for fluid-structure interaction (FSI) problems is based on a partitioning of the fluid-structure system into a fluidic part and a structural part. The effect of the fluid on the structure can be represented by an added mass to the structural operator. This added mass can be identified as an upper bound on the norm or spectral radius of the Poincaré-Steklov operator of the fluid. The convergence behavior of the subiteration method depends sensitively on the ratio of the added mass to the actual structural mass. For FSI problems with large added-mass effects, the subiteration method is either unstable or its convergence behavior is prohibitively inefficient. In recent years, several more advanced partitioned iterative solution methods have been proposed for this class of problems, which use subiteration as a component. The rudimentary characterization of the Poincaré-Steklov operator provided by the added mass is, however, inadequate to analyze these methods. Moreover, this characterization is inappropriate for compressible flows. In this paper, we investigate the fine properties of the Poincaré-Steklov operators and of the corresponding subiteration operators for incompressible and compressible flow models and for two distinct structural operators. Based on the characteristic properties of the subiteration operators, we subsequently examine the convergence behavior of several partitioned iterative solution methods for FSI, viz. subiteration, subiteration in conjunction with underrelaxation, the modified-mass method, Aitken's method, and interface-GMRES and interface-Newton-Krylov methods. Copyright © 2010 John Wiley & Sons, Ltd.
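
The role of the added-mass ratio can be illustrated with a toy scalar model (not taken from the paper): in the simplest Gauss-Seidel picture the interface error is multiplied by minus the added-mass ratio at each subiteration, and underrelaxation changes that amplification factor. A hedged sketch:

    def subiteration_errors(mu, omega, n_iter=4, e0=1.0):
        """Toy scalar model of partitioned FSI subiteration.

        mu is the added-mass ratio (added mass / structural mass) and omega the
        underrelaxation factor; the error is amplified by (1 - omega * (1 + mu))
        each iteration, so plain subiteration (omega = 1) diverges for mu > 1
        and underrelaxed subiteration converges only for omega < 2 / (1 + mu).
        """
        errors = [e0]
        for _ in range(n_iter):
            errors.append((1.0 - omega * (1.0 + mu)) * errors[-1])
        return errors

    print(subiteration_errors(3.0, 1.0))   # roughly [1.0, -3.0, 9.0, -27.0, 81.0] (diverges)
    print(subiteration_errors(3.0, 0.4))   # roughly [1.0, -0.6, 0.36, -0.22, 0.13] (converges)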


Koenraad P.M.,TU Eindhoven | Flatte M.E.,University of Iowa
Nature Materials | Year: 2011

The sensitive dependence of a semiconductor's electronic, optical and magnetic properties on dopants has provided an extensive range of tunable phenomena to explore and apply to devices. Recently it has become possible to move past the tunable properties of an ensemble of dopants to identify the effects of a solitary dopant on commercial device performance as well as locally on the fundamental properties of a semiconductor. New applications that require the discrete character of a single dopant, such as single-spin devices in the area of quantum information or single-dopant transistors, demand a further focus on the properties of a specific dopant. This article describes the huge advances in the past decade towards observing, controllably creating and manipulating single dopants, as well as their application in novel devices, opening up the new field of solotronics (solitary dopant optoelectronics). © 2011 Macmillan Publishers Limited. All rights reserved.


Elwany A.H.,TU Eindhoven | Gebraeel N.Z.,Georgia Institute of Technology | Maillart L.M.,University of Pittsburgh
Operations Research | Year: 2011

Failure of many engineering systems usually results from a gradual and irreversible accumulation of damage, a degradation process. Most degradation processes can be monitored using sensor technology. The resulting degradation signals are usually correlated with the degradation process. A system is considered to have failed once its degradation signal reaches a prespecified failure threshold. This paper considers a replacement problem for components whose degradation process can be monitored using dedicated sensors. First, we present a stochastic degradation modeling framework that characterizes, in real time, the path of a component's degradation signal. These signals are used to predict the evolution of the component's degradation state. Next, we formulate a single-unit replacement problem as a Markov decision process and utilize the real-time signal observations to determine a replacement policy. We focus on exponentially increasing degradation signals and show that the optimal replacement policy for this class of problems is a monotonically nondecreasing control limit policy. Finally, the model is used to determine an optimal replacement policy by utilizing vibration-based degradation signals from a rotating machinery application. © 2011 INFORMS.


Blocken B.,TU Eindhoven | Derome D.,Empa - Swiss Federal Laboratories for Materials Science and Technology | Carmeliet J.,Empa - Swiss Federal Laboratories for Materials Science and Technology | Carmeliet J.,ETH Zurich
Building and Environment | Year: 2013

Rainwater runoff from building facades is a complex process governed by a wide range of urban, building, material and meteorological parameters. Given this complexity and the wide range of influencing parameters, it is not surprising that despite research efforts spanning over almost a century, wind-driven rain and rainwater runoff are still very active research subjects. Accurate knowledge of rainwater runoff is important for hygrothermal and durability analyses of building facades, assessment of indirect evaporative cooling by water films on facades to mitigate outdoor and indoor overheating, assessment of the self-cleaning action of facade surface coatings and leaching of particles from surface coatings that enter the water cycle as hazardous pollutants. Research on rainwater runoff is performed by field observations, field measurements, laboratory measurements and analytical and numerical modelling. While field observations are many, up to now, field experiments and modelling efforts are few and have been almost exclusively performed for plain facades without facade details. Field observations, often based on a posteriori investigation of the reasons for differential surface soiling, are important because they have provided and continue to provide very valuable qualitative information on runoff, which is very difficult to obtain in any other way. Quantitative measurements are increasing, but are still very limited in relation to the wide range of influencing parameters. To the knowledge of the authors, current state-of-the-art hygrothermal models do not yet contain runoff models. The development, validation and implementation of such models into hygrothermal models is required to supplement observational and experimental research efforts. © 2012 Elsevier Ltd.


Paredes J.,University of Amsterdam | Michels M.A.J.,TU Eindhoven | Bonn D.,University of Amsterdam
Physical Review Letters | Year: 2013

Many soft-matter systems show a transition between fluidlike and mechanically solidlike states when the volume fraction of the material, e.g., particles, drops, or bubbles is increased. Using an emulsion as a model system with a precisely controllable volume fraction, we show that the entire mechanical behavior in the vicinity of the jamming point can be understood if the mechanical transition is assumed to be analogous to a phase transition. We find power-law scalings in the distance to the jamming point, in which the parameters and exponents connect the behavior above and below jamming. We propose a simple two-state model with heterogeneous dynamics to describe the transition between jammed and mobile states. The model reproduces the steady-state and creep rheology and relates the power-law exponents to diverging microscopic time scales. © 2013 American Physical Society.


Brouwers H.J.H.,TU Eindhoven
Physical Review E - Statistical, Nonlinear, and Soft Matter Physics | Year: 2010

This paper addresses the relative viscosity of concentrated suspensions loaded with unimodal hard particles. So far, exact equations have only been put forward in the dilute limit, e.g., by Einstein for spheres. For larger concentrations, a number of phenomenological models for the relative viscosity have been presented, which depend on particle concentration only. Here, an original and exact closed-form expression, based on geometrical considerations, is derived that predicts the viscosity of a concentrated suspension of monosized particles. This master curve for the suspension viscosity is governed by the relative viscosity-concentration gradient in the dilute limit (for spheres the Einstein limit) and by random close packing of the unimodal particles in the concentrated limit. The analytical expression of the relative viscosity is thoroughly compared with experiments and simulations reported in the literature, concerning both dilute and concentrated suspensions of spheres, and good agreement is found. © 2010 The American Physical Society.


Groenwold A.A.,Stellenbosch University | Etman L.F.P.,TU Eindhoven
International Journal for Numerical Methods in Engineering | Year: 2010

In topology optimization, it is customary to use reciprocal-like approximations, which result in monotonically decreasing approximate objective functions. In this paper, we demonstrate that efficient quadratic approximations for topology optimization can also be derived, if the approximate Hessian terms are chosen with care. To demonstrate this, we construct a dual SAO algorithm for topology optimization based on a strictly convex, diagonal quadratic approximation to the objective function. Although the approximation is purely quadratic, it does contain essential elements of reciprocal-like approximations: for self-adjoint problems, our approximation is identical to the quadratic or second-order Taylor series approximation to the exponential approximation. We present both a single-point and a two-point variant of the new quadratic approximation. Copyright © 2009 John Wiley & Sons, Ltd.


Kirkels Y.,Fontys University of Applied Sciences | Duysters G.,TU Eindhoven
Research Policy | Year: 2010

This study focuses on SME networks of design and high-tech companies in Southeast Netherlands. By highlighting the personal networks of members across design and high-tech industries, the study attempts to identify the main brokers in this dynamic environment. In addition, we investigate whether specific characteristics are associated with these brokers. The main contribution of the paper lies in the fact that, in contrast to most other work, it is of a quantitative nature and focuses on brokers identified in an actual network. Studying the phenomenon of brokerage provides us with clear insights into the concept of brokerage regarding SME networks in different fields. In particular, we highlight how third parties contribute to the transfer and development of knowledge. Empirical results show, among other things, that the most influential brokers are found in the non-profit and science sector and have a long track record in their branch. © 2010 Elsevier B.V. All rights reserved.


Van Den Dries S.,TU Eindhoven | Wiering M.A.,University of Groningen
IEEE Transactions on Neural Networks and Learning Systems | Year: 2012

This paper describes a methodology for quickly learning to play games at a strong level. The methodology consists of a novel combination of three techniques, and a variety of experiments on the game of Othello demonstrates their usefulness. First, structures or topologies in neural network connectivity patterns are used to decrease the number of learning parameters and to deal more effectively with the structural credit assignment problem, which is to change individual network weights based on the obtained feedback. Furthermore, the structured neural networks are trained with the novel neural-fitted temporal difference (TD) learning algorithm to create a system that can exploit most of the training experiences and enhance learning speed and performance. Finally, we use the neural-fitted TD-leaf algorithm to learn more effectively when look-ahead search is performed by the game-playing program. Our extensive experimental study clearly indicates that the proposed method outperforms linear networks and fully connected neural networks or evaluation functions evolved with evolutionary algorithms. © 2012 IEEE.
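
For readers unfamiliar with temporal difference learning, the plain TD(0) rule that the neural-fitted and TD-leaf variants build on is shown below for a linear evaluation function; this is an illustrative sketch only, not the structured-network learner of the paper.

    import numpy as np

    def td0_update(weights, features, next_features, reward, alpha=0.01, gamma=1.0):
        """One TD(0) step for a linear evaluation function V(s) = w . phi(s).

        The weights are nudged towards reward + gamma * V(s') - V(s); the
        paper's neural-fitted TD and TD-leaf methods replace the linear model
        by a structured neural network and back up minimax search values.
        """
        td_error = reward + gamma * (weights @ next_features) - (weights @ features)
        return weights + alpha * td_error * features

    # Terminal transition in a toy Othello-like evaluation with 3 board features:
    # the game is won (reward = 1), so the successor value is zero.
    w = np.zeros(3)
    phi = np.array([1.0, 0.0, 2.0])
    print(td0_update(w, phi, np.zeros(3), reward=1.0))  # [0.01 0.   0.02]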


Dennison M.,VU University Amsterdam | Sheinman M.,VU University Amsterdam | Storm C.,TU Eindhoven | Mackintosh F.C.,VU University Amsterdam
Physical Review Letters | Year: 2013

We study the elastic properties of thermal networks of Hookean springs. In the purely mechanical limit, such systems are known to have a vanishing rigidity when their connectivity falls below a critical, isostatic value. In this work, we show that thermal networks exhibit a nonzero shear modulus G well below the isostatic point and that this modulus exhibits an anomalous, sublinear dependence on temperature T. At the isostatic point, G increases as the square root of T, while below the isostatic point we find G ∝ T^α, with α ≈ 0.8. We show that this anomalous T dependence is entropic in origin. © 2013 American Physical Society.


Mendling J.,Humboldt University of Berlin | Reijers H.A.,TU Eindhoven | Recker J.,Queensland University of Technology
Information Systems | Year: 2010

Few studies have investigated the factors contributing to the successful practice of process modeling. In particular, studies that contribute to the act of developing process models that facilitate communication and understanding are scarce. Although the value of process models is not only dependent on the choice of graphical constructs but also on their annotation with textual labels, there has been hardly any work on the quality of these labels. Accordingly, the research presented in this paper examines activity labeling practices in process modeling. Based on empirical data from process modeling practice, we identify and discuss different labeling styles and their use in process modeling praxis. We perform a grammatical analysis of these styles and use data from an experiment with process modelers to examine a range of hypotheses about the usability of the different styles. Based on our findings, we suggest specific programs of research towards better tool support for labeling practices. Our work contributes to the emerging stream of research investigating the practice of process modeling and thereby contributes to the overall body of knowledge about conceptual modeling quality. © 2009 Elsevier B.V. All rights reserved.


Chen W.,Key Laboratory of Silicate Materials Science and Engineering | Chen W.,Wuhan University of Technology | Brouwers H.J.H.,TU Eindhoven
Cement and Concrete Research | Year: 2010

The alkali-binding capacity of C-S-H in hydrated Portland cement pastes is addressed in this study. The amount of bound alkalis in C-S-H is computed based on the alkali partition theories first proposed by Taylor (1987) and later further developed by Brouwers and Van Eijk (2003). Experimental data reported in the literature concerning thirteen different recipes are analyzed and used as references. A three-dimensional computer-based cement hydration model (CEMHYD3D) is used to simulate the hydration of Portland cement pastes. These model predictions are used as inputs for deriving the alkali-binding capacity of the hydration product C-S-H in hydrated Portland cement pastes. It is found that for Na+ the relation between the moles bound in C-S-H and its concentration in the pore solution is linear, while the binding of K+ in C-S-H complies with the Freundlich isotherm. New models are proposed for determining the alkali-binding capacities of C-S-H in hydrated Portland cement paste. An updated method for predicting the alkali concentrations in the pore solution of hydrated Portland cement pastes is developed. It is also used to investigate the effects of various factors (such as the water to cement ratio, clinker composition and alkali types) on the alkali concentrations. © 2010 Elsevier Ltd. All rights reserved.
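
The Freundlich isotherm mentioned above has the form q = K * c**n, so its parameters can be estimated by a linear fit in log-log space. A minimal sketch with synthetic data (the function name and the numbers are ours, not values from the paper):

    import numpy as np

    def fit_freundlich(c, q):
        """Fit the Freundlich isotherm q = K * c**n by linear regression on
        log q = log K + n * log c.  Returns (K, n).
        """
        n, log_k = np.polyfit(np.log(c), np.log(q), 1)
        return np.exp(log_k), n

    # Synthetic "measurements": bound K+ per unit C-S-H vs. pore-solution
    # concentration; real values must come from the analysed pastes.
    c = np.array([0.05, 0.10, 0.20, 0.40, 0.80])   # mol/L
    q = 0.30 * c ** 0.45
    K, n = fit_freundlich(c, q)
    print(f"K = {K:.2f}, n = {n:.2f}")             # K = 0.30, n = 0.45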


Heemels W.P.M.H.,TU Eindhoven | Daafouz J.,University of Lorraine | Millerioux G.,University of Lorraine
IEEE Transactions on Automatic Control | Year: 2010

In this note, linear matrix inequality-based design conditions are presented for observer-based controllers that stabilize discrete-time linear parameter-varying systems in the situation where the parameters are not exactly known, but are only available with a finite accuracy. The presented framework allows tradeoffs to be made between the admissible level of parameter uncertainty on the one hand and the transient performance on the other. In addition, the level of parameter uncertainty can be maximized while still guaranteeing closed-loop stability. © 2010 IEEE.


Zhang B.,IBM | Van Leeuwaarden J.S.H.,TU Eindhoven | Zwart B.,Pna Innovations, Inc.
Operations Research | Year: 2012

In call centers it is crucial to staff the right number of agents so that the targeted service levels are met. These staffing problems typically lead to constraint satisfaction problems that are hard to solve. During the last decade, a beautiful many-server asymptotic theory has been developed to solve such problems for large call centers, and optimal staffing rules are known to obey the square-root staffing principle. This paper presents refinements to many-server asymptotics and this staffing principle for a Markovian queueing model with impatient customers. © 2012 INFORMS.
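
The classical square-root staffing rule sets the number of agents to the offered load plus a safety margin proportional to its square root; the paper refines how the safety parameter should be chosen when customers abandon. A sketch of the basic rule only (the function name is ours):

    import math

    def square_root_staffing(arrival_rate, mean_service_time, beta):
        """Classical square-root staffing: N = ceil(R + beta * sqrt(R)),
        where R = arrival_rate * mean_service_time is the offered load in
        Erlangs and beta tunes the quality of service (larger beta, less delay).
        """
        offered_load = arrival_rate * mean_service_time
        return math.ceil(offered_load + beta * math.sqrt(offered_load))

    # 300 calls per hour with a 6-minute (0.1 h) mean handling time: R = 30.
    print(square_root_staffing(300, 0.1, beta=1.0))   # 36 agents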


Tajer A.,Wayne State University | Castro R.M.,TU Eindhoven | Wang X.,Columbia University
IEEE Transactions on Information Theory | Year: 2012

Cognitive radios process their sensed information collectively in order to opportunistically identify and access underutilized spectrum segments (spectrum holes). Due to the transient and rapidly varying nature of the spectrum occupancy, the cognitive radios (secondary users) must be agile in identifying the spectrum holes in order to enhance their spectral efficiency. We propose a novel adaptive procedure to reinforce the agility of the secondary users for identifying multiple spectrum holes simultaneously over a wide spectrum band. This is accomplished by successively exploring the set of potential spectrum holes and progressively allocating the sensing resources to the most promising areas of the spectrum. Such exploration and resource allocation results in conservative spending of the sensing resources and translates into very agile spectrum monitoring. The proposed successive and adaptive sensing procedure is in contrast to the more conventional approaches that distribute the sampling resources equally over the entire spectrum. Besides improved agility, the adaptive procedure requires less-stringent constraints on the power of the primary users to guarantee that they remain distinguishable from the environment noise and renders more reliable spectrum hole detection. © 1963-2012 IEEE.


Voyiadjis G.Z.,Louisiana State University | Peters R.,TU Eindhoven
Acta Mechanica | Year: 2010

This work addresses the size effect encountered in nanoindentation experiments. It is generally referred to as the indentation size effect (ISE). Classical descriptions of the ISE show a decrease in hardness for increasing indentation depth. Recently new experiments have shown that after the initial decrease, hardness increases with increasing indentation depth. After this increase, finally the hardness decreases with increasing indentation. This work reviews the existing theories describing the ISE and presents new formulations that incorporate the hardening effect into the ISE. Furthermore, indentation experiments have been performed on several metal samples, to see whether the hardening effect was an anomaly or not. Finally, numerical simulations are performed using the commercial program ABAQUS. © 2009 Springer-Verlag.


Janssen J.H.,TU Eindhoven
Journal on Multimodal User Interfaces | Year: 2012

Empathy can be considered one of our most important social processes. In that light, empathic technologies are the class of technologies that can augment empathy between two or more individuals. To provide a basis for such technologies, a three component framework is presented based on psychology and neuroscience, consisting of cognitive empathy, emotional convergence, and empathic responding. These three components can be situated in affective computing and social signal processing and pose different opportunities for empathic technologies. To leverage these opportunities, automated measurement possibilities for each component are identified using (combinations of) facial expressions, speech, and physiological signals. Thereafter, methodological challenges are discussed, including ground truth measurements and empathy induction. Finally, a research agenda is presented for social signal processing. This framework can help to further research on empathic technologies and ultimately bring it to fruition in meaningful innovations. In turn, this could enhance empathic behavior, thereby increasing altruism, trust, cooperation, and bonding. © 2012 The Author(s).


Martens J.-B.,TU Eindhoven
Transactions on Interactive Intelligent Systems | Year: 2014

Progress in empirical research relies on adequate statistical analysis and reporting. This article proposes an alternative approach to statistical modeling that is based on an old but mostly forgotten idea, namely Thurstone modeling. Traditional statistical methods assume that either the measured data, in the case of parametric statistics, or the rank-order transformed data, in the case of nonparametric statistics, are samples from a specific (usually Gaussian) distribution with unknown parameters. Consequently, such methods should not be applied when this assumption is not valid. Thurstone modeling similarly assumes the existence of an underlying process that obeys an a priori assumed distribution with unknown parameters, but combines this underlying process with a flexible response mechanism that can be either continuous or discrete and either linear or nonlinear. One important advantage of Thurstone modeling is that traditional statistical methods can still be applied on the underlying process, irrespective of the nature of the measured data itself. Another advantage is that Thurstone models can be graphically represented, which helps to communicate them to a broad audience. A new interactive statistical package, Interactive Log Likelihood MOdeling (Illmo), was specifically designed for estimating and rendering Thurstone models and is intended to bring Thurstone modeling within the reach of persons who are not experts in statistics. Illmo is unique in the sense that it provides not only extensive graphical renderings of the data analysis results but also an interface for navigating between different model options. In this way, users can interactively explore different models and decide on an adequate balance between model complexity and agreement with the experimental data. Hypothesis testing on model parameters is also made intuitive and is supported by both textual and graphical feedback. The flexibility and ease of use of Illmo means that it is also potentially useful as a didactic tool for teaching statistics. © 2014 ACM 2160-6455/2014/03-ART4 $ 15.00.


Abb M.,University of Southampton | Bakkers E.P.A.M.,TU Eindhoven | Muskens O.L.,University of Southampton
Physical Review Letters | Year: 2011

We demonstrate ultrafast dephasing in the random transport of light through a layer consisting of strongly scattering GaP nanowires. Dephasing results in a nonlinear intensity modulation of individual pseudomodes which is 100 times larger than that of bulk GaP. Different contributions to the nonlinear response are separated by using total transmission, white-light frequency correlation, and statistical pseudomode analysis. A dephasing time of 1.2 ± 0.2 ps is found. Quantitative agreement is obtained with numerical model calculations which include photoinduced absorption and deformation of individual scatterers. Nonlinear dephasing of photonic eigenmodes opens up avenues for ultrafast control of random lasers, nanophotonic switches, and photon localization. © 2011 American Physical Society.


Zecevic J.,University Utrecht | Gommes C.J.,University of Liege | Friedrich H.,TU Eindhoven | Dejongh P.E.,University Utrecht | Dejong K.P.,University Utrecht
Angewandte Chemie - International Edition | Year: 2012

Quantitative insight into the three-dimensional morphology of complex zeolite Y mesopore networks was achieved by combining electron tomography and image processing. Properties could be studied that are not measurable by other techniques, such as the size distribution of the intact microporous domains. This has great relevance in descriptions of the molecular diffusion through zeolite crystals and, hence, catalytic activity and selectivity. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.


Adan I.,TU Eindhoven | Weiss G.,Haifa University
Operations Research | Year: 2012

Motivated by queues with multitype servers and multitype customers, we consider an infinite sequence of items of types C = (c_1, ..., c_I), and another infinite sequence of items of types S = (s_1, ..., s_J), and a bipartite graph G of allowable matches between the types. We assume that the types of items in the two sequences are independent and identically distributed (i.i.d.) with given probability vectors α, β. Matching the two sequences on a first-come, first-served basis defines a unique infinite matching between the sequences. For (c_i, s_j) ∈ G we define the matching rate r_{c_i,s_j} as the long-term fraction of (c_i, s_j) matches in the infinite matching, if it exists. We describe this system by a multidimensional countable Markov chain, obtain conditions for ergodicity, and derive its stationary distribution, which is, most surprisingly, of product form. We show that if the chain is ergodic, then the matching rates exist almost surely, and we give a closed-form formula to calculate them. We point out the connection of this model to some queueing models. © 2012 INFORMS.
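
The closed-form matching rates of the paper can be sanity-checked by brute-force simulation of the FCFS matching of two long i.i.d. sequences. The sketch below is such a simulation (all names are ours); for the small "N-shaped" example the rates are pinned down by flow conservation at roughly 0.4, 0.1 and 0.5.

    import random
    from collections import Counter

    def fcfs_matching_rates(alpha, beta, allowed, n=100_000, seed=0):
        """Estimate FCFS matching rates r_{c,s} by simulation.

        alpha, beta map customer / server types to probabilities; `allowed` is
        the set of compatible (c, s) pairs (the bipartite graph G).  Each
        customer, in order, is matched to the earliest still-unmatched
        compatible server in the server sequence.
        """
        rng = random.Random(seed)
        customers = rng.choices(list(alpha), list(alpha.values()), k=n)
        servers = rng.choices(list(beta), list(beta.values()), k=n)
        free = list(range(n))                 # unmatched server positions, in order
        counts = Counter()
        for c in customers:
            for pos, j in enumerate(free):
                if (c, servers[j]) in allowed:
                    counts[(c, servers[j])] += 1
                    del free[pos]
                    break
        total = sum(counts.values())
        return {pair: round(cnt / total, 3) for pair, cnt in counts.items()}

    # "N" example: c1 is only compatible with s1, c2 with both s1 and s2.
    print(fcfs_matching_rates({"c1": 0.4, "c2": 0.6}, {"s1": 0.5, "s2": 0.5},
                              {("c1", "s1"), ("c2", "s1"), ("c2", "s2")}))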


OBJECTIVES: Novel quantitative measures of transpulmonary circulation status may allow the improvement of heart failure (HF) patient management. In this work, we propose a method for the assessment of the transpulmonary circulation using measurements from indicator time intensity curves, derived from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) series. The derived indicator dilution parameters in healthy volunteers (HVs) and HF patients were compared, and repeatability was assessed. Furthermore, we compared the parameters derived using the proposed method with standard measures of cardiovascular function, such as left ventricular (LV) volumes and ejection fraction. MATERIALS AND METHODS: In total, 19 HVs and 33 HF patients underwent a DCE-MRI scan on a 1.5 T MRI scanner using a T1-weighted spoiled gradient echo sequence. Image loops with 1 heartbeat temporal resolution were acquired in 4-chamber view during ventricular late diastole, after the injection of a 0.1-mmol gadoteriol bolus. In a subset of subjects (8 HFs, 2 HVs), a second injection of a 0.3-mmol gadoteriol bolus was performed with the same imaging settings. The study was approved by the local institutional review board. Indicator dilution curves were derived, averaging the MR signal within regions of interest in the right and left ventricle; parametric deconvolution was performed between the right and LV indicator dilution curves to identify the impulse response of the transpulmonary dilution system. The local density random walk model was used to parametrize the impulse response; pulmonary transit time (PTT) was defined as the mean transit time of the indicator. λ, related to the Péclet number (ratio between convection and diffusion) for the dilution process, was also estimated. RESULTS: Pulmonary transit time was significantly prolonged in HF patients (8.70 ± 1.87 seconds vs 6.68 ± 1.89 seconds in HV, P < 0.005), and even more so when normalized to subject heart rate (normalized PTT, 9.90 ± 2.16 vs 7.11 ± 2.17 in HV, dimensionless, P < 0.001). λ was significantly smaller in HF patients (8.59 ± 4.24 in HF vs 12.50 ± 17.09 in HV, dimensionless, P < 0.005), indicating a longer tail for the impulse response. Pulmonary transit time correlated well with established cardiovascular parameters (LV end-diastolic volume index, r = 0.61, P < 0.0001; LV ejection fraction, r = −0.64, P < 0.0001). The measurement of indicator dilution parameters was repeatable (correlation between estimates based on the 2 repetitions for PTT: r = 0.94, P < 0.001, difference between 2 repetitions 0.01 ± 0.60 seconds; for λ: r = 0.74, P < 0.01, difference 0.69 ± 4.39). CONCLUSIONS: Characterization of the transpulmonary circulation by DCE-MRI is feasible in HF patients and HVs. Significant differences are observed between indicator dilution parameters measured in HVs and HF patients; preliminary results suggest good repeatability for the proposed parameters. Copyright © 2016 Wolters Kluwer Health, Inc. All rights reserved.
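
For readers less familiar with indicator-dilution analysis: the pulmonary transit time used above is the mean transit time of the identified impulse response, i.e. its first moment after normalization. A minimal numerical sketch with a synthetic impulse response (the LDRW fitting and the RV/LV deconvolution of the paper are not reproduced here):

    import numpy as np

    def mean_transit_time(t, h):
        """Mean transit time of a sampled impulse response h(t): the first
        moment of the normalized transit-time distribution (uniform sampling
        of t is assumed)."""
        t = np.asarray(t, dtype=float)
        h = np.asarray(h, dtype=float)
        return float(np.sum(t * h) / np.sum(h))

    # Synthetic, roughly bell-shaped impulse response sampled at 1 s
    # (one-heartbeat) resolution, centred near 8 s as in the HF group.
    t = np.arange(0.0, 30.0, 1.0)
    h = np.exp(-0.5 * ((t - 8.0) / 2.0) ** 2)
    print(f"PTT ~ {mean_transit_time(t, h):.1f} s")   # PTT ~ 8.0 s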


Albertazzi L.,TU Eindhoven | Albertazzi L.,CNR Institute of Neuroscience | Bendikov M.,Weizmann Institute of Science | Baran P.S.,Scripps Research Institute
Journal of the American Chemical Society | Year: 2012

The detection of chemical or biological analytes upon molecular reactions relies increasingly on fluorescence methods, and there is a demand for more sensitive, more specific, and more versatile fluorescent molecules. We have designed long wavelength fluorogenic probes with a turn-ON mechanism based on a donor-two-acceptor π-electron system that can undergo an internal charge transfer to form new fluorochromes with longer π-electron systems. Several latent donors and multiple acceptor molecules were incorporated into the probe modular structure to generate versatile dye compounds. This new library of dyes had fluorescence emission in the near-infrared (NIR) region. Computational studies reproduced the observed experimental trends well and suggest factors responsible for high fluorescence of the donor-two-acceptor active form and the low fluorescence observed from the latent form. Confocal images of HeLa cells indicate a lysosomal penetration pathway of a selected dye. The ability of these dyes to emit NIR fluorescence through a turn-ON activation mechanism makes them promising candidate probes for in vivo imaging applications. © 2012 American Chemical Society.


Laminopathies, mainly caused by mutations in the LMNA gene, are a group of inherited diseases with a highly variable penetrance; i.e., the disease spectrum in persons with identical LMNA mutations ranges from symptom-free conditions to severe cardiomyopathy and progeria, leading to early death. LMNA mutations cause nuclear abnormalities and cellular fragility in response to cellular mechanical stress, but the genotype/phenotype correlations in these diseases remain unclear. Consequently, tools such as mutation analysis are not adequate for predicting the course of the disease. Here, we employ growth substrate stiffness to probe nuclear fragility in cultured dermal fibroblasts from a laminopathy patient with compound progeroid syndrome. We show that culturing of these cells on substrates with stiffness higher than 10 kPa results in malformations and even rupture of the nuclei, while culture on a soft substrate (3 kPa) protects the nuclei from morphological alterations and ruptures. No malformations were seen in healthy control cells at any substrate stiffness. In addition, analysis of the actin cytoskeleton organization in these laminopathy cells demonstrates that the onset of nuclear abnormalities correlates to an increase in cytoskeletal tension. Together, these data indicate that culturing of these LMNA mutated cells on substrates with a range of different stiffnesses can be used to probe the degree of nuclear fragility. This assay may be useful in predicting patient-specific phenotypic development and in investigations on the underlying mechanisms of nuclear and cellular fragility in laminopathies.


Loffler W.,Leiden University | Broer D.J.,TU Eindhoven | Woerdman J.P.,Leiden University
Physical Review A - Atomic, Molecular, and Optical Physics | Year: 2011

We explore experimentally if light's orbital angular momentum (OAM) interacts with chiral nematic polymer films. Specifically, we measure the circular dichroism of such a material using light beams with different OAM. We investigate the case of strongly focused, nonparaxial light beams, where the spatial and polarization degrees of freedom are coupled. Within the experimental accuracy, we cannot find any influence of the OAM on the circular dichroism of cholesteric polymers. © 2011 American Physical Society.


Vreman A.W.,Akzo Nobel | Kuerten J.G.M.,TU Eindhoven | Kuerten J.G.M.,University of Twente
Physics of Fluids | Year: 2014

Statistical profiles of the first- and second-order spatial derivatives of velocity and pressure are reported for turbulent channel flow at Reτ = 590. The statistics were extracted from a high-resolution direct numerical simulation. To quantify the anisotropic behavior of fine-scale structures, the variances of the derivatives are compared with the theoretical values for isotropic turbulence. It is shown that appropriate combinations of first- and second-order velocity derivatives lead to (directional) viscous length scales without explicit occurrence of the viscosity in the definitions. To quantify the non-Gaussian and intermittent behavior of fine-scale structures, higher-order moments and probability density functions of spatial derivatives are reported. Absolute skewnesses and flatnesses of several spatial derivatives display high peaks in the near wall region. In the logarithmic and central regions of the channel flow, all first-order derivatives appear to be significantly more intermittent than in isotropic turbulence at the same Taylor Reynolds number. Since the nine variances of first-order velocity derivatives are the distinct elements of the turbulence dissipation, the budgets of these nine variances are shown, together with the budget of the turbulence dissipation. The comparison of the budgets in the near-wall region indicates that the normal derivative of the fluctuating streamwise velocity (∂u′/∂y) plays a more important role than other components of the fluctuating velocity gradient. The small-scale generation term formed by triple correlations of fluctuations of first-order velocity derivatives is analyzed. A typical mechanism of small-scale generation near the wall (around y+ = 1), the intensification of positive ∂u′/∂y by local strain fluctuation (compression in normal and stretching in spanwise direction), is illustrated and discussed. © 2014 AIP Publishing LLC.


Van Oorschot K.,TU Eindhoven
Journal of Product Innovation Management | Year: 2010

Stage-Gates is a widely used product innovation process for managing portfolios of new product development projects. The process enables companies to minimize uncertainty by helping them identify, at various stages or gates, the "wrong" projects before too many resources are invested. The present research looks at the question of whether using Stage-Gates may lead companies also to jettison some "right" projects (i.e., those that could have become successful). The specific context of this research involves projects characterized by asymmetrical uncertainty: where workload is usually underestimated at the start (because new development tasks or new customer requirements are discovered after the project begins) and where the development team's size is often overestimated (because assembling a productive team takes more time than anticipated). Software development projects are a perfect example. In the context of an underestimated workload and an understaffed team, the Stage-Gates philosophy of low investment at the start may set off a negative dynamic: low investments in the beginning lead to massive schedule pressure, which increases turnover in an already understaffed team and results in the team missing schedules for the first stage. This delay cascades into the second stage and eventually leads management to conclude that the project is not viable and should be abandoned. However, this paper shows how, with slightly more flexible thinking (i.e., initial Stage-Gates investments that are slightly less lean), some of the ostensibly "wrong" projects can actually become the "right" projects to pursue. Principal conclusions of the analysis are as follows: (1) adhering strictly to the Stage-Gates philosophy may well kill off viable projects and damage the firm's bottom line; (2) slightly relaxing the initial investment constraint can improve the dynamics of project execution; and (3) during a project's first stages, managers should focus more on ramping up their project team than on containing project costs. © 2010 Product Development & Management Association.


Rebrov E.V.,TU Eindhoven
Theoretical Foundations of Chemical Engineering | Year: 2010

Capillary hydrodynamics has three considerable distinctions from macrosystems: first, there is an increase in the ratio of the surface area of the phases to the volume that they occupy; second, a flow is characterized by small Reynolds numbers at which viscous forces predominate over inertial forces; and third, the microroughness and wettability of the wall of the channel exert a considerable influence on the flow pattern. In view of these differences, the correlations used for tubes with a larger diameter cannot be used to calculate the boundaries of the transitions between different flow regimes in microchannels. In the present review, an analysis of published data on a gas-liquid two-phase flow in capillaries of various shapes is given, which makes it possible to systematize the collected body of information. The specific features of the geometry of a mixer and an inlet section, the hydraulic diameter of a capillary, and the surface tension of a liquid exert the strongest influence on the position of the boundaries of two-phase flow regimes. Under conditions of the constant geometry of the mixer, the best agreement in the position of the boundaries of the transitions between different hydrodynamic regimes in capillaries is observed during the construction of maps of the regimes with the use of the Weber numbers for a gas and a liquid as coordinate axes. © 2010 Pleiades Publishing, Ltd.
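
The Weber numbers used as coordinate axes of those flow-regime maps compare inertial to surface-tension forces, We = rho * v**2 * d / sigma, evaluated separately for the gas and the liquid phase. A minimal sketch (the function name and the numbers are ours):

    def weber_number(density, velocity, diameter, surface_tension):
        """Weber number We = rho * v**2 * d / sigma, the ratio of inertial to
        surface-tension forces; in two-phase capillary flow maps it is usually
        evaluated with the superficial velocity of each phase."""
        return density * velocity ** 2 * diameter / surface_tension

    # Water at 0.5 m/s in a 1 mm capillary (sigma = 0.072 N/m): We ~ 3.5
    print(round(weber_number(1000.0, 0.5, 1e-3, 0.072), 1))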


Van Der Aalst W.,TU Eindhoven
Communications of the ACM | Year: 2012

Recent breakthroughs in process mining research make it possible to discover, analyze, and improve business processes based on event data. Activities executed by people, machines, and software leave trails in so-called event logs. What events (such as entering a customer order into SAP, a passenger checking in for a flight, a doctor changing a patient's dosage, or a planning agency rejecting a building permit) have in common is that all are recorded by information systems. Data volume and storage capacity have grown spectacularly over the past decade, while the digital universe and the physical universe are increasingly aligned. Business processes thus ought to be managed, supported, and improved based on event data rather than on subjective opinions or obsolete experience. Application of process mining in hundreds of organizations worldwide shows that managers and users alike tend to overestimate their knowledge of their own processes. © 2012 ACM.


Su R.,Nanyang Technological University | Woeginger G.,TU Eindhoven
Automatica | Year: 2011

In performance evaluation or supervisory control, we often encounter problems of determining the maximum or minimum string execution time for a finite language when estimating the worst-case or best-case performance. It has been shown in the literature that the time complexity for computing the maximum string execution time for a finite language is polynomial with respect to the size of an automaton recognizer of that language and the dimension of the corresponding resource matrices. In this paper we provide a more efficient algorithm to compute the maximum string execution time. Then we show that it is NP-complete to determine the minimum string execution time. © 2011 Elsevier Ltd. All rights reserved.
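
The execution time of a single string under the heaps-of-pieces interpretation can be computed with a simple resource "contour", as sketched below (names are ours); the contribution of the paper concerns maximizing this quantity efficiently over a finite language, and showing that minimizing it is NP-complete.

    def makespan(string, pieces, resources):
        """Execution time of one string under the heaps-of-pieces model.

        `pieces` maps an event to (resources_used, duration).  Each event's
        piece rests on the current upper contour of the resources it occupies;
        the string's execution time is the final contour height.
        """
        contour = {r: 0.0 for r in resources}
        for event in string:
            used, duration = pieces[event]
            top = max(contour[r] for r in used) + duration
            for r in used:
                contour[r] = top
        return max(contour.values())

    # Two machines: 'a' uses m1 for 2, 'b' uses m2 for 3, 'c' uses both for 1.
    # In "abc" the pieces a and b overlap in time, and c starts at t = 3.
    pieces = {"a": ({"m1"}, 2.0), "b": ({"m2"}, 3.0), "c": ({"m1", "m2"}, 1.0)}
    print(makespan("abc", pieces, {"m1", "m2"}))   # 4.0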


Jorissen A.,TU Eindhoven | Fragiacomo M.,University of Sassari
Engineering Structures | Year: 2011

The paper discusses the implications of ductility in design of timber structures under static and dynamic loading including earthquakes. Timber is a material inherently brittle in bending and in tension, unless reinforced adequately. However connections between timber members can exhibit significant ductility, if designed and detailed properly to avoid splitting. Hence it is possible to construct statically indeterminate systems made of brittle timber members connected with ductile connections that behave in a ductile fashion. The brittle members, however, must be designed for the overstrength related to the strength of the ductile connections to ensure the ductile failure mechanism will take place before the failure of the brittle members. The overstrength ratio, defined as the ratio between the 95th percentile of the connection strength distribution and the analytical prediction of the characteristic connection strength, was calculated for multiple doweled connections loaded parallel to the grain based on the results of an extensive experimental programme carried out on timber splice connections with 10.65 and 11.75 mm diameter steel dowels grade 4.6. In this particular case the overstrength ratio was found to range from 1.2 to 2.1, and a value of 1.6 is recommended for ductile design. The paper illustrates the use of the elastic-perfectly plastic analysis with ductility control for a simple statically indeterminate structure and compares this approach to the fully non-linear analysis and with the more traditional linear elastic analysis. It is highlighted that plastic design should not be used for timber bridges since fatigue may lead to significant damage accumulation in the connections if plastic deformations have developed. The paper also shows that the current relative definitions of ductility, as a ratio between an ultimate deformation/displacement and the corresponding yield quantity, should be replaced by absolute definitions of ductility, for example the ultimate deformation/displacement, as the latter ones better represent the ductile structural behavior. © 2011 Elsevier Ltd.
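
The overstrength ratio defined above is simply the 95th percentile of the measured connection strengths divided by the analytically predicted characteristic strength. A minimal sketch with hypothetical test data (the numbers are illustrative, not from the test programme):

    import numpy as np

    def overstrength_ratio(test_strengths, characteristic_strength):
        """Overstrength ratio: 95th percentile of the measured connection
        strength distribution over the analytical characteristic strength."""
        return np.percentile(test_strengths, 95) / characteristic_strength

    # Hypothetical dowelled-connection test results (kN) against an analytical
    # characteristic prediction of 40 kN give a ratio inside the reported
    # 1.2-2.1 range.
    tests = [52.0, 58.0, 61.0, 55.0, 66.0, 59.0, 63.0, 57.0]
    print(round(overstrength_ratio(tests, 40.0), 2))   # 1.62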


Voss T.,TU Eindhoven | Scherpen J.M.A.,University of Groningen
Automatica | Year: 2011

In this paper we show how to perform stabilization and shape control for a finite dimensional model that recasts the dynamics of an inflatable space reflector in port-Hamiltonian (pH) form. We show how to derive a decentralized passivity-based controller which can be used to stabilize a 1D piezoelectric Timoshenko beam around a desired shape. Furthermore, we present simulation results obtained for the proposed decentralized control approach. © 2011 Elsevier Ltd. All rights reserved.


Aalst W.V.D.,TU Eindhoven | Aalst W.V.D.,Queensland University of Technology
IEEE Transactions on Services Computing | Year: 2013

Web services are an emerging technology to implement and integrate business processes within and across enterprises. Service orientation can be used to decompose complex systems into loosely coupled software components that may run remotely. However, the distributed nature of services complicates the design and analysis of service-oriented systems that support end-to-end business processes. Fortunately, services leave trails in so-called event logs and recent breakthroughs in process mining research make it possible to discover, analyze, and improve business processes based on such logs. Recently, the task force on process mining released the process mining manifesto. This manifesto is supported by 53 organizations, and 77 process mining experts contributed to it. The active participation from end-users, tool vendors, consultants, analysts, and researchers illustrates the growing significance of process mining as a bridge between data mining and business process modeling. In this paper, we focus on the opportunities and challenges for service mining, i.e., applying process mining techniques to services. We discuss the guiding principles and challenges listed in the process mining manifesto and also highlight challenges specific to service-oriented systems. © 2008-2012 IEEE.


Van Der Aalst W.M.P.,TU Eindhoven
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2012

The Business Process Management (BPM) conference series celebrates its tenth anniversary. This is a nice opportunity to reflect on a decade of BPM research. This paper describes the history of the conference series, enumerates twenty typical BPM use cases, and identifies six key BPM concerns: process modeling languages, process enactment infrastructures, process model analysis, process mining, process flexibility, and process reuse. Although BPM matured as a research discipline, there are still various important open problems. Moreover, despite the broad interest in BPM, the adoption of state-of-the-art results by software vendors, consultants, and end-users leaves much to be desired. Hence, the BPM discipline should not shy away from the key challenges and set clear targets for the next decade. © 2012 Springer-Verlag.


This article examines whether a mixture of virtual and real-life interaction - in contrast to purely virtual interaction - among some members of online communities for teachers is beneficial for all teachers' professional development in the whole community. Earlier research indicated that blended communities tend to face fewer trust and free rider problems. This study continues this stream of research by examining whether blended communities provide more practical benefits to teachers, both in terms of perceived improvements to their teaching capabilities as well as for their substantial understanding of their core topic. In addition, it is tested whether blended communities provide more information about vacancies, as teachers' mobility is regarded as too low in the EU. The analysis uses survey data from 26 online communities for secondary education teachers in The Netherlands. The communities are part of a virtual organization that hosts communities for teachers' professional development. The findings indeed show beneficial effects of blended communities. Moreover, the results modify earlier claims about the integration of online communication with offline interaction by showing that complete integration is unnecessary. This facilitates a scaling up of the use of online communities for teachers' professional development. © 2012 Elsevier Ltd. All rights reserved.


Luiten J.,TU Eindhoven
Europhysics News | Year: 2015

55 years after Richard Feynman's famous Caltech lecture 'There is plenty of room at the bottom' [1], heralding the age of nano science and technology, many of the possibilities he envisaged have come true: Using electron microscopy it is nowadays possible to resolve and even identify individual atoms; STM and AFM not only provide us with similar spatial resolution on surfaces, but also allow dragging individual atoms around in a controlled way; X-ray diffraction has revealed the complicated structures of thousands of proteins, giving invaluable insight into the machinery of life. © European Physical Society, EDP Sciences, 2015.


Liao F.,TU Eindhoven
Transportation Research Part C: Emerging Technologies | Year: 2016

Multi-state supernetworks have been advanced recently for modeling individual activity-travel scheduling decisions. The main advantage is that multi-dimensional choice facets are modeled simultaneously within an integral framework, supporting systematic assessments of a large spectrum of policies and emerging modalities. However, duration choice of activities and home-stay has not been incorporated in this formalism yet. This study models duration choice in the state-of-the-art multi-state supernetworks. An activity link with flexible duration is transformed into a time-expanded bipartite network; a home location is transformed into multiple time-expanded locations. Along with these extensions, multi-state supernetworks can also be coherently expanded in space-time. The derived properties are that any path through a space-time supernetwork still represents a consistent activity-travel pattern, duration choices are explicitly associated with activity timing, duration and chaining, and home-based tours are generated endogenously. A forward recursive formulation is proposed to find the optimal patterns with the optimal worst-case run-time complexity. Consequently, the trade-off between travel and time allocation to activities and home-stay can be systematically captured. © 2016 Elsevier Ltd.


van der Meijden C.M.,Energy Research Center of the Netherlands | Veringa H.J.,TU Eindhoven | Rabou L.P.L.M.,Energy Research Center of the Netherlands
Biomass and Bioenergy | Year: 2010

The production of Synthetic Natural Gas from biomass (Bio-SNG) by gasification and upgrading of the gas is an attractive option to reduce CO2 emissions and replace declining fossil natural gas reserves. Production of energy from biomass is approximately CO2 neutral. Production of Bio-SNG can even be CO2 negative, since in the final upgrading step, part of the biomass carbon is removed as CO2, which can be stored. The use of biomass for CO2 reduction will increase the biomass demand and therefore will increase the price of biomass. Consequently, a high overall efficiency is a prerequisite for any biomass conversion process. Various biomass gasification technologies are suitable to produce SNG. The present article contains an analysis of the Bio-SNG process efficiency that can be obtained using three different gasification technologies and associated gas cleaning and methanation equipment. These technologies are: 1) Entrained Flow, 2) Circulating Fluidized Bed and 3) Allothermal or Indirect gasification. The aim of this work is to identify the gasification route with the highest process efficiency from biomass to SNG and to quantify the differences in overall efficiency. Aspen Plus® was used as modeling tool. The heat and mass balances are based on experimental data from literature and our own experience. Overall efficiency to SNG is highest for Allothermal gasification. The net overall efficiencies on LHV basis, including electricity consumption and pre-treatment but excluding transport of biomass are 54% for Entrained Flow, 58% for CFB and 67% for Allothermal gasification. Because of the significantly higher efficiency to SNG for the route via Allothermal gasification, ECN is working on the further development of Allothermal gasification. ECN has built and tested a 30 kWth lab scale gasifier connected to a gas cleaning test rig and methanation unit and presently is building a 0.8 MWth pilot plant, called Milena, which will be connected to the existing pilot scale gas cleaning. © 2009 Elsevier Ltd. All rights reserved.


Pirruccio G.,FOM Institute for Atomic and Molecular Physics | Martin Moreno L.,University of Zaragoza | Lozano G.,FOM Institute for Atomic and Molecular Physics | Gomez Rivas J.,TU Eindhoven
ACS Nano | Year: 2013

We experimentally demonstrate a broadband enhancement of the light absorption in graphene over the whole visible spectrum. This enhanced absorption is obtained in a multilayer structure by using an Attenuated Total Reflectance (ATR) configuration and it is explained in terms of coherent absorption arising from interference and dissipation. The interference mechanism leading to the phenomenon of coherent absorption allows for its precise control by varying the refractive index and/or thickness of the medium surrounding the graphene. © 2013 American Chemical Society.


Ghiami Y.,TU Eindhoven | Williams T.,University of Hull
International Journal of Production Economics | Year: 2015

In a production-inventory system, the manufacturer produces items at a finite rate R, dispatches order quantities to customers at specific intervals, and stores the excess inventory for subsequent deliveries. Each inventory cycle of the manufacturer can therefore be divided into two phases: a production period, and a period in which the manufacturer does not produce and draws on the inventory in stock. One of the challenges in these models is how to obtain the inventory level of the supplier when there is deterioration. The existing literature on multi-echelon systems (including models with a single buyer or multiple buyers) analyses the deterioration/inventory cost of these echelons under the assumption of a large surplus in production capacity. Under that assumption it seems acceptable to drop the part of the production period devoted to producing the first batch(es) for the buyer(s) at the beginning of each production period. In this paper we develop a single-manufacturer, multi-buyer model for a deteriorating item with a finite production rate. We also relax the assumption on the production capacity and find the average inventory of the supplier. It is shown that when the production rate is not high, the existing models may not be sufficiently accurate. It is also illustrated that these models are more applicable to inventory systems (rather than production-inventory systems), as they result in fairly accurate solutions when the manufacturer has much higher production capacity compared to the demand rate. Also, a sensitivity analysis is conducted to show how the model reacts to changes in parameters. © 2014 Elsevier B.V. All rights reserved.


Spahn A.,TU Eindhoven
Science and Engineering Ethics | Year: 2012

The paper develops ethical guidelines for the development and usage of persuasive technologies (PT) that can be derived from applying discourse ethics to this type of technology. The application of discourse ethics is of particular interest for PT, since 'persuasion' refers to an act of communication that might be interpreted as occupying a middle ground between 'manipulation' and 'convincing'. One can distinguish two elements of discourse ethics that prove fruitful when applied to PT: the analysis of the inherent normativity of acts of communication ('speech acts') and the Habermasian distinction between 'communicative' and 'strategic rationality' and their broader societal interpretation. This essay investigates what consequences can be drawn if one applies these two elements of discourse ethics to PT. © 2011 The Author(s).


Van Der Aalst W.M.P.,TU Eindhoven | Van Der Aalst W.M.P.,National Research University Higher School of Economics
Distributed and Parallel Databases | Year: 2013

The practical relevance of process mining is increasing as more and more event data become available. Process mining techniques aim to discover, monitor and improve real processes by extracting knowledge from event logs. The two most prominent process mining tasks are: (i) process discovery: learning a process model from example behavior recorded in an event log, and (ii) conformance checking: diagnosing and quantifying discrepancies between observed behavior and modeled behavior. The increasing volume of event data provides both opportunities and challenges for process mining. Existing process mining techniques have problems dealing with large event logs referring to many different activities. Therefore, we propose a generic approach to decompose process mining problems. The decomposition approach is generic and can be combined with different existing process discovery and conformance checking techniques. It is possible to split computationally challenging process mining problems into many smaller problems that can be analyzed easily and whose results can be combined into solutions for the original problems. © 2013 Springer Science+Business Media New York.
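As a rough illustration of the decomposition idea (with assumed data structures, not the paper's formal construction), an event log can be projected onto each cluster of activities and each projected log analyzed separately:

```python
def project_log(log, activity_cluster):
    """Project every trace (a list of activity names) onto one cluster of
    activities; traces that become empty are dropped."""
    cluster = set(activity_cluster)
    projected = [[a for a in trace if a in cluster] for trace in log]
    return [t for t in projected if t]

log = [["a", "b", "c", "d"], ["a", "c", "b", "d"]]
clusters = [["a", "b"], ["c", "d"]]
sublogs = [project_log(log, c) for c in clusters]
# each (much smaller) sublog can be mined or checked independently, and the
# partial results combined into a diagnosis or model for the original log
```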


Brunenberg E.J.,TU Eindhoven
Journal of neurosurgery | Year: 2011

The authors reviewed 70 publications on MR imaging-based targeting techniques for identifying the subthalamic nucleus (STN) for deep brain stimulation in patients with Parkinson disease. Of these 70 publications, 33 presented quantitatively validated results. There is still no consensus on which targeting technique to use for surgery planning; methods vary greatly between centers. Some groups apply indirect methods involving anatomical landmarks, or atlases incorporating anatomical or functional data. Others perform direct visualization on MR imaging, using T2-weighted spin echo or inversion recovery protocols. The combined studies do not offer a straightforward conclusion on the best targeting protocol. Indirect methods are not patient specific, leading to varying results between cases. On the other hand, direct targeting on MR imaging suffers from lack of contrast within the subthalamic region, resulting in a poor delineation of the STN. These deficiencies result in a need for intraoperative adaptation of the original target based on test stimulation with or without microelectrode recording. It is expected that future advances in MR imaging technology will lead to improvements in direct targeting. The use of new MR imaging modalities such as diffusion MR imaging might even lead to the specific identification of the different functional parts of the STN, such as the dorsolateral sensorimotor part, the target for deep brain stimulation.


Tissue engineering is an innovative method to restore cardiovascular tissue function by implanting either an in vitro cultured tissue or a degradable, mechanically functional scaffold that gradually transforms into a living neo-tissue by recruiting tissue forming cells at the site of implantation. Circulating endothelial colony forming cells (ECFCs) are capable of differentiating into endothelial cells as well as a mesenchymal ECM-producing phenotype, undergoing Endothelial-to-Mesenchymal-transition (EndoMT). We investigated the potential of ECFCs to produce and organize ECM under the influence of static and cyclic mechanical strain, as well as stimulation with transforming growth factor β1 (TGFβ1). A fibrin-based 3D tissue model was used to simulate neo-tissue formation. Extracellular matrix organization was monitored using confocal laser-scanning microscopy. ECFCs produced collagen and also elastin, but did not form an organized matrix, except when cultured with TGFβ1 under static strain. Here, collagen was aligned more parallel to the strain direction, similar to Human Vena Saphena Cell-seeded controls. Priming ECFC with TGFβ1 before exposing them to strain led to more homogenous matrix production. Biochemical and mechanical cues can induce extracellular matrix formation by ECFCs in tissue models that mimic early tissue formation. Our findings suggest that priming with bioactives may be required to optimize neo-tissue development with ECFCs and has important consequences for the timing of stimuli applied to scaffold designs for both in vitro and in situ cardiovascular tissue engineering. The results obtained with ECFCs differ from those obtained with other cell sources, such as vena saphena-derived myofibroblasts, underlining the need for experimental models like ours to test novel cell sources for cardiovascular tissue engineering.


Van Den Brand M.,TU Eindhoven
Science of Computer Programming | Year: 2015

Compilers are one of the cornerstones of Computer Science and, in particular, of Software Development. Compiler research has a long tradition and is very mature. Nevertheless, there is hardly any standardization with respect to formalisms and tools for developing compilers. Comparing the formalisms and tools used to describe compilers for languages is not a simple task. In 2011 the Language Descriptions Tools and Applications community created a challenge where formalisms and tools were to be used in constructing a compiler for the Oberon-0 language. This special issue presents the tool challenge, the Oberon-0 language, various solutions to the challenge, and some conclusions. The aim of the challenge was to develop the same compiler using different formalisms to learn about these approaches in a concrete setting. © 2015 Published by Elsevier B.V.


Zeinalipour-Yazdi C.D.,CySilicoTech Research Ltd | Van Santen R.A.,TU Eindhoven
Journal of Physical Chemistry C | Year: 2012

Metal-adsorbate nanoclusters serve as useful models to study elementary catalytic and gas-sensor processes. However, little is known about their structural, energetic, and spectroscopic properties as a function of adsorbate surface coverage and structure. Here, we perform a systematic study of the adsorption of carbon monoxide (CO) on a tetra-atomic rhodium cluster to understand the coverage- and structure-dependent adsorption energy of CO as a function of CO coverage and to provide deeper insight into the metal-carbonyl bond on metal nanoclusters. The coverage-dependent adsorption energy trends are rationalized with the use of a theoretical model, molecular orbital energy diagrams, electron density difference plots, molecular electrostatic potential plots, and simulated infrared spectra. Our model demonstrates that a critical parameter that determines the coverage-dependent energetics of the adsorption of CO at low coverages is the polarization of metal-metal π-bonds during the effective charge transfer, occurring from the metal cluster to the 2π2px and 2π2py states of CO, which enhances the adsorption of CO vertical to the metal-metal bond. This configuration-specific effect explains the negative coverage-dependent adsorption energy trend observed at low coverages on metal nanoclusters. © 2012 American Chemical Society.


Bos E.J.C.,TU Eindhoven | Bos E.J.C.,Xpress Precision Engineering B.V.
Precision Engineering | Year: 2011

This paper discusses the aspects that influence the interaction between a probe tip and a work piece during tactile probing in a coordinate measuring machine (CMM). Measurement instruments are sensitive to more than one physical quantity. When measuring the topography of a work piece, the measurement result will therefore always be influenced by the environment and (local) variations in the work piece itself. A mechanical probe will respond to both topography and changes in the mechanical properties of the surface, e.g. the Young's modulus and hardness. An optical probe is influenced by the reflectivity and optical constants of the work piece, a scanning tunneling microscope (STM) responds to the electrical properties of the work piece and so on (Franks, 1991 [1]). The trend of component miniaturization results in a need for 3-dimensional characterization of micrometer sized features to nanometer accuracy. As the scale of the measurement decreases, the problems associated with the surface-probe interactions become increasingly apparent (Leach et al., 2001 [2]). The aspects of the interaction that are discussed include the deformation of probe tip and work piece during contact, surface forces during single point probing and scanning, dynamic excitation of the probe, synchronization errors, microfriction, tip rotations, finite stiffness effects, mechanical filtering, anisotropic stiffness, thermal effects and probe repeatability. These aspects are investigated, through modeling and experimental verification of the effects, using the Gannen XP 3D tactile probing system developed by Xpress Precision Engineering. The Gannen XP suspension consists of three slender rods with integrated piezo resistive strain gauges. The deformation of the slender rods is measured using the strain gauges and is a measure of the deflection of the probe tip. It is shown that the standard deviation in repeatability is 2 nm in any direction and over the whole measurement range of the probe. Finally, this probe has an isotropic stiffness of 480 N/m and a moving mass below 25 mg. © 2010 Elsevier Inc. All rights reserved.


Demerouti E.,TU Eindhoven | Bakker A.B.,Erasmus University Rotterdam | Leiter M.,Acadia University
Journal of Occupational Health Psychology | Year: 2014

The present study aims to explain why research thus far has found only low to moderate associations between burnout and performance. We argue that employees use adaptive strategies that help them to maintain their performance (i.e., task performance, adaptivity to change) at acceptable levels despite experiencing burnout (i.e., exhaustion, disengagement). We focus on the strategies included in the selective optimization with compensation model. Using a sample of 294 employees and their supervisors, we found that compensation is the most successful strategy in buffering the negative associations of disengagement with supervisor-rated task performance and both disengagement and exhaustion with supervisor-rated adaptivity to change. In contrast, selection exacerbates the negative relationship of exhaustion with supervisor-rated adaptivity to change. In total, 42% of the hypothesized interactions proved to be significant. Our study uncovers successful and unsuccessful strategies that people use to deal with their burnout symptoms in order to achieve satisfactory job performance. © 2014 American Psychological Association.


Guillemin F.,Orange S.A. | van Leeuwaarden J.S.H.,TU Eindhoven
Queueing Systems | Year: 2011

This paper presents a novel technique for deriving asymptotic expressions for the occurrence of rare events for a random walk in the quarter plane. In particular, we study a tandem queue with Poisson arrivals, exponential service times and coupled processors. The service rate for one queue is only a fraction of the global service rate when the other queue is non-empty; when one queue is empty, the other queue has full service rate. The bivariate generating function of the queue lengths gives rise to a functional equation. In order to derive asymptotic expressions for large queue lengths, we combine the kernel method for functional equations with boundary value problems and singularity analysis. © 2010 The Author(s).
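For readers unfamiliar with the kernel method, functional equations for quarter-plane random walks typically take the following generic shape (a schematic rendering for orientation, not the specific equation derived in the paper):

\[
K(x,y)\,F(x,y) = a(x,y)\,F(x,0) + b(x,y)\,F(0,y) + c(x,y)\,F(0,0),
\]

where F is the bivariate generating function of the queue lengths and the kernel K is determined by the transition rates; asymptotics for large queue lengths are then read off from the singularities of the boundary functions.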


Leijten A.J.M.,TU Eindhoven
Engineering Structures | Year: 2011

In statically indeterminate structures, connections play a vital role in the moment distribution. Demonstrated here is a method to evaluate the conditions, taking full advantage of the benefits offered by the indeterminate nature of the structures, and using the well-established, graphical beam-line method. This method shows how important the immediate load take-up, the stiffness, and the moment capacity of the connection are, and how they affect the structural behaviour. The examples considered here use both the traditional non-reinforced dowel-type fastener connections and also timber connections reinforced with steel plates. They show that the minimum rotation requirements to achieve an effective structure are satisfied easily in contrast to requirements on stiffness. In this respect, timber connections with local reinforcement glued at the interface of the connection area offer more prospects. © 2011 Elsevier Ltd.


Van Der Bij H.,University of Groningen | Van Weele A.,TU Eindhoven
Journal of Product Innovation Management | Year: 2013

As today's firms increasingly outsource their noncore activities, they not only have to manage their own resources and capabilities, but they are ever more dependent on the resources and capabilities of supplying firms to respond to customer needs. This paper explicitly examines whether and how firms and suppliers, who are both oriented to the same customer market, enable innovativeness in their supply chains and deliver value to their joint customer. We will call this customer of the focal firm the "end user." The authors take a resource-dependence perspective to hypothesize how suppliers' end-user orientation and innovativeness influence downstream activities at the focal firm and end-user satisfaction. The resource dependence theory looks typically beyond the boundaries of an individual firm for explaining firm success: firms need to satisfy customer demands to survive and depend on other parties such as their suppliers to achieve customer satisfaction. Accordingly, the research design focuses on three parties along a supply chain: the focal firm, a supplier, and a customer of the focal firm (end user). The results drawn from a survey of 88 matched chains suggest the following. First, customer satisfaction is driven by focal firms' innovativeness. A focal firm's innovativeness depends, on the one hand, on a focal firm's market orientation and, on the other hand, on its suppliers' innovativeness. Second, no relationship could be established between a focal firm's market orientation and a supplier's end-user orientation. Market orientation typically has within-firm effects, while innovativeness has impact beyond the boundaries of the firm. These results suggest that firms create value for their customer through internal market orientation efforts and external suppliers' innovativeness. © 2013 Product Development & Management Association.


Lakens D.,TU Eindhoven
IEEE Transactions on Affective Computing | Year: 2013

This study demonstrates the feasibility of measuring heart rate (HR) differences associated with emotional states such as anger and happiness with a smartphone. Novice experimenters measured higher HRs during relived anger and happiness (replicating findings in the literature) outside a laboratory environment with a smartphone app that relied on photoplethysmography. © 2010-2012 IEEE.


Lakens D.,TU Eindhoven | Semin G.R.,University Utrecht | Semin G.R.,Koc University | Foroni F.,University Utrecht
Journal of Experimental Psychology: General | Year: 2012

Light and dark are used pervasively to represent positive and negative concepts. Recent studies suggest that black and white stimuli are automatically associated with negativity and positivity. However, structural factors in experimental designs, such as the shared opposition in the valence (good vs. bad) and brightness (light vs. dark) dimensions might play an important role in the valence-brightness association. In 6 experiments, we show that while black ideographs are consistently judged to represent negative words, white ideographs represent positivity only when the negativity of black is coactivated. The positivity of white emerged only when brightness and valence were manipulated within participants (but not between participants) or when the negativity of black was perceptually activated by presenting positive and white stimuli against a black (vs. gray) background. These findings add to an emerging literature on how structural overlap between dimensions creates associations and highlight the inherently contextualized construction of meaning structures. © 2011 American Psychological Association.


Van Der Aalst W.M.P.,TU Eindhoven
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2012

Process discovery - discovering a process model from example behavior recorded in an event log - is one of the most challenging tasks in process mining. Discovery approaches need to deal with competing quality criteria such as fitness, simplicity, precision, and generalization. Moreover, event logs may contain low frequent behavior and tend to be far from complete (i.e., typically only a fraction of the possible behavior is recorded). At the same time, models need to have formal semantics in order to reason about their quality. These complications explain why dozens of process discovery approaches have been proposed in recent years. Most of these approaches are time-consuming and/or produce poor quality models. In fact, simply checking the quality of a model is already computationally challenging. This paper shows that process mining problems can be decomposed into a set of smaller problems after determining the so-called causal structure. Given a causal structure, we partition the activities over a collection of passages. Conformance checking and discovery can be done per passage. The decomposition of the process mining problems has two advantages. First of all, the problem can be distributed over a network of computers. Second, due to the exponential nature of most process mining algorithms, decomposition can significantly reduce computation time (even on a single computer). As a result, conformance checking and process discovery can be done much more efficiently. © 2012 Springer-Verlag.


Van Den Elzen S.,SynerScope | Van Wijk J.J.,TU Eindhoven
Computer Graphics Forum | Year: 2013

We present a novel visual exploration method based on small multiples and large singles for effective and efficient data analysis. Users are enabled to explore the state space by offering multiple alternatives from the current state. Users can then select the alternative of choice and continue the analysis. Furthermore, the intermediate steps in the exploration process are preserved and can be revisited and adapted using an intuitive navigation mechanism based on the well-known undo-redo stack and filmstrip metaphor. As proof of concept the exploration method is implemented in a prototype. The effectiveness of the exploration method is tested using a formal user study comparing four different interaction methods. When using small multiples as the data exploration method, users need fewer steps in answering questions and also explore a significantly larger part of the state space in the same amount of time, providing them with a broader perspective on the data, hence lowering the chance of missing important features. Also, users prefer visual exploration with small multiples over non-small-multiple variants. © 2013 The Author(s) Computer Graphics Forum © 2013 The Eurographics Association and Blackwell Publishing Ltd.


Van Wijk J.J.,TU Eindhoven
Computer | Year: 2013

Because visual analytics has a broad scope and aims at knowledge discovery, evaluating the methods used in this field is challenging. Successful solutions are often found through trial and error, with solid guidelines and findings still lagging. The Web Extra document contains links with further information on visual analytics challenges and repositories. © 2013 IEEE.


Katzav J.,TU Eindhoven
Studies in History and Philosophy of Science Part B - Studies in History and Philosophy of Modern Physics | Year: 2013

I examine, from Mayo's severe testing perspective, the case found in the Intergovernmental Panel on Climate Change fourth report (IPCC-AR4) for the claim (OUR FAULT) that increases in anthropogenic greenhouse gas concentrations caused most of the post-1950 global warming. My examination begins to provide an alternative to standard, probabilistic assessments of OUR FAULT. It also brings out some of the limitations of variety-of-evidence considerations in assessing this and other hypotheses about the causes of climate change, and illuminates the epistemology of optimal fingerprinting studies. Finally, it shows that some features of Mayo's perspective should be kept in whatever approach is preferred for assessing hypotheses about the causes of climate change. © 2013 Elsevier Ltd.


Eling K.,TU Eindhoven
Journal of Product Innovation Management | Year: 2013

Research on reducing new product development (NPD) cycle time has shown that firms tend to adopt different cycle time reduction mechanisms for different process stages. However, the vast majority of previous studies investigating the relationship between new product performance and NPD cycle time have adopted a monolithic process perspective rather than looking at cycle time for the distinct stages of the NPD process (i.e., fuzzy front end, development, and commercialization). As a result, little is known about the specific effect of the cycle times of the different stages on new product performance or how they interact to influence new product performance. This study uses a stage-wise approach to NPD cycle time to test the main and interacting effects of fuzzy front end, development, and commercialization cycle times on new product performance using objective data for 399 NPD projects developed following a Stage-Gate® type of process in one firm. The results reveal that at least in this firm, new product performance only increases if all three stages of the NPD process are consistently accelerated. This finding, combined with the previous research showing that firms use different mechanisms to accelerate different stages of the process, emphasizes the need to conduct performance effect studies of NPD cycle time at the stage level rather than at the monolithic process level. © 2013 Product Development & Management Association.


Santiago J.,University of Granada | Lakens D.,TU Eindhoven
Acta Psychologica | Year: 2015

Conceptual congruency effects have been interpreted as evidence for the idea that the representations of abstract conceptual dimensions (e.g., power, affective valence, time, number, importance) rest on more concrete dimensions (e.g., space, brightness, weight). However, an alternative theoretical explanation based on the notion of polarity correspondence has recently received empirical support in the domains of valence and morality, which are related to vertical space (e.g., good things are up). In the present study we provide empirical arguments against the applicability of the polarity correspondence account to congruency effects in two conceptual domains related to lateral space: number and time. Following earlier research, we varied the polarity of the response dimension (left-right) by manipulating keyboard eccentricity. In a first experiment we successfully replicated the congruency effect between vertical and lateral space and its interaction with response eccentricity. We then examined whether this modulation of a concrete-concrete congruency effect can be extended to two types of concrete-abstract effects, those between left-right space and number (in both parity and magnitude judgment tasks), and temporal reference. In all three tasks response eccentricity failed to modulate the congruency effects. We conclude that polarity correspondence does not provide an adequate explanation of conceptual congruency effects in the domains of number and time. © 2014 Elsevier B.V.


Wang X.,Harvard University | Cuny G.D.,Harvard University | Cuny G.D.,University of Houston | Noel T.,TU Eindhoven
Angewandte Chemie - International Edition | Year: 2013

Visible advance: A mild, one-pot Stadler-Ziegler process for C-S bond formation has been developed. The method employs the photoredox catalyst Ru(bpy)3Cl2·6H2O irradiated with visible light. A variety of aryl-alkyl and diaryl sulfides were prepared from readily available arylamines and aryl/alkylthiols in good yields. The use of a photo microreactor led to a significant improvement with respect to safety and efficiency. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.


Kuang Y.,University Utrecht | Vece M.D.,University Utrecht | Rath J.K.,University Utrecht | Dijk L.V.,University Utrecht | And 2 more authors.
Reports on Progress in Physics | Year: 2013

In solar cell technology, the current trend is to thin down the active absorber layer. The main advantage of a thinner absorber is primarily the reduced consumption of material and energy during production. For thin film silicon (Si) technology, thinning down the absorber layer is of particular interest since both the device throughput of vacuum deposition systems and the stability of the devices are significantly enhanced. These features lead to lower cost per installed watt peak for solar cells, provided that the (stabilized) efficiency is the same as for thicker devices. However, merely thinning down inevitably leads to a reduced light absorption. Therefore, advanced light trapping schemes are crucial to increase the light path length. The use of elongated nanostructures is a promising method for advanced light trapping. The enhanced optical performance originates from orthogonalization of the light's travel path with respect to the direction of carrier collection due to the radial junction, an improved anti-reflection effect thanks to the three-dimensional geometric configuration and the multiple scattering between individual nanostructures. These advantages potentially allow for high efficiency at a significantly reduced quantity, and even a reduced quality, of the semiconductor material. In this article, several types of elongated nanostructures with the high potential to improve the device performance are reviewed. First, we briefly introduce conventional solar cells with emphasis on thin film technology, followed by the most commonly used fabrication techniques for creating nanostructures with a high aspect ratio. Subsequently, several representative applications of elongated nanostructures, such as Si nanowires in realistic photovoltaic (PV) devices, are reviewed. Finally, the scientific challenges and an outlook for nanostructured PV devices are presented. © 2013 IOP Publishing Ltd.


Blocken B.,TU Eindhoven | Gualtieri C.,University of Naples Federico II
Environmental Modelling and Software | Year: 2012

Computational Fluid Dynamics (CFD) is increasingly used to study a wide variety of complex Environmental Fluid Mechanics (EFM) processes, such as water flow and turbulent mixing of contaminants in rivers and estuaries and wind flow and air pollution dispersion in urban areas. However, the accuracy and reliability of CFD modeling and the correct use of CFD results can easily be compromised. In 2006, Jakeman et al. set out ten iterative steps of good disciplined model practice to develop purposeful, credible models from data and a priori knowledge, in consort with end-users, with every stage open to critical review and revision (Jakeman et al., 2006). This paper discusses the application of the ten-steps approach to CFD for EFM in three parts. In the first part, the existing best practice guidelines for CFD applications in this area are reviewed and positioned in the ten-steps framework. The second and third part present a retrospective analysis of two case studies in the light of the ten-steps approach: (1) contaminant dispersion due to transverse turbulent mixing in a shallow water flow and (2) coupled urban wind flow and indoor natural ventilation of the Amsterdam ArenA football stadium. It is shown that the existing best practice guidelines for CFD mainly focus on the last steps in the ten-steps framework. The reasons for this focus are outlined and the value of the additional - preceding - steps is discussed. The retrospective analysis of the case studies indicates that the ten-steps approach is very well applicable to CFD for EFM and that it provides a comprehensive framework that encompasses and extends the existing best practice guidelines. © 2012 Elsevier Ltd.


Jurrius R.P.M.J.,TU Eindhoven
Designs, Codes, and Cryptography | Year: 2012

We study the generalized and extended weight enumerator of the q-ary Simplex code and the q-ary first order Reed-Muller code. For our calculations we use that these codes correspond to a projective system containing all the points in a finite projective or affine space. As a result from the geometric method we use for the weight enumeration, we also completely determine the set of supports of subcodes and words in an extension code. © Springer Science+Business Media, LLC 2011.
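For background, the ordinary weight enumerator of the q-ary Simplex code has a particularly simple closed form, since every nonzero codeword has the same weight; this is a standard fact rather than a result of the paper:

\[
W(X,Y) = X^{n} + (q^{k}-1)\,X^{\,n-q^{k-1}}\,Y^{\,q^{k-1}}, \qquad n = \frac{q^{k}-1}{q-1},
\]

where k is the dimension of the code. The generalized and extended weight enumerators determined in the paper refine this polynomial by accounting for subcodes and words in extension codes.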


van der Aalst W.M.P.,TU Eindhoven
Software and Systems Modeling | Year: 2012

There seems to be a never-ending stream of new process modeling notations. Some of these notations are foundational and have been around for decades (e.g., Petri nets). Other notations are vendor specific, incremental, or are only popular for a short while. Discussions on the various competing notations concealed the more important question "What makes a good process model?". Fortunately, large scale experiences with process mining allow us to address this question. Process mining techniques can be used to extract knowledge from event data, discover models, align logs and models, measure conformance, diagnose bottlenecks, and predict future events. Today's processes leave many trails in databases, audit trails, message logs, transaction logs, etc. Therefore, it makes sense to relate these event data to process models independent of their particular notation. Process models discovered based on the actual behavior tend to be very different from the process models made by humans. Moreover, conformance checking techniques often reveal important deviations between models and reality. The lessons that can be learned from process mining shed a new light on process model quality. This paper discusses the role of process models and lists seven problems related to process modeling. Based on our experiences in over 100 process mining projects, we discuss these problems. Moreover, we show that these problems can be addressed by exposing process models and modelers to event data. © 2012 Springer-Verlag.


Tiemessen H.G.H.,IBM | Van Houtum G.J.,TU Eindhoven
International Journal of Production Economics | Year: 2013

We study a system consisting of one repair shop and one stockpoint, where spare parts of multiple critical repairables are kept on stock to serve an installed base of technical systems. Part requests are met from stock if possible, and backordered otherwise. The objective is to minimize aggregate downtime via smart repair job scheduling. We evaluate various relevant dynamic scheduling policies, including two that stem from other application fields. One of them is the myopic allocation rule from the make-to-stock environment. It selects the SKU with the highest expected backorder reduction per invested time unit and has excellent performance on repairable inventory systems. It combines the following three strengths: (i) it selects the SKU with the shortest expected repair time in case of backorders, (ii) it recognizes the benefits of short average repair times even if there are no backorders, and (iii) it takes the stochasticity of the part failure processes into account. We investigate the optimality gaps of the heuristic scheduling rules, compare their performance on a large test bed containing problem instances of real-life size, and illustrate the impact of key problem characteristics on the aggregate downtime. We show that the myopic allocation rule performs well and that it outperforms the other heuristic scheduling rules. © 2012 Elsevier B.V. All rights reserved.
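A minimal sketch of the myopic allocation rule as described in the abstract (the field names below are illustrative assumptions, not taken from the paper): among the repairables waiting at the shop, select the SKU whose repair yields the largest expected backorder reduction per unit of invested repair time.

```python
def myopic_allocation(queue):
    """Pick the next repair job: the SKU with the highest expected
    backorder reduction per invested repair-time unit.

    `queue` is a list of dicts with (assumed) fields:
      expected_backorder_reduction -- expected decrease in backorders if one
                                      part of this SKU is repaired next
      expected_repair_time         -- mean repair time of this SKU
    """
    return max(
        queue,
        key=lambda sku: sku["expected_backorder_reduction"] / sku["expected_repair_time"],
    )

jobs = [
    {"sku": "A", "expected_backorder_reduction": 0.8, "expected_repair_time": 4.0},
    {"sku": "B", "expected_backorder_reduction": 0.5, "expected_repair_time": 1.5},
]
print(myopic_allocation(jobs)["sku"])  # "B": about 0.33 per hour versus 0.20 for "A"
```

The ratio criterion captures the three strengths listed in the abstract: it favors short repairs when backorders exist, rewards short average repair times in general, and works with expectations of the stochastic failure process.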


Van Der Aalst W.M.P.,TU Eindhoven
Proceedings of the 2011 20th IEEE International Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises, WETICE 2011 | Year: 2011

Process mining serves as a bridge between data mining and business process modeling. The goal is to extract process-related knowledge from event data stored in information systems. One of the most challenging process mining tasks is process discovery, i.e., the automatic construction of process models from raw event logs. Today there are dozens of process discovery techniques generating process models using different notations (Petri nets, EPCs, BPMN, heuristic nets, etc.). This paper focuses on the representational bias used by these techniques. We will show that the choice of target model is very important for the discovery process itself. The representational bias should not be driven by the desired graphical representation but by the characteristics of the underlying processes and process discovery techniques. Therefore, we analyze the role of the representational bias in process mining. © 2011 IEEE.


Niesten E.,University Utrecht | Alkemade F.,TU Eindhoven
Renewable and Sustainable Energy Reviews | Year: 2016

Profitable business models for value creation and value capture with smart grid services are pivotal to realize the transition to smart and sustainable electricity grids. In addition to knowledge regarding the technical characteristics of smart grids, we need to know what drives companies and consumers to sell and purchase services in a smart grid. This paper reviews 45 scientific articles on business models for smart grid services and analyses information on value in 434 European and US smart grid pilot projects. Our review observes that the articles and pilots most often discuss three types of smart grid services: vehicle-to-grid and grid-to-vehicle services, demand response services, and services to integrate renewable energy (RE). We offer a classification of business models, value creation and capture for each of these services and for the different actors in the electricity value chain. Although business models have been developed for grid-to-vehicle services and for services that connect RE, knowledge regarding demand response services is restricted to different types of value creation and capture. Our results highlight that business models can be profitable when a new actor in the electricity industry, that is, the aggregator, can collect sufficiently large amounts of load. In addition, our analysis indicates that demand response services or vehicle-to-grid and grid-to-vehicle services will be offered in conjunction with the supply of RE. © 2015 Elsevier Ltd. All rights reserved.


Yuan H.,Leiden University | Khatua S.,Leiden University | Zijlstra P.,TU Eindhoven | Yorulmaz M.,Leiden University | Orrit M.,Leiden University
Angewandte Chemie - International Edition | Year: 2013

Single molecules: Large enhancements of single-molecule fluorescence up to 1100 times by using synthesized gold nanorods are reported (see picture). This high enhancement is achieved by selecting a dye with its absorption and emission close to the surface plasmon resonance of the gold nanorods. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.


Van Der Aalst W.M.P.,TU Eindhoven
Proceedings - 9th IEEE European Conference on Web Services, ECOWS 2011 | Year: 2011

The lion's share of cloud research has focused on performance-related problems. However, cloud computing will also change the way in which business processes are managed and supported, e.g., more and more organizations will be sharing common processes. In the classical setting, where product software is used, different organizations can make ad-hoc customizations to let the system fit their needs. This is undesirable, especially when multiple organizations share a cloud infrastructure. Configurable process models enable the sharing of common processes among different organizations in a controlled manner. This paper discusses challenges and opportunities related to business process configuration. Causal nets (C-nets) are proposed as a new formalism to deal with these challenges, e.g., merging variants into a configurable model is supported by a simple union operator. C-nets also provide a good representational bias for process mining, i.e., process discovery and conformance checking based on event logs. In the context of cloud computing, we focus on the application of C-nets to cross-organizational process mining. © 2011 IEEE.


Melazzi D.,University of Padua | Lancellotti V.,TU Eindhoven
Computer Physics Communications | Year: 2014

We present a full-wave numerical tool, dubbed ADAMANT (Advanced coDe for Anisotropic Media and ANTennas), devised for the analysis and design of radiofrequency antennas which drive the discharge in helicon plasma sources. ADAMANT relies on a set of coupled surface and volume integral equations in which the unknowns are the surface electric current density on the antenna conductors and the volume polarization current within the plasma. The latter can be inhomogeneous and anisotropic whereas the antenna can have arbitrary shape. The set of integral equations is solved numerically through the Method of Moments with sub-sectional surface and volume vector basis functions. This approach allows the accurate evaluation of the current distribution on the antenna and in the plasma as well as the antenna input impedance, a parameter crucial for the design of the feeding and matching network. We report several numerical examples which serve to validate ADAMANT against other well-established numerical approaches as well as experimental data. The numerical accuracy of the computed solution versus the number of basis functions in the plasma is also assessed. Finally, we employ ADAMANT to characterize the antenna of a real-life helicon plasma source. © 2014 Elsevier B.V. All rights reserved.


Duarte J.L.,TU Eindhoven | Lokos J.,Heliox B.V. | Van Horck F.B.M.,Heliox B.V.
IEEE Transactions on Power Electronics | Year: 2013

The simplicity of phase-shift control at fixed switching frequency and 50% duty-cycle operation is fully exploited by the proposed converter topology. The transistor voltages are clamped to only 50% of the dc input, the dc bus capacitive dividers being naturally stabilized. Furthermore, zero-voltage switching for all switches is guaranteed from no-load to full-load conditions, that is to say, from zero to nominal output voltage and from zero to nominal load current. As such, the proposed topology is an excellent candidate for demanding applications as compact battery chargers for electric vehicles. Experimental results obtained from a 400-80-V/0-360-V/2-kW/100-kHz prototype support the theoretical analysis. © 2012 IEEE.


van Weele A.J.,TU Eindhoven | van Raaij E.M.,Erasmus University Rotterdam
Journal of Supply Chain Management | Year: 2014

The Journal of Supply Chain Management (JSCM) is a hallmark in the academic field of operations and supply chain management. During the past 50 years, it has contributed substantially to the recognition and adoption of purchasing and supply management (PSM) as an academic and strategic business domain. Having been invited by the JSCM editors to provide some ideas on the future directions of PSM research, the authors discuss what can be done to further increase both its relevance and rigor. Rigor and relevance in academic research are interconnected. To improve its relevance, the authors argue that future PSM research should better reflect the strategic priorities raised in the contemporary strategic management literature. Next, future PSM research should be much better embedded in a limited number of management theories. Here, stakeholder theory, network theory, the resource-based view of the firm, dynamic capabilities theory, and the relational view could be considered as interesting candidates. Rigor is connected with robustness of academic research designs and projects. To foster its rigor, future PSM research should allow for an increase in the number of replication studies, longitudinal studies, and meta-analytical studies. Future PSM research designs should reflect a careful distinction between informants and respondents and a careful sample selection. When discussing the results of quantitative studies, future PSM research should report on effect sizes and confidence intervals, rather than p-values. Adoption of these ideas would have some important implications for both the academic PSM community and academic journal editors. © 2014 Institute for Supply Management, Inc.


Waltrich G.,Federal University of Santa Catarina | Waltrich G.,TU Eindhoven | Barbi I.,Federal University of Santa Catarina
IEEE Transactions on Industrial Electronics | Year: 2010

In this paper, a modular three-phase multilevel inverter specially suited for electrical drive applications is proposed. Unlike the cascaded H-bridge inverter, this topology is based on power cells connected in cascade using two inverter legs in series. A detailed analysis of the structure and the development of design equations for the load voltage with n levels are carried out using phase-shifted multicarrier pulse-width modulation. Simulations and experimental results for a 15-kW three-phase system, with nine voltage levels, validate the study presented. © 2006 IEEE.
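For orientation, phase-shifted multicarrier PWM compares one sinusoidal reference against several triangular carriers that are mutually shifted by a fraction of the carrier period, one carrier per cascaded cell. The snippet below is a generic textbook-style sketch; the cell count, frequencies, and modulation depth are arbitrary assumptions, not the prototype's parameters.

```python
import numpy as np

def phase_shifted_carriers(n_cells, t, f_carrier):
    """Triangular carriers in [-1, 1], mutually shifted by 1/n_cells of a period."""
    carriers = []
    for k in range(n_cells):
        phase = k / n_cells
        tri = 2.0 * np.abs(2.0 * ((t * f_carrier + phase) % 1.0) - 1.0) - 1.0
        carriers.append(tri)
    return carriers

t = np.linspace(0.0, 0.02, 20000)                  # one 50 Hz fundamental period
reference = 0.8 * np.sin(2.0 * np.pi * 50.0 * t)   # modulation index 0.8 (assumed)
gates = [(reference > c).astype(int) for c in phase_shifted_carriers(4, t, 2000.0)]
# summing the switched cell outputs yields the stepped multilevel load voltage
```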


Van Der Laan W.J.,University of Groningen | Jalba A.C.,TU Eindhoven | Roerdink J.B.T.M.,University of Groningen
IEEE Transactions on Parallel and Distributed Systems | Year: 2011

The Discrete Wavelet Transform (DWT) has a wide range of applications from signal processing to video and image compression. We show that this transform, by means of the lifting scheme, can be performed in a memory and computation-efficient way on modern, programmable GPUs, which can be regarded as massively parallel coprocessors through NVidia's CUDA compute paradigm. The three main hardware architectures for the 2D DWT (row-column, line-based, block-based) are shown to be unsuitable for a CUDA implementation. Our CUDA-specific design can be regarded as a hybrid method between the row-column and block-based methods. We achieve considerable speedups compared to an optimized CPU implementation and earlier non-CUDA-based GPU DWT methods, both for 2D images and 3D volume data. Additionally, memory usage can be reduced significantly compared to previous GPU DWT methods. The method is scalable and the fastest GPU implementation among the methods considered. A performance analysis shows that the results of our CUDA-specific design are in close agreement with our theoretical complexity analysis. © 2011 IEEE.
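To make the lifting idea concrete, here is a one-level 1D Haar lifting transform in plain NumPy; it is a minimal illustration of in-place lifting, not the paper's CUDA kernels or its hybrid row-column/block-based layout.

```python
import numpy as np

def haar_lifting_forward(x):
    """One level of the (unnormalized) 1D Haar wavelet transform via lifting.
    Assumes an even-length input; the predict/update steps can be done in place,
    which is what makes lifting attractive for memory-efficient GPU code."""
    x = np.asarray(x, dtype=float).copy()
    even, odd = x[0::2], x[1::2]
    detail = odd - even               # predict step: detail coefficients
    approx = even + 0.5 * detail      # update step: approximation coefficients
    return approx, detail

def haar_lifting_inverse(approx, detail):
    """Invert the lifting steps in reverse order."""
    even = approx - 0.5 * detail
    odd = detail + even
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

signal = np.array([2.0, 4.0, 6.0, 8.0])
a, d = haar_lifting_forward(signal)
assert np.allclose(haar_lifting_inverse(a, d), signal)  # perfect reconstruction
```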


Westergaard M.,TU Eindhoven
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2011

Declarative workflow languages are easy for humans to understand and use for specifications, but difficult for computers to check for consistency and use for enactment. Therefore, declarative languages need to be translated to something a computer can handle. One approach is to translate the declarative language to linear temporal logic (LTL), which can be translated to finite automata. While computers are very good at handling finite automata, the translation itself is often a road block as it may take time exponential in the size of the input. Here, we present algorithms for doing this translation much more efficiently (around a factor of 10,000 times faster and handling 10 times larger systems on a standard computer), making declarative specifications scale to realistic settings. © 2011 Springer-Verlag.


Verhoosel C.V.,TU Eindhoven | de Borst R.,University of Glasgow
International Journal for Numerical Methods in Engineering | Year: 2013

In this paper, a phase-field model for cohesive fracture is developed. After casting the cohesive zone approach in an energetic framework, which is suitable for incorporation in phase-field approaches, the phase-field approach to brittle fracture is recapitulated. The approximation to the Dirac function is discussed with particular emphasis on the Dirichlet boundary conditions that arise in the phase-field approximation. The accuracy of the discretisation of the phase field, including the sensitivity to the parameter that balances the field and the boundary contributions, is assessed by means of a simple example. The relation to gradient-enhanced damage models is highlighted, and some comments on the similarities and the differences between phase-field approaches to fracture and gradient-damage models are made. A phase-field representation for cohesive fracture is elaborated, starting from the aforementioned energetic framework. The strong as well as the weak formats are presented, the latter being the starting point for the ensuing finite element discretisation, which involves three fields: the displacement field, an auxiliary field that represents the jump in the displacement across the crack, and the phase field. Compared to phase-field approaches for brittle fracture, the modelling of the jump of the displacement across the crack is a complication, and the current work provides evidence that an additional constraint has to be provided in the sense that the auxiliary field must be constant in the direction orthogonal to the crack. The sensitivity of the results with respect to the numerical parameter needed to enforce this constraint is investigated, as well as how the results depend on the orders of the discretisation of the three fields. Finally, examples are given that demonstrate grid insensitivity for adhesive and for cohesive failure, the latter example being somewhat limited because only straight crack propagation is considered. © 2013 John Wiley & Sons, Ltd.
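For orientation, the brittle phase-field formulation that the paper recapitulates regularizes the sharp crack surface Γ by a volume integral of a crack density in the phase field d with length-scale parameter ℓ (standard notation assumed here, not quoted from the article):

\[
\Gamma_{\ell}(d) = \int_{\Omega} \left( \frac{d^{2}}{2\ell} + \frac{\ell}{2}\,|\nabla d|^{2} \right) \mathrm{d}V ,
\]

which converges to the sharp crack measure as ℓ → 0; the cohesive extension developed in the paper adds the auxiliary displacement-jump field on top of this description.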


Khatua S.,Leiden University | Paulo P.M.R.,University of Lisbon | Yuan H.,Leiden University | Gupta A.,Leiden University | And 2 more authors.
ACS Nano | Year: 2014

Enhancing the fluorescence of a weak emitter is important to further extend the reach of single-molecule fluorescence imaging to many unexplored systems. Here we study fluorescence enhancement by isolated gold nanorods and explore the role of the surface plasmon resonance (SPR) on the observed enhancements. Gold nanorods can be cheaply synthesized in large volumes, yet we find similar fluorescence enhancements as literature reports on lithographically fabricated nanoparticle assemblies. The fluorescence of a weak emitter, crystal violet, can be enhanced more than 1000-fold by a single nanorod with its SPR at 629 nm excited at 633 nm. This strong enhancement results from both an excitation rate enhancement of ~130 and an effective emission enhancement of ~9. The fluorescence enhancement, however, decreases sharply when the SPR wavelength moves away from the excitation laser wavelength or when the SPR has only a partial overlap with the emission spectrum of the fluorophore. The reported measurements of fluorescence enhancement by 11 nanorods with varying SPR wavelengths are consistent with numerical simulations. © 2014 American Chemical Society.
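As a quick consistency check of the reported numbers (our arithmetic, not a statement from the paper), the overall enhancement is roughly the product of the excitation and emission contributions:

\[
F_{\mathrm{total}} \approx F_{\mathrm{exc}} \times F_{\mathrm{em}} \approx 130 \times 9 \approx 1.2 \times 10^{3},
\]

which is consistent with the more-than-1000-fold enhancement quoted for crystal violet near a resonant nanorod.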


Kunert C.,University of Stuttgart | Harting J.,University of Stuttgart | Harting J.,TU Eindhoven | Vinogradova O.I.,RWTH Aachen
Physical Review Letters | Year: 2010

We report results of lattice Boltzmann simulations of a high-speed drainage of liquid films squeezed between a smooth sphere and a randomly rough plane. A significant decrease in the hydrodynamic resistance force as compared with that predicted for two smooth surfaces is observed. However, this force reduction does not represent slippage. The computed force is exactly the same as that between equivalent smooth surfaces obeying no-slip boundary conditions, but located at an intermediate position between peaks and valleys of asperities. The shift in hydrodynamic thickness is shown to depend on the height and density of roughness elements. Our results do not support some previous experimental conclusions on a very large and shear-dependent boundary slip for similar systems. © 2010 The American Physical Society.


Su R.,Nanyang Technological University | Van Schuppen J.H.,Centrum voor Wiskunde en Informatica CWI | Rooda J.E.,TU Eindhoven
IEEE Transactions on Automatic Control | Year: 2010

Blockingness is one of the major obstacles that need to be overcome in the Ramadge-Wonham supervisory synthesis paradigm, especially for large systems. In this paper, we propose an abstraction technique to overcome this difficulty. We first provide details of this abstraction technique, then describe how it can be applied to a supervisor synthesis problem, where plant models are nondeterministic but specifications and supervisors are deterministic. We show that a nonblocking supervisor for an abstraction of a plant under a specification is guaranteed to be a nonblocking supervisor of the original plant under the same specification. The reverse statement is also true, if we impose an additional constraint on the choice of the alphabet of abstraction, i.e., every event that is either observable or labels a transition to a marker state is contained in the alphabet of abstraction. © 2006 IEEE.


De Teresa J.M.,University of Zaragoza | Cordoba R.,University of Zaragoza | Cordoba R.,TU Eindhoven
ACS Nano | Year: 2014

One of the main features of any lithography technique is its resolution, generally maximized for a single isolated object. However, in most cases, functional devices call for highly dense arrays of nanostructures, the fabrication of which is generally challenging. Here, we show the growth of arrays of densely packed isolated nanowires based on the use of focused beam induced deposition plus Ar+ milling. The growth strategy presented herein allows the creation of films showing thickness modulation with periodicity determined by the beam scan pitch. The subsequent Ar+ milling translates such modulation into an array of isolated nanowires. This approach has been applied to grow arrays of W-based nanowires by focused ion beam induced deposition and Co nanowires by focused electron beam induced deposition, achieving linear densities up to 2.5 × 10⁵ nanowires/cm (one nanowire every 40 nm). These results open the route for specific applications in nanomagnetism, nanosuperconductivity, and nanophotonics, where arrays of densely packed isolated nanowires grown by focused beam deposition are required. © 2014 American Chemical Society.


After demonstrating by means of an in vitro model experiment that the flow in the glottis can become asymmetric, Erath et al. [J. Acoust. Soc. Am. 130, 389-403 (2011)] propose a theory to estimate the resulting asymmetry in the lateral hydrodynamic force on the vocal folds. A wall-jet attached to one side of the divergent downstream part of the glottis is considered. The model assumes that the wall is a flat plate and that the jet separates at the glottal exit. They implement this so-called Boundary Layer Estimation of Asymmetric Pressure force model in a lumped two-mass model of the vocal folds. This should allow them to study the impact of the asymmetry on voiced sound production. A critical discussion of the merits and shortcomings of the model is provided. It predicts discontinuities in the time dependency of the lateral force. It predicts this force to be independent of the glottal opening, which is not reasonable. An alternative model is proposed, which avoids these problems and predicts that there is a minimum glottal opening below which the wall-jet does not separate from the wall at the glottal exit. This is in agreement with the experimental results provided by Erath et al. © 2013 Acoustical Society of America.


Hopfe C.J.,University of Cardiff | Hensen J.L.M.,TU Eindhoven
Energy and Buildings | Year: 2011

Building performance simulation (BPS) has the potential to provide relevant design information by indicating directions for design solutions. A major challenge in simulation tools is how to deal with the difficulties arising from the large variety of parameters and the complexity of factors such as non-linearity, discreteness, and uncertainty. The purpose of uncertainty and sensitivity analysis can be described as identifying uncertainties in the input and output of a system or simulation tool [1-3]. In practice, uncertainty and sensitivity analysis have many additional benefits, including: (1) With the help of parameter screening it enables the simplification of a model [4]. (2) It allows the analysis of the robustness of a model [5]. (3) It makes users aware of unexpected sensitivities that may lead to errors and/or wrong specifications (quality assurance) [6-10]. (4) By changing the input of the parameters and showing the effect on the outcome of a model, it provides a "what-if analysis" (decision support) [11]. In this paper a case study is performed based on an office building with respect to various building performance parameters. Uncertainty analysis (UA) is carried out and implications for the results considering energy consumption and thermal comfort are demonstrated and elaborated. The added value and usefulness of the integration of UA in BPS is shown. © 2011 Elsevier B.V. All rights reserved.
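A minimal sketch of what a Monte Carlo uncertainty analysis of this kind involves (a generic surrogate model and assumed input distributions, not the office-building case study itself): sample the uncertain inputs, evaluate the performance model for each sample, and summarize the spread of the output.

```python
import random
import statistics

def annual_energy_use(u_value, infiltration_rate):
    """Placeholder surrogate for a BPS model run (assumption: a simple linear
    response; a real study would call the simulation tool here). Returns kWh/m2."""
    return 50.0 + 120.0 * u_value + 30.0 * infiltration_rate

samples = []
for _ in range(1000):
    u = random.gauss(0.35, 0.05)    # assumed uncertainty in facade U-value
    ach = random.gauss(0.5, 0.1)    # assumed uncertainty in infiltration rate
    samples.append(annual_energy_use(u, ach))

print(statistics.mean(samples), statistics.stdev(samples))  # output spread
```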


Dorst K.,University of Technology, Sydney | Dorst K.,TU Eindhoven
Design Studies | Year: 2011

In the last few years, "Design Thinking" has gained popularity - it is now seen as an exciting new paradigm for dealing with problems in sectors as far afield as IT, Business, Education and Medicine. This potential success challenges the design research community to provide unambiguous answers to two key questions: "What is the core of Design Thinking?" and "What could it bring to practitioners and organisations in other fields?". We sketch a partial answer by considering the fundamental reasoning pattern behind design, and then looking at the core design practices of framing and frame creation. The paper ends with an exploration of the way in which these core design practices can be adopted for organisational problem solving and innovation. © 2011 Elsevier Ltd. All rights reserved.


Goossens K.,TU Eindhoven | Hansson A.,University of Twente
Proceedings - Design Automation Conference | Year: 2010

The goals for the Æthereal network on silicon, as it was then called, were set in 2000 and its concepts were defined early 2001. Ten years on, what has been achieved? Did we meet the goals, and what is left of the concepts? In this paper we answer those questions, and evaluate different implementations, based on a new performance:cost analysis. We discuss and reflect on our experiences, and conclude with open issues and future directions. © Copyright 2010 ACM.


Design can be described as a sequence of decisions made to balance design goals and constraints. These decisions must be made in every design effort, although they may not be explicit, conscious, or formally represented. In routine design, these decisions are straightforward, requiring little learning by designers. Problem understanding evolves in parallel with the problem solution, and many components of the design problem cannot be expected to emerge until some attempt has been made at generating solutions. Generalized knowledge can also be derived by using other empirical or theoretical research methods. Design-based research, however, can produce knowledge that normally could not be generated by isolated analysis or traditional empirical approaches, and therefore complements existing empirical and theoretical research methods.


Ottmann C.,TU Eindhoven
Bioorganic and Medicinal Chemistry | Year: 2013

14-3-3 Proteins are eukaryotic adapter proteins that regulate a plethora of physiological processes by binding to several hundred partner proteins. They play a role in biological activities as diverse as signal transduction, cell cycle regulation, apoptosis, host-pathogen interactions and metabolic control. As such, 14-3-3s are implicated in disease areas like cancer, neurodegeneration, diabetes, pulmonary disease, and obesity. Targeted modulation of 14-3-3 protein-protein interactions (PPIs) by small molecules is therefore an attractive concept for disease intervention. In recent years a number of examples of inhibitors and stabilizers of 14-3-3 PPIs have been reported promising a vivid future in chemical biology and drug development for this remarkable class of proteins. © 2013 Elsevier Ltd. All rights reserved.


Heuts J.P.A.,TU Eindhoven | Smeets N.M.B.,Queens University
Polymer Chemistry | Year: 2011

An overview is given of cobalt-catalyzed chain transfer in free-radical polymerization and the chemistry and applications of its derived macromonomers. Catalytic chain transfer polymerization is a very efficient and versatile technique for the synthesis of functional macromonomers. Firstly the mechanism and kinetic aspects of the process are briefly discussed in solution/bulk and in emulsion polymerization, followed by a description of its application to produce functional macromonomers. The second part of this review briefly describes the behavior of the macromonomers as chain transfer agents and/or comonomers in second-stage radical polymerizations yielding polymers of more complex architectures. The review ends with a brief overview of post-polymerization modifications of the vinyl end-functionality of the macromonomers yielding functional polymers with applications ranging from initiators in anionic polymerization to end-functional lectin-binding glycopolymers. This journal is © The Royal Society of Chemistry.


Holder S.J.,University of Kent | Sommerdijk N.A.J.M.,TU Eindhoven
Polymer Chemistry | Year: 2011

Amphiphilic AB and ABA block copolymers have been demonstrated to form a variety of self-assembled aggregate structures in dilute solutions where the solvent preferentially solvates one of the blocks. The most common structures formed by these amphiphilic macromolecules are spherical micelles, cylindrical micelles and vesicles (polymersomes). Interest in the characterisation and controlled formation of block copolymer aggregates has been spurred on by their potential as surfactants, nano- to micro-sized carriers for active compounds, for the controlled release of encapsulated compounds and for inorganic materials templating, amongst numerous other proposed applications. Research in the past decade has focussed not only on manipulating the properties of aggregates through control of both the chemistry of the constituent polymer blocks but also the external and internal morphology of the aggregates. This review article will present an overview of recent approaches to controlling the self-assembly of amphiphilic block copolymers with a view to obtaining novel micellar morphologies. Whilst the article touches upon multi-compartment micelles, particular focus is placed upon control of the overall shape of micelles; i.e. those systems that expand the range of accessible morphologies beyond 'simple' spherical and cylindrical micelles, namely disklike, toroidal and bicontinuous micelles. © The Royal Society of Chemistry 2011.


Teunissen J.,Centrum Wiskunde and Informatica CWI | Ebert U.,Centrum Wiskunde and Informatica CWI | Ebert U.,TU Eindhoven
Journal of Computational Physics | Year: 2014

In particle simulations, the weights of particles determine how many physical particles they represent. Adaptively adjusting these weights can greatly improve the efficiency of the simulation, without creating severe nonphysical artifacts. We present a new method for the pairwise merging of particles, in which two particles are combined into one. To find particles that are 'close' to each other, we use a k-d tree data structure. With a k-d tree, close neighbors can be searched for efficiently, and independently of the mesh used in the simulation. The merging can be done in different ways, conserving for example momentum or energy. We introduce probabilistic schemes, which set properties for the merged particle using random numbers. The effect of various merge schemes on the energy distribution, the momentum distribution and the grid moments is compared. We also compare their performance in the simulation of the two-stream instability. © 2013 Elsevier Inc.
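
The pairwise merge described above lends itself to a compact illustration. The following sketch is not the authors' code (function names, array layout and the nearest-neighbour pairing rule are illustrative assumptions); it uses SciPy's k-d tree to find close particles and merges pairs so that total weight and momentum are conserved:

```python
import numpy as np
from scipy.spatial import cKDTree

def merge_pairs(pos, vel, w, n_pairs):
    """Merge up to n_pairs of nearby particles, conserving total weight and momentum."""
    tree = cKDTree(pos)                      # neighbour search independent of any simulation mesh
    _, idx = tree.query(pos, k=2)            # idx[:, 1] is each particle's nearest neighbour
    merged = np.zeros(len(pos), dtype=bool)
    new_pos, new_vel, new_w = [], [], []
    done = 0
    for i, j in enumerate(idx[:, 1]):
        if done >= n_pairs or merged[i] or merged[j]:
            continue
        W = w[i] + w[j]
        new_pos.append((w[i] * pos[i] + w[j] * pos[j]) / W)   # weight-averaged position
        new_vel.append((w[i] * vel[i] + w[j] * vel[j]) / W)   # momentum-conserving velocity
        new_w.append(W)
        merged[i] = merged[j] = True
        done += 1
    keep = ~merged                           # unmerged particles are carried over unchanged
    pos = np.vstack([pos[keep]] + new_pos) if new_pos else pos[keep]
    vel = np.vstack([vel[keep]] + new_vel) if new_vel else vel[keep]
    w = np.concatenate([w[keep], np.asarray(new_w)])
    return pos, vel, w
```

Energy-conserving or probabilistic variants, as discussed in the paper, would replace the weight-averaged velocity by a different assignment rule for the merged particle.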


Su R.,Nanyang Technological University | Van Schuppen J.H.,Centrum voor Wiskunde en Informatica CWI | Rooda J.E.,TU Eindhoven
Automatica | Year: 2012

In supervisor synthesis for discrete-event systems, achieving nonblockingness is a major challenge for a large system. To overcome it, we present an approach to synthesize a deterministic coordinated distributed supervisor under partial observation, where the plant is modeled by a collection of nondeterministic finite-state automata and the requirement is modeled by a collection of deterministic finite-state automata. Then we provide a sufficient condition to ensure the maximal permissiveness of a coordinated distributed supervisor generated by the proposed synthesis approach. © 2012 Elsevier Ltd. All rights reserved.


Peng H.,TU Eindhoven | Coit D.W.,Rutgers University | Feng Q.,University of Houston
IEEE Transactions on Reliability | Year: 2012

This paper proposes two new importance measures: one for systems with s-independent degrading components, and another for systems with s-correlated degrading components. Importance measures in previous research are inadequate for systems with degrading components because they are only applicable to steady-state cases and problems with discrete states, without considering the continuously changing status of the degrading components. Our new importance measures are proposed as functions of time that can provide timely feedback on the critical components prior to failure based on the measured or observed degradation. Furthermore, the correlation between components is considered for developing these importance measures through a multivariate distribution. To evaluate the criticality of components, we analysed reliability models for multi-component systems with degrading components, which can also be utilized for studying maintenance models. Numerical examples show that the proposed importance measures can be used as an effective tool to assess component criticality for systems with degrading components. © 2006 IEEE.


Brouwers J.J.H.,TU Eindhoven
Physical Review E - Statistical, Nonlinear, and Soft Matter Physics | Year: 2012

We derive a comprehensive statistical model for dispersion of passive or almost passive admixture particles such as fine particulate matter, aerosols, smoke, and fumes in turbulent flow. The model rests on the Markov limit for particle velocity. It is in accordance with the asymptotic structure of turbulence at large Reynolds number as described by Kolmogorov. The model consists of Langevin and diffusion equations in which the damping and diffusivity are expressed by expansions in powers of the reciprocal Kolmogorov constant C0. We derive solutions of O(C0^0) and O(C0^-1). We truncate at O(C0^-2), which is shown to result in an error of a few percent in predicted dispersion statistics for representative cases of turbulent flow. We reveal analogies and remarkable differences between the solutions of classical statistical mechanics and those of statistical turbulence. © 2012 American Physical Society.
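
As a rough illustration of what a Langevin-type dispersion model looks like in practice, the sketch below integrates the classical single-particle Langevin model for homogeneous isotropic turbulence with the Euler-Maruyama scheme; it is not the authors' C0 expansion, and all parameter values are illustrative:

```python
import numpy as np

def langevin_dispersion(n_particles=10_000, n_steps=5_000, dt=1e-3,
                        sigma_u=1.0, eps=1.0, C0=6.0, rng=None):
    """Euler-Maruyama integration of the classical Langevin model
    dv = -(v / T_L) dt + sqrt(C0 * eps) dW,  dx = v dt,
    with integral time scale T_L = 2 * sigma_u**2 / (C0 * eps)."""
    rng = np.random.default_rng() if rng is None else rng
    T_L = 2.0 * sigma_u**2 / (C0 * eps)
    x = np.zeros(n_particles)
    v = rng.normal(0.0, sigma_u, n_particles)       # start from the equilibrium velocity PDF
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_particles)
        v += -v / T_L * dt + np.sqrt(C0 * eps) * dW
        x += v * dt
    return x.var()    # dispersion sigma_x^2(t); approaches 2*sigma_u^2*T_L*t for t >> T_L

print(langevin_dispersion())
```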


Woeginger G.J.,TU Eindhoven
Journal of Informetrics | Year: 2014

In a recent paper, Chambers and Miller introduced two fundamental axioms for scientific research indices. We perform a detailed analysis of these two axioms, thereby providing clean combinatorial characterizations of the research indices that satisfy these axioms and of the so-called step-based indices. We single out the staircase indices as a particularly simple subfamily of the step-based indices, and we provide a simple axiomatic characterization for them. © 2014 Elsevier Ltd.
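
For readers unfamiliar with research indices: an index in this sense maps a citation record to a number, and the h-index is the most familiar example. The toy computation below is included only to make the object of the axiomatic analysis concrete; it makes no claim about the step-based or staircase families characterized in the paper:

```python
def h_index(citations):
    """h-index: the largest h such that at least h papers have >= h citations."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

assert h_index([10, 8, 5, 4, 3]) == 4
assert h_index([25, 8, 5, 3, 3, 2]) == 3
```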


Biferale L.,University of Rome Tor Vergata | Musacchio S.,French National Center for Scientific Research | Toschi F.,TU Eindhoven | Toschi F.,CNR Institute for applied mathematics Mauro Picone
Physical Review Letters | Year: 2012

We study the statistical properties of homogeneous and isotropic three-dimensional (3D) turbulent flows. By introducing a novel way to make numerical investigations of Navier-Stokes equations, we show that all 3D flows in nature possess a subset of nonlinear evolution leading to a reverse energy transfer: from small to large scales. Up to now, such an inverse cascade was only observed in flows under strong rotation and in quasi-two-dimensional geometries under strong confinement. We show here that energy flux is always reversed when mirror symmetry is broken, leading to a distribution of helicity in the system with a well-defined sign at all wave numbers. Our findings broaden the range of flows where the inverse energy cascade may be detected and rationalize the role played by helicity in the energy transfer process, showing that both 2D and 3D properties naturally coexist in all flows in nature. The unconventional numerical methodology here proposed, based on a Galerkin decimation of helical Fourier modes, paves the road for future studies on the influence of helicity on small-scale intermittency and the nature of the nonlinear interaction in magnetohydrodynamics. © 2012 American Physical Society.


Ulu C.,University of Texas at Austin | Honhon D.,TU Eindhoven | Alptekinoglu A.,Southern Methodist University
Operations Research | Year: 2012

How should a firm modify its product assortment over time when learning about consumer tastes? In this paper, we study dynamic assortment decisions in a horizontally differentiated product category for which consumers' diverse tastes can be represented as locations on a Hotelling line. We presume that the firm knows all possible consumer locations, comprising a finite set, but does not know their probability distribution. We model this problem as a discrete-time dynamic program; each period, the firm chooses an assortment and sets prices to maximize the total expected profit over a finite horizon, given its subjective beliefs over consumer tastes. The consumers then choose a product from the assortment that maximizes their own utility. The firm observes sales, which provide censored information on consumer tastes, and it updates beliefs in a Bayesian fashion. There is a recurring trade-off between the immediate profits from sales in the current period (exploitation) and the informational gains to be exploited in all future periods (exploration). We show that one can (partially) order assortments based on their information content and that in any given period the optimal assortment cannot be less informative than the myopically optimal assortment. This result is akin to the well-known "stock more" result in censored newsvendor problems with the newsvendor learning about demand through sales when lost sales are not observable. We demonstrate that it can be optimal for the firm to alternate between exploration and exploitation, and even offer assortments that lead to losses in the current period in order to gain information on consumer tastes. We also develop a Bayesian conjugate model that reduces the state space of the dynamic program and study value of learning using this conjugate model. © 2012 INFORMS.


Van De Wouw N.,TU Eindhoven | Leine R.I.,ETH Zurich
International Journal of Robust and Nonlinear Control | Year: 2012

In this paper, we consider the robust set-point stabilization problem for motion systems subject to friction. Robustness aspects are particularly relevant in practice, where uncertainties in the friction model are unavoidable. We propose an impulsive feedback control design that robustly stabilizes the set-point for a class of position-, velocity- and time-dependent friction laws with uncertainty. Moreover, it is shown that this control strategy guarantees finite-time convergence to the set-point, which is a favorable characteristic of the resulting closed loop from a transient performance perspective. The results are illustrated by means of a representative motion control example. © 2011 John Wiley & Sons, Ltd.


Vaesen K.,TU Eindhoven
Biology and Philosophy | Year: 2012

Dubreuil (Biol Phil 25:53-73, 2010b, this journal) argues that modern-like cognitive abilities for inhibitory control and goal maintenance most likely evolved in Homo heidelbergensis, well before the evolution of oft-cited modern traits, such as symbolism and art. Dubreuil's argument proceeds in two steps. First, he identifies two behavioral traits that are supposed to be indicative of the presence of a capacity for inhibition and goal maintenance: cooperative feeding and cooperative breeding. Next, he tries to show that these behavioral traits most likely emerged in Homo heidelbergensis. In this paper, I show that neither of these steps is warranted in light of current scientific evidence, and thus, that the evolutionary background of human executive functions, such as inhibition and goal maintenance, remains obscure. Nonetheless, I suggest that cooperative breeding might mark a crucial step in the evolution of our species: its early emergence in Homo erectus might have favored a social intelligence that was required to get modernity really off the ground in Homo sapiens. © 2011 The Author(s).


Lazar M.,TU Eindhoven
Proceedings of the IEEE Conference on Decision and Control | Year: 2010

This paper considers the synthesis of infinity norm Lyapunov functions for discrete-time linear systems. A proper conic partition of the state-space is employed to construct a finite set of linear inequalities in the elements of the Lyapunov weight matrix. Under typical assumptions, it is proven that the feasibility of the derived set of linear inequalities is equivalent with the existence of an infinity norm Lyapunov function. Furthermore, it is shown that the developed solution extends naturally to several relevant classes of discrete-time nonlinear systems. ©2010 IEEE.
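
A candidate infinity-norm Lyapunov function can at least be sanity-checked numerically. The sketch below is illustrative only: it samples the decrease condition on the unit sphere rather than solving the paper's linear inequalities over a conic partition, and the example matrices are assumptions:

```python
import numpy as np

def check_inf_norm_lyapunov(A, P, rho=0.99, n_samples=100_000, rng=None):
    """Randomized check that V(x) = ||P x||_inf satisfies V(A x) <= rho * V(x).

    Sampling on the unit sphere suffices because the condition is homogeneous;
    this is a sanity check, not a proof or a synthesis procedure.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[0]
    x = rng.normal(size=(n_samples, n))
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    V  = np.abs(x @ P.T).max(axis=1)          # ||P x||_inf for every sample
    V1 = np.abs(x @ (P @ A).T).max(axis=1)    # ||P A x||_inf for every sample
    return bool(np.all(V1 <= rho * V))

A = np.array([[0.5, 0.2], [0.0, 0.4]])
print(check_inf_norm_lyapunov(A, P=np.eye(2)))   # True for this stable example
```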


Gonzalez-Rodriguez D.,Autonomous University of Madrid | Schenning A.P.H.J.,TU Eindhoven
Chemistry of Materials | Year: 2011

Recent developments in the area of H-bonded supramolecular assemblies of π-conjugated systems, that is, oligomers and polymers, are described. The state-of-the-art summary of the recent developments in the design of discrete systems and functional materials is presented. © 2010 American Chemical Society.


Van Der Aalst W.,TU Eindhoven
ACM Transactions on Management Information Systems | Year: 2012

Over the last decade, process mining emerged as a new research field that focuses on the analysis of processes using event data. Classical data mining techniques such as classification, clustering, regression, association rule learning, and sequence/episode mining do not focus on business process models and are often only used to analyze a specific step in the overall process. Process mining focuses on end-to-end processes and is possible because of the growing availability of event data and new process discovery and conformance checking techniques. Process models are used for analysis (e.g., simulation and verification) and enactment by BPM/WFM systems. Previously, process models were typically made by hand without using event data. However, activities executed by people, machines, and software leave trails in so-called event logs. Process mining techniques use such logs to discover, analyze, and improve business processes. Recently, the Task Force on Process Mining released the Process Mining Manifesto. The manifesto is supported by 53 organizations, and 77 process mining experts contributed to it. The active involvement of end-users, tool vendors, consultants, analysts, and researchers illustrates the growing significance of process mining as a bridge between data mining and business process modeling. The practical relevance of process mining and the interesting scientific challenges make process mining one of the "hot" topics in Business Process Management (BPM). This article introduces process mining as a new research field and summarizes the guiding principles and challenges described in the manifesto. © 2012 ACM.


Rai V.R.,Colorado School of Mines | Vandalon V.,TU Eindhoven | Agarwal S.,Colorado School of Mines
Langmuir | Year: 2012

We have examined the role of substrate temperature on the surface reaction mechanisms during the atomic layer deposition (ALD) of Al2O3 from trimethyl aluminum (TMA) in combination with an O2 plasma and O3 over a substrate temperature range of 70-200 °C. The ligand-exchange reactions were investigated using in situ attenuated total reflection Fourier transform infrared spectroscopy. Consistent with our previous work on ALD of Al2O3 from an O2 plasma and O3 [Rai, V. R.; Vandalon, V.; Agarwal, S. Langmuir 2010, 26, 13732], both -OH groups and carbonates were the chemisorption sites for TMA over the entire temperature range explored. The concentration of surface -CH3 groups after the TMA cycle was, however, strongly dependent on the surface temperature and the type of oxidizer, which in turn influenced the corresponding growth per cycle. The combustion of surface -CH3 ligands was not complete at 70 °C during O3 exposure, indicating that an O2 plasma is a relatively stronger oxidizing agent. Further, in O3-assisted ALD, the ratio of mono- and bidentate carbonates on the surface after O3 exposure was dependent on the substrate temperature. © 2011 American Chemical Society.


Van Helden P.,Sasol Limited | Ciobeca I.M.,TU Eindhoven
ChemPhysChem | Year: 2011

Under your skin: Carbon plays an important role in the deactivation process of Co-based FT catalysts. Therefore the adsorption behavior of carbon at various coverages on the surfaces and into the first subsurface layers of fcc-Co(111) and fcc-Co(100) (see picture) was calculated by density functional theory (DFT). Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.


Vaesen K.,TU Eindhoven
Studies in History and Philosophy of Science Part C :Studies in History and Philosophy of Biological and Biomedical Sciences | Year: 2014

Chimpanzees, but very few other animals, figure prominently in (recent) attempts to reconstruct the evolution of uniquely human traits. In particular, the chimpanzee is used (i) to identify traits unique to humans, and thus in need of reconstruction; (ii) to initialize the reconstruction, by taking its state to reflect the state of the last common ancestor of humans and chimpanzees; (iii) as a baseline against which to test evolutionary hypotheses. Here I point out the flaws in this three-step procedure, and show how they can be overcome by taking advantage of much broader phylogenetic comparisons. More specifically, I explain how such comparisons yield more reliable estimations of ancestral states and how they help to resolve problems of underdetermination inherent to chimpocentric accounts. To illustrate my points, I use a recent chimpocentric argument by Kitcher. © 2013 Elsevier Ltd.


Aiki T.,Japan Womens University | Muntean A.,TU Eindhoven
Interfaces and Free Boundaries | Year: 2013

We study a one-dimensional free-boundary problem describing the penetration of carbonation fronts (free reaction-triggered interfaces) in concrete. Using suitable integral estimates for the free boundary and involved concentrations, we reach a twofold aim: (1) We fill a fundamental gap by justifying rigorously the experimentally guessed √t asymptotic behavior. Previously we obtained the upper bound s(t) ≤ C0√t for some constant C0; now we show the optimality of the rate by proving the right nontrivial lower estimate, i.e., there exists C0' > 0 such that s(t) ≥ C0'√t. (2) We obtain weak solutions to the free-boundary problem for the case when the measure of the initial domain vanishes. In this way, we allow for the nucleation of the moving carbonation front - a scenario that until now was open from the mathematical analysis point of view. © European Mathematical Society 2013.


Desmet L.,Philips | Ras A.J.M.,Philips | De Boer D.K.G.,Philips | Debije M.G.,TU Eindhoven
Optics Letters | Year: 2012

We report conversion efficiencies of experimental single and dual light guide luminescent solar concentrators. We have built several 5 cm × 5 cm and 10 cm × 10 cm luminescent solar concentrator (LSC) demonstrators consisting of c-Si photovoltaic cells attached to luminescent light guides of Lumogen F Red 305 dye and perylene perinone dye. The highest overall efficiency obtained was 4.2% on a 5 cm × 5 cm stacked dual light guide using both luminescent materials. To our knowledge, this is the highest reported experimentally determined efficiency for c-Si photovoltaic-based LSCs. Furthermore, we also produced a 5 cm × 5 cm LSC specimen based on an inorganic phosphor layer with an overall efficiency of 2.5%. © 2012 Optical Society of America.


Torricelli F.,TU Eindhoven
IEEE Transactions on Electron Devices | Year: 2012

An extended theory of carrier hopping transport in organic transistors is proposed. According to many experimental studies, the density of localized states in organic thin-film transistors can be described by a double-exponential function. In this work, using a percolation model of hopping, the analytical expressions of conductivity and mobility as functions of temperature and charge concentration are obtained. The conductivity depends only on the tail states, while the mobility is determined by the total charge carriers in the semiconductor. © 2012 IEEE.


Hopfe C.J.,University of Cardiff | Augenbroe G.L.M.,Georgia Institute of Technology | Hensen J.L.M.,TU Eindhoven
Building and Environment | Year: 2013

Building performance assessment is complex, as it has to respond to multiple criteria. Objectives originating from the demands that are put on energy consumption, acoustical performance, thermal occupant comfort, indoor air quality and many other issues must all be reconciled. An assessment requires the use of predictive models that involve numerous design and physical parameters as their inputs. Since these input parameters, as well as the models that operate on them, are not precisely known, it is imprudent to assume deterministic values for them. A more realistic approach is to introduce ranges of uncertainty in the parameters themselves, or in their derivation, from underlying approximations. In so doing, it is recognized that the outcome of a performance assessment is influenced by many sources of uncertainty. As a consequence of this approach the design process is informed by assessment outcomes that produce probability distributions of a target measure instead of its deterministic value. In practice this may lead to a "well informed" analysis but not necessarily to a straightforward, cost effective and efficient design process. This paper discusses how design decision making can be based on uncertainty assessments. A case study is described focussing on a discrete decision that involves a choice between two HVAC system designs. Analytical hierarchy process (AHP) including uncertainty information is used to arrive at a rational decision. In this approach, key performance indicators such as energy efficiency, thermal comfort and others are ranked according to their importance and preferences. This process enables a clear group consensus based choice of one of the two options. The research presents a viable means of collaboratively ranking complex design options based on stakeholders' preferences and considering the uncertainty involved in the designs. In so doing it provides important feedback to the design team. © 2013 Elsevier Ltd.
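
The core AHP step, deriving priority weights from a pairwise-comparison matrix via its principal eigenvector, can be sketched as follows. The criteria names and comparison values are illustrative assumptions, and the uncertainty propagation used in the paper is not reproduced:

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from a reciprocal pairwise-comparison matrix
    (principal right eigenvector, normalized to sum to one)."""
    vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
    k = np.argmax(vals.real)                  # Perron (largest) eigenvalue
    w = np.abs(vecs[:, k].real)
    return w / w.sum()

# Illustrative criteria: energy efficiency, thermal comfort, investment cost
C = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])
print(ahp_weights(C).round(3))   # weights sum to one; larger value = more important criterion
```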


Van Der Aalst W.M.P.,TU Eindhoven
Lecture Notes in Business Information Processing | Year: 2010

Computer simulation attempts to "mimic" real-life or hypothetical behavior on a computer to see how processes or systems can be improved and to predict their performance under different circumstances. Simulation has been successfully applied in many disciplines and is considered to be a relevant and highly applicable tool in Business Process Management (BPM). Unfortunately, in reality the use of simulation is limited. Few organizations actively use simulation. Even organizations that purchase simulation software (stand-alone or embedded in some BPM suite), typically fail to use it continuously over an extended period. This keynote paper highlights some of the problems causing the limited adoption of simulation. For example, simulation models tend to oversimplify the modeling of people working part-time on a process. Also simulation studies typically focus on the steady-state behavior of business processes while managers are more interested in short-term results (a "fast forward button" into the future) for operational decision making. This paper will point out innovative simulation approaches leveraging on recent breakthroughs in process mining. © 2010 Springer-Verlag.


Van Der Aalst W.M.P.,TU Eindhoven
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2010

The Software as a Service (SaaS) paradigm is particularly interesting in situations where many organizations need to support similar processes. For example, municipalities, courts, rental agencies, etc. all need to support highly similar processes. However, despite these similarities, there is also the need to allow for local variations in a controlled manner. Therefore, cloud infrastructures should provide configurable services such that products and processes can be customized while sharing commonalities. Configurable and executable process models are essential for realizing such infrastructures. This will finally transform reference models from "paper tigers" (reference modeling à la SAP, ARIS, etc.) into an "executable reality". Moreover, "configurable services in the cloud" enable cross-organizational process mining. This way, organizations can learn from each other and improve their processes. © Springer-Verlag 2010.


Derler S.,Empa - Swiss Federal Laboratories for Materials Science and Technology | Gerhardt L.-C.,TU Eindhoven
Tribology Letters | Year: 2012

In this review, we discuss the current knowledge on the tribology of human skin and present an analysis of the available experimental results for skin friction coefficients. Starting with an overview on the factors influencing the friction behaviour of skin, we discuss the up-to-date existing experimental data and compare the results for different anatomical skin areas and friction measurement techniques. For this purpose, we also estimated and analysed skin contact pressures applied during the various friction measurements. The detailed analyses show that substantial variations are a characteristic feature of friction coefficients measured for skin and that differences in skin hydration are the main cause thereof, followed by the influences of surface and material properties of the contacting materials. When the friction coefficients of skin are plotted as a function of the contact pressure, the majority of the literature data scatter over a wide range that can be explained by the adhesion friction model. The case of dry skin is reflected by relatively low and pressure-independent friction coefficients (greater than 0.2 and typically around 0.5), comparable to the dry friction of solids with rough surfaces. In contrast, the case of moist or wet skin is characterised by significantly higher (typically >1) friction coefficients that increase strongly with decreasing contact pressure and are essentially determined by the mechanical shear properties of wet skin. In several studies, effects of skin deformation mechanisms contributing to the total friction are evident from friction coefficients increasing with contact pressure. However, the corresponding friction coefficients still lie within the range delimited by the adhesion friction model. Further research effort towards the analysis of the microscopic contact area and mechanical properties of the upper skin layers is needed to improve our so far limited understanding of the complex tribological behaviour of human skin. © 2011 Springer Science+Business Media, LLC.


Rakovic S.V.,University of Maryland University College | Lazar M.,TU Eindhoven
Automatica | Year: 2012

This technical communique delivers a systematic procedure for obtaining a suitable terminal cost function for model predictive control based on Minkowski cost functions. It is shown that, for any given stabilizing linear state feedback control law and associated λ-contractive proper C-set, there always exists a non-trivial scaling of the λ-contractive proper C-set such that the associated Minkowski function satisfies the standard MPC terminal cost stability inequality. © 2012 Elsevier Ltd. All rights reserved.
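
For a polytopic proper C-set S = {x : Fx ≤ 1} with the origin in its interior, the associated Minkowski (gauge) function is simply the row-wise maximum of Fx. The sketch below computes this gauge and gives a sampled estimate of the contraction factor of S under a closed-loop map; it only illustrates the objects involved and is not the paper's scaling construction, and the example data are assumptions:

```python
import numpy as np

def minkowski_gauge(F):
    """Gauge (Minkowski) function of the polytopic C-set S = {x : F x <= 1},
    assumed compact with the origin in its interior: Psi(x) = max_i F_i x."""
    F = np.asarray(F, dtype=float)
    return lambda x: float(np.max(F @ x))

def contraction_estimate(F, A_cl, n_samples=50_000, rng=None):
    """Sampled estimate of lambda = max_x Psi(A_cl x) / Psi(x); a value below one
    suggests that S is contractive for x+ = A_cl x (a check, not a proof)."""
    rng = np.random.default_rng() if rng is None else rng
    psi = minkowski_gauge(F)
    x = rng.normal(size=(n_samples, np.asarray(F).shape[1]))
    return max(psi(A_cl @ xi) / psi(xi) for xi in x)

# Illustrative data: the box |x_i| <= 1 and a stable closed-loop matrix
F = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
A_cl = np.array([[0.5, 0.1], [0.0, 0.6]])
print(contraction_estimate(F, A_cl))   # below one for this example
```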


Dirksz D.A.,TU Eindhoven | Scherpen J.M.A.,University of Groningen
Automatica | Year: 2012

Power-based modeling was originally developed in the early sixties to describe a large class of nonlinear electrical RLC networks, in a special gradient form. Recently this idea has been extended for modeling and control of a larger class of physical systems. In this paper, first, coordinate transformations are introduced for systems described in this framework, such that the physical structure is preserved. Such a transformation can provide new insights for both analysis and control design. Second, power-based integral and adaptive control schemes are presented. Advantages of these schemes are shown by their application on standard mechanical systems. © 2012 Elsevier Ltd. All rights reserved.


Grzela G.,FOM Institute for Atomic and Molecular Physics | Paniagua-Dominguez R.,CSIC - Institute for the Structure of Matter | Barten T.,FOM Institute for Atomic and Molecular Physics | Fontana Y.,FOM Institute for Atomic and Molecular Physics | And 3 more authors.
Nano Letters | Year: 2012

We experimentally demonstrate the directional emission of polarized light from single semiconductor nanowires. The directionality of this emission has been directly determined with Fourier microphotoluminescence measurements of vertically oriented InP nanowires. Nanowires behave as efficient optical nanoantennas, with emission characteristics that are not only given by the material but also by their geometry and dimensions. By means of finite element simulations, we show that the radiated power can be enhanced for frequencies and diameters at which leaky modes in the structure are present. These leaky modes can be associated with Mie resonances in the cylindrical structure. The radiated power can also be inhibited at other frequencies or when the coupling of the emission to the resonances is not favored. We anticipate the relevance of these results for the development of nanowire photon sources with optimized efficiency and/or controlled emission by the geometry. © 2012 American Chemical Society.


Leermakers C.A.J.,TU Eindhoven | Musculus M.P.B.,Sandia National Laboratories
Proceedings of the Combustion Institute | Year: 2015

The growth of poly-cyclic aromatic hydrocarbon (PAH) soot precursors is observed using a two-laser technique combining laser-induced fluorescence (LIF) of PAH with laser-induced incandescence (LII) of soot in a diesel engine under low-temperature combustion (LTC) conditions. The broad mixture distributions and slowed chemical kinetics of LTC "stretch out" soot-formation processes in both space and time, thereby facilitating their study. Imaging PAH-LIF from pulsed-laser excitation at three discrete wavelengths (266, 532, and 633 nm) reveals the temporal growth of PAH molecules, while soot-LII from a 1064-nm pulsed laser indicates inception to soot. The distribution of PAH-LIF also grows spatially within the combustion chamber before soot-LII is first detected. The PAH-LIF signals have broad spectra, much like LII, but typically with a spectral profile that is inconsistent with laser-heated soot. Quantitative natural-emission spectroscopy also shows a broad emission spectrum, presumably from PAH chemiluminescence, temporally coinciding with the PAH-LIF. © 2014 The Combustion Institute. Published by Elsevier Inc. All rights reserved.


Kirkels A.F.,TU Eindhoven
Renewable and Sustainable Energy Reviews | Year: 2012

This study aims to provide a long-term overview of developments in energy from biomass in Western Europe by analyzing the discourse in RD&D and related policy. To this end, the discourse in Western Europe between 1980 and 2010 has been studied through a review of the open literature and of articles of the European Biomass Conference. In addition, a quantitative content analysis of the conference titles has been performed. This reveals the dynamics with respect to the feedstocks, conversion technologies and applications considered, as well as the supporting arguments for them - dynamics that would not show up in a technology- or country-oriented study. We distinguish four different discourses based on differentiation in scale and knowledge intensity, which also relate to feedstock and conversion technology. In this way, the complex developments can be structured and understood as shifts between and within discourses. This is especially relevant as each discourse involves a different policy arena and different actors. With a still growing interest in energy from biomass, the multiple discourses seem to keep co-existing. Emphasis continues to be given to large-scale and knowledge-intensive processes, which will further increase the importance of the supra-national level for future developments. © 2012 Elsevier Ltd. All rights reserved.


Attia S.,Catholic University of Louvain | Gratia E.,Catholic University of Louvain | De Herde A.,Catholic University of Louvain | Hensen J.L.M.,TU Eindhoven
Energy and Buildings | Year: 2012

There is a need for decision support tools that integrate energy simulation into early design of zero energy buildings in the architectural practice. Despite the proliferation of simulation programs in the last decade, there are no ready-to-use applications that cater specifically for the hot climates and their comfort conditions. Furthermore, the majority of existing tools focus on evaluating the design alternatives after the decision making, and largely overlook the issue of informing the design before the decision making. This paper presents an energy-oriented software tool that both accommodates the Egyptian context and provides informative support that aims to facilitate decision making for zero energy buildings. A residential benchmark was established coupling sensitivity analysis modelling and energy simulation software (EnergyPlus) as a means of developing a decision support tool to allow designers to rapidly and flexibly assess the thermal comfort and energy performance of early design alternatives. Validation of the results generated by the tool and its ability to support decision making are presented in the context of a case study and usability testing. © 2012 Elsevier B.V. All rights reserved.


Sijs J.,TNO | Lazar M.,TU Eindhoven
Automatica | Year: 2012

This article focuses on the problem of fusing two prior Gaussian estimates into a single estimate, when the correlation is unknown. Existing solutions either lead to a conservative fusion result, as the chosen parametrization focuses on the fusion formulas instead of correlations, or they are computationally expensive. The contribution of this article is a novel parametrization, in which the correlation is explicitly characterized a priori to deriving the fusion formulas. Then, maximizing the correlation ensures that the fusion result is based on independent parts of the prior estimates and, simultaneously, addresses the fact that the correlation is unknown. In addition, a guaranteed improvement of the accuracy after fusion is attained. An illustrative example demonstrates the benefits of the proposed method compared to an existing fusion method. © 2012 Elsevier Ltd. All rights reserved.
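
For comparison, the classical covariance-intersection rule is a standard way of fusing two Gaussian estimates whose cross-correlation is unknown. The sketch below implements that baseline, not the parametrization proposed in the article; the numerical example is illustrative:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def covariance_intersection(x1, P1, x2, P2):
    """Classical covariance-intersection fusion of two estimates with unknown
    correlation: P^-1 = w*P1^-1 + (1-w)*P2^-1, with w chosen to minimize trace(P)."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)

    def fused(w):
        P = np.linalg.inv(w * I1 + (1.0 - w) * I2)
        x = P @ (w * I1 @ x1 + (1.0 - w) * I2 @ x2)
        return x, P

    res = minimize_scalar(lambda w: np.trace(fused(w)[1]),
                          bounds=(0.0, 1.0), method="bounded")
    return fused(res.x)

x1, P1 = np.array([1.0, 0.0]), np.diag([1.0, 4.0])
x2, P2 = np.array([1.2, 0.3]), np.diag([4.0, 1.0])
x, P = covariance_intersection(x1, P1, x2, P2)
print(x, np.diag(P))
```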


Ma H.,Copenhagen University | Tian P.,Copenhagen University | Pello J.,TU Eindhoven | Bendix P.M.,Copenhagen University | Oddershede L.B.,Copenhagen University
Nano Letters | Year: 2014

Heating of irradiated, e-beam-generated metallic nanostructures was quantified through direct measurements paralleled by novel model-based numerical calculations. By comparing discs, triangles, and stars we showed how particle shape and composition determine the heating. Importantly, our results revealed that substantial heat is generated in the titanium adhesive layer between gold and glass. Even when the Ti layer is as thin as 2 nm it absorbs as much as a 30 nm Au layer and hence should not be ignored. © 2014 American Chemical Society.


Kraemer F.,TU Eindhoven
Journal of Medical Ethics | Year: 2013

While deep brain stimulation (DBS) for patients with Parkinson's disease has typically raised ethical questions about autonomy, accountability and personal identity, recent research indicates that we need to begin taking into account issues surrounding the patients' feelings of authenticity and alienation as well. In order to bring out the relevance of this dimension to ethical considerations of DBS, I analyse a recent case study of a Dutch patient who, as a result of DBS, faced a dilemma between autonomy and authenticity. This case study is meant to point out the normatively meaningful tension patients under DBS experience between authenticity and autonomy.


Bovendeerd P.H.M.,TU Eindhoven
Journal of Biomechanics | Year: 2012

The heart has the ability to respond to long-term changes in its environment through changes in mass (growth), shape (morphogenesis) and tissue properties (remodeling). For improved quantitative understanding of cardiac growth and remodeling (G&R) experimental studies need to be complemented by mathematical models. This paper reviews models for cardiac growth and remodeling of myofiber orientation, as induced by mechanical stimuli. A distinction is made between optimization models, that focus on the end stage of G&R, and adaptation models, that aim to more closely describe the mechanistic relation between stimulus and effect. While many models demonstrate qualitatively promising results, a lot of questions remain, e.g. with respect to the choice of the stimulus for G&R or the long-term stability of the outcome of the model. A continued effort combining information on mechanotransduction at the cellular level, experimental observations on G&R at organ level, and testing of hypotheses on stimulus-effect relations in mathematical models is needed to answer these questions on cardiac G&R. Ultimately, models of cardiac G&R seem indispensable for patient-specific modeling, both to reconstruct the actual state of the heart and to assess the long-term effect of potential interventions. © 2011 Elsevier Ltd.


Litvak N.,University of Twente | Van Der Hofstad R.,TU Eindhoven
Physical Review E - Statistical, Nonlinear, and Soft Matter Physics | Year: 2013

Mixing patterns in large self-organizing networks, such as the Internet, the World Wide Web, and social and biological networks, are often characterized by degree-degree dependencies between neighboring nodes. In this paper, we propose a new way of measuring degree-degree dependencies. One of the problems with the commonly used assortativity coefficient is that in disassortative networks its magnitude decreases with the network size. We mathematically explain this phenomenon and validate the results on synthetic graphs and real-world network data. As an alternative, we suggest to use rank correlation measures such as Spearman's ρ. Our experiments convincingly show that Spearman's ρ produces consistent values in graphs of different sizes but similar structure, and it is able to reveal strong (positive or negative) dependencies in large graphs. In particular, we discover much stronger negative degree-degree dependencies in Web graphs than was previously thought. Rank correlations allow us to compare the assortativity of networks of different sizes, which is impossible with the assortativity coefficient due to its genuine dependence on the network size. We conclude that rank correlations provide a suitable and informative method for uncovering network mixing patterns. © 2013 American Physical Society.
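
Computing such a rank correlation on a graph takes only a few lines. The sketch below uses networkx and scipy with an illustrative synthetic graph (not the Web-graph data of the paper) and compares Spearman's ρ over edge-degree pairs with the standard assortativity coefficient:

```python
import networkx as nx
from scipy.stats import spearmanr

def degree_degree_spearman(G):
    """Spearman's rho between the degrees at the two ends of an edge.

    Each undirected edge is counted in both directions so the measure is symmetric,
    mirroring the usual degree-degree correlation setup."""
    deg = dict(G.degree())
    xs, ys = [], []
    for u, v in G.edges():
        xs += [deg[u], deg[v]]
        ys += [deg[v], deg[u]]
    rho, _ = spearmanr(xs, ys)
    return rho

G = nx.barabasi_albert_graph(10_000, 3, seed=1)   # illustrative synthetic network
print(degree_degree_spearman(G), nx.degree_assortativity_coefficient(G))
```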


Blocken B.,TU Eindhoven | Blocken B.,Catholic University of Leuven
Building and Environment | Year: 2015

Urban physics is the science and engineering of physical processes in urban areas. It basically refers to the transfer of heat and mass in the outdoor and indoor urban environment, and its interaction with humans, fauna, flora and materials. Urban physics is a rapidly increasing focus area as it is key to understanding and addressing the grand societal challenges climate change, energy, health, security, transport and aging. The main assessment tools in urban physics are field measurements, full-scale and reduced-scale laboratory measurements and numerical simulation methods including Computational Fluid Dynamics (CFD). In the past 50 years, CFD has undergone a successful transition from an emerging field into an increasingly established field in urban physics research, practice and design. This review and position paper consists of two parts. In the first part, the importance of urban physics related to the grand societal challenges is described, after which the spatial and temporal scales in urban physics and the associated model categories are outlined. In the second part, based on a brief theoretical background, some views on CFD are provided. Possibilities and limitations are discussed, and in particular, ten tips and tricks towards accurate and reliable CFD simulations are presented. These tips and tricks are certainly not intended to be complete, rather they are intended to complement existing CFD best practice guidelines on ten particular aspects. Finally, an outlook to the future of CFD for urban physics is given. © 2015 Elsevier Ltd.


Montali M.,Free University of Bozen Bolzano | Maggi F.M.,University of Tartu | Chesani F.,University of Bologna | Mello P.,University of Bologna | Van Der Aalst W.M.P.,TU Eindhoven
ACM Transactions on Intelligent Systems and Technology | Year: 2013

Today, large business processes are composed of smaller, autonomous, interconnected subsystems, achieving modularity and robustness. Quite often, these large processes comprise software components as well as human actors, they face highly dynamic environments and their subsystems are updated and evolve independently of each other. Due to their dynamic nature and complexity, it might be difficult, if not impossible, to ensure at design-time that such systems will always exhibit the desired/expected behaviors. This, in turn, triggers the need for runtime verification and monitoring facilities. These are needed to check whether the actual behavior complies with expected business constraints, internal/external regulations and desired best practices. In this work, we present Mobucon EC, a novel monitoring framework that tracks streams of events and continuously determines the state of business constraints. In Mobucon EC, business constraints are defined using the declarative language Declare. For the purpose of this work, Declare has been suitably extended to support quantitative time constraints and non-atomic, durative activities. The logic-based language Event Calculus (EC) has been adopted to provide a formal specification and semantics to Declare constraints, while a light-weight, logic programming-based EC tool supports dynamically reasoning about partial, evolving execution traces. To demonstrate the applicability of our approach, we describe a case study about maritime safety and security and provide a synthetic benchmark to evaluate its scalability. © 2013 ACM 2157-6904/2013/12-ART5 $ 15.00.
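
A single Declare template can be monitored over an event stream with very little machinery. The toy sketch below handles only the response(a, b) constraint and does not reproduce Mobucon EC's Event Calculus formalization, quantitative time constraints or durative activities; activity names are illustrative:

```python
def monitor_response(stream, a, b):
    """Toy runtime monitor for the Declare constraint response(a, b):
    every occurrence of activity a must eventually be followed by b.
    Yields the constraint state after each event."""
    pending = 0
    for event in stream:
        if event == a:
            pending += 1
        elif event == b and pending > 0:
            pending = 0          # one later b discharges all pending a's
        yield event, "satisfied" if pending == 0 else "pending"

trace = ["register", "ship", "pay", "ship", "close"]
for ev, state in monitor_response(trace, a="ship", b="pay"):
    print(ev, state)
# a final state of 'pending' means a permanent violation once the trace completes
```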


Gierds C.,Humboldt University of Berlin | Mooij A.J.,TU Eindhoven | Wolf K.,University of Rostock
IEEE Transactions on Services Computing | Year: 2012

Service-oriented computing aims to create complex systems by composing less-complex systems, called services. Since services can be developed independently, the integration of services requires an adaptation mechanism for bridging any incompatibilities. Behavioral adapters aim to adjust the communication between some services to be composed in order to establish proper interaction between them. We present a novel approach for specifying such adapters, based on domain-specific transformation rules that reflect the elementary operations that adapters can perform. We also present a novel way to synthesize complex adapters that adhere to these rules, viz., by consistently separating data and control, and by using existing controller-synthesis algorithms. Our approach has been implemented, and we discuss some example applications, including real business processes in WS-BPEL. © 2008 IEEE.


Jalba A.C.,TU Eindhoven | Kustra J.,HIGH-TECH | Telea A.C.,University of Groningen
IEEE Transactions on Pattern Analysis and Machine Intelligence | Year: 2013

We present a GPU-based framework for extracting surface and curve skeletons of 3D shapes represented as large polygonal meshes. We use an efficient parallel search strategy to compute point-cloud skeletons and their distance and feature transforms (FTs) with user-defined precision. We regularize skeletons by a new GPU-based geodesic tracing technique which is orders of magnitude faster and more accurate than comparable techniques. We reconstruct the input surface from skeleton clouds using a fast and accurate image-based method. We also show how to reconstruct the skeletal manifold structure as a polygon mesh and the curve skeleton as a polyline. Compared to recent skeletonization methods, our approach offers two orders of magnitude speed-up, high-precision, and low-memory footprints. We demonstrate our framework on several complex 3D models. © 2013 IEEE.


Haans A.,TU Eindhoven
Journal of Environmental Psychology | Year: 2014

The natural preference refers to the human tendency to prefer natural substances over their synthetic counterparts, for example in the domains of food and medication. In four studies, we confirm that the natural preference is also operative in the domain of light. Study 1 confirmed that natural has a consistent meaning when people apply it to light, and that the source (e.g., daylight vs. electrical) and the transformation of the light (e.g., daylight through a blinded window) affects its naturalness. Studies 2 and 3 employed a classic forced-choice decision making paradigm. Study 2 did not confirm the natural preference hypothesis, probably because the artificial option had clear functional benefits over the natural one. Controlling for this confound, our hypothesis was confirmed in Study 3. In Study 4, three light sources were appraised in a randomized experiment. We confirmed that beliefs regarding the effects of light on health and concentration mediate the naturalness-attitude relationship; thus confirming instrumental motives behind the natural preference. Studies 2 and 4, however, suggest that the lower functionality of daylight-based systems may outweigh their perceived instrumental benefits. The weak and statistically non-significant correlations between connectedness to nature and light appraisals in Study 4 speak against an ideational basis for the natural preference as seen in earlier studies. Taken together, our studies provide evidence for a natural preference to be operative in the domain of light. © 2014 Elsevier Ltd.


Van Santen R.A.,TU Eindhoven
Angewandte Chemie - International Edition | Year: 2014

The perfect catalyst: The advances towards the ability to design a catalyst from first principles are explored. Aspects of computational chemistry as well as the kinetics and physical state of the reactive catalyst are discussed. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.


Luque A.,Institute Astrofisica Of Andalucia Iaa | Ebert U.,Centrum Wiskunde and Informatica CWI | Ebert U.,TU Eindhoven
Physical Review E - Statistical, Nonlinear, and Soft Matter Physics | Year: 2011

Branching is an essential element of streamer discharge dynamics. We review the current state of theoretical understanding and recall that branching requires a finite perturbation. We argue that, in current laboratory experiments in ambient or artificial air, these perturbations can only be inherited from the initial state, or they can be due to intrinsic electron-density fluctuations owing to the discreteness of electrons. We incorporate these electron-density fluctuations into fully three-dimensional simulations of a positive streamer in air at standard temperature and pressure. We derive a quantitative estimate for the ratio of branching length to streamer diameter that agrees within a factor of 2 with experimental measurements. As branching without this noise would occur considerably later, if at all, we conclude that the intrinsic stochastic particle noise triggers branching of positive streamers in air at atmospheric pressure. © 2011 American Physical Society.


Antunes D.,TU Eindhoven
IFAC Proceedings Volumes (IFAC-PapersOnline) | Year: 2013

We consider an event-triggered control loop in which the intervals between consecutive events are exponentially distributed. Events cause state jumps following a given distribution and may result from multiple sources. This framework can capture scenarios in which events are sporadic (low rate Poisson processes) or frequent (in the limit Wiener processes). We propose an event-triggered control strategy which guarantees a better transmission rate vs performance trade-off than periodic control and which is optimal when events are sporadic, although approaching the periodic control trade-off when events are frequent. Performance is measured by an average cost. We also propose a different strategy by which transmissions are triggered when a special estimation error norm exceeds a threshold. For the latter strategy, we provide lower and upper bounds for the transmission rate and performance. We discuss the applicability of the results in queuing systems, control of plants disturbed by Wiener processes, and networked control. Copyright © 2013 IFAC.
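
The second, threshold-based strategy can be illustrated with a one-line plant model. The sketch below is a generic send-on-delta simulation, not the paper's optimal scheme, and all parameter values are illustrative; it shows how the transmission rate and the estimation error trade off as the threshold varies:

```python
import numpy as np

def event_triggered_sim(n_steps=10_000, sigma_w=0.1, threshold=0.5, rng=None):
    """Send-on-delta sketch: the sensor transmits the state only when the remote
    estimation error |x - x_hat| exceeds a threshold.
    Returns (average transmission rate, mean squared estimation error)."""
    rng = np.random.default_rng() if rng is None else rng
    x, x_hat = 0.0, 0.0
    sends, sq_err = 0, 0.0
    for _ in range(n_steps):
        x += sigma_w * rng.normal()          # random-walk plant (Wiener-like disturbance)
        if abs(x - x_hat) > threshold:       # event: error norm exceeds the threshold
            x_hat = x
            sends += 1
        sq_err += (x - x_hat) ** 2
    return sends / n_steps, sq_err / n_steps

for thr in (0.2, 0.5, 1.0):
    print(thr, event_triggered_sim(threshold=thr))
```

Raising the threshold lowers the transmission rate and raises the error, which is exactly the rate-versus-performance trade-off discussed above.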


Calabretta N.,TU Eindhoven
Journal of Lightwave Technology | Year: 2012

We present a field-programmable gate array (FPGA)-based label processor for in-band optical labels with a processing time independent of the number of label bits. This allows for implementing an optical packet switching architecture that scales to a large port count without compromising the latency. As a proof of concept, we have employed an FPGA board with a 100 MHz clock to validate the operation of the label processor in a 160 Gb/s optical packet switching system. Experimental results show successful processing of three label bits and 160 Gb/s packet switching with 1 dB power penalty and 470 ns of latency. Projections of the label processor performance using more powerful FPGAs indicate that 60 label bits (2^60 optical addresses) can be processed within 31 ns. © 2012 IEEE.


Broer D.J.,TU Eindhoven
Nature Materials | Year: 2010

The new mechanism of writing stable particle molecular architectures in a chiral-nematic liquid crystal (LC) using a vortex laser beam has the potential to create new applications in liquid-crystal photonics. The new mechanism involves the use of laser light to manipulate the local molecular order of the LC, resulting in the formation of unique liquid-confined quasiparticles with toroidal geometries. The new methodology was developed for forming defect structures at predetermined positions and with a well-controlled director pattern. Chiral-nematic LC molecules that are characterized by an asymmetric center in the molecule organize themselves by following a helicoidal director alignment under non-restricted conditions. A cell construct consisting of glass plates coated with a homeotropic aligning polyimide layer filled with LC materials has been developed to overcome such challenges.


Brouwers H.J.H.,TU Eindhoven
Physical Review E - Statistical, Nonlinear, and Soft Matter Physics | Year: 2011

The packing fraction of geometric random packings of discretely sized particles is addressed in the present paper. In an earlier paper, analytical solutions were presented for the packing fraction of polydisperse geometric packings for discretely sized particles with infinitely large size ratio and the packing of continuously sized particles. Here the packing of discretely sized particles with finite size ratio u is analyzed and compared with empirical data concerning five ternary geometric random close packings of spheres with a size ratio of 2, yielding good agreement. © 2011 American Physical Society.


Lenstra D.,TU Eindhoven
IEEE Photonics Technology Letters | Year: 2013

We theoretically investigate the stability of a single-mode semiconductor laser with weak optical feedback in the short external-delay regime. Although the laser is, in general, very sensitive to feedback-induced excitation of relaxation oscillations, we predict complete insensitivity for these oscillations when the product of oscillation frequency and external-delay time equals a small integer. This may form the basis for relaxation-oscillation-free laser design. © 1989-2012 IEEE.


Peters C.,TU Eindhoven
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2010

The best known non-structural attacks against code-based cryptosystems are based on information-set decoding. Stern's algorithm and its improvements are well optimized and the complexity is reasonably well understood. However, these algorithms only handle codes over F2. This paper presents a generalization of Stern's information-set-decoding algorithm for decoding linear codes over arbitrary finite fields Fq and analyzes the complexity. This result makes it possible to compute the security of recently proposed code-based systems over non-binary fields. As an illustration, ranges of parameters for generalized McEliece cryptosystems using classical Goppa codes over F31 are suggested for which the new information-set-decoding algorithm needs 2^128 bit operations. © 2010 Springer-Verlag.
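
Stern's algorithm over Fq is not reproduced here, but the basic idea of information-set decoding can be illustrated with the simplest (Prange-style) variant over F2: repeatedly pick k coordinates, hope they are error-free, and solve for the message. The sketch below is illustrative only, uses an assumed toy code, and makes no attempt at the optimizations analyzed in the paper:

```python
import numpy as np

def gf2_solve(A, b):
    """Solve A m = b over GF(2) by Gauss-Jordan elimination; return m or None."""
    A = A.copy() % 2
    b = b.copy() % 2
    k = A.shape[0]
    for col in range(k):
        pivot = next((r for r in range(col, k) if A[r, col]), None)
        if pivot is None:
            return None                       # submatrix not invertible
        if pivot != col:
            A[[col, pivot]] = A[[pivot, col]]
            b[[col, pivot]] = b[[pivot, col]]
        for r in range(k):
            if r != col and A[r, col]:
                A[r] ^= A[col]
                b[r] ^= b[col]
    return b

def prange_decode(G, y, t, max_iter=10_000, rng=None):
    """Plain information-set decoding over GF(2): find m with wt(y - m G) <= t,
    where G is a k x n generator matrix (0/1 ints) and y the received word."""
    rng = np.random.default_rng() if rng is None else rng
    k, n = G.shape
    for _ in range(max_iter):
        I = rng.choice(n, size=k, replace=False)     # candidate information set
        m = gf2_solve(G[:, I].T, y[I])               # solve m G_I = y_I
        if m is None:
            continue
        e = (y + m @ G) % 2                          # error pattern if the guess is right
        if e.sum() <= t:
            return m, e
    return None

# Toy example with an assumed random code
rng = np.random.default_rng(0)
G = rng.integers(0, 2, size=(4, 12))
m_true = rng.integers(0, 2, size=4)
e_true = np.zeros(12, dtype=int)
e_true[[2, 7]] = 1
y = (m_true @ G + e_true) % 2
print(prange_decode(G, y, t=2))
```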


Acampora G.,TU Eindhoven
Studies in Fuzziness and Soft Computing | Year: 2013

Historically, the theory of fuzzy logic has been widely used to enable designers of industrial controllers and intelligent decision-making frameworks to model complex systems by expressing their expertise through simple linguistic rules. Nevertheless, the design of a fuzzy system may be hampered by the difficulty of implementing the same system on different hardware architectures, each characterized by its own set of electrical/electronic/programming constraints. These difficulties can become critical when a fuzzy system needs to be deployed in distributed environments populated by a collection of interacting and heterogeneous hardware devices. Fuzzy Markup Language (FML) is an XML-based language whose main aim is to bridge the aforementioned implementation gaps by introducing an abstract and unified approach for designing fuzzy systems in a hardware-independent way. In detail, FML is a novel special-purpose computer language that defines a detailed structure of fuzzy control independent of its legacy representation and improves system designers' capabilities by providing them with a collection of facilities that speed up the whole development process of a centralized or distributed fuzzy system. This chapter introduces the details of FML and an application sample, and it provides some initial aspects of FML-II, an FML grammar extension aimed at modeling Type-II fuzzy systems. © 2013 Springer-Verlag Berlin Heidelberg.


Haseli Y.,TU Eindhoven
Energy Conversion and Management | Year: 2013

The idea is to find out whether 2nd law efficiency optimization may be a suitable trade-off between maximum work output and maximum 1st law efficiency designs for a regenerative gas turbine engine operating on the basis of an open Brayton cycle. The primary emphasis is placed on analyzing the ideal cycle to determine the upper limit of the engine. Explicit relationships are established for the work and entropy production of the ideal cycle. To examine whether a Brayton cycle may operate in the fully reversible regime characterized by the zero-entropy-generation condition, the cycle net work is computed. It is shown that an ideal Brayton-type engine with or without a regenerator cannot operate at the fully reversible limit. Subsequently, the analysis is extended to an irreversible cycle and the relevant relationships are obtained for net work, thermal efficiency, total entropy production, and second law efficiency, defined as the thermal efficiency of the irreversible cycle divided by the thermal efficiency of the ideal cycle. The effects of the compressor and turbine efficiencies, regenerator effectiveness, pressure drop in the cycle and the ratio of maximum-to-minimum cycle temperature on the optimum pressure ratios obtained by maximization of 1st and 2nd law efficiencies and work output are examined. The results indicate that for a regenerator effectiveness greater than 0.82, the 2nd law efficiency optimization may be considered as a trade-off between the maximum work output and the maximum 1st law efficiency. © 2013 Elsevier Ltd. All rights reserved.
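
For readers who want to reproduce the qualitative trade-off, the following is a minimal air-standard sketch of a regenerative Brayton cycle with irreversible compressor and turbine; it ignores pressure drops and the entropy-production bookkeeping of the paper, and all parameter values are assumed for illustration only.

```python
import numpy as np

def regenerative_brayton(r, T1=300.0, T3=1400.0, eta_c=0.85, eta_t=0.9,
                         eps=0.85, gamma=1.4, cp=1005.0):
    """Air-standard regenerative Brayton cycle with compressor/turbine
    isentropic efficiencies; returns specific net work [J/kg] and 1st-law
    efficiency. Illustrative sketch only: no pressure losses, no 2nd-law terms."""
    k = (gamma - 1.0) / gamma
    T2s = T1 * r**k                      # isentropic compressor outlet
    T2 = T1 + (T2s - T1) / eta_c         # actual compressor outlet
    T4s = T3 / r**k                      # isentropic turbine outlet
    T4 = T3 - eta_t * (T3 - T4s)         # actual turbine outlet
    T5 = T2 + eps * (T4 - T2)            # combustor inlet after regenerator
    w_net = cp * ((T3 - T4) - (T2 - T1))
    q_in = cp * (T3 - T5)
    return w_net, w_net / q_in

ratios = np.linspace(2, 30, 200)
works, effs = zip(*(regenerative_brayton(r) for r in ratios))
print("pressure ratio maximizing work:       %.1f" % ratios[int(np.argmax(works))])
print("pressure ratio maximizing efficiency: %.1f" % ratios[int(np.argmax(effs))])
```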


Brouwers H.J.H.,TU Eindhoven
Physical Review E - Statistical, Nonlinear, and Soft Matter Physics | Year: 2013

In previous papers, analytical equations were derived and validated for the packing fraction of crystalline structures consisting of bimodal randomly placed hard spheres. In this article it is demonstrated that the bimodal random packing fraction of spheres with small size ratio can be described by the same type of closed-form equation. This equation contains the volume of the spheres and of the elementary cluster formed by these spheres. The obtained compact analytical expression appears to be in good agreement with a large collection of empirical and computer-generated packing data taken from the literature. By following a statistical approach based on the number of uneven pairs in a binary packing, and the associated packing reduction (compared to the monosized limit), the number fraction of hypostatic spheres is estimated to be 0.548. © 2013 American Physical Society.


Brouwers H.J.H.,TU Eindhoven
Physical Review E - Statistical, Nonlinear, and Soft Matter Physics | Year: 2013

In previous papers analytical expressions were derived and validated for the packing fraction of bimodal hard spheres with small size ratio, applicable to ordered (crystalline) and disordered (random) packings. In the present paper the underlying statistical approach, based on counting the occurrences of uneven pairs, i.e., the fraction of contacts between unequal spheres, is applied to trimodal discretely sized spheres. The packing of such ternary packings can be described by the same type of closed-form equation as the bimodal case. This equation contains the mean volume of the spheres and of the elementary cluster formed by these spheres; for crystalline arrangements this corresponds to the unit cell volume. The obtained compact analytical expression is compared with empirical packing data concerning random close packing of spheres, taken from the literature, comprising ternary binomial and geometric packings; good agreement is obtained. The presented approach is generalized to ordered and disordered packings of multimodal mixes. © 2013 American Physical Society.


Basten R.J.I.,University of Twente | van Houtum G.J.,TU Eindhoven
Surveys in Operations Research and Management Science | Year: 2014

Stocks of spare parts, located at appropriate locations, can prevent long downtimes of technical systems that are used in the primary processes of their users. Since such downtimes are typically very expensive, generally system-oriented service measures are used in spare parts inventory control. Examples of such measures are system availability and the expected number of backorders over all spare parts. This is one of the key characteristics that distinguishes such inventory control from other fields of inventory control. In this paper, we survey models for spare parts inventory control under system-oriented service constraints. We link those models to two archetypical types of spare parts networks: networks of users who maintain their own systems, for instance in the military world, and networks of original equipment manufacturers who service the installed base of products that they have sold. We describe the characteristics of these networks and refer back to them throughout the survey. Our aim is to bring structure into the large body of related literature and to refer to the most important papers. We discuss both the single location and multi-echelon models. We further focus on the use of lateral and emergency shipments, and we refer to other extensions and the coupling of spare parts inventory control models to related problems, such as repair shop capacity planning. We conclude with a short discussion of application of these models in practice. © 2014 Elsevier Ltd.
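
To make the notion of a system-oriented backorder measure concrete, here is a minimal single-item, single-location sketch that computes expected backorders under a base-stock policy with Poisson lead-time demand; the multi-item, multi-echelon models surveyed in the paper (e.g. METRIC-type approaches) go well beyond this, and the numbers are illustrative.

```python
from math import exp, factorial

def poisson_pmf(k, mu):
    return exp(-mu) * mu**k / factorial(k)

def expected_backorders(S, mu, tail=60):
    """EBO(S) = E[(X - S)+] for lead-time demand X ~ Poisson(mu), under a
    continuous-review base-stock policy with base-stock level S."""
    return sum((x - S) * poisson_pmf(x, mu) for x in range(S + 1, S + tail))

# Smallest base-stock level meeting an (illustrative) backorder target.
mu = 2.5        # expected demand during the replenishment lead time
target = 0.05   # allowed expected backorders for this item
S = 0
while expected_backorders(S, mu) > target:
    S += 1
print("base-stock level:", S, "  EBO:", round(expected_backorders(S, mu), 4))
```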


Bouma H.,TU Eindhoven
Gerontechnology | Year: 2014

Due to rapid changes in society driven by developments in technology, the concept of lifelong learning has replaced youth and adolescence as the paramount period of learning in preparation for a whole life integrated in society. People are able to internalize new information and acquire new skills at any age up to the end of life, except in the case of specific memory- or movement-debilitating diseases. Quite a few normal life skills are based on memory. Education is defined here as organized opportunities for learning with well-defined students, goals, and methods. So, in a dynamic society, lifelong learning has to be matched by sufficient opportunities for continuous, lifelong education. However, society as a whole has only partly embraced lifelong learning and still seems predominantly focused on education as preparation and training for a continuing professional career, i.e. education for jobs. Purpose: Working out the concept of lifelong learning and continuous education for ageing people in present society. Method: Analysis of the structural lag that keeps older people from participating in a society characterised by fast innovations, and deriving suggestions for solutions. Results and discussion: A flood of innovations and scientific findings has deeply changed society and continues to do so. It follows that the validity over time of what is learned has shrunk and there is now a need for continuous education and skills training right up to the end of life. Some options are outlined for continuous education, thus supporting lifelong learning. Open learning environments via the Internet combined with the training of digital competences appear to be the basic skill set. It is presently unclear who will take on the position of primary stakeholder to make this happen. As for scientific support, the disciplines of gerontology and technology necessary for developing digital competence, education software, user interfaces, social organization, and massive introduction are indicated.


Van Amstel M.F.,TU Eindhoven
Proceedings - International Computer Software and Applications Conference | Year: 2010

Model-Driven Engineering (MDE) is a software engineering discipline in which models play a central role. One of the key concepts of MDE is model transformation. Because of the crucial role of model transformations in MDE, they have to be treated in a similar way to traditional software artifacts: they have to be used by multiple developers, they have to be maintained according to changing requirements, and they should preferably be reused. It is therefore necessary to define and assess their quality. In this paper, we give two definitions for two different views on the quality of model transformations. We also give some examples of quality assessment techniques for model transformations. The paper concludes with an argument about which type of quality assessment technique is most suitable for either of the views on model transformation quality. © 2010 IEEE.


Van Helden P.,Sasol Limited | Van Den Berg J.-A.,Sasol Limited | Weststrate C.J.,TU Eindhoven
ACS Catalysis | Year: 2012

Density functional theory (DFT) calculations and temperature programmed desorption (TPD) experiments were performed to study the adsorption of hydrogen on the Co(111) and Co(100) surfaces. On the Co(111) surface, hydrogen adsorption is coverage dependent and the calculated adsorption energies are very similar to those on the Co(0001) surface. The experimental adsorption saturation coverage on the Co(111)/(0001) surface is θ_max ≈ 0.5 ML, although DFT predicts θ_max ≈ 1.0 ML. DFT calculations indicate that preadsorbed hydrogen will kinetically impede the adsorption process as the coverage approaches θ = 0.5 ML, giving rise to this difference. Adsorption on Co(100) is coverage independent up to θ = 1.00 ML, contrasting with observations on the Ni(100) surface. Hydrogen atoms have low barriers of diffusion on both the Co(111) and Co(100) surfaces. A microkinetic analysis of desorption, simulating the expected TPD experiments, indicated that on the Co(111) surface two TPD peaks are expected, while on the Co(100) only one peak is expected. Low-coverage adsorption energies of between 0.97 and 1.1 eV are obtained from the TPD experiment on a smooth single crystal of Co(0001), in line with the DFT results. Defects play an important role in the adsorption process. Further calculations on the Co(211) and Co(221) surfaces have been performed to model the effects of step and defect sites, indicating that steps and defects will expose a broad range of adsorption sites with varying (mostly less favorable) adsorption energies. The effect of defects has been studied by TPD by sputtering of the Co crystal surface. Defects accelerate the adsorption of hydrogen by providing alternative, almost barrierless pathways, making it possible to increase the coverage on the Co(111)/(0001) surface to above θ = 0.50 ML. The presence of defects at a high concentration will give rise to adsorption sites with much lower desorption activation energies, resulting in broad low-temperature TPD features. © 2012 American Chemical Society.
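
As a hedged illustration of how a TPD peak arises from recombinative (second-order) desorption, the sketch below integrates a generic Polanyi-Wigner rate equation with an assumed prefactor and activation energy; it is not the authors' DFT-based microkinetic model, and all numbers are illustrative.

```python
import numpy as np

kB = 8.617e-5   # Boltzmann constant [eV/K]

def tpd_second_order(theta0, Ea, nu=1e13, beta=2.0, T0=150.0, T1=600.0, dT=0.01):
    """Integrate the second-order Polanyi-Wigner equation
    d(theta)/dT = -(nu/beta) * theta**2 * exp(-Ea/(kB*T))
    for a linear heating ramp T = T0 + beta*t; returns (T, desorption rate)."""
    T = np.arange(T0, T1, dT)
    theta = np.empty_like(T)
    rate = np.empty_like(T)
    theta[0] = theta0
    for i in range(len(T) - 1):
        rate[i] = nu * theta[i]**2 * np.exp(-Ea / (kB * T[i]))
        theta[i + 1] = max(theta[i] - rate[i] * dT / beta, 0.0)
    rate[-1] = nu * theta[-1]**2 * np.exp(-Ea / (kB * T[-1]))
    return T, rate

# Peak shifts to lower temperature with increasing initial coverage, the classic
# signature of second-order (recombinative) desorption.
for theta0 in (0.1, 0.25, 0.5):
    T, r = tpd_second_order(theta0, Ea=1.0)
    print("theta0 = %.2f  ->  TPD peak near %.0f K" % (theta0, T[int(np.argmax(r))]))
```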


van Schijndel A.W.M.,TU Eindhoven
Journal of Building Performance Simulation | Year: 2014

Multi-domain modelling provides great opportunities for possible synergy between the building simulation domain and other scientific and technological domains. Although domains may have quite different models, they often use common mathematical representations based on differential algebraic equations (DAEs) and/or ordinary differential equations (ODEs). This paper reviews the use of S-Functions in Simulink for DAE and ODE modelling in building simulation and its potential for multi-domain applications. It is concluded that ODEs are directly implementable using S-Functions in Simulink. DAEs are indirectly implementable through a manual process of integrating Dymola/Modelica models. Examples from the literature confirm the great opportunities for combined building thermal, geothermal, electrical and grid performance simulation. © 2013 International Building Performance Simulation Association (IBPSA).


White T.J.,Air Force Research Lab | Broer D.J.,TU Eindhoven
Nature Materials | Year: 2015

Liquid crystals are the basis of a pervasive technology of the modern era. Yet, as the display market becomes commoditized, researchers in industry, government and academia are increasingly examining liquid crystalline materials in a variety of polymeric forms and discovering their fascinating and useful properties. In this Review, we detail the historical development of liquid crystalline polymeric materials, with emphasis on the thermally and photogenerated macroscale mechanical responses-such as bending, twisting and buckling-and on local-feature development (primarily related to topographical control). Within this framework, we elucidate the benefits of liquid crystallinity and contrast them with other stimuli-induced mechanical responses reported for other materials. We end with an outlook of existing challenges and near-term application opportunities. © 2015 Macmillan Publishers Limited.


Kamp L.P.J.,TU Eindhoven
Physics of Fluids | Year: 2012

Deviations from two-dimensionality of a shallow flow that is dominated by bottom friction are quantified in terms of the spatial distribution of strain and vorticity as described by the Okubo-Weiss function. This result is based on a Poisson equation for the pressure in a quasi-horizontal (primary) flow. It is shown that the Okubo-Weiss function specifies vertical pressure gradients, which for their part drive vertical (secondary) motion. An asymptotic expansion of these gradients based on the smallness of the vertical to horizontal scale ratio demonstrates that the sign and magnitude of secondary circulation inside the fluid layer is dictated by the signs and magnitude of the Okubo-Weiss function. As a consequence of this, secondary motion as well as nonzero horizontal divergence do also depend on the strength, i.e., the Reynolds number of the primary flow. The theory is exemplified by two generic vortical structures (monopolar and dipolar structures). Most importantly, the theory can be applied to more complicated turbulent shallow flows in order to assess the degree of two-dimensionality using measurements of the free-surface flow only. © 2012 American Institute of Physics.
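
A minimal sketch of how the Okubo-Weiss field can be evaluated from a gridded two-dimensional velocity field is given below, using the common convention Q = s_n^2 + s_s^2 - ω^2 (Q < 0 where vorticity dominates); the paper's pressure-Poisson analysis and asymptotic expansion are not reproduced, and the test vortex is illustrative.

```python
import numpy as np

def okubo_weiss(u, v, dx, dy):
    """Okubo-Weiss field Q = s_n**2 + s_s**2 - omega**2 from a 2D velocity
    field sampled on a regular grid (arrays indexed as [y, x])."""
    dudy, dudx = np.gradient(u, dy, dx)
    dvdy, dvdx = np.gradient(v, dy, dx)
    s_n = dudx - dvdy      # normal strain
    s_s = dvdx + dudy      # shear strain
    omega = dvdx - dudy    # vorticity
    return s_n**2 + s_s**2 - omega**2

# Illustrative monopolar vortex: Q < 0 in the rotation-dominated core,
# Q > 0 in the surrounding strain-dominated ring.
x = np.linspace(-2, 2, 201)
y = np.linspace(-2, 2, 201)
X, Y = np.meshgrid(x, y)
u = -Y * np.exp(-(X**2 + Y**2))
v = X * np.exp(-(X**2 + Y**2))
Q = okubo_weiss(u, v, x[1] - x[0], y[1] - y[0])
print("Q at vortex core:     %.3f (negative)" % Q[100, 100])
print("max Q in strain ring: %.3f (positive)" % Q.max())
```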


Heise A.,Dublin City University | Palmans A.R.A.,TU Eindhoven
Advances in Polymer Science | Year: 2010

Lipases show high activity in the polymerization of a range of monomers using ring-opening polymerization and polycondensation. The range of polymer structures from this enzymatic polymerization can be further increased by combination with chemical methods. This paper reviews the developments of the last 5-8 years in chemoenzymatic strategies towards polymeric materials. Special emphasis is on the synthesis of polymer architectures like block and graft copolymers and polymer networks. Moreover, the combination of chemical and enzymatic catalysis for the synthesis of unique chiral polymers is highlighted. © 2011 Springer-Verlag Berlin Heidelberg.


Van Herk A.M.,TU Eindhoven
Advances in Polymer Science | Year: 2010

In this introductory chapter, a brief overview of emulsion polymerization and miniemulsion polymerization principles is given in relation to the preparation of hybrid latex particles. An account is presented of the early history of the preparation of hybrid latex particles, with an emphasis on hybrid latices containing organic and inorganic material phases. The two approaches for obtaining encapsulated inorganic particles are discussed: the chemical method, in which polymerization takes place in the presence of inorganic particles, and the physical method, whereby latex particles are deposited on the surface of inorganic particles by heterocoagulation. A new classification scheme for the preparation of hybrid latex particles and corresponding higher-order nanostructures is given in this paper. This classification is partially based on a discussion during the International Polymer Colloids Group meeting in Italy in 2009. © 2010 Springer-Verlag Berlin Heidelberg.


Hoffken J.I.,TU Eindhoven
Renewable and Sustainable Energy Reviews | Year: 2014

Associated with being green, clean and small-scale, small hydroelectric power (SHP) projects generally enjoy a positive image. In India SHP promises answers to issues such as meeting a growing electricity demand, facilitating lucrative investment opportunities, and climate change considerations. The features of being green, clean and small-scale have contributed to the assumption of SHP as an essentially uncontested technology. Empirical studies questioning this assumption are scarce. Research on SHP has so far remained rather hypothetical and policy-level-focused. This article investigates the social acceptability of small hydroelectric plants in India by empirically looking at how people engage with these plants. It thereby underlines the importance of studying technologies in their local context. Based on a detailed case study analysis of two SHP projects in Karnataka, India, the article shows how SHP projects are contested on the local level. The engagement of local people played a crucial role in the contestation of the plants and led to significant and unexpected outcomes and effects. The article highlights the importance of having a broader perspective in the development of SHP that goes beyond a mindset of technological fixes. This includes taking account of existing water infrastructure and a broader range of water users. The article shows that the implementation of SHP projects does not take place in a void. Rather, complex existing physical and social realities on the ground matter for the development and performance of SHP. © 2014 Elsevier Ltd.


Frens J.,TU Eindhoven
Proceedings of the 5th International Conference on Tangible Embedded and Embodied Interaction, TEI'11 | Year: 2011

Central to this studio is the question of how to design for rich and embodied (meaningful) interaction. We approach this question from a designerly perspective and find inspiration in the theory of ecological perception and in the domain of tangible and embodied interaction. As we aim for a meaningful interaction style that is firmly rooted in human experience and in diverse human skills, we present cardboard modeling as a designerly exploration tool that offers experiential insight into the solution domain of a given interaction design challenge. The studio has two distinct parts: part one aims at familiarizing the participants with the cardboard modeling technique, and part two emphasizes the use of the cardboard modeling technique as an instrument to explore meaningful interaction. During the second part of the studio, the quality of the interaction solutions is also discussed through presentations. The studio runs from 9.30h to approximately 17.00h. © 2011 ACM.


Morris P.D.,University of Sheffield | Van De Vosse F.N.,TU Eindhoven | Lawford P.V.,University of Sheffield | Hose D.R.,University of Sheffield | Gunn J.P.,University of Sheffield
JACC: Cardiovascular Interventions | Year: 2015

Fractional flow reserve (FFR) is the "gold standard" for assessing the physiological significance of coronary artery disease during invasive coronary angiography. FFR-guided percutaneous coronary intervention improves patient outcomes and reduces stent insertion and cost; yet, due to several practical and operator related factors, it is used in <10% of percutaneous coronary intervention procedures. Virtual fractional flow reserve (vFFR) is computed using coronary imaging and computational fluid dynamics modeling. vFFR has emerged as an attractive alternative to invasive FFR by delivering physiological assessment without the factors that limit the invasive technique. vFFR may offer further diagnostic and planning benefits, including virtual pullback and virtual stenting facilities. However, there are key challenges that need to be overcome before vFFR can be translated into routine clinical practice. These span a spectrum of scientific, logistic, commercial, and political areas. The method used to generate 3-dimensional geometric arterial models (segmentation) and selection of appropriate, patient-specific boundary conditions represent the primary scientific limitations. Many conflicting priorities and design features must be carefully considered for vFFR models to be sufficiently accurate, fast, and intuitive for physicians to use. Consistency is needed in how accuracy is defined and reported. Furthermore, appropriate regulatory and industry standards need to be in place, and cohesive approaches to intellectual property management, reimbursement, and clinician training are required. Assuming successful development continues in these key areas, vFFR is likely to become a desirable tool in the functional assessment of coronary artery disease. © 2015 American College of Cardiology Foundation.


Van De Stolpe A.,HIGH-TECH | Den Toonder J.,TU Eindhoven
Lab on a Chip - Miniaturisation for Chemistry and Biology | Year: 2013

The concept of "Organs-on-Chips" has recently evolved and has been described as 3D (mini-) organs or tissues consisting of multiple and different cell types interacting with each other under closely controlled conditions, grown in a microfluidic chip, and mimicking the complex structures and cellular interactions in and between different cell types and organs in vivo, enabling the real time monitoring of cellular processes. In combination with the emerging iPSC (induced pluripotent stem cell) field this development offers unprecedented opportunities to develop human in vitro models for healthy and diseased organ tissues, enabling the investigation of fundamental mechanisms in disease development, drug toxicity screening, drug target discovery and drug development, and the replacement of animal testing. Capturing the genetic background of the iPSC donor in the organ or disease model carries the promise to move towards "in vitro clinical trials", reducing costs for drug development and furthering the concept of personalized medicine and companion diagnostics. During the Lorentz workshop (Leiden, September 2012) an international multidisciplinary group of experts discussed the current state of the art, available and emerging technologies, applications and how to proceed in the field. Organ-on-a-chip platform technologies are expected to revolutionize cell biology in general and drug development in particular. © The Royal Society of Chemistry 2013.


Thelander C.,Lund University | Caroff P.,CNRS Institute of Electronics, Microelectronics and Nanotechnology | Plissard S.,CNRS Institute of Electronics, Microelectronics and Nanotechnology | Plissard S.,TU Eindhoven | And 2 more authors.
Nano Letters | Year: 2011

We report a systematic study of the relationship between crystal quality and electrical properties of InAs nanowires grown by MOVPE and MBE, with crystal structure varying from wurtzite to zinc blende. We find that mixtures of these phases can exhibit up to 2 orders of magnitude higher resistivity than single-phase nanowires, with a temperature-activated transport mechanism. However, it is also found that defects in the form of stacking faults and twin planes do not significantly affect the resistivity. These findings are important for nanowire-based devices, where uncontrolled formation of particular polytype mixtures may lead to unacceptable device variability. © 2011 American Chemical Society.


Rodriguez S.R.K.,HIGH-TECH | Gomez Rivas J.,TU Eindhoven
Optics Express | Year: 2013

We demonstrate the strong coupling of surface lattice resonances (SLRs) - hybridized plasmonic/photonic modes in metallic nanoparticle arrays - to excitons in Rhodamine 6G molecules. We investigate experimentally angle-dependent extinction spectra of silver nanorod arrays with different lattice constants, with and without the Rhodamine 6G molecules. The properties of the coupled modes are elucidated with simple Hamiltonian models. At low momenta, plasmon-exciton-polaritons - the mixed SLR/exciton states - behave as free-quasiparticles with an effective mass, lifetime, and composition tunable via the periodicity of the array. The results are relevant for the design of plasmonic systems aimed at reaching the quantum degeneracy threshold, wherein a single quantum state becomes macroscopically populated. © 2013 Optical Society of America.
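
The simple Hamiltonian models mentioned above can be illustrated with a minimal two-level coupled-oscillator sketch: diagonalizing H = [[E_SLR, g], [g, E_X]] yields the upper and lower plasmon-exciton-polariton branches and their exciton fractions. The energies and coupling strength below are assumed values for illustration, not the fitted parameters of the reported experiments.

```python
import numpy as np

def polariton_branches(E_slr, E_x, g):
    """Eigenvalues and exciton fractions of the 2x2 coupled-oscillator
    Hamiltonian H = [[E_slr, g], [g, E_x]] for a set of SLR energies (eV)."""
    upper, lower, exc_frac_lower = [], [], []
    for Ep in np.atleast_1d(E_slr):
        w, v = np.linalg.eigh(np.array([[Ep, g], [g, E_x]]))
        lower.append(w[0])
        upper.append(w[1])
        exc_frac_lower.append(v[1, 0]**2)   # |exciton amplitude|^2 in lower branch
    return np.array(upper), np.array(lower), np.array(exc_frac_lower)

# Illustrative: SLR tuned through an exciton at 2.3 eV with coupling g = 0.1 eV.
E_slr = np.linspace(2.0, 2.6, 7)
up, lo, fx = polariton_branches(E_slr, E_x=2.3, g=0.1)
print("Rabi splitting at zero detuning: %.2f eV" % (2 * 0.1))
for Ep, u, l, f in zip(E_slr, up, lo, fx):
    print("E_slr=%.2f  UP=%.3f  LP=%.3f  exciton fraction (LP)=%.2f" % (Ep, u, l, f))
```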


van der Hofstad R.,TU Eindhoven
Random Structures and Algorithms | Year: 2013

We study the critical behavior of inhomogeneous random graphs in the so-called rank-1 case, where edges are present independently but with unequal edge occupation probabilities. The edge occupation probabilities are moderated by vertex weights, and are such that the degree of vertex i is close in distribution to a Poisson random variable with parameter w_i, where w_i denotes the weight of vertex i. We choose the weights such that the weight of a uniformly chosen vertex converges in distribution to a limiting random variable W. In this case, the proportion of vertices with degree k is close to the probability that a Poisson random variable with random parameter W takes the value k. We pay special attention to the power-law case, i.e., the case where ℙ(W ≥ k) is proportional to k^{-(τ-1)} for some power-law exponent τ > 3, a property which is then inherited by the asymptotic degree distribution. We show that the critical behavior depends sensitively on the properties of the asymptotic degree distribution moderated by the asymptotic weight distribution W. Indeed, when ℙ(W > k) ≤ c k^{-(τ-1)} for all k ≥ 1 and some τ > 4 and c > 0, the largest critical connected component in a graph of size n is of order n^{2/3}, as it is for the critical Erdős-Rényi random graph. When, instead, ℙ(W > k) = c k^{-(τ-1)}(1+o(1)) for k large and some τ ∈ (3,4) and c > 0, the largest critical connected component is of the much smaller order n^{(τ-2)/(τ-1)}. © 2012 Wiley Periodicals, Inc.
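
A hedged sketch of a closely related rank-1 construction (a Chung-Lu-type graph with Pareto weights) is given below; it is meant only to make the model concrete, and observing the stated critical scaling exponents would require tuning to criticality, much larger graphs and averaging over realizations.

```python
import random
from collections import deque

def rank1_graph(weights, seed=0):
    """Rank-1 inhomogeneous random graph: edge {i, j} is present independently
    with probability min(1, w_i * w_j / sum(w)) (Chung-Lu style)."""
    rng = random.Random(seed)
    n, total = len(weights), sum(weights)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < min(1.0, weights[i] * weights[j] / total):
                adj[i].append(j)
                adj[j].append(i)
    return adj

def largest_component(adj):
    """Size of the largest connected component via breadth-first search."""
    seen, best = set(), 0
    for s in range(len(adj)):
        if s in seen:
            continue
        seen.add(s)
        size, queue = 0, deque([s])
        while queue:
            v = queue.popleft()
            size += 1
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        best = max(best, size)
    return best

# Pareto-type weights with tail exponent tau: P(W > k) ~ k**(-(tau - 1)).
n, tau = 2000, 3.5
rng = random.Random(42)
weights = [rng.paretovariate(tau - 1) for _ in range(n)]
print("largest connected component:", largest_component(rank1_graph(weights)), "of", n)
```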


Barendregt W.,Gothenburg University | Bekker T.M.,TU Eindhoven
Computers and Education | Year: 2011

Employing a mixed-method explorative approach, this study examined the in situ use of and opinions about an educational computer game for learning English, introduced in three schools offering different levels of freedom to choose school activities. The results indicated that the general behaviour of the children with the game was very different for each of the schools, while there were no significant differences in subjective opinions or previous computer game experience as measured with a questionnaire. The gaming records and interviews showed that children do enjoy playing the game in comparison with other formal learning activities, but appreciate it less as a leisure-time activity. Furthermore, it appears that children used to teacher-initiated activities tend to depend on their teacher's directions for how and when to play. The study highlights the level of choice as one of the important aspects to consider when introducing a game in the classroom. The study also points out some suggestions for the design of educational games, such as providing communication possibilities between players and integrating fast-paced motor-skill-based games with learning content in a meaningful way. © 2010 Elsevier Ltd. All rights reserved.


Sijs J.,TNO | Lazar M.,TU Eindhoven
IEEE Transactions on Automatic Control | Year: 2012

To reduce the amount of data transfer in networked systems, measurements are usually taken only when an event occurs rather than at each synchronous sample instant. However, this complicates estimation problems considerably, especially in the situation when no measurement is received anymore. The goal of this paper is therefore to develop a state estimator that can successfully cope with event-based measurements and attains an asymptotically bounded error-covariance matrix. To that end, a general mathematical description of event sampling is proposed. This description is used to set up a state estimator with a hybrid update, i.e., when an event occurs the estimated state is updated using the measurement, while at synchronous instants the update is based on knowledge that the sensor value lies within a bounded subset of the measurement space. Furthermore, to minimize the computational complexity of the estimator, the algorithm is implemented using a sum-of-Gaussians approach. The benefits of this implementation are demonstrated by an illustrative example of state estimation with event sampling. © 2012 IEEE.


Borden M.J.,University of Texas at Austin | Verhoosel C.V.,TU Eindhoven | Scott M.A.,University of Texas at Austin | Hughes T.J.R.,University of Texas at Austin | Landis C.M.,University of Texas at Austin
Computer Methods in Applied Mechanics and Engineering | Year: 2012

In contrast to discrete descriptions of fracture, phase-field descriptions do not require numerical tracking of discontinuities in the displacement field. This greatly reduces implementation complexity. In this work, we extend a phase-field model for quasi-static brittle fracture to the dynamic case. We introduce a phase-field approximation to the Lagrangian for discrete fracture problems and derive the coupled system of equations that govern the motion of the body and evolution of the phase-field. We study the behavior of the model in one dimension and show how it influences material properties. For the temporal discretization of the equations of motion, we present both a monolithic and staggered time integration scheme. We study the behavior of the dynamic model by performing a number of two and three dimensional numerical experiments. We also introduce a local adaptive refinement strategy and study its performance in the context of locally refined T-splines. We show that the combination of the phase-field model and local adaptive refinement provides an effective method for simulating fracture in three dimensions. © 2012 Elsevier B.V.


Van der Aalst W.,TU Eindhoven
IEEE Computational Intelligence Magazine | Year: 2010

Processes are everywhere. Organizations have business processes to manufacture products, provide services, purchase goods, handle applications, etc. Also in our daily lives we are involved in a variety of processes, for example when we use our car or when we book a trip via the Internet. Although such operational processes are omnipresent, they are at the same time intangible. Unlike a product or a piece of data, processes are less concrete because of their dynamic nature. However, more and more information about these processes is captured in the form of event logs. Contemporary systems ranging from copiers and medical devices to enterprise information systems and cloud infrastructures record events. These events can be used to make processes visible. Using process mining techniques it is possible to discover processes. This provides the insights necessary to manage, control, and improve processes. Process mining has been successfully applied in a variety of domains ranging from healthcare and e-business to high-tech systems and auditing. Despite these successes, there are still many challenges as process discovery shows that the real processes are more "spaghetti-like" than people like to think. It is still very difficult to capture the complex reality in a suitable model. Given the nature of these challenges, techniques originating from Computational Intelligence may assist in the discovery of complex processes. © 2006 IEEE.
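
As a minimal illustration of the kind of event-log analysis that process mining starts from, the sketch below counts directly-follows relations in a toy log; real discovery algorithms (alpha-, heuristic-, fuzzy- or genetic miners) build far richer models on top of such abstractions, and the log and activity names are invented for illustration.

```python
from collections import Counter

def directly_follows(event_log):
    """Count how often activity a is directly followed by activity b
    across all traces of an event log (a list of activity sequences)."""
    counts = Counter()
    for trace in event_log:
        for a, b in zip(trace, trace[1:]):
            counts[(a, b)] += 1
    return counts

# Toy event log: each trace is one case of a hypothetical order-handling process.
log = [
    ["register", "check", "approve", "ship"],
    ["register", "check", "reject"],
    ["register", "check", "approve", "ship"],
    ["register", "approve", "ship"],   # deviating case: the "spaghetti" begins here
]
for (a, b), c in sorted(directly_follows(log).items(), key=lambda kv: -kv[1]):
    print(f"{a:>9} -> {b:<8} {c}")
```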


Smulders P.,TU Eindhoven
IEEE Communications Magazine | Year: 2013

This work addresses the blueprint of short-range wireless systems supporting data rates of 100 Gb/s and beyond. A number of usage models are identified for such ultra-high data rates, including wireless communication within electronic equipment, enabling possibilities such as wireless reconfigurable chip-to-chip communication. The 300 GHz band, spanning about 55 GHz of contiguous bandwidth, is identified as the most suitable candidate to accommodate these bandwidth-demanding applications. Furthermore, the main bottleneck issues are discussed: power consumption and antenna integration. With the help of basic link budget considerations, we indicate the technical feasibility of the proposed concept with compact low-cost antenna solutions. Finally, we discuss the overall system architecture to be standardized and indicate a number of key research topics. © 1979-2012 IEEE.
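
The basic link-budget considerations can be sketched as follows: a free-space (Friis) budget at 300 GHz over 55 GHz of bandwidth, with assumed transmit power, antenna gains and noise figure that are illustrative rather than the values used in the article.

```python
import math

def shannon_capacity_gbps(f_hz, bw_hz, dist_m, pt_dbm, gt_dbi, gr_dbi, nf_db):
    """Free-space (Friis) link budget and the resulting Shannon capacity."""
    c = 3e8
    fspl_db = 20 * math.log10(4 * math.pi * dist_m * f_hz / c)   # free-space path loss
    pr_dbm = pt_dbm + gt_dbi + gr_dbi - fspl_db                  # received power
    noise_dbm = -174 + 10 * math.log10(bw_hz) + nf_db            # thermal noise floor
    snr_db = pr_dbm - noise_dbm
    snr = 10 ** (snr_db / 10)
    return bw_hz * math.log2(1 + snr) / 1e9, snr_db

# Illustrative chip-to-chip vs. in-room links at 300 GHz, 55 GHz bandwidth.
for d in (0.05, 1.0, 3.0):
    cap, snr = shannon_capacity_gbps(300e9, 55e9, d,
                                     pt_dbm=0, gt_dbi=20, gr_dbi=20, nf_db=10)
    print(f"d = {d:4.2f} m:  SNR = {snr:5.1f} dB,  Shannon capacity = {cap:6.1f} Gb/s")
```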


Arentze T.A.,TU Eindhoven
IEEE Transactions on Intelligent Transportation Systems | Year: 2013

Providing personalized advice is an important objective in the development of advanced traveler information systems. In this paper, a Bayesian method to incorporate learning of users' personal travel preferences in a multimodal routing system is proposed. The system learns preference parameters incrementally based on travel choices a user makes. Existing Bayesian inference methods require too much computation time for the learning problem that we are dealing with here. Therefore, an approximation method is developed, which is based on sequential processing of preference parameters and systematic sampling of the parameter space. The data of repetitive travel choices of a representative sample of individuals are used to test the system. The results indicate that the system rapidly adapts to a user and learns his or her preferences effectively. The efficiency of the algorithm allows the system to handle realistically sized learning problems with short response times even when many users are to be simultaneously processed. It is therefore concluded that the approach is feasible; problems for future research are identified. © 2000-2011 IEEE.
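
A minimal sketch of incremental Bayesian preference learning is shown below for a single taste parameter on a discretized grid with a binary logit choice likelihood; the paper's approximation handles many parameters via sequential processing and systematic sampling, which this toy example does not attempt, and all numbers are illustrative.

```python
import numpy as np

def update_posterior(prior, beta_grid, x_chosen, x_rejected):
    """One Bayesian update of the grid posterior over a scalar taste parameter
    beta, given a binary logit choice between alternatives with scalar
    attribute values x_chosen and x_rejected (utility = beta * x)."""
    like = 1.0 / (1.0 + np.exp(-beta_grid * (x_chosen - x_rejected)))
    post = prior * like
    return post / post.sum()

beta_grid = np.linspace(-2, 2, 401)
posterior = np.ones_like(beta_grid) / beta_grid.size   # flat prior

# Simulate repeated binary travel choices from a user with true beta = 0.8.
true_beta = 0.8
rng = np.random.default_rng(0)
for _ in range(50):
    x_a, x_b = rng.normal(size=2)                      # attribute of alternatives A, B
    p_a = 1.0 / (1.0 + np.exp(-true_beta * (x_a - x_b)))
    if rng.random() < p_a:
        posterior = update_posterior(posterior, beta_grid, x_a, x_b)
    else:
        posterior = update_posterior(posterior, beta_grid, x_b, x_a)

print("posterior mean of beta: %.2f (true value 0.80)" % np.sum(beta_grid * posterior))
```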


Out-of-home leisure activities are often conducted jointly by individuals, implying that location and travel choices made for these activities are the result of a group interaction. Current utility-theoretic approaches assume an aggregated group utility function and hence ignore aspects of the group decision-making process. In this study, an empirical model of joint-activity choice is developed that, in contrast, assumes a negotiation process. A social utility function describes how individuals deal with preference differences in the group. The model is estimated based on an experimental activity-travel choice task in which group settings are mimicked. A sample (N = 315) from a national panel of individuals participated in the experiment. Estimation results based on a discrete mixture model show that individuals display a preference for locations in which losses are equally distributed in the group, even when this comes at the cost of the total group outcome. Results further show that the social utility function is asymmetric: compromise solutions are favored more strongly when consequences relate to costs (travel costs) than when they concern rewards (attractiveness). Furthermore, there is considerable heterogeneity in how people make social trade-offs. It is concluded that the model offers new insights into location preferences for joint activities that should be taken into account in spatial choice models and accessibility analysis. © 2015 Elsevier B.V.


Vaesen K.,TU Eindhoven
PLoS ONE | Year: 2012

The idea that demographic change may spur or slow down technological change has become widely accepted among evolutionary archaeologists and anthropologists. Two models have been particularly influential in promoting this idea: a mathematical model by Joseph Henrich, developed to explain the Tasmanian loss of culture during the Holocene; and an agent-based adaptation thereof, devised by Powell et al. to explain the emergence of modern behaviour in the Late Pleistocene. However, the models in question make rather strong assumptions about the distribution of skills among social learners and about the selectivity of social learning strategies. Here I examine the behaviour of these models under more conservative and, on empirical and theoretical grounds, equally reasonable assumptions. I show that, some qualifications notwithstanding, Henrich's model largely withstands my robustness tests. The model of Powell et al., in contrast, does not, a finding that warrants a fair amount of skepticism towards Powell et al.'s explanation of the Upper Paleolithic transition. More generally, my evaluation of the accounts of Henrich and of Powell et al. helpfully clarifies which inferences their popular models do and do not support. © 2012 Krist Vaesen.


Demerouti E.,TU Eindhoven
Journal of Occupational Health Psychology | Year: 2012

This study tested the positive spillover-crossover model among dual-earner couples. Job resources of 1 partner were predicted to spill over to his/her individual energy, that is, reduced fatigue and increased motivation. Consequently, individual energy was predicted to influence one's partner's family resources, which were hypothesized to influence the partner's level of individual energy. Work-self facilitation and family-self facilitation were hypothesized to mediate the favorable effects of job and home resources, respectively, on individual energy. A sample of 131 couples participated in the study. Structural equation modeling analyses showed that job resources influence one's own individual energy through work-self facilitation. Consequently, the levels of individual energy positively influence one's partner's perception of home resources, which eventually spill over to the partner's individual energy through experienced family-self facilitation. Work-self and family-self facilitation are useful in explaining why job and family resources may enhance the levels of energy that individuals invest in different life domains. © 2012 American Psychological Association.


In this article, I explore select case studies of Parkinson patients treated with deep brain stimulation (DBS) in light of the notions of alienation and authenticity. While the literature on DBS has so far neglected the issues of authenticity and alienation, I argue that interpreting these cases in terms of these concepts raises new issues not only for the philosophical discussion of the neuro-ethics of DBS, but also for the psychological and medical approach to patients under DBS. In particular, I suggest that the experience of alienation and authenticity varies from patient to patient with DBS. For some, alienation can be brought about by neurointerventions because patients no longer feel like themselves. On the other hand, it seems alienation can also be cured by DBS, as other patients experience their state of mind as authentic under treatment and retrospectively regard their former lives without stimulation as alienated. I argue that we must do further research on the relevance of authenticity and alienation to patients treated with DBS in order to gain a deeper philosophical understanding, and to develop the best evaluative criterion for the behavior of DBS patients. © 2011 The Author(s).


Cirillo E.N.M.,University of Rome La Sapienza | Muntean A.,TU Eindhoven
Physica A: Statistical Mechanics and its Applications | Year: 2013

We investigate the motion of pedestrians through obscure corridors where the lack of visibility (due to smoke, fog, darkness, etc.) hides the precise position of the exits. We focus our attention on a set of basic mechanisms, which we assume to be governing the dynamics at the individual level. Using a lattice model, we explore the effects of non-exclusion on the overall exit flux (evacuation rate). More precisely, we study the effect of the buddying threshold (of no-exclusion per site) on the dynamics of the crowd and investigate to which extent our model confirms the following pattern revealed by investigations on real emergencies: If the evacuees tend to cooperate and act altruistically, then their collective action tends to favor the occurrence of disasters. The research reported here opens many fundamental questions and should be seen therefore as a preliminary investigation of the very complex behavior of the people and their motion in dark regions. © 2013 Elsevier B.V. All rights reserved.


Vandamme L.K.J.,TU Eindhoven
Fluctuation and Noise Letters | Year: 2011

The resistance and noise of films prepared with poor contacts are dominated by the contact interface, whereas for perfect contacts the resistance and noise stem from outside the contact interface region. The proposed test pattern to study the different contributions uses one mask. It permits two- and four-point measurements, enabling the detection of a weak contribution from outside the contact interface on top of a strong interface contribution. The resistance and noise for poor and perfect contacts are calculated between pairs of circular top electrodes of equal diameter 2r at distances L with L/2r = 10. The dependences of resistance and noise on the contact diameter are quite different for perfect and poor contacts. The 1/f noise of films taken from the literature is compared in the noise figure of merit K = C_us [cm^2]/R_sh [Ω]. K is the ratio of the 1/f noise, normalized for bias, frequency and unit surface, to the sheet resistance. Materials can be classified based on K-values. Very high K-values point to inhomogeneous electric fields on a microscopic scale (percolation conduction). The contact interface 1/f noise and specific contact resistance are characterized by C_ust [cm^2] and ρ_ct [Ω cm^2]. Reviews of K for films and C_ust for interfaces show that 1/f noise is a more sensitive tool than merely the resistance parameters R_sh and ρ_ct. © 2011 World Scientific Publishing Company.


Janssen R.A.J.,TU Eindhoven | Nelson J.,Imperial College London
Advanced Materials | Year: 2013

The power conversion efficiency of the most efficient organic photovoltaic (OPV) cells has recently increased to over 10%. To enable further increases, the factors limiting the device efficiency in OPV must be identified. In this review, the operational mechanism of OPV cells is explained and the detailed balance limit to photovoltaic energy conversion, as developed by Shockley and Queisser, is outlined. The various approaches that have been developed to estimate the maximum practically achievable efficiency in OPV are then discussed, based on empirical knowledge of organic semiconductor materials. Subsequently, approaches made to adapt the detailed balance theory to incorporate some of the fundamentally different processes in organic solar cells that originate from using a combination of two complementary, donor and acceptor, organic semiconductors using thermodynamic and kinetic approaches are described. The more empirical formulations to the efficiency limits provide estimates of 10-12%, but the more fundamental descriptions suggest limits of 20-24% to be reachable in single junctions, similar to the highest efficiencies obtained for crystalline silicon p-n junction solar cells. Closing this gap sets the stage for future materials research and development of OPV. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.


Turkiewicz J.P.,Warsaw University of Technology | De Waardt H.,TU Eindhoven
IEEE Photonics Technology Letters | Year: 2012

In this letter, we demonstrate low-complexity dense wavelength-division multiplexing (DWDM) transmission over a ∼40-km standard single-mode fiber in the 1310-nm wavelength domain with a total transmission capacity of up to 400 Gb/s. The demonstrated system is based exclusively on semiconductor components, without any form of dispersion compensation, and showed excellent performance. The presented results prove that the 1310-nm wavelength domain can support low-cost and low-complexity high-speed transmission for a wide range of applications such as future 400G+ Ethernet. © 1989-2012 IEEE.


Wang H.,Arizona State University | Sun M.,Arizona State University | Ding K.,Arizona State University | Hill M.T.,TU Eindhoven | Ning C.-Z.,Arizona State University
Nano Letters | Year: 2011

We demonstrate a novel top-down approach for fabricating nanowires with unprecedented complexity and optical quality by taking advantage of a nanoscale self-masking effect. We realized vertical arrays of nanowires of 20-40 nm in diameter with 16 segments of complex longitudinal InGaAsP/InP structures. The unprecedented high quality of etched wires is evidenced by the narrowest photoluminescence linewidth ever produced in similar wavelengths, indistinguishable from that of the corresponding wafer. This top-down, mask-free, large scale approach is compatible with the established device fabrication processes and could serve as an important alternative to the bottom-up approach, significantly expanding ranges and varieties of applications of nanowire technology. © 2011 American Chemical Society.


Demerouti E.,TU Eindhoven
European Journal of Clinical Investigation | Year: 2015

Background: Burnout represents a syndrome that is related to demanding job characteristics combined with the absence of resources or motivational job characteristics. The aim of this position study was to present strategies that individuals use to minimize burnout and its unfavourable effects. Materials and methods: The study focuses explicitly on strategies that individuals use to (i) deal with diminished resources that come with burnout, (ii) change their job characteristics such that the job becomes less demanding and more motivating and (iii) manage the interplay between the work and nonwork domains. Results: Individuals seem to use coping, recovery and compensation strategies to reduce the impact of work stressors by changing the stressor or their responses to the stressor. Moreover, they use job crafting to alter the characteristics of the job such that it becomes less hindering and more motivating. Finally, individuals create boundaries between their work and nonwork domains to experience less work-family and family-work conflicts by actively detaching from work. Conclusions: Finding bottom-up strategies that individuals use to minimize burnout or its unfavourable effects may be essential to complement the top-down interventions initiated by organizations. © 2015 Stichting European Society for Clinical Investigation Journal Foundation.


The motion of small particles in turbulent conditions is influenced by the entire range of length- and time-scales of the flow. At high Reynolds numbers this range of scales is too broad for direct numerical simulation (DNS). Such flows can only be approached using large-eddy simulation (LES), which requires the introduction of a sub-filter model for the momentum dynamics. Likewise, for the particle motion the effect of sub-filter scales needs to be reconstructed approximately, as there is no explicit access to turbulent sub-filter scales. To recover the dynamic consequences of the unresolved scales, partial reconstruction through approximate deconvolution of the LES-filter is combined with explicit stochastic forcing in the equations of motion of the particles. We analyze DNS of high-Reynolds turbulent channel flow to a priori extract the ideal forcing that should be added to retain correct statistical properties of the dispersed particle phase in LES. The probability density function of the velocity differences that need to be included in the particle equations and their temporal correlation display a striking and simple structure with little dependence on Reynolds number and particle inertia, provided the differences are normalized by their RMS, and the correlations expressed in wall units. This is key to the development of a general "stand-alone" stochastic forcing for inertial particles in LES. © 2012 American Institute of Physics.
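
A hedged sketch of the kind of stochastic forcing term discussed, an Ornstein-Uhlenbeck process for the unresolved velocity difference with prescribed RMS and correlation time in wall units, is given below; the actual forcing extracted a priori from DNS in the paper is not reproduced here, and all numbers are placeholders.

```python
import numpy as np

def ou_subfilter_velocity(n_steps, dt_plus, t_corr_plus=10.0, rms=1.0, seed=0):
    """Sample path of an Ornstein-Uhlenbeck process for a sub-filter velocity
    difference: du = -(u/T) dt + sqrt(2 rms^2 / T) dW, with time in wall units.
    Uses the exact one-step update for the OU process."""
    rng = np.random.default_rng(seed)
    u = np.zeros(n_steps)
    a = np.exp(-dt_plus / t_corr_plus)
    b = rms * np.sqrt(1.0 - a * a)
    for i in range(1, n_steps):
        u[i] = a * u[i - 1] + b * rng.standard_normal()
    return u

u = ou_subfilter_velocity(n_steps=20000, dt_plus=0.5)
lag = 20   # 20 steps * 0.5 wall units = one correlation time
rho = np.corrcoef(u[:-lag], u[lag:])[0, 1]
print("sample RMS: %.2f   autocorrelation at one correlation time: %.2f (exp(-1) = 0.37)"
      % (u.std(), rho))
```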


Bansal N.,TU Eindhoven
Mathematical Programming | Year: 2012

Recently, there have been several new developments in discrepancy theory based on connections to semidefinite programming. This connection has been useful in several ways. It gives efficient polynomial-time algorithms for several problems for which only non-constructive results were previously known. It also leads to several new structural results in discrepancy itself, such as tightness of the so-called determinant lower bound, improved bounds on the discrepancy of the union of set systems, and so on. We give a brief survey of these results, focusing on the main ideas and the techniques involved. © The Author(s) 2012.


Bansal N.,TU Eindhoven
Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms | Year: 2015

We consider the maximum independent set problem on sparse graphs with maximum degree d. The best known result for the problem is an SDP-based O(d log log d / log d) approximation due to Halperin. It is also known that no O(d/log^2 d) approximation exists assuming the Unique Games Conjecture. We show the following two results: (i) The natural LP formulation for the problem strengthened by O(log^4 d) levels of the mixed hierarchy has an integrality gap of Õ(d/log^2 d), where Õ(·) ignores some log log d factors. However, our proof is non-constructive; in particular, it uses an entropy-based approach due to Shearer and does not give an Õ(d/log^2 d) approximation algorithm with sub-exponential running time. (ii) We give an Õ(d/log d) approximation based on polylog(d) levels of the mixed hierarchy that runs in n^{O(1)} exp(log^{O(1)} d) time, improving upon Halperin's bound by a modest log log d factor. Our algorithm is based on combining Halperin's approach with an idea used by Ajtai, Erdős, Komlós and Szemerédi to show that K_r-free, degree-d graphs have independent sets of size Ω(n log log d / d). Copyright © 2015 by the Society for Industrial and Applied Mathematics.


Yeh C.-W.,National Taiwan University | Chen W.-T.,National Taiwan University | Liu R.-S.,National Taiwan University | Hu S.-F.,National Taiwan Normal University | And 3 more authors.
Journal of the American Chemical Society | Year: 2012

The orange-red emitting phosphors based on M2Si5N8:Eu (M = Sr, Ba) are widely utilized in white light-emitting diodes (WLEDs) because of their improvement of the color rendering index (CRI), which is important for warm white light emission. Nitride-based phosphors are adopted in high-performance applications because of their excellent thermal and chemical stabilities. A series of nitridosilicate phosphor compounds, M2-xSi5N8:Eux (M = Sr, Ba), were prepared by solid-state reaction. Thermal degradation in air was only observed in Sr2-xSi5N8:Eux with x = 0.10; it did not appear in Sr2-xSi5N8:Eux with x = 0.02 or in the Ba analogue with x = 0.10. This is an unprecedented investigation of this phenomenon in otherwise stable nitrides. The crystal structural variation of these compounds upon heating was followed using in situ XRD measurements. The valence of the Eu ions in these compounds was determined by electron spectroscopy for chemical analysis (ESCA) and X-ray absorption near-edge structure (XANES) spectroscopy. The morphology of these materials was examined by transmission electron microscopy (TEM). Combining all results, it is concluded that the origin of the thermal degradation in Sr2-xSi5N8:Eux with x = 0.10 is the formation of an amorphous layer on the surface of the nitride phosphor grains during oxidative heating treatment, which results in the oxidation of Eu ions from divalent to trivalent. This study provides a new perspective on the impact of the degradation problem as a consequence of heating processes in luminescent materials. © 2012 American Chemical Society.


D'Avino G.,University of Naples Federico II | Hulsen M.A.,TU Eindhoven
Journal of Non-Newtonian Fluid Mechanics | Year: 2010

The simulation of transient flows is relevant in several applications involving viscoelastic fluids. In the last decades, much effort has been spent on deriving time-marching schemes able to efficiently solve the governing equations at low computational cost. In this direction, decoupling schemes, where the global system is split into smaller subsystems, have been particularly successful. However, most of these techniques only work if inertia and/or a large Newtonian solvent contribution is included in the modeling. This is not the case for polymer melts or concentrated polymer solutions. In this work, we propose two second-order time-integration schemes for discretizing the momentum balance as well as the constitutive equation, based on a Gear and a Crank-Nicolson scheme. The solution of the momentum and continuity equations is decoupled from the constitutive one. The stress tensor term in the momentum balance is replaced by its space-continuous but time-discretized form of the constitutive equation through an Euler scheme implicit in the velocity. This adds velocity unknowns in the momentum equation, thus an updating of the velocity field is possible even if inertia and solvent viscosity are not included in the model. To further reduce computational costs, the non-linear relaxation term in the constitutive equation is taken explicitly, leading to a linear system of equations for each stress component. Four benchmark problems are considered to test the numerical schemes. The results show that a Crank-Nicolson based discretization for the momentum equation produces oscillations when combined with a Crank-Nicolson based scheme for the constitutive equation whereas, if a Gear based scheme is implemented for the constitutive equation, the stability is found to be dependent on the specific problem. However, the Gear based scheme applied to the momentum balance combined with both second-order methods used for the constitutive equation is stable and accurate and performs much better than a first-order Euler scheme. Finally, a numerical proof of the second-order convergence is also carried out. © 2010 Elsevier B.V.
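
The oscillation issue noted for the Crank-Nicolson-based momentum discretization can be illustrated on the scalar stiff test equation y' = -λy: the Crank-Nicolson amplification factor tends to -1 for large λΔt, giving slowly damped sign-flipping oscillations, whereas the Gear (BDF2) scheme damps them strongly. The sketch below is this textbook illustration under assumed parameters, not the paper's viscoelastic solver.

```python
import numpy as np

def crank_nicolson(lam, h, n, y0=1.0):
    """y' = -lam*y with Crank-Nicolson: amplification factor
    g = (1 - lam*h/2)/(1 + lam*h/2), which tends to -1 for stiff steps."""
    g = (1.0 - lam * h / 2.0) / (1.0 + lam * h / 2.0)
    return y0 * g ** np.arange(n)

def gear_bdf2(lam, h, n, y0=1.0):
    """y' = -lam*y with Gear/BDF2: (3*y_i - 4*y_{i-1} + y_{i-2})/(2h) = -lam*y_i,
    started with one backward-Euler step; strongly damped for stiff steps."""
    y = np.empty(n)
    y[0] = y0
    y[1] = y0 / (1.0 + lam * h)
    for i in range(2, n):
        y[i] = (4.0 * y[i - 1] - y[i - 2]) / (3.0 + 2.0 * lam * h)
    return y

lam, h, n = 1000.0, 0.1, 8   # very stiff step: lam*h = 100
print("CN  :", np.round(crank_nicolson(lam, h, n), 4))   # oscillates, barely decays
print("BDF2:", np.round(gear_bdf2(lam, h, n), 4))        # damped out almost immediately
```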