Eindhoven, Netherlands

Eindhoven University of Technology (TU/e) is a university of technology located in Eindhoven, Netherlands. Its motto is Mens agitat molem. The university was the second of its kind in the Netherlands; only Delft University of Technology existed previously. Until mid-1980 it was known as the Technische Hogeschool Eindhoven. In 2011 the QS World University Rankings placed Eindhoven 146th overall, but 61st globally for Engineering & IT. In the 2011 Academic Ranking of World Universities, TU/e was placed in the 52-75 band internationally in the Engineering/Technology and Computer Science category, and 34th internationally in the field of Computer Science. In 2003 a European Commission report ranked TU/e third among all European research universities, making it the highest-ranked technical university in Europe. (Wikipedia)


Su R.,Nanyang Technological University | Van Schuppen J.H.,Centrum voor Wiskunde en Informatica CWI | Rooda J.E.,TU Eindhoven
IEEE Transactions on Automatic Control | Year: 2012

In many practical applications, we need to compute a nonblocking supervisor that not only complies with pre-specified safety requirements but also achieves a certain time optimal performance such as maximum throughput. In this paper, we first present a minimum-makespan supervisor synthesis problem. Then we show that the problem can be solved by a terminable algorithm, where the execution time of each string is computable by the theory of heaps-of-pieces. We also provide a timed supervisory control map that can implement the synthesized minimum-makespan sublanguage. © 2006 IEEE. Source
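
As a rough illustration of the heaps-of-pieces computation of string execution times mentioned above (a generic textbook sketch, not the authors' synthesis algorithm; the two-machine job data are made up), each task is a "piece" occupying some resources, and the makespan is the height of the resulting heap:

    # Minimal heaps-of-pieces makespan sketch (illustrative only).
    # Each piece occupies some resources; on resource r it spans
    # [lower[r], upper[r]] relative to its own reference level.

    def makespan(pieces, resources):
        contour = {r: 0.0 for r in resources}  # upper contour of the heap
        for lower, upper in pieces:
            # The piece slides down until it rests on the current contour.
            shift = max(contour[r] - lower[r] for r in lower)
            for r in lower:
                contour[r] = shift + upper[r]
        return max(contour.values())

    # Hypothetical example: tasks a, b, c on two machines M1, M2.
    pieces = [
        ({"M1": 0.0}, {"M1": 3.0}),                        # a: 3 time units on M1
        ({"M1": 0.0, "M2": 0.0}, {"M1": 2.0, "M2": 2.0}),  # b: 2 units on both machines
        ({"M2": 0.0}, {"M2": 4.0}),                        # c: 4 units on M2
    ]
    print(makespan(pieces, ["M1", "M2"]))  # 9.0 for the order a, b, c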


Parsa S.,Wesleyan University | Calzavarini E.,Lille Laboratory of Mechanics | Toschi F.,TU Eindhoven | Voth G.A.,Wesleyan University
Physical Review Letters | Year: 2012

The rotational dynamics of anisotropic particles advected in a turbulent fluid flow are important in many industrial and natural settings. Particle rotations are controlled by small scale properties of turbulence that are nearly universal, and so provide a rich system where experiments can be directly compared with theory and simulations. Here we report the first three-dimensional experimental measurements of the orientation dynamics of rodlike particles as they are advected in a turbulent fluid flow. We also present numerical simulations that show good agreement with the experiments and allow extension to a wide range of particle shapes. Anisotropic tracer particles preferentially sample the flow since their orientations become correlated with the velocity gradient tensor. The rotation rate is heavily influenced by this preferential alignment, and the alignment depends strongly on particle shape. © 2012 American Physical Society. Source
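
As a minimal sketch of the orientation dynamics of an axisymmetric particle, the snippet below integrates the standard Jeffery equation in a prescribed steady simple shear (an assumption made purely for illustration; the paper concerns turbulent flow studied experimentally and by simulation):

    import numpy as np

    def jeffery_rhs(p, A, lam):
        """Jeffery's equation for a unit orientation vector p in velocity gradient A."""
        S = 0.5 * (A + A.T)          # strain-rate tensor
        W = 0.5 * (A - A.T)          # rotation-rate tensor
        return W @ p + lam * (S @ p - (p @ S @ p) * p)

    # Steady simple shear du/dy = 1; slender rod with aspect ratio alpha = 10 (toy values).
    A = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])
    alpha = 10.0
    lam = (alpha**2 - 1.0) / (alpha**2 + 1.0)

    p = np.array([0.0, 1.0, 0.0])    # start aligned with the gradient direction
    dt = 1e-3
    for _ in range(20000):           # explicit Euler, renormalising each step
        p = p + dt * jeffery_rhs(p, A, lam)
        p /= np.linalg.norm(p)
    print(p)   # final orientation; a slender rod spends most of its Jeffery orbit near the flow direction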


La Rosa M.,Queensland University of Technology | Dumas M.,Queensland University of Technology | Uba R.,University of Tartu | Dijkman R.,TU Eindhoven
ACM Transactions on Software Engineering and Methodology | Year: 2013

This article addresses the problem of constructing consolidated business process models out of collections of process models that share common fragments. The article considers the construction of unions of multiple models (called merged models) as well as intersections (called digests). Merged models are intended for analysts who wish to create a model that subsumes a collection of process models - typically representing variants of the same underlying process - with the aim of replacing the variants with the merged model. Digests, on the other hand, are intended for analysts who wish to identify the most recurring fragments across a collection of process models, so that they can focus their efforts on optimizing these fragments. The article presents an algorithm for computing merged models and an algorithm for extracting digests from a merged model. The merging and digest extraction algorithms have been implemented and tested against collections of process models taken from multiple application domains. The tests show that the merging algorithm produces compact models and scales up to process models containing hundreds of nodes. Furthermore, a case study conducted in a large insurance company has demonstrated the usefulness of the merging and digest extraction operators in a practical setting. © 2013 ACM. Source


Van Der Vaart A.,VU University Amsterdam | Van Zanten H.,TU Eindhoven
Journal of Machine Learning Research | Year: 2011

We consider the quality of learning a response function by a nonparametric Bayesian approach using a Gaussian process (GP) prior on the response function. We upper bound the quadratic risk of the learning procedure, which in turn is an upper bound on the Kullback-Leibler information between the predictive and true data distribution. The upper bound is expressed in small ball probabilities and concentration measures of the GP prior. We illustrate the computation of the upper bound for the Matérn and squared exponential kernels. For these priors the risk, and hence the information criterion, tends to zero for all continuous response functions. However, the rate at which this happens depends on the combination of true response function and Gaussian prior, and is expressible in a certain concentration function. In particular, the results show that for good performance, the regularity of the GP prior should match the regularity of the unknown response function. © 2011 Aad van der Vaart and Harry van Zanten. Source
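
A small, self-contained illustration of nonparametric regression with a GP prior and the squared exponential kernel discussed above (textbook posterior mean on toy data; it does not reproduce the paper's risk bounds):

    import numpy as np

    def sq_exp_kernel(x1, x2, length=0.3, sigma=1.0):
        """Squared exponential (RBF) covariance between two sets of scalar inputs."""
        d = x1[:, None] - x2[None, :]
        return sigma**2 * np.exp(-0.5 * (d / length) ** 2)

    # Toy data: noisy observations of a smooth response function.
    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 20)
    y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(x.size)

    noise = 0.1
    K = sq_exp_kernel(x, x) + noise**2 * np.eye(x.size)
    x_star = np.linspace(0.0, 1.0, 5)
    K_s = sq_exp_kernel(x_star, x)

    # Standard GP posterior mean at the test points.
    alpha_vec = np.linalg.solve(K, y)
    post_mean = K_s @ alpha_vec
    print(post_mean)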


De Waele A.T.A.M.,TU Eindhoven
Cryogenics | Year: 2012

This paper deals with the influence of the finite heat capacity of the regenerator matrix on the performance of cryocoolers. The dynamics of the various parameters is treated in the harmonic approximation, focussing on finite heat-capacity effects, real-gas effects, and heat conduction. It is assumed that the flow resistance is zero, that the heat contact between the gas and the matrix is perfect, and that there is no mass storage in the matrix. Based on an energy-flow analysis, the limiting temperature, temperature profiles in the regenerator, and cooling powers are calculated. The discussion refers to pulse-tube refrigerators, but it is equally relevant for Stirling coolers and GM-coolers. © 2011 Elsevier Ltd. All rights reserved. Source


Verburg J.M.,Harvard University | Verburg J.M.,TU Eindhoven | Seco J.,Harvard University
Physics in Medicine and Biology | Year: 2014

We present an experimental study of a novel method to verify the range of proton therapy beams. Differential cross sections were measured for 15 prompt gamma-ray lines from proton-nuclear interactions with ¹²C and ¹⁶O at proton energies up to 150 MeV. These cross sections were used to model discrete prompt gamma-ray emissions along proton pencil-beams. By fitting detected prompt gamma-ray counts to these models, we simultaneously determined the beam range and the oxygen and carbon concentration of the irradiated matter. The performance of the method was assessed in two phantoms with different elemental concentrations, using a small scale prototype detector. Based on five pencil-beams with different ranges delivering 5×10⁸ protons and without prior knowledge of the elemental composition at the measurement point, the absolute range was determined with a standard deviation of 1.0-1.4 mm. Relative range shifts at the same dose level were detected with a standard deviation of 0.3-0.5 mm. The determined oxygen and carbon concentrations also agreed well with the actual values. These results show that quantitative prompt gamma-ray measurements enable knowledge of nuclear reaction cross sections to be used for precise proton range verification in the presence of tissue with an unknown composition. © 2014 Institute of Physics and Engineering in Medicine. Source


van Schijndel A.W.M.,TU Eindhoven
Building Simulation | Year: 2011

The paper presents an overview of Multiphysics applications using a Multiphysics modeling package for building physical constructions simulation. The overview includes three main basic transport phenomena for building physical constructions: (1) heat transfer, (2) heat and moisture transfer and (3) heat, air and moisture (HAM) transfer. It is concluded that full 3D transient coupled HAM models for building physical constructions can be built using a Multiphysics modeling package. Regarding heat transport, neither difficulties nor limitations are expected. Concerning combined heat and moisture transport, the main difficulties are related to the material properties, but this appears to be no fundamental limitation. Regarding HAM modeling inside solid constructions, there is at least one limitation: validation is almost impossible due to the difficulty of measuring ultra-low air velocities on the order of μm/s. © Tsinghua University Press and Springer-Verlag Berlin Heidelberg 2011. Source


Bastiaans M.J.,TU Eindhoven
Journal of the Franklin Institute | Year: 2011

It is shown that the recently introduced T-class of time-frequency distributions is a subclass of the S-method distributions. From the generalization of the S-method distribution by rotating it in the time-frequency plane, a similar generalization of the T-class distribution follows readily. The generalized T-class distribution is then applicable to signals that behave chirp-like, with their instantaneous frequency slowly varying around the slope of the chirp; this slope no longer needs to be zero, as is the case for the original T-class distribution, but may take an arbitrary value. © 2011 The Franklin Institute. Source


Van Der Aalst W.M.P.,TU Eindhoven
Lecture Notes in Business Information Processing | Year: 2011

Due to the availability of more and more event data and mature process mining techniques, it has become possible to discover the actual processes within an organization. Process mining techniques use event logs to automatically construct process models that explain the behavior observed. Existing process models can be validated using conformance checking techniques. Moreover, the link between real-life events and model elements allows for the projection of additional information onto process models (e.g., showing bottlenecks and the flow of work within an organization). Although process mining has been mainly used within individual organizations, this new technology can also be applied in cross-organizational settings. In this paper, we identify such settings and highlight some of the challenges and opportunities. In particular, we show that cross-organizational processes can be partitioned along two orthogonal dimensions. This helps us to identify relevant process mining challenges involving multiple organizations. © 2011 IFIP International Federation for Information Processing. Source


Van Soestbergen M.,Materials Innovation Institute M2i | Van Soestbergen M.,TU Eindhoven
Electrochemistry Communications | Year: 2012

Theory predicts that ionic currents through electrochemical cells at nanometer scale can exceed the diffusion limitation due to an expansion of the interfacial electrostatic double layer. Corresponding voltammetry experiments revealed a clear absence of a plateau for the current, which cannot be described by the classical Butler-Volmer approach using realistic values for the transfer coefficient. We show that extending the classical approach by considering the double layer structure using the Frumkin correction leads to an accurate description of the anomalous experimental data. © 2012 Elsevier B.V. All rights reserved. Source
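
For orientation, a textbook form of the Frumkin-corrected cathodic Butler-Volmer rate is sketched below in LaTeX notation (schematic only, with diffuse-layer potential $\phi_2$; the paper's own expressions may differ in detail):

    i_c \;\propto\; F k^0\, c_O^{\mathrm{bulk}}\,
        \exp\!\left(-\frac{z_O F \phi_2}{RT}\right)
        \exp\!\left(-\frac{\alpha F\,(E-\phi_2)}{RT}\right)

The first exponential corrects the reactant concentration to its value at the reaction plane, the second uses the potential drop across the compact layer as the driving force; the expression reduces to the classical Butler-Volmer form as $\phi_2 \to 0$.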


Akhtar N.,TU Eindhoven
Chemical Engineering Research and Design | Year: 2012

Our previously developed numerical model has been used to study the flow, species and temperature distribution in a micro-tubular, single-chamber solid oxide fuel cell stack. The stack consists of three cells, spaced equally inside the gas-chamber. Two different configurations of the gas-chamber have been investigated, i.e., a bare gas-chamber and a porous material filled gas-chamber. The results show that the porous material filled gas-chamber is advantageous in improving the cell performance, as it forces the flow to pass through the cell, which improves mass transport via convection and enhances the reaction rate. In the case of a bare gas-chamber, the cell performance follows the order: cell 1 > cell 2 > cell 3. However, the performance order is reversed for the porous gas-chamber case, because the flow is increasingly forced through the downstream cells as one moves along the gas-chamber length. © 2011 The Institution of Chemical Engineers. Source


Buchin K.,TU Eindhoven | Mulzer W.,Free University of Berlin
Journal of the ACM | Year: 2011

We present several results about Delaunay triangulations (DTs) and convex hulls in transdichotomous and hereditary settings: (i) the DT of a planar point set can be computed in expected time O(sort(n)) on a word RAM, where sort(n) is the time to sort n numbers. We assume that the word RAM supports the shuffle operation in constant time; (ii) if we know the ordering of a planar point set in x- and in y-direction, its DT can be found by a randomized algebraic computation tree of expected linear depth; (iii) given a universe U of points in the plane, we construct a data structure D for Delaunay queries: for any P ⊆ U, D can find the DT of P in expected time O(|P| log log |U|); (iv) given a universe U of points in 3-space in general convex position, there is a data structure D for convex hull queries: for any P ⊆ U, D can find the convex hull of P in expected time O(|P| (log log |U|)^2); (v) given a convex polytope in 3-space with n vertices which are colored with χ ≥ 2 colors, we can split it into the convex hulls of the individual color classes in expected time O(n (log log n)^2). The results (i)-(iii) generalize to higher dimensions, where the expected running time now also depends on the complexity of the resulting DT. We need a wide range of techniques. Most prominently, we describe a reduction from DTs to nearest-neighbor graphs that relies on a new variant of randomized incremental constructions using dependent sampling. © 2011 ACM. Source


Hill M.T.,TU Eindhoven
Journal of the Optical Society of America B: Optical Physics | Year: 2010

A remarkable miniaturization of lasers has occurred in just the past few years by employing metals to form the laser resonator. Whereas minimum laser dimensions were previously at least several wavelengths of the emitted light, many devices have now been demonstrated in which the laser size is a wavelength or less. Additionally, some devices show lasing in structures significantly smaller than the wavelength of light in several dimensions, and the optical mode is far smaller than allowed by the diffraction limit. In this article we review what has been achieved and then look forward to the directions development could take and where possible applications could lie. In particular we show that there are devices with an optical size slightly larger than or near the diffraction limit which could soon be employed in many applications requiring coherent light sources. Application of devices with dimensions far below the diffraction limit is also on the horizon, but may take more time. © 2010 Optical Society of America. Source


Prieto G.,University Utrecht | Zecevic J.,University Utrecht | Friedrich H.,TU Eindhoven | De Jong K.P.,University Utrecht | De Jongh P.E.,University Utrecht
Nature Materials | Year: 2013

Supported metal nanoparticles play a pivotal role in areas such as nanoelectronics, energy storage/conversion and as catalysts for the sustainable production of fuels and chemicals. However, the tendency of nanoparticles to grow into larger crystallites is an impediment for stable performance. Exemplarily, loss of active surface area by metal particle growth is a major cause of deactivation for supported catalysts. In specific cases particle growth might be mitigated by tuning the properties of individual nanoparticles, such as size, composition and interaction with the support. Here we present an alternative strategy based on control over collective properties, revealing the pronounced impact of the three-dimensional nanospatial distribution of metal particles on catalyst stability. We employ silica-supported copper nanoparticles as catalysts for methanol synthesis as a showcase. Achieving near-maximum interparticle spacings, as accessed quantitatively by electron tomography, slows down deactivation up to an order of magnitude compared with a catalyst with a non-uniform nanoparticle distribution, or a reference Cu/ZnO/Al2O3 catalyst. Our approach paves the way towards the rational design of practically relevant catalysts and other nanomaterials with enhanced stability and functionality, for applications such as sensors, gas storage, batteries and solar fuel production. Source


Bohm C.,University of Stuttgart | Lazar M.,TU Eindhoven | Allgower F.,University of Stuttgart
Automatica | Year: 2012

This paper proposes a novel approach to stability analysis of discrete-time nonlinear periodically time-varying systems. The contributions are as follows. Firstly, a relaxation of standard Lyapunov conditions is derived. This leads to a less conservative Lyapunov function that is required to decrease at each period rather than at each time instant. Secondly, for linear periodic systems with constraints, it is shown that compared to standard Lyapunov theory, the novel concept of periodic Lyapunov functions allows for the calculation of a larger estimate of the region of attraction. An example illustrates the effectiveness of the developed theory. © 2012 Elsevier Ltd. All rights reserved. Source
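
The following toy check illustrates the relaxed condition for a discrete-time linear periodic system: a quadratic Lyapunov function need only decrease over one full period, which amounts to the monodromy matrix being Schur stable (matrices invented for illustration; the constrained synthesis of the paper is not reproduced):

    import numpy as np
    from scipy.linalg import solve_discrete_lyapunov

    # Periodic system x_{k+1} = A_k x_k with period N = 2 (toy matrices).
    A0 = np.array([[1.2, 0.0], [0.1, 0.5]])   # expands, so no per-step Lyapunov decrease exists
    A1 = np.array([[0.4, 0.1], [0.0, 0.6]])
    Phi = A1 @ A0                              # state map over one full period (monodromy matrix)

    # V(x) = x' P x decreases over each period iff Phi is Schur stable.
    print("spectral radius of Phi:", max(abs(np.linalg.eigvals(Phi))))
    P = solve_discrete_lyapunov(Phi.T, np.eye(2))    # solves Phi' P Phi - P = -I
    print("P positive definite:", np.all(np.linalg.eigvalsh(P) > 0))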


Van Oijen J.A.,TU Eindhoven
Proceedings of the Combustion Institute | Year: 2013

MILD combustion is a new combustion technology which promises an enhanced efficiency and reduced emission of pollutants. It is characterized by a high degree of preheating and dilution of the reactants. Since the temperature of the reactants is higher than that of autoignition, a complex interplay between turbulent mixing, molecular transport and chemical kinetics occurs. In order to reveal the fundamental reaction structures of MILD combustion, the process of a cold methane-hydrogen fuel jet issuing in a hot diluted coflow and the subsequent ignition process is modeled by direct numerical simulation of autoigniting mixing layers using detailed chemistry and transport models. Detailed analysis of one-dimensional laminar mixing layers shows that the ignition process is dominated by hydrogen chemistry and that non-unity Lewis number effects are of the utmost importance for modeling of autoignition. High scalar dissipation rates in mixing layers delay the autoignition time, but have a negligible effect on the chemical pathway followed during ignition. This supports the idea of using homogeneous reactor simulations for the construction of chemistry look-up tables. Simulations of two-dimensional turbulent mixing layers confirm the effect of scalar dissipation rate on autoignition time. The turbulence-chemistry interaction is limited under the investigated conditions, because the reaction layer lies at the edge of the mixing layer due to the very small value of the stoichiometric mixture fraction. When the oxidizer stream is more diluted, the autoignition time is delayed, allowing the developing turbulence to interact more with the ignition chemistry. The results of these direct numerical simulations employing a detailed reaction mechanism are expected to be used for the development of tabulated chemistry models and sub-grid scale models for large-eddy simulations of MILD combustion. © 2012 The Combustion Institute. Source


Lenstra D.,TU Eindhoven | Yousefi M.,Photonic Sensing Solutions
Optics Express | Year: 2014

We present a set of rate equations for the modal amplitudes and carrier-inversion moments that describe the deterministic multi-mode dynamics of a semiconductor laser due to spatial hole burning. Mutual interactions among the lasing modes, induced by high-frequency modulations of the carrier distribution, are included by carrier-inversion moments for which rate equations are given as well. We derive the Bogatov effect of asymmetric gain suppression in semiconductor lasers and illustrate the potential of the model for a two and three-mode laser by numerical and analytical methods. © 2014 Optical Society of America. Source


Chen H.,TU Eindhoven
Discrete and Computational Geometry | Year: 2016

We investigate in this paper the relation between Apollonian d-ball packings and stacked (d+1)-polytopes for dimension d ≥ 3. For d = 3, the relation is fully described: we prove that the 1-skeleton of a stacked 4-polytope is the tangency graph of an Apollonian 3-ball packing if and only if there are no six 4-cliques sharing a 3-clique. For higher dimensions, we have some partial results. © 2016 The Author(s) Source


Thelander C.,Lund University | Caroff P.,CNRS Institute of Electronics, Microelectronics and Nanotechnology | Plissard S.,TU Eindhoven | Dick K.A.,Lund University
Applied Physics Letters | Year: 2012

Results of electrical characterization of Au nucleated InAs1-xSbx nanowires grown by molecular beam epitaxy are reported. An almost doubling of the extracted field effect mobility compared to reference InAs nanowires is observed for a Sb content of x = 0.13. Pure InSb nanowires, on the other hand, show considerably lower, and strongly diameter dependent, mobility values. Finally, InAs of wurtzite crystal phase overgrown with an InAs1-xSbx shell is found to have a substantial positive shift in threshold voltage compared to reference nanowires. © 2012 American Institute of Physics. Source


The paper investigates greenhouse gas (GHG) emissions from land use change associated with the introduction of large-scale Jatropha curcas cultivation on Miombo Woodland, using data from extant forestry and ecology studies about this ecosystem. Its results support the notion that Jatropha can help sequester atmospheric carbon when grown on complete wastelands and in severely degraded conditions. Conversely, when introduced on tropical woodlands with substantial biomass and medium/high organic soil carbon content, Jatropha will induce significant emissions that offset any GHG savings from the rest of the biofuel production chain. A carbon debt of more than 30 years is projected. On semi-degraded Miombo the overall GHG balance of Jatropha is found to hinge a lot on the extent of carbon depletion of the soil, more than on the state of the biomass. This finding points to the urgent need for detailed measurements of soil carbon in a range of Miombo sub-regions and similar tropical dryland ecosystems in Asia and Latin America. Efforts should be made to clarify concepts such as 'degraded lands' and 'wastelands' and to refine land allocation criteria and official GHG calculation methodologies for biofuels on that basis. © 2010 Elsevier Ltd. Source


According to Austro-British philosopher Karl Popper, a system of theoretical claims is scientific only if it is methodologically falsifiable, i.e., only if systematic attempts to falsify or severely test the system are being carried out [Popper, 2005, pp. 20, 62]. He holds that a test of a theoretical system is severe if and only if it is a test of the applicability of the system to a case in which the system's failure is likely in light of background knowledge, i.e., in light of scientific assumptions other than those of the system being tested [Popper, 2002, p. 150]. Popper counts the 1919 tests of general relativity's then unlikely predictions of the deflection of light in the Sun's gravitational field as severe. An implication of Popper's above condition for being a scientific theoretical system is the injunction to assess theoretical systems in light of how well they have withstood severe testing. Applying this injunction to assessing the quality of climate model predictions (CMPs), including climate model projections, would involve assigning a quality to each CMP as a function of how well it has withstood severe tests allowed by its implications for past, present, and near-future climate or, alternatively, as a function of how well the models that generated the CMP have withstood severe tests of their suitability for generating the CMP. Source


Janssen P.J.A.,University of Wisconsin - Madison | Anderson P.D.,TU Eindhoven
Macromolecular Materials and Engineering | Year: 2011

A proper description of coalescence of viscous drops is challenging from an experimental, numerical, and theoretical point of view. Although the problem seems easy at first sight, consensus in the literature has still not been reached on how to predict a realistic coalescence rate given flow type, capillary number and viscosity ratio. Despite advances in algorithms and computational power, and the emergence of fully-closed analytical results, a match between theory, experiment and simulation for drainage rates only appears in a severely limited number of cases. In this paper, several recent developments are reviewed, and a summary is made of several challenges that still lie ahead. © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. Source


Speetjens M.F.M.,TU Eindhoven
International Journal of Thermal Sciences | Year: 2012

Heat transfer in essence is the transport of thermal energy along certain paths in a similar way as fluid motion is the transport of fluid parcels along fluid paths. This similarity admits Lagrangian heat-transfer analyses by the geometry of such "thermal paths" analogous to well-known Lagrangian mixing analyses. Essential to Lagrangian heat-transfer formalisms is the reference state for the convective flux. Existing approaches admit only uniform references. However, for convective heat transfer, a case of great practical relevance, the conductive state that sets in for vanishing fluid motion is the more natural reference. This typically is an inhomogeneous state and thus beyond the existing formalism. The present study closes this gap by its generalisation to non-uniform references and thus substantially strengthens Lagrangian methods for thermal analyses by (i) greatly extending their applicability, (ii) resolving the fundamental ambiguity concerning arbitrariness of the reference state that limits the original formalism, (iii) facilitating accessible physical interpretation of heat fluxes and thermal paths and (iv) enabling subtler distinction of (Lagrangian) heat-transfer phenomena. The generalised Lagrangian formalism is elaborated for laminar convective heat transfer, which can be done without loss of generality, and completed by a comprehensive geometrical framework for the composition and organisation of thermal paths. This ansatz is demonstrated by way of 2D (un)steady case studies and offers new fundamental insight into thermal transport that is complementary to the Eulerian picture based on temperature. Highlights: generalisation of the Lagrangian heat-transfer formalism to non-uniform reference states; clear definition and physical interpretation of convective flux and thermal paths; resolution of the fundamental ambiguity of the reference state in existing formalisms; formulation of a comprehensive geometrical framework for the composition of thermal paths; illustrative Lagrangian thermal analysis using concepts from mixing studies. © 2012 Elsevier Masson SAS. All rights reserved. Source


Heertjes M.,TU Eindhoven | Van Engelen A.,TMC
Control Engineering Practice | Year: 2011

To minimize cross-talk in high-precision motion systems, the possibilities of data-based dynamic decoupling are studied. Atop a model-based and static decoupling, a multi-input multi-output (MIMO) and finite impulse response (FIR) dynamic decoupling structure is considered for machine-specific and performance-driven fine tunings. The coefficients of the FIR filters are obtained via data-based optimization, whilst the machine operates under nominal and closed-loop conditions. The FIR filters provide the ability to generate zeros outside the origin. These zeros are needed in the description of the low-frequency inverted plant dynamics. In addition, a low-pass filter structure supports the ability to generate poles outside the origin as to account for plant zeros. Both filter structures are effectively used in the high-precision motion control of a state-of-the-art scanning stage system and an industrial vibration isolation system. © 2011 Elsevier Ltd. Source
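
A bare-bones sketch of the data-based ingredient: fit FIR coefficients by ordinary least squares so that delayed samples of one axis' excitation reproduce (and can thus be used to cancel) the measured cross-talk on another axis (synthetic open-loop data; the paper tunes the coefficients machine-specifically under closed-loop conditions):

    import numpy as np

    rng = np.random.default_rng(1)
    n, taps = 2000, 8

    u = rng.standard_normal(n)                   # excitation applied to axis 1
    true_coupling = np.array([0.5, -0.2, 0.1])   # unknown cross-talk dynamics (toy values)
    crosstalk = np.convolve(u, true_coupling)[:n] + 0.01 * rng.standard_normal(n)

    # Regression matrix of delayed excitation samples (FIR structure).
    U = np.column_stack([np.concatenate([np.zeros(k), u[:n - k]]) for k in range(taps)])

    # Least-squares FIR coefficients that reproduce the measured cross-talk.
    w, *_ = np.linalg.lstsq(U, crosstalk, rcond=None)
    print(np.round(w, 3))   # first three taps ~ [0.5, -0.2, 0.1], remainder ~ 0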


Laarhoven T.,TU Eindhoven
IEEE Transactions on Information Forensics and Security | Year: 2015

In this paper, we consider the large-coalition asymptotics of various fingerprinting and group testing games, and derive explicit expressions for the capacities for each of these models. We do this both for simple (fast but suboptimal) and arbitrary, joint decoders (slow but optimal). For fingerprinting, we show that if the pirate strategy is known, the capacity often decreases linearly with the number of colluders, instead of quadratically as in the uninformed fingerprinting game. For all considered attacks, the joint capacity is shown to be strictly higher than the simple capacity, including the interleaving attack. This last result contradicts the results of Huang and Moulin regarding the joint fingerprinting capacity, which implies that finding the fingerprinting capacity without restrictions on the tracer's capabilities remains an open problem. For group testing, we improve upon the previous work about the joint capacities, and derive new explicit asymptotics for the simple capacities of various models. These show that the existing simple group testing algorithms of Chan et al. are suboptimal, and that simple decoders cannot asymptotically be as efficient as joint decoders. For the traditional group testing model, we show that the gap between the simple and joint capacities is a factor log2(e) ≈ 1.44 for large numbers of defectives. © 2015 IEEE. Source


Pham K.,TU Eindhoven | Marigo J.-J.,Ecole Polytechnique - Palaiseau
Continuum Mechanics and Thermodynamics | Year: 2013

We propose a construction method of non-homogeneous solutions for the traction problem of an elastic damaging bar. This bar has a softening behavior that obeys a gradient damaged model. The method is applicable for a wide range of brittle materials. For sufficiently long bars, we show that localization arises on sets whose length is proportional to the material internal length and with a profile that is also a material characteristic. From its onset until the rupture, the damage profile is obtained either in a closed form or after a simple numerical integration depending on the model. Thus, the proposed method provides definitions for the critical stress and fracture energy that can be compared with experimental results. We finally discuss some features of the global behavior of the bar such as the possibility of a snapback at the onset of damage. We point out the sensitivity of the responses to the parameters of the damage law. All these theoretical considerations are illustrated by numerical examples. © 2012 Springer-Verlag. Source


Lakens D.,TU Eindhoven
Journal of Experimental Psychology: Learning Memory and Cognition | Year: 2012

Previous research has shown that words presented on metaphor congruent locations (e.g., positive words UP on the screen and negative words DOWN on the screen) are categorized faster than words presented on metaphor incongruent locations (e.g., positive words DOWN and negative words UP). These findings have been explained in terms of an interference effect: The meaning associated with UP and DOWN vertical space can automatically interfere with the categorization of words with a metaphorically incongruent meaning. The current studies test an alternative explanation for the interaction between the vertical position of abstract concepts and the speed with which these stimuli are categorized. Research on polarity differences (basic asymmetries in the way dimensions are processed) predicts that +polar endpoints of dimensions (e.g., positive, moral, UP) are categorized faster than -polar endpoints of dimensions (e.g., negative, immoral, DOWN). Furthermore, the polarity correspondence principle predicts that stimuli where polarities correspond (e.g., positive words presented UP) provide an additional processing benefit compared to stimuli where polarities do not correspond (e.g., negative words presented UP). A meta-analysis (Study 1) shows that a polarity account provides a better explanation of reaction time patterns in previous studies than an interference explanation. An experiment (Study 2) reveals that controlling for the polarity benefit of +polar words compared to -polar words did not only remove the main effect of word polarity but also the interaction between word meaning and vertical position due to polarity correspondence. These results reveal that metaphor congruency effects should not be interpreted as automatic associations between vertical locations and word meaning but instead are more parsimoniously explained by their structural overlap in polarities. © 2011 American Psychological Association. Source


Ozcelebi T.,TU Eindhoven
Signal Processing: Image Communication | Year: 2011

In state-of-the-art adaptive streaming solutions, to cope with varying network conditions, the client side can switch between several video copies encoded at different bit-rates during streaming. Each video copy is divided into chunks of equal duration. To achieve continuous video playback, each chunk needs to arrive at the client before its playback deadline. The perceptual quality of a chunk increases with the chunk size in bits, whereas bigger chunks require more transmission time and, as a result, have a higher risk of missing transmission deadline. Therefore, there is a trade-off between the overall video quality and continuous playback, which can be optimized by proper selection of the next chunk from the encoded versions. This paper proposes a method to compute a set of optimal client strategies for this purpose. © 2011 Elsevier B.V. Source
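
A toy greedy strategy for the quality/continuity trade-off described above (made-up chunk sizes and bandwidth; the paper computes optimal client strategies rather than this simple heuristic):

    # Each chunk is available in several encoded sizes (bits); chunks last 2 s each.
    chunk_duration = 2.0
    sizes = [1e6, 2e6, 4e6]          # candidate encodings, low to high quality (toy values)
    bandwidth = 1.5e6                # estimated throughput in bit/s
    buffer_level = 4.0               # seconds of video already buffered
    safety = 0.5                     # keep half a second of margin

    chosen = []
    for _ in range(10):              # stream 10 chunks
        # Highest quality whose download finishes before the buffer runs dry.
        feasible = [s for s in sizes if s / bandwidth <= buffer_level - safety]
        pick = max(feasible) if feasible else min(sizes)
        chosen.append(pick)
        buffer_level += chunk_duration - pick / bandwidth
    print(chosen)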


Westergaard M.,TU Eindhoven
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2011

This paper introduces Access/CPN 2.0, which extends Access/CPN with high-level primitives for interacting with coloured Petri net (CPN) models in Java programs. The primitives allow Java programs to monitor and interact with places and transitions during execution, and to embed entire programs as subpages of CPN models or embed CPN models as parts of programs. This facilitates building environments for systematic testing of program components using CPN models. We illustrate the use of Access/CPN 2.0 in the context of business processes by embedding a workflow system into a CPN model. © 2011 Springer-Verlag. Source


Galagan Y.,Holst Center | Debije M.G.,TU Eindhoven | Blom P.W.M.,Holst Center
Applied Physics Letters | Year: 2011

Semitransparent organic solar cells employing solution-processable organic wavelength dependent reflectors of chiral nematic (cholesteric) liquid crystals are demonstrated. The cholesteric liquid crystal (CLC) reflects only in a narrow band of the solar spectrum and remains transparent for the remaining wavelengths. The reflective band is matched to the absorption spectrum of the organic solar cell such that only unabsorbed photons that can contribute to the photocurrent are reflected to pass through the active layer a second time. In this way, the efficiency of semitransparent organic solar cells can be enhanced without significant transparency losses. An efficiency increase of 6% was observed when a CLC reflector with a reflection band of 540-620 nm was used, whereas the transparency of the organic solar cells is only suppressed in the 80 nm narrow bandwidth. © 2011 American Institute of Physics. Source


Markovski J.,TU Eindhoven
Proceedings - International Conference on Application of Concurrency to System Design, ACSD | Year: 2011

We propose a model-based systems engineering framework for supervisory control of stochastic discrete-event systems with unrestricted nondeterminism. We intend to develop the proposed framework in four phases outlined in this paper. Here, we study in detail the first step which comprises investigation of the underlying model and development of a corresponding notion of controllability. The model of choice is termed Interactive Markov Chains, which is a natural semantic model for stochastic variants of process calculi and Petri nets, and it requires a process-theoretic treatment of supervisory control theory. To this end, we define a new behavioral preorder, termed Markovian partial bisimulation, that captures the notion of controllability while preserving correct stochastic behavior. We provide a sound and ground-complete axiomatic characterization of the preorder and, based on it, we define two notions of controllability. The first notion conforms to the traditional way of reasoning about supervision and control requirements, whereas in the second proposal we abstract from the stochastic behavior of the system. For the latter, we intend to separate the concerns regarding synthesis of an optimal supervisor. The control requirements cater only for controllability, whereas we ensure that the stochastic behavior of the supervised plant meets the performance specification by extracting directive optimal supervisors. © 2011 IEEE. Source


Vaesen K.,TU Eindhoven
Behavioral and Brain Sciences | Year: 2012

This article has two goals. The first is to assess, in the face of accruing reports on the ingenuity of great ape tool use, whether and in what sense human tool use still evidences unique, higher cognitive ability. To that effect, I offer a systematic comparison between humans and nonhuman primates with respect to nine cognitive capacities deemed crucial to tool use: enhanced hand-eye coordination, body schema plasticity, causal reasoning, function representation, executive control, social learning, teaching, social intelligence, and language. Since striking differences between humans and great apes stand firm in eight out of nine of these domains, I conclude that human tool use still marks a major cognitive discontinuity between us and our closest relatives. As a second goal of the paper, I address the evolution of human technologies. In particular, I show how the cognitive traits reviewed help to explain why technological accumulation evolved so markedly in humans, and so modestly in apes. © 2012 Cambridge University Press. Source


Song L.Z.,University of Missouri - Kansas City | Song M.,University of Missouri - Kansas City | Benedetto C.A.D.,Temple University | Benedetto C.A.D.,TU Eindhoven
Journal of Operations Management | Year: 2011

Successfully launching its first product is critical to a new venture's continued success, yet the new venture has relatively few financial or human resources to support its marketing or R&D activities. It is thus important for the new venture to attract funding from external investors such as suppliers. Although the operations management (OM) literature addresses product development and supplier involvement in large firms, few studies have examined the relationship between suppliers and new ventures. This study examines how new ventures can complement their resources and experience with supplier investment, to build positional advantages for their first product and increase marketplace performance. We integrate the OM and entrepreneurship literatures to develop a model based on the resource-based view of the firm, in which the new venture uses external and internal resources to achieve positional advantages of product innovativeness, supplier involvement in production, and product launch quality. We also investigate how market potential moderates the relationship between positional advantages and performance. We empirically test our model using data from 711 new ventures. We find that it is beneficial for a new venture to involve suppliers in production of the first product, and that market potential positively moderates the relationship of product launch quality and performance. However, the results reported here also reveal several surprising results challenging traditional views. Developing a highly innovative first product is much less, not more, important than achieving a high quality first-product launch. Increasing product innovativeness does not necessarily lead to high product performance for new ventures. For a small market with low growth potential, product innovativeness has a negative, not positive, effect on first product performance. We discuss managerial implications of our findings. © 2010 Elsevier B.V. Source


Bellouard Y.,TU Eindhoven | Hongler M.-O.,Ecole Polytechnique Federale de Lausanne
Optics Express | Year: 2011

By continuously scanning a femtosecond laser beam across a fused silica specimen, we demonstrate the formation of self-organized bubbles buried in the material. Rather than using high intensity pulses and high numerical aperture to induce explosions in the material, here bubbles form as a consequence of cumulative energy deposits. We observe a transition between chaotic and self-organized patterns at high scanning rate (above 10 mm/s). Through modeling the energy exchange, we outline the similarities of this phenomenon with other non-linear dynamical systems. Furthermore, we demonstrate with this method the high-speed writing of two- and three-dimensional bubble "crystals" in bulk silica. © 2011 Optical Society of America. Source


Vreman A.W.,Akzo Nobel | Kuerten J.G.M.,TU Eindhoven | Kuerten J.G.M.,University of Twente
Physics of Fluids | Year: 2014

Direct numerical simulation (DNS) databases are compared to assess the accuracy and reproducibility of standard and non-standard turbulence statistics of incompressible plane channel flow at Reτ = 180. Two fundamentally different DNS codes are shown to produce maximum relative deviations below 0.2% for the mean flow, below 1% for the root-mean-square velocity and pressure fluctuations, and below 2% for the three components of the turbulent dissipation. Relatively fine grids and long statistical averaging times are required. An analysis of dissipation spectra demonstrates that the enhanced resolution is necessary for an accurate representation of the smallest physical scales in the turbulent dissipation. The results are related to the physics of turbulent channel flow in several ways. First, the reproducibility supports the hitherto unproven theoretical hypothesis that the statistically stationary state of turbulent channel flow is unique. Second, the peaks of dissipation spectra provide information on length scales of the small-scale turbulence. Third, the computed means and fluctuations of the convective, pressure, and viscous terms in the momentum equation show the importance of the different forces in the momentum equation relative to each other. The Galilean transformation that leads to minimum peak fluctuation of the convective term is determined. Fourth, an analysis of higher-order statistics is performed. The skewness of the longitudinal derivative of the streamwise velocity is stronger than expected (-1.5 at y+ = 30). This skewness and also the strong near-wall intermittency of the normal velocity are related to coherent structures. © 2014 AIP Publishing LLC. Source
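
To make one of the quoted statistics concrete, the derivative skewness is the normalised third moment of ∂u/∂x, computed below on a synthetic one-dimensional signal (purely illustrative; not the DNS fields of the paper):

    import numpy as np

    rng = np.random.default_rng(2)
    u = np.cumsum(rng.standard_normal(10000)) * 0.01   # synthetic 1-D "velocity" signal
    dudx = np.gradient(u, 0.01)                        # longitudinal derivative

    # Skewness S = <(du/dx)^3> / <(du/dx)^2>^(3/2); the paper reports about -1.5 at y+ = 30.
    fluct = dudx - dudx.mean()
    S = np.mean(fluct**3) / np.mean(fluct**2)**1.5
    print(S)   # ~0 for this Gaussian surrogate, unlike real near-wall turbulence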


Katzav J.,TU Eindhoven
Studies in History and Philosophy of Science Part B - Studies in History and Philosophy of Modern Physics | Year: 2014

I bring out the limitations of four important views of what the target of useful climate model assessment is. Three of these views are drawn from philosophy. They include the views of Elisabeth Lloyd and Wendy Parker, and an application of Bayesian confirmation theory. The fourth view I criticise is based on the actual practice of climate model assessment. In bringing out the limitations of these four views, I argue that an approach to climate model assessment that neither demands too much of such assessment nor threatens to be unreliable will, in typical cases, have to aim at something other than the confirmation of claims about how the climate system actually is. This means, I suggest, that the Intergovernmental Panel on Climate Change's (IPCC's) focus on establishing confidence in climate model explanations and predictions is misguided. So too, it means that standard epistemologies of science with pretensions to generality, e.g., Bayesian epistemologies, fail to illuminate the assessment of climate models. I go on to outline a view that neither demands too much nor threatens to be unreliable, a view according to which useful climate model assessment typically aims to show that certain climatic scenarios are real possibilities and, when the scenarios are determined to be real possibilities, partially to determine how remote they are. © 2014 Elsevier Ltd. Source


Laarhoven T.,TU Eindhoven
2013 51st Annual Allerton Conference on Communication, Control, and Computing, Allerton 2013 | Year: 2013

Inspired by recent results from collusion-resistant traitor tracing, we provide a framework for constructing efficient probabilistic group testing schemes. In the traditional group testing model, our scheme asymptotically requires T ∼ 2K ln N tests to find (with high probability) the correct set of K defectives out of N items. The framework is also applied to several noisy group testing and threshold group testing models, often leading to improvements over previously known results, but we emphasize that this framework can be applied to other variants of the classical model as well, both in adaptive and in non-adaptive settings. © 2013 IEEE. Source
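
A quick numerical reading of the asymptotic test count T ~ 2K ln N quoted above (plain arithmetic, ignoring constants and finite-size corrections):

    import math

    N = 10**6   # items (example value)
    K = 10      # defectives (example value)
    T = 2 * K * math.log(N)
    print(round(T))   # ~276 tests, versus N = 1,000,000 individual tests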


Kaptein M.,TU Eindhoven
Journal of Ambient Intelligence and Smart Environments | Year: 2012

On March 29, 2012 the author successfully defended his PhD thesis entitled Personalized persuasion in Ambient Intelligence. The PhD degree was awarded with honors. © 2012 - IOS Press and the authors. All rights reserved. Source


Kraemer F.,TU Eindhoven
Bioethics | Year: 2013

This article deals with the euthanasia debate in light of new life-sustaining technologies such as the left ventricular assist device (LVAD). The question arises: does the switching off of a LVAD by a doctor upon the request of a patient amount to active or passive euthanasia, i.e. to 'killing' or to 'letting die'? The answer hinges on whether the device is to be regarded as a proper part of the patient's body or as something external. We usually regard the switching off of an internal device as killing, whereas the deactivation of an external device is seen as 'letting die'. The case is notoriously difficult to decide for hybrid devices such as LVADs, which are partly inside and partly outside the patient's body. Additionally, on a methodological level, I will argue that the 'ontological' arguments from analogy given for both sides are problematic. Given the impasse facing the ontological arguments, complementary phenomenological arguments deserve closer inspection. In particular, we should consider whether phenomenologically the LVAD is perceived as a body part or as an external device. I will support the thesis that the deactivation of a LVAD is to be regarded as passive euthanasia if the device is not perceived by the patient as a part of the body proper. © 2011 Blackwell Publishing Ltd. Source


De Waele A.T.A.M.,TU Eindhoven
Journal of Low Temperature Physics | Year: 2011

This paper deals with the basics of cryocoolers and related thermodynamic systems. The treatment is based on the first and second law of thermodynamics for inhomogeneous, open systems using enthalpy flow, entropy flow, and entropy production. Various types of machines, which use an oscillating gas flow, are discussed such as: Stirling refrigerators, GM coolers, pulse-tube refrigerators, and thermoacoustic coolers and engines. Furthermore the paper deals with Joule-Thomson and dilution refrigerators which use a constant flow of the working medium. © 2011 The Author(s). Source


Janssen A.J.E.M.,TU Eindhoven
Journal of the European Optical Society | Year: 2011

Several quantities related to the Zernike circle polynomials admit an expression, via the basic identity in the diffraction theory of Nijboer and Zernike, as an infinite integral involving the product of two or three Bessel functions. In this paper these integrals are identified and evaluated explicitly for the cases of (a) the expansion coefficients of scaled-and-shifted circle polynomials, (b) the expansion coefficients of the correlation of two circle polynomials, (c) the Fourier coefficients occurring in the cosine representation of the radial part of the circle polynomials. Source
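
For background, the radial part of the Zernike circle polynomials mentioned above has the standard finite-series form, sketched below (textbook formula; the paper's Bessel-integral evaluations are not reproduced):

    from math import factorial

    def zernike_radial(n, m, rho):
        """Standard radial Zernike polynomial R_n^m(rho), with n - |m| even."""
        m = abs(m)
        if (n - m) % 2:
            return 0.0
        total = 0.0
        for k in range((n - m) // 2 + 1):
            c = ((-1) ** k * factorial(n - k)
                 / (factorial(k) * factorial((n + m) // 2 - k) * factorial((n - m) // 2 - k)))
            total += c * rho ** (n - 2 * k)
        return total

    print(zernike_radial(2, 0, 0.5))   # R_2^0(rho) = 2 rho^2 - 1  ->  -0.5
    print(zernike_radial(4, 2, 1.0))   # R_n^m(1) = 1 for all valid (n, m)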


Bouyahyi M.,SABIC | Duchateau R.,SABIC | Duchateau R.,TU Eindhoven
Macromolecules | Year: 2014

This contribution describes our recent results regarding the metal-catalyzed ring-opening polymerization of pentadecalactone and its copolymerization with ε-caprolactone involving single-site metal complexes based on aluminum, zinc, and calcium. Under the right conditions (i.e., monomer concentration, catalyst type, catalyst/initiator ratio, reaction time, etc.), high molecular weight polypentadecalactone with Mn up to 130 000 g mol⁻¹ could be obtained. The copolymerization of a mixture of ε-caprolactone and pentadecalactone yielded random copolymers. Zinc and calcium-catalyzed copolymerization using a sequential feed of pentadecalactone followed by ε-caprolactone afforded perfect block copolymers. The blocky structure was retained even for prolonged times at 100 °C after full conversion of the monomers, indicating that transesterification is negligible. On the other hand, in the presence of the aluminum catalyst, the initially formed block copolymers gradually randomized as a result of intra- and intermolecular transesterification reactions. The formation of homopolymers and copolymers with different architectures has been evidenced by HT-SEC chromatography, NMR, DSC and MALDI-ToF-MS. © 2014 American Chemical Society. Source


Hwang W.R.,Gyeongsang National University | Hulsen M.A.,TU Eindhoven
Macromolecular Materials and Engineering | Year: 2011

The alignment and the aggregation of particles in a viscoelastic fluid in simple shear flow are qualitatively analyzed using a two-dimensional direct numerical simulation. Depending on the shear thinning, solvent viscosity, and Weissenberg number, a typical sequence in structural transitions from random particle configuration to string formation is found, with clustering and clustered string formation in between. The solvent viscosity and the Weissenberg number, the ratio of normal stress to shear stress, are found to be the most influential parameters for the onset of string formation. The influence of shear thinning is less clear. More shear thinning seems to promote string formation if the stress ratio is constant. The angular velocity of the particles is reduced by approximately 60% when particles form a string, independent of the parameters used. © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. Source


Borden M.J.,University of Texas at Austin | Hughes T.J.R.,University of Texas at Austin | Landis C.M.,University of Texas at Austin | Verhoosel C.V.,TU Eindhoven
Computer Methods in Applied Mechanics and Engineering | Year: 2014

Phase-field models based on the variational formulation for brittle fracture have recently been gaining popularity. These models have proven capable of accurately and robustly predicting complex crack behavior in both two and three dimensions. In this work we propose a fourth-order model for the phase-field approximation of the variational formulation for brittle fracture. We derive the thermodynamically consistent governing equations for the fourth-order phase-field model by way of a variational principle based on energy balance assumptions. The resulting model leads to higher regularity in the exact phase-field solution, which can be exploited by the smooth spline function spaces utilized in isogeometric analysis. This increased regularity improves the convergence rate of the numerical solution and opens the door to higher-order convergence rates for fracture problems. We present an analysis of our proposed theory and numerical examples that support this claim. We also demonstrate the robustness of the model in capturing complex three-dimensional crack behavior. © 2014 Elsevier B.V. Source


Loos J.,University of Glasgow | Loos J.,TU Eindhoven | Loos J.,Dutch Polymer Institute
Materials Today | Year: 2010

Printable polymer or hybrid solar cells (PSCs) have the potential to become one of the leading technologies of the 21st century in conversion of sunlight to electrical energy. Because of their ease of processing from solution, fast and low-cost mass production of devices is possible in a roll-to-roll printing fashion. The performance of such printed devices, in turn, is determined to a large extent by the three-dimensional organization of the photoactive layer, i.e. the layer where light is absorbed and converted into free electrical charges, and its contacts with the charge collecting electrodes. In this review I briefly introduce our current understanding of morphology-performance relationships in PSCs, with specific focus on electron tomography as an analytical tool providing volume information with nanometer resolution. © 2010 Elsevier Ltd. Source


Janssen A.J.E.M.,TU Eindhoven
Journal of the European Optical Society | Year: 2011

The integrals occurring in optical diffraction theory under conditions of partial coherence have the form of an incomplete autocorrelation integral of the pupil function of the optical system. The incompleteness is embodied by a spatial coherence function of limited extent. In the case of circular optical systems and coherence functions supported by a disk, this gives rise to Hopkins' 3-circle integrals. In this paper, a computation scheme for these integrals (initially with coherence functions that are constant on their disks) is proposed where the required integral is expressed semi-analytically in the Zernike expansion coefficients of the pupil function. To this end, the Zernike expansion coefficients of a shifted pupil function restricted to the coherence disk are expressed in terms of the pupil function's Zernike expansion coefficients. Next, the required integral is expressed as an infinite series involving two sets of Zernike coefficients of restricted pupils using Parseval's theorem for orthogonal series. Due to a convenient separation of the radial parameters and the spatial variables, the method avoids a cumbersome administration involving separate consideration of various overlap situations. The computation method is extended to the case of coherence functions that are not necessarily constant on their supporting disks by using a result on linearization of the product of two Zernike circle polynomials involving Wigner coefficients. Source


Brouwers J.J.H.,TU Eindhoven
Physica D: Nonlinear Phenomena | Year: 2011

A theoretical analysis is presented of the response of a lightly and nonlinearly damped mass-spring system in which the spring constant contains a small randomly fluctuating component. Damping is represented by a combination of linear and nonlinear power-law damping. System response to some initial disturbance at time zero is described by a sinusoidal wave whose amplitude and phase vary slowly and randomly with time. Leading order formulations for the equations of amplitude and phase are obtained through the application of methods of stochastic averaging of Stratonovich. The equations of amplitude and phase are given in two versions: Fokker-Planck equations for transient probability and Langevin equations for response in the time domain. Solutions in closed form of these equations are derived by methods of mathematical and theoretical physics involving higher transcendental functions. They are used to study the behavior of system response for ever-increasing time, applying asymptotic methods of analysis such as the method of steepest descent or saddle-point method. It is found that system behavior depends on the power density of the parametric excitation at twice the natural frequency and on the magnitude and form of the damping. Depending on these parameters, different types of system behavior are found to be possible: response which decays exponentially to zero, response which leads to a stationary state of random behavior, and response which can either grow unboundedly or approach zero in a finite time. © 2011 Elsevier B.V. All rights reserved. Source


Leijtens X.,TU Eindhoven
IET Optoelectronics | Year: 2011

Jeppix is the European platform that is aiming to offer access to Indium Phosphide-based technology for manufacturing of photonic integrated circuits. This is enabled by using a generic integration technology. The authors outline the current status and developments. © 2011 The Institution of Engineering and Technology. Source


Van De Vosse F.N.,TU Eindhoven | Stergiopulos N.,Ecole Polytechnique Federale de Lausanne
Annual Review of Fluid Mechanics | Year: 2011

The beating heart creates blood pressure and flow pulsations that propagate as waves through the arterial tree and are reflected at transitions in arterial geometry and elasticity. Waves carry information about the matter in which they propagate. Therefore, modeling of arterial wave propagation extends our knowledge about the functioning of the cardiovascular system and provides a means to diagnose disorders and predict the outcome of medical interventions. In this review we focus on the physical and mathematical modeling of pulse wave propagation, based on general fluid dynamical principles. In addition we present potential applications in cardiovascular research and clinical practice. Models of short- and long-term adaptation of the arterial system and methods that deal with uncertainties in personalized model parameters and boundary conditions are briefly discussed, as they are believed to be major topics for further study and will boost the significance of arterial pulse wave modeling even more. © 2011 by Annual Reviews. All rights reserved. Source


Etman L.F.P.,TU Eindhoven
Structural and Multidisciplinary Optimization | Year: 2010

We reflect on the convergence and termination of optimization algorithms based on convex and separable approximations using two recently proposed strategies, namely a trust region with filtered acceptance of the iterates, and conservatism. We then propose a new strategy for convergence and termination, denoted filtered conservatism, in which the acceptance or rejection of an iterate is determined using the nonlinear acceptance filter. However, if an iterate is rejected, we increase the conservatism of every unconservative approximation, rather than reducing the trust region. Filtered conservatism aims to combine the salient features of trust region strategies with nonlinear acceptance filters on the one hand, and conservatism on the other. In filtered conservatism, the nonlinear acceptance filter is used to decide if an iterate is accepted or rejected. This allows for the acceptance of infeasible iterates, which would not be accepted in a method based on conservatism. If however an iterate is rejected, the trust region need not be decreased; it may be kept constant. Convergence is then effected by increasing the conservatism of only the unconservative approximations in the (large, constant) trust region, until the iterate becomes acceptable to the filter. Numerical results corroborate the accuracy and robustness of the method. © The Author(s) 2010. Source


Shivamoggi B.K.,TU Eindhoven
European Physical Journal D | Year: 2011

Beltrami states in several models of plasma dynamics - the incompressible magnetohydrodynamic (MHD) model, the barotropic compressible MHD model, the incompressible Hall MHD model, the barotropic compressible Hall MHD model, the electron MHD model, and the barotropic compressible Hall MHD with electron inertia model - are considered. Notwithstanding the diversity of the physics underlying the various models, the Beltrami states are shown to exhibit some common features, such as a certain robustness with respect to plasma compressibility effects (albeit under the barotropy assumption) and the Bernoulli condition. The Beltrami states for these models are deduced by minimizing the appropriate total energy while keeping the appropriate total helicity constant. © 2011 EDP Sciences, SIF, Springer-Verlag Berlin Heidelberg. Source


Darvishian M.,University of Groningen | Bijlsma M.J.,Unit of PharmacoEpidemiology and PharmacoEconomics PE2 | Hak E.,University of Groningen | van den Heuvel E.R.,University of Groningen | van den Heuvel E.R.,TU Eindhoven
The Lancet Infectious Diseases | Year: 2014

Background: The application of test-negative design case-control studies to assess the effectiveness of influenza vaccine has increased substantially in the past few years. The validity of these studies is predicated on the assumption that confounding bias by risk factors is limited by design. We aimed to assess the effectiveness of influenza vaccine in a high-risk group of elderly people. Methods: We searched the Cochrane library, Medline, and Embase up to July 13, 2014, for test-negative design case-control studies that assessed the effectiveness of seasonal influenza vaccine against laboratory confirmed influenza in community-dwelling people aged 60 years or older. We used generalised linear mixed models, adapted for test-negative design case-control studies, to estimate vaccine effectiveness according to vaccine match and epidemic conditions. Findings: 35 test-negative design case-control studies with 53 datasets met inclusion criteria. Seasonal influenza vaccine was not significantly effective during local virus activity, irrespective of vaccine match or mismatch to the circulating viruses. Vaccination was significantly effective against laboratory confirmed influenza during sporadic activity (odds ratio [OR] 0.69, 95% CI 0.48-0.99) only when the vaccine matched. Additionally, vaccination was significantly effective during regional (match: OR 0.42, 95% CI 0.30-0.60; mismatch: OR 0.57, 95% CI 0.41-0.79) and widespread (match: 0.54, 0.46-0.62; mismatch: OR 0.72, 95% CI 0.60-0.85) outbreaks. Interpretation: Our findings show that in elderly people, irrespective of vaccine match, seasonal influenza vaccination is effective against laboratory confirmed influenza during epidemic seasons. Efforts should be renewed worldwide to further increase uptake of the influenza vaccine in the elderly population. Funding: None. © 2014 Elsevier Ltd. Source


Hlalele L.,Stellenbosch University | Klumperman B.,Stellenbosch University | Klumperman B.,TU Eindhoven
Macromolecules | Year: 2011

The combination of in situ ¹H NMR and in situ ³¹P NMR was used to study the nitroxide-mediated copolymerization of styrene and n-butyl acrylate. The alkoxyamine MAMA-DEPN was employed to initiate and mediate the copolymerization. The nature of the ultimate/terminal monomer units of dormant polymer chains was identified and quantified by in situ ³¹P NMR. Simulations of the styrene and n-butyl acrylate copolymerization mediated by DEPN were investigated using the Predici software package. The rate coefficients of reversible deactivation of chains with n-butyl acrylate as the terminal unit were estimated via parameter estimation studies using Predici. Good correlations were obtained between experimental data and simulated data. © 2011 American Chemical Society. Source


Nakano Y.,Nara Institute of Science and Technology | Nakano Y.,TU Eindhoven | Fujiki M.,Nara Institute of Science and Technology
Macromolecules | Year: 2011

Circularly polarized (CP) light may play key roles in the migration and delocalization of photoexcited energy in optically active macroscopic aggregates of chiral chlorophylls surrounded by an aqueous fluid in the chloroplasts under incoherent unpolarized sunlight. Learning from the chiral fluid biosystem, we designed artificial polymer aggregates of three highly luminescent helical polysilanes, 1-S, 2-S, and 2-R (Chart 1). Under specific conditions (molecular weights and good-to-poor solvent ratio), 1-S aggregates of ∼5 μm in an organic fluid generated efficient circularly polarized luminescence (CPL) with gCPL = -0.7 at 330 nm while retaining a high quantum efficiency (φPL) of ∼53% at room temperature under incoherent unpolarized photoexcitation at 290 nm. This huge gCPL value was the consequence of the intense bisignate circular dichroism (CD) signals (gCD = -0.35 at 325 nm and +0.31 at 313 nm) due to coupled oscillators with an electric-dipole-allowed-transition origin. Also, 2-S and 2-R aggregates gave intense CD and CPL amplitudes almost identical to those of 1-S. The most critical factors for the CD/CPL enhancements were the molecular weights of 1-S, 2-S, and 2-R and the refractive index of the good/poor cosolvents. The former was connected to a long persistence length of ∼70 nm, characteristic of rod-like helical polysilanes. The latter was due to an efficient photoexcited-energy confinement effect of slow CP light in the aggregate. © 2011 American Chemical Society. Source
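For reference, the dissymmetry factors quoted above are commonly defined as follows (standard definitions, not specific to this paper):

    g_{\mathrm{CPL}} = \frac{2\,(I_L - I_R)}{I_L + I_R},
    \qquad
    g_{\mathrm{CD}} = \frac{2\,(\varepsilon_L - \varepsilon_R)}{\varepsilon_L + \varepsilon_R},

so that |g| = 2 corresponds to fully circularly polarized emission, which puts the reported gCPL = -0.7 into perspective.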


Koroglu H.,TU Eindhoven
Proceedings of the IEEE Conference on Decision and Control | Year: 2010

Attenuation of sinusoidal disturbances with uncertain and arbitrarily time-varying frequencies is considered for a plant that depends on online measurable parameters. The disturbances are modeled as the outputs of a neutrally stable exogenous system that depends on measurable as well as unmeasurable parameters. Solvability conditions are then derived in the form of parameter-dependent matrix inequalities, based on which a linear parameter-varying controller synthesis procedure is outlined. Alternative conditions are provided for the synthesis of a controller that has no dependence on the derivatives of the parameters. It is also clarified how the transient behavior of the controller can be improved. ©2010 IEEE. Source


Martens T.,University of Antwerp | Bogaerts A.,University of Antwerp | Van Dijk J.,TU Eindhoven
Applied Physics Letters | Year: 2010

In this letter we compare the effect of a radio-frequency sine, a low-frequency sine, a rectangular, and a pulsed dc voltage profile on the calculated electron production and power consumption in the dielectric barrier discharge. We also demonstrate, using calculated potential distribution profiles of high temporal and spatial resolution, how the pulsed dc discharge generates a secondary discharge pulse upon deactivation of the power supply. © 2010 American Institute of Physics. Source


Irmscher M.,TU Eindhoven
Journal of the Royal Society, Interface / the Royal Society | Year: 2013

The internalization of matter by phagocytosis is of key importance in the defence against bacterial pathogens and in the control of cancerous tumour growth. Despite the fact that phagocytosis is an inherently mechanical process, little is known about the forces and energies that a cell requires for internalization. Here, we use functionalized magnetic particles as phagocytic targets and track their motion while actuating them in an oscillating magnetic field, in order to measure the translational and rotational stiffnesses of the phagocytic cup as a function of time. The measured evolution of stiffness reveals a characteristic pattern with a pronounced peak preceding the finalization of uptake. The measured stiffness values and their time dependence can be interpreted with a model that describes the phagocytic cup as a prestressed membrane connected to an elastically deformable actin cortex. In the context of this model, the stiffness peak is a direct manifestation of a previously described mechanical bottleneck, and a comparison of model and data suggests that the membrane advances around the particle at a speed of about 20 nm s⁻¹. This approach is a novel way of measuring the progression of emerging phagocytic cups and their mechanical properties in situ and in real time. Source


Geilen M.,TU Eindhoven
Transactions on Embedded Computing Systems | Year: 2010

The Synchronous Dataflow (SDF) model of computation by Lee and Messerschmitt has become popular for modeling concurrent applications on a multiprocessor platform. It is used to obtain guaranteed, predictable performance. The model, on the other hand, is quite restrictive in its expressivity, making it less applicable to many modern, more dynamic applications. A common technique to deal with dynamic behavior is to consider different scenarios in separation. This analysis is, however, currently limited mainly to sequential applications. In this article, we present a new analysis approach that allows analysis of synchronous dataflow models across different scenarios of operation. The dataflow graphs corresponding to the different scenarios can be completely different. Execution times, consumption and production rates, and the structure of the SDF may change. Our technique allows one to derive or prove worst-case performance guarantees of the resulting model and as such extends the model-driven approach to designing predictable systems to significantly more dynamic applications and platforms. The approach is illustrated with three MP3 and MPEG-4 related case studies. © 2010 ACM. Source


Fabre B.,CNRS Jean Le Rond dAlembert Institute | Gilbert J.,CNRS Acoustic Lab of Du Maine University | Hirschberg A.,TU Eindhoven | Pelorson X.,CNRS GIPSA Laboratory
Annual Review of Fluid Mechanics | Year: 2011

We are interested in the quality of sound produced by musical instruments and their playability. In wind instruments, a hydrodynamic source of sound is coupled to an acoustic resonator. Linear acoustics can predict the pitch of an instrument. This can significantly reduce the trial-and-error process in the design of a new instrument. We consider deviations from the linear acoustic behavior and the fluid mechanics of the sound production. Real-time numerical solution of the nonlinear physical models is used for sound synthesis in so-called virtual instruments. Although reasonable analytical models are available for reeds, lips, and vocal folds, the complex behavior of flue instruments escapes a simple universal description. Furthermore, to predict the playability of real instruments and help phoneticians or surgeons analyze voice quality, we need more complex models. Source


Moodera J.S.,Massachusetts Institute of Technology | Koopmans B.,TU Eindhoven | Oppeneer P.M.,Uppsala University
MRS Bulletin | Year: 2014

Organic materials provide a unique platform for exploiting the spin of the electron - a field dubbed organic spintronics. Originally, this was mostly motivated by the notion that, because of weak spin-orbit coupling (due to the low-mass elements in organics) and small hyperfine coupling, organic matter typically displays a very long electron spin coherence time. More recently, however, it was found that organics provide a special class of spintronic materials for many other reasons - several of which are discussed throughout this issue. Over the past decade, there has been a growing interest in utilizing the molecular spin state as a quantum of information, aiming to develop multifunctional molecular spintronics for memory, sensing, and logic applications. The aim of this issue is to stimulate the interest of researchers by bringing to their attention the vast possibilities not only for unexpected science but also the enormous potential for developing new functionalities and applications. The six articles in this issue deal with some of the breakthrough work that has been ongoing in this field in recent years. © Materials Research Society 2014. Source


Koroglu H.,TU Eindhoven | Scherer C.W.,University of Stuttgart
International Journal of Robust and Nonlinear Control | Year: 2011

Attenuation of sinusoidal disturbances with uncertain and arbitrarily time-varying frequencies is considered in the form of a generalized asymptotic regulation problem. The disturbances are modeled as the outputs of a parameter-dependent, unexcited and neutrally stable exogenous system that evolves from nonzero initial conditions. The problem is considered for a plant that depends partially on the uncertain parameters. Moreover, both the plant and the exogenous system are allowed to have dependence on another parameter vector that is measurable during online operation. The problem is then formulated as the synthesis of a controller that is scheduled on the measurable parameter in a way to guarantee robust internal stability and attenuate the disturbance according to a desired profile in steady state. The main result of the paper is a synthesis procedure based on a convex optimization problem, which is identified by a set of parameter-dependent linear matrix inequalities and can be rendered tractable through standard relaxation schemes. It is also clarified how the transient behavior of the controller can be improved by some additional constraints. The order of the synthesized controller is equal to the order of the plant plus the order of the exogenous system. © 2010 John Wiley & Sons, Ltd. Source


Van Beurden M.C.,TU Eindhoven
Journal of the Optical Society of America A: Optics and Image Science, and Vision | Year: 2011

For block-shaped dielectric gratings with two-dimensional periodicity, a spectral-domain volume integral equation is derived in which explicit Fourier factorization rules are employed. The Fourier factorization rules are derived from a projection-operator framework and enhance the numerical accuracy of the method, while maintaining a low computational complexity of O(N log N) or better and a low memory demand of O(N). © 2011 Optical Society of America. Source


Van Brummelen E.H.,TU Eindhoven
International Journal for Numerical Methods in Fluids | Year: 2011

The basic subiteration method for fluid-structure interaction (FSI) problems is based on a partitioning of the fluid-structure system into a fluidic part and a structural part. The effect of the fluid on the structure can be represented by an added mass to the structural operator. This added mass can be identified as an upper bound on the norm or spectral radius of the Poincaré-Steklov operator of the fluid. The convergence behavior of the subiteration method depends sensitively on the ratio of the added mass to the actual structural mass. For FSI problems with large added-mass effects, the subiteration method is either unstable or its convergence behavior is prohibitively inefficient. In recent years, several more advanced partitioned iterative solution methods have been proposed for this class of problems, which use subiteration as a component. The rudimentary characterization of the Poincaré-Steklov operator provided by the added mass is, however, inadequate to analyze these methods. Moreover, this characterization is inappropriate for compressible flows. In this paper, we investigate the fine properties of the Poincaré-Steklov operators and of the corresponding subiteration operators for incompressible and compressible flow models and for two distinct structural operators. Based on the characteristic properties of the subiteration operators, we subsequently examine the convergence behavior of several partitioned iterative solution methods for FSI, viz. subiteration, subiteration in conjunction with underrelaxation, the modified-mass method, Aitken's method, and interface-GMRES and interface-Newton-Krylov methods. Copyright © 2010 John Wiley & Sons, Ltd. Source


Gu Z.Y.,VU University Amsterdam | Ubachs W.,VU University Amsterdam | Van De Water W.,TU Eindhoven
Optics Letters | Year: 2014

The spectral line shape of spontaneous Rayleigh-Brillouin scattering in CO2 is studied in a range of pressures. The spectrum is influenced by the bulk viscosity ηb, which is a relaxation phenomenon involving the internal degrees of freedom of the molecule. The associated relaxation rates can be compared to the frequency shift of the scattered light, which demands precise measurements of the spectral line shape. We find ηb = (5.7 ± 0.6) × 10⁻⁶ kg m⁻¹ s⁻¹ for the range of pressures p = 2-4 bar and for room temperature conditions. © 2014 Optical Society of America. Source


Koenraad P.M.,TU Eindhoven | Flatte M.E.,University of Iowa
Nature Materials | Year: 2011

The sensitive dependence of a semiconductor's electronic, optical and magnetic properties on dopants has provided an extensive range of tunable phenomena to explore and apply to devices. Recently it has become possible to move past the tunable properties of an ensemble of dopants to identify the effects of a solitary dopant on commercial device performance as well as locally on the fundamental properties of a semiconductor. New applications that require the discrete character of a single dopant, such as single-spin devices in the area of quantum information or single-dopant transistors, demand a further focus on the properties of a specific dopant. This article describes the huge advances in the past decade towards observing, controllably creating and manipulating single dopants, as well as their application in novel devices, which opens up the new field of solotronics (solitary dopant optoelectronics). © 2011 Macmillan Publishers Limited. All rights reserved. Source


Elwany A.H.,TU Eindhoven | Gebraeel N.Z.,Georgia Institute of Technology | Maillart L.M.,University of Pittsburgh
Operations Research | Year: 2011

Failure of many engineering systems usually results from a gradual and irreversible accumulation of damage, a degradation process. Most degradation processes can be monitored using sensor technology. The resulting degradation signals are usually correlated with the degradation process. A system is considered to have failed once its degradation signal reaches a prespecified failure threshold. This paper considers a replacement problem for components whose degradation process can be monitored using dedicated sensors. First, we present a stochastic degradation modeling framework that characterizes, in real time, the path of a component's degradation signal. These signals are used to predict the evolution of the component's degradation state. Next, we formulate a single-unit replacement problem as a Markov decision process and utilize the real-time signal observations to determine a replacement policy. We focus on exponentially increasing degradation signals and show that the optimal replacement policy for this class of problems is a monotonically nondecreasing control limit policy. Finally, the model is used to determine an optimal replacement policy by utilizing vibration-based degradation signals from a rotating machinery application. © 2011 INFORMS. Source
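As a toy illustration of the kind of policy described above, monitoring an exponentially increasing degradation signal and replacing once it crosses a control limit below the failure threshold, consider the following Python sketch; the signal model and all parameter values are hypothetical and are not those estimated in the paper.

    import numpy as np

    def simulate_replacement(theta=0.05, beta=0.08, sigma=0.05,
                             failure_threshold=1.0, control_limit=0.6,
                             dt=1.0, horizon=500, seed=0):
        """Toy exponential degradation path S(t) = theta * exp(beta*t + B(t)), monitored at
        discrete epochs; replace preventively once S(t) crosses the control limit."""
        rng = np.random.default_rng(seed)
        brownian = 0.0
        for step in range(1, int(horizon / dt) + 1):
            brownian += sigma * np.sqrt(dt) * rng.standard_normal()
            signal = theta * np.exp(beta * step * dt + brownian)
            if signal >= failure_threshold:
                return step * dt, "failure replacement"
            if signal >= control_limit:
                return step * dt, "preventive replacement"
        return float(horizon), "still operating at end of horizon"

    print(simulate_replacement())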


Blocken B.,TU Eindhoven | Derome D.,Empa - Swiss Federal Laboratories for Materials Science and Technology | Carmeliet J.,Empa - Swiss Federal Laboratories for Materials Science and Technology | Carmeliet J.,ETH Zurich
Building and Environment | Year: 2013

Rainwater runoff from building facades is a complex process governed by a wide range of urban, building, material and meteorological parameters. Given this complexity and the wide range of influencing parameters, it is not surprising that despite research efforts spanning over almost a century, wind-driven rain and rainwater runoff are still very active research subjects. Accurate knowledge of rainwater runoff is important for hygrothermal and durability analyses of building facades, assessment of indirect evaporative cooling by water films on facades to mitigate outdoor and indoor overheating, assessment of the self-cleaning action of facade surface coatings and leaching of particles from surface coatings that enter the water cycle as hazardous pollutants. Research on rainwater runoff is performed by field observations, field measurements, laboratory measurements and analytical and numerical modelling. While field observations are many, up to now, field experiments and modelling efforts are few and have been almost exclusively performed for plain facades without facade details. Field observations, often based on a posteriori investigation of the reasons for differential surface soiling, are important because they have provided and continue to provide very valuable qualitative information on runoff, which is very difficult to obtain in any other way. Quantitative measurements are increasing, but are still very limited in relation to the wide range of influencing parameters. To the knowledge of the authors, current state-of-the-art hygrothermal models do not yet contain runoff models. The development, validation and implementation of such models into hygrothermal models is required to supplement observational and experimental research efforts. © 2012 Elsevier Ltd. Source


Paredes J.,University of Amsterdam | Michels M.A.J.,TU Eindhoven | Bonn D.,University of Amsterdam
Physical Review Letters | Year: 2013

Many soft-matter systems show a transition between fluidlike and mechanically solidlike states when the volume fraction of the material, e.g., particles, drops, or bubbles is increased. Using an emulsion as a model system with a precisely controllable volume fraction, we show that the entire mechanical behavior in the vicinity of the jamming point can be understood if the mechanical transition is assumed to be analogous to a phase transition. We find power-law scalings in the distance to the jamming point, in which the parameters and exponents connect the behavior above and below jamming. We propose a simple two-state model with heterogeneous dynamics to describe the transition between jammed and mobile states. The model reproduces the steady-state and creep rheology and relates the power-law exponents to diverging microscopic time scales. © 2013 American Physical Society. Source


Brouwers H.J.H.,TU Eindhoven
Physical Review E - Statistical, Nonlinear, and Soft Matter Physics | Year: 2010

This paper addresses the relative viscosity of concentrated suspensions loaded with unimodal hard particles. So far, exact equations have only been put forward in the dilute limit, e.g., by Einstein for spheres. For larger concentrations, a number of phenomenological models for the relative viscosity have been presented, which depend on particle concentration only. Here, an original and exact closed-form expression is derived based on geometrical considerations that predicts the viscosity of a concentrated suspension of monosized particles. This master curve for the suspension viscosity is governed by the relative viscosity-concentration gradient in the dilute limit (for spheres the Einstein limit) and by random close packing of the unimodal particles in the concentrated limit. The analytical expression of the relative viscosity is thoroughly compared with experiments and simulations reported in the literature, concerning both dilute and concentrated suspensions of spheres, and good agreement is found. © 2010 The American Physical Society. Source
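The two limits that the abstract says govern the master curve are classical results (the paper's exact closed-form expression is not reproduced here): the Einstein dilute limit and the divergence at random close packing,

    \eta_r \;\simeq\; 1 + \tfrac{5}{2}\,\phi \quad (\phi \to 0),
    \qquad
    \eta_r \;\to\; \infty \quad (\phi \to \phi_{\mathrm{rcp}} \approx 0.64),

with φ the particle volume fraction and φ_rcp the random-close-packing fraction of monosized spheres.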


Groenwold A.A.,Stellenbosch University | Etman L.F.P.,TU Eindhoven
International Journal for Numerical Methods in Engineering | Year: 2010

In topology optimization, it is customary to use reciprocal-like approximations, which result in monotonically decreasing approximate objective functions. In this paper, we demonstrate that efficient quadratic approximations for topology optimization can also be derived, if the approximate Hessian terms are chosen with care. To demonstrate this, we construct a dual SAO algorithm for topology optimization based on a strictly convex, diagonal quadratic approximation to the objective function. Although the approximation is purely quadratic, it does contain essential elements of reciprocal-like approximations: for self-adjoint problems, our approximation is identical to the quadratic or second-order Taylor series approximation to the exponential approximation. We present both a single-point and a two-point variant of the new quadratic approximation. Copyright © 2009 John Wiley & Sons, Ltd. Source
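The building block described above, a strictly convex, diagonal quadratic approximation of the objective at the current iterate x^(k), has the generic form shown below; the specific curvature terms c_i^(k) that reproduce reciprocal-like behavior are the paper's contribution and are not given here.

    \tilde f(\mathbf{x}) \;=\; f(\mathbf{x}^{(k)})
      + \sum_i \left.\frac{\partial f}{\partial x_i}\right|_{\mathbf{x}^{(k)}} \big(x_i - x_i^{(k)}\big)
      + \tfrac{1}{2}\sum_i c_i^{(k)} \big(x_i - x_i^{(k)}\big)^2,
    \qquad c_i^{(k)} > 0 .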


Kirkels Y.,Fontys University of Applied Sciences | Duysters G.,TU Eindhoven
Research Policy | Year: 2010

This study focuses on SME networks of design and high-tech companies in Southeast Netherlands. By highlighting the personal networks of members across design and high-tech industries, the study attempts to identify the main brokers in this dynamic environment. In addition, we investigate whether specific characteristics are associated with these brokers. The main contribution of the paper lies in the fact that, in contrast to most other work, it is of a quantitative nature and focuses on brokers identified in an actual network. Studying the phenomenon of brokerage provides us with clear insights into the concept of brokerage regarding SME networks in different fields. In particular we highlight how third parties contribute to the transfer and development of knowledge. Empirical results show, among others, that the most influential brokers are found in the non-profit and science sector and have a long track record in their branch. © 2010 Elsevier B.V. All rights reserved. Source


Van Den Dries S.,TU Eindhoven | Wiering M.A.,University of Groningen
IEEE Transactions on Neural Networks and Learning Systems | Year: 2012

This paper describes a methodology for quickly learning to play games at a strong level. The methodology consists of a novel combination of three techniques, and a variety of experiments on the game of Othello demonstrates their usefulness. First, structures or topologies in neural network connectivity patterns are used to decrease the number of learning parameters and to deal more effectively with the structural credit assignment problem, which is to change individual network weights based on the obtained feedback. Furthermore, the structured neural networks are trained with the novel neural-fitted temporal difference (TD) learning algorithm to create a system that can exploit most of the training experiences and enhance learning speed and performance. Finally, we use the neural-fitted TD-leaf algorithm to learn more effectively when look-ahead search is performed by the game-playing program. Our extensive experimental study clearly indicates that the proposed method outperforms linear networks and fully connected neural networks or evaluation functions evolved with evolutionary algorithms. © 2012 IEEE. Source
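As a point of reference for the learning rule involved, the plain TD(0) value update, of which the neural-fitted TD and TD-leaf procedures mentioned above are batch, network-based refinements, takes the generic textbook form below (not the authors' code).

    def td0_update(value, state, next_state, reward, alpha=0.1, gamma=1.0):
        """One TD(0) step: move V(state) toward the bootstrapped target r + gamma * V(next_state)."""
        target = reward + gamma * value[next_state]
        value[state] += alpha * (target - value[state])
        return value

    # toy usage: two abstract board positions, no intermediate reward
    V = {"position_a": 0.0, "position_b": 0.0}
    td0_update(V, "position_a", "position_b", reward=0.0)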


Dennison M.,VU University Amsterdam | Sheinman M.,VU University Amsterdam | Storm C.,TU Eindhoven | Mackintosh F.C.,VU University Amsterdam
Physical Review Letters | Year: 2013

We study the elastic properties of thermal networks of Hookean springs. In the purely mechanical limit, such systems are known to have a vanishing rigidity when their connectivity falls below a critical, isostatic value. In this work, we show that thermal networks exhibit a nonzero shear modulus G well below the isostatic point and that this modulus exhibits an anomalous, sublinear dependence on temperature T. At the isostatic point, G increases as the square root of T, while we find G ∝ T^α below the isostatic point, where α ≈ 0.8. We show that this anomalous T dependence is entropic in origin. © 2013 American Physical Society. Source


Markovski J.,TU Eindhoven
IEEE International Conference on Automation Science and Engineering | Year: 2012

We propose a model-based systems engineering framework that couples supervisory control and verification. The framework has a process-theoretic backbone, which supports all required concepts, and it is implemented using state-of-the-art tools: Supremica for supervisor synthesis and UPPAAL for state-based verification. The process theory relies on partial bisimulation to model controllability and propositional signal emission to model a supervisory control loop with state-based observations. Supremica can model the signal observation by employing finite integer variables and action guards, whereas the supervised system can be consistently translated to UPPAAL by using a translation tool we developed. We illustrate the framework by revisiting an industrial case study of coordinating maintenance procedures of a high-tech Océ printer. © 2012 IEEE. Source


Piga D.,Istituto Dalle Molle di Studi sullIntelligenza Artificiale | Toth R.,TU Eindhoven
Automatica | Year: 2014

Parametric identification of linear time-invariant (LTI) systems with output-error (OE) type of noise model structures has a well-established theoretical framework. Different algorithms, like instrumental-variables-based approaches or prediction error methods (PEMs), have been proposed in the literature to compute a consistent parameter estimate for linear OE systems. Although the prediction error method provides a consistent parameter estimate also for nonlinear output-error (NOE) systems, it requires computing the solution of a nonconvex optimization problem. Therefore, an accurate initialization of the numerical optimization algorithms is required, otherwise they may get stuck in a local minimum and, as a consequence, the computed estimate of the system might not be accurate. In this paper, we propose an approach to obtain, in a computationally efficient fashion, a consistent parameter estimate for output-error systems with polynomial nonlinearities. The performance of the method is demonstrated through a simulation example. © 2014 Elsevier Ltd. All rights reserved. Source
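For reference, the linear output-error model structure mentioned above is commonly written as follows (standard identification notation; the paper's polynomial-nonlinear extension is not reproduced here):

    y(t) \;=\; \frac{B(q)}{F(q)}\,u(t) + e(t),

with q the forward shift operator, B and F polynomials in q^{-1}, and e(t) white measurement noise; a prediction error method estimates the coefficients by minimizing the sum of squared prediction errors.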


Mendling J.,Humboldt University of Berlin | Reijers H.A.,TU Eindhoven | Recker J.,Queensland University of Technology
Information Systems | Year: 2010

Few studies have investigated the factors contributing to the successful practice of process modeling. In particular, studies that contribute to the act of developing process models that facilitate communication and understanding are scarce. Although the value of process models is not only dependent on the choice of graphical constructs but also on their annotation with textual labels, there has been hardly any work on the quality of these labels. Accordingly, the research presented in this paper examines activity labeling practices in process modeling. Based on empirical data from process modeling practice, we identify and discuss different labeling styles and their use in process modeling praxis. We perform a grammatical analysis of these styles and use data from an experiment with process modelers to examine a range of hypotheses about the usability of the different styles. Based on our findings, we suggest specific programs of research towards better tool support for labeling practices. Our work contributes to the emerging stream of research investigating the practice of process modeling and thereby contributes to the overall body of knowledge about conceptual modeling quality. © 2009 Elsevier B.V. All rights reserved. Source


Chen W.,Key Laboratory of Silicate Materials Science and Engineering | Chen W.,Wuhan University of Technology | Brouwers H.J.H.,TU Eindhoven
Cement and Concrete Research | Year: 2010

The alkali-binding capacity of C-S-H in hydrated Portland cement pastes is addressed in this study. The amount of bound alkalis in C-S-H is computed based on the alkali partition theories first proposed by Taylor (1987) and later further developed by Brouwers and Van Eijk (2003). Experimental data reported in the literature concerning thirteen different recipes are analyzed and used as references. A three-dimensional computer-based cement hydration model (CEMHYD3D) is used to simulate the hydration of Portland cement pastes. These model predictions are used as inputs for deriving the alkali-binding capacity of the hydration product C-S-H in hydrated Portland cement pastes. It is found that, for Na+, the relation between the moles bound in C-S-H and its concentration in the pore solution is linear, while the binding of K+ in C-S-H complies with the Freundlich isotherm. New models are proposed for determining the alkali-binding capacities of C-S-H in hydrated Portland cement paste. An updated method for predicting the alkali concentrations in the pore solution of hydrated Portland cement pastes is developed. It is also used to investigate the effects of various factors (such as the water to cement ratio, clinker composition and alkali types) on the alkali concentrations. © 2010 Elsevier Ltd. All rights reserved. Source
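The two binding relations identified above have the generic forms of a linear isotherm (for Na+) and a Freundlich isotherm (for K+); in standard notation (the fitted coefficients of the paper are not reproduced):

    n_{\mathrm{Na,bound}} = k_{\mathrm{Na}}\,[\mathrm{Na}^+],
    \qquad
    n_{\mathrm{K,bound}} = K_F\,[\mathrm{K}^+]^{1/n}.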


Heemels W.P.M.H.,TU Eindhoven | Daafouz J.,University of Lorraine | Millerioux G.,University of Lorraine
IEEE Transactions on Automatic Control | Year: 2010

In this note, linear matrix inequality-based design conditions are presented for observer-based controllers that stabilize discrete-time linear parameter-varying systems in the situation where the parameters are not exactly known, but are only available with a finite accuracy. The presented framework allows tradeoffs to be made between the admissible level of parameter uncertainty on the one hand and the transient performance on the other. In addition, the level of parameter uncertainty can be maximized while still guaranteeing closed-loop stability. © 2010 IEEE. Source


Zhang B.,IBM | Van Leeuwaarden J.S.H.,TU Eindhoven | Zwart B.,Pna Innovations, Inc.
Operations Research | Year: 2012

In call centers it is crucial to staff the right number of agents so that the targeted service levels are met. These staffing problems typically lead to constraint satisfaction problems that are hard to solve. During the last decade, a beautiful many-server asymptotic theory has been developed to solve such problems for large call centers, and optimal staffing rules are known to obey the square-root staffing principle. This paper presents refinements to many-server asymptotics and this staffing principle for a Markovian queueing model with impatient customers. © 2012 INFORMS. Source
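The square-root staffing principle referred to above prescribes, in its classical form (the paper's refinements for impatient customers are not reproduced here), a number of agents

    N \;\approx\; R + \beta\sqrt{R}, \qquad R = \lambda/\mu,

where R is the offered load (arrival rate over service rate) and β > 0 is a service-grade parameter tuned to the target service level.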


Tajer A.,Wayne State University | Castro R.M.,TU Eindhoven | Wang X.,Columbia University
IEEE Transactions on Information Theory | Year: 2012

Cognitive radios process their sensed information collectively in order to opportunistically identify and access underutilized spectrum segments (spectrum holes). Due to the transient and rapidly varying nature of the spectrum occupancy, the cognitive radios (secondary users) must be agile in identifying the spectrum holes in order to enhance their spectral efficiency. We propose a novel adaptive procedure to reinforce the agility of the secondary users for identifying multiple spectrum holes simultaneously over a wide spectrum band. This is accomplished by successively exploring the set of potential spectrum holes and progressively allocating the sensing resources to the most promising areas of the spectrum. Such exploration and resource allocation results in conservative spending of the sensing resources and translates into very agile spectrum monitoring. The proposed successive and adaptive sensing procedure is in contrast to the more conventional approaches that distribute the sampling resources equally over the entire spectrum. Besides improved agility, the adaptive procedure requires less-stringent constraints on the power of the primary users to guarantee that they remain distinguishable from the environment noise and renders more reliable spectrum hole detection. © 1963-2012 IEEE. Source


Voyiadjis G.Z.,Louisiana State University | Peters R.,TU Eindhoven
Acta Mechanica | Year: 2010

This work addresses the size effect encountered in nanoindentation experiments. It is generally referred to as the indentation size effect (ISE). Classical descriptions of the ISE show a decrease in hardness for increasing indentation depth. Recently new experiments have shown that after the initial decrease, hardness increases with increasing indentation depth. After this increase, finally the hardness decreases with increasing indentation. This work reviews the existing theories describing the ISE and presents new formulations that incorporate the hardening effect into the ISE. Furthermore, indentation experiments have been performed on several metal samples, to see whether the hardening effect was an anomaly or not. Finally, numerical simulations are performed using the commercial program ABAQUS. © 2009 Springer-Verlag. Source


Rijksen D.O.,DWA | Wisse C.J.,DWA | van Schijndel A.W.M.,TU Eindhoven
Energy and Buildings | Year: 2010

This paper presents general guidelines for the required cooling capacity of an entire office building using thermally activated building systems (TABS). By activating the thermal mass of the building using pipes embedded in the floor, peak loads can be reduced. On-site measurements were performed to obtain the required cooling power of an entire building as well as individual zones. Besides this, the internal climate conditions of rooms and surface temperatures of the TABS were measured. The measured data were used to analyze the predictive performance of a simulation model. In order to acquire general guidelines for the required cooling capacity of a standard office building, simulations of an entire building were used to determine the impact of variable internal heat gains and different sized windows. The required cooling capacity was compared to the cooling capacity of a system without energy buffering (e.g. chilled ceiling panels). It was found that reductions up to 50% of the cooling capacity for a chiller can be achieved using TABS. The presented results within this paper can be used as design guidelines in the first stage of a design process. The results focus on temperate climates and were derived using Dutch climate conditions. © 2009 Elsevier B.V. All rights reserved. Source


Janssen J.H.,TU Eindhoven
Journal on Multimodal User Interfaces | Year: 2012

Empathy can be considered one of our most important social processes. In that light, empathic technologies are the class of technologies that can augment empathy between two or more individuals. To provide a basis for such technologies, a three component framework is presented based on psychology and neuroscience, consisting of cognitive empathy, emotional convergence, and empathic responding. These three components can be situated in affective computing and social signal processing and pose different opportunities for empathic technologies. To leverage these opportunities, automated measurement possibilities for each component are identified using (combinations of) facial expressions, speech, and physiological signals. Thereafter, methodological challenges are discussed, including ground truth measurements and empathy induction. Finally, a research agenda is presented for social signal processing. This framework can help to further research on empathic technologies and ultimately bring it to fruition in meaningful innovations. In turn, this could enhance empathic behavior, thereby increasing altruism, trust, cooperation, and bonding. © 2012 The Author(s). Source


Martens J.-B.,TU Eindhoven
Transactions on Interactive Intelligent Systems | Year: 2014

Progress in empirical research relies on adequate statistical analysis and reporting. This article proposes an alternative approach to statistical modeling that is based on an old but mostly forgotten idea, namely Thurstone modeling. Traditional statistical methods assume that either the measured data, in the case of parametric statistics, or the rank-order transformed data, in the case of nonparametric statistics, are samples from a specific (usually Gaussian) distribution with unknown parameters. Consequently, such methods should not be applied when this assumption is not valid. Thurstone modeling similarly assumes the existence of an underlying process that obeys an a priori assumed distribution with unknown parameters, but combines this underlying process with a flexible response mechanism that can be either continuous or discrete and either linear or nonlinear. One important advantage of Thurstone modeling is that traditional statistical methods can still be applied on the underlying process, irrespective of the nature of the measured data itself. Another advantage is that Thurstone models can be graphically represented, which helps to communicate them to a broad audience. A new interactive statistical package, Interactive Log Likelihood MOdeling (Illmo), was specifically designed for estimating and rendering Thurstone models and is intended to bring Thurstone modeling within the reach of persons who are not experts in statistics. Illmo is unique in the sense that it provides not only extensive graphical renderings of the data analysis results but also an interface for navigating between different model options. In this way, users can interactively explore different models and decide on an adequate balance between model complexity and agreement with the experimental data. Hypothesis testing on model parameters is also made intuitive and is supported by both textual and graphical feedback. The flexibility and ease of use of Illmo means that it is also potentially useful as a didactic tool for teaching statistics. © 2014 ACM 2160-6455/2014/03-ART4 $ 15.00. Source
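For readers unfamiliar with the underlying idea, Thurstone's law of comparative judgment in its simplest (Case V) form reads as follows (a standard textbook formulation, not Illmo's full model):

    P(\text{stimulus } i \text{ chosen over } j)
      \;=\; \Phi\!\left(\frac{\mu_i - \mu_j}{\sigma\sqrt{2}}\right),

where each stimulus i is represented by a Gaussian latent discriminal process with mean μ_i and common standard deviation σ, and Φ is the standard normal distribution function; Illmo generalizes the response mechanism that links such a latent process to the observed continuous or discrete data.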


Abb M.,University of Southampton | Bakkers E.P.A.M.,TU Eindhoven | Muskens O.L.,University of Southampton
Physical Review Letters | Year: 2011

We demonstrate ultrafast dephasing in the random transport of light through a layer consisting of strongly scattering GaP nanowires. Dephasing results in a nonlinear intensity modulation of individual pseudomodes which is 100 times larger than that of bulk GaP. Different contributions to the nonlinear response are separated by using total transmission, white-light frequency correlation, and statistical pseudomode analysis. A dephasing time of 1.2±0.2ps is found. Quantitative agreement is obtained with numerical model calculations which include photoinduced absorption and deformation of individual scatterers. Nonlinear dephasing of photonic eigenmodes opens up avenues for ultrafast control of random lasers, nanophotonic switches, and photon localization. © 2011 American Physical Society. Source


Zecevic J.,University Utrecht | Gommes C.J.,University of Liege | Friedrich H.,TU Eindhoven | Dejongh P.E.,University Utrecht | Dejong K.P.,University Utrecht
Angewandte Chemie - International Edition | Year: 2012

Quantitative insight into the three-dimensional morphology of complex zeolite Y mesopore networks was achieved by combining electron tomography and image processing. Properties could be studied that are not measurable by other techniques, such as the size distribution of the intact microporous domains. This has great relevance in descriptions of the molecular diffusion through zeolite crystals and, hence, catalytic activity and selectivity. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. Source


Adan I.,TU Eindhoven | Weiss G.,Haifa University
Operations Research | Year: 2012

Motivated by queues with multitype servers and multitype customers, we consider an infinite sequence of items of types C = (c_1, ⋯, c_I), and another infinite sequence of items of types S = (s_1, ⋯, s_J), and a bipartite graph G of allowable matches between the types. We assume that the types of items in the two sequences are independent and identically distributed (i.i.d.) with given probability vectors α, β. Matching the two sequences on a first-come, first-served basis defines a unique infinite matching between the sequences. For (c_i, s_j) ∈ G we define the matching rate r_{c_i,s_j} as the long-term fraction of (c_i, s_j) matches in the infinite matching, if it exists. We describe this system by a multidimensional countable Markov chain, obtain conditions for ergodicity, and derive its stationary distribution, which is, most surprisingly, of product form. We show that if the chain is ergodic, then the matching rates exist almost surely, and we give a closed-form formula to calculate them. We point out the connection of this model to some queueing models. © 2012 INFORMS. Source
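The matching rates can also be estimated empirically by directly simulating the model as described above: two i.i.d. type sequences matched over a bipartite compatibility graph. The Python sketch below uses one simple one-sided greedy reading of "first-come, first-served" (each c-item takes the earliest-arrived unmatched compatible s-item); the paper's formal construction of the unique infinite matching and its closed-form product-form rates are not reproduced.

    import random
    from collections import Counter, deque

    def fcfs_matching_rates(alpha, beta, G, n=100_000, seed=1):
        """Monte Carlo estimate of matching rates r[(c, s)] between two i.i.d. sequences
        of types, matched greedily in first-come, first-served order over the graph G."""
        rng = random.Random(seed)
        c_seq = rng.choices(list(alpha), weights=list(alpha.values()), k=n)
        s_seq = rng.choices(list(beta), weights=list(beta.values()), k=n)
        # positions of still-unmatched s-items, kept per type so the earliest one is found fast
        free = {s: deque(i for i, t in enumerate(s_seq) if t == s) for s in beta}
        counts = Counter()
        for c in c_seq:
            compatible = [s for s in beta if (c, s) in G and free[s]]
            if not compatible:
                continue                                    # no unmatched compatible s-item left
            s = min(compatible, key=lambda t: free[t][0])   # earliest-arrived compatible s-item
            free[s].popleft()
            counts[(c, s)] += 1
        total = sum(counts.values())
        return {pair: cnt / total for pair, cnt in counts.items()}

    # Example: two c-types and two s-types with an "N"-shaped compatibility graph
    print(fcfs_matching_rates({"c1": 0.5, "c2": 0.5}, {"s1": 0.5, "s2": 0.5},
                              G={("c1", "s1"), ("c1", "s2"), ("c2", "s2")}))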


OBJECTIVES: Novel quantitative measures of transpulmonary circulation status may allow the improvement of heart failure (HF) patient management. In this work, we propose a method for the assessment of the transpulmonary circulation using measurements from indicator time intensity curves, derived from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) series. The derived indicator dilution parameters in healthy volunteers (HVs) and HF patients were compared, and repeatability was assessed. Furthermore, we compared the parameters derived using the proposed method with standard measures of cardiovascular function, such as left ventricular (LV) volumes and ejection fraction. MATERIALS AND METHODS: In total, 19 HVs and 33 HF patients underwent a DCE-MRI scan on a 1.5 T MRI scanner using a T1-weighted spoiled gradient echo sequence. Image loops with 1 heartbeat temporal resolution were acquired in 4-chamber view during ventricular late diastole, after the injection of a 0.1-mmol gadoteriol bolus. In a subset of subjects (8 HFs, 2 HVs), a second injection of a 0.3-mmol gadoteriol bolus was performed with the same imaging settings. The study was approved by the local institutional review board. Indicator dilution curves were derived, averaging the MR signal within regions of interest in the right and left ventricle; parametric deconvolution was performed between the right and LV indicator dilution curves to identify the impulse response of the transpulmonary dilution system. The local density random walk model was used to parametrize the impulse response; pulmonary transit time (PTT) was defined as the mean transit time of the indicator. λ, related to the Péclet number (ratio between convection and diffusion) for the dilution process, was also estimated. RESULTS: Pulmonary transit time was significantly prolonged in HF patients (8.70 ± 1.87 seconds vs 6.68 ± 1.89 seconds in HV, P < 0.005) and even more so when normalized to subject heart rate (normalized PTT, 9.90 ± 2.16 vs 7.11 ± 2.17 in HV, dimensionless, P < 0.001). λ was significantly smaller in HF patients (8.59 ± 4.24 in HF vs 12.50 ± 17.09 in HV, dimensionless, P < 0.005), indicating a longer tail for the impulse response. Pulmonary transit time correlated well with established cardiovascular parameters (LV end-diastolic volume index, r = 0.61, P < 0.0001; LV ejection fraction, r = −0.64, P < 0.0001). The measurement of indicator dilution parameters was repeatable (correlation between estimates based on the 2 repetitions for PTT: r = 0.94, P < 0.001, difference between 2 repetitions 0.01 ± 0.60 seconds; for λ: r = 0.74, P < 0.01, difference 0.69 ± 4.39). CONCLUSIONS: Characterization of the transpulmonary circulation by DCE-MRI is feasible in HF patients and HVs. Significant differences are observed between indicator dilution parameters measured in HVs and HF patients; preliminary results suggest good repeatability for the proposed parameters. Copyright © 2016 Wolters Kluwer Health, Inc. All rights reserved. Source


Albertazzi L.,TU Eindhoven | Albertazzi L.,CNR Institute of Neuroscience | Bendikov M.,Weizmann Institute of Science | Baran P.S.,Scripps Research Institute
Journal of the American Chemical Society | Year: 2012

The detection of chemical or biological analytes upon molecular reactions relies increasingly on fluorescence methods, and there is a demand for more sensitive, more specific, and more versatile fluorescent molecules. We have designed long wavelength fluorogenic probes with a turn-ON mechanism based on a donor-two-acceptor π-electron system that can undergo an internal charge transfer to form new fluorochromes with longer π-electron systems. Several latent donors and multiple acceptor molecules were incorporated into the probe modular structure to generate versatile dye compounds. This new library of dyes had fluorescence emission in the near-infrared (NIR) region. Computational studies reproduced the observed experimental trends well and suggest factors responsible for high fluorescence of the donor-two-acceptor active form and the low fluorescence observed from the latent form. Confocal images of HeLa cells indicate a lysosomal penetration pathway of a selected dye. The ability of these dyes to emit NIR fluorescence through a turn-ON activation mechanism makes them promising candidate probes for in vivo imaging applications. © 2012 American Chemical Society. Source


Laminopathies, mainly caused by mutations in the LMNA gene, are a group of inherited diseases with a highly variable penetrance; i.e., the disease spectrum in persons with identical LMNA mutations ranges from symptom-free conditions to severe cardiomyopathy and progeria, leading to early death. LMNA mutations cause nuclear abnormalities and cellular fragility in response to cellular mechanical stress, but the genotype/phenotype correlations in these diseases remain unclear. Consequently, tools such as mutation analysis are not adequate for predicting the course of the disease. Here, we employ growth substrate stiffness to probe nuclear fragility in cultured dermal fibroblasts from a laminopathy patient with compound progeroid syndrome. We show that culturing of these cells on substrates with stiffness higher than 10 kPa results in malformations and even rupture of the nuclei, while culture on a soft substrate (3 kPa) protects the nuclei from morphological alterations and ruptures. No malformations were seen in healthy control cells at any substrate stiffness. In addition, analysis of the actin cytoskeleton organization in these laminopathy cells demonstrates that the onset of nuclear abnormalities correlates to an increase in cytoskeletal tension. Together, these data indicate that culturing of these LMNA-mutated cells on substrates with a range of different stiffnesses can be used to probe the degree of nuclear fragility. This assay may be useful in predicting patient-specific phenotypic development and in investigations on the underlying mechanisms of nuclear and cellular fragility in laminopathies. Source


Loffler W.,Leiden University | Broer D.J.,TU Eindhoven | Woerdman J.P.,Leiden University
Physical Review A - Atomic, Molecular, and Optical Physics | Year: 2011

We explore experimentally if light's orbital angular momentum (OAM) interacts with chiral nematic polymer films. Specifically, we measure the circular dichroism of such a material using light beams with different OAM. We investigate the case of strongly focused, nonparaxial light beams, where the spatial and polarization degrees of freedom are coupled. Within the experimental accuracy, we cannot find any influence of the OAM on the circular dichroism of cholesteric polymers. © 2011 American Physical Society. Source


Vreman A.W.,Akzo Nobel | Kuerten J.G.M.,TU Eindhoven | Kuerten J.G.M.,University of Twente
Physics of Fluids | Year: 2014

Statistical profiles of the first- and second-order spatial derivatives of velocity and pressure are reported for turbulent channel flow at Reτ = 590. The statistics were extracted from a high-resolution direct numerical simulation. To quantify the anisotropic behavior of fine-scale structures, the variances of the derivatives are compared with the theoretical values for isotropic turbulence. It is shown that appropriate combinations of first- and second-order velocity derivatives lead to (directional) viscous length scales without explicit occurrence of the viscosity in the definitions. To quantify the non-Gaussian and intermittent behavior of fine-scale structures, higher-order moments and probability density functions of spatial derivatives are reported. Absolute skewnesses and flatnesses of several spatial derivatives display high peaks in the near wall region. In the logarithmic and central regions of the channel flow, all first-order derivatives appear to be significantly more intermittent than in isotropic turbulence at the same Taylor Reynolds number. Since the nine variances of first-order velocity derivatives are the distinct elements of the turbulence dissipation, the budgets of these nine variances are shown, together with the budget of the turbulence dissipation. The comparison of the budgets in the near-wall region indicates that the normal derivative of the fluctuating streamwise velocity (∂u′/∂y) plays a more important role than other components of the fluctuating velocity gradient. The small-scale generation term formed by triple correlations of fluctuations of first-order velocity derivatives is analyzed. A typical mechanism of small-scale generation near the wall (around y+ = 1), the intensification of positive ∂u′/∂y by local strain fluctuation (compression in normal and stretching in spanwise direction), is illustrated and discussed. © 2014 AIP Publishing LLC. Source


Van Oorschot K.,TU Eindhoven
Journal of Product Innovation Management | Year: 2010

Stage-Gates is a widely used product innovation process for managing portfolios of new product development projects. The process enables companies to minimize uncertainty by helping them identify-at various stages or gates-the "wrong" projects before too many resources are invested. The present research looks at the question of whether using Stage-Gates may lead companies also to jettison some "right" projects (i.e., those that could have become successful). The specific context of this research involves projects characterized by asymmetrical uncertainty: where workload is usually underestimated at the start (because new development tasks or new customer requirements are discovered after the project begins) and where the development team's size is often overestimated (because assembling a productive team takes more time than anticipated). Software development projects are a perfect example. In the context of an underestimated workload and an understaffed team, the Stage-Gates philosophy of low investment at the start may set off a negative dynamic: low investments in the beginning lead to massive schedule pressure, which increases turnover in an already understaffed team and results in the team missing schedules for the first stage. This delay cascades into the second stage and eventually leads management to conclude that the project is not viable and should be abandoned. However, this paper shows how, with slightly more flexible thinking (i.e., initial Stage-Gates investments that are slightly less lean), some of the ostensibly "wrong" projects can actually become the "right" projects to pursue. Principal conclusions of the analysis are as follows: (1) adhering strictly to the Stage-Gates philosophy may well kill off viable projects and damage the firm's bottom line; (2) slightly relaxing the initial investment constraint can improve the dynamics of project execution; and (3) during a project's first stages, managers should focus more on ramping up their project team than on containing project costs. © 2010 Product Development & Management Association. Source


Rebrov E.V.,TU Eindhoven
Theoretical Foundations of Chemical Engineering | Year: 2010

Capillary hydrodynamics differs from that of macroscopic systems in three important respects: first, the ratio of the interfacial surface area of the phases to the volume they occupy is much larger; second, the flow is characterized by small Reynolds numbers, at which viscous forces dominate over inertial forces; and third, the microroughness and wettability of the channel wall exert a considerable influence on the flow pattern. Because of these differences, the correlations developed for larger-diameter tubes cannot be used to calculate the boundaries of the transitions between different flow regimes in microchannels. The present review analyzes published data on gas-liquid two-phase flow in capillaries of various shapes and systematizes the collected body of information. The geometry of the mixer and inlet section, the hydraulic diameter of the capillary, and the surface tension of the liquid exert the strongest influence on the positions of the boundaries between two-phase flow regimes. For a fixed mixer geometry, the positions of the regime-transition boundaries agree best when flow-regime maps are constructed using the gas and liquid Weber numbers as coordinate axes. © 2010 Pleiades Publishing, Ltd. Source
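The flow-regime maps mentioned above are parameterized by the Weber numbers of the two phases; in microchannel two-phase flow these are commonly defined with the superficial velocities (the exact convention of the review is not stated in the abstract):

\[ \mathrm{We}_{G} = \frac{\rho_{G}\, j_{G}^{2}\, d_{h}}{\sigma}, \qquad \mathrm{We}_{L} = \frac{\rho_{L}\, j_{L}^{2}\, d_{h}}{\sigma}, \]

where ρ_G and ρ_L are the gas and liquid densities, j_G and j_L the superficial velocities, d_h the hydraulic diameter of the capillary, and σ the surface tension of the liquid.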


Van Der Aalst W.,TU Eindhoven
Communications of the ACM | Year: 2012

Recent breakthroughs in process mining research make it possible to discover, analyze, and improve business processes based on event data. Activities executed by people, machines, and software leave trails in so-called event logs. What events (such as entering a customer order into SAP, a passenger checking in for a flight, a doctor changing a patient's dosage, or a planning agency rejecting a building permit) have in common is that all are recorded by information systems. Data volume and storage capacity have grown spectacularly over the past decade, while the digital universe and the physical universe are increasingly aligned. Business processes thus ought to be managed, supported, and improved based on event data rather than on subjective opinions or obsolete experience. Application of process mining in hundreds of organizations worldwide shows that managers and users alike tend to overestimate their knowledge of the processes they are involved in. © 2012 ACM. Source


Su R.,Nanyang Technological University | Woeginger G.,TU Eindhoven
Automatica | Year: 2011

In performance evaluation or supervisory control, we often encounter problems of determining the maximum or minimum string execution time for a finite language when estimating the worst-case or best-case performance. It has been shown in the literature that the time complexity for computing the maximum string execution time for a finite language is polynomial with respect to the size of an automaton recognizer of that language and the dimension of the corresponding resource matrices. In this paper we provide a more efficient algorithm to compute this maximum string execution time. We then show that it is NP-complete to determine the minimum string execution time. © 2011 Elsevier Ltd. All rights reserved. Source
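To make the worst-case (maximum) variant concrete, the sketch below computes the maximum execution time over all strings of a finite language under a strong simplification: the language is given as an acyclic automaton and each transition carries a fixed, additive duration. This sidesteps the heaps-of-pieces/resource-matrix machinery the paper actually relies on, so it only illustrates the longest-path flavor of the problem; the function name and data layout are made up.

from collections import defaultdict

def max_execution_time(transitions, initial, marked):
    """transitions: list of (source, target, duration); returns the maximum
    total duration over all paths from `initial` to any state in `marked`."""
    graph = defaultdict(list)
    indeg = defaultdict(int)
    states = {initial} | set(marked)
    for s, t, d in transitions:
        graph[s].append((t, d))
        indeg[t] += 1
        states.update((s, t))
    # Kahn's algorithm gives a topological order (finite language -> acyclic)
    order, queue = [], [s for s in states if indeg[s] == 0]
    while queue:
        s = queue.pop()
        order.append(s)
        for t, _ in graph[s]:
            indeg[t] -= 1
            if indeg[t] == 0:
                queue.append(t)
    # longest-path dynamic program in topological order
    best = {s: float("-inf") for s in states}
    best[initial] = 0.0
    for s in order:
        if best[s] == float("-inf"):
            continue
        for t, d in graph[s]:
            best[t] = max(best[t], best[s] + d)
    return max(best[m] for m in marked)

# Example: two strings with durations 1+2=3 and 2+2=4 -> prints 4
print(max_execution_time([("q0", "q1", 1), ("q1", "q2", 2),
                          ("q0", "q3", 2), ("q3", "q2", 2)], "q0", ["q2"]))

In the paper's heaps-of-pieces setting execution times are not simply additive, which is also why the minimum-time variant can be NP-complete there while being trivial in this additive sketch.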


Jorissen A.,TU Eindhoven | Fragiacomo M.,University of Sassari
Engineering Structures | Year: 2011

The paper discusses the implications of ductility in the design of timber structures under static and dynamic loading, including earthquakes. Timber is a material inherently brittle in bending and in tension, unless reinforced adequately. However, connections between timber members can exhibit significant ductility, if designed and detailed properly to avoid splitting. Hence it is possible to construct statically indeterminate systems made of brittle timber members connected with ductile connections that behave in a ductile fashion. The brittle members, however, must be designed for the overstrength related to the strength of the ductile connections to ensure the ductile failure mechanism will take place before the failure of the brittle members. The overstrength ratio, defined as the ratio between the 95th percentile of the connection strength distribution and the analytical prediction of the characteristic connection strength, was calculated for multiple doweled connections loaded parallel to the grain based on the results of an extensive experimental programme carried out on timber splice connections with 10.65 and 11.75 mm diameter grade 4.6 steel dowels. In this particular case the overstrength ratio was found to range from 1.2 to 2.1, and a value of 1.6 is recommended for ductile design. The paper illustrates the use of the elastic-perfectly plastic analysis with ductility control for a simple statically indeterminate structure and compares this approach with both the fully non-linear analysis and the more traditional linear elastic analysis. It is highlighted that plastic design should not be used for timber bridges since fatigue may lead to significant damage accumulation in the connections if plastic deformations have developed. The paper also shows that the current relative definitions of ductility, as a ratio between an ultimate deformation/displacement and the corresponding yield quantity, should be replaced by absolute definitions of ductility, for example the ultimate deformation/displacement, as the latter better represent the ductile structural behavior. © 2011 Elsevier Ltd. Source
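The overstrength ratio defined above can be written compactly as

\[ \gamma_{ov} = \frac{F_{0.95}}{F_{k,\mathrm{pred}}}, \]

where F_{0.95} is the 95th percentile of the experimentally obtained connection strength distribution and F_{k,pred} is the analytical prediction of the characteristic connection strength (these symbols are introduced here for convenience and are not taken from the paper). Designing the brittle timber members for γ_{ov} times the connection strength is what ensures that the ductile connections yield before the brittle members fail.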


Voss T.,TU Eindhoven | Scherpen J.M.A.,University of Groningen
Automatica | Year: 2011

In this paper we show how to perform stabilization and shape control for a finite dimensional model that recasts the dynamics of an inflatable space reflector in port-Hamiltonian (pH) form. We show how to derive a decentralized passivity-based controller which can be used to stabilize a 1D piezoelectric Timoshenko beam around a desired shape. Furthermore, we present simulation results obtained for the proposed decentralized control approach. © 2011 Elsevier Ltd. All rights reserved. Source


Aalst W.V.D.,TU Eindhoven | Aalst W.V.D.,Queensland University of Technology
IEEE Transactions on Services Computing | Year: 2013

Web services are an emerging technology to implement and integrate business processes within and across enterprises. Service orientation can be used to decompose complex systems into loosely coupled software components that may run remotely. However, the distributed nature of services complicates the design and analysis of service-oriented systems that support end-to-end business processes. Fortunately, services leave trails in so-called event logs and recent breakthroughs in process mining research make it possible to discover, analyze, and improve business processes based on such logs. Recently, the task force on process mining released the process mining manifesto. This manifesto is supported by 53 organizations, and 77 process mining experts contributed to it. The active participation from end-users, tool vendors, consultants, analysts, and researchers illustrates the growing significance of process mining as a bridge between data mining and business process modeling. In this paper, we focus on the opportunities and challenges for service mining, i.e., applying process mining techniques to services. We discuss the guiding principles and challenges listed in the process mining manifesto and also highlight challenges specific to service-oriented systems. © 2008-2012 IEEE. Source


van der Meer J.C.,TU Eindhoven
Journal of Geometry and Physics | Year: 2015

In this paper we review the connection between the Kepler problem and the harmonic oscillator. More specifically we consider how the Kepler system can be obtained through geometric reduction of the harmonic oscillator. We use the method of constructive geometric reduction and explicitly construct the reduction map in terms of invariants. The Kepler system is obtained in a particular chart on the reduced phase space. This reduction is the reverse of the well known KS regularization. Furthermore the reduced phase space connects to Moser's regularization. The integrals for the Kepler system given by the momentum and Laplace vectors, as well as the Delaunay elements, can now be easily related to symmetries of the harmonic oscillator. © 2015 Elsevier B.V. Source


Van Der Aalst W.M.P.,TU Eindhoven
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2012

The Business Process Management (BPM) conference series celebrates its tenth anniversary. This is a nice opportunity to reflect on a decade of BPM research. This paper describes the history of the conference series, enumerates twenty typical BPM use cases, and identifies six key BPM concerns: process modeling languages, process enactment infrastructures, process model analysis, process mining, process flexibility, and process reuse. Although BPM matured as a research discipline, there are still various important open problems. Moreover, despite the broad interest in BPM, the adoption of state-of-the-art results by software vendors, consultants, and end-users leaves much to be desired. Hence, the BPM discipline should not shy away from the key challenges and set clear targets for the next decade. © 2012 Springer-Verlag. Source


Smids J.,TU Eindhoven
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2012

The most important ethical question regarding persuasive technologies (PTs) is the voluntariness of the changes they bring about. Coercive technologies control their users by the application of direct force or a credible threat. Manipulative technologies control their users by influencing them in ways that the users are not aware of and cannot control. As a result, both violate the voluntariness condition of the standard definition of PTs. Any voluntariness assessment needs to consider whether there are external controlling influences and whether the user acts intentionally. © 2012 Springer-Verlag. Source


This article examines whether a mixture of virtual and real-life interaction - in contrast to purely virtual interaction - among some members of online communities for teachers is beneficial for all teachers' professional development in the whole community. Earlier research indicated that blended communities tend to face fewer trust and free rider problems. This study continues this stream of research by examining whether blended communities provide more practical benefits to teachers, both in terms of perceived improvements to their teaching capabilities as well as for their substantial understanding of their core topic. In addition, it is tested whether blended communities provide more information about vacancies, as teachers' mobility is regarded as too low in the EU. The analysis uses survey data from 26 online communities for secondary education teachers in The Netherlands. The communities are part of a virtual organization that hosts communities for teachers' professional development. The findings indeed show beneficial effects of blended communities. Moreover, the results modify earlier claims about the integration of online communication with offline interaction by showing that complete integration is unnecessary. This facilitates a scaling up of the use of online communities for teachers' professional development. © 2012 Elsevier Ltd. All rights reserved. Source


Willems F.,TU Eindhoven | Willems F.,TNO | Cloudt R.,TNO
IEEE Transactions on Control Systems Technology | Year: 2011

Selective catalytic reduction (SCR) is a promising diesel aftertreatment technology that enables low nitrogen oxides (NOx) tailpipe emissions with relatively low fuel consumption. Future emission legislation is pushing the boundaries for SCR control systems to achieve high NOx conversion within a tailpipe ammonia (NH3) slip constraint, and to provide robustness to meet in-use compliance requirements. This work presents a new adaptive control strategy that uses an ammonia feedback sensor and an online ammonia storage model. Experimental validation on a 12-liter heavy-duty diesel engine with a 34-liter zeolite SCR catalyst shows good performance and robustness against urea under- and over-dosage for both the European steady-state and transient test cycles. The new strategy is compared with a NOx sensor-based control strategy with cross-sensitivity compensation. It proved to be superior in terms of transient adaptation and taking an NH3 slip constraint into account. © 2006 IEEE. Source
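As a rough, generic illustration of what an online ammonia storage model keeps track of (this is not the authors' model or controller; the single-site surface-coverage balance, the rate expressions and all constants below are assumptions made for the sketch):

def step_coverage(theta, c_nh3, c_nox, dt,
                  k_ads=5.0, k_des=0.05, k_scr=2.0):
    """theta: NH3 surface coverage (0..1); c_nh3, c_nox: gas-phase
    concentrations in arbitrary units. Returns coverage after one step."""
    r_ads = k_ads * c_nh3 * (1.0 - theta)   # adsorption onto free sites
    r_des = k_des * theta                    # desorption
    r_scr = k_scr * c_nox * theta            # NOx reduction consuming stored NH3
    return min(max(theta + (r_ads - r_des - r_scr) * dt, 0.0), 1.0)

theta = 0.3
for _ in range(100):                         # simulate 100 steps of 0.1 s
    theta = step_coverage(theta, c_nh3=0.02, c_nox=0.01, dt=0.1)
print(round(theta, 3))

A feedback strategy of the kind described above would correct such a coverage estimate with the NH3 sensor signal and adapt the urea dosing so that the estimated storage stays in a range giving high NOx conversion without violating the slip constraint.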


Luiten J.,TU Eindhoven
Europhysics News | Year: 2015

55 years after Richard Feynman's famous Caltech lecture 'There is plenty of room at the bottom' [1], heralding the age of nano science and technology, many of the possibilities he envisaged have come true: Using electron microscopy it is nowadays possible to resolve and even identify individual atoms; STM and AFM not only provide us with similar spatial resolution on surfaces, but also allow dragging individual atoms around in a controlled way; X-ray diffraction has revealed the complicated structures of thousands of proteins, giving invaluable insight into the machinery of life. © European Physical Society, EDP Sciences, 2015. Source


Liao F.,TU Eindhoven
Transportation Research Part C: Emerging Technologies | Year: 2016

Multi-state supernetworks have been advanced recently for modeling individual activity-travel scheduling decisions. The main advantage is that multi-dimensional choice facets are modeled simultaneously within an integral framework, supporting systematic assessments of a large spectrum of policies and emerging modalities. However, duration choice of activities and home-stay has not been incorporated in this formalism yet. This study models duration choice in the state-of-the-art multi-state supernetworks. An activity link with flexible duration is transformed into a time-expanded bipartite network; a home location is transformed into multiple time-expanded locations. Along with these extensions, multi-state supernetworks can also be coherently expanded in space-time. The derived properties are that any path through a space-time supernetwork still represents a consistent activity-travel pattern, that duration choices are explicitly associated with activity timing, duration and chaining, and that home-based tours are generated endogenously. A forward recursive formulation is proposed to find the optimal patterns with optimal worst-case run-time complexity. Consequently, the trade-off between travel and time allocation to activities and home-stay can be systematically captured. © 2016 Elsevier Ltd. Source
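The toy script below is only meant to convey the flavor of a forward recursion over a time-expanded network: states are (location, time slot, activity completed) and labels store the minimal disutility of reaching each state. It is a drastic simplification of the multi-state supernetwork formalism (two locations, one fixed-duration activity, made-up travel times and utilities), not the formulation of the paper.

import math

# travel time between locations in time slots (made-up data)
travel = {("home", "shop"): 2, ("shop", "home"): 2,
          ("home", "home"): 0, ("shop", "shop"): 0}
T = 12                      # planning horizon in slots
ACTIVITY_DURATION = 3       # the shopping activity takes 3 slots
ACTIVITY_REWARD = 4.0       # utility credit for completing it

# cost[(location, time, activity_done)] = minimal disutility of reaching state
cost = {("home", 0, False): 0.0}
for t in range(T):
    for (loc, tt, done), c in list(cost.items()):
        if tt != t:
            continue
        # option 1: travel to the other location, or wait one slot where we are
        for nxt in ("home", "shop"):
            arrive = t + max(travel[(loc, nxt)], 1)
            key = (nxt, arrive, done)
            if arrive <= T and c + travel[(loc, nxt)] < cost.get(key, math.inf):
                cost[key] = c + travel[(loc, nxt)]
        # option 2: perform the activity if at the shop and not yet done
        if loc == "shop" and not done and t + ACTIVITY_DURATION <= T:
            key = ("shop", t + ACTIVITY_DURATION, True)
            if c - ACTIVITY_REWARD < cost.get(key, math.inf):
                cost[key] = c - ACTIVITY_REWARD

# best home-based tour that completes the activity within the horizon
best = min(v for (loc, t2, done), v in cost.items() if loc == "home" and done)
print("minimal disutility:", best)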


van der Meijden C.M.,Energy Research Center of the Netherlands | Veringa H.J.,TU Eindhoven | Rabou L.P.L.M.,Energy Research Center of the Netherlands
Biomass and Bioenergy | Year: 2010

The production of Synthetic Natural Gas from biomass (Bio-SNG) by gasification and upgrading of the gas is an attractive option to reduce CO2 emissions and replace declining fossil natural gas reserves. Production of energy from biomass is approximately CO2 neutral. Production of Bio-SNG can even be CO2 negative, since in the final upgrading step, part of the biomass carbon is removed as CO2, which can be stored. The use of biomass for CO2 reduction will increase the biomass demand and therefore will increase the price of biomass. Consequently, a high overall efficiency is a prerequisite for any biomass conversion process. Various biomass gasification technologies are suitable to produce SNG. The present article contains an analysis of the Bio-SNG process efficiency that can be obtained using three different gasification technologies and associated gas cleaning and methanation equipment. These technologies are: 1) Entrained Flow, 2) Circulating Fluidized Bed and 3) Allothermal or Indirect gasification. The aim of this work is to identify the gasification route with the highest process efficiency from biomass to SNG and to quantify the differences in overall efficiency. Aspen Plus® was used as the modeling tool. The heat and mass balances are based on experimental data from the literature and our own experience. Overall efficiency to SNG is highest for Allothermal gasification. The net overall efficiencies on an LHV basis, including electricity consumption and pre-treatment but excluding transport of biomass, are 54% for Entrained Flow, 58% for CFB and 67% for Allothermal gasification. Because of the significantly higher efficiency to SNG for the route via Allothermal gasification, ECN is working on the further development of Allothermal gasification. ECN has built and tested a 30 kWth lab-scale gasifier connected to a gas cleaning test rig and methanation unit and is presently building a 0.8 MWth pilot plant, called Milena, which will be connected to the existing pilot-scale gas cleaning. © 2009 Elsevier Ltd. All rights reserved. Source


Pirruccio G.,FOM Institute for Atomic and Molecular Physics | Martin Moreno L.,University of Zaragoza | Lozano G.,FOM Institute for Atomic and Molecular Physics | Gomez Rivas J.,TU Eindhoven
ACS Nano | Year: 2013

We experimentally demonstrate a broadband enhancement of the light absorption in graphene over the whole visible spectrum. This enhanced absorption is obtained in a multilayer structure by using an Attenuated Total Reflectance (ATR) configuration and it is explained in terms of coherent absorption arising from interference and dissipation. The interference mechanism leading to the phenomenon of coherent absorption allows for its precise control by varying the refractive index and/or thickness of the medium surrounding the graphene. © 2013 American Chemical Society. Source


Ghiami Y.,TU Eindhoven | Williams T.,University of Hull
International Journal of Production Economics | Year: 2015

In a production-inventory system, the manufacturer produces the items at a given rate, say R, dispatches the order quantities to the customers at specific intervals and stores the excess inventory for subsequent deliveries. Each inventory cycle of the manufacturer can therefore be divided into two phases: the first is the period of production, the second is the period in which the manufacturer does not produce and serves demand from the inventory in stock. One of the challenges in these models is how to obtain the inventory level of the supplier when there is deterioration. The existing literature that considers multi-echelon systems (including single-buyer and multi-buyer models) analyses the deterioration/inventory cost of these echelons under the assumption of a huge surplus in production capacity. Under that assumption it seems acceptable to drop the part of the production period devoted to producing the first batch(es) for the buyer(s) at the beginning of each production period. In this paper we develop a single-manufacturer, multi-buyer model for a deteriorating item with a finite production rate. We also relax the assumption on the production capacity and find the average inventory of the supplier. It is shown that when the production rate is not high, the existing models may not be sufficiently accurate. It is also illustrated that these models are more applicable to inventory systems (rather than production-inventory systems), as they result in fairly accurate solutions when the manufacturer has much higher production capacity than the demand rate. A sensitivity analysis is also conducted to show how the model reacts to changes in parameters. © 2014 Elsevier B.V. All rights reserved. Source
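For readers unfamiliar with deteriorating-item models, the basic single-echelon inventory balance that this class of models builds on is (a generic textbook form with a constant deterioration rate, not the paper's multi-buyer formulation):

\[ \frac{dI(t)}{dt} = R - D - \theta I(t) \quad \text{(production phase)}, \qquad \frac{dI(t)}{dt} = -D - \theta I(t) \quad \text{(depletion phase)}, \]

where I(t) is the on-hand inventory, R the production rate, D the demand (dispatch) rate, and θ the deterioration rate. The paper's point is that when R is not much larger than D, approximations that ignore part of the production phase become inaccurate.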


Spahn A.,TU Eindhoven
Science and Engineering Ethics | Year: 2012

The paper develops ethical guidelines for the development and usage of persuasive technologies (PT) that can be derived from applying discourse ethics to this type of technology. The application of discourse ethics is of particular interest for PT, since 'persuasion' refers to an act of communication that might be interpreted as lying somewhere between 'manipulation' and 'convincing'. One can distinguish two elements of discourse ethics that prove fruitful when applied to PT: the analysis of the inherent normativity of acts of communication ('speech acts') and the Habermasian distinction between 'communicative' and 'strategic rationality' and their broader societal interpretation. This essay investigates what consequences can be drawn if one applies these two elements of discourse ethics to PT. © 2011 The Author(s). Source


Van Der Aalst W.M.P.,TU Eindhoven | Van Der Aalst W.M.P.,National Research University Higher School of Economics
Distributed and Parallel Databases | Year: 2013

The practical relevance of process mining is increasing as more and more event data become available. Process mining techniques aim to discover, monitor and improve real processes by extracting knowledge from event logs. The two most prominent process mining tasks are: (i) process discovery: learning a process model from example behavior recorded in an event log, and (ii) conformance checking: diagnosing and quantifying discrepancies between observed behavior and modeled behavior. The increasing volume of event data provides both opportunities and challenges for process mining. Existing process mining techniques have problems dealing with large event logs referring to many different activities. Therefore, we propose a generic approach to decompose process mining problems. The decomposition approach is generic and can be combined with different existing process discovery and conformance checking techniques. It is possible to split computationally challenging process mining problems into many smaller problems that can be analyzed easily and whose results can be combined into solutions for the original problems. © 2013 Springer Science+Business Media New York. Source


Brunenberg E.J.,TU Eindhoven
Journal of neurosurgery | Year: 2011

The authors reviewed 70 publications on MR imaging-based targeting techniques for identifying the subthalamic nucleus (STN) for deep brain stimulation in patients with Parkinson disease. Of these 70 publications, 33 presented quantitatively validated results. There is still no consensus on which targeting technique to use for surgery planning; methods vary greatly between centers. Some groups apply indirect methods involving anatomical landmarks, or atlases incorporating anatomical or functional data. Others perform direct visualization on MR imaging, using T2-weighted spin echo or inversion recovery protocols. The combined studies do not offer a straightforward conclusion on the best targeting protocol. Indirect methods are not patient specific, leading to varying results between cases. On the other hand, direct targeting on MR imaging suffers from lack of contrast within the subthalamic region, resulting in a poor delineation of the STN. These deficiencies result in a need for intraoperative adaptation of the original target based on test stimulation with or without microelectrode recording. It is expected that future advances in MR imaging technology will lead to improvements in direct targeting. The use of new MR imaging modalities such as diffusion MR imaging might even lead to the specific identification of the different functional parts of the STN, such as the dorsolateral sensorimotor part, the target for deep brain stimulation. Source


Tissue engineering is an innovative method to restore cardiovascular tissue function by implanting either an in vitro cultured tissue or a degradable, mechanically functional scaffold that gradually transforms into a living neo-tissue by recruiting tissue forming cells at the site of implantation. Circulating endothelial colony forming cells (ECFCs) are capable of differentiating into endothelial cells as well as a mesenchymal ECM-producing phenotype, undergoing Endothelial-to-Mesenchymal-transition (EndoMT). We investigated the potential of ECFCs to produce and organize ECM under the influence of static and cyclic mechanical strain, as well as stimulation with transforming growth factor β1 (TGFβ1). A fibrin-based 3D tissue model was used to simulate neo-tissue formation. Extracellular matrix organization was monitored using confocal laser-scanning microscopy. ECFCs produced collagen and also elastin, but did not form an organized matrix, except when cultured with TGFβ1 under static strain. Here, collagen was aligned more parallel to the strain direction, similar to Human Vena Saphena Cell-seeded controls. Priming ECFC with TGFβ1 before exposing them to strain led to more homogenous matrix production. Biochemical and mechanical cues can induce extracellular matrix formation by ECFCs in tissue models that mimic early tissue formation. Our findings suggest that priming with bioactives may be required to optimize neo-tissue development with ECFCs and has important consequences for the timing of stimuli applied to scaffold designs for both in vitro and in situ cardiovascular tissue engineering. The results obtained with ECFCs differ from those obtained with other cell sources, such as vena saphena-derived myofibroblasts, underlining the need for experimental models like ours to test novel cell sources for cardiovascular tissue engineering. Source


Van Den Brand M.,TU Eindhoven
Science of Computer Programming | Year: 2015

Compilers are one of the cornerstones of Computer Science, and of Software Development in particular. Compiler research has a long tradition and is very mature. Nevertheless, there is hardly any standardization with respect to formalisms and tools for developing compilers. Comparing the formalisms and tools used to describe compilers is not a simple task. In 2011 the Language Descriptions, Tools and Applications (LDTA) community created a challenge in which formalisms and tools were to be used to construct a compiler for the Oberon-0 language. This special issue presents the tool challenge, the Oberon-0 language, various solutions to the challenge, and some conclusions. The aim of the challenge was to develop the same compiler using different formalisms to learn about these approaches in a concrete setting. © 2015 Published by Elsevier B.V. Source


Amft O.,TU Eindhoven
Conference proceedings : ... Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Conference | Year: 2011

Distributed ambient and on-body sensor systems can provide a suitable basis for recognizing complex human activities in daily life. Moreover, distributed activity recognition systems have high prospects for handling processing and communication loads more effectively than centralized solutions. A key challenge is to construct distributed activity recognition systems that make efficient use of the resources available for the recognition task, considering scalability and dynamic system reconfiguration. In this work, we present an approach to distributed activity recognition by introducing an activity-event-detector (AED) concept. We show formally how to construct and use AED for distributed recognition systems based on directed acyclic graphs. We illustrate essential properties for system scalability and efficiency using AED graphs. Results from a home monitoring study targeted at monitoring daily life activities are presented to illustrate the AED-based model regarding applicability and reconfiguration. Source


Cheng C.-Y.,University of California at Santa Barbara | Goor O.J.G.M.,TU Eindhoven | Han S.,University of California at Santa Barbara
Analytical Chemistry | Year: 2012

We introduce a new NMR technique to dramatically enhance the solution-state 13C NMR sensitivity and contrast at 0.35 T and at room temperature by actively transferring the spin polarization from Overhauser dynamic nuclear polarization (ODNP)-enhanced 1H to 13C nuclei through scalar (J) coupling, a method that we term J-mediated 13C ODNP. We demonstrate the capability of this technique by quantifying the permeability of glycine across negatively charged liposomal bilayers composed of dipalmitoylphosphatidylcholine (DPPC) and dipalmitoylphosphatidylglycerol (DPPG). The permeability coefficient of glycine across this DPPC/DPPG bilayer is measured to be (1.8 ± 0.1) × 10⁻¹¹ m/s, in agreement with the literature value. We further observed that the presence of 20 mol % cholesterol within the DPPC/DPPG lipid membrane significantly retards the permeability of glycine by a factor of 4. These findings demonstrate that the high sensitivity and contrast of J-mediated 13C ODNP affords the measurement of the permeation kinetics of small hydrophilic molecules across lipid bilayers, a quantity that is difficult to accurately measure with existing techniques. © 2012 American Chemical Society. Source


Zeinalipour-Yazdi C.D.,CySilicoTech Research Ltd | Van Santen R.A.,TU Eindhoven
Journal of Physical Chemistry C | Year: 2012

Metal-adsorbate nanoclusters serve as useful models to study elementary catalytic and gas-sensor processes. However, little is known about their structural, energetic, and spectroscopic properties as a function of adsorbate surface coverage and structure. Here, we perform a systematic study of the adsorption of carbon monoxide (CO) on a tetra-atomic rhodium cluster to understand the coverage- and structure-dependent adsorption energy of CO as a function of CO coverage and to provide deeper insight into the metal-carbonyl bond on metal nanoclusters. The coverage-dependent adsorption energy trends are rationalized with the use of a theoretical model, molecular orbital energy diagrams, electron density difference plots, molecular electrostatic potential plots, and simulated infrared spectra. Our model demonstrates that a critical parameter that determines the coverage-dependent energetics of the adsorption of CO at low coverages is the polarization of metal-metal π-bonds during the effective charge transfer, occurring from the metal cluster to the 2π(2px) and 2π(2py) states of CO, which enhances the adsorption of CO vertical to the metal-metal bond. This configuration-specific effect explains the negative coverage-dependent adsorption energy trend observed at low coverages on metal nanoclusters. © 2012 American Chemical Society. Source
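The coverage-dependent energetics discussed above are usually quantified through a differential adsorption energy; the abstract does not state which convention the authors adopt, but a common definition for sequential CO adsorption on a cluster is

\[ \Delta E_{\mathrm{ads}}(n) = E\big[\mathrm{Rh}_4(\mathrm{CO})_n\big] - E\big[\mathrm{Rh}_4(\mathrm{CO})_{n-1}\big] - E\big[\mathrm{CO}\big], \]

i.e., the energy gained when the n-th CO molecule is added to the cluster, whose variation with n expresses the coverage-dependent trend described in the abstract.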


Bos E.J.C.,TU Eindhoven | Bos E.J.C.,Xpress Precision Engineering B.V.
Precision Engineering | Year: 2011

This paper discusses the aspects that influence the interaction between a probe tip and a work piece during tactile probing in a coordinate measuring machine (CMM). Measurement instruments are sensitive to more than one physical quantity. When measuring the topography of a work piece, the measurement result will therefore always be influenced by the environment and (local) variations in the work piece itself. A mechanical probe will respond to both topography and changes in the mechanical properties of the surface, e.g. the Young's modulus and hardness. An optical probe is influenced by the reflectivity and optical constants of the work piece, a scanning tunneling microscope (STM) responds to the electrical properties of the work piece and so on (Franks, 1991 [1]). The trend of component miniaturization results in a need for 3-dimensional characterization of micrometer-sized features to nanometer accuracy. As the scale of the measurement decreases, the problems associated with the surface-probe interactions become increasingly apparent (Leach et al., 2001 [2]). The aspects of the interaction that are discussed include the deformation of probe tip and work piece during contact, surface forces during single point probing and scanning, dynamic excitation of the probe, synchronization errors, microfriction, tip rotations, finite stiffness effects, mechanical filtering, anisotropic stiffness, thermal effects and probe repeatability. These aspects are investigated using the Gannen XP 3D tactile probing system developed by Xpress Precision Engineering using modeling and experimental verification of the effects. The Gannen XP suspension consists of three slender rods with integrated piezo resistive strain gauges. The deformation of the slender rods is measured using the strain gauges and is a measure for the deflection of the probe tip. It is shown that the standard deviation in repeatability is 2 nm in any direction and over the whole measurement range of the probe. Finally, this probe has an isotropic stiffness of 480 N/m and a moving mass below 25 mg. © 2010 Elsevier Inc. All rights reserved. Source


Demerouti E.,TU Eindhoven | Bakker A.B.,Erasmus University Rotterdam | Leiter M.,Acadia University
Journal of Occupational Health Psychology | Year: 2014

The present study aims to explain why research thus far has found only low to moderate associations between burnout and performance. We argue that employees use adaptive strategies that help them to maintain their performance (i.e., task performance, adaptivity to change) at acceptable levels despite experiencing burnout (i.e., exhaustion, disengagement). We focus on the strategies included in the selective optimization with compensation model. Using a sample of 294 employees and their supervisors, we found that compensation is the most successful strategy in buffering the negative associations of disengagement with supervisor-rated task performance and both disengagement and exhaustion with supervisor-rated adaptivity to change. In contrast, selection exacerbates the negative relationship of exhaustion with supervisor-rated adaptivity to change. In total, 42% of the hypothesized interactions proved to be significant. Our study uncovers successful and unsuccessful strategies that people use to deal with their burnout symptoms in order to achieve satisfactory job performance. © 2014 American Psychological Association. Source


Guillemin F.,Orange S.A. | van Leeuwaarden J.S.H.,TU Eindhoven
Queueing Systems | Year: 2011

This paper presents a novel technique for deriving asymptotic expressions for the occurrence of rare events for a random walk in the quarter plane. In particular, we study a tandem queue with Poisson arrivals, exponential service times and coupled processors. The service rate for one queue is only a fraction of the global service rate when the other queue is non-empty; when one queue is empty, the other queue has full service rate. The bivariate generating function of the queue lengths gives rise to a functional equation. In order to derive asymptotic expressions for large queue lengths, we combine the kernel method for functional equations with boundary value problems and singularity analysis. © 2010 The Author(s). Source


Leijten A.J.M.,TU Eindhoven
Engineering Structures | Year: 2011

In statically indeterminate structures, connections play a vital role in the moment distribution. Demonstrated here is a method to evaluate these conditions, taking full advantage of the benefits offered by the indeterminate nature of the structures and using the well-established graphical beam-line method. This method shows how important the immediate load take-up, the stiffness, and the moment capacity of the connection are, and how they affect the structural behaviour. The examples considered here use both traditional non-reinforced dowel-type fastener connections and timber connections reinforced with steel plates. They show that the minimum rotation requirements to achieve an effective structure are easily satisfied, in contrast to the requirements on stiffness. In this respect, timber connections with local reinforcement glued at the interface of the connection area offer more prospects. © 2011 Elsevier Ltd. Source


Vanhaverbeke W.,Hasselt University | Gilsing V.,University of Tilburg | Duysters G.,TU Eindhoven
Journal of Product Innovation Management | Year: 2012

Whereas most of the literature on the benefits of alliances for learning and innovation has taken on a competence perspective, this paper provides an alternative integrated framework based on both a competence and governance point of view. The former focuses on the role of knowledge flows as means to access new knowledge, whereas the latter is centered around the core concepts of opportunism and freeridership in knowledge exchange situations. Although it has generally been acknowledged that competence-based benefits of collaboration may come at a price of elevated risks due to knowledge spillovers and freeridership, such a governance view remains understudied. This paper explains how a firm's alliance network structure affects benefits as well as risks of collaboration in the context of the creation of core and noncore technology. In the case of core technology, firms attach more value to reducing governance-based risks relative to obtaining competence-based benefits. The opposite is found when firms develop noncore technology. This paper contributes to the existing literature by going beyond the common idea that competence and governance perspectives are either complementary or competing. Instead, this study shows that for technology-based collaboration, they can both apply at the same time, implying a trade-off in some cases and offering synergy in other cases. Based on an empirical test in three different industries (pharmaceuticals, chemicals, and automotive), there is support for most of our hypotheses. Direct ties have an inverted U-shaped effect on both core and noncore technology, and the effect is relatively stronger for the former. The results furthermore show that indirect ties play a positive role in noncore technology development and that this effect is not hampered by the number of direct ties a firm has. In contrast, indirect ties seem to hamper core competence development when companies have a lot of direct ties. Finally, firms are found to benefit from nonredundancy in their alliance network in their efforts to strengthen their core technology. The joint effect of these three network characteristics leads to optimal results for core and noncore technologies under quite different alliance network structures. This poses a problem for the ambidexterity of companies, when they simultaneously try to strengthen core and noncore technologies. © 2012 Product Development & Management Association. Source


De Hon B.P.,TU Eindhoven | Arnold J.M.,University of Glasgow
Journal of Physics A: Mathematical and Theoretical | Year: 2012

Up to a multiplicative constant, the lattice Green's function (LGF) as defined in condensed matter physics and lattice statistical mechanics is equivalent to the Z-domain counterpart of the finite-difference time-domain Green's function (GF) on a lattice. Expansion of a well-known integral representation for the LGF on a ν-dimensional hyper-cubic lattice in powers of Z⁻¹ and application of the Chu-Vandermonde identity results in ν − 1 nested finite-sum representations for discrete space-time GFs. Due to severe numerical cancellations, these nested finite sums are of little practical use. For ν = 2, the finite sum may be evaluated in closed form in terms of a generalized hypergeometric function. For special lattice points, that representation simplifies considerably, while on the other hand the finite-difference stencil may be used to derive single-lattice-point second-order recurrence schemes for generating 2D discrete space-time GF time sequences on the fly. For arbitrary symbolic lattice points, Zeilberger's algorithm produces a third-order recurrence operator with polynomial coefficients of the sixth degree. The corresponding recurrence scheme constitutes the most efficient numerical method for the majority of lattice points, in spite of the fact that for explicit numeric lattice points the associated third-order recurrence operator is not the minimum recurrence operator. As regards the asymptotic bounds for the possible solutions to the recurrence scheme, Perron's theorem precludes factorial or exponential growth. Along horizontal lattice directions, rapid initial growth does occur, but poses no problems in augmented dynamic-range fixed-precision arithmetic. By analysing long-distance wave propagation along a horizontal lattice direction, we have concluded that the chirp-up oscillations of the discrete space-time GF are the root cause of grid dispersion anisotropy. With each factor of ten increase in the lattice distance, one would have to roughly double the pulse width of the source signature to keep pulse distortion at bay. The GF time sequences can also be used for an efficient computation of discrete space-frequency LGFs, especially if one employs Aitken's δ² process for the acceleration of the convergence of the consecutive partial sums. © 2012 IOP Publishing Ltd. Source
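For context, the kind of integral representation referred to above is, in one common convention (normalizations and structure functions vary, and the paper's exact form is not reproduced here),

\[ G(t;\boldsymbol{\ell}) = \frac{1}{(2\pi)^{\nu}} \int_{[-\pi,\pi]^{\nu}} \frac{e^{\,i\,\mathbf{k}\cdot\boldsymbol{\ell}}}{\,t - \tfrac{1}{\nu}\sum_{j=1}^{\nu}\cos k_j\,}\, d^{\nu}k , \]

with lattice point ℓ and spectral parameter t; expanding the integrand in powers of 1/t (the analogue of a power series in Z⁻¹) and integrating term by term is the type of manipulation that leads to nested finite sums of the kind mentioned in the abstract.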


Van Der Bij H.,University of Groningen | Van Weele A.,TU Eindhoven
Journal of Product Innovation Management | Year: 2013

As today's firms increasingly outsource their noncore activities, they not only have to manage their own resources and capabilities, but they are ever more dependent on the resources and capabilities of supplying firms to respond to customer needs. This paper explicitly examines whether and how firms and suppliers, who are both oriented to the same customer market, enable innovativeness in their supply chains and deliver value to their joint customer. We will call this customer of the focal firm the "end user." The authors take a resource-dependence perspective to hypothesize how suppliers' end-user orientation and innovativeness influence downstream activities at the focal firm and end-user satisfaction. The resource dependence theory looks typically beyond the boundaries of an individual firm for explaining firm success: firms need to satisfy customer demands to survive and depend on other parties such as their suppliers to achieve customer satisfaction. Accordingly, the research design focuses on three parties along a supply chain: the focal firm, a supplier, and a customer of the focal firm (end user). The results drawn from a survey of 88 matched chains suggest the following. First, customer satisfaction is driven by focal firms' innovativeness. A focal firm's innovativeness depends, on the one hand, on a focal firm's market orientation and, on the other hand, on its suppliers' innovativeness. Second, no relationship could be established between a focal firm's market orientation and a supplier's end-user orientation. Market orientation typically has within-firm effects, while innovativeness has impact beyond the boundaries of the firm. These results suggest that firms create value for their customer through internal market orientation efforts and external suppliers' innovativeness. © 2013 Product Development & Management Association. Source


Lakens D.,TU Eindhoven
IEEE Transactions on Affective Computing | Year: 2013

This study demonstrates the feasibility of measuring heart rate (HR) differences associated with emotional states such as anger and happiness with a smartphone. Novice experimenters measured higher HRs during relived anger and happiness (replicating findings in the literature) outside a laboratory environment with a smartphone app that relied on photoplethysmography. © 2010-2012 IEEE. Source
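As an illustration of the signal-processing step behind such an app (this is not the app used in the study; the synthetic signal, sampling rate, and peak-detection parameters are assumptions), heart rate can be estimated from a photoplethysmography trace by detecting beat peaks and averaging the inter-beat intervals:

import numpy as np
from scipy.signal import find_peaks

fs = 30.0                                    # camera frame rate in Hz
t = np.arange(0, 30, 1.0 / fs)               # 30 s recording
true_hr = 72.0                               # beats per minute (synthetic)
rng = np.random.default_rng(0)
ppg = np.sin(2 * np.pi * (true_hr / 60.0) * t) + 0.2 * rng.standard_normal(t.size)

# Detect beat peaks; enforce a refractory period of ~0.4 s between beats
peaks, _ = find_peaks(ppg, distance=int(0.4 * fs), prominence=0.5)
ibi = np.diff(peaks) / fs                    # inter-beat intervals in seconds
print("estimated heart rate: %.1f bpm" % (60.0 / ibi.mean()))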


Lakens D.,TU Eindhoven | Semin G.R.,University Utrecht | Semin G.R.,Koc University | Foroni F.,University Utrecht
Journal of Experimental Psychology: General | Year: 2012

Light and dark are used pervasively to represent positive and negative concepts. Recent studies suggest that black and white stimuli are automatically associated with negativity and positivity. However, structural factors in experimental designs, such as the shared opposition in the valence (good vs. bad) and brightness (light vs. dark) dimensions might play an important role in the valence-brightness association. In 6 experiments, we show that while black ideographs are consistently judged to represent negative words, white ideographs represent positivity only when the negativity of black is coactivated. The positivity of white emerged only when brightness and valence were manipulated within participants (but not between participants) or when the negativity of black was perceptually activated by presenting positive and white stimuli against a black (vs. gray) background. These findings add to an emerging literature on how structural overlap between dimensions creates associations and highlight the inherently contextualized construction of meaning structures. © 2011 American Psychological Association. Source


Van Der Aalst W.M.P.,TU Eindhoven
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2012

Process discovery, i.e., discovering a process model from example behavior recorded in an event log, is one of the most challenging tasks in process mining. Discovery approaches need to deal with competing quality criteria such as fitness, simplicity, precision, and generalization. Moreover, event logs may contain infrequent behavior and tend to be far from complete (i.e., typically only a fraction of the possible behavior is recorded). At the same time, models need to have formal semantics in order to reason about their quality. These complications explain why dozens of process discovery approaches have been proposed in recent years. Most of these approaches are time-consuming and/or produce poor-quality models. In fact, simply checking the quality of a model is already computationally challenging. This paper shows that process mining problems can be decomposed into a set of smaller problems after determining the so-called causal structure. Given a causal structure, we partition the activities over a collection of passages. Conformance checking and discovery can be done per passage. The decomposition of the process mining problems has two advantages. First of all, the problem can be distributed over a network of computers. Second, due to the exponential nature of most process mining algorithms, decomposition can significantly reduce computation time (even on a single computer). As a result, conformance checking and process discovery can be done much more efficiently. © 2012 Springer-Verlag. Source


Van Den Elzen S.,SynerScope | Van Wijk J.J.,TU Eindhoven
Computer Graphics Forum | Year: 2013

We present a novel visual exploration method based on small multiples and large singles for effective and efficient data analysis. Users can explore the state space by being presented with multiple alternatives from the current state. Users can then select the alternative of choice and continue the analysis. Furthermore, the intermediate steps in the exploration process are preserved and can be revisited and adapted using an intuitive navigation mechanism based on the well-known undo-redo stack and filmstrip metaphor. As a proof of concept, the exploration method is implemented in a prototype. The effectiveness of the exploration method is tested using a formal user study comparing four different interaction methods. By using small multiples as a data exploration method, users need fewer steps to answer questions and also explore a significantly larger part of the state space in the same amount of time, providing them with a broader perspective on the data, hence lowering the chance of missing important features. Also, users prefer visual exploration with small multiples over non-small-multiple variants. © 2013 The Author(s) Computer Graphics Forum © 2013 The Eurographics Association and Blackwell Publishing Ltd. Source


Maatta T.,TU Eindhoven
Journal of Ambient Intelligence and Smart Environments | Year: 2013

This short report introduces the topics of PhD research that was conducted from 2008 to 2013 and defended in July 2013. The PhD thesis covers sensor fusion theory, gathers it into a framework with design rules for fusion-friendly design of vision networks, and elaborates on the rules through fusion experiments performed with four distinct applications of Ambient Intelligence. © 2013 IOS Press and the authors. All rights reserved. Source


Van Wijk J.J.,TU Eindhoven
Computer | Year: 2013

Because visual analytics has a broad scope and aims at knowledge discovery, evaluating the methods used in this field is challenging. Successful solutions are often found through trial and error, with solid guidelines and findings still lagging. The Web Extra document contains links with further information on visual analytics challenges and repositories. © 2013 IEEE. Source


Katzav J.,TU Eindhoven
Studies in History and Philosophy of Science Part B - Studies in History and Philosophy of Modern Physics | Year: 2013

I examine, from Mayo's severe testing perspective, the case found in the Intergovernmental Panel on Climate Change fourth report (IPCC-AR4) for the claim (OUR FAULT) that increases in anthropogenic greenhouse gas concentrations caused most of the post-1950 global warming. My examination begins to provide an alternative to standard, probabilistic assessments of OUR FAULT. It also brings out some of the limitations of variety of evidence considerations in assessing this and other hypotheses about the causes of climate change, and illuminates the epistemology of optimal fingerprinting studies. Finally, it shows that some features of Mayo's perspective should be kept in whatever approach is preferred for assessing hypotheses about the causes of climate change. © 2013 Elsevier Ltd. Source


Eling K.,TU Eindhoven
Journal of Product Innovation Management | Year: 2013

Research on reducing new product development (NPD) cycle time has shown that firms tend to adopt different cycle time reduction mechanisms for different process stages. However, the vast majority of previous studies investigating the relationship between new product performance and NPD cycle time have adopted a monolithic process perspective rather than looking at cycle time for the distinct stages of the NPD process (i.e., fuzzy front end, development, and commercialization). As a result, little is known about the specific effect of the cycle times of the different stages on new product performance or how they interact to influence new product performance. This study uses a stage-wise approach to NPD cycle time to test the main and interacting effects of fuzzy front end, development, and commercialization cycle times on new product performance using objective data for 399 NPD projects developed following a Stage-Gate® type of process in one firm. The results reveal that at least in this firm, new product performance only increases if all three stages of the NPD process are consistently accelerated. This finding, combined with the previous research showing that firms use different mechanisms to accelerate different stages of the process, emphasizes the need to conduct performance effect studies of NPD cycle time at the stage level rather than at the monolithic process level. © 2013 Product Development & Management Association. Source


Santiago J.,University of Granada | Lakens D.,TU Eindhoven
Acta Psychologica | Year: 2015

Conceptual congruency effects have been interpreted as evidence for the idea that the representations of abstract conceptual dimensions (e.g., power, affective valence, time, number, importance) rest on more concrete dimensions (e.g., space, brightness, weight). However, an alternative theoretical explanation based on the notion of polarity correspondence has recently received empirical support in the domains of valence and morality, which are related to vertical space (e.g., good things are up). In the present study we provide empirical arguments against the applicability of the polarity correspondence account to congruency effects in two conceptual domains related to lateral space: number and time. Following earlier research, we varied the polarity of the response dimension (left-right) by manipulating keyboard eccentricity. In a first experiment we successfully replicated the congruency effect between vertical and lateral space and its interaction with response eccentricity. We then examined whether this modulation of a concrete-concrete congruency effect can be extended to two types of concrete-abstract effects, those between left-right space and number (in both parity and magnitude judgment tasks), and temporal reference. In all three tasks response eccentricity failed to modulate the congruency effects. We conclude that polarity correspondence does not provide an adequate explanation of conceptual congruency effects in the domains of number and time. © 2014 Elsevier B.V. Source


Wang X.,Harvard University | Cuny G.D.,Harvard University | Cuny G.D.,University of Houston | Noel T.,TU Eindhoven
Angewandte Chemie - International Edition | Year: 2013

Visible advance: A mild, one-pot Stadler-Ziegler process for C-S bond formation has been developed. The method employs the photoredox catalyst [Ru(bpy)3Cl2]·6H2O irradiated with visible light. A variety of aryl-alkyl and diaryl sulfides were prepared from readily available arylamines and aryl/alkylthiols in good yields. The use of a photo microreactor led to a significant improvement with respect to safety and efficiency. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. Source


Zafra A.,University of Cordoba, Spain | Pechenizkiy M.,TU Eindhoven | Ventura S.,University of Cordoba, Spain
Information Sciences | Year: 2013

Feature selection techniques have been successfully applied in many applications for making supervised learning more effective and efficient. These techniques have been widely used and studied in traditional supervised learning settings, where each instance is expected to have a label. In multiple instance learning (MIL) each example or bag consists of a variable set of instances, and the label is known for the bag as a whole, but not for the individual instances it consists of. Therefore utilizing these labels for feature selection in MIL becomes less straightforward. In this paper we study a new feature subset selection method for MIL called HyDR-MI (hybrid dimensionality reduction method for multiple instance learning). The hybrid consists of the filter component based on an extension of the ReliefF algorithm developed for working with MIL and the wrapper component based on a genetic algorithm that optimizes the search for the best feature subset from a reduced set of features, output by the filter component. We conducted an extensive experimental evaluation of our method on five benchmark datasets and 17 classification algorithms for MIL. The results of our study show the potential of the proposed hybrid with respect to the desirable effect it produces: a significant improvement of the predictive performance of many MIL classification techniques as compared to the effect of filter-based feature selection. This is achieved due to the possibility to decide how many of the top ranked features are useful for each particular algorithm and the possibility to discard redundant attributes. © 2012 Elsevier Inc. All rights reserved. Source
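To illustrate the general filter-then-wrapper pattern that HyDR-MI instantiates, the sketch below is an assumption-laden stand-in: it works on ordinary single-instance data and uses a univariate filter plus greedy forward selection, whereas the paper uses a MIL-adapted ReliefF and a genetic algorithm over bags of instances.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=200, n_features=30, n_informative=5,
                           random_state=0)

# Filter step: rank all features and keep only the top k candidates
k = 10
ranking = np.argsort(mutual_info_classif(X, y, random_state=0))[::-1][:k]

# Wrapper step: greedy forward selection over the reduced candidate set,
# scoring each trial subset by cross-validated accuracy of a classifier
selected, best_score = [], 0.0
improved = True
while improved:
    improved = False
    for f in ranking:
        if f in selected:
            continue
        trial = selected + [int(f)]
        score = cross_val_score(KNeighborsClassifier(), X[:, trial], y,
                                cv=5).mean()
        if score > best_score:
            best_score, best_feature = score, int(f)
            improved = True
    if improved:
        selected.append(best_feature)

print("selected features:", selected, "cv accuracy: %.3f" % best_score)

Even in this toy version the design point of the hybrid is visible: the wrapper only searches within the k features the filter lets through, so the expensive classifier evaluations are spent on a much smaller search space, and the number of top-ranked features kept can be tuned per learning algorithm.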


Kuang Y.,University Utrecht | Vece M.D.,University Utrecht | Rath J.K.,University Utrecht | Dijk L.V.,University Utrecht | And 2 more authors.
Reports on Progress in Physics | Year: 2013

In solar cell technology, the current trend is to thin down the active absorber layer. The main advantage of a thinner absorber is primarily the reduced consumption of material and energy during production. For thin film silicon (Si) technology, thinning down the absorber layer is of particular interest since both the device throughput of vacuum deposition systems and the stability of the devices are significantly enhanced. These features lead to lower cost per installed watt peak for solar cells, provided that the (stabilized) efficiency is the same as for thicker devices. However, merely thinning down inevitably leads to a reduced light absorption. Therefore, advanced light trapping schemes are crucial to increase the light path length. The use of elongated nanostructures is a promising method for advanced light trapping. The enhanced optical performance originates from orthogonalization of the light's travel path with respect to the direction of carrier collection due to the radial junction, an improved anti-reflection effect thanks to the three-dimensional geometric configuration and the multiple scattering between individual nanostructures. These advantages potentially allow for high efficiency at a significantly reduced quantity and even at a reduced material quality, of the semiconductor material. In this article, several types of elongated nanostructures with the high potential to improve the device performance are reviewed. First, we briefly introduce the conventional solar cells with emphasis on thin film technology, following the most commonly used fabrication techniques for creating nanostructures with a high aspect ratio. Subsequently, several representative applications of elongated nanostructures, such as Si nanowires in realistic photovoltaic (PV) devices, are reviewed. Finally, the scientific challenges and an outlook for nanostructured PV devices are presented. © 2013 IOP Publishing Ltd. Source


Blocken B.,TU Eindhoven | Gualtieri C.,University of Naples Federico II
Environmental Modelling and Software | Year: 2012

Computational Fluid Dynamics (CFD) is increasingly used to study a wide variety of complex Environmental Fluid Mechanics (EFM) processes, such as water flow and turbulent mixing of contaminants in rivers and estuaries, and wind flow and air pollution dispersion in urban areas. However, the accuracy and reliability of CFD modeling and the correct use of CFD results can easily be compromised. In 2006, Jakeman et al. set out ten iterative steps of good disciplined model practice to develop purposeful, credible models from data and a priori knowledge, in consort with end-users, with every stage open to critical review and revision (Jakeman et al., 2006). This paper discusses the application of the ten-steps approach to CFD for EFM in three parts. In the first part, the existing best practice guidelines for CFD applications in this area are reviewed and positioned in the ten-steps framework. The second and third parts present a retrospective analysis of two case studies in the light of the ten-steps approach: (1) contaminant dispersion due to transverse turbulent mixing in a shallow water flow and (2) coupled urban wind flow and indoor natural ventilation of the Amsterdam ArenA football stadium. It is shown that the existing best practice guidelines for CFD mainly focus on the last steps in the ten-steps framework. The reasons for this focus are outlined and the value of the additional - preceding - steps is discussed. The retrospective analysis of the case studies indicates that the ten-steps approach is very well applicable to CFD for EFM and that it provides a comprehensive framework that encompasses and extends the existing best practice guidelines. © 2012 Elsevier Ltd. Source


Jurrius R.P.M.J.,TU Eindhoven
Designs, Codes, and Cryptography | Year: 2012

We study the generalized and extended weight enumerator of the q-ary Simplex code and the q-ary first order Reed-Muller code. For our calculations we use the fact that these codes correspond to a projective system containing all the points in a finite projective or affine space. As a result of the geometric method we use for the weight enumeration, we also completely determine the set of supports of subcodes and words in an extension code. © Springer Science+Business Media, LLC 2011. Source
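As a concrete illustration of the objects involved (a standard textbook fact, not a result quoted from the article): every nonzero codeword of the q-ary Simplex code of dimension k has weight q^{k-1}, so its ordinary weight enumerator is simply

W_C(X,Y) = X^{n} + (q^{k}-1)\,X^{\,n-q^{k-1}}\,Y^{\,q^{k-1}}, \qquad n = \frac{q^{k}-1}{q-1}.

The generalized and extended enumerators studied in the paper refine such polynomials by also counting supports of subcodes and codewords over field extensions.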


van der Aalst W.M.P.,TU Eindhoven
Software and Systems Modeling | Year: 2012

There seems to be a never-ending stream of new process modeling notations. Some of these notations are foundational and have been around for decades (e.g., Petri nets). Other notations are vendor specific, incremental, or are only popular for a short while. Discussions on the various competing notations concealed the more important question "What makes a good process model?". Fortunately, large-scale experiences with process mining allow us to address this question. Process mining techniques can be used to extract knowledge from event data, discover models, align logs and models, measure conformance, diagnose bottlenecks, and predict future events. Today's processes leave many trails in databases, audit trails, message logs, transaction logs, etc. Therefore, it makes sense to relate these event data to process models independent of their particular notation. Process models discovered based on the actual behavior tend to be very different from the process models made by humans. Moreover, conformance checking techniques often reveal important deviations between models and reality. The lessons that can be learned from process mining shed new light on process model quality. This paper discusses the role of process models and lists seven problems related to process modeling. Based on our experiences in over 100 process mining projects, we discuss these problems. Moreover, we show that these problems can be addressed by exposing process models and modelers to event data. © 2012 Springer-Verlag. Source


Tiemessen H.G.H.,IBM | Van Houtum G.J.,TU Eindhoven
International Journal of Production Economics | Year: 2013

We study a system consisting of one repair shop and one stockpoint, where spare parts of multiple critical repairables are kept on stock to serve an installed base of technical systems. Part requests are met from stock if possible, and backordered otherwise. The objective is to minimize aggregate downtime via smart repair job scheduling. We evaluate various relevant dynamic scheduling policies, including two that stem from other application fields. One of them is the myopic allocation rule from the make-to-stock environment. It selects the SKU with the highest expected backorder reduction per invested time unit and has excellent performance on repairable inventory systems. It combines the following three strengths: (i) it selects the SKU with the shortest expected repair time in case of backorders, (ii) it recognizes the benefits of short average repair times even if there are no backorders, and (iii) it takes the stochasticity of the part failure processes into account. We investigate the optimality gaps of the heuristic scheduling rules, compare their performance on a large test bed containing problem instances of real-life size, and illustrate the impact of key problem characteristics on the aggregate downtime. We show that the myopic allocation rule performs well and that it outperforms the other heuristic scheduling rules. © 2012 Elsevier B.V. All rights reserved. Source
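A minimal reading of the myopic allocation rule described above is sketched below for a single repair server: when the server becomes free, it starts repairing the stock-keeping unit (SKU) with the largest expected backorder reduction per unit of expected repair time. The data fields and the exact form of the priority index are illustrative assumptions for this sketch, not the authors' formulation.

# Illustrative myopic-style priority rule; all fields and the priority index are hypothetical.
from dataclasses import dataclass

@dataclass
class Sku:
    name: str
    backorders: int          # parts currently backordered
    mean_repair_time: float  # expected repair time of one part
    failure_rate: float      # demand intensity, used as an anticipatory credit

def next_repair(queue):
    """Return the SKU to repair next: highest expected backorder reduction per repair-time unit."""
    def priority(s):
        # One backorder is removed if any exist; otherwise credit the chance that
        # a new failure arrives while this (short) repair is in progress.
        expected_reduction = 1.0 if s.backorders > 0 else s.failure_rate * s.mean_repair_time
        return expected_reduction / s.mean_repair_time
    return max(queue, key=priority)

queue = [Sku("A", 2, 4.0, 0.10), Sku("B", 0, 1.0, 0.20), Sku("C", 1, 2.0, 0.05)]
print(next_repair(queue).name)   # prints "C": backordered and faster to repair than SKU "A"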


Van Der Aalst W.M.P.,TU Eindhoven
Proceedings of the 2011 20th IEEE International Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises, WETICE 2011 | Year: 2011

Process mining serves as a bridge between data mining and business process modeling. The goal is to extract process-related knowledge from event data stored in information systems. One of the most challenging process mining tasks is process discovery, i.e., the automatic construction of process models from raw event logs. Today there are dozens of process discovery techniques generating process models using different notations (Petri nets, EPCs, BPMN, heuristic nets, etc.). This paper focuses on the representational bias used by these techniques. We will show that the choice of target model is very important for the discovery process itself. The representational bias should not be driven by the desired graphical representation but by the characteristics of the underlying processes and process discovery techniques. Therefore, we analyze the role of the representational bias in process mining. © 2011 IEEE. Source


Niesten E.,University Utrecht | Alkemade F.,TU Eindhoven
Renewable and Sustainable Energy Reviews | Year: 2016

Profitable business models for value creation and value capture with smart grid services are pivotal to realize the transition to smart and sustainable electricity grids. In addition to knowledge regarding the technical characteristics of smart grids, we need to know what drives companies and consumers to sell and purchase services in a smart grid. This paper reviews 45 scientific articles on business models for smart grid services and analyses information on value in 434 European and US smart grid pilot projects. Our review observes that the articles and pilots most often discuss three types of smart grid services: vehicle-to-grid and grid-to-vehicle services, demand response services, and services to integrate renewable energy (RE). We offer a classification of business models, value creation and capture for each of these services and for the different actors in the electricity value chain. Although business models have been developed for grid-to-vehicle services and for services that connect RE, knowledge regarding demand response services is restricted to different types of value creation and capture. Our results highlight that business models can be profitable when a new actor in the electricity industry, that is, the aggregator, can collect sufficiently large amounts of load. In addition, our analysis indicates that demand response services or vehicle-to-grid and grid-to-vehicle services will be offered in conjunction with the supply of RE. © 2015 Elsevier Ltd. All rights reserved. Source


Yuan H.,Leiden University | Khatua S.,Leiden University | Zijlstra P.,TU Eindhoven | Yorulmaz M.,Leiden University | Orrit M.,Leiden University
Angewandte Chemie - International Edition | Year: 2013

Single molecules: Large enhancements of single-molecule fluorescence, up to 1100 times, by using synthesized gold nanorods are reported. This high enhancement is achieved by selecting a dye with its absorption and emission close to the surface plasmon resonance of the gold nanorods. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. Source


Van Der Aalst W.M.P.,TU Eindhoven
Proceedings - 9th IEEE European Conference on Web Services, ECOWS 2011 | Year: 2011

The lion's share of cloud research has been focusing on performance-related problems. However, cloud computing will also change the way in which business processes are managed and supported, e.g., more and more organizations will be sharing common processes. In the classical setting, where product software is used, different organizations can make ad-hoc customizations to let the system fit their needs. This is undesirable, especially when multiple organizations share a cloud infrastructure. Configurable process models enable the sharing of common processes among different organizations in a controlled manner. This paper discusses challenges and opportunities related to business process configuration. Causal nets (C-nets) are proposed as a new formalism to deal with these challenges, e.g., merging variants into a configurable model is supported by a simple union operator. C-nets also provide a good representational bias for process mining, i.e., process discovery and conformance checking based on event logs. In the context of cloud computing, we focus on the application of C-nets to cross-organizational process mining. © 2011 IEEE. Source


Melazzi D.,University of Padua | Lancellotti V.,TU Eindhoven
Computer Physics Communications | Year: 2014

We present a full-wave numerical tool, dubbed ADAMANT (Advanced coDe for Anisotropic Media and ANTennas), devised for the analysis and design of radiofrequency antennas which drive the discharge in helicon plasma sources. ADAMANT relies on a set of coupled surface and volume integral equations in which the unknowns are the surface electric current density on the antenna conductors and the volume polarization current within the plasma. The latter can be inhomogeneous and anisotropic whereas the antenna can have arbitrary shape. The set of integral equations is solved numerically through the Method of Moments with sub-sectional surface and volume vector basis functions. This approach allows the accurate evaluation of the current distribution on the antenna and in the plasma as well as the antenna input impedance, a parameter crucial for the design of the feeding and matching network. We report several numerical examples which serve to validate ADAMANT against other well-established numerical approaches as well as experimental data. The numerical accuracy of the computed solution versus the number of basis functions in the plasma is also assessed. Finally, we employ ADAMANT to characterize the antenna of a real-life helicon plasma source. © 2014 Elsevier B.V. All rights reserved. Source


Duarte J.L.,TU Eindhoven | Lokos J.,Heliox B.V. | Van Horck F.B.M.,Heliox B.V.
IEEE Transactions on Power Electronics | Year: 2013

The simplicity of phase-shift control at fixed switching frequency and 50% duty-cycle operation is fully exploited by the proposed converter topology. The transistor voltages are clamped to only 50% of the dc input, the dc bus capacitive dividers being naturally stabilized. Furthermore, zero-voltage switching for all switches is guaranteed from no-load to full-load conditions, that is to say, from zero to nominal output voltage and from zero to nominal load current. As such, the proposed topology is an excellent candidate for demanding applications as compact battery chargers for electric vehicles. Experimental results obtained from a 400-80-V/0-360-V/2-kW/100-kHz prototype support the theoretical analysis. © 2012 IEEE. Source


van Weele A.J.,TU Eindhoven | van Raaij E.M.,Erasmus University Rotterdam
Journal of Supply Chain Management | Year: 2014

The Journal of Supply Chain Management (JSCM) is a hallmark in the academic field of operations and supply chain management. During the past 50 years, it has contributed substantially to the recognition and adoption of purchasing and supply management (PSM) as an academic and strategic business domain. Having been invited by the JSCM editors to provide some ideas on the future directions of PSM research, the authors discuss what can be done to further increase both its relevance and rigor. Rigor and relevance in academic research are interconnected. To improve its relevance, the authors argue that future PSM research should better reflect the strategic priorities raised in the contemporary strategic management literature. Next, future PSM research should be much better embedded in a limited number of management theories. Here, stakeholder theory, network theory, the resource-based view of the firm, dynamic capabilities theory, and the relational view could be considered as interesting candidates. Rigor is connected with robustness of academic research designs and projects. To foster its rigor, future PSM research should allow for an increase in the number of replication studies, longitudinal studies, and meta-analytical studies. Future PSM research designs should reflect a careful distinction between informants and respondents and a careful sample selection. When discussing the results of quantitative studies, future PSM research should report on effect sizes and confidence intervals, rather than p-values. Adoption of these ideas would have some important implications for both the academic PSM community and academic journal editors. © 2014 Institute for Supply Management, Inc. Source


Waltrich G.,Federal University of Santa Catarina | Waltrich G.,TU Eindhoven | Barbi I.,Federal University of Santa Catarina
IEEE Transactions on Industrial Electronics | Year: 2010

In this paper, a modular three-phase multilevel inverter specially suited for electrical drive applications is proposed. Unlike the cascaded H-bridge inverter, this topology is based on power cells connected in cascade using two inverter legs in series. A detailed analysis of the structure and the development of design equations for the load voltage with n levels are carried out using phase-shifted multicarrier pulse-width modulation. Simulations and experimental results for a 15-kW three-phase system, with nine voltage levels, validate the study presented. © 2006 IEEE. Source


Van Der Laan W.J.,University of Groningen | Jalba A.C.,TU Eindhoven | Roerdink J.B.T.M.,University of Groningen
IEEE Transactions on Parallel and Distributed Systems | Year: 2011

The Discrete Wavelet Transform (DWT) has a wide range of applications from signal processing to video and image compression. We show that this transform, by means of the lifting scheme, can be performed in a memory and computation-efficient way on modern, programmable GPUs, which can be regarded as massively parallel coprocessors through NVidia's CUDA compute paradigm. The three main hardware architectures for the 2D DWT (row-column, line-based, block-based) are shown to be unsuitable for a CUDA implementation. Our CUDA-specific design can be regarded as a hybrid method between the row-column and block-based methods. We achieve considerable speedups compared to an optimized CPU implementation and earlier non-CUDA-based GPU DWT methods, both for 2D images and 3D volume data. Additionally, memory usage can be reduced significantly compared to previous GPU DWT methods. The method is scalable and the fastest GPU implementation among the methods considered. A performance analysis shows that the results of our CUDA-specific design are in close agreement with our theoretical complexity analysis. © 2011 IEEE. Source
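The lifting scheme mentioned above factorizes a wavelet transform into small in-place 'predict' and 'update' steps, which is what makes memory-efficient GPU mappings possible in the first place. The fragment below shows one level of the 1D Haar wavelet via lifting in plain Python/NumPy; it only illustrates lifting itself and is not the CUDA row-column/block hybrid developed in the paper.

# One level of the 1D Haar wavelet computed with the lifting scheme.
import numpy as np

def haar_lifting_forward(x):
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2].copy(), x[1::2].copy()
    odd -= even              # predict: detail = odd sample minus its prediction from the even sample
    even += odd / 2.0        # update: approximation becomes the pairwise mean
    return even, odd         # (approximation, detail)

def haar_lifting_inverse(even, odd):
    even = even - odd / 2.0  # undo update
    odd = odd + even         # undo predict
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

signal = np.array([2.0, 4.0, 6.0, 8.0, 5.0, 1.0, 0.0, 2.0])
approx, detail = haar_lifting_forward(signal)
assert np.allclose(haar_lifting_inverse(approx, detail), signal)   # perfect reconstruction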


Westergaard M.,TU Eindhoven
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2011

Declarative workflow languages are easy for humans to understand and use for specifications, but difficult for computers to check for consistency and use for enactment. Therefore, declarative languages need to be translated to something a computer can handle. One approach is to translate the declarative language to linear temporal logic (LTL), which can be translated to finite automata. While computers are very good at handling finite automata, the translation itself is often a road block as it may take time exponential in the size of the input. Here, we present algorithms for doing this translation much more efficiently (around a factor of 10,000 times faster and handling 10 times larger systems on a standard computer), making declarative specifications scale to realistic settings. © 2011 Springer-Verlag. Source


Verhoosel C.V.,TU Eindhoven | de Borst R.,University of Glasgow
International Journal for Numerical Methods in Engineering | Year: 2013

In this paper, a phase-field model for cohesive fracture is developed. After casting the cohesive zone approach in an energetic framework, which is suitable for incorporation in phase-field approaches, the phase-field approach to brittle fracture is recapitulated. The approximation to the Dirac function is discussed with particular emphasis on the Dirichlet boundary conditions that arise in the phase-field approximation. The accuracy of the discretisation of the phase field, including the sensitivity to the parameter that balances the field and the boundary contributions, is assessed by means of a simple example. The relation to gradient-enhanced damage models is highlighted, and some comments on the similarities and the differences between phase-field approaches to fracture and gradient-damage models are made. A phase-field representation for cohesive fracture is elaborated, starting from the aforementioned energetic framework. The strong as well as the weak formats are presented, the latter being the starting point for the ensuing finite element discretisation, which involves three fields: the displacement field, an auxiliary field that represents the jump in the displacement across the crack, and the phase field. Compared to phase-field approaches for brittle fracture, the modelling of the jump of the displacement across the crack is a complication, and the current work provides evidence that an additional constraint has to be provided in the sense that the auxiliary field must be constant in the direction orthogonal to the crack. The sensitivity of the results with respect to the numerical parameter needed to enforce this constraint is investigated, as well as how the results depend on the orders of the discretisation of the three fields. Finally, examples are given that demonstrate grid insensitivity for adhesive and for cohesive failure, the latter example being somewhat limited because only straight crack propagation is considered. © 2013 John Wiley & Sons, Ltd. Source
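For orientation, the brittle phase-field ingredient that the paper recapitulates regularizes a sharp crack by a smooth field d. In one common second-order formulation (given here only as a generic illustration under that assumption; the cohesive extension and the auxiliary jump field of the paper are not reproduced), the crack surface is approximated by

\Gamma_\ell(d) \approx \int_\Omega \left( \frac{d^2}{2\ell} + \frac{\ell}{2}\,|\nabla d|^2 \right) \mathrm{d}V,

whose one-dimensional minimizer under the Dirichlet condition d = 1 on the crack is d(x) = \exp(-|x|/\ell), a smeared approximation of the Dirac-type crack density with internal length \ell.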


Khatua S.,Leiden University | Paulo P.M.R.,University of Lisbon | Yuan H.,Leiden University | Gupta A.,Leiden University | And 2 more authors.
ACS Nano | Year: 2014

Enhancing the fluorescence of a weak emitter is important to further extend the reach of single-molecule fluorescence imaging to many unexplored systems. Here we study fluorescence enhancement by isolated gold nanorods and explore the role of the surface plasmon resonance (SPR) on the observed enhancements. Gold nanorods can be cheaply synthesized in large volumes, yet we find similar fluorescence enhancements as literature reports on lithographically fabricated nanoparticle assemblies. The fluorescence of a weak emitter, crystal violet, can be enhanced more than 1000-fold by a single nanorod with its SPR at 629 nm excited at 633 nm. This strong enhancement results from both an excitation rate enhancement of ~130 and an effective emission enhancement of ~9. The fluorescence enhancement, however, decreases sharply when the SPR wavelength moves away from the excitation laser wavelength or when the SPR has only a partial overlap with the emission spectrum of the fluorophore. The reported measurements of fluorescence enhancement by 11 nanorods with varying SPR wavelengths are consistent with numerical simulations. © 2014 American Chemical Society. Source


Kunert C.,University of Stuttgart | Harting J.,University of Stuttgart | Harting J.,TU Eindhoven | Vinogradova O.I.,RWTH Aachen
Physical Review Letters | Year: 2010

We report results of lattice Boltzmann simulations of a high-speed drainage of liquid films squeezed between a smooth sphere and a randomly rough plane. A significant decrease in the hydrodynamic resistance force as compared with that predicted for two smooth surfaces is observed. However, this force reduction does not represent slippage. The computed force is exactly the same as that between equivalent smooth surfaces obeying no-slip boundary conditions, but located at an intermediate position between peaks and valleys of asperities. The shift in hydrodynamic thickness is shown to depend on the height and density of roughness elements. Our results do not support some previous experimental conclusions on a very large and shear-dependent boundary slip for similar systems. © 2010 The American Physical Society. Source


Su R.,Nanyang Technological University | Van Schuppen J.H.,Centrum voor Wiskunde en Informatica CWI | Rooda J.E.,TU Eindhoven
IEEE Transactions on Automatic Control | Year: 2010

Blockingness is one of the major obstacles that need to be overcome in the Ramadge-Wonham supervisory synthesis paradigm, especially for large systems. In this paper, we propose an abstraction technique to overcome this difficulty. We first provide details of this abstraction technique, then describe how it can be applied to a supervisor synthesis problem, where plant models are nondeterministic but specifications and supervisors are deterministic. We show that a nonblocking supervisor for an abstraction of a plant under a specification is guaranteed to be a nonblocking supervisor of the original plant under the same specification. The reverse statement is also true, if we impose an additional constraint in the choice of the alphabet of abstraction, i.e., every event, which is either observable or labels a transition to a marker state, is contained in the alphabet of abstraction. © 2006 IEEE. Source


De Teresa J.M.,University of Zaragoza | Cordoba R.,University of Zaragoza | Cordoba R.,TU Eindhoven
ACS Nano | Year: 2014

One of the main features of any lithography technique is its resolution, generally maximized for a single isolated object. However, in most cases, functional devices call for highly dense arrays of nanostructures, the fabrication of which is generally challenging. Here, we show the growth of arrays of densely packed isolated nanowires based on the use of focused beam induced deposition plus Ar+ milling. The growth strategy presented herein allows the creation of films showing thickness modulation with periodicity determined by the beam scan pitch. The subsequent Ar+ milling translates such modulation into an array of isolated nanowires. This approach has been applied to grow arrays of W-based nanowires by focused ion beam induced deposition and Co nanowires by focused electron beam induced deposition, achieving linear densities up to 2.5 × 10^5 nanowires/cm (one nanowire every 40 nm). These results open the route for specific applications in nanomagnetism, nanosuperconductivity, and nanophotonics, where arrays of densely packed isolated nanowires grown by focused beam deposition are required. © 2014 American Chemical Society. Source


Brouwers H.J.H.,TU Eindhoven
Physical Review E - Statistical, Nonlinear, and Soft Matter Physics | Year: 2016

This paper addresses the void fraction of polydisperse particles with a Weibull (or Rosin-Rammler) size distribution. It is demonstrated that the governing parameters of this distribution can be uniquely related to those of the lognormal distribution. Hence, an existing closed-form expression that predicts the void fraction of particles with a lognormal size distribution can be transformed into an expression for Weibull distributions. Both expressions contain the contraction coefficient β. Like the monosized void fraction φ1, it is a physical parameter which depends only on the particles' shape and their state of compaction. Based on a consideration of the scaled binary void contraction, a linear relation for (1-φ1)β as a function of φ1 is proposed, with proportionality constant B, depending on the state of compaction only. This is validated using computational and experimental packing data concerning random close and random loose packing arrangements. Finally, using this β, the closed-form analytical expression governing the void fraction of Weibull distributions is thoroughly compared with empirical data reported in the literature, and good agreement is found. Furthermore, the present analysis yields an algebraic equation relating the void fraction of monosized particles at different compaction states. This expression appears to be in good agreement with a broad collection of random close and random loose packing data. © 2016 authors. Published by the American Physical Society under the terms of the Creative Commons Attribution 3.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI. Source


After demonstrating by means of an in vitro model experiment that the flow in the glottis can become asymmetric, Erath et al. [J. Acoust. Soc. Am. 130, 389-403 (2011)] propose a theory to estimate the resulting asymmetry in the lateral hydrodynamic force on the vocal folds. A wall-jet attached to one side of the divergent downstream part of the glottis is considered. The model assumes that the wall is a flat plate and that the jet separates at the glottal exit. They implement this so-called Boundary Layer Estimation of Asymmetric Pressure force model in a lumped two-mass model of the vocal folds. This should allow them to study the impact of the asymmetry on voiced sound production. A critical discussion of the merits and shortcomings of the model is provided. It predicts discontinuities in the time dependency of the lateral force. It predicts this force to be independent of the glottal opening, which is not reasonable. An alternative model is proposed, which avoids these problems and predicts that there is a minimum glottal opening below which the wall-jet does not separate from the wall at the glottal exit. This is in agreement with the experimental results provided by Erath et al. © 2013 Acoustical Society of America. Source


Willemse R.X.E.,Oce Technologies B.V. | Van Herk A.M.,TU Eindhoven
Macromolecular Chemistry and Physics | Year: 2010

The combination of MALDI-ToF-MS with pulsed laser polymerization (PLP) has one big advantage over the combination of size exclusion chromatography (SEC) with PLP. MALDI-ToF-MS is an absolute measurement which does not need calibration. Especially in the field of acrylates, this is an important advantage over the conventional use of SEC, since low polydispersity standards are not readily available for acrylates. Moreover, acrylates suffer from branching. Literature shows that since branched polymers have a different hydrodynamic volume than linear polymers, this can affect the calibration of SEC. The determination of the Arrhenius parameters for a family of acrylates is performed with PLP-MALDI-ToF-MS. The results clearly demonstrate that an increase of the ester side group indeed results in an increase of the propagation rate coefficient. Whether this is due to an entropic or enthalpic effect cannot be derived from the results. © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. Source
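As background to the last sentences (standard relations, not results specific to this paper): in pulsed laser polymerization the propagation rate coefficient follows from the chain length L_0 at the first inflection point of the molar mass distribution,

k_p = \frac{L_0}{c_M\, t_0},

with c_M the monomer concentration and t_0 the time between laser pulses; measuring k_p at several temperatures then yields the Arrhenius parameters via k_p = A \exp(-E_a/RT). Because MALDI-ToF-MS measures L_0 directly in absolute chain-length units, no SEC calibration is required.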


Hopfe C.J.,University of Cardiff | Hensen J.L.M.,TU Eindhoven
Energy and Buildings | Year: 2011

Building performance simulation (BPS) has the potential to provide relevant design information by indicating directions for design solutions. A major challenge in simulation tools is how to deal with the difficulties arising from the large variety of parameters and the complexity of factors such as non-linearity, discreteness, and uncertainty. The purpose of uncertainty and sensitivity analysis can be described as identifying uncertainties in input and output of a system or simulation tool [1-3]. In practice uncertainty and sensitivity analysis have many additional benefits including: (1) With the help of parameter screening it enables the simplification of a model [4]. (2) It allows the analysis of the robustness of a model [5]. (3) It makes users aware of unexpected sensitivities that may lead to errors and/or wrong specifications (quality assurance) [6-10]. (4) By changing the input of the parameters and showing the effect on the outcome of a model, it provides a "what-if analysis" (decision support) [11]. In this paper a case study is performed based on an office building with respect to various building performance parameters. Uncertainty analysis (UA) is carried out and implications for the results considering energy consumption and thermal comfort are demonstrated and elaborated. The added value and usefulness of the integration of UA in BPS is shown. © 2011 Elsevier B.V. All rights reserved. Source


Dorst K.,University of Technology, Sydney | Dorst K.,TU Eindhoven
Design Studies | Year: 2011

In the last few years, "Design Thinking" has gained popularity - it is now seen as an exciting new paradigm for dealing with problems in sectors as far afield as IT, Business, Education and Medicine. This potential success challenges the design research community to provide unambiguous answers to two key questions: "What is the core of Design Thinking?" and "What could it bring to practitioners and organisations in other fields?". We sketch a partial answer by considering the fundamental reasoning pattern behind design, and then looking at the core design practices of framing and frame creation. The paper ends with an exploration of the way in which these core design practices can be adopted for organisational problem solving and innovation. © 2011 Elsevier Ltd. All rights reserved. Source


Ustebay D.,McGill University | Castro R.,TU Eindhoven | Rabbat M.,McGill University
IEEE Journal on Selected Topics in Signal Processing | Year: 2011

Recently, gossip algorithms have received much attention from the wireless sensor network community due to their simplicity, scalability and robustness. Motivated by applications such as compression and distributed transform coding, we propose a new gossip algorithm called Selective Gossip. Unlike traditional randomized gossip which computes the average of scalar values, we run gossip algorithms in parallel on the elements of a vector. The goal is to compute only the entries which are above a defined threshold in magnitude, i.e., significant entries. Nodes adaptively approximate the significant entries while abstaining from calculating the insignificant ones. Consequently, network lifetime and bandwidth are preserved. We show that with the proposed algorithm nodes reach consensus on the values of the significant entries and on the indices of insignificant ones. We illustrate the performance of our algorithm with a field estimation application. For regular topologies, selective gossip computes an approximation of the field using the wavelet transform. For irregular network topologies, we construct an orthonormal transform basis using eigenvectors of the graph Laplacian. Using two real sensor network datasets we show substantial communication savings over randomized gossip. We also propose a decentralized adaptive threshold mechanism such that nodes estimate the threshold while approximating the entries of the vector for computing the best m-term approximation of the data. © 2011 IEEE. Source
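For context, the fragment below sketches plain randomized gossip averaging, the scalar primitive that Selective Gossip runs in parallel on the entries of a vector (updating only entries whose running estimate exceeds a threshold). The graph, the number of iterations and the update schedule are illustrative assumptions; the consensus proofs and the adaptive-threshold mechanism of the paper are not reproduced.

# Randomized pairwise gossip: nodes on a random edge repeatedly average their values,
# driving every node toward the network-wide mean.
import random

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]   # a small illustrative connected graph
values = [4.0, 8.0, 1.0, 3.0]                      # initial local measurements
target = sum(values) / len(values)                 # the value all nodes should agree on

random.seed(1)
for _ in range(200):
    i, j = random.choice(edges)                    # one random edge wakes up
    values[i] = values[j] = (values[i] + values[j]) / 2.0   # pairwise averaging step

print(values, "->", target)                        # every entry is now close to 4.0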


Goossens K.,TU Eindhoven | Hansson A.,University of Twente
Proceedings - Design Automation Conference | Year: 2010

The goals for the Æthereal network on silicon, as it was then called, were set in 2000 and its concepts were defined early 2001. Ten years on, what has been achieved? Did we meet the goals, and what is left of the concepts? In this paper we answer those questions, and evaluate different implementations, based on a new performance:cost analysis. We discuss and reflect on our experiences, and conclude with open issues and future directions. © Copyright 2010 ACM. Source


Agboma F.,University of Essex | Liotta A.,TU Eindhoven
Telecommunication Systems | Year: 2012

This study contributes towards the relatively new but growing discipline of QoE management in content delivery systems. The study focuses on the development of a QoE-based management framework for the construction of QoE models for different types of multimedia contents delivered onto three typical mobile terminals: a mobile phone, a PDA and a laptop. A statistical modelling technique is employed which correlates QoS parameters with estimates of QoE perceptions. These correlations were found to be dependent on terminals and multimedia content types. The application of the framework and prediction models in QoE management strategies is demonstrated using examples. We find that significant resource savings can be achieved with our approach by contrast to conventional QoS solutions. © Springer Science+Business Media, LLC 2010. Source


Design can be described as a sequence of decisions made to balance design goals and constraints. These decisions must be made in every design effort, although they may not be explicit, conscious, or formally represented. In routine design, these decisions are straightforward, requiring little learning by designers. Problem understanding evolves in parallel with the problem solution, and many components of the design problem cannot be expected to emerge until some attempt has been made at generating solutions. Generalized knowledge can also be derived by using other empirical or theoretical research methods. Design-based research, however, can produce knowledge that normally could not be generated by isolated analysis or traditional empirical approaches, and therefore complements existing empirical and theoretical research methods. Source


Ottmann C.,TU Eindhoven
Bioorganic and Medicinal Chemistry | Year: 2013

14-3-3 Proteins are eukaryotic adapter proteins that regulate a plethora of physiological processes by binding to several hundred partner proteins. They play a role in biological activities as diverse as signal transduction, cell cycle regulation, apoptosis, host-pathogen interactions and metabolic control. As such, 14-3-3s are implicated in disease areas like cancer, neurodegeneration, diabetes, pulmonary disease, and obesity. Targeted modulation of 14-3-3 protein-protein interactions (PPIs) by small molecules is therefore an attractive concept for disease intervention. In recent years a number of examples of inhibitors and stabilizers of 14-3-3 PPIs have been reported promising a vivid future in chemical biology and drug development for this remarkable class of proteins. © 2013 Elsevier Ltd. All rights reserved. Source


Heuts J.P.A.,TU Eindhoven | Smeets N.M.B.,Queens University
Polymer Chemistry | Year: 2011

An overview is given of cobalt-catalyzed chain transfer in free-radical polymerization and the chemistry and applications of its derived macromonomers. Catalytic chain transfer polymerization is a very efficient and versatile technique for the synthesis of functional macromonomers. Firstly, the mechanism and kinetic aspects of the process are briefly discussed in solution/bulk and in emulsion polymerization, followed by a description of its application to produce functional macromonomers. The second part of this review briefly describes the behavior of the macromonomers as chain transfer agents and/or comonomers in second-stage radical polymerizations yielding polymers of more complex architectures. The review ends with a brief overview of post-polymerization modifications of the vinyl end-functionality of the macromonomers, yielding functional polymers with applications ranging from initiators in anionic polymerization to end-functional lectin-binding glycopolymers. This journal is © The Royal Society of Chemistry. Source


Holder S.J.,University of Kent | Sommerdijk N.A.J.M.,TU Eindhoven
Polymer Chemistry | Year: 2011

Amphiphilic AB and ABA block copolymers have been demonstrated to form a variety of self-assembled aggregate structures in dilute solutions where the solvent preferentially solvates one of the blocks. The most common structures formed by these amphiphilic macromolecules are spherical micelles, cylindrical micelles and vesicles (polymersomes). Interest in the characterisation and controlled formation of block copolymer aggregates has been spurred on by their potential as surfactants, nano- to micro-sized carriers for active compounds, for the controlled release of encapsulated compounds and for inorganic materials templating, amongst numerous other proposed applications. Research in the past decade has focussed not only on manipulating the properties of aggregates through control of the chemistry of the constituent polymer blocks but also on the external and internal morphology of the aggregates. This review article will present an overview of recent approaches to controlling the self-assembly of amphiphilic block copolymers with a view to obtaining novel micellar morphologies. Whilst the article touches upon multi-compartment micelles, particular focus is placed upon control of the overall shape of micelles; i.e. those systems that expand the range of accessible morphologies beyond 'simple' spherical and cylindrical micelles, namely disklike, toroidal and bicontinuous micelles. © The Royal Society of Chemistry 2011. Source


Illiberi A.,Applied Scientific Research | Roozeboom F.,Applied Scientific Research | Roozeboom F.,TU Eindhoven | Poodt P.,Applied Scientific Research
ACS Applied Materials and Interfaces | Year: 2012

Zinc oxide thin films have been deposited at high growth rates (up to ∼1 nm/s) by spatial atomic layer deposition technique at atmospheric pressure. Water has been used as oxidant for diethylzinc (DEZ) at deposition temperatures between 75 and 250 °C. The electrical, structural (crystallinity and morphology), and optical properties of the films have been analyzed by using Hall, four-point probe, X-ray diffraction, scanning electron microscopy, spectrophotometry, and photoluminescence, respectively. All the films have c-axis (100) preferential orientation, good crystalline quality and high transparency (∼ 85%) in the visible range. By varying the DEZ partial pressure, the electrical properties of ZnO can be controlled, ranging from heavily n-type conductive (with 4 mOhm.cm resistivity for 250 nm thickness) to insulating. Combining the high deposition rates with a precise control of functional properties (i.e., conductivity and transparency) of the films, the industrially scalable spatial ALD technique can become a disruptive manufacturing method for the ZnO-based industry. © 2011 American Chemical Society. Source


Van Rijnsoever F.J.,University Utrecht | Castaldi C.,TU Eindhoven
Journal of the American Society for Information Science and Technology | Year: 2011

Consumer categorizations based on innovativeness were originally proposed by E.M. Rogers (2003) and remain of relevance for predicting purchasing behavior in high-tech domains such as consumer electronics. We extend such innovativeness-based categorizations in two directions: We first take into account the existence of technology clusters within product domains and then enrich the definition of consumer innovativeness by considering not only past adoption behavior but also future purchase intentions. We derive a novel consumer categorization based on data from a sample of 2,094 Dutch consumers for the case of consumer electronics. In so doing, we apply endogenous categorization techniques that represent a methodological improvement with respect to previously applied techniques. © 2011 ASIS&T. Source


Teunissen J.,Centrum Wiskunde and Informatica CWI | Ebert U.,Centrum Wiskunde and Informatica CWI | Ebert U.,TU Eindhoven
Journal of Computational Physics | Year: 2014

In particle simulations, the weights of particles determine how many physical particles they represent. Adaptively adjusting these weights can greatly improve the efficiency of the simulation, without creating severe nonphysical artifacts. We present a new method for the pairwise merging of particles, in which two particles are combined into one. To find particles that are 'close' to each other, we use a k-d tree data structure. With a k-d tree, close neighbors can be searched for efficiently, and independently of the mesh used in the simulation. The merging can be done in different ways, conserving for example momentum or energy. We introduce probabilistic schemes, which set properties for the merged particle using random numbers. The effect of various merge schemes on the energy distribution, the momentum distribution and the grid moments is compared. We also compare their performance in the simulation of the two-stream instability. © 2013 Elsevier Inc. Source
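A minimal sketch of the pairwise, momentum-conserving merge described above is given below, using SciPy's k-d tree for the neighbour search. The closeness threshold, weights and data layout are illustrative assumptions; the probabilistic and energy-conserving schemes discussed in the paper are not shown.

# Pairwise merging of weighted particles using a k-d tree for neighbour search.
# This particular merge conserves total weight and momentum (not energy).
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
n = 1000
pos = rng.random((n, 3))           # particle positions
vel = rng.normal(size=(n, 3))      # particle velocities
w = np.ones(n)                     # particle weights (number of physical particles represented)

tree = cKDTree(pos)
dist, nbr = tree.query(pos, k=2)   # nearest neighbour of every particle (column 0 is the particle itself)

alive = np.ones(n, dtype=bool)
for i in range(n):
    j = nbr[i, 1]
    if alive[i] and alive[j] and i != j and dist[i, 1] < 0.05:  # merge only sufficiently close pairs
        W = w[i] + w[j]
        pos[i] = (w[i] * pos[i] + w[j] * pos[j]) / W   # weight-averaged position
        vel[i] = (w[i] * vel[i] + w[j] * vel[j]) / W   # conserves total momentum
        w[i] = W
        alive[j] = False                               # particle j is removed

print(f"{alive.sum()} particles remain out of {n}")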


Su R.,Nanyang Technological University | Van Schuppen J.H.,Centrum voor Wiskunde en Informatica CWI | Rooda J.E.,TU Eindhoven
Automatica | Year: 2012

In supervisor synthesis for discrete-event systems achieving nonblockingness is a major challenge for a large system. To overcome it we present an approach to synthesize a deterministic coordinated distributed supervisor under partial observation, where the plant is modeled by a collection of nondeterministic finite-state automata and the requirement is modeled by a collection of deterministic finite-state automata. Then we provide a sufficient condition to ensure the maximal permissiveness of a coordinated distributed supervisor generated by the proposed synthesis approach. © 2012 Elsevier Ltd. All rights reserved. Source


Peng H.,TU Eindhoven | Coit D.W.,Rutgers University | Feng Q.,University of Houston
IEEE Transactions on Reliability | Year: 2012

This paper proposes two new importance measures: one for systems with s-independent degrading components, and another for systems with s-correlated degrading components. Importance measures in previous research are inadequate for systems with degrading components because they are only applicable to steady-state cases and problems with discrete states, without considering the continuously changing status of the degrading components. Our new importance measures are proposed as functions of time that can provide timely feedback on the critical components prior to failure based on the measured or observed degradation. Furthermore, the correlation between components is considered for developing these importance measures through a multivariate distribution. To evaluate the criticality of components, we analysed reliability models for multi-component systems with degrading components, which can also be utilized for studying maintenance models. Numerical examples show that the proposed importance measures can be used as an effective tool to assess component criticality for systems with degrading components. © 2006 IEEE. Source


Brouwers J.J.H.,TU Eindhoven
Physical Review E - Statistical, Nonlinear, and Soft Matter Physics | Year: 2012

We derive a comprehensive statistical model for dispersion of passive or almost passive admixture particles such as fine particulate matter, aerosols, smoke, and fumes in turbulent flow. The model rests on the Markov limit for particle velocity. It is in accordance with the asymptotic structure of turbulence at large Reynolds number as described by Kolmogorov. The model consists of Langevin and diffusion equations in which the damping and diffusivity are expressed by expansions in powers of the reciprocal Kolmogorov constant C0. We derive solutions of O(C0^0) and O(C0^-1). We truncate at O(C0^-2), which is shown to result in an error of a few percent in predicted dispersion statistics for representative cases of turbulent flow. We reveal analogies and remarkable differences between the solutions of classical statistical mechanics and those of statistical turbulence. © 2012 American Physical Society. Source
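For reference, the simplest Kolmogorov-consistent Langevin model for the velocity u of a passive particle in homogeneous isotropic turbulence reads (a textbook form given only for orientation; the paper's expansion in powers of 1/C_0 generalizes the damping and diffusivity terms)

\mathrm{d}u_i = -\frac{C_0\,\varepsilon}{2\sigma_u^2}\,u_i\,\mathrm{d}t + \sqrt{C_0\,\varepsilon}\;\mathrm{d}W_i,

with \varepsilon the mean energy dissipation rate, \sigma_u^2 the velocity variance, C_0 the Kolmogorov constant and W_i a Wiener process.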


Woeginger G.J.,TU Eindhoven
Journal of Informetrics | Year: 2014

In a recent paper, Chambers and Miller introduced two fundamental axioms for scientific research indices. We perform a detailed analysis of these two axioms, thereby providing clean combinatorial characterizations of the research indices that satisfy these axioms and of the so-called step-based indices. We single out the staircase indices as a particularly simple subfamily of the step-based indices, and we provide a simple axiomatic characterization for them. © 2014 Elsevier Ltd. Source


Biferale L.,University of Rome Tor Vergata | Musacchio S.,French National Center for Scientific Research | Toschi F.,TU Eindhoven | Toschi F.,CNR Institute for applied mathematics Mauro Picone
Physical Review Letters | Year: 2012

We study the statistical properties of homogeneous and isotropic three-dimensional (3D) turbulent flows. By introducing a novel way to make numerical investigations of Navier-Stokes equations, we show that all 3D flows in nature possess a subset of nonlinear evolution leading to a reverse energy transfer: from small to large scales. Up to now, such an inverse cascade was only observed in flows under strong rotation and in quasi-two-dimensional geometries under strong confinement. We show here that energy flux is always reversed when mirror symmetry is broken, leading to a distribution of helicity in the system with a well-defined sign at all wave numbers. Our findings broaden the range of flows where the inverse energy cascade may be detected and rationalize the role played by helicity in the energy transfer process, showing that both 2D and 3D properties naturally coexist in all flows in nature. The unconventional numerical methodology here proposed, based on a Galerkin decimation of helical Fourier modes, paves the road for future studies on the influence of helicity on small-scale intermittency and the nature of the nonlinear interaction in magnetohydrodynamics. © 2012 American Physical Society. Source


Rodriguez S.R.K.,HIGH-TECH | Murai S.,HIGH-TECH | Murai S.,Kyoto University | Verschuuren M.A.,HIGH-TECH | And 2 more authors.
Physical Review Letters | Year: 2012

We demonstrate the generation of light in an optical waveguide strongly coupled to a periodic array of metallic nanoantennas. This coupling gives rise to hybrid waveguide-plasmon polaritons (WPPs), which undergo a transmutation from plasmon to waveguide mode and vice versa as the eigenfrequency detuning of the bare states transits through zero. Near zero detuning, the structure is nearly transparent in the far-field but sustains strong local field enhancements inside the waveguide. Consequently, light-emitting WPPs are strongly enhanced at energies and in-plane momenta for which WPPs minimize light extinction. We elucidate the unusual properties of these polaritons through a classical model of coupled harmonic oscillators. © 2012 American Physical Society. Source
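The coupled-oscillator picture invoked at the end of the abstract can be summarized, in generic textbook form rather than with the parameters of the paper, by the eigenfrequencies of two lossless oscillators with bare frequencies \omega_a, \omega_b and coupling strength g,

\omega_\pm = \frac{\omega_a+\omega_b}{2} \pm \sqrt{g^2 + \frac{\delta^2}{4}}, \qquad \delta = \omega_a - \omega_b,

so that at zero detuning (\delta = 0) the hybrid waveguide-plasmon modes are split by 2g, the anticrossing signature of strong coupling.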


Munoz-Bonilla A.,CSIC - Institute of Polymer Science and Technology | Heuts J.P.A.,TU Eindhoven | Fernandez-Garcia M.,CSIC - Institute of Polymer Science and Technology
Soft Matter | Year: 2011

A well-defined amphiphilic diblock glycopolymer of poly(2-{[(d-glucosamin-2-N-yl)carbonyl]oxy}ethyl methacrylate)-b-poly(butyl methacrylate) (PHEMAGl-b-PBMA) was synthesized via atom transfer radical polymerization (ATRP). Due to its capability to form micelles in aqueous solution, the obtained block glycopolymer was used as a polymeric surfactant in the emulsion polymerization of butyl methacrylate in order to prepare glycosylated polymer particles. Core-shell particles consisting of a soft core of poly(butyl methacrylate) covered with glycopolymer bearing glucose moieties were obtained. These latex particles were then employed to prepare polymer films with an active surface. The surface bioactivity of this polymer coating was examined using the specific lectin Concanavalin A, Canavalia ensiformis. The specific and successful binding to Concanavalin A was demonstrated by both fluorescence microscopy and spectroscopy, the binding becoming more intense with increasing concentration of block glycopolymer surfactant. The good accessibility of the glucose moieties at the surface of the coating makes this method a powerful tool to achieve potential materials for biomedical applications involving molecular recognition processes. © 2011 The Royal Society of Chemistry. Source


Ulu C.,University of Texas at Austin | Honhon D.,TU Eindhoven | Alptekinoglu A.,Southern Methodist University
Operations Research | Year: 2012

How should a firm modify its product assortment over time when learning about consumer tastes? In this paper, we study dynamic assortment decisions in a horizontally differentiated product category for which consumers' diverse tastes can be represented as locations on a Hotelling line. We presume that the firm knows all possible consumer locations, comprising a finite set, but does not know their probability distribution. We model this problem as a discrete-time dynamic program; each period, the firm chooses an assortment and sets prices to maximize the total expected profit over a finite horizon, given its subjective beliefs over consumer tastes. The consumers then choose a product from the assortment that maximizes their own utility. The firm observes sales, which provide censored information on consumer tastes, and it updates beliefs in a Bayesian fashion. There is a recurring trade-off between the immediate profits from sales in the current period (exploitation) and the informational gains to be exploited in all future periods (exploration). We show that one can (partially) order assortments based on their information content and that in any given period the optimal assortment cannot be less informative than the myopically optimal assortment. This result is akin to the well-known "stock more" result in censored newsvendor problems with the newsvendor learning about demand through sales when lost sales are not observable. We demonstrate that it can be optimal for the firm to alternate between exploration and exploitation, and even offer assortments that lead to losses in the current period in order to gain information on consumer tastes. We also develop a Bayesian conjugate model that reduces the state space of the dynamic program and study value of learning using this conjugate model. © 2012 INFORMS. Source


Van De Wouw N.,TU Eindhoven | Leine R.I.,ETH Zurich
International Journal of Robust and Nonlinear Control | Year: 2012

In this paper, we consider the robust set-point stabilization problem for motion systems subject to friction. Robustness aspects are particularly relevant in practice, where uncertainties in the friction model are unavoidable. We propose an impulsive feedback control design that robustly stabilizes the set-point for a class of position-, velocity- and time-dependent friction laws with uncertainty. Moreover, it is shown that this control strategy guarantees the finite-time convergence to the set-point, which is a favorable characteristic of the resulting closed loop from a transient performance perspective. The results are illustrated by means of a representative motion control example. © 2011 John Wiley & Sons, Ltd. Source


Vaesen K.,TU Eindhoven
Biology and Philosophy | Year: 2012

Dubreuil (Biol Phil 25:53-73, 2010b, this journal) argues that modern-like cognitive abilities for inhibitory control and goal maintenance most likely evolved in Homo heidelbergensis, much before the evolution of oft-cited modern traits, such as symbolism and art. Dubreuil's argument proceeds in two steps. First, he identifies two behavioral traits that are supposed to be indicative of the presence of a capacity for inhibition and goal maintenance: cooperative feeding and cooperative breeding. Next, he tries to show that these behavioral traits most likely emerged in Homo heidelbergensis. In this paper, I show that neither of these steps are warranted in light of current scientific evidence, and thus, that the evolutionary background of human executive functions, such as inhibition and goal maintenance, remains obscure. Nonetheless, I suggest that cooperative breeding might mark a crucial step in the evolution of our species: its early emergence in Homo erectus might have favored a social intelligence that was required to get modernity really off the ground in Homo sapiens. © 2011 The Author(s). Source


Lazar M.,TU Eindhoven
Proceedings of the IEEE Conference on Decision and Control | Year: 2010

This paper considers the synthesis of infinity norm Lyapunov functions for discrete-time linear systems. A proper conic partition of the state-space is employed to construct a finite set of linear inequalities in the elements of the Lyapunov weight matrix. Under typical assumptions, it is proven that the feasibility of the derived set of linear inequalities is equivalent with the existence of an infinity norm Lyapunov function. Furthermore, it is shown that the developed solution extends naturally to several relevant classes of discrete-time nonlinear systems. ©2010 IEEE. Source
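In generic terms (a standard sufficient condition, stated here for orientation rather than as the paper's exact construction), V(x) = \|Px\|_\infty with a full-column-rank matrix P is a Lyapunov function for the discrete-time linear system x_{k+1} = Ax_k whenever there exists a matrix Q such that

PA = QP, \qquad \|Q\|_\infty < 1,

since then \|Px_{k+1}\|_\infty = \|QPx_k\|_\infty \le \|Q\|_\infty \|Px_k\|_\infty < \|Px_k\|_\infty. The conic partition mentioned in the abstract is what turns the search for such a weight matrix into a finite set of linear inequalities.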


Gonzalez-Rodriguez D.,Autonomous University of Madrid | Schenning A.P.H.J.,TU Eindhoven
Chemistry of Materials | Year: 2011

Recent developments in the area of H-bonded supramolecular assemblies of π-conjugated systems, that is, oligomers and polymers, are described, and a state-of-the-art summary of the design of discrete systems and functional materials is presented. © 2010 American Chemical Society. Source


Van Der Aalst W.,TU Eindhoven
ACM Transactions on Management Information Systems | Year: 2012

Over the last decade, process mining emerged as a new research field that focuses on the analysis of processes using event data. Classical data mining techniques such as classification, clustering, regression, association rule learning, and sequence/episode mining do not focus on business process models and are often only used to analyze a specific step in the overall process. Process mining focuses on end-to-end processes and is possible because of the growing availability of event data and new process discovery and conformance checking techniques. Process models are used for analysis (e.g., simulation and verification) and enactment by BPM/WFM systems. Previously, process models were typically made by hand without using event data. However, activities executed by people, machines, and software leave trails in so-called event logs. Process mining techniques use such logs to discover, analyze, and improve business processes. Recently, the Task Force on Process Mining released the Process Mining Manifesto. The manifesto is supported by 53 organizations, and 77 process mining experts contributed to it. The active involvement of end-users, tool vendors, consultants, analysts, and researchers illustrates the growing significance of process mining as a bridge between data mining and business process modeling. The practical relevance of process mining and the interesting scientific challenges make process mining one of the "hot" topics in Business Process Management (BPM). This article introduces process mining as a new research field and summarizes the guiding principles and challenges described in the manifesto. © 2012 ACM. Source


Rademacher C.,Max Planck Institute of Colloids and Interfaces | Ottmann C.,TU Eindhoven | Grossmann T.N.,TU Dortmund
Angewandte Chemie - International Edition | Year: 2014

Bioactive conformations of peptides can be stabilized by macrocyclization, resulting in increased target affinity and activity. Such macrocyclic peptides proved useful as modulators of biological functions, in particular as inhibitors of protein-protein interactions (PPI). However, most peptide-derived PPI inhibitors involve stabilized α-helices, leaving a large number of secondary structures unaddressed. Herein, we present a rational approach towards stabilization of an irregular peptide structure, using hydrophobic cross-links that replace residues crucially involved in target binding. The molecular basis of this interaction was elucidated by X-ray crystallography and isothermal titration calorimetry. The resulting cross-linked peptides inhibit the interaction between human adaptor protein 14-3-3 and virulence factor exoenzyme S. Taking into consideration that irregular peptide structures participate widely in PPIs, this approach provides access to novel peptide-derived inhibitors. Irregular peptide structures were stabilized using hydrophobic cross-links that replace residues crucially involved in target binding. The cross-links were designed in a rational and iterative process that involved X-ray crystallography. The resulting peptides inhibit the protein-protein interaction between virulence factor ExoS and human protein 14-3-3. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. Source


Rai V.R.,Colorado School of Mines | Vandalon V.,TU Eindhoven | Agarwal S.,Colorado School of Mines
Langmuir | Year: 2012

We have examined the role of substrate temperature on the surface reaction mechanisms during the atomic layer deposition (ALD) of Al2O3 from trimethyl aluminum (TMA) in combination with an O2 plasma and O3 over a substrate temperature range of 70-200 °C. The ligand-exchange reactions were investigated using in situ attenuated total reflection Fourier transform infrared spectroscopy. Consistent with our previous work on ALD of Al2O3 from an O2 plasma and O3 [Rai, V. R.; Vandalon, V.; Agarwal, S. Langmuir 2010, 26, 13732], both -OH groups and carbonates were the chemisorption sites for TMA over the entire temperature range explored. The concentration of surface -CH3 groups after the TMA cycle was, however, strongly dependent on the surface temperature and the type of oxidizer, which in turn influenced the corresponding growth per cycle. The combustion of surface -CH3 ligands was not complete at 70 °C during O3 exposure, indicating that an O2 plasma is a relatively stronger oxidizing agent. Further, in O3-assisted ALD, the ratio of mono- and bidentate carbonates on the surface after O3 exposure was dependent on the substrate temperature. © 2011 American Chemical Society. Source


Van Helden P.,Sasol Limited | Ciobeca I.M.,TU Eindhoven
ChemPhysChem | Year: 2011

Under your skin: Carbon plays an important role in the deactivation process of Co-based FT catalysts. Therefore the adsorption behavior of carbon at various coverages on the surfaces and into the first subsurface layers of fcc-Co(111) and fcc-Co(100) (see picture) was calculated by density functional theory (DFT). Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. Source


Vaesen K.,TU Eindhoven
Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences | Year: 2014

Chimpanzees, but very few other animals, figure prominently in (recent) attempts to reconstruct the evolution of uniquely human traits. In particular, the chimpanzee is used (i) to identify traits unique to humans, and thus in need of reconstruction; (ii) to initialize the reconstruction, by taking its state to reflect the state of the last common ancestor of humans and chimpanzees; (iii) as a baseline against which to test evolutionary hypotheses. Here I point out the flaws in this three-step procedure, and show how they can be overcome by taking advantage of much broader phylogenetic comparisons. More specifically, I explain how such comparisons yield more reliable estimations of ancestral states and how they help to resolve problems of underdetermination inherent to chimpocentric accounts. To illustrate my points, I use a recent chimpocentric argument by Kitcher. © 2013 Elsevier Ltd. Source


Aiki T.,Japan Womens University | Muntean A.,TU Eindhoven
Interfaces and Free Boundaries | Year: 2013

We study a one-dimensional free-boundary problem describing the penetration of carbonation fronts (free reaction-triggered interfaces) in concrete. Using suitable integral estimates for the free boundary and involved concentrations, we reach a twofold aim: (1) We fill a fundamental gap by justifying rigorously the experimentally guessed √t asymptotic behavior. Previously we obtained the upper bound s(t) ≤ C0√t for some constant C0; now we show the optimality of the rate by proving the right nontrivial lower estimate, i.e., there exists C0' > 0 such that s(t) ≥ C0'√t. (2) We obtain weak solutions to the free-boundary problem for the case when the measure of the initial domain vanishes. In this way, we allow for the nucleation of the moving carbonation front - a scenario that until now was open from the mathematical analysis point of view. © European Mathematical Society 2013. Source


Desmet L.,Philips | Ras A.J.M.,Philips | De Boer D.K.G.,Philips | Debije M.G.,TU Eindhoven
Optics Letters | Year: 2012

We report conversion efficiencies of experimental single and dual light guide luminescent solar concentrators. We have built several 5 cm × 5 cm and 10 cm × 10 cm luminescent solar concentrator (LSC) demonstrators consisting of c-Si photovoltaic cells attached to luminescent light guides of Lumogen F Red 305 dye and perylene perinone dye. The highest overall efficiency obtained was 4.2% on a 5 cm × 5 cm stacked dual light guide using both luminescent materials. To our knowledge, this is the highest reported experimentally determined efficiency for c-Si photovoltaic-based LSCs. Furthermore, we also produced a 5 cm × 5 cm LSC specimen based on an inorganic phosphor layer with an overall efficiency of 2.5%. © 2012 Optical Society of America. Source


Torricelli F.,TU Eindhoven
IEEE Transactions on Electron Devices | Year: 2012

An extended theory of carrier hopping transport in organic transistors is proposed. According to many experimental studies, the density of localized states in organic thin-film transistors can be described by a double-exponential function. In this work, using a percolation model of hopping, the analytical expressions of conductivity and mobility as functions of temperature and charge concentration are obtained. The conductivity depends only on the tail states, while the mobility is determined by the total charge carriers in the semiconductor. © 2012 IEEE. Source


Hopfe C.J.,University of Cardiff | Augenbroe G.L.M.,Georgia Institute of Technology | Hensen J.L.M.,TU Eindhoven
Building and Environment | Year: 2013

Building performance assessment is complex, as it has to respond to multiple criteria. Objectives originating from the demands that are put on energy consumption, acoustical performance, thermal occupant comfort, indoor air quality and many other issues must all be reconciled. An assessment requires the use of predictive models that involve numerous design and physical parameters as their inputs. Since these input parameters, as well as the models that operate on them, are not precisely known, it is imprudent to assume deterministic values for them. A more realistic approach is to introduce ranges of uncertainty in the parameters themselves, or in their derivation, from underlying approximations. In so doing, it is recognized that the outcome of a performance assessment is influenced by many sources of uncertainty. As a consequence of this approach the design process is informed by assessment outcomes that produce probability distributions of a target measure instead of its deterministic value. In practice this may lead to a "well informed" analysis but not necessarily to a straightforward, cost effective and efficient design process. This paper discusses how design decision making can be based on uncertainty assessments. A case study is described focussing on a discrete decision that involves a choice between two HVAC system designs. Analytical hierarchy process (AHP) including uncertainty information is used to arrive at a rational decision. In this approach, key performance indicators such as energy efficiency, thermal comfort and others are ranked according to their importance and preferences. This process enables a clear group-consensus-based choice of one of the two options. The research presents a viable means of collaboratively ranking complex design options based on stakeholders' preferences and considering the uncertainty involved in the designs. In so doing it provides important feedback to the design team. © 2013 Elsevier Ltd. Source
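As a hedged sketch of the AHP weighting step mentioned above (the classic eigenvector method, not the uncertainty-aware procedure of the case study), the following Python fragment derives priority weights from a pairwise comparison matrix; the three criteria and the judgement values are invented for illustration.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three criteria
# (energy efficiency, thermal comfort, investment cost) on Saaty's 1-9 scale.
A = np.array([
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
])

# AHP priority weights: principal eigenvector of A, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
principal = eigvecs[:, np.argmax(eigvals.real)].real
weights = principal / principal.sum()

# Consistency index CI = (lambda_max - n) / (n - 1); values near 0 indicate
# consistent judgements.
n = A.shape[0]
ci = (eigvals.real.max() - n) / (n - 1)

print("weights:", np.round(weights, 3), "consistency index:", round(ci, 3))
```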


Rauterberg M.,TU Eindhoven
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2010

In this paper, the idea is presented that emotions are the result of a high-dimensional optimization process happening in the unconscious, mapped onto the low-dimensional conscious. Instead of framing emotions as a separate subcomponent of our cognitive architecture, we argue for emotions as the main characteristic of the communication between the unconscious and the conscious. We see emotions as the conscious experiences of affect based on complex internal states. Based on this holistic view we recommend a different design and architecture for entertainment robots and other entertainment products with 'emotional' behavior. Intuition is the powerful information processing function of the unconscious while emotion is the result of this process communicated to the conscious. Emotions are the perception of the mapping from the high-dimensional problem-solving space of the unconscious to the low-dimensional space of the conscious. © 2010 Springer-Verlag. Source


Van Der Aalst W.M.P.,TU Eindhoven
Lecture Notes in Business Information Processing | Year: 2010

Computer simulation attempts to "mimic" real-life or hypothetical behavior on a computer to see how processes or systems can be improved and to predict their performance under different circumstances. Simulation has been successfully applied in many disciplines and is considered to be a relevant and highly applicable tool in Business Process Management (BPM). Unfortunately, in reality the use of simulation is limited. Few organizations actively use simulation. Even organizations that purchase simulation software (stand-alone or embedded in some BPM suite) typically fail to use it continuously over an extended period. This keynote paper highlights some of the problems causing the limited adoption of simulation. For example, simulation models tend to oversimplify the modeling of people working part-time on a process. Also, simulation studies typically focus on the steady-state behavior of business processes while managers are more interested in short-term results (a "fast forward button" into the future) for operational decision making. This paper will point out innovative simulation approaches leveraging recent breakthroughs in process mining. © 2010 Springer-Verlag. Source


Van Der Aalst W.M.P.,TU Eindhoven
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2010

The Software as a Service (SaaS) paradigm is particularly interesting in situations where many organizations need to support similar processes. For example, municipalities, courts, rental agencies, etc. all need to support highly similar processes. However, despite these similarities, there is also the need to allow for local variations in a controlled manner. Therefore, cloud infrastructures should provide configurable services such that products and processes can be customized while sharing commonalities. Configurable and executable process models are essential for realizing such infrastructures. This will finally transform reference models from "paper tigers" (reference modeling à la SAP, ARIS, etc.) into an "executable reality". Moreover, "configurable services in the cloud" enable cross-organizational process mining. This way, organizations can learn from each other and improve their processes. © Springer-Verlag 2010. Source


Skoric B.,TU Eindhoven
IEEE Transactions on Information Forensics and Security | Year: 2015

The topic of this paper is collusion resistant watermarking, also known as traitor tracing, in particular bias-based traitor tracing codes as introduced by Tardos. The past years have seen an ongoing effort to construct efficient high-performance decoders for these codes. In this paper we construct a score system from the Neyman-Pearson hypothesis test (which is known to be the most powerful test possible) into which we feed more evidence than in previous work, in particular the symbol tallies for all columns of the code matrix. As far as we know, until now simple decoders using Neyman-Pearson have taken into consideration only the codeword of a single user, namely the user under scrutiny. The Neyman-Pearson score needs as input the attack strategy of the colluders, which typically is not known to the tracer. We insert the interleaving attack, which plays a very special role in the theory of bias-based traitor tracing by virtue of being part of the asymptotic (i.e., large coalition size) saddle-point solution. The score system obtained in this way is universal: effective not only against the interleaving attack, but against all other attack strategies as well. Our score function for one user depends on the other users' codewords in a very simple way through the symbol tallies, which are easily computed. We present bounds on the false positive probability and show receiver operating characteristic curves obtained from simulations. We investigate the probability distribution of the score. Finally, we apply our construction to the area of (medical) group testing, which is related to traitor tracing. © 2015 IEEE. Source
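For context on the kind of score systems discussed above, here is a hedged Python sketch of the classic symmetric bias-based (Tardos-style) score under a simulated interleaving attack; it is a simplified baseline, not the Neyman-Pearson, tally-based decoder constructed in the article, and the bias distribution and parameters are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def bias_based_code(n_users, code_length):
    """Sample per-column biases p_i and binary codewords X.
    A uniform bias distribution is used here for simplicity; the original
    construction uses an arcsine-like distribution with cutoffs."""
    p = rng.uniform(0.05, 0.95, size=code_length)
    X = (rng.random((n_users, code_length)) < p).astype(int)
    return p, X

def symmetric_score(x, y, p):
    """Classic symmetric score of one user's codeword x against pirate word y."""
    s1 = np.where(x == 1, np.sqrt((1 - p) / p), -np.sqrt(p / (1 - p)))  # y_i = 1
    s0 = np.where(x == 0, np.sqrt(p / (1 - p)), -np.sqrt((1 - p) / p))  # y_i = 0
    return float(np.sum(np.where(y == 1, s1, s0)))

p, X = bias_based_code(n_users=100, code_length=2000)
colluders = [0, 1, 2]
# Interleaving attack: each pirate symbol is copied from a random colluder.
pick = rng.integers(0, len(colluders), size=len(p))
y = X[colluders][pick, np.arange(len(p))]

scores = np.array([symmetric_score(X[j], y, p) for j in range(X.shape[0])])
print("top suspects:", np.argsort(scores)[-5:][::-1])  # colluders should rank high
```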


Van den Brande T.,Catholic University of Leuven | Blocken B.,TU Eindhoven | Roels S.,Catholic University of Leuven
Building and Environment | Year: 2013

Wind-driven rain (WDR) is one of the most important moisture sources for a building facade. Therefore, a reliable prediction of WDR loads is a prerequisite to assess the durability of building facade components. However, current state-of-the-art Heat-Air-Moisture (HAM) models that are used to assess the moisture behaviour of building facades are still based on several simplifications. Important phenomena of WDR such as raindrop impact, absorption, evaporation and runoff are not yet taken fully into account. This paper presents the implementation and application of a rainwater runoff model coupled to a 2D HAM model. In the first part of the paper, the runoff model itself is briefly described and implemented. In the second part, the coupled runoff-HAM model is used to calculate absorption and runoff of WDR during a two-shower rain event on two different types of porous facades with different capillary absorption coefficient and capillary moisture content. The calculation is performed with a realistic distribution of the impinging WDR intensity, based on CFD simulations, and with meteorological data, on a 10-minute basis. The impinging rain water that cannot be absorbed by the material develops a water film on the surface and runs down along the wall. It is shown that runoff of WDR can have significant influence on the moisture behaviour of the facade, e.g. materials with low capillary absorption coefficients may absorb almost double the amount of impinging WDR when including runoff. Also, the moistening time of the facade was found to be extended. To conclude, some important notes are given for future development of runoff models. © 2013 Elsevier Ltd. Source


Derler S.,Empa - Swiss Federal Laboratories for Materials Science and Technology | Gerhardt L.-C.,TU Eindhoven
Tribology Letters | Year: 2012

In this review, we discuss the current knowledge on the tribology of human skin and present an analysis of the available experimental results for skin friction coefficients. Starting with an overview on the factors influencing the friction behaviour of skin, we discuss the up-to-date existing experimental data and compare the results for different anatomical skin areas and friction measurement techniques. For this purpose, we also estimated and analysed skin contact pressures applied during the various friction measurements. The detailed analyses show that substantial variations are a characteristic feature of friction coefficients measured for skin and that differences in skin hydration are the main cause thereof, followed by the influences of surface and material properties of the contacting materials. When the friction coefficients of skin are plotted as a function of the contact pressure, the majority of the literature data scatter over a wide range that can be explained by the adhesion friction model. The case of dry skin is reflected by relatively low and pressure-independent friction coefficients (greater than 0.2 and typically around 0.5), comparable to the dry friction of solids with rough surfaces. In contrast, the case of moist or wet skin is characterised by significantly higher (typically >1) friction coefficients that increase strongly with decreasing contact pressure and are essentially determined by the mechanical shear properties of wet skin. In several studies, effects of skin deformation mechanisms contributing to the total friction are evident from friction coefficients increasing with contact pressure. However, the corresponding friction coefficients still lie within the range delimited by the adhesion friction model. Further research effort towards the analysis of the microscopic contact area and mechanical properties of the upper skin layers is needed to improve our so far limited understanding of the complex tribological behaviour of human skin. © 2011 Springer Science+Business Media, LLC. Source
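To make the pressure dependence invoked above concrete, a generic form of the adhesion friction model is sketched below in LaTeX; the sublinear load exponent k is a placeholder, not a value fitted in the review.

```latex
% Generic adhesion friction model (illustrative sketch):
% friction force = interfacial shear strength x real contact area,
% with the real contact area growing sublinearly with the normal load N.
F_{\mathrm{fric}} = \tau\, A_{\mathrm{real}}, \qquad
A_{\mathrm{real}} \propto N^{k}, \quad 0 < k < 1,
\qquad\Rightarrow\qquad
\mu = \frac{F_{\mathrm{fric}}}{N} \;\propto\; N^{k-1} \;\propto\; p^{k-1}
```

Under this model the friction coefficient decreases as the contact pressure increases, which is consistent with the high, strongly pressure-dependent friction coefficients reported for moist and wet skin at low contact pressures.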


Rakovic S.V.,University of Maryland University College | Lazar M.,TU Eindhoven
Automatica | Year: 2012

This technical communique delivers a systematic procedure for obtaining a suitable terminal cost function for model predictive control based on Minkowski cost functions. It is shown that, for any given stabilizing linear state feedback control law and associated λ-contractive proper C-set, there always exists a non-trivial scaling of the λ-contractive proper C-set such that the associated Minkowski function satisfies the standard MPC terminal cost stability inequality. © 2012 Elsevier Ltd. All rights reserved. Source
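As a hedged illustration of the Minkowski (gauge) function underlying the terminal cost discussed above, the following Python fragment evaluates the gauge of a polytopic C-set given in the form {x : Fx ≤ 1}; the set and the test points are invented, and the article's scaling step that enforces the terminal cost inequality is not shown.

```python
import numpy as np

def minkowski_function(F, x):
    """Gauge (Minkowski) function of the polytopic C-set S = {x : F x <= 1}:
    the smallest mu >= 0 with x in mu * S. Assumes the origin lies in the
    interior of S, so every row of F defines a face of S at level 1."""
    return max(float(np.max(F @ x)), 0.0)

# Hypothetical square C-set |x1| <= 1, |x2| <= 1.
F = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
print(minkowski_function(F, np.array([0.5, -0.25])))  # 0.5
print(minkowski_function(F, np.array([2.0, 1.0])))    # 2.0
```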


Dirksz D.A.,TU Eindhoven | Scherpen J.M.A.,University of Groningen
Automatica | Year: 2012

Power-based modeling was originally developed in the early sixties to describe a large class of nonlinear electrical RLC networks, in a special gradient form. Recently this idea has been extended for modeling and control of a larger class of physical systems. In this paper, first, coordinate transformations are introduced for systems described in this framework, such that the physical structure is preserved. Such a transformation can provide new insights for both analysis and control design. Second, power-based integral and adaptive control schemes are presented. Advantages of these schemes are shown by their application on standard mechanical systems. © 2012 Elsevier Ltd. All rights reserved. Source


Grzela G.,FOM Institute for Atomic and Molecular Physics | Paniagua-Dominguez R.,CSIC - Institute for the Structure of Matter | Barten T.,FOM Institute for Atomic and Molecular Physics | Fontana Y.,FOM Institute for Atomic and Molecular Physics | And 3 more authors.
Nano Letters | Year: 2012

We experimentally demonstrate the directional emission of polarized light from single semiconductor nanowires. The directionality of this emission has been directly determined with Fourier microphotoluminescence measurements of vertically oriented InP nanowires. Nanowires behave as efficient optical nanoantennas, with emission characteristics that are not only given by the material but also by their geometry and dimensions. By means of finite element simulations, we show that the radiated power can be enhanced for frequencies and diameters at which leaky modes in the structure are present. These leaky modes can be associated with Mie resonances in the cylindrical structure. The radiated power can also be inhibited at other frequencies or when the coupling of the emission to the resonances is not favored. We anticipate the relevance of these results for the development of nanowire photon sources with optimized efficiency and/or emission controlled by the geometry. © 2012 American Chemical Society. Source


Leermakers C.A.J.,TU Eindhoven | Musculus M.P.B.,Sandia National Laboratories
Proceedings of the Combustion Institute | Year: 2015

The growth of polycyclic aromatic hydrocarbon (PAH) soot precursors is observed using a two-laser technique combining laser-induced fluorescence (LIF) of PAH with laser-induced incandescence (LII) of soot in a diesel engine under low-temperature combustion (LTC) conditions. The broad mixture distributions and slowed chemical kinetics of LTC "stretch out" soot-formation processes in both space and time, thereby facilitating their study. Imaging PAH-LIF from pulsed-laser excitation at three discrete wavelengths (266, 532, and 633 nm) reveals the temporal growth of PAH molecules, while soot-LII from a 1064-nm pulsed laser indicates inception to soot. The distribution of PAH-LIF also grows spatially within the combustion chamber before soot-LII is first detected. The PAH-LIF signals have broad spectra, much like LII, but typically with a spectral profile that is inconsistent with laser-heated soot. Quantitative natural-emission spectroscopy also shows a broad emission spectrum, presumably from PAH chemiluminescence, temporally coinciding with the PAH-LIF. © 2014 The Combustion Institute. Published by Elsevier Inc. All rights reserved. Source


Kirkels A.F.,TU Eindhoven
Renewable and Sustainable Energy Reviews | Year: 2012

This study aims to provide a long-term overview of developments in energy from biomass in Western Europe by analyzing the discourse in RD&D and related policy. To this end, the discourse in Western Europe between 1980 and 2010 has been studied through a review of the open literature and of articles from the European Biomass Conference. In addition, a quantitative content analysis of the conference titles has been performed. This shows the dynamics with respect to the feedstocks, conversion technologies and applications considered, as well as the supporting arguments - dynamics that will not show up in a technology- or country-oriented study. We distinguish four different discourses based on differences in scale and knowledge intensity, a differentiation that also relates to feedstock and conversion technology. This way, the complex developments can be structured and understood as shifts between and within discourses. This is especially relevant as each discourse involves a different policy arena and different actors. With a still growing interest in energy from biomass, the multiple discourses seem to keep co-existing. Emphasis continues to be given to large-scale and knowledge-intensive processes, which will further increase the importance of the supra-national level for future developments. © 2012 Elsevier Ltd. All rights reserved. Source


Attia S.,Catholic University of Louvain | Gratia E.,Catholic University of Louvain | De Herde A.,Catholic University of Louvain | Hensen J.L.M.,TU Eindhoven
Energy and Buildings | Year: 2012

There is a need for decision support tools that integrate energy simulation into the early design of zero energy buildings in architectural practice. Despite the proliferation of simulation programs in the last decade, there are no ready-to-use applications that cater specifically for hot climates and their comfort conditions. Furthermore, the majority of existing tools focus on evaluating the design alternatives after the decision making, and largely overlook the issue of informing the design before the decision making. This paper presents an energy-oriented software tool that both accommodates the Egyptian context and provides informative support that aims to facilitate decision making for zero energy buildings. A residential benchmark was established coupling sensitivity analysis modelling and energy simulation software (EnergyPlus) as a means of developing a decision support tool to allow designers to rapidly and flexibly assess the thermal comfort and energy performance of early design alternatives. Validation of the results generated by the tool and its ability to support decision making are presented in the context of a case study and usability testing. © 2012 Elsevier B.V. All rights reserved. Source


Sijs J.,TNO | Lazar M.,TU Eindhoven
Automatica | Year: 2012

This article focuses on the problem of fusing two prior Gaussian estimates into a single estimate when the correlation between them is unknown. Existing solutions either lead to a conservative fusion result, as the chosen parametrization focuses on the fusion formulas instead of correlations, or they are computationally expensive. The contribution of this article is a novel parametrization, in which the correlation is explicitly characterized prior to deriving the fusion formulas. Then, maximizing the correlation ensures that the fusion result is based on independent parts of the prior estimates and, simultaneously, addresses the fact that the correlation is unknown. In addition, a guaranteed improvement of the accuracy after fusion is attained. An illustrative example demonstrates the benefits of the proposed method compared to an existing fusion method. © 2012 Elsevier Ltd. All rights reserved. Source
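For readers unfamiliar with the problem setting, the following Python sketch implements classic covariance intersection, the conservative baseline that fusion under unknown correlation is usually compared against; it is explicitly not the parametrization proposed in the article, and the numerical estimates are invented.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def covariance_intersection(xa, Pa, xb, Pb):
    """Fuse two Gaussian estimates (mean, covariance) with unknown correlation
    using classic covariance intersection: a convex combination of the
    information matrices, with the weight chosen to minimize the determinant
    of the fused covariance."""
    Pa_inv, Pb_inv = np.linalg.inv(Pa), np.linalg.inv(Pb)

    def fused_det(w):
        return np.linalg.det(np.linalg.inv(w * Pa_inv + (1 - w) * Pb_inv))

    w = minimize_scalar(fused_det, bounds=(0.0, 1.0), method="bounded").x
    P = np.linalg.inv(w * Pa_inv + (1 - w) * Pb_inv)
    x = P @ (w * Pa_inv @ xa + (1 - w) * Pb_inv @ xb)
    return x, P

# Invented example: two 2D estimates with complementary accuracy.
xa, Pa = np.array([1.0, 0.0]), np.diag([1.0, 4.0])
xb, Pb = np.array([1.5, 0.2]), np.diag([3.0, 1.0])
x_fused, P_fused = covariance_intersection(xa, Pa, xb, Pb)
print("fused mean:", x_fused, "fused variances:", np.diag(P_fused))
```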


Bourne D.P.,University of Glasgow | Peletier M.A.,TU Eindhoven | Theil F.,University of Warwick
Communications in Mathematical Physics | Year: 2014

We prove strong crystallization results in two dimensions for an energy that arises in the theory of block copolymers. The energy is defined on sets of points and their weights, or equivalently on the set of atomic measures. It consists of two terms; the first term is the sum of the square roots of the weights, and the second is the quadratic optimal transport cost between the atomic measure and the Lebesgue measure. We prove that this system admits crystallization in several different ways: (1) the energy is bounded from below by the energy of a triangular lattice (called T); (2) if the energy equals that of T, then the measure is a rotated and translated copy of T; (3) if the energy is close to that of T, then locally the measure is close to a rotated and translated copy of T. These three results require the domain to be a polygon with at most six sides. A fourth result states that the energy of T can be achieved in the limit of large domains, for domains with arbitrary boundaries. The proofs make use of three ingredients. First, the optimal transport cost associates to each point a polygonal cell; the energy can be bounded from below by a sum over all cells of a function that depends only on the cell. Second, this function has a convex lower bound that is sharp at T. Third, Euler's polytope formula limits the average number of sides of the polygonal cells to six, where six is the number corresponding to the triangular lattice. © 2014 Springer-Verlag Berlin Heidelberg. Source


Ma H.,Copenhagen University | Tian P.,Copenhagen University | Pello J.,TU Eindhoven | Bendix P.M.,Copenhagen University | Oddershede L.B.,Copenhagen University
Nano Letters | Year: 2014

Heating of irradiated, e-beam-generated metallic nanostructures was quantified through direct measurements paralleled by novel model-based numerical calculations. By comparing discs, triangles, and stars we showed how particle shape and composition determine the heating. Importantly, our results revealed that substantial heat is generated in the titanium adhesive layer between gold and glass. Even when the Ti layer is as thin as 2 nm, it absorbs as much as a 30 nm Au layer and hence should not be ignored. © 2014 American Chemical Society. Source


Kraemer F.,TU Eindhoven
Journal of Medical Ethics | Year: 2013

While deep brain stimulation (DBS) for patients with Parkinson's disease has typically raised ethical questions about autonomy, accountability and personal identity, recent research indicates that we need to begin taking into account issues surrounding the patients' feelings of authenticity and alienation as well. In order to bring out the relevance of this dimension to ethical considerations of DBS, I analyse a recent case study of a Dutch patient who, as a result of DBS, faced a dilemma between autonomy and authenticity. This case study is meant to point out the normatively meaningful tension patients under DBS experience between authenticity and autonomy. Source


Bovendeerd P.H.M.,TU Eindhoven
Journal of Biomechanics | Year: 2012

The heart has the ability to respond to long-term changes in its environment through changes in mass (growth), shape (morphogenesis) and tissue properties (remodeling). For improved quantitative understanding of cardiac growth and remodeling (G&R) experimental studies need to be complemented by mathematical models. This paper reviews models for cardiac growth and remodeling of myofiber orientation, as induced by mechanical stimuli. A distinction is made between optimization models, that focus on the end stage of G&R, and adaptation models, that aim to more closely describe the mechanistic relation between stimulus and effect. While many models demonstrate qualitatively promising results, a lot of questions remain, e.g. with respect to the choice of the stimulus for G&R or the long-term stability of the outcome of the model. A continued effort combining information on mechanotransduction at the cellular level, experimental observations on G&R at organ level, and testing of hypotheses on stimulus-effect relations in mathematical models is needed to answer these questions on cardiac G&R. Ultimately, models of cardiac G&R seem indispensable for patient-specific modeling, both to reconstruct the actual state of the heart and to assess the long-term effect of potential interventions. © 2011 Elsevier Ltd. Source


Litvak N.,University of Twente | Van Der Hofstad R.,TU Eindhoven
Physical Review E - Statistical, Nonlinear, and Soft Matter Physics | Year: 2013

Mixing patterns in large self-organizing networks, such as the Internet, the World Wide Web, and social and biological networks, are often characterized by degree-degree dependencies between neighboring nodes. In this paper, we propose a new way of measuring degree-degree dependencies. One of the problems with the commonly used assortativity coefficient is that in disassortative networks its magnitude decreases with the network size. We mathematically explain this phenomenon and validate the results on synthetic graphs and real-world network data. As an alternative, we suggest using rank correlation measures such as Spearman's ρ. Our experiments convincingly show that Spearman's ρ produces consistent values in graphs of different sizes but similar structure, and it is able to reveal strong (positive or negative) dependencies in large graphs. In particular, we discover much stronger negative degree-degree dependencies in Web graphs than was previously thought. Rank correlations allow us to compare the assortativity of networks of different sizes, which is impossible with the assortativity coefficient due to its genuine dependence on the network size. We conclude that rank correlations provide a suitable and informative method for uncovering network mixing patterns. © 2013 American Physical Society. Source
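As a hedged numerical companion to the comparison made above, the following Python sketch computes both the assortativity coefficient and Spearman's ρ of end-point degrees on a synthetic scale-free graph; the graph model and its parameters are illustrative choices, not the networks analyzed in the paper.

```python
import networkx as nx
from scipy.stats import pearsonr, spearmanr

# Synthetic heavy-tailed (scale-free-like) graph as a stand-in for a large
# self-organizing network.
G = nx.barabasi_albert_graph(n=5000, m=2, seed=1)
deg = dict(G.degree())

# Degrees at the two endpoints of every edge, listed in both orientations so
# the correlation measures are symmetric in the edge direction.
x = [deg[u] for u, v in G.edges()] + [deg[v] for u, v in G.edges()]
y = [deg[v] for u, v in G.edges()] + [deg[u] for u, v in G.edges()]

print("assortativity coefficient:  ", nx.degree_assortativity_coefficient(G))
print("Pearson on endpoint degrees:", pearsonr(x, y)[0])  # should closely agree
print("Spearman's rho:             ", spearmanr(x, y).correlation)
```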


Blocken B.,TU Eindhoven | Blocken B.,Catholic University of Leuven
Building and Environment | Year: 2015

Urban physics is the science and engineering of physical processes in urban areas. It basically refers to the transfer of heat and mass in the outdoor and indoor urban environment, and its interaction with humans, fauna, flora and materials. Urban physics is a rapidly growing focus area as it is key to understanding and addressing the grand societal challenges of climate change, energy, health, security, transport and aging. The main assessment tools in urban physics are field measurements, full-scale and reduced-scale laboratory measurements and numerical simulation methods including Computational Fluid Dynamics (CFD). In the past 50 years, CFD has undergone a successful transition from an emerging field into an increasingly established field in urban physics research, practice and design. This review and position paper consists of two parts. In the first part, the importance of urban physics related to the grand societal challenges is described, after which the spatial and temporal scales in urban physics and the associated model categories are outlined. In the second part, based on a brief theoretical background, some views on CFD are provided. Possibilities and limitations are discussed, and in particular, ten tips and tricks towards accurate and reliable CFD simulations are presented. These tips and tricks are certainly not intended to be complete, rather they are intended to complement existing CFD best practice guidelines on ten particular aspects. Finally, an outlook to the future of CFD for urban physics is given. © 2015 Elsevier Ltd. Source


Koschmider A.,Karlsruhe Institute of Technology | Reijers H.A.,TU Eindhoven
Enterprise Information Systems | Year: 2015

The use of business process models has become prevalent in a wide area of enterprise applications. But while their popularity is expanding, concerns are growing with respect to their proper creation and maintenance. An obvious way to boost the efficiency of creating high-quality business process models would be to reuse relevant parts of existing models. At this point, however, limited support exists to guide process modellers towards the usage of appropriate model content. In this paper, a set of content-oriented patterns is presented, which is extracted from a large set of process models from the order management and manufacturing production domains. The patterns are derived using a newly proposed set of algorithms, which are being discussed in this paper. The authors demonstrate how such Domain Process Patterns, in combination with information on their historic usage, can support process modellers in generating new models. To support the wider dissemination and development of Domain Process Patterns within and beyond the studied domains, an accompanying website has been set up. © 2013, © 2013 Taylor & Francis. Source


Montali M.,Free University of Bozen Bolzano | Maggi F.M.,University of Tartu | Chesani F.,University of Bologna | Mello P.,University of Bologna | Van Der Aalst W.M.P.,TU Eindhoven
ACM Transactions on Intelligent Systems and Technology | Year: 2013

Today, large business processes are composed of smaller, autonomous, interconnected subsystems, achieving modularity and robustness. Quite often, these large processes comprise software components as well as human actors, they face highly dynamic environments and their subsystems are updated and evolve independently of each other. Due to their dynamic nature and complexity, it might be difficult, if not impossible, to ensure at design-time that such systems will always exhibit the desired/expected behaviors. This, in turn, triggers the need for runtime verification and monitoring facilities. These are needed to check whether the actual behavior complies with expected business constraints, internal/external regulations and desired best practices. In this work, we present Mobucon EC, a novel monitoring framework that tracks streams of events and continuously determines the state of business constraints. In Mobucon EC, business constraints are defined using the declarative language Declare. For the purpose of this work, Declare has been suitably extended to support quantitative time constraints and non-atomic, durative activities. The logic-based language Event Calculus (EC) has been adopted to provide a formal specification and semantics to Declare constraints, while a light-weight, logic programming-based EC tool supports dynamically reasoning about partial, evolving execution traces. To demonstrate the applicability of our approach, we describe a case study about maritime safety and security and provide a synthetic benchmark to evaluate its scalability. © 2013 ACM 2157-6904/2013/12-ART5 $ 15.00. Source
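To illustrate the kind of constraint state that runtime monitors such as the one described above track, here is a hedged Python toy that evaluates a single Declare 'response(a, b)' constraint over a partial trace; it omits the Event Calculus formalization, quantitative time and durative activities, and the event names are invented.

```python
def response_state(trace, a, b):
    """Runtime state of the Declare constraint response(a, b): every
    occurrence of a must eventually be followed by b. For a partial trace,
    the constraint is 'pending' (violated if execution stops now) whenever
    some a has not yet been answered by a later b."""
    pending = False
    for event in trace:
        if event == a:
            pending = True
        elif event == b:
            pending = False
    return "pending" if pending else "satisfied"

# Invented event stream for one case.
print(response_state(["order", "pay"], "order", "pay"))           # satisfied
print(response_state(["order", "pay", "order"], "order", "pay"))  # pending
```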


Gierds C.,Humboldt University of Berlin | Mooij A.J.,TU Eindhoven | Wolf K.,University of Rostock
IEEE Transactions on Services Computing | Year: 2012

Service-oriented computing aims to create complex systems by composing less-complex systems, called services. Since services can be developed independently, the integration of services requires an adaptation mechanism for bridging any incompatibilities. Behavioral adapters aim to adjust the communication between some services to be composed in order to establish proper interaction between them. We present a novel approach for specifying such adapters, based on domain-specific transformation rules that reflect the elementary operations that adapters can perform. We also present a novel way to synthesize complex adapters that adhere to these rules, viz., by consistently separating data and control, and by using existing controller-synthesis algorithms. Our approach has been implemented, and we discuss some example applications, including real business processes in WS-BPEL. © 2008 IEEE. Source


Jalba A.C.,TU Eindhoven | Kustra J.,HIGH-TECH | Telea A.C.,University of Groningen
IEEE Transactions on Pattern Analysis and Machine Intelligence | Year: 2013

We present a GPU-based framework for extracting surface and curve skeletons of 3D shapes represented as large polygonal meshes. We use an efficient parallel search strategy to compute point-cloud skeletons and their distance and feature transforms (FTs) with user-defined precision. We regularize skeletons by a new GPU-based geodesic tracing technique which is orders of magnitude faster and more accurate than comparable techniques. We reconstruct the input surface from skeleton clouds using a fast and accurate image-based method. We also show how to reconstruct the skeletal manifold structure as a polygon mesh and the curve skeleton as a polyline. Compared to recent skeletonization methods, our approach offers two orders of magnitude speed-up, high-precision, and low-memory footprints. We demonstrate our framework on several complex 3D models. © 2013 IEEE. Source
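As a rough, CPU-bound 2D analogue of the distance-transform and skeletonization pipeline described above (the article's GPU framework for 3D meshes is far more involved), the following Python sketch applies standard library routines to an invented binary shape.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import medial_axis

# Invented binary shape: a filled square with a notch cut out.
shape = np.zeros((64, 64), dtype=bool)
shape[8:56, 8:56] = True
shape[28:36, 40:56] = False

# Distance transform: distance of every foreground pixel to the background.
dist = distance_transform_edt(shape)

# Medial-axis skeleton plus the distance value on the skeleton (a 2D stand-in
# for the curve/surface skeletons and feature transforms of the article).
skeleton, skel_dist = medial_axis(shape, return_distance=True)

print("max inscribed radius:", dist.max())
print("number of skeleton pixels:", int(skeleton.sum()))
```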


Haans A.,TU Eindhoven
Journal of Environmental Psychology | Year: 2014

The natural preference refers to the human tendency to prefer natural substances over their synthetic counterparts, for example in the domains of food and medication. In four studies, we confirm that the natural preference is also operative in the domain of light. Study 1 confirmed that natural has a consistent meaning when people apply it to light, and that the source (e.g., daylight vs. electrical) and the transformation of the light (e.g., daylight through a blinded window) affects its naturalness. Studies 2 and 3 employed a classic forced-choice decision making paradigm. Study 2 did not confirm the natural preference hypothesis, probably because the artificial option had clear functional benefits over the natural one. Controlling for this confound, our hypothesis was confirmed in Study 3. In Study 4, three light sources were appraised in a randomized experiment. We confirmed that beliefs regarding the effects of light on health and concentration mediate the naturalness-attitude relationship; thus confirming instrumental motives behind the natural preference. Studies 2 and 4, however, suggest that the lower functionality of daylight-based systems may outweigh their perceived instrumental benefits. The weak and statistically non-significant correlations between connectedness to nature and light appraisals in Study 4 speak against an ideational basis for the natural preference as seen in earlier studies. Taken together, our studies provide evidence for a natural preference to be operative in the domain of light. © 2014 Elsevier Ltd. Source


Van Santen R.A.,TU Eindhoven
Angewandte Chemie - International Edition | Year: 2014

The perfect catalyst: The advances towards the ability to design a catalyst from first principles are explored. Aspects of computational chemistry as well as the kinetics and physical state of the reactive catalyst are discussed. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. Source


Luque A.,Institute Astrofisica Of Andalucia Iaa | Ebert U.,Centrum Wiskunde and Informatica CWI | Ebert U.,TU Eindhoven
Physical Review E - Statistical, Nonlinear, and Soft Matter Physics | Year: 2011

Branching is an essential element of streamer discharge dynamics. We review the current state of theoretical understanding and recall that branching requires a finite perturbation. We argue that, in current laboratory experiments in ambient or artificial air, these perturbations can only be inherited from the initial state, or they can be due to intrinsic electron-density fluctuations owing to the discreteness of electrons. We incorporate these electron-density fluctuations into fully three-dimensional simulations of a positive streamer in air at standard temperature and pressure. We derive a quantitative estimate for the ratio of branching length to streamer diameter that agrees within a factor of 2 with experimental measurements. As branching without this noise would occur considerably later, if at all, we conclude that the intrinsic stochastic particle noise triggers branching of positive streamers in a