
Delft University of Technology, also known as TU Delft, is the largest and oldest Dutch public technical university, located in Delft, Netherlands. With eight faculties and numerous research institutes, it hosts over 19,000 students, more than 3,300 scientists, and more than 2,200 support and management staff. The university was established on January 8, 1842 by King William II of the Netherlands as a Royal Academy, with the main purpose of training civil servants for the Dutch East Indies. The school rapidly expanded its research and education curriculum, becoming first a Polytechnic School in 1864 and an Institute of Technology in 1905, gaining full university rights, and finally changing its name to Delft University of Technology in 1986. Dutch Nobel laureates Jacobus Henricus van 't Hoff, Heike Kamerlingh Onnes, and Simon van der Meer have been associated with TU Delft. TU Delft is a member of several university federations, including the IDEA League, CESAER, UNITECH, and 3TU. (Wikipedia)

Peters J.A., Technical University of Delft
Coordination Chemistry Reviews | Year: 2014

The general principles of interaction between bor(on)ic acids and sugars in aqueous media are discussed, with a focus on the structural aspects that play a role in the regioselectivity of the interactions and the stability of the resulting adducts. Preorganization and pKa values appear to play important roles. Glucose and sialic acid are shown to be promising targets for artificial boron-based sensors. These sugars are important markers for diabetes and cancer, respectively. © 2014 Elsevier B.V.

Peter J.E.V., ONERA | Dwight R.P., Technical University of Delft
Computers and Fluids | Year: 2010

The calculation of the derivatives of output quantities of aerodynamic flow codes, commonly known as numerical sensitivity analysis, has recently become of increased importance for a variety of applications in flow analysis, but the original motivation came from the field of aerodynamic shape optimization. There, the large number of design variables needed to parameterize surfaces in 3D necessitates the use of gradient-based optimization algorithms, and hence efficient and accurate evaluation of gradients. In this context, over the last 20 years, a variety of approaches have been developed to supply these gradients, raising particular challenges that have required novel algorithms. In this paper, we examine the historical development of these approaches, and describe in some detail the theoretical background of each major method and the associated numerical techniques required to make them practical in an engineering setting. We give examples from our own experience and describe what we consider to be the state-of-the-art in these methods, including their application to optimization of complex 3D aircraft configurations. © 2009 Elsevier Ltd. All rights reserved.
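
As a hedged sketch of the baseline method that motivates the adjoint approaches surveyed in the paper: finite differencing costs two flow evaluations per design variable, which is what makes it impractical for large 3D parameterizations. The toy "drag" functional and variable names below are assumptions for illustration, not taken from the paper.

```python
def fd_gradient(f, x, h=1e-6):
    """Central finite-difference gradient of a scalar output f.

    This is the brute-force sensitivity method: two evaluations of f per
    design variable, so the cost scales linearly with the design space.
    """
    g = []
    for i in range(len(x)):
        xp, xm = x[:], x[:]
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2.0 * h))
    return g

# hypothetical 'drag' functional of two design variables
drag = lambda x: x[0] ** 2 + 3.0 * x[0] * x[1]
g = fd_gradient(drag, [1.0, 2.0])
print([round(v, 4) for v in g])  # analytic gradient is [2*x0 + 3*x1, 3*x0] = [8, 3]
```

An adjoint method would instead deliver the full gradient at roughly the cost of one extra flow solution, independent of the number of design variables, which is the central point of the survey.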

Straathof A.J.J., Technical University of Delft
Chemical Reviews | Year: 2014

Transformation of biomass into commodity chemicals using enzymes or cells will be successful only if the production process is more attractive than the alternative options for producing these chemicals. Sufficient second-generation biomass should be available at a reasonable price; this price will be dictated not only by biomass production costs but also by competing uses of the biomass, such as combustion for energy generation. All biomass components should be convertible into product, or otherwise into valuable coproduct. Excessively high bioreactor investments, due to high O2 requirements or low productivities, should be avoided. Biochemical processes compete with chemical processes that aim at similar routes from biomass to product; the biochemical process should be more selective or should avoid the production and isolation of intermediate chemicals. Scientific discoveries and method development have been very important in increasing the rate of development of biochemical routes.

Abate A., Technical University of Delft | D'Innocenzo A., University of L'Aquila | Di Benedetto M.D., University of L'Aquila
IEEE Transactions on Automatic Control | Year: 2011

We present a constructive procedure for obtaining a finite approximate abstraction of a discrete-time stochastic hybrid system. The procedure consists of a partition of the state space of the system and depends on a controllable parameter. Given proper continuity assumptions on the model, the approximation errors introduced by the abstraction procedure are explicitly computed, and it is shown that they can be tuned through the parameter of the partition. The abstraction is interpreted as a Markov set-chain. We show that the enforcement of certain ergodic properties on the stochastic hybrid model implies the existence of a finite abstraction with finite error in time over the concrete model, and allows the introduction of a finite-time algorithm that computes the abstraction. © 2011 IEEE.
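
As an illustrative sketch only, not the authors' construction: a uniform grid partition of the state space of a one-dimensional stochastic system yields a finite Markov chain whose transition probabilities integrate the (here assumed Gaussian) transition kernel over each partition cell.

```python
import math

def gauss_cdf(x, mu, sigma):
    """Gaussian CDF at x with mean mu and standard deviation sigma."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def abstract_chain(a=0.8, sigma=0.1, n_cells=10, lo=0.0, hi=1.0):
    """Finite Markov-chain abstraction of x+ = a*x + Gaussian noise on [lo, hi].

    Each cell is represented by its centre; the transition probability from
    cell i to cell j integrates the kernel over cell j, and probability mass
    leaving the domain is lumped into the boundary cells.
    """
    width = (hi - lo) / n_cells
    centres = [lo + (i + 0.5) * width for i in range(n_cells)]
    P = []
    for c in centres:
        mu = a * c
        row = [gauss_cdf(lo + (j + 1) * width, mu, sigma) -
               gauss_cdf(lo + j * width, mu, sigma) for j in range(n_cells)]
        row[0] += gauss_cdf(lo, mu, sigma)          # mass below the domain
        row[-1] += 1.0 - gauss_cdf(hi, mu, sigma)   # mass above the domain
        P.append(row)
    return P

P = abstract_chain()
# every row of the abstraction is a probability distribution
print(all(abs(sum(row) - 1.0) < 1e-9 for row in P))
```

Refining the partition (larger `n_cells`) is the controllable parameter through which the abstraction error is tuned in the paper's procedure.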

Swuste P., Technical University of Delft
Safety Science | Year: 2013

Since cranes and tower cranes are complex installations, they constitute critical aspects of safety at construction sites. The risks posed by cranes are specific and should be treated as such. Prior to assessing the impact of management and organizational factors, accident analysis should first start with an analysis of the actual accident process. The Dutch Safety Board conducted such an accident analysis involving a non-mobile, peakless trolley tower crane. This tower crane collapsed at a Rotterdam building site on July 10, 2008. The results show that the flexibility of the configuration of the mast and the horizontal arm of the crane (the jib) was greater than that calculated by the design engineer. While hoisting a heavy load, the crane collapsed. The defects in the design of the crane were not identified, so the accident was classified as a 'normal accident', one that was essentially integral to the design and could thus also occur in other tower cranes of the same make. Such tower crane design shortcomings emerge as process disturbances once the crane is operational. Despite its shortcomings, the collapsed crane did have a CE mark. Other officially required safety audits and crane inspections did not address possible defects in the design, production, or operation of the crane. Once the crane is on the market, there appears to be no further effective safety net for the detection of structural weaknesses. The article also discusses the role of parties involved in the construction and inspection of tower cranes. © 2013 Elsevier Ltd.

Alderliesten R.C., Technical University of Delft
Materials and Design | Year: 2015

This paper provides an overview of damage phenomena and mechanisms in hybrid aerospace structural materials subject to damage tolerance certification requirements. Because the hybrid technology originates from the objective to design for damage tolerance, the paper starts with illustrating recent developments in this material technology. Subsequently, the damage mechanisms that occur under quasi-static, fatigue and impact loading are discussed. Explanations of these phenomena are given with theories following fracture mechanics and energy balance approaches. To illustrate the top-down design approach these fundamental theories enable, potential future solutions are presented to design against these damages in order to improve damage tolerance. © 2014 Elsevier Ltd.

Schrama E.J.O., Technical University of Delft | Wouters B., Koninklijk Nederlands Meteorologisch Instituut
Journal of Geophysical Research: Solid Earth | Year: 2011

In this paper we discuss a new method for determining mass time series for 16 hydrological basins representing the Greenland system (GS), relying on Gravity Recovery and Climate Experiment (GRACE) mission data. In the same analysis we also considered observed mass changes over Ellesmere Island, Baffin Island, Iceland, and Svalbard (EBIS). The summed contribution of the complete system yields a mass loss rate and acceleration of -252 ± 28 Gt/yr and -22 ± 4 Gt/yr2 between March 2003 and February 2010, where the error margins follow from two glacial isostatic adjustment (GIA) models and three processing centers providing GRACE monthly potential coefficient sets. We describe the relation between mass losses in the GS and the EBIS region and find that the uncertainties in all areas are correlated. The summed contribution of Ellesmere Island, Baffin Island, Iceland, and Svalbard yields a mass loss rate of -51 ± 17 Gt/yr and an acceleration of -13 ± 3 Gt/yr2 between March 2003 and February 2010. The new regional basin reconstruction method shows that the mass loss within the southeastern basins in the GS has slowed down since 2007, while mass loss in western basins increased, showing a progression to the north of Greenland. Copyright © 2011 by the American Geophysical Union.
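
Quoted rates and accelerations of this kind are typically obtained by least-squares fitting a trend-plus-acceleration model, m(t) = m0 + r*t + 0.5*a*t^2, to the monthly mass series. A minimal sketch of such a fit, on synthetic data with the same orders of magnitude (not the GRACE data themselves, and not necessarily the authors' estimator), is:

```python
def fit_rate_acceleration(t, m):
    """Least-squares fit of m(t) = m0 + r*t + 0.5*a*t**2; returns (m0, r, a).

    Builds the 3x3 normal equations for the columns [1, t, 0.5*t^2] and
    solves them by Gaussian elimination with partial pivoting.
    """
    cols = [[1.0] * len(t), list(t), [0.5 * ti * ti for ti in t]]
    ata = [[sum(ci * cj for ci, cj in zip(cols[i], cols[j])) for j in range(3)]
           for i in range(3)]
    atb = [sum(ci * mi for ci, mi in zip(cols[i], m)) for i in range(3)]
    for k in range(3):
        p = max(range(k, 3), key=lambda r: abs(ata[r][k]))
        ata[k], ata[p] = ata[p], ata[k]
        atb[k], atb[p] = atb[p], atb[k]
        for r in range(k + 1, 3):
            f = ata[r][k] / ata[k][k]
            for c in range(k, 3):
                ata[r][c] -= f * ata[k][c]
            atb[r] -= f * atb[k]
    x = [0.0, 0.0, 0.0]
    for k in (2, 1, 0):
        x[k] = (atb[k] - sum(ata[k][c] * x[c] for c in range(k + 1, 3))) / ata[k][k]
    return tuple(x)

# synthetic monthly series (t in years since March 2003) generated with a
# rate of -252 Gt/yr and an acceleration of -22 Gt/yr^2, mimicking the
# magnitudes in the abstract
t = [i / 12.0 for i in range(84)]
m = [-252.0 * ti - 0.5 * 22.0 * ti * ti for ti in t]
m0, rate, accel = fit_rate_acceleration(t, m)
print(round(rate, 4), round(accel, 4))
```

On this noiseless synthetic series the fit recovers the generating rate and acceleration; on real data the quoted error margins come from comparing GIA models and processing centers, as the abstract describes.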

Weijermars R., Technical University of Delft
Journal of Natural Gas Science and Engineering | Year: 2010

The aim of any value chain & network analysis is to understand the systemic factors and conditions through which a value framework and its firms can achieve higher levels of performance. The upstream oil & gas business is increasingly stimulated for growth by federal legislation (e.g. tax credits for unconventional gas plays), while corporate earnings in the US midstream and downstream energy segments remain strictly regulated and constrained by FERC and state regulators. This study concisely describes the physical and the financial value chains of the US natural gas business in a systemic fashion. The value chains of the natural gas industry are governed and interconnected by a regulatory decision-making framework. Legislation and regulation by the US Congress for the upstream energy value chain traditionally aim to facilitate the development of domestic natural gas fields. Likewise, FERC regulation maximizes access to the midstream gas transmission segment and provisions for fair tariffs for all shippers. State regulators protect the end-consumers in the downstream value chain by providing guidelines and rulings in rate cases. Corporate energy development decisions are critically impacted by such energy policies and regulations. Long-term, mid-term and short-term measures are distinguished based upon the duration of their impact on the performance of the US natural gas market. The present analysis of the physical and financial value chains and the regulatory framework that governs the US natural gas market provides new insights into appropriate policies and regulatory strategies that could improve both the liquidity and security of supply in the European gas market. Strategic and tactical instruments for maximizing returns on investment for regulated energy utilities are also formulated. © 2010 Elsevier B.V.

Burger M., Erasmus University Rotterdam | Meijers E., Technical University of Delft
Urban Studies | Year: 2012

Empirical research establishing the costs and benefits that can be associated with polycentric urban systems is often called for but rather thin on the ground. In part, this is due to the persistence of what appear to be two analytically distinct approaches in understanding and measuring polycentricity: a morphological approach centring on nodal features and a functional approach focused on the relations between centres. Informed by the oft-overlooked but rich heritage of urban systems research, this paper presents a general theoretical framework that links both approaches and discusses the way both can be measured and compared in a coherent manner. Using the Netherlands as a test case, it is demonstrated that most regions tend to be more morphologically polycentric than functionally polycentric. The difference is largely explained by the size, external connectivity and degree of self-sufficiency of a region's principal centre. © 2011 Urban Studies Journal Limited.

Salomons E.M., TNO | Berghauser Pont M., Technical University of Delft
Landscape and Urban Planning | Year: 2012

Traffic noise in cities has serious effects on the inhabitants. Well-known effects are annoyance and sleep disturbance, but long-term health effects such as cardiovascular disease have also been related to traffic noise. The spatial distribution of traffic noise in a city is related to the distributions of traffic volume and urban density, and also to urban form. This relation is investigated by means of numerical calculations for two cities, Amsterdam and Rotterdam, and for various idealized urban fabrics. The concept of urban traffic elasticity is introduced to relate local population density to local vehicle kilometers driven on the urban road network. The concept of Spacematrix is used to represent urban density and urban form. For the two cities it is found that the average sound level in an urban area decreases with increasing population and building density. The results for idealized urban fabrics show that the shape of building blocks has a large effect on the sound level at the least-exposed façade (quiet façade) of a building, and a smaller effect on the sound level at the most-exposed façade. Sound levels at quiet façades are in general lower for closed building blocks than for open blocks such as strips. © 2012 Elsevier B.V.

Schneider G.F., Technical University of Delft
Nature Communications | Year: 2013

Graphene nanopores are potential successors to biological and silicon-based nanopores. For sensing applications, it is however crucial to understand and block the strong nonspecific hydrophobic interactions between DNA and graphene. Here we demonstrate a novel scheme to prevent DNA-graphene interactions, based on a tailored self-assembled monolayer. For bare graphene, we encounter a paradox: whereas contaminated graphene nanopores facilitated DNA translocation well, clean crystalline graphene pores very quickly clog. We attribute this to strong interactions between DNA nucleotides and graphene, yielding sticking and irreversible pore closure. We develop a general strategy to noncovalently tailor the hydrophobic surface of graphene by designing a dedicated self-assembled monolayer of pyrene ethylene glycol, which renders the surface hydrophilic. We demonstrate that this prevents DNA from adsorbing on graphene and show that single-stranded DNA can now be detected in graphene nanopores with excellent nanopore durability and reproducibility.

Janic M., Technical University of Delft
International Journal of Hydrogen Energy | Year: 2010

This paper investigates the potential of LH2 (Liquid Hydrogen) as an alternative fuel for achieving more sustainable long-term development of large airports in terms of mitigating their air pollution. For this purpose, a methodology for quantifying the potential of LH2 is developed. It consists of two models: the first model enables the estimation of the fuel demand and the specification of the fuel production and storage capacity needed to satisfy that demand at a given airport under given conditions; the other model enables assessment of the effects of introducing LH2 on mitigating air pollution at that airport. The main inputs for the methodology are scenarios of the long-term growth of air traffic demand at the airport in terms of the annual number of ATM (Air Transport Movements), i.e. flights and related LTO (Landing and Take-Off) cycles and their time characteristics; the aircraft fleet mix, characterized by the aircraft size and proportions of conventional and cryogenic aircraft; the fuel consumption per particular categories of aircraft/flights; and specifically, the fuel consumption and related emission rates of particular air pollutants by these aircraft during LTO cycles. The output from the methodology includes an estimation of the long-term development of demand at a given airport in terms of the volume and structure of ATM, which depend on: the scenarios of traffic growth and introduction of cryogenic aircraft, the required production and storage capacity of particular fuel types, the fuel consumed, and the quantities of related air pollutants emitted during LTO cycles carried out during the period concerned. Airport planners and policy makers can use the methodology for estimating, planning, designing, and managing the fuel production and storage capacity, as well as for setting a cap on air pollution depending on the circumstances. © 2009 Professor T. Nejat Veziroglu.

Mostert E., Technical University of Delft
Ecology and Society | Year: 2012

One of the central tenets of adaptive management is polycentric governance. Yet, despite the popularity of the concept, few detailed case studies of polycentric governance systems exist. In this paper, we aim to partly fill this gap. We describe water management between the years 1000 and 1953 on the Dutch island of IJsselmonde in the Netherlands near Rotterdam, and then use this case to reflect on the theory of polycentric governance. Despite the small size of the island, water management on IJsselmonde was the responsibility of no fewer than 31 local jurisdictions and some 65 polders. In addition, some supra-local arrangements were made, such as joint supervision of dikes. According to the theory, such a polycentric system should have many advantages over more centralized management systems, and indeed there is some evidence of this. Yet, there is also evidence of a disadvantage that is not mentioned in the literature: petrification. IJsselmonde's water management system was often slow to adapt to changing conditions, and at times it provided an answer to yesterday's challenges rather than today's. We conclude that the theory of polycentric governance needs to be developed further because it now lumps together too many different systems under the heading of polycentric governance. This calls for more longitudinal case studies on the development and effectiveness of individual polycentric governance systems within their changing context. © 2012 by the author(s).

Geelhoed B., Technical University of Delft
Minerals Engineering | Year: 2011

Pierre Gy's theory for the sampling of particulate materials is widely applied and taught. A crucial part of Gy's theory deals with the estimation, prediction and minimization of the variance of the Fundamental Sampling Error using a formula that is known as "Gy's formula". Experimental evidence, however, supports the conclusion that Gy's formula is inaccurate. © 2010 Elsevier Ltd. All rights reserved.
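
For reference, a commonly cited form of Gy's formula expresses the relative variance of the Fundamental Sampling Error in terms of the top particle size and the sample and lot masses. The symbols and default factor values below are generic textbook conventions, assumed for illustration and not taken from the paper:

```python
def gy_variance(d_cm, sample_mass_g, lot_mass_g,
                f=0.5, g=0.25, c=100.0, liberation=1.0):
    """Relative variance of the Fundamental Sampling Error per Gy's formula:

        sigma^2 = f * g * c * l * d^3 * (1/M_S - 1/M_L)

    d_cm: top particle size (cm); f: shape factor; g: granulometric factor;
    c: mineralogical composition factor (g/cm^3); liberation: liberation
    factor l; masses in grams. The defaults are illustrative only.
    """
    return (f * g * c * liberation * d_cm ** 3 *
            (1.0 / sample_mass_g - 1.0 / lot_mass_g))

# halving the top particle size cuts the predicted variance by a factor of 8,
# since the variance scales with d^3
v1 = gy_variance(d_cm=1.0, sample_mass_g=500.0, lot_mass_g=1.0e6)
v2 = gy_variance(d_cm=0.5, sample_mass_g=500.0, lot_mass_g=1.0e6)
print(round(v1 / v2, 3))
```

The paper's claim is precisely that predictions of this kind can be inaccurate against experimental evidence, so the formula is shown here only to make the object of the critique concrete.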

Hoeven F.V.D., Technical University of Delft
Tunnelling and Underground Space Technology | Year: 2011

In the early 1990s, it became clear that tunnelling as Dutch engineers knew it was about to change fundamentally. Local governments, pressure groups and individuals had become aware of the added value that underground space technology (tunnelling) could bring, giving rise to the next generation of multifunctional road and rail tunnel projects. This article reports on the outcome of a landmark study that focused on the new opportunities that multifunctional tunnels for motorways offer: RingRing. The RingRing study includes a concise state-of-the-art exploration of path-finding projects, delivers a solid methodology for dealing with the many design decisions during the conceptual phase of the multidisciplinary project, proposes a set of generic concepts that respond to the four key challenges faced by multifunctional tunnels, conducts two showcase design study projects, and provides an overall analysis of the applicability of multifunctional tunnels for integrating orbital motorways in large cities such as Amsterdam and Rotterdam. Eight years after the publication of the RingRing study, the first projects it identified are built. Other projects have not yet made it off the drawing board. This article looks back to assess what has been accomplished and draws lessons learned in order to be able to improve future projects. © 2010 Elsevier Ltd.

The total annual revenue stream in the US natural gas value chain over the past decade is analyzed. Growth of total revenues has been driven by higher wellhead prices, which peaked in 2008. The emergence of the unconventional gas business was made possible in part by the pre-recessional rise in global energy prices. The general rise in natural gas prices between 1998 and 2008 did not lower overall US gas consumption, but shifts have occurred during the past decade in the consumption levels of individual consumer groups. Industry's gas consumption has decreased, while power stations increased their gas consumption. Commercial and residential consumers maintained flat gas consumption patterns. This study introduces the Weighted Average Cost of Retail Gas (WACORG) as a tool to calculate and monitor an average retail price based on the different natural gas prices charged to the traditional consumer groups. The WACORG also provides insight into wellhead revenues and may be used as an instrument for calibrating retail prices in support of wellhead price-floor regulation. Such price-floor regulation is advocated here as a possible mitigation measure against excessive volatility in US wellhead gas prices, to improve the security of gas supply. © 2011 Elsevier Ltd.
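
The abstract describes the WACORG only at this level of detail; a plausible sketch, assuming it is a consumption-weighted average of the retail prices charged to the traditional consumer groups, might look as follows. The prices and volumes are hypothetical, chosen only to show the mechanics:

```python
def wacorg(prices, volumes):
    """Weighted Average Cost of Retail Gas (illustrative definition):
    the retail price of each consumer group weighted by that group's
    consumption volume."""
    total = sum(volumes.values())
    return sum(prices[g] * volumes[g] for g in prices) / total

# hypothetical retail prices ($/MMBtu) and annual volumes (Bcf) per group
prices = {"residential": 12.0, "commercial": 10.0,
          "industrial": 6.0, "power": 5.0}
volumes = {"residential": 4800, "commercial": 3200,
           "industrial": 6800, "power": 7400}
print(round(wacorg(prices, volumes), 3))
```

Under this sketch, a shift of consumption from industry toward power generation (as the abstract reports) changes the weighting and hence the monitored average retail price, which is the kind of signal a price-floor calibration could use.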

Dorenbos P., Technical University of Delft
Journal of Luminescence | Year: 2014

The chemical shift model of electronic binding energies is applied to the lanthanides in TO2 and MTO3 compounds, where T is the cation Ti4+, Zr4+, Ce4+, Hf4+, or Th4+ and M is the alkaline earth cation Ba2+, Sr2+, or Ca2+. As input, data from lanthanide spectroscopy are used to generate the binding energies of electrons in all lanthanide impurity states and in the valence band and conduction band states of the host compound. In these compounds the bottom of the conduction band has a strong nd-orbital character (n=3, 4, 5, and 6 for titanates, zirconates, hafnates, and thorates, respectively). Electronic structure diagrams are determined that show the valence band and conduction band energy together with all lanthanide impurity level energies relative to the vacuum level. They reveal clear trends when n increases that have profound consequences for the lanthanide luminescence properties. © 2014 Elsevier B.V.

Abate A., Technical University of Delft
Electronic Notes in Theoretical Computer Science | Year: 2013

This article provides a survey of approximation metrics for stochastic processes. We deal with Markovian processes in discrete time evolving on general state spaces, namely on domains with infinite cardinality and endowed with proper measurability and metric structures. The focus of this work is to discuss approximation metrics between two such processes, based on the notion of probabilistic bisimulation: in particular we investigate metrics characterized by an approximate variant of this notion. We suggest that metrics between two processes can be introduced essentially in two distinct ways: the first employs the probabilistic conditional kernels underlying the two stochastic processes under study, and leverages notions derived from algebra, logic, or category theory; whereas the second looks at distances between trajectories of the two processes, and is based on the dynamical properties of the two processes (either their syntax, via the notion of bisimulation function; or their semantics, via sampling techniques). The survey moreover covers the problem of constructing formal approximations of stochastic processes according to the introduced metrics. © 2013 Published by Elsevier B.V.

Bakker M., Technical University of Delft
Advances in Water Resources | Year: 2010

A new analytic solution approach is presented for the modeling of steady flow to pumping wells near rivers in strip aquifers; all boundaries of the river and strip aquifer may be curved. The river penetrates the aquifer only partially and has a leaky stream bed. The water level in the river may vary spatially. Flow in the aquifer below the river is semi-confined while flow in the aquifer adjacent to the river is confined or unconfined and may be subject to areal recharge. Analytic solutions are obtained through superposition of analytic elements and Fourier series. Boundary conditions are specified at collocation points along the boundaries. The number of collocation points is larger than the number of coefficients in the Fourier series and a solution is obtained in the least squares sense. The solution is analytic while boundary conditions are met approximately. Very accurate solutions are obtained when enough terms are used in the series. Several examples are presented for domains with straight and curved boundaries, including a well pumping near a meandering river with a varying water level. The area of the river bottom where water infiltrates into the aquifer is delineated and the fraction of river water in the well water is computed for several cases. © 2010 Elsevier Ltd.
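
A minimal sketch of the least-squares collocation idea described above: fit a truncated Fourier series to boundary data at more collocation points than there are coefficients, so the boundary condition is met approximately in the least-squares sense. The boundary function below is an arbitrary test case, not one of the paper's examples:

```python
import math

def collocation_fit(boundary_value, n_terms=5, n_points=40):
    """Least-squares fit of a0 + sum_k (ak cos kx + bk sin kx) to boundary
    data sampled at n_points collocation points (n_points exceeds the number
    of coefficients), via the normal equations."""
    xs = [2.0 * math.pi * i / n_points for i in range(n_points)]

    def basis(x):
        row = [1.0]
        for k in range(1, n_terms + 1):
            row += [math.cos(k * x), math.sin(k * x)]
        return row

    A = [basis(x) for x in xs]
    b = [boundary_value(x) for x in xs]
    m = len(A[0])
    # normal equations A^T A c = A^T b, solved by Gaussian elimination
    ata = [[sum(A[r][i] * A[r][j] for r in range(n_points))
            for j in range(m)] for i in range(m)]
    atb = [sum(A[r][i] * b[r] for r in range(n_points)) for i in range(m)]
    for k in range(m):
        p = max(range(k, m), key=lambda r: abs(ata[r][k]))
        ata[k], ata[p] = ata[p], ata[k]
        atb[k], atb[p] = atb[p], atb[k]
        for r in range(k + 1, m):
            f = ata[r][k] / ata[k][k]
            for c in range(k, m):
                ata[r][c] -= f * ata[k][c]
            atb[r] -= f * atb[k]
    coef = [0.0] * m
    for k in reversed(range(m)):
        coef[k] = (atb[k] - sum(ata[k][c] * coef[c]
                                for c in range(k + 1, m))) / ata[k][k]
    return coef, basis

coef, basis = collocation_fit(lambda x: math.cos(2 * x) + 0.5)
# the fitted series reproduces the boundary condition at an off-grid point
x = 1.234
approx = sum(c * v for c, v in zip(coef, basis(x)))
print(abs(approx - (math.cos(2 * x) + 0.5)) < 1e-8)
```

As in the paper's approach, accuracy improves as more terms are kept in the series; the solution remains analytic while the boundary conditions are satisfied approximately.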

De Ridder D., Technical University of Delft | De Ridder J., Delft Bioinformatics Laboratory | Reinders M.J.T., Pattern Recognition and Bioinformatics group
Briefings in Bioinformatics | Year: 2013

Pattern recognition is concerned with the development of systems that learn to solve a given problem using a set of example instances, each represented by a number of features. These problems include clustering, the grouping of similar instances; classification, the task of assigning a discrete label to a given instance; and dimensionality reduction, combining or selecting features to arrive at a more useful representation. The use of statistical pattern recognition algorithms in bioinformatics is pervasive. Classification and clustering are often applied to high-throughput measurement data arising from microarray, mass spectrometry and next-generation sequencing experiments for selecting markers, predicting phenotype and grouping objects or genes. Less explicitly, classification is at the core of a wide range of tools such as predictors of genes, protein function, functional or genetic interactions, etc., and used extensively in systems biology. A course on pattern recognition (or machine learning) should therefore be at the core of any bioinformatics education program. In this review, we discuss the main elements of a pattern recognition course, based on material developed for courses taught at the BSc, MSc and PhD levels to an audience of bioinformaticians, computer scientists and life scientists. We pay attention to common problems and pitfalls encountered in applications and in interpretation of the results obtained. © The Author 2013.
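
As a classroom-style illustration of the classification task mentioned above (a sketch under assumed toy data, not material from the course described), a nearest-mean classifier assigns an instance to the class whose feature mean is closest:

```python
def nearest_mean_fit(X, y):
    """Fit a nearest-mean classifier: store the per-class feature means."""
    means = {}
    for label in set(y):
        rows = [x for x, t in zip(X, y) if t == label]
        means[label] = [sum(col) / len(rows) for col in zip(*rows)]
    return means

def nearest_mean_predict(means, x):
    """Assign x to the class whose mean is closest in Euclidean distance."""
    def dist2(m):
        return sum((a - b) ** 2 for a, b in zip(x, m))
    return min(means, key=lambda label: dist2(means[label]))

# hypothetical two-gene 'expression' profiles for two phenotypes
X = [[0.1, 0.2], [0.2, 0.1], [0.9, 1.0], [1.0, 0.8]]
y = ["healthy", "healthy", "tumour", "tumour"]
means = nearest_mean_fit(X, y)
label = nearest_mean_predict(means, [0.95, 0.9])
print(label)
```

Even this minimal example exposes the course topics the review emphasises: the choice of features, the risk of overfitting with few examples, and the need for proper validation before interpreting the result.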

Goverde R.M.P., Technical University of Delft
Transportation Research Part C: Emerging Technologies | Year: 2010

In scheduled railway traffic networks a single delayed train may cause a domino effect of secondary delays over the entire network, which is a main concern to planners and dispatchers. This paper presents a model and an algorithm to compute the propagation of initial delays over a periodic railway timetable. The railway system is modelled as a linear system in max-plus algebra, including zero-order dynamics corresponding to delay propagation within a timetable period. A timed event graph representation is exploited in an effective graph algorithm that computes the propagation of train delays using a bucket implementation to store the propagated delays. The behaviour of the delay propagation and the convergence of the algorithm are analysed depending on timetable properties such as realisability and stability. Different types of delays and delay behaviour are discussed, including primary and secondary delays, structural delays, periodic delay regimes, and delay explosion. A decomposition method based on linearity is introduced to deal with structural and initial delays separately. The algorithm can be applied to large-scale scheduled railway traffic networks in real-time applications such as interactive timetable stability analysis and decision support systems to assist train dispatchers. © 2010 Elsevier Ltd.
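
A minimal sketch of delay propagation in max-plus algebra, where the matrix-vector product replaces sum with max and product with addition. The three-event cycle and its entries (running times minus scheduled buffers) are invented for illustration, not taken from the paper:

```python
NEG_INF = float("-inf")

def maxplus_mv(A, x):
    """Max-plus matrix-vector product: (A (x) x)_i = max_j (A[i][j] + x[j])."""
    return [max(a + xj for a, xj in zip(row, x)) for row in A]

def propagate(A, initial_delay, steps):
    """Propagate initial train delays through a periodic timetable.

    A[i][j] holds the running time minus the scheduled buffer from event j
    to event i, or -inf if the events are not connected; delays are clipped
    at zero because a train cannot depart before its scheduled time.
    """
    d = initial_delay[:]
    history = [d]
    for _ in range(steps):
        d = [max(0.0, v) for v in maxplus_mv(A, d)]
        history.append(d)
    return history

# three events in a cycle; the negative entries are buffers absorbing delay
A = [[NEG_INF, -1.0, NEG_INF],
     [NEG_INF, NEG_INF, -2.0],
     [-1.0, NEG_INF, NEG_INF]]
hist = propagate(A, [5.0, 0.0, 0.0], steps=6)
print(hist[-1])  # the initial 5-unit delay has been fully absorbed
```

Because every cycle in this toy timetable has positive total buffer, the initial delay dies out within a few periods, which corresponds to the stability property analysed in the paper; with insufficient buffers the same recursion exhibits the persistent or exploding delay regimes the abstract mentions.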

Priemus H., Technical University of Delft
Housing Studies | Year: 2010

Current housing finance systems are mostly a poor reflection of the government's policy priorities. This paper explains how the current Dutch housing finance system works, and analyses its weaknesses against the backdrop of a well-functioning housing market and national policy goals. It specifically looks at recent proposals and some building blocks for future housing finance reform in the Netherlands. The paper ends with conclusions on the potential relevance of the analysis for other European countries. © 2010 Taylor & Francis.

Hagen W.R., Technical University of Delft
Coordination Chemistry Reviews | Year: 2011

Cells acquire molybdenum and tungsten as their highly soluble oxoanions, Mo(VI)O4^2- or W(VI)O4^2-, which they internalize by means of an active (i.e. energy-requiring) transmembrane importer, for subsequent conversion into the metalloenzyme cofactors Moco or Wco (and FeMoco in nitrogen fixers). This import system has been studied as one of the models for the functioning of the protein complex superfamily of ABC (ATP binding cassette) transporters, but its mechanistic details are presently not clear. The complex exhibits interesting variants, known as the microbial Mod, Tup, and Wtp systems, and the less well defined eukaryotic MOT1 system, which mutually differ in oxoanion coordination chemistry and in the control of intracellular Mo/W levels. This evolutionary diversity of Mo/W transporters has resulted in confusing nomenclature, whose rectification is here proposed. © 2011 Elsevier B.V.

Robaey Z., Technical University of Delft
Science and Engineering Ethics | Year: 2016

Genetically modified organisms are a technology now used with increasing frequency in agriculture. Genetically modified seeds have the special characteristic of being living artefacts that can reproduce and spread; thus it is difficult to control where they end up. In addition, genetically modified seeds may also bring about uncertainties for environmental and human health. Where they will go and what effect they will have is therefore very hard to predict: this creates a puzzle for regulators. In this paper, I use the problem of contamination to complicate my ascription of forward-looking moral responsibility to owners of genetically modified organisms. Indeed, how can owners act responsibly if they cannot know that contamination has occurred? Also, because contamination creates new and unintended ownership, it challenges the ascription of forward-looking moral responsibility based on ownership. From a broader perspective, the question this paper aims to answer is as follows: how can we ascribe forward-looking moral responsibility when the effects of the technologies in question are difficult to know or unknown? To solve this problem, I look at the epistemic conditions for moral responsibility and connect them to the normative notion of the social experiment. Indeed, examining conditions for morally responsible experimentation helps to define a range of actions and to establish the related epistemic virtues that owners should develop in order to act responsibly where genetically modified organisms are concerned. © 2016, The Author(s).

Asveld L., Technical University of Delft
Science and Engineering Ethics | Year: 2016

The policies of the European Union concerning the development of biofuels can be termed a lock-in. Biofuels were initially hailed as a green, sustainable technology. However, evidence to the contrary quickly emerged. The European Commission proposed to alter its policies to accommodate these effects but met with fierce resistance from a considerable number of member states that have an economic interest in these first-generation biofuels. In this paper I argue that such a lock-in might have been avoided if an experimental approach to governance had been adopted. Existing approaches such as anticipation and niche management either do not reduce uncertainty sufficiently or fail to explicitly address conflicts between the values motivating political and economic support for new technologies. In this paper, I suggest applying an experimental framework to the development of sustainable biobased technologies. Such an approach builds on insights from adaptive management and transition management in that it has the stimulation of learning effects at its core. I argue that these learning effects should occur on the actual impacts of new technologies, on the institutionalisation of new technologies and, most specifically, on the norms and values that underlie policies supporting new technologies. This approach can be relevant for other emerging technologies. © 2016, The Author(s).

Hendriks R.C.,Technical University of Delft | Gerkmann T.,KTH Royal Institute of Technology
IEEE Transactions on Audio, Speech and Language Processing | Year: 2012

For multi-channel noise reduction algorithms like the minimum variance distortionless response (MVDR) beamformer or the multi-channel Wiener filter, an estimate of the noise correlation matrix is needed. For its estimation, the literature often proposes using a voice activity detector (VAD). However, with a VAD the estimated matrix can only be updated during speech absence. As a result, during speech presence the noise correlation matrix estimate cannot follow changing noise fields with appropriate accuracy. This effect is aggravated by the fact that voice activity detection is rather difficult in nonstationary noise, making false alarms likely. In this paper, we present and analyze an algorithm that estimates the noise correlation matrix without using a VAD. This algorithm is based on measuring the correlation of the noisy input and a noise reference, which can be obtained, e.g., by steering a null towards the target source. When applied in combination with an MVDR beamformer, the proposed noise correlation matrix estimate is shown to result in a more accurate beamformer response, a larger signal-to-noise ratio improvement, and a larger instrumentally predicted speech intelligibility when compared to competing algorithms such as the generalized sidelobe canceler, a VAD-based MVDR beamformer, and an MVDR based on the noisy correlation matrix. © 2011 IEEE.
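As background to the beamforming setting above, a minimal sketch of the MVDR weight computation, assuming a noise correlation matrix estimate R and a steering vector d are already available. This shows the textbook MVDR formula only, not the paper's VAD-free estimator; the matrix and steering vector below are illustrative.

```python
import numpy as np

# Minimal MVDR sketch: given an estimate R of the M x M noise correlation
# matrix and a steering vector d for the target source, the MVDR weights
# minimize output noise power under a distortionless constraint w^H d = 1.
def mvdr_weights(R, d):
    Rinv_d = np.linalg.solve(R, d)           # R^{-1} d
    return Rinv_d / (d.conj() @ Rinv_d)      # w = R^{-1} d / (d^H R^{-1} d)

M = 4
rng = np.random.default_rng(0)
# Illustrative Hermitian positive-definite "noise" correlation matrix.
A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
R = A @ A.conj().T + M * np.eye(M)
d = np.exp(1j * np.pi * 0.3 * np.arange(M))  # example steering vector

w = mvdr_weights(R, d)
print(abs(w.conj() @ d))  # ~1.0: the distortionless constraint holds
```

The constraint w^H d = 1 holds by construction whenever R is Hermitian positive definite, which is why the accuracy of the R estimate is the critical ingredient the paper addresses.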

Pasman H.J.,Technical University of Delft
Journal of Loss Prevention in the Process Industries | Year: 2011

In the mid-1970s, public concern in the Netherlands about fire, explosion, and toxic risks from mishaps in the expanding process industry created political pressure that led to the embrace of quantitative risk analysis (QRA) as a tool for licensing and land-use planning. Probabilistic treatment of risk had been exercised before in the design of flood defenses. A 'test' on six different plants, the COVO study, favored the idea. Failure-rate values were needed immediately; for storage vessels, AKZO's chlorine-vessel data and British steam-boiler data were the first to be used. Risk criteria on which to base decisions were also developed and embodied in legislation in 1985. As licensing and land-use planning are tasks of the provincial authorities, further details, such as failure-frequency values, have been worked out under the auspices of the Inter-Provincial Consultation (IPO). In the late 90s the Purple Book consolidated this information as a guideline for Dutch quantitative risk assessment of process installations. The paper gives a condensed historical overview and guidance to published papers; it further comments on and explains policy backgrounds, presents comparisons with other data, and briefly indicates in which direction developments should go to improve QRA. © 2010 Elsevier Ltd.

de Smet L.C.,Technical University of Delft
Sensors (Basel, Switzerland) | Year: 2013

Since their introduction in 2001, silicon nanowire (SiNW)-based sensor devices have attracted considerable interest as a general platform for ultra-sensitive, electrical detection of biological and chemical species. Most studies focus on detecting, sensing and monitoring analytes in aqueous solution, but the number of studies on sensing gases and vapors using SiNW-based devices is increasing. This review gives an overview of selected research papers related to the application of electrical SiNW-based devices in the gas phase that have been reported over the past 10 years. Special attention is given to surface modification strategies and the sensing principles involved. In addition, future steps and technological challenges in this field are addressed.

Van Antwerpen D.,Technical University of Delft
Proceedings - HPG 2011: ACM SIGGRAPH Symposium on High Performance Graphics | Year: 2011

Monte Carlo Light Transport algorithms such as Path Tracing (PT), Bi-Directional Path Tracing (BDPT) and Metropolis Light Transport (MLT) make use of random walks to sample light transport paths. When parallelizing these algorithms on the GPU the stochastic termination of random walks results in an uneven workload between samples, which reduces SIMD efficiency. In this paper we propose to combine stream compaction and sample regeneration to keep SIMD efficiency high during random walk construction, in spite of stochastic termination. Furthermore, for BDPT and MLT, we propose to evaluate all bidirectional connections of a sample in parallel in order to balance the workload between GPU threads and improve SIMD efficiency during sample evaluation. We present efficient parallel GPU-only implementations for PT, BDPT, and MLT in CUDA. We show that our GPU implementations outperform similar CPU implementations by an order of magnitude. © 2011 ACM.
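The compaction-plus-regeneration idea can be sketched on the CPU. In the toy model below, the lane count, step count, and termination probability are illustrative stand-ins for GPU warp lanes and stochastic (Russian-roulette-style) termination; it is not the paper's CUDA implementation, only the scheduling idea.

```python
import random

# After each bounce some random walks terminate; stream compaction removes
# the dead lanes and sample regeneration refills them with fresh samples,
# so every SIMD lane keeps doing useful work on every step.
def trace_with_regeneration(num_lanes=8, steps=100, p_terminate=0.3, seed=1):
    rng = random.Random(seed)
    lanes = [0] * num_lanes          # per-lane bounce counter of the active path
    completed = 0                    # number of finished samples
    for _ in range(steps):
        # advance every lane one bounce
        lanes = [depth + 1 for depth in lanes]
        # stream compaction: keep only surviving paths
        survivors = [d for d in lanes if rng.random() > p_terminate]
        completed += num_lanes - len(survivors)
        # sample regeneration: refill the freed lanes with new paths
        lanes = survivors + [0] * (num_lanes - len(survivors))
    return completed

print(trace_with_regeneration())  # samples completed with all lanes kept busy
```

Without regeneration, terminated lanes would idle until the longest-lived path in the batch finished; refilling them is what keeps SIMD utilization high.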

Pikulin D.I.,Leiden University | Nazarov Y.V.,Technical University of Delft
Physical Review B - Condensed Matter and Materials Physics | Year: 2012

We derive a generic phenomenological model of a Majorana Josephson junction that accounts for avoided crossing of Andreev states, and investigate its dynamics at constant bias voltage to reveal an unexpected pattern of an any-π Josephson effect in the limit of slow decoherence: sharp peaks in noise not related to any definite fraction of Josephson frequency. © 2012 American Physical Society.

Collins S.H.,Technical University of Delft | Kuo A.D.,University of Michigan
PLoS ONE | Year: 2010

Background: Humans normally dissipate significant energy during walking, largely at the transitions between steps. The ankle then acts to restore energy during push-off, which may be the reason that ankle impairment nearly always leads to poorer walking economy. The replacement of lost energy is necessary for steady gait, in which mechanical energy is constant on average, external dissipation is negligible, and no net work is performed over a stride. However, dissipation and replacement by muscles might not be necessary if energy were instead captured and reused by an assistive device. Methodology/Principal Findings: We developed a microprocessor-controlled artificial foot that captures some of the energy that is normally dissipated by the leg and "recycles" it as positive ankle work. In tests on subjects walking with an artificially-impaired ankle, a conventional prosthesis reduced ankle push-off work and increased net metabolic energy expenditure by 23% compared to normal walking. Energy recycling restored ankle push-off to normal and reduced the net metabolic energy penalty to 14%. Conclusions/Significance: These results suggest that reduced ankle push-off contributes to the increased metabolic energy expenditure accompanying ankle impairments, and demonstrate that energy recycling can be used to reduce such cost. © 2010 Collins, Kuo.

Dorenbos P.,Technical University of Delft
Physical Review B - Condensed Matter and Materials Physics | Year: 2012

Lanthanides in compounds can adopt the tetravalent [Xe]4f^(n-1) configuration (like Ce4+, Pr4+, Tb4+), the trivalent [Xe]4f^n configuration (all lanthanides), or the divalent [Xe]4f^(n+1) configuration (like Eu2+, Yb2+, Sm2+, Tm2+). The 4f-electron binding energy depends on the charge Q of the lanthanide ion and its chemical environment A. Experimental data on three environments (i.e., the bare lanthanide ions where A = vacuum, the pure lanthanide metals, and the lanthanides in aqueous solutions) are employed to determine the 4f-electron binding energies in all divalent and trivalent lanthanides. The action of the chemical environment on the 4f-electron binding energy will be represented by an effective ambient charge Q_A = -Q at an effective distance from the lanthanide. This forms the basis of a model that relates the chemical shift of the 4f-electron binding energy in the divalent lanthanide with that in the trivalent one. Eu will be used as the lanthanide of reference, and special attention is devoted to the 4f-electron binding energy difference between Eu2+ and Eu3+. When that difference is known, the model provides the 4f-electron binding energies of all divalent and all trivalent lanthanide ions relative to the vacuum energy. © 2012 American Physical Society.

Wijntjes M.W.,Technical University of Delft
Journal of vision | Year: 2012

Among other cues, the visual system uses shading to infer the 3D shape of objects. The shading pattern depends on the illumination and reflectance properties (BRDF). In this study, we compared 3D shape perception between identical shapes with different BRDFs. The stimuli were photographed 3D printed random smooth shapes that were either painted matte gray or had a gray velvet layer. We used the gauge figure task (J. J. Koenderink, A. J. van Doorn, & A. M. L. Kappers, 1992) to quantify 3D shape perception. We found that the shape of velvet objects was systematically perceived to be flatter than that of the matte objects. Furthermore, observers' judgments were more similar for matte shapes than for velvet shapes. Lastly, we compared subjective with veridical reliefs and found large systematic differences: both matte and velvet shapes were perceived as flatter than the actual shape. The isophote pattern of a flattened Lambertian shape resembles the isophote pattern of an unflattened velvet shape. We argue that the visual system uses a similar shape-from-shading computation for matte and velvet objects that partly discounts material properties.

Sheldon R.A.,Technical University of Delft
Applied Microbiology and Biotechnology | Year: 2011

Cross-linked enzyme aggregates (CLEAs) have many economic and environmental benefits in the context of industrial biocatalysis. They are easily prepared from crude enzyme extracts, and the costs of (often expensive) carriers are circumvented. They generally exhibit improved storage and operational stability towards denaturation by heat, organic solvents, and autoproteolysis and are stable towards leaching in aqueous media. Furthermore, they have high catalyst productivities (kilograms product per kilogram biocatalyst) and are easy to recover and recycle. Yet another advantage derives from the possibility of co-immobilizing two or more enzymes to provide CLEAs that are capable of catalyzing multiple biotransformations, independently or in sequence as catalytic cascade processes. © 2011 The Author(s).

Tighe B.P.,Technical University of Delft
Physical Review Letters | Year: 2012

The isostatic state plays a central role in organizing the response of many amorphous materials. We construct a diverging length scale in nearly isostatic spring networks that is defined both above and below isostaticity and at finite frequencies and relate the length scale to viscoelastic response. Numerical measurements verify that proximity to isostaticity controls the viscosity, shear modulus, and creep of random networks. © 2012 American Physical Society.

van Rhee C.,Technical University of Delft
Journal of Hydraulic Engineering | Year: 2010

In dredging practice, sand is eroded at very high flow velocities using water jets. Breaching of dikes or dams is another process in which sediments are eroded under the influence of high flow velocities. The existing pick-up functions were developed for relatively low values of bed shear stress and hence overestimate erosion at high velocities. Conventional pick-up functions can be made suitable for high-velocity erosion by taking the permeability of the sediment into account, which can be achieved by modifying the critical Shields parameter. The procedure is demonstrated using the van Rijn pick-up function. The theory is compared with experiment, and good agreement is found. © 2010 ASCE.
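A sketch of the van Rijn (1984) pick-up function with the critical Shields parameter left as a free input, so that a permeability-modified value can be substituted in the spirit of the paper. The constants and exponents are the commonly quoted form of the van Rijn function, reproduced here from memory, and all numerical values are illustrative.

```python
# van Rijn (1984) pick-up function sketch.  theta is the applied Shields
# parameter, theta_cr the critical value -- the paper's approach of raising
# theta_cr for dilatancy/permeability effects plugs straight into this.
def van_rijn_pickup(theta, theta_cr, d50, rho_s=2650.0, rho_w=1000.0,
                    g=9.81, nu=1.0e-6):
    s = rho_s / rho_w
    d_star = d50 * ((s - 1) * g / nu ** 2) ** (1 / 3)   # dimensionless grain size
    T = max(theta - theta_cr, 0.0) / theta_cr           # transport-stage parameter
    # pick-up rate E in kg/(m^2 s)
    return 0.00033 * rho_s * ((s - 1) * g * d50) ** 0.5 * d_star ** 0.3 * T ** 1.5

E_plain = van_rijn_pickup(theta=2.0, theta_cr=0.047, d50=200e-6)
E_modified = van_rijn_pickup(theta=2.0, theta_cr=0.12, d50=200e-6)  # raised theta_cr
print(E_plain > E_modified)  # True: a higher critical Shields value predicts less erosion
```

The illustration shows only the structural point of the paper: increasing the effective critical Shields parameter lowers the predicted pick-up rate at a given bed shear stress.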

Van der Neut J.,Technical University of Delft
Geophysical Prospecting | Year: 2013

With seismic interferometry or the virtual source method, controlled sources can be redatumed from the Earth's surface to generate so-called virtual sources at downhole receiver locations. Generally this is done by cross-correlation of the recorded downhole data and stacking over source locations. By studying the retrieved data at zero time lag, downhole illumination conditions that determine the virtual source radiation pattern can be analysed without a velocity model. This can be beneficial for survey planning in time-lapse experiments. Moreover, the virtual source radiation pattern can be corrected by multi-dimensional deconvolution or directional balancing. Such an approach can help to improve virtual source repeatability, posing major advantages for reservoir monitoring. An algorithm is proposed for so-called illumination balancing (being closely related to directional balancing). It can be applied to single-component receiver arrays with limited aperture below a strongly heterogeneous overburden. The algorithm is demonstrated on synthetic 3D elastic data to retrieve time-lapse amplitude attributes. © 2012 European Association of Geoscientists & Engineers.
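The correlation step at the heart of the method can be illustrated with a toy example: two receivers record idealized impulse arrivals from one source, and the cross-correlation of the two records peaks at the inter-receiver traveltime, as if a "virtual source" sat at the first receiver. Real applications stack such correlations over many sources; all numbers below are illustrative.

```python
# Toy cross-correlation step of seismic interferometry / the virtual
# source method, with idealized spike arrivals and a single source.
n = 200
t_a, t_b = 40, 65                     # arrival samples at receivers A and B
rec_a = [0.0] * n; rec_a[t_a] = 1.0
rec_b = [0.0] * n; rec_b[t_b] = 1.0

# cross-correlation C(tau) = sum_t rec_b[t + tau] * rec_a[t]
corr = {tau: sum(rec_b[t + tau] * rec_a[t] for t in range(n - tau))
        for tau in range(n)}
lag_of_peak = max(corr, key=corr.get)
print(lag_of_peak)  # 25 = t_b - t_a, the virtual-source traveltime A -> B
```

The abstract's zero-time-lag analysis corresponds to inspecting C(0) of such correlations, which is why illumination conditions can be read off without a velocity model.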

TTim is a free code for the semi-analytic simulation of transient flow in multi-layer systems consisting of an arbitrary number of layers. No grid or time-stepping is required, nor does a closed model boundary need to be specified in any of the layers. Currently, TTim includes multi-layer wells and line-sinks, which may be used to simulate transient flow to a variety of hydrogeologic features, including wells with a skin and wellbore storage, incompletely sealed abandoned wells, streams with leaky beds, vertical faults, and horizontal wells; transient forcing needs to be represented by a step function. Other features that may be simulated include vertical anisotropy and the delayed response of the water table. Behind the scenes of TTim, the Laplace-transform analytic element method is applied. TTim is written in Python, with Python scripts used as input files. TTim has many practical applications, including the design of riverbank filtration systems, analysis of aquifer tests near surface-water bodies, design and evaluation of recirculation wells, and modeling of the transient pressure response of proposed carbon geologic sequestration projects. In addition, the short and simple input files and the one-to-one link between analytic elements and hydrogeologic features make TTim well suited for education. © 2013 Springer-Verlag Berlin Heidelberg.
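Methods of this family solve in the Laplace domain and then invert numerically to the time domain. The sketch below shows one classic inversion scheme, the Gaver-Stehfest algorithm, purely to illustrate that workflow; TTim's own inversion routine may differ, and the transform pair used is just a textbook example.

```python
from math import factorial, log

# Gaver-Stehfest numerical inverse Laplace transform.  N must be even.
def stehfest_coefficients(N):
    V = []
    for i in range(1, N + 1):
        s = 0.0
        for k in range((i + 1) // 2, min(i, N // 2) + 1):
            s += (k ** (N // 2) * factorial(2 * k)
                  / (factorial(N // 2 - k) * factorial(k) * factorial(k - 1)
                     * factorial(i - k) * factorial(2 * k - i)))
        V.append((-1) ** (N // 2 + i) * s)
    return V

def invert(F, t, N=14):
    # f(t) ~ (ln 2 / t) * sum_i V_i * F(i * ln 2 / t)
    V = stehfest_coefficients(N)
    a = log(2.0) / t
    return a * sum(V[i - 1] * F(i * a) for i in range(1, N + 1))

# Textbook pair: F(s) = 1/(s + 1) has the exact inverse f(t) = exp(-t).
print(invert(lambda s: 1.0 / (s + 1.0), 1.0))  # close to 0.36788
```

Stehfest inversion only needs F(s) at real s, which is what makes Laplace-domain analytic element solutions convenient: each time is inverted independently, so no time-stepping grid is required.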

Cooke R.M.,Technical University of Delft
Climatic Change | Year: 2013

This article traces the development of uncertainty analysis through three generations punctuated by large methodology investments in the nuclear sector. Driven by a very high perceived legitimation burden, these investments aimed at strengthening the scientific basis of uncertainty quantification. The first generation, building on the Reactor Safety Study, introduced structured expert judgment in uncertainty propagation and distinguished variability from uncertainty. The second generation emerged in modeling the physical processes inside the reactor containment building after breach of the reactor vessel. Operational definitions and expert judgment for uncertainty quantification were elaborated. The third generation developed in modeling the consequences of release of radioactivity and transport through the biosphere. Expert performance assessment, dependence elicitation and probabilistic inversion are among the hallmarks. Third generation methods may be profitably employed in current Integrated Assessment Models (IAMs) of climate change. Possible applications of dependence modeling and probabilistic inversion are sketched. It is unlikely that these methods will be fully adequate for quantitative uncertainty analyses of the impacts of climate change, and a penultimate section looks ahead to fourth generation methods. © 2012 The Author(s).

Zhuge X.,FEI Company | Yarovoy A.G.,Technical University of Delft
IEEE Transactions on Image Processing | Year: 2012

This paper presents a 3-D near-field imaging algorithm that is formulated for 2-D wideband multiple-input-multiple-output (MIMO) imaging array topology. The proposed MIMO range migration technique performs the image reconstruction procedure in the frequency-wavenumber domain. The algorithm is able to completely compensate the curvature of the wavefront in the near-field through a specifically defined interpolation process and provides extremely high computational efficiency by the application of the fast Fourier transform. The implementation aspects of the algorithm and the sampling criteria of a MIMO aperture are discussed. The image reconstruction performance and computational efficiency of the algorithm are demonstrated both with numerical simulations and measurements using 2-D MIMO arrays. Real-time 3-D near-field imaging can be achieved with a real-aperture array by applying the proposed MIMO range migration techniques. © 2012 IEEE.

Remis R.F.,Technical University of Delft
Journal of Computational Physics | Year: 2011

In this paper we show that the Finite-Difference Time-Domain method (FDTD method) follows the recurrence relation for Fibonacci polynomials. More precisely, we show that FDTD approximates the electromagnetic field by Fibonacci polynomials in ΔtA, where Δt is the time step and A is the first-order Maxwell system matrix. By exploiting the connection between Fibonacci polynomials and Chebyshev polynomials of the second kind, we easily obtain the Courant-Friedrichs-Lewy (CFL) stability condition and we show that to match the spectral width of the system matrix, the time step should be chosen as large as possible, that is, as close to the CFL upper bound as possible. © 2010 Elsevier Inc.
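The recurrence in question can be demonstrated directly. The sketch below evaluates Fibonacci polynomials via F_{n+1}(x) = x F_n(x) + F_{n-1}(x), the same three-term form the leapfrog FDTD update follows in ΔtA; the scalar evaluation here is only an illustration of the recurrence, not of a Maxwell solver.

```python
# Fibonacci polynomials via the recurrence F_{n+1}(x) = x*F_n(x) + F_{n-1}(x),
# with F_0 = 0 and F_1 = 1.  At x = 1 they reduce to the Fibonacci numbers.
def fib_poly(n, x):
    f_prev, f_curr = 0.0, 1.0  # F_0, F_1
    for _ in range(n - 1):
        f_prev, f_curr = f_curr, x * f_curr + f_prev
    return f_curr if n >= 1 else f_prev

print([fib_poly(n, 1.0) for n in range(1, 8)])  # [1.0, 1.0, 2.0, 3.0, 5.0, 8.0, 13.0]
```

Replacing the scalar x by the matrix ΔtA gives the field update; boundedness of the resulting polynomials over the spectrum of ΔtA is what yields the CFL condition via the Chebyshev connection.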

Berkhout A.J.G.,Technical University of Delft
Geophysical Prospecting | Year: 2014

The next-generation seismic imaging algorithms will consider multiple scattering as indispensable information, being referred to as Full-Wavefield Migration. In addition, these algorithms will also include autonomous velocity updating in the migration process, being referred to as Joint Migration Inversion. Full-Wavefield Migration and Joint Migration Inversion address the industrial needs to improve images of very complex reservoirs as well as the industrial ambition to produce these images in a more automatic manner (automation in seismic processing). In this vision paper on seismic imaging, Full-Wavefield Migration and Joint Migration Inversion are formulated in terms of a closed-loop estimation algorithm that can be physically explained by an iterative double focusing process (full-wavefield common-focus-point technology). A critical module in this formulation is forward modelling, allowing feedback from migrated output to unmigrated input (closing the loop). For this purpose, a full-wavefield modelling module has been developed, which utilizes an operator description of complex geology. The full-wavefield modelling module is pre-eminently suited to function in the feedback path of a closed-loop migration algorithm. 'The Future of Seismic Imaging' is presented as a coherent trilogy, proposing the migration framework of the future in three consecutive parts. In Part I, it was shown that the proposed full-wavefield modelling module algorithm differs fundamentally from finite-difference modelling because velocities and densities need not be provided. Instead, an operator description of the subsurface is used. In addition, the concept of reverse modelling was introduced. In Part II, it is shown how the theory of Primary Wavefield Migration can be extended to Full-Wavefield Migration by correcting for angle-dependent transmission effects and by utilizing multiple scattering. 
The potential of the Full-Wavefield Migration algorithm is illustrated with numerical examples. A multidirectional migration strategy is proposed that navigates the Full-Wavefield Migration algorithm through the seismic data cube in different directions. © 2014 European Association of Geoscientists & Engineers.

Amiri-Simkooei A.R.,University of Isfahan | Amiri-Simkooei A.R.,Technical University of Delft
Journal of Geodesy | Year: 2013

In an earlier work, a simple and flexible formulation for the weighted total least squares (WTLS) problem was presented. The formulation allows one to directly apply the existing body of knowledge of least squares theory to errors-in-variables (EIV) models in which the complete description of the covariance matrices of the observation vector and of the design matrix can be employed. This contribution applies one of the well-known theories, least squares variance component estimation (LS-VCE), to the total least squares problem. LS-VCE is adopted to cope with the estimation of different variance components in an EIV model having a general covariance matrix obtained from the (fully populated) covariance matrices of the functionally independent variables and a proper application of the error propagation law. Two empirical examples using real and simulated data are presented to illustrate the theory. The first example is a linear regression model and the second is a 2-D affine transformation. For each application, two variance components, one for the observation vector and one for the coefficient matrix, are simultaneously estimated. Because the formulation is based on standard least squares theory, the covariance matrix of the estimates in general, and the precision of the estimates in particular, can also be presented. © 2013 Springer-Verlag Berlin Heidelberg.
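For orientation, the standard weighted least squares estimator that LS-VCE builds on can be sketched as follows. The paper's actual contribution, estimating variance components in an EIV model with errors in the design matrix as well, is beyond this minimal example; the design matrix, true parameters, and weight matrix below are illustrative.

```python
import numpy as np

# Standard weighted least squares: x_hat = (A^T W A)^{-1} A^T W y,
# with W the inverse of the observation covariance matrix.
A = np.column_stack([np.ones(5), np.arange(5.0)])   # linear-regression design
x_true = np.array([2.0, 0.5])
y = A @ x_true                                       # noise-free for the demo
W = np.diag([1.0, 1.0, 4.0, 1.0, 1.0])               # example weight matrix
x_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
print(x_hat)  # recovers [2.0, 0.5] on noise-free data
```

LS-VCE asks the follow-up question this sketch takes for granted: where does W come from? It estimates the variance components that build the covariance matrix from the residuals themselves.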

Weijermars R.,Technical University of Delft
International Journal of Rock Mechanics and Mining Sciences | Year: 2013

This study reviews the analytical descriptions for stress characterization around balanced and unbalanced drill holes. The stress-trajectory patterns around cylindrical wellbores are visualized for a range of typical physical conditions. The effects of variations in far-field stress, boundary conditions, wellbore fluid pressure, and formation pressure are systematically outlined. Axially symmetric and asymmetric far-field stresses and their interaction with various wellbore pressures are quantified in diagrams scaled for universal use. The stress-perturbation zone, the region around the wellbore that is affected by a stress perturbation due to the presence of the wellbore, is delineated. Rules are formulated for practical application in wellbore-balancing studies and wellbore-stability analysis. These rules are useful for application in drilling activities aimed at the safe and effective extraction of energy resources (geothermal heat, oil, wet gas, dry gas). © 2013 Elsevier Ltd.
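The best-known analytical description of this kind is the Kirsch solution for the stresses around a circular hole under uniaxial far-field stress. The sketch below evaluates it for the simplified dry case with no wellbore or formation pressure, a special case of the configurations the review covers; the numerical values are illustrative.

```python
from math import cos, sin, pi

# Kirsch solution: stresses around a circular hole of radius a in a plate
# under uniaxial far-field stress S applied along theta = 0 (no wellbore
# pressure, no pore pressure).
def kirsch(S, a, r, theta):
    rho2 = (a / r) ** 2
    rho4 = rho2 ** 2
    c2t, s2t = cos(2 * theta), sin(2 * theta)
    srr = S / 2 * (1 - rho2) + S / 2 * (1 - 4 * rho2 + 3 * rho4) * c2t
    stt = S / 2 * (1 + rho2) - S / 2 * (1 + 3 * rho4) * c2t
    srt = -S / 2 * (1 + 2 * rho2 - 3 * rho4) * s2t
    return srr, stt, srt

# Classic stress-concentration result at the hole wall (r = a):
_, hoop_max, _ = kirsch(1.0, 0.1, 0.1, pi / 2)   # 3S at theta = 90 deg
_, hoop_min, _ = kirsch(1.0, 0.1, 0.1, 0.0)      # -S at theta = 0
print(hoop_max, hoop_min)  # 3.0 -1.0
```

The 3S compression and -S tension at the wall are what make the hoop stress the controlling quantity in wellbore-stability analysis; wellbore fluid pressure and far-field stress anisotropy shift these extremes, which is the parameter space the review maps out.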

Eelkema R.,Technical University of Delft
Liquid Crystals | Year: 2011

This review deals with recent developments in the design of switchable dopants capable of altering the pitch of cholesteric liquid crystals under the influence of light. Cholesteric liquid crystals possess many potentially useful properties owing to the helical organisation of their mesogens. For many of these applications, dynamic control over the cholesteric pitch can be achieved using chiral dopants capable of changing their shape in response to an external stimulus. The first attempts at developing such responsive systems stem from the 1970s, but major advances have been reported in recent years (2003-present), which is the subject of this review. Efficient dopants showing large changes in helical twisting power upon photo-switching have been developed, often capable of inverting the sign of the helical organisation of the surrounding liquid crystalline host. Overcrowded alkene-based molecular motors and binaphthylazobenzene-based switches have emerged as two classes of highly efficient dopants, enabling manipulation of cholesteric pitch over wide ranges using low concentrations of dopant. © 2011 Taylor & Francis.
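The dilute-limit relation underlying such dopants can be sketched directly: the inverse cholesteric pitch is 1/p = beta_M * c * ee, with beta_M the helical twisting power, c the dopant concentration, and ee the enantiomeric excess. The numerical values below are purely illustrative.

```python
# A photo-switch that changes beta_M (even its sign) retunes, or inverts,
# the cholesteric helix; the sign of p encodes the handedness.
def pitch(beta_M, c, ee):
    q = beta_M * c * ee  # inverse pitch
    return float('inf') if q == 0 else 1.0 / q

p_before = pitch(beta_M=90.0, c=0.01, ee=1.0)   # one handedness
p_after = pitch(beta_M=-30.0, c=0.01, ee=1.0)   # photo-switched: sign inverted
print(p_before > 0 > p_after)  # True: the helix handedness inverts
```

This is why a large change in beta_M upon photo-switching allows wide pitch tuning at low dopant concentration, as the review emphasizes for molecular motors and binaphthylazobenzene switches.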

Lesaja G.,Georgia Southern University | Roos C.,Technical University of Delft
SIAM Journal on Optimization | Year: 2010

We present an interior-point method for the P*(κ) linear complementarity problem (LCP) that is based on barrier functions which are defined by a large class of univariate functions called eligible kernel functions. This class is fairly general and includes the classical logarithmic function and the self-regular functions, as well as many non-self-regular functions as special cases. We provide a unified analysis of the method and give a general scheme on how to calculate the iteration bounds for the entire class. We also calculate the iteration bounds of both long-step and short-step versions of the method for several specific eligible kernel functions. For some of them we match the best known iteration bounds for the long-step method, while for the short-step method the iteration bounds are of the same order of magnitude. As far as we know, this is the first paper that provides a unified approach and comprehensive treatment of interior-point methods for P*(κ)-LCPs based on the entire class of eligible kernel functions. © 2010 Society for Industrial and Applied Mathematics.
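As a concrete member of the eligible class, the classical logarithmic kernel function can be written down directly; the eligibility conditions themselves involve derivative inequalities that this sketch does not check.

```python
from math import log

# The classical logarithmic kernel function psi(t) = (t^2 - 1)/2 - ln t:
# psi(1) = 0, psi'(1) = 0, and psi(t) grows without bound as t -> 0+ or
# t -> infinity, so summing it over the scaled complementarity coordinates
# yields a barrier that keeps iterates near the central path.
def psi_log(t):
    return (t * t - 1.0) / 2.0 - log(t)

print(psi_log(1.0))                          # 0.0 at the minimizer t = 1
print(psi_log(0.1) > 0, psi_log(10.0) > 0)   # True True: barrier on both sides
```

Other eligible kernels replace the -ln t barrier term with alternatives that grow faster near t = 0, which is what changes the resulting iteration bounds.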

Berkhout A.J.G.,Technical University of Delft
Geophysical Prospecting | Year: 2014

The next generation of seismic imaging algorithms will consider multiple scattering as indispensable information, referred to as Full Wavefield Migration. In addition, these algorithms will also include autonomous velocity updating in the migration process, referred to as Joint Migration Inversion. Full wavefield migration and joint migration inversion address the industrial need to improve images of very complex reservoirs as well as the industry's ambition to produce these images in a more automatic manner ('automation in seismic processing'). In this vision paper on seismic imaging, full wavefield migration and joint migration inversion are formulated in terms of a closed-loop estimation algorithm that can be physically explained by an iterative double focusing process (full wavefield Common-Focus-Point technology). A critical module in this formulation is forward modelling, allowing feedback from migrated output to unmigrated input ('closing the loop'). For this purpose a full wavefield modelling module has been developed that utilizes an operator description of complex geology. The full wavefield modelling module is pre-eminently suited to function in the feedback path of a closed-loop migration algorithm. 'The Future of Seismic Imaging' is presented as a coherent trilogy, proposing the migration framework of the future in three consecutive parts. In Part I it was shown that the proposed full wavefield modelling algorithm differs fundamentally from finite-difference modelling, as velocities and densities need not be provided. Instead, the full wavefield modelling module uses an operator description of the subsurface. In Part II it was shown how the theory of Primary Wavefield Migration can be extended to Full Wavefield Migration by correcting for elastic transmission effects and by utilizing multiple scattering. 
In Part III it is shown how the full wavefield migration technology can be extended to Joint Migration Inversion, allowing full wavefield migration of blended data without knowledge of the velocity. Velocities are part of the joint migration inversion output, being obtained by an operator-driven parametric inversion process. The potential of the proposed joint migration inversion algorithm is illustrated with numerical examples. © 2014 European Association of Geoscientists & Engineers.

Jansen M.L.A.,Royal DSM | van Gulik W.M.,Technical University of Delft
Current Opinion in Biotechnology | Year: 2014

Fermentative production of succinic acid (SA) from renewable carbohydrate feed-stocks can have the economic and sustainability potential to replace petroleum-based production in the future, not only for existing markets but also for new, larger-volume markets. To accomplish this, extensive efforts have been undertaken in the field of strain construction and metabolic engineering to optimize SA production in the last decade. However, relatively little effort has been put into fermentation process development. The choice of a specific host organism determines to a large extent the process configuration, which in turn influences the environmental impact of the overall process. In the last five years, considerable progress has been achieved towards commercialization of fermentative production of SA. Several companies have demonstrated their confidence in the economic feasibility of fermentative SA production by transferring their processes from pilot to production scale. © 2014 Elsevier Ltd.

Heijnen J.J.,Technical University of Delft
Advances in Biochemical Engineering/Biotechnology | Year: 2010

It is shown that properties of biological systems that are relevant for mathematical modelling motivated by systems biology are strongly shaped by general thermodynamic principles such as the osmotic limit, Gibbs energy dissipation, near-equilibria, and thermodynamic driving force. Each of these aspects is demonstrated both theoretically and experimentally. © Springer-Verlag Berlin Heidelberg 2010.

Berkhout A.J.G.,Technical University of Delft
Geophysical Prospecting | Year: 2014

The next generation of seismic imaging algorithms will use full wavefield migration, which regards multiple scattering as indispensable information. These algorithms will also include autonomous velocity-updating in the migration process, called joint migration inversion. Full wavefield migration and joint migration inversion address industrial requirements to improve the images of highly complex reservoirs as well as the industrial ambition to produce these images more automatically (automation in seismic processing). In these vision papers on seismic imaging, full wavefield migration and joint migration inversion are formulated in terms of a closed-loop, estimation algorithm that can be physically explained by an iterative double-focusing process (full wavefield Common Focus Point technology). A critical module in this formulation is forward modelling, allowing feedback from the migrated output to the unmigrated input ('closing the loop'). For this purpose, a full wavefield modelling module has been developed, which uses an operator description of complex geology. Full wavefield modelling is pre-eminently suited to function in the feedback path of a closed-loop migration algorithm. 'The Future of Seismic Imaging' is presented as a coherent trilogy of papers that propose the migration framework of the future. In Part I, the theory of full wavefield modelling is explained, showing the fundamental distinction with the finite-difference approach. Full wavefield modelling allows the computation of complex shot records without the specification of velocity and density models. Instead, an operator description of the subsurface is used. The capability of full wavefield modelling is illustrated with examples. Finally, the theory of full wavefield modelling is extended to full wavefield reverse modelling (FWMod-1), which allows accurate estimation of (blended) source properties from (blended) shot records. © 2014 European Association of Geoscientists & Engineers.

Childress L.,McGill University | Hanson R.,Technical University of Delft
MRS Bulletin | Year: 2013

The exotic features of quantum mechanics have the potential to revolutionize information technologies. Using superposition and entanglement, a quantum processor could efficiently tackle problems inaccessible to current-day computers. Nonlocal correlations may be exploited for intrinsically secure communication across the globe. Finding and controlling a physical system suitable for fulfilling these promises is one of the greatest challenges of our time. The nitrogen-vacancy (NV) center in diamond has recently emerged as one of the leading candidates for such quantum information technologies thanks to its combination of atom-like properties and solid-state host environment. We review the remarkable progress made in the past years in controlling electrons, atomic nuclei, and light at the single-quantum level in diamond. We also discuss prospects and challenges for the use of NV centers in future quantum technologies. Copyright © Materials Research Society 2013.

Frantzeskaki N.,Dutch Research Institute for Transitions DRIFT | Tilie N.,Technical University of Delft
Ambio | Year: 2014

We explore whether Rotterdam city has the governance capacity in terms of processes in place, and the attention in terms of vision and strategy, to take up an integrated approach toward urban resilience. We adopt an interpretative policy analysis approach to assess the dynamics of urban ecosystem governance considering interviews, gray literature, and facilitated dialogues with policy practitioners. We show the inner workings of local government across strategic, operational, tactical, and reflective governance processes about the way urban ecosystems are regulated. Despite the existing capacity to steer such processes, a number of underlying challenges exist: need for coordination between planning departments; need to ease the integration of new policy objectives into established adaptive policy cycles; and need to assess the lessons learnt from pilots and emerging green initiatives. Regulating and provisioning ecosystem services receive heightened policy attention. Focus on regulating services is maintained by a policy renewal cycle that limits and delays consideration of other ecosystem services in policy and planning. © The Author(s) 2014.

Infante Ferreira C.,Technical University of Delft | Kim D.-S.,Chungbuk National University
International Journal of Refrigeration | Year: 2014

Solar energy could potentially cover 10% of the energy demand in OECD countries if all cooling and heating systems were driven by solar energy. This paper considers cooling systems for residential and utility buildings in both South and North Europe and investigates the most promising alternatives when solar energy is to be used to supply the cooling demand of these buildings while the heat rejection temperatures are high. Both the solar electric and solar thermal routes are considered. The discussion considers both concentrating and non-concentrating thermal technologies. It is concluded that presently vapor compression cycles in combination with PV collectors lead to the economically most attractive solutions. The second-best option is vapor compression cycles driven by electricity delivered by parabolic dish collectors and Stirling engines. The best thermally driven solution is the double-effect absorption cycle equipped with concentrating trough collectors, closely followed by desiccant systems equipped with flat-plate solar collectors. Adsorption systems are significantly more expensive. © 2013 Elsevier Ltd and IIR. All rights reserved.

Schmidt R.-H.M.,Technical University of Delft
Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences | Year: 2012

The developments in lithographic tools for the production of an integrated circuit (IC) are ruled by Moore's Law: the density of components on an IC doubles about every two years. The corresponding size reduction of the smallest detail in an IC entails several technological breakthroughs. The wafer scanner, the exposure system that defines those details, is the determining factor in these developments. This review deals with those aspects of the positioning systems inside these wafer scanners that enable the extension of Moore's Law into the future. The design of these systems is increasingly difficult because of the accuracy levels in the sub-nanometre range coupled with motion velocities of several metres per second. In addition to the use of feedback control for the reduction of errors, high-precision model-based feed-forward control is required with an almost ideally reproducible motion-system behaviour and a strict limitation of random disturbing events. The full mastering of this behaviour even includes material drift on an atomic scale and is decisive for the future success of these machines. © 2012 The Royal Society.
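The role of model-based feed-forward next to feedback can be shown with a toy point-mass stage: with an exact mass feedforward u_ff = m·a_ref, the PD feedback has nothing left to correct during a constant-acceleration move. All parameter values below are illustrative, not from any real wafer stage.

```python
def track(mass=20.0, dt=1e-4, steps=2000, kp=4e5, kd=4e3, use_ff=True):
    """Toy point-mass stage tracking a constant-acceleration setpoint with
    PD feedback plus optional mass feedforward (u_ff = m * a_ref).
    Returns the maximum tracking error over the move (metres)."""
    a_ref = 10.0                  # m/s^2 setpoint acceleration (illustrative)
    x = v = 0.0                   # stage position and velocity
    xr = vr = 0.0                 # reference position and velocity
    max_err = 0.0
    for _ in range(steps):
        u = kp * (xr - x) + kd * (vr - v)   # PD feedback force
        if use_ff:
            u += mass * a_ref               # model-based feedforward force
        a = u / mass
        x += v * dt + 0.5 * a * dt * dt     # integrate stage
        v += a * dt
        xr += vr * dt + 0.5 * a_ref * dt * dt   # integrate reference
        vr += a_ref * dt
        max_err = max(max_err, abs(xr - x))
    return max_err
```

With feedforward the error stays at numerical zero; without it, the feedback alone settles to an error of roughly m·a_ref/kp, which for these toy numbers is half a millimetre rather than sub-nanometre.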

Jeltsema D.,Technical University of Delft | Doria-Cerezo A.,Polytechnic University of Catalonia
Proceedings of the IEEE | Year: 2012

In this paper, we consider memristors, meminductors, and memcapacitors and their properties as port-Hamiltonian systems. The port-Hamiltonian formalism naturally arises from network modeling of physical systems in a variety of domains. Exposing the relation between the energy storage, dissipation, and interconnection structure, this framework underscores the physics of the system. One of the strong aspects of the port-Hamiltonian formalism is that a power-preserving interconnection between port-Hamiltonian systems results in another port-Hamiltonian system with composite energy, dissipation, and interconnection structure. This feature can advantageously be used to model, analyze, and simulate networks consisting of complex interconnections of both conventional and memory circuit elements. Furthermore, the port-Hamiltonian formalism naturally extends the fundamental properties of the memory elements beyond the realm of electrical circuits. © 1963-2012 IEEE.
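The input-state-output port-Hamiltonian form that this framework builds on can be stated compactly (a textbook formulation, not an equation taken from this paper):

```latex
\dot{x} = \bigl(J(x) - R(x)\bigr)\,\frac{\partial H}{\partial x}(x) + g(x)\,u,
\qquad
y = g(x)^{\top}\,\frac{\partial H}{\partial x}(x),
```

where J(x) = -J(x)^T captures the interconnection structure, R(x) ⪰ 0 the dissipation, and H(x) the stored energy. The power balance dH/dt = -(∂H/∂x)^T R (∂H/∂x) + y^T u ≤ y^T u is what makes power-preserving interconnections of such systems yield another port-Hamiltonian system.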

Rotering N.,RWTH Aachen | Ilic M.,Carnegie Mellon University | Ilic M.,Technical University of Delft
IEEE Transactions on Power Systems | Year: 2011

Plug-in hybrid electric vehicles are a midterm solution to reduce the transportation sector's dependency on oil. However, if implemented on a large scale without control, peak load increases significantly and the grid may be overloaded. Two algorithms to address this problem are proposed and analyzed. Both are based on a forecast of future electricity prices and use dynamic programming to find the economically optimal solution for the vehicle owner. The first optimizes the charging time and energy flows. It reduces daily electricity cost substantially without increasing battery degradation. The second also takes into account vehicle-to-grid support as a means of generating additional profits by participating in ancillary service markets. Constraints caused by vehicle utilization as well as technical limitations are taken into account. An analysis, based on data from the California Independent System Operator, indicates that smart charge timing reduces daily electricity costs for driving from $0.43 to $0.20. Provision of regulating power substantially improves plug-in hybrid electric vehicle economics, and the daily profits amount to $1.71, including the cost of driving. © 2011 IEEE.
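The price-forecast-plus-dynamic-programming idea behind the first algorithm can be sketched as follows. This is a toy integer-kWh discretization with illustrative names; the paper's version additionally handles battery degradation, vehicle utilization, and technical limits.

```python
def charge_schedule(prices, max_rate, required):
    """Toy DP over (hour, kWh charged so far): choose integer charge amounts
    per hour to reach `required` kWh at minimum cost, given an hourly price
    forecast ($/kWh). Returns (minimal cost, per-hour schedule)."""
    best = {0: (0.0, [])}  # energy charged so far -> (cost, schedule)
    for price in prices:
        nxt = {}
        for energy, (cost, sched) in best.items():
            # try every feasible charge amount this hour
            for q in range(min(max_rate, required - energy) + 1):
                e2, c2 = energy + q, cost + price * q
                if e2 not in nxt or c2 < nxt[e2][0]:
                    nxt[e2] = (c2, sched + [q])
        best = nxt
    return best[required]
```

For example, with forecast prices [0.30, 0.10, 0.20] $/kWh, a 5 kW charger and 8 kWh needed, the DP defers all charging to the two cheapest hours.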

Casas-Prat M.,University of Barcelona | Holthuijsen L.H.,Technical University of Delft
Journal of Geophysical Research: Oceans | Year: 2010

The short-term statistics of 10 million individual waves observed with buoys in deep water have been investigated, corrected for a sample-rate bias, and normalized with the standard deviation of the surface elevation (the range of normalized wave heights is 0 < H̃ < 10). The observed normalized trough depths are found to be Rayleigh distributed with near-perfect scaling. The normalized crest heights are also Rayleigh distributed but 3% higher than given by the conventional Rayleigh distribution. The observed normalized wave heights are not well predicted by the conventional Rayleigh distribution (overprediction by 9.5% on average), but they are very well predicted by Rayleigh-like distributions obtained from linear theories and by an empirical Weibull distribution (errors < 1.5%). These linear theories also properly predict the observed monotonic variation of the normalized wave heights with the (de-)correlation between crest height and trough depth. The theoretical Rayleigh-like distributions may therefore be preferred over the empirical Weibull distribution and certainly over the conventional Rayleigh distribution. The values of the observed expected maximum wave height (normalized) as a function of duration are consistent with these findings. To inspect nonlinear effects, the buoy observations were supplemented with 10,000 waves observed with laser altimeters mounted on a fixed platform (0 < H̃ < 7). The (normalized) crest heights thus observed are typically 5% higher than those observed with the buoys, whereas the (normalized) trough depths are typically 12% shallower. The distribution of the normalized wave heights thus observed is practically identical to the distribution observed with the buoys. These findings suggest that crest heights and trough depths are affected by nonlinear effects, but wave heights are not. One wave in our buoy observations may qualify as a freak wave. Copyright 2010 by the American Geophysical Union.
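For reference, the "conventional Rayleigh distribution" used as the benchmark above has exceedance probability P(H̃ > h) = exp(-h²/8) for wave heights normalized with the standard deviation of the surface elevation. A quick Monte Carlo sketch of that benchmark (the normalization convention is assumed here, and no paper data is used):

```python
import math
import random

def rayleigh_exceedance(h):
    # Conventional Rayleigh exceedance for the normalized wave height H~.
    return math.exp(-h * h / 8.0)

def sample_height(rng):
    # Inverse-transform sampling: solve exp(-h^2/8) = u for h.
    return math.sqrt(-8.0 * math.log(1.0 - rng.random()))

rng = random.Random(1)
samples = [sample_height(rng) for _ in range(200_000)]
for h in (2.0, 4.0, 6.0):
    empirical = sum(s > h for s in samples) / len(samples)
    # empirical and theoretical exceedance agree to within sampling noise
```

The observed 9.5% average overprediction reported above is measured against exactly this kind of theoretical exceedance curve.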

Sorokin D.Y.,Technical University of Delft | Banciu H.L.,Babes - Bolyai University | Muyzer G.,University of Amsterdam
Current Opinion in Microbiology | Year: 2015

Soda lakes represent unique permanently haloalkaline systems. Despite the harsh conditions, they are inhabited by abundant, mostly prokaryotic, microbial communities. This review summarizes results of studies of the main functional groups of soda lake prokaryotes responsible for carbon, nitrogen and sulfur cycling, including oxygenic and anoxygenic phototrophs, aerobic chemolithotrophs, and fermenting and respiring anaerobes. The main conclusion from this work is that soda lakes are very different from other high-salt systems with respect to microbial richness and activity. The reason for this difference lies in the major physico-chemical features of the two dominant salts - NaCl in neutral saline systems and sodium carbonates in soda lakes - which influence the amount of energy required for osmotic adaptation. © 2015 Elsevier Ltd.

Dorenbos P.,Technical University of Delft
Journal of Materials Chemistry | Year: 2012

There are fourteen lanthanides that may adopt the 2+ and 3+ charge states, and they can be incorporated into a countless number of compounds. A change of a few tenths of an eV in the location of the lanthanide impurity states with respect to the host compound band states can have dramatic performance consequences. In this unimaginably large materials research field, knowledge of the electronic structure and how it changes with the type of lanthanide and the type of compound is highly desired. Past years have witnessed large progress in our understanding, and today models to construct electronic structure diagrams have reached sufficient accuracy to provide a tool for engineering the properties of lanthanide-activated compounds. Here the models to construct those diagrams, and how they can be utilized to explain or engineer properties, are reviewed. © 2012 The Royal Society of Chemistry.

Sheldon R.A.,Technical University of Delft | Sanders J.P.M.,Wageningen University
Catalysis Today | Year: 2014

The development of a set of sustainability metrics for quickly evaluating the production of commodity chemicals from renewable biomass is described. The method is based on four criteria: material and energy efficiency, land use and process economics. The method will be used for comparing the sustainability of the production of seven commodity chemicals (lactic acid, 1-butanol, propylene glycol, succinic acid, acrylonitrile, isoprene and methionine) from fossil feedstocks (crude oil or natural gas) versus renewable biomass. © 2014 Elsevier B.V. All rights reserved.

Zhang W.,Ohio State University | Hu J.,Purdue University | Abate A.,Technical University of Delft
IEEE Transactions on Automatic Control | Year: 2012

This paper studies the quadratic regulation problem for discrete-time switched linear systems (DSLQR problem) on an infinite time horizon. A general relaxation framework is developed to simplify the computation of the value iterations. Based on this framework, an efficient algorithm is developed to solve the infinite-horizon DSLQR problem with guaranteed closed-loop stability and suboptimal performance. Due to these guarantees, the proposed algorithm can be used as a general controller synthesis tool for switched linear systems. © 2011 IEEE.
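A scalar sketch of the value iteration being relaxed: with quadratic value functions V(x) = P·x², one iteration applies the mode-wise Riccati mapping and keeps the pointwise minimum. In the scalar case the minimum of quadratics is again a single quadratic, so one number suffices; in higher dimensions the minimum is over a growing set of matrices, which is exactly what the paper's relaxation framework keeps tractable. Names below are illustrative.

```python
def riccati_map(P, A, B, Q, R):
    """One-step Riccati mapping rho_i(P) for a scalar mode x+ = A x + B u
    with stage cost Q x^2 + R u^2."""
    return Q + A * P * A - (A * P * B) ** 2 / (R + B * P * B)

def dslqr_value_iteration(modes, iters=100):
    """Value iteration for the scalar switched LQR problem.
    `modes` is a list of (A, B, Q, R) tuples, one per switching mode."""
    P = 0.0
    for _ in range(iters):
        # apply every mode's Riccati map, keep the pointwise minimum
        P = min(riccati_map(P, *m) for m in modes)
    return P
```

As a sanity check, for a single mode (A, B, Q, R) = (1, 1, 1, 1) the iteration converges to the positive root of P² - P - 1 = 0, i.e. the golden ratio.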

Comin A.,Ludwig Maximilians University of Munich | Comin A.,Italian Institute of Technology | Manna L.,Italian Institute of Technology | Manna L.,Technical University of Delft
Chemical Society Reviews | Year: 2014

We present a review on the emerging materials for novel plasmonic colloidal nanocrystals. We start by explaining the basic processes involved in surface plasmon resonances in nanoparticles and then discuss the classes of nanocrystals that to date are particularly promising for tunable plasmonics: non-stoichiometric copper chalcogenides, extrinsically doped metal oxides, oxygen-deficient metal oxides and conductive metal oxides. We additionally introduce other emerging types of plasmonic nanocrystals and finally we give an outlook on nanocrystals of materials that could potentially display interesting plasmonic properties. © 2014 the Partner Organisations.

Friege J.,Wuppertal Institute for Climate | Chappin E.,Wuppertal Institute for Climate | Chappin E.,Technical University of Delft
Renewable and Sustainable Energy Reviews | Year: 2014

The buildings sector accounts for more than 30% of global greenhouse gas emissions. Despite the well-known economic viability of many energy-efficient renovation measures, which offer great potential for reducing greenhouse gas emissions and meeting climate protection targets, there is a relatively low level of implementation. We performed a citation network analysis in order to identify papers at the research front and intellectual base on energy-efficient renovation in four areas: technical options, understanding decisions, incentive instruments, and models and simulation. The literature was reviewed in order to understand what is needed to sufficiently increase the number of domestic energy-efficient renovations and to identify potential research gaps. Our findings show that the literature on energy-efficient renovation gained considerable momentum in the last decade, but lacks a deep understanding of the uncertainties surrounding economic aspects and non-economic factors driving renovation decisions of homeowners. The analysis indicates that the (socio-economic) energy saving potential and profitability of energy-efficient renovation measures are lower than generally expected. It is suggested that this can be accounted for by the failure to understand and consider the underpinning influences of energy-consuming behaviour in calculations. Homeowners' decisions to renovate are shaped by an alliance of economic and non-economic goals. Therefore, existing incentives, typically targeting the economic viability of measures, have brought little success. A deeper understanding of the decisions of homeowners is needed, and we suggest that a simulation model which maps the decision-making processes of homeowners may help refine existing instruments or develop new innovative mechanisms to tackle the situation. © 2014 Elsevier Ltd.

Straathof A.J.,Technical University of Delft
Sub-cellular biochemistry | Year: 2012

Fermentative fumaric acid production from renewable resources may become competitive with petrochemical production. This will require very efficient processes. So far, using Rhizopus strains, the best fermentations reported have achieved a fumaric acid titer of 126 g/L with a productivity of 1.38 g L(-1) h(-1) and a yield on glucose of 0.97 g/g. This requires pH control, aeration, and carbonate/CO(2) supply. Limitations of the used strains are their pH tolerance, morphology, accessibility for genetic engineering, and partly, versatility to alternative carbon sources. Understanding of the mechanism and energetics of fumaric acid export by Rhizopus strains will be a success factor for metabolic engineering of other hosts for fumaric acid production. So far, metabolic engineering has been described for Escherichia coli and Saccharomyces cerevisiae.

Williams P.,Technical University of Delft
Journal of Spacecraft and Rockets | Year: 2010

Varying the current in electrodynamic tethers provides a means for manipulating the orbits of spacecraft. These variations can induce unstable librational motion of the tether. Periodic solutions of electrodynamic tethers under forced currents are studied, which provide a reference trajectory for feedback control of the tether librations during orbit transfer. The tether is treated as a dumbbell, and periodic solutions are obtained by means of the Legendre pseudospectral method. The technique provides the stability characteristics of the solutions by application of Floquet theory. Five different current profiles suitable for orbital maneuvering are used to obtain periodic solutions. An energy-rate feedback controller is applied to stabilize the dumbbell librations around the time-varying reference trajectory. The results show that sine and cosine currents with frequencies equal to that of the orbit are neutrally stable for some orbit inclinations, whereas a cosine current with frequency twice that of the orbit is unstable, with the degree of instability growing with the current amplitude.

Teunissen P.J.G.,Curtin University Australia | Teunissen P.J.G.,Technical University of Delft
Journal of Geodesy | Year: 2010

Global navigation satellite system (GNSS) carrier phase integer ambiguity resolution is the key to high-precision positioning and attitude determination. In this contribution, we develop new integer least-squares (ILS) theory for the GNSS compass model, together with efficient integer search strategies. It extends current unconstrained ILS theory to the nonlinearly constrained case, an extension that is particularly suited for precise attitude determination. As opposed to current practice, our method does proper justice to the a priori given information. The nonlinear baseline constraint is fully integrated into the ambiguity objective function, thereby receiving a proper weighting in its minimization and providing guidance for the integer search. Different search strategies are developed to compute exact and approximate solutions of the nonlinear constrained ILS problem. Their applicability depends on the strength of the GNSS model and on the length of the baseline. Two of the presented search strategies, a global and a local one, are based on the use of an ellipsoidal search space. This has the advantage that standard methods can be applied. The global ellipsoidal search strategy is applicable to GNSS models of sufficient strength, while the local ellipsoidal search strategy is applicable to models for which the baseline lengths are not too small. We also develop search strategies for the most challenging case, namely when the curvature of the non-ellipsoidal ambiguity search space needs to be taken into account. Two such strategies are presented, an approximate one and a rigorous, somewhat more complex, one. The approximate one is applicable when the fixed baseline variance matrix is close to diagonal. Both methods make use of a search and shrink strategy. The rigorous solution is efficiently obtained by means of a search and shrink strategy that uses non-quadratic, but easy-to-evaluate, bounding functions of the ambiguity objective function. 
The theory presented is generally valid and is not restricted to any particular GNSS or combination of GNSSs. Its general applicability also extends to different measurement scenarios (e.g. single-epoch vs. multi-epoch, or single-frequency vs. multi-frequency). In particular, it is applicable to the most challenging case of unaided, single-frequency, single-epoch GNSS attitude determination. The success rate performance of the different methods is also illustrated. © 2010 The Author(s).
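The unconstrained ILS problem that this work extends can be illustrated with a brute-force search. This is a didactic sketch only; practical solvers such as LAMBDA decorrelate the ambiguities first and shrink an ellipsoidal search space, and the paper adds the nonlinear baseline constraint on top.

```python
import itertools

def ils_bruteforce(a_hat, Q_inv, radius=2):
    """Minimize (a_hat - z)' Q_inv (a_hat - z) over integer vectors z by
    enumerating a box around the rounded float solution.
    a_hat: float ambiguity estimates; Q_inv: inverse covariance (list of lists)."""
    n = len(a_hat)
    base = [round(x) for x in a_hat]
    best, best_cost = None, float("inf")
    for dz in itertools.product(range(-radius, radius + 1), repeat=n):
        z = [b + d for b, d in zip(base, dz)]
        r = [x - zi for x, zi in zip(a_hat, z)]
        cost = sum(r[i] * Q_inv[i][j] * r[j]
                   for i in range(n) for j in range(n))
        if cost < best_cost:
            best, best_cost = z, cost
    return best, best_cost
```

With strongly correlated ambiguities the minimizer can differ from simple component-wise rounding, which is why a genuine search over the quadratic objective matters.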

Da Silva C.B.,University of Lisbon | Hunt J.C.R.,University College London | Eames I.,University College London | Westerweel J.,Technical University of Delft
Annual Review of Fluid Mechanics | Year: 2014

Recent developments in the physics and modeling of interfacial layers between regions with different turbulent intensities are reviewed. The flow dynamics across these layers governs exchanges of mass, momentum, energy, and scalars (e.g., temperature), which determine the growth, spreading, mixing, and reaction rates in many flows of engineering and natural interest. Results from several analytical and linearized models are reviewed. Particular attention is given to the case of turbulent/nonturbulent interfaces that exist at the edges of jets, wakes, mixing layers, and boundary layers. The geometry, dynamics, and scaling of these interfaces are reviewed, and future lines of research are suggested. The dynamics of passive and active scalars is also discussed, including the effects of stratification, turbulence level, and internal forcing. Finally, the modeling challenges for one-point closures and subgrid-scale models are briefly mentioned. Copyright © 2014 by Annual Reviews. All rights reserved.

Amiri-Simkooei A.R.,University of Isfahan | Amiri-Simkooei A.R.,Technical University of Delft
Journal of Geophysical Research: Solid Earth | Year: 2013

Plate tectonics studies using GPS require proper analysis of time series, in which all functional effects are understood and all stochastic effects are captured using an appropriate noise assessment technique. Both issues are addressed in this contribution. Estimates of spatial correlation, time-correlated noise, and the multivariate power spectrum are obtained for daily position time series of 350, 150, and 50 permanent GPS stations, respectively, collected between 2000-2007, 1998-2007, and 1996-2007. The daily GPS global solutions were processed by the GPS Analysis Center at JPL. The detection power for common-mode signals is improved by including the time- and space-correlated noise in the least squares power spectrum. Previously reported signals, such as those with periods of 13.63, 14.2, 14.6, and 14.8 days, are identified in the multivariate analysis. A significant signal with a period of 351.6 ± 0.2 days and its higher harmonics are detected in the series, which closely follows the GPS draconitic year. The variation range of this periodic pattern for the north, east, and up components is about ±3, ±3.2, and ±6.5 mm, respectively. Three independent criteria confirm that this periodic pattern is of a similar nature at adjacent stations, indicating its independence of station-related effects such as multipath. It is thus likely due to other causes with the GPS draconitic period that propagate into the GPS time series. The multivariate power spectrum shows a cluster of signals with periods ranging from 5 to 6 days (quasiperiodic signals). In their aliased form, these effects are likely partly responsible for the time-correlated noise and partly for the periodic patterns at lower frequencies. © 2013. American Geophysical Union. All Rights Reserved.
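A least squares power spectrum of the kind used here generalizes the classical periodogram by fitting harmonic pairs directly to the series. The following univariate, white-noise sketch reports the fractional drop in the residual sum of squares at each trial frequency; the paper's multivariate version additionally models time- and space-correlated noise.

```python
import math

def ls_power(t, y, freqs):
    """Least-squares spectral sketch: for each trial frequency f, fit
    y ~ a*cos(2*pi*f*t) + b*sin(2*pi*f*t) by ordinary least squares and
    report the relative reduction in the residual sum of squares."""
    ybar = sum(y) / len(y)
    ss0 = sum((v - ybar) ** 2 for v in y)
    power = []
    for f in freqs:
        c = [math.cos(2 * math.pi * f * ti) for ti in t]
        s = [math.sin(2 * math.pi * f * ti) for ti in t]
        # normal equations of the 2-parameter harmonic fit
        scc = sum(ci * ci for ci in c)
        sss = sum(si * si for si in s)
        scs = sum(ci * si for ci, si in zip(c, s))
        scy = sum(ci * (vi - ybar) for ci, vi in zip(c, y))
        ssy = sum(si * (vi - ybar) for si, vi in zip(s, y))
        det = scc * sss - scs * scs
        a = (sss * scy - scs * ssy) / det
        b = (scc * ssy - scs * scy) / det
        fit = [a * ci + b * si for ci, si in zip(c, s)]
        ssr = sum((vi - ybar - fi) ** 2 for vi, fi in zip(y, fit))
        power.append((ss0 - ssr) / ss0)
    return power
```

On a synthetic daily series containing a single sinusoid, the spectrum peaks sharply at the true frequency and stays near zero elsewhere.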

Robertson L.A.,Technical University of Delft
FEMS Microbiology Letters | Year: 2015

When Antonie van Leeuwenhoek died, he left over 500 simple microscopes, aalkijkers (an adaptation of his microscope to allow the examination of blood circulation in the tails of small eels) and lenses, yet now there are only 10 microscopes with a claim to being authentic, one possible aalkijker and six lenses. He made microscopes with more than one lens, and possibly three forms of the aalkijker. This paper attempts to establish exactly what he left and to trace the fate of some of the others using the earliest possible documents and publications. © FEMS 2015. All rights reserved.

Polat I.,Technical University of Delft
Proceedings of the American Control Conference | Year: 2011

We present the stability analysis of bilateral teleoperation systems in the face of time varying stiff environments via Integral Quadratic Constraints (IQCs). Numerical cases are given for both arbitrarily fast and slowly varying parametric uncertainties. © 2011 AACC American Automatic Control Council.

Robertson L.A.,Technical University of Delft
FEMS Microbiology Letters | Year: 2015

Facsimile microscopes have been used to examine the capabilities of van Leeuwenhoek microscopes with a range of magnifications, particularly to confirm that bacteria can be seen if the microscope is strong enough. The relevance of historical microbiology in education is also illustrated by adapting versions of van Leeuwenhoek's pepper water experiment and Beijerinck's use of bioluminescent bacteria as oxygen probes. These experiments can demonstrate fundamentals such as enrichment and isolation cultures, physiology and experimental planning, as well as critical reading of published material. © FEMS 2015. All rights reserved.

Geerlings H.,Erasmus University Rotterdam | Van Duin R.,Technical University of Delft
Journal of Cleaner Production | Year: 2011

At present, the notion is generally accepted that societies have to combat climate change. The reduction of CO2-emissions, an important cause of global warming, has become a priority, and consequently there is increasing pressure on governments and industries to come forward with initiatives to reduce CO2-emissions. This is highly relevant for the transport sector, as the share of transportation is still increasing, while other sectors are reducing their CO2-footprint. The main purpose of this paper is to present a methodology to analyse the CO2-emissions from container terminals, illustrated by the Port of Rotterdam. The objective of this paper is twofold. Firstly, the development of a methodology to analyse and gain a better understanding of the CO2-emissions by container terminals in port areas is described. Secondly, the most effective solutions to reduce CO2-emissions by container terminals in port areas are identified. The study provides insight into the processes of container transshipment at the terminals and the contribution of these processes to the CO2-emissions of the container terminals. Using these insights, potential solutions to reduce the CO2-emissions at the terminals are identified and policy proposals are made for the operators of existing terminals and for governments. The most effective measure for CO2 reduction is undoubtedly the adaptation of the terminal layout, as in the example of the Rotterdam Shortsea Terminal. This makes it possible to reduce the CO2-emissions of the current terminals by nearly 70 per cent. The other perspective is mixing 30 per cent biofuels with the presently used diesel. This results in a reduction of CO2-emissions of between 13 and 26 per cent per terminal and a reduction of the emissions of the total container sector by 21 per cent. On the basis of these findings, concrete recommendations are made for reducing CO2-emissions at container terminals. © 2010 Elsevier Ltd. All rights reserved.
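The activity-based accounting behind such a methodology can be sketched in a few lines. The split between diesel-driven and grid-driven equipment is what makes a 30 per cent biofuel blend cut terminal totals by less than 30 per cent. All factors and energy intensities below are illustrative assumptions, not the paper's data.

```python
def terminal_co2(moves, kwh_diesel_per_move, kwh_elec_per_move,
                 ef_diesel=0.27, ef_grid=0.45, biofuel_share=0.0):
    """Toy activity-based inventory: CO2 = moves x energy per move x emission
    factor (kg CO2/kWh), split by energy carrier. The biofuel blend is
    assumed carbon-neutral and only displaces the diesel part."""
    diesel = moves * kwh_diesel_per_move * ef_diesel * (1 - biofuel_share)
    grid = moves * kwh_elec_per_move * ef_grid
    return diesel + grid
```

With an assumed 60/40 diesel/grid energy split per move, a 30 per cent blend reduces the terminal total by roughly 14 per cent, inside the 13-26 per cent band the study reports for real terminals.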

Mlecnik E.,Technical University of Delft
Journal of Cleaner Production | Year: 2013

The construction of highly energy-efficient buildings is more than an emerging business opportunity, it is also a major challenge in systemic innovation, particularly for SMEs, which are more accustomed to incremental innovation. Accordingly, this study searches for new innovation opportunities for supplier-led innovation in highly energy-efficient housing by examining the innovation journey and analysing the innovation opportunities and barriers encountered by a successful innovator in Flanders. Existing innovation models and innovation barriers and opportunities in the construction of highly energy-efficient housing are discussed within the framework of the theory on systemic innovation. The successful innovation journey of a supplier illustrates how coordinated collaboration can help an incremental (technological) innovation idea to develop into modular, system and even radical innovation. The study highlighted the importance of suppliers as players in the development of innovation. To successfully introduce innovation (even incremental innovation) suppliers need to join forces with other organisations and respond to the challenges of systemic innovation. Demonstration projects and collaboration between SMEs are key to achieving the modular and architectural innovation needed for highly energy-efficient housing. Given the specificity of both the construction sector and highly energy-efficient housing, the supplier should be given explicit guidance on how to link modular innovation to architectural and system innovation. Finally, the study also showed that players in dedicated radical innovations such as passive house networks can contribute to the market uptake of innovation. © 2012 Elsevier Ltd. All rights reserved.

Alfano M.,Technical University of Delft
American Journal of Bioethics | Year: 2015

The concepts of placebos and placebo effects refer to extremely diverse phenomena. I recommend dissolving the concepts of placebos and placebo effects into loosely related groups of specific mechanisms, including (potentially among others) expectation-fulfillment, classical conditioning, and attentional-somatic feedback loops. If this approach is on the right track, it has three main implications for the ethics of informed consent. First, because of the expectation-fulfillment mechanism, the process of informing cannot be considered independently from the potential effects of treatment. Obtaining informed consent influences the effects of treatment. This provides support for the authorized concealment and authorized deception paradigms, and perhaps even for outright deceptive placebo use. Second, doctors may easily fail to consider the potential benefits of conditioning, leading them to misjudge the trade-off between beneficence and autonomy. Third, how attentional-somatic feedback loops play out depends not only on the content of the informing process but also on its framing. This suggests a role for libertarian paternalism in clinical practice. © 2015, Copyright © Taylor & Francis Group, LLC.

Langendoen K.,Technical University of Delft | Meier A.,ETH Zurich
ACM Transactions on Sensor Networks | Year: 2010

The fundamental wireless sensor network (WSN) requirement to be energy-efficient has produced a whole range of specialized medium access control (MAC) protocols. They differ in how performance (latency, throughput) is traded off for a reduction in energy consumption. The question "which protocol is best?" is difficult to answer because (i) this depends on specific details of the application requirements and hardware characteristics involved, and (ii) protocols have mainly been assessed individually, with each outperforming the canonical S-MAC protocol, but with different simulators, hardware platforms, and workloads. This article addresses that void for low data-rate applications where collisions are of little concern, making an analytical approach tractable in which latency and energy consumption are modeled as functions of key protocol parameters (duty cycle, slot length, number of slots, etc.). By exhaustive search we determine the Pareto-optimal protocol settings for a given workload (data rate, network topology). Of the protocols compared, we find that WiseMAC strikes the best latency versus energy-consumption tradeoff across the range of workloads considered. In particular, its random access scheme in combination with local synchronization not only minimizes protocol overhead, but also maximizes the available channel bandwidth. © 2010 ACM.
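The exhaustive-search-for-Pareto-optimal-settings approach can be illustrated with a toy duty-cycled MAC model: a longer wake-up interval T saves idle-listening energy but lengthens the sender's preamble and the delivery latency. All power numbers and the preamble-sampling cost model below are illustrative assumptions, not the article's actual protocol models.

```python
def pareto_settings(wake_intervals, p_listen=0.01, p_sleep=1e-5, p_tx=0.02,
                    msg_rate=1.0 / 300.0, listen_time=0.005):
    """Toy energy/latency model per wake-up interval T (s): receivers pay a
    fixed listen slot per wake-up (idle listening falls with T), senders pay
    an expected preamble of ~T per message (tx cost grows with T), and mean
    delivery latency is ~T/2. Returns the Pareto-optimal (T, watts, seconds)."""
    points = []
    for T in wake_intervals:
        duty = listen_time / T
        energy = (duty * p_listen + (1 - duty) * p_sleep
                  + msg_rate * T * p_tx)
        latency = T / 2.0
        points.append((T, energy, latency))
    # keep only points not dominated in both energy and latency
    return [p for p in points
            if not any(q[1] <= p[1] and q[2] <= p[2] and q != p
                       for q in points)]
```

Sweeping T over a grid, the very long intervals drop out: past the energy-optimal interval, both energy and latency get worse, so those settings are dominated.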

Weber J.H.,Technical University of Delft | Schouhamer Immink K.A.,Nanyang Technological University
IEEE Transactions on Information Theory | Year: 2010

In 1986, Don Knuth published a very simple algorithm for constructing sets of bipolar codewords with equal numbers of "1"s and "-1"s, called balanced codes. Knuth's algorithm is well suited for use with large codewords. The redundancy of Knuth's balanced codes is a factor of two larger than that of a code comprising the full set of balanced codewords. In this paper, we will present results of our attempts to improve the performance of Knuth's balanced codes. © 2006 IEEE.
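The core of Knuth's construction rests on a simple observation: for any even-length bipolar word there is an index k such that inverting the first k symbols yields a balanced word, because the sum changes by ±2 per step and flips sign overall, so it must pass through zero. A minimal sketch of that inversion step follows; it omits the balanced prefix that must also encode k, which is where the factor-of-two redundancy discussed above comes from.

```python
def knuth_balance(word):
    """Return (k, balanced) where inverting the first k symbols of the
    bipolar word (entries +1/-1, even length) makes its sum zero."""
    for k in range(len(word) + 1):
        candidate = [-b for b in word[:k]] + list(word[k:])
        if sum(candidate) == 0:
            return k, candidate
    raise ValueError("word length must be even")
```

For example, the word [1, 1, 1, 1, -1, 1] has sum 4 and is balanced by inverting its first two symbols.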

Visser E.,Technical University of Delft
IEEE Software | Year: 2010

Web application development is a complex task in which developers must address many concerns, such as the user interface, data model, access control, data validation, and search. Current technology typically requires multiple languages and programming paradigms to cover these aspects. Using domain-specific languages improves developer expressivity and lets developers separate concerns. However, the coupling between these technologies is often less than optimal. It results in little or no consistency checking between concerns, as well as wildly different language styles and paradigms: from XML-style transformation languages like Extensible Stylesheet Language Transformations, to aspect languages like Cascading Style Sheets, to object-oriented languages like Java and JavaScript. WebDSL is a domain-specific language for constructing Web information systems. The language comprises sublanguages that address individual Web application concerns, maintaining separation of concerns while integrating linguistically to provide consistency checking and reuse of common language concepts between concerns. In this paper we describe the problems in web application development and discuss the WebDSL solution. © 2006 IEEE.

Nihtianov S.,Technical University of Delft
IEEE Industrial Electronics Magazine | Year: 2014

In this article, an overview of capacitive and eddy current sensors (ECSs) for measuring very small displacements in the subnanometer range is presented in view of the latest advancements in the field. The need for accurate displacement/position measurement at such extremely small scales as nanometers and picometers has increased significantly during the last few years. Application examples can be found in high-tech industry, metrology, and space equipment manufacturing. A better understanding of the commonalities between these two types of sensors, as well as the main performance differences and limitations, will help to make the best choice for a specific application. The comparative survey in this article is based on both theoretical analysis and experimental results. The main performance criteria used are sensitivity, resolution, compactness, long-term stability, thermal drift, and power efficiency. © 2007-2011 IEEE.

Ansari M.H.,Technical University of Delft
Superconductor Science and Technology | Year: 2015

In superconducting qubits the lifetime of quantum states cannot be prolonged arbitrarily by decreasing temperature. At low temperature, quasiparticle tunneling between the electromagnetic environment and superconducting islands takes the condensate state out of equilibrium due to charge imbalance. We obtain the tunneling rate from a phenomenological model of nonequilibrium, in which nonequilibrium quasiparticle tunneling stimulates a temperature-dependent chemical potential shift in the superconductor. As a result we obtain a non-monotonic behavior of the relaxation rate as a function of temperature. Depending on the fabrication parameters, for some qubits the lowest tunneling rate of nonequilibrium quasiparticles can take place only near the onset temperature below which nonequilibrium quasiparticles dominate over equilibrium ones. Our theory also indicates that such tunneling can influence the probability of transitions in qubits through a coupling to the zero-point energy of phase fluctuations. © 2015 IOP Publishing Ltd.

Van Eijk C.W.E.,Technical University of Delft
IEEE Transactions on Nuclear Science | Year: 2012

The 3He shortage is forcing the neutron community to look for other detection methods. The inorganic scintillator may be an alternative. Thermal-neutron detection by means of inorganic scintillators has successfully been realized on a large scale at ISIS, UK, using 6LiF/ZnS:Ag mixed with an organic binder. This material is now being introduced in the security field. For several reasons, other traditional neutron scintillators, 6Li-glass:Ce and 6LiI:Eu, and relatively new materials such as 6Li6Gd(BO3)3:Ce and elpasolites like Cs2LiYCl6:Ce and Cs2LiLaBr6:Ce, are hardly used or have not yet found their way to application. The same applies to the more recently studied materials of the LiCAF group. The pros and cons of these inorganic materials for thermal neutron detection are discussed. © 2012 IEEE.

Charbon E.,Technical University of Delft
Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences | Year: 2014

This paper describes the basics of single-photon counting in complementary metal oxide semiconductors, through single-photon avalanche diodes (SPADs), and the making of miniaturized pixels with photon-counting capability based on SPADs. Some applications which may take advantage of SPAD image sensors are outlined, such as fluorescence-based microscopy, three-dimensional time-of-flight imaging and biomedical imaging, to name just a few. The paper focuses on architectures that are best suited to those applications and the trade-offs they generate. In this context, architectures are described that efficiently collect the output of single pixels when designed in large arrays. Off-chip readout circuit requirements are described for a variety of applications in physics, medicine and the life sciences. Owing to the dynamic nature of SPADs, designs featuring a large number of SPADs require careful analysis of the target application for an optimal use of silicon real estate and of limited readout bandwidth. The paper also describes the main trade-offs involved in architecting such chips and the solutions adopted, with a focus on scalability and miniaturization. © 2014 The Authors.

Adam A.J.L.,Technical University of Delft
Journal of Infrared, Millimeter, and Terahertz Waves | Year: 2011

In recent decades, many research teams working at terahertz frequencies have focused their efforts on surpassing the diffraction limit. Numerous techniques have been investigated, combining methods existing at optical wavelengths with THz systems such as time-domain spectroscopy. These developments have led, on one side, to resolutions as high as λ/3000 and, on the other, to video-rate recording. The purpose of this paper is to give an overview of the history of the field, describe the different approaches, give examples of existing applications and draw perspectives for this research area. © 2011 The Author(s).

Meijers E.J.,Technical University of Delft | Burger M.J.,Erasmus University Rotterdam
Environment and Planning A | Year: 2010

Recent concepts such as 'megaregions' and 'polycentric urban regions' emphasize that external economies are not confined to a single urban core, but are shared among a collection of nearby and linked cities. However, empirical analyses of agglomeration and agglomeration externalities have so far neglected the multicentric spatial organization of agglomeration and the possibility of the 'sharing' or 'borrowing' of size between cities. The authors take up this empirical challenge by analyzing how different spatial structures, in particular the monocentricity-polycentricity dimension, affect the economic performance of US metropolitan areas. Ordinary least squares and two-stage least-squares models explaining labor productivity show that spatial structure matters: polycentricity is associated with higher labor productivity. This appears to justify suggestions that, compared with more monocentric metropolitan areas, agglomeration diseconomies remain relatively limited in the more polycentric metropolitan areas, whereas agglomeration externalities are to some extent shared among the cities in such an area. However, it was also found that a network of geographically proximate smaller cities cannot substitute for the urbanization externalities of a single large city. ©2010 Pion Ltd and its Licensors.

McClain M.E.,UNESCO-IHE Institute for Water Education | McClain M.E.,Technical University of Delft
Ambio | Year: 2013

Sustainable development in Africa is dependent on increasing use of the continent's water resources without significantly degrading ecosystem services that are also fundamental to human wellbeing. This is particularly challenging in Africa because of high spatial and temporal variability in the availability of water resources and limited amounts of total water availability across expansive semi-arid portions of the continent. The challenge is compounded by ambitious targets for increased water use and a rush of international funding to finance development activities. Balancing development with environmental sustainability requires (i) understanding the boundary conditions imposed by the continent's climate and hydrology today and into the future, (ii) estimating the magnitude and spatial distribution of water use needed to meet development goals, and (iii) understanding the environmental water requirements of affected ecosystems, their current status and potential consequences of increased water use. This article reviews recent advancements in each of these topics and highlights innovative approaches and tools available to support sustainable development. While much remains to be learned, scientific understanding and technology should not be viewed as impediments to sustainable development on the continent. © 2012 The Author(s).

Chorus C.G.,Technical University of Delft
Transportmetrica | Year: 2012

This article studies route choices and traffic equilibria when travel times are risky and travellers are both risk averse and regret averse. It is shown how regret theory, one of the most prominent alternatives to expected utility theory throughout the social sciences, can be applied to model risky route choices by means of an expected modified utility function. This function is then used to study numerically how risk aversion and regret aversion jointly determine equilibrium outcomes in a simple binary route choice situation. It is found that increasing levels of regret aversion lead to equilibrium shifts towards routes whose mean travel time is low, routes that are less risky, and especially routes whose worst-case travel time is low compared with that of the competing route. Furthermore, risk aversion and regret aversion are found to reinforce each other's impact on equilibrium, towards a situation where safer routes are preferred over riskier (but faster) ones. © 2012 Copyright Taylor and Francis Group, LLC.
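To see how a regret term can tip a binary route choice toward the route with the lower worst-case travel time, here is a hedged sketch of a Loomes-Sugden-style expected modified utility. The regret function R(d) = (exp(δd) − 1)/δ and all numbers are illustrative assumptions, not the paper's exact specification:

```python
import math

def emu(scenarios, delta):
    """Expected modified utility of the 'own' route in a binary choice.

    scenarios: (probability, t_own, t_other) travel times per shared
    state of the world (e.g. incident / no incident).
    u(t) = -t; R(d) = (exp(delta*d) - 1)/delta is a convex regret/rejoice
    function (an assumption); delta -> 0 recovers regret neutrality."""
    total = 0.0
    for p, t_own, t_other in scenarios:
        d = t_other - t_own                      # utility advantage of own route
        regret = (math.exp(delta * d) - 1.0) / delta
        total += p * (-t_own + regret)
    return total

# Two routes with the same mean travel time (30 min) over shared states;
# route A has the lower worst case (38 min vs 70 min).
route_a = [(0.8, 28, 20), (0.2, 38, 70)]
route_b = [(0.8, 20, 28), (0.2, 70, 38)]
```

With `delta` near zero the two routes are valued equally (equal means), while any positive `delta` favors `route_a`, mirroring the abstract's finding that regret aversion shifts equilibrium toward routes with a low worst-case travel time.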

van den Bogaard M.,Technical University of Delft
European Journal of Engineering Education | Year: 2012

Student success is among the most widely researched areas in tertiary education. Generalisability of research in this field is problematic due to cultural and structural differences between countries, institutions and programmes where the research is done. Engineering education in the Netherlands has not been studied in depth. In this paper, outcomes of studies done outside and inside engineering and outside and inside the Netherlands are discussed to help understand the complexity of student retention issues. Although generalisation is an issue, there are a number of concepts and variables that surface in many of these studies, including students' background and disposition variables, education attributes, variables concerning educational climate and student behaviour. How these variables are related and how a university can apply the outcomes of research in this field of study are discussed in this paper. © 2012 SEFI.

Van Ostayen R.A.J.,Technical University of Delft
Tribology International | Year: 2010

A method is presented to determine the optimal surface shape distribution for a hydrodynamic slider bearing. This is the surface shape distribution that is able to carry a prescribed load while maintaining a maximum separation between the surfaces. This method is first derived for a bearing with constant load and sliding speed. It is subsequently extended to a bearing with periodic load and sliding speed. Results for slider bearings with different shapes, loads and speeds are presented. It is shown that the numerical procedure developed in this paper is numerically more efficient than a reference optimization method. © 2010 Elsevier Ltd. All rights reserved.

Merlijn Van Spengen W.,Technical University of Delft
Sensors and Actuators, A: Physical | Year: 2012

Fatigue in silicon microstructures has been widely observed but is currently not well understood. In this paper, it is shown that typical silicon MEMS fatigue can be described by a 'classic' stress corrosion cracking (SCC) model for glass fracture. The model can be used to describe the slow crack propagation and ultimate failure of MEMS structures due to mechanical stress under different conditions. With this SCC model it is possible to do full lifetime predictions, and these correspond very well with measured fatigue data from literature, as a function of applied stress and temperature. This suggests that at least part of the literature data available can be explained by static fatigue, which is better described by a time to failure than cycles to failure. However, not all failures can be explained by SCC alone, a notable exception being those of MEMS devices with very thin surface oxides. © 2012 Elsevier B.V. All rights reserved.

Papastathopoulou P.,Athens University of Economics and Business | Hultink E.J.,Technical University of Delft
Journal of Product Innovation Management | Year: 2012

This study examines the state of the art in new service development (NSD) research published in the period between 1982, when the first NSD article appeared in an academic journal, and 2008. First, a multisource search was conducted, which resulted in the identification of 145 NSD-related articles. Then, a content analysis was performed of these articles using multiple classifier variables with regard to general publication characteristics, focus of the research, and the research methodology that was employed. By examining the results, a number of developments in and patterns of scholarly research in NSD are revealed. More specifically, it appears that the greatest attention in the early writings was on a narrow set of NSD topics like critical success factors and the NSD process, which were predominantly investigated through large-scale surveys with single respondents in the U.S., Canadian, and U.K. financial services industry. The analytical techniques that were used at that time were rather simple. In contrast, in recent NSD works there is an expansion of research topics (such as customer involvement and the organization of NSD) that are increasingly investigated in high-tech service industries in Europe through qualitative research designs. Also, multiple respondent studies have started to appear in NSD investigations, while analytical techniques have also become more advanced. This pattern clearly uncovers signs of increasing maturation for the NSD discipline. In addition, some underresearched areas are identified, leading to suggestions for future research into this growing and important field. © 2012 Product Development & Management Association.

Zadpoor A.A.,Technical University of Delft
Materials Science and Engineering C | Year: 2014

In a large number of studies, it has been assumed that the in vitro apatite-forming ability measured by the simulated body fluid (SBF) test is a predictor of in vivo bioactivity. Several researchers have argued for and against this assumption, but the actual experimental evidence has not yet been fully examined. The purpose of this study is to review the currently available evidence that supports or rejects the above-mentioned assumption. Ultimately, it is important that SBF tests simulate the actual physiological conditions experienced by biomaterials within the human body. Given that in vivo animal experiments provide the best pre-clinical test conditions, all studies in which both the in vitro apatite-forming ability and the in vivo performance of two or more biomaterials are compared were found by searching the literature. Of the 33 studies that satisfied the inclusion criteria, in 25 the in vitro apatite-forming ability could predict the relative performance of the tested biomaterials in vivo. In 8 studies, in vitro performance did not correctly predict the relative in vivo performance. In the majority of the failure cases (5 of 8), none of the compared biomaterials formed apatite, while all compared biomaterials showed bioactive behavior in vivo. It is therefore concluded that, in the majority of cases, the SBF immersion test has been successful in predicting the relative performance of biomaterials in vivo. However, the details of the test protocols and the (expected) mechanisms of bioactivity of the tested biomaterials should be carefully considered in the design of SBF immersion tests and in the interpretation of their results. Guidelines are devised based on the results of this review for the design of SBF immersion test protocols and the interpretation of test results. These guidelines could help in designing better SBF test protocols that have better chances of predicting the bioactivity of biomaterials for potential application in clinical orthopedics. © 2013 Elsevier Ltd. All rights reserved.

Vos R.,Technical University of Delft | Barrett R.,University of Kansas
Smart Materials and Structures | Year: 2011

Current, highly active classes of adaptive materials have been considered for use in many different aerospace applications. From adaptive flight control surfaces to wing surfaces, shape-memory alloy (SMA), piezoelectric and electrorheological fluid materials are making their way into wings, stabilizers and rotor blades. Despite the benefits which can be seen in many classes of aircraft, some profound challenges are ever present, including low power and energy density, high power consumption, high development and installation costs, and outright programmatic blockages due to a lack of a materials certification database for FAR 23/25 and 27/29 certified aircraft. Three years ago, a class of adaptive structure was developed to skirt these daunting challenges. This pressure-adaptive honeycomb (PAH) is capable of extremely high performance and is FAA/EASA certifiable because it employs well characterized materials arranged in ways that lend a high level of adaptivity to the structure. This study is centered on laying out the mechanics, analytical models and experimental test data describing this new form of adaptive material. A directionally biased PAH system using an external (spring) force acting on the PAH bending structure was examined. The paper discusses the mechanics of pressure-adaptive honeycomb and describes a simple reduced-order model that can be used to simplify the geometric model in a finite element environment. The model assumes that a variable stiffness honeycomb results in an overall deformation of the honeycomb. Strains in excess of 50% can be generated through this mechanism without encountering local material (yield) limits. It was also shown that the energy density of pressure-adaptive honeycomb is akin to that of shape-memory alloy, while exhibiting strains that are an order of magnitude greater with an energy efficiency close to 100%. Excellent correlation between theory and experiment is demonstrated in a number of tests. A proof-of-concept wing section test was conducted on a 12% thick wing section representative of a modern commercial aircraft winglet or flight control surface with a 35% PAH trailing edge. It was shown that camber variations in excess of 5% can be generated by a pressure differential of 40 kPa. Results of a subsequent wind tunnel test show an increase in lift coefficient of 0.3 at 23 m s⁻¹ through an angle of attack from −6° to +20°. © 2011 IOP Publishing Ltd.

Van Mieghem P.,Technical University of Delft
Computer Communications | Year: 2012

Besides the epidemic threshold, the viral conductance ψ recently proposed by Kooij et al. [11] may be regarded as an additional characterization of the viral robustness of a network, measuring the overall ease with which viruses can spread in a particular network. Motivated to explain observed features of the viral conductance ψ in simulations [29], we have analysed this metric in depth using the N-intertwined SIS epidemic model, which upper bounds the real infection probability in any network and hence provides safe-side bounds on which network protection can be based. Our study derives a few exact results for ψ and a number of lower and upper bounds of varying accuracy. We also extend the theory of the N-intertwined SIS epidemic model by deducing formal series expansions of the steady-state fraction of infected nodes for any graph and any effective infection rate, which result in a series for the viral conductance ψ. Though approximate, we illustrate that the N-intertwined SIS epidemic model is so far the only SIS model on networks that is analytically tractable, and it is valuable for providing first-order estimates of the epidemic impact in networks. Finally, inspired by the analogy between virus spread and synchronization of coupled oscillators in a network, we propose the synchronizability as the analogue of the viral conductance. © 2012 Elsevier B.V. All rights reserved.
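The steady state of the N-intertwined (NIMFA) model satisfies v_i = 1 − 1/(1 + τ Σ_j a_ij v_j), where v_i is the infection probability of node i and τ the effective infection rate; this fixed point can be found by simple iteration. A small sketch (the function name and iteration scheme are our own, not from the paper):

```python
def nimfa_steady_state(adj, tau, iters=2000):
    """Fixed-point iteration for the N-intertwined SIS steady state:
        v_i = 1 - 1/(1 + tau * sum_j a_ij * v_j)
    adj: adjacency matrix as a list of rows; tau: effective infection rate.
    Returns the vector of steady-state infection probabilities."""
    v = [0.5] * len(adj)                       # start from a positive guess
    for _ in range(iters):
        v = [1 - 1 / (1 + tau * sum(a * w for a, w in zip(row, v)))
             for row in adj]
    return v

# Complete graph K4: the exact NIMFA solution is v = 1 - 1/(tau * (N - 1)).
k4 = [[0 if i == j else 1 for j in range(4)] for i in range(4)]
v = nimfa_steady_state(k4, tau=1.0)
```

Integrating the resulting steady-state fraction of infected nodes over the infection strength is, in essence, what the viral conductance ψ captures; the series expansions in the paper make that integral analytically accessible.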

Van Lint J.W.C.,Technical University of Delft
Transportation Research Record | Year: 2010

Travel times are key statistics for traffic performance, policy, and management evaluation purposes. Estimating travel times from local traffic speeds collected with loops or other sensors has been a relevant and lively research area. The most widespread and arguably most flexible algorithms developed for this purpose fall into the class of trajectory methods, which reconstruct synthetic vehicle trajectories on the basis of measured spot speeds and encompass various assumptions on which speeds prevail between traffic sensors. From these synthetic trajectories, average travel times can be deduced. This paper reviews and compares a number of these algorithms against two new trajectory algorithms based on spatiotemporal filtering of speed and 1/speed (slowness). On the basis of real data (from induction loops and an automated vehicle identification system), it is demonstrated that these new algorithms are more accurate (in terms of bias and residual error) than previous algorithms, and more robust with respect to increasing amounts of missing data.
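A basic trajectory method of the kind compared here assigns each inter-detector segment a speed (or slowness) derived from the bounding detectors and sums the segment traversal times. A sketch of the slowness-based variant, illustrative rather than the paper's exact algorithm:

```python
def trajectory_travel_time(positions, speeds):
    """Travel time of a synthetic trajectory reconstructed from spot speeds.

    positions: detector locations along the route (metres, increasing).
    speeds: speed measured at each detector (m/s).
    Each segment is assigned the average *slowness* (1/speed) of its
    bounding detectors; averaging slowness rather than speed keeps the
    estimate consistent with the time actually spent in slow sections."""
    total = 0.0
    for i in range(len(positions) - 1):
        length = positions[i + 1] - positions[i]
        slowness = 0.5 * (1 / speeds[i] + 1 / speeds[i + 1])
        total += length * slowness
    return total

# 1 km of road with a congested middle detector: 25, 10, 25 m/s.
t = trajectory_travel_time([0, 500, 1000], [25, 10, 25])
```

Averaging speeds instead (17.5 m/s per segment here) would yield about 57 s rather than 70 s, under-weighting the congested section; this bias is one reason the slowness-based filtering performs better in the paper's comparison.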

Van Eck D.,Technical University of Delft
Research in Engineering Design | Year: 2010

In this paper, I discuss a methodology for the conversion of functional models between functional taxonomies developed by Kitamura et al. (2007) and Ookubo et al. (2007). They apply their methodology to the conversion of functional models described in terms of the Functional Basis taxonomy into functional models described in terms of the Functional Concept Ontology taxonomy. I argue that this model conversion harbors two problems. First, a step in the conversion aimed at handling differences in the modeling of user features consists of the removal of Functional Basis functions; it is shown that this removal can lead to considerable information loss. Second, some Functional Basis functions that I argue correspond to user functions are re-interpreted as device functions in the model conversion. I present an alternative strategy that prevents information loss and information change in model conversions between the Functional Basis and Functional Concept Ontology taxonomies. © 2009 The Author(s).

Dijkstra J.T.,Technical University of Delft | Uittenbogaard R.E.,Deltares
Water Resources Research | Year: 2010

Aquatic vegetation has an important role in estuaries and rivers by acting as bed stabilizer, filter, food source, and nursing area. However, macrophyte populations worldwide are under high anthropogenic pressure. Protection and restoration efforts will benefit from more insight into the interaction between vegetation, currents, waves, and sediment transport. Most aquatic plants are very flexible, implying that their shape and hence their drag and turbulence production depend on the flow conditions. We have developed a numerical simulation model that describes this dynamic interaction between very flexible vegetation and a time-varying flow, using the sea grass Zostera marina as an example. The model consists of two parts: an existing 1DV k-ε turbulence model simulating the flow combined with a new model simulating the bending of the plants, based on a force balance that takes account of both vegetation position and buoyancy. We validated this model using observations of positions of flexible plastic strips and of the forces they are subjected to, as well as hydrodynamic measurements. The model predicts important properties like the forces on plants, flow velocity profiles, and turbulence characteristics well. Although the validation data are limited, the results are sufficiently encouraging to consider our model to be of generic value in studying flow processes in fields of flexible vegetation. Copyright © 2010 by the American Geophysical Union.

Faludi A.,Technical University of Delft
Environment and Planning A | Year: 2013

Territorial cohesion is a shared EU competence, but what is territory? This paper seeks to alert planners, in particular those involved in European spatial planning, that common-sense answers do not necessarily apply: territory is not a container. A view of macrospace as filled with territories-as-containers ('territorialism') is nonetheless the basis for common misunderstandings about the EU, and also about European planning, now being articulated in terms of territorial cohesion. Leaving the container view behind means that control over territories ('territoriality') must be negotiated, something that relational regionalism also suggests. The planning literature is beginning to absorb such views, articulating soft rather than hard forms of planning for 'soft spaces'. Hard planning is bound to continue, but it will be embedded in new practices, including the conceptualisation of multiple visions on territory. © Pion and its Licensors.

Karjalainen T.-M.,IDBM Program | Snelders D.,Technical University of Delft
Journal of Product Innovation Management | Year: 2010

The present paper examines how companies strategically employ design to create visual recognition of their brands' core values. To address this question, an explorative in-depth case study was carried out concerning the strategic design efforts of two companies: Nokia (mobile phones) and Volvo (passenger cars). It was found that these two companies fostered design philosophies that lay out which approach to design and which design features are expressive of the core brand values. The communication of value through design was modeled as a process of semantic transformation. This process specifies how meaning is created by design in a three-way relation among design features, brand values, and the interpretation by a potential customer. By analyzing the design effort of Nokia and Volvo with the help of this model, it is shown that control over the process of semantic transformation enabled managers in both companies to make strategic decisions over the type, strength, and generality of the relation between design features and brand values. Another result is that the embodiment of brand values in a design can be strategically organized around lead products. Such products serve as reference points for what the brand stands for and can be used as such during subsequent new product development (NPD) projects for other products in the brand portfolio. The design philosophy of Nokia was found to depart from that of Volvo. Nokia had a bigger product portfolio and served more market segments. It therefore had to apply its design features more flexibly over its product portfolio, and in many of its designs the relation between design features and brand values was more implicit. Six key drivers for the differences between the two companies were derived from the data. Two external drivers were identified that relate to the product category, and four internal drivers were found to stem from the companies' past and present brand management strategies. 
These drivers show that the design of visual recognition for the brand depends on the particular circumstances of the company and that it is tightly connected to strategic decision making on branding. These results are relevant for brand, product, and design managers, because they provide two good examples of companies that have organized their design efforts in such a way that they communicate the core values of their brands. Other companies can learn from these examples by considering why these two companies acted as they did and how their communication goals of product design were aligned to those of brand management. © 2009 Product Development & Management Association.

Jagtman H.M.,Technical University of Delft
Reliability Engineering and System Safety | Year: 2010

In emergency situations authorities need to warn the public. The conventional method for warning citizens in The Netherlands is the use of a siren. Modern telecommunication technologies, especially the text-based features of mobile phones, have great potential for warning the public. In the years 2005-2007, cell broadcast was tested during several large-scale field trials with citizens in The Netherlands. One of the questions was to determine the penetration of cell broadcast for citizens' alarming. This article argues that the definition of penetration in the light of warning citizens in case of emergencies should include the citizens' responses to warning messages. In addition, the approach to determining the penetration is discussed, together with the data and the validity issues regarding these data. The trials have shown that cell broadcast has the potential to become an effective citizens' alarming technology. This, however, requires the entire technological and organisational chain of the warning system to function correctly. Attention is required to network management, handset improvements, and correct communication to the public about the conditions under which a cell broadcast message can be received. The latter includes managing realistic expectations, including circumstances in which cell broadcast will not reach a citizen. © 2009 Elsevier Ltd. All rights reserved.

Roy K.,Indian Institute of Science | Padmanabhan M.,Indian Institute of Science | Goswami S.,Indian Institute of Science | Goswami S.,Technical University of Delft | And 5 more authors.
Nature Nanotechnology | Year: 2013

Combining the electronic properties of graphene and molybdenum disulphide (MoS₂) in hybrid heterostructures offers the possibility to create devices with various functionalities. Electronic logic and memory devices have already been constructed from graphene-MoS₂ hybrids, but they do not make use of the photosensitivity of MoS₂, which arises from its optical-range bandgap. Here, we demonstrate that graphene-on-MoS₂ binary heterostructures display remarkable dual optoelectronic functionality, including highly sensitive photodetection and gate-tunable persistent photoconductivity. The responsivity of the hybrids was found to be nearly 1 × 10¹⁰ A W⁻¹ at 130 K and 5 × 10⁸ A W⁻¹ at room temperature, making them the most sensitive graphene-based photodetectors. When subjected to time-dependent photoillumination, the hybrids could also function as a rewritable optoelectronic switch or memory, where the persistent state shows almost no relaxation or decay within experimental timescales, indicating near-perfect charge retention. These effects can be quantitatively explained by gate-tunable charge exchange between the graphene and MoS₂ layers, and may lead to new graphene-based optoelectronic devices that are naturally scalable for large-area applications at room temperature.

Nazer B.,Boston University | Gastpar M.,University of California at Berkeley | Gastpar M.,Technical University of Delft
Proceedings of the IEEE | Year: 2011

When two or more users in a wireless network transmit simultaneously, their electromagnetic signals are linearly superimposed on the channel. As a result, a receiver that is interested in one of these signals sees the others as unwanted interference. This property of the wireless medium is typically viewed as a hindrance to reliable communication over a network. However, using a recently developed coding strategy, interference can in fact be harnessed for network coding. In a wired network, (linear) network coding refers to each intermediate node taking its received packets, computing a linear combination over a finite field, and forwarding the outcome towards the destinations. Then, given an appropriate set of linear combinations, a destination can solve for its desired packets. For certain topologies, this strategy can attain significantly higher throughputs than routing-based strategies. Reliable physical layer network coding takes this idea one step further: using judiciously chosen linear error-correcting codes, intermediate nodes in a wireless network can directly recover linear combinations of the packets from the observed noisy superpositions of transmitted signals. Starting with some simple examples, this paper explores the core ideas behind this new technique and the possibilities it offers for communication over interference-limited wireless networks. © 2006 IEEE.
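The wired network-coding step described above, where a destination solves a set of linear combinations over a finite field for its desired packets, can be sketched with a tiny GF(2) example. This is illustrative code, not from the paper; single-bit "packets" stand in for real payloads:

```python
# Sketch: decoding linear network coding over GF(2). A destination receives
# XOR combinations of source packets; given the coefficient matrix, it
# recovers the packets by Gaussian elimination mod 2.

def solve_gf2(A, b):
    """Solve A x = b over GF(2); A is a list of 0/1 rows, b a list of 0/1."""
    n, m = len(A), len(A[0])
    rows = [A[i][:] + [b[i]] for i in range(n)]   # augmented matrix
    pivot_cols, r = [], 0
    for c in range(m):
        # find a pivot row for column c
        piv = next((i for i in range(r, n) if rows[i][c] == 1), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        # eliminate column c from all other rows (XOR = addition in GF(2))
        for i in range(n):
            if i != r and rows[i][c] == 1:
                rows[i] = [x ^ y for x, y in zip(rows[i], rows[r])]
        pivot_cols.append(c)
        r += 1
    x = [0] * m
    for i, c in enumerate(pivot_cols):
        x[c] = rows[i][m]
    return x

# Two source packets p1, p2; an intermediate node forwards p1 and p1 XOR p2,
# and the destination solves for both.
p1, p2 = 1, 0
received = [p1, p1 ^ p2]
coeffs = [[1, 0], [1, 1]]
decoded = solve_gf2(coeffs, received)
print(decoded)  # [1, 0]
```

With real packets each bit position is decoded the same way, and physical-layer network coding replaces the explicit XOR at the relay with a linear combination recovered directly from the superimposed signals.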

Mudde R.F.,Technical University of Delft
AIChE Journal | Year: 2011

We present experiments on a bubble train in a 23-cm-diameter fluidized bed of a Geldart B powder. The bubbles are injected via a single capillary inserted in the bed. We use our double X-ray tomographic scanner to measure the solids distribution in two parallel cross sections of the bed. We report data for four different heights of the measuring planes above the capillary outlet. The velocity of individual bubbles is found from the time of flight from the lower to the upper plane. We have done separate calibration experiments for the velocity. In this article, we present data for the size and velocity of individual bubbles. From the bubble velocity, we could obtain the vertical dimension of the bubbles. This makes it possible to measure the volume of each bubble. The results show that our scanner is capable of measuring properties of bubbles with a size of 2.5 cm and above. © 2010 American Institute of Chemical Engineers (AIChE).
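The time-of-flight reasoning in the abstract, velocity from the delay between two measuring planes and vertical bubble size from the passage duration, reduces to two divisions. A minimal sketch with hypothetical numbers (the plane spacing and timings below are invented, not the article's data):

```python
# Hypothetical numbers: bubble velocity and vertical size from a
# double-plane (time-of-flight) tomographic measurement.

plane_spacing = 0.02   # m, distance between the two measuring planes (assumed)
t_lower = 1.250        # s, bubble nose detected in the lower plane (assumed)
t_upper = 1.290        # s, bubble nose detected in the upper plane (assumed)

# Rise velocity from time of flight between the planes
v_bubble = plane_spacing / (t_upper - t_lower)   # m/s

# Vertical bubble dimension from how long the bubble occupies one plane
passage_time = 0.075   # s, duration of the bubble signal in one plane (assumed)
height = v_bubble * passage_time                 # m

print(f"velocity = {v_bubble:.3f} m/s, vertical size = {height * 100:.2f} cm")
```

Combining the vertical dimension with the measured cross-sectional size then yields the bubble volume, as the abstract describes.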

Weijermars R.,Technical University of Delft
SPE Economics and Management | Year: 2012

This study analyzes the typical challenges and opportunities related to unconventional-gas-reserves maturation and asset performance. Volatility in natural-gas prices may lead to downgrading of formerly proved reserves when the marginal cost of production cannot be sustained by the wellhead prices realized. New US Security and Exchange Commission (SEC) rules have accelerated the growth of unconventional-gas reserves, which in a way is an additional but unintended source of volatility and hence risk. Concerns about security of investments in unconventional-gas assets are fuelled by the effects of volatile natural-gas prices on production economics and by uncertainty about stability of reported reserves. This concern is exacerbated by an unprecedented rise in proved undeveloped gas reserves (PUDs) reported by unconventional-gas operators, arguably effectuated by favorable interpretations of PUDs when applying the new SEC accounting rules. This study includes a benchmark of proved reserves reported by two peer groups, each comprising four representative companies. The peer group of conventional companies includes Exxon, Chevron, Shell, and BP, and the unconventional peer group is made up of Chesapeake, Petrohawk, Devon, and EOG. Possible sources of undue uncertainty in reported reserves are highlighted, and recommendations are given to improve the reliability of reported reserves, especially from unconventional field assets. Copyright © 2012 Society of Petroleum Engineers.

Karana E.,Technical University of Delft
Journal of Cleaner Production | Year: 2012

Over the past decade, the deployment of sustainable product design has led to a dramatic increase in the use of bio-plastics as an environmentally sensitive substitute to regular petroleum-based ones. Published literature has explored the environmental performance and their suitability as an alternative for regular plastics. However, the reception of these materials by users, who come into contact with these materials embodied in consumer products, has not been researched and published. Even though the principle of using such materials with improved environmental credentials is sound, it is down to the users' appreciation of those materials that ultimately determine their commercial success. A significant challenge faced by material developers and product designers is to facilitate the appraisal of bio-plastics as a natural alternative to regular plastics, whilst at the same time meeting users' perceptions of quality. Drawing on the results of an empirical study this paper discusses when a material is perceived as 'natural' and/or 'high- quality'. The study concludes that there are more contradictory aspects than congruent aspects when evoking these two meanings. Imposition of new aesthetic values and uniqueness are discussed as critical strategies to elicit the desired meanings. © 2012 Elsevier Ltd. All rights reserved.

Van Waterschoot T.,Catholic University of Leuven | Van Waterschoot T.,Technical University of Delft | Moonen M.,Catholic University of Leuven
Proceedings of the IEEE | Year: 2011

The acoustic feedback problem has intrigued researchers over the past five decades, and a multitude of solutions has been proposed. In this survey paper, we aim to provide an overview of the state of the art in acoustic feedback control, to report results of a comparative evaluation with a selection of existing methods, and to cast a glance at the challenges for future research. © 2010 IEEE.

Ruess M.,Technical University of Delft | Schillinger D.,University of Texas at Austin | Ozcan A.I.,TU Munich | Rank E.,TU Munich
Computer Methods in Applied Mechanics and Engineering | Year: 2014

Nitsche's method can be used as a coupling tool for non-matching discretizations by weakly enforcing interface constraints. We explore the use of weak coupling based on Nitsche's method in the context of higher order and higher continuity B-splines and NURBS. We demonstrate that weakly coupled spline discretizations do not compromise the accuracy of isogeometric analysis. We show that the combination of weak coupling with the finite cell method opens the door for a truly isogeometric treatment of trimmed B-spline and NURBS geometries that eliminates the need for costly reparameterization procedures. We test our methodology for several relevant technical problems in two and three dimensions, such as gluing together trimmed multi-patches and connecting non-matching meshes that contain B-spline basis functions and standard triangular finite elements. The results demonstrate that the concept of Nitsche based weak coupling in conjunction with the finite cell method has the potential to considerably increase the flexibility of the design-through-analysis process in isogeometric analysis. © 2013 Elsevier B.V.
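The weak enforcement of interface constraints mentioned above can be written, for a generic elliptic problem on two non-matching subdomains, in the standard symmetric Nitsche form (textbook notation, not reproduced from the paper):

```latex
% Symmetric Nitsche coupling of two subdomain discretizations across an
% interface \Gamma; [[u]] = u_1 - u_2 is the jump, {{.}} the average,
% and beta/h a mesh-dependent penalty parameter.
\begin{aligned}
\sum_{k=1}^{2} a_k(u_k, v_k)
  &- \int_{\Gamma} \{\!\{ \nabla u \cdot \mathbf{n} \}\!\}\, [\![ v ]\!] \,\mathrm{d}\Gamma
   - \int_{\Gamma} \{\!\{ \nabla v \cdot \mathbf{n} \}\!\}\, [\![ u ]\!] \,\mathrm{d}\Gamma \\
  &+ \frac{\beta}{h} \int_{\Gamma} [\![ u ]\!]\, [\![ v ]\!] \,\mathrm{d}\Gamma
   \;=\; \sum_{k=1}^{2} \ell_k(v_k)
\end{aligned}
```

The penalty β must be chosen large enough (relative to an inverse-inequality constant of the discretization) to retain coercivity; unlike Lagrange-multiplier coupling, no auxiliary interface field is introduced, which is what makes the approach attractive for trimmed spline patches.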

Pastor-Satorras R.,Polytechnic University of Catalonia | Castellano C.,CNR Institute for Complex Systems | Castellano C.,University of Rome La Sapienza | Van Mieghem P.,Technical University of Delft | And 2 more authors.
Reviews of Modern Physics | Year: 2015

In recent years the research community has accumulated overwhelming evidence for the emergence of complex and heterogeneous connectivity patterns in a wide range of biological and sociotechnical systems. The complex properties of real-world networks have a profound impact on the behavior of equilibrium and nonequilibrium phenomena occurring in various systems, and the study of epidemic spreading is central to our understanding of the unfolding of dynamical processes in complex networks. The theoretical analysis of epidemic spreading in heterogeneous networks requires the development of novel analytical frameworks, and it has produced results of conceptual and practical relevance. A coherent and comprehensive review of the vast research activity concerning epidemic processes is presented, detailing the successful theoretical approaches as well as making their limits and assumptions clear. Physicists, mathematicians, epidemiologists, computer scientists, and social scientists share a common interest in studying epidemic spreading and rely on similar models for the description of the diffusion of pathogens, knowledge, and innovation. For this reason, while focusing on the main results and the paradigmatic models in infectious disease modeling, the major results concerning generalized social contagion processes are also presented. Finally, the research activity at the forefront in the study of epidemic spreading in coevolving, coupled, and time-varying networks is reported. © 2015 American Physical Society.
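The paradigmatic model class the review is built around can be illustrated with a minimal discrete-time SIS (susceptible-infected-susceptible) simulation on a random graph. All parameters below are assumed for illustration, and the update rule is a simple synchronous approximation:

```python
# Minimal sketch: discrete-time SIS epidemic on an Erdos-Renyi graph.
import random

random.seed(1)

# Build a random graph with n nodes and edge probability p
n, p = 200, 0.04
adj = {i: set() for i in range(n)}
for i in range(n):
    for j in range(i + 1, n):
        if random.random() < p:
            adj[i].add(j)
            adj[j].add(i)

beta, mu = 0.2, 0.1          # per-contact infection prob., recovery prob.
infected = set(random.sample(range(n), 5))

for step in range(200):
    new_infected = set(infected)
    for i in infected:
        # each infected node may recover this step...
        if random.random() < mu:
            new_infected.discard(i)
        # ...and (in this synchronous approximation) may still infect
        # each susceptible neighbour during the same step
        for j in adj[i]:
            if j not in infected and random.random() < beta:
                new_infected.add(j)
    infected = new_infected

prevalence = len(infected) / n
print(f"endemic prevalence after 200 steps: {prevalence:.2f}")
```

With mean degree ≈ 8 and beta/mu = 2 the system sits far above the epidemic threshold, so the simulation settles into a high endemic prevalence; lowering beta below roughly mu divided by the mean degree makes the outbreak die out, which is the threshold behavior the heterogeneous-network theory in the review generalizes.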

de Winter J.C.F.,Technical University of Delft
Cognition, Technology and Work | Year: 2014

Situation awareness and workload are popular constructs in human factors science. It has been hotly debated whether these constructs are scientifically credible, or whether they should merely be seen as folk models. Reflecting on the works of psychophysicist Stanley Smith Stevens and of measurement theorist David Hand, we suggest a resolution to this debate, namely that human factors constructs are situated towards the operational end of a representational-operational continuum. From an operational perspective, human factors constructs do not reflect an empirical reality, but they aim to predict. For operationalism to be successful, however, it is important to have suitable measurement procedures available. To explore how human factors constructs are measured, we focused on (mental) workload and its measurement by questionnaires and applied a culturomic analysis to investigate secular trends in word use. The results reveal an explosive use of the NASA Task Load Index (TLX). Other questionnaires, such as the Cooper Harper rating scale and the Subjective Workload Assessment Technique, show a modest increase, whereas many others appear short lived. We found no indication that the TLX is improved by iterative self-correction towards optimal validity, and we argue that usage of the NASA-TLX has become dominant through a Matthew effect. Recommendations for improving the quality of human factors research are provided. © 2014 Springer-Verlag London.

Villegas I.F.,Technical University of Delft
Journal of Thermoplastic Composite Materials | Year: 2015

Ultrasonic welding is a very fast joining technique well suited for thermoplastic composites, which does not require the use of foreign materials at the welding interface for either carbon or glass fibre-reinforced substrates. Despite very interesting investigations carried out by several researchers on different aspects of the process, ultrasonic welding of thermoplastic composite parts is not well understood yet. This article presents a deep experimental analysis of the transformations and heating mechanisms at the welding interface and their relationship with the dissipated power and the displacement of the sonotrode as provided by a microprocessor-controlled ultrasonic welder. The main aim of this research is to build up the knowledge to enable straightforward monitoring of the process and ultimately of the weld quality through the feedback provided by the ultrasonic welder. © The Author(s) 2013.

Cirillo P.,Technical University of Delft
Physica A: Statistical Mechanics and its Applications | Year: 2013

Pareto distributions, and power laws in general, have proven to be very useful models for describing very different phenomena, from physics to finance. In recent years, the econophysical literature has produced a large number of papers and models justifying the presence of power laws in economic data. Most of the time, this Paretianity is inferred from the observation of some plots, such as the Zipf plot and the mean excess plot: if the Zipf plot looks almost linear, the parameters of the Pareto distribution are estimated, often with OLS. Unfortunately, as we show in this paper, these heuristic graphical tools are not reliable. More precisely, we show that only a combination of plots can give some degree of confidence about the real presence of Paretianity in the data. We start by reviewing some of the most important plots, discussing their strengths and weaknesses, and then we propose some additional tools that can be used to refine the analysis. © 2013 Elsevier B.V. All rights reserved.
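The Zipf-plot-plus-OLS procedure criticized above, and the Hill estimator that is usually preferred for tail indices, are both easy to reproduce on synthetic Pareto data (illustrative sketch with assumed parameters, not the paper's own analysis):

```python
# Sketch: OLS slope of a Zipf plot vs the Hill estimator on synthetic
# Pareto samples with known tail index.
import math, random

random.seed(42)
alpha = 1.5   # true Pareto tail index (assumed)
xm = 1.0      # scale parameter (assumed)
n = 5000
# Inverse-transform sampling: X = xm * U^(-1/alpha), sorted descending
data = sorted((xm * random.random() ** (-1.0 / alpha) for _ in range(n)),
              reverse=True)

# Zipf plot: log(rank) vs log(value); the OLS slope estimates -alpha
logs_x = [math.log(v) for v in data]
logs_r = [math.log(r + 1) for r in range(n)]
mx = sum(logs_x) / n
mr = sum(logs_r) / n
slope = (sum((a - mx) * (b - mr) for a, b in zip(logs_x, logs_r))
         / sum((a - mx) ** 2 for a in logs_x))
alpha_ols = -slope

# Hill estimator on the k largest observations
k = 500
hill = k / sum(math.log(data[i] / data[k]) for i in range(k))

print(f"OLS on Zipf plot: {alpha_ols:.2f}, Hill: {hill:.2f} (true {alpha})")
```

On clean Pareto data both come out near the true index; the paper's point is that on real data a near-linear Zipf plot alone does not justify this step, so the estimates should be cross-checked against other diagnostics such as the mean excess plot.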

Nabavi M.R.,Catena Microelectronics BV | Nihtianov S.N.,Technical University of Delft
IEEE Sensors Journal | Year: 2012

This paper presents a comprehensive study of the design aspects of eddy-current displacement sensor (ECS) systems. In accordance with the sensor analysis presented in this paper, design strategies to compensate for important sensor imperfections are recommended. To this end, the challenges that are associated with ECS interfaces are identified, with focus on advanced industrial applications. This paper also provides a technical overview of the design advances of ECS interfaces proposed in the last decade and evaluates their pros and cons. Recently reported interface solutions for demanding industrial applications with respect to high resolution, stability, bandwidth, and low power consumption, at a sufficiently high excitation frequency, are addressed in more detail. © 2001-2012 IEEE.

Van Spengen W.M.,Technical University of Delft
Journal of Micromechanics and Microengineering | Year: 2012

This paper presents a comprehensive review of the reliability issues hampering capacitive RF MEMS switches in their development toward commercialization. Dielectric charging and its effects on device behavior are extensively addressed, as well as the application of different dielectric materials, improvements in the mechanical design and the use of advanced actuation waveforms. It is concluded that viable capacitive RF MEMS switches with a great chance of market acceptance preferably have no actuation voltage across a dielectric at all, contrary to the standard geometry. This is substantiated by the reliability data of a number of dielectric-less MEMS switch designs. However, a dielectric can be used for the signal itself, resulting in a higher C_on/C_off ratio than one could achieve in a switch without any dielectric. The other reliability issues of these devices are also covered, such as creep, RF-power-related failures and packaging reliability. This paper concludes with a recipe for a conceptual ideal switch from a reliability point of view, based on the lessons learned. © 2012 IOP Publishing Ltd.

Cuppen E.,Technical University of Delft
Research Policy | Year: 2012

Dealing with unstructured issues, such as the transition to a sustainable energy system, requires stakeholder participation. A stakeholder dialogue should enhance learning about a problem and its potential solutions. However, a stakeholder dialogue will not be effective in just any form. Part and parcel of the development of methodologies for stakeholder dialogue is the evaluation of those methodologies. The aim of this paper is to show how a methodology for stakeholder dialogue can be evaluated in terms of learning. This paper suggests three criteria for the evaluation of learning in stakeholder dialogue: (1) an operationalizable definition of the desired effect of dialogue, (2) the inclusion of a reference situation or control condition, and (3) the use of congruent and replicable evaluation methods. Q methodology was used in a quasi-experimental design to analyse to what extent learning took place in a stakeholder dialogue on energy options from biomass in the Netherlands. It is concluded that the dialogue had a significant effect: the dialogue increased participants' understanding of the diversity of perspectives. This effect is traced back to particular methodological and design elements in the dialogue. © 2011 Elsevier B.V. All rights reserved.

Rocca G.L.,Technical University of Delft
Advanced Engineering Informatics | Year: 2012

Knowledge-based engineering (KBE) is a relatively young technology with an enormous potential for engineering design applications. Unfortunately the amount of dedicated literature available to date is quite low and dispersed. This has not promoted the diffusion of KBE in the world of industry and academia, nor has it contributed to enhancing the level of understanding of its technological fundamentals. The scope of this paper is to offer a broad technological review of KBE in an attempt to fill the current information gap. The artificial intelligence roots of KBE are briefly discussed, and the main differences and similarities with respect to classical knowledge-based systems and modern general-purpose CAD systems are highlighted. The programming approach, which is a distinctive aspect of state-of-the-art KBE systems, is discussed in detail to illustrate its effectiveness in capturing and re-using engineering knowledge to automate large portions of the design process. The evolution and trends of KBE systems are investigated and, to conclude, a list of recommendations and expectations for the KBE systems of the future is provided. © 2012 Elsevier Ltd. All rights reserved.

Wols B.A.,KWR Watercycle Research Institute | Wols B.A.,Technical University of Delft | Hofman-Caris C.H.M.,KWR Watercycle Research Institute
Water Research | Year: 2012

Emerging organic contaminants (pharmaceutical compounds, personal care products, pesticides, hormones, surfactants, fire retardants, fuel additives, etc.) are increasingly found in water sources and therefore need to be controlled by water treatment technology. UV advanced oxidation technologies are often used as an effective barrier against organic contaminants. The combined operation of direct photolysis and reaction with hydroxyl radicals ensures good results for a wide range of contaminants. In this review, an overview is provided of the photochemical reaction parameters (quantum yield, molar absorption, OH radical reaction rate constant) of more than 100 organic micropollutants. These parameters allow for a prediction of organic contaminant removal by UV advanced oxidation systems. An example of contaminant degradation is elaborated for a simplified UV/H2O2 system. © 2012 Elsevier Ltd.
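The prediction the review describes, removal from a compound's quantum yield, molar absorption coefficient and OH rate constant, can be sketched as a pseudo-first-order calculation. All numerical values below are hypothetical order-of-magnitude examples, not tabulated data from the review, and the low-absorbance approximation is assumed:

```python
# Illustrative sketch: pseudo-first-order micropollutant removal under
# UV/H2O2 as the sum of direct photolysis and OH-radical oxidation.
import math

phi = 0.05        # quantum yield at 254 nm, mol/einstein (assumed)
eps = 3000.0      # molar absorption coefficient at 254 nm, M^-1 cm^-1 (assumed)
k_oh = 5.0e9      # second-order OH radical rate constant, M^-1 s^-1 (assumed)

E0 = 0.5          # average fluence rate in the reactor, mW cm^-2 (assumed)
oh_ss = 1.0e-13   # steady-state OH radical concentration, M (assumed)
t = 60.0          # exposure time, s

U254 = 4.72e5     # molar photon energy at 254 nm, J/einstein

# Fluence-based direct-photolysis rate constant, cm^2 mJ^-1:
# ln(10) converts the decadic absorption coefficient, the first factor 1000
# converts eps from L mol^-1 cm^-1 to cm^2 mol^-1, and the final /1000
# converts from per-J to per-mJ.
k_d = math.log(10) * phi * (1000.0 * eps) / U254 / 1000.0

H = E0 * t                                   # delivered UV fluence, mJ cm^-2
ln_removal = k_d * H + k_oh * oh_ss * t      # -ln(C/C0)
removal = 1.0 - math.exp(-ln_removal)
print(f"predicted removal: {removal * 100:.1f}%")
```

The split between the two terms shows why compounds with a low quantum yield can still be removed well when their OH rate constant is high, which is the rationale for adding H2O2 to a UV reactor.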

Gil J.,Technical University of Delft
Geographical Analysis | Year: 2014

This article proposes urban network models as instruments to measure urban form, structure, and function indicators for the assessment of the sustainable mobility of urban areas, thanks to their capacity to describe the detail of a local environment in the context of a wider city-region. Drawing from the features of existing street network models that offer disaggregate, scalable, and relational analysis of the spatial configuration of urban areas, it presents a multimodal urban network (MMUN) model that describes an urban environment using three systems: private transport (i.e., car, bicycle, and pedestrian), public transport (i.e., rail, tram, metro, and bus), and land use. This model offers a unifying framework that allows the use of a range of analysis metrics and conceptions of distance (i.e., physical, topological, and cognitive), and aims to be simple and applicable in practice. An implementation of the MMUN is created for the Randstad city-region in the Netherlands. This is analyzed with network centrality measures in a series of experiments, testing its performance against empirical data. The experiments yield conclusions regarding the use of different distance parameters, the choice of network centrality metrics, and the relevant combinations of multimodal layers to describe the structure and configuration of a city-region. © 2014 by The Ohio State University.
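The kind of multimodal centrality analysis described above can be illustrated on a toy two-layer graph: a street layer plus a transit layer that shortcuts it, with centrality computed over topological (hop-count) distance. The graph below is invented for illustration, not the MMUN dataset:

```python
# Toy sketch: closeness centrality on a small two-layer network where a
# transit line (via station "s") shortcuts a street chain, computed with
# plain BFS over topological distance.
from collections import deque

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7),  # street
         (1, "s"), ("s", 6)]                                       # transit

adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

def closeness(node):
    """Closeness = (n - 1) / sum of shortest-path (hop) distances."""
    dist = {node: 0}
    q = deque([node])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return (len(adj) - 1) / sum(dist.values())

scores = {v: closeness(v) for v in adj}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))  # → 1 0.5
```

The transit stops (nodes 1 and 6) and the station come out most central, while the ends of the street chain score lowest: adding or removing a modal layer visibly reshuffles the centrality ranking, which is the effect the MMUN experiments quantify at city-region scale.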

Vizcaino M.,Technical University of Delft
Wiley Interdisciplinary Reviews: Climate Change | Year: 2014

One of the major impacts of anthropogenic climate change is sea level rise. Reliable estimates of the contribution of ice sheets to future sea level rise are important to policy makers and the civil society. In addition to sea level rise, ice sheet changes can affect the global climate through modified freshwater fluxes in the areas of deep-water convection. Also, ice sheets modify local and large-scale climate through changes in surface albedo and in their own topography. In the past, ice sheets have played a fundamental role in shaping climate and climate transitions. Despite their strong interactions with the climate system, they are not yet standard components of climate models. First attempts have been made in this direction, and it is foreseeable that in several years ice sheets will be included as interactive components of most models. The main challenges for this coupling are related to spatial and temporal resolution, ice sheet initialization, model climate biases, the need for explicit representation of snow/ice surface physics (e.g., albedo evolution, surface melt, refreezing, compaction), and coupling to the ocean component. This article reviews the main processes contributing to the ice sheet mass budget, the suite of ice sheet-climate interactions, and the requirements for modelling them in a coupled system. Focus is given to four major subjects: surface mass balance, ice sheet flow, ocean-ice sheet interaction, and challenges in coupling ice sheet models to climate models. © 2014 John Wiley & Sons, Ltd.

Renaud N.,Northwestern University | Renaud N.,Technical University of Delft | Sherratt P.A.,Northwestern University | Ratner M.A.,Northwestern University
Journal of Physical Chemistry Letters | Year: 2013

By generating two free charge carriers from a single high-energy photon, singlet fission (SF) promises to significantly improve the efficiency of a class of organic photovoltaics (OPVs). However, SF is generally a very inefficient process with only a small number of absorbed photons successfully converting into triplet states. In this Letter, we map the relation between stacking geometry and SF yield in crystals based on perylenediimide (PDI) derivatives. This structure-function analysis provides a potential explanation for the SF yield discrepancies observed among similar molecular crystals and may help to identify favorable geometries that lead to an optimal SF yield. Exploring the subtle relationship between stacking geometry and SF yield, this Letter suggests using crystal structure engineering to improve the design of SF-based OPVs. © 2013 American Chemical Society.

Weijermars R.,Technical University of Delft
Energy Strategy Reviews | Year: 2012

Global trends - past and future - of world natural gas consumption, production, reserves, and prices are highlighted here by analyzing the BP Statistical Review of World Energy 2011, the BP Energy Outlook 2011, and the latest natural gas data from the world's major energy agencies. Growing demand and declining gas-reserve-replacement ratios support market-model predictions of rising natural gas prices. © 2011.

Van Den Akker H.E.A.,Technical University of Delft
Industrial and Engineering Chemistry Research | Year: 2010

In chemical reactor engineering, simple concepts are used for describing the flow in a reactor. Turbulent two-phase flow processes, however, are characterized by a broad range of time and length scales. Two-phase reactors operated in the turbulent regime therefore qualify for multiscale modeling. Multiscale models are being developed in such diverging fields as chemical reaction engineering (among which are packed bed reactors, fluidized beds, and risers), chemical vapor deposition reactor modeling, turbulent single-phase and two-phase flow simulations (among which is combustion), and materials science. The common aspects of these multiscale models are highlighted: they all comprise a coarse-grained simulation for the macroscale and some type of fully resolved microscale simulation. Different simulation techniques are used however. Processes involving single-phase flow may require one of three computational fluid dynamics (CFD) techniques: direct numerical simulations (DNSs), large eddy simulations (LESs), and Reynolds averaged Navier-Stokes (RANS)-based simulations. For two-phase flows, two CFD options are open: Euler-Lagrangian (or particle tracking) and Euler-Euler (or two-fluid). The characteristics of all these approaches are discussed. One of the more interesting options in dealing with turbulent two-phase, i.e., multiscale, flow reactors is to run a DNS for the local small-scale processes. Such a DNS is carried out in a periodic box, a dedicated forcing technique being used to impose the turbulent-flow conditions pertinent to a specific position in the macro domain. Several such successful DNSs are reviewed, which all exploit lattice Boltzmann (LB) techniques. Some new and promising results from LB simulations for gas-liquid flow systems are presented. 
Finally, a truly multiscale simulation strategy is presented for turbulent two-phase flow reactors, which combines a coarse-grained simulation of the macro domain run concurrently with several DNSs for properly chosen positions in the domain. LB techniques are recommended. The crucial step of this strategy, feeding the results of the local DNSs frequently back into the coarse-grained simulation until convergence is reached, is described but has not yet been implemented. © 2010 American Chemical Society.

In recent decades, a series of regulatory agencies has been created at the European Union (EU) level. The existing literature on EU agencies focuses either on autonomy as a reason for the creation of such agencies or on the autonomy that they are granted by design. As a result, we do not know much about how EU agencies' actual autonomy comes about. This article therefore probes into the early development of two specific agencies. On the basis of document analysis and interviews with agency staff members, national experts, EU officials, external stakeholders, and clients, it explores why, in practice, the European Medicines Agency (EMA) seems to have developed a higher level of autonomy than the European Food Safety Authority (EFSA), even though on paper EMA appears to be as autonomous as, or if anything, less autonomous than EFSA. The article demonstrates the importance of investigating the managerial strategies of EU regulatory agencies to understand the actual practice of their autonomy and points to legitimacy as a key condition affecting the early development of such agencies. © 2014 Copyright Taylor & Francis Group, LLC.

Goswami S.,Technical University of Delft
Nature Nanotechnology | Year: 2016

The two-dimensional superconductor that forms at the interface between the complex oxides lanthanum aluminate (LAO) and strontium titanate (STO) has several intriguing properties that set it apart from conventional superconductors. Most notably, an electric field can be used to tune its critical temperature (Tc), revealing a dome-shaped phase diagram reminiscent of high-Tc superconductors. So far, experiments with oxide interfaces have measured quantities that probe only the magnitude of the superconducting order parameter and are not sensitive to its phase. Here, we perform phase-sensitive measurements by realizing the first superconducting quantum interference devices (SQUIDs) at the LAO/STO interface. Furthermore, we develop a new paradigm for the creation of superconducting circuit elements, where local gates enable the in situ creation and control of Josephson junctions. These gate-defined SQUIDs are unique in that the entire device is made from a single superconductor with purely electrostatic interfaces between the superconducting reservoir and the weak link. We complement our experiments with numerical simulations and show that the low superfluid density of this interfacial superconductor results in a large, gate-controllable kinetic inductance of the SQUID. Our observation of robust quantum interference opens up a new pathway to understanding the nature of superconductivity at oxide interfaces. © 2016 Nature Publishing Group

Dorenbos P.,Technical University of Delft
Journal of Luminescence | Year: 2013

The spectroscopy of the lanthanide dopants in the RE3(Al1-xGax)5O12 (RE = Gd, Y, Lu and x = 0, 0.2, 0.4, 0.6, 0.8, 1) family of garnet compounds is reviewed, providing information on the redshift, the centroid shift, the charge transfer energies, and the host exciton creation energies. Clear and systematic trends with changing composition are identified, which enables the prediction of properties of compounds where information is not yet available or incomplete. The data are used as input to the recently developed chemical shift model, which then generates the vacuum referred binding energy of electrons in 4f-states and 5d-states of all trivalent and all divalent lanthanides as dopants in the garnet family. The obtained binding energies are in excellent agreement with observed properties like thermal quenching and efficiency of 5d-4f emission, electron trapping in trivalent lanthanides, photoconductivity and thermoluminescence. © 2012 Elsevier B.V. All rights reserved.

Quaglietta E.,Technical University of Delft | Punzo V.,University of Naples Federico II
Transportation Research Part C: Emerging Technologies | Year: 2013

Recently the growing demand in railway transportation has raised the need for practitioners to improve the design of railway systems. This means identifying a configuration of the infrastructure components (e.g. number of rail tracks, type and layout of the signalling system, layout of station tracks) and the operational schedule (e.g. train headways, scheduled dwell times) that improves given measures of performance, such as the level of capacity, the punctuality of the service and the energy saving. Planners and designers involved in this process have the hard task of determining sound design solutions in order to achieve certain levels of network performance in a cost-effective way, especially when investment funds are limited. To this aim, a sensitivity analysis can support early decisional phases in order to better understand dependencies between performances and design variables and drive the decisional process towards effective solutions. In this paper, the Sobol variance-based method is applied to this purpose. A practical application has been carried out for a mass transit line in the city of Naples. This study has investigated how train delays and energy consumption are affected by variations in design variables relating to the operational plan, the signalling system and factors related to the layout of station platforms. Results highlight the ability of this analysis to explain the effects of different design solutions from a statistical point of view and to find the most influential factors for a given performance. This suggests to practitioners the usefulness of this approach in addressing decisions towards cost-effective interventions. © 2013 Elsevier Ltd.
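The Sobol variance-based method applied in the paper can be sketched on a toy model. The "model" below is an invented additive stand-in for the railway simulation, and the first-order indices are estimated with the standard Monte Carlo pick-freeze scheme:

```python
# Minimal sketch: first-order Sobol sensitivity indices via the Monte Carlo
# "pick-freeze" estimator with two independent input samples.
import random

random.seed(0)

def model(x):
    # Hypothetical performance measure dominated by the first design factor
    return 4.0 * x[0] + 2.0 * x[1] + 1.0 * x[2]

N, d = 50000, 3
A = [[random.random() for _ in range(d)] for _ in range(N)]
B = [[random.random() for _ in range(d)] for _ in range(N)]
fA = [model(x) for x in A]
fB = [model(x) for x in B]

mean = sum(fA) / N
var = sum((y - mean) ** 2 for y in fA) / N

S = []
for i in range(d):
    # A with column i replaced by the corresponding column of B
    fABi = [model(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
    Vi = sum(yb * (yabi - ya) for yb, yabi, ya in zip(fB, fABi, fA)) / N
    S.append(Vi / var)

print([round(s, 2) for s in S])
```

For this additive model the indices are analytically a_i^2 / Σ a_j^2, i.e. 16/21 ≈ 0.76, 4/21 ≈ 0.19 and 1/21 ≈ 0.05, so the estimates should land close to those values. In a real application `model` is replaced by the (expensive) railway simulation, and a library such as SALib would typically handle sampling and index estimation.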

A review on the wavelengths of all five 4f-5d transitions for Ce3+ in about 150 different inorganic compounds (fluorides, chlorides, bromides, iodides, oxides, sulfides, selenides, nitrides) is presented. It provides data on the centroid shift and the crystal field splitting of the 5d-configuration, which are then used to estimate the Eu2+ inter-4f-electron Coulomb repulsion energy U(6,A) in compound A. The four semi-empirical models on lanthanide levels (the redshift model, the centroid shift model, the charge transfer model, and the chemical shift model) that were developed over the past 12 years are briefly reviewed. It will be demonstrated how those models, together with the data collected in this work and elsewhere, can be united to construct schemes that contain the binding energy of electrons in the 4f and 5d states for each divalent and each trivalent lanthanide ion relative to the vacuum energy. As an example, the vacuum referred binding energy schemes for LaF3 and La2O3 will be constructed. © 2012 Elsevier B.V.

Kim E.D.,Technical University of Delft
Mathematical Programming | Year: 2014

We introduce a new combinatorial abstraction for the graphs of polyhedra. The new abstraction is a flexible framework defined by combinatorial properties, with each choice of properties providing a variant for studying the diameters of polyhedral graphs. One particular variant has a diameter which satisfies the best known upper bound on the diameters of polyhedra. Another variant has superlinear asymptotic diameter and, together with some combinatorial operations, gives a concrete approach for disproving the Linear Hirsch Conjecture. © 2012 Springer-Verlag Berlin Heidelberg and Mathematical Optimization Society.

Janssens K.,University of Antwerp | Dik J.,Technical University of Delft | Cotte M.,European Synchrotron Radiation Facility | Susini J.,European Synchrotron Radiation Facility
Accounts of Chemical Research | Year: 2010

Often, just micrometers below a painting's surface lies a wealth of information, both with Old Masters such as Peter Paul Rubens and Rembrandt van Rijn and with more recent artists of great renown such as Vincent Van Gogh and James Ensor. Subsurface layers may include underdrawing, underpainting, and alterations, and in a growing number of cases conservators have discovered abandoned compositions on paintings, illustrating artists' practice of reusing a canvas or panel. The standard methods for studying the inner structure of cultural heritage (CH) artifacts are infrared reflectography and X-ray radiography, techniques that are optionally complemented with the microscopic analysis of cross-sectioned samples. These methods have limitations, but recently, a number of fundamentally new approaches for fully imaging the buildup of hidden paint layers and other complex three-dimensional (3D) substructures have been put into practice. In this Account, we discuss these developments and their recent practical application to CH artifacts. We begin with a tabular summary of 14 IR- and X-ray-based imaging methods and then continue with a discussion of each technique, illustrating CH applications with specific case studies. X-ray-based tomographic and laminographic techniques can be used to generate 3D renditions of artifacts of varying dimensions. These methods are proving invaluable for exploring inner structures, identifying the conservation state, and postulating the original manufacturing technology of metallic and other sculptures. In the analysis of paint layers, terahertz time-domain spectroscopy (THz-TDS) can highlight interfaces between layers in a stratigraphic buildup, whereas macroscopic scanning X-ray fluorescence (MA-XRF) has been employed to measure the distribution of pigments within these layers.
This combination of innovative methods provides topographic and color information at the micrometer depth scale, allowing us to look "into" paintings in an entirely new manner. Over the past five years, several new variants of traditional IR- and X-ray-based imaging methods have been implemented by conservators and museums, and the first reports have begun to emerge in the primary research literature. Applying these state-of-the-art techniques in a complementary fashion affords a more comprehensive view of paintings and other artworks. © 2010 American Chemical Society.

Bauer G.E.W.,Tohoku University | Bauer G.E.W.,Technical University of Delft | Saitoh E.,Tohoku University | Saitoh E.,Japan Science and Technology Agency | Van Wees B.J.,Zernike Institute for Advanced Materials
Nature Materials | Year: 2012

Spintronics concerns the coupled electron spin and charge transport in condensed-matter structures and devices. The recently invigorated field of spin caloritronics focuses on the interaction of spins with heat currents, motivated by newly discovered physical effects and by strategies to improve existing thermoelectric devices. Here we give an overview of our understanding and of the experimental state of the art concerning the coupling of spin, charge and heat currents in magnetic thin films and nanostructures. Known phenomena are classified either as independent-electron effects (such as the spin-dependent Seebeck effect) in metals, which can be understood by a model of two parallel spin-transport channels with different thermoelectric properties, or as collective effects (such as the spin Seebeck effect), caused by spin waves, which also exist in insulating ferromagnets. The search for applications, for example heat sensors and waste heat recyclers, is on. © 2012 Macmillan Publishers Limited. All rights reserved.

Mudde R.F.,Technical University of Delft
Powder Technology | Year: 2010

This paper discusses the first results of imaging bubbles moving through a fluidized bed with an X-ray tomographic scanner. The scanner consists of 3 medical X-ray sources, each equipped with 30 CdWO4 detectors. The fluidized bed has a diameter of 23 cm and is filled with Geldart B powder. The scanner measures the attenuation in a thin slice perpendicular to the column axis at a sampling frequency of 2500 Hz. The data collected during 2 s are reconstructed using the SART algorithm with a one-step-late correction. The reconstructions show the distribution of the bubbles in the 2-dimensional cross-section. By stacking these images, a 3-dimensional view of the bubbles in the column is obtained. © 2009 Elsevier B.V. All rights reserved.
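SART, the reconstruction algorithm named above, iteratively corrects the image with back-projected, row-normalized residuals. A minimal sketch on a toy problem (a 2x2 "phantom" probed by four rays; the scanner's actual fan-beam geometry and one-step-late correction are not modelled here):

```python
import numpy as np

def sart(A, b, n_iter=200, relax=1.0):
    """Simplified SART iteration for A x = b:
    x <- x + relax * (1/colsum) * A^T [ (b - A x) / rowsum ]."""
    row_sum = A.sum(axis=1); row_sum[row_sum == 0] = 1.0
    col_sum = A.sum(axis=0); col_sum[col_sum == 0] = 1.0
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        residual = (b - A @ x) / row_sum   # normalize per ray
        x = x + relax * (A.T @ residual) / col_sum
    return x

# Toy attenuation grid (flattened 2x2); rays measure row and column sums.
phantom = np.array([1.0, 0.0, 0.0, 1.0])
A = np.array([[1, 1, 0, 0],     # horizontal ray through row 0
              [0, 0, 1, 1],     # horizontal ray through row 1
              [1, 0, 1, 0],     # vertical ray through column 0
              [0, 1, 0, 1]], float)
b = A @ phantom                 # simulated sinogram
x = sart(A, b)
```

With only row and column rays the system is underdetermined, so SART converges to a consistent image (A x matches the measurements) rather than the exact phantom; real scanners use many view angles to remove this ambiguity.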

Kohler A.R.,Technical University of Delft
Materials and Design | Year: 2013

The combination of textile and electronic technologies results in new challenges for sustainable product design. Electronic textiles (e-textiles) feature a seamless integration of textiles with electronics and other high-tech materials. Such products may, if they become mass consumer applications, result in a new kind of waste that could be difficult to recycle. The ongoing innovation process of e-textiles holds opportunities to prevent future end-of-life impacts. Implementing eco-design in the technological development process can help to minimise future waste. However, the existing Design for Recycling (DfR) principles for textiles or electronics do not match the properties of the combined products. This article examines possibilities to advance the eco-design of a converging technology. DfR strategies for e-textiles are discussed against the background of contemporary innovation trends. Three waste-preventative eco-design approaches for e-textiles are discussed: (1) harnessing the inherent advantages of smart materials for sustainable design; (2) establishing open compatibility standards; (3) labelling e-textiles to facilitate their recycling. It is argued that life-cycle thinking needs to be implemented concurrently with the technological development process. © 2013 Elsevier Ltd.

Geerlings H.,Technical University of Delft | Zevenhoven R.,Abo Akademi University
Annual Review of Chemical and Biomolecular Engineering | Year: 2013

CO2 mineralization comprises a chemical reaction between suitable minerals and the greenhouse gas carbon dioxide. The CO2 is effectively sequestered as a carbonate, which is stable on geological timescales. In addition, the variety of materials that can be produced through mineralization could find applications in the marketplace, which makes implementation of the technology more attractive. In this article, we review recent developments and assess the current status of the CO2 mineralization field. In an outlook, we briefly describe a few mineralization routes, which upon further development have the potential to be implemented on a large scale. Copyright © 2013 by Annual Reviews. All rights reserved.

Breugem W.-P.,Technical University of Delft
Journal of Computational Physics | Year: 2012

An immersed boundary method (IBM) with second-order spatial accuracy is presented for fully resolved simulations of incompressible viscous flows laden with rigid particles. The method is based on the computationally efficient direct-forcing method of Uhlmann [M. Uhlmann, An immersed boundary method with direct forcing for simulation of particulate flows, J. Comput. Phys. 209 (2005) 448-476] that is embedded in a finite-volume/pressure-correction method. The IBM consists of two grids: a fixed uniform Eulerian grid for the fluid phase and a uniform Lagrangian grid attached to and moving with the particles. A regularized delta function is used to communicate between the two grids and proved to be effective in suppressing grid locking. Without significant loss of efficiency, the original method is improved by: (1) a better approximation of the no-slip/no-penetration (ns/np) condition on the surface of the particles by a multidirect forcing scheme, (2) a correction for the excess in the effective particle diameter by a slight retraction of the Lagrangian grid from the surface towards the interior of the particles by a fraction of the Eulerian grid spacing, and (3) an enhancement of the numerical stability for particle-fluid mass density ratios near unity by a direct account of the inertia of the fluid contained within the particles. The new IBM contains two new parameters: the number of iterations N_s of the multidirect forcing scheme and the retraction distance r_d. The effect of N_s and r_d on the accuracy is investigated for five different flows. The results show that r_d has a strong influence on the effective particle diameter and little influence on the error in the ns/np condition, while exactly the opposite holds for N_s. A novel finding of this study is the demonstration that r_d has a strong influence on the order of grid convergence.
It is found that for spheres the choice of r_d = 0.3Δx yields second-order accuracy, compared to the first-order accuracy of the original method that corresponds to r_d = 0. Finally, N_s = 2 appears optimal for reducing the error in the ns/np condition while maintaining the computational efficiency of the method. © 2012 Elsevier Inc.
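The regularized delta function coupling the two grids is, in many direct-forcing IBMs, the three-point kernel of Roma et al.; assuming that kernel here for illustration (the paper's exact choice may differ), the sketch below spreads a Lagrangian point force onto a 1D Eulerian grid and shows that the kernel's unit-sum property conserves the total force:

```python
import numpy as np

def delta_roma(r):
    """Three-point regularized delta of Roma et al. (1999);
    r is the distance from the Lagrangian point in grid spacings."""
    r = abs(r)
    if r < 0.5:
        return (1.0 + np.sqrt(1.0 - 3.0 * r * r)) / 3.0
    if r < 1.5:
        return (5.0 - 3.0 * r - np.sqrt(1.0 - 3.0 * (1.0 - r) ** 2)) / 6.0
    return 0.0

def spread_force(x_lag, f_lag, nx, dx=1.0):
    """Spread a Lagrangian point force f_lag at x_lag onto a 1D grid."""
    f_eul = np.zeros(nx)
    for i in range(nx):
        f_eul[i] += f_lag * delta_roma((i * dx - x_lag) / dx)
    return f_eul

# Point force of 2.0 at an off-grid location (hypothetical values).
f = spread_force(x_lag=4.3, f_lag=2.0, nx=10)
```

The same kernel, evaluated at the same weights, is used in the opposite direction to interpolate the fluid velocity to the Lagrangian points, which is what keeps the force exchange between the grids consistent.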

Kalff F.E.,Technical University of Delft
Nature Nanotechnology | Year: 2016

The advent of devices based on single dopants, such as the single-atom transistor, the single-spin magnetometer and the single-atom memory, has motivated the quest for strategies that permit the control of matter with atomic precision. Manipulation of individual atoms by low-temperature scanning tunnelling microscopy provides ways to store data in atoms, encoded either into their charge state, magnetization state or lattice position. A clear challenge now is the controlled integration of these individual functional atoms into extended, scalable atomic circuits. Here, we present a robust digital atomic-scale memory of up to 1 kilobyte (8,000 bits) using an array of individual surface vacancies in a chlorine-terminated Cu(100) surface. The memory can be read and rewritten automatically by means of atomic-scale markers and offers an areal density of 502 terabits per square inch, outperforming state-of-the-art hard disk drives by three orders of magnitude. Furthermore, the chlorine vacancies are found to be stable at temperatures up to 77 K, offering the potential for expanding large-scale atomic assembly towards ambient conditions. © 2016 Nature Publishing Group

Engelsman M.,Technical University of Delft | Schwarz M.,ATreP Agenzia Provinciale per la Protonterapia | Dong L.,Proton Therapy
Seminars in Radiation Oncology | Year: 2013

The physical characteristics of proton beams are appealing for cancer therapy. The rapid increase in operational and planned proton therapy facilities may suggest that this technology is a "plug-and-play" valuable addition to the arsenal of the radiation oncologist and medical physicist. In reality, the technology is still evolving, so planning and delivery of proton therapy in patients face many practical challenges. This review article discusses the current status of proton therapy treatment planning and delivery techniques, indicates current limitations in dealing with range uncertainties, and proposes possible developments for proton therapy and supplementary technology to try to realize the actual potential of proton therapy. © 2013 Elsevier Inc.

Dejene F.K.,Zernike Institute for Advanced Materials | Flipse J.,Zernike Institute for Advanced Materials | Bauer G.E.W.,Technical University of Delft | Bauer G.E.W.,Tohoku University | Van Wees B.J.,Zernike Institute for Advanced Materials
Nature Physics | Year: 2013

Since the discovery of the giant magnetoresistance effect the intrinsic angular momentum of the electron has opened up new spin-based device concepts. Our present understanding of the coupled transport of charge, spin and heat relies on the two-channel model for spin-up and spin-down electrons having equal temperatures. Here we report the observation of different (effective) temperatures for the spin-up and spin-down electrons in a nanopillar spin valve subject to a heat current. By three-dimensional finite element modelling of our devices for varying thickness of the non-magnetic layer, spin heat accumulations (the difference of the spin temperatures) of 120 mK and 350 mK are extracted at room temperature and 77 K, respectively, which is of the order of 10% of the total temperature bias over the nanopillar. This technique uniquely allows the study of inelastic spin scattering at low energies and elevated temperatures, which is not possible by spectroscopic methods. © 2013 Macmillan Publishers Limited.

Joo J.-Y.,Carnegie Mellon University | Ilic M.D.,Carnegie Mellon University | Ilic M.D.,Technical University of Delft
IEEE Transactions on Smart Grid | Year: 2013

This paper concerns mathematical conditions under which a system-level optimization of supply and demand scheduling can be implemented as a distributed optimization in which users and suppliers, as well as the load serving entities, are decision makers with well-defined sub-objectives. We start by defining the optimization problem of the system, which includes the sub-objectives of many different players, both supply and demand entities, and decompose it into each player's optimization problem using Lagrange dual decomposition. A demand entity's or a load serving entity's problem is further decomposed into the problems of the many different end-users that the load serving entity serves. By examining the relationships between the global and the local/individual objectives in these multiple layers, and the optimality conditions of these decomposable problems, we define the requirements for these different objectives to converge. We propose a novel set of methods for coordinating supply and demand over different time horizons, namely day-ahead scheduling and real-time adjustment. We illustrate the ideas by simulating simple examples with different conditions and objectives for each entity in the system. © 2010-2012 IEEE.
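Lagrange dual decomposition of the kind described can be sketched with a toy one-supplier/one-user example (hypothetical quadratic cost and utility, not the paper's model): a coordinator adjusts a price from the supply-demand imbalance, while each entity solves only its own sub-problem at the posted price:

```python
def dual_decomposition(a=1.0, b=10.0, step=0.2, n_iter=200):
    """Price coordination via dual subgradient ascent.
    Supplier cost 0.5*a*g**2, user utility b*d - 0.5*d**2 (toy choices);
    the multiplier lam prices the coupling constraint g = d."""
    lam = 0.0
    for _ in range(n_iter):
        g = lam / a             # supplier: argmax_g  lam*g - 0.5*a*g**2
        d = b - lam             # user:     argmax_d  b*d - 0.5*d**2 - lam*d
        lam += step * (d - g)   # dual subgradient = constraint imbalance
    return lam, g, d

lam, g, d = dual_decomposition()
```

Neither entity ever sees the other's objective: only the scalar price is exchanged, which is what makes the scheme distributed. With these toy parameters the iteration settles at the market-clearing price lam = a*b/(1+a).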

Makinwa K.A.A.,Technical University of Delft
Procedia Engineering | Year: 2010

A smart temperature sensor is an integrated system consisting of a temperature sensor, its bias circuitry and an analog-to-digital converter (ADC). When manufactured in CMOS technology, such sensors have found widespread use due to their low cost, small size and ease of use. In this paper the basic operating principles of CMOS smart temperature sensors are explained and the state of the art is reviewed. Two new figures of merit for smart temperature sensors are defined, which express the trade-off between their energy per conversion and their resolution and inaccuracy, respectively. A survey of data published over the last 25 years shows that both figures of merit usefully bound the performance of state-of-the-art smart temperature sensors.

Weijermars R.,Technical University of Delft
Oil and Gas Journal | Year: 2011

Two plausible scenarios for gas price recovery may alter the current strategy shift of North American unconventional gas companies, which is diverting capital investment into oil rather than gas development projects. US gas producers have paved the way for replacing dwindling domestic conventional gas production with unconventional resources. Short-term gas delivery contracts dominate the US market, and US gas prices have responded rapidly to economic changes. The strategy shift includes moving gas rigs to liquid-prone areas, as shown in US rig count statistics. The question remains whether this strategy shift of US gas operators, aimed at restoring their corporate earnings, will lead to a timely recovery of wellhead gas prices. Serious doubts about the ability of companies to meet wellhead breakeven costs in unconventional gas plays could slow down the emerging global interest in unconventional gas development projects.

Sheldon R.A.,Technical University of Delft
Chemical Society Reviews | Year: 2012

In this tutorial review, the fundamental concepts underlying the principles of green and sustainable chemistry - atom and step economy and the E factor - are presented within the general context of efficiency in organic synthesis. The importance of waste minimisation through the widespread application of catalysis in all its forms - homogeneous, heterogeneous, organocatalysis and biocatalysis - is discussed. These general principles are illustrated with simple practical examples, such as alcohol oxidation, carbonylation and the asymmetric reduction of ketones. The latter reaction is exemplified by a three-enzyme process for the production of a key intermediate in the synthesis of the cholesterol-lowering agent atorvastatin. The immobilisation of enzymes as cross-linked enzyme aggregates (CLEAs) as a means of optimising operational performance is presented. The use of immobilised enzymes in catalytic cascade processes is illustrated with a trienzymatic process for the conversion of benzaldehyde to (S)-mandelic acid using a combi-CLEA containing three enzymes. Finally, the transition from fossil-based chemicals manufacture to a more sustainable biomass-based production is discussed. © 2012 The Royal Society of Chemistry.

Erden M.S.,ENSTA ParisTech | Tomiyama T.,Technical University of Delft
IEEE Transactions on Robotics | Year: 2010

In this paper, a physically interactive control scheme is developed for a manipulator robot arm. The human touches the robot and applies force in order to make it behave as he/she likes. The communication between the robot and the human is maintained by physical contact, with no sensors. The intent of the human is estimated by observing the change in control effort. The robot receives the estimated human intent and updates its position reference accordingly. The developed method uses the principle of conservation of zero momentum for position-controlled systems. A switching scheme is developed that moves between the modes of pure impedance control with a fixed position reference and interactive control under human intent. The switching mechanism uses neither a physical switch nor a sensor; it observes the human intent and puts the robot into interactive mode, if there is any. When the human intent disappears, the robot goes into the pure-impedance-control mode, thus stabilizing at the position where it was left. © 2010 IEEE.

Weijermars R.,Technical University of Delft
Applied Energy | Year: 2010

This study presents the clockspeed analysis of a peer group comprising six major integrated US energy companies with substantial US interstate natural gas pipeline business activities: El Paso, Williams, NiSource, Kinder Morgan, MidAmerican and CMS Energy. For this peer group, the three clockspeed accelerators have been benchmarked at both corporate level and gas transmission business level, using time-series analysis and cross-sectional analysis over a 6-year period (2002-2007). The results are visualized in so-called clockspeed radargraphs. Overall corporate clockspeed winners - over the performance period studied - are: Williams, El Paso and Kinder Morgan; MidAmerican is a close follower. Corporate clockspeed laggards are: CMS Energy and NiSource. The peer group ranking for the natural gas transmission business segment shows similar clockspeed winners, but with different ranking in the following order: Kinder Morgan, MidAmerican and El Paso; Williams is a close follower. Clockspeed laggards for the natural gas transmission segments coincide with the corporate clockspeed laggards of the peer group: CMS Energy and NiSource (over the performance period studied); laggards of the past may become clockspeed leaders of the future if adjustments are made. Practical recommendations are formulated for achieving competitive clockspeed optimization in the US gas transmission industry as a whole. Recommendations for clockspeed acceleration at individual companies are also given. Although the US natural gas market is subject to specific regulations and its own geographical dynamics, this study also provides hints for improving the competitive clockspeed performance of gas transmission companies elsewhere, in other world regions. © 2010 Elsevier Ltd. All rights reserved.

Zijlema M.,Technical University of Delft
Coastal Engineering | Year: 2010

An unstructured-grid procedure for SWAN is presented. It is a vertex-based, fully implicit, finite difference method which can accommodate unstructured meshes with high variability in geographic resolution, suitable for representing complicated bottom topography in shallow areas and irregular shorelines. The numerical solution is found by means of a point-to-point multi-directional Gauss-Seidel iteration method requiring a number of sweeps through the grid. The approach is stable for any time step while permitting local mesh refinements in areas of interest. A number of applications are shown to verify the correctness and numerical accuracy of the unstructured version of SWAN. © 2009 Elsevier B.V. All rights reserved.
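The point-to-point Gauss-Seidel iteration named above updates each unknown in place using the freshest available neighbour values, then repeats the sweep until convergence. A generic sketch on a small diagonally dominant system (SWAN's actual action-balance discretization and multi-directional sweep ordering are not reproduced here):

```python
import numpy as np

def gauss_seidel(A, b, n_sweeps=50):
    """Point Gauss-Seidel: sweep through the unknowns, updating each one
    immediately so later updates in the same sweep see the new values."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(n_sweeps):
        for i in range(n):
            s = A[i] @ x - A[i, i] * x[i]   # contribution of the neighbours
            x[i] = (b[i] - s) / A[i, i]
    return x

# Small diffusion-like system (diagonally dominant, so the sweep converges).
A = np.array([[4., -1., 0.],
              [-1., 4., -1.],
              [0., -1., 4.]])
b = np.array([3., 2., 3.])
x = gauss_seidel(A, b)
```

Because each update uses values already refreshed in the current sweep, Gauss-Seidel typically converges in fewer sweeps than Jacobi iteration, at the cost of a fixed traversal order; multi-directional sweeping alternates that order so information propagates from all sides of the grid.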

Janssen M.,Technical University of Delft
Social Science Computer Review | Year: 2012

Enterprise architecture (EA) has been embraced by governments as an instrument to advance their e-government efforts, create coherence, and improve interoperability. EA is often viewed as a codified understanding covering elements ranging from organization to infrastructure. It is aimed at closing the gap between the high-level policies of organizations and the low-level implementations of information systems. Important elements of EA are a framework, tools, principles, patterns, basic facilities, and shared services. EA is influenced by the social interdependencies and interactions among the stakeholders in which it is embedded. A survey among public organizations shows that current EAs are primarily product oriented, whereas sociopolitical aspects are often neglected. Architecture implementation also involves learning effects and requires effective communication among participants. The author argues that the architecture concept should be reconceptualized and can only be effective if it incorporates relational capabilities, clear responsibilities, and sound governance mechanisms. © SAGE Publications 2012.

Wapenaar K.,Technical University of Delft | Broggini F.,Colorado School of Mines | Snieder R.,Colorado School of Mines
Geophysical Journal International | Year: 2012

With seismic interferometry a virtual source can be created inside a medium, assuming a receiver is present at the position of the virtual source. Here we discuss a method that creates a virtual source inside a medium from reflection data, without needing a receiver inside the medium. Apart from the reflection data, an estimate of the direct arrivals is required. However, no explicit information about the scatterers in the medium is needed. We analyse the proposed method for a simple configuration with the method of stationary phase. We show that the retrieved virtual-source response correctly contains the multiple scattering coda of the inhomogeneous medium. The proposed method can serve as a basis for data-driven suppression of internal multiples in seismic imaging. © 2012 The Authors Geophysical Journal International © 2012 RAS.

Jenny P.,ETH Zurich | Roekaerts D.,Technical University of Delft | Beishuizen N.,Robert Bosch GmbH
Progress in Energy and Combustion Science | Year: 2012

In a real turbulent spray flame, dispersion, continuous phase turbulence modification, dispersed phase inter-particle collisions, evaporation, mixing and combustion occur simultaneously. Dealing with all these complexities and their interactions poses a tremendous modeling task. Therefore, in order to advance current modeling capabilities, it seems reasonable to aim for progress in individual sub-areas like breakup, dispersion, mixing and combustion, which however cannot be viewed in complete isolation. Further, one has to consider advantages and disadvantages of the general modeling approaches, which are direct numerical simulation (DNS), large eddy simulation (LES), simulations based on Reynolds averaged equations and probability density function (PDF) methods. Not least one also has to distinguish between Eulerian and Lagrangian dispersed phase descriptions. The goal of this paper is to provide a review of computational model developments relevant for turbulent dilute spray combustion, i.e. the dense regime, including collisions as well as primary and secondary atomization, is not covered. Also not considered is breakup in dilute sprays, which can occur in the presence of sufficiently high local turbulence. It is intended to guide readers interested in theory, in the development and validation of predictive models, and in planning new experiments. In terms of physical phenomena, the current understanding regarding turbulence modification due to droplets, preferential droplet concentration, impact on evaporation and micro-mixing, and different spray combustion regimes is summarized. In terms of modeling, different sets of equations are discussed, i.e. the governing conservation laws without and with point droplet approximation as employed by DNS, the filtered equations considered in LES, the Reynolds averaged equations, and Lagrangian evolution equations. Further, small scale models required in the context of point droplet approximations are covered. 
In terms of computational studies and method developments, progress is categorized by the employed approaches, i.e. DNS, LES, simulations based on Reynolds averaged equations, and PDF methods. In terms of experiments, various canonical spray flame configurations are discussed. Moreover, some of the most important experiments in this field are presented in a structured way with the intention to provide a database for model validation and a guideline for future investigations. © 2012 Elsevier Ltd. All rights reserved.

Feuz L.,Chalmers University of Technology | Jonsson M.P.,Chalmers University of Technology | Jonsson M.P.,Technical University of Delft | Hook F.,Chalmers University of Technology
Nano Letters | Year: 2012

Optical sensors utilizing the principle of localized surface plasmon resonance (LSPR) offer the advantage of a simple label-free mode of operation, but their sensitivity is typically limited to a very thin region close to the surface. In bioanalytical sensing applications this can be a significant drawback, in particular since the surface needs to be coated with a recognition layer in order to ensure specific detection of target molecules. We show that the signal upon protein binding decreases dramatically with increasing thickness of the recognition layer, highlighting the need for thin, high-quality recognition layers compatible with LSPR sensors. The effect is particularly strong for structures that provide local hot spots with highly confined fields, such as the gap between pairs of gold disks. While our results show a significant improvement in sensor response for pairs over single gold disks upon binding directly to the gold surface, disk pairs did not provide a larger signal upon binding of proteins to a recognition layer located on the gold, already for layers as thin as around 3 nm. Local plasmonic hot spots are, however, shown to be advantageous in combination with directed binding to the hot spots. This was demonstrated using a structure consisting of three surface materials (gold, titanium dioxide, and silicon dioxide) and a new protocol for material-selective surface chemistry of these three materials, which allows for controlled binding only in the gap between pairs of disks. Such a design increased the signal obtained per bound molecule by a factor of around four compared to binding to single disks. © 2012 American Chemical Society.

Reiserer A.,Technical University of Delft | Rempe G.,Max Planck Institute of Quantum Optics
Reviews of Modern Physics | Year: 2015

Distributed quantum networks will allow users to perform tasks and to interact in ways that are not possible with present-day technology. Their implementation is a key challenge for quantum science and requires the development of stationary quantum nodes that can send and receive as well as store and process quantum information locally. The nodes are connected by quantum channels for flying information carriers, i.e., photons. These channels serve both to directly exchange quantum information between nodes and to distribute entanglement over the whole network. In order to scale such networks to many particles and long distances, an efficient interface between the nodes and the channels is required. This article describes the cavity-based approach to this goal, with an emphasis on experimental systems in which single atoms are trapped in and coupled to optical resonators. Besides being conceptually appealing, this approach is promising for quantum networks on larger scales, as it gives access to long qubit coherence times and high light-matter coupling efficiencies. Thus, it allows one to generate entangled photons at the push of a button, to reversibly map the quantum state of a photon onto an atom, to transfer and teleport quantum states between remote atoms, to entangle distant atoms, to detect optical photons nondestructively, to perform entangling quantum gates between an atom and one or several photons, and it even provides a route toward efficient heralded quantum memories for future repeaters. The presented general protocols and the identification of key parameters are applicable to other experimental systems. © 2015 American Physical Society.

Amar A.,Technical University of Delft
IEEE Transactions on Wireless Communications | Year: 2010

Collaborative beamforming is an approach where sensor nodes in a wireless sensor network, deployed randomly in an area of interest, transmit a common message by forming a beampattern towards a destination. Previous statistical analysis of the averaged power beampattern considered multipath-free conditions. Herein, we express the averaged power beampattern when the signal is observed at the destination in the presence of local scattering. Assuming the spreading angles are uniformly distributed around the destination direction, we derive closed-form expressions for the maximum gain and numerically examine the beamwidth as a function of the number of nodes, the cluster size, and the scattering parameters, for node positions with a uniform distribution or a Gaussian distribution. © 2010 IEEE.
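The averaged power beampattern of such a random node cluster can be estimated by Monte Carlo. The sketch below covers only the multipath-free baseline case (no local scattering, which is the paper's extension), with nodes uniform in a disk and phases pre-compensated towards the destination direction phi = 0; all numerical values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def avg_power_beampattern(n_nodes, radius, wavelength, angles, n_trials=200):
    """Monte Carlo average of |array factor|^2 over random node placements,
    far field, phases steered towards phi = 0 (multipath-free case)."""
    k = 2 * np.pi / wavelength
    p = np.zeros(len(angles))
    for _ in range(n_trials):
        # uniform positions in a disk of given radius
        r = radius * np.sqrt(rng.random(n_nodes))
        th = 2 * np.pi * rng.random(n_nodes)
        x, y = r * np.cos(th), r * np.sin(th)
        for j, phi in enumerate(angles):
            # residual phase at observation angle phi after steering to 0
            ph = k * (x * (np.cos(phi) - 1.0) + y * np.sin(phi))
            p[j] += abs(np.exp(1j * ph).sum()) ** 2
    return p / n_trials

angles = np.linspace(0.0, np.pi, 64)
P = avg_power_beampattern(n_nodes=16, radius=2.0, wavelength=1.0, angles=angles)
```

In the steered direction the N node signals add coherently, giving a gain of N^2 regardless of the placement, while the average sidelobe level is only of order N; local scattering, as analysed in the paper, broadens the mainlobe and reduces the achievable maximum gain.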

Scarano F.,Technical University of Delft
Measurement Science and Technology | Year: 2013

A survey is given of the major developments in three-dimensional velocity field measurements using the tomographic particle image velocimetry (PIV) technique. The appearance of tomo-PIV dates back seven years from the present review (Elsinga et al 2005a 6th Int. Symp. PIV (Pasadena, CA)) and this approach has rapidly spread as a versatile, robust and accurate technique to investigate three-dimensional flows (Arroyo and Hinsch 2008 Topics in Applied Physics vol 112 ed A Schröder and C E Willert (Berlin: Springer) pp 127-54) and turbulence physics in particular. A considerable number of applications have been achieved over a wide range of flow problems, which requires the current status and capabilities of tomographic PIV to be reviewed. The fundamental aspects of the technique are discussed beginning from hardware considerations for volume illumination, imaging systems, their configurations and system calibration. The data processing aspects are of uppermost importance: image pre-processing, 3D object reconstruction and particle motion analysis are presented with their fundamental aspects along with the most advanced approaches. Reconstruction and cross-correlation algorithms, attaining higher measurement precision, spatial resolution or higher computational efficiency, are also discussed. The exploitation of 3D and time-resolved (4D) tomographic PIV data includes the evaluation of flow field pressure on the basis of the flow governing equation. The discussion also covers a-posteriori error analysis techniques. The most relevant applications of tomo-PIV in fluid mechanics are surveyed, covering experiments in air and water flows. In measurements in flow regimes from low-speed to supersonic, most emphasis is given to the complex 3D organization of turbulent coherent structures. © 2013 IOP Publishing Ltd.

Koster D.A.,Weizmann Institute of Science | Crut A.,University Claude Bernard Lyon 1 | Shuman S.,Sloan Kettering Institute | Bjornsti M.-A.,University of Alabama at Birmingham | Dekker N.H.,Technical University of Delft
Cell | Year: 2010

Entangling and twisting of cellular DNA (i.e., supercoiling) are problems inherent to the helical structure of double-stranded DNA. Supercoiling affects transcription, DNA replication, and chromosomal segregation. Consequently the cell must fine-tune supercoiling to optimize these key processes. Here, we summarize how supercoiling is generated and review experimental and theoretical insights into supercoil relaxation. We distinguish between the passive dissipation of supercoils by diffusion and the active removal of supercoils by topoisomerase enzymes. We also review single-molecule studies that elucidate the timescales and mechanisms of supercoil removal. © 2010 Elsevier Inc.

Steenbergen M.,Technical University of Delft | Dollevoet R.,ProRail Inframanagement
International Journal of Fatigue | Year: 2013

A phenomenological investigation of squat defects on rail grade R260Mn is performed. The surface-breaking crack pattern, which is either linear or branched (V-shaped), shows a typical position and orientation in the running band. Particular characteristics are asymmetry of this pattern, with the presence of a leading and a trailing branch, and crack reflection or deviation at the running band border. Bending tests reveal a 3D internal crack pattern, with a pair of crack planes or 'wings' enclosing a wedge at the surface. Microstructural analysis of the rail upper layer shows metallurgical principles of crack initiation: delamination and transverse fracture of white etching material at the surface. This analysis moreover reveals a 3D anisotropic texture of the upper layer under combined bi-directional tangential surface stresses. Mechanical interpretation of the crack morphology shows that the leading or single branch of the surface-breaking crack pattern is a shear-induced fatigue crack, following the anisotropic microstructure when growing into the rail. The trailing crack of a branched squat is explained as the result of a subsequent transverse, wedge-shaped brittle failure mechanism of the surface layer of the rail, developing within the actual elliptical Hertzian contact patch - or the envelope of potential contact ellipses at the leading crack position. It is driven by the transverse shear loading towards the rail gauge face. © 2012 Elsevier Ltd. All rights reserved.

Sheldon R.A.,Technical University of Delft
Catalysis Today | Year: 2015

Catalytic oxidations of alcohols, with dioxygen or hydrogen peroxide as the primary oxidant, in aqueous reaction media are reviewed. Selective alcohol oxidations with hydrogen peroxide generally involve early transition elements, mostly tungsten, molybdenum and vanadium, in high oxidation states and peroxometal complexes as the active oxidants. Aerobic oxidations, in contrast, involve oxidative dehydrogenation, usually catalyzed by late transition elements, e.g. water soluble palladium(II)-diamine complexes, or supported nanoparticles of Pd or Au as hybrid species at the interface of homogeneous and heterogeneous catalysis. Alternatively, water soluble organocatalysts, exemplified by stable N-oxy radicals such as TEMPO and derivatives thereof, in conjunction with copper catalysts, are efficient catalysts for the aerobic oxidation of alcohols. Metal-free variants of these systems, e.g. employing nitrite or nitric acid as a cocatalyst, are also effective catalysts for aerobic alcohol oxidations. Finally, enzymatic aerobic oxidations of alcohols employing oxidases as catalysts are described. In particular, the laccase/TEMPO system is receiving much attention because of possible applications in the selective oxidations of diols and carbohydrates derived from renewable resources. © 2014 Elsevier B.V. All rights reserved.

Pribiag V.S.,Technical University of Delft
Nature Nanotechnology | Year: 2015

Topological superconductivity is an exotic state of matter that supports Majorana zero-modes, which have been predicted to occur in the surface states of three-dimensional systems, in the edge states of two-dimensional systems, and in one-dimensional wires. Localized Majorana zero-modes obey non-Abelian exchange statistics, making them interesting building blocks for topological quantum computing. Here, we report superconductivity induced in the edge modes of semiconducting InAs/GaSb quantum wells, a two-dimensional topological insulator. Using superconducting quantum interference we demonstrate gate-tuning between edge-dominated and bulk-dominated regimes of superconducting transport. The edge-dominated regime arises only under conditions of high-bulk resistivity, which we associate with the two-dimensional topological phase. These experiments establish InAs/GaSb as a promising platform for the confinement of Majoranas into localized states, enabling future investigations of non-Abelian statistics. © 2015 Nature Publishing Group

Roeser S.,Technical University of Delft
Science and Engineering Ethics | Year: 2012

Engineers are normally seen as the archetype of people who make decisions in a rational and quantitative way. However, technological design is not value neutral. The way a technology is designed determines its possibilities, which can, for better or for worse, have consequences for human wellbeing. This leads various scholars to the claim that engineers should explicitly take ethical considerations into account. They are at the cradle of new technological developments and can thereby influence the possible risks and benefits more directly than anybody else. I have argued elsewhere that emotions are an indispensable source of insight into the ethical aspects of risk. In this paper I will argue that this means that engineers should also include emotional reflection in their work. This requires a new understanding of the competencies of engineers: they should not be unemotional calculators; quite the opposite, they should work to cultivate their moral emotions and sensitivity, in order to be engaged in morally responsible engineering. © 2010 The Author(s).

Dorenbos P.,Technical University of Delft
Physical Review B - Condensed Matter and Materials Physics | Year: 2013

Models and methods to determine the absolute binding energy of 4f-shell electrons in lanthanide dopants will be combined with data on the energy of electron transfer from the valence band to a lanthanide dopant. This work will show that it provides a powerful tool to determine the absolute binding energy of valence band electrons throughout the entire family of insulator and semiconductor compounds. The tool will be applied to 28 fluoride, oxide, and nitride compounds providing the work function and electron affinity together with the location of the energy levels of all divalent and all trivalent lanthanide dopants with an accuracy that surpasses that of traditional methods like photoelectron spectroscopy. The 28 compounds were selected to demonstrate how work function and electron affinity change with composition and structure, and how electronic structure affects the optical properties of the lanthanide dopants. Data covering more than 1000 different halide (F, Cl, Br, I), chalcogenide (O, S, Se), and nitride compounds are available in the archival literature enabling us to routinely establish work function and electron affinity for this much wider collection of compounds. © 2013 American Physical Society.

Xiao J.,Fudan University | Bauer G.E.W.,Tohoku University | Bauer G.E.W.,Technical University of Delft
Physical Review Letters | Year: 2012

We study the excitation of spin waves in magnetic insulators by the current-induced spin-transfer torque. We predict preferential excitation of surface spin waves induced by an easy-axis surface anisotropy with critical current inversely proportional to the penetration depth and surface anisotropy. The surface modes strongly reduce the critical current and enhance the excitation power of the current-induced magnetization dynamics. © 2012 American Physical Society.

Aubin-Tam M.-E.,Technical University of Delft
Methods in Molecular Biology | Year: 2013

Nanoparticle-protein conjugates hold great promise in biomedical applications. Diverse strategies have been developed to link nanoparticles to proteins. This chapter describes a method to assemble and purify nanoparticle-protein conjugates. First, stable and biocompatible 1.5 nm gold nanoparticles are synthesized. Conjugation of the nanoparticle to the protein is then achieved via two different approaches that do not require heavy chemical modifications or cloning: cysteine-gold covalent bonding, or electrostatic attachment of the nanoparticle to charged groups of the protein. Co-functionalization of the nanoparticle with PEG thiols is recommended to help protein folding. Finally, structural characterization is performed with circular dichroism, as this spectroscopy technique has proven to be effective at examining protein secondary structure in nanoparticle-protein conjugates. © Springer Science+Business Media New York 2013.

The next generation seismic migration and inversion technology considers multiple scattering as vital information, allowing the industry to derive significantly better reservoir models - with more detail and less uncertainty - while requiring a minimum of user intervention. Three new insights have been uncovered with respect to this fundamental transition. Unblended or blended multiple scattering can be included in the seismic migration process, and it has been proposed to formulate the imaging principle as a minimization problem. The resulting process yields angle-dependent reflectivity and is referred to as recursive full wavefield migration (WFM). The full waveform inversion process for velocity estimation can be extended to a recursive, optionally blended, anisotropic multiple-scattering algorithm. The resulting process yields angle-dependent velocity and is referred to as recursive full waveform inversion (WFI). The mathematical equations of WFM and WFI have an identical structure, but the physical meaning behind the expressions is fundamentally different. In WFM the reflection process is central, and the aim is to estimate reflection operators of the subsurface, using the up- and downgoing incident wavefields (including the codas) in each gridpoint. In WFI, however, the propagation process is central and the aim is to estimate velocity operators of the subsurface, using the total incident wavefield (sum of up- and downgoing) in each gridpoint. Angle-dependent reflectivity in WFM corresponds with angle-dependent velocity (anisotropy) in WFI. The algorithms of WFM and WFI could be joined into one automated joint migration-inversion process. 
In the resulting hybrid algorithm, being referred to as recursive joint migration inversion (JMI), the elaborate volume integral solution was replaced by an efficient alternative: WFM and WFI are alternately applied at each depth level, where WFM extrapolates the incident wavefields and WFI updates the velocities without any user interaction. The output of the JMI process offers an integrated picture of the subsurface in terms of angle-dependent reflectivity as well as anisotropic velocity. This two-fold output, reflectivity image and velocity model, offers new opportunities to extract accurate rock and pore properties at a fine reservoir scale. © 2012 Society of Exploration Geophysicists.

Gousios G.,Technical University of Delft
IEEE International Working Conference on Mining Software Repositories | Year: 2013

During the last few years, GitHub has emerged as a popular project hosting, mirroring and collaboration platform. GitHub provides an extensive REST API, which enables researchers to retrieve high-quality, interconnected data. The GHTorrent project has been collecting data for all public projects available on GitHub for more than a year. In this paper, we present the dataset details and construction process and outline the challenges and research opportunities emerging from it. © 2013 IEEE.
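The REST API mentioned in the abstract is plain HTTP plus JSON. The sketch below only constructs the documented v3 endpoint URLs and parses a truncated sample payload; no network call is made, and the repository and fields shown are illustrative rather than part of the GHTorrent schema:

```python
import json

API_ROOT = "https://api.github.com"

def repo_url(owner, repo):
    """GitHub REST API endpoint for repository metadata."""
    return f"{API_ROOT}/repos/{owner}/{repo}"

def commits_url(owner, repo, per_page=100):
    """Paginated endpoint for a repository's commit history."""
    return f"{API_ROOT}/repos/{owner}/{repo}/commits?per_page={per_page}"

# Parsing a truncated, illustrative repository payload of the kind the API returns.
payload = json.loads('{"full_name": "octocat/Hello-World", "forks_count": 3, "language": "C"}')
print(repo_url("octocat", "Hello-World"))
print(payload["full_name"], payload["forks_count"])
```

In a real retrieval loop one would follow the API's pagination links and respect its rate limits, which is essentially what a mirroring project must automate at scale.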

van Geenhuizen M.,Technical University of Delft
Environment and Planning C: Government and Policy | Year: 2013

University-industry collaboration in research projects has received little attention in studies on the performance of universities in bringing knowledge to market. The same holds true for the differences between regions and for understanding the hampering factors in collaboration, including regional ones. To fill this gap, in this paper I attempt to characterize the outcomes of technology projects, in terms of market introduction, continuation, stagnation, and failure, and to identify the barriers, particularly regional ones. I also propose an extended use of a tool that facilitates and accelerates market introduction, that is, living labs. The study draws on a database of 370 technology projects covering two different regions in the Netherlands and on in-depth data on 51 such projects in a limited number of technologies. © 2013 Pion and its Licensors.

Garcia S.J.,Technical University of Delft
European Polymer Journal | Year: 2014

Intrinsic and extrinsic self-healing strategies can be employed to mitigate the effects of local damage in order to (partially) restore a lost property or functionality and to avoid premature catastrophic failure of the whole system. It is well known that polymer architecture has a crucial influence on mechanical, physical and thermal properties. However, the effect of polymer architecture on the healing capabilities of self-healing polymers has not yet been studied in detail. This paper addresses the effect of polymer architecture on the intrinsic healing character of polymeric materials using different reversible chemistries and aims at highlighting the need for more studies on this particular topic. © 2014 Elsevier Ltd. All rights reserved.

Bahr H.,Karlsruhe Institute of Technology | Hanssen R.F.,Technical University of Delft
Journal of Geodesy | Year: 2012

An approach is presented to improve orbital state vectors by orbit error estimates derived from residual phase patterns in synthetic aperture radar interferograms. For individual interferograms, an error representation by two parameters is motivated: the baseline error in cross-range and the rate of change of the baseline error in range. For their estimation, two alternatives are proposed: a least-squares approach that requires prior unwrapping, and a less reliable grid-search method handling the wrapped phase. In both cases, reliability is enhanced by mutual control of error estimates in an overdetermined network of linearly dependent interferometric combinations of images. Thus, systematic biases, e.g., due to unwrapping errors, can be detected and iteratively eliminated. Regularising the solution by a minimum-norm condition results in quasi-absolute orbit errors that refer to particular images. For the 31 images of a sample ENVISAT dataset, orbit corrections with a mutual consistency on the millimetre level have been inferred from 163 interferograms. The method is distinguished by its reliability and rigorous geometric modelling of the orbital error signal, but it does not consider interfering large-scale deformation effects. However, a separation may be feasible in a combined processing with persistent scatterer approaches or by temporal filtering of the estimates. © 2012 Springer-Verlag.
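The least-squares alternative can be illustrated with a strongly simplified one-dimensional analogue: estimating a constant term and a range rate from synthetic unwrapped residual phase. The model, coordinates and noise level below are made-up illustrations, not the paper's interferometric geometry:

```python
import numpy as np

# Two-parameter fit to synthetic unwrapped residual phase: a constant term plus
# a linear trend across (normalized) range, recovered with ordinary least squares.
rng = np.random.default_rng(42)
rg = np.linspace(0.0, 1.0, 200)                 # normalized range coordinate
a_true, b_true = 0.7, -1.3                      # illustrative "error" parameters
phase = a_true + b_true * rg + 0.01 * rng.standard_normal(rg.size)

A = np.column_stack([np.ones_like(rg), rg])     # design matrix [1, range]
(a_hat, b_hat), *_ = np.linalg.lstsq(A, phase, rcond=None)
print(round(a_hat, 2), round(b_hat, 2))         # close to a_true, b_true
```

The paper's network adjustment additionally constrains such per-interferogram estimates to be mutually consistent across linearly dependent image combinations.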

Papachristos G.,Technical University of Delft
Environmental Innovation and Societal Transitions | Year: 2014

This paper studies the link between capital goods supply chains and sociotechnical transitions. Research on the latter has so far tended to focus on sustainability, energy and transport systems. Despite the considerable shift from products to services, supply chains are an integral element of most sociotechnical systems and there seems to be no foreseeable substitute for them. Consequently, for transitions to sustainability to take place, the inertia of supply chains in these systems has to be overcome and their environmental impact reduced. The paper explores this with a system dynamics model of a supply chain. While remanufacturing of used products by the retailer and recycling by the supplier can reduce the environmental impact of the supply chain, competition in the market between new and remanufactured products forces them into a situation where improving business and environmental performance is difficult. © 2014 Elsevier B.V.

Faludi A.
Regional Studies | Year: 2013

Territorial cohesion and subsidiarity under the European Union treaties: a critique of the 'territorialism' underlying. The European Union competence for territorial cohesion is subject to the subsidiarity principle regulating relations between authorities at various levels, each concerned with a fixed space. The literature describes the underlying view as 'territorialism'. In reality, space is relative and each area is the point of intersection of numerous configurations. Therefore, territorial authorities cannot deal with all aspects of territorial cohesion, nor can territorial representation be the only source of legitimacy. By enforcing the assumption that decisions by representative bodies as close as possible to citizens safeguard democratic legitimacy, subsidiarity is therefore a stumbling block in the pursuit of territorial cohesion. © 2013 Regional Studies Association.

Van Oosterom P.,Technical University of Delft
Computers, Environment and Urban Systems | Year: 2013

Ten years after the first special issue of Computers, Environment and Urban Systems (CEUS) on 3D cadastres, seeing the progress in this second special issue is impressive. The domain of 3D cadastres has clearly matured in both research and practice. The ever-increasing complexity of infrastructures and densely developed areas requires proper registration of their legal status (private and public), which the existing 2D cadastral registrations can only partly do. During the past decade, various R&D activities have provided better 3D support for the registration of ownership and other rights, restrictions and responsibilities (RRRs). Despite this progress, of which an overview is given in this introduction paper (and is further elaborated upon in subsequent papers of this special issue), our research agenda for the next decade involves many challenges. This paper sketches six remaining 3D cadastres research topics: (1) shared concepts and terminology (standardization), (2) full life cycle in 3D (not only the rights), (3) legal framework, (4) creation and submission of initial 3D spatial units, (5) 3D cadastral visualization, and (6) more formal semantics. © 2013 Elsevier Ltd.

Ghose R.,Technical University of Delft
Geophysics | Year: 2012

A digital 3C array seismic cone penetrometer has been developed for multidisciplinary geophysical and geotechnical applications. Seven digital triaxial microelectromechanical system accelerometers are installed at 0.25-m intervals to make a 1.5-m-long downhole seismic array. The accelerometers have a flat response up to 2 kHz. The seismic array is attached to a class 1 digital seismic cone, which measures cone tip resistance, sleeve friction, pore pressure, and inclination. The downhole 3C array can be used together with impulsive seismic sources and/or high-frequency vibrators that are suitable for high-resolution shallow applications. Results from two field experiments showed that good data quality, including a constant source function within an array, and dense depth-sampling allowed robust estimation of seismic velocity profiles in the shallow subsoil. Using horizontal and vertical seismic sources, downhole 9C seismic array data can be easily acquired. The quality of the shear-wave data is much superior when the surface seismic source is a controlled, high-frequency vibrator instead of a traditional sledgehammer. A remarkable fine-scale correlation in depth between low-strain seismic shear-wave velocity and high-strain cone tip resistance could be observed. The array measurements of the full elastic wavefield and the broad spectral bandwidth are useful in investigating frequency-dependent seismic wave propagation in the porous near-surface soil layers, which is informative of the in situ fluid-flow properties. Stable estimates of dispersive seismic velocity and attenuation can be obtained. © 2012 Society of Exploration Geophysicists.

Burchard H.,Leibniz Institute for Baltic Sea Research | Schuttelaars H.M.,Technical University of Delft
Journal of Physical Oceanography | Year: 2012

Tidal straining, which can mathematically be described as the covariance between eddy viscosity and vertical shear of the along-channel velocity component, has been acknowledged as one of the major drivers for estuarine circulation in channelized tidally energetic estuaries. In this paper, the authors investigate the role of lateral circulation for generating this covariance. Five numerical experiments are carried out, starting with a reference scenario including the full physics and four scenarios in which specific key physical processes are neglected. These processes are longitudinal internal pressure gradient forcing, lateral internal pressure gradient forcing, lateral advection, and the neglect of temporal variation of eddy viscosity. The results for the viscosity-shear covariance are correlated across different experiments to quantify the change due to neglect of these key processes. It is found that the lateral advection of vertical shear of the along-channel velocity component and its interaction with the tidally asymmetric eddy viscosity (which is also modified by the lateral circulation) is the major driving force for estuarine circulation in well-mixed tidal estuaries. © 2012 American Meteorological Society.
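The viscosity-shear covariance described above can be made concrete with a toy tidal cycle; amplitudes and the phase lag are made-up values for illustration, not output of the numerical experiments:

```python
import numpy as np

# Toy tidal cycle: eddy viscosity and along-channel vertical shear oscillate
# with a phase lag; their tidal-mean covariance is the "tidal straining" term.
t = np.linspace(0.0, 2.0 * np.pi, 100_000, endpoint=False)
phase_lag = 0.0                                  # illustrative; vary to see the effect
viscosity = 1.0 + 0.5 * np.cos(t)                # tidally asymmetric eddy viscosity
shear = np.cos(t - phase_lag)                    # vertical shear of along-channel flow

cov = np.mean((viscosity - viscosity.mean()) * (shear - shear.mean()))
print(round(cov, 3))                             # 0.25 * cos(phase_lag) analytically
```

Only when the two signals oscillate out of phase by a quarter cycle does the covariance, and hence this driver of estuarine circulation, vanish.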

Pierce R.L.,Technical University of Delft
Biotechnology Journal | Year: 2013

This bioethics article by Robin Pierce examines the current challenges in bridging science and society. Dr. Pierce discusses the "bi-directionality challenge" and how a meaningful dialogue between scientists and society can be established for socially responsible innovations. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

Jarquin-Laguna A.,Technical University of Delft
Journal of Physics: Conference Series | Year: 2014

An innovative and completely different wind-energy conversion system is studied where a centralized electricity generation within a wind farm is proposed by means of a hydraulic network. This paper presents the dynamic interaction of two turbines when they are coupled to the same hydraulic network. Due to the stochastic nature of the wind and wake interaction effects between turbines, the operating parameters (i.e. pitch angle, rotor speed) of each turbine are different. Time domain simulations, including the main turbine dynamics and laminar transient flow in pipelines, are used to evaluate the efficiency and rotor speed stability of the hydraulic system. It is shown that a passive control of the rotor speed, as proposed in previous work for a single hydraulic turbine, has strong limitations in terms of performance for more than one turbine coupled to the same hydraulic network. It is concluded that in order to connect several turbines, a passive control strategy of the rotor speed is not sufficient and a hydraulic network with constant pressure is suggested. However, a constant pressure network requires the addition of active control at the hydraulic motors and spear valves, increasing the complexity of the initial concept. Further work needs to be done to incorporate an active control strategy and evaluate the feasibility of the constant pressure hydraulic network. © Published under licence by IOP Publishing Ltd.

Van Der Maaten L.,Technical University of Delft
Journal of Machine Learning Research | Year: 2014

The paper investigates the acceleration of t-SNE, an embedding technique that is commonly used for the visualization of high-dimensional data in scatter plots, using two tree-based algorithms. In particular, the paper develops variants of the Barnes-Hut algorithm and of the dual-tree algorithm that approximate the gradient used for learning t-SNE embeddings in O(N log N) time. Our experiments show that the resulting algorithms substantially accelerate t-SNE, and that they make it possible to learn embeddings of data sets with millions of objects. Somewhat counterintuitively, the Barnes-Hut variant of t-SNE appears to outperform the dual-tree variant. ©2014 Laurens van der Maaten.
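The far-field approximation at the heart of the Barnes-Hut variant can be shown in a few lines: a distant cell of points is summarized by its centroid and its count. This toy example illustrates the principle only; it is not van der Maaten's implementation, and all numbers are made up:

```python
import numpy as np

rng = np.random.default_rng(1)
cell = rng.normal(size=(256, 2))                 # a "cell" of embedding points
target = np.array([10.0, 0.0])                   # a point far from the cell

# Exact sum of Student-t similarities (the per-point quantity the gradient needs).
d2 = np.sum((cell - target) ** 2, axis=1)
exact = np.sum(1.0 / (1.0 + d2))

# Barnes-Hut summary: treat the whole cell as N copies of its centroid.
centroid = cell.mean(axis=0)
approx = len(cell) / (1.0 + np.sum((centroid - target) ** 2))

rel_err = abs(approx - exact) / exact
print(f"{rel_err:.3f}")                          # small: the cell is far vs. its extent
```

Applying this summary recursively over a quadtree is what reduces the N pairwise interactions per point to O(log N).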

Savenije H.H.G.,Technical University of Delft
Hydrology and Earth System Sciences | Year: 2010

Heterogeneity and complexity of hydrological processes offer substantial challenges to the hydrological modeller. Some hydrologists try to tackle this problem by introducing more and more detail in their models, or by setting up more and more complicated models starting from basic principles at the smallest possible level. As we know, this reductionist approach leads to ever higher levels of equifinality and predictive uncertainty. On the other hand, simple, lumped and parsimonious models may be too simple to be realistic or representative of the dominant hydrological processes. In this commentary, a new approach is proposed that tries to find the middle way between complex distributed and simple lumped modelling approaches. Here we try to find the right level of simplification while avoiding over-simplification. Paraphrasing Einstein, the maxim is: make a model as simple as possible, but not simpler than that. The approach presented is process based, but not physically based in the traditional sense. Instead, it is based on a conceptual representation of the dominant physical processes in certain key elements of the landscape. The essence of the approach is that the model structure is made dependent on a limited number of landscape classes in which the topography is the main driver, but which can include geological, geomorphological or land-use classification. These classes are then represented by lumped conceptual models that act in parallel. The advantage of this approach over a fully distributed conceptualisation is that it retains maximum simplicity while taking into account observable landscape characteristics. © Author(s) 2010.

Slob E.,Technical University of Delft
Physical Review Letters | Year: 2016

Single-sided Marchenko equations for Green's function construction and imaging relate the measured reflection response of a lossless heterogeneous medium to an acoustic wave field inside this medium. I derive two sets of single-sided Marchenko equations for the same purpose, each in a heterogeneous medium, with one medium being dissipative and the other a corresponding medium with negative dissipation. Double-sided scattering data of the dissipative medium are required as input to compute the surface reflection response in the corresponding medium with negative dissipation. I show that each set of single-sided Marchenko equations leads to Green's functions with a virtual receiver inside the medium: one exists inside the dissipative medium and one in the medium with negative dissipation. This forms the basis of imaging inside a dissipative heterogeneous medium. I relate the Green's functions to the reflection response inside each medium, from which the image can be constructed. I illustrate the method with a one-dimensional example that shows the image quality. The method has a potentially wide range of imaging applications where the material under test is accessible from two sides. © 2016 American Physical Society.

Namdar Zanganeh M.,Xodus Group Bv | Rossen W.R.,Technical University of Delft
SPE Reservoir Evaluation and Engineering | Year: 2013

Foam is a means of improving sweep efficiency that reduces gas mobility by capturing gas in foam bubbles and hindering its movement. Foam enhanced-oil-recovery (EOR) techniques are relatively expensive; hence, it is important to optimize their performance. We present a case study on the conflict between mobility control and injectivity in optimizing oil recovery in a foam EOR process in a simple 3D reservoir with constrained injection and production pressures. Specifically, we examine a surfactant-alternating-gas (SAG) process in which the surfactant-slug size is optimized. The maximum oil recovery is obtained with a surfactant slug just sufficient to advance the foam front to just short of the production well. In other words, the reservoir is partially unswept by foam at the optimum surfactant-slug size. If a larger surfactant slug is used and the foam front breaks through to the production well, the productivity index (PI) is seriously reduced and oil recovery is less than optimal: the benefit of sweeping the far corners of the pattern does not compensate for the harm to PI. A similar effect occurs near the injection well: small surfactant slugs harm injectivity with little or no benefit to sweep. Larger slugs give better sweep with only a modest decrease in injectivity until the foam front approaches the production well. In some cases, SAG is inferior to gasflood (Namdar Zanganeh 2011). Copyright © 2013 Society of Petroleum Engineers.

Davey R.J.,University of Manchester | Schroeder S.L.M.,University of Manchester | Ter Horst J.H.,Technical University of Delft
Angewandte Chemie - International Edition | Year: 2013

The outcome of synthetic procedures for crystalline organic materials strongly depends on the first steps along the molecular self-assembly pathway, a process we know as crystal nucleation. New experimental techniques and computational methodologies have spurred significant interest in understanding the detailed molecular mechanisms by which nuclei form and develop into macroscopic crystals. Although classical nucleation theory (CNT) has served well in describing the kinetics of the processes involved, new proposed nucleation mechanisms are additionally concerned with the evolution of structure and the competing nature of crystallization in polymorphic systems. In this Review, we explore the extent to which CNT and nucleation rate measurements can yield molecular-scale information on this process and summarize current knowledge relating to molecular self-assembly in nucleating systems. Everything starts out small: The synthesis of organic materials depends strongly on the first steps of molecular self-assembly during crystal nucleation. This Review summarizes current knowledge on these processes. Self-association in different solvents can lead to the creation of different building blocks, which form differently packed nuclei and thus in each case specific crystalline phases. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

Pande S.,Technical University of Delft
Water Resources Research | Year: 2013

Quantile hydrologic model selection and structure deficiency assessment is applied in three case studies. The performance of quantile model selection problem is rigorously evaluated using a model structure on the French Broad river basin data set. The case study shows that quantile model selection encompasses model selection strategies based on summary statistics and that it is equivalent to maximum likelihood estimation under certain likelihood functions. It also shows that quantile model predictions are fairly robust. The second case study is of a parsimonious hydrological model for dry land areas in Western India. The case study shows that an intuitive improvement in the model structure leads to reductions in asymmetric loss function values for all considered quantiles. The asymmetric loss function is a quantile specific metric that is minimized to obtain a quantile specific prediction model. The case study provides evidence that a quantile-wise reduction in the asymmetric loss function is a robust indicator of model structure improvement. Finally a case study of modeling daily streamflow for the Guadalupe River basin is presented. A model structure that is least deficient for the study area is identified from nine different model structures based on quantile structural deficiency assessment. The nine model structures differ in interception, routing, overland flow and base flow conceptualizations. The three case studies suggest that quantile model selection and deficiency assessment provides a robust mechanism to compare deficiencies of different model structures and helps to identify better model structures. In addition to its novelty, quantile hydrologic model selection is a frequentist approach that seeks to complement existing Bayesian approaches to hydrological model uncertainty. 
Key Points: Different quantile models for a given structure have non-overlapping information; quantile predictions do not cross when models are monotonic in parameters; quantile model selection can be used to assess model structure deficiency. ©2013. American Geophysical Union. All Rights Reserved.
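The paper's asymmetric loss function is not reproduced in the abstract; as a hedged illustration only, the standard asymmetric (pinball) loss commonly used in quantile estimation can be sketched as follows, where the observations and constant predictors are invented for the example:

```python
import numpy as np

def pinball_loss(obs, pred, tau):
    """Asymmetric (pinball) loss for quantile tau in (0, 1).

    Under-prediction (obs > pred) is weighted by tau, over-prediction
    by (1 - tau); minimizing the mean loss over a model class yields a
    tau-quantile predictor rather than a mean predictor.
    """
    e = np.asarray(obs, dtype=float) - np.asarray(pred, dtype=float)
    return np.mean(np.where(e >= 0, tau * e, (tau - 1.0) * e))

# Toy example: one constant predictor scored at two different quantiles.
obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
low = pinball_loss(obs, np.full(5, 2.0), tau=0.2)   # predictor suits a low quantile
high = pinball_loss(obs, np.full(5, 2.0), tau=0.8)  # same predictor penalized more here
```

A quantile-wise comparison of such losses across model structures is the flavor of the deficiency assessment described above.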

Dekking M.,Technical University of Delft
Theoretical Computer Science | Year: 2012

An interesting class of automatic sequences emerges from iterated paperfolding. The sequences generate curves in the plane with an almost periodic structure. We generalize the results obtained by Davis and Knuth on the self-avoiding and planefilling properties of these curves, giving simple geometric criteria for a complete classification. Finally, we show how the automatic structure of the sequences leads to self-similarity of the curves, which, in a scaling limit, turns the planefilling curves into fractal tiles. For some of these tiles we give a particularly simple formula for the Hausdorff dimension of their boundary. © 2011 Elsevier B.V. All rights reserved.
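As a small illustration of the sequences in question, the regular paperfolding sequence studied by Davis and Knuth can be generated both by iterated folding and by a closed-form rule; encoding folds as 1/0 is a common convention, not notation taken from the paper:

```python
def paperfold_term(n):
    """Closed-form term a(n), n >= 1, of the regular paperfolding
    sequence: strip all factors of 2 from n, then test the remaining
    odd part modulo 4 (1 -> fold one way, 0 -> the other)."""
    while n % 2 == 0:
        n //= 2
    return 1 if n % 4 == 1 else 0

def paperfold_iterate(folds):
    """Iterated folding: each step appends a 1 (the new central crease)
    followed by the complemented reversal of the sequence so far."""
    s = []
    for _ in range(folds):
        s = s + [1] + [1 - b for b in reversed(s)]
    return s

seq = paperfold_iterate(4)          # 2^4 - 1 = 15 crease terms
assert seq == [paperfold_term(n) for n in range(1, len(seq) + 1)]
```

Interpreting each term as a left or right turn of a unit step traces the corresponding curve in the plane (the dragon curve, in this instance).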

Santin O.G.,Technical University of Delft | Santin O.G.,University of Cardiff
Energy and Buildings | Year: 2011

The difference between the actual and predicted energy consumption for heating in housing is thought to be partly attributable to the use of HVAC systems. More reliable data on energy consumption could help in determining the actual energy performance of dwellings and in the search for the most suitable design for housing and home amenities. Further reductions in energy consumption might also be achieved if energy-saving policy programmes were geared to different household groups. The aim of this paper is to statistically determine Behavioural Patterns associated with the energy spent on heating and to identify household and building characteristics that could contribute to the development of energy-User Profiles. This study had two outcomes: it identified Behavioural Patterns to be used in energy calculations and it discerned User Profiles with different behaviours. Five underlying groups of behavioural variables were found, which were used to define the Behavioural Patterns and User Profiles. The groups showed statistically significant differences in the scores for most of the behavioural factors. This study established clear relationships between occupant behaviour and household characteristics. However, it seems difficult to establish relationships between energy consumption on the one hand and Behavioural Patterns and household groups on the other. © 2011 Elsevier B.V. All rights reserved.

Doorn N.,Technical University of Delft
Science and Engineering Ethics | Year: 2012

In recent decades, increasing attention has been paid to the topic of responsibility in technology development and engineering. The discussion of this topic is often guided by questions related to liability and blameworthiness. Recent discussions in engineering ethics call for a reconsideration of the traditional quest for responsibility. Rather than alleged wrongdoing and blaming, the focus should shift to more socially responsible engineering, some authors argue. The present paper aims to explore the different approaches to responsibility in order to see which one is most appropriate to apply to engineering and technology development. Using the example of the development of a new sewage water treatment technology, the paper shows how different approaches for ascribing responsibilities have different implications for engineering practice in general, and R&D or technological design in particular. It was found that there was a tension between the demands that follow from these different approaches, most notably between efficacy and fairness. Although the consequentialist approach with its efficacy criterion turned out to be most powerful, it was also shown that the fairness of responsibility ascriptions should somehow be taken into account. It is proposed to look for alternative, more procedural ways to approach the fairness of responsibility ascriptions. © 2009 The Author(s).

Manders-Huits N.,Technical University of Delft
Science and Engineering Ethics | Year: 2011

Recently, there has been increased attention to the integration of moral values into the conception, design, and development of emerging IT. The most reviewed approach for this purpose in ethics and technology so far is Value-Sensitive Design (VSD). This article considers VSD as the prime candidate for implementing normative considerations into design. Its methodology is considered from a conceptual, analytical, normative perspective. The focus here is on the suitability of VSD for integrating moral values into the design of technologies in a way that aligns with an analytical perspective on the ethics of technology. Despite its promising character, it turns out that VSD falls short in several respects: (1) VSD does not have a clear methodology for identifying stakeholders, (2) the integration of empirical methods with conceptual research within the methodology of VSD is obscure, (3) VSD runs the risk of committing the naturalistic fallacy when using empirical knowledge for implementing values in design, (4) the concept of values, as well as their realization, is left undetermined and (5) VSD lacks a complementary or explicit ethical theory for dealing with value trade-offs. For the normative evaluation of a technology, I claim that an explicit and justified ethical starting point or principle is required. Moreover, explicit attention should be given to the value aims and assumptions of a particular design. The criteria of adequacy for such an approach or methodology follow from the evaluation of VSD as the prime candidate for implementing moral values in design. © 2010 The Author(s).

Dorenbosz P.,Technical University of Delft
ECS Journal of Solid State Science and Technology | Year: 2014

The vacuum-referred binding energies of electrons in the 4f^n levels for all divalent and trivalent lanthanide impurity states in TiO2, ZnO, SnO2, the related compounds MTiO3 and MSnO3 (M = Ca2+, Sr2+, Ba2+), and Ca2SnO4 are presented. They are obtained by collecting data from the literature on the spectroscopy of lanthanide ions, and by combining that data with the chemical shift model. The model provides the energy at the top of the valence band and at the bottom of the conduction band, and it will be shown that those energies are in excellent agreement with what is known from techniques such as photo-electron spectroscopy and electrochemical studies. Electronic level diagrams are presented that explain and predict aspects such as the absence or presence of lanthanide 4f-4f or 5d-4f emission and the preferred lanthanide valence. © 2013 The Electrochemical Society. All rights reserved.

Polder-Verkiel S.E.,Technical University of Delft
Science and Engineering Ethics | Year: 2012

In 2008 a young man committed suicide while his webcam was running. 1,500 people apparently watched as the young man lay dying: when people finally made an effort to call the police, it was too late. This closely resembles the case of Kitty Genovese in 1964, where 39 neighbours supposedly watched an attacker assault a young woman and did not call the police until it was too late. This paper examines the role of internet mediation in cases where people may or may not have been good Samaritans and what their responsibilities were. The method is an intuitive one: intuitions on the various potentially morally relevant differences when it comes to responsibility between offline and online situations are examined. The number of onlookers, their physical nearness and their anonymity have no moral relevance when it comes to holding them responsible. Their perceived reality of the situation and ability to act do have an effect on whether we can hold people responsible, but this doesn't seem to be unique to internet mediation. However, the way in which those factors are intrinsically connected to internet mediation does seem to have a diminishing effect on responsibility in online situations. © 2010 The Author(s).

Jonkers H.M.,Technical University of Delft
Heron | Year: 2011

A typical durability-related phenomenon in many concrete constructions is crack formation. While larger cracks hamper structural integrity, smaller sub-millimeter sized cracks may also result in durability problems, as connected cracks in particular increase matrix permeability. Ingress of water and chemicals can cause premature matrix degradation and corrosion of embedded steel reinforcement. As regular manual maintenance and repair of concrete constructions is costly and in some cases not at all possible, inclusion of an autonomous self-healing repair mechanism would be highly beneficial as it could both reduce maintenance and increase material durability. Therefore, within the Delft Centre for Materials at the Delft University of Technology, the functionality of various self-healing additives is investigated in order to develop a new generation of self-healing concretes. In the present study the crack healing capacity of a specific bio-chemical additive, consisting of a mixture of viable but dormant bacteria and organic compounds packed in porous expanded clay particles, was investigated. Microscopic techniques in combination with permeability tests revealed that complete healing of cracks occurred in bacterial concrete and only partly in control concrete. The mechanism of crack healing in bacterial concrete presumably occurs through metabolic conversion of calcium lactate to calcium carbonate, which results in crack sealing. This biochemically mediated process resulted in efficient sealing of sub-millimeter sized (0.15 mm width) cracks. It is expected that further development of this new type of self-healing concrete will result in a more durable and, moreover, more sustainable concrete, which will be particularly suited for applications in wet environments where reinforcement corrosion tends to impede the durability of traditional concrete constructions.

Brinkman Dzwig Z.E.,Technical University of Delft
Information Services and Use | Year: 2013

Thanks to new technologies, libraries worldwide are going digital and are accessible 24/7 from remote locations. The innovation is even more visible in traditional library tasks such as collection development and acquisition, which are undergoing rapid transformation as well. One of the reasons is the current economic climate, leading to shrinking library budgets. The common 'just in case' acquisition model is becoming outdated. The TU Delft Library takes this challenge seriously and has a hybrid, 'just in time' acquisition model, described in this paper. The new model combines Patron Driven Acquisition (PDA), introduced by Ebook Library (EBL), with our current approval plans for paper books at Blackwell Book Services. Our aim is to get our users involved in the collection development process, whilst maintaining our standard of service and controlling our budget in an efficient way.

Abrishami S.,Ferdowsi University of Mashhad | Naghibzadeh M.,Ferdowsi University of Mashhad | Epema D.H.J.,Technical University of Delft
Future Generation Computer Systems | Year: 2013

The advent of Cloud computing as a new model of service provisioning in distributed systems encourages researchers to investigate its benefits and drawbacks for executing scientific applications such as workflows. One of the most challenging problems in Clouds is workflow scheduling, i.e., the problem of satisfying the QoS requirements of the user as well as minimizing the cost of workflow execution. We have previously designed and analyzed a two-phase scheduling algorithm for utility Grids, called Partial Critical Paths (PCP), which aims to minimize the cost of workflow execution while meeting a user-defined deadline. However, we believe Clouds are different from utility Grids in three ways: on-demand resource provisioning, homogeneous networks, and the pay-as-you-go pricing model. In this paper, we adapt the PCP algorithm for the Cloud environment and propose two workflow scheduling algorithms: a one-phase algorithm called IaaS Cloud Partial Critical Paths (IC-PCP), and a two-phase algorithm called IaaS Cloud Partial Critical Paths with Deadline Distribution (IC-PCPD2). Both algorithms have a polynomial time complexity, which makes them suitable options for scheduling large workflows. The simulation results show that both algorithms have a promising performance, with IC-PCP performing better than IC-PCPD2 in most cases. © 2012 Elsevier B.V. All rights reserved.
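The published IC-PCP and IC-PCPD2 algorithms operate on partial critical paths of a workflow DAG; as a much-simplified, hedged sketch of the deadline-distribution idea only, a user deadline can be spread over a chain of tasks in proportion to their minimum execution times (the task runtimes and deadline below are invented for illustration, not taken from the paper):

```python
def distribute_deadline(min_times, deadline):
    """Spread a user deadline over a chain of workflow tasks in
    proportion to each task's minimum (fastest-service) execution
    time. This is a simplification: the published IC-PCPD2 assigns
    sub-deadlines along partial critical paths of a DAG, not a
    plain chain."""
    total = sum(min_times)
    if deadline < total:
        raise ValueError("deadline infeasible even at fastest service")
    return [deadline * t / total for t in min_times]

# Chain of three tasks with minimum runtimes 2, 3 and 5 time units
# on the fastest instance type, and an overall deadline of 20 units.
sub = distribute_deadline([2, 3, 5], 20.0)   # -> [4.0, 6.0, 10.0]
```

Each task can then be mapped to the cheapest instance type that still meets its sub-deadline, which is the cost-minimization step the abstract alludes to.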

Tuck C.O.,University of Nottingham | Perez E.,University of Nottingham | Horvath I.T.,City University of Hong Kong | Sheldon R.A.,Technical University of Delft | Poliakoff M.,University of Nottingham
Science | Year: 2012

Most of the carbon-based compounds currently manufactured by the chemical industry are derived from petroleum. The rising cost and dwindling supply of oil have been focusing attention on possible routes to making chemicals, fuels, and solvents from biomass instead. In this context, many recent studies have assessed the relative merits of applying different dedicated crops to chemical production. Here, we highlight the opportunities for diverting existing residual biomass - the by-products of present agricultural and food-processing streams - to this end.

Bonato C.,Technical University of Delft
Nature Nanotechnology | Year: 2015

Quantum sensors based on single solid-state spins promise a unique combination of sensitivity and spatial resolution. The key challenge in sensing is to achieve minimum estimation uncertainty within a given time and with high dynamic range. Adaptive strategies have been proposed to achieve optimal performance, but their implementation in solid-state systems has been hindered by the demanding experimental requirements. Here, we realize adaptive d.c. sensing by combining single-shot readout of an electron spin in diamond with fast feedback. By adapting the spin readout basis in real time based on previous outcomes, we demonstrate a sensitivity in Ramsey interferometry surpassing the standard measurement limit. Furthermore, we find by simulations and experiments that adaptive protocols offer a distinctive advantage over the best known non-adaptive protocols when overhead and limited estimation time are taken into account. Using an optimized adaptive protocol we achieve a magnetic field sensitivity of 6.1 ± 1.7 nT Hz^-1/2 over a wide range of 1.78 mT. These results open up a new class of experiments for solid-state sensors in which real-time knowledge of the measurement history is exploited to obtain optimal performance. © 2015 Nature Publishing Group

Janssen M.,Technical University of Delft
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2010

Measuring e-government has traditionally been focused on measuring and benchmarking websites and their use. This provides useful information from a user perspective, but does not provide any information on how well the back-end of e-government is organized and what can be learnt from others. In this paper a self-assessment instrument for organizational and technology infrastructure aspects is developed and tested. This model has been used to benchmark 15 initiatives in the Netherlands in a group session. This helped them to identify opportunities for improvement and to share their experiences and practices. The benchmark results show that disappointingly few of the investigated back-ends (20%) fall in the highest quadrant. Measuring the back-end should capture both organizational and technical elements. A crucial element for gaining in-depth insight with limited resources is the use of a participative, self-assessment approach. Such an approach ensures an emphasis on learning and avoids the adverse aspects of benchmarking and disputes over the outcomes. © 2010 Springer-Verlag Berlin Heidelberg.

Gerritsma M.,Technical University of Delft
Mechanics of Advanced Materials and Structures | Year: 2012

In this paper I try to explain what one means when one refers to compatible or mimetic discretization methods. I will show why this approach is so appealing from a computational and physical point of view. In this respect, this is not really a scientific paper, although some new ideas will be presented, but, as the title suggests, an introduction to a new way of looking at discretization methods. This paper shows the path from physical modeling, to a representation in terms of differential forms and algebraic topological cochains, with an implementation in terms of orthogonal polynomials. © 2012 Copyright Taylor and Francis Group, LLC.

Schijve J.,Technical University of Delft
International Journal of Fatigue | Year: 2012

Several research papers on a new concept of the effective notch stress of a welded joint were recently published. The toe of the weld is modelled with a prescribed radius. A parameter for the severity of the stress distribution at the weld can then be expressed as a Kt-value to be calculated with FE analysis. The new concept offers interesting applications for designing against fatigue crack initiation and predictions of the fatigue limit. It is argued that the radius concept should be replaced by a ratio of two dimensions, and a proposal is made for this purpose. The fatigue assessment should be based on the calculated Kt-value. Limitations of the present codes are discussed. © 2012 Elsevier Ltd. All rights reserved.

Hees R.V.,Technical University of Delft
Materials and Structures/Materiaux et Constructions | Year: 2012

This article focuses on repair or replacement mortars for historical buildings. Both the decision process and the questions arising are dealt with, in order to better define and illustrate technical requirements for mortars to be used for the repair or restoration of monuments and historic buildings (masonry mortars, plasters, renders). The article summarizes a longer document, meant to help professionals in their decisions on interventions, taking into account aspects ranging from the ethics of restoration to technical requirements. © 2012 RILEM.

Groot C.,Technical University of Delft
Materials and Structures/Materiaux et Constructions | Year: 2012

This article gives a summary of functional and performance requirements for renders and plasters for historic masonry (design, execution and maintenance). Specific attention is paid to degradation effects, such as those caused by salt crystallization and freeze-thaw cycling. Traditional as well as designed prefab mortars are considered for repair interventions. © 2012 RILEM.

De Boer P.,University of Groningen | Hoogenboom J.P.,Technical University of Delft | Giepmans B.N.G.,University of Groningen
Nature Methods | Year: 2015

Microscopy has gone hand in hand with the study of living systems since van Leeuwenhoek observed living microorganisms and cells in 1674 using his light microscope. A spectrum of dyes and probes now enable the localization of molecules of interest within living cells by fluorescence microscopy. With electron microscopy (EM), cellular ultrastructure has been revealed. Bridging these two modalities, correlated light microscopy and EM (CLEM) opens new avenues. Studies of protein dynamics with fluorescent proteins (FPs), which leave the investigator 'in the dark' concerning cellular context, can be followed by EM examination. Rare events can be preselected at the light microscopy level before EM analysis. Ongoing development - including dedicated probes, integrated microscopes, large-scale and three-dimensional EM and super-resolution fluorescence microscopy - now paves the way for broad CLEM implementation in biology. © 2015 Nature America, Inc. All rights reserved.

Kotsonis M.,Technical University of Delft
Measurement Science and Technology | Year: 2015

The popularity of plasma actuators as flow control devices has sparked a flurry of diagnostic efforts towards their characterisation. This review article presents an overview of experimental investigations employing diagnostic techniques specifically aimed at AC dielectric barrier discharge, DC corona and nanosecond pulse plasma actuators. Mechanical, thermal and electrical characterisation techniques are treated. Various techniques for the measurement of induced velocity, body force, heating effects, voltage, current, power and discharge morphology are presented and common issues and challenges are described. The final part of this report addresses the effect of ambient conditions on the performance of plasma actuators. © 2015 IOP Publishing Ltd.

Gerkmann T.,University of Oldenburg | Hendriks R.C.,Technical University of Delft
IEEE Transactions on Audio, Speech and Language Processing | Year: 2012

Recently, it has been proposed to estimate the noise power spectral density by means of minimum mean-square error (MMSE) optimal estimation. We show that the resulting estimator can be interpreted as a voice activity detector (VAD)-based noise power estimator, where the noise power is updated only when speech absence is signaled, combined with a required bias compensation. We show that the bias compensation is unnecessary when we replace the VAD by a soft speech presence probability (SPP) with fixed priors. Choosing fixed priors also has the benefit of decoupling the noise power estimator from subsequent steps in a speech enhancement framework, such as the estimation of the speech power and the estimation of the clean speech. We show that the proposed SPP approach maintains the quick noise tracking performance of the bias-compensated MMSE-based approach while exhibiting less overestimation of the spectral noise power and an even lower computational complexity. © 2011 IEEE.
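A hedged sketch of the SPP-based noise tracking idea follows; the fixed prior ratio, the fixed a-priori SNR under speech presence and the smoothing constant are illustrative assumptions, not the paper's exact parameter choices:

```python
import numpy as np

def spp_noise_tracker(periodograms, alpha=0.8, xi_h1_db=15.0, prior_ratio=1.0):
    """Per-frequency-bin noise power tracking via a soft speech
    presence probability (SPP) with fixed priors. For each frame
    periodogram |Y|^2:
      1. posterior speech presence probability under a complex
         Gaussian model with fixed a-priori SNR xi under H1,
      2. MMSE-style estimate of the noise periodogram,
      3. recursive smoothing of the noise power."""
    xi = 10.0 ** (xi_h1_db / 10.0)          # fixed a-priori SNR under H1
    sigma2 = periodograms[0]                # initialize from the first frame
    out = []
    for y2 in periodograms:
        gamma = y2 / max(sigma2, 1e-12)     # a-posteriori SNR
        p_h1 = 1.0 / (1.0 + prior_ratio * (1.0 + xi)
                      * np.exp(-gamma * xi / (1.0 + xi)))
        n2_mmse = (1.0 - p_h1) * y2 + p_h1 * sigma2
        sigma2 = alpha * sigma2 + (1.0 - alpha) * n2_mmse
        out.append(sigma2)
    return out

frames = [1.0] * 10 + [100.0] * 3   # constant noise, then a loud speech burst
est = spp_noise_tracker(frames)     # estimate stays near the noise floor
```

During the burst the posterior SPP saturates near one, so the noise estimate is effectively frozen: this is the soft, bias-free counterpart of "update only in speech absence".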

Walraven J.,Technical University of Delft
Structural Concrete | Year: 2013

The Model Code for Concrete Structures 2010 is a recommendation for the design of structural concrete, written with the intention of giving guidance for future codes. As such, the results of the newest research and development work are used to generate recommendations for structural concrete at the level of the latest state of the art. In carrying out this exercise, areas are inevitably found where information is insufficient, thus inviting further study. This paper begins with a brief introduction to the new expertise and ideas implemented in fib Model Code 2010, followed by a treatment of areas where knowledge appeared to be insufficient or even lacking and where further research might be useful. Copyright © 2013 Ernst & Sohn Verlag für Architektur und technische Wissenschaften GmbH & Co. KG, Berlin.

Van Mieghem P.,Technical University of Delft
Computing (Vienna/New York) | Year: 2011

Serious epidemics, both in cyber space as well as in our real world, are expected to occur with high probability, which justifies the investigation of virus spread models in (contact) networks. The N-intertwined virus spread model of the SIS type is introduced as a promising and analytically tractable model whose steady-state behavior is fairly completely determined. Compared to the exact SIS Markov model, the N-intertwined model makes only one approximation of a mean-field kind, which results in an upper bound on the exact model for finite network size N and improves in accuracy with N. We review many properties theoretically, thereby showing, besides the flexibility to extend the model to a fully heterogeneous setting, that much insight can be gained that remains hidden in the exact Markov model. © 2011 Springer-Verlag.
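The N-intertwined mean-field equations can be integrated directly; the sketch below simulates the SIS dynamics on a complete graph, where above the epidemic threshold the metastable infection probability has the known closed form 1 - delta/(beta*(n-1)):

```python
import numpy as np

def nimfa_sis(adj, beta, delta, t_end=50.0, dt=0.01, v0=0.5):
    """Euler integration of the N-intertwined mean-field SIS model:
        dv_i/dt = beta * (1 - v_i) * sum_j a_ij v_j - delta * v_i,
    where v_i is the infection probability of node i, beta the
    infection rate over a link and delta the curing rate."""
    A = np.asarray(adj, dtype=float)
    v = np.full(A.shape[0], v0)
    for _ in range(int(t_end / dt)):
        v += dt * (beta * (1.0 - v) * (A @ v) - delta * v)
    return v

# Complete graph on 5 nodes: the threshold is 1/lambda_1(A) = 1/4, and
# for beta/delta above it the infection probability settles at
# 1 - delta/(beta*(n-1)) on every node.
K5 = np.ones((5, 5)) - np.eye(5)
v_inf = nimfa_sis(K5, beta=1.0, delta=1.0)   # approaches 0.75 per node
```

The mean-field upper-bound property mentioned in the abstract means the exact Markovian infection probabilities lie below these values for finite N.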

Mayer I.,Technical University of Delft
Procedia Computer Science | Year: 2012

The author presents the methodological background and underlying research design of an ongoing scientific research project concerned with the scientific evaluation of serious games and/or computer-based simulation-games (SG) for advanced learning. The main questions of this research project are: 1. What are the requirements and design principles for a comprehensive social-scientific methodology for the evaluation of SG? 2. To what extent do SG contribute to advanced learning? 3. What factors contribute to, or determine, this learning? 4. To what extent and under what conditions can SG-based learning be transferred to the real world (RW)? Between 2004 and 2012, several hundred SG sessions in the Netherlands with twelve different SG were evaluated systematically, uniformly and quantitatively to give a data-set of 2100 respondents in higher education and in work-organizations. The author presents the research model, the quasi-experimental design and the evaluation instruments. The focus in this article is on the methodology and data-set, to establish a proper foundation for forthcoming publications on empirical results. © 2012 The Authors. Published by Elsevier B.V.

Zadpoor A.A.,Technical University of Delft
Journal of the Mechanical Behavior of Biomedical Materials | Year: 2013

Theoretical modeling of bone tissue adaptation started several decades ago. Many important problems have been addressed in this area of research during the last decades. However, many important questions remain unanswered. In this paper, an overview of open problems in theoretical modeling of bone tissue adaptation is presented. First, the principal elements of bone tissue adaptation models are defined and briefly reviewed. Based on these principal elements, four categories of open problems are identified. Two of these categories primarily include forward problems, while the two others include inverse problems. In each of the identified categories, important open problems are highlighted and their importance is discussed. It is shown that most previous studies on the theoretical modeling of bone tissue adaptation have focused on the problems of the first category, and not much has been done in the three other categories. The paper tries to highlight these potentially important problems that have so far been largely overlooked and to inspire new avenues of research. © 2013 Elsevier Ltd.

Pertijs M.A.P.,Technical University of Delft | Kindt W.J.,National Semiconductor
IEEE Journal of Solid-State Circuits | Year: 2010

This paper presents a precision general-purpose current-feedback instrumentation amplifier (CFIA) that employs a combination of ping-pong auto-zeroing and chopping to cancel its offset and 1/f noise. A comparison of offset-cancellation techniques shows that neither chopping nor auto-zeroing is an ideal solution for general-purpose CFIAs, since chopping results in output ripple, and auto-zeroing is associated with increased low-frequency noise. The presented CFIA mitigates these unintended side effects through a combination of these techniques. A ping-pong auto-zeroed input stage with slow-settling offset-nulling loops is applied to limit the bandwidth of the increased noise to less than half of the auto-zeroing frequency. This noise is then modulated away from DC by chopping the input stage at half the auto-zeroing frequency, reducing the low-frequency noise to the 27 nV/√Hz white-noise level, without introducing extra output ripple. The auto-zeroing is augmented with settling phases to further reduce output transients. The CFIA was realized in a 0.5 μm analog CMOS process and achieves a typical offset of 2.8 μV and a CMRR of 140 dB in a common-mode voltage range that includes the negative supply. © 2006 IEEE.

Schifferstein H.N.J.,Technical University of Delft
Food Quality and Preference | Year: 2012

Labeled Magnitude Scales (LMS) have gained substantial popularity in the sensory community. They were claimed to outperform traditional response methods, such as category rating and magnitude estimation, because they allegedly generated ratio-level data, enabled valid comparison of individual and group differences, and were not susceptible to ceiling effects (e.g., Green, Shaffer, & Gilmore, 1993; Lim, Wood, & Green, 2009). However, none of these claims seems to be well-founded. Although responses on the LMS are highly similar to those obtained through magnitude estimation, it is questionable whether any of these methods yields ratio-level data. In addition, comparing LMS data between individuals and groups may be invalid, because LMS data vary with manipulation of experimental context. Furthermore, restricting the LMS at the upper end of the scale possibly makes it susceptible to ceiling effects. Therefore, none of the original claims seems to hold. Moreover, the LMS holds a disadvantage compared to more traditional scaling methods, in that no simple cognitive algebraic model seems to underlie its responses, which makes it unclear what LMS responses exactly signify. © 2012 Elsevier Ltd.

Talmon A.M.,Technical University of Delft
Particulate Science and Technology | Year: 2013

In pseudo-homogeneous pipeline flow of sand-water mixtures the measured hydraulic resistance is higher than for the flow of water, but less than for a liquid having the same density as the mixture and a viscosity equal to that of water. A new analytical model is devised for pipe wall friction in which there is a watery viscous sublayer along the pipe wall. It is assumed that the presence of solids does not affect the viscous properties of the mixture, and that grain stresses are negligible. A satisfactory result is obtained in a particle range of 0.1 to 2 mm median grain diameter and volumetric concentrations up to about 30%. It is found that the model describes a lower bound for hydraulic resistance. The theoretical concept can be used as a basis for further developments into the heterogeneous regime, where additional physical processes come into play. © 2013 Taylor & Francis Group, LLC.

Mobius W.,Harvard University | Laan L.,Technical University of Delft
Cell | Year: 2015

An increasing number of publications include modeling. Often, such studies help us to gain a deeper insight into the phenomena studied and break down barriers between experimental and theoretical communities. However, combining experimental and theoretical work is challenging for authors, reviewers, and readers. To help maximize the usefulness and impact of combined theoretical and experimental research, this Primer describes the purpose, usefulness, and different types of models and addresses the practical aspect of integrated publications by outlining characteristics of good modeling, presentation, and fruitful collaborations. © 2015 Elsevier Inc. All rights reserved.

Papachristos G.,Technical University of Delft
Energy and Buildings | Year: 2015

Twenty percent of total energy consumption in the Netherlands is household electricity consumption, driven by household electric appliances, whose number has grown in recent years. The paper explores the effect of smart meter introduction, appliance efficiency and consumer behaviour on reducing electricity consumption in the Netherlands. It does so by combining two perspectives: a sociotechnical approach and a bottom-up simulation approach. The range of scenarios explored through simulation in the paper provides an understanding of the interplay between efficiency, smart meter diffusion and consumer behaviour. The results show their effect on electricity consumption and suggest that further effort is required to control and reduce it. Insights from the paper suggest that future studies should disaggregate with respect to a number of factors. © 2014 Elsevier B.V.

Van Mieghem P.,Technical University of Delft
Physical Review E - Statistical, Nonlinear, and Soft Matter Physics | Year: 2016

A general two-layer network consists of two networks G1 and G2, whose interconnection pattern is specified by the interconnectivity matrix B. We deduce desirable properties of B from a dynamic process point of view. Many dynamic processes are described by the Laplacian matrix Q. A regular topological structure of the interconnectivity matrix B (constant row and column sum) enables the computation of a nontrivial eigenmode (eigenvector and eigenvalue) of Q. The latter eigenmode is independent from G1 and G2. Such a regularity in B, associated to equitable partitions, suggests design rules for the construction of interconnected networks and is deemed crucial for the interconnected network to show intriguing behavior, as discovered earlier for the special case where B=wI refers to an individual node to node interconnection with interconnection strength w. Extensions to a general m-layer network are also discussed. © 2016 American Physical Society.
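The special case B = wI mentioned above admits an easily verified nontrivial eigenmode: for the two-layer Laplacian Q, the vector that is +1 on one layer and -1 on the other is an eigenvector with eigenvalue 2w, independent of G1 and G2. A minimal numerical check (the two 4-node layers are arbitrary examples chosen for illustration):

```python
import numpy as np

def laplacian(adj):
    """Graph Laplacian L = D - A of an adjacency matrix."""
    A = np.asarray(adj, dtype=float)
    return np.diag(A.sum(axis=1)) - A

def supra_laplacian(L1, L2, w):
    """Laplacian of a two-layer network with node-to-node
    interconnection B = w*I:  Q = [[L1 + wI, -wI], [-wI, L2 + wI]]."""
    n = L1.shape[0]
    I = np.eye(n)
    return np.block([[L1 + w * I, -w * I],
                     [-w * I, L2 + w * I]])

G1 = [[0,1,1,0],[1,0,0,1],[1,0,0,1],[0,1,1,0]]   # 4-cycle
G2 = [[0,1,1,1],[1,0,1,1],[1,1,0,1],[1,1,1,0]]   # complete graph K4
w = 0.5
Q = supra_laplacian(laplacian(G1), laplacian(G2), w)
x = np.concatenate([np.ones(4), -np.ones(4)])
# Q @ x equals 2*w*x regardless of the two layer topologies
```

The cancellation works because each row of L1 and L2 sums to zero, which is exactly the constant row/column-sum regularity of B the abstract identifies as the design rule.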

Bakker M.,Technical University of Delft
Water Resources Research | Year: 2014

The Dupuit solution for interface flow toward the coast in a confined aquifer is compared to a new exact solution, which is obtained with the hodograph method and conformal mapping. The position of the toe of the interface is a function of two dimensionless parameters: the ratio of the hydraulic gradient upstream of the interface, where flow is one-dimensional, over the dimensionless density difference, and the ratio of the horizontal hydraulic conductivity over the vertical hydraulic conductivity. The Dupuit interface, which neglects resistance to vertical flow, is a very accurate approximation of the exact interface for isotropic aquifers. The difference in the position of the toe between the exact and Dupuit solutions increases when the vertical anisotropy increases. For highly anisotropic aquifers, it is proposed to add an effective resistance layer along the bottom of the sea in Dupuit models. The resistance of the layer is chosen such that the head in the Dupuit model is equal to the head in the exact solution upstream of the interface, where flow is one-dimensional. Key points: a new exact solution for interface flow in a confined aquifer; the Dupuit interface gives a good prediction of the toe position in isotropic aquifers; a lumped resistance along the streambed improves Dupuit models with anisotropy. © 2014. American Geophysical Union. All Rights Reserved.

Rhee C.V.,Technical University of Delft
Proceedings of the Institution of Civil Engineers: Maritime Engineering | Year: 2015

Over recent decades, the breaching process as a production mechanism for stationary suction dredgers has become less important, and therefore little research has been directed towards this process. Until some years ago it was not well known as a cause of slope failures outside the dredging community, but it is now more widely accepted as a failure mechanism in the geotechnical world. In this paper the unstable breaching mechanism is explained. The breaching process is simulated using a two-dimensional computational fluid dynamics code with a special boundary condition at the bed. This leads to a better understanding of the process and of how unstable situations can be predicted. © 2015 Thomas Telford Services Ltd. All rights reserved.

Saraber A.,Technical University of Delft
Fuel Processing Technology | Year: 2012

As the first part of an extensive study on the relation between co-combustion and fly ash quality, co-combustion fly ash was generated from Paso Diablo coal and one secondary fuel, namely poultry dung, demolition wood or solid recovered fuel. The co-combustion experiments were performed in a 1 MWth test boiler (dry bottom). The fly ash was investigated in relation to its performance for use in concrete. It was shown that the properties of co-combustion fly ash can be explained by the characteristics of the fuel and the combustion process. It was further shown that even fly ash obtained at very high co-combustion percentages (up to 33% e/e) is able to meet the basic requirements of the European standard (NEN-EN 450). Whether an individual co-combustion fly ash does so depends on the nature of the co-fired fuel and especially on the amount and nature of its inorganic matter. © 2012 Elsevier B.V.

Van Mieghem P.,Technical University of Delft
Physical Review E - Statistical, Nonlinear, and Soft Matter Physics | Year: 2014

By invoking the famous Fortuin, Kasteleyn, and Ginibre (FKG) inequality, we prove the conjecture that the correlation of infection at the same time between any pair of nodes in a network cannot be negative for (exact) Markovian susceptible-infected-susceptible (SIS) and susceptible-infected-removed (SIR) epidemics on networks. The truth of the conjecture establishes that the N-intertwined mean-field approximation (NIMFA) upper bounds the infection probability in any graph, so that network design based on NIMFA always leads to safe protections against malware spread. However, when the infection and/or curing are not Poisson processes, the infection correlation between two nodes can be negative. © 2014 American Physical Society.

De Bekker-Grob E.W.,Rotterdam University | Chorus C.G.,Technical University of Delft
PharmacoEconomics | Year: 2013

Background: A new modelling approach for analysing data from discrete-choice experiments (DCEs) has recently been developed in transport economics, based on the notion of regret minimization-driven choice behaviour. This so-called Random Regret Minimization (RRM) approach forms an alternative to the dominant Random Utility Maximization (RUM) approach. The RRM approach is able to model semi-compensatory choice behaviour and compromise effects, while being as parsimonious and formally tractable as the RUM approach. Objectives: Our objectives were to introduce the RRM modelling approach to healthcare-related decisions, and to investigate its usefulness in this domain. Methods: Using data from DCEs aimed at determining valuations of attributes of osteoporosis drug treatments and human papillomavirus (HPV) vaccinations, we empirically compared RRM models, RUM models and Hybrid RUM-RRM models in terms of goodness of fit, parameter ratios and predicted choice probabilities. Results: In terms of model fit, the RRM model did not outperform the RUM model significantly in the case of the osteoporosis DCE data (p = 0.21), whereas in the case of the HPV DCE data, the Hybrid RUM-RRM model outperformed the RUM model (p < 0.05). Differences in predicted choice probabilities between RUM models and (Hybrid RUM-) RRM models were small. Derived parameter ratios did not differ significantly between model types, but trade-offs between attributes implied by the two models can vary substantially. Conclusion: Differences in model fit between RUM, RRM and Hybrid RUM-RRM were found to be small. Although our study did not show significant differences in parameter ratios, the RRM and Hybrid RUM-RRM models did feature considerable differences in terms of the trade-offs implied by these ratios. In combination, our results suggest that the RRM and Hybrid RUM-RRM modelling approaches hold the potential of offering new and policy-relevant insights for health researchers and policy makers.
© 2013 Springer International Publishing Switzerland.
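For readers unfamiliar with RRM, the sketch below implements the standard random-regret formulation (regret of alternative i accumulates ln(1 + exp(beta_m (x_jm - x_im))) over all competitors j and attributes m, with logit choice probabilities on negative regret). The attribute matrix and taste parameters are purely illustrative stand-ins, not estimates from the DCE data discussed here.

```python
import numpy as np

def random_regret(X, beta):
    """Systematic regret R_i = sum_{j!=i} sum_m ln(1 + exp(beta_m (x_jm - x_im))).

    X: (n_alternatives, n_attributes) attribute levels; beta: taste parameters.
    """
    n = X.shape[0]
    R = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if j != i:
                R[i] += np.sum(np.log1p(np.exp(beta * (X[j] - X[i]))))
    return R

def choice_probabilities(X, beta):
    """Multinomial logit on negative regret: minimize regret instead of
    maximizing utility."""
    u = -random_regret(X, beta)
    e = np.exp(u - np.max(u))        # numerically stable softmax
    return e / e.sum()

# three hypothetical treatments described by (efficacy, side-effect risk)
X = np.array([[0.8, 0.3],
              [0.6, 0.1],
              [0.9, 0.5]])
beta = np.array([2.0, -3.0])         # illustrative attribute weights
p = choice_probabilities(X, beta)
print(p.round(3))
```

Because regret is driven by pairwise attribute comparisons, "compromise" alternatives that are never strongly outperformed tend to receive a probability boost relative to a RUM model with the same parameters.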

Mlecnik E.,Technical University of Delft
Energy Efficiency | Year: 2012

Europe expects the housing sector to evolve towards 'nearly zero-energy' dwellings. Meanwhile, general terms and research, marketing and legal definitions concerning such dwellings have already been introduced. Appraisal of existing definitions is now needed for further policy development. This paper examines which nearly zero-energy terms can be expected to be adopted in Belgium and the Netherlands. The research uses an interview method based on innovation diffusion theory. The analysis traces the regional adoption trajectory of relevant definitions and examines the opportunities and barriers for the inclusion of existing definitions in regional energy policy. The analysis shows that, whilst international prominence of the terms 'net zero energy' and 'net zero carbon' (in addition to 'low energy' and 'passive house') is observed, in Belgium and the Netherlands 'passive house' and 'energy neutral' are preferred. The research findings indicate that the adoption of already existing definitions for nearly zero-energy houses will depend on the region and can prove a very complex process with several conflicting issues. Terms should be clearly defined and used at all political and marketing levels. It is recommended to enhance the relative advantage, demonstrability, visibility and compatibility of favoured definitions by policy initiatives. © 2011 The Author(s).

Schijve J.,Technical University of Delft
International Journal of Fatigue | Year: 2014

The fatigue life of specimens and structures covers two periods: a crack initiation period and a crack growth period. Micro-crack nucleation and initial micro-crack growth are a surface phenomenon controlled by the local stress cycles at the material surface. The subsequent macro-crack growth depends on the fatigue crack growth resistance of the material as a bulk property. The fatigue behaviour in both periods is qualitatively reasonably well understood; the quantitative analysis, however, is problematic. Moreover, the number of variables which can affect the fatigue behaviour of specimens and structures is large. The paper focuses on a realistic understanding of the prediction problem, especially the prediction of the fatigue limit of notched specimens and structures. The effect of a salt water environment on the fatigue limit is discussed. As a special topic, comments are presented on the notch effect of welded joints. Shortcomings of the so-called effective notch concept are indicated. Comments on the design recommendations of the International Institute of Welding are presented. The significance of realistic experiments and a thorough FE analysis is emphasized. © 2013 Elsevier Ltd. All rights reserved.

Morishita Y.,Geospatial Information Authority of Japan | Hanssen R.F.,Technical University of Delft
IEEE Transactions on Geoscience and Remote Sensing | Year: 2015

Temporal decorrelation is one of the main limitations of synthetic aperture radar (SAR) interferometry. For nonurban areas, its mechanism is very complex, as it is strongly dependent on vegetation types and their temporal dynamics, actual land use, soil types, and climatological circumstances. Yet, an a priori assessment and comprehension of the expected coherence levels of interferograms are required for designing new satellite missions (in terms of frequency, resolution, and repeat orbits), for choosing the optimal data sets for a specific application, and for feasibility studies for new interferometric applications. Although generic models for temporal decorrelation have been proposed, their parameters depend heavily on the land use in the area of interest. Here, we report the behavior of temporal decorrelation for a specific class of land use: pasture on drained peat soils. We use L-, C-, and X-band SAR observations from the Advanced Land Observation Satellite (ALOS), European Remote Sensing Satellite, Envisat, RADARSAT-2, and TerraSAR-X missions. We present a dedicated temporal decorrelation model using three parameters and demonstrate how coherent information can be retrieved as a function of frequency, repeat intervals, and coherence estimation window sizes. New satellites such as Sentinel-1 and ALOS-2, with shorter repeat intervals than their predecessors, will enhance the possibility of obtaining a coherent signal over pasture. © 2014 IEEE.

Zadpoor A.A.,Technical University of Delft
Biomaterials Science | Year: 2015

The geometry of porous scaffolds that are used for bone tissue engineering and/or bone substitution has recently been shown to significantly influence the cellular response and the rate of bone tissue regeneration. Most importantly, it has been shown that the rate of tissue generation increases with curvature and is much larger on concave surfaces as compared to convex and planar surfaces. In this work, recent discoveries concerning the effects of geometrical features of porous scaffolds such as surface curvature, pore shape, and pore size on the cellular response and bone tissue regeneration process are reviewed. In addition to reviewing the recent experimental observations, we discuss the mechanisms through which geometry affects the bone tissue regeneration process. Of particular interest are the theoretical models that have been developed to explain the role of geometry in the bone tissue regeneration process. We then follow with a section on the implications of the observed phenomena for geometrical design of porous scaffolds including the application of predictive computational models in geometrical design of porous scaffolds. Moreover, some geometrical concepts in the design of porous scaffolds such as minimal surfaces and porous structures with geometrical gradients that have not been explored before are suggested for future studies. We especially focus on the porous scaffolds manufactured using additive manufacturing techniques where the geometry of the porous scaffolds could be precisely controlled. The paper concludes with a general discussion of the current state-of-the-art and recommendations for future research. © The Royal Society of Chemistry 2015.

Dabrowski M.,Technical University of Delft
European Urban and Regional Studies | Year: 2014

This article draws on the concept of Europeanization to assess the EU cohesion policy’s capacity to promote inclusive regional governance and cooperation in regional development initiatives in Central and Eastern European countries. EU cohesion policy is often credited with improving cooperation and coordination in the delivery of the regional development policy through the application of multi-level governance enshrined in the partnership principle. By imposing a close partnership among a variety of actors, cohesion policy has the capacity to alter domestic relations between the centre and the periphery, and to create a broader scope for regional and bottom-up involvement in economic development policy. However, a lack of tradition of decentralization and collaborative policy-making, as well as a limited capacity of sub-national actors, can result in uneven outcomes of the application of the partnership principle across countries and regions. This raises questions about the transferability of the partnership approach to new Member States characterized by weak sub-national institutions, a legacy of centralized policy-making and limited civic involvement. This paper addresses this issue by comparing horizontal partnership arrangements put in place for the purpose of cohesion policy implementation and examining their impacts on the patterns of sub-national governance. The horizontal partnership arrangements are compared across three regions in countries with differentiated systems of territorial administration: Poland, the Czech Republic and Hungary. © The Author(s) 2013.

de Jong A.T.,Technical University of Delft
The Journal of the Acoustical Society of America | Year: 2010

Cavity aeroacoustic noise is relevant to the aerospace and automotive industries and has been widely investigated since the 1950s. Most investigations so far consider cavities where opening length and width are of similar scale. The present investigation focuses on a less investigated setup, namely cavities that resemble the door gaps of automobiles. These cavities are both slender (width much greater than length or depth) and partially covered. Furthermore, they are under the influence of a low Mach number flow with a relatively thick boundary layer. Under certain conditions, these gaps can produce tonal noise. The present investigation attempts to reveal the aeroacoustic mechanism of this tonal noise for higher resonance modes. Experiments have been conducted on a simplified geometry, where unsteady internal pressures have been measured at different spanwise locations. With increasing velocity, several resonance modes occur. To obtain the higher mode shapes, the cavity acoustic response is simulated and compared with experiment. Using the frequency-filtered simulated pressure field, the higher mode shapes are retrieved. The mode shapes can be interpreted as the slender cavity self-organizing into separate Helmholtz resonators that interact with each other. Based on this, an analytical model is derived that shows good agreement with the simulations and experimental results.
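The Helmholtz-resonator interpretation can be illustrated with the classical single-resonator frequency formula f = (c / 2 pi) sqrt(S / (V L_eff)). The sketch below uses invented door-gap-like dimensions, not values from the experiments, and ignores the inter-resonator coupling that produces the higher modes.

```python
import math

def helmholtz_frequency(c, S, V, L_eff):
    """Resonance frequency of a Helmholtz resonator with neck opening area S,
    effective neck length L_eff (including end corrections), cavity volume V,
    and speed of sound c: f = (c / 2*pi) * sqrt(S / (V * L_eff))."""
    return c / (2.0 * math.pi) * math.sqrt(S / (V * L_eff))

# illustrative dimensions for one spanwise segment of a slender covered gap
c = 343.0               # speed of sound in air, m/s
S = 2e-3 * 0.05         # slit opening area, m^2 (2 mm x 50 mm)
V = 0.01 * 0.02 * 0.05  # internal cavity volume, m^3 (10 x 20 x 50 mm)
L_eff = 5e-3            # effective neck length, m
print(round(helmholtz_frequency(c, S, V, L_eff), 1))  # a tone in the kHz range
```

With several such segments sharing the internal volume, the coupled system supports mode shapes where neighbouring segments oscillate in or out of phase, which is the self-organization the abstract refers to.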

Serrano-Ruiz J.C.,University of Alicante | Ramos-Fernandez E.V.,Technical University of Delft | Sepulveda-Escribano A.,University of Alicante
Energy and Environmental Science | Year: 2012

Biodiesel and bioethanol, produced by simple and well-known transesterification and fermentation technologies, dominate the current biofuel market. However, their implementation in the hydrocarbon-based transport infrastructure faces serious energy-density and compatibility issues. The transformation of biomass into liquid hydrocarbons chemically identical to those currently used in our vehicles can help to overcome these issues eliminating the need to accommodate new fuels and facilitating a smooth transition toward a low carbon transportation system. These strong incentives are favoring the onset of new technologies such as hydrotreating and advanced microbial synthesis which are designed to produce gasoline, diesel and jet fuels from classical biomass feedstocks such as vegetable oils and sugars. The present Perspective paper intends to provide a state-of-the-art overview of these promising routes. © The Royal Society of Chemistry 2012.

Janic M.,Technical University of Delft
International Journal of Hydrogen Energy | Year: 2014

Global commercial air transportation has grown over the past two decades at a rather stable annual rate of 4.5-5% in the passenger and 6% in the cargo segment. Such developments have contributed to globalization of the economy and overall social welfare while at the same time increasing impacts on the environment and society in terms of fuel consumption from non-renewable sources and related emissions of GHG (greenhouse gases), land use, congestion, and local noise. In particular, further growth of emissions of GHG driven by growth of air transportation demand could contribute to global warming and consequent climate change. This paper, as an update of the author's previous research, investigates the potential of LH2 (liquid hydrogen) as a breakthrough solution for greening commercial air transportation. This includes analyzing the main sources of emissions of GHG, their impacts, mitigating measures and their effects, all considered under the conditions of using (conventional) jet A fuel (kerosene), a derivative of crude oil. Then the characteristics of LH2 as an alternative fuel and its effects on the emissions of GHG (particularly CO2) are considered, the latter by using dedicated models and their scenario-based application to the long-term future development of international commercial air transportation. The results indicate that the gradual replacement of the jet A (conventional) with an LH2-fuelled (cryogenic) aircraft fleet could result in total cumulative emissions of GHG (CO2) stabilizing and then reducing to and below the specified target level, despite further continuous growth of air transportation demand. Thus, greening of commercial air transportation could be achieved in the long-term future. © 2014 Hydrogen Energy Publications, LLC.

Ferreira J.A.,Technical University of Delft
IEEE Transactions on Power Electronics | Year: 2013

The modular multilevel converter (M2C) has become an increasingly important topology in medium- and high-voltage applications. A limitation is that it relies on positive and negative half-cycles of the ac output voltage waveform to achieve charge balance on the submodule capacitors. To overcome this constraint a secondary power loop is introduced that exchanges power with the primary power loops at the input and output. Power is exchanged between the primary and secondary loops by using the principle of orthogonality of power flow at different frequencies. Two modular multilevel topologies are proposed to step up or step down dc in medium- and high-voltage dc applications: the tuned filter modular multilevel dc converter and the push-pull modular multilevel dc converter. An analytical simulation of the latter converter is presented to explain the operation. © 1986-2012 IEEE.

Casella F.,Polytechnic of Milan | Colonna P.,Technical University of Delft
Applied Thermal Engineering | Year: 2012

Integrated Gasification Combined Cycle (IGCC) power plants are an effective option to reduce emissions and implement carbon-dioxide sequestration. The combination of a very complex fuel-processing plant and a combined cycle power station leads to challenging problems as far as dynamic operation is concerned. Dynamic performance is extremely relevant because recent developments in the electricity market push toward an ever more flexible and varying operation of power plants. A dynamic model of the entire system and models of its sub-systems are indispensable tools in order to perform computer simulations aimed at process and control design. This paper presents the development of the lumped-parameter dynamic model of an entrained-flow gasifier, with special emphasis on the modeling approach. The model is implemented into software by means of the Modelica language and validated by comparison with one set of data related to the steady operation of the gasifier of the Buggenum power station in the Netherlands. Furthermore, in order to demonstrate the potential of the proposed modeling approach and the use of simulation for control design purposes, a complete model of an exemplary IGCC power plant, including its control system, has been developed by re-using existing models of combined cycle plant components; the results of a load dispatch ramp simulation are presented and briefly discussed. © 2011 Elsevier Ltd. All rights reserved.

Velsink H.,Technical University of Delft
Journal of Applied Geodesy | Year: 2016

Adjustment and testing of a combination of stochastic and nonstochastic observations is applied to the deformation analysis of a time series of 3D coordinates. Nonstochastic observations are constant values that are treated as if they were observations. They are used to formulate constraints on the unknown parameters of the adjustment problem. Thus they describe deformation patterns. If deformation is absent, the epochs of the time series are supposed to be related via affine, similarity or congruence transformations. S-basis invariant testing of deformation patterns is treated. The model is experimentally validated by showing the procedure for a point set of 3D coordinates, determined from total station measurements during five epochs. The modelling of two patterns, the movement of just one point in several epochs, and of several points, is shown. Full, rank deficient covariance matrices of the 3D coordinates, resulting from free network adjustments of the total station measurements of each epoch, are used in the analysis. © 2016 Walter de Gruyter GmbH, Berlin/Munich/Boston.

Grunwald D.,Technical University of Delft | Singer R.H.,Yeshiva University
Current Opinion in Cell Biology | Year: 2012

The nuclear pore complex (NPC) has long been viewed as a point-like entry and exit channel between the nucleus and the cytoplasm. New data support a different view whereby the complex displays distinct spatial dynamics of variable duration ranging from milliseconds to events spanning the entire cell cycle. Discrete interaction sites outside the central channel become apparent, and transport regulation at these sites seems to be of greater importance than currently thought. Nuclear pore components are highly active outside the NPC or impact the fate of cargo transport away from the nuclear pore. The NPC is a highly dynamic, crowded environment - constantly loaded with cargo while providing selectivity based on unfolded proteins. Taken together, this comprises a new paradigm in how we view import/export dynamics and emphasizes the multiscale nature of NPC-mediated cellular transport. © 2011 Elsevier Ltd.

Vollebregt E.A.H.,Technical University of Delft
Journal of Computational Physics | Year: 2013

This paper presents our new solver BCCG+FAI for solving elastic normal contact problems. It is a comprehensible approach based on the Conjugate Gradients (CG) algorithm that uses FFTs. A first novel aspect is the definition of the "FFT-based Approximate Inverse" preconditioner. The underlying idea is that the inverse matrix can be approximated well using a Toeplitz or block-Toeplitz form, which can be computed using the FFT of the original matrix elements. This preconditioner makes the total number of CG iterations effectively constant in 2D and very slowly increasing in 3D problems. A second novelty is how we deal with a prescribed total force. This uses a deflation technique in such a way that CG's convergence and finite termination properties are maintained. Numerical results show that this solver is more effective than existing CG-based strategies, such that it can compete with multigrid strategies over a much larger problem range. In our opinion it could be the new method of choice because of its simple structure and elegant theory, and because robust performance is achieved independently of any problem-specific parameters. © 2013 The Author.
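The FFT machinery that such solvers exploit can be sketched as follows: a symmetric Toeplitz operator (the same structure the approximate-inverse preconditioner relies on) admits O(N log N) matrix-vector products via circulant embedding, which plugs directly into a plain CG iteration. This is a minimal illustration under an arbitrary SPD Toeplitz test matrix, not the BCCG+FAI solver itself, and it omits both the preconditioner and the deflation step.

```python
import numpy as np

def toeplitz_matvec(c, x):
    """y = T x for a symmetric Toeplitz matrix with first column c, by
    embedding T in a 2N-point circulant and multiplying via the FFT."""
    n = len(c)
    col = np.concatenate([c, [0.0], c[:0:-1]])     # circulant first column
    fx = np.fft.rfft(np.concatenate([x, np.zeros(n)]))
    y = np.fft.irfft(np.fft.rfft(col) * fx, 2 * n)
    return y[:n]

def cg_toeplitz(c, b, tol=1e-10, maxit=500):
    """Plain conjugate gradients on T x = b with FFT-based matvecs."""
    x = np.zeros_like(b)
    r = b - toeplitz_matvec(c, x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxit):
        Ap = toeplitz_matvec(c, p)
        a = rs / (p @ Ap)
        x += a * p
        r -= a * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# SPD Toeplitz test problem: diagonally dominant tridiagonal operator
n = 64
c = np.zeros(n); c[0] = 2.0; c[1] = -0.9
b = np.ones(n)
x = cg_toeplitz(c, b)

# dense check of the solution
T = np.array([[c[abs(i - j)] for j in range(n)] for i in range(n)])
print(np.allclose(T @ x, b, atol=1e-8))  # True
```

Each iteration costs two length-2N FFTs instead of an O(N^2) dense product, which is what makes CG-on-FFT approaches competitive at large N.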

De Hoop A.T.,Technical University of Delft
Proceedings of the IEEE | Year: 2013

In this paper, a modern time-domain introduction is presented for electromagnetic field theory in (N+1)-space-time. It uses a consistent tensor/array notation that accommodates the description of electromagnetic phenomena in N-dimensional space (plus time), a requirement that turns up in present-day theoretical cosmology, where a unified theory of electromagnetic and gravitational phenomena is aimed at. The standard vectorial approach, adequate for describing electromagnetic phenomena in (3+1)-space-time, turns out to be not generalizable to (N+1)-space-time for N>3 and the tensor/array approach that, in fact, has been introduced in Einstein's theory of relativity, proves, together with its accompanying notation, to furnish the appropriate tools. Furthermore, such an approach turns out to lead to considerable simplifications, such as the complete superfluousness of standard vector calculus and the standard condition on the right-handedness of the reference frames employed. Since the field equations do no more than interrelate (in a particular manner) changes of the field quantities in time to their changes in space, only elementary properties of (spatial and temporal) derivatives are needed to formulate the theory. The tensor/array notation furthermore furnishes indications about the structure of the field equations in any of the space-time discretization procedures for time-domain field computation. After discussing the field equations, the field/source compatibility relations and the constitutive relations, the field radiated by sources in an unbounded, homogeneous, isotropic, lossless medium is determined. All components of the radiated field are shown to be expressible as elementary operations acting on the scalar Green's function of the scalar wave equation in (N+1)-space-time. Time-convolution and time-correlation reciprocity relations conclude the general theory. 
Finally, two items on field computation are touched upon: the space-time-integrated field equations method of computation and the time-domain Cartesian coordinate stretching method for constructing perfectly matched computational embeddings. The performance of these items is illustrated in a demonstrator showing the 1-D pulsed electric-current and magnetic-current sources excited wave propagation in a layered medium. © 1963-2012 IEEE.

Boersma B.J.,Technical University of Delft
Journal of Physics: Conference Series | Year: 2011

In this paper we will present results of several direct numerical simulations of turbulent pipe flow. The highest Reynolds number simulated in this study was 61,000. Our numerical model uses Fourier expansions in the axial and circumferential directions and 6th-order staggered compact finite differences in the wall-normal direction. Apart from standard turbulent statistics, we will also present 1D energy spectra and autocorrelation functions.
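The 1D energy spectra and autocorrelation functions mentioned above are related by the Wiener-Khinchin theorem: the autocorrelation is the inverse transform of the energy spectrum. The sketch below illustrates this on a synthetic periodic signal, not DNS data; the mode numbers and noise level are arbitrary.

```python
import numpy as np

def energy_spectrum_and_autocorr(u):
    """1D energy spectrum E(k) = |u_hat(k)|^2 / N of a real periodic signal,
    and its autocorrelation via the Wiener-Khinchin theorem."""
    N = len(u)
    u = u - u.mean()                  # remove the mean (k = 0) component
    E = np.abs(np.fft.rfft(u)) ** 2 / N
    R = np.fft.irfft(E, N)            # inverse transform of the spectrum
    return E, R / R[0]                # normalize so that R(0) = 1

# synthetic "velocity" signal: two Fourier modes plus weak noise
rng = np.random.default_rng(1)
N = 1024
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
u = np.sin(4 * x) + 0.5 * np.sin(16 * x) + 0.1 * rng.standard_normal(N)

E, R = energy_spectrum_and_autocorr(u)
print(int(np.argmax(E)), round(R[0], 3))  # dominant mode k = 4, R(0) = 1.0
```

In a DNS the same operation is applied along the homogeneous (axial and circumferential) directions, where the Fourier representation makes the spectra directly available.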

ten Veldhuis J.A.E.,Technical University of Delft
Journal of Flood Risk Management | Year: 2011

This study presents a first attempt to quantify tangible and intangible flood damage according to two different damage metrics: monetary values and number of people affected by flooding. Tangible damage includes material damage to buildings and infrastructure; intangible damage includes damages that are difficult to quantify exactly, such as stress and inconvenience. The data used are representative of lowland flooding incidents with return periods up to 10 years. The results show that monetarisation of damage prioritises damage to buildings in comparison with roads, cycle paths and footpaths. When, on the other hand, damage is expressed in terms of numbers of people affected by a flood, road flooding is the main contributor to total flood damage. The results also show that the cumulative damage of 10 years of successive flood events is almost equal to the damage of a single event with a T = 125 years return period. Differentiation between urban functions and the use of different kinds of damage metrics to quantify flood risk provide the opportunity to weigh tangible and intangible damages from an economic and societal perspective. © 2011 The Author. Journal of Flood Risk Management © 2011 The Chartered Institution of Water and Environmental Management.

Remis R.,Technical University of Delft
Progress in Electromagnetics Research | Year: 2010

In this paper we present a Lanczos-type reduction method to simulate the low-frequency response of multiconductor transmission lines. Reduced-order models are constructed in such a way that low frequencies are approximated first. The inverse of the transmission line system matrix is then required and an explicit expression for this inverse is presented. No matrix factorization needs to be computed numerically. Furthermore, computing the action of the inverse on a vector requires an O(N) amount of work, where N is the total number of unknowns, and the inverse satisfies a particular reciprocity-related symmetry relation as well. These two properties are exploited in a Lanczos-type algorithm to efficiently construct the low-frequency reduced-order models. Numerical examples illustrate the performance of the method.
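The core of any such method is the Lanczos recursion itself. The sketch below shows the textbook symmetric variant and how the resulting small tridiagonal matrix acts as a reduced-order model of a transfer function b^T (A + sI)^{-1} b; it ignores the paper's explicit inverse formula and reciprocity-related symmetry, and the SPD test matrix is an arbitrary stand-in for a transmission line system matrix.

```python
import numpy as np

def lanczos(A, b, m):
    """m-step symmetric Lanczos: returns an orthonormal basis V (N x m) of the
    Krylov subspace span{b, Ab, ...} and the tridiagonal T = V^T A V."""
    n = len(b)
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return V, T

# arbitrary SPD test system and "port" vector b
rng = np.random.default_rng(2)
n, m = 200, 20
M = rng.standard_normal((n, n))
A = M @ M.T / n + np.eye(n)
b = rng.standard_normal(n)

# reduced-order transfer function: b^T (A + sI)^-1 b ~ |b|^2 e1^T (T + sI)^-1 e1
V, T = lanczos(A, b, m)
s = 0.3
e1 = np.zeros(m); e1[0] = 1.0
full = b @ np.linalg.solve(A + s * np.eye(n), b)
reduced = (b @ b) * (e1 @ np.linalg.solve(T + s * np.eye(m), e1))
print(abs(full - reduced) < 1e-6 * abs(full))
```

A 20-dimensional model reproduces the 200-dimensional response to high accuracy here; the paper's contribution is making each Lanczos step cheap (O(N) inverse applications) and exploiting symmetry, which this sketch does not attempt.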

Koper G.J.M.,Technical University of Delft | Borkovec M.,University of Geneva
Polymer | Year: 2010

This article reviews our understanding of ionization processes of weak polyelectrolytes. The emphasis is put on a general introduction to site binding models, which are able to account for many experimental features of linear and branched polyelectrolytes, including dendrimers. These models are fully compatible with the classical description of acid-base equilibria. The review further discusses the nature of the site-site interaction and role of conformational equilibria. Experimental charging data of numerous weak polyelectrolytes are discussed in terms of these models in detail. © 2010 Elsevier Ltd.

Uijttewaal W.S.J.,Technical University of Delft
Journal of Hydraulic Research | Year: 2014

Shallowness defines a wide class of flows of high significance to hydraulic and environmental engineering. This paper discusses research on shallow flows that has been carried out in laboratory and field studies as well as with numerical simulations. Recent advances in experimental and numerical techniques have helped to reveal the important features of shallow flows which are directly relevant to rivers. Particular attention is paid to the contribution of large-scale structures to transverse transport of momentum and mass, which is assessed for archetypical flow configurations like wakes, shear layers, and bend flows. It is demonstrated that the flow geometry and roughness distribution determine the relative contribution of secondary circulation and large-scale turbulent structures to this transport. For applications in civil and environmental engineering, a proper parameterization of the physical processes is required for representing shallow flows at the large scale. The paper outlines some perspective directions to be developed in the forthcoming years. © 2014 International Association for Hydro-Environment Engineering and Research.

Blaauwendraad J.,Technical University of Delft
Journal of Applied Mechanics, Transactions ASME | Year: 2010

Since Haringx introduced his stability hypothesis for the buckling prediction of helical springs over 60 years ago, it has been debated whether the older hypothesis of Engesser should be replaced in structural engineering for stability studies of shear-weak members. The accuracy and applicability of both theories for structures have been studied in the past by others, but quantitative information about their accuracy for structural members has not been provided; this is the main subject of this paper. The second goal is to explain the experimental evidence that the critical buckling load of a sandwich beam-column surpasses the shear buckling load GAs, which is commonly not expected on the basis of the Engesser hypothesis. The key difference between the two theories is the relationship adopted, in the deformed state, between the shear force in the beam and the compressive load. It is shown, for a wide range of the ratio of shear to flexural rigidity, to what extent the two theories agree and conflict with each other. The Haringx theory predicts critical buckling loads exceeding the value GAs, which is not possible in the Engesser approach. That sandwich columns have critical buckling loads larger than GAs does not, however, imply a preference for the Haringx hypothesis. This is illustrated by the thought experiment of a compressed cable along the central axis of a beam-column, used in deriving the governing differential equations and finding solutions for three cases of increasing complexity: (i) a compressed member with either flexural or shear deformation, (ii) a compressed member with both flexural and shear deformation, and (iii) a compressed sandwich column. It appears that the Engesser hypothesis leads to a critical buckling load larger than GAs for layered cross-sectional shapes and predicts the sandwich behavior very satisfactorily, whereas the Haringx hypothesis then seriously overestimates the critical buckling load.
The fact that the latter hypothesis is perfectly confirmed for helical springs (and elastomeric bearings) has no bearing on shear-weak members in structural engineering, for which the Haringx hypothesis should be avoided. It is strongly recommended to investigate the stability of such structural members on the basis of the Engesser hypothesis. © 2010 by ASME.
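The contrast between the two hypotheses can be made concrete with the standard textbook forms of the critical load (a general result for shear-weak columns, not notation taken from the paper itself), writing P_E = π²EI/L² for the Euler load and GA_s for the shear rigidity:

```latex
% Engesser: the critical load is bounded above by the shear rigidity
P_{cr}^{\mathrm{Engesser}} \;=\; \frac{P_E}{1 + P_E/(GA_s)}
\;\xrightarrow{\;P_E \to \infty\;}\; GA_s

% Haringx: the critical load is unbounded and may exceed GA_s
P_{cr}^{\mathrm{Haringx}} \;=\; \frac{GA_s}{2}\left(\sqrt{1 + \frac{4P_E}{GA_s}} - 1\right)
\;\xrightarrow{\;P_E \to \infty\;}\; \infty
```

The bounded Engesser load versus the unbounded Haringx load is precisely the disagreement the paper quantifies for structural members.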

Neto A.,Technical University of Delft
IEEE Transactions on Antennas and Propagation | Year: 2010

An efficient directive antenna is described that can be used to realize essentially non-dispersive links over extremely large bandwidths. The antenna is a significantly enhanced version of previously proposed leaky lens antennas that use a frequency-independent leaky slot radiation mechanism. A theoretical breakthrough now allows the use of this mechanism also in the presence of purely planar structures. This step allows the realization of the feed of a leaky lens antenna in a unique planar structure that is then glued to a standard circularly symmetric elliptical dielectric lens, as integrated technology requires in the mm and sub-mm wave domains. The first part of this sequence deals with the theoretical breakthrough, the resulting antenna concept, and a description of the basic physical mechanisms inside and outside the lens antenna. It is shown that leaky lens antennas have the potential to realize antenna links over bands exceeding a decade with minimal dispersion, high efficiency, and high directivity. The second part of this sequence deals with the demonstration of these claims via the measurement of two prototypes. © 2006 IEEE.

Dwight R.P.,Technical University of Delft
IOP Conference Series: Materials Science and Engineering | Year: 2014

It has recently been observed that Least-Squares Finite Element methods (LS-FEMs) can be used to assimilate experimental data into approximations of PDEs in a natural way, as shown by Heyes et al. in the case of incompressible Navier-Stokes flow [1]. The approach was shown to be effective without regularization terms, and can handle substantial noise in the experimental data without filtering. Of great practical importance is that, unlike other data assimilation techniques, it is not significantly more expensive than a single physical simulation. However, the method as presented so far in the literature is not set in the context of an inverse problem framework, so that, for example, the meaning of the final result is unclear. In this paper it is shown that the method can be interpreted as finding a maximum a posteriori (MAP) estimator in a Bayesian approach to data assimilation, with normally distributed observational noise and a Bayesian prior based on an appropriate norm of the governing equations. In this setting the method may be seen to have several desirable properties: most importantly, discretization and modelling error in the simulation code does not affect the solution in the limit of complete experimental information, so these errors do not have to be modelled statistically. The Bayesian interpretation also better justifies the choice of the method, and some useful generalizations become apparent. The technique is applied to incompressible Navier-Stokes flow in a pipe with added velocity data, where its effectiveness, robustness to noise, and application to inverse problems are demonstrated. © 2010 IOP Publishing Ltd.
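In generic notation (not the paper's own), the MAP reading of LS-FEM assimilation can be sketched as minimizing a functional that combines the residual of the governing equations N(u) = 0 (playing the role of the prior) with a Gaussian data misfit (the likelihood):

```latex
u_{\mathrm{MAP}} \;=\; \arg\min_{u}\;
\underbrace{\big\|\mathcal{N}(u)\big\|^{2}_{L^{2}(\Omega)}}_{\text{prior: PDE residual}}
\;+\;
\underbrace{\sum_{k} \frac{\left|u(x_k) - d_k\right|^{2}}{\sigma_k^{2}}}_{\text{likelihood: observations } d_k,\ \text{noise } \sigma_k}
```

As the observations d_k become dense, the misfit term dominates and the reconstruction is pinned to the data, which is why discretization and modelling error drop out in the limit of complete experimental information.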

Mudde R.F.,Technical University of Delft
Industrial and Engineering Chemistry Research | Year: 2010

This paper discusses the measurement of bubbles moving through a fluidized bed imaged with a double X-ray tomographic scanner. The scanner consists of 3 X-ray sources equipped with two rings of 90 CdWO4 detectors. The fluidized bed has a diameter of 23 cm and is filled with a Geldart B powder. The scanner measures the attenuation in two thin parallel slices separated by a small vertical distance. This allows estimation of the velocity of individual bubbles rising through the bed. Data are collected at a frequency of 2500 Hz. To remove noise, data are averaged over 10 samples before tomographic reconstruction is performed, resulting in 250 independent frames per second. From the measured bubble velocity and the passage time through the reconstruction planes, the vertical dimensions of the bubbles are found. This allows imaging of the real shape of the bubbles and calculation of the volume of each individual bubble. © 2010 American Chemical Society.
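The two-plane velocimetry idea can be sketched in a few lines. This is a deliberately simplified, single-signal stand-in (the real measurement is tomographic, per pixel), and all signal names, thresholds, and geometry below are illustrative assumptions: correlate the lower- and upper-slice traces to find the transit lag, then convert the passage duration into a vertical bubble size.

```python
import numpy as np

def transit_lag(lower, upper, max_lag):
    """Number of samples by which the upper-plane trace trails the lower-plane trace,
    found by maximizing the correlation over candidate lags."""
    best_c, best_lag = -np.inf, 0
    for lag in range(max_lag + 1):
        c = np.dot(lower[:len(lower) - lag], upper[lag:])
        if c > best_c:
            best_c, best_lag = c, lag
    return best_lag

def bubble_velocity_and_size(lower, upper, gap_m, fs_hz, threshold):
    """Rise velocity from the inter-plane lag (assumed > 0 samples);
    vertical bubble size from velocity times the passage duration."""
    lag = transit_lag(lower, upper, max_lag=len(lower) // 2)
    velocity = gap_m * fs_hz / lag                       # m/s
    passage_s = np.count_nonzero(lower > threshold) / fs_hz
    return velocity, velocity * passage_s
```

For instance, a bubble seen 20 samples apart at 2500 Hz across a 2 cm plane gap rises at 0.02 × 2500 / 20 = 2.5 m/s.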

Jia X.,Beijing Normal University | Xia K.,Beijing Normal University | Bauer G.E.W.,Tohoku University | Bauer G.E.W.,Technical University of Delft
Physical Review Letters | Year: 2011

We compute thermal spin transfer (TST) torques in Fe-MgO-Fe tunnel junctions using a first-principles wave-function-matching method. At room temperature, the TST in a junction with 3 MgO monolayers amounts to 10⁻⁷ J/m²/K, which is estimated to cause magnetization reversal for temperature differences over the barrier of the order of 10 K. The large TST can be explained by multiple scattering between interface states through ultrathin barriers. The angular dependence of the TST can be very skewed, possibly leading to thermally induced high-frequency generation. © 2011 American Physical Society.

Wapenaar K.,Technical University of Delft
Physical Review E - Statistical, Nonlinear, and Soft Matter Physics | Year: 2014

In time-reversal acoustics, waves recorded at the boundary of a strongly scattering medium are sent back into the medium to focus at the original source position. This requires that the medium can be accessed from all sides. We discuss a focusing method for media that can be accessed from one side only. We show how complex focusing functions, emitted from the top surface into the medium, cause independent foci for compressional and shear waves. The focused fields are isotropic and act as independent virtual sources for these wave types inside the medium. We foresee important applications in nondestructive testing of construction materials and seismological monitoring of processes inside the Earth. © 2014 American Physical Society.

Sheldon R.A.,Technical University of Delft
Green Chemistry | Year: 2014

The various strategies for the valorisation of waste biomass to platform chemicals, and the underlying developments in chemical and biological catalysis which make this possible, are critically reviewed. The option involving the least changes to the status quo is the drop-in strategy of complete deoxygenation to petroleum hydrocarbons and further processing using existing technologies. The alternative, redox-economic approach is the direct conversion of, for example, carbohydrates to oxygenates by fermentation or chemocatalytic processes. Examples of both approaches are described, e.g. fermentation of carbohydrates to produce hydrocarbons, lower alcohols, diols and carboxylic acids, or acid-catalyzed hydrolysis of hexoses to hydroxymethyl furfural (HMF) and subsequent conversion to levulinic acid (LA), γ-valerolactone (GVL) and furan dicarboxylic acid (FDCA). Three possible routes for producing a bio-based equivalent of the large-volume polymer polyethylene terephthalate (PET) are delineated. Valorisation of waste protein could, in the future, form an important source of amino acids, such as l-glutamic acid and l-lysine, as platform chemicals, which in turn can be converted to nitrogen-containing commodity chemicals. Glycerol, the coproduct of biodiesel manufacture from triglycerides, is another waste stream for which valorisation to commodity chemicals, such as epichlorohydrin and acrolein, is an attractive option. © 2014 The Royal Society of Chemistry.

Janic M.,Technical University of Delft
Transportation Research Part A: Policy and Practice | Year: 2015

This paper deals with developing a methodology for estimating the resilience, friability, and costs of an air transport network affected by a large-scale disruptive event. The network consists of airports and the airspace/air routes between them where airlines operate their flights. Resilience is considered as the ability of the network to neutralize the impacts of disruptive event(s). Friability implies reducing the network's existing resilience by removing particular nodes/airports and/or links/air routes, and consequently cancelling the affected airline flights. The costs imply additional expenses imposed on airports, airlines, and air passengers as the potentially most affected actors/stakeholders due to mitigating actions such as delaying, cancelling and rerouting particular affected flights. These actions aim at maintaining both the network's resilience and safety at an acceptable level under given conditions. Large-scale disruptive events, which can compromise the resilience and friability of a given air transport network, include bad weather, failures of particular (crucial) network components, industrial actions of air transport staff, natural disasters, terrorist threats/attacks and traffic incidents/accidents. The methodology is applied to a selected real-life case under given conditions. In addition, this methodology could be used for pre-selecting the location of airline hub airport(s), assessing the resilience of planned airline schedules and the prospective consequences, and designing mitigating measures before, during, and in the aftermath of a disruptive event. As such, it could, with slight modifications, be applied to transport networks operated by other transport modes. © 2014 Elsevier Ltd.
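A toy illustration of the resilience/friability vocabulary (the schedule, airport names, and metrics below are invented for illustration, not the paper's model): take resilience as the fraction of scheduled flights still operable after a set of airport closures, and a node's friability as the resilience lost when that airport alone is removed.

```python
def resilience(flights, closed_airports):
    """Fraction of scheduled flights whose origin and destination both remain open."""
    operable = [f for f in flights
                if f[0] not in closed_airports and f[1] not in closed_airports]
    return len(operable) / len(flights)

def friability(flights, airport):
    """Resilience lost by removing a single airport from the network."""
    return resilience(flights, set()) - resilience(flights, {airport})

# hypothetical hub-and-spoke schedule: HUB serves four spokes, plus one point-to-point leg
flights = [("HUB", "A"), ("HUB", "B"), ("HUB", "C"), ("HUB", "D"), ("A", "B")]
```

In this toy schedule, closing the hub cancels four of the five flights, so its friability dwarfs that of any spoke, which is the intuition behind using such a measure to pre-select hub locations.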

The delivery of in-spec coal qualities is essential for the efficient and environmentally friendly operation of modern coal-fired power plants. The design of the mining operation systems and blending opportunities plays a key role in homogenising variability and improving the prediction of key quality parameters, such as the calorific value (CV). Modern methods of conditional simulation in geostatistics allow for generating several realisations for large deposits, capturing the in-situ variability of key quality parameters. Integrating simulated realisations of the deposit with a simulation of the transport and blending models of the mining operation leads to valuable insights into its performance as a function of the technical design and operational mode. The contribution first reviews the method of Generalised Sequential Gaussian Simulation (GSGS), which is especially designed for computationally efficient simulation of large deposits. In a second step, GSGS is applied to a large coal field in Eastern Europe. The practical simulation process is described and applied in a complex geological environment of highly variable seam geometry and quality, including multiple split seams. Results are applied to a large open pit coal operation to investigate the variability of the calorific value and its behaviour along the extraction, transportation and blending process in a continuous mining environment. The described approach provides valuable insight into the performance of a continuous mining system in terms of homogenisation. Conclusions can be drawn to optimise the design of key equipment and to adjust the operation mode to ensure that the customer's requirements in terms of coal quality variability are met with high probability. © 2012 Elsevier B.V.
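A drastically simplified, unconditional stand-in for the simulation step can convey the idea (GSGS itself is sequential and conditional; the covariance model, parameters, and function names below are illustrative assumptions): draw correlated Gaussian realisations of the calorific value along a 1-D transect, then check that blending (block averaging) homogenises them.

```python
import numpy as np

def cv_realisations(n_cells, mean, sill, range_m, cell_m, n_real, seed=0):
    """Unconditional Gaussian realisations on a 1-D transect with exponential
    covariance C(h) = sill * exp(-3h / range_m), via Cholesky factorisation."""
    rng = np.random.default_rng(seed)
    idx = np.arange(n_cells)
    h = np.abs(idx[:, None] - idx[None, :]) * cell_m
    cov = sill * np.exp(-3.0 * h / range_m)
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(n_cells))  # jitter for stability
    return mean + (L @ rng.standard_normal((n_cells, n_real))).T  # (n_real, n_cells)

def blend(realisations, block):
    """Average consecutive cells in blocks, mimicking stockpile blending."""
    n = realisations.shape[1] // block * block
    return realisations[:, :n].reshape(realisations.shape[0], -1, block).mean(axis=2)
```

Comparing the cell-to-cell spread before and after `blend` shows the variance reduction that a blending stage delivers, which is the quantity of interest when checking customer quality specifications.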

de Winter J.C.F.,Technical University of Delft
Scientometrics | Year: 2014

An analysis of article-level metrics of 27,856 PLOS ONE articles reveals that the number of tweets was weakly associated with the number of citations (β = 0.10), and weakly negatively associated with citations when the number of article views was held constant (β = −0.06). The number of tweets was predictive of other social media activity (β = 0.34 for Mendeley and β = 0.41 for Facebook), but not of the number of article views on PubMed Central (β = 0.01). It is concluded that the scientific citation process acts relatively independently of the social dynamics on Twitter. © 2014, Akadémiai Kiadó, Budapest, Hungary.
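The sign flip reported for tweets once article views are held constant is a classic suppression effect. A hedged sketch with synthetic data (all coefficients and the data-generating story are invented, chosen only to reproduce the qualitative pattern) shows how standardized betas behave with and without the covariate:

```python
import numpy as np

def standardized_betas(X, y):
    """OLS coefficients after z-scoring every predictor column and the response."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()
    A = np.column_stack([np.ones(len(yz)), Xz])  # intercept column
    coef, *_ = np.linalg.lstsq(A, yz, rcond=None)
    return coef[1:]

rng = np.random.default_rng(1)
n = 50_000
views = rng.standard_normal(n)
tweets = 0.6 * views + 0.8 * rng.standard_normal(n)          # tweets partly driven by views
cites = 0.5 * views - 0.1 * tweets + rng.standard_normal(n)  # views help, tweets slightly hurt

beta_marginal = standardized_betas(tweets[:, None], cites)[0]                   # positive
beta_partial = standardized_betas(np.column_stack([tweets, views]), cites)[0]   # negative
```

Because tweets proxy for views, the marginal beta is positive; once views enter the model, the small negative direct effect of tweets is exposed, mirroring the β = 0.10 versus β = −0.06 pattern in the abstract.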

Bozdag E.,Technical University of Delft
Ethics and Information Technology | Year: 2013

Online information intermediaries such as Facebook and Google are slowly replacing traditional media channels, thereby partly becoming the gatekeepers of our society. To deal with the growing amount of information on the social web and the burden it places on the average user, these gatekeepers have recently started to introduce personalization features: algorithms that filter information per individual. In this paper we show that these online services that filter information are not merely algorithms. Humans not only affect the design of the algorithms, but they can also manually influence the filtering process even when the algorithm is operational. We further analyze filtering processes in detail, show how personalization connects to other filtering techniques, and show that both human and technical biases are present in today's emergent gatekeepers. We use the existing literature on gatekeeping and search engine bias and provide a model of algorithmic gatekeeping. © 2013 Springer Science+Business Media Dordrecht.

Larson M.,Technical University of Delft | Jones G.J.F.,Centre for Next Generation Localisation
Foundations and Trends in Information Retrieval | Year: 2011

Speech media, that is, digital audio and video containing spoken content, has blossomed in recent years. Large collections are accruing on the Internet as well as in private and enterprise settings. This growth has motivated extensive research on techniques and technologies that facilitate reliable indexing and retrieval. Spoken content retrieval (SCR) requires the combination of audio and speech processing technologies with methods from information retrieval (IR). SCR research initially investigated planned speech structured in document-like units, but has subsequently shifted focus to more informal spoken content produced spontaneously, outside the studio and in conversational settings. This survey provides an overview of the field of SCR, encompassing component technologies, the relationship of SCR to text IR and automatic speech recognition, and user interaction issues. It is aimed at researchers with backgrounds in speech technology or IR who are seeking deeper insight into how these fields are integrated to support research and development, thus addressing the core challenges of SCR. © 2012 M. Larson and G. J. F. Jones.

Precup R.-E.,Polytechnic University of Timisoara | Hellendoorn H.,Technical University of Delft
Computers in Industry | Year: 2011

Fuzzy control has long been applied in industry, with several important theoretical results and successful applications. Although originally introduced as a model-free control design approach, model-based fuzzy control has gained widespread significance in the past decade. This paper presents a survey of recent developments in the analysis and design of fuzzy control systems, focused on industrial applications reported after 2000. © 2010 Elsevier B.V. All rights reserved.

Lipfert J.,Technical University of Delft | Lipfert J.,Ludwig Maximilians University of Munich | Doniach S.,Stanford University | Das R.,Stanford University | Herschlag D.,Stanford University
Annual Review of Biochemistry | Year: 2014

Ions surround nucleic acids in what is referred to as an ion atmosphere. As a result, the folding and dynamics of RNA and DNA and their complexes with proteins and with each other cannot be understood without a reasonably sophisticated appreciation of these ions' electrostatic interactions. However, the underlying behavior of the ion atmosphere follows physical rules that are distinct from the rules of site binding that biochemists are most familiar and comfortable with. The main goal of this review is to familiarize nucleic acid experimentalists with the physical concepts that underlie nucleic acid-ion interactions. Throughout, we provide practical strategies for interpreting and analyzing nucleic acid experiments that avoid pitfalls from oversimplified or incorrect models. We briefly review the status of theories that predict or simulate nucleic acid-ion interactions and experiments that test these theories. Finally, we describe opportunities for going beyond phenomenological fits to a next-generation, truly predictive understanding of nucleic acid-ion interactions. Copyright © 2014 by Annual Reviews. All rights reserved.

Amar A.,Technical University of Delft
IEEE Transactions on Signal Processing | Year: 2010

The parameters of interest of a polynomial phase signal observed by a sensor array include the direction of arrival and the polynomial coefficients. The direct maximum likelihood estimation of these parameters requires a nonlinear multidimensional search. In this paper, we present a two-step estimation approach. The estimation requires only a one-dimensional search in the direction of arrival space and involves a simple least squares solution for the polynomial coefficients. The efficiency of the estimates is corroborated by Monte Carlo simulations. Copyright © 2010 IEEE.
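The least-squares step for the polynomial coefficients can be sketched as follows. This is a simplified single-channel version under a high-SNR assumption, not the paper's array formulation: unwrap the signal phase and solve a Vandermonde least-squares system.

```python
import numpy as np

def poly_phase_coeffs(signal, order):
    """Least-squares fit of phase(n) = a0 + a1*n + ... + a_order*n**order
    to the unwrapped phase of a polynomial phase signal."""
    phase = np.unwrap(np.angle(signal))
    n = np.arange(len(signal))
    V = np.vander(n, order + 1, increasing=True)  # columns [1, n, n^2, ...]
    coef, *_ = np.linalg.lstsq(V, phase, rcond=None)
    return coef
```

The unwrapping is reliable only while the per-sample phase increment stays below π; the array version in the paper avoids this fragility by combining the spatial (direction-of-arrival) search with the temporal least-squares solve.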

Dorenbos P.,Technical University of Delft
IEEE Transactions on Nuclear Science | Year: 2010

The fundamental limits on the scintillation decay, scintillation yield, and X-ray or γ-ray energy resolution obtainable with Ce3+-, Pr3+-, and Eu2+-activated inorganic scintillators are determined. Those limits are compared with what has been achieved to date with scintillators like YAlO3:Ce, Lu2SiO5:Ce, LaCl3:Ce, LaBr3:Ce, LuI3:Ce, and emerging scintillators like Lu3Al5O12:Pr3+ and SrI2:Eu2+. A higher scintillation yield does not necessarily lead to better energy resolution; the nonproportionality of the scintillation yield with the deposited ionization energy is an important aspect as well, and will be addressed. © 2010 IEEE.
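The counting-statistics floor the abstract alludes to is commonly written as follows in the scintillator literature (generic notation, not copied from the paper: N_phe is the number of detected photoelectrons and v(M) the variance of the photodetector gain), with the total FWHM resolution decomposed into independent contributions:

```latex
R_{M} \;=\; 2.355\,\sqrt{\frac{1 + v(M)}{N_{phe}}},
\qquad
R^{2} \;=\; R_{M}^{2} + R_{np}^{2} + R_{inh}^{2} + R_{tr}^{2}
```

The nonproportionality term R_np and the inhomogeneity term R_inh explain why a brighter scintillator, i.e. one with larger N_phe and hence smaller R_M, does not automatically achieve better energy resolution.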

Weijermars R.,Technical University of Delft
Applied Energy | Year: 2013

This study evaluates the economic feasibility of five emergent shale gas plays on the European Continent. Each play is assessed using a uniform field development plan with 100 wells drilled at a rate of 10 wells/year in the first decade. The gas production from the realized wells is monitored over a 25-year life cycle. Discounted cash flow models are used to establish for each shale field the estimated ultimate recovery (EUR) that must be realized, using current technology cost, to achieve a profit. Our analyses of internal rates of return (IRR) and net present values (NPVs) indicate that the Polish and Austrian shale plays are the more robust, and appear profitable when the strict P90 assessment criterion is applied. In contrast, the Posidonia (Germany), Alum (Sweden) and a Turkish shale play assessed all have negative discounted cumulative cash flows for P90 wells, which puts these plays below the hurdle rate. The IRR for P90 wells is about 5% for all three plays, which suggests that a 10% improvement of the IRR by sweet spot targeting may lift these shale plays above the hurdle rate. Well productivity estimates will become better constrained over time as geological uncertainty is reduced and as technology improves during the progressive development of the shale gas fields. © 2013.
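The screening logic behind such assessments can be sketched in a few lines (the cash-flow numbers below are invented for illustration, not the paper's P90 figures): a play clears the hurdle rate when the IRR of its discounted well cash flow exceeds that rate.

```python
def npv(rate, cashflows):
    """Net present value; cashflows[0] falls at time zero (e.g. negative well capex)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=0.0, hi=10.0, tol=1e-8):
    """Internal rate of return by bisection on npv
    (assumes npv is positive at lo and negative at hi)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if npv(mid, cashflows) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# hypothetical well: 10 units of capex, then 2 units/year of gas revenue for 10 years
well = [-10.0] + [2.0] * 10
```

With these illustrative numbers the IRR is about 15%, comfortably above a 10% hurdle rate; a well whose EUR only delivers an IRR near 5%, as reported for the P90 wells of the weaker plays, would fail the same test.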