Waterloo, Canada

University of Waterloo is a public research university whose main campus is located in Waterloo, Ontario, Canada, on 400 hectares of land in Uptown Waterloo, adjacent to Waterloo Park. The university offers a wide variety of academic programs, which are administered by six faculties and three affiliated university colleges. Waterloo is a member of the U15, a group of research-intensive universities in Canada. (Wikipedia)


Carter J.C.,Memorial University of Newfoundland | Kelly A.C.,University of Waterloo
British Journal of Clinical Psychology | Year: 2015

Objective: This study aimed to identify baseline predictors of autonomous and controlled motivation for treatment (ACMT) in a transdiagnostic eating disorder sample, and to examine whether ACMT at baseline predicted change in eating disorder psychopathology during treatment. Method: Participants were 97 individuals who met DSM-IV-TR criteria for an eating disorder and were admitted to a specialized intensive treatment programme. Self-report measures of eating disorder psychopathology, ACMT, and various psychosocial variables were completed at the start of treatment. A subset of these measures was completed again after 3, 6, 9, and 12 weeks of treatment. Results: Multiple regression analyses showed that baseline autonomous motivation was higher among patients who reported more self-compassion and more received social support, whereas the only baseline predictor of controlled motivation was shame. Multilevel modelling revealed that higher baseline autonomous motivation predicted faster decreases in global eating disorder psychopathology, whereas the level of controlled motivation at baseline did not. Conclusion: The current findings suggest that developing interventions designed to foster autonomous motivation specifically, and employing autonomy-supportive strategies, may be important for improving eating disorder treatment outcomes. Practitioner points: The findings of this study suggest that developing motivational interventions that focus specifically on enhancing autonomous motivation for change may be important for promoting eating disorder recovery. Our results lend support to the use of autonomy-supportive strategies to strengthen personally meaningful reasons to achieve freely chosen change goals in order to enhance treatment for eating disorders. One study limitation is that there were no follow-up assessments beyond the 12-week study period, so we do not know whether the relationships we observed persisted after treatment. Another limitation is that this was a correlational study, so caution is warranted in drawing causal conclusions from the results. © 2014 The British Psychological Society.


Razavi S.,University of Saskatchewan | Tolson B.A.,University of Waterloo
Water Resources Research | Year: 2013

Long periods of hydrologic data records have become available in many watersheds around the globe. Hydrologic model calibration on such long, full-length data periods is typically deemed the most robust approach for calibration, but it comes at a larger computational cost. Determining a representative short period that serves as a surrogate of a long data period and sufficiently embeds its information content is not trivial and is a challenging research question. The representativeness of such a short period is not only a function of the data characteristics but also depends on the model and the calibration error function. Unlike previous studies, this study goes beyond identifying the best surrogate data period to be used in model calibration and proposes an efficient framework that calibrates the hydrologic model to the full-length data while running the model only on a short period for the majority of the candidate parameter sets. To this end, a mapping system is developed to approximate the model performance on the full-length data period from the model performance on the short data period. The basic concepts and the promise of the framework are demonstrated through a computationally expensive hydrologic model case study. Three calibration approaches, namely calibration solely to a surrogate period, calibration to the full period, and calibration through the proposed framework, are evaluated and compared. Results show that, within the same computational budget, the proposed framework leads to improved or equal calibration performance compared to the two conventional approaches. Results also indicate that model calibration solely to a short data period may lead to performance ranging from poor to very good, depending on the representativeness of the short data period, which is typically not known a priori. © 2013. American Geophysical Union. All Rights Reserved.
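
The framework can be caricatured in a few lines of Python (a hypothetical sketch, not the authors' implementation: run_model is an assumed black-box returning an error measure for a parameter value on either data period, and a simple linear regression stands in for the paper's mapping system):

import numpy as np

def calibrate(candidates, run_model, slack=0.1):
    """Calibrate to the full period, running it only for promising candidates."""
    short_scores, full_scores = [], []
    best = (None, np.inf)
    for p in candidates:
        s = run_model(p, period="short")          # cheap surrogate evaluation
        if len(full_scores) >= 5:
            a, b = np.polyfit(short_scores, full_scores, 1)
            est = a * s + b                       # estimated full-period error
        else:
            est = -np.inf                         # too little data: force a full run
        if est <= best[1] * (1 + slack):          # promising -> verify on full period
            f = run_model(p, period="full")       # expensive evaluation
            short_scores.append(s); full_scores.append(f)
            if f < best[1]:
                best = (p, f)
    return best

def run_model(p, period):                         # stand-in for a costly simulation
    err = (p - 0.7) ** 2
    return err + (0.05 * np.sin(40 * p) if period == "short" else 0.0)

print(calibrate(np.linspace(0.0, 1.0, 201), run_model))   # best parameter near 0.7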


Hydrogels are crosslinked hydrophilic polymers that undergo swelling in water. The gel volume is affected by many environmental parameters, including temperature, pH, ionic strength, and solvent composition; these factors have traditionally been used to make smart hydrogels. DNA, on the other hand, is a special block copolymer. Incorporating DNA within a hydrogel network can have several important effects. First, DNA can serve as a reversible crosslinker, modulating the mechanical and rheological properties of a hydrogel. Second, DNA can selectively bind to a variety of different molecules; attaching these binding DNAs (aptamers) to a hydrogel makes it possible to expand the range of stimuli to chemical and biological molecules. At the same time, the gel matrix can also improve DNA-based sensors and materials: for example, the hydrogel can be dried for storage and rehydrated prior to use, and the immobilized DNAs are protected from nuclease cleavage. The gel backbone properties can also be tuned to affect the interaction between DNA and other molecules. The rational functionalization of DNA in hydrogels has generated a diverse range of smart materials and biosensors. In the last 15 years, the field has made tremendous progress, and some of the recent developments are summarized in this review. Challenges and possible future directions are also discussed. © 2011 The Royal Society of Chemistry.


Cisneros G.A.,Wayne State University | Karttunen M.,University of Waterloo | Ren P.,University of Texas at Austin | Sagui C.,North Carolina State University
Chemical Reviews | Year: 2014

Electrostatic interactions are crucial for biomolecular simulations, as their calculation is the most time-consuming when computing the total classical forces, and their representation has profound consequences for the accuracy of classical force fields. Long-range electrostatic interactions are crucial for the stability of proteins, nucleic acids, glycomolecules, lipids, and other macromolecules, and their interactions with solvent, ions, and other molecules. Traditionally, electrostatic interactions have been modeled using a set of fixed atom-centered point charges or partial charges. The most popular methods for extracting charges from molecular wave functions are based on a fitting of the atomic charges to the molecular electrostatic potential (MEP) computed with ab initio or semiempirical methods outside the van der Waals surface. Computationally, the electrostatic potential for a system with explicit solvent is calculated by either solving Poisson's equation or explicitly adding the individual charge potentials.
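
For the second option in the last sentence, explicitly adding the individual charge potentials, the quantity being summed is just the Coulomb potential. A minimal sketch (SI units; the charges and positions are made-up inputs, not data from the review):

import numpy as np

def coulomb_potential(r, charges, positions, k=8.9875517873681764e9):
    """Electrostatic potential (volts) at point r due to fixed point charges."""
    d = np.linalg.norm(np.asarray(positions, float) - np.asarray(r, float), axis=1)
    return k * np.sum(np.asarray(charges) / d)    # V(r) = k * sum_i q_i / |r - r_i|

e = 1.602176634e-19   # elementary charge (C)
# by symmetry, the potential midway between a +e and a -e charge is zero
print(coulomb_potential([0.5e-9, 0.0, 0.0], [e, -e], [[0, 0, 0], [1e-9, 0, 0]]))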


Johannsen T.,Perimeter Institute for Theoretical Physics | Johannsen T.,University of Waterloo
Classical and Quantum Gravity | Year: 2016

General relativity has been widely tested in weak gravitational fields but still stands largely untested in the strong-field regime. According to the no-hair theorem, black holes in general relativity depend only on their masses and spins and are described by the Kerr metric. Mass and spin are the first two multipole moments of the Kerr spacetime and completely determine all higher-order moments. The no-hair theorem and, hence, general relativity can be tested by measuring potential deviations from the Kerr metric affecting such higher-order moments. Sagittarius A∗ (Sgr A∗), the supermassive black hole at the center of the Milky Way, is a prime target for precision tests of general relativity with several experiments across the electromagnetic spectrum. First, near-infrared (NIR) monitoring of stars orbiting around Sgr A∗ with current and new instruments is expected to resolve their orbital precessions. Second, timing observations of radio pulsars near the Galactic center may detect characteristic residuals induced by the spin and quadrupole moment of Sgr A∗. Third, the Event Horizon Telescope, a global network of mm and sub-mm telescopes, aims to study Sgr A∗ on horizon scales and to image the silhouette of its shadow cast against the surrounding accretion flow using very-long-baseline interferometric (VLBI) techniques. Both NIR and VLBI observations may also detect quasiperiodic variability of the emission from the accretion flow of Sgr A∗. In this review, I discuss our current understanding of the spacetime of Sgr A∗ and the prospects of NIR, timing, and VLBI observations to test its Kerr nature in the near future. © 2016 IOP Publishing Ltd.
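
The statement that the first two moments fix all of the others can be written compactly. In the standard Geroch-Hansen form (geometric units, G = c = 1; added here for context, not quoted from the abstract), the mass moments and current moments of the Kerr spacetime satisfy

\[
\mathcal{M}_\ell + i\,\mathcal{S}_\ell = M\,(ia)^\ell, \qquad a = \frac{J}{M},
\]

so that M_0 = M, S_1 = J, and the quadrupole moment is M_2 = -Ma^2; a measured violation of this relation at any ℓ ≥ 2 would falsify the no-hair theorem.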


Gibson R.B.,University of Waterloo
Impact Assessment and Project Appraisal | Year: 2013

Progress towards sustainability requires positive steps to meet all of the interdependent core requirements for sustainability - including biophysical system integrity and basic human well-being. Where these essentials are involved, trade-offs should be avoided unless all other options are worse. In environmental assessments, it is useful to identify major trade-offs and minimize them through selection of less bad alternatives or addition of mitigations or offsets. However, the more promising approach starts earlier and encourages planning that avoids invidious trade-offs, including through reconsideration of the initial purposes and alternatives. This paper considers two historical cases of assessments that avoided significant trade-offs through processes that gave mandatory attention to purposes and alternatives, covered the full suite of sustainability considerations, empowered citizen participants and facilitated the bumping of cases up to a more strategic level where broader alternatives offered better trade-off avoidance. These long-advocated assessment design elements are still rarely applied as a full package in existing environmental assessment law and practice. Commitment to trade-off avoidance adds to the reasons for their general adoption. © 2013 Copyright IAIA.


Ahmed R.,Bangladesh University of Engineering and Technology | Boutaba R.,University of Waterloo
IEEE Communications Surveys and Tutorials | Year: 2011

Peer-to-peer (P2P) technology has triggered a wide range of distributed applications beyond simple file-sharing: distributed XML databases, distributed computing, server-less web publishing and networked resource/service sharing, to name only a few. Despite the diversity in applications, these systems share a common problem regarding searching and discovery of information. This commonality stems from the transitory node population and volatile information content of the participating nodes. In such a dynamic environment, users cannot be expected to have exact information about the available objects in the system. Rather, queries are based on partial information, which requires the search mechanism to be flexible. On the other hand, to scale with network size the search mechanism must be bandwidth efficient. In this survey, we identify the search requirements in large-scale distributed systems and investigate the ability of existing search techniques to satisfy these requirements. Representative search techniques from P2P content sharing, service discovery and P2P databases are considered in this work. © 2011 IEEE.


Madani K.,University of Central Florida | Hipel K.W.,University of Waterloo
Water Resources Management | Year: 2011

In game theory, potential resolutions to a conflict are found through stability analysis, based on stability definitions having precise mathematical structures. A stability definition reflects a decision maker's behavior in a conflict or game, predicts how the game is played, and suggests the resolutions or equilibria of the dispute. Various stability definitions, reflecting different types of people with different levels of foresight, risk attitude, and knowledge of opponents' preferences, have been proposed for resolving games. This paper reviews and illustrates six stability definitions, applicable to finite strategy strategic non-cooperative water resources games, including Nash Stability, General Metarationality (GMR), Symmetric Metarationality (SMR), Sequential Stability (SEQ), Limited-Move Stability, and Non-Myopic Stability. The introduced stability definitions are applied to an interesting and highly informative range of generic water resources games to show how analytical results vary based on the applied stability definitions. The paper suggests that game theoretic models can better simulate real conflicts if the applied stability definitions better reflect characteristics of the players. When there is a lack of information about the types of decision makers, the employment of a range of stability definitions might improve the strategic results and provide useful insights into the basic framework of the conflict and its resolution. © 2011 Springer Science+Business Media B.V.
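
Of the six definitions, Nash stability is the simplest: a state is stable for a decision maker if no unilateral move improves that player's outcome, so foresight extends only one move ahead. A minimal Python sketch for a two-player matrix game (illustrative only; GMR, SMR, SEQ, limited-move and non-myopic stability additionally model opponents' sanctions and deeper foresight):

import numpy as np

def pure_nash(A, B):
    """Pure-strategy Nash equilibria; A[i, j] and B[i, j] are the
    row and column players' payoffs for the strategy pair (i, j)."""
    eq = []
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            if A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max():
                eq.append((i, j))          # no unilateral improvement exists
    return eq

# a prisoner's-dilemma-like sharing game: mutual defection is the equilibrium
A = np.array([[3, 0], [5, 1]])
print(pure_nash(A, A.T))   # -> [(1, 1)]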


Bahadori A.,Southern Cross University of Australia | Zahedi G.,University of Technology Malaysia | Zendehboudi S.,University of Waterloo
Renewable and Sustainable Energy Reviews | Year: 2013

Hydropower is the most advanced and mature renewable energy technology and provides some level of electricity generation in many countries worldwide. As hydropower does not consume or pollute the water it uses to generate power, it leaves this vital resource available for other uses. The objective of this article is to identify and analyse issues that are imperative for hydropower energy development in Australia. This study shows that opportunities for further hydroelectricity generation in Australia are offered by refurbishment and efficiency improvements at existing hydroelectricity plants, and by continued growth of small-scale hydroelectricity plants connected to the grid. © 2012 Elsevier Ltd.


Johannsen T.,Perimeter Institute for Theoretical Physics | Johannsen T.,University of Waterloo
Classical and Quantum Gravity | Year: 2016

According to the general-relativistic no-hair theorem, astrophysical black holes depend only on their masses and spins and are uniquely described by the Kerr metric. Mass and spin are the first two multipole moments of the Kerr spacetime and completely determine all other moments. The no-hair theorem can be tested by measuring potential deviations from the Kerr metric which alter such higher-order moments. In this review, I discuss tests of the no-hair theorem with current and future observations of such black holes across the electromagnetic spectrum, focusing on near-infrared observations of the supermassive black hole at the Galactic center, pulsar-timing and very-long baseline interferometric observations, as well as x-ray observations of fluorescent iron lines, thermal continuum spectra, variability, and polarization. © 2016 IOP Publishing Ltd.


Childs A.M.,University of Waterloo | Van Dam W.,University of California at Santa Barbara
Reviews of Modern Physics | Year: 2010

Quantum computers can execute algorithms that dramatically outperform classical computation. As the best-known example, Shor discovered an efficient quantum algorithm for factoring integers, whereas factoring appears to be difficult for classical computers. Understanding what other computational problems can be solved significantly faster using quantum algorithms is one of the major challenges in the theory of quantum computation, and such algorithms motivate the formidable task of building a large-scale quantum computer. This article reviews the current state of quantum algorithms, focusing on algorithms with superpolynomial speedup over classical computation and, in particular, on problems with an algebraic flavor. © 2010 The American Physical Society.
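
For concreteness, the factoring speedup can be quantified with standard complexity estimates (textbook figures, not taken from this abstract): the general number field sieve, the best known classical algorithm, runs in heuristic time

\[
\exp\!\Big( \big(\tfrac{64}{9}\big)^{1/3} (\ln N)^{1/3} (\ln \ln N)^{2/3}\,(1 + o(1)) \Big),
\]

which is superpolynomial in the input size log N, whereas Shor's algorithm factors N with O((log N)^3) elementary gates using schoolbook arithmetic.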


Doxey A.C.,University of Waterloo
Virulence | Year: 2013

Molecular mimicry of host proteins is a common strategy adopted by bacterial pathogens to interfere with and exploit host processes. Despite the availability of pathogen genomes, few studies have attempted to predict virulence-associated mimicry relationships directly from genomic sequences. Here, we analyzed the proteomes of 62 pathogenic and 66 non-pathogenic bacterial species, and screened for the top pathogen-specific or pathogen-enriched sequence similarities to human proteins. The screen identified approximately 100 potential mimicry relationships including well-characterized examples among the top-scoring hits (e.g., RalF, internalin, YopH, and others), with about 1/3 of predicted relationships supported by existing literature. Examination of homology to virulence factors, statistically enriched functions, and comparison with literature indicated that the detected mimics target key host structures (e.g., extracellular matrix, ECM) and pathways (e.g., cell adhesion, lipid metabolism, and immune signaling). The top-scoring and most widespread mimicry pattern detected among pathogens consisted of elevated sequence similarities to ECM proteins including collagens and leucine-rich repeat proteins. Unexpectedly, analysis of the pathogen counterparts of these proteins revealed that they have evolved independently in different species of bacterial pathogens from separate repeat amplifications. Thus, our analysis provides evidence for two classes of mimics: complex proteins such as enzymes that have been acquired by eukaryote-to-pathogen horizontal transfer, and simpler repeat proteins that have independently evolved to mimic the host ECM. Ultimately, computational detection of pathogen-specific and pathogen-enriched similarities to host proteins provides insights into potentially novel mimicry-mediated virulence mechanisms of pathogenic bacteria.


Duan Z.,University of Waterloo
International Journal of Thermal Sciences | Year: 2012

The objective of this paper is to furnish the research and design communities with a simple and convenient means of predicting quantities of engineering interest for slip flow in doubly connected microchannels. Slip flow in doubly connected microchannels has been examined and a simple model is proposed to predict the friction factor and Reynolds number product. Doubly connected regions are inherently more difficult to solve than simply connected regions, and for slip flow no solutions or graphical and tabulated data exist for nearly all doubly connected geometries. The developed model fills this void and can be used to predict the friction factor and Reynolds number product, mass flow rate, pressure distribution, and pressure drop of slip flow in doubly connected microchannels for practical engineering design. The proposed models are preferable since the effects of the various independent parameters are demonstrated and the difficulty and investment are completely negligible compared with the cost of alternative numerical methods. © 2012 Elsevier Masson SAS. All rights reserved.
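
For orientation, the slip-flow regime and the first-order slip boundary condition that underlie such models are conventionally written as follows (standard definitions added for context, not the paper's specific model):

\[
\mathrm{Kn} = \frac{\lambda}{D_h}, \qquad u_s - u_w = \frac{2 - \sigma_v}{\sigma_v}\,\lambda \left. \frac{\partial u}{\partial n} \right|_{\mathrm{wall}},
\]

where λ is the molecular mean free path, D_h the hydraulic diameter, and σ_v the tangential momentum accommodation coefficient; slip flow corresponds roughly to 10^-3 ≲ Kn ≲ 10^-1.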


Scott M.,University of Waterloo | Hwa T.,University of California at San Diego
Current Opinion in Biotechnology | Year: 2011

Quantitative empirical relationships between cell composition and growth rate played an important role in the early days of microbiology. Gradually, the focus of the field began to shift from growth physiology to the ever more elaborate molecular mechanisms of regulation employed by the organisms. Advances in systems biology and biotechnology have renewed interest in the physiology of the cell as a whole. Furthermore, gene expression is known to be intimately coupled to the growth state of the cell. Here, we review recent efforts in characterizing such couplings, particularly the quantitative phenomenological approaches exploiting bacterial 'growth laws.' These approaches point toward underlying design principles that can guide the predictive manipulation of cell behavior in the absence of molecular details. © 2011 Elsevier Ltd.
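
One frequently cited example of such a bacterial growth law (stated here for illustration, from related work by the same authors, not quoted from this abstract) is the linear relation between the ribosomal protein mass fraction φ_R and the growth rate λ:

\[
\phi_R \approx \phi_R^{\min} + \frac{\lambda}{\kappa_t},
\]

where κ_t measures the cell's translational capacity; relations of this kind let one predict, for example, how expressing an unneeded protein slows growth, without invoking molecular detail.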


Christians J.A.,University of Notre Dame | Fung R.C.M.,University of Notre Dame | Fung R.C.M.,University of Waterloo | Kamat P.V.,University of Notre Dame
Journal of the American Chemical Society | Year: 2014

Organo-lead halide perovskite solar cells have emerged as one of the most promising candidates for the next generation of solar cells. To date, these perovskite thin film solar cells have exclusively employed organic hole conducting polymers which are often expensive and have low hole mobility. In a quest to explore new inorganic hole conducting materials for these perovskite-based thin film photovoltaics, we have identified copper iodide as a possible alternative. Using copper iodide, we have succeeded in achieving a promising power conversion efficiency of 6.0% with excellent photocurrent stability. The open-circuit voltage, compared to the best spiro-OMeTAD devices, remains low and is attributed to higher recombination in CuI devices as determined by impedance spectroscopy. However, impedance spectroscopy revealed that CuI exhibits 2 orders of magnitude higher electrical conductivity than spiro-OMeTAD which allows for significantly higher fill factors. Reducing the recombination in these devices could render CuI as a cost-effective competitor to spiro-OMeTAD in perovskite solar cells. © 2013 American Chemical Society.
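
For context, the quantities being traded off here combine in the standard photovoltaic definition of power conversion efficiency (a textbook relation, not specific to this paper):

\[
\eta = \frac{FF \cdot V_{oc} \cdot J_{sc}}{P_{in}},
\]

which is why the roughly hundredfold higher conductivity of CuI matters: it raises the fill factor FF even while the open-circuit voltage V_oc remains below that of spiro-OMeTAD devices.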


Vogelsberger M.,Harvard - Smithsonian Center for Astrophysics | Zavala J.,University of Waterloo | Zavala J.,Perimeter Institute for Theoretical Physics
Monthly Notices of the Royal Astronomical Society | Year: 2013

Self-interacting dark matter offers an interesting alternative to collisionless dark matter because of its ability to preserve the large-scale success of the cold dark matter model, while seemingly solving its challenges on small scales. We present here the first study of the expected dark matter detection signal in a fully cosmological context taking into account different self-scattering models for dark matter. We demonstrate that models with constant and velocity-dependent cross-sections, which are consistent with observational constraints, lead to distinct signatures in the velocity distribution, because non-thermalized features found in the cold dark matter distribution are thermalized through particle scattering. Depending on the model, self-interaction can lead to a 10 per cent reduction of the recoil rates at high energies, corresponding to a minimum speed that can cause recoil larger than 300 km s^-1, compared to the cold dark matter case. At lower energies these differences are smaller than 5 per cent for all models. The amplitude of the annual modulation signal can increase by up to 25 per cent, and the day of maximum amplitude can shift by about two weeks with respect to the cold dark matter expectation. Furthermore, the exact day of phase reversal of the modulation signal can also differ by about a week between the different models. In general, models with velocity-dependent cross-sections peaking at the typical velocities of dwarf galaxies lead only to minor changes in the detection signals, whereas allowed constant cross-section models lead to significant changes. We conclude that different self-interacting dark matter scenarios might be distinguished from each other through the details of direct detection signals. Furthermore, detailed constraints on the intrinsic properties of dark matter based on null detections should take into account the possibility of self-scattering and the resulting effects on the detector signal. © 2013 The Author. Published by Oxford University Press on behalf of the Royal Astronomical Society.
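
The annual modulation signal discussed here is conventionally parametrized as (standard direct-detection notation, added for orientation):

\[
R(t) \approx S_0 + S_m \cos\!\left( \frac{2\pi\,(t - t_0)}{1\,\mathrm{yr}} \right),
\]

with the phase t_0 falling in early June for the standard halo model; the self-interaction effects described above show up as shifts in the modulation amplitude S_m, the phase t_0, and the day of phase reversal.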


Gillis N.,University of Mons | Vavasis S.A.,University of Waterloo
IEEE Transactions on Pattern Analysis and Machine Intelligence | Year: 2014

In this paper, we study the nonnegative matrix factorization problem under the separability assumption (that is, there exists a cone spanned by a small subset of the columns of the input nonnegative data matrix containing all columns), which is equivalent to the hyperspectral unmixing problem under the linear mixing model and the pure-pixel assumption. We present a family of fast recursive algorithms and prove they are robust under any small perturbations of the input data matrix. This family generalizes several existing hyperspectral unmixing algorithms and hence provides for the first time a theoretical justification of their better practical performance. © 2014 IEEE.
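
A representative member of this family is the successive projection algorithm: repeatedly pick the column with the largest residual norm and project the data onto the orthogonal complement of that column. A minimal numpy sketch (illustration only; the paper's robustness analysis and variants go beyond this):

import numpy as np

def spa(X, r):
    """Pick r columns of X whose conic hull spans the data (separable NMF)."""
    R, indices = X.astype(float).copy(), []
    for _ in range(r):
        j = int(np.argmax((R * R).sum(axis=0)))   # largest residual column norm
        u = R[:, j] / np.linalg.norm(R[:, j])
        R -= np.outer(u, u @ R)                   # project out the chosen direction
        indices.append(j)
    return indices

# separable synthetic data: X = W @ [I, H'] with simplex-normalized columns
rng = np.random.default_rng(1)
W = rng.random((30, 3))
H = np.hstack([np.eye(3), rng.random((3, 7))])
H /= H.sum(axis=0)                                # pure-pixel assumption holds
print(spa(W @ H, 3))                              # -> a permutation of [0, 1, 2]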


Nelson P.,University of Waterloo
Journal of Combinatorial Theory. Series B | Year: 2014

We show that, if k and ℓ are positive integers and r is sufficiently large, then the number of rank-k flats in a rank-r matroid M with no U2,ℓ+2-minor is less than or equal to the number of rank-k flats in a rank-r projective geometry over GF(q), where q is the largest prime power not exceeding ℓ. © 2014 Elsevier Inc.
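
For reference, the extremal count in the theorem is the standard subspace count (a textbook fact, added for context): the number of rank-k flats of the rank-r projective geometry PG(r-1, q) is the Gaussian binomial coefficient

\[
\binom{r}{k}_{\!q} = \prod_{i=0}^{k-1} \frac{q^{\,r-i} - 1}{q^{\,k-i} - 1},
\]

so the theorem says that excluding a U_{2,ℓ+2}-minor caps the number of rank-k flats at this value, with q the largest prime power not exceeding ℓ.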


Titantah J.T.,University of Western Ontario | Karttunen M.,University of Waterloo
Journal of the American Chemical Society | Year: 2012

The physical mechanisms behind hydrophobic hydration have been debated for over 65 years. Spectroscopic techniques have the ability to probe the dynamics of water in increasing detail, but many fundamental issues remain controversial. We have performed systematic first-principles ab initio Car-Parrinello molecular dynamics simulations over a broad temperature range and provide a detailed microscopic view on the dynamics of hydration water around a hydrophobic molecule, tetramethylurea. Our simulations provide a unifying view and resolve some of the controversies concerning femtosecond-infrared, THz-GHz dielectric relaxation, and nuclear magnetic resonance experiments and classical molecular dynamics simulations. Our computational results are in good quantitative agreement with experiments, and we provide a physical picture of the long-debated "iceberg" model; we show that the slow, long-time component is present within the hydration shell and that molecular jumps and over-coordination play important roles. We show that the structure and dynamics of hydration water around an organic molecule are non-uniform. © 2012 American Chemical Society.


Sato C.M.,University of Waterloo
European Journal of Combinatorics | Year: 2014

The k-core of a graph is its maximal subgraph with minimum degree at least k. In this paper, we address robustness questions about k-cores (with fixed k ≥ 3). Given a k-core, remove one edge uniformly at random and find its new k-core. We are interested in how many vertices are deleted from the original k-core to find the new one. This can be seen as a measure of robustness of the original k-core. We prove that, if the initial k-core is chosen uniformly at random from the k-cores with n vertices and m edges, its robustness depends essentially on its average degree c. We prove that, if c → k, then the new k-core is empty with probability 1 - o(1). We define a constant c′_k such that, when c ≥ c′_k + ψ(n), where ψ(n) = ω(n^{-1/2} log n), ψ(n) > 0 and c is bounded, the probability that the new k-core has fewer than n - h(n) vertices goes to zero, for any function h(n) → ∞. © 2014 Elsevier Ltd.


Li F.,National University of Singapore | Ooi B.C.,National University of Singapore | Ozsu M.T.,University of Waterloo | Wu S.,Zhejiang University
ACM Computing Surveys | Year: 2014

MapReduce is a framework for processing and managing large-scale datasets in a distributed cluster, which has been used for applications such as generating search indexes, document clustering, access log analysis, and various other forms of data analytics. MapReduce adopts a flexible computation model with a simple interface consisting of map and reduce functions whose implementations can be customized by application developers. Since its introduction, a substantial amount of research effort has been directed toward making it more usable and efficient for supporting database-centric operations. In this article, we aim to provide a comprehensive review of a wide range of proposals and systems that focus fundamentally on the support of distributed data management and processing using the MapReduce framework. © 2014 ACM.
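
A toy word count makes the programming model concrete (a minimal single-process Python imitation, not code from any surveyed system: the map function emits key-value pairs, the framework sorts and groups them by key, and reduce aggregates each group):

from itertools import groupby
from operator import itemgetter

def map_fn(doc):                        # map: emit (word, 1) for every word
    for word in doc.split():
        yield word.lower(), 1

def reduce_fn(key, values):             # reduce: aggregate the counts per word
    return key, sum(values)

def mapreduce(docs):
    pairs = sorted(kv for doc in docs for kv in map_fn(doc))   # "shuffle" phase
    return [reduce_fn(key, [v for _, v in group])
            for key, group in groupby(pairs, key=itemgetter(0))]

print(mapreduce(["the cat", "the dog"]))   # [('cat', 1), ('dog', 1), ('the', 2)]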


Ames B.P.W.,California Institute of Technology | Vavasis S.A.,University of Waterloo
Mathematical Programming | Year: 2014

We consider the k-disjoint-clique problem. The input is an undirected graph G in which the nodes represent data items, and edges indicate a similarity between the corresponding items. The problem is to find within the graph k disjoint cliques that cover the maximum number of nodes of G. This problem may be understood as a general way to pose the classical 'clustering' problem. In clustering, one is given data items and a distance function, and one wishes to partition the data into disjoint clusters of data items, such that the items in each cluster are close to each other. Our formulation additionally allows 'noise' nodes to be present in the input data that are not part of any of the cliques. The k-disjoint-clique problem is NP-hard, but we show that a convex relaxation can solve it in polynomial time for input instances constructed in a certain way. The input instances for which our algorithm finds the optimal solution consist of k disjoint large cliques (called 'planted cliques') that are then obscured by noise edges inserted either at random or by an adversary, as well as additional nodes not belonging to any of the k planted cliques. © 2013 Springer-Verlag Berlin Heidelberg and Mathematical Optimization Society.


Varghese G.,Maxim Integrated Products Inc. | Wang Z.,University of Waterloo
IEEE Transactions on Circuits and Systems for Video Technology | Year: 2010

We propose a video denoising algorithm based on a spatiotemporal Gaussian scale mixture model in the wavelet transform domain. This model simultaneously captures the local correlations between the wavelet coefficients of natural video sequences across both space and time. Such correlations are further strengthened with a motion compensation process, for which a Fourier domain noise-robust cross correlation algorithm is proposed for motion estimation. Bayesian least square estimation is used to recover the original video signal from the noisy observation. Experimental results show that the performance of the proposed approach is competitive when compared with state-of-the-art video denoising algorithms based on both peak signal-to-noise-ratio and structural similarity evaluations. © 2010 IEEE.
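
For context, the first evaluation criterion is the standard peak signal-to-noise ratio (textbook definition, added for orientation):

\[
\mathrm{PSNR} = 10 \log_{10} \frac{\mathit{MAX}^2}{\mathrm{MSE}},
\]

where MAX is the dynamic range of the pixel values (255 for 8-bit video) and MSE is the mean squared error against the clean reference; the structural similarity (SSIM) index instead compares local luminance, contrast, and structure, and correlates better with perceived quality.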


Leung B.,University of Waterloo
IEEE Transactions on Circuits and Systems I: Regular Papers | Year: 2010

For a ring oscillator with an arbitrary voltage swing, core transistors in the delay cells typically move between the saturation and triode regions. This can result in the overall timing jitter being dominated by timing jitter accumulated within a particular region. Based on the multiple-threshold-crossing concept, a new and more accurate way of handling such region changes is developed. Specifically, any crossing between two such regions, prior to the actual crossing of the threshold that triggers the next stage delay cell, is treated as an internal threshold crossing. The timing jitter is then the sum (in the rms sense) of the timing jitter accumulated across the multiple threshold crossings. The model agrees to within 2 dB with measurements on differential-pair-based (both replica-bias and physical-resistor-load) and current-starved-inverter-based ring oscillators fabricated in 0.18 μm CMOS. Design insights from the model show that, for a differential pair ring oscillator originally designed with a voltage swing such that the input transistors can be in triode, phase noise can be improved by reducing the voltage swing so that the input transistors just do not go into triode. A 7-dB phase noise improvement on an example design using a replica-bias differential ring oscillator, based on this insight, is demonstrated. © 2006 IEEE.
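
The rms summation mentioned above amounts to the following (assuming, as an illustration, that the jitter contributions accumulated between consecutive internal threshold crossings are statistically independent):

\[
\sigma_{\mathrm{total}} = \sqrt{\textstyle\sum_i \sigma_i^2},
\]

where σ_i is the jitter accumulated in the i-th region segment; segments in which the devices contribute more noise, or in which the output slews more slowly, dominate the sum.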


Dawson L.L.,University of Waterloo
Terrorism and Political Violence | Year: 2010

This article examines: (1) the obvious reasons for, and curious absence of, a dialogue between scholars studying new religious movements (NRMs), particularly those responsible for acts of mass violence, and those studying processes of radicalization in home-grown terrorist groups; (2) the substantial parallels between established understandings of who joins NRMs, how, and why and recent findings about who joins terrorist groups in a Western context, how, and why; and (3) the ways in which explanations of the causes of violent behaviour in NRMs are pertinent to securing a more systematic and complete grasp of the process of radicalization in terrorist cells. The latter discussion focuses on the role of apocalyptic belief systems and charismatic forms of authority, highlighting the behavioural consequences of this dangerous combination and their possible strategic significance. Recommendations are made for further research, integrating insights from the two fields of study. © Taylor & Francis Group, LLC.


BACKGROUND: The beneficial effects of exercise on the brain regions that support cognitive control and memory are well documented. However, examination of the capacity of acute exercise to promote cortical resilience—the ability to recover from temporary perturbation—has been largely unexplored. The present study sought to determine whether a single session of moderate-intensity aerobic exercise can accelerate recovery of inhibitory control centers in the dorsolateral prefrontal cortex after transient perturbation via continuous theta burst stimulation (cTBS). METHODS: In a within-participants experimental design, 28 female participants aged 18 to 26 years (mean [standard deviation] = 20.32 [1.79] years) completed a session each of moderate-intensity and very light-intensity exercise, in a randomized order. Before each exercise session, participants received active cTBS to the left dorsolateral prefrontal cortex. A Stroop task was used to quantify both the initial perturbation and subsequent recovery effects on inhibitory control. RESULTS: Results revealed a significant exercise condition (moderate-intensity exercise, very light-intensity exercise) by time (prestimulation, poststimulation, postexercise) interaction (F(2,52) = 5.93, p = .005, d = 0.38). Specifically, the proportion of the cTBS-induced decrement in inhibition restored at 40 minutes postexercise was significantly higher after a bout of moderate-intensity exercise (101.26%) compared with very light-intensity exercise (18.36%; t(27) = −2.17, p = .039, d = −.57, 95% confidence interval = −161.40 to −4.40). CONCLUSION: These findings support the hypothesis that exercise promotes cortical resilience, specifically in relation to the brain regions that support inhibitory control. The resilience-promoting effects of exercise have empirical and theoretical implications for how we conceptualize the neuroprotective effects of exercise. Copyright © 2016 by American Psychosomatic Society


Lowe C.J.,University of Waterloo
Psychosomatic Medicine | Year: 2016

OBJECTIVE: The primary aim of this review was to evaluate the effectiveness of noninvasive brain stimulation to the dorsolateral prefrontal cortex (dlPFC) for modulating appetitive food cravings and consumption in laboratory (via meta-analysis) and therapeutic (via systematic review) contexts. METHODS: Keyword searches of electronic databases (PubMed, Scopus, Web of Science, PsycINFO, and EMBASE) and searches of previous quantitative reviews were used to identify studies (experimental [single-session] or randomized trials [multi-session]) that examined the effects of neuromodulation to the dlPFC on food cravings (n = 9) and/or consumption (n = 7). Random-effects models were employed to estimate the overall and method-specific (repetitive transcranial magnetic stimulation [rTMS] and transcranial direct current stimulation [tDCS]) effect sizes. Age and body mass index were examined as potential moderators. Two studies involving multisession therapeutic stimulation were considered in a separate systematic review. RESULTS: Findings revealed a moderate-sized effect of modulation on cravings across studies (g, −0.516; p = .037); this effect was subject to significant heterogeneity (Q, 33.086; p < .001). Although no statistically significant moderators were identified, the stimulation effect on cravings was statistically significant for rTMS (g, −0.834; p = .008) but not tDCS (g, −0.252; p = .37). There was not sufficient evidence to support a causal effect of neuromodulation on consumption in experimental studies; therapeutic studies reported mixed findings. CONCLUSIONS: Stimulation of the dlPFC modulates cravings for appetitive foods in single-session laboratory paradigms; when estimated separately, the effect size is only significant for rTMS protocols. Effects on consumption in laboratory contexts were not reliable across studies, but this may reflect methodological variability in delivery of stimulation and assessment of eating behavior. Additional single- and multi-session studies assessing eating behavior outcomes are needed. Copyright © 2016 by American Psychosomatic Society
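
The random-effects pooling used here conventionally weights each study by the inverse of its total variance (standard inverse-variance notation, added for orientation; the abstract does not specify the between-study variance estimator):

\[
\bar g = \frac{\sum_i w_i\, g_i}{\sum_i w_i}, \qquad w_i = \frac{1}{v_i + \hat\tau^2},
\]

where g_i and v_i are each study's effect size and within-study variance, and τ̂² is the estimated between-study heterogeneity, which the significant Q statistic above indicates is non-zero.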


Steiner S.H.,University of Waterloo | Jones M.,Center for Healthcare Related Infection Surveillance and Prevention
Statistics in Medicine | Year: 2010

Monitoring medical outcomes is desirable to help quickly detect performance changes. Previous applications have focused mostly on binary outcomes, such as 30-day mortality after surgery. However, in many applications survival time data are routinely collected. In this paper, we propose an updating exponentially weighted moving average (EWMA) control chart to monitor risk-adjusted survival times. The updating EWMA (uEWMA) operates in continuous time; hence, the scores for each patient always reflect the most up-to-date information. The uEWMA can be implemented based on a variety of survival-time models and can be set up to provide an ongoing estimate of a clinically interpretable average patient score. The efficiency of the uEWMA is shown to compare favorably with competing methods. Copyright © 2009 John Wiley & Sons, Ltd.
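
For orientation, the classical discrete-time EWMA that the uEWMA generalizes can be sketched in a few lines (a generic Python illustration with simulated scores, not the paper's risk-adjusted survival-time version):

import numpy as np

def ewma(scores, lam=0.1, start=0.0):
    """EWMA statistic: Z_t = lam * X_t + (1 - lam) * Z_{t-1}."""
    z, out = start, []
    for x in scores:
        z = lam * x + (1 - lam) * z
        out.append(z)                 # signal if z crosses a control limit
    return np.array(out)

rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0, 1, 50),   # in-control patient scores
                         rng.normal(1, 1, 50)])  # performance shift at t = 50
print(ewma(scores)[[49, 99]])         # the statistic drifts upward after the shift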


Wesson P.S.,University of Waterloo
International Journal of Modern Physics D | Year: 2015

Recent criticism of higher-dimensional extensions of Einstein's theory is considered. This may have some justification in regard to string theory, but is misguided as applied to five-dimensional (5D) theories with a large extra dimension. Such theories smoothly embed general relativity, ensuring recovery of the latter's observational support. When the embedding of spacetime is carried out in accordance with Campbell's theorem, the resulting 5D theory naturally explains the origin of classical matter and vacuum energy. Also, constraints on the equations of motion near a high-energy surface or membrane in the 5D manifold lead to quantization and quantum uncertainty. These are major returns on the modest investment of one extra dimension. Instead of fruitless bickering about whether it is possible to "see" the fifth dimension, it is suggested that it be treated on par with other concepts of physics, such as time. The main criterion for the acceptance of a fifth dimension (or not) should be its usefulness. © 2015 World Scientific Publishing Company.


Kirchhoff D.,University of Waterloo | Tsuji L.J.S.,University of Toronto
Impact Assessment and Project Appraisal | Year: 2014

In Canada, the use of omnibus budget bills has grown substantially in recent years. In 2012, the device was used twice by the Government of Canada. As a result, a number of substantial changes to environmental legislation were introduced with virtually no debate or compromise. This situation has been criticized for seriously reducing the credibility of the budget process and the authority of Parliament in Canada, as well as undermining the transparency and accountability of the policy-making process. This paper describes how changes to major policies through the use of omnibus bills (all, arguably, in the name of faster project review decisions) affect not only established environmental protection efforts, but also the public and Aboriginal (First Nations, Inuit and Métis) peoples, particularly in terms of their capacity to effectively participate in resource development. © 2014 IAIA.


Poulin F.J.,University of Waterloo
Journal of Physical Oceanography | Year: 2010

This article aims to advance the understanding of inherent randomness in geophysical fluids by considering the particular example of baroclinic shear flows that are spatially uniform in the horizontal directions and aperiodic in time. The time variability of the shear is chosen to be the Kubo oscillator, which is a family of time-dependent bounded noise that is oscillatory in nature with various degrees of stochasticity. The author analyzed the linear stability of a wide range of temporally periodic and aperiodic shears with a zero and nonzero mean to get a more complete understanding of the effect of oscillations in shear flows in the context of the two-layer quasigeostrophic Phillips model. It is determined that the parametric mode, which exists in the periodic limit, also exists in the range of small and moderate stochasticities but vanishes in highly erratic flows. Moreover, random variations weaken the effects of periodicity and yield growth rates more similar to that of the time-averaged steady-state analog. This signifies that the periodic shear flows possess the most extreme case of stabilization and destabilization and are thus anomalous. In the limit of an f plane, the linear stability problem is solved exactly to reveal that individual solutions to the linear dynamics with time-dependent baroclinic shear have growth rates that are equal to that of the time-averaged steady state. This implies that baroclinic shear flows with zero means are linearly stable in that they do not grow exponentially in time. This means that the stochastic mode that was found to exist in the Mathieu equation does not arise in this model. However, because the perturbations grow algebraically, the aperiodic baroclinic shear on an f plane can give rise to nonlinear instabilities. © 2010 American Meteorological Society.
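
Schematically, the Kubo oscillator is an oscillator whose frequency is randomly modulated (generic complex form, written here for orientation; in the paper it supplies the family of bounded, aperiodic time dependence for the shear):

\[
\frac{dz}{dt} = i\,\omega(t)\,z, \qquad \omega(t) = \omega_0 \big( 1 + \sigma\,\xi(t) \big),
\]

where ξ(t) is a bounded noise process and σ controls the degree of stochasticity, with σ → 0 recovering the purely periodic limit.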


Ammar K.,University of Waterloo
Proceedings of the ACM SIGMOD International Conference on Management of Data | Year: 2016

Many applications regularly generate large graph data. Many of these graphs change dynamically, and analysis techniques for static graphs are not suitable in these cases. This thesis proposes an architecture to process and analyze dynamic graphs. It is based on a new computation model called Grab'n Fix. The architecture includes a novel distributed graph storage layer to support dynamic graph processing. These proposals were inspired by an extensive quantitative and qualitative analysis of existing graph analytics platforms. © 2016 ACM.


The aim of this review is to introduce the reader first to the mathematical complexity associated with the analysis of fluorescence decays acquired with solutions of macromolecules labeled with a fluorophore and its quencher that are capable of interacting with each other via photophysical processes within the macromolecular volume, second to the experimental and mathematical approaches that have been proposed over the years to handle this mathematical complexity, and third to the information that one can expect to retrieve with respect to the internal dynamics of such fluorescently labeled macromolecules. In my view, the ideal fluorophore-quencher pair to use in studying the internal dynamics of fluorescently labeled macromolecules would involve a long-lived fluorophore, a fluorophore and a quencher that do not undergo energy migration, and a photophysical process that results in a change in fluorophore emission upon contact between the excited fluorophore and quencher. Pyrene, with its ability to form an excimer on contact between excited-state and ground-state species, happens to possess all of these properties. Although the concepts described in this review apply to any fluorophore and quencher pair sharing pyrene's exceptional photophysical properties, this review focuses on the study of pyrene-labeled macromolecules that have been characterized in great detail over the past 40 years and presents the main models that are being used today to analyze the fluorescence decays of pyrene-labeled macromolecules reliably. These models are based on Birks' scheme, the DMD model, the fluorescence blob model, and the model free analysis. The review also provides a step-by-step protocol that should enable the noneducated user to achieve a successful decay analysis exempt of artifacts. Finally, some examples of studies of pyrene-labeled macromolecules are also presented to illustrate the different types of information that can be retrieved from these fluorescence decay analyses depending on the model that is selected. © 2013 American Chemical Society.
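
For readers unfamiliar with Birks' scheme, its coupled monomer-excimer kinetics lead to characteristic biexponential decays (standard textbook result, included for orientation; the fluorescence blob model and model-free analysis mentioned above generalize this picture):

\[
I_M(t) = a_1 e^{-\lambda_1 t} + a_2 e^{-\lambda_2 t}, \qquad
I_E(t) \propto e^{-\lambda_1 t} - e^{-\lambda_2 t},
\]

where the decay rates λ_1 and λ_2 are functions of the monomer and excimer lifetimes and of the rate constants for excimer formation and dissociation; the negative pre-exponential term in I_E(t) reproduces the rise of excimer emission following excitation.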


Khan S.S.,University of Waterloo | Madden M.G.,National University of Ireland
Knowledge Engineering Review | Year: 2014

One-class classification (OCC) algorithms aim to build classification models when the negative class is either absent, poorly sampled or not well defined. This unique situation constrains the learning of efficient classifiers by defining the class boundary using knowledge of the positive class alone. The OCC problem has been considered and applied under many research themes, such as outlier/novelty detection and concept learning. In this paper, we present a unified view of the general problem of OCC by presenting a taxonomy of study for OCC problems based on the availability of training data, the algorithms used, and the application domains. We further delve into each category of the proposed taxonomy and present a comprehensive literature review of OCC algorithms, techniques and methodologies, with a focus on their significance, limitations and applications. We conclude by discussing some open research problems in the field of OCC and presenting our vision for future research. © Cambridge University Press 2014.
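
As a concrete instance of the paradigm, a one-class support vector machine can be trained on positive examples alone (a minimal scikit-learn sketch with synthetic data; OneClassSVM is only one of the many OCC algorithms such surveys cover):

import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(0, 1, size=(200, 2))            # positive class only
X_test = np.vstack([rng.normal(0, 1, size=(5, 2)),   # more positives
                    rng.normal(6, 1, size=(5, 2))])  # novel points

clf = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(X_train)
print(clf.predict(X_test))    # +1 = fits the positive class, -1 = outlier/novelty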


Elliott S.J.,University of Waterloo
Current Opinion in Environmental Sustainability | Year: 2011

The water-health nexus represents the intersection at which issues of water, sanitation, and human health collide. This collision is of crisis proportions at present. This paper will briefly outline the crisis, discuss some theoretical lenses through which we might view the crisis with an aim to action, review the importance of the knowledge journey with respect to linking evidence with action, and conclude with some reflections and next steps. In essence, if we do not adopt a contextualized theoretical lens through which to address the transdisciplinary nature of the problems requiring action at the water-health nexus, we will never succeed, from either a scientific or a moral imperative, in meeting global human needs. © 2011 Elsevier B.V.


Gibson R.B.,University of Waterloo
Impact Assessment and Project Appraisal | Year: 2011

Ultimately, the enhancement we need to deliver through environmental assessment is confidence that every approved undertaking will move us positively towards a desirable and durable future. In Canada, the most promising steps in this direction have been in several major project assessment reviews with public hearings and independent panels that applied a contribution to sustainability test. The most recent and advanced case is the review of a proposed C$16.2 billion natural gas infrastructure undertaking in the Northwest Territories. The Panel's application of the contribution to sustainability test compared the cumulative effects, equity and legacy implications of a range of project pace and scale alternatives. The Panel concluded that the project would offer positive overall contributions only if 176 recommendations were implemented. While the Panel's process was slow and the governments accepted only the most modest recommendations, the Panel's review set a new standard of analytical practice. This paper examines how the review was done and assesses its strengths and limitations, with particular attention to the design and application of the contribution to sustainability test. © IAIA 2011.


Quilley S.,University of Waterloo
Environmental Values | Year: 2013

Degrowth is identified as a prospective turning point in human development as significant as the domestication of fire or the process of agrarianisation. The Transition movement is identified as the most important attempt to develop a prefigurative, local politics of degrowth. Explicating the links between capitalist modernisation, metabolic throughput and psychological individuation, Transition embraces 'limits' but downplays the implications of scarcity for open, liberal societies, and for inter-personal and inter-group violence. William Ophuls' trilogy on the politics of scarcity confronts precisely these issues, but it depends on an unconvincing sociology of individuation as a central process in modernity. A framework is advanced through which to explore the tensions, trade-offs and possibilities for a socially liberal, culturally cosmopolitan and science-based civilisation under conditions of degrowth and metabolic contraction. © 2013 The White Horse Press.


Pal R.,University of Waterloo
Current Opinion in Colloid and Interface Science | Year: 2011

A comprehensive review of the rheology of simple and multiple emulsions is presented. Special attention is given to the models describing the rheology of these systems. The key factors governing the rheology of simple and multiple emulsions are discussed. In general, the state of the art is good for simple emulsions. A priori predictions of the rheological properties of simple emulsions are possible using the existing models. Multiple emulsions have received less attention. Theoretical models describing the rheological behavior of multiple emulsions at arbitrary flow strengths (any shear rate) are lacking. Careful experimental work is needed on the rheology of multiple emulsions of controlled droplet size and morphology. New emerging techniques of producing emulsions, such as microfluidic emulsification, can be used to control and manipulate the number, size, and size distribution of internal droplets in multiple emulsion globules. © 2010 Elsevier Ltd.
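
In the dilute limit, the classical starting point for such models is Taylor's extension of Einstein's viscosity relation (a standard result quoted for context, not the specific models reviewed here):

\[
\eta_r = 1 + \left( \frac{2.5\,\kappa + 1}{\kappa + 1} \right) \phi, \qquad \kappa = \frac{\eta_d}{\eta_c},
\]

where φ is the dispersed-phase volume fraction and κ the ratio of droplet to continuous-phase viscosity; κ → ∞ recovers Einstein's rigid-sphere result η_r = 1 + 2.5φ, and κ → 0 gives the bubble limit η_r = 1 + φ.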


Farhad S.,Carleton University | Hamdullahpur F.,University of Waterloo
Journal of Power Sources | Year: 2010

A novel portable electric power generation system, fuelled by ammonia, is introduced and its performance is evaluated. In this system, a solid oxide fuel cell (SOFC) stack that consists of anode-supported planar cells with Ni-YSZ anode, YSZ electrolyte and YSZ-LSM cathode is used to generate electric power. The small size, simplicity, and high electrical efficiency are the main advantages of this environmentally friendly system. The results predicted through computer simulation of this system confirm that the first-law efficiency of 41.1% with the system operating voltage of 25.6 V is attainable for a 100 W portable system, operated at the cell voltage of 0.73 V and fuel utilization ratio of 80%. In these operating conditions, an ammonia cylinder with a capacity of 0.8 l is sufficient to sustain full-load operation of the portable system for 9 h and 34 min. The effect of the cell operating voltage at different fuel utilization ratios on the number of cells required in the SOFC stack, the first- and second-law efficiencies, the system operating voltage, the excess air, the heat transfer from the SOFC stack, and the duration of operation of the portable system with a cylinder of ammonia fuel, are also studied through a detailed sensitivity analysis. Overall, the ammonia-fuelled SOFC system introduced in this paper exhibits an appropriate performance for portable power generation applications. © 2009 Elsevier B.V.
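
The first-law efficiency quoted above is defined in the usual way (generic definition and symbols, added for orientation):

\[
\eta_I = \frac{P_{el}}{\dot m_{\mathrm{NH_3}}\,\mathrm{LHV}_{\mathrm{NH_3}}},
\]

the ratio of net electric power output to the chemical energy rate supplied by the ammonia, based on its lower heating value; the second-law (exergy) efficiency replaces the denominator with the fuel's exergy flow.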


Rosmanis A.,University of Waterloo
Physical Review A - Atomic, Molecular, and Optical Physics | Year: 2011

I introduce a continuous-time quantum walk on graphs called the quantum snake walk, the basis states of which are fixed-length paths (snakes) in the underlying graph. First, I analyze the quantum snake walk on the line, and I show that, even though most states stay localized throughout the evolution, there are specific states that most likely move on the line as wave packets with momentum inversely proportional to the length of the snake. Next, I discuss how an algorithm based on the quantum snake walk might potentially be able to solve an extended version of the glued trees problem, which asks to find a path connecting both roots of the glued trees graph. To the best of my knowledge, no efficient quantum algorithm solving this problem is known yet. © 2011 American Physical Society.
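
For reference, a continuous-time quantum walk evolves a state by the Schrödinger equation with a Hamiltonian built from the graph (generic definition with ħ = 1, added for orientation; in the snake walk the basis states are snakes rather than single vertices):

\[
i\,\frac{d}{dt}\,|\psi(t)\rangle = H\,|\psi(t)\rangle
\;\;\Longrightarrow\;\;
|\psi(t)\rangle = e^{-iHt}\,|\psi(0)\rangle,
\]

where H is commonly taken as -γA for the adjacency matrix A (γ a hopping rate) or as the graph Laplacian.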


Collins S.,University of Akron | Collins S.,University of Waterloo
Coordination Chemistry Reviews | Year: 2011

Applications of transition metal amidinate [RC(NR')2], guanidinate and amidopyridine complexes to olefin coordination polymerization are reviewed. In addition, the use of complexes, featuring closely related ligands, such as phosphonamide or iminophosphonamide [R2P(NR')2], in olefin polymerization is highlighted. Some of these complexes have also been investigated in the stereoregular polymerization of styrene and conjugated dienes, whereas more recent work has focused on controlled ring-opening polymerization of lactones and lactides. © 2010 Elsevier B.V.


Yang X.,Hefei University of Technology | Clausi D.A.,University of Waterloo
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | Year: 2012

This paper presents a new approach to sea ice segmentation in synthetic aperture radar (SAR) intensity images by combining an edge-preserving region (EPR)-based representation with region-level MRF models. To construct the EPR-based representation of a SAR image, edge strength is measured using instantaneous coefficient of variation (ICOV) upon which the watershed algorithm is applied to partition the image into primitive regions. In addition, two new metrics for quantitative assessment of region characteristics (region accuracy and region redundancy) are defined and used for parameter estimation in the ICOV extraction process towards desired region characteristics. In combination with a region-level MRF, the EPR-based representation facilitates the segmentation process by largely reducing the search space of optimization process and improving parameter estimation of feature model, leading to considerable computational savings and less probability of false segmentation. The proposed segmentation method has been evaluated using a synthetic sea ice image corrupted with varying levels of speckle noise as well as real SAR sea ice images. Relative to the existing region-level MRF-based methods, testing results have demonstrated that our proposed method substantially improves the segmentation accuracy at high speckle noise and achieves on average 29% reduction of computational time. © 2008-2012 IEEE.


During Drosophila embryogenesis the majority of the extra-embryonic epithelium known as the amnioserosa (AS) undergoes programmed cell death (PCD) following the completion of the morphogenetic process of dorsal closure. Approximately ten percent of AS cells, however, are eliminated during dorsal closure by extrusion from the epithelium. Using biosensors that report autophagy and caspase activity in vivo, we demonstrate that AS cell extrusion occurs in the context of elevated autophagy and caspase activation. Furthermore, we evaluate AS extrusion rates, autophagy, and caspase activation in embryos in which caspase activity or autophagy are altered by genetic manipulation. This includes using the GAL4/UAS system to drive expression of p35, reaper, dINR (ACT) and Atg1 in the AS; we also analyze embryos lacking both maternal and zygotic expression of Atg1. Based on our results we suggest that autophagy can promote, but is not required for, epithelial extrusion and caspase activation in the amnioserosa.


Ross H.S.,University of Waterloo
Infancy | Year: 2013

This study examined property conflicts in thirty-two 20- and 30-month-old peer dyads during eighteen 40-min play sessions. Ownership influenced conflicts. Both 20- and 30-month-old owners claimed ownership ("mine") and instigated and won property conflicts more often than non-owners. At 30 months, owners also resisted peers' instigations more often than non-owners. Mothers' interventions supported non-owners more often than owners, in part because owners initiated conflict more frequently. Children who received mothers' support tended to win disputes. Finally, mothers' support of owners and children's adherence to ownership rights led to decreased conflict as relationships developed, supporting predictions based on theories concerning the social utility of ownership rights. © International Society on Infant Studies (ISIS).


Cronin D.S.,University of Waterloo
Journal of the Mechanical Behavior of Biomedical Materials | Year: 2014

The rate of soft tissue sprain/strain injuries to the cervical spine and the associated cost continue to be significant; however, the physiological nature of this injury makes experimental tests challenging, while aspects such as occupant position and musculature may contribute to significant variability in the current epidemiological data. Several theories have been proposed to identify the source of pain associated with whiplash. The goal of this study was to investigate three proposed sources of pain generation using a detailed numerical model in rear impact scenarios: distraction of the capsular ligaments; transverse nerve root compression through decrease of the intervertebral foramen space; and potential for damage to the disc based on the extent of rotation and annulus fibre strain. There was significant variability associated with the experimental measures, where the range of motion data overlapped ultimate failure data. Average data values were used to evaluate the model, which was justified by the use of average mechanical properties within the model and by previous studies demonstrating that the predicted response and failure of the tissues were comparable to average response values. The model predicted that changes in dimension of the intervertebral foramen were independent of loading conditions, and were within measured physiological ranges for the impact severities considered. Disc response, measured using relative rotation between intervertebral bodies, was below values associated with catastrophic failure or avulsion but exceeded the average range of motion values. Annulus fibre strains exceeded a proposed threshold value at three levels for 10 g impacts. Capsular ligament strain increased with increasing impact severity, and the model predicted the potential for injury at impact severities from 4 g to 15.4 g, when the range of proposed distraction corresponding to sub-catastrophic failure was exceeded, in agreement with the typically reported values of 9-15 g. This study used an enhanced neck finite element model with active musculature to investigate three potential sources of neck pain resulting from rear impact scenarios, and identified capsular ligament strain and deformation of the disc as potential sources. © 2013 Elsevier Ltd.


Smith S.L.,University of Waterloo | Schwager M.,Boston University | Rus D.,Massachusetts Institute of Technology
IEEE Transactions on Robotics | Year: 2012

In this paper, we present controllers that enable mobile robots to persistently monitor or sweep a changing environment. The environment is modeled as a field that is defined over a finite set of locations. The field grows linearly at locations that are not within the range of a robot and decreases linearly at locations that are within range of a robot. We assume that the robots travel on given closed paths. The speed of each robot along its path is controlled to prevent the field from growing unbounded at any location. We consider the space of speed controllers that are parametrized by a finite set of basis functions. For a single robot, we develop a linear program that computes a speed controller in this space to keep the field bounded, if such a controller exists. Another linear program is derived to compute the speed controller that minimizes the maximum field value over the environment. We extend our linear program formulation to develop a multirobot controller that keeps the field bounded. We characterize, both theoretically and in simulation, the robustness of the controllers to modeling errors and to stochasticity in the environment. © 2012 IEEE.
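
The single-robot stability test can be posed as a small linear program. Below is a minimal sketch assuming SciPy, with a simplification of ours: dwell times on discretized path segments replace the paper's basis-function speed parametrization, and `g`, `c`, and `cover` are hypothetical inputs.

```python
# A minimal sketch of the single-robot stability test as a linear program,
# assuming SciPy. Simplification (ours): the speed controller is represented
# by dwell times t_j on m discretized path segments rather than by the
# paper's basis-function parametrization. g[i] is the growth rate at
# location i, c[i] its consumption rate while covered, and cover[i] lists
# the segments within sensing range of location i.
import numpy as np
from scipy.optimize import linprog

def stable_dwell_times(g, c, cover, m, t_min=1e-3):
    g = np.asarray(g, dtype=float)
    n = len(g)
    # Stability at location i over one cycle of duration sum_j t_j:
    #   g[i] * sum_j t_j - c[i] * sum_{j in cover[i]} t_j <= 0
    A = np.tile(g[:, None], (1, m))
    for i in range(n):
        for j in cover[i]:
            A[i, j] -= c[i]
    # The constraints are homogeneous in t, so this LP is essentially a
    # feasibility check; minimizing cycle time is one natural objective.
    res = linprog(c=np.ones(m), A_ub=A, b_ub=np.zeros(n),
                  bounds=[(t_min, None)] * m)
    return res.x if res.success else None
```

Because the stability constraints are scale-invariant in the dwell times, any feasible solution can be rescaled; the paper's second LP, which minimizes the maximum field value, refines this feasibility formulation.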


Nazari S.,University of Waterloo | Thistle J.G.,Thales Rail Signalling Solutions Inc.
IEEE Transactions on Automatic Control | Year: 2012

The problem of checking blocking properties is studied for networks consisting of arbitrary numbers of finite-state discrete-event subsystems. The topology of the networks is that of a fully connected graph: any subsystem can potentially interact with any other. Two types of blocking are studied: component blocking, whereby a subsystem is potentially prevented from entering its set of marker states, and network blocking, whereby the subsystems are potentially unable to occupy marker states simultaneously. It is shown that if the subsystems are all identical and broadcast actions are permitted, both types of blocking properties are undecidable; but in the absence of broadcast actions, they become decidable. If the subsystems are not necessarily identical but only isomorphic, then blocking properties are in general undecidable; however, a template is proposed for ensuring adequate structure for decidability. It is claimed that this template is sufficiently general to admit many realistic examples. © 2012 IEEE.


Shames I.,KTH Royal Institute of Technology | Dasgupta S.,University of Iowa | Fidan B.,University of Waterloo | Anderson B.D.O.,Australian National University
IEEE Transactions on Automatic Control | Year: 2012

Consider an agent A at an unknown location, undergoing sufficiently slow drift, and a mobile agent B that must move to the vicinity of and then circumnavigate A at a prescribed distance from A. In doing so, B can only measure its distance from A, and knows its own position in some reference frame. This paper considers this problem, which has applications to surveillance and orbit maintenance. In many of these applications it is difficult for B to directly sense the location of A, e.g., when all that B can sense is the intensity of a signal emitted by A. This intensity does, however, provide a measure of the distance. We propose a nonlinear periodic continuous time control law that achieves the objective using this distance measurement. Fundamentally, a) B must exploit its motion to estimate the location of A, and b) use its best instantaneous estimate of where A resides to move itself to achieve the circumnavigation objective. For a) we use an open loop algorithm formulated by us in an earlier paper. The key challenge tackled in this paper is to design a control law that closes the loop by marrying the two goals. As long as the initial estimate of the source location is not coincident with the initial position of B, the algorithm is guaranteed to be exponentially convergent when A is stationary. Under the same condition, we establish that when A drifts with a sufficiently small, unknown velocity, B globally achieves its circumnavigation objective, to within a margin proportional to the drift velocity. © 2011 IEEE.


Liu Z.-W.,Huazhong University of Science and Technology | Guan Z.-H.,Huazhong University of Science and Technology | Shen X.,University of Waterloo | Feng G.,City University of Hong Kong
IEEE Transactions on Automatic Control | Year: 2012

In this technical note, an impulsive consensus algorithm is proposed for second-order continuous-time multi-agent networks with switching topology. The communication among agents occurs at sampling instants based on position-only measurements. By using the properties of stochastic matrices and algebraic graph theory, some sufficient conditions are obtained to ensure the consensus of the controlled multi-agent network if the communication graphs jointly contain a spanning tree. A numerical example is given to illustrate the effectiveness of the proposed algorithm. © 2012 IEEE.


Glick B.R.,University of Waterloo
Biotechnology Advances | Year: 2010

In the past twenty years or so, researchers have endeavored to utilize plants to facilitate the removal of both organic and inorganic contaminants from the environment, especially from soil. These phytoremediation approaches have come a long way in a short time. However, the majority of this work has been done under more controlled laboratory conditions and not in the field. As an adjunct to various phytoremediation strategies and as part of an effort to make this technology more efficacious, a number of scientists have begun to explore the possibility of using various soil bacteria together with plants. These bacteria include biodegradative bacteria, plant growth-promoting bacteria and bacteria that facilitate phytoremediation by other means. An overview of bacterially assisted phytoremediation is provided here for both organic and metallic contaminants, with the intent of providing some insight into how these bacteria aid phytoremediation so that future field studies might be facilitated. © 2010 Elsevier Inc. All rights reserved.


Heikkila J.J.,University of Waterloo
Comparative Biochemistry and Physiology - A Molecular and Integrative Physiology | Year: 2010

Heat shock proteins (HSPs) are molecular chaperones that are involved in protein folding and translocation. During heat shock, both constitutive and stress-inducible HSPs bind to and inhibit irreversible aggregation of denatured protein and facilitate their refolding once normal cellular conditions are re-established. Recent interest in HSPs has been propelled by their association with various human diseases. Amphibian model systems, as shown in this review, have had a significant impact on our understanding of hsp gene expression and function. Some amphibian hsp genes are expressed constitutively during oogenesis and embryogenesis, while others are developmentally regulated and enriched in selected tissues in a stress-inducible fashion. For example, while hsp70 genes are heat-inducible after the midblastula stage, hsp30 genes are not inducible until late neurula/early tailbud. This particular phenomenon is likely controlled by chromatin structure. Also, hsp genes are expressed during regeneration, primarily in response to wounding-associated trauma. The availability of amphibian cultured cells has enabled the analysis of hsp gene expression induced by different stresses (e.g. cadmium, arsenite, proteasome inhibitors etc.), HSP intracellular localization, and their involvement in stress resistance. Furthermore, hyperthermia treatment of adult amphibians reveals that certain tissues are more sensitive than others in terms of hsp gene expression. Finally, this review details the evidence available for the role of amphibian small HSPs as molecular chaperones. © 2010 Elsevier Inc. All rights reserved.


Piccart M.J.,University of Waterloo
Cancer Research | Year: 2013

Trastuzumab, a monoclonal antibody directed at the HER2 receptor, is one of the most impressive targeted drugs developed in the last two decades. Indeed, when given in conjunction with chemotherapy, it improves the survival of women with HER2-positive breast cancer, both in advanced and in early disease. Its optimal duration, however, is poorly defined in both settings, with a significant economic impact in the adjuvant setting, where the drug is arbitrarily given for 1 year. This article reviews current attempts at shortening this treatment duration, emphasizing the likelihood of inconclusive results and, therefore, the need to investigate this important variable as part of the initial pivotal trials and with the support of public health systems. Failure to do so has major consequences on treatment affordability. Ongoing adjuvant trials of dual HER2 blockade, using trastuzumab in combination with a second anti-HER2 agent, and trials of the antibody-drug conjugate T-DM1 (trastuzumab-emtansine) have all been designed with 12 months of targeted therapy. © 2013 American Association for Cancer Research.


Background: Excessive weight gain among youth is an ongoing public health concern. Despite evidence linking both policies and the built environment to adolescent and adult overweight, the association between health policies or the built environment and overweight is often overlooked in research with children. The purpose of this study was to examine if school-based physical activity policies and the built environment surrounding a school are associated with weight status among children. Methods: Objectively measured height and weight data were available for 2,331 grade 1 to 4 students (aged 6 to 9 years) attending 30 elementary schools in Ontario, Canada. Student-level data were collected using parent reports and the PLAY-On questionnaire administered to students by study nurses. School-level policy data were collected from school administrators using the Physical Activity Module of the Healthy School Planner tool, and built environment data were provided by the Enhanced Points of Interest data resource. Multi-level logistic regression models were used to examine the school- and student-level characteristics associated with the odds of a student being overweight or obese. Results: There was significant between-school random variation in the odds of a student being overweight [σ²u0 = 0.274 (0.106), p < 0.001], but not for being obese [σ²u0 = 0.115 (0.089)]. If a student attended a school that provided student access to a variety of facilities on and off school grounds during school hours or that supported active transportation to and from school, he/she was less likely to be overweight than a similar student attending a school without these policies. Characteristics of the built environment were not associated with overweight or obesity among this large cross-sectional sample of children. Conclusions: This new evidence suggests that it may be wise to target obesity prevention efforts to schools that do not provide student access to recreation facilities during school hours or schools that do not support active transportation for students. Future research should evaluate if school-based overweight and obesity prevention programming might be improved if interventions selectively targeted the school characteristics that are putting students at the greatest risk. © 2013 Leatherdale; licensee BioMed Central Ltd.


Wesson P.S.,University of Waterloo | Wesson P.S.,Herzberg Institute for Astrophysics
Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics | Year: 2011

In 5D, I take the metric in canonical form and define causality by null-paths. Then spacetime is modulated by a factor equivalent to the wave function, and the 5D geodesic equation gives the 4D Klein-Gordon equation. These results effectively show how general relativity and quantum mechanics may be unified in 5D. © 2011 Elsevier B.V.


Mann R.B.,University of Waterloo | Mureika J.R.,Loyola Marymount University
Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics | Year: 2011

We consider the formulation of entropic gravity in two spacetime dimensions. The usual gravitational force law is derived even in the absence of area, as normally required by the holographic principle. A special feature of this perspective concerns the nature of temperature and entropy defined at a point. We argue that the constancy of the gravitational force in one spatial dimension implies the information contained at each point in space is an internal degree of freedom on the manifold, and furthermore is a universal constant, contrary to previous assertions that entropic gravity in one spatial dimension is ill-defined. We give some heuristic arguments for gravitation and information transfer constraints within this framework, thus adding weight to the contention that spacetime and gravitation might be emergent phenomena. © 2011 Elsevier B.V.


Das S.,University of Lethbridge | Mann R.B.,University of Waterloo
Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics | Year: 2011

Almost all theories of Quantum Gravity predict modifications of the Heisenberg Uncertainty Principle near the Planck scale to a so-called Generalized Uncertainty Principle (GUP). Recently it was shown that the GUP gives rise to corrections to the Schrödinger and Dirac equations, which in turn affect all non-relativistic and relativistic quantum Hamiltonians. In this Letter, we apply it to superconductivity and the quantum Hall effect and compute Planck scale corrections. We also show that Planck scale effects may account for a (small) part of the anomalous magnetic moment of the muon. We obtain (weak) empirical bounds on the undetermined GUP parameter from present-day experiments. © 2011 Elsevier B.V.
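
For orientation, one common quadratic form of the GUP assumed in this literature is sketched below; conventions differ between papers, and variants (including the one built on here) add a term linear in Δp, so this is illustrative rather than the paper's exact expression.

```latex
% One common (quadratic) form of the GUP; conventions differ, and some
% variants add a term linear in \Delta p:
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}\left[\,1 + \beta\,(\Delta p)^{2}\right],
\qquad \beta = \beta_{0}\,\frac{\ell_{\mathrm{Pl}}^{2}}{\hbar^{2}} .
```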


Chan T.M.,University of Waterloo
Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms | Year: 2016

At SODA'93, Chazelle and Matoušek presented a derandomization of Clarkson's sampling-based algorithm [FOCS'88] for solving linear programs with n constraints and d variables in d^{(7+o(1))d} n deterministic time. The time bound can be improved to d^{(5+o(1))d} n with subsequent work by Brönnimann, Chazelle, and Matoušek [FOCS'93]. We first point out a much simpler derandomization of Clarkson's algorithm that avoids ε-approximations and runs in d^{(3+o(1))d} n time. We then describe a few additional ideas that eventually improve the deterministic time bound to d^{(1/2+o(1))d} n. © Copyright (2016) by SIAM: Society for Industrial and Applied Mathematics.


Fraser D.,University of Waterloo
Studies in History and Philosophy of Science Part B - Studies in History and Philosophy of Modern Physics | Year: 2011

Further arguments are offered in defence of the position that the variant of quantum field theory (QFT) that should be subject to interpretation and foundational analysis is axiomatic quantum field theory. I argue that the successful application of renormalization group (RG) methods within alternative formulations of QFT illuminates the empirical content of QFT, but not the theoretical content. RG methods corroborate the point of view that QFT is a case of the underdetermination of theory by empirical evidence. I also urge caution in extrapolating interpretive conclusions about QFT from the application of RG methods in other contexts (e.g., condensed matter physics). This paper replies to criticisms advanced by David Wallace, but aims to be self-contained. © 2011 Elsevier Ltd.


Bennett W.F.D.,University of Calgary | Bennett W.F.D.,University of Waterloo | Tieleman D.P.,University of Calgary
Accounts of Chemical Research | Year: 2014

Conspectus: The defects and pores within lipid membranes are scientifically interesting and have a number of biological applications. Although lipid bilayers are extremely thin hydrophobic barriers, just ∼3 nm thick, they include diverse chemistry and have complex structures. Bilayers are soft and dynamic, and as a result, they can bend and deform in response to different stimuli by means of structural changes in their component lipids. Though defects occur within these structures, their transience and small size have made it difficult to characterize them. However, with recent advances in computer power and computational modeling techniques, researchers can now use simulations as a powerful tool to probe the mechanism and energies of defect and pore formation in a number of situations. In this Account, we present results from our detailed molecular dynamics computer simulations of hydrophilic pores and related defects in lipid bilayers at an atomistic level. Electroporation can be used to increase the permeability of cellular membranes, with potential therapeutic applications. Atomistic simulations of electroporation have illustrated the molecular details of this process, including the importance of water dipole interactions at the water-membrane interface. Characterization of the lipid-protein interactions provides an important tool for understanding transmembrane protein structure and thermodynamic stability. Atomistic simulations give a detailed picture of the free energies of model peptides and side chains in lipid membranes; the energetic cost of defect formation strongly influences the energies of interactions between lipids and polar and charged residues. Many antimicrobial peptides form hydrophilic pores in lipid membranes, killing bacteria or cancer cells. On the basis of simulation data, at least some of these peptides form defects and pores near the center of the bilayer, with a common disordered structure where hydrated headgroups form an approximately toroidal shape. The localization and trafficking of lipids support general membrane structure and a number of important signaling cascades, such as those involving ceramide, diacylglycerol, and cholesterol. Atomistic simulations have determined the rates and free energies of lipid flip-flop. During the flip-flop of most phosphatidylcholine lipids, a hydrophilic pore forms when the headgroup moves near the center of the bilayer. Simulations have provided novel insight into many features of defects and pores in lipid membranes. Simulation data from very different systems and models show how water penetration and defect formation can determine the free energies of many membrane processes. Bilayers can deform and allow transient defects and pores when exposed to a diverse range of stimuli. Future work will explore many aspects of membrane defects with increased resolution and scope, including the study of more complex lipid mixtures, membrane domains, and large-scale membrane remodeling. Such studies will examine processes including vesicle budding and fusion, non-bilayer lipid phases, and interactions between lipid bilayers and other biomolecules. Simulations provide information that complements experimental studies, allowing microscopic insight into experimental observations and suggesting novel hypotheses and experiments. These studies should enable a deeper understanding of the role of lipid bilayers in cellular biology and support the development of future lipid-based biotechnology. © 2014 American Chemical Society.


Jennings A.T.,California Institute of Technology | Burek M.J.,University of Waterloo | Greer J.R.,California Institute of Technology
Physical Review Letters | Year: 2010

We report results of uniaxial compression experiments on single-crystalline Cu nanopillars with nonzero initial dislocation densities, produced without the use of a focused ion beam (FIB). Remarkably, we find the same power-law size-driven strengthening as in FIB-fabricated face-centered cubic micropillars. TEM analysis reveals that the initial dislocation densities in our FIB-less pillars and in those produced by FIB are on the order of 10¹⁴ m⁻², suggesting that the mechanical response of nanoscale crystals is a stronger function of initial microstructure than of size, regardless of fabrication method. © 2010 The American Physical Society.


Liu J.,University of Waterloo
TrAC - Trends in Analytical Chemistry | Year: 2014

Fluorescent silver, gold and copper nanoclusters (NCs) have emerged for biosensor development. Compared to semiconductor quantum dots, there is less concern about the toxicity of metal NCs, which can be more easily conjugated to biopolymers. These NCs need a stabilizing ligand. Many polymers, proteins and nucleic acids stabilize NCs, and many DNA sequences produce highly-fluorescent NCs. Coupling these DNA stabilizers with other sequences, such as aptamers, has generated a large number of biosensors. We summarize the synthesis of DNA- and nucleotide-templated NCs, and we discuss their chemical interactions. We briefly review properties of NCs, such as fluorescence quantum yield, emission wavelength and lifetime, structure and photostability. We categorize sensor-design strategies using these NCs into: (1) fluorescence de-quenching; (2) generation of templating DNA sequences to produce NCs; (3) change of the nearby environment; and (4) reaction with heavy metal ions or other quenchers. Finally, we discuss future trends. © 2014 Elsevier Ltd.


Visually evoked fast intrinsic optical signals (IOSs) were recorded for the first time in vivo from all layers of healthy chicken retina by using a combined functional optical coherence tomography (fOCT) and electroretinography (ERG) system. The fast IOSs were observed to develop within ∼5 ms from the onset of the visual stimulus, whereas slow IOSs were measured up to 1 s later. The visually evoked IOSs and ERG traces were recorded simultaneously, and a clear correlation was observed between them. The ability to measure visually evoked fast IOSs non-invasively and in vivo from individual retinal layers could significantly improve the understanding of the complex communication between different retinal cell types in healthy and diseased retinas.


Waite M.L.,University of Waterloo
Physics of Fluids | Year: 2011

Numerical simulations of forced stratified turbulence are presented, and the dependence on horizontal resolution and grid aspect ratio is investigated. Simulations are designed to model the small-scale end of the atmospheric mesoscale and oceanic submesoscale, for which high horizontal resolution is usually not feasible in large-scale geophysical fluid simulations. Coarse horizontal resolution, which necessitates the use of thin grid aspect ratio, yields a downscale stratified turbulence energy cascade in agreement with previous results. We show that with increasing horizontal resolution, a transition emerges at the buoyancy scale 2πU/N, where U is the rms velocity and N is the Brunt-Väisälä frequency. Simulations with high horizontal resolution and isotropic grid spacing exhibit a spectral break at this scale, below which there is a net injection of kinetic energy by nonlinear interactions with the large-scale flow. We argue that these results are consistent with a direct transfer of energy to the buoyancy scale by Kelvin-Helmholtz instability of the large-scale vortices. These findings suggest the existence of a distinct subrange of stratified turbulence between the buoyancy and Ozmidov scales. This range must be at least partially resolved or parameterized to obtain robust simulations of larger-scale turbulence. © 2011 American Institute of Physics.
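
To get a feel for the transition scale, here is a back-of-envelope evaluation of the buoyancy scale with illustrative values of our choosing, not values taken from the simulations.

```python
# Back-of-envelope buoyancy scale L_b = 2*pi*U/N, with illustrative values.
import math

U = 1.0    # rms velocity [m/s], illustrative
N = 0.01   # Brunt-Vaisala frequency [1/s], typical mid-troposphere magnitude
L_b = 2 * math.pi * U / N
print(f"buoyancy scale ~ {L_b/1000:.2f} km")  # ~0.63 km for these values
```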


Objective: Fatty foods are regarded as highly appetitive, and self-control is often required to resist consumption. Executive control resources (ECRs) are potentially facilitative of self-control efforts, and therefore could predict success in the domain of dietary self-restraint. It is not currently known whether stronger ECRs facilitate resistance to fatty food consumption, and moreover, it is unknown whether such an effect would be stronger in some age groups than others. The purpose of the present study was to examine the association between ECRs and consumption of fatty foods among healthy community-dwelling adults across the adult life span. Methods: An age-stratified sample of individuals between 18 and 89 years of age attended two laboratory sessions. During the first session they completed two computer-administered tests of ECRs (Stroop and Go-NoGo) and a test of general cognitive function (Wechsler Abbreviated Scale of Intelligence); participants completed two consecutive 1-week recall measures to assess frequency of fatty and nonfatty food consumption. Results: Regression analyses revealed that stronger ECRs were associated with lower frequency of fatty food consumption over the 2-week interval. This association was observed for both measures of ECR and a composite measure. The effect remained significant after adjustment for demographic variables (age, gender, socioeconomic status), general cognitive function, and body mass index. The observed effect of ECRs on fatty food consumption frequency was invariant across age group, and did not generalize to nonfatty food consumption. Conclusions: ECRs may be potentially important, though understudied, determinants of dietary behavior in adults across the life span. © 2011 American Psychological Association.


Khandani A.K.,University of Waterloo
IEEE International Symposium on Information Theory - Proceedings | Year: 2013

It is shown that embedding part or all of the information in the (intentional) variations of the transmission media (end-to-end channel) can offer significant performance gains over traditional SISO, SIMO and MIMO systems, at the same time with lower complexity. This is in contrast with traditional wireless systems, where the information is entirely embedded in the variations of an RF source prior to the antenna and propagates via the channel to the destination. In particular, it is shown that, using a single transmit antenna and D receive antennas, significant savings in energy with respect to a D×D traditional MIMO system are achieved. Similar energy savings are possible in SISO and SIMO setups. © 2013 IEEE.


We provide some criteria for stabilizability by the energy-shaping method for the class of all controlled Lagrangian systems with two degrees of freedom and one degree of under-actuation: a necessary and sufficient condition for Lyapunov stabilizability, two sufficient conditions for asymptotic stabilizability, and a necessary and sufficient condition for exponential stabilizability. As a corollary, we show that some of the asymptotically stabilizing controllers designed in the earlier literature with the energy-shaping method are actually exponentially stabilizing controllers. Examples of such systems are the inverted pendulum on a cart, the Furuta pendulum, the ball and beam system, and the Pendubot. © 2010 IEEE.


Chan T.M.,University of Waterloo
Journal of the ACM | Year: 2010

We present a fully dynamic randomized data structure that can answer queries about the convex hull of a set of n points in three dimensions, where insertions take O(log^3 n) expected amortized time, deletions take O(log^6 n) expected amortized time, and extreme-point queries take O(log^2 n) worst-case time. This is the first method that guarantees polylogarithmic update and query cost for arbitrary sequences of insertions and deletions, and it improves the previous O(n^ε)-time method by Agarwal and Matoušek from a decade ago. As a consequence, we obtain similar results for nearest neighbor queries in two dimensions and improved results for numerous fundamental geometric problems (such as levels in three dimensions and dynamic Euclidean minimum spanning trees in the plane). © 2010 ACM.
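
To make the query interface concrete, here is a naive recompute-from-scratch baseline assuming SciPy; it pays O(n log n) per hull rebuild, which is precisely the cost the paper's dynamic structure reduces to polylogarithmic bounds. The class and method names are ours.

```python
# A naive baseline for the same interface (insert / delete / extreme-point
# query), assuming SciPy. The hull is rebuilt from scratch on every query,
# unlike the paper's polylogarithmic dynamic structure.
import numpy as np
from scipy.spatial import ConvexHull

class NaiveDynamicHull:
    def __init__(self):
        self.points = []

    def insert(self, p):
        self.points.append(tuple(p))

    def delete(self, p):
        self.points.remove(tuple(p))

    def extreme_point(self, direction):
        """Return the stored point maximizing <p, direction>."""
        pts = np.asarray(self.points)
        # ConvexHull needs at least 4 affinely independent points in 3D.
        hull = ConvexHull(pts)
        verts = pts[hull.vertices]
        return verts[np.argmax(verts @ np.asarray(direction, dtype=float))]
```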


Sarhadi A.,University of Waterloo | Heydarizadeh M.,Research Institute for Water Scarcity and Drought in Agriculture and Natural Resources
International Journal of Climatology | Year: 2014

This paper presents a methodology for regional frequency analysis and spatial pattern features of the Annual Maximum Dry Spell Length (AMDSL) as an indicator of drought conditions, using the well-known L-moments approach and statistical-based methods. Applying Ward's cluster-analysis method identifies eight regions with distinctive AMDSL behaviours for Iran. Homogeneity testing indicates that most of these regions are homogeneous. The goodness-of-fit test Z^Dist shows that the Generalized Logistic, Generalized Extreme Value and Pearson type III distributions fit best for most regions. The spatial pattern of L-moment statistics demonstrates that although the northwestern and northern parts of the country experience short dry spells, these periods are inconstant, and extreme dry spell events may happen in these areas. Almost all spatial mapping of AMDSLs at different probabilistic levels demonstrates that dry spells increase gradually from west to east and from north to south, and the southern parts (especially along the Persian Gulf and Oman Sea) and central areas, including most agricultural lands, stand out as the most sensitive to soil moisture deficits, because of longer lasting droughts. © 2013 Royal Meteorological Society.
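
For readers unfamiliar with the machinery, a minimal sketch of sample L-moments computed from probability-weighted moments follows, assuming NumPy. The resulting ratios (L-CV, L-skewness, L-kurtosis) are the building blocks of the regional homogeneity and goodness-of-fit tests used above.

```python
# A minimal sketch of sample L-moments from probability-weighted moments,
# assuming NumPy.
import numpy as np

def sample_l_moments(x):
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    # Unbiased probability-weighted moments b0..b3
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    b3 = np.sum((i - 1) * (i - 2) * (i - 3) /
                ((n - 1) * (n - 2) * (n - 3)) * x) / n
    # L-moments l1..l4
    l1 = b0
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l1, l2 / l1, l3 / l2, l4 / l2  # mean, L-CV, L-skewness, L-kurtosis
```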


Stastna M.,University of Waterloo
Physics of Fluids | Year: 2011

We consider the resonant generation of internal waves by small-amplitude topography with multiple local maxima. We demonstrate that for near-critical inflows, the initial resonant generation process is locked to the topography. However, the process undergoes a profound reorganization in the long time limit, and over the topography yields waves with an amplitude that is larger than the theoretical maximum for waves far upstream. For subcritical flows, we demonstrate that short length-scale topography can successfully generate finite amplitude, quasi-periodic mode-1 waves, but that the most energetic of these are confined to the region over and downstream of the topography. We demonstrate that, due to the inherently shorter length scale of higher mode solitary waves, subcritical flows over short topography can generate mode-2 waves even for stratifications whose center is well removed from the mid-depth. Finally, we discuss the implications of our results for slowly changing currents such as tides, namely that periods of near-critical flow dominate wave generation. © 2011 American Institute of Physics.


Karimi M.,University of Waterloo | Nasiri-Kenari M.,Sharif University of Technology
Journal of Lightwave Technology | Year: 2011

Fading and path loss are the major challenges in the practical deployment of free space optical communication systems. In this paper, cooperative free space optical communication via an optical amplify-and-forward relay is considered to deal with these challenges. We use a photon-counting approach to investigate the system bit error probability (BEP) performance and study the effects of atmospheric turbulence, background light, amplified spontaneous emission, and receiver thermal noise on the system performance. We compare the results with those of the multiple-transmitter (MT) system. The results indicate that the performance of the relay-assisted system is much better than that of the MT system in the different cases considered. We show that there is an optimum place for the relay from the BEP point of view. © 2010 IEEE.


Denison S.,University of Waterloo | Xu F.,University of California at Berkeley
Cognition | Year: 2014

Reasoning under uncertainty is the bread and butter of everyday life. Many areas of psychology, from cognitive, developmental, social, to clinical, are interested in how individuals make inferences and decisions with incomplete information. The ability to reason under uncertainty necessarily involves probability computations, be they exact calculations or estimations. What are the developmental origins of probabilistic reasoning? Recent work has begun to examine whether infants and toddlers can compute probabilities; however, previous experiments have confounded quantity and probability: in most cases young human learners could have relied on simple comparisons of absolute quantities, as opposed to proportions, to succeed in these tasks. We present four experiments providing evidence that infants younger than 12 months show sensitivity to probabilities based on proportions. Furthermore, infants use this sensitivity to make predictions and fulfill their own desires, providing the first demonstration that even preverbal learners use probabilistic information to navigate the world. These results provide strong evidence for a rich quantitative and statistical reasoning system in infants. © 2013 Elsevier B.V.


Hane F.,University of Waterloo
FEBS Letters | Year: 2013

Amyloid-β, the protein implicated in Alzheimer's disease, along with a number of other proteins, has been shown to form amyloid fibrils. Fibril-forming proteins share no common primary structure and have little known function. Furthermore, all proteins have the ability to form amyloid fibrils under certain conditions, as the fibrillar structure lies at the global free energy minimum of proteins. This raises the question of the mechanism of the evolution of the amyloid fibril structure. Experimental evidence supports the hypothesis that the fibril structure is a by-product of the forces of protein folding and lies outside the bounds of evolutionary pressures. © 2013 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.


Nilsen E.S.,University of Waterloo | Graham S.A.,University of Calgary
Child Development | Year: 2012

Using a longitudinal design, preschoolers' appreciation of a listener's knowledge of the location of a hidden sticker after the listener was provided with an ambiguous or unambiguous description was assessed. Preschoolers (N=34) were tested at 3 time points, each 6 months apart (4, 4½, and 5 years). Eye gaze measures demonstrated that preschoolers were sensitive to communicative ambiguity, even when the situation was unambiguous from their perspective. Preschoolers' explicit evaluations of ambiguity were characterized by an initial appreciation of message clarity followed by an appreciation of message ambiguity. Children's inhibitory control skills at 4 years old related to their explicit detection of ambiguity at later ages. Results are discussed in terms of the developmental progression of preschoolers' awareness of communicative ambiguity. © 2012 The Authors. Child Development © 2012 Society for Research in Child Development, Inc.


Faizal M.,University of Waterloo | Khalil M.M.,Alexandria University
International Journal of Modern Physics A | Year: 2015

Based on the universality of the entropy-area relation of a black hole, and the fact that the generalized uncertainty principle (GUP) adds a logarithmic correction term to the entropy in accordance with most approaches to quantum gravity, we argue that the GUP-corrected entropy-area relation is universal for all black objects. This correction to the entropy produces corrections to the thermodynamics. We explicitly calculate these corrections for three types of black holes: Reissner-Nordström, Kerr and charged AdS black holes, in addition to spinning black rings. In all cases, we find that they produce a remnant. Even though the GUP-corrected entropy-area relation produces the logarithmic term in the series expansion, we need to use the full form of the GUP-corrected entropy-area relation to get remnants for these black holes. © 2015 World Scientific Publishing Company.
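
Schematically, the type of corrected entropy-area relation at play can be written as below; the coefficients are model-dependent, and this illustrative form is ours, not the paper's exact expression.

```latex
% Schematic GUP-corrected entropy-area relation; c_1 and c_2 are
% model-dependent coefficients and alpha is the GUP parameter:
S \;\simeq\; \frac{A}{4\,\ell_{\mathrm{Pl}}^{2}}
   \;+\; c_{1}\,\alpha\,\sqrt{\frac{A}{4\,\ell_{\mathrm{Pl}}^{2}}}
   \;+\; c_{2}\,\alpha^{2}\,\ln\!\left(\frac{A}{4\,\ell_{\mathrm{Pl}}^{2}}\right)
   \;+\;\cdots
```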


Green analytical chemistry is an aspect of green chemistry that was introduced in the late nineties. The main objectives of green analytical chemistry are to develop new analytical technologies or to modify old methods to incorporate procedures that use less hazardous chemicals. There are several approaches to achieving this goal, such as using environmentally benign solvents and reagents, reducing chromatographic separation times, and miniaturizing analytical devices. Traditional methods used for the analysis of pharmaceutically active compounds require large volumes of organic solvents and generate large amounts of waste; most of these solvents are volatile and harmful to the environment. With growing awareness about the environment, the development of green technologies has been receiving increasing attention, aiming at eliminating or reducing the amount of organic solvents consumed every day worldwide without loss in chromatographic performance. This review provides the state of the art of green analytical methodologies for environmental analysis of pharmaceutically active compounds in the aquatic environment, with special emphasis on strategies for greening liquid chromatography (LC). The current trends of fast LC applied to environmental analysis, including elevated mobile phase temperature, as well as different column technologies such as monolithic columns, fully porous sub-2 μm and superficially porous particles, are presented. In addition, green aspects of gas chromatography (GC) and supercritical fluid chromatography (SFC) are discussed. We pay special attention to new green approaches such as automation, miniaturization, direct analysis and the possibility of locating the chromatograph on-line or at-line as a step forward in reducing the environmental impact of chromatographic analyses. © 2014 Elsevier B.V. All rights reserved.


Harris G.L.H.,University of Waterloo | Poole G.B.,University of Melbourne | Harris W.E.,McMaster University
Monthly Notices of the Royal Astronomical Society | Year: 2014

We explore several correlations between various large-scale galaxy properties, particularly the total globular cluster population (NGC), the central black hole mass (M•), velocity dispersion (nominally σe) and bulge mass (Mdyn). Our data sample of 49 galaxies, for which both NGC and M• are known, is larger than used in previous discussions of these two parameters, and we employ the same sample to explore all pairs of correlations. Further, within this galaxy sample, we investigate the scatter in each quantity, with emphasis on the range of published values for σe and effective radius (Re) for any one galaxy. We find that these two quantities in particular are difficult to measure consistently and caution that precise intercomparison of galaxy properties involving Re and σe is particularly difficult. Using both conventional χ²-minimization and Markov chain Monte Carlo fitting techniques, we show that quoted observational uncertainties for all parameters are too small to represent the true scatter in the data. We find that the correlation between Mdyn and NGC is stronger than either the M•-σe or the M•-NGC relations. We suggest that this is because both the galaxy bulge population and NGC were fundamentally established at an early epoch during the same series of star-forming events. By contrast, although the seed for M• was likely formed at a similar epoch, its growth over time is less similar from galaxy to galaxy and thus less predictable. © 2013 The Authors Published by Oxford University Press on behalf of the Royal Astronomical Society.
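
As an illustration of the second fitting approach, here is a minimal MCMC sketch for a log-linear scaling relation with intrinsic scatter, assuming the emcee package is available; the model and variable names are ours, and the paper's likelihood may differ in detail.

```python
# A minimal sketch of an MCMC fit of y = slope*x + intercept with intrinsic
# scatter (e.g., log N_GC vs. log M_dyn), assuming the emcee package.
import numpy as np
import emcee

def log_prob(theta, x, y, yerr):
    slope, intercept, log_scatter = theta
    if not (-10.0 < log_scatter < 2.0):
        return -np.inf                        # flat prior on the scatter
    var = yerr**2 + np.exp(2 * log_scatter)   # measurement + intrinsic scatter
    resid = y - (slope * x + intercept)
    return -0.5 * np.sum(resid**2 / var + np.log(2 * np.pi * var))

def fit_relation(x, y, yerr, nwalkers=32, nsteps=2000):
    p0 = np.array([1.0, 0.0, -1.0]) + 1e-3 * np.random.randn(nwalkers, 3)
    sampler = emcee.EnsembleSampler(nwalkers, 3, log_prob, args=(x, y, yerr))
    sampler.run_mcmc(p0, nsteps, progress=False)
    # Discard burn-in and return flattened posterior samples
    return sampler.get_chain(discard=nsteps // 2, flat=True)
```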


Miller D.E.,University of Waterloo | Mansouri N.,SNC - Lavalin
IEEE Transactions on Automatic Control | Year: 2010

Recently the use of a linear periodic controller has been proposed to solve the model reference adaptive control problem. The resulting controller can handle rapid changes in plant parameters, and it can provide nice transient behavior with arbitrarily good steady-state tracking using a control signal which remains modest in size. However, it also has some undesirable features: i) the proposed sampled-data controller achieves good performance by using a small sampling period, which results in large gains and a correspondingly poor noise tolerance, ii) a rapidly varying control signal is used, which may require a fast actuator, and iii) the closer to optimality that we wish to get, the more complex the controller. In this paper, we completely redesign the control law to significantly alleviate these problems; the new design provides better noise performance, especially when the sign of the high frequency gain is known, uses a smoother and smaller control signal, has a fixed complexity independent of the desired level of performance, and is more intuitively appealing, in that probing, estimation, and control are now carried out in parallel rather than in series. © 2010 IEEE.


Ha B.-Y.,University of Waterloo | Jung Y.,Korea Institute of Science and Technology
Soft Matter | Year: 2015

How confinement or a physical constraint modifies polymer chains is not only a classical problem in polymer physics but also relevant in a variety of contexts such as single-molecule manipulations, nanofabrication in narrow pores, and modelling of chromosome organization. Here, we review recent progress in our understanding of polymers in a confined (and crowded) space. To this end, we highlight converging views of these systems from computational, experimental, and theoretical approaches, and then note what remains to be clarified. In particular, we focus on exploring how cylindrical confinement reshapes individual chains and induces segregation forces between them, by pointing to the relationships between intra-chain organization and chain segregation. In the presence of crowders, chain molecules can be entropically phase-separated into a condensed state. We include a kernel of discussions on the nature of chain compaction by crowders, especially in a confined space. Finally, we discuss the relevance of confined polymers for the nucleoid, an intracellular space in which the bacterial chromosome is tightly packed, in part by cytoplasmic crowders. © 2015 The Royal Society of Chemistry.


Farsani R.K.,University of Waterloo
IEEE Transactions on Information Theory | Year: 2015

As the basic building blocks of interference networks, the broadcast channel, the classical interference channel (CIC), and the cognitive radio channel (CRC) are considered in this paper. New capacity outer bounds are established for these channels. These outer bounds are all derived within a novel unified framework. Using the derived outer bounds, some new capacity results are proved for the CIC and the CRC; a mixed interference regime is identified for the two-user CIC, where decoding interference at one receiver and treating interference as noise at the other is sum-rate optimal. In addition, a noisy interference regime is derived for the one-sided CIC. Our new capacity theorems for the CIC contain the previously obtained results regarding the Gaussian channel as special cases. For the CRC, a full characterization of the capacity region for a class of more-capable channels is derived. Moreover, it is shown that the derived outer bounds are useful for studying channels with one-sided receiver side information, wherein one of the receivers has access to the nonintended message; capacity bounds are also discussed in detail for such scenarios. Our results lead to new insights regarding the nature of information flow in basic interference networks. © 1963-2012 IEEE.


Faizal M.,University of Waterloo
International Journal of Geometric Methods in Modern Physics | Year: 2015

In this paper, we will demonstrate that, like the existence of a minimum measurable length, the existence of a maximum measurable momentum also influences all quantum mechanical systems. Beyond the simple one-dimensional case, the existence of a maximum momentum will induce non-local corrections to the first quantized Hamiltonian. However, these non-local corrections can be effectively treated as local corrections by using the theory of harmonic extensions of functions. We will also analyze the second quantization of this deformed first quantized theory. Finally, we will analyze the gauge symmetry corresponding to this deformed theory. © 2015 World Scientific Publishing Company.


Faizal M.,University of Waterloo
International Journal of Modern Physics A | Year: 2015

In this paper, we will deform the second and third quantized theories by deforming the canonical commutation relations in such a way that they become consistent with the generalized uncertainty principle. Thus, we will first deform the second quantized commutator and obtain a deformed version of the Wheeler-DeWitt equation. Then we will further deform the third quantized theory by deforming the third quantized canonical commutation relation. This way we will obtain a deformed version of the third quantized theory for the multiverse. © 2015 World Scientific Publishing Company.


Turri J.,University of Waterloo
Cognitive Science | Year: 2016

Compatibilism is the view that determinism is compatible with acting freely and being morally responsible. Incompatibilism is the opposite view. It is often claimed that compatibilism or incompatibilism is a natural part of ordinary social cognition. That is, it is often claimed that patterns in our everyday social judgments reveal an implicit commitment to either compatibilism or incompatibilism. This paper reports five experiments designed to identify such patterns. The results support a nuanced hybrid account: The central tendencies in ordinary social cognition are compatibilism about moral responsibility, compatibilism about positive moral accountability (i.e., about deserving credit for good outcomes), neither compatibilism nor incompatibilism about negative moral accountability (i.e., about deserving blame for bad outcomes), compatibilism about choice for actions with positive outcomes, and incompatibilism about choice for actions with negative or neutral outcomes. © 2016 Cognitive Science Society, Inc.


Pal R.,University of Waterloo
Journal of Colloid and Interface Science | Year: 2011

New models are developed for the viscosity of concentrated emulsions taking into consideration the effects of interfacial rheology and Marangoni phenomenon. The interface is assumed to be viscous with non-zero surface-shear and surface-dilational viscosities. The Marangoni effect is accounted for through non-zero Gibbs elasticity of the interface. The experimental viscosity data for a number of emulsion systems are interpreted in terms of the proposed models. © 2011 Elsevier Inc.


Hesjedal T.,University of Waterloo | Hesjedal T.,Clarendon Laboratory
Applied Physics Letters | Year: 2011

Few-layer graphene is obtained in atmospheric chemical vapor deposition on polycrystalline copper in a roll-to-roll process. Raman and x-ray photoelectron spectroscopy were employed to confirm the few-layer nature of the graphene film, to map the inhomogeneities, and to study and optimize the growth process. This continuous growth process can be easily scaled up and enables the low-cost fabrication of graphene films for industrial applications. © 2011 American Institute of Physics.


Miller D.E.,University of Waterloo | Davison E.J.,University of Toronto
Automatica | Year: 2012

In the control of linear time-invariant (LTI) decentralized systems, a decentralized fixed mode (DFM) is one which is immovable using an LTI decentralized controller. However, some DFMs can be moved using more complicated decentralized controllers; the ones which cannot are labelled quotient DFMs (QDFMs), since they arise from a related quotient system. The classical algorithm used to compute the QDFMs requires two steps: a partitioning of the sub-systems, using graph theory, followed by the application of standard tools from decentralized control. Here the goal is to provide a more direct approach to computing them. © 2012 Elsevier Ltd. All rights reserved.


Waite M.L.,University of Waterloo | Snyder C.,U.S. National Center for Atmospheric Research
Journal of the Atmospheric Sciences | Year: 2013

The role of moist processes in the development of the mesoscale kinetic energy spectrum is investigated with numerical simulations of idealized moist baroclinic waves. Dry baroclinic waves yield upper-tropospheric kinetic energy spectra that resemble a -3 power law. Decomposition into horizontally rotational and divergent kinetic energy shows that the divergent energy has a much shallower spectrum, but its amplitude is too small to yield a characteristic kink in the total spectrum, which is dominated by the rotational part. The inclusion of moist processes energizes the mesoscale. In the upper troposphere, the effect is mainly in the divergent part of the kinetic energy; the spectral slope remains shallow (around -5/3) as in the dry case, but the amplitude increases with increasing humidity. The divergence field in physical space is consistent with inertia-gravity waves being generated in regions of latent heating and propagating throughout the baroclinic wave. Buoyancy flux spectra are used to diagnose the scale at which moist forcing, via buoyant production from latent heating, injects kinetic energy. There is significant input of kinetic energy in the mesoscale, with a peak at scales of around 800 km and a plateau at smaller scales. If the latent heating is artificially set to zero at some time, the enhanced divergent kinetic energy decays over several days toward the level obtained in the dry simulation. The effect of moist forcing of mesoscale kinetic energy presents a challenge for theories of the mesoscale spectrum based on the idealization of a turbulent inertial subrange. © 2013 American Meteorological Society.
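
The rotational/divergent splitting is a standard spectral projection. A minimal sketch for a doubly periodic horizontal velocity field, assuming NumPy, is given below; the simulations themselves use a more complete modelling framework.

```python
# A minimal sketch of the Helmholtz (rotational/divergent) decomposition of
# a doubly periodic horizontal velocity field via FFT, assuming NumPy.
import numpy as np

def helmholtz_decompose(u, v, Lx, Ly):
    ny, nx = u.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=Lx / nx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=Ly / ny)
    KX, KY = np.meshgrid(kx, ky)          # shapes (ny, nx), matching u and v
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                        # avoid division by zero at k = 0
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    # Projection onto k: u_div_hat = k (k . u_hat) / |k|^2
    kd = KX * uh + KY * vh
    u_div = np.real(np.fft.ifft2(KX * kd / k2))
    v_div = np.real(np.fft.ifft2(KY * kd / k2))
    # Remainder is the rotational (non-divergent) part; by this convention
    # the mean flow is assigned to the rotational component.
    return u - u_div, v - v_div, u_div, v_div
```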


Xiang G.Y.,Griffith University | Xiang G.Y.,Hefei University of Technology | Higgins B.L.,Griffith University | Berry D.W.,University of Waterloo | And 2 more authors.
Nature Photonics | Year: 2011

Precise interferometric measurement is vital to many scientific and technological applications. Using quantum entanglement allows interferometric sensitivity that surpasses the shot-noise limit (SNL). To date, experiments demonstrating entanglement-enhanced sub-SNL interferometry, and most theoretical treatments, have addressed the goal of increasing signal-to-noise ratios. This is suitable for phase-sensing - detecting small variations about an already known phase. However, it is not sufficient for ab initio phase-estimation - making a self-contained determination of a phase that is initially completely unknown within the interval [0, 2π). Both tasks are important, but not equivalent. To move from the sensing regime to the ab initio estimation regime requires a non-trivial phase-estimation algorithm. Here, we implement a 'bottom-up' approach, optimally utilizing the available entangled photon states, obtained by post-selection. This enables us to demonstrate sub-SNL ab initio estimation of an unknown phase by entanglement-enhanced optical interferometry. © 2011 Macmillan Publishers Limited. All rights reserved.


In response to the widespread availability of illegal contraband, the federal and five provincial governments in Canada implemented a 40-60% reduction to cigarette excise taxes in February 1994. We exploit this unique and discrete policy shock by estimating the effects of cigarette taxes on youth smoking with data from the 1992-1996 Waterloo Smoking Prevention Program, the 1991 General Social Survey, the 1994 Youth Smoking Survey, the 1996-1997 and 1998-1999 National Population Health Surveys, and the 1999 Canadian Tobacco Use Monitoring Survey. Empirical estimates yield daily and occasional participation elasticities from -0.10 to -0.14, which is consistent with findings from recent U.S.-based research. A key contribution of this research is the analysis of lower taxes on a panel of 591 youths from the Waterloo Smoking Prevention Program who did not smoke in 1993, but 43% of whom confirm smoking participation following the tax reduction. Employing these data reveals elasticities from -0.2 to -0.5, which suggest that even significant and discrete changes in taxes might have limited impacts on the initiation and persistence of youth smoking. Copyright © 2009 John Wiley & Sons, Ltd.


Matthews D.,University of Waterloo
International Journal of Biostatistics | Year: 2013

A method to produce exact simultaneous confidence bands for the empirical cumulative distribution function that was first described by Owen, and subsequently corrected by Jager and Wellner, is the starting point for deriving exact nonparametric confidence bands for the survivor function of any positive random variable. We invert a nonparametric likelihood test of uniformity, constructed from the Kaplan-Meier estimator of the survivor function, to obtain simultaneous lower and upper bands for the function of interest with a specified global confidence level. The method involves calculating a null distribution and associated critical value for each observed sample configuration. However, Noe recursions and the Van Wijngaarden-Decker-Brent root-finding algorithm provide the necessary tools for efficient computation of these exact bounds. Various aspects of the effect of right censoring on these exact bands are investigated, using as illustrations two observational studies of survival experience among non-Hodgkin's lymphoma patients and a much larger group of subjects with advanced lung cancer enrolled in trials within the North Central Cancer Treatment Group. Monte Carlo simulations confirm the merits of the proposed method of deriving simultaneous interval estimates of the survivor function across the entire range of the observed sample. This research was supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada. It was begun while the author was visiting the Department of Statistics, University of Auckland, and completed during a subsequent sojourn at the Medical Research Council Biostatistics Unit in Cambridge. The support of both institutions, in addition to that of NSERC and the University of Waterloo, is greatly appreciated.
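
For orientation, a minimal NumPy sketch of the Kaplan-Meier estimator around which the bands are built follows; the calibration of the exact bands (Noe recursions plus Brent root-finding) is beyond the scope of this sketch.

```python
# A minimal sketch of the Kaplan-Meier estimator, assuming NumPy.
import numpy as np

def kaplan_meier(times, events):
    """times: observed times; events: 1 = event observed, 0 = right-censored."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    surv = 1.0
    t_out, s_out = [], []
    for t in np.sort(np.unique(times[events == 1])):
        d = np.sum((times == t) & (events == 1))  # events at time t
        n = np.sum(times >= t)                    # number still at risk
        surv *= 1.0 - d / n                       # product-limit update
        t_out.append(t)
        s_out.append(surv)
    return np.array(t_out), np.array(s_out)
```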


Godsil C.,University of Waterloo
Electronic Journal of Combinatorics | Year: 2011

Let X be a graph on n vertices with adjacency matrix A and let H(t) denote the matrix-valued function exp(iAt). If u and v are distinct vertices in X, we say perfect state transfer from u to v occurs if there is a time τ such that |H(τ)_{u,v}| = 1. If u ∈ V(X) and there is a time σ such that |H(σ)_{u,u}| = 1, we say X is periodic at u with period σ. It is not difficult to show that if the ratio of distinct nonzero eigenvalues of X is always rational, then X is periodic. We show that the converse holds, from which it follows that a regular graph is periodic if and only if its eigenvalues are integers. For a class of graphs X including all vertex-transitive graphs we prove that, if perfect state transfer occurs at time τ, then H(τ) is a scalar multiple of a permutation matrix of order two with no fixed points. Using certain Hadamard matrices, we construct a new infinite family of graphs on which perfect state transfer occurs.
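
A quick numerical check of the definition, assuming SciPy: on K2 (a single edge), H(t) = exp(iAt) gives |H(π/2)_{0,1}| = 1, i.e., perfect state transfer at time π/2.

```python
# Numerical check of perfect state transfer on K2, assuming SciPy.
import numpy as np
from scipy.linalg import expm

A = np.array([[0, 1],
              [1, 0]], dtype=float)   # adjacency matrix of K2
H = expm(1j * A * (np.pi / 2))        # H(t) = exp(iAt) at t = pi/2
print(abs(H[0, 1]))                   # 1.0 -> perfect state transfer
```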


Boissonneault M.,Universite de Sherbrooke | Gambetta J.M.,University of Waterloo | Blais A.,Universite de Sherbrooke
Physical Review Letters | Year: 2010

In dispersive readout schemes, qubit-induced nonlinearity typically limits the measurement fidelity by reducing the signal-to-noise ratio (SNR) when the measurement power is increased. Contrary to seeing the nonlinearity as a problem, here we propose to use it to our advantage in a regime where it can increase the SNR. We show analytically that such a regime exists if the qubit has a many-level structure. We also show how this physics can account for the high-fidelity avalanchelike measurement recently reported by Reed et al.


Chakrabarty D.,9 Lavelle Road | Swamy C.,University of Waterloo
ITCS 2014 - Proceedings of the 2014 Conference on Innovations in Theoretical Computer Science | Year: 2014

In this paper, we study mechanism design problems in the ordinal setting, wherein the preferences of agents are described by orderings over outcomes, as opposed to specific numerical values associated with them. This setting is relevant when agents can compare outcomes but are not able to evaluate precise utilities for them. Such a situation arises in diverse contexts including voting and matching markets. Our paper addresses two issues that arise in ordinal mechanism design. First, to design social-welfare-maximizing mechanisms, one needs to be able to quantitatively measure the welfare of an outcome, which is not clear in the ordinal setting. Second, since the impossibility results of Gibbard and Satterthwaite [14, 25] force one to move to randomized mechanisms, one needs a more nuanced notion of truthfulness. We propose rank approximation as a metric for measuring the quality of an outcome, which allows us to evaluate mechanisms based on worst-case performance, and lex-truthfulness as a notion of truthfulness for randomized ordinal mechanisms. Lex-truthfulness is stronger than notions studied in the literature, and yet flexible enough to admit a rich class of mechanisms circumventing classical impossibility results. We demonstrate the usefulness of the above notions by devising lex-truthful mechanisms achieving good rank-approximation factors, both in the general ordinal setting and in structured settings such as (one-sided) matching markets and their generalizations, matroid and scheduling markets.


Zhu J.,Nanjing University of Aeronautics and Astronautics | Hipel K.W.,University of Waterloo
Information Sciences | Year: 2012

The method of grey target decision making based on multi-stage linguistic labels is extended in this research. Firstly, the multi-granularity linguistic term sets are transformed into the same linguistic set. Next, the choice rule of a target is put forward and the calculation of a target distance is developed. Furthermore, weight models of criteria and of stages, based on the requirement of maximum difference between alternatives and the restriction of stage harmony, are suggested. In addition, a model of the lower and upper values of a target distance is put forward. Moreover, the method is extended to the group decision-making environment. This model can help avoid faulty decision making due to uncertainty. The suggested method is applied to vendor evaluation for a commercial airplane in China. © 2012 Elsevier Inc. All rights reserved.


Mohamed Y.A.-R.I.,University of Alberta | El-Saadany E.F.,University of Waterloo
IEEE Transactions on Energy Conversion | Year: 2011

This paper presents a robust natural-frame-based interfacing scheme for grid-connected distributed generation inverters. The control scheme consists of a dead-beat line-voltage sensorless natural-frame current controller, adaptive neural network (NN)-based disturbance estimator, and robust sensorless synchronization loop. The estimated uncertainty dynamics provide the necessary energy shaping in the inverter control voltage to attenuate grid-voltage disturbances and other voltage disturbances caused by interfacing parameter variation. In addition, the predictive nature of the estimator has the necessary phase advance to compensate for system delays. The self-learning feature of the NN adaptation algorithm allows feasible and easy adaptation design at different grid disturbances and operating conditions. The fact that converter synchronization is based on the fundamental grid-voltage facilitates the use of the estimated uncertainty to extract the position of the fundamental grid-voltage vector without using voltage sensors. Theoretical analysis and comparative evaluation results are presented to demonstrate the effectiveness of the proposed control scheme. © 2011 IEEE.


Zarrabi-Zadeh H.,University of Waterloo
Algorithmica (New York) | Year: 2011

We present a new streaming algorithm for maintaining an ε-kernel of a point set in ℝ^d using O((1/ε^{(d-1)/2}) log(1/ε)) space. The space used by our algorithm is optimal up to a small logarithmic factor. This significantly improves (for any fixed dimension d ≥ 3) the best previous algorithm for this problem, which uses O(1/ε^{d-3/2}) space, presented by Agarwal and Yu. Our algorithm immediately improves the space complexity of the previous streaming algorithms for a number of fundamental geometric optimization problems in fixed dimensions, including width, minimum-volume bounding box, minimum-radius enclosing cylinder, minimum-width enclosing annulus, etc. © 2010 Springer Science+Business Media, LLC.


Wood C.J.,Institute for Quantum Computing | Wood C.J.,University of Waterloo | Spekkens R.W.,Perimeter Institute for Theoretical Physics
New Journal of Physics | Year: 2015

An active area of research in the fields of machine learning and statistics is the development of causal discovery algorithms, the purpose of which is to infer the causal relations that hold among a set of variables from the correlations that these exhibit. We apply some of these algorithms to the correlations that arise for entangled quantum systems. We show that they cannot distinguish correlations that satisfy Bell inequalities from correlations that violate Bell inequalities, and consequently that they cannot do justice to the challenges of explaining certain quantum correlations causally. Nonetheless, by adapting the conceptual tools of causal inference, we can show that any attempt to provide a causal explanation of nonsignalling correlations that violate a Bell inequality must contradict a core principle of these algorithms, namely, that an observed statistical independence between variables should not be explained by fine-tuning of the causal parameters. In particular, we demonstrate the need for such fine-tuning for most of the causal mechanisms that have been proposed to underlie Bell correlations, including superluminal causal influences, superdeterminism (that is, a denial of freedom of choice of settings), and retrocausal influences which do not introduce causal cycles. © 2015 IOP Publishing Ltd and Deutsche Physikalische Gesellschaft.


Drescher M.,University of Waterloo | Thomas S.C.,University of Toronto
Oikos | Year: 2013

Projections of future climate suggest increases in global temperatures that are especially pronounced in winter in cold-temperate regions. Thermal insulation provided by snow cover to litter, soil, and overwintering plants will likely be affected by changing winter temperatures and might influence future species composition and ranges. We investigated effects of changing snow cover on seed germination and sapling survival of several cold-temperate tree species using a snow manipulation approach. Post-winter seed germination increased or decreased with increasing snow cover, depending on species; decreased seed germination was found in species that characteristically disperse seed in summer or fall months prior to snowfall. Post-winter sapling survival increased with increasing snow cover for all species, though some species benefitted more from increased snow cover than others. Sapling mortality was associated with root exposure, suggesting the possibility that soil frost heaving could be an important mechanism for observed effects. Our results suggest that altered snow regimes may cause re-assembly of current species habitat relationships and may drive changes in species' biogeographic range. However, local snow regimes also vary with associated vegetation cover and topography, suggesting that species distribution patterns may be strongly influenced by spatial heterogeneity in snow regimes and complicating future projections. © 2012 The Authors. Oikos © 2012 Nordic Society Oikos.


Baumchen O.,McMaster University | McGraw J.D.,McMaster University | Forrest J.A.,University of Waterloo | Dalnoki-Veress K.,McMaster University
Physical Review Letters | Year: 2012

We have examined the direct effect of manipulating the number of free surfaces on the measured glass transition temperature T g of thin polystyrene films. Thin films in the range 35nm


Gottesman D.,Perimeter Institute for Theoretical Physics | Jennewein T.,University of Waterloo | Croke S.,Perimeter Institute for Theoretical Physics
Physical Review Letters | Year: 2012

We present an approach to building interferometric telescopes using ideas of quantum information. Current optical interferometers have limited baseline lengths, and thus limited resolution, because of noise and loss of signal due to the transmission of photons between the telescopes. The technology of quantum repeaters has the potential to eliminate this limit, allowing in principle interferometers with arbitrarily long baselines. © 2012 American Physical Society.


Egger D.J.,Saarland University | Wilhelm F.K.,Saarland University | Wilhelm F.K.,University of Waterloo
Physical Review Letters | Year: 2013

Quantum transmission lines are central to superconducting and hybrid quantum computing. In this work we show how coupling them to a left-handed transmission line allows circuit QED to reach a new regime: multimode ultrastrong coupling. Out of the many potential applications of this novel device, we discuss the preparation of multipartite entangled states and the simulation of the spin-boson model where a quantum phase transition is reached up to finite size effects. © 2013 American Physical Society.


Faizal M.,University of Waterloo | Pourhassan B.,Damghan University
Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics | Year: 2015

In this paper, we will analyze the effects of thermal fluctuations on the stability of a black Saturn. The entropy of the black Saturn will get corrected due to these thermal fluctuations. We will demonstrate that the correction term generated by these thermal fluctuations is a logarithmic term. Then we will use this corrected value of the entropy to obtain bounds for various parameters of the black Saturn. We will also analyze the thermodynamical stability of the black Saturn in the presence of thermal fluctuations, using this corrected value of the entropy. © 2015 The Authors.
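
For orientation, a generic leading-order form of such a logarithmic thermal-fluctuation correction (the form familiar from Das, Majumdar, and Bhaduri, with S_0 the equilibrium entropy, C the specific heat, and T the temperature) is sketched below; the paper's black-Saturn-specific expression may differ in its coefficient and argument.

```latex
% Generic leading-order thermal-fluctuation correction to entropy (sketch):
S \;=\; S_0 \;-\; \frac{1}{2}\ln\left(C\,T^{2}\right) \;+\; \cdots
```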


Ricardez-Sandoval L.A.,University of Waterloo
Canadian Journal of Chemical Engineering | Year: 2011

Multiscale modelling is a new emerging field in process systems engineering. Although the idea of linking events occurring across time and length scales is not new, the numerical solution of these models is challenging because of computational limitations and the difficulty of coupling modelling methods with different characteristics. Although an extensive set of tools is currently available to improve the performance of processes described using continuum models, most of these tools are not suitable for designing and controlling a multiscale process. This work presents the approaches that are currently available to perform multiscale modelling and identifies the key challenges that need to be addressed to improve the performance of macroscopic processes by controlling events occurring at the atomistic, molecular and nanoscopic levels. © 2011 Canadian Society for Chemical Engineering.


Childs A.M.,University of Waterloo | Ge Y.,Perimeter Institute for Theoretical Physics
Physical Review A - Atomic, Molecular, and Optical Physics | Year: 2014

We consider the problem of searching a general d-dimensional lattice of N vertices for a single marked item using a continuous-time quantum walk. We demand locality, but allow the walk to vary periodically on a small scale. By constructing lattice Hamiltonians exhibiting Dirac points in their dispersion relations and exploiting the linear behavior near a Dirac point, we develop algorithms that solve the problem in time O(√N) for d > 2 and O(√N log N) in d = 2. In particular, we show that such algorithms exist even for hypercubic lattices in any dimension. Unlike previous continuous-time quantum walk algorithms on hypercubic lattices in low dimensions, our approach does not use external memory. © 2014 American Physical Society.


Cooper A.F.,University of Waterloo
Global Policy | Year: 2011

This article presents an analysis of the G20 which, while recognising the innovative capacity of this leaders' forum, also addresses one of its major sources of contestation, the lack of equitable regional balance. Whereas the EU has an overrepresentation within the G20 membership, other regional constituencies including the Caribbean, the Nordics, Southeast Asia and Africa have been underrepresented or excluded completely. This has consequences in terms of heightening the legitimacy gap already in place due to the summit's image as a self-selected concert of big powers. Yet the excluded regions have advocated inclusion, not rejection, of the G20. The main focus of the article is therefore on the practical means by which forms of inclusion should and can be enhanced. To its credit, South Korea as host of the November 2010 G20 has moved to address some of the problems associated with these representational imbalances. But the search for inclusion needs to be stepped up to enhance the G20's position as a model of legitimate global governance. Moreover, many of the mechanisms to promote inclusion are feasible, taking advantage of the flexibility in the design of the G20 project as it has evolved over the past two years. © 2011 London School of Economics and Political Science and John Wiley & Sons Ltd.


Searle G.,University of Queensland | Filion P.,University of Waterloo
Urban Studies | Year: 2011

There is a lack of knowledge about effective implementation of intensification policies. The paper concentrates on the intensification experience of Sydney, Australia, and Toronto, Canada. Historical narratives, which document intensification efforts and outcomes since the 1950s, paint different pictures. For much of the period, Sydney adopted a medium-density strategy sustained by public-sector incentives and regulations. In Toronto, in contrast, the focus has been on high-density developments driven mostly by market trends. Lately, however, the Sydney intensification strategy has shifted to high-density projects. The paper concludes by drawing out findings that are relevant to intensification policies in the selected metropolitan regions and elsewhere: the ubiquity of NIMBY reactions; the importance of senior government involvement, because senior governments are less sensitive to anti-density NIMBY reactions; the possibility of framing intensification strategies in ways that avoid political party confrontation; and the role of major environmental movements in raising public support for intensification. © 2010 Urban Studies Journal Limited.


Thompson R.B.,University of Waterloo
Journal of Chemical Physics | Year: 2010

A hybrid self-consistent field theory/density functional theory method is applied to predict tilt (kink) grain boundary structures between lamellar domains of a symmetric diblock copolymer with added spherical nanoparticles. Structures consistent with experimental observations are found and theoretical evidence is provided in support of a hypothesis regarding the positioning of nanoparticles. Some particle distributions are predicted for situations not yet examined by experiment. © 2010 American Institute of Physics.


Mabood F.,Edwardes College Peshawar | Khan W.A.,University of Waterloo
Computers and Fluids | Year: 2014

Purpose: The paper aims to find an accurate analytic solution (series solution) for MHD stagnation point flow in a porous medium for different values of the Prandtl number and the suction/injection parameter. Design/methodology/approach: In this paper, the homotopy analysis method (HAM) with an unknown convergence-control parameter has been used to derive an accurate analytic solution for MHD stagnation point flow in a porous medium. Findings: The main findings are: the skin-friction coefficient decreases with the Prandtl number, and with increasing values of M the Nusselt number significantly increases at low Prandtl numbers. Practical implications: The HAM with an unknown convergence-control parameter can be used to obtain analytic solutions for many problems in science and engineering. Originality/value: This paper fulfils an identified need to evaluate an accurate analytic solution (series solution) of a practical problem. Some deduced results can be obtained in a limiting sense. © 2014 Elsevier Ltd.


Balcan M.-F.,Georgia Institute of Technology | Harvey N.J.A.,University of Waterloo
Proceedings of the Annual ACM Symposium on Theory of Computing | Year: 2011

There has been much interest in the machine learning and algorithmic game theory communities on understanding and using submodular functions. Despite this substantial interest, little is known about their learnability from data. Motivated by applications, such as pricing goods in economics, this paper considers PAC-style learning of submodular functions in a distributional setting. A problem instance consists of a distribution on {0,1}^n and a real-valued function on {0,1}^n that is non-negative, monotone, and submodular. We are given poly(n) samples from this distribution, along with the values of the function at those sample points. The task is to approximate the value of the function to within a multiplicative factor at subsequent sample points drawn from the same distribution, with sufficiently high probability. We develop the first theoretical analysis of this problem, proving a number of important and nearly tight results. For instance, if the underlying distribution is a product distribution then we give a learning algorithm that achieves a constant-factor approximation (under some assumptions). However, for general distributions we provide a surprising Ω(n^{1/3}) lower bound based on a new interesting class of matroids and we also show an O(n^{1/2}) upper bound. Our work combines central issues in optimization (submodular functions and matroids) with central topics in learning (distributional learning and PAC-style analyses) and with central concepts in pseudo-randomness (lossless expander graphs). Our analysis involves a twist on the usual learning theory models and uncovers some interesting structural and extremal properties of submodular functions, which we suspect are likely to be useful in other contexts. In particular, to prove our general lower bound, we use lossless expanders to construct a new family of matroids which can take wildly varying rank values on superpolynomially many sets; no such construction was previously known. This construction shows unexpected extremal properties of submodular functions. © 2011 ACM.
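
For readers new to submodularity, the defining diminishing-returns inequality, f(A ∪ {e}) - f(A) ≥ f(B ∪ {e}) - f(B) for all A ⊆ B and e ∉ B, can be checked by brute force on small ground sets. A self-contained sketch on a toy coverage function follows; this illustrates the property only, not the paper's learning algorithm or matroid construction.

```python
# Brute-force check of the diminishing-returns definition of submodularity.
from itertools import chain, combinations

ground = [0, 1, 2]
sets = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"c", "d"}}
f = lambda S: len(set().union(set(), *(sets[i] for i in S)))   # coverage function

def is_submodular(f, ground):
    subsets = list(chain.from_iterable(combinations(ground, r)
                                       for r in range(len(ground) + 1)))
    for A in subsets:
        for B in subsets:
            if set(A) <= set(B):
                for e in set(ground) - set(B):
                    gain_A = f(set(A) | {e}) - f(A)   # marginal gain on the smaller set
                    gain_B = f(set(B) | {e}) - f(B)   # marginal gain on the larger set
                    if gain_A < gain_B:               # diminishing returns violated
                        return False
    return True

print(is_submodular(f, ground))   # True: coverage functions are submodular
```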


Brodutch A.,University of Waterloo
Physical Review A - Atomic, Molecular, and Optical Physics | Year: 2013

Discordant states appear in a large number of quantum phenomena and seem to be a good indicator of divergence from classicality. While there is evidence that they are essential for a quantum algorithm to have an advantage over a classical one, their precise role is unclear. We examine the role of discord in quantum algorithms using the paradigmatic framework of restricted distributed quantum gates and show that manipulating discordant states using local operations has an associated cost in terms of entanglement and communication resources. Changing discord reduces the total correlations and reversible operations on discordant states usually require nonlocal resources. Discord alone is, however, not enough to determine the need for entanglement. A more general type of similar quantities, which we call K discord, is introduced as a further constraint on the kinds of operations that can be performed without entanglement resources. © 2013 American Physical Society.


Karsten M.,University of Waterloo
IEEE/ACM Transactions on Networking | Year: 2010

This paper presents Interleaved Stratified Timer Wheels, a novel priority-queue data structure for traffic shaping and scheduling in packet-switched networks. The data structure is used to construct an efficient packet approximation of general processor sharing (GPS). This scheduler is the first of its kind, combining all desirable properties without any residual catch. In contrast to previous work, the scheduler presented here has constant and near-optimal delay and fairness properties, can be implemented with O(1) algorithmic complexity, and has a low absolute execution overhead. The paper presents the priority-queue data structure and the basic scheduling algorithm, along with several versions offering different cost-performance trade-offs. A generalized analytical model for rate-controlled rounded timestamp schedulers is developed and used to assess the scheduling properties of the different scheduler versions. © 2006 IEEE.
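
As background for this data-structure family, the sketch below shows a minimal single-level timer wheel in Python: the generic idea that stratified and interleaved variants refine, not the paper's scheduler itself. Insertion and per-tick advancement are both constant time per operation, amortized over expirations.

```python
# Minimal single-level timer wheel (generic background sketch).
from collections import deque

class TimerWheel:
    def __init__(self, slots):
        self.slots = [deque() for _ in range(slots)]
        self.now = 0                              # current tick

    def schedule(self, item, delay):
        """Insert item to fire `delay` ticks from now (limited horizon)."""
        assert 0 < delay < len(self.slots), "single-level wheel horizon"
        self.slots[(self.now + delay) % len(self.slots)].append(item)

    def tick(self):
        """Advance one tick and return the items that expire now."""
        self.now += 1
        bucket = self.slots[self.now % len(self.slots)]
        expired = list(bucket)
        bucket.clear()
        return expired

wheel = TimerWheel(slots=8)
wheel.schedule("pkt-A", delay=2)
wheel.schedule("pkt-B", delay=2)
wheel.schedule("pkt-C", delay=5)
for _ in range(5):
    print(wheel.now + 1, wheel.tick())   # pkt-A/B expire at tick 2, pkt-C at tick 5
```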


Johnston N.,University of Waterloo
Physical Review A - Atomic, Molecular, and Optical Physics | Year: 2013

It is known that, in an (m ⊗ n)-dimensional quantum system, the maximum dimension of a subspace that contains only entangled states is (m-1)(n-1). We show that the exact same bound is tight if we require the stronger condition that every state with range in the subspace has non-positive partial transpose. As an immediate corollary of our result, we solve an open question that asks for the maximum number of negative eigenvalues of the partial transpose of a quantum state. In particular, we give an explicit method of construction of a bipartite state whose partial transpose has (m-1)(n-1) negative eigenvalues, which is necessarily maximal, despite recent numerical evidence that suggested such states may not exist for large m and n. © 2013 American Physical Society.


Fahidy T.Z.,University of Waterloo
Electrochemistry Communications | Year: 2011

Well known in several scientific and technical areas, the Pareto distribution of probability theory is employed to describe the distribution of mass fraction of metal or metal oxide powders produced by electrolytic means, with respect to limited quantitative observations of their particle size. The estimation of Pareto parameters from experimental mean/median or mean-variance data, and via the maximum likelihood method is specifically demonstrated. © 2011 Elsevier B.V. All rights reserved.
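
As a hedged illustration of one estimation route mentioned above: the maximum likelihood estimates for a Pareto distribution have a convenient closed form, with the scale estimated by the sample minimum and the shape by alpha = n / sum(log(x_i / x_min)). The sketch below uses synthetic particle-size data, not the paper's measurements.

```python
# Maximum likelihood estimation for a Pareto distribution (generic sketch).
import numpy as np

rng = np.random.default_rng(0)
x_m_true, alpha_true = 1.0, 2.5
x = x_m_true * (1 + rng.pareto(alpha_true, size=500))   # synthetic particle sizes

x_m_hat = x.min()                                # MLE of the scale (minimum size)
alpha_hat = len(x) / np.log(x / x_m_hat).sum()   # MLE of the shape parameter
print(f"x_m ~ {x_m_hat:.3f}, alpha ~ {alpha_hat:.3f}")   # close to (1.0, 2.5)
```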


Zavala J.,Perimeter Institute for Theoretical Physics | Zavala J.,University of Waterloo | Vogelsberger M.,Harvard - Smithsonian Center for Astrophysics | Walker M.,Harvard - Smithsonian Center for Astrophysics
Monthly Notices of the Royal Astronomical Society: Letters | Year: 2013

Self-interacting dark matter is an attractive alternative to the cold dark matter paradigm only if it is able to substantially reduce the central densities of dwarf-size haloes while keeping the densities and shapes of cluster-size haloes within current constraints. Given the seemingly stringent nature of the latter, it was thought for nearly a decade that self-interacting dark matter would be viable only if the cross-section for self-scattering was strongly velocity dependent. However, it has recently been suggested that a constant cross-section per unit mass of σ_T/m ~ 0.1 cm² g⁻¹ is sufficient to accomplish the desired effect. We explicitly investigate this claim using high-resolution cosmological simulations of a Milky Way-size halo and find that, similarly to the cold dark matter case, such a cross-section produces a population of massive subhaloes that is inconsistent with the kinematics of the classical dwarf spheroidals, in particular with the inferred slopes of the mass profiles of Fornax and Sculptor. This problem is resolved if σ_T/m ~ 1 cm² g⁻¹ at the dwarf spheroidal scales. Since this value is likely inconsistent with the halo shapes of several clusters, our results leave only a small window open for a velocity-independent self-interacting dark matter model to work as a distinct alternative to cold dark matter. © 2013 The Authors Published by Oxford University Press on behalf of the Royal Astronomical Society.


Cohen A.,University of British Columbia | Davidson S.,University of Waterloo
Water Alternatives | Year: 2011

Watersheds are a widely accepted scale for water governance activities. This paper makes three contributions to current understandings of watersheds as governance units. First, the paper collects recent research identifying some of the challenges associated with the policy framework understood as the watershed approach. These challenges are boundary choice, accountability, public participation, and watersheds' asymmetries with 'problem-sheds' and 'policy-sheds'. Second, the paper draws upon this synthesis and on a review of the development and evolution of the concept of watersheds to suggest that the challenges associated with the watershed approach are symptoms of a broader issue: that the concept of watersheds was developed as a technical tool but has been taken up as a policy framework. The result of this transition from tool to framework, the paper argues, has been the conflation of governance tools, hydrologic boundaries, and Integrated Water Resources Management (IWRM). Third, the paper calls for an analysis of watersheds as separate from the governance tools with which they have been conflated, and presents three entry points into such an analysis.


Allarakhia M.,University of Waterloo | Walsh S.,University of New Mexico
Technovation | Year: 2012

Governments and companies around the globe have embraced nanotechnology as a strategically critical pan-industrial technology. Many view it as one of the essential foundation technology bases of the next Schumpeterian wave. A number of commercial and government-sponsored groups have developed a variety of consortia centered on the commercial promise of nanotechnology. Yet the optimal management of these consortia has proven elusive, to the point that some suggest that they cannot be managed at all. If these consortia are important, and their effective management crucial, then there is cause for concern. We utilize the case study method to create a nanotechnology consortia management diagnostic model based on institutional analysis and development (IAD). Nanotechnology consortia are formed for a variety of purposes and their stakeholders include governments, industries, large firms, SMEs, entrepreneurial enterprises, and supporting firms. © 2011 Elsevier Ltd. All rights reserved.


Wong J.S.,Simon Fraser University | Schonlau M.,University of Waterloo
Criminal Justice and Behavior | Year: 2013

Over the past decade school bullying has emerged as a prominent issue of concern for students, parents, educators, and researchers. Bully victimization has been linked to a long list of negative outcomes, such as depression, peer rejection, school dropout, eating disorders, delinquency, and violence. Previous research relating bully victimization to delinquency has typically used standard regression techniques that may not sufficiently control for heterogeneity between bullied and nonbullied youths. Using a large, nationally representative panel dataset, the National Longitudinal Survey of Youth 1997 (NLSY97), we use a propensity score matching technique to assess the impact of bully victimization on a range of delinquency outcomes. Results show that 19% of respondents had been victimized prior to the age of 12 years (n = 8,833). Early victimization is predictive of the development of 6 out of 10 delinquent behaviors measured over a period of 6 years, including assault, vandalism, theft, other property crimes (such as receiving stolen property or fraud), selling drugs, and running away from home. Bully victimization should be considered an important precursor to delinquency. © 2013 International Association for Correctional and Forensic Psychology.
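
A minimal sketch of the general technique named above, propensity score matching, on synthetic data follows. This is not the NLSY97 analysis; the covariates, treatment model, and effect size are all invented for illustration.

```python
# Propensity score matching on synthetic data (illustrative sketch only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 3))                          # covariates
p_treat = 1 / (1 + np.exp(-(X[:, 0] - 0.5 * X[:, 1])))
treated = rng.random(n) < p_treat                    # hypothetical "victimized" indicator
y = 0.4 * treated + X[:, 0] + rng.normal(size=n)     # outcome with true effect 0.4

# 1. Estimate propensity scores from the covariates.
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# 2. Match each treated unit to the nearest-propensity control (with replacement).
t_idx, c_idx = np.where(treated)[0], np.where(~treated)[0]
matches = c_idx[np.abs(ps[c_idx][None, :] - ps[t_idx][:, None]).argmin(axis=1)]

# 3. Average treatment effect on the treated, from matched pairs.
print("ATT ~", round(float(np.mean(y[t_idx] - y[matches])), 3))   # near 0.4
```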


Harmes A.,University of Waterloo
Global Environmental Politics | Year: 2011

This article examines the potential effectiveness of socially responsible investment (SRI) and investor environmentalism through carbon disclosure in terms of their key goal of creating real financial incentives, through share price performance, for firms to pursue climate change mitigation. It does so by theoretically assessing the two main assumptions which underpin investor environmentalism as promoted by SRI funds and NGOs such as the Carbon Disclosure Project: those concerning the power of institutional investors, and the "business case" for climate change mitigation. In doing so, it argues that the potential of using institutional investors to create real financial incentives for climate change mitigation, in the form of share price performance, has been considerably overestimated and that there is not even a strong theoretical case for why carbon disclosure should work in this regard. This is argued based on the structural constraints faced by most institutional investors, as well as the fundamentally incorrect assumption, which theoretically underpins these initiatives, that climate change is a form of market failure. © 2011 by the Massachusetts Institute of Technology.


Cosentino A.,University of Waterloo
Physical Review A - Atomic, Molecular, and Optical Physics | Year: 2013

We present a simple semidefinite program whose optimal value is equal to the maximum probability of perfectly distinguishing orthogonal maximally entangled states using any PPT measurement (a measurement whose operators are positive under partial transpose). When the states to be distinguished are given by the tensor product of Bell states, the semidefinite program simplifies to a linear program. In Phys. Rev. Lett. 109, 020506 (2012), Yu, Duan, and Ying exhibit a set of four maximally entangled states in C^4 ⊗ C^4 which is distinguishable by any PPT measurement only with probability strictly less than 1. Using semidefinite programming, we show a tight bound of 7/8 on this probability (3/4 for the case of unambiguous PPT measurements). We generalize this result by demonstrating a simple construction of a set of k states in C^k ⊗ C^k with the same property, for any k that is a power of 2. By running numerical experiments, we show the local indistinguishability of certain sets of generalized Bell states in C^5 ⊗ C^5 and C^6 ⊗ C^6 previously considered in the literature. © 2013 American Physical Society.
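
A hedged sketch of such a semidefinite program, specialized to the four two-qubit Bell states, is given below using cvxpy; it assumes a recent cvxpy (1.2+) with the partial_transpose atom, and solver details may vary. For the full two-qubit Bell basis the optimal PPT value is known to be 1/2.

```python
# SDP for PPT-measurement distinguishability of the four Bell states (sketch).
import numpy as np
import cvxpy as cp

d = 2
e = np.eye(d)
def bell(a, b):   # |Phi_ab>: the four Bell states for d = 2
    phi = sum(np.kron(e[j], e[(j + b) % d]) * (-1) ** (a * j) for j in range(d))
    return phi / np.sqrt(d)

states = [np.outer(v, v) for v in (bell(0, 0), bell(0, 1), bell(1, 0), bell(1, 1))]
k, D = len(states), d * d

P = [cp.Variable((D, D), hermitian=True) for _ in range(k)]   # measurement operators
constraints = [sum(P) == np.eye(D)]                           # completeness
for Pi in P:
    constraints += [Pi >> 0,                                  # positivity
                    cp.partial_transpose(Pi, [d, d], axis=1) >> 0]   # PPT condition

objective = cp.Maximize(cp.real(sum(cp.trace(Pi @ rho) for Pi, rho in zip(P, states))) / k)
prob = cp.Problem(objective, constraints)
prob.solve(solver=cp.SCS)
print(round(prob.value, 3))   # ~0.5 for the full Bell basis
```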


Granek J.A.,York University | Gorbet D.J.,University of Waterloo | Sergio L.E.,York University
Cortex | Year: 2010

Using event-related functional magnetic resonance imaging (fMRI), we examined the effect of video-game experience on the neural control of increasingly complex visuomotor tasks. Previously, skilled individuals have demonstrated the use of a more efficient movement control brain network, including the prefrontal, premotor, primary sensorimotor and parietal cortices. Our results extend and generalize this finding by documenting additional prefrontal cortex activity in experienced video gamers planning for complex eye-hand coordination tasks that are distinct from actual video-game play. These changes in activation between non-gamers and extensive gamers are putatively related to the increased online control and spatial attention required for complex visually guided reaching. These data suggest that the basic cortical network for processing complex visually guided reaching is altered by extensive video-game play. © 2009.


Motahari A.S.,Sharif University of Technology | Oveis-Gharan S.,Ciena | Maddah-Ali M.-A.,Alcatel - Lucent | Khandani A.K.,University of Waterloo
IEEE Transactions on Information Theory | Year: 2014

In this paper, we develop the machinery of real interference alignment. This machinery is extremely powerful in achieving the sum degrees of freedom (DoF) of single antenna systems. The scheme of real interference alignment is based on designing single-layer and multilayer constellations used for modulating information messages at the transmitters. We show that constellations can be aligned in a similar fashion as that of vectors in multiple antenna systems and space can be broken up into fractional dimensions. The performance analysis of the signaling scheme makes use of a recent result in the field of Diophantine approximation, which states that the convergence part of the Khintchine-Groshev theorem holds for points on nondegenerate manifolds. Using real interference alignment, we obtain the sum DoF of two model channels, namely the Gaussian interference channel (IC) and the X channel. It is proved that the sum DoF of the K-user IC is K/2 for almost all channel parameters. We also prove that the sum DoF of the X-channel with K transmitters and M receivers is KM/(K+M-1) for almost all channel parameters. © 2014 IEEE.


Gates M.,University of Waterloo
Rural and remote health | Year: 2013

Prevalence rates of overweight and obesity in Canada have risen rapidly in the past 20 years. Concurrent with the obesity epidemic, sleep time and physical activity levels have decreased among youth. Aboriginal youth experience disproportionately high obesity prevalence but there is inadequate knowledge of contributing factors. This research aimed to examine sleep and screen time behavior and their relationship to Body Mass Index (BMI) in on-reserve First Nations youth from Ontario, Canada. This was an observational population-based study of cross-sectional design. Self-reported physical activity, screen time, and lifestyle information was collected from 348 youth aged 10-18 years residing in five northern, remote First Nations communities and one southern First Nations community in Ontario, Canada, from October 2004 to June 2010. Data were collected in the school setting using the Waterloo Web-based Eating Behaviour Questionnaire. Based on self-reported height and weight, youth were classified normal (including underweight), overweight and obese according to BMI. Descriptive cross-tabulations and Pearson's χ² tests were used to compare screen time, sleep habits, and physical activity across BMI categories. Participants demonstrated low levels of after-school physical activity, and screen time in excess of national guidelines. Overall, 75.5% reported being active in the evening three or less times per week. Approximately one-quarter of the surveyed youth watched more than 2 hours of television daily and 33.9% spent more than 2 hours on the internet or playing video games. For boys, time using the internet/video games (p=0.022) was positively associated with BMI category, with a greater than expected proportion of obese boys spending more than 2 hours using the internet or video games daily (56.7%). Also for boys, time spent outside after school (p=0.033) was negatively associated with BMI category, with a lesser than expected proportion spending 'most of the time' outside (presumably being active) after school. These relationships were not observed in girls. Adjusted standardized residuals suggest a greater than expected proportion of obese individuals had a television in their bedroom (66.7%) as compared with the rest of the population. The current study adds to the limited information about contributors to overweight and obesity in First Nations youth living on-reserve in Canada. Concerns about inadequate sleep, excess screen time, and inadequate physical activity mirror those of the general population. Further investigation is warranted to improve the understanding of how various lifestyle behaviors influence overweight, obesity, and the development of chronic disease among First Nations youth. Initiatives to reduce screen time, increase physical activity, and encourage adequate sleep among on-reserve First Nations youth are recommended.


Konig R.,University of Waterloo | Smith G.,IBM
IEEE Transactions on Information Theory | Year: 2014

When two independent analog signals, X and Y, are added together giving Z = X + Y, the entropy of Z, H(Z), is not a simple function of the entropies H(X) and H(Y), but rather depends on the details of X and Y's distributions. Nevertheless, the entropy power inequality (EPI), which states that e^{2H(Z)} ≥ e^{2H(X)} + e^{2H(Y)}, gives a very tight restriction on the entropy of Z. This inequality has found many applications in information theory and statistics. The quantum analogue of adding two random variables is the combination of two independent bosonic modes at a beam splitter. The purpose of this paper is to give a detailed outline of the proof of two separate generalizations of the EPI to the quantum regime. Our proofs are similar in spirit to the standard classical proofs of the EPI, but some new quantities and ideas are needed in the quantum setting. In particular, we find a new quantum de Bruijn identity relating entropy production under diffusion to a divergence-based quantum Fisher information. Furthermore, this Fisher information exhibits certain convexity properties in the context of beam splitters. © 2014 IEEE.
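
The classical inequality stated above can be checked numerically in the Gaussian case, where it holds with equality: the differential entropy of N(0, σ²) is H = ½ ln(2πeσ²), so e^{2H} is proportional to the variance. A minimal check (not from the paper, which concerns the bosonic analogue):

```python
# Classical EPI check for independent Gaussians: equality holds.
import numpy as np

def H_gauss(var):                       # differential entropy of N(0, var)
    return 0.5 * np.log(2 * np.pi * np.e * var)

vx, vy = 1.0, 3.0
lhs = np.exp(2 * H_gauss(vx + vy))      # Z = X + Y has variance vx + vy
rhs = np.exp(2 * H_gauss(vx)) + np.exp(2 * H_gauss(vy))
print(lhs, rhs)                         # both equal 2*pi*e*(vx + vy)
```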


Sreenivasan V.,Indiana University Bloomington | Bobier W.R.,University of Waterloo
Vision Research | Year: 2014

Convergence insufficiency (CI) is a developmental visual anomaly defined clinically by a reduced near point of convergence and a reduced capacity to view through base-out prisms (fusional convergence), coupled with asthenopic symptoms, typically blur and diplopia. Experimental studies show reduced vergence parameters and tonic adaptation. Based upon current models of accommodation and vergence, we hypothesize that the reduced vergence adaptation in CI leads to excessive amounts of convergence accommodation (CA). Eleven CI participants (mean age = 17.4 ± 2.3 years) were recruited with a reduced capacity to view through increasing magnitudes of base-out (BO) prisms (mean fusional convergence at 40 cm = 12 ± 0.9δ). Testing followed our previous experimental design for (n = 11) binocularly normal adults. Binocular fixation of a difference-of-Gaussian (DoG) target (0.2 cpd) elicited CA responses during vergence adaptation to a 12δ BO prism. Vergence and CA responses were obtained at 3 min intervals over a 15 min period, and time courses were quantified using exponential decay functions. Results were compared to previously published data on eleven binocular normals. Eight participants completed the study. CIs showed a significantly reduced magnitude of vergence adaptation (CI: 2.9δ vs. normals: 6.6δ; p = 0.01) and CA reduction (CI = 0.21D, normals = 0.55D; p = 0.03). However, the decay time constants for adaptation and CA responses were not significantly different. CA changes were not confounded by changes in tonic accommodation (change in TA = 0.01 ± 0.2D; p = 0.8). The reduced magnitude of vergence adaptation found in CI patients, resulting in higher levels of CA, may potentially explain their clinical findings of reduced positive fusional vergence (PFV) and the common symptom of blur. © 2014 Elsevier B.V.


Qin A.K.,Nanjing Southeast University | Clausi D.A.,University of Waterloo
IEEE Transactions on Image Processing | Year: 2010

Multivariate image segmentation is a challenging task, influenced by large intraclass variation that reduces class distinguishability as well as increased feature space sparseness and solution space complexity that impose computational cost and degrade algorithmic robustness. To deal with these problems, a Markov random field (MRF) based multivariate segmentation algorithm called "multivariate iterative region growing using semantics" (MIRGS) is presented. In MIRGS, the impact of intraclass variation and computational cost are reduced using the MRF spatial context model incorporated with adaptive edge penalty and applied to regions. Semantic region growing starting from watershed over-segmentation and performed alternatively with segmentation gradually reduces the solution space size, which improves segmentation effectiveness. As a multivariate iterative algorithm, MIRGS is highly sensitive to initial conditions. To suppress initialization sensitivity, it employs a region-level k-means (RKM) based initialization method, which consistently provides accurate initial conditions at low computational cost. Experiments show the superiority of RKM relative to two commonly used initialization methods. Segmentation tests on a variety of synthetic and natural multivariate images demonstrate that MIRGS consistently outperforms three other published algorithms. © 2006 IEEE.


Hall P.A.,University of Waterloo
Current Directions in Psychological Science | Year: 2016

Human beings have reliable preferences for energy-rich foods; these preferences are present at birth and possibly innate. Relatively recent changes in our day-to-day living context have rendered such foods commonly encountered, nearly effortless to procure, and frequently brought to mind. Theoretical, conceptual, and empirical perspectives from the field of social neuroscience support the hypothesis that the increase in the prevalence of overweight and obesity in first- and second-world countries may be a function of these dynamics coupled with our highly evolved but ultimately imperfect capacities for self-control. This review describes the significance of executive-control systems for explaining the occurrence of nonhomeostatic forms of dietary behavior—that is, those aspects of calorie ingestion that are not for the purpose of replacing calories burned. I focus specifically on experimental findings—including those from cortical-stimulation studies—that collectively support a causal role for executive-control systems in modulating cravings for and consumption of high-calorie foods. © 2016, © The Author(s) 2016.


Liang K.,University of Waterloo | Keles S.,University of Wisconsin - Madison
BMC Bioinformatics | Year: 2012

Background: ChIP-seq has become an important tool for identifying genome-wide protein-DNA interactions, including transcription factor binding and histone modifications. In ChIP-seq experiments, ChIP samples are usually coupled with their matching control samples. Proper normalization between the ChIP and control samples is an essential aspect of ChIP-seq data analysis. Results: We have developed a novel method for estimating the normalization factor between the ChIP and the control samples. Our method, named NCIS (Normalization of ChIP-seq), can accommodate both low and high sequencing depth datasets. We compare the statistical properties of NCIS against existing methods in a set of diverse simulation settings, where NCIS enjoys the best estimation precision. In addition, we illustrate the impact of the normalization factor on FDR control and show that NCIS leads to more power among methods that control FDR at nominal levels. Conclusion: Our results indicate that proper normalization between the ChIP and control samples is an important step in ChIP-seq analysis in terms of power and error rate control. Our proposed method shows excellent statistical properties and is useful in the full range of ChIP-seq applications, especially with deeply sequenced data. © 2012 Liang and Keleş; licensee BioMed Central Ltd.
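
A hedged sketch of the general idea behind ChIP/control normalization follows, using a deliberately simplified background-bin heuristic rather than the exact NCIS estimator: bins that look background-like in both samples are used to estimate the scaling factor, which avoids the upward bias that enriched bins impose on the naive ratio of totals.

```python
# Simplified background-bin normalization (stand-in for the NCIS idea).
import numpy as np

rng = np.random.default_rng(2)
n_bins = 10_000
background = rng.gamma(2.0, 1.0, n_bins)             # shared local coverage
control = rng.poisson(background * 5)                # control reads per bin
signal = np.zeros(n_bins)
signal[:300] = rng.gamma(4, 10, 300)                 # hypothetical 3% bound bins
chip = rng.poisson(background * 3 + signal)          # ChIP = background + signal

naive = chip.sum() / control.sum()                   # inflated by enriched bins
low = (chip + control) < np.quantile(chip + control, 0.5)   # background-like bins
background_ratio = chip[low].sum() / control[low].sum()     # closer to true 3/5
print(round(naive, 3), round(background_ratio, 3))
```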


Brown E.G.,University of Waterloo
Physical Review A - Atomic, Molecular, and Optical Physics | Year: 2013

We study the harvesting of quantum and classical correlations from a hot scalar field in a periodic cavity by a pair of spatially separated oscillator-detectors. Specifically, we utilize nonperturbative and exact (non-numerical) techniques to solve for the evolution of the detectors-field system and then we examine how the entanglement, Gaussian quantum discord, and mutual information obtained by the detectors change with the temperature of the field. While (as expected) the harvested entanglement rapidly decays to zero as temperature is increased, we find remarkably that both the mutual information and the discord can actually be increased by multiple orders of magnitude via increasing the temperature. We go on to explain this phenomenon by a variety of means and are able to make accurate predictions of the behavior of thermal amplification. By doing this we also introduce a new perspective on harvesting in general and illustrate that the system can be represented as two dynamically decoupled systems, each with only a single detector. The thermal amplification of discord harvesting represents an exciting prospect for discord-based quantum computation, including its use in entanglement activation. © 2013 American Physical Society.


Johnston N.,University of Waterloo
Physical Review A - Atomic, Molecular, and Optical Physics | Year: 2013

The separability from spectrum problem asks for a characterization of the eigenvalues of the bipartite mixed states ρ with the property that U†ρU is separable for all unitary matrices U. This problem has been solved when the local dimensions m and n satisfy m=2 and n≤3. We solve all remaining qubit-qudit cases (i.e., when m=2 and n≥4 is arbitrary). In all of these cases we show that a state is separable from spectrum if and only if U†ρU has positive partial transpose for all unitary matrices U. This equivalence is in stark contrast with the usual separability problem, where a state having positive partial transpose is a strictly weaker property than it being separable. © 2013 American Physical Society.


Wu C.,University of Waterloo
Tobacco control | Year: 2010

This paper describes the design features, data collection methods and analytical strategies of the ITC China Survey, a prospective cohort study of 800 adult smokers and 200 adult non-smokers in each of six cities in China. In addition to features and methods which are common to ITC surveys in other countries, the ITC China Survey possesses unique features in frame construction, a large first phase data enumeration and sampling selection; and it uses special techniques and measures in training, field work organisation and quality control. It also faces technical challenges in sample selection and weight calculation when some selected upper level clusters need to be replaced by new ones owing to massive relocation exercises within the cities.


Parry D.C.,University of Waterloo
Leisure Sciences | Year: 2014

Many feminist scholars avow social justice as the ultimate goal of their research, but the process remains poorly conceptualized. With this in mind, the purpose of this article is to use feminist leisure scholarship to provide insight into the ways that I see myself and others enacting social justice. In particular, I outline how a politics of hope, transformative encounters, and activism enable feminist leisure scholars to make the world more just. I consider future areas of feminist leisure research that would benefit from a social justice agenda and conclude with a cautionary note about the seductive postfeminist message that the work of feminism is done. © 2014 Taylor & Francis Group, LLC.


Weber O.,University of Waterloo
Business Strategy and the Environment | Year: 2014

What is the current state of environmental, social and governance (ESG) reporting and what is the relation between ESG reporting and the financial performance of Chinese companies? This study analyses corporate ESG disclosure in China between 2005 and 2012 by analysing the members of the main indexes of the biggest Chinese stock exchanges. After discussing theories that explain the ESG performance of firms such as institutional theory, accountability and stakeholder theory we present uni- and multivariate statistical analyses of ESG reporting and its relation to environmental and financial performance. Our results suggest that ownership status and membership of certain stock exchanges influence the frequency of ESG disclosure. In turn, ESG reporting influences both environmental and financial performance. We conclude that the main driver for ESG disclosure is accountability and that Chinese corporations are catching up with respect to the frequency of ESG reporting as well as with respect to the quality. © 2013 John Wiley & Sons, Ltd and ERP Environment.


Burn D.H.,University of Waterloo
Hydrological Processes | Year: 2014

A regional, or pooled, approach to frequency analysis is explored in the context of the estimation of rainfall quantiles required for the formation of intensity-duration-frequency (IDF) curves. Resampling experiments are used, in conjunction with two rainfall data sets with long record lengths, to explore the merits of a pooled approach to the estimation of extreme rainfall quantiles. The width of the 95% confidence interval for quantile estimates is used as the primary basis to evaluate the relative merits of pooled and single site estimates of rainfall quantiles. Recommendations are formulated for applying the regional approach to frequency analysis, and these recommendations are used in the application of the regional approach to 40 sites with IDF data in southern Ontario, Canada. The results demonstrate that the regional approach is preferred to single site analysis for estimating extreme rainfall quantiles for conditions and data availability commonly encountered in practice. © 2014 John Wiley & Sons, Ltd.
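
For orientation, below is a minimal single-site version of the quantile-uncertainty calculation described above: fit a GEV distribution to annual maxima and bootstrap the 95% confidence interval for the 100-year quantile. This is a generic sketch on synthetic data, not the paper's pooled procedure or its data sets.

```python
# Single-site GEV quantile with a bootstrap confidence interval (sketch).
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(3)
annual_max = genextreme.rvs(c=-0.1, loc=30, scale=8, size=40, random_state=rng)

def q100(sample):                        # 100-year rainfall quantile from a GEV fit
    c, loc, scale = genextreme.fit(sample)
    return genextreme.ppf(1 - 1 / 100, c, loc, scale)

boot = [q100(rng.choice(annual_max, size=len(annual_max))) for _ in range(500)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"Q100 ~ {q100(annual_max):.1f}, 95% CI width ~ {hi - lo:.1f}")
```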


Chan T.M.,University of Waterloo
Computational Geometry: Theory and Applications | Year: 2010

Given n axis-parallel boxes in a fixed dimension d ≥ 3, how efficiently can we compute the volume of the union? This standard problem in computational geometry, commonly referred to as Klee's measure problem, can be solved in time O(n^{d/2} log n) by an algorithm of Overmars and Yap (FOCS 1988). We give the first (albeit small) improvement: our new algorithm runs in time n^{d/2} 2^{O(log* n)}, where log* denotes the iterated logarithm. For the related problem of computing the depth in an arrangement of n boxes, we further improve the time bound to near O(n^{d/2}/log^{d/2-1} n), ignoring log log n factors. Other applications and lower-bound possibilities are discussed. The ideas behind the improved algorithms are simple. © 2009 Elsevier B.V.
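
For contrast with these asymptotic bounds, here is a simple coordinate-compression baseline for d = 2: the easy, roughly quadratic-time reference computation, not the algorithm described above.

```python
# Klee's measure problem in d = 2 by coordinate compression (baseline sketch).
import numpy as np

def union_area(rects):
    """rects: list of (x1, y1, x2, y2) axis-parallel rectangles."""
    xs = sorted({x for r in rects for x in (r[0], r[2])})
    ys = sorted({y for r in rects for y in (r[1], r[3])})
    covered = np.zeros((len(xs) - 1, len(ys) - 1), dtype=bool)
    for x1, y1, x2, y2 in rects:
        i1, i2 = xs.index(x1), xs.index(x2)
        j1, j2 = ys.index(y1), ys.index(y2)
        covered[i1:i2, j1:j2] = True              # mark covered grid cells
    dx = np.diff(xs)[:, None]
    dy = np.diff(ys)[None, :]
    return float((covered * dx * dy).sum())       # sum areas of covered cells

print(union_area([(0, 0, 2, 2), (1, 1, 3, 3)]))   # 4 + 4 - 1 = 7.0
```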


Honek J.F.,University of Waterloo
Biochemical Society Transactions | Year: 2014

A number of bacterial glyoxalase I enzymes are maximally activated by Ni²⁺ and Co²⁺ ions, but are inactive in the presence of Zn²⁺, yet these enzymes will also bind this metal ion. The structure-activity relationships between these two classes of glyoxalase I serve as important clues as to how the molecular structures of these proteins control metal-activation profiles. © The Authors Journal compilation © 2014 Biochemical Society.


Nayak P.K.,University of Waterloo
Ecology and Society | Year: 2014

Innovations in social-ecological research require novel approaches to conceive change in human-environment systems. The study of history constitutes an important element of this process. First, using the Chilika Lagoon small-scale fisheries in India, as a case, in this paper I reflect on the appropriateness of a social-ecological perspective for understanding economic history. Second, I examine here how changes in various components of the lagoon's social-ecological system influenced and shaped economic history and the political processes surrounding it. I then discuss the two-way linkages between economic history and social-ecological processes to highlight that the components of a social-ecological system, including the economic aspects, follow an interactive and interdependent trajectory such that their combined impacts have important implications for human-environment connections and sustainability of the system as a whole. Social, ecological, economic, and political components of a system are interlinked and may jointly contribute to the shaping of specific histories. Based on this synthesis, I offer insights to move beyond theoretical, methodological, and disciplinary boundaries as an overarching approach, an inclusive lens, to study change in complex social-ecological systems. © 2014 by the author(s).


Lin H.,University of Waterloo
Organization and Environment | Year: 2014

This article extends resource dependence theory to systematically explain what types of firms are likely to partner with governments through government–business partnerships (GBPs) to address environmental challenges. Using data from 377 environmental alliances formed between 1985 and 2013, this article empirically assesses firms’ likelihood of choosing GBPs for environmental improvements rather than selection of other cross-sector and interfirm partnership(s). The results suggest that GBPs are likely to form when firms are in vulnerable strategic positions, for example, where their survival substantively relies on receiving government support. GBPs are also likely to form when firms have strong resource or social positions that allow them to leverage governmental power in the development of strategic opportunities related to environmental improvements. © 2014 SAGE Publications


Raeisi S.,University of Calgary | Raeisi S.,University of Waterloo | Sekatski P.,University of Geneva | Simon C.,University of Calgary
Physical Review Letters | Year: 2011

Observing quantum effects such as superpositions and entanglement in macroscopic systems requires not only a system that is well protected against environmental decoherence, but also sufficient measurement precision. Motivated by recent experiments, we study the effects of coarse graining in photon number measurements on the observability of micro-macro entanglement that is created by greatly amplifying one photon from an entangled pair. We compare the results obtained for a unitary quantum cloner, which generates micro-macro entanglement, and for a measure-and-prepare cloner, which produces a separable micro-macro state. We show that the distance between the probability distributions of results for the two cloners approaches zero for a fixed moderate amount of coarse graining. Proving the presence of micro-macro entanglement therefore becomes progressively harder as the system size increases. © 2011 American Physical Society.


Woody E.Z.,University of Waterloo | Szechtman H.,McMaster University
Frontiers in Human Neuroscience | Year: 2013

Research indicates that there is a specially adapted, hard-wired brain circuit, the security motivation system, which evolved to manage potential threats, such as the possibility of contamination or predation. The existence of this system may have important implications for policy-making related to security. The system is sensitive to partial, uncertain cues of potential danger, detection of which activates a persistent, potent motivational state of wariness or anxiety. This state motivates behaviors to probe the potential danger, such as checking, and to correct for it, such as washing. Engagement in these behaviors serves as the terminating feedback for the activation of the system. Because security motivation theory makes predictions about what kinds of stimuli activate security motivation and what conditions terminate it, the theory may have applications both in understanding how policy-makers can best influence others, such as the public, and also in understanding the behavior of policy-makers themselves. © 2013 Woody and Szechtman.


Nelson-Wong E.,Regis University | Callaghan J.P.,University of Waterloo
Spine | Year: 2014

OBJECTIVE.: To determine if development of transient low back pain (LBP) during prolonged standing in individuals without prior history of LBP predicts future clinical LBP development at higher rates than in individuals who do not develop LBP during prolonged standing. SUMMARY OF BACKGROUND DATA.: Prolonged standing has been found to induce transient LBP in 40% to 70% of previously asymptomatic individuals. Individuals who develop pain during standing have been found to have altered neuromuscular profiles prior to the standing exposure compared with their pain-free counterparts; therefore, it has been hypothesized that these individuals may have higher risk for LBP disorders. METHODS.: Previously asymptomatic participants who had completed a biomechanical study investigating LBP development during standing and response to exercise intervention completed annual surveys regarding LBP status for a period of 3 years. χ² analyses were performed to determine group differences in LBP incidence rates. Accuracy statistics were calculated for the ability of LBP development during standing to predict future LBP. RESULTS.: Participants who developed transient LBP during standing had significantly higher rates of clinical LBP during the 3-year follow-up period (35.3% vs. 23.1%) and were 3 times more likely to experience an episode of clinical LBP during the first 24 months than their non-pain-developing counterparts. CONCLUSION.: Transient LBP development during prolonged standing is a positive predictive factor for future clinical LBP in previously asymptomatic individuals. Individuals who experience transient LBP during standing may be considered a "preclinical" group who are at increased risk for future LBP disorders. © 2014, Lippincott Williams & Wilkins.


Lu Q.-B.,University of Waterloo
Mutation Research - Reviews in Mutation Research | Year: 2010

The subpicosecond-lived prehydrated electron (e⁻_pre) is a fascinating species in radiation biology and in the radiotherapy of cancer. Using femtosecond time-resolved laser spectroscopy, we have recently resolved that e⁻_pre states are electronically excited states with lifetimes of ∼180 fs and ∼550 fs, respectively, after the identification and removal of a coherence spike. Notably, the weakly bound e⁻_pre (<0 eV) has the highest yield among all the radicals generated in the cell during ionizing radiation. Recently, it has been demonstrated that dissociative electron transfer (DET) reactions of e⁻_pre can lead to important biological effects. By direct observation of the transition states of the DET reactions, we have shown that DET reactions of e⁻_pre play key roles in bond breakage of nucleotides and in the activation of halopyrimidines as potential hypoxic radiosensitizers and of the chemotherapeutic drug cisplatin in combination with radiotherapy. This review discusses all of these findings, which may lead to improved strategies in the radiotherapy of cancer, in the radioprotection of humans, and in the discovery of new anticancer drugs. © 2010 Elsevier B.V. All rights reserved.


Gangeh M.J.,University of Waterloo
Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention | Year: 2010

In this paper, a texton-based classification system using a raw pixel representation together with a support vector machine with a radial basis function kernel is proposed for the classification of emphysema in computed tomography images of the lung. The proposed approach is tested on 168 annotated regions of interest consisting of normal tissue, centrilobular emphysema, and paraseptal emphysema. The results show the superiority of the proposed approach over common techniques in the literature, including moments of the histogram of filter responses based on Gaussian derivatives. The performance of the proposed system, with an accuracy of 96.43%, also slightly improves over a recently proposed approach based on local binary patterns.
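
To make the pipeline concrete, here is a minimal sketch of a texton-based classifier of the kind the abstract describes: textons are learned by k-means clustering of raw pixel patches, each region of interest (ROI) is summarized as a histogram of texton labels, and the histograms are classified with an RBF-kernel SVM. The data, patch size, and cluster count below are illustrative stand-ins, not the paper's protocol.

```python
# Minimal texton-pipeline sketch: k-means textons from raw pixel patches,
# per-ROI texton-label histograms, RBF-kernel SVM. Data are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def extract_patches(image, size=5):
    """Collect all overlapping size x size raw-pixel patches as row vectors."""
    h, w = image.shape
    return np.array([image[i:i + size, j:j + size].ravel()
                     for i in range(h - size + 1)
                     for j in range(w - size + 1)])

# Synthetic "ROIs": two texture classes with different local statistics.
rois = [rng.normal(loc=c, scale=1.0, size=(32, 32)) for c in (0.0, 3.0) for _ in range(20)]
labels = np.array([0] * 20 + [1] * 20)

# 1) Learn a texton dictionary by clustering raw patches from all ROIs.
all_patches = np.vstack([extract_patches(r) for r in rois])
kmeans = KMeans(n_clusters=16, n_init=4, random_state=0).fit(all_patches)

# 2) Represent each ROI as a normalized histogram of texton labels.
def texton_histogram(roi):
    idx = kmeans.predict(extract_patches(roi))
    hist = np.bincount(idx, minlength=16).astype(float)
    return hist / hist.sum()

X = np.array([texton_histogram(r) for r in rois])

# 3) Classify the histograms with an RBF-kernel SVM.
clf = SVC(kernel="rbf", gamma="scale").fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```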


Swamy C.,University of Waterloo | Shmoys D.B.,Cornell University
SIAM Journal on Computing | Year: 2012

Stochastic optimization problems provide a means to model uncertainty in the input data where the uncertainty is modeled by a probability distribution over the possible realizations of the actual data. We consider a broad class of these problems in which the realized input is revealed through a series of stages and hence are called multistage stochastic programming problems. Multistage stochastic programming and, in particular, multistage stochastic linear programs with full recourse, is a domain that has received a great deal of attention within the operations research community, mostly from the perspective of computational results in application settings. Our main result is to give the first fully polynomial approximation scheme for a broad class of multistage stochastic linear programming problems with any constant number of stages. The algorithm analyzed, known as the sample average approximation method, is quite simple and is the one most commonly used in practice. The algorithm accesses the input by means of a "black box" that can generate, given a series of outcomes for the initial stages, a sample of the input according to the conditional probability distribution (given those outcomes). We use this to obtain the first approximation algorithms for a variety of k-stage generalizations of basic combinatorial optimization problems including the set cover, vertex cover, multicut on trees, facility location, and multicommodity flow problems. © by SIAM. Unauthorized reproduction of this article is prohibited.
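
As background, the sample average approximation idea is simply to replace the expectation in the stochastic program by an average over sampled scenarios and optimize that surrogate. The sketch below applies it to a toy two-stage problem (a newsvendor); the problem and its parameters are hypothetical and far simpler than the multistage programs analyzed in the paper.

```python
# Toy sample average approximation (SAA): order quantity q is chosen now,
# demand D is revealed later; the expectation of profit is replaced by an
# average over sampled demand scenarios. Illustrative only.
import numpy as np

rng = np.random.default_rng(3)
cost, price = 1.0, 2.5                      # purchase cost and sale price

def saa_profit(q, demand_samples):
    """Sample-average estimate of expected profit for order quantity q."""
    sales = np.minimum(q, demand_samples)
    return np.mean(price * sales - cost * q)

demand_samples = rng.gamma(shape=4.0, scale=25.0, size=10_000)  # scenarios
qs = np.linspace(0.0, 300.0, 601)
profits = [saa_profit(q, demand_samples) for q in qs]
q_star = qs[int(np.argmax(profits))]

# Closed-form check: the optimal q satisfies P(D <= q) = 1 - cost/price.
print("SAA optimal order:", q_star)
print("critical ratio target:", 1 - cost / price,
      " empirical P(D <= q*):", np.mean(demand_samples <= q_star))
```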


Gamalero E.,University of Piemonte Orientale | Glick B.R.,University of Waterloo
Plant Physiology | Year: 2015

A focus on the mechanisms by which ACC deaminase-containing bacteria facilitate plant growth.Bacteria that produce the enzyme 1-aminocyclopropane-1-carboxylate (ACC) deaminase, when present either on the surface of plant roots (rhizospheric) or within plant tissues (endophytic), play an active role in modulating ethylene levels in plants. This enzyme activity facilitates plant growth especially in the presence of various environmental stresses. Thus, plant growth-promoting bacteria that express ACC deaminase activity protect plants from growth inhibition by flooding and anoxia, drought, high salt, the presence of fungal and bacterial pathogens, nematodes, and the presence of metals and organic contaminants. Bacteria that express ACC deaminase activity also decrease the rate of flower wilting, promote the rooting of cuttings, and facilitate the nodulation of legumes. Here, the mechanisms behind bacterial ACC deaminase facilitation of plant growth and development are discussed, and numerous examples of the use of bacteria with this activity are summarized. © 2015 American Society of Plant Biologists. All rights reserved.


Konig R.,IBM | Konig R.,University of Waterloo | Smith G.,IBM
Physical Review Letters | Year: 2013

We find a tight upper bound for the classical capacity of quantum thermal noise channels that is within 1/ln2 bits of Holevo's lower bound. This lower bound is achievable using unentangled, classical signal states, namely, displaced coherent states. Thus, we find that while quantum tricks might offer benefits, when it comes to classical communication, they can only help a bit. © 2013 American Physical Society.
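
For reference, a standard way to state such bounds (drawn from the general literature on bosonic channels, not quoted from the paper): with g(x) the entropy of a thermal state of mean photon number x, the coherent-state (Holevo) lower bound for a thermal channel of transmissivity η, mean input photon number N, and environment photon number N_E reads as follows.

```latex
% Standard statement (general literature, not quoted from the paper):
% g(x) is the entropy of a thermal state with mean photon number x,
% and C_low is the coherent-state (Holevo) lower bound.
\[
  g(x) = (x+1)\log_2(x+1) - x\log_2 x,
\]
\[
  C_{\mathrm{low}} = g\big(\eta N + (1-\eta)N_E\big) - g\big((1-\eta)N_E\big),
  \qquad
  C_{\mathrm{low}} \le C \le C_{\mathrm{low}} + \frac{1}{\ln 2}.
\]
```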


Michailovich O.,University of Waterloo
Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention | Year: 2010

A spectrum of brain-related disorders are nowadays known to manifest themselves in degradation of the integrity and connectivity of neural tracts in the white matter of the brain. Such damage tends to affect the pattern of water diffusion in the white matter; this information can be quantified by diffusion MRI (dMRI). Unfortunately, practical implementation of dMRI still poses a number of challenges which hamper its widespread integration into regular clinical practice. Chief among these is the problem of long scanning times. In particular, in the case of High Angular Resolution Diffusion Imaging (HARDI), the scanning times are known to increase linearly with the number of diffusion-encoding gradients. In this research, we use the theory of compressive sampling (also known as compressed sensing) to substantially reduce the number of diffusion gradients without compromising the informational content of HARDI signals. The experimental part of our study compares the proposed method with a number of alternative approaches, and shows that the former results in more accurate estimation of HARDI data in terms of the mean squared error.
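
As a generic illustration of the compressive sampling principle invoked here (not the paper's spherical-ridgelet model), the sketch below recovers a sparse coefficient vector from far fewer random measurements than unknowns by iterative soft thresholding (ISTA) applied to the l1-regularized least-squares problem; all data are synthetic.

```python
# Generic compressed-sensing illustration: recover a sparse vector x from
# m < n random measurements y = A x via ISTA for the l1-regularized
# least-squares problem. Not the paper's HARDI model.
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 200, 60, 8                       # ambient dim, measurements, sparsity

x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x_true

lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L, L = Lipschitz const of gradient

x = np.zeros(n)
for _ in range(2000):
    grad = A.T @ (A @ x - y)
    z = x - step * grad
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```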


Bhutta Z.A.,Aga Khan University | Das J.K.,Aga Khan University | Rizvi A.,Aga Khan University | Gaffey M.F.,Hospital for Sick Children | And 5 more authors.
The Lancet | Year: 2013

Maternal undernutrition contributes to 800,000 neonatal deaths annually through small-for-gestational-age births; stunting, wasting, and micronutrient deficiencies are estimated to underlie nearly 3·1 million child deaths annually. Progress has been made with many interventions implemented at scale and the evidence for effectiveness of nutrition interventions and delivery strategies has grown since The Lancet Series on Maternal and Child Undernutrition in 2008. We did a comprehensive update of interventions to address undernutrition and micronutrient deficiencies in women and children and used standard methods to assess emerging new evidence for delivery platforms. We modelled the effect on lives saved and cost of these interventions in the 34 countries that have 90% of the world's children with stunted growth. We also examined the effect of various delivery platforms and delivery options using community health workers to engage poor populations and promote behaviour change, access and uptake of interventions. Our analysis suggests the current total of deaths in children younger than 5 years can be reduced by 15% if populations can access ten evidence-based nutrition interventions at 90% coverage. Additionally, access to and uptake of iodised salt can alleviate iodine deficiency and improve health outcomes. Accelerated gains are possible and about a fifth of the existing burden of stunting can be averted using these approaches, if access is improved in this way. The estimated total additional annual cost involved for scaling up access to these ten direct nutrition interventions in the 34 focus countries is Int$9·6 billion per year. Continued investments in nutrition-specific interventions to avert maternal and child undernutrition and micronutrient deficiencies through community engagement and delivery strategies that can reach poor segments of the population at greatest risk can make a great difference. If this improved access is linked to nutrition-sensitive approaches - ie, women's empowerment, agriculture, food systems, education, employment, social protection, and safety nets - they can greatly accelerate progress in countries with the highest burden of maternal and child undernutrition and mortality. © 2013 Elsevier Ltd.


Friedland S.,University of Illinois at Chicago | Gheorghiu V.,University of Calgary | Gheorghiu V.,University of Waterloo | Gour G.,University of Calgary
Physical Review Letters | Year: 2013

Uncertainty relations are a distinctive characteristic of quantum theory that impose intrinsic limitations on the precision with which physical properties can be simultaneously determined. The modern work on uncertainty relations employs entropic measures to quantify the lack of knowledge associated with measuring noncommuting observables. However, there is no fundamental reason for using entropies as quantifiers; any functional relation that characterizes the uncertainty of the measurement outcomes defines an uncertainty relation. Starting from a very reasonable assumption of invariance under mere relabeling of the measurement outcomes, we show that Schur-concave functions are the most general uncertainty quantifiers. We then discover a fine-grained uncertainty relation that is given in terms of the majorization order between two probability vectors, significantly extending a majorization-based uncertainty relation first introduced in M. H. Partovi, Phys. Rev. A 84, 052117 (2011). Such a vector-type uncertainty relation generates an infinite family of distinct scalar uncertainty relations via the application of arbitrary uncertainty quantifiers. Our relation is therefore universal and captures the essence of uncertainty in quantum theory. © 2013 American Physical Society.
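
For reference, the two notions the abstract relies on, stated in standard form:

```latex
% Majorization between probability vectors and Schur-concavity of an
% uncertainty quantifier (standard definitions).
\[
  p \prec q
  \;\iff\;
  \sum_{i=1}^{k} p_i^{\downarrow} \le \sum_{i=1}^{k} q_i^{\downarrow}
  \ \text{ for all } k,
  \qquad
  \sum_{i} p_i = \sum_{i} q_i = 1,
\]
% where the arrow denotes components sorted in decreasing order.
% A function f is Schur-concave when
\[
  p \prec q \;\Longrightarrow\; f(p) \ge f(q),
\]
% so any Schur-concave f turns the vector-type (majorization) relation
% into a scalar uncertainty relation.
```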


Martin-Martinez E.,Institute Fisica Fundamental | Fuentes I.,University of Nottingham | Mann R.B.,University of Waterloo
Physical Review Letters | Year: 2011

We show that a detector acquires a Berry phase due to its motion in spacetime. The phase is different in the inertial and accelerated cases as a direct consequence of the Unruh effect. We exploit this fact to design a novel method to measure the Unruh effect. Surprisingly, the effect is detectable for accelerations 10⁹ times smaller than in previous proposals, sustained for times of only nanoseconds. © 2011 American Physical Society.


Burkov A.A.,University of Waterloo | Burkov A.A.,University of California at Santa Barbara | Balents L.,University of California at Santa Barbara
Physical Review Letters | Year: 2011

We propose a simple realization of the three-dimensional (3D) Weyl semimetal phase, utilizing a multilayer structure composed of identical thin films of a magnetically doped 3D topological insulator, separated by ordinary-insulator spacer layers. We show that the phase diagram of this system contains a Weyl semimetal phase of the simplest possible kind, with only two Dirac nodes of opposite chirality, separated in momentum space, in its band structure. This Weyl semimetal has a finite anomalous Hall conductivity and chiral edge states and occurs as an intermediate phase between an ordinary insulator and a 3D quantum anomalous Hall insulator. We find that the Weyl semimetal has a nonzero dc conductivity at zero temperature, but with a Drude weight vanishing as T², and is thus an unusual metallic phase, characterized by a finite anomalous Hall conductivity and topologically protected edge states. © 2011 American Physical Society.


Gambetta J.M.,University of Waterloo | Houck A.A.,Princeton University | Blais A.,Universite de Sherbrooke
Physical Review Letters | Year: 2011

We present a superconducting qubit for the circuit quantum electrodynamics architecture that has a tunable qubit-resonator coupling strength g. This coupling can be tuned from zero to values that are comparable with other superconducting qubits. At g=0, the qubit is in a decoherence-free subspace with respect to spontaneous emission induced by the Purcell effect. Furthermore, we show that in this decoherence-free subspace, the state of the qubit can still be measured by either a dispersive shift on the resonance frequency of the resonator or by a cycling-type measurement. © 2011 American Physical Society.


Piani M.,University of Strathclyde | Piani M.,University of Waterloo
Journal of the Optical Society of America B: Optical Physics | Year: 2015

We introduce and study the notion of steerability for channels. This generalizes the notion of steerability of bipartite quantum states. We discuss a key conceptual difference between the case of states and the case of channels: while state steering deals with the notion of "hidden" states, steerability in the channel case is better understood in terms of coherence of channel extensions, rather than in terms of "hidden" channels. This distinction vanishes in the case of states. We further argue how the proposed notion of lack of coherence of channel extensions coincides with the notion of channel extensions realized via local operations and classical communication. We also discuss how the Choi-Jamiołkowski isomorphism allows the direct application of many results about states to the case of channels. We introduce measures for the steerability of channel extensions. © 2015 Optical Society of America.


Block M.S.,University of Kentucky | Melko R.G.,University of Waterloo | Melko R.G.,Perimeter Institute for Theoretical Physics | Kaul R.K.,University of Kentucky
Physical Review Letters | Year: 2013

We present an extensive quantum Monte Carlo study of the Néel to valence-bond solid (VBS) phase transition on rectangular- and honeycomb-lattice SU(N) antiferromagnets in sign-problem-free models. We find that in contrast to the honeycomb lattice and previously studied square-lattice systems, on the rectangular lattice for small N, a first-order Néel-VBS transition is realized. Upon increasing N (N ≥ 4), we observe that the transition becomes continuous, with the same universal exponents as found on the honeycomb and square lattices (studied here for N = 5, 7, 10), providing strong support for a deconfined quantum critical point. Combining our new results with previous numerical and analytical studies, we present a general phase diagram of the stability of CP^(N-1) fixed points with q monopoles. © 2013 American Physical Society.


Michailovich O.,University of Waterloo | Rathi Y.,Harvard University
IEEE Transactions on Image Processing | Year: 2010

Visualization and analysis of the micro-architecture of brain parenchyma by means of magnetic resonance imaging is nowadays believed to be one of the most powerful tools used for the assessment of various cerebral conditions as well as for understanding the intracerebral connectivity. Unfortunately, the conventional diffusion tensor imaging (DTI) used for estimating the local orientations of neural fibers is incapable of performing reliably in situations where a voxel of interest accommodates multiple fiber tracts. In this case, a much more accurate analysis is possible using high angular resolution diffusion imaging (HARDI), which represents local diffusion by its apparent coefficients measured as a discrete function of spatial orientations. In this note, a novel approach to enhancing and modeling the HARDI signals using multiresolution bases of spherical ridgelets is presented. In addition to its desirable properties of being adaptive, sparsifying, and efficiently computable, the proposed modeling leads to analytical computation of the orientation distribution functions associated with the measured diffusion, thereby providing a fast and robust analytical solution for q-ball imaging. © 2010 IEEE.


McGill S.,University of Waterloo
Strength and Conditioning Journal | Year: 2010

This review article recognizes the unique function of the core musculature. In many real life activities, these muscles act to stiffen the torso and function primarily to prevent motion. This is a fundamentally different function from those muscles of the limbs, which create motion. By stiffening the torso, power generated at the hips is transmitted more effectively by the core. Recognizing this uniqueness, implications for exercise program design are discussed using progressions beginning with corrective and therapeutic exercises through stability/mobility, endurance, strength and power stages, to assist the personal trainer with a broad spectrum of clients. Copyright © Lippincott Williams & Wilkins.


Srinivasan S.J.,Princeton University | Hoffman A.J.,Princeton University | Gambetta J.M.,University of Waterloo | Houck A.A.,Princeton University
Physical Review Letters | Year: 2011

We introduce a new type of superconducting charge qubit that has a V-shaped energy spectrum and uses quantum interference to provide independently tunable qubit energy and coherent coupling to a superconducting cavity. Dynamic access to the strong coupling regime is demonstrated by tuning the coupling strength from less than 200 kHz to greater than 40 MHz. This tunable coupling can be used to protect the qubit from cavity-induced relaxation and avoid unwanted qubit-qubit interactions in a multiqubit system. © 2011 American Physical Society.


Paetznick A.,University of Waterloo | Reichardt B.W.,University of Southern California
Physical Review Letters | Year: 2013

Transversal implementations of encoded unitary gates are highly desirable for fault-tolerant quantum computation. Though transversal gates alone cannot be computationally universal, they can be combined with specially distilled resource states in order to achieve universality. We show that "triorthogonal" stabilizer codes, introduced for state distillation by Bravyi and Haah, admit transversal implementation of the controlled-controlled-Z gate. We then construct a universal set of fault-tolerant gates without state distillation by using only transversal controlled-controlled-Z, transversal Hadamard, and fault-tolerant error correction. We also adapt the distillation procedure of Bravyi and Haah to Toffoli gates, improving on existing Toffoli distillation schemes. © 2013 American Physical Society.


Rojas-Fernandez C.H.,University of Waterloo
Research in gerontological nursing | Year: 2010

Geriatric (or late-life) depression is common in older adults, with an incidence that increases dramatically after age 70 to 85, as well as among those admitted to hospitals and those who reside in nursing homes. In this population, depression promotes disability and is associated with worsened outcomes of comorbid chronic medical diseases. Geriatric depression is often undetected or undertreated in primary care settings for various reasons, including the (incorrect) belief that depression is a normal part of aging. Current research suggests that while antidepressant agent use in older adults is improving in quality, room for improvement exists. Improving the pharmacotherapy of depression in older adults requires knowledge and understanding of many clinical factors. The purpose of this review is to discuss salient issues in geriatric depression, with a focus on pharmacotherapeutic and psychotherapeutic interventions. Copyright 2010, SLACK Incorporated.


Jeon S.,University of Waterloo
Proceedings of the 2010 American Control Conference, ACC 2010 | Year: 2010

The major benefit of state estimation based on a kinematic model, such as the kinematic Kalman filter (KKF), is that it is immune to parameter variations and unknown disturbances and thus can provide accurate and robust state estimation regardless of the operating condition. Since it suggests using a combination of low-cost sensors rather than a single costly sensor, the specific characteristics of each sensor may have a major effect on the performance of the state estimator. As an illustrative example, this paper considers the simplest form of the KKF, i.e., velocity estimation combining an encoder with an accelerometer, and addresses two major issues that arise in its implementation: the limited bandwidth of the accelerometer and the deterministic feature (non-whiteness) of the quantization noise of the encoder at slow speeds. It has been shown that each of these characteristics can degrade the performance of the state estimation in different regimes of the operating range. A simple method using a variable Kalman filter gain, based on a simplified parameterization of the Kalman filter gain matrix, is suggested to alleviate these problems. Experimental results are presented to illustrate the main issues and also to validate the effectiveness of the proposed scheme. © 2010 AACC.
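
A minimal sketch of the KKF idea discussed above: the state [position, velocity] is propagated with the accelerometer reading as the process-model input and corrected with the quantized encoder position. The noise levels, quantization step, and constant-gain filter below are illustrative assumptions; the paper's variable-gain scheme is not reproduced.

```python
# Minimal kinematic Kalman filter (KKF) sketch: fuse an encoder position
# measurement with an accelerometer used as the process-model input to
# estimate velocity. All noise figures and the quantization step are
# assumed for illustration.
import numpy as np

dt = 0.001                                  # sample period [s]
F = np.array([[1.0, dt],
              [0.0, 1.0]])                  # state transition, x = [pos, vel]
B = np.array([[0.5 * dt**2],
              [dt]])                        # accelerometer enters as input
H = np.array([[1.0, 0.0]])                  # encoder measures position only

sigma_a = 0.05                              # accelerometer noise std (assumed)
Q = (B @ B.T) * sigma_a**2                  # process noise from accel noise
R = np.array([[1e-6]])                      # encoder noise variance (assumed)

x = np.zeros((2, 1))                        # state estimate
P = np.eye(2) * 1e-3                        # estimate covariance
rng = np.random.default_rng(2)
true_pos, true_vel = 0.0, 0.0

for step in range(5000):
    a_true = np.sin(2 * np.pi * 0.5 * step * dt)      # true acceleration
    true_pos += true_vel * dt + 0.5 * a_true * dt**2
    true_vel += a_true * dt
    a_meas = a_true + rng.normal(0.0, sigma_a)        # noisy accelerometer
    z = np.array([[round(true_pos / 1e-4) * 1e-4]])   # quantized encoder

    x = F @ x + B * a_meas                  # predict with kinematic model
    P = F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)      # Kalman gain
    x = x + K @ (z - H @ x)                 # correct with encoder reading
    P = (np.eye(2) - K @ H) @ P

print("estimated velocity:", x[1, 0], " true velocity:", true_vel)
```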


Gzara F.,University of Waterloo
Operations Research Letters | Year: 2013

We consider the network design problem for hazardous material transportation that is modeled as a bilevel multi-commodity network flow model. We study a combinatorial bilevel formulation of the problem and present results on its solution space. We propose a family of valid cuts and incorporate them within an exact cutting plane algorithm. Numerical testing is performed using real as well as random data sets. The results show that the cutting plane method is faster than other methods in the literature on the same formulation. © 2012 Elsevier B.V. All rights reserved.


He Q.,University of Waterloo
Journal of Systems Science and Complexity | Year: 2012

This paper studies a continuous-time queueing system with multiple types of customers and a first-come-first-served service discipline. Customers arrive according to a semi-Markov arrival process and the service times of individual types of customers have PH-distributions. A GI/M/1-type Markov process for a generalized age process of batches of customers is constructed. The stationary distribution of the GI/M/1-type Markov process is found explicitly and, consequently, the distributions of the age of the batch in service, the total workload in the system, waiting times, and sojourn times of different batches and different types of customers are obtained. The paper gives the matrix representations of the PH-distributions of waiting times and sojourn times. Some results are obtained for the distributions of queue lengths at departure epochs and at an arbitrary time. These results can be used to analyze not only the queue length, but also the composition of the queue. Computational methods are developed for calculating steady-state distributions related to the queue lengths, sojourn times, and waiting times. © 2012 Institute of Systems Science, Academy of Mathematics and Systems Science, CAS and Springer-Verlag Berlin Heidelberg.
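
As background on the PH machinery used throughout (standard facts, not the paper's matrix representations): a PH(α, T) distribution has raw moments E[X^k] = k! α(−T)^(−k) 1, which the short check below evaluates for a hypothetical two-phase example.

```python
# Basic phase-type (PH) distribution fact: for a PH(alpha, T) random
# variable, the k-th raw moment is k! * alpha @ inv(-T)^k @ 1.
# The two-phase example is hypothetical, for illustration only.
import numpy as np
from math import factorial

alpha = np.array([0.6, 0.4])                 # initial phase probabilities
T = np.array([[-3.0, 1.0],
              [0.0, -2.0]])                  # sub-generator of the phases
ones = np.ones(2)

def ph_moment(alpha, T, k):
    """k-th raw moment of PH(alpha, T)."""
    M = np.linalg.inv(-T)
    return factorial(k) * alpha @ np.linalg.matrix_power(M, k) @ ones

print("mean service time:", ph_moment(alpha, T, 1))
print("second moment:   ", ph_moment(alpha, T, 2))
```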


Reichardt B.W.,University of Waterloo
Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms | Year: 2011

We show that any boolean function can be evaluated optimally by a quantum query algorithm that alternates a certain fixed, input-independent reflection with a second reflection that coherently queries the input string. Originally introduced for solving the unstructured search problem, this two-reflections structure is therefore a universal feature of quantum algorithms. Our proof goes via the general adversary bound, a semi-definite program (SDP) that lower-bounds the quantum query complexity of a function. By a quantum algorithm for evaluating span programs, this lower bound is known to be tight up to a sub-logarithmic factor. The extra factor comes from converting a continuous-time query algorithm into a discrete-query algorithm. We give a direct and simplified quantum algorithm based on the dual SDP, with a bounded-error query complexity that matches the general adversary bound. Therefore, the general adversary lower bound is tight; it is in fact an SDP for quantum query complexity. This implies that the quantum query complexity of the composition f ∘ (g, ..., g) of two boolean functions f and g matches the product of the query complexities of f and g, without a logarithmic factor for error reduction. It efficiently characterizes the quantum query complexity of a read-once formula over any finite gate set. It further shows that span programs are equivalent to quantum query algorithms.


Reichardt B.W.,University of Waterloo
Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms | Year: 2011

We give an O(√n log n)-query quantum algorithm for evaluating size-n AND-OR formulas. Its running time is poly-logarithmically greater after efficient preprocessing. Unlike previous approaches, the algorithm is based on a quantum walk on a graph that is not a tree. Instead, the algorithm is based on a hybrid of direct-sum span program composition, which generates tree-like graphs, and a novel tensor-product span program composition method, which generates graphs with vertices corresponding to minimal zero-certificates.


Belovs A.,University of Latvia | Rosmanis A.,University of Waterloo
Proceedings of the Annual IEEE Conference on Computational Complexity | Year: 2013

We introduce a notion of the quantum query complexity of a certificate structure. This is a formalisation of a well-known observation that many quantum query algorithms only require the knowledge of the disposition of possible certificates in the input string, not the precise values therein. Next, we derive a dual formulation of the complexity of a non-adaptive learning graph, and use it to show that non-adaptive learning graphs are tight for all certificate structures. By this, we mean that there exists a function possessing the certificate structure and such that a learning graph gives an optimal quantum query algorithm for it. For a special case of certificate structures generated by certificates of bounded size, we construct a relatively general class of functions having this property. The construction is based on orthogonal arrays, and generalizes the quantum query lower bound for the k-sum problem derived recently. Finally, we use these results to show that the best known learning graph for the triangle problem is almost optimal in these settings. This also gives a quantum query lower bound for the triangle-sum problem. © 2013 IEEE.


Coles P.J.,National University of Singapore | Piani M.,University of Waterloo
Physical Review A - Atomic, Molecular, and Optical Physics | Year: 2014

The uncertainty principle can be expressed in entropic terms, also taking into account the role of entanglement in reducing uncertainty. The information exclusion principle bounds instead the correlations that can exist between the outcomes of incompatible measurements on one physical system, and a second reference system. We provide a more stringent formulation of both the uncertainty principle and the information exclusion principle, with direct applications for, e.g., the security analysis of quantum key distribution, entanglement estimation, and quantum communication. We also highlight a fundamental distinction between the complementarity of observables in terms of uncertainty and in terms of information. © 2014 American Physical Society.


Henderson H.A.,University of Waterloo | Pine D.S.,National Health Research Institute | Fox N.A.,University of Maryland University College
Neuropsychopharmacology | Year: 2015

Behavioral inhibition (BI) is an early-appearing temperament characterized by strong reactions to novelty. BI shows a good deal of stability over childhood and significantly increases the risk for later diagnosis of social anxiety disorder (SAD). Despite these general patterns, many children with high BI do not go on to develop clinical, or even subclinical, anxiety problems. Therefore, understanding the cognitive and neural bases of individual differences in developmental risk and resilience is of great importance. The present review is focused on the relation of BI to two types of information processing: automatic (novelty detection, attention biases to threat, and incentive processing) and controlled (attention shifting and inhibitory control). We propose three hypothetical models (Top-Down Model of Control; Risk Potentiation Model of Control; and Overgeneralized Control Model) linking these processes to variability in developmental outcomes for BI children. We argue that early BI is associated with an early bias to quickly and preferentially process information associated with motivationally salient cues. When this bias is strong and stable across development, the risk for SAD is increased. Later in development, children with a history of BI tend to display normative levels of performance on controlled attention tasks, but they demonstrate exaggerated neural responses in order to do so, which may further potentiate risk for anxiety-related problems. We conclude by discussing the reviewed studies with reference to the hypothetical models and make suggestions regarding future research and implications for treatment.


Xiang L.,Nanyang Technological University | Luo J.,Nanyang Technological University | Rosenberg C.,University of Waterloo
IEEE/ACM Transactions on Networking | Year: 2013

We focus on wireless sensor networks (WSNs) that perform data collection with the objective of obtaining the whole dataset at the sink (as opposed to a function of the dataset). In this case, energy-efficient data collection requires the use of data aggregation. Whereas many data aggregation schemes have been investigated, they either compromise the fidelity of the recovered data or require complicated in-network compressions. In this paper, we propose a novel data aggregation scheme that exploits compressed sensing (CS) to achieve both recovery fidelity and energy efficiency in WSNs with arbitrary topology. We make use of diffusion wavelets to find a sparse basis that characterizes the spatial (and temporal) correlations well on arbitrary WSNs, which enables straightforward CS-based data aggregation as well as high-fidelity data recovery at the sink. Based on this scheme, we investigate the minimum-energy compressed data aggregation problem. We first prove its NP-completeness, and then propose a mixed integer programming formulation along with a greedy heuristic to solve it. We evaluate our scheme by extensive simulations on both real datasets and synthetic datasets. We demonstrate that our compressed data aggregation scheme is capable of delivering data to the sink with high fidelity while achieving significant energy saving. © 2014 IEEE.


Mozaffari-Kermani M.,Princeton University | Azarderakhsh R.,University of Waterloo
IEEE Transactions on Industrial Electronics | Year: 2013

Lightweight block ciphers are essential for providing low-cost confidentiality to sensitive constrained applications. Nonetheless, this confidentiality does not guarantee their reliability in the presence of natural and malicious faults. In this paper, fault diagnosis schemes for the lightweight internationally standardized block cipher CLEFIA are proposed. This symmetric-key cipher is compatible with yet lighter in hardware than the Advanced Encryption Standard and enables the implementation of cryptographic functionality with low complexity and power consumption. To the best of the authors' knowledge, there has been no fault diagnosis scheme presented in the literature for the CLEFIA to date. In addition to providing fault diagnosis approaches for the linear blocks in the encryption and the decryption of the CLEFIA, error detection approaches are presented for the nonlinear S-boxes, applicable to their composite-field implementations as well as their lookup table realizations. Through fault-injection simulations, the proposed schemes are benchmarked, and it is shown that they achieve error coverage of close to 100%. Finally, both application-specific integrated circuit and field-programmable gate array implementations of the proposed error detection structures are presented to assess their efficiency and overhead. The proposed fault diagnosis architectures make the implementations of the International Organization for Standardization/International Electrotechnical Commission-standardized CLEFIA more reliable. © 1982-2012 IEEE.


Al-Dharrab S.,University of Waterloo | Uysal M.,Ozyegin University | Duman T.,Bilkent University
IEEE Communications Magazine | Year: 2013

This article presents a contemporary overview of underwater acoustic communication (UWAC) and investigates physical layer aspects on cooperative transmission techniques for future UWAC systems. Taking advantage of the broadcast nature of wireless transmission, cooperative communication realizes spatial diversity advantages in a distributed manner. The current literature on cooperative communication focuses on terrestrial wireless systems at radio frequencies with sporadic results on cooperative UWAC. In this article, we summarize initial results on cooperative UWAC and investigate the performance of a multicarrier cooperative UWAC considering the inherent unique characteristics of the underwater channel. Our simulation results demonstrate the superiority of cooperative UWAC systems over their point-to-point counterparts. © 1979-2012 IEEE.


Mitran P.,University of Waterloo
IEEE Transactions on Information Theory | Year: 2015

In this paper, we consider a new definition of typicality based on the weak* topology that is applicable to Polish alphabets (which include ℝⁿ). This notion is a generalization of strong typicality in the sense that it degenerates to strong typicality in the finite-alphabet case, and can also be applied to mixed and continuous distributions. Furthermore, it is strong enough to prove a Markov lemma, and thus can be used to directly prove a more general class of results than entropy (or weak) typicality. We provide two example applications of this technique. First, using the Markov lemma, we directly prove a coding result for Gel'fand-Pinsker channels with an average input constraint for a large class of alphabets and channels, without first proving a finite-alphabet result and then resorting to delicate quantization arguments. This class of alphabets includes, for example, real and complex inputs subject to a peak amplitude restriction. While this large class does not directly allow for Gaussian distributions with average power constraints, it is shown to be straightforward to recover this case by considering a sequence of truncated Gaussian distributions. As a second example, we consider a problem of coordinated actions (i.e., empirical distributions) for a two-node network, where we derive necessary and sufficient conditions for a given desired coordination. © 2015 IEEE.
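
For contrast, the standard finite-alphabet notion that this definition generalizes:

```latex
% Background definition (standard, not specific to the paper): the
% entropy-typical (weakly typical) set for a finite-alphabet source
% p(x) with entropy H(X) is
\[
  A_\epsilon^{(n)} =
  \Big\{ x^n :
    \Big| -\tfrac{1}{n}\log p(x^n) - H(X) \Big| \le \epsilon
  \Big\},
\]
% whereas strong typicality further requires every empirical symbol
% frequency in x^n to be within \epsilon of p(x).
```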


Lefort E.C.,Dalhousie University | Blay J.,University of Waterloo
Molecular Nutrition and Food Research | Year: 2013

Apigenin (4′,5,7-trihydroxyflavone, 5,7-dihydroxy-2-(4-hydroxyphenyl)-4H-1-benzopyran-4-one) is a flavonoid found in many fruits, vegetables, and herbs, the most abundant sources being the leafy herb parsley and dried flowers of chamomile. Present in dietary sources as a glycoside, it is cleaved in the gastrointestinal lumen to be absorbed and distributed as apigenin itself. For this reason, the epithelium of the gastrointestinal tract is exposed to higher concentrations of apigenin than tissues at other locations. This would also be true for epithelial cancers of the gastrointestinal tract. We consider the evidence for actions of apigenin that might hinder the ability of gastrointestinal cancers to progress and spread. Apigenin has been shown to inhibit cell growth, sensitize cancer cells to elimination by apoptosis, and hinder the development of blood vessels to serve the growing tumor. It also has actions that alter the relationship of the cancer cells with their microenvironment. Apigenin is able to reduce cancer cell glucose uptake, inhibit remodeling of the extracellular matrix, inhibit cell adhesion molecules that participate in cancer progression, and oppose chemokine signaling pathways that direct the course of metastasis into other locations. As such, apigenin may provide some additional benefit beyond existing drugs in slowing the emergence of metastatic disease. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.


Li Y.,Institute of Materials Research and Engineering of Singapore | Li Y.,University of Waterloo | Singh S.P.,Institute of Materials Research and Engineering of Singapore | Sonar P.,Institute of Materials Research and Engineering of Singapore
Advanced Materials | Year: 2010

A copolymer comprising 1,4-diketopyrrolo[3,4-c]pyrrole (DPP) and thieno[3,2-b]thiophene moieties, PDBT-co-TT, shows a high hole mobility of up to 0.94 cm² V⁻¹ s⁻¹ in organic thin-film transistors. The strong intermolecular interactions originating from π-π stacking and donor-acceptor interaction lead to the formation of interconnected polymer networks having an ordered lamellar structure, which establish highly efficient pathways for charge carrier transport. © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.


Guo Q.,Hubei University | Reardon E.J.,University of Waterloo
Applied Clay Science | Year: 2012

Fluoride removal from water by a mechanochemically synthesized anion clay (meixnerite) and its calcination product was studied at initial fluoride:meixnerite molar ratios (F_I:meix) of 0.1 to 2.0, the theoretical fluoride uptake limit for meixnerite. The fluoride removal efficiency of calcined meixnerite was higher than that of uncalcined meixnerite at the same F_I:meix, and the difference increased as the initial fluoride concentration increased. For the sorption runs performed at F_I:meix = 2.0, 29% and 52% of the uptake capacity were attained for the uncalcined and calcined meixnerites, respectively. Analysis of sorption reaction rate data indicates that fluoride diffusion from solution to intraparticle active sites and its chemical sorption on active sites are important mechanisms in the uptake for both meixnerites, and the intraparticle fluoride diffusion in uncalcined meixnerite was slower than in calcined meixnerite. Moreover, XRD analyses indicate that secondary fluoride-containing phases (nordstrandite and sellaite) precipitated at high initial fluoride concentrations. When fluoride precipitates did not form at lower F_I:meix (<0.6), the higher fluoride uptake by calcined meixnerite is promoted by the greater availability of F⁻ ions to the meixnerite interlayers, since the interlayers were generated during reaction of the F-containing solution with the calcined material; thus some F⁻ did not have to diffuse into the interlayers to replace existing OH⁻ ions as it did for the uncalcined meixnerite. At F_I:meix ≥ 0.6, precipitation of F-bearing nordstrandite also contributes to calcined meixnerite's improved ability to remove fluoride. © 2011 Elsevier B.V.


Garnerone S.,University of Waterloo | Garnerone S.,University of Southern California | Zanardi P.,University of Southern California | Lidar D.A.,University of Southern California
Physical Review Letters | Year: 2012

We propose an adiabatic quantum algorithm for generating a quantum pure-state encoding of the PageRank vector, the most widely used tool in ranking the relative importance of internet pages. We present extensive numerical simulations which provide evidence that this algorithm can prepare the quantum PageRank state in a time which, on average, scales polylogarithmically in the number of web pages. We argue that the main topological feature of the underlying web graph allowing for such a scaling is the out-degree distribution. The top-ranked log(n) entries of the quantum PageRank state can then be estimated with a polynomial quantum speed-up. Moreover, the quantum PageRank state can be used in "q-sampling" protocols for testing properties of distributions, which require exponentially fewer measurements than all classical schemes designed for the same task. This can be used to decide whether to run a classical update of the PageRank. © 2012 American Physical Society.
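
For context, the classical PageRank vector that the quantum state encodes is the stationary distribution of the Google matrix G = dS + (1 − d)J/n; the sketch below computes it by power iteration on a toy four-page graph (the graph and damping factor are illustrative).

```python
# Classical PageRank by power iteration on the Google matrix
# G = d * S + (1 - d) * J / n, where S is the row-stochastic link matrix,
# J the all-ones matrix, and d the damping factor. Toy graph only.
import numpy as np

# Adjacency: adj[i][j] = 1 if page i links to page j (no dangling pages).
adj = np.array([[0, 1, 1, 0],
                [0, 0, 1, 0],
                [1, 0, 0, 1],
                [0, 0, 1, 0]], dtype=float)

n = adj.shape[0]
out_deg = adj.sum(axis=1, keepdims=True)
S = adj / out_deg                       # row-stochastic link matrix
d = 0.85                                # damping factor
G = d * S + (1 - d) / n                 # Google matrix (rows sum to 1)

rank = np.full(n, 1.0 / n)
for _ in range(100):
    rank = rank @ G                     # power iteration on the left

print("PageRank vector:", rank / rank.sum())
```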


Rose D.R.,University of Waterloo
Current Opinion in Structural Biology | Year: 2012

The mature N-glycan on human glycoproteins is built up by the activity and regulation of enzymes in the endoplasmic reticulum and Golgi apparatus. A key enzyme in the maturation of N-glycans is the first glycoside hydrolase in the Golgi pathway, Golgi α-mannosidase II (GMII). This enzyme has the unusual ability to cleave two different glycosidic linkages in its catalytic center. As such, it removes two terminal mannoses following the activity of N-acetyl-glucosaminyl transferase I, and is a critical step in the formation of mature glycans. Structural analyses of the Drosophila homologue of GMII have led to insights into its unusual mechanism and substrate specificity. In addition, the results build the foundation for the development of specific, clinically relevant inhibitors. © 2012 Elsevier Ltd.


Weber O.,University of Waterloo
Business Strategy and the Environment | Year: 2012

How do Canadian banks integrate environmental risks into corporate lending, and where do they stand compared with their global peers? In this paper we report a mixed-method analysis of the integration of environmental risks into credit management. The qualitative and quantitative analyses suggest that all analyzed Canadian commercial banks, credit unions, and Export Development Canada manage environmental risks in credit management to avoid financial risks. Some of the institutions even connect environmental and sustainability issues with their general business strategies. Compared with other countries, Canadian banks are best in class, as all six Canadian commercial banks, comprising over 90 percent of Canadian assets, systematically examine environmental risks for credits, loans, and mortgages. We conclude that Canadian banks are proactive regarding environmental examinations of loans and that there is a need for more accountancy-related reporting on environmental risk management in financial institutions. Further research is needed to be able to calculate the costs and benefits of integrating environmental and sustainability issues into credit risk management. © 2011 John Wiley & Sons, Ltd and ERP Environment.


Forrest J.A.,University of Waterloo
Journal of Chemical Physics | Year: 2013

We consider the ability of recent measurements of the size of a liquid-like mobile surface region in glasses to provide direct information on the length scale of enhanced surface mobility. While these quantities are strongly related, there are important distinctions that limit the ability of such measurements to quantify the actual length over which properties change from surface-like to bulk-like. In particular, we show that for temperatures near the bulk glass transition, measurements of a liquid-like mobile layer may have very limited predictive power when it comes to determining the temperature-dependent length scale of enhanced surface mobility near the glass transition temperature. This places important limitations on the ability of measurements of such enhanced surface dynamics to contribute to the discussion of the length scale for dynamical correlation in glassy materials. © 2013 AIP Publishing LLC.


Wang Z.,University of Waterloo
IEEE Signal Processing Magazine | Year: 2011

The interest in objective image quality assessment (IQA) has been growing at an accelerated pace over the past decade. The latest progress on developing automatic IQA methods that can predict subjective quality of visual signals is exhilarating. For example, a handful of objective IQA measures have been shown to significantly and consistently outperform the widely adopted mean squared error (MSE) and peak signal-to-noise-ratio (PSNR) in terms of correlations with subjective quality evaluations [1]. © 2011 IEEE.


Clapp J.,University of Waterloo
Sustainability Science | Year: 2015

This paper examines the relationship between the development of the dominant industrial food system and its associated global economic drivers and the environmental sustainability of agricultural landscapes. It makes the case that the growth of the global industrial food system has encouraged increasingly complex forms of “distance” that separate food both geographically and mentally from the landscapes on which it was produced. This separation between food and its originating landscape poses challenges for the ability of more localized agricultural sustainability initiatives to address some of the broader problems in the global food system. In particular, distance enables certain powerful actors to externalize ecological and social costs, which in turn makes it difficult to link specific global actors to particular biophysical and social impacts felt on local agricultural landscapes. Feedback mechanisms that normally would provide pressure for improved agricultural sustainability are weak because there is a lack of clarity regarding responsibility for outcomes. The paper provides a brief illustration of these dynamics with a closer look at increased financialization in the food system. It shows that new forms of distancing are encouraged by the growing significance of financial markets in global agrifood value chains. This dynamic has a substantial impact on food system outcomes and ultimately complicates efforts to scale up small-scale local agricultural models that are more sustainable. © 2014, The Author(s).


Eliasmith C.,University of Waterloo
Topics in Cognitive Science | Year: 2012

The complex systems approach (CSA) to characterizing cognitive function is purported by its proponents to underlie a conceptual and methodological revolution. I examine one central claim from each of the contributed papers and argue that the provided examples do not justify calls for radical change in how we do cognitive science. Instead, I note how currently available approaches in "standard" cognitive science are adequate (or even more appropriate) for understanding the CSA-provided examples. © 2011 Cognitive Science Society, Inc.


Outcome quality indicators are rarely used to evaluate mental health services because most jurisdictions lack clinical data systems to construct indicators in a meaningful way across mental health providers. As a result, important information about the effectiveness of health services remains unknown. This study examined the feasibility of developing mental health quality indicators (MHQIs) using the Resident Assessment Instrument - Mental Health (RAI-MH), a clinical assessment system mandated for use in Ontario, Canada as well as many other jurisdictions internationally. Retrospective analyses were performed on two datasets containing RAI-MH assessments for 1,056 patients from 7 facilities and 34,788 patients from 70 facilities in Ontario, Canada. The RAI-MH was completed by clinical staff of each facility at admission and follow-up, typically at discharge. The RAI-MH includes a breadth of information on symptoms, functioning, socio-demographics, and service utilization. Potential MHQIs were derived by examining the empirical patterns of improvement and incidence in depressive symptoms and cognitive performance across facilities in both sets of data. A prevalence indicator was also constructed to compare restraint use. Logistic regression was used to evaluate risk adjustment of MHQIs using patient case-mix index scores derived from the RAI-MH System for Classification of Inpatient Psychiatry. Subscales from the RAI-MH, the Depression Severity Index (DSI) and Cognitive Performance Scale (CPS), were found to have good reliability and strong convergent validity. Unadjusted rates of five MHQIs based on the DSI, CPS, and restraints showed substantial variation among facilities in both sets of data. For instance, there was a 29.3% difference between the first and third quartile facility rates of improvement in cognitive performance. The case-mix index score was significantly related to MHQIs for cognitive performance and restraints but had a relatively small impact on adjusted rates/prevalence. The RAI-MH is a feasible assessment system for deriving MHQIs. Given the breadth of clinical content on the RAI-MH there is an opportunity to expand the number of MHQIs beyond indicators of depression, cognitive performance, and restraints. Further research is needed to improve risk adjustment of the MHQIs for their use in mental health services report card and benchmarking activities.


Bayesteh A.,Huawei | Khandani A.K.,University of Waterloo
IEEE Transactions on Information Theory | Year: 2012

In this paper, we consider a downlink communication system in which a base station (BS) equipped with M antennas and power constraint P communicates with N users, each equipped with K receive antennas. It is assumed that the users have perfect channel state information (CSI) of their own channels, while the BS only knows the partial CSI provided by the receivers via a feedback channel. We study the fundamental limits on the amount of feedback required at the BS to achieve the sum-rate capacity of the system (when the BS has perfect CSI for all users) in the asymptotic case of N → ∞, considering various signal-to-noise ratio (SNR) regimes. The main results of this paper can be expressed as follows. 1) In the fixed-SNR regime (where the SNR does not scale with N) and the low-SNR regime (where the SNR is much smaller than 1/ln(N)), to achieve the (1 − ε)-portion of the sum-rate capacity, the total amount of feedback should scale at least with ln(ε⁻¹). In the fixed-SNR regime, to reduce the gap between the achievable sum rate and the sum-rate capacity of the system (which is defined as the sum-rate gap) to zero, the amount of feedback should scale at least logarithmically with the sum-rate capacity, which is achievable by using the random beam-forming (RBF) scheme proposed by Sharif and Hassibi. In the low-SNR regime, we propose an opportunistic beam-forming (OBF) scheme, which is shown to be asymptotically feedback optimal. 2) In the high-SNR regime (where the SNR grows to infinity as N → ∞), the total amount of feedback depends on the number of receive antennas. In particular, to reduce the sum-rate gap to zero in the case of K < M, the amount of feedback in the SNR regime of ln(P)/ln(N) > 1/(M−1) should scale at least logarithmically with the SNR. In the case of K ≥ M, the amount of feedback does not need to scale with the SNR. Moreover, we show that RBF is asymptotically feedback optimal in the high-SNR regime. © 2012 IEEE.


Ward O.P.,University of Waterloo
Biotechnology Advances | Year: 2012

The initial focus of recombinant protein production by filamentous fungi, which exploited the extraordinary extracellular enzyme synthesis and secretion machinery of industrial strains, including Aspergillus, Trichoderma, Penicillium, and Rhizopus species, was to produce single recombinant protein products. An early recognized disadvantage of filamentous fungi as hosts of recombinant proteins was their common ability to produce homologous proteases which could degrade the heterologous protein product, and strategies to prevent proteolysis have met with some limited success. It was also recognized that the protein glycosylation patterns in filamentous fungi and in mammals are quite different, such that filamentous fungi are likely not the most suitable microbial hosts for production of recombinant human glycoproteins for therapeutic use. By combining the experience gained from production of single recombinant proteins with new scientific information being generated through genomics and proteomics research, biotechnologists are now poised to extend the biomanufacturing capabilities of recombinant filamentous fungi by enabling them to express genes encoding multiple proteins, including, for example, new biosynthetic pathways for production of new primary or secondary metabolites. It is recognized that filamentous fungi, most species of which have not yet been isolated, represent an enormously diverse source of novel biosynthetic pathways, and that the natural fungal host harboring a valuable biosynthesis pathway may often not be the most suitable organism for biomanufacturing purposes. Hence it is expected that substantial effort will be directed to transforming other fungal hosts, non-fungal microbial hosts, and indeed non-microbial hosts to express some of these novel biosynthetic pathways. But future applications of recombinant protein expression will not be confined to biomanufacturing. Opportunities to exploit recombinant technology to unravel the causes of the deleterious impacts of fungi, for example as human, mammalian, and plant pathogens, and then to bring forward solutions, are expected to represent a very important future focus of fungal recombinant protein technology. © 2011.


Scott M.,University of Waterloo | Klumpp S.,Max Planck Institute of Colloids and Interfaces | Mateescu E.M.,University of California at San Diego | Hwa T.,University of California at San Diego | Hwa T.,ETH Zurich
Molecular Systems Biology | Year: 2014

Bacteria must constantly adapt their growth to changes in nutrient availability; yet despite large-scale changes in protein expression associated with sensing, adaptation, and processing different environmental nutrients, simple growth laws connect the ribosome abundance and the growth rate. Here, we investigate the origin of these growth laws by analyzing the features of ribosomal regulation that coordinate proteome-wide expression changes with cell growth in a variety of nutrient conditions in the model organism Escherichia coli. We identify supply-driven feedforward activation of ribosomal protein synthesis as the key regulatory motif maximizing amino acid flux, and autonomously guiding a cell to achieve optimal growth in different environments. The growth laws emerge naturally from the robust regulatory strategy underlying growth rate control, irrespective of the details of the molecular implementation. The study highlights the interplay between phenomenological modeling and molecular mechanisms in uncovering fundamental operating constraints, with implications for endogenous and synthetic design of microorganisms. Building upon empirical "growth laws", this Perspective discusses mechanisms that integrate protein synthesis with amino acid flux and metabolic control to guarantee optimal growth irrespective of the nutrient environment. © 2014 The Authors. Published under the terms of the CC BY 4.0 license.


Hammond D.,University of Waterloo
Nicotine and Tobacco Research | Year: 2012

Introduction: The Family Smoking Prevention and Tobacco Control Act (the "Act"), enacted in June 2009, gave the U.S. Food and Drug Administration authority to regulate tobacco products. The current paper reviews the provisions for packaging and labeling, including the existing evidence and research priorities. Methods: Narrative review using electronic literature search of published and unpublished sources in 3 primary areas: health warnings, constituent labeling, and prohibitions on the promotional elements of packaging. Results: The Act requires 9 pictorial health warnings covering half of cigarette packages and 4 text warnings covering 30% of smokeless tobacco packages. The Act also prohibits potentially misleading information on packaging, including the terms "light" and "mild," and provides a mandate to require disclosure of chemical constituents on packages. Many of the specific regulatory provisions are based on the extent to which they promote "greater public understanding of the risks of tobacco." As a result, research on consumer perceptions has the potential to shape the design and renewal of health warnings and to determine what, if any, information on product constituents should appear on packages. Research on consumer perceptions of existing and novel tobacco products will also be critical to help identify potentially misleading information that should be restricted under the Act. Conclusion: Packaging and labeling regulations required under the Act will bring the United States in line with international standards. There is an immediate need for research to evaluate these measures to guide future regulatory action. © The Author 2011. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco. All rights reserved.


Howarth S.J.,University of Waterloo
Computer methods in biomechanics and biomedical engineering | Year: 2010

Marker obstruction during human movement analyses requires interpolation to reconstruct missing kinematic data. This investigation quantifies errors associated with three interpolation techniques and varying interpolated durations. Right ulnar styloid kinematics from 13 participants performing manual wheelchair ramp ascent were reconstructed using linear, cubic spline and local coordinate system (LCS) interpolation over gaps spanning 11-90% of one propulsive cycle. Elbow angles (flexion/extension and pronation/supination) were calculated using real and reconstructed kinematics. Reconstructed kinematics produced maximum elbow flexion/extension errors of 37.1 (linear), 23.4 (spline) and 9.3 (LCS) degrees. Reconstruction errors are unavoidable [minimum errors of 6.7 mm (LCS), 0.29 mm (spline) and 0.42 mm (linear)], emphasising that careful motion capture system setup is needed to minimise the amount of data requiring interpolation. For the observed movement, LCS-based interpolation (average error of 14.3 mm; correlation of 0.976 for elbow flexion/extension) was most suitable for reconstructing durations longer than 200 ms. Spline interpolation was superior for shorter durations.
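
To make the comparison above concrete, here is a minimal Python sketch that reconstructs a simulated gap in a one-dimensional marker trajectory with the linear and cubic-spline techniques; the LCS method needs additional rigid-body markers and is omitted, and the trajectory, gap and sampling below are invented for illustration rather than taken from the paper's wheelchair data.

    # Sketch: gap reconstruction with linear vs. cubic-spline interpolation
    # (synthetic 1-D marker trajectory; illustrative only).
    import numpy as np
    from scipy.interpolate import CubicSpline

    t = np.linspace(0.0, 1.0, 101)                 # one simulated cycle (s)
    x = 0.3 * np.sin(2 * np.pi * t) + 0.05 * t     # "true" marker coordinate (m)

    gap = (t > 0.4) & (t < 0.7)                    # simulated marker obstruction
    t_obs, x_obs = t[~gap], x[~gap]

    x_lin = np.interp(t[gap], t_obs, x_obs)        # linear reconstruction
    x_spl = CubicSpline(t_obs, x_obs)(t[gap])      # cubic-spline reconstruction

    for name, rec in (("linear", x_lin), ("spline", x_spl)):
        err = 1000 * np.max(np.abs(rec - x[gap]))  # max error (mm)
        print(f"{name}: max reconstruction error = {err:.2f} mm")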


Striemer C.L.,University of Western Ontario | Danckert J.A.,University of Waterloo
Trends in Cognitive Sciences | Year: 2010

Many studies have demonstrated that prism adaptation can reduce several symptoms of visual neglect: a disorder in which patients fail to respond to information in contralesional space. The dominant framework to explain these effects proposes that prisms influence higher order visuospatial processes by acting on brain circuits that control spatial attention and perception. However, studies that have directly examined the influence of prisms on perceptual biases inherent to neglect have revealed very few beneficial effects. We propose an alternative explanation whereby many of the beneficial effects of prisms arise via the influence of adaptation on circuits in the dorsal visual stream controlling attention and visuomotor behaviors. We further argue that prisms have little influence on the pervasive perceptual biases that characterize neglect. © 2010.


Kelly A.C.,University of Waterloo | Carter J.C.,Memorial University of Newfoundland
Psychology and Psychotherapy: Theory, Research and Practice | Year: 2015

Objectives The present pilot study sought to compare a compassion-focused therapy (CFT)-based self-help intervention for binge eating disorder (BED) to a behaviourally based intervention. Design Forty-one individuals with BED were randomly assigned to 3 weeks of food planning plus self-compassion exercises; food planning plus behavioural strategies; or a wait-list control condition. Methods Participants completed weekly measures of binge eating and self-compassion; pre- and post-intervention measures of eating disorder pathology and depressive symptoms; and a baseline measure assessing fear of self-compassion. Results Results showed that: (1) perceived credibility, expectancy, and compliance did not differ between the two interventions; (2) both interventions reduced weekly binge days more than the control condition; (3) the self-compassion intervention reduced global eating disorder pathology, eating concerns, and weight concerns more than the other conditions; (4) the self-compassion intervention increased self-compassion more than the other conditions; and (5) participants low in fear of self-compassion derived significantly more benefits from the self-compassion intervention than those high in fear of self-compassion. Conclusions Findings offer preliminary support for the usefulness of CFT-based interventions for BED sufferers. Results also suggest that for individuals to benefit from self-compassion training, assessing and lowering fear of self-compassion will be crucial. © 2014 The British Psychological Society.


Lawless J.F.,University of Waterloo
Statistics in Medicine | Year: 2013

Life history studies collect information on events and other outcomes during people's lifetimes. For example, these may be related to childhood development, education, fertility, health, or employment. Such longitudinal studies have constraints on the selection of study members, the duration and frequency of follow-up, and the accuracy and completeness of information obtained. These constraints, along with factors associated with the definition and measurement of certain outcomes, affect our ability to understand, model, and analyze life history processes. My objective here is to discuss and illustrate some issues associated with the design and analysis of life history studies. © 2013 John Wiley & Sons, Ltd.


Czoli C.D.,University of Waterloo
Preventing chronic disease | Year: 2013

Although cigarette use among Canadian youth has decreased significantly in recent years, alternative forms of tobacco use are becoming increasingly popular. Surveillance of youth tobacco use can help inform prevention programs by monitoring trends in risk behaviors. We examined the prevalence of bidi and hookah use and factors associated with their use among Canadian youth by using data from the 2006-2007 and 2010-2011 waves of the Youth Smoking Survey (YSS). We analyzed YSS data from 28,416 students (2006-2007) and 31,396 students (2010-2011) in grades 9 through 12 to examine prevalence of bidi and hookah use. We conducted multivariate logistic regression analyses of 2010-2011 YSS data to examine factors associated with bidi and hookah use. From 2006 through 2010, prevalence of hookah use among Canadian youth increased by 6% (P = .02). Marijuana use emerged as a consistent predictor of bidi and hookah use. Males (odds ratio [OR], 1.5), youth of black, Latin, or other descent (OR, 15.6), and youth of Asian descent (OR, 14.9) were more likely to use bidis; the corresponding ORs for hookah use were 1.3, 2.4, and 1.5. Current cigarette smokers were more likely than nonsmokers to be current users of bidis (OR, 6.7) and hookahs (OR, 3.0), and occasional and frequent alcohol drinkers were more likely than nondrinkers to be current hookah users (OR, 2.8 and 3.6, respectively). Although bidi use has not changed significantly among Canadian youth, the increase in hookah use warrants attention. Understanding the factors associated with use of bidis and hookahs can inform the development of tobacco use prevention programs to address emerging at-risk youth populations.


Faizal M.,University of Waterloo | Majumder B.,Indian Institute of Technology Gandhinagar
Annals of Physics | Year: 2015

In this paper, we incorporate the generalized uncertainty principle into field theories with Lifshitz scaling. We first construct both bosonic and fermionic theories with Lifshitz scaling based on the generalized uncertainty principle, and then incorporate the generalized uncertainty principle into a non-abelian gauge theory with Lifshitz scaling. We observe that even though the action for this theory is non-local, it is invariant under local gauge transformations. We also perform the stochastic quantization of this Lifshitz fermionic theory based on the generalized uncertainty principle. © 2015 Elsevier Inc.


Brzozowski J.,University of Waterloo
International Journal of Foundations of Computer Science | Year: 2013

Sequences (Ln | n ≥ k), called streams, of regular languages Ln are considered, where k is some small positive integer, n is the state complexity of Ln, and the languages in a stream differ only in the parameter n but otherwise have the same properties. The following measures of complexity are proposed for any stream: (1) the state complexity n of Ln, that is, the number of left quotients of Ln (used as a reference); (2) the state complexities of the left quotients of Ln; (3) the number of atoms of Ln; (4) the state complexities of the atoms of Ln; (5) the size of the syntactic semigroup of Ln; and the state complexities of the following operations: (6) the reverse of Ln; (7) the star of Ln; (8) union, intersection, difference and symmetric difference of Lm and Ln; and (9) the concatenation of Lm and Ln. A stream that has the highest possible complexity with respect to these measures is then viewed as a most complex stream. The language stream (Un(a, b, c) | n ≥ 3) is defined by the deterministic finite automaton with state set {0, 1, ..., n-1}, initial state 0, set {n-1} of final states, and input alphabet {a, b, c}, where a performs a cyclic permutation of the n states, b transposes states 0 and 1, and c maps state n-1 to state 0. This stream achieves the highest possible complexities with the exception of boolean operations where m = n. In the latter case, one can use Un(a, b, c) and Un(b, a, c), where the roles of a and b are interchanged in the second language. In this sense, Un(a, b, c) is a universal witness. This witness and its extensions also apply to a large number of combined regular operations. © 2013 World Scientific Publishing Company.
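
Because the abstract specifies Un(a, b, c) completely, its transition table can be written down directly. The following Python sketch does exactly that; the function name and the test word are ours, not the paper's.

    # Sketch: the universal witness U_n(a, b, c) as defined in the abstract
    # (states 0..n-1, initial state 0, final state n-1).
    def universal_witness(n):
        assert n >= 3
        delta = {}
        for q in range(n):
            delta[(q, 'a')] = (q + 1) % n             # a: cyclic permutation
            delta[(q, 'b')] = {0: 1, 1: 0}.get(q, q)  # b: transpose 0 and 1
            delta[(q, 'c')] = 0 if q == n - 1 else q  # c: map n-1 to 0
        return delta, 0, {n - 1}

    delta, q, finals = universal_witness(4)
    for symbol in "aabca":                            # run one input word
        q = delta[(q, symbol)]
    print("accepted" if q in finals else "rejected")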


Lewis G.M.,University of Waterloo
Energy Policy | Year: 2010

There is an increasing interest in adding renewables such as wind to electricity generation portfolios in larger amounts, as one response to concern about atmospheric carbon emissions from our energy system and the resulting climate change. Most policies with the aim of promoting renewables (e.g., RPS, FIT) do not explicitly address siting issues, which for wind energy are currently approached as the intersection of wind resource, land control, and transmission factors. This work proposes the use of locational marginal price (LMP), the location- and time-specific cost of electricity on the wholesale market, to signal locations where generation can address electricity system insufficiency. After an examination of the spatial and temporal behavior of LMP in Michigan over the first two years of wholesale market operation, this work combines LMP with wind speed data to generate a value metric. High value sites in Michigan tend to be sites with higher wind speeds, with the bulk of value accruing in the fall and winter seasons. © 2009 Elsevier Ltd.


Chaloupka F.J.,University of Illinois at Chicago | Yurekli A.,World Health Organization | Fong G.T.,University of Waterloo
Tobacco Control | Year: 2012

Background Increases in tobacco taxes are widely regarded as a highly effective strategy for reducing tobacco use and its consequences. Methods The voluminous literature on tobacco taxes is assessed, drawing heavily from seminal and recent publications reviewing the evidence on the impact of tobacco taxes on tobacco use and related outcomes, as well as on tobacco tax administration. Results Well over 100 studies, including a growing number from low-income and middle-income countries, clearly demonstrate that tobacco excise taxes are a powerful tool for reducing tobacco use while at the same time providing a reliable source of government revenues. Significant increases in tobacco taxes that increase tobacco product prices encourage current tobacco users to stop using, prevent potential users from taking up tobacco use, and reduce consumption among those who continue to use, with the greatest impact on the young and the poor. Global experiences with tobacco taxation and tax administration have been used by WHO to develop a set of 'best practices' for maximising the effectiveness of tobacco taxation. Conclusions Significant increases in tobacco taxes are a highly effective tobacco control strategy and lead to significant improvements in public health. The positive health impact is even greater when some of the revenues generated by tobacco tax increases are used to support tobacco control, health promotion and/or other health-related activities and programmes. In general, oppositional arguments that higher taxes will have harmful economic effects are false or overstated.


Mahmoud Y.A.,University of Waterloo | Xiao W.,Masdar Institute of Science and Technology | Zeineldin H.H.,Masdar Institute of Science and Technology | Zeineldin H.H.,Cairo University
IEEE Transactions on Industrial Electronics | Year: 2013

Reliable and accurate photovoltaic (PV) models are essential for the simulation of PV power systems. A solar cell is typically represented by a single-diode equivalent circuit whose parameters must be estimated accurately to obtain an accurate model. However, because commercial datasheets provide only limited information, one circuit parameter has typically been assumed rather than estimated, which degrades model accuracy. This paper proposes a parameterization approach for PV models that improves modeling accuracy and reduces implementation complexity. It develops a method to estimate the circuit parameters accurately, relying only on the data points provided in all commercial module datasheets, thus improving overall accuracy. The proposed modeling approach results in two simplified models that demonstrate the advantage of fast simulation. The effectiveness of the modeling approach is thoroughly evaluated by comparing simulation results with experimental data for solar modules made of mono-crystalline, multi-crystalline, and thin-film materials. © 1982-2012 IEEE.
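
The single-diode circuit named above is the standard five-parameter PV model. As a rough illustration, the sketch below solves its implicit current-voltage relation by damped fixed-point iteration; all parameter values are arbitrary placeholders, not estimates produced by the paper's method.

    # Sketch: I = Iph - I0*(exp((V + I*Rs)/a) - 1) - (V + I*Rs)/Rsh, the
    # standard single-diode model, with a = n*Ns*Vt for an Ns-cell module.
    # Parameter values are illustrative only.
    import math

    def module_current(V, Iph=8.0, I0=1e-9, Rs=0.3, Rsh=300.0, a=1.2):
        I = Iph                                    # initial guess: photocurrent
        for _ in range(200):                       # damped fixed-point iteration
            I_new = (Iph - I0 * (math.exp((V + I * Rs) / a) - 1.0)
                     - (V + I * Rs) / Rsh)
            if abs(I_new - I) < 1e-9:
                break
            I = 0.5 * (I + I_new)                  # damping aids convergence
        return I

    for V in (0.0, 10.0, 17.0):                    # module voltages (V)
        print(f"V = {V:4.1f} V  ->  I = {module_current(V):6.3f} A")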


The time-covariance function captures the dynamics of biochemical fluctuations and contains important information about the underlying kinetic rate parameters. Intrinsic fluctuations in biochemical reaction networks are typically modelled using a master equation formalism. In general, the equation cannot be solved exactly and approximation methods are required. For small fluctuations close to equilibrium, a linearisation of the dynamics provides a very good description of the relaxation of the time-covariance function. As the number of molecules in the system decreases, deviations from the linear theory appear. Carrying out a systematic perturbation expansion of the master equation to capture these effects results in formidable algebra; however, symbolic mathematics packages considerably expedite the computation. The authors demonstrate that non-linear effects can reveal features of the underlying dynamics, such as reaction stoichiometry, not available in linearised theory. Furthermore, in models that exhibit noise-induced oscillations, non-linear corrections result in a shift in the base frequency along with the appearance of a secondary harmonic. © 2012 The Institution of Engineering and Technology.


Sivak J.,University of Waterloo
Clinical and Experimental Optometry | Year: 2012

In spite of a long history of study, as well as a significant, recent increase in research attention, the cause(s) and the means of preventing or mitigating the progression of myopia in children are still elusive. The high and growing prevalence of myopia, especially in Asian populations, as well as its progressive nature in children and its effect on visual acuity, have contributed to the recent surge in interest. Animal research carried out in the 1970s also helped spark this interest by legitimising the study of environmental influences on the refractive development of the eye. Efforts that include the use of visual training or biofeedback, bifocal and progressive lenses, contact lenses and pharmaceuticals are reviewed. Current research trends that focus on the relationship between genetics and environment, as well as studies, both animal and human, that explore the effect of peripheral refractive error on the refractive development of the central retina are also reviewed. © 2012 Optometrists Association Australia.


Garnerone S.,University of Waterloo
Physical Review A - Atomic, Molecular, and Optical Physics | Year: 2012

We consider the dynamical properties of dissipative continuous-time quantum walks on directed graphs. Using a large-deviation approach we construct a thermodynamic formalism allowing us to define a dynamical order parameter, and to identify transitions between dynamical regimes. For a particular class of dissipative quantum walks we propose a quantum generalization of the classical PageRank vector, used to rank the importance of nodes in a directed graph. We also provide an example where one can characterize the dynamical transition from an effective classical random walk to a dissipative quantum walk as a thermodynamic crossover between distinct dynamical regimes. © 2012 American Physical Society.
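
For orientation, the classical PageRank vector that the dissipative walk generalizes can be computed by simple power iteration. The sketch below does so for a small directed graph; the graph and the damping factor are invented for illustration.

    # Sketch: classical PageRank by power iteration on a 4-node directed graph.
    import numpy as np

    adj = {0: [1, 2], 1: [2], 2: [0], 3: [2]}       # out-links (illustrative)
    n, alpha = 4, 0.85                              # damping factor

    G = np.zeros((n, n))
    for i, outs in adj.items():
        for j in outs:
            G[j, i] = 1.0 / len(outs)               # column-stochastic matrix

    r = np.full(n, 1.0 / n)
    for _ in range(100):
        r = alpha * G @ r + (1.0 - alpha) / n       # PageRank update
    print("PageRank:", np.round(r, 4))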


Tabia G.N.M.,Perimeter Institute for Theoretical Physics | Tabia G.N.M.,University of Waterloo
Physical Review A - Atomic, Molecular, and Optical Physics | Year: 2012

It is crucial for various quantum information processing tasks that the state of a quantum system can be determined reliably and efficiently from general quantum measurements. One important class of measurements for this purpose is symmetric informationally complete positive operator-valued measurements (SIC-POVMs). SIC-POVMs have the advantage of providing an unbiased estimator for the quantum state with the minimal number of outcomes needed for full tomography. By virtue of Naimark's dilation theorem, any POVM can always be realized with a suitable coupling between the system and an auxiliary system and by performing a projective measurement on the joint system. In practice, finding the appropriate coupling is rather nontrivial. Here we propose an experimental design for directly implementing SIC-POVMs using multiport devices and path-encoded qubits and qutrits, the utility of which has recently been demonstrated by several experimental groups around the world. Furthermore, we describe how these multiports can be attained in practice with an integrated photonic system composed of nested linear optical elements. © 2012 American Physical Society.
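
As a concrete, implementation-independent illustration of a SIC-POVM, the sketch below builds the standard qubit SIC-POVM from four tetrahedral Bloch vectors and verifies completeness and pairwise symmetry; this example is ours and is separate from the multiport construction proposed in the paper.

    # Sketch: qubit SIC-POVM with effects E_i = (1/4)(I + n_i . sigma),
    # where the n_i are tetrahedral unit vectors.
    import numpy as np

    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    ns = np.array([[1, 1, 1], [1, -1, -1],
                   [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)

    effects = [0.25 * (np.eye(2) + n[0] * sx + n[1] * sy + n[2] * sz)
               for n in ns]
    print("complete:", np.allclose(sum(effects), np.eye(2)))
    overlaps = {round(np.trace(effects[i] @ effects[j]).real, 4)
                for i in range(4) for j in range(4) if i != j}
    print("pairwise Tr(E_i E_j):", overlaps)        # constant 1/12 for a SIC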


Piani M.,University of Waterloo
Physical Review A - Atomic, Molecular, and Optical Physics | Year: 2012

We argue that the geometric discord introduced by Dakić, Vedral, and Brukner is not a good measure for the quantumness of correlations, as it can increase even under trivial local reversible operations of the party whose classicality or nonclassicality is not tested. On the other hand it is known that the standard, mutual-information-based discord does not suffer this problem; a simplified proof of such a fact is given. © 2012 American Physical Society.


Myers R.C.,Perimeter Institute for Theoretical Physics | Singh A.,Perimeter Institute for Theoretical Physics | Singh A.,University of Waterloo
Journal of High Energy Physics | Year: 2012

Using holographic entanglement entropy for strip geometry, we construct a candidate for a c-function in arbitrary dimensions. For holographic theories dual to Einstein gravity, this c-function is shown to decrease monotonically along RG flows. A sufficient condition required for this monotonic flow is that the stress tensor of the matter fields driving the holographic RG flow must satisfy the null energy condition over the holographic surface used to calculate the entanglement entropy. In the case where the bulk theory is described by Gauss-Bonnet gravity, the latter condition alone is not sufficient to establish the monotonic flow of the c-function. We also observe that for certain holographic RG flows, the entanglement entropy undergoes a 'phase transition' as the size of the system grows and as a result, evolution of the c-function may exhibit a discontinuous drop. © SISSA 2012.


Gharibian S.,University of Waterloo
Physical Review A - Atomic, Molecular, and Optical Physics | Year: 2012

We propose a measure of nonclassical correlations in bipartite quantum states based on local unitary operations. We prove that the measure is nonzero if and only if the quantum discord is nonzero; this is achieved via a new characterization of zero discord states in terms of the state's correlation matrix. Moreover, our scheme can be extended to ensure that the same relationship holds even with a generalized version of quantum discord in which higher-rank projective measurements are allowed. We next derive a closed-form expression for our scheme in the cases of Werner states and (2×N)-dimensional systems. The latter reveals that for (2×N)-dimensional states, our measure reduces to the geometric discord. A connection to the Clauser-Horne-Shimony-Holt inequality is shown. We close with a characterization of all maximally nonclassical, yet separable, (2×N)-dimensional states of rank at most 2 (with respect to our measure). © 2012 American Physical Society.


Modi K.,University of Oxford | Modi K.,National University of Singapore | Brodutch A.,Macquarie University | Brodutch A.,University of Waterloo | And 5 more authors.
Reviews of Modern Physics | Year: 2012

One of the best signatures of nonclassicality in a quantum system is the existence of correlations that have no classical counterpart. Different methods for quantifying the quantum and classical parts of correlations are among the more actively studied topics of quantum-information theory over the past decade. Entanglement is the most prominent of these correlations, but in many cases unentangled states exhibit nonclassical behavior too. Thus distinguishing quantum correlations other than entanglement provides a better division between the quantum and classical worlds, especially when considering mixed states. Here different notions of classical and quantum correlations quantified by quantum discord and other related measures are reviewed. In the first half, the mathematical properties of the measures of quantum correlations are reviewed, related to each other, and the classical-quantum division that is common among them is discussed. In the second half, it is shown that the measures identify and quantify the deviation from classicality in various quantum-information-processing tasks, quantum thermodynamics, open-system dynamics, and many-body physics. It is shown that in many cases quantum correlations indicate an advantage of quantum methods over classical ones. © 2012 American Physical Society.


Cetin B.,Middle East Technical University | Cetin B.,Bilkent University | Li D.,University of Waterloo
Electrophoresis | Year: 2011

Dielectrophoresis (DEP) is the movement of a particle in a non-uniform electric field due to the interaction of the particle's dipole with the spatial gradient of the electric field. DEP is an attractive technique for manipulating particles and cells at the microscale because of its favorable scaling as system size is reduced. DEP has been utilized for many applications in microfluidic systems. In this review, a detailed analysis of the modeling of DEP-based manipulation of particles is provided, and recent applications regarding particle manipulation in microfluidic systems (mainly works published between 2007 and 2010) are presented. © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.


Rolison D.R.,U.S. Navy | Nazar L.F.,University of Waterloo
MRS Bulletin | Year: 2011

Climate change, diminishing reserves of fossil fuels, energy security, and consumer demand all call for alternatives to our current course of energy usage and consumption. A broad consensus holds that implementing energy efficiency and renewable energy technologies is a necessity now rather than a luxury to be deferred to some distant future. Neither effort can effect serious change in our energy patterns without marked improvements in electrical energy storage, with electrochemical energy storage in batteries and electrochemical capacitors serving as key components of any plausible scenario.1,2 Consumer expectations of convenience and long-lived portable power further drive the need to push these old devices onto a new performance curve. This issue of MRS Bulletin addresses the significant advances occurring in research laboratories around the world as old electrode materials and designs are re-envisioned, and abandoned materials of the past are reinvigorated, by arranging matter and function on the nanoscale to bring batteries and electrochemical capacitors into the 21st century. © 2011 Materials Research Society.


Liu J.,University of Waterloo
Physical Chemistry Chemical Physics | Year: 2012

The interaction between DNA and inorganic surfaces has attracted intense research interest, as a detailed understanding of adsorption and desorption is required for DNA microarray optimization, biosensor development, and nanoparticle functionalization. One of the most commonly studied surfaces is gold due to its unique optical and electric properties. Through various surface science tools, it was found that thiolated DNA can interact with gold not only via the thiol group but also through the DNA bases. Most of the previous work has been performed with planar gold surfaces. However, knowledge gained from planar gold may not be directly applicable to gold nanoparticles (AuNPs) for several reasons. First, DNA adsorption affinity is a function of AuNP size. Second, DNA may interact with AuNPs differently due to the high curvature. Finally, the colloidal stability of AuNPs confines salt concentration, whereas there is no such limit for planar gold. In addition to gold, graphene oxide (GO) has emerged as a new material for interfacing with DNA. GO and AuNPs share many similar properties for DNA adsorption; both have negatively charged surfaces but can still strongly adsorb DNA, and both are excellent fluorescence quenchers. Similar analytical and biomedical applications have been demonstrated with these two surfaces. The nature of the attractive force, however, is different for each of these. DNA adsorption on AuNPs occurs via specific chemical interactions but adsorption on GO occurs via aromatic stacking and hydrophobic interactions. Herein, we summarize the recent developments in studying non-thiolated DNA adsorption and desorption as a function of salt, pH, temperature and DNA secondary structures. Potential future directions and applications are also discussed. © 2012 the Owner Societies.


Lavi R.,Technion - Israel Institute of Technology | Swamy C.,University of Waterloo
Journal of the ACM | Year: 2011

We give a general technique to obtain approximation mechanisms that are truthful in expectation. We show that for packing domains, any α-approximation algorithm that also bounds the integrality gap of the LP relaxation of the problem by α can be used to construct an α-approximation mechanism that is truthful in expectation. This immediately yields a variety of new and significantly improved results for various problem domains and, furthermore, yields truthful (in expectation) mechanisms with guarantees that match the best-known approximation guarantees when truthfulness is not required. In particular, we obtain the first truthful mechanisms with approximation guarantees for a variety of multiparameter domains. We obtain truthful (in expectation) mechanisms achieving approximation guarantees of O(√m) for combinatorial auctions (CAs), (1 + ε) for multiunit CAs with B = Ω(log m) copies of each item, and 2 for multiparameter knapsack problems (multiunit auctions). Our construction is based on considering an LP relaxation of the problem and using the classic VCG mechanism to obtain a truthful mechanism in this fractional domain. We argue that the (fractional) optimal solution scaled down by α, where α is the integrality gap of the problem, can be represented as a convex combination of integer solutions, and by viewing this convex combination as specifying a probability distribution over integer solutions, we get a randomized, truthful-in-expectation mechanism. Our construction can be seen as a way of exploiting VCG in a computationally tractable way even when the underlying social-welfare maximization problem is NP-hard. © 2011.


Chen J.Z.Y.,University of Waterloo
Physical Review E - Statistical, Nonlinear, and Soft Matter Physics | Year: 2010

The structure of a system consisting of a self-avoiding polymer chain attracted to a freely supported soft membrane surface by a short-ranged force is investigated. The adhesion of the polymer to the deformed surface can produce distinctive states such as pancake, pinch, and bud, depending on the phenomenological parameters in the Helfrich model describing the membrane as well as an adsorption energy describing the attraction between a monomer and the membrane surface. © 2010 The American Physical Society.


Bravyi S.,IBM | Konig R.,IBM | Konig R.,University of Waterloo
Physical Review Letters | Year: 2013

Given a quantum error correcting code, an important task is to find encoded operations that can be implemented efficiently and fault tolerantly. In this Letter we focus on topological stabilizer codes and encoded unitary gates that can be implemented by a constant-depth quantum circuit. Such gates have a certain degree of protection since propagation of errors in a constant-depth circuit is limited by a constant size light cone. For the 2D geometry we show that constant-depth circuits can only implement a finite group of encoded gates known as the Clifford group. This implies that topological protection must be "turned off" for at least some steps in the computation in order to achieve universality. For the 3D geometry we show that an encoded gate U is implementable by a constant-depth circuit only if UPU† is in the Clifford group for any Pauli operator P. This class of gates includes some non-Clifford gates such as the π/8 rotation. Our classification applies to any stabilizer code with geometrically local stabilizers and sufficiently large code distance. © 2013 American Physical Society.


Mitran P.,University of Waterloo
IEEE International Symposium on Information Theory - Proceedings | Year: 2012

We consider the problem of optimal transmission power for a continuous-time energy harvesting system where energy arrivals occur at random times in random amounts. We do not assume that the energy arrivals are known non-causally and consider the online setting. Here, there is a tradeoff between increasing instantaneous transmission power, which increases the instantaneous transmission rate and can reduce battery overflow, and decreasing transmission power, which increases battery life and energy efficiency. We formulate the problem as that of maximizing the average transmission rate or throughput. We first find the non-linear relationship between the transmission power (which is a function of the remaining battery energy) and the stationary distribution of the remaining battery energy. We then show that the resulting maximization problem is concave in the distribution of the remaining battery energy. This is non-trivial due to the non-linear relationship with the transmission power. We then use a calculus of variations approach to derive necessary conditions on the optimal transmission power. Specifically, we find that it must satisfy a first-order non-linear autonomous ordinary differential equation (ODE) that has two degrees of freedom for optimization purposes, one of which is the initial condition of the ODE. Solving the ODE numerically, we compute achieved throughputs as a function of the battery capacity. © 2012 IEEE.


Van Der Meer M.,University of Waterloo | Kurth-Nelson Z.,University College London | Redish A.D.,University of Minnesota
Neuroscientist | Year: 2012

Decisions result from an interaction between multiple functional systems acting in parallel to process information in very different ways, each with strengths and weaknesses. In this review, the authors address three action-selection components of decision-making. The Pavlovian system releases an action from a limited repertoire of potential actions, such as approaching learned stimuli. Like the Pavlovian system, the habit system is computationally fast but, unlike the Pavlovian system, permits arbitrary stimulus-action pairings. These associations are a "forward" mechanism; when a situation is recognized, the action is released. In contrast, the deliberative system is flexible but takes time to process. The deliberative system uses knowledge of the causal structure of the world to search into the future, planning actions to maximize expected rewards. Deliberation depends on the ability to imagine future possibilities, including novel situations, and it allows decisions to be taken without having previously experienced the options. Various anatomical structures have been identified that carry out the information processing of each of these systems: the hippocampus constitutes a map of the world that can be used for searching/imagining the future; dorsal striatal neurons represent situation-action associations; and the ventral striatum maintains value representations for all three systems. Each system presents vulnerabilities to pathologies that can manifest as psychiatric disorders. Understanding these systems and their relation to neuroanatomy opens up a deeper way to treat the structural problems underlying various disorders. © The Author(s) 2012.


Childs A.M.,University of Waterloo
Communications in Mathematical Physics | Year: 2010

Quantum walk is one of the main tools for quantum algorithms. Defined by analogy to classical random walk, a quantum walk is a time-homogeneous quantum process on a graph. Both random and quantum walks can be defined either in continuous or discrete time. But whereas a continuous-time random walk can be obtained as the limit of a sequence of discrete-time random walks, the two types of quantum walk appear fundamentally different, owing to the need for extra degrees of freedom in the discrete-time case. In this article, I describe a precise correspondence between continuous- and discrete-time quantum walks on arbitrary graphs. Using this correspondence, I show that continuous-time quantum walk can be obtained as an appropriate limit of discrete-time quantum walks. The correspondence also leads to a new technique for simulating Hamiltonian dynamics, giving efficient simulations even in cases where the Hamiltonian is not sparse. The complexity of the simulation is linear in the total evolution time, an improvement over simulations based on high-order approximations of the Lie product formula. As applications, I describe a continuous-time quantum walk algorithm for element distinctness and show how to optimally simulate continuous-time query algorithms of a certain form in the conventional quantum query model. Finally, I discuss limitations of the method for simulating Hamiltonians with negative matrix elements, and present two problems that motivate attempting to circumvent these limitations. © Springer-Verlag 2009.


Mehrabian A.,University of Waterloo
Combinatorics Probability and Computing | Year: 2011

We consider a variant of the Cops and Robbers game where the robber can move t edges at a time, and show that in this variant, the cop number of a d-regular graph with girth larger than 2t+2 is Ω(d^t). By the known upper bounds on the order of cages, this implies that the cop number of a connected n-vertex graph can be as large as Ω(n^(2/3)) if t ≥ 2, and Ω(n^(4/5)) if t ≥ 4. This improves the Ω(n^((t-3)/(t-2))) lower bound of Frieze, Krivelevich and Loh (Variations on cops and robbers, J. Graph Theory, to appear) when 2 ≤ t ≤ 6. We also conjecture a general upper bound O(n^(t/(t+1))) for the cop number in this variant, generalizing Meyniel's conjecture. © 2011 Cambridge University Press.


Wesson P.S.,University of Waterloo
Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics | Year: 2013

An exact solution of the five-dimensional field equations is studied which describes waves in the classical Einstein vacuum. While the solution is essentially 5D in nature, the waves exist in ordinary 3D space. They should not be confused with standard gravitational waves, since their phase velocity can exceed that of light. They resemble de Broglie waves, and may give insight to wave-particle duality. © 2013 Elsevier B.V.


Marvian I.,Perimeter Institute for Theoretical Physics | Marvian I.,University of Waterloo | Spekkens R.W.,Perimeter Institute for Theoretical Physics
New Journal of Physics | Year: 2013

If a system undergoes symmetric dynamics, then the final state of the system can only break the symmetry in ways in which it was broken by the initial state, and its measure of asymmetry can be no greater than that of the initial state. It follows that for the purpose of understanding the consequences of symmetries of dynamics, in particular, complicated and open-system dynamics, it is useful to introduce the notion of a state's asymmetry properties, which includes the type and measure of its asymmetry. We demonstrate and exploit the fact that the asymmetry properties of a state can also be understood in terms of information-theoretic concepts, for instance in terms of the state's ability to encode information about an element of the symmetry group. We show that the asymmetry properties of a pure state ψ relative to the symmetry group G are completely specified by the characteristic function of the state, defined as χψ(g) ≡ ⟨ψ|U(g)|ψ⟩, where g ∈ G and U is the unitary representation of interest. For a symmetry described by a compact Lie group G, we show that two pure states can be reversibly interconverted one to the other by symmetric operations if and only if their characteristic functions are equal up to a one-dimensional representation of the group. Characteristic functions also allow us to easily identify the conditions for one pure state to be converted to another by symmetric operations (in general irreversibly) for the various paradigms of single-copy transformations: deterministic, state-to-ensemble, stochastic and catalyzed. © IOP Publishing and Deutsche Physikalische Gesellschaft.
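
The characteristic function defined above is straightforward to evaluate numerically. The sketch below computes χψ(g) for a qubit under an illustrative U(1) representation generated by σz; the state and the representation are our choices, not the paper's.

    # Sketch: chi_psi(theta) = <psi|U(theta)|psi>, U(theta) = exp(-i*theta*sz/2).
    import numpy as np

    psi = np.array([np.cos(0.3), np.sin(0.3)])      # an arbitrary pure qubit state

    def chi(theta):
        U = np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])
        return np.vdot(psi, U @ psi)                # <psi|U(theta)|psi>

    for theta in (0.0, np.pi / 2, np.pi):           # chi(0) = 1 for any state
        print(f"theta = {theta:.3f}: chi = {chi(theta):.4f}")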


Few studies have assessed the construct validity of measures of neighborhood food environment, which remains a major challenge in accurately assessing food access. In this study, we adapted a psychometric tool to examine the construct validity of 4 such measures for 3 constructs. We used 4 food-environment measures to collect objective data from 422 Ontario, Canada, food stores in 2010. Residents' perceptions of their neighborhood food environment were collected from 2,397 households between 2009 and 2010. Objective and perceptual data were aggregated within buffer zones around respondents' homes (at 250 m, 500 m, 1,000 m, and 1,500 m). We constructed multitrait-multimethod matrices for each scale to examine construct validity for the constructs of food availability, food quality, and food affordability. Convergent validity between objective measures decreased with increasing geographic scale. Convergent validity between objective and subjective measures increased with increasing geographic scale. High discriminant validity coefficients existed between food availability and food quality, indicating that these two constructs may not be distinct in this setting. We conclude that the construct validity of food environment measures varies over geographic scales, which has implications for research, policy, and practice. © The Author 2013.


Baskerville N.B.,University of Waterloo | Liddy C.,University of Ottawa | Hogg W.,University of Ottawa
Annals of Family Medicine | Year: 2012

PURPOSE This study was a systematic review with a quantitative synthesis of the literature examining the overall effect size of practice facilitation and possible moderating factors. The primary outcome was the change in evidence-based practice behavior calculated as a standardized mean difference. METHODS In this systematic review, we searched 4 electronic databases and the reference lists of published literature reviews to find practice facilitation studies that identified evidence-based guideline implementation within primary care practices as the outcome. We included randomized and nonrandomized controlled trials and prospective cohort studies published from 1966 to December 2010 in English-language peer-reviewed journals. Reviews of each study were conducted and assessed for quality; data were abstracted, and standardized mean difference estimates and 95% confidence intervals (CIs) were calculated using a random-effects model. Publication bias, influence, subgroup, and meta-regression analyses were also conducted. RESULTS Twenty-three studies contributed to the analysis for a total of 1,398 participating practices: 697 practice facilitation intervention and 701 control group practices. The degree of variability between studies was consistent with what would be expected to occur by chance alone (I² = 20%). An overall effect size of 0.56 (95% CI, 0.43-0.68) favored practice facilitation (z = 8.76; P < .001), and publication bias was evident. Primary care practices are 2.76 (95% CI, 2.18-3.43) times more likely to adopt evidence-based guidelines through practice facilitation. Meta-regression analysis indicated that tailoring (P = .05), the intensity of the intervention (P = .03), and the number of intervention practices per facilitator (P = .004) modified evidence-based guideline adoption. CONCLUSION Practice facilitation has a moderately robust effect on evidence-based guideline adoption within primary care. Implementation fidelity factors, such as tailoring, the number of practices per facilitator, and the intensity of the intervention, have important resource implications.
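
For readers unfamiliar with the synthesis method, the sketch below pools standardized mean differences under a DerSimonian-Laird random-effects model and reports the pooled effect, its 95% CI and I²; the five study effects and variances are invented for illustration and are unrelated to the review's 23 studies.

    # Sketch: DerSimonian-Laird random-effects pooling of standardized mean
    # differences (illustrative data only).
    import numpy as np

    d = np.array([0.40, 0.62, 0.55, 0.70, 0.35])     # per-study SMDs
    v = np.array([0.02, 0.05, 0.03, 0.08, 0.04])     # per-study variances

    w = 1.0 / v                                      # fixed-effect weights
    d_fe = np.sum(w * d) / np.sum(w)
    Q = np.sum(w * (d - d_fe) ** 2)                  # heterogeneity statistic
    k = len(d)
    tau2 = max(0.0, (Q - (k - 1)) /
               (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    I2 = 100 * max(0.0, (Q - (k - 1)) / Q)           # I^2 (%)

    w_re = 1.0 / (v + tau2)                          # random-effects weights
    d_re = np.sum(w_re * d) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    print(f"pooled SMD = {d_re:.2f} "
          f"(95% CI {d_re - 1.96 * se:.2f} to {d_re + 1.96 * se:.2f}), "
          f"I^2 = {I2:.0f}%")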


Ionicioiu R.,University of Waterloo | Spiller T.P.,University of Leeds
Physical Review A - Atomic, Molecular, and Optical Physics | Year: 2012

A fundamental problem in quantum information is to describe multipartite quantum states efficiently. An efficient representation in terms of graphs exists for several families of quantum states (graph, cluster, and stabilizer states), motivating us to extend this construction to other classes. We introduce an axiomatic framework for mapping graphs to quantum states of a suitable physical system. Starting from three general axioms, we derive a rich structure that includes and generalizes several classes of multipartite entangled states, such as graph or stabilizer states, Gaussian cluster states, quantum random networks, and projected entangled pair states. Due to its flexibility, the present formalism can be extended to include directed and weighted graphs. © 2012 American Physical Society.


Piani M.,University of Waterloo | Adesso G.,University of Nottingham
Physical Review A - Atomic, Molecular, and Optical Physics | Year: 2012

We analyze a family of measures of general quantum correlations for composite systems, defined in terms of the bipartite entanglement necessarily created between systems and apparatuses during local measurements. For every entanglement monotone E, this operational correspondence provides a different measure QE of quantum correlations. Examples of such measures are the relative entropy of quantumness, the quantum deficit, and the negativity of quantumness. In general, we prove that any so-defined quantum correlation measure is always greater than (or equal to) the corresponding entanglement between the subsystems, QE ≥ E, for arbitrary states of composite quantum systems. We analyze qualitatively and quantitatively the flow of correlations in iterated measurements, showing that general quantum correlations and entanglement can never decrease along von Neumann chains, and that genuine multipartite entanglement in the initial state of the observed system always gives rise to genuine multipartite entanglement among all subsystems and all measurement apparatuses at any level in the chain. Our results provide a comprehensive framework to understand and quantify general quantum correlations in multipartite states. © 2012 American Physical Society.


Montero M.,Institute Fisica Fundamental | Martin-Martinez E.,University of Waterloo
Physical Review A - Atomic, Molecular, and Optical Physics | Year: 2012

We provide a simple argument showing that in the limit of infinite acceleration, the entanglement in a fermionic-field bipartite system must be independent of the choice of Unruh modes. This implies that most tensor product structures used previously to compute field entanglement in relativistic quantum information cannot give rise to physical results. © 2012 American Physical Society.


Magesan E.,University of Waterloo
Quantum Information and Computation | Year: 2011

The paper analyzes the behavior of quantum channels, particularly in large dimensions, by proving various properties of the quantum gate fidelity. Many of these properties are of independent interest in the theory of distance measures on quantum operations. A non-uniqueness result for the gate fidelity is proven, a consequence of which is the existence of non-depolarizing channels that produce a constant gate fidelity on pure states. Asymptotically, the gate fidelity associated with any quantum channel is shown to converge to that of a depolarizing channel. Methods for estimating the minimum of the gate fidelity are also presented. © Rinton Press.


Biedl T.,University of Waterloo
Discrete and Computational Geometry | Year: 2011

In this paper, we study small planar drawings of planar graphs. For arbitrary planar graphs, Θ(n²) is the established upper and lower bound on the worst-case area. A long-standing open problem is to determine for what graphs a smaller area can be achieved. We show here that series-parallel graphs can be drawn in O(n^(3/2)) area, and outerplanar graphs can be drawn in O(n log n) area, but 2-outerplanar graphs and planar graphs of proper pathwidth 3 require Ω(n²) area. Our drawings are visibility representations, which can be converted to polyline drawings of asymptotically the same area. © 2010 Springer Science+Business Media, LLC.


Burkov A.A.,University of Waterloo
Physical Review B - Condensed Matter and Materials Physics | Year: 2014

We present a theory of the intrinsic anomalous Hall effect in a model of a doped Weyl semimetal, which serves here as the simplest toy model of a generic three-dimensional metallic ferromagnet with Weyl nodes in the electronic structure. We analytically evaluate the anomalous Hall conductivity as a function of doping, which allows us to explicitly separate the Fermi-surface and non-Fermi-surface contributions to the Hall conductivity by carefully evaluating the zero-frequency and zero wave-vector limits of the corresponding response function. We show that this separation agrees with the one suggested a long time ago in the context of the quantum Hall effect by Streda [J. Phys. C 15, L717 (1982)]. © 2014 American Physical Society.


Jones M.A.,Center for Healthcare Related Infection Surveillance and Prevention | Steiner S.H.,University of Waterloo
International Journal for Quality in Health Care | Year: 2012

Background: Risk-adjusted control charts have become popular for monitoring processes that involve the management and treatment of patients in hospitals or other healthcare institutions. However, to date, the effect of estimation error on risk-adjusted control charts has not been studied. Methods: We studied the effect of estimation error on risk-adjusted binary cumulative sum (CUSUM) performance using actual and simulated data on patients undergoing coronary artery bypass surgery and assessed for mortality up to 30 days post-surgery. The effect of estimation error was indicated by the variability of the 'true' average run lengths (ARLs) obtained using repeated sampling of the observed data under various realistic scenarios. Results: Results showed that estimation error can have a substantial effect on risk-adjusted CUSUM chart performance in terms of variation of true ARLs. Moreover, the performance was highly dependent on the number of events used to derive the control chart parameters and the specified ARL for an in-control process (ARL0). However, the results suggest that it is the uncertainty in the overall adverse event rate that is the main component of estimation error. Conclusions: When designing a control chart, the effect of estimation error could be taken into account by generating a number of bootstrap samples of the available Phase I data and then determining the control limit needed to obtain an ARL0 of a pre-specified level 95% of the time. If limited Phase I data are available, it may be advisable to continue to update model parameters even after prospective patient monitoring is implemented. © The Author 2011. Published by Oxford University Press in association with the International Society for Quality in Health Care; all rights reserved.
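
As a rough illustration of the chart being studied, the sketch below simulates the in-control run length of a risk-adjusted binary CUSUM built from log-likelihood-ratio weights; the risk distribution, odds-ratio shift and control limit are invented, and repeating the simulation over bootstrap resamples of the Phase I risks is one way to quantify the estimation error discussed in the abstract.

    # Sketch: simulated in-control ARL of a risk-adjusted binary CUSUM.
    import numpy as np

    rng = np.random.default_rng(1)

    def weight(y, p, OR=2.0):
        # log-likelihood-ratio score for outcome y given predicted risk p,
        # testing an odds-ratio shift of OR
        return np.log((OR if y else 1.0) / (1.0 - p + OR * p))

    def run_length(risks, OR=2.0, h=2.5):
        S, t = 0.0, 0
        while S <= h:
            t += 1
            p = rng.choice(risks)                 # a patient's predicted risk
            y = rng.random() < p                  # simulated in-control outcome
            S = max(0.0, S + weight(y, p, OR))    # tabular CUSUM update
        return t

    phase1_risks = rng.uniform(0.01, 0.15, size=500)  # estimated Phase I risks
    arl = np.mean([run_length(phase1_risks) for _ in range(100)])
    print(f"approximate in-control ARL: {arl:.0f} patients")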


Poulin F.J.,University of Waterloo | Franks P.J.S.,University of California at San Diego
Journal of Plankton Research | Year: 2010

Here we present a nutrient-phytoplankton-zooplankton (NPZ) model that has arbitrary size-resolution within the phytoplankton- and zooplankton-state variables. The model assumes allometric scaling of biological parameters. This particular version of the model (herbivorous zooplankton only) has analytical solutions that allow efficient exploration of the effects of allometric dependencies of various biological processes on the model's equilibrium solutions. The model shows that there are constraints on the possible combinations of allometric scalings of the biological rates that will allow ecosystems to be structured as we observe (larger organisms added as the total biomass increases). The diversity (number of size classes occupied) of the ecosystem is the result of simultaneous bottom-up and top-down control: resources determine which classes can exist; predation determines which classes do exist. Thus, the simultaneous actions of bottom-up and top-down controls are essential for maintaining and structuring planktonic ecosystems. One important conclusion from this model is that there are multiple, independent ways of obtaining any given biomass spectrum, and that the spectral slope is not, in and of itself, very informative concerning the underlying dynamics. There is a clear need for improved size-resolved field measurements of biological rates; these will both elucidate biological processes in the field, and allow strong testing of size-structured models of planktonic ecosystems. © The Author 2010.
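
The sketch below integrates a deliberately minimal size-resolved NPZ model with allometric (mass^(-1/4)) scaling of the maximum growth and grazing rates; it is a toy formulation of ours for illustration and does not reproduce the paper's analytically solvable model.

    # Sketch: a 3-size-class NPZ model with allometric rate scaling.
    import numpy as np
    from scipy.integrate import solve_ivp

    m = np.array([1.0, 10.0, 100.0])            # size classes (relative mass)
    mu = 1.0 * m ** -0.25                       # allometric max growth rates
    g = 0.5 * m ** -0.25                        # allometric max grazing rates
    kN, kP = 0.5, 1.0                           # half-saturation constants

    def rhs(t, x):
        N, P, Z = x[0], x[1:4], x[4:7]
        uptake = mu * N / (kN + N) * P           # nutrient-limited growth
        graze = g * P / (kP + P) * Z             # class-matched herbivory
        dN = -uptake.sum() + 0.3 * graze.sum()   # partial recycling to nutrient
        dP = uptake - graze
        dZ = 0.7 * graze - 0.1 * Z               # assimilation minus mortality
        return np.concatenate(([dN], dP, dZ))

    x0 = np.concatenate(([5.0], np.full(3, 0.1), np.full(3, 0.05)))
    sol = solve_ivp(rhs, (0.0, 200.0), x0, rtol=1e-8)
    print("final P biomass per size class:", np.round(sol.y[1:4, -1], 3))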


Hammond D.,University of Waterloo
Tobacco Control | Year: 2011

Objective To review evidence on the impact of health warning messages on tobacco packages. Data sources Articles were identified through electronic databases of published articles, as well as relevant 'grey' literature, using the following keywords: health warning, health message, health communication, label and labelling, in conjunction with at least one of the following terms: smoking, tobacco, cigarette, product, package and pack. Study selection and data extraction Relevant articles available prior to January 2011 were screened for six methodological criteria. A total of 94 original articles met inclusion criteria, including 72 quantitative studies, 16 qualitative studies, 5 studies with both quantitative and qualitative components, and 1 review paper, originating from Canada (n=35), USA (n=29), Australia (n=16), UK (n=13), The Netherlands (n=3), France (n=3), New Zealand (n=3), Mexico (n=3), Brazil (n=2), Belgium (n=1), other European countries (n=10), Norway (n=1), Malaysia (n=1) and China (n=1). Results The evidence indicates that the impact of health warnings depends upon their size and design: whereas obscure text-only warnings appear to have little impact, prominent health warnings on the face of packages serve as a prominent source of health information for smokers and non-smokers, can increase health knowledge and perceptions of risk and can promote smoking cessation. The evidence also indicates that comprehensive warnings are effective among youth and may help to prevent smoking initiation. Pictorial health warnings that elicit strong emotional reactions are significantly more effective. Conclusions Health warnings on packages are among the most direct and prominent means of communicating with smokers. Larger warnings with pictures are significantly more effective than smaller, text-only messages.


Gutoski G.,University of Waterloo
Journal of Mathematical Physics | Year: 2012

The present paper studies an operator norm that captures the distinguishability of quantum strategies in the same sense that the trace norm captures the distinguishability of quantum states or the diamond norm captures the distinguishability of quantum channels. Characterizations of its unit ball and dual norm are established via strong duality of a semidefinite optimization problem. A full, formal proof of strong duality is presented for the semidefinite optimization problem in question. This norm and its properties are employed to generalize a state discrimination result of Gutoski and Watrous [In Proceedings of the 22nd Symposium on Theoretical Aspects of Computer Science (STACS'05), Lecture Notes in Computer Science, Vol. 3404 (Springer, 2005), pp. 605-616]. The generalized result states that for any two convex sets S0, S1 of strategies there exists a fixed interactive measurement scheme that successfully distinguishes any choice of S0 ∈ S0 from any choice of S1 ∈ S1 with bias proportional to the minimal distance between the sets S0 and S1 as measured by this norm. A similar discrimination result for channels then follows as a special case. © 2012 American Institute of Physics.


Amin O.,University of Waterloo | Uysal M.,Ozyegin University
IEEE Transactions on Wireless Communications | Year: 2011

In this paper, we investigate bit and power allocation strategies for an orthogonal frequency division multiplexing (OFDM) cooperative network over frequency-selective fading channels. We assume amplify-and-forward relaying and consider the bit error rate (BER) as our performance measure. Aiming to optimize the BER under a total power constraint and for a given average data rate, we propose three adaptive algorithms: optimal power loading (OPL), optimal bit loading (OBL), and optimal joint bit and power loading (OBPL). Our Monte Carlo simulation results demonstrate performance gains through adaptive bit and power loading over conventional non-adaptive systems as well as over a currently available adaptive cooperative scheme in the literature. The impact of practical issues, such as imperfect channel estimation and limited feedback, on the performance of the proposed adaptive schemes is further discussed. © 2011 IEEE.


Zeineldin H.H.,Masdar Institute of Science and Technology | Salama M.M.A.,University of Waterloo | Salama M.M.A.,King Saud University
IEEE Transactions on Industrial Electronics | Year: 2011

Sandia frequency shift (SFS) falls under the active islanding detection methods that rely on frequency drift to detect an islanding condition for inverter-based distributed generation. Active islanding detection methods are commonly tested on constant RLC loads, where the load's active power is directly proportional to the square of the voltage and is independent of the system frequency. Since the SFS method relies primarily on frequency to detect islanding, the load's active power frequency dependence could have an impact on its performance and the nondetection zone (NDZ). In this paper, the impact of the load's active power frequency dependence on the performance of the SFS method, during an islanding condition, is analyzed. A NDZ model that takes into account the load's frequency dependence parameter is derived mathematically and validated through digital simulation. The results show that the load's frequency dependence has a significant impact on the NDZ of the SFS method and thus is an important factor to consider when designing and testing this method. © 2006 IEEE.


Morris K.,University of Waterloo
IEEE Transactions on Automatic Control | Year: 2011

In control of vibrations, diffusion and many other problems governed by partial differential equations, there is freedom in the choice of actuator location. The actuator location should be chosen to optimize performance objectives. In this paper, we consider linear quadratic performance. Two types of cost are considered; the choice depends on whether the response to the worst initial condition is to be minimized or whether the initial condition is regarded as random. In practice, approximations are used in controller design and thus in the selection of actuator locations. The optimal cost and location of the approximating sequence should converge to the exact optimal cost and location. In this work, conditions for this convergence are given in the case of linear quadratic control. Examples are provided to illustrate that convergence may fail when these conditions are not satisfied. © 2010 IEEE.


Hassan M.,King Fahd University of Petroleum and Minerals | Khajepour A.,University of Waterloo
IEEE Transactions on Robotics | Year: 2011

Cable-actuated parallel manipulators (CPMs) rely on cables instead of rigid links to manipulate the moving platform in the taskspace. Upper and lower bounds imposed on the cable tensions limit the force capability of CPMs and render certain forces infeasible at the end effector. This paper presents a geometrical analysis of three problems: 1) determining whether a CPM is capable of balancing a given wrench within the cable tension limits (feasibility check); 2) minimizing the 2-norm of the cable tensions that balance feasible wrenches; and 3) checking for the existence of an all-positive nullspace vector, which is a necessary condition for a wrench-closure configuration in CPMs. The unified approach used in this analysis is systematic and geometrically intuitive: it is based on formulating the static force equilibrium problem as an intersection between two convex sets and applying Dykstra's alternating projection algorithm to find the projection of a point onto that intersection. In the case of infeasible wrenches, the algorithm can determine whether the infeasibility is due to the cable tension limits or to a non-wrench-closure configuration. For the former case, a method was developed by which this algorithm can be used to extend the cable tension limits to balance infeasible wrenches. In addition, the performance of the algorithm is explained for incompletely restrained cable-driven manipulators and for manipulators at singular poses. This paper also discusses the algorithm's convergence and termination rule. This geometrical and systematic approach is intended for use as a convenient tool for cable tension analysis during design. © 2011 IEEE.
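
Dykstra's alternating projection, the core tool named above, is compact to state in code. The sketch below projects onto the intersection of a box of tension limits and the affine set of cable tensions balancing a given wrench, for a small invented two-degree-of-freedom, three-cable instance rather than any configuration from the paper.

    # Sketch: Dykstra's alternating projection onto (tension box) ∩ {t : A t = w}.
    import numpy as np

    A = np.array([[1.0, -1.0, 0.5],
                  [0.3, 0.8, -0.6]])               # 2-DOF platform, 3 cables
    w = np.array([0.4, 0.2])                       # wrench to balance
    lo, hi = 0.1, 2.0                              # cable tension limits

    def proj_box(t):                               # nearest point in the box
        return np.clip(t, lo, hi)

    def proj_affine(t):                            # nearest point with A t = w
        return t - A.T @ np.linalg.solve(A @ A.T, A @ t - w)

    x = np.zeros(3)
    p = np.zeros(3)                                # Dykstra correction terms
    q = np.zeros(3)
    for _ in range(500):
        y = proj_box(x + p)
        p = x + p - y
        x = proj_affine(y + q)
        q = y + q - x

    ok = np.allclose(A @ x, w) and np.all((x > lo - 1e-6) & (x < hi + 1e-6))
    print("tensions:", np.round(x, 4), "| wrench feasible:", ok)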


Pictorial health warnings on cigarette packages are a prominent and effective means of communicating the risks of smoking; however, there is little research on effective types of message content and socio-demographic effects. This study tested message themes and content of pictorial warnings in Mexico. Face-to-face surveys were conducted with 544 adult smokers and 528 youth in Mexico City. Participants were randomized to view 5-7 warnings for two of 15 different health effects. Warnings for each health effect included a text-only warning and pictorial warnings with various themes: "graphic" health effects, "lived experience", symbolic images, and testimonials. Pictorial health warnings were rated as more effective than text-only warnings. Pictorial warnings featuring "graphic" depictions of disease were rated as significantly more effective than those using symbolic images or experiences of human suffering. Adding testimonial information to warnings increased perceived effectiveness. Adults who were female, older, less educated, or intending to quit smoking rated warnings as more effective, although the magnitude of these differences was modest. Few interactions were observed between socio-demographics and message theme. Graphic depictions of disease were perceived by youth and adults as the most effective warning theme. Perceptions of warnings were generally similar across socio-demographic groups.


Mielke J.G.,University of Waterloo
Nutritional neuroscience | Year: 2011

Choline is a micronutrient essential for the structural integrity of cellular membranes, and its presence at synapses follows either depolarization-induced pre-synaptic release or degradation of acetylcholine. Previous studies using whole-cell recording have shown that choline can modulate inhibitory input to hippocampal pyramidal neurons by acting upon nicotinic acetylcholine receptors (nAChRs) found on interneurons. However, little is known about how choline affects neuronal activity at the population level; therefore, we used extracellular recordings to assess its influence upon synaptic transmission in acutely prepared hippocampal slices. Choline caused a reversible depression of evoked field excitatory post-synaptic potentials (fEPSPs) in a concentration-dependent manner (10, 500, and 1000 μM). When applied after the induction of long-term potentiation, choline-mediated depression (CMD) was still observed, and potentiation returned on wash-out. Complete blockade of CMD could not be achieved with antagonists for the α7 nAChR, to which choline is a full agonist, but was possible with a general nAChR antagonist. The ability of choline to increase paired-pulse facilitation, and the inability of applied gamma-aminobutyric acid (GABA) to mediate further depression of fEPSPs, suggest that the principal mechanism of choline's action was the facilitation of neurotransmitter release. Our study provides evidence that choline can depress population-level activity, quite likely by facilitating the release of GABA from interneurons, and may thereby influence hippocampal function.


Ward O.P.,University of Waterloo
Advances in Experimental Medicine and Biology | Year: 2010

Microbial biosurfactants are amphipathic molecules having typical molecular weights of 500-1500 Da, made up of peptides, saccharides, or lipids, or their combinations. In biodegradation processes they mediate solubilisation, mobilization, and/or accession of hydrophobic substrates to microbes. They may be located on the cell surface or be secreted into the extracellular medium, and they facilitate uptake of hydrophobic molecules through direct cellular contact with hydrophobic solids or droplets or through micellarisation. They are also involved in cell physiological processes such as biofilm formation and detachment, and in diverse biofilm-associated processes such as wastewater treatment and microbial pathogenesis. The protection of contaminants within biosurfactant micelles may also inhibit uptake of contaminants by microbes. In bioremediation processes biosurfactants may facilitate release of contaminants from soil, but soils also tend to bind surfactants strongly, which makes their role in contaminant desorption more complex. A greater understanding of the underlying roles played by biosurfactants in microbial physiology and in biodegradative processes is developing through advances in cell and molecular biology. © 2010 Landes Bioscience and Springer Science+Business Media.


The cosmic-ray-driven electron-induced reaction of halogenated molecules adsorbed on ice surfaces has been proposed as a new mechanism for the formation of the polar ozone hole. Here, experimental findings of dissociative electron transfer reactions of halogenated molecules on ice surfaces in electron-stimulated desorption, electron trapping, and femtosecond time-resolved laser spectroscopic measurements are reviewed. This is followed by a review of the evidence from recent satellite observations for this new mechanism of the Antarctic ozone hole, and other possible physical mechanisms are also discussed. Moreover, new observations of the 11-year cyclic variations of both polar ozone loss and stratospheric cooling and the seasonal variations of CFCs and CH4 in the polar stratosphere are presented, and quantitative predictions of the future Antarctic ozone hole are given. Finally, a new observation of the effects of CFCs and cosmic-ray-driven ozone depletion on global climate change is also presented and discussed. © 2009 Elsevier B.V.


Keller H.H.,University of Waterloo
Annals of the New York Academy of Sciences | Year: 2016

Persons living with dementia have many health concerns, including poor nutritional states. This narrative review provides an overview of the literature on nutritional status in persons diagnosed with a dementing illness or condition. Poor food intake is a primary mechanism for malnutrition, and there are many reasons why poor food intake occurs, especially in the middle and later stages of the dementing illness. Research suggests a variety of interventions to improve food intake, and thus nutritional status and quality of life, in persons with dementia. Education programs have been the focus for family care partners, while residential care has seen a range of intervention activities, from tableware changes to retraining of self-feeding. It is likely that complex interventions are required to more fully address the issue of poor food intake, and future research needs to focus on diverse components. Specifically, modifying the psychosocial aspects of mealtimes is proposed as a means of improving food intake and quality of life and, to date, is a neglected area of intervention development and research. © 2016 The New York Academy of Sciences.


Created in April 2009, the Financial Stability Board (FSB) represents the G20 leaders' first major international institutional innovation. Why was it established, and what role will it play in global economic governance? The creation of the FSB has been linked to a US-led effort to strengthen an international prudential standards regime that had evolved in the years leading up to the 2007-08 global financial crisis. The FSB faces a number of serious challenges in its new role: developing effective mechanisms for monitoring and encouraging compliance; promoting the development of effective international standards and fostering consensus on their content; establishing its legitimacy vis-à-vis non-members and within member countries; and clarifying its relationship with other global governance institutions. Since these are very difficult tasks, the FSB may be forced to assume a less ambitious role in international regulatory politics than some of its creators initially envisioned. © 2010 London School of Economics and Political Science.