The University of Waterloo is a public research university in Waterloo, Ontario, Canada. Its main campus occupies 400 hectares in Uptown Waterloo, adjacent to Waterloo Park. The university offers a wide variety of academic programs, administered by six faculties and three affiliated university colleges. Waterloo is a member of the U15, a group of research-intensive universities in Canada. (Source: Wikipedia)
BASF and University of Waterloo | Date: 2017-01-25
The present invention relates to core-shell particles, each particle comprising (A) a core comprising elemental sulfur and (B) a shell, which enwraps core (A), comprising MnO2. The present invention further relates to a process for preparing said core-shell particles, to a cathode material for an electrochemical cell comprising said core-shell particles, and to a cathode and an electrochemical cell comprising said cathode materials.
Hastings M.B.,University of California at Santa Barbara |
Kallin A.B.,University of Waterloo |
Melko R.G.,University of Waterloo
Physical Review Letters | Year: 2010
We develop a quantum Monte Carlo procedure, in the valence bond basis, to measure the Rényi entanglement entropy of a many-body ground state as the expectation value of a unitary Swap operator acting on two copies of the system. An improved estimator involving the ratio of Swap operators for different subregions enables convergence of the entropy in a simulation time polynomial in the system size. We demonstrate convergence of the Rényi entropy to exact results for a Heisenberg chain. Finally, we calculate the scaling of the Rényi entropy in the two-dimensional Heisenberg model and confirm that the Néel ground state obeys the expected area law for systems up to linear size L=32. © 2010 The American Physical Society.
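The central identity behind the method, written here in generic replica-trick notation rather than the authors' exact symbols, is that the second Rényi entropy of a subregion A is minus the log of the expectation value of the Swap operator on two copies of the ground state:

```latex
% Two replicas of the ground state; Swap_A exchanges the configuration
% of region A between the two copies.
S_2(A) \;=\; -\ln \big\langle \mathrm{Swap}_A \big\rangle
       \;=\; -\ln \frac{\langle \psi \otimes \psi |\, \mathrm{Swap}_A \,| \psi \otimes \psi \rangle}
                       {\langle \psi \otimes \psi | \psi \otimes \psi \rangle} .
% The improved "ratio" estimator builds A up through a nested sequence
% of subregions, so each measured factor stays of order unity:
\big\langle \mathrm{Swap}_A \big\rangle
  \;=\; \prod_{i=1}^{m} \frac{\langle \mathrm{Swap}_{A_i} \rangle}{\langle \mathrm{Swap}_{A_{i-1}} \rangle},
\qquad \emptyset = A_0 \subset A_1 \subset \dots \subset A_m = A .
```

Each ratio remains O(1) even when the bare expectation value is exponentially small in the boundary size, which is what makes convergence possible in polynomial simulation time.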
Farag H.E.,University of Waterloo |
El-Saadany E.F.,University of Waterloo |
Seethapathy R.,Hydro One Networks Inc.
IEEE Transactions on Smart Grid | Year: 2012
The smart grid initiative rests on several pillars, among which the integration of a wide variety of distributed generation (DG) is of particular importance. The connection of a large number of DG units among loads may cause severe voltage regulation problems, and utility-side voltage regulators may no longer be able to rely on conventional control techniques. In addition, the smart grid should provide new digital technologies, such as monitoring, automatic control, and two-way communication facilities, to improve the overall performance of the network. These technologies have been applied in this paper to construct a distributed control structure capable of providing proper voltage regulation in smart distribution feeders. The functions of each controller have been defined according to the concept of intelligent agents and the characteristics of the individual DG units as well as the utility regulators. To verify the effectiveness and robustness of the proposed control structure, a real-time simulation model has been developed. The simulation results show that the distributed control structure can mitigate the interference between DG facilities and utility voltage regulators. © 2011 IEEE.
Ionicioiu R.,Macquarie University |
Ionicioiu R.,University of Waterloo |
Terno D.R.,Macquarie University |
Terno D.R.,National University of Singapore
Physical Review Letters | Year: 2011
Gedanken experiments help to reconcile our classical intuition with quantum mechanics and nowadays are routinely performed in the laboratory. An important open question is the quantum behavior of the controlling devices in such experiments. We propose a framework to analyze quantum-controlled experiments and illustrate it by discussing a quantum version of Wheeler's delayed-choice experiment. Using a quantum control has several consequences. First, it enables us to measure complementary phenomena with a single experimental setup, pointing to a redefinition of the complementarity principle. Second, it allows us to prove that there are no consistent hidden-variable theories having "particle" and "wave" as realistic properties. Finally, it shows that a photon can have a morphing behavior between particle and wave. The framework can be extended to other experiments (e.g., Bell inequality). © 2011 American Physical Society.
Coles P.J.,National University of Singapore |
Piani M.,University of Waterloo
Physical Review A - Atomic, Molecular, and Optical Physics | Year: 2014
The uncertainty principle can be expressed in entropic terms, also taking into account the role of entanglement in reducing uncertainty. The information exclusion principle instead bounds the correlations that can exist between the outcomes of incompatible measurements on one physical system and a second reference system. We provide a more stringent formulation of both the uncertainty principle and the information exclusion principle, with direct applications to, e.g., the security analysis of quantum key distribution, entanglement estimation, and quantum communication. We also highlight a fundamental distinction between the complementarity of observables in terms of uncertainty and in terms of information. © 2014 American Physical Society.
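For orientation, the standard (unstrengthened) forms that results of this kind improve on are the memory-assisted entropic uncertainty relation and Hall's information exclusion bound. In common notation, with X and Z measurements on system A, B a reference (memory) system, d the dimension of A, and c the maximal overlap of the two measurement bases:

```latex
% Entropic uncertainty with quantum memory (Berta et al. form):
H(X|B) + H(Z|B) \;\ge\; \log_2 \frac{1}{c} + H(A|B),
\qquad c = \max_{j,k} \big| \langle x_j | z_k \rangle \big|^2 .
% Information exclusion (Hall's bound): the information about X and Z
% readable from B cannot both be maximal,
I(X{:}B) + I(Z{:}B) \;\le\; \log_2 \big( d^2 c \big) .
```

A negative conditional entropy H(A|B), possible only in the presence of entanglement, weakens the uncertainty bound; this is the precise sense in which entanglement reduces uncertainty.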
Cachazo F.,Perimeter Institute for Theoretical Physics |
He S.,Perimeter Institute for Theoretical Physics |
He S.,Institute for Advanced Study |
Yuan E.Y.,Perimeter Institute for Theoretical Physics |
Yuan E.Y.,University of Waterloo
Physical Review Letters | Year: 2014
We present a compact formula for the complete tree-level S-matrix of pure Yang-Mills and gravity theories in arbitrary spacetime dimensions. The new formula for the scattering of n particles is given by an integral over the positions of n points on a sphere restricted to satisfy a dimension-independent set of equations. The integrand is constructed using the reduced Pfaffian of a 2n×2n matrix, Ψ, that depends on momenta and polarization vectors. In its simplest form, the gravity integrand is a reduced determinant which is the square of the Pfaffian in the Yang-Mills integrand. Gauge invariance is completely manifest as it follows from a simple property of the Pfaffian. © 2014 American Physical Society.
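Schematically, in the notation now standard in the literature (not reproduced verbatim from the paper), the formula integrates over n punctures σ_a on the sphere, fully localized on the scattering equations:

```latex
% s_{ab} = (k_a + k_b)^2 are Mandelstam invariants, \sigma_{ab} = \sigma_a - \sigma_b.
A_n \;=\; \int \frac{\prod_{a=1}^{n} d\sigma_a}{\operatorname{vol} \mathrm{SL}(2,\mathbb{C})}
      \;{\prod_a}'\, \delta\!\Big( \sum_{b \neq a} \frac{s_{ab}}{\sigma_{ab}} \Big)\,
      \mathcal{I}_n(\sigma, k, \epsilon),
% with integrand (up to normalization)
\mathcal{I}_n^{\mathrm{YM}} \propto \frac{\operatorname{Pf}'\Psi}{\sigma_{12}\sigma_{23}\cdots\sigma_{n1}},
\qquad
\mathcal{I}_n^{\mathrm{gravity}} \propto \operatorname{det}'\Psi = \big(\operatorname{Pf}'\Psi\big)^2 .
```

The delta functions localize the integral entirely on the solutions of the scattering equations, so the "integral" reduces to a finite sum over those solutions.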
Cespedes S.,University of Waterloo |
Shen X.,University of Waterloo |
Lazo C.,Austral University of Chile
IEEE Communications Magazine | Year: 2011
Vehicular communication networks have emerged as a promising platform for the deployment of safety and infotainment applications. The stack of protocols for vehicular networks will potentially include Network Mobility Basic Support (NEMO BS) to enable IP mobility for infotainment and Internet-based applications. However, the protocol has performance limitations in highly dynamic scenarios, and several route optimization mechanisms have been proposed to overcome these limitations. This article addresses the problem of IP mobility and its specific requirements in vehicular scenarios. A qualitative comparison among the existing IP mobility solutions that optimize NEMO BS in vehicular networks is provided. Their improvements with respect to the current standard, their weaknesses, and their fulfillment of the specific requirements are also identified. In addition, the article describes some of the open research challenges related to IP mobility in vehicular scenarios. © 2010 IEEE.
Modi K.,University of Oxford |
Modi K.,National University of Singapore |
Brodutch A.,Macquarie University |
Brodutch A.,University of Waterloo |
And 5 more authors.
Reviews of Modern Physics | Year: 2012
One of the best signatures of nonclassicality in a quantum system is the existence of correlations that have no classical counterpart. Different methods for quantifying the quantum and classical parts of correlations are among the more actively studied topics of quantum-information theory over the past decade. Entanglement is the most prominent of these correlations, but in many cases unentangled states exhibit nonclassical behavior too. Thus distinguishing quantum correlations other than entanglement provides a better division between the quantum and classical worlds, especially when considering mixed states. Here different notions of classical and quantum correlations quantified by quantum discord and other related measures are reviewed. In the first half, the mathematical properties of the measures of quantum correlations are reviewed, related to each other, and the classical-quantum division that is common among them is discussed. In the second half, it is shown that the measures identify and quantify the deviation from classicality in various quantum-information-processing tasks, quantum thermodynamics, open-system dynamics, and many-body physics. It is shown that in many cases quantum correlations indicate an advantage of quantum methods over classical ones. © 2012 American Physical Society.
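The prototypical measure reviewed here, quantum discord, is the gap between two expressions for correlations that coincide classically. In standard notation, with S the von Neumann entropy and the minimization taken over measurements {Π_k^B} on B:

```latex
% Total correlations: quantum mutual information
I(A{:}B) = S(\rho_A) + S(\rho_B) - S(\rho_{AB}) .
% Classical correlations extractable by measuring B
% (p_k is the probability of outcome k, \rho_{A|k} the conditional state of A):
J(A|B) = S(\rho_A) - \min_{\{\Pi_k^B\}} \sum_k p_k \, S(\rho_{A|k}) .
% Quantum discord: the (generally nonzero) difference
D(A|B) = I(A{:}B) - J(A|B) \;\ge\; 0 .
```

D(A|B) vanishes exactly on states that are classical with respect to B; since many separable (unentangled) mixed states have D > 0, discord captures nonclassical correlations beyond entanglement.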
Wong-Ekkabut J.,Kasetsart University |
Karttunen M.,University of Waterloo
Journal of Chemical Theory and Computation | Year: 2012
Molecular dynamics (MD) simulation has become a common technique for studying biological systems. Transport of small molecules through carbon nanotubes and membrane proteins has been an intensely studied topic, and MD simulations have provided valuable predictions, many of which have later been experimentally confirmed. Simulations of such systems pose challenges, and unexpected problems in commonly used protocols and methods have been found in the past few years. The two main reasons why some were not found earlier are that most of these newly discovered errors do not lead to unstable simulations and that some of them manifest themselves only after relatively long simulation times. We assessed the reliability of the most common simulation protocols by MD and stochastic dynamics (SD, or Langevin dynamics) simulations of an alpha-hemolysin nanochannel embedded in a palmitoyloleoylphosphatidylcholine (POPC) lipid bilayer. Our findings are that (a) reaction-field electrostatics should not be used in simulations of such systems, (b) local thermostats should be preferred over global ones, since the latter may lead to an unphysical temperature distribution, (c) neighbor lists should be updated at every time step, and (d) charge groups should be used with care and never in conjunction with reaction-field electrostatics. © 2012 American Chemical Society.
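Findings (a)-(d) map directly onto simulation settings. As an illustrative sketch only (a GROMACS-style .mdp fragment; the group names and values are examples, not the paper's exact setup):

```ini
; (a) particle-mesh Ewald rather than reaction-field electrostatics
coulombtype   = PME
; (b) a local (per-group) thermostat instead of one global bath
tcoupl        = v-rescale
tc-grps       = Protein POPC Water_and_ions
tau-t         = 0.5  0.5  0.5
ref-t         = 310  310  310
; (c) update the neighbor list at every time step
nstlist       = 1
; (d) atom-based cutoffs, avoiding charge groups altogether
cutoff-scheme = Verlet
```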
Agency: Cordis | Branch: FP7 | Program: CSA-CA | Phase: ENERGY.2009.2.9.2 | Award Amount: 1.80M | Year: 2010
The objectives are to create a framework for knowledge sharing and to develop a research roadmap for activities in the context of offshore renewable energy (RE). In particular, the project will stimulate collaboration in research activities leading towards innovative, cost-efficient and environmentally benign offshore RE conversion platforms for wind, wave and other ocean energy resources, for their combined use as well as for complementary uses such as aquaculture and monitoring of the sea environment. The use of offshore resources for RE generation is a relatively new field of interest. ORECCA will overcome the knowledge fragmentation existing in Europe and stimulate the key experts to provide useful inputs to industries, research organizations and policy makers (stakeholders) on the necessary next steps to foster the development of the ocean energy sector in a sustainable and environmentally friendly way. A focus will be given to respecting the strategies developed towards an integrated European maritime policy. The project will define the technological state of the art, describe the existing economic and legislative framework, and identify barriers, constraints and needs within them. ORECCA will enable collaboration of the stakeholders and will define the framework for future exploitation of offshore RE sources by defining two approaches: pilot testing of technologies at an initial stage, and large-scale deployment of offshore RE farms at a mature stage. ORECCA will finally develop a vision, including different technical options for deployment of offshore energy conversion platforms for different target areas in the European seas, and deliver integrated roadmaps for the stakeholders. These will define the strategic investment opportunities, the R&D priorities, and the regulatory and socio-economic aspects that need to be addressed in the short to medium term to achieve a vision and a strategy for a European policy towards the development of the offshore RE sector.
Agency: Cordis | Branch: FP7 | Program: NOE | Phase: ICT-2011.1.6 | Award Amount: 5.99M | Year: 2011
The goal of EINS is coordinating and integrating European research aimed at achieving a deeper multidisciplinary understanding of the development of the Internet as a societal and technological artefact, whose evolution is increasingly intertwined with that of human societies. Its main objective is to allow an open and productive dialogue between all the disciplines which study Internet systems under any technological or humanistic perspective, and which in turn are being transformed by the continuous advances in Internet functionalities and applications. EINS will bring together research institutions focusing on network engineering, computation, complexity, security, trust, mathematics, physics, sociology, game theory, economics, political sciences, humanities, law, energy, transport, artistic expression, and any other relevant social and life sciences.

This multidisciplinary bridging of the different disciplines may also be seen as the starting point for a new Internet Science, the theoretical and empirical foundation for a holistic understanding of the complex techno-social interactions related to the Internet. It is intended to inform the future technological, social and political choices concerning Internet technologies, infrastructures and policies made by the various public and private stakeholders, for example regarding the possible far-reaching consequences of architectural choices on social, economic, environmental or political aspects, and ultimately on quality of life at large.

The individual contributing disciplines will themselves benefit from a more holistic understanding of the Internet principles and in particular of the network effect. The unprecedented connectivity offered by the Internet plays a role often underappreciated in most of them, whereas the Internet provides both an operational development platform and a concrete empirical and experimental model.
These multi- and inter-disciplinary investigations will improve the design of elements of the Future Internet, enhance the understanding of its evolving and emerging implications at the societal level, and possibly identify universal principles for understanding the Internet-based world that will be fed back to the participating disciplines. EINS will:

- Coordinate the investigation, from a multi-disciplinary perspective, of specific topics at the intersection between humanistic and technological sciences, such as privacy & identity, reputation, virtual communities, security & resilience, and network neutrality
- Lay the foundations for an Internet Science, based inter alia on Network Science and Web Science, aiming at understanding the impact of the network effect on human societies & organisations, in its technological, economic, social & environmental aspects
- Provide concrete incentives for academic institutions and individual researchers to conduct studies across multiple disciplines, in the form of online journals, conferences, workshops, PhD courses, schools, contests, and open calls
Agency: GTR | Branch: EPSRC | Program: | Phase: Training Grant | Award Amount: 5.00M | Year: 2014
Quantum technologies promise a transformation of measurement, communication and computation by using ideas originating from quantum physics. The UK was the birthplace of many of the seminal ideas and techniques; the technologies are now ready to translate from the laboratory into industrial applications. Since international companies are already moving in this area, there is a critical need across the UK for highly skilled researchers who will be the future leaders in quantum technology. Our proposal is driven by the need to train this new generation of leaders. They will need to be equipped to function in a complex research and engineering landscape where quantum physics meets cryptography, complexity and information theory, devices, materials, software and hardware engineering. We propose to train a cohort of leaders to meet these challenges within the highly interdisciplinary research environment provided by UCL and its commercial and governmental laboratory partners. In their first year the students will obtain a background in devices, information and computational sciences through three concentrated modules organized around current research issues. They will complete a team project and a longer individual research project, preparing them for their choice of main doctoral research topic at the end of the year. Cross-cohort training in communication skills, technology transfer, enterprise, teamwork and career planning will continue throughout the four years. Peer-to-peer learning will be facilitated continually, not only by organized cross-cohort activities but also by the day-to-day social interaction among the members of the cohort, thanks to their co-location at UCL.
News Article | November 11, 2015
Researchers studying communities of microbes need to up their game. That was the argument made by two articles published on 28 October in Science and Nature, which called for national and international initiatives that would unite microbiome researchers and move the field forward. The initiatives would help researchers to develop better, standardized ways to study microbial communities so that scientists can make meaningful comparisons of data sets across different studies. Some researchers were sceptical, among them Nick Loman, a bacterial geneticist at the University of Birmingham, UK, who aired his doubts on Twitter. But the proponents say that the two articles are just starting points for broader discussion in the field. Microbiome studies focus on the bacteria and other microbes living in sites ranging from soil to the human mouth. In the Science piece, US researchers argued that for microbiology to move beyond descriptive studies towards hypothesis- and application-driven science, the field needs to bring in scientists from other disciplines and create tools that manipulate microbial communities and their genes. The authors proposed that a national Unified Microbiome Initiative (UMI) would develop and implement these tools, and called for new funding mechanisms for interdisciplinary research. The Nature article, by authors in the United States, Germany and China, responds to the US researchers’ proposal by calling for an International Microbiome Initiative (IMI). This would coordinate the efforts of a global, interdisciplinary group of scientists, including the UMI, allowing researchers to share data. “By pooling data from scientists from around the world, an IMI would generate much more knowledge than could one country alone,” the authors write. Several scientists greeted the proposals with enthusiasm, including Roman Stilling, a postdoctoral fellow at the APC Microbiome Institute at University College Cork, Ireland.
He said in an e-mail to Nature: “Standardization may help ensure reproducibility and may help other researchers with guidelines to follow when they want to start working on the microbiome too.” But Patrick Schloss, a microbiologist at the University of Michigan in Ann Arbor, questioned the need for a global initiative on Twitter. Schloss later tempered his tweets in a blog post, writing that he and others are already developing tools to study microbial data and pursuing hypothesis-driven work. He wrote that funding for interdisciplinary microbiome research would be “awesome”, but added in an interview that a lot of details are missing from the proposals. “In fairness, we don’t really know much about what is being planned,” Schloss says, adding that the proposals seem primarily to be a call for government support, rather than a concrete plan. “There’s no funding mandate, there’s nothing really. Just a bunch of ideas,” he says. These are works in progress, says microbial ecologist Jack Gilbert at the University of Chicago in Illinois, a co-author of the Science article. “To put down immediately at this point that we have a clear funding method, this is what we want to fund, these are the research areas we think are valid, would have been crass,” Gilbert says. Gilbert hopes that the proposals — and the online back-and-forth — will stimulate further discussion and the creation of new research programmes. “No one is saying that we’re going to fundamentally transform the way you do science. We’re saying we’re going to fundamentally transform the way science is funded and the way multidisciplinary science can be implemented,” he says. “This is starting a conversation.” Other scientists expressed concern that standardizing methods and data sharing might stifle creativity in a rapidly evolving field.
Loman wrote in an e-mail to Nature that there are often good reasons for methodological differences between microbiologists studying different ecosystems such as the gut or soil. “Should we standardise on one protocol?” he adds. “We don’t even know what the right technique is for many niches.” And as Noah Fierer, a microbial ecologist at the University of Colorado, Boulder, added in a blog post: “Methods are constantly changing (hopefully improving) and many of these improvements come from smaller labs that may not be directly involved in the consortium that decides the consensus methods.” These criticisms of the call for standardization surprised Nicole Dubilier, a co-author of the Nature piece who is a microbiologist at the Max Planck Institute for Marine Microbiology in Bremen, Germany. “If there’s one thing that will help the field, it’s standardization,” she says. “Standardization is the key to comparing results.” Some scientists poked fun at the grand aims of the initiatives, among them Josh Neufeld, a microbial ecologist at the University of Waterloo in Canada, who joked about them on Twitter.
News Article | December 5, 2015
Insurance company Intact Financial Corp. will collaborate with the University of Waterloo (UW) in Canada to mitigate the risks of climate change by creating the Intact Center on Climate Adaptation (ICCA). The ICCA's chief goal is to be an incubator for new measures to adapt to climate change. Intact Financial Corp. will allocate $4.25 million to support the ICCA. The center will focus on awareness and research regarding innovative solutions to climate change risks that Canadian communities face, and will be based at the university's Faculty of Environment. "The impacts of climate change are certainly being felt by the insurance sector, and indeed they're on the front line, if you will, of having to deal with the negative impacts associated with climate change," said Professor Blair Feltmate, ICCA head at the University of Waterloo. The company reported that insurance payouts have increased significantly in the past few years, and much of that can be attributed to storms caused by climate change. Feltmate said claims have gone up by $1 billion each year for the last five or six years, and that total payouts increased from between $100 million and $500 million annually to over $1 billion. He said that people are searching for ways to de-risk the system going forward through climate change adaptation in order to reduce the impacts of flooding and other factors. "People think that adaptation to climate change is always very expensive, and it doesn't have to be," added Feltmate. One of the ICCA's initiatives is to start a green infrastructure program that aims to protect communities from severe precipitation. The ICCA will also create a program that will study the vulnerabilities to extreme weather that various industries experience. Additionally, a national home adaptation audit program designed to evaluate how homes can be vulnerable to flood damage will be launched under the ICCA.
The Insurance Bureau of Canada (IBC) said property damage caused by extreme weather is the chief source of property insurance claims, which now exceed payouts from fire damage. Flooding has been the largest source of claims in recent years. The IBC said that extreme weather payouts are doubling every five to 10 years, with a record of $3.4 billion in 2013 because of severe flooding in Toronto and Alberta. Many insurers were prompted to increase premiums by 20 percent to deal with the costs of property damage caused by extreme weather, the IBC said. Meanwhile, Intact and UW previously teamed up for The Climate Adaptation Project, in which they identified the areas in Canada that are most vulnerable to climate change. The ICCA is part of a five-year partnership between the company and the university.
News Article | November 25, 2016
TORONTO, ON--(Marketwired - November 25, 2016) - Green Shield Canada (GSC) is pleased to announce Sherry Peister, Chair of the Board of Directors, has been named a Women's Executive Network (WXN) 2016 Canada's Most Powerful Women: Top 100 Award Winner. Launched in 2003, the Top 100 Awards celebrate the accomplishments of Canada's top female executive talent as well as their organizations and networks. Sherry -- who received the Accenture Corporate Directors Award, which recognizes the accomplishments of professional women in leadership roles -- was honoured during a gala celebration at the Metro Toronto Convention Centre on November 24, 2016. Sherry was appointed as GSC's first female Board Chair in 2010. Since then, she has had a profound impact on advancing GSC's strategic direction and mission by recognizing the ever-evolving nature of the health and dental benefits industry and being a strong advocate for change. Specifically, her passionate and unwavering commitment to good corporate governance has led to the introduction of new competency, recruitment, and evaluation policies for Board Directors, as well as new succession planning and risk oversight measures. As a proud supporter of women in leadership, GSC believes that balanced diversity within an organization or on a board can be organically achieved with the right structures and processes in place, and a conscious mandate to do so. Under Sherry's leadership, GSC's commitment to diversity has advanced. "Sherry has helped GSC make big strides in ensuring diversity by spearheading an amendment to the GSC General Operating By-law that now requires no more than two-thirds of the Board of Directors to be of a single gender," explains Steve Bradie, GSC President and CEO. "In addition to gender diversity, Sherry is ensuring that board recruitment efforts include consideration of skill, experience, age, education, cultural, and geographic diversity."
"Women are not just leading companies, headlines and new deals, we're doing so in record numbers. In addition to closing the gender gap for participation in post-secondary education and the workforce, we're excelling at the top levels of every sector," says WXN Owner and CEO, Sherri Stevens. "When WXN created Canada's Most Powerful Women: Top 100 Awards, part of the purpose was celebration. By recognizing a community of now 939 remarkable women, we get the opportunity to look back and appreciate the hard work and hurdle jumping it took to get here." The full list of WXN's 2016 Canada's Most Powerful Women: Top 100 Award Winners can be found at https://www.wxnetwork.com/top-100/top-100-winners/. Sherry Peister is the Chair of both the GSC Board of Directors and the GSC Foundation. A licensed pharmacist, she served as President of the Canadian Pharmacists Association from June 2013 to June 2014. Sherry has been active with both her profession and her community having served on the Ontario Pharmacists Association (President), Cambridge Memorial Hospital, Ontario College of Pharmacists, the Waterloo Region Wellington Dufferin District Health Council and the Advisory Council of the School of Pharmacy, University of Waterloo. Sherry has been a GSC Board member since 1997. As Canada's only national not-for-profit health and dental benefits specialist, GSC offers group and individual health and dental benefits programs and administration services. But our reason for being is the enhancement of the common good. We seek out innovative ways to improve access to better health for Canadians. From coast-to-coast, our service delivery includes drug, dental, extended health care, vision, hospital and travel benefits for groups, as well as programs with a focus on individuals. Supported by unique claim management strategies, advanced technology and exceptional customer service, we create customized programs for two million plan participants nation-wide. 
greenshield.ca At WXN, we inspire smart women to lead. WXN delivers innovative networking, mentoring, professional and personal development to inform, inspire, connect and recognize our global community of more than 22,000 women, men and their organizations. WXN enables our partners and corporate members to become and to be recognized as employers of choice and leaders in the advancement of women. Founded in 1997, WXN is Canada's leading organization dedicated to the advancement and recognition of women in management, executive, professional and board roles. WXN is led by CEO Sherri Stevens, owner of the award-winning, multi-million dollar Workforce Management Company Stevens Resource Group Inc., which she established in 1990. In 2008, WXN launched in Ireland, followed by London, UK in 2015, creating an international community of female leaders. More information and details are available at www.wxnetwork.com or www.top100women.ca.
News Article | August 22, 2016
MIT Physics Department Senior Research Scientist Jagadeesh S. Moodera was one of the pioneers in the field of spin-polarized magnetic tunnel junctions, which led to a thousand-fold increase in hard disk storage capacity. Using his group's expertise working with atomically thin materials that exhibit exotic features, Moodera is laying a step-by-step foundation toward a new generation of quantum computers. Moodera's group is making progress toward devices that display resistance-free, spin-polarized electrical current; that store memory at the level of single molecules; and that capture the elusive paired electron "halves" known as Majorana fermions, which are sought after as qubits for quantum computing. This work combines materials that allow the free flow of electrons only on their surface (topological insulators) with materials that lose their resistance to electricity (superconductors). Researchers call mixed layers of these materials heterostructures. A key goal is to push these effects up from ultracold temperatures to ordinary temperatures for everyday use. “Our group specializes in the growth and understanding the physical phenomena at the atomic level of any number of exotic combinations of these materials plus heterostructures with different other materials such as ferromagnetic layers or superconductors and so on,” Moodera says. Majorana fermions, which can be thought of as paired “electron halves,” may lead to creating the quantum entanglement believed necessary for quantum computers. “Our first goal is to look for the Majorana fermions, unambiguously detect them, and show this is it. It’s been the goal for many people for a long time. It’s one of those things predicted 80 years ago, and yet to be shown in a conclusive manner,” Moodera says. Moodera’s group is searching for these Majorana fermions on the surface of gold, a phenomenon predicted in 2012 by William and Emma Rogers Professor of Physics Patrick Lee and Andrew C. Potter PhD ’13.
“I have a lot of hope that it’s going to come up with something very interesting; this particular area is exotically rich,” Moodera says. His team reported progress toward this goal in a Nano Letters paper published on March 4. Postdoc Peng Wei, with fellow Moodera group postdocs Ferhat Katmis and Cui-Zu Chang, demonstrated that epitaxial (111)-oriented gold thin films become superconducting when grown on top of superconducting vanadium film. The vanadium becomes a superconductor below 4 kelvins, hundreds of degrees below room temperature. Tests show that the surface state of (111)-oriented gold also becomes superconducting, which holds out potential for this system in the search for Majorana fermions. Future work will seek to detect Majorana fermions at the ends of (111)-oriented gold nanowires. “In this kind of nanowire, in principle, we would expect Majorana fermion states to exist at the end of the nanowire instead of in the middle,” Wei explains. Moodera says, “We have not discovered Majorana fermions yet; however, we have made a very nice foundation for that.” Further results will be published soon. In a series of 2015 papers, Moodera’s group demonstrated the first reported truly zero-resistance edge current in the quantum anomalous Hall state of a topological insulator system, realizing a 1988 prediction by F. Duncan M. Haldane at Princeton University. The importance of achieving a perfect quantum anomalous Hall state at zero magnetic field, as well as of demonstrating dissipationless chiral edge current in a topological insulator, is well brought out in a Journal Club for Condensed Matter Physics commentary by Harvard University Professor Bertrand I. Halperin, a pioneer in the field. “In this system, there is a very special edge state. The bulk is insulating, but the edge is metallic,” says Cui-Zu Chang, lead author of the Nature Materials paper and Physical Review Letters paper published in April and July 2015.
“Our group is the first to show a completely dissipationless edge state, meaning that the resistance for current flow exactly becomes zero when the quantum state is reached at low temperatures,” Chang says. “If one can realize this effect, for example, at room temperature, it will be remarkably valuable. You can use this kind of effect to develop quantum electronics including the quantum computer,” Chang says. “In this kind of computer, there is minimal heating effect; the current flow is completely dissipationless; and you can also communicate over very long distance.” In a 2013 paper with collaborators from Northeastern University, Göttingen University in Germany and Spence High School in New York, Moodera and MIT postdoc Bin Li demonstrated a superconducting spin switch in a structure sandwiching an aluminum layer between europium sulfide layers. In this work, the intrinsic magnetization of europium sulfide controls superconductivity in the aluminum layer. The direction of magnetization in europium sulfide can be reversed, which can thereby switch the aluminum between superconducting and normal states, making it potentially useful for logic circuits and nonvolatile memory applications, a step toward superconducting spintronics. These experiments validated a theoretical prediction made 50 years earlier by French Nobel laureate Pierre-Gilles de Gennes. Several years ago Guoxing Miao, then a junior researcher with Moodera, observed a unique energy profile across a sandwich structure made with metallic islands confined within two europium sulfide magnetic insulator barriers. This arrangement, combining the large inherent energy separation in the nano-islands with the large magnetic field confined at the interface and the spin-selective transmission of the adjacent europium sulfide, powerfully modifies the two-dimensional electronic structure. 
They observed spin-assisted charge transfer across such a device, generating a spontaneous spin current and voltage. These unique properties can be practical for controlling spin flows in electronic devices and for energy harvesting. Published in Nature Communications in April 2014, these were unexpected fundamental results, Moodera says. Guoxing Miao is an assistant professor at the University of Waterloo and the Institute for Quantum Computing in Canada. More recently, the researchers paired europium sulfide with graphene, creating a strong edge current, which they reported March 28 in Nature Materials. “What we find is very exciting,” postdoc Peng Wei, lead author of the paper, says: “Experiments show a strong magnetic field (more than 14 Tesla) experienced by graphene originating in the europium sulfide that polarizes the spins of electrons in the graphene layer without affecting the orbital motion of the electrons.” In the device, europium sulfide produces a large field, called a magnetic exchange field, which raises the energy of spin-up electrons and lowers the energy of spin-down electrons in graphene, creating an edge current with spin-up electrons streaming in one direction and spin-down electrons streaming in the opposite direction. These effects are brought about by the confinement of electrons in these atomically thin devices, fellow postdoc Ferhat Katmis explains. At the interface between europium sulfide, which is a magnetic insulator, and graphene, Peng Wei explains, the graphene can “feel” the huge exchange field, or internal magnetism, from the europium sulfide, which can be millions of times stronger than the Earth’s magnetic field. This effect is potentially useful for spin-based memory and logic devices and possibly quantum computing. Moodera was a guest editor of the July 2014 MRS Bulletin, which highlighted progress in organic spintronics. Controlling magnetic behavior at the interface of the materials is again the key element in this approach. 
By adding magnetic sensing capability to these large organic molecules (up to hundreds of atoms per molecule), researchers can switch their magnetic orientation back and forth. Such molecules hold promise as photo-switches, color displays, and information-storage units at the molecular level. These molecules can start out completely non-magnetic, but when they are placed on the surface of a magnetic material, their behavior changes. “They share electrons at the interface. These molecules share some of their electrons into the ferromagnetic layer or the ferromagnetic layer actually gives out some of its electrons carrying with it the magnetic behavior,” Moodera explains. Electrons from the magnetic material carry a magnetic signature, which influences the organic molecule to switch between resistive and conductive states. This collaborative work between researchers in the U.S., Germany, and India was published as a Letter in Nature in 2013. Moodera and co-inventor Karthik V. Raman PhD ’11 were issued a patent in May 2014 for high-density molecular memory storage. It is one of four patents issued to Moodera and colleagues. “We have shown early stages of such a possibility of these molecules being used for storing information,” Moodera says. “This is what we want to explore. This will allow us to store information in molecules in the future.” He projects that molecular storage can increase storage density by 1,000 to 10,000 times compared to current technology. “That gives you an idea of how powerful it can become,” he says. Organic molecules have other advantages as well, he says, including lower cost, less energy consumption, flexibility and more environmentally friendly materials. “But it’s a very, very huge area, almost untapped direction where many unprecedented new phenomena might emerge if it can be patiently investigated fundamentally,” he cautions. 
Moodera is currently seeking long-term funding for this research into permanent memory devices using magnetic single molecules. “It’s a visionary program which means somebody has to be patient,” Moodera explains. “We are quite capable of doing this here if we get good support. ... Everything has to be looked at and understood, and then go further, so there is no set a priori recipe for this!” In 2009, Moodera and two MIT colleagues (the late Robert Meservey and Paul Tedrow, then group leader) shared the Oliver E. Buckley Condensed Matter Prize from the American Physical Society with Terunobu Miyazaki from Tohoku University in Japan for "pioneering work in the field of spin-dependent tunneling and for the application of these phenomena to the field of magnetoelectronics (also called spintronics)." “Jagadeesh Moodera and team were the first to show magnetoresistance from a magnetic tunnel junction at room temperature — a fundamental discovery that has enabled rapid growth of data storage capacity. All hard disk drives made since 2005 have a MTJ as the read sensor,” says Tiffany Santos ’02, PhD ’07, a former Moodera lab member who now works as a principal research engineer at HGST in San Francisco. As a materials science undergraduate and then doctoral student in Moodera's group, Santos explored spin-polarized tunneling in MTJs made of novel materials such as magnetic semiconductors and organic molecules. Santos was awarded the best-thesis prize from the Department of Materials Science and Engineering for both her BS and PhD theses. Common bar magnets, which have north and south poles, attract when opposite poles face each other and repel when like poles do. Similarly, in a magnetic tunnel junction, the current flow across the layered materials will behave differently depending on whether the magnetism of the layers points in the same, or in the opposite, direction — either resisting the flow of current or enhancing it. 
This spin tunneling work, which dates to the 1990s, revealed that pairing two thin magnetic materials separated by a thin insulator causes electrons to move, or “quantum tunnel,” through the insulator from one magnet to the other, which is why it is called a magnetic tunnel junction. “This change in the current flow, very significant, can be detected very easily,” Moodera says. Because these magnetic layers are atomically thin, their magnetism is associated not with north and south poles but with the up or down spin of electrons, a quantum property; the layers are characterized as parallel when their spins align, or antiparallel when they point in opposite directions. “So all you have to do is change from parallel to anti-parallel orientation, and there you have this beautiful spin sensor, or spin memory,” Moodera says. “This spin memory is non-volatile; that’s the most striking thing about it. You can set this particular device in a particular orientation, leave it alone, after a million years it’ll be still like that; meaning that the information which is stored here will be permanent.” Institute Professor Mildred S. Dresselhaus has known Moodera for many years, initially through his work using magnetic fields for materials research. Moodera, she says, developed expertise in spin phenomena long before they became popular topics in science and he has attained similar status in topological insulators. “His career has been all like that. He works for the love of science, and he’s not particularly interested in recognition,” Dresselhaus adds. Although Moodera has never been a faculty member, he works effectively with students and he finds his own support, she notes. “MIT is a place that can accommodate people like him,” Dresselhaus says. Limited funding means the U.S. is in danger of losing its leadership role in research, Moodera fears. 
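For readers who want a feel for the numbers, the size of the resistance change between parallel and antiparallel alignment is often estimated with Julliere's textbook model, which ties the tunnel magnetoresistance (TMR) to the spin polarizations of the two magnetic layers. The sketch below is that standard approximation, not a calculation from Moodera's papers:

```python
def julliere_tmr(p1: float, p2: float) -> float:
    """Julliere's model: fractional resistance change between antiparallel
    and parallel alignment, TMR = (R_AP - R_P) / R_P = 2*P1*P2 / (1 - P1*P2),
    where P1 and P2 are the electrode spin polarizations (between 0 and 1)."""
    return 2.0 * p1 * p2 / (1.0 - p1 * p2)

# Example: two electrodes with ~50% spin polarization give roughly a 67%
# resistance change, i.e. R_AP = R_P * (1 + TMR).
print(f"TMR = {julliere_tmr(0.5, 0.5):.0%}")  # prints: TMR = 67%
```

This is why the read sensor works: even modest electrode polarizations produce a resistance swing large enough to distinguish the two magnetic states cleanly.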
He involves high school students and undergraduates (nearly 150 so far) in his research, many becoming coauthors on the publications and patents. “When we tell the young students and postdocs, ‘Oh, physics is wonderful, you should get into research, you really can discover many things that are exciting and valuable’, we are not actually telling the whole story. Despite funding support from National Science Foundation and Office of Naval Research for our program, there is increasing uncertainty and pressure to raise research funds. ... With constant struggle for funds, one spends much time in dealing with these issues. ... We wish there is reliable and continuous support when the track record is good. Science is like art — if creative breakthroughs are needed, then the proper support should be there with long-term vision, with freedom to explore, and without breaks and uncertainties. When one looks at some of the breakthroughs we have achieved so far — magnetic tunnel junctions that drives all hard drives in computers, prototype molecular spin memory, nonvolatile perfect superconducting spin memory/switch or even the latest totally spin-polarized edge current which is perfectly dissipationless, evidently the foundations for tomorrow’s cutting edge technology, isn’t it crystal clear that such a research program be unequivocally supported to benefit our society?” he asks. Despite his lab’s prominence in spintronics and topological insulators, making further progress in the current research environment means he depends on federal and other outside grants. “If I don’t have funding, I close the shop,” he says. “Everything moves so fast, you cannot wait for tomorrow. Everything has to happen today, that’s the unfortunate thing dealing with uncertainty. It’s a lot of pressure and stress on us, particularly in the last 10 years. 
The funding situation has become so volatile that we are kept under the dark cloud, constantly concerned about what is coming next.” Yet the situation has not always been so. During a tour of his lab facilities, Moodera recalls a phone call (over 20 years ago) from an Office of Naval Research (ONR) program director, Krystl Hathaway, who suggested there was money available, his work was high-quality, and that he should apply. “That was when I had only a month or two of funds left to sustain a research program! So, I said yes! I couldn’t believe it in the beginning,” he recalls. “I put in a one-page application. In a week’s time she sent me the money to tide me over for four months. After that, I put in a real, several-page proposal for a full grant, and she supported my research program for over 10 years. Two years after this support started, research led to the discovery of the phenomenon called the tunnel magnetoresistance in 1994-95, which besides creating a vast new area of research, is also instrumental in the explosion of unbelievable storage capacity and speed in computer hard drives as we enjoy today at rock bottom cost. Most notable is that this work was mainly done with a summer high school intern who later joined MIT [Lisa Kinder, '99] and an undergraduate [Terrilyn Wong '97].” Later, when the same program officer was at a Materials Research Society (MRS) meeting in Boston, she visited Moodera’s lab and noticed the age of a key piece of thin film equipment used in creating the tunnel magnetoresistance breakthrough. It was then about 35 years old and had been cobbled together mostly from salvaged parts. Again she volunteered to provide substantial funding to build specialized equipment for a technique called molecular beam epitaxy (MBE), which is used to create ultra-clean thin films, atomic layer by atomic layer. 
On vacation in India, Moodera got a phone call from a physics administrator (the late Margaret O'Meara), telling him Hathaway from ONR was urgently looking for him. “I came back the next day, and then I spent four hours writing a proposal, which another two hours later was submitted from MIT. It all happened in one day essentially, and one week later I got $350,000, which built our first MBE system,” he says. “It’s a very versatile system that even after 20 years continues to deliver big results in the growth and investigation of the field of quantum coherent materials at present. By carefully planning and optimizing we even got some other critical parts that we needed for our other equipment in the lab.” “Dr. Hathaway, and then subsequently Dr. Chagaan Baatar, the new program director at ONR, were very happy that we produced a lot more things in the new system. It made a huge difference in our program. So that’s how sometimes it works out, and fundamental research should be supported if one looks for breakthroughs!” he says. “People come in and see, 'these people need support'. So that kind of thing should happen now, I think.” Funding for basic science has to increase manyfold, Moodera suggests. “The future is actually created and defined now. Evidently it’s very important then. If you don’t invest now, there is no future development. A vision for fundamental knowledge buildup is strongly eroding in the country now, and thus needs to be corrected before it reaches the point of no return,” he says. Moodera has been at MIT for over three decades, where his group is part of the Francis Bitter Magnet Laboratory (now part of the Plasma Science and Fusion Center) and the Department of Physics. Moodera’s lab equipment ranges from the newest two-story scanning tunneling microscope that can examine atomic surfaces and molecules under extreme cold and high magnetic fields to a 1960s’ vintage glass liquid helium cryostat, which still sees frequent use. 
“It’s not the equipment. It’s how you think about a problem and solve it, that’s our way of looking at things. ... We train real scientists here; ones that can really think, come up with something out of essentially nothing. To start from basic atoms and molecules and actually build things, completely new and understand the emerging phenomena; unexpected science can come out of it,” Moodera says. “This group has solved important physics in ferromagnetism,” postdoc Peng Wei says. “We actually have very unique equipment that cannot be seen in other labs.” A native of Bangalore, India, Moodera plays badminton, ping-pong, and tennis, and he follows world tennis, soccer, and cricket. With his wife, MIT Department of Materials Science and Engineering senior lecturer Geetha Berera, Moodera likes to hike and enjoy nature. His hobbies include gardening and bird watching.
News Article | December 7, 2015
There are lots of apps that make it easy to encrypt your phone calls and chats. But your metadata—the data about who you're talking to, when, and more—is more difficult to obscure. Which is why I was intrigued to come across a project called Vuvuzela, uploaded to the code-sharing platform GitHub last week. Vuvuzela is a prototype chat app, still under development, that not only encrypts the content of messages between two people, but as much information about those messages and the people who sent and received them as possible. "Encryption software can hide the content of messages, but adversaries can still learn a lot from metadata—which users are communicating, at what times they communicate, and so on—by observing message headers or performing traffic analysis," states an academic paper describing the project. And metadata, as we've learned from revelations about US, Canadian and UK spying operations, can reveal a lot—sometimes more than the contents of communication itself. As former NSA director Michael Hayden once infamously said, “We kill people based on metadata.” That’s how valuable such information can be. From a slide deck introducing Vuvuzela at the 2015 Symposium on Operating Systems Principles. Image: David Lazar, et al According to the paper describing Vuvuzela's capabilities, presented in October at the 2015 Symposium on Operating Systems Principles, the goal is to minimize the amount of metadata about a person or their conversation that is leaked, or can be intercepted. The only variables revealed are "the total number of users engaged in a conversation, and the total number of users not engaged in one" (it does not reveal which users are in each group). The team's work was funded by the National Science Foundation and Google. 
According to David Lazar, a PhD student at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and the primary author of Vuvuzela's code, the main reason it has traditionally been so difficult to encrypt the metadata associated with our online chats and phone calls is efficiency. "The most efficient way for a server to deliver a message from A to B is to directly send the message from A to B, but that requires the server to know who A and B are," Lazar explained in an email. "It's much harder for A's message to get delivered to B when the server can't know that A and B are talking to each other." In an example implementation of Vuvuzela, all users connect to a server, which is connected to other servers in a chain. Messages are routed between servers in this chain to an ever-changing number of "dead drops" that only user A and user B can access. From a slide deck introducing Vuvuzela at the 2015 Symposium on Operating Systems Principles. Image: David Lazar, et al "In practice, a dead drop is just a pseudorandom 128-bit number. To communicate, two users agree on a dead drop and send messages with the same dead drop ID in the 'header' of the message," explained Lazar (though, to be clear, this all happens in the background). "The last server in the Vuvuzela chain sees all of the incoming messages and their dead drop IDs. When it encounters two messages with the same dead drop ID, it exchanges the messages and sends them back down the chain. This is how users receive each other's messages." To protect users, dead drops are changed regularly, and because of the way messages are passed between servers, there's no way for an attacker who has compromised one server to tell which dead drops correspond with which users. Even if an attacker were to compromise the last server in the chain—the server that can see all the dead drop IDs—there's not much a sophisticated attacker could glean about who is talking to whom. The reason: noise. 
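The dead-drop matching Lazar describes can be pictured in a few lines of code. This is an illustrative toy, not Vuvuzela's actual implementation; the function names, and the use of SHA-256 over a shared secret to derive the 128-bit ID, are our assumptions for the sketch:

```python
import hashlib

def derive_dead_drop(shared_secret: bytes, round_number: int) -> bytes:
    """Both users derive the same pseudorandom 128-bit dead-drop ID from a
    shared secret and the round number, so the ID changes every round."""
    digest = hashlib.sha256(shared_secret + round_number.to_bytes(8, "big")).digest()
    return digest[:16]  # 128 bits

def exchange(messages: list[tuple[bytes, bytes]]) -> dict[bytes, list[bytes]]:
    """Last-server behavior: group incoming (dead_drop_id, payload) pairs
    by ID and swap the payloads of any two messages that meet at a drop."""
    by_drop: dict[bytes, list[bytes]] = {}
    for drop_id, payload in messages:
        by_drop.setdefault(drop_id, []).append(payload)
    for payloads in by_drop.values():
        if len(payloads) == 2:
            # two users met at this dead drop: each receives the other's message
            payloads[0], payloads[1] = payloads[1], payloads[0]
    return by_drop

# Alice and Bob independently derive the same drop ID for this round...
drop = derive_dead_drop(b"alice-bob-secret", 42)
# ...so the last server pairs their messages and sends each the other's.
print(exchange([(drop, b"hi bob"), (drop, b"hi alice")])[drop])
```

Note how the server never learns who Alice and Bob are; it only sees that two messages carried the same opaque 128-bit number this round.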
Vuvuzela adds noise to all the data flying around the network to make it harder to observe who's communicating with whom. Basically, Vuvuzela's goal is to create a system where, even after observing a large stream of communication over time, an attacker can't reliably distinguish who is talking to whom. According to Lazar, "Vuvuzela generates noise equivalent to 1.2 million users, even if 100 million users are using the system or just 2 users are using it." In other words, in the Vuvuzela configuration described in Lazar’s paper, it will appear to an attacker that there are 1.2 million users communicating. Vuvuzela is certainly not the only project attempting to solve this problem, but it is one of the most recent. Out of the University of Waterloo, cryptographer Ian Goldberg has been taking another approach to the problem of chat metadata, and is tackling the problem of presence—the information that chat programs leak before you even initiate a conversation—with a project called DP5. In other words, how do you privately convey that you’re online, and available to chat, to a list of authorized users? "Vuvuzela is complementary to DP5," Goldberg told me via email. "Vuvuzela protects the delivery of messages, while DP5 protects presence indication. So Alice could use DP5 to privately learn that her friend Bob was online (without revealing to anyone that she is friends with Bob), and then use Vuvuzela to privately send Bob messages (without revealing to anyone that she is communicating with Bob)." There are also a pair of chat projects, Pond and Ricochet, that have been attempting to minimize the amount of metadata they leak. Of course, it's hard to hide absolutely everything. For example, "Vuvuzela cannot hide the fact that a user is connected to the system," the paper reads. One way around this is to leave the client open all the time, so it's not possible to infer that two users always turn their clients on at, say, 9AM each day, and off shortly after their chat. 
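The noise mechanism can be pictured as each server injecting a random number of fake dead-drop accesses every round, so real conversations hide in the crowd. The sketch below draws that count from a Laplace distribution, the classic differential-privacy choice; treat the distribution and the example parameters as our assumptions for illustration, not Vuvuzela's exact specification:

```python
import math
import random

def sample_laplace(mu: float, b: float) -> float:
    """Draw one sample from a Laplace(mu, b) distribution (inverse-CDF method)."""
    u = random.random() - 0.5
    return mu - b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def fake_accesses_this_round(mu: float, b: float) -> int:
    """Number of cover messages a server injects this round: Laplace noise
    clamped to be non-negative, so the count an observer sees is dominated
    by noise rather than by the true number of conversations."""
    return max(0, round(sample_laplace(mu, b)))

# Illustrative (arbitrary) parameters: on average ~100,000 fake accesses
# per round, with spread b controlling the privacy/bandwidth trade-off.
n = fake_accesses_this_round(100_000.0, 5_000.0)
```

Because every server in the chain adds its own noise independently, even a server compromised by an attacker cannot subtract out the cover traffic added by the honest servers.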
And for all their efforts, efficiency still poses a formidable challenge. "Running a Vuvuzela server can be expensive due to the cost of bandwidth. However, the biggest drawback is the high message latency," Lazar said. "In our experiments with 1 million users, the end-to-end message latency was 37 seconds.” It’s not exactly the most real-time messaging app out there—”Perhaps the higher latency makes Vuvuzela better suited for SMS-style messaging, rather than GChat-style messaging,” Lazar offered—but at least your metadata won’t be leaking for all to see. Correction, Dec. 7: A previous version of this article indicated that "it will always appear to an attacker that there are 1.2 million users online." However, Lazar has clarified that the exact amount of noise generated by Vuvuzela changes regularly and is not static, and that the noise obfuscates how many users are communicating and not merely online, as previously stated. Motherboard regrets the error.
News Article | February 27, 2017
VANCOUVER, BC / ACCESSWIRE / February 27, 2017 / CopperBank Resources Corp. ("CopperBank" or the "Company") (CSE: CBK) (OTC PINK: CPPKF) announces that it has appointed Brigitte Dejou as an independent director and Colin Burge has joined the company's technical advisory team. Mr. Kovacevic comments, "I have known Brigitte and Colin for many years and have direct experience with both of them. CopperBank's technical team now comprises of ten shareholder aligned members who have a combined two hundred and fifty years of industry experience. Colin was a vital element to the team at Cobre Panama, who delineated a tremendous amount of additional pounds of copper from the time I was an investor in the early development of that project. Brigitte has a deep knowledge base that rounds out our technical team, especially due to her direct experience in Alaska, where she participated with the geological interpretation that added many years of mine life to TECK's Red Dog mine district. Both Brigitte and Colin will be great assets to CopperBank's stakeholders as we continue our thorough data analysis and plan the important next steps for the Pyramid Copper porphyry deposit, The San Diego Bay prospect and our Contact Copper Oxide Project." Ms. Dejou holds both a Bachelor of Engineering degree and a Master of Applied Science degree from Ecole Polytechnique de Montréal and is a member of the Ordre des Ingénieurs du Québec. Ms. Dejou has 25 years of experience in mineral exploration, including 18 years within Teck Cominco (now TECK) managing various exploration programs and two years with Osisko Mining Corporation working on the evaluation of new projects and QA/QC of existing drilling programs (Canadian Malartic, Duparquet, Hammond Reef). Ms. Dejou brings a wealth of experience in running a variety of exploration projects from grass-roots to pre-feasibility stage across North America (including Red Dog, El Limon-Morelos, Polaris, and Mesaba). 
She was also instrumental in the discovery of the Aktigiruk deposit. She has explored for a variety of commodities, both base metals (sedex and MVT Zn-Pb, porphyry Cu, magmatic Cu-Ni, VMS) and precious metals (Au, Ag, PGE). Since 2012, Ms. Dejou has been Vice President of Exploration for LaSalle Exploration. Mr. Burge is a discovery-oriented exploration geologist with 30 years' experience in project development with First Quantum Minerals and predecessor companies. Mr. Burge was part of a corporate development team at Inmet Mining Corp. that discovered and delineated more than 30 billion pounds of copper at the Cobre Panama Project, leading to First Quantum Minerals' $5 billion acquisition of the company. He gained valuable experience working with First Quantum for 3 years as the project transitioned to a mining operation and has excellent technical skills in exploration data management and the application of exploration tools, as well as a strong ability to think creatively. Mr. Burge graduated from the University of Waterloo with a Bachelor of Earth Science in 1981 and is a licensed professional geologist in British Columbia. Shareholders are encouraged to visit the Company's website for further details and biographies of each individual of CopperBank's technical team: www.copperbankcorp.com. The Company also announces that Robert McLeod will be stepping down from the Board of Directors and will remain as an important member of CopperBank's technical team. Mr. McLeod will remain as a Qualified Person for the company. Certain information in this release may constitute "forward-looking information" under applicable securities laws and necessarily involves risks and uncertainties. Forward-looking information included herein is made as of the date of this news release and CopperBank does not intend, and does not assume any obligation, to update forward-looking information unless required by applicable securities laws. 
Forward-looking information relates to future events or future performance and reflects management of CopperBank's expectations or beliefs regarding future events. In certain cases, forward-looking information can be identified by the use of words such as "plans," or "believes," or variations of such words and phrases or statements that certain actions, events or results "may," "could," "would," "might," or "will be taken," "occur," or "be achieved," or the negative of these terms or comparable terminology. Examples of forward-looking information in this news release include, but are not limited to, statements with respect to the Company's ongoing review of its existing portfolio, the involvement of CopperBank in any potential divestiture, spin-out, partnership, or other transactions involving the Company's portfolio assets, and the ability of the Company to complete any such transactions, the ability of CopperBank to enter into transactions that will ultimately enhance shareholder value, and the anticipated issuance of one million shares in connection with the satisfaction of certain loans between CopperBank and management. This forward-looking information is based, in part, on assumptions and factors that may change or prove to be incorrect, thus causing actual results, performance, or achievements to be materially different from those expressed or implied by forward-looking information. Such factors and assumptions include, but are not limited to the Company's ability to identify and complete one or more transactions involving the Company's portfolio assets that enhance shareholder value as part of management's ongoing review of strategic alternatives in the current market conditions. 
By its very nature, forward-looking information involves known and unknown risks, uncertainties and other factors which may cause the actual results, performance or achievements to be materially different from any future results, performance or achievements expressed or implied by forward-looking information. Such factors include, but are not limited to, the risk that the Company will not be able to identify and complete one or more transactions involving the Company's portfolio assets that enhance shareholder value as part of management's ongoing review of strategic alternatives in the current market conditions. Although CopperBank has attempted to identify important factors that could cause actual actions, events, or results to differ materially from forward-looking information, there may be other factors that cause actions, events, or results not to be as anticipated, estimated, or intended. There can be no assurance that forward-looking information will prove to be accurate, as actual results and future events could differ materially from those anticipated by such forward-looking information. Accordingly, readers should not place undue reliance on forward-looking information. For more information on CopperBank and the risks and challenges of its businesses, investors should review the continuous disclosure filings that are available under CopperBank's profile at www.sedar.com.
Colin was a vital element to the team at Cobre Panama, who delineated a tremendous amount of additional pounds of copper from the time I was an investor in the early development of that project. Brigitte has a deep knowledge base that rounds out our technical team, especially due to her direct experience in Alaska, where she participated with the geological interpretation that added many years of mine life to TECK's Red Dog mine district. Both Brigitte and Colin will be great assets to CopperBank's stakeholders as we continue our thorough data analysis and plan the important next steps for the Pyramid Copper porphyry deposit, The San Diego Bay prospect and our Contact Copper Oxide Project." Ms. Dejou holds both a Bachelor of Engineering degree and a Masters of Applied Science degree from Ecole Polytechnique de Montréal and is a member of the Ordre des Ingénieurs du Québec. Ms. Dejou has 25 years of experience in mineral exploration, including 18 years within Teck Cominco (now TECK) managing various exploration programs and two years with Osisko Mining Corporation working on the evaluation of new projects and QA/QC of existing drilling programs (Canadian Malartic, Duparquet, Hammond Reef). Ms. Dejou brings a wealth of experience in running a variety of exploration projects from grass-roots to pre-feasibility stage across North America (including Red Dog, El Limon-Morelos, Polaris, and Mesaba). She was also instrumental in the discovery of the Aktigiruk deposit. She has explored for a variety of commodities both for base metals (sedex and MVT Zn-Pb, porphyry Cu, magmatic Cu-Ni, VMS) and precious metals (Au, Ag, PGE). Since 2012, Ms. Dejou is Vice President Exploration for LaSalle Exploration. Mr. Burge is a discovery oriented exploration geologist with 30 years' experience in project development with First Quantum Minerals and predecessor companies. Mr. Burge was part of a corporate development team at Inmet Mining Corp. 
that discovered and delineated more than 30 billion pounds of copper at the Cobre Panama Project, leading to First Quantum Minerals' $5-billion acquisition of the company. He gained valuable experience working with First Quantum for three years as the project transitioned to a mining operation, and has excellent technical skills in exploration data management and the application of exploration tools, as well as a strong ability to think creatively. Mr. Burge graduated from the University of Waterloo with a Bachelor of Earth Science in 1981 and is a licensed professional geologist in British Columbia. Shareholders are encouraged to visit the Company's website for further details and biographies of each member of CopperBank's technical team: www.copperbankcorp.com The Company also announces that Robert McLeod will be stepping down from the Board of Directors but will remain an important member of CopperBank's technical team. Mr. McLeod will also remain a Qualified Person for the company. Certain information in this release may constitute "forward-looking information" under applicable securities laws and necessarily involves risks and uncertainties. Forward-looking information included herein is made as of the date of this news release and CopperBank does not intend, and does not assume any obligation, to update forward-looking information unless required by applicable securities laws. Forward-looking information relates to future events or future performance and reflects management of CopperBank's expectations or beliefs regarding future events. In certain cases, forward-looking information can be identified by the use of words such as "plans," or "believes," or variations of such words and phrases or statements that certain actions, events or results "may," "could," "would," "might," or "will be taken," "occur," or "be achieved," or the negative of these terms or comparable terminology. 
Examples of forward-looking information in this news release include, but are not limited to, statements with respect to the Company's ongoing review of its existing portfolio, the involvement of CopperBank in any potential divestiture, spin-out, partnership, or other transactions involving the Company's portfolio assets, and the ability of the Company to complete any such transactions, the ability of CopperBank to enter into transactions that will ultimately enhance shareholder value, and the anticipated issuance of one million shares in connection with the satisfaction of certain loans between CopperBank and management. This forward-looking information is based, in part, on assumptions and factors that may change or prove to be incorrect, thus causing actual results, performance, or achievements to be materially different from those expressed or implied by forward-looking information. Such factors and assumptions include, but are not limited to the Company's ability to identify and complete one or more transactions involving the Company's portfolio assets that enhance shareholder value as part of management's ongoing review of strategic alternatives in the current market conditions. By its very nature, forward-looking information involves known and unknown risks, uncertainties and other factors which may cause the actual results, performance or achievements to be materially different from any future results, performance or achievements expressed or implied by forward-looking information. Such factors include, but are not limited to, the risk that the Company will not be able to identify and complete one or more transactions involving the Company's portfolio assets that enhance shareholder value as part of management's ongoing review of strategic alternatives in the current market conditions. 
Although CopperBank has attempted to identify important factors that could cause actual actions, events, or results to differ materially from forward-looking information, there may be other factors that cause actions, events, or results not to be as anticipated, estimated, or intended. There can be no assurance that forward-looking information will prove to be accurate, as actual results and future events could differ materially from those anticipated by such forward-looking information. Accordingly, readers should not place undue reliance on forward-looking information. For more information on CopperBank and the risks and challenges of its businesses, investors should review the continuous disclosure filings that are available under CopperBank's profile at www.sedar.com.
News Article | October 12, 2016
Neandertals are the comeback kids of human evolution. A mere decade ago, the burly, jut-jawed crowd was known as a dead-end species that lost out to us, Homo sapiens. But once geneticists began extracting Neandertal DNA from fossils and comparing it with DNA from present-day folks, the story changed. Long-gone Neandertals rode the double helix express back to evolutionary relevance as bits of their DNA turned up in the genomes of living people. A molecular window into interbreeding between Neandertals and ancient humans was suddenly flung open. Thanks to ancient hookups, between 20 and 35 percent of Neandertals’ genes live on in various combinations from one person to another. About 1.5 to 4 percent of DNA in modern-day non-Africans’ genomes comes from Neandertals, a population that died out around 40,000 years ago. Even more surprising, H. sapiens’ Stone Age dalliances outside their own kind weren’t limited to Neandertals. Ancient DNA shows signs of interbreeding between now-extinct Neandertal relatives known as Denisovans and ancient humans. Denisovans’ DNA legacy still runs through native populations in Asia and the Oceanic islands. Between 1.9 and 3.4 percent of present-day Melanesians’ genes can be traced to Denisovans (SN Online: 3/17/16). Other DNA studies finger unknown, distant relatives of Denisovans as having interbred with ancestors of native Australians and Papuans (see "Single exodus from Africa gave rise to today’s non-Africans"). Genetic clues also suggest that Denisovans mated with European Neandertals. These findings have renewed decades-old debates about the evolutionary relationship between humans and extinct members of our evolutionary family, collectively known as hominids. Conventional wisdom that ancient hominid species living at the same time never interbred or, if they did, produced infertile offspring no longer holds up. 
But there is only so much that can be inferred from the handful of genomes that have been retrieved from Stone Age individuals so far. DNA from eons ago offers little insight into how well the offspring of cross-species flings survived and reproduced or what the children of, say, a Neandertal mother and a human father looked like. Those who suspect that Neandertals and other Stone Age hominid species had a big evolutionary impact say that ancient DNA represents the first step to understanding the power of interbreeding in human evolution. But it’s not enough. Accumulating evidence of the physical effects of interbreeding, or hybridization, in nonhuman animals may offer some answers. Skeletal studies of living hybrid offspring — for example, in wolves and monkeys — may tell scientists where to look for signs of interbreeding on ancient hominid fossils. Scientists presented findings on hybridization’s physical effects in a variety of animals in April at the annual meeting of the American Association of Physical Anthropologists in Atlanta. Biological anthropologist Rebecca Ackermann of the University of Cape Town in South Africa co-organized the session to introduce researchers steeped in human evolution to the ins and outs of hybridization in animals and its potential for helping to identify signs of interbreeding on fossils typically regarded as either H. sapiens or Neandertals. “I was astonished by the number of people who came up to me after the session and said that they hadn’t even thought about this issue before,” Ackermann says. Interbreeding is no rare event. Genome comparisons have uncovered unexpectedly high levels of hybridization among related species of fungi, plants, rodents, birds, bears and baboons, to name a few. Species often don’t fit the traditional concept of populations that exist in a reproductive vacuum, where mating happens only between card-carrying species members. 
Evolutionary biologists increasingly view species that have diverged from a common ancestor within the last few million years as being biologically alike enough to interbreed successfully and evolve as interconnected populations. These cross-species collaborations break from the metaphor of an evolutionary tree sprouting species on separate branches. Think instead of a braided stream, with related species flowing into and out of genetic exchanges, while still retaining their own distinctive looks and behaviors. Research now suggests that hybridization sometimes ignites helpful evolutionary changes. An initial round of interbreeding — followed by hybrid offspring mating among themselves and with members of parent species — can result in animals with a far greater array of physical traits than observed in either original species. Physical variety in a population provides fuel for natural selection, the process by which individuals with genetic traits best suited to their environment tend to survive longer and produce more offspring. Working in concert with natural selection and random genetic changes over time, hybridization influences evolution in other ways as well. Depending on available resources and climate shifts, among other factors, interbreeding may stimulate the merger of previously separate species or, conversely, prompt one of those species to die out while another carries on. The birth of new species also becomes possible. In hybrid zones where the ranges of related species overlap, interbreeding regularly occurs. “Current evidence for hybridization in human evolution suggests not only that it was important, but that it was an essential creative force in the emergence of our species,” Ackermann says. A vocal minority of researchers have argued for decades that signs of interbreeding with Neandertals appear in ancient human fossils. In their view, H. 
sapiens interbred with Asian and European Neandertals after leaving Africa at least 60,000 years ago (SN: 8/25/12, p. 22). They point to some Stone Age skeletons, widely regarded as H. sapiens, that display unusually thick bones and other Neandertal-like features. Critics of that view counter that such fossils probably come from particularly stocky humans or individuals who happened to develop a few unusual traits. Interbreeding with Neandertals occurred too rarely to make a dent on human anatomy, the critics say. One proposed hybrid fossil has gained credibility because of ancient DNA (SN: 6/13/15, p. 11). A 37,000- to 42,000-year-old human jawbone found in Romania’s Oase Cave contains genetic fingerprints of a Neandertal ancestor that had lived only four to six generations earlier than the Oase individual. Since the fossil’s discovery in 2002, paleoanthropologist Erik Trinkaus of Washington University in St. Louis has argued that it displays signs of Neandertal influence, including a wide jaw and large teeth that get bigger toward the back of the mouth. In other ways, such as a distinct chin and narrow, high-set nose, a skull later found in Oase Cave looks more like that of a late Stone Age human than a Neandertal. Roughly 6 to 9 percent of DNA extracted from the Romanian jaw comes from Neandertals, the team found. “That study gave me great happiness,” Ackermann says. Genetic evidence of hybridization finally appeared in a fossil that had already been proposed as an example of what happened when humans dallied with Neandertals. Hybridization clues such as those seen in the Oase fossil may dot the skulls of living animals as well. Skull changes in mouse hybrids, for instance, parallel those observed on the Romanian fossil, Ackermann’s Cape Town colleague Kerryn Warren reported at the anthropology meeting in April. Warren and her colleagues arranged laboratory liaisons between three closely related house mouse species. 
First-generation mouse hybrids generally displayed larger heads and jaws and a greater variety of skull shapes than their purebred parents. In later generations, differences between hybrid and purebred mice began to blur. More than 80 percent of second-generation hybrids had head sizes and shapes that fell in between those of their hybrid parents and purebred grandparents. Ensuing generations, including offspring of hybrid-purebred matches, sported skulls that generally looked like those of a purebred species with a few traits borrowed from another species or a hybrid line. Borrowed traits by themselves offered no clear road map for retracing an animal’s hybrid pedigree. There’s a lesson here for hominid researchers, Ackermann warns: Assign fossils to one species or another at your own risk. Ancient individuals defined as H. sapiens or Neandertals or anything else may pull an Oase and reveal a hybrid face. Part of the reason for Ackermann’s caution stems from evidence that hybridization tends to loosen genetic constraints on how bodies develop. That’s the implication of studies among baboons, a primate viewed as a potential model for hybridization in human evolution. Six species of African baboons currently interbreed in three known regions, or hybrid zones. These monkeys evolved over the last several million years in the same shifting habitats as African hominids. At least two baboon species have inherited nearly 25 percent of their DNA from a now-extinct baboon species that inhabited northern Africa, according to preliminary studies reported at the anthropology meeting by evolutionary biologist Dietmar Zinner of the German Primate Center in Göttingen. Unusual arrangements of 32 bony landmarks on the braincase appear in second-generation baboon hybrids, Cape Town physical anthropologist Terrence Ritzman said in another meeting presentation. 
Such alterations indicate that interbreeding relaxes evolved biological limits on how skulls grow and take shape in baboon species, he concluded. In line with that proposal, hybridization in baboons and many other animals results in smaller canine teeth and the rotation of other teeth in their sockets relative to parent species. Changes in the nasal cavity of baboons showed up as another telltale sign of hybridization in a recent study by Ackermann and Kaleigh Anne Eichel of the University of Waterloo, Canada. The researchers examined 171 skulls from a captive population of yellow baboons, olive baboons and hybrid offspring of the two species. Skulls were collected when animals died of natural causes at a primate research center in San Antonio. Scientists there tracked the purebred or hybrid backgrounds of each animal. First-generation hybrids from the Texas baboon facility, especially males, possessed larger nasal cavities with a greater variety of shapes, on average, than either parent species, Ackermann and Eichel reported in the May Journal of Human Evolution. Male hybrid baboons, in general, have large faces and boxy snouts. Similarly, sizes and shapes of the mid-face vary greatly from one Eurasian fossil hominid group to another starting around 126,000 years ago, says paleoanthropologist Fred Smith of Illinois State University in Normal. Mating between humans and Neandertals could have produced at least some of those fossils, he says. One example: A shift toward smaller, humanlike facial features on Neandertal skulls from Croatia’s Vindija Cave. Neandertals lived there between 32,000 and 45,000 years ago. Smith has long argued that ancient humans interbred with Neandertals at Vindija Cave and elsewhere. Ackermann agrees. Ancient human skulls with especially large nasal cavities and unusually shaped braincases actually represent human-Neandertal hybrids, she suggests. 
She points to fossils, dating to between 80,000 and 120,000 years ago, found at the Skhul and Qafzeh caves in Israel. Eurasian Neandertals mated with members of much larger H. sapiens groups before getting swamped by the African newcomers’ overwhelming numbers, Smith suspects. He calls it “extinction by hybridization.” Despite disappearing physically, “Neandertals left a genetic and biological mark on humans,” he says. Some Neandertal genes eluded extinction, he suspects, because they were a help to humans. Several genetic studies suggest that present-day humans inherited genes from both Neandertals and Denisovans that assist in fighting infections (SN: 3/5/16, p. 18). One physical characteristic of hybridization in North American gray wolves is also a sign of interbreeding’s health benefits. Genetic exchanges with coyotes and dogs have helped wolves withstand diseases in new settings, says UCLA evolutionary biologist Robert Wayne. “There are few examples of hybridization leading to new mammal species,” Wayne says. “It’s more common for hybridization to enhance a species’ ability to survive in certain environments.” Despite their name, North American gray wolves often have black fur. Wayne and his colleagues reported in 2009 that black coat color in North American wolves stems from a gene variant that evolved in dogs. Interbreeding with Native American dogs led to the spread of that gene among gray wolves, the researchers proposed. The wolves kept their species identity, but their coats darkened with health benefits, the scientists suspect. Rather than offer camouflage in dark forests, the black-coat gene appears to come with resistance to disease, Wayne said at the anthropology meeting. Black wolves survive distemper and mange better than their gray-haired counterparts, he said. 
Similarly, DNA comparisons indicate that Tibetan gray wolves acquired a gene that helps them survive at high altitudes by interbreeding with mastiffs that are native to lofty northern Asian locales. Intriguingly, genetic evidence also suggests that present-day Tibetans inherited a high-altitude gene from Denisovans or a closely related ancient population that lived in northeast Asia. Labeling gray wolf hybrids as separate wolf species is a mistake, Wayne and colleagues contend (SN: 9/3/16, p. 7). Hybrids smudge the lines that scientists like to draw between living species as well as fossil hominid species, Wayne says. Like wolves, ancient hominids were medium-sized mammals that traveled great distances. It’s possible that an ability to roam enabled humans, Neandertals and Denisovans to cross paths in more populated areas, resulting in hybrid zones, paleoanthropologist John Hawks of the University of Wisconsin–Madison suggests. Hominids may have evolved traits suited to particular climates or regions. If so, populations may have rapidly dispersed when their home areas underwent dramatic temperature and habitat changes. Instead of slowly moving across the landscape and stopping at many points along the way, hominid groups could have trekked a long way before establishing camps in areas where other hominids had long hunted and foraged. Perhaps these camps served as beachheads from which newcomers ventured out to meet and mate with the natives, Hawks says. All ancient hominid populations were genetically alike enough, based on ancient DNA studies, to have been capable of interbreeding, Hawks said at the anthropology meeting. Specific parts of Asia and Europe could have periodically become contact areas for humans, Neandertals, Denisovans and other hominids. Beneficial genes would have passed back and forth, and then into future generations. Ackermann sees merit in that proposal. 
Hominid hybrid territories would have hosted cultural as well as genetic exchanges among populations, she says, leading to new tool-making styles, social rituals and other innovations. “These weren’t necessarily friendly exchanges,” Ackermann says. Many historical examples describe cultural exchange involving populations that succumb to invaders but end up transforming their conquerors’ way of life. However genes, behaviors and beliefs got divvied up in the Stone Age, a mix of regional populations — including Neandertals and Denisovans — can be considered human ancestors, she theorizes. They all contributed to human evolution’s braided stream. That’s a controversial view. Neandertals and Denisovans lived in relatively isolated areas where contact with other hominid populations was probably rare, says paleoanthropologist Matthew Tocheri of Lakehead University in Thunder Bay, Canada. Random DNA alterations, leading to the spread of genes that happened to promote survival in specific environments, played far more important roles in human evolution than occasional hybridization did, Tocheri predicts. Neandertals and Denisovans can’t yet boast of being undisputed hybrid powers behind humankind’s rise. But a gallery of interbreeding animals could well help detect hybrid hominids hiding in plain sight in the fossil record. This article appears in the October 15, 2016, issue of Science News with the headline, "The Hybrid Factor: The physical effects of interbreeding among animals may offer clues to Neandertals' genetic mark on humans." This article was corrected on October 12, 2016, to note Fred Smith.
News Article | December 23, 2016
Researchers from the University of Waterloo Center for Automotive Research (WatCAR) in Canada are modifying a Lincoln MKZ Hybrid for autonomous drive-by-wire operation. The research platform, dubbed “Autonomoose,” is equipped with a full suite of radar, sonar, lidar, inertial and vision sensors; an NVIDIA DRIVE PX 2 AI platform (earlier post) to run a complete autonomous driving system, integrating sensor fusion, path planning, and motion control software; and a custom autonomy software stack being developed at Waterloo as part of the research. Recently, the Autonomoose autonomously drove a crew of Ontario Ministry of Transportation officials to the podium of a launch event to introduce the first car approved to hit the roads under the province’s automated vehicle pilot program. Operating at 24 trillion deep learning operations per second, DRIVE PX 2 enables Autonomoose to navigate Ontario’s city streets and highways, even in inclement weather. The WatCAR research team has Autonomoose operating at level 2 autonomy, where the driver must be prepared to take over from the system in the event it fails to respond to a situation properly. Over the duration of the research program, the team will advance the automation through level 3—where drivers can turn their attention away in certain environments, such as freeways—and ultimately reach level 4, where the automated system can control the car under almost all circumstances. Ontario is the first province in Canada to create a pilot program to test automated vehicles on its roads. WatCAR was the first applicant and the first approved participant to test a vehicle on public roads. Public road testing of Autonomoose in both ideal and adverse weather conditions will begin early next year. 
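The level 2 → 3 → 4 progression described above follows the SAE driving-automation scale (SAE J3016). As a rough illustrative sketch of what each step means for the human driver — the class and function names below are my own, not part of WatCAR's software:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving-automation levels referenced in the article."""
    DRIVER_ASSISTANCE = 1       # system assists; driver steers and brakes
    PARTIAL_AUTOMATION = 2      # Autonomoose today: driver must monitor constantly
    CONDITIONAL_AUTOMATION = 3  # driver may look away in approved domains (e.g. freeways)
    HIGH_AUTOMATION = 4         # system is its own fallback in most conditions

def driver_must_supervise(level: SAELevel) -> bool:
    """Below level 3, the human driver remains the fallback at all times."""
    return level < SAELevel.CONDITIONAL_AUTOMATION

print(driver_must_supervise(SAELevel.PARTIAL_AUTOMATION))      # True
print(driver_must_supervise(SAELevel.CONDITIONAL_AUTOMATION))  # False
```

The key boundary the research program aims to cross is exactly the one the predicate encodes: at level 2 the system can fail to respond and the driver must catch it; from level 3 up, responsibility starts shifting to the vehicle.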
The province places no restriction on where these test vehicles can be driven—an advantage compared to most programs around the world, which restrict driving to certain areas of cities or highways. Canada’s Natural Sciences and Engineering Research Council (NSERC) provided initial research funding for Autonomoose. Nine professors from the Faculty of Engineering and the Faculty of Mathematics are involved. Specific projects include:
News Article | December 6, 2016
People must be part of the equation in conservation projects. This will increase local support and the effectiveness of conservation. That's the main conclusion of a study published online Nov. 29 in the journal Biological Conservation. In it, an international group of scientists recognizes the need to consider humans' livelihoods, cultural traditions and dependence on natural resources when planning and carrying out conservation projects around the world. "We really need to think about people as we're creating conservation initiatives. Forgetting about humans in the conservation recipe is like forgetting yeast in a loaf of bread," said lead author Nathan Bennett, a researcher at the University of Washington, the University of British Columbia and Stanford University. As the Earth continues to lose species and natural resources, the common approach to conservation has been to emphasize natural science to solve ecological problems, leaving people's relationships to natural resources out of the discussion. Increasingly, natural scientists and social scientists are partnering to try to consider both the needs of nature and of stakeholders. But for lack of good precedent, funding and will, conservation organizations and activities often don't fully consider the human dimensions of conservation, the authors found. "When people are ignored and conservation measures are put in, we see opposition, conflict and often failure," Bennett said. "These problems require the best available evidence, and that includes having both natural and social scientists at the table." This paper follows dozens of studies that point out the need for humans to be considered in environmental management and conservation, but few have articulated the benefits of doing so and exactly how to do this, Bennett explained. This review paper is the first to bring together the entire storyline by listing the practical contributions the variety of social sciences can offer to improve conservation. 
"This paper helps us to move beyond statements about the need for this toward actually setting the agenda," Bennett said. Two years ago, Bennett convened an international working group to find ways to practically involve more social scientists from fields such as geography, history, anthropology and economics in conservation projects. This paper is one of several outcomes from that working group. Another paper published in July 2016 suggests that conservation organizations and funders should put more emphasis on social sciences and explains what an ideal "conservation team" could look like. This new study calls for action to ensure that we have learned the lessons from past failures and successes of ignoring or considering human dimensions in conservation. In Thailand, for example, officials set up a series of marine protected areas along the country's coastline to try to conserve threatened habitats, including coral reefs, mangroves and seagrass meadows. But they didn't consider the thousands of fishermen and women who live near or inside the marine protected areas and rely on fishing and harvesting for livelihoods and feeding their families. Fishing bans and unfair treatment have led to resentment and opposition. In one case, fishermen burned a ranger station in protest. To add to the divisiveness, big commercial boats still caught fish in these areas because the protection zones were not well enforced. A recent successful example was the creation of California's marine protected area network. Local fisheries and communities, along with scientists, fishery managers, government and industry, were all brought to the table and the outcome ultimately was supported by most groups involved, Bennett explained. Similarly, right now in British Columbia planning for marine protected areas is underway, and First Nations leaders are working alongside local and federal governments. 
Successful conservation projects happen when both natural and social scientists are working with government, nonprofits, resource managers and local communities to come up with solutions that benefit everyone. This can take more time and resources at the outset, but Bennett and his collaborators argue that social scientists are often in a position to help make this a more efficient process. "Ignoring the people who live in an area can be a costly mistake for conservation. This is one of those cases where an ounce of prevention can be worth more than a pound of cure," he said. "Specialists in the social sciences can develop more creative, robust and effective solutions to environmental problems that people are going to get behind." Patrick Christie, a UW professor in the School of Marine and Environmental Affairs, is a co-author on the paper. Other co-authors are from the University of British Columbia, Stanford University, the University of Guelph, the University of Saskatchewan, the American Museum of Natural History in New York, the University of Victoria, the University of Wyoming, the University of Waterloo, the International Union for Conservation of Nature in Switzerland, Oregon State University, Memorial University of Newfoundland, Cornell University, Slippery Rock University, Georgia State University and World Wildlife Fund International. This study and co-authors were funded by the Canadian Wildlife Federation, the Social Science and Humanities Research Council, the Liber Ero Foundation, Fulbright Canada, the Smith Fellowship Program, the National Science Foundation and a number of other organizations. See the paper for a complete list. For more information, contact Bennett at email@example.com or 360-820-0181.
News Article | November 9, 2015
While lithium-ion batteries are currently the gold standard and are used in most of our electronics -- smartphones, computers, and more -- they have their shortcomings. Compared to other battery technologies being tested in labs, they're not the best when it comes to energy storage capacity and lifetime. Researchers around the world are looking for a better battery that can store more energy, weigh less and last longer than traditional lithium-ion batteries. While many researchers have been working on entirely different battery technologies, researchers at the University of Waterloo decided to address the shortcomings of the lithium-ion battery head-on. The researchers decided to focus on the negative electrode, or anode, of Li-ion batteries, usually made from graphite. “Graphite has long been used to build the negative electrodes in lithium-ion batteries,” said Professor Chen, the Canada Research Chair in Advanced Materials for Clean Energy and a member of the Waterloo Institute for Nanotechnology and the Waterloo Institute for Sustainable Energy. “But as batteries improve, graphite is slowly becoming a performance bottleneck because of the limited amount of energy that it can store.” With a graphite electrode, the maximum theoretical capacity is 370 mAh/g (milliamp hours per gram), but silicon has a theoretical capacity of 4,200 mAh/g. Silicon also has the added benefit of being much cheaper. So, why hasn't silicon been used already? The problem with silicon is that it interacts with the lithium inside the cell during the charge cycle and expands and contracts by as much as 300 percent. That expansion causes cracks and ultimately causes the battery to fail. The research team figured out a way to minimize the expansion by using a flash heat treatment for the silicon electrodes that creates a "robust nanostructure." 
This structure resulted in less contact between the electrode and the lithium, which cut out most of the expansion and contraction and made the battery much more stable. The new design had a capacity of more than 1,000 mAh/g over 2,275 cycles, and the researchers say the design promises a 40 to 60 percent increase in energy density over traditional lithium-ion batteries. That means an electric car with this new battery design could have a range of over 300 miles, while the batteries themselves would be lighter, reducing the overall weight of the vehicle. The team is working on commercializing the new design and hopes it will be in new batteries within the next year.
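The 370 vs. 4,200 mAh/g figures quoted in the article follow directly from Faraday's law applied to the fully lithiated phases — LiC6 for graphite and roughly Li22Si5 (4.4 Li per Si) for silicon. A quick back-of-the-envelope check (the function name is mine, purely for illustration):

```python
# Theoretical gravimetric capacity from Faraday's law:
#   capacity (mAh/g) = n * F / (3.6 * M)
# where n = moles of Li stored per mole of host atoms,
#       F = 96485 C/mol (Faraday constant),
#       M = molar mass of the host (g/mol); 3.6 converts coulombs to mAh.

F = 96485.0  # Faraday constant, C/mol

def theoretical_capacity_mah_per_g(n_li: float, molar_mass: float) -> float:
    """Capacity contributed by the host material alone, in mAh/g."""
    return n_li * F / (3.6 * molar_mass)

# Graphite: LiC6 -> 1 Li per formula unit of 6 carbons (M = 6 * 12.011 g/mol)
graphite = theoretical_capacity_mah_per_g(1.0, 6 * 12.011)

# Silicon: Li22Si5 -> 22/5 = 4.4 Li per Si atom (M = 28.086 g/mol)
silicon = theoretical_capacity_mah_per_g(4.4, 28.086)

print(round(graphite), round(silicon))  # 372 4199
```

The results land on the article's ~370 and ~4,200 mAh/g numbers, which makes the roughly elevenfold capacity gap — and the appeal of silicon despite its 300 percent swelling — easy to see.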
News Article | December 6, 2016
HOUSTON--(BUSINESS WIRE)--The 2017 Underground Construction Technology International Conference & Exhibition (UCT) is taking its benchmark education program to the Fort Worth Convention Center, Jan. 31-Feb. 2. Presented by Underground Construction magazine, UCT is the largest event in the United States focusing on underground utility infrastructure rehabilitation and construction. Conference attendees will learn the latest in underground utility pipe rehabilitation and new construction using both trenchless and open-cut technologies through hands-on demonstrations in the exhibit hall or by real-world case histories, presentations and panel discussions in the seminars. The educational program offers 27 Professional Development Hours, reviewed and certified by The University of Texas at Arlington (Continuing Education Units are also available). Attendee demographics include public works, telecom, gas and electric, government officials, contractors, engineers, manufacturers and suppliers. A wide variety of technologies are highlighted at the conference such as horizontal directional drilling, pipe bursting, cured-in-place pipe and dozens more. The city of Fort Worth’s Water Director, John Robert Carman, will deliver the welcome and keynote address at 9 a.m. on Tuesday, Jan. 31, immediately following a continental breakfast for municipal personnel, contractors, engineers and vendors. Academic sponsors lending their expertise to the program include: the Center for Underground Infrastructure Research and Education, University of Texas at Arlington; the Trenchless Technology Center, Louisiana Tech University; the Center for Innovative Grouting Materials & Technology, University of Houston; Vanderbilt University; Colorado School of Mines; Swim Center at Virginia Tech University; Del E. Webb School of Construction, Arizona State University; the Centre for Advancement of Trenchless Technologies, University of Waterloo (Canada); and Oklahoma State University. 
UCT also has the support of industry associations such as the National Association of Sewer Service Companies, Distribution Contractors Association, Power & Communication Contractors Association, NACE International, Pipe Line Contractors Association, Interstate Natural Gas Association of America, North American Society For Trenchless Technology, American Gas Association, Southern Gas Association and many more. A limited number of exhibit booths and sponsor opportunities remain. Press registration is complimentary. Early registration discounts are available for multiple attendees from the same company. More information is available at uctonline.com, or contact Karen Francis at firstname.lastname@example.org. UCT and Underground Construction are produced and managed by Oildom Publishing Company of Texas. Oildom Publishing produces directories, events, magazines and webinars, focused on the energy pipeline and underground utility industry.
News Article | February 15, 2017
Astronomers using the Atacama Large Millimeter/submillimeter Array (ALMA) have discovered a surprising connection between a supermassive black hole and the galaxy where it resides. Powerful radio jets from the black hole - which normally suppress star formation - are stimulating the production of cold gas in the galaxy's extended halo of hot gas. This newly identified supply of cold, dense gas could eventually fuel future star birth as well as feed the black hole itself. The researchers used ALMA to study a galaxy at the heart of the Phoenix Cluster, an uncommonly crowded collection of galaxies about 5.7 billion light-years from Earth. The central galaxy in this cluster harbors a supermassive black hole that is in the process of devouring star-forming gas, which fuels a pair of powerful jets that erupt from the black hole in opposite directions into intergalactic space. Astronomers refer to this type of black-hole powered system as an active galactic nucleus (AGN). Earlier research with NASA's Chandra X-ray observatory revealed that the jets from this AGN are carving out a pair of giant "radio bubbles," huge cavities in the hot, diffuse plasma that surrounds the galaxy. These expanding bubbles should create conditions that are too inhospitable for the surrounding hot gas to cool and condense, which are essential steps for future star formation. The latest ALMA observations, however, reveal long filaments of cold molecular gas condensing around the outer edges of the radio bubbles. These filaments extend up to 82,000 light-years from either side of the AGN. They collectively contain enough material to make about 10 billion suns. "With ALMA we can see that there's a direct link between these radio bubbles inflated by the supermassive black hole and the future fuel for galaxy growth," said Helen Russell, an astronomer with the University of Cambridge, UK, and lead author on a paper appearing in the Astrophysical Journal. 
"This gives us new insights into how a black hole can regulate future star birth and how a galaxy can acquire additional material to fuel an active black hole." The new ALMA observations reveal previously unknown connections between an AGN and the abundance of cold molecular gas that fuels star birth. "To produce powerful jets, black holes must feed on the same material that the galaxy uses to make new stars," said Michael McDonald, an astrophysicist at the Massachusetts Institute of Technology in Cambridge and coauthor on the paper. "This material powers the jets that disrupt the region and quench star formation. This illustrates how black holes can slow the growth of their host galaxies." Without a significant source of heat, the most massive galaxies in the universe would be forming stars at extreme rates that far exceed observations. Astronomers believe that the heat, in the form of radiation and jets from an actively feeding supermassive black hole, prevents overcooling of the cluster's hot gas atmosphere, suppressing star formation. This story, however, now appears more complex. In the Phoenix Cluster, Russell and her team found an additional process that ties the galaxy and its black hole together. The radio jets that heat the core of the cluster's hot atmosphere also appear to stimulate the production of the cold gas required to sustain the AGN. "That's what makes this result so surprising," said Brian McNamara, an astronomer at the University of Waterloo, Ontario, and co-author on the paper. "This supermassive black hole is regulating the growth of the galaxy by blowing bubbles and heating the gases around it. Remarkably, it also is cooling enough gas to feed itself." This result helps astronomers understand the workings of the cosmic "thermostat" that controls the launching of radio jets from the supermassive black hole. 
"This could also explain how the most massive black holes were able to both suppress run-away starbursts and regulate the growth of their host galaxies over the past six billion years or so of cosmic history," noted Russell. The National Radio Astronomy Observatory is a facility of the National Science Foundation, operated under cooperative agreement by Associated Universities, Inc. The Atacama Large Millimeter/submillimeter Array (ALMA), an international astronomy facility, is a partnership of ESO, the U.S. National Science Foundation (NSF) and the National Institutes of Natural Sciences (NINS) of Japan in cooperation with the Republic of Chile. ALMA is funded by ESO on behalf of its Member States, by NSF in cooperation with the National Research Council of Canada (NRC) and the National Science Council of Taiwan (NSC) and by NINS in cooperation with the Academia Sinica (AS) in Taiwan and the Korea Astronomy and Space Science Institute (KASI). ALMA construction and operations are led by ESO on behalf of its Member States; by the National Radio Astronomy Observatory (NRAO), managed by Associated Universities, Inc. (AUI), on behalf of North America; and by the National Astronomical Observatory of Japan (NAOJ) on behalf of East Asia. The Joint ALMA Observatory (JAO) provides the unified leadership and management of the construction, commissioning and operation of ALMA.
News Article | March 2, 2017
The Ecological Society of America (ESA) will present the 2017 awards recognizing outstanding contributions to ecology in new discoveries, teaching, sustainability, diversity, and lifelong commitment to the profession during the Society's Annual Meeting in Portland, Ore. The awards ceremony will take place during the Scientific Plenary on Monday, August 7, at 8 AM in the Oregon Ballroom, Oregon Convention Center. Learn more about ESA awards on our home website. The Eminent Ecologist Award honors a senior ecologist for an outstanding body of ecological work or sustained ecological contributions of extraordinary merit. Soil ecologist Diana Wall, founding director of Colorado State University's School of Global Environmental Sustainability, is world-renowned for uncovering the importance of below-ground processes. She is best known for her outstanding quarter century of research in the McMurdo Dry Valleys of Antarctica, one of the planet's most challenging environments. Her research has revealed fundamental soil processes from deserts and forests to grasslands and agricultural ecosystems to New York City's Central Park. Dr. Wall's extensive collaborative work seeks to understand how the living component of soil contributes to ecosystem processes and human wellbeing--and to in turn uncover how humans impact soils, from local to global scales. In landmark studies, she revealed the key role of nematodes and other tiny animals as drivers of decomposition rates and carbon cycling. The biodiversity in soils, she found, influences ecosystem functioning and resilience to human disturbance, including climate change. She demonstrated that the biodiversity belowground can at times be decoupled from biodiversity aboveground. Her focus on nematodes in soils in very harsh environments, from the cold, dry Antarctic to hot, dry deserts, opened up a perspective on how life copes with extreme environments. 
She has a laudable record of publishing excellent papers in top-ranked scientific journals. Dr. Wall has played a vital role as an ecological leader, chairing numerous national and international committees and working groups and serving as president of the Ecological Society of America in 1999. She is a Fellow of ESA, the American Association for the Advancement of Science, and the Society of Nematologists. In 2013, she received the Tyler Prize for Environmental Achievement for her outspoken efforts as an ambassador for the environmental and economic importance of soils and ecology. Currently, she is scientific chair of the Global Soil Biodiversity Initiative, which works to advance soil biodiversity for use in policy and management of terrestrial ecosystems. Dr. Wall is well-respected in her role as mentor of young scientists, over several generations, and as a communicator of science outside the usual academic arenas. Odum Award recipients demonstrate their ability to relate basic ecological principles to human affairs through teaching, outreach, and mentoring activities. Kathleen Weathers is a senior scientist and the G. Evelyn Hutchinson chair of ecology at the Cary Institute of Ecosystem Studies, where she focuses on freshwater ecosystems. For more than a decade, she has been dedicated to advancing bottom-up network science, creating training opportunities for graduate students and tools for citizen science engagement. Her efforts strive to equip the next generation of ecologists and managers with the skills needed to protect freshwater resources. Dr. Weathers played a guiding role in the formation of the Global Lake Ecological Observatory Network (GLEON), and currently acts as co-chair. As part of this international grassroots collaboration, she helped develop Lake Observer, a crowd-sourcing app that streamlines the way that researchers and citizen scientists record water quality observations in lakes, rivers, and streams. Dr. 
Weathers has made it a priority to mentor students and early-career scientists participating in GLEON, with an eye toward diversity, inclusion, and instruction. She helped empower GLEON's student association, which contributes meaningfully to governance and training within the broader network. She also spearheaded the development of the GLEON Fellows Program, a two-year graduate immersion in data analysis, international collaboration, effective communication, and team science. The GLEON Fellows Program has emerged as a model for training initiatives in macrosystem ecology, and will affect the ecological community positively for decades to come, as participants carry their training forward to other institutions and endeavors. The Distinguished Service Citation recognizes long and distinguished volunteer service to ESA, the scientific community, and the larger purpose of ecology in the public welfare. Debra Peters is the founding editor-in-chief of ESA's newest journal, Ecosphere, created in 2010 to offer a rapid path to publication for research reports from across the spectrum of ecological science, including interdisciplinary studies that may have had difficulty finding a home within the scope of the existing ESA family of journals. In her hands the online-only, open-access journal has claimed a successful niche in the ecological publications landscape, expanding to publish over 400 manuscripts in 2016. Dr. Peters, an ecologist for the United States Department of Agriculture Agricultural Research service's (USDA-ARS) Jornada Experimental Range and lead principal investigator for the Jornada Basin Long Term Ecological Research program in Las Cruces, New Mexico, has served on the editorial boards of ESA's journals Ecological Applications, Ecology and Ecological Monographs. She chaired the society's Rangeland Section, was a founding member and chair of the Southwest Chapter, and has served as member-at-large on the Governing Board. 
As program chair for the 98th Annual Meeting of the society, she inaugurated the wildly popular Ignite talks, which give speakers the opportunity to present conceptual talks that do not fit into the standard research presentation format. Dr. Peters has greatly contributed to the broader research enterprise as senior advisor to the chief scientist at the USDA, and as a member of the National Ecological Observatory Network's (NEON) Board of Directors. She has provided this remarkable array of services in support of the society and her profession while maintaining an outstanding level of research productivity and scientific leadership in landscape-level, cross-scale ecosystem ecology. Many of her more than 100 research publications have been cited more than 100 times. Her fine record of research led to her election as a Fellow of ESA and the American Association for the Advancement of Science. In all respects, Debra Peters exemplifies distinguished service to the ESA, and to science. ESA's Commitment to Human Diversity in Ecology award recognizes long-standing contributions of an individual towards increasing the diversity of future ecologists through mentoring, teaching, or outreach. Gillian Bowser, research scientist in Colorado State University's Natural Resource Ecology Laboratory, is honored for her joyful and successful recruitment and retention of under-represented students to the study of ecology, to public service in support of the natural world, and to empowerment of women and minorities worldwide. The Cooper Award honors the authors of an outstanding publication in the field of geobotany, physiographic ecology, plant succession or the distribution of plants along environmental gradients. William S. Cooper was a pioneer of physiographic ecology and geobotany, with a particular interest in the influence of historical factors, such as glaciations and climate history, on the pattern of contemporary plant communities across landforms. 
University of Waterloo, Ontario professor Andrew Trant and colleagues at the University of Victoria and the Hakai Institute in British Columbia revealed a previously unappreciated historical influence on forest productivity: long-term residence of First Nations people. Counter to a more familiar story of damage to ecosystems inflicted by people and their intensive use of resources, the activities of native people on the Central Coast of British Columbia enhanced the fertility of the soil around habitation sites, leading to greater productivity of the dominant tree species, the economically and culturally valuable western redcedar (Thuja plicata Donn ex D. Don). Through a combination of airborne remote sensing and on-the-ground field work, the authors showed that forest height, width, canopy cover, and greenness increased on and near shell middens. They presented the first documentation of influence on forest productivity by the daily life activities of traditional human communities. The Mercer Award recognizes an outstanding and recently-published ecological research paper by young scientists. Biological invasions, and migrations of native species in response to climate change, are pressing areas of interest in this time of global change. Fragmentation of the landscape by natural and human-made barriers slows the velocity of spread, but it is not known how patchy habitat quality might influence the potential for evolution to accelerate invasions. Jennifer Williams, an assistant professor at the University of British Columbia, and colleagues implemented a creative experimental design using the model plant species Arabidopsis thaliana that allowed them to disentangle ecological and evolutionary dynamics during population expansion. Some plant populations were allowed to evolve, while others were continually reset to their original genetic composition. 
The authors convincingly demonstrate that rapid evolution can influence the speed at which populations spread, especially in fragmented landscapes. The Sustainability Science Award recognizes the authors of the scholarly work that makes the greatest contribution to the emerging science of ecosystem and regional sustainability through the integration of ecological and social sciences. Sustainability challenges like air pollution, biodiversity loss, climate change, energy and food security, disease spread, species invasion, and water shortages and pollution are often studied, and managed, separately, although the problems they present are interconnected. Jianguo Liu and colleagues provide a framework for addressing global sustainability challenges from a coupled human and natural systems approach that incorporates both socioeconomic and environmental factors. They review several recent papers that have quantified the at-times conflicting efforts to provide ecosystem services when these efforts are examined from a global perspective. The authors argue for the need to quantify spillover systems and feedbacks and to integrate analyses over multiple spatial and temporal scales. This will likely require the development of new analytical frameworks both to understand the social ecological mechanisms involved and to inform management and policy decisions for global sustainability. The Innovation in Sustainability Science Award recognizes the authors of a peer-reviewed paper published in the past five years exemplifying leading-edge work on solution pathways to sustainability challenges. One of the biggest challenges facing development of effective policy to address sustainability issues is that the concepts and vocabulary used by scientists to define and promote sustainability rarely translate into effective policy, because they do not include measures of success. 
This challenge is particularly apparent in the concept of stability and resilience, terms which are frequently used in policy statements and have long been the subject of empirical and theoretical research in ecology, but for which there are no easily defined and quantified metrics. Ian Donohue and colleagues argue that much of the fault for this disconnect lies with the academic community. They summarize and analyze a number of examples to support their claim that ecologists have taken a one-dimensional approach to quantifying stability and disturbance when these are actually multi-dimensional processes. They argue that this has led to confused communication of the nature of stability, which contributes to the lack of adoption of clear policies. They propose three areas where future research is needed and make clear recommendations for better integrating the multidimensional nature of stability into research, policy and actions that should become a priority for all involved in sustainability science. The Whittaker Award recognizes an ecologist with an earned doctorate and an outstanding record of contributions in ecology who is not a U.S. citizen and who resides outside the United States. Petr Pyšek, the chair of the Department of Invasion Ecology at the Academy of Sciences of the Czech Republic, is honored for his pioneering and insightful work in invasion ecology. Dr. Pyšek is editor-in-chief of Preslia (Journal of the Czech Botanical Society) and serves on the editorial boards of Biological Invasions, Diversity and Distributions, Folia Geobotanica, and Perspectives on Plant Ecology, Evolution and Systematics. The Shreve award supplies $1,000-2,000 to support ecological research by graduate or undergraduate student members of ESA in the hot deserts of North America (Sonora, Mohave, Chihuahua, and Vizcaino). 
Daniel Winkler, a PhD student with Travis Huxman at University of California Irvine, studies the invasion of Sahara mustard (Brassica tournefortii) in the Mojave, Sonoran, and Chihuahuan deserts. His dissertation focuses on determining the source populations of Sahara mustard and whether plasticity in functional traits is allowing the species to spread. Funds from the Forrest Shreve Student Research Fund will be used to process samples for leaf stable isotopes and elemental stoichiometry, allowing for a comparison of functional traits indicative of local adaptation and the species' plasticity. Daniel was a National Park Service Young Leaders in Climate Change Fellow and a NSF EAPSI Research Fellow. Learn more about the August 7-12, 2017 ESA Annual Meeting on the meeting website: http://esa. ESA welcomes attendance from members of the press and waives registration fees for reporters and public information officers. To apply, please contact ESA Communications Officer Liza Lester directly at email@example.com. The Ecological Society of America (ESA), founded in 1915, is the world's largest community of professional ecologists and a trusted source of ecological knowledge, committed to advancing the understanding of life on Earth. The 10,000 member Society publishes five journals and a membership bulletin and broadly shares ecological information through policy, media outreach, and education initiatives. The Society's Annual Meeting attracts 4,000 attendees and features the most recent advances in ecological science. Visit the ESA website at http://www. .
News Article | February 15, 2017
Here are summaries of research to be presented by CIFAR fellows at the 2017 AAAS meeting in Boston, MA from Feb. 16-19. CIFAR Humans & the Microbiome Program Co-Director Janet Rossant (Hospital for Sick Children) will moderate a discussion on the microbes that inhabit humans -- collectively called the microbiome. Program Co-Director Brett Finlay (University of British Columbia) will speak on the role of the microbiome in early childhood. Senior Fellow Eran Elinav (Weizmann Institute of Science) will delve into how genes, diet and microbiomes interact. Ana Duggan of Senior Fellow Hendrik Poinar's lab (McMaster University) will describe how they reconstruct ancient genomes and microbiomes. CIFAR Azrieli Brain, Mind & Consciousness Senior Fellow Sheena Josselyn (Hospital for Sick Children) brings together leaders in the field of memory research, approaching the expansive question of the temporal component of memory using unique tools. Josselyn recently discovered the neural rules for separating emotional memories across the temporal context in the amygdala, and will discuss how this process may go awry with psychiatric conditions. Eran Elinav, a Senior Fellow in CIFAR's Humans & the Microbiome program, will answer questions about how the microbiome affects humans (especially in regards to their diet) as well as how it can affect entire societies--shaping them through both common diseases and pandemics. A quantum mechanical representation of information could enable revolutionary technologies, from fast computation to unbreakable encryption. CIFAR Senior Fellow in the Quantum Information Science program Michele Mosca (University of Waterloo) will discuss cybersecurity in an era with quantum computers. Associate Fellow in the Quantum Information Science program Scott Aaronson (University of Texas) will speak on how quantum research is deepening our understanding of physics and mathematics. CIFAR creates knowledge that is transforming our world. 
Established in 1982, the Institute brings together interdisciplinary groups of extraordinary researchers from around the globe to address questions and challenges of importance to the world. Our networks help support the growth of research leaders and are catalysts for change in business, government and society. CIFAR is generously supported by the governments of Canada, British Columbia, Alberta, Ontario and Quebec, Canadian and international partners, as well as individuals, foundations and corporations.
News Article | December 21, 2016
They say a picture's worth a thousand words, but what about a video? That's not a rhetorical question. According to Forrester Research, a minute of video is now worth an estimated 1.8 million words—a finding that confirms what marketers and viral stars already know: Video is eating the world. A third of online activity is now spent watching video, and some analysts predict that could increase to more than two-thirds by the end of 2017. Even Facebook, a platform built on written status updates, may be mostly video five years from now, according to Mark Zuckerberg himself. Last month, no less an authority than Brian Halligan, CEO of internet marketing giant Hubspot, called video the number-one content marketing tool available. Still, many companies remain laser-focused on text and still images—trying to reach customers through the standard mix of blog posts, social media updates and, of course, endless email newsletters. I get it. Producing video can seem time-consuming, expensive, and complicated. And getting noticed presents its own set of challenges — especially in a world where more video is uploaded in 30 days than the top three U.S. TV networks have created in the past 30 years. But not having a video strategy for 2017 is like not having a digital strategy in 2000. You might be able to survive for the time being, but holdouts will find themselves seriously left behind and racing desperately to catch up in just a few short years ahead. More and more, digital means video. Like it or not, other media formats are moving to the margins—making it pretty much essential to start putting together your company's video strategy right about now. The good news is that it isn’t as hard or resource-intensive as you might think. Reaching a target audience isn’t about having the most sophisticated equipment or the latest high-tech toys, and you don’t have to produce Super Bowl–quality ads in order to make an impact. 
In fact, many of the biggest trends in video heading into 2017 make it easier for companies to produce video in-house, rather than having to outsource production to an agency. One of the easiest ways to make effective videos is to tap the talent that’s already on your payroll. Allowing staff to speak honestly and directly to customers creates an intimacy that can instantly cut through the sterile, corporate veneer. In fact, quick and dirty videos made with simple, free tools (like this one we hacked together for recording screenshots) can often be more effective than slick, commercial-quality ones. Take a look at this video from Zappos, designed to showcase its work culture. It’s a little messy, but that’s part of the charm—real people talking to real people: Other companies are experimenting with DIY video to give an edge to their brands as employers, but staff videos can be effective for internal communications, too; here's how Hootsuite CEO Ryan Holmes uses video selfies to keep his teams up to date, for instance. Lots of approaches can work. Whether you live-stream town halls or have leaders speak directly to their teams like Holmes does, videos create a visceral, human connection that’s hard to replicate with newsletters or email memos. 2016 was the year of the live stream, with Facebook Live gaining momentum and turning any given moment into a global broadcast. With Instagram now diving into the streaming game, 2017 will see more companies experimenting with real-time video. Yes, live broadcast presents challenges, but for this format it’s okay—and even expected—to be spontaneous; viewers will forgive less-than-perfect production. Just make sure you’ve planned a loose structure that revolves around some sort of action, like T-Mobile’s John Legere does with his super-short, informal "Slow-Cooker Sunday" cooking shows. Most people's short (and possibly shrinking) attention spans mean the average viewer needs something to happen about every nine seconds. 
But it's important that you don't assume your audience is limited to those tuning in in the moment. It’s often under-appreciated how much live video is actually viewed later, as on-demand content. Done well, a live video can live on for months and years. A prime example is the content featured on Whale, the Q&A app from Justin Kan, which lets users engage in a real-time conversation with business experts and then adds those videos to a searchable content library. Personalized video might sound like a daunting concept, but in practice, it’s pretty straightforward—often just a matter of adding small, personal touches to recordings that speak to individual clients. This can be as simple as using post-production tools to insert their names into the opening frames or into the video itself—say on a ticket or a seat "reserved" especially for them. The University of Waterloo (which, in full disclosure, has partnered with my company, Vidyard) did this recently with a recruitment campaign that showed individual students’ names on dorm-room doors during a POV video of life on campus. We already know that including video in email marketing can increase click-through rates. But when Waterloo used this personalized tactic, 70% of prospective students opened their emails. Remember that nine-second attention span? One hack to ensure that action is sustained and varied is the "stories" format. Snapchat was the pioneer here, automatically splicing together users' short clips and photos into a longer video stream. In 2016, Instagram released its copy, and rumor has it that WhatsApp is next. But this format isn’t just for preteens looking to keep their friends in the loop. Buzzfeed’s Tasty channel has scored big on Facebook with its mastery of the truncated timeline, in step-by-step videos for easy-to-make recipes. This application is almost tailor-made for product guides or even customer support videos. 
The real virtue of the "story" format for companies just getting into the video game is that it's so flexible and forgiving. It’s not hard to shoot six seconds of action here, then three seconds there—all on a smartphone—and this fast-cut technique is better suited to mobile attention spans anyway. While augmented reality and virtual reality still require cumbersome equipment to record and watch, 360-degree video has quietly come into its own. For the uninitiated, 360-degree video allows viewers to either physically move their phones around in front of them or to click and drag their screens in order to change their perspective on a scene—an immersive experience that requires no special headgear. The special 360-degree cameras for filming this kind of content now cost just a few hundred dollars, meaning pretty much any company can produce its own ready-made immersive video. Red Bull was one of the first out of the gate in applying this technology to its action sports videos, and fashion line Barbour used 360-degree video to offer a VIP view to those who tuned into its 2017 spring and summer men’s collection debut. But the technology can also liven up more down-to-earth experiences, like speeches, conferences, and parties. You could even set one up in your office so clients can "click in" to your headquarters for an office tour. The world is already moving toward video—and fast. What’s driving this trend isn’t the desire for slick production values and Oscar-worthy editing, though there's a place for that; it’s the desire for a connection that feels human and personal. Keeping that at the core of your video strategy won't just stretch your dollar, it'll serve you better than any technological bells and whistles in 2017. Michael Litt is cofounder and CEO of the video marketing platform Vidyard. Follow him on Twitter at @michaellitt.
News Article | February 27, 2017
VANCOUVER, BC / ACCESSWIRE / February 27, 2017 / CopperBank Resources Corp. ("CopperBank" or the "Company") (CSE: CBK) (OTC PINK: CPPKF) announces that it has appointed Brigitte Dejou as an independent director and Colin Burge has joined the company's technical advisory team. Mr. Kovacevic comments, "I have known Brigitte and Colin for many years and have direct experience with both of them. CopperBank's technical team now comprises ten shareholder-aligned members who have a combined two hundred and fifty years of industry experience. Colin was a vital element of the team at Cobre Panama, which delineated a tremendous amount of additional pounds of copper from the time I was an investor in the early development of that project. Brigitte has a deep knowledge base that rounds out our technical team, especially due to her direct experience in Alaska, where she participated in the geological interpretation that added many years of mine life to TECK's Red Dog mine district. Both Brigitte and Colin will be great assets to CopperBank's stakeholders as we continue our thorough data analysis and plan the important next steps for the Pyramid Copper porphyry deposit, the San Diego Bay prospect and our Contact Copper Oxide Project." Ms. Dejou holds both a Bachelor of Engineering degree and a Master of Applied Science degree from Ecole Polytechnique de Montréal and is a member of the Ordre des Ingénieurs du Québec. Ms. Dejou has 25 years of experience in mineral exploration, including 18 years within Teck Cominco (now TECK) managing various exploration programs and two years with Osisko Mining Corporation working on the evaluation of new projects and QA/QC of existing drilling programs (Canadian Malartic, Duparquet, Hammond Reef). Ms. Dejou brings a wealth of experience in running a variety of exploration projects from grass-roots to pre-feasibility stage across North America (including Red Dog, El Limon-Morelos, Polaris, and Mesaba).
She was also instrumental in the discovery of the Aktigiruk deposit. She has explored for a variety of commodities, both base metals (sedex and MVT Zn-Pb, porphyry Cu, magmatic Cu-Ni, VMS) and precious metals (Au, Ag, PGE). Since 2012, Ms. Dejou has been Vice President, Exploration for LaSalle Exploration. Mr. Burge is a discovery-oriented exploration geologist with 30 years' experience in project development with First Quantum Minerals and predecessor companies. Mr. Burge was part of a corporate development team at Inmet Mining Corp. that discovered and delineated more than 30 billion pounds of copper at the Cobre Panama Project, leading to First Quantum Minerals' $5-billion acquisition of the company. He gained valuable experience working with First Quantum for three years as the project transitioned to a mining operation and has excellent technical skills in exploration data management and the application of exploration tools, as well as a strong ability to think creatively. Mr. Burge graduated from the University of Waterloo with a Bachelor of Earth Science in 1981 and is a licensed professional geologist in British Columbia. Shareholders are encouraged to visit the Company's website for further details and biographies of each member of CopperBank's technical team: www.copperbankcorp.com. The Company also announces that Robert McLeod will be stepping down from the Board of Directors but will remain an important member of CopperBank's technical team. Mr. McLeod will remain a Qualified Person for the company. Certain information in this release may constitute "forward-looking information" under applicable securities laws and necessarily involves risks and uncertainties. Forward-looking information included herein is made as of the date of this news release and CopperBank does not intend, and does not assume any obligation, to update forward-looking information unless required by applicable securities laws.
Forward-looking information relates to future events or future performance and reflects management of CopperBank's expectations or beliefs regarding future events. In certain cases, forward-looking information can be identified by the use of words such as "plans," or "believes," or variations of such words and phrases or statements that certain actions, events or results "may," "could," "would," "might," or "will be taken," "occur," or "be achieved," or the negative of these terms or comparable terminology. Examples of forward-looking information in this news release include, but are not limited to, statements with respect to the Company's ongoing review of its existing portfolio, the involvement of CopperBank in any potential divestiture, spin-out, partnership, or other transactions involving the Company's portfolio assets, and the ability of the Company to complete any such transactions, the ability of CopperBank to enter into transactions that will ultimately enhance shareholder value, and the anticipated issuance of one million shares in connection with the satisfaction of certain loans between CopperBank and management. This forward-looking information is based, in part, on assumptions and factors that may change or prove to be incorrect, thus causing actual results, performance, or achievements to be materially different from those expressed or implied by forward-looking information. Such factors and assumptions include, but are not limited to the Company's ability to identify and complete one or more transactions involving the Company's portfolio assets that enhance shareholder value as part of management's ongoing review of strategic alternatives in the current market conditions. 
By its very nature, forward-looking information involves known and unknown risks, uncertainties and other factors which may cause the actual results, performance or achievements to be materially different from any future results, performance or achievements expressed or implied by forward-looking information. Such factors include, but are not limited to, the risk that the Company will not be able to identify and complete one or more transactions involving the Company's portfolio assets that enhance shareholder value as part of management's ongoing review of strategic alternatives in the current market conditions. Although CopperBank has attempted to identify important factors that could cause actual actions, events, or results to differ materially from forward-looking information, there may be other factors that cause actions, events, or results not to be as anticipated, estimated, or intended. There can be no assurance that forward-looking information will prove to be accurate, as actual results and future events could differ materially from those anticipated by such forward-looking information. Accordingly, readers should not place undue reliance on forward-looking information. For more information on CopperBank and the risks and challenges of its businesses, investors should review the continuous disclosure filings that are available under CopperBank's profile at www.sedar.com.
News Article | February 15, 2017
The gig: Apoorva Mehta, 30, is the founder and chief executive of San Francisco grocery delivery start-up Instacart. Over the last four years, he has grown the company to more than 300 full-time employees and tens of thousands of part-time grocery shoppers. The start-up offers on-demand and same-day grocery delivery in hundreds of cities in 20 states. Electrical engineering: Growing up in Canada, Mehta was avidly curious about how technology worked. “Everything from atoms, all the way to what you see on a computer when you go to Google.com,” Mehta said. “I wanted to learn everything in between.” Not knowing what he wanted to do after graduation, he enrolled in the electrical engineering program at the University of Waterloo. Bored at Amazon: Mehta spent his post-college years working for technology companies such as Qualcomm and BlackBerry, and even did a stint at a steel factory. His goal was to try a bit of everything to help figure out what he really wanted to do. He eventually moved to Seattle to be a supply chain engineer at Amazon.com, where he developed fulfillment systems to get packages from Amazon’s warehouses to customers’ doors. During those years, he learned two things: He liked to build software, and he wanted to be challenged. After two years at Amazon, he felt that he was no longer being challenged. With no other role lined up, he quit his job. Twenty companies: He spent the next two years putting what he’d learned into practice. Between leaving Amazon and founding Instacart, Mehta estimates he started 20 companies. He tried building an ad network for social gaming companies. He spent a whole year developing a social network specifically for lawyers. “I knew nothing about these topics, but I liked putting myself in a position where I had to learn about an industry and try to solve problems they may or may not have had,” he said. None of the companies worked out.
“After going through all these failures, releasing feature after feature, I realized it wasn’t that I couldn’t find a product that worked, I just didn’t care about the product,” Mehta said of the social network for lawyers. “When I went home, I wouldn’t think about it because I didn’t care about lawyers. I didn’t think of what lawyers did day to day.” Which led him to lesson No. 3: solve a real problem you actually care about. Groceries: With 20 failed start-up ideas under his belt, Mehta put some thought into the problems he experienced day to day. He lived in San Francisco. He didn’t own a car. He loved to cook, but he couldn’t get the groceries he wanted in his neighborhood. “It was 2012, people were ordering everything online, meeting people online, watching movies online, yet the one thing everyone has to do every single week — buying groceries — we still do in an archaic way,” he said. As soon as he came up with the idea for an on-demand grocery delivery platform, he couldn’t stop thinking about it. In less than a month, he’d coded a crude version of an app that could be used by people who needed groceries, and a version for those who were shopping in-store for customers. On its first test run, because Mehta hadn’t hired any shoppers yet, he ordered through the app, went to the store and delivered the groceries to himself. Webvan: The idea of ordering groceries online and having them delivered to your home wasn’t new. Webvan, a company founded on that very premise, famously went under during the dot-com bust. But this didn’t faze Mehta, who believed that the success of a company rests not only on the quality of the idea but also on timing. “It was very clear to me that the idea was a good one and the time was now for the same reason why Uber and Lyft were finding success,” he said.
Smartphones had become ubiquitous, people were comfortable performing transactions over their phones, and the idea of using an app to hire someone to perform a task was fast becoming the norm. “As a result of smartphones, the equation had changed,” he said. Teething troubles: Although Mehta landed on a hot idea and was able to partner with stores such as Whole Foods, Target and Safeway, the expansion of Instacart wasn’t without problems. The company was slapped with a class-action lawsuit in 2015, alleging that the workers who shopped for and delivered groceries were misclassified as independent contractors. Instacart eventually made its shoppers part-time employees, with some qualifying for benefits such as health insurance. “We went from having zero part-time employees to having people at thousands of individual store locations,” he said. “We had to figure out scheduling and what kinds of training had to be provided. We needed to figure out a lot of things.” Advice: Most start-ups fail, and those who start a company for the sake of starting a company are even more likely to fail, Mehta said. “The reason to start a company is to bring a change that you strongly believe in to this world,” he said. “You really have to want to do this.” Personal: Mehta lives in San Francisco. He’s an avid reader and enjoys biking in the city.
News Article | January 30, 2017
The age of big data has seen a host of new techniques for analyzing large data sets. But before any of those techniques can be applied, the target data has to be aggregated, organized, and cleaned up. That turns out to be a shockingly time-consuming task. In a 2016 survey, 80 data scientists told the company CrowdFlower that, on average, they spent 80 percent of their time collecting and organizing data and only 20 percent analyzing it. An international team of computer scientists hopes to change that, with a new system called Data Civilizer, which automatically finds connections among many different data tables and allows users to perform database-style queries across all of them. The results of the queries can then be saved as new, orderly data sets that may draw information from dozens or even thousands of different tables. “Modern organizations have many thousands of data sets spread across files, spreadsheets, databases, data lakes, and other software systems,” says Sam Madden, an MIT professor of electrical engineering and computer science and faculty director of MIT’s bigdata@CSAIL initiative. “Civilizer helps analysts in these organizations quickly find data sets that contain information that is relevant to them and, more importantly, combine related data sets together to create new, unified data sets that consolidate data of interest for some analysis.” The researchers presented their system last week at the Conference on Innovative Data Systems Research. The lead authors on the paper are Dong Deng and Raul Castro Fernandez, both postdocs at MIT’s Computer Science and Artificial Intelligence Laboratory; Madden is one of the senior authors. They’re joined by six other researchers from Technical University of Berlin, Nanyang Technological University, the University of Waterloo, and the Qatar Computing Research Institute. 
Although he’s not a co-author, MIT adjunct professor of electrical engineering and computer science Michael Stonebraker, who in 2014 won the Turing Award — the highest honor in computer science — contributed to the work as well. Data Civilizer assumes that the data it’s consolidating is arranged in tables. As Madden explains, in the database community, there’s a sizable literature on automatically converting data to tabular form, so that wasn’t the focus of the new research. Similarly, while the prototype of the system can extract tabular data from several different types of files, getting it to work with every conceivable spreadsheet or database program was not the researchers’ immediate priority. “That part is engineering,” Madden says. The system begins by analyzing every column of every table at its disposal. First, it produces a statistical summary of the data in each column. For numerical data, that might include a distribution of the frequency with which different values occur; the range of values; and the “cardinality” of the values, or the number of different values the column contains. For textual data, a summary would include a list of the most frequently occurring words in the column and the number of different words. Data Civilizer also keeps a master index of every word occurring in every table and the tables that contain it. Then the system compares all of the column summaries against each other, identifying pairs of columns that appear to have commonalities — similar data ranges, similar sets of words, and the like. It assigns every pair of columns a similarity score and, on that basis, produces a map, rather like a network diagram, that traces out the connections between individual columns and between the tables that contain them. A user can then compose a query and, on the fly, Data Civilizer will traverse the map to find related data. 
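The profiling-and-matching pipeline described above is easy to picture in miniature. The sketch below is my own illustration, not the researchers' code: the function names, data layout, and similarity threshold are all invented for the example. It summarizes each column, then scores column pairs — range overlap for numeric columns, word-set overlap for text columns — and keeps high-scoring pairs as the edges of a linkage map:

```python
from collections import Counter
from itertools import combinations

def summarize_column(values):
    """Produce a small statistical summary of one column (numeric or textual)."""
    if all(isinstance(v, (int, float)) for v in values):
        return {"kind": "numeric", "min": min(values), "max": max(values),
                "cardinality": len(set(values))}
    words = Counter(w for v in values for w in str(v).split())
    return {"kind": "text",
            "top_words": {w for w, _ in words.most_common(10)},
            "cardinality": len(set(values))}

def similarity(a, b):
    """Score a pair of summaries: range overlap for numbers, Jaccard for text."""
    if a["kind"] != b["kind"]:
        return 0.0
    if a["kind"] == "numeric":
        lo, hi = max(a["min"], b["min"]), min(a["max"], b["max"])
        span = (max(a["max"], b["max"]) - min(a["min"], b["min"])) or 1
        return max(0.0, (hi - lo) / span)
    union = a["top_words"] | b["top_words"]
    return len(a["top_words"] & b["top_words"]) / len(union) if union else 0.0

def build_map(tables, threshold=0.5):
    """Return edges ((table, column) pairs) whose summaries look alike."""
    summaries = {(t, c): summarize_column(vals)
                 for t, cols in tables.items() for c, vals in cols.items()}
    return [(p, q, round(similarity(summaries[p], summaries[q]), 2))
            for p, q in combinations(summaries, 2)
            if similarity(summaries[p], summaries[q]) >= threshold]
```

In the real system the summaries are presumably far richer and the comparison engineered to avoid a naive all-pairs scan, but the shape of the computation is the same: profile each column once, compare compact summaries rather than raw data, and keep only pairs above a threshold.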
Suppose, for instance, a pharmaceutical company has hundreds of tables that refer to a drug by its brand name, hundreds that refer to its chemical compound, and a handful that use an in-house ID number. Now suppose that the ID number and the brand name never show up in the same table, but there’s at least one table linking the ID number and the chemical compound, and one linking the chemical compound and the brand name. With Data Civilizer, a query on the brand name will also pull up data from tables that use just the ID number. Some of the linkages identified by Data Civilizer may turn out to be spurious. But the user can discard data that don’t fit a query while keeping the rest. Once the data have been pruned, the user can save the results as their own data file. “Data Civilizer is an interesting technology that potentially will help data scientists address an important problem that arises due to the increasing availability of data — identifying which data sets to include in an analysis,” says Iain Wallace, a senior informatics analyst at the drug company Merck. “The larger an organization, the more acute this problem becomes.” “We are currently exploring how to use Civilizer as a harmonization layer on top of a variety of chemical-biology datasets,” Wallace continues. “These datasets typically link compounds, diseases, and targets together. One use case is to identify which table contains information about a specific compound and what additional information is available about that compound in other related datasets. Civilizer helps us by allowing full text search over all the columns and then identifying related columns automatically. By using Civilizer, we should be easily able to add additional data sources and update our analysis very quickly.”
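The transitive lookup in the pharmaceutical example is, in effect, a graph traversal over the linkage map. Here is a minimal sketch of that idea — illustrative only; the table names are hypothetical and the actual system works at column granularity rather than whole tables:

```python
from collections import deque

def related_tables(edges, start_table):
    """Breadth-first search over the linkage map: which tables are
    reachable from start_table through chains of similar columns?"""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    seen, queue = {start_table}, deque([start_table])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {start_table}

# The article's example: brand-name tables link to compound tables,
# which link to in-house ID tables; no table links brands to IDs directly.
edges = [("brand_names", "compounds"), ("compounds", "internal_ids")]
```

Starting from the brand-name table, the search crosses the compound table to reach the ID table, which is why a query on the brand name can also pull up data keyed only by the internal ID.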
News Article | January 13, 2016
In 1990, when James Danckert was 18, his older brother Paul crashed his car into a tree. He was pulled from the wreckage with multiple injuries, including head trauma. The recovery proved difficult. Paul had been a drummer, but even after a broken wrist had healed, drumming no longer made him happy. Over and over, Danckert remembers, Paul complained bitterly that he was just — bored. “There was no hint of apathy about it at all,” says Danckert. “It was deeply frustrating and unsatisfying for him to be deeply bored by things he used to love.” A few years later, when Danckert was training to become a clinical neuropsychologist, he found himself working with about 20 young men who had also suffered traumatic brain injury. Thinking of his brother, he asked them whether they, too, got bored more easily than they had before. “And every single one of them,” he says, “said yes.” Those experiences helped to launch Danckert on his current research path. Now a cognitive neuroscientist at the University of Waterloo in Canada, he is one of a small but growing number of investigators engaged in a serious scientific study of boredom. There is no universally accepted definition of boredom. But whatever it is, researchers argue, it is not simply another name for depression or apathy. It seems to be a specific mental state that people find unpleasant — a lack of stimulation that leaves them craving relief, with a host of behavioural, medical and social consequences. In studies of binge-eating, for example, boredom is one of the most frequent triggers, along with feelings of depression and anxiety [1, 2]. In a study of distractibility using a driving simulator, people prone to boredom typically drove at higher speeds than other participants, took longer to respond to unexpected hazards and drifted more frequently over the centre line [3].
And in a 2003 survey, US teenagers who said that they were often bored were 50% more likely than their less-frequently bored peers to later take up smoking, drinking and illegal drugs [4]. Boredom even accounts for about 25% of variation in student achievement, says Jennifer Vogel-Walcutt, a developmental psychologist at the Cognitive Performance Group, a consulting firm in Orlando, Florida. That's about the same percentage as is attributed to innate intelligence. Boredom is “something that requires significant consideration”, she says. Researchers hope to turn such hints into a deep understanding of what boredom is, how it manifests in the brain and how it relates to factors such as self-control. But “it's a ways out before we're answering those questions”, says Shane Bench, a psychologist who studies boredom in the lab of Heather Lench at Texas A&M University in College Station. In particular, investigators need better ways to measure boredom and more reliable techniques for making research subjects feel bored in the lab. Still, the field is growing. In May 2015, the University of Warsaw drew almost 50 participants to its second annual conference on boredom, which attracted international speakers from social psychology and sociology. And in November, Danckert brought together about a dozen investigators from Canada and the United States for a workshop on the subject. Researchers in fields from genetics to philosophy, psychology and history are starting to work together on boredom research, says John Eastwood, a psychologist at York University in Toronto, Canada. “A critical mass of people addressing similar issues creates more momentum.” The scientific study of boredom dates back to at least 1885, when the British polymath Francis Galton published a short note in Nature on 'The Measure of Fidget' [5] — his account of how restless audience members behaved during a scientific meeting. But decades passed with only a few people taking a serious interest in the subject.
“There are things all around us that we don't think to look at, maybe because they appear trivial,” says Eastwood. That began to change in 1986, when Norman Sundberg and Richard Farmer of the University of Oregon in Eugene published their Boredom Proneness Scale (BPS) [6], the first systematic way for researchers to measure boredom — beyond asking study participants, “Do you feel bored?”. Instead, they could ask how much participants agreed or disagreed with statements such as: “Time always seems to be passing slowly”, “I feel that I am working below my abilities most of the time” and “I find it easy to entertain myself”. (The statements came from interviews and surveys that Sundberg and Farmer had conducted on how people felt when they were bored.) A participant's aggregate score would give a measure of his or her propensity for boredom. The BPS opened up new avenues of research and made it apparent that boredom was about restlessness as much as apathy, the search for meaning as much as ennui. It has served as a launching point for other boredom scales, a catalyst for the field's growing legitimacy and a tool for connecting boredom to other factors, including mental health and academic success. But it also has some widely acknowledged flaws, says Eastwood. One is that the BPS is a self-reported measure, which means that it is inherently subjective. Another is that it measures susceptibility to boredom — 'trait boredom' — not the intensity of the feeling in any given situation, which is known as state boredom. Studies consistently show that these two measures are independent of each other, yet researchers are only beginning to tease them apart. This can be particularly confounding in educational settings. Shifts in teaching style or classroom environment are unlikely to reduce students' trait boredom, which is intrinsic and slow to change, but can be very effective at reducing state boredom, which is purely situational.
The BPS has often been misused to measure both forms of boredom at the same time, yielding answers that are likely to be misleading, says Eastwood. Scientists are still hashing out how to improve on the BPS. In 2013, Eastwood helped to develop the Multidimensional State Boredom Scale (MSBS) [7], which features 29 statements about immediate feelings, such as: “I am stuck in a situation that I feel is irrelevant.” Unlike the BPS, which is all about the participant's habits and personality, the MSBS attempts to measure how bored people feel in the moment. And that, Eastwood hopes, will give it a better shot at revealing what boredom is for everybody. But to measure boredom, researchers must first make sure that study participants are bored. And that is a whole different challenge. One way to create a particular mood, used for decades in psychology, is to show people a video clip. There are scientifically validated videos for inducing happiness, sadness, anger, empathy and many other emotions. So when she was working on her dissertation at Waterloo in 2014, Colleen Merrifield decided to make a video that would bore most people to tears. In Merrifield's video, two men stand in a white, windowless room. Silently, they take clothes from a pile between them and hang them on a white rack — a camisole, a shirt, a sweater, a sock. The seconds tick by: 15, 20, 45, 60. The men keep hanging laundry. Eighty seconds. One of the men asks the other for a clothes peg. One hundred seconds. They keep hanging laundry. Two hundred seconds. They keep hanging laundry. Three hundred seconds. They keep hanging laundry. Shown on a loop, the video can last for as long as five and a half minutes. Perhaps unsurprisingly, the people to whom Merrifield showed this found it stupefyingly dull [8]. But then she tried using the video to study how boredom affected the ability to focus and pay attention.
Her protocol called for participants to carry out a classic cognitive attention task — watching for star-like light clusters to appear or disappear on a monitor — then to sit through the video to get good and bored, and finally to do the task again so that she could see how boredom affected their performance. But she found that she had to redesign the experiment: the task was boring people more than the video. This was not entirely unexpected. Previous studies of boredom had often used tasks instead of videos. But it also demonstrated the problem. There are so many ways for researchers to bore people with tasks — asking them to proofread address labels, say, or to screw nuts and bolts together — that it had always been difficult to compare individual studies. For instance, different studies have found boredom to be correlated with both rising and falling heart rate [9]. But without a standardized method for inducing boredom, it is impossible to work out who is right. In 2014, researchers at Carnegie Mellon University in Pittsburgh, Pennsylvania, published a paper [9] that aimed to begin the process of standardization. It compared six different boredom inductions, representing three broad classes — repetitive physical tasks, simple cognitive tasks, and video or audio media — as well as a control video. The researchers used the MSBS to see how intensely each task elicited boredom, and a measure called the Differential Emotion Scale to see whether each task elicited boredom alone, or a number of other emotions. All six tasks were significantly more boring than the control and all six caused boredom almost exclusively. The best of the bunch was a task that required participants to click a mouse button to rotate a computer icon of a peg a quarter of a turn clockwise, over and over. After that, says Danckert, “I think I might be abandoning the video” to induce boredom in the lab. Instead, he will rely on behavioural tasks.
The inexactness of the tools leaves holes in what researchers can reasonably say about boredom. For instance, many real-world problems that are highly correlated with boredom are connected to the idea of self-control, including addiction, gambling and binge-eating [10]. “I characterize boredom as a deficiency in self-regulation,” Danckert says. “It's a difficulty of engaging with tasks in your environment. The more self-control you have, the less likely you are to be bored.” But does this mean that self-control and boredom are measures of the same thing? Even Danckert is uncertain. Consider people with a history of traumatic brain injury. “Failures of self-control are their problem,” he says. “They might be inappropriately impulsive; there's increased risk-taking; they might also engage in drug and alcohol abuse.” Danckert certainly saw his brother, Paul, experience all those things in the wake of his injury. But in Danckert's research sample of people with traumatic brain injury — who are predominantly in their 40s — ageing seems to have weakened the link between boredom and self-control. In data that are not yet published, Danckert says, his patients report levels of self-control no lower than those of the general population, but their boredom-proneness scores are much higher. By contrast, Danckert's brother seems to demonstrate the opposite effect. He struggled for years with self-control issues, but eventually became less bored and reclaimed his love of music. “It's the most important thing in his life, next to his children,” Danckert says. So there is reason to suspect that boredom and self-control can exist independently — but there is not yet enough evidence to understand much beyond that. Despite all this uncertainty, researchers see themselves as laying a foundation, creating tools and standards that will allow them to tackle really important questions. “We're establishing boredom as a testable construct,” says Bench.
Defining boredom is an important part of that. Different researchers have different pet definitions: a German-led team, for example, identifies five types of boredom [11]. But most workers in the field agree that, at least some of the time, people will work very hard to relieve boredom. This not only presents a more active version of boredom than most people are probably used to, but also has tangible connections to efforts to address boredom in the real world. Lench and Bench are testing whether the drive to become un-bored is so strong that people might be willing to choose unpleasant experiences as an alternative. This idea builds on research that has shown a correlation between sensation-seeking behaviour, even risky behaviour, and high boredom-proneness scores [12]. It is also similar to findings published in Science [13] in 2014 and Appetite [14] in 2015. In the first study, researchers asked people to sit in a room with nothing to do for as long as 15 minutes at a time. Some of the participants, particularly men, were willing to give themselves small electric shocks rather than be left alone with their thoughts. The second paper described two experiments: one in which the participants had access to unlimited sweets, and another in which they had access to unlimited electric shocks. Participants ate more when they were bored — but they also gave themselves more shocks. Even when it is not very pleasant, apparently, novelty is better than monotony. Novelty might also have a role in overcoming boredom in the classroom. In 2014, for instance, researchers led by psychologist Reinhard Pekrun of the University of Munich in Germany reported [15] how they had followed 424 university students over the course of an academic year, measuring their boredom levels and documenting their test scores. The team found evidence of a cycle in which boredom begot lower exam results, which resulted in more disengagement from class and higher levels of boredom.
Those effects were consistent throughout the school year, even after accounting for students' gender, age, interest in the subject, intrinsic motivation and previous achievement. But other studies suggest that novelty can disrupt this cycle [16]. Sae Schatz, director of the Advanced Distributed Learning Initiative, a virtual company that develops educational tools for the US Department of Defense, points to one experiment [17] with a computer system that tutored students in physics. When the system was programmed to insult those who got questions wrong and snidely praise those who got them right, says Schatz, some students, especially adult learners, saw improved outcomes and were willing to spend longer on the machines. Schatz thinks that this could be because the insults provided enough novelty to keep people engaged and less prone to boredom. Looking to the future, researchers such as Eastwood are intent on finding better ways to understand what boredom is and why it is correlated with so many other mental states. They also want to investigate boredom in people who aren't North American college students. That means testing older people, as well as individuals from diverse ethnic and national backgrounds. And, given the impact that boredom may have on education, it also means developing versions of the BPS and MSBS that can be administered to children. Many researchers likewise hope to expand on the types of study being done. To get beyond self-reported data, Danckert wants to start looking at brain structures, and seeing whether there are differences between people who score highly on the BPS and those who don't. These data could help him to understand why boredom manifests so strongly in some people with traumatic brain injury. There's also a need, Danckert says, for more scientists to realize that boredom is fascinating. “We may be on the cusp of having enough people to advance a little more quickly,” he says.
News Article | September 21, 2016
Dr. Drew Higgins from the University of Waterloo has won the 2016 Distinguished Dissertation Award from the Canadian Association for Graduate Studies (CAGS) in the category of Engineering/Medical Science/Natural Science for his work on fuel cell catalysts. Under the supervision of Dr. Zhongwei Chen, Canada Research Chair in Advanced Materials for Clean Energy, Higgins developed an alloy that slashed the use of platinum. Rather than simply matching the performance of current catalysts, it proved seven times more effective. Continuing his research, he ultimately identified half a dozen options that eliminated the precious metal entirely. The 60+ research papers resulting from his work have been cited more than 3,600 times to date. His eventual goal is to create a fuel cell that will last for 5,000 hours of operation and be competitive with typical internal combustion engines. Higgins is now continuing his electrocatalysis work at Stanford as a Banting Postdoctoral Fellow. The CAGS/ProQuest Distinguished Dissertation Awards have been recognizing outstanding Canadian doctoral dissertations for more than 20 years. The judges look for work that makes significant, original contributions to the academic community and to Canadian society. There are two awards: one for engineering, medical sciences and natural sciences; and one for fine arts, humanities and social sciences. Each includes a $1,500 prize and an invitation to speak at the Canadian Association for Graduate Studies conference, to be held this November in Toronto.
News Article | March 23, 2016
Agency: Cordis | Branch: H2020 | Program: RIA | Phase: HCO-06-2015 | Award Amount: 2.71M | Year: 2016
Smoking and other forms of tobacco consumption are considered the single most important cause of preventable morbidity and premature mortality worldwide. Efforts to reduce the devastation of tobacco-related deaths and illness in the EU consist of the Tobacco Products Directive (TPD) and the ongoing implementation of the WHO Framework Convention on Tobacco Control (FCTC). The main objective of EUREST-PLUS is to monitor and evaluate the impact of the TPD within the context of FCTC ratification at an EU level. Our four specific objectives are: 1) To evaluate the psychosocial and behavioural impact of TPD and FCTC implementation, through the creation of a longitudinal cohort of adult smokers in 6 EU Member States (Germany, Greece, Hungary, Poland, Romania, Spain; total n=6,000) in a pre- vs. post-TPD study design. 2) To assess support for TPD implementation through secondary analyses of the Special Eurobarometer on Tobacco Surveys (SETS), cross-sectional surveys performed among 27,000 adults in all 28 EU Member States before the TPD is implemented, and to monitor progress in FCTC implementation in the EU over the past years through trend analyses on the merged 2009, 2012 and 2015 SETS datasets (n=80,000). 3) To document changes in e-cigarette product parameters (technical design, labelling/packaging and chemical composition) following implementation of Article 20 of the TPD. 4) To enhance innovative joint research collaborations through pooling and comparisons across both other EU countries of the ITC Project (UK, NL, FR) and other non-EU countries. Tackling tobacco use is essential to reducing the impact of chronic NCDs, a topic EUREST-PLUS will strive to lead.
News Article | December 13, 2016
A chemistry professor at The University of Texas at Arlington has been honored with a prestigious award for his groundbreaking contributions to the fields of analytical chemistry. Purnendu "Sandy" Dasgupta, the Hamish Small Chair of Ion Analysis in the Department of Chemistry and Biochemistry at UTA, was named recipient of the 2016 Eastern Analytical Symposium's highest award, the Award for Outstanding Achievements in the Fields of Analytical Chemistry. Dasgupta was presented with the award during the organization's annual meeting in Somerset, N.J. "This is a tremendous honor and I'm very grateful for this recognition by my peers," Dasgupta said. "This award means so much to me because it is a rare one that does not recognize expertise in a specific area but recognizes broad contributions across the fields of analytical chemistry." The Eastern Analytical Symposium and Exposition is held each year to provide professional scientists and students continuing education in the analytical and allied sciences through the presentation of symposia of papers, workshops and short courses. College of Science Dean Morteza Khaledi said that the EAS Award is a well-deserved honor for Dasgupta, noting that many of Dasgupta's important contributions to analytical chemistry have had significant positive impact on health and the human condition, one of the main pillars of UTA's Strategic Plan 2020: Bold Solutions | Global Impact. "Through his research Dr. Dasgupta has done so much to address critical issues related to improving human health and the methods we use to treat disease and illness," Khaledi said. "His innovations and contributions across an array of fields make him a most worthy recipient of this prestigious award." 
The honor was made all the more special for Dasgupta by the presence at the awards ceremony of many close friends and colleagues, including Janusz Pawliszyn of the University of Waterloo in Ontario, Canada; Satinder Ahuja, president of Ahuja Consulting; Chris Pohl and Kannan Srinivasan of Thermo Fisher, a biotechnology product development company; Graham Marshall, president of Global FIA, a technology company specializing in flow-based analysis techniques; William Barber of Agilent Technologies, a public research, development and manufacturing company; and Kevin Schug, Shimadzu Distinguished Professor of Analytical Chemistry in the Department of Chemistry and Biochemistry at UTA. Dasgupta, who joined UTA in 2007 following 25 distinguished years at Texas Tech University, has won numerous awards over the course of his career. In August, he received the Tech Titans Technology Inventors Award, presented by the Technology Association of North Texas, for his many innovations in chemical and environmental analysis. Other honors he has received include the 2015 American Chemical Society Division of Analytical Chemistry J. Calvin Giddings Award for Excellence in Education; the 2012 Stephen Dal Nogare Award in Chromatography; the 2012 Wilfred T. Doherty Award, DFW Section of the ACS; and the 2011 ACS Award in Chromatography. He also was named a Fellow of the Institute of Electrical and Electronics Engineers and an honorary member of the Japan Society of Analytical Chemistry, both in 2015. Among his recent research projects, Dasgupta led a team which devised a new method to measure the amount of blood present in dry blood spot analysis, providing a new alternative to the current preferred approach of measuring sodium levels. Dry blood spot analysis is simple and inexpensive and is routinely used to screen newborns for metabolic disorders. It also has proven effective in diagnosing infant HIV infection, especially in developing countries where health budgets are limited. 
Another of Dasgupta's recent projects is the development of a prototype for an implantable in-line shunt flow monitoring system for hydrocephalus patients, which could lead to better treatment, especially in infants and children who account for a large percentage of shunt operations every year. In another project, Dasgupta is using a $1.2 million grant from NASA to further the search for amino acids, the so-called building blocks of life, by extending a platform that he developed to detect and separate ions. Dasgupta's active research areas also include methods for environmentally friendly analysis of arsenic in drinking water; rapid analysis of trace heavy metals in the atmosphere; iodine nutrition in women and infants and the role of the chemical perchlorate; and the development of a NASA-funded ion chromatograph for testing extraterrestrial soil, such as that found on Mars. Dasgupta received a bachelor's degree with honors in Chemistry from Bankura Christian College in 1968 and a master's degree in inorganic chemistry from the University of Burdwan in 1970, both located in West Bengal, India. He came to the United States in 1973 and earned his doctorate in analytical chemistry under Philip W. West, with a minor in electrical engineering, from Louisiana State University in 1977. After working as an instructor at LSU, as a research chemist at the California Primate Research Center and as an adjunct assistant professor in the Department of Civil and Environmental Engineering at the University of California at Davis, he joined Texas Tech University, where he did award-winning research and attained the rank of Paul Whitfield Horn Professor, the institution's highest honor. He has published more than 400 papers and holds 27 patents. The University of Texas at Arlington The University of Texas at Arlington is a Carnegie Research-1 "highest research activity" institution. 
With a projected global enrollment of close to 57,000 in AY 2016-17, UTA is the largest institution in The University of Texas System. Guided by its Strategic Plan Bold Solutions | Global Impact, UTA fosters interdisciplinary research within four broad themes: health and the human condition, sustainable urban communities, global environmental impact, and data-driven discovery. UTA was recently cited by U.S. News & World Report as having the second lowest average student debt among U.S. universities. U.S. News & World Report also ranks UTA fifth in the nation for undergraduate diversity. The University is a Hispanic-Serving Institution and is ranked as the top four-year college in Texas for veterans on Military Times' 2017 Best for Vets list.
News Article | December 12, 2016
Maplesoft today announced that its mathematical computation software, Maple, is now on display at the Science Museum in London. On December 8, 2016, the museum opened Mathematics: The Winton Gallery, which explores how mathematicians, their tools and ideas have helped to shape the modern world over the last four hundred years. The gallery showcases an early version of Maple from 1997. Mathematics: The Winton Gallery is a ground-breaking new permanent gallery that places mathematics at the heart of our lives, bringing the subject to life through remarkable stories, artefacts and design. It features more than 100 exhibits from the Science Museum’s world-class science, technology, engineering and mathematics (STEM) collections, including a 17th century Islamic astrolabe, an early prototype of the Enigma decoding machine, and the Handley Page ‘Gugnunc’ aircraft, which was the result of ground-breaking aerodynamic research. The gallery explores how mathematical practice has shaped, and been shaped by, humans, technology and ideas throughout history. “Mathematical practice underpins so many aspects of our lives and work, and we hope that bringing together these remarkable stories, people and exhibits will inspire visitors to think about the role of mathematics in a new light,” said Dr. David Rooney, Lead Curator of Mathematics: The Winton Gallery at the Science Museum. “At its heart, the mathematics gallery will tell a rich cultural story of human endeavour that has helped transform the world over the last 400 years.” Maple was first developed in 1980 as a computer algebra system, with students at the University of Waterloo first using the software as part of a course in 1981. Maple was sold commercially for the first time in 1984, with Maple v3.3, and 5 years later, Maple v4.3 received PC Magazine’s Editor’s Choice award, leading to rapid expansion in the marketplace. Over the next 20+ years, Maplesoft’s suite of products has grown to include MapleSim, Maple T.A.
and Möbius, with these solutions currently being used by academic institutions, researchers and engineers in over 90 countries around the world. “The new mathematics gallery will provide a tremendous opportunity for people to experience the evolution of mathematics and technology, and to gain a sense of how mathematics has helped to shape our world,” said Jim Cooper, President and CEO of Maplesoft. “We are excited for people to experience Maple in its early form and understand its role in that mathematical evolution.” A timeline of how Maple has evolved over the years can be viewed at the following link: http://www.maplesoft.com/25anniversary/ For more information on the new gallery, and the Science Museum, please visit http://www.sciencemuseum.org.uk/mathematics. About Maplesoft Maplesoft has provided mathematics-based software solutions to educators, engineers, and researchers in science, technology, engineering, and mathematics (STEM) for over 25 years. Maplesoft’s flagship product, Maple, combines the world's most powerful mathematics engine with an interface that makes it extremely easy to analyze, explore, visualize, and solve mathematical problems. Building on this technology, the product line includes solutions for online assessment, system-level modeling and simulation, and online STEM courseware. Maplesoft products provide modern, innovative solutions to meet today’s challenges, from exploring math concepts on a smartphone to enabling a model-driven innovation approach that helps companies reduce risk and bring high-quality products to market faster. Maplesoft products and services are used by more than 8000 educational institutions, research labs, and companies, in over 90 countries. Maplesoft is a subsidiary of Cybernet Systems Group. For further details, please visit http://www.maplesoft.com.
News Article | November 30, 2016
John Friedlander of the University of Toronto and Henryk Iwaniec of Rutgers University will receive the 2017 AMS Joseph L. Doob Prize. The two are honored for their book Opera de Cribro (AMS, 2010). The prime numbers, the building blocks of the whole numbers, have fascinated humankind for millennia. While it has been known since the time of Euclid that the number of primes is infinite, exactly how they are distributed among the whole numbers is still not understood. The Latin title of the prizewinning book by Friedlander and Iwaniec could be translated as A Laborious Work Around the Sieve, where in this context a "sieve" is a mathematical tool for sifting prime numbers out of sets of whole numbers. The Sieve of Eratosthenes, dating from the third century BC, is a simple, efficient method to produce a table of prime numbers. For a long time, it was the only way to study the mysterious sequence of the primes. In the early 20th century, improvements came through the work of Norwegian mathematician Viggo Brun, who combined the Sieve of Eratosthenes with ideas from combinatorics. Tools from another branch of mathematics, complex analysis, came into play through the work of English mathematicians G.H. Hardy and J.E. Littlewood, and of the iconic Indian mathematician Srinivasa Ramanujan (protagonist of the 2016 film The Man Who Knew Infinity). For 30 years, Brun's method and its refinements were the main tools in sieve theory. Then, in 1950, another Norwegian mathematician, Atle Selberg, put forward a new, simple, and elegant method. As his method was independent of that of Brun, the combination of the two gave rise to deep new results. The latter part of the 20th century saw the proof of many profound results on classical prime-number questions that had previously been considered inaccessible. Among these was a formula for the number of primes representable as the sum of a square and of a fourth power, obtained by Friedlander and Iwaniec in 1998. 
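For readers unfamiliar with the classical starting point mentioned above, the Sieve of Eratosthenes can be sketched in a few lines of Python. This is a minimal illustration of the ancient method only, not material drawn from the prizewinning book:

```python
def eratosthenes(n):
    """Return all primes <= n using the Sieve of Eratosthenes."""
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            # Cross out every multiple of p, starting at p*p
            # (smaller multiples were crossed out by smaller primes).
            for m in range(p * p, n + 1, p):
                is_prime[m] = False
    return [p for p in range(2, n + 1) if is_prime[p]]

print(eratosthenes(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Modern sieve theory, of the kind treated in Opera de Cribro, generalizes this idea of sifting integers by their residues modulo primes far beyond simple prime tables.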
With these developments, the time was ripe for a new book dealing with prime-number sieves and the techniques needed for their applications. Written by two of the top masters of the subject, Opera de Cribro is an insightful and comprehensive presentation of the theory and application of sieves. In addition to providing the latest technical background and results, the book looks to the future by raising new questions, giving partial answers, and indicating new ways of approaching the problems. With high-quality writing, clear explanations, and numerous examples, the book helps readers understand the subject in depth. "These features distinguish this unique monograph from anything that had been written before on the subject and lift it to the level of a true masterpiece," the prize citation says. The two prizewinners collaborated on an expository article on number sieves, "What is the Parity Phenomenon?", which appeared in the August 2009 issue of the AMS Notices. Born in Toronto, John Friedlander received his BSc from the University of Toronto and his MA from the University of Waterloo. In 1972, he earned his PhD at Pennsylvania State University under the supervision of S. Chowla. His first position was that of assistant to Atle Selberg at the Institute for Advanced Study. After further positions at IAS, the Massachusetts Institute of Technology, the Scuola Normale Superiore in Pisa, and the University of Illinois at Urbana-Champaign, he returned to the University of Toronto as a faculty member in 1980. He was Mathematics Department Chair from 1987 to 1991 and since 2002 has been University Professor of Mathematics. He was awarded the Jeffery-Williams Prize of the Canadian Mathematical Society (1999) and the CRM-Fields (currently CRM-Fields-PIMS) Prize of the Canadian Mathematical Institutes (2002). He gave an invited lecture at the International Congress of Mathematicians in Zurich in 1994. 
He is a Fellow of the Royal Society of Canada, a Founding Fellow of the Fields Institute, and a Fellow of the AMS. Born in Elblag, Poland, Henryk Iwaniec graduated from Warsaw University in 1971 and received his PhD in 1972. In 1976 he defended his habilitation thesis at the Institute of Mathematics of the Polish Academy of Sciences and was elected a corresponding member. He left Poland in 1983 to take visiting positions in the USA, including long stays at the Institute for Advanced Study in Princeton. In 1987, he was appointed to his present position as New Jersey State Professor of Mathematics at Rutgers University. He was elected to the American Academy of Arts and Sciences (1995), the US National Academy of Sciences (2006), and the Polska Akademia Umiejetnosci (2006, foreign member). He has received numerous prizes including the Sierpinski Medal (1996), the Ostrowski Prize (2001, shared with Richard Taylor and Peter Sarnak), the AMS Cole Prize in Number Theory (2002, shared with Richard Taylor), the AMS Steele Prize for Mathematical Exposition (2011), the Banach Medal of the Polish Academy of Sciences (2015), and the Shaw Prize in Mathematical Sciences (2015, shared with Gerd Faltings). He was an invited speaker at the International Congress of Mathematicians in Helsinki (1978), Berkeley (1986), and Madrid (2006). Presented every three years, the AMS Doob Prize recognizes a single, relatively recent, outstanding research book that makes a seminal contribution to the research literature, reflects the highest standards of research exposition, and promises to have a deep and long-term impact in its area. The prize will be awarded Thursday, January 5, 2017, at the Joint Mathematics Meetings in Atlanta. Find out more about AMS prizes and awards at http://www.
Founded in 1888 to further mathematical research and scholarship, today the American Mathematical Society fulfills its mission through programs and services that promote mathematical research and its uses, strengthen mathematical education, and foster awareness and appreciation of mathematics and its connections to other disciplines and to everyday life.
News Article | November 4, 2016
ETOBICOKE, ON, November 04, 2016-- Mike Lecky, Director, Technical Security Services for Scotiabank, has been recognized for showing dedication, leadership and excellence in Information Security. Worldwide Branding is proud to endorse the recent, notable professional efforts and accomplishments of Mike Lecky. A member in good standing, Mike Lecky parlays 20 years of experience into his professional network, and has been noted for his business achievements, leadership abilities, and technical knowledge. Mr. Lecky is a creative Information Security executive with a focus on progressive security solutions and transforming Information Security in enterprise environments. He is a collaborator, and is skilled at building and aligning talent and leading teams through complex issues. In his current role, he is charged with managing and maintaining security technologies and delivering security services in over 60 countries for one of the larger international retail banking and wealth management corporations. Mr. Lecky has a history of building enterprise-wide security operations, consolidating and centralizing business units and maturing the delivery of security services. He has developed security governance capability in risk management, third-party security contracts, compliance and regulatory reporting (OFSE, FFIEC, SEC, MAS, SOX, Basel and PCI), disaster recovery, data loss prevention and security awareness. With a background in numerous industry sectors, including financial, retail, telecommunications, high technology and military aerospace, Mike Lecky brings diversity and fresh perspectives to business, technology and organizational challenges. Having earned a Bachelor of Applied Science in Electrical Engineering from the University of Waterloo, a Master of Science in Information Technology from the University of Liverpool, and an MBA from the Ivey Business School at the University of Western Ontario, Mr. Lecky has honed his expertise in security operations, cyber risk management, technology strategy, cloud solutions, and organizational change management. Mr. Lecky is a Certified Chief Information Security Officer (C|CISO) and holds several Information Security professional credentials. Also a Professional Engineer (P.Eng.) and a Project Management Professional (PMP), he remains up to date in his field through his affiliation with the Association of Professional Engineers of Ontario (APEO), the Information Systems Audit and Control Association (ISACA), the Information Systems Security Association (ISSA), the International Information Systems Security Certifications Consortium (ISC2), and the Project Management Institute (PMI). Mr. Lecky attributes his success to staying current, building and maintaining solid relationships, and being loyal to his passions. Worldwide Branding has added Mike Lecky to their distinguished Registry of Executives, Professionals and Entrepreneurs. While inclusion in Worldwide Branding is an honor, only small selections of members in each discipline are endorsed and promoted as leaders in their professional fields. About Worldwide Branding: For more than 15 years, Worldwide Branding has been the leading, one-stop-shop personal branding company in the United States and abroad. From writing professional biographies and press releases, to creating and driving Internet traffic to personal websites, our team of branding experts tailors each product specifically for our clients' needs. From health care to finance to education and law, our constituents represent every major industry and occupation, at all career levels. For more information, please visit http://www.worldwidebranding.com
News Article | April 18, 2016
The Institute of Vehicle Concepts at the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt, DLR) has signed a cooperation agreement with the University of Waterloo and the University of Windsor in Canada. Central elements of the collaboration will be research into and the exchange of scientific data on the subjects of lightweight construction and the crashworthiness of vehicles, as well as planning and collaborating on projects. The agreement was signed during the International Crashworthiness Symposium organized by the three collaborating partners, which took place in Windsor, Canada, on 11 April. The University of Waterloo is one of Canada's leading research universities in the engineering sector and is primarily contributing its expertise in materials research and the behavior of materials in the event of a crash. The University of Windsor is one of the world's leaders in the area of energy absorption through metalworking processes. The DLR Institute of Vehicle Concepts works on and coordinates research areas relating to transport technology, new vehicle concepts and vehicle technologies. In its research area of lightweight construction and hybrid construction methods, it is closely involved with new crashworthy lightweight construction methods for road and rail vehicles, and provides a unique research infrastructure with its component crash test facility.
News Article | January 25, 2016
From gene mapping to space exploration, humanity continues to generate ever-larger sets of data — far more information than people can actually process, manage or understand. Machine learning systems can help researchers deal with this ever-growing flood of information. Some of the most powerful of these analytical tools are based on a strange branch of geometry called topology, which deals with properties that stay the same even when something is bent and stretched every which way. Such topological systems are especially useful for analyzing the connections in complex networks, such as the internal wiring of the brain, the U.S. power grid, or the global interconnections of the Internet. But even with the most powerful modern supercomputers, such problems remain daunting and impractical to solve. Now, a new approach that would use quantum computers to streamline these problems has been developed by researchers at MIT, the University of Waterloo, and the University of Southern California. The team describes their theoretical proposal this week in the journal Nature Communications. Seth Lloyd, the paper’s lead author and the Nam P. Suh Professor of Mechanical Engineering, explains that algebraic topology is key to the new method. This approach, he says, helps to reduce the impact of the inevitable distortions that arise every time someone collects data about the real world. In a topological description, basic features of the data (How many holes does it have? How are the different parts connected?) are considered the same no matter how much they are stretched, compressed, or distorted. Lloyd explains that it is often these fundamental topological attributes “that are important in trying to reconstruct the underlying patterns in the real world that the data are supposed to represent.” It doesn’t matter what kind of dataset is being analyzed, he says. 
The topological approach to looking for connections and holes “works whether it’s an actual physical hole, or the data represents a logical argument and there’s a hole in the argument. This will find both kinds of holes.” Using conventional computers, that approach is too demanding for all but the simplest situations. Topological analysis “represents a crucial way of getting at the significant features of the data, but it’s computationally very expensive,” Lloyd says. “This is where quantum mechanics kicks in.” The new quantum-based approach, he says, could exponentially speed up such calculations. Lloyd offers an example to illustrate that potential speedup: If you have a dataset with 300 points, a conventional approach to analyzing all the topological features in that system would require “a computer the size of the universe,” he says. That is, it would take 2^300 (two to the 300th power) processing units — approximately the number of all the particles in the universe. In other words, the problem is simply not solvable in that way. “That’s where our algorithm kicks in,” he says. Solving the same problem with the new system, using a quantum computer, would require just 300 quantum bits — and a device this size may be achieved in the next few years, according to Lloyd. “Our algorithm shows that you don’t need a big quantum computer to kick some serious topological butt,” he says. There are many important kinds of huge datasets where the quantum-topological approach could be useful, Lloyd says, for example understanding interconnections in the brain. “By applying topological analysis to datasets gleaned by electroencephalography or functional MRI, you can reveal the complex connectivity and topology of the sequences of firing neurons that underlie our thought processes,” he says. The same approach could be used for analyzing many other kinds of information.
“You could apply it to the world’s economy, or to social networks, or almost any system that involves long-range transport of goods or information,” Lloyd says. But the limits of classical computation have prevented such approaches from being applied before. While this work is theoretical, “experimentalists have already contacted us about trying prototypes,” he says. “You could find the topology of simple structures on a very simple quantum computer. People are trying proof-of-concept experiments.” Ignacio Cirac, a professor at the Max Planck Institute of Quantum Optics in Munich, Germany, who was not involved in this research, calls it “a very original idea, and I think that it has a great potential.” He adds “I guess that it has to be further developed and adapted to particular problems. In any case, I think that this is top-quality research.” The team also included Silvano Garnerone of the University of Waterloo in Ontario, Canada, and Paolo Zanardi of the Center for Quantum Information Science and Technology at the University of Southern California. The work was supported by the Army Research Office, Air Force Office of Scientific Research, Defense Advanced Research Projects Agency, Multidisciplinary University Research Initiative of the Office of Naval Research, and the National Science Foundation.
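The topological bookkeeping Lloyd describes (counting connected components and holes, known as Betti numbers) can be illustrated classically on a toy example. The sketch below is a hypothetical Python illustration, not the team's quantum algorithm: it computes the Betti numbers of a hollow triangle from its boundary matrix, and hints at why brute force blows up, since n data points admit 2^n - 1 candidate simplices.

```python
import numpy as np

# Hollow triangle: 3 vertices, 3 edges, no filled-in face.
# Topologically this is a circle: one component, one hole.
vertices = [0, 1, 2]
edges = [(0, 1), (1, 2), (0, 2)]

# Boundary matrix d1 maps edges to vertices: the column for edge
# (a, b) has -1 in row a and +1 in row b.
d1 = np.zeros((len(vertices), len(edges)))
for j, (a, b) in enumerate(edges):
    d1[a, j], d1[b, j] = -1.0, 1.0

rank_d1 = np.linalg.matrix_rank(d1)
b0 = len(vertices) - rank_d1   # Betti-0: connected components
b1 = len(edges) - rank_d1      # Betti-1: holes (no 2-simplices here)
print(b0, b1)                  # 1 component, 1 hole

# Why classical brute force is hopeless: the number of candidate
# simplices that can be built over n data points is 2**n - 1,
# which for n = 300 vastly exceeds the particle count of the universe.
n = 300
print(2**n - 1)
```

For three points this is trivial; the point of the quantum proposal is that the same rank computations, done over all 2^n - 1 candidate simplices, would fit in roughly n qubits instead of 2^n classical processing units.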
News Article | November 10, 2016
OAKVILLE, ONTARIO--(Marketwired - Nov. 10, 2016) - Saint Jean Carbon Inc. ("Saint Jean" or the "Company") (TSX VENTURE:SJL), a carbon science company engaged in the exploration of natural graphite properties and related carbon products, is pleased to announce that the Company has a new Chief Technology Officer (CTO): Dr. Zhongwei Chen, Canada Research Chair and Professor in Advanced Materials for Clean Energy at the Waterloo Institute for Nanotechnology, Department of Chemical Engineering, University of Waterloo. Dr. Chen will lead the technology planning, engineering and implementation of all of the Company's clean energy storage and energy creation initiatives. His research work covers advanced materials and electrodes for PEM fuel cells, lithium-ion batteries and zinc-air batteries. He holds a PhD from the University of California, Riverside; an MSChE from the East China University of Science and Technology, China; and a BS from Nanjing University of Technology, China. His honours and awards include the Early Researcher Award from Ontario's Ministry of Economic Development and Innovation (2012), the NSERC Discovery Accelerator Award (2014), the Canada Research Chair in Advanced Materials for Clean Energy (2014), and the E.W.R. Steacie Memorial Fellowship (2016). Complete details are available at http://chemeng.uwaterloo.ca/zchen/index.html Dr. Chen, CTO, commented: "I have had the opportunity to work very closely with Saint Jean Carbon over the last year, specifically with their advanced spherical coated graphite for lithium-ion batteries, and the very promising results have me hopeful that we, together with my global partners, will build the best and most advanced graphite electrode materials for the growing electric car and mass energy storage industries.
We feel it is imperative to make sure that in every step we take towards future supply, we demonstrate our team strengths and constant superior technological advancements." Paul Ogilvie, CEO, commented: "On behalf of the Board of Directors, Shareholders and Stakeholders, I am honoured that Zhongwei has chosen our Company, over the hundreds of other possible suitors. We feel our working relationship over the last year has proven a very strong bond between our raw material and his engineering excellence. We are in a constant drive to move forward as fast as we can with the very best people, and with this appointment, we have just topped our own expectations." Saint Jean is a publicly traded carbon science company, with interest in graphite mining claims in the province of Quebec in Canada. For the latest information on Saint Jean's properties and news please refer to the website: http://www.saintjeancarbon.com/ On behalf of the Board of Directors Neither TSX Venture Exchange nor its Regulation Services Provider (as that term is defined in the policies of the TSX Venture Exchange) accepts responsibility for the adequacy or accuracy of this release. FORWARD LOOKING STATEMENTS: This news release contains forward-looking statements, within the meaning of applicable securities legislation, concerning Saint Jean's business and affairs. In certain cases, forward-looking statements can be identified by the use of words such as "plans", "expects" or "does not expect", "intends", "budget", "scheduled", "estimates", "forecasts", "anticipates" or variations of such words and phrases or state that certain actions, events or results "may", "could", "would", "might" or "will be taken", "occur" or "be achieved". These forward-looking statements are based on current expectations, and are naturally subject to uncertainty and changes in circumstances that may cause actual results to differ materially. 
The forward-looking statements in this news release assume, inter alia, that the conditions for completion of the Transaction, including regulatory and shareholder approvals, if necessary, will be met. Although Saint Jean believes that the expectations represented in such forward-looking statements are reasonable, there can be no assurance that these expectations will prove to be correct. Statements of past performance should not be construed as an indication of future performance. Forward-looking statements involve significant risks and uncertainties, should not be read as guarantees of future performance or results, and will not necessarily be accurate indications of whether or not such results will be achieved. A number of factors, including those discussed above, could cause actual results to differ materially from the results discussed in the forward-looking statements. Any such forward-looking statements are expressly qualified in their entirety by this cautionary statement. All of the forward-looking statements made in this press release are qualified by these cautionary statements. Readers are cautioned not to place undue reliance on such forward-looking statements. Forward-looking information is provided as of the date of this press release, and Saint Jean assumes no obligation to update or revise them to reflect new events or circumstances, except as may be required under applicable securities laws.
News Article | November 3, 2016
Present-day racial biases may contribute to the pollution and devaluation of lower- and middle-class black communities, according to new research led by a social psychologist at the University of Illinois at Chicago. The investigation was based on several studies demonstrating that physical spaces, such as houses or neighborhoods, are targets of racial stereotyping, discrimination, and implicit racial bias. The researchers found study participants applied negative stereotypes, such as "impoverished," "crime-ridden," or "dirty," in their perceptions of physical spaces associated with black Americans. "These space-focused stereotypes can make people feel less connected to a space, assume it has low-quality characteristics, monetarily devalue it, and dampen its protection from environmental harms," said Courtney Bonam, UIC assistant professor of psychology and the study's lead author. "Some of the findings show that space-focused stereotypes figuratively pollute the way observers imagine a target area and their judgment about an existing structure in it, while other work demonstrates how this presumed figurative pollution leads observers to consider literally polluting black space." Bonam says the findings are relevant to recent examples of pollution exposure, such as the lead contamination in Flint, Michigan, and East Chicago, Indiana, and also provide insights into the longstanding racial wealth gap in the United States. One study asked a national sample of over 400 white U.S. citizens to read a proposal to build a potentially hazardous chemical plant near a residential neighborhood. Half of the participants were told the nearby neighborhood is mostly black, while the other half was told that the area is mostly white. Even though all participants read the same proposal, they were less likely to report opposition to building the chemical plant when the nearby neighborhood was mostly black. 
"They assumed it was an industrial area when it was black, which led them to devalue and subsequently pollute the land there," Bonam said. "Additionally, these findings held when participants were told that the neighborhood had middle-class property values; and when accounting both for participants' perceptions of the neighborhood's class level, and their negative attitudes toward black people in general." In another study, a national sample of more than 200 white U.S. citizens were given pictures of the same middle-class, suburban house. Half of the people were told the house was in a predominately black neighborhood, while the other segment was informed that it was in a mostly white neighborhood. Those who thought the house was in a black neighborhood estimated its value at $20,000 less than the other group, and were less likely to say they would live in or buy the house. The researchers said people racially stereotyped the surrounding area by assuming it had lower quality services and amenities when it was a black neighborhood. A different study asked a racially diverse sample of 30 U.S. citizens living in the San Francisco Bay area to evaluate the same middle-class suburban house, which they were told was for sale. When the homeowners were pictured as a black family, respondents evaluated the home more negatively than when it was shown as being sold by a white family. Bonam notes the bias in these cases is directed at racialized physical space, and it operates even without negative attitudes toward black people. "Together, these studies tell us that space-focused stereotypes may thus contribute to wide-ranging social problems, from racial disparities in wealth to the overexposure of black people to environmental pollution," Bonam said. "These studies also broaden the scope of traditional stereotyping research and can inform policymakers, urban planners, and the public about an insidious form of stereotyping that can perpetuate racial inequalities." 
The research was funded by the American Psychological Association, the Society for the Psychological Study of Social Issues, Stanford University, and UIC, and was published in the November issue of the Journal of Experimental Psychology: General. Hillary Bergsieker of the University of Waterloo and Jennifer Eberhardt of Stanford are the co-authors.
News Article | January 25, 2016
From gene mapping to space exploration, humanity continues to generate ever-larger sets of data — far more information than people can actually process, manage, or understand. Machine learning systems can help researchers deal with this ever-growing flood of information. Some of the most powerful of these analytical tools are based on a strange branch of geometry called topology, which deals with properties that stay the same even when something is bent and stretched every which way. Such topological systems are especially useful for analyzing the connections in complex networks, such as the internal wiring of the brain, the U.S. power grid, or the global interconnections of the Internet. But even with the most powerful modern supercomputers, such problems remain daunting and impractical to solve. Now, a new approach that would use quantum computers to streamline these problems has been developed by researchers at MIT, the University of Waterloo, and the University of Southern California. The team describes their theoretical proposal this week in the journal Nature Communications. Seth Lloyd, the paper’s lead author and the Nam P. Suh Professor of Mechanical Engineering, explains that algebraic topology is key to the new method. This approach, he says, helps to reduce the impact of the inevitable distortions that arise every time someone collects data about the real world. In a topological description, basic features of the data (How many holes does it have? How are the different parts connected?) are considered the same no matter how much they are stretched, compressed, or distorted. Lloyd explains that it is often these fundamental topological attributes “that are important in trying to reconstruct the underlying patterns in the real world that the data are supposed to represent.” It doesn’t matter what kind of dataset is being analyzed, he says. 
The topological approach to looking for connections and holes “works whether it’s an actual physical hole, or the data represents a logical argument and there’s a hole in the argument. This will find both kinds of holes.” Using conventional computers, that approach is too demanding for all but the simplest situations. Topological analysis “represents a crucial way of getting at the significant features of the data, but it’s computationally very expensive,” Lloyd says. “This is where quantum mechanics kicks in.” The new quantum-based approach, he says, could exponentially speed up such calculations. Lloyd offers an example to illustrate that potential speedup: If you have a dataset with 300 points, a conventional approach to analyzing all the topological features in that system would require “a computer the size of the universe,” he says. That is, it would take 2^300 (two to the 300th power) processing units — approximately the number of all the particles in the universe. In other words, the problem is simply not solvable in that way. “That’s where our algorithm kicks in,” he says. Solving the same problem with the new system, using a quantum computer, would require just 300 quantum bits — and a device this size may be achieved in the next few years, according to Lloyd. “Our algorithm shows that you don’t need a big quantum computer to kick some serious topological butt,” he says. There are many important kinds of huge datasets where the quantum-topological approach could be useful, Lloyd says, for example understanding interconnections in the brain. “By applying topological analysis to datasets gleaned by electroencephalography or functional MRI, you can reveal the complex connectivity and topology of the sequences of firing neurons that underlie our thought processes,” he says. The same approach could be used for analyzing many other kinds of information. 
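The counting behind the “computer the size of the universe” remark can be made concrete in a few lines. Every non-empty subset of the data points is a candidate simplex, so a brute-force topological analysis must track 2^n − 1 objects for n points, while n qubits suffice to index a 2^n-dimensional state space. A minimal illustration (this is only the counting argument, not the paper's algorithm):

```python
from itertools import combinations

def count_possible_simplices(n_points):
    """Every non-empty subset of the data points is a candidate simplex,
    so a brute-force topological analysis must track 2**n - 1 of them."""
    return sum(1 for k in range(1, n_points + 1)
               for _ in combinations(range(n_points), k))

# The count is manageable for a handful of points...
for n in (5, 10, 15):
    print(n, count_possible_simplices(n))

# ...but it doubles with every added point. At n = 300 there are
# 2**300 - 1 candidate simplices (about 2e90, roughly the particle
# count of the observable universe), yet 300 qubits are enough to
# index a 2**300-dimensional state space.
print(2**300)
```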
“You could apply it to the world’s economy, or to social networks, or almost any system that involves long-range transport of goods or information,” says Lloyd, who holds a joint appointment as a professor of physics. But the limits of classical computation have prevented such approaches from being applied before. While this work is theoretical, “experimentalists have already contacted us about trying prototypes,” he says. “You could find the topology of simple structures on a very simple quantum computer. People are trying proof-of-concept experiments.” Ignacio Cirac, a professor at the Max Planck Institute of Quantum Optics in Munich, Germany, who was not involved in this research, calls it “a very original idea, and I think that it has a great potential.” He adds “I guess that it has to be further developed and adapted to particular problems. In any case, I think that this is top-quality research.” The team also included Silvano Garnerone of the University of Waterloo in Ontario, Canada, and Paolo Zanardi of the Center for Quantum Information Science and Technology at the University of Southern California. The work was supported by the Army Research Office, Air Force Office of Scientific Research, Defense Advanced Research Projects Agency, Multidisciplinary University Research Initiative of the Office of Naval Research, and the National Science Foundation.
News Article | February 15, 2017
February 13, 2017 (Waterloo, Ontario, Canada) - A machine learning algorithm designed to teach computers how to recognize photos, speech patterns, and hand-written digits has now been applied to a vastly different set of data: identifying phase transitions between states of matter. This new research, published today in Nature Physics by two Perimeter Institute researchers, was built on a simple question: could industry-standard machine learning algorithms help fuel physics research? To find out, former Perimeter Institute postdoctoral fellow Juan Carrasquilla and Roger Melko, an Associate Faculty member at Perimeter and Associate Professor at the University of Waterloo, repurposed Google's TensorFlow, an open-source software library for machine learning, and applied it to a physical system. Melko says they didn't know what to expect. "I thought it was a long shot," he admits. Using gigabytes of data representing different state configurations created using simulation software on supercomputers, Carrasquilla and Melko created a large collection of "images" to introduce into the machine learning algorithm (also known as a neural network). The result: the neural network distinguished phases of a simple magnet, and could distinguish an ordered ferromagnetic phase from a disordered high-temperature phase. It could even find the boundary (or phase transition) between phases, says Carrasquilla, who now works at quantum computing company D-Wave Systems. "Once we saw that they worked, then we knew they were going to be useful for many related problems. All of a sudden, the sky's the limit," Melko says. "Everyone like me who has access to massive amounts of data can try these standard neural networks." This research, which was originally published as a preprint on the arXiv in May, 2016, shows that applying machine learning to condensed matter and statistical physics could open entirely new opportunities for research and, eventually, real-world application. 
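The workflow described above — feed labeled spin configurations to a classifier and let it find the phase boundary — can be sketched with a toy stand-in. Instead of TensorFlow and Monte Carlo data, the snippet below generates crude "ordered" and "disordered" configurations and trains a one-feature logistic classifier on the magnetization. The feature here is hand-picked; the paper's point is precisely that a neural network can discover such features from raw configurations:

```python
import math
import random

random.seed(0)

def toy_config(n, ordered):
    """Crude stand-in for a Monte Carlo spin snapshot: 'ordered' configs
    are ~90% aligned spins, 'disordered' configs are fair coin flips."""
    if ordered:
        sign = random.choice((-1, 1))
        return [sign if random.random() < 0.9 else -sign for _ in range(n)]
    return [random.choice((-1, 1)) for _ in range(n)]

def magnetization(cfg):
    return abs(sum(cfg)) / len(cfg)

# Labeled dataset: 1 = ordered (ferromagnetic), 0 = disordered.
data = [(magnetization(toy_config(100, ordered)), int(ordered))
        for ordered in (True, False) for _ in range(200)]

# One-weight logistic classifier trained by gradient descent -- a minimal
# stand-in for the neural network used in the research.
w, b = 0.0, 0.0
for _ in range(2000):
    gw = gb = 0.0
    for m, y in data:
        p = 1 / (1 + math.exp(-(w * m + b)))
        gw += (p - y) * m
        gb += (p - y)
    w -= 0.5 * gw / len(data)
    b -= 0.5 * gb / len(data)

accuracy = sum((w * m + b > 0) == bool(y) for m, y in data) / len(data)
print("training accuracy:", accuracy)
```

Because the two phases separate cleanly in magnetization, even this one-parameter model finds the boundary; the interesting physics cases are the ones where no such obvious order parameter is known in advance.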
The full journal article can be found here: http://www. Perimeter Institute is the world's largest research hub devoted to theoretical physics. The independent Institute was founded in 1999 to foster breakthroughs in the fundamental understanding of our universe, from the smallest particles to the entire cosmos. Research at Perimeter is motivated by the understanding that fundamental science advances human knowledge and catalyzes innovation, and that today's theoretical physics is tomorrow's technology. Located in the Region of Waterloo, the not-for-profit Institute is a unique public-private endeavour, including the Governments of Ontario and Canada, that enables cutting-edge research, trains the next generation of scientific pioneers, and shares the power of physics through award-winning educational outreach and public engagement.
News Article | February 13, 2017
SCOR Global Life announces the following promotions, with immediate effect: Paolo De Martin, Chief Executive Officer of SCOR Global Life, comments: "With today's promotions we continue to optimally position SCOR Global Life for the execution of the "Vision in Action" strategic plan. The newly created role of Chief Actuary will ensure strong coordination across our actuarial and risk management teams as we grow and develop in new business areas. These promotions in the Americas confirm that our clients can count on a strong management team in this important region, where we have key market leadership positions." Denis Kessler, Chairman & Chief Executive Officer of SCOR, comments: "The promotions we have announced today once again demonstrate the strength, depth and diversity of our management team and our ability to retain and promote the best talent within our organization." Brona Magee, an Irish citizen, holds a Bachelor of Actuarial and Financial Studies degree from University College Dublin. Brona moved to Charlotte, USA to take the position of CFO - Americas at SCOR Global Life in 2013 and in 2015 was promoted to Deputy CEO - Americas. Prior to that Brona was the CFO for SCOR Global Life Reinsurance Ireland from 2011 to 2013. From 2006 to 2011 she worked for Transamerica International Reinsurance Ireland, which was acquired by SCOR in 2011. Brona is a Fellow of the Society of Actuaries in Ireland. Brock Robbins, a Canadian citizen, holds an Actuarial Science degree from the University of Waterloo, Canada. Brock joined SCOR in 2011 with the acquisition of Transamerica Reinsurance and took the role of Chief Pricing Officer - Americas at SCOR Global Life. In 2015 Brock was promoted to EVP Head of US Markets. Brock is a Fellow of the Society of Actuaries. Tammy Kapeller, a United States Citizen, holds a degree in Actuarial Science from the University of Nebraska and a Master's Degree from the University of Missouri-Kansas City. 
Tammy joined SCOR Global Life in 2013 with the acquisition of Generali USA and has been Chief Operating Officer for the Americas since then. Tammy is a Fellow of the Society of Actuaries. Sean Hartley, a United States Citizen born in South Africa, holds a Bachelor of Science Degree in Economics and Finance from the University of South Carolina. Sean joined SCOR Global Life in 2014 as Vice President of Human Resources for the Americas. Prior to joining SCOR Global Life, Sean gained extensive Human Resources experience working with AEGON. Sean is a Hogan-certified, LVI and Executive Coach. Prior to this role Sean was SVP of Human Resources. Nicole Baird, a United States citizen, holds a Bachelor of Business Administration from Pfeiffer University and a Master's degree in Business Administration from East Carolina University. Nicole joined SCOR Global Life in 2013 as an HR Generalist and most recently held the position of HR Business Partner in the Americas. SCOR does not communicate "profit forecasts" in the sense of Article 2 of (EC) Regulation n°809/2004 of the European Commission. Thus, any forward-looking statements contained in this communication should not be held as corresponding to such profit forecasts. Information in this communication may include "forward-looking statements", including but not limited to statements that are predictions of or indicate future events, trends, plans or objectives, based on certain assumptions and include any statement which does not directly relate to a historical fact or current fact. Forward-looking statements are typically identified by words or phrases such as, without limitation, "anticipate", "assume", "believe", "continue", "estimate", "expect", "foresee", "intend", "may increase" and "may fluctuate" and similar expressions or by future or conditional verbs such as, without limitations, "will", "should", "would" and "could." 
Undue reliance should not be placed on such statements, because, by their nature, they are subject to known and unknown risks, uncertainties and other factors, which may cause actual results, on the one hand, to differ from any results expressed or implied by the present communication, on the other hand. Please refer to the 2015 reference document filed on 4 March 2016 under number D.16-0108 with the French Autorité des marchés financiers (AMF) posted on SCOR's website www.scor.com (the "Document de Référence"), for a description of certain important factors, risks and uncertainties that may affect the business of the SCOR Group. As a result of the extreme and unprecedented volatility and disruption of the current global financial crisis, SCOR is exposed to significant financial, capital market and other risks, including movements in interest rates, credit spreads, equity prices, and currency movements, changes in rating agency policies or practices, and the lowering or loss of financial strength or other ratings. The Group's financial information is prepared on the basis of IFRS and interpretations issued and approved by the European Union. This financial information does not constitute a set of financial statements for an interim period as defined by IAS 34 "Interim Financial Reporting".
Christians J.A.,University of Notre Dame |
Fung R.C.M.,University of Notre Dame |
Fung R.C.M.,University of Waterloo |
Kamat P.V.,University of Notre Dame
Journal of the American Chemical Society | Year: 2014
Organo-lead halide perovskite solar cells have emerged as one of the most promising candidates for the next generation of solar cells. To date, these perovskite thin film solar cells have exclusively employed organic hole conducting polymers which are often expensive and have low hole mobility. In a quest to explore new inorganic hole conducting materials for these perovskite-based thin film photovoltaics, we have identified copper iodide as a possible alternative. Using copper iodide, we have succeeded in achieving a promising power conversion efficiency of 6.0% with excellent photocurrent stability. The open-circuit voltage, compared to the best spiro-OMeTAD devices, remains low and is attributed to higher recombination in CuI devices as determined by impedance spectroscopy. However, impedance spectroscopy revealed that CuI exhibits 2 orders of magnitude higher electrical conductivity than spiro-OMeTAD which allows for significantly higher fill factors. Reducing the recombination in these devices could render CuI as a cost-effective competitor to spiro-OMeTAD in perovskite solar cells. © 2013 American Chemical Society.
Motahari A.S.,Sharif University of Technology |
Oveis-Gharan S.,Ciena |
Maddah-Ali M.-A.,Alcatel - Lucent |
Khandani A.K.,University of Waterloo
IEEE Transactions on Information Theory | Year: 2014
In this paper, we develop the machinery of real interference alignment. This machinery is extremely powerful in achieving the sum degrees of freedom (DoF) of single antenna systems. The scheme of real interference alignment is based on designing single-layer and multilayer constellations used for modulating information messages at the transmitters. We show that constellations can be aligned in a similar fashion as that of vectors in multiple antenna systems and space can be broken up into fractional dimensions. The performance analysis of the signaling scheme makes use of a recent result in the field of Diophantine approximation, which states that the convergence part of the Khintchine-Groshev theorem holds for points on nondegenerate manifolds. Using real interference alignment, we obtain the sum DoF of two model channels, namely the Gaussian interference channel (IC) and the X channel. It is proved that the sum DoF of the K-user IC is K/2 for almost all channel parameters. We also prove that the sum DoF of the X channel with K transmitters and M receivers is KM/(K+M-1) for almost all channel parameters. © 2014 IEEE.
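The two DoF formulas quoted in the abstract are simple rational expressions; a few sample values (kept exact with Python's fractions) show how the aligned schemes compare with orthogonal sharing, which achieves only 1 total DoF regardless of the number of users:

```python
from fractions import Fraction

def ic_sum_dof(K):
    # Sum DoF of the K-user Gaussian interference channel
    # (holds for almost all channel parameters, per the paper).
    return Fraction(K, 2)

def x_channel_sum_dof(K, M):
    # Sum DoF of the X channel with K transmitters and M receivers.
    return Fraction(K * M, K + M - 1)

for K in (2, 3, 10):
    print(K, ic_sum_dof(K), x_channel_sum_dof(K, K))
# Orthogonal sharing (e.g. TDMA) achieves only 1 DoF in total, so the
# aligned scheme's gain grows linearly with the number of users.
```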
Farhad S.,Carleton University |
Hamdullahpur F.,University of Waterloo
Journal of Power Sources | Year: 2010
A novel portable electric power generation system, fuelled by ammonia, is introduced and its performance is evaluated. In this system, a solid oxide fuel cell (SOFC) stack that consists of anode-supported planar cells with Ni-YSZ anode, YSZ electrolyte and YSZ-LSM cathode is used to generate electric power. The small size, simplicity, and high electrical efficiency are the main advantages of this environmentally friendly system. The results predicted through computer simulation of this system confirm that the first-law efficiency of 41.1% with the system operating voltage of 25.6 V is attainable for a 100 W portable system, operated at the cell voltage of 0.73 V and fuel utilization ratio of 80%. In these operating conditions, an ammonia cylinder with a capacity of 0.8 L is sufficient to sustain full-load operation of the portable system for 9 h and 34 min. The effects of the cell operating voltage at different fuel utilization ratios on the number of cells required in the SOFC stack, the first- and second-law efficiencies, the system operating voltage, the excess air, the heat transfer from the SOFC stack, and the duration of operation of the portable system with a cylinder of ammonia fuel, are also studied through a detailed sensitivity analysis. Overall, the ammonia-fuelled SOFC system introduced in this paper exhibits an appropriate performance for portable power generation applications. © 2009 Elsevier B.V.
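As a sanity check on the quoted figures, one can back out the overall efficiency implied by the 9 h 34 min runtime. The ammonia property values below (liquid density and lower heating value) are not given in the abstract and are assumed here at textbook levels; with them, the implied overall efficiency lands within a few percent of the product of the quoted first-law efficiency and fuel utilization ratio:

```python
# Rough consistency check of the reported runtime. The ammonia property
# values below are textbook-level assumptions, not from the paper:
LHV_NH3 = 18.6e6      # J/kg, assumed lower heating value of ammonia
RHO_NH3 = 0.68        # kg/L, assumed density of liquid ammonia
P_OUT = 100.0         # W, rated electric output (from the paper)
ETA = 0.411           # first-law efficiency (from the paper)
UTIL = 0.80           # fuel utilization ratio (from the paper)

fuel_energy = 0.8 * RHO_NH3 * LHV_NH3    # J stored in the 0.8 L cylinder
runtime_s = 9 * 3600 + 34 * 60           # reported 9 h 34 min

# Overall electric energy out divided by chemical energy in:
implied_eff = P_OUT * runtime_s / fuel_energy
print(f"implied overall efficiency: {implied_eff:.3f}")
print(f"eta x utilization:          {ETA * UTIL:.3f}")
```

Under these assumed property values the two figures agree to within about 4%, which is consistent with the reported operating point.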
Das S.,University of Lethbridge |
Mann R.B.,University of Waterloo
Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics | Year: 2011
Almost all theories of Quantum Gravity predict modifications of the Heisenberg Uncertainty Principle near the Planck scale to a so-called Generalized Uncertainty Principle (GUP). Recently it was shown that the GUP gives rise to corrections to the Schrödinger and Dirac equations, which in turn affect all non-relativistic and relativistic quantum Hamiltonians. In this Letter, we apply it to superconductivity and the quantum Hall effect and compute Planck scale corrections. We also show that Planck scale effects may account for a (small) part of the anomalous magnetic moment of the muon. We obtain (weak) empirical bounds on the undetermined GUP parameter from present-day experiments. © 2011 Elsevier B.V.
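For readers unfamiliar with the GUP, a commonly used one-parameter quadratic form is shown below. Parametrizations vary in the literature, and the Letter's version may differ (for instance by including a term linear in p), so this is given only as orientation:

```latex
% One common quadratic GUP; parametrizations vary across the literature:
[\hat{x},\hat{p}] = i\hbar\left(1 + \beta \hat{p}^{2}\right),
\qquad
\Delta x\,\Delta p \;\ge\; \frac{\hbar}{2}\left(1 + \beta\,(\Delta p)^2\right),
\qquad
\beta = \frac{\beta_0}{(M_{\mathrm{Pl}}\, c)^2},
```

with β₀ a dimensionless parameter; the Planck-scale corrections to quantum Hamiltonians discussed in the abstract then enter at first order in β.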
Shames I.,KTH Royal Institute of Technology |
Dasgupta S.,University of Iowa |
Fidan B.,University of Waterloo |
Anderson B.D.O.,Australian National University
IEEE Transactions on Automatic Control | Year: 2012
Consider an agent A at an unknown location, undergoing sufficiently slow drift, and a mobile agent B that must move to the vicinity of and then circumnavigate A at a prescribed distance from A. In doing so, B can only measure its distance from A, and knows its own position in some reference frame. This paper considers this problem, which has applications to surveillance and orbit maintenance. In many of these applications it is difficult for B to directly sense the location of A, e.g. when all that B can sense is the intensity of a signal emitted by A. This intensity does, however, provide a measure of the distance. We propose a nonlinear periodic continuous time control law that achieves the objective using this distance measurement. Fundamentally, a) B must exploit its motion to estimate the location of A, and b) use its best instantaneous estimate of where A resides, to move itself to achieve the circumnavigation objective. For a) we use an open loop algorithm formulated by us in an earlier paper. The key challenge tackled in this paper is to design a control law that closes the loop by marrying the two goals. As long as the initial estimate of the source location is not coincident with the initial position of B, the algorithm is guaranteed to be exponentially convergent when A is stationary. Under the same condition, we establish that when A drifts with a sufficiently small, unknown velocity, B globally achieves its circumnavigation objective, to within a margin proportional to the drift velocity. © 2011 IEEE.
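Step a) — localizing A purely from distance measurements taken along B's own trajectory — can be illustrated with a textbook least-squares trilateration sketch (not the paper's open-loop estimator): subtracting one range equation from the others turns the quadratic constraints into linear ones.

```python
import math

# Unknown source location A (ground truth, used only to synthesize data).
A = (3.0, -2.0)

# B samples its own positions along an arc and records the distance to A
# at each one -- the only quantity it can measure.
samples = []
for k in range(12):
    t = 2 * math.pi * k / 12
    p = (6 * math.cos(t), 6 * math.sin(t))
    samples.append((p, math.dist(p, A)))

# Subtracting the first range equation |p0 - x|^2 = d0^2 from the others
# turns each |p_i - x|^2 = d_i^2 into a linear equation in x = (x1, x2).
(p0x, p0y), d0 = samples[0]
rows, rhs = [], []
for (px, py), d in samples[1:]:
    rows.append((2 * (px - p0x), 2 * (py - p0y)))
    rhs.append((d0**2 - d**2) + (px**2 - p0x**2) + (py**2 - p0y**2))

# Solve the two-unknown least-squares problem via the normal equations.
a11 = sum(r[0] * r[0] for r in rows)
a12 = sum(r[0] * r[1] for r in rows)
a22 = sum(r[1] * r[1] for r in rows)
b1 = sum(r[0] * v for r, v in zip(rows, rhs))
b2 = sum(r[1] * v for r, v in zip(rows, rhs))
det = a11 * a22 - a12 * a12
estimate = ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
print("estimate of A:", estimate)
```

With noiseless measurements the estimate recovers A exactly; the paper's contribution is closing the loop so that estimation and circumnavigation happen simultaneously while A may drift.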
Sundaram S.,University of Waterloo |
Hadjicostis C.N.,University of Cyprus |
Hadjicostis C.N.,University of Illinois at Urbana - Champaign
IEEE Transactions on Automatic Control | Year: 2013
We develop a graph-theoretic characterization of controllability and observability of linear systems over finite fields. Specifically, we show that a linear system will be structurally controllable and observable over a finite field if the graph of the system satisfies certain properties, and the size of the field is sufficiently large. We also provide graph-theoretic upper bounds on the controllability and observability indices for structured linear systems (over arbitrary fields). We then use our analysis to design nearest-neighbor rules for multi-agent systems where the state of each agent is constrained to lie in a finite set. We view the discrete states of each agent as elements of a finite field, and employ a linear iterative strategy whereby at each time-step, each agent updates its state to be a linear combination (over the finite field) of its own state and the states of its neighbors. Using our results on structural controllability and observability, we show how a set of leader agents can use this strategy to place all agents into any desired state (within the finite set), and how a set of sink agents can recover the set of initial values held by all of the agents. © 2012 IEEE.
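The sink mechanism described above can be sketched on a three-agent path graph over GF(5): each agent updates to a sum of its own and its neighbors' states mod 5, and a sink agent recovers every initial value by inverting the observability system. The field size, graph, and unit weights below are all illustrative choices, not from the paper:

```python
P = 5  # prime field size GF(5); illustrative choice

def mat_vec(A, x):
    return [sum(a * b for a, b in zip(row, x)) % P for row in A]

def solve_mod(A, b):
    """Gauss-Jordan elimination over GF(P); A is assumed invertible."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col])
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], P - 2, P)      # Fermat inverse mod prime P
        M[col] = [v * inv % P for v in M[col]]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [(v - f * u) % P for v, u in zip(M[r], M[col])]
    return [row[-1] for row in M]

# Path graph 1-2-3: each agent's next state is the GF(5) sum of its own
# state and its neighbors' states (all weights 1, chosen for illustration).
A = [[1, 1, 0],
     [1, 1, 1],
     [0, 1, 1]]
x0 = [2, 4, 1]                                # initial values to recover

# The sink (agent 3) records its own state at t = 0, 1, 2 ...
x, y = x0, []
for _ in range(3):
    y.append(x[2])
    x = mat_vec(A, x)

# ... and inverts the observability system O x0 = y (mod 5), which works
# here because O happens to be invertible over GF(5) for this graph.
O, row = [], [0, 0, 1]
for _ in range(3):
    O.append(row)
    row = [sum(row[i] * A[i][j] for i in range(3)) % P for j in range(3)]
recovered = solve_mod(O, y)
print("recovered initial values:", recovered)
```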
Liang X.,University of Waterloo |
Hart C.,University of Waterloo |
Pang Q.,University of Waterloo |
Garsuch A.,BASF |
And 2 more authors.
Nature Communications | Year: 2015
The lithium-sulfur battery is receiving intense interest because its theoretical energy density exceeds that of lithium-ion batteries at much lower cost, but practical applications are still hindered by capacity decay caused by the polysulfide shuttle. Here we report a strategy to entrap polysulfides in the cathode that relies on a chemical process, whereby a host - manganese dioxide nanosheets serve as the prototype - reacts with initially formed lithium polysulfides to form surface-bound intermediates. These function as a redox shuttle to catenate and bind 'higher' polysulfides, and convert them on reduction to insoluble lithium sulfide via disproportionation. The sulfur/manganese dioxide nanosheet composite with 75 wt% sulfur exhibits a reversible capacity of 1,300 mA h g⁻¹ at moderate rates and a fade rate over 2,000 cycles of 0.036%/cycle, among the best reported to date. We furthermore show that this mechanism extends to graphene oxide and suggest it can be employed more widely. © 2015 Macmillan Publishers Limited. All rights reserved.
Sundaram S.,University of Waterloo |
Hadjicostis C.N.,University of Cyprus |
Hadjicostis C.N.,University of Illinois at Urbana - Champaign
IEEE Transactions on Automatic Control | Year: 2011
Given a network of interconnected nodes, each with its own value (such as a measurement, position, vote, or other data), we develop a distributed strategy that enables some or all of the nodes to calculate any arbitrary function of the node values, despite the actions of malicious nodes in the network. Our scheme assumes a broadcast model of communication (where all nodes transmit the same value to all of their neighbors) and utilizes a linear iteration where, at each time-step, each node updates its value to be a weighted average of its own previous value and those of its neighbors. We consider a node to be malicious or faulty if, instead of following the predefined linear strategy, it updates its value arbitrarily at each time-step (perhaps conspiring with other malicious nodes in the process). We show that the topology of the network completely characterizes the resilience of linear iterative strategies to this kind of malicious behavior. First, when the network contains 2f or fewer vertex-disjoint paths from some node x_j to another node x_i, we provide an explicit strategy for f malicious nodes to follow in order to prevent node x_i from receiving any information about x_j's value. Next, if node x_i has at least 2f+1 vertex-disjoint paths from every other (non-neighboring) node, we show that x_i is guaranteed to be able to calculate any arbitrary function of all node values when the number of malicious nodes is f or less. Furthermore, we show that this function can be calculated after running the linear iteration for a finite number of time-steps (upper bounded by the number of nodes in the network) with almost any set of weights (i.e., for all weights except for a set of measure zero). © 2011 IEEE.
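In the fault-free case, the linear iteration the paper builds on is ordinary average consensus. A minimal sketch, using an illustrative 4-node cycle graph with Metropolis-style doubly stochastic weights (not an example from the paper):

```python
import numpy as np

# 4-node cycle graph; each node averages its own value with its two neighbors.
# W is symmetric and doubly stochastic, so repeated iteration converges to the
# average of the initial values.
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

x = np.array([4.0, 0.0, 2.0, 6.0])   # initial node values (average = 3)
for _ in range(100):
    x = W @ x                        # each node broadcasts, then averages

print(np.allclose(x, 3.0))  # True: all nodes agree on the average
```

The paper's contribution is what happens when up to f nodes deviate arbitrarily from this update: with enough vertex-disjoint paths, the honest nodes can still recover any function of the initial values in finitely many steps.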
Fung W.S.,University of Waterloo |
Hariharan R.,Strand Life science |
Harvey N.J.A.,University of Waterloo |
Panigrahi D.,Massachusetts Institute of Technology
Proceedings of the Annual ACM Symposium on Theory of Computing | Year: 2011
We present a general framework for constructing cut sparsifiers in undirected graphs - weighted subgraphs for which every cut has the same weight as the original graph, up to a multiplicative factor of (1 ± ε). Using this framework, we simplify, unify and improve upon previous sparsification results. As simple instantiations of this framework, we show that sparsifiers can be constructed by sampling edges according to their strength (a result of Benczur and Karger), effective resistance (a result of Spielman and Srivastava), edge connectivity, or by sampling random spanning trees. Sampling according to edge connectivity is the most aggressive method, and the most challenging to analyze. Our proof that this method produces sparsifiers resolves an open question of Benczur and Karger. While the above results are interesting from a combinatorial standpoint, we also prove new algorithmic results. In particular, we develop techniques that give the first (optimal) O(m)-time sparsification algorithm for unweighted graphs. Our algorithm has a running time of O(m) + Õ(n/ε²) for weighted graphs, which is also linear unless the input graph is very sparse itself. In both cases, this improves upon the previous best running times (due to Benczur and Karger) of O(m log² n) (for the unweighted case) and O(m log³ n) (for the weighted case) respectively. Our algorithm constructs sparsifiers that contain O(n log n/ε²) edges in expectation; the only known construction of sparsifiers with fewer edges is by a substantially slower algorithm running in O(n³m/ε²) time. A key ingredient of our proofs is a natural generalization of Karger's bound on the number of small cuts in an undirected graph. Given the numerous applications of Karger's bound, we suspect that our generalization will also be of independent interest. © 2011 ACM.
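The importance-sampling idea underneath all of these constructions can be sketched directly: keep each edge e independently with probability p_e and, if kept, give it weight w_e/p_e, so every cut's weight is preserved in expectation. The paper's contribution is choosing p_e (from strength, effective resistance, or connectivity) so the cut weights also concentrate; the uniform p_e and tiny graph below are purely illustrative:

```python
import random

# Toy graph: 4-cycle plus a chord, all unit-weight edges (u, v, w).
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0), (0, 2, 1.0)]

def sparsify(edges, p, rng):
    """Keep each edge with probability p; reweight survivors by 1/p."""
    return [(u, v, w / p) for (u, v, w) in edges if rng.random() < p]

def cut_weight(edges, side):
    """Total weight of edges crossing the cut (side, complement)."""
    return sum(w for (u, v, w) in edges if (u in side) != (v in side))

rng = random.Random(0)
side = {0, 1}
true_cut = cut_weight(edges, side)   # edges (1,2), (3,0), (0,2) cross: 3.0

# Averaging the sampled cut weight over many trials approaches the true value.
trials = 20000
avg = sum(cut_weight(sparsify(edges, 0.5, rng), side) for _ in range(trials)) / trials
print(round(true_cut, 1), round(avg, 1))
```

With nonuniform probabilities the reweighting w_e/p_e keeps the same unbiasedness, which is why sampling by (inverse) edge connectivity can be so aggressive while still preserving every cut.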
Tehrani M.,University of Waterloo |
Uysal M.,Ozyegin University |
Yanikomeroglu H.,Carleton University
IEEE Communications Magazine | Year: 2014
In a conventional cellular system, devices are not allowed to communicate directly with each other in the licensed cellular bandwidth, and all communications take place through the base stations. In this article, we envision a two-tier cellular network that involves a macrocell tier (i.e., BS-to-device communications) and a device tier (i.e., device-to-device communications). Device terminal relaying makes it possible for devices in a network to function as transmission relays for each other and realize a massive ad hoc mesh network. This is obviously a dramatic departure from the conventional cellular architecture and brings unique technical challenges. In such a two-tier cellular system, since the user data is routed through other users' devices, security must be maintained for privacy. To ensure minimal impact on the performance of existing macrocell BSs, the two-tier network needs to be designed with smart interference management strategies and appropriate resource allocation schemes. Furthermore, novel pricing models should be designed to incentivize devices to participate in this type of communication. Our article provides an overview of these major challenges in two-tier networks and proposes some pricing schemes for different types of device relaying. © 2014 IEEE.
He G.,University of Waterloo |
Evers S.,University of Waterloo |
Liang X.,University of Waterloo |
Cuisinier M.,University of Waterloo |
And 2 more authors.
ACS Nano | Year: 2013
Porous hollow carbon spheres with different tailored pore structures have been designed as conducting frameworks for lithium-sulfur battery cathode materials that exhibit stable cycling capacity. By deliberately creating shell porosity and utilizing the interior void volume of the carbon spheres, sufficient space for sulfur storage as well as electrolyte pathways is guaranteed. The effect of different approaches to develop shell porosity is examined and compared in this study. The most highly optimized sulfur-porous carbon nanosphere composite, created using pore-formers to tailor shell porosity, exhibits excellent cycling performance and rate capability. Sulfur is primarily confined in 4-5 nm mesopores in the carbon shell and inner lining of the shells, which is beneficial for enhancing charge transfer and accommodating volume expansion of sulfur during redox cycling. Little capacity degradation (~0.1%/cycle) is observed over 100 cycles for the optimized material. © 2013 American Chemical Society.
Abdoli M.J.,Huawei |
Ghasemi A.,Ciena |
Khandani A.K.,University of Waterloo
IEEE Transactions on Information Theory | Year: 2013
The K-user single-input single-output (SISO) additive white Gaussian noise (AWGN) interference channel and 2×K SISO AWGN X channel are considered, where the transmitters have delayed channel state information (CSI) through noiseless feedback links. Multiphase transmission schemes are proposed for both channels which possess novel ingredients, namely, multiphase partial interference nulling, distributed interference management via user scheduling, and distributed higher-order symbol generation. The achieved degrees-of-freedom (DoF) values are greater than the best previously known DoFs for both channels with delayed CSI at the transmitters. © 1963-2012 IEEE.
Chowdhury M.,University of California at Berkeley |
Rahman M.R.,University of Illinois at Urbana - Champaign |
Boutaba R.,University of Waterloo |
Boutaba R.,Pohang University of Science and Technology
IEEE/ACM Transactions on Networking | Year: 2012
Network virtualization allows multiple heterogeneous virtual networks (VNs) to coexist on a shared infrastructure. Efficient mapping of virtual nodes and virtual links of a VN request onto substrate network resources, also known as the VN embedding problem, is the first step toward enabling such multiplicity. Since this problem is known to be NP-hard, previous research focused on designing heuristic-based algorithms that had clear separation between the node mapping and the link mapping phases. In this paper, we present ViNEYard, a collection of VN embedding algorithms that leverage better coordination between the two phases. We formulate the VN embedding problem as a mixed integer program through substrate network augmentation. We then relax the integer constraints to obtain a linear program and devise two online VN embedding algorithms, D-ViNE and R-ViNE, using deterministic and randomized rounding techniques, respectively. We also present a generalized window-based VN embedding algorithm (WiNE) to evaluate the effect of lookahead on VN embedding. Our simulation experiments on a large mix of VN requests show that the proposed algorithms increase the acceptance ratio and the revenue while decreasing the cost incurred by the substrate network in the long run. © 2011 IEEE.
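The relax-and-round pattern behind D-ViNE can be sketched generically. This is not ViNEYard's exact MIP formulation: the fractional values, node names, and capacity model below are illustrative. Given a fractional node-mapping solution x[v][s] from an LP relaxation, deterministic rounding assigns each virtual node to the substrate node carrying the largest fraction that still has capacity:

```python
# Hypothetical LP output: x[v][s] is the fraction of virtual node v placed on
# substrate node s (each row sums to 1).
frac = {
    "v1": {"s1": 0.7, "s2": 0.3},
    "v2": {"s1": 0.2, "s2": 0.8},
    "v3": {"s1": 0.5, "s2": 0.5},
}
capacity = {"s1": 2, "s2": 1}   # max virtual nodes each substrate node can host

def round_deterministic(frac, capacity):
    """Map each virtual node to its highest-fraction substrate node with room."""
    used = {s: 0 for s in capacity}
    mapping = {}
    for v, xs in frac.items():
        for s in sorted(xs, key=xs.get, reverse=True):  # largest fraction first
            if used[s] < capacity[s]:
                mapping[v] = s
                used[s] += 1
                break
    return mapping

print(round_deterministic(frac, capacity))
```

R-ViNE's randomized variant would instead sample s with probability proportional to x[v][s]; the link-mapping phase then works against the chosen node placement, which is the coordination the paper emphasizes.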
News Article | August 23, 2016
Getting screened for Alzheimer’s disease could soon mean taking a trip to the eye doctor. Decreased retinal thickness, the presence of abnormal proteins, and changes in how the retinal blood vessels respond to light all appear to be signs of neurodegenerative disease, according to researchers who spoke at the recent Alzheimer’s Association International Conference (AAIC 2016) in Toronto. All of these could be detected with non-invasive eye exams, which would represent a huge leap forward for patients and Alzheimer’s researchers alike. Alzheimer’s is the most common cause of dementia, and it’s irreversible. It affects an estimated 5 million Americans, and the numbers are growing. But right now, there’s no perfect way to diagnose it: Doctors perform memory tests on their patients, or take a detailed family history, which means the disease sometimes isn’t caught until it’s progressed. A definitive diagnosis generally can’t be done until after the patient’s death, when clusters of abnormal proteins called amyloid plaques (a hallmark of the disease) can be found in brain tissue samples. Earlier detection would mean that patients and their families could plan ahead, and that researchers could better study the disease. Improved screening methods would enable doctors to identify who’s at risk, maybe even before their symptoms start to show. The eyes are attracting attention as a portal to what’s happening in the brain. At a session at the AAIC 2016, researchers focused on the retina, which sits in the back of the eye and is made up of nerve tissue. The eyes are like windows into the brain, said Melanie Campbell, professor of optometry and vision science at the University of Waterloo. She told Motherboard in an interview that amyloid plaques can appear in the back of the eyes on the retina. It’s possible amyloids leak into the vitreous fluid of the eye from the cerebrospinal fluid, Campbell said.
Researchers also hypothesize that amyloid proteins are synthesized by neural cells within the eye, a similar process to what happens in the brains of Alzheimer’s patients, appearing in both the retina and the vitreous fluid. Right now, in the lab, amyloids can be detected on retinas using rather complicated and expensive eye-imaging techniques. But Campbell and colleagues developed a prototype device that does the job more easily and cheaply. This new technology, called polarimetry, uses polarized light. “It turns out amyloids show up very clearly under polarized light,” she said. She presented results of a series of proof-of-concept scans done on human and canine retinas. The scans were conducted on a series of cadaver retinas from the Eye Bank of Canada (20 from people who had Alzheimer’s, and 22 controls), as well as on living and postmortem canine retinas. The researchers found that amyloid deposits were not only easy to detect with this new technology, but it was relatively easy to count them, and to measure their size—something other imaging techniques can’t do. The next step will be testing the device clinically on patients with Alzheimer’s disease, Campbell said. However, the presence of amyloids isn’t a guaranteed way to diagnose the disease; it indicates risk, so the technique would be suited to screening rather than diagnosis. Another clue of the disease is thin retinal nerve fiber layers (RNFL). In fact, the thinner RNFLs are, the poorer the cognition levels of subjects, according to Fang Ko, clinical associate professor of ophthalmology, Florida State University and Moorfields Eye Hospital in the UK, who also spoke at the conference. Here, researchers used data from the UK Biobank, which included medical and health details of 500,000 volunteers aged between 40 and 69 years from across England. Of these, 67,000 underwent eye exams, which included retinal imaging. Many were ultimately excluded (including those with diabetes or other conditions that affect the retina), leaving about 32,000 subjects.
They completed four different cognitive tests. Of those, a total of 1,251 participants went on to repeat the cognitive tests after three years. Researchers found that people with thinner RNFLs performed worse on each of the cognitive tests than those whose RNFLs were thicker. And those who started the study with thinner RNFLs had greater cognitive decline at the three-year follow-up than those who had thicker ones. It may be possible to use thin RNFL as a predictor of cognitive decline, she said, but it isn’t a surefire method: diseases like glaucoma can also affect its thickness, so once again, this could be a useful tool for screening rather than diagnosis. A third technique, using a flickering light exam of the retinal blood vessels, could also help screen for Alzheimer’s, according to Konstantin Kotliar, a biomedical engineer at the Aachen University of Applied Sciences in Germany. In healthy eyes, a flickering light shone on the retina causes immediate dilation of both retinal arteries and veins. “In people with Alzheimer’s disease, retinal arteries and veins have a delayed reaction to a flickering light test,” he said. But they undergo greater dilation than in people without the disease. (Diminished and sometimes delayed dilation is also seen in eye diseases like glaucoma, he said.) At the conference, Kotliar presented a study (unpublished as of yet) measuring and comparing retinal vessel reactions to flickering light in patients aged 60 to 79. Fifteen had mild-to-moderate dementia due to Alzheimer’s; 24 had mild cognitive impairment, also from Alzheimer’s; and 15 were healthy controls with no cognitive impairment. Retinal artery and vein reactions to 20-second-long flicker stimulation were measured. Both arteries and veins dilated more in people with mild-to-moderate Alzheimer’s than in controls. Also, the start of dilation in the retinal arteries took longer in people with Alzheimer’s than in controls—though the delay wasn’t as pronounced in the veins.
How the retinal vessels behaved in Alzheimer’s patients was a surprise, and this might contribute to another screening test, he said. Finding new ways to screen for Alzheimer’s has never been more important: with the number of patients expected to balloon in the years to come, new ways to detect the disease early will be crucial.
News Article | November 14, 2015
NIST team proves 'spooky action at a distance' is really real Abstract: Einstein was wrong about at least one thing: There are, in fact, "spooky actions at a distance," as now proven by researchers at the National Institute of Standards and Technology (NIST). Einstein used that term to refer to quantum mechanics, which describes the curious behavior of the smallest particles of matter and light. He was referring, specifically, to entanglement, the idea that two physically separated particles can have correlated properties, with values that are uncertain until they are measured. Einstein was dubious, and until now, researchers have been unable to support it with near-total confidence. As described in a paper posted online and submitted to Physical Review Letters (PRL),* researchers from NIST and several other institutions created pairs of identical light particles, or photons, and sent them to two different locations to be measured. Researchers showed the measured results not only were correlated, but also--by eliminating all other known options--that these correlations cannot be caused by the locally controlled, "realistic" universe Einstein thought we lived in. This implies a different explanation such as entanglement. The NIST experiments are called Bell tests, so named because in 1964 Irish physicist John Bell showed there are limits to measurement correlations that can be ascribed to local, pre-existing (i.e. realistic) conditions. Additional correlations beyond those limits would require either sending signals faster than the speed of light, which scientists consider impossible, or another mechanism, such as quantum entanglement. The research team achieved this feat by simultaneously closing all three major "loopholes" that have plagued previous Bell tests.
Closing the loopholes was made possible by recent technical advances, including NIST's ultrafast single-photon detectors, which can accurately detect at least 90 percent of very weak signals, and new tools for randomly picking detector settings. "You can't prove quantum mechanics, but local realism, or hidden local action, is incompatible with our experiment," NIST's Krister Shalm says. "Our results agree with what quantum mechanics predicts about the spooky actions shared by entangled particles." The NIST paper was submitted to PRL with another paper by a team at the University of Vienna in Austria who used a similar high-efficiency single-photon detector provided by NIST to perform a Bell test that achieved similar results. The NIST results are more definitive than those reported recently by researchers at Delft University of Technology in the Netherlands. In the NIST experiment, the photon source and the two detectors were located in three different, widely separated rooms on the same floor in a large laboratory building. The two detectors are 184 meters apart, and 126 and 132 meters, respectively, from the photon source. The source creates a stream of photon pairs through a common process in which a laser beam stimulates a special type of crystal. This process is generally presumed to create pairs of photons that are entangled, so that the photons' polarizations are highly correlated with one another. Polarization refers to the specific orientation of the photon, like vertical or horizontal (polarizing sunglasses preferentially block horizontally polarized light), analogous to the two sides of a coin. Photon pairs are then separated and sent by fiber-optic cable to separate detectors in the distant rooms. While the photons are in flight, a random number generator picks one of two polarization settings for each polarization analyzer. If the photon matched the analyzer setting, then it was detected more than 90 percent of the time. 
In the best experimental run, both detectors simultaneously identified photons a total of 6,378 times over a period of 30 minutes. Other outcomes (such as just one detector firing) accounted for only 5,749 of the 12,127 total relevant events. Researchers calculated that the maximum chance of local realism producing these results is just 0.0000000059, or about 1 in 170 million. This outcome exceeds the particle physics community's requirement for a "5 sigma" result needed to declare something a discovery. The results strongly rule out local realistic theories, suggesting that the quantum mechanical explanation of entanglement is indeed the correct explanation. The NIST experiment closed the three major loopholes as follows: Fair sampling: Thanks to NIST's single-photon detectors, the experiment was efficient enough to ensure that the detected photons and measurement results were representative of the actual totals. The detectors, made of superconducting nanowires, were 90 percent efficient, and total system efficiency was about 75 percent. No faster-than-light communication: The two detectors measured photons from the same pair a few hundreds of nanoseconds apart, finishing more than 40 nanoseconds before any light-speed communication could take place between the detectors. Information traveling at the speed of light would require 617 nanoseconds to travel between the detectors. Freedom of choice: Detector settings were chosen by random number generators operating outside the light cone (i.e., possible influence) of the photon source, and thus, were free from manipulation. (In fact, the experiment demonstrated a "Bell violation machine" that NIST eventually plans to use to certify randomness.) To further ensure that hidden variables such as power grid fluctuations could not have influenced the results, the researchers performed additional experimental runs mixed with another source of randomness--data from popular movies, television shows and the digits of Pi. 
This didn't change the outcome. The experiment was conducted at NIST's Boulder, Colo., campus, where researchers made one of the photon detectors and provided theoretical support. Researchers at the Jet Propulsion Laboratory (Pasadena, Calif.) made the other detector. Researchers at NIST's Gaithersburg, Md., headquarters built random number generators and related circuits. Researchers from the University of Illinois (Urbana-Champaign, Ill.) and the University of Waterloo and University of Moncton in Canada helped develop the photon source and perform the experiments. Researchers at the Barcelona Institute of Science and Technology in Spain developed another random number generator. Funding for NIST contributions to the experiment was provided, in part, by the Defense Advanced Research Projects Agency. As a non-regulatory agency of the U.S. Department of Commerce, NIST promotes U.S. innovation and industrial competitiveness by advancing measurement science, standards and technology in ways that enhance economic security and improve our quality of life. To learn more about NIST, visit www.nist.gov. * L.K. Shalm, E. Meyer-Scott, B.G. Christensen, P. Bierhorst, M.A. Wayne, D.R. Hamel, M.J. Stevens, T. Gerrits, S. Glancy, M.S. Allman, K.J. Coakley, S.D. Dyer, C. Hodge, A.E. Lita, V.B. Verma, J.C. Bienfang, A.L. Migdall, Y. Zhang, W.H. Farr, F. Marsili, M.D. Shaw, J.A. Stern, C. Abellan, W. Amaya, V. Pruneri, T. Jennewein, M.W. Mitchell, P.G. Kwiat, R.P. Mirin, E. Knill and S.W. Nam. A strong loophole-free test of local realism. Submitted to Physical Review Letters.
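The odds quoted in the article can be checked with a quick conversion from the reported probability; the 5-sigma threshold below is the standard one-tailed particle-physics convention, not a figure from the press release:

```python
import math

p = 5.9e-9                      # reported max probability under local realism
odds = 1 / p                    # ≈ 1.7e8, i.e. roughly 1 in 170 million
print(f"about 1 in {odds/1e6:.0f} million")

# One-tailed 5-sigma threshold: p ≈ 2.87e-7. The NIST result is far below it.
five_sigma = 0.5 * math.erfc(5 / math.sqrt(2))
print(p < five_sigma)           # True: the result clears the 5-sigma bar
```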
News Article | November 24, 2015
Researchers find new, inexpensive way to clean water from oil sands production Abstract: Researchers have developed a process to remove contaminants from oil sands wastewater using only sunlight and nanoparticles that is more effective and less expensive than conventional treatment methods. Frank Gu, a professor in the Faculty of Engineering at the University of Waterloo and Canada Research Chair in Nanotechnology Engineering, is the senior researcher on the team that was the first to find that photocatalysis -- a chemical reaction that involves the absorption of light by nanoparticles -- can completely eliminate naphthenic acids in oil sands wastewater, and within hours. Naphthenic acids pose a threat to ecology and human health. Water in tailings ponds left to biodegrade naturally in the environment still contains these contaminants decades later. "With about a billion tonnes of water stored in ponds in Alberta, removing naphthenic acids is one of the largest environmental challenges in Canada," said Tim Leshuk, a PhD candidate in chemical engineering at Waterloo. He is the lead author of this paper and a recipient of the prestigious Vanier Canada Graduate Scholarship. "Conventional treatments people have tried either haven't worked, or if they have worked they've been far too impractical or expensive to solve the size of the problem. Waterloo's technology is the first step of what looks like a very practical and green treatment method." Unlike treating polluted water with chlorine or membrane filtering, the Waterloo technology is energy-efficient and relatively inexpensive. The nanoparticles become extremely reactive when exposed to sunlight and break down the persistent pollutants into their constituent atoms, completely removing them from the water. The treatment depends only on sunlight for energy, and the nanoparticles can be recovered and reused indefinitely.
Next steps for the Waterloo research include ensuring that the treated water meets all of the objectives of Canadian environmental legislation and regulations, so that it can be safely discharged from sources larger than the samples, such as tailings ponds. Kerry Peru and John Headley, research scientists from Environment Canada, are co-authors of the paper, which appears in the latest issue of the journal Chemosphere.
More information: Solar photocatalytic degradation of naphthenic acids in oil sands process-affected water, Chemosphere, DOI: 10.1016/j.chemosphere.2015.10.073
News Article | October 28, 2016
The National Science Foundation has awarded Rivier’s biology department a five-year, $650,000 grant to support the education of young scientists. The grant will fund a pilot program titled ARGYLES (Attract, Retain and Graduate Young LifE Scientists) to engage biology majors as emergent scientists who will contribute to the vitality of the STEM workforce in the Northeast. ARGYLES successes will be shared with peer institutions to encourage program adoption at other colleges and universities through conference presentations, news releases and peer-reviewed journals. The ARGYLES program will provide four-year scholarships, and signature learning and professional experiences, to academically talented students from lower-income families. Preference will be given to minority, female and first-generation learners, currently underrepresented in the STEM disciplines. Key program components include summer field study, peer and faculty mentoring, community building, independent research proposals and projects, travel to scientific conferences, and workforce preparation. “This grant will not only assist current STEM students but also enable Rivier to expand a model for student engagement in other disciplines,” says Sister Paula Marie Buley, IHM, Rivier’s President. “ARGYLES offers four pillars of support: academic, financial, vocational and communal. In addition, the focus on experiential learning will both offer a hands-on educational experience and build diversity within the scientific community.” Program objectives establish a progression from campus to community to career for students. Goals include recruitment and enrollment of qualified students; building community within the ARGYLES cohort and active participation in campus life and the Greater Nashua community; increased retention and graduation rates; and continuation to STEM graduate study or employment in their field within six months of graduation.
Rivier’s commitment to global engagement and career preparation fosters a broader experience: ARGYLES students, peer mentors, and faculty will travel to Canada for a two-week research and cultural exchange. While hiking along the Niagara Escarpment, students will be introduced to the geology and flora of the region and will gain hands-on experience testing water retrieved from various locations that extend from Niagara Falls to Georgian Bay. The expedition will also provide students the opportunity to establish professional connections with international science students. “We’re excited for this opportunity to grow Rivier’s biology and biotech programs,” says Dr. Susan Barbaro, Associate Professor of Biology, Department Coordinator and the grant’s author. “We have already established a partnership with faculty and staff at the University of Waterloo in Ontario as we plan for the community-building teaching trip abroad.” Student recruitment for the program begins immediately. Learning community formation and activities will take place before the fall 2017 semester. Interested parties should contact Rivier’s Office of Undergraduate Admissions at (603) 897-8507 or admissions(at)rivier(dot)edu.
News Article | November 12, 2015
Einstein used that term to refer to quantum mechanics, which describes the curious behavior of the smallest particles of matter and light. He was referring, specifically, to entanglement, the idea that two physically separated particles can have correlated properties, with values that are uncertain until they are measured. Einstein was dubious, and until now, researchers have been unable to support it with near-total confidence. As described in a paper posted online and submitted to Physical Review Letters (PRL), researchers from NIST and several other institutions created pairs of identical light particles, or photons, and sent them to two different locations to be measured. Researchers showed the measured results not only were correlated, but also—by eliminating all other known options—that these correlations cannot be caused by the locally controlled, "realistic" universe Einstein thought we lived in. This implies a different explanation such as entanglement. The NIST experiments are called Bell tests, so named because in 1964 Irish physicist John Bell showed there are limits to measurement correlations that can be ascribed to local, pre-existing (i.e. realistic) conditions. Additional correlations beyond those limits would require either sending signals faster than the speed of light, which scientists consider impossible, or another mechanism, such as quantum entanglement. The research team achieved this feat by simultaneously closing all three major "loopholes" that have plagued previous Bell tests. Closing the loopholes was made possible by recent technical advances, including NIST's ultrafast single-photon detectors, which can accurately detect at least 90 percent of very weak signals, and new tools for randomly picking detector settings. "You can't prove quantum mechanics, but local realism, or hidden local action, is incompatible with our experiment," NIST's Krister Shalm says. 
"Our results agree with what quantum mechanics predicts about the spooky actions shared by entangled particles." The NIST paper was submitted to PRL alongside another paper by a team at the University of Vienna in Austria, which used a similar high-efficiency single-photon detector provided by NIST to perform a Bell test that achieved similar results. The NIST results are more definitive than those reported recently by researchers at Delft University of Technology in the Netherlands. In the NIST experiment, the photon source and the two detectors were located in three different, widely separated rooms on the same floor of a large laboratory building. The two detectors were 184 meters apart, and 126 and 132 meters, respectively, from the photon source. The source creates a stream of photon pairs through a common process in which a laser beam stimulates a special type of crystal. This process is generally presumed to create pairs of photons that are entangled, so that the photons' polarizations are highly correlated with one another. Polarization refers to the specific orientation of the photon, like vertical or horizontal (polarizing sunglasses preferentially block horizontally polarized light), analogous to the two sides of a coin. Photon pairs are then separated and sent by fiber-optic cable to separate detectors in the distant rooms. While the photons are in flight, a random number generator picks one of two polarization settings for each polarization analyzer. If a photon matches the analyzer setting, it is detected more than 90 percent of the time. In the best experimental run, both detectors simultaneously identified photons a total of 6,378 times over a period of 30 minutes. Other outcomes (such as just one detector firing) accounted for 5,749 of the 12,127 total relevant events. Researchers calculated that the maximum chance of local realism producing these results is just 0.0000000059, or about 1 in 170 million.
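The quoted probability can be translated into the "sigma" language physicists use for discovery claims. A minimal sketch of the conversion (the 0.0000000059 figure is taken from the article; treating it as a one-sided Gaussian tail probability is an assumption for illustration):

```python
from statistics import NormalDist

# Maximum probability that local realism could produce the observed
# correlations, as reported for the best experimental run.
p_local_realism = 5.9e-9

# Equivalent "1 in N" phrasing used in the article.
one_in_n = 1 / p_local_realism  # ~1.7e8, i.e. about 1 in 170 million

# Convert the one-sided tail probability to a Gaussian sigma level.
sigma = NormalDist().inv_cdf(1 - p_local_realism)

print(f"1 in {one_in_n:,.0f}")
print(f"equivalent significance: {sigma:.1f} sigma")
```

The computed level comfortably clears the 5-sigma threshold mentioned in the next paragraph.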
This outcome exceeds the particle physics community's requirement of a "5 sigma" result to declare something a discovery. The results strongly rule out local realistic theories, suggesting that the quantum mechanical explanation of entanglement is indeed the correct one. The NIST experiment closed the three major loopholes as follows:

Fair sampling: Thanks to NIST's single-photon detectors, the experiment was efficient enough to ensure that the detected photons and measurement results were representative of the actual totals. The detectors, made of superconducting nanowires, were 90 percent efficient, and total system efficiency was about 75 percent.

No faster-than-light communication: The two detectors measured photons from the same pair a few hundred nanoseconds apart, finishing more than 40 nanoseconds before any light-speed communication could take place between them. Information traveling at the speed of light would require 617 nanoseconds to travel between the detectors.

Freedom of choice: Detector settings were chosen by random number generators operating outside the light cone (i.e., possible influence) of the photon source, and thus were free from manipulation. (In fact, the experiment demonstrated a "Bell violation machine" that NIST eventually plans to use to certify randomness.)

To further ensure that hidden variables such as power grid fluctuations could not have influenced the results, the researchers performed additional experimental runs mixed with another source of randomness: data from popular movies, television shows and the digits of pi. This didn't change the outcome. The experiment was conducted at NIST's Boulder, Colo., campus, where researchers made one of the photon detectors and provided theoretical support. Researchers at the Jet Propulsion Laboratory (Pasadena, Calif.) made the other detector. Researchers at NIST's Gaithersburg, Md., headquarters built random number generators and related circuits.
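The locality-loophole timing above can be sanity-checked with a quick light-travel-time calculation. A rough sketch (184 m is the straight-line detector separation given earlier; the small difference from the quoted 617 ns presumably reflects the exact signal paths in the experiment):

```python
c = 299_792_458  # speed of light in vacuum, m/s

separation_m = 184  # straight-line distance between the two detectors

# Minimum time for any light-speed signal to cross between the detectors.
t_ns = separation_m / c * 1e9
print(f"{t_ns:.0f} ns")  # ~614 ns, the same order as the quoted 617 ns
```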
Researchers from the University of Illinois (Urbana-Champaign, Ill.) and the University of Waterloo and University of Moncton in Canada helped develop the photon source and perform the experiments. Researchers at the Barcelona Institute of Science and Technology in Spain developed another random number generator. More information: L.K. Shalm, E. Meyer-Scott, B.G. Christensen, P. Bierhorst, M.A. Wayne, D.R. Hamel, M.J. Stevens, T. Gerrits, S. Glancy, M.S. Allman, K.J. Coakley, S.D. Dyer, C. Hodge, A.E. Lita, V.B. Verma, J.C. Bienfang, A.L. Migdall, Y. Zhang, W.H. Farr, F. Marsili, M.D. Shaw, J.A. Stern, C. Abellan, W. Amaya, V. Pruneri, T. Jennewein, M.W. Mitchell, P.G. Kwiat, R.P. Mirin, E. Knill and S.W. Nam. A strong loophole-free test of local realism. Submitted to Physical Review Letters. arxiv.org/abs/1511.03189
News Article | December 2, 2016
(PRLEAP.COM) December 2, 2016 - Nearly a year after announcing its support for autonomous vehicle testing, Ontario has announced its first pilot program. Car insurance expert Shop Insurance Canada (ShopInsuranceCanada.ca) has followed autonomous vehicle developments closely and says there are both positives and negatives to be taken from Ontario's program. Ontario's Ministry of Transportation (MTO) launched "the first automated vehicle (AV) pilot program in Canada" on Monday, Nov. 28. The program is being led by BlackBerry QNX, the Erwin Hymer Group, and the University of Waterloo. The province announced support for driverless cars at the start of the year, saying it would become the first region in the country to give autonomous vehicle manufacturers a base to test their creations. With the launch of the pilot program, Ontario has brought together experts from the manufacturing, technology, and research sectors. The program will "advance innovation and capability in Ontario's AV sector," the MTO explained in a statement. Among the participants in the program is the WATCar Project at the University of Waterloo's Centre for Automotive Research. The team will test the Lincoln MKZ on the road and at various levels of automation. The Erwin Hymer Group is a Kitchener-Waterloo, Ont.-based automaker that will test the Mercedes-Benz Sprinter van at different automation levels. As for BlackBerry, the software development giant will test the 2017 Lincoln with automated features. After the pilot regulatory framework was announced in January, there were reports that Ontario was struggling to attract companies to participate in the program.
Alongside road support, the province has provided $2.95 million in funding to support research and development of autonomous tech. "Ontario's innovation ecosystem, with leading clusters in automotive, information technology, and cleantech, makes the province the ideal location to develop the disruptive technologies like this AV pilot that will shape the future of the industry," said Brad Duguid, Ontario's Minister of Economic Development and Growth, in the statement. Considering the profound impact autonomous technology will have on the insurance industry, online expert Shop Insurance Canada has followed the market closely. The company says that Ontario's openness to driverless cars is interesting because it is the first time Canada will see these vehicles up close. Indeed, Shop Insurance Canada points out that the country actually presents a unique testing ground for developers of autonomous tech: "One of the main concerns surrounding autonomous vehicles is whether they really can perform all the tasks of a driver. While testing has been extremely encouraging, enough to bring these vehicles close to showrooms, there are still doubts about their safety under certain driving conditions. Canada, with its severe winters and snow, could provide a deep test of driverless technology in the worst of conditions." Yes, auto insurance companies will be impacted by the growth of the autonomous market in the coming decades. However, there is no doubt that the tech is here to stay, and we are now at the dawn of the autonomous age in Canada. Shop Insurance Canada is a Toronto-based company that specializes in delivering the best auto insurance products to customers around Ontario and Canada. The online car insurance quoting tool uses an engine that is easy to use and accurate enough to deliver the best auto insurance quotes from over 25 of Canada's leading providers.
Shop Insurance Canada also offers expert advice on the auto insurance industry, as well as guides and news to help customers find the best deal possible. Shop Insurance Canada works hard to bring all the latest insurance news to customers. We believe that understanding the industry starts with knowing what is happening day to day. Our customers and readers are hugely important to us, and we want them to get the best deals by being involved in the industry. If you have any interesting insurance topics or stories, let us know and we will be happy to look into them and write them up. Perhaps you have a funny story about your premium evaluations, or maybe a genuine worry about the state of insurance in Canada. Shop Insurance Canada wants your voice and story to be heard, so get in touch with us via our official contact page. 1003-60 Bathurst St., Toronto, Ontario, M5V 2P4, Canada. 416-913-0151
News Article | September 12, 2016
A detailed, city-level multivariate regression analysis of EV penetration in California has found a link between electric vehicle uptake and many underlying factors. A team at the International Council on Clean Transportation (ICCT) found electric vehicle model availability; the public electric vehicle charging network; local promotion activities for electric vehicles (e.g., outreach events and informational websites); electric car sharing services; government and fleet programs; and median income in each city to be correlated significantly with new electric vehicle sales share. They cautioned that causality could not be determined within the analysis. The team drilled into the activities of the 30 California cities with the highest rates of electric vehicle penetration, examining how local organizations (regional and city governments, utilities, businesses, and nonprofits) are promoting electric vehicles through a wide array of activities. In these 30 cities, electric vehicles account for a 6% to 18% share of new vehicle sales, 8 to 25 times the US average in 2015. These vehicle markets range greatly in size, from hundreds of electric vehicle sales up to approximately 4,000 (San Jose). Some of the examined factors (California Clean Vehicle Rebate claim rate and the prevalence of single-family homes) were not linked with electric vehicle uptake. Other factors for which data was not available (such as the income of electric vehicle purchasers specifically, rather than city-level median income) or that cannot be quantified (such as cultural differences between cities) could be influencing electric vehicle uptake in these cities, the authors observed.
Based on the analysis, the ICCT team drew three overarching conclusions: Comprehensive policy support is helping the electric vehicle market. Consumers in California benefit from federal and state electric vehicle incentives, as well as from persistent local action and extensive charging infrastructure. The Zero-Emission Vehicle program has increased model availability (the cities tended to have about 20 EV models locally available over 2015) and provided relative certainty about vehicle deployment that local stakeholders can bank on. The major metropolitan areas in California had 3 to 13 times the average US electric vehicle uptake in 2015. Local promotion activities are encouraging the electric vehicle market. The 30 cities in California with the highest electric vehicle uptake (8 to 25 times the US uptake) have seen the implementation of abundant, wide-ranging electric vehicle promotion programs involving parking, permitting, fleets, utilities, education, and workplace charging. These cities tend to be smaller, but Oakland and San Jose are also among the high-uptake cities. There were twelve cities with electric vehicle shares of new vehicle sales from 10% to 18% in 2015, including Berkeley, Manhattan Beach, and many throughout Silicon Valley. The electric vehicle market grows with its charging infrastructure. The 30 California cities with the highest electric vehicle uptake have, on average, 5 times the public charging infrastructure per capita of the US average. In addition, workplace charging availability in the San Jose metropolitan area is far higher than elsewhere. Increasingly, major public electric power utilities and workplaces are expanding the public charging network to further address consumer confidence and convenience. This analysis of the California market could have broader implications in defining best practice policies to support electric vehicles.
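The multipliers quoted above imply a US baseline share. A quick back-of-envelope check (the 6-18% city shares and the 8-25x multiples are taken from the article; the pairing of low with low and high with high is an assumption):

```python
# New-EV sales shares in the 30 highest-uptake California cities,
# and the multiples of the 2015 US average attached to them.
low_share, high_share = 0.06, 0.18
low_multiple, high_multiple = 8, 25

# Implied US-average EV share of new vehicle sales in 2015.
implied_low = low_share / low_multiple     # from the low end of the range
implied_high = high_share / high_multiple  # from the high end of the range

print(f"implied US average: {implied_high:.2%}-{implied_low:.2%}")
```

Both ends land at roughly 0.7%, so the two figures in the article are mutually consistent.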
Governments around the world are contemplating more progressive regulatory policies to promote electric vehicles. Policymakers are also investigating complementary local outreach, city policy, and charging infrastructure planning to pave the way for the emerging electric vehicle market. California provides a template for such state and local activities that reach more businesses and prospective consumers. The California experience suggests that if electric vehicle models are brought to more markets and there is supporting policy in place, market growth will continue. Our findings suggest that California’s playbook could be a helpful example to other regions seeking to encourage electric vehicle uptake.
News Article | February 15, 2017
TORONTO, ON--(Marketwired - February 13, 2017) - Electrovaya (TSX: EFL) (EFLVF) is pleased to welcome Professor Carolyn Hansson CM, FCAE, FRSC, one of Canada's most influential and innovative engineers, to the Electrovaya Board of Directors. Professor Hansson has had a long and distinguished career in industry, at Lockheed Martin (Martin Marietta), Danish Corrosion Labs and Bell Labs, as well as in academia (Waterloo, Queen's, Columbia & SUNY), and was earlier a member of the Board of a TSX- and NASDAQ-listed alternate energy company (Hydrogenics). A Professor of Materials Engineering at the University of Waterloo, Dr. Hansson is the recipient of many awards, including the Order of Canada, and is a member of several influential committees within North America and Europe. During her tenure as Vice President of Research at the University of Waterloo, Professor Hansson drove innovation across all disciplines of the University. She has wide connections within innovation circles in Canada, the USA and Europe, having lived and worked on both sides of the Atlantic. "Carolyn has great practical experience in industry, government and academia and we are delighted that she has agreed to join the Board of Directors at Electrovaya," said Dr. Sankar Das Gupta, Chairman & CEO of Electrovaya. "The advanced Lithium Ion battery is the key defining technology needed today and Electrovaya provides this critical next generation technology to the emerging alternate energy sector. I am very pleased to join the Board and help build the Company," said Prof. Hansson. Electrovaya Inc. (TSX: EFL) (EFLVF) designs, develops and manufactures proprietary Lithium Ion Super Polymer® batteries, battery systems, and battery-related products for energy storage, clean electric transportation and other specialized applications. Electrovaya, through its fully owned subsidiary, Litarion GmbH, also produces cells, electrodes and SEPARION® ceramic separators and has manufacturing capacity of about 500 MWh/annum.
Electrovaya is a technology-focused company with extensive patents and other intellectual property. Headquartered in Ontario, Canada, Electrovaya has production facilities in Canada and Germany with customers around the globe. To learn more about how Electrovaya and Litarion are powering mobility and energy storage, please explore www.electrovaya.com, www.litarion.com and www.separion.com. This press release contains forward-looking statements, including statements that relate to, among other things, revenue forecasts, technology development progress, plans for shipment using the Company's technology, production plans, the Company's markets, objectives, goals, strategies, intentions, beliefs, expectations and estimates, and can generally be identified by the use of words such as "may", "will", "could", "should", "would", "likely", "possible", "expect", "intend", "estimate", "anticipate", "believe", "plan", "objective" and "continue" (or the negative thereof) and words and expressions of similar import. Although the Company believes that the expectations reflected in such forward-looking statements are reasonable, such statements involve risks and uncertainties, and undue reliance should not be placed on such statements. Certain material factors or assumptions are applied in making forward-looking statements, and actual results may differ materially from those expressed or implied in such statements.
Important factors that could cause actual results to differ materially from expectations include but are not limited to: general business and economic conditions (including but not limited to currency rates and creditworthiness of customers); Company liquidity and capital resources, including the availability of additional capital resources to fund its activities; level of competition; changes in laws and regulations; legal and regulatory proceedings; the ability to adapt products and services to the changing market; the ability to attract and retain key executives; and the ability to execute strategic plans. Additional information about material factors that could cause actual results to differ materially from expectations and about material factors or assumptions applied in making forward-looking statements may be found in the Company's most recent annual and interim Management's Discussion and Analysis under "Risk and Uncertainties" as well as in other public disclosure documents filed with Canadian securities regulatory authorities. The Company does not undertake any obligation to update publicly or to revise any of the forward-looking statements contained in this document, whether as a result of new information, future events or otherwise, except as required by law.
News Article | November 3, 2016
WASHINGTON -- Using prominent, graphic pictures on cigarette packs warning against smoking could avert more than 652,000 deaths, up to 92,000 low birth weight infants, up to 145,000 preterm births, and about 1,000 cases of sudden infant death in the U.S. over the next 50 years, say researchers from Georgetown Lombardi Comprehensive Cancer Center. Their study, published online Nov. 3 in the journal Tobacco Control, is the first to estimate the effects of pictorial warnings on cigarette packs on the health of both adults and infants in the U.S. Although more than 70 nations have adopted or are considering adopting the World Health Organization's Framework Convention on Tobacco Control recommendation to use such front- and back-of-the-pack pictorial warnings -- an example is a Brazilian photo of a father with a tracheotomy -- they have not been implemented in the U.S. Pictorial warnings have been required by U.S. law, but an industry lawsuit stalled implementation of this requirement. Currently, a text-only warning appears on the side of cigarette packs in the U.S. The study used a tobacco control policy model, SimSmoke, developed by Georgetown Lombardi's David T. Levy, PhD, which looks at the effects of past smoking policies as well as future policies. SimSmoke is peer-reviewed, and has been used and validated in more than 20 countries. In this study, Levy and his colleagues, who included investigators at the University of Waterloo, Ontario, and the University of South Carolina, looked at changes in smoking rates in Australia, Canada and the United Kingdom, which have already implemented prominent pictorial warning labels (PWLs). For example, eight years after PWLs were implemented in Canada, there was an estimated 12 to 20 percent relative reduction in smoking prevalence. After PWLs began to be used in Australia in 2006, adult smoking prevalence fell from 21.3 percent in 2007 to 19 percent in 2008.
After implementation in the UK in 2008, smoking prevalence fell 10 percent in the following year. The researchers used these and other studies and, employing the SimSmoke model, estimated that implementing PWLs in the U.S. would directly reduce smoking prevalence in relative terms by 5 percent in the near term, increasing to 10 percent over the long term. If implemented in 2016, PWLs are estimated to reduce the number of smoking-attributable deaths (heart disease, lung cancer and COPD) by an estimated 652,800 by 2065 and to prevent more than 46,600 cases of low birth weight, 73,600 cases of preterm birth, and 1,000 SIDS deaths. "The bottom line is that requiring large pictorial warnings would help protect the public health of people in the United States," says Levy, a professor of oncology. "There is a direct association between these warnings and increased smoking cessation and reduced smoking initiation and prevalence. That would lead to significant reduction of death and morbidity, as well as medical cost." The study was funded by a grant from the National Institute on Drug Abuse (R01DA036497) and the National Cancer Institute (UO1-CA97450). Co-authors include Darren Mays, PhD, MPH, and Zhe Yuan, MS, both from Georgetown, David Hammond, PhD, from the University of Waterloo, and James F. Thrasher, PhD, MS, MA, from the University of South Carolina. Hammond has served as a paid expert witness on behalf of governments in tobacco litigation, including challenges to health warning regulations. The other co-authors report no potential conflicts or related financial interests. Georgetown Lombardi Comprehensive Cancer Center is designated by the National Cancer Institute as a comprehensive cancer center -- the only cancer center of its kind in the Washington, DC area.
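The relative reductions quoted above are worth distinguishing from absolute percentage-point drops. A small sketch using the Australian figures from the article (the arithmetic is illustrative, not part of the SimSmoke model):

```python
# Australian adult smoking prevalence (%) before and after
# pictorial warning labels, as reported in the article.
before, after = 21.3, 19.0

absolute_drop = before - after                   # in percentage points
relative_drop = (before - after) / before * 100  # in relative terms

print(f"absolute: {absolute_drop:.1f} points, relative: {relative_drop:.1f}%")
```

So a 2.3-point fall corresponds to roughly an 11% relative reduction, the kind of figure the modeling in the study works with.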
A part of Georgetown University Medical Center and MedStar Georgetown University Hospital, Georgetown Lombardi seeks to improve the diagnosis, treatment, and prevention of cancer through innovative basic and clinical research, patient care, community education and outreach, and the training of cancer specialists of the future. Connect with Georgetown Lombardi on Facebook (Facebook.com/GeorgetownLombardi) and Twitter (@LombardiCancer). Georgetown University Medical Center (GUMC) is an internationally recognized academic medical center with a three-part mission of research, teaching and patient care (through MedStar Health). GUMC's mission is carried out with a strong emphasis on public service and a dedication to the Catholic, Jesuit principle of cura personalis -- or "care of the whole person." The Medical Center includes the School of Medicine and the School of Nursing & Health Studies, both nationally ranked; Georgetown Lombardi Comprehensive Cancer Center, designated as a comprehensive cancer center by the National Cancer Institute; and the Biomedical Graduate Research Organization, which accounts for the majority of externally funded research at GUMC including a Clinical and Translational Science Award from the National Institutes of Health. Connect with GUMC on Facebook (Facebook.com/GUMCUpdate), Twitter (@gumedcenter) and Instagram (@gumedcenter).
News Article | December 5, 2016
Industrial pollution may seem like a modern phenomenon, but in fact, an international team of researchers may have discovered what could be the world's first polluted river, contaminated approximately 7,000 years ago. In this now-dry riverbed in the Wadi Faynan region of southern Jordan, Professor Russell Adams, from the Department of Anthropology at the University of Waterloo, and his colleagues found evidence of early pollution caused by the combustion of copper. Neolithic humans here may have been in the early stages of developing metallurgy by learning how to smelt. The research findings, published in Science of the Total Environment, shed light on a turning point in history, when humans began moving from making tools out of stones to making tools out of metal. This period, known as the Chalcolithic or Copper Age, is a transitional period between the late Neolithic or Stone Age and the beginning of the Bronze Age. "These populations were experimenting with fire, experimenting with pottery and experimenting with copper ores, and all three of these components are part of the early production of copper metals from ores," said Adams. "The technological innovation and the spread of the adoption and use of metals in society mark the beginning of the modern world." People created copper at this time by combining charcoal and the blue-green copper ore found in abundance in this area in pottery crucibles or vessels and heating the mixture over a fire. The process was time-consuming and labour-intensive and, for this reason, it took thousands of years before copper became a central part of human societies. Many of the objects created in the earliest phase of copper production were primarily symbolic and fulfilled a social function within society. Attaining rare and exotic items was a way in which individuals attained prestige. As time passed, communities in the region grew larger and copper production expanded. 
People built mines, then large smelting furnaces and factories by about 2600 BC. "This region is home to the world's first industrial revolution," said Adams. "This really was the centre of innovative technology." But people paid a heavy price for the increased metal production. Slag, the waste product of smelting, remained. It contained metals such as copper, lead, zinc, cadmium, and even arsenic, mercury and thallium. Plants absorbed these metals, people and animals such as goats and sheep ate them, and so the contaminants bioaccumulated in the environment. Adams believes the pollution from thousands of years of copper mining and production must have led to widespread health problems in ancient populations. Infertility, malformations and premature death would have been some of the effects. Researchers have found high levels of copper and lead in human bones dating back to the Roman period. Adams and his international team of researchers are now trying to expand the analysis of the effects of this pollution to the Bronze Age, which began around 3200 BC. The Faynan region has a long history of human occupation, and the team is examining the extent and spread of this pollution at the time when metals and their industrial-scale production became central to human societies.
News Article | November 19, 2016
TORONTO, ONTARIO--(Marketwired - Nov. 19, 2016) - The Canadian-Croatian Chamber of Commerce is pleased to be welcoming the President of the Republic of Croatia, Kolinda Grabar-Kitarovic, during her first visit to Canada in her capacity as President. The Chamber and its members will be hosting a number of events featuring the President in Toronto and surrounding areas during her visit from November 20-22, 2016. The Croatian President will, among other activities, be meeting with Croatian-Canadian community and business leaders and professionals; addressing Croatian-Canadian students at a youth mentorship event in Norval; meeting with Mayor Berry Vrbanovic in Kitchener; touring the innovation ecosystem (including the University of Waterloo, the Lazaridis Quantum-Nano Centre, Communitech, and Velocity) that is part of the Toronto-Waterloo Region Corridor; meeting with representatives and students of Croatian language studies at the University of Waterloo; meeting with Ontario Premier Kathleen Wynne and Ontario Lieutenant-Governor Elizabeth Dowdeswell in Toronto; visiting the Donnelly Centre for Cellular and Biomolecular Research at the University of Toronto; engaging in a roundtable discussion with leading female executives in Canada; and meeting with business leaders on Bay Street in Toronto to discuss potential investment opportunities in Croatia. To conclude her visit to Canada, the Croatian President will deliver a keynote address to nearly 1,000 members and friends of the Croatian community in Canada at a sold-out gala dinner and fundraiser being organized by the Chamber at the Burlington Convention Centre. The event will benefit the restoration of the Vukovar water tower, one of the most famous symbols of Vukovar and of the suffering of that heroic city during the Croatian War of Independence in the early 1990s.
"We are honoured to host the Croatian President during her visit to Canada," said Ivan Grbesic, a member of the board of directors of the Chamber and one of the coordinators of the visit. "Significant time and effort was invested in ensuring that this working visit would contribute to expanding existing ties and exploring new opportunities between Canada and Croatia, especially given the recent signing of the CETA trade deal. The visit will also be a historic one for members of our Croatian-Canadian community in general and one that will be remembered for years to come given that it is the first time that a sitting Croatian President will visit our community since Croatia declared its independence 25 years ago", he added. "We also expect that this visit will lay the groundwork for an official visit to Ottawa by the Croatian President or recently elected Croatian Prime Minister in the near future". President Grabar-Kitarovic became the first female and youngest president of the Republic of Croatia in 2015 and is one of seven female heads of state in the world today. Her election as President capped a two-decade career in politics and diplomacy, including key roles as: the country's first female Minister of Foreign Affairs and European Integration, Croatia's Ambassador to the U.S., and the Assistant Secretary General for Public Diplomacy at NATO (the first woman Assistant Secretary-General ever in the history of NATO and the highest-ever ranking female official to have served within NATO's governing structure). Prior to visiting Norval, Kitchener, Toronto, and Burlington, the President is attending the Halifax International Security Forum on November 19-20, 2016. Founded in 1995, the Canadian-Croatian Chamber of Commerce is a not-for-profit network of Croatian-Canadian businesses, professionals and organizations that has emerged as the voice of Croatian-Canadian business in Canada. 
Canada has one of the largest and most successful Croatian communities outside of Croatia and the Chamber brings together businesses, professionals and organizations with strategic relationships (economic, commercial, political, and cultural) in both Canada and Croatia.
News Article | February 1, 2016
But now in a new paper, physicists have proposed that the shortest physically meaningful length of time may actually be several orders of magnitude longer than the Planck time. In addition, the physicists have demonstrated that the existence of such a minimum time alters the basic equations of quantum mechanics, and as quantum mechanics describes all physical systems at a very small scale, this would change the description of all quantum mechanical systems. The researchers, Mir Faizal at the University of Waterloo and University of Lethbridge in Canada, Mohammed M. Khalil at Alexandria University in Egypt, and Saurya Das at the University of Lethbridge, have recently published a paper called "Time crystals from minimum time uncertainty" in The European Physical Journal C. "It might be possible that, in the universe, the minimum time scale is actually much larger than the Planck time, and this can be directly tested experimentally," Faizal told Phys.org. The Planck time is so short that no experiment has ever come close to examining it directly—the most precise tests can access a time interval down to about 10⁻¹⁷ seconds. Nevertheless, there is a great deal of theoretical support for the existence of the Planck time from various approaches to quantum gravity, such as string theory, loop quantum gravity, and perturbative quantum gravity. Almost all of these approaches suggest that it is not possible to measure a length shorter than the Planck length, and by extension not possible to measure a time shorter than the Planck time, since the Planck time is defined as the time it takes light to travel a single unit of the Planck length in a vacuum. Motivated by several recent theoretical studies, the scientists further delved into the question of the structure of time—in particular, the long-debated question of whether time is continuous or discrete.
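The Planck time referred to above follows directly from the fundamental constants, via the definition given in the previous paragraph: the Planck length is ℓ_P = √(ħG/c³), and the Planck time is the light-travel time t_P = ℓ_P/c. A minimal sketch using standard CODATA values:

```python
from math import sqrt

hbar = 1.054_571_817e-34  # reduced Planck constant, J*s
G = 6.674_30e-11          # Newtonian gravitational constant, m^3 kg^-1 s^-2
c = 299_792_458           # speed of light in vacuum, m/s

# Planck length: the scale below which lengths are argued to be unmeasurable.
planck_length = sqrt(hbar * G / c**3)  # ~1.6e-35 m

# Planck time: time for light to cross one Planck length.
planck_time = planck_length / c  # ~5.4e-44 s

print(f"Planck length: {planck_length:.2e} m")
print(f"Planck time:   {planck_time:.2e} s")
```

The gap between this value and the 10⁻¹⁷ s experimental frontier mentioned above is what leaves room for a minimum time many orders of magnitude larger than t_P.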
"In our paper, we have proposed that time is discrete in nature, and we have also suggested ways to experimentally test this proposal," Faizal said. One possible test involves measuring the rate of spontaneous emission of a hydrogen atom. The modified quantum mechanical equation predicts a slightly different rate of spontaneous emission than that predicted by the unmodified equation, within a range of uncertainty. The proposed effects may also be observable in the decay rates of particles and of unstable nuclei. Based on their theoretical analysis of the spontaneous emission of hydrogen, the researchers estimate that the minimum time may be orders of magnitude larger than the Planck time, but no greater than a certain amount, which is fixed by previous experiments. Future experiments could lower this bound on the minimum time or determine its exact value. The scientists also suggest that the proposed changes to the basic equations of quantum mechanics would modify the very definition of time. They explain that the structure of time can be thought of as a crystal structure, consisting of discrete, regularly repeating segments. On a more philosophical level, the argument that time is discrete suggests that our perception of time as something that is continuously flowing is just an illusion. "The physical universe is really like a movie/motion picture, in which a series of still images shown on a screen creates the illusion of moving images," Faizal said. "Thus, if this view is taken seriously, then our conscious precipitation of physical reality based on continuous motion becomes an illusion produced by a discrete underlying mathematical structure." "This proposal makes physical reality platonic in nature," he said, referring to Plato's argument that true reality exists independent of our senses. "However, unlike other theories of platonic idealism, our proposal can be experimentally tested and not just be argued for philosophically." 
Explore further: Looking at quantum gravity in a mirror More information: Mir Faizal, et al. "Time crystals from minimum time uncertainty." The European Physical Journal C. DOI: 10.1140/epjc/s10052-016-3884-4. Also at arXiv:1501.03111 [physics.gen-ph]
News Article | September 12, 2016
The University of Waterloo has opened a new automotive research and testing facility, setting the stage for technological advances that will benefit both consumers and the environment. The $10-million Green and Intelligent Automotive (GAIA) Research Facility is supported by Toyota Motor Manufacturing Canada, the Government of Canada, the Government of Ontario, the University of Waterloo, and equipment suppliers. Research areas include longer-lasting batteries to extend the range of electric vehicles, methods to feed excess energy from vehicles back into the public power grid, emissions, wheel force measurements and advanced driver assistance systems (ADAS), such as adaptive cruise controllers that maintain safe distances between vehicles while also optimizing fuel consumption. GAIA is spread across three labs and covers 4,000 square feet. A key design feature of the GAIA Research Facility is the integration of three cells—batteries, powertrains and a rolling dynamometer that simulates real-world driving. This integration enables a safe and reliable way to test individual components and entire vehicles under one roof. The facility will be open to a team of 150 faculty and graduate students who will test, modify and identify problems with electric and hybrid vehicles before they make it to a test track, saving time and money in the process. Several years in the making, GAIA is the latest infrastructure addition to the Waterloo Centre for Automotive Research (WatCAR), the largest university-based automotive research centre in Canada.
News Article | September 20, 2016
The Institute for Quantum Computing at the University of Waterloo holds the new Guinness World Record for the smallest national flag, after creating a Canadian flag measuring about one one-hundredth the width of a human hair. The flag, created on a silicon wafer, measures 1.178 micrometers in length and is invisible without an electron microscope, which uses electrons rather than light as the source of illumination. It was created using the electron beam lithography system in the Quantum NanoFab facility at the university. The wafer is etched with the official logo of the Canada 150 celebrations, which will culminate in next summer’s 150th anniversary of the Confederation of Canada. The British colonies of Canada, Nova Scotia, and New Brunswick were federally united into one Dominion of Canada on July 1, 1867.
Qin A.K.,Nanjing Southeast University |
Clausi D.A.,University of Waterloo
IEEE Transactions on Image Processing | Year: 2010
Multivariate image segmentation is a challenging task, influenced by large intraclass variation that reduces class distinguishability as well as increased feature space sparseness and solution space complexity that impose computational cost and degrade algorithmic robustness. To deal with these problems, a Markov random field (MRF) based multivariate segmentation algorithm called "multivariate iterative region growing using semantics" (MIRGS) is presented. In MIRGS, the impact of intraclass variation and computational cost are reduced using the MRF spatial context model incorporated with adaptive edge penalty and applied to regions. Semantic region growing starting from watershed over-segmentation and performed alternatively with segmentation gradually reduces the solution space size, which improves segmentation effectiveness. As a multivariate iterative algorithm, MIRGS is highly sensitive to initial conditions. To suppress initialization sensitivity, it employs a region-level k-means (RKM) based initialization method, which consistently provides accurate initial conditions at low computational cost. Experiments show the superiority of RKM relative to two commonly used initialization methods. Segmentation tests on a variety of synthetic and natural multivariate images demonstrate that MIRGS consistently outperforms three other published algorithms. © 2006 IEEE.
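The region-level k-means (RKM) initialization described above clusters region-wise mean feature vectors rather than individual pixels, which is what keeps its cost low. The following is a minimal illustrative sketch of that idea; the paper's actual procedure, distance measure, and seeding strategy may differ.

```python
import random

def region_kmeans(region_means, k, iters=20, seed=0):
    """Cluster region-level mean feature vectors (tuples of floats) into k groups.
    A plain Lloyd-style k-means over regions, sketching the RKM initialization idea."""
    rng = random.Random(seed)
    centers = [list(m) for m in rng.sample(region_means, k)]
    labels = [0] * len(region_means)
    for _ in range(iters):
        # assign each region to its nearest center (squared Euclidean distance)
        for i, m in enumerate(region_means):
            labels[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2 for a, b in zip(m, centers[c])))
        # recompute each center as the mean of its assigned regions
        for c in range(k):
            members = [region_means[i] for i in range(len(region_means)) if labels[i] == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels, centers
```

Because the number of regions from a watershed over-segmentation is far smaller than the number of pixels, clustering at region level is cheap while still providing consistent initial conditions for the MRF optimization.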
Kliuchnikov V.,University of Waterloo |
Maslov D.,National Science Foundation |
Mosca M.,University of Waterloo
Physical Review Letters | Year: 2013
Decomposing unitaries into a sequence of elementary operations is at the core of quantum computing. Information theoretic arguments show that approximating a random unitary with precision ε requires Ω(log(1/ε)) gates. Prior to our work, the state of the art in approximating a single qubit unitary included the Solovay-Kitaev algorithm, which requires O(log^(3+δ)(1/ε)) gates and does not use ancillae, and the phase kickback approach, which requires O(log^2(1/ε) log log(1/ε)) gates but uses O(log^2(1/ε)) ancillae. Both algorithms feature upper bounds that are far from the information theoretic lower bound. In this Letter, we report an algorithm that saturates the lower bound, and as such it guarantees asymptotic optimality. In particular, we present an algorithm for building a circuit that approximates single qubit unitaries with precision ε using O(log(1/ε)) Clifford and T gates and employing up to two ancillary qubits. We connect the unitary approximation problem to the problem of constructing solutions corresponding to Lagrange's four-square theorem, and thereby develop an algorithm for computing an approximating circuit using an average of O(log^2(1/ε) log log(1/ε)) operations with integers. © 2013 American Physical Society.
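To get a feel for how far apart these bounds are, the toy calculation below evaluates just the asymptotic shapes at a given precision, with every constant factor set to 1; that constant-free comparison is an illustrative assumption, not a statement about real circuit sizes.

```python
import math

def gate_counts(eps):
    """Compare the asymptotic gate-count shapes at precision eps.
    All constant factors are set to 1 (illustration only)."""
    L = math.log2(1.0 / eps)  # log(1/eps); base is an arbitrary choice
    return {
        "optimal": L,                            # Theta(log(1/eps)): lower bound, achieved by the new algorithm
        "phase_kickback": L**2 * math.log2(L),   # O(log^2(1/eps) log log(1/eps)) gates
        "solovay_kitaev": L**3.97,               # O(log^(3+delta)(1/eps)) gates, taking delta = 0.97
    }
```

At ε = 10⁻¹⁰, for instance, the optimal shape is a few tens of gates while the Solovay-Kitaev shape is around a million, which is why saturating the lower bound matters in practice.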
Luan T.H.,University of Waterloo |
Ling X.,Research in Motion |
Shen X.S.,University of Waterloo
IEEE Transactions on Mobile Computing | Year: 2012
The pervasive adoption of IEEE 802.11 radios in the past decade has made easy Internet access from a vehicle possible, notably drive-thru Internet. Originally designed for static indoor applications, IEEE 802.11's throughput performance in the outdoor vehicular environment is, however, still unclear, especially when a large number of fast-moving users transmit simultaneously. In this paper, we investigate the performance of IEEE 802.11 DCF in highly mobile vehicular networks. We first propose a simple yet accurate analytical model to evaluate the throughput of DCF in the large-scale drive-thru Internet scenario. Our model incorporates high node mobility into the modeling of DCF and unveils the impacts of mobility (characterized by node velocity and moving directions) on the resultant throughput. Based on the model, we show that the throughput of DCF will be reduced with increasing node velocity due to the mismatch between the MAC and the transient high-throughput connectivity of vehicles. We then propose several enhancement schemes to adaptively adjust the MAC in tune with the node mobility. Extensive simulations are carried out to validate the accuracy of the developed analytical model and the effectiveness of the proposed enhancement schemes. © 2012 IEEE.
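Analytical DCF models of this kind typically build on Bianchi's classical saturation analysis, in which a per-slot transmission probability τ and a conditional collision probability p are solved as a fixed point. The sketch below implements that classical static-station model only, not the paper's mobility-aware extension; the damped update is our own choice for stable convergence.

```python
def dcf_saturation(n, W=32, m=5, iters=500):
    """Bianchi-style saturation model of 802.11 DCF.
    n: contending stations, W: minimum contention window, m: number of backoff stages.
    Returns (tau, p): per-slot transmission and conditional collision probabilities."""
    tau = 0.1
    for _ in range(iters):
        # probability that a transmitting station sees a collision
        p = 1.0 - (1.0 - tau) ** (n - 1)
        # Bianchi's closed form for tau given p
        new_tau = 2.0 * (1.0 - 2.0 * p) / (
            (1.0 - 2.0 * p) * (W + 1) + p * W * (1.0 - (2.0 * p) ** m))
        tau = 0.5 * (tau + new_tau)  # damped fixed-point update
    return tau, p
```

With more contenders, each station transmits less often per slot, which is the baseline against which mobility effects (vehicles entering and leaving the coverage area at speed) can then be studied.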
Chowdhury N.M.M.K.,University of California at Berkeley |
Boutaba R.,University of Waterloo |
Boutaba R.,Pohang University of Science and Technology
Computer Networks | Year: 2010
Due to the existence of multiple stakeholders with conflicting goals and policies, alterations to the existing Internet architecture are now limited to simple incremental updates; deployment of any new, radically different technology is next to impossible. To fend off this ossification, network virtualization has been propounded as a diversifying attribute of the future inter-networking paradigm. By introducing a plurality of heterogeneous network architectures cohabiting on a shared physical substrate, network virtualization promotes innovations and diversified applications. In this paper, we survey the existing technologies and a wide array of past and state-of-the-art projects on network virtualization followed by a discussion of major challenges in this area. © 2009 Elsevier B.V. All rights reserved.
Limam N.,Pohang University of Science and Technology |
Boutaba R.,University of Waterloo
IEEE Transactions on Software Engineering | Year: 2010
The integration of external software in project development is challenging and risky, notably because the execution quality of the software and the trustworthiness of the software provider may be unknown at integration time. This is a timely problem and of increasing importance with the advent of the SaaS model of service delivery. Therefore, in choosing the SaaS service to utilize, project managers must identify and evaluate the level of risk associated with each candidate. Trust is commonly assessed through reputation systems; however, existing systems rely on ratings provided by consumers. This raises numerous issues involving the subjectivity and unfairness of the service ratings. This paper describes a framework for reputation-aware software service selection and rating. A selection algorithm is devised for service recommendation, providing SaaS consumers with the best possible choices based on quality, cost, and trust. An automated rating model, based on the expectancy-disconfirmation theory from market science, is also defined to overcome feedback subjectivity issues. The proposed rating and selection models are validated through simulations, demonstrating that the system can effectively capture service behavior and recommend the best possible choices. © 2010 IEEE.
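The selection step can be pictured as maximizing a weighted utility over quality, cost, and trust, with ratings updated from the disconfirmation between expected and measured quality. The sketch below is an illustrative stand-in only; the weights, the linear satisfaction mapping, and all function names are our assumptions, not the paper's model.

```python
def auto_rating(expected_quality, measured_quality, prior_rating, weight=0.3):
    """Expectancy-disconfirmation style automated rating (all values in [0, 1]).
    Satisfaction rises when measured quality exceeds expectation, then is
    folded into the running reputation score."""
    disconfirmation = measured_quality - expected_quality  # positive = better than expected
    satisfaction = max(0.0, min(1.0, 0.5 + 0.5 * disconfirmation))
    return (1.0 - weight) * prior_rating + weight * satisfaction

def select_service(candidates, w_quality=0.5, w_cost=0.2, w_trust=0.3):
    """Pick the candidate maximizing a weighted utility of quality,
    inverse cost, and trust (a toy stand-in for the selection algorithm)."""
    return max(candidates,
               key=lambda c: w_quality * c["quality"]
                             + w_cost * (1.0 - c["cost"])
                             + w_trust * c["trust"])
```

The appeal of an automated rating of this shape is that it needs no consumer-submitted scores, sidestepping the subjectivity and unfairness issues the paper identifies in conventional reputation systems.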
Chan T.M.,University of Waterloo |
Wilkinson B.T.,University of Aarhus
Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms | Year: 2013
We present three new results on one of the most basic problems in geometric data structures, 2-D orthogonal range counting. All the results are in the w-bit word RAM model. • It is well known that there are linear-space data structures for 2-D orthogonal range counting with worst-case optimal query time O(log_w n). We give an O(n log log n)-space adaptive data structure that improves the query time to O(log log n + log_w k), where k is the output count. When k = O(1), our bounds match the state of the art for the 2-D orthogonal range emptiness problem [Chan, Larsen, and Pǎtraşcu, SoCG 2011]. • We give an O(n log log n)-space data structure for approximate 2-D orthogonal range counting that can compute a (1 + δ)-factor approximation to the count in O(log log n) time for any fixed constant δ > 0. Again, our bounds match the state of the art for the 2-D orthogonal range emptiness problem. • Lastly, we consider the 1-D range selection problem, where a query in an array involves finding the kth least element in a given subarray. This problem is closely related to 2-D 3-sided orthogonal range counting. Recently, Jørgensen and Larsen [SODA 2011] presented a linear-space adaptive data structure with query time O(log log n + log_w k). We give a new linear-space structure that improves the query time to O(1 + log_w k), exactly matching the lower bound proved by Jørgensen and Larsen. Copyright © SIAM.
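For contrast with these word-RAM structures, a classic comparison-based baseline for static 2-D orthogonal range counting is the merge sort tree: a segment tree over points sorted by x, where each node stores its points' y-values in sorted order, answering queries in O(log² n) time with O(n log n) space. A minimal sketch (our baseline, not the paper's data structure):

```python
import bisect

class RangeCount2D:
    """Merge sort tree for counting points in an axis-aligned query rectangle."""

    def __init__(self, points):
        pts = sorted(points)                 # sort by x (then y)
        self.xs = [p[0] for p in pts]
        self.size = 1
        while self.size < len(pts):
            self.size *= 2
        self.tree = [[] for _ in range(2 * self.size)]
        for i, (_, y) in enumerate(pts):     # leaves hold single y-values
            self.tree[self.size + i] = [y]
        for i in range(self.size - 1, 0, -1):  # each node: sorted y-values of its range
            self.tree[i] = sorted(self.tree[2 * i] + self.tree[2 * i + 1])

    def count(self, x1, x2, y1, y2):
        """Number of points with x1 <= x <= x2 and y1 <= y <= y2."""
        lo = bisect.bisect_left(self.xs, x1)   # x-range becomes a rank interval [lo, hi)
        hi = bisect.bisect_right(self.xs, x2)
        return self._query(1, 0, self.size, lo, hi, y1, y2)

    def _query(self, node, l, r, lo, hi, y1, y2):
        if hi <= l or r <= lo:
            return 0
        if lo <= l and r <= hi:               # node fully inside: binary search on y
            ys = self.tree[node]
            return bisect.bisect_right(ys, y2) - bisect.bisect_left(ys, y1)
        m = (l + r) // 2
        return (self._query(2 * node, l, m, lo, hi, y1, y2)
                + self._query(2 * node + 1, m, r, lo, hi, y1, y2))
```

A query decomposes the x-rank interval into O(log n) canonical nodes and does one binary search per node, which is exactly the O(log² n) the word-RAM structures above improve upon.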
News Article | April 12, 2016
The tree of life, which depicts how life has evolved and diversified on the planet, is getting a lot more complicated. Researchers at the University of California, Berkeley, who have discovered more than 1,000 new types of bacteria and Archaea over the past 15 years lurking in Earth's nooks and crannies, have dramatically rejiggered the tree to account for these microscopic new life forms. "The tree of life is one of the most important organizing principles in biology," said Jill Banfield, a UC Berkeley professor of earth and planetary science and environmental science, policy and management. "The new depiction will be of use not only to biologists who study microbial ecology, but also biochemists searching for novel genes and researchers studying evolution and earth history." Much of this microbial diversity remained hidden until the genome revolution allowed researchers like Banfield to search directly for their genomes in the environment, rather than trying to culture them in a lab dish. Many of the microbes cannot be isolated and cultured because they cannot live on their own: they must beg, borrow or steal stuff from other animals or microbes, either as parasites, symbiotic organisms or scavengers. The new tree of life, to be published online April 11 in the new journal Nature Microbiology, reinforces once again that the life we see around us - plants, animals, humans and other so-called eukaryotes - represent a tiny percentage of the world's biodiversity. "Bacteria and Archaea from major lineages completely lacking isolated representatives comprise the majority of life's diversity," said Banfield, who also has an appointment at Lawrence Berkeley National Laboratory. "This is the first three-domain genome-based tree to incorporate these uncultivable organisms, and it reveals the vast scope of as yet little-known lineages." 
According to first author Laura Hug, a former UC Berkeley postdoctoral fellow who is now on the biology faculty at the University of Waterloo in Ontario, Canada, the more than 1,000 newly reported organisms appearing on the revised tree are from a range of environments, including a hot spring in Yellowstone National Park, a salt flat in Chile's Atacama desert, terrestrial and wetland sediments, a sparkling water geyser, meadow soil and the inside of a dolphin's mouth. All of these newly recognized organisms are known only from their genomes. "What became really apparent on the tree is that so much of the diversity is coming from lineages for which we really only have genome sequences," she said. "We don't have laboratory access to them, we have only their blueprints and their metabolic potential from their genome sequences. This is telling, in terms of how we think about the diversity of life on Earth, and what we think we know about microbiology." One striking aspect of the new tree of life is that a group of bacteria described as the "candidate phyla radiation" forms a very major branch. Only recognized recently, and seemingly comprised only of bacteria with symbiotic lifestyles, the candidate phyla radiation now appears to contain around half of all bacterial evolutionary diversity. While the relationship between Archaea and eukaryotes remains uncertain, it's clear that "this new rendering of the tree offers a new perspective on the history of life," Banfield said. "This incredible diversity means that there are a mind-boggling number of organisms that we are just beginning to explore the inner workings of that could change our understanding of biology," said co-author Brett Baker, formerly of Banfield's UC Berkeley lab but now at the University of Texas, Austin, Marine Science Institute. Charles Darwin first sketched a tree of life in 1837 as he sought ways of showing how plants, animals and bacteria are related to one another. 
The idea took root in the 19th century, with the tips of the twigs representing life on Earth today, while the branches connecting them to the trunk implied evolutionary relationships among these creatures. A branch that divides into two twigs near the tips of the tree implies that these organisms have a recent common ancestor, while a forking branch close to the trunk implies an evolutionary split in the distant past. Archaea were first added in 1977 after work showing that they are distinctly different from bacteria, though they are single-celled like bacteria. A tree published in 1990 by microbiologist Carl Woese was "a transformative visualization of the tree," Banfield said. With its three domains, it remains the most recognizable today. With the increasing ease of DNA sequencing in the 2000s, Banfield and others began sequencing whole communities of organisms at once and picking out the individual groups based on their genes alone. This metagenomic sequencing revealed whole new groups of bacteria and Archaea, many of them from extreme environments, such as the toxic puddles in abandoned mines, the dirt under toxic waste sites and the human gut. Some of these had been detected before, but nothing was known about them because they wouldn't survive when isolated in a lab dish. For the new paper, Banfield and Hug teamed up with more than a dozen other researchers who have sequenced new microbial species, gathering 1,011 previously unpublished genomes to add to already known genome sequences of organisms representing the major families of life on Earth. She and her team constructed a tree based on 16 separate genes that code for proteins in the cellular machine called a ribosome, which translates RNA into proteins. They included a total of 3,083 organisms, one from each genus for which fully or almost fully sequenced genomes were available. 
The analysis, representing the total diversity among all sequenced genomes, produced a tree with branches dominated by bacteria, especially by uncultivated bacteria. A second view of the tree grouped organisms by their evolutionary distance from one another rather than current taxonomic definitions, making clear that about one-third of all biodiversity comes from bacteria, one-third from uncultivable bacteria and a bit less than one-third from Archaea and eukaryotes. "The two main take-home points I see in this tree are the prominence of major lineages that have no cultivable representatives, and the great diversity in the bacterial domain, most importantly, the prominence of candidate phyla radiation," Banfield said. "The candidate phyla radiation has as much diversity within it as the rest of the bacteria combined."
News Article | December 9, 2016
TORONTO, ON--(Marketwired - December 09, 2016) - The Chartered Professional Accountants of Ontario (CPA Ontario) congratulates Sanly Li of Richmond Hill who captured the Ontario Gold Medal as the top writer of the Common Final Examination (CFE) in the province. Written in September, the CFE is a national three-day evaluation that assesses competencies including essential knowledge, professional judgment, ethics and the ability to communicate. A student in the CPA education program, Sanly has a Master of Accounting degree from the University of Waterloo. She works at KPMG LLP in Toronto. On why she chose a career in the accounting profession, Sanly said: "I went into business and accounting aiming to be a CPA because the designation opens the gateway to many opportunities and enhances my future prospects." A total of 20 Ontario CPA students placed on the prestigious 53-member National Honour Roll and Ontario had 1,248 of the country's 3,515 successful CFE writers. This marks a milestone for these CPAs, acknowledging years of hard work, dedicated study, steadfast work-life balance and a true commitment to the accounting profession and their careers. The CFE is an important component of the new CPA qualification program, which includes prescribed education, practical experience and examination requirements. Only those who complete this entire CPA program successfully are entitled to use the internationally recognized designation of Chartered Professional Accountant, a profession known for financial expertise, strategic thinking, business insight and leadership. "Congratulations to all the successful Ontario candidates," said Carol Wilding, FCPA, FCA, President and CEO of CPA Ontario. "Qualifying to become a CPA is a very challenging and very rewarding process that benefits successful writers throughout their careers. 
I am confident that all of these students will make great contributions to the accounting profession while upholding the high standards of the CPA designation." The following 19 Ontario CFE writers have also achieved recognition on the 2016 National Honour Roll: Interviews with Honour Roll students are available on request. About the Chartered Professional Accountants of Ontario CPA Ontario protects the public interest by ensuring its members meet the highest standards of integrity and expertise. CPA Ontario serves and supports its more than 87,000 members and 19,000 students in their qualification and professional development in a wide range of senior positions in public accounting, business, finance, government, not-for-profits and academe. Chartered Professional Accountants are valued by organizations of all types and sizes for their financial expertise, strategic thinking, business insight, management skills and leadership. For information on the profession, visit cpaontario.ca. To become a CPA in Ontario, visit gocpaontario.ca.
News Article | February 15, 2017
Washington, DC - People make decisions every day, some trivial, like what to eat for lunch, while others are more significant -- career, marriage, buying a home. A series of studies conducted by Jeff Hughes and Abigail Scholer (University of Waterloo) shows that how people make their decisions, not just the outcome, may impact their health, happiness and satisfaction. The research appears in the journal Personality and Social Psychology Bulletin, published by the Society for Personality and Social Psychology (SPSP). One approach to decision-making is to maximize, which is commonly defined as an extensive search through options to find "the best one." However, Hughes and Scholer's research shows that even people who want to find the best option can approach that goal in different ways. One type of maximizer, the promotion-focused maximizer, strives to attain ideals and is particularly concerned with approaching gains and avoiding non-gains, according to the study. When put to a series of tests, it turns out this type of maximizer is able to find the best choice in a way that is satisfying and avoids regret. Another type of maximizer, the assessment-focused maximizer, approaches decisions with a concern for evaluating and comparing options. According to the authors' research, they run into what might be commonly seen as FOBO, or fear of a better option. They become so focused on doing the "right" thing that even after they make a decision, they still ruminate on their earlier options, which leads to frustration and regret with the decision process. "It's okay to look through your options thoroughly, but what especially seems to produce frustration and regret when making decisions is re-evaluating the same options over and over," says Hughes. "Doing so invites you to keep thinking about all the options you had to leave behind, rather than enjoying the option that you chose in the end."
Past research on maximizing has focused on one strategy, the process of extensively searching through alternatives. In the real world, this strategy might be shown by the person who circles the entire store three times before deciding on something, or who wants to take a tropical vacation but checks out information on Iceland and Finland as well, just in case there's a special offer. "What seems problematic with this strategy is that people have trouble letting go of options they already evaluated - they keep going back 'just in case,'" says Hughes. According to the authors, the research on this type of decision strategy is much more negative, being associated with greater depression and regret, lower life satisfaction, and more procrastination. "We don't want to suggest that being thorough with really important decisions is a bad thing, but if you're as thorough with your decisions for lunch as you are with your decisions about your career, this could be a problem," summarizes Hughes. Personality and Social Psychology Bulletin (PSPB), published monthly, is an official journal of the Society for Personality and Social Psychology (SPSP). SPSP promotes scientific research that explores how people think, behave, feel, and interact. The Society is the largest organization of social and personality psychologists in the world. Follow us on Twitter, @SPSPnews and find us on facebook.com/SPSP.org.
Agency: Cordis | Branch: FP7 | Program: CP | Phase: ICT-2009.1.1 | Award Amount: 8.07M | Year: 2010
The Internet has evolved from a technology-centric core network to a user- and content-centric network that must support millions of users creating and consuming content. It must accommodate new services with new requirements and cope with heterogeneous network technologies. The momentum is moving toward the end user, who is now capable of creating, storing, and delivering content and services. FIGARO proposes a Future Internet architecture that is structured around residential networks. In this architecture, home gateways have a key role as integrators of different networks and services, and as coordinators of Internet-wide distributed content management. FIGARO will: i) design a novel content management architecture that enables distributed content backup, search and access, and also supports mobile users and wireless ad-hoc content sharing; ii) develop a network optimization framework, leveraging community networks and heterogeneous networks; iii) deliver a network management architecture which includes new network monitoring and real-time troubleshooting techniques; iv) explore novel Internet-based communication and service solutions for emerging sectors, such as energy management and e-health care. We will deliver the components of the FIGARO architecture through an experimental approach incorporating testbed prototyping of solutions. In summary, FIGARO is intended to evolve the current Internet to meet future demands of applications, services and end-users, while preserving its current robustness and increasing its scalability and efficiency. Furthermore, the integration of new sectors into the future Internet will spur trans-sector innovation and create new businesses. The project is expected to result in technologies that will strengthen Europe's position and give competitive advantage to European industry in the areas of Future Internet technologies and services, residential gateways and home automation.
News Article | November 23, 2016
As if full-time research weren't time-consuming and challenging enough, nanophysicist Michael Stopa embraced a second occupation while at the bench: politics. He served as a delegate for US president-elect Donald Trump at this year's Republican National Convention. Before that, while he was a senior scientist at Harvard University in Cambridge, Massachusetts, he blew his cover as a semi-secret conservative by running unsuccessfully as a Republican for the US Congress in 2010 and again in 2013. “My face was on the front page of the Harvard Crimson,” he says of the university's student newspaper. “At that point, I was exposed.” Stopa, who now works at a technology startup near Boston, Massachusetts, says that his outspoken politics have cost him at least one close professional collaboration — and maybe more — but that hasn't quietened him. He still talks politics on the Harvard Lunch Club weekly podcast. In each session he takes part in, he discusses his conservative views, including his belief that illegal immigration threatens the United States. The acrimonious US presidential election is over, but politics are forever, and Stopa isn't the only scientist joining the fray. Many researchers take public political stands on Twitter and elsewhere, and some are engaging with political parties or running for office (see 'Join the party'). Politically active scientists can struggle to find the time and energy to bridge both worlds, and there's always the risk that an unpopular stand could cause friction. But there are also benefits: politics can provide another avenue for networking and outreach. And, ideally, scientists will be able to give governments the kind of input needed to produce informed policy. Political involvement can also create a sense of real-world accomplishment that is sometimes hard to find in the lab. 
“Nothing's more rewarding than combining the two passions,” says David Mazzocchi-Jones, a neuroscientist at Keele University, UK, and a member of the local Labour Party. Despite the opportunities, few scientists have reached high office in government. Frauke Petry, chairwoman of the right-wing Alternative for Germany Party, has a chemistry PhD, as does chancellor Angela Merkel. Of 535 members of the US House and Senate, just two congressmen — a physicist and an engineer — have PhDs in the hard sciences. The UK-based Campaign for Science and Engineering counts 90 Members of Parliament who have at least some background or interest in the sciences, engineering or medicine, including Thérèse Coffey, who has a PhD in chemistry. That's down from 103 science-minded MPs in the previous parliament. “Scientists are very under-represented in politics in the UK,” Mazzocchi-Jones says. “Twenty years ago, there were quite a few more.” Researchers who manage to break into the political world could have a huge impact on policy, says Jeff Schweitzer, a former marine biologist who worked as a science-policy analyst for the US Clinton administration in the 1990s. “The biggest thing that a scientist brings is a method of thinking,” he says. “They have a vocabulary that non-scientists might not have.” Scientists in government can help to bridge the gap between policymakers and the researchers who study, in great detail, how the real world actually works, he adds. Mazzocchi-Jones, a Labour councillor for Newcastle-under-Lyme, believes that his science background has helped him to handle the issues that matter to his constituents. “When we're deciding on a new recycling system, I can say, 'Show me the numbers,'” he says. Governments are increasingly facing critical issues such as climate change and fracking (hydraulic fracturing) that call for scientific wisdom, says David Dunbar, a bioinformatician at the University of Edinburgh, UK, who is active with the Scottish National Party. 
“The scientists you see in the UK Parliament seem to be thinking in an evidence-based way, and that's a positive,” he says. “The party line isn't always evidence-based. And neither is public opinion.” Scientists who aren't themselves politically active can still do their bit to keep politicians informed, even if only through a quick e-mail or a chat with a local representative. “We need to engage more with politicians,” Mazzocchi-Jones says. “It's not going to get us anywhere unless we talk to them directly.” He says that his interest in politics was rekindled in 2014 when he helped to organize the UK Physiological Society's Engaging with Parliamentarians outreach programme, where he and other scientists paired up with politicians to exchange ideas. He says that both sides must find ways to identify common ground. “Scientists have to step forward and be recognized,” he says, “and politicians have to listen”. Politics can be a sticky subject, however, especially when someone is out of step with their colleagues. Stopa says he felt some tension at Harvard, and not just with the friend and collaborator who severed ties with him. When Stopa's contract wasn't renewed, he was eager to move on. “It's hard to be surrounded by people with different ideologies.” Stopa doesn't regret publicly announcing his conservatism, but he understands why some conservatives prefer to keep quiet. “There's an ongoing debate about whether or not to come out of the closet,” he says. “You have to make that decision for yourself. If you think it might negatively affect your career, you might be better off not saying these things.” Schweitzer sees political activism as a right. No one in science should be afraid to put their politics on display, he says. “If that's an obstacle,” he adds, “you're not at the right institution.” Sometimes, political anonymity isn't much of an option. “A lot of my colleagues and students live in my ward,” says Mazzocchi-Jones. 
“I know for a fact that they've received leaflets with my face on them.” And his dual roles occasionally collide in awkward ways. “Colleagues will tell me, 'My bin didn't get collected last week,'” he adds, by way of example. So compelling is political work for some scientists that they turn it into a full-time profession. Stacey Danckert, who has a PhD in cognitive neuroscience from the University of Waterloo, Canada, declined a prestigious two-year grant from the Alzheimer's Society in 2013 because she found it tough to balance research, politics and family commitments. “I decided to follow my passion for the environment,” she says. She left the lab and is now policy coordinator for the Green Party of Ontario and a twice-unsuccessful Green Party candidate for the Provincial Parliament of Ontario. In her view, it's almost impossible for a scientist to run for political office while staying in the lab. “It's important to get your name out, and you can't do that without spending a lot of time,” she says. “The two pursuits require endless dedication.” Similarly, Jess Spear, a former climate scientist who worked at the Burke Museum of Natural History and Culture in Seattle, Washington, left research to join Socialist Alternative, a socialist activist group, in 2011. After running unsuccessfully for the Washington state House of Representatives in 2014, she is now a full-time organizer for the group. “The more I got involved in climate science,” she says, “the more I became aware that we don't just need more data. We need political will.” Schweitzer believes that scientists who can handle university politics have the mettle to excel at local, regional and national politics. “The skills are very transferable,” he says. “You have to show that you can get along with people, and you have to build networks.” Perhaps most importantly, scientists tend to have a track record of working with large bureaucracies. 
“You need to be able to manipulate the system to your will to get things done,” he says. “If you tend to get frustrated and just throw up your hands, politics probably isn't for you.” Before jumping into politics, Schweitzer briefly ran his own lab at the University of California, Irvine, an experience that he says was invaluable in his second career. “In order to have credibility in Washington DC, you have to have had at least a short career as a lab scientist,” he says. “I wasn't in the lab very long, but in their view I was a real scientist.” Mazzocchi-Jones manages, for the most part, to keep his work separate from his ideology. He says that he has a student who is an outspoken supporter of the UK Independence Party, a right-wing, anti-immigration party. “I find his politics abhorrent,” Mazzocchi-Jones says. “But in the end, science unites us.”
News Article | February 14, 2017
IRVINE, Calif., Feb. 14, 2017 (GLOBE NEWSWIRE) -- InMode welcomes Mr. Shakil Lakhani, Executive Vice President of Sales, North America. Mr. Lakhani has been in the industry for over a decade, bringing a vast amount of experience with him. He began his career in 2006 with one of the largest aesthetic laser companies, Cynosure Inc., as its youngest Territory Manager, and was quickly promoted to Area Sales Manager and then Sales Director by 2013. He has experienced rapid growth throughout his career and attributes the majority of his success to his upbringing and the support of his family. Mr. Lakhani graduated with a B.A. from the University of Waterloo. Mr. Tyler Lembke joins the InMode team as Vice President of Sales for the West Region. With more than 11 years of experience in the medical and aesthetic laser industry, Mr. Lembke has held various sales and management positions at Cynosure, Cutera, and Lumenis. Most recently, Mr. Lembke has specialized in the introduction and rapid launch of innovative niche technologies into the marketplace. Mr. Lembke graduated with a Bachelor of Science in Business from Oklahoma State University. Adrian Bishop is appointed as Vice President of Sales for the East Region. Previously, Mr. Bishop served as Director of Sales, Southeast Region, for Syneron-Candela. He was the company's Top Producing Director of Sales for both fiscal 2015 and fiscal 2016, breaking sales records in both years. He also held many sales and training roles with Syneron-Candela, as well as Solta Medical before that. Mr. Bishop has a proven track record of building award-winning sales teams and will continue that momentum at InMode. Mr. Bishop received a B.S.B.A. in Marketing and Business Administration from Christian Brothers University in Memphis, TN. “InMode is very excited about the growth opportunities brought by our new executive team. 
Coupled with our outstanding technology, this improved sales structure will further strengthen our presence in the aesthetic market,” says Erik Dowell, CEO of the Americas. InMode’s technological advancements have become the new standard for aesthetic medicine, specifically in the radio-frequency aesthetic market. For more than three decades, our R&D team has been instrumental in developing state-of-the-art light, laser, and radio-frequency devices, launching and shaping the industry. Our technology continues that legacy, providing superior satisfaction for both the patient and the practice. Learn more about InMode/Invasix technologies by visiting www.inmodemd.com.
News Article | September 11, 2016
In China, Audi has signed tripartite memorandums of understanding (MOUs) with Alibaba, Baidu, and Tencent respectively. The partners will deepen their cooperation in the areas of data analysis, internet-vehicle platform building and urban intelligent transport. The partnerships will be supported by the brand’s strong development capabilities in China. In Beijing, Audi operates its largest research and development facility outside of Germany. The R&D center is part of Audi China, a wholly owned subsidiary of Audi AG, and puts a strong focus on key future technologies such as the connected car, piloted driving, new energy vehicles and digital services. Audi China and the brand’s Chinese joint venture FAW-VW Automotive Co. Ltd. are cooperating closely in their digitalization activities. Audi can build on existing cooperation with China’s leading internet firms. In cooperation with Alibaba, the German company has integrated real-time traffic data into Audi MMI and has become the first premium manufacturer in China to offer high-resolution 3D maps. Starting in 2017, Audi will launch Baidu CarLife in its local model line-up, for seamless transfer of Baidu’s popular app services between customers’ digital devices and their cars. With Tencent, Audi is currently developing the integration of WeChat MyCar services into Audi models. The first features to be implemented will be location sharing and music sharing. WeChat is Asia’s leading communication services app, with over 700 million active users. The signing of the MOUs took place on September 11 during the Audi Brand Summit. At this event, Audi showcased, among other technologies and vehicles, the Audi A6 L e-tron, its first locally produced plug-in hybrid model. (Earlier post.)
News Article | November 4, 2016
Using prominent, graphic pictures on cigarette packs warning against smoking could avert more than 652,000 deaths, up to 92,000 low-birth-weight infants, up to 145,000 preterm births, and about 1,000 cases of sudden infant death in the U.S. over the next 50 years, say researchers from Georgetown Lombardi Comprehensive Cancer Center. Their study, published online Nov. 3 in the journal Tobacco Control, is the first to estimate the effects of pictorial warnings on cigarette packs on the health of both adults and infants in the U.S. Although more than 70 nations have adopted or are considering adopting the World Health Organization's Framework Convention on Tobacco Control recommendation to use such front- and back-of-pack pictorial warnings -- an example is a Brazilian photo of a father with a tracheotomy -- they have not been implemented in the U.S. Pictorial warnings have been required by law, but an industry lawsuit stalled implementation of this requirement. Currently, a text-only warning appears on the side of cigarette packs in the U.S. The study used a tobacco control policy model, SimSmoke, developed by Georgetown Lombardi's David T. Levy, PhD, which looks at the effects of past smoking policies as well as future policies. SimSmoke is peer-reviewed, and has been used and validated in more than 20 countries. In this study, Levy and his colleagues, who included investigators at the University of Waterloo, Ontario, and the University of South Carolina, looked at changes in smoking rates in Australia, Canada and the United Kingdom, which have already implemented prominent pictorial warning labels (PWLs). For example, eight years after PWLs were implemented in Canada, there was an estimated 12 to 20 percent relative reduction in smoking prevalence. After PWLs began to be used in Australia in 2006, adult smoking prevalence fell from 21.3 percent in 2007 to 19 percent in 2008. After implementation in the UK in 2008, smoking prevalence fell 10 percent in the following year. 
The researchers used these and other studies and, employing the SimSmoke model, estimated that implementing PWLs in the U.S. would directly reduce smoking prevalence in relative terms by 5 percent in the near term, increasing to 10 percent over the long term. If implemented in 2016, PWLs are estimated to reduce the number of smoking-attributable deaths (from heart disease, lung cancer and COPD) by an estimated 652,800 by 2065 and to prevent more than 46,600 low-birth-weight births, 73,600 preterm births, and 1,000 SIDS deaths. "The bottom line is that requiring large pictorial warnings would help protect the public health of people in the United States," says Levy, a professor of oncology. "There is a direct association between these warnings and increased smoking cessation and reduced smoking initiation and prevalence. That would lead to significant reduction of death and morbidity, as well as medical cost." The study was funded by grants from the National Institute on Drug Abuse (R01DA036497) and the National Cancer Institute (UO1-CA97450). Co-authors include Darren Mays, PhD, MPH, and Zhe Yuan, MS, both from Georgetown; David Hammond, PhD, from the University of Waterloo; and James F. Thrasher, PhD, MS, MA, from the University of South Carolina. Hammond has served as a paid expert witness on behalf of governments in tobacco litigation, including challenges to health warning regulations. The other co-authors report no potential conflicts or related financial interests.
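The relative-reduction arithmetic described above can be sketched in a few lines of Python. This is a toy illustration, not the SimSmoke model itself: the flat baseline prevalence and the linear 20-year phase-in are hypothetical assumptions, and only the 5 percent near-term and 10 percent long-term relative reductions come from the study.

```python
# Toy sketch of a relative-reduction calculation (NOT the SimSmoke model).
# Assumptions: a hypothetical flat 15% baseline adult smoking prevalence,
# and a linear 20-year phase-in from the study's 5% near-term to its
# 10% long-term relative reduction.

def prevalence_with_pwl(baseline, near_term=0.05, long_term=0.10, ramp_years=20):
    """Apply a relative reduction that grows linearly from near_term to
    long_term over ramp_years, then holds at long_term."""
    out = []
    for year, p in enumerate(baseline):
        frac = min(year / ramp_years, 1.0)
        reduction = near_term + (long_term - near_term) * frac
        out.append(p * (1.0 - reduction))
    return out

baseline = [0.15] * 50          # hypothetical, not study data
policy = prevalence_with_pwl(baseline)
print(round(policy[0], 4))      # near term: 15% x (1 - 0.05) = 0.1425
print(round(policy[30], 4))     # long term: 15% x (1 - 0.10) = 0.135
```

The actual model additionally tracks initiation, cessation, and cause-specific mortality to turn prevalence changes into averted deaths and birth outcomes; none of that detail is attempted here.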
News Article | September 23, 2016
Although boredom is as familiar a feeling as excitement or fear, science has only begun to understand what makes people bored. Recently, six scientists who emerged after living for a year in isolation on the Mauna Loa volcano as part of the HI-SEAS (Hawaii Space Exploration Analog and Simulation) experiment, which simulated the isolation that future space travelers might experience traveling to and living on Mars, said that boredom was their biggest challenge. Boredom "has been understudied until fairly recently, but it’s [worth studying] because human experience has consequences for how we interact with each other and our environment," said James Danckert, professor of cognitive neuroscience at the University of Waterloo in Ontario, in an interview with Live Science. It's easy to think of examples of mind-numbing situations, such as waiting in line at the DMV, listening to a monotonous lecture or being stuck in traffic. It's much more difficult to define boredom. A 2012 review of boredom research conducted in educational settings suggested that boredom is some combination of an objective lack of neurological excitement and a subjective psychological state of dissatisfaction, frustration or disinterest, all of which result from a lack of stimulation. The one aspect on which most people seem to agree is that boredom is unpleasant. "I describe it as an aggressively dissatisfying state," Danckert said. In this way, boredom is not the same as apathy, because bored people are in some way motivated to end their boredom. Boredom is also distinct from hopelessness and depression. 
Compared to hopelessness, "boredom may involve feeling stuck in a dissatisfying current situation, but it does not involve believing that success is impossible or that engaging in satisfying activity is unattainable in the future," said Taylor Acee, assistant professor of developmental education at Texas State University, San Marcos, in an email interview with Live Science. And although boredom is similar to depression, in that both are unpleasant states of low arousal, Acee and Danckert agreed that depression tends to involve a negative, inward-looking focus, whereas boredom relates to a negative feeling that arises from a lack of stimulation from the outside world. Research has shown that some people are more prone to boredom than others. A 2012 paper looked at the psychological attributes that could make a person more susceptible to boredom, and it found that people who have conditions that affect their attention, such as ADHD, may get bored easily. Also, people who are over- or undersensitive to stimulation, and those who are unable to express what activities might be engaging enough to combat their boredom, are more likely to get bored. In his own research, Danckert has found that people who are reaching the end of their young adulthood years, around age 22, may be less likely than those in their late teens to get bored. The reason may hint at a larger cause of boredom, he said. "In that age range, the frontal cortex is in the final stages of maturation," and this part of the brain helps with self-control and self-regulation, Danckert said. People who have experienced traumatic brain injury may also be more prone to boredom, which can affect their recovery, he said. It's possible that this relates to injury to the frontal cortex. The key to avoiding boredom, for those both inside and outside of the groups of people most prone to boredom, is self-control, Danckert said. 
"Those with a higher capacity for self-control are less likely to experience boredom," he said. Recent research has linked boredom to increased creativity. Danckert said that in studies he has done, he has found that boredom tends to inspire creativity only in people with high levels of self-control. So far, there's no tidy evolutionary reasoning to explain why we get bored. But that doesn't mean that boredom can't do us some good. "The positive side of boredom is that, if responded to in an adaptive way, it is this signal to explore, [to] do something else. That what you're doing now isn't working," said Danckert, noting that he was explaining the philosophical reasoning of University of Louisville professor of philosophy Andreas Elpidorou, who defends the value of boredom in his own work.
How to stop being bored
Research on boredom isn't far enough along to reveal ways to fight it. There are, however, some hints as to what might make a task boring or not. "Boredom is often experienced when an individual perceives themselves as being temporarily confined to a situation or activity that lacks value for one reason or another," Acee said. Tasks could lack value because they are unenjoyable, uninteresting, too easy or too difficult, or because we consider them unimportant on a personal level, he said. One way to keep a tedious task from being boring might be to think about it differently. "Reflecting about the potential usefulness, relevance or meaningfulness of an activity can help individuals increase the value they assign to the activity," said Acee. Although it hasn't been tested, Danckert similarly suggested that mindfulness training or meditation might help ward off ennui. It's important to note that, unless you find Candy Crush particularly meaningful, turning to technology is not the likeliest boredom cure. 
It may provide an increased level of engagement, but probably little else, said Danckert, emphasizing that there isn't really any way to say for sure whether what we're experiencing in our plugged-in present is any less boring to us than what preceding, smartphoneless generations have experienced. Both Acee and Danckert said that boredom is something we need to know more about. Boredom has been associated with plenty of negative outcomes, including low academic performance, high dropout rates, mistakes on the job, depression, anxiety and a lowered sense of life purpose, Acee said. Even if it doesn't lead to these problems in most people, boredom plays a major role in our lives, particularly if we find that our work or time in the classroom is snooze-inducing. "Generating knowledge about boredom through research can help inform us about how to design educational programs, structure work environments, advise patients and clients and manage our day-to-day lives," Acee said. Copyright 2016 LiveScience, a Purch company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
News Article | February 15, 2017
One of the two top air pollutants in the U.S., ground-level ozone is harmful not only to your health but also to your bank balance. Long-term exposure to high concentrations of ozone can lead to respiratory and lung diseases such as asthma, conditions that drive up medical expenses and sometimes result in lost income. Ozone exacts a particularly heavy toll on people living in economically disadvantaged areas, where industrial and power plants tend to cluster. While policies have been implemented to reduce ozone emissions across the country, they have not yet addressed built-in inequities in the U.S. economy, leaving low-income Americans at greatest risk of health and economic damages. Now a study by researchers at the MIT Joint Program on the Science and Policy of Global Change provides the first breakdown of ozone exposure, health, and economic impacts by household income across the U.S. The study, which appears in the journal Environmental Science and Technology, uses a modified version of the MIT Joint Program’s U.S. Regional Energy Policy (USREP) model to simulate the health and economic impacts of ozone exposure and ozone-reduction policy on nine U.S. income groups. Comparing a set of policies under consideration in 2014 with a business-as-usual scenario, the researchers found the policies to be most effective in reducing mortality risks among lowest-income (less than $10,000 per year) households, which netted twice the relative economic gains of their highest-income (more than $150,000 per year) counterparts. “I hope our findings remind decision-makers to look at the distributive effects of environmental policy and how that relates to economic disparity,” says the study’s lead author, Rebecca Saari PhD '15, a former Joint Program research assistant and engineering systems PhD student who is now an assistant professor of civil and environmental engineering at the University of Waterloo in Canada. 
“If you ignore those effects, you underestimate the importance of ozone reduction for low-income households and overestimate it for high-income households. Now that we have better tools, we can actually model the differences among income groups and quantify the impacts.” To obtain their results, the researchers combined a regional chemical transport model (Comprehensive Air Quality Model with extensions, or CAMx), health impacts model (Benefits Mapping and Analysis System, or BenMAP), and model of the continental U.S. energy and economic system (USREP) into a single computational platform. They then enhanced that platform to simulate ozone concentrations and their health and economic impacts across nine household income categories. Using 2005 U.S. ozone concentration data as a base year, they compared results from two simulations — one representing a baseline scenario in which no new ozone-reduction policy was applied, the other implementing a U.S. EPA-evaluated suite of policies once planned for the year 2014. The study determined that ozone exposure — and hence mortality incidence rates — declined with increasing income, with the proposed 2014 policies reducing these rates by 12-13 percent. People earning the lowest incomes were better off economically by 0.2 percent, twice as much as those in the highest income group — and were twice as economically vulnerable to delays in policy implementation. The model could enable today’s decision-makers to evaluate any new ozone reduction policy proposal in terms of its potential impacts on Americans in all income groups, thereby gauging whether or not it will reduce or exacerbate existing economic inequality. 
“Integrating air pollution modeling with economic analysis in this way provides a new type of information on proposed policies and their implications for environmental justice,” says study co-author Noelle Selin, associate professor in the MIT Institute for Data, Systems and Society and Department of Earth, Atmospheric and Planetary Sciences. “This type of approach can be used to help policymakers better identify policies that will mitigate environmental inequalities.” The research was funded by the U.S. Environmental Protection Agency, the MIT Leading Technology and Policy Initiative; the MIT Energy Initiative Total Energy Fellowship; the MIT Martin Family Society Fellowship; and the National Park Service.
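The "relative gain" comparison at the heart of the study, each income group's policy benefit expressed as a share of its income, can be illustrated with a short sketch. The group incomes and dollar benefits below are invented placeholders chosen only so the ratios reproduce the reported pattern (0.2 percent for the lowest-income group, half that for the highest); they are not outputs of the USREP model.

```python
# Hypothetical illustration of the "relative economic gain" comparison.
# Incomes and per-household benefits are invented placeholders, NOT the
# study's data; only the 0.2% figure for the lowest group is reported.

def relative_gain(benefit, income):
    """Policy benefit expressed as a percentage of group income."""
    return 100.0 * benefit / income

groups = {
    "under $10k": {"income": 9_000, "benefit": 18.0},
    "over $150k": {"income": 200_000, "benefit": 200.0},
}
for name, g in groups.items():
    print(name, round(relative_gain(g["benefit"], g["income"]), 2), "%")
# The lowest-income group gains 0.2% of its income, twice the 0.1% of the
# highest-income group, even though its absolute benefit is far smaller.
```

Normalizing by income is what lets the study compare groups of very different means; in absolute dollars the highest earners gain more, but relative to income the lowest earners benefit most from ozone reduction.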
News Article | February 23, 2017
TORONTO, ON--(Marketwired - February 23, 2017) - ViXS Systems Inc. (TSX: VXS), a pioneer and leader in advanced media processing solutions, announced today the appointment of Fred Shlapak to its Board of Directors, effective immediately. "We are pleased that Fred has joined our Board of Directors," said Peter Currie, Chairman of ViXS. "He brings over three decades of leadership and operational experience coupled with deep knowledge of the semiconductor industry. His perspective and guidance will be invaluable as we continue to grow ViXS to the next level." Mr. Shlapak was President and Chief Executive Officer of the Semiconductor Products Sector at Motorola Corporation, a multibillion-dollar business, when he retired in February 2004. Mr. Shlapak's 33-year career at Motorola included leading the company's Canadian semiconductor operations. He lived and worked in Europe, where he held roles of increasing responsibility that culminated in leadership of Motorola's European Semiconductor Group. Mr. Shlapak holds B.Sc. and M.Sc. degrees in Electrical Engineering from the University of Waterloo. He has also served on the boards of Applied Micro Circuits Corporation, Gennum Corporation, Tundra Semiconductor Corporation and SiGe Semiconductor, and is a former member of the Semiconductor Industry Association. ViXS is a pioneer and market leader in designing revolutionary media processing semiconductor solutions for video-over-IP streaming, with over 500 patents issued and pending worldwide, numerous industry awards for innovation, and over 33 million media processors shipped to date. ViXS is driving the transition to Ultra HD 4K across the entire content value chain by providing professional- and consumer-grade chipsets that support the new High Efficiency Video Coding (HEVC) standard up to the Main 12 profile, reducing bandwidth consumption by 50% while providing the depth of color and image clarity needed to take advantage of higher-resolution content. 
ViXS' XCodePro 300 family is ideal for Ultra HD 4K infrastructure equipment, and the XCode 6000 family of system-on-chip (SoC) products achieve unprecedented levels of integration that enable manufacturers to create cost-effective consumer entertainment devices. ViXS is headquartered in Toronto, Canada with offices internationally. VIXS™, the ViXS® logo, XCode®, XCodePro™, XConnex™ and Xtensiv™ are trademarks and/or registered trademarks of ViXS. Other trademarks are the property of their respective owners. For more information on ViXS, visit our website: www.vixs.com. Certain statements in this press release which are not historical facts constitute forward-looking statements or information within the meaning of applicable securities laws ("forward-looking statements"). Such statements include, but are not limited to, statements regarding ViXS' projected revenues, gross margins, earnings, growth rates, the impact of new product design wins, market penetration and product plans. The use of terms such as "may", "anticipated", "expected", "projected", "targeting", "estimate", "intend" and similar terms are intended to assist in identification of these forward-looking statements. Readers are cautioned not to place undue reliance upon any such forward-looking statements. Such forward-looking statements are not promises or guarantees of future performance and involve both known and unknown risks and uncertainties that may cause ViXS' actual results to be materially different from historical results or from any results expressed or implied by such forward-looking statements. Accordingly, there can be no assurance that forward-looking statements will prove to be accurate and readers are therefore cautioned not to place undue reliance upon any such forward-looking statements. 
Factors that could cause results or events to differ materially from current expectations expressed or implied by forward-looking statements contained herein include, but are not limited to: our history of losses and the risks associated with not achieving or sustaining profitability; the Company's dependence on a limited number of customers for a substantial portion of revenues; fluctuating revenue and expense levels arising from changes in customer demand, sales cycles, product mix, average selling prices, manufacturing costs and timing of product introductions; risks associated with competing against larger and more established companies; competitive risks and pressures from further consolidation amongst competitors, customers, and suppliers; market share risks and timing of revenue recognition associated with product transitions; risks associated with changing industry standards such as HEVC (High Efficiency Video Coding), HDR (High Dynamic Range) and Ultra HD resolution; risks related to intellectual property, including third-party licensing or patent infringement claims; the loss of any of the Company's key personnel, which could seriously harm its business; risks associated with adverse economic conditions; delays in the launch of customer products; price re-negotiations by existing customers; the Company's dependence on a limited number of supply chain partners for the manufacture of its products; legal proceedings arising from the ordinary course of business; ability to raise needed capital; ongoing liquidity requirements; and other factors discussed in the "Risk Factors" section of the Company's Annual Information Form dated March 31, 2016, a copy of which is available under the Company's profile on SEDAR at www.sedar.com. All forward-looking statements are qualified in their entirety by this cautionary statement. 
ViXS is providing this information as of the current date and does not undertake any obligation to update any forward-looking statements contained herein as a result of new information, future events or otherwise except as may be required by applicable securities laws.
News Article | December 21, 2016
SAE International announces that Kasra Ghahremani, PhD, Structural Diagnostics Engineer with Walter P. Moore, is the winner of the Henry O. Fuchs Student Award. Established in 1991, this award recognizes a graduate student or recent graduate (e.g., a postdoctoral researcher or new professor) who is working in the field of fatigue research and applications. The purpose of this award is to promote the education of engineering students in the area of fatigue technology. This award honors the memory of Professor Henry O. Fuchs. Professor Fuchs participated in the SAE Fatigue Design & Evaluation Committee's research projects, was a member of the faculty who founded the SAE Fatigue Concepts in Design short course, published extensively in SAE and elsewhere in the technical community, and actively participated in the Surface Enhancement Division of the Committee, which is responsible for many standards relating to surface treatments of metals for withstanding fatigue damage. Dr. Ghahremani’s research during his graduate studies was primarily focused on the fatigue performance, assessment, and retrofitting of metal structures in the long-life regime. His research has been published, in co-authorship with his scientific advisers and other collaborators, in 20 journal and conference papers. Prior to joining Walter P. Moore, Dr. Ghahremani was a Post-Doctoral Research Fellow at George Mason University, conducting research on detecting structural damage in 3D point clouds. He has received several national and institutional awards and scholarships, including the prestigious Natural Sciences and Engineering Research Council of Canada (NSERC) Scholarship. Dr. Ghahremani received his PhD and MASc degrees in Structural Engineering from the University of Waterloo and his BSc in Civil Engineering from Sharif University of Technology. SAE International is a global association committed to being the ultimate knowledge source for the engineering profession. 
By uniting more than 127,000 engineers and technical experts, we drive knowledge and expertise across a broad spectrum of industries. We act on two priorities: encouraging a lifetime of learning for mobility engineering professionals and setting the standards for industry engineering. We strive for a better world through the work of our philanthropic SAE Foundation, including programs like A World in Motion® and the Collegiate Design Series™.
News Article | October 23, 2015
Many human-made pollutants in the environment resist degradation through natural processes, and disrupt hormonal and other systems in mammals and other animals. Removing these toxic materials — which include pesticides and endocrine disruptors such as bisphenol A (BPA) — with existing methods is often expensive and time-consuming. In a new paper published this week in Nature Communications, researchers from MIT and the Federal University of Goiás in Brazil demonstrate a novel method for using nanoparticles and ultraviolet (UV) light to quickly isolate and extract a variety of contaminants from soil and water. Ferdinand Brandl and Nicolas Bertrand, the two lead authors, are former postdocs in the laboratory of Robert Langer, the David H. Koch Institute Professor at MIT’s Koch Institute for Integrative Cancer Research. (Eliana Martins Lima, of the Federal University of Goiás, is the other co-author.) Both Brandl and Bertrand are trained as pharmacists, and describe their discovery as a happy accident: They initially sought to develop nanoparticles that could be used to deliver drugs to cancer cells. Brandl had previously synthesized polymers that could be cleaved apart by exposure to UV light. But he and Bertrand came to question their suitability for drug delivery, since UV light can be damaging to tissue and cells, and doesn’t penetrate through the skin. When they learned that UV light was used to disinfect water in certain treatment plants, they began to ask a different question. “We thought if they are already using UV light, maybe they could use our particles as well,” Brandl says. 
“Then we came up with the idea to use our particles to remove toxic chemicals, pollutants, or hormones from water, because we saw that the particles aggregate once you irradiate them with UV light.” The researchers synthesized polymers from polyethylene glycol, a widely used compound found in laxatives, toothpaste, and eye drops and approved by the Food and Drug Administration as a food additive, and polylactic acid, a biodegradable plastic used in compostable cups and glassware. Nanoparticles made from these polymers have a hydrophobic core and a hydrophilic shell. Driven by molecular-scale forces, hydrophobic pollutant molecules in solution move toward the hydrophobic nanoparticles and adsorb onto their surface, where they effectively become “trapped.” This same phenomenon is at work when spaghetti sauce stains the surface of plastic containers, turning them red: In that case, both the plastic and the oil-based sauce are hydrophobic and interact. If left alone, these nanomaterials would remain suspended and dispersed evenly in water. But when exposed to UV light, the stabilizing outer shell of the particles is shed, and — now “enriched” by the pollutants — they form larger aggregates that can then be removed through filtration, sedimentation, or other methods. The researchers used the method to extract phthalates, hormone-disrupting chemicals used to soften plastics, from wastewater; BPA, another endocrine-disrupting synthetic compound widely used in plastic bottles and other resinous consumer goods, from thermal printing paper samples; and polycyclic aromatic hydrocarbons, carcinogenic compounds formed from incomplete combustion of fuels, from contaminated soil. The process is irreversible and the polymers are biodegradable, minimizing the risks of leaving toxic secondary products to persist in, say, a body of water.
“Once they switch to this macro situation where they’re big clumps,” Bertrand says, “you won’t be able to bring them back to the nano state again.” The fundamental breakthrough, according to the researchers, was confirming that small molecules do indeed adsorb passively onto the surface of nanoparticles. “To the best of our knowledge, it is the first time that the interactions of small molecules with pre-formed nanoparticles can be directly measured,” they write in Nature Communications. Even more exciting, they say, is the wide range of potential uses, from environmental remediation to medical analysis. The polymers are synthesized at room temperature, and don’t need to be specially prepared to target specific compounds; they are broadly applicable to all kinds of hydrophobic chemicals and molecules. “The interactions we exploit to remove the pollutants are non-specific,” Brandl says. “We can remove hormones, BPA, and pesticides that are all present in the same sample, and we can do this in one step.” And the nanoparticles’ high surface-area-to-volume ratio means that only a small amount is needed to remove a relatively large quantity of pollutants. The technique could thus offer potential for the cost-effective cleanup of contaminated water and soil on a wider scale. “From the applied perspective, we showed in a system that the adsorption of small molecules on the surface of the nanoparticles can be used for extraction of any kind,” Bertrand says. “It opens the door for many other applications down the line.” This approach could possibly be further developed, he speculates, to replace the widespread use of organic solvents for everything from decaffeinating coffee to making paint thinners. Bertrand cites DDT, banned for use as a pesticide in the U.S. since 1972 but still widely used in other parts of the world, as another example of a persistent pollutant that could potentially be remediated using these nanomaterials. 
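The surface-area-to-volume argument above is simple geometry: for a sphere of radius r, area/volume = 3/r, so at a fixed total amount of polymer the available adsorption surface grows as the particles shrink. A quick sketch (the particle sizes here are illustrative, not taken from the paper):

```python
def total_surface_area(total_volume_nm3, radius_nm):
    """Total surface area of monodisperse spheres holding a fixed total volume.
    Per sphere A = 4*pi*r^2 and V = (4/3)*pi*r^3, so A/V = 3/r and the
    total area is 3 * V_total / r: it grows as the particles shrink."""
    return 3.0 * total_volume_nm3 / radius_nm

v_total = 1e9  # one cubic micrometre of polymer, expressed in nm^3
gain = total_surface_area(v_total, 50.0) / total_surface_area(v_total, 5000.0)
print(gain)  # 100.0 -- a hundredfold more surface from 100x smaller particles
```

The same fixed mass of polymer therefore captures far more pollutant when divided into nanoparticles than into micrometre-scale beads.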
“And for analytical applications where you don’t need as much volume to purify or concentrate, this might be interesting,” Bertrand says, offering the example of a cheap testing kit for urine analysis of medical patients. The study also suggests the broader potential for adapting nanoscale drug-delivery techniques developed for use in environmental remediation. “That we can apply some of the highly sophisticated, high-precision tools developed for the pharmaceutical industry, and now look at the use of these technologies in broader terms, is phenomenal,” says Frank Gu, an assistant professor of chemical engineering at the University of Waterloo in Canada, and an expert in nanoengineering for health care and medical applications. “When you think about field deployment, that’s far down the road, but this paper offers a really exciting opportunity to crack a problem that is persistently present,” says Gu, who was not involved in the research. “If you take the normal conventional civil engineering or chemical engineering approach to treating it, it just won’t touch it. That’s where the most exciting part is.”
Ali A.F.,Florida State University |
Ali A.F.,Center for Fundamental Physics |
Ali A.F.,Benha University |
Faizal M.,University of Waterloo |
Khalil M.M.,Alexandria University
Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics | Year: 2015
In this paper, we investigate the effect of Planckian deformation of quantum gravity on the production of black holes at colliders using the framework of gravity's rainbow. We demonstrate that a black hole remnant exists for Schwarzschild black holes in higher dimensions using gravity's rainbow. The mass of this remnant is found to be greater than the energy scale at which experiments were performed at the LHC. We propose this as a possible explanation for the absence of black holes at the LHC. Furthermore, we demonstrate that it is possible for black holes in six (and higher) dimensions to be produced at energy scales that will be accessible in the near future. © 2015 The Authors.
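For readers unfamiliar with gravity's rainbow, the framework replaces the standard relativistic dispersion relation with an energy-dependent one; a commonly used form (the exact rainbow functions vary between papers, so the choice below is illustrative rather than the one necessarily used here) is

```latex
E^{2} f\!\left(E/E_{P}\right)^{2} - p^{2} g\!\left(E/E_{P}\right)^{2} = m^{2},
\qquad
f\!\left(E/E_{P}\right) = 1,
\quad
g\!\left(E/E_{P}\right) = \sqrt{1 - \eta \left(E/E_{P}\right)^{n}},
```

where E_P is the Planck energy and η, n are free parameters; the energy dependence of g is what modifies black-hole thermodynamics near the Planck scale and can leave a remnant mass.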
Vaccaro A.,University of Sannio |
Canizares C.A.,University of Waterloo |
Bhattacharya K.,University of Waterloo
IEEE Transactions on Power Systems | Year: 2013
This paper presents a novel framework based on range arithmetic for solving power flow problems whose input data are specified within real compact intervals. Reliable interval bounds are computed for the power flow problem, which is represented as an optimization model with complementary constraints to properly represent generator bus voltage controls, including reactive power limits and voltage recovery processes. It is demonstrated that the lower and upper bounds of the power flow solutions can be obtained by solving two determinate optimization problems. Several numerical results are presented and discussed, demonstrating the effectiveness of the proposed methodology and comparing it to a previously proposed affine arithmetic based solution approach. © 2012 IEEE.
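The idea of bounding a power flow solution over interval data can be illustrated on a toy two-bus system (a monotonicity-based sketch only, not the paper's range-arithmetic optimization formulation): the transfer P = V1·V2·sin(δ)/X is monotone increasing in each argument for δ in [0, π/2], so its extrema over interval inputs sit at the interval endpoints.

```python
import math

def p_flow(v1, v2, delta, x):
    """Active power over a lossless line: P = V1 * V2 * sin(delta) / X."""
    return v1 * v2 * math.sin(delta) / x

def p_bounds(v1_iv, v2_iv, delta_iv, x):
    """Lower/upper bounds of P over interval inputs (delta within [0, pi/2]).
    P is monotone increasing in each argument there, so the extrema are
    attained at the interval endpoints."""
    lo = p_flow(v1_iv[0], v2_iv[0], delta_iv[0], x)
    hi = p_flow(v1_iv[1], v2_iv[1], delta_iv[1], x)
    return lo, hi

# Bus voltages known to within +/-5% of nominal, angle between 10 and 20 degrees
lo, hi = p_bounds((0.95, 1.05), (0.95, 1.05),
                  (math.radians(10.0), math.radians(20.0)), x=0.1)
print(lo, hi)  # every power flow consistent with the data lies in [lo, hi]
```

The paper's contribution is precisely to obtain such guaranteed bounds for full nonlinear power flow models, where simple endpoint evaluation no longer suffices.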
Zeng T.,University of Waterloo |
Li H.,Jilin University |
Roy P.-N.,University of Waterloo
Journal of Physical Chemistry Letters | Year: 2013
We present the first simulation study of bosonic clusters doped with an asymmetric top molecule. The path-integral Monte Carlo method with the latest methodological advance in treating rigid-body rotation [Noya, E. G.; Vega, C.; McBride, C. J. Chem. Phys. 2011, 134, 054117] is employed to study a para-water impurity in para-hydrogen clusters with up to 20 para-hydrogen molecules. The growth pattern of the doped clusters is similar in nature to that of pure clusters. The para-water molecule appears to rotate freely in the cluster. The presence of para-water substantially quenches the superfluid response of para-hydrogen with respect to the space-fixed frame. © 2012 American Chemical Society.
Cisneros G.A.,Wayne State University |
Karttunen M.,University of Waterloo |
Ren P.,University of Texas at Austin |
Sagui C.,North Carolina State University
Chemical Reviews | Year: 2014
Electrostatic interactions are crucial for biomolecular simulations, as their calculation is the most time-consuming when computing the total classical forces, and their representation has profound consequences for the accuracy of classical force fields. Long-range electrostatic interactions are crucial for the stability of proteins, nucleic acids, glycomolecules, lipids, and other macromolecules, and their interactions with solvent, ions, and other molecules. Traditionally, electrostatic interactions have been modeled using a set of fixed atom-centered point charges or partial charges. The most popular methods for extracting charges from molecular wave functions are based on a fitting of the atomic charges to the molecular electrostatic potential (MEP) computed with ab initio or semiempirical methods outside the van der Waals surface. Computationally, the electrostatic potential for a system with explicit solvent is calculated by either solving Poisson's equation or explicitly adding the individual charge potentials.
Bhutta Z.A.,Aga Khan University |
Das J.K.,Aga Khan University |
Rizvi A.,Aga Khan University |
Gaffey M.F.,Hospital for Sick Children |
And 5 more authors.
The Lancet | Year: 2013
Maternal undernutrition contributes to 800000 neonatal deaths annually through small for gestational age births; stunting, wasting, and micronutrient deficiencies are estimated to underlie nearly 3·1 million child deaths annually. Progress has been made with many interventions implemented at scale and the evidence for effectiveness of nutrition interventions and delivery strategies has grown since The Lancet Series on Maternal and Child Undernutrition in 2008. We did a comprehensive update of interventions to address undernutrition and micronutrient deficiencies in women and children and used standard methods to assess emerging new evidence for delivery platforms. We modelled the effect on lives saved and cost of these interventions in the 34 countries that have 90% of the world's children with stunted growth. We also examined the effect of various delivery platforms and delivery options using community health workers to engage poor populations and promote behaviour change, access and uptake of interventions. Our analysis suggests the current total of deaths in children younger than 5 years can be reduced by 15% if populations can access ten evidence-based nutrition interventions at 90% coverage. Additionally, access to and uptake of iodised salt can alleviate iodine deficiency and improve health outcomes. Accelerated gains are possible and about a fifth of the existing burden of stunting can be averted using these approaches, if access is improved in this way. The estimated total additional annual cost involved for scaling up access to these ten direct nutrition interventions in the 34 focus countries is Int$9·6 billion per year. Continued investments in nutrition-specific interventions to avert maternal and child undernutrition and micronutrient deficiencies through community engagement and delivery strategies that can reach poor segments of the population at greatest risk can make a great difference. 
If this improved access is linked to nutrition-sensitive approaches - ie, women's empowerment, agriculture, food systems, education, employment, social protection, and safety nets - they can greatly accelerate progress in countries with the highest burden of maternal and child undernutrition and mortality. © 2013 Elsevier Ltd.
Granek J.A.,York University |
Gorbet D.J.,University of Waterloo |
Sergio L.E.,York University
Cortex | Year: 2010
Using event-related functional magnetic resonance imaging (fMRI), we examined the effect of video-game experience on the neural control of increasingly complex visuomotor tasks. Previously, skilled individuals have demonstrated the use of a more efficient movement control brain network, including the prefrontal, premotor, primary sensorimotor and parietal cortices. Our results extend and generalize this finding by documenting additional prefrontal cortex activity in experienced video gamers planning for complex eye-hand coordination tasks that are distinct from actual video-game play. These changes in activation between non-gamers and extensive gamers are putatively related to the increased online control and spatial attention required for complex visually guided reaching. These data suggest that the basic cortical network for processing complex visually guided reaching is altered by extensive video-game play. © 2009.
Ouyang G.,Sun Yat Sen University |
Vuckovic D.,University of Waterloo |
Pawliszyn J.,University of Waterloo
Chemical Reviews | Year: 2011
Solid-phase microextraction (SPME) approaches have been widely used for invasive and noninvasive studies as a simple, miniaturized, fast, and environmentally friendly sampling and sample preparation technique. SPME is a solvent-free sample preparation technique that combines sampling, analyte isolation, and enrichment into one step. In vivo analysis is a special application area where SPME is gaining ground because of its unique format and convenient device design. SPME can be performed using three basic extraction modes: direct extraction, headspace extraction, and membrane-protected extraction. SPME eliminates or minimizes the use of organic solvents and integrates sampling and sample preparation, and therefore substantially reduces the total time and cost of analysis. To understand the kinetics of the SPME process, the Prandtl boundary layer model can be used to simplify the corresponding equations. The performance of SPME is critically dependent on the properties of the extraction phase, which determine the selectivity and the reliability of the method.
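The equilibrium theory behind SPME can be summarized in one equation (standard in the SPME literature; the symbols follow the usual convention rather than any formula quoted in this abstract). The amount of analyte extracted by the coating at equilibrium is

```latex
n = \frac{K_{fs}\, V_{f}\, V_{s}\, C_{0}}{K_{fs}\, V_{f} + V_{s}},
```

where K_fs is the fiber/sample distribution constant, V_f and V_s the fiber-coating and sample volumes, and C_0 the initial analyte concentration. For V_s ≫ K_fs·V_f this reduces to n ≈ K_fs·V_f·C_0, which is what makes the extracted amount largely independent of sample volume and suits SPME to field and in vivo sampling.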
Onorato M.,University of Turin |
Onorato M.,National Institute of Nuclear Physics, Italy |
Residori S.,University of Nice Sophia Antipolis |
Bortolozzo U.,University of Nice Sophia Antipolis |
And 3 more authors.
Physics Reports | Year: 2013
“Rogue waves” is the name given by oceanographers to isolated large-amplitude waves that occur more frequently than expected for normal, Gaussian-distributed statistical events. Rogue waves are ubiquitous in nature and appear in a variety of different contexts. Besides water waves, they have recently been reported in liquid helium, in nonlinear optics, microwave cavities, etc. The first part of the review is dedicated to rogue waves in the oceans and to their laboratory counterpart, with experiments performed in water basins. Most of the work and interpretation of the experimental results will be based on the nonlinear Schrödinger equation, a universal model that governs the dynamics of weakly nonlinear, narrow-band surface gravity waves. Then, we present examples of rogue waves occurring in different physical contexts and we discuss the related anomalous statistics of the wave amplitude, which deviates from the Gaussian behavior that would be expected for random waves. The third part of the review is dedicated to optical rogue waves, with examples taken from supercontinuum generation in photonic crystal fibers, laser fiber systems, and two-dimensional spatiotemporal systems. In particular, the extreme waves observed in a two-dimensional spatially extended optical cavity allow us to introduce a description based on two essential conditions for the generation of rogue waves: nonlinear coupling and nonlocal coupling. The first requirement is needed in order to introduce an elementary size, such as that of the solitons or breathers, whereas the second requirement implies inhomogeneity, a mechanism needed to produce the events of mutual collisions and mutual amplification between the elementary solitons or wavepackets. The concepts of "granularity" and "inhomogeneity" as joint generators of optical rogue waves are introduced on the basis of a linear experiment.
By extending these concepts to other systems, rogue waves can be classified as phenomena occurring in the presence of many uncorrelated "grains" of activity inhomogeneously distributed in large spatial domains, the "grains" being of linear or nonlinear origin, as in the case of wavepackets or solitons. © 2013 Elsevier B.V.
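The nonlinear Schrödinger equation referred to throughout the review, in its standard one-dimensional focusing, dimensionless form, together with a frequently cited rogue-wave prototype, the Peregrine breather, reads

```latex
i\,\psi_{t} + \tfrac{1}{2}\,\psi_{xx} + |\psi|^{2}\psi = 0,
\qquad
\psi_{\mathrm{Peregrine}}(x,t) = \left[\,1 - \frac{4\,(1 + 2 i t)}{1 + 4x^{2} + 4t^{2}}\,\right] e^{i t}.
```

The Peregrine solution is localized in both space and time and reaches three times the background amplitude at its peak, which is why it is often taken as the minimal mathematical model of a wave that "appears from nowhere and disappears without a trace."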
Kelly A.C.,University of Waterloo |
Carter J.C.,Memorial University of Newfoundland |
Borairi S.,York University
International Journal of Eating Disorders | Year: 2014
Compassion-focused therapy (CFT; Gilbert, 2005, 2009) is a transdiagnostic treatment approach focused on building self-compassion and reducing shame. It is based on the theory that feelings of shame contribute to the maintenance of psychopathology, whereas self-compassion contributes to the alleviation of shame and psychopathology. We sought to test this theory in a transdiagnostic sample of eating disorder patients by examining whether larger improvements in shame and self-compassion early in treatment would facilitate faster eating disorder symptom remission over 12 weeks. Participants were 97 patients with an eating disorder admitted to specialized day hospital or inpatient treatment. They completed the Eating Disorder Examination-Questionnaire, Experiences of Shame Scale, and Self-Compassion Scale at intake, and again after weeks 3, 6, 9, and 12. Multilevel modeling revealed that patients who experienced greater decreases in their level of shame in the first 4 weeks of treatment had faster decreases in their eating disorder symptoms over 12 weeks of treatment. In addition, patients who had greater increases in their level of self-compassion early in treatment had faster decreases in their feelings of shame over 12 weeks, even when controlling for their early change in eating disorder symptoms. These results suggest that CFT theory may help to explain the maintenance of eating disorders. Clinically, findings suggest that intervening with shame early in treatment, perhaps by building patients' self-compassion, may promote better eating disorders treatment response. (Int J Eat Disord 2014; 47:54-64) Copyright © 2013 Wiley Periodicals, Inc.
Agency: GTR | Branch: AHRC | Program: | Phase: Research Grant | Award Amount: 26.23K | Year: 2016
In recent years we have all become familiar with the notion of information overload, the digital deluge, the information explosion, and numerous variations on this idea. At the heart of this phenomenon is the growth of born-digital big data, a term which encompasses everything from aggregated tweets and Facebook posts to government emails, from the live and archived web to data generated by wearable and household technology. While there has been a growing interest in big data and the humanities in recent years, as exhibited notably in the AHRC's Digital Transformations theme, most academic research in this area has been undertaken by computer scientists and in emerging fields such as social informatics. As yet, there has been no systematic investigation of how humanities researchers are engaging with this new type of primary source, of what tools and methods they might require in order to work more effectively with big data in the future, and of what might constitute a specifically humanities approach to big data research. What kinds of questions will this data allow us to ask and answer? How can we ensure that this material is collected and preserved in such a way that it meets the requirements of humanities researchers? What insights can scholars in the humanities learn from ground-breaking work in the computer and social sciences, and from the archives and libraries that are concerned with securing all of this information? The proposed research Network will bring together researchers and practitioners from all of these stakeholder groups, to discern if there is a genuine humanities approach to born-digital big data, and to establish how this might inform, complement and draw on other disciplines and practices.
Over the course of three workshops, one to be held at The National Archives in Kew, one at the Institute of Historical Research, University of London, and one at the University of Cambridge, the Network will address the current state of the field; establish the most appropriate tools and methods for humanities researchers for whom born-digital material is an important primary source; discuss the ways in which researchers and archives can work together to facilitate big data research; identify the barriers to engagement with big data, particularly in relation to skills; and work to build an engaged and lasting community of interest. The focus of the Network will be on history, but it will also encompass other humanities and social science disciplines. It will also include representatives of non-humanities disciplines, for example the computer, social and information sciences. Cross-disciplinary approaches and collaborative working are essential in such a new and complex area of investigation, and the Network relates to the current highlight notice encouraging the exploration of innovative areas of cross-disciplinary enquiry. While there has for some time been a recognition of the value of greater engagement between researchers in the humanities and the sciences in the development of new approaches to and understandings of born-digital big data, only very tentative first steps have been made towards realising this aim (for example forthcoming activity organised by the Turing Institute). The Network will provide a forum from which to launch precisely this kind of cross-disciplinary discussion, defining a central role for the humanities. During the 12 months of the project all members of the Network will contribute to a web resource, which will present key themes and ideas to both an academic and wider audience of the interested general public. 
External experts from government, the media and other relevant sectors will also be invited to contribute, to ensure that the Network takes account of a range of opinions and needs. The exchange of knowledge and experience that takes place at the workshops will also be distilled into a white paper, which will be published under a CC-BY licence in month 12 of the Network.
News Article | March 4, 2016
The first major results of the Blue Brain Project, a detailed simulation of a bit of rat neocortex about the size of a grain of coarse sand, were published last year [1]. The model represents 31,000 brain cells and 37 million synapses. It runs on a supercomputer and is based on data collected over 20 years. Furthermore, it behaves just like a speck of brain tissue. But therein, say critics, lies the problem. “It's the best biophysical model we have of any brain, but that's not enough,” says Christof Koch, a neuroscientist at the Allen Institute for Brain Science in Seattle, Washington, which has embarked on its own large-scale brain-modelling effort. The trouble with the model is that it holds no surprises: no higher functions or unexpected features have emerged from it. Some neuroscientists, including Koch, say that this is because the model was not built with a particular hypothesis about cognitive processes in mind. Its success will depend on whether specific questions can be asked of it. The irony, says neuroscientist Alexandre Pouget, is that deriving answers will require drastic simplification of the model, “unless we figure out how to adjust the billions of parameters of the simulations, which would seem to be a challenging problem to say the least”. By contrast, Pouget's group at the University of Geneva, Switzerland, is generating and testing hypotheses on how the brain deals with uncertainty in functions such as attention and decision-making. There is a widespread preference for hypothesis-driven approaches in the brain-modelling community. Some models might be very small and detailed, for example, focusing on a single synapse. Others might explore the electrical spiking of whole neurons, the communication patterns between brain areas, or even attempt to recapitulate the whole brain. But ultimately a model needs to answer questions about brain function if we are to advance our understanding of cognition.
Blue Brain is not the only sophisticated model to have hit the headlines in recent years. In late 2012, theoretical neuroscientist Chris Eliasmith at the University of Waterloo in Canada unveiled Spaun, a whole-brain model that contains 2.5 million neurons (a fraction of the human brain's estimated 86 billion). Spaun has a digital eye and a robotic arm, and can reason through eight complex tasks such as memorizing and reciting lists, all of which involve multiple areas of the brain [2]. Nevertheless, Henry Markram, a neurobiologist at the Swiss Federal Institute of Technology in Lausanne who is leading the Blue Brain Project, noted [3] at the time: “It is not a brain model.” Although Markram's dismissal of Spaun amused Eliasmith, it did not surprise him. Markram is well known for taking a different approach to modelling, as he did in the Blue Brain Project. His strategy is to build in every possible detail to derive a perfect imitation of the biological processes in the brain with the hope that higher functions will emerge — a 'bottom-up' approach. Researchers such as Eliasmith and Pouget take a 'top-down' strategy, creating simpler models based on our knowledge of behaviour. These skate over certain details, instead focusing on testing hypotheses about brain function. Rather than dismiss the criticism, Eliasmith took Markram's comment on board and added bottom-up detail to Spaun. He selected a handful of frontal cortex neurons, which were relatively simple to begin with, and swapped them for much more complicated neurons — ones that account for multiple ion channels and changes in electrical activity over time. Although these complicated neurons were more biologically realistic, Eliasmith found that they brought no improvement to Spaun's performance on the original eight tasks. “A good model doesn't introduce complexity for complexity's sake,” he says.
For many years, computational models of the brain were what theorists call unconstrained: there were not enough experimental data to map onto the models or to fully test them. For instance, scientists could record electrical activity, but from only one neuron at a time, which limited their ability to represent neural networks. Back then, brain models were simple out of necessity. In the past decade, an array of technologies has provided more information. Imaging technology has revealed previously hidden parts of the brain. Researchers can control genes to isolate particular functions. And emerging statistical methods have helped to describe complex phenomena in simpler terms. These techniques are feeding newer generations of models. Nevertheless, most theorists think that a good model includes only the details needed to help answer a specific question. Indeed, one of the most challenging aspects of model building is working out which details are important to include and which are acceptable to ignore. “The simpler the model is, the easier it is to analyse and understand, manipulate and test,” says cognitive and computational neuroscientist Anil Seth of the University of Sussex in Brighton, UK. An oft-cited success in theoretical neuroscience is the Reichardt detector — a simple, top-down model for how the brain senses motion — proposed by German physicist Werner Reichardt in the 1950s. “The big advantage of the Reichardt model for motion detection was that it was an algorithm to begin with,” says neurobiologist Alexander Borst of the Max Planck Institute of Neurobiology in Martinsried, Germany. “It doesn't speak about neurons at all.” When Borst joined the Max Planck Society in the mid-1980s, he ran computational simulations of the Reichardt model, and got surprising results.
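The Reichardt detector described above is simple enough to state in a few lines: each of two neighbouring inputs is delayed and multiplied with its neighbour's undelayed signal, and the two mirror-symmetric products are subtracted to give a signed motion estimate. A minimal sketch (stimulus and parameter values are illustrative):

```python
import math

def reichardt(left, right, delay):
    """Minimal Reichardt correlator: multiply each input's delayed copy with
    the neighbouring input's current value, and subtract the mirror-image
    product to obtain a signed, direction-selective output."""
    out = 0.0
    for t in range(delay, len(left)):
        out += left[t - delay] * right[t] - right[t - delay] * left[t]
    return out / (len(left) - delay)

# A sinusoidal luminance pattern drifting "rightward": the right receptor
# sees the left receptor's signal a quarter period later.
period, delay = 40, 10
left = [math.sin(2.0 * math.pi * t / period) for t in range(400)]
right = [math.sin(2.0 * math.pi * (t - period / 4.0) / period) for t in range(400)]
print(reichardt(left, right, delay))   # positive: motion in the preferred direction
```

Reversing the stimulus direction flips the sign of the output, which is the algorithmic content of the model; as Borst notes, it says nothing about neurons at all.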
He found, for instance, that neurons oscillated when first presented with a pattern that was moving at constant velocity — a result that he took to Werner Reichardt, who was also taken aback. “He didn't expect his model to show that,” says Borst. They confirmed the results in real neurons, and continued to refine and expand Reichardt's model to gain insight into how the visual system detects motion. In the realm of bottom-up models, the greatest success has come from a set of equations developed in 1952 to explain how the flow of ions in and out of a nerve cell produces an action potential. These Hodgkin–Huxley equations are “beautiful and inspirational”, says neurobiologist Anthony Zador of Cold Spring Harbor Laboratory in New York, adding that they have allowed many scientists to make predictions about how neuronal excitability works. The equations, or their variants, form some of the basic building blocks of many of today's larger brain models of cognition. Although many theoretical neuroscientists do not see value in pure bottom-up approaches such as that taken by the Blue Brain Project, they do not dismiss bottom-up models entirely. These types of data-driven brain simulations have the benefit of reminding model-builders what they do not know, which can inspire new experiments. And top-down approaches can often benefit from the addition of more detail, says theoretical neuroscientist Peter Dayan of the Gatsby Computational Neuroscience Unit at University College London. “The best kind of modelling is going top-down and bottom-up simultaneously,” he says. Borst, for example, is now approaching the Reichardt detector from the bottom up to explore questions such as how neurotransmitter receptors on motion-sensitive neurons interact. And Eliasmith's more complex Spaun has allowed him to do other types of experiment that he couldn't before — in particular, he can now mimic the effect of sodium-channel blockers on the brain.
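The Hodgkin–Huxley equations mentioned above are compact enough to integrate directly; below is a minimal forward-Euler sketch using the classic squid-axon parameter fit (the stimulus current, step size, and duration are illustrative choices, not values from any model discussed in the article):

```python
import math

# Classic Hodgkin-Huxley squid-axon parameters
# (units: mV, ms, mS/cm^2, uA/cm^2, uF/cm^2)
C, G_NA, G_K, G_L = 1.0, 120.0, 36.0, 0.3
E_NA, E_K, E_L = 50.0, -77.0, -54.387

# Voltage-dependent gating rate functions (limit values guard the 0/0 points)
def a_m(v): return 1.0 if v == -40.0 else 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
def b_m(v): return 4.0 * math.exp(-(v + 65.0) / 18.0)
def a_h(v): return 0.07 * math.exp(-(v + 65.0) / 20.0)
def b_h(v): return 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
def a_n(v): return 0.1 if v == -55.0 else 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
def b_n(v): return 0.125 * math.exp(-(v + 65.0) / 80.0)

def simulate(i_ext=10.0, t_max=50.0, dt=0.01):
    """Forward-Euler integration of the Hodgkin-Huxley membrane equations."""
    v = -65.0                          # resting potential, mV
    m = a_m(v) / (a_m(v) + b_m(v))     # gating variables start at their
    h = a_h(v) / (a_h(v) + b_h(v))     # steady-state values
    n = a_n(v) / (a_n(v) + b_n(v))
    trace = []
    for _ in range(int(t_max / dt)):
        i_ion = (G_NA * m**3 * h * (v - E_NA)
                 + G_K * n**4 * (v - E_K)
                 + G_L * (v - E_L))
        v += dt * (i_ext - i_ion) / C
        m += dt * (a_m(v) * (1.0 - m) - b_m(v) * m)
        h += dt * (a_h(v) * (1.0 - h) - b_h(v) * h)
        n += dt * (a_n(v) * (1.0 - n) - b_n(v) * n)
        trace.append(v)
    return trace

trace = simulate()  # suprathreshold stimulus: membrane potential spikes past 0 mV
```

Variants of exactly these four coupled equations, one for voltage and one per gating variable, are the building blocks that larger conductance-based brain models stack up by the thousands.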
Also taking a multiscale approach is neuroscientist Xiao-Jing Wang of New York University Shanghai in China, whose group described a large-scale model of the interaction of circuits across different regions of the macaque brain [4]. The model is built, in part, from his previous, smaller models of local neuronal circuits that show how neurons in a group fire in time. To scale up to the entire brain, Wang had to include the strength of the feedback between areas. Only now has he got the right data — thanks to the burgeoning field of connectomics (the study of connection maps within an organism's nervous system) — to build in this important detail, he says. Wang is using his model to study decision-making, the integration of sensory information and other cognitive processes. In physics, the marriage between experiment and theory led to the development of unifying principles. And although neuroscientists might hope for a similar revelation in their field, the brain (and biology in general) is inherently more noisy than a physical system, says computational neuroscientist Gustavo Deco of the Pompeu Fabra University in Barcelona, Spain, who is an investigator on the Human Brain Project. Deco points out that equations describing the behaviour of neurons and synapses are non-linear, and neurons are connected in a variety of ways, interacting in both a feedforward and a feedback manner. That said, there are examples of theory allowing neuroscientists to extract general principles, such as how the brain balances excitation and inhibition, and how neurons fire in synchrony, Wang says. Complex neuroscience often requires huge computational resources. But it is not a want of supercomputers that limits good, theory-driven models. “It is a lack of knowledge about experimental facts. We need more facts and maybe more ideas,” Borst says.
Those who crave vast amounts of computer power misunderstand the real challenge facing scientists who are trying to unravel the mysteries of the brain, Borst contends. “I still don't see the need for simulating one million neurons simultaneously in order to understand what the brain is doing,” he says, referring to the large-scale simulation linked with the Human Brain Project. “I'm sure we can reduce that to a handful of neurons and get some ideas.” Computational neuroscientist Andreas Herz, of the Ludwig-Maximilians University in Munich, Germany, agrees. “We make best progress if we focus on specific elements of neural computation,” he says. For example, a single cortical neuron receives input from thousands of other cells, but it is unclear how it processes this information. “Without this knowledge, attempts to simulate the whole brain in a seemingly biologically realistic manner are doomed to fail,” he adds. At the same time, supercomputers do allow researchers to build details into their models and see how they compare to the originals, as with Spaun. Eliasmith has used Spaun and its variations to see what happens when he kills neurons or tweaks other features to investigate ageing, motor control or stroke damage in the brain. For him, adding complexity to a model has to serve a purpose. “We need to build bigger and bigger models in every direction, more neurons and more detail,” he says. “So that we can break them.”
News Article | December 6, 2016
In 1905, a 26-year-old Albert Einstein changed physics forever when he outlined his theory of special relativity. The theory describes the relationship between space and time and is founded on two fundamental assumptions: the laws of physics are the same for all non-accelerating observers, and the speed of light in a vacuum is always the same. Over the last century, Einstein's theories of relativity (both special and general) have withstood the trials of experimental verification and been used to explain a number of physical processes, including the origins of our universe. But in the late 1990s, a handful of physicists challenged one of the fundamental assumptions underlying Einstein's theory of special relativity: Instead of the speed of light being constant, they proposed that light was faster in the early universe than it is now. This theory of the variable speed of light was—and still is—controversial. But according to a new paper published in November in the physics journal Physical Review D, it could be experimentally tested in the near future. If the experiments validate the theory, it would mean that the laws of nature weren't always the same as those we experience today, and Einstein's theory of gravity would require serious revision. "The whole of physics is predicated on the constancy of the speed of light," Joao Magueijo, a cosmologist at Imperial College London and pioneer of the theory of variable light speed, told Motherboard. "So we had to find ways to change the speed of light without wrecking the whole thing too much." According to Magueijo, the variable speed of light (VSL) theory emerged as a solution to a longstanding inconsistency in cosmology known as "the horizon problem," which arises when the speed of light is considered to be a constant.
If light has an invariable speed limit, then that means that since the Big Bang it could only have traveled approximately 13.7 billion light years, because approximately 13.7 billion years have elapsed since the Big Bang. The distance that light has been able to travel since the Big Bang creates the 'horizon' of the visible universe—this is about 47 billion light years (although light has only been traveling for 13.7 billion years, this number takes into account the expansion of space that is occurring as light is traveling). So imagine sitting in the center of a sphere (the universe) with a diameter of 47 billion light years. The edge of this sphere, aka the horizon of the universe, is the cosmic microwave background (CMB)—radiation from about 400,000 years after the Big Bang and our earliest snapshot of the universe—and no matter where you are in the universe, when you observe the CMB today it is 13.7 billion light years distant. Here's where the problem arises: although any point in the universe is always 13.7 billion light years from the cosmic microwave background, the distance separating one side of the horizon of the cosmic microwave background from the other (let's call this the "diameter" of the universe) is approximately 27.4 billion light years. In other words, the universe is too large to have allowed light to travel from one end to the other during its existence, which is necessary to account for the homogeneity observed in the CMB. When cosmologists observe the cosmic microwave background it is remarkably uniform: its temperature is approximately -270°C no matter where it is measured, with only minuscule variance (one part in 100,000). Yet if light, the fastest "thing" in the universe, isn't able to travel from one side of the universe to the other over the course of the universe's entire existence, this uniformity observed in the CMB would be impossible.
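The arithmetic behind the horizon problem, using the article's own figures, fits in a few lines. This is only a back-of-envelope restatement of the numbers quoted above:

```python
# Back-of-envelope check of the horizon problem, using the article's figures.
age_gyr = 13.7                               # time since the Big Bang, billions of years
light_travel_gly = age_gyr * 1.0             # light-travel distance, billions of light years
cmb_separation_gly = 2 * light_travel_gly    # opposite sides of the CMB "sphere"

print(cmb_separation_gly)                    # 27.4
# The separation exceeds the distance light could have covered, so the two
# sides of the CMB were never in causal contact without inflation (or VSL).
print(cmb_separation_gly > light_travel_gly)
```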
To understand why this is the case, imagine a bathtub with a faucet at either end, one spigot producing cold water, the other hot. If you turn both of these faucets off, eventually the water in the bathtub will reach a uniform temperature as the hot and cold water mix. But if, while the faucets are running, you stretch the tub out in every direction so fast that the hot and cold water never meet, one side of the tub will always be far hotter than the other instead of a single uniform temperature. This is what happened during the Big Bang, except that rather than seeing parts of the early universe in the CMB that are far hotter or cooler than other parts, it's perfectly uniform. So what gives? The most widely accepted resolution to the horizon problem is called inflation, which basically states that the uniformity we observe in the CMB occurred while the universe was still incredibly small and dense, and it maintained this uniformity while it expanded. In this analogy, the hot and cold bath water reached a uniform temperature before the bathtub started its incredibly fast expansion in every direction. Although this inflationary theory preserves a constant speed of light, it also requires accepting the existence of an "inflation field," which only existed during a brief period of time in the early universe. According to proponents of variable light speed, however, this problem can be solved without recourse to inflation if the speed of light was significantly higher in the early universe. This would allow the distant edges of the universe to remain "connected" as the universe expanded and would account for the observed uniformity in the CMB. Yet for theoretical physicists who subscribe to the inflationary model of the universe, allowing for variable light speed instead of constant light speed is a way of "flipping the sign" of a fundamental term in the theory of special relativity.
"In most cases, flipping such a sign is a recipe for certain disaster as the resulting theory would cease to be physically and internally consistent," David Marsh, a senior research fellow at the Center for Theoretical Cosmology who was not involved with the paper, told Motherboard. "Afshordi and Magueijo have addressed some of the challenges coming with this sign flip, but it appears that much work remains in establishing that the model is theoretically healthy. If that can be done, this model may have a host of far-reaching consequences also for the rest of physics beyond cosmology." So just how much faster was light speed just after the Big Bang? According to Magueijo and his colleague Niayesh Afshordi, an associate professor of physics and astronomy at the University of Waterloo, the answer is "infinitely" faster. The duo cite light speed as being at least 32 orders of magnitude faster than its currently accepted speed of 300 million meters per second—this is merely the lower bound of the faster light speed, however. As you get closer to the Big Bang, the speed of light approaches infinity. On this view, the speed of light was faster because the universe was incredibly hot at the beginning. According to Afshordi, their theory requires that the early universe was at least a toasty 10²⁸ degrees Celsius (to put this in perspective, the highest temperature we are capable of realizing on Earth is about 10¹⁶ degrees Celsius, a full 12 orders of magnitude cooler). As the universe expanded and cooled below this temperature, light underwent a phase shift—much like liquid water changes into ice once the temperature reaches a certain threshold—and arrived at the speed we know today: 300 million meters per second. Just like ice won't get more "icy" the colder the temperature gets, the speed of light has not been slowing down since it reached 300 million meters per second.
If Magueijo and Afshordi's theory of variable light speed is correct, then the speed of light decreased in a predictable way—which means that with sensitive enough instruments, this light speed decay can be measured. And that's exactly what they did in their latest paper. According to Afshordi, galaxies and other structures in the universe were only possible due to fluctuations in the early universe's density. These density fluctuations are recorded in the cosmic microwave background as a "spectral index," which might be imagined as the "color" of the early universe. The neutral baseline of the spectral index is a value of 1, which would be a universe with the same magnitude of gravitational fluctuations on all scales. Above this value the universe is "blue" (representing shorter-wavelength fluctuations); below it, the universe is "red" (representing longer-wavelength fluctuations). Although the inflationary model of the universe also predicts a "red" spectral index, it is unable to calculate a precise value of the index, and thus the exact gravitational fluctuations in the early universe. In their new paper, Magueijo and Afshordi pegged the spectral index at a value of 0.96478, just slightly red, which is two orders of magnitude more precise than current measurements of the spectral index (about 0.968). Now that they've used the variable light speed theory to put a hard number on the spectral index, all that remains to be seen is whether increasingly sensitive experiments probing the CMB and the distribution of galaxies will verify or overturn their theory. Both Magueijo and Afshordi expect these results to be available at some point in the next decade. But Marsh and other physicists aren't so sure. "Compared to inflation, Afshordi and Magueijo's model is at the present very complicated and poorly understood," Marsh said.
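The role of the spectral index can be made concrete with the standard primordial power-spectrum parameterization P(k) ∝ k^(n_s − 1). The sketch below only illustrates the red/blue terminology; the two wavenumbers are arbitrary illustrative values, not scales from the paper:

```python
def tilt(n_s):
    """Classify the primordial spectral tilt: n_s = 1 is scale-invariant,
    n_s < 1 is 'red' (more power on large scales), n_s > 1 is 'blue'."""
    if n_s < 1.0:
        return "red"
    if n_s > 1.0:
        return "blue"
    return "scale-invariant"

def power_ratio(k1, k2, n_s):
    # Primordial power spectrum P(k) proportional to k**(n_s - 1):
    # ratio of fluctuation power between two wavenumbers k1 and k2.
    return (k1 / k2) ** (n_s - 1.0)

print(tilt(0.96478))                       # the value Magueijo and Afshordi predict
print(power_ratio(0.002, 0.05, 0.96478))   # > 1: more power at the larger scale
```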
"However, the understanding of inflation has developed over 35 years and there are still important open theoretical questions to address in that framework. It is certainly possible that given more time and research, the theoretical setting of this model will be much better understood and its predictions may appear more elegant." If their theory is correct, it will overturn one of the main axioms underlying Einstein's theory of special relativity and force physicists to reconsider the nature of gravity. According to Afshordi, however, it is more or less accepted in the physics community that Einstein's theory of gravity cannot be the whole story, and that a quantum theory of gravity will come to replace it. There are a number of competing quantum gravity theories, but if the variable light speed theory outlined in this paper is proven correct, it will significantly narrow the range of plausible theories of quantum gravity. "If you really want to open up quantum gravity to observation, you're better off without this idea of inflation," said Magueijo. "Inflation leaves fundamental physics completely untouched [and] is a mechanism of insulating the observable universe from physics beyond relativity. Varying speed of light is going back to the foundations of physics and saying perhaps there are things beyond relativity. This is the best position to open up new ideas and new theories." Correction: This story originally stated that the "horizon" of the visible universe was 13.7 billion light years, when in fact it is 47 billion light years. We've updated the reference and regret the error.
News Article | November 8, 2016
OTTAWA, ON--(Marketwired - November 08, 2016) - C-COM Satellite Systems Inc., (TSX VENTURE: CMI) the leading global provider of mobile auto-deploying satellite antenna systems, announced today that it has developed a patent-pending technology to be used with its next generation in-motion phased array antennas. This technology has been developed in partnership with the University of Waterloo under the guidance of Dr. Safieddin (Ali) Safavi-Naeini, director of the Centre for Intelligent Antenna and Radio Systems (CIARS). The new patent-pending method for calibrating a phased array antenna is expected to be used in low-profile two-way phased-array antenna systems for land-mobile satellite communications. "This newly invented technique provides a faster and much lower-cost calibration process which can be easily integrated with the phased-array system, thus eliminating costly system calibration during manufacturing (production phase)," said Dr. Safieddin Safavi-Naeini, a professor at the Department of Electrical and Computer Engineering at the University of Waterloo. "The main advantage of this method is that it significantly reduces the calibration time and enhances its accuracy," continued Dr. Safavi-Naeini. "Another remarkable advantage of this new patent-pending design is hardware simplicity and its integration into the phased-array system - the entire calibration can be performed during system initialization in the field. This new calibration solution can also extract critical geometrical parameters of the system and identify mechanical misalignment errors." "This novel method provides a very practical solution in terms of product reliability. Calibration can be performed in the field at any time without requiring the antenna to be shipped back to the equipment provider," said Bilal Awada, Chief Technology Officer of C-COM Satellite Systems Inc.
"As a research-intensive institution, we encourage industry collaboration as a means to advance technology through a mutually-beneficial partnership," said Dave Dietz, Director of Research for the Faculty of Engineering. "Through the support of C-COM, Prof. Safavi-Naeini's research team continues to advance science and innovation in the field of satellite communications." "We look forward to continuing our collaboration with the University of Waterloo and supporting the research on this unique Ka-band in-motion antenna technology," said Leslie Klein, President and CEO of C-COM Satellite Systems Inc. C-COM Satellite Systems Inc. is a leader in the development, manufacture and deployment of commercial grade mobile satellite-based technology for the delivery of two-way high-speed Internet, VoIP and Video services into vehicles. C-COM has developed a number of proprietary mobile auto-deploying (iNetVu®) antennas that deliver broadband over satellite into vehicles while stationary, virtually anywhere one can drive. The iNetVu® Mobile antennas have also been adapted to be airline checkable and easily transportable. More than 7,000 C-COM antennas have been deployed in 103 countries around the world in vertical markets such as Oil & Gas Exploration, Military Communications, Disaster Management, SNG, Emergency Communications, Cellular Backhaul, Telemedicine, Mobile Banking, and others. The Company's satellite-based products are known worldwide for their high quality, reliability and cost-effectiveness. C-COM is also involved in the design and development of a new generation of Ka-band (communications on the move) antennas, which will deliver satellite broadband solutions into vehicles while in motion. More information is available at: www.c-comsat.com iNetVu® is a registered trademark of C-COM Satellite Systems Inc.
Neither TSX Venture Exchange nor its Regulation Services Provider (as that term is defined in the policies of the TSX Venture Exchange) accepts responsibility for the adequacy or accuracy of this release.
News Article | March 2, 2017
A new study, published today in the Canadian Journal of Civil Engineering, presents a risk-based approach for classifying the road surface conditions of a highway network under winter weather events. This approach includes an explicit account of the driving risk that a motorist may experience on a highway. In countries like Canada that have severe winter seasons, transportation agencies often face challenges in meeting the safety and mobility needs of people on the road. To address these challenges, most agencies have a comprehensive winter maintenance program in place that includes policies, best practices, and guidelines for monitoring and reporting of road surface conditions. Typically, road surface condition information is broadcast through a traveler information portal known as the 511 system or through the website of the road agency. However, there is a lack of consistency in defining and determining the winter driving conditions of a highway across different transportation agencies and jurisdictions. Additionally, different terms may represent different levels of travel risk depending on the agency and location. "The main goal of our study is to develop and propose a new approach to road surface condition classification that provides consistency in the communication of the driving risk that a motorist may experience," says Dr. Lalita Thakali, Research Associate at the University of Waterloo. In this study, researchers from the Department of Civil & Environmental Engineering at the University of Waterloo propose a risk-based approach for classifying road surface conditions that could be used for monitoring winter driving conditions and directing winter road maintenance operations. The researchers propose a relative risk index based on risk estimates from a collision model calibrated with detailed hourly data on weather, road surface conditions, traffic, and accidents for a large number of highway sections in Ontario over six winter seasons.
The study proposed two alternative approaches to address the challenge of determining the overall condition of a highway section or route with non-uniform driving conditions. The first approach applies a risk model to estimate the relative increase in risk under specific winter weather and road surface conditions as compared to normal conditions. The second approach involves converting the different classes of road conditions observed on any given route into a single dominant class based on the relative risk between individual classes of road conditions. This could help drivers assess the road conditions of their entire trip or route. "An ideal classification system for the public should be one that is simple, intuitive, and consistent," continues Dr. Thakali. The risk-based approach to road condition classification introduced in this research represents a step closer to such an ideal classification system. Further research could look into the feasibility of developing a universal risk index that is applicable across different regions in Canada. The paper, "A risk-based approach to winter road surface condition classification" by Liping Fu, Lalita Thakali, Tae J. Kwon and Taimur Usman, was published today in the Canadian Journal of Civil Engineering.
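Since the paper's calibrated collision model is not reproduced here, the following Python sketch only illustrates the shape of the two approaches described above: a toy log-linear relative-risk model and a length-weighted "dominant class" conversion. All coefficients, class names, and weights are invented for illustration, not values from the study:

```python
import math

# Hypothetical risk weights per road surface condition class, relative to bare/dry.
RISK_WEIGHT = {"bare": 1.0, "partly snow covered": 2.0, "fully snow covered": 3.5}

def relative_risk(precip_mm_h, rsc_class, coef_precip=0.15):
    """Toy log-linear collision-risk model (first approach, sketched):
    risk relative to bare/dry conditions, scaled by precipitation intensity."""
    return RISK_WEIGHT[rsc_class] * math.exp(coef_precip * precip_mm_h)

def dominant_class(segments):
    """Second approach, sketched: collapse a route with mixed conditions
    into one class via length-weighted average relative risk.
    segments is a list of (condition_class, length_km) pairs."""
    total = sum(length for _, length in segments)
    avg = sum(RISK_WEIGHT[c] * length for c, length in segments) / total
    # Report the class whose risk weight is closest to the route-average risk.
    return min(RISK_WEIGHT, key=lambda c: abs(RISK_WEIGHT[c] - avg))

route = [("bare", 30.0), ("fully snow covered", 10.0)]
print(dominant_class(route))   # a 40 km route collapses to one reported class
```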
More information: Liping Fu et al, A risk-based approach to winter road surface condition classification, Canadian Journal of Civil Engineering (2017). DOI: 10.1139/cjce-2016-0215
Martin-Martinez E.,Institute Fisica Fundamental |
Fuentes I.,University of Nottingham |
Mann R.B.,University of Waterloo
Physical Review Letters | Year: 2011
We show that a detector acquires a Berry phase due to its motion in spacetime. The phase is different in the inertial and accelerated case as a direct consequence of the Unruh effect. We exploit this fact to design a novel method to measure the Unruh effect. Surprisingly, the effect is detectable for accelerations 10⁹ times smaller than previous proposals, sustained only for times of nanoseconds. © 2011 American Physical Society.
Jiang Y.,Wenzhou University |
Jiang Y.,University of Waterloo |
Chen J.Z.Y.,University of Waterloo
Physical Review Letters | Year: 2013
We utilize the wormlike chain model in the framework of the self-consistent field theory to investigate the influence of chain rigidity on the phase diagram of AB diblock copolymers in the full three-dimensional space. We develop an efficient numerical scheme that can be used to calculate the physical properties of ordered microstructures self-assembled from semiflexible block copolymers. The calculation describes the entire physical picture of the phase diagram, crossing from the flexible over to rodlike polymer behavior. © 2013 American Physical Society.
Tripathi R.,University of Waterloo |
Wood S.M.,University of Bath |
Islam M.S.,University of Bath |
Nazar L.F.,University of Waterloo
Energy and Environmental Science | Year: 2013
The Na-ion battery is currently the focus of much research interest due to its cost advantages and the relative abundance of sodium as compared to lithium. Olivine NaMPO4 (M = Fe, Fe0.5Mn0.5, Mn) and layered Na2FePO4F are interesting materials that have been reported recently as attractive positive electrodes. Here, we report their Na-ion conduction behavior and intrinsic defect properties using atomistic simulation methods. In the olivines, Na ion migration is essentially restricted to the [010] direction along a curved trajectory, similar to that of LiMPO4, but with a lower migration energy (0.3 eV). However, Na/M antisite defects are also predicted to have a lower formation energy: the higher probability of tunnel occupation with a relatively immobile M2+ cation-along with a greater volume change on redox cycling-contributes to the poor electrochemical performance of the Na-olivine. Na+ ion conduction in Na2FePO4F is predicted to be two-dimensional (2D) in the interlayer plane with a similar low activation energy. The antisite formation energy is slightly higher; furthermore, antisite occupation would not be predicted to impede transport significantly owing to the 2D pathway. This factor, along with the much lower volume change on redox cycling, is undoubtedly responsible for the better electrochemical performance of the layered structure. Where volume change and structural effects do not incur impediments, Na-ion materials may present significant advantages over their Li counterparts. © 2013 The Royal Society of Chemistry.
Faizal M.,University of Waterloo |
Upadhyay S.,State University of Rio de Janeiro
Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics | Year: 2014
In this paper, we will analyze the ghost condensation in the ABJM theory. We will perform our analysis in N=1 superspace. We show that in the Delbourgo-Jarvis-Baulieu-Thierry-Mieg gauge the spontaneous breaking of BRST symmetry can occur in the ABJM theory. This spontaneous breaking of BRST symmetry is caused by ghost-anti-ghost condensation. We will also show that in the ABJM theory, the ghost-anti-ghost condensates remain present in the modified abelian gauge. Thus, the spontaneous breaking of BRST symmetry in ABJM theory can even occur in the modified abelian gauge. © 2014 The Authors. Published by Elsevier B.V.
Vohanka J.,Masaryk University |
Faizal M.,University of Waterloo
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2015
In this paper, we will analyze three-dimensional supersymmetric Yang-Mills theory coupled to matter fields in SIM(1) superspace formalism. The original theory which is invariant under the full Lorentz group has N=1 supersymmetry. However, when we break the Lorentz symmetry down to SIM(1) group, the SIM(1) superspace will break half the supersymmetry of the original theory. Thus, the resultant theory in SIM(1) superspace will have N=1/2 supersymmetry. This is the first time that N=1 supersymmetry will be broken down to N=1/2 supersymmetry, for a three-dimensional theory, on a manifold without a boundary. This is because it is not possible to use nonanticommutativity to break N=1 supersymmetry down to N=1/2 supersymmetry in three dimensions. © 2015 American Physical Society.
Montero M.,Institute Fisica Fundamental |
Martin-Martinez E.,University of Waterloo
Physical Review A - Atomic, Molecular, and Optical Physics | Year: 2012
We provide a simple argument showing that in the limit of infinite acceleration, the entanglement in a fermionic-field bipartite system must be independent of the choice of Unruh modes. This implies that most tensor product structures used previously to compute field entanglement in relativistic quantum information cannot give rise to physical results. © 2012 American Physical Society.
Bharati S.,University of Waterloo |
Saengudomlert P.,Asian Institute of Technology
Journal of Lightwave Technology | Year: 2010
Closed-form mathematical expressions of network performances, such as the mean packet delay, are useful for evaluating a communication network during the design process. This paper provides derivations of closed-form expressions of the mean packet delay for the gated service and the limited service of dynamic bandwidth allocation in Ethernet passive optical networks (EPONs). Based on the M/G/1 queueing analysis framework of a multiuser cyclic polling system, we derive the mean packet delay expressions by modifying the expressions for the reservation time component of the total delay. Results from simulation experiments confirm that our analysis can accurately predict the mean packet delay. Finally, we extend the analysis to demonstrate how the limited service can protect packets transmitted by a light-load user from having excessive delays due to high traffic loads from other users in the same EPON. The analytical results indicate that, in selecting the maximum length of a scheduling cycle for the limited service, there is a tradeoff between the mean packet delay under uniform traffic and the guaranteed upper bound on the mean packet delay under nonuniform traffic. © 2010 IEEE.
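The gated-service polling system analyzed in the abstract can also be explored empirically with a small event-driven simulation. The sketch below is an idealized stand-in (symmetric users, fixed packet service time, fixed per-user reservation overhead), not the paper's M/G/1 analysis; all parameter values are arbitrary:

```python
import random

def simulate_gated_polling(n_users=4, lam=0.1, service=1.0, reservation=0.5,
                           n_packets=20000, seed=1):
    """Event-driven sketch of a symmetric gated-service polling system,
    an idealized stand-in for EPON dynamic bandwidth allocation:
    Poisson arrivals per user, fixed packet service time, and a fixed
    per-user reservation overhead. Returns the mean packet delay."""
    random.seed(seed)
    arrivals = []                                # pre-generate (time, user) arrivals
    for user in range(n_users):
        t = 0.0
        for _ in range(n_packets // n_users):
            t += random.expovariate(lam)
            arrivals.append((t, user))
    arrivals.sort()
    queues = [[] for _ in range(n_users)]
    clock, i, delays = 0.0, 0, []
    while i < len(arrivals) or any(queues):
        for u in range(n_users):
            clock += reservation                 # polling/reservation overhead
            while i < len(arrivals) and arrivals[i][0] <= clock:
                queues[arrivals[i][1]].append(arrivals[i][0])
                i += 1
            gated, queues[u] = queues[u], []     # gated: serve only packets present now
            for t_arrival in gated:
                clock += service
                delays.append(clock - t_arrival)
        if i < len(arrivals) and not any(queues):
            clock = max(clock, arrivals[i][0])   # skip over idle periods
    return sum(delays) / len(delays)

print(f"mean packet delay at 40% load: {simulate_gated_polling():.2f} time units")
```

Raising the per-user arrival rate increases the mean delay, mirroring the load tradeoff the abstract describes.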
News Article | February 5, 2016
New findings from an international collaboration led by Canadian scientists may eventually lead to a theory of how superconductivity initiates at the atomic level, a key step in understanding how to harness the potential of materials that could provide lossless energy storage, levitating trains and ultra-fast supercomputers. Professor David Hawthorn, Professor Michel Gingras, doctoral student Andrew Achkar, and post-doctoral fellow Dr. Zhihao Hao from University of Waterloo's Department of Physics and Astronomy have experimentally shown that electron clouds in superconducting materials can snap into an aligned and directional order called nematicity. "It has become apparent in the past few years that the electrons involved in superconductivity can form patterns, stripes or checkerboards, and exhibit different symmetries - aligning preferentially along one direction," said Professor Hawthorn. "These patterns and symmetries have important consequences for superconductivity - they can compete, coexist or possibly even enhance superconductivity." Their results, published today in the prestigious journal Science, present the most direct experimental evidence to date of electronic nematicity as a universal feature in cuprate high-temperature superconductors. "In this study, we identify some unexpected alignment of the electrons - a finding that is likely generic to the high temperature superconductors and in time may turn out to be a key ingredient of the problem," said Professor Hawthorn. Superconductivity, the ability of a material to conduct an electric current with zero resistance, is best described as an exotic state that in high-temperature superconductors remains challenging to predict, let alone explain. The scientists used a novel technique called soft x-ray scattering at the Canadian Light Source synchrotron in Saskatoon to probe electron scattering in specific layers of the cuprate crystalline structure.
Specifically, the individual cuprate (CuO2) planes, where electronic nematicity takes place, versus the crystalline distortions in between the CuO2 planes. Electronic nematicity happens when the electron orbitals align themselves like a series of rods - breaking their unidirectional symmetry apart from the symmetry of the crystalline structure. The term "nematicity" commonly refers to when liquid crystals spontaneously align under an electric field in liquid crystal displays. In this case, it is the electronic orbitals that enter the nematic state as the temperature drops below a critical point. Recent breakthroughs in high-temperature superconductivity have revealed a complex competition between the superconductive state and charge density wave order fluctuations. These periodic fluctuations in the distribution of the electrical charges create areas where electrons bunch up in high- versus low-density clouds, a phenomenon that is now recognized to be generic to the underdoped cuprates. Results from this study show electronic nematicity also likely occurs in underdoped cuprates. Understanding the relation of nematicity to charge density wave order, superconductivity and an individual material's crystalline structure could prove important to identifying the origins of the superconducting and so-called pseudogap phases. The authors also found the choice of doping material impacts the transition to the nematic state. Dopants, such as strontium, lanthanum, and even europium added to the cuprate lattice, create distortions in the lattice structure which can either strengthen or weaken nematicity and charge density wave order in the CuO2 layer. Although there is not yet an agreed upon explanation for why electronic nematicity occurs, it may ultimately present another knob to tune in the quest to achieve the ultimate goal of a room temperature superconductor. 
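A common way to quantify the orbital alignment described above is a simple nematic order parameter measuring the anisotropy between x- and y-oriented occupations. The sketch below is a generic textbook definition, not the specific observable extracted in the Science paper, and the occupation numbers are illustrative:

```python
def nematic_order(n_x, n_y):
    """Orbital nematic order parameter: the anisotropy between electron
    occupations of x- and y-oriented orbitals. 0 = symmetric phase,
    values approaching +/-1 = strongly aligned (nematic) phase."""
    return (n_x - n_y) / (n_x + n_y)

print(nematic_order(0.5, 0.5))   # 0.0: equal occupations, symmetry intact
print(nematic_order(0.7, 0.3))   # positive: x-oriented orbitals favored (nematic)
```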
"Future work will tackle how electronic nematicity can be tuned, possibly to advantage, by modifying the crystalline structure," says Hawthorn. Hawthorn and Gingras are both Fellows of the Canadian Institute for Advanced Research. Gingras holds the Canada Research Chair in Condensed Matter Theory and Statistical Mechanics and spent time at the Perimeter Institute for Theoretical Physics as a visiting researcher while this work was being carried out.
News Article | December 19, 2016
On Monday, BlackBerry QNX unveiled a new Autonomous Vehicle Innovation Center housed within its QNX facility in Ottawa, Canada. The purpose of the center is to develop production-ready software to accelerate the adoption of connected and self-driving vehicles, which is part of the company's pivot away from hardware.

Canada is a natural fit for the new center because on November 28, the Ministry of Transportation of Ontario approved BlackBerry QNX to test autonomous vehicles on Ontario roads as part of the government's autonomous vehicle pilot program.

"Autonomous vehicles require software that is extremely sophisticated and highly secure," said John Chen, executive chairman and CEO of BlackBerry, in a statement. "Our innovation track record in mobile security and our demonstrated leadership in automotive software make us ideally suited to dominate the market for embedded intelligence in the cars of the future."

Analysts have said that the QNX platform is the future of BlackBerry. QNX was acquired by BlackBerry in 2010, and Ford chose it to replace Microsoft for its Sync infotainment platform. The QNX platform has a reputation for being secure, and it is already in more than 60 million vehicles from 20 different automakers, including the Sync 3 system in Ford vehicles, as previously reported by TechRepublic.

SEE: BlackBerry expands Ford partnership for QNX, targets IoT and connected cars as path back to relevance (TechRepublic)

BlackBerry QNX plans to hire local software engineers to work on connected and autonomous car projects. One of the first projects will be supporting Ontario's autonomous driving pilot as well as BlackBerry QNX's work with the University of Waterloo, PolySync, and Renesas Electronics to build an autonomous concept vehicle. BlackBerry QNX is extending its platform expertise into ADAS (Advanced Driver Assist Systems), CVAV (Connected Vehicle and Autonomous Vehicle) systems and secure Over the Air Software Update services.
"With the opening of its innovation center in Ottawa, BlackBerry is helping to establish our country as the global leader in software and security for connected car and autonomous vehicle development," said Canadian Prime Minister Justin Trudeau in a statement. "This center will create great middle-class jobs for Canadians, new opportunities for recent university graduates, and further position Canada as a global hub for innovation."

A BlackBerry spokesperson said 50% of all cars will connect to the cloud by 2020 and will be loaded with IoT edge nodes and sensors. The opening of the Ottawa center is in anticipation of this shift so that BlackBerry can invest in key technologies for embedded intelligence to power the core electronics of connected and autonomous cars.

Analysts agree that this is yet another step toward BlackBerry's move out of mobile phones and more deeply into software, particularly the well-regarded QNX platform. William Stofega, program director and mobile phone industry analyst for the International Data Corporation, said, "I think this maps out BlackBerry's next step in their evolution from hardware to software."

Bob Bilbruck, CEO of B2 Group, said, "I think the QNX Innovation Center is just another way for BlackBerry to widen their already large footprint in connected car and the future of autonomous vehicles, and show leadership in this space. It's also a way for the Canadian government to show support of BlackBerry and also get involved in this innovative space through this partnership with BlackBerry. QNX has a solid footprint in this vertical now but I think their ambitions are bigger; and this private and public innovation center venture shows BlackBerry's and the Canadian government's commitment to being innovative and showing an innovative face to the rest of the world."
In October, BlackBerry expanded its relationship with Ford Motor Company to include a dedicated team working with Ford on expanding the use of BlackBerry's QNX Neutrino Operating System, Certicom security technology, QNX hypervisor, and QNX audio processing software.
News Article | November 30, 2016
Road testing of self-driving cars on the public roads of Ontario will commence soon, following researchers at the University of Waterloo being granted approval for this by the Ministry of Transportation. The move means that the populous Canadian province will now join other jurisdictions supporting autonomous driving technologies — very likely to be a rapidly growing industry over the coming decades. The decision to grant approval to the University of Waterloo’s 3-year autonomous vehicle research program was announced by Ontario’s Minister of Transportation, Steven Del Duca, thusly: “I am pleased to announce that the University of Waterloo is one of the first approved applicants of our Automated Pilot Vehicle program. As a result, Waterloo will be among the first eligible to operate an autonomous vehicle on a public roadway in Canada.” The press release on the matter provides details: “Fully connected to the internet and featuring powerful computers to process and analyze data in real time, the test car includes technologies such as radar, sonar, and lidar, as well as both inertial and vision sensors. A researcher will always be behind the wheel and ready to assume control at all times. The vehicle currently operates with some degree of self-driving capabilities, combining features such as adaptive cruise control to maintain a safe distance from other vehicles without intervention by the driver. … “The goal of the research team, which includes 9 professors working under the umbrella of the Waterloo Centre for Automotive Research (WatCAR), is to progressively add more automated features. Specific aims of the Waterloo project include improving automated driving in challenging Canadian weather conditions, further optimizing fuel efficiency to reduce emissions, and designing new computer-based controls. 
The researchers will test the vehicle everywhere from city streets to divided highways as they add and fine-tune new capabilities.” The bit about specifically working to deal with Ontario’s weather is interesting. It’s a common argument against self-driving technologies that they won’t be able to function in places with harsh weather like Ontario, not any time soon anyways. Obviously, if researchers show that winter weather in Ontario is no issue, then this argument will fall through. Partners for the program include NVIDIA and AutonomouStuff.
News Article | February 28, 2017
BOSTON, MA--(Marketwired - Feb 28, 2017) - MP Objects ("MPO"), a leading provider of cloud-based Supply Chain Orchestration software for "Customer Chain Control," today announced it has moved its commercial headquarters to the U.S. in the center of Boston, adding to its international locations in Rotterdam, Tokyo and Hyderabad. Most recently, MPO closed its first round of venture capital in a $10 million growth equity investment from Updata Partners. The funds are earmarked for U.S. expansion, global marketing, international sales and new hires. To that end, it announced the appointment of Brian Hodgson as Executive Vice President of Business Development, responsible for pursuing opportunities with new and existing customers, markets and partnerships.

MPO is used by multinational companies in the logistics, technology, industrial, healthcare, and consumer sectors to manage dynamic supply chain configurations that meet each customer's unique requirements. Customers include global brands and blue chip companies such as CEVA, DSV, Geodis, Nippon Express, eBay, IBM, Microsoft, Dow Chemical, Terex, Patagonia and Oakley.

EVP of Business Development Brian Hodgson was formerly vice president of sales and marketing for Descartes Systems Group, a global provider of logistics software solutions. He spent four years at Oz Development as vice president of sales and marketing; Oz Development provided cloud-based solutions that streamlined warehouse and shipping processes and was acquired by Descartes in 2015. He was also chief marketing officer for Kewill, a leader in logistics software, and began his two-decade-long career as a senior software engineer. Brian earned a bachelor's degree in electrical engineering from the University of Waterloo.

"With our continued growth and working with customers with the most complex supply chains, Brian provides 20+ years of experience and complements the rest of the management team," said Martin Verwijmeren, MPO's chief executive officer.
"Brian will help us identify strategies for growth opportunities and create long-term value for our customers." Taking a "customer-first" approach, MPO's Customer Chain Control SaaS solution enables companies and their clients to select the optimal sourcing and delivery path for each order based on factors such as stock availability, price levels, lead times and routing options. It creates a customer-by-customer, order-driven supply chain. MPO then manages those unique supply chain steps (e.g., packaging, instructions, shipper tracking, etc.) through every participant in the system (supplier, shipper, carrier, last mile). About MP Objects Founded in 2000 with offices in Rotterdam, Tokyo, Hyderabad and Boston, MPO provides a single SaaS platform for order planning and execution that leverages existing enterprise supply chain systems such as ERP, logistics, warehouse and transportation management systems. With MPO, companies see higher revenues by being able to deliver a better customer experience and lower costs by capturing details within each order's supply chain that were previously unmanaged. For more information: contact firstname.lastname@example.org; call 1 (646) 520-0841 or visit: www.mp-objects.com.
News Article | March 1, 2017
CALGARY, ALBERTA--(Marketwired - March 1, 2017) - PROSPECTOR RESOURCES CORP. ("Prospector" or the "Company") (TSX VENTURE:PRR) announces that it has strengthened its management team with the appointments of Mr. Tim Williams as Executive Vice President - COO, Mr. Jose Luis Martinez as Executive Vice President - Corporate Development & Strategy, Mr. Ian Dreyer as Senior Vice President - Geology and Mr. David D'Onofrio as Chief Financial Officer and Corporate Secretary.

Tim Williams, who will initially be based in Peru, will provide overall leadership in projects, engineering, construction and mining and will also participate in the technical review of all M&A activities. Prior to joining Prospector Resources Corp., Tim was Vice President Operations for Rio Alto Mining Limited from 2010 to 2015. Tim's responsibilities included overseeing the construction and operation of the La Arena gold mine and overseeing the construction of the Shahuindo gold mine, both located in Peru. Following the acquisition of Rio Alto Mining Limited by Tahoe Resources Inc. in April 2015, Tim was the Vice President Operations and Country Manager in Peru until August 2016.

Prior to his involvement with Rio Alto Mining Limited, Tim managed the El Brocal and the Marcona open pit mining contracts for Stracon - GyM in Peru. Tim has also held senior operating positions in Compania Minera Volcan at their Cerro de Pasco operations, also located in Peru. Before arriving in Peru, Tim held mining production roles with Anglo Gold Ashanti at Geita in Tanzania, and geotechnical and mine planning roles at WMC's Leinster Nickel Operations and MIM's McArthur River mine, both located in Australia. Tim has also worked in the consulting industry with AMC Mining at their Perth, WA office. Tim holds a Master's Degree in Mining Geomechanics, a Bachelor's Degree in Mining and Economic Geology, and a Post Graduate Diploma in Mining from Curtin University, Western Australian School of Mines.
He is a Fellow of the Australasian Institute of Mining and Metallurgy.

Jose Luis Martinez lives in Toronto, Canada. An accomplished investment banking professional with 23 years of experience in the Canadian financial services sector, he is now deploying his capital markets expertise and regional knowledge to make Prospector Resources Corp. a success story. From 2006 to 2016, Jose Luis led business development and relationship management for TD Securities Investment Banking in Latin America, with a focus on the mining sector. Transactions originated included mergers and acquisitions advisory assignments, equity underwriting, and debt financing for a wide variety of clients in Canada and Latin America. Prior to this, Jose Luis spent over 12 years covering the Latin American market, originating and executing a wide array of financing transactions for large corporations. He also served as Head of TD Securities' South America Regional Representative Office in Chile. Jose Luis has strong relationships with executives of large public and private companies, as well as controlling shareholders of leading private conglomerates across Latin America. Jose Luis holds an MBA from the University of Toronto in Canada and a Bachelor of Business Administration from Universidad de Lima in Peru.

Ian Dreyer, who is based in Lima, Peru, has 30 years of geological and mining-related experience ranging from open pit and underground mine production and resource definition to grassroots exploration in Australia, Africa, Indonesia and Latin America. His work in Latin America since 2010 has broadly been in a consulting role, working on deposits in Peru, Chile, Mexico, Brazil and Uruguay. Ian has resided in Lima since 2010 and was a key member of the Rio Alto Mining Limited team that developed the La Arena and Shahuindo gold mines located in Peru.
Ian has been involved in the optimization of three major gold deposits: the Golden Mile and its transformation into the "Super Pit", the Mount Charlotte underground gold mine and the Telfer gold mine, all located in Australia, and brings a mix of technical and operational strengths to the team. Ian holds a BSc in Geology from Curtin University of Western Australia and is a Chartered Professional Geologist (AusIMM).

David D'Onofrio is currently the CFO of a diversified merchant bank focused on providing early stage capital and advisory services to emerging growth companies globally. Mr. D'Onofrio has over 10 years' experience working in public accounting in audit and taxation advisory roles, and has acted as CEO, CFO, Director, Audit Committee member and in other financial advisory positions for a number of private and public enterprises. David is a Chartered Professional Accountant, a graduate of the Schulich School of Business, and holds a Master of Taxation degree from the University of Waterloo.

Prospector thanks Anthony Jackson for his service over the past years and wishes him continued success in his future endeavours.

Alex Black, Prospector Resources Corp. President & CEO, stated, "The executive management appointments announced today form the core management team from which our company will grow its future business. The appointments follow on from our stated objective to assemble a highly experienced technical and corporate management team with a solid experience base of developing and building mines and a track record of creating significant shareholder value. We created an enviable business and work culture at Rio Alto Mining Limited that we intend to replicate at Prospector Resources Corp. and have a number of our old management team ready to join our company as we advance our business. We are currently actively reviewing a number of business opportunities that I believe will form a solid base for a new entrant in the precious metals mining space."
Prospector is also pleased to announce that it has issued, as part of its variable incentive compensation program, an aggregate of 430,000 Restricted Share Units ("RSUs") and 2,050,000 options to purchase Prospector common shares ("Options"), all pursuant to Prospector's Share Incentive Plan and Stock Option Plan. Of the 430,000 RSUs and 2,050,000 Options, 430,000 RSUs and 1,550,000 Options are being granted to the directors and officers of Prospector, including the individuals announced in this press release.

The RSUs, which vest 1/3 equally over a three-year period, include a time-based and a performance-based component with a multiplier as determined by the Company's Board of Directors, and entitle the holder to an amount computed by the value of a notional number of Common Shares designated in the award. Each Option entitles the holder to purchase one Prospector common share at a price of $1.02 for a period of five years from the date of grant. The Options also vest 1/3 equally over a three-year period. The grant of the RSUs and Options is subject to the terms of the Share Incentive Plan and the Stock Option Plan respectively, and final regulatory approval and, if applicable, shareholder approval.

The focus of Prospector is to compile an attractive portfolio of precious metals assets that can be developed into mines and to assemble a highly experienced technical and corporate management team with a solid experience base of developing and building mines in South America and Central America. Through its strategy of evaluating and acquiring precious metals projects and through a combination of organic exploration and development and strategic acquisitions, the new management team intends to grow the recapitalized Prospector and create long-term shareholder value through the development of high-margin, strong free-cash-flowing mining operations.
Neither the TSX Venture Exchange nor its Regulation Services Provider (as that term is defined in the policies of the TSX Venture Exchange) accepts responsibility for the adequacy or accuracy of this release.
News Article | April 7, 2016
Changing the color of single photons in a diamond quantum memory

Abstract: Researchers from the Institute for Quantum Computing at the University of Waterloo and the National Research Council of Canada (NRC) have, for the first time, converted the colour and bandwidth of ultrafast single photons using a room-temperature quantum memory in diamond.

Shifting the colour of a photon, or changing its frequency, is necessary to optimally link components in a quantum network. For example, in optical quantum communication, the best transmission through an optical fibre is in the near infrared, but many of the sensors that detect single photons work much better for visible light, which is at a higher frequency. Being able to shift the colour of the photon between the fibre and the sensor enables higher-performance operation, including bigger data rates.

The research, published in Nature Communications, demonstrated small frequency shifts that are useful for a communication protocol known as wavelength division multiplexing. This technique is used today when a sender needs to transmit large amounts of information over a single link: the signal is broken into smaller packets at slightly different frequencies and sent through together, and the information is then reassembled at the other end based on those frequencies.

In the experiments conducted at NRC, the researchers demonstrated the conversion of both the frequency and bandwidth of single photons using a room-temperature diamond quantum memory.

"Originally there was this thought that you just stop the photon, store it for a little while and get it back out. The fact that we can manipulate it at the same time is exciting," said Kent Fisher, a PhD student at the Institute for Quantum Computing and the Department of Physics and Astronomy at Waterloo. "These findings could open the door for other uses of quantum memory as well."
The diamond quantum memory works by converting the photon into a particular vibration of the carbon atoms in the diamond, called a phonon. This conversion works for many different colours of light, allowing for the manipulation of a broad spectrum of light. The energy structure of diamond allows this to occur at room temperature with very low noise.

Researchers used strong laser pulses to store and retrieve the photon. By controlling the colours of these laser pulses, researchers controlled the colour of the retrieved photon.

"The fragility of quantum systems means that you are always working against the clock," remarked Duncan England, researcher at NRC. "The interesting step that we've shown here is that by using extremely short pulses of light, we are able to beat the clock and maintain quantum performance."

The integrated platform for photon storage and spectral conversion could be used for frequency multiplexing in quantum communication, as well as to build up a very large entangled state - something called a cluster state. Researchers are interested in exploiting cluster states as the resource for quantum computing driven entirely by measurements.

"Canada is a power-house in quantum research and technology. This work is another example of what partners across the country can achieve when leveraging their joint expertise to build next-generation technologies," noted Ben Sussman, program leader for NRC's Quantum Photonics program.

About University of Waterloo: University of Waterloo is Canada's top innovation university. With more than 36,000 students, we are home to the world's largest co-operative education system of its kind. Our unmatched entrepreneurial culture, combined with an intensive focus on research, powers one of the top innovation hubs in the world.
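The frequency bookkeeping behind such a store-and-retrieve scheme can be sketched in a few lines. This is a schematic back-of-the-envelope illustration, not code or data from the study: the wavelengths below are invented, and the only rule encoded is energy conservation in a Raman-type memory, where the retrieved photon is shifted from the input by the detuning between the read and write pulses (the phonon frequency cancels out).

```python
C = 299_792_458  # speed of light, m/s
THZ = 1e12

def freq_thz(wavelength_nm):
    """Optical frequency in THz for a vacuum wavelength in nm."""
    return C / (wavelength_nm * 1e-9) / THZ

def retrieved_freq_thz(signal_nm, write_nm, read_nm):
    # Storage maps the signal photon onto a phonon: Omega = f_signal - f_write.
    # Retrieval with a read pulse emits at f_out = f_read + Omega, so the
    # output is shifted from the input by (f_read - f_write).
    omega = freq_thz(signal_nm) - freq_thz(write_nm)
    return freq_thz(read_nm) + omega

# Illustrative (made-up) wavelengths: detuning the read pulse from the
# write pulse shifts the retrieved photon by exactly that detuning.
f_in = freq_thz(723.0)
f_out = retrieved_freq_thz(723.0, 800.0, 799.0)
shift = f_out - f_in  # equals freq_thz(799.0) - freq_thz(800.0)
```

With identical write and read pulses the photon comes back at its original colour; any detuning between the two pulses is imprinted directly on the output, which is the control knob the article describes.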
Piani M.,University of Waterloo |
Piani M.,University of Strathclyde |
Watrous J.,University of Waterloo |
Watrous J.,Canadian Institute for Advanced Research
Physical Review Letters | Year: 2015
Steering is the entanglement-based quantum effect that embodies the "spooky action at a distance" disliked by Einstein and scrutinized by Einstein, Podolsky, and Rosen. Here we provide a necessary and sufficient characterization of steering, based on a quantum information processing task: the discrimination of branches in a quantum evolution, which we dub subchannel discrimination. We prove that, for any bipartite steerable state, there are instances of the quantum subchannel discrimination problem for which this state allows a correct discrimination with strictly higher probability than in the absence of entanglement, even when measurements are restricted to local measurements aided by one-way communication. On the other hand, unsteerable states are useless in such conditions, even when entangled. We also prove that the above steering advantage can be exactly quantified in terms of the steering robustness, which is a natural measure of the steerability exhibited by the state. © 2015 American Physical Society.
Piani M.,University of Strathclyde |
Piani M.,University of Waterloo
Journal of the Optical Society of America B: Optical Physics | Year: 2015
We introduce and study the notion of steerability for channels. This generalizes the notion of steerability of bipartite quantum states. We discuss a key conceptual difference between the case of states and the case of channels: while state steering deals with the notion of "hidden" states, steerability in the channel case is better understood in terms of coherence of channel extensions, rather than in terms of "hidden" channels. This distinction vanishes in the case of states. We further argue how the proposed notion of lack of coherence of channel extensions coincides with the notion of channel extensions realized via local operations and classical communication. We also discuss how the Choi-Jamiołkowski isomorphism allows the direct application of many results about states to the case of channels. We introduce measures for the steerability of channel extensions. © 2015 Optical Society of America.
Woody E.Z.,University of Waterloo |
Szechtman H.,McMaster University
Frontiers in Human Neuroscience | Year: 2013
Research indicates that there is a specially adapted, hard-wired brain circuit, the security motivation system, which evolved to manage potential threats, such as the possibility of contamination or predation. The existence of this system may have important implications for policy-making related to security. The system is sensitive to partial, uncertain cues of potential danger, detection of which activates a persistent, potent motivational state of wariness or anxiety. This state motivates behaviors to probe the potential danger, such as checking, and to correct for it, such as washing. Engagement in these behaviors serves as the terminating feedback for the activation of the system. Because security motivation theory makes predictions about what kinds of stimuli activate security motivation and what conditions terminate it, the theory may have applications both in understanding how policy-makers can best influence others, such as the public, and also in understanding the behavior of policy-makers themselves. © 2013 Woody and Szechtman.
Foldvari M.,University of Waterloo
Journal of Biomedical Nanotechnology | Year: 2010
Non-invasive drug delivery systems provide alternative routes of administration and improved delivery of drugs to localized target sites in the body. Topically applied dermal and transdermal delivery systems could replace needles required to administer many of the new biologics-based drugs and vaccines, in addition to other significant advantages such as avoiding first-pass hepatic metabolism, gastric degradation and frequent dosing. However, the limited dermal and transdermal delivery of many small and large molecules is a significant challenge because of the unyielding barrier properties of the skin. This paper reviews the application of a novel topical delivery system, biphasic vesicles built from nanoscale components, to the delivery of several therapeutic agents and vaccine antigens and discusses progress toward clinical use. Copyright © 2010 American Scientific Publishers All rights reserved.
Lu Q.-B.,University of Waterloo
Mutation Research - Reviews in Mutation Research | Year: 2010
The subpicosecond-lived prehydrated electron (epre-) is a fascinating species in radiation biology and radiotherapy of cancer. Using femtosecond time-resolved laser spectroscopy, we have recently resolved that epre- states are electronically excited states and, after the identification and removal of a coherence spike, have lifetimes of ∼180 fs and ∼550 fs, respectively. Notably, the weakly bound epre- (<0 eV) has the highest yield among all the radicals generated in the cell during ionizing radiation. Recently, it has been demonstrated that dissociative electron transfer (DET) reactions of epre- can lead to important biological effects. By direct observation of the transition states of the DET reactions, we have shown that DET reactions of epre- play key roles in bond breakage of nucleotides and in activations of halopyrimidines as potential hypoxic radiosensitizers and of the chemotherapeutic drug cisplatin in combination with radiotherapy. This review discusses all of these findings, which may lead to improved strategies in radiotherapy of cancer, in radioprotection of humans, and in the discovery of new anticancer drugs. © 2010 Elsevier B.V. All rights reserved.
Srinivasan S.J.,Princeton University |
Hoffman A.J.,Princeton University |
Gambetta J.M.,University of Waterloo |
Houck A.A.,Princeton University
Physical Review Letters | Year: 2011
We introduce a new type of superconducting charge qubit that has a V-shaped energy spectrum and uses quantum interference to provide independently tunable qubit energy and coherent coupling to a superconducting cavity. Dynamic access to the strong coupling regime is demonstrated by tuning the coupling strength from less than 200 kHz to greater than 40 MHz. This tunable coupling can be used to protect the qubit from cavity-induced relaxation and avoid unwanted qubit-qubit interactions in a multiqubit system. © 2011 American Physical Society.
McGill S.,University of Waterloo
Strength and Conditioning Journal | Year: 2010
This review article recognizes the unique function of the core musculature. In many real life activities, these muscles act to stiffen the torso and function primarily to prevent motion. This is a fundamentally different function from those muscles of the limbs, which create motion. By stiffening the torso, power generated at the hips is transmitted more effectively by the core. Recognizing this uniqueness, implications for exercise program design are discussed using progressions beginning with corrective and therapeutic exercises through stability/mobility, endurance, strength and power stages, to assist the personal trainer with a broad spectrum of clients. Copyright © Lippincott Williams & Wilkins.
Huang P.-J.J.,University of Waterloo |
Liu J.,University of Waterloo
Analytical Chemistry | Year: 2010
Aptamers are single-stranded nucleic acids that can selectively bind to essentially any molecule of choice. Because of their high stability, low cost, ease of modification, and availability through selection, aptamers hold great promise in addressing key challenges in bioanalytical chemistry. In the past 15 years, many highly sensitive fluorescent aptamer sensors have been reported. However, few such sensors showed high performance in serum samples. Further challenges related to practical applications include detection in a very small sample volume and a low dependence of sensor performance on ionic strength. We report the immobilization of an aptamer sensor on a magnetic microparticle and the use of flow cytometry for detection. Flow cytometry allows the detection of individual particles in a capillary and can effectively reduce the light scattering effect of serum. Since DNA immobilization generated a highly negatively charged surface and caused an enrichment of counterions, the sensor performance showed a lower salt dependence. The detection limits for adenosine are determined to be 178 μM in buffer and 167 μM in 30% serum. Finally, we demonstrated that the detection can be carried out in 10 μL of 90% human blood serum. © 2010 American Chemical Society.
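For context, detection limits like those quoted above are conventionally estimated from a calibration curve using the 3σ criterion: the smallest concentration whose signal exceeds the blank by three standard deviations. The sketch below illustrates only that generic criterion, not the authors' actual analysis; the blank signals and calibration slope are invented numbers.

```python
import numpy as np

def detection_limit(blank_signals, slope):
    # Conventional 3-sigma estimate: LOD = 3 * sd(blank) / calibration slope.
    # ddof=1 gives the sample standard deviation of the replicate blanks.
    return 3.0 * np.std(blank_signals, ddof=1) / slope

# Hypothetical replicate blank measurements (fluorescence counts) and a
# hypothetical calibration slope in counts per micromolar:
blank = [101.2, 99.8, 100.5, 100.1, 99.4]
slope = 0.012
lod_uM = detection_limit(blank, slope)  # detection limit in micromolar
```

A shallow slope or a noisy blank both push the limit up, which is why reducing serum light scattering (as flow cytometry does here) can improve sensitivity.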
Gambetta J.M.,University of Waterloo |
Houck A.A.,Princeton University |
Blais A.,Université de Sherbrooke
Physical Review Letters | Year: 2011
We present a superconducting qubit for the circuit quantum electrodynamics architecture that has a tunable qubit-resonator coupling strength g. This coupling can be tuned from zero to values that are comparable with other superconducting qubits. At g=0, the qubit is in a decoherence-free subspace with respect to spontaneous emission induced by the Purcell effect. Furthermore, we show that in this decoherence-free subspace, the state of the qubit can still be measured by either a dispersive shift on the resonance frequency of the resonator or by a cycling-type measurement. © 2011 American Physical Society.
Magesan E.,University of Waterloo |
Gambetta J.M.,University of Waterloo |
Emerson J.,University of Waterloo
Physical Review Letters | Year: 2011
In this Letter we propose a fully scalable randomized benchmarking protocol for quantum information processors. We prove that the protocol provides an efficient and reliable estimate of the average error rate for a set of operations (gates) under a very general noise model that allows for both time- and gate-dependent errors. In particular, we obtain a sequence of fitting models for the observable fidelity decay as a function of a (convergent) perturbative expansion of the gate errors about the mean error. We illustrate the protocol through numerical examples. © 2011 American Physical Society.
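At lowest order, the fidelity decay described in this abstract is a single exponential in sequence length. A minimal sketch of fitting that zeroth-order model to synthetic data (the F(m) = A·p^m + B parameterization and all numbers here are illustrative assumptions, not the authors' full perturbative treatment):

```python
import numpy as np
from scipy.optimize import curve_fit

def rb_decay(m, A, p, B):
    """Zeroth-order randomized-benchmarking model: F(m) = A * p**m + B."""
    return A * p**m + B

def estimate_error_rate(seq_lengths, fidelities, d=2):
    """Fit the decay curve and convert p to an average error rate r.

    For a d-dimensional system, r = (1 - p) * (d - 1) / d.
    """
    (A, p, B), _ = curve_fit(rb_decay, seq_lengths, fidelities,
                             p0=(0.5, 0.95, 0.5), maxfev=10000)
    return (1 - p) * (d - 1) / d, p

# Synthetic single-qubit data with p = 0.98 (average gate error r = 0.01)
m = np.arange(1, 200, 5)
rng = np.random.default_rng(0)
F = 0.5 * 0.98**m + 0.5 + rng.normal(0.0, 1e-3, m.size)
r, p_fit = estimate_error_rate(m, F)
print(round(p_fit, 3), round(r, 4))  # ≈ 0.98 0.01
```

Higher-order terms in the paper's perturbative expansion capture gate-dependent deviations from this single exponential.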
Baig A.,University of Waterloo |
Ng F.T.T.,University of Waterloo
Energy and Fuels | Year: 2010
Biodiesel is a nontoxic, renewable, and biodegradable alternative green fuel for petroleum-based diesel. However, the major obstacle for the commercial production of biodiesel is the high cost of raw material, i.e., refined vegetable oils. This problem can be addressed using low-cost feedstocks, such as waste oils and fats. However, these feedstocks contain a high amount of free fatty acids (FFAs), and thus cannot be used for the production of biodiesel via a traditional homogeneous alkali-catalyzed transesterification process. A solid acid catalyst based on a supported heteropolyacid catalyst (PSA) was evaluated for the production of biodiesel from soybean oil (SBO) containing up to 25 wt % palmitic acid (PA). It was demonstrated that this solid acid catalyst simultaneously catalyzed esterification and transesterification. The total glycerin, ester content, and acid numbers were determined according to ASTM D 6584, EN 14103, and ASTM D 974, respectively. It was found that at 200 °C, 1:27 oil/alcohol molar ratio, and 3 wt % catalyst, a high-quality biodiesel with an ester content of 93.95 mass % was produced from a feedstock (SBO containing 10% PA) in 10 h. Conversions of 92.44% for PA and 99.38% for chemically bound glycerin (CBG), which includes triglyceride (TG), diglyceride (DG), and monoglyceride (MG), were obtained. The effect of process parameters, such as catalyst amount, oil/alcohol molar ratio, and FFA content in the feedstock, has been investigated. This single-step solid acid-catalyzed process has potential for industrial-scale production of biodiesel from high FFA feedstocks. © 2010 American Chemical Society.
High temperature-high efficiency liquid chromatography using sub-2μm coupled columns for the analysis of selected non-steroidal anti-inflammatory drugs and veterinary antibiotics in environmental samples
Shaaban H.,University of Waterloo |
Gorecki T.,University of Waterloo
Analytica Chimica Acta | Year: 2011
A high-efficiency HPLC method was developed by coupling three sub-2 μm columns in series and operating them at high temperature for the separation of selected non-steroidal anti-inflammatory drugs and veterinary antibiotics in environmental samples. The separation was performed at 80°C to reduce the solvent viscosity, thus reducing the column backpressure. The high temperature-extended column length HPLC method was used to determine the most widely used non-steroidal anti-inflammatory drugs and veterinary antibiotics, such as sulphonamides, in wastewater samples. The method could simultaneously determine 24 pharmaceuticals in a short analysis time with high efficiency. It involved pre-concentration and clean-up by solid phase extraction (SPE) using Oasis HLB extraction cartridges, and was validated based on linearity, precision, detection and quantification limits, selectivity and accuracy. Good recoveries were obtained for all analytes, ranging from 72.7% to 98.2% with standard deviations not higher than 6%, except for acetaminophen and acetyl salicylic acid, for which low recoveries were obtained. The detection limits of the studied pharmaceuticals ranged from 2 to 16 μg L⁻¹, while limits of quantification were in the range from 7 to 54 μg L⁻¹ with UV detection. © 2011 Elsevier B.V.
Mock S.E.,University of Waterloo |
Eibach R.P.,University of Waterloo
Psychology and Aging | Year: 2011
Older subjective age is often associated with lower psychological well-being among middle-aged and older adults. We hypothesize that attitudes toward aging moderate this relationship; specifically, feeling older will predict lower well-being among those with less favorable attitudes toward aging but not those with more favorable aging attitudes. We tested this with longitudinal data from the National Survey of Midlife Development in the United States-II assessing subjective age and psychological well-being over 10 years. As hypothesized, older subjective age predicted lower life satisfaction and higher negative affect when aging attitudes were less favorable but not when aging attitudes were more favorable. Implications and future research directions are discussed. © 2011 American Psychological Association.
Albadi M.H.,Sultan Qaboos University |
El-Saadany E.F.,University of Waterloo
Energy | Year: 2010
This paper presents a new formulation for the turbine-site matching problem, based on wind speed characteristics at any site, the power performance curve parameters of any pitch-regulated wind turbine, as well as turbine size and tower height. Wind speed at any site is characterized by the 2-parameter Weibull distribution function and the value of ground friction coefficient (α). The power performance curve is characterized by the cut-in, rated, and cut-out speeds and the rated power. The new Turbine-Site Matching Index (TSMI) is derived based on a generic formulation for Capacity Factor (CF), which includes the effect of turbine tower height (h). Using the CF as a basis for turbine-site matching produces results that are biased towards higher towers with no considerations for the associated costs. The proposed TSMI includes the effects of turbine size and tower height on the Initial Capital Cost (ICC) of wind turbines. The effectiveness and the applicability of the proposed TSMI are illustrated using five case studies. In general, for each turbine, there exists an optimal tower height, at which the value of the TSMI is at its maximum. The results reveal that higher tower heights are not always desirable for optimality. © 2010 Elsevier Ltd.
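As a hedged illustration of the capacity-factor piece of this formulation, the sketch below integrates an assumed piecewise-linear power curve against a Weibull density whose scale parameter is extrapolated to hub height with the ground-friction power law. All turbine numbers are invented for illustration; this is not the paper's exact generic CF or TSMI expression.

```python
import numpy as np

def weibull_pdf(v, k, c):
    """Weibull wind-speed probability density with shape k and scale c."""
    return (k / c) * (v / c)**(k - 1) * np.exp(-(v / c)**k)

def capacity_factor(k, c_ref, alpha, h, h_ref=10.0,
                    v_cutin=3.0, v_rated=12.0, v_cutout=25.0):
    """Numerically estimate CF for a pitch-regulated turbine.

    The scale parameter is extrapolated to hub height h with the
    power law c(h) = c_ref * (h / h_ref)**alpha; a simple linear
    ramp between cut-in and rated speed is assumed (illustrative).
    """
    c = c_ref * (h / h_ref)**alpha
    v = np.linspace(0.0, 40.0, 4001)
    power = np.clip((v - v_cutin) / (v_rated - v_cutin), 0.0, 1.0)
    power[v > v_cutout] = 0.0            # turbine shut down above cut-out
    return float(np.sum(power * weibull_pdf(v, k, c)) * (v[1] - v[0]))

# A taller tower raises the hub-height scale parameter, hence the CF
cf_30 = capacity_factor(k=2.0, c_ref=7.0, alpha=0.14, h=30.0)
cf_80 = capacity_factor(k=2.0, c_ref=7.0, alpha=0.14, h=80.0)
print(cf_80 > cf_30)  # True
```

The example reproduces the qualitative point of the abstract: CF alone always favors the taller tower, which is why the TSMI must fold in the height-dependent capital cost.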
Chen J.Z.Y.,University of Waterloo
Physical Review E - Statistical, Nonlinear, and Soft Matter Physics | Year: 2010
The structure of a system consisting of a self-avoiding polymer chain attracted to the surface of a freely supported soft membrane by a short-ranged force is investigated. The adhesion of the polymer to the deformed surface can produce distinctive states such as pancake, pinch, and bud, depending on the phenomenological parameters of the Helfrich model describing the membrane as well as an adsorption energy describing the attraction between a monomer and the membrane surface. © 2010 The American Physical Society.
Burkov A.A.,University of Waterloo |
Burkov A.A.,University of California at Santa Barbara |
Balents L.,University of California at Santa Barbara
Physical Review Letters | Year: 2011
We propose a simple realization of the three-dimensional (3D) Weyl semimetal phase, utilizing a multilayer structure composed of identical thin films of a magnetically doped 3D topological insulator, separated by ordinary-insulator spacer layers. We show that the phase diagram of this system contains a Weyl semimetal phase of the simplest possible kind, with only two Dirac nodes of opposite chirality, separated in momentum space, in its band structure. This Weyl semimetal has a finite anomalous Hall conductivity and chiral edge states and occurs as an intermediate phase between an ordinary insulator and a 3D quantum anomalous Hall insulator. We find that the Weyl semimetal has a nonzero dc conductivity at zero temperature, with a Drude weight vanishing as T², and is thus an unusual metallic phase, characterized by a finite anomalous Hall conductivity and topologically protected edge states. © 2011 American Physical Society.
Jeon S.,University of Waterloo
Proceedings of the 2010 American Control Conference, ACC 2010 | Year: 2010
The major benefit of state estimation based on a kinematic model, such as the kinematic Kalman filter (KKF), is that it is immune to parameter variations and unknown disturbances, and thus can provide accurate and robust state estimation regardless of the operating condition. Since it suggests using a combination of low-cost sensors rather than a single costly sensor, the specific characteristics of each sensor may have a major effect on the performance of the state estimator. As an illustrative example, this paper considers the simplest form of the KKF, i.e., velocity estimation combining an encoder with an accelerometer, and addresses two major issues that arise in its implementation: the limited bandwidth of the accelerometer and the deterministic nature (non-whiteness) of the encoder quantization noise at slow speeds. It has been shown that each of these characteristics can degrade the performance of the state estimation in different regimes of the operating range. A simple method using a variable Kalman filter gain has been suggested to alleviate these problems, based on a simplified parameterization of the Kalman filter gain matrix. Experimental results are presented to illustrate the main issues and to validate the effectiveness of the proposed scheme. © 2010 AACC.
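A minimal sketch of the velocity-estimating KKF discussed above (a fixed-gain textbook Kalman filter on ideal noiseless signals; the paper's variable-gain scheme and the encoder-quantization and accelerometer-bandwidth effects are not modeled):

```python
import numpy as np

def kkf_velocity(enc_pos, accel, dt, q=1e-4, r=1e-6):
    """Kinematic Kalman filter sketch: fuse encoder position with an
    accelerometer to estimate velocity. Illustrative only."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-acceleration kinematics
    B = np.array([0.5 * dt**2, dt])         # accelerometer enters as input
    H = np.array([[1.0, 0.0]])              # encoder measures position
    Q = q * np.eye(2)
    x, P = np.array([enc_pos[0], 0.0]), np.eye(2)
    v_est = []
    for z, a in zip(enc_pos[1:], accel[:-1]):
        x = F @ x + B * a                   # predict with accel input
        P = F @ P @ F.T + Q
        K = P @ H.T / (H @ P @ H.T + r)     # scalar innovation -> 2x1 gain
        x = x + (K * (z - H @ x)).ravel()   # correct with encoder reading
        P = (np.eye(2) - K @ H) @ P
        v_est.append(x[1])
    return np.array(v_est)

# Constant 2 m/s^2 acceleration: true velocity reaches 2 m/s at t = 1 s
dt = 1e-3
t = np.arange(0.0, 1.0, dt)
pos = 2.0 * t**2 / 2.0                      # ideal encoder readout
v = kkf_velocity(pos, np.full(t.size, 2.0), dt)
print(round(float(v[-1]), 1))  # ≈ 2.0
```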
Wong A.W.C.,University of Waterloo |
Taylor J.E.,University of Waterloo
Astrophysical Journal | Year: 2012
Individual dark matter halos in cosmological simulations vary widely in their detailed structural properties, such as concentration, shape, spin, and degree of internal relaxation. Recent non-parametric (principal component) analyses suggest that a few principal components explain a large fraction of the scatter in these structural properties. The main principal component is closely aligned with concentration, which in turn is known to be related to the mass accretion history (MAH) of the halo, as described by its merger tree. Here, we examine more generally the connection between the MAH and structural parameters. The space of mass accretion histories has principal components of its own. The strongest, accounting for almost 60% of the scatter between individual histories, can be interpreted as the age of the system. We give an analytic fit for this first component, which provides a rigorous way of defining the dynamical age of a halo. The second strongest component, representing acceleration or deceleration of growth at late times, accounts for 25% of the scatter. Relating structural parameters to formation history, we find that concentration correlates strongly with the early history of the halo, while shape and degree of relaxation or dynamical equilibrium correlate with the later history. We examine the inferences about formation history that can be drawn by splitting halos into sub-samples based on observable properties such as concentration and shape. Applications include the definition of young and old samples of galaxy clusters in a quantitative way, and empirical tests of environmental processing rates in clusters. © 2012. The American Astronomical Society. All rights reserved.
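The principal-component decomposition of accretion histories can be sketched in a few lines. Here the halo ensemble is a toy one-parameter family (an assumed exponential-growth model, not simulation data), so a single "age-like" component dominates the scatter, mirroring the ~60% first component reported above:

```python
import numpy as np

def principal_components(histories):
    """PCA of a set of mass accretion histories (rows = halos,
    columns = normalized mass M(z)/M0 on a fixed redshift grid).

    Returns eigenvalues (descending) and the fraction of scatter
    each component explains.
    """
    X = histories - histories.mean(axis=0)       # remove the mean history
    cov = X.T @ X / (X.shape[0] - 1)
    evals = np.sort(np.linalg.eigvalsh(cov))[::-1]
    return evals, evals / evals.sum()

# Toy ensemble: histories M(z) ~ exp(-a z) with halo-to-halo scatter
# in a single formation-rate parameter a (purely illustrative)
rng = np.random.default_rng(1)
z = np.linspace(0.0, 4.0, 50)
a = rng.normal(0.8, 0.15, size=200)
M = np.exp(-np.outer(a, z))
_, frac = principal_components(M)
print(frac[0] > 0.8)  # one dominant "age-like" component: True
```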
Rolison D.R.,U.S. Navy |
Nazar L.F.,University of Waterloo
MRS Bulletin | Year: 2011
Climate change, diminishing reserves of fossil fuels, energy security, and consumer demand all depend on alternatives to our current course of energy usage and consumption. A broad consensus concurs that implementing energy efficiency and renewable energy technologies is a necessity now rather than a luxury to be deferred to some distant future. Neither effort can effect serious change in our energy patterns without marked improvements in electrical energy storage, with electrochemical energy storage in batteries and electrochemical capacitors serving as key components of any plausible scenario.1,2 Consumer expectations of convenience and long-lived portable power further drive the need to push these old devices onto a new performance curve. This issue of MRS Bulletin addresses the significant advances occurring in research laboratories around the world as old electrode materials and designs are re-envisioned, and abandoned materials of the past are reinvigorated by arranging matter and function on the nanoscale to bring batteries and electrochemical capacitors into the 21st century. © 2011 Materials Research Society.
Ududec C.,University of Waterloo |
Wiebe N.,University of Waterloo |
Emerson J.,University of Waterloo
Physical Review Letters | Year: 2013
The question of how irreversibility can emerge as a generic phenomenon when the underlying mechanical theory is reversible has been a long-standing fundamental problem for both classical and quantum mechanics. We describe a mechanism for the appearance of irreversibility that applies to coherent, isolated systems in a pure quantum state. This equilibration mechanism requires only an assumption of sufficiently complex internal dynamics and natural information-theoretic constraints arising from the infeasibility of collecting an astronomical amount of measurement data. Remarkably, we are able to prove that irreversibility can be understood as typical without assuming decoherence or restricting to coarse-grained observables, and hence occurs under distinct conditions and time scales from those implied by the usual decoherence point of view. We illustrate the effect numerically in several model systems and prove that the effect is typical under the standard random-matrix conjecture for complex quantum systems. © 2013 American Physical Society.
Gamalero E.,University of Piemonte Orientale |
Glick B.R.,University of Waterloo
Plant Physiology | Year: 2015
A focus on the mechanisms by which ACC deaminase-containing bacteria facilitate plant growth. Bacteria that produce the enzyme 1-aminocyclopropane-1-carboxylate (ACC) deaminase, when present either on the surface of plant roots (rhizospheric) or within plant tissues (endophytic), play an active role in modulating ethylene levels in plants. This enzyme activity facilitates plant growth especially in the presence of various environmental stresses. Thus, plant growth-promoting bacteria that express ACC deaminase activity protect plants from growth inhibition by flooding and anoxia, drought, high salt, the presence of fungal and bacterial pathogens, nematodes, and the presence of metals and organic contaminants. Bacteria that express ACC deaminase activity also decrease the rate of flower wilting, promote the rooting of cuttings, and facilitate the nodulation of legumes. Here, the mechanisms behind bacterial ACC deaminase facilitation of plant growth and development are discussed, and numerous examples of the use of bacteria with this activity are summarized. © 2015 American Society of Plant Biologists. All rights reserved.
Konig R.,IBM |
Konig R.,University of Waterloo
Physical Review Letters | Year: 2013
We find a tight upper bound for the classical capacity of quantum thermal noise channels that is within 1/ln2 bits of Holevo's lower bound. This lower bound is achievable using unentangled, classical signal states, namely, displaced coherent states. Thus, we find that while quantum tricks might offer benefits, when it comes to classical communication, they can only help a bit. © 2013 American Physical Society.
Bravyi S.,IBM |
Konig R.,IBM |
Konig R.,University of Waterloo
Physical Review Letters | Year: 2013
Given a quantum error correcting code, an important task is to find encoded operations that can be implemented efficiently and fault tolerantly. In this Letter we focus on topological stabilizer codes and encoded unitary gates that can be implemented by a constant-depth quantum circuit. Such gates have a certain degree of protection since propagation of errors in a constant-depth circuit is limited by a constant size light cone. For the 2D geometry we show that constant-depth circuits can only implement a finite group of encoded gates known as the Clifford group. This implies that topological protection must be "turned off" for at least some steps in the computation in order to achieve universality. For the 3D geometry we show that an encoded gate U is implementable by a constant-depth circuit only if UPU† is in the Clifford group for any Pauli operator P. This class of gates includes some non-Clifford gates such as the π/8 rotation. Our classification applies to any stabilizer code with geometrically local stabilizers and sufficiently large code distance. © 2013 American Physical Society.
Block M.S.,University of Kentucky |
Melko R.G.,University of Waterloo |
Melko R.G.,Perimeter Institute for Theoretical Physics |
Kaul R.K.,University of Kentucky
Physical Review Letters | Year: 2013
We present an extensive quantum Monte Carlo study of the Néel to valence-bond solid (VBS) phase transition on rectangular- and honeycomb-lattice SU(N) antiferromagnets in sign-problem-free models. We find that in contrast to the honeycomb lattice and previously studied square-lattice systems, on the rectangular lattice for small N a first-order Néel-VBS transition is realized. For N≥4, we observe that the transition becomes continuous, with the same universal exponents as found on the honeycomb and square lattices (studied here for N=5, 7, 10), providing strong support for a deconfined quantum critical point. Combining our new results with previous numerical and analytical studies, we present a general phase diagram of the stability of CP^(N-1) fixed points with q monopoles. © 2013 American Physical Society.
Nelson-Wong E.,Regis University |
Callaghan J.P.,University of Waterloo
Spine | Year: 2014
OBJECTIVE.: To determine if development of transient low back pain (LBP) during prolonged standing in individuals without prior history of LBP predicts future clinical LBP development at higher rates than in individuals who do not develop LBP during prolonged standing. SUMMARY OF BACKGROUND DATA.: Prolonged standing has been found to induce transient LBP in 40% to 70% of previously asymptomatic individuals. Individuals who develop pain during standing have been found to have altered neuromuscular profiles prior to the standing exposure compared with their pain free counterparts; therefore, it has been hypothesized that these individuals may have higher risk for LBP disorders. METHODS.: Previously asymptomatic participants who had completed a biomechanical study investigating LBP development during standing and response to exercise intervention completed annual surveys regarding LBP status for a period of 3 years. χ² analyses were performed to determine group differences in LBP incidence rates. Accuracy statistics were calculated for the ability of LBP development during standing to predict future LBP. RESULTS.: Participants who developed transient LBP during standing had significantly higher rates of clinical LBP during the 3-year follow-up period (35.3% vs. 23.1%) and were 3 times more likely to experience an episode of clinical LBP during the first 24 months than their non-pain developing counterparts. CONCLUSION.: Transient LBP development during prolonged standing is a positive predictive factor for future clinical LBP in previously asymptomatic individuals. Individuals who experience transient LBP during standing may be considered a "preclinical" group who are at increased risk for future LBP disorders. © 2014, Lippincott Williams & Wilkins.
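The χ² comparison of incidence rates can be illustrated as follows; the 2×2 cell counts are hypothetical, chosen only so the row rates match the reported 35.3% and 23.1% (the study's actual group sizes are not given in this abstract):

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 counts matching the reported rates, NOT the study's data
#                 LBP   no LBP
table = [[6, 11],   # transient-pain developers (6/17 = 35.3%)
         [6, 20]]   # non-pain developers       (6/26 = 23.1%)

chi2, p, dof, _ = chi2_contingency(table)
risk_pd = 6 / 17
risk_npd = 6 / 26
print(round(risk_pd / risk_npd, 2))  # relative risk ≈ 1.53
```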
Yan Z.,University of Waterloo |
Sun B.,University of Waterloo |
Li Y.,University of Waterloo
Chemical Communications | Year: 2013
(3E,7E)-3,7-Bis(2-oxoindolin-3-ylidene)benzo[1,2-b:4,5-b′]difuran-2, 6(3H,7H)-dione (IBDF) was used as a new electron-acceptor building block for polymer semiconductors with very low-lying energy levels. A copolymer of IBDF and thiophene showed stable electron transport performance in thin film transistors. © 2013 The Royal Society of Chemistry.
Duley W.W.,University of Waterloo |
Hu A.,University of Waterloo
Astrophysical Journal | Year: 2012
We report on the preparation of hydrogenated amorphous carbon nanoparticles whose spectral characteristics include an absorption band at 217.5 nm with the profile and characteristics of the interstellar 217.5 nm feature. Vibrational spectra of these particles also contain the features commonly observed in absorption and emission from dust in the diffuse interstellar medium. These materials are produced under "slow" deposition conditions by minimizing the flux of incident carbon atoms and by reducing surface mobility. The initial chemistry leads to the formation of carbon chains, together with a limited range of small aromatic ring molecules, and eventually results in carbon nanoparticles having an sp²/sp³ ratio ≈ 0.4. Spectroscopic analysis of particle composition indicates that naphthalene and naphthalene derivatives are important constituents of this material. We suggest that carbon nanoparticles with similar composition are responsible for the appearance of the interstellar 217.5 nm band and outline how these particles can form in situ under diffuse cloud conditions by deposition of carbon on the surface of silicate grains. Spectral data from carbon nanoparticles formed under these conditions accurately reproduce IR emission spectra from a number of Galactic sources. We provide the first detailed fits to observational spectra of Type A and B emission sources based entirely on measured spectra of a carbonaceous material that can be produced in the laboratory. © 2012. The American Astronomical Society. All rights reserved.
Paetznick A.,University of Waterloo |
Reichardt B.W.,University of Southern California
Physical Review Letters | Year: 2013
Transversal implementations of encoded unitary gates are highly desirable for fault-tolerant quantum computation. Though transversal gates alone cannot be computationally universal, they can be combined with specially distilled resource states in order to achieve universality. We show that "triorthogonal" stabilizer codes, introduced for state distillation by Bravyi and Haah, admit transversal implementation of the controlled-controlled-Z gate. We then construct a universal set of fault-tolerant gates without state distillation by using only transversal controlled-controlled-Z, transversal Hadamard, and fault-tolerant error correction. We also adapt the distillation procedure of Bravyi and Haah to Toffoli gates, improving on existing Toffoli distillation schemes. © 2013 American Physical Society.
Friedland S.,University of Illinois at Chicago |
Gheorghiu V.,University of Calgary |
Gheorghiu V.,University of Waterloo |
Gour G.,University of Calgary
Physical Review Letters | Year: 2013
Uncertainty relations are a distinctive characteristic of quantum theory that impose intrinsic limitations on the precision with which physical properties can be simultaneously determined. The modern work on uncertainty relations employs entropic measures to quantify the lack of knowledge associated with measuring noncommuting observables. However, there is no fundamental reason for using entropies as quantifiers; any functional relation that characterizes the uncertainty of the measurement outcomes defines an uncertainty relation. Starting from a very reasonable assumption of invariance under mere relabeling of the measurement outcomes, we show that Schur-concave functions are the most general uncertainty quantifiers. We then discover a fine-grained uncertainty relation that is given in terms of the majorization order between two probability vectors, significantly extending a majorization-based uncertainty relation first introduced in M. H. Partovi, Phys. Rev. A 84, 052117 (2011). Such a vector-type uncertainty relation generates an infinite family of distinct scalar uncertainty relations via the application of arbitrary uncertainty quantifiers. Our relation is therefore universal and captures the essence of uncertainty in quantum theory. © 2013 American Physical Society.
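The majorization order at the heart of this vector-type relation is easy to state computationally: p majorizes q when every partial sum of p's sorted-descending entries dominates the corresponding partial sum for q. A small self-contained check (illustrative vectors only):

```python
import numpy as np

def majorizes(p, q):
    """Return True if probability vector p majorizes q: the sorted
    partial sums of p dominate those of q at every index."""
    p = np.sort(p)[::-1]
    q = np.sort(q)[::-1]
    return bool(np.all(np.cumsum(p) >= np.cumsum(q) - 1e-12))

# A peaked distribution majorizes the uniform one, never the reverse
uniform = [0.25, 0.25, 0.25, 0.25]
peaked = [0.7, 0.2, 0.05, 0.05]
print(majorizes(peaked, uniform), majorizes(uniform, peaked))  # True False
```

Any Schur-concave quantifier, e.g. the Shannon entropy, respects this order: if p majorizes q, the entropy of p is at most that of q, which is how the single vector relation generates an infinite family of scalar uncertainty relations.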
Pang Q.,University of Waterloo |
Nazar L.F.,University of Waterloo
ACS Nano | Year: 2016
Lithium-sulfur batteries are attractive electrochemical energy storage systems due to their high theoretical energy density and very high natural abundance of sulfur. However, practically, Li-S batteries suffer from short cycling life and low sulfur utilization, particularly in the case of high-sulfur-loaded cathodes. Here, we report on a light-weight nanoporous graphitic carbon nitride (high-surface-area g-C3N4) that enables a sulfur electrode with an ultralow long-term capacity fade rate of 0.04% per cycle over 1500 cycles at a practical C/2 rate. More importantly, it exhibits good high-sulfur-loading areal capacity (up to 3.5 mAh cm-2) with stable cell performance. We demonstrate the strong chemical interaction of g-C3N4 with polysulfides using a combination of spectroscopic experimental studies and first-principles calculations. The 53.5% concentration of accessible pyridinic nitrogen polysulfide adsorption sites is shown to be key for the greatly improved cycling performance compared to that of N-doped carbons. © 2016 American Chemical Society.
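The quoted 0.04% per-cycle fade rate can be read either linearly or as compounding; the abstract does not say which convention is meant, so the arithmetic below simply shows both readings after 1500 cycles:

```python
# Capacity retained after 1500 cycles at 0.04% fade per cycle,
# under the two common conventions (the abstract does not specify)
per_cycle = 0.0004
linear = 1 - per_cycle * 1500          # linear fade
compound = (1 - per_cycle) ** 1500     # compounding fade
print(round(linear, 2), round(compound, 2))  # 0.4 0.55
```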
Michailovich O.,University of Waterloo |
Rathi Y.,Harvard University
IEEE Transactions on Image Processing | Year: 2010
Visualization and analysis of the micro-architecture of brain parenchyma by means of magnetic resonance imaging is nowadays believed to be one of the most powerful tools used for the assessment of various cerebral conditions as well as for understanding the intracerebral connectivity. Unfortunately, the conventional diffusion tensor imaging (DTI) used for estimating the local orientations of neural fibers is incapable of performing reliably in the situations when a voxel of interest accommodates multiple fiber tracts. In this case, a much more accurate analysis is possible using the high angular resolution diffusion imaging (HARDI) that represents local diffusion by its apparent coefficients measured as a discrete function of spatial orientations. In this note, a novel approach to enhancing and modeling the HARDI signals using multiresolution bases of spherical ridgelets is presented. In addition to its desirable properties of being adaptive, sparsifying, and efficiently computable, the proposed modeling leads to analytical computation of the orientation distribution functions associated with the measured diffusion, thereby providing a fast and robust analytical solution for q-ball imaging. © 2010 IEEE.
Ammar K.,University of Waterloo
Proceedings of the ACM SIGMOD International Conference on Management of Data | Year: 2016
Many applications regularly generate large graph data. Many of these graphs change dynamically, and analysis techniques for static graphs are not suitable in these cases. This thesis proposes an architecture to process and analyze dynamic graphs. It is based on a new computation model called Grab'n Fix. The architecture includes a novel distributed graph storage layer to support dynamic graph processing. These proposals were inspired by an extensive quantitative and qualitative analysis of existing graph analytics platforms. © 2016 ACM.
Sonnenblick A.,University of Waterloo |
De Azambuja E.,University of Waterloo |
Azim H.A.,University of Waterloo |
Piccart M.,University of Waterloo
Nature Reviews Clinical Oncology | Year: 2015
Inhibition of poly(ADP-ribose) polymerase (PARP) enzymes is a potential synthetic lethal therapeutic strategy in cancers harbouring specific DNA-repair defects, including those arising in carriers of BRCA1 or BRCA2 mutations. Since the development of first-generation PARP inhibitors more than a decade ago, numerous clinical trials have been performed to validate their safety and efficacy, bringing us to the stage at which adjuvant therapy with PARP inhibitors is now being considered as a viable treatment option for patients with breast cancer. Nevertheless, the available data do not provide clear proof that these drugs are efficacious in the setting of metastatic disease. Advancement of a therapy to the neoadjuvant and adjuvant settings without such evidence is exceptional, but seems reasonable in the case of PARP inhibitors because the target population that might benefit from this class of drugs is small and well defined. This Review describes the evolution of PARP inhibitors from bench to bedside, and provides an up-to-date description of the key published or otherwise reported clinical trials of these agents. The specific considerations and challenges that might be encountered when implementing these compounds in the adjuvant treatment of breast cancer in the clinic are also highlighted.
Rojas-Fernandez C.H.,University of Waterloo
Research in gerontological nursing | Year: 2010
Geriatric (or late-life) depression is common in older adults, with an incidence that increases dramatically after age 70 to 85, as well as among those admitted to hospitals and those who reside in nursing homes. In this population, depression promotes disability and is associated with worsened outcomes of comorbid chronic medical diseases. Geriatric depression is often undetected or undertreated in primary care settings for various reasons, including the (incorrect) belief that depression is a normal part of aging. Current research suggests that while antidepressant agent use in older adults is improving in quality, room for improvement exists. Improving the pharmacotherapy of depression in older adults requires knowledge and understanding of many clinical factors. The purpose of this review is to discuss salient issues in geriatric depression, with a focus on pharmacotherapeutic and psychotherapeutic interventions. Copyright 2010, SLACK Incorporated.
Olivares D.E.,University of Waterloo |
Canizares C.A.,University of Waterloo |
Kazerani M.,University of Waterloo
IEEE Transactions on Smart Grid | Year: 2014
This paper presents the mathematical formulation of the microgrid's energy management problem and its implementation in a centralized Energy Management System (EMS) for isolated microgrids. Using the model predictive control technique, the optimal operation of the microgrid is determined using an extended horizon of evaluation and recourse, which allows a proper dispatch of the energy storage units. The energy management problem is decomposed into Unit Commitment (UC) and Optimal Power Flow (OPF) problems in order to avoid a mixed-integer non-linear formulation. The microgrid is modeled as a three-phase unbalanced system with presence of both dispatchable and non-dispatchable distributed generation. The proposed EMS is tested in an isolated microgrid based on a CIGRE medium-voltage benchmark system. Results justify the need for detailed three-phase models of the microgrid in order to properly account for voltage limits and procure reactive power support. © 2014 IEEE.
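A toy sketch of the receding-horizon (model predictive control) dispatch idea, reduced to a single storage unit and a single-phase energy balance with invented numbers; none of the paper's three-phase modeling, UC/OPF decomposition, or reactive-power constraints is represented:

```python
import numpy as np
from scipy.optimize import linprog

def mpc_dispatch(load, price, soc0, horizon=6, s_max=2.0, e_max=8.0, dt=1.0):
    """Receding-horizon storage dispatch for an isolated microgrid (toy).

    At every step a linear program schedules storage power s_t
    (discharge > 0) over the horizon to minimize generation cost,
    then only the first decision is applied before the horizon rolls.
    """
    T = np.tril(np.ones((horizon, horizon))) * dt   # cumulative energy matrix
    soc, applied = soc0, []
    for t in range(len(load) - horizon):
        p = np.asarray(price[t:t + horizon], dtype=float)
        L = np.asarray(load[t:t + horizon], dtype=float)
        # Keep 0 <= soc - cumsum(s)*dt <= e_max over the whole horizon
        A_ub = np.vstack([T, -T])
        b_ub = np.concatenate([np.full(horizon, soc),
                               np.full(horizon, e_max - soc)])
        bounds = [(-s_max, min(s_max, Li)) for Li in L]  # generation >= 0
        res = linprog(-p, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
        s0 = res.x[0]
        soc -= s0 * dt
        applied.append(s0)
    return applied

load = [3, 3, 3, 6, 6, 6, 3, 3, 3, 3]
price = [1, 2, 1.5, 4, 5, 4.5, 1, 1, 1, 1]
plan = mpc_dispatch(load, price, soc0=4.0)
print(plan[0] < 0, plan[3] > 0)  # charges when cheap, discharges when dear
```

The rolling re-solve with only the first action applied is the "extended horizon of evaluation and recourse" that lets the storage units be dispatched properly.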
Luo J.-Y.,University of Waterloo |
Epling W.S.,University of Waterloo
Applied Catalysis B: Environmental | Year: 2010
The effects of H2O on the performance of a model Pt/Ba/Al2O3 catalyst and regeneration of the catalyst surface were investigated, both in the absence and presence of CO2. In the absence of CO2, an unexpected promotional effect of H2O was observed at low temperature. For example, in the presence of H2O, 90% NOx conversion was obtained at 150 °C, which is the same as that at 350 °C under otherwise identical conditions. The results demonstrate that regeneration, the rate-limiting step at low temperature, occurs through hydrogen spillover to the nitrates, and not through reverse NOx species migration to the Pt sites, and that the presence of H2O greatly promotes the hydrogen spillover rate by providing and stabilizing surface hydroxyl groups. The promotional effect of H2O gradually decreases with increasing temperature and disappears above 350 °C. In the presence of CO2, however, H2O always results in improved performance over the entire temperature range. Besides enhanced hydrogen spillover, another contribution of H2O when CO2 is present is to weaken the detrimental effect of CO. CO is formed via the reverse water-gas shift reaction and poisons Pt sites as well as forms barium isocyanates. The H2O decreases the amount of CO formed as well as hydrolyzes the -NCO species. © 2010 Elsevier B.V. All rights reserved.
Olsthoorn J.,University of Waterloo |
Stastna M.,University of Waterloo
Geophysical Research Letters | Year: 2014
We present numerical simulations of near-bed instability induced by internal waves shoaling over topography using a model with an explicit representation of the sediment concentration. We find that not all separation bubble-bursting events lead to resuspension, though all lead to significant transport out of the bottom boundary layer. This transport can significantly enhance chemical exchange across the bottom boundary layer. When resuspension occurs, we find that it is largely due to two-dimensional evolution of the separation bubble during the bursting process. Three-dimensionalization occurs once the resuspended sediment cloud is transported out of the bottom boundary layer, and hence, redeposition is strongly influenced by three-dimensional effects. We derive a criterion for resuspension over a linearly sloping bottom in terms of two dimensionless parameters that encapsulate the sediment properties. Key Points: (i) internal waves can induce resuspension in a coupled hydrodynamic-sediment model; (ii) true resuspension occurs only for sufficiently vigorous shear; (iii) this paper quantifies when true resuspension will occur. ©2014. American Geophysical Union. All Rights Reserved.
Lehnherr I.,University of Waterloo
Environmental Reviews | Year: 2014
There has been increasing concern about mercury (Hg) levels in marine and freshwater organisms in the Arctic, due to the importance of traditional country foods such as fish and marine mammals to the diet of Northern Peoples. Due to its toxicity and ability to bioaccumulate and biomagnify in food webs, methylmercury (MeHg) is the form of Hg that is of greatest concern. The main sources of MeHg to Arctic aquatic ecosystems, the processes responsible for MeHg formation and degradation in the environment, MeHg bioaccumulation in Arctic biota and the human health implications for Northern Peoples are reviewed here. In Arctic marine ecosystems, Hg(II) methylation in the water column, rather than bottom sediments, is the primary source of MeHg, although a more quantitative understanding of the role of dimethylmercury (DMHg) as a MeHg source is needed. Because MeHg production in marine waters is limited by the availability of Hg(II), predicted increases in Hg(II) concentrations in oceans are likely to result in higher MeHg concentrations and increased exposure to Hg in humans and wildlife. In Arctic freshwaters, MeHg concentrations are a function of two antagonistic processes, net Hg(II) methylation in bottom sediments of ponds and lakes and MeHg photodemethylation in the water column. Hg(II) methylation is controlled by microbial activity and Hg(II) bioavailability, which in turn depend on interacting environmental factors (temperature, redox conditions, organic carbon, and sulfate) that induce nonlinear responses in MeHg production. Methylmercury bioaccumulation-biomagnification in Arctic aquatic food webs is a function of the MeHg reservoir in abiotic compartments, as well as ecological considerations such as food-chain length, growth rates, life-history characteristics, feeding behavior, and trophic interactions. 
Methylmercury concentrations in Arctic biota have increased significantly since the onset of the industrial age, and in some populations of fish, seabirds, and marine mammals toxicological thresholds are being exceeded. Due to the complex connection between Hg exposure and human health in Northern Peoples (arising from the dual role of country foods as both a potential Hg source and a nutritious, affordable food source with many physical and social health benefits), reductions in anthropogenic Hg emissions are seen as the only viable long-term solution. © 2013 Published by NRC Research Press.
Evers S.,University of Waterloo |
Nazar L.F.,University of Waterloo
Chemical Communications | Year: 2012
Graphene-sulfur composites with sulfur fractions as high as 87 wt% are prepared using a simple one-pot, scalable method. The graphene envelops the sulfur particles, providing a conductive shrink-wrap for electron transport. These materials are efficient cathodes for Li-S batteries, yielding 93% coulombic efficiency over 50 cycles with good capacity. © The Royal Society of Chemistry 2012.
Steiner S.H.,University of Waterloo |
Jones M.,Center for Healthcare Related Infection Surveillance and Prevention
Statistics in Medicine | Year: 2010
Monitoring medical outcomes is desirable to help quickly detect performance changes. Previous applications have focused mostly on binary outcomes, such as 30-day mortality after surgery. However, in many applications survival time data are routinely collected. In this paper, we propose an updating exponentially weighted moving average (EWMA) control chart to monitor risk-adjusted survival times. The updating EWMA (uEWMA) operates in continuous time; hence, the scores for each patient always reflect the most up-to-date information. The uEWMA can be implemented based on a variety of survival-time models and can be set up to provide an ongoing estimate of a clinically interpretable average patient score. The efficiency of the uEWMA is shown to compare favorably with competing methods. Copyright © 2009 John Wiley & Sons, Ltd.
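The core recursion behind any EWMA chart is simple to state; the sketch below shows only the generic discrete update z_t = lam*x_t + (1 - lam)*z_{t-1} on hypothetical risk-adjusted scores. It does not reproduce the paper's continuous-time uEWMA or its survival-model scoring, and the smoothing constant and scores are made-up placeholders.

```python
# Generic EWMA recursion on risk-adjusted patient scores.
# The smoothing constant lam and the scores below are illustrative
# placeholders; the uEWMA in the paper updates in continuous time
# using scores derived from survival-time models.

def ewma(scores, lam=0.1, z0=0.0):
    """Return the EWMA path z_t = lam*x_t + (1 - lam)*z_{t-1}."""
    z, path = z0, []
    for x in scores:
        z = lam * x + (1 - lam) * z
        path.append(round(z, 4))
    return path

# Scores here stand for 'observed minus expected' outcomes per patient:
path = ewma([0.2, -0.1, 0.4, 0.0])
```

An out-of-control signal would be raised when the path crosses a control limit chosen for a target false-alarm rate.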
Poulin F.J.,University of Waterloo
Journal of Physical Oceanography | Year: 2010
This article aims to advance the understanding of inherent randomness in geophysical fluids by considering the particular example of baroclinic shear flows that are spatially uniform in the horizontal directions and aperiodic in time. The time variability of the shear is chosen to be the Kubo oscillator, which is a family of time-dependent bounded noise that is oscillatory in nature with various degrees of stochasticity. The author analyzed the linear stability of a wide range of temporally periodic and aperiodic shears with a zero and nonzero mean to get a more complete understanding of the effect of oscillations in shear flows in the context of the two-layer quasigeostrophic Phillips model. It is determined that the parametric mode, which exists in the periodic limit, also exists in the range of small and moderate stochasticities but vanishes in highly erratic flows. Moreover, random variations weaken the effects of periodicity and yield growth rates more similar to that of the time-averaged steady-state analog. This signifies that the periodic shear flows possess the most extreme case of stabilization and destabilization and are thus anomalous. In the limit of an f plane, the linear stability problem is solved exactly to reveal that individual solutions to the linear dynamics with time-dependent baroclinic shear have growth rates that are equal to that of the time-averaged steady state. This implies that baroclinic shear flows with zero means are linearly stable in that they do not grow exponentially in time. This means that the stochastic mode that was found to exist in the Mathieu equation does not arise in this model. However, because the perturbations grow algebraically, the aperiodic baroclinic shear on an f plane can give rise to nonlinear instabilities. © 2010 American Meteorological Society.
Henderson H.A.,University of Waterloo
Neuropsychopharmacology | Year: 2014
Behavioral inhibition (BI) is an early-appearing temperament characterized by strong reactions to novelty. BI shows a good deal of stability over childhood and significantly increases the risk for later diagnosis of social anxiety disorder (SAD). Despite these general patterns, many children with high BI do not go on to develop clinical, or even subclinical, anxiety problems. Therefore, understanding the cognitive and neural bases of individual differences in developmental risk and resilience is of great importance. The present review is focused on the relation of BI to two types of information processing: automatic (novelty detection, attention biases to threat, and incentive processing) and controlled (attention shifting and inhibitory control). We propose three hypothetical models (Top-Down Model of Control; Risk Potentiation Model of Control; and Overgeneralized Control Model) linking these processes to variability in developmental outcomes for BI children. We argue that early BI is associated with an early bias to quickly and preferentially process information associated with motivationally salient cues. When this bias is strong and stable across development, the risk for SAD is increased. Later in development, children with a history of BI tend to display normative levels of performance on controlled attention tasks, but they demonstrate exaggerated neural responses in order to do so, which may further potentiate risk for anxiety-related problems. We conclude by discussing the reviewed studies with reference to the hypothetical models and make suggestions regarding future research and implications for treatment. Neuropsychopharmacology Reviews advance online publication, 27 August 2014; doi:10.1038/npp.2014.189.
Wu X.,University of Waterloo |
Xie L.-L.,University of Waterloo
IEEE Transactions on Information Theory | Year: 2014
Decode-and-forward (D-F) and compress-and-forward (C-F) are two fundamentally different relay strategies proposed by Cover and El Gamal in 1979. Individually, either of them has been successfully generalized to multirelay channels. In this paper, to allow each relay node the freedom of choosing either of the two strategies, we propose a unified framework, where both the D-F and C-F strategies can be employed simultaneously in the network. It turns out that, to incorporate in full the advantages of both the best known D-F and C-F strategies into a unified framework, the major challenge arises as follows: For the D-F relay nodes to fully utilize the help of the C-F relay nodes, decoding at the D-F relay nodes should not be conducted until all the blocks have been finished; however, in the multilevel D-F strategy, the upstream nodes have to decode prior to the downstream nodes in order to help, which makes simultaneous decoding at all the D-F relay nodes after all the blocks have been finished inapplicable. To tackle this problem, nested blocks combined with backward decoding are used in our framework, so that the D-F relay nodes at different levels can perform backward decoding at different frequencies. As such, the upstream D-F relay nodes can decode before the downstream D-F relay nodes, and the use of backward decoding at each D-F relay node ensures the full exploitation of the help of both the other D-F relay nodes and the C-F relay nodes. The achievable rates under our unified relay framework are found to combine both the best known D-F and C-F achievable rates and include them as special cases. It is also demonstrated through a Gaussian network example that our achievable rates are generally better than the rates obtained with existing unified schemes and with D-F or C-F alone. © 1963-2012 IEEE.
Gates M.,University of Waterloo
Rural and remote health | Year: 2013
Prevalence rates of overweight and obesity in Canada have risen rapidly in the past 20 years. Concurrent with the obesity epidemic, sleep time and physical activity levels have decreased among youth. Aboriginal youth experience disproportionately high obesity prevalence but there is inadequate knowledge of contributing factors. This research aimed to examine sleep and screen time behavior and their relationship to Body Mass Index (BMI) in on-reserve First Nations youth from Ontario, Canada. This was an observational population-based study of cross-sectional design. Self-reported physical activity, screen time, and lifestyle information were collected from 348 youth aged 10-18 years residing in five northern, remote First Nations communities and one southern First Nations community in Ontario, Canada, from October 2004 to June 2010. Data were collected in the school setting using the Waterloo Web-based Eating Behaviour Questionnaire. Based on self-reported height and weight, youth were classified as normal (including underweight), overweight, or obese according to BMI. Descriptive cross-tabulations and Pearson's χ2 tests were used to compare screen time, sleep habits, and physical activity across BMI categories. Participants demonstrated low levels of after-school physical activity, and screen time in excess of national guidelines. Overall, 75.5% reported being active in the evening three or fewer times per week. Approximately one-quarter of the surveyed youth watched more than 2 hours of television daily and 33.9% spent more than 2 hours on the internet or playing video games. For boys, time using the internet/video games (p=0.022) was positively associated with BMI category, with a greater than expected proportion of obese boys spending more than 2 hours using the internet or video games daily (56.7%).
Also for boys, time spent outside after school (p=0.033) was negatively associated with BMI category, with a smaller than expected proportion of obese boys spending 'most of the time' outside (presumably being active) after school. These relationships were not observed in girls. Adjusted standardized residuals suggest a greater than expected proportion of obese individuals had a television in their bedroom (66.7%) compared with the rest of the population. The current study adds to the limited information about contributors to overweight and obesity in First Nations youth living on-reserve in Canada. Concerns about inadequate sleep, excess screen time, and inadequate physical activity mirror those of the general population. Further investigation is warranted to improve the understanding of how various lifestyle behaviors influence overweight, obesity, and the development of chronic disease among First Nations youth. Initiatives to reduce screen time, increase physical activity, and encourage adequate sleep among on-reserve First Nations youth are recommended.
Konig R.,University of Waterloo
IEEE Transactions on Information Theory | Year: 2014
When two independent analog signals, X and Y, are added together giving Z = X + Y, the entropy of Z, H(Z), is not a simple function of the entropies H(X) and H(Y), but rather depends on the details of X and Y's distributions. Nevertheless, the entropy power inequality (EPI), which states that e^(2H(Z)) ≥ e^(2H(X)) + e^(2H(Y)), gives a very tight restriction on the entropy of Z. This inequality has found many applications in information theory and statistics. The quantum analogue of adding two random variables is the combination of two independent bosonic modes at a beam splitter. The purpose of this paper is to give a detailed outline of the proof of two separate generalizations of the EPI to the quantum regime. Our proofs are similar in spirit to the standard classical proofs of the EPI, but some new quantities and ideas are needed in the quantum setting. In particular, we find a new quantum de Bruijn identity relating entropy production under diffusion to a divergence-based quantum Fisher information. Furthermore, this Fisher information exhibits certain convexity properties in the context of beam splitters. © 2014 IEEE.
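Since the differential entropy of a Gaussian with variance var is H = 0.5*ln(2*pi*e*var), the entropy power e^(2H) is just 2*pi*e*var, and the classical EPI stated in the abstract holds with equality for independent Gaussians. A quick numerical check of that fact (variances chosen arbitrarily):

```python
# Check the classical entropy power inequality e^{2H(Z)} >= e^{2H(X)} + e^{2H(Y)}
# for independent Gaussians, where it is tight: e^{2H} = 2*pi*e*var.
import math

def entropy_power(var):
    """e^{2H} for a Gaussian with variance var, using H = 0.5*log(2*pi*e*var)."""
    h = 0.5 * math.log(2 * math.pi * math.e * var)
    return math.exp(2 * h)

vx, vy = 1.5, 2.5                       # arbitrary example variances
lhs = entropy_power(vx + vy)            # Z = X + Y has variance vx + vy
rhs = entropy_power(vx) + entropy_power(vy)
gap = lhs - rhs                         # zero (up to floating point) for Gaussians
```

For non-Gaussian X and Y the gap is strictly positive, which is what makes the inequality a useful lower bound.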
Lu N.,University of Waterloo |
Shen X.S.,University of Waterloo
IEEE Communications Surveys and Tutorials | Year: 2014
The capacity scaling law of wireless networks has been considered one of the most fundamental issues. In this survey, we aim at providing a comprehensive overview of the development in the area of scaling laws for throughput capacity and delay in wireless networks. We begin with background information on the notion of throughput capacity of random networks. Based on the benchmark random network model, we then elaborate on the advanced strategies adopted to improve the throughput capacity, and on other factors that affect the scaling laws. We also present the fundamental tradeoffs between throughput capacity and delay under a variety of mobility models. In addition, the capacity and delay for hybrid wireless networks are surveyed, in which there are at least two types of nodes functioning differently, e.g., normal nodes and infrastructure nodes. Finally, recent studies on scaling laws for throughput capacity and delay in emerging vehicular networks are introduced. © 2014 IEEE.
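For context, the benchmark random-network result that such surveys start from is the Gupta-Kumar per-node throughput scaling of order 1/sqrt(n*log n). The snippet below just evaluates that order of growth; the constant factor is omitted, so only comparisons across network sizes are meaningful.

```python
# Order-of-growth of per-node throughput in the benchmark random network
# model, Theta(1/sqrt(n*log n)). The constant factor is unspecified, so
# only the trend across network sizes n carries meaning here.
import math

def per_node_throughput_order(n):
    """Per-node throughput up to an unspecified constant factor."""
    return 1.0 / math.sqrt(n * math.log(n))

rates = {n: per_node_throughput_order(n) for n in (10, 100, 1000)}
# The per-node share shrinks as the network grows, which is the scaling
# bottleneck that the strategies surveyed in the paper try to overcome.
```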
Stewart T.C.,University of Waterloo |
Eliasmith C.,University of Waterloo
Proceedings of the IEEE | Year: 2014
In this paper, we review the theoretical and software tools used to construct Spaun, the first (and so far only) brain model capable of performing cognitive tasks. This tool set allowed us to configure 2.5 million simple nonlinear components (neurons) with 60 billion connections between them (synapses) such that the resulting model can perform eight different perceptual, motor, and cognitive tasks. To reverse-engineer the brain in this way, a method is needed that shows how large numbers of simple components, each of which receives thousands of inputs from other components, can be organized to perform the desired computations. We achieve this through the neural engineering framework (NEF), a mathematical theory that provides methods for systematically generating biologically plausible spiking networks to implement nonlinear and linear dynamical systems. On top of this, we propose the semantic pointer architecture (SPA), a hypothesis regarding some aspects of the organization, function, and representational resources used in the mammalian brain. We conclude by discussing Spaun, which is an example model that uses the SPA and is implemented using the NEF. Throughout, we discuss the software tool Neural ENGineering Objects (Nengo), which allows for the synthesis and simulation of neural models efficiently on the scale of Spaun, and provides support for constructing models using the NEF and the SPA. The resulting NEF/SPA/Nengo combination is a general tool set for both evaluating hypotheses about how the brain works, and for building systems that compute particular functions using neuron-like components. © 2014 IEEE.
Piccart M.J.,University of Waterloo
Cancer Research | Year: 2013
Trastuzumab, a monoclonal antibody directed at the HER2 receptor, is one of the most impressive targeted drugs developed in the last two decades. Indeed, when given in conjunction with chemotherapy, it improves the survival of women with HER2-positive breast cancer, both in advanced and in early disease. Its optimal duration, however, is poorly defined in both settings, with a significant economic impact in the adjuvant setting, where the drug is arbitrarily given for 1 year. This article reviews current attempts at shortening this treatment duration, emphasizing the likelihood of inconclusive results and, therefore, the need to investigate this important variable as part of the initial pivotal trials and with the support of public health systems. Failure to do so has major consequences for treatment affordability. Ongoing adjuvant trials of dual HER2 blockade, using trastuzumab in combination with a second anti-HER2 agent, and trials of the antibody-drug conjugate T-DM1 (trastuzumab emtansine) all have to be designed with 12 months of targeted therapy. © 2013 American Association for Cancer Research.
Menzies K.L.,University of Waterloo |
Jones L.,University of Waterloo
Optometry and Vision Science | Year: 2010
Biomaterials may be defined as artificial materials that can mimic, store, or come into close contact with living biological cells or fluids; they are becoming increasingly popular in the medical, biomedical, optometric, dental, and pharmaceutical industries. Within the ophthalmic industry, the best example of a biomaterial is a contact lens, which is worn by ∼125 million people worldwide. For biomaterials to be biocompatible, they cannot elicit any type of unfavorable response when exposed to the tissue they contact. A characteristic that significantly influences this response is surface wettability, which is often determined by measuring the contact angle of the material. This article reviews the impact of contact angle on the biocompatibility of tissue engineering substrates, blood-contacting devices, dental implants, intraocular lenses, and contact lens materials. Copyright © 2010 American Academy of Optometry.
Vogelsberger M.,Harvard - Smithsonian Center for Astrophysics |
Zavala J.,University of Waterloo |
Zavala J.,Perimeter Institute for Theoretical Physics
Monthly Notices of the Royal Astronomical Society | Year: 2013
Self-interacting dark matter offers an interesting alternative to collisionless dark matter because of its ability to preserve the large-scale success of the cold dark matter model, while seemingly solving its challenges on small scales. We present here the first study of the expected dark matter detection signal in a fully cosmological context taking into account different self-scattering models for dark matter. We demonstrate that models with constant and velocity-dependent cross-sections, which are consistent with observational constraints, lead to distinct signatures in the velocity distribution, because non-thermalized features found in the cold dark matter distribution are thermalized through particle scattering. Depending on the model, self-interaction can lead to a 10 per cent reduction of the recoil rates at high energies, corresponding to a minimum speed that can cause recoil larger than 300 km s^-1, compared to the cold dark matter case. At lower energies these differences are smaller than 5 per cent for all models. The amplitude of the annual modulation signal can increase by up to 25 per cent, and the day of maximum amplitude can shift by about two weeks with respect to the cold dark matter expectation. Furthermore, the exact day of phase reversal of the modulation signal can also differ by about a week between the different models. In general, models with velocity-dependent cross-sections peaking at the typical velocities of dwarf galaxies lead only to minor changes in the detection signals, whereas allowed constant cross-section models lead to significant changes. We conclude that different self-interacting dark matter scenarios might be distinguished from each other through the details of direct detection signals. Furthermore, detailed constraints on the intrinsic properties of dark matter based on null detections should take into account the possibility of self-scattering and the resulting effects on the detector signal. © 2013 The Author.
Published by Oxford University Press on behalf of the Royal Astronomical Society.
Zaki A.,University of Waterloo |
Dave N.,University of Waterloo |
Liu J.,University of Waterloo
Journal of the American Chemical Society | Year: 2012
The melting temperature (Tm) of DNA is affected not only by salt but also by the presence of high molecular weight (MW) solutes, such as polyethylene glycol (PEG), acting as a crowding agent. For short DNAs in a solution of low MW PEGs, however, the change of excluded volume upon melting is very small, leading to no increase in Tm. We demonstrate herein that by attaching 12-mer DNAs to gold nanoparticles, the excluded volume change was significantly increased upon melting, leading to increased Tm even with PEG 200. Larger AuNPs, higher MW PEGs, and higher PEG concentrations show even larger effects in stabilizing the DNA. This study reveals a unique and fundamental feature at the nanoscale due to geometric effects. It also suggests that weak interactions can be stabilized by a combination of polyvalent binding and the enhanced macromolecular crowding effect using nanoparticles. © 2011 American Chemical Society.
Scott M.,University of Waterloo |
Hwa T.,University of California at San Diego
Current Opinion in Biotechnology | Year: 2011
Quantitative empirical relationships between cell composition and growth rate played an important role in the early days of microbiology. Gradually, the focus of the field began to shift from growth physiology to the ever more elaborate molecular mechanisms of regulation employed by the organisms. Advances in systems biology and biotechnology have renewed interest in the physiology of the cell as a whole. Furthermore, gene expression is known to be intimately coupled to the growth state of the cell. Here, we review recent efforts in characterizing such couplings, particularly the quantitative phenomenological approaches exploiting bacterial 'growth laws.' These approaches point toward underlying design principles that can guide the predictive manipulation of cell behavior in the absence of molecular details. © 2011 Elsevier Ltd.
Duan Z.,University of Waterloo
International Journal of Thermal Sciences | Year: 2012
The objective of this paper is to furnish the research and design communities with a simple and convenient means of predicting quantities of engineering interest for slip flow in doubly connected microchannels. Slip flow in doubly connected microchannels has been examined and a simple model is proposed to predict the friction factor and Reynolds number product. Doubly connected regions are inherently more difficult to solve than simply connected regions, and for slip flow no solutions or graphical and tabulated data exist for nearly all doubly connected geometries. The developed model fills this void: it can be used to predict the friction factor and Reynolds number product, mass flow rate, pressure distribution, and pressure drop of slip flow in doubly connected microchannels for practical engineering design. The proposed models are preferable since the effects of the various independent parameters are demonstrated and the difficulty and investment are completely negligible compared with the cost of alternative numerical methods. © 2012 Elsevier Masson SAS. All rights reserved.
Wu D.Y.-T.,University of Waterloo |
Boumaiza S.,University of Waterloo
IEEE Transactions on Microwave Theory and Techniques | Year: 2012
A new Doherty amplifier configuration with an intrinsically broadband characteristic is presented based on the synthesis of key ideas derived from the analyses of the load modulation concept and the conventional Doherty amplifier. Important building blocks to implement the proposed Doherty amplifier structure are outlined, which include the quasi-lumped quarter-wave transmission line, as well as the Klopfenstein taper for broadband impedance matching. A 90-W GaN broadband Doherty amplifier was designed and fabricated and achieved an average peak output power of 49.9 dBm, an average gain of 15.3 dB, and average peak and 6-dB back-off efficiencies of 67.3% and 60.6%, respectively, from 700 to 1000 MHz (35.3% bandwidth). The amplifier is shown to be highly linearizable when driven with 20-MHz WCDMA and long-term evolution signals, achieving adjacent channel power ratio of better than -48 dBc after digital predistortion. © 1963-2012 IEEE.
Shahsavan H.,University of Waterloo |
Zhao B.,University of Waterloo
Macromolecules | Year: 2014
Inspired by the amazing adhesion abilities of the toe pads of geckos and tree frogs, we report an experimental study on the integration of a dissipative material (resembling the dissipative and wet nature of the tree frog toe pads) to an elastic fibrillar interface (resembling the dry and fibrillar nature of the gecko foot pads). Accordingly, a new type of functionally graded adhesive is introduced, which is composed of an array of elastic micropillars at the base, a thin elastic intermediate layer and a viscoelastic top layer. A systematic investigation of this bioinspired graded adhesive structure was performed in comparison with three control adhesive materials: a viscoelastic film, a viscoelastic film coated on a soft elastomer, and elastic film-terminated micropillars. The results showed that this graded structure bestows remarkable adhesive properties in terms of pull-off force, work of adhesion, and structural integrity (i.e., inhibited cohesive failure). Moreover, an extraordinary compliance was observed, which is attributed to the polymer slippage at the top layer. Overall, we attribute the improved adhesive properties to the synergetic interplay of top viscous-elastic layers with the base biomimetic micropillars. © 2013 American Chemical Society.
Wang Q.,University of Waterloo |
Sun B.,University of Waterloo |
Aziz H.,University of Waterloo
Advanced Functional Materials | Year: 2014
The degradation mechanisms of phosphorescent organic light-emitting devices (PhOLEDs) are studied. The results show that PhOLED degradation is closely linked to interactions between excitons and positive polarons in the host material of the emitter layer (EML), which lead to its aggregation near the EML/electron transport layer (ETL) interface. This exciton-polaron-induced aggregation (EPIA) is associated with the emergence of new emission bands at longer wavelengths in the electroluminescence spectra of these materials, which can be detected after prolonged device operation. Such EPIA processes are found to occur in a variety of wide-bandgap materials commonly used as hosts in PhOLEDs and are correlated with device degradation. Quite notably, the extent of EPIA appears to correlate with the material's bandgap rather than with the glass-transition temperature. The findings uncover a new degradation mechanism, caused by polaron-exciton interactions, that appears to be behind the lower stability of OLEDs utilizing wide-bandgap materials in general. The same degradation mechanism can be expected to be present in other organic optoelectronic devices. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Thomas J.P.,University of Waterloo |
Leung K.T.,University of Waterloo
Advanced Functional Materials | Year: 2014
Hybrid solar cells made of a p-type conducting polymer, poly(3,4-ethylenedioxythiophene):polystyrene sulfonate (PEDOT:PSS), on Si have gained considerable interest for the fabrication of cost-effective high-efficiency devices. However, most of the high power conversion efficiency (PCE) performances have been obtained from solar cells fabricated on surface-structured Si substrates. High-performance planar single-junction solar cells have considerable advantages in terms of processing and cost, because they do not require complex surface texturing processes. The interface of single-junction solar cells can critically influence the performance. Here, we demonstrate the effect of adding different surfactants to a co-solvent-optimized PEDOT:PSS polymer, which, in addition to acting as a p-layer and as an anti-reflective coating, also enhances the device performance of a hybrid planar-Si solar cell. Using time-of-flight secondary ion mass spectrometry, we conduct three-dimensional chemical imaging of the interface, which enables us to characterize the micropore defects found to limit the PCE. Upon minimizing these micropore defects with the addition of optimized amounts of fluorosurfactant and co-solvent, we achieve a PEDOT:PSS/planar-Si cell with a record-high PCE of 13.3%. Our approach of micropore defect reduction can also be used to improve the performance of other organic electronic devices based on PEDOT:PSS. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Nelson-Wong E.,University of Waterloo |
Callaghan J.P.,University of Waterloo
Journal of Electromyography and Kinesiology | Year: 2010
Purpose and scope: Low back pain development has been associated with static standing postures in occupational settings. Previous work has demonstrated gluteus muscle co-activation as a predominant pattern in previously asymptomatic individuals who develop low back pain when exposed to 2 h of standing. The purpose of this work was to investigate muscle co-activation as a predisposing factor in low back pain development while including a multifactorial approach of clinical assessment tools and psychosocial assessments to identify individuals who are at risk for pain development during standing. Results: Forty percent of participants developed low back pain during the 2 h of standing. Pain developers demonstrated bilateral gluteus medius and trunk flexor-extensor muscle co-activation prior to reports of pain development. Pain developers and non-pain developers demonstrated markedly different patterns of muscle activation during the 2 h of standing. A novel screening test of active hip abduction was the only clinical assessment tool that predicted pain development. Conclusions: Gluteus medius and trunk muscle co-activation appears to be a predisposing rather than an adaptive factor in low back pain development during standing. A combination of a positive active hip abduction test and the presence of muscle co-activation during standing may be useful for early identification of at-risk individuals. © 2009 Elsevier Ltd. All rights reserved.
Duhamel J.,University of Waterloo
Langmuir | Year: 2014
This review has three aims: to introduce the reader to the mathematical complexity associated with the analysis of fluorescence decays acquired from solutions of macromolecules labeled with a fluorophore and its quencher, which can interact with each other via photophysical processes within the macromolecular volume; to survey the experimental and mathematical approaches that have been proposed over the years to handle this complexity; and to describe the information one can expect to retrieve about the internal dynamics of such fluorescently labeled macromolecules. In my view, the ideal fluorophore-quencher pair for studying the internal dynamics of fluorescently labeled macromolecules would involve a long-lived fluorophore, a fluorophore and quencher that do not undergo energy migration, and a photophysical process that results in a change in fluorophore emission upon contact between the excited fluorophore and the quencher. Pyrene, with its ability to form an excimer on contact between excited-state and ground-state species, happens to possess all of these properties. Although the concepts described here apply to any fluorophore-quencher pair sharing pyrene's exceptional photophysical properties, this review focuses on pyrene-labeled macromolecules, which have been characterized in great detail over the past 40 years, and presents the main models used today to analyze their fluorescence decays reliably. These models are based on Birks' scheme, the DMD model, the fluorescence blob model, and the model free analysis. The review also provides a step-by-step protocol that should enable the nonexpert user to achieve a successful decay analysis free of artifacts.
Finally, some examples of studies of pyrene-labeled macromolecules are also presented to illustrate the different types of information that can be retrieved from these fluorescence decay analyses depending on the model that is selected. © 2013 American Chemical Society.
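The Birks scheme underlying several of these decay models can be sketched concretely. The snippet below illustrates the standard two-state monomer/excimer kinetics (textbook photophysics, not code from the review); all rate and lifetime values are hypothetical. The monomer fluorescence decays as a sum of two exponentials whose rates are the eigenvalues of the 2x2 kinetic matrix built from the monomer and excimer lifetimes (tau_m, tau_e), the excimer formation rate k1, and the dissociation rate k_minus1.

```python
import math

def birks_decay_rates(tau_m, tau_e, k1, k_minus1):
    # Decay-rate constants of the [monomer, excimer] kinetic matrix:
    # the biexponential monomer decay I(t) = a1*exp(-lam1*t) + a2*exp(-lam2*t)
    x = 1 / tau_m + k1          # total monomer depopulation rate
    y = 1 / tau_e + k_minus1    # total excimer depopulation rate
    root = math.sqrt((y - x) ** 2 + 4 * k1 * k_minus1)
    return (x + y - root) / 2, (x + y + root) / 2  # lam1 < lam2

# hypothetical pyrene-like values (lifetimes in ns, rates in ns^-1)
lam1, lam2 = birks_decay_rates(tau_m=200.0, tau_e=60.0, k1=0.02, k_minus1=0.001)
print(lam1 < lam2)  # True: a slow and a fast decay component
```

The slower rate lam1 governs the long-lived tail of the decay; fitting both components is what makes these analyses mathematically demanding.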
Lanigan N.,University of Waterloo |
Wang X.,University of Waterloo
Chemical Communications | Year: 2013
Building on established supramolecular chemistry, metal coordination and organometallic chemistry have been widely explored for supramolecular polymers and nanostructures. Increasingly, research has demonstrated that this approach is promising for the synthesis of novel materials with functions and properties derived from metal elements and their coordination structures. Unique self-assembling behaviour and unexpected supramolecular structures are frequently discovered due to multiple non-covalent interactions in addition to metal coordination. However, an explicit understanding of the synergistic effects of non-covalent interactions for designed synthesis of metal containing assemblies with structure correlated properties remains a challenge to be addressed. Recent literature in the area is highlighted in this review in order to illustrate newly explored concepts and stress the importance of developing well understood and controlled supramolecular chemistry for designed synthesis. © 2013 The Royal Society of Chemistry.
Khan S.S.,University of Waterloo |
Madden M.G.,National University of Ireland
Knowledge Engineering Review | Year: 2014
One-class classification (OCC) algorithms aim to build classification models when the negative class is either absent, poorly sampled, or not well defined. This unique situation constrains the learning of efficient classifiers by defining the class boundary with knowledge of the positive class alone. The OCC problem has been considered and applied under many research themes, such as outlier/novelty detection and concept learning. In this paper, we present a unified view of the general problem of OCC by presenting a taxonomy of study for OCC problems, based on the availability of training data, the algorithms used, and the application domains. We further delve into each category of the proposed taxonomy and present a comprehensive literature review of OCC algorithms, techniques, and methodologies, with a focus on their significance, limitations, and applications. We conclude by discussing some open research problems in the field of OCC and present our vision for future research. © Cambridge University Press 2014.
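To make the one-class setting concrete, here is a minimal illustrative sketch (not an algorithm from the survey): a classifier fitted on positive examples only, which learns a centroid and a radius from the training data and flags any point outside that boundary as an outlier.

```python
import math

# One-class classifier trained with positive examples only:
# fit a centroid and take the largest training distance as the boundary.
def fit(points):
    dim = len(points[0])
    centroid = [sum(p[i] for p in points) / len(points) for i in range(dim)]
    radius = max(math.dist(p, centroid) for p in points)
    return centroid, radius

def predict(model, point):
    centroid, radius = model
    return "positive" if math.dist(point, centroid) <= radius else "outlier"

model = fit([(0.9, 1.0), (1.1, 1.0), (1.0, 0.8), (1.0, 1.2)])
print(predict(model, (1.0, 1.05)))  # positive: inside the learned boundary
print(predict(model, (3.0, 3.0)))   # outlier: far from all positives
```

Real OCC methods (one-class SVMs, density estimators, autoencoders) replace this crude spherical boundary with richer models, but the training signal is the same: the positive class alone.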
Lee S.C.,University of Waterloo |
Lo W.,University of Waterloo |
Holm R.H.,Harvard University
Chemical Reviews | Year: 2014
This review discusses developments in the biomimetic chemistry of cubane-type and higher nuclearity iron-sulfur clusters. It presents a classification of the primary tactics employed in the synthesis and related reactions of metal-sulfur weak-field clusters by use of illustrative cases, the majority of which are drawn from our own work. Biomimetic synthesis is an endeavor directed toward synthetic representations of protein-bound metal sites, with the attendant potential benefit of the discovery of new reactions and structures regardless of biological relevance. Depicted structures have been authenticated by X-ray crystallography, redox potentials are quoted vs. the SCE, and 57Fe isomer shifts in Mössbauer spectroscopy are referenced to iron metal at room temperature.
Brown L.C.,University of Waterloo |
Duguay C.R.,University of Waterloo
Progress in Physical Geography | Year: 2010
This paper reviews the current state of knowledge pertaining to the interactions of lake ice and climate. Lake ice has been shown to be sensitive to climate variability through observations and modelling, and both long-term and short-term trends have been identified from ice records. Ice phenology trends have typically been associated with variations in air temperature, while ice thickness trends tend to be associated more with changes in snow cover. The role of ice cover in the regional climate is less documented, and with longer ice-free seasons possible as a result of changing climate conditions, especially at higher latitudes, the effects of lakes on their surrounding climate (such as increased evaporation, lake-effect snow and thermal moderation of surrounding areas) can be expected to become more prominent. The inclusion of lakes and lake ice in climate modelling is an area of increased attention in recent studies. An important step in improving predictions of ice conditions in models is the assimilation of remote sensing data in areas where in-situ data are lacking or non-representative of lake conditions. The ability to accurately represent ice cover on lakes will be an important step in the improvement of global circulation models, regional climate models and numerical weather forecasting. © The Author(s) 2010.
Nafissi N.,University of Waterloo |
Slavcev R.,University of Waterloo
Applied Microbiology and Biotechnology | Year: 2014
Bacteriophage recombination systems have been widely used in biotechnology for modifying prokaryotic species, for creating transgenic animals and plants, and, more recently, for human cell gene manipulation. In contrast to homologous recombination, which benefits from the endogenous recombination machinery of the cell, site-specific recombination requires an exogenous source of recombinase in mammalian cells. The mechanism of bacteriophage evolution and their coexistence with bacterial cells has been a point of interest ever since bacterial viruses' life cycles were first explored. Phage recombinases have already been exploited as valuable genetic tools, and new phage enzymes and their potential applications to genetic engineering, genome manipulation, vectorology, the generation of new transgene delivery vectors, and cell therapy are attractive areas of ongoing research. This paper reviews the significance and role of phage recombination systems in biotechnology, with specific focus on homologous and site-specific recombination conferred by the coli phages λ and N15, the integrase from the Streptomyces phage ΦC31, the recombination system of phage P1, and the recently characterized recombination functions of the Yersinia phage PY54. Key steps of the molecular mechanisms involving phage recombination functions and their application to molecular engineering, our novel exploitations of the PY54-derived recombination system, and its application to the development of new DNA vectors are discussed. © 2014 Springer-Verlag Berlin Heidelberg.
Varghese G.,Maxim Integrated Products Inc. |
Wang Z.,University of Waterloo
IEEE Transactions on Circuits and Systems for Video Technology | Year: 2010
We propose a video denoising algorithm based on a spatiotemporal Gaussian scale mixture model in the wavelet transform domain. This model simultaneously captures the local correlations between the wavelet coefficients of natural video sequences across both space and time. Such correlations are further strengthened with a motion compensation process, for which a Fourier domain noise-robust cross correlation algorithm is proposed for motion estimation. Bayesian least square estimation is used to recover the original video signal from the noisy observation. Experimental results show that the performance of the proposed approach is competitive when compared with state-of-the-art video denoising algorithms based on both peak signal-to-noise-ratio and structural similarity evaluations. © 2010 IEEE.
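The Bayesian least-squares step can be illustrated in its simplest setting. The snippet below is not the paper's spatiotemporal Gaussian scale mixture estimator; it shows only the scalar Gaussian special case that such estimators generalize, where the posterior-mean estimate of a wavelet coefficient reduces to Wiener shrinkage of the noisy observation.

```python
# Scalar Gaussian case: a zero-mean coefficient x with variance vx is
# observed as y = x + n, with independent Gaussian noise of variance vn.
# The Bayesian least-squares (posterior mean) estimate is Wiener shrinkage:
#   E[x | y] = vx / (vx + vn) * y
def bls_gaussian(y, vx, vn):
    return vx / (vx + vn) * y

# Strong signal relative to noise -> mild shrinkage toward zero
print(bls_gaussian(2.0, vx=3.0, vn=1.0))  # 1.5
```

In the GSM framework, each neighborhood of wavelet coefficients gets its own (hidden) variance multiplier, so the shrinkage factor adapts locally; the formula above is the building block applied after conditioning on that multiplier.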
Suttisansanee U.,University of Waterloo |
Honek J.F.,University of Waterloo
Seminars in Cell and Developmental Biology | Year: 2011
The glyoxalase system is composed of two metalloenzymes, Glyoxalase I and Glyoxalase II. This system is important in the detoxification of methylglyoxal, among other roles. Detailed studies have determined that a number of bacterial Glyoxalase I enzymes are maximally activated by Ni2+ and Co2+ ions, but are inactive in the presence of Zn2+. This is in contrast to the Glyoxalase I enzyme from humans, which is catalytically active with Zn2+ as well as a number of other metal ions. The structure-activity relationships between these two classes of Glyoxalase I are serving as important clues to how the molecular structures of these proteins control metal activation profiles as well as to clarify the mechanistic chemistry of these catalysts. In addition, the possibility of targeting inhibitors against the bacterial versus human enzyme has the potential to lead to new approaches to combat bacterial infections. © 2011 Elsevier Ltd.
Chan T.,University of Waterloo |
Gu F.,University of Waterloo
Expert Review of Molecular Diagnostics | Year: 2011
Sepsis, an innate immunological response of systemic inflammation to infection, is a growing problem worldwide with a relatively high mortality rate. Immediate treatment is required, necessitating quick, early and accurate diagnosis. Rapid molecular-based tests have been developed to address this need, but still suffer some disadvantages. The most commonly studied biomarkers of sepsis are reviewed for their current uses and diagnostic accuracies, including C-reactive protein, procalcitonin, serum amyloid A, mannan and IFN-γ-inducible protein 10, as well as other potentially useful biomarkers. A singular ideal biomarker has not yet been identified; an alternative approach is to shift research focus to determine the diagnostic relevancy of multiple biomarkers when used in concert. Challenges facing biomarker research, including lack of methodology standardization and assays with better detection limits, are discussed. The ongoing efforts in the development of a multiplex point-of-care testing kit, enabling quick and reliable detection of serum biomarkers, may have great potential for early diagnosis of sepsis. © 2011 Expert Reviews Ltd.
Mohamed T.,University of Waterloo |
Rao P.P.N.,University of Waterloo
Current Medicinal Chemistry | Year: 2011
Alzheimer's disease (AD) is a highly complex and rapidly progressive neurodegenerative disorder characterized by the systemic collapse of cognitive function and the formation of dense amyloid plaques and neurofibrillary tangles. AD pathology is described by the cholinergic, amyloid and tau hypotheses. Current pharmacotherapy with known anti-cholinesterases, such as Aricept® and Exelon®, only offers symptomatic relief without any disease-modifying effects. It is now clear that in order to prevent the rapid progression of AD, new therapeutic treatments should target multiple AD pathways as opposed to the traditional "one drug, one target" approach. This review will focus on the recent advances in medicinal chemistry aimed at the development of small molecule therapies that target various AD pathological routes such as the cholinesterases (AChE and BuChE), amyloidogenic secretases (β/γ-secretase), amyloid-β aggregation, tau phosphorylation and fibrillation, and metal-ion redox/reactive oxygen species (ROS). Some notable ring templates will be discussed along with their structure-activity relationship (SAR) data and their multiple modes of action. These emerging trends signal a paradigm shift in anti-AD therapies aimed at the development of multifunctional small molecules as disease-modifying agents (DMAs). © 2011 Bentham Science Publishers.
Chabot V.,University of Waterloo |
Higgins D.,University of Waterloo |
Yu A.,University of Waterloo |
Xiao X.,General Motors |
And 2 more authors.
Energy and Environmental Science | Year: 2014
This paper gives a comprehensive review of the most recent progress in the synthesis, characterization, fundamental understanding, and performance of graphene and graphene oxide sponges. Practical applications are considered, including use in composite materials, as electrode materials for electrochemical sensors, as absorbers for both gases and liquids, and as electrode materials for devices involved in electrochemical energy storage and conversion. Several advantages of both graphene and graphene oxide sponges are emphasized, such as three-dimensional graphene networks, high surface area, high electrical/thermal conductivities, high chemical/electrochemical stability, high flexibility and elasticity, and extremely high surface hydrophobicity. To facilitate further research and development, the technical challenges are discussed, and several future research directions are also suggested. This journal is © 2014 the Partner Organisations.
Wesson P.S.,University of Waterloo
International Journal of Modern Physics D | Year: 2015
Recent criticism of higher-dimensional extensions of Einstein's theory is considered. This may have some justification in regard to string theory, but is misguided as applied to five-dimensional (5D) theories with a large extra dimension. Such theories smoothly embed general relativity, ensuring recovery of the latter's observational support. When the embedding of spacetime is carried out in accordance with Campbell's theorem, the resulting 5D theory naturally explains the origin of classical matter and vacuum energy. Also, constraints on the equations of motion near a high-energy surface or membrane in the 5D manifold lead to quantization and quantum uncertainty. These are major returns on the modest investment of one extra dimension. Instead of fruitless bickering about whether it is possible to "see" the fifth dimension, it is suggested that it be treated on par with other concepts of physics, such as time. The main criterion for the acceptance of a fifth dimension (or not) should be its usefulness. © 2015 World Scientific Publishing Company.
Gomez-Rios G.A.,University of Waterloo |
Pawliszyn J.,University of Waterloo
Chemical Communications | Year: 2014
A new SPME device was developed and applied for quick solventless extraction/enrichment of small molecules from complex matrices. Subsequently, the device was coupled as a transmission-mode substrate to DART, yielding limits of detection at the low pg mL-1 level in less than 3 minutes with reproducibility below 5% RSD. This journal is © the Partner Organisations 2014.
Nayak P.K.,University of Waterloo
Ecology and Society | Year: 2014
Innovations in social-ecological research require novel approaches to conceiving change in human-environment systems. The study of history constitutes an important element of this process. First, using the Chilika Lagoon small-scale fisheries in India as a case, I reflect on the appropriateness of a social-ecological perspective for understanding economic history. Second, I examine how changes in various components of the lagoon's social-ecological system influenced and shaped economic history and the political processes surrounding it. I then discuss the two-way linkages between economic history and social-ecological processes to highlight that the components of a social-ecological system, including the economic aspects, follow an interactive and interdependent trajectory, such that their combined impacts have important implications for human-environment connections and the sustainability of the system as a whole. Social, ecological, economic, and political components of a system are interlinked and may jointly contribute to the shaping of specific histories. Based on this synthesis, I offer insights for moving beyond theoretical, methodological, and disciplinary boundaries as an overarching approach, an inclusive lens, to study change in complex social-ecological systems. © 2014 by the author(s).
Wang F.,University of Waterloo |
Liu B.,University of Waterloo |
Ip A.C.-F.,University of Waterloo |
Liu J.,University of Waterloo
Advanced Materials | Year: 2013
Nano-graphene oxide can adsorb both doxorubicin and zwitterionic dioleoyl-sn-glycero-3-phosphocholine (DOPC) liposomes in an orthogonal and non-competing manner with high capacities based on different surface and intermolecular forces taking place on the heterogeneous surface of the graphene oxide. The system forms stable colloids, allowing co-delivery of both cargos to cancer cells. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Parry D.C.,University of Waterloo
Leisure Sciences | Year: 2014
Many feminist scholars avow social justice as the ultimate goal of their research, but the process remains poorly conceptualized. With this in mind, the purpose of this article is to use feminist leisure scholarship to provide insight into the ways that I see myself and others enacting social justice. In particular, I outline how a politics of hope, transformative encounters, and activism enable feminist leisure scholars to make the world more just. I consider future areas of feminist leisure research that would benefit from a social justice agenda and conclude with a cautionary note about the seductive postfeminist message that the work of feminism is done. © 2014 Taylor & Francis Group, LLC.
Hall P.A.,University of Waterloo
Current Directions in Psychological Science | Year: 2016
Human beings have reliable preferences for energy-rich foods; these preferences are present at birth and possibly innate. Relatively recent changes in our day-to-day living context have rendered such foods commonly encountered, nearly effortless to procure, and frequently brought to mind. Theoretical, conceptual, and empirical perspectives from the field of social neuroscience support the hypothesis that the increase in the prevalence of overweight and obesity in first- and second-world countries may be a function of these dynamics coupled with our highly evolved but ultimately imperfect capacities for self-control. This review describes the significance of executive-control systems for explaining the occurrence of nonhomeostatic forms of dietary behavior, that is, those aspects of calorie ingestion that are not for the purpose of replacing calories burned. I focus specifically on experimental findings, including those from cortical-stimulation studies, that collectively support a causal role for executive-control systems in modulating cravings for and consumption of high-calorie foods. © The Author(s) 2016.
Johnston N.,University of Waterloo
Physical Review A - Atomic, Molecular, and Optical Physics | Year: 2013
The separability from spectrum problem asks for a characterization of the eigenvalues of the bipartite mixed states ρ with the property that U†ρU is separable for all unitary matrices U. This problem has been solved when the local dimensions m and n satisfy m=2 and n≤3. We solve all remaining qubit-qudit cases (i.e., when m=2 and n≥4 is arbitrary). In all of these cases we show that a state is separable from spectrum if and only if U†ρU has positive partial transpose for all unitary matrices U. This equivalence is in stark contrast with the usual separability problem, where a state having positive partial transpose is a strictly weaker property than it being separable. © 2013 American Physical Society.
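The positive-partial-transpose criterion central to this result can be illustrated with the two-qubit Werner state, whose partial-transpose spectrum is known in closed form (standard textbook material, not a computation from the paper):

```python
# For the two-qubit Werner state rho = p*|psi-><psi-| + (1-p)*I/4,
# the partial transpose has eigenvalues (1+p)/4 (three-fold degenerate)
# and (1-3p)/4. The state is PPT -- and, at local dimensions 2x2,
# separable -- exactly when p <= 1/3.
def ppt_min_eigenvalue(p):
    return (1 - 3 * p) / 4

def is_separable_werner(p):
    return ppt_min_eigenvalue(p) >= 0

print(is_separable_werner(0.2))  # True: positive partial transpose
print(is_separable_werner(0.8))  # False: negative partial transpose
```

The paper's qubit-qudit result extends this equivalence in the "from spectrum" sense: for m=2, a state is separable from spectrum exactly when every unitary conjugate of it remains PPT.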
Brown E.G.,University of Waterloo
Physical Review A - Atomic, Molecular, and Optical Physics | Year: 2013
We study the harvesting of quantum and classical correlations from a hot scalar field in a periodic cavity by a pair of spatially separated oscillator-detectors. Specifically, we utilize nonperturbative and exact (non-numerical) techniques to solve for the evolution of the detectors-field system and then we examine how the entanglement, Gaussian quantum discord, and mutual information obtained by the detectors change with the temperature of the field. While (as expected) the harvested entanglement rapidly decays to zero as temperature is increased, we find remarkably that both the mutual information and the discord can actually be increased by multiple orders of magnitude via increasing the temperature. We go on to explain this phenomenon by a variety of means and are able to make accurate predictions of the behavior of thermal amplification. By doing this we also introduce a new perspective on harvesting in general and illustrate that the system can be represented as two dynamically decoupled systems, each with only a single detector. The thermal amplification of discord harvesting represents an exciting prospect for discord-based quantum computation, including its use in entanglement activation. © 2013 American Physical Society.
Hassan F.M.,University of Waterloo |
Chabot V.,University of Waterloo |
Elsayed A.R.,University of Waterloo |
Xiao X.,General Motors |
Chen Z.,University of Waterloo
Nano Letters | Year: 2014
A novel, economical flash heat treatment of the fabricated silicon based electrodes is introduced to boost the performance and cycle capability of Li-ion batteries. The treatment reveals a high mass fraction of Si, improved interfacial contact, synergistic SiO2/C coating, and a conductive cellular network for improved conductivity, as well as flexibility for stress compensation. The enhanced electrodes achieve a first cycle efficiency of ∼84% and a maximum charge capacity of 3525 mA h g-1, almost 84% of silicon's theoretical maximum. Further, a stable reversible charge capacity of 1150 mA h g-1 at 1.2 A g-1 can be achieved over 500 cycles. Thus, the flash heat treatment method introduces a promising avenue for the production of industrially viable, next-generation Li-ion batteries. © 2013 American Chemical Society.
Liang K.,University of Waterloo |
Keles S.,University of Wisconsin - Madison
BMC Bioinformatics | Year: 2012
Background: ChIP-seq has become an important tool for identifying genome-wide protein-DNA interactions, including transcription factor binding and histone modifications. In ChIP-seq experiments, ChIP samples are usually coupled with their matching control samples. Proper normalization between the ChIP and control samples is an essential aspect of ChIP-seq data analysis. Results: We have developed a novel method for estimating the normalization factor between the ChIP and the control samples. Our method, named NCIS (Normalization of ChIP-seq), can accommodate both low and high sequencing depth datasets. We compare statistical properties of NCIS against existing methods in a set of diverse simulation settings, where NCIS enjoys the best estimation precision. In addition, we illustrate the impact of the normalization factor in FDR control and show that NCIS leads to more power among methods that control FDR at nominal levels. Conclusion: Our results indicate that proper normalization between the ChIP and control samples is an important step in ChIP-seq analysis in terms of power and error rate control. Our proposed method shows excellent statistical properties and is useful in the full range of ChIP-seq applications, especially with deeply sequenced data. © 2012 Liang and Keleş; licensee BioMed Central Ltd.
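The core idea of estimating a ChIP-to-control normalization factor from background regions can be sketched as follows. This is a deliberately simplified illustration of the background-ratio idea, not the published NCIS algorithm (which chooses the background threshold adaptively); the bin counts and the fixed threshold here are made up.

```python
# Estimate a ChIP/control scaling factor using only low-coverage bins,
# where reads are presumed to be background rather than true binding signal.
def normalization_factor(chip_bins, control_bins, max_total=20):
    chip_sum = control_sum = 0
    for c, k in zip(chip_bins, control_bins):
        if c + k <= max_total:      # treat low-coverage bins as background
            chip_sum += c
            control_sum += k
    return chip_sum / control_sum

chip = [3, 5, 2, 100, 4, 80, 6]     # hypothetical binned read counts
control = [4, 6, 3, 10, 5, 9, 7]    # (peaks at the 100- and 80-count bins)
print(round(normalization_factor(chip, control), 2))  # 0.8
```

Using all bins instead would inflate the factor, because enriched (peak) bins contribute ChIP reads with no matching control signal; excluding them is what makes background-based normalization conservative.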
Souza-Silva T.A.,University of Waterloo |
Gionfriddo E.,University of Waterloo |
Pawliszyn J.,University of Waterloo
TrAC - Trends in Analytical Chemistry | Year: 2015
The present review, Part II of a comprehensive review on solid-phase microextraction (SPME) applied to complex matrices, aims to describe recent developments in SPME technology applied in the field of food analysis. We briefly introduce a perspective on the most commonly performed types of analysis within food studies, and place particular attention on the more recent SPME developments of new extraction phases, as this is recognized as the driving force behind most of the advances in the technique in this area. We also address quantitation, with a concise yet assertive discussion of calibration strategies for SPME methods in complex matrices. © 2015 Elsevier B.V.
Brown L.C.,University of Waterloo |
Duguay C.R.,University of Waterloo
Cryosphere | Year: 2011
Lakes comprise a large portion of the surface cover in northern North America, forming an important part of the cryosphere. The timing of lake ice phenological events (e.g. break-up/freeze-up) is a useful indicator of climate variability and change, which is of particular relevance in environmentally sensitive areas such as the North American Arctic. Further alterations to the present day ice regime could result in major ecosystem changes, such as species shifts and the disappearance of perennial ice cover. The Canadian Lake Ice Model (CLIMo) was used to simulate lake ice phenology across the North American Arctic from 1961-2100 using two climate scenarios produced by the Canadian Regional Climate Model (CRCM). Results from the 1961-1990 time period were validated using 15 locations across the Canadian Arctic, with both in situ ice cover observations from the Canadian Ice Database as well as additional ice cover simulations using nearby weather station data. Projected changes to the ice cover using the 30-year mean data between 1961-1990 and 2041-2070 suggest a shift in break-up and freeze-up dates for most areas ranging from 10-25 days earlier (break-up) and 0-15 days later (freeze-up). The resulting ice cover durations show mainly a 10-25 day reduction for the shallower lakes (3 and 10 m) and 10-30 day reduction for the deeper lakes (30 m). More extreme reductions of up to 60 days (excluding the loss of perennial ice cover) were shown in the coastal regions compared to the interior continental areas. The mean maximum ice thickness was shown to decrease by 10-60 cm with no snow cover and 5-50 cm with snow cover on the ice. Snow ice was also shown to increase through most of the study area with the exception of the Alaskan coastal areas. © Author(s) 2011.
Davidson D.,University of Waterloo |
Gu F.X.,University of Waterloo
Journal of Agricultural and Food Chemistry | Year: 2012
Controlled release fertilizers (CRFs) are a branch of materials designed to improve the soil release kinetics of chemical fertilizers and to address problems stemming from losses to runoff or other factors. Current CRFs are used in only a limited market, owing to relatively high costs and doubts about their ability to deliver higher yields and increased profitability for agricultural businesses. New technologies are emerging that promise to improve the efficacy of CRFs, add functionality, and reduce cost, making CRFs a more viable alternative to traditional chemical fertilizer treatment. CRFs that offer ways of reducing air and water pollution from fertilizer treatments, improving the ability of plants to access required nutrients, improving water retention to increase drought resistance, and reducing the amount of fertilizer needed to provide maximum crop yields are under development. A wide variety of strategies are being considered to tackle this problem, and each approach offers different advantages and drawbacks. Agricultural industries will soon be forced to move toward more efficient and sustainable practices in response to increasing fertilizer costs and the desire for sustainable growing practices. CRFs have the potential to solve many problems in agriculture and help enable this shift while maintaining profitability. © 2011 American Chemical Society.
Liu J.,University of Waterloo
Physical Chemistry Chemical Physics | Year: 2012
The interaction between DNA and inorganic surfaces has attracted intense research interest, as a detailed understanding of adsorption and desorption is required for DNA microarray optimization, biosensor development, and nanoparticle functionalization. One of the most commonly studied surfaces is gold, due to its unique optical and electric properties. Through various surface science tools, it was found that thiolated DNA can interact with gold not only via the thiol group but also through the DNA bases. Most of the previous work has been performed with planar gold surfaces. However, knowledge gained from planar gold may not be directly applicable to gold nanoparticles (AuNPs) for several reasons. First, DNA adsorption affinity is a function of AuNP size. Second, DNA may interact with AuNPs differently due to the high curvature. Finally, the colloidal stability of AuNPs constrains the usable salt concentration, whereas there is no such limit for planar gold. In addition to gold, graphene oxide (GO) has emerged as a new material for interfacing with DNA. GO and AuNPs share many similar properties for DNA adsorption; both have negatively charged surfaces but can still strongly adsorb DNA, and both are excellent fluorescence quenchers. Similar analytical and biomedical applications have been demonstrated with these two surfaces. The nature of the attractive force, however, is different for each: DNA adsorption on AuNPs occurs via specific chemical interactions, whereas adsorption on GO occurs via aromatic stacking and hydrophobic interactions. Herein, we summarize recent developments in the study of non-thiolated DNA adsorption and desorption as a function of salt, pH, temperature, and DNA secondary structure. Potential future directions and applications are also discussed. © 2012 the Owner Societies.
Weber O.,University of Waterloo
Business Strategy and the Environment | Year: 2014
What is the current state of environmental, social and governance (ESG) reporting, and what is the relation between ESG reporting and the financial performance of Chinese companies? This study analyses corporate ESG disclosure in China between 2005 and 2012 by examining the members of the main indexes of the biggest Chinese stock exchanges. After discussing theories that explain the ESG performance of firms, such as institutional theory, accountability and stakeholder theory, we present uni- and multivariate statistical analyses of ESG reporting and its relation to environmental and financial performance. Our results suggest that ownership status and membership of certain stock exchanges influence the frequency of ESG disclosure. In turn, ESG reporting influences both environmental and financial performance. We conclude that the main driver of ESG disclosure is accountability and that Chinese corporations are catching up with respect to the frequency of ESG reporting as well as its quality. © 2013 John Wiley & Sons, Ltd and ERP Environment.
Burn D.H.,University of Waterloo
Hydrological Processes | Year: 2014
A regional, or pooled, approach to frequency analysis is explored in the context of the estimation of rainfall quantiles required for the formation of intensity-duration-frequency (IDF) curves. Resampling experiments are used, in conjunction with two rainfall data sets with long record lengths, to explore the merits of a pooled approach to the estimation of extreme rainfall quantiles. The width of the 95% confidence interval for quantile estimates is used as the primary basis to evaluate the relative merits of pooled and single site estimates of rainfall quantiles. Recommendations are formulated for applying the regional approach to frequency analysis, and these recommendations are then applied to 40 sites with IDF data in southern Ontario, Canada. The results demonstrate that the regional approach is preferred to single site analysis for estimating extreme rainfall quantiles for conditions and data availability commonly encountered in practice. © 2014 John Wiley & Sons, Ltd.
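The paper's primary evaluation metric, the width of a resampled 95% confidence interval for an extreme rainfall quantile, can be illustrated with a small bootstrap sketch. The data and quantile choice below are synthetic placeholders; the at-site versus pooled comparison of the paper is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic annual-maximum rainfall depths (mm); a stand-in for a real IDF record
annual_max = rng.gumbel(loc=40.0, scale=10.0, size=60)

def quantile_100yr(sample):
    """Empirical estimate of the 100-year (99th percentile) rainfall quantile."""
    return np.quantile(sample, 0.99)

# Bootstrap: resample the record with replacement and re-estimate the quantile
boot = np.array([
    quantile_100yr(rng.choice(annual_max, size=annual_max.size, replace=True))
    for _ in range(2000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% CI width for the 100-year quantile: {hi - lo:.1f} mm")
```

A pooled estimate would apply the same resampling to a regional sample; a narrower resulting interval is what favours pooling in the paper's experiments.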
Neale A.,University of Waterloo |
Sachdev M.,University of Waterloo
IEEE Transactions on Device and Materials Reliability | Year: 2013
The reliability concern associated with radiation-induced soft errors in embedded memories increases as semiconductor technology scales deep into the sub-40-nm regime. As the memory bit-cell area is reduced, single event upsets (SEUs) that would have once corrupted only a single bit-cell are now capable of upsetting multiple adjacent memory bit-cells per particle strike. While these error types are beyond the error handling capabilities of the commonly used single error correction double error detection (SEC-DED) error correction codes (ECCs) in embedded memories, the overhead associated with moving to more sophisticated double error correction (DEC) codes is considered to be too costly. To address this, designers have begun leveraging selective bit placement to design SEC-DED codes capable of double adjacent error correction (DAEC) or triple adjacent error detection (TAED). These codes can be implemented for the same check-bit overhead as the conventional SEC-DED codes; however, no codes have been developed that use both DAEC and TAED together. In this paper, a new ECC scheme is introduced that provides not only the basic SEC-DED coverage but also both DAEC and scalable adjacent error detection (xAED) with a reduction in miscorrection probability as well. Codes capable of up to 11-bit AED have been developed for both 16- and 32-bit standard memory word sizes, and a (39, 32) SEC-DED-DAEC-TAED code implementation that uses the same number of check-bits as a conventional 32-data-bit SEC-DED code is presented. © 2001-2011 IEEE.
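The SEC-DED baseline that the paper extends can be illustrated with a minimal Hamming(8,4) sketch: a Hamming(7,4) code plus an overall parity bit, giving single error correction and double error detection. This is a textbook construction for exposition only, not the paper's (39, 32) SEC-DED-DAEC-TAED code.

```python
from functools import reduce
from operator import xor

def secded_encode(d):
    """Encode 4 data bits into an (8,4) SEC-DED word:
    Hamming(7,4) plus an overall parity bit."""
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    word = [p1, p2, d[0], p3, d[1], d[2], d[3]]
    word.append(reduce(xor, word))          # overall parity over the 7 bits
    return word

def secded_decode(w):
    """Return (data_bits, status) where status is 'ok', 'corrected',
    or 'double-error' (detected but uncorrectable)."""
    s1 = w[0] ^ w[2] ^ w[4] ^ w[6]
    s2 = w[1] ^ w[2] ^ w[5] ^ w[6]
    s3 = w[3] ^ w[4] ^ w[5] ^ w[6]
    syndrome = s1 + 2 * s2 + 4 * s3         # 1-indexed error position, 0 = none
    overall = reduce(xor, w)                # parity over all 8 bits
    w = list(w)
    if syndrome == 0 and overall == 0:
        status = 'ok'
    elif overall == 1:                      # odd weight: single error, correctable
        if syndrome:
            w[syndrome - 1] ^= 1
        status = 'corrected'
    else:                                   # even weight, nonzero syndrome: 2 errors
        status = 'double-error'
    return [w[2], w[4], w[5], w[6]], status
```

A double adjacent error lands in the 'double-error' branch here, which is exactly the limitation that DAEC codes with selective bit placement are designed to overcome.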
Huang M.,University of Waterloo |
Joseph J.W.,University of Waterloo
Endocrinology | Year: 2014
Biphasic glucose-stimulated insulin secretion involves a rapid first phase followed by a prolonged second phase of insulin secretion. The biochemical pathways that control these 2 phases of insulin secretion are poorly defined. In this study, we used a gas chromatography-mass spectrometry-based metabolomics approach to perform a global analysis of cellular metabolism during biphasic insulin secretion. A time course metabolomic analysis of the clonal β-cell line 832/13 showed that glycolytic, tricarboxylic acid, and pentose phosphate pathway metabolites, as well as several amino acids, were strongly correlated with biphasic insulin secretion. Interestingly, first-phase insulin secretion was negatively associated with L-valine, trans-4-hydroxy-L-proline, trans-3-hydroxy-L-proline, DL-3-aminoisobutyric acid, L-glutamine, sarcosine, L-lysine, and thymine and positively with L-glutamic acid, flavin adenine dinucleotide, caprylic acid, uridine 5′-monophosphate, phosphoglycerate, myristic acid, capric acid, oleic acid, linoleic acid, and palmitoleic acid. Tricarboxylic acid cycle intermediates pyruvate, α-ketoglutarate, and succinate were positively associated with second-phase insulin secretion. Other metabolites such as myo-inositol, cholesterol, DL-3-aminobutyric acid, and L-norleucine were negatively associated with the second phase of insulin secretion. These studies provide a detailed analysis of key metabolites that are either negatively or positively associated with biphasic insulin secretion. The insights provided by this data set create a framework for planning future studies in the assessment of the metabolic regulation of biphasic insulin secretion. Copyright © 2014 by the Endocrine Society.
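The "positively/negatively associated" language above refers to correlation between metabolite time courses and the secretion trace. The sketch below illustrates that computation on entirely synthetic traces (the values are invented for illustration, not the paper's measurements).

```python
import numpy as np

# Synthetic time-course data: an insulin secretion trace plus two
# hypothetical metabolite profiles (illustrative only)
insulin   = np.array([1.0, 5.0, 2.5, 3.0, 3.5, 4.0])   # biphasic-like trace
metab_pos = insulin * 0.8 + np.array([0.1, -0.2, 0.0, 0.2, -0.1, 0.1])
metab_neg = -insulin * 0.5 + 4.0

# Pearson correlation of each metabolite profile with the secretion trace
r_pos = np.corrcoef(insulin, metab_pos)[0, 1]
r_neg = np.corrcoef(insulin, metab_neg)[0, 1]
print(f"positively associated metabolite: r = {r_pos:.2f}")
print(f"negatively associated metabolite: r = {r_neg:.2f}")
```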
Bloemberg D.,University of Waterloo |
Quadrilatero J.,University of Waterloo
Biochimica et Biophysica Acta - Molecular Cell Research | Year: 2014
Skeletal muscle differentiation requires activity of the apoptotic protease caspase-3. We attempted to identify the source of caspase activation in differentiating C2C12 skeletal myoblasts. In addition to caspase-3, caspase-2 was transiently activated during differentiation; however, no changes were observed in caspase-8 or -9 activity. Although mitochondrial Bax increased, this was matched by Bcl-2, resulting in no change to the mitochondrial Bax:Bcl-2 ratio early during differentiation. Interestingly, mitochondrial membrane potential increased on a timeline similar to caspase activation and was accompanied by an immediate, temporary reduction in cytosolic Smac and cytochrome c. Since XIAP protein expression dramatically declined during myogenesis, we investigated whether this contributes to caspase-3 activation. Despite reducing caspase-3 activity by up to 57%, differentiation was unaffected in cells overexpressing normal or E3-mutant XIAP. Furthermore, a XIAP mutant which can inhibit caspase-9 but not caspase-3 did not reduce caspase-3 activity or affect differentiation. Administering a chemical caspase-3 inhibitor demonstrated that complete enzyme inhibition was required to impair myogenesis. These results suggest that neither mitochondrial apoptotic signaling nor XIAP degradation is responsible for transient caspase-3 activation during C2C12 differentiation. © 2014 Elsevier B.V.
Brodutch A.,University of Waterloo
Physical Review A - Atomic, Molecular, and Optical Physics | Year: 2013
Discordant states appear in a large number of quantum phenomena and seem to be a good indicator of divergence from classicality. While there is evidence that they are essential for a quantum algorithm to have an advantage over a classical one, their precise role is unclear. We examine the role of discord in quantum algorithms using the paradigmatic framework of restricted distributed quantum gates and show that manipulating discordant states using local operations has an associated cost in terms of entanglement and communication resources. Changing discord reduces the total correlations, and reversible operations on discordant states usually require nonlocal resources. Discord alone is, however, not enough to determine the need for entanglement. A more general class of related quantities, which we call K discord, is introduced as a further constraint on the kinds of operations that can be performed without entanglement resources. © 2013 American Physical Society.
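For reference, the standard (one-sided) quantum discord that the abstract builds on is the gap between two classically equivalent expressions for mutual information; this textbook definition is not restated in the abstract itself:

```latex
\begin{aligned}
I(A{:}B) &= S(\rho_A) + S(\rho_B) - S(\rho_{AB}), \\
J(A|B)   &= S(\rho_A) - \min_{\{\Pi_b\}} \sum_b p_b\, S(\rho_{A|b}), \\
D(A|B)   &= I(A{:}B) - J(A|B) \;\ge\; 0,
\end{aligned}
```

where $S$ is the von Neumann entropy, the minimization runs over projective measurements $\{\Pi_b\}$ on $B$, and $\rho_{A|b}$ is the post-measurement state of $A$ with outcome probability $p_b$. Classical states are exactly those with $D(A|B)=0$.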
Karsten M.,University of Waterloo
IEEE/ACM Transactions on Networking | Year: 2010
This paper presents Interleaved Stratified Timer Wheels as a novel priority queue data structure for traffic shaping and scheduling in packet-switched networks. The data structure is used to construct an efficient packet approximation of general processor sharing (GPS). This scheduler is the first of its kind to combine all desirable properties without any residual drawback. In contrast to previous work, the scheduler presented here has constant and near-optimal delay and fairness properties, can be implemented with O(1) algorithmic complexity, and has a low absolute execution overhead. The paper presents the priority queue data structure and the basic scheduling algorithm, along with several versions with different cost-performance trade-offs. A generalized analytical model for rate-controlled rounded timestamp schedulers is developed and used to assess the scheduling properties of the different scheduler versions. © 2006 IEEE.
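The underlying timer-wheel idea, a circular array of slots that gives O(1) insertion and per-tick dispatch, can be sketched minimally as below. This single-level wheel is an illustration of the generic data structure only; the paper's interleaved, stratified variant and its GPS-approximation machinery are not reproduced.

```python
from collections import deque

class TimerWheel:
    """Minimal single-level timer wheel: O(1) schedule and per-tick dispatch.
    Delays must fit within one wheel revolution in this simple sketch."""

    def __init__(self, num_slots):
        self.num_slots = num_slots
        self.slots = [deque() for _ in range(num_slots)]
        self.now = 0                          # current virtual time (in ticks)

    def schedule(self, delay, item):
        """Place item in the slot that expires `delay` ticks from now."""
        assert 0 < delay < self.num_slots
        self.slots[(self.now + delay) % self.num_slots].append(item)

    def tick(self):
        """Advance virtual time by one slot and return the expired items."""
        self.now += 1
        slot = self.slots[self.now % self.num_slots]
        expired = list(slot)
        slot.clear()
        return expired

wheel = TimerWheel(8)
wheel.schedule(2, "pkt-A")
wheel.schedule(1, "pkt-B")
print(wheel.tick())   # items due at tick 1
print(wheel.tick())   # items due at tick 2
```

Stratifying the wheel (multiple wheels at coarser granularities) extends the horizon beyond one revolution while keeping the constant-time operations; that is the direction the paper's construction takes.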
Cosentino A.,University of Waterloo
Physical Review A - Atomic, Molecular, and Optical Physics | Year: 2013
We present a simple semidefinite program whose optimal value is equal to the maximum probability of perfectly distinguishing orthogonal maximally entangled states using any PPT measurement (a measurement whose operators are positive under partial transpose). When the states to be distinguished are given by the tensor product of Bell states, the semidefinite program simplifies to a linear program. In Phys. Rev. Lett. 109, 020506 (2012), Yu, Duan, and Ying exhibit a set of four maximally entangled states in C^4 ⊗ C^4 which is distinguishable by any PPT measurement only with probability strictly less than 1. Using semidefinite programming, we show a tight bound of 7/8 on this probability (3/4 for the case of unambiguous PPT measurements). We generalize this result by demonstrating a simple construction of a set of k states in C^k ⊗ C^k with the same property, for any k that is a power of 2. By running numerical experiments, we show the local indistinguishability of certain sets of generalized Bell states in C^5 ⊗ C^5 and C^6 ⊗ C^6 previously considered in the literature. © 2013 American Physical Society.
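The PPT condition named above, positivity under partial transpose, is easy to check numerically. The sketch below (an illustration of the condition itself, not the paper's semidefinite program) shows that a Bell-state density matrix violates it, i.e. its partial transpose has a negative eigenvalue.

```python
import numpy as np

# Density matrix of the Bell state |Phi+> = (|00> + |11>)/sqrt(2)
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho = np.outer(phi, phi)

def partial_transpose(mat, dims=(2, 2)):
    """Transpose the second subsystem of a bipartite operator."""
    dA, dB = dims
    t = mat.reshape(dA, dB, dA, dB)   # indices: a, b, a', b'
    t = t.transpose(0, 3, 2, 1)       # swap b <-> b'
    return t.reshape(dA * dB, dA * dB)

eigs = np.linalg.eigvalsh(partial_transpose(rho))
print("eigenvalues of rho^(T_B):", np.round(eigs, 3))
# A negative eigenvalue means rho is not PPT (and hence is entangled)
```

A PPT measurement is one all of whose operators pass this test; the paper's SDP optimizes the distinguishing probability over exactly that constraint set.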
Khaleghi B.,University of Waterloo |
Khamis A.,University of Waterloo |
Karray F.O.,University of Waterloo |
Razavi S.N.,McMaster University
Information Fusion | Year: 2013
There has been an ever-increasing interest in multi-disciplinary research on multisensor data fusion technology, driven by its versatility and diverse areas of application. There is therefore a real need for an analytical review of recent developments in the data fusion domain. This paper presents a comprehensive review of the data fusion state of the art, exploring its conceptualizations, benefits, and challenging aspects, as well as existing methodologies. In addition, several future directions of research in the data fusion community are highlighted and described. © 2011 Elsevier B.V. All rights reserved.
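As a minimal concrete instance of the multisensor fusion the review surveys, the sketch below fuses two noisy scalar measurements by inverse-variance weighting, a standard textbook method chosen here for illustration (the review covers many more sophisticated techniques).

```python
def fuse(measurements):
    """Inverse-variance weighted fusion of (value, variance) pairs.
    The fused variance is never larger than the best single sensor's."""
    weights = [1.0 / var for _, var in measurements]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, measurements)) / total
    return value, 1.0 / total

# Two sensors observing the same quantity with different noise levels
fused_value, fused_var = fuse([(10.0, 4.0), (12.0, 1.0)])
print(f"fused estimate: {fused_value:.2f} (variance {fused_var:.2f})")
```

The more precise sensor dominates the estimate, and the combined variance (0.8 here) is below either sensor's own, which is the basic payoff of fusing sources rather than picking one.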
Doxey A.C.,University of Waterloo
Virulence | Year: 2013
Molecular mimicry of host proteins is a common strategy adopted by bacterial pathogens to interfere with and exploit host processes. Despite the availability of pathogen genomes, few studies have attempted to predict virulence-associated mimicry relationships directly from genomic sequences. Here, we analyzed the proteomes of 62 pathogenic and 66 non-pathogenic bacterial species, and screened for the top pathogen-specific or pathogen-enriched sequence similarities to human proteins. The screen identified approximately 100 potential mimicry relationships, including well-characterized examples among the top-scoring hits (e.g., RalF, internalin, YopH, and others), with roughly one-third of the predicted relationships supported by existing literature. Examination of homology to virulence factors, statistically enriched functions, and comparison with literature indicated that the detected mimics target key host structures (e.g., extracellular matrix, ECM) and pathways (e.g., cell adhesion, lipid metabolism, and immune signaling). The top-scoring and most widespread mimicry pattern detected among pathogens consisted of elevated sequence similarities to ECM proteins including collagens and leucine-rich repeat proteins. Unexpectedly, analysis of the pathogen counterparts of these proteins revealed that they have evolved independently in different species of bacterial pathogens from separate repeat amplifications. Thus, our analysis provides evidence for two classes of mimics: complex proteins such as enzymes that have been acquired by eukaryote-to-pathogen horizontal transfer, and simpler repeat proteins that have independently evolved to mimic the host ECM. Ultimately, computational detection of pathogen-specific and pathogen-enriched similarities to host proteins provides insights into potentially novel mimicry-mediated virulence mechanisms of pathogenic bacteria.
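The core operation of such a screen, ranking host proteins by sequence similarity to a pathogen protein, can be sketched with toy data. The sequences and names below are hypothetical, and a real screen would use full proteomes and a proper alignment tool (e.g. BLAST) rather than `difflib`.

```python
from difflib import SequenceMatcher

# Toy host sequences (hypothetical; a collagen-like Gly-X-Y repeat vs. a
# generic P-loop-like motif)
host_proteins = {
    "collagen-like": "GPPGPAGPPGPAGPPG",
    "kinase-like":   "MKLVVIGSGGVGKSAL",
}
pathogen_protein = "GPPGPAGAPGPAGPPG"   # hypothetical pathogen repeat protein

def similarity(a, b):
    """Fraction of matched residues found by difflib's matching-block algorithm."""
    return SequenceMatcher(None, a, b).ratio()

hits = {name: similarity(pathogen_protein, seq)
        for name, seq in host_proteins.items()}
best = max(hits, key=hits.get)
print(f"best host mimicry candidate: {best} (similarity {hits[best]:.2f})")
```

The repeat-rich pathogen sequence scores highest against the collagen-like host protein, mirroring the paper's finding that ECM repeat proteins dominate the top mimicry hits.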
Gepstein S.,Technion - Israel Institute of Technology |
Glick B.R.,University of Waterloo
Plant Molecular Biology | Year: 2013
The plant senescence syndrome resembles, in many molecular and phenotypic aspects, plant responses to abiotic stresses. Both processes have an enormous negative global agro-economic impact and endanger food security worldwide. Premature plant senescence is the main cause of losses in grain filling and biomass yield due to leaf yellowing and deteriorated photosynthesis, and is also responsible for the losses resulting from the short shelf life of many vegetables and fruits. Under abiotic stress conditions the yield losses are often even greater. The primary challenge in agricultural sciences today is to develop technologies that will increase food production and the sustainability of agriculture, especially under environmentally limiting conditions. In this chapter, some of the mechanisms involved in abiotic stress-induced plant senescence are discussed. Recent studies have shown that crop yield, nutritional value, and plant stress tolerance can all be altered by manipulating the timing of senescence. It is often difficult to separate the effects of age-dependent senescence from stress-induced senescence, since both share many biochemical processes and ultimately result in plant death. The focus of this review is on abiotic stress-induced senescence. Here, a number of the major approaches that have been developed to ameliorate some of the effects of abiotic stress-induced plant senescence are considered and discussed. Some approaches mimic the mechanisms already used by some plants and soil bacteria, whereas others are based on the development of new improved transgenic plants. While there may not be one simple strategy that can effectively decrease all losses of crop yield that accrue as a consequence of abiotic stress-induced plant senescence, some of the strategies that are discussed already show great promise. © 2013 Springer Science+Business Media Dordrecht.
van Ooteghem K.,University of Waterloo |