Espoo, Finland

Aalto University is a university primarily located in Greater Helsinki, Finland. It was created as a merger of three leading Finnish universities: the Helsinki University of Technology, the Helsinki School of Economics, and the University of Art and Design Helsinki. The close collaboration between the scientific, business and arts communities is intended to foster multi-disciplinary education and research. In 2010, the Finnish government set out to create a university with innovation built into its foundations, merging the three institutions into an entity that serves as Finland's model for an innovation university. Aalto University comprises six schools with over 19,000 students and 5,000 staff members, making it Finland's third-largest university. The six schools are all renowned institutions in their respective fields. The main campus is located in Otaniemi, Espoo, where the engineering schools operate; two schools are currently headquartered in Helsinki: the School of Business in Töölö and the School of Arts, Design and Architecture in Arabianranta. In addition, the university operates several units outside Greater Helsinki, in Mikkeli, Pori and Vaasa. Aalto University's operations showcase Finland's bold new experiment in higher education. The Aalto Design Factory, AppCampus, ADD LAB and the Aalto Ventures Program, among others, drive the university's mission for a radical shift towards multidisciplinary learning and have contributed substantially to the emergence of Helsinki as a hotbed for startups.
Aaltoes, which stands for Aalto Entrepreneurship Society, is Europe's largest student-run entrepreneurship community; it organises the Startup Sauna accelerator program for startups, which has raised more than US$36 million in funding since 2010. The university is named in honour of Alvar Aalto, a prominent Finnish architect, designer and alumnus of the former Helsinki University of Technology, who was also instrumental in designing a large part of the university's main campus in Otaniemi. (Wikipedia)



Patent
Aalto University | Date: 2015-03-13

The present disclosure relates to aqueous all-copper redox flow batteries. The battery comprises first and second half-cell compartments containing first and second aqueous electrolyte solutions, each comprising a copper compound and supporting electrolytes, together with first and second electrodes. The battery further comprises external storage tanks for the electrolytes, residing outside the half-cell compartments, and means for circulating the electrolytes to and from the half-cells. A separator is placed between the first and second half-cells, and the half-cells are configured to conduct the oxidation and reduction reactions that charge and discharge the battery.


Patent
Aalto University | Date: 2017-01-04

A method for shaping a film/sheet (1), particularly for making three-dimensional shapes in it. The outlines of the shape to be formed are heated (4), and the film/sheet is stretched along the heated outline so that the part of the film/sheet inside it deviates from the plane of the surface. The heated outlines are created by, for example, directing a moving beam of thermal radiation onto the film/sheet. A countersurface (9) limits the stretching and ensures that rupture does not occur. By repeating the procedure sequentially, a great variety of shapes may be created.


Schutz M.,Humboldt University of Berlin | Maschio L.,University of Turin | Karttunen A.J.,Aalto University | Usvyat D.,Humboldt University of Berlin
Journal of Physical Chemistry Letters | Year: 2017

Black phosphorus (black-P) consists of phosphorene sheets, stacked by van der Waals dispersion. In a recent study based on periodic local second-order Møller-Plesset perturbation theory (LMP2) with higher-order corrections evaluated on finite clusters, we obtained a value of −151 meV/atom for the exfoliation energy. This is almost twice as large as another recent theoretical result (around −80 meV/atom) obtained with quantum Monte Carlo (QMC). Here, we revisit this system on the basis of the recently implemented, periodically embedded ring coupled cluster (rCCD) model instead of LMP2. Higher-order coupled cluster corrections on top of rCCD are obtained from finite clusters by utilizing our new "unit-cell-in-cluster" scheme. Our new value of −92 meV/atom is noticeably lower than that based on LMP2 and in reasonably close agreement with the QMC result. However, in contrast to QMC, no strong effect from the second-neighbor and farther layers in black-P is observed in our calculations. © 2017 American Chemical Society.


Silvo J.,Aalto University
WCTE 2016 - World Conference on Timber Engineering | Year: 2016

The post-war generation has reached senior age and is currently in retirement. We have more ageing people than ever, and we thus need new concepts for caring for them. Senior citizens dream of a healthy home to live in, high quality and accessibility, a feeling of comfort and safety, and staying active despite ageing. The question is how architecture could bring these elements into their lives. Wood as a construction material has several known positive properties, and using logs as the material for senior houses creates a healthy environment. The aim of this study is to develop a healthy wooden log housing concept for senior citizens. The study begins with a literature review, followed by a section introducing design solutions. The concept itself consists of a log house module theme with variations to the modules. The final solution takes shape depending on the inhabitants and the surroundings.


Kuittinen M.,Aalto University
WCTE 2016 - World Conference on Timber Engineering | Year: 2016

The number of refugees is growing steadily, and climate change is already a key driver of humanitarian crises; the refugee population is expected to reach one billion by 2050. As buildings cause a significant share of global greenhouse gas emissions, the housing needs of these refugees should be met sustainably. Based on case studies, it can be concluded that wooden shelters have a considerably lower carbon footprint and primary energy demand than shelters made from non-renewable materials. To grasp this potential, the benefits of using wood in humanitarian construction should be disseminated to the humanitarian community. However, this promotion should go hand in hand with the implementation of sustainable forestry in the developing countries.


Niiranen J.,Aalto University | Niemi A.H.,University of Oulu
European Journal of Mechanics, A/Solids | Year: 2017

Sixth-order boundary value problems for gradient-elastic Kirchhoff plate bending models are formulated in a variational form within an H3 Sobolev space setting. Existence and uniqueness of the weak solutions are then established by proving the continuity and coercivity of the associated symmetric bilinear forms. Complete sets of boundary conditions, including both the essential and the natural conditions, are derived accordingly. In particular, the gradient-elastic Kirchhoff plate models feature two different types of clamped and simply supported boundary conditions, in contrast to the classical Kirchhoff plate model. These new types of boundary conditions are given the additional attributes singly and doubly, referring to free and prescribed normal curvature, respectively. The formulations and results of the analyzed strain gradient plate models are compared to two other generalized Kirchhoff plate models which follow a modified strain gradient elasticity theory and a modified couple stress theory. It is clarified that the results are extendable to these model variants as well. Finally, the relationship of the natural boundary conditions to external loadings is analyzed. © 2016 Elsevier Masson SAS.


Sirkia T.,Aalto University
Proceedings - 2016 IEEE Working Conference on Software Visualization, VISSOFT 2016 | Year: 2016

To learn to program, a novice programmer must understand the dynamic, runtime aspect of program code, the so-called notional machine. Understanding the machine can be easier when it is represented graphically, and tools have been developed to this end. However, these tools typically support only one programming language and do not work in a web browser. In this article, we present the functionality and technical implementation of two visualization tools. First, the language-agnostic and extensible Jsvee library helps educators visualize notional machines and create expression-level program animations for online course materials. Second, the Kelmu toolkit can be used by e-book authors to augment automatically generated animations, for instance by adding annotations such as textual explanations and arrows. Both of these libraries have been used in introductory programming courses, and there is preliminary evidence that students find the animations useful. © 2016 IEEE.


Osterberg M.,Aalto University | Valle-Delgado J.J.,Aalto University
Current Opinion in Colloid and Interface Science | Year: 2017

Lignocellulosics, i.e., cellulose, lignin and hemicelluloses, are natural renewable polymers of high technological interest. The properties of products based on these polymers are largely determined by the forces at their interfaces. This review summarizes the main findings related to surface interactions relevant for papermaking and describes how the interest in novel, high performance renewable materials has changed the focus of the research to nanocellulosic materials. Areas of interest that need further work are also outlined. © 2016 Elsevier Ltd


Lilius J.,Aalto University
European Urban and Regional Studies | Year: 2017

This paper focuses on the meaning of the urban environment for parents on family leave in Helsinki, Finland. Finland is a part of the Nordic model that emphasises ‘family-friendly arrangements’, such as family leave for mothers and fathers. To date, there is little research on how parents use urban space on family leave, although it is known that fathers stay on family leave more often in urban areas. Based on a triangulation of qualitative data on the day-to-day life of mothers and fathers on family leave, the paper argues that particular place-dependent ways of being on family leave take place in the inner city. Mixed-use pavements in many ways help mothers and fathers to cope in their new life situation and break the isolation often associated with family leave. The data also shows the importance of family-friendly public and commercial places in the city, such as playgrounds and accessible grocery shops, cafeterias and restaurants. The paper concludes that there is a need to further explore the production side of the everyday practices of parents, and how they add to city life and participate in changing cityscapes. © 2015, © The Author(s) 2015.


Naseri H.,Aalto University | Koivunen V.,Aalto University
IEEE Transactions on Signal Processing | Year: 2017

Affordable and reliable indoor positioning is a much-needed service. Moreover, maps of the indoor environment are vital to many applications. In this paper, a method for joint localization and mapping using multipath delay estimates is developed. Required high-resolution estimates of multipath delays may be obtained using radio frequency or acoustic measurements among a set of nodes in a network. In this paper, the problem is modeled in two-dimensional space with arbitrary node configuration and assuming a convex polygonal room shape. Joint localization and mapping is formulated as an optimization problem. It is subdivided and relaxed into two convex subproblems, which can be solved in an alternating manner. A method for data association and a low-complexity mapping algorithm stemming from the Hough transform are proposed. Both the estimation performance and identifiability of the indoor localization problem are improved. Moreover, a basic map of the propagation environment is produced. © 2016 IEEE.
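The mapping idea builds on Hough-transform voting: points estimated to lie on walls vote for candidate lines in (θ, ρ) space, and peaks in the accumulator reveal the walls. Below is a minimal, hedged sketch of such voting; the grid resolution, wall geometry and point set are illustrative assumptions, not the paper's actual algorithm.

```python
import math
from collections import Counter

def strongest_line(points, theta_steps=180, rho_res=0.1):
    """Hough voting: each point votes for every line (theta, rho) passing
    through it; the most-voted bin is the best-supported wall line."""
    acc = Counter()
    for x, y in points:
        for t in range(theta_steps):
            theta = math.pi * t / theta_steps
            rho = x * math.cos(theta) + y * math.sin(theta)
            acc[(t, round(rho / rho_res))] += 1
    (t_bin, r_bin), votes = acc.most_common(1)[0]
    return math.pi * t_bin / theta_steps, r_bin * rho_res, votes

# Points sampled from a vertical wall x = 2 (i.e. theta = 0, rho = 2).
theta, rho, votes = strongest_line([(2.0, y / 10.0) for y in range(20)])
```

For a convex polygonal room, repeating such peak detection (removing the supporting points after each peak) would recover one line per wall, and the lines' intersections sketch the room boundary.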


Yao L.,Aalto University | Inkinen S.,Aalto University | Van Dijken S.,Aalto University
Nature Communications | Year: 2017

Resistive switching in transition metal oxides involves intricate physical and chemical behaviours with potential for non-volatile memory and memristive devices. Although oxygen vacancy migration is known to play a crucial role in resistive switching of oxides, an in-depth understanding of oxygen vacancy-driven effects requires direct imaging of atomic-scale dynamic processes and their real-time impact on resistance changes. Here we use in situ transmission electron microscopy to demonstrate reversible switching between three resistance states in epitaxial La2/3Sr1/3MnO3 films. Simultaneous high-resolution imaging and resistance probing indicate that the switching events are caused by the formation of uniform structural phases. Reversible horizontal migration of oxygen vacancies within the manganite film, driven by combined effects of Joule heating and bias voltage, predominantly triggers the structural and resistive transitions. Our findings open prospects for ionotronic devices based on dynamic control of physical properties in complex oxide nanostructures. © 2017 The Author(s).


Malitckaya M.,Aalto University | Komsa H.-P.,Aalto University | Havu V.,Aalto University | Puska M.J.,Aalto University
Advanced Electronic Materials | Year: 2017

Point defects and defect complexes may significantly affect the physical, optical, and electrical properties of semiconductors. The Cu(In,Ga)Se2 alloy is an absorber material for low-cost thin-film solar cells. Several recently published computational investigations show contradictory results for important point defects such as the copper antisite substituting indium (CuIn), the indium vacancy (VIn), and complexes of point defects in CuInSe2. In the present work, the effects of the most important computational parameters are studied, especially on the formation energies of point defects. Moreover, in connection with identifying defects through their calculated properties, possible explanations are discussed for the three acceptors occurring in photoluminescence measurements of Cu-rich samples. Finally, new insight into the comparison between theoretical and experimental results is presented for the case of varying chemical potentials and the formation of secondary phases. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.


Drost R.,Aalto University | Ojanen T.,Aalto University | Harju A.,Aalto University | Liljeroth P.,Aalto University
Nature Physics | Year: 2017

Topological materials exhibit protected edge modes that have been proposed for applications in, for example, spintronics and quantum computation. Although a number of such systems exist, it would be desirable to be able to test theoretical proposals in an artificial system that allows precise control over the key parameters of the model. The essential physics of several topological systems can be captured by tight-binding models, which can also be implemented in artificial lattices. Here, we show that this method can be realized in a vacancy lattice in a chlorine monolayer on a Cu(100) surface. We use low-temperature scanning tunnelling microscopy (STM) to fabricate such lattices with atomic precision and probe the resulting local density of states (LDOS) with scanning tunnelling spectroscopy (STS). We create analogues of two tight-binding models of fundamental importance: the polyacetylene (dimer) chain with topological domain-wall states, and the Lieb lattice with a flat electron band. These results provide an important step forward in the ongoing effort to realize designer quantum materials with tailored properties. © 2017 Nature Publishing Group


Tretyakov S.A.,Aalto University
Journal of Optics (United Kingdom) | Year: 2017

This review paper is a personal attempt to understand the current state of metamaterials science and its development directions, analyzing the main historical steps of the field from the late 19th century to the present day. © 2016 IOP Publishing Ltd.


To intensify experimental research within the field of orthopaedic tribology, a three-station, dual motion, high frequency (25.3 Hz) circular translation pin-on-disc wear test device was recently introduced. In the present study, the pins were CoCr with a spherical, polished bearing surface of 28 mm radius, whereas the flat discs were conventional UHMWPE. This configuration was intended to simulate the wear mechanisms of total knee prostheses. The number of wear cycles run was as high as 200 million. The mean wear rate was 0.35 mg per one million cycles (0.77 mg/24 h) which corresponded to a mean wear factor of 3.5 × 10−6 mm3/Nm. The study provided further proof that a wear test for orthopaedic implant materials can be accelerated by substantially increasing the cycle frequency, provided that the sliding velocity remains close to the values obtained from biomechanical studies. Hence, the moderate frictional heating will not lead to unrealistic wear mechanisms. © 2017 Elsevier Ltd


Huotari K.,University of California at Berkeley | Huotari K.,Hanken School of Economics | Huotari K.,Aalto University | Hamari J.,Aalto University
Proceedings of the 16th International Academic MindTrek Conference 2012: "Envisioning Future Media Environments", MindTrek 2012 | Year: 2012

During recent years, "gamification" has gained significant attention among practitioners and game scholars. However, the current understanding of gamification has been based solely on the act of adding systemic game elements to services. In this paper, we propose a new definition for gamification which emphasizes the experiential nature of games and gamification instead of the systemic understanding. Furthermore, we tie this definition to theory from service marketing, because the majority of gamification implementations aim at marketing goals, which brings to the discussion the notion that the customer/user is always ultimately the creator of value. Until now, the main venue for academic discussion on gamification has been the HCI community. We find it relevant both for industry practitioners and for academics to study how gamification can fit into the body of knowledge of the existing service literature, because the goals and means of gamification and marketing overlap significantly. © 2012 ACM.


Bao Y.,Aalto University | Laitinen M.,University of Jyväskylä | Sajavaara T.,University of Jyväskylä | Savin H.,Aalto University
Advanced Electronic Materials | Year: 2017

Dimethylaluminum chloride (DMACl) as an aluminum source has shown promising potential to replace the more expensive and commonly used trimethylaluminum in the semiconductor industry for atomic-layer-deposited (ALD) thin films. Here, the Al2O3 DMACl process is modified by replacing the common ALD oxidant, water, with ozone, which offers several benefits including shorter purge times, layer-by-layer growth, and improved film adhesion. It is shown that the introduction of ozone instead of water increases the carbon and chlorine content in the Al2O3, while long ozone pulses increase the amount of interfacial hydrogen at the silicon surface. These are found to be beneficial effects for surface passivation and thus for final device operation. Heat treatments (at 400 and 800 °C) are found to be essential for high-quality surface passivation, similar to ALD Al2O3 deposited from conventional precursors, which is correlated with the changes at the interface and the related impurity distributions. The optimal deposition temperature is found to be 250 °C, which provides the best chemical passivation after thermal treatments. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.


Suihkonen S.,Aalto University | Sintonen S.,Leibniz Institute for Crystal Growth | Tuomisto F.,Aalto University
Advanced Electronic Materials | Year: 2017

Native bulk gallium nitride (GaN) has emerged as an alternative for sapphire and silicon as a substrate material for III-N devices. While quasi-bulk GaN substrates are currently commercially available, single crystal GaN substrates are considered essential for future high performance light emitters and power devices. The ammonothermal method is currently considered one of the most feasible methods to grow large truly bulk crystals of GaN at low cost and high structural quality. High crystalline quality GaN substrates sliced from ammonothermally grown crystals have been demonstrated and utilized in homoepitaxy for III-N devices. However, despite the high crystalline quality the properties of as-grown ammonothermal GaN crystals and substrates are affected by the presence of impurities and other defects that hinder their use for device applications. Here, the main developments of ammonothermal growth of GaN and the effects of impurities, native point defects, and dislocations on the material properties are summarized. Additionally, measurement techniques that enable the evaluation of point defect concentration and low dislocation density distribution over a large area on bulk GaN substrates are reviewed. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.


Jarvensivu P.,Aalto University | Jarvensivu P.,BIOS Research Unit
Journal of Cleaner Production | Year: 2017

Societies must rapidly abandon the use of fossil fuels to avoid the worst effects of climate change. This paper examines the cultural dynamics of the energy transition by focusing on a post-fossil fuel experiment in an international artist and researcher residency. The aims of the experiment were to explore how fossil fuels currently determine human lives and to imagine and build pathways forward. The six-year ethnographic case study was analysed from the perspective of practice theory, shedding light on how changes in the material arrangements of energy, food and transportation reconfigure meanings and competences. These transformations were found to have inspiring as well as unfortunate, even threatening aspects that need to be taken into account in transition design and governance. © 2017 Elsevier Ltd.


Tidwell P.,Aalto University
Structures and Architecture - Proceedings of the 3rd International Conference on Structures and Architecture, ICSA 2016 | Year: 2016

Around 1903, the German physician, psychologist, scientist, philosopher and general polymath Hermann von Helmholtz described his work in a surprisingly unguarded tone: “I was like a mountaineer who, not knowing the path, must climb slowly and laboriously, is forced to turn back frequently because his way is blocked but discovers, sometimes by deliberation and more often by accident, new passages which lead him onward for a distance.” (Helmholtz, 1954) In his account, the groundbreaking research on acoustics and human perception appears not as a linear progression so much as a laborious and often hapless series of trials and errors. I am not sure if Helmholtz’s groundbreaking research has much in common with the work of architects today, but I suspect that many of us will be familiar with the innumerable false starts and missteps that his method entailed. And since the figure of the vigorous mountaineer conjures a far more flattering self-image than the one that we usually confront, I suggest that we take up the metaphor. In a few simple lines Helmholtz recognized the invaluable role of bad ideas, failed experiments and the countless hours spent on unfulfilled ambitions. More than just research or experiments, his analogy posits that these activities serve as a kind of training necessary for the hard work of cognitive climbing: practice in the fullest sense of the word. Architects should be familiar with this notion, since it underlies most systems of architectural apprenticeship and licensure. As we all know, we are required to practice our trade before we can practice as architects. © 2016 Taylor & Francis Group, London.


Pihlajamaa M.,Aalto University
Journal of Engineering and Technology Management - JET-M | Year: 2017

Developing radical innovations is highly demanding because of high uncertainties which give rise to unanticipated problems and discoveries. Managing individual motivation is therefore an important component of the radical innovation capability. This study presents a theoretical model of managing individual motivation in radical innovation development. The model is tested and elaborated by investigating four incumbent companies. The findings indicate that managers may influence the initial level of individual motivation and its effect on the success in development tasks by assigning external goals and providing organizational support. These methods can be found at multiple levels: individuals, project teams, and the organization. © 2017 Elsevier B.V.


McGookin D.,Aalto University | Kyto M.,Aalto University
ACM International Conference Proceeding Series | Year: 2016

A recent trend in HCI has been to employ wearable technology (such as head-mounted displays (HMDs)) to help provoke and support face-to-face interaction through the presentation of, usually algorithmically matched, media from users' social and digital media accounts. Such approaches could effectively support face-to-face interaction in many situations. However, existing work fails to consider the rich practices users already have for managing their identity, both face-to-face and with existing social and digital media services. We present the results of 7 in-depth interviews on how users would wish to employ, manage and present social media in face-to-face encounters across a range of contexts and social relationships. We identified the importance of user control in selecting media, particularly when presenting to strangers, and of how users would wish to present media to others. Our work provides guidance on how to employ existing media more effectively in face-to-face interaction. © 2016 ACM.


Holappa L.,Aalto University
Journal of Chemical Technology and Metallurgy | Year: 2017

The basic technologies for modern iron and steel production were developed about half a century ago. Since then, raw-material pretreatment, coke making and the blast furnace process itself have been strongly improved: energy consumption and emissions have decreased, while productivity and product quality have risen. Modern top-, bottom- and combined-blowing converter processes were developed, which revolutionized steel production based on iron ore concentrates. Today, the oxygen converter process is the main primary steelmaking technology, with a share of about two thirds of world steel production. The other mainstream of steelmaking is based on recycled steel as the raw material. A smaller share of the scrap is used in converters as cooling material, but most of it is melted in ultra-high-power electric furnaces. The progress in these primary steelmaking technologies has been solidly associated with the emergence and development of secondary steelmaking processes in ladles, as well as with the breakthrough of continuous casting since the 1960s. These technologies radically changed the whole steelmaking principle, bringing much higher production rates, lower materials and energy consumption per unit, and unequalled premises for the development of new steel grades with highly improved and strictly specified, tailored properties. The present paper discusses the latest achievements in the different unit processes. The latest progress in secondary metallurgy and continuous casting is considered, with critical emphasis on the metallurgical constraints of the current processes. Finally, some future perspectives influencing process development are discussed.


Sirkia T.,Aalto University
ACM International Conference Proceeding Series | Year: 2016

In Parsons problems, students solve programming assignments by putting code fragments in the correct order, which can be an easy way to start an assignment as there is no need to write code or struggle with syntax. In this paper, we report results from a preliminary experiment in which we combined two existing libraries, js-parsons and Jsvee. We extended the original feedback of js-parsons with program visualizations that show students how their solution was executed and why it possibly did not work as expected. We analyzed the usage of the visualizations, and the results show that over half of the students viewed them when they were available. Novices who used the visualizations tended to need more submissions than the other novices, which may imply that weaker students find the visualizations more useful. However, more research is needed to analyze the learning effects. © 2016 ACM.


Toppila A.,Aalto University | Salo A.,Aalto University
Reliability Engineering and System Safety | Year: 2017

A central problem in risk management is that of identifying the optimal combination (or portfolio) of improvements that enhance the reliability of the system most through reducing failure event probabilities, subject to the availability of resources. This optimal portfolio can be sensitive with regard to epistemic uncertainties about the failure events' probabilities. In this paper, we develop an optimization model to support the allocation of resources to improvements that mitigate risks in coherent systems in which interval-valued probabilities defined by lower and upper bounds are employed to capture epistemic uncertainties. Decision recommendations are based on portfolio dominance: a resource allocation portfolio is dominated if there exists another portfolio that improves system reliability (i) at least as much for all feasible failure probabilities and (ii) strictly more for some feasible probabilities. Based on non-dominated portfolios, recommendations about improvements to implement are derived by inspecting in how many non-dominated portfolios a given improvement is contained. We present an exact method for computing the non-dominated portfolios. We also present an approximate method that simplifies the reliability function using total order interactions so that larger problem instances can be solved with reasonable computational effort. © 2017 Elsevier Ltd
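As a hedged illustration of the portfolio-dominance idea, consider a toy coherent system; the components, intervals, costs and improvement effect below are all invented and only sketch the principle, not the paper's model. Because the reliability difference of a coherent system is multilinear in the failure probabilities, its extremes over an interval box occur at the box's corner points, so corner checks suffice:

```python
from itertools import combinations, product

# Hypothetical 3-component series system with interval-valued
# failure probabilities [lo, hi] capturing epistemic uncertainty.
intervals = [(0.01, 0.05), (0.02, 0.08), (0.01, 0.10)]
cost = [1, 2, 2]      # cost of improving each component
budget = 3
FACTOR = 0.5          # assumed effect: an improvement halves the failure probability

def reliability(portfolio, p):
    """Series-system reliability at one corner p of the probability box."""
    r = 1.0
    for i, pi in enumerate(p):
        r *= 1.0 - (pi * FACTOR if i in portfolio else pi)
    return r

def dominates(a, b):
    """a dominates b: at least as reliable at every corner, strictly better at some."""
    corners = list(product(*intervals))
    return (all(reliability(a, c) >= reliability(b, c) for c in corners)
            and any(reliability(a, c) > reliability(b, c) for c in corners))

# Feasible portfolios: subsets of improvements within the budget.
feasible = [frozenset(s) for r in range(4) for s in combinations(range(3), r)
            if sum(cost[i] for i in s) <= budget]
non_dominated = [p for p in feasible if not any(dominates(q, p) for q in feasible)]
```

Counting how often each improvement appears across the non-dominated portfolios then yields recommendations in the spirit the abstract describes.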


Kiran V.R.,Aalto University | Alfayad A.,Carnegie Mellon University | Weber I.,Qatar Computing Research Institute
Conference on Human Factors in Computing Systems - Proceedings | Year: 2016

Several projects have shown the feasibility to use textual social media data to track public health concerns, such as temporal influenza patterns or geographical obesity patterns. In this paper, we look at whether geo-tagged images from Instagram also provide a viable data source. Especially for "lifestyle" diseases, such as obesity, drinking or smoking, images of social gatherings could provide information that is not necessarily shared in, say, tweets. In this study, we explore whether (i) tags provided by the users and (ii) annotations obtained via automatic image tagging are indeed valuable for studying public health. We find that both user-provided and machine-generated tags provide information that can be used to infer a county's health statistics. Whereas for most statistics user-provided tags are better features, for predicting excessive drinking machine-generated tags such as "liquid" and "glass" yield better models. This hints at the potential of using machine-generated tags to study substance abuse.
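A minimal sketch of the kind of association such a study rests on: correlating the frequency of one machine-generated tag with a county-level health statistic. The counties and numbers below are fabricated for illustration only; the paper fits predictive models over many tags rather than a single correlation.

```python
def pearson(xs, ys):
    """Pearson correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical counties: share of images machine-tagged "glass"
# vs. the county's excessive-drinking rate.
glass_share = [0.01, 0.03, 0.02, 0.05, 0.04]
drinking    = [0.10, 0.16, 0.13, 0.22, 0.19]
r = pearson(glass_share, drinking)
```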


Lee B.,Aalto University | Oulasvirta A.,Aalto University
Conference on Human Factors in Computing Systems - Proceedings | Year: 2016

We present a novel model to predict error rates in temporal pointing, in which a target appears for selection only within a limited time window. Unlike in spatial pointing, there is no movement to control in the temporal domain; the user can only determine when to launch the response. Although this task is common in interactions requiring temporal precision, rhythm, or synchrony, no previous HCI model predicts error rates as a function of task properties. Our model assumes that users have an implicit point of aim but that their ability to elicit the input event at that time is hampered by variability in three processes: 1) an internal time-keeping process, 2) a response-execution stage, and 3) input processing in the computer. We derive a mathematical model with two parameters from these assumptions. High fit is shown for user performance with two task types, including a rapidly paced game. The model can explain previous findings showing that touchscreens are much worse for temporal pointing than physical input devices. It also has novel implications for design that extend beyond the conventional wisdom of minimising latency.
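The shape of such a model can be sketched as follows. This is a hedged illustration under my own assumptions (independent Gaussian noise sources with invented standard deviations), not the paper's fitted two-parameter model: if the three variability sources are independent, their variances add, and the error rate is the probability mass of the response falling outside the target window.

```python
import math

def error_rate(window, aim=0.0, sigmas=(0.02, 0.01, 0.005)):
    """P(response lands outside [-window/2, +window/2] around the target),
    with independent Gaussian noise from time-keeping, response execution,
    and input processing (illustrative standard deviations, in seconds)."""
    sigma = math.sqrt(sum(s * s for s in sigmas))  # variances of independent sources add

    def phi(z):  # standard normal CDF
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    hit = phi((window / 2 - aim) / sigma) - phi((-window / 2 - aim) / sigma)
    return 1.0 - hit
```

Widening the window or shrinking any noise source lowers the predicted error rate, which is consistent with the observation that high-latency, high-jitter touchscreens fare worse than physical input devices.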


Lee B.,Aalto University
Conference on Human Factors in Computing Systems - Proceedings | Year: 2016

A Falling Line is an interactive installation that creates responsive sound from a participant's drawing on a black wall. Rendered as a white line, the drawing represents the waveform of the created sound. The audience moves a computer mouse to elongate the line, but no modification or deletion of the existing line is allowed. To ensure standard CD-quality sound (44,100 Hz, 16-bit), custom electronics was used to extract the raw output from a high-performance mouse. In terms of technology, this work demonstrates a novel concept of creating digital sound. In terms of artistic expression, the interaction creates high tension for the audience as they try to create meaningful sound from the irreversible drawing. © 2016 Authors.
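The drawing-to-waveform mapping can be sketched in a few lines. This is an assumed reading of the description, not the installation's actual signal chain: the height of each drawn point becomes one signed 16-bit sample, and samples can only be appended, mirroring the irreversible line:

```python
# Minimal sketch (assumed, not the installation's code): mapping the
# height of a drawn line to 16-bit audio samples at CD quality.

SAMPLE_RATE = 44_100                  # Hz
MAX_AMP = 2 ** 15 - 1                 # 32767, full scale for 16-bit audio

def line_to_samples(heights, wall_height=1.0):
    """Map line heights (0..wall_height) to signed 16-bit sample values.

    The centre of the wall is silence; drawing above or below it
    deflects the waveform. Append-only, like the irreversible drawing.
    """
    samples = []
    for h in heights:
        norm = 2.0 * (h / wall_height) - 1.0   # rescale to -1.0 .. 1.0
        samples.append(int(round(norm * MAX_AMP)))
    return samples

samples = line_to_samples([0.5, 0.75, 1.0, 0.25, 0.0])
```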


Ahlgren O.,Aalto University
IEEE International Conference on Data Mining Workshops, ICDMW | Year: 2017

The first publications on sentiment analysis and opinion mining were published roughly a decade ago. Now it is time to look back on the achievements so far. This paper presents statistics on the evolution of sentiment analysis. What kind of topics have been discussed? How has their popularity changed over time? Who have been the leading researchers? Answers to these questions are provided by statistical analysis of keywords and by applying Latent Dirichlet Allocation to the titles and abstracts of the publications. The aim of this paper is to provide background information on the big picture of sentiment analysis and its development over time. © 2016 IEEE.
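A toy, self-contained collapsed Gibbs sampler over a few invented titles illustrates the LDA step (the paper's corpus and tooling are not specified here, and real analyses use far larger corpora and library implementations):

```python
import random
from collections import defaultdict

# Toy LDA via collapsed Gibbs sampling over invented "paper titles".
docs = [
    "sentiment classification of movie reviews".split(),
    "opinion mining from product reviews".split(),
    "twitter sentiment analysis with lexicons".split(),
    "deep learning for image recognition".split(),
    "convolutional networks for image classification".split(),
]
K, ALPHA, BETA, ITERS = 2, 0.1, 0.1, 200
vocab = sorted({w for d in docs for w in d})

random.seed(0)
z = [[random.randrange(K) for _ in d] for d in docs]   # topic of each word
ndk = [[0] * K for _ in docs]                          # doc-topic counts
nkw = [defaultdict(int) for _ in range(K)]             # topic-word counts
nk = [0] * K                                           # topic totals
for d, doc in enumerate(docs):
    for i, w in enumerate(doc):
        k = z[d][i]
        ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1

for _ in range(ITERS):
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            ndk[d][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
            # Resample this word's topic from the conditional distribution
            weights = [
                (ndk[d][t] + ALPHA) * (nkw[t][w] + BETA) / (nk[t] + BETA * len(vocab))
                for t in range(K)
            ]
            k = random.choices(range(K), weights)[0]
            z[d][i] = k
            ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1

# Most frequent word per topic, the kind of output used to label topics
top_words = [max(nkw[t], key=nkw[t].get) for t in range(K)]
```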


Garimella K.,Aalto University
WSDM 2017 - Proceedings of the 10th ACM International Conference on Web Search and Data Mining | Year: 2017

In this thesis, we develop methods to (i) detect and quantify the existence of filter bubbles in social media, (ii) monitor their evolution over time, and finally, (iii) devise methods to overcome the effects caused by filter bubbles. We are the first to propose an end-to-end system that addresses the problem of filter bubbles completely algorithmically. We build on top of existing studies and ideas from social science, together with principles from graph theory, to design algorithms that are language-independent, domain-agnostic and scalable to a large number of users. © 2017 ACM.
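One well-known graph measure in this line of work scores how likely a random walk is to end on the side of the graph where it started. The version below is a deliberately simplified Monte Carlo sketch with an invented graph and fixed-length walks, not the thesis's exact formulation:

```python
import random

# Invented toy graph: two tightly knit sides joined by a single bridge edge.
edges = [(0, 1), (1, 2), (0, 2),   # side X
         (3, 4), (4, 5), (3, 5),   # side Y
         (2, 3)]                   # the only bridge
side = {0: "X", 1: "X", 2: "X", 3: "Y", 4: "Y", 5: "Y"}

adj = {}
for u, v in edges:
    adj.setdefault(u, []).append(v)
    adj.setdefault(v, []).append(u)

def walk_end_side(start, steps=5):
    """Side of the graph where a fixed-length random walk ends."""
    node = start
    for _ in range(steps):
        node = random.choice(adj[node])
    return side[node]

def rwc(n_walks=2000):
    """Simplified random-walk controversy score in [-1, 1]:
    P(stay on X) * P(stay on Y) - P(leave X) * P(leave Y)."""
    random.seed(42)
    starts = {s: [n for n in side if side[n] == s] for s in ("X", "Y")}
    p_stay = {}
    for s in ("X", "Y"):
        stayed = sum(
            walk_end_side(random.choice(starts[s])) == s
            for _ in range(n_walks)
        )
        p_stay[s] = stayed / n_walks
    return (p_stay["X"] * p_stay["Y"]
            - (1 - p_stay["X"]) * (1 - p_stay["Y"]))

score = rwc()   # higher for more polarised graphs, near 0 for well-mixed ones
```

The measure is language-independent and domain-agnostic in the sense the abstract describes: it looks only at graph structure, never at content.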


Taffese W.Z.,Aalto University | Sistonen E.,Aalto University
Automation in Construction | Year: 2017

Accurate service-life prediction of structures is vital for taking appropriate measures in a time- and cost-effective manner. However, conventional prediction models rely on simplified assumptions, leading to inaccurate estimations. This paper reviews the capability of machine learning to address the limitations of classical prediction models, owing to its ability to capture the complex physical and chemical processes of the deterioration mechanisms. The paper also presents previous research proposing the applicability of machine learning to assist durability assessment of reinforced concrete structures. The advantages of employing machine learning for durability and service-life assessment of reinforced concrete structures are also discussed in detail. The growing trend of collecting more and more in-service data using wireless sensors facilitates the use of machine learning for durability and service-life assessment. The paper concludes by recommending future directions based on an examination of recent advances and current practices in this specific area. © 2017 Elsevier B.V.
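The flavour of model the review discusses can be illustrated with a tiny data-driven regressor. Everything below is invented for illustration (the features, the target and the numbers are not from the paper): a k-nearest-neighbour regressor predicting time to corrosion initiation from cover depth and water-cement ratio:

```python
# Illustrative sketch only (invented data): a k-nearest-neighbour
# regressor predicting corrosion-initiation time from concrete cover
# depth (mm) and water-cement ratio.

# (cover_mm, w/c ratio) -> years to corrosion initiation (invented)
data = [
    ((20, 0.60), 8.0),
    ((30, 0.55), 15.0),
    ((40, 0.50), 28.0),
    ((50, 0.45), 45.0),
    ((60, 0.40), 70.0),
]

def knn_predict(x, k=2):
    """Average the targets of the k nearest observations (Euclidean)."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, xi)) ** 0.5, yi)
        for xi, yi in data
    )
    return sum(y for _, y in dists[:k]) / k

service_life = knn_predict((35, 0.52))   # years, for a new structure
```

Unlike a closed-form deterioration model, this kind of learner imposes no functional form, which is the advantage the review highlights when in-service sensor data is plentiful.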


They're called the "Apple Watch Series 2", "LG Watch", "Samsung GEAR S3" or "Moto 360 2nd Gen" but they all have the same problem. "Every new product generation has better screens, better processors, better cameras, and new sensors, but regarding input, the limitations remain," explains Srinath Sridhar, a researcher in the Graphics, Vision and Video group at the Max Planck Institute for Informatics. Together with Christian Theobalt, head of the Graphics, Vision and Video group at MPI, Anders Markussen and Sebastian Boring at the University of Copenhagen and Antti Oulasvirta at Aalto University in Finland, Srinath Sridhar has therefore developed an input method that requires only a small camera to track fingertips in mid-air, and touch and position of the fingers on the back of the hand. This combination enables more expressive interactions than any previous sensing technique. Regarding hardware, the prototype, which the researchers have named "WatchSense", requires only a depth sensor, a much smaller version of the well-known "Kinect" game controller from the Xbox 360 video game console. With WatchSense, the depth sensor is worn on the user's forearm, about 20cm from the watch. As a sort of 3D camera, it captures the movements of the thumb and index finger, not only on the back of the hand but also in the space over and above it. The software developed by the researchers recognizes the position and movement of the fingers within the 3D image, allowing the user to control apps on smartphones or other devices. "The currently available depth sensors do not fit inside a smartwatch, but from the trend it's clear that in the near future, smaller depth sensors will be integrated into smartwatches," Sridhar says. But this is not all that's required. According to Sridhar, with their software system the scientists also had to solve the challenges of handling the unevenness of the back of the hand and the fact that the fingers can occlude each other when they are moved. 
"The most important thing is that we can not only recognize the fingers, but also distinguish between them," explains Sridhar, "which nobody else had managed to do before in a wearable form factor. We can now do this even in real time." The software recognizes the exact positions of the thumb and index finger in the 3D image from the depth sensor, because the researchers trained it to do this via machine learning. In addition, the researchers have successfully tested their prototype in combination with several mobile devices and in various scenarios. "Smartphones can be operated with one or more fingers on the display, but they do not use the space above it. If both are combined, this enables previously impossible forms of interaction," explains Sridhar. He and his colleagues were able to show that with WatchSense, in a music program, the volume could be adjusted and a new song selected more quickly than was possible with a smartphone's Android app. The researchers also tested WatchSense for tasks in virtual and augmented reality, in a map application, and used it to control a large external screen. Preliminary studies showed that WatchSense was more satisfactory for each case than conventional touch-sensitive displays. Sridhar is confident that "we need something like WatchSense whenever we want to be productive while moving. WatchSense is the first to enable expressive input for devices while on the move." From May 6, the researchers will present WatchSense at the renowned "Conference on Human Factors in Computing," or CHI for short, which this time takes place in the city of Denver in the US.
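The article says the software was trained via machine learning to tell thumb from index finger in the depth image. As a hedged illustration only (not the researchers' actual method, which is not detailed here), a toy nearest-centroid classifier on invented 3D fingertip positions captures the idea of distinguishing fingers from learned examples:

```python
# Toy nearest-centroid classifier on invented (x, y, depth) fingertip
# positions; purely illustrative, not the WatchSense pipeline.

def centroid(points):
    n = len(points)
    return tuple(sum(c) / n for c in zip(*points))

# Invented training positions for each fingertip
train = {
    "thumb": [(0.10, 0.20, 0.30), (0.12, 0.22, 0.31), (0.09, 0.18, 0.29)],
    "index": [(0.40, 0.50, 0.25), (0.42, 0.48, 0.26), (0.38, 0.52, 0.24)],
}
centroids = {label: centroid(pts) for label, pts in train.items()}

def classify(p):
    """Label a detected fingertip by its nearest class centroid."""
    return min(
        centroids,
        key=lambda label: sum((a - b) ** 2 for a, b in zip(p, centroids[label])),
    )

label = classify((0.11, 0.21, 0.30))   # lands near the thumb cluster
```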


News Article | April 19, 2017
Site: phys.org

'We have been preparing for the launch of either Aalto-1 or Aalto-2 for a long time. There was a big crowd of us looking forward to and celebrating this historic event in Otaniemi,' says Professor Jaan Praks, the director of the project. Aalto-2 will take part in the international QB50 Mission, the aim of which is to produce the first ever comprehensive model of the features of the thermosphere, the layer at the boundary between the Earth's atmosphere and space. Dozens of nanosatellites from different parts of the world will take part in the mission. Because Aalto-2 is part of a larger project, it will be registered in Belgium in the same way as the project's other satellites in order to simplify the permit procedures. 'The space station will release Aalto-2 into space within about one month of the arrival of the cargo. The astronauts will install the launch adapter into a robot arm, which will allow the satellites to be safely detached to their orbits. The astronauts will film the satellite being detached and that will be the last time we get to see the Aalto-2 satellite before starting to wait for a signal from it,' says Tuomas Tikka, a doctoral candidate at Aalto and one of the founders of Reaktor Space Lab. The Aalto-2 satellite's orbit is close to the equator, so the satellite can only occasionally be in contact with the earth station in Otaniemi. 'Several earth stations from around the world are involved in the mission. The information sent by the satellites will be shared by them all, so it is likely that the first time we hear from the Aalto-2 satellite will be with the help of one of the other earth stations,' says Tikka. Aalto-2, which only weighs two kilogrammes, is carrying the multi-Needle Langmuir Probe (mNLP) payload developed at the University of Oslo for the measurement of plasma characteristics.
'Our team's primary goal will be to demonstrate how well the satellite platform designed and built at Aalto University functions in the challenging conditions of space,' Tikka continues. Construction of the Aalto-2 satellite began in 2012 as a doctoral project when the first students graduated as Masters of Science in Technology after working on the Aalto-1 project. Over six years, dozens of next-generation space industry experts have been trained in the projects. The impact is already visible in the growth of start-up companies in the space sector. 'Although there has been space technology in Finland for several decades, Aalto-2 is the first Finnish-built satellite that is now in space. Thanks to the cost-efficiency of small satellites, the industry is on the rise both in Finland and abroad,' says Praks.


News Article | April 13, 2017
Site: www.chromatographytechniques.com

The Earth’s capacity to feed its growing population is limited – and unevenly distributed. An increase in cultivated land and the use of more efficient production technology are partly buffering the problem, but in many areas it is instead solved by increasing food imports. For the first time, researchers at Aalto University have been able to show a broad connection between resource scarcity, population pressure, and food imports, in a study published in Earth’s Future. "Although this has been a topic of global discussion for a long time, previous research has not been able to demonstrate a clear connection between resource scarcity and food imports. We performed a global analysis focusing on regions where water availability restricts production, and examined them from 1961 until 2009, evaluating the extent to which the growing population pressure was met by increasing food imports," explains Miina Porkka, postdoctoral researcher. The researchers’ work combined modeled data with FAO statistics and also took into consideration increases in production efficiency resulting from technological development. The analysis showed that in 75 percent of resource scarce regions, food imports began to rise as the region’s own production became insufficient. Even less wealthy regions relied on the import strategy – but not always successfully. According to the research, the food security of about 1.4 billion people has become dependent on imports and an additional 460 million people live in areas where increased imports are not enough to compensate for the lack of local production. The big issue, says co-author Joseph Guillaume, is that people may not even be aware that they have chosen dependency on imports over further investment in local production or curbing demand. "It seems obvious to look elsewhere when local production is not sufficient, and our analysis clearly shows that is what happens. Perhaps that is the right choice, but it should not be taken for granted." 
The international food system is sensitive and price and production shocks can spread widely and undermine food security – especially in poorer countries that are dependent on imports. As a result, further investments in raising production capacity could be a viable alternative. Especially in sub-Saharan Africa and India, there are opportunities to sustainably improve food production by, for example, more efficient use of nutrients and better irrigation systems. Miina Porkka emphasises that the solutions will ultimately require more than just increasing food production. "Keeping food demand in check is the key issue. Controlling population growth plays an essential role in this work, but it would also be important to enhance production chains by reducing food waste and meat consumption. Since one quarter of all the food produced in the world is wasted, reducing this would be really significant on a global level."
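The import-dependency bookkeeping behind findings like these can be sketched as follows. All numbers are invented for illustration, not the study's FAO-based data:

```python
# Hypothetical sketch of import dependency in a water-scarce region:
# the share of food demand that local production cannot cover.

def import_dependency(local_production, demand):
    """Share of food demand met by imports (0 when self-sufficient)."""
    return max(0.0, 1.0 - local_production / demand)

# A water-scarce region over the study period: demand grows with
# population while local production plateaus (arbitrary food units).
years = [1961, 1985, 2009]
demand = [100.0, 160.0, 220.0]
production = [95.0, 120.0, 130.0]

trend = [import_dependency(p, d) for p, d in zip(production, demand)]
```

A rising `trend` is the signature the researchers report for 75 percent of resource-scarce regions: growing population pressure increasingly met by imports rather than local production.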


News Article | April 24, 2017
Site: www.rdmag.com

Plasmonic nanoparticles exhibit properties based on their geometries and relative positions. Researchers have now developed an easy way to manipulate the optical properties of plasmonic nanostructures that strongly depend on their spatial arrangement. The plasmonic nanoparticles can form clusters, plasmonic metamolecules, and then interact with each other. Changing the geometry of the nanoparticles can be used to control the properties of the metamolecules. "The challenge is to make the structures change their geometry in a controlled way in response to external stimuli. In this study, structures were programmed to modify their shape by altering the pH," says Assistant Professor Anton Kuzyk from Aalto University. In this study plasmonic metamolecules were functionalized with pH-sensitive DNA locks. DNA locks can be easily programmed to operate at a specific pH range. Metamolecules can be either in a "locked" state at low pH or in a relaxed state at high pH. Both states have very distinct optical responses. This makes it possible to create assemblies of several types of plasmonic metamolecules, with each type designed to switch at a different pH value. The ability to program nanostructures to perform a specific function only within a certain pH window could have applications in the field of nanomachines and smart nanomaterials with tailored optical functionalities. This active control of plasmonic metamolecules is promising for the development of sensors, optical switches, transducers and phase shifters at different wavelengths. In the future, pH-responsive nanostructures could also be useful in the development of controlled drug delivery.
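The switching logic described here is essentially a per-type threshold on pH. The sketch below is an assumed behavioural model, not the paper's chemistry: each metamolecule type carries a lock with its own switching pH, below which it is "locked" and above which it is "relaxed", with the two states giving distinct optical responses:

```python
# Assumed behavioural model of pH-programmable DNA locks; the type names
# and switching points are invented for illustration.

def state(ph, switch_ph):
    """Lock state of one metamolecule type at a given pH."""
    return "locked" if ph < switch_ph else "relaxed"

# An assembly mixing three lock designs with different switching points
assembly = {"type_A": 5.5, "type_B": 6.5, "type_C": 7.5}

def assembly_states(ph):
    return {name: state(ph, sp) for name, sp in assembly.items()}

low = assembly_states(5.0)    # acidic: every type locked
mid = assembly_states(7.0)    # A and B relaxed, C still locked
```

Sweeping the pH through such an assembly toggles one type at a time, which is what lets each type "perform a specific function only within a certain pH window".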


News Article | April 25, 2017
Site: www.sciencedaily.com

The brain works most efficiently when it can focus on a single task for a longer period of time. Previous research shows that multitasking, which means performing several tasks at the same time, reduces productivity by as much as 40%. Now a group of researchers specialising in brain imaging has found that changing tasks too frequently interferes with brain activity. This may explain why the end result is worse than when a person focuses on one task at a time. 'We used functional magnetic resonance imaging to measure different brain areas of our research subjects while they watched short segments of the Star Wars, Indiana Jones and James Bond movies,' explains Aalto University Associate Professor Iiro Jääskeläinen. Cutting the films into segments of approximately 50 seconds fragmented their continuity. In the study, the subjects' brain areas functioned more smoothly when they watched the films in segments of 6.5 minutes. The posterior temporal and dorsomedial prefrontal cortices, the cerebellum and dorsal precuneus are the most important areas of the brain in terms of combining individual events into coherent event sequences. These areas of the brain make it possible to turn fragments into complete entities. According to the study, these brain regions work more efficiently when they can deal with one task at a time. Jääskeläinen recommends completing one task each day rather than working on a dozen different tasks simultaneously. 'It's easy to fall into the trap of multitasking. In that case, it seems like there is little real progress and this leads to a feeling of inadequacy. Concentration decreases, which causes stress. Prolonged stress hinders thinking and memory,' says Jääskeläinen. The neuroscientist also sees social media as a challenge. 'Social media is really nothing but multitasking, with several parallel plots and issues. You might end up reading the news or playing a game recommended by a friend. 
From the brain's perspective, social media only increases the load.'


News Article | May 3, 2017
Site: www.gizmag.com

The WatchSense prototype in use – in the final version, the depth sensor would be incorporated into the watch (Credit: Oliver Dietze)

Although smartwatches may indeed be getting capable of more and more functions, their touchscreens will have to remain relatively small if they're still going to fit on people's wrists. As a result, we've recently seen attempts at extending the user interface off of the screen. One of the latest, known as WatchSense, allows users to control a mobile device by moving the fingers of one hand on and above the back of the other. The WatchSense concept was developed by researchers from the Max Planck Institute for Informatics, the University of Copenhagen and Aalto University in Finland. In its current proof-of-concept form, it incorporates a small 3D depth sensor which is worn on the forearm. That sensor is able to ascertain the positions of the user's index finger and thumb as they move on the back of the hand that's wearing the watch, as well as in the space above it. Custom software assigns different commands to different movements, allowing users to control various functions on a linked smartphone. Although the sensor is presently separate from the user's smartwatch, the team believes that it will soon be possible to incorporate miniaturized depth sensors directly into watches. In lab tests, it was found that WatchSense allowed users to adjust music volume and select songs more quickly than they could using a smartphone's Android music app. It was also found to be "more satisfactory" than a touchscreen for virtual and augmented reality-based tasks, along with a map application and the control of a large external screen. The technology will be demonstrated at the upcoming Conference on Human Factors in Computing, taking place in Denver, Colorado.


News Article | April 24, 2017
Site: www.eurekalert.org

(Aalto University) Having women on scientific committees may decrease women's opportunities to be nominated for a professorship. According to a study by researchers at Aalto University, Finland, male evaluators become less favorable toward female candidates as soon as a female evaluator joins the committee. At the same time, female evaluators are not significantly more favorable toward female candidates.


News Article | April 25, 2017
Site: www.eurekalert.org

The brain works most efficiently when it can focus on a single task for a longer period of time, according to brain-imaging research led by Aalto University Associate Professor Iiro Jääskeläinen. In addition to Jääskeläinen, Juha Lahnakoski from the Max Planck Institute, Mikko Sams from Aalto University and Lauri Nummenmaa from the University of Turku participated in the research.


News Article | April 24, 2017
Site: www.cemag.us

Plasmonic nanoparticles exhibit properties based on their geometries and relative positions. Researchers have now developed an easy way to manipulate the optical properties of plasmonic nanostructures that strongly depend on their spatial arrangement. The study was carried out by Anton Kuzyk from Aalto University, Maximilian Urban and Na Liu from the Max Planck Institute for Intelligent Systems and Heidelberg University, and Andrea Idili and Francesco Ricci from the University of Rome Tor Vergata.


News Article | May 8, 2017
Site: www.newscientist.com

Even quantum computers need to keep their cool. Now, researchers have built a tiny nanoscale refrigerator to keep qubits cold enough to function. Classical computers require built-in fans and other ways to dissipate heat, and quantum computers are no different. Instead of working with bits of information that can be either 0 or 1, as in a classical machine, a quantum computer relies on "qubits", which can be in both states simultaneously – called a superposition – thanks to the quirks of quantum mechanics. Those qubits must be shielded from all external noise, since the slightest interference will destroy the superposition, resulting in calculation errors. Well-isolated qubits heat up easily, so keeping them cool is a challenge. Also, unlike in a classical computer, qubits must start in their low-temperature ground states to run an algorithm. Qubits heat up during calculations, so if you want to run several quantum algorithms one after the other, any cooling mechanism must be able to do its job quickly. A standard fan just won't cut it. Now, Mikko Möttönen at Aalto University in Finland and his colleagues have built the first standalone cooling device for a quantum circuit. It could eventually be integrated into many kinds of quantum electronic devices – including a computer. The team built a circuit with an energy gap dividing two channels: a superconducting fast lane, where electrons can zip along with zero resistance, and a slow resistive (non-superconducting) lane. Only electrons with sufficient energy to jump across that gap can get to the superconductor highway; the rest are stuck in the slow lane. If some poor electron falls just short of having enough energy to make the jump, it can capture a photon from a nearby resonator to get a boost. As a result, the resonator gradually cools down. Over time this has a selective chilling effect on the electrons as well: the hotter electrons jump the gap, while the cooler ones are left behind.
The process removes heat from the system, much like how a refrigerator functions. Spiros Michalakis at the California Institute of Technology draws a loose analogy with the famous thought experiment known as Maxwell’s Demon, in which an intelligent being presides over a box of gas atoms divided into two chambers. The demon allows only the hottest, or most energetic, atoms to pass through an opening in the wall dividing the two chambers, resulting in a sharp difference in temperature between the two. There is no demon in the quantum fridge, but it works in a similar way, Michalakis says. “It’s kind of like a gate similar to Maxwell’s Demon, where you only allow electrons with energy above a certain threshold to cross,” he said. The next step will be to build the device and cool actual qubits with it, being careful not to accidentally destroy the superposition when the fridge is shut down. Möttönen is confident enough in eventual success that he has applied for a patent for the device. “Maybe in 10 to 15 years, this might be commercially useful,” he said. “It’s going to take some time, but I’m pretty sure we’ll get there.”


News Article | May 8, 2017
Site: www.rdmag.com

Quantum physicist Mikko Möttönen and his team at Aalto University have invented a quantum-circuit refrigerator, which can reduce errors in quantum computing. The global race towards a functioning quantum computer is on. With future quantum computers, we will be able to solve previously impossible problems and develop, for example, complex medicines, fertilizers, or artificial intelligence. The research results, published today in the scientific journal Nature Communications, suggest how harmful errors in quantum computing can be removed. This is a new step towards a functioning quantum computer. Quantum computers differ from the computers that we use today in that, instead of normal bits, they compute with quantum bits, or qubits. The bits being crunched in your laptop are either zeros or ones, whereas a qubit can exist simultaneously in both states. This versatility of qubits is needed for complex computing, but it also makes them sensitive to external perturbations. Just like ordinary processors, a quantum computer also needs a cooling mechanism. In the future, thousands or even millions of logical qubits may be used simultaneously in computation, and in order to obtain the correct result, every qubit has to be reset at the beginning of the computation. If the qubits are too hot, they cannot be initialized because they are switching between different states too much. This is the problem to which Möttönen and his group have developed a solution. The nanoscale refrigerator developed by the research group at Aalto University solves a massive challenge: with its help, most electrical quantum devices can be initialized quickly. The devices thus become more powerful and reliable. "I have worked on this gadget for five years and it finally works!" rejoices Kuan Yen Tan, a postdoctoral researcher in Möttönen's group. Tan cooled down a qubit-like superconducting resonator by exploiting the tunneling of single electrons through a two-nanometer-thick insulator. 
He gave the electrons slightly less energy from an external voltage source than is needed for direct tunneling. The electron therefore captures the missing energy required for tunneling from the nearby quantum device, and hence the device loses energy and cools down. The cooling can be switched off by adjusting the external voltage to zero. Then, even the energy available from the quantum device is not enough to push the electron through the insulator. "Our refrigerator keeps quanta in order," Mikko Möttönen sums up. Next, the group plans to cool actual quantum bits in addition to resonators. The researchers also want to lower the minimum temperature achievable with the refrigerator and make its on/off switch super fast.
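The energy bookkeeping behind the cooling can be sketched as a toy model: the bias voltage gives an electron slightly less than the gap energy, so tunneling has to borrow a photon from the resonator, draining it. All units and values below are invented for illustration and are not the published device parameters.

```python
# Toy energy-balance sketch of photon-assisted tunneling.
# An electron gets energy from the bias voltage; if that falls just short
# of the energy gap, it can absorb one resonator photon to tunnel,
# removing that photon's energy from the resonator, which cools it down.

GAP = 1.0       # energy gap (arbitrary units)
PHOTON = 0.15   # energy of one resonator photon (arbitrary units)

def tunnels(bias_energy, photon_available):
    """Can an electron cross the gap, absorbing at most one photon?
    Returns (did_tunnel, photons_absorbed)."""
    if bias_energy >= GAP:
        return True, 0              # direct tunneling, no cooling effect
    if photon_available and bias_energy + PHOTON >= GAP:
        return True, 1              # photon-assisted: one photon absorbed
    return False, 0                 # stuck: not enough energy either way

def cool(bias_energy, n_photons, attempts):
    """Count how many resonator photons a stream of electrons removes."""
    removed = 0
    for _ in range(attempts):
        ok, absorbed = tunnels(bias_energy, n_photons - removed > 0)
        removed += absorbed
    return removed

print(cool(0.9, n_photons=5, attempts=20))  # bias just below gap: removes 5
print(cool(0.0, n_photons=5, attempts=20))  # bias at zero: removes 0
```

The second call mirrors the off switch described in the article: with the bias at zero, even a photon's worth of extra energy is not enough to push an electron across the gap, so the resonator keeps its photons.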


News Article | May 3, 2017
Site: www.rdmag.com

It relies on a depth sensor that tracks movements of the thumb and index finger on and above the back of the hand. In this way, not only can smartwatches be controlled, but also smartphones, smart TVs and devices for augmented and virtual reality. They're called the "Apple Watch Series 2", "LG Watch", "Samsung GEAR S3" or "Moto 360 2nd Gen", but they all have the same problem. "Every new product generation has better screens, better processors, better cameras, and new sensors, but regarding input, the limitations remain," explains Srinath Sridhar, a researcher in the Graphics, Vision and Video group at the Max Planck Institute for Informatics. Together with Christian Theobalt, head of the Graphics, Vision and Video group at MPI, Anders Markussen and Sebastian Boring at the University of Copenhagen and Antti Oulasvirta at Aalto University in Finland, Srinath Sridhar has therefore developed an input method that requires only a small camera to track fingertips in mid-air, and touch and position of the fingers on the back of the hand. This combination enables more expressive interactions than any previous sensing technique. Regarding hardware, the prototype, which the researchers have named "WatchSense", requires only a depth sensor, a much smaller version of the well-known "Kinect" game controller from the Xbox 360 video game console. With WatchSense, the depth sensor is worn on the user's forearm, about 20 cm from the watch. As a sort of 3D camera, it captures the movements of the thumb and index finger, not only on the back of the hand but also in the space over and above it. The software developed by the researchers recognizes the position and movement of the fingers within the 3D image, allowing the user to control apps on smartphones or other devices. "The currently available depth sensors do not fit inside a smartwatch, but from the trend it's clear that in the near future, smaller depth sensors will be integrated into smartwatches," Sridhar says. 
But this is not all that's required. According to Sridhar, with their software system the scientists also had to solve the challenges of handling the unevenness of the back of the hand and the fact that the fingers can occlude each other when they are moved. "The most important thing is that we can not only recognize the fingers, but also distinguish between them," explains Sridhar, "which nobody else had managed to do before in a wearable form factor. We can now do this even in real time." The software recognizes the exact positions of the thumb and index finger in the 3D image from the depth sensor, because the researchers trained it to do this via machine learning. In addition, the researchers have successfully tested their prototype in combination with several mobile devices and in various scenarios. "Smartphones can be operated with one or more fingers on the display, but they do not use the space above it. If both are combined, this enables previously impossible forms of interaction," explains Sridhar. He and his colleagues were able to show that with WatchSense, in a music program, the volume could be adjusted and a new song selected more quickly than was possible with a smartphone's Android app. The researchers also tested WatchSense for tasks in virtual and augmented reality, in a map application, and used it to control a large external screen. Preliminary studies showed that WatchSense was more satisfactory in each case than conventional touch-sensitive displays. Sridhar is confident that "we need something like WatchSense whenever we want to be productive while moving. WatchSense is the first to enable expressive input for devices while on the move." From May 6, the researchers will present WatchSense at the renowned "Conference on Human Factors in Computing Systems," or CHI for short, which this year takes place in Denver in the US.
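The core recognition step, deciding which finger a detected fingertip belongs to, is learned from data in WatchSense. The toy below only illustrates the general idea with a nearest-centroid rule on two made-up depth features; the actual system trains a machine-learning model on depth images, and all feature names and values here are hypothetical.

```python
# Minimal sketch of per-finger recognition as a classification problem:
# label a detected fingertip as "thumb" or "index" from simple features.
import math

def centroid(points):
    """Component-wise mean of a list of equal-length feature vectors."""
    dims = len(points[0])
    return tuple(sum(p[d] for p in points) / len(points) for d in range(dims))

def nearest_centroid_fit(samples):
    """samples: {label: [feature_vectors]} -> {label: centroid}"""
    return {label: centroid(vectors) for label, vectors in samples.items()}

def classify(model, x):
    """Assign x to the label whose centroid is closest."""
    return min(model, key=lambda label: math.dist(model[label], x))

# Hypothetical training data: (lateral offset, height above hand) per tip.
train = {
    "thumb": [(-3.0, 1.0), (-2.5, 1.5), (-3.5, 0.8)],
    "index": [(2.0, 2.5), (2.8, 3.0), (2.2, 2.2)],
}
model = nearest_centroid_fit(train)
print(classify(model, (-3.0, 1.2)))  # thumb
print(classify(model, (2.5, 2.7)))   # index
```

A real system faces exactly the complications the article names: the back of the hand is uneven and fingers occlude each other, which is why a trained model on full depth images, rather than two hand-picked features, is needed in practice.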


News Article | April 27, 2017
Site: globenewswire.com

The supervisory board of Nordecon AS decided at its meeting held on 26 April 2017 to appoint two new management board members as of 1 May 2017: Maret Tambek will be responsible for the financial management and support services of the group, and Priit Luman will be responsible for the group’s activities in the export markets. Maret Tambek started working for Nordecon Infra AS as a financial manager in 2007. She assumed the position of head accountant of the group in spring 2010, and since July 2014 she has served as financial director of Nordecon AS. Previously, Maret worked for KPMG Baltics AS as an auditor for eleven years. She worked as a specialist for the Bank of Estonia from 1992 to 1996. Maret graduated from Tallinn University of Technology in the field of Management and Planning of Manufacturing in 1989. She is a certified public accountant and a member of the Estonian Auditors’ Association. Maret Tambek does not own shares of Nordecon AS. Priit Luman has held various construction management positions at Nordecon AS since 2006, and since 2013 he has been the head of the buildings construction division. Priit graduated from Tallinn University of Technology in Civil and Building Engineering in 2010, obtaining a cum laude Master of Science in Engineering. Since 2017, Priit has been studying in the EMBA program of Aalto University. Priit Luman has been issued a Level V Diploma Civil Engineer qualification by the Estonian Association of Civil Engineers. Priit Luman owns 200 shares of Nordecon AS. Nordecon (www.nordecon.com) is a group of construction companies whose core business is construction project management and general contracting in the buildings and infrastructures segment. Geographically, the Group operates in Estonia, Ukraine, Finland and Sweden. The parent of the Group is Nordecon AS, a company registered and located in Tallinn, Estonia. In addition to the parent company, there are more than 10 subsidiaries in the Group. 
The consolidated revenue of the Group in 2016 was 183 million euros. Currently Nordecon Group employs close to 700 people. Since 18 May 2006 the company's shares have been quoted in the main list of the NASDAQ Tallinn Stock Exchange.


News Article | April 21, 2017
Site: globenewswire.com

Testing the fuels of the future
A million kilometers of test drives, a merciless cold-weather simulator and numerous engine tests for special purposes. These three testing methods are among those used by the Neste Engine Laboratory in Kilpilahti to help guarantee consistent, high-quality, low-emission fuels that comply with regulations for Neste's customers. The Engine Laboratory has nearly 60 years of experience under its belt, ensuring that Neste's fuel products keep their customer promise. The fuels must be safe, economical and environmentally friendly to use in any conditions. "We at the Engine Laboratory look far into the future so that we can offer products that best meet our customers' needs and guarantee that our products work," says Teemu Sarjovaara, Head of the Engine Laboratory and an Aalto University alumnus who completed his doctorate at the university's combustion engine laboratory. The laboratory's team of seven top experts uses ten vehicles and several test engines to test new fuel mixtures developed by the company's own product development laboratory, and to verify the compatibility of Neste's fuels with additives made by other manufacturers. With dozens of field tests and equipment simulating real-life conditions, the team clocks up around one million kilometers of test drives each year. Hundreds of thousands of these kilometers are driven in actual road conditions. Another testing method is a testing facility built inside the Engine Laboratory, where short tests can be used to simulate a wide range of different road conditions. The third type of test is conducted with engines alone or on test benches built from engine parts, such as the injection systems in diesel engines. Currently, the laboratory is working on an 18-month project in which four identical cars are driven for 100,000 kilometers each. 
A range of tests measuring fuel consumption and factors such as engine wear are carried out at the beginning and the end of the testing period, as well as several times during the project. "We are carrying out the tests increasingly in field conditions, where an enormous amount of data is measured from the cars on a continuous basis. The data is then transferred via a 4G network to the laboratory for analysis. This way we can react if the readings show anything unusual," Sarjovaara explains. The test laboratory has a room with a test environment known as the chassis dynamometer, which has seen hundreds of tests. Cars are driven in this environment at speeds of up to 160 km/h so that the traction wheels are turning but the car remains stationary. In the middle of the 150-square-meter room stands a familiar 2010 Opel Astra that looks like it is in intensive care: tubes and wires connect the engine cavity and the exhaust piping to dozens of measuring instruments. On the front axle, the front wheels are replaced by an electric dynamometer, or an electric brake, which simulates the load of the road, such as hills and friction. Measuring instruments can be found both inside the car and on the control desk outside the room. In front of the car there is a metal tube resembling a gigantic ventilation duct, which looks like it could swallow the car any minute. An oncoming air current is simulated as part of the test by blowing air against the car at the same speed as the car's speedometer shows. This creates realistic conditions, similar to, for example, driving in cold winter weather. One can only imagine what the force of the air current coming out of the tube must feel like when the speedometer shows over 100 km/h and the temperature in the test room plummets down to -36 degrees Celsius. The laboratory has diesel and gasoline engines dating from different periods, each with their own purpose. Sarjovaara shows us the engine of a VW Transporter from the 1990s. 
It is still used in tests to ensure that fuel additives do not cause problems with engine valves in sub-zero temperatures. The most modern of the engines is the diesel engine manufactured by the PSA Group, the manufacturer of Citroen and Peugeot cars. The engine is customized to comply with the international standards for test engines. Standardization is necessary so that the results produced by testing laboratories around the world are mutually comparable and that tests can be repeated in different laboratories. Testing methods are continuously being developed, as each year brings some new technological innovation. While the basic logic of the combustion engine has not changed since the late 19th century, small innovations are constantly being developed to improve energy-efficiency and safety while reducing emissions. At the moment, engine testing is affected, most of all, by the use of a computer that connects different parts of the car with each other. Engine performance is also strongly dependent on transmission and brakes, so it is increasingly difficult to test an engine on its own, and tests are more commonly carried out with real cars on chassis dynamometers or in genuine road conditions. "Alongside field tests, the tests conducted in laboratory conditions on the chassis dynamometers are the most important ones because of their better accuracy and repeatability," says Sarjovaara. The Neste Engine Laboratory works in close co-operation with many engine and car manufacturers. It also participates in joint projects and information exchanges with both Finnish and international scientific research institutes and commercial engine laboratories. Through wide-reaching collaboration and the tireless cultivation of its own expertise, Neste guarantees the continuity of its own research operations. Thanks to that research, Neste fuels are, and will be, of high quality, efficient, safe and environmentally friendly. 
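The load that the electric brake has to reproduce on the chassis dynamometer is commonly described by a standard road-load model: rolling resistance plus aerodynamic drag plus grade resistance. The sketch below uses this textbook formula with invented vehicle parameters; it is not Neste's actual test configuration.

```python
# Road-load model: total resistive force the dynamometer brake must apply
# to mimic a real road at a given speed. Parameter values are illustrative.
import math

G = 9.81      # gravitational acceleration (m/s^2)
RHO = 1.225   # air density at sea level (kg/m^3)

def road_load(v_kmh, mass=1400.0, crr=0.012, cda=0.70, grade_deg=0.0):
    """Total resistive force in newtons at speed v_kmh.

    mass: vehicle mass (kg), crr: rolling resistance coefficient,
    cda: drag coefficient times frontal area (m^2), grade_deg: slope.
    """
    v = v_kmh / 3.6                      # km/h -> m/s
    theta = math.radians(grade_deg)
    rolling = mass * G * crr * math.cos(theta)   # tire rolling resistance
    drag = 0.5 * RHO * cda * v * v               # aerodynamic drag
    grade = mass * G * math.sin(theta)           # hill-climbing force
    return rolling + drag + grade

print(round(road_load(100.0)))                  # flat road at 100 km/h: 496 N
print(round(road_load(100.0, grade_deg=3.0)))   # a 3-degree uphill adds load
```

Note how drag grows with the square of speed: this is why the matching air blast in front of the car matters so much at 100 km/h and above.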
Neste (NESTE, Nasdaq Helsinki) builds sustainable options for the needs of transport, businesses and consumers. Our global range of products and services allows customers to lower their carbon footprint by combining high-quality and low-emission renewable products and oil products to tailor-made service solutions. We are the world's largest producer of renewable diesel refined from waste and residues, and we are also bringing renewable solutions to the aviation and plastics industries. We want to be a reliable partner whose expertise, research and sustainable practices are widely appreciated. In 2016, Neste's net sales stood at EUR 11.7 billion, and we were on the Global 100 list of the 100 most sustainable companies in the world. Read more: www.neste.com/en


News Article | April 19, 2017
Site: phys.org

Researchers working at Aalto University and at the Royal Institute of Technology (KTH) in Stockholm have developed a new method for measuring the number of single-walled carbon nanotubes and their concentration in a carbon nanotube layer. The novel method is based on measurement of the Raman spectrum together with precise measurement of mass and optical absorbance. A dependence of the phonon scattering intensity on the number of CNTs is observed. This method opens an opportunity for the quantitative mapping of the distribution of sp2-bonded carbon atoms (i.e. those atoms that form the carbon nanotubes with bonds to three other carbon atoms) in CNT layers, with a resolution limited by the focused laser spot size. A carbon nanotube (CNT) has the structure of a rolled single layer of graphene, where each carbon atom is bonded with three other carbon atoms. Basically, a nanotube can be considered one large molecule. The length of a CNT varies from one to one hundred micrometers, while its diameter is of the order of one nanometer. CNT-based materials are intensively studied due to a number of novel and unique properties that make them potentially useful in a wide range of applications. Extremely thin CNT layers offer outstanding properties like excellent flexibility, optical transparency, high electrical conductivity, extremely small weight, and low processing costs. The optical and electrical properties of a CNT layer can be varied by changing, e.g., the diameter and length of the nanotubes or the amount of carbon nanotubes in the layer. 'CNT layers can be used for fabrication of transparent electrodes, fuel and solar cells, supercapacitors, etc. Therefore, a measurement technique for the number of carbon nanotubes in the CNT layer is very useful,' says Irina Nefedova, one of the researchers in this project, who defended her thesis on the electrical and optical properties of carbon nanotubes in March 2017 at Aalto University. 
Explore further: Reusable carbon nanotubes could be the water filter of the future More information: Ilya V. Anoshkin et al. Single walled carbon nanotube quantification method employing the Raman signal intensity, Carbon (2017). DOI: 10.1016/j.carbon.2017.02.019
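The quantification principle described above (Raman intensity proportional to the amount of sp2-bonded carbon, calibrated against an independent mass measurement) can be sketched as a simple linear calibration. The function names, units, and every number below are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the quantification idea: assume the integrated Raman
# G-band intensity scales linearly with the amount of sp2-bonded carbon in
# the laser spot, with the proportionality constant fixed by an independent
# mass measurement of a reference layer. All values are hypothetical.

def calibrate(reference_intensity, reference_mass_ug):
    """Raman counts per microgram of CNT material (hypothetical units)."""
    return reference_intensity / reference_mass_ug

def mass_from_raman(intensity, counts_per_ug):
    """Estimate the local CNT mass from a measured G-band intensity."""
    return intensity / counts_per_ug

# Calibrate on a reference layer of known mass, then map an unknown spot.
k = calibrate(reference_intensity=5.0e4, reference_mass_ug=2.0)
estimate = mass_from_raman(3.0e4, k)
```

Repeating such a measurement over a grid of laser-spot positions would give the quantitative map mentioned in the article, with resolution set by the spot size.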


News Article | May 3, 2017
Site: www.eurekalert.org

Computers are able to learn to explain the behavior of individuals by tracking their glances and movements. Researchers from Aalto University, the University of Birmingham and the University of Oslo present results paving the way for computers to learn psychologically plausible models of individuals simply by observing them. In a newly published conference article, the researchers showed that just by observing how long a user takes to click menu items, one can infer a model that reproduces similar behavior and accurately estimates some characteristics of that user's visual system, such as fixation durations. Despite significant breakthroughs in artificial intelligence, it has been notoriously hard for computers to understand why a user behaves the way she does. Cognitive models that describe individual capabilities, as well as goals, can much better explain, and hence predict, individual behavior, even in new circumstances. However, learning these models from the indirect data available in practice has been out of reach. "The benefit of our approach is that a much smaller amount of data is needed than for 'black box' methods. Previous methods for performing this type of tuning have either required extensive manual labor or a large amount of very accurate observation data, which has limited the applicability of these models until now", explains doctoral student Antti Kangasrääsiö from Aalto University. The method is based on Approximate Bayesian Computation (ABC), a machine learning method developed to infer very complex models from observations, with uses in climate science and epidemiology, among other fields. It paves the way for automatic inference of complex models of human behavior from naturalistic observations. This could be useful in human-robot interaction, or in assessing individual capabilities automatically, for example in detecting symptoms of cognitive decline. 
"We will be able to infer a model of a person that also simulates how that person learns to act in totally new circumstances," says Samuel Kaski, Professor of Machine Learning at Aalto University. "We're excited about the prospects of this work in the field of intelligent user interfaces," says Antti Oulasvirta, Professor of User Interfaces at Aalto University. "In the future, the computer will be able to understand humans in a somewhat similar manner as humans understand each other. It can then much better predict not only the benefits of a potential change but also its costs to an individual, a capability that adaptive interfaces have lacked", he continues. The results will be presented at CHI, the world's largest computer-human interaction conference, in Denver, USA, in May 2017. The article is available as a preprint: arxiv.org/abs/1612.00653 More information: Inferring Cognitive Models from Data using Approximate Bayesian Computation, DOI: 10.1145/3025453.3025576
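Rejection ABC, the family of methods named above, can be sketched in a few lines: draw candidate parameters from a prior, simulate data under each candidate, and keep the candidates whose simulated summary statistic lands close to the observed one. The toy "user model" below (a fixation-duration parameter times the number of menu items inspected) is a stand-in of our own, not the authors' cognitive model.

```python
import random

# Toy rejection-ABC sketch: infer a "fixation duration" parameter of a
# hypothetical user model purely from mean menu-selection times.

def simulate_mean_click_time(fixation_ms, n_items=8, trials=200, rng=None):
    """Mean selection time if the user inspects a random number of items."""
    rng = rng or random.Random(0)
    total = 0.0
    for _ in range(trials):
        inspected = rng.randint(1, n_items)   # items fixated before choosing
        total += inspected * fixation_ms
    return total / trials

def abc_rejection(observed_mean, prior_lo=100.0, prior_hi=600.0,
                  n_samples=2000, tol=50.0, seed=1):
    """Keep prior draws whose simulated summary is within tol of the data."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_samples):
        theta = rng.uniform(prior_lo, prior_hi)       # draw from the prior
        sim = simulate_mean_click_time(theta, rng=rng)
        if abs(sim - observed_mean) < tol:            # close enough: accept
            accepted.append(theta)
    return accepted

# "Observed" data generated with a known fixation duration of 300 ms;
# the accepted samples should concentrate near that value.
obs = simulate_mean_click_time(300.0)
posterior = abc_rejection(obs)
```

The accepted samples approximate a posterior over the parameter; the published work uses a far richer cognitive simulator in place of the toy one here.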



Salo J.,Aalto University
Computers in Human Behavior | Year: 2011

Social virtual worlds (SVWs) have become important environments for social interaction. At the same time, the supply of and demand for virtual goods and services is rapidly increasing. For SVWs to be economically sustainable, retaining existing users and turning them into consumers are paramount challenges. This requires an understanding of the underlying reasons why users continuously engage in SVWs and purchase virtual items. This study builds upon the Technology Acceptance Model, the motivational model and the theory of network externalities to examine continuous usage and purchase intention, and it empirically tests the model with data collected from 2481 Habbo users. The results reveal a strong relationship between continuous usage and purchasing. Further, the results demonstrate the importance of the presence of other users in predicting purchase behavior in the SVW. Continuous SVW usage in turn is predicted directly by perceived enjoyment and usefulness, while the effect of attitude is marginal. Finally, perceived network externalities exert a significant influence on the perceived enjoyment and usefulness of the SVW but do not have a direct effect on continuous usage. © 2011 Elsevier Ltd. All rights reserved.


Silaev M.A.,RAS Institute for Physics of Microstructures | Volovik G.E.,Aalto University | Volovik G.E.,L D Landau Institute For Theoretical Physics
Physical Review B - Condensed Matter and Materials Physics | Year: 2012

We consider fermionic states bound on domain walls in the Weyl superfluid 3He-A and on interfaces between 3He-A and the fully gapped topological superfluid 3He-B. We demonstrate that in both cases the fermionic spectrum contains Fermi arcs: continuous nodal lines of the energy spectrum terminating at the projections of the two Weyl points onto the plane of surface states in momentum space. The number of Fermi arcs is determined by the index theorem that relates bulk values of the topological invariant to the number of zero-energy surface states. The index theorem is consistent with the exact spectrum of the Bogoliubov-de Gennes equation obtained numerically; the quasiclassical approximation, meanwhile, fails to reproduce the correct number of zero modes. Thus we demonstrate that topology describes the properties of the exact spectrum beyond the quasiclassical approximation. © 2012 American Physical Society.


Laasonen K.,Aalto University | Panizon E.,CNR Institute of Materials for Electronics and Magnetism | Bochicchio D.,CNR Institute of Materials for Electronics and Magnetism | Ferrando R.,CNR Institute of Materials for Electronics and Magnetism
Journal of Physical Chemistry C | Year: 2013

The structures of AgCu, AgNi, and AgCo nanoalloys with icosahedral geometry have been computationally studied by a combination of atomistic and density-functional theory (DFT) calculations, for sizes up to about 1400 atoms. These nanoalloys preferentially assume core-shell chemical ordering, with Ag in the shell. These core-shell nanoparticles can have either centered or off-center cores; they can have an atomic vacancy in their central site or present different arrangements of the Ag shell. Here we compare these different icosahedral motifs and determine the factors influencing their stability by means of a local strain analysis. The calculations find that off-center cores are favorable for sufficiently large core sizes and that the central vacancy is favorable in pure Ag clusters but not in binary clusters with cores of small size. A quite good agreement between atomistic and DFT calculations is found in most cases, with some discrepancy only for pentakis-dodecahedral structures. Our results support the accuracy of the atomistic model. Spin structure and charge transfer in the nanoparticles are also analyzed. © 2013 American Chemical Society.


Galvez F.E.,University of Seville | Barnes P.R.F.,Imperial College London | Halme J.,Aalto University | Miguez H.,University of Seville
Energy and Environmental Science | Year: 2014

In order to enhance optical absorption, light trapping by multiple scattering is commonly achieved in dye sensitized solar cells by adding particles of a different sort. Herein we propose a theoretical method to find the structural parameters (particle number density and size) that optimize the conversion efficiency of electrodes of different thicknesses containing spherical inclusions of diverse composition. Our work provides a theoretical framework in which the response of solar cells containing diffuse scattering particles can be rationalized. Optical simulations are performed by combining a Monte Carlo approach with Mie theory, in which the angular distribution of scattered light is accounted for. Several types of scattering centers, such as anatase, gold and silver particles, as well as cavities, are considered and their effect compared. Estimates of photovoltaic performance, insight into the physical mechanisms responsible for the observed enhancements, and guidelines to improve the cell design are provided. We discuss the results in terms of light transport in weakly disordered optical media and find that the observed variations between the optimum scattering configurations attained for different electrode thicknesses can be understood as the result of the randomization of the light propagation direction at different depths within the active layer. A primary conclusion of our study is that photovoltaic performance is optimised when the scattering properties of the film are adjusted so that the distance over which incident photons are randomized is comparable to the thickness of the film. This simple relationship could also be used as a design rule to attain the optimum optical design in other photovoltaic materials. This journal is © The Royal Society of Chemistry.
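The paper's design rule, that performance peaks when the light-randomization distance is comparable to the film thickness, can be illustrated with a toy one-dimensional Monte Carlo walk. The sketch below assumes isotropic scattering and exponential free paths, a deliberate simplification of the Mie angular distributions used in the paper, and the mean-free-path value is illustrative; the "randomization depth" is operationalized here as the depth where a photon first acquires an upward direction component.

```python
import math
import random

# Toy 1-D photon walk: photons enter a film and travel exponential free
# paths (mean free path `mfp`) between isotropic scattering events. We
# record the depth at which each photon first gains an upward direction
# component, i.e. where memory of the incident direction is lost.

def randomization_depth(mfp, n_photons=5000, seed=0):
    rng = random.Random(seed)
    depths = []
    for _ in range(n_photons):
        z, mu = 0.0, 1.0                 # depth and direction cosine (+1 = down)
        while True:
            step = -mfp * math.log(1.0 - rng.random())  # exponential free path
            z += mu * step
            mu = rng.uniform(-1.0, 1.0)  # isotropic new direction
            if mu < 0.0:                 # first upward component: randomized
                depths.append(z)
                break
    return sum(depths) / len(depths)

# Design rule from the paper: tune the scattering so that this depth is
# comparable to the film thickness (units are arbitrary, e.g. micrometers).
d_rand = randomization_depth(mfp=2.0)
```

With a mean free path of 2 the average randomization depth comes out a few units deep, so a film of comparable thickness would satisfy the rule in this toy picture.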


Galvez F.E.,University of Seville | Kemppainen E.,Aalto University | Miguez H.,University of Seville | Halme J.,Aalto University
Journal of Physical Chemistry C | Year: 2012

Herein, we present an integral optical and electrical theoretical analysis of the effect of different diffuse light scattering designs on the performance of dye solar cells. Light harvesting efficiencies and electron generation functions extracted from optical numerical calculations based on a Monte Carlo approach are introduced in a standard electron diffusion model to obtain the steady-state characteristics of the different configurations considered. We demonstrate that there is a strong dependence of the incident photon to current conversion efficiency, and thus of the overall conversion efficiency, on the interplay between the value of the electron diffusion length considered and the type of light scattering design employed, which determines the spatial dependence of the electron generation function. Other effects, like the influence of increased photoelectron generation on the photovoltage, are also discussed. Optimized scattering designs for different combinations of electrode thickness and electron diffusion length are proposed. © 2012 American Chemical Society.
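The "standard electron diffusion model" referred to above is commonly written as D n''(x) - n(x)/tau + g(x) = 0, with an extracting front contact and a blocking back contact. A minimal finite-difference sketch follows, with an illustrative exponential generation profile standing in for the optically computed one; all parameter values are assumptions for illustration only.

```python
import math

# Steady-state electron diffusion in a photoelectrode film:
#     D * n''(x) - n(x)/tau + g(x) = 0
# with extracting contact n(0) = 0 and blocking back contact n'(d) = 0.
# g(x) = alpha*phi*exp(-alpha*x) is an illustrative generation profile.

def solve_diffusion(d=10e-4, D=1e-4, tau=1e-2, alpha=1e3, phi=1e17, N=200):
    """Electron density on N+1 grid points across a film of thickness d (cm)."""
    h = d / N
    a = [0.0] * (N + 1)  # sub-diagonal
    b = [0.0] * (N + 1)  # diagonal
    c = [0.0] * (N + 1)  # super-diagonal
    r = [0.0] * (N + 1)  # right-hand side
    b[0] = 1.0                                   # enforces n(0) = 0
    for i in range(1, N):
        a[i] = D / h ** 2
        b[i] = -2.0 * D / h ** 2 - 1.0 / tau
        c[i] = D / h ** 2
        r[i] = -alpha * phi * math.exp(-alpha * i * h)   # -g(x_i)
    a[N], b[N] = -1.0, 1.0                       # n(d) - n(d-h) = 0
    # Thomas algorithm: forward elimination, then back substitution.
    for i in range(1, N + 1):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        r[i] -= m * r[i - 1]
    n = [0.0] * (N + 1)
    n[N] = r[N] / b[N]
    for i in range(N - 1, -1, -1):
        n[i] = (r[i] - c[i] * n[i + 1]) / b[i]
    return n

profile = solve_diffusion()   # zero at the contact, flattening at the back
```

In the paper the generation function comes from the Monte Carlo optical calculation rather than a simple exponential, which is exactly where the scattering design enters the electrical model.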


Sein M.K.,University of Agder | Henfridsson O.,Viktoria Institute | Henfridsson O.,University of Oslo | Purao S.,Pennsylvania State University | And 3 more authors.
MIS Quarterly: Management Information Systems | Year: 2011

Design research (DR) positions information technology artifacts at the core of the Information Systems discipline. However, dominant DR thinking takes a technological view of the IT artifact, paying scant attention to its shaping by the organizational context. Consequently, existing DR methods focus on building the artifact and relegate evaluation to a subsequent and separate phase. They value technological rigor at the cost of organizational relevance, and fail to recognize that the artifact emerges from interaction with the organizational context even when its initial design is guided by the researchers' intent. We propose action design research (ADR) as a new DR method to address this problem. ADR reflects the premise that IT artifacts are ensembles shaped by the organizational context during development and use. The method conceptualizes the research process as containing the inseparable and inherently interwoven activities of building the IT artifact, intervening in the organization, and evaluating it concurrently. The essay describes the stages of ADR and associated principles that encapsulate its underlying beliefs and values. We illustrate ADR through a case of competence management at Volvo IT.


Petersen R.,University of Aalborg | Pedersen T.G.,University of Aalborg | Jauho A.-P.,Technical University of Denmark | Jauho A.-P.,Aalto University
ACS Nano | Year: 2011

Pristine graphene is a semimetal and thus does not have a band gap. By making a nanometer-scale periodic array of holes in the graphene sheet, a band gap may form; the size of the gap is controllable by adjusting the parameters of the lattice. The hole diameter, hole geometry, lattice geometry, and the separation of the holes are parameters that all play an important role in determining the size of the band gap, which, for technological applications, should be at least of the order of tenths of an eV. We investigate four different hole configurations: the rectangular, the triangular, the rotated triangular, and the honeycomb lattice. It is found that the lattice geometry plays a crucial role in the size of the band gap: the triangular arrangement always displays a sizable gap, while for the other types only particular hole separations lead to a large gap. This observation is explained using Clar sextet theory, and we find that a sufficient condition for a large gap is that the number of sextets exceeds one-third of the total number of hexagons in the unit cell. Furthermore, we investigate nonisosceles triangular structures to probe the sensitivity of the gap in triangular lattices to small changes in geometry. © 2011 American Chemical Society.
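The sufficient condition stated in the abstract is pure arithmetic once the Clar sextets have been counted. A trivial helper encoding just that criterion (the example counts are hypothetical, not taken from the paper):

```python
# The reported sufficient condition for a large band gap in graphene
# antidot lattices: the number of Clar sextets exceeds one third of the
# number of hexagons in the unit cell. Counting the sextets for a given
# lattice is the hard part and is not attempted here.

def predicts_large_gap(n_sextets, n_hexagons):
    """Apply the sextet criterion; inputs are plain counts."""
    return n_sextets > n_hexagons / 3

# Illustrative counts (hypothetical):
predicts_large_gap(6, 12)   # True: 6 exceeds 12/3 = 4
predicts_large_gap(3, 12)   # False: 3 does not exceed 4
```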


Zhu B.,KTH Royal Institute of Technology | Fan L.,KTH Royal Institute of Technology | Fan L.,Tianjin University of Technology | Lund P.,Aalto University
Applied Energy | Year: 2013

Recent scientific and technological advancements have provided a wealth of new information about solid oxide-molten salt composite materials and multifunctional ceria-based nanocomposites for advanced fuel cells (NANOCOFC). NANOCOFC is a new approach to designing and developing multifunctional nanocomposite materials, especially for operation at 300-600 °C. NANOCOFC and low-temperature advanced ceramic fuel cells (LTACFCs) constitute a promising new area of research that can be explored in various ways. Ceria-based composite materials have been developed as competitive electrolyte candidates for low-temperature ceramic fuel cells (LTCFCs). In the latest developments, multifunctional materials have been created by integrating semiconductors and ion conductors, yielding new insights for R&D on single-component electrolyte-free fuel cells (EFFCs), a breakthrough fuel cell technology. A homogeneous component/layer of the semiconducting and ion-conducting materials can realize all fuel cell functions, avoiding the use of three separate components (anode, electrolyte and cathode), i.e. the "three in one" concept highlighted by Nature Nanotechnology (2011). This report gives a short review of worldwide activities on ceria-based composites, emphasizing the latest semi-ionic conductive nanocomposites and their applications in new energy technologies, and provides an overview to help the audience gain a comprehensive understanding of this new field. © 2013 Elsevier Ltd.


Kaipia R.,Aalto University | Dukovska-Popovska I.,University of Aalborg | Loikkanen L.,Aalto University
International Journal of Physical Distribution and Logistics Management | Year: 2013

Purpose: The aim of this empirical paper is to study information sharing in fresh food supply chains, with a specific goal of reducing waste and facilitating sustainable performance. The study focuses on material and information flow issues, specifically on sharing demand and shelf-life data. Design/methodology/approach: This work has been designed as an exploratory case study in three fresh food supply chains, milk, fresh fish, and fresh poultry, in the Nordic countries. The cases are based on interviews and data from the databases of the companies involved. Each case focuses on analyzing information flow, particularly the current order patterns and forecasting and planning process, and material flow, focusing on the supply chain structure. In two cases significant changes have been made to forecasting processes and material flow, while the third case intends to identify the most beneficial uses of shared information to create a sustainable fresh food supply chain. Findings: The performance of the perishable food chain can be improved by more efficient information sharing. The key to improved operations is how and for which purposes the shared data should be used. In addition, changes in the supply chain structure were needed to speed up the deliveries and ensure shelf availability. The cross-case analysis revealed that improved performance was obtained with parallel changes in information sharing and usage and in material flow. Originality/value: Few studies approach the problem of waste and sustainability from an integrated supply chain perspective. This paper links data sharing with the sustainability performance of the supply chain as a whole. © Emerald Group Publishing Limited.


Zurita G.A.,CONICET | Bellocq M.I.,CONICET | Rybicki J.,Aalto University
Proceedings of the National Academy of Sciences of the United States of America | Year: 2013

The species-area relationship (SAR) gives a quantitative description of the increasing number of species in a community with increasing area of habitat. In conservation, SARs have been used to predict the number of extinctions when the area of habitat is reduced. Such predictions are most needed for landscapes rather than for individual habitat fragments, but SAR-based predictions of extinctions for landscapes with highly fragmented habitat are likely to be biased because SAR assumes contiguous habitat. In reality, habitat loss is typically accompanied by habitat fragmentation. To quantify the effect of fragmentation in addition to the effect of habitat loss on the number of species, we extend the power-law SAR to the species-fragmented area relationship. This model unites single-species metapopulation theory with the multispecies SAR for communities. We demonstrate with a realistic simulation model and with empirical data for forest-inhabiting subtropical birds that the species-fragmented area relationship predicts the number of species in fragmented landscapes far better than SAR. The results demonstrate that for communities of species that are not well adapted to living in fragmented landscapes, the conventional SAR underestimates the number of extinctions in landscapes where little habitat remains and what remains is highly fragmented. © PNAS 2013.
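The power-law SAR that the authors extend is S = c * A**z. Used the conventional way, it predicts the fraction of species surviving a reduction in habitat area independently of c, which is exactly the kind of estimate the paper shows to be optimistic for fragmented landscapes. Parameter values below are illustrative (z around 0.25 is a commonly used exponent).

```python
# The classic power-law species-area relationship, S = c * A**z, and the
# survival fraction it implies after habitat loss. The paper shows that for
# highly fragmented remaining habitat this conventional estimate
# underestimates extinctions.

def species_count(area, c=10.0, z=0.25):
    """Number of species supported by a contiguous habitat of given area."""
    return c * area ** z

def surviving_fraction(area_new, area_old, z=0.25):
    """S_new / S_old = (A_new / A_old) ** z, independent of c."""
    return (area_new / area_old) ** z

# Losing 90% of contiguous habitat with z = 0.25:
frac = surviving_fraction(10.0, 100.0)   # about 0.56, i.e. roughly 44% lost
```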


Kopnin N.B.,Aalto University | Kopnin N.B.,L D Landau Institute For Theoretical Physics | Melnikov A.S.,RAS Institute for Physics of Microstructures
Physical Review B - Condensed Matter and Materials Physics | Year: 2011

An approach applicable to spatially inhomogeneous and time-dependent problems associated with induced superconductivity in low-dimensional electronic systems is developed. This approach is based on the Fano-Anderson model, which describes the decay of a resonance state coupled to a continuum. We consider two types of junctions made of a ballistic two-dimensional electron gas placed in a finite-length tunnel contact with bulk superconducting leads. We calculate the spectrum of the bound states, the supercurrent, and the current-voltage curve, which show a rich structure due to the presence of the induced gap and dimensional quantization. © 2011 American Physical Society.


The wear behaviour of a new dual mobility total hip design was compared with that of a modular design using the 12-station anatomic hip joint simulator HUT-4. In addition, two positions of the acetabular shells were compared, at 45° and 60° abduction. The acetabular insert material was conventional ultra-high molecular weight polyethylene (UHMWPE) in both designs, and the femoral head material was stainless steel. The differences in the mean wear rates between the two designs in either position, and between the two positions in either design were not statistically significant. The wear rates were of the order of 20 mg per one million cycles. © 2009 Elsevier B.V. All rights reserved.


News Article | December 1, 2015
Site: www.sciencenews.org

Anyone trying to circumvent the physical laws governing heat is going to get burned. A new experiment reveals how a device that robs a closed system of heat to make it more orderly, an action forbidden by a bedrock law of physics, inevitably pays a price by becoming hotter and more disordered. It’s a real-life demonstration of a nearly 150-year-old thought experiment known as Maxwell’s demon. If this demon really could skirt the second law of thermodynamics — which states that the entropy, or disorder, of an isolated system can never decrease — then it would be possible to create a perpetual motion machine. The demonstration described in a paper to be published in Physical Review Letters is the first to monitor both a system and the demon that’s working to reduce the system’s entropy. “It’s a really nice experiment,” says Eric Lutz, a theoretical physicist at the University of Erlangen-Nuremberg in Germany. The work confirms theoretical research showing that information and heat are intertwined: The demon heats up because it must discard the information it learned to manipulate the system. A demonlike device could eventually perform functions like refrigeration, but this experiment proves the contraption would consume energy just like the kitchen appliance. Nineteenth century Scottish physicist James Clerk Maxwell was very familiar with the second law of thermodynamics. It explains why heat always flows from hot to cold until everything reaches a stable temperature, a state of maximum entropy. Steam engines work by exploiting the heat transfer to drive a turbine. In an 1867 letter, Maxwell introduced a scheme that seemed to game the system. He envisioned a microscopic entity that monitored gas molecules bouncing around two neighboring containers. 
This “demon” would increase the temperature difference between containers, and thus decrease the total entropy, by allowing only fast-moving molecules to cross into the hotter container and slow-moving molecules to enter the colder container. The sorting would enable the demon to perpetually run an engine. Molecular sorting isn’t the only way to decrease entropy — stealing heat works too. The laboratory version of Maxwell’s demon created by Jonne Koski, a physicist at Aalto University in Finland, and colleagues essentially tricked an electronic circuit into forfeiting heat. Without the demon, electrons in the circuit progressed from high to low energy, as if rolling down a gentle slope. As the electrons rolled downhill, they released energy in the form of heat into their environment, increasing the system’s temperature and entropy. At one point along their path, though, the electrons had to briefly borrow some of that energy to scale a small bump — which isn’t a big deal as long as they gave it back when rolling down the bump. But the demon, in the form of a charge-manipulating device, was monitoring that obstacle. Whenever an electron scaled the bump, the demon introduced a charge that transformed the bump into a pothole. The electron then had to consume even more energy to escape the hole. Once the electron left, the demon brought back the bump for the next electron. The cumulative effect of electrons overcoming the demon-created obstacle course drained heat from the environment, leading to lower temperature and lower entropy. A scientist with no knowledge of the experiment would be shocked to find the system seemingly violating the second law. But there’s no need to rewrite the textbooks, because Koski’s demon pays a price. The researchers found that as the demon fooled with electrons, it heated up. In fact, it warmed so much that the total entropy of the system and the demon increased. 
The heat is a by-product of the demon’s inability to store information about the system it’s monitoring. Unlike making observations and recording them, erasing information always requires some use of energy, a principle first articulated by physicist Rolf Landauer in 1961. Since Koski’s demon can keep tabs only on one electron at a time, it must discard its knowledge of past electrons — an entropy-increasing process that more than compensates for the entropy lost by the system. “The demon has to heat up more than the system cools,” Koski says. Some physicists say that while the experiment is compelling, they’re not convinced it captures the essence of Maxwell’s original demon concept. Nonetheless, a device similar to Koski’s demon could prove useful for cooling nano-sized devices — even if it has to play by the rules.
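Landauer's principle, cited above, puts a concrete floor under the demon's cost: erasing one bit of information dissipates at least k_B * T * ln(2) of heat. A one-line calculation at room temperature:

```python
import math

# Landauer's minimum heat cost of erasing one bit: k_B * T * ln(2).
# This is the price the demon pays each time it forgets an electron.

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)

def landauer_limit(temperature_k):
    """Minimum heat in joules dissipated to erase one bit at temperature T."""
    return K_B * temperature_k * math.log(2.0)

e_room = landauer_limit(300.0)   # roughly 3e-21 J per bit at 300 K
```

Tiny as that number is, summed over every electron the demon tracks it is enough to guarantee that the total entropy of system plus demon goes up.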


News Article | November 29, 2016
Site: www.eurekalert.org

Some people claim to experience pain just from watching something painful happen. This is especially true of people suffering from complex regional pain syndrome (CRPS), a disabling chronic pain disorder in a limb. In CRPS patients, both their own movements and merely observing other people's movements may aggravate the pain. When you hurt yourself, pain receptors in the body send signals to different parts of the brain. As a result, you experience pain. Researchers at Aalto University, Finland, found that when CRPS patients feel pain caused by observing another person's movements, their brains display abnormal activation in many of the areas that respond to normal physical pain. Thus the pain that the CRPS patients felt during movement observation showed similarities to the "normal" pain associated with tissue damage. "CRPS is a very complex disease with devastating chronic pain. Its pathophysiology is incompletely understood and definitive biomarkers are lacking. Our discovery may help to develop diagnostics and therapeutic strategies for CRPS patients," says neurologist Jaakko Hotta, a doctoral candidate at Aalto University. In the study, the researchers analyzed functional magnetic resonance images from 13 upper-limb CRPS patients and 13 healthy control subjects who were viewing brief videos of hand actions, such as a hand squeezing a ball with maximum force. In the CRPS patients, watching hand actions was associated with abnormal brain activation patterns, and a pattern-classification analysis differentiated the patients from the healthy subjects. These findings indicate that CRPS affects brain areas related to both pain processing and motor control. The study was published in The Journal of Pain, the official journal of the American Pain Society.



News Article | December 8, 2016
Site: www.eurekalert.org

For the first time, information retrieval is possible with the help of EEG interpreted with machine learning. In a study conducted by the Helsinki Institute for Information Technology (HIIT) and the Centre of Excellence in Computational Inference (COIN), laboratory test subjects read the introductions of Wikipedia articles of their own choice. During the reading session, the test subjects' EEG was recorded, and the readings were then used to model which key words the subjects found interesting. 'The aim was to study if EEG can be used to identify the words relevant to a test subject, to predict a subject's search intentions and to use this information to recommend new relevant and interesting documents to the subject. There are millions of documents in the English Wikipedia, so the recommendation accuracy was studied against this vast but controllable corpus', says HIIT researcher Tuukka Ruotsalo. Because brain signals are noisy, machine learning was used for the modelling, so that relevance and interest could be identified by learning the EEG responses. With the help of machine learning methods, it was possible to identify informative words, which were then also useful in the information retrieval application. 'Information overload is a part of everyday life, and it is impossible to react to all the information we see. And according to this study, we don't need to; EEG responses measured from brain signals can be used to predict a user's reactions and intent', says HIIT researcher Manuel Eugster. Based on the study, brain signals could be used to successfully predict other Wikipedia content that would interest the user. 'Applying the method in real information retrieval situations seems promising based on the research findings. Nowadays, we use a lot of our working time searching for information, and there is much room for making knowledge work more effective, but practical applications still need more work. 
The main goal of this study was to show that this kind of new thing was possible in the first place', says Samuel Kaski, Professor at the Department of Computer Science and Director of COIN. 'It is possible that, in the future, EEG sensors can be worn comfortably. This way, machines could assist humans by automatically observing, marking and gathering relevant information by monitoring EEG responses', adds Ruotsalo. The study was carried out in cooperation between the Helsinki Institute for Information Technology (HIIT), which is jointly run by Aalto University and the University of Helsinki, and the Centre of Excellence in Computational Inference (COIN). The study was funded by the EU, the Academy of Finland as part of the COIN study on machine learning and advanced interfaces, and the Revolution of Knowledge Work project by Tekes.
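A minimal sketch of the idea: simulate per-word EEG epochs in which relevant words evoke a P300-like response, then learn a simple threshold to flag them. The sampling rate, bump shape, word list, and threshold classifier are all invented for illustration; they are not details from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 100                       # assumed sampling rate, Hz
t = np.arange(0, 0.8, 1 / fs)  # one 800 ms epoch per word

def epoch(relevant):
    """Simulated single-word EEG epoch: relevant words get a bump near 300 ms."""
    e = rng.normal(0.0, 0.5, t.size)
    if relevant:
        e += 3.0 * np.exp(-((t - 0.3) ** 2) / 0.005)
    return e

win = (t >= 0.25) & (t <= 0.40)     # window where the response is expected

def feature(e):
    return e[win].mean()            # mean amplitude in the window

# "learn" a threshold from labelled training epochs
train_feats = [feature(epoch(r)) for r in [0, 1] * 50]
threshold = float(np.mean(train_feats))

# score unseen words and keep those whose response exceeds the threshold
words = [("neural", 1), ("the", 0), ("wikipedia", 1), ("of", 0)]
relevant_words = [w for w, r in words if feature(epoch(r)) > threshold]
```

Real EEG classifiers use richer features and regularized models, but the pipeline shape (epoch, extract feature, learn decision rule, rank words) is the same.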


News Article | October 10, 2016
Site: motherboard.vice.com

On New Year's Day in 1995, hurricane-force winds were blowing in the North Sea, off the coast of Norway. Twelve-meter-high waves pummelled the Draupner oil platform, where workers were stationed, but the platform was designed to withstand this sort of punishment, and the workers were sheltered for safety. Suddenly, a freak monster wave hammered the platform, seemingly out of nowhere. The rig was unharmed, but its instruments took a measurement that caused scientists' jaws to drop: the wave it recorded had a height of roughly 26 meters (85 feet). Rogue waves have long been part of sailors' lore. Until the Draupner Event, as it's now called, some scientists had a hard time believing they were real. Why would a skyscraper-size wave erupt out of the ocean, only to disappear shortly after? "That got scientists interested in these waves," Amin Chabchoub, assistant professor of hydrodynamics at Aalto University, told me over the phone from Helsinki. 1995 was the first time that a decent measurement of one had been taken, and now nobody doubts they exist. We still can't predict when one will appear, but scientists think they're getting closer to forecasting when a rogue wave will strike. To study this more closely, Chabchoub, who recently published a new paper in Physical Review Letters, built a mini rogue wave in the lab. Rogue waves, according to Johannes Gemmrich of the University of Victoria, don't have to be absolute monsters—they can be any height. "It's a relatively easy definition," he told me: "An individual wave which is large compared to surrounding waves." Tsunamis, by contrast, are often caused by displacements at the bottom of the ocean, and can travel long distances, including in shallow waters close to shore. Chabchoub is collaborating with Themistoklis P. Sapsis of the Massachusetts Institute of Technology (MIT) to get better at predicting them. "He's providing us with two things," Sapsis told me. 
"One, measurements of the wave field before the rogue wave occurs. And then giving us confirmation about the exact location of where it is in his wave tank." They've used the rogue waves to swamp mini boats, to watch how it all works. It's hard to say for sure what causes a rogue wave to form, which makes it even harder to predict them. According to Sapsis, there are two theories: one is that ocean swells, travelling in different speeds and directions, "superimpose with the right phase," he explained, creating an abnormally huge wave. Some vanish in less than a minute after they spike up. The other is that waves all travelling in the same direction eventually mash and join together, forming a giant one. These ones tend to be longer-lived, according to the National Oceanic and Atmospheric Administration (NOAA)'s Ocean Service. Chabchoub and Sapsis aren't the only ones trying to find a way to predict rogue waves. Francesco Fedele of the Georgia Institute of Technology has developed a method using mathematical models of underlying wave energy and other factors. (The NOAA is implementing an approach based on his work.) "You cannot tell that a rogue wave will occur for sure," Fedele told me, but it could still be an early warning system that the probability is high, giving "advance notice to ships." Ultimately, ocean waves might just be too chaotic for even the best algorithms to parse and understand. A rogue wave forecast "will be like what you get for thunderstorms," Arun Chawla of the National Weather Service told me. "You don't know exactly when or where [the rogue wave will appear], but you know the conditions are right." Get six of our favorite Motherboard stories every day by signing up for our newsletter.


News Article | November 14, 2016
Site: www.greencarcongress.com

Rolls-Royce and VTT Technical Research Centre of Finland Ltd have formed a strategic partnership to design, test and validate the first generation of remote and autonomous ships. The new partnership will combine and integrate the two companies’ unique expertise to make such vessels a commercial reality. (Earlier post.) Rolls-Royce is pioneering the development of remote-controlled and autonomous ships and believes a remote-controlled ship will be in commercial use by the end of the decade. The company is applying technology, skills and experience from across its businesses to this development. VTT has deep knowledge of ship simulation and extensive expertise in the development and management of safety-critical and complex systems in demanding environments such as nuclear safety. VTT combines physical tests, such as model and tank testing, with digital technologies, such as data analytics and computer visualisation. VTT will also use field research to incorporate human factors into safe ship design. As a result of working with the Finnish telecommunications sector, VTT has extensive experience with 5G mobile technology and wi-fi mesh networks, and has the first 5G test network in Finland. Working with VTT will allow Rolls-Royce to assess the performance of remote and autonomous designs through the use of both traditional model tank tests and digital simulation, allowing the company to develop functional, safe and reliable prototypes. Rolls-Royce has experience in secure data analytics across civil aerospace, defence, nuclear power and marine; coupled with its ship intelligence capabilities and its design, propulsion and machinery expertise, this base means it is ideally placed to take the lead in defining the future of shipping, in collaboration with industry, academia and government. 
Rolls-Royce is leading the Advanced Autonomous Waterborne Applications Initiative (AAWA). Funded by Tekes (Finnish Funding Agency for Technology and Innovation), AAWA brings together universities, ship designers, equipment manufacturers, and classification societies to explore the economic, social, legal, regulatory and technological factors which need to be addressed to make autonomous ships a reality. It combines the expertise of some of Finland’s top academic researchers from Tampere University of Technology; VTT Technical Research Centre of Finland Ltd; Åbo Akademi University; Aalto University; the University of Turku; and leading members of the maritime cluster including Rolls-Royce, NAPA, Deltamarin, DNV GL and Inmarsat. Rolls-Royce is also a member of the Norwegian Forum for Autonomous Ships (NFAS) which has the backing of the Norwegian Maritime Administration, The Norwegian Coastal Administration, the Federation of Norwegian Industries and MARINTEK. Its objectives are to strengthen the cooperation between users, researchers, authorities and others that are interested in autonomous ships and their use; contribute to the development of common Norwegian strategies for development and use of autonomous ships and co-operate with other international and national bodies interested in autonomous shipping. Rolls-Royce is also a founding member of the Finnish ecosystem for autonomous marine transport (DIMECC). Supported by the Finnish Marine Industries Association, the Ministry of Transport and Communications, Tekes (the Finnish Funding Agency for Innovation) and leading companies including Rolls-Royce, Cargotec, Ericsson, Meyer Turku, Tieto, and Wärtsilä, it aims to create the world’s first autonomous marine transport system in the Baltic Sea.


The route to high temperature superconductivity goes through the flat land. Abstract: Superconductors are marvellous materials that are able to transport electric current and energy without dissipation. For this reason, they are extremely useful for constructing magnets that can generate enormous magnetic fields without melting. They have found important applications as essential components of the Large Hadron Collider particle accelerator at CERN, levitating trains, and the magnetic resonance imaging (MRI) tool widely used for medical purposes. Yet, one reason why the waiting list for an MRI scan is sometimes so long is the cost of the equipment. Indeed, superconductors have to be cooled below minus one hundred degrees centigrade to manifest their unique properties, and this implies the use of expensive refrigerators. An important open problem in modern materials science is to understand the mechanism behind superconductivity; in particular, it would be highly desirable to be able to predict with precision the critical temperature below which the superconducting transition occurs. In fact, no currently available theories can provide accurate predictions for the critical temperature of the most useful superconductive materials. This is unfortunate, since a sound understanding of the mechanism of superconductivity is essential if we are interested in synthesizing materials that may one day achieve superconductivity at room temperature, without refrigeration. A potential breakthrough has recently been put forward by researchers at Aalto University. Their study builds on the theory of electronic motion in crystals developed by Felix Bloch in 1928. It is an interesting consequence of quantum mechanics that an electron that feels the electric charge of an ordered array of atoms (a crystal) can move as freely as it would in free space. 
However, the crystal has the nontrivial effect of modifying the apparent mass of the electron. Indeed, electrons appear to be heavier (or lighter) in a crystal than in free space, which means that one has to push them more (or less) to make them move. This fact has very important consequences, since electrons with a larger apparent mass lead to a larger critical temperature for superconductivity. Ideally, to maximize the critical temperature, we should consider electrons with infinite apparent mass or, to use the jargon of physicists, electrons in a 'flat band'. Naively, we could expect that electrons with infinite mass would be stuck in place, unable to carry any current, and the essential property of superconductivity would be lost. "I was very intrigued to find out how a supercurrent, that is, electrical current, could be carried by electrons in a flat band. We had some hints that this is in fact possible, but not a general solution of this paradox," says Aalto physics Professor Paivi Torma. Surprisingly, in the world of quantum mechanics, an infinite mass does not necessarily prevent the flow of electric current. The key to this mystery is to remember that electrons are quantum mechanical objects with both particle- and wave-like features. Prof. Paivi Torma and postdoctoral researcher Sebastiano Peotta have found that mass alone, which is a property of particles, is not sufficient to completely characterize electrons in solids. We also need something called the 'quantum metric'. A metric tells how distances are measured; for instance, the distance between two points is different on a sphere than on a flat surface. It turns out that the quantum metric measures the spread of the electron waves in a crystal. This spread is a wave-like property. Electrons with the same apparent mass, possibly infinite, can be associated with waves that are more or less spread out in the crystal, as measured by the quantum metric. 
The larger the quantum metric, the larger the supercurrent that the superconductor can carry. "Our results are very positive," says Peotta, "they open a novel route for engineering superconductors with high critical temperature. If our predictions are verified, common sense will suffer a big blow, but I am fine with that." Another surprising finding is that the quantum metric is intimately related to an even more subtle wave-like property of the electrons, quantified by an integer called the Chern number. The Chern number is an example of a topological invariant, namely a mathematical property of objects that is not changed under an arbitrary but gentle (non-disruptive) deformation of the object itself. A simple example of a topological invariant is the number of twists of a belt. A belt with a single twist is called a Möbius band in mathematics and is shown in the figure. A twist can be moved forward and backward along the belt but never removed unless the belt is broken. The number of twists is always an integer. In the same way, the Chern number can take only integer values and cannot be changed unless a drastic change is performed on the electron waves. If the Chern number is nonzero, it is not possible to unknot the electron waves centred at neighbouring atoms of the material. As a consequence, the waves have to overlap, and it is this finite overlap that ensures superconductivity, even in a flat band. Aalto researchers have thus discovered an unexpected connection between superconductivity and topology. Finland is a leader in this type of research, as flat band superconductivity was already predicted to occur at the surface of a certain kind of graphite, a result of the theoretical work of Grigory Volovik and Nikolai Kopnin (Aalto University) and Tero Heikkila (University of Jyvaskyla). To launch the next stage of discovery, Peotta and Torma's theoretical predictions could now be tested experimentally in ultracold atomic gas systems by collaborators. 
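The flat-band result described above can be stated compactly. The following is a schematic sketch of the published relation, with prefactors omitted; here D_s is the superfluid weight, U the attractive interaction strength, g(k) the quantum metric, B(k) the Berry curvature, and C the Chern number:

```latex
% Superfluid weight of an isolated flat band (schematic, prefactors omitted):
D_s \;\propto\; U \int_{\mathrm{BZ}} \operatorname{Tr} g(\mathbf{k})\, \mathrm{d}^2 k,
\qquad
\operatorname{Tr} g(\mathbf{k}) \;\geq\; |\mathcal{B}(\mathbf{k})|
\;\;\Longrightarrow\;\;
\int_{\mathrm{BZ}} \operatorname{Tr} g(\mathbf{k})\, \mathrm{d}^2 k \;\geq\; 2\pi |C|.
```

Since the Brillouin-zone integral of the Berry curvature equals 2πC, a nonzero Chern number forces a nonzero integrated quantum metric, and hence a nonzero supercurrent, even though the band (and therefore the apparent mass) is flat.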
"The connections I made this summer as a guest professor at ETH Zurich will be very useful for our further research on the topic," reveals Torma. "We are also intrigued by the fact that the physics we describe may be important for known superconductive materials, but it has not been noticed yet," adds Peotta. About Aalto University Aalto University. Towards a better world. Aalto University is a community of bold thinkers where science and art meet technology and business. Aalto University has six schools with nearly 20 000 students and 4 700 employees, 390 of which are professors. Our campuses are located in Espoo and Helsinki, Finland. For more information, please click If you have a comment, please us. Issuers of news releases, not 7th Wave, Inc. or Nanotechnology Now, are solely responsible for the accuracy of the content.


News Article | November 14, 2016
Site: www.eurekalert.org

Rolls-Royce and VTT Technical Research Centre of Finland Ltd have announced a strategic partnership to design, test and validate the first generation of remote and autonomous ships. The new partnership will combine and integrate the two companies' unique expertise to make such vessels a commercial reality. Rolls-Royce is pioneering the development of remote-controlled and autonomous ships and believes a remote-controlled ship will be in commercial use by the end of the decade. The company is applying technology, skills and experience from across its businesses to this development. VTT has deep knowledge of ship simulation and extensive expertise in the development and management of safety-critical and complex systems in demanding environments such as nuclear safety. It combines physical tests, such as model and tank testing, with digital technologies, such as data analytics and computer visualisation, and will also use field research to incorporate human factors into safe ship design. As a result of working with the Finnish telecommunications sector, VTT has extensive experience with 5G mobile technology and wi-fi mesh networks, and has the first 5G test network in Finland. Working with VTT will allow Rolls-Royce to assess the performance of remote and autonomous designs through the use of both traditional model tank tests and digital simulation, allowing the company to develop functional, safe and reliable prototypes. Karno Tenovuo, Vice President Ship Intelligence at Rolls-Royce, said: "Remotely operated ships are a key development project for Rolls-Royce Marine, and VTT is a reliable and innovative partner for the development of a smart ship concept. This collaboration is a natural continuation of the earlier User Experience for Complex systems (UXUS) project, where we developed totally new bridge and remote control systems for shipping." 
Erja Turunen, Executive Vice President at VTT, said: "Rolls-Royce is a pioneer in remotely controlled and autonomous shipping. Our collaboration strengthens the way we can integrate and leverage VTT's expertise in simulation and safety validation, including the industrial Internet of Things, to develop new products and in the future, enable us to develop new solutions for new areas of application as well." Rolls-Royce is pioneering the development of remote controlled and autonomous ships, applying technology, skills and experience from across its businesses with the ambition of seeing a remote controlled ship in commercial use by the end of the decade. Rolls-Royce's experience in secure data analytics across civil aerospace, defence, nuclear power and marine; coupled with its ship intelligence capabilities, design, propulsion and machinery expertise means it is ideally placed to take the lead in defining the future of shipping, in collaboration with industry, academia and Government. Rolls-Royce is leading the Advanced Autonomous Waterborne Applications Initiative (AAWA). Funded by Tekes (Finnish Funding Agency for Technology and Innovation), AAWA brings together universities, ship designers, equipment manufacturers, and classification societies to explore the economic, social, legal, regulatory and technological factors which need to be addressed to make autonomous ships a reality. It combines the expertise of some of Finland's top academic researchers from Tampere University of Technology; VTT Technical Research Centre of Finland Ltd; Åbo Akademi University; Aalto University; the University of Turku; and leading members of the maritime cluster including Rolls-Royce, NAPA, Deltamarin, DNV GL and Inmarsat. Rolls-Royce is also a member of the Norwegian Forum for Autonomous Ships (NFAS) which has the backing of the Norwegian Maritime Administration, The Norwegian Coastal Administration, the Federation of Norwegian Industries and MARINTEK. 
Its objectives are to strengthen the cooperation between users, researchers, authorities and others interested in autonomous ships and their use; to contribute to the development of common Norwegian strategies for the development and use of autonomous ships; and to cooperate with other international and national bodies interested in autonomous shipping. Rolls-Royce is a founding member of the Finnish ecosystem for autonomous marine transport (DIMECC). Supported by the Finnish Marine Industries Association, the Ministry of Transport and Communications, Tekes (the Finnish Funding Agency for Innovation) and leading companies including Rolls-Royce, Cargotec, Ericsson, Meyer Turku, Tieto, and Wärtsilä, it aims to create the world's first autonomous marine transport system in the Baltic Sea. More information on VTT's ship model and propulsion device test facilities: http://www. Ship Intelligence press photos are available for download at: https:/ For further information, please contact: Erja Turunen, Executive Vice President, VTT Technical Research Centre of Finland Ltd, +358 50 380 9671, erja.turunen@vtt.fi 1. Rolls-Royce's vision is to be the market leader in high-performance power systems, where our engineering expertise, global reach and deep industry knowledge deliver outstanding customer relationships and solutions. We operate across five businesses: Civil Aerospace, Defence Aerospace, Marine, Nuclear and Power Systems. 2. Rolls-Royce has customers in more than 120 countries, comprising more than 400 airlines and leasing customers, 160 armed forces, 4,000 marine customers including 70 navies, and more than 5,000 power and nuclear customers. 3. We have three common themes across all our businesses: 4. Annual underlying revenue was £13.4 billion in 2015, around half of which came from the provision of aftermarket services. The firm and announced order book stood at £76.4 billion at the end of 2015. 5. In 2015, Rolls-Royce invested £1.2 billion in research and development. 
We also support a global network of 31 University Technology Centres, which position Rolls-Royce engineers at the forefront of scientific research. 6. Rolls-Royce employs over 50,000 people in more than 46 countries. Nearly 15,700 of these are engineers. 7. The Group has a strong commitment to apprentice and graduate recruitment and to further developing employee skills. In 2015 we employed 228 graduates and 277 apprentices through our worldwide training programmes. VTT Technical Research Centre of Finland Ltd is the leading research and technology company in the Nordic countries. We use our research and knowledge to provide expert services for our domestic and international customers and partners, and for both private and public sectors. We use 4,000,000 hours of brainpower a year to develop new technological solutions. VTT in social media: Facebook, LinkedIn, YouTube and Twitter @VTTFinland.


News Article | December 12, 2016
Site: www.eurekalert.org

Members of the TET family of proteins help protect against cancer by regulating the chemical state of DNA --and thus turning growth-promoting genes on or off. The latest findings reported by researchers at La Jolla Institute for Allergy and Immunology illustrate just how important TET proteins are in controlling cell proliferation and cell fate. For the study, published in the December 20, 2016, edition of Nature Immunology, Anjana Rao, PhD, a professor at the La Jolla Institute, genetically engineered mice to lack both TET2 and TET3 in T cells. The mice developed a lethal disease resembling lymphoma within weeks of birth, their spleens and livers bloated with iNKT cells, a normally rare kind of T cell. This finding recapitulated the features of many human blood cancers, including those involving T cells, in which TET2 is often mutated or lost. "We knew that TET proteins were involved in human cancer but we didn't know how they regulated T cell development," says Angeliki Tsagaratou, Ph.D., an instructor in the Rao lab and the study's first author. "In the new study we saw huge increases in the proliferation of the special iNKT cells in TET2/3 mutant mice. Once growth control was lost, those cells underwent the kind of malignant transformation that gives rise to T cell lymphoma in humans." The results demonstrate how TET proteins serve as anti-cancer factors called tumor suppressors and suggest ways to block malignancy in cancers marked by TET mutations. Members of the TET family of enzymes help rewrite the epigenome, the regulatory layer of chemical modifications that sits atop the genome and helps determine gene activity without changing the letters of DNA. In addition to the four letters or bases in DNA - A, C, G and T - there is a "fifth base" with a very important role. This base is formed from the DNA base cytosine (C) by addition of a methyl group, and so is called mC (m for methyl). 
The levels of mC are altered in cancer cells and during the development of embryos. However, until the discovery of TET proteins in 2009 in the laboratory of Rao, then at Harvard Medical School, it was not known how mC could be converted back to regain C. Dr. Rao's team showed that TET proteins were able to convert 5mC to a sixth base, known as 5hmC. 5hmC is indirectly converted back to C, thus restoring the status quo. DNA modifications of this type in part govern how compressed and "expressible" a strand is: in general, DNA methylation coils up genes to silence them, while less methylated DNA strands, or strands that possess 5hmC, are more accessible and more likely to be expressed, which means they are directing the synthesis of a particular protein. "When TET proteins are lost, iNKT cells that lack them apparently become trapped in an immature, highly proliferative state," explains Tsagaratou. "Unlike normal cells, they can't switch off growth-promoting genes: they just keep dividing." DNA sequencing followed by bioinformatic data-crunching revealed the kinds of abnormal DNA methylation patterns typically seen after TET protein loss. That suggested that improper DNA modifications in the TET2/3 mutant T cells allowed unchecked expression of cancer- and inflammation-associated genes. Edahí González-Avalos, one of the two second authors of the study, conducted most of that analysis. "Without computer analysis of sequencing results, we would not have been able to determine relationships between DNA methylation, how accessible regions of the genome were, global gene expression, or the emergence of cancer cells," says González-Avalos, a graduate student in UCSD's Bioinformatics Graduate Program. "Without computational tools, this study could have taken many, many years!" A critical test reported in the paper demonstrates how insidious even a few perpetually immature iNKT cells can be. 
In it, the team transferred a small number of iNKT cells lacking Tet2/3 from mutant mice into adult mice with a robust immune system. But even those mice soon developed lymphoproliferative disease as lethal as that seen in TET2/3 mutant mice. "We weren't expecting this," says Tsagaratou. "When we transferred mutant cells we thought healthy mice would control their expansion. But in three months mutant cells took over the mouse's immune system and rapidly gave rise to tumors." The lesson of this story? That a functional immune system is no defense against malignancy once deregulated, pro-inflammatory iNKT cells gain a foothold. In a 2015 Nature Communications paper, Rao, who heads LJI's Division of Signaling and Gene Expression, reported that TET2/3 mutations caused myeloid disease resembling acute myeloid leukemia in mice. The new study extends these findings to a different class of hematological cancers, namely lymphoid cancer, which is caused by abnormal activity of immune T or B cells. "Right now we don't know how TET mutations specifically contribute to either T cell lymphomas or leukemias. But we think these mutations are early events in both," says Tsagaratou. Thus the search is on to discover additional cancer-causing genes "downstream" of TET mutations that drive uncontrolled cell division in either context. "Identification of additional factors would give us a broad idea of all steps in the pathway and provide multiple targets to hit." In addition to Tsagaratou and González-Avalos, Sini Rautio of Aalto University School of Science, in Aalto, Finland, was a co-second author of the paper, contributing significantly to the bioinformatic analysis of genome-wide sequencing data. Also contributing were James Scott Browne, Ph.D., Susan Togher, and William A. Pastor, Ph.D., all from LJI; Ellen V. 
Rothenberg, Ph.D., of Caltech; and bioinformaticians Lukas Chavez Ph.D., of the German Cancer Research Center in Heidelberg and Harri Lähdesmäki, Ph.D., of Aalto University School of Science, the PhD thesis supervisor of Sini Rautio. The study was funded by the NIH (R01 grants AI44432, CA151535 and R35CA210043); a Leukemia & Lymphoma Society grant (6187-12, to A.R.); and an Academy of Finland Centre of Excellence in Molecular Systems Immunology and Physiology Research grant (to H.L.). Other funding was from the Cancer Research Institute, the Academy of Finland Centre of Excellence in Molecular Systems, Immunology and Physiology Research program, the Damon Runyon Cancer Research Foundation (DRG-2069-11), and the National Science Foundation. doi:10.1038/ni.3630 About La Jolla Institute for Allergy and Immunology The La Jolla Institute for Allergy and Immunology is dedicated to understanding the intricacies and power of the immune system so that we may apply that knowledge to promote human health and prevent a wide range of diseases. Since its founding in 1988 as an independent, nonprofit research organization, the Institute has made numerous advances leading toward its goal: life without disease.
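The kind of methylation comparison described above (relating per-site methylation levels in mutant versus control samples) can be sketched simply. The promoter names, read counts, and threshold below are toy values for illustration, not data or parameters from the study:

```python
def beta_value(meth_reads, total_reads):
    """Per-site methylation level from bisulfite-style read counts."""
    return meth_reads / total_reads

def differential_sites(control, mutant, min_delta=0.25):
    """Sites whose methylation level shifts by at least `min_delta`
    between control and TET-deficient ("mutant") samples.

    control, mutant: {site: (methylated_reads, total_reads)}.
    """
    hits = {}
    for site in control:
        delta = beta_value(*mutant[site]) - beta_value(*control[site])
        if abs(delta) >= min_delta:
            hits[site] = round(delta, 2)
    return hits

# toy counts: one promoter gains methylation after TET loss, one is stable
control = {"Myc_promoter": (10, 100), "Actb_promoter": (5, 100)}
mutant = {"Myc_promoter": (60, 100), "Actb_promoter": (7, 100)}
dmrs = differential_sites(control, mutant)
```

Here only the hypothetical Myc promoter is flagged (a shift of +0.5), mirroring how abnormally methylated growth-gene promoters stand out against stable housekeeping genes in a genome-wide scan.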


Ricci M.,Ecole Polytechnique Federale de Lausanne | Spijker P.,Aalto University | Voitchovsky K.,Ecole Polytechnique Federale de Lausanne | Voitchovsky K.,Durham University
Nature Communications | Year: 2014

When immersed into water, most solids develop a surface charge, which is neutralized by an accumulation of dissolved counterions at the interface. Although the density distribution of counterions perpendicular to the interface obeys well-established theories, little is known about counterions' lateral organization at the surface of the solid. Here we show, by using atomic force microscopy and computer simulations, that single hydrated metal ions can spontaneously form ordered structures at the surface of homogeneous solids in aqueous solutions. The structures are laterally stabilized only by water molecules with no need for specific interactions between the surface and the ions. The mechanism, studied here for several systems, is controlled by the hydration landscape of both the surface and the adsorbed ions. The existence of discrete ion domains could play an important role in interfacial phenomena such as charge transfer, crystal growth, nanoscale self-assembly and colloidal stability. © 2014 Macmillan Publishers Limited. All rights reserved.
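The "well-established theories" for the perpendicular counterion distribution are the Gouy-Chapman / Poisson-Boltzmann picture. A minimal numerical sketch, using the linearized (Debye-Hückel) screened potential and illustrative parameters (0.1 M 1:1 salt, a -25 mV surface):

```python
import math

def debye_length_m(ionic_strength_M, temp_K=298.15):
    """Debye screening length in water for a 1:1 electrolyte."""
    eps0, eps_r = 8.854e-12, 78.5      # F/m; relative permittivity of water
    kB, e, NA = 1.381e-23, 1.602e-19, 6.022e23
    n_bulk = ionic_strength_M * 1000.0 * NA          # ions per m^3
    return math.sqrt(eps0 * eps_r * kB * temp_K / (2.0 * n_bulk * e * e))

def counterion_enhancement(z_m, surface_potential_V, lam_m, temp_K=298.15):
    """Counterion density relative to bulk at distance z from the surface:
    Boltzmann statistics for a monovalent cation near a negatively charged
    wall, with an exponentially screened (linearized) potential."""
    kB, e = 1.381e-23, 1.602e-19
    psi = surface_potential_V * math.exp(-z_m / lam_m)
    return math.exp(-e * psi / (kB * temp_K))

lam = debye_length_m(0.1)                                 # ~0.96 nm at 0.1 M
at_wall = counterion_enhancement(0.0, -0.025, lam)        # enhanced near wall
far_away = counterion_enhancement(10 * lam, -0.025, lam)  # back to ~bulk
```

This is exactly the perpendicular profile the paper contrasts with: the theory predicts how density decays away from the surface, but says nothing about the lateral ordering of the adsorbed ions, which is the paper's contribution.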


Walther A.,Aalto University | Bjurhager I.,KTH Royal Institute of Technology | Malho J.-M.,Aalto University | Pere J.,VTT Technical Research Center of Finland | And 3 more authors.
Nano Letters | Year: 2010

Although remarkable success has been achieved to mimic the mechanically excellent structure of nacre in laboratory-scale models, it remains difficult to foresee mainstream applications due to time-consuming sequential depositions or energy-intensive processes. Here, we introduce a surprisingly simple and rapid methodology for large-area, lightweight, and thick nacre-mimetic films and laminates with superior material properties. Nanoclay sheets with soft polymer coatings are used as ideal building blocks with intrinsic hard/soft character. They are forced to rapidly self-assemble into aligned nacre-mimetic films via paper-making, doctor-blading or simple painting, giving rise to strong and thick films with tensile modulus of 45 GPa and strength of 250 MPa, that is, partly exceeding nacre. The concepts are environmentally friendly, energy-efficient, and economic and are ready for scale-up via continuous roll-to-roll processes. Excellent gas barrier properties, optical translucency, and extraordinary shape-persistent fire-resistance are demonstrated. We foresee advanced large-scale biomimetic materials, relevant for lightweight sustainable construction and energy-efficient transportation. © 2010 American Chemical Society.
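For context on the reported 45 GPa modulus, a simple Voigt (rule-of-mixtures) upper bound for an aligned hard/soft composite can be computed. The platelet and polymer moduli below are generic ballpark assumptions, not values from the paper:

```python
def voigt_modulus(vf_platelet, e_platelet_gpa, e_polymer_gpa):
    """Voigt (rule-of-mixtures) upper bound for an aligned platelet composite:
    E = Vf * E_platelet + (1 - Vf) * E_polymer."""
    return vf_platelet * e_platelet_gpa + (1.0 - vf_platelet) * e_polymer_gpa

# assumed ballpark values: ~170 GPa clay platelets, ~2 GPa soft polymer, 50:50
upper_bound_gpa = voigt_modulus(0.5, 170.0, 2.0)
```

Real platelet composites fall well below this bound because of imperfect alignment, finite platelet aspect ratio, and shear-lag load transfer through the soft phase, which is why a measured 45 GPa from a rapid, scalable process is notable.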


Zhou M.,Beijing University of Posts and Telecommunications | Cui Q.,Beijing University of Posts and Telecommunications | Jantti R.,Aalto University | Tao X.,Beijing University of Posts and Telecommunications
IEEE Communications Letters | Year: 2012

In this letter, we consider a two-way relay channel (TWRC) with two end nodes and k relay nodes, where the end nodes have full channel-state information (CSI) and the relay nodes have only channel-amplitude information (CAI). With the objective of minimizing transmit power consumption at the required end-to-end rates, an energy-efficient relay selection (RS) and power allocation (PA) scheme is studied for the TWRC based on analog network coding (ANC). First, we propose an energy-efficient single-RS and PA (E-SRS-PA) scheme, in which the best relay node is selected to minimize the total transmit power. Then, we prove that the E-SRS-PA scheme is the optimal energy-efficient RS and PA (OE-RS-PA) scheme in the ANC-based TWRC, and thus the optimal number of relay nodes to select, in the energy-efficiency sense, is one. In addition, closed-form expressions for the optimal power allocation of the E-SRS-PA scheme are derived. Numerical simulations confirm the optimality of the proposed E-SRS-PA scheme and demonstrate the energy efficiency of the ANC-based TWRC compared with other relaying schemes. © 2012 IEEE.
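The selection step itself is simple to illustrate. The sketch below is a hypothetical toy, not the paper's derived closed-form power allocation: it scores each relay with a stand-in cost (required power grows with the rate target and shrinks with the squared channel amplitudes) and picks the argmin, mirroring the single-relay-selection idea of E-SRS-PA.

```python
# Illustrative sketch only; the per-relay cost is a hypothetical proxy,
# not the closed-form expression derived in the letter.

def select_relay(h1, h2, rate=1.0, noise=1.0):
    """Pick the relay index minimizing a hypothetical total-power proxy.

    h1[i], h2[i]: channel amplitudes from end nodes 1 and 2 to relay i.
    Weaker combined channels demand more transmit power for the same
    end-to-end rate, so the cheapest relay is the argmin of the cost.
    """
    snr_target = 2.0 ** (2.0 * rate) - 1.0  # SNR needed for the rate target
    costs = [noise * snr_target * (1.0 / a ** 2 + 1.0 / b ** 2)
             for a, b in zip(h1, h2)]
    best = min(range(len(costs)), key=costs.__getitem__)
    return best, costs[best]

best, power = select_relay([0.5, 1.2, 0.8], [0.9, 1.1, 0.4])
# relay 1 has the strongest combined channels, so it minimizes the cost
```

Under this toy cost, adding relays can only help by offering a better argmin, which is consistent with the letter's conclusion that selecting a single best relay is optimal.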


Iakovlev M.,Aalto University | Van Heiningen A.,University of Maine, United States | Van Heiningen A.,Aalto University
ChemSusChem | Year: 2012

SO2-ethanol-water (SEW) lignocellulosic fractionation has the potential to overcome the present techno-economic barriers that hinder the commercial implementation of renewable transportation fuel production. In this study, SEW fractionation of spruce wood chips is examined for its ability to separate the main wood components (hemicelluloses, lignin, and cellulose) and for the potential to recover SO2 and ethanol from the spent fractionation liquid. To this end, overall sulfur and carbohydrate mass balances are established. 95-97% of the charged SO2 remains in the liquid and can be fully recovered by distillation. During fractionation, hemicelluloses and lignin are effectively dissolved, whereas cellulose is preserved in the solid (fibre) phase. Hemicelluloses are hydrolysed, producing up to 50% monomeric sugars, whereas dehydration and oxidation of carbohydrates are insignificant. The latter is proven by the closed carbohydrate material balances as well as by the near absence of the corresponding by-products (furfural, hydroxymethylfurfural (HMF) and aldonic acids). In addition, acid methanolysis/GC and acid hydrolysis/high-performance anion-exchange chromatography (HPAEC) methods for carbohydrate determination are compared. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.


Survase S.A.,Aalto University | Van Heiningen A.,University of Maine, United States | Granstrom T.,Aalto University
Applied Microbiology and Biotechnology | Year: 2012

Continuous production of acetone, n-butanol, and ethanol (ABE) was carried out using immobilized cells of Clostridium acetobutylicum DSM 792, with glucose and a sugar mixture as substrates. Among the various lignocellulosic materials screened as support matrices, coconut fibers and wood pulp fibers were found to be promising in batch experiments. With the motive of promoting the wood-based biorefinery concept, wood pulp was used as the cell-holding material. Glucose and a sugar mixture (glucose, mannose, galactose, arabinose, and xylose) comparable to lignocellulose hydrolysate were used as substrates for continuous ABE production. We report the best solvent productivity among wild-type strains using a column reactor. The maximum total solvent concentration of 14.32 g L-1 was obtained at a dilution rate of 0.22 h-1 with glucose as the substrate, compared to 12.64 g L-1 at a 0.5 h-1 dilution rate with the sugar mixture. The maximum solvent productivity (13.66 g L-1 h-1) was obtained at a dilution rate of 1.9 h-1 with glucose as the substrate, whereas a solvent productivity of 12.14 g L-1 h-1 was obtained at a dilution rate of 1.5 h-1 with the sugar mixture. The immobilized column reactor with wood pulp could become an efficient technology to be integrated with existing pulp mills, converting them into wood-based biorefineries. © Springer-Verlag 2011.


Lu H.,KTH Royal Institute of Technology | Alanne K.,Aalto University | Martinac I.,KTH Royal Institute of Technology
Energy Conversion and Management | Year: 2014

Renewable energy systems entail a significant potential to meet the energy requirements of building clusters and districts (BCDs), provided that local energy sources are exploited efficiently. Besides improving energy efficiency by reducing energy consumption and improving the match between energy supply and demand, energy-quality issues have become a key topic of interest. Energy quality management is a technique that aims at optimally utilizing the exergy content of various renewable energy sources. In addition to minimizing life-cycle CO2 emissions related to exergy losses of an energy system, issues such as system reliability should be addressed. The present work contributes to this research by proposing a novel multi-objective design optimization scheme that minimizes the life-cycle global warming potential and maximizes exergy performance, while the maximum allowable loss of power supply probability (LPSP) is predefined by the user as a constraint. The optimization makes use of a genetic algorithm (GA). Finally, a case study is presented in which the above methodology is applied to an office BCD located in Norway. The proposed optimization scheme proves efficient in finding the optimal design and can easily be extended to encompass further objective functions. © 2014 Elsevier Ltd. All rights reserved.


Huang C.,Shanghai JiaoTong University | Ye F.,Shanghai JiaoTong University | Sun Z.,Aalto University | Chen X.,Shanghai JiaoTong University
Optics Express | Year: 2014

We study linear and nonlinear mode properties in a periodically patterned graphene sheet. We demonstrate that a subwavelength one-dimensional photonic lattice can be defined across the graphene monolayer, with its modulation depth, and correspondingly the associated photonic band structures, controlled rapidly by an external gate voltage. We find the existence of graphene lattice solitons at deep-subwavelength scales in both dimensions, thanks to the combination of graphene's intrinsic self-focusing nonlinearity and graphene plasmonic confinement effects. © 2014 Optical Society of America.


Parviainen P.,KTH Royal Institute of Technology | Koivisto M.,Aalto University
Journal of Machine Learning Research | Year: 2013

We consider the problem of finding a directed acyclic graph (DAG) that optimizes a decomposable Bayesian network score. While in a favorable case an optimal DAG can be found in polynomial time, in the worst case the fastest known algorithms rely on dynamic programming across the node subsets, taking time and space 2^n, to within a factor polynomial in the number of nodes n. In practice, these algorithms are feasible for networks of at most around 30 nodes, mainly due to the large space requirement. Here, we generalize the dynamic programming approach to enhance its feasibility in three dimensions: first, the user may trade space against time; second, the proposed algorithms easily and efficiently parallelize onto thousands of processors; third, the algorithms can exploit any prior knowledge about the precedence relation on the nodes. Underlying all these results is the key observation that, given a partial order P on the nodes, an optimal DAG compatible with P can be found in time and space roughly proportional to the number of ideals of P, which can be significantly less than 2^n. Considering sufficiently many carefully chosen partial orders guarantees that a globally optimal DAG will be found. Aside from the generic scheme, we present and analyze concrete tradeoff schemes based on parallel bucket orders. © 2013 Pekka Parviainen and Mikko Koivisto.
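The key observation can be made concrete by counting ideals directly. The brute-force counter below is purely illustrative (the paper's algorithms do not enumerate all 2^n subsets); it shows how drastically a precedence relation can shrink the DP state space relative to the 2^n subsets of an unconstrained search.

```python
from itertools import combinations

def count_ideals(n, edges):
    """Count ideals (downward-closed node subsets) of a partial order.

    edges: list of (u, v) meaning u must precede v. A subset S is an
    ideal if, whenever v is in S, every predecessor u of v is in S too.
    The generalized DP runs over exactly these subsets, so their number,
    rather than 2**n, governs time and space.
    """
    preds = {v: set() for v in range(n)}
    for u, v in edges:
        preds[v].add(u)
    count = 0
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            s = set(subset)
            if all(preds[v] <= s for v in s):
                count += 1
    return count

# A chain 0 < 1 < ... < 5 has only the 7 prefix ideals, versus 2**6 = 64
# subsets for the empty order (no precedence constraints at all):
chain = [(i, i + 1) for i in range(5)]
print(count_ideals(6, chain))   # 7
print(count_ideals(6, []))      # 64
```

Tighter partial orders thus trade completeness of a single pass for a much smaller state space, which is why considering several carefully chosen orders still recovers the global optimum.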


Aurell E.,KTH Royal Institute of Technology | Aurell E.,Aalto University | Aurell E.,Albanova University Center | Ekeberg M.,KTH Royal Institute of Technology
Physical Review Letters | Year: 2012

We show that a method based on logistic regression, using all the data, solves the inverse Ising problem far better than mean-field calculations relying only on sample pairwise correlation functions, while remaining computationally feasible for hundreds of nodes. The largest improvement in reconstruction occurs for strong interactions. Using two examples, a diluted Sherrington-Kirkpatrick model and a two-dimensional lattice, we also show that interaction topologies can be recovered from few samples with good accuracy, and that the use of l1 regularization is beneficial in this process, pushing inference abilities further into low-temperature regimes. © 2012 American Physical Society.
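A minimal sketch of the pseudolikelihood idea on a toy model (the model size, Gibbs sampler and plain gradient ascent here are illustrative choices, not the paper's setup): each spin's conditional distribution given the others is logistic in its local field, so fitting one logistic regression per spin recovers that spin's couplings.

```python
import math
import random

def gibbs_samples(J, n_spins, n_samples, rng, burn=100, thin=5):
    """Draw +/-1 spin configurations from an Ising model with couplings J."""
    s = [rng.choice([-1, 1]) for _ in range(n_spins)]
    out = []
    for t in range(burn + n_samples * thin):
        i = rng.randrange(n_spins)
        field = sum(J.get((min(i, j), max(i, j)), 0.0) * s[j]
                    for j in range(n_spins) if j != i)
        # conditional P(s_i = +1 | rest) is a logistic function of the field
        s[i] = 1 if rng.random() < 1.0 / (1.0 + math.exp(-2.0 * field)) else -1
        if t >= burn and (t - burn) % thin == 0:
            out.append(list(s))
    return out

def fit_couplings(samples, i, lr=0.05, epochs=200):
    """Pseudolikelihood: logistic regression of spin i on the other spins."""
    n = len(samples[0])
    others = [j for j in range(n) if j != i]
    w = {j: 0.0 for j in others}
    for _ in range(epochs):
        for s in samples:
            field = sum(w[j] * s[j] for j in others)
            p = 1.0 / (1.0 + math.exp(-2.0 * field))
            err = (s[i] + 1) / 2 - p        # gradient of the log-likelihood
            for j in others:
                w[j] += lr * 2.0 * err * s[j] / len(samples)
    return w  # w[j] estimates the coupling J_ij

rng = random.Random(0)
J = {(0, 1): 1.0}                           # only spins 0 and 1 interact
data = gibbs_samples(J, 3, 3000, rng)
w = fit_couplings(data, 0)
# w[1] comes out clearly positive, w[2] near zero
```

Unlike mean-field inversion, this fit uses the full sample configurations rather than just pairwise correlations, which is the source of the improvement the abstract describes.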


Ullah I.,KTH Royal Institute of Technology | Parviainen P.,Aalto University | Lagergren J.,KTH Royal Institute of Technology
Molecular Biology and Evolution | Year: 2015

Species tree reconstruction has been a subject of substantial research due to its central role across biology and medicine. A species tree is often reconstructed using a set of gene trees or by directly using sequence data. In either case, one of the main confounding phenomena is the discordance between a species tree and a gene tree due to evolutionary events such as duplications and losses. Probabilistic methods can resolve the discordance by coestimating gene trees and the species tree, but this approach poses a scalability problem for larger data sets. We present MixTreEM-DLRS: a two-phase approach for reconstructing a species tree in the presence of gene duplications and losses. In the first phase, MixTreEM, a novel structural expectation maximization algorithm based on a mixture model, is used to reconstruct a set of candidate species trees, given sequence data for monocopy gene families from the genomes under study. In the second phase, PrIME-DLRS, a method based on the DLRS model (Åkerborg O, Sennblad B, Arvestad L, Lagergren J. 2009. Simultaneous Bayesian gene tree reconstruction and reconciliation analysis. Proc Natl Acad Sci USA. 106(14):5714-5719), is used for selecting the best species tree. PrIME-DLRS can handle multicopy gene families since DLRS, apart from modeling sequence evolution, models gene duplication and loss using a gene evolution model (Arvestad L, Lagergren J, Sennblad B. 2009. The gene evolution model and computing its associated probabilities. J ACM. 56(2):1-44). We evaluate MixTreEM-DLRS using synthetic and biological data, and compare its performance with a recent genome-scale species tree reconstruction method, PHYLDOG (Boussau B, Szöllősi GJ, Duret L, Gouy M, Tannier E, Daubin V. 2013. Genome-scale coestimation of species and gene trees. Genome Res. 23(2):323-330), as well as with a fast parsimony-based algorithm, Duptree (Wehe A, Bansal MS, Burleigh JG, Eulenstein O. 2008. Duptree: a program for large-scale phylogenetic analyses using gene tree parsimony. Bioinformatics 24(13):1540-1541). Our method is competitive with PHYLDOG in terms of accuracy and runs significantly faster, and it outperforms Duptree in accuracy. The analysis constituted by MixTreEM without DLRS may also be used for selecting the target species tree, yielding a fast and yet accurate algorithm for larger data sets. MixTreEM is freely available at http://prime.scilifelab.se/mixtreem/. © 2015 The Author.


Nummenmaa L.,Aalto University | Nummenmaa L.,University of Turku | Calvo M.G.,University of La Laguna
Emotion | Year: 2015

Happy facial expressions are recognized faster and more accurately than other expressions in categorization tasks, whereas detection in visual search tasks is widely believed to be faster for angry than happy faces. We used meta-analytic techniques for resolving this categorization versus detection advantage discrepancy for positive versus negative facial expressions. Effect sizes were computed on the basis of the r statistic for a total of 34 recognition studies with 3,561 participants and 37 visual search studies with 2,455 participants, yielding a total of 41 effect sizes for recognition accuracy, 25 for recognition speed, and 125 for visual search speed. Random effects meta-analysis was conducted to estimate effect sizes at the population level. For recognition tasks, an advantage in recognition accuracy and speed for happy expressions was found for all stimulus types. In contrast, for visual search tasks, moderator analysis revealed that a happy face detection advantage was restricted to photographic faces, whereas a clear angry face advantage was found for schematic and "smiley" faces. A robust detection advantage for nonhappy faces was observed even when stimulus emotionality was distorted by inversion or rearrangement of the facial features, suggesting that visual features primarily drive the search. We conclude that the recognition advantage for happy faces is a genuine phenomenon related to processing of facial expression category and affective valence. In contrast, detection advantages toward either happy (photographic stimuli) or nonhappy (schematic) faces are contingent on visual stimulus features rather than facial expression, and may not involve categorical or affective processing. © 2015 American Psychological Association.
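The pooling referred to above can be sketched as follows; the r-to-Fisher-z transform and DerSimonian-Laird estimator are standard random-effects machinery, while the study values in the example are hypothetical and not drawn from this meta-analysis.

```python
import math

def random_effects_meta(rs, ns):
    """DerSimonian-Laird random-effects pooling of correlation effect sizes.

    rs: per-study correlations r; ns: per-study sample sizes. Correlations
    are Fisher z-transformed (sampling variance 1/(n-3)), pooled with
    inverse-variance weights inflated by the between-study variance tau^2,
    and transformed back to the r metric.
    """
    zs = [0.5 * math.log((1 + r) / (1 - r)) for r in rs]
    vs = [1.0 / (n - 3) for n in ns]
    w = [1.0 / v for v in vs]
    z_fixed = sum(wi * zi for wi, zi in zip(w, zs)) / sum(w)
    q = sum(wi * (zi - z_fixed) ** 2 for wi, zi in zip(w, zs))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(rs) - 1)) / c)   # DL between-study variance
    w_re = [1.0 / (v + tau2) for v in vs]
    z_re = sum(wi * zi for wi, zi in zip(w_re, zs)) / sum(w_re)
    return math.tanh(z_re)                      # back-transform to r

# three hypothetical studies of a recognition advantage
print(round(random_effects_meta([0.30, 0.45, 0.20], [80, 120, 60]), 3))
```

When the studies are homogeneous, tau^2 collapses to zero and the estimate reduces to the fixed-effect inverse-variance mean.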


Muhonen J.T.,Aalto University | Muhonen J.T.,University of Warwick | Meschke M.,Aalto University | Pekola J.P.,Aalto University
Reports on Progress in Physics | Year: 2012

A superconductor with a gap in the density of states or a quantum dot with discrete energy levels is a central building block in realizing an electronic on-chip cooler. They can work as energy filters, allowing only hot quasiparticles to tunnel out from the electrode to be cooled. This principle has been employed experimentally since the early 1990s in investigations and demonstrations of micrometre-scale coolers at sub-kelvin temperatures. In this paper, we review the basic experimental conditions in realizing the coolers and the main practical issues that are known to limit their performance. We give an update of experiments performed on cryogenic micrometre-scale coolers in the past five years. © 2012 IOP Publishing Ltd.


Karstunen M.,University of Strathclyde | Yin Z.-Y.,Shanghai JiaoTong University | Yin Z.-Y.,Aalto University
Geotechnique | Year: 2010

This paper investigates the time-dependent behaviour of the Murro test embankment in Finland. The embankment was built in 1993 on a soft natural clay deposit that exhibits large strain anisotropy, destructuration and viscosity. The study is based on extensive experimental data from triaxial and oedometer tests on intact and reconstituted soil samples, which shed light on the influence of time on mechanical properties, including testing designed to study soil anisotropy and destructuration. The results are interpreted in the framework of a recently developed elasto-viscoplastic model, EVPSCLAY1S, which is used to simulate the soft soil deposit coupled with Biot's consolidation theory. The determination of model parameters from the test results demonstrates that the model can be used relatively easily in practical applications. Using these parameters, two-dimensional finite-element analyses have been performed as large-deformation analyses. The comparisons between calculations and measurements demonstrate that the proposed model can satisfactorily describe the time-dependent behaviour of the embankment on structured clay.


Kostiainen M.A.,Aalto University | Hiekkataipale P.,Aalto University | Laiho A.,Aalto University | Lemieux V.,St Jean Photochimie SJPC | And 3 more authors.
Nature Nanotechnology | Year: 2013

Binary nanoparticle superlattices are periodic nanostructures with lattice constants much shorter than the wavelength of light and could be used to prepare multifunctional metamaterials. Such superlattices are typically made from synthetic nanoparticles, and although biohybrid structures have been developed, incorporating biological building blocks into binary nanoparticle superlattices remains challenging. Protein-based nanocages provide a complex yet monodisperse and geometrically well-defined hollow cage that can be used to encapsulate different materials. Such protein cages have been used to program the self-assembly of encapsulated materials to form free-standing crystals and superlattices at interfaces or in solution. Here, we show that electrostatically patchy protein cages (cowpea chlorotic mottle virus and ferritin cages) can be used to direct the self-assembly of three-dimensional binary superlattices. The negatively charged cages can encapsulate RNA or superparamagnetic iron oxide nanoparticles, and the superlattices are formed through tunable electrostatic interactions with positively charged gold nanoparticles. Gold nanoparticles and viruses form an AB8 fcc crystal structure that is not isostructural with any known atomic or molecular crystal structure and has previously been observed only with large colloidal polymer particles. Gold nanoparticles and empty or nanoparticle-loaded ferritin cages form an interpenetrating simple cubic AB structure (isostructural with CsCl). We also show that these magnetic assemblies provide contrast enhancement in magnetic resonance imaging. © 2013 Macmillan Publishers Limited. All rights reserved.


Pennanen T.,King's College London | Perkkio A.-P.,Aalto University
Mathematical Programming | Year: 2012

This paper studies dynamic stochastic optimization problems parameterized by a random variable. Such problems arise in many applications in operations research and mathematical finance. We give sufficient conditions for the existence of solutions and the absence of a duality gap. Our proof uses extended dynamic programming equations, whose validity is established under new relaxed conditions that generalize certain no-arbitrage conditions from mathematical finance. © 2012 Springer and Mathematical Optimization Society.


News Article | September 1, 2016
Site: www.materialstoday.com

A new study has shown the potential for nanofiber scaffolds in guiding the behavior of stem and cancer cells, enabling them to act in a different but controlled way in vitro. The scaffolds were shown to direct the preferential orientation of human mesenchymal stem cells, to suppress the expression of major inflammatory factors, and to immobilize cancer cells. Such customized scaffolds, which can mimic a native extracellular matrix, could open up new research into stem and cancer cell manipulation, associated advanced-therapy development, and conditions such as Alzheimer's and Parkinson's disease. Many studies have found it difficult to identify a proper substrate for in vitro models on engineered scaffolds that can modulate cell differentiation. However, in this work, published in Scientific Reports [Kazantseva, et al., Sci. Rep. (2016) DOI: 10.1038/srep30150], scientists demonstrated a new design and functionality of unique 3D customized porous substrate scaffolds of aligned, self-assembled ceramic nanofibers of ultra-high anisotropy ratio, augmented into graphene shells. The hybrid nano-network provides a previously unavailable combination of selective guidance stimuli for stem cell differentiation, variations in immune reactions, and local immobilization of cancer cells. The team, from Aalto University in Finland, in collaboration with Protobios, CellIn Technologies and Tallinn University of Technology, was inspired by the need for new advanced-therapy medicinal products, such as tissue engineering and even anti-cancer and neurological drug research, and associated areas such as toxicology. The scaffolds are capable of mimicking a native extracellular matrix that can modulate cell differentiation. The scaffold helps in the evaluation of primary cells' fate under different conditions, as it provides controlled conditions to assess factors with greater precision by varying parameters.
As team leader Michael Gasik points out, “This unique hybrid nano-network allows for an exceptional combination of selective guidance stimuli for stem cell development, variations in immune reactions, and behavior of cancer cells”. Such selective down-regulation of certain inflammatory cytokines could also allow the approach to be a means of exploring the human immune system and treating associated diseases. Researcher Irina Hussainova also said “Structures borrowed from nature are of special interest because of their possible great effect on tissue engineering and regenerative medicine”. The work could help towards the development of new cancer tumor models to identify how cancer develops, and for new cancer therapies. They have confirmed the effects for mesenchymal stem cells, mononuclear blood cells and four different tumor types, all of which exhibit rather distinct responses, so the team is now exploring neurogenic markers, immunology features and peculiarities between various cancer cell models.


News Article | April 15, 2016
Site: www.theenergycollective.com

Intermediary actors can be crucial for bringing about low energy transitions. This blog explores what they are and provides some key insights about intermediaries in low energy transitions. It has long been recognized that changing the way we produce and use energy is of crucial importance to tackle the challenges related to depleting fuel resources and their environmental impacts. For things to change, actors who facilitate these processes and connect people are important. The capital city of Helsinki, Finland, has set up an innovation unit, Forum Virium, which coordinates the construction of a smart city district within the city, combining the use of several innovations, including solutions for energy storage and management, car-free blocks, and services such as a smart application that allows residents to control appliances, lighting and heating remotely. In this development, according to ongoing work by Eva Heiskanen and Kaisa Matschoss, Forum Virium has acted as an intermediary identifying new innovations and organising networking events for information sharing, among other things. In the UK, an independent organisation called Bioregional has been a crucial intermediary in low-carbon building projects. In Brighton, Bioregional acted as a developer for One Brighton, a multi-residential energy-efficient, insulated and triple-glazed building heated by woodfuel pellets, with a community space and social housing developed to the principles of One Planet Living. It builds on the experience of BedZED, a pioneering low energy housing development built in the early 2000s in London. With Mari Martiskainen, we have explored how Bioregional acted as an intermediary to One Brighton in several ways. It created a tangible vision for the building project by adapting previous learning from projects like BedZED, carried out project management activities, and connected the local council, the builder and the local community together.
These kinds of innovation intermediaries are organisations, or sometimes individuals, which can act as go-betweens for people, funds, knowledge and ideas that in combination may result in innovation. Despite the important role that they can play, these intermediary actors in energy transitions are often invisible and their roles under-played. A workshop held on March 9-10, co-organised by the TRIPOD project and the Centre on Innovation and Energy Demand (CIED), aimed to better understand the role that intermediary actors play in energy transitions. Three important insights emerged: Innovation intermediaries can drive change (akin to innovation champions or institutional entrepreneurs) or mediate and connect individuals, groups, resources and knowledge across sectors (and so are sometimes called boundary spanners, knowledge brokers or hybrid actors). While many intermediaries act as distributed change agents across networks and systems, intermediaries can take on broader roles and operate on many levels. The kinds of roles innovation intermediaries carry out depend on their focus, degree of financial or political independence, and mandate, among other things. Examples from the processes of creating low-energy buildings, installing heat pumps and setting up community energy schemes presented at the workshop showed how intermediation has evolved from simple advice and information dissemination to the development of tools, business partnerships, professional services, and policy advocacy. Perhaps a key question is what kind of intermediaries are most useful for advancing sustainable energy transitions, and whether such intermediaries can be intentionally orchestrated. And should they be? These are pertinent areas for further research. 2. What kind of intermediary activities will bring about more sustainable energy systems?
The intermediary activities required are likely to differ depending on the phase of the energy transition or the stage of innovation. This is also likely to define the extent to which intermediation between actors and processes is needed at all. Intermediary actors may also experience favourable or hostile contexts, which require different strategies. We scholars continue to have different interpretations of the scale and definition of intermediation activity. In the workshop, there were differences of opinion regarding the degree of advocacy and of neutrality (intermediaries as benefactors or businesses) that intermediary actors possess, or should possess. What we did agree on, however, was the need to make intermediation more visible. This is a fine balance, however, as intermediaries should not take centre stage if they are to act as effective brokers between actors. Intermediation focuses on delivering a key object or a service. This can range from shared energy output (from a community energy scheme, for example) and technologies (such as heat pumps) to more broadly facilitating low-energy transitions or niche areas, like low-energy buildings. The focus partly determines whether the intermediary is regarded as neutral (politically, financially or technologically) or whether it seeks to advance particular interests. Both types are needed, but workshop participants felt that intermediaries' stances should be made explicit to others. Why are our insights relevant? When making recommendations as to how we can achieve more sustainable energy systems, it is important to acknowledge the role of different intermediary actors and their associations. We also need to differentiate the kinds of intermediary activities that are fundamental for low-energy transitions from those that are merely beneficial or even detrimental. In addition, we need to know whether policies or community experiments depend on particular intermediaries for their success.
Paula Kivimaa is a Senior Research Fellow at SPRU, working for the Centre on Innovation and Energy Demand (CIED). She is also a Senior Researcher at the Finnish Environment Institute SYKE and a Docent at the Aalto University School of Business. Paula leads the CIED project on Low Energy Housing Innovations and the Role of Intermediaries. She is also a member of the TRIPOD project consortium. Her current research interests include policy analysis from low-carbon innovation and transition perspectives, as well as policy-complementing approaches to support low-carbon innovation, such as intermediation. Sussex Energy Group members Dr. Mari Martiskainen, Professor Adrian Smith and Dr. Jake Barnes also contributed to the workshop.


News Article | December 14, 2016
Site: www.eurekalert.org

Researchers at Aalto University, Finland, and the P.L. Kapitza Institute in Moscow have discovered half-quantum vortices in superfluid helium. This vortex is a topological defect, exhibited in superfluids and superconductors, which carries a fixed amount of circulating current. 'This discovery of half-quantum vortices culminates a long search for these objects, originally predicted to exist in superfluid helium in 1976,' says Samuli Autti, Doctoral Candidate at Aalto University in Finland. 'In the future, our discovery will provide access to the cores of half-quantum vortices, hosting isolated Majorana modes, exotic solitary particles. Understanding these modes is essential for the progress of quantum information processing, building a quantum computer,' Autti continues. Macroscopic coherence in quantum systems such as superfluids and superconductors provides many possibilities, along with some central limitations. For instance, the strength of circulating currents in these systems is limited to certain discrete values by the laws of quantum mechanics. A half-quantum vortex overcomes that limitation using the non-trivial topology of the underlying material, a topic directly related to the 2016 Nobel Prize in physics. Among the emerging properties is one analogous to the so-called Alice string in high-energy physics, where a particle travelling around the string flips the sign of its charge. In general, the quantum character of these systems is already utilized in ultra-sensitive SQUID amplifiers and other important quantum devices. The article 'Observation of Half-Quantum Vortices in Topological Superfluid 3He' was published today in the online version of Physical Review Letters. The experiments were done in the Low Temperature Laboratory at Aalto University.


News Article | August 23, 2016
Site: www.cemag.us

Novel scaffolds are shown to enable cells to behave in a different but controlled way in vitro, owing to the presence of aligned, self-assembled ceramic nanofibers of an ultra-high anisotropy ratio augmented into graphene shells. "This unique hybrid nano-network allows for an exceptional combination of selective guidance stimuli for stem cell development, variations in immune reactions, and behavior of cancer cells," says Professor Michael Gasik from Aalto University. These scaffolds, for example, were shown to direct the preferential orientation of human mesenchymal stem cells, similarly to a neurogenic lineage, to suppress the expression of major inflammatory factors, and to immobilize cancer cells. The selective downregulation of specific inflammatory cytokines may be anticipated as a new tool for understanding the human immune system and for treating associated diseases. The effects observed are self-regulated by the cells alone, without the side effects that usually arise from the use of external factors. The new scaffolds may help to control the fate of stem cells, such as development towards axon and neurite formation. This is important, for instance, in the development of Alzheimer's disease therapy. The discovery may also be very useful in developing new cancer tumor models, understanding how cancer develops, and developing new cancer therapies. The results of the study were published in Nature Scientific Reports. Aalto University made the study in collaboration with Protobios, CellIn Technologies, and Tallinn University of Technology.


News Article | August 26, 2016
Site: www.nanotech-now.com

Nanofiber scaffolds demonstrate new features in the behavior of stem and cancer cells

Abstract: A discovery in the field of biomaterials may open new frontiers in stem and cancer cell manipulation and associated advanced therapy development. The results of the study were published in Nature Scientific Reports; Aalto University made the study in collaboration with Protobios, CellIn Technologies, and Tallinn University of Technology.


Jo H.-H.,Aalto University | Perotti J.I.,Aalto University | Kaski K.,Aalto University | Kertesz J.,Aalto University | Kertesz J.,Central European University
Physical Review X | Year: 2014

Non-Poissonian bursty processes are ubiquitous in natural and social phenomena, yet little is known about their effects on the large-scale spreading dynamics. In order to characterize these effects, we devise an analytically solvable model of susceptible-infected spreading dynamics in infinite systems for arbitrary inter-event time distributions and for the whole time range. Our model is stationary from the beginning, and the role of the lower bound of inter-event times is explicitly considered. The exact solution shows that for early and intermediate times, the burstiness accelerates the spreading as compared to a Poisson-like process with the same mean and same lower bound of inter-event times. Such behavior is opposite for late-time dynamics in finite systems, where the power-law distribution of inter-event times results in a slower and algebraic convergence to a fully infected state in contrast to the exponential decay of the Poisson-like process. We also provide an intuitive argument for the exponent characterizing algebraic convergence.
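The early-time acceleration the abstract describes can be illustrated with a small numerical sketch (ours, not the paper's analytically solvable model): a stationary heavy-tailed (bursty) inter-event process fires soon after a random observation moment more often than a Poisson-like process with the same mean and the same lower bound. The distribution choices, parameter values, and threshold below are our assumptions for illustration only; the late-time algebraic slowdown in finite systems is a separate effect not captured here.

```python
import random

random.seed(1)
t0 = 1.0      # lower bound of inter-event times
mean = 5.0    # common mean of both distributions

# Pareto(alpha, t0) has mean alpha*t0/(alpha - 1); solve for alpha so that
# the bursty distribution has the same mean as the Poisson-like one.
alpha = mean / (mean - t0)

def bursty_sample():
    # power-law (Pareto) inter-event time with minimum t0
    u = 1.0 - random.random()          # uniform in (0, 1]
    return t0 * u ** (-1.0 / alpha)

def poisson_like_sample():
    # exponential shifted by t0, same mean as the Pareto above
    return t0 + random.expovariate(1.0 / (mean - t0))

N, early = 100_000, 2.0
p_bursty = sum(bursty_sample() < early for _ in range(N)) / N
p_poisson = sum(poisson_like_sample() < early for _ in range(N)) / N
print(f"P(inter-event time < {early}) bursty:       {p_bursty:.3f}")
print(f"P(inter-event time < {early}) Poisson-like: {p_poisson:.3f}")
```

With the same mean and lower bound, the heavy-tailed process produces many short inter-event times, which is the mechanism behind the faster early- and intermediate-time spreading reported in the paper.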


Mantymaki M.,University of Turku | Salo J.,University of Oulu | Salo J.,Aalto University
International Journal of Information Management | Year: 2013

Spending real money on virtual goods and services has become a popular form of online consumer behavior, particularly among teenagers. This study builds on the Unified Theory of Acceptance and Use of Technology (UTAUT) to examine the role of motivation, social influence (measured as perceived network size), the user interface, and facilitating conditions in predicting the intention to engage in purchasing in social virtual worlds. The research model is tested with data from 1045 users of Habbo Hotel, the world's most popular virtual world for teenagers. The results underscore the role of perceived network size and motivational factors in explaining in-world purchase decisions. The study shows that virtual purchasing behavior is substantially influenced by the factors driving usage behavior. Hence, virtual purchasing can be understood as a means to enhance the user experience. For virtual world operators, reinforcing the sense of presence of the user's social network offers a means to promote virtual purchasing. © 2013.


Priimagi A.,Aalto University | Priimagi A.,Polytechnic of Milan | Barrett C.J.,McGill University | Shishido A.,Tokyo Institute of Technology
Journal of Materials Chemistry C | Year: 2014

The design of functional and stimuli-responsive materials is among the key goals of modern materials science. The structure and properties of such materials can be controlled via various stimuli, among which light is often the most attractive choice. Light is ubiquitous and a gentle energy source, and its properties can be optimized for a specific target remotely, with high spatial and temporal resolution. Light-control over molecular alignment has in recent years attracted particular interest, for potential applications such as reconfigurable photonic elements and optical-to-mechanical energy conversion. Herein, we bring forward some recent examples and emerging trends in this exciting field of research, focusing on liquid crystals, liquid-crystalline polymers and photochromic organic crystals, which we believe serve to highlight the immense potential of light-responsive materials for a wide variety of current and future high-tech applications in photonics, energy harvesting and conversion. This journal is © the Partner Organisations 2014.


Heczko O.,ASCR Institute of Physics Prague | Straka L.,Aalto University | Seiner H.,Czech Institute of Thermomechanics
Acta Materialia | Year: 2013

The morphology and microstructure of a single, macroscopically straight twin interface with a twinning stress of about 1 MPa was analysed in detail by differential interference contrast optical microscopy and X-ray diffraction. The interface was identified as a Type I macrotwin boundary between two variants with a/b-laminates and constant modulation direction, in contrast with a highly mobile twinned interface consisting of Type II macrotwin boundary segments with changing modulation direction and a/b-laminate reported earlier. Theoretical analysis using elastic continuum theory shows that only pure Type I or Type II boundaries are fully compatible with a/b-laminate. Other hypothetical twin microstructures combining these two mobile interfaces are shown to be incompatible to various degrees. A weakly incompatible combination of Type I and II boundaries was experimentally observed. The large difference in mobility between Type I and Type II macrotwin boundaries created at the same location of the same sample indicates that the mobility depends on the internal structure of these boundaries. A possible origin of this different mobility is discussed. © 2012 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.


Zarubova N.,ASCR Institute of Physics Prague | Ge Y.,Aalto University | Heczko O.,ASCR Institute of Physics Prague | Hannula S.-P.,Aalto University
Acta Materialia | Year: 2013

The straining of non-modulated (NM) Ni-Mn-Ga martensite was studied by in situ transmission electron microscopy (TEM). Initially, the self-accommodated NM martensitic structure consists of internally twinned domains. During straining, the detwinning process starts within these domains. The internal twin variant more favorably oriented to the stress grows at the expense of the other one. In the detwinned, single-variant domain, a new twin variant can form, gradually replacing the existing variant via the twinning process. Both processes, detwinning and new twinning, proceed by the same mechanism, namely the movement of twinning dislocations along the twin boundary. Lattice dislocations are also created in the detwinning process. While the boundaries between the internal twins are coherent and mobile, the boundaries between the internally twinned domains are incoherent, strained and immobile. The planes of the coherent twin boundary are {202) planes and the Burgers vectors of the twinning dislocations are parallel to the ⟨101] direction. The magnitude of the Burgers vector determined from the TEM observations disagrees with that calculated from the lattice constants measured by X-ray diffraction. Possible reasons for this discrepancy are discussed. © 2013 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.


Straka L.,Aalto University | Hanninen H.,Aalto University | Heczko O.,ASCR Institute of Physics Prague
Applied Physics Letters | Year: 2011

Magnetic-field-induced reorientation in Ni-Mn-Ga five-layered martensite (10 M) mediated by the motion of single twin boundary was evaluated from magnetization measurements between 20 and 300 K. At 300 K, the single twin boundary moved in an exceptionally small field of 25 kA/m. Twinning stress, as a measure of the twin boundary mobility, was determined from the magnetization curves using a magnetic-energy-based model; it increased from ≈0.1 MPa at 300 K to ≈0.8 MPa at 20 K. The dependence is discussed in terms of thermal activation and the effect of intermartensitic transformation is considered. © 2011 American Institute of Physics.


Lunden J.,Aalto University | Kassam S.A.,University of Pennsylvania | Koivunen V.,Aalto University
IEEE Transactions on Signal Processing | Year: 2010

Cognitive radios sense the radio spectrum in order to find underutilized spectrum and then exploit it in an agile manner. Spectrum sensing has to be performed reliably in challenging propagation environments characterized by shadowing and fading effects as well as heavy-tailed noise distributions. In this paper, a robust computationally efficient nonparametric cyclic correlation estimator based on the multivariate (spatial) sign function is proposed. Nonparametric statistics provide additional robustness against heavy-tailed noise and when the noise statistics are not fully known. Asymptotic distribution of the spatial sign cyclic correlation estimator under the null hypothesis is established. Tests using constraint on false alarm rate are derived based on the estimated spatial sign cyclic correlation for single-user and collaborative spectrum sensing by multiple secondary users. Theoretical justification for detecting cyclostationary signals using the spatial sign cyclic correlation is provided. A sequential detection scheme for reducing the average detection time is proposed. Simulation experiments and theoretical results comparing the proposed method with cyclostationary spectrum sensing methods employing the conventional cyclic correlation estimator are presented. Simulations demonstrate the reliable and highly robust performance of the proposed nonparametric spectrum sensing method in both Gaussian and non-Gaussian noise environments. © 2009 IEEE.
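The core robustness idea can be sketched in a few lines. This is a hedged illustration for a scalar complex signal (the paper treats the multivariate spatial case; the function names, the toy signal, and the parameter choices are ours): the spatial sign discards amplitude, which is what buys robustness to heavy-tailed noise, and the cyclic correlation then picks out the periodicity of the signal's second-order statistics.

```python
import cmath
import random

def spatial_sign(z: complex) -> complex:
    # Unit-modulus "sign" of a complex sample; amplitude outliers are clipped away.
    return 0j if z == 0 else z / abs(z)

def sign_cyclic_corr(x, alpha, tau):
    # (1/M) * sum_t s(x_t) * conj(s(x_{t+tau})) * exp(-2*pi*i*alpha*t)
    M = len(x) - tau
    return sum(spatial_sign(x[t]) * spatial_sign(x[t + tau]).conjugate()
               * cmath.exp(-2j * cmath.pi * alpha * t)
               for t in range(M)) / M

# Toy cyclostationary signal: random BPSK symbols, each held for 2 samples,
# giving a cyclic feature at normalized cyclic frequency 0.5.
random.seed(0)
symbols = [random.choice((-1.0, 1.0)) for _ in range(5000)]
x = [complex(s) for s in symbols for _ in (0, 1)]

present = abs(sign_cyclic_corr(x, alpha=0.5, tau=1))
absent = abs(sign_cyclic_corr(x, alpha=0.37, tau=1))
print(f"|R| at the true cyclic frequency 0.5: {present:.3f}")
print(f"|R| at an off frequency 0.37:         {absent:.3f}")
```

A detector along the paper's lines would compare such an estimate at candidate cyclic frequencies against a threshold set by the estimator's null distribution; the asymptotic null distribution derived in the paper is what makes the false-alarm-rate constraint tractable.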


Breuer H.-P.,Albert Ludwigs University of Freiburg | Laine E.-M.,Aalto University | Laine E.-M.,University of Turku | Piilo J.,University of Turku | And 2 more authors.
Reviews of Modern Physics | Year: 2016

The dynamical behavior of open quantum systems plays a key role in many applications of quantum mechanics, examples ranging from fundamental problems, such as the environment-induced decay of quantum coherence and relaxation in many-body systems, to applications in condensed matter theory, quantum transport, quantum chemistry, and quantum information. In close analogy to a classical Markovian stochastic process, the interaction of an open quantum system with a noisy environment is often modeled phenomenologically by means of a dynamical semigroup with a corresponding time-independent generator in Lindblad form, which describes a memoryless dynamics of the open system typically leading to an irreversible loss of characteristic quantum features. However, in many applications open systems exhibit pronounced memory effects and a revival of genuine quantum properties such as quantum coherence, correlations, and entanglement. Here recent theoretical results on the rich non-Markovian quantum dynamics of open systems are discussed, paying particular attention to the rigorous mathematical definition, to the physical interpretation and classification, as well as to the quantification of quantum memory effects. The general theory is illustrated by a series of physical examples. The analysis reveals that memory effects of the open system dynamics reflect characteristic features of the environment which opens a new perspective for applications, namely, to exploit a small open system as a quantum probe signifying nontrivial features of the environment it is interacting with. This Colloquium further explores the various physical sources of non-Markovian quantum dynamics, such as structured environmental spectral densities, nonlocal correlations between environmental degrees of freedom, and correlations in the initial system-environment state, in addition to developing schemes for their local detection. 
Recent experiments addressing the detection, quantification, and control of non-Markovian quantum dynamics are also briefly discussed. © 2016 American Physical Society.


Golubev D.,Karlsruhe Institute of Technology | Faivre T.,Aalto University | Pekola J.P.,Aalto University
Physical Review B - Condensed Matter and Materials Physics | Year: 2013

We discuss heat transport through a Josephson tunnel junction under various bias conditions. We first derive the formula for the cooling power of the junction valid for arbitrary time dependence of the Josephson phase. Combining it with the classical equation of motion for the phase, we find the time-averaged cooling power as a function of bias current or bias voltage. We also find the noise of the heat current and, more generally, the full counting statistics of the heat transport through the junction. We separately consider the metastable superconducting branch of the current-voltage characteristics allowing quantum fluctuations of the phase in this case. This regime is experimentally attractive since the junction has low power dissipation, low impedance, and therefore may be used as a sensitive detector. © 2013 American Physical Society.


Mertaniemi H.,Aalto University | Forchheimer R.,Linköping University | Ikkala O.,Aalto University | Ras R.H.A.,Aalto University
Advanced Materials | Year: 2012

We demonstrate that when water droplets impact each other while traveling on a superhydrophobic surface, they are able to rebound like billiard balls. We present elementary Boolean logic operations and a flip-flop memory based on these rebounding water droplet collisions. Furthermore, bouncing or coalescence can be easily controlled by process parameters. Thus, by the controlled coalescence of reactive droplets, here using the quenching of fluorescent metal nanoclusters as a model reaction, we also demonstrate an elementary operation for programmable chemistry. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
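The logic scheme follows the classic billiard-ball abstraction, which can be captured in a toy model (ours, not the paper's physical implementation): droplets on two crossing tracks are booleans, a collision deflects both droplets, and the straight-through and deflected tracks carry different Boolean functions of the inputs, which is why one collision element yields AND together with AND-NOT outputs.

```python
# Toy billiard-ball-style droplet gate (our abstraction, not the paper's
# exact geometry). Inputs: droplet present (True) or absent (False) on
# tracks a and b. If both are present they collide and rebound onto the
# deflected track; otherwise each continues straight.

def collision_gate(a: bool, b: bool):
    """Return (deflected: a AND b, a-straight: a AND NOT b, b-straight: b AND NOT a)."""
    deflected = a and b
    a_straight = a and not b
    b_straight = b and not a
    return deflected, a_straight, b_straight

# Full truth table of the gate:
for a in (False, True):
    for b in (False, True):
        print(a, b, "->", collision_gate(a, b))
```

Because the element produces both a conjunction and negated-input conjunctions, cascades of such gates (with droplet routing) can realize arbitrary Boolean functions, and feedback paths give the flip-flop memory mentioned in the abstract.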


Casey T.R.,Aalto University | Toyli J.,University of Turku
Technovation | Year: 2012

This article aims to analyze the strategic management of two-sided platforms from the viewpoint of a mobile communications platform manager and elaborate on the dynamics that result in either platform success or failure. A framework is created to model the endogenous formation and diffusion process of a two-sided platform, describing the interplay of strategy levers that platform managers have at their disposal and factors affecting user willingness to create platform affiliation. The framework is applied to the diffusion of public wireless local area access services and configured with extensive data reflecting a large European city and platform deployment costs. The results show the effect of subsidization, revenue sharing, and alliance strategies and highlight the importance of understanding feedback structure and dynamic complexity around two-sided platforms. The results also point out how strategy opportunities vary for different types of platform managers, for example, mobile operators extending their mobile infrastructure or large internet companies managing adjacent service platforms and striving for disruptive platform envelopment. © 2012 Elsevier Ltd.
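The feedback structure at the heart of such models can be sketched in a minimal two-sided diffusion toy (our simplification of the kind of system-dynamics model the article builds; all rates, thresholds, and the subsidy value are assumed, not taken from the article): each side's adoption grows with the other side's installed base, and a subsidy lever lets the platform manager break the chicken-and-egg deadlock.

```python
def simulate(subsidy: float = 0.0, steps: int = 50):
    consumers, providers = 0.01, 0.01   # initial adoption fractions
    rate, threshold = 0.3, 0.05         # assumed diffusion rate and affiliation threshold
    for _ in range(steps):
        # willingness to affiliate depends on the other side's base (plus subsidy)
        dc = rate * (1 - consumers) * max(0.0, providers + subsidy - threshold)
        dp = rate * (1 - providers) * max(0.0, consumers - threshold)
        consumers = min(1.0, consumers + dc)
        providers = min(1.0, providers + dp)
    return consumers, providers

print("no subsidy: ", simulate(0.0))    # stalls near zero: platform failure
print("with subsidy:", simulate(0.10))  # both sides tip to near-full adoption
```

In this toy, without the subsidy neither side ever clears the affiliation threshold and adoption stalls; a modest subsidy to one side starts cross-side reinforcement and both sides tip, which is the qualitative success/failure dynamic the article's framework explores with calibrated data.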


Seiner H.,Czech Institute of Thermomechanics | Straka L.,Aalto University | Heczko O.,ASCR Institute of Physics Prague
Journal of the Mechanics and Physics of Solids | Year: 2014

We present a continuum-based model of microstructures forming at the macro-twin interfaces in thermoelastic martensites and apply this model to highly mobile interfaces in 10 M modulated Ni-Mn-Ga martensite. The model is applied at three distinct spatial scales observed in the experiment: meso-scale (modulation twinning), micro-scale (compound a-b lamination), and nano-scale (nanotwinning in the concept of adaptive martensite). We show that the two mobile interfaces (Type I and Type II macro-twins) have different micromorphologies at all considered spatial scales, which can directly explain their different twinning stresses observed in experiments. The results of the model are discussed with respect to various experimental observations at all three considered spatial scales. © 2013 Elsevier Ltd.


Rahmstorf S.,Potsdam Institute for Climate Impact Research | Perrette M.,Potsdam Institute for Climate Impact Research | Vermeer M.,Aalto University
Climate Dynamics | Year: 2012

We determine the parameters of the semi-empirical link between global temperature and global sea level in a wide variety of ways, using different equations, different data sets for temperature and sea level as well as different statistical techniques. We then compare projections of all these different model versions (over 30) for a moderate global warming scenario for the period 2000-2100. We find the projections are robust and are mostly within ±20% of that obtained with the method of Vermeer and Rahmstorf (Proc Natl Acad Sci USA 106:21527-21532, 2009), namely ~1 m for the given warming of 1.8°C. Lower projections are obtained only if the correction for reservoir storage is ignored and/or the sea level data set of Church and White (Surv Geophys, 2011) is used. However, the latter provides an estimate of the base temperature T0 that conflicts with the constraints from three other data sets, in particular with proxy data showing stable sea level over the period 1400-1800. Our new best-estimate model, accounting also for groundwater pumping, is very close to the model of Vermeer and Rahmstorf (Proc Natl Acad Sci USA 106:21527-21532, 2009). © 2011 Springer-Verlag.
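The semi-empirical relation behind these projections, in the Vermeer and Rahmstorf (2009) form, is dH/dt = a(T − T0) + b dT/dt, and a century projection is just its time integral. The sketch below uses parameter values of the order reported in that paper (a ≈ 0.56 cm yr⁻¹ K⁻¹, b ≈ −4.9 cm K⁻¹), but the baseline T0 and the linear 1.8 K warming path are our assumptions, not the study's fitted inputs, so the number it prints is illustrative only.

```python
# Semi-empirical sea-level sketch: dH/dt = a*(T - T0) + b*dT/dt,
# integrated with yearly steps over 2000-2100.
a = 0.56    # cm per year per kelvin (order of the published value)
b = -4.9    # cm per kelvin (order of the published value)
T0 = -0.4   # baseline temperature (K, relative to the year-2000 anomaly); assumed

dt = 1.0
H = 0.0                       # cumulative sea-level rise, cm
T_prev = 0.0                  # temperature anomaly relative to 2000
for year in range(1, 101):
    T = 1.8 * year / 100.0    # assumed linear warming of 1.8 K over the century
    H += (a * (T - T0) + b * (T - T_prev) / dt) * dt
    T_prev = T
print(f"projected rise 2000-2100: {H:.1f} cm")
```

With these toy settings the integral comes out a few tens of centimetres below the paper's ~1 m central estimate; the fitted baseline and the corrections the study applies (reservoir storage, groundwater pumping) shift the published projection upward.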


Gasparinetti S.,Aalto University | Kamleitner I.,Karlsruhe Institute of Technology
Physical Review B - Condensed Matter and Materials Physics | Year: 2012

We introduce and discuss a scheme for Cooper-pair pumping. The scheme relies on the coherent transfer of a superposition of charge states across a superconducting island and is realized by adiabatic manipulation of magnetic fluxes. Differently from previous implementations, it does not require any modulation of electrostatic potentials. We find a peculiar dependence of the pumped charge on the superconducting phase bias across the pump and that an arbitrarily large amount of charge can be pumped in a single cycle when the phase bias is π. We explain these features and their relation to the adiabatic theorem. © 2012 American Physical Society.


Hietanen J.K.,University of Tampere | Nummenmaa L.,Aalto University | Nummenmaa L.,University of Turku
PLoS ONE | Year: 2011

Recent event-related potential studies have shown that the occipitotemporal N170 component - best known for its sensitivity to faces - is also sensitive to perception of human bodies. Considering that in the timescale of evolution clothing is a relatively new invention that hides the bodily features relevant for sexual selection and arousal, we investigated whether the early N170 brain response would be enhanced to nude over clothed bodies. In two experiments, we measured N170 responses to nude bodies, bodies wearing swimsuits, clothed bodies, faces, and control stimuli (cars). We found that the N170 amplitude was larger to opposite and same-sex nude vs. clothed bodies. Moreover, the N170 amplitude increased linearly as the amount of clothing decreased from full clothing via swimsuits to nude bodies. Strikingly, the N170 response to nude bodies was even greater than that to faces, and the N170 amplitude to bodies was independent of whether the face of the bodies was visible or not. All human stimuli evoked greater N170 responses than did the control stimulus. Autonomic measurements and self-evaluations showed that nude bodies were affectively more arousing compared to the other stimulus categories. We conclude that the early visual processing of human bodies is sensitive to the visibility of the sex-related features of human bodies and that the visual processing of other people's nude bodies is enhanced in the brain. This enhancement is likely to reflect affective arousal elicited by nude bodies. Such facilitated visual processing of other people's nude bodies is possibly beneficial in identifying potential mating partners and competitors, and for triggering sexual behavior. © 2011 Hietanen et al.


Priimagi A.,Aalto University | Cavallo G.,Polytechnic of Milan | Metrangolo P.,Polytechnic of Milan | Metrangolo P.,VTT Technical Research Center of Finland | Resnati G.,Polytechnic of Milan
Accounts of Chemical Research | Year: 2013

Halogen bonding is an emerging noncovalent interaction for constructing supramolecular assemblies. Though similar to the more familiar hydrogen bonding, four primary differences between these two interactions make halogen bonding a unique tool for molecular recognition and the design of functional materials. First, halogen bonds tend to be much more directional than (single) hydrogen bonds. Second, the interaction strength scales with the polarizability of the bond-donor atom, a feature that researchers can tune through single-atom mutation. In addition, halogen bonds are hydrophobic whereas hydrogen bonds are hydrophilic. Lastly, the size of the bond-donor atom (halogen) is significantly larger than hydrogen. As a result, halogen bonding provides supramolecular chemists with design tools that cannot be easily met with other types of noncovalent interactions and opens up unprecedented possibilities in the design of smart functional materials.

This Account highlights the recent advances in the design of halogen-bond-based functional materials. Each of the unique features of halogen bonding, directionality, tunable interaction strength, hydrophobicity, and large donor atom size, makes a difference. Taking advantage of the hydrophobicity, researchers have designed small-size ion transporters. The large halogen atom size provided a platform for constructing all-organic light-emitting crystals that efficiently generate triplet electrons and have a high phosphorescence quantum yield. The tunable interaction strengths provide tools for understanding light-induced macroscopic motions in photoresponsive azobenzene-containing polymers, and the directionality renders halogen bonding useful in the design of functional supramolecular liquid crystals and gel-phase materials. Although halogen-bond-based functional materials design is still in its infancy, we foresee a bright future for this field.
We expect that materials designed based on halogen bonding could lead to applications in biomimetics, optics/photonics, functional surfaces, and photoswitchable supramolecules. © 2013 American Chemical Society.


Nefedov I.S.,Aalto University | Melnikov L.A.,Saratov State Technical University
Applied Physics Letters | Year: 2014

We demonstrate the production of strong directive thermal emission in the far-field zone of asymmetric hyperbolic metamaterials (AHMs), exceeding that predicted by Planck's limit. Asymmetry is inherent to a uniaxial medium whose optical axis is tilted with respect to the medium interfaces. The use of AHMs is shown to enhance the free-space coupling efficiency of thermally radiated waves, resulting in super-Planckian far-field thermal emission in certain directions. This effect is impossible in usual hyperbolic materials because the emission of high-density-of-states (DOS) photons into vacuum, which has a smaller DOS, is prevented by total internal reflection. Different plasmonic metamaterials are proposed for realizing AHM media; the thermal emission from an AHM based on a graphene multilayer structure is presented as an example. © 2014 AIP Publishing LLC.


Pekola J.P.,Aalto University | Solinas P.,Aalto University | Shnirman A.,Karlsruhe Institute of Technology | Averin D.V.,State University of New York at Stony Brook
New Journal of Physics | Year: 2013

We propose a calorimetric measurement of work in a quantum system. As a physical realization, we consider a superconducting two-level system, a Cooper-pair box, driven by a gate voltage past an avoided level crossing at charge degeneracy. We demonstrate that, with realistic experimental parameters, the temperature measurement of a resistor (environment) can detect single microwave photons emitted or absorbed by the two-level system. This method would thus be a way to measure the full distribution of work in repeated measurements, and to assess the quantum fluctuation relations. © IOP Publishing and Deutsche Physikalische Gesellschaft.


Vehkalahti R.,University of Turku | Hollanti C.,University of Turku | Hollanti C.,Aalto University | Oggier F.,Nanyang Technological University
IEEE Transactions on Information Theory | Year: 2012

Multiple-input double-output (MIDO) codes are important in near-future wireless communications, where the portable end-user device is physically small and will typically contain at most two receive antennas. Especially tempting is the 4 × 2 channel due to its immediate applicability in digital video broadcasting (DVB). Such channels optimally employ rate-two space-time (ST) codes consisting of (4 × 4) matrices. Unfortunately, such codes are in general very complex to decode, hence setting forth a call for constructions with reduced complexity. Recently, some reduced-complexity constructions have been proposed, but they have mainly been based on different ad hoc methods and have resulted in isolated examples rather than in a more general class of codes. In this paper, it will be shown that a family of division algebra based MIDO codes will always result in at least 37.5% worst-case complexity reduction, while maintaining full diversity and, for the first time, the nonvanishing determinant (NVD) property. The reduction follows from the fact that, similarly to the Alamouti code, the codes will be subsets of matrix rings of the Hamiltonian quaternions, hence allowing simplified decoding. At the moment, such reductions are among the best known for rate-two MIDO codes [5], [6]. Several explicit constructions are presented and shown to have excellent performance through computer simulations. © 2006 IEEE.
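The quaternionic structure the abstract invokes is easiest to see in the 2 × 2 Alamouti block, the prototype that the paper's larger MIDO codes generalize (the sketch below is our illustration, not the paper's construction). Its determinant |a|² + |b|² never vanishes for a nonzero symbol pair, and the blocks are closed under multiplication, i.e. they form a matrix ring isomorphic to the Hamiltonian quaternions; these are the algebraic facts behind the nonvanishing-determinant property and the simplified decoding.

```python
# Alamouti block and two of its quaternionic properties.

def alamouti(a: complex, b: complex):
    return [[a, -b.conjugate()],
            [b,  a.conjugate()]]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matmul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

X = alamouti(1 + 2j, 3 - 1j)
print(det2(X))                 # |1+2j|^2 + |3-1j|^2 = 15, nonzero (NVD-style behavior)

# Closure: the product of two Alamouti blocks is again an Alamouti block.
Y = alamouti(0.5j, 2.0)
P = matmul2(X, Y)
a3, b3 = P[0][0], P[1][0]
print(P == alamouti(a3, b3))   # True
```

Because every code matrix lives in such a ring, a receiver can exploit the orthogonal column structure for lower-complexity (Alamouti-style) decoding, which is the source of the worst-case complexity reduction reported in the paper.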


Kinnunen J.J.,Aalto University | Bruun G.M.,University of Aarhus
Physical Review A - Atomic, Molecular, and Optical Physics | Year: 2015

We analyze a Bose-Einstein condensate (BEC) mixed with a superfluid two-component Fermi gas in the whole BCS-BEC crossover. Using a quasiparticle random-phase approximation combined with Beliaev theory to describe the Fermi superfluid and the BEC, respectively, we show that the single-particle and collective excitations of the Fermi gas give rise to an induced interaction between the bosons, which varies strongly with momentum and frequency. It diverges at the sound mode of the Fermi superfluid, resulting in a sharp avoided crossing feature and a corresponding sign change of the interaction energy shift in the excitation spectrum of the BEC. In addition, the excitation of quasiparticles in the Fermi superfluid leads to damping of the excitations in the BEC. Besides studying induced interactions themselves, we can use these prominent effects to systematically probe the strongly interacting Fermi gas. © 2015 American Physical Society.


Researchers nearly reached quantum limit with nanodrums: Extremely accurate measurements of microwave signals can potentially be used for data encryption based on quantum cryptography and other purposes

Abstract: Researchers at Aalto University and the University of Jyväskylä have developed a new method of measuring microwave signals extremely accurately. This method can be used for processing quantum information, for example by efficiently transforming signals from microwave circuits to the optical regime.

Important quantum limit

If you are trying to tune in a radio station but the tower is too far away, the signal gets distorted by noise. The noise results mostly from having to amplify the information carried by the signal in order to transfer it into an audible form. According to the laws of quantum mechanics, all amplifiers add noise. In the early 1980s, US physicist Carlton Caves proved theoretically that the Heisenberg uncertainty principle for such signals requires that at least half an energy quantum of noise must be added to the signal. In everyday life, this kind of noise does not matter, but researchers around the world have aimed to create amplifiers that would come close to Caves' limit. 'The quantum limit of amplifiers is essential for measuring delicate quantum signals, such as those generated in quantum computing or quantum mechanical measuring, because the added noise limits the size of signals that can be measured', explains Professor Mika Sillanpää.

From quantum bits to flying qubits

So far, the solution that comes closest to the limit is an amplifier based on superconducting tunnel junctions developed in the 1980s, but this technology has its problems. Led by Sillanpää, the researchers from Aalto and the University of Jyväskylä combined a nanomechanical resonator, a vibrating nanodrum, with two superconducting circuits, i.e. cavities. 'As a result, we have made the most accurate microwave measurement with nanodrums so far', explains Caspar Ockeloen-Korppi from Aalto University, who conducted the actual measurement. In addition to the microwave measurement, this device enables transforming quantum information from one frequency to another while simultaneously amplifying it. 'This would for example allow transferring information from superconducting quantum bits to the "flying qubits" in the visible light range and back', envision the creators of the theory for the device, Tero Heikkilä, Professor at the University of Jyväskylä, and Academy Research Fellow Francesco Massel. Therefore, the method has potential for data encryption based on quantum mechanics, i.e. quantum cryptography, as well as other applications.

The research team also included researchers Juha-Matti Pirkkalainen and Erno Darmskägg from Aalto University. The work was published in Physical Review X, one of the most distinguished journals in physics, on 28 October 2016. The work was conducted in the Center of Excellence on Low Temperature Quantum Phenomena and Devices in the Academy of Finland, and it was partially funded by the European Research Council.
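The "half an energy quantum" the release refers to can be made concrete. For a phase-insensitive linear amplifier of power gain G, Caves' (1982) bound on the noise added (in quanta, referred to the input) takes the form A ≥ |1 − 1/G| / 2, approaching half a quantum at large gain; the sample gains in this short sketch are our own choices.

```python
# Caves' quantum limit on added noise for a phase-insensitive amplifier.
def caves_min_added_quanta(G: float) -> float:
    # Minimum input-referred added noise, in quanta, for power gain G.
    return abs(1.0 - 1.0 / G) / 2.0

for G in (1, 2, 10, 100, 10_000):
    print(f"gain {G:>6}: minimum added noise {caves_min_added_quanta(G):.4f} quanta")
```

At unit gain nothing need be added, and as the gain grows the bound rises toward 0.5 quanta, the half energy quantum mentioned above; coming close to this bound is what "nearly reaching the quantum limit" means for the nanodrum measurement.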





News Article | January 12, 2016
Site: phys.org

Figure: an autonomous Maxwell's demon. When the demon sees the electron enter the island (1), it traps the electron with a positive charge (2). When the electron leaves the island (3), the demon switches back to a negative charge (4). Credit: Jonne Koski

In 1867, the Scottish physicist James Clerk Maxwell challenged the second law of thermodynamics, according to which entropy in a closed system must always increase. In his thought experiment, Maxwell took a closed gas container, divided it into two parts with an inner wall and provided the wall with a small trap door. By opening and closing the door, the creature - a 'demon' - controlling it could separate slow cold and fast warm particles onto their respective sides, thus creating a temperature difference in contravention of the laws of thermodynamics. On a theoretical level, the thought experiment has been an object of consideration for nearly 150 years, but testing it experimentally has been impossible until the last few years. Making use of nanotechnology, scientists from Aalto University have now succeeded in constructing an autonomous Maxwell's demon that makes it possible to analyse the microscopic changes in thermodynamics. The research results were recently published in Physical Review Letters. The work is part of the forthcoming PhD thesis of MSc Jonne Koski at Aalto University. "The system we constructed is a single-electron transistor that is formed by a small metallic island connected to two leads by tunnel junctions made of superconducting materials. The demon connected to the system is also a single-electron transistor that monitors the movement of electrons in the system. When an electron tunnels to the island, the demon traps it with a positive charge. Conversely, when an electron leaves the island, the demon repels it with a negative charge and forces it to move uphill against its potential, which lowers the temperature of the system," explains Professor Jukka Pekola.
What makes the demon autonomous or self-contained is that it performs the measurement and feedback operation without outside help. Changes in temperature are indicative of correlation between the demon and the system, or, in simple terms, of how much the demon 'knows' about the system. According to Pekola, the research would not have been possible without the conditions of the Low Temperature Laboratory. "We work at extremely low temperatures, so the system is so well isolated that it is possible to register extremely small temperature changes," he says. "An electronic demon also enables a very large number of repetitions of the measurement and feedback operation in a very short time, whereas those who, elsewhere in the world, used molecules to construct their demons had to contend with no more than a few hundred repetitions." The work of the team led by Pekola remains, for the time being, basic research, but in the future the results obtained may, among other things, pave the way towards reversible computing. "As we work with superconducting circuits, it is also possible for us to create qubits of quantum computers. Next, we would like to examine these same phenomena on the quantum level," Pekola reveals. More information: J. V. Koski et al. On-Chip Maxwell's Demon as an Information-Powered Refrigerator, Physical Review Letters (2015). DOI: 10.1103/PhysRevLett.115.260602
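The thermodynamic bookkeeping behind an information-powered refrigerator can be sketched with Landauer's bound, which ties each bit of information the demon acquires to at most k_B·T·ln 2 of extractable heat. The snippet below is an illustrative calculation only; the 0.1 K operating temperature is an assumed, dilution-refrigerator-scale value, not a figure from the paper.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)

def landauer_limit(temperature_k: float) -> float:
    """Minimum heat dissipated when erasing one bit at temperature T:
    Q >= k_B * T * ln 2 (Landauer's principle)."""
    return K_B * temperature_k * math.log(2)

# At an assumed dilution-refrigerator temperature of 0.1 K,
# the bound per bit is tiny:
q_bit = landauer_limit(0.1)
print(f"{q_bit:.3e} J per bit")  # ~9.6e-25 J

# The same k_B*T*ln 2 also caps how much heat an ideal Maxwell's demon
# can pump out of the system per bit of information it acquires, which
# is why many fast, low-noise repetitions matter in such an experiment.
```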


News Article | February 23, 2017
Site: www.materialstoday.com

Self-assembly is one of the fundamental principles of nature, directing the growth of larger ordered and functional systems from smaller building blocks. Self-assembly can be observed at all length scales, from molecules to galaxies. Now researchers at the Nanoscience Centre of the University of Jyväskylä and the HYBER Centre of Excellence of Aalto University, both in Finland, report a new type of self-assembly, in which tiny gold nanoclusters just a couple of nanometres in size form two- and three-dimensional materials. Each nanocluster comprises 102 gold atoms and a surface layer of 44 thiol molecules. The study, conducted with funding from the Academy of Finland and the European Research Council, is reported in a paper in Angewandte Chemie International Edition. The atomic structure of the 102-atom gold nanocluster was first resolved by Roger Kornberg’s group at Stanford University in 2007. Since then, further studies of the nanocluster’s properties have been conducted in the Jyväskylä Nanoscience Centre. In this latest study, the Finnish researchers have shown that the nanocluster’s thiol surface possesses a large number of acidic groups able to form directed hydrogen bonds with neighboring nanoclusters, initiating directed self-assembly. This self-assembly took place in a water-methanol mixture and produced two distinctly different superstructures, which were imaged by a high-resolution electron microscope at Aalto University. In one of the structures, two-dimensional, hexagonally-ordered layers of gold nanoclusters were stacked together, each layer being just one nanocluster thick. Under different synthesis conditions, the nanoclusters would instead self-assemble into three-dimensional spherical, hollow capsid structures, where the thickness of the capsid wall corresponds again to just one nanocluster. 
While the details of the formation mechanisms for the superstructures warrant further investigation, these initial observations suggest a new route to synthetically-made, self-assembling nanomaterials. “Today, we know of several tens of different types of atomistically-precise gold nanoclusters, and I believe they can exhibit a wide variety of self-assembling growth patterns that could produce a range of new meta-materials,” said Hannu Häkkinen, who coordinated the research at the Nanoscience Centre. “In biology, typical examples of self-assembling functional systems are viruses and vesicles. Biological self-assembled structures can also be de-assembled by gentle changes in the surrounding biochemical conditions. It’ll be of great interest to see whether these gold-based materials can be de-assembled and then re-assembled to different structures by changing something in the chemistry of the surrounding solvent.” “The free-standing two-dimensional nanosheets will bring opportunities towards new-generation functional materials, and the hollow capsids will pave the way for highly lightweight colloidal framework materials,” predicted postdoctoral researcher Nonappa from Aalto University. “In a broader framework, it has remained as a grand challenge to master the self-assemblies through all length scales to tune the functional properties of materials in a rational way,” said Olli Ikkala from Aalto University. “So far, it has been commonly considered sufficient to achieve sufficiently narrow size distributions of the constituent nanoscale structural units to achieve well-defined structures. The present findings suggest a paradigm change to pursue strictly defined nanoscale units for self-assemblies.” This story is adapted from material from the Academy of Finland, with editorial changes made by Materials Today. The views expressed in this article do not necessarily represent those of Elsevier. Link to original source.
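As a rough geometric illustration (not the paper's analysis), a "hexagonally-ordered layer, one nanocluster thick" can be pictured as a triangular lattice of touching spheres. The helper below generates such a layer for clusters of an assumed 2 nm diameter and checks that nearest neighbours sit exactly one diameter apart.

```python
import math

def hex_layer(n: int, d: float):
    """Centers of a hexagonally ordered monolayer of spheres of
    diameter d: a triangular lattice, n rows of n sites, with
    alternate rows offset by half a diameter."""
    pts = []
    for j in range(n):
        for i in range(n):
            x = d * (i + 0.5 * (j % 2))
            y = d * (math.sqrt(3) / 2) * j
            pts.append((x, y))
    return pts

def min_center_distance(pts):
    """Smallest center-to-center distance in the layer."""
    return min(
        math.dist(p, q)
        for i, p in enumerate(pts)
        for q in pts[i + 1:]
    )

# Clusters ~2 nm across: touching neighbours sit one diameter apart.
layer = hex_layer(6, 2.0)
print(round(min_center_distance(layer), 6))  # 2.0
```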


An international team of scientists led by a researcher at the University of California, Riverside has modified the energy spectrum of acoustic phonons—elemental excitations, also referred to as quasi-particles, that spread heat through crystalline materials like a wave—by confining them to nanometer-scale semiconductor structures. The results, published Nov. 10, 2016 in an open-access paper in the journal Nature Communications under the title "Direct observation of confined acoustic phonon polarization branches in free-standing nanowires," have important implications in the thermal management of electronic devices. The project was led by Alexander Balandin, Distinguished Professor of Electrical and Computing Engineering and UC Presidential Chair Professor in UCR's Bourns College of Engineering. The team used semiconductor nanowires made of gallium arsenide (GaAs), synthesized by researchers in Finland, and an imaging technique called Brillouin-Mandelstam light scattering spectroscopy (BMS) to study the movement of phonons through the crystalline nanostructures. By changing the size and the shape of the GaAs nanostructures, the researchers were able to alter the energy spectrum, or dispersion, of acoustic phonons. The BMS instrument used for this study was built at UCR's Phonon Optimized Engineered Materials (POEM) Center, which is directed by Balandin. Controlling phonon dispersion is crucial for improving heat removal from nanoscale electronic devices, which has become the major roadblock in allowing engineers to continue to reduce their size. It can also be used to improve the efficiency of thermoelectric energy generation, Balandin said. In that case, decreasing thermal conductivity by phonons is beneficial for thermoelectric devices that generate energy by applying a temperature gradient to semiconductors.
"For years, the only envisioned method of changing the thermal conductivity of nanostructures was via acoustic phonon scattering with nanostructure boundaries and interfaces. We demonstrated experimentally that by spatially confining acoustic phonons in nanowires one can change their velocity, the way they interact with electrons and magnons, and how they carry heat. Our work creates new opportunities for tuning thermal and electronic properties of semiconductor materials," Balandin said. In addition to Balandin, contributors to this paper included Fariborz Kargar, a graduate student and Ph.D. candidate in electrical and computer engineering at UCR and the lead author on the paper; Bishwajit Debnath, a graduate student in electrical and computer engineering at UCR; Joona-Pekko Kakko, Antti Säynätjoki and Harri Lipsanen from Aalto University in Helsinki, Finland; Denis L. Nika from Moldova State University in Chisinau, Moldova; and Roger K. Lake, professor of electrical and computer engineering at UCR. The work at UC Riverside was supported as part of the Spins and Heat in Nanoscale Electronic Systems (SHINES), an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Basic Energy Sciences (BES) under Award # SC0012670.
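A minimal toy picture of what confinement does to acoustic phonons: if the transverse wavevector in a wire of diameter d is quantized as roughly nπ/d, each confined branch disperses as ω_n(q) = v·√(q² + (nπ/d)²), and its group velocity drops below the bulk sound velocity. This isotropic sketch and its numbers (a GaAs-like v = 5000 m/s, d = 100 nm) are illustrative assumptions, not the full elasticity calculation of the paper.

```python
import math

def confined_branch_omega(q: float, n: int, v: float, d: float) -> float:
    """Toy dispersion of the n-th confined acoustic branch of a wire of
    diameter d: transverse wavevector quantized as ~ n*pi/d, so
    omega_n(q) = v * sqrt(q**2 + (n*pi/d)**2)  (isotropic sketch)."""
    return v * math.sqrt(q * q + (n * math.pi / d) ** 2)

def group_velocity(q, n, v, d, dq=1e-4):
    """Central finite-difference estimate of d(omega)/dq."""
    return (confined_branch_omega(q + dq, n, v, d)
            - confined_branch_omega(q - dq, n, v, d)) / (2 * dq)

# Assumed illustrative numbers: sound velocity 5e3 m/s,
# wire diameter 100 nm, phonon wavevector 1e7 1/m.
v_bulk, d = 5e3, 100e-9
vg = group_velocity(1e7, 1, v_bulk, d)
print(vg < v_bulk)  # confined branches propagate slower than bulk sound
```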




News Article | December 11, 2015
Site: phys.org

There are many new possibilities for the development of the gallium nitride (GaN) used in the production of LEDs. One of the most promising production methods for gallium nitride is the ammonothermal method, which uses a reactor filled with liquid ammonia. The method is identical to the hydrothermal method utilised in the production of quartz, in which water is used instead of ammonia. However, the high temperature inside the ammonothermal reactor, combined with a pressure 2,500 times atmospheric pressure and the corrosive effects of the so-called supercritical fluid, poses a challenge to the reactor chamber and thus to the manufacture of LED materials. To find a solution to the problem, Aalto University post-doctoral researcher Sami Suihkonen and a research group from the University of California, Santa Barbara, led by Physics Nobel laureate Shuji Nakamura and post-doctoral researcher Siddha Pimputkar, systematically analysed the behaviour of 35 metals, 2 metalloids and 17 different ceramic materials in 3 different supercritical fluid chemistries heated to a temperature of 572 degrees Celsius. "In the ammonothermal method, the energy contained in the reactor corresponds roughly to a stick of dynamite, making the conditions fairly hostile," says Sami Suihkonen. "A nickel-chromium alloy commonly used in the reactors tolerates ordinary supercritical ammonia quite well but poorly withstands the effects of the mixtures used in the production of GaN, which include the addition of ammonium chloride or sodium. Our research indicated that vanadium, niobium and tungsten carbide are stable in all three supercritical fluids. For practical applications, however, it is more important to find the material best suited to a certain type of chemistry. For ammonium-sodium this was silver; with ammonium-chloride, silicon nitride and noble metals appear the most promising."
According to Suihkonen, replacing the reactor's nickel-chromium alloy with other structural materials would require reshaping the manufacturing process. More robust reactors would nevertheless enable the production of higher-quality GaN containing fewer crystal defects, which in turn leads to higher-quality LEDs, and better LED quality translates into a lower price. "From a high-quality LED, more light can be obtained per unit of surface area. As the price of an LED is governed by its surface area, better materials could reduce the price of LEDs to even a fraction of the current price," Suihkonen calculates. "Moreover, higher-quality LEDs generate less heat and thus require smaller cooling elements, which could further reduce the price and enable LED lighting fixtures that are more compact than the current ones." Apart from their use in more economical and efficient illumination, these better materials could also be useful in power electronics, which is needed, among other things, for power control in electric vehicles, power supplies and converters. The study, Stability of Materials in Supercritical Ammonia Solutions, was recently published in The Journal of Supercritical Fluids. More information: Siddha Pimputkar et al. Stability of Materials in Supercritical Ammonia Solutions, The Journal of Supercritical Fluids (2015). DOI: 10.1016/j.supflu.2015.10.020
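Two of the article's claims are easy to sanity-check numerically: 572 °C at 2,500 atm is far above ammonia's critical point, and the stored mechanical (PV) energy of such a reactor is indeed of dynamite-stick order (~1 MJ). The critical-point constants are standard reference values; the 4-litre chamber volume is an assumed figure for illustration.

```python
ATM = 101_325.0            # Pa per standard atmosphere
T_REACTOR = 572 + 273.15   # K   (572 degrees C, from the article)
P_REACTOR = 2_500 * ATM    # Pa  (2,500 atm, from the article)

# Critical point of ammonia (standard reference values):
T_CRIT_NH3 = 405.5         # K
P_CRIT_NH3 = 11.33e6       # Pa

# Above both critical temperature and pressure -> supercritical fluid.
is_supercritical = T_REACTOR > T_CRIT_NH3 and P_REACTOR > P_CRIT_NH3
print(is_supercritical)  # True

# Rough stored mechanical energy ~ P * V for an assumed 4-litre chamber;
# a stick of dynamite releases on the order of 1 MJ.
stored_energy_mj = P_REACTOR * 4e-3 / 1e6
print(round(stored_energy_mj, 2))  # ~1.01 MJ
```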


News Article | December 21, 2015
Site: www.materialstoday.com

Of the many ways for creating the gallium nitride (GaN) used in the production of light-emitting diodes (LEDs), one of the most promising is the ammonothermal method, which uses a reactor filled with liquid ammonia. It is identical to the hydrothermal method used to produce quartz, in which water is used instead of ammonia. The downside to the ammonothermal method is that it requires high temperatures and a pressure 2500 times greater than atmospheric pressure, which together convert the ammonia into a supercritical fluid with properties of both a liquid and a gas. These high temperatures and pressures, together with the corrosive effects of the supercritical fluid, pose a challenge to the reactor chamber and thus to the manufacture of LED materials. In the ammonothermal method, around the same amount of energy is contained within the reactor as in a stick of dynamite, making the conditions fairly hostile, says Sami Suihkonen, a post-doctoral researcher at Aalto University in Finland. So Suihkonen and a research group from the University of California, Santa Barbara, led by Nobel laureate Shuji Nakamura, set out to find the most suitable materials for constructing the reaction chamber. As they report in The Journal of Supercritical Fluids, this involved systematically analyzing the behaviors of 35 metals, two metalloids and 17 different ceramic materials exposed to three different supercritical fluids heated to 572°C. “A nickel-chromium alloy commonly used in the reactors tolerates ordinary supercritical ammonia quite well but poorly withstands the effects of the mixtures used in the production of GaN, which include the addition of ammonium chloride or sodium,” explains Suihkonen. “Our research indicated that vanadium, niobium and tungsten carbide are stable in all three supercritical fluids. For practical applications, however, it is more important to find a material best suited for a certain type of chemistry. 
For ammonium-sodium this was silver; with ammonium-chloride, silicon nitride and noble metals appear the most promising.” According to Suihkonen, replacing the reactor's nickel-chromium alloy with other structural materials would require reshaping the manufacturing process. However, more robust reactors would allow the production of higher quality GaN containing fewer crystal defects, which in turn would lead to higher quality LEDs. As more light can be obtained per surface-area unit from a high-quality LED and the price of an LED is governed by its surface area, better materials could reduce the price of LEDs to a fraction of their current price. Moreover, higher quality LEDs generate less heat and thus require smaller cooling elements, further reducing the price and leading to LED lighting fixtures that are more compact than current ones. This story is adapted from material from Aalto University, with editorial changes made by Materials Today. The views expressed in this article do not necessarily represent those of Elsevier. Link to original source.


Volovik G.E., Aalto University | Volovik G.E., L.D. Landau Institute for Theoretical Physics | Zubkov M.A., ITEP
Annals of Physics | Year: 2014

First, we reconsider the tight-binding model of monolayer graphene in which variations of the hopping parameters are allowed. We demonstrate that an emergent 2D Weitzenböck geometry as well as an emergent U(1) gauge field appears. The emergent gauge field is equal to a linear combination of the components of the zweibein; therefore, we are actually dealing with a gauge-fixed version of emergent 2+1D teleparallel gravity. In particular, we work out the case in which the variations of the hopping parameters are due to elastic deformations, and relate the elastic deformations to the emergent zweibein. Next, we investigate the tight-binding model with varying intralayer hopping parameters for multilayer graphene with ABC stacking. In this case an emergent 2D Weitzenböck geometry and an emergent U(1) gauge field appear as well, and the emergent low-energy effective field theory has anisotropic scaling. © 2013 Elsevier Inc.
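The abstract's central mechanism can be seen numerically in the standard graphene tight-binding model: with equal hoppings the off-diagonal matrix element f(k) vanishes at the K point, and modulating one hopping amplitude moves the zero of f, exactly as a U(1) gauge potential would shift k. A minimal sketch, assuming conventional honeycomb lattice vectors and a typical 2.7 eV hopping:

```python
import cmath
import math

A = 1.0  # carbon-carbon distance (arbitrary units)
# Nearest-neighbour vectors of the honeycomb lattice:
DELTAS = [(0.0, A),
          (math.sqrt(3) / 2 * A, -A / 2),
          (-math.sqrt(3) / 2 * A, -A / 2)]
K_POINT = (4 * math.pi / (3 * math.sqrt(3) * A), 0.0)  # a Dirac point

def f(k, hoppings):
    """Off-diagonal tight-binding matrix element; |f| = 0 at a Dirac point."""
    return sum(t * cmath.exp(1j * (k[0] * dx + k[1] * dy))
               for t, (dx, dy) in zip(hoppings, DELTAS))

t = 2.7  # eV, typical graphene hopping
print(abs(f(K_POINT, [t, t, t])))        # ~0: Dirac point sits at K
print(abs(f(K_POINT, [t + 0.1, t, t])))  # ~0.1: the gapless point has
# moved away from K, i.e. the hopping modulation enters the low-energy
# theory like a gauge field shifting the momentum.
```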


Gacanin H., Motive | Salmela M., Aalto University | Adachi F., Tohoku University
IEEE Transactions on Wireless Communications | Year: 2012

Broadcast nature of the wireless channel enables wireless communications to make use of network coding at the physical layer (PNC) to improve the network capacity. Recently, narrowband and later broadband analog network coding (ANC) were introduced as a simpler implementation of PNC. The ANC schemes require two time slots while in PNC three time slots are required for bi-directional communication between two nodes and hence ANC is more spectrum efficient. The coherent detection and self-information removal in ANC require accurate channel state information (CSI). {In this paper, we theoretically analyze the bit error rate (BER) performance with imperfect knowledge of CSI for broadband ANC using orthogonal frequency division multiplexing (OFDM), where the channel estimation error is modeled as a zero-mean complex Gaussian random variable. We investigate the BER performance for three cases: (i) the effect of imperfect self-information removal due to channel estimation (CE) error with fading tracking errors, (ii) the effect of imperfect self-information removal due to CE error without fading tracking errors}, and (iii) the ideal CE case. We discuss how, and by how much, our results obtained by theoretical analysis can be used for design of broadband ANC system with the imperfect knowledge of CSI. Our results show that imperfect channel estimation due to the noise effect has less impact on self-information removal than the imperfect channel estimation due to fading tracking errors. The tracking against fading is an important problem for accurate self-information removal as well as coherent detection and thus, the effect of channel time-selectivity is also theoretically studied. 
The achievable BER performance gains due to polynomial time-domain channel interpolation are investigated using the derived closed-form BER expressions. It is shown that broadband ANC schemes with practical CE in a time- and frequency-selective channel should include more sophisticated channel interpolation techniques, since the impact of Doppler shift has a prevalent effect on the achievable BER performance. © 2012 IEEE.
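The core modelling assumption of the analysis, a channel estimate corrupted by zero-mean complex Gaussian error, is easy to reproduce in a Monte Carlo sketch. The snippet below is a generic one-tap OFDM-subcarrier model with QPSK, not the paper's ANC signal model; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def ber_with_ce_error(snr_db, sigma_e2, n_sym=100_000):
    """Monte Carlo BER of QPSK on a flat Rayleigh subcarrier with a noisy
    channel estimate h_hat = h + e, where e ~ CN(0, sigma_e2)."""
    bits = rng.integers(0, 2, 2*n_sym)
    s = ((1 - 2*bits[0::2]) + 1j*(1 - 2*bits[1::2])) / np.sqrt(2)  # Gray QPSK
    h = (rng.standard_normal(n_sym) + 1j*rng.standard_normal(n_sym)) / np.sqrt(2)
    n0 = 10 ** (-snr_db / 10)
    w = np.sqrt(n0/2) * (rng.standard_normal(n_sym) + 1j*rng.standard_normal(n_sym))
    e = np.sqrt(sigma_e2/2) * (rng.standard_normal(n_sym) + 1j*rng.standard_normal(n_sym))
    h_hat = h + e                                       # imperfect CSI
    z = (h*s + w) * np.conj(h_hat) / np.abs(h_hat)**2   # one-tap equaliser
    det = np.empty(2*n_sym, dtype=int)
    det[0::2] = (z.real < 0).astype(int)
    det[1::2] = (z.imag < 0).astype(int)
    return float(np.mean(det != bits))

ber_ideal = ber_with_ce_error(20.0, 0.0)   # ideal CE case
ber_noisy = ber_with_ce_error(20.0, 0.1)   # CE error degrades the BER
```

Sweeping `sigma_e2` reproduces the qualitative trend of the paper: the BER degrades monotonically with CE error variance, with an error floor at high SNR.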


Valagiannopoulos C.A.,Aalto University | Tsitsas N.L.,Aristotle University of Thessaloniki
Radio Science | Year: 2012

Eliminating the electromagnetic interaction of a device with its background is a topic which attracts considerable attention from both a theoretical and an experimental point of view. In this work, we analyze an infinite two-dimensional planar microstrip antenna, excited by an incident plane wave, and propose its potential operation as a low-profile receiving antenna, achieved by suitably adjusting the parameters of its cloaking superstrate. We apply a semi-analytic integral-equation method to determine the scattering characteristics of the microstrip antenna. The method utilizes the explicit expressions of the Green's function of the strip-free microstrip and yields the strip's surface current as the solution of a suitable linear system. Subsequently, the antenna's far-field response is obtained. Numerical results are presented for the achieved low profile of the receiving antenna, obtained by suitably choosing the cloaking superstrate parameters. It is demonstrated that for specific cloaking parameters the field scattered by the antenna is considerably reduced, while the signal received by the antenna is maintained at sensible levels. We point out that the material values achieving this reduction correspond to a superstrate filled with an ε-near-zero or a low-index metamaterial. Finally, the variations of the device reaction for various superstrates are depicted, concluding that for optimized superstrate parameters the reaction values are significantly reduced, and at distinct scattering angles they even approach zero.


He K.,Tsinghua University | Wang X.,Aalto University
Journal of Bioactive and Compatible Polymers | Year: 2011

A tubular polyurethane (PU) sandwich-like, adipose-derived stem cell (ADSC)/gelatin/alginate/fibrin construct was fabricated using a double-nozzle, low-temperature (-20°C) deposition technique. The ADSCs survived the fabrication and cryopreservation stages by incorporating a cryoprotectant (glycerol or dimethyl sulfoxide (DMSO)) in the cell/hydrogel system. With 5% DMSO or 10% glycerol alone in the hydrogel, the cell viabilities were retained (73% and 62%, respectively). The three-dimensional construct was effectively preserved below -80°C for more than 1 week. After the freeze/thaw processes, cell viability and proliferation ability were regained. This strategy has the potential to be widely used in complex organ manufacturing techniques. © 2011 The Author(s).


Sabharwal A.,Rice University | Schniter P.,Ohio State University | Guo D.,Northwestern University | Bliss D.W.,Arizona State University | And 2 more authors.
IEEE Journal on Selected Areas in Communications | Year: 2014

In-band full-duplex (IBFD) operation has emerged as an attractive solution for increasing the throughput of wireless communication systems and networks. With IBFD, a wireless terminal is allowed to transmit and receive simultaneously in the same frequency band. This tutorial paper reviews the main concepts of IBFD wireless. One of the biggest practical impediments to IBFD operation is the presence of self-interference, i.e., the interference that the modem's transmitter causes to its own receiver. This tutorial surveys a wide range of IBFD self-interference mitigation techniques. Also discussed are numerous other research challenges and opportunities in the design and analysis of IBFD wireless systems. © 1983-2012 IEEE.
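One family of self-interference mitigation techniques surveyed in such tutorials is digital cancellation: estimate the self-interference channel from the known transmit samples, then subtract the reconstructed interference from the received signal. The sketch below is a minimal least-squares version under our own toy assumptions (a 4-tap SI channel, noiseless apart from a weak desired signal), not any specific scheme from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
N, L = 5000, 4                         # samples, assumed number of SI channel taps

x = (rng.standard_normal(N) + 1j*rng.standard_normal(N)) / np.sqrt(2)  # own transmit
h_si = np.array([1.0, 0.5, 0.2, 0.1]) * np.exp(1j*rng.uniform(0, 2*np.pi, L))
d = 0.01 * (rng.standard_normal(N) + 1j*rng.standard_normal(N))        # weak desired signal
y = np.convolve(x, h_si)[:N] + d       # received: strong self-interference plus desired

# Least-squares estimate of the SI channel from the known transmit samples,
# then subtraction of the reconstructed self-interference.
X = np.column_stack([np.concatenate([np.zeros(k, complex), x[:N-k]]) for k in range(L)])
h_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
residual = y - X @ h_hat

si_before = 10*np.log10(np.mean(np.abs(y)**2) / np.mean(np.abs(d)**2))  # SI-to-desired, dB
si_after = 10*np.log10(np.mean(np.abs(residual)**2) / np.mean(np.abs(d)**2))
```

In this idealized setting the residual is essentially the desired signal; in practice, transmitter nonlinearities and receiver noise limit how much suppression digital cancellation alone can deliver, which is why analog and antenna-domain stages are combined with it.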


Zubkov M.A.,ITEP | Volovik G.E.,Aalto University | Volovik G.E.,Russian Academy of Sciences
Nuclear Physics B | Year: 2012

Topological invariants for 4D gapped systems are discussed with application to the quantum vacua of relativistic quantum fields. The expression Ñ3 for 4D systems with a mass gap, defined in Volovik (2010) [13], is considered. It is demonstrated that Ñ3 remains a topological invariant when the interacting theory is effectively massless in the deep ultraviolet. We also consider 5D systems and demonstrate how 4D invariants emerge as a result of dimensional reduction. In particular, the new 4D invariant Ñ5 is suggested. An index theorem is proved that defines the number of massless fermions n_F in the intermediate vacuum, which exists at the transition line between the massive vacua with different values of Ñ5. Namely, 2n_F is equal to the jump δÑ5 across the transition. The jump δÑ3 at the transition determines the number of only those massless fermions which live near the hypersurface ω=0. The considered invariants are calculated for the lattice model with Wilson fermions. © 2012.
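Momentum-space topological invariants of the kind discussed here are easiest to see in a lower-dimensional analogue. The sketch below is not from the paper: it numerically evaluates the winding number of the map h(k) = t1 + t2·e^{ik} (the SSH-type example), which is 1 when |t2| > |t1| and 0 otherwise, mirroring how an invariant jumps by an integer at a topological transition.

```python
import numpy as np

def winding(t1, t2, n=4000):
    """Winding number of h(k) = t1 + t2*exp(ik) around the origin,
    accumulated from phase increments along the Brillouin zone."""
    k = np.linspace(0, 2*np.pi, n, endpoint=False)
    h = t1 + t2*np.exp(1j*k)
    dphi = np.angle(np.roll(h, -1) / h)   # branch-safe phase increments
    return int(np.round(dphi.sum() / (2*np.pi)))

trivial = winding(1.0, 0.5)      # the loop does not encircle the origin
topological = winding(0.5, 1.0)  # the loop encircles the origin once
```

At the transition t1 = t2 the gap closes and h(k) passes through zero, which is the 1D counterpart of the intermediate massless vacuum described by the index theorem above.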


Barth C.,CNRS Interdisciplinary Center on Nanoscience in Marseille | Foster A.S.,Tampere University of Technology | Foster A.S.,Aalto University | Henry C.R.,CNRS Interdisciplinary Center on Nanoscience in Marseille | And 2 more authors.
Advanced Materials | Year: 2011

The current status and future prospects of non-contact atomic force microscopy (nc-AFM) and Kelvin probe force microscopy (KPFM) for studying insulating surfaces and thin insulating films in high resolution are discussed. The rapid development of these techniques and their use in combination with other scanning probe microscopy methods over the last few years has made them increasingly relevant for studying, controlling, and functionalizing the surfaces of many key materials. After introducing the instruments and the basic terminology associated with them, state-of-the-art experimental and theoretical studies of insulating surfaces and thin films are discussed, with specific focus on defects, atomic and molecular adsorbates, doping, and metallic nanoclusters. The latest achievements in atomic site-specific force spectroscopy and the identification of defects by crystal doping, work function, and surface charge imaging are reviewed and recent progress being made in high-resolution imaging in air and liquids is detailed. Finally, some of the key challenges for the future development of the considered fields are identified. Non-contact atomic force microscopy (nc-AFM) and Kelvin probe force microscopy (KPFM) for studying insulating surfaces and thin insulating films in high resolution are reviewed. The methods are introduced, then experimental and theoretical studies of insulating surfaces and thin films, with specific focus on defects, atomic and molecular adsorbates, doping, and metallic nanoclusters are discussed. © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.


Teravainen T.,Aalto University | Lehtonen M.,University of Sussex | Martiskainen M.,University of Sussex
Energy Policy | Year: 2011

Concerns about climate change and energy security have been major arguments used to justify the recent return of nuclear power as a serious electricity generation option in various parts of the world. This article examines the recent public discussion in Finland, France, and the UK - three countries currently in the process of constructing or planning new nuclear power stations. To place the public discussion on nuclear power within the relationship between policy discourses and contexts, the article addresses three interrelated themes: the justifications and discursive strategies employed by nuclear advocates and critics, the similarities and differences in debates between the three countries, and the interaction between the country-specific state orientations and the argumentation concerning nuclear power. Drawing from documentary analysis and semi-structured interviews, the article identifies and analyses key discursive strategies and their use in the context of the respective state orientations: 'technology-and-industry-know-best' in Finland, 'government-knows-best' in France, and 'markets-know-best' in the UK. The nuclear debates illustrate subtle ongoing transformations in these orientations, notably in the ways in which the relations between markets, the state, and civil society are portrayed in the nuclear debates. © 2011 Elsevier Ltd.


Song X.,Aalto University | Oksanen M.,Aalto University | Sillanpaa M.A.,Aalto University | Craighead H.G.,Cornell University | And 2 more authors.
Nano Letters | Year: 2012

We present a simple micromanipulation technique to transfer suspended graphene flakes onto any substrate and to assemble them with small localized gates into mechanical resonators. The mechanical motion of the graphene is detected using an electrical, radio frequency (RF) reflection readout scheme where the time-varying graphene capacitor reflects an RF carrier at f = 5-6 GHz, producing modulation sidebands at f ± f_m. A mechanical resonance frequency up to f_m = 178 MHz is demonstrated. We find both hardening and softening Duffing effects on different samples and obtain a critical amplitude of ∼40 pm for the onset of nonlinearity in graphene mechanical resonators. Measurements of the quality factor of the mechanical resonance as a function of dc bias voltage V_dc indicate that dissipation due to motion-induced displacement currents in the graphene electrode is important at high frequencies and large V_dc. © 2011 American Chemical Society.
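The readout principle, a vibrating capacitor amplitude-modulating a reflected carrier so that sidebands appear at f ± f_m, can be checked with a short FFT sketch. The frequencies are scaled down to toy values (the experiment used a 5-6 GHz carrier and f_m up to 178 MHz); everything here is illustrative.

```python
import numpy as np

# Toy, scaled-down frequencies (assumed), chosen so every tone falls on an exact FFT bin.
f_c, f_m, m = 500.0, 40.0, 0.1          # carrier, mechanical mode, modulation depth
fs = 4096.0
t = np.arange(0, 1.0, 1/fs)             # 1 s at fs -> exactly 1 Hz bins, no leakage
# The vibrating capacitor amplitude-modulates the reflected carrier:
r = (1 + m*np.cos(2*np.pi*f_m*t)) * np.cos(2*np.pi*f_c*t)
spec = np.abs(np.fft.rfft(r)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1/fs)
peaks = sorted(freqs[np.argsort(spec)[-3:]])  # the three strongest spectral lines
```

The three lines land at f_c and f_c ± f_m, with the sideband-to-carrier ratio set by the modulation depth, which is how the mechanical amplitude is read out from the RF spectrum.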


Volovik G.E.,Aalto University | Volovik G.E.,L D Landau Institute For Theoretical Physics | Zubkov M.A.,ITEP
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2013

We review the known results on the bosonic spectrum in various Nambu-Jona-Lasinio models, both in condensed matter physics and in relativistic quantum field theory, including ³He-B, ³He-A, thin films of superfluid ³He, and QCD (the hadronic phase and the color-flavor-locking phase). Next, we calculate the bosonic spectrum in the relativistic model of top quark condensation suggested in. In all considered cases a sum rule appears that relates the masses (energy gaps) M_boson of the bosonic excitations in each channel to the mass (energy gap) of the condensed fermion M_f as ∑M_boson² = 4M_f². Previously, this relation was established by Nambu for ³He-B and for the s-wave superconductor. We generalize this relation to a wider class of models and call it the Nambu sum rule. We discuss the possibility of applying this sum rule to various models of top quark condensation. In some cases, this rule allows us to calculate the masses of extra Higgs bosons that are the Nambu partners of the 125 GeV Higgs. © 2013 American Physical Society.
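As a numerical illustration of the sum rule ∑M_boson² = 4M_f², suppose the condensed fermion is the top quark (M_f ≈ 173 GeV) and the 125 GeV Higgs has a single Nambu partner in its channel. These assumptions are ours, chosen only to show the arithmetic, not a prediction taken from the paper.

```python
import math

# Assumed inputs (illustrative): condensed-fermion gap = top-quark mass,
# and exactly two bosons (the 125 GeV Higgs plus one partner) in the channel.
M_f = 173.0   # GeV
M_h = 125.0   # GeV

# Nambu sum rule in one channel: M_h^2 + M_partner^2 = 4 * M_f^2
M_partner = math.sqrt(4*M_f**2 - M_h**2)   # mass of the hypothetical Nambu partner
```

Under these assumptions the partner mass comes out near 323 GeV; different channel contents or a different M_f change the answer.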


Lin F.-H.,National Taiwan University | Lin F.-H.,Aalto University
Magnetic Resonance in Medicine | Year: 2013

Magnetic resonance imaging (MRI) typically achieves spatial encoding by measuring the projection of a q-dimensional object over q-dimensional spatial bases created by linear spatial encoding magnetic fields (SEMs). Recently, imaging strategies using nonlinear SEMs have demonstrated potential advantages for reconstructing images with higher spatiotemporal resolution and reducing peripheral nerve stimulation. In practice, nonlinear SEMs and linear SEMs can be used jointly to further improve the image reconstruction performance. Here, we propose the multidimensionally encoded (MDE) MRI to map a q-dimensional object onto a p-dimensional encoding space where p > q. MDE MRI is a theoretical framework linking imaging strategies using linear and nonlinear SEMs. Using a system of eight surface SEM coils with an eight-channel radiofrequency coil array, we demonstrate the five-dimensional MDE MRI for a two-dimensional object as a further generalization of PatLoc imaging and O-space imaging. We also present a method of optimizing spatial bases in MDE MRI. Results show that MDE MRI with a higher dimensional encoding space can reconstruct images more efficiently and with a smaller reconstruction error when the k-space sampling distribution and the number of samples are controlled. © 2012 Wiley Periodicals, Inc.
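The central idea, encoding a q-dimensional object with p > q spatial encoding fields and reconstructing by least squares, can be sketched numerically. The toy below is not the authors' reconstruction pipeline: it uses two linear SEMs plus one quadratic SEM with random field amplitudes per sample, and the object, field shapes, and 3x oversampling are all our assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 12
x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
obj = np.exp(-5*((x - 0.2)**2 + y**2))        # toy 2-D object (assumed)

# Encoding fields: two linear SEMs plus one quadratic (nonlinear) SEM.
fields = np.stack([x.ravel(), y.ravel(), (x**2 - y**2).ravel()])

# Each sample applies random field amplitudes c, giving a phase pattern
# exp(-i c . fields); 3x oversampling keeps the system well conditioned.
n_samp = 3 * n * n
coeffs = rng.uniform(-6*np.pi, 6*np.pi, (n_samp, 3))
A = np.exp(-1j * coeffs @ fields)             # encoding matrix (p = 3 > q = 2)
s = A @ obj.ravel()                           # simulated, noiseless signal samples
rec, *_ = np.linalg.lstsq(A, s, rcond=None)   # least-squares reconstruction
err = np.linalg.norm(rec - obj.ravel()) / np.linalg.norm(obj.ravel())
```

In this noiseless toy the reconstruction is essentially exact; the paper's point is that the conditioning and efficiency of such systems depend on how the sampling distribution over the higher-dimensional encoding space is chosen.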


Costa M.,Aalto University | Richter A.,BATOP GmbH | Koivunen V.,Aalto University
IEEE Transactions on Signal Processing | Year: 2012

In this paper, algorithms for joint high-resolution direction-of-arrival (DoA) and polarization estimation using real-world arrays with imperfections are proposed. Both azimuth and elevation angles are considered. Partially correlated and coherent signals may be handled as well. Unlike most of the work available in the open literature, we consider the case when polarization sensitive antenna arrays, which may be disposed on a conformal surface, may have unknown (but fixed) geometries, be composed of elements with individual beampatterns and be subject to cross-polarization as well as mounting platform effects. Herein, recent results on steering vector modeling from noise-corrupted array calibration measurements are employed. This allows for incorporating all nonidealities of an antenna array into the estimation algorithms in a general and convenient manner. The proposed estimators can be implemented using the fast Fourier transform or polynomial rooting techniques regardless of the array configuration. The stochastic Cramér-Rao lower bound for the estimation problem at hand is established using the results from array steering vector modeling as well. The proposed expression takes into account array nonidealities, making such a bound tight even for real-world arrays. Extensive simulation results are provided using several real-world antenna arrays. The proposed algorithms outperform conventional algorithms available in the literature and have a performance close to the stochastic Cramér-Rao lower bound. © 1991-2012 IEEE.
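For readers unfamiliar with grid-based DoA estimation, the sketch below shows the simplest relative of such algorithms: a conventional (Bartlett) beamformer on an ideal half-wavelength uniform linear array. It deliberately ignores the paper's central contribution, calibrated steering vectors for real-world imperfect arrays, and serves only as a baseline with assumed parameters.

```python
import numpy as np

rng = np.random.default_rng(3)
M, d = 8, 0.5                    # 8-element ULA, half-wavelength spacing (assumed)
theta_true = 20.0                # source direction in degrees
n_snap = 200

def steer(th):
    """Ideal ULA steering vector (the paper instead models measured, nonideal ones)."""
    return np.exp(-2j*np.pi*d*np.arange(M)*np.sin(np.radians(th)))

S = rng.standard_normal(n_snap) + 1j*rng.standard_normal(n_snap)
noise = 0.1*(rng.standard_normal((M, n_snap)) + 1j*rng.standard_normal((M, n_snap)))
X = np.outer(steer(theta_true), S) + noise   # array snapshots

R = X @ X.conj().T / n_snap      # sample covariance matrix
grid = np.linspace(-90, 90, 721)
spectrum = [np.real(steer(th).conj() @ R @ steer(th)) for th in grid]  # Bartlett spectrum
theta_hat = grid[int(np.argmax(spectrum))]
```

Replacing the analytic `steer` with steering vectors interpolated from calibration measurements is, in spirit, what lets the paper's estimators absorb beampattern, cross-polarization, and mounting-platform effects.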


Researchers used a microwave resonator based on extremely sensitive measurement devices known as superconductive quantum interference devices (SQUIDs). In their studies, the resonator was cooled down and kept near absolute zero, where any thermal motion freezes. This state corresponds to perfect darkness where no photon - a real particle of electromagnetic radiation such as visible light or microwaves - is present. However, in this state (called quantum vacuum) there exist fluctuations that bring photons in and out of existence for a very short time. The researchers have now managed to convert these fluctuations into real photons of microwave radiation with different frequencies, showing that, in a sense, darkness is more than just absence of light. They also found out that these photons are correlated with each other, as if a magic connection exists between them. 'With our experimental setup we managed to create complex correlations of microwave signals in a controlled way,' says Dr Pasi Lähteenmäki, who performed the research during his doctoral studies at the Low Temperature Laboratory of Aalto University. 'This all hints at the possibility of using the different frequencies for quantum computing. The photons at different frequencies will play a similar role to the registers in classical computers, and logical gate operations can be performed between them,' says Doc. Sorin Paraoanu, Senior University Lecturer and one of the co-authors of the work. The results provide a new approach for quantum computing. 'Today the basic architecture of future quantum computers is being developed very intensively around the world. By utilizing the multi-frequency microwave signals, an alternative approach can be pursued which realizes the logical gates by sequences of quantum measurements. 
Moreover, if we use the photons created in our resonator, the physical quantum bits or qubits become needless,' explains Professor Pertti Hakonen from the Low Temperature Laboratory of Aalto University. These experiments utilized the OtaNANO infrastructure and the niobium superconducting technology of the Technical Research Centre of Finland (VTT). This work was done under the framework of the Centre of Quantum Engineering at Aalto University. More information: Pasi Lähteenmäki et al, Coherence and multimode correlations from vacuum fluctuations in a microwave superconducting cavity, Nature Communications (2016). DOI: 10.1038/ncomms12548


News Article | November 30, 2016
Site: en.prnasia.com

ESPOO, Finland, Nov. 30, 2016 /PRNewswire/ -- Using wood elements allows faster building turnaround. This leads to more profitable construction projects and shorter investment payback times. The pace of construction is kept at the desired level, because prefabrication reduces some of the most common risks at construction sites. To view the Multimedia News Release, please click: http://www.multivu.com/players/uk/7992351-metsa-wood-prefabricated-wood-elements/ Utilizing prefabricated wood elements is a surprisingly fast option for on-site construction. For example, up to 1500 m² of Kerto® LVL (Laminated Veneer Lumber) roof panels can be assembled within a single working day. "An example of rapid building is in the construction of the headquarters of the Diesel-Benelux Company in Amsterdam. An extremely tight building schedule of only nine months resulted in choosing Kerto LVL roof panels - under which the rest of the construction work could be finished in time," says Lambert van den Bosch from Heko Spanten, the subcontractor in charge of the wood construction. One of the most important construction phases is to get the on-site protection done quickly to eliminate weather-related risks. Today, the alternatives for on-site weather protection include applying fast construction methods, such as prefabrication, or building under a tent. Prefabricated wood elements shield the building site beneath, providing protection that's superior to temporary options - especially when it comes to snow loads and heavy winds. "For example, erecting the Kerto LVL roof elements used at the logistics centre of DB Schenker, Finland provided a roof over the entire building in just 15 days. This is the same amount of time that erecting a temporary tent would have required. 
Using prefabricated roof panels ensured that the rest of the work could be completed in a protected environment and without additional costs for temporary protection," says Matti Kuittinen, architect and researcher from Aalto University. When the construction work takes place in controlled indoor conditions at the prefabrication plant, there is less risk of accidents and consequent delays at the building site. This is because some of the dangerous on-site phases are no longer needed. "Assembling ready-made wood elements can replace the potentially more dangerous process of having to build a roof from beams, panels and bitumen at the heights of an unfinished building. On-site accidents are of course not frequent, but every single one of them should be avoided," van den Bosch concludes. Prefabrication also "pays off" at the construction site: according to construction professionals interviewed by McGraw-Hill Construction, nearly 70% of projects that used prefabricated elements had shorter schedules and 65% had decreased budgets[1]. In addition to faster building projects leading to faster revenue, there are also other benefits that become apparent at the construction site. "Utilizing prefabricated wood elements can help in significantly reducing other inconveniences such as unloading building materials in the neighbourhood, as well as the amount of on-site waste and the need to transport it," Kuittinen adds. [1] McGraw Hill Construction, "Prefabrication and Modularization - Increasing Productivity in the Construction Industry," McGraw Hill Construction, Bedford, 2011. Learn more about how using prefabricated elements can improve the speed at the construction site: http://www.metsawood.com/publications Metsä Wood provides competitive and environmentally friendly wood products for construction, industrial customers and distributor partners. We manufacture products from Nordic wood, a sustainable raw material of premium quality. 
Our sales in 2015 were EUR 0.9 billion, and we employ about 2,000 people. Metsä Wood is part of Metsä Group. For more information, please contact:  Henni Rousu, Marketing Manager, Metsä Wood mobile: +358(0)405548388  henni.rousu@metsagroup.com For press information, please contact: Matt Trace, Director, Defero Communications matt@deferouk.co.uk tel. +44-07828663988


News Article | March 30, 2016
Site: phys.org

VTT Technical Research Centre of Finland and Aalto University investigated how willow biomass can be utilised more efficiently. When processed correctly, willow is eminently suitable as a source of sugar in the production of ethanol. The lignin fraction formed as a by-product of the ethanol process and the fibres and compounds of willow bark are suitable for the manufacture of environmentally friendly chemicals and bio-based materials. According to the results of the joint project of VTT and Aalto University, after appropriate preparation and enzymatic hydrolysis, willow is highly suitable as a source of sugar in ethanol production. The ethanol yield improved substantially when the willow bark was first separated from the biomass before steam explosion and yeast fermentation. A lignin-containing residue is produced in the ethanol manufacturing process from which various bio-based chemicals and materials can be manufactured. It will be possible to utilise the results of the new research on using willow in ethanol production in industry using existing technology, especially if industrial or farm scale equipment suitable for debarking is available. The project aimed to make complete use of the biomass without producing material side streams. For this reason, the composition and structure of the bark was analysed carefully. The researchers identified a number of compounds in the bark. These may include physiologically active components or natural antimicrobial compounds, which in the future could be utilised in bioprocesses to prevent the growth of harmful bacteria. The fibres of the bark component differed from those of the wood component in terms of composition and structure. In addition to sugars suitable for ethanol fermentation, various fractions can be separated from willow: lignin, bark component fibres, and bioactive and antimicrobial compounds. The properties and their most suitable application areas require further studies. 
It is important to also investigate the environmental impacts of cultivating and processing willow, as well as the economic profitability of processing. There is an increasing global demand for environmentally friendly chemicals and plant-based raw materials for renewable fuels. With these, the aim is to reduce carbon dioxide emissions, replace oil-based components and produce renewable energy. These challenges can be met by using willow. Fast-growing willow has not been widely utilised as a raw material in industry. In addition to its low price, it has numerous advantages compared to other forest- or agriculture-based raw materials. Willow can be grown, for example, on flood-susceptible and nutrient-poor land, i.e. in areas that are not suitable for forest or field cultivation. As it grows, willow uses nutrients efficiently and therefore reduces the nutrient load on water courses caused by agriculture or peat production areas.



News Article | December 14, 2016
Site: phys.org

A half-quantum vortex combines circular spin flow and circular mass flow, leading to the formation of vortex pairs that can be observed experimentally. Credit: Ella Maru Studio. Researchers have discovered half-quantum vortices in superfluid helium. This vortex is a topological defect, exhibited in superfluids and superconductors, which carries a fixed amount of circulating current. These objects were originally predicted to exist in superfluid helium in 1976. The discovery will provide access to the cores of half-quantum vortices, which host isolated Majorana modes, exotic solitary particles. Understanding these modes is essential for the progress of quantum information processing and the building of a quantum computer. Researchers at Aalto University, Finland, and the P.L. Kapitza Institute in Moscow have discovered half-quantum vortices in superfluid helium. "This discovery of half-quantum vortices culminates a long search for these objects, originally predicted to exist in superfluid helium in 1976," says Samuli Autti, Doctoral Candidate at Aalto University in Finland. "In the future, our discovery will provide access to the cores of half-quantum vortices, hosting isolated Majorana modes, exotic solitary particles. Understanding these modes is essential for the progress of quantum information processing, building a quantum computer," Autti continues. Macroscopic coherence in quantum systems such as superfluids and superconductors provides many possibilities, and some central limitations. For instance, the strength of circulating currents in these systems is limited to certain discrete values by the laws of quantum mechanics. A half-quantum vortex overcomes that limitation using the non-trivial topology of the underlying material, a topic directly related to the 2016 Nobel Prize in physics. 
Among the emerging properties is one analogous to the so-called Alice string in high-energy physics, where a particle on a route around the string flips the sign of its charge. In general the quantum character of these systems is already utilized in ultra-sensitive SQUID amplifiers and other important quantum devices. The article Observation of Half-Quantum Vortices in Topological Superfluid 3He has been published today in the online version of Physical Review Letters. Experiments were done in the Low Temperature Laboratory at Aalto University.



When microbatteries are manufactured, the key challenge is to make them able to store large amounts of energy in a small space. One way to improve the energy density is to manufacture the batteries based on three-dimensional microstructured architectures. This may increase the effective surface inside a battery, even dozens of times. However, the production of materials fit for these has proven to be very difficult. "ALD is a great method for making battery materials fit for 3D microstructured architectures. Our method shows it is possible to even produce organic electrode materials by using ALD, which increases the opportunities to manufacture efficient microbatteries," says doctoral candidate Mikko Nisula from Aalto University. The researchers' deposition process for Li-terephthalate is shown to comply well with the basic principles of ALD-type growth, including the sequential self-saturated surface reactions, which is a necessity when aiming at micro-lithium-ion devices with three-dimensional architectures. The as-deposited films are found to be crystalline across the deposition temperature range of 200-280 °C, which is a trait that is highly desired for an electrode material but rather unusual for hybrid organic-inorganic thin films. An excellent rate capability is ascertained for the Li-terephthalate films, with no conductive additives required. The electrode performance can be further enhanced by depositing a thin protective LiPON solid-state electrolyte layer on top of Li-terephthalate. This yields highly stable structures with a capacity retention of over 97% after 200 charge/discharge cycles at 3.2 C. The study about the method has now been published in the latest edition of Nano Letters. More information: Mikko Nisula et al. Atomic/Molecular Layer Deposition of Lithium Terephthalate Thin Films as High Rate Capability Li-Ion Battery Anodes, Nano Letters (2016). DOI: 10.1021/acs.nanolett.5b04604
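Taking the headline figures at face value, 97% capacity retention after 200 cycles at a 3.2 C rate, a per-cycle fade and a charge time are easy to back out. The arithmetic below is ours, assuming geometric (per-cycle multiplicative) fade, which is only one plausible fade model.

```python
# Assumed fade model: capacity decays by a constant factor each cycle.
retention_total, cycles = 0.97, 200
per_cycle = retention_total ** (1 / cycles)   # average multiplicative retention per cycle
fade_pct = (1 - per_cycle) * 100              # percent capacity lost per cycle (~0.015%)
minutes_per_cycle = 60 / 3.2                  # a 3.2 C rate = full (dis)charge in 18.75 min
```

So the reported stability corresponds to roughly 0.015% capacity loss per cycle at a sub-19-minute charge/discharge rate.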


Abstract: With people wanting to use smaller electronic devices, smaller energy storage systems are needed. Researchers at Aalto University in Finland have demonstrated the fabrication of electrochemically active organic lithium electrode thin films, which help make microbatteries more efficient than before. The researchers used a combined atomic/molecular layer deposition (ALD/MLD) technique to prepare lithium terephthalate, a recently found anode material for lithium-ion batteries. When microbatteries are manufactured, the key challenge is to make them able to store large amounts of energy in a small space. One way to improve the energy density is to manufacture the batteries based on three-dimensional microstructured architectures, which can increase the effective surface area inside a battery even dozens of times. However, producing materials fit for such architectures has proven very difficult. "ALD is a great method for making battery materials fit for 3D microstructured architectures. Our method shows it is possible to produce even organic electrode materials by using ALD, which increases the opportunities to manufacture efficient microbatteries," says doctoral candidate Mikko Nisula from Aalto University. The researchers' deposition process for Li-terephthalate is shown to comply well with the basic principles of ALD-type growth, including sequential self-saturated surface reactions, a necessity when aiming at micro-lithium-ion devices with three-dimensional architectures. The as-deposited films are found to be crystalline across the deposition temperature range of 200–280 °C, a trait that is highly desired for an electrode material but rather unusual for hybrid organic–inorganic thin films. An excellent rate capability is ascertained for the Li-terephthalate films, with no conductive additives required.
The electrode performance can be further enhanced by depositing a thin protective LiPON solid-state electrolyte layer on top of the Li-terephthalate. This yields highly stable structures with a capacity retention of over 97% after 200 charge/discharge cycles at 3.2 C. The study has been published in Nano Letters.


News Article | August 31, 2016
Site: www.nanotech-now.com

Abstract: Researchers at Aalto University have demonstrated the suitability of microwave signals in the coding of information for quantum computing. Previous development in the field has focused on optical systems. The researchers used a microwave resonator based on extremely sensitive measurement devices known as superconducting quantum interference devices (SQUIDs). In their studies, the resonator was cooled down and kept near absolute zero, where all thermal motion freezes. This state corresponds to perfect darkness, where no photon - a real particle of electromagnetic radiation such as visible light or microwaves - is present. However, in this state (called the quantum vacuum) there exist fluctuations that bring photons in and out of existence for a very short time. The researchers have now managed to convert these fluctuations into real photons of microwave radiation with different frequencies, showing that, in a sense, darkness is more than just the absence of light. They also found that these photons are correlated with each other, as if a magic connection existed between them. 'With our experimental setup we managed to create complex correlations of microwave signals in a controlled way,' says Dr Pasi Lähteenmäki, who performed the research during his doctoral studies at the Low Temperature Laboratory of Aalto University. 'This all hints at the possibility of using the different frequencies for quantum computing. The photons at different frequencies will play a similar role to the registers in classical computers, and logical gate operations can be performed between them,' says Docent Sorin Paraoanu, Senior University Lecturer and one of the co-authors of the work. The results provide a new approach to quantum computing. 'Today the basic architecture of future quantum computers is being developed very intensively around the world.
By utilizing the multi-frequency microwave signals, an alternative approach can be pursued which realizes the logical gates by sequences of quantum measurements. Moreover, if we use the photons created in our resonator, the physical quantum bits or qubits become needless,' explains Professor Pertti Hakonen from the Low Temperature Laboratory of Aalto University. These experiments utilized the OtaNANO infrastructure and the niobium superconducting technology of the Technical Research Centre of Finland (VTT). This work was done under the framework of the Centre of Quantum Engineering at Aalto University.
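The pairwise correlations between photons at different frequencies described above are characteristic of two-mode squeezed vacuum states. As an illustration only (this is the textbook Gaussian model, not the experiment's analysis code), the quadratures of such a state are correlated Gaussian variables whose correlation grows with an assumed squeezing parameter r:

```python
import numpy as np

# Photons generated in pairs at two frequencies form a two-mode squeezed
# vacuum state; its quadratures are correlated Gaussians.  We sample them
# and check the correlation against the textbook value tanh(2r).

rng = np.random.default_rng(0)
r = 1.0  # squeezing parameter (assumed value, for illustration)

# Quadrature covariance matrix of a two-mode squeezed vacuum
# (vacuum variance taken as 1/2).
var = 0.5 * np.cosh(2 * r)
cov = 0.5 * np.sinh(2 * r)
sigma = np.array([[var, cov], [cov, var]])

x = rng.multivariate_normal([0.0, 0.0], sigma, size=100_000)
corr = np.corrcoef(x[:, 0], x[:, 1])[0, 1]

print(f"measured correlation {corr:.3f}, theory tanh(2r) = {np.tanh(2*r):.3f}")
```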


Helium-3 experimental cell and extract of data showing creation of the light Higgs mode (analog of the 125 GeV Higgs boson). Credit: Dr. Vladislav Zavyalov, Low Temperature Laboratory, Aalto University. In 2012, the observation of a Higgs boson was reported at the Large Hadron Collider at CERN. The observation has puzzled the physics community, as the mass of the observed particle, 125 GeV, is far lighter than the expected energy scale of about 1 TeV. Researchers at Aalto University in Finland now propose that there is more than one Higgs boson, and that the others are much heavier than the one observed in 2012. The results were recently published in Nature Communications. "Our recent ultra-low-temperature experiments on superfluid helium (3He) suggest an explanation for why the Higgs boson observed at CERN appears to be too light. By using the superfluid helium analogy, we have predicted that there should be other Higgs bosons, which are much heavier (about 1 TeV) than the one previously observed," says Professor (emeritus) Grigory E. Volovik. Prof. Volovik holds a position in the Low Temperature Laboratory at Aalto University and in the Landau Institute, Moscow. He received the international Simon Prize in 2004 for distinguished work in theoretical low-temperature physics, and the Lars Onsager Prize in 2014 for outstanding research in theoretical statistical physics. At the same time, new CERN experiments have shown evidence of a second Higgs in just the suggested region (at 0.75 TeV). This evidence was immediately commented on and discussed in a large number of papers submitted to arXiv, an e-print service widely used by the physics community to distribute manuscripts of unpublished work. More information: V. V. Zavjalov et al. Light Higgs channel of the resonant decay of magnon condensate in superfluid 3He-B, Nature Communications (2016). DOI: 10.1038/NCOMMS10294


Saira O.-P.,Aalto University | Yoon Y.,Aalto University | Tanttu T.,Aalto University | Mottonen M.,Aalto University | And 2 more authors.
Physical Review Letters | Year: 2012

Recent progress on micro- and nanometer-scale manipulation has opened the possibility to probe systems small enough that thermal fluctuations of energy and coordinate variables can be significant compared with their mean behavior. We present an experimental study of nonequilibrium thermodynamics in a classical two-state system, namely, a metallic single-electron box. We have measured with high statistical accuracy the distribution of dissipated energy as single electrons are transferred between the box electrodes. The obtained distributions obey Jarzynski and Crooks fluctuation relations. A comprehensive microscopic theory exists for the system, enabling the experimental distributions to be reproduced without fitting parameters. © 2012 American Physical Society.
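The Jarzynski relation that the measured distributions obey, ⟨exp(−W/kT)⟩ = exp(−ΔF/kT), can be checked numerically for a generic driven two-state system. The sketch below is a toy protocol (full re-thermalization after each drive step), not the paper's microscopic theory of the single-electron box; all parameter values are illustrative assumptions:

```python
import numpy as np

# Toy two-state system: state 0 has energy 0, state 1 has energy E(t),
# ramped from 0 to E_final.  Work is accumulated when the occupied state's
# energy is raised; the system re-thermalizes after every step.

rng = np.random.default_rng(1)
kT = 1.0
E_final, n_steps, n_traj = 2.0, 100, 5_000
dE = E_final / n_steps

works = np.zeros(n_traj)
for i in range(n_traj):
    state = rng.integers(2)          # equilibrium at E = 0 is 50/50
    E, W = 0.0, 0.0
    for _ in range(n_steps):
        E += dE                      # drive: raise the energy of state 1
        if state == 1:
            W += dE                  # work is done on the occupied state
        # full thermalization at the new energy (toy protocol)
        p1 = 1.0 / (1.0 + np.exp(E / kT))
        state = 1 if rng.random() < p1 else 0
    works[i] = W

# Exact free-energy difference from the partition functions Z = 1 + exp(-E/kT)
dF = -kT * np.log((1 + np.exp(-E_final / kT)) / 2)
jarzynski = -kT * np.log(np.mean(np.exp(-works / kT)))
print(f"Jarzynski estimate {jarzynski:.3f} vs exact dF {dF:.3f}")
```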


Averin D.V.,State University of New York at Stony Brook | Pekola J.P.,Aalto University
Physical Review Letters | Year: 2010

We have analyzed the spectral density of fluctuations of the energy flux through a mesoscopic constriction between two equilibrium reservoirs. It is shown that at finite frequencies, the fluctuating energy flux is not related to the thermal conductance of the constriction by the standard fluctuation- dissipation theorem, but contains additional noise. The main physical consequence of this extra noise is that the fluctuations do not vanish at zero temperature together with the vanishing thermal conductance. © 2010 The American Physical Society.
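For reference, the standard low-frequency fluctuation-dissipation form that the finite-frequency result deviates from can be written as follows (notation assumed, with T the temperature and G_th the thermal conductance of the constriction):

```latex
% Heat-current noise through the constriction in the zero-frequency limit:
S_{\dot{Q}}(\omega \to 0) \;=\; 4\, k_{B}\, T^{2}\, G_{\mathrm{th}}
% The abstract's point: at finite \omega an additional noise term appears
% that does not vanish as T \to 0, even though G_{\mathrm{th}} does.
```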


Palyulin V.V.,University of Potsdam | Ala-Nissila T.,Aalto University | Metzler R.,University of Potsdam | Metzler R.,Tampere University of Technology
Soft Matter | Year: 2014

Probably no other field of statistical physics at the borderline of soft matter and biological physics has caused such a flurry of papers as polymer translocation since the 1994 landmark paper by Bezrukov, Vodyanoy, and Parsegian and the study of Kasianowicz in 1996. Experiments, simulations, and theoretical approaches are still contributing novel insights to date, while no universal consensus on the statistical understanding of polymer translocation has been reached. We here collect the published results, in particular, the famous-infamous debate on the scaling exponents governing the translocation process. We put these results into perspective and discuss where the field is going. In particular, we argue that the phenomenon of polymer translocation is non-universal and highly sensitive to the exact specifications of the models and experiments used towards its analysis. This journal is © the Partner Organisations 2014.
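Scaling exponents of the kind debated in this literature, e.g. a mean translocation time growing as τ ∼ N^α with chain length N, are typically extracted from a log-log fit. A minimal sketch with synthetic data (α = 2.2 is an arbitrary illustrative value, not a result of the review):

```python
import numpy as np

# Fit log(tau) vs log(N) to recover a power-law scaling exponent.
# The "data" are synthetic: a power law with multiplicative log-normal noise.

rng = np.random.default_rng(2)
alpha_true = 2.2
N = np.array([32, 64, 128, 256, 512, 1024], dtype=float)
tau = N**alpha_true * np.exp(rng.normal(0.0, 0.05, N.size))  # noisy power law

alpha_fit, _ = np.polyfit(np.log(N), np.log(tau), 1)
print(f"fitted exponent alpha = {alpha_fit:.2f}")
```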


Ollila E.,Aalto University | Tyler D.E.,Rutgers University | Koivunen V.,Aalto University | Poor H.V.,Rutgers University
IEEE Transactions on Signal Processing | Year: 2012

Complex elliptically symmetric (CES) distributions have been widely used in various engineering applications for which non-Gaussian models are needed. In this overview, circular CES distributions are surveyed, some new results are derived, and their applications, e.g., in radar and array signal processing, are discussed and illustrated with theoretical examples, simulations and analysis of real radar data. The maximum likelihood (ML) estimator of the scatter matrix parameter is derived, and general conditions for its existence and uniqueness, and for convergence of the iterative fixed-point algorithm, are established. Specific ML-estimators for several CES distributions that are widely used in the signal processing literature are discussed in depth, including the complex t-distribution, the K-distribution, the generalized Gaussian distribution and the closely related angular central Gaussian distribution. A generalization of ML-estimators, the M-estimators of the scatter matrix, are also discussed and asymptotic analysis is provided. Applications of CES distributions and the adaptive signal processors based on ML- and M-estimators of the scatter matrix are illustrated in radar detection problems and in array signal processing applications for Direction-of-Arrival (DOA) estimation and beamforming. Furthermore, experimental validation of the usefulness of CES distributions for modelling real radar data is given. © 2012 IEEE.
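As a concrete illustration of the fixed-point iteration mentioned above, here is a minimal real-valued Tyler M-estimator of scatter; the paper works in the complex elliptical setting, so this sketch is a simplified stand-in, not the authors' code:

```python
import numpy as np

def tyler_scatter(X, n_iter=100, tol=1e-8):
    """Tyler's M-estimator of scatter for zero-mean rows of X (shape n x p).

    Fixed point: S = (p/n) * sum_i x_i x_i^T / (x_i^T S^-1 x_i),
    normalized so that trace(S) = p (the shape is defined up to scale).
    """
    n, p = X.shape
    S = np.eye(p)
    for _ in range(n_iter):
        Sinv = np.linalg.inv(S)
        q = np.einsum('ij,jk,ik->i', X, Sinv, X)   # x_i^T S^-1 x_i
        S_new = (p / n) * (X.T * (1.0 / q)) @ X    # weighted outer products
        S_new /= np.trace(S_new) / p               # fix the scale
        if np.max(np.abs(S_new - S)) < tol:
            return S_new
        S = S_new
    return S

# Sanity check on synthetic elliptical (here Gaussian) data.
rng = np.random.default_rng(3)
true_S = np.array([[2.0, 0.5], [0.5, 1.0]])
X = rng.multivariate_normal([0.0, 0.0], true_S, size=5000)
S_hat = tyler_scatter(X)
print(S_hat)  # should match true_S up to the trace normalization
```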


Maisi V.F.,Aalto University | Maisi V.F.,Center for Metrology and Accreditation | Kambly D.,University of Geneva | Flindt C.,University of Geneva | Pekola J.P.,Aalto University
Physical Review Letters | Year: 2014

We employ a single-charge counting technique to measure the full counting statistics of Andreev events in which Cooper pairs are either produced from electrons that are reflected as holes at a superconductor-normal-metal interface or annihilated in the reverse process. The full counting statistics consists of quiet periods with no Andreev processes, interrupted by the tunneling of a single electron that triggers an avalanche of Andreev events giving rise to strongly super-Poissonian distributions. © 2014 American Physical Society.
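The super-Poissonian statistics can be quantified by the Fano factor, the variance-to-mean ratio of the event counts, which equals 1 for a Poisson process and exceeds 1 for avalanche-like arrivals. A toy illustration with synthetic counts (the Poisson-triggered geometric-burst model is an assumption for illustration, not the measured distribution):

```python
import numpy as np

# Compare count statistics of independent events (Poisson) with a toy
# avalanche process: a Poisson number of triggers, each releasing a
# geometric burst of events.

rng = np.random.default_rng(4)

poisson_counts = rng.poisson(5.0, size=100_000)

triggers = rng.poisson(1.0, size=100_000)
avalanche_counts = np.array(
    [rng.geometric(0.25, size=t).sum() for t in triggers]
)

def fano(c):
    return c.var() / c.mean()   # variance-to-mean ratio

print(f"Poisson Fano = {fano(poisson_counts):.2f}, "
      f"avalanche Fano = {fano(avalanche_counts):.2f}")
```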


Ollila E.,Aalto University | Tyler D.E.,Rutgers University
IEEE Transactions on Signal Processing | Year: 2014

In this paper, a general class of regularized M-estimators of scatter matrix are proposed that are suitable also for low or insufficient sample support (small n and large p) problems. The considered class constitutes a natural generalization of M-estimators of scatter matrix (Maronna, 1976) and are defined as a solution to a penalized M-estimation cost function. Using the concept of geodesic convexity, we prove the existence and uniqueness of the regularized M-estimators of scatter and the existence and uniqueness of the solution to the corresponding M-estimating equations under general conditions. Unlike the non-regularized M-estimators of scatter, the regularized estimators are shown to exist for any data configuration. An iterative algorithm with proven convergence to the solution of the regularized M-estimating equation is also given. Since the conditions for uniqueness do not include the regularized versions of Tyler's M-estimator, necessary and sufficient conditions for their uniqueness are established separately. For the regularized Tyler's M-estimators, we also derive a simple, closed-form, and data-dependent solution for choosing the regularization parameter based on shape matrix matching in the mean-squared sense. Finally, some simulation studies illustrate the improved accuracy of the proposed regularized M-estimators of scatter compared to their non-regularized counterparts in low sample support problems. An example of radar detection using the normalized matched filter (NMF) illustrates that an adaptive NMF detector based on regularized M-estimators is able to maintain the preset CFAR level accurately. © 2014 IEEE.
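A minimal sketch of a shrinkage-regularized Tyler-type iteration of the kind discussed above, showing that the estimate exists even when the sample size n is smaller than the dimension p. The specific update and the choice β = 0.5 are illustrative assumptions, not the paper's exact estimator or its data-dependent tuning rule:

```python
import numpy as np

def regularized_tyler(X, beta=0.5, n_iter=200, tol=1e-9):
    """Shrinkage-regularized Tyler-type scatter estimate for rows of X (n x p).

    Each update shrinks the weighted sample scatter toward the identity,
    so the iterate stays positive definite even for n < p.
    """
    n, p = X.shape
    S = np.eye(p)
    for _ in range(n_iter):
        q = np.einsum('ij,jk,ik->i', X, np.linalg.inv(S), X)  # x^T S^-1 x
        T = (p / n) * (X.T * (1.0 / q)) @ X
        S_new = (1.0 - beta) * T + beta * np.eye(p)  # shrink toward identity
        S_new *= p / np.trace(S_new)                 # fix scale: trace = p
        if np.max(np.abs(S_new - S)) < tol:
            return S_new
        S = S_new
    return S

# n = 10 samples in p = 40 dimensions: the plain Tyler fixed point would be
# singular here, but the regularized iterate remains positive definite.
rng = np.random.default_rng(5)
X = rng.standard_normal((10, 40))
S_hat = regularized_tyler(X)
print(S_hat.shape, np.trace(S_hat))
```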


Pekola J.P.,Aalto University | Saira O.-P.,Aalto University | Maisi V.F.,Aalto University | Maisi V.F.,Center for Metrology and Accreditation | And 5 more authors.
Reviews of Modern Physics | Year: 2013

The control of electrons at the level of the elementary charge e was demonstrated experimentally already in the 1980s. Ever since, the production of an electrical current ef, or its integer multiple, at a drive frequency f has been a focus of research for metrological purposes. This review discusses the generic physical phenomena and technical constraints that influence single-electron charge transport and presents a broad variety of proposed realizations. Some of them have already proven experimentally to nearly fulfill the demanding needs, in terms of transfer errors and transfer rate, of quantum metrology of electrical quantities, whereas some others are currently "just" wild ideas, still often potentially competitive if technical constraints can be lifted. The important issues of readout of single-electron events and potential error correction schemes based on them are also discussed. Finally, an account is given of the status of single-electron current sources in the bigger framework of electric quantum standards and of the future international SI system of units, and applications and uses of single-electron devices outside the metrological context are briefly discussed. © 2013 American Physical Society.
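The metrological relation at the heart of this program, I = e·f for one electron transferred per drive cycle, immediately shows how small the resulting currents are:

```python
# Current from pumping exactly one electron per cycle at drive frequency f.

e = 1.602176634e-19          # elementary charge in coulombs (exact SI value)

for f in (1e6, 1e9):         # 1 MHz and 1 GHz drive frequencies
    print(f"f = {f:.0e} Hz  ->  I = e*f = {e * f * 1e12:.3f} pA")
```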


Solinas P.,Aalto University | Averin D.V.,State University of New York at Stony Brook | Pekola J.P.,Aalto University
Physical Review B - Condensed Matter and Materials Physics | Year: 2013

We analyze work done on a quantum system driven by a control field. The average work depends on the whole dynamics of the system, and is obtained as the integral of the average power operator. As a specific example we focus on a superconducting Cooper-pair box forming a two-level system. We obtain expressions for the average work and work distribution in a closed system, and discuss control field and environment contributions to the average work for an open system. © 2013 American Physical Society.
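The quantity described, the average work obtained as the integral of the average power operator, can be written schematically as follows (notation assumed, with H(t) the driven Hamiltonian, ρ(t) the system state, and τ the duration of the drive):

```latex
\langle W \rangle
  \;=\; \int_{0}^{\tau} \langle P(t) \rangle \, dt
  \;=\; \int_{0}^{\tau} \mathrm{Tr}\!\left[\rho(t)\,
        \frac{\partial H(t)}{\partial t}\right] dt
```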


Hashemi S.M.,Aalto University | Hashemi S.M.,Iran University of Science and Technology | Nefedov I.S.,Aalto University
Physical Review B - Condensed Matter and Materials Physics | Year: 2012

In this paper we discuss an interesting property of arrays of metallic carbon nanotubes, namely, the capability of perfect absorption in optically ultrathin layers. The carbon nanotube array is used in a regime where it possesses properties of a uniaxial indefinite medium. We show that if the optical axis is tilted with respect to an interface, a plane incident wave propagates inside a finite-thickness slab of the carbon nanotube array with a very small wavelength and small material losses cause the total wave absorption. We demonstrate that perfect matching with free space can be achieved in an optically ultrathin layer without a magnetic response and when the reflected wave is absent. Nonsymmetry appearing as a difference between wave numbers of waves propagating upward and downward with respect to the interface under oblique incidence leads to the absence of a thickness resonance. © 2012 American Physical Society.


Wang Z.W.,University of Birmingham | Toikkanen O.,Aalto University | Quinn B.M.,Aalto University | Palmer R.E.,University of Birmingham
Small | Year: 2011

The shape of monolayer-protected Au38 clusters on the sub-nanometer scale is obtained by spherical aberration-corrected scanning transmission electron microscopy under optimized experimental imaging conditions. The cluster shape is generally retained between frames, and statistical analysis of the shape population identifies a high proportion of prolate clusters, consistent with published theoretical calculations and X-ray diffraction results. This method has the potential to characterize other small, ligand-protected nanoparticles as well as clusters or nanoparticles generated by physical methods, such as catalyst particles. © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.


Averin D.V.,State University of New York at Stony Brook | Mottonen M.,Aalto University | Pekola J.P.,Aalto University
Physical Review B - Condensed Matter and Materials Physics | Year: 2011

We propose and analyze a possible implementation of Maxwell's demon (MD) based on a single-electron pump. It presents a practical realization of MD in a solid-state environment. We show that measurements of the charge states of the pump and feedback control of the gate voltages lead to a net flow of electrons against the bias voltage ideally with no work done on the system by the gate control. The information obtained in the measurements converts thermal fluctuations into energy stored in a capacitor. We derive the conditions on the detector back action and measurement time necessary for implementing this conversion. © 2011 American Physical Society.


News Article | March 8, 2016
Site: www.sciencenews.org

When French engineer Sadi Carnot calculated the maximum efficiency of a heat engine in 1824, he had no idea what heat was. In those days, physicists thought heat was a fluid called caloric. But Carnot, later lauded as a pioneer in establishing the second law of thermodynamics, didn’t have to know those particulars, because thermodynamics is insensitive to microscopic details. Heat flows from hot to cold regardless of whether it consists of a fluid or, as it turns out, the collective motion of trillions of trillions of molecules. Thermodynamics, the laws and equations governing energy and its usefulness to do work, concerns itself only with the big picture. It’s a successful approach. As thermodynamics requires, energy is always conserved (the first law), and when it flows from hot to cold it can do work, limited by the generation of disorder, or entropy (the second law). These laws dictate everything from the miles per gallon a car engine gets to the battery life of a smartphone. They help physicists better understand black holes and why time moves forward but not backward (SN: 7/25/15, p. 15). Yet the big picture approach, considering the forest rather than the trees, has made physicists wonder if thermodynamics holds at all scales. Would it work if an engine consisted of three molecules rather than the typical trillion trillion? In the realm of the very small, governed by the quirky rules of quantum mechanics, perhaps the thermodynamic code is not so rigid. “Thermodynamics was designed for big stuff,” says Janet Anders, a theoretical physicist at the University of Exeter in England. “We haven’t really integrated thermodynamics with quantum mechanics.” Over the last few decades, physicists have gradually explored heat flow at the quantum level, intrigued by the possibility of finding violations of thermodynamics’ second law. So far, the second law has held strong. 
But new precision experimental techniques are allowing physicists to explore the quantum foundations of thermodynamics more fully. Testing the limits set by theorists, researchers are building tiny engines, some powered by a single atom, and measuring the devices’ feeble oomph. Even if physicists can’t break the thermodynamic rules, recent evidence suggests ways to bend them — especially by exploiting the way quantum entanglement weaves together the fates of a few particles. Techniques used in processing quantum information could prove useful for squeezing extra energy out of miniature engines, for instance. These lessons could help scientists build nanomachines that harvest heat and use it to deliver medicine inside the body, or help reduce energy loss in the tiny components of traditional computers. Any future practical applications of this work will depend on understanding how basic thermodynamic principles operate at ultrasmall scales. It goes back to statistics, says University College London quantum theoretical physicist Jonathan Oppenheim. If the trillion trillion gas molecules in a steam engine were represented by that many coins, then the result of flipping all those coins would be a homogenous mixture of heads and tails, the equivalent of stable temperature and maximum entropy. That’s why steam engines always follow the rules. But flip three minicoins inside a tiny quantum engine and all three could easily land on heads, as if all the fast molecules stayed in one compartment rather than mixing with the other — a violation of the second law. Experiments over the years had suggested that if the second law of thermodynamics does break down at small scales, the violation is not very drastic. Last year, Oppenheim and colleagues got more specific, publishing a detailed analysis in the Proceedings of the National Academy of Sciences. Their results indicate that not only does the second law actually hold at the quantum scale, it is also more demanding. 
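The three-coin argument above is easy to make quantitative: the probability that all n fair coins land heads, the analog of a second-law-violating fluctuation, falls off as 2^-n, so such events are routine for three coins and unobservable for macroscopic numbers:

```python
# Probability that all n fair coin flips come up heads: 0.5**n.
# Routine for the 3 "coins" of a tiny quantum engine, hopeless long before
# reaching the ~10**24 molecules of a steam engine.

def p_all_heads(n: int) -> float:
    return 0.5 ** n

print(p_all_heads(3))     # 0.125
print(p_all_heads(100))   # already below 1e-30 for just 100 coins
```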
Rather than analyzing entropy directly, Oppenheim’s team looked at how much energy a system has available to do work, a quantity called free energy. In our macroscopic world, the amount of free energy depends only on a system’s temperature and entropy. But by zooming in toward smaller and smaller collections of particles, the researchers found that they had to take into account several more varieties of free energy. Every one of them decreases over time. In other words, the second law requires adherence to even more rules at the quantum level. Recent experiments have made it clear that attempts to circumvent the second law at any scale are doomed. In the Dec. 31 Physical Review Letters, Jonne Koski, a physicist at Aalto University in Finland, and colleagues created the laboratory equivalent of the heat-manipulating “demon” conjured by Scottish physicist James Clerk Maxwell in 1867. Maxwell’s demon, which tries to violate thermodynamics’ second law by sorting fast (blue) from slow (green) molecules to avoid equilibrium (left), would inevitably fail at the quantum level, a new experiment shows. Maxwell wondered whether a hypothetical microscopic entity tracking the particles flitting around two adjacent containers could separate the fast-moving particles from the slow ones. The demon’s actions would minimize the system’s total entropy, a violation of the second law, and create a temperature difference that could be exploited to do work for free. Koski’s team built a demonic device that deprived an electronic circuit of heat and thus its entropy as well. The demon did its job: A visitor to the lab observing the experiment would think  the circuit was violating the second law. But the researchers also noticed that the demon paid a price for its transgressions. As it performed its dirty deed, the demon itself heated up. The total entropy of the circuit and the demon together actually increased, just as the second law requires (SN Online: 12/1/15). 
Koski’s electronic demon failed because of its reliance on information about individual particles. The connection between information and thermodynamics dates back to 1929. That’s when Hungarian physicist Leo Szilard dug deeper into Maxwell’s thought experiment and drew up a blueprint for exploiting information about particles, such as their position and velocity, to perform tasks. Szilard’s work demonstrated that in physics, information isn’t merely a stock quote or a baseball player’s batting average — it’s physical. More than three decades later, IBM physicist Rolf Landauer showed that Szilard’s approach came with a cost. Maxwell’s demon may capitalize on its knowledge about one particle, Landauer said, but the demon must use up the energy it gained when it scrubs that information from its finite memory and turns its attention to the next particle. Erasing information costs energy. That’s why the sophisticated demonic circuit failed to circumvent the second law. Information is clearly important for understanding thermodynamics, and it’s also downright essential for making sense of the stranger parts of quantum mechanics. Tiny bits of matter can essentially exist in two places at once, a phenomenon called superposition. Two or more particles can be wrangled into what’s known as an entangled state, intricately linking the particles’ properties regardless of the distance between them. Many physicists are trying to exploit superposition, quantum entanglement and other quantum trickery to perform information-heavy tasks that are impossible under the rules of classical physics. Researchers envision supersecure communication networks and quantum computers that exploit entangled photons or ions to solve complex problems with ease (SN: 11/20/10, p. 22). But information means much more than just exchanging and processing 1s and 0s. As a result, physicists pondering quantum computing and communication have turned their attention to thermodynamics. 
They’ve begun asking whether properties such as entanglement could also offer an advantage in converting heat into work. In the October–December 2015 Physical Review X, a European team demonstrated that a system of several entangled particles stores more usable energy than the same particles without quantum connections. The advantage, which quickly disappears as the number of particles increases, boils down to the notion that information is a resource. Entangled particles essentially provide information for free, because knowing something about one particle reveals something about its entangled partners (SN: 1/9/16, p. 9). Even though the second law holds strong, says study coauthor Marcus Huber, the ability to exploit information from quantum effects “also helps you to do things that you couldn’t do classically.” Obtaining information at a discount may enable technology that bends the second law and outperforms the best life-size engines. “What we can hope for are machines that run faster, refrigerators that get cooler or batteries that store more or charge faster,” says Huber, a quantum information theorist at the University of Geneva. Huber compares the challenge ahead to playing a game, much like the one Carnot played in the 19th century. Carnot essentially turned dials controlling variables such as temperature and pressure until he had squeezed the maximum efficiency out of a steam engine. Today’s physicists have different goals — perhaps creating a microscopic refrigerator to cool their instruments to unfathomably low temperatures. To achieve such goals, physicists plan to turn the dials for variables like entanglement and see what happens. Soon scientists may be able to start playing those games with engines exploiting quantum effects in the lab. German researchers took a step toward that goal in October by building a heat engine consisting of a single atom. 
Johannes Roßnagel, a quantum physicist at the University of Mainz, and colleagues built a cone-shaped enclosure around a calcium ion. After using a laser and electric field to heat up the ion to about one degree above absolute zero, the researchers measured the work performed by the ion as it exerted a subtle push toward the top of the cone. A typical engine (left) uses energy from heat to drive a turbine or perform some other task. Reduce the engine’s size enough and it can drive a single atom (right, green dot) to vibrate and do a tiny amount of work. The nanoscopic engine worked just as the laws of thermodynamics say it should, the researchers reported in a paper posted online at arXiv.org. Adjusting for the tiny weight of the ion, the power was comparable to that of a car engine, Roßnagel says. “It’s quite interesting to see that you can drive heat machines with a single atom,” he says. Despite the measurable power output of the single-ion engine, Roßnagel warns that nano-sized engines for practical use are decades away at best. Instead, the usefulness of quantum thermodynamics will probably happen under the hood of other technologies. Some researchers have their eyes on the multibillion-dollar computer chip industry. In the drive to build ever-faster computers, engineers keep shrinking transistors to pack more and more onto chips. The transistors, some just tens of nanometers wide, tend to leak electrons and heat up. That heat ruins the energy efficiency of the computer and damages components. Quantum thermodynamics could help physicists learn tricks to reduce the amount of wasted heat or perhaps even harvest it with small devices inside the computer. Heat management is even more crucial for physicists seeking to build practical quantum computers. Such a device needs to operate at extremely low temperatures to exploit quantum effects and potentially outperform traditional computers.
Next, Roßnagel and his colleagues plan to chill their single atom until it’s capable of maintaining delicate quantum states including superposition and entanglement. Such an experiment would put Huber’s theoretical results to the test and expose the potential of adjusting those “quantumness” knobs to better exploit heat to do work. A few contrarians in the physics community say that such experiments could finally violate the vaunted second law of thermodynamics. But don’t bet on it. Early 20th century English astrophysicist Arthur Eddington is still looking good with his prediction that any theory attempting to defy the second law will “collapse in deepest humiliation.” But he didn’t say anything about moving the goalposts a bit. This article appears in the March 19, 2016 issue with the headline, "The laws of heat go small: Physicists explore thermodynamics in the quantum realm."


News Article | November 10, 2016
Site: phys.org

"Nanodiamond, nanohorn, nano-onion...," lists off the Aalto University Professor, recounting the many nano-shapes of carbon. Laurila is using these shapes to build new materials: tiny sensors, only a few hundred nanometres across, that can achieve great things due to their special characteristics. For one, the sensors can be used to enhance the treatment of neurological conditions. That is why Laurila, University of Helsinki Professor Tomi Taira and experts from HUS (the Hospital District of Helsinki and Uusimaa) are looking for ways to use the sensors for taking electrochemical measurements of biomolecules. Biomolecules are e.g. neurotransmitters such as glutamate, dopamine and opioids, which are used by nerve cells to communicate with each other. "Most of the drugs meant for treating neurological diseases change the communication between nerve cells that is based on neurotransmitters. If we had real time and individual information on the operation of the neurotransmitter system, it would make it much easier to for example plan precise treatments," explains Taira. Due to their small size, carbon sensors can be taken directly next to a nerve cell, where the sensors will report what kind of neurotransmitter the cell is emitting and what kind of reaction it is inducing in other cells. "In practice, we are measuring the electrons that are moving in oxidation and reduction reactions," Laurila explains the operating principle of the sensors. "The advantage of the sensors developed by Tomi and the others is their speed and small size. The probes used in current measurement methods can be compared to logs on a cellular scale – it's impossible to use them and get an idea of the brain's dynamic," summarizes Taira. For the sensors, the journey from in vitro tests conducted in glass dishes and test tubes to in vivo tests and clinical use is long. However, the researchers are highly motivated. 
"About 165 million people are suffering from various neurological diseases in Europe alone. And because they are so expensive to treat, neurological diseases make up as much as 80 per cent of health care costs," tells Taira. Tomi Laurila believes that carbon sensors will have applications in fields such as optogenetics. Optogenetics is a recently developed method where a light-sensitive molecule is brought into a nerve cell so that the cell's electric operation can then be turned on or off by stimulating it with light. A few years ago, a group of scientists proved in the scientific journal Nature that they had managed to use optogenetics to activate a memory trace that had been created previously due to learning. Using the same technique, researchers were able to demonstrate that with a certain type of Alzheimer's, the problem is not that there are no memory traces being created, but that the brain cannot read the traces. "So the traces exist, and they can be activated by boosting them with light stimuli," explains Taira but stresses that a clinical application is not yet a reality. However, clinical applications for other conditions may be closer by. One example is Parkinson's disease. In Parkinson's disease, the amount of dopamine starts to decrease in the cells of a particular brain section, which causes the typical symptoms such as tremors, rigidity and slowness of movement. With the sensors, the level of dopamine could be monitored in real time. "A sort of feedback system could be connected to it, so that it would react by giving an electric or optical stimulus to the cells, which would in turn release more dopamine," envisions Taira. "Another application that would have an immediate clinical use is monitoring unconscious and comatose patients. With these patients, the level of glutamate fluctuates very much, and too much glutamate damages the nerve cell – online monitoring would therefore improve their treatment significantly. 
Manufacturing carbon sensors is definitely not a mass-production process; it is slow and meticulous handiwork. "At this stage, the sensors are practically being built atom by atom," summarizes Tomi Laurila. "Luckily, we have many experts on carbon materials of our own. For example, the nanobuds of Professor Esko Kauppinen and the carbon films of Professor Jari Koskinen help with the manufacturing of the sensors. Carbon-based materials are mostly very compatible with the human body, but there is still little information about them. That's why a big part of the work is to go through the electrochemical characterisation that has been done on different forms of carbon." The sensors are being developed and tested by experts from various fields, such as chemistry, materials science, modelling, medicine and imaging. Twenty or so articles have been published on the basic properties of the materials. Now, the challenge is to build them into geometries that are functional in a physiological environment. And taking measurements is not simple, either. "Brain tissue is delicate and doesn't appreciate having objects inserted in it. But if this were easy, someone would've already done it," the two conclude.


News Article | November 14, 2016
Site: www.sciencedaily.com

Researchers have found a mathematical structure that was thought not to exist. The best possible q-analogs of codes may be useful in more efficient data transmission. In the 1970s, a group of mathematicians started developing a theory according to which codes could be presented at a level one step higher than the sequences formed by zeros and ones: mathematical subspaces named q-analogs. For a long time, no applications were found -- or even searched for -- until ten years ago, when it was understood that they would be useful in the efficient data transmission required by modern data networks. The challenge was that, despite numerous attempts, the best possible codes described in the theory had not been found, and it was therefore believed that they did not even exist. 'We thought it could very well be possible,' says Professor Patric Östergård from Aalto University with a smile. 'The search was challenging because of the enormous size of the structures. Searching for them is a gigantic operation even with very high-level computational capacity available. Therefore, in addition to algebraic techniques and computers, we also had to use our experience to guess where to start looking, and in that way limit the scope of the search.' The perseverance was rewarded when the group of five researchers found the largest possible structure described by the theory. The results were recently presented in the scientific publication Forum of Mathematics, Pi, which publishes only a dozen carefully selected articles per year. Aalto University (Finland), Technion (Israel), the University of Bayreuth (Germany), Darmstadt University of Applied Sciences (Germany), the University of California San Diego (USA) and Nanyang Technological University (Singapore) participated in the study. 
Although mathematical breakthroughs rarely become financial success stories immediately, many modern things we take for granted would not exist without them. For example, Boolean algebra, which has played a key role in the creation of computers, has been developed since the 19th century. 'As a matter of fact, information theory was green before anyone had even mentioned green alternatives,' says Östergård with a laugh. 'Its basic idea is, actually, to take advantage of the power of the transmitter as effectively as possible, which in practice means attempting to transmit data using as little energy as possible. Our discovery will not become a product straight away, but it may gradually become part of the internet.'
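In rough terms (a sketch based on standard definitions, not quoted from the article): a classical code picks its codewords among sequences or subsets over an n-element ground set, while a q-analog replaces subsets by subspaces of a vector space over a finite field. The number of k-dimensional subspaces of F_q^n is given by the Gaussian binomial coefficient, and the "best possible" structures in question are q-analogs of Steiner systems:

```latex
% Number of k-dimensional subspaces of F_q^n (Gaussian binomial coefficient):
\[
  \binom{n}{k}_{q} \;=\; \prod_{i=0}^{k-1} \frac{q^{\,n-i}-1}{q^{\,k-i}-1}
\]
% A q-Steiner system S_q(t,k,n) is a collection of k-dimensional subspaces
% of F_q^n such that every t-dimensional subspace is contained in exactly
% one member of the collection.
```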


News Article | November 14, 2016
Site: www.eurekalert.org

The best possible q-analogs of codes may be useful in more efficient data transmission. Michael Braun, Tuvi Etzion, Patric Östergård, Alexander Vardy, Alfred Wassermann: "Existence of q-analogs of Steiner Systems". Forum of Mathematics, Pi. Link to the publication https:/


News Article | January 19, 2016
Site: phys.org

The study utilized eye gaze tracking to demonstrate how dogs view the emotional expressions of dog and human faces. Dogs looked first at the eye region and generally examined the eyes longer than the nose or mouth areas. Species-specific characteristics of certain expressions attracted their attention, for example the mouths of threatening dogs. However, dogs appeared to base their perception of facial expressions on the whole face. Threatening faces evoked an attentional bias, which may be based on an evolutionarily adaptive mechanism: sensitivity to detect and avoid threats represents a survival advantage. Interestingly, dogs' viewing behavior depended on the depicted species: threatening conspecific faces evoked longer looking, whereas threatening human faces evoked an avoidance response. Threatening signals carrying different biological validity are most likely processed via distinct neurocognitive pathways. "The tolerant behavior strategy of dogs toward humans may partially explain the results. Domestication may have equipped dogs with a sensitivity to detect the threat signals of humans and respond to them with pronounced appeasement signals", says researcher Sanni Somppi from the University of Helsinki. This is the first evidence of emotion-related gaze patterns in non-primates. As early as 150 years ago, Charles Darwin proposed that the analogies in the form and function of human and non-human animal emotional expressions suggest shared evolutionary roots. The recent findings provide modern scientific support for Darwin's old argument. A total of 31 dogs of 13 different breeds participated in the study. Prior to the experiment, the dogs were clicker-trained to stay still in front of a monitor without being commanded or restrained. Thanks to this positive training approach, the dogs were highly motivated to perform the task. 
The study is part of a collaboration between the Faculties of Veterinary Medicine and Behavioural Sciences at the University of Helsinki and the Department of Neuroscience and Biomedical Engineering at Aalto University. The research group of Professor Outi Vainio, which explores cognition and emotion in dogs at the Faculty of Veterinary Medicine of the University of Helsinki, has previously discovered that socially informative objects in images, such as personally familiar faces and social interaction, attract dogs' attention. The study was supported by, among others, the Academy of Finland and the Emil Aaltonen Foundation. More information: Sanni Somppi et al. Dogs Evaluate Threatening Facial Expressions by Their Biological Validity – Evidence from Gazing Patterns, PLOS ONE (2016). DOI: 10.1371/journal.pone.0143047


Glaus P.,University of Manchester | Honkela A.,Aalto University | Rattray M.,University of Sheffield
Bioinformatics | Year: 2012

Motivation: High-throughput sequencing enables expression analysis at the level of individual transcripts. The analysis of transcriptome expression levels and differential expression (DE) estimation requires a probabilistic approach to properly account for ambiguity caused by shared exons and finite read sampling as well as the intrinsic biological variance of transcript expression. Results: We present Bayesian inference of transcripts from sequencing data (BitSeq), a Bayesian approach for estimation of transcript expression level from RNA-seq experiments. Inferred relative expression is represented by Markov chain Monte Carlo samples from the posterior probability distribution of a generative model of the read data. We propose a novel method for DE analysis across replicates which propagates uncertainty from the sample-level model while modelling biological variance using an expression-level-dependent prior. We demonstrate the advantages of our method using simulated data as well as an RNA-seq dataset with technical and biological replication for both studied conditions. © The Author(s) 2012. Published by Oxford University Press.
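BitSeq's actual model works at the level of read alignments and sequencing error, so the following is purely an illustration of the central idea the abstract describes - sampling transcript proportions and read assignments jointly from their posterior. This minimal, self-contained Gibbs sampler handles reads that are ambiguously compatible with several transcripts; all numbers and names are invented, and this is not the authors' code:

```python
import random

random.seed(1)

def dirichlet(alphas):
    """Sample from a Dirichlet distribution via normalised Gamma draws."""
    xs = [random.gammavariate(a, 1.0) for a in alphas]
    s = sum(xs)
    return [x / s for x in xs]

def gibbs_expression(reads, n_transcripts, n_iter=2000, burn_in=500, alpha=1.0):
    """reads: list of sets of transcript indices each read is compatible with.
    Returns the posterior-mean relative expression of each transcript."""
    reads = [sorted(r) for r in reads]
    theta = [1.0 / n_transcripts] * n_transcripts
    totals = [0.0] * n_transcripts
    kept = 0
    for it in range(n_iter):
        counts = [0] * n_transcripts
        for compat in reads:
            # Sample this read's transcript of origin given current theta.
            weights = [theta[t] for t in compat]
            r = random.random() * sum(weights)
            acc = 0.0
            choice = compat[-1]          # fallback for float round-off
            for t, w in zip(compat, weights):
                acc += w
                if r <= acc:
                    choice = t
                    break
            counts[choice] += 1
        # Sample theta from its conjugate Dirichlet posterior.
        theta = dirichlet([alpha + c for c in counts])
        if it >= burn_in:
            totals = [a + b for a, b in zip(totals, theta)]
            kept += 1
    return [t / kept for t in totals]
```

The Markov chain Monte Carlo samples retained after burn-in play the role of the posterior samples the abstract refers to; a real implementation would also model read error and transcript length.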


Heikkila T.T.,University of Jyväskylä | Volovik G.E.,Aalto University | Volovik G.E.,L D Landau Institute For Theoretical Physics
New Journal of Physics | Year: 2015

We consider the Z2 topology of the Dirac lines, i.e., lines of band contacts, using graphite as an example. Four lines - three with topological charge N1 = 1 each and one with N1 = -3 - merge together near the H-point and annihilate due to the summation law 1 + 1 + 1 - 3 = 0. The merging point is similar to the real-space nexus, an analog of the Dirac monopole at which the Z2 strings terminate. © 2015 IOP Publishing Ltd and Deutsche Physikalische Gesellschaft.


Jo H.-H.,Aalto University | Karsai M.,Aalto University | Kertesz J.,Aalto University | Kertesz J.,Budapest University of Technology and Economics | Kaski K.,Aalto University
New Journal of Physics | Year: 2012

The temporal communication patterns of human individuals are known to be inhomogeneous or bursty, which is reflected in the heavy-tailed distribution of inter-event times. Two main mechanisms have been suggested as the cause of such bursty behavior: (i) inhomogeneities due to circadian and weekly activity patterns and (ii) inhomogeneities rooted in human task-execution behavior. In this paper, we investigate the role of these mechanisms by developing, and then applying, systematic de-seasoning methods that remove the circadian and weekly patterns from the time series of mobile phone communication events of individuals. We find that the heavy tails in the inter-event time distributions remain robust with respect to this procedure, which clearly indicates that the human task-execution-based mechanism is a possible cause of the remaining burstiness in temporal mobile phone communication patterns. © IOP Publishing Ltd and Deutsche Physikalische Gesellschaft.
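A minimal sketch of de-seasoning (our illustrative implementation, not the authors' code): rescale the time axis so that the average daily activity profile becomes flat, then recompute inter-event times on the rescaled clock. All parameters here are invented:

```python
def deseason(timestamps, period=24.0, n_bins=24):
    """Rescale event times (in hours) so that the average activity profile
    over one period becomes flat, i.e. a unit event rate."""
    # 1. Empirical activity profile: event counts per bin of the day.
    counts = [0] * n_bins
    for t in timestamps:
        counts[int((t % period) / period * n_bins)] += 1
    total = sum(counts)
    # 2. Cumulative profile: fraction of daily activity before each bin.
    cum = [0.0]
    for c in counts:
        cum.append(cum[-1] + c / total)
    def rescale(t):
        days, frac = divmod(t, period)
        pos = frac / period * n_bins
        b = min(int(pos), n_bins - 1)
        within = pos - b
        # De-seasoned time: whole periods plus rescaled position within one.
        return (days + cum[b] + (cum[b + 1] - cum[b]) * within) * period
    return [rescale(t) for t in timestamps]

def inter_event_times(timestamps):
    """Waiting times between consecutive events."""
    ts = sorted(timestamps)
    return [b - a for a, b in zip(ts, ts[1:])]
```

If the heavy tail of `inter_event_times(deseason(ts))` survives this rescaling, burstiness cannot be explained by the daily rhythm alone, which is the paper's argument in miniature.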


Malola S.,University of Jyväskylä | Lehtovaara L.,University of Jyväskylä | Enkovaara J.,University of Jyväskylä | Enkovaara J.,Aalto University | And 3 more authors.
ACS Nano | Year: 2013

Gold nanoclusters protected by a thiolate monolayer (MPC) are widely studied for their potential applications in site-specific bioconjugate labeling, sensing, drug delivery, and molecular electronics. Several MPCs with 1-2 nm metal cores are currently known to have a well-defined molecular structure, and they serve as an important link between molecularly dispersed gold and colloidal gold to understand the size-dependent electronic and optical properties. Here, we show by using an ab initio method together with atomistic models for experimentally observed thiolate-stabilized gold clusters how collective electronic excitations change when the gold core of the MPC grows from 1.5 to 2.0 nm. A strong localized surface plasmon resonance (LSPR) develops at 540 nm (2.3 eV) in a cluster with a 2.0 nm metal core. The protecting molecular layer enhances the LSPR, while in a smaller cluster with 1.5 nm gold core, the plasmon-like resonance at 540 nm is confined in the metal core by the molecular layer. Our results demonstrate a threshold size for the emergence of LSPR in these systems and help to develop understanding of the effect of the molecular overlayer on plasmonic properties of MPCs enabling engineering of their properties for plasmonic applications. © 2013 American Chemical Society.


Ojala M.,Aalto University | Garriga G.C.,University Pierre and Marie Curie
Journal of Machine Learning Research | Year: 2010

We explore the framework of permutation-based p-values for assessing the performance of classifiers. In this paper we study two simple permutation tests. The first test assesses whether the classifier has found a real class structure in the data; the corresponding null distribution is estimated by permuting the labels in the data. This test has been used extensively in classification problems in computational biology. The second test studies whether the classifier is exploiting the dependency between the features in classification; the corresponding null distribution is estimated by permuting the features within classes, inspired by restricted randomization techniques traditionally used in statistics. This new test can serve to identify descriptive features, which can be valuable information for improving classifier performance. We study the properties of these tests and present an extensive empirical evaluation on real and synthetic data. Our analysis shows that studying the classifier performance via permutation tests is effective. In particular, the restricted permutation test clearly reveals whether the classifier exploits the interdependency between the features in the data. © 2010 Markus Ojala and Gemma C. Garriga.
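A compact sketch of the two tests (invented data and a stand-in nearest-centroid classifier; not the authors' implementation). Test 1 permutes labels to ask whether any class structure exists; test 2 permutes each feature column separately within each class to ask whether the classifier uses dependencies between features:

```python
import random

random.seed(0)

def accuracy(train, test):
    """Nearest-centroid classifier accuracy (stand-in for any classifier)."""
    cents = {}
    for x, y in train:
        cents.setdefault(y, []).append(x)
    cents = {y: [sum(col) / len(xs) for col in zip(*xs)] for y, xs in cents.items()}
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    hits = sum(1 for x, y in test if min(cents, key=lambda c: dist(x, cents[c])) == y)
    return hits / len(test)

def p_value(data, permute, n_perm=200):
    """Fraction of permuted datasets on which the classifier does at least
    as well as on the real data (with the usual +1 correction)."""
    half = len(data) // 2
    def score(d):
        return accuracy(d[:half], d[half:])
    observed = score(data)
    extreme = sum(1 for _ in range(n_perm) if score(permute(data)) >= observed)
    return (extreme + 1) / (n_perm + 1)

def permute_labels(data):
    """Test 1: destroys any connection between features and labels."""
    ys = [y for _, y in data]
    random.shuffle(ys)
    return [(x, y) for (x, _), y in zip(data, ys)]

def permute_features_within_class(data):
    """Test 2: keeps per-class feature marginals, destroys dependencies."""
    out = []
    for cls in set(y for _, y in data):
        xs = [x for x, y in data if y == cls]
        cols = [list(col) for col in zip(*xs)]
        for col in cols:
            random.shuffle(col)
        out += [(list(row), cls) for row in zip(*cols)]
    random.shuffle(out)
    return out
```

On well-separated data, label permutation should give a small p-value (real class structure), while for a classifier that ignores feature dependence, the within-class test should give a large one.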


Deb K.,Indian Institute of Technology Kanpur | Deb K.,Aalto University | Sinha A.,Aalto University
Evolutionary Computation | Year: 2010

Bilevel optimization problems involve two optimization tasks (upper and lower level), in which every feasible upper level solution must correspond to an optimal solution to a lower level optimization problem. These problems commonly appear in many practical problem solving tasks including optimal control, process optimization, game-playing strategy developments, transportation problems, and others. However, they are commonly converted into a single level optimization problem by using an approximate solution procedure to replace the lower level optimization task. Although there exist a number of theoretical, numerical, and evolutionary optimization studies involving single-objective bilevel programming problems, not many studies look at the context of multiple conflicting objectives in each level of a bilevel programming problem. In this paper, we address certain intricate issues related to solving multi-objective bilevel programming problems, present challenging test problems, and propose a viable and hybrid evolutionary-cum-local-search based algorithm as a solution methodology. The hybrid approach performs better than a number of existing methodologies and scales well up to 40-variable difficult test problems used in this study. The population sizing and termination criteria are made self-adaptive, so that no additional parameters need to be supplied by the user. The study indicates a clear niche of evolutionary algorithms in solving such difficult problems of practical importance compared to their usual solution by a computationally expensive nested procedure. The study opens up many issues related to multi-objective bilevel programming and hopefully this study will motivate EMO and other researchers to pay more attention to this important and difficult problem solving activity. © 2010 by the Massachusetts Institute of Technology.
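The "computationally expensive nested procedure" that the abstract contrasts its method against can be sketched for a toy one-dimensional bilevel problem (illustrative only; the paper's own method is a hybrid evolutionary algorithm, and all functions and bounds here are invented):

```python
def argmin_1d(f, lo, hi, iters=100):
    """Ternary search for the minimiser of a unimodal 1-D function."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

def solve_bilevel(F, f, bounds):
    """Nested scheme: for every upper-level trial x, solve the lower-level
    problem to optimality for the follower's response y*(x), then optimise
    the upper-level objective F(x, y*(x)) over x."""
    lo, hi = bounds
    def reduced(x):
        y_star = argmin_1d(lambda y: f(x, y), lo, hi)   # follower's response
        return F(x, y_star)
    x_star = argmin_1d(reduced, lo, hi)
    y_star = argmin_1d(lambda y: f(x_star, y), lo, hi)
    return x_star, y_star
```

Every upper-level evaluation triggers a full lower-level optimisation, which is exactly why the nested approach becomes expensive as dimensions grow.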


Pihko P.M.,University of Jyväskylä | Majander I.,Aalto University | Erkkila A.,Aalto University
Topics in Current Chemistry | Year: 2010

The reversible reaction of primary or secondary amines with enolizable aldehydes or ketones affords nucleophilic intermediates, enamines. With chiral amines, catalytic enantioselective reactions via enamine intermediates become possible. In this review, structure-activity relationships and the scope as well as current limitations of enamine catalysis are discussed. © 2009 Springer-Verlag Berlin Heidelberg.


Pirkkalainen J.-M.,Aalto University | Damskagg E.,Aalto University | Brandt M.,Aalto University | Massel F.,University of Jyväskylä | Sillanpaa M.A.,Aalto University
Physical Review Letters | Year: 2015

A pair of conjugate observables, such as the quadrature amplitudes of harmonic motion, have fundamental fluctuations that are bound by the Heisenberg uncertainty relation. However, in a squeezed quantum state, fluctuations of a quantity can be reduced below the standard quantum limit, at the cost of increased fluctuations of the conjugate variable. Here we prepare a nearly macroscopic moving body, realized as a micromechanical resonator, in a squeezed quantum state. We obtain squeezing of one quadrature amplitude 1.1±0.4 dB below the standard quantum limit, thus achieving a long-standing goal of obtaining motional squeezing in a macroscopic object. © 2015 American Physical Society.
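In standard notation (our summary with hbar = 1, not quoted from the paper), the uncertainty relation for the quadratures and the reported squeezing level read:

```latex
% Conjugate quadratures X_1, X_2 of harmonic motion obey
\[
  \Delta X_1 \, \Delta X_2 \;\ge\; \tfrac{1}{2}\,\bigl|\langle [X_1, X_2] \rangle\bigr|
\]
% Squeezing is quoted in decibels relative to the zero-point (standard
% quantum limit) variance; 1.1 dB below the SQL corresponds to
\[
  10 \log_{10} \frac{\Delta X_{\mathrm{zp}}^2}{\Delta X_1^2} = 1.1\ \mathrm{dB}
  \;\;\Longrightarrow\;\;
  \Delta X_1^2 \approx 0.78\, \Delta X_{\mathrm{zp}}^2
\]
```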


Holme P.,Umea University | Holme P.,Sungkyunkwan University | Holme P.,University of Stockholm | Saramaki J.,Aalto University
Physics Reports | Year: 2012

A great variety of systems in nature, society and technology - from the web of sexual contacts to the Internet, from the nervous system to power grids - can be modeled as graphs of vertices coupled by edges. The network structure, describing how the graph is wired, helps us understand, predict and optimize the behavior of dynamical systems. In many cases, however, the edges are not continuously active. As an example, in networks of communication via e-mail, text messages, or phone calls, edges represent sequences of instantaneous or practically instantaneous contacts. In some cases, edges are active for non-negligible periods of time: e.g., the proximity patterns of inpatients at hospitals can be represented by a graph where an edge between two individuals is on throughout the time they are at the same ward. Like network topology, the temporal structure of edge activations can affect dynamics of systems interacting through the network, from disease contagion on the network of patients to information diffusion over an e-mail network. In this review, we present the emergent field of temporal networks, and discuss methods for analyzing topological and temporal structure and models for elucidating their relation to the behavior of dynamical systems. In the light of traditional network theory, one can see this framework as moving the information of when things happen from the dynamical system on the network to the network itself. Since fundamental properties, such as the transitivity of edges, do not necessarily hold in temporal networks, many of these methods need to be quite different from those for static networks. The study of temporal networks is very interdisciplinary in nature. Reflecting this, even the object of study has many names: temporal graphs, evolving graphs, time-varying graphs, time-aggregated graphs, time-stamped graphs, dynamic networks, dynamic graphs, dynamical graphs, and so on. 
This review covers different fields where temporal graphs are considered, but does not attempt to unify related terminology - rather, we want to make papers readable across disciplines. © 2012 Elsevier B.V.
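As a concrete illustration of why transitivity can fail in temporal networks (a minimal sketch with invented data, not from the review): a node can only be reached along time-respecting paths, i.e. sequences of contacts ordered in time.

```python
def temporal_reachable(events, source, t_start=0):
    """events: list of (t, u, v) instantaneous undirected contacts.
    Returns the earliest time each node can be reached from `source` along
    a time-respecting path (contacts used in non-decreasing time order;
    contacts sharing a timestamp may be chained)."""
    arrival = {source: t_start}
    for t, u, v in sorted(events):            # scan contacts in time order
        if u in arrival and arrival[u] <= t:
            arrival[v] = min(arrival.get(v, float("inf")), t)
        if v in arrival and arrival[v] <= t:
            arrival[u] = min(arrival.get(u, float("inf")), t)
    return arrival
```

With contacts b-c at time 1 and a-b at time 2, the static graph says a can reach c, but no time-respecting path a -> b -> c exists, because the b-c contact happened before a ever reached b.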


Jo H.-H.,Aalto University | Eom Y.-H.,French National Center for Scientific Research
Physical Review E - Statistical, Nonlinear, and Soft Matter Physics | Year: 2014

One of the interesting phenomena due to topological heterogeneities in complex networks is the friendship paradox: Your friends have on average more friends than you do. Recently, this paradox has been generalized for arbitrary node attributes, called the generalized friendship paradox (GFP). The origin of GFP at the network level has been shown to be rooted in positive correlations between degrees and attributes. However, how the GFP holds for individual nodes needs to be understood in more detail. For this, we first analyze a solvable model to characterize the paradox holding probability of nodes for the uncorrelated case. Then we numerically study the correlated model of networks with tunable degree-degree and degree-attribute correlations. In contrast to the network level, we find at the individual level that the relevance of degree-attribute correlation to the paradox holding probability may depend on whether the network is assortative or disassortative. These findings help us to understand the interplay between topological structure and node attributes in complex networks. © 2014 American Physical Society.
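A minimal sketch of how the plain friendship paradox is measured at the network and individual levels (invented toy graph, not from the paper):

```python
def friendship_paradox(adj):
    """adj: dict mapping node -> set of neighbours (undirected graph).
    Returns (mean degree, mean degree of a random friend, fraction of
    nodes for whom the paradox holds individually)."""
    deg = {v: len(ns) for v, ns in adj.items()}
    mean_deg = sum(deg.values()) / len(deg)
    # Network level: average over all "friend" slots, so a node is counted
    # once per link it participates in - high-degree nodes count more often.
    friend_degs = [deg[u] for v in adj for u in adj[v]]
    mean_friend = sum(friend_degs) / len(friend_degs)
    # Individual level: does a node's friends' mean degree exceed its own?
    holds = sum(
        1 for v in adj
        if adj[v] and sum(deg[u] for u in adj[v]) / len(adj[v]) > deg[v]
    )
    return mean_deg, mean_friend, holds / len(adj)
```

On a star graph the paradox is extreme: every leaf's only friend is the hub, so the mean friend degree far exceeds the mean degree, and the paradox holds for every node except the hub itself.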


Stenroos M.,Aalto University | Stenroos M.,MRC Cognition and Brain Sciences Unit | Hunold A.,TU Ilmenau | Haueisen J.,TU Ilmenau
NeuroImage | Year: 2014

Experimental MEG source imaging studies have typically been carried out with either a spherically symmetric head model or a single-shell boundary-element (BEM) model that is shaped according to the inner skull surface. The concepts and comparisons behind these simplified models have led to misunderstandings regarding the role of skull and scalp in MEG. In this work, we assess the forward-model errors due to different skull/scalp approximations and due to differences and errors in model geometries. We built five anatomical models of a volunteer using a set of T1-weighted MR scans and three common toolboxes. Three of the models represented typical models in experimental MEG, one was manually constructed, and one contained a major segmentation error at the skull base. For these anatomical models, we built forward models using four simplified approaches and a three-shell BEM approach that has been used as reference in previous studies. Our reference model contained in addition the skull fine-structure (spongy bone). We computed signal topographies for cortically constrained sources in the left hemisphere and compared the topographies using relative error and correlation metrics. The results show that the spongy bone has a minimal effect on MEG topographies, and thus the skull approximation of the three-shell model is justified. The three-shell model performed best, followed by the corrected-sphere and single-shell models, whereas the local-spheres and single-sphere models were clearly worse. The three-shell model was the most robust against the introduced segmentation error. In contrast to earlier claims, there was no noteworthy difference in the computation times between the realistically-shaped and sphere-based models, and the manual effort of building a three-shell model and a simplified model is comparable. We thus recommend the realistically-shaped three-shell model for experimental MEG work. 
In cases where this is not possible, we recommend a realistically-shaped corrected-sphere or single-shell model. © 2014 Elsevier Inc.
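The relative error and correlation metrics used to compare signal topographies are commonly defined as below (a sketch of the standard definitions; the paper's exact normalisation may differ):

```python
from math import sqrt

def relative_error(t_model, t_ref):
    """RE = ||t_model - t_ref|| / ||t_ref||; sensitive to both amplitude
    and shape differences between two sensor topographies."""
    num = sqrt(sum((a - b) ** 2 for a, b in zip(t_model, t_ref)))
    den = sqrt(sum(b * b for b in t_ref))
    return num / den

def correlation(t_model, t_ref):
    """Pearson correlation: sensitive to topography shape only, so a
    uniformly scaled topography still correlates perfectly."""
    n = len(t_ref)
    ma = sum(t_model) / n
    mb = sum(t_ref) / n
    cov = sum((a - ma) * (b - mb) for a, b in zip(t_model, t_ref))
    va = sqrt(sum((a - ma) ** 2 for a in t_model))
    vb = sqrt(sum((b - mb) ** 2 for b in t_ref))
    return cov / (va * vb)
```

Using both metrics together separates amplitude errors (visible in RE but not in correlation) from shape errors (visible in both), which is why forward-model comparisons typically report the pair.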


Nikander P.,Ericsson AB | Gurtov A.,Aalto University | Henderson T.R.,Boeing Company
IEEE Communications Surveys and Tutorials | Year: 2010

The Host Identity Protocol (HIP) is an internetworking architecture and an associated set of protocols, developed at the IETF since 1999 and reaching their first stable version in 2007. HIP enhances the original Internet architecture by adding a name space used between the IP layer and the transport protocols. This new name space consists of cryptographic identifiers, thereby implementing the so-called identifier/locator split. In the new architecture, the new identifiers are used in naming application level end-points (sockets), replacing the prior identification role of IP addresses in applications, sockets, TCP connections, and UDP-based send and receive system calls. IPv4 and IPv6 addresses are still used, but only as names for topological locations in the network. HIP can be deployed such that no changes are needed in applications or routers. Almost all pre-compiled legacy applications continue to work, without modifications, for communicating with both HIP-enabled and non-HIP-enabled peer hosts. The architectural enhancement implemented by HIP has profound consequences. A number of previously hard networking problems suddenly become much easier. Mobility, multihoming, and baseline end-to-end security integrate neatly into the new architecture. The use of cryptographic identifiers allows enhanced accountability, thereby providing a basis for building up trust more easily. With privacy enhancements, HIP allows good location anonymity, assuring strong identity only towards relevant trusted parties. Finally, the HIP protocols have been carefully designed to take middleboxes into account, providing for overlay networks and enterprise deployment concerns. This article provides an in-depth look at HIP, discussing its architecture, design, benefits, potential drawbacks, and ongoing work. © 2010 IEEE.
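A toy sketch of the identifier/locator split (invented names and a bare hash; real HIP Host Identity Tags are 128-bit, IPv6-compatible values derived from the host's public key as specified in the HIP RFCs, not a SHA-256 prefix):

```python
import hashlib

def host_identity_tag(public_key: bytes) -> str:
    """Derive a stable, fixed-size identifier from a host's public key.
    Applications and sockets bind to this tag instead of an IP address."""
    return hashlib.sha256(public_key).hexdigest()[:32]

# Locator table: the only thing that changes when a host moves or
# switches interfaces (mobility / multihoming) is this mapping.
locators = {}

def update_locator(hit, ip):
    """Rebind a host identity to its current topological location."""
    locators[hit] = ip
```

Because upper layers name peers by the cryptographic tag, a change of IP address becomes a local table update rather than a broken TCP connection, which is the architectural point the article describes.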


Patent
Aalto University and Teknologian Tutkimuskeskus Vtt | Date: 2014-06-17

There is provided an apparatus comprising thresholding means adapted to check if an average frequency of occurrence of timing violations is outside a range; and controlling means adapted to control at least one of a clock frequency, a processing, a heat generation, a bias voltage, a current, and a temperature in a direction to bring the average frequency of occurrence of timing violations into the range if the average frequency of occurrence of timing violations is outside the range.
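Read as pseudocode, one control iteration in the spirit of the claim might look like this (hypothetical rate range and step size; the claim equally covers adjusting bias voltage, current, heat generation or temperature instead of the clock):

```python
def control_step(avg_violation_rate, clock_hz,
                 rate_range=(0.001, 0.01), step_hz=1_000_000):
    """If the average frequency of timing violations is outside [lo, hi],
    nudge the clock frequency in the direction that brings it back into
    the range; otherwise leave it unchanged."""
    lo, hi = rate_range
    if avg_violation_rate > hi:
        return clock_hz - step_hz   # too many violations: slow down
    if avg_violation_rate < lo:
        return clock_hz + step_hz   # margin to spare: speed up
    return clock_hz
```

Keeping the violation rate near a small but non-zero target lets the circuit run as fast (or as low-voltage) as conditions allow, instead of using a fixed worst-case margin.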


Patent
Aalto University and Teknologian Tutkimuskeskus Vtt | Date: 2015-04-08

An apparatus, comprising a clock adapted to provide a clock signal alternating with a cycle between a first level and a second level if a timing violation is not detected; a first latch adapted to be clocked such that it passes a first signal when the clock signal is at the first level; a second combinational logic adapted to output a second signal based on the first signal passed through the first latch; a second latch adapted to be clocked such that it passes the second signal when the clock signal is at the second level; a detecting means adapted to detect the timing violation of at least one of the first signal and of the second signal; a time stretching means adapted to stretch, if the timing violation is detected, the clock such that the clock alternates between the first level and the second level with a delay.


Patent
Aalto University and Teknologian Tutkimuskeskus Vtt | Date: 2014-10-02

An apparatus, comprising a clock adapted to provide a clock signal alternating with a cycle between a first level and a second level if a timing violation is not detected; a first latch adapted to be clocked such that it passes a first signal when the clock signal is at the first level; a second combinational logic adapted to output a second signal based on the first signal passed through the first latch; a second latch adapted to be clocked such that it passes the second signal when the clock signal is at the second level; a detecting means adapted to detect the timing violation of at least one of the first signal and of the second signal; a time stretching means adapted to stretch, if the timing violation is detected, the clock such that the clock alternates between the first level and the second level with a delay.


News Article | October 31, 2016
Site: phys.org

Researchers at Aalto University and the University of Jyväskylä have developed a new method of measuring microwave signals extremely accurately. This method can be used for processing quantum information, for example, by efficiently transforming signals from microwave circuits to the optical regime. If you are trying to tune in a radio station but the tower is too far away, the signal gets distorted by noise. The noise results mostly from having to amplify the information carried by the signal in order to transfer it into an audible form. According to the laws of quantum mechanics, all amplifiers add noise. In the early 1980s, US physicist Carlton Caves proved theoretically that the Heisenberg uncertainty principle for such signals requires that at least half an energy quantum of noise must be added to the signal. In everyday life, this kind of noise does not matter, but researchers around the world have aimed to create amplifiers that would come close to Caves' limit. 'The quantum limit of amplifiers is essential for measuring delicate quantum signals, such as those generated in quantum computing or quantum mechanical measuring, because the added noise limits the size of signals that can be measured', explains Professor Mika Sillanpää. So far, the solution for getting closest to the limit is an amplifier based on superconducting tunnel junctions developed in the 1980s, but this technology has its problems. Led by Sillanpää, the researchers from Aalto and the University of Jyväskylä combined a nanomechanical resonator – a vibrating nanodrum – with two superconducting circuits, i.e. cavities. 'As a result, we have made the most accurate microwave measurement with nanodrums so far', explains Caspar Ockeloen-Korppi from Aalto University, who conducted the actual measurement. In addition to the microwave measurement, this device enables transforming quantum information from one frequency to another while simultaneously amplifying it. 
'This would for example allow transferring information from superconducting quantum bits to the "flying qubits" in the visible light range and back', envision the creators of the theory for the device, Tero Heikkilä, Professor at the University of Jyväskylä, and Academy Research Fellow Francesco Massel. Therefore, the method has potential for data encryption based on quantum mechanics, i.e. quantum cryptography, as well as other applications. More information: C. F. Ockeloen-Korppi et al. Low-Noise Amplification and Frequency Conversion with a Multiport Microwave Optomechanical Device, Physical Review X (2016). DOI: 10.1103/PhysRevX.6.041024
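Caves' half-quantum limit can be made concrete with a few lines of arithmetic: a phase-preserving amplifier must add at least half an energy quantum, (1/2)ħω, of noise at the signal frequency. A minimal sketch (the 5 GHz signal frequency is an assumed illustration, not a figure from the article):

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
kB = 1.380649e-23       # Boltzmann constant, J/K

def half_quantum_noise(f_hz):
    """Minimum added noise energy (Caves' limit) for a phase-preserving
    amplifier at frequency f_hz, and the equivalent noise temperature."""
    omega = 2 * math.pi * f_hz
    energy = 0.5 * hbar * omega  # half an energy quantum, in joules
    temperature = energy / kB    # equivalent temperature, in kelvin
    return energy, temperature

e, t = half_quantum_noise(5e9)  # an assumed 5 GHz microwave signal
print(f"added noise: {e:.3e} J ~ {t*1000:.0f} mK")  # → added noise: 1.657e-24 J ~ 120 mK
```

The tiny equivalent noise temperature (about a tenth of a kelvin) is why such measurements are done in dilution refrigerators.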


News Article | November 19, 2016
Site: www.sciencedaily.com

For businesses using social media, posts with high engagement have the greatest impact on customer spending, according to new research from the University at Buffalo School of Management. Published in the Journal of Marketing, the study assessed social media posts for sentiment (positive, neutral or negative), popularity (engagement) and customers' likelihood to use social media, and found the popularity of a social media post had the greatest effect on purchases. "A neutral or even negative social media post with high engagement will impact sales more than a positive post that draws no likes, comments or shares," says study co-author Ram Bezawada, PhD, associate professor of marketing in the UB School of Management. "This is true even among customers who say their purchase decisions are not swayed by what they read on social media." The researchers studied data from a large specialty retailer with multiple locations in the northeast United States. They combined data about customer participation on the company's social media page with in-store purchases before and after the retailer's social media engagement efforts. They also conducted a survey to determine customers' attitudes toward technology and social media. The study also found that businesses' social posts significantly strengthen the effect of traditional television and email marketing efforts. When social media is combined with TV marketing, customer spending increased by 1.03 percent and cross buying by 0.84 percent. When combined with email marketing, customer spending increased by 2.02 percent and cross buying by 1.22 percent. Cross buying refers to when a customer purchases additional products or services from the same firm. "The clear message here is that social media marketing matters, and managers should embrace it to build relationships with customers," says Bezawada. "Developing a community with a dedicated fan base can lead to a definitive impact on revenues and profits." 
Bezawada collaborated on the project with Ashish Kumar, assistant professor of marketing at Aalto University; Rishika Rishika, clinical assistant professor of marketing at the University of South Carolina; Ramkumar Janakiraman, associate professor of marketing at the University of South Carolina; and P.K. Kannan, the Ralph J. Tyser Professor of Marketing Science at the University of Maryland.
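As a rough illustration of the reported lifts, the percentages can be applied to a hypothetical baseline spend; the baseline of 100 is an assumption for illustration, and only the lift percentages come from the study:

```python
def spend_after_lift(base_spend, lift_pct):
    """Apply a reported percentage lift in customer spending."""
    return base_spend * (1 + lift_pct / 100)

# Hypothetical baseline spend of 100; lift percentages from the study.
with_tv = spend_after_lift(100.0, 1.03)     # social media combined with TV marketing
with_email = spend_after_lift(100.0, 2.02)  # social media combined with email marketing
print(round(with_tv, 2), round(with_email, 2))  # → 101.03 102.02
```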


News Article | October 12, 2016
Site: phys.org

In the basement beneath Aalborg University are 40 small speakers and three subwoofers placed around a narrow walkway with just enough room for a chair. The space is an anechoic chamber where the walls, ceiling and floor are covered with thick, pointed, foam plates that absorb sound that hits the walls. The 43 speakers are set up so that along with a newly developed recording system and an advanced computer program they can reproduce the exact acoustic conditions from any other room. If you put a CD in the drive or start an audio file on a computer, it will sound entirely as it would in the space the laboratory is simulating. The sound lab is the latest advance in the accurate reproduction of acoustic conditions. Years ago, Aalborg University along with Brüel & Kjær, Delta Akustik and Bang & Olufsen developed an artificial head with built-in microphones that could record how the human ear perceives sounds coming from different places. The artificial head was later developed to turn to each side and accurately map how the acoustics of a room affect sound when you move. By reproducing the sound in a pair of headphones with a head-tracker, you can make it sound as if you are in the room. The perceived acoustics are very close to those of the real room and thus you can simulate how speakers and sound systems will work in different rooms.

Not the same with headphones

Recordings with artificial head technology have been used for a number of years to reproduce various acoustic conditions, but the problem has always been that we do not ordinarily experience the sounds of our surroundings through a pair of headphones. In order to more accurately reproduce an acoustic environment, you can enhance the experience considerably by using a loudspeaker setup in an anechoic environment to create a precise spatial illusion. The idea and the system were originally developed by researchers at Aalto University in Finland for studies on sound in concert halls.
The system at AAU is an advancement aimed at smaller spaces such as living rooms and automotive interiors. "With headphones on, it often feels as if all sound is quite close to or inside the head. You do not have the feeling that something comes from farther away – the spatial element is very difficult to recreate," explains PhD student Neo Kaplanis who developed the new sound reproduction system at AAU. "The same goes for the experience of a powerful bass. It's not something we just hear with our ears; it's something that can be felt in the entire body. You simply cannot reproduce it with a pair of headphones." However, in the accurately positioned array of speakers in the new sound lab, you can. By using recordings made with the appropriate recording method we can recreate the sound of any room. "If you wear a blindfold or turn off all the lights in the lab, your ears make you believe that you're in a completely different place; in fact that's how we run the experiments," says Neo Kaplanis.

Better sound in the car

Right now the lab is set up to reproduce the acoustic space in a car, and with good reason. The university's partner, Bang & Olufsen, was until recently one of the world's leading manufacturers of Hi-Fi audio for luxury cars. The automotive department was recently sold to American speaker manufacturer Harman, but their development department is now in Struer, right next to B&O's headquarters. When you develop a sound system for a brand new car, it takes a very long time because you test the car with many different speaker systems that have to be changed along the way. It is a long and costly affair, but because the entire production of new cars is top-secret, audio system manufacturers typically only have a prototype of the car available for a few days. "With the new system we will be able to map the car's acoustic conditions, send the car back to the factory, and then adapt and adjust the audio system with measurements in the lab. 
It makes it possible to develop much higher quality sound," says Søren Bech, Director of Research at Bang & Olufsen, who divides his time between the Struer company and a professorship at AAU's Department of Electronic Systems. The possibilities in the sound lab are not limited to the best speaker solutions for luxury cars. With the new system, in principle the sound of all kinds of spaces can be reproduced, from concert halls to living rooms – even buildings that are still on the drawing board. The setup will thus be an important tool in future research and development projects.
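The core signal-processing idea behind reproducing a room's acoustics is convolving a "dry" (anechoic) recording with the room's measured impulse response. The sketch below is a generic illustration of that principle, not the AAU system itself, and the toy signals are assumed:

```python
def auralize(dry, rir):
    """Convolve an anechoic ('dry') signal with a room impulse response (rir):
    each input sample is re-emitted through the room's pattern of echoes."""
    out = [0.0] * (len(dry) + len(rir) - 1)
    for i, x in enumerate(dry):
        for j, h in enumerate(rir):
            out[i + j] += x * h
    return out

# Toy example: a single click played through a 'room' with two reflections.
click = [1.0, 0.0, 0.0]
rir = [1.0, 0.0, 0.5, 0.25]  # direct sound, then echoes at half and quarter strength
print(auralize(click, rir))  # → [1.0, 0.0, 0.5, 0.25, 0.0, 0.0]
```

A real system applies the same operation per loudspeaker channel with measured multichannel impulse responses.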


Flash Physics is our daily pick of the latest need-to-know developments from the global physics community selected by Physics World's team of editors and reporters. A new and extremely precise way of amplifying and measuring tiny microwave signals has been unveiled by physicists at Aalto University and the University of Jyväskylä in Finland. Mika Sillanpää, Tero Heikkilä and colleagues created their detector by combining a micron-sized mechanical resonator resembling a drum with two superconducting microwave cavities. The device is able to amplify a very weak microwave signal with a gain of 41 dB – a factor of about 12,500 – while only adding about four quanta of noise to the signal. This is close to the minimum amount of noise possible (the standard quantum limit), which is half a quantum of noise. As well as being able to amplify very weak signals so that they can be measured, the technique could be used in quantum-information systems in which quantum bits of information (qubits) are encoded into microwave signals. Another important feature of the new technology is that it can convert signals from one microwave frequency to another. Writing in Physical Review X, the team suggest that this could be useful for developing quantum-information systems that are based on several different qubit technologies. A series of computer simulations done by scientists in Japan and France provide important insights into how the rings around Saturn and other planets formed – and why the composition of Saturn's rings is different to that of the rings of Neptune and Uranus. Ryuki Hyodo and colleagues at Kobe University, the University of Paris Diderot and the Tokyo Institute of Technology focussed on the "late heavy bombardment" era of the solar system. This happened about four billion years ago and is thought to have involved the inward migration of thousands of Pluto-sized objects from the outer solar system. 
The team first calculated the probability that some of these objects would pass close enough to Jupiter, Saturn, Uranus and Neptune such that they would be broken up by tidal forces. The researchers found that enough fragments would be created and then captured by the giant planets to account for the current rings of Saturn and Uranus. Simulations also revealed that these fragments – some of which would be several kilometres in size – would break up as they orbit the planets to become the circular rings of much smaller objects seen today. The simulations offer a suggestion as to why Saturn's rings are made mostly of ice, whereas the rings of Uranus and Neptune contain much more rock. This, they write in Icarus, is because Saturn is less dense than Uranus and Neptune and therefore the tidal forces it exerted on the Pluto-like objects were weaker. As a result, Saturn's gravity was only able to chip away at the ice on the surface of the passing objects whilst Uranus and Neptune were able to break up the underlying rock. A 2D monolayer of a transition metal dichalcogenide (TMDC) can be used to generate pairs of photons, say researchers at the Julius-Maximilians-Universität Würzburg in Germany. TMDCs behave like semiconductors and are often used to make ultra-small and energy-efficient chips. Christian Schneider, Sven Höfling and colleagues produced monolayers of tungsten diselenide by using a piece of tape to peel off thin layers from a multi-layer film of the TMDC. This involved repeatedly peeling the film so that thinner and thinner layers are made until the material on the tape is only one atomic layer thick. This layer is then cooled down to a temperature just above absolute zero and excited with a laser, causing it to emit single photons under specific conditions. "We were now able to show that a specific type of excitement produces not one but exactly two photons," says Schneider. "The light particles are generated in pairs so to speak." 
Two-photon sources are of interest to those carrying out quantum cryptography and other such protocols that involve entanglement. The research is described in Nature Communications.
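The gain figure quoted above is easy to check: a gain in decibels converts to a linear power factor as 10^(dB/10). A quick sketch:

```python
def db_to_power_gain(db):
    """Convert a gain in decibels to a linear power gain factor: 10**(dB/10)."""
    return 10 ** (db / 10)

gain = db_to_power_gain(41)  # the 41 dB gain reported for the device
print(round(gain))           # → 12589, i.e. the "factor of about 12,500" quoted
```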


Scientists at Aalto University (Finland) and Amherst College (USA) have created knotted solitary waves, or knot solitons, in the quantum-mechanical field describing a gas of superfluid atoms, also known as a Bose–Einstein condensate. In contrast to knotted ropes, these quantum knots exist in a field that assumes a certain direction at every point of space. The field segregates into an infinite number of linked rings, each with its own field direction. The resulting structure is topologically stable as it cannot be separated without breaking the rings. In other words, one cannot untie the knot within the superfluid unless one destroys the state of the quantum matter. "To make this discovery, we exposed a rubidium condensate to rapid changes of a specifically tailored magnetic field, tying the knot in less than a thousandth of a second. After we learned how to tie the first quantum knot, we have become rather good at it. Thus far, we have tied several hundred such knots," says Professor David Hall, Amherst College. The scientists tied the knot by squeezing the structure into the condensate from its outskirts. This required them to initialize the quantum field to point in a particular direction, after which they suddenly changed the applied magnetic field to bring an isolated null point, at which the magnetic field vanishes, into the center of the cloud. Then they just waited for less than a millisecond for the magnetic field to do its trick and tie the knot. "For decades, physicists have been theoretically predicting that it should be possible to have knots in quantum fields, but nobody else has been able to make one. Now that we have seen these exotic beasts, we are really excited to study their peculiar properties. Importantly, our discovery connects to a diverse set of research fields including cosmology, fusion power, and quantum computers," says research group leader Mikko Möttönen, Aalto University. 
Knots have been used and appreciated by human civilizations for thousands of years. For example, they have enabled great seafaring expeditions and inspired intricate designs and patterns. The ancient Inca civilization used a system of knots known as quipu to store information. In modern times, knots have been thought to play important roles in the quantum-mechanical foundations of nature, although they have thus far remained unseen in quantum dynamics. In everyday life, knots are typically tied on ropes or strings with two ends. However, these kinds of knots are not what mathematicians call topologically stable, since they can be untied without cutting the rope. In stable knots, the ends of the ropes are glued together. Such knots can be relocated within the rope but cannot be untied without scissors. Mathematically speaking, the created quantum knot realizes a mapping referred to as the Hopf fibration, which was discovered by Heinz Hopf in 1931. The Hopf fibration is still widely studied in physics and mathematics. Now it has been experimentally demonstrated for the first time in a quantum field. "This is the beginning of the story of quantum knots. It would be great to see even more sophisticated quantum knots appear, such as those with knotted cores. It would also be important to create these knots in conditions where the state of the quantum matter would be inherently stable. Such a system would allow for detailed studies of the stability of the knot itself," says Mikko Möttönen.
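For reference, the Hopf map mentioned here can be written down explicitly; the following standard form is supplied for illustration and is not taken from the article. Writing a point of the three-sphere $S^3$ as a pair of complex numbers $(z_1, z_2)$ with $|z_1|^2 + |z_2|^2 = 1$:

```latex
h(z_1, z_2) \;=\; \bigl( 2\,\mathrm{Re}(z_1 \bar{z}_2),\; 2\,\mathrm{Im}(z_1 \bar{z}_2),\; |z_1|^2 - |z_2|^2 \bigr) \;\in\; S^2
```

The preimage of every point of $S^2$ under $h$ is a circle in $S^3$, and any two such circles are linked, which mirrors the "infinite number of linked rings" described above.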


News Article | April 19, 2016
Site: www.sciencenews.org

Information may seem ethereal, given how easily we forget phone numbers and birthdays. But scientists say it is physical, and if a new study is correct, that goes for quantum systems, too. Although pages of text or strings of bits seem easily erased with the press of a button, the act of destroying information has tangible physical impact, according to a principle proposed in 1961 by physicist Rolf Landauer. Deleting information is associated with an increase in entropy, or disorder, resulting in the release of a certain amount of heat for each erased bit. Even the most efficient computer would still output heat when irreversibly scrubbing out data. This principle has been verified experimentally for systems that follow the familiar laws of classical physics. But the picture has remained fuzzy for quantum mechanical systems, in which particles can be in multiple states at once and their fates may be linked through the spooky process of quantum entanglement. Now a team of scientists reports April 13 in Proceedings of the Royal Society A that Landauer’s principle holds even in that wild quantum landscape. “Essentially what they’ve done is test [this principle] at a very detailed and quantitative way,” says physicist John Bechhoefer of Simon Fraser University in Burnaby, Canada, who was not involved with the research. “And they’re showing that this works in a quantum system, which is a really important step.” Testing Landauer’s principle in the quantum realm could be important for understanding the fundamental limits of quantum computers, Bechhoefer says. To verify Landauer’s principle, the researchers used a system of three qubits — the quantum version of the bits found in a typical computer — made from trifluoroiodoethylene, a molecule which has three fluorine atoms. The nuclei of these three fluorine atoms have a quantum property known as spin. That “spin” can be clockwise or counterclockwise, serving the same purpose as a 0 or 1 for a standard bit. 
The first qubit, which researchers called the “system,” contains the information to be erased. According to Landauer’s principle, when the information is erased, heat will be generated and energy will flow to the second qubit, known as the “reservoir.” Just as computer scientists can perform operations on the bits in a typical computer (adding or subtracting numbers, for instance), the researchers can apply operations to the fluorine qubits by using pulses of radio waves to tweak the nuclear spins. But making measurements of quantum systems is tricky, says physicist Lucas Céleri of Federal University of Goiás in Brazil, a leader of the research team. “In a quantum world, every time you measure the system, you interact with it,” thereby changing it. So the researchers used a work-around. The third qubit is coupled to the reservoir and can be used to measure the heat generated without mucking up the qubits of interest. When the researchers erased information, they found heat was generated as expected from Landauer’s principle. They looked at the average of multiple measurements, because quantum fluctuations mean that any single trial won’t necessarily conform to the principle. “It’s a very nice demonstration of Landauer’s principle in a quantum system, cleverly conceived and well carried out,” says quantum physicist Seth Lloyd of MIT, who was not involved with the research. But some researchers suggest there is more work to be done. "It is a carefully executed experiment with three interacting qubits,” physicists Jukka Pekola of Aalto University in Finland and Jonne Koski of ETH Zurich wrote in an e-mail. But in a traditional test of Landauer’s principle, the reservoir would not be a single qubit, but a large “heat bath” of many particles. The researchers therefore had to account for additional entropy introduced as a result of their single-particle reservoir. 
The next step, Pekola and Koski say, would be to investigate a qubit that interacts with a reservoir consisting of more particles to perform a more conventional test of Landauer’s principle at the quantum level.
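Landauer's bound is simple enough to evaluate directly: irreversibly erasing one bit at temperature T must dissipate at least k_B·T·ln 2 of heat. A minimal sketch (the 300 K room temperature is an assumed illustrative value):

```python
import math

kB = 1.380649e-23  # Boltzmann constant, J/K

def landauer_bound(temperature_k):
    """Minimum heat released by irreversibly erasing one bit: kB * T * ln(2)."""
    return kB * temperature_k * math.log(2)

print(f"{landauer_bound(300):.3e} J per erased bit")  # → 2.871e-21 J per erased bit
```

The bound shrinks linearly with temperature, which is one reason the quantum-regime tests described above are done at cryogenic temperatures, where the signal is far harder to resolve.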


News Article | October 31, 2016
Site: www.sciencedaily.com

Extremely accurate measurements of microwave signals can potentially be used for data encryption based on quantum cryptography and other purposes. Researchers at Aalto University and the University of Jyväskylä have developed a new method of measuring microwave signals extremely accurately. This method can be used for processing quantum information, for example by efficiently transforming signals from microwave circuits to the optical regime. If you are trying to tune in a radio station but the tower is too far away, the signal gets distorted by noise. The noise results mostly from having to amplify the information carried by the signal in order to transfer it into an audible form. According to the laws of quantum mechanics, all amplifiers add noise. In the early 1980s, US physicist Carlton Caves proved theoretically that the Heisenberg uncertainty principle for such signals requires that at least half an energy quantum of noise must be added to the signal. In everyday life, this kind of noise does not matter, but researchers around the world have aimed to create amplifiers that would come close to Caves' limit. 'The quantum limit of amplifiers is essential for measuring delicate quantum signals, such as those generated in quantum computing or quantum mechanical measuring, because the added noise limits the size of signals that can be measured', explains Professor Mika Sillanpää. So far, the solution for getting closest to the limit is an amplifier based on superconducting tunnel junctions developed in the 1980s, but this technology has its problems. Led by Sillanpää, the researchers from Aalto and the University of Jyväskylä combined a nanomechanical resonator -- a vibrating nanodrum -- with two superconducting circuits, i.e. cavities. 'As a result, we have made the most accurate microwave measurement with nanodrums so far', explains Caspar Ockeloen-Korppi from Aalto University, who conducted the actual measurement. 
In addition to the microwave measurement, this device enables transforming quantum information from one frequency to another while simultaneously amplifying it. 'This would for example allow transferring information from superconducting quantum bits to the "flying qubits" in the visible light range and back', envision the creators of the theory for the device, Tero Heikkilä, Professor at the University of Jyväskylä, and Academy Research Fellow Francesco Massel. Therefore, the method has potential for data encryption based on quantum mechanics, i.e. quantum cryptography, as well as other applications. The research team also included researchers Juha-Matti Pirkkalainen and Erno Damskägg from Aalto University. The work was published in Physical Review X, one of the most distinguished journals in physics, on 28 October 2016.


Events after the end of the review period: This interim report has been prepared in accordance with IAS 34, and the disclosed information is unaudited. “SRV's revenue and operating profit further increased during the third quarter. We have received numerous orders for major projects this year, such as a new central hospital in Central Finland, and forging ahead to complete these large-scale projects is already having a favourable effect on our revenue. Although we have not received any major new orders in recent months, our order backlog remains at a record high and we're expecting more interesting new entries in our order book at the end of the year. The lengthy recession in Russia is naturally being reflected in our operations, for example, in temporary rent discounts granted to shopping centre tenants. In view of the circumstances, our shopping centres in St Petersburg are performing excellently. In the last few months, Pearl Plaza has broken its previous records for both visitor numbers and earnings. Business also got off to a good start at the Okhta Mall. The shopping centre opened in August, and in October it received a very distinguished award in the European Property Awards 2016 competition. The Russian shopping centre market has become increasingly rouble-based, and we therefore changed our operating currency to the rouble in September. However, this will leave us more susceptible to currency exchange rate fluctuations. One of our strategic objectives is to improve profitability, and we are still a long way off achieving this. On a positive note, there has been a clear increase in the number of developer-contracted housing projects, particularly in the capital city region, and we will complete about 500 developer-contracted units this year, the majority of which will be visible in our Q4 result. Thanks to new orders and our personnel's strong commitment, we're expecting plenty of good things for the rest of 2016,” says CEO Juha Pekka Ojala. 
In the January–September period of 2016, the Group's order backlog rose to EUR 1,888.1 (1,517.5) million (up 24.4%). The largest new projects announced in early 2016 included a new central hospital in Central Finland, the Ring Road I tunnel project, a contractor agreement for the expansion of Tapiola city centre, as well as the construction of a new campus building for Aalto University and retail premises in the Metro Centre, both in Otaniemi, Espoo. The order backlog saw growth in operations in Finland in particular, largely in the second quarter. No significant new orders were announced in July–September. The Group's revenue rose by 12.8 per cent to EUR 555.5 (492.5) million, largely thanks to increased revenue from business construction in Finland. The major projects agreed on during the spring have entered the construction phase and are now generating revenue. Figures for the comparison period include excavation and other infrastructure work that was completed at the REDI site prior to the official start-up decision and was recognised as revenue (EUR 40 million) in January–March 2015 in accordance with the level of completion. The Group's operating profit rose to EUR 11.4 (7.5) million, primarily due to improved profitability and higher revenue in SRV's operations in Finland. However, the rise in construction costs associated with the REDI shopping centre and parking facility weakened SRV's operating profit. Operating profit in Russia also weakened, even though a change in the rouble exchange rate improved the earnings of Russian associated companies by EUR 0.5 million. Operating profit and its relative level are also lowered by the elimination of a share equivalent to SRV's ownership from the profit margins of three shopping centre projects under construction (Okhta Mall, 4Daily and REDI), which will be recognised as income only when the investment is sold. The Group's profit before taxes was EUR -3.1 (2.7) million. 
The result was weakened by higher interest expenses and a EUR -7.8 million fair value revaluation of a ten-year interest rate hedge. The Group's earnings per share were EUR -0.11 (EUR -0.02). Earnings per share were weakened not only by the lower result, but also by the cost of repaying the hybrid bond. Quarterly variation in SRV's operating profit and operating profit margin is affected by several factors. SRV’s own projects are recognised as income upon delivery; the part of the order backlog that is continuously recognised as income based on the level of completion mainly consists of low-margin contracting; and the nature of the company's operations (project development). The Group's equity ratio stood at 37.8 per cent (42.5% 12/2015). Gearing was 99.7 per cent (83.3% 12/2015). The changes in equity ratio and gearing were due to an increase in interest-bearing debt. Net debt totalled EUR 285.0 (248.3) million and liquid assets EUR 37.9 (28.1) million. In the July-September period of 2016, the Group’s revenue rose by 24.5 per cent to EUR 193.1 (155.1) million. Growth in revenue was driven particularly by large business construction projects. The Group’s operating profit increased to EUR 7.3 (4.1) million thanks to improved margins and higher revenue in operations in Finland. Operating profit in International operations improved slightly, to EUR 1.2 (-0.3) million. The Group's profit before taxes was EUR 3.9 (0.1) million. The result was weakened by higher interest expenses and a EUR -1.2 million fair value revaluation of a 10-year interest rate hedge. *) The comparison data has been adjusted to reflect the share issue. All forward-looking statements in this review are based on management’s current expectations and beliefs about future events, and actual results may differ materially from the expectations and beliefs such statements contain. 
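The growth percentages in the report follow directly from the reported figures; a quick arithmetic check (figures in EUR million, taken from the text above):

```python
def growth_pct(current, prior):
    """Year-on-year growth in percent."""
    return (current / prior - 1) * 100

# Figures (EUR million) quoted in the interim report:
backlog = growth_pct(1888.1, 1517.5)  # order backlog, reported as up 24.4%
jan_sep = growth_pct(555.5, 492.5)    # January-September revenue, reported as up 12.8%
jul_sep = growth_pct(193.1, 155.1)    # July-September revenue, reported as up 24.5%
print(round(backlog, 1), round(jan_sep, 1), round(jul_sep, 1))  # → 24.4 12.8 24.5
```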
This is a summary of SRV's interim report; the complete report is attached to this release as a PDF file and is also available on the company website. The interim report will be presented to the media and analysts at a joint press conference on Thursday, 3 November at 11.00 a.m. at the Living Lab test environment in Suvilahti, Kaasutehtaankatu 1, 00540 Helsinki. President & CEO Juha Pekka Ojala and CFO Ilkka Pitkänen will be present at the event. A live webcast of the press conference will be available on the company's website www.srv.fi/en/investors. For further information, please contact:
You can also find us on social media:


News Article | November 20, 2015
Site: phys.org

Superconductors are marvellous materials that are able to transport electric current and energy without dissipation. For this reason, they are extremely useful for constructing magnets that can generate enormous magnetic fields without melting. They have found important applications as essential components of the Large Hadron Collider particle accelerator at CERN, levitating trains, and the magnetic resonance imaging tool widely used for medical purposes. Yet, one reason why the waiting list for an MRI scan is sometimes so long is the cost of the equipment. Indeed, superconductors have to be cooled down to below minus one hundred degrees centigrade to manifest their unique properties, and this implies the use of expensive refrigerators. An important open problem in modern materials science is to understand the mechanism behind superconductivity, and in particular, it would be highly desirable to be able to predict with precision the critical temperature below which the superconducting transition occurs. In fact, there are no currently available theories that can provide accurate predictions for the critical temperature of the most useful superconductive materials. This is unfortunate since a sound understanding of the mechanism of superconductivity is essential if we are interested in synthesizing materials that may one day achieve superconductivity at room temperature, without refrigeration. A potential breakthrough has recently been put forward by researchers at Aalto University. Their study builds on the theory of electronic motion in crystals developed by Felix Bloch in 1928. It is an interesting consequence of quantum mechanics that an electron that feels the electric charge of an ordered array of atoms (a crystal) can move as freely as it would in free space. However, the crystal has the nontrivial effect of modifying the apparent mass of the electron. 
Indeed, electrons appear to be heavier (or lighter) in a crystal than in free space, which means that one has to push them more (or less) to make them move. This fact has very important consequences since electrons with a larger apparent mass lead to a larger critical temperature for superconductivity. Ideally, to maximize the critical temperature, we should consider electrons with infinite apparent mass or, to use the jargon of physicists, electrons in a "flat band". Naively, we could expect that electrons with infinite mass would be stuck in place, unable to carry any current, and the essential property of superconductivity would be lost. "I was very intrigued to find out how a supercurrent, that is, electrical current, could be carried by electrons in a flat band. We had some hints that this is in fact possible, but not a general solution of this paradox," says Aalto physics Professor Päivi Törmä. Surprisingly, in the world of quantum mechanics, an infinite mass does not necessarily prevent the flow of electric current. The key to this mystery is to remember that electrons are quantum mechanical objects with both particle- and wave-like features. Prof. Päivi Törmä and postdoctoral researcher Sebastiano Peotta have found that the mass alone, which is a property of particles, is not sufficient to completely characterize electrons in solids. We also need something called the "quantum metric". A metric tells how distances are measured; for instance, the distance between two points is different on a sphere than on a flat surface. It turns out that the quantum metric measures the spread of the electron waves in a crystal. This spread is a wave-like property. Electrons with the same apparent mass, possibly infinite, can be associated with waves that are more or less spread out in the crystal, as measured by the quantum metric. The larger the quantum metric, the larger the supercurrent that the superconductor can carry. 
"Our results are very positive," says Peotta, "they open a novel route for engineering superconductors with high critical temperature. If our predictions are verified, common sense will suffer a big blow, but I am fine with that." Another surprising finding is that the quantum metric is intimately related to an even more subtle wave-like property of the electrons quantified by an integer number called the Chern number. The Chern number is an example of a topological invariant, namely a mathematical property of objects that is not changed under an arbitrary but gentle (not disruptive) deformation of the object itself. A simple example of a topological invariant is the number of twists of a belt. A belt with a single twist is called a Möbius band in mathematics and is shown in the figure. A twist can be moved forward and backward in the belt but never removed unless the belt is broken. The number of twists is always an integer. In the same way, the Chern number can take only integer values and cannot be changed unless a drastic change is performed on the electron waves. If the Chern number is nonzero, it is not possible to unknot the electron waves centred at neighbouring atoms of the material. As a consequence, the waves have to overlap, and it is this finite overlap that ensures superconductivity, even in a flat band. Aalto researchers have thus discovered an unexpected connection between superconductivity and topology. Finland is a leader in this type of research, as flat band superconductivity was already predicted to occur at the surface of a certain kind of graphite, a result of the theoretical work of Grigory Volovik and Nikolai Kopnin (Aalto University) and Tero Heikkilä (University of Jyväskylä). To launch the next stage of discovery, Peotta and Törmä's theoretical predictions could now be tested experimentally by collaborators in ultracold atomic gas systems. 
"The connections I made this summer as a guest professor at ETH Zurich will be very useful for our further research on the topic," reveals Törmä. "We are also intrigued by the fact that the physics we describe may be important for known superconductive materials, but it has not been noticed yet," adds Peotta. More information: Sebastiano Peotta et al. Superfluidity in topologically nontrivial flat bands, Nature Communications (2015). DOI: 10.1038/ncomms9944


News Article | January 21, 2016
Site: www.nanotech-now.com

Abstract: The very first experimental observations of knots in quantum matter have just been reported in Nature Physics by scientists at Aalto University (Finland) and Amherst College (USA). The scientists created knotted solitary waves, or knot solitons, in the quantum-mechanical field describing a gas of superfluid atoms, also known as a Bose-Einstein condensate. In contrast to knotted ropes, the created quantum knots exist in a field that assumes a certain direction at every point of space. The field segregates into an infinite number of linked rings, each with its own field direction. The resulting structure is topologically stable as it cannot be separated without breaking the rings. In other words, one cannot untie the knot within the superfluid unless one destroys the state of the quantum matter. "To make this discovery, we exposed a rubidium condensate to rapid changes of a specifically tailored magnetic field, tying the knot in less than a thousandth of a second. After we learned how to tie the first quantum knot, we have become rather good at it. Thus far, we have tied several hundred such knots," says Professor David Hall, Amherst College. The scientists tied the knot by squeezing the structure into the condensate from its outskirts. This required them to initialize the quantum field to point in a particular direction, after which they suddenly changed the applied magnetic field to bring an isolated null point, at which the magnetic field vanishes, into the center of the cloud. Then they just waited for less than a millisecond for the magnetic field to do its trick and tie the knot.

"For decades, physicists have been theoretically predicting that it should be possible to have knots in quantum fields, but nobody has been able to make one. Now that we have seen these exotic beasts, we are really excited to study their peculiar properties. Importantly, our discovery connects to a diverse set of research fields including cosmology, fusion power, and quantum computers," says research group leader Mikko Möttönen, Aalto University.

Knots have been used and appreciated by human civilizations for thousands of years. For example, they have enabled great seafaring expeditions and inspired intricate designs and patterns. The ancient Inca civilization used a system of knots known as quipu to store information. In modern times, knots have been thought to play important roles in the quantum-mechanical foundations of nature, although they have thus far remained unseen in quantum dynamics. In everyday life, knots are typically tied on ropes or strings with two ends. However, these kinds of knots are not what mathematicians call topologically stable, since they can be untied without cutting the rope. In stable knots, the ends of the ropes are glued together. Such knots can be relocated within the rope but cannot be untied without scissors. Mathematically speaking, the created quantum knot realizes a mapping referred to as the Hopf fibration, which was discovered by Heinz Hopf in 1931. The Hopf fibration is still widely studied in physics and mathematics. Now it has been experimentally demonstrated for the first time in a quantum field.

"This is the beginning of the story of quantum knots. It would be great to see even more sophisticated quantum knots appear, such as those with knotted cores. It would also be important to create these knots in conditions where the state of the quantum matter is inherently stable. Such a system would allow for detailed studies of the stability of the knot itself," says Mikko Möttönen.

Funding: This material is based upon work supported by the National Science Foundation under Grant No. PHY-1205822, the Academy of Finland (grant nos. 251748, 284621, 135794, and 272806), the Finnish Doctoral Programme in Computational Sciences, and the Magnus Ehrnrooth Foundation. 
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. Contacts: Mikko Möttönen Docent, Professor 358-50-594-0950 (Time zone: GMT +2) Aalto University and University of Jyväskylä http://physics.aalto.fi/en/groups/qcd/ Mikko Möttönen is the leader of the theoretical and computational part of the research. Theoretical insight and computational modelling were very important for the success of the creation of the knots. The modelling was carried out using the facilities at CSC -- IT Center for Science Ltd and at Aalto University (Aalto Science-IT project). David S. Hall, Professor Amherst College 1-413-542-2072 (Time zone: GMT -5) http://www3.amherst.edu/~halllab/ David S. Hall is the leader of the experimental part of the research. The quantum knots were created in the Physics Laboratories at Amherst College, United States of America.


Oikarinen E.,Aalto University | Woltran S.,Vienna University of Technology
Artificial Intelligence | Year: 2011

Since argumentation is an inherently dynamic process, it is of great importance to understand the effect of incorporating new information into given argumentation frameworks. In this work, we address this issue by analyzing equivalence between argumentation frameworks under the assumption that the frameworks in question are incomplete, i.e. further information might be added later to both frameworks simultaneously. In other words, instead of the standard notion of equivalence (which holds between two frameworks if they possess the same extensions), we require here that frameworks F and G remain equivalent when conjoined with any further framework H. Due to the nonmonotonicity of argumentation semantics, this concept is different from (but obviously implies) the standard notion of equivalence. We thus call our new notion strong equivalence and study how strong equivalence can be decided with respect to the most important semantics for abstract argumentation frameworks. We also consider variants of strong equivalence in which we define equivalence with respect to the sets of arguments credulously (or skeptically) accepted, and restrict strong equivalence to augmentations H in which no new arguments are raised. © 2011 Elsevier B.V. All rights reserved.
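The standard notion of equivalence mentioned above can be made concrete with a toy brute-force enumerator for stable extensions, one of the semantics considered in this line of work. This is an illustrative sketch only: deciding strong equivalence quantifies over all conjoined frameworks H, which the paper handles via syntactic characterisations rather than enumeration, and the frameworks F and G below are invented examples.

```python
from itertools import combinations

# Toy brute-force enumerator for stable extensions of an abstract
# argumentation framework AF = (A, R).

def stable_extensions(args, attacks):
    """S is stable iff it is conflict-free and attacks every argument outside S."""
    atk = set(attacks)
    exts = set()
    for r in range(len(args) + 1):
        for subset in combinations(sorted(args), r):
            s = set(subset)
            conflict_free = not any((a, b) in atk for a in s for b in s)
            attacks_rest = all(any((a, b) in atk for a in s)
                               for b in set(args) - s)
            if conflict_free and attacks_rest:
                exts.add(frozenset(s))
    return exts

# Invented example: F and G share the same stable extensions ({a}), so they
# are equivalent in the standard sense; strong equivalence would additionally
# demand this after conjoining any further framework H.
F = ({"a", "b"}, {("a", "b")})
G = ({"a", "b"}, {("a", "b"), ("b", "b")})
print(stable_extensions(*F) == stable_extensions(*G))   # True
```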


Jylanki P.,Aalto University | Vanhatalo J.,University of Helsinki | Vehtari A.,Aalto University
Journal of Machine Learning Research | Year: 2011

This paper considers the robust and efficient implementation of Gaussian process regression with a Student-t observation model, which has a non-log-concave likelihood. The challenge with the Student-t model is the analytically intractable inference, which is why several approximate methods have been proposed. Expectation propagation (EP) has been found to be a very accurate method in many empirical studies, but the convergence of EP is known to be problematic with models containing non-log-concave site functions. In this paper we illustrate the situations where standard EP fails to converge and review different modifications and alternative algorithms for improving the convergence. We demonstrate that convergence problems may occur during the type-II maximum a posteriori (MAP) estimation of the hyperparameters and show that standard EP may not converge at the MAP values for some difficult data sets. We present a robust implementation which relies primarily on parallel EP updates and uses a moment-matching-based double-loop algorithm with adaptively selected step size in difficult cases. The predictive performance of EP is compared with Laplace, variational Bayes, and Markov chain Monte Carlo approximations. © 2011 Pasi Jylänki, Jarno Vanhatalo and Aki Vehtari.


Chaudhari S.,Aalto University | Lunden J.,Aalto University | Koivunen V.,Aalto University | Poor H.V.,Princeton University
IEEE Transactions on Signal Processing | Year: 2012

This paper focuses on the performance analysis and comparison of hard decision (HD) and soft decision (SD) based approaches for cooperative spectrum sensing in the presence of reporting channel errors. For cooperative sensing (CS) in cognitive radio networks, a distributed detection approach with displaced sensors and a fusion center (FC) is employed. For HD based CS, each secondary user (SU) sends a one-bit hard local decision to the FC. For SD based CS, each SU sends a quantized version of a local decision statistic such as the log-likelihood ratio or any suitable sufficient statistic. The decision statistics are sent through channels that may cause errors. The effects of channel errors are incorporated in the analysis through the bit error probability (BEP). For HD based CS, the counting rule or the K-out-of-N rule is used at the FC. For SD based CS, the optimal fusion rule in the presence of reporting channel errors is derived and its distribution is established. A comparison of the two schemes is conducted to show that there is a performance gain in using SD based CS even in the presence of reporting channel errors. In addition, a BEP wall is shown to exist for CS such that if the BEP is above a certain value, then irrespective of the received signal strength corresponding to the primary user, the constraints on false alarm probability and detection probability cannot be met. It is shown that the performance of HD based CS is very sensitive to the BEP wall phenomenon while the SD based CS is more robust in that sense. © 2006 IEEE.
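As a rough illustration of the hard-decision counting rule described above, the following Monte-Carlo sketch flips each reported bit with probability equal to the BEP and applies a K-out-of-N decision at the fusion centre. All parameters (N = 10, K = 5, local detection probability 0.8, BEP 0.2) are invented for illustration and are not taken from the paper.

```python
import random

# Monte-Carlo sketch of the hard-decision counting (K-out-of-N) rule at the
# fusion centre, with each reported bit flipped with probability equal to
# the reporting-channel BEP.

def fused_detection_prob(n_users, k, p_d_local, bep, trials=20000, seed=0):
    """Fraction of trials in which at least k of n_users reported detections
    reach the fusion centre, under a primary-user-present hypothesis."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        votes = 0
        for _ in range(n_users):
            bit = 1 if rng.random() < p_d_local else 0   # local hard decision
            if rng.random() < bep:                       # reporting channel error
                bit ^= 1
            votes += bit
        if votes >= k:
            hits += 1
    return hits / trials

clean = fused_detection_prob(10, 5, 0.8, bep=0.0)
noisy = fused_detection_prob(10, 5, 0.8, bep=0.2)
print(clean, noisy)   # reporting errors degrade the fused detection probability
```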


Raij T.T.,Aalto University | Raij T.T.,University of Helsinki | Riekki T.J.J.,University of Helsinki | Hari R.,Aalto University
Schizophrenia Research | Year: 2012

Background: Poor insight is a central characteristic of psychosis and schizophrenia. Accumulating evidence indicates that cortical midline structures (CMS) and frontopolar cortex (FPC), both of which are associated with insight-related processing in healthy subjects, are among the most affected brain structures in schizophrenia. However, the hypothesis that direct associations between function of these brain regions and poor insight in schizophrenia exist has not been tested previously. Methods: We studied 21 patients with schizophrenia and 17 healthy control subjects with structural and functional magnetic resonance imaging during a clinical insight task and a comparable control task. We assessed the level of insight, depression, positive and negative symptoms, and neurocognitive function, then adjusted correlation between insight and insight-task-related brain activation for potential confounders. Voxel-based morphometry was used to compare brain volumes between groups. Results: Insight correlated strongly with the activation of the CMS and the FPC during the clinical insight tasks, independently of potential confounders. The CMS activation was stronger during the insight task than during the control task in patients. The functional correlates of insight matched the distribution of cortical volume reduction in the patient group. Conclusions: These findings suggest a link between known regional brain abnormalities and the manifestation of poor insight in schizophrenia. The contribution of CMS to insight may be related to self-referential processing and that of FPC to the integration of multiple cognitive processes that are necessary for accurate evaluation of one's mental illness. © 2012 Elsevier B.V.


Komsa H.-P.,University of Helsinki | Krasheninnikov A.V.,University of Helsinki | Krasheninnikov A.V.,Aalto University
Physical Review B - Condensed Matter and Materials Physics | Year: 2012

Using GW first-principles calculations for few-layer and bulk MoS2, we study the effects of quantum confinement on the electronic structure of this layered material. By solving the Bethe-Salpeter equation, we also evaluate the exciton energy in these systems. Our results are in excellent agreement with the available experimental data. Exciton binding energy is found to dramatically increase from 0.1 eV in the bulk to 1.1 eV in the monolayer. The fundamental band gap increases as well, so that the optical transition energies remain nearly constant. We also demonstrate that environments with different dielectric constants have a profound effect on the electronic structure of the monolayer. Our results can be used for engineering the electronic properties of MoS2 and other transition-metal dichalcogenides and may explain the experimentally observed variations in the mobility of monolayer MoS2. © 2012 American Physical Society.


Hyvarinen A.,University of Helsinki | Ramkumar P.,Aalto University
Frontiers in Human Neuroscience | Year: 2013

Independent component analysis (ICA) is increasingly used to analyze patterns of spontaneous activity in brain imaging. However, there are hardly any methods for answering the fundamental question: Are the obtained components statistically significant? Most methods considering the significance of components either consider group differences or use arbitrary thresholds with weak statistical justification. In previous work, we proposed a statistically principled method for testing if the coefficients in the mixing matrix are similar in different subjects or sessions. In many applications of ICA, however, we would like to test the reliability of the independent components themselves and not the mixing coefficients. Here, we develop a test for such an inter-subject consistency by extending our previous theory. The test is applicable, for example, to the spatial activity patterns obtained by spatial ICA in resting-state fMRI. We further improve both this and the previously proposed testing method by introducing a new way of correcting for multiple testing, new variants of the clustering method, and a computational approximation which greatly reduces the memory and computation required. © 2013 Hyvärinen and Ramkumar.


Sainiemi L.,Aalto University | Jokinen V.,University of Helsinki | Shah A.,Aalto University | Shpak M.,Aalto University | And 3 more authors.
Advanced Materials | Year: 2011

Maskless plasma etching forms nanospikes on a silicon wafer. The inverse of the nanospike pattern is replicated into a poly(dimethylsiloxane) (PDMS) film by casting. The PDMS functions as a stamp for replicating the original pattern into polymeric substrates. All nanospike-structured surfaces suppress light reflection and can be made self-cleaning. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.


Bjorkman T.,Aalto University | Gulans A.,Aalto University | Krasheninnikov A.V.,Aalto University | Krasheninnikov A.V.,University of Helsinki | Nieminen R.M.,Aalto University
Physical Review Letters | Year: 2012

Although the precise microscopic knowledge of van der Waals interactions is crucial for understanding bonding in weakly bonded layered compounds, very little quantitative information on the strength of interlayer interaction in these materials is available, either from experiments or simulations. Here, using many-body perturbation and advanced density-functional theory techniques, we calculate the interlayer binding and exfoliation energies for a large number of layered compounds and show that, independent of the electronic structure of the material, the energies for most systems are around 20 meV/Å². This universality explains the successful exfoliation of a wide class of layered materials to produce two-dimensional systems, and furthers our understanding of the properties of layered compounds in general. © 2012 American Physical Society.


Berseneva N.,Aalto University | Krasheninnikov A.V.,Aalto University | Krasheninnikov A.V.,University of Helsinki | Nieminen R.M.,Aalto University
Physical Review Letters | Year: 2011

Electron-beam-mediated postsynthesis doping of boron-nitride nanostructures with carbon atoms was recently demonstrated, thus opening a new way to control the electronic properties of these systems. Using density-functional theory static and dynamic calculations, we show that the substitution process is governed not only by the response of such systems to irradiation, but also by the energetics of the atomic configurations, especially when the system is electrically charged. We suggest using spatially localized electron irradiation for making carbon islands and ribbons embedded into BN sheets. We further study the magnetic and electronic properties of such hybrid nanostructures and show that triangular carbon islands embedded into BN sheets possess magnetic moments, which can be switched on and off by electrically charging the structure. © 2011 American Physical Society.


Drost R.,Aalto University | Uppstu A.,Helsinki Institute of Physics | Schulz F.,Aalto University | Hamalainen S.K.,Aalto University | And 3 more authors.
Nano Letters | Year: 2014

The electronic properties of graphene edges have been predicted to depend on their crystallographic orientation. The so-called zigzag (ZZ) edges have been extensively explored theoretically and proposed for various electronic applications. However, their experimental study remains challenging due to the difficulty in realizing clean ZZ edges without disorder, reconstructions, or the presence of chemical functional groups. Here, we propose the ZZ-terminated, atomically sharp interfaces between graphene and hexagonal boron nitride (BN) as experimentally realizable, chemically stable model systems for graphene ZZ edges. Combining scanning tunneling microscopy and numerical methods, we explore the structure of graphene-BN interfaces and show them to host localized electronic states similar to those on the pristine graphene ZZ edge. © 2014 American Chemical Society.


Nevalainen P.,University of Helsinki | Lauronen L.,University of Helsinki | Pihko E.,Aalto University
Frontiers in Human Neuroscience | Year: 2014

The mysteries of early development of cortical processing in humans have started to unravel with the help of new non-invasive brain research tools like multichannel magnetoencephalography (MEG). In this review, we evaluate, within a wider neuroscientific and clinical context, the value of MEG in studying normal and disturbed functional development of the human somatosensory system. The combination of excellent temporal resolution and good localization accuracy provided by MEG has, in the case of somatosensory studies, enabled the differentiation of activation patterns from the newborn's primary (SI) and secondary somatosensory (SII) areas. Furthermore, MEG has shown that the functioning of both SI and SII in newborns has particular immature features in comparison with adults. In extremely preterm infants, the neonatal MEG response from SII also seems to potentially predict developmental outcome: those lacking SII responses at term show worse motor performance at age 2 years than those with normal SII responses at term. In older children with unilateral early brain lesions, bilateral alterations in somatosensory cortical activation detected in MEG imply that the impact of a localized insult may have an unexpectedly wide effect on cortical somatosensory networks. The achievements over the last decade show that MEG provides a unique approach for studying the development of the somatosensory system and its disturbances in childhood. MEG well complements other neuroimaging methods in studies of cortical processes in the developing brain. © 2014 Nevalainen, Lauronen and Pihko.


Jarvinen P.,Aalto University | Hamalainen S.K.,Aalto University | Banerjee K.,Aalto University | Hakkinen P.,Aalto University | And 3 more authors.
Nano Letters | Year: 2013

One of the suggested ways of controlling the electronic properties of graphene is to establish a periodic potential modulation on it, which could be achieved by self-assembly of ordered molecular lattices. We have studied the self-assembly of cobalt phthalocyanines (CoPc) on chemical vapor deposition (CVD) grown graphene transferred onto silicon dioxide (SiO2) and hexagonal boron nitride (h-BN) substrates. Our scanning tunneling microscopy (STM) experiments show that, on both substrates, CoPc forms a square lattice. However, on SiO2, the domain size is limited by the corrugation of graphene, whereas on h-BN, a single domain extends over entire terraces of the underlying h-BN. Additionally, scanning tunneling spectroscopy (STS) measurements suggest that CoPc molecules are doped by the substrate and that the level of doping varies from molecule to molecule. This variation is larger on graphene on SiO2 than on h-BN. These results suggest that graphene on h-BN is an ideal substrate for the study of molecular self-assembly toward controlling the electronic properties of graphene by engineered potential landscapes. © 2013 American Chemical Society.


Mikkila J.,Aalto University | Eskelinen A.-P.,Aalto University | Niemela E.H.,University of Helsinki | Linko V.,Aalto University | And 3 more authors.
Nano Letters | Year: 2014

DNA origami structures can be programmed into arbitrary shapes with nanometer scale precision, which opens up numerous attractive opportunities to engineer novel functional materials. One intriguing possibility is to use DNA origamis for fully tunable, targeted, and triggered drug delivery. In this work, we demonstrate the coating of DNA origami nanostructures with virus capsid proteins for enhancing cellular delivery. Our approach utilizes purified cowpea chlorotic mottle virus capsid proteins that can bind and self-assemble on the origami surface through electrostatic interactions and further pack the origami nanostructures inside the viral capsid. Confocal microscopy imaging and transfection studies with a human HEK293 cell line indicate that protein coating improves cellular attachment and delivery of origamis into the cells by 13-fold compared to bare DNA origamis. The presented method could readily find applications not only in sophisticated drug delivery applications but also in organizing intracellular reactions by origami-based templates. © 2014 American Chemical Society.


Li J.,Aalto University | Hietala S.,University of Helsinki | Tian X.,Aalto University | Tian X.,CAS Xinjiang Technical Institute of Physics and Chemistry
ACS Nano | Year: 2015

Here we report the organic-free mesocrystalline superstructured cages of BaTiO3, i.e., the BaTiO3 supercages, which are synthesized by a one-step templateless and additive-free route using molten hydrated salt as the reaction medium. An unusual three-dimensional oriented aggregation of primary BaTiO3 nanoparticles in the medium of high ionic strength, which normally favors random aggregation, is identified to take place at the early stage of the synthesis. The spherical BaTiO3 aggregates further undergo a remarkable continuous ordering transition in morphology, consisting of nanoparticle faceting and nanosheet formation steps. This ordering transition, in conjunction with Ostwald ripening-induced solid evacuation, leads to the formation of the unique supercage structure of BaTiO3. Benefiting from their structure, the BaTiO3 supercages exhibit improved microwave absorption properties. © 2014 American Chemical Society.


Climente-Alarcon V.,Aalto University | Antonino-Daviu J.A.,Polytechnic University of Valencia | Riera-Guasp M.,Polytechnic University of Valencia | Vlcek M.,Czech Technical University
IEEE Transactions on Industrial Electronics | Year: 2014

In recent years, several time-frequency decomposition tools have been applied to the diagnosis of induction motors for cases in which traditional procedures, such as motor current signature analysis, cannot yield the necessary response. Among them, the Cohen distributions have been widely selected to study transient and even stationary operation due to the high resolution and detailed information they provide at all frequencies. Their main drawback, the cross-terms, has been tackled either by modifying the distribution or by carrying out a pretreatment of the signal before computing its time-frequency decomposition. In this paper, a filtering process is proposed that uses advanced notch filters to remove constant-frequency components present in the current of an induction motor, prior to the computation of its distribution, in order to study rotor asymmetries and mixed eccentricities. In transient operation of machines directly connected to the grid, this procedure effectively eliminates most of the artifacts that have prevented the use of these tools, allowing a wideband analysis and the definition of a precise quantification parameter able to follow the evolution of the machine's state. © 1982-2012 IEEE.
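The preprocessing idea — removing a constant-frequency component before time-frequency analysis — can be sketched as follows. The paper uses more advanced notch filter designs; a standard second-order IIR notch from SciPy is a simplified stand-in, and the 50 Hz supply and 35 Hz fault-related sideband frequencies are invented for the example.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

# Illustrative stand-in: notch out the 50 Hz supply component of a stator
# current so that a small fault-related sideband dominates the residual.

fs = 1000.0                                        # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)
current = (np.sin(2 * np.pi * 50 * t)              # fundamental supply component
           + 0.05 * np.sin(2 * np.pi * 35 * t))    # small fault-related sideband

b, a = iirnotch(w0=50.0, Q=30.0, fs=fs)            # notch centred on 50 Hz
filtered = filtfilt(b, a, current)                 # zero-phase filtering

spectrum = np.abs(np.fft.rfft(filtered))
freqs = np.fft.rfftfreq(len(filtered), 1 / fs)
peak = freqs[np.argmax(spectrum)]
print(peak)   # the 35 Hz sideband now dominates the spectrum
```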


Rahman M.M.,Aalto University | B. Mostafiz S.,University of Helsinki | Paatero J.V.,Aalto University | Lahdelma R.,Aalto University
Renewable and Sustainable Energy Reviews | Year: 2014

The rapid depletion of fossil fuel reserves and environmental concerns over their combustion necessitate looking for alternative sources for the long-term sustainability of the world. These concerns are also serious in developing countries, which are striving for rapid economic growth. The net biomass growing potential on the global land surface is 10 times the global food, feed, fiber, and energy demand. This study investigates whether developing countries have sufficient land resources to meet the projected energy demand towards 2035 by planting energy crops on surplus agricultural land after food and feed production. The annual yields of four commonly grown energy crops, specifically jatropha, switchgrass, miscanthus, and willow, have been used to construct scenarios and estimate land requirements for each scenario. This paper first reviews the literature on the availability of land resources, past and future trends in land use change, the demand for land for food production, and the potential expansion of croplands. The energy demands towards 2035 are compiled from energy scenarios derived by the International Energy Agency (IEA) and British Petroleum (BP). This paper also reviews the bio-physiological characteristics of these energy crops to determine whether they are cultivable under tropical climatic conditions in developing regions. It finds that the projected energy demand through 2035 in developing regions could be met by energy crops grown on a portion of surplus croplands or upgraded grasslands (27% and 22%, respectively, for the miscanthus scenario). Sustainable land management practices, improved agricultural productivity, and the adoption of suitable energy crops can potentially supply increasing energy demands. © 2013 Elsevier Ltd.
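The land-requirement estimate behind such scenarios reduces to simple arithmetic: energy demand divided by the energy yield per hectare. The sketch below uses illustrative yield and energy-content figures, not the values used in the paper.

```python
# Back-of-the-envelope sketch of the land-requirement calculation behind the
# scenarios described above. The yield and energy-content figures below are
# illustrative placeholders, not the values used in the paper.

def land_required(energy_demand_ej, yield_t_per_ha, energy_content_gj_per_t):
    """Hectares needed to supply `energy_demand_ej` exajoules per year from a
    crop with the given dry-matter yield and energy content."""
    energy_per_ha_gj = yield_t_per_ha * energy_content_gj_per_t
    return energy_demand_ej * 1e9 / energy_per_ha_gj   # 1 EJ = 1e9 GJ

# Example: a miscanthus-like crop at 15 t/ha/yr and 18 GJ/t (assumed figures)
ha = land_required(100.0, 15.0, 18.0)
print(f"{ha / 1e6:.0f} million ha")   # ≈ 370 million ha
```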


Rishika R.,Texas A&M University | Kumar A.,Aalto University | Janakiraman R.,Texas A&M University | Bezawada R.,State University of New York at Buffalo
Information Systems Research | Year: 2013

In this study, we examine the effect of customers' participation in a firm's social media efforts on the intensity of the relationship between the firm and its customers, as captured by customers' visit frequency. We further hypothesize and test for the moderating roles of social media activity and customer characteristics on the link between social media participation and the intensity of the customer-firm relationship. Importantly, we also quantify the impact of social media participation on customer profitability. We assemble a novel data set that combines customers' social media participation data with individual customer-level transaction data. To account for endogeneity that could arise from customer self-selection, we utilize the propensity score matching technique in combination with difference-in-differences analysis. Our results suggest that customer participation in a firm's social media efforts leads to an increase in the frequency of customer visits. We find that this participation effect is greater when there are high levels of activity on the social media site and for customers who exhibit strong patronage of the firm, buy premium products, and exhibit lower levels of buying focus and deal sensitivity. We find that this set of results holds for customer profitability as well. We discuss the theoretical implications of our results and offer prescriptions for managers on how to engage customers via social media. Our study emphasizes the need for managers to integrate knowledge from customers' transactional relationship with their social media participation to better serve customers and create sustainable business value. © 2013 INFORMS.
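The identification strategy can be illustrated on toy data: a single confounder drives both self-selection into "treatment" (social media participation) and visit frequency, and matching treated customers to similar controls followed by pre/post differencing recovers the simulated effect. All numbers and variable names here are illustrative assumptions, and nearest-neighbour matching on one covariate stands in for full propensity-score matching.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# one confounder (e.g. prior engagement) drives both self-selection into
# social media participation and visit frequency
x = rng.normal(size=n)
treated = rng.random(n) < 1.0 / (1.0 + np.exp(-x))   # self-selection on x
pre = 5.0 + 2.0 * x + rng.normal(size=n)             # pre-period visit frequency
true_effect = 1.5
post = pre + 0.5 + true_effect * treated + rng.normal(size=n)

# match each treated customer to the nearest control on x; with a single
# covariate this ordering coincides with matching on the propensity score
t_idx = np.where(treated)[0]
c_idx = np.where(~treated)[0]
matched = c_idx[np.abs(x[c_idx][None, :] - x[t_idx][:, None]).argmin(axis=1)]

# difference-in-differences on the matched sample
did = (post[t_idx] - pre[t_idx]).mean() - (post[matched] - pre[matched]).mean()
print(f"estimated effect: {did:.2f} (simulated: {true_effect})")
```

A naive cross-sectional comparison of treated versus untreated customers would be biased upward here, because high-`x` customers both participate more and visit more; the matched difference-in-differences removes that confound.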


Ferragina P.,University of Pisa | Gagie T.,Aalto University | Manzini G.,University of Piemonte Orientale
Algorithmica | Year: 2012

In this paper we describe algorithms for computing the Burrows-Wheeler Transform (bwt) and for building (compressed) indexes in external memory. The innovative feature of our algorithms is that they are lightweight, in the sense that, for an input of size n, they use only n bits of working space on disk, while all previous approaches use Θ(n log n) bits. This is achieved by building the bwt directly, without passing through the construction of the Suffix Array/Tree data structure. Moreover, our algorithms access disk data only via sequential scans, so they take full advantage of modern disk features that make sequential disk accesses much faster than random accesses. We also present a scan-based algorithm for inverting the bwt that uses Θ(n) bits of working space, and a lightweight internal-memory algorithm for computing the bwt which is the fastest in the literature when the available working space is o(n) bits. Finally, we prove lower bounds on the complexity of computing and inverting the bwt via sequential scans in terms of the classic product internal-memory space × number of passes over the disk data, showing that our algorithms are within an O(log n) factor of optimal. © Springer Science+Business Media, LLC 2011.
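The transform at the heart of these algorithms can be stated in a few lines of textbook Python. This quadratic, in-memory construction via sorted rotations (with a `$` sentinel) is the naive baseline, not the lightweight external-memory algorithm the paper develops:

```python
def bwt(s: str) -> str:
    """Burrows-Wheeler transform via sorted rotations ($ as unique sentinel)."""
    s += "$"
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

def inverse_bwt(last: str) -> str:
    """Invert the bwt by repeated prepend-and-sort (O(n^2 log n), for clarity only)."""
    table = [""] * len(last)
    for _ in range(len(last)):
        table = sorted(c + row for c, row in zip(last, table))
    return next(row for row in table if row.endswith("$"))[:-1]

print(bwt("banana"))               # "annb$aa"
print(inverse_bwt(bwt("banana")))  # "banana"
```

The point of the paper is to obtain this same output string for inputs far larger than RAM, using only sequential disk scans and n bits of working space instead of materializing all rotations.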


Dinu L.P.,University of Bucharest | Popa A.,Aalto University
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2012

Given a set S of k strings of maximum length n, the goal of the closest substring problem (CSSP) is to find the smallest integer d (and a corresponding string t of length ℓ ≤ n) such that each string s ∈ S has a substring of length ℓ of "distance" at most d to t. The closest string problem (CSP) is a special case of CSSP where ℓ = n. CSP and CSSP arise in many applications in bioinformatics and are extensively studied in the context of Hamming and edit distance. In this paper we consider a recently introduced distance measure, namely the rank distance. First, we show that the CSP and CSSP via rank distance are NP-hard. Then, we present a polynomial time k-approximation algorithm for the CSP problem. Finally, we give a parametrized algorithm for the CSP (the parameter is the number of input strings) if the alphabet is binary and each string has the same number of 0's and 1's. © 2012 Springer-Verlag.
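The distance measure involved can be sketched as follows, assuming the standard occurrence-annotated definition of rank distance (each repeated letter is indexed by occurrence, shared annotated symbols contribute the absolute difference of their positions, and unmatched symbols contribute their own positions); this is an illustrative implementation, not the paper's algorithms:

```python
from collections import defaultdict

def annotate(s: str) -> dict:
    """Map each annotated occurrence (char, occurrence_index) to its 1-based position."""
    count, pos = defaultdict(int), {}
    for i, c in enumerate(s, start=1):
        count[c] += 1
        pos[(c, count[c])] = i
    return pos

def rank_distance(u: str, v: str) -> int:
    """Rank distance between two strings under the occurrence-annotated alphabet."""
    pu, pv = annotate(u), annotate(v)
    shared = pu.keys() & pv.keys()
    d = sum(abs(pu[k] - pv[k]) for k in shared)          # shared symbols
    d += sum(p for k, p in pu.items() if k not in shared)  # only in u
    d += sum(p for k, p in pv.items() if k not in shared)  # only in v
    return d

print(rank_distance("abc", "cba"))  # 4: a moves 1->3, c moves 3->1
```

Unlike Hamming distance, which compares positions character by character, rank distance measures how far each symbol's position has shifted between the two strings, which is what makes the hardness and approximation results of the paper non-trivial.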


Lehtinen O.,University of Ulm | Lehtinen O.,University of Helsinki | Kurasch S.,University of Ulm | Krasheninnikov A.V.,University of Helsinki | And 2 more authors.
Nature Communications | Year: 2013

Dislocations, one of the key entities in materials science, govern the properties of any crystalline material. Thus, understanding their life cycle, from creation to annihilation via motion and interaction with other dislocations, point defects and surfaces, is of fundamental importance. Unfortunately, atomic-scale investigations of dislocation evolution in a bulk object are well beyond the spatial and temporal resolution limits of current characterization techniques. Here we overcome these experimental limits by investigating two-dimensional graphene in an aberration-corrected transmission electron microscope, exploiting the impinging energetic electrons both to image and to stimulate atomic-scale morphological changes in the material. The resulting transformations are followed in situ, atom by atom, showing the full life cycle of a dislocation from birth to annihilation. Our experiments, combined with atomistic simulations, reveal the evolution of dislocations in two-dimensional systems to be governed by markedly long-ranging out-of-plane buckling. © 2013 Macmillan Publishers Limited. All rights reserved.


Broberg A.,Aalto University | Salminen S.,University of Helsinki | Kytta M.,Aalto University
Applied Geography | Year: 2013

Research on the urban structural characteristics that promote physical activity often focuses on only a few of the settings where children and youth spend their time. To overcome this, we used a mapping methodology in which children themselves defined their important places. We then studied the associations between urban structure and children's active transport and independent mobility. Principal component analysis was used to compose multivariate profiles of the physical environment around meaningful places. We found that a structure dominated by single-family housing promoted both independent mobility and the use of active transport modes, whereas a dense urban residential structure allowed for independent mobility but did not promote active transport. © 2012 Elsevier Ltd.


Jousimo J.,University of Helsinki | Tack A.J.M.,University of Helsinki | Ovaskainen O.,University of Helsinki | Mononen T.,University of Helsinki | And 4 more authors.
Science | Year: 2014

Ecological theory predicts that disease incidence increases with increasing density of host networks, yet evolutionary theory suggests that host resistance increases accordingly. To test the combined effects of ecological and evolutionary forces on host-pathogen systems, we analyzed the spatiotemporal dynamics of a plant (Plantago lanceolata) - fungal pathogen (Podosphaera plantaginis) relationship for 12 years in over 4000 host populations. Disease prevalence at the metapopulation level was low, with high annual pathogen extinction rates balanced by frequent (re-)colonizations. Highly connected host populations experienced less pathogen colonization and higher pathogen extinction rates than expected; a laboratory assay confirmed that this phenomenon was caused by higher levels of disease resistance in highly connected host populations.


Komsa H.-P.,University of Helsinki | Krasheninnikov A.V.,University of Helsinki | Krasheninnikov A.V.,Aalto University
Physical Review B - Condensed Matter and Materials Physics | Year: 2013

We calculate from first principles the electronic structure and optical properties of a number of transition metal dichalcogenide (TMD) bilayer heterostructures consisting of MoS2 layers sandwiched with WS2, MoSe2, MoTe2, BN, or graphene sheets. Contrary to previous works, the systems are constructed in such a way that the unstrained lattice constants of the constituent incommensurate monolayers are retained. We find strong interaction between the Γ-point states in all TMD/TMD heterostructures, which can lead to an indirect gap. On the other hand, states near the K point remain as in the monolayers. When TMDs are paired with BN or graphene layers, the interaction around the Γ point is negligible, and the electronic structure resembles that of two independent monolayers. Calculations of optical properties of the MoS2/WS2 system show that, even when the valence- and conduction-band edges are located in different layers, the mixing of optical transitions is minimal, and the optical characteristics of the monolayers are largely retained in these heterostructures. The intensity of interlayer transitions is found to be negligibly small, a discouraging result for engineering the optical gap of TMDs by heterostructuring. © 2013 American Physical Society.


Tuboltsev V.,University of Helsinki | Savin A.,Aalto University | Pirojenko A.,University of Helsinki | Raisanen J.,University of Helsinki
ACS Nano | Year: 2013

While bulk gold is well known to be diamagnetic, a growing body of convincing experimental and theoretical work indicates that nanostructured gold can be imparted with unconventional magnetic properties. Bridging the current gap in the experimental study of magnetism in bare gold nanomaterials, we report here on magnetism in gold nanocrystalline films produced by cluster deposition in an aggregate form that can be considered a crossover state between a nanocluster and a continuous film. We demonstrate ferromagnetic-like hysteretic magnetization whose temperature dependence is indicative of spin-glass-like behavior, and we find this to be consistent with theoretical predictions, available in the literature, based on first-principles calculations. © 2013 American Chemical Society.


Karjalainen O.K.,Aalto University | Nieger M.,University of Helsinki | Koskinen A.M.P.,Aalto University
Angewandte Chemie - International Edition | Year: 2013

To All(oc) involved: A palladium-catalyzed formal 5-endo-trig heteroannulation of enones generated in situ from amino acid derived β-keto nitriles has been realized (see scheme; Alloc=allyl carbamate). The reaction proceeds with allyl-group transfer from the carbamate protecting group to generate two new contiguous stereocenters, including one quaternary center, with high selectivity. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.


Jokinen V.,Aalto University | Kostiainen R.,University of Helsinki | Sikanen T.,University of Helsinki
Advanced Materials | Year: 2012

Multiphase liquid droplets consisting of three connected but immiscible liquid phases are demonstrated. The droplets have designer geometries stabilized by surface-energy patterns: aqueous phases prefer contact with hydrophilic areas, while organic phases prefer contact with hydrophobic areas. The multiphase droplets are applied to liquid-liquid-liquid extraction. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.


Kotakoski J.,University of Helsinki | Krasheninnikov A.V.,University of Helsinki | Krasheninnikov A.V.,Aalto University | Kaiser U.,University of Ulm | And 2 more authors.
Physical Review Letters | Year: 2011

While crystalline two-dimensional materials have become an experimental reality during the past few years, an amorphous 2D material has not been reported before. Here, using electron irradiation, we create an sp2-hybridized one-atom-thick flat carbon membrane with a random arrangement of polygons, including four-membered carbon rings. We show how the transformation occurs step by step, by nucleation and growth of low-energy multivacancy structures constructed of rotated hexagons and other polygons. Our observations, along with first-principles calculations, provide new insights into the bonding behavior of carbon and the dynamics of defects in graphene. The created domains possess a band gap, which may open new possibilities for engineering graphene-based electronic devices. © 2011 American Physical Society.


Kaukonen M.,Aalto University | Krasheninnikov A.V.,Aalto University | Krasheninnikov A.V.,University of Helsinki | Kauppinen E.,Aalto University | Nieminen R.M.,Aalto University
ACS Catalysis | Year: 2013

Because of its high specific surface area and unique electronic properties, graphene with substitutional impurity metal atoms and clusters attached to defects in the graphene sheet is attractive for use in hydrogen fuel cells for oxygen reduction at the cathode. In an attempt to find a cheap yet efficient catalyst for the reaction, we use density-functional theory calculations to study the structure and properties of transition-metal-vacancy complexes in graphene. We calculate formation energies of the complexes, which are directly related to their stability, along with oxygen and water adsorption energies. In addition to metals, we also consider nonmetal impurities such as B, P, and Si, which form strong bonds with under-coordinated carbon atoms at defects in graphene. Our results indicate that single Ni, Pd, Pt, Sn, and P atoms embedded into divacancies in graphene are promising candidates for use in fuel cell cathodes for the oxygen reduction reaction (ORR). We further discuss how ion irradiation of graphene, combined with metal sputtering and codeposition, can be used to make an efficient and relatively inexpensive graphene-based material for hydrogen fuel cells. © 2012 American Chemical Society.


Shi J.,Princeton University | Ikalainen S.,Aalto University | Vaara J.,University of Oulu | Romalis M.V.,Princeton University
Journal of Physical Chemistry Letters | Year: 2013

Nuclear spin optical rotation (NSOR) is a recently developed technique for detection of nuclear magnetic resonance via rotation of light polarization, instead of the usual long-range magnetic fields. NSOR signals depend on hyperfine interactions with virtual optical excitations, giving new information about the nuclear chemical environment. We use a multipass optical cell to perform the first precision measurements of NSOR signals for a range of organic liquids and find clear distinction between proton signals for different compounds, in agreement with our earlier theoretical predictions. Detailed first-principles quantum mechanical NSOR calculations are found to be in agreement with the measurements. © 2013 American Chemical Society.


Komsa H.-P.,University of Helsinki | Krasheninnikov A.V.,University of Helsinki | Krasheninnikov A.V.,Aalto University
Journal of Physical Chemistry Letters | Year: 2012

Using density-functional theory calculations, we study the stability and electronic properties of single layers of mixed transition metal dichalcogenides (TMDs), such as MoS2xSe2(1-x), which can be referred to as two-dimensional (2D) random alloys. We demonstrate that mixed MoS2/MoSe2/MoTe2 compounds are thermodynamically stable at room temperature, so that such materials can be manufactured using the chemical vapor deposition technique or exfoliated from the bulk mixed materials. By applying the effective band structure approach, we further study the electronic structure of the mixed 2D compounds and show that the general features of the band structures are similar to those of their binary constituents. The direct gap in these materials can be tuned continuously, pointing toward possible applications of 2D TMD alloys in photonics. © 2012 American Chemical Society.


Nair R.R.,Manchester Center for Mesoscience and Nanotechnology | Sepioni M.,Manchester Center for Mesoscience and Nanotechnology | Tsai I.-L.,Manchester Center for Mesoscience and Nanotechnology | Lehtinen O.,University of Helsinki | And 6 more authors.
Nature Physics | Year: 2012

The possibility to induce a magnetic response in graphene by the introduction of defects has been generating much interest, as this would expand the already impressive list of its special properties and allow novel devices where charge and spin manipulation could be combined. So far there have been many theoretical studies (for reviews, see refs 1-3) predicting that point defects in graphene should carry magnetic moments μ ∼ μB and these can in principle couple (anti)ferromagnetically 1-12. However, experimental evidence for such magnetism remains both scarce and controversial 13-16. Here we show that point defects in graphene - (1) fluorine adatoms in concentrations x gradually increasing to stoichiometric fluorographene CFx=1.0 (ref. 17) and (2) irradiation defects (vacancies) - carry magnetic moments with spin 1/2. Both types of defect lead to notable paramagnetism but no magnetic ordering could be detected down to liquid helium temperatures. The induced paramagnetism dominates graphene's low-temperature magnetic properties, despite the fact that the maximum response we could achieve was limited to one moment per approximately 1,000 carbon atoms. This limitation is explained by clustering of adatoms and, for the case of vacancies, by the loss of graphene's structural stability. Our work clarifies the controversial issue of graphene's magnetism and sets limits for other graphitic compounds. © 2012 Macmillan Publishers Limited. All rights reserved.


Pervila M.,University of Helsinki | Kangasharju J.,Aalto University
Proceedings of the 2nd ACM SIGCOMM Workshop on Green Networking, GreenNets'11 | Year: 2011

This article describes two benchmark studies involving the cooling technique known as cold aisle containment (CAC). One test case studies a 26U server rack operating on unconditioned outside air only in a carefully controlled setup. The other examines a server room with a power draw of over 80 kW during normal operation. In both cases we measure how incorporating CAC changes the air flow, electricity consumption, operating temperatures, and cooling requirements. Our results show how the air flow separation affects the temperatures in the server room and verify that using CAC can reduce CRAC power by roughly a fifth. © 2011 ACM.


Banhart F.,CNRS Institute of Genetics and of Molecular and Cellular Biology | Kotakoski J.,University of Helsinki | Krasheninnikov A.V.,University of Helsinki | Krasheninnikov A.V.,Aalto University
ACS Nano | Year: 2011

Graphene is one of the most promising materials in nanotechnology. The electronic and mechanical properties of graphene samples with high perfection of the atomic lattice are outstanding, but structural defects, which may appear during growth or processing, deteriorate the performance of graphene-based devices. However, deviations from perfection can be useful in some applications, as they make it possible to tailor the local properties of graphene and to achieve new functionalities. In this article, the present knowledge about point and line defects in graphene is reviewed. Particular emphasis is put on the unique ability of graphene to reconstruct its lattice around intrinsic defects, leading to interesting effects and potential applications. Extrinsic defects such as foreign atoms, which are of equally high importance for designing graphene-based devices with dedicated properties, are also discussed. © 2011 American Chemical Society.


Krasheninnikov A.V.,Aalto University | Krasheninnikov A.V.,University of Helsinki | Nieminen R.M.,Aalto University
Theoretical Chemistry Accounts | Year: 2011

We present a density functional theory study of transition metal adatoms on a graphene sheet with vacancy-type defects. We calculate the strain fields near the defects and demonstrate that they reach far into the unperturbed hexagonal network, and that metal atoms have a high affinity for the non-perfect and strained regions of graphene. Metal atoms are therefore attracted by the reconstructed defects. The increased reactivity of the strained graphene makes it possible to attach metal atoms much more firmly than to pristine graphene and supplies a tool for tailoring the electronic structure of graphene. Finally, we analyze the electronic band structure of graphene with defects and show that some defects open a semiconductor gap in graphene, which may be important for carbon-based nanoelectronics. © 2011 Springer-Verlag.


Sirleto L.,National Research Council Italy | Antonietta Ferrara M.,National Research Council Italy | Nikitin T.,University of Helsinki | Novikov S.,Aalto University | Khriachtchev L.,University of Helsinki
Nature Communications | Year: 2012

Nanostructured silicon has generated a lot of interest in the past decades as a key material for silicon-based photonics. The low absorption coefficient makes silicon nanocrystals attractive as an active medium in waveguide structures, and their third-order nonlinear optical properties are crucial for the development of next generation nonlinear photonic devices. Here we report the first observation of stimulated Raman scattering in silicon nanocrystals embedded in a silica matrix under non-resonant excitation at infrared wavelengths (∼1.5 μm). Raman gain is directly measured as a function of the silicon content. A giant Raman gain from the silicon nanocrystals is obtained that is up to four orders of magnitude greater than in crystalline silicon. These results demonstrate the first Raman amplifier based on silicon nanocrystals in a silica matrix, thus opening new perspectives for the realization of more efficient Raman lasers with ultra-small sizes, which would increase the synergy between electronic and photonic devices. © 2012 Macmillan Publishers Limited. All rights reserved.


Ahlgren E.H.,University of Helsinki | Kotakoski J.,University of Helsinki | Krasheninnikov A.V.,University of Helsinki | Krasheninnikov A.V.,Aalto University
Physical Review B - Condensed Matter and Materials Physics | Year: 2011

By combining classical molecular dynamics simulations and density-functional-theory total-energy calculations, we study the possibility of doping graphene with B and N atoms using low-energy ion irradiation. Our simulations show that the optimum irradiation energy is 50 eV with substitution probabilities of 55% for N and 40% for B. We further estimate probabilities for different defect configurations to appear under B and N ion irradiation. We analyze the processes responsible for defect production and report an effective swift chemical sputtering mechanism for N irradiation at low energies (~125 eV), which leads to production of single vacancies. Our results show that ion irradiation is a promising method for creating hybrid C-B/N structures for future applications in the realm of nanoelectronics. © 2011 American Physical Society.


Rauch C.,Aalto University | Makkonen I.,Helsinki Institute of Physics | Tuomisto F.,Aalto University
Physical Review B - Condensed Matter and Materials Physics | Year: 2011

We present a comprehensive study of vacancy and vacancy-impurity complexes in InN combining positron annihilation spectroscopy and ab initio calculations. Positron densities and annihilation characteristics of common vacancy-type defects are calculated using density functional theory, and the feasibility of their experimental detection and distinction with positron annihilation methods is discussed. The computational results are compared to positron lifetime and conventional as well as coincidence Doppler broadening measurements of several representative InN samples. The particular dominant vacancy-type positron traps are identified and their characteristic positron lifetimes, Doppler ratio curves, and line-shape parameters determined. We find that indium vacancies (VIn) and their complexes with nitrogen vacancies (VN) or impurities act as efficient positron traps, inducing distinct changes in the annihilation parameters compared to the InN lattice. Neutral or positively charged VN and pure VN complexes, on the other hand, do not trap positrons. The predominantly introduced positron trap in irradiated InN is identified as the isolated VIn, while in as-grown InN layers VIn do not occur isolated but complexed with one or more VN. The number of VN per VIn in these complexes is found to increase from the near-surface region toward the layer-substrate interface. © 2011 American Physical Society.


Kalbac M.,J. Heyrovsky Institute of Physical Chemistry | Lehtinen O.,University of Helsinki | Krasheninnikov A.V.,University of Helsinki | Krasheninnikov A.V.,Aalto University | Keinonen J.,University of Helsinki
Advanced Materials | Year: 2013

Contrary to theoretical estimates based on the conventional binary collision model, experimental results indicate that the number of defects in the lower layer of the bi-layer graphene sample is smaller than in the upper layer. This observation is explained by in situ self-annealing of the defects. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.


Patent
University of Helsinki and Aalto University | Date: 2014-09-30

A search system configured to predict further search intents of a user, to perform exploratory further searches, and to produce a number of search features with associated relevances and divergence quantifiers for display by user equipment in at least two dimensions, so as to allow the user to identify the relationships between various diverging search terms and to rapidly direct the search towards information whose existence may have been previously unknown to the user. Some of the search features can be concealed and shown to the user only if the user magnifies the corresponding area on a display showing the search features returned by the search engine. Files matching, to varying degrees, the present and predicted further searches are shown to the user with respective lists of search features.


Patent
Aalto University and University of Helsinki | Date: 2014-04-04

A method of manufacturing a cellulose-based shaped article. The method comprises subjecting a solution of lignocellulosic material, dissolved in a distillable ionic liquid, to a spinning method, wherein the ionic liquid is a diazabicyclononene (DBN)-based ionic liquid. DBN-based ionic liquids offer good dissolution power, high thermal and chemical stability, freedom from runaway reactions, and low energy consumption owing to low spinning temperatures. The shaped cellulose articles can be used as textile fibres, high-end non-woven fibres, technical fibres, films for packaging, barrier films in batteries, membranes, and carbon-fibre precursors.


Komsa H.-P.,University of Helsinki | Kotakoski J.,University of Helsinki | Kotakoski J.,University of Vienna | Kurasch S.,University of Ulm | And 4 more authors.
Physical Review Letters | Year: 2012

Using first-principles atomistic simulations, we study the response of atomically thin layers of transition metal dichalcogenides (TMDs)-a new class of two-dimensional inorganic materials with unique electronic properties-to electron irradiation. We calculate displacement threshold energies for atoms in 21 different compounds and estimate the corresponding electron energies required to produce defects. For a representative structure of MoS2, we carry out high-resolution transmission electron microscopy experiments and validate our theoretical predictions via observations of vacancy formation under exposure to an 80 keV electron beam. We further show that TMDs can be doped by filling the vacancies created by the electron beam with impurity atoms. Thereby, our results not only shed light on the radiation response of a system with reduced dimensionality, but also suggest new ways for engineering the electronic structure of TMDs. © 2012 American Physical Society.


Kotakoski J.,University of Helsinki | Kotakoski J.,University of Vienna | Santos-Cottin D.,University of Helsinki | Krasheninnikov A.V.,University of Helsinki | Krasheninnikov A.V.,Aalto University
ACS Nano | Year: 2012

The electron beam of a transmission electron microscope can be used to alter the morphology of graphene nanoribbons and create the atomically sharp edges required for applications of graphene in nanoelectronics. Using density-functional-theory-based simulations, we study the radiation hardness of graphene edges and show that the response of the ribbons to irradiation is not determined by the equilibrium energetics, as assumed in previous experiments, but by kinetic effects associated with the dynamics of the edge atoms after impacts of energetic electrons. We report an unexpectedly high stability of armchair edges, comparable to that of pristine graphene, and demonstrate that the electron energy should be below ∼50 keV to minimize the knock-on damage. © 2011 American Chemical Society.
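The electron-energy scales quoted in these irradiation studies follow from simple relativistic kinematics: the maximum energy an electron of beam energy E can transfer to a nucleus of rest energy Mc² in an elastic knock-on collision is approximately T_max = 2E(E + 2m_ec²)/(Mc²), neglecting terms of order m_e/M. A quick estimate for carbon (the constants are standard physical values):

```python
# maximum kinetic energy transferable from a beam electron to a nucleus
# in an elastic knock-on collision (relativistic electron, m_e/M terms neglected)
ME_C2 = 510.999    # electron rest energy, keV
U_C2 = 931494.1    # atomic mass unit rest energy, keV

def t_max_ev(beam_kev: float, mass_amu: float) -> float:
    """T_max = 2E(E + 2 m_e c^2) / (M c^2), returned in eV."""
    return 2 * beam_kev * (beam_kev + 2 * ME_C2) / (mass_amu * U_C2) * 1000.0

print(round(t_max_ev(80.0, 12.011), 1))  # ~15.8 eV for carbon at 80 keV
print(round(t_max_ev(50.0, 12.011), 1))  # below 10 eV at 50 keV
```

Since the displacement threshold of a carbon atom in pristine graphene is well above these values, an 80 keV beam is close to the knock-on limit for bulk graphene, which is why edge and defect atoms, with their lower effective thresholds, dominate the observed damage.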


News Article | March 2, 2017
Site: www.eurekalert.org

In cooperation with Okmetic Oy and the Polish ITME, researchers at Aalto University have studied the application of SOI (Silicon On Insulator) wafers, which are used as a platform for manufacturing different microelectronics components, as a substrate for producing gallium nitride crystals. The researchers compared the characteristics of gallium nitride (GaN) layers grown on SOI wafers with those grown on the silicon substrates more commonly used for the process. In addition to high-performance silicon wafers, Okmetic also manufactures SOI wafers, in which a layer of silicon dioxide insulator is sandwiched between two silicon layers. The objective of the SOI technology is to improve the capacitive and insulating characteristics of the wafer. "We used a standardised manufacturing process for comparing the wafer characteristics. GaN growth on SOI wafers produced a higher crystalline quality layer than on silicon wafers. In addition, the insulating layer in the SOI wafer improves breakdown characteristics, enabling the use of clearly higher voltages in power electronics. Similarly, in high-frequency applications, the losses and crosstalk can be reduced," explains Jori Lemettinen, a doctoral candidate from the Department of Electronics and Nanoengineering. "GaN-based components are becoming more common in power electronics and radio applications. The performance of GaN-based devices can be improved by using an SOI wafer as the substrate," adds Academy Research Fellow Sami Suihkonen. Growth of GaN on a silicon substrate is challenging. GaN layers and devices can be grown on the substrate material using metalorganic vapor phase epitaxy (MOVPE). When silicon is used as a substrate, the grown compound semiconductor materials have different coefficients of thermal expansion and lattice constants than the silicon wafer. These differences limit the crystalline quality that can be achieved and the maximum possible thickness of the produced layer.
"The research showed that the layered structure of an SOI wafer can act as a compliant substrate during gallium nitride layer growth and thus reduce defects and strain in the grown layers," Lemettinen notes. GaN-based components are commonly used in blue and white LEDs. In power electronics applications, GaN diodes and transistors in particular have received interest, for example in frequency converters and electric cars. It is believed that in radio applications, 5G network base stations will use GaN-based power amplifiers in the future. In electronics applications, a GaN transistor offers low resistance and enables high frequencies and power densities. The article has been accepted for publication in the journal Semiconductor Science and Technology. Link to the article https:/
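The thermal-mismatch problem mentioned above can be put in rough numbers. Using approximate room-temperature literature values for the coefficients of thermal expansion (illustrative assumptions; both CTEs actually vary with temperature) and a typical MOVPE growth temperature, the tensile strain accumulated on cooldown comes out at a few tenths of a percent, enough to crack thick GaN layers:

```python
# back-of-envelope thermal-mismatch strain for GaN grown on Si
# (approximate room-temperature CTE values; both vary with temperature)
cte_gan = 5.59e-6                 # 1/K, GaN a-axis
cte_si = 2.6e-6                   # 1/K, silicon
growth_t, room_t = 1050.0, 25.0   # deg C, typical MOVPE growth temperature

strain = (cte_gan - cte_si) * (growth_t - room_t)
print(f"tensile strain on cooldown ≈ {strain:.2%}")  # on the order of 0.3 %
```

A compliant substrate, such as the SOI structure studied here, helps precisely because part of this mismatch strain can be accommodated in the buried oxide layer rather than in the GaN film.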


News Article | March 2, 2017
Site: phys.org

The researchers used Micronova's cleanrooms and, in particular, a reactor designed for gallium nitride manufacturing. The image shows a six-inch substrate in the MOVPE reactor before manufacturing. Credit: Aalto University / Jori Lemettinen

In cooperation with Okmetic Oy and the Polish ITME, researchers at Aalto University have studied the use of SOI (Silicon On Insulator) wafers, a platform for manufacturing various microelectronics components, as substrates for producing gallium nitride crystals. The researchers compared the characteristics of gallium nitride (GaN) layers grown on SOI wafers to those grown on the silicon substrates more commonly used for the process. In addition to high-performance silicon wafers, Okmetic also manufactures SOI wafers, in which a layer of silicon dioxide insulator is sandwiched between two silicon layers. The objective of SOI technology is to improve the capacitive and insulating characteristics of the wafer.

"We used a standardised manufacturing process for comparing the wafer characteristics. GaN growth on SOI wafers produced a higher crystalline quality layer than on silicon wafers. In addition, the insulating layer in the SOI wafer improves breakdown characteristics, enabling the use of clearly higher voltages in power electronics. Similarly, in high-frequency applications, the losses and crosstalk can be reduced," explains Jori Lemettinen, a doctoral candidate from the Department of Electronics and Nanoengineering.

"GaN-based components are becoming more common in power electronics and radio applications. The performance of GaN-based devices can be improved by using an SOI wafer as the substrate," adds Academy Research Fellow Sami Suihkonen.

Growth of GaN on a silicon substrate is challenging. GaN layers and devices can be grown on a substrate using metalorganic vapor phase epitaxy (MOVPE). When silicon is used as the substrate, the grown compound semiconductor materials have different coefficients of thermal expansion and lattice constants than the silicon wafer. These differences limit the crystalline quality that can be achieved and the maximum possible thickness of the produced layer.

"The research showed that the layered structure of an SOI wafer can act as a compliant substrate during gallium nitride layer growth and thus reduce defects and strain in the grown layers," Lemettinen notes.

GaN-based components are commonly used in blue and white LEDs. In power electronics applications, GaN diodes and transistors in particular have attracted interest, for example in frequency converters and electric cars. In radio applications, 5G network base stations are expected to use GaN-based power amplifiers in the future. In electronics applications, a GaN transistor offers low resistance and enables high frequencies and power densities.

More information: Jori Lemettinen et al. MOVPE growth of GaN on 6-inch SOI-substrates: effect of substrate parameters on layer quality and strain, Semiconductor Science and Technology (2017). DOI: 10.1088/1361-6641/aa5942
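The mismatch Lemettinen describes can be quantified with a quick back-of-the-envelope estimate. The sketch below uses standard textbook values for the lattice constants and thermal expansion coefficients of GaN and Si (the article itself gives no numbers), and an assumed cool-down of roughly 1000 K from MOVPE growth temperature:

```python
import math

# Rough estimate of why GaN growth on silicon is hard: lattice and
# thermal-expansion mismatch between GaN and Si(111). All values are
# common textbook figures, not taken from the article.

GAN_A = 3.189                    # GaN a-axis lattice constant, angstrom
SI_A = 5.431                     # Si cubic lattice constant, angstrom
SI_111_A = SI_A / math.sqrt(2)   # effective surface spacing on Si(111), angstrom

GAN_CTE = 5.6e-6   # GaN thermal expansion coefficient, 1/K (approx.)
SI_CTE = 2.6e-6    # Si thermal expansion coefficient, 1/K (approx.)
GROWTH_DT = 1000.0 # assumed cool-down from growth temperature, K

def lattice_mismatch(film_a: float, sub_a: float) -> float:
    """Relative lattice mismatch of a film grown on a substrate."""
    return (film_a - sub_a) / sub_a

def thermal_strain(film_cte: float, sub_cte: float, dt: float) -> float:
    """Strain accumulated on cool-down from the CTE difference."""
    return (film_cte - sub_cte) * dt

mismatch = lattice_mismatch(GAN_A, SI_111_A)
strain = thermal_strain(GAN_CTE, SI_CTE, GROWTH_DT)
print(f"GaN on Si(111) lattice mismatch: {mismatch:+.1%}")  # about -17%
print(f"Strain accumulated on cool-down: {strain:+.2%}")    # about +0.30%
```

The roughly -17% lattice mismatch and the ~0.3% tensile strain built up on cool-down are exactly what limit crystalline quality and layer thickness on plain silicon; the compliant SOI structure described in the article helps relax this strain.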


News Article | December 16, 2015
Site: phys.org

Schematic of the tip of a scanning tunneling microscope on a graphene nanoribbon.

Researchers at Aalto University have succeeded in experimentally realizing metallic graphene nanoribbons (GNRs) that are only 5 carbon atoms wide. In their article published in Nature Communications, the research team demonstrated fabrication of the GNRs and measured their electronic structure. The results suggest that these extremely narrow and single-atom-thick ribbons could be used as metallic interconnects in future microprocessors.

Graphene nanoribbons have been suggested as ideal wires for use in future nanoelectronics: when the size of the wire is reduced to the atomic scale, graphene is expected to outperform copper in terms of conductance and resistance to electromigration, which is the typical breakdown mechanism in thin metallic wires. However, all demonstrated graphene nanoribbons have been semiconducting, which hampers their use as interconnects.

Headed by Prof. Peter Liljeroth, researchers from the Atomic Scale Physics and Surface Science groups have now shown experimentally that certain atomically precise graphene nanoribbon widths are nearly metallic, in accordance with earlier predictions based on theoretical calculations. The team used state-of-the-art scanning tunneling microscopy (STM), which allows them to probe the material's structure and properties with atomic resolution. "With this technique, we measured the properties of individual ribbons and showed that ribbons longer than 5 nanometers exhibit metallic behaviour," says Dr Amina Kimouche, the lead author of the study.

The nanoribbon fabrication is based on a chemical reaction on a surface. "The cool thing about the fabrication procedure is that the precursor molecule exactly determines the width of the ribbon. If you want one-carbon-atom-wide ribbons, you simply have to pick a different molecule," explains Dr Pekka Joensuu, who oversaw the synthesis of the precursor molecules for the ribbons.

The experimental findings were complemented by theoretical calculations by the Quantum Many-Body Physics group headed by Dr Ari Harju. The theory predicts that when the width of the ribbons is increased atom-by-atom, every third width should be (nearly) metallic with a very small band gap. "According to quantum mechanics, normally when you make your system smaller, it increases the band gap. Graphene can work differently due to its extraordinary electronic properties," says Harju's doctoral student Mikko Ervasti, who performed the calculations.

These results pave the way for using graphene in future electronic devices, where these ultra-narrow ribbons could replace copper as the interconnect material. Future studies will focus on all-graphene devices combining both metallic and semiconducting graphene nanostructures. "While we are far from real applications, it is an extremely exciting concept to build useful devices from these tiny structures and to achieve graphene circuits with controlled junctions between GNRs," says Liljeroth.
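The "every third width" rule the theorists refer to can be reproduced with the simplest possible model: a nearest-neighbor tight-binding description of an armchair ribbon with hard-wall boundary conditions. The sketch below is a generic illustration of that family rule, not the group's actual calculation (which the article says also captures the small residual gap); in this idealized model the widths N = 3p + 2, including the 5-atom-wide ribbons studied here, come out exactly metallic. The hopping energy t = 2.7 eV is a common literature value, not from the paper:

```python
import math

# Band gap of an N-dimer-line armchair graphene nanoribbon in the
# standard nearest-neighbor tight-binding model with hard-wall
# boundary conditions (a textbook result, not the paper's method).
T_HOP = 2.7  # nearest-neighbor hopping, eV (common literature value)

def agnr_gap(n_width: int, t: float = T_HOP) -> float:
    """Band gap (eV) of an armchair GNR with n_width dimer lines."""
    # Allowed transverse momenta are q_p = p*pi/(n+1), p = 1..n; the
    # gap is twice the smallest |1 + 2 cos(q_p)| over the subbands.
    return 2 * t * min(
        abs(1 + 2 * math.cos(p * math.pi / (n_width + 1)))
        for p in range(1, n_width + 1)
    )

for n in range(3, 12):
    family = "metallic" if n % 3 == 2 else "semiconducting"
    print(f"N = {n:2d}: gap = {agnr_gap(n):5.2f} eV  ({family})")
```

Running this prints a gap of 0.00 eV for N = 5, 8 and 11 and gaps on the order of an electronvolt for the other widths, matching the pattern that every third width is (nearly) metallic; going beyond nearest-neighbor hopping turns the exact zeros into the very small band gaps the article mentions.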


News Article | January 4, 2016
Site: www.materialstoday.com

Researchers at Aalto University in Finland have succeeded in producing metallic graphene nanoribbons (GNRs) that are only five carbon atoms wide. In an article published in Nature Communications, the researchers report fabricating the GNRs and measuring their electronic structure. Their results suggest that these extremely narrow and single-atom-thick ribbons could be used as metallic interconnects in future microprocessors.

Graphene nanoribbons have been suggested as ideal wires for use in future nanoelectronics. When the size of the wire is reduced to the atomic scale, graphene is expected to outperform copper in terms of conductance and resistance to electromigration, which is the typical breakdown mechanism in thin metallic wires. However, all the graphene nanoribbons developed so far have been semiconducting, rather than metallic, hampering their use as interconnects.

Headed by Peter Liljeroth, researchers from the Atomic Scale Physics and Surface Science groups at Aalto University have now experimentally confirmed that certain atomically-precise graphene nanoribbon widths are nearly metallic, in accordance with earlier predictions based on theoretical calculations. The team used state-of-the-art scanning tunneling microscopy (STM), which allowed them to probe the graphene nanoribbons’ structure and properties with atomic resolution. “With this technique, we measured the properties of individual ribbons and showed that ribbons longer than 5nm exhibit metallic behavior,” says Amina Kimouche, the lead author of the study.

To produce graphene nanoribbons with precise widths, the researchers developed a novel fabrication process based on chemical reactions on a surface. “The cool thing about the fabrication procedure is that the precursor molecule exactly determines the width of the ribbon. If you want one-carbon-atom-wide ribbons, you simply have to pick a different molecule,” explains Pekka Joensuu, who oversaw the synthesis of the precursor molecules for the ribbons.

The experimental findings were complemented by theoretical calculations by the Quantum Many-Body Physics group headed by Ari Harju. Theory predicts that when the width of the ribbons increases atom-by-atom, every third width should be (nearly) metallic with a very small band gap. “According to quantum mechanics, normally when you make your system smaller, it increases the band gap. Graphene can work differently due to its extraordinary electronic properties,” says Harju’s doctoral student Mikko Ervasti, who performed the calculations.

These results pave the way for using graphene in future electronic devices, where these ultra-narrow ribbons could replace copper as the interconnect material. Future studies will focus on creating all-graphene devices combining both metallic and semiconducting graphene nanostructures. “While we are far from real applications, it is an extremely exciting concept to build useful devices from these tiny structures and to achieve graphene circuits with controlled junctions between GNRs,” says Liljeroth.

This story is adapted from material from Aalto University, with editorial changes made by Materials Today. The views expressed in this article do not necessarily represent those of Elsevier.


News Article | December 16, 2015
Site: www.cemag.us

Researchers have succeeded in experimentally realizing metallic graphene nanoribbons that are only 5 carbon atoms wide. The results suggest that these extremely narrow and single-atom-thick ribbons could be used as metallic interconnects in future microprocessors. In their article published in Nature Communications, the research team demonstrated fabrication of the graphene nanoribbons (GNRs) and measured their electronic structure.

Graphene nanoribbons have been suggested as ideal wires for use in future nanoelectronics: when the size of the wire is reduced to the atomic scale, graphene is expected to outperform copper in terms of conductance and resistance to electromigration, which is the typical breakdown mechanism in thin metallic wires. However, all demonstrated graphene nanoribbons have been semiconducting, which hampers their use as interconnects.

Headed by Professor Peter Liljeroth, researchers from the Atomic Scale Physics and Surface Science groups have now shown experimentally that certain atomically precise graphene nanoribbon widths are nearly metallic, in accordance with earlier predictions based on theoretical calculations. The team used state-of-the-art scanning tunneling microscopy (STM), which allows them to probe the material’s structure and properties with atomic resolution. “With this technique, we measured the properties of individual ribbons and showed that ribbons longer than 5 nanometers exhibit metallic behavior,” says Dr. Amina Kimouche, the lead author of the study.

The nanoribbon fabrication is based on a chemical reaction on a surface. “The cool thing about the fabrication procedure is that the precursor molecule exactly determines the width of the ribbon. If you want one-carbon-atom-wide ribbons, you simply have to pick a different molecule,” explains Dr. Pekka Joensuu, who oversaw the synthesis of the precursor molecules for the ribbons.

The experimental findings were complemented by theoretical calculations by the Quantum Many-Body Physics group headed by Dr. Ari Harju. The theory predicts that when the width of the ribbons is increased atom-by-atom, every third width should be (nearly) metallic with a very small band gap. “According to quantum mechanics, normally when you make your system smaller, it increases the band gap. Graphene can work differently due to its extraordinary electronic properties,” says Harju’s doctoral student Mikko Ervasti, who performed the calculations.

These results pave the way for using graphene in future electronic devices, where these ultra-narrow ribbons could replace copper as the interconnect material. Future studies will focus on all-graphene devices combining both metallic and semiconducting graphene nanostructures. “While we are far from real applications, it is an extremely exciting concept to build useful devices from these tiny structures and to achieve graphene circuits with controlled junctions between GNRs,” says Liljeroth.

The research, “Ultra-narrow metallic armchair graphene nanoribbons,” was published in Nature Communications. The study was performed at Aalto University’s Department of Applied Physics and Department of Chemistry. The groups are part of the Academy of Finland’s Centres of Excellence in Low Temperature Quantum Phenomena and Devices (LTQ) and Computational Nanosciences (COMP). The Academy of Finland and the European Research Council (ERC) funded the research.


News Article | November 2, 2016
Site: globenewswire.com

Neste Corporation Press Release 2 November 2016 at 3:00 pm (EET)

Neste donates EUR 1.5 million to universities in Finland

To celebrate the 100th anniversary of Finland's independence, Neste will donate a total of EUR 1.5 million to Finnish universities. The donation will be split between Aalto University, Åbo Akademi, Lappeenranta University of Technology, and the University of Helsinki.

According to a recent survey, Finns wish that universities would cooperate more with businesses. In selecting the donation recipients, particular attention was paid to the extent of cooperation with Neste, the share of Neste recruits who graduated from these universities, and the universities' academic success in international reviews. With the donations, Neste wants to underline its commitment to the future of Finnish young people and its appreciation of the contribution of the Finnish educational system to our 100-year-old country.

"The absolute strengths of Finland in international comparison are education accessible to everyone and highly educated people. This expertise also underlies Neste's success. With the donation, Neste wants to help ensure that Finland will continue to have the competent and diverse expertise needed for competitiveness and new innovations in the future as well," says Matti Lievonen, CEO of Neste.

Companies hoped to provide internships and degree thesis subjects

Neste carried out a survey of the attitudes of a thousand Finns aged 18 to 80 towards the funding and business cooperation of universities and higher education institutions. According to the survey, the majority of respondents (83%) thought that, in order to safeguard the employment of students, businesses and higher education institutions should increase their cooperation somewhat or considerably. The respondents considered securing internships clearly the most important form of cooperation (69%); the younger the respondent, the more important they considered internships. Assignments for degree theses (50%) and support for research (42%) were also considered good forms of cooperation.

Neste annually offers summer internships to approximately 350 young people in Finland. In addition, the company cooperates closely with universities and higher education institutions in a variety of research projects and offers degree thesis opportunities for students.

Besides the donation, Neste has had a high profile in supporting Finnish education in other ways as well. The company took part in the Tempaus2016 event organized by the Student Union of Aalto University as the main partner. Over a thousand Aalto students visited Finnish primary schools and inspired schoolchildren to learn, among other things by using an everyday-life problem toolbox. In addition, the students took photographs of their school visits, and a mosaic compiled from the photos was handed over to Olli-Pekka Heinonen, Director General of the Board of Education.

"Neste definitely wanted to be involved with such a great event that connected primary schools and institutes of higher education. Businesses must take part in developing and supporting the Finnish school system and higher education so that we will also have internationally renowned experts in the future. The survey shows that this is significant to Finns in general, too," Lievonen remarks.

More information: Osmo Kammonen, SVP, Communications and Brand Marketing, tel. +358 10 458 4885

Neste in brief

Neste (NESTE, Nasdaq Helsinki) creates sustainable choices for the needs of transport, businesses and consumers. Our global range of products and services allows customers to lower their carbon footprint by combining high-quality renewable products and oil products with tailor-made service solutions. We are the world's largest producer of renewable diesel refined from waste and residues, and we are also bringing renewable solutions to the aviation and plastics industries. We want to be a reliable partner whose expertise, R&D and sustainable practices are widely respected. In 2015, Neste's net sales stood at EUR 11 billion, and we were on the Global 100 list of the 100 most sustainable companies in the world. Read more: neste.com