News Article | April 20, 2017
Site: phys.org

Dried fruit, gravel or waste: all of it is sorted on belt systems by size and quality. Credit: Fraunhofer IOSB

Sand, gravel, coal, deicing salt or diamonds, grain, sugar, coffee or grapes and waste: a lot of everyday goods are more or less grainy. To classify this bulk material by quality and size, it must be sorted in a sophisticated process. Tens of thousands of belt-type sorting plants operate in Germany alone. Scientists of the Karlsruhe Institute of Technology (KIT) and the Fraunhofer Institute for Optronics, Systems Technology and Image Evaluation (IOSB) have developed a system that is able to sort much faster, more cheaply and more accurately than previous techniques.

Bulk materials are processed in nearly all industries, including building construction, logistics, chemistry and agriculture. Given the large number of plants, any improvement of the sorting steps needed everywhere could save billions. "The economic benefit of this project can hardly be overestimated; enormous resources could be saved," says Professor Uwe Hanebeck, head of the Chair of Intelligent Sensor-Actuator Systems (ISAS) at KIT, in describing the new technology. Because the system exploits the patterns of movement of the objects to be sorted, much more precise classification is possible than in conventional sorting systems. Industry has already shown interest in the new technology, says Professor Thomas Längle, who is responsible for the project at the Fraunhofer IOSB.

"Classical belt-type sorting systems scan the material to be sorted by means of a camera; parts that are not wanted are blown out by pressurized-air nozzles," explains Georg Maier of the IOSB. There is a problem: the cameras now in use scan the objects only on a short belt section, which allows only rough classification. As a consequence, several runs are necessary to achieve a satisfactory result.

The team led by Längle and Hanebeck developed a sorting system in which cameras operating from different perspectives capture a more precise view of the bulk material. In this way, objects of different categories can be distinguished more effectively. In addition, algorithms based on the images predict how objects are going to move on the belt, which makes it possible to sort out foreign bodies much more precisely.

Benjamin Noack of the ISAS explains the advantages of the new sorting system with the example of spheres and hemispheres: "When seen from the top, they look alike. While hemispheres normally remain still on the belt, spheres are restless, which additionally complicates the sorting process. However, when seen from the side, spheres and hemispheres can be distinguished." The system is now also able to predict the behavior of these objects and adjust its operation accordingly, achieving better classification, says Noack.

The scientists also found a way to increase the accuracy of existing sorting systems quickly and at low cost. "This requires nothing but a software update," explains Hanebeck. Besides KIT and the IOSB, the Ruhr University of Bochum (RUB) and the Technical University of Berlin (TUB) are involved in the project; scientists at those institutions work in particular on realistic simulations of bulk material and the numerical design of sorting systems.
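The motion-prediction step lends itself to a small illustration. The following Python fragment is a minimal sketch of the idea described above, assuming a constant-velocity model estimated from successive camera frames; the function name and all numbers are invented for the example and are not from the KIT/IOSB system.

    # Minimal sketch: predict where and when a tracked particle crosses the
    # nozzle bar. Assumes a constant-velocity model from two camera frames;
    # all names and numbers are illustrative, not from the KIT/IOSB system.
    def predict_nozzle_crossing(track, nozzle_y, frame_dt):
        """track: list of (x, y) centroid positions from successive frames."""
        (x0, y0), (x1, y1) = track[-2], track[-1]
        vx, vy = (x1 - x0) / frame_dt, (y1 - y0) / frame_dt
        if vy <= 0:                        # not moving toward the nozzle bar
            return None
        t = (nozzle_y - y1) / vy           # time until the bar is reached
        return x1 + vx * t, t              # predicted lateral position, ETA

    # Example: a 'restless' sphere drifting sideways while the belt advances.
    hit = predict_nozzle_crossing([(10.0, 0.0), (10.4, 2.0)], 50.0, 0.01)
    if hit is not None:
        x_pred, eta = hit
        print(f"fire nozzle near x = {x_pred:.1f} mm in {eta * 1000:.0f} ms")

A multi-camera system such as the one described would fuse several such views (for example with a Kalman filter) rather than differencing two frames, but the timing logic for the pressurized-air nozzles is the same in spirit.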


News Article | April 17, 2017
Site: www.spie.org

Novel hybrid polymers for fabrication of advanced micro-optics

The enduring search for innovative materials, as solutions for the fabrication of advanced micro-optics, remains a crucial part of achieving the commercial success of emerging production technologies. In this regard, innovative optical polymers pave the way for easy, cost-effective, and reliable fabrication of micro-optical devices. To enable a broad range of applications, however, these materials must satisfy numerous requirements. These requirements derive both from the initial processing itself (e.g., the need for broad and reliable process windows) and from the final application (e.g., the need for transparency at certain wavelengths or wavelength bands, as well as non-yellowing behavior, a specific refractive index, and high-temperature stability).

On the basis of the Fraunhofer-Gesellschaft's ORMOCER® technology, UV-curable inorganic-organic hybrid polymers have become established materials that are used in industry for a variety of optical applications (e.g., microlens arrays, diffractive gratings, and wafer-level optics).1 These hybrid polymers consist of inorganic and organic components that are covalently bonded at the molecular level. The materials thus combine the favorable properties of both material classes and satisfy many different requirements for industrial micro- and nanofabrication. In addition, these materials can be processed with standard UV-exposure setups,2,3 as well as UV-molding and laser lithography systems,4 which means they can be implemented in mass-production environments. Since their original commercialization in the early 2000s, our team at micro resist technology GmbH has achieved a continued upscaling in the production volume of these hybrid materials. As a result, more than 1000kg of these materials has now been produced, and they are supplied to a constantly growing number of users and consumers worldwide (in academia and industry). These innovative hybrid materials are therefore a viable consumable for the industrial fabrication of hybrid polymeric micro-optics.5

Direct fabrication of individual high-precision 3D photonic structures from the hybrid materials, however, is currently limited by the available generic techniques (including UV lithography, UV molding, and laser-direct writing). To further extend the potential applications of UV-curable inorganic-organic hybrid materials, we have thus investigated the fabrication of optical interconnects made from these materials.6 For this work, which we conducted in collaboration with Multiphoton Optics GmbH (MPO),7 we used the two-photon absorption (TPA) lithography technique4,8 because it allows the direct fabrication of individual high-precision 3D photonic structures. In the first part of the study (at MPO), we used TPA to fabricate different optical elements that have almost no limitation in shape or size. As an example, we show a 133μm-diameter Fresnel lens in Figure 1.

Figure 1. Scanning electron microscope (SEM) image of a Fresnel lens (top view). The lens has a diameter of 133μm and was fabricated using two-photon absorption (TPA) lithography and a generic ORMOCER-based hybrid polymer. (Image courtesy of Multiphoton Optics GmbH, MPO.)

We have also developed new formulations of the hybrid polymers so that we can obtain ideal conditions for the fabrication of single-mode and multimode waveguides.
In our preliminary assessment, we found that some of our novel hybrid polymer formulations have a significantly increased processing window (i.e., the regime in which fabrication of waveguides is feasible and no material overexposure occurs) compared with generic formulations (i.e., commercially available hybrid polymers). In Figure 2, for instance, we illustrate the changes in lithographic performance (in terms of writing laser power and writing velocity) of our TPA-modified hybrid polymers that we can achieve in the high-precision 3D-printing process.

Figure 2. Illustration of the lithographic performance of TPA-modified ORMOCER-based hybrid polymers that can be achieved in a high-precision TPA 3D-printing process. In both plots the exposure dose (power) is plotted as a function of the writing speed (velocity), and the colored (blue) area indicates the available processing window, i.e., where no material malfunction (such as blistering) occurs. In (a) there is only a limited process window. In (b), however, the process window has been substantially increased by altering the photoinitiator (the same base material was used in both cases). The material in (b) was used as the novel TPA-adapted hybrid polymer system in this study.

As a proof-of-principle, we fabricated a specimen to demonstrate pitch adaption (see Figure 3). This demonstrator consisted of six waveguides, and we adapted its pitch from 125 to 62.5μm (left to right). The embedded 3D waveguides in this structure were written over a 500μm gap between two glass substrates. In this way we were able to 'simulate' a chip-to-chip connection and demonstrate that 3D waveguides (which form a connection between different optical elements) are feasible in adapted hybrid polymers. This optical interconnect technique is a promising candidate for implementation in industry because it can substitute for the tedious process of actively and passively aligning different optical elements within assemblies. Our process is simpler because it requires only passive alignment and consists of three processing steps: coating the components with the hybrid polymer resin, pre-exposing the polymer material via a UV-flash exposure, and fabricating the waveguides via the TPA process.

Figure 3. Top-view optical microscope image of six waveguide cores embedded in a cladding matrix of an ORMOCER-based hybrid polymer. The pitch of the waveguides has been adapted from 125 to 62.5μm (left to right). As a proof-of-principle for chip-to-chip coupling, the waveguides were fabricated on two glass substrates (with a gap of about 500μm between them) at a height of 150–170μm. (Image courtesy of MPO.)

In another part of our work, we investigated the generation of a high-precision 3D optical micropattern. In this basic parameter study we examined woodpile-like microstructures (see Figure 4), and we thereby experimentally verified the large processing window of the hybrid polymers. Furthermore, we observed different degrees of stiffness in the material, i.e., we categorized the structures of the woodpiles as 'excellent' or 'collapsing.' We observed 'partly collapsing' structures when the energy dose was too small to sufficiently cross-link the material (by TPA-induced polymerization), which may be an advantageous setting for applications where material flexibility is required.
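The process-window idea can be made concrete with a back-of-envelope model. In two-photon absorption the polymerization rate scales with the square of the laser intensity, so the dose deposited along a written line grows roughly as P²/v; the Python sketch below uses this scaling with invented threshold values, purely for illustration and not as measurements from this study.

    # Back-of-envelope TPA process-window check: dose ~ P**2 / v, bounded
    # below by the polymerization threshold and above by material damage
    # (e.g., blistering). Threshold values here are invented for illustration.
    def in_process_window(power_mw, speed_mm_s, dose_min=5.0, dose_max=60.0):
        dose = power_mw ** 2 / speed_mm_s   # arbitrary units
        return dose_min <= dose <= dose_max

    # Scan the parameter grid of the woodpile test (1-10 mW, 0.1-1 mm/s):
    for p in (1, 4, 7, 10):
        print(p, [in_process_window(p, v) for v in (0.1, 0.4, 0.7, 1.0)])

In this picture, broadening the process window (as in Figure 2b) corresponds to pushing dose_min down and dose_max up, here achieved by changing the photoinitiator.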
Figure 4. Top-view SEM image of a woodpile-like microstructure that has been exposed to varying extents by changing the TPA lithography parameters. From left to right, the average fabrication power was increased from 1 to 10mW. From top to bottom, the writing velocity was increased from 0.1 to 1mm/s. The total fabrication time was about 2 hours. This structure is thus an experimental verification of the broad processing window of the modified ORMOCER-based hybrid polymers. (Image courtesy of MPO.)

In summary, we have evaluated the properties of novel ORMOCER-based hybrid polymers that have been adapted for high-precision 3D printing by TPA lithography. We found that some of our TPA modifications significantly increase the processing windows of the polymers compared with generic formulations. We therefore selected certain TPA-tailored polymer materials for our demonstrations and characterizations. From these novel materials, we fabricated a number of advanced optical elements (e.g., pitch-adapted 3D-embedded waveguides and a woodpile-like optical micropattern). The results illustrate that hybrid polymers are not only viable materials for the industrial fabrication of polymeric micro-optics (with the use of conventional processes such as UV molding), but are also potential candidates for fabricating optical waveguide systems (down to the chip level) once emerging manufacturing techniques based on TPA are implemented in industrial environments. Because of their versatility, ORMOCER-based hybrid polymers can therefore continue to meet the increasing expectations of the dynamically growing micro-optics markets. In our future research we will further investigate system integration, because the hybrid polymers may facilitate mix-and-match approaches (i.e., where advanced TPA-based processing is combined with generic photolithography or inkjet printing).

micro resist technology GmbH

Arne Schleunitz received his PhD from the Karlsruhe Institute of Technology (Germany) in 2009 and was a postdoctoral researcher at the Paul Scherrer Institut (Switzerland) until 2012. With his in-depth knowledge of nanolithography processes, he joined micro resist technology as head of technology management. He was promoted to chief technology officer in 2016.

Jan Klein earned his PhD in organic chemistry from Humboldt University of Berlin (Germany) in 2011. He has since been working as a research scientist and project leader at micro resist technology, with the goal of innovating the field of hybrid polymers. In 2013 he became the product manager for hybrid polymers.

Multiphoton Optics GmbH

Ruth Houbertz is a physicist and has been the chief executive officer and managing director of Multiphoton Optics since 2014. She obtained her PhD from the University of Ulm, Germany, in 1993, and between 1993 and 1999 she built a team at the University of Saarbrücken, Germany. From 1999 to 2000 she worked at Sandia National Laboratories, CA, and she then held various technical and management positions at the Fraunhofer Institute for Silicate Research, Germany, between 2000 and 2012. She holds more than 90 patents and has received numerous awards, including the Joseph von Fraunhofer Award in 2007, the Green Photonics Award in 2013, and the Cowin Award of Entrepreneurship in 2014.

Alexander Krupp studied physics at the Technical University in Munich (Germany) and received his diploma at the Walther-Meißner-Institute for Low-Temperature Research (Germany) in 2010, where he studied ferromagnetic Heusler compound thin films.
In 2014, he finished his PhD on ultracold rubidium Rydberg atoms and molecules in Stuttgart (Germany). Since 2015 he has been a dedicated application engineer at Multiphoton Optics.

References:
1. C. Gimkiewicz, M. Moser, S. Obi, C. Urban, J. S. Pedersen, H. Thiele, C. Zschokke and M. T. Gale, Wafer-scale replication and testing of micro-optical components for VCSELs, Proc. SPIE 5453, pp. 13-26, 2004. doi:10.1117/12.544793
3. U. Streppel, P. Dannberg, C. Waechter, A. Braeuer, P. Nicole, L. Froehlich, R. Houbertz and M. Popall, Development of a new fabrication method for stacked optical waveguides using inorganic-organic copolymers, IEEE Conf. Polym. Adhes. Microelectron. Photon., 2001. doi:10.1109/POLYTR.2001.973304
4. R. Houbertz, S. Steenhusen, T. Stichel and G. Sextl, Two-photon polymerization of inorganic-organic hybrid polymers as scalable technology using ultra-short laser pulses, in Coherence and Ultrashort Pulse Laser Emission, pp. 583-608, InTech, 2010. doi:10.5772/13203
6. A. Schleunitz, J. J. Klein, A. Krupp, B. Bender, R. Houbertz and G. Grützner, Evaluation of hybrid polymers for high-precision manufacturing of 3D optical interconnects by two-photon absorption lithography, Proc. SPIE 10109, p. 1010905, 2017. doi:10.1117/12.2252318
8. T. Stichel, B. Hecht, R. Houbertz and G. Sextl, Two-photon polymerization as method for the fabrication of large scale biomedical scaffold applications, J. Laser Micro/Nanoeng. 5, pp. 209-212, 2010.


News Article | May 1, 2017
Site: motherboard.vice.com

The world's smallest racecars faced off over the weekend on a track thinner than a human hair. It was the debut Nanocar Grand Prix, an international challenge to design and race vehicles made of single molecules. Developed and hosted by the Center for the Development of Materials and Structural Studies (CEMES) in Toulouse, France, the race was conceived in 2013 as a way to showcase test platforms for mobile nanotechnologies through the fun and relatable medium of automotive racing.

The small-scale showdown kicked off early Saturday with a heat of six nanoscale competitors, built by teams from around the world. Instead of burning rubber, these cars were propelled along a gold crystal surface by periodic electric shocks that the teams administered with a scanning tunneling microscope. Some of the tiny buggies were designed to resemble real cars, while others took inspiration from the movement patterns of caterpillars or windmills. This diversity reflects the history of normal-scale racecar driving, according to Christian Joachim, a CEMES nanoscientist and the director of the race. "In 1894, the first ever car race was organized between Paris and Rouen and if you look carefully, they decided at that time to keep all kinds of propulsion," Joachim told LiveScience. "We accepted a large variation of molecular designs on purpose to try to understand what works best."

The Green Buggy, developed by Joachim and his fellow CEMES researchers, may have been the hometown favorite, but it broke down on the starting line, according to the leaderboard. The NIMS-MANA Car, representing Japan's National Institute for Materials Science (NIMS), made it one nanometer before sputtering out.

Microscopic image of the Green Buggy. Image: CEMES/CNRS/Nanomobile Club

Meanwhile, the Dipolar Racer, a joint American-Australian entry, gained an early lead with an average speed of 35 nanometers per hour and won the race with 450 nanometers under its belt over nine hours. The silver medal went to the University of Basel's Swiss Nano Dragster, with 133 nanometers of mileage, followed by Ohio University's Bobcat Nano-Wagon and the Technical University of Dresden's Windmill car, with 45 and 11 nanometers respectively.

Devising ways to manipulate nanoscale entities is a hot topic, especially since the 2016 Nobel Prize in Chemistry was awarded for research into molecular machines and motors. But for Joachim and the other racers, there is no clear roadmap for the future of these tiny cars, and that's part of the fun. "We don't really know, with this technology, what we will get out of it," he said. "Right now, we are just opening up the technology and science."


News Article | April 17, 2017
Site: www.nature.com

In the search for new physics, experiments based on high-energy collisions inside massive atom smashers are coming up empty-handed. So physicists are putting their faith in more-precise methods: less crash-and-grab and more watching-ways-of-wobbling.

Next month, researchers in the United States will turn on one such experiment. It will make a super-accurate measurement of the way that muons, heavy cousins of electrons, behave in a magnetic field. And it could provide evidence of the existence of entirely new particles.

The particles hunted by the new experiment, at the Fermi National Accelerator Laboratory (Fermilab) in Batavia, Illinois, comprise part of the virtual soup that surrounds and interacts with all forms of matter. Quantum theory says that short-lived virtual particles constantly 'blip' in and out of existence. Physicists already account for the effects of known virtual particles, such as photons and quarks. But the virtual soup might have mysterious, and as yet unidentified, ingredients. And muons could be particularly sensitive to them.

The new Muon g−2 experiment will measure this sensitivity with unparalleled precision. And in doing so, it will reanalyse a muon anomaly that has puzzled physicists for more than a decade. If the experiment confirms that the anomaly is real, then the most likely explanation is that it is caused by virtual particles that do not appear in the existing physics playbook: the standard model. "It would be the first direct evidence of not only physics beyond the standard model, but of entirely new particles," says Dominik Stöckinger, a theorist at the Technical University of Dresden, Germany, and a member of the Muon g−2 collaboration.

Physicists are crying out for a successor to the standard model, a theory that has been fantastically successful yet is known to be incomplete because it fails to account for many phenomena, such as the existence of dark matter. Experiments at the Large Hadron Collider (LHC) at CERN, Europe's particle-physics lab near Geneva, Switzerland, have not revealed a specific chink, despite performing above expectation and carrying out hundreds of searches for physics beyond the standard model. The muon anomaly is one of only a handful of leads that physicists have.

Measurements of the muon's magnetic moment, a fundamental property that relates to the particle's inherent magnetism, could hold the key, because it is tweaked by interactions with virtual particles. When last measured 15 years ago at the Brookhaven National Laboratory in New York, the muon's magnetic moment was larger than theory predicts. Physicists think that interaction with unknown particles, perhaps those envisaged by a theory called supersymmetry, might have caused this anomaly. Other possible explanations are a statistical fluke, or a flaw in the theorists' standard-model calculation, which combines the complex effects of known particles. But the latter is becoming less likely, says Stöckinger: new calculation methods and experimental cross-checks make the theoretical side much more robust than it was 15 years ago.

"With this tantalizing result from Brookhaven, you really have to do a better experiment," says Lee Roberts, a physicist at Boston University in Massachusetts, who is joint leader of the Muon g−2 experiment. The Fermilab set-up will use 20 times the number of muons used in the Brookhaven experiment to shrink the uncertainty by a factor of 4.
"If we agree, but with much smaller error, that will show definitively that there's some particle that hasn't been observed anywhere else," he says.

To probe the muons, Fermilab physicists will inject the particles into a magnetic field contained in a ring some 14 metres across. Each particle has a magnetic property called spin, which is analogous to Earth spinning on its axis. As the muons travel around the ring at close to the speed of light, their axes of rotation wobble in the field, like off-kilter spinning tops. Combining this precession rate with a measurement of the magnetic field gives the particles' magnetic moment.

Since the Brookhaven result, some popular explanations for the anomaly, including effects of hypothetical dark photons, seem to have been ruled out by other experiments, says Stöckinger. "But if you look at the whole range of scenarios for physics beyond the standard model, there are many possibilities." Although a positive result would give little indication of exactly what the new particles are, it would provide clues to how other experiments might pin them down. If the relatively large Brookhaven discrepancy is maintained, it can only come from relatively light particles, which should be within reach of the LHC, says Stöckinger, even if they interact so rarely that it takes years for them to emerge.

Indeed, the desire to build on previous findings is so strong that, to avoid possible bias, Fermilab experimenters will process their incoming results 'blind' and apply a different offset to each of the two measurements that combine to give the magnetic moment. Only once the offsets are revealed will anyone know whether they have proof of new particles hiding in the quantum soup. "Until then nobody knows what the answer is," says Roberts. "It will be an exciting moment."
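The wobble measurement has a compact formula behind it: the anomalous precession frequency is omega_a = a_mu * (e/m) * B, where a_mu = (g−2)/2 is the quantity being measured. As a rough numeric sketch (the 1.45 T field value for the storage ring is an assumption added here, not a figure from the article):

    # Anomalous spin precession of the muon: omega_a = a_mu * (e/m_mu) * B,
    # with a_mu = (g-2)/2. The 1.45 T field is assumed for illustration.
    import math

    a_mu = 1.16592e-3          # anomalous magnetic moment, (g-2)/2
    e    = 1.602176634e-19     # elementary charge, C
    m_mu = 1.883531627e-28     # muon mass, kg
    B    = 1.45                # storage-ring magnetic field, T

    omega_a = a_mu * (e / m_mu) * B                    # rad/s
    print(f"{omega_a / (2 * math.pi) / 1e3:.0f} kHz")  # roughly 229 kHz

Because a_mu multiplies the whole expression, any extra contribution from unknown virtual particles shifts this frequency, which is why the experiment chases it to such extreme precision.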


News Article | May 8, 2017
Site: www.eurekalert.org

After seven years of schooling, children who started learning English in the first grade achieve poorer results in the subject than children whose first English lesson didn't come until the third grade. This is according to findings by the team headed by Dr Nils Jäkel and Prof Dr Markus Ritter at Ruhr-Universität Bochum. The researchers evaluated data gathered in a large longitudinal study in North Rhine-Westphalia, Germany, that was carried out between 2010 and 2014. The results have been published in the journal Language Learning.

"Starting foreign-language lessons at an early age is often very much commended, even though hardly any research exists that would support this myth," says Nils Jäkel from the Chair of English Language Teaching in Bochum. Together with colleagues from Bochum and from the Technical University of Dortmund, he analysed data from 5,130 students at 31 secondary schools of the Gymnasium type in North Rhine-Westphalia. The researchers compared two student cohorts, one of which started learning English in the first grade, the other in the third grade. They evaluated the children's reading and listening proficiency in English in the fifth and in the seventh grade.

In the fifth grade, children who had had their first English lessons very early in elementary school achieved better results in reading and listening proficiency. This had changed by the seventh grade: by then, the latecomers, i.e. children who didn't start to learn English until the third grade, were better. "Our study confirmed results from other countries, for example Spain, that show that early English lessons of one or two hours per week in elementary school aren't very conducive to attaining language competence in the long term," says Jäkel. In the coming months, he and his colleagues are going to analyse additional data to investigate whether the results can be confirmed for the ninth grade.

A possible interpretation of the results: "Early English-language lessons in elementary school take place at a time when deep immersion would be necessary to achieve sustainable effects," says Nils Jäkel. "Instead, the children attend English lessons that amount to 90 minutes per week at most." Moreover, the authors point out a rupture that takes place during the transition from elementary school to grammar school. "Broadly speaking, the predominantly playful, holistically structured elementary-school lessons make way for rather more cognitive, intellectualised grammar-school methodology," says Jäkel. In elementary school, English is taught through child-appropriate, casual immersion in and experience of the foreign language through rhymes, songs, movement and stories. Secondary schools focus primarily on prescribed grammar and vocabulary lessons.

This would explain why the early advantages in listening proficiency that are identified in the fifth grade are partially forfeited by the seventh grade, as the authors elaborate; this is possibly due to a lapse in motivation, as students feel the rupture more keenly after experiencing four years of English lessons in elementary school. It is also possible that the potential of English lessons at an early stage has not been fully exploited, as they were rather hastily adapted for the first grade. "When English lessons were introduced in elementary school, many teachers had to qualify for lateral entry on short notice," explains Jäkel.

With their findings, the researchers do not question early English lessons as such.
On the contrary, they see early exposure as an important factor contributing to the European multilingualism we aspire to, as it paves the way for further language acquisition in secondary schools. Early English lessons might help make children aware of linguistic and cultural diversity. "But it would be wrong to have unreasonably high expectations," says Jäkel. "A reasonable compromise might be the introduction of English in the third grade, with more lessons per week." It is just as important to better coordinate the didactic approaches at the elementary and grammar school levels; here, teachers at these two different types of school could learn from each other.

In 2007, the three universities in the Ruhr region launched a strategic collaboration under the umbrella of the University Alliance (UA) Ruhr. By consolidating their resources, the partner universities systematically boost their performance. True to the notion "better together", more than 100 research, teaching and administrative collaborations have been established to date. With more than 115,000 students and almost 1,300 professors, UA Ruhr is one of the largest and most powerful academic hubs in Germany.


News Article | May 4, 2017
Site: www.theengineer.co.uk

Graphene Flagship researchers in Vienna unveil molybdenum disulphide integrated circuit with potential for low-power electronics

While graphene is the undoubted star of ultra-thin materials, other substances that can be separated into crystals only a few atoms thick are also attracting attention. Graphene Flagship researchers at the Technical University of Vienna (TU Vienna) have now revealed an integrated circuit constructed from molybdenum disulphide (MoS2), a material known as a transition metal dichalcogenide that exhibits semiconducting properties. The processor chip contains 115 transistors, is capable of one-bit logic operations and is scalable to multibit operations that would be more useful for computing, claim the researchers. Because such circuits are so thin and lightweight, they could potentially be used in wearable systems and other ubiquitous computing applications without adding bulk, they add.

"In principle, it's an advantage to have a thin material for a transistor. The thinner the material, the better the electrostatic control of the transistor channel, and the smaller the power consumption," said Thomas Mueller (TU Vienna), who led the work. Mueller believes that such processors might be combined with LEDs to make a new type of smart paper, or could form components of the intelligence of smart sensors for Industry 4.0 or IoT applications. This, however, would entail increasing the complexity of the system. "Our goal is to realise significantly larger circuits that can do much more in terms of useful operations. We want to make a full 8-bit design, or even more bits, on a single chip with smaller feature sizes."

His colleague Stefan Wachter, who co-authored a paper on the research in Nature Communications, added that even increasing the design to two bits would double the complexity of the circuit. However, the team is already formulating a strategy for increasing the number of transistors on the chip. One idea might be to grow the structures as crystals directly, rather than transferring them from a separate fabrication process. The medium-term goal is to find a way to produce a chip with tens of thousands of transistors, although this would still be well below conventional silicon microchips, which can contain billions.
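What "one-bit logic operations, scalable to multibit" means can be pictured in software: a single 1-bit stage, here a full adder, chains into an n-bit ripple-carry unit. The sketch below is a generic illustration of that composition, not the circuit published in Nature Communications.

    # A 1-bit full adder chained into an n-bit ripple-carry adder: a generic
    # illustration of '1-bit logic, scalable to multibit', not the MoS2 chip.
    def full_adder(a, b, carry_in):
        s = a ^ b ^ carry_in
        carry_out = (a & b) | (carry_in & (a ^ b))
        return s, carry_out

    def ripple_add(a_bits, b_bits):
        """LSB-first bit lists; returns sum bits plus the final carry."""
        carry, out = 0, []
        for a, b in zip(a_bits, b_bits):
            s, carry = full_adder(a, b, carry)
            out.append(s)
        return out + [carry]

    print(ripple_add([1, 1, 0], [1, 0, 1]))   # 3 + 5 = 8 -> [0, 0, 0, 1]

Each extra bit adds another copy of the 1-bit stage, which is why Wachter notes that even a two-bit design doubles the complexity of the circuit.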


News Article | May 8, 2017
Site: www.prweb.com

SAE International announced that Professor Nadir Yilmaz was elected as a 2016-2017 SAE Fellow. Dr. Yilmaz was one of 23 Fellow inductees recognized during ceremonies at the WCX 17: SAE World Congress Experience in Detroit, Michigan. The highest grade of membership, SAE Fellow recognizes and honors long-term members who have made a significant impact on society's mobility technology through leadership, research, and innovation. Established in 1975, the grade of Fellow is administered by the SAE Fellow Committee.

A native of Adana, Turkey, Dr. Nadir Yilmaz is a Professor and the Department Chair of Mechanical Engineering at Howard University in Washington, DC, as well as a licensed professional engineer (P.E.) and consultant. As an SAE Fellow, Dr. Yilmaz is recognized as a technical leader in alternative fuels and polycyclic aromatic hydrocarbons (PAHs) in diesel engines. His research led to a paradigm shift in understanding the root causes of the failure of 70% of U.S. Army diesel engines. Additionally, his service and dedication include exemplary leadership in automotive engineering education, SAE, and industry at large. Dr. Yilmaz is also a Fellow of ASME (American Society of Mechanical Engineers) and a Fellow of NSPE (National Society of Professional Engineers).

Dr. Yilmaz received his B.S., M.S., and Ph.D. degrees in Mechanical Engineering from Istanbul Technical University, Bradley University and New Mexico State University, respectively. He is a noted author of many publications in the fields of combustion, CFD, rocket propellants and internal combustion engines. He was a U.S. Department of Energy Visiting Faculty Fellow at Sandia National Laboratories in 2012 and 2013. He has educated and trained many underrepresented minority and female students via various STEM programs, including the NSF-funded AMP (Alliance for Minority Participation) program.

Dr. Yilmaz serves as the editor-in-chief of the SAE International Journal of Fuels and Lubricants and as an associate editor of the ASME Journal of Engineering for Gas Turbines and Power. Besides serving on the SAE ABET Accreditation Board, he is the Chair of the SAE International Education Board. In addition, Dr. Yilmaz serves on the Board of Directors of the Turkish Society of Automotive Engineers.

Dr. Yilmaz has received numerous awards at the state, national and international levels. Among his top awards, he is the 2014 NSPE National Young Engineer of the Year, an honor given nationally to a professional engineer under the age of 35 in the United States. He also won the ASEE Section Outstanding Teaching Award (2013), the SAE International Excellence in Engineering Education "Triple E" Award (2016), the SAE Ralph R. Teetor Educational Award (2011) and the SAE Faculty Advisor Award (2013 and 2016).

SAE International is a global association committed to being the ultimate knowledge source for the engineering profession. By uniting over 127,000 engineers and technical experts, we drive knowledge and expertise across a broad spectrum of industries. We act on two priorities: encouraging a lifetime of learning for mobility engineering professionals and setting the standards for industry engineering. We strive for a better world through the work of our philanthropic SAE Foundation, including programs like A World in Motion® and the Collegiate Design Series™.


In a soft-furnished studio space behind a warehouse in west Berlin, a group of international scientists are debating our robot future. An engineer from a major European carmaker is just finishing a cautiously optimistic progress report on self-driving vehicles. Increasingly, he explains, robot cars are learning to differentiate cars from more vulnerable moving objects such as pedestrians or cyclists. Some are already better than humans at telling apart different breeds of dog. "But of course," he says, "these are small steps."

Then a tall, athletic man in a light-grey three-piece suit, with a greying goatee, who has spent most of the morning playing with his smartphone, strides to the podium, and suddenly baby steps become interstellar leaps. "Very soon, the smartest and most important decision makers might not be human," he says, with the pitying smile of a parent explaining growing pains to a teenager. "We are on the verge not of another industrial revolution, but a new form of life, more like the big bang."

Jürgen Schmidhuber has been described as the man the first self-aware robots will recognise as their papa. The 54-year-old German scientist may have developed the algorithms that allow us to speak to our computers or get our smartphones to translate Mandarin into English, but he isn't very keen on the idea that robots of the future will exist primarily to serve humanity. Instead, he believes machine intelligence will soon not just match that of humans, but outstrip it, designing and building heat-resistant robots that can get much closer to the sun's energy sources than thin-skinned Homo sapiens, and eventually colonising asteroid belts across the Milky Way with self-replicating robot factories. And Schmidhuber is the person who is trying to build their brains.

As we lower ourselves on to a pair of beanbags after his talk, Schmidhuber explains that in a laboratory in Lugano in the Swiss Alps his company Nnaisense is already developing systems that function much like babies, setting themselves little experiments in order to understand how the world works: "true AI", as he calls it. The only problem is that they are still too slow, with around a billion neural connections compared with around 100,000bn in the human cortex. "But we have a trend whereby our computers are getting 10 times faster every five years, and unless that trend breaks, it will only take 25 years until we have a recurrent neural network comparable with the human brain. We aren't that many years away from an animal-like intelligence, like that of a crow or a capuchin monkey." How many years, exactly? "I think years is a better measure than decades, but I wouldn't want to tie myself down to four or seven."
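The 25-year figure is straightforward extrapolation from the two numbers he quotes: closing the gap between about 10^9 connections and the cortex's roughly 10^14 is five orders of magnitude, at one order of magnitude every five years. As a trivial check:

    # Schmidhuber's extrapolation: 10x more compute every 5 years, from
    # ~1e9 connections today to ~1e14 in the human cortex.
    import math

    gap_orders = math.log10(1e14 / 1e9)   # 5 orders of magnitude to close
    print(gap_orders * 5)                 # 25.0 years at 10x per 5 years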
When I ask how he can be so confident about his timetable, he launches the hyperdrive. Suddenly we are jumping from the big bang to the neolithic revolution, from the invention of gunpowder to the world wide web. Major events in the history of the universe, Schmidhuber says, seem to be happening at exponentially accelerating intervals, each landmark coming around a quarter of the time after the previous one. If you study the pattern, it looks like it is due to converge around the year 2050. "In the year 2050 time won't stop, but we will have AIs who are more intelligent than we are and will see little point in getting stuck to our bit of the biosphere. They will want to move history to the next level and march out to where the resources are. In a couple of million years, they will have colonised the Milky Way."

He describes this point of convergence as "omega", a term first coined by Teilhard de Chardin, a French Jesuit priest born in 1888. Schmidhuber says he likes omega "because it sounds a bit like 'Oh my God'".

Schmidhuber's status as the godfather of machine intelligence is not entirely undisputed. For a computer scientist, he can sometimes sound surprisingly unscientific. During his talk in Berlin, there were audible groans from the back of the audience. When Schmidhuber outlined how robots would eventually leave Earth behind and "enjoy themselves" exploring the universe, a Brazilian neuroscientist interrupted: "Is that what you are saying? That there is an algorithm for fun? You are destroying the scientific method in front of all these people. It's horrible!" When asked about those reactions, Schmidhuber has that pitying look again. "My theses have been controversial for decades, so I am used to these standard arguments. But a lot of neuroscientists have no idea what is happening in the world of AI."

But even within the AI community, Schmidhuber has his detractors. When I mentioned his name to people working on artificial intelligence, several said his work was undoubtedly influential and "getting more so", but also that he had "a bit of a chip on his shoulder". Many felt his optimism about the rate of technological progress was unfounded, and possibly dangerous. Far from being the true seer of the robot future, one suggested, Schmidhuber was pushing artificial intelligence to a destiny similar to that of the Segway, a product whose advent was hyped as a technological revolution akin to the invention of the PC and which ended up as a slapstick prop in Paul Blart: Mall Cop.

To understand why Schmidhuber yo-yos between prophet and laughing stock, one has to dive deeper into his CV. Born in Munich in 1963, he became interested in robotics during puberty, after picking up rucksacks full of popular science books and sci-fi novels from the nearby library; Olaf Stapledon's Star Maker, E.T.A. Hoffmann's The Sandman and the novels of Stanislaw Lem were particular favourites. His great hero, "my wonderful idol", he says, was Albert Einstein. "At some point I realised I could have even more influence if I built something that is even smarter than myself, or even smarter than Einstein." He embarked on a degree in mathematics and computer science at Munich's Technical University, which handed him a professorship at the age of 30.

In 1997, Schmidhuber and one of his students, Sepp Hochreiter, wrote a paper that proposed a method by which artificial neural networks (computer systems that mimic the human brain) could be boosted with a memory function: adding loops that interpret patterns of words or images in the light of previously obtained information. They called it Long Short-Term Memory (LSTM). At the time, AI was going through a prolonged "winter": the technology had failed to live up to the first wave of hype around artificial intelligence, and funding was hard to come by. In the 1960s, the hope had been that machines could be coded top-down to understand the world in all its complexity. If there is a new buzz now, it is around a seemingly simpler idea: that machines could be fitted with an algorithm that is relatively basic, but that enables them to gradually learn bottom-up how complex the world really is.
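The memory loop at the heart of LSTM can be sketched in a few lines. What follows is a bare-bones rendering of the standard LSTM cell equations in NumPy, for illustration only; it is the textbook formulation rather than the 1997 paper's exact notation.

    # Bare-bones LSTM cell: gated loops let the cell carry information
    # across time steps. Standard textbook equations, for illustration.
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def lstm_step(x, h, c, W, U, b):
        """One time step; W, U, b hold the stacked gate parameters."""
        z = W @ x + U @ h + b              # pre-activations for all gates
        n = len(c)
        f = sigmoid(z[:n])                 # forget gate: what to keep in c
        i = sigmoid(z[n:2 * n])            # input gate: what to write to c
        o = sigmoid(z[2 * n:3 * n])        # output gate: what to reveal
        g = np.tanh(z[3 * n:])             # candidate cell update
        c = f * c + i * g                  # the memory 'loop'
        h = o * np.tanh(c)                 # new hidden state
        return h, c

    # Tiny usage example: 2 inputs, 3 hidden units, random parameters.
    rng = np.random.default_rng(0)
    W, U, b = rng.normal(size=(12, 2)), rng.normal(size=(12, 3)), np.zeros(12)
    h = c = np.zeros(3)
    for x in ([1.0, 0.0], [0.0, 1.0]):
        h, c = lstm_step(np.array(x), h, c, W, U, b)
    print(h)

The forget and input gates are what let the network decide which past information to retain, the "memory function" described above.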
In 1997, Schmidhuber's paper on LSTM was rejected by MIT, but it now looks like one of the key concepts behind a new wave of interest in deep learning. In 2015, Google announced it had managed to improve the error rate of its voice recognition software by almost 50% using LSTM. It is the system that powers Amazon's Alexa, and Apple announced last year that it is using LSTM to improve the iPhone.

If Schmidhuber had his way, the concept would get even more recognition. In a scathing 2015 article, he complained that the "Canadian" trio of computer scientists hailed in Silicon Valley as the superstars of AI, Geoffrey Hinton (Google), Yann LeCun (Facebook) and Yoshua Bengio (University of Montreal), "heavily cite each other" but "fail to credit the pioneers of the field". During his talk in Berlin and our interview, he repeats emphatically, at regular intervals, like a jingle crashing through your Spotify stream, that the current buzz around computer learning is "old hat" and that LSTM got there many years earlier. He is quick to talk down the importance of Silicon Valley, which he feels is so dominated by "cut-throat competition" that it produces less value for money than European institutes.

One possibility that presents itself when listening to Schmidhuber talk about the future of robotics is that his relentless techno-utopianism is simply a strategy to make sure he doesn't end up as the Sixto Rodriguez of AI: influential but overlooked while the Silicon Valley Dylans go down in the hall of fame. And relentless it is.

Given his interest in sci-fi, has he never worried that robots will enslave and rule over us once they become self-aware? Schmidhuber shakes his head. "We won't be enslaved, at the very least because we are very badly suited as slaves for someone who could just build robots that are far superior to us." He dismisses The Matrix, in which imprisoned humans are used to power AIs: "That was the most idiotic plot of all time. Why would you use human bioenergy to power robots when a power station that keeps them alive produces so much more energy?"

But in that case won't robots see it as more efficient to wipe out humanity altogether? "Like all scientists, highly intelligent AIs would have a fascination with the origins of life and civilisation. But this fascination will dwindle after a while, just like most people don't think much about the origin of the world nowadays. Generally speaking, our best protection will be their lack of interest in us, because most species' biggest enemy is their own kind. They will pay about as much attention to us as we do to ants."

I wonder if the analogy is less comforting than he intends. Surely we sometimes step on ants? Some people even use chemicals to poison entire colonies. "Of course, but that only applies to a minute percentage of the global ant population, and no one seems to have the desire to wipe all ants off the face of this Earth. On the contrary, most of us are pleased when we hear there are still more ants on the planet than humans, and most of them are in the Brazilian jungle somewhere. We may be much smarter than ants, but the overall weight of humans on this planet is still comparable to the overall weight of all ants," he says, citing a recently disputed claim by the Harvard professor Edward O Wilson.

Let's forget about sci-fi, I say. What about more immediate concerns, like robotisation creating mass unemployment?
In a recent article in Nature, AI researcher Kate Crawford and cyberlaw professor Ryan Calo warned that the new wave of excitement about machine intelligence was creating dangerous blind spots when it came to the social knock-on effects of replacing humans with robots. Again, Schmidhuber is not unduly concerned. The dawn of the robot future was clear to him when he fathered two daughters at the start of the millennium, he says. "What advice do I give them? I tell them: your papa thinks everything will be great, even if there may be ups and downs. Just be prepared to constantly do something new. Be prepared to learn how to learn."

"Homo ludens has always had a talent for inventing jobs of the non-existential kind. The vast majority of the population is already doing luxury jobs like yours and mine," he says, nodding towards my notepad. "It's easy to predict which kinds of jobs will disappear, but it's difficult to predict which new jobs will be created. Who would have thought in the 1980s that 30 years later there would be people making millions as professional video gamers or YouTube stars?

"Even highly respectable jobs in the medical profession will be affected. In 2012 robots started winning competitions when it came to screening cancer with deep neural networks. Does that mean doctors will lose their jobs? Of course not. It just means that the same doctor will treat 10 times as many patients in the time he used to treat one. Many people will gain cheap access to medical research. Human lives will be saved and lengthened." Countries with many robots per capita, such as Japan, Germany, Korea or Switzerland, he proposes cheerfully, have relatively low unemployment rates.

I try to suggest that a truck driver in his 50s who has never heard of JavaScript may not quite share his optimism, but it's difficult to talk about the concerns of generations, not to say individuals, when arguing with someone who thinks in omega leaps. Whenever you try to drill into Schmidhuber's optimistic vision of the robot future, you encounter at its core a very simple scenario: when two beings have a conflict of interest, he says, they have two ways to resolve it, either by collaboration or through competition. Yet every time we encounter such a fork in the road in our conversation, collaboration wins out.

When I ask him whether robots of the future, on top of being curious and playful, will also be able to fall in love, he agrees, because "love is obviously an extreme form of collaboration". The love life of robots will be polyamorous rather than monogamous: "There will be all sorts of relationships between robots. They will be able to share part of their minds, which humans currently can't, or can only if they dedicate a lot of time to each other. There will be fusions of the kind that don't exist among biological organisms."

If love is really just an intense form of collaboration, why does it feel so irrational? Why do we feel lust, or heartbreak? Schmidhuber doesn't pick up the bait. "We've already got pain sensors, so that robots hurt themselves when they bump into something. And we'll work out the lust thing eventually. In the end, they amount to the same thing."

What if one company, an Apple or a Google, builds up a monopoly stronghold over the supersmart robots that run the world in the future? He thinks that kind of dystopia, as evoked in Dave Eggers' novel The Circle, is "extremely unlikely". Here too, collaboration will triumph. "The central algorithm for intelligence is incredibly short.
The algorithm that allows systems to self-improve is perhaps 10 lines of pseudocode. What we are missing at the moment is perhaps just another five lines. Maybe we will develop those 10 lines in my little company, but in these times, when even Swiss banking secrecy is nearing its end, it wouldn't stay there. It would be leaked. Maybe some unknown man somewhere in India will come up with the code and make it accessible to everyone."

If that sounds a little Pollyannaish, it may be because Schmidhuber's own experience (the initial rejection of LSTM and his pervading distrust of "cut-throat" Silicon Valley) must have taught him that competition can create losers as well as winners. As disarming as his optimism can be on a personal level, I would feel a lot more comfortable with the idea of the most advanced beings of the future being midwifed by Jürgen Schmidhuber if he were willing to articulate that doubt.

He ends our conversation on an apologetic note: "I am sorry you are talking to such a teenager. But I've been saying the same things since the 70s and 80s. The only difference is that people are now starting to take me seriously."


News Article | May 3, 2017
Site: news.yahoo.com

(Reuters) - Europe's top tech hubs tend to radiate from massive capital cities like London, Berlin and Paris. But the heart of European innovation isn't a major metropolis; it's a small city in the Dutch-speaking region of Flanders. That's the conclusion of Reuters' second annual ranking of Europe's Most Innovative Universities, a list that identifies and ranks the educational institutions doing the most to advance science, invent new technologies, and help drive the global economy.

The most innovative university in Europe, for the second year running, is Belgium's KU Leuven. This nearly 600-year-old institution was founded by Pope Martin V, but today it's better known for technology than theology: KU Leuven maintains one of the largest independent research and development organizations on the planet. In fiscal 2015, the university's research spending exceeded €454 million, and its patent portfolio currently includes 586 active families, each one representing an invention protected in multiple countries.

How does a relatively small Catholic university out-innovate bigger, better-known institutions across Europe? KU Leuven earned its first-place rank, in part, by producing a high volume of influential inventions. Its researchers submit more patents than those of most other universities on the continent, and outside researchers frequently cite KU Leuven inventions in their own patent applications. Those are key criteria in the Reuters ranking of Europe's Most Innovative Universities, which was compiled in partnership with Clarivate Analytics and is based on proprietary data and analysis of indicators including patent filings and research paper citations.

The second most innovative university in Europe is Imperial College London, an institution whose researchers have been responsible for the discovery of penicillin, the development of holography and the invention of fiber optics. The third-place University of Cambridge has been associated with 91 Nobel laureates during its 800-year history. And the fourth-place Technical University of Munich has spun off more than 800 companies since 1990, including a variety of high-tech startups in industries such as renewable energy, semiconductors and nanotechnology.

Overall, the same countries that dominate European business and politics dominate the ranking of Europe's Most Innovative Universities. German universities account for 23 of the 100 institutions on the list, more than any other country, and the United Kingdom comes in second, tied with France, each with 17 institutions. But those three countries are also among the most populous and richest countries on the continent. Control for those factors, and it turns out that countries with much smaller populations and modest economies often outperform big ones. The Republic of Ireland has only three schools on the entire list, but with a population of less than 5 million people, it can boast more top 100 innovative universities per capita than any other country in Europe. On the same per capita basis, the second most innovative country on the list is Denmark, followed by Belgium, Switzerland and the Netherlands. Germany, the United Kingdom and France rank in the middle of the pack, an indication that they may be underperforming compared with their smaller neighbors: on a per capita basis, none of those countries has half as many top 100 universities as Ireland.
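The per-capita comparison can be reproduced from the counts given in the article; the population figures below are rough 2016 numbers added here as assumptions for the example.

    # Top-100 counts from the article; populations (millions) are rough
    # 2016 figures added for this example.
    counts = {"Germany": 23, "United Kingdom": 17, "France": 17, "Ireland": 3}
    population_m = {"Germany": 82, "United Kingdom": 65,
                    "France": 67, "Ireland": 4.7}

    for country in sorted(counts, key=lambda c: counts[c] / population_m[c],
                          reverse=True):
        rate = counts[country] / population_m[country]
        print(f"{country}: {rate:.2f} top-100 universities per million")
    # Ireland ~0.64 per million; Germany ~0.28, i.e. less than half of
    # Ireland's rate, matching the article's observation.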
And the same trends hold true if you look at national economies. According to the International Monetary Fund, Germany's gross domestic product exceeded $3.49 trillion in 2016, 11 times larger than Ireland's $307 billion, yet Germany has only 7 times as many top 100 innovative universities.

Some countries underperform even more drastically. Russia is Europe's most populous country and has the region's fifth largest economy, yet none of its universities counts among the top 100. Other notable absences include any universities from Ukraine or Romania, a fact that reveals another divide between Western and Eastern Europe.

To compile the ranking of Europe's most innovative universities, Clarivate Analytics (formerly the Intellectual Property & Science business of Thomson Reuters) began by identifying more than 600 global organizations that published the most articles in academic journals, including educational institutions, nonprofit charities, and government-funded institutions. That list was reduced to institutions that filed at least 50 patents with the World Intellectual Property Organization between 2010 and 2015. They then evaluated each candidate on 10 different metrics, focusing on academic papers (which indicate basic research) and patent filings (which point to an institution's ability to apply research and commercialize its discoveries). Finally, they trimmed the list so that it included only European universities, and ranked them based on their performance.

This is the second consecutive year that Clarivate and Reuters have collaborated to rank Europe's Most Innovative Universities, and three universities that ranked in the top 100 in 2016 fell off the list entirely: the Netherlands' Eindhoven University of Technology, Germany's University of Kiel, and the UK's Queen's University Belfast. All three filed fewer than 50 patents during the period examined for the ranking, and thus were eliminated from consideration. They have been replaced by three new entrants to the top 100: the University of Glasgow (#54), the University of Nice Sophia Antipolis (#94), and the Autonomous University of Madrid (#100).

The returning universities that made the biggest moves on the list were the Netherlands' Leiden University (up 21 spots to #17) and Germany's Technical University of Berlin (up 21 spots to #41). Belgium's Université Libre de Bruxelles (down 17 to #38) and the UK's University of Leeds (down 17 to #73) made the biggest moves in the opposite direction. Generally, though, the list remained largely stable: nine of the top ten schools of 2016 remained in the top 10 for 2017, as did 17 of the top 20. This stability is understandable, because something as large as a university's paper output and patent performance is unlikely to change quickly.

Of course, the relative ranking of any university does not provide a complete picture of whether its researchers are doing important, innovative work. Since the ranking measures innovation on an institutional level, it may overlook particularly innovative departments or programs: a university might rank low for overall innovation but still operate one of the world's most innovative computer science laboratories, for instance. And it's important to remember that whether a university ranks at the top or the bottom of the list, it's still within the top 100 on the continent: all of these universities produce original research, create useful technology and stimulate the global economy.
