Minas Gerais, Brazil

News Article | April 17, 2017
Site: phys.org

Deep learning is in fact a new name for an approach to artificial intelligence called neural networks, which have been going in and out of fashion for more than 70 years. Neural networks were first proposed in 1943 by Warren McCulloch and Walter Pitts, two University of Chicago researchers who moved to MIT in 1952 as founding members of what's sometimes called the first cognitive science department.

Neural nets were a major area of research in both neuroscience and computer science until 1969, when, according to computer science lore, they were killed off by the MIT mathematicians Marvin Minsky and Seymour Papert, who a year later would become co-directors of the new MIT Artificial Intelligence Laboratory. The technique then enjoyed a resurgence in the 1980s, fell into eclipse again in the first decade of the new century, and has returned like gangbusters in the second, fueled largely by the increased processing power of graphics chips.

"There's this idea that ideas in science are a bit like epidemics of viruses," says Tomaso Poggio, the Eugene McDermott Professor of Brain and Cognitive Sciences at MIT, an investigator at MIT's McGovern Institute for Brain Research, and director of MIT's Center for Brains, Minds, and Machines. "There are apparently five or six basic strains of flu viruses, and apparently each one comes back with a period of around 25 years. People get infected, and they develop an immune response, and so they don't get infected for the next 25 years. And then there is a new generation that is ready to be infected by the same strain of virus. In science, people fall in love with an idea, get excited about it, hammer it to death, and then get immunized—they get tired of it. So ideas should have the same kind of periodicity!"

Neural nets are a means of doing machine learning, in which a computer learns to perform some task by analyzing training examples. Usually, the examples have been hand-labeled in advance. An object recognition system, for instance, might be fed thousands of labeled images of cars, houses, coffee cups, and so on, and it would find visual patterns in the images that consistently correlate with particular labels.

Modeled loosely on the human brain, a neural net consists of thousands or even millions of simple processing nodes that are densely interconnected. Most of today's neural nets are organized into layers of nodes, and they're "feed-forward," meaning that data moves through them in only one direction. An individual node might be connected to several nodes in the layer beneath it, from which it receives data, and several nodes in the layer above it, to which it sends data.

To each of its incoming connections, a node will assign a number known as a "weight." When the network is active, the node receives a different data item—a different number—over each of its connections and multiplies it by the associated weight. It then adds the resulting products together, yielding a single number. If that number is below a threshold value, the node passes no data to the next layer. If the number exceeds the threshold value, the node "fires," which in today's neural nets generally means sending the number—the sum of the weighted inputs—along all its outgoing connections.
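To make that arithmetic concrete, here is a minimal sketch in Python of the weighted-sum-and-threshold rule just described. The function name, the example weights, and the threshold are illustrative choices for this sketch, not values taken from the article or from any particular library.

# One node of a feed-forward net, as described above: multiply each incoming
# value by its connection weight, sum the products, and "fire" (pass the sum
# on) only if the sum exceeds the node's threshold. All numbers are made up.
def node_output(inputs, weights, threshold):
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    if weighted_sum > threshold:
        return weighted_sum   # fired: this value goes out along every outgoing connection
    return 0.0                # below threshold: nothing is passed to the next layer

# Example: a node with three incoming connections.
print(node_output([0.5, 1.0, 0.25], weights=[0.8, -0.2, 1.5], threshold=0.3))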
When a neural net is being trained, all of its weights and thresholds are initially set to random values. Training data is fed to the bottom layer—the input layer—and it passes through the succeeding layers, getting multiplied and added together in complex ways, until it finally arrives, radically transformed, at the output layer. During training, the weights and thresholds are continually adjusted until training data with the same labels consistently yield similar outputs.

The neural nets described by McCulloch and Pitts in 1943 had thresholds and weights, but they weren't arranged into layers, and the researchers didn't specify any training mechanism. What McCulloch and Pitts showed was that a neural net could, in principle, compute any function that a digital computer could. The result was more neuroscience than computer science: the point was to suggest that the human brain could be thought of as a computing device. Neural nets continue to be a valuable tool for neuroscientific research. For instance, particular network layouts or rules for adjusting weights and thresholds have reproduced observed features of human neuroanatomy and cognition, an indication that they capture something about how the brain processes information.

The first trainable neural network, the Perceptron, was demonstrated by the Cornell University psychologist Frank Rosenblatt in 1957. The Perceptron's design was much like that of the modern neural net, except that it had only one layer with adjustable weights and thresholds, sandwiched between input and output layers.
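As a rough illustration of what "trainable" means for a single layer of adjustable weights, the sketch below implements the classic perceptron-style update rule in Python on a toy dataset. The data, learning rate, and epoch count are invented for the example; this is a sketch of the general idea, not Rosenblatt's original implementation.

# Perceptron-style training sketch: start from random weights and nudge them
# whenever the thresholded output disagrees with the label. Toy data only.
import random

def train_perceptron(examples, n_inputs, epochs=20, lr=0.1):
    weights = [random.uniform(-0.5, 0.5) for _ in range(n_inputs)]
    bias = random.uniform(-0.5, 0.5)              # acts as a learned (negated) threshold
    for _ in range(epochs):
        for inputs, label in examples:            # label is 0 or 1
            activation = sum(x * w for x, w in zip(inputs, weights)) + bias
            prediction = 1 if activation > 0 else 0
            error = label - prediction            # -1, 0, or +1
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Toy, linearly separable task: learn logical OR from four labeled examples.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train_perceptron(data, n_inputs=2)
print(w, b)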
Perceptrons were an active area of research in both psychology and the fledgling discipline of computer science until 1969, when Minsky and Papert published a book titled "Perceptrons," which demonstrated that executing certain fairly common computations on Perceptrons would be impractically time consuming.

"Of course, all of these limitations kind of disappear if you take machinery that is a little more complicated—like, two layers," Poggio says. But at the time, the book had a chilling effect on neural-net research.

"You have to put these things in historical context," Poggio says. "They were arguing for programming—for languages like Lisp. Not many years before, people were still using analog computers. It was not clear at all at the time that programming was the way to go. I think they went a little bit overboard, but as usual, it's not black and white. If you think of this as this competition between analog computing and digital computing, they fought for what at the time was the right thing."

By the 1980s, however, researchers had developed algorithms for modifying neural nets' weights and thresholds that were efficient enough for networks with more than one layer, removing many of the limitations identified by Minsky and Papert. The field enjoyed a renaissance.

But intellectually, there's something unsatisfying about neural nets. Enough training may revise a network's settings to the point that it can usefully classify data, but what do those settings mean? What image features is an object recognizer looking at, and how does it piece them together into the distinctive visual signatures of cars, houses, and coffee cups? Looking at the weights of individual connections won't answer that question. In recent years, computer scientists have begun to come up with ingenious methods for deducing the analytic strategies adopted by neural nets. But in the 1980s, the networks' strategies were indecipherable.

So around the turn of the century, neural networks were supplanted by support vector machines, an alternative approach to machine learning that's based on some very clean and elegant mathematics.

The recent resurgence in neural networks—the deep-learning revolution—comes courtesy of the computer-game industry. The complex imagery and rapid pace of today's video games require hardware that can keep up, and the result has been the graphics processing unit (GPU), which packs thousands of relatively simple processing cores on a single chip. It didn't take long for researchers to realize that the architecture of a GPU is remarkably like that of a neural net.

Modern GPUs enabled the one-layer networks of the 1960s and the two- to three-layer networks of the 1980s to blossom into the 10-, 15-, even 50-layer networks of today. That's what the "deep" in "deep learning" refers to—the depth of the network's layers. And currently, deep learning is responsible for the best-performing systems in almost every area of artificial-intelligence research.

The networks' opacity is still unsettling to theorists, but there's headway on that front, too. In addition to directing the Center for Brains, Minds, and Machines (CBMM), Poggio leads the center's research program in Theoretical Frameworks for Intelligence. Recently, Poggio and his CBMM colleagues have released a three-part theoretical study of neural networks.

The first part, which was published last month in the International Journal of Automation and Computing, addresses the range of computations that deep-learning networks can execute and when deep networks offer advantages over shallower ones. Parts two and three, which have been released as CBMM technical reports, address the problems of global optimization, or guaranteeing that a network has found the settings that best accord with its training data, and overfitting, or cases in which the network becomes so attuned to the specifics of its training data that it fails to generalize to other instances of the same categories. There are still plenty of theoretical questions to be answered, but CBMM researchers' work could help ensure that neural networks finally break the generational cycle that has brought them in and out of favor for seven decades.

More information: Tomaso Poggio et al., "Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review," International Journal of Automation and Computing (2017). DOI: 10.1007/s11633-017-1054-2


Processes for producing low nitrogen, essentially nitride-free chromium or chromium plus niobium-containing nickel-based alloys include charging elements or compounds which do not dissolve appreciable amounts of nitrogen in the molten state to a refractory crucible within a vacuum induction furnace, melting said elements or compounds therein under reduced pressure, and effecting heterogeneous carbon-based bubble nucleation in a controlled manner. The processes also include, upon cessation of bubble formation, adding low nitrogen chromium or a low nitrogen chromium-containing master alloy with a nitrogen content of below 10 ppm to the melt, melting and distributing said added chromium or chromium-containing master alloy throughout the melt, bringing the resulting combined melt to a temperature and surrounding pressure to permit tapping, and tapping the resulting melt, directly or indirectly, to a metallic mold and allowing the melt to solidify and cool under reduced pressure.


Jia Z., University of Louisiana at Lafayette | Misra R.D.K., University of Louisiana at Lafayette | O'Malley R., Nucor Steel Decatur LLC | Jansto S.J., CBMM Co.
Materials Science and Engineering A | Year: 2011

We describe here the precipitation behavior and mechanical properties of 560 MPa Ti-Nb and 770 MPa Ti-Nb-Mo-V steels. The precipitation characteristics were analyzed in terms of the chemistry and size distribution of the precipitates, with particular focus on their crystallography through analysis of electron diffraction patterns. In addition to pure carbides (NbC, TiC, Mo2C, and VC), Nb-containing, titanium-rich carbides were also observed. These precipitates were in the size range of 4-20 nm. The mechanism of formation of these Ti-rich, Nb-containing carbides is postulated to involve epitaxial nucleation of NbC on previously precipitated TiC. An interesting observation in compact strip processing was interface precipitation of NbC, characterized by an orientation relationship of [001]NbC//[001]α-Fe, implying that the precipitation of NbC occurred during the austenite-ferrite transformation. © 2011 Elsevier B.V.


Santos A.P., Associacao Salgado de Oliveira | Santos A.P., Institutos Superiores de Ensino do CENSA | Guimaraes R.C., CBMM Co. | Carvalho E.M., Federal University of Uberlandia | Gastaldi A.C., Hospital das Clinicas de Ribeirao Preto
Respiratory Care | Year: 2013

BACKGROUND: The Flutter VRP1, Shaker, and Acapella are devices that combine positive expiratory pressure (PEP) and oscillations. OBJECTIVES: To compare the mechanical performance of the Flutter VRP1, Shaker, and Acapella devices. METHODS: An experimental platform and a ventilator, used as a flow generator at 5, 10, 15, 20, 26, and 32 L/min, were employed at angles of -30°, 0°, and +30° to evaluate the Flutter VRP1 and Shaker, whereas the Acapella was adjusted to intermediate, higher, and lower levels of resistance; PEP, air flow, and oscillation frequency were recorded. RESULTS: When the relationships between pressure amplitudes at all air flows were analyzed for the 3 devices at low and intermediate pressure levels, no statistically significant differences were observed in mean pressure amplitude between the Flutter VRP1 and Shaker. Both devices, however, differed from the Acapella, their pressure amplitudes being higher (P = .04). There were no statistically significant differences in PEP for the 3 angles or marks at any air flow. The expected relationships between variables were observed, with PEP increasing as air flow and resistance increased. Nevertheless, there was a statistically significant difference in oscillation frequency between these devices and the Acapella, whose value was higher than those of the Flutter VRP1 and Shaker (P = .002). At intermediate pressure levels the patterns were the same as at low pressures, except that the Acapella showed oscillation frequencies lower than those of the Flutter VRP1 and Shaker (P < .001). At high pressures there were no statistically significant differences among the 3 devices in oscillation frequency. CONCLUSIONS: The Flutter VRP1 and Shaker devices had a performance similar to that of the Acapella in many aspects, except for PEP. © 2013 Daedalus Enterprises.


Jansto S.G., CBMM Co.
Materials Science and Technology Conference and Exhibition 2010, MS and T'10 | Year: 2010

Value-added applications of niobium (Nb) microalloyed steels continue to be developed to meet the increasing material demands of the automotive, pipeline, and structural carbon steel segments. High-quality production of these value-added Nb-bearing steels is realized through a synergistic balance of process and physical metallurgy. Process metallurgy practices presented include development of the steel chemistries, proper melting and alloy additions, secondary refining, slab and billet reheat furnace operation, and alternative hot rolling considerations to obtain a fine-grained, homogeneous microstructure within production environments. Case examples of the process metallurgy practices necessary to meet the physical metallurgy objectives are discussed, drawing on current niobium-bearing steel product applications. A recently completed structural steel sustainability study presents the positive environmental impact of Nb-microalloyed steel applications as it relates to more effective product design, reduced steelmaking emissions, and reduced energy consumption. Copyright © 2010 MS&T'10®.


Jansto S.G., CBMM Co.
AIST Steel Properties and Applications Conference Proceedings - Combined with MS and T'11, Materials Science and Technology 2011 | Year: 2011

The application of niobium (Nb) in high carbon steels enhances both the metallurgical properties and the processability of products such as steel bar, sheet, and plate. These process and product metallurgical improvements relate to the pinning effect of Nb on austenite grain boundaries in microalloyed 0.25-0.95% C steels during the reheat furnace process prior to rolling. Consequently, Nb-microalloyed high carbon automotive and long product steel applications have been developed. The Micro-Niobium Alloy Approach© is described and correlated to a variety of high carbon steel grades and applications. Metallurgical Operational Implementation (MOI©) links the product requirements to the mill capability and the resultant process metallurgy implementation. This integrative approach connects the Nb process and physical metallurgy necessary to achieve the desired ultra-fine-grained, homogeneous high carbon steel microstructures that exhibit superior toughness, strength, fatigue performance, and weldability.


Jansto S.G., CBMM Co.
METAL 2013 - 22nd International Conference on Metallurgy and Materials, Conference Proceedings | Year: 2013

Over 200 million tons of Nb-bearing steels were continuously cast and hot rolled globally in 2012, and these Nb-bearing plate, bar, and sheet products are manufactured throughout the world. Numerous publications discuss the traditional hot ductility trough for carbon steels with and without microalloy additions of Nb, V, and/or Ti, but the steelmaking and process metallurgy parameters under actual mill conditions are rarely correlated with the hot ductility behavior. The hot ductility troughs associated with simple carbon-manganese steels can also result in surface and internal quality issues if certain steelmaking and casting parameters are not followed. Although higher carbon equivalent steels generally exhibit inherently lower hot ductility, as measured by percent reduction in area at elevated temperature, these steels still exhibit sufficient ductility to meet the unbending stress and strain gradients in the straightening section of most casters. The relationship between the steelmaking and caster operation and the resultant slab quality is expressed through the hot ductility behavior. This global study of continuously cast Nb-bearing steels concludes that the incidence of slab cracking during casting is related primarily to steelmaking and caster process parameters: superheat variation, transfer ladle temperature stratification, mould flux incompatibility, casting speed fluctuation, residual element chemistry levels, and excessive secondary cooling. This paper defines these operational root causes, supported by physical metallurgy hot ductility data from industrial samples. © 2013 TANGER Ltd., Ostrava.
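For readers unfamiliar with the two quantities the abstract leans on, the standard textbook forms of the IIW carbon equivalent and the reduction-of-area measure of hot ductility are written out below; these are background definitions, not equations quoted from the paper itself.

% Standard definitions, given only as background to the abstract above.
\begin{align}
  \mathrm{CE_{IIW}} &= \mathrm{C} + \frac{\mathrm{Mn}}{6}
                     + \frac{\mathrm{Cr} + \mathrm{Mo} + \mathrm{V}}{5}
                     + \frac{\mathrm{Ni} + \mathrm{Cu}}{15}
                     \quad \text{(element contents in wt.\%)} \\
  \mathrm{RA} &= \frac{A_0 - A_f}{A_0} \times 100\%
\end{align}

Here A_0 is the original cross-sectional area of the hot-tensile specimen and A_f the cross-sectional area at fracture; a deeper ductility trough corresponds to lower RA values over some temperature range.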



Processes for producing low-nitrogen metallic chromium or chromium-containing alloys, which prevent the nitrogen in the surrounding atmosphere from being carried into the melt and absorbed by the metallic chromium or chromium-containing alloy during the metallothermic reaction, include vacuum-degassing a thermite mixture comprising metal compounds and metallic reducing powders contained within a vacuum vessel, igniting the thermite mixture to effect reduction of the metal compounds within the vessel under reduced pressure, i.e., below 1 bar, and conducting the entire reduction reaction in said vessel under reduced pressure, including solidification and cooling, to produce a final product with a nitrogen content below 10 ppm. The final products obtained, low-nitrogen metallic chromium alone or in combination with other elements, can be used as raw materials in the manufacture of superalloys, stainless steel, and other specialty steels whose final nitrogen content is below 10 ppm.

