Atlanta, GA, United States

The Georgia Institute of Technology is a public research university in Atlanta, Georgia, in the United States. It is a part of the University System of Georgia and has satellite campuses in Savannah, Georgia; Metz, France; Athlone, Ireland; Shanghai, China; and Singapore.

The institution was founded in 1885 as the Georgia School of Technology as part of Reconstruction plans to build an industrial economy in the post-Civil War Southern United States. Initially, it offered only a degree in mechanical engineering. By 1901, its curriculum had expanded to include electrical, civil, and chemical engineering. In 1948, the school changed its name to reflect its evolution from a trade school into a larger and more capable technical institute and research university.

Today, Georgia Tech is organized into six colleges containing about 31 departments and units, with an emphasis on science and technology. It is well recognized for its degree programs in engineering, computing, business administration, the sciences, architecture, and the liberal arts.

Georgia Tech's main campus occupies part of Midtown Atlanta, bordered by 10th Street to the north and by North Avenue to the south, placing it well in sight of the Atlanta skyline. In 1996, the campus was the site of the athletes' village and a venue for a number of athletic events of the 1996 Summer Olympics. The construction of the Olympic village, along with subsequent gentrification of the surrounding areas, enhanced the campus.

Student athletics, both organized and intramural, are a part of student and alumni life. The school's intercollegiate competitive sports teams, the four-time football national champion Yellow Jackets, and the nationally recognized fight song "Ramblin' Wreck from Georgia Tech" have helped keep Georgia Tech in the national spotlight. Georgia Tech fields eight men's and seven women's teams that compete in NCAA Division I athletics and the Football Bowl Subdivision. Georgia Tech is a member of the Coastal Division of the Atlantic Coast Conference. (Wikipedia)



Patent
Emory University and Georgia Institute of Technology | Date: 2016-03-18

Various embodiments of the present invention provide a conduit system including an outer lumen, an inner lumen, and an attaching device. In other embodiments, a multiple access port device adapted for communication with at least one of an outer lumen, an inner lumen, or an attaching device of a conduit system is provided. In yet other embodiments, a system including an inner lumen that is collapsible is provided. Means for closing a conduit system are also provided, including a plug for insertion through an attaching device and a variable radius coiled member associated with an attaching device.


Patent
Georgia Institute of Technology | Date: 2015-04-24

A microneedle array is provided for administering a drug or other substance into a biological tissue. The array includes a base substrate; a primary funnel portion extending from one side of the base substrate; and two or more solid microneedles extending from the primary funnel portion, wherein the two or more microneedles comprise the substance of interest. Methods for making an array of microneedles are also provided. The method may include providing a non-porous and gas-permeable mold having two or more cavities, each of which defines a microneedle; filling the cavities with a fluid material which includes a substance of interest and a liquid vehicle; and drying the fluid material to remove at least a portion of the liquid vehicle and form a plurality of microneedles that include the substance of interest, wherein the filling is conducted with a pressure differential applied between opposed surfaces of the mold.


Patent
Georgia Institute of Technology | Date: 2015-04-23

Aspects of the present disclosure generally relate to a sanitizing wipe that provides a visual indication that a sufficient amount of abrasive scrubbing has occurred for a given period of time to properly sterilize various medical devices and medical equipment, including needleless intravenous hub-and-port systems. The sanitizing wipe can change color when used to properly sanitize medical equipment. The sanitizing wipe can comprise a plurality of layers of non-woven material (e.g., cotton) in addition to an indicating film disposed between two layers of non-woven material. The indicating film can comprise a polymeric film and a plurality of microencapsulated dyes incorporated into the polymeric film. The microencapsulated dyes can be adapted to burst upon sufficient force being applied thereto, and the bursting of the microencapsulated dyes can cause the sanitizing wipe to undergo a change in visual state (e.g., change color).


Methods of making various capillary foams are provided. The foams can include liquid foams having a plurality of particles connected by a network of a secondary fluid at the interface between the discontinuous and continuous phases. The foams can also include solid foams where the continuous phase (bulk fluid) is removed to produce a solid foam having high overall porosities and low densities. Densities as low as 0.3 g cm^(−3) and porosities as high as 95% or higher can be achieved. The secondary fluid can be polymerized to further strengthen the solid foam. Methods and devices are also provided for oil recovery from water using a capillary foam. The methods can include forming a capillary foam wherein the oil is the secondary fluid, and wherein the foam can transport the oil to the surface of the water.


Patent
Georgia Institute of Technology | Date: 2016-10-31

A generator includes a first member, a second member and a sliding mechanism. The first member includes a first electrode and a first dielectric layer affixed to the first electrode. The first dielectric layer includes a first material that has a first rating on a triboelectric series. The second member includes a second material that has a second rating on the triboelectric series that is different from the first rating. The second member includes a second electrode. The second member is disposed adjacent to the first dielectric layer so that the first dielectric layer is disposed between the first electrode and the second electrode. The sliding mechanism is configured to cause relative movement between the first member and the second member, thereby generating an electric potential imbalance between the first electrode and the second electrode.


Patent
Georgia Institute of Technology | Date: 2015-05-05

Systems and methods for controlling a swarm of mobile robots are disclosed. In one aspect, the robots cover a domain of interest. Each robot receives a density function indicative of at least one area of importance in the domain of interest, and calculates a velocity vector based on the density function and a displacement vector relative to an adjacent robot. Each robot moves to the area of importance according to its velocity vector. In some aspects, the robots together perform a sequence of formations. Each robot mimics a trajectory as part of its performance by switching among a plurality of motion modes. Each robot determines its next motion mode based on a displacement vector relative to an adjacent robot.
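The abstract describes robots computing velocity commands from an importance-density function and displacement vectors to neighbouring robots. The sketch below is only a generic density-weighted coverage update (a Lloyd-style step) meant to illustrate the idea, not the patented algorithm; the gain value, the sampling-based cell approximation and all names are illustrative assumptions.

# Generic density-weighted coverage step for a simulated robot swarm.
# Illustrative Lloyd-style update, NOT the patented method; the gain and
# the sampling-based Voronoi approximation are assumptions.
import numpy as np

def coverage_step(positions, density, domain_samples, gain=0.5):
    # Assign every sampled domain point to its nearest robot (approximate Voronoi cells).
    d2 = ((domain_samples[:, None, :] - positions[None, :, :]) ** 2).sum(axis=2)
    owner = d2.argmin(axis=1)
    weights = density(domain_samples)  # importance of each sample point
    velocities = np.zeros_like(positions)
    for i in range(len(positions)):
        mask = owner == i
        if weights[mask].sum() > 0:
            # The density-weighted centroid of the cell pulls the robot toward important areas.
            centroid = (weights[mask, None] * domain_samples[mask]).sum(0) / weights[mask].sum()
            velocities[i] = gain * (centroid - positions[i])
    return positions + velocities, velocities

# Example: 10 robots covering the unit square, importance peaked near (0.8, 0.8).
rng = np.random.default_rng(0)
robots = rng.random((10, 2))
samples = rng.random((2000, 2))
density = lambda p: np.exp(-20 * ((p - np.array([0.8, 0.8])) ** 2).sum(axis=1))
for _ in range(50):
    robots, _ = coverage_step(robots, density, samples)
print(robots.round(2))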


Patent
Georgia Institute of Technology | Date: 2016-08-09

A method for cleaning patch-clamp glass pipette electrodes that enables their re-use. By immersing pipette tips or planar patch clamp chips into a detergent, followed by rinsing, pipettes and planar patch clamp chips were re-usable at least ten times with little to no degradation in signal fidelity, in experimental preparations ranging from human embryonic kidney cells to neurons in culture, slices, and in vivo.


Patent
Georgia Institute of Technology | Date: 2015-06-15

Certain implementations of the disclosed technology may include systems and methods for high-frequency resonant gyroscopes. In an example implementation, a resonator gyroscope assembly is provided. The resonator gyroscope assembly can include a square resonator body suspended adjacent to a substrate, a ground electrode attached to a side of the resonator body, a piezoelectric layer attached to a side of the ground electrode, a drive electrode in electrical communication with the piezoelectric layer, and configured to stimulate one or more vibration modes of the square resonator body; and a sense electrode in electrical communication with the piezoelectric layer, and configured to receive an output from the square or disk resonator responsive to stimulation of the one or more vibration modes.


Patent
Georgia Institute of Technology | Date: 2015-01-27

Methods and systems for searching genomes for potential CRISPR off-target sites are provided. In preferred embodiments, the methods include identifying possible on- and off-target cleavage sites and/or ranking the potential off-target sites based on the number and location of mismatches, insertions, and/or deletions in the gRNA guide sequence relative to the genomic DNA sequence at a putative target site in the genome. These methods allow for the selection of better target sites and/or experimental confirmation of off-target sites and are an improvement over partial search mechanisms that fail to locate every possible target site.
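The abstract describes scanning a genome for candidate target sites and ranking them by the number and position of mismatches (and indels) relative to the gRNA guide sequence. Purely as an illustration of that idea, the toy scanner below finds mismatch-only matches and ranks them; it ignores insertions, deletions and PAM requirements, and its positional weighting is an assumption, not the patented scoring scheme.

# Toy scan of a genome string for CRISPR-like off-target candidates.
# Mismatches only (no indels, no PAM handling); the positional weighting
# below is an illustrative assumption, not the method in the patent.
def scan_offtargets(genome, guide, max_mismatches=3):
    hits = []
    k = len(guide)
    for start in range(len(genome) - k + 1):
        window = genome[start:start + k]
        mismatch_positions = [i for i in range(k) if window[i] != guide[i]]
        if len(mismatch_positions) <= max_mismatches:
            # Simple heuristic: weight 3'-proximal mismatches more heavily,
            # since PAM-proximal mismatches tend to be less tolerated by Cas9.
            penalty = sum(1.0 + i / k for i in mismatch_positions)
            hits.append((start, window, len(mismatch_positions), round(penalty, 2)))
    # Rank candidates: fewest mismatches first, then lowest positional penalty.
    return sorted(hits, key=lambda h: (h[2], h[3]))

genome = "ACGTACGTTTGACCTGAAGTACGTACGATTGACCAGAAGTCCGT"
guide = "TTGACCTGAAGT"
for start, site, n_mismatch, penalty in scan_offtargets(genome, guide):
    print(start, site, n_mismatch, penalty)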


Patent
Georgia Institute of Technology | Date: 2016-08-17

The present disclosure provides compositions including a thermo-electro-chemical converter, methods of converting thermal energy into electrical energy, and the like. In general, embodiments of the present disclosure can be used to convert thermal energy into electrical energy by way of a chemical process.


Li A.,Georgia Institute of Technology
Nature Nanotechnology | Year: 2017

Ion sources for molecular mass spectrometry are usually driven by direct current power supplies with no user control over the total charges generated. Here, we show that the output of triboelectric nanogenerators (TENGs) can quantitatively control the total ionization charges in mass spectrometry. The high output voltage of TENGs can generate single- or alternating-polarity ion pulses, and is ideal for inducing nanoelectrospray ionization (nanoESI) and plasma discharge ionization. For a given nanoESI emitter, accurately controlled ion pulses ranging from 1.0 to 5.5 nC were delivered with an onset charge of 1.0 nC. Spray pulses can be generated at a high frequency of 17 Hz (60 ms period), and the pulse duration is adjustable on demand between 60 ms and 5.5 s. Highly sensitive (∼0.6 zeptomole) mass spectrometry analysis using minimal sample (18 pl per pulse) was achieved with a 10 pg ml−1 cocaine sample. We also show that native protein conformation is conserved in TENG-ESI, and that patterned ion deposition on conductive and insulating surfaces is possible. © 2017 Nature Publishing Group
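The quoted sensitivity can be sanity-checked with simple arithmetic: one 18 pl pulse of a 10 pg ml−1 solution contains about 1.8 × 10^−19 g of cocaine, which at a molar mass of roughly 303 g/mol (an assumed textbook value, not given in the abstract) comes to about 0.6 zeptomole. A minimal check:

# Back-of-the-envelope check of the ~0.6 zeptomole figure quoted above.
# Assumes a molar mass of ~303.4 g/mol for cocaine (not stated in the abstract).
volume_l = 18e-12                       # 18 picolitres per pulse, in litres
concentration_g_per_l = 10e-9           # 10 pg/ml = 1e-8 g per litre
molar_mass_g_per_mol = 303.4
mass_g = volume_l * concentration_g_per_l            # ~1.8e-19 g of cocaine per pulse
moles = mass_g / molar_mass_g_per_mol                # ~5.9e-22 mol
print(f"{moles / 1e-21:.2f} zeptomole per pulse")    # ~0.59 zmol, consistent with ~0.6 zmol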


Ballantyne D.R.,Georgia Institute of Technology
Monthly Notices of the Royal Astronomical Society | Year: 2017

Deep X-ray surveys have provided a comprehensive and largely unbiased view of active galactic nuclei (AGN) evolution stretching back to z ~ 5. However, it has been challenging to use the survey results to connect this evolution to the cosmological environment that AGN inhabit. Exploring this connection will be crucial to understanding the triggering mechanisms of AGN and how these processes manifest in observations at all wavelengths. In anticipation of upcoming wide-field X-ray surveys that will allow quantitative analysis of AGN environments, this paper presents a method to observationally constrain the conditional luminosity function (CLF) of AGN at a specific z. Once measured, the CLF allows the calculation of the AGN bias, mean dark matter halo mass, AGN lifetime, halo occupation number, and AGN correlation function - all as a function of luminosity. The CLF can be constrained using a measurement of the X-ray luminosity function and the correlation length at different luminosities. The method is illustrated at z ≈ 0 and 0.9 using the limited data that are currently available, and a clear luminosity dependence in the AGN bias and mean halo mass is predicted at both z, supporting the idea that there are at least two different modes of AGN triggering. In addition, the CLF predicts that z ≈ 0.9 quasars may be commonly hosted by haloes with Mh ~ 10^14 M⊙. These 'young cluster' environments may provide the necessary interactions between gas-rich galaxies to fuel luminous accretion. The results derived from this method will be useful to populate AGN of different luminosities in cosmological simulations. © 2016 The Authors. Published by Oxford University Press on behalf of the Royal Astronomical Society.
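The quantities listed in the abstract (bias, mean halo mass, occupation number) follow from a CLF through standard halo-model integrals. In generic notation, with Φ(L|M_h) the CLF, n(M_h) the halo mass function and b(M_h) the halo bias, the usual relations are the ones below; this is a sketch of the standard formalism, not necessarily the exact expressions used in the paper.

\phi(L) = \int \Phi(L \mid M_h)\, n(M_h)\, \mathrm{d}M_h, \qquad
b(L) = \frac{1}{\phi(L)} \int b(M_h)\, \Phi(L \mid M_h)\, n(M_h)\, \mathrm{d}M_h, \qquad
\langle M_h \rangle(L) = \frac{1}{\phi(L)} \int M_h\, \Phi(L \mid M_h)\, n(M_h)\, \mathrm{d}M_h .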


Ballantyne D.R.,Georgia Institute of Technology
Monthly Notices of the Royal Astronomical Society | Year: 2017

The orientation-based unification model of active galactic nuclei (AGNs) posits that the principal difference between obscured (Type 2) and unobscured (Type 1) AGNs is the line of sight into the central engine. If this model is correct then there should be no difference in many of the properties of AGN host galaxies (e.g. the mass of the surrounding dark matter haloes). However, recent clustering analyses of Type 1 and Type 2 AGNs have provided some evidence for a difference in the halo mass, in conflict with the orientation-based unified model. In this work, a method to compute the conditional luminosity function (CLF) of Type 2 and Type 1 AGNs is presented. The CLF allows many fundamental halo properties to be computed as a function of AGN luminosity, which we apply to the question of the host halo masses of Type 1 and 2 AGNs. By making use of the total AGN CLF, the Type 1 X-ray luminosity function, and the luminosity-dependent Type 2 AGN fraction, the CLFs of Type 1 and 2 AGNs are calculated at z ≈ 0 and 0.9. At both z, there is no statistically significant difference in the mean halo mass of Type 2 and 1 AGNs at any luminosity. There is marginal evidence that Type 1 AGNs may have larger halo masses than Type 2s, which would be consistent with an evolutionary picture where quasars are initially obscured and then subsequently reveal themselves as Type 1s. As the Type 1 lifetime is longer, the host halo will increase somewhat in mass during the Type 1 phase. The CLF technique will be a powerful way to study the properties of many AGN subsets (e.g. radio-loud, Compton-thick) as future wide-area X-ray and optical surveys substantially increase our ability to place AGNs in their cosmological context. © 2016 The Authors. Published by Oxford University Press on behalf of the Royal Astronomical Society.


Koros W.J.,Georgia Institute of Technology | Zhang C.,Georgia Institute of Technology
Nature Materials | Year: 2017

Materials research is key to enable synthetic membranes for large-scale, energy-efficient molecular separations. Materials with rigid, engineered pore structures add an additional degree of freedom to create advanced membranes by providing entropically moderated selectivities. Scalability — the capability to efficiently and economically pack membranes into practical modules — is a critical yet often neglected factor to take into account for membrane materials screening. In this Progress Article, we highlight continuing developments and identify future opportunities in scalable membrane materials based on these rigid features, for both gas and liquid phase applications. These advanced materials open the door to a new generation of membrane processes beyond existing materials and approaches. © 2017 Nature Publishing Group, a division of Macmillan Publishers Limited. All Rights Reserved.


News Article | April 17, 2017
Site: co.newswire.com

The Oracle Virus from Paul Michael Privateer Points an Allegorical Finger at the New Order of American Politics

Digital Book Guild announces its first novel, with proceeds going to St. Jude Children's Research Hospital. Digital Book Guild is a non-profit publisher attracting writers willing to donate some or all of their proceeds to various charities. Paul Michael Privateer's The Oracle Virus is the first release in this effort. Reviewers and readers have said The Oracle Virus has a fast-paced plot that moves through bizarre serial murders, kidnappings, betrayal and deceit, with a jolting tension running through the story to the end in Washington DC. The violent climax between nature and man, the fake fog of Washington D.C. politics, Kafkaesque intrigue, and epic battles collide allegorically, leading readers down America's truth-resistant political rabbit hole, with an orange-haired real estate tycoon as guide. The book has nothing and everything to do with the Oracle, not Larry Ellison's but America's. Others have enjoyed it as the first "Google-assist read". They've stated that while the surface action pleases pop readers hooked on hybrid international thriller-sci-fi mysteries, there is a layer below resembling a Lynchian ecosystem reminiscent of Borges, Kafka, Hesse, Updike, Rushdie, Kidman, Antonioni, Greenville, Ihimaera, Adichie, and Abani. The Oracle Virus invents a new genre—hyperallegorical realism. The deeper a reader dives in, the more The Oracle Virus reveals itself as postcolonial fiction, a cautionary tale of the effects of Trumpian politics dressed up as a blockbuster. Others insist that Privateer has created a novel that reads like a movie and is in a league with the likes of John Le Carre (and British intelligence), John Grisham (and legal acumen), Dan Brown (and religious intrigue), and Patricia Cornwell (and forensic science). Digital Book Guild believes it has a winner, given responses suggesting the novel is flat out the most intense sci-fi mystery and best political thriller offered up in a decade, a novel in which readers travel through two centuries, fly in and out of exotic world capitals, meet renegade Nazis, a computer genius, Dark Web cave dwellers, Shaolin priests, Interpol's toughest agents, and strange prophets, and then meet Jack Kavanaugh, a modern hybrid of Sherlock Holmes and Jason Bourne, only smarter and stronger. The first reviewer's comments are about the novel's realism: "This sci-fi mystery thriller is set in the present, save for opening Hitler Nazi flashbacks. I say this in case a prospective reader expects star travel. The description and praise stated in the synopsis are accurate. Parts of the 'science fiction' aspect of the book's storyline are increasingly believable, given ever-expanding bio-technology. Scary! I tend to Google items I find in fiction, to learn how fictitious some story features are. I found that seriously-sized drone apocalyptic delivery vehicles used in the scary climax are available. Privateer spins an engaging mystery thriller. His social awareness is a plus. All proceeds are to go to St. Jude Children's Hospital, cancer research division." The editorial board agrees that The Oracle Virus is a very unusual, advanced high-tech detective story that combines science fiction, detective, thriller, and romance elements. The story starts out meticulously and then becomes a real page-turner. The cover provides a good overview: The Oracle Virus nerve-blasts its way into being a classic sci-fi thriller with substantial philosophic insight.
It doesn't brake for a nanosecond for Hitchcock twists or Philip K. Dick's paranoia. Page one is a rabbit hole: why is there a fake Gestapo assassination of Hitler, or is there? How does a secret genetics lab survive WWII bombings? Can a machine named Mediatron create reality? Or can a nanovirus control our minds? How can a serial geek murder, a hurricane headed toward D.C., a whale stranding, the kidnapping of world presidents, a bloody fight atop the Washington Monument and a secret Louisiana rogue organization all be connected? Or are they? The reviewers found that Privateer's social awareness was a plus, with proceeds going to St. Jude Children's Hospital, cancer research division. The Oracle Virus may be the first novel ever whose charity goal is announced on the title page. Paul Michael Privateer was born in New York, served in the United States Air Force, and is interested in intersections between literature, media, and science/information technology. His books include Romantic Voices and Inventing Intelligence, and many of his journal articles deal with the cultural and political effects of cyberspace, digital technology, and corporate media. Privateer has taught at San Jose State, the University of Southern Mississippi, Georgia Institute of Technology and Arizona State University. The University of Geneva, Stanford, and MIT offered him a Fulbright and visiting professorships. He has appeared in the New York Times and on CNN, PBS, ABC, NPR, and BBC4 for his work on education reform, citizen service, and the digital future. His fiction focuses on the most basic aspects of being human: love, passion, fidelity, identity, taboos, social alienation, insecurity and death. His next novels, A Woman in Love and The Nightmare Collector, explore the limits of digital media and hyperreal minimalism. His fiction is about fiction. His recent novel, The Oracle Virus, pays sometimes subtle homage to McFarlane, Shakespeare, Hugo, Dickens, Woolf, Kafka, Hardy, Melville, Camus, Steinbeck, Beckett, Borges, Dick, Auster, Angelou, Ellison, Roth, Gibson and many others whose influences ultimately make serious fiction writing a ritual gathering of ghosts. This respect and fascination began with his favorite childhood game: Authors. Privateer lives in the Pacific Northwest and is engaged in socially conscious initiatives. He is founder of NoSchoolViolence.org and Seattle Data for Good. He kayaks and likes trekking Puget Sound islands and the Olympic Peninsula with Nell, a curious but cautiously social black lab. For some unknown reason, she doesn't sniff everyone's hand.


News Article | May 2, 2017
Site: www.treehugger.com

Biomimicry is a great tool to solve problems. Rather than reinvent the wheel, we can often look at the solutions that nature has come up with over millions of years of trial and error. For example, the study of how ants can so quickly move underground and dig relatively stable tunnels in all kinds of soil can teach scientists and engineers a lot, some of which might be quite useful for making robots that could do search & rescue missions or explore hard-to-access corners of the Earth (equipped with the proper sensors, they could be used for all kinds of environmental monitoring jobs). That's exactly what researchers from the Georgia Institute of Technology have done. Dr Nick Gravish, who led the research, designed "scientific grade ant farms" - allowing the ants to dig through sand trapped between two plates of glass, so every tunnel and every movement could be viewed and filmed. "These ants would move at very high speeds," he explained, "and if you slowed down the motion, (you could see) it wasn't graceful movement - they have many slips and falls." Some of the things they learned are that ants use their antennae to catch themselves when falling, and that they are very deliberate about the diameter of the tunnels they dig, keeping them all the same size regardless of soil type (approximately one body length in diameter). Check out some of the very interesting research videos published by Dr Nick Gravish. The researchers even built a "homemade X-Ray CT scanner" to look at tunnels in 3D.


Martin Burke is a tad envious. A chemist at the University of Illinois in Urbana, Burke has watched funding agencies back major research initiatives in other fields. Biologists pulled in billions of dollars to decipher the human genome, and physicists persuaded governments to fund the gargantuan Large Hadron Collider, which discovered the Higgs boson. Meanwhile chemists, divided among dozens of research areas, often wind up fighting for existing funds. Burke wants to change that. At the American Chemical Society (ACS) meeting here earlier this month, he proposed that chemists rally around an initiative to synthesize most of the hundreds of thousands of known organic natural products: the diverse small molecules made by microbes, plants, and animals. "It would be a moon mission for our field," Burke says. The effort, which would harness an automated synthesis machine he and his colleagues developed to snap together molecules from a set of premade building blocks, could cost $1 billion and take 20 years, Burke estimates. But the idea captivates at least some in the field. "Assuming it's a robust technology, I would have to think it would be revolutionary," says John Reed, the global head of pharma research and early development at Roche in Basel, Switzerland. "Even if it only allowed you to make half the compounds, it strikes me as worthy." Natural products have countless uses in modern society. They make up more than half of all medicines, as well as dyes, diagnostic probes, perfumes, sweeteners, lotions, and so on. "There's probably not a home on the planet that has not been impacted by natural products," Burke says. But discovering, isolating, and testing new natural products is slow, painstaking work. Take bryostatins, a family of 20 natural products first isolated in 1976 from spongelike marine creatures called bryozoans. Bryostatins have shown potential for treating Alzheimer's disease and HIV, and demand has skyrocketed. Yet chemists must mash up 14 tons of bryozoans to produce just 18 grams of bryostatin-1. Synthesizing new bryostatins is equally hard, each one requiring dozens of chemical steps. Burke thinks there is a better way. Two years ago, he and colleagues unveiled a machine that can link a variety of building blocks Lego-wise to create thousands of natural product compounds and their structural relatives. Now he says the approach can be scaled up. Molecular biologists have already automated the synthesis of short strands of DNA, proteins, and sugar chains, revolutionizing biomedicine. Burke argues that doing the same for natural products "could have major positive implications not just for chemistry, but for society." Two years ago, Burke estimated that assembling 75% of natural products with his machine would take some 5000 different building blocks, compared with just four for DNA—a challenging number for chemical suppliers to make and stock. But now, Burke told the ACS meeting, the problem looks more manageable. His lab recently teamed up with that of Jeffrey Skolnick, a computational biologist at the Georgia Institute of Technology in Atlanta. They surveyed the literature on natural products, counted 282,487 compounds, and mapped all their structures. Skolnick's team then designed an algorithm to break each one into fragments, snapping only single bonds between carbon atoms—the kind of bonds Burke's machine can reassemble. Then the researchers asked the computer how many unique fragments it would take to reconstruct the library. 
It turned out that just 1400 building blocks would suffice to synthesize 75% of all natural product "chemical space," which includes related compounds not made by any organism. "This suggests it's a bounded, solvable problem," Burke says. "It's a profound idea," says Mukund Chorghade, president of THINQ Pharma in Mumbai, India, who believes it would be a boon for drug discovery because it could provide untold numbers of lead compounds for developing new treatments. But not everyone is sold. "Marty is a visionary," says Larry Overman, a synthetic organic chemist at the University of California, Irvine. However, he says, natural product molecules are vastly more complex in structure than biopolymers such as DNA and proteins, and whether automated synthesizers could reproduce that complexity isn't clear. An even harder problem could be securing funding for a large-scale effort to make the building blocks, assemble them into intermediate structures, and fold and tailor those to produce the final shapes. Bob Lees, head of the division at the National Institute of General Medical Sciences (NIGMS) in Bethesda, Maryland, that supports much of the organic chemical synthesis research in the United States, notes that in recent years NIGMS has devoted less funding to large-scale projects and more to research by single investigators. And last month, the Trump administration's initial budget request for 2018 proposed cutting the budget for the National Institutes of Health, the parent organization of NIGMS, by nearly 20%. Whatever its merits, Burke's cornucopian vision could face a steep uphill climb to become reality.
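As a very rough illustration of the counting exercise described above, the sketch below cuts every acyclic carbon-carbon single bond in a few example molecules and tallies the unique fragments that result. It is a toy stand-in rather than the Skolnick group's algorithm (which handled the full 282,487-compound library with its own definition of building blocks), and it assumes the open-source RDKit toolkit is installed.

# Toy fragment count: cut every acyclic C-C single bond and collect the unique
# pieces. Illustrative only; not the published algorithm. Requires RDKit.
from rdkit import Chem

def cc_single_bond_fragments(smiles):
    mol = Chem.MolFromSmiles(smiles)
    cuts = [b.GetIdx() for b in mol.GetBonds()
            if b.GetBondType() == Chem.BondType.SINGLE
            and not b.IsInRing()
            and b.GetBeginAtom().GetAtomicNum() == 6
            and b.GetEndAtom().GetAtomicNum() == 6]
    if not cuts:
        return {Chem.MolToSmiles(mol)}
    pieces = Chem.FragmentOnBonds(mol, cuts)        # '*' atoms mark the cut points
    return {Chem.MolToSmiles(frag) for frag in Chem.GetMolFrags(pieces, asMols=True)}

# A tiny mock "library": shared fragments show how a few building blocks
# can cover several products.
library = ["CCc1ccccc1", "CCCc1ccccc1", "CC(C)c1ccccc1O"]
building_blocks = set()
for smi in library:
    building_blocks |= cc_single_bond_fragments(smi)
print(len(building_blocks), "unique fragments cover", len(library), "molecules")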


News Article | April 19, 2017
Site: www.futurity.org

Methane-making microbes may have battled “rust-breathing” microbes for dominance in early Earth’s oceans—and kept those oceans from freezing under an ancient, dimmer sun in the process, new research suggests. For much of its first two billion years, Earth was a very different place: oxygen was scarce, microbial life ruled, and the sun was significantly dimmer than it is today. Yet the rock record shows that vast seas covered much of the early Earth under the faint young sun. Scientists have long debated what kept those seas from freezing. A popular theory is that potent gases such as methane—with many times more warming power than carbon dioxide—created a thicker greenhouse atmosphere than required to keep water liquid today. In the absence of oxygen, iron built up in ancient oceans. Under the right chemical and biological processes, this iron rusted out of seawater and cycled many times through a complex loop, or “ferrous wheel.” Some microbes could “breathe” this rust in order to outcompete others, such as those that made methane. When rust was plentiful, an “iron curtain” may have suppressed methane emissions. “The ancestors of modern methane-making and rust-breathing microbes may have long battled for dominance in habitats largely governed by iron chemistry,” says Marcus Bray, a biology doctoral candidate in the laboratory of Jennifer Glass, assistant professor in the Georgia Institute of Technology’s School of Earth and Atmospheric Sciences. Using mud pulled from the bottom of a tropical lake, the researchers gained a new grasp of how ancient microbes made methane despite this “iron curtain.” Collaborator Sean Crowe, an assistant professor at the University of British Columbia, collected mud from the depths of Indonesia’s Lake Matano, an anoxic iron-rich ecosystem that uniquely mimics early oceans. Bray placed the mud into tiny incubators simulating early Earth conditions, and tracked microbial diversity and methane emissions over a period of 500 days. Minimal methane was formed when rust was added; without rust, microbes kept making methane through multiple dilutions. Extrapolating these findings to the past, the team concluded that methane production could have persisted in rust-free patches of ancient seas. Unlike the situation in today’s well-aerated oceans, where most natural gas produced on the seafloor is consumed before it can reach the surface, most of this ancient methane would have escaped to the atmosphere to trap heat from the early sun. Glass was principal investigator of the study in Geobiology. Additional members of the research team are from Georgia Tech, the University of British Columbia, the Indonesian Institute of Sciences, the Skidaway Institute of Oceanography, and the University of Kansas. A grant from NASA Exobiology funded the work. The Center for Dark Energy Biosphere Investigations and the NASA Astrobiology Institute also provided support. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the sponsoring organizations.


Even a simple chemical reaction can be surprisingly complicated. That’s especially true for reactions involving catalysts, which speed up the chemistry that makes fuel, fertilizer and other industrial goods. In theory, a catalytic reaction may follow thousands of possible paths, and it can take years to identify which one it actually takes so scientists can tweak it and make it more efficient. Now researchers at the Department of Energy’s SLAC National Accelerator Laboratory and Stanford University have taken a big step toward cutting through this thicket of possibilities. They used machine learning – a form of artificial intelligence – to prune away the least likely reaction paths, so they can concentrate their analysis on the few that remain and save a lot of time and effort. The method will work for a wide variety of complex chemical reactions and should dramatically speed the development of new catalysts, the team reported in Nature Communications. “Designing a novel catalyst to speed a chemical reaction is a very daunting task,” said Thomas Bligaard, a staff scientist at the SUNCAT Center for Interface Science and Catalysis, a joint SLAC/Stanford institute where the research took place. “There’s a huge amount of experimental work that normally goes into it.” For instance, he said, finding a catalyst that turns nitrogen from the air into ammonia – considered one of the most important developments of the 20th century because it made the large-scale production of fertilizer possible, helping to launch the Green Revolution – took decades of testing various reactions one by one. Even today, with the help of supercomputer simulations that predict the results of reactions by applying theoretical models to huge databases on the behavior of chemicals and catalysts, the search can take years, because until now it has relied largely on human intuition to pick possible winners out of the many available reaction paths. “We need to know what the reaction is, and what are the most difficult steps along the reaction path, in order to even think about making a better catalyst,” said Jens Nørskov, a professor at SLAC and Stanford and director of SUNCAT. “We also need to know whether the reaction makes only the product we want or if it also makes undesirable byproducts. We’ve basically been making reasonable assumptions about these things, and we really need a systematic theory to guide us.” For this study, the team looked at a reaction that turns syngas, a combination of carbon monoxide and hydrogen, into fuels and industrial chemicals. The syngas flows over the surface of a rhodium catalyst, which like all catalysts is not consumed in the process and can be used over and over. This triggers chemical reactions that can produce a number of possible end products, such as ethanol, methane or acetaldehyde. “In this case there are thousands of possible reaction pathways – an infinite number, really – with hundreds of intermediate steps,” said Zachary Ulissi, a postdoctoral researcher at SUNCAT. “Usually what would happen is that a graduate student or postdoctoral researcher would go through them one at a time, using their intuition to pick what they think are the most likely paths. This can take years.” The new method ditches intuition in favor of machine learning, where a computer uses a set of problem-solving rules to learn patterns from large amounts of data and then predict similar patterns in new data. 
It’s a behind-the-scenes tool in an increasing number of technologies, from self-driving cars to fraud detection and online purchase recommendations. The data used in this process came from past studies of chemicals and their properties, including calculations that predict the bond energies between atoms based on principles of quantum mechanics. The researchers were especially interested in two factors that determine how easily a catalytic reaction proceeds: How strongly the reacting chemicals bond to the surface of the catalyst and which steps in the reaction present the most significant barriers to going forward. These are known as rate-limiting steps. A reaction will seek out the path that takes the least energy, Ulissi explained, much like a highway designer will choose a route between mountains rather than waste time looking for an efficient way to go over the top of a peak. With machine learning the researchers were able to analyze the reaction pathways over and over, each time eliminating the least likely paths and fine-tuning the search strategy for the next round. Once everything was set up, Ulissi said, “It only took seconds or minutes to weed out the paths that were not interesting. In the end there were only about 10 reaction barriers that were important.” The new method, he said, has the potential to reduce the time needed to identify a reaction pathway from years to months. Andrew Medford, a former SUNCAT graduate student who is now an assistant professor at the Georgia Institute of Technology, also contributed to this research, which was funded by the DOE Office of Science.
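The iterative pruning described above can be pictured as a small surrogate-model loop: compute a handful of pathway barriers "exactly", fit a cheap regression model to them, and discard the pathways the model predicts to be high-barrier before spending more expensive calculations. The sketch below uses synthetic data and a generic Gaussian-process surrogate purely to illustrate that loop; it is not the SUNCAT group's code, and the descriptors, batch size and pruning fraction are all assumptions.

# Schematic surrogate-assisted pruning of candidate reaction pathways.
# Synthetic descriptors and a stand-in "expensive" barrier calculation are
# used purely for illustration; none of this is the actual SUNCAT workflow.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(1)
n_paths, n_features = 500, 6
descriptors = rng.normal(size=(n_paths, n_features))       # e.g. bond-energy descriptors
true_barriers = descriptors @ rng.normal(size=n_features) + 0.1 * rng.normal(size=n_paths)

def expensive_barrier(i):
    return true_barriers[i]        # stand-in for a quantum-chemistry calculation

candidates = list(range(n_paths))
evaluated = {}
for _ in range(4):
    # "Compute" a small batch of the surviving candidates exactly.
    batch = [i for i in candidates if i not in evaluated][:20]
    evaluated.update({i: expensive_barrier(i) for i in batch})
    # Fit the cheap surrogate to everything evaluated so far.
    X = descriptors[list(evaluated)]
    y = np.array(list(evaluated.values()))
    surrogate = GaussianProcessRegressor().fit(X, y)
    # Keep only the half of the candidates predicted to have the lowest barriers.
    predicted = surrogate.predict(descriptors[candidates])
    keep = np.argsort(predicted)[: len(candidates) // 2]
    candidates = [candidates[j] for j in keep]
print(f"{len(candidates)} pathways left after pruning, {len(evaluated)} expensive evaluations used")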


News Article | April 17, 2017
Site: www.newscientist.com

Nice and steady does it. A video analysis system uses motion tracking and machine learning to assess how good surgeons are at suturing a wound. "Different surgeons all have different styles of suturing," says Aneeq Zia at the Georgia Institute of Technology in Atlanta, whose team developed the system. The team first captured footage of 41 surgeons and nurses practising their suturing and knot-tying skills on foam boards. Participants also wore an accelerometer on each hand to capture motion data, and a clinician then watched the videos and rated each person's skills. A machine-learning algorithm used these ratings to gain an understanding about which features from the video and accelerometer data were associated with high scores. It found that the best clinicians moved their hands in smooth synchronisation and in a similar way with every stitch. Those with lower ratings moved in a more erratic and less predictable manner. The system then gave the videos its own rating based on what it had learned, without taking into account the assigned ratings for each one. Using a combination of the video and accelerometer data, it achieved an accuracy of 93.2 per cent in matching the clinician's skill assessments for suturing and 94 per cent for knot-tying. Including accelerometer data had less impact on the results than expected, says Zia, and actually made the system slightly less accurate at rating the knot-tying task. This may be because the video footage contains information about how surgeons move both their hands and tools, whereas the accelerometers could only track hand movements. Zia hopes that later versions of the system could give trainee surgeons useful feedback on their suturing skills without an expert having to observe them. Finding ways to measure a surgeon's performance at certain tasks could help us better understand the link between that and results in patients, says Gregory Hager at Johns Hopkins University in Baltimore, Maryland. There are well-defined ways to assess the effectiveness of a particular drug, but it's trickier to quantify the impact of surgical procedures, he says. The more data we have on surgical performance, the more we can understand the impact of different techniques on health. Eventually, Zia says he would like to create a system that tells trainees where they went wrong and how to perform better next time. They could hone their skills using an automated system, before being assessed by an expert.
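A toy version of the pipeline the article describes might summarise each motion recording with a couple of smoothness-style features (for example mean jerk and left/right-hand correlation) and train a classifier against expert ratings. The sketch below does exactly that on synthetic data; the features, the simulated recordings and the random-forest model are placeholder assumptions, not the Georgia Tech system.

# Toy skill-rating pipeline: smoothness-style features from simulated motion
# traces, classified against "expert" labels. Synthetic data and feature
# choices are placeholder assumptions, not the actual Georgia Tech system.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

def motion_features(accel):
    # Mean jerk magnitude (erratic motion scores high) and left/right-hand
    # correlation (well-synchronised hands score high).
    jerk = np.diff(accel, axis=0)
    return [np.abs(jerk).mean(), np.corrcoef(accel[:, 0], accel[:, 1])[0, 1]]

# Synthetic "recordings": label 1 moves smoothly and symmetrically, label 0 erratically.
X, y = [], []
for label in (0, 1):
    for _ in range(40):
        t = np.linspace(0, 10, 500)
        base = np.sin(t)
        noise = (0.8 if label == 0 else 0.1) * rng.normal(size=(500, 2))
        accel = np.column_stack([base, base]) + noise
        X.append(motion_features(accel))
        y.append(label)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, np.array(X), np.array(y), cv=5).mean())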


News Article | April 26, 2017
Site: www.newscientist.com

Everyone poops, and it takes them about the same amount of time. A new study of the hydrodynamics of defecation finds that all mammals with faeces like ours take 12 seconds on average to relieve themselves, no matter how large or small the animal. The research, published in Soft Matter, reveals that the soft matter coming out of the hind ends of elephants, pandas, warthogs and dogs slides out of the rectum on a layer of mucus that keeps toilet time to a minimum. "The smell of body waste attracts predators, which is dangerous for animals. If they stay longer doing their thing, they're exposing themselves and risking being discovered," says Patricia Yang, a mechanical engineer at the Georgia Institute of Technology in Atlanta. Yang and colleagues filmed elephants, pandas and warthogs at a local zoo, and one team member's dog in a park, as they defecated. All these animals produce cylindrical faeces, like we do, and this is the most common kind among mammals. Though the animals' body masses ranged from 4 to 4,000 kilograms, the duration of defecation remained constant. That consistency across animals is down to a few things. First, the length of faecal pieces was 5 times as long as the diameter of the rectum in each of the animals. Yang also found that the normal, low-level pressure animals apply to push through a bowel movement is constant, and unrelated to a creature's body mass. This means that, whether it's a human or a mouse, the pressure used on normal excrement is the same. This is similar to her previous finding that mammals take the same amount of time to empty their bladders. The final piece of this puzzle is the mucus layer in the colon, which plays a big role in the duration of evacuation. Creatures with cylindrical faeces aren't squeezing matter through a nozzle like a toothpaste tube. "It's more like a plug that just goes through a chute," she says. Yang says larger animals have more rectal mucus, which facilitates quicker expulsion. Constipation happens when that mucus is absorbed by the faeces. Without this slick layer, a human applying no pressure at all would take 500 days to void their bowels, Yang says. "It would be shortened to 6 hours if you apply maximum pressure, but I believe you'd still need to see a doctor," she says. All animals produce on average two pieces of faeces. Larger animals have longer faeces and a longer rectum, but they have thicker mucus, which makes the faeces accelerate faster – so they travel a longer distance in the same amount of time. Yang's team also used YouTube videos of animals relieving themselves to measure the average time of defecation among 23 different species. "There's a surprising amount of poop videos online. They're mostly from zoos where tourists film it and upload it," she says. They collected stool samples from 34 species and found that diet affects the density of faecal matter. Floaters – droppings that are lighter than water – are produced by pandas and other herbivores like elephants and kangaroos. They eat low-nutrition, high-fibre foods and defecate much of it in undigested form. Sinkers are produced by large carnivores like bears, tigers and lions. They eat heavier, indigestible ingredients, including fur and bone. Using a rheometer – a device that measures the way fluids flow under applied force – Yang found that faeces are shear-thinning, which means they have lower resistance the faster they're deformed. That's why dog poop feels slippery when you step on it.
Based on animals at the Atlanta Zoo, they found that on average, animals take in about 8 per cent of their body mass in food, and expel 1 per cent of body mass in faeces. Their observations fed into a mathematical model that can predict defecation times for various problems within the digestive system. “If it’s taking far longer than 12 seconds, I’d say you should go see someone about it,” she says. “But you can’t count the newspaper time.”


News Article | May 3, 2017
Site: www.newscientist.com

Novel proteins, created from scratch with no particular design in mind, can sometimes do the work of a natural protein. The discovery may widen the toolkit of synthetic biologists trying to build bespoke organisms. There are more proteins possible than there are atoms in the universe, and yet evolution has tested only a minuscule fraction of them. No one knows whether the vast, untried space of proteins includes some that could have biological uses. Until now, most researchers assembling novel proteins have meticulously selected each amino acid building block so that the resulting protein folds precisely into a pre-planned shape that closely fits the molecules it is intended to interact with. Michael Hecht, a chemist at Princeton University, decided to try a much looser approach. "I was trying to see what the hell's out there," he says. Proteins fold because certain amino acids associate easily with water, while others tend to be tucked away in the interior of the protein. Hecht chose a common shape for folded proteins, called a four-helix bundle – reminiscent of four fingers pressed tightly together – and worked out which positions in the protein needed to have water-loving amino acids and which parts water-avoiding in order to take that shape. Then he randomly picked amino acids from those two categories to fill those positions. He repeated the process over and over, eventually designing around a million different semi-random proteins. Next, Hecht built DNA molecules coding for each of these proteins and inserted this genetic material into bacteria so they would make them. To test whether any of these proteins had biological functions, Hecht supplied each one to E. coli bacteria that were missing a single gene (and hence, the protein it coded for). The missing genes Hecht tested were ones that coded for enzymes that catalyse biochemical reactions. Would the novel proteins "rescue" the bacteria and help them survive? Most of the time, they didn't. But for four of the 80 gene deletions Hecht worked on, at least one – and in one case, hundreds – of the semi-random novel proteins did restore the missing function. "We were ecstatic," says Hecht. When he looked more closely, he got a surprise. Not a single one of the rescue proteins replaced the missing enzyme by catalysing its reaction. Instead, they somehow upregulated other, related enzymes in the bacteria so that they could take over for the absent one, he told the Astrobiology Science Conference in Mesa, Arizona, last week. In a follow-up experiment, Hecht has found at least one novel protein that does act as an enzyme, catalysing a chemical reaction needed to make the amino acid serine. Hecht suggests that this direct catalysis should be even more common in proteins with basic shapes other than the four-helix bundle, which is normally a structural protein rather than an enzyme. Eventually, synthetic biologists should be able to use Hecht's approach to find a wide range of novel proteins for their toolkit. For instance, new proteins might be able to provide functions similar to today's antibody-based drugs but without their unwanted tendency to clump together, says Hecht. Other proteins might bind to or break down toxins. But fulfilling this promise is probably some way off. So far, Hecht has been unable to predict the function of his novel proteins, notes Nicholas Hud, a chemist at the Georgia Institute of Technology, meaning a huge amount of trial and error is needed to find something useful.
“De novo design of enzymes is still a bit beyond our reach,” says Hud.


News Article | April 17, 2017
Site: www.newscientist.com

The laws of attraction rule Titan’s sands. Static electricity clumping up sand could explain the strange dunes on Saturn’s largest moon. Titan is a hazy moon with a thick, orange nitrogen atmosphere. Its poles are home to placid methane lakes, and its equatorial regions are covered with dunes up to 100 metres high. The dunes seem to be facing in the wrong direction, though. The prevailing winds on Titan blow toward the west, but the dunes point east. “You’ve got this apparent paradox,” says Josef Dufek at the Georgia Institute of Technology in Atlanta. “The winds are moving one way and the sediments are moving the other way.” To understand the shifting of Titan’s sands, Dufek and his colleagues placed grains of organic materials like those on Titan’s surface in a chamber with conditions simulating Titan’s and spun them in a cylindrical tumbler. When they opened the chamber, static electricity from the grains jostling in the dry air had clumped them together. “It was like when you open a box on a winter morning and the packing peanuts stick everywhere,” says Dufek. “These hydrocarbons on Titan are low density and they stick to everything, just like packing peanuts do.” Grains on Titan can maintain that charge and stick together for much longer than particles on Earth could, because of their low density and the dryness of Titan’s atmosphere. That could explain why the dunes don’t align with the wind. The breeze close to Titan’s surface is relatively mild, generally staying below 5 kilometres per hour. The sand’s “stickiness” would make it difficult for such low winds to move them. More powerful winds from storms or seasonal changes could blow otherwise-stable sands eastward, forming the dunes that we see today. “The relative importance of electrostatic forces on blowing sand are likely to be more significant on Titan,” says Ralph Lorenz at the Johns Hopkins University Applied Physics Lab in Laurel, Maryland. “The wind speed at which particles start to move could be higher than we might otherwise expect.” The unique clumping of Titan’s sands may even explain how the grains got there in the first place. Their make-up is similar to particles suspended in the soupy atmosphere, but the sand grains are much bigger. “The atmospheric particles are very small, so they can’t be the things blowing around in those dunes, but this is one way that we could make them grow,” says Jani Radebaugh at Brigham Young University in Utah. Once enough particles had clumped together, they would fall out of the sky, coating the moon’s surface like electric snow.


News Article | April 17, 2017
Site: www.techrepublic.com

As society enters an era where AI will take life-or-death decisions—spotting whether moles are cancerous and driving us to work—trusting these machines will become ever more important. The difficulty is that it's almost impossible for us to understand the inner workings of many modern AI systems that perform human-like tasks, such as recognizing real-life objects or understanding speech. The models produced by the deep-learning systems that have powered recent AI breakthroughs are largely opaque, functioning as black boxes that spit out a result but whose operation remains mysterious. This inscrutability stems from the complexity of the large neural networks that underpin deep-learning systems. These brain-inspired networks are interconnected layers of algorithms that feed data into each other and can be trained to carry out specific tasks. The way these systems represent what they have learned is spread across these sprawling and densely connected networks, and dispersed in such a way that their workings are very tricky to make sense of. Technology giants such as Google, Facebook, Microsoft and Amazon have laid out a vision of the future where AI agents will help people in their daily lives, both at work and at home: organizing our day, driving our cars, delivering our goods. But for that future to be realized, machine learning models will need to be open to scrutiny, says Dr Tolga Kurtoglu, CEO of PARC, the pioneering Silicon Valley research facility renowned for work in the late 1970s that led to the creation of the mouse and graphical user interface. "There is a huge need in being able to meaningfully explain why a particular AI algorithm came to the conclusion it did," he said, particularly as AI increasingly interacts with consumers. "That will have a profound impact on how we think about human-computer interaction in the future." Systems will need to be able to articulate their assumptions, which paths they explored, what they ruled out, and why and how they arrived at a particular conclusion, according to Kurtoglu. "It's the first step towards establishing a trusted relationship between human agents and AI agents," he said, adding that collaboration between humans and machines could prove highly effective in solving problems. Greater insight into an AI's workings would also help identify where faulty assumptions originated. Machine learning models are only as good as the training data used to create them, and inherent biases in that data will be reflected in the conclusions these models reach. For example, the facial recognition system that categorises images in Google's Photos app made headlines when it tagged black faces as gorillas, an error that was blamed on it not being trained on sufficient images of African Americans. Similarly, a system that learned to associate male and female names with concepts such as 'executive' and 'professional' ended up repeating reductive gender stereotypes. As responsibility for decisions that can have a material effect on our lives is handed to AI, such as whether someone should be given a loan or which treatment is best suited to a patient, the need for transparency becomes more pressing, said Dr Ayanna Howard, of the School of Electrical and Computer Engineering at the Georgia Institute of Technology.
"This is an important issue, especially when these intelligent agents are included in decision-making processes that directly impact an individual's well-being, liberty, or subjective treatment by society," she said. "For example, if a machine learning system is involved in determining what medical treatment or procedure an individual should receive, without some disclosure of the system's thinking process - how do we know if such decisions are biased or not?" Research at Georgia Institute of Technology has shown that in certain situations people will "overtrust" decisions made by AI or robots, she said, highlighting the need for systems capable of articulating their reasoning process to users. PARC has recently won a four year grant from the US Department of Defense to work on this problem of devising learning systems that offer greater transparency. CEO Tolga Kurtoglu was optimistic about the research's prospects, but said it would likely "take a long time to really crack some of those hard technical questions that need to get answered". "One of the things that we're looking at is being able to translate between the semantic representations of how humans think about a certain set of problems, and computational representations of knowledge and information, and be able to seamlessly go back and forth—so that you can map from one domain to another." The challenge of augmenting current deep-learning approaches to be more understandable will be considerable, according to Dr Sean Holden, senior lecturer in Machine Learning in the Computer Laboratory at Cambridge University, "I don't see any evidence that a solution is on the horizon," he said. However, while deep learning models have had tremendous success in areas such as image and speech recognition, and are massively more successful than other AI techniques for tackling particular tasks, there are other approaches to AI whose reasoning are clearer. "Despite the current obsession with all things deep, this represents only one part of the wider field of AI. Other areas in AI are much more amenable to producing explanations," Holden said.


News Article | March 16, 2017
Site: www.techtimes.com

New research claims more "airpocalypses" are expected to occur in China as global warming worsens. The severe pollution suffered by China recently points to one major cause — climate change. Climate scientists have found that the lack of ice in the Arctic and increasing snowfalls in Siberia have changed the weather patterns in East China. "The ventilation is getting worse," study author Yuhang Wang, an atmospheric scientist at Georgia Institute of Technology in Atlanta, said. Climate change, he said, has "a large effect on pollution in China." Air pollution became a national crisis in China during the 2012 to 2013 winter seasons. Almost two-thirds of the country's 74 big cities registered pollution levels beyond the national air quality standards. At the time, smog was also discovered to have contained small particles, smaller than 2.5 micrometers, which could cause heart and lung diseases. Some 90,000 people died and hundreds of thousands more fell ill during the 2013 winter smog. The event was often referred to as China's "airpocalypse." The situation did not improve even though tighter measures against emissions were implemented by the Chinese government. Although the summer air is clearer, winter smog has remained a serious problem. The new study, however, found that climate change was the key driver of the severe air pollution that took place. The team led by Wang published their research in the journal Science Advances on March 15. Wang was joined by Yufei Zou, Yuzhong Zhang, and Ja-Ho Koo. The researchers found that the 2013 airpocalypse came after the Arctic ice plunged to its record low, coupled with increased Siberian snowfall. The Arctic ice plunged again to its lowest last year, and then China suffered another round of airpocalypse this winter. "The very rapid change in polar warming is really having a large impact on China," Wang said. Emissions in China have been decreasing for the last four years, but winter smog has remained an environmental problem. Decreasing sea ice and increasing snowfall in the polar region kept the "cold air from getting into the eastern parts of China, where it would flush out the air pollution." As global warming continues to cause melting of Arctic ice, the researchers said, "extreme haze events in winter will likely occur at a higher frequency in China." The airpocalypse phenomenon should give a more urgent tone to initiatives in reducing air pollution and carbon dioxide emissions. The issue of smog reduction is not only about cutting emissions that pollute the air. "It is also about reducing emissions of greenhouse gases from China and all the other countries ... [to] slow down the rapidly changing Arctic climate," Wang said. According to recent research, greenhouse gas emissions from human activity account for almost 75 percent of the decrease of summer sea ice. © 2017 Tech Times, All rights reserved. Do not reproduce without permission.


News Article | April 28, 2017
Site: news.yahoo.com

Origami is an old Japanese art of folding paper into various decorative shapes, and it requires a certain nimbleness of the fingers. But a new method developed by researchers uses layers of a thin polymer that fold themselves into origami structures under the effect of light. The self-folding origami technique was developed by researchers from Georgia Institute of Technology and Peking University in China, who published a paper on the subject Friday in the journal Science Advances. And the possible applications of the technique aren’t just decorative — it can find uses in a range of fields, including “soft robots, microelectronics, soft actuators, mechanical metamaterials and biomedical devices,” according to the researchers. The paper, titled “Origami by frontal photopolymerization,” explains the method. A thin layer of liquid acrylate polymer is placed in a plate or between two glass slides, and then light from an LED projector is shone onto the polymer. A photo-initiating material mixed into the polymer causes it to harden under the effect of light, while a light-absorbing dye in it controls the amount of light absorbed. The areas that receive less light end up bending more, and the bending process starts when the polymer film is removed from the liquid. Jerry Qi, a professor in the Woodruff School of Mechanical Engineering at Georgia Tech and a co-author of the paper, said in a statement Friday: “During a specific type of photopolymerization, frontal photopolymerization, the liquid resin is cured continuously from the side under light irradiation toward the inner side. This creates a non-uniform stress field that drives the film to bend along the direction of light path.” To amplify the bending of the polymer and make more complex origami structures, the researchers shone light on both sides of the polymer. They created several structures, such as flowers, birds, tables, capsules and the well-known tessellation of the Miura fold. “We have developed two types of fabrication processes. In the first one, you can just shine the light pattern towards a layer of liquid resin, and then you will get the origami structure. In the second one, you may need to flip the layer and shine a second pattern. This second process gives you much wider design freedom,” Zeang Zhao, a doctoral student at Georgia Tech and Peking University, said in the statement. After about five to ten seconds of exposure to light, a film about 200 microns thick is formed. The structures produced by the researchers used multiple layers of the polymer and were all about half an inch in size. According to Qi, the size limit for structures created using this method is about one inch. The method would theoretically work with a large number of photo-curable polymers, and the system could be connected to a computer that generates precise three-dimensional models and the corresponding grayscale patterns controlling where light shines on the polymer. “We have developed a simple approach to fold a thin sheet of polymer into complicated three-dimensional origami structures. Our approach is not limited by specific materials, and the patterning is so simple that anybody with PowerPoint and a projector could do it,” Qi said.
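Qi's remark that "anybody with PowerPoint and a projector could do it" boils down to producing a grayscale image in which darker regions receive less light and therefore bend more. The sketch below writes such a pattern as a plain-text PGM image; the specific gray values, stripe layout, and the simple "darker equals more bending" mapping are illustrative assumptions, not calibration data from the paper.

    # Sketch: generate a grayscale "hinge" pattern for light-controlled folding.
    # Darker stripes receive less light and, per the article, bend more.
    # Gray values and stripe layout are illustrative assumptions, not the paper's recipe.

    WIDTH, HEIGHT = 200, 200
    BRIGHT, DARK = 255, 60          # assumed gray levels for flat vs. hinge regions
    HINGES = [50, 100, 150]         # x-positions of hinge lines (pixels)
    HINGE_HALF_WIDTH = 3

    def gray_at(x: int) -> int:
        """Darken pixels near a hinge line, leave the rest bright."""
        near_hinge = any(abs(x - h) <= HINGE_HALF_WIDTH for h in HINGES)
        return DARK if near_hinge else BRIGHT

    rows = [[gray_at(x) for x in range(WIDTH)] for _ in range(HEIGHT)]

    # Write a plain-text PGM image that a projector workflow could consume.
    with open("hinge_pattern.pgm", "w") as f:
        f.write(f"P2\n{WIDTH} {HEIGHT}\n255\n")
        for row in rows:
            f.write(" ".join(str(v) for v in row) + "\n")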


News Article | April 17, 2017
Site: news.mit.edu

A single cell can contain a wealth of information about the health of an individual. Now, a new method developed at MIT and National Chiao Tung University could make it possible to capture and analyze individual cells from a small sample of blood, potentially leading to very low-cost diagnostic systems that could be used almost anywhere. The new system, based on specially treated sheets of graphene oxide, could ultimately lead to a variety of simple devices that could be produced for as little as $5 apiece and perform a variety of sensitive diagnostic tests even in places far from typical medical facilities. The material used in this research is an oxidized version of the two-dimensional form of pure carbon known as graphene, which has been the subject of widespread research for over a decade because of its unique mechanical and electrical characteristics. The key to the new process is heating the graphene oxide at relatively mild temperatures. This low-temperature annealing, as it is known, makes it possible to bond particular compounds to the material’s surface. These compounds in turn select and bond with specific molecules of interest, including DNA and proteins, or even whole cells. Once captured, those molecules or cells can then be subjected to a variety of tests. The findings are reported in the journal ACS Nano, in a paper co-authored by Neelkanth Bardhan, an MIT postdoc, and Priyank Kumar PhD ’15, now a postdoc at ETH Zurich; Angela Belcher, the James Mason Crafts Professor in biological engineering and materials science and engineering at MIT and a member of the Koch Institute for Integrative Cancer Research; Jeffrey Grossman, the Morton and Claire Goulder and Family Professor in Environmental Systems at MIT; Hidde L. Ploegh, a professor of biology and member of the Whitehead Institute for Biomedical Research; Guan-Yu Chen, an assistant professor in biomedical engineering at National Chiao Tung University in Taiwan; and Zeyang Li, a doctoral student at the Whitehead Institute. Other researchers have been trying to develop diagnostic systems using a graphene oxide substrate to capture specific cells or molecules, but these approaches used just the raw, untreated material. Despite a decade of research, other attempts to improve such devices’ efficiency have relied on external modifications, such as surface patterning through lithographic fabrication techniques, or adding microfluidic channels, which add to the cost and complexity. The new finding offers a mass-producible, low-cost approach to achieving such improvements in efficiency. The heating process changes the material’s surface properties, causing oxygen atoms to cluster together, leaving spaces of bare graphene between them. This makes it relatively easy to attach other chemicals to the surface, which can interact with specific molecules of interest. The new research demonstrates how that basic process could potentially enable a suite of low-cost diagnostic systems, for example for cancer screening or treatment follow-up. For this proof-of-concept test, the team used molecules that can quickly and efficiently capture specific immune cells that are markers for certain cancers. They were able to demonstrate that their treated graphene oxide surfaces were almost twice as effective at capturing such cells from whole blood, compared to devices fabricated using ordinary, untreated graphene oxide, says Bardhan, the paper’s lead author. The system has other advantages as well, Bardhan says. 
It allows for rapid capture and assessment of cells or biomolecules under ambient conditions within about 10 minutes and without the need for refrigeration of samples or incubators for precise temperature control. And the whole system is compatible with existing large-scale manufacturing methods, making it possible to produce diagnostic devices for less than $5 apiece, the team estimates. Such devices could be used in point-of-care testing or resource-constrained settings. Existing methods for treating graphene oxide to allow functionalization of the surface require high temperature treatments or the use of harsh chemicals, but the new system, which the group has patented, requires no chemical pretreatment and an annealing temperature of just 50 to 80 degrees Celsius (122 to 176 F). While the team’s basic processing method could make possible a wide variety of applications, including solar cells and light-emitting devices, for this work the researchers focused on improving the efficiency of capturing cells and biomolecules that can then be subjected to a suite of tests. They did this by enzymatically coating the treated graphene oxide surface with peptides called nanobodies — subunits of antibodies, which can be cheaply and easily produced in large quantities in bioreactors and are highly selective for particular biomolecules. The researchers found that increasing the annealing time steadily increased the efficiency of cell capture: After nine days of annealing, the efficiency of capturing cells from whole blood went from 54 percent, for untreated graphene oxide, to 92 percent for the treated material. The team then performed molecular dynamics simulations to understand the fundamental changes in the reactivity of the graphene oxide base material. The simulation results, which the team also verified experimentally, suggested that upon annealing, the relative fraction of one type of oxygen (carbonyl) increases at the expense of the other types of oxygen functional groups (epoxy and hydroxyl) as a result of the oxygen clustering. This change makes the material more reactive, which explains the higher density of cell capture agents and increased efficiency of cell capture. “Efficiency is especially important if you’re trying to detect a rare event,” Belcher says. “The goal of this was to show a high efficiency of capture.” The next step after this basic proof of concept, she says, is to try to make a working detector for a specific disease model. In principle, Bardhan says, many different tests could be incorporated on a single device, all of which could be placed on a small glass slide like those used for microscopy. “I think the most interesting aspect of this work is the claimed clustering of oxygen species on graphene sheets and its enhanced performance in surface functionalization and cell capture,” says Younan Xia, a professor of chemistry and biochemistry at Georgia Institute of Technology who was not involved in this work. “It is an interesting idea.” The work was supported by the Army Research Office Institute for Collaborative Biotechnologies and MIT’s Tata Center and Solar Frontiers Center.
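The 54 percent versus 92 percent comparison is, at bottom, simple bookkeeping: the number of target cells recovered divided by the number present in the sample. The sketch below shows that calculation with invented counts; it is not data from the ACS Nano paper.

    # Sketch of the capture-efficiency bookkeeping behind the 54% vs 92% comparison.
    # The cell counts below are invented placeholders, not data from the ACS Nano paper.

    def capture_efficiency(captured_cells: int, spiked_cells: int) -> float:
        """Fraction of target cells captured from the sample, as a percentage."""
        return 100.0 * captured_cells / spiked_cells

    spiked = 1000  # target immune cells spiked into a whole-blood sample (assumed)
    for surface, captured in [("untreated graphene oxide", 540),
                              ("annealed graphene oxide", 920)]:
        print(f"{surface:28s}: {capture_efficiency(captured, spiked):.0f}% captured")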


News Article | April 13, 2017
Site: www.rdmag.com

A new 3D printing method has enabled researchers to create objects that can permanently transform into a range of different shapes in response to heat. A research team with members from the Georgia Institute of Technology, the Singapore University of Technology and Design, and Xi’an Jiaotong University in China created the objects by printing layers of shape memory polymers, with each layer designed to respond differently when exposed to heat. “This new approach significantly simplifies and increases the potential of 4D printing by incorporating the mechanical programming post-processing step directly into the 3D printing process,” Jerry Qi, a professor in the George W. Woodruff School of Mechanical Engineering at Georgia Tech, said in a statement. “This allows high-resolution 3D printed components to be designed by computer simulation, 3D printed, and then directly and rapidly transformed into new permanent configurations by simply heating,” he added. The new development builds on previous work the team had done using smart shape memory polymers, which have the ability to remember one shape and change to another programmed shape when uniform heat is applied, to make objects that could fold themselves along hinges. “The approach can achieve printing time and material savings up to 90 percent, while completely eliminating time-consuming mechanical programming from the design and manufacturing workflow,” Qi said. The researchers fabricated several objects that could bend or expand quickly when immersed in hot water—including a model of a flower whose petals bend like a real daisy responding to sunlight and a lattice-shaped object that could expand by nearly eight times its original size. “Our composite materials at room temperature have one material that is soft but can be programmed to contain internal stress, while the other material is stiff,” Zhen Ding, a postdoc researcher at Singapore University of Technology and Design, said in a statement. “We use computational simulations to design composite components where the stiff material has a shape and size that prevents the release of the programmed internal stress from the soft material after 3D printing. Upon heating, the stiff material softens and allows the soft material to release its stress, and this results in a change – often dramatic – in the product shape.” The newly created 4D objects could enable a range of new product features, including products that could be stacked flat or rolled for shipping and then expanded once in use. The technology could also enable components that respond to stimuli such as temperature, moisture or light in a way that is precisely timed to create space structures, deployable medical devices, robots, toys and a range of other structures. “The key advance of this work is a 4D printing method that is dramatically simplified and allows the creation of high-resolution complex 3D reprogrammable products,” Martin Dunn, a professor at Singapore University of Technology and Design who is also the director of the SUTD Digital Manufacturing and Design Centre, said in a statement. “It promises to enable myriad applications across biomedical devices, 3D electronics, and consumer products. It even opens the door to a new paradigm in product design, where components are designed from the onset to inhabit multiple configurations during service.”
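The article does not give the underlying mechanics, but the basic effect it describes, a pre-stressed soft layer bonded to a stiff layer that bends once the stress is released, can be roughly estimated with the classical Timoshenko bilayer formula. The sketch below is a back-of-envelope illustration with invented material parameters; it is not the authors' computational-simulation approach.

    # Back-of-envelope bilayer bending estimate using Timoshenko's classic formula.
    # This is NOT the simulation method from the paper; it only illustrates how a
    # released mismatch strain between a stiff and a soft layer produces curvature.
    # All material parameters below are invented for illustration.

    def bilayer_curvature(mismatch_strain, t_stiff, t_soft, E_stiff, E_soft):
        """Return curvature (1/m) of a two-layer strip from Timoshenko's 1925 result."""
        m = t_stiff / t_soft          # thickness ratio
        n = E_stiff / E_soft          # stiffness (modulus) ratio
        h = t_stiff + t_soft          # total thickness
        return (6.0 * mismatch_strain * (1.0 + m) ** 2 /
                (h * (3.0 * (1.0 + m) ** 2 + (1.0 + m * n) * (m ** 2 + 1.0 / (m * n)))))

    # Assumed values: two 0.2 mm layers, 2% released strain, 100:1 modulus contrast.
    kappa = bilayer_curvature(mismatch_strain=0.02,
                              t_stiff=0.2e-3, t_soft=0.2e-3,
                              E_stiff=1.0e9, E_soft=1.0e7)
    print(f"curvature ~ {kappa:.1f} 1/m  (radius ~ {1.0 / kappa * 1000:.1f} mm)")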


News Article | May 7, 2017
Site: cleantechnica.com

The amount of dissolved oxygen in the water of the world’s oceans — an important marker of overall ocean health and habitability — has been declining at a notable rate for more than 2 decades now, according to a new analysis from the Georgia Institute of Technology. (Image: map of the linear trend of dissolved oxygen at the depth of 100 meters. Credit: Georgia Tech.) The new analysis, which looked at historic data going back more than 5 decades, found that oceanic oxygen levels started dropping notably in the 1980s, just as ocean temperatures began to increase at a relatively fast rate. “The oxygen in oceans has dynamic properties, and its concentration can change with natural climate variability,” commented Taka Ito, an associate professor in Georgia Tech’s School of Earth and Atmospheric Sciences who led the research. “The important aspect of our result is that the rate of global oxygen loss appears to be exceeding the level of nature’s random variability.” As oxygen levels in ocean water fall, the water becomes less habitable for larger forms of marine life, and large-scale hypoxic events (dead zones) become more likely. While it’s long been known that rising ocean temperatures would result in less oxygen being present in the waters, oxygen levels have been falling much more rapidly than was expected. “The trend of oxygen falling is about 2 to 3 times faster than what we predicted from the decrease of solubility associated with the ocean warming,” Ito commented. “This is most likely due to the changes in ocean circulation and mixing associated with the heating of the near-surface waters and melting of polar ice.” The press release provides more: “The majority of the oxygen in the ocean is absorbed from the atmosphere at the surface or created by photosynthesizing phytoplankton. Ocean currents then mix that more highly oxygenated water with subsurface water. But rising ocean water temperatures near the surface have made it more buoyant and harder for the warmer surface waters to mix downward with the cooler subsurface waters. Melting polar ice has added more freshwater to the ocean surface — another factor that hampers the natural mixing and leads to increased ocean stratification.” “After the mid-2000s, this trend became apparent, consistent, and statistically significant — beyond the envelope of year-to-year fluctuations,” Ito continued. “The trends are particularly strong in the tropics, eastern margins of each basin and the subpolar North Pacific.” Interestingly, earlier research from Ito and his team found that air pollution originating in East Asia was responsible for a decrease in oceanic oxygen levels thousands of miles away in the tropical Pacific Ocean. The cause: iron and nitrogen pollution fueling plankton blooms that deplete oxygen. It’s worth bearing that reality in mind when discussing proposed geoengineering plans involving the “seeding” of the oceans with massive amounts of iron and/or other nutrients. The new findings are detailed in a paper published in the journal Geophysical Research Letters.
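The study's central statistical point, that the decline has emerged from the "envelope of year-to-year fluctuations", can be sketched by fitting a linear trend to an oxygen time series and comparing the fitted decline against the spread of the detrended interannual variability. The series below is synthetic and purely illustrative; it is not the study's data, and the paper's actual analysis is more sophisticated.

    # Sketch: compare a fitted linear O2 trend against detrended year-to-year variability.
    # Synthetic anomalies stand in for the historical dataset; this is illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)
    years = np.arange(1960, 2011)
    # Invented dissolved-oxygen anomalies: flat noise early on, a decline after ~1985.
    true_trend = np.where(years < 1985, 0.0, -0.5 * (years - 1985))   # arbitrary units
    o2_anomaly = true_trend + rng.normal(0.0, 3.0, size=years.size)

    slope, intercept = np.polyfit(years, o2_anomaly, 1)
    residual_std = np.std(o2_anomaly - (slope * years + intercept))

    decline_per_decade = 10.0 * slope
    print(f"fitted trend: {decline_per_decade:.1f} units per decade")
    print(f"interannual spread (detrended std): {residual_std:.1f} units")
    print("trend exceeds noise:", abs(decline_per_decade) > residual_std)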


News Article | May 4, 2017
Site: www.eurekalert.org

Most tumors contain regions of low oxygen concentration where cancer therapies based on the action of reactive oxygen species are ineffective. Now, American scientists have developed a hybrid nanomaterial that releases a free-radical-generating prodrug inside tumor cells upon thermal activation. As they report in the journal Angewandte Chemie, the free radicals destroy the cell components even in oxygen-depleted conditions, causing apoptosis. Delivery, release, and action of the hybrid material can be precisely controlled. Many well-established cancer treatment schemes are based on the generation of reactive oxygen species (ROS), which induce apoptosis in the tumor cells. However, this mechanism only works in the presence of oxygen, and hypoxic (oxygen-depleted) regions in the tumor tissue often survive ROS-based treatment. Therefore, Younan Xia at the Georgia Institute of Technology and Emory University, Atlanta, USA, and his team have developed a strategy to deliver and release a radical-generating prodrug that, upon activation, damages cells by a ROS-type radical mechanism, but without the need for oxygen. The authors explained that they had to turn to the field of polymerization chemistry to find a compound that produces enough radicals. There, the azo compound AIPH is a well-known polymerization initiator. In medicinal applications, it generates free alkyl radicals that cause DNA damage and lipid and protein peroxidation in cells even under hypoxic conditions. However, the AIPH must be safely delivered to the cells in the tissue. Thus, the scientists used nanocages, the cavities of which were filled with lauric acid, a so-called phase-change material (PCM) that can serve as a carrier for AIPH. Once inside the target tissue, irradiation by a near-infrared laser heats up the nanocages, causing the PCM to melt and triggering the release and decomposition of AIPH. This concept worked well, as the team has shown with a variety of experiments on different cell types and components. Red blood cells underwent pronounced hemolysis. Lung cancer cells incorporated the nanoparticles and were severely damaged by the triggered release of the radical starter. Actin filaments retracted and condensed following the treatment. And the lung cancer cells showed significant inhibition of their growth rate, independently of the oxygen concentration. Although the authors admit that "the efficacy still needs to be improved by optimizing the components and conditions involved," they have demonstrated the effectiveness of their hybrid system in killing cells, even in places where the oxygen level is low. This strategy might be highly relevant in nanomedicine, cancer theranostics, and in all applications where targeted delivery and controlled release with superb spatial/temporal resolution are desired. Dr. Xia is the Brock Family Chair and Georgia Research Alliance (GRA) Eminent Scholar in Nanomedicine in The Wallace H. Coulter Department of Biomedical Engineering at Georgia Institute of Technology. The Xia group's research activities center on the design and synthesis of novel nanomaterials for a broad range of applications, including nanomedicine, regenerative medicine, cancer theranostics, tissue engineering, controlled release, catalysis, and fuel cell technology. Dr. Xia has received many prestigious awards.


News Article | April 17, 2017
Site: www.eurekalert.org

WASHINGTON - All stakeholders in the scientific research enterprise -- researchers, institutions, publishers, funders, scientific societies, and federal agencies -- should improve their practices and policies to respond to threats to the integrity of research, says a new report from the National Academies of Sciences, Engineering, and Medicine. Actions are needed to ensure the availability of data necessary for reproducing research, clarify authorship standards, protect whistleblowers, and make sure that negative as well as positive research findings are reported, among other steps. The report stresses the important role played by institutions and environments -- not only individual researchers -- in supporting scientific integrity. And it recommends the establishment of an independent, nonprofit Research Integrity Advisory Board to support ongoing efforts to strengthen research integrity. The board should work with all stakeholders in the research enterprise to share expertise and approaches for minimizing and addressing research misconduct and detrimental practices. "The research enterprise is not broken, but it faces significant challenges in creating the conditions needed to foster and sustain the highest standards of integrity," said Robert Nerem, chair of the committee that wrote the report, and Institute Professor and Parker H. Petit Professor Emeritus, Institute for Bioengineering and Bioscience, Georgia Institute of Technology. "To meet these challenges, all parties in the research enterprise need to take deliberate steps to strengthen the self-correcting mechanisms that are part of research and to better align the realities of research with its values and ideals." A growing body of evidence indicates that substantial percentages of published results in some fields are not reproducible, the report says, noting that this is a complex phenomenon and much remains to be learned. While a certain level of irreproducibility due to unknown variables or errors is a normal part of research, data falsification and detrimental research practices -- such as inappropriate use of statistics or after-the-fact fitting of hypotheses to previously collected data -- apparently also play a role. In addition, new forms of detrimental research practices are appearing, such as predatory journals that do little or no editorial review or quality control of papers while charging authors substantial fees. And the number of retractions of journal articles has increased, with a significant percentage of those retractions due to research misconduct. The report cautions, however, that this increase does not necessarily indicate that the incidence of misconduct is increasing, as more-vigilant scrutiny by the community may be a contributing factor. The report endorses the definition of scientific misconduct proposed in the 1992 Academies report Responsible Science: "fabrication, falsification, or plagiarism in proposing, performing, or reporting research." However, many practices that have until now been categorized as "questionable" research practices -- for example, misleading use of statistics that falls short of falsification, and failure to retain research data -- should be recognized as "detrimental" research practices, the new report says.
Detrimental research practices should be understood to include not only actions of individual researchers but also irresponsible or abusive actions by research institutions and journals. "The research process goes beyond the actions of individual researchers," said Nerem. "Research institutions, journals, scientific societies, and other parts of the research enterprise all can act in ways that either support or undermine integrity in research." Because research institutions play a central role in fostering research integrity, they should maintain the highest standards for research conduct, going beyond simple compliance with federal regulations and applying these standards to all research independent of the source of funding. Institutions' key responsibilities include creating and sustaining a research culture that fosters integrity and encourages adherence to best practices, as well as monitoring the integrity of their research environments. Senior leaders at each institution -- the president, other senior executives, and faculty leaders -- should guide and be actively engaged in these tasks. Furthermore, they must have the capacity to effectively investigate and address allegations of research misconduct and to address the conflict of interest that institutions may have in conducting these investigations -- for example, by incorporating external perspectives. In addition, research institutions and federal agencies should ensure that good faith whistleblowers - those who raise concerns about the integrity of research - are protected and their concerns addressed in a fair, thorough, and timely manner. Inadequate responses to such concerns have been a critical point of failure in many cases of misconduct where investigations were delayed or sidetracked. Currently, standards for transparency in many fields and disciplines do not adequately support reproducibility and the ability to build on previous work, the report says. Research sponsors and publishers should ensure that the information needed for a person knowledgeable about the field and its techniques to reproduce the reported results is made available at the time of publication or as soon as possible after that. Federal funding agencies and other research sponsors should also allocate sufficient funds to enable the long-term storage, archiving, and access of datasets and code necessary to replicate published findings. Researchers should routinely disclose all statistical tests carried out, including negative findings, the report says. Available evidence indicates that scientific publications are biased against presenting negative results and that the publication of negative results is on the decline. But routine reporting of negative findings will help avoid unproductive duplication of research and make research spending more productive. Dissemination of negative results also has prompted a questioning of established paradigms, leading ultimately to groundbreaking new discoveries. Research sponsors, research institutions, and journals should support and encourage this level of transparency. Scientific societies and journals should develop clear disciplinary authorship standards based on the principle that those who have made a significant intellectual contribution are authors. Those who engage in these activities should be designated as authors, and all authors should approve the final manuscript. 
Universal condemnation by all disciplines of gift or honorary authorship, coercive authorship, and ghost authorship would also contribute to changing the culture of research environments where these practices are still accepted. To bring a unified focus to addressing challenges in fostering research integrity across all disciplines and sectors, the report urges the establishment of a nonprofit, independent Research Integrity Advisory Board. The RIAB could facilitate the exchange of information on approaches to assessing and creating environments of the highest integrity and to handling allegations of misconduct and investigations. It could provide advice, support, encouragement, and where helpful advocacy on what needs to be done by research institutions, journal and book publishers, and other stakeholders in the research enterprise. The RIAB would have no direct role in investigations, regulation, or accreditation; instead it would serve as a neutral resource that helps the research enterprise respond to challenges. In addition, the report recommends that government agencies and private foundations fund research to quantify conditions in the research environment that may be linked to research misconduct and detrimental research practices, and to develop responses to these conditions. The study was sponsored by the U.S. Geological Survey of the U.S. Department of the Interior, the Office of Research Integrity of the U.S. Department of Health and Human Services, the Office of the Inspector General of the National Science Foundation, the Office of Science of the U.S. Department of Energy, the U.S. Department of Veterans Affairs, the U.S. Environmental Protection Agency, the Burroughs Wellcome Fund, the Society for Neuroscience, and the National Academies of Sciences, Engineering, and Medicine. The National Academies of Sciences, Engineering, and Medicine are private, nonprofit institutions that provide independent, objective analysis and advice to the nation to solve complex problems and inform public policy decisions related to science, technology, and medicine. They operate under an 1863 congressional charter to the National Academy of Sciences, signed by President Lincoln. For more information, visit http://national-academies. . A roster follows. Copies of Fostering Integrity in Research are available from the National Academies Press on the Internet at http://www. or by calling 202-334-3313 or 1-800-624-6242.
Robert M. Nerem (chair), Institute Professor and Parker H. Petit Professor Emeritus, Institute for Bioengineering and Bioscience, Georgia Institute of Technology, Atlanta (1, 2)
Ann M. Arvin, Lucile Packard Professor of Pediatrics, Vice Provost and Dean of Research, and Professor of Microbiology and Immunology, Stanford University, Stanford, Calif. (2)
C.K. (Tina) Gunsalus, Director, National Center for Professional and Research Ethics, University of Illinois, Urbana-Champaign
Deborah G. Johnson, Anne Shirley Carter Olsson Professor Emeritus of Applied Ethics, Department of Science, Technology, and Society, School of Engineering and Applied Science, University of Virginia, Charlottesville
Michael A. Keller, Ida M. Green University Librarian and Director of Academic Information Resources, University Libraries and Academic Information Resources, Stanford University, Stanford, Calif.
W. Carl Lineberger, E.U. Condon Distinguished Professor of Chemistry and Fellow, JILA, University of Colorado, Boulder (3)
Victoria Stodden, Associate Professor of Statistics, Institute for Data Sciences and Engineering, University of Illinois, Urbana-Champaign
Sara E. Wilson, Associate Professor of Mechanical Engineering and Academic Director, Bioengineering Graduate Program, University of Kansas, Lawrence
Paul R. Wolpe, Asa Griggs Candler Professor of Bioethics and Director, Center for Ethics, Emory University, Atlanta
(1) Member, National Academy of Engineering; (2) Member, National Academy of Medicine; (3) Member, National Academy of Sciences


News Article | May 5, 2017
Site: www.cemag.us

An international team of scientists has developed a new way to produce single-layer graphene from a simple precursor: ethene — also known as ethylene — the smallest alkene molecule, which contains just two atoms of carbon. By heating the ethene in stages to a temperature of slightly more than 700 degrees Celsius — hotter than had been attempted before – the researchers produced pure layers of graphene on a rhodium catalyst substrate. The stepwise heating and higher temperature overcame challenges seen in earlier efforts to produce graphene directly from hydrocarbon precursors. Because of its lower cost and simplicity, the technique could open new potential applications for graphene, which has attractive physical and electronic properties. The work also provides a novel mechanism for the self-evolution of carbon cluster precursors whose diffusional coalescence results in the formation of the graphene layers. The research, reported as the cover article in the May 4 issue of the Journal of Physical Chemistry C, was conducted by scientists at the Georgia Institute of Technology, Technische Universität München in Germany, and the University of St. Andrews in Scotland. In the United States, the research was supported by the U.S. Air Force Office of Scientific Research and the U.S. Department of Energy’s Office of Basic Energy Sciences. “Since graphene is made from carbon, we decided to start with the simplest type of carbon molecules and see if we could assemble them into graphene,” explains Uzi Landman, a Regents’ Professor and F.E. Callaway endowed chair in the Georgia Tech School of Physics who headed the theoretical component of the research. “From small molecules containing carbon, you end up with macroscopic pieces of graphene.” Graphene is now produced using a variety of methods including chemical vapor deposition, evaporation of silicon from silicon carbide — and simple exfoliation of graphene sheets from graphite. A number of earlier efforts to produce graphene from simple hydrocarbon precursors had proven largely unsuccessful, creating disordered soot rather than structured graphene. Guided by a theoretical approach, the researchers reasoned that the path from ethene to graphene would involve formation of a series of structures as hydrogen atoms leave the ethene molecules and carbon atoms self-assemble into the honeycomb pattern that characterizes graphene. To explore the nature of the thermally-induced rhodium surface-catalyzed transformations from ethene to graphene, experimental groups in Germany and Scotland raised the temperature of the material in steps under ultra-high vacuum. They used scanning-tunneling microscopy (STM), thermal programed desorption (TPD) and high-resolution electron energy loss (vibrational) spectroscopy (HREELS) to observe and characterize the structures that form at each step of the process. Upon heating, ethene adsorbed onto the rhodium catalyst evolves via coupling reactions to form segmented one-dimensional polyaromatic hydrocarbons (1D-PAH). Further heating leads to dimensionality crossover — one dimensional to two dimensional structures — and dynamical restructuring processes at the PAH chain ends with a subsequent activated detachment of size-selective carbon clusters, following a mechanism revealed through first-principles quantum mechanical simulations. Finally, rate-limiting diffusional coalescence of these dynamically self-evolved cluster-precursors leads to condensation into graphene with high purity. 
At the final stage before the formation of graphene, the researchers observed nearly round disk-like clusters containing 24 carbon atoms, which spread out to form the graphene lattice. “The temperature must be raised within windows of temperature ranges to allow the requisite structures to form before the next stage of heating,” Landman explains. “If you stop at certain temperatures, you are likely to end up with coking.” An important component is the dehydrogenation process, which frees the carbon atoms to form intermediate shapes; some of the hydrogen resides temporarily on, or near, the metal catalyst surface, where it assists in the subsequent bond-breaking processes that lead to detachment of the 24-carbon cluster precursors. “All along the way, there is a loss of hydrogen from the clusters,” says Landman. “Bringing up the temperature essentially ‘boils’ the hydrogen out of the evolving metal-supported carbon structure, culminating in graphene.” The resulting graphene structure is adsorbed onto the catalyst. It may be useful attached to the metal, but for other applications, a way to remove it will have to be developed. Adds Landman: “This is a new route to graphene, and the possible technological application is yet to be explored.” Beyond the theoretical research, carried out by Bokwon Yoon and Landman at the Georgia Tech Center for Computational Materials Science, the experimental work was done in the laboratory of Professor Renald Schaub at the University of St. Andrews and in the laboratory of Professor Ueli Heiz and Friedrich Esch at the Technische Universität München. Other co-authors included Bo Wang, Michael König, Catherine J. Bromley, Michael-John Treanor, José A. Garrido Torres, Marco Caffio, Federico Grillo, Herbert Frücht, and Neville V. Richardson. The work at the Georgia Institute of Technology was supported by the Air Force Office of Scientific Research through Grant FA9550-14-1-0005 and by the Office of Basic Energy Sciences of the U.S. Department of Energy through Grant FG05-86ER45234. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the sponsoring organizations.


News Article | May 4, 2017
Site: www.theengineer.co.uk

World population growth may have halved since its peak in the 1960s, but the number of people in the world is still increasing and they all need feeding. Increasing automation in agriculture is helping to meet this challenge, and the Institute for Robotics and Intelligent Machines (IRIM) at the Georgia Institute of Technology is developing robots that its researchers hope will be able to save farmers from laborious fieldwork in monitoring the condition of their crops. The latest addition to their armoury was modelled after a sloth, but moves more like a gibbon. The two-armed robot is designed to hang from a cable strung along the length of a row of crops and to move by brachiating – swinging from hand to hand – along the cable. Its sensor platform, housed at the junction of the arms, can then take photos and collect other data continuously. The robot can also switch from one parallel cable to another, allowing it to cover an entire suitably rigged field. Mechanical engineer Jonathan Rogers explains that the design was intended to help the robot operate in the field for long periods of time and move around without human help while consuming a very small amount of energy. “This is the best way to keep something out of the way and off the ground without having to have something in the air all the time,” he said. Not only could the robot send data back for analysis so that the crops’ nutrient needs could be assessed, he said, it could even conceivably deliver those nutrients, completely automating crop maintenance. The sloth was chosen as the model for the robot, which has been named Tarzan, because it is a very energy-efficient animal, and Rogers said that the robot should be equally efficient. The goal is for the robot to be completely solar powered, he said. “It could literally live outside for months at a time.” The robot is being tested in a four-acre soybean field outside Athens, Georgia, that is normally used by plant geneticists.


News Article | May 4, 2017
Site: www.chromatographytechniques.com

A new analysis of decades of data on oceans across the globe has revealed that the amount of dissolved oxygen contained in the water - an important measure of ocean health - has been declining for more than 20 years. Researchers at Georgia Institute of Technology looked at a historic dataset of ocean information stretching back more than 50 years and searched for long-term trends and patterns. They found that oxygen levels started dropping in the 1980s as ocean temperatures began to climb. "The oxygen in oceans has dynamic properties, and its concentration can change with natural climate variability," said Taka Ito, an associate professor in Georgia Tech's School of Earth and Atmospheric Sciences who led the research. "The important aspect of our result is that the rate of global oxygen loss appears to be exceeding the level of nature's random variability." The study, which was published in April in Geophysical Research Letters, was sponsored by the National Science Foundation and the National Oceanic and Atmospheric Administration. The team included researchers from the National Center for Atmospheric Research, the University of Washington-Seattle, and Hokkaido University in Japan. Falling oxygen levels in water have the potential to impact the habitat of marine organisms worldwide and in recent years have led to more frequent "hypoxic events" that killed or displaced populations of fish, crabs and many other organisms. Researchers have for years anticipated that rising water temperatures would affect the amount of oxygen in the oceans, since warmer water is capable of holding less dissolved gas than colder water. But the data showed that ocean oxygen was falling more rapidly than the corresponding rise in water temperature would predict. "The trend of oxygen falling is about two to three times faster than what we predicted from the decrease of solubility associated with the ocean warming," Ito said. "This is most likely due to the changes in ocean circulation and mixing associated with the heating of the near-surface waters and melting of polar ice." The majority of the oxygen in the ocean is absorbed from the atmosphere at the surface or created by photosynthesizing phytoplankton. Ocean currents then mix that more highly oxygenated water with subsurface water. But rising ocean water temperatures near the surface have made it more buoyant and harder for the warmer surface waters to mix downward with the cooler subsurface waters. Melting polar ice has added more freshwater to the ocean surface - another factor that hampers the natural mixing and leads to increased ocean stratification. "After the mid-2000s, this trend became apparent, consistent and statistically significant—beyond the envelope of year-to-year fluctuations," Ito said. "The trends are particularly strong in the tropics, eastern margins of each basin and the subpolar North Pacific." In an earlier study, Ito and other researchers explored why oxygen depletion was more pronounced in tropical waters in the Pacific Ocean. They found that air pollution drifting from East Asia out over the world's largest ocean contributed to oxygen levels falling in tropical waters thousands of miles away. Once ocean currents carried the iron and nitrogen pollution to the tropics, photosynthesizing phytoplankton went into overdrive consuming the excess nutrients. But rather than increasing oxygen, the net result of the chain reaction was the depletion of oxygen in subsurface water. That, too, is likely a contributing factor in waters across the globe, Ito said.


News Article | May 4, 2017
Site: www.eurekalert.org

[Image: Global map of the linear trend of dissolved oxygen at the depth of 100 meters.] A new analysis of decades of data on oceans across the globe has revealed that the amount of dissolved oxygen contained in the water - an important measure of ocean health - has been declining for more than 20 years. Researchers at Georgia Institute of Technology looked at a historic dataset of ocean information stretching back more than 50 years and searched for long-term trends and patterns. They found that oxygen levels started dropping in the 1980s as ocean temperatures began to climb. "The oxygen in oceans has dynamic properties, and its concentration can change with natural climate variability," said Taka Ito, an associate professor in Georgia Tech's School of Earth and Atmospheric Sciences who led the research. "The important aspect of our result is that the rate of global oxygen loss appears to be exceeding the level of nature's random variability." The study, which was published in April in Geophysical Research Letters, was sponsored by the National Science Foundation and the National Oceanic and Atmospheric Administration. The team included researchers from the National Center for Atmospheric Research, the University of Washington-Seattle, and Hokkaido University in Japan. Falling oxygen levels in water have the potential to impact the habitat of marine organisms worldwide and in recent years have led to more frequent "hypoxic events" that killed or displaced populations of fish, crabs and many other organisms. Researchers have for years anticipated that rising water temperatures would affect the amount of oxygen in the oceans, since warmer water is capable of holding less dissolved gas than colder water. But the data showed that ocean oxygen was falling more rapidly than the corresponding rise in water temperature would predict. "The trend of oxygen falling is about two to three times faster than what we predicted from the decrease of solubility associated with the ocean warming," Ito said. "This is most likely due to the changes in ocean circulation and mixing associated with the heating of the near-surface waters and melting of polar ice." The majority of the oxygen in the ocean is absorbed from the atmosphere at the surface or created by photosynthesizing phytoplankton. Ocean currents then mix that more highly oxygenated water with subsurface water. But rising ocean water temperatures near the surface have made it more buoyant and harder for the warmer surface waters to mix downward with the cooler subsurface waters. Melting polar ice has added more freshwater to the ocean surface - another factor that hampers the natural mixing and leads to increased ocean stratification. "After the mid-2000s, this trend became apparent, consistent and statistically significant -- beyond the envelope of year-to-year fluctuations," Ito said. "The trends are particularly strong in the tropics, eastern margins of each basin and the subpolar North Pacific." In an earlier study, Ito and other researchers explored why oxygen depletion was more pronounced in tropical waters in the Pacific Ocean. They found that air pollution drifting from East Asia out over the world's largest ocean contributed to oxygen levels falling in tropical waters thousands of miles away. Once ocean currents carried the iron and nitrogen pollution to the tropics, photosynthesizing phytoplankton went into overdrive consuming the excess nutrients.
But rather than increasing oxygen, the net result of the chain reaction was the depletion of oxygen in subsurface water. That, too, is likely a contributing factor in waters across the globe, Ito said. This material is based upon work supported by the National Science Foundation under Grant No. OCE-1357373 and the National Oceanic and Atmospheric Administration under Grant No. NA16OAR4310173. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation or the National Oceanic and Atmospheric Administration.


News Article | May 4, 2017
Site: www.enr.com

Batson-Cook Co., Atlanta, announced that David Barksdale, chief operating officer, is retiring after 40 years with the general contracting and construction management firm, effective immediately. Kevin Appleton, formerly senior vice president and manager of health care, succeeds Barksdale as COO. Also, Denny Godwin, a health care project executive, will become vice president of health care Atlanta. Barksdale will serve as executive adviser to the company during the transition. He began his career as a field engineer and worked in various management positions before being appointed executive vice president and COO in 2014. Appleton, who transitioned to health care project management in 2010, became senior vice president and general manager of health care Atlanta in 2014. Karl Dauber has been named national practice lead for hydrology, hydraulics and drainage engineering for the water and environment business at WSP | Parsons Brinckerhoff. Formerly water area manager, he has more than 31 years of related experience. In Atlanta, WSP | Parsons Brinckerhoff named Tesa Gonzales a senior supervising engineer, working in the fleet and facilities group. She has more than 35 years of experience in the development, delivery and control of large infrastructure and railcar capital improvement and maintenance projects. Also in Atlanta, the firm named Michael Connor a senior vice president. In this position, Connor will serve as managing director of the Atlanta buildings practice. Leslie Gartner, senior vice president and the previous managing director of the Atlanta office, is transitioning into a new role developing the firm’s national science and technology practice. CALYX Engineers and Consultants of Cary, N.C., named planning program manager Liz Kovasckitz to the role of principal. With CALYX for more than 13 years, she has 26 years of environmental, community and transportation planning experience. In CALYX’s Roswell, Ga., office, Victoria Guobaitis has joined the firm as project manager with the traffic engineering team. Guobaitis previously worked for Parsons in Washington, D.C., where she served as traffic engineer. She earned a master’s degree in civil engineering from the Georgia Institute of Technology. Fort Lauderdale, Fla.-based construction management firm Moss & Associates promoted Jason Clark to vice president/project executive. Clark, who has more than 15 years of construction management experience, joined Moss in 2005 as an assistant project manager. Prior to that, Clark worked with Bovis Lend Lease in New York City. Sherri Jent has joined Cardno’s Tampa office as business development manager for infrastructure services. Jent will support business development efforts for projects located throughout the Americas, focusing on survey and mapping, subsurface utility excavation, utility coordination, transportation and structures. She has more than 18 years of professional engineering industry experience. Michael Sabodish, business unit manager, has joined NOVA’s Raleigh office. He brings 16 years of experience in geotechnical engineering, construction materials testing, special inspections and environmental consulting projects.


News Article | April 20, 2017
Site: www.PR.com

After winning the Technology Association of Georgia’s Fintech Innovation Award, Trust Stamp will now compete at both the 36|86 Technology Conference in Tennessee and on a national level in the prestigious Money 20/20 Startup Challenge in Las Vegas. Atlanta, GA, April 20, 2017 --( PR.com )-- Trust Stamp, an Atlanta Technology Village based FinTech startup, will join a cadre of the world’s top fintech startups at Money 20/20’s Startup Challenge. Trust Stamp CEO Andrew Gowasack said, “The Money 20/20 Startup Challenge represents an unparalleled opportunity and challenge to stand out on one of the most prestigious stages in the Fintech ecosystem. We are proud of our recent achievements; however, this opportunity is a reminder to never get comfortable!” Founded in January 2016, Trust Stamp has created a new paradigm for online identity & trust using proprietary AI-powered facial biometrics with proof of life to establish user identity. In 2016 Trust Stamp completed three incubator programs and secured two major launch partnerships and over $1.5m in seed capital.
Trust Stamp also received the Publicis 90 Gold Award at Viva Technology in Paris, was named one of the top 30 blockchain startups in the world, and launched its European subsidiary at TechCrunch Disrupt in London. In February 2017, Trust Stamp was awarded the Technology Association of Georgia’s FinTech Innovation Award with a $50,000 grand prize. The award also included membership in the renowned ATDC Technology Business Incubator at the Georgia Institute of Technology. Trust Stamp technical co-founder Gareth Genner said, “ATDC has already accelerated our growth by presenting us with business opportunities and access to technological insights. Georgia Tech is one of the foremost thought leaders in our space, and we believe that their technical expertise will continue to keep us years ahead of both cyber criminals and competitors.” Before appearing on the national stage in October for Money 20/20, the team will present at the 36|86 Technology Conference in Nashville, TN, which showcases promising growth-stage startups innovating in traditional industries and positioning the region to lead the country’s next wave of entrepreneurship. Trust Stamp provides Identity & Trust as a Service. Trust Stamp’s core functionality is the use of facial biometrics with proof-of-liveness to create a unique digital identity that can function as a biometric authentication tool. Trust Stamp can also attach other data to the digital identity, which can be stored as an immutable hash on an enterprise server and/or in a blockchain, and can facilitate biometrically based data signing and encryption. Trust Stamp is a graduate of the Charlotte-based QC FinTech Incubator program, the National Association of Realtors REach Accelerator and the Saint Louis-based Six Thirty Cyber Accelerator.
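The article does not describe how Trust Stamp implements the hash-and-anchor step, but the general pattern it alludes to, reducing an identity record to an immutable digest that can be stored on an enterprise server or a blockchain and verified later, can be sketched with standard cryptographic primitives. The Python below is a hypothetical illustration only; the record fields, the salt handling and the HMAC key are assumptions, not details of Trust Stamp's system.

# Hypothetical sketch of the hash-and-anchor pattern described above.
# Nothing here reflects Trust Stamp's proprietary implementation.
import hashlib
import hmac
import json
import os

def digest_identity(record: dict, salt: bytes) -> str:
    """Serialize the record deterministically and hash it with a salt."""
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(salt + payload).hexdigest()

def sign_digest(digest: str, key: bytes) -> str:
    """Attach an HMAC so a verifier can check who anchored the digest."""
    return hmac.new(key, digest.encode("utf-8"), hashlib.sha256).hexdigest()

if __name__ == "__main__":
    record = {"template_id": "face-template-001",  # placeholder for a biometric template reference
              "issued": "2017-04-20",
              "liveness_check": True}
    salt = os.urandom(16)
    key = os.urandom(32)  # in practice this would be a managed signing key
    digest = digest_identity(record, salt)
    print("anchored digest:", digest)
    print("issuer signature:", sign_digest(digest, key))
    # Only the digest (and signature) needs to be written to the server or
    # blockchain; the underlying record never has to be published.

The point of the pattern is that the stored digest commits to the record without revealing it, and any later change to the record produces a different digest.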


News Article | April 26, 2017
Site: www.eurekalert.org

Being first in a new ecosystem provides major advantages for pioneering species, but the benefits may depend on just how competitive later-arriving species are. That is among the conclusions in a new study testing the importance of "first arrival" in controlling adaptive radiation of species, a hypothesis famously proposed for "Darwin's Finches," birds from the Galapagos Islands that were first brought to scientific attention by Darwin. Researchers at the Georgia Institute of Technology tested the importance of first arrival with bacterial species competing in a test tube. Using a bacterium that grows on plant leaves, they confirmed the importance of first arrival for promoting species diversification, and extended that hypothesis with some important caveats. "We wanted to understand the role of species colonization history in regulating the interaction between the rapidly-evolving bacterium Pseudomonas fluorescens SBW-25 and competing species and how that affected P. fluorescens adaptive radiation in the ecosystem," said Jiaqi Tan, a research scientist in Georgia Tech's School of Biological Sciences. "The general pattern we find is that the earlier arrival of P. fluorescens allowed it to diversify to a greater extent. If the competing and diversifying species are very similar ecologically, we find a stronger effect of species colonization history on adaptive radiation." The research is scheduled to be reported April 26th in the journal Evolution and was supported by the National Science Foundation. The study is believed to be the first rigorous experimental test of the role colonization history plays in adaptive radiation. Evolutionary biologist David Lack studied a group of closely-related bird species known as Darwin's Finches, and popularized them in a book first published in 1947. Among his hypotheses was that the birds were successful in their adaptive radiation -- the evolutionary diversification of morphological, physiological and behavioral traits -- because they were early colonizers of the islands. The finches filled the available ecological niches, taking advantage of the resources in ways that limited the ability of later-arriving birds to similarly establish themselves and diversify, he suggested. "The bird species that arrived after the finches could only use the resources that the finches weren't using," Tan explained. "The other birds could not diversify because there weren't many resources left for them." Tan and other researchers in the laboratory of Georgia Tech Professor Lin Jiang tested that hypothesis using P. fluorescens, which rapidly evolves into two general phenotypes differentiated by the ecological niches they adopt in static test tube microcosms. Within the two major phenotypes, known as "fuzzy spreaders" and "wrinkly spreaders," there are additional minor variations. The researchers allowed the bacterium to colonize newly-established microcosms and diversify before introducing competing bacterial species. The six competitors, which varied in their niche and competitive fitness compared to P. fluorescens, were introduced individually and allowed to grow through multiple generations. Their success and level of diversification were measured by placing microcosm samples onto agar plates and counting the number of colonies from each species and sub-species. The study also included the reverse of the earlier colonization history, allowing the competitor bacteria to establish themselves in microcosms before introducing the P. fluorescens.
The competitors included a broad range of organisms common in the environment, some of them retrieved from a lake near the Georgia Tech campus. The experiment allowed the scientists to extend the hypothesis that Lack advanced 70 years ago. "If the diversifying species and the competing species are very similar, you can have a strong priority effect in which the first-arriving species can strongly impact the ability of the later species to diversify," said Jiang, a professor in Georgia Tech's School of Biological Sciences. "If the species are different enough, then the priority effect is weaker, so there would be less support for the first arrival hypothesis." Adaptive radiation has important implications for new ecosystems, particularly with organisms that evolve rapidly. P. fluorescens produces as many as ten generations a day under the reported experimental conditions, which allowed the Georgia Tech scientists to study how they evolved over 120 generations -- changes that would have taken hundreds of years in finches. The bacterial population studied in Jiang's lab included as many as 100 million organisms, far more than the number of birds on the Galapagos Islands. The asexual reproduction of the bacteria meant the mutation rate likely also differed from the birds. Still, Jiang and Tan believe their study offers insights into how different species interact in new environments based on historical advantages. "From the perspective of evolutionary biology, scientists often focus only on the particular species that interest them," said Jiang, who studies community ecology. "We also need to think about the surrounding ecological context of the evolutionary process." In future work Jiang hopes to study how the introduction of predators may combine with species competition to affect adaptive radiation. In addition to those already mentioned, the research team also included Georgia Tech Ph.D. student Xi Yang, who conducted the data analysis. This research was supported by the National Science Foundation under grants DEB-1257858 and DEB-1342754. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
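The study itself is experimental, not computational, but the priority effect Jiang and Tan describe can be illustrated with a toy niche-occupancy model: an early coloniser fills niches before a competitor arrives, and the later arrival is suppressed most strongly when the two overlap in niche use. Every number and rule below is an illustrative assumption, not anything taken from the paper.

# Toy illustration of a priority effect (not the study's model or data).
import random

def simulate(niches=100, arrival_gap=10, overlap=0.9, fill_rate=0.05, steps=100, seed=1):
    """Return (niches held by the early arriver, niches held by the later arriver)."""
    random.seed(seed)
    early, late = set(), set()
    per_step = max(1, int(fill_rate * niches))
    for t in range(steps):
        # The early coloniser keeps claiming empty niches from the start.
        for n in random.sample(range(niches), k=per_step):
            if n not in early and n not in late:
                early.add(n)
        # The competitor arrives later; it can only establish in an empty niche,
        # and strong niche overlap means the resident usually pre-empts it.
        if t >= arrival_gap:
            for n in random.sample(range(niches), k=per_step):
                if n not in early and n not in late and random.random() > overlap:
                    late.add(n)
    return len(early), len(late)

if __name__ == "__main__":
    for overlap in (0.2, 0.9):
        early_n, late_n = simulate(overlap=overlap)
        print(f"niche overlap {overlap}: early arriver {early_n}, later arriver {late_n}")

With strong overlap the resident's head start locks the later arriver out of most niches; with weak overlap the later arriver still establishes widely, which mirrors the weakened priority effect the researchers describe for ecologically dissimilar competitors.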


News Article | April 26, 2017
Site: phys.org

Researchers at the Georgia Institute of Technology tested the importance of first arrival with bacterial species competing in a test tube. Using a bacterium that grows on plant leaves, they confirmed the importance of first arrival for promoting species diversification, and extended that hypothesis with some important caveats. "We wanted to understand the role of species colonization history in regulating the interaction between the rapidly-evolving bacterium Pseudomonas fluorescens SBW-25 and competing species and how that affected P. fluorescens adaptive radiation in the ecosystem," said Jiaqi Tan, a research scientist in Georgia Tech's School of Biological Sciences. "The general pattern we find is that the earlier arrival of P. fluorescens allowed it to diversify to a greater extent. If the competing and diversifying species are very similar ecologically, we find a stronger effect of species colonization history on adaptive radiation." The research is scheduled to be reported April 26th in the journal Evolution and was supported by the National Science Foundation. The study is believed to be the first rigorous experimental test of the role colonization history plays in adaptive radiation. Evolutionary biologist David Lack studied a group of closely-related bird species known as Darwin's Finches, and popularized them in a book first published in 1947. Among his hypotheses was that the birds were successful in their adaptive radiation—the evolutionary diversification of morphological, physiological and behavioral traits—because they were early colonizers of the islands. The finches filled the available ecological niches, taking advantage of the resources in ways that limited the ability of later-arriving birds to similarly establish themselves and diversify, he suggested. "The bird species that arrived after the finches could only use the resources that the finches weren't using," Tan explained. "The other birds could not diversify because there weren't many resources left for them." Tan and other researchers in the laboratory of Georgia Tech Professor Lin Jiang tested that hypothesis using P. fluorescens, which rapidly evolves into two general phenotypes differentiated by the ecological niches they adopt in static test tube microcosms. Within the two major phenotypes, known as "fuzzy spreaders" and "wrinkly spreaders," there are additional minor variations. The researchers allowed the bacterium to colonize newly-established microcosms and diversify before introducing competing bacterial species. The six competitors, which varied in their niche and competitive fitness compared to P. fluorescens, were introduced individually and allowed to grow through multiple generations. Their success and level of diversification were measured by placing microcosm samples onto agar plates and counting the number of colonies from each species and sub-species. The study also included the reverse of the earlier colonization history, allowing the competitor bacteria to establish themselves in microcosms before introducing the P. fluorescens. The competitors included a broad range of organisms common in the environment, some of them retrieved from a lake near the Georgia Tech campus. The experiment allowed the scientists to extend the hypothesis that Lack advanced 70 years ago. 
"If the diversifying species and the competing species are very similar, you can have a strong priority effect in which the first-arriving species can strongly impact the ability of the later species to diversify," said Jiang, a professor in Georgia Tech's School of Biological Sciences. "If the species are different enough, then the priority effect is weaker, so there would be less support for the first arrival hypothesis." Adaptive radiation has important implications for new ecosystems, particularly with organisms that evolve rapidly. P. fluorescens produces as many as ten generations a day under the reported experimental conditions, which allowed the Georgia Tech scientists to study how they evolved over 120 generations—changes that would have taken hundreds of years in finches. The bacterial population studied in Jiang's lab included as many as 100 million organisms, far more than the number of birds on the Galapagos Islands. The asexual reproduction of the bacteria meant the mutation rate likely also differed from the birds. Still, Jiang and Tan believe their study offers insights into how different species interact in new environments based on historical advantages. "From the perspective of evolutionary biology, scientists often focus only on the particular species that interest them," said Jiang, who studies community ecology. "We also need to think about the surrounding ecological context of the evolutionary process." In future work Jiang hopes to study how the introduction of predators may combine with species competition to affect adaptive radiation. In addition to those already mentioned, the research team also included Georgia Tech Ph.D. student Xi Yang, who conducted the data analysis.


News Article | April 26, 2017
Site: www.rdmag.com

A new, simpler and more efficient interface for controlling robots will allow laymen to operate them without significant training time. Traditionally, robots are controlled by a computer screen and a mouse in a ring-and-arrow system that is often difficult to use and error-prone. However, in the new system, designed by researchers from the Georgia Institute of Technology, the user can point and click on an item and choose a specific grasp, allowing the robot to get into position to grab the chosen item. “Instead of a series of rotations, lowering and raising arrows, adjusting the grip and guessing the correct depth of field, we've shortened the process to just two clicks,” Sonia Chernova, the Georgia Tech assistant professor in robotics and advisor to the research team, said in a statement. The researchers used college students to test both the traditional method and the new point-and-click program and found that the new system resulted in significantly fewer errors and allowed participants to perform tasks more quickly and reliably. Participants completed tasks about two minutes faster with the point-and-click method and averaged only one mistake per task, compared to almost four mistakes per task using the traditional method. “Roboticists design machines for specific tasks, then often turn them over to people who know less about how to control them,” David Kent, the Georgia Tech Ph.D. robotics student who led the project, said in a statement. “Most people would have a hard time turning virtual dials if they needed a robot to grab their medicine. But pointing and clicking on the bottle? That's much easier.” The traditional method is cumbersome because it requires two screens so the user can adjust the virtual gripper and command the robot exactly where to go and what to grab. While this gives the user a maximum level of control and flexibility, the size of the workspace can be burdensome and contribute to an increased number of errors. The new method, by contrast, doesn't include 3D mapping; it provides only a single camera view and relies on the robot's perception algorithm to analyze an object's 3D surface geometry to determine where the gripper should be placed. “The robot can analyze the geometry of shapes, including making assumptions about small regions where the camera can't see, such as the back of a bottle,” Chernova said. “Our brains do this on their own -- we correctly predict that the back of a bottle cap is as round as what we can see in the front.” “In this work, we are leveraging the robot's ability to do the same thing to make it possible to simply tell the robot which object you want to be picked up,” she added.


News Article | April 26, 2017
Site: www.sciencemag.org

Increasing diversity within academic science has been a priority for France Córdova since she became director of the National Science Foundation (NSF) in 2014. Within a year she had launched an initiative, called INCLUDES, that challenges universities to do a better job of attracting women and minorities into the field. Now, Córdova has turned her attention inward in hopes of improving the dismal track record of NSF’s most prestigious award for young scientists. Only five women have won NSF’s annual Alan T. Waterman Award in its 41-year history, and no woman of color has ever been selected. The 2017 winners announced this month mark the 13th year in a row that the $1 million research prize has gone to a man (two, actually, including the second black scientist ever chosen.) For decades, NSF rules required candidates to be either 35 or younger, or within 7 years of having received their doctoral degree. Those ceilings made sense when the typical academic scientist was someone who “went straight through school with no debt and no family commitments, and who could focus on research in their late 20s and early 30s without distractions,” says Karan Watson, provost of Texas A&M University in College Station and chair of the Waterman selection committee. But Watson says those caps penalize anyone whose career has been slowed or interrupted by family, finances, or physical challenges—a group likely to be disproportionately female and members of underrepresented minority groups. So Córdova pushed to raise the ceilings to age 40 and 10 years post-Ph.D. “We hope it will level the playing field,” says Maria Zuber, chair of the National Science Board in Arlington, Virginia, NSF’s oversight body, which approved the change at its November 2016 meeting. (The change, announced last week, applies to the 2018 competition deadlines arriving this fall.) Zuber, an astrophysicist and vice president for research at the Massachusetts Institute of Technology (MIT) in Cambridge, compared it to “stop-the-clock” policies at MIT and other universities that give faculty members more time to build the research record needed to win tenure. If only it were that easy, says Kim Cobb, a paleoclimate researcher at Georgia Institute of Technology in Atlanta and one of six university ADVANCE professors with a remit to improve gender equity. Cobb, who has spent the past several years championing women for awards handed out by the American Geophysical Union (AGU), cites “the deep pool of issues” that female academics must deal with. “There’s explicit bias—the idea that women don’t belong in science,” she begins. “Then there’s structural bias—only women have babies, for example. And then there’s the implicit bias that every one of us carries around without even being aware of its effect on our decisions.” Those biases affect much more than a quest for professional recognition, of course. But prizes are important to academics, and those responsible for handing them out often aren’t aware of the baggage that they may be bringing to the selection committee. “I think that [prize] committees genuinely want to do the right thing,” says Joan Strassmann, a sociobiologist at Washington University in St. Louis in Missouri, a vocal campaigner against gender bias in science. 
“And they honestly think they are doing a good job.” But she says the Matilda effect—a phrase coined by Margaret Rossiter in the 1990s to describe how the scientific achievements of women are so often credited to men, or simply ignored—demonstrates that good intentions aren’t nearly enough. Strassmann cited research showing that individuals have a hard time choosing the best candidate from a large pool of highly qualified applicants to make the case for diversity. “Scientists pride themselves on being able to spot talent,” says Strassmann, a member of the National Academy of Sciences. “If we had more humility, we might feel free to use other criteria” that would address gender inequity more directly. What is the nature of the current imbalance? In response to a request from Insider, NSF analyzed the last 15 years of the Waterman prize. NSF receives an average of 59 nominees a year—from a high of 86 last year to a low of 42 and 43 in 2005 and 2006. Roughly one-quarter of the pool is deemed worthy of closer scrutiny. And women make up 26% of those finalists, called top performers. (NSF doesn’t ask applicants about gender, but officials did a manual search to determine the gender of the top performers.) The actual percentage can vary considerably from one year to the next year—women made up as few as 10% of the top performers in 2007 and 12% in 2015, and as many as 40% last year. But regardless of the percentage, only a tiny number of women—one and two in 2007 and 2015, for example—make the short list of top performers. Several years ago, AGU identified a similar problem with its prestigious early career award. So in 2011, AGU removed the age limit, then 36, and replaced it with a 10-year post-Ph.D. ceiling. The new rules also allow applicants to describe “special circumstances” that would warrant removing the ceiling altogether. “We know women may take time off to have children,” says Beth Paredes, assistant AGU director for honors and science affiliations in Washington, D.C. “And it can also apply to men, for example, in countries with required military service.” The exemption is rarely invoked, Paredes acknowledges. “But we wanted to be as inclusive as possible,” she says. An uneven distribution of Waterman applicants across disciplines also works against women. Last year, for example, 30 of the 86 nominees were engineers, a field in which women are badly underrepresented. In contrast, the committee received only eight applications from researchers in the social, behavioral, and economic sciences. Watson says the committee would like to see more applicants from the social sciences. Only two have won the prize, and sociologist Dalton Conley of Princeton University, a 2005 Waterman winner and current committee member, says that his colleagues face a Catch-22 situation. “Due to a lack of extant winners, the award is not as known in the social sciences,” Conley says. “And among those who are aware of it, they may figure that there is not much of a shot of winning. Hence fewer apply.” Bumping up those numbers will take more than simply beating the bushes for strong candidates, however. Both Cobb and Strassmann say that the skills needed to succeed in winning prizes—from identifying a heavyweight advocate to rounding up the necessary supporting letters and filling out all the paperwork—aren’t taught in graduate school. Instead, they are learned through the same old boys’ network that for so long has excluded women and minorities. 
“I didn’t even know the Waterman existed until 6 or 7 years ago,” says Cobb, adding that she became familiar with the award only after she and a small group of women within AGU began their advocacy efforts. And modesty would have ruled it out. “I would never have dared to aspire to such an award,” says Cobb, who was named a chaired professor last year at the age of 41. Changing that culture will require some arm-twisting, Strassmann acknowledges. “I realize everybody is inundated with other tasks,” she says. “But I’ve resolved to nominate 10 people a year, and to urge others to do the same. And why not? I know how to do it. And it feels good.” Though nominating more women and minorities is a necessary first step, nobody expects it will be enough to make the problem disappear. “The old excuse—that there are none who are good enough—is no longer valid,” Watson says. “But maybe we were clipping their wings too soon. The new rules will give them more time to build up their record.” Data from AGU show a surprising gender distribution in its James B. Macelwane Medal for early-career scientists. After being an almost exclusively male prize in its first 2 decades (36 of 37 winners from 1962 to 1983), women received 17% of the awards in the next 2 decades and actually reached parity in the 5 years preceding the rule change. Since 2012, however, men have captured 70% of the 24 medals.


News Article | April 17, 2017
Site: www.chromatographytechniques.com

For much of its first 2 billion years, Earth was a very different place: oxygen was scarce, microbial life ruled, and the sun was significantly dimmer than it is today. Yet the rock record shows that vast seas covered much of the early Earth under the faint young sun. Scientists have long debated what kept those seas from freezing. A popular theory is that potent gases such as methane—with many times more warming power than carbon dioxide—created a thicker greenhouse atmosphere than required to keep water liquid today. In the absence of oxygen, iron built up in ancient oceans. Through the right chemical and biological processes, this iron rusted out of seawater and cycled many times through a complex loop, or "ferrous wheel." Some microbes could "breathe" this rust in order to outcompete others, such as those that made methane. When rust was plentiful, an "iron curtain" may have suppressed methane emissions. "The ancestors of modern methane-making and rust-breathing microbes may have long battled for dominance in habitats largely governed by iron chemistry," said Marcus Bray, a biology Ph.D. candidate in the laboratory of Jennifer Glass, assistant professor in the Georgia Institute of Technology's School of Earth and Atmospheric Sciences and principal investigator of the study funded by NASA's Exobiology and Evolutionary Biology Program. The research was reported in the journal Geobiology today. Using mud pulled from the bottom of a tropical lake, researchers at Georgia Tech gained a new grasp of how ancient microbes made methane despite this "iron curtain." Sean Crowe, an assistant professor at the University of British Columbia, collected mud from the depths of Indonesia's Lake Matano, an anoxic iron-rich ecosystem that uniquely mimics early oceans. Bray placed the mud into tiny incubators simulating early Earth conditions, and tracked microbial diversity and methane emissions over a period of 500 days. Minimal methane was formed when rust was added; without rust, microbes kept making methane through multiple dilutions. Extrapolating these findings to the past, the team concluded that methane production could have persisted in rust-free patches of ancient seas. Unlike the situation in today's well-aerated oceans, where most natural gas produced on the seafloor is consumed before it can reach the surface, most of this ancient methane would have escaped to the atmosphere to trap heat from the early sun.
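The competition between the "rust-breathing" microbes and the methane makers comes down to two metabolisms drawing on the same electron donors. As a hedged illustration (the article does not specify which donors dominate in the Lake Matano sediments), with hydrogen as the shared substrate the two pathways can be written as:

\[ \mathrm{CO_2 + 4\,H_2 \rightarrow CH_4 + 2\,H_2O} \quad \text{(hydrogenotrophic methanogenesis)} \]
\[ \mathrm{2\,Fe(OH)_3 + H_2 + 4\,H^+ \rightarrow 2\,Fe^{2+} + 6\,H_2O} \quad \text{(microbial iron(III) reduction)} \]

Because iron(III) reduction generally yields more energy per mole of hydrogen under sediment conditions, iron reducers can draw the shared substrate down below what methanogens need, which is one standard explanation for the "iron curtain" effect the incubations probe.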


News Article | April 17, 2017
Site: www.eurekalert.org

The National Science Foundation (NSF) today recognized Baratunde "Bara" A. Cola of the Georgia Institute of Technology and John V. Pardon of Princeton University with the nation's highest honor for early career scientists and engineers, the Alan T. Waterman Award. This marks only the second time in the award's 42-year history that NSF selected two recipients in the same year. Bestowed annually, the Waterman Award recognizes outstanding researchers age 35 and under in NSF-supported fields of science and engineering. In addition to a medal, awardees each receive a $1 million, five-year grant for research in their chosen field of study. "We are seeing the significant impact of their research very early in the careers of these awardees," said NSF Director France Córdova. "That is the most exciting aspect of the Waterman Award, which recognizes early career achievement. They have creatively tackled longstanding scientific challenges, and we look forward to what they will do next." Cola pioneered new engineering methods and materials to control light and heat in electronics at the nanoscale. He serves as an associate professor at Georgia Tech's George W. Woodruff School of Mechanical Engineering. In 2015, Cola and his team were the first to overcome more than 40 years of research challenges to create a device called an optical rectenna, which turns light into direct current more efficiently than today's technology. The device could lead to highly efficient solar cells with the potential to power new generations of cell phones, laptops, satellites and drones. The technology uses carbon nanotubes that act as tiny antennas to capture light. Light is then converted into direct current by miniature, nanotechnology-enabled mechanisms called rectifier diodes. The research has the potential to double solar cell efficiency at one-tenth the cost, according to Cola. "Ultimately, we see the Waterman as fueling the final leg of our long-term effort to be the first to truly bring transformational applications of carbon nanotubes to the market," Cola said. "As of now, we know that there will be a substantial investment in engineering another breakthrough in carbon nanotube optical rectenna science." Cola also works to commercialize other novel nanotechnology-based innovations. In 2015, he participated in NSF Innovation Corps (I-Corps) at Georgia Tech, a program that immerses scientists and engineers in entrepreneurial training, teaching them to look beyond the lab and consider the commercial potential or broader impacts of their research. I-Corps participants interview prospective customers and identify market needs for federally funded innovations. In addition, Cola and colleagues were responsible for engineering breakthroughs, including the first thermally conductive amorphous polymer, the first practical electrochemical cell for generating electricity from waste heat and the first evidence of thermal energy conduction by surface polaritons. Cola, 35, is the founder of Carbice Nanotechnologies, Inc., a company that uses a carbon nanotube-material to remove heat from computer chip testing stations, allowing for faster and cheaper testing of chips during production. The technology could eventually result in smaller, faster, more powerful computer chips for use in everything from smartphones to supercomputers. Carbice Nanotechnologies received support from NSF's Small Business Innovation Research program. 
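To give a sense of why the optical rectenna stood unrealized for roughly 40 years: unlike a radio-frequency rectenna, the rectifying element here must respond at the frequency of visible light. For green light with a wavelength of about 500 nm,

\[ f \;=\; \frac{c}{\lambda} \;\approx\; \frac{3\times 10^{8}\ \mathrm{m/s}}{5\times 10^{-7}\ \mathrm{m}} \;=\; 6\times 10^{14}\ \mathrm{Hz}, \]

so each optical cycle lasts under two femtoseconds. This back-of-the-envelope figure is added for context and is not taken from the NSF announcement, but it indicates why the miniature rectifier diodes paired with the carbon nanotube antennas must switch far faster than conventional diodes.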
He also is co-founder of the NSF-funded Academic and Research Leadership Network, a group of more than 300 Ph.D. engineering researchers from minority groups underrepresented in academia, industry and government laboratories. Pardon is a Clay Research Fellow and professor of mathematics at Princeton University. His research focuses on geometry and topology, the study of properties of shapes that are unaffected by deformations, such as stretching or twisting. He is known for solving problems that stumped other mathematicians for decades and generating solutions that provide new tools for geometric analysis. In 2013, Pardon published a solution to the Hilbert-Smith conjecture, a mathematical proposition involving the actions of groups of "manifolds" in three dimensions. Manifolds include spheres and doughnut-shaped objects. The conjecture originates from one of the 23 problems published in 1900 by German mathematician David Hilbert, which helped guide the course of 20th century mathematics. American topologist Paul Althaus Smith proposed a stronger version of the problem in 1941. This problem has connections to many other areas of mathematics and physics. Pardon's publication was notable for proving this longstanding conjecture, a major achievement in mathematics. Prior to that publication, as a senior undergraduate at Princeton, Pardon answered a question posed in 1983 by Russian mathematician Mikhail Gromov regarding "knots," mathematical structures that resemble physical knots, but are closed, instead of having any ends. Gromov's question involved a special class of knots called "torus knots." He asked whether these knots could be tied without altering or distorting their topology. Pardon figured out a way to use the distortion between two properties of knots -- their intrinsic and extrinsic distances -- to control their topology. He showed that torus knots are limited by their geometric properties, and can be tied without altering their topology. Pardon's solution has important applications in fluid dynamics and electrodynamics, calculating forces involved in aircraft movement, predicting weather patterns, determining the flow of liquids through water treatment plant pipelines, determining the flow of electrical charges, and more. Pardon, who received his doctorate in mathematics in 2015 from Stanford University, has been a full professor at Princeton since fall 2016. Among other awards, Pardon earned a National Science Foundation Graduate Research Fellowship to support his graduate studies at Stanford. As of October last year, Pardon had published 11 papers on such subjects as contact homology, virtual fundamental cycles, the distortion of knots, algebraic varieties, and the carpenter's rule problem.
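For readers unfamiliar with the intrinsic-versus-extrinsic language above, Gromov's distortion of a closed curve K embedded in \mathbb{R}^{3} is standardly defined as

\[ \operatorname{distortion}(K) \;=\; \sup_{p \neq q \in K} \frac{d_K(p,q)}{\lvert p - q \rvert}, \]

where d_K(p,q) is the distance measured along the curve (intrinsic) and |p - q| is the straight-line distance in space (extrinsic). This definition is standard background rather than part of the announcement; Pardon's contribution was, roughly, to bound this quantity from below for torus knots in terms of their defining parameters, which is the sense in which the geometry of those knots constrains how they can sit in space.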


News Article | April 26, 2017
Site: www.gizmag.com

If grasping-armed robots are ever going to be widely used for applications such as assisting seniors in their homes, then they'll have to be easy to program. And unfortunately, programming most robots to grasp and retrieve an object can still be a rather complex process. That's why scientists at the Georgia Institute of Technology have developed a new system that simply requires users to click twice with a computer mouse. Currently, in one of the most commonly-used computer-based programming systems, users are shown a locked-off camera view of the robot and its surroundings, along with a 3D map of that scene. Using a series of onscreen rings and arrows, they then set about manually adjusting each of the six degrees of freedom of the robot's arm, lining it up so that it is hopefully able to grasp and lift the desired object. It's a setup that's definitely designed for experts, and that involves a certain amount of trial and error. In the new system – designed by a team led by Ph.D. student David Kent – users start by just clicking on the object that they want retrieved, within the overhead camera view. Using the information from the 3D map, the system automatically figures out how best to position the arm, in order to reach that item. It then presents the user with a choice of grasping styles, one of which they choose by clicking on it. From there, the robot arm goes to work, moving its gripper to the item, grasping it, then delivering it to the user. "The robot can analyze the geometry of shapes, including making assumptions about small regions where the camera can't see, such as the back of a bottle," says assistant professor Sonia Chernova, who advised on the project. "Our brains do this on their own – we correctly predict that the back of a bottle cap is as round as what we can see in the front. In this work, we are leveraging the robot's ability to do the same thing to make it possible to simply tell the robot which object you want to be picked up."


News Article | April 25, 2017
Site: www.sciencedaily.com

The traditional interface for remotely operating robots works just fine for roboticists. They use a computer screen and mouse to independently control six degrees of freedom, turning three virtual rings and adjusting arrows to get the robot into position to grab items or perform a specific task. But for someone who isn't an expert, the ring-and-arrow system is cumbersome and error-prone. It's not ideal, for example, for older people trying to control assistive robots at home. A new interface designed by Georgia Institute of Technology researchers is much simpler, more efficient and doesn't require significant training time. The user simply points and clicks on an item, then chooses a grasp. The robot does the rest of the work. "Instead of a series of rotations, lowering and raising arrows, adjusting the grip and guessing the correct depth of field, we've shortened the process to just two clicks," said Sonia Chernova, the Georgia Tech assistant professor in robotics who advised the research effort. Her team tested college students on both systems, and found that the point-and-click method resulted in significantly fewer errors, allowing participants to perform tasks more quickly and reliably than using the traditional method. "Roboticists design machines for specific tasks, then often turn them over to people who know less about how to control them," said David Kent, the Georgia Tech Ph.D. robotics student who led the project. "Most people would have a hard time turning virtual dials if they needed a robot to grab their medicine. But pointing and clicking on the bottle? That's much easier." The traditional ring-and-arrow-system is a split-screen method. The first screen shows the robot and the scene; the second is a 3-D, interactive view where the user adjusts the virtual gripper and tells the robot exactly where to go and grab. This technique makes no use of scene information, giving operators a maximum level of control and flexibility. But this freedom and the size of the workspace can become a burden and increase the number of errors. The point-and-click format doesn't include 3-D mapping. It only provides the camera view, resulting in a simpler interface for the user. After a person clicks on a region of an item, the robot's perception algorithm analyzes the object's 3-D surface geometry to determine where the gripper should be placed. It's similar to what we do when we put our fingers in the correct locations to grab something. The computer then suggests a few grasps. The user decides, putting the robot to work. "The robot can analyze the geometry of shapes, including making assumptions about small regions where the camera can't see, such as the back of a bottle," said Chernova. "Our brains do this on their own -- we correctly predict that the back of a bottle cap is as round as what we can see in the front. In this work, we are leveraging the robot's ability to do the same thing to make it possible to simply tell the robot which object you want to be picked up." By analyzing data and recommending where to place the gripper, the burden shifts from the user to the algorithm, which reduces mistakes. During a study, college students performed a task about two minutes faster using the new method vs. the traditional interface. The point-and-click method also resulted in approximately one mistake per task, compared to nearly four for the ring-and-arrow technique. In addition to assistive robots in homes, the researchers see applications in search-and-rescue operations and space exploration. 
The interface has been released as open-source software and was presented in Vienna, Austria, March 6-9 at the 2017 Conference on Human-Robot Interaction (HRI2017). The study is partially supported by a National Science Foundation Fellowship (IIS 13-17775) and the Office of Naval Research (N000141410795). Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsors.
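The open-source package itself is not reproduced here, but the two-click workflow the article describes, click an object in the camera view, let a perception routine fit candidate grasps to the local 3D geometry, then confirm one suggestion, can be outlined in a short, self-contained sketch. Every name and the grasp heuristic below are invented for illustration; this is not the Georgia Tech code.

# Illustrative, self-contained sketch of a two-click grasp interface
# (hypothetical names and heuristic throughout).
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float, float]

@dataclass
class Grasp:
    center: Point   # where the gripper closes, in the camera frame
    axis: Point     # direction across which the fingers close (unit vector)
    score: float    # heuristic quality score (higher is better)

def segment_object(cloud: List[Point], click: Point, radius: float = 0.2) -> List[Point]:
    """Click 1: keep the points near the clicked location.
    A real system would segment the depth image under the clicked pixel."""
    return [p for p in cloud
            if sum((a - b) ** 2 for a, b in zip(p, click)) ** 0.5 < radius]

def rank_grasps(points: List[Point]) -> List[Grasp]:
    """Propose axis-aligned grasps at the object's centroid, preferring to close
    the fingers across the object's narrowest dimension. A real perception
    algorithm would fit the 3D surface geometry, including unseen regions."""
    n = len(points)
    center = tuple(sum(p[i] for p in points) / n for i in range(3))
    extents = [max(p[i] for p in points) - min(p[i] for p in points) for i in range(3)]
    axes = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
    grasps = [Grasp(center, axes[i], 1.0 / (extents[i] + 1e-6)) for i in range(3)]
    return sorted(grasps, key=lambda g: g.score, reverse=True)

if __name__ == "__main__":
    # A crude stand-in for a bottle: a tall, thin cluster of points.
    bottle = [(0.01 * i, 0.01 * j, 0.02 * k)
              for i in range(3) for j in range(3) for k in range(10)]
    points = segment_object(bottle, click=(0.01, 0.01, 0.05))
    suggestions = rank_grasps(points)[:3]   # the interface shows a few choices
    chosen = suggestions[0]                 # click 2: the user confirms one
    print("grasp at", chosen.center, "closing along", chosen.axis)

The design point the researchers emphasize is visible in the structure: all of the six-degree-of-freedom reasoning lives inside the grasp-ranking step, so the user is left with only the two clicks.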


News Article | April 28, 2017
Site: www.eurekalert.org

Researchers at the Georgia Institute of Technology and Peking University have found a new use for the ubiquitous PowerPoint slide: Producing self-folding three-dimensional origami structures from photocurable liquid polymers. The technique involves projecting a grayscale pattern of light and dark shapes onto a thin layer of liquid acrylate polymer placed in a plate or between two glass slides. A photoinitiator material mixed into the polymer initiates a crosslinking reaction when struck by light from an ordinary LED projector, causing a solid film to form. A light-absorbing dye in the polymer serves as a regulator for the light. Due to the complicated interaction between the evolution of the polymer network and volume shrinkage during photo curing, areas of the polymer that receive less light exhibit more apparent bending behavior. When the newly-created polymer film is removed from the liquid polymer, the stress created in the film by the differential shrinkage causes the folding to begin. To make the most complex origami structures, the researchers shine light onto both sides of the structures. Origami structures produced so far include tiny tables, capsules, flowers, birds and the traditional miura-ori fold -- all about a half-inch in size. The origami structures could have applications in soft robots, microelectronics, soft actuators, mechanical metamaterials and biomedical devices. "The basic idea of our method is to utilize the volume shrinkage phenomenon during photo-polymerization," said Jerry Qi, a professor in the Woodruff School of Mechanical Engineering at Georgia Tech. "During a specific type of photopolymerization, frontal photopolymerization, the liquid resin is cured continuously from the side under light irradiation toward the inner side. This creates a non-uniform stress field that drives the film to bend along the direction of light path." Details of the work are scheduled to be published April 28 in the journal Science Advances. The research was supported by the National Science Foundation, the Air Force Office of Scientific Research and the Chinese Scholarship Council. It is believed to be the first application to create self-folding origami structures through the control of volume shrinkage during patterned photopolymerization. The process that creates the shrinkage phenomenon is considered harmful in other uses of the polymer. "Volume shrinkage of polymer was always assumed to be detrimental in the fabrication of composites and in the conventional 3-D printing technology," said Daining Fang, a co-author of the paper and a professor at Peking University when the research was done. "Our work shows that with a change of perspective, this phenomenon can become quite useful." Fang is now at Beijing Institute of Technology. To make the most complex shapes with bending in both directions, the researchers can flip the patterned film over to create crosslinking on the other side. "We have developed two types of fabrication processes," said Zeang Zhao, a Ph.D. student at Georgia Tech and Peking University. "In the first one, you can just shine the light pattern towards a layer of liquid resin, and then you will get the origami structure. In the second one, you may need to flip the layer and shine a second pattern. This second process gives you much wider design freedom." Light is shined onto the film for five to ten seconds, which produces a film about 200 microns thick. 
"The areas that receive light become solid; the other parts of the pattern remain liquid, and the structure can then be removed from the liquid polymer," said Qi. "The technique is very simple." Frontal photopolymerization is a process in which a polymer film is continuously cured from one side in a thick layer of liquid resin. In the presence of strong light attenuation, the solidification front initiates at the surface upon illumination and propagates toward the liquid side as the irradiation time increases. The process can be delicately tuned by controlling the illumination time and the light intensity, and the method has been used to fabricate microfluidic devices and synthesize microparticles. The researchers used poly(ethylene glycol) diacrylate in this demonstration, but the technique should work with a broad range of photocurable polymers. An orange dye was used in the demonstration, but other dyes could produce structures in a range of different colors. For the proof-of-principle, Zhao created a PowerPoint pattern by hand. To scale the process up, the system could be connected to a computer-aided design (CAD) tool for generating more precise grayscale patterns. Qi believes the technique could be used to produce structures as much as an inch in size. "The self-folding requires relatively thin films which might not be possible in larger structures," he said. Added Qi, "We have developed a simple approach to fold a thin sheet of polymer into complicated three-dimensional origami structures. Our approach is not limited by specific materials, and the patterning is so simple that anybody with PowerPoint and a projector could do it." This research was supported by NSF awards CMMI-1462894, CMMI-1462895, and EFRI-1435452; and the Air Force Office of Scientific Research grant 15RT0885. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the sponsoring organizations.


News Article | May 8, 2017
Site: globenewswire.com

NEWARK, N.J. and CHARLESTON, S.C., May 08, 2017 (GLOBE NEWSWIRE) -- Arkados Group, Inc. (OTC:AKDS), a leading software developer and system integrator enabling Internet of Things (IoT) applications for commercial and industrial customers, today announced it has completed the acquisition of SolBright Renewable Energy, LLC, a renewable energy design and development company based in Charleston, SC. The deal was originally announced on March 17, 2017. In addition, we are pleased to announce that Patrick Hassell has been appointed as the President of Arkados Energy Solutions, LLC, our energy services business. Mr. Hassell most recently served as the Founder and Managing Director of SolBright Renewable Energy. Prior to SolBright, Mr. Hassell was President and CEO of AkroMetrix, a prominent microelectronics industry equipment manufacturer. Mr. Hassell received a BS in Civil Engineering from the University of Virginia and later received a Master of Science in Management from the Georgia Institute of Technology. SolBright Renewable Energy is a leading provider of turnkey development, engineering, procurement and construction (EPC) services for the commercial/industrial and military solar photovoltaic markets. Arkados paid $15 million in a combination of cash, debt and stock for substantially 100% of the assets of SolBright, including a current backlog of approximately $40 million in distributed generation EPC projects and a substantial pipeline of additional projects throughout the Eastern United States. This transaction allows Arkados to significantly expand its Arkados Energy Solutions business into the rapidly growing renewable energy industry and is expected to open new customer opportunities for its cutting-edge Internet of Things solutions. “SolBright has been one of the more active super-regional designers and installers of solar systems, primarily along the East Coast but reaching as far as California, completing approximately 30 MW of projects during that time. SolBright’s strong track record, which is a testament to Patrick and his team, yielded industry-leading win rates and a loyal, blue chip customer base,” said Terrence DeFranco, CEO of Arkados Group, Inc. “The closing of this acquisition marks an important milestone in our growth and is expected to be a tremendous catalyst for revenue and earnings growth. Our unique model of combining the value of renewable energy services with the vast benefits of our IIoT Arktic™ software platform should create great value for our customers and our shareholders, and we have great confidence in Patrick and his team leading this business to new heights.” “We believe that this transaction will allow us to take our business to the next level and realize many more opportunities with our existing and future customers,” stated Mr. Hassell. “Over the past 8 years, we’ve built an award-winning business and established a brand that has become known for quality, reliability and excellent service. With Arkados, we can add a cutting-edge set of services that should help to establish us as a clear leader in the industry. The SolBright team is very excited to have access to Arkados’ technology solutions based on our belief that our customers will greatly benefit from them.” AIP Private Capital and AIP Asset Management acted as the lead investors for the acquisition financing, providing $2 million in convertible debt and an additional $500,000 in equity.
L2 Capital, LLC and SBI Investments, LLC provided subordinated debt in the amount of $792,000, and the Company secured additional working capital of approximately $600,000 in equity from other investors. Joseph Gunnar & Co., LLC, a leading New York City-based securities and investment bank established in 1997, acted as the placement agent and advisor to Arkados and will continue in this capacity as the Company explores other opportunities. The Capital Corporation, a leading investment bank headquartered in Greenville, South Carolina, with offices in Spartanburg, South Carolina and Boca Raton, Florida, served as the exclusive investment banking advisor to SolBright on the transaction. Detailed information related to the transaction can be found in the Company’s Current Report on Form 8-K filed with the Securities and Exchange Commission on May 5, 2017 at www.sec.gov. General Electric estimates that the Industrial Internet of Things (IIoT), the use of sensing, data gathering, monitoring and control of commercial and industrial machinery, will reach $60 trillion worldwide in the next 15 years. IIoT technology and software applications, in combination with on-site renewable energy generation and emerging battery storage capabilities, provide a complete solution set for optimizing energy efficiency and corporate energy spend. In November 2015, Gartner estimated that the Internet of Things will consist of 20.8 billion connected objects in use by 2020, up from 6.4 billion in 2016, and that enterprise customers represent the largest spending on these devices. Another more recent Gartner report estimates IoT deployment in commercial buildings is on track to reach just over 1 billion in 2018. Arkados Group, Inc. (“Arkados” or the “Company”), through its subsidiaries, is a provider of scalable and interoperable Internet of Things solutions focused on industrial automation and energy management. We execute our business as a software-as-a-service (SaaS) application developer and energy services firm that helps commercial and industrial facilities owners and managers leverage the Internet of Things to reduce costs and improve productivity with unique, cutting-edge building and machine automation solutions. The Company’s Arktic™ software platform is a scalable and interoperable cloud-based system for sensing, gathering, storing and analyzing data, as well as reporting critical information and implementing command and control. Our applications currently focus on measurement and verification and predictive analytics, and are delivered to customers as a complement to our services business, which focuses primarily on reducing energy costs through solar PV, LED lighting and other energy conservation services for the commercial and industrial facilities market. More information is available at our web site at www.arkadosgroup.com. SolBright Renewable Energy is a turnkey developer and EPC of solar photovoltaic projects for long-term, stable, distributed power solutions. SolBright focuses on military, municipal and commercial/industrial markets, with projects ranging in size from 100 kWp to 5,000 kWp. SolBright’s services include market assessment, design/engineering, installation, operation and maintenance/monitoring, financing and project ownership. SolBright has distinct competitive advantages for ground, parking canopy and roof-top solar applications that ensure integration with existing/new roof warranties.
SolBright has a national reach within the United States, with projects successfully delivered throughout the Southeast, mid-Atlantic and Northeast and as far west as California. AIP is a Toronto-based asset management company, managing hedge and mutual funds and discretionary separately managed accounts. AIP has been named Best Global Macro Hedge Fund in Canada at the Hedge Fund Awards sponsored by Barclay Hedge and was nominated for the Ernst and Young Entrepreneur of the Year Award. Its core focus is to help clients, be they institutions, hedge funds, mutual funds, family offices, or retail investors, achieve their investment goals. AIP Private Capital is a privately held investment firm that focuses on emerging growth companies, primarily in the Financial Services and Technology sectors, with unique assets, strong business models and seasoned management teams with the skills and ability to grow the company quickly to profitability. AIPPC provides private equity/debt, VC, special situations investments and short-term financing, as well as technical, board and managerial leadership. AIPPC is a member of the CVCA and TMA and was recently nominated for the Ernst and Young Entrepreneur of the Year Award. Forward-Looking Statements This news release contains forward-looking statements as defined by the Private Securities Litigation Reform Act of 1995. Forward-looking statements include statements concerning plans, objectives, goals, strategies, future events or performance, and underlying assumptions and other statements that are other than statements of historical facts and can be identified by terminology such as “may,” “should,” “potential,” “continue,” “expects,” “anticipates,” “intends,” “plans,” “believes,” “estimates,” and similar expressions. These statements are based upon current beliefs, expectations and assumptions and include statements regarding the contributions expected from Mr. Hassell, the expected new customer opportunities for the Company’s cutting-edge Internet of Things solutions, the value for the Company’s customers and shareholders to be derived from the acquisition, the belief that this transaction will allow us to take the Company’s business to the next level and allow it to realize many more opportunities with its existing and future customers, becoming a clear leader in the industry and the size of the market. These statements are subject to uncertainties and risks, many of which are difficult to predict, including the ability to successfully integrate the new business and new management team with the Company’s existing business and management team. Other risks include product and service demand and acceptance, changes in technology, economic conditions, the impact of competition and pricing, government regulations, and other risks contained in reports filed by the Company with the Securities and Exchange Commission. All such forward-looking statements, whether written or oral, and whether made by or on behalf of the Company, are expressly qualified by this cautionary statement and any other cautionary statements which may accompany the forward-looking statements. In addition, the Company disclaims any obligation to update any forward-looking statements to reflect events or circumstances after the date hereof.


NEWARK, Calif., April 25, 2017 (GLOBE NEWSWIRE) -- Depomed, Inc. (NASDAQ:DEPO) today announced that Sharon D. Larkin joined Depomed as Senior Vice President, Human Resources and Administration. Ms. Larkin brings over 25 years of global Human Resources leadership to this role. Most recently, she worked at Abbott, a Fortune 100 diversified healthcare company where she held roles of increased responsibility for 23 years, most recently as Divisional Vice President, Human Resources, Medical Devices Group. In this role, Ms. Larkin provided worldwide human resources leadership for Abbott’s five medical device operating businesses.  Ms. Larkin received a B.S. in Industrial Management from the Georgia Institute of Technology College of Management. “Sharon is a highly accomplished healthcare industry executive with a proven track record in organizational talent management and helping to create a high performance culture,” said Arthur Higgins, President and Chief Executive Officer of Depomed. “We are excited to have her join our team as we focus on superior performance and enhanced execution.” "I am pleased to be joining Depomed and contribute to creating a company that my colleagues throughout the company are proud to work for and one that our customers admire," said Ms. Larkin. Depomed is a leading specialty pharmaceutical company focused on enhancing the lives of the patients, families, physicians, providers and payers we serve through commercializing innovative products for pain and neurology related disorders. Depomed markets six medicines with areas of focus that include mild to severe acute pain, moderate to severe chronic pain, neuropathic pain, migraine and breakthrough cancer pain. Depomed is headquartered in Newark, California. To learn more about Depomed, visit www.depomed.com. “Safe Harbor” Statement under the Private Securities Litigation Reform Act of 1995. The statements that are not historical facts contained in this release are forward-looking statements that involve risks and uncertainties including, but not limited to risks detailed in the Company’s Securities and Exchange Commission filings, including the Company’s most recent Annual Report on Form 10-K and most recent Quarterly Report on Form 10-Q. The inclusion of forward-looking statements should not be regarded as a representation that any of the Company’s plans or objectives will be achieved. You are cautioned not to place undue reliance on these forward-looking statements, which speak only as of the date hereof. The Company undertakes no obligation to publicly release the result of any revisions to these forward-looking statements that may be made to reflect events or circumstances after the date hereof or to reflect the occurrence of unanticipated events.


News Article | May 5, 2017
Site: www.prnewswire.com

Invest Atlanta and Georgia's Department of Economic Development, the primary economic development authorities in Atlanta and Georgia, were instrumental in facilitating Flexport's decision to expand to Atlanta and offering ongoing support. "Flexport's decision to establish a major operation here further cements Atlanta's role as a global logistics hub and home to many of the world's leading supply chain technology companies," said Dr. Eloisa Klementich, President and CEO of Invest Atlanta. "By choosing Atlanta, the company will be right at the heart of our region's burgeoning cross-sector tech community and at the doorstep of Georgia Tech, the Atlanta University Center and other talent-rich academic institutions." "Georgia continues to rise as a leader in the technology sector, and Atlanta is at the epicenter of this growth," said Georgia Department of Economic Development Commissioner Pat Wilson. "Flexport's decision to invest and create jobs in Atlanta is a testament to our commitment to supporting innovative technologies in Georgia, and meeting the growing demands of this industry. I look forward to watching them grow and thrive here." Flexport's decision to expand into Atlanta was solidified by Georgia's exceptional talent pool and enthusiastic adoption of the latest technology in logistics and freight. The city's location, transportation infrastructure, and strong logistics sector will enable Flexport to form lasting relationships with local service providers, working together to reach clients more quickly and efficiently. By air, Atlanta's airport offers access to 80 percent of the U.S. market within a two-hour flight, while, by land, Georgia's six interstates connect the state to 80 percent of the U.S. population within a two-day truck drive. Finally, by sea, the port of Savannah is the most efficient seaport operation in North America due to its large single-terminal design. The company hopes its presence in the state will support local clients and small businesses in identifying and managing global shipping solutions. "Atlanta is a booming technology center and is a natural fit for our next North American office," said Ryan Petersen, CEO and founder of Flexport. "The city has the busiest airport in the world to bring us closer to our air freight operations and is advantageously located near the port of Savannah, where they've been making huge investments to handle the bigger ships coming through the newly widened Panama Canal. On top of that, we already move hundreds of tons of freight through Georgia's major logistics gateways every year, so establishing a local presence will make customs clearance and local trucking cheaper. Simply put, Atlanta is giving us added access to air, land and sea, plus some of the country's best and brightest talent." Flexport is invested in continued growth of company functions in Atlanta, and is excited to begin immediate recruiting for the Atlanta office. The company will be hiring for its sales, operations, customs, and small business development departments. Flexport Recruiting Partners are looking for folks with a client-first mentality, strong communication skills and an entrepreneurial spirit and are interested in meeting Georgia's top talent from universities like Emory University and the Georgia Institute of Technology. Interested candidates should reach out to Kristen Hayward, Global VP of Recruiting for Flexport, Kristen.Hayward@flexport.com and check out Flexport.com/Careers. 
To view the original version on PR Newswire, visit:http://www.prnewswire.com/news-releases/flexport-announces-fourth-us-office-opening-in-atlanta-300452427.html


OTTAWA, Ontario--(BUSINESS WIRE)--Transforming the viewing experience worldwide, Espial today announced that Aamir Hussain, Executive Vice President and Chief Technology Officer of CenturyLink, Inc., the 3rd largest telecommunications service provider in the USA with $18 billion in revenues, will stand for election to Espial’s board of directors at this year’s Annual General Meeting to be held on June 13, 2017.

Aamir Hussain leads product, platforms, infrastructure, cloud, network operations and information technology for CenturyLink (NYSE: CTL), a global communications, hosting, cloud and IT services company that provides broadband, voice, video, data and managed services to millions of customers. Prior to CenturyLink, Aamir was the Managing Director & Chief Technology Officer of Liberty Global in Europe (NASDAQ: LBTYA/B/K), one of the largest cable companies in the world serving 48 million RGUs, where he led the technology and network organizations across 13 countries in Europe spanning voice, video, broadband and wireless. While at Liberty Global, Aamir led the development of several next generation video and broadband solutions, along with mobile network solutions across the European cable footprint. Aamir has also held senior executive positions at Covad Communications, AT&T, Qwest and Telus. Aamir has had first-hand involvement in launching various next generation video platforms and cloud strategies across multiple operators at both CenturyLink and Liberty Global. Aamir has completed business, telecom and strategy training at Harvard Business School and INSEAD in France. He also holds a master’s degree in electrical engineering from the Georgia Institute of Technology and a bachelor’s degree in electrical engineering from the University of South Florida.

“As a board, we are continually evaluating and considering candidates to strengthen our board in order to bring new perspectives and thinking,” said Peter Seeligsohn, Chairman, Espial. “I am very pleased to announce Aamir Hussain as a nominee to our Board of Directors. There are few in the industry who have Aamir’s depth of knowledge and unique industry perspective, given his European and North American experience and from having held executive roles in both telecom and cable companies. His experience will help Espial further expand in the European cable and broader telecom markets. Aamir’s leadership and strategic perspective will be a valuable addition to our board in shaping future strategy and executing on our goals to the benefit of our shareholders.”

“Espial has developed an exciting portfolio of cloud, device and user experience products and solutions. Their recent success across service providers in Europe and North America to launch next generation video services is impressive,” said Aamir Hussain. "Our industry is going through a transformation in technology, delivery and business models and Espial’s focus around RDK, IP, and Cloud are important elements in helping service providers evolve to next generation services. I look forward to working with the Board and management to help them achieve their vision."

With Espial, video service providers create responsive and engaging subscriber viewing experiences incorporating powerful content discovery and intuitive navigation. Service providers achieve ‘Web-speed’ innovation with Espial’s flexible, open software leveraging RDK and HTML5 technologies. This provides competitive advantage through an immersive and personalized user experience, seamlessly blending advanced TV services with OTT content. 
With customers spanning six continents, Espial is headquartered in Ottawa, Canada, with R&D centers in Seattle, Montreal, Silicon Valley, Cambridge and Lisbon, and with sales offices in North America, Europe and Asia. For more information, visit www.espial.com. This press release contains information that is forward looking information with respect to Espial within the meaning of Section 138.4(9) of the Ontario Securities Act (forward looking statements) and other applicable securities laws. In some cases, forward-looking information can be identified by the use of terms such as "may", "will", "should", "expect", "plan", "anticipate", "believe", "intend", "estimate", "predict", "potential", "continue" or the negative of these terms or other similar expressions concerning matters that are not historical facts. In particular, statements or assumptions about, economic conditions, ongoing or future benefits of existing and new customer, and partner relationships or new board nominees, our position or ability to capitalize on the move to more open systems by service providers, existing or future opportunities for the company and products (including our ability to successfully execute on market opportunities and secure new customer wins) and any other statements regarding Espial's objectives (and strategies to achieve such objectives), future expectations, beliefs, goals or prospects are or involve forward-looking information. Forward-looking information is based on certain factors and assumptions. While the company considers these assumptions to be reasonable based on information currently available to it, they may prove to be incorrect. Forward-looking information, by its nature necessarily involves known and unknown risks and uncertainties. A number of factors could cause actual results to differ materially from those in the forward-looking statements or could cause our current objectives and strategies to change, including but not limited to changing conditions and other risks associated with the on-demand TV software industry and the market segments in which Espial operates, competition, Espial’s ability to continue to supply existing customers and partners with its products and services and avoid being displaced by competitive offerings, effectively grow its integration and support capabilities, execute on market opportunities, develop its distribution channels and generate increased demand for its products, economic conditions, technological change, unanticipated changes in our costs, regulatory changes, litigation, the emergence of new opportunities, many of which are beyond our control and current expectation or knowledge. Additional risks and uncertainties affecting Espial can be found in Management’s Discussion and Analysis of Results of Operations and Financial Condition and its Annual Information Form for the fiscal years ended December 31, 2016 on SEDAR at www.sedar.com. If any of these risks or uncertainties were to materialize, or if the factors and assumptions underlying the forward-looking information were to prove incorrect, actual results could vary materially from those that are expressed or implied by the forward-looking information contained herein and our current objectives or strategies may change. Espial assumes no obligation to update or revise any forward looking statements, whether as a result of new information, future events or otherwise, except as required by law. 
Readers are cautioned not to place undue reliance on these forward-looking statements that speak only as of the date hereof.


News Article | May 4, 2017
Site: www.eurekalert.org

IMAGE: Schematic of the pathway describing the evolution of adsorbed ethene (top left) to graphene (bottom left), showing the sequence of intermediates identified in the study and their respective appearance temperatures.

An international team of scientists has developed a new way to produce single-layer graphene from a simple precursor: ethene - also known as ethylene - the smallest alkene molecule, which contains just two atoms of carbon. By heating the ethene in stages to a temperature of slightly more than 700 degrees Celsius - hotter than had been attempted before - the researchers produced pure layers of graphene on a rhodium catalyst substrate. The stepwise heating and higher temperature overcame challenges seen in earlier efforts to produce graphene directly from hydrocarbon precursors. Because of its lower cost and simplicity, the technique could open new potential applications for graphene, which has attractive physical and electronic properties. The work also provides a novel mechanism for the self-evolution of carbon cluster precursors whose diffusional coalescence results in the formation of the graphene layers.

The research, reported as the cover article in the May 4 issue of the Journal of Physical Chemistry C, was conducted by scientists at the Georgia Institute of Technology, Technische Universität München in Germany, and the University of St. Andrews in Scotland. In the United States, the research was supported by the U.S. Air Force Office of Scientific Research and the U.S. Department of Energy's Office of Basic Energy Sciences.

"Since graphene is made from carbon, we decided to start with the simplest type of carbon molecules and see if we could assemble them into graphene," explained Uzi Landman, a Regents' Professor and F.E. Callaway endowed chair in the Georgia Tech School of Physics who headed the theoretical component of the research. "From small molecules containing carbon, you end up with macroscopic pieces of graphene."

Graphene is now produced using a variety of methods, including chemical vapor deposition, evaporation of silicon from silicon carbide, and simple exfoliation of graphene sheets from graphite. A number of earlier efforts to produce graphene from simple hydrocarbon precursors had proven largely unsuccessful, creating disordered soot rather than structured graphene.

Guided by a theoretical approach, the researchers reasoned that the path from ethene to graphene would involve formation of a series of structures as hydrogen atoms leave the ethene molecules and carbon atoms self-assemble into the honeycomb pattern that characterizes graphene. To explore the nature of the thermally-induced, rhodium surface-catalyzed transformations from ethene to graphene, experimental groups in Germany and Scotland raised the temperature of the material in steps under ultra-high vacuum. They used scanning-tunneling microscopy (STM), thermal programmed desorption (TPD) and high-resolution electron energy loss (vibrational) spectroscopy (HREELS) to observe and characterize the structures that form at each step of the process. Upon heating, ethene adsorbed onto the rhodium catalyst evolves via coupling reactions to form segmented one-dimensional polyaromatic hydrocarbons (1D-PAH). 
Further heating leads to dimensionality crossover - one-dimensional to two-dimensional structures - and dynamical restructuring processes at the PAH chain ends, with a subsequent activated detachment of size-selective carbon clusters, following a mechanism revealed through first-principles quantum mechanical simulations. Finally, rate-limiting diffusional coalescence of these dynamically self-evolved cluster precursors leads to condensation into graphene with high purity. At the final stage before the formation of graphene, the researchers observed nearly round, disk-like clusters containing 24 carbon atoms, which spread out to form the graphene lattice.

"The temperature must be raised within windows of temperature ranges to allow the requisite structures to form before the next stage of heating," Landman explained. "If you stop at certain temperatures, you are likely to end up with coking."

An important component is the dehydrogenation process, which frees the carbon atoms to form intermediate shapes; some of the hydrogen resides temporarily on, or near, the metal catalyst surface and assists in the subsequent bond-breaking processes that lead to detachment of the 24-carbon cluster precursors. "All along the way, there is a loss of hydrogen from the clusters," said Landman. "Bringing up the temperature essentially 'boils' the hydrogen out of the evolving metal-supported carbon structure, culminating in graphene."

The resulting graphene structure is adsorbed onto the catalyst. It may be useful attached to the metal, but for other applications, a way to remove it will have to be developed. Added Landman: "This is a new route to graphene, and the possible technological application is yet to be explored."

Beyond the theoretical research, carried out by Bokwon Yoon and Landman at the Georgia Tech Center for Computational Materials Science, the experimental work was done in the laboratory of Professor Renald Schaub at the University of St. Andrews and in the laboratory of Professor Ueli Heiz and Friedrich Esch at the Technische Universität München. Other co-authors included Bo Wang, Michael König, Catherine J. Bromley, Michael-John Treanor, José A. Garrido Torres, Marco Caffio, Federico Grillo, Herbert Frücht, and Neville V. Richardson.

The work at the Georgia Institute of Technology was supported by the Air Force Office of Scientific Research through Grant FA9550-14-1-0005 and by the Office of Basic Energy Sciences of the U.S. Department of Energy through Grant FG05-86ER45234. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the sponsoring organizations.

CITATION: Bo Wang, et al., "Ethene to Graphene: Surface Catalyzed Chemical Pathways, Intermediates, and Assembly," (Journal of Physical Chemistry C). http://dx.
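As a rough way to keep track of the carbon and hydrogen in the pathway described above, the overall process can be sketched as an idealized balance. This schematic is our own summary, not an equation from the paper, and in reality the hydrogen is shed gradually across the intermediate stages rather than in a single step:

$$
12\,\mathrm{C_2H_4\,(ads)} \;\xrightarrow{\;\Delta,\ \mathrm{Rh}\;}\; \text{1D-PAH intermediates} \;\xrightarrow{\;\Delta\;}\; \mathrm{C_{24}\ cluster} \;+\; 24\,\mathrm{H_2}, \qquad n\,\mathrm{C_{24}} \;\longrightarrow\; \text{graphene}
$$

The 24-atom bookkeeping simply reflects that one of the disk-like C24 precursors observed in the study carries the carbon of twelve ethene molecules, whose 48 hydrogen atoms must ultimately leave as hydrogen gas.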


News Article | May 4, 2017
Site: phys.org

Most tumors contain regions of low oxygen concentration where cancer therapies based on the action of reactive oxygen species are ineffective. Now, American scientists have developed a hybrid nanomaterial that releases a free-radical-generating prodrug inside tumor cells upon thermal activation. As they report in the journal Angewandte Chemie, the free radicals destroy the cell components even in oxygen-depleted conditions, causing apoptosis. Delivery, release, and action of the hybrid material can be precisely controlled.

Many well-established cancer treatment schemes are based on the generation of reactive oxygen species (ROS), which induce apoptosis in the tumor cells. However, this mechanism only works in the presence of oxygen, and hypoxic (oxygen-depleted) regions in the tumor tissue often survive the ROS-based treatment. Therefore, Younan Xia at the Georgia Institute of Technology and Emory University, Atlanta, USA, and his team have developed a strategy to deliver and release a radical-generating prodrug that, upon activation, damages cells by a ROS-type radical mechanism, but without the need for oxygen.

The authors explained that they had to turn to the field of polymerization chemistry to find a compound that produces enough radicals. There, the azo compound AIPH is a well-known polymerization initiator. In medicinal applications, it generates free alkyl radicals that cause DNA damage and lipid and protein peroxidation in cells even under hypoxic conditions. However, the AIPH must be safely delivered to the cells in the tissue. Thus, the scientists used nanocages, the cavities of which were filled with lauric acid, a so-called phase-change material (PCM) that can serve as a carrier for AIPH. Once inside the target tissue, irradiation by a near-infrared laser heats up the nanocages, causing the PCM to melt and triggering the release and decomposition of AIPH.

This concept worked well, as the team has shown with a variety of experiments on different cell types and components. Red blood cells underwent pronounced hemolysis. Lung cancer cells incorporated the nanoparticles and were severely damaged by the triggered release of the radical starter. Actin filaments retracted and condensed following the treatment. And the lung cancer cells showed significant inhibition of their growth rate, independently of the oxygen concentration.

Although the authors admit that "the efficacy still needs to be improved by optimizing the components and conditions involved," they have demonstrated the effectiveness of their hybrid system in killing cells, even in places where the oxygen level is low. This strategy might be highly relevant in nanomedicine, cancer theranostics, and in all applications where targeted delivery and controlled release with superb spatial/temporal resolution are desired.

Explore further: New drug delivery system shows promise for fighting solid tumors

More information: Song Shen et al, A Hybrid Nanomaterial for the Controlled Generation of Free Radicals and Oxidative Destruction of Hypoxic Cancer Cells, Angewandte Chemie International Edition (2017). DOI: 10.1002/anie.201702898
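To make the thermal-trigger idea concrete, the sketch below treats radical generation as simple first-order Arrhenius kinetics that switches on only above the melting point of the lauric acid carrier (about 44 degrees Celsius). This is our own hedged illustration, not code, rate constants, or parameters from the study: the activation energy, pre-exponential factor, and the sharp on/off behavior at the melting point are all assumed values chosen only to show the qualitative effect.

```python
# Toy illustration (not from the study): thermally switched radical generation.
# Assumptions (hypothetical values): AIPH decomposes with first-order Arrhenius
# kinetics once released; no release while the lauric-acid PCM is solid (< ~44 C).
import math

R = 8.314            # gas constant, J/(mol*K)
EA = 1.2e5           # assumed activation energy for azo decomposition, J/mol
A_FACTOR = 1.0e13    # assumed pre-exponential factor, 1/s
T_MELT_C = 44.0      # approximate melting point of lauric acid, deg C

def radical_generation_rate(temp_c, aiph_conc=1.0):
    """Relative rate of alkyl-radical generation at a given temperature (toy model)."""
    if temp_c < T_MELT_C:
        return 0.0  # PCM still solid: AIPH stays encapsulated, no decomposition assumed
    temp_k = temp_c + 273.15
    k = A_FACTOR * math.exp(-EA / (R * temp_k))  # first-order rate constant, 1/s
    return k * aiph_conc

for t in (37.0, 43.0, 45.0, 50.0, 60.0):
    print(f"{t:5.1f} C  ->  relative rate {radical_generation_rate(t):.3e} 1/s")
```

The point of the design, on this reading, is that tissue at body temperature sits below the switch, while a modest near-infrared photothermal rise above the melting point both releases the AIPH and accelerates its decomposition into radicals.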


News Article | May 3, 2017
Site: www.eurekalert.org

Sudden cardiac death resulting from fibrillation - erratic heartbeat due to electrical instability - is one of the leading causes of death in the United States. Now, researchers have discovered a fundamentally new source of that electrical instability, a development that could potentially lead to new methods for predicting and preventing life-threatening cardiac fibrillation.

A steady heartbeat is maintained by electrical signals that originate deep within the heart and travel through the muscular organ in regular waves that stimulate the coordinated contraction of muscle fibers. But when those waves are interrupted by blockages in electrical conduction - such as scar tissue from a heart attack - the signals can be disrupted, creating chaotic spiral-shaped electrical waves that interfere with one another. The resulting electrical turbulence causes the heart to beat ineffectively, quickly leading to death.

Scientists have known that instabilities at the cellular level, especially variation in the duration of each electrical signal - known as an action potential - are of primary importance in creating chaotic fibrillation. By analyzing electrical signals in the hearts of an animal model, researchers from the Georgia Institute of Technology and the U.S. Food and Drug Administration have found an additional factor - the varying amplitude of the action potential - that may also cause dangerous electrical turbulence within the heart. The research, supported by the National Science Foundation, was reported April 20 in the journal Physical Review Letters.

"Mathematically, we can now understand some of these life-threatening instabilities and how they develop in the heart," said Flavio Fenton, a professor in Georgia Tech's School of Physics. "We have proposed a new mechanism that explains when fibrillation will occur, and we have a theory that can predict, depending on physiological parameters, when this will happen."

The voltage signal that governs the electrically-driven heartbeat is mapped by doctors from the body surface using electrocardiogram technology, which is characterized by five main segments (P-QRS-T), each representing different activations in the heart. T waves occur at the end of each heartbeat, and indicate the back portion of each wave. Researchers have known that abnormalities in the T wave can signal an increased risk of a potentially life-threatening heart rhythm.

Fenton and his collaborators studied the cellular action potential amplitude, which is controlled by sodium ion channels that are part of the heart's natural regulatory system. Sodium ions flowing into the cells boost the concentration of cations - which carry a positive charge - leading to a phenomenon known as depolarization, in which the action potential of the cell rises above its resting level. The sodium channels then close at the peak of the action potential. While variations in the duration of the action potential indicate problems with the heart's electrical system, the researchers have now associated dynamic variations in the amplitude of the action potential with conduction block and the onset of fibrillation.

"We have shown for the first time that a fundamentally different instability related to amplitude may underlie or additionally affect the risk of cardiac instabilities leading to fibrillation," said Richard Gray, one of the study's co-authors and a biomedical engineer in the Office of Science and Engineering Laboratories in the U.S. Food and Drug Administration. 
"You can have one wave with a long amplitude followed by one wave with a short amplitude, and if the short one becomes too short, the next wave will not be able to propagate," said Diana Chen, a Georgia Tech graduate student and first author of the study. "The waves going through the heart have to move together to maintain an effective heartbeat. If one of them breaks, the first wave can collide with the next wave, initiating the spiral waves." If similar results are found in human hearts, this new understanding of how electrical turbulence forms could allow doctors to better predict who would be at risk of fibrillation. The information might also lead to the development of new drugs for preventing or treating the condition. "One next scientific step would be to investigate pharmaceuticals that would reduce or eliminate the cellular amplitude instability," said Gray. "At the present time, most pharmaceutical approaches are focused on the action potential duration." The critical role of electrical waves in governing the heart's activity allows physics - and mathematics - to be used for understanding what is happening in this most critical organ, Fenton said. "We have derived a mathematical explanation for how this happens, why it is dangerous and how it initiates an arrhythmia," he explained. "We now have a mechanism that provides a better understanding of how these electrical disturbances originate. It's only when you have these changes in wave amplitude that the signals cannot propagate properly." Chen studied at the FDA's Center for Devices and Radiological Health through the NSF/FDA Scholar-in- Residence Program. Operated in collaboration with the National Science Foundation's Directorate for Engineering's Chemical, Bioengineering, Environmental, and Transport Systems, the program enables investigators in science, engineering and mathematics to develop research collaborations within the intramural research environment at the FDA. In addition to those already mentioned, the paper included work from Ilija Uzelac, a postdoctoral fellow, and Conner Herndon, a graduate research assistant. All are from the Georgia Tech School of Physics. This material is based upon work supported by the National Science Foundation under awards CNS-1347015 and CNS-1446675. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. CITATION: Diandian Diana Chen, et al., "A Mechanism for QRS Amplitude Alternans in Electrocardiograms and the Initiation of Spatiotemporal Chaos," (Physical Review Letters, 2017). https:/


A steady heartbeat is maintained by electrical signals that originate deep within the heart and travel through the muscular organ in regular waves that stimulate the coordinated contraction of muscle fibers. But when those waves are interrupted by blockages in electrical conduction - such as scar tissue from a heart attack - the signals can be disrupted, creating chaotic spiral-shaped electrical waves that interfere with one another. The resulting electrical turbulence causes the heart to beat ineffectively, quickly leading to death. Scientists have known that instabilities at the cellular level, especially variation in the duration of each electrical signal - known as an action potential - are of primary importance in creating chaotic fibrillation. By analyzing electrical signals in the hearts of an animal model, researchers from the Georgia Institute of Technology and the U.S. Food and Drug Administration have found an additional factor - the varying amplitude of the action potential - that may also cause dangerous electrical turbulence within the heart. The research, supported by the National Science Foundation, was reported April 20 in the journal Physical Review Letters. "Mathematically, we can now understand some of these life-threatening instabilities and how they develop in the heart," said Flavio Fenton, a professor in Georgia Tech's School of Physics. "We have proposed a new mechanism that explains when fibrillation will occur, and we have a theory that can predict, depending on physiological parameters, when this will happen." The voltage signal that governs the electrically-driven heartbeat is mapped by doctors from the body surface using electrocardiogram technology, which is characterized by five main segments (P-QRS-T), each representing different activations in the heart. T waves occur at the end of each heartbeat, and indicate the back portion of each wave. Researchers have known that abnormalities in the T wave can signal an increased risk of a potentially life-threatening heart rhythm. Fenton and his collaborators studied the cellular action potential amplitude, which is controlled by sodium ion channels that are part of the heart's natural regulatory system. Sodium ions flowing into the cells boost the concentration of cations - which carry a positive charge - leading to a phenomena known as depolarization, in which the action potential of the cell rises above its resting level. The sodium channels then close at the peak of the action potential. While variations in the duration of the action potential indicate problems with the heart's electrical system, the researchers have now associated dynamic variations in the amplitude of the action potential with conduction block and the onset of fibrillation. "We have shown for the first time that a fundamentally different instability related to amplitude may underlie or additionally affect the risk of cardiac instabilities leading to fibrillation," said Richard Gray, one of the study's co-authors and a biomedical engineer in the Office of Science and Engineering Laboratories in the U.S. Food and Drug Administration. "You can have one wave with a long amplitude followed by one wave with a short amplitude, and if the short one becomes too short, the next wave will not be able to propagate," said Diana Chen, a Georgia Tech graduate student and first author of the study. "The waves going through the heart have to move together to maintain an effective heartbeat. 
If one of them breaks, the first wave can collide with the next wave, initiating the spiral waves." If similar results are found in human hearts, this new understanding of how electrical turbulence forms could allow doctors to better predict who would be at risk of fibrillation. The information might also lead to the development of new drugs for preventing or treating the condition. "One next scientific step would be to investigate pharmaceuticals that would reduce or eliminate the cellular amplitude instability," said Gray. "At the present time, most pharmaceutical approaches are focused on the action potential duration." The critical role of electrical waves in governing the heart's activity allows physics - and mathematics - to be used for understanding what is happening in this most critical organ, Fenton said. "We have derived a mathematical explanation for how this happens, why it is dangerous and how it initiates an arrhythmia," he explained. "We now have a mechanism that provides a better understanding of how these electrical disturbances originate. It's only when you have these changes in wave amplitude that the signals cannot propagate properly." Explore further: Cause of killer cardiac disease identified by new method More information: Diandian Diana Chen et al, Mechanism for Amplitude Alternans in Electrocardiograms and the Initiation of Spatiotemporal Chaos, Physical Review Letters (2017). DOI: 10.1103/PhysRevLett.118.168101


News Article | May 4, 2017
Site: globenewswire.com

MENLO PARK, Calif., May 04, 2017 (GLOBE NEWSWIRE) -- GRAIL, Inc., a life sciences company whose mission is to detect cancer early when it can be cured, today announced the appointment of José Baselga, M.D., Ph.D., Brook Byers and Kaye Foster to its Board of Directors. The three new board members will join GRAIL’s existing board of directors: Bill Rastetter, Jeff Huber, Richard Klausner and Robert Nelsen. “I am very pleased to welcome José, Brook and Kaye, three extremely seasoned and accomplished executives, to our Board at this pivotal time for GRAIL,” said Jeff Huber, GRAIL’s Chief Executive Officer. “With these leaders, we are increasing the breadth of strategic leadership, industry experience and operational excellence within our Board. As we work to expand our operations and build integrated programs in science, technology and clinical development, their expertise and counsel will be invaluable. We are looking forward to working with them in our pursuit to transform the way cancer is diagnosed and treated.” José Baselga is the Physician-in-Chief and Chief Medical Officer at Memorial Sloan Kettering Cancer Center (MSK) and Professor of Medicine at Weill Cornell Medical College. Prior to MSK, Dr. Baselga was the Chief of the Division of Hematology/Oncology, Associate Director of the Massachusetts General Hospital Cancer Center and Professor of Medicine at Harvard Medical School. He also was the Chairman of Medical Oncology and Founding Director of the Vall d’Hebron Institute of Oncology in Barcelona, Spain. Dr. Baselga is a past President of the American Association of Cancer Research (AACR) and of the European Society for Medical Oncology, and a past member of the Board of Directors for the American Society of Clinical Oncology (ASCO) and AACR. He is an elected member of the National Academy of Medicine, the American Society of Clinical Investigation, the Association of American Physicians, and a Fellow of the AACR Academy. He is a past member of the Editorial Boards of Cancer Cell, Journal of Clinical Oncology, and Clinical Cancer Research and is the founding editor-in-chief for the AACR flagship journal Cancer Discovery. In addition to joining GRAIL’s Board of Directors, Dr. Baselga is also the Chairman of GRAIL’s Scientific Advisory Board. Dr. Baselga earned his M.D. from Universitat Autonoma de Barcelona and completed residencies in internal medicine at Vall d'Hebron University Hospital and SUNY - Health Sciences Center at Brooklyn. Brook Byers is a senior partner and founding member of the venture capital firm Kleiner Perkins Caufield & Byers (KPCB). Mr. Byers formed the first life sciences practice group in the venture capital profession and led KPCB to become a premier venture capital firm in the medical, healthcare, and biotechnology sectors. Brook has been a pioneer in the fields of precision medicine, molecular diagnostics and genomics, serving as a Steering Committee member for the Coalition of 21st Century Medicine, and through investment and board leadership in companies such as Foundation Medicine, Genomic Health and Veracyte. Brook currently serves on the Board of Directors of Cell Design Labs, Enjoy, Newsela, and Zephyr Health. He also serves on the Board of Overseers of the University of California San Francisco medical campus and hospitals, the Stanford Medicine Advisory Council and the Board of Directors of the New Schools Foundation. Mr. 
Byers holds a bachelor’s degree in Electrical Engineering from Georgia Institute of Technology and an MBA from Stanford University. He is also the recipient of an honorary Ph.D. from Georgia Institute of Technology. Kaye Foster has over 25 years of experience in the pharmaceutical industry leading large, global human resources organizations. She currently advises CEOs and leadership teams focusing on business transformations, talent management strategy and Human Resources strategy development and implementation. Most recently, she was Senior Vice President, Global Human Resources for Onyx Pharmaceuticals where she led all aspects of human resources for U.S. and global operations. Ms. Foster joined Onyx from Johnson & Johnson where she served as an Executive Committee member and Chief Human Resources Officer, leading a worldwide team of over 3,000 human resources professionals. She also held several Human Resources executive positions with Pfizer Inc., supporting its pharmaceutical businesses in Japan, Asia, Africa, Middle East and Latin America, and she led the integration of both the Warner-Lambert and Pharmacia mergers for these regions. She is a Senior Advisor with The Boston Consulting Group (BCG) and sits on the Board of Directors of Agios Pharmaceuticals as well as the Board of Trustees of Spelman College, Stanford Healthcare, ValleyCare Health System and Glide Memorial Church in San Francisco. Kaye holds a bachelor’s degree from Baruch College and an MBA from Columbia Business School. About GRAIL GRAIL is a life sciences company whose mission is to detect cancer early when it can be cured. GRAIL is using the power of high-intensity sequencing, population-scale clinical trials, and state of the art Computer Science and Data Science to enhance the scientific understanding of cancer biology and develop blood tests for early-stage cancer detection. The company’s funding was led by ARCH Venture Partners and includes Amazon, Bezos Expeditions, Bill Gates, Bristol-Myers Squibb, Celgene, GV, Illumina, Johnson & Johnson Innovation, Merck, McKesson Ventures, Sutter Hill Ventures, Tencent, Varian Medical Systems, and other financial partners. For more information, please visit www.grail.com.


News Article | May 4, 2017
Site: phys.org

By heating the ethene in stages to a temperature of slightly more than 700 degrees Celsius—hotter than had been attempted before - the researchers produced pure layers of graphene on a rhodium catalyst substrate. The stepwise heating and higher temperature overcame challenges seen in earlier efforts to produce graphene directly from hydrocarbon precursors. Because of its lower cost and simplicity, the technique could open new potential applications for graphene, which has attractive physical and electronic properties. The work also provides a novel mechanism for the self-evolution of carbon cluster precursors whose diffusional coalescence results in the formation of the graphene layers. The research, reported as the cover article in the May 4 issue of the Journal of Physical Chemistry C, was conducted by scientists at the Georgia Institute of Technology, Technische Universität München in Germany, and the University of St. Andrews in Scotland. In the United States, the research was supported by the U.S. Air Force Office of Scientific Research and the U.S. Department of Energy's Office of Basic Energy Sciences. "Since graphene is made from carbon, we decided to start with the simplest type of carbon molecules and see if we could assemble them into graphene," explained Uzi Landman, a Regents' Professor and F.E. Callaway endowed chair in the Georgia Tech School of Physics who headed the theoretical component of the research. "From small molecules containing carbon, you end up with macroscopic pieces of graphene." Graphene is now produced using a variety of methods including chemical vapor deposition, evaporation of silicon from silicon carbide - and simple exfoliation of graphene sheets from graphite. A number of earlier efforts to produce graphene from simple hydrocarbon precursors had proven largely unsuccessful, creating disordered soot rather than structured graphene. Guided by a theoretical approach, the researchers reasoned that the path from ethene to graphene would involve formation of a series of structures as hydrogen atoms leave the ethene molecules and carbon atoms self-assemble into the honeycomb pattern that characterizes graphene. To explore the nature of the thermally-induced rhodium surface-catalyzed transformations from ethene to graphene, experimental groups in Germany and Scotland raised the temperature of the material in steps under ultra-high vacuum. They used scanning-tunneling microscopy (STM), thermal programed desorption (TPD) and high-resolution electron energy loss (vibrational) spectroscopy (HREELS) to observe and characterize the structures that form at each step of the process. Upon heating, ethene adsorbed onto the rhodium catalyst evolves via coupling reactions to form segmented one-dimensional polyaromatic hydrocarbons (1D-PAH). Further heating leads to dimensionality crossover - one dimensional to two dimensional structures - and dynamical restructuring processes at the PAH chain ends with a subsequent activated detachment of size-selective carbon clusters, following a mechanism revealed through first-principles quantum mechanical simulations. Finally, rate-limiting diffusional coalescence of these dynamically self-evolved cluster-precursors leads to condensation into graphene with high purity. At the final stage before the formation of graphene, the researchers observed nearly round disk-like clusters containing 24 carbon atoms, which spread out to form the graphene lattice. 
"The temperature must be raised within windows of temperature ranges to allow the requisite structures to form before the next stage of heating," Landman explained. "If you stop at certain temperatures, you are likely to end up with coking." An important component is the dehydrogenation process which frees the carbon atoms to form intermediate shapes, but some of the hydrogen resides temporarily on, or near, the metal catalyst surface and it assists in subsequent bond-breaking process that lead to detachment of the 24-carbon cluster-precursors. "All along the way, there is a loss of hydrogen from the clusters," said Landman. "Bringing up the temperature essentially 'boils' the hydrogen out of the evolving metal-supported carbon structure, culminating in graphene." The resulting graphene structure is adsorbed onto the catalyst. It may be useful attached to the metal, but for other applications, a way to remove it will have to be developed. Added Landman: "This is a new route to graphene, and the possible technological application is yet to be explored." More information: Bo Wang et al, Ethene to Graphene: Surface Catalyzed Chemical Pathways, Intermediates, and Assembly, The Journal of Physical Chemistry C (2017). DOI: 10.1021/acs.jpcc.7b01999


News Article | April 27, 2017
Site: www.prnewswire.com

Raisa Ahmad was previously a summer associate with the firm, in which she conducted research and prepared memos for patent litigation cases involving software and security patents, pharmaceuticals, and biomedical devices.  In addition, she has experience preparing claim construction charts, invalidity contentions, and Lanham Act standing memos.  Prior to law school, she was a student engineer and conducted electric-cell substrate impedance sensing analysis for the Center for the Convergence of Physical and Cancer Biology.  Ahmad received her J.D. from the University of Arizona College of Law in 2016 where she was senior articles editor for the Arizona Law Review and received the Dean's Achievement Award Scholarship.  She received her B.S.E., magna cum laude, in biomedical engineering from Arizona State University in 2011.  She is admitted to practice in Texas. Brian Apel practices patent litigation, including post-grant proceedings before the U.S. Patent and Trademark Office.  He has worked for clients in the mechanical, electrical, and chemical industries and has experience in pre-suit diligence including opinion work, discovery, damages, summary judgment, and appeals.  Apel also has experience in patent prosecution, employment discrimination, and First Amendment law.  Before law school, he served as an officer in the U.S. Navy.  Apel received his J.D., magna cum laude, Order of the Coif, from the University of Michigan Law School in 2016 and his B.A., with honors, in chemistry from Northwestern University in 2008.  He is admitted to practice in Minnesota, the U.S. District Court of Minnesota, and before the U.S. Patent and Trademark Office. Zoya Kovalenko Brooks focuses her practice on patent litigation, including working on teams for one of the largest high-tech cases in the country pertaining to data transmission and memory allocation technologies.  She was previously a summer associate and law clerk with the firm.  While in law school, she served as a legal extern at The Coca-Cola Company in the IP group.  Prior to attending law school, she was an investigator intern at the Equal Employment Opportunity Commission, where she investigated over 20 potential discrimination cases.  Brooks received her J.D., high honors, Order of the Coif, from Emory University School of Law in 2016 where she was articles editor for Emory Law Journal and her B.S., high honors, in applied mathematics from the Georgia Institute of Technology in 2013.  She is admitted to practice in Georgia. Holly Chamberlain focuses on patent prosecution in a variety of areas including the biomedical, mechanical, and electromechanical arts.  She was previously a summer associate with the firm.  She received her J.D. from Boston College Law School in 2016 where she was an editor of Intellectual Property and Technology Forum and her B.S. in biological engineering from Massachusetts Institute of Technology in 2013.  She is admitted to practice in Massachusetts and before the U.S. Patent and Trademark Office. Thomas Chisena previously was a summer associate with the firm where he worked on patent, trade secret, and trademark litigation.  Prior to attending law school, he instructed in biology, environmental science, and anatomy & physiology.  Chisena received his J.D., magna cum laude, from the University of Pennsylvania Law School in 2016 where he was executive editor of Penn Intellectual Property Group Online and University of Pennsylvania Journal of International Law, Vol. 37.  
He also received his Wharton Certificate in Business Management in December 2015.  He received his B.S. in biology from Pennsylvania State University in 2009.  He is admitted to practice in Pennsylvania, Massachusetts, and the U.S. District Court of Massachusetts. Claire Collins was a legal intern for the Middlesex County District Attorney's Office during law school.  She has experience researching and drafting motions and legal memorandums.  Collins received her J.D. from the University of Virginia School of Law in 2016 where she was a Dillard Fellow, her M.A. from Texas A&M University in 2012, and her B.A. from Bryn Mawr College in 2006.  She is admitted to practice in Massachusetts. Ronald Golden, III previously served as a courtroom deputy to U.S. District Judge Leonard P. Stark and U.S. Magistrate Judge Mary Pat Thynge.  He received his J.D. from Widener University School of Law in 2012 where he was on the staff of Widener Law Review and was awarded "Best Overall Competitor" in the American Association for Justice Mock Trial.  He received his B.A. from Stockton University in political science and criminal justice in 2005.  He is admitted to practice in Delaware and New Jersey. Dr. Casey Kraning-Rush was previously a summer associate with the firm, where she focused primarily on patent litigation.  She received her J.D. from the University of Pennsylvania Law School in 2016 where she was managing editor of Penn Intellectual Property Group Online and awarded "Best Advocate" and "Best Appellee Brief" at the Western Regional of the AIPLA Giles Rich Moot Court.  She earned her Ph.D. in biomedical engineering from Cornell University in 2013 and has extensive experience researching cellular and molecular medicine.  She received her M.S. in biomedical engineering from Cornell University in 2012 and her B.S., summa cum laude, in chemistry from Butler University in 2008.  She is admitted to practice in Delaware. Alana Mannigé was previously a summer associate with the firm and has worked on patent prosecution, patent litigation, trademark, and trade secret matters.  During law school, she served as a judicial extern to the Honorable Judge James Donato of the U.S. District Court for the Northern District of California.  She also worked closely with biotech startup companies as part of her work at the UC Hastings Startup Legal Garage.  Prior to attending law school, Mannigé worked as a patent examiner at the U.S. Patent and Trademark Office.  She received her J.D., magna cum laude, from the University of California, Hastings College of the Law in 2016 where she was senior articles editor of Hastings Science & Technology Law Journal.  She received her M.S. in chemistry from the University of Michigan in 2010 and her B.A., cum laude, in chemistry from Clark University in 2007.  She is admitted to practice in California and before the U.S. Patent and Trademark Office. Will Orlady was previously a summer associate with the firm, in which he collaborated to research and brief a matter on appeal to the Federal Circuit.  He also analyzed novel issues related to inter partes review proceedings, drafted memoranda on substantive patent law issues, and crafted infringement contentions.  During law school, Orlady was a research assistant to Professor Kristin Hickman, researching and writing on administrative law.  
He received his J.D., magna cum laude, Order of the Coif, from the University of Minnesota Law School in 2016 where he was lead articles editor of the Minnesota Journal of Law, Science and Technology and his B.A. in neuroscience from the University of Southern California in 2012.  He is admitted to practice in Minnesota and the U.S. District Court of Minnesota. Jessica Perry previously was a summer associate and law clerk with the firm, where she worked on patent and trademark litigation.  During law school, she was an IP & licensing analyst, in which she assisted with drafting and tracking material transfer agreement and inter-institutional agreements.  She also worked with the Boston University Civil Litigation Clinic representing pro bono clients with unemployment, social security, housing, and family law matters.  Prior to law school, she was a senior mechanical design engineer for an aerospace company.  She received her J.D. from Boston University School of Law in 2016 where she was articles editor of the Journal of Science and Technology Law, her M.Eng. in mechanical engineering from Rensselaer Polytechnic Institute in 2009, and her B.S. in mechanical engineering from the University of Massachusetts, Amherst in 2007.  She is admitted to practice in Massachusetts and the U.S. District Court of Massachusetts. Taufiq Ramji was previously a summer associate with the firm, in which he researched legal issues that related to ongoing litigation and drafted responses to discovery requests and U.S. Patent and Trademark Office actions.  Prior to attending law school, Ramji worked as a software developer.  He received his J.D. from Harvard Law School in 2016.  He is admitted to practice in California. Charles Reese has worked on matters before various federal district courts, the Court of Appeals for the Federal Circuit, and the Patent Trial and Appeal Board.  His litigation experience includes drafting dispositive, evidentiary, and procedural motions; arguing in federal district court; and participating in other stages of litigation including discovery, appeal, and settlement negotiation.  Previously, he was a summer associate with the firm.  He received his J.D., cum laude, from Harvard Law School in 2016 where he was articles editor of Harvard Law Review, his A.M. in organic and organometallic chemistry from Harvard University in 2012, and his B.S., summa cum laude, in chemistry from Furman University in 2010.  He is admitted to practice in Georgia and the U.S. District Court for the Northern District of Georgia. Ethan Rubin was previously a summer associate and law clerk with the firm.  During law school, he worked at a corporation's intellectual property department in which he prepared and prosecuted patents relating to data storage systems.  He also worked as a student attorney, advocating for local pro bono clients on various housing and family law matters.  Rubin received his J.D., cum laude, from Boston College Law School in 2016 where he was articles editor of Boston College Law Review, his M.S. in computer science from Boston University in 2013, and his B.A., magna cum laude, in criminal justice from George Washington University in 2011.  He is admitted to practice in Massachusetts and before the U.S. Patent and Trademark Office. Pooya Shoghi focuses on patent prosecution, including portfolio management, application drafting, client counseling, and standard essential patent development.  
Prior to joining the firm, he was a patent practitioner at a multinational technology company, where he was responsible for the filing and prosecution of U.S. patent applications.  During law school, he was a legal intern at a major computer networking technology company, where he focused on issues of intellectual property licensing in the software arena.  He received his J.D., with honors, from Emory University School of Law in 2014 where he was executive managing editor of Emory Corporate Governance and Accountability Review.  He received his B.S., summa cum laude, in computer science (2015) and his B.A., summa cum laude, in political science (2011) from Georgia State University.  He is admitted to practice in New York and before the U.S. Patent and Trademark Office. Tucker Terhufen focuses his practice on patent litigation in federal district courts as well as before the International Trade Commission for clients in the medical devices, life sciences, chemical, and electronics industries.  Prior to joining Fish, he served as judicial extern to the Honorable David G. Campbell of the U.S. District Court for the District of Arizona and to the Honorable Mary H. Murguia of the U.S. Court of Appeals for the Ninth Circuit.  He received his J.D., magna cum laude, Order of the Coif, from Arizona State University, Sandra Day O'Connor College of Law in 2016 where he was note and comment editor of Arizona State Law Journal and received a Certificate in Law, Science, and Technology with a specialization in Intellectual Property.  He received his B.S.E., summa cum laude, in chemical engineering from Arizona State University.  He is admitted to practice in California. Laura Whitworth was previously a summer associate with the firm.  During law school, she served as a judicial intern for the Honorable Judge Jimmie V. Reyna of the U.S. Court of Appeals for the Federal Circuit.  She received her J.D., cum laude, from American University Washington College of Law in 2016 where she was senior federal circuit editor of American University Law Review and senior patent editor of Intellectual Property Brief.  She received her B.S. in chemistry from the College of William & Mary in 2013.  She is admitted to practice in Virginia, the U.S. District Court for the Eastern District of Virginia, and before the U.S. Patent and Trademark Office. Jack Wilson was previously a summer associate with the firm.  During law school, he served as a judicial extern for the Honorable Mark Davis of the United States District Court for the Eastern District of Virginia.  Prior to attending law school, he served in the United States Army.  He received his J.D., magna cum laude, from William & Mary Law School in 2016 where he was on the editorial staff of William & Mary Law Review and his B.S. in computer engineering from the University of Virginia in 2009.  He is admitted to practice in Virginia and before the U.S. Patent and Trademark Office. Fish & Richardson is a global patent prosecution, intellectual property litigation, and commercial litigation law firm with more than 400 attorneys and technology specialists in the U.S. and Europe.  Our success is rooted in our creative and inclusive culture, which values the diversity of people, experiences, and perspectives.  Fish is the #1 U.S. 
patent litigation firm, handling nearly three times as many cases as its nearest competitor; a powerhouse patent prosecution firm; a top-tier trademark and copyright firm; and the #1 firm at the Patent Trial and Appeal Board, with more cases than any other firm. Since 1878, Fish attorneys have been winning cases worth billions in controversy, often by making new law, for the world's most innovative and influential technology leaders. For more information, visit https://www.fr.com or follow us at @FishRichardson.
To view the original version on PR Newswire, visit: http://www.prnewswire.com/news-releases/fish--richardson-announces-18-recent-associates-300447237.html


News Article | April 17, 2017
Site: www.eurekalert.org

IMAGE: Tiny incubators were used to simulate early Earth conditions, tracking microbial diversity and methane emissions over a period of 500 days.
For much of its first two billion years, Earth was a very different place: oxygen was scarce, microbial life ruled, and the sun was significantly dimmer than it is today. Yet the rock record shows that vast seas covered much of the early Earth under the faint young sun. Scientists have long debated what kept those seas from freezing. A popular theory is that potent gases such as methane, with many times more warming power than carbon dioxide, created a thicker greenhouse atmosphere than required to keep water liquid today.
In the absence of oxygen, iron built up in ancient oceans. Under the right chemical and biological processes, this iron rusted out of seawater and cycled many times through a complex loop, or "ferrous wheel." Some microbes could "breathe" this rust in order to outcompete others, such as those that made methane. When rust was plentiful, an "iron curtain" may have suppressed methane emissions.
"The ancestors of modern methane-making and rust-breathing microbes may have long battled for dominance in habitats largely governed by iron chemistry," said Marcus Bray, a biology Ph.D. candidate in the laboratory of Jennifer Glass, assistant professor in the Georgia Institute of Technology's School of Earth and Atmospheric Sciences and principal investigator of the study funded by NASA's Exobiology and Evolutionary Biology Program. The research was reported in the journal Geobiology on April 17, 2017.
Using mud pulled from the bottom of a tropical lake, researchers at Georgia Tech gained a new grasp of how ancient microbes made methane despite this "iron curtain." Collaborator Sean Crowe, an assistant professor at the University of British Columbia, collected mud from the depths of Indonesia's Lake Matano, an anoxic iron-rich ecosystem that uniquely mimics early oceans. Bray placed the mud into tiny incubators simulating early Earth conditions, and tracked microbial diversity and methane emissions over a period of 500 days. Minimal methane was formed when rust was added; without rust, microbes kept making methane through multiple dilutions. Extrapolating these findings to the past, the team concluded that methane production could have persisted in rust-free patches of ancient seas. Unlike the situation in today's well-aerated oceans, where most natural gas produced on the seafloor is consumed before it can reach the surface, most of this ancient methane would have escaped to the atmosphere to trap heat from the early sun.
In addition to those already mentioned, the research team included Georgia Tech professors Frank Stewart and Tom DiChristina, Georgia Tech postdoctoral scholars Jieying Wu and Cecilia Kretz, Georgia Tech Ph.D. candidate Keaton Belli, Georgia Tech M.S. student Ben Reed, University of British Columbia postdoctoral scholar Rachel Simister, Indonesian Institute of Sciences researcher Cynthia Henny, Skidaway Institute of Oceanography professor Jay Brandes, and University of Kansas professor David Fowle. This research was funded by NASA Exobiology grant NNX14AJ87G. Support was also provided by a Center for Dark Energy Biosphere Investigations (NSF-CDEBI OCE-0939564) small research grant, and by the NASA Astrobiology Institute (NNA15BB03A).
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the sponsoring organizations. CITATION: Bray M.S., J. Wu, B.C. Reed, C.B. Kretz, K.M. Belli, R.L. Simister, C. Henny, F.J. Stewart, T.J. DiChristina, J.A. Brandes, D.A. Fowle, S.A. Crowe, J.B. Glass. 2017. Shifting microbial communities sustain multi-year iron reduction and methanogenesis in ferruginous sediment incubations. Geobiology.


Grant
Agency: GTR | Branch: EPSRC | Program: | Phase: Research Grant | Award Amount: 2.00M | Year: 2013

Globalization and ever-changing customer demands, which drive product customization, variety, and time-to-market pressure, have greatly intensified competition in automotive and aerospace manufacturing worldwide. Manufacturers are under tremendous pressure to meet changing customer needs quickly and cost-effectively without sacrificing quality. Responding to these challenges, manufacturers have introduced flexible and reconfigurable assembly systems. However, a major challenge is how to obtain production volume flexibility for a product family with low investment and the capability to yield high product quality and throughput while allowing quick production ramp-up. Overcoming these challenges involves three requirements, which are the focus of this proposal: (1) Model reconfigurable assembly system architecture. The system architecture should purposefully take into account future uncertainties triggered by product family mix and product demands. This will require minimizing system changeability while maximizing system reusability to keep cost down; (2) Develop novel methodologies that can predict process capability and manage product quality for given system changeability requirements; and (3) Take advantage of emerging technologies and rapidly integrate them into existing production systems, e.g., new joining processes (Remote Laser Welding) and new materials. This project will address these factors by developing a self-resilient reconfigurable assembly system with in-process quality improvement that is able to self-recover from (i) 6-sigma quality faults and (ii) changes in design and manufacturing. In doing so, it will go beyond the state of the art and practice in the following ways: (1) Since current system architectures face significant challenges in responding to changing requirements, this initiative will incorporate the cost, time, and risks of necessary changes by integrating uncertainty models, decision models for needed changes, and system change modelling; and (2) Current in-process quality monitoring systems use point-based measurements with limited 6-sigma failure root cause identification. They seldom correct operational defects quickly and do not provide in-depth information to understand and model manufacturing defects related to part and subassembly deformation. Existing surface-based scanners are usually used for parts inspection, not in-process quality control. This project will integrate in-line surface-based measurement with automatic Root Cause Analysis and feedforward/feedback process adjustment and control to enhance system response to faults or quality/productivity degradation. The research will be conducted on a reconfigurable assembly system with multi-sector applications. It will involve system changeability/adaptation and in-process quality improvement for: (i) automotive door assembly, implementing an emerging joining technology, Remote Laser Welding (RLW), with precise closed-loop surface quality control; and (ii) airframe assembly, predicting process capability, also with precise closed-loop surface quality control. The results will yield significant benefits to the UK's high-value manufacturing sector, further enhancing it by accelerating the introduction of new, eco-friendly processes such as RLW. The project will also foster interdisciplinary collaboration across a range of disciplines, including data mining and process mining, advanced metrology, manufacturing, and complexity science.
The integration of reconfigurable assembly systems (RAS) with in-process quality improvement (IPQI) is an emerging field, and this initiative will help develop it into an internationally important area of research. The results of the research will inform engineering curriculum components, especially those related to training future engineers to lead the high-value manufacturing sector and the digital economy.
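As a rough illustration of the closed-loop IPQI idea described above, the sketch below feeds an in-line surface measurement back as a corrective offset and flags a 6-sigma excursion for root cause analysis. It is only a minimal sketch under stated assumptions: the measurement units, control gain, limits, and function names are invented for illustration and are not part of the funded project's controller.

from statistics import mean, stdev

def control_limits(history_mm, k=6.0):
    # Estimate a +/- k-sigma band from historical surface deviations (mm).
    mu, sigma = mean(history_mm), stdev(history_mm)
    return mu - k * sigma, mu + k * sigma

def feedback_offset(measured_deviation_mm, gain=0.5):
    # Feed a fraction of the measured deviation back as a corrective offset.
    return -gain * measured_deviation_mm

history = [0.02, -0.01, 0.03, 0.00, -0.02, 0.01, 0.02, -0.03]  # past scans (mm), invented
low, high = control_limits(history)

latest = 0.04  # latest in-line surface scan (mm); value is invented
if not (low <= latest <= high):
    print("6-sigma excursion: trigger automatic root cause analysis")
else:
    print(f"apply corrective offset of {feedback_offset(latest):+.3f} mm to the next cycle")

In practice the proposal combines feedback of this kind with feedforward adjustment and automated root cause analysis over full surface scans rather than a single scalar deviation; the snippet only conveys the control-loop shape.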


Grant
Agency: European Commission | Branch: FP7 | Program: CP-IP | Phase: SEC-2013.2.4-1 | Award Amount: 48.35M | Year: 2014

CORE will consolidate, amplify, extend and demonstrate EU knowledge, capabilities and international co-operation for securing supply chains whilst maintaining or improving business performance, with specific reference to key Supply Chain Corridors. CORE will be driven by the requirements of two groups: Customs, law enforcement authorities and other agencies, which need, nationally and internationally, to increase the effectiveness of security and trade compliance without increasing transaction costs for business, and to increase co-operative security risk management (supervision and control); and the business communities, specifically shippers, forwarders, terminal operators, carriers and financial stakeholders, which need to integrate compliance and trade facilitation concepts such as green lanes and pre-clearance with supply chain visibility and optimisation. CORE will consolidate solutions developed in Reference Projects in each supply chain sector (port, container, air, post). Implementation-driven R&D will then be undertaken, designed to discover gaps and practical problems and to develop capabilities and solutions that can deliver sizeable and sustainable progress in supply chain security across all EU Member States and on a global scale.


Patent
Fujitsu Limited and Georgia Institute of Technology | Date: 2014-02-19

In an information processing apparatus, a comparing unit determines whether the response time of each transaction falls within a previously specified acceptable time range. For each time window, a first calculation unit calculates the load of processes executed in parallel by the servers in a specified tier, based on the transaction data of individual transactions. A second calculation unit calculates a total progress quantity for each time window, based on the transaction data of transactions whose response times fall within the acceptable time range. A determination unit then identifies a specific load value as the threshold at which the total progress quantity begins to decrease despite an increasing load.
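Read as an algorithm, the abstract amounts to a saturation analysis: bucket transactions into time windows, measure the load and the throughput of acceptably fast transactions per window, and report the load at which that throughput starts to fall. The sketch below is a hypothetical illustration, not the patented implementation; the data layout (start/end timestamps), window length, and acceptable response time are all assumptions.

from collections import defaultdict

def window_stats(transactions, window=1.0, acceptable=0.5):
    # transactions: (start, end) timestamps in seconds, assumed to fit in one window
    load = defaultdict(float)      # rough per-window load: summed busy time / window length
    progress = defaultdict(int)    # count of acceptably fast transactions per window
    for start, end in transactions:
        w = int(start // window)
        load[w] += (end - start) / window
        if end - start <= acceptable:
            progress[w] += 1
    return [(load[w], progress[w]) for w in sorted(load)]

def saturation_load(stats):
    # Smallest load at which progress decreases even though the load increases.
    ordered = sorted(stats)
    candidates = [l2 for (l1, p1), (l2, p2) in zip(ordered, ordered[1:])
                  if l2 > l1 and p2 < p1]
    return min(candidates) if candidates else None

txns = [(0.0, 0.2), (0.1, 0.4), (1.0, 1.3), (1.1, 1.9), (2.0, 2.2), (2.05, 2.3)]
print(saturation_load(window_stats(txns)))   # prints the threshold load, here 1.1

The threshold found this way marks the point where adding load no longer increases useful work, which is the quantity the determination unit reports.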


Grant
Agency: Department of Defense | Branch: Missile Defense Agency | Program: STTR | Phase: Phase I | Award Amount: 99.78K | Year: 2014

Missile defense takes place in an uncertain and dynamic environment, so multi-sensor fusion must be employed to aggregate and merge disparate data from the battlefield. However, the fusion process is hindered by the vast amount of uncertainty in operational contexts, such as imprecise measurements and varying environmental conditions. Various algorithms and fusion processes have been developed to manage this uncertainty so that accurate assessments of threats can still be obtained. However, little effort has been made to determine which methods and algorithms are best suited to different conditions and uncertainty models. In our Adaptive Management and Mitigation of Uncertainty in Fusion (AMMUF) project, we will use decision-theoretic probabilistic relational models (DT-PRMs) to model the fusion process and the different design and algorithmic decisions that can be made by system engineers and fusion operators. DT-PRMs can determine optimal decisions under inherent domain uncertainty in a variety of operational conditions. Our AMMUF tool will enable system engineers to determine the optimal fusion configuration in different missile defense contexts, giving battlefield operators the most accurate and efficient information about missile threats. Approved for Public Release 14-MDA-7663 (8 January 14)
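The DT-PRM machinery is much richer than a flat table, but the underlying decision-theoretic step, picking the fusion configuration with the highest expected utility under a probability distribution over operating conditions, can be sketched in a few lines. The condition names, probabilities, utility scores, and configuration labels below are invented for illustration and do not come from the AMMUF project.

# Hypothetical expected-utility selection among candidate fusion configurations.
conditions = {"clear": 0.6, "clutter": 0.3, "jamming": 0.1}   # assumed P(condition)

# utility[config][condition]: assumed accuracy/timeliness score of each candidate
utility = {
    "kalman_track_fusion":    {"clear": 0.95, "clutter": 0.70, "jamming": 0.40},
    "evidential_fusion":      {"clear": 0.85, "clutter": 0.80, "jamming": 0.60},
    "particle_filter_fusion": {"clear": 0.88, "clutter": 0.75, "jamming": 0.55},
}

def expected_utility(config):
    return sum(p * utility[config][c] for c, p in conditions.items())

best = max(utility, key=expected_utility)
print(best, round(expected_utility(best), 3))   # the configuration to deploy

A relational model would let the condition probabilities and utilities depend on structured battlefield context rather than fixed numbers, which is what makes the DT-PRM approach adaptive.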


The project aims, on the one hand, at the development of new biobased materials specially adapted to a wide range of containers and packages (films made by extrusion laminating, trays and lids produced by injection moulding, and bottles made by extrusion blow moulding) and at the improvement of the thermal, mechanical and barrier properties of these packages through nanotechnology and innovative coatings. On the other hand, the project aims at the operational integration of different intelligent technologies and smart devices to provide the packaging value chain with more information about products and processes, increase the safety and quality of products throughout the supply chain, and improve the shelf life of packaged products. In both cases, the application of more flexible alternative processes and more environmentally sustainable and efficient technologies will be considered. The project includes the design, development, optimization and manufacturing of multifunctional smart packages, assuring compliance with environmental requirements through LCA and LCC analysis, managing nanotechnology risk across the whole packaging value chain, and, finally, end-user evaluation in sectors such as the cosmetic, pharmaceutical and food industries. The project results, together with the high impact achieved through the wide range of technologies employed, will lift the European packaging industry to a higher level.


Grant
Agency: European Commission | Branch: H2020 | Program: CSA | Phase: ICT-38-2015 | Award Amount: 2.22M | Year: 2016

DISCOVERY aims to support dialogues between Europe and North America (US and Canada) and to foster cooperation in collaborative ICT R&I, both under Horizon 2020 and under US and Canadian funding programmes. To this end, DISCOVERY proposes a radically new approach to engaging more actively and strategically in supporting dialogues and partnership building for ICT R&I cooperation. At the core of the DISCOVERY action is the Transatlantic ICT Forum, which will be established as a sustainable mechanism to support policy debate and to provide opinions and recommendations furthering meaningful dialogues for purpose-driven and mutually beneficial cooperation between Europe and North America in the field of ICT. DISCOVERY will focus specifically on key aspects that have so far not been properly addressed in the political dialogue, such as funding mechanisms, ICT policy and regulations, and cybersecurity, as well as on ICT priority areas of strategic interest for future partnerships in R&I. DISCOVERY will also stimulate industry engagement and innovation partnerships between industry, research and academia by reinforcing networking between ICT ETPs and US/Canada innovation partnerships; providing a new partner search tool; implementing Doorknock outreach to relevant US and Canada funding programmes; and using a unique set of participatory and co-creative methods and people-centric facilitation techniques to stimulate interaction among the groups of participants in project events, such as the ICT Discovery Lab and well-targeted capacity-building workshops. The DISCOVERY consortium is best placed to leverage the required expertise, engagement with ICT dialogues, shared vision, networking capacity, access to a wide range of political, industry and economic thought-leaders throughout the EU, US and Canada, and resources towards action- and result-oriented dialogues, contributing significantly to reinforcing ICT R&I cooperation between Europe and North America.


Patent
Georgia Institute of Technology and Foundation University | Date: 2014-06-18

A method for preparing a conjugated polymer involves a DHAP polymerization of a 3,4-dioxythiophene, 3,4-dioxyfuran, or 3,4-dioxypyrrole and, optionally, at least one second conjugated monomer in the presence of a Pd- or Ni-containing catalyst, an aprotic solvent, and a carboxylic acid at a temperature in excess of 120 °C. At least one of the monomers is substituted with hydrogen-reactive functionalities and at least one of the monomers is substituted with Cl, Br, and/or I. The polymerization can be carried out at a temperature of 140 °C or more, and the DHAP polymerization can be carried out without a phosphine ligand or a phase transfer agent. The resulting polymer can display a dispersity of less than 2 and a degree of polymerization in excess of 10.
