

Dinis S.,University of Porto | de Oliveira J.T.,ULHT | de Oliveira J.T.,University of Porto | Pinto R.,University of Porto | And 3 more authors.
International Journal of Women's Health | Year: 2015

Interstitial cystitis, presently known as bladder pain syndrome, has been recognized for over a century but is still far from being understood. Its etiology is unknown, and the syndrome probably harbors different diseases. Autoimmune dysfunction, urothelial leakage, infection, central and peripheral nervous system dysfunction, genetic disease, childhood trauma/abuse, and subsequent stress response system dysregulation might be implicated. Management is slowly evolving from a solo act by the end-organ specialist to a team approach based on new typing and phenotyping of the disease. However, oral and invasive treatments are still largely aimed at the bladder and are based on currently proposed pathophysiologic mechanisms. Future research will better define the disease, permitting individualization of treatment. © 2015 Dinis et al.


Broome M.R.,Veterinary Medical Imaging | Peterson M.E.,Animal Endocrine Clinic | Kemppainen R.J.,Auburn University | Parker V.J.,The Ohio State University | Richter K.P.,Veterinary Specialty Hospital
Journal of the American Veterinary Medical Association | Year: 2015

Objective-To describe findings in dogs with exogenous thyrotoxicosis attributable to consumption of commercially available dog foods or treats containing high concentrations of thyroid hormone. Design-Retrospective and prospective case series. Animals-14 dogs. Procedures-Medical records were retrospectively searched to identify dogs with exogenous thyrotoxicosis attributable to dietary intake. One case was found, and subsequent cases were identified prospectively. Serum thyroid hormone concentrations were evaluated before and after feeding of meat-based products suspected to contain excessive thyroid hormone was discontinued. Scintigraphy was performed to evaluate thyroid tissue in 13 of 14 dogs before and 1 of 13 dogs after discontinuation of suspect foods or treats. Seven samples of 5 commercially available products fed to 6 affected dogs were analyzed for thyroxine concentration; results were subjectively compared with findings for 10 other commercial foods and 6 beef muscle or liver samples. Results-Total serum thyroxine concentrations were high (median, 8.8 μg/dL; range, 4.65 to 17.4 μg/dL) in all dogs at initial evaluation; scintigraphy revealed subjectively decreased thyroid gland radionuclide uptake in 13 of 13 dogs examined. At 4 weeks after feeding of suspect food or treats was discontinued, total thyroxine concentrations were within the reference range for all dogs and signs associated with thyrotoxicosis, if present, had resolved. Analysis of tested food or treat samples revealed a median thyroxine concentration for suspect products of 1.52 μg of thyroxine/g, whereas that of unrelated commercial foods was 0.38 μg of thyroxine/g. Conclusions and Clinical Relevance-Results indicated that thyrotoxicosis can occur secondary to consumption of meat-based products presumably contaminated by thyroid tissue, and can be reversed by identification and elimination of suspect products from the diet. © 2015, American Veterinary Medical Association. All rights reserved.


Zolotov A.,Hebrew University of Jerusalem | Zolotov A.,The Ohio State University | Dekel A.,Hebrew University of Jerusalem | Mandelker N.,Hebrew University of Jerusalem | And 9 more authors.
Monthly Notices of the Royal Astronomical Society | Year: 2015

We use cosmological simulations to study a characteristic evolution pattern of high-redshift galaxies. Early, stream-fed, highly perturbed, gas-rich discs undergo phases of dissipative contraction into compact, star-forming systems ('blue' nuggets) at z ~ 4-2. The peak of gas compaction marks the onset of central gas depletion and inside-out quenching into compact ellipticals (red nuggets) by z ~ 2. These are sometimes surrounded by gas rings or grow extended dry stellar envelopes. The compaction occurs at a roughly constant specific star formation rate (SFR), and the quenching occurs at a constant stellar surface density within the inner kpc (Σ1). Massive galaxies quench earlier, faster, and at a higher Σ1 than lower mass galaxies, which compactify and attempt to quench more than once. This evolution pattern is consistent with the way galaxies populate the SFR-size-mass space, and with gradients and scatter across the main sequence. The compaction is triggered by an intense inflow episode, involving (mostly minor) mergers, counter-rotating streams or recycled gas, and is commonly associated with violent disc instability. The contraction is dissipative, with the inflow rate > SFR, and the maximum Σ1 anticorrelated with the initial spin parameter. The central quenching is triggered by the high SFR and stellar/supernova feedback (maybe also active galactic nucleus feedback) due to the high central gas density, while the central inflow weakens as the disc vanishes. Suppression of fresh gas supply by a hot halo allows the long-term maintenance of quenching once above a threshold halo mass, inducing the quenching downsizing. © 2015 The Authors.
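For readers scanning the abstract, the two quantities it tracks can be written out explicitly; these are simply the standard definitions implied by the wording ("stellar surface density within the inner kpc" and "specific star formation rate"), not anything beyond it:

\[
\Sigma_1 \equiv \frac{M_*(r < 1\,\mathrm{kpc})}{\pi\,(1\,\mathrm{kpc})^2}, \qquad \mathrm{sSFR} \equiv \frac{\mathrm{SFR}}{M_*}.
\]

The quenching threshold described above is then a roughly constant value of Σ1, while the compaction proceeds at a roughly constant sSFR.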


Yi H.,The Ohio State University | Feiock R.C.,Florida State University
International Journal of Climate Change Strategies and Management | Year: 2015

Purpose - This paper aims to examine state adoption of climate action plans (CAPs) and investigates the factors driving the adoption of these climate policies in the states. Design/methodology/approach - The framework formulated to explain state climate actions involves four dimensions: climate risks, climate politics, climate economics and climate policy diffusion. These hypotheses are tested with event history analysis on a panel data set covering 48 continental US states from 1994 to 2008. Findings - This paper found empirical evidence to support the climate politics, economics and policy diffusion explanations. It also found that climate risks are not taken into account in states’ climate actions. A comparison of state and local climate policymaking is also presented. Originality/value - The paper investigates the motivations of state governments in adopting CAPs, and makes comparisons with local climate strategies. It contributes to academic understanding of the multilevel governance of climate protection in the USA. © Emerald Group Publishing Limited.
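As a rough illustration of the event history analysis mentioned above, a discrete-time logit fit on a state-year panel is one common way to set up such a model. This is a hedged sketch only: the file name and column names (adopted_cap, climate_risk, politics, econ, neighbor_adoptions, already_adopted) are hypothetical placeholders, not variables from the paper.

```python
# Hypothetical sketch of a discrete-time event history (logit) model of state
# CAP adoption on a state-year panel, 1994-2008. All names are placeholders.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("state_year_panel.csv")      # one row per state per year
risk_set = panel[panel["already_adopted"] == 0]  # states leave the risk set once they adopt

model = smf.logit(
    "adopted_cap ~ climate_risk + politics + econ + neighbor_adoptions",
    data=risk_set,
).fit()
print(model.summary())
```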


News Article | August 22, 2016
Site: http://www.biosciencetechnology.com/rss-feeds/all/rss.xml/all

New research links specific inherited genetic differences (alterations) to an increased risk for eye (uveal) melanoma, a rare form of melanoma that arises from pigment cells that determine eye color. Roughly 2,500 people are diagnosed with uveal melanoma in the United States annually. Previous clinical data suggests uveal melanoma is more common in Caucasians and individuals with light eye coloration; however, the genetic mechanisms underlying this cancer's development were largely unknown. In this new study -- co-authored by ophthalmologic pathologist and cancer geneticist Mohamed Abdel-Rahman, M.D., Ph.D., of The Ohio State University Comprehensive Cancer Center - Arthur G. James Cancer Hospital and Richard J. Solove Research Institute and cancer geneticist Tomas Kirchhoff, Ph.D., of the Perlmutter Cancer Center of NYU School of Medicine - scientists report the first evidence of a strong association between genes linked to eye color and development of uveal melanoma. Reported data suggests that inherited genetic factors associated with eye and skin pigmentation could increase a person's risk for uveal melanoma. Abdel-Rahman, Kirchhoff and team report their findings in the medical journal Scientific Reports. "This is a very important discovery that will guide future research efforts to explore the interactions of these pigmentary genes with other genetic and environmental risk factors in cancers not linked to sun exposure, such as eye melanoma. This could provide a paradigm shift in the field. Our study suggests that in eye melanoma the pigmentation difference may play a direct cancer-driving role, not related to sunlight protection," says Abdel-Rahman. Unlike other solid tumors, there has been limited progress in understanding the contribution of genetic risk factors to the development of uveal melanoma, researchers say, primarily due to the absence of comprehensive genetic data from patients as the large sample cohorts for this rare cancer type have not been available for research. To overcome these limitations, researchers analyzed samples from more than 270 patients with uveal melanoma, most of whom were treated at Ohio State. Because there is a known clinical connection between eye melanoma and skin cancer, in this study researchers sought to determine whether there were commonly shared genetic factors between both diseases, as the inherited genetic risk of skin melanoma has been more extensively explored in previous medical literature. The team analyzed 29 inherited genetic mutations previously linked with skin melanoma to determine if there was an associated risk of uveal melanoma. This analysis revealed that five genetic mutations were significantly associated with uveal melanoma risk. The three most significant genetic associations occurred in a genetic region that determines eye color. "Genetic susceptibility to uveal melanoma has been traditionally thought to be restricted only to a small groups of patients with family history. Now our strong data shows the presence of novel genetic risk factors associated with this disease in a general population of uveal melanoma patients," says Kirchhoff. "But this data is also important because it indicates -- for the first time -- that there is a shared genetic susceptibility to both skin and uveal melanoma mediated by genetic determination of eye color. This knowledge may have direct implications in the deeper molecular understanding of both diseases," adds Kirchhoff. 
Researchers expect the data presented in this study to fuel the formation of large national and international research consortiums to conduct comprehensive, systematic analysis of inherited (germline) genome data in large cohorts of uveal melanoma patients. "This type of collaboration is critically needed to dissect additional modifying genetic risk factors that may be uveal melanoma specific. This has important consequences not only for the prevention or early diagnosis of the disease but potentially for more improved therapies for at-risk patients," says Kirchhoff. "Federal funding will be crucial to support research of rare cancers such as eye melanoma as it is likely, as shown in this study, that the impact of such research will extend across the different cancer types," adds Abdel-Rahman.


News Article
Site: http://news.yahoo.com/science/

This image by the Hubble Space Telescope shows a dramatic view of the spiral galaxy M51, dubbed the Whirlpool Galaxy. Seen in near-infrared light, most of the starlight has been removed, revealing the Whirlpool's skeletal dust structure. Paul Sutter is a visiting scholar at The Ohio State University's Center for Cosmology and AstroParticle Physics (CCAPP). Sutter is also host of the podcasts Ask a Spaceman and RealSpace, and the YouTube series Space in Your Face. Sutter contributed this article to Space.com's Expert Voices: Op-Ed & Insights. I hate making soufflés. Or, to be more precise, I hate trying to make soufflés. You know, that puffy cheese-and-egg French dish? Julia Child and Alton Brown make it look so easy, but it's a real devil to cook it just so to get that stratospheric tower of deliciousness. If you bake it too quickly or at too hot a temperature, you end up with a thick lump of tasteless remorse. So how do you cook a spiral galaxy? How do you cook any galaxy? It's pretty straightforward, really: Take a nebulous cloud of gas and dark matter in the early universe, and … wait. Given enough time, any tiny clump or seed in that cloud will attract its neighbors, making its gravitational pull even stronger, attracting more neighbors, growing stronger yet, and on and on for hundreds of millions of years, as I explain in this video on how to bake a galaxy. That's it! The recipe for a galaxy is pretty simple. All it takes is gravity doing what gravity loves to do, and some time. But to make a spiral galaxy, you have to mix things just right. The problem is that there's only so much potential star stuff (aka, blobs of gas) floating around in any galaxy. If too much stuff crams into a galaxy all at once, all the available gas gets crushed into stars in one big burst. Fast-forward 13 billion years, and you're left with a lumpy, mostly elliptical galaxy, full of red stars. And when it comes to galaxies, red means dead. Or at least, really, really old. Elliptical galaxies did their star party just once, when they were young, burning through their inheritance in one fantastic splash. These prodigal galaxies never came back. They still have stars hanging around, but they are small suns — and hence long-lived — and mostly red. Elliptical galaxies are now nothing but giant, sleepy retirement communities. Portions of spiral galaxies are red, too: the central bulge and the "halo," or the sprinkling of stars that live above and below the main, flat disk of the galaxy. But the spirals themselves are a bright, blazing blue, cluing astronomers in that stars there are young, and the party lives on. Even though a spiral galaxy may have formed a long time ago, triggering an intense burst of star formation, whatever's happening inside the spirals is an ongoing process, something that doesn't burn up too much or too little of a galaxy's gas reserves. So this means the spirals appear sometime after the galaxy itself assembles, and that whatever makes the spirals keeps on making them over eons. If spirals were only a temporary, once-in-a-galactic-lifetime thing, astronomers would hardly see any such galaxies left today. Instead, between half and two-thirds of all galaxies feature spiral arms. It must be Julia Child's universe, because that's a lot of perfectly cooked soufflés. If you just look at a spiral galaxy, it seems as if the arms are massive pinwheels. Blow on a galaxy (hard enough), and you just might make the galaxy spin faster.
But galaxies aren't Frisbees; they're not solid objects all connected to themselves with plastic and glue. This means that the inner parts of a galaxy spin faster than the outer parts. If spiral arms were actually things attached to the center of the galaxy and spinning as fast as all the stars, the arms would've been tightly wound together like pasta on a fork a long time ago. So if spiral arms aren't things, then what the heck are they? From looking at a spiral galaxy, it seems as if all the stars are bound up in the central bulge and in the arms, with vast tracts of wasteland everywhere else. But deeper observations reveal what the human eyeball can't: The disks of spiral galaxies are filled with stars. Positively infested! The arms themselves aren't all that much denser than the seemingly empty gaps. The fact that spiral arms are only slightly more dense — but not crazily so — than the rest of the disk is a clue. The best picture astronomers have, so far, is that the spiral arms are actually — get this — density waves. Ripples in the galactic pond. "Wait, wait, wait," you may say, "I'm a person of the world. I've seen ponds, buddy. They have ripples, but those ripples are most certainly not spirals." Well, take the ripples in the pond and freeze them for just a moment. They usually look like a bunch of concentric circles, with smaller ones inside larger ones. Imagine stretching them out so the ripples are ellipses rather than circles. OK so far? Good. Now make the ellipses spin, with the little ellipses in the center spinning faster than the big ones at the edge. Before you ask why — we're building a model of a galaxy, and galaxies aren't exactly circular. Think about it long enough and squint your eyes, and you'll see that there are places where the long side of one ellipse runs into the short side of another. All this bunching-up forms a pattern. It forms a, wait for it, spiral! So that's the supposed origin of the spiral in a galaxy. Not ripples in a pond — we're done with that analogy — but density waves, places where the stars and gas are naturally denser than their surroundings. The waves come from a variety of sources, either tiny disturbances ("tiny" here means something like a supernova) amplified to galactic proportions or leftover wiggles from interactions with smaller galaxies. The bunched-up, spinning, Matryoshka-doll density waves explain why there are spiral features in the first place, but that model doesn't explain what makes the arms so striking. Sure, the arms may be slightly denser than what surrounds them, but not nearly as much as you would guess from looking at the galaxy. [How Galaxies are Classified by Type (Infographic )] That brings us to part 2: Extra stars make the spiral arms, and in return, the spiral arms make extra stars. The spiral arms are the drivers going half the speed limit in the highways of the galaxy. Fast-moving cars catch up to the slow-poke, get caught up with all the other cars trying to make their way around, honk their horns a lot, then floor it once they pass. Meanwhile the high-density disturbance plods its way down the road, oblivious and annoying. (Watch "Spiral Arms, the Galactic Traffic Jams.") The spiral arms are indeed rotating, after all, but at a different speed than the stars and gas. A cloud of gas catches up to the spiral arm, compresses because of the slightly higher density, and pops out as a star on the other end. 
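(An aside on the nested-ellipse picture described above: it can be sketched numerically. Drawing concentric ellipses whose orientation angle twists steadily with radius produces exactly the crowding the article describes, and the crowding traces out a spiral. The numbers below are arbitrary choices for a toy figure, not a dynamical model.)

```python
# Toy kinematic density-wave illustration: nested ellipses whose position angle
# rotates with radius crowd together along a spiral. All values are arbitrary.
import numpy as np
import matplotlib.pyplot as plt

theta = np.linspace(0.0, 2.0 * np.pi, 400)
axis_ratio = 0.8              # how squashed each orbit is
twist_per_unit_radius = 0.5   # extra rotation (radians) per unit of radius

for a in np.linspace(0.5, 5.0, 40):   # semi-major axes of the nested orbits
    x = a * np.cos(theta)
    y = a * axis_ratio * np.sin(theta)
    pa = twist_per_unit_radius * a    # orientation angle grows with radius
    xr = x * np.cos(pa) - y * np.sin(pa)
    yr = x * np.sin(pa) + y * np.cos(pa)
    plt.plot(xr, yr, color="k", linewidth=0.5)

plt.gca().set_aspect("equal")
plt.title("Nested, twisted ellipses tracing a spiral pattern")
plt.show()
```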
In the galactic outskirts, the stars and gas sometimes are moving even slower than the density wave, but the process still happens when the spiral catches up to them. This is what powers the ongoing star formation in spiral galaxies: The arms themselves are star-making factories. Inside the arms, all sorts of stars are made: big ones, medium ones, little ones. The medium and petite stars live nice, long, stable lives, and after their violent youth in the spiral arm, they move out into a comfortable retirement in the gaps. The large stars, burning a bright blue, never leave, coming to the end of their short lives before ever making it out of the arms of their birth.  Now, finally, all the pieces of the puzzle have clicked into place. The spiral arms are an illusion. Well, almost an illusion. They're definitely denser than the rest of the galaxy, but not by much. However, the stars that live in the arms are uncharacteristically blue and bright, while the longer-lived stars in the gaps are redder and dimmer, giving humans' ignorant eyes, observing visible wavelengths, the impression of giant pinwheels in the sky.  The same goes for other wavelengths: Ultraviolet picks out the active young stars, and infrared highlights star-forming dust clouds, which all live primarily in the arms. It’s only by taking long-exposure images that we can see that the gaps between the spirals are not empty space. Sure, you see the spirals, but you should also notice everywhere else: Look at all the stuff! Learn more by listening to the episode "How Do Spiral Galaxies Form?" on the Ask a Spaceman podcast, available on iTunes and on the Web at http://www.askaspaceman.com. Thanks to Jayeeta Sarkar for the question that led to this episode! Ask your own question on Twitter using #AskASpaceman or by following Paul @PaulMattSutter and at facebook.com/PaulMattSutter. Follow all of the Expert Voices issues and debates — and become part of the discussion — on Facebook, Twitter and Google+. The views expressed are those of the author and do not necessarily reflect the views of the publisher. This version of the article was originally published on Space.com. Copyright 2015 SPACE.com, a Purch company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.


News Article
Site: http://news.yahoo.com/science/

Cancer has passed heart disease as the leading cause of death in nearly half of U.S. states, according to a new report. In 2014, cancer was the leading cause of death in 22 states, including many in the West and Northeast. That's a jump from the year 2000, when cancer was the leading cause of death in just two states. In the rest of the 28 states, heart disease remained the leading cause of death in 2014. And in the U.S. population as a whole, heart disease deaths still outnumber cancer deaths: There were 614,348 people who died from heart disease in 2014, compared to 591,699 who died from cancer, according to the report, from the Centers for Disease Control and Prevention (CDC). Still, the new report shows that deaths from cancer have nearly caught up with deaths from heart disease over the last several decades in the United States. That's both because deaths from heart disease have declined in recent decades and because deaths from cancer have continued to tick upward. For example, in 1985, more than 770,000 people died from heart disease in the U.S., compared to about 450,000 from cancer. But by 2011, that gap had narrowed; during that year, there were 596,577 deaths from heart disease and 576,691 from cancer, the report said. [Top 10 Leading Causes of Death] The new report "highlights the great strides that the cardiovascular community has [made]," in educating people about risk factors for heart disease, said Dr. Laxmi Mehta, director of the Women’s Cardiovascular Health Program at The Ohio State University Wexner Medical Center, who was not involved in the report. [Map: Causes of Death in the U.S.] This education led to a reduction in risk factors for heart disease, such as smoking, Mehta said. It's also helped people better understand the symptoms of heart disease, leading to earlier diagnosis of the condition. And doctors have improved the way they treat heart attacks, leading to a reduction in death rates from heart attack complications, she said. In contrast, some cancers remain hard to catch in the early stages, Mehta said. And even though heart disease and cancer share many of the same risk factors, a person's genes may play a larger role in the development of some cancers, making the disease harder to tackle using preventive steps compared with heart disease, Mehta said. But the new findings don't mean people can become complacent about heart disease. Although cancer deaths were on course to surpass heart disease deaths by the early 2010s, this didn't happen. That's because, from 2011 to 2014, heart disease deaths increased slightly more than cancer deaths, keeping heart disease at the top of the rankings overall. Mehta noted that obesity rates and inactivity among children are on the rise, which could contribute to an increase in heart disease deaths in decades to come. "The last thing we want is people to think, 'I don’t have to worry about heart disease anymore,'" Mehta said. "Even if cancer surpasses heart disease now ... in the future there's potential for it coming back," she said. The new report is published today (Aug. 24) by the CDC's National Center for Health Statistics. Copyright 2016 LiveScience, a Purch company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
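To make the narrowing gap concrete, the figures quoted in the article can be tabulated directly; the 1985 values are approximate, as stated above.

```python
# Gap between U.S. heart disease and cancer deaths, using the figures quoted
# in the article (1985 numbers are approximate).
deaths = {
    1985: (770_000, 450_000),   # heart disease, cancer (approximate)
    2011: (596_577, 576_691),
    2014: (614_348, 591_699),
}
for year, (heart, cancer) in deaths.items():
    print(f"{year}: heart disease deaths exceed cancer deaths by {heart - cancer:,}")
```

Run as-is, this shows the gap shrinking from roughly 320,000 in 1985 to about 20,000-23,000 in 2011-2014.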


News Article
Site: http://news.yahoo.com/science/

Paul Sutter is an astrophysicist at The Ohio State University and the chief scientist at COSI Science Center. Sutter is also host of the podcasts Ask a Spaceman and RealSpace, and the YouTube series Space In Your Face. Sutter contributed this article to Space.com's Expert Voices: Op-Ed & Insights. To paraphrase Galileo, "The book of nature is written in mathematical characters." The language that physicists and astronomers use to describe the natural world around us and the vast cosmos above us is just that — mathematics. It's through theoretical equations, data analysis number-crunching, and hardcore computer simulations that scientists pry open nature's secrets from her jealous hands. [Images: The World's Most Beautiful Equations] Mathematics is a fantastic tool, revealing more about the universe than we could've ever dreamt when the first scientists started applying rigorous methods to their natural philosophy. But that blessing is also a curse. Mathematics, the language that proves so adept at describing nature, is not the easiest language to translate into, say, plain English. That difficulty — the same difficulty in translating from any language into another — is at the root of much of the distrust some people have of astronomers and scientific findings. It's nothing new, unfortunately — just ask Galileo how much trouble he had. Scientists have an undeserved reputation for being poor communicators, but this couldn't be further from the truth. A healthy fraction of a scientist's day is filled with communication: coordinating work with colleagues and students, writing papers and grant proposals, preparing and giving talks at conferences and workshops, and teaching. How else is a scientist supposed to convince their fellows that they've hit upon the Next Great Idea if those results aren't communicated clearly? [Scientists Should Learn to Talk to Kids] Scientists are some of the strongest and most eloquent communicators you'll ever meet — when they're speaking their "native" language of mathematics and jargon. Jargon words are just shorthand expressions for complex topics, and any profession, from physicists to bakers, use it. It's just that bakers aren't usually called upon to report their findings to the public. And many scientists are up to the challenge of translating their findings into non-jargon English, but there's a problem: there's no good reason for them to do it. The priorities for a scientist in our current academic system are, in order: 1) get grants, 2) write papers, and 3) anything else. That "anything else" includes teaching, serving on committees, refereeing papers, and — in the tiny fraction of time leftover — engage with the broader community. Oh, and maybe spend some time with their families. If you've ever wondered why most scientists don't go to the trouble of communicating their work with the public, there's your reason: there's no incentive for them to do it. There aren't any rewards, and there certainly isn't any money. When a scientist does engage with the public, say, by giving a public lecture or visiting a classroom, by and large they are doing it in their spare-spare-spare time, and doing it because they enjoy it. So we (and "we" here means both scientists and the public) have a problem: the knowledge that scientists gain about the natural world stays relatively locked up within the scientific community, the scientists have no incentive to share it more broadly, and the public grows ever more distrustful of scientists. 
That reduces science funding opportunities, which means researchers have to work even harder to get grants, which means they have even less time for outreach …. We need to break this cycle. Society needs to be scientifically literate to function, and scientists need public support to continue being scientists. This is where storytelling comes in. Stories are powerful. They resonate with us on a human level in a way that bare numbers can't. And there are many creative ways to tell stories. Usually scientists are nervous to tell stories based on science — they are, after all, trained to be as precise and exacting as possible. Fortunately, there are many talented people around the world who are experts at telling stories — artists. Such as dancers. Yes, dance. People moving their bodies to music. Dance is a natural "language" for interpreting and representing physical concepts: the way a dancer thinks about the world, in terms of transfers of momentum and flows of energy, isn't much different from a physicist. Endeavors like the popular "Dance Your Ph.D." program or a project I'm involved with, "Song of the Stars," take advantage of that natural connection. In "Song of the Stars," the dances reflect themes from astrophysical phenomena. We've all been wowed by Hubble images, but it's something completely different to be immersed in the formation of the first stars or to witness a companion being pulled into a black hole, as only dance can express. To have astronomy brought down to Earth and be brought to life. To explore and share astrophysical phenomena in new and creative ways. To interpret the motions of gas and the play of complex forces using only the movement of the human body. To be told a story in a way that emotionally connects with us. And there are so many wonderful stories to tell about the universe, stories revealed by the scientific process but not usually exposed to the public in a way that they can appreciate and enjoy. [Do Science and Art Share a Source? - Café Panel Chat ] "Song of the Stars" is a blending of astronomy and dance to tell the life stories of the stars above. From the first revolution of light more than 13 billion years ago in a dark universe, to a galactic collision that sparks a new generation, to the loss of a companion into a black hole, to a spectacular supernova that sends one last message across the universe. Dance pieces depicting these scenarios are interwoven with narration that conveys the science and gives the audience enough information to fully appreciate the creative work of the artists. I'm continually fascinated by the ever-unfolding mysteries that the universe presents to us, and I want to share those mysteries with anyone I can. This is why I started working with Seven Dance Company to create "Song of the Stars." By sharing what I know with dancers and choreographers, we're working together to translate mathematics and jargon into new languages and use those new languages to tell stories that connect with us in different, emotional ways. This process sacrifices technical details, which is fine. I'm trying to communicate intuition, not information. If an audience wants reams of complex text and mathematics, they're already well-served. Most people may not realize the beauty and drama that plays out in the heavens above, because it's never been shared with them in a way that makes them care. Many people are immediately "turned off" by science or space concepts. But maybe dance can reach them. 
Maybe other artistic expressions can communicate to them. Maybe if science is shared with them in a way that they can appreciate and enjoy, we can break the cycle of distrust. Maybe if science knowledge is presented in new ways — away from meaningless soundbites or contextless data points — audiences can gain an understanding of, and an appreciation for, what scientists do. And maybe those audiences can gain an appetite for more. We're all curious; it is part of what makes us human. If that curiosity can be awakened — or reawakened — maybe the next time scientists beg the public for money they won't be immediately dismissed. Maybe the next time a research group publishes a new result, it's met with joy and fascination from all corners of society. Maybe a kid who never realized he or she could be a scientist pushes toward a new career. The point of combining science with the arts isn't to necessarily dictate what the artist creates, but rather to explore a shared experience and find the common ground between the disciplines. The point is to inspire artists and to bring science to new audiences who wouldn't normally be interested in the topics.  To reveal and revel in what science truly is: an expression of our shared human curiosity, expressed in the language of mathematics, but translated to make it enjoyable by everyone. "Song of the Stars" is supported by a Kickstarter campaign. Learn more by listening to the episode "What's the point in talking about science?" on the Ask A Spaceman podcast, available on iTunes and on the Web at http://www.askaspaceman.com. Ask your own question on Twitter using #AskASpaceman or by following Paul @PaulMattSutter and facebook.com/PaulMattSutter. Follow all of the Expert Voices issues and debates — and become part of the discussion — on Facebook, Twitter and Google+. The views expressed are those of the author and do not necessarily reflect the views of the publisher. This version of the article was originally published on Space.com. Do Not Fear Failure, The Lessons are Important (Op-Ed) Copyright 2016 SPACE.com, a Purch company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.


News Article
Site: http://www.cemag.us/rss-feeds/all/rss.xml/all

Researchers at The Ohio State University Comprehensive Cancer Center — Arthur G. James Cancer Hospital and Richard J. Solove Research Institute (OSUCCC — James) have developed nanoparticles that swell and burst when exposed to near-infrared laser light. Such “nanobombs” might overcome a biological barrier that has blocked development of agents that work by altering the activity — the expression — of genes in cancer cells. The agents might kill cancer cells outright or stall their growth. The kinds of agents that change gene expression are generally forms of RNA (ribonucleic acid), and they are notoriously difficult to use as drugs. First, they are readily degraded when free in the bloodstream. In this study, packaging them in nanoparticles that target tumor cells solved that problem. This study, published in the journal Advanced Materials, suggests that the nanobombs might also solve the second problem. When cancer cells take up ordinary nanoparticles, they often enclose them in small compartments called endosomes. This prevents the drug molecules from reaching their target, and they are soon degraded. Along with the therapeutic agent, these nanoparticles contain a chemical that vaporizes, causing them to swell three times or more in size when exposed to near-infrared laser light. The endosomes burst, dispersing the RNA agent into the cell. “A major challenge to using nanoparticles to deliver gene-regulating agents such as microRNAs is the inability of the nanoparticles to escape the compartments, the endosomes, that they are encased in when cells take up the particles,” says principal investigator Xiaoming (Shawn) He, PhD, associate professor of Biomedical Engineering and member of the OSUCCC — James Translational Therapeutics Program. “We believe we’ve overcome this challenge by developing nanoparticles that include ammonium bicarbonate, a small molecule that vaporizes when exposing the nanoparticles to near-infrared laser light, causing the nanoparticle and endosome to burst, releasing the therapeutic RNA,” He explains. For their study, He and colleagues used human prostate-cancer cells and human prostate tumors in an animal model. The nanoparticles were equipped to target cancer stem-like cells (CSCs), which are cancer cells that have properties of stem cells. CSCs often resist therapy and are thought to play an important role in cancer development and recurrence. The therapeutic agent in the nanoparticles was a form of microRNA called miR-34a. The researchers chose this molecule because it can lower the levels of a protein that is crucial for CSC survival and may be involved in chemotherapy and radiation therapy resistance. The nanoparticles also encapsulate ammonium bicarbonate, which is a leavening agent sometimes used in baking. Near-infrared laser light, which induces vaporization of the ammonium bicarbonate, can penetrate tissue to a depth of one centimeter (nearly half an inch). For deeper tumors, the light would be delivered using minimally invasive surgery. Funding from an American Cancer Society Research Scholar Grant and a Pelotonia Postdoctoral Fellowship supported this research. Other researchers involved in this study were Hai Wang, Pranay Agarwal, Shuting Zhao and Jianhua Yu, all of The Ohio State University; and Xiongbin Lu of the University of Texas MD Anderson Cancer Center.


News Article
Site: http://news.yahoo.com/science/

This story was updated at 10:59 a.m. ET on Aug. 23. If your job causes stress and anxiety in your life, it may seem obvious that it may be bad for your health. But how does your history of job satisfaction affect your health years down the line? A new study shows that people who had low levels of job satisfaction in their 20s and 30s may have an increased risk of mental health problems in their 40s. "We found that there is a cumulative effect of job satisfaction on health that appears as early as your 40s," lead author Jonathan Dirlam, a doctoral student in sociology at The Ohio State University, said in a statement. There was no difference, however, in mental health risk between those who grew more satisfied with their jobs over time and those who were consistently "very satisfied" with their jobs. [7 Ways to Reduce Job Stress] "Those with the upward job trajectory were the same as those who were always high," Dirlam told Live Science. "It's kind of encouraging that if you work your way up, it can overcome any potentially negative health effects." In the study, the researchers looked at data from about 6,500 people who participated in the National Longitudinal Survey of Youth 1979, a long-term study that has followed participants since 1979, when they were 14 to 22 years old. The new study included health information in the survey that was collected when participants were in their 40s. The researchers found that people with low job satisfaction that was sustained over time were 46 percent more likely to be diagnosed with emotional problems than those with consistently high job satisfaction. These people also reported worse general mental health, higher levels of depression and more difficulty sleeping than those who either grew more satisfied over time or who had high satisfaction sustained over time. The people in the study who started with high job satisfaction, but showed a downward trend in their satisfaction levels over time showed health measurements that were in the middle of the pack. But the fact that this group fared better than the group with always-low levels of job satisfaction shows that a person’s history of job stress, and not just their current stress levels, affects their mental health risk, the researchers said. The researchers noted several limitations of the study. For example, the researchers have health data only from later in the participants' lives, so it's possible that pre-existing health problems contributed to career dissatisfaction. It's also possible that as the study participants continue to age, different trends in their mental or physical health will emerge, the researchers said. But if the finding holds true, and low job satisfaction does increase the risk of mental health problems, the general downward trend in job satisfaction that has been observed in the U.S. since the 1980s could have major effects on the health of people in this country, the researchers said. The researchers presented the paper today (Aug. 22) at the annual meeting of the American Sociological Association in Seattle. Editor’s note: This story was updated to include the information about the group that showed a downward trend in their satisfaction levels over time. Copyright 2016 LiveScience, a Purch company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
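As a rough sketch of the comparison reported above (the 46 percent figure), one could group respondents by job-satisfaction trajectory and compare rates of diagnosed emotional problems against the consistently satisfied group. This is illustrative only: the file and column names are hypothetical placeholders, not the NLSY79 variables the study used.

```python
# Hypothetical sketch: rate of diagnosed emotional problems by job-satisfaction
# trajectory, relative to the consistently-high group. Names are placeholders.
import pandas as pd

df = pd.read_csv("nlsy_trajectories.csv")   # one row per respondent
rates = df.groupby("satisfaction_trajectory")["emotional_problem_dx"].mean()
relative_increase = (rates / rates["consistently_high"] - 1.0) * 100
print(relative_increase.round(1))           # e.g. ~46 for a "consistently_low" group
```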


News Article
Site: http://phys.org/biology-news/

The study offers important clues as to which coral species are most likely to withstand repeated bouts of heat stress, called "bleaching," as climate change warms world oceans. In the Nov. 18 issue of the Proceedings of the Royal Society B: Biological Sciences, the researchers report that the same fat-storing coral species that showed the most resilience in a 2014 bleaching study has recovered more fully in the year since, compared to other species that stored less fat. Lead study author Verena Schoepf, a former doctoral student in the School of Earth Sciences at The Ohio State University and now a research associate at the University of Western Australia, said that tropical corals are extremely sensitive to heat stress. "Three global bleaching events have already occurred since the 1980s, and will likely occur annually starting later this century," she said. "Therefore, it has become more urgent than ever to know how coral can survive annual bleaching—one of the major threats to coral reefs today." Corals are animals that live in symbiosis with algae, and when stressed, they flush the algae from their cells and take on a pale, or "bleached," appearance. Bleached coral are more susceptible to storm damage and disease. Andréa Grottoli, professor in the School of Earth Sciences at Ohio State and principal investigator of the study, said that repeated bleaching will ultimately lead to less diversity in coral reefs, where all the different sizes and shapes of coral provide specialized habitats for fish and other creatures. "Bleaching will significantly change the future of coral reefs, with heat-sensitive coral unable to recover," Grottoli said. Interactions among coral hosts and their endosymbiontic algae, as well as predators and prey would then change in a domino effect. "Already, bleaching events have resulted in significant amounts of coral dying and causing impact to ocean ecosystems, but up until now it was largely unknown whether coral could recover between annual bleaching events," Schoepf added. In fact, evidence that fat is a key component to coral survival has been building in recent years. In 2014, Grottoli, Schoepf and their colleagues exposed three different coral species to two rounds of annual bleaching, then tested them six weeks later to see how well they had recovered. At that time, finger coral (Porites divaricata), the species which kept the largest fat reserves, had fared the best. Boulder coral (Orbicella faveolata), which kept less fat reserves, had recovered to a lesser extent. Mustard hill coral (Porites astreoides), which stored the least fat, had recovered the least. Now, one year later, the researchers have revisited the corals and discovered that both the finger coral and boulder coral have recovered, while the mustard hill coral has not yet recovered, and likely never will if bleaching frequency remains high. Surprisingly, all three species appear to be healthy at first glance. The symbiotic algae had returned to their cells, so the corals' normal color had returned. But further analyses of the corals' bodies tell a different story. "They all look healthy on the outside, but they're not all healthy on the inside," Grottoli said. Healthy corals get their day-to-day energy from sugar that the algae make through photosynthesis. For growth, healing and reproduction, they eat a diet that includes zooplankton. During bleaching, their nutritional state is thrown out of balance. 
"When coral is bleached, it no longer gets enough food energy and so it starts slowing down in growth and loses its fat and other energy reserves - just like humans do during times of hardship," Schoepf said. When corals photosynthesis slows down during bleaching, they start consuming their own bodies, as human bodies do when severely malnourished. And while all the corals in the study were able to eat zooplankton, the ones who had more fat to burn had less healing to do after the repeat bleaching subsided, and were able to resume a normal status within a year. The ones with less fat to burn sustained more damage, and so—even a year later—they are still in the process of healing. "Our research will help with predicting the persistence of coral reefs, because knowledge of their capacity to recover from annual bleaching is critical information for these models," Grottoli said. Explore further: For corals adapting to climate change, it's survival of the fattest—and most flexible More information: Annual coral bleaching and the long-term recovery capacity of corals, Proceedings of the Royal Society B: Biological Sciences, rspb.royalsocietypublishing.org/lookup/doi/10.1098/rspb.2015.1887


News Article
Site: http://www.rdmag.com/rss-feeds/all/rss.xml/all

Some of the natural gas harvested by hydraulic fracturing operations may be of biological origin--made by microorganisms inadvertently injected into shale by oil and gas companies during the hydraulic fracturing process, a new study has found. The study suggests that microorganisms including bacteria and archaea might one day be used to enhance methane production--perhaps by sustaining the energy a site can produce after fracturing ends. The discovery is a result of the first detailed genomic analysis of bacteria and archaea living in deep fractured shales, and was made possible through a collaboration among universities and industry. The project is also yielding new techniques for tracing the movement of bacteria and methane within wells. Researchers described the project's early results on Monday, Dec. 14, at the American Geophysical Union meeting in San Francisco. "A lot is happening underground during the hydraulic fracturing process that we're just beginning to learn about," said principal investigator Paula Mouser, assistant professor of civil, environmental and geodetic engineering at The Ohio State University. "The interactions of microorganisms and chemicals introduced into the wells create a fascinating new ecosystem. Some of what we learn could make the wells more productive." Oil and gas companies inject fluid--mostly water drawn from surface reservoirs--underground to break up shale and release the oil and gas--mostly methane--that is trapped inside. Though they've long known about the microbes living inside fracturing wells--and even inject biocides to keep them from clogging the equipment--nobody has known for sure where the bacteria came from until now. "Our results indicate that most of the organisms are coming from the input fluid," said Kelly Wrighton, assistant professor of microbiology and biophysics at Ohio State. "So this means that we're creating a whole new ecosystem a mile below the surface. Not only are we fracturing the rock, we're giving these organisms a new place to live and food to eat. And in fact, the biocides that we add to inhibit their growth may actually be fueling the production of methane." That is, the biocides kill some types of bacteria, thus enabling other bacteria and archaea to prosper--species that somehow find a way to survive in water that is typically four times saltier than the ocean, and under pressures that are typically hundreds of times higher than on the surface of the earth. Deprived of light for photosynthesis, these hardy microorganisms adapt in part by eating chemicals found in the fracturing fluid and producing methane. Next, the researchers want to pinpoint exactly how the bacteria enter the fracturing fluid. It's likely that they normally live in the surface water that makes up the bulk of the fluid. But there's at least one other possibility, Wrighton explained. Oil and gas companies start the fracturing process by putting fresh water into giant blenders, where chemicals are added. The blenders are routinely swapped between sites, and sometimes companies re-use some of the well's production fluid. So it's possible that the bacteria live inside the equipment and propagate from well to well. In the next phase of the study, the team will sample site equipment to find out. The clues emerged when the researchers began using genomic tools to construct a kind of metabolic blueprint for life living inside wells, Wrighton explained. "We look at the fluid that comes out of the well," she said. 
"We take all the genes and enzymes in that fluid and create a picture of what the whole microbial community is doing. We can see whether they survive, what they eat and how they interact with each other." The Ohio State researchers are working with partners at West Virginia University to test the fluids taken from a well operated by Northeast Natural Energy in West Virginia. For more than a year, they've regularly measured the genes, enzymes and chemical isotopes in used fracturing fluid drawn from the well. Within around 80 days after injection, the researchers found, the organisms inside the well settle into a kind of food chain that Wrighton described this way: Some bacteria eat the fracturing fluid and produce new chemicals, which other bacteria eat. Those bacteria then produce other chemicals, and so on. The last metabolic step ends with certain species of archaea producing methane. Tests also showed that initially small bacterial populations sometimes bloom into prominence underground. In one case, a particular species that made up only 4 percent of the microbial life going into the well emerged in the used fracturing fluid at levels of 60 percent. "In terms of the resilience of life, it's new insight for me into the capabilities of microorganisms." The researchers are working to describe the nature of pathways along which fluids migrate in shale, develop tracers to track fluid migration and biological processes, and identify habitable zones where life might thrive in the deep, hot terrestrial subsurface. For example, Michael Wilkins, assistant professor of earth sciences and microbiology at Ohio State, leads a part of the project that grows bacteria under high pressure and high temperature conditions. "Our aim is to understand how the microorganisms operate under such conditions, given that it's likely they've been injected from surface sources, and are accustomed to living at much lower temperatures and normal atmospheric pressure. We're also hoping to see how geochemical signatures of microbial activity, such as methane isotopes, change in these environments," Wilkins said. Other aspects of the project involve studying how liquid, gas and rock interact underground. In Ohio State's Subsurface Materials Characterization and Analysis Laboratory, Director David Cole models the geochemical reactions taking place inside shale wells. The professor of earth sciences and Ohio Research Scholar is uncovering reaction rates for the migration of chemicals inside shale. Using tools such as advanced electron microscopy, micro-X-ray computed tomography and neutron scattering, Cole's group studies the pores that form inside shale. The pores range in size from the diameter of a human hair to many times smaller, and early results suggest that connections between these pores may enable microorganisms to access food and room to grow. Yet another part of the project involves developing new ways to track the methane produced by the bacteria, as well as the methane released from shale fracturing. Thomas Darrah, assistant professor of earth sciences, is developing computer models that trace the pathways fluids follow within the shale and within fracturing equipment. Though oil and gas companies may not be able to take full advantage of this newly discovered methane source for some time, Wrighton pointed out that there are already examples of bio-assisted methane production in industry, particularly in coal bed methane operations. "Hydraulic fracturing is a young industry," she said. 
"It may take decades, but it's possible that biogenesis will play a role in its future. Other researchers on the project hail from Pacific Northwest National Laboratory and the University of Maine.


News Article | February 13, 2016
Site: http://www.techtimes.com/rss/sections/environment.xml

The ocean's role in protecting the environment by serving as a large carbon sink is not fully understood. Scientists across the globe, however, have now shed light on how the ocean absorbs carbon from the atmosphere through plankton networks and deposits it in the deep ocean. The study, published in the journal Nature, shows how the ocean plucks carbon from the atmosphere and keeps it in the deep sea through certain mechanisms. The results came from the Tara Oceans Expedition, wherein a team of at least 200 scientists around the globe studied unseen inhabitants of the ocean such as phytoplankton, bacteria and viruses. The expedition collected samples in nutrient-poor regions of the ocean, which comprise 70 percent of the ocean's surface area. "We're trying to understand, 'Does carbon in the surface ocean sink to the deep ocean and, if so, how?'" Matthew Sullivan, an assistant professor of microbiology at The Ohio State University, said. "It's the first community-wide look at what organisms are good predictors of how carbon moves in the ocean," Sullivan added. There are two main reasons why the ocean is dubbed the Earth's major carbon sink. First, it serves as a physical pump because it can pull surface water filled with dissolved carbon dioxide down into deep waters. Second, it acts as a biological pump because of organisms like plankton, which take up carbon dioxide through the process of photosynthesis. The team analyzed data collected by the expedition between 2009 and 2013. They used advanced genetic sequencing to study tiny ocean inhabitants and, through an analytical approach, identified which inhabitants were responsible for depositing carbon at the bottom of the ocean. They found that phytoplankton absorbs carbon from the atmosphere and transmits it deep into the ocean. They found that viruses are also important, especially those that infect cyanobacteria cells. "Additionally, we show that the relative abundance of a few bacterial and viral genes can predict a significant fraction of the variability in carbon export in these regions," the researchers wrote in the paper. The study allows for a better understanding of the roles these ocean organisms play in maintaining a healthy planet and healthy ecosystems.
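The prediction step in the quoted sentence (using relative gene abundances to explain variability in carbon export) can be sketched as an ordinary regression. The data file and feature names below are hypothetical placeholders, not the Tara Oceans variables.

```python
# Illustrative sketch: predict carbon export at sampling stations from relative
# gene abundances and report the fraction of variability explained (R^2).
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

df = pd.read_csv("station_gene_abundances.csv")            # hypothetical file
X = df[["viral_gene_a", "viral_gene_b", "cyano_gene_c"]]   # relative abundances
y = df["carbon_export"]                                    # measured export flux

model = LinearRegression().fit(X, y)
print("fraction of variability explained:", r2_score(y, model.predict(X)))
```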


News Article
Site: http://phys.org/biology-news/

Large amounts of copper are toxic to people and to most living cells. But our immune systems use some copper to fend off bacteria that could make us sick. More copper in the environment leads to more bacteria, including E. coli, that develop a genetic resistance. And that could pose an increased infection risk for people, said Jason Slot, who directed a new copper-resistance study and is assistant professor of plant pathology at The Ohio State University. Today, copper is widely used, including in animal feed and to make hospital equipment - areas that could be particularly conducive to bacteria developing even greater resistance, Slot said. Under the pressure of "copper stress," bacteria have traded DNA that enabled some to outlive the threat, said Slot, who specializes in fungal evolutionary genomics. And over centuries, the genes that lead to copper resistance have bonded, forging an especially tough opponent for the heavy metal, a cluster scientists call the "copper homeostasis and silver resistance island," or CHASRI. Slot and his colleagues created a molecular clock, using bacterial samples collected over time and evolutionary analysis to trace the history of copper resistance. The team studied changes in bacteria and compared those to human use of copper. Their work suggests there were repeated episodes of genetic diversification within bacteria that appear to correspond to peaks in copper production. The study appears in the journal Genome Biology and Evolution. Slot, an evolutionary biologist, first became interested in copper resistance when he learned that the genes involved weren't evolving in the way scientists would expect. "This may have arisen at the time that humans started using a lot of copper - in the Bronze Age," Slot said. He and his collaborators speculate that the original resistance might have started in milk fermented in a copper-alloy vessel, or in the gut of an animal in a high-copper environment. From then on, human use of copper has likely contributed to bacteria with a stronger armor against it. For instance, "About 2,000 years ago Romans were pumping a ton of copper dust into the environment," Slot said. Ice cores from Greenland have supported this theory, showing likely high copper emissions during the time. Today, copper is widely used in industry, including in farming, where the metal is added to feed to fatten up animals. And in recent years, there's been a movement toward using copper more in medical settings because of its antibacterial properties, Slot said. "You're enticing the bacteria in the environment to develop a mechanism that evades your immune system," Slot said. "I think overuse of anything is a bad idea, but it's really hard for people not to overuse the few weapons that we have."
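The "molecular clock" mentioned above rests on the standard assumption that substitutions accumulate at a roughly constant rate, so divergence times can be read off genetic distances. In its simplest strict-clock form (a generic textbook relation, not the authors' specific calibration):

\[
t \approx \frac{d}{2\mu},
\]

where d is the genetic distance between two lineages and μ is the substitution rate per site per unit time; the factor of 2 accounts for both lineages accumulating changes since their split.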


News Article
Site: http://phys.org/nanotech-news/

Researchers at The Ohio State University have found a way to light up a common cancer drug so they can see where the chemo goes and how long it takes to get there. They've devised an organic technique for creating this scientific guiding star and in doing so have opened up a new frontier in their field. Previous efforts have been limited by dyes that faded quickly and by toxic elements, particularly metals. A study published this week in the journal Nature Nanotechnology highlighted two novel accomplishments. First, the researchers created a luminescent molecule, called a peptide and made up of two amino acids. Then they hitched that light to the cancer medication so that it revealed the chemo's arrival within cells. "This is very important for personalized medicine. We really want to see what's going on when we give chemo drugs and this work paves the way for the exciting endeavor," said Dr. Mingjun Zhang, the biomedical engineering professor who led the study. Biomedical engineers strive to find techniques that behave naturally within the body and leave without doing harm. This research holds promise for doing just that because the peptide is one that should easily coexist with human cells and leave as harmlessly as it entered. "You can combine your drug with this luminescent vehicle," Zhang said of the tiny fluorescent particle devised in his lab. "Composed of natural amino acids, the nanoparticle is inherently biocompatible. Our biological machines can easily take care of it." This work was done in petri dishes in Zhang's lab and work in animals is currently underway. In the body or tissue of an animal or person, scientists would watch the fluorescent signal with an optical detection system, he said. Zhang and his colleagues attached their peptide to a common chemotherapy drug so that its light was hidden until the two elements peeled apart upon entering the cells. Zhang was particularly delighted to see that the blue peptide, which can be seen under ultraviolet light, maintained its luminescence for extended periods of time. Previous work to track drugs using organic dyes has been hampered by their tendency to fade with time. "You can label it and you can attach it to a drug and see where the drug goes and when it is released," Zhang said. And it could be that the biomedical advance can give patients and their doctors information on how well and how quickly a medication is working for them. "Maybe for some people a drug is taking effect in a few minutes and for somebody else it's hours and for somebody else it never takes effect," Zhang said. The research team used doxorubicin, a widely used chemotherapy drug, for their lab work, but the discovery could apply to different types of treatments. Better understanding of the complex interplay of cells and drugs is critical to the development of treatments that are finely tuned for individual patients. The Ohio State work builds on research that earned a trio of scientists the 2008 Nobel Prize in Chemistry. Their work on green fluorescent protein found in jellyfish led to the discovery that scientists could illuminate cellular-level activity that had previously been cloaked in mystery. More information: Zhen Fan et al. Bioinspired fluorescent dipeptide nanoparticles for targeted cancer cell imaging and real-time monitoring of drug release, Nature Nanotechnology (2016). DOI: 10.1038/nnano.2015.312


News Article | October 29, 2015
Site: http://phys.org/technology-news/

Engineers at The Ohio State University have developed a new welding technique that consumes 80 percent less energy than a commonly used technique, yet creates bonds that are 50 percent stronger.


New particle can track chemo: Discovery could reveal how well -- and how fast -- treatment finds and kills cancer. Abstract: Tracking the path of chemotherapy drugs in real time and at a cellular level could revolutionize cancer care and help doctors sort out why two patients might respond differently to the same treatment. Zhang's work was supported by the National Science Foundation.


News Article
Site: http://news.yahoo.com/science/

A 24-year-old man whose arms and legs were paralyzed by a spinal cord injury has regained the ability to move his hand, wrist and several fingers using an electrical device in a lab, according to a new study. The device is implanted in his brain and connected to a sleeve of electrodes worn on his forearm. With the device's help, the man, Ian Burkhart, can now carry out day-to-day tasks with his own hand, including pouring water into a glass, swiping a credit card and even playing "Guitar Hero." Burkhart became paralyzed at age 19 after he dove into a shallow wave at a beach and hit the sandy bottom, severely injuring his spinal cord. Because of where on his body the injury occurred, he lost the use of his legs and his forearms. [5 Crazy Technologies That Are Revolutionizing Biotech] But now, using the device, Burkhart has regained functional movements, said Chad Bouton, the division leader of neurotechnology and analytics at the Feinstein Institute for Medical Research in New York. Bouton is also the lead author of the study describing Burkhart's progress, published today (April 13) in the journal Nature. Functional movements are the kind that let people carry out everyday activities, such as picking up a bottle and pouring water into a cup, but these movements are often taken for granted, Bouton added, speaking on April 12 at a news conference announcing the results of the study. Burkhart is able to move his arm using brain-computer-interface technology, which uses a computer to translate signals in a person's brain into electrical pulses — in this case, on the sleeve Burkhart wears on his forearm. To create this technology, the researchers implanted a device with microelectrodes into Burkhart's motor cortex, the part of the brain that controls movement. Now, when he wears the sleeve, its 130 electrodes deliver electrical impulses to his muscles, causing them to contract. In a nonparalyzed person, signals from the brain travel down the spinal cord to nerves connected to various muscles in the body, making those muscles move. In people who are paralyzed, these signals still occur in the brain, but they cannot be transmitted to muscles, because the spinal cord is damaged. The implant in Burkhart's brain and the electrode sleeve bypass the injury in his spinal cord, delivering the signals directly to his muscles. Essentially, Burkhart is able to carry out these movements by "mastering his thoughts," said Dr. Ali Rezai, the senior author of the study and a neurosurgeon at The Ohio State University Wexner Medical Center, where Burkhart was treated. Burkhart's ability to move some of his fingers is a major finding, the researchers said, adding that they weren't sure it would be possible. To help Burkhart regain his individual finger movements, the researchers had to find and decipher very specific brain signals, Bouton said. Then, they had to figure out the pattern of electrical impulses they would need to deliver to the forearm, he said. The muscles in the forearm that control finger movements lie beneath other muscles, which control wrist movements, he said. [Bionic Humans: Top 10 Technologies] This isn't the first time researchers have decoded brain signals to help a paralyzed individual move. Indeed, the new technology is similar to using a brain implant to control a robotic arm or an exoskeleton, Rezai said. But in Burkhart's case, the sleeve takes things one step further, by actually allowing him to move his own limb, Rezai said. 
The ultimate goal is a device that is minimally invasive and simple to use, Rezai said. Another important aspect of Burkhart's electrode sleeve is that it's intuitive, said Nick Annetta, a research scientist at Battelle Memorial Institute, a research and development organization in Ohio, and an author of the study. That means that "when [Burkhart] thinks about closing his hand, he closes his hand. He doesn't have to think about other types of movements" in order to make that movement, Annetta said. The technology is "as natural as possible," he said. The doctors and researchers hope that one day this technology could help not only people with paralysis, but also those who have lost movement due to strokes or traumatic brain injuries, Annetta said. Copyright 2016 LiveScience, a Purch company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
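The core loop described above is: decode motor-cortex activity into an intended movement, then translate that intent into a stimulation pattern on the forearm sleeve. The sketch below is a deliberately simplified, hypothetical illustration of such a decode-then-stimulate loop in Python; the array size, features, weights and electrode mapping are invented stand-ins, not the algorithms used in the study.

import numpy as np

# Hypothetical illustration of a decode-then-stimulate loop (not the study's actual code).
rng = np.random.default_rng(1)

N_CORTICAL_CHANNELS = 96      # assumed size of the implanted recording array (not stated in the article)
N_SLEEVE_ELECTRODES = 130     # electrodes in the forearm sleeve, per the article
MOVEMENTS = ["rest", "hand_open", "hand_close", "wrist_flex"]

# A linear decoder: one weight vector per movement, which in practice would be learned in training sessions.
decoder_weights = rng.normal(size=(len(MOVEMENTS), N_CORTICAL_CHANNELS))
# Each decoded movement maps to a fixed stimulation pattern across the sleeve (values invented here).
stim_patterns = rng.uniform(0, 1, size=(len(MOVEMENTS), N_SLEEVE_ELECTRODES))

def decode_and_stimulate(neural_features):
    """neural_features: per-channel activity (e.g., spike-band power) for one time window."""
    scores = decoder_weights @ neural_features        # score each candidate movement
    intent = int(np.argmax(scores))                   # pick the most likely intent
    return MOVEMENTS[intent], stim_patterns[intent]   # pattern that would be sent to the sleeve

features = rng.normal(size=N_CORTICAL_CHANNELS)       # one made-up window of brain activity
movement, pattern = decode_and_stimulate(features)
print(movement, pattern.shape)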


News Article
Site: http://www.treehugger.com/feeds/category/technology/

We've covered quite a few small-scale wind turbines over the years -- some made to resemble trees, others just small, vertical-axis versions -- and universally they would provoke some heated comments. You all would respond, rightly so, that these designs will just never be effective sources of energy generation, but we cover them because we always want to discuss new ideas in clean technology and generally feel like these ideas, even if unfeasible, are worth mentioning because they may inspire the next new technology that is feasible and effective. Luckily, this new artificial tree is not a wind turbine. It is inspired by tree swaying and does harness energy from the wind, but it doesn't rely on the wind rotating or spinning anything; instead, it harnesses the vibrations caused by the wind, or traffic, or seismic activity, or the swaying of a tall building or anything else that may cause it to shake. Researchers at The Ohio State University are developing this new technology that resembles a tree with no leaves and only a few branches. The team is using a tree design because they discovered that tree-like structures made with electromechanical materials can convert random forces like wind or footsteps on a bridge into strong structural vibrations that can then be converted into electricity. The researchers don't imagine groves of tall artificial trees placed everywhere; rather, they believe this technology is best suited for small-scale applications, requiring little power, where other clean energy sources won't work. One possible early use could be tiny trees powering bridge and building sensors that monitor the integrity of the structures. Today, these types of sensors rely on batteries or being plugged into the grid. The researchers point out that there are constant vibrations all around us from nature, like wind and seismic activity, and human movements that these trees could turn into electricity. “Buildings sway ever so slightly in the wind, bridges oscillate when we drive on them and car suspensions absorb bumps in the road,” said project leader Ryan Harne, assistant professor of mechanical and aerospace engineering and director of the Laboratory of Sound and Vibration Research. “In fact, there’s a massive amount of kinetic energy associated with those motions that is otherwise lost. We want to recover and recycle some of that energy.” Tree-like designs have been used for a long time in energy generating technologies, but this is the first time that random vibrations, like those found in real environments, have been proven to be a reliable and consistent energy source. Through computer modeling and then small-scale testing, the researchers were able to show that they could "exploit internal resonance to coax an electromechanical tree to vibrate with large amplitudes at a consistent low frequency, even when the tree was experiencing only high frequency forces...It reached a tipping point where the high frequency energy was suddenly channeled into a low frequency oscillation. At this point, the tree swayed noticeably back and forth, with the trunk and branch vibrating in sync." This tiny tree produced around 2 volts of electricity at this stage, which is low, but this was just a proof-of-concept. The team will now work on scaling up the experiment and increasing the voltage, but they believe this design will be a reliable source of renewable energy for low power applications around the world.
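For a rough sense of scale, the roughly 2 volts reported for the proof-of-concept can be turned into a power figure once a load is assumed. The calculation below treats the reported value as an RMS voltage across an assumed 10-kilohm load; neither assumption comes from the article, so this is only a back-of-the-envelope estimate.

# Back-of-the-envelope harvested power from the reported ~2 V proof-of-concept output.
# The load resistance and RMS interpretation are assumptions for illustration only.
v_rms_volts = 2.0
load_resistance_ohms = 10_000.0
power_watts = v_rms_volts ** 2 / load_resistance_ohms   # P = V^2 / R
print(f"{power_watts * 1e3:.2f} mW")                     # ~0.40 mW: tiny, but in the range of low-power sensing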


News Article
Site: http://news.yahoo.com/science/

Paul Sutter is a visiting scholar at The Ohio State University's Center for Cosmology and Astro-Particle Physics (CCAPP). Sutter is also host of the podcasts "Ask a Spaceman" and "RealSpace," and the YouTube series "Space in Your Face." As usual, we thought we had it all figured out. See that gas giant over there in the outer solar system? It was born there. It will spend its whole life there, and it will die there. Sure it might wiggle around a bit every few hundred million years — who doesn't? — but, by and large, planets don't move. Surprise: Planets move. And not just a little. They move a lot. All over the place. In fact, in the early days of a solar system's formation, planets are a little rambunctious: squirrely little toddlers jostling about underfoot. But it wasn't until we started observing planets in other solar systems ("extrasolar planets" or "exoplanets" for the astronomer on the move) that we really noticed this fact. And it wasn't just any type of exoplanet that kicked off this re-think; it was the hot Jupiters. Imagine: a planet more massive than the largest one in our solar system and 10 times warmer, a monstrous beast of hydrogen and other elements, complete with swirling bands of gas and a rich, dynamic atmosphere, orbiting closer its star than Mercury orbits the sun. In some solar systems, such a planet orbits so quickly that its year is shorter than the Earth's day. That means these worlds can whip around their parent stars in hours. The physics involved can reduce the most hardened scientist to tears. When astronomers spotted the first hot Jupiter (51 Pegasi b, the first exoplanet to be found around a sunlike star, no less), the reaction was mostly, "Ha ha, mother nature, that's cute. You got us this time, but no more funny business, OK?" But then another hot Jupiter was found. And another. Then half a dozen more. They went from goofy oddballs to … normalcy. For a while, it started to look like our own solar system was the weird one. Maybe they should just be called "regular Jupiters," and ours re-named a "cold Jupiter?" In retrospect, it's not surprising that astronomers spotted these massive planets living so close to their parent stars. After all, our detection methods are most sensitive to exactly these scenarios. One method is based on the motion of the parent star itself. Have you ever taunted a dog on a leash, running back and forth? The dog, frantically trying to chase you, runs until the leash stops it. You go the opposite direction, and so does the dog, until "thunk!" and the leash again reaches its limit. In this really bad analogy, each planet is taunting its parent sun through gravity. During one part of the planet's year, the world sits at a certain position in the system. Gently, week by week, the planet tries to pull the star over to it, because that's how gravity works. But some time later, the planet finds itself on the opposite side of the system. "No, star, I meant come over here, not over there!" Back and forth the star goes, sloshing around — just a tiny bit — it is huge compared to its planets, after all. But with precise-enough measurements, we can detect that wobble by a telltale red- and blue-shifting of the star's emitted light. [Direct Imaging: The Next Big Step in the Hunt for Exoplanets ] A second powerful method — and nowadays, the method most commonly used to find new planets — is to simply look for distant eclipses. 
If we get the alignment just right, and stare at enough stars, every once in a while, a planet will cross the face of its parent, ever so slightly dimming the star. Bingo: a transit detection! Both of those methods will more easily find a planet if it is big, producing a stronger pull from wiggling or a more significant dip in the brightness. So these methods will first pick out the massive, close planets, because those will make the strongest, clearest, least-ambiguous signals. And with planets that have fast orbital speeds, you can get more signal bang for your observational buck. That led to the initial worry: For a while, it seemed like every exoplanet was a hot Jupiter. Fortunately, as our detection methods improved and we could spot smaller exoplanets, we've learned the galaxy is a mellower place. There are plenty of hot Jupiters, but also plenty of regular Jupiters, and every other kind of planet you can imagine . Almost a sun, but not quite Still, how did the hot Jupiters get so hot? To seed a gas giant, you need more than rocks for a core, simply because there aren't enough rocks in a solar system to make a decent Jupiter-size planet core. You also need to glue together a bunch of ices, and last time I checked, there aren't exactly a lot of ices near the surface of a star. So obviously the hot Jupiters didn't form in the Mercurial positions where we now find them. What gives? The best guess we have so far — and it really is a guess at this point — is that a Jupiter-like planet forms in an appropriately Jupiter-like orbit in an early gaseous, nebulous not-quite-a-solar-system. The big world clears a gap in the gaseous disk, because that's what giant planets do. It's stuck to the middle of the gap like a car on a racetrack. If it moves too close in, the bands of gasses around the star are rotating faster than the planet is orbiting, and so nudge the giant young planet back out. If the planet scoots out too far, the slower-moving gas bands located there nudge it back into its proper place.  But since the system is so young, it's not done contracting and compressing. The gas continually brushes against the planet, playing a fantastically huge game of curling to keep the planet within the gap. And as the entire disk of gas continues to squeeze inward to its final, compact size, it carries the gap — and the newly formed planet — with it. Voilà: a Jupiter-size planet in the inner solar system! But if it's so easy, why does it happen only sometimes? How come our solar system's Jupiter is where it "belongs"? And what stops a hot Jupiter from becoming a very hot Jupiter and just crashing into its star? And, honestly, the whole mechanism seems a little dodgy, if you ask me. There are certainly many things we don't understand, and hot Jupiters offer us yet another tantalizing clue about the larger puzzle of how solar systems form, both here and abroad. To solve this riddle, we have to do what scientists do best: think about it some more. And more data wouldn't hurt, either. Learn more by listening to the episode "What's Up with Exoplanets?" on the "Ask a Spaceman" podcast, available on iTunes and on the Web at http://www.askaspaceman.com. Thanks to Jon Ziegler, Dan Cataldo, @infirmus, @MarkRiepe and Kieran Price for the questions that led to this piece! Ask your own question on Twitter using #AskASpaceman or by following Paul @PaulMattSutter and at facebook.com/PaulMattSutter. 
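Both detection signals described above have simple first-order sizes. As a rough illustration using round textbook values (these numbers are not from the article): the transit method measures a fractional dimming of about (R_planet / R_star)^2, and the wobble method measures the star's reflex velocity, which grows with planet mass and shrinks with orbital distance.

import math

# Rough transit depths: fractional dimming ~ (R_planet / R_star)^2, with round solar-system values.
R_SUN_M, R_JUPITER_M, R_EARTH_M = 6.96e8, 7.15e7, 6.37e6
depth_jupiter = (R_JUPITER_M / R_SUN_M) ** 2   # ~0.011 -> about a 1% dip
depth_earth = (R_EARTH_M / R_SUN_M) ** 2       # ~8e-5  -> about a 0.008% dip
print(f"Jupiter-like transit depth: {depth_jupiter:.2%}, Earth-like: {depth_earth:.4%}")

# Rough stellar reflex ("wobble") speed for a circular orbit: v_star ~ v_planet * (m_planet / m_star).
G, M_SUN_KG, M_JUPITER_KG = 6.674e-11, 1.989e30, 1.898e27
for name, a_m in [("Jupiter-mass planet at 0.05 AU", 0.05 * 1.496e11), ("Jupiter at 5.2 AU", 5.2 * 1.496e11)]:
    v_planet = math.sqrt(G * M_SUN_KG / a_m)
    v_star = v_planet * (M_JUPITER_KG / M_SUN_KG)
    print(f"{name}: stellar wobble ~ {v_star:.0f} m/s")   # ~130 m/s for the hot Jupiter, ~13 m/s for ours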
Follow all of the Expert Voices issues and debates — and become part of the discussion — on Facebook, Twitter and Google+. The views expressed are those of the author and do not necessarily reflect the views of the publisher. This version of the article was originally published on Space.com. Copyright 2016 SPACE.com, a Purch company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.


News Article
Site: http://news.yahoo.com/science/

Tall people are better than short people at gauging how far away they are from objects in the middle distance, a new study reports. The researchers say the results are evidence for the idea that people's spatial perception abilities are influenced by their height, and develop over time. The human brain depends on a certain model to provide "the best guess of where objects could be located," said study co-author Teng Leng Ooi, a professor of optometry at The Ohio State University. That model, or "intrinsic bias," is typically revealed when people have very little information about where an object is located, e.g., literally in the dark, and must make an educated guess. People usually underestimate the distance between themselves and an object, and as objects get farther away, the effect gets larger. "Our previous studies have shown that the intrinsic bias is an imaginary curve that extends from one's feet and slants upward to the far distance," Ooi told Live Science in an email. In the new study, 24 people were split into two groups of 12, based on their height. The average height (measured at the eyes) in the groups were 4 feet 11 inches (149.3 cm) and 5 feet 8 inches (173.4 cm). Over three experiments, objects were presented in different levels of light, with different amounts of information to help determine location. The people then guessed the distances to objects by a variety of means, such as pacing out the distance with their eyes closed, so the study was not dependent on the subjects' sense of units of measure. The results showed that the people in both tall and short groups showed the bias, increasingly misjudging the distance to far-away objects. However, the taller participants were more accurate in their guesses, and the difference in performance between groups was consistent across all conditions, the researchers said. When tall participants sat in a chair and shorter participants stood on boxes to adjust their eye levels, the tall people were still more accurate in the middle distances. Because previous experiments showed people are better judges of distance from a higher vantage point, the researchers said, the new result is evidence that taller people have accumulated experience in guessing the distance to objects, and that their height has shaped a mental model of distances. However, other researchers said they were skeptical of the findings. "I'm a little bit dubious of the results," that show taller people are better at guessing distances, said Maryjane Wraga, a psychologist at Smith College in Massachusetts, who was not involved in the study. Because of variations in individuals' vision, Wraga said, the study, with only 12 participants in each group, would have benefited from more participants. Any pattern that emerged based on the study groups might be consistent since all three experiments used the same participants. Furthermore, "if it's a true effect, it's a modest effect." Wraga told Live Science. The differences in performance between the height groups at distances up to about 33 feet (10 meters) were small, Wraga said, and most people interact with those closer objects much more often in their daily lives. "It's not a uniform effect; it's mostly occurring for distances that are farther away." "The ideas that they're presenting are very interesting," John Philbeck, a psychologist at George Washington University in Washington, D.C., told Live Science. But he was also concerned about replicating the results, and called the sample size "a little on the thin side." 
"If this effect is real, there are ways to compensate for it in the real world," Wraga said, such as moving our heads and bodies to gather more information, which people probably do naturally, but was restricted in the experiments to specifically test the mental model. How should shorter people feel about the results? "Not worried at all," Wraga said. The researchers said they are interested in future studies with more subjects in a range of heights, development in children and investigating whether animals have different visual biases, possibly based on their ecological niche. The study was published today (Aug. 31) in the journal Science Advances. Copyright 2016 LiveScience, a Purch company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.


News Article
Site: http://news.yahoo.com/science/

Paul Sutter is a visiting scholar at The Ohio State University's Center for Cosmology and AstroParticle Physics (CCAPP). Sutter is also host of the podcasts Ask a Spaceman and RealSpace, and the YouTube series Space In Your Face. He contributed this article to Space.com's Expert Voices: Op-Ed & Insights. Yes, the universe is dying. Get over it. Well, let's back up. The universe, as defined as "everything there is, in total summation," isn't going anywhere anytime soon. Or ever. If the universe changes into something else far into the future, well then, that's just more universe, isn't it? But all the stuff in the universe? That's a different story. When we're talking all that stuff, then yes, everything in the universe is dying, one miserable day at a time. I mentioned in my last article (What Triggered the Big Bang?) how revolutionary the modern cosmological paradigm is: We don't live in a static, unchanging universe, but a dynamic one that has been around for a finite amount of time and will continue to change into its future. But what I didn't mention before is how agonizingly slow, painful and dreary the whole process will be. You may not realize it by looking at the night sky, but the ultimate darkness is already settling in. Stars first appeared on the cosmic stage rather early — more than 13 billion years ago; just a few hundred million years into this Great Play. But there's only so much stuff in the universe, and only so many opportunities to make balls of it dense enough to ignite nuclear fusion, creating the stars that fight against the relentless night. The expansion of the universe dilutes everything in it, meaning there are fewer and fewer chances to make the nuclear magic happen. And around 10 billion years ago, the expansion reached a tipping point. The matter in the cosmos was spread too thin. The engines of creation shut off. The curtain was called: the epoch of peak star formation has already passed, and we are currently living in the wind-down stage. Stars are still born all the time, but the birth rate is dropping. At the same time, that dastardly dark energy is causing the expansion of the universe to accelerate, ripping galaxies away from each other faster than the speed of light (go ahead, say that this violates some law of physics, I dare you), drawing them out of the range of any possible contact — and eventually, visibility — with their neighbors. With the exception of the Andromeda Galaxy and a few pathetic hangers-on, no other galaxies will be visible. We'll become very lonely in our observable patch of the universe. The infant universe was a creature of heat and light, but the cosmos of the ancient future will be a dim, cold animal. The only consolation is the time scale involved. You thought 14 billion years was a long time? The numbers I'm going to present are ridiculous, even with exponential notation. You can't wrap your head around it. They're just ... big. For starters, we have at least 2 trillion years until the last sun is born, but the smallest stars will continue to burn slow and steady for another 100 trillion years in a cosmic Children of Men. Our own sun will be long gone by then, heaving off its atmosphere within the next 5 billion years and charcoaling the Earth. Around the same time, the Milky Way and Andromeda galaxies will collide, making a sorry mess of the local system. 
At the end of this 100-trillion-year "stelliferous" era, the universe will only be left with the … well, leftovers: white dwarves (some cooled to black dwarves), neutron stars and black holes. Lots of black holes. Welcome to the Degenerate Era, a state that is as sad as it sounds. But even that isn't the end game. Oh no, it gets worse. After countless gravitational interactions, planets will get ejected from their decaying systems and galaxies themselves will dissolve. Losing cohesion, our local patch of the universe will be a disheveled wreck of a place, with dim, dead stars scattered about randomly and black holes haunting the depths. The early universe was a very strange place, and the late universe will be equally bizarre. Given enough time, things that seem impossible become commonplace, and objects that appear immutable … uh, mutate. Through a process called quantum tunneling, any solid object will slowly "leak" atoms, dissolving. Because of this, gone will be the white dwarves, the planets, the asteroids, the solid. Even fundamental particles are not immune: given 10^34 years, the neutrons in neutron stars will break apart into their constituent particles. We don't yet know if the proton is stable, but if it isn't, it's only got 10^40 years before it meets its end. With enough time (and trust me, we've got plenty of time), the universe will consist of nothing but light particles (electrons, neutrinos and their ilk), photons and black holes. The black holes themselves will probably dissolve via Hawking Radiation, briefly illuminating the impenetrable darkness as they decay. After 10^100 years (but who's keeping track at this point?), nothing macroscopic remains. Just a weak soup of particles and photons, spread so thin that they hardly ever interact. And then? Who knows? When you're contemplating such unfathomable time scales, it's hard to say. Maybe the universe will just continue cooling off, erasing temperature differences, making engines and computation — and cognition — effectively impossible. But maybe our universe is just a small patch of a larger framework, and while our branch is dying, another piece of the greater cosmos is just now entering its glorious star-forming days. Not that you'll ever be able to reach it, but it's a small comfort. Maybe a chance fluctuation will ignite a new Big Bang. Maybe whatever's driving Dark Energy will reveal its true nature, decaying into a shower of matter, breathing fresh life into a broken-down cosmos. Maybe … maybe … maybe … Maybe not. Learn more by listening to the episode "Is the universe dying?" on the Ask A Spaceman podcast, available on iTunes and on the Web at http://www.askaspaceman.com. Thanks to Alex Rothberg for the question that led to this piece! Ask your own question on Twitter using #AskASpaceman or by following Paul @PaulMattSutter and facebook.com/PaulMattSutter. Follow all of the Expert Voices issues and debates — and become part of the discussion — on Facebook, Twitter and Google+. The views expressed are those of the author and do not necessarily reflect the views of the publisher. This version of the article was originally published on Space.com. The History & Structure of the Universe (Infographic) Big Bang Theory: 5 Weird Facts About Seeing the Universe's Birth The Universe: Big Bang to Now in 10 Easy Steps Copyright 2015 SPACE.com, a Purch company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
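The black-hole evaporation mentioned above has a standard order-of-magnitude estimate: ignoring complications, the Hawking evaporation time scales as the cube of the mass, roughly t ≈ 5120 π G² M³ / (ħ c⁴). The short calculation below applies it to a one-solar-mass black hole purely as an illustration; the figure is not taken from the article.

import math

# Hawking evaporation time (order of magnitude): t ~ 5120 * pi * G^2 * M^3 / (hbar * c^4)
G = 6.674e-11        # m^3 kg^-1 s^-2
HBAR = 1.055e-34     # J s
C = 2.998e8          # m/s
M_SUN = 1.989e30     # kg
SECONDS_PER_YEAR = 3.156e7

t_seconds = 5120 * math.pi * G**2 * M_SUN**3 / (HBAR * C**4)
print(f"~1e{math.log10(t_seconds / SECONDS_PER_YEAR):.0f} years")  # roughly 10^67 years for a solar-mass hole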


News Article
Site: http://www.biosciencetechnology.com/rss-feeds/all/rss.xml/all

Researchers long have known that some portion of the risk of developing cancer is hereditary and that inherited genetic errors are very important in some tumors but much less so in others. In a new analysis, researchers have shed light on these hereditary elements across 12 cancer types — showing a surprising inherited component to stomach cancer and providing some needed clarity on the consequences of certain types of mutations in well-known breast cancer susceptibility genes, BRCA1 and BRCA2. The study, from Washington University School of Medicine in St. Louis, appears Dec. 22 in the journal Nature Communications. The investigators analyzed genetic information from more than 4,000 cancer cases included in The Cancer Genome Atlas project, an initiative funded by the National Institutes of Health (NIH) to unravel the genetic basis of cancer.​​​​​​​​​​​​​ “In general, we have known that ovarian and breast cancers have a significant inherited component, and others, such as acute myeloid leukemia and lung cancer, have a much smaller inherited genetic contribution,” said senior author Li Ding, Ph.D., associate professor of medicine and assistant director of the McDonnell Genome Institute at Washington University. “But this is the first time on a large scale that we’ve been able to pinpoint gene culprits or even the actual mutations responsible for cancer susceptibility.” The new information has implications for improving the accuracy of existing genetic tests for cancer risk and eventually expanding the available tests to include a wider variety of tumors. Past genomic studies of cancer compared sequencing data from patients’ healthy tissue and the same patients’ tumors. These studies uncovered mutations present in the tumors, helping researchers identify important genes that likely play roles in cancer. But this type of analysis can’t distinguish between inherited mutations present at birth and mutations acquired over the lifespan. To help tease out cancer’s inherited components, the new study adds analysis of the sequencing data from the patients’ normal cells that contain the “germline” information. A patient’s germline is the genetic information inherited from both parents. This new layer of information gives a genetic baseline of a patient’s genes at birth and can reveal whether cancer-associated mutations were already present. In all the cancer cases they analyzed, the investigators looked for rare germline mutations in genes known to be associated with cancer. If one copy of one of these genes from one parent is already mutated at birth, the second normal copy from the other parent often can compensate for the defect. But individuals with such mutations are more susceptible to a so-called “second hit.” As they age, they are at higher risk of developing mutations in the remaining normal copy of the gene. “We looked for germline mutations in the tumor,” Ding said. “But it was not enough for the mutations simply to be present; they needed to be enriched in the tumor — present at higher frequency. If a mutation is present in the germline and amplified in the tumor, there is a high likelihood it is playing a role in the cancer.” In 114 genes known to be associated with cancer, they found rare germline mutations in all 12 cancer types, but in varying frequencies depending on the type. They focused on a type of mutation called a truncation because most truncated genes can’t function at all. 
Of the ovarian cancer cases the investigators studied, 19 percent of them carried rare germline truncations. In contrast, only 4 percent of the acute myeloid leukemia cases in the analysis carried these truncations in the germline. They also found that 11 percent of the stomach cancer cases included such germline truncations, which was a surprise, according to the researchers, because that number is on par with the percentage for breast cancer. “We also found a significant number of germline truncations in the BRCA1 and BRCA2 genes present in tumor types other than breast cancer, including stomach and prostate cancers, for example,” Ding said. “This suggests we should pay attention to the potential involvement of these two genes in other cancer types.” The BRCA1 and BRCA2 genes are important for DNA repair. While they are primarily associated with risk of breast cancer, this analysis supports the growing body of evidence that they have a broader impact. “Of the patients with BRCA1 truncations in the germline, 90 percent have this BRCA1 truncation enriched in the tumor, regardless of cancer type,” Ding said. Genetic testing of the BRCA1 and BRCA2 genes in women at risk of breast cancer can reveal extremely useful information for prevention. When, for example, the genes are shown to be normal, there is no elevated genetic risk of breast cancer. But if either of these genes is mutated in ways that are known to disable either gene, breast cancer risk is dramatically increased. In this situation, doctors and genetic counselors can help women navigate the options available for reducing that risk. But mutations come in a number of varieties. Genetic testing also can reveal many that have unknown consequences for the function of these genes, so their influence on cancer risk can’t be predicted. To help clarify this gray area in clinical practice, Ding and her colleagues Jeffrey Parvin, M.D., Ph.D., professor and director of the division of computational biology and bioinformatics at The Ohio State University, and Feng Chen, Ph.D., associate professor of medicine at Washington University, investigated 68 germline non-truncation mutations of unknown significance in the BRCA1 gene. For each mutation, they tested how well the BRCA1 protein could perform one of its key DNA-repair functions. The researchers found that six of the mutations behaved like truncations, disabling the gene completely. These mutations also were enriched in the tumors, supporting a likely role in cancer. “It is important to be able to show that these six mutations of unknown clinical significance are, in fact, loss-of-function mutations,” Ding said. “But I also want to emphasize the contrasting point. Many more show normal function, at least according to our analysis. Many of these types of mutations are neutral, and we would like to identify them so that health-care providers can better counsel their patients.” Ding said more research is needed to confirm these results before they can be used to advise patients making health-care decisions. “Our strategy of investigating germline-tumor interactions provides a good way to prioritize important mutations that we should focus on,” she said. “For the information to eventually be used in the clinic, we will need to perform this type of analysis on even larger numbers of patients.”
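"Enriched in the tumor" in this context means the mutant allele accounts for a larger share of sequencing reads in the tumor than in the patient's normal tissue. The snippet below is a minimal, hypothetical sketch of that comparison using invented read counts; real pipelines add statistical tests, filters and corrections that are omitted here.

# Hypothetical sketch: is a germline variant enriched in the tumor relative to normal tissue?
def allele_fraction(ref_reads, alt_reads):
    return alt_reads / (ref_reads + alt_reads)

normal_vaf = allele_fraction(ref_reads=48, alt_reads=52)   # ~0.52: looks like a heterozygous germline variant
tumor_vaf = allele_fraction(ref_reads=18, alt_reads=82)    # ~0.82: mutant allele over-represented in the tumor

if tumor_vaf > normal_vaf + 0.2:   # arbitrary illustrative threshold, not the study's criterion
    print(f"candidate enrichment: germline VAF {normal_vaf:.2f} -> tumor VAF {tumor_vaf:.2f}")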


News Article
Site: http://news.yahoo.com/science/

Dining out or eating canned foods might not actually be so bad for your waistline, a new study from Spain suggests. In the study, researchers at the Autonomous University of Madrid analyzed information from more than 1,600 people ages 18 to 60 who answered questions about their weight and typical eating habits, and were then followed over the next 3.5 years. During the study, about a third of the participants (528 people) gained at least 6.5 lbs. (3 kilograms). People who said they ate while watching TV at least two times a week, or didn't plan how much to eat before they sat down to a meal, were more likely to gain weight, compared with people who didn't report engaging in these unhealthy eating behaviors. But many other behaviors that are typically thought of as unhealthy — including eating pre-cooked or canned foods, buying snacks from a vending machine, and eating at fast food restaurants more than once a week — were not linked to weight gain. These findings are not necessarily surprising, said Lauren Blake, a registered dietitian at The Ohio State University Wexner Medical Center, who was not involved in the study. That's because, although canned foods, fast foods and vending machine snacks can be unhealthy, there are often healthier options within those categories that people can choose, like canned vegetables or a small package of nuts from a vending machine, Blake said. And if people plan ahead before eating out, by looking at the restaurant menu ahead of time, they may be able to avoid overeating, Blake said. [Lose Weight While Dining Out: Study Reveals 6 Tips] On the other hand, the two behaviors that were most strongly tied to weight gain — eating in front of the TV and failing to plan what to eat — both involve a lack of mindfulness during eating, Blake noted. If you're sitting in front of the TV with a bag of chips, "you're not mindful, and you don't even know how much you're eating," she said. A greater amount of mindfulness about eating is often important for long-term weight loss, Blake said. "If we're more aware of what and how much we're eating, that's where I see people make a lot more progress with weight loss and with maintenance," she said. But it's still hard to say from this study that certain behaviors don't lead to weight gain, said Dr. Vincent Pera, director of weight management at The Miriam Hospital in Providence, Rhode Island. Although the study took into account a number of factors that might affect weight gain — including physical activity, alcohol consumption and certain chronic diseases —  there are a number of other factors that the study wasn't able to account for, such as whether people experienced periods of high stress that could have led to overeating and weight gain, Pera said. What's more, people in the study self-reported what they ate, and how much they weighed, and it's possible that they didn't report everything that they consumed, or didn't report their weight correctly, which could affect the results, Pera said. "Where do you draw the line in saying these certain behaviors for sure impact weight, and these don't — I think you have trouble saying that," based on these findings, Pera said. And canned, processed and fast foods can be unhealthy even if they don't lead to weight gain. These foods are often high in salt, which is linked to a high blood pressure. Looking just at weight gain, as the study did, "doesn't encompass the whole picture of health," Blake said. 
The study also found that if people engaged in five or more of these "unhealthy" eating behaviors, they were more likely to gain weight than were people who engaged in zero to two of these behaviors. This finding suggests that "interventions designed to address several [unhealthy eating behaviors] together could be more efficient" than those that target just one unhealthy eating behavior, the researchers said. Pera said that this finding makes sense, because there are often a number of factors in people's lives and environment that affect their weight. If people get some of these factors under control, but not others, they may still gain weight, he said. The study was published online March 31 in the journal Obesity. Copyright 2016 LiveScience, a Purch company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.


News Article
Site: http://news.yahoo.com/science/

Paul Sutter is a visiting scholar at The Ohio State University's Center for Cosmology and AstroParticle Physics (CCAPP). Sutter is also host of the podcasts Ask a Spaceman and Realspace, and the YouTube series Space In Your Face. He contributed this article to Space.com's Expert Voices: Op-Ed & Insights. In the beginning, there was a question mark. All else followed. The end. We've all heard of the Big Bang theory (I'm talking about the cosmological model, not the TV show), but it's important to understand what that theory is and what it's not. Let me take this opportunity to be precisely, abundantly, emphatically, ridiculously, fantastically clear: The Big Bang theory is not a theory of the creation of the universe. Full stop. Done. Call it. Burn that sentence into your brain. Say it before you go to sleep, and first thing when you wake up. The Big Bang theory is a model of the history of the universe, tracing the evolution of the cosmos to its very earliest moments. And that's it. Don't try to stuff anything else into that framework. Just stop. You can keep your meta safely away from my physics, thank you very much. I'm emphasizing this because there is a lot of confusion from all sides, and it's best to keep it simple. The Big Bang theory is a scientific model, just like any other scientific model. We believe the theory is on the right track because it's — gasp — supported by extensive evidence. You don't have to take my word for it. Since the idea was first cooked up, the Big Bang theory has survived decades of scientists fighting, scratching, backstabbing, criticizing, undermining, bickering, arguing and even name-calling, all in an attempt to crush their rivals and prove that their pet alternatives were superior. Why? Because whoever takes down a major scientific paradigm gets a free trip to Stockholm. And at the end of it all, there's the evidence. You know, the actual universe that we're trying to understand. Any new observation is the scientific Thunderdome; two theories may enter, but only one can leave. And what was left after decades of evidence? Here's a hint: It's big. The evidence starts with Edwin Hubble's note that every galaxy is, on average, flying away from every other galaxy. The universe is expanding. That itself is a pretty big deal. For millennia, the default assumption (can you blame anyone?) was that, while things change here on Earth, up in the distant heavens, stuff just sort of…is. Yeah, stars may blow up or galaxies may collide, but on the whole, the universe from last week looks pretty much like the universe today. Check again in a month? Yup, the same universe. At least that's what people thought. But it's not. The universe today is different from how it was yesterday, and it will be different tomorrow. And it's not just on local scales; the whole shindig changes character one day to the next. [Evolution of the Universe Revealed by Computer Simulation (Gallery)] And if you notice that, every day, the universe is getting bigger, you can make a tremendous leap of logic to come to the conclusion that, long ago, the universe was … smaller? Maybe? I guess? Like any good scientist, as soon as you cook up this kind of ridiculous, preposterous concept, you start thinking through what the consequences would be and how you might test it — I know, radical notions. Here's the gist: The story of the past 14-ish billion years is a story of density. 
The universe is made of lots of kinds of stuff: hydrogen, helium, aardvarks, dark matter, gristle, photons, Ferris wheels, neutrinos, etc. All this stuff behaves differently at different densities, so when the universe was smaller, one kind of thing might dominate over another, and the physical behaviors of that thing would drive whatever was going on in the universe. For example, nowadays, the universe is mostly dark energy (whatever that is), and its behavior is ruling the universe — in this case, driving a period of accelerated expansion. But a few billion years ago, the universe was smaller, and all the matter was crammed more tightly together. And by virtue of its density, that matter was the ruler of the roost, overwhelming dark energy, which was just a background wimp rather than the powerhouse it is now. The birth of the Dark Energy Age might not seem that dramatic, but the further back you go in time — and the smaller you make the universe — the stranger it gets. Push back more than 13 billion years, when the universe was just one-thousandth of its current extent, and the matter that would one day make up entire galaxies is crammed together so tightly that atoms can't even form. It's so dense that every time a nucleus ropes in an electron, a careless high-energy photon slams into it, ripping the electron away. This is a plasma, and at one time, the entire universe lived like this.  Fast-forward to the present day, and the leftover light from the era, when the universe cooled and expanded just enough to let the first atoms form, continues to wash over us right now. But the universe is older and colder, and those high-energy gamma rays are now listless microwaves, creating a background permeating the cosmos — a cosmic microwave background, or CMB, if you will.  The CMB is not only one of the major pieces of evidence for the Big Bang (it's a baby picture of the universe…what else could you ask for?), but it's also a window to even earlier times. We may not be able to perceive the universe before the formation of the CMB, but the physics there leaves an imprint in that radiation field. It's, well, kind of important. The further we push back in time, the stranger the universe gets — yes, even stranger than a plasma. Push back further, and stable nuclei can't form. Go even further back, and protons and neutrons can't stand the pressure and degenerate into their components: quarks and gluons. Push back even further and, well, it gets complicated. The Big Bang theory can be summarized thusly: At one time, the entire universe — everything you know and love, everything on the Earth and in the heavens — was crushed into a trillion-Kelvin ball about the size of a peach. Or apple. Or small grapefruit. Really, the fruit doesn't matter here, OK? That statement sounds absolutely ridiculous, and if you said it a few hundred years ago… Well, I hope you like barbecues, because you're about to be burned at the stake. But as crazy as this concept sounds, we can actually understand this epoch with our knowledge of high-energy physics. We can model the physics of the universe at this early stage and figure out the latter-day observational consequences. We can make predictions. We can do science. At the "peach epoch," the universe was only a tiny fraction of a second old. In fact, it was even tinier than a tiny fraction — 10^-36 seconds old, or thereabouts. From there on out, we have a roughly decent picture of how the universe works. 
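The "one-thousandth of its current extent" figure above translates directly into a temperature, because the radiation temperature scales inversely with the size of the universe. A rough worked example, using the measured present-day CMB temperature and nothing else but that scaling relation:

# Radiation temperature scales as T ~ T_today / a, where a is the relative size of the universe.
T_CMB_TODAY_K = 2.725           # measured present-day CMB temperature
a_recombination = 1.0 / 1000.0  # "one-thousandth of its current extent," as in the text

T_then = T_CMB_TODAY_K / a_recombination
print(f"~{T_then:.0f} K")  # ~2700 K: hot enough that stray photons kept ripping electrons off atoms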
Some questions are still open, of course, but in general, we have at least a vague understanding.  The further along in age the universe gets, the more clear our picture becomes, but it's almost frightening to consider that our poor monkey brains are even contemplating such early epochs in the universe. At even earlier times, though, our understanding of the universe gets … fuzzy. The forces, energies, densities and temperatures become too high, and the knowledge of physics we've cobbled together over the centuries just isn't up to the task. In the extremely early universe gravity starts to get very important at small scales, and this is the realm of quantum gravity, the yet-to-be-solved grand riddle of modern physics. We just flat-out don't have an understanding of strong gravity at small scales. Earlier than 10^-36 seconds, we simply don't understand the nature of the universe. The Big Bang theory is fantastic at describing everything after that, but before it, we're a bit lost. Get this: At small enough scales, we don't even know if the word "before" even makes sense! At incredibly tiny scales (and I'm talking tinier than the tiniest thing you could possible imagine), the quantum nature of reality rears its ugly head at full strength, rendering our neat, orderly, friendly spacetime into a broken jungle gym of loops and tangles and rusty spikes. Notions of intervals in time or space don't really apply at those scales. Who knows what's going on?  There are, of course, some ideas out there — models that attempt to describe what "ignited" or "seeded" the Big Bang, but at this stage, they're pure speculation. If these ideas can provide observational clues — for example, a special imprint on the CMB, then hooray — we can do science!  If not, they're just bedtime stories. Learn more by listening to the episode “What banged the Big Bang?” on the Ask A Spaceman podcast, available on iTunes and on the Web at http://www.askaspaceman.com. Thanks to Rafael Ribeiro for the question that led to this piece! Ask your own question on Twitter using #AskASpaceman or by following Paul @PaulMattSutter and facebook.com/PaulMattSutter. Follow all of the Expert Voices issues and debates — and become part of the discussion — on Facebook, Twitter and Google+. The views expressed are those of the author and do not necessarily reflect the views of the publisher. This version of the article was originally published on Space.com. Big Bang Theory: 5 Weird Facts About Seeing the Universe's Birth Will LSST Solve the Mysteries of Dark Matter and Dark Energy? (Kavli Hangout) Copyright 2015 SPACE.com, a Purch company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.


News Article
Site: http://news.yahoo.com/science/

Dr. Todd Pesavento is medical director of kidney and pancreas transplantation and interim executive director of the Comprehensive Transplant Center at The Ohio State University Wexner Medical Center. Pesavento contributed this article to Live Science's Expert Voices: Op-Ed & Insights. Every 10 minutes, another name goes on the list of Americans waiting for an organ transplant. Currently, the list of patients awaiting a donation is more than 122,000 names long. Most of those patients will have to wait months or even years before finding a donor organ, and unfortunately, some never will. By the end of the day, 22 more people will die while awaiting a donor organ. The problem is, there simply aren't enough donors to meet demand. Most states have tried to bring attention to the issue by giving drivers the opportunity to become donors upon getting or renewing their driver's licenses. In May, the U.S. Senate introduced the Organ Donation Awareness and Promotion Act of 2015, and though it's yet to be voted on, it would fund efforts to further promote organ donation and raise awareness of the ongoing shortage. Despite those efforts, according to the U.S. Department of Health and Human Services (HHS), the number of donors available nationwide has remained stagnant over the last decade. In 2005, there were 14,497; last year, there were 14,415. And the number of living donors from whom organs were recovered actually dropped over the same time period, by more than 16 percent. Not content to just sit and wait, patients are increasingly taking matters into their own hands and actively looking for potential living donors. Finding donors, any way we can: At the Comprehensive Transplant Center in The Ohio State University's Wexner Medical Center, we're seeing patients use everything from signs to social media to solicit donors. After one of our patients was put on a waiting list for a kidney, for example, his wife took to Facebook to share her husband's story and ask for donors. In less than a week, he had one. A former classmate of his wife's came forward to donate a kidney, and this past July, the couple celebrated the fifth anniversary of the transplant. [The 9 Most Interesting Transplants] Another patient in need of a kidney at our center not only has a Facebook page, but also painted a plea for help on her SUV, providing details of her situation, her phone number and even her blood type. Though she's yet to find a suitable donor, she's generated dozens of phone calls and, on a broader scale, raised awareness among passersby about the possibility of becoming a living donor. I find in my practice that many people want to help, but they simply don't know they can. Whenever someone comes forward to donate to one of my patients, I ask how they learned about becoming a living donor. Invariably, they say they saw a story on the news, read something in a newspaper or, increasingly, happened to notice something on social media. If you're one of those who didn't know you could become a living donor, perhaps you'll sign up after reading this. The good news is that the donor pool has broadened considerably over the last two decades.
In the past, because of the risk of the recipient's body rejecting the transplanted organ, it was thought that only immediate family members could be donors. Today, thanks to advancements in surgical techniques, such as vascular anastomosis and the use of robotics that allows less invasive incisions, combined with improvements in anti-rejection medications, there are fewer limits on who can donate, especially for kidneys. According to HHS, in the United States, there is a far greater need for kidney transplants than for any other organ. More than 100,000 people are waiting for donor kidneys, four times as many as for all other organs combined. That's where living donors could make such a big impact. According to the United Network for Organ Sharing, kidneys are the most common organ transplanted from living donors; the United States just doesn't have enough of them. Ohio State is one of the larger transplant centers in the country. Currently, we have about 800 people on the wait list for a kidney, and next year we anticipate evaluating 800 more patients for transplant. Every year, we perform transplants for about 240 people, with about half of those patients receiving transplants from a living donor.


News Article
Site: http://news.yahoo.com/science/

Troy Patchin practices getting in and out of a car as part of his physical therapy at The Ohio State University Wexner Medical Center. Patchin was burned over nearly half his body in a work accident. Dr. Larry Jones, director of the Comprehensive Burn Center (http://wexnermedical.osu.edu/patient-care/healthcare-services/burn-care) at The Ohio State University Wexner Medical Center, contributed this column to Live Science's Expert Voices: Op-Ed & Insights. Patients with severe burns, understandably, suffer from substantially diminished appetites because they're in a considerable amount of pain and, as a result, are often sedated. So it may seem counterintuitive to ask severely burned patients to consume considerably more calories than they're used to while in the hospital. Despite these challenges, when burn patients are admitted to the Comprehensive Burn Center at The Ohio State University Wexner Medical Center, we make nutrition a priority, often beginning tube feeding within 6 hours. It's an aggressive approach that helps burn patients heal faster and recently earned international recognition. When someone experiences a severe burn, defined as a second- or third-degree burn that covers at least 20 percent of the body, the hypermetabolic response is extreme. Second- and third-degree burns occur when damage extends beyond the top layer of the skin. With a second-degree burn, the skin blisters and can become extremely red and sore. Third-degree burns are the worst type, extending through every layer of the skin. The damage can even extend into the bloodstream, bones and major organs. After the body's initial shock response to the injury wears off, metabolism rates can increase up to 180 percent, heart rates can jump by up to 150 percent and the liver can increase in size by up to 200 percent. In short, the body goes into hyperdrive to heal wounds, and it looks for nutrients wherever it can find them. Unless the patient receives large amounts of supplemental nutrients, the body will rob itself of core nutrients. Essentially, if patients aren't able to meet the high calorie and protein requirements it takes to heal, their bodies will start consuming their own muscle mass in order to deliver nutrition. Muscle wasting is most obvious in the arms, legs and abdomen. Once patients lose that muscle mass, their ability to exercise, undergo rehabilitation and fight infection is severely compromised. Doctors need to intervene early in this process to prevent muscle loss and give the patient's body the nutrients it desperately needs to heal. Upon admission to the burn center, patients are evaluated by a dietitian to determine their energy and protein needs. Many are given a feeding tube almost immediately, through which we provide them with up to three to four times the amount of protein they normally receive in a day and 140 percent more calories. Each case is different, of course, so nurses monitor a patient's weight and caloric intake daily and dietitians adjust nutrients as needed. As a patient's burns heal, they are transitioned to oral meals during the day, with supplemental feedings overnight through the tube. Among other ingredients, the feeding solution contains proteins, which are used by the body to repair and close wounds caused by the burn; glucose, which fuels the healing efforts; and vitamin D, which helps modulate cell growth and, along with omega-3 fatty acids, helps control inflammation.
Ingesting such a high volume of calories and supplements can be a challenge. Severe pain is associated with a marked loss of appetite and excessive intake can lead to nausea. When necessary, we may also prescribe patients medication to allow them to tolerate the additional feedings. The healing process continues long after discharge. At a microscopic level, severe burns can take anywhere from a year to 18 months to heal — in some cases, even longer. My colleagues and I at the burn center are currently studying whether nutritional support should continue after discharge. As patients prepare to leave the burn center, dietitians help develop personalized meal plans for use at home that are high in protein and carbohydrates to stimulate continued healing. When patients return to the burn center for follow-up care for their wounds, we re-evaluate their nutritional status as well.
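For a rough sense of scale, the feeding targets described above can be expressed as simple arithmetic. The short Python sketch below uses hypothetical baseline intake values purely for illustration; only the multipliers (three to four times the usual protein, 140 percent more calories) come from the article, and none of this is clinical guidance.

# Back-of-the-envelope sketch of the feeding targets described in the article.
# The baseline values are hypothetical placeholders, not clinical guidance;
# only the multipliers (3-4x protein, "140 percent more" calories) come from the article.
baseline_kcal = 2000        # assumed typical daily calories, for illustration only
baseline_protein_g = 60     # assumed typical daily protein in grams, for illustration only

calorie_target = baseline_kcal * (1 + 1.40)   # "140 percent more calories" means 2.4x the baseline
protein_low = baseline_protein_g * 3          # "three to four times the protein"
protein_high = baseline_protein_g * 4

print(f"Calorie target: about {calorie_target:.0f} kcal/day")
print(f"Protein target: about {protein_low:.0f} to {protein_high:.0f} g/day")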


'Nanobombs' might deliver agents that alter gene activity in cancer stem cells. Abstract: Researchers at The Ohio State University Comprehensive Cancer Center -- Arthur G. James Cancer Hospital and Richard J. Solove Research Institute (OSUCCC -- James) have developed nanoparticles that swell and burst when exposed to near-infrared laser light. Such 'nanobombs' might overcome a biological barrier that has blocked development of agents that work by altering the activity -- the expression -- of genes in cancer cells. The agents might kill cancer cells outright or stall their growth. The kinds of agents that change gene expression are generally forms of RNA (ribonucleic acid), and they are notoriously difficult to use as drugs. First, they are readily degraded when free in the bloodstream. In this study, packaging them in nanoparticles that target tumor cells solved that problem. This study, published in the journal Advanced Materials, suggests that the nanobombs might also solve the second problem. When cancer cells take up ordinary nanoparticles, they often enclose them in small compartments called endosomes. This prevents the drug molecules from reaching their target, and they are soon degraded. Along with the therapeutic agent, these nanoparticles contain a chemical that vaporizes, causing them to swell three times or more in size when exposed to near-infrared laser light. The endosomes burst, dispersing the RNA agent into the cell. "A major challenge to using nanoparticles to deliver gene-regulating agents such as microRNAs is the inability of the nanoparticles to escape the compartments, the endosomes, that they are encased in when cells take up the particles," says principal investigator Xiaoming (Shawn) He, PhD, associate professor of Biomedical Engineering and member of the OSUCCC -- James Translational Therapeutics Program. "We believe we've overcome this challenge by developing nanoparticles that include ammonium bicarbonate, a small molecule that vaporizes when exposing the nanoparticles to near-infrared laser light, causing the nanoparticle and endosome to burst, releasing the therapeutic RNA," He explains. For their study, He and colleagues used human prostate-cancer cells and human prostate tumors in an animal model. The nanoparticles were equipped to target cancer stem-like cells (CSCs), which are cancer cells that have properties of stem cells. CSCs often resist therapy and are thought to play an important role in cancer development and recurrence. The therapeutic agent in the nanoparticles was a form of microRNA called miR-34a. The researchers chose this molecule because it can lower the levels of a protein that is crucial for CSC survival and may be involved in chemotherapy and radiation therapy resistance. The nanoparticles also encapsulate ammonium bicarbonate, which is a leavening agent sometimes used in baking. Near-infrared laser light, which induces vaporization of the ammonium bicarbonate, can penetrate tissue to a depth of one centimeter (nearly half an inch). For deeper tumors, the light would be delivered using minimally invasive surgery. The study's key technical findings include: nanoparticles with ammonium bicarbonate enlarged more than three times when activated with near-infrared laser (from about 100 nm in diameter at body temperature to more than 300 nm at 43 degrees C, or about 110 degrees F); endosomes measure 150-200 nm in diameter; the nanoparticles had great affinity for CSCs and very little for normal human adipose-derived stem cells; and the miR-34a nanobombs significantly reduced tumor volume in an animal model bearing human prostate tumors. Funding from an American Cancer Society Research Scholar Grant and a Pelotonia Postdoctoral Fellowship supported this research. Other researchers involved in this study were Hai Wang, Pranay Agarwal, Shuting Zhao and Jianhua Yu, all of The Ohio State University; and Xiongbin Lu of the University of Texas MD Anderson Cancer Center. About Ohio State University Comprehensive Cancer Center: The Ohio State University Comprehensive Cancer Center – Arthur G. James Cancer Hospital and Richard J. Solove Research Institute strives to create a cancer-free world by integrating scientific research with excellence in education and patient-centered care, a strategy that leads to better methods of prevention, detection and treatment. Ohio State is one of only 45 National Cancer Institute-designated Comprehensive Cancer Centers and one of only four centers funded by the NCI to conduct both phase I and phase II clinical trials on novel anticancer drugs. As the cancer program's 306-bed adult patient-care component, The James is one of the top cancer hospitals in the nation as ranked by U.S. News & World Report and has achieved Magnet designation, the highest honor an organization can receive for quality patient care and professional nursing practice. At 21 floors with more than 1.1 million square feet, The James is a transformational facility that fosters collaboration and integration of cancer research and clinical cancer care.
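A quick geometry check helps show why the swelling matters. The figures below come from the release (a sphere growing from roughly 100 nm to 300 nm in diameter, inside endosomes of 150-200 nm); the Python calculation itself is only an illustrative sanity check, not an analysis from the paper.

import math

d_initial_nm = 100.0     # nanoparticle diameter at body temperature (from the release)
d_swollen_nm = 300.0     # diameter after near-infrared heating to about 43 degrees C
endosome_nm = (150.0, 200.0)

def sphere_volume(d):
    # volume of a sphere of diameter d
    return math.pi * d**3 / 6.0

expansion = sphere_volume(d_swollen_nm) / sphere_volume(d_initial_nm)
print(f"Volume expansion: about {expansion:.0f}x")   # (300/100)^3 = 27x
print(f"A {d_swollen_nm:.0f} nm particle exceeds the {endosome_nm[0]:.0f}-{endosome_nm[1]:.0f} nm endosome, which is why the compartment bursts")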


In the age of unicorns and record-level private-market valuations, there are those who warn that the dreaded startup bubble is bearing down upon us again. That’s understandable. In recent years, many highly anticipated tech IPOs hit their peak… just before going public. Since 2011, almost half of those highly valued companies have ended up trading either flat or below their initial public offering share prices. As Silicon Valley startups continue chasing private valuations, entrepreneurs in the Midwest are following an entirely different approach. Our goal is the same — to build world-class businesses based on innovative technologies that create new wealth — but our approach is entirely different. We are proving that startup success comes from validating a real market need, then building a new company that resolves that need with a solution for which real customers will pay. Here are the ways the Midwest model of entrepreneurship creates solutions that attract early paying customers and lead to sustainable companies. The Midwest is home to internationally recognized public and private research institutions that connect R&D to commercialization and entrepreneurship. Known for interdisciplinary initiatives in nanoscience and energy, the University of Michigan (with $1.3 billion in research expenditures) has been ranked the No. 1 U.S. public research university by the National Science Foundation. Northwestern University has 90 school-based research centers, as well as strong collaborative relationships with Argonne National Laboratory and Fermi National Laboratory. The School of Pharmacy at the University of Kansas ranked second among all pharmacy schools for NIH research funding. The University of Illinois’ Research Park is home to more than 50 startup companies that are commercializing technology, and more than 90 established companies that employ more than 1,400 people and more than 450 interns. More than 30 percent of all Purdue undergraduates have at least one research experience while attending. Startups have licensed spinout technologies from The Ohio State University at a rapid pace (more than 30 startups since 2013 alone), with a focus on creating solutions for the Midwest’s main industries, such as automotive and agriculture. Simple-Fill is building a compressor that will enable the energy industry to create a fuel that goes farther for less. 3Bar Biologics is commercializing an Ohio State technology that will help farmers increase crop yields. There are 268 Fortune 1000 companies in the Midwest. Most are companies that make or distribute something. Michigan, northern Ohio, and Indiana build automobiles. Ohio has a concentration of retail and insurance. The entire region farms. Corporations within these industries have problems they need to solve and budgets to purchase solutions. In the 1980s, the Midwest learned the hard way what can happen when industries don’t innovate in the face of off-shore competition: They collapse under their own weight. Since then, we’ve moved to high-value, high-performance manufacturing. Companies in traditional industries that are thriving have created a culture that drives a high rate of new products and new processes. To achieve that, there’s a willingness on the part of Midwest corporations to seek innovation from many directions, including young inventive companies. This creates a virtuous circle for startups here.
Not only is there expanding corporate awareness about young companies’ products, there’s an open-minded willingness to write checks, both as early adopters and as investors. There’s no better way to validate products or road test a startup’s business plan than with real paying customers. There isn’t enough investment capital for all the fundable startup opportunities in the Midwest. That requires these startups to focus on capital efficiency with smaller investments. The good news is that in the Midwest, founding teams are good at the fundamentals. They know how to manage cash. They make a dime spend like a dollar. They don’t necessarily expect early adopters to pay full price, but they do expect them to pay something — either in fees, development funding or capital investment. The Midwest is a melting pot and a natural test market. Five of the top 10 MSAs that most resemble the U.S. are in the Midwest. That gives us the opportunity to exploit from the ground up all the benefits diversity provides. It’s organic and built-in from the beginning. In our specific region, out of 350 companies we evaluated over the last year, roughly 20 percent were led by women and 20 percent by minorities. More importantly, these percentages don’t change as entrepreneurs move through our engagement, diligence and investment processes. The Midwest is home to many of the largest and finest state universities in the U.S. Every year, these schools graduate thousands of students who want well-paying jobs they like. These graduates are seeking reasonable rents and starter homes they can afford. They want to raise families and to connect and make friends in communities where they fit in. That describes the Midwest to a tee. In a recent study by SmartAsset of the best U.S. large cities for college graduates, based on well-paying jobs, affordability and fun (concentration of young professionals and things to do), the Midwest dominated, with six of the top 10 cities. Three of WalletHub’s Top 10 Most Educated Cities are in Michigan and Wisconsin. And we’re leveraging the powerful resources of Venture for America (VFA) in St. Louis, Detroit and in Ohio’s three largest cities to attract outstanding graduates from other parts of the U.S. As Louisa Lee, a VFA fellow and graduate of Williams College, told us when she moved to Columbus, “I’m looking forward to making a life here. The other fellows and I want to make Columbus a target city for new Venture for America Fellows.” The Midwest approach to entrepreneurship leverages our unique strengths — innovation, corporate connections, diversity, efficiency and talent. In cities like Columbus, Cincinnati, Madison, Detroit and St. Louis we are creating our own unique startup landscape with long-term growth potential. More than 900 Midwest companies claimed positions on the 2015 Inc. 5000 list. Chicago, with its 1871 business incubator fostering 425 companies and 1,600 clients, ranks second in Inc.’s top cities for fast-growing companies. By starting with market and specific customer needs, the Midwest has developed a foundation for sustainable economic impact from new business creation. Consequently, this region is less likely to feel the effects if/when the next tech bubble does burst.


Baral N.R.,The Ohio State University | Baral N.R.,Tribhuvan University | Wituszynski D.M.,The Ohio State University | Martin J.F.,The Ohio State University | Shah A.,The Ohio State University
Energy | Year: 2016

Energy can be recovered from stillage from cellulosic biorefineries in different ways, including direct combustion and fast pyrolysis. These different energy conversion routes require different levels of inputs from natural resources, non-renewable resources, and economic services. Due to the high energetic and economic costs of stillage recovery methods, it is essential to perform a sustainability analysis of these different options before commercial deployment. Thus, the main objective of this study was to assess the relative sustainability and environmental impact of fast pyrolysis and direct combustion systems for the beneficial use of waste stillage using emergy analysis. The estimated emergy sustainability indices of direct combustion and fast pyrolysis were 0.09 and 0.07, respectively, where the renewable fraction of stillage was the most influential input parameter. Additionally, the net product transformities for direct combustion and fast pyrolysis were 7.06E+05 and 2.61E+05 seJ/J, respectively. Overall, a 23% higher emergy sustainability index for direct combustion compared to fast pyrolysis and a 63% lower overall product transformity for fast pyrolysis compared to direct combustion suggest that both systems, at the current state of the technology, offer differing advantages for stillage utilization depending upon the desired end products and uses. © 2016 Elsevier Ltd. Source
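As a quick check of the relative comparisons quoted above, the rounded values reported in the abstract can be plugged into a few lines of Python. The transformity comparison reproduces the roughly 63% figure; the sustainability-index comparison with these rounded values lands near 29%, so the paper's 23% presumably reflects unrounded numbers.

# Relative comparisons recomputed from the rounded values in the abstract.
esi_combustion = 0.09              # emergy sustainability index, direct combustion
esi_pyrolysis = 0.07               # emergy sustainability index, fast pyrolysis
transformity_combustion = 7.06e5   # net product transformity, seJ/J
transformity_pyrolysis = 2.61e5    # net product transformity, seJ/J

esi_advantage = (esi_combustion - esi_pyrolysis) / esi_pyrolysis
transformity_reduction = 1.0 - transformity_pyrolysis / transformity_combustion

print(f"Direct combustion ESI is about {esi_advantage:.0%} higher than fast pyrolysis")
print(f"Fast pyrolysis transformity is about {transformity_reduction:.0%} lower than direct combustion")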


Ni X.,Nanjing Southeast University | Luo J.,Nanjing Southeast University | Zhang B.,The Ohio State University | Teng J.,The Ohio State University | And 3 more authors.
Security and Communication Networks | Year: 2016

Location-related mobile social network services are popular nowadays, and their methods for obtaining end users’ location information are based on people’s self-reported location claims: users’ mobile devices determine their positions and send them back to the service providers. However, this mechanism has a serious vulnerability that enables malicious users to access restricted resources by transmitting fake locations. Both academic and industrial researchers have recently become aware of this problem’s importance, given the commercialization of location-related mobile social network services. To address this issue, we propose mobile phone-based physical-social location (MPSL), a location proof system to verify users’ location claims and defend against various kinds of fake location information. Our core idea is that a user’s location claim can be proved by a set of selected, physically encountered people serving as “witnesses” who are co-located with him/her in that area. The system is composed of two phases: proof generation and verification. In the proof generation phase, we leverage a certain number of co-located people to generate certificates as location proofs during their encounters via the Bluetooth interface. In the verification phase, we propose an efficient verification scheme to make our system accurate and adaptive. We have implemented the MPSL system using real-world Nokia N82 (Nokia, Espoo, Finland) phones. Our experimental results show that our mobile phone-based system can achieve high verification accuracy and good performance. Copyright © 2014 John Wiley & Sons, Ltd. Source
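To make the witness idea concrete, here is a deliberately minimal Python sketch of a witness-endorsed location claim and a quorum-based check. It is not the MPSL protocol from the paper: the message format, the helper names and the use of shared-key HMAC tags (a real system would more likely use public-key signatures) are all illustrative assumptions.

# Hypothetical sketch of witness-based location proofs; not the paper's protocol.
import hashlib
import hmac
import json
import time

def make_claim(user_id, lat, lon):
    # The claimant's self-reported location claim.
    return {"user": user_id, "lat": lat, "lon": lon, "time": int(time.time())}

def witness_endorse(witness_id, witness_key, claim):
    # A witness who encountered the claimant (e.g., over Bluetooth) tags the claim.
    msg = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(witness_key, msg, hashlib.sha256).hexdigest()
    return {"witness": witness_id, "tag": tag}

def verify(claim, proofs, witness_keys, min_witnesses=2):
    # The verifier recomputes each witness tag and requires a quorum of valid ones.
    msg = json.dumps(claim, sort_keys=True).encode()
    valid = 0
    for proof in proofs:
        key = witness_keys.get(proof["witness"])
        if key is None:
            continue
        expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
        if hmac.compare_digest(expected, proof["tag"]):
            valid += 1
    return valid >= min_witnesses

# Usage: two registered witnesses endorse Alice's claim, so it verifies.
keys = {"w1": b"witness-1-secret", "w2": b"witness-2-secret"}
claim = make_claim("alice", 40.0, -83.0)
proofs = [witness_endorse(w, k, claim) for w, k in keys.items()]
print(verify(claim, proofs, keys))   # True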


Jia R.,University of Miami | Lang S.N.,The Ohio State University | Schoppe-Sullivan S.J.,The Ohio State University
Psychological Assessment | Year: 2016

Accurate assessment of psychological self-concept in early childhood relies on the development of psychometrically sound instruments. From a developmental perspective, the current study revised an existing measure of young children's psychological self-concepts, the Child Self-View Questionnaire (CSVQ; Eder, 1990), and examined its psychometric properties using a sample of preschool-age children assessed at approximately 4 years old with a follow-up at age 5 (N = 111). The item compositions of lower order dimensions were revised, leading to improved internal consistency. Factor analysis revealed 3 latent psychological self-concept factors (i.e., sociability, control, and assurance) from the lower order dimensions. Measurement invariance by gender was supported for sociability and assurance, not for control. Test-retest reliability was supported by stability of the psychological self-concept measurement model during the preschool years, although some evidence of increasing differentiation was obtained. Validity of children's scores on the 3 latent psychological self-concept factors was tested by investigating their concurrent associations with teacher-reported behavioral adjustment on the Social Competence and Behavior Evaluation Scale-Short Form (SCBE-SF; LaFreniere & Dumas, 1996). Children who perceived themselves as higher in sociability at 5 years old displayed less internalizing behavior and more social competence; boys who perceived themselves as higher in control at age 4 exhibited lower externalizing behavior; children higher in assurance had greater social competence at age 4, but displayed more externalizing behavior at age 5. Implications relevant to the utility of the revised psychological self-concept measure are discussed. © 2015 American Psychological Association. Source
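Internal consistency of a multi-item dimension of the kind described above is commonly summarized with Cronbach's alpha. The following Python snippet is a generic illustration of that statistic using made-up item scores; it is not the CSVQ data or the authors' analysis.

# Generic Cronbach's alpha calculation with hypothetical item scores.
import numpy as np

def cronbach_alpha(items):
    # items: 2-D array, rows = respondents, columns = items on one dimension
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses from 6 children on a 4-item dimension (e.g., sociability)
scores = [[3, 4, 3, 4],
          [2, 2, 3, 2],
          [4, 4, 4, 5],
          [1, 2, 2, 1],
          [3, 3, 4, 3],
          [2, 3, 2, 2]]
print(f"alpha = {cronbach_alpha(scores):.2f}")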


Abraham Lincoln is best known for abolishing slavery and keeping the United States together through the Civil War, but he also helped the country become the scientific and engineering powerhouse we know today. For example, Lincoln signed the Morrill Act in 1862, creating a system of land-grant colleges and universities that revolutionized higher education in the United States, notes famed astrophysicist and science communicator Neil deGrasse Tyson. "Known also as the people's colleges, they were conceived with the idea that they would provide practical knowledge and science in a developing democratic republic," Tyson, the director of the American Museum of Natural History's Hayden Planetarium in New York City, writes in an editorial that appeared online today (Nov. 19) in the journal Science. Notable land-grant institutions include the Massachusetts Institute of Technology, Cornell University, the University of Florida, The Ohio State University, the University of Arizona and the schools in the vast University of California system. Lincoln, the 16th president of the United States, also chartered the National Academy of Sciences (NAS) in 1863, establishing the august body that advises Congress and the president about science and technology matters to this day, Tyson observes. Tyson ends his brief editorial by reproducing the full 272-word text of a speech he wrote in 2013 in response to a request by the Abraham Lincoln Presidential Library Foundation, as a way to help commemorate the 150th anniversary of Lincoln's famous Gettysburg Address (which was also 272 words long). The speech, which Tyson called "The Seedbed," reflects on the importance of the NAS, and of science generally to the United States and its future. "In this, the twenty-first century, innovations in science and technology form the primary engines of economic growth," Tyson's speech reads. "While most remember Honest Abe for war and peace, and slavery and freedom, the time has come to remember him for setting our Nation on a course of scientifically enlightened governance, without which we all may perish from this Earth."


Nicastro F.,National institute for astrophysics | Nicastro F.,Harvard - Smithsonian Center for Astrophysics | Nicastro F.,University of Crete | Senatore F.,National institute for astrophysics | And 9 more authors.
Monthly Notices of the Royal Astronomical Society | Year: 2016

We report on a systematic investigation of the cold and mildly ionized gaseous baryonic metal components of our Galaxy, through the analysis of high-resolution Chandra and XMM-Newton spectra of two samples of Galactic and extragalactic sources. The comparison between lines of sight towards sources located in the disc of our Galaxy and extragalactic sources allows us for the first time to clearly distinguish between gaseous metal components in the disc and halo of our Galaxy. We find that a warm ionized metal medium (WIMM) permeates a large volume above and below the Galaxy's disc, perhaps up to the circum-galactic space. This halo WIMM imprints virtually the totality of the OI and OII absorption seen in the spectra of our extragalactic targets, has a temperature T^Halo_WIMM = 2900 ± 900 K, a density 〈n_H〉^Halo_WIMM = 0.023 ± 0.009 cm^-3 and a metallicity Z^Halo_WIMM = (0.4 ± 0.1) Z⊙. Consistently with previous works, we also confirm that the disc of the Galaxy contains at least two distinct gaseous metal components, one cold and neutral (the CNMM: cold neutral metal medium) and one warm and mildly ionized, with the same temperature as the halo WIMM but higher density (〈n_H〉^Disc_WIMM = 0.09 ± 0.03 cm^-3) and metallicity (Z^Disc_WIMM = (0.8 ± 0.1) Z⊙). By adopting a simple disc+sphere geometry for the Galaxy, we estimate masses of the CNMM and the total (disc + halo) WIMM of M_CNMM ≲ 8 × 10^8 M⊙ and M_WIMM ≃ 8.2 × 10^9 M⊙. © 2016 The Authors Published by Oxford University Press on behalf of the Royal Astronomical Society. Source


News Article
Site: http://www.greencarcongress.com/

Cummins Inc. has been awarded a $4.5-million grant from the US Department of Energy to develop a Class 6 commercial plug-in hybrid electric vehicle that can reduce fuel consumption by at least 50% over conventional Class 6 vehicles. (Earlier post.) When fully loaded, Class 6 vehicles weigh between approximately 19,000 and 26,000 pounds; typical examples include school buses or single-axle work trucks. With their expertise in internal combustion engines and related products, Cummins researchers will optimize the powertrain by selecting the engine with the best architecture to use as an electric commercial vehicle range extender, using the engine to manage the charge level of the all-electric drive battery pack. The range extender will be integrated, using advanced vehicle controls, with the electrified powertrain and other applicable technologies. Ultimately, the researchers aim to demonstrate improved fuel consumption and state-of-the-art drivability and performance regardless of environmental conditions. Cummins is partnering with PACCAR on the project, and the full team includes representatives from The Ohio State University, National Renewable Energy Laboratory and Argonne National Laboratory. The close integration and control of the electrified powertrain with an appropriately selected engine is critically important to developing a plug-in hybrid electric vehicle system. We believe that through the team’s efforts we can soon make these innovations commercially available. The reduction of fuel consumption will be met or exceeded during a wide range of drive cycles designed to meet the needs of a wide variety of commercial fleet operators. The fuel reduction goals will be achieved through the use of an electrified vehicle powertrain, optimization of the internal combustion engine operation, and other technologies including intelligent transportation systems and electronic braking.
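As a rough illustration of the range-extender concept described above (the engine exists only to keep the traction battery's state of charge in a usable window while the electric drive does the work), here is a simple thermostat-style controller sketch in Python. It is a generic textbook strategy with made-up thresholds, not Cummins' or PACCAR's actual control design.

# Generic charge-sustaining range-extender logic; thresholds are illustrative only.
class RangeExtenderController:
    def __init__(self, soc_low=0.30, soc_high=0.45):
        self.soc_low = soc_low      # start the engine/generator below this state of charge
        self.soc_high = soc_high    # stop it again above this state of charge
        self.engine_on = False

    def update(self, soc):
        # Hysteresis keeps the engine from rapidly cycling on and off.
        if soc <= self.soc_low:
            self.engine_on = True
        elif soc >= self.soc_high:
            self.engine_on = False
        return self.engine_on

# Usage: SOC drifts down under electric driving, then the engine cycles on to recharge.
ctrl = RangeExtenderController()
for soc in [0.50, 0.42, 0.33, 0.29, 0.34, 0.41, 0.46, 0.44]:
    print(f"SOC {soc:.2f} -> engine {'ON' if ctrl.update(soc) else 'off'}")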


Inspired by the damping mechanisms which sustain trees under wind and seismic loads, researchers at The Ohio State University and the University of Michigan are investigating the potential for the development of energy harvesting systems that efficiently convert the same motion- and wind-based excitations into electric power. In a paper in the Journal of Sound and Vibration, they report demonstrating that tree-like structures made with electromechanical materials can convert random forces—such as winds or footfalls on a bridge—into strong structural vibrations that are ideal for generating electricity. The technology may prove most valuable when applied on a small scale, in situations where other renewable energy sources such as solar are not an option, said project leader Ryan Harne, assistant professor of mechanical and aerospace engineering at Ohio State, and director of the Laboratory of Sound and Vibration Research. … The themes of these studies suggest that nonlinearity and multimodality play critical roles in the dynamical behaviors of trees for the purposes of structural damping. Indeed, the occurrence of an internal resonance suggests that some trees exploit particularly unique energy transfer characteristics. For different purposes altogether, energy harvesting structures are designed to efficiently absorb and electrically dissipate the vibrations to which they are subjected. While to date there have been numerous energy harvesting investigations focused on nonlinearity or multimodality as individual features, there are few that have considered exploiting both phenomena concurrently to improve energy conversion. The idea of using tree-like devices to capture wind or vibration energies may seem straightforward, because real trees clearly dissipate energy when they sway. Although other research groups have tested the effectiveness of similar tree structures, until now, they haven’t made a concerted effort to capture realistic ambient vibrations with a tree-shaped electromechanical device—mainly because it was assumed that random forces of nature wouldn’t be very suitable for generating the consistent oscillations that yield useful electrical energies. Through mathematical modeling, Harne determined that it is possible for tree-like structures to maintain vibrations at a consistent frequency despite large, random inputs, so that the energy can be effectively captured and stored via power circuitry. The phenomenon is called internal resonance, and it’s how certain mechanical systems dissipate internal energies. In particular, he determined that he could exploit internal resonance to coax an electromechanical tree to vibrate with large amplitudes at a consistent low frequency, even when the tree was experiencing only high frequency forces. It even worked when these forces were significantly overwhelmed by extra random noise, as natural ambient vibrations would be in many environments. Harne and his colleagues tested the mathematical model in an experiment, where they built a tree-like device out of two small steel beams—one a tree “trunk” and the other a “branch”—connected by a strip of an electromechanical material, polyvinylidene fluoride (PVDF), to convert the structural oscillations into electrical energy.
They installed the model tree on a device that shook it back and forth at high frequencies. At first, to the eye, the tree didn’t seem to move because the device oscillated with only small amplitudes at a high frequency. Regardless, the PVDF produced a small voltage from the motion: about 0.8 volts. Then they added noise to the system, as if the tree were being randomly nudged slightly more one way or the other. The tree began displaying what Harne called “saturation phenomena”: It reached a tipping point where the high frequency energy was suddenly channeled into a low frequency oscillation. At this point, the tree swayed noticeably back and forth, with the trunk and branch vibrating in sync. This low frequency motion produced more than double the voltage—around 2 volts. Those are low voltages, but the experiment was a proof-of-concept: Random energies can produce vibrations that are useful for generating electricity. Early applications would include powering the sensors that monitor the structural integrity and health of civil infrastructure, such as buildings and bridges. Harne envisions tiny tree-like structures feeding voltages to a sensor on the underside of a bridge, or on a girder deep inside a high-rise building. Today, the only way to power most structural sensors is to use batteries or plug the sensors directly into power lines, both of which are expensive and hard to manage for sensors planted in remote locations. If sensors could capture vibrational energy, they could acquire and wirelessly transmit their data in a truly self-sufficient way. The initial phase of this research was supported by the University of Michigan Summer Undergraduate Research in Engineering program and the University of Michigan Collegiate Professorship.
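The saturation behavior described above can be reproduced qualitatively with a textbook model: two modes with a 2:1 frequency ratio coupled through quadratic terms, driven at the higher frequency. The Python sketch below integrates such a system with illustrative parameters; it is a generic Nayfeh-style demonstration of internal resonance, not the equations or values from the Journal of Sound and Vibration paper.

# Generic two-mode internal-resonance (saturation) demonstration; parameters are illustrative.
import numpy as np

w1, w2 = 1.0, 2.0            # natural frequencies of the low ("branch") and high ("trunk") modes
mu1, mu2 = 0.05, 0.05        # modal damping coefficients
a1, a2 = 1.0, 0.5            # quadratic coupling coefficients
F, Omega = 0.15, 2.0         # drive amplitude and frequency (forcing the high mode)

def rhs(t, y):
    u1, v1, u2, v2 = y
    du1 = v1
    dv1 = -2.0*mu1*v1 - w1**2*u1 + a1*u1*u2                        # low mode, parametrically coupled
    du2 = v2
    dv2 = -2.0*mu2*v2 - w2**2*u2 + a2*u1**2 + F*np.cos(Omega*t)    # high mode, directly driven
    return np.array([du1, dv1, du2, dv2])

def rk4_step(y, t, dt):
    k1 = rhs(t, y)
    k2 = rhs(t + dt/2, y + dt/2*k1)
    k3 = rhs(t + dt/2, y + dt/2*k2)
    k4 = rhs(t + dt, y + dt*k3)
    return y + dt/6*(k1 + 2*k2 + 2*k3 + k4)

dt, t_end = 0.01, 600.0
y = np.array([1e-2, 0.0, 0.0, 0.0])      # tiny seed in the low-frequency mode
history = []
for i in range(int(t_end/dt)):
    y = rk4_step(y, i*dt, dt)
    history.append(y.copy())
history = np.array(history)

late = history[int(0.8*len(history)):]    # look at the settled response only
print("low-frequency mode RMS :", round(float(np.sqrt((late[:, 0]**2).mean())), 3))
print("high-frequency mode RMS:", round(float(np.sqrt((late[:, 2]**2).mean())), 3))
# With these illustrative values, the directly driven high-frequency mode saturates at a small
# amplitude while the low-frequency mode ends up carrying most of the motion.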


News Article
Site: http://phys.org/technology-news/

A project at The Ohio State University is testing whether high-tech objects that look a bit like artificial trees can generate renewable power when they are shaken by the wind—or by the sway of a tall building, traffic on a bridge or even seismic activity. In a recent issue of the Journal of Sound and Vibration, researchers report that they've uncovered something new about the vibrations that pass through tree-shaped objects when they are shaken. Specifically, they've demonstrated that tree-like structures made with electromechanical materials can convert random forces—such as winds or footfalls on a bridge—into strong structural vibrations that are ideal for generating electricity. The idea may conjure images of fields full of mechanical trees swaying in the breeze. But the technology may prove most valuable when applied on a small scale, in situations where other renewable energy sources such as solar are not an option, said project leader Ryan Harne, assistant professor of mechanical and aerospace engineering at Ohio State, and director of the Laboratory of Sound and Vibration Research. The "trees" themselves would be very simple structures: think of a trunk with a few branches—no leaves required. Early applications would include powering the sensors that monitor the structural integrity and health of civil infrastructure, such as buildings and bridges. Harne envisions tiny trees feeding voltages to a sensor on the underside of a bridge, or on a girder deep inside a high-rise building. The project takes advantage of the plentiful vibrational energy that surrounds us every day, he said. Some sources are wind-induced structural motions, seismic activity and human activity. "Buildings sway ever so slightly in the wind, bridges oscillate when we drive on them and car suspensions absorb bumps in the road," he said. "In fact, there's a massive amount of kinetic energy associated with those motions that is otherwise lost. We want to recover and recycle some of that energy." Sensors monitor the soundness of a structure by detecting the vibrations that pass through it, he explained. The initial aim of the project is to turn those vibrations into electricity, so that structural monitoring systems could actually be powered by the same vibrations they are monitoring. Today, the only way to power most structural sensors is to use batteries or plug the sensors directly into power lines, both of which are expensive and hard to manage for sensors planted in remote locations. If sensors could capture vibrational energy, they could acquire and wirelessly transmit their data in a truly self-sufficient way. At first, the idea of using tree-like devices to capture wind or vibration energies may seem straightforward, because real trees obviously dissipate energy when they sway. And other research groups have tested the effectiveness of similar tree structures using idealized—that is, not random—vibrations. But until now, researchers haven't made a concerted effort to capture realistic ambient vibrations with a tree-shaped electromechanical device—mainly because it was assumed that random forces of nature wouldn't be very suitable for generating the consistent oscillations that yield useful electrical energies. First, through mathematical modeling, Harne determined that it is possible for tree-like structures to maintain vibrations at a consistent frequency despite large, random inputs, so that the energy can be effectively captured and stored via power circuitry.
The phenomenon is called internal resonance, and it's how certain mechanical systems dissipate internal energies. In particular, he determined that he could exploit internal resonance to coax an electromechanical tree to vibrate with large amplitudes at a consistent low frequency, even when the tree was experiencing only high frequency forces. It even worked when these forces were significantly overwhelmed by extra random noise, as natural ambient vibrations would be in many environments. He and his colleagues tested the mathematical model in an experiment, where they built a tree-like device out of two small steel beams—one a tree "trunk" and the other a "branch"—connected by a strip of an electromechanical material, polyvinylidene fluoride (PVDF), to convert the structural oscillations into electrical energy. They installed the model tree on a device that shook it back and forth at high frequencies. At first, to the eye, the tree didn't seem to move because the device oscillated with only small amplitudes at a high frequency. Regardless, the PVDF produced a small voltage from the motion: about 0.8 volts. Then they added noise to the system, as if the tree were being randomly nudged slightly more one way or the other. That's when the tree began displaying what Harne called "saturation phenomena": It reached a tipping point where the high frequency energy was suddenly channeled into a low frequency oscillation. At this point, the tree swayed noticeably back and forth, with the trunk and branch vibrating in sync. This low frequency motion produced more than double the voltage—around 2 volts. Those are low voltages, but the experiment was a proof-of-concept: Random energies can produce vibrations that are useful for generating electricity. "In addition, we introduced massive amounts of noise, and found that the saturation phenomenon is very robust, and the voltage output reliable. That wasn't known before," Harne said. Harne will continue this work, which he began when he was a postdoctoral researcher at the University of Michigan. There, his colleagues and co-authors on the paper were Kon-Well Wang and Anqi Sun of the Department of Mechanical Engineering. More information: Leveraging nonlinear saturation-based phenomena in an L-shaped vibration energy harvesting system, DOI: 10.1016/j.jsv.2015.11.017


Gunn J.S.,The Ohio State University | Marshall J.M.,The Ohio State University | Baker S.,University of Oxford | Baker S.,London School of Hygiene and Tropical Medicine | And 5 more authors.
Trends in Microbiology | Year: 2014

Typhoid (enteric fever) remains a major cause of morbidity and mortality worldwide, causing over 21 million new infections annually, with the majority of deaths occurring in young children. Because typhoid fever-causing Salmonella have no known environmental reservoir, the chronic, asymptomatic carrier state is thought to be a key feature of continued maintenance of the bacterium within human populations. Despite the importance of this disease to public health, our understanding of the molecular mechanisms that catalyze carriage, as well as our ability to reliably identify and treat the Salmonella carrier state, have only recently begun to advance. © 2014 Elsevier Ltd. Source
