Pittsburgh, PA, United States

Carnegie Mellon University is a private research university in Pittsburgh, Pennsylvania. The university began as the Carnegie Technical Schools, founded by Andrew Carnegie in 1900. In 1912, the school became the Carnegie Institute of Technology and began granting four-year degrees. In 1967, the Carnegie Institute of Technology merged with the Mellon Institute of Industrial Research to form Carnegie Mellon University. The university's 140-acre main campus is 3 miles from Downtown Pittsburgh and abuts the Carnegie Museums of Pittsburgh, the main branch of the Carnegie Library of Pittsburgh, the Carnegie Music Hall, Schenley Park, Phipps Conservatory and Botanical Gardens, the Pittsburgh Golf Club, and the campus of the University of Pittsburgh in the city's Oakland and Squirrel Hill neighborhoods, partially extending into Shadyside. Carnegie Mellon has seven colleges and independent schools: the Carnegie Institute of Technology, College of Fine Arts, Dietrich College of Humanities and Social Sciences, Mellon College of Science, Tepper School of Business, H. John Heinz III College, and the School of Computer Science. Carnegie Mellon fields 17 varsity athletic teams as part of the University Athletic Association conference of NCAA Division III. (Wikipedia)



Patent
Carnegie Mellon University | Date: 2016-07-28

Provided herein are methods of suppressing viral nucleic acid, e.g. double-stranded (ds) DNA, genome release from or packaging of viruses having their nucleic acid genome packaged under stress in their capsid, and compositions useful for that purpose. The methods alter the ionic environment of the nucleic acid within the capsid and thereby prevent release of, and/or interfere with packaging of the viral genome.


Patent
Carnegie Mellon University | Date: 2016-09-19

The disclosure describes a prism containing a microfluidic channel. By coupling bulk acoustic wave generators to opposing sides of the prism, a standing bulk acoustic wave field can be excited in the prism and in the microfluidic channel. Because the microfluidic channel is tilted with respect to the nodes of the bulk acoustic wave field, the prism microfluidic channel device can be used to separate microparticles and biological cells by size, compressibility, density, shape, or mass distribution. This technology enables high-throughput cell sorting for biotechnology applications such as cancer cell detection.


Patent
Carnegie Mellon University | Date: 2015-05-08

A method of making optically pure preparations of chiral PNA (gamma peptide nucleic acid) monomers is provided. Nanostructures comprising chiral PNA structures are also provided. Methods of amplifying and detecting specific nucleic acids, including in situ methods, are provided, as well as compositions and kits useful in those methods. Lastly, methods of converting nucleobase sequences from right-handed helical PNA, nucleic acid, and nucleic acid analog structures to left-handed PNA, and vice versa, are provided.


Patent
Carnegie Mellon University and Regents Of The University Of California | Date: 2015-03-13

A computer-implemented method includes accessing a plurality of sets of outputs for an interactive animation, with each set of outputs being associated with a different sequence of a plurality of sequences of discrete control inputs, and with each set of outputs comprising an output that provides a stored portion of the animation; and transmitting, to a client device, information indicative of at least one of the plurality of sets of outputs for the animation and the output that provides the stored portion of the animation, which when rendered by the client device causes the animation to be presented to a user.


Patent
Carnegie Mellon University | Date: 2015-03-17

This invention describes methods and systems for the use of computer vision systems for classification of biological cells as an aid in disease diagnostics. More particularly, the present invention describes a process comprising: employing a robust and discriminative color space that helps provide segmentation of the cells; employing a segmentation algorithm, such as a feature-based level set, able to segment the cells using a different k-phase segmentation process, which detects, for example, whether a white blood cell is present, so that the internal components of the cell can be segmented robustly; employing a combination of different types of features, including shape, texture, and invariant information; and employing a classification step to associate abnormal cell characteristics with disease states.
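As a rough illustration of the k-phase idea (and only that; this is a toy stand-in, not the patent's feature-based level-set method), one can partition pixel colors into k phases with plain k-means clustering. All data and parameter choices below are hypothetical.

```python
import numpy as np

def kmeans_color_phases(pixels, k, iters=20):
    """Toy k-phase segmentation: cluster pixel colors into k phases
    with plain k-means (a simplified stand-in for the feature-based
    level-set segmentation described above)."""
    # Deterministic init: pick k evenly spaced pixels as seed centers.
    centers = pixels[np.linspace(0, len(pixels) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # Assign each pixel to its nearest center in color space.
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean color of its phase.
        for j in range(k):
            if (labels == j).any():
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels

# Synthetic "image" as a pixel list: dark background, red-ish nuclei,
# pale cytoplasm (made-up colors, Gaussian noise).
rng = np.random.default_rng(1)
img = np.vstack([np.full((50, 3), c) + rng.normal(0, 5, (50, 3))
                 for c in [(10, 10, 10), (180, 40, 60), (220, 200, 210)]])
labels = kmeans_color_phases(img, 3)
```

With well-separated phase colors, each 50-pixel block ends up in its own cluster; a real pipeline would operate on an actual color image and feed the resulting phase masks into the feature extraction and classification steps.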


Identifying a masked suspect is one of the toughest challenges in biometrics. It is an important problem faced in many law-enforcement applications on almost a daily basis. In such situations, investigators often only have access to the periocular region of a suspect's face and, unfortunately, conventional commercial matchers are unable to process these images in such a way that the suspect can be identified. Herein, a practical method to hallucinate a full frontal face given only the periocular region of a face is presented. This approach reconstructs the entire frontal face based on an image of an individual's periocular region. By using an approach based on a modified sparsifying dictionary learning algorithm, faces can be reconstructed more accurately than with conventional methods. Further, various methods presented herein are open set, and thus can reconstruct faces even if the algorithms are not specifically trained using those faces.


Patent
Carnegie Mellon University | Date: 2017-02-01

An articulated probe (10), comprising: a first mechanism (12, 14) comprised of a plurality of links; a second mechanism (12, 14) comprised of a plurality of links; a first wire extending through either said plurality of links of said first mechanism (12, 14) or said plurality of links of said second mechanism (12, 14) and a plurality of wires running through the other of said plurality of links of said first mechanism (12, 14) or said plurality of links of said second mechanism (12, 14); a device for producing command signals; and an electromechanical feeder (16) responsive to said command signals, said electromechanical feeder (16) capable of alternating each of said first mechanism (12) and second mechanism (14) between a limp mode and a rigid mode and comprising: a first motor for controlling the tension of said first wire or said plurality of wires, one of said first mechanism or said second mechanism being responsive to said first motor; and a second motor for controlling the tension of the other of said first wire or said plurality of wires, the other of said first mechanism or said second mechanism being responsive to said second motor and a method of moving the articulated probe.


Woolley A.W.,Carnegie Mellon University
AMA journal of ethics | Year: 2016

Teams offer the potential to achieve more than any person could achieve working alone; yet, particularly in teams that span professional boundaries, it is critical to capitalize on the variety of knowledge, skills, and abilities available. This article reviews research from the field of organizational behavior to shed light on what makes for a collectively intelligent team. In doing so, we highlight the importance of moving beyond simply including smart people on a team to thinking about how those people can effectively coordinate and collaborate. In particular, we review the importance of two communication processes: ensuring that team members with relevant knowledge (1) speak up when their expertise can be helpful and (2) influence the team's work so that the team does its collective best for the patient. © 2016 American Medical Association. All Rights Reserved.


Lee E.,Carnegie Mellon University
Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms | Year: 2017

Given a graph G = (V, E) and an integer k ∈ ℕ, we study k-Vertex Separator (resp. k-Edge Separator), where the goal is to remove the minimum number of vertices (resp. edges) such that each connected component in the resulting graph has at most k vertices. Our primary focus is on the case where k is either a constant or a slowly growing function of n (e.g. O(log n) or n^o(1)). Our problems can be interpreted as a special case of three general classes of problems that have been studied separately (balanced graph partitioning, Hypergraph Vertex Cover (HVC), and fixed parameter tractability (FPT)). Our main result is an O(log k)-approximation algorithm for k-Vertex Separator that runs in time 2^O(k) · n^O(1), and an O(log k)-approximation algorithm for k-Edge Separator that runs in time n^O(1). Our result on k-Edge Separator improves the best previous graph partitioning algorithm [24] for small k. Our result on k-Vertex Separator improves the simple (k+1)-approximation from HVC [3]. When OPT > k, the running time 2^O(k) · n^O(1) is faster than the lower bound k^Ω(OPT) · n^Ω(1) for exact algorithms assuming the Exponential Time Hypothesis [12]. While the running time of 2^O(k) · n^O(1) for k-Vertex Separator seems unsatisfactory, we show that the superpolynomial dependence on k may be needed to achieve a polylogarithmic approximation ratio, based on the hardness of Densest k-Subgraph. We also study k-Path Transversal, where the goal is to remove the minimum number of vertices such that there is no simple path of length k. With additional ideas from FPT algorithms and graph theory, we present an O(log k)-approximation algorithm for k-Path Transversal that runs in time 2^O(k^3 log k) · n^O(1). Previously, the existence of even an o(k)-approximation algorithm was open [9]. Copyright © by SIAM.
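To make the k-Vertex Separator objective concrete (this only checks feasibility of a candidate solution; it is not the paper's approximation algorithm), a BFS over the residual graph verifies that every remaining component has at most k vertices:

```python
from collections import deque

def is_k_vertex_separator(adj, removed, k):
    """Check whether deleting the vertex set `removed` leaves every
    connected component of the graph with at most k vertices."""
    removed = set(removed)
    seen = set(removed)
    for start in adj:
        if start in seen:
            continue
        # BFS one component, counting its vertices.
        size, queue = 0, deque([start])
        seen.add(start)
        while queue:
            v = queue.popleft()
            size += 1
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        if size > k:
            return False
    return True

# A path on 5 vertices: removing the middle vertex leaves
# two components of 2 vertices each, so it is a valid 2-separator.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(is_k_vertex_separator(path, {2}, 2))   # True
print(is_k_vertex_separator(path, set(), 2)) # False
```

The optimization problem the paper approximates is to find the smallest such `removed` set, which is NP-hard in general.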


Haeupler B.,Carnegie Mellon University | Harris D.G.,University of Maryland University College
Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms | Year: 2017

The Lovász Local Lemma (LLL) is a cornerstone principle in the probabilistic method of combinatorics, and a seminal algorithm of Moser & Tardos (2010) provides an efficient randomized algorithm to implement it. This algorithm can be parallelized to give an algorithm that uses polynomially many processors and runs in O(log^3 n) time, stemming from O(log n) adaptive computations of a maximal independent set (MIS). Chung et al. (2014) developed faster local and parallel algorithms, potentially running in time O(log^2 n), but these algorithms work under significantly more stringent conditions than the LLL. We give a new parallel algorithm that works under essentially the same conditions as the original algorithm of Moser & Tardos but uses only a single MIS computation, thus running in O(log^2 n) time. This conceptually new algorithm also gives a clean combinatorial description of a satisfying assignment which might be of independent interest. Our techniques extend to the deterministic LLL algorithm given by Chandrasekaran et al. (2013) leading to an NC-algorithm running in time O(log^2 n) as well. We also provide improved bounds on the runtimes of the sequential and parallel resampling-based algorithms originally developed by Moser & Tardos. Our bounds extend to any problem instance in which the tighter Shearer LLL criterion is satisfied. We also improve on the analysis of Kolipaka & Szegedy (2011) to give tighter concentration results. Copyright © by SIAM.
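The sequential Moser-Tardos algorithm that the abstract builds on is simple to state: draw a random assignment, and while any bad event occurs, resample only the variables that event depends on. A minimal sketch for SAT clauses (a toy instance with made-up clauses, not the paper's parallel variant):

```python
import random

def moser_tardos_sat(clauses, n_vars, rng=random.Random(0)):
    """Sequential Moser-Tardos resampling for SAT.
    Each clause is a list of (variable, wanted_value) literals;
    a clause is violated when every literal disagrees with the
    assignment. While some clause is violated, resample just the
    variables that clause depends on. Under the LLL condition
    (each variable shared by few clauses), the expected number of
    resamplings is linear."""
    assign = [rng.random() < 0.5 for _ in range(n_vars)]

    def violated(clause):
        return all(assign[v] != want for v, want in clause)

    while True:
        bad = [c for c in clauses if violated(c)]
        if not bad:
            return assign
        for v, _ in bad[0]:          # resample the first bad event
            assign[v] = rng.random() < 0.5

# Toy 2-SAT instance: (x0 or not x1), (x1 or x2), (not x0 or not x2)
clauses = [[(0, True), (1, False)], [(1, True), (2, True)],
           [(0, False), (2, False)]]
sol = moser_tardos_sat(clauses, 3)
```

The parallel version discussed in the abstract resamples a maximal independent set of violated events per round instead of one event at a time.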


OBJECTIVE: Mindfulness meditation training has been previously shown to enhance behavioral measures of executive control (e.g. attention, working memory, cognitive control), but the neural mechanisms underlying these improvements are largely unknown. Here, we test whether mindfulness training interventions foster executive control by strengthening functional connections between the dorsolateral prefrontal cortex (dlPFC), a hub of the executive control network, and frontoparietal regions that coordinate executive function. METHODS: Thirty-five adults with elevated levels of psychological distress participated in a 3-day RCT of intensive mindfulness meditation or relaxation training. Participants completed a resting state fMRI scan before and after the intervention. We tested whether mindfulness meditation training increased resting state functional connectivity (rsFC) between dlPFC and frontoparietal control network regions. RESULTS: Left dlPFC showed increased connectivity to the right inferior frontal gyrus (T = 3.74), right middle frontal gyrus (T = 3.98), right supplementary eye field (T = 4.29), right parietal cortex (T = 4.44), and left middle temporal gyrus (T = 3.97; all p < 0.05) following mindfulness training relative to the relaxation control. Right dlPFC showed increased connectivity to right middle frontal gyrus (T = 4.97, p < 0.05). CONCLUSIONS: We report that mindfulness training increases rsFC between dlPFC and dorsal network (superior parietal lobule, supplementary eye field, MFG) and ventral network (right IFG, middle temporal/angular gyrus) regions. These findings extend previous work showing increased functional connectivity amongst brain regions associated with executive function during active meditation by identifying specific neural circuits in which rsFC is enhanced by a mindfulness intervention in individuals with high levels of psychological distress.
TRIAL REGISTRATION: Clinicaltrials.gov (#NCT01628809) Copyright © 2017 by American Psychosomatic Society


Crary K.,Carnegie Mellon University
Conference Record of the Annual ACM Symposium on Principles of Programming Languages | Year: 2017

Reynolds's Abstraction theorem forms the mathematical foundation for data abstraction. His setting was the polymorphic lambda calculus. Today, many modern languages, such as the ML family, employ rich module systems designed to give more expressive support for data abstraction than the polymorphic lambda calculus, but analogues of the Abstraction theorem for such module systems have lagged far behind. We give an account of the Abstraction theorem for a modern module calculus supporting generative and applicative functors, higher-order functors, sealing, and translucent signatures. The main issues to be overcome are: (1) the fact that modules combine both types and terms, so they must be treated as both simultaneously, (2) the effect discipline that models the distinction between transparent and opaque modules, and (3) a very rich language of type constructors supporting singleton kinds. We define logical equivalence for modules and show that it coincides with contextual equivalence. This substantiates the folk theorem that modules are good for data abstraction. All our proofs are formalized in Coq. © 2017 ACM.


Schwartz R.,Carnegie Mellon University
Nature Reviews Genetics | Year: 2017

Rapid advances in high-throughput sequencing and a growing realization of the importance of evolutionary theory to cancer genomics have led to a proliferation of phylogenetic studies of tumour progression. These studies have yielded not only new insights but also a plethora of experimental approaches, sometimes reaching conflicting or poorly supported conclusions. Here, we consider this body of work in light of the key computational principles underpinning phylogenetic inference, with the goal of providing practical guidance on the design and analysis of scientifically rigorous tumour phylogeny studies. We survey the range of methods and tools available to the researcher, their key applications, and the various unsolved problems, closing with a perspective on the prospects and broader implications of this field. © 2017 Nature Publishing Group, a division of Macmillan Publishers Limited. All Rights Reserved.


Idemaru K.,University of Oregon | Holt L.L.,Carnegie Mellon University
Journal of Experimental Psychology: Human Perception and Performance | Year: 2011

Speech processing requires sensitivity to long-term regularities of the native language yet demands listeners to flexibly adapt to perturbations that arise from talker idiosyncrasies such as nonnative accent. The present experiments investigate whether listeners exhibit dimension-based statistical learning of correlations between acoustic dimensions defining perceptual space for a given speech segment. While engaged in a word recognition task guided by perceptually unambiguous voice-onset time (VOT) acoustics to signal beer, pier, deer, or tear, listeners were exposed incidentally to an artificial "accent" deviating from English norms in its correlation of the pitch onset of the following vowel (F0) to VOT. Results across four experiments are indicative of rapid, dimension-based statistical learning; reliance on the F0 dimension in word recognition was rapidly down-weighted in response to the perturbation of the correlation between F0 and VOT dimensions. However, listeners did not simply mirror the short-term input statistics. Instead, response patterns were consistent with a lingering influence of sensitivity to the long-term regularities of English. This suggests that the very acoustic dimensions defining perceptual space are not fixed and, rather, are dynamically and rapidly adjusted to the idiosyncrasies of local experience, such as might arise from nonnative accent, dialect, or dysarthria. The current findings extend demonstrations of "object-based" statistical learning across speech segments to include incidental, online statistical learning of regularities residing within a speech segment. © 2011 American Psychological Association.


News Article | April 26, 2017
Site: www.materialstoday.com

Advances in the processes that create long chain polymers from small organic molecules – or monomers – have enabled their ubiquity in everything from cosmetics, drugs, and biomedical devices to paints, coatings, adhesives, and microelectronics. But the conditions for polymerization have to be just right. The most common process, called radical polymerization (RP), uses radical chemistry to join monomers into a polymer chain. Over the last 25 years, the process has been refined and adapted to give better control over the final product. One particularly useful extension of the process is atom transfer radical polymerization (ATRP), developed by Krzysztof Matyjaszewski and his team at Carnegie Mellon University in the 1990s, which is simple to set up and can produce a wide range of functional materials. “ATRP has become an everyday, rather than a specialty, polymerization method as a result of the breadth of available techniques and their robustness, conjoined with the simplicity of the reaction set up,” says Matyjaszewski. In a comprehensive review, he and co-author Pawel Krys explain how ATRP uses Cu complexes to drive polymerization in a rather surprising way [European Polymer Journal 89 (2017) 482–523]. In conventional RP, the reaction proceeds very quickly, giving no time to tailor the chemical structure of the polymers produced. ATRP, by contrast, switches the growing polymer chains between a dormant ‘sleeping’ state and brief periods of activity. Extending the reaction time from a few seconds up to many hours provides a window of opportunity for manipulation of the polymers’ chemical structure. “All the polymer chains start growing at the same time and grow synchronously, which allows polymers with narrow molecular weight distribution, desired molecular weight, and complex architectures to be obtained easily,” explains Matyjaszewski. ATRP comes in two flavors: original (or ‘normal’) and ‘activator regeneration’. 
In the normal form, equivalent amounts of an initiator – usually an alkyl halide containing a halogen atom such as chlorine or bromine – and a catalyst in the lower oxidation state are used. A catalyst in this form, however, is unstable and difficult to handle. To get around this, and reduce the amount of catalyst required, activator regeneration ATRP uses an oxidized catalyst and a reducing agent to regenerate the metal in the lower oxidation state continuously and drive the polymerization. Lower levels of catalyst are desirable from both economic and environmental points of view. More recently, interest has turned to metal-free catalysts and new ways of controlling the polymerization reaction externally. “Light is an external stimulus, so polymerization can be stopped and restarted by turning it on or off, or tuned by adjusting the irradiation wavelength, source intensity, and the distance from the reaction vessel,” points out Matyjaszewski. “Other stimuli include electrical current or mechanical forces that can provide spatiotemporal control and turn on/off polymerization.” Substantial progress has been made in ATRP over the last 20 years and the future promises to be no less exciting. ATRP offers a simple setup, uses a wide range of commercially available reaction components, and can be conducted under different conditions, including ones that are biologically relevant. Better understanding of ATRP is paving the way for new advances in process optimization and commercialization of new products. Ultimately, further refinement of ATRP could enable more sustainable, efficient, and ‘greener’ polymerization with substantially improved control, suggest Maciek Kopec and G. Julius Vancso of the University of Twente, the latter a senior editor of European Polymer Journal.
“Thanks to the deep mechanistic understanding provided by Matyjaszewski and co-workers, ATRP has become the technique of choice for the easy preparation of well-defined polymers and will continue to establish its enabling role in materials chemistry, with an increasing number of ATRP-made commercial products,” they say. “In the future, we anticipate a growing number of studies using the ATRP toolbox to synthesize sophisticated, complex polymer architectures such as block copolymers, bottlebrushes or (bio)hybrids with applications in medicine, energy conversion/storage, and other areas.”


News Article | April 27, 2017
Site: www.biosciencetechnology.com

Researchers at MIT, Brigham and Women’s Hospital, and the Charles Stark Draper Laboratory have devised a way to wirelessly power small electronic devices that can linger in the digestive tract indefinitely after being swallowed. Such devices could be used to sense conditions in the gastrointestinal tract, or carry small reservoirs of drugs to be delivered over an extended period. Finding a safe and efficient power source is a critical step in the development of such ingestible electronic devices, said Giovanni Traverso, a research affiliate at MIT’s Koch Institute for Integrative Cancer Research and a gastroenterologist and biomedical engineer at Brigham and Women’s Hospital. “If we’re proposing to have systems reside in the body for a long time, power becomes crucial,” said Traverso, one of the senior authors of the study. “Having the ability to transmit power wirelessly opens up new possibilities as we start to approach this problem.” The new strategy, described in the April 27 issue of the journal Scientific Reports, is based on the wireless transfer of power from an antenna outside the body to another one inside the digestive tract. This method yields enough power to run sensors that could monitor heart rate, temperature, or levels of particular nutrients or gases in the stomach. “Right now we have no way of measuring things like core body temperature or concentration of micronutrients over an extended period of time, and with these devices you could start to do that kind of thing,” said Abubakar Abid, a former MIT graduate student who is the paper’s first author. Robert Langer, the David H. Koch Institute Professor at MIT, is also a senior author of the paper. Other authors are Koch Institute technical associates Taylor Bensel and Cody Cleveland, former Koch Institute research technician Lucas Booth, and Draper researchers Brian Smith and Jonathan O’Brien. 
The research team has been working for several years on different types of ingestible electronics, including sensors that can monitor vital signs, and drug delivery vehicles that can remain in the digestive tract for weeks or months. To power these devices, the team has been exploring various options, including a galvanic cell that is powered by interactions with the acid of the stomach. However, one drawback to using this type of battery cell is that the metal electrodes stop working over time. In their latest study, the team wanted to come up with a way to power their devices without using electrodes, allowing them to remain in the GI tract indefinitely. The researchers first considered the possibility of using near-field transmission, that is, wireless energy transfer between two antennas over very small distances. This approach is now used for some cell phone chargers, but because the antennas have to be very close together, the researchers realized it would not work for transferring power over the distances they needed — about 5 to 10 centimeters. Instead, they decided to explore midfield transmission, which can transfer power across longer distances. Researchers at Stanford University have recently explored using this strategy to power pacemakers, but no one had tried using it for devices in the digestive tract. Using this approach, the researchers were able to deliver 100 to 200 microwatts of power to their device, which is more than enough to power small electronics, Abid said. A temperature sensor that wirelessly transmits a temperature reading every 10 seconds would require about 30 microwatts, as would a video camera that takes 10 to 20 frames per second. In a study conducted in pigs, the external antenna was able to transfer power over distances ranging from 2 to 10 centimeters, and the researchers found that the energy transfer caused no tissue damage. 
“We’re able to efficiently send power from the transmitter antennas outside the body to antennas inside the body, and do it in a way that minimizes the radiation being absorbed by the tissue itself,” Abid said. Christopher Bettinger, an associate professor of materials science and biomedical engineering at Carnegie Mellon University, describes the study as a “great advancement” in the rapidly growing field of ingestible electronics. “This is a classic problem with implantable devices: How do you power them? What they’re doing with wireless power is a very nice approach,” said Bettinger, who was not involved in the research. For this study, the researchers used square antennas with 6.8-millimeter sides. The internal antenna has to be small enough that it can be swallowed, but the external antenna can be larger, which offers the possibility of generating larger amounts of energy. The external power source could be used either to continuously power the internal device or to charge it up, Traverso said. “It’s really a proof-of-concept in establishing an alternative to batteries for the powering of devices in the GI tract,” he said. “This work, combined with exciting advancements in subthreshold electronics, low-power systems-on-a-chip, and novel packaging miniaturization, can enable many sensing, monitoring, and even stimulation or actuation applications,” Smith said. The researchers are continuing to explore different ways to power devices in the GI tract, and they hope that some of their devices will be ready for human testing within about five years. “We’re developing a whole series of other devices that can stay in the stomach for a long time, and looking at different timescales of how long we want to keep them in,” Traverso said. “I suspect that depending on the different applications, some methods of powering them may be better suited than others.” The research was funded by the National Institutes of Health and by a Draper Fellowship.


News Article | April 28, 2017
Site: www.prweb.com

The McCourt School of Public Policy at Georgetown University yesterday announced the opening of the Georgetown Research Data Center (RDC). A joint project of the U.S. Census Bureau and the McCourt School’s Massive Data Institute, the Georgetown RDC provides secure access to qualified researchers at Georgetown and at other nearby universities and institutions examining a wide range of social and economic issues. “The Georgetown RDC strengthens and animates Georgetown and the McCourt School’s commitment to world-class, 21st century research and scholarship.” said Robert Groves, provost of Georgetown University and former director of the U.S. Census Bureau. “We are very pleased to partner with the Census Bureau to provide expanded but secure access to these critical data.” The Georgetown RDC is the first Census Research Data Center to open in Washington, D.C. and the 24th RDC in the country. Dr. J. Bradford Jensen, the McCrane/Shaker Chair in International Business at Georgetown’s McDonough School of Business, who helped establish the first university-based RDC at Carnegie Mellon University, will serve as executive director. Dr. Nate Ramsey, lead administrator of the Federal Statistical Research Data Center program at the Census Bureau's Center for Economic Studies, will serve as acting administrator. “The restricted-use microdata provided by Census through the RDC, like the American Community Survey, the Census of Manufacturers, and Current Population Survey, is an incredibly valuable resource to Georgetown and other qualified researchers,” said Brad Jensen, Executive Director of the Georgetown RDC. 
“We hope Georgetown faculty, graduate students, and other researchers studying critical issues in economics and workforce issues, health and health care, statistics and demographics will get in touch about how we can work together.” RDCs are Census Bureau facilities, housed in partner institutions, that meet all physical and information security requirements for access to restricted–use micro data of the agencies whose data are accessed there. An RDC allows qualified researchers with approved projects to access restricted-use data sets from a variety of statistical agencies to address important research questions. The Massive Data Institute at Georgetown’s McCourt School of Public Policy is an interdisciplinary research center devoted to the study of high-dimensional data to answer public policy questions. The MDI uses data from novel, often real-time sources like the Internet, social media, sensors and other big data sources to increase our understanding of society and human behavior, and thus improve public policy decision-making. The MDI regularly awards seed grants, houses postdoctoral fellows, and hosts faculty seminars on public policy and massive data. ABOUT THE MCCOURT SCHOOL OF PUBLIC POLICY The Georgetown University McCourt School of Public Policy is a top-ranked public policy school located in the center of the policy world in Washington, D.C. Our mission is to teach our students to design, analyze, and implement smart policies and put them into practice in the public, private, and nonprofit sectors, in the U.S. and around the world.


New research shows that limiting how pharmaceutical sales representatives can market their products to physicians changes their drug prescribing behaviors. A team, led by the University of California, Los Angeles' Ian Larkin and Carnegie Mellon University's George Loewenstein, examined restrictions 19 academic medical centers (AMCs) in five U.S. states placed on pharmaceutical representatives' visits to doctors' offices. Published in the May 2 issue of the Journal of the American Medical Association, the results reveal that the restrictions caused physicians to switch from prescribing drugs that were more expensive and patent-protected to generic, significantly cheaper drugs. Pharmaceutical sales representative visits to doctors, known as "detailing," is the most prominent form of pharmaceutical company marketing. Detailing often involves small gifts for physicians and their staff, such as meals. Pharmaceutical companies incur far greater expenditures on detailing visits than they do on direct-to-consumer marketing, or even on research and development of new drugs. Despite the prevalence of detailing and the numerous programs to regulate detailing, little was known about how practice-level detailing restrictions affect physician prescribing, until now. For the study, which is the largest, most comprehensive investigation into the impact of detailing restrictions, the team compared changes in the prescribing behavior of thousands of doctors before and after their AMCs introduced policies restricting detailing with the prescribing behavior of a carefully matched control group of similar physicians practicing in the same geographic regions but not subject to detailing restrictions. In total, the study included 25,000 physicians and 262 drugs in eight major drug classes from statins to sleep aids to antidepressants, representing more than $60 billion in aggregate sales in the U.S. 
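The study design described above, comparing before/after changes for physicians at restricting AMCs against a matched control group, is a difference-in-differences comparison. A minimal sketch of that estimator, with hypothetical market-share numbers (the study's actual estimates are the 8.7 percent decrease and 19.3 percent baseline share quoted below):

```python
def diff_in_differences(treated_pre, treated_post, control_pre, control_post):
    """Difference-in-differences: the change in the treated group
    minus the change in the matched control group, netting out
    trends common to both groups."""
    return (treated_post - treated_pre) - (control_post - control_pre)

# Hypothetical market shares (%) of a detailed drug: physicians at
# AMCs with detailing restrictions vs. matched unrestricted physicians.
effect = diff_in_differences(19.3, 17.6, 19.1, 19.1)
print(round(effect, 2))  # -1.7
```

A negative estimate indicates the detailed drug lost market share among restricted physicians relative to the control group; the published analysis additionally adjusts for physician matching across many drug classes.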
"The study cannot definitively prove a causal link between policies that regulated detailing and changes in physician prescribing, but absent a randomized control, this evidence is as definitive as possible," said Larkin, assistant professor of strategy at UCLA's Anderson School of Management. "We investigated 19 different policy implementations that happened over a six-year period, included a control group of highly similar physicians not subject to detailing restrictions and looked at effects in eight large drug classes. The results were remarkable robust -- after the introduction of policies, about five to 10 percent of physician prescribing behavior changed." Specifically, the researchers found that detailing policies were associated with an 8.7 percent decrease in the market share of the average detailed drug. Before policy implementation, the average drug had a 19.3 percent market share. The findings also suggest that detailing may influence physicians in indirect ways. "No medical center completely barred salesperson visits; salespeople could and did continue to visit physicians at all medical centers in the study," Larkin said. "The most common restriction put in place was a ban on meals and other small gifts. The fact that regulating gifts while still allowing sales calls still led to a switch to cheaper, generic drugs may suggest that gifts such as meals play an important role in influencing physicians. The correlation between meals and prescribing has been well established in the literature, but our study suggests this relationship may be causal in nature." In light of these findings, the study indicates that physician practices and other governing bodies may need to take an active role in regulating conflicts of interest, rather than relying on individual physicians to monitor and regulate. 
"Social science has long demonstrated that professionals, even well-meaning ones, are powerfully influenced by conflicts of interest," said Loewenstein, the Herbert A. Simon University Professor of Economics and Psychology at CMU. "A large body of research also shows that simply disclosing conflicts of interests is insufficient to reduce their influence, and may even exacerbate it. The results from this study underline the effectiveness of, and need for, centralized rules and regulations. We should not put the onus of dealing with conflicts on patients; the best policies are those that eliminate conflicts." Larkin and Loewenstein also have a Viewpoint article in the same JAMA issue that calls for physicians to be compensated on a salary basis, instead of fee-for-service, to eliminate additional conflicts of interest. In addition to Larkin and Loewenstein, the research team included University of California, San Diego's Desmond Ang; Austrian Institute of Technology's Jonathan Steinhart; Williams College's Matthew Chao; Carnegie Mellon's Mark Patterson; Cornell University's Sunita Sah; New York University's Tina Wu; National Institute of Mental Health's Michael Schoenbaum; David Hutchins and Troyen Brennan from CVS Caremark. The National Institute of Mental Health provided funding, and CVS Caremark provided data, for the study.


News Article | April 21, 2017
Site: www.csmonitor.com

A general view of the Large Hadron Collider experiment during a media visit to the Organization for Nuclear Research in the French village of Saint-Genis-Pouilly, near Geneva in Switzerland in 2014.

"If I have seen further," wrote Isaac Newton in a 1676 letter to Robert Hooke about studying the nature of light, "it is by standing on the shoulders of giants." Now, a study of nearly 30 million research papers and more than 5 million patents offers clues as to where more of these giants might be lurking. A paper published by researchers at Northwestern University's Institute on Complex Systems in the journal Science Advances on Wednesday reveals that the most-cited papers rely on a specific mix of old and new research that the authors say is "nearly universal in all branches of science and technology." The study addresses a question that lies at the heart of the scholarly enterprise: Today's research constitutes the basic building blocks for tomorrow's discoveries, but what should the composition of those blocks be? The findings point to ways to improve how researchers can assemble the richest combination of knowledge on a topic, and may also reveal deeper patterns in how humanity acquires knowledge. "We're very interested in trying to understand where knowledge comes from, particularly breakthroughs – these insights in science and technology that are the ones that really move the needle in terms of people's thinking," says Brian Uzzi, a professor at Northwestern's Kellogg School of Management and a co-author of the paper. To find out, the researchers gathered data on citations. "What do scientists and scholars do when they start a new project or work on a new idea?" asks lead author Satyam Mukherjee, now a professor at the Indian Institute of Management Udaipur. "The first thing we do is to perform a literature review and look for related works in the past and also in recent times."
The researchers examined all 28,426,345 scientific papers in the Web of Science, an indexing service for research papers in the sciences, social sciences, arts, and humanities, from 1945 to 2013, and all 5,382,833 US patents granted between 1950 and 2010. They found that the papers and patents with the highest impact, defined as garnering the top 5 percent of citations in their field, tended to cite relatively new information, but with a long, diminishing tail into past work. "Our research indicates that one needs to see the entire arc of a given idea or concept over time to use it most effectively in one's own work," says Professor Mukherjee. The researchers were surprised by their findings' universality. The sweet spot – or "hotspot," as the researchers call it – between old and new research held for papers in physics, gender studies, and everything in between, from the postwar era to the present. "I was expecting that the patterns would vary drastically by time period and academic field," says mathematician Daniel Romero, now an assistant professor at the University of Michigan's School of Information, who worked on the study as part of a postdoctoral fellowship at Northwestern. "After all, different fields have different norms for how they cite other work." The findings address what philosopher of science Thomas Kuhn famously called "the essential tension" between tradition and innovation in scientific research. "It says something very deep about where you want to look for information," says Professor Uzzi. "And also something very deep about how knowledge itself matures through time." Mark Hannah, an assistant professor in Arizona State University's English department who specializes in cross-disciplinary communication in the sciences, suggests that the hotspot may emerge from efforts to reconcile new modes of thought with older ones. 
"You're seeing a balancing between legacy language and emerging language," says Professor Hannah, who was not affiliated with the study. "They're doing the work of thinking how those studies come together." The study's authors also found that scientists who worked collaboratively were more likely to rely on research within the knowledge hotspot than those who worked alone, a finding that came as no surprise to Anita Woolley, a professor at Carnegie Mellon University's Tepper School of Business who specializes in collective intelligence. "Having a team work on it is what leads them to cite the sufficient variety of references," she says "If you have a team you are more likely to have a diversity of different knowledge and perspectives." "When you're working with collaborators, you're forced to explain yourself more," says Hannah. "You're forced to think through and anticipate how your use of language may not be well understood or may create a barrier for readers." The findings may point to ways to improve the technology that scientists and other scholars use to search for information, an increasingly pressing need amid what Uzzi calls the "absolute explosion in the amount of information that's created every single day." Professor Woolley mentions Google Scholar, a free search engine for academic publishing whose slogan is: "Stand on the shoulders of giants." "Usually they give you some mix of what's the highest cited but also what's recent," says Woolley. "Definitely it tends to make the rich get richer in the citations race, because they come up first. But it also probably biases you toward fairly recent things as well." The discovery of this hotspot may point to ways search engines could be improved: "Imagine if you were to develop a search engine that could deliver information in a way that it grabs this hotspot of knowledge," says Uzzi. 
"And if you can do that, you'd be pointing people from the get-go to the place in the store of knowledge where they are most likely to find the building blocks of tomorrow's ideas. That would solve a tremendous amount of wasted-time problems." But Sidney Redner, a physicist at the Santa Fe Institute who specializes in citation statistics, cautions that the correlations uncovered by Mukherjee and his colleagues, which he calls a "cool observation," could be misconstrued. "I think there's potential for misuse of this kind of stuff," he says, noting that researchers often cite papers for the purpose of refuting them. "There's no contextual information in citations." "That's what worries me about the whole field of citation studies is that it gets misused by administrators," says Professor Redner. "If I were trying to use this as a tenure-decision mechanism, I would be very worried." Leveraging the power of the hotspot offers may require researchers be more mindful in supplying such context to their citations. "It comes back to us as scholars and us as researchers to be clear about the ways we conduct our research and the ways that we use our sources, so that we are making visible our selections and our rationale, so that we don't become subject to an algorithm," says Hannah. "It's challenging work, but it's something we're prepared to do."


News Article | March 13, 2017
Site: www.techtimes.com

People have a tendency to deliberately avoid information that could threaten their well-being and happiness. The researchers show that failing to access information is only one of many strategies people employ to avoid content that conflicts with their beliefs. The study, published in the Journal of Economic Literature, was carried out by researchers at Carnegie Mellon University. Aside from avoiding unpleasant information, people can also selectively direct their attention to pieces of content that confirm their beliefs, while forgetting whatever they wish weren't true. "We commonly think of information as a means to an end. However, a growing theoretical and experimental literature suggests that information may directly enter the agent's utility function. This can create an incentive to avoid information, even when it is useful, free, and independent of strategic considerations," the researchers noted. For instance, people who are on a specific diet and trying to lose weight will often prefer not to look for the number of calories in a dessert, while those who have a higher risk of developing a certain medical condition will avoid a medical screening that could confirm their fears. However, information avoidance can be more nuanced when it comes to everyday life. People choose the news sources that best align with their beliefs instead of getting information from a wide array of sources, as this could challenge their understanding of the world. "The standard account of information in economics is that people should seek out information that will aid in decision making, should never actively avoid information, and should dispassionately update their views when they encounter new valid information," said George Loewenstein, co-author of the study. According to Loewenstein, people also avoid information that could improve the results of their decision-making processes, if they believe that said information is painful to receive.
Professors who are not very good at teaching could improve their techniques through feedback from their students. However, because of the uncomfortable position it puts them in, they often refuse it. When the information cannot simply be ignored, people have a series of techniques that allow them to interpret it in a manner that still benefits their pre-existing perspective. For instance, questionable evidence is suddenly accepted as trustworthy when it confirms what people believe, while rigorous research is often ignored because it goes against what people wish to believe. An example of this type of avoidance is given by people who say that climate change is a hoax, despite the studies proving otherwise. Confirmation bias is the tendency to look for or interpret information in a manner that confirms one's beliefs, while giving disproportionately little attention to alternative, generally conflicting hypotheses. "Confirmation bias, as the term is typically used in the psychological literature, connotes the seeking or interpreting of evidence in ways that are partial to existing beliefs, expectations, or a hypothesis in hand," noted a study published in the journal Review of General Psychology in 1998. Since the digitalization of information, confirmation bias and information avoidance are easier than ever to act on. The algorithms used by social media networks allow people to follow only news sources that confirm their existing beliefs, thus potentially encouraging this phenomenon. In an attempt to address this issue, Facebook changed its algorithm in August 2016 to prioritize informative stories. © 2017 Tech Times, All rights reserved. Do not reproduce without permission.


News Article | April 17, 2017
Site: www.newscientist.com

An artificial lung that’s small enough to be carried in a backpack has been shown to work in sheep. It’s one of several such devices being developed that could transform the lives of people with lung failure, who are currently dependent on large machines. The new device still requires an oxygen tank to be wheeled around, although tank-free prototypes are also being tested. People with lung failure are usually connected to a machine that pumps their blood through a gas exchanger to provide oxygen and remove carbon dioxide – but this often confines them to bed. The longer they are bed-ridden, the weaker their muscles become and the less likely they are to recover. To avoid this vicious circle, those who are well enough may be helped to walk around the hospital, but this is difficult because the machines are bulky with lots of long tubes. Interest in better options grew after the 2009 swine flu outbreak, when many patients ended up on this kind of support. Artificial lungs could provide a stopgap for people recovering from severe lung infections or waiting for a lung transplant – although a transplant would still be a better long-term solution for those with permanent lung damage. Yet making artificial lungs has proven harder than making, say, a mechanical heart. “The heart is just a pump,” says William Federspiel of the University of Pittsburgh, whereas the lungs contain a fabulously convoluted network of branching air sacs to allow gases to diffuse in and out of the blood. “The lungs have a tremendous capability for gas exchange and there’s no man-made technology that can come close for efficiency.” The challenge is further complicated by the fact that some lung failure patients also have weakened hearts, and may need help pumping the blood into the artificial organs. Federspiel’s team has developed an artificial lung that combines the pump and gas exchanger into one device that’s small and light enough to be carried in a backpack, making walking easier.
The device would be connected to the patient’s neck, requiring just a short tube. “We want very little tubing that runs outside the body,” says Federspiel. This month, he published the results of experiments in four sheep, showing that the device could fully oxygenate the animals’ blood for a six-hour test period – although he says they have since demonstrated that it works for five days. Another kind of artificial lung is under development at Carnegie Mellon University in Pittsburgh. It is aimed at patients whose hearts are working well enough to pump the blood through the gas exchanger, and connects to the heart’s arteries, with tubing coming out through the chest and the gas exchange device strapped to the patient’s body. Work due to be published later this year showed it kept three out of four sheep alive for two weeks. The experiment had to be stopped in one sheep because it developed a slow heartbeat, which wasn’t caused by the device, says Keith Cook of Carnegie Mellon University, who was involved in the work. Both this device, and the artificial lung developed by Federspiel, require an oxygen supply – so any human patient would still have to wheel around an oxygen tank, but they would be far more mobile than they are currently. However, a more efficient device is in the works that runs off the air in a room, so no cylinder is required. This runs blood through extremely thin channels formed by polymer membranes, providing a larger area for gas exchange. A miniature version has been found to work in tests on rats. Another benefit of the ultrathin tubes – just 20 micrometres in diameter – is that they mimic the pressures on blood cells exerted by the tiny capillaries of the natural lungs, helping to keep them healthier, says Joseph Potkay of the US Department of Veterans Affairs.


News Article | May 2, 2017
Site: www.scientificamerican.com

Even the most natural-sounding computerized voices—whether it’s Apple’s Siri or Amazon’s Alexa—still sound like, well, computers. Montreal-based start-up Lyrebird is looking to change that with an artificially intelligent system that learns to mimic a person’s voice by analyzing speech recordings and the corresponding text transcripts as well as identifying the relationships between them. Introduced last week, Lyrebird’s speech synthesis can generate thousands of sentences per second—significantly faster than existing methods—and mimic just about any voice, an advancement that raises ethical questions about how the technology might be used and misused. The ability to generate natural-sounding speech has long been a core challenge for computer programs that transform text into spoken words. Artificial intelligence (AI) personal assistants such as Siri, Alexa, Microsoft’s Cortana and the Google Assistant all use text-to-speech software to create a more convenient interface with their users. Those systems work by cobbling together words and phrases from prerecorded files of one particular voice. Switching to a different voice—such as having Alexa sound like a man—requires a new audio file containing every possible word the device might need to communicate with users. Lyrebird’s system can learn the pronunciations of characters, phonemes and words in any voice by listening to hours of spoken audio. From there it can extrapolate to generate completely new sentences and even add different intonations and emotions. Key to Lyrebird’s approach are artificial neural networks—which use algorithms designed to help them function like a human brain—that rely on deep-learning techniques to transform bits of sound into speech. A neural network takes in data and learns patterns by strengthening connections between layered neuronlike units. After learning how to generate speech the system can then adapt to any voice based on only a one-minute sample of someone’s speech. 
“Different voices share a lot of information,” says Lyrebird co-founder Alexandre de Brébisson, a PhD student at the Montreal Institute for Learning Algorithms laboratory at the University of Montreal. “After having learned several speakers’ voices, learning a whole new speaker's voice is much faster. That’s why we don’t need so much data to learn a completely new voice. More data will still definitely help, yet one minute is enough to capture a lot of the voice ‘DNA.’” Lyrebird showcased its system using the voices of U.S. political figures Donald Trump, Barack Obama and Hillary Clinton in a synthesized conversation about the start-up itself. The company plans to sell the system to developers for use in a wide range of applications, including personal AI assistants, audio book narration and speech synthesis for people with disabilities. Last year Google-owned company DeepMind revealed its own speech-synthesis system, called WaveNet, which learns from listening to hours of raw audio to generate sound waves similar to a human voice. It then can read a text out loud with a humanlike voice. Both Lyrebird and WaveNet use deep learning, but the underlying models are different, de Brébisson says. “Lyrebird is significantly faster than WaveNet at generation time,” he says. “We can generate thousands of sentences in one second, which is crucial for real-time applications. Lyrebird also adds the possibility of copying a voice very fast and is language-agnostic.” Scientific American reached out to DeepMind but was told WaveNet team members were not available for comment. Lyrebird’s speed comes with a trade-off, however. Timo Baumann, a researcher who works on speech processing at the Language Technologies Institute at Carnegie Mellon University and is not involved in the start-up, noted Lyrebird’s generated voice carries a buzzing noise and a faint but noticeable robotic sheen. 
Moreover, it does not generate breathing or mouth movement sounds, which are common in natural speaking. “Sounds like lip smack and inbreathe are important in conversation. They actually carry meaning and are observable to the listener,” Baumann says. These flaws make it possible to distinguish the computer-generated speech from genuine speech, he adds. It will likely be a few years, he says, before the technology reaches the point where it can copy a voice convincingly in real time. Still, to untrained ears and unsuspecting minds, an AI-generated audio clip could seem genuine, creating ethical and security concerns about impersonation. Such a technology might also confuse and undermine voice-based verification systems. Another concern is that it could render unusable voice and video recordings used as evidence in court. A technology that can be used to quickly manipulate audio will even call into question the veracity of real-time video in live streams. And in an era of fake news it can only compound existing problems with identifying sources of information. “It will probably be still possible to find out when audio has been tampered with,” Baumann says, “but I’m not saying that everybody will check.” Systems equipped with a humanlike voice may also pose less obvious but equally problematic risks. For example, users may trust these systems more than they should, giving out personal information or accepting purchasing advice from a device, treating it like a friend rather than a product that belongs to a company and serves its interests. “Compared to text, voice is just much more natural and intimate to us,” Baumann says. Lyrebird acknowledges these concerns and essentially issues a warning in the brief “ethics” statement on the company’s Web site. Lyrebird cautions the public that the software could be used to manipulate audio recordings used as evidence in court or to assume someone else’s identity.
“We hope that everyone will soon be aware that such technology exists and that copying the voice of someone else is possible,” according to the site. Just as people have learned that photographs cannot be fully trusted in the age of Photoshop, they may need to get used to the idea that speech can be faked. There is currently no way to prevent the technology from being used to make fraudulent audio, says Bruce Schneier, a security technologist and lecturer in public policy at the Kennedy School of Government at Harvard University. The risk of encountering a fake audio clip has now become “the new reality,” he says.


News Article | May 2, 2017
Site: www.futurity.org

Getting half of American 8- to 11-year-olds into 25 minutes of physical activity three times a week would save $21.9 billion in medical costs and lost wages over their lifetimes, new research suggests. The relatively modest increase—from the current 32 percent to 50 percent of kids participating in exercise, active play, or sports that often—would also result in 340,000 fewer obese and overweight youth, a reduction of more than 4 percent, the study calculates. “Physical activity not only makes kids feel better and helps them develop healthy habits, it’s also good for the nation’s bottom line,” says Bruce Y. Lee, executive director of the Global Obesity Prevention Center at Johns Hopkins University. “Our findings show that encouraging exercise and investing in physical activity such as school recess and youth sports leagues when kids are young pays big dividends as they grow up.” The study, published in the journal Health Affairs, suggests an even bigger payoff if every current 8- through 11-year-old in the United States exercised 75 minutes over three sessions weekly. In that case, the researchers estimate, $62.3 billion in medical costs and lost wages over the course of their lifetimes could be avoided and 1.2 million fewer youths would be overweight or obese. And the savings would multiply if not just current 8-to-11 year olds, but every future cohort of elementary school children upped their game. Studies have shown that a high body mass index at age 18 is associated with a high BMI throughout adulthood and a higher risk for diabetes, heart disease, and other maladies linked to excess weight. The illnesses lead to high medical costs and productivity losses. In recent decades, there has been what experts describe as a growing epidemic of obesity in the United States. 
Lee and colleagues from the Johns Hopkins Bloomberg School of Public Health and the Pittsburgh Supercomputing Center at Carnegie Mellon University developed a computer simulation using their Virtual Population for Obesity Prevention software. They plugged in information representing current US children to show how changes in physical activity as kids could affect them—and the economy—throughout their lifetimes. The model relied on data from the 2005 and 2013 National Health and Nutrition Examination Survey and from the National Center for Health Statistics. Exercise totaling at least 25 minutes a day, three days a week, is a guideline developed for kids by the Sports and Fitness Industry Association. The researchers found that maintaining the current low 32 percent compliance would result in 8.1 million of today’s 8- to 11-year-olds being overweight or obese by 2020. That would trigger $2.8 trillion in additional medical costs and lost wages over their lifetimes. An overweight person’s lifetime medical costs average $62,331 and lost wages average $93,075. For an obese person, these amounts are even greater. “Even modest increases in physical activity could yield billions of dollars in savings,” Lee says. The costs averted are likely an underestimate, he says, as there are other benefits of physical activity that don’t affect weight, such as improving bone density, improving mood, and building muscle. Lee says that the spending averted by healthy levels of physical activity would more than make up for costs of programs designed to increase activity levels. “As the prevalence of childhood obesity grows, so will the value of increasing physical activity,” he says. “We need to be adding physical education programs and not cutting them. We need to encourage kids to be active, to reduce screen time and get them running around again. 
It’s important for their physical health—and the nation’s financial health.” Funding for the research came from the Eunice Kennedy Shriver National Institute of Child Health and Human Development and the Agency for Healthcare Research and Quality.
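The headline figures above come from a simulation that follows individual children through life, but the basic accounting can be shown with back-of-the-envelope arithmetic. The sketch below multiplies the article's per-person lifetime costs by the projected drop in overweight and obese youth; it ignores discounting and individual variation, which is why it overshoots the study's $21.9 billion estimate:

```python
# Rough, undiscounted version of the savings arithmetic. Per-person lifetime
# figures are taken from the article; the real model simulates individual
# trajectories and discounts future costs, so this is only an upper-bound sketch.

MEDICAL_COSTS = 62_331  # average lifetime medical costs for an overweight person (article)
LOST_WAGES = 93_075     # average lifetime lost wages for an overweight person (article)

def undiscounted_savings(fewer_overweight_youth):
    """Lifetime costs avoided if this many youths stay at a healthy weight."""
    return fewer_overweight_youth * (MEDICAL_COSTS + LOST_WAGES)

savings = undiscounted_savings(340_000)  # article's projected reduction
print(f"undiscounted savings: ${savings / 1e9:.1f} billion")
```

The gap between this crude total and the study's figure illustrates how much discounting future costs and wages to present value matters in lifetime cost estimates.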


News Article | May 2, 2017
Site: www.prweb.com

The International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS) has selected a paper co-authored by Dr. Lewis Johnson of Alelo Inc., Prof. James Lester of North Carolina State University and the late Dr. Jeff Rickel of the University of Southern California to receive the 2017 Influential Paper Award. Entitled “Animated pedagogical agents: Face-to-face interaction in interactive learning environments,” the paper, published in 2000, laid the groundwork for a wide range of educational products that incorporate animated agent technology. The IFAAMAS Influential Paper Award recognizes publications that have made influential and long-lasting contributions to the field. Candidate publications must have been published at least a decade prior to the year of the award, and the judging panel seeks nominations from the community. The award will be formally presented at this year’s Autonomous Agents and Multi-Agent Systems conference in São Paulo, Brazil. The paper introduced and surveyed a new paradigm for interactive learning environments based on animated pedagogical agents. It argued for combining animated interface agent technologies with intelligent learning environments, yielding intelligent systems that can interact with learners in natural, human-like ways to achieve better learning outcomes. The concept has become an essential element of engaging, effective learning experiences. For example, the first Marine battalion that returned from Iraq without any combat fatalities learned Arabic language and culture in an immersive Alelo learning game that was populated with pedagogical agents. Dr. Johnson, Alelo’s CEO, said: “We are humbled and grateful to receive this prestigious award. Some of the ideas in the paper have become well established, especially in game-based learning environments. Others are only now being realized thanks to advances in immersive interfaces that enable rich face-to-face interaction between learners and technology."
Prof. Lester added: “We deeply appreciate IFAAMAS’ recognition of this research. Since the paper’s publication almost two decades ago, it has been enormously gratifying to see pedagogical agents evolve into a mature technology that is finding broad application in education and training.” The paper appeared in the International Journal of Artificial Intelligence in Education, and is one of the journal's most frequently cited papers. Prof. Judy Kay, the journal’s co-editor-in-chief, said, “This work by pioneers and leaders of our field has provided the foundation for a whole new way to frame innovative educational software.” (Full citation: W.L. Johnson, J.W. Rickel, J.C. Lester, "Animated pedagogical agents: Face-to-face interaction in interactive learning environments." International Journal of Artificial Intelligence in Education 11, 47-78. 2000.) The paper was one of two recognized by IFAAMAS in 2017. The other was by Prof. Justine Cassell of Carnegie Mellon University and colleagues, entitled “Animated conversation: Rule-based generation of facial expression, gesture and spoken intonation for multiple conversational agents,” published by SIGGRAPH in 1994. Animated agents play a prominent role in Alelo’s products and solutions. Virtual role-players give learners opportunities to develop and practice their communication skills, and assess their performance and level of mastery. Virtual coaches provide feedback with a human-like touch, to encourage and show empathy. Alelo’s new Enskill platform now provides learning solutions incorporating animated agents to learners around the world. Dr. Lewis Johnson, CEO of Alelo, adds, “Enskill is the foundation of our ambitious expansion into educational markets.” Alelo creates learning solutions that help people acquire new skills and apply them when it counts, changing the way people communicate. 
The company has been delivering learning solutions based on virtual role-play simulations since 2003, when it spun out as a DARPA-funded research project from the University of Southern California. The U.S. Air Force Small Business Innovation Research (SBIR) program funded Alelo to develop Web-based learning technology for cultural awareness, which was distinguished as a success story. Alelo’s new Enskill platform is being used by learners around the world to develop better communication skills. IFAAMAS is a non-profit organization whose purpose is to promote science and technology in the areas of artificial intelligence, autonomous agents and multiagent systems.


News Article | April 17, 2017
Site: www.eurekalert.org

WASHINGTON, DC - The results of the William Lowell Putnam Mathematical Competition, the most prestigious university-level mathematics exam, were announced today. The 77th annual Putnam Competition, administered by the Mathematical Association of America, recognized Carnegie Mellon University as the top team, and five undergraduate students were named Putnam Fellows for their high scores on the challenging six-hour mathematics exam. "The Mathematical Association of America is proud to honor every student who participated in the Putnam Competition and especially the Putnam Fellows and top scoring teams," said Michael Pearson, the executive director of the Mathematical Association of America. "You are the future of mathematics and we look forward to your continued success." More than 4,100 students, from 568 institutions, participated in this highly competitive exam on December 3, 2016. The highest score on the six-hour exam was 114 out of a possible 120 points. Prizes are awarded to the participants with the highest scores and to the departments of mathematics of the five institutions whose teams obtain the highest rankings. For more information about participating in the Putnam Competition, visit the Mathematical Association of America website and follow @maanow on Twitter. The Mathematical Association of America is the world's largest community of mathematicians, students, and enthusiasts. We accelerate the understanding of our world through mathematics because mathematics drives society and shapes our lives.


News Article | April 24, 2017
Site: cleantechnica.com

On March 28th, Andrew Stevenson of Tesla's Special Projects delivered a keynote speech titled "Opportunities for Students in Building a Sustainable Energy Future" during Carnegie Mellon University's Scott Institute for Energy Innovation 2017 Energy Week. This article, "This Is What It Takes To Work At Tesla," was originally published on CleanTechnica.


News Article | April 17, 2017
Site: www.newscientist.com

A combination of stomping and whipping explains why your shoelaces seem to come undone all by themselves. In 2015, MIT researchers came up with an equation for the simplest knots to describe the forces at work – tension, friction, and stiffness – and how they relate to the number of turns that make up the topology of the knot. But although there have been many studies of the durability of various knot configurations, nobody had really focused on the physics of why a knot comes undone on its own. Oliver O’Reilly at the University of California, Berkeley, decided to study spontaneous unknotting after noticing that his young daughter could never keep her shoelaces tied. He and two graduate students ran real-world experiments to investigate further. “We looked like crazy academics because we were just walking the halls of Berkeley, watching our shoelaces come untied,” says team member Christine Gregg, an avid runner. She ran on a treadmill so her colleagues could film her shoes in slow motion to capture the details of the unravelling. They found that the culprit is a combination of the inertial forces generated while running. A knot is held together by the friction at its centre. That’s why stronger knots have more turns; each turn contributes to friction. But the constant downward stomp of the foot while running exerts an acceleration at the base of the knot, while the laces whip back and forth with each stride, tugging on the ends like an invisible hand. Eventually the knot hits a tipping point where the acceleration trumps the internal friction, and it comes undone all at once. “Once you have a little bit of slip, all the forces are aligned, such that [the slip] gets bigger and bigger,” says Gregg. At that point, it only takes one or two more strides for the entire knot to come undone. The team also found that acceleration or whipping alone isn’t sufficient. The team sat on tables and swung their legs for half an hour, with little effect.
They then stomped on the ground for the same period – also to no avail. Next, they built a pendulum machine, added weights to the ends of laces, and swung the knots back and forth. As expected, knots failed more often with heavier weights, because the inertial forces generated were greater. There’s a possibility such work could shed light on the mechanics of other kinds of knotty structures, such as suture knots used in surgery, or the folding of DNA and proteins – and especially how they fail. “We’re not just trying to figure out why shoelaces come untied,” says Gregg. “We think this could be applicable to anything that uses entanglement. A knot is really just an entangled linear structure.” “Understanding the mechanics of a fairly simple knot, such as the shoelace knot, required high-speed videography – this just shows the challenges towards a rigorous understanding of knots,” says Khalid Jawed at Carnegie Mellon University in Pennsylvania.
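The tipping-point argument above lends itself to a toy force balance: the knot holds while its frictional grip (which grows with the number of turns) exceeds the combined inertial tug of foot-strike acceleration and lace whipping. The sketch below is illustrative only, not the Berkeley team's actual model; the function name and all parameter values are invented.

```python
# Toy threshold model of spontaneous knot failure (illustrative only).
# The knot holds while frictional grip exceeds the inertial tug from the
# foot strike plus the whipping of the free ends. All numbers are invented.

def knot_slips(turns, stomp_accel_g, lace_mass_kg, whip_accel_g,
               grip_per_turn_n=0.4):
    """Return True if the inertial forces exceed the knot's frictional grip."""
    g = 9.81
    grip = turns * grip_per_turn_n            # friction scales with knot turns
    stomp = lace_mass_kg * stomp_accel_g * g  # impulsive load at the knot base
    whip = lace_mass_kg * whip_accel_g * g    # tug from the whipping free ends
    return stomp + whip > grip

# A weak (2-turn) knot fails under running-scale loads, while a stronger
# (6-turn) knot survives the same accelerations.
print(knot_slips(turns=2, stomp_accel_g=7, lace_mass_kg=0.01, whip_accel_g=3))
print(knot_slips(turns=6, stomp_accel_g=7, lace_mass_kg=0.01, whip_accel_g=3))
```

The model also reproduces the pendulum-machine result: raising the effective mass on the lace ends (heavier weights) pushes the inertial side of the inequality past the grip sooner.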


News Article | May 3, 2017
Site: www.newscientist.com

EVEN robots want to talk politics these days. Chatbots could soon be reading news articles and then discussing them with us. Voice-activated assistants such as Amazon’s Alexa or Apple’s Siri can check the weather but are left stumped by more complicated conversations, says Alan Black at Carnegie Mellon University in Pennsylvania. Now Black and a team of computer speech researchers have launched a competition to create a chatbot that can understand a news or Wikipedia article and then talk about it with a human. “I’d like to have a system that reads the news in the morning, and I’d like to be able to talk about the news without having to go read it myself,” Black says. The winner of the Conversational Intelligence Challenge will be the team that the judges think has built the most engaging and convincing text-based chatbot. Evaluators will have to guess whether they’re talking to a bot or a human, then rate the quality and breadth of the discussion. Black doesn’t expect a convincing chatbot to emerge in the competition’s first year. But Marilyn Walker at the University of California, Santa Cruz, thinks the stage is set for a big leap forward in the world of chatbots. “Things are really changing very, very rapidly,” she says. Researchers now have better access to data sets of conversations used to build chatbots. And better speech recognition systems are making it easier for us to chat to robots in a more natural way. Walker and Black are both competing for the Alexa prize, a chatbot challenge run by Amazon. It tasks teams with building a speech-based chatbot for Amazon Echo devices that can converse with humans “coherently and engagingly” on a popular topic for 20 minutes. The entries are now being put to the test by Echo customers in the US, with the best-performing team set to scoop a $500,000 prize when the winners are announced in November.
This article appeared in print under the headline “Talkative bots offer their take on the news”


A group of high school students at the International School of Stavanger in Norway compete in this year's picoCTF hacking competition hosted by Carnegie Mellon University's CyLab Security and Privacy Institute. Anyone could register and play, but only United States students grades 6-12 were eligible for prizes. Credit: Ryan Strutin, International School of Stavanger, Norway The cybersecurity workforce, which is currently struggling to fill seats with qualified talent, may have some newfound optimism. Over the past two weeks, upwards of 18,000 middle and high school students from across the United States learned and honed computer security skills in this year's picoCTF online hacking contest, hosted by Carnegie Mellon University's CyLab Security and Privacy Institute. The competition officially ended Friday, April 14, 2017. "I am very impressed by the amount of effort the participants put in and how much they accomplished over two weeks," said Marty Carlisle, picoCTF's technical lead and a teaching professor in Carnegie Mellon's Information Networking Institute. "I'm hoping these students will continue to pursue computer security and that I'll get a chance to work with some of them here at Carnegie Mellon." The winning team, "1064CBread," from Dos Pueblos High School in Goleta, CA, will receive their $5,000 cash award at an awards ceremony next month at Carnegie Mellon University's campus in Pittsburgh, PA. The second place team, "phsst," will receive $2,500 and consisted of students from Naperville North High School (IL), Thomas Jefferson High School for Science and Technology (VA), and Montgomery County Public Schools (MD). Team "Thee in/s/ane Potato" will receive $1,500 for finishing in third, and consisted of students from Thomas Jefferson High School (PA) and Stuyvesant High School (NY). 
"I think picoCTF is going to change lives here," said Anita Johnson, a teacher at Kealing Middle School in Austin, Texas, who had thirty-two of her students participate in picoCTF. "It has been a tremendous learning experience for all of us. What surprises and pleases me the most is the level of interest from the girls." During a two-week period beginning March 31, over 12,000 teams of students from across the United States attempted to hack, decrypt, reverse-engineer, and do anything necessary to solve 68 computer security challenges created by Carnegie Mellon's competitive hacking team, the Plaid Parliament of Pwning. Anyone could sign up and participate, but only United States students in grades 6-12 were eligible for prizes.


News Article | April 17, 2017
Site: motherboard.vice.com

Jeopardy is a human problem "solved" by a machine, but poker is a machine problem at the outset. Winning, or maximizing one's winning potential, is a matter of analyzing a card game as a succession of states, wherein each state offers players probabilities of certain events happening or not happening that can then be maximized in the interests of winning. Poker is a natural computer science problem, but, fortunately, computers are rarely allowed at the table. Yet, sometimes they are. Meet Lengpudashi, a poker bot developed by researchers at Carnegie Mellon University whose name translates to "cold poker master." Which is perfect. In a recent set of exhibition matches on China's Hainan island, Lengpudashi won $792,327 in poker chips over the course of five days and 36,000 hands. Its opposition was Team Dragon, a group of human engineers and computer scientists led by Yue Du, an amateur poker player and venture capitalist who took home a 2016 World Series of Poker golden bracelet (the first Chinese player to do so). Lengpudashi's win meant an IRL $290,000 in prize money, which will be reinvested in Strategic Machine, an AI company founded by CMU researchers Tuomas Sandholm and Noam Brown. An earlier poker bot developed by the duo won $1,766,250 in chips earlier this year in an epic 20-day match against five of the world's top poker players. However natural it may seem as an AI problem, poker isn't Go. Poker introduces the problem of incomplete information. A machine playing Go or chess can observe a board and know immediately the complete state of the game. Like a human poker player, however, the poker bot only knows what it sees. The best it can do is count cards—based on what's on the table and in my hand, what cards are left in the deck? This set of remaining cards only constrains possibilities. Luck makes the final call. Luck, after all, is just what we don't know. More technically, poker is an example of an information-imperfect game. 
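The card counting described above is simple enough to sketch: from the visible cards alone, enumerate the unseen deck and count the "outs" that improve the hand. The snippet below is a minimal illustration of that reasoning, not code from the CMU bot; `flush_draw_odds` and the example hand are invented.

```python
# Minimal sketch of counting cards under incomplete information: the player
# sees only the hole cards and the board, so probabilities are computed over
# the unseen remainder of the deck. Illustrative only.
import itertools

RANKS = "23456789TJQKA"
SUITS = "shdc"  # spades, hearts, diamonds, clubs
DECK = {r + s for r, s in itertools.product(RANKS, SUITS)}

def flush_draw_odds(hole, board, suit):
    """P(the next card dealt is of `suit`), given only the visible cards."""
    seen = set(hole) | set(board)
    unseen = DECK - seen
    outs = [c for c in unseen if c[1] == suit]
    return len(outs) / len(unseen)

# Four spades visible after the turn: 9 of the 46 unseen cards complete
# the flush.
p = flush_draw_odds(hole=["As", "7s"], board=["Ks", "2s", "9h", "4d"], suit="s")
print(round(p, 3))  # 9/46, about 0.196
```

This is exactly the sense in which "luck is just what we don't know": the unseen cards constrain the possibilities, and the bot reasons over that distribution rather than over a fully observable board.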
Sandholm and Brown go deeper in a paper presented earlier this year at the AAAI-17 Workshop on Computer Poker and Imperfect Information Games in San Francisco. The technique described in the paper for such problems is a variation of what's known as "endgame solving." In an information-complete game like checkers or chess (or Go), the computer's strategy is to decompose the larger game into a bunch of smaller games. Iteratively solve those problems and you should wind up with a solution to the whole problem. This is endgame solving as it's usually understood. Checkers was definitively solved in this fashion. "In imperfect-information games, endgame solving is drastically more challenging," Sandholm and Brown write. "In perfect-information games it is possible to solve just a part of the game in isolation, but this is not generally possible in imperfect-information games." Imperfect-information games have to be solved as whole entities, not through decomposition. This is a problem with large games, like No-Limit Texas Hold'em, which has on the order of 10^165 nodes that would have to be computed in order to be "solved." The general answer to this problem is for algorithms to create more manageable abstractions of the whole game that are essentially miniaturized versions. Solve the little version of the game, and you can then remap it to the larger game. The catch is that a lot of complexity from the real game is lost in the abstraction process, which in many cases means the remapping just doesn't do the job. The solution devised by Sandholm and Brown is known as Reach-Maxmargin refinement. The basic idea is to take these mini-games and imagine them along different paths that the whole game might take as it evolves. In their words, it's "a new method for refining endgames that considers what payoffs are achievable from other paths in the game." The result is essentially an algorithm that knows how to bluff.
It understands that it can benefit from preventing the game from progressing down certain paths and seeks to manipulate that. "People have a misunderstanding of what computers and people are each good at. People think that bluffing is very human – it turns out that's not true," Brown told Bloomberg. "A computer can learn from experience that if it has a weak hand and it bluffs, it can make more money."


News Article | April 29, 2017
Site: www.prweb.com

LearnHowToBecome.org, a leading resource provider for higher education and career information, has determined which online colleges and universities in the U.S. have the most military-friendly programs and services. Of the 50 four-year schools that earned honors, Drexel University, University of Southern California, Duquesne University, Regis University and Harvard University were the top five. 50 two-year schools were also recognized; Laramie County Community College, Western Wyoming Community College, Dakota College at Bottineau, Mesa Community College and Kansas City Kansas Community College ranked as the top five. A complete list of top schools is included below. “Veterans and active duty members of the military often face unique challenges when it comes to transitioning into college, from navigating the GI Bill to getting used to civilian life,” said Wes Ricketts, senior vice president of LearnHowToBecome.org. “These online schools not only offer military-friendly resources, they also offer an online format, allowing even the busiest members of our armed forces to earn a degree or certificate.” To be included on the “Most Military-Friendly Online Colleges” list, schools must be regionally accredited, not-for-profit institutions. Each college is also evaluated on additional data points such as the number and variety of degree programs offered, military tuition rates, employment services, post-college earnings of alumni and military-related academic resources. For complete details on each college, their individual scores, and the data and methodology used to determine the “Most Military-Friendly Online Colleges” list, visit the LearnHowToBecome.org website. The Most Military-Friendly Online Four-Year Colleges in the U.S.
for 2017 include: Arizona State University-Tempe; Auburn University; Azusa Pacific University; Baker University; Boston University; Canisius College; Carnegie Mellon University; Columbia University in the City of New York; Creighton University; Dallas Baptist University; Drexel University; Duquesne University; George Mason University; Hampton University; Harvard University; Illinois Institute of Technology; Iowa State University; La Salle University; Lawrence Technological University; Lewis University; Loyola University Chicago; Miami University-Oxford; Michigan Technological University; Missouri University of Science and Technology; North Carolina State University at Raleigh; Norwich University; Oklahoma State University-Main Campus; Pennsylvania State University-Main Campus; Purdue University-Main Campus; Regis University; Rochester Institute of Technology; Saint Leo University; Southern Methodist University; Syracuse University; Texas A & M University-College Station; University of Arizona; University of Denver; University of Florida; University of Idaho; University of Illinois at Urbana-Champaign; University of Michigan-Ann Arbor; University of Minnesota-Twin Cities; University of Mississippi; University of Missouri-Columbia; University of North Carolina at Chapel Hill; University of Oklahoma-Norman Campus; University of Southern California; University of the Incarnate Word; Washington State University; and Webster University. The Most Military-Friendly Online Two-Year Colleges in the U.S.
for 2017 include: Aims Community College; Allen County Community College; Amarillo College; Barton County Community College; Bunker Hill Community College; Casper College; Central Texas College; Chandler-Gilbert Community College; Cincinnati State Technical and Community College; Cochise College; Columbus State Community College; Cowley County Community College; Craven Community College; Dakota College at Bottineau; East Mississippi Community College; Eastern New Mexico University - Roswell Campus; Edmonds Community College; Fox Valley Technical College; GateWay Community College; Grayson College; Hutchinson Community College; Kansas City Kansas Community College; Lake Region State College; Laramie County Community College; Lone Star College; Mesa Community College; Metropolitan Community College; Mitchell Technical Institute; Mount Wachusett Community College; Navarro College; Northeast Community College; Norwalk Community College; Ozarka College; Phoenix College; Prince George's Community College; Quinsigamond Community College; Rio Salado College; Rose State College; Sheridan College; Shoreline Community College; Sinclair College; Southeast Community College; Southwestern Oregon Community College; State Fair Community College; Truckee Meadows Community College; Western Nebraska Community College; Western Oklahoma State College; Western Texas College; Western Wyoming Community College; and Yavapai College. ### About Us: LearnHowtoBecome.org was founded in 2013 to provide data- and expert-driven information about employment opportunities and the education needed to land the perfect career. Our materials cover a wide range of professions, industries and degree programs, and are designed for people who want to choose, change or advance their careers. We also provide helpful resources and guides that address social issues, financial aid and other special interests in higher education. Information from LearnHowtoBecome.org has proudly been featured by more than 700 educational institutions.


News Article | April 27, 2017
Site: news.mit.edu

Researchers at MIT, Brigham and Women’s Hospital, and the Charles Stark Draper Laboratory have devised a way to wirelessly power small electronic devices that can linger in the digestive tract indefinitely after being swallowed. Such devices could be used to sense conditions in the gastrointestinal tract, or carry small reservoirs of drugs to be delivered over an extended period. Finding a safe and efficient power source is a critical step in the development of such ingestible electronic devices, says Giovanni Traverso, a research affiliate at MIT’s Koch Institute for Integrative Cancer Research and a gastroenterologist and biomedical engineer at Brigham and Women’s Hospital. “If we’re proposing to have systems reside in the body for a long time, power becomes crucial,” says Traverso, one of the senior authors of the study. “Having the ability to transmit power wirelessly opens up new possibilities as we start to approach this problem.” The new strategy, described in the April 27 issue of the journal Scientific Reports, is based on the wireless transfer of power from an antenna outside the body to another one inside the digestive tract. This method yields enough power to run sensors that could monitor heart rate, temperature, or levels of particular nutrients or gases in the stomach. “Right now we have no way of measuring things like core body temperature or concentration of micronutrients over an extended period of time, and with these devices you could start to do that kind of thing,” says Abubakar Abid, a former MIT graduate student who is the paper’s first author. Robert Langer, the David H. Koch Institute Professor at MIT, is also a senior author of the paper. Other authors are Koch Institute technical associates Taylor Bensel and Cody Cleveland, former Koch Institute research technician Lucas Booth, and Draper researchers Brian Smith and Jonathan O’Brien. 
The research team has been working for several years on different types of ingestible electronics, including sensors that can monitor vital signs, and drug delivery vehicles that can remain in the digestive tract for weeks or months. To power these devices, the team has been exploring various options, including a galvanic cell that is powered by interactions with the acid of the stomach. However, one drawback to using this type of battery cell is that the metal electrodes stop working over time. In their latest study, the team wanted to come up with a way to power their devices without using electrodes, allowing them to remain in the GI tract indefinitely. The researchers first considered the possibility of using near-field transmission, that is, wireless energy transfer between two antennas over very small distances. This approach is now used for some cell phone chargers, but because the antennas have to be very close together, the researchers realized it would not work for transferring power over the distances they needed — about 5 to 10 centimeters. Instead, they decided to explore midfield transmission, which can transfer power across longer distances. Researchers at Stanford University have recently explored using this strategy to power pacemakers, but no one had tried using it for devices in the digestive tract. Using this approach, the researchers were able to deliver 100 to 200 microwatts of power to their device, which is more than enough to power small electronics, Abid says. A temperature sensor that wirelessly transmits a temperature reading every 10 seconds would require about 30 microwatts, as would a video camera that takes 10 to 20 frames per second. In a study conducted in pigs, the external antenna was able to transfer power over distances ranging from 2 to 10 centimeters, and the researchers found that the energy transfer caused no tissue damage. 
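The power figures quoted above make for a quick sanity check: the 100 to 200 microwatts delivered by midfield transfer comfortably covers the roughly 30-microwatt loads cited for a temperature sensor or a 10-to-20-frame-per-second camera. The numbers are from the article; the margin calculation itself, and the `power_margin_uw` helper, are just illustrative arithmetic.

```python
# Back-of-the-envelope power budget for the ingestible device, using the
# figures reported in the article. Illustrative arithmetic only.

def power_margin_uw(delivered_uw, loads_uw):
    """Remaining power budget (microwatts) after summing all device loads."""
    return delivered_uw - sum(loads_uw)

# Worst case: low end of delivery (100 uW) running both a temperature
# sensor (~30 uW) and a low-frame-rate camera (~30 uW) at once.
margin = power_margin_uw(100, [30, 30])
print(margin)  # 40 uW to spare
```

Even in this worst case the budget closes with margin, which is consistent with Abid's statement that the delivered power is "more than enough" for small electronics.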
“We’re able to efficiently send power from the transmitter antennas outside the body to antennas inside the body, and do it in a way that minimizes the radiation being absorbed by the tissue itself,” Abid says. Christopher Bettinger, an associate professor of materials science and biomedical engineering at Carnegie Mellon University, describes the study as a “great advancement” in the rapidly growing field of ingestible electronics. “This is a classic problem with implantable devices: How do you power them? What they’re doing with wireless power is a very nice approach,” says Bettinger, who was not involved in the research. For this study, the researchers used square antennas with 6.8-millimeter sides. The internal antenna has to be small enough that it can be swallowed, but the external antenna can be larger, which offers the possibility of generating larger amounts of energy. The external power source could be used either to continuously power the internal device or to charge it up, Traverso says. “It’s really a proof-of-concept in establishing an alternative to batteries for the powering of devices in the GI tract,” he says. “This work, combined with exciting advancements in subthreshold electronics, low-power systems-on-a-chip, and novel packaging miniaturization, can enable many sensing, monitoring, and even stimulation or actuation applications,” Smith says. The researchers are continuing to explore different ways to power devices in the GI tract, and they hope that some of their devices will be ready for human testing within about five years. “We’re developing a whole series of other devices that can stay in the stomach for a long time, and looking at different timescales of how long we want to keep them in,” Traverso says. “I suspect that depending on the different applications, some methods of powering them may be better suited than others.” The research was funded by the National Institutes of Health and by a Draper Fellowship.


News Article | May 3, 2017
Site: www.futurity.org

A new study suggests that restricting how pharmaceutical sales representatives can market drugs to physicians changes which medications doctors prescribe to their patients. A team of researchers examined restrictions at 19 academic medical centers (AMCs) in five US states placed on pharmaceutical representatives’ visits to doctors’ offices. Published in the Journal of the American Medical Association, the results suggest that the restrictions caused physicians to switch from prescribing drugs that were more expensive and patent-protected to generic, significantly cheaper drugs. Pharmaceutical sales representative visits to doctors, known as “detailing,” is the most prominent form of pharmaceutical company marketing. Detailing often involves small gifts for physicians and their staff, such as meals. Pharmaceutical companies incur higher costs on detailing visits than on direct-to-consumer marketing, or even on research and development of new drugs. Despite the prevalence of detailing and the numerous programs to regulate detailing, little was known about how practice-level detailing restrictions affect physician prescribing, until now. For the study, which is the largest, most comprehensive investigation into the impact of detailing restrictions, the team compared changes in the prescribing behavior of thousands of doctors before and after their AMCs introduced policies restricting detailing with the prescribing behavior of a carefully matched control group of similar physicians practicing in the same geographic regions but not subject to detailing restrictions. In total, the study included 25,000 physicians and 262 drugs in eight major drug classes from statins to sleep aids to antidepressants, representing more than $60 billion in aggregate sales in the United States. 
“The study cannot definitively prove a causal link between policies that regulated detailing and changes in physician prescribing, but absent a randomized control, this evidence is as definitive as possible,” says Ian Larkin, assistant professor of strategy at University of California, Los Angeles’ Anderson School of Management and co-leader of the research team. “We investigated 19 different policy implementations that happened over a six-year period, included a control group of highly similar physicians not subject to detailing restrictions and looked at effects in eight large drug classes. The results were remarkably robust—after the introduction of policies, about 5 to 10 percent of physician prescribing behavior changed.” Specifically, the researchers found that detailing policies were associated with an 8.7 percent decrease in the market share of the average detailed drug. Before policy implementation, the average drug had a 19.3 percent market share. The findings also suggest that detailing may influence physicians in indirect ways. “No medical center completely barred salesperson visits; salespeople could and did continue to visit physicians at all medical centers in the study,” Larkin says. “The most common restriction put in place was a ban on meals and other small gifts. The fact that regulating gifts while still allowing sales calls still led to a switch to cheaper, generic drugs may suggest that gifts such as meals play an important role in influencing physicians. The correlation between meals and prescribing has been well established in the literature, but our study suggests this relationship may be causal in nature.” In light of these findings, the study indicates that physician practices and other governing bodies may need to take an active role in regulating conflicts of interest, rather than relying on individual physicians to monitor and regulate.
“Social science has long demonstrated that professionals, even well-meaning ones, are powerfully influenced by conflicts of interest,” says George Loewenstein, a professor of economics and psychology at Carnegie Mellon University and co-leader of the research team. “A large body of research also shows that simply disclosing conflicts of interests is insufficient to reduce their influence, and may even exacerbate it. The results from this study underline the effectiveness of, and need for, centralized rules and regulations. We should not put the onus of dealing with conflicts on patients; the best policies are those that eliminate conflicts.” Larkin and Loewenstein also have a Viewpoint article in the same JAMA issue that calls for physicians to be compensated on a salary basis, instead of fee-for-service, to eliminate additional conflicts of interest. The National Institute of Mental Health provided funding and CVS Caremark provided data for the study.


News Article | April 17, 2017
Site: cen.acs.org

A flexible battery made of gauzy silk films could power electronics and then melt away after a preset number of days (ACS Energy Lett. 2017, DOI: 10.1021/acsenergylett.7b00012). The biodegradable battery produces a high enough voltage to power temporary medical implants designed to harmlessly dissolve in the body in a few weeks once their work is done. Scientists have been making rapid progress on medical sensors and devices that could transmit images, stimulate wounds to heal, or deliver drugs for a short while before degrading. Most prototypes of these devices have been powered from an external source so they can only be placed skin-deep. To work deeper in the body, the devices will need an on-board power source. Dissolvable batteries are an ideal solution. Researchers have made such batteries before using natural, biocompatible materials for the electrodes and electrolytes. One team made electrodes out of the skin pigment melanin, while others have used thin foils of magnesium or iron. The electrolytes have typically been solutions of various salts in water, but liquid electrolytes can leak out and degrade battery electrodes, and they make batteries relatively bulky. In a fresh spin on degradable batteries, Caiyun Wang and Gordon G. Wallace of the University of Wollongong and colleagues made electrodes and a solid electrolyte out of silk. The solid electrolyte enables thinner, flatter, and more flexible and robust batteries, says Wang. Silk is ideal for medical electronics because it can be made into thin films, is biocompatible, and is sturdy enough to work in electronic circuitry. The researchers made the thin films that comprise the new battery by first dissolving a fibrous silk protein called fibroin, derived from silkworm cocoons, in water. They spread the solution in a mold and peeled off ultrathin films of silk after the water evaporated. 
To make the electrolyte, they infused a silk membrane with the ionic liquid choline nitrate, a molten salt that is excellent at conducting ions, by adding the salt to the silk fibroin solution. To make electrodes, they deposited a biocompatible magnesium alloy on a piece of the silk film to form an anode and deposited gold on another piece to form a cathode. They assembled the battery by sandwiching the electrolyte between the two electrode films and fusing together the uncoated edges with a sticky, amorphous silk film. The postage-stamp-sized, 170-µm-thick device generated a voltage of 0.87 V and had a power density of 8.7 µW/cm², which would be enough to power an implantable medical sensor. Placed in a saline buffer solution, the battery showed a stable voltage for about an hour, after which the anode started breaking down. When the researchers added an extra silk film on top of the anode, the voltage remained stable for nearly two hours. Previously reported biodegradable batteries have lasted for about 15 minutes. The device nearly completely decomposed after 45 days in the solution, leaving behind inert gold nanoparticles, which would be cleared by the body. By adjusting the properties of the silk layers encapsulating the battery, Wallace says they could tailor how long it predictably generates power and how quickly it dissolves. The silk-ionic liquid electrolyte improves the performance of magnesium-based decomposable batteries, says Christopher J. Bettinger of Carnegie Mellon University. “These batteries can maintain a pretty high voltage for a relatively long amount of time,” he says. For medical applications it would be important to consider the toxicity of the ionic liquids, he says, but this “could also be a compostable battery for other uses.”
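For a sense of scale, the reported power density implies a usable power budget. A back-of-envelope check (the ~4 cm² footprint and the 10 µW sensor draw are illustrative assumptions, not figures from the paper):

```python
# Back-of-envelope power budget for the silk battery described above.
# The 8.7 µW/cm² power density and 0.87 V come from the article; the
# battery area and the sensor's power draw are illustrative assumptions.
power_density_uW_per_cm2 = 8.7   # from the article
voltage_V = 0.87                 # from the article
area_cm2 = 4.0                   # assumed postage-stamp-sized footprint

power_uW = power_density_uW_per_cm2 * area_cm2  # total available power
current_uA = power_uW / voltage_V               # deliverable current

sensor_draw_uW = 10.0  # assumed draw of a low-power implantable sensor
headroom_uW = power_uW - sensor_draw_uW         # margin left over
```

Under these assumptions the device supplies roughly 35 µW, comfortably above the assumed sensor draw, which is consistent with the article's claim.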


The cybersecurity workforce, which is currently struggling to fill seats with qualified talent, may have some newfound optimism. Over the past two weeks, upwards of 18,000 middle and high school students from across the United States learned and honed computer security skills in this year's picoCTF online hacking contest, hosted by Carnegie Mellon University's CyLab Security and Privacy Institute. The competition officially ended Friday, April 14, 2017. "I am very impressed by the amount of effort the participants put in and how much they accomplished over two weeks," said Marty Carlisle, picoCTF's technical lead and a teaching professor in Carnegie Mellon's Information Networking Institute. "I'm hoping these students will continue to pursue computer security and that I'll get a chance to work with some of them here at Carnegie Mellon." The winning team, "1064CBread," from Dos Pueblos High School in Goleta, CA, will receive their $5,000 cash award at an awards ceremony next month at Carnegie Mellon University's campus in Pittsburgh, PA. The second place team, "phsst," will receive $2,500 and consisted of students from Naperville North High School (IL), Thomas Jefferson High School for Science and Technology (VA), and Montgomery County Public Schools (MD). Team "Thee in/s/ane Potato" will receive $1,500 for finishing in third, and consisted of students from Thomas Jefferson High School (PA) and Stuyvesant High School (NY). "I think picoCTF is going to change lives here," said Anita Johnson, a teacher at Kealing Middle School in Austin, Texas, who had thirty-two of her students participate in picoCTF. "It has been a tremendous learning experience for all of us. What surprises and pleases me the most is the level of interest from the girls." 
During a two-week period beginning March 31, over 12,000 teams of students from across the United States attempted to hack, decrypt, reverse-engineer, and do anything necessary to solve 68 computer security challenges created by Carnegie Mellon's competitive hacking team, the Plaid Parliament of Pwning. Anyone could sign up and participate, but only United States students in grades 6-12 were eligible for prizes.
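Challenges at the beginner end of such contests often involve classical ciphers. A minimal sketch of that style of puzzle (this is an illustrative example, not an actual picoCTF challenge; the `flag{...}` format and helper names are assumptions):

```python
# Illustrative beginner CTF-style challenge: a flag "encrypted" with a
# Caesar shift is recovered by trying all 26 rotations and keeping the
# one that reveals the expected flag format.
def rot(text, k):
    """Rotate each letter by k places, leaving other characters alone."""
    out = []
    for c in text:
        if c.isalpha():
            base = ord('A') if c.isupper() else ord('a')
            out.append(chr((ord(c) - base + k) % 26 + base))
        else:
            out.append(c)
    return "".join(out)

def solve(ciphertext, marker="flag{"):
    """Brute-force every shift until the flag marker appears."""
    for k in range(26):
        guess = rot(ciphertext, k)
        if marker in guess:
            return guess
    return None

ciphertext = rot("flag{caesar_is_not_crypto}", 13)  # build a toy challenge
recovered = solve(ciphertext)
```

Real competition problems go far beyond this, into binary exploitation, web security, and reverse engineering, but the solve-a-puzzle-for-a-flag structure is the same.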


The matchmaking effort aims to dramatically increase robots and automation on U.S. production lines. But a second piece of the initiative's mission involves a very different kind of engineering challenge: keeping and growing human jobs along the way. "We are trying to create jobs because of the new technology and the new skills that will be needed," said Rebecca Hartley, director of operations for the Clemson University Center for Workforce Development in South Carolina. "We have to do this by the numbers," she added. "How many credentials do we create? How many students do we have enrolled in those programs? How many students do we have enrolled in apprenticeships? How many students are getting jobs? How many incumbent workers are getting new jobs because of new training?" It's a tall order for Hartley, who was brought on as the chief workforce officer for the Advanced Robotics in Manufacturing Institute, a nonprofit affiliated with Carnegie Mellon University that in January won an $80 million grant from the U.S. Department of Defense. Questions loom over the effects automation has had on the American workforce - a topic that for years has been the subject of debate among economists and policy-makers. While machines have allowed workers to become more productive and companies to lower their costs, the technology has also made some jobs obsolete. Occupations across the spectrum are seeing increasing automation, but manufacturing is especially exposed to non-human help - and vulnerable to job losses. One recent study found that as the average U.S. manufacturing worker churned out 68 percent more products from 2000 to 2010, employers eliminated 8.2 million jobs that economists had expected to exist. Companies and proponents of automation partly blame any job losses on what they call a persistent skills gap, and they look to training and education programs to prepare a new generation of workers. 
By tracking workforce needs on a national scale, Hartley and others hope the new ARM Institute can prove that improved efficiency can breathe life into manufacturing. "We believe robots and automation technology are going to save and create jobs," said Jeff Burnstein, president of the Association for Advancing Automation, an Ann Arbor, Mich.-based trade group of robotics developers. A recent study from the group showed manufacturing jobs have grown by 900,000 - even as a record number of robots were shipped. Burnstein said he likes to flip the script on the notion that robots are killing jobs: "What would happen if we didn't automate? How many jobs would be lost?"

A NEW WAY OF LEARNING

Hartley hails from South Carolina, a state that decades ago was home to a thriving textile industry. Like the steel industry's collapse in Pittsburgh, she said, textile mills in the southern state shuttered as the industry collapsed beginning in the 1970s. Memories of lost jobs and tough factory conditions have stuck with older residents, she said, and people tend to steer their children away from goods-producing industry. "They don't want anything to do with manufacturing because it reminds them of something that's not sustainable," she said. "So it's a perception issue to really show them what advanced manufacturing is." She has been doing that in her job with Clemson University, which, with funding from the National Science Foundation, has worked to design educational materials for two-year colleges and companies in the advanced manufacturing industry. Among other programs, the group has developed virtual reality courses that give students insight into how new machines and equipment work. At the ARM Institute, she said, early talks with industry have revealed a desire to break down negative stereotypes and to show students the future of manufacturing is a wide-open field.
Over the next five years, the institute plans to develop short-term certifications or credentials in robotics and automation - something that does not yet exist and that companies have sought for years, Hartley said. A similar plan using so-called "stackable credentials" was developed for the natural gas industry in Pennsylvania at ShaleNet, a training program founded in 2010. "That's something a student can build on," Hartley said. Hartley acknowledged that job creation, particularly this early in the endeavor, is a moving target. Jackie Erickson, a spokeswoman for the ARM Institute, said she couldn't provide exact numbers but workforce development "is very high priority" for the Defense Department. The institute plans to recruit military veterans returning to civilian life who have technical skills. At the gathering at the school's National Robotics Engineering Center, ARM Institute leaders began recruiting private sector partners to become members - and to secure at least the $173 million in contributions required as part of the federal grant. Of the more than 200 commitment letters the institute garnered for its grant application, many were from colleges, universities and other nonprofits in education and workforce development. Education partners so far have agreed to spend $88 million to the manufacturing companies' $41 million, according to numbers shared during the meeting by Gary Fedder, the institute's CEO. Economists and academic researchers who pin job losses on automation are encouraged by the institute's endeavor. Michael J. Hicks, an economics professor at Ball State University, found that nearly nine in 10 manufacturing job losses between 2000 and 2010 stemmed from factories becoming more efficient and automated. Just one in 10 jobs was lost because of trade policy, according to his study, which was published in 2015 and has received national attention amid the push to bring back manufacturing jobs.
That's not to say he discounts robotics from eventually bringing a new wave of jobs. When manufacturing first sprang up in the Midwestern states in the late 1800s, it drew in workers from simpler, predominantly agricultural jobs. "The same thing could happen in some other sector," he said. "We're going to have automation, but we don't know what that next big sector is going to be." The ARM Institute also has the potential to create central sources of workforce information for government, suggested Tom Mitchell, a professor of machine learning at CMU. Mitchell released a study arguing that policy-makers are "flying blind" as robotics disrupts the workplace. "You can imagine a scenario where they add up to more manufacturing jobs in the U.S., but you can also imagine a scenario where they don't," he said. "The situation is not so straightforward to me, because now I can see all these different forces at work. "We have some choices; there are policies that can make a difference."


News Article | May 8, 2017
Site: techcrunch.com

Researchers at Carnegie Mellon University have created a new way to turn almost any surface into a touchpad with just a little conductive spray paint. The system, called Electrick, uses a technique called “electric field tomography.” Created by CMU Ph.D. student Yang Zhang, Electrick uses small electrodes attached to the edges of a painted surface and can turn wood, plastic, drywall, and even Jell-O and Play-Doh into touch-sensitive surfaces. They’ve successfully added touch sensitivity with positional control to toys, guitars, and walls. “For the first time, we’ve been able to take a can of spray paint and put a touch screen on almost anything,” said Chris Harrison, an assistant professor in the Human-Computer Interaction Institute. Like many touchscreens, Electrick relies on the shunting effect — when a finger touches the touchpad, it shunts a bit of electric current to ground. By attaching multiple electrodes to the periphery of an object or conductive coating, Zhang and his colleagues showed they could localize where and when such shunting occurs. They did this by using electric field tomography — sequentially running small amounts of current through the electrodes in pairs and noting any voltage differences. The creators envision tools like interactive walls and even an interactive smartphone case that can sense the position of a finger on the back surface and interact with apps on the phone. You can also add a protective coating to the paint to keep it from chipping off. Zhang will show off the technology at the Conference on Human Factors in Computing Systems in Denver.
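The pairwise measurement scheme described above can be sketched in a few lines. This toy simulation assumes an electrode layout, a simplified shunting model, and a weighted-centroid solver; none of these are the published Electrick implementation:

```python
# Minimal sketch of touch localization via electric field tomography.
# Illustrative only: the electrode layout, the inverse-distance "shunting"
# model, and the weighted-centroid solver are assumptions for this toy,
# not the actual Electrick algorithm.
import itertools

# Eight electrodes around the edge of a unit-square painted surface.
ELECTRODES = [(0, 0), (0.5, 0), (1, 0), (1, 0.5),
              (1, 1), (0.5, 1), (0, 1), (0, 0.5)]

def simulated_drop(pair, touch, k=0.05):
    """Toy shunting effect: the current drop measured on an injection
    pair grows as the touch nears the pair's midpoint."""
    (x1, y1), (x2, y2) = pair
    mx, my = (x1 + x2) / 2, (y1 + y2) / 2
    d2 = (mx - touch[0]) ** 2 + (my - touch[1]) ** 2
    return k / (k + d2)

def localize(drops):
    """Estimate the touch as the drop-weighted centroid of pair midpoints."""
    sx = sy = sw = 0.0
    for ((x1, y1), (x2, y2)), w in drops.items():
        sx += w * (x1 + x2) / 2
        sy += w * (y1 + y2) / 2
        sw += w
    return sx / sw, sy / sw

touch = (0.7, 0.3)  # ground truth used to synthesize the measurements
drops = {pair: simulated_drop(pair, touch)
         for pair in itertools.combinations(ELECTRODES, 2)}
est = localize(drops)  # pulled toward the true touch point
```

The estimate is biased toward the surface's center because every pair contributes some weight, but it lands on the correct side in both axes; the real system fits a proper tomographic model to do much better.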


WASHINGTON, May 08, 2017 (GLOBE NEWSWIRE) -- The National Association of Corporate Directors (NACD), the advocate for the profession of directorship, today announced the corporate directors and senior executives who earned the CERT Certificate in Cybersecurity Oversight during the first quarter of 2017. The graduates of the course earned this unique credential by completing the NACD Cyber-Risk Oversight Program, the world’s first-ever online cyber-risk course for corporate leaders. The robust, multimodule program improves corporate directors’ understanding of cybersecurity risks, details the respective responsibilities of the board and C-suite executives in cyber-risk oversight, and engages participants in a cyber-crisis simulation. The course culminates in a comprehensive exam. “Now, more than ever, it is of paramount importance that those of us who bear the responsibility for corporate oversight are properly equipped to ask the right questions related to cybersecurity,” said Peter Gleason, NACD president and CEO. “This credential is a tangible testament to these leaders’ commitment to advanced cyberliteracy.” Corporate leaders who earned the CERT Certificate in Cybersecurity Oversight during the first quarter of 2017 include the following corporate directors and senior executives: The course was developed in partnership with Ridge Global and the CERT Division of the Software Engineering Institute at Carnegie Mellon University, which issues the certificate. Participants earn 24 Continuing Professional Education credits. In addition, NACD members who complete the course earn 22 NACD Fellowship® skill credits. “Whether a director has a background in technology or not, regulators and shareholders are holding board members to higher standards of accountability when it comes to cyber oversight,” said Gov. Tom Ridge, chair of Ridge Global and America’s first secretary of homeland security. 
“The NACD Cyber-Risk Oversight Program with the CERT Certificate will help board members gain confidence on cyber-risk issues while providing a tangible credential demonstrating their commitment to their fiduciary responsibilities.” Visit www.NACDonline.org/CyberCertificate to learn more about the NACD Cyber-Risk Oversight Program. About NACD The National Association of Corporate Directors (NACD) empowers more than 17,000 directors to lead with confidence in the boardroom. As the recognized authority on leading boardroom practices, NACD helps boards strengthen investor trust and public confidence by ensuring that today’s directors are well prepared for tomorrow’s challenges. World-class boards join NACD to elevate performance, gain foresight, and instill confidence. Fostering collaboration among directors, investors, and corporate governance stakeholders, NACD has been setting the standard for responsible board leadership for 40 years. To learn more about NACD, visit www.NACDonline.org. To become an NACD member, please contact us at Join@NACDonline.org or 202-572-2089. If you are already a member, contact your NACD Membership Advisor at MembershipAdvisor@NACDonline.org to ensure that you are receiving the best value from your membership.


By using optogenetics to control neurons in the basal ganglia, researchers achieve effects that last longer than deep brain stimulation

Researchers working in the lab of Carnegie Mellon University neuroscientist Aryn Gittis have identified two groups of neurons that can be turned on and off to alleviate the movement-related symptoms of Parkinson's disease. The activation of these cells in the basal ganglia relieves symptoms for much longer than current therapies, like deep brain stimulation and pharmaceuticals. The study, completed in a mouse model of Parkinson's, used optogenetics to better understand the neural circuitry involved in Parkinson's disease, and could provide the basis for new experimental treatment protocols. The findings, published by researchers from Carnegie Mellon, the University of Pittsburgh and the joint CMU/Pitt Center for the Neural Basis of Cognition (CNBC), are available as an Advance Online Publication on Nature Neuroscience's website. Parkinson's disease is caused when the dopamine neurons that feed into the brain's basal ganglia die and cause the basal ganglia to stop working, preventing the body from initiating voluntary movement. The basal ganglia is the main clinical target for treating Parkinson's disease, but currently used therapies do not offer long-term solutions. "A major limitation of Parkinson's disease treatments is that they provide transient relief of symptoms. Symptoms can return rapidly if a drug dose is missed or if deep brain stimulation is discontinued," said Gittis, assistant professor of biological sciences in the Mellon College of Science and member of Carnegie Mellon's BrainHub neuroscience initiative and the CNBC. "There is no existing therapeutic strategy for long-lasting relief of movement disorders associated with Parkinson's." To better understand how the neurons in the basal ganglia behave in Parkinson's, Gittis and colleagues looked at the inner circuitry of the basal ganglia.
They chose to study one of the structures that makes up that region of the brain, a nucleus called the external globus pallidus (GPe). The GPe is known to contribute to suppressing motor pathways in the basal ganglia, but little is known about the individual types of neurons present in the GPe, their role in Parkinson's disease or their therapeutic potential. The research group used optogenetics, a technique that turns genetically tagged cells on and off with light. They targeted two cell types in a mouse model for Parkinson's disease: PV-GPe neurons and Lhx6-GPe neurons. They found that by elevating the activity of PV-GPe neurons over the activity of the Lhx6-GPe neurons, they were able to stop aberrant neuronal behavior in the basal ganglia and restore movement in the mouse model for at least four hours -- significantly longer than current treatments. While optogenetics is used only in animal models, Gittis said she believes their findings could create a new, more effective deep brain stimulation protocol. Co-authors of the study include: Kevin Mastro, University of Pittsburgh Center for Neuroscience; Kevin Zitelli and Amanda Willard, Carnegie Mellon Department of Biological Sciences and CNBC; and Kimberly Leblanc and Alexxai Kravitz, National Institute of Diabetes and Digestive and Kidney Diseases. The research was funded by the National Institutes of Health (NIH) (NS090745-01, NS093944-01, NS076524), the National Science Foundation (DMS 1516288), the Brain & Behavior Research Foundation (formerly NARSAD), the Parkinson's Disease Foundation and the NIH Intramural Research Program. The authors also acknowledge the support of Carnegie Mellon's Disruptive Health Technology Institute.


News Article | May 8, 2017
Site: www.prnewswire.com

PITTSBURGH, May 8, 2017 /PRNewswire/ -- One of the most popular passwords in 2016 was "qwertyuiop," even though most password meters will tell you how weak that is. The problem is no existing meters offer any good advice to make it better—until now. Researchers from Carnegie Mellon...


Researchers from Carnegie Mellon University and the University of Chicago have just unveiled a new, state-of-the-art password meter that offers real-time feedback and advice to help people create better passwords. To evaluate its performance, the team conducted an online study in which they asked 4,509 people to use it to create a password. "Instead of just having a meter say, 'Your password is bad,' we thought it would be useful for the meter to say, 'Here's why it's bad and here's how you could do better,'" says CyLab Security and Privacy Institute faculty Nicolas Christin, a professor in the department of Engineering and Public Policy and the Institute for Software Research at Carnegie Mellon, and a co-author of the study. The study will be presented at this week's CHI 2017 conference in Denver, Colorado, where it will also receive a "Best Paper Award." A demo of the meter can be viewed here. "The key result is that providing the data-driven feedback actually makes a huge difference in security compared to just having a password labeled as weak or strong," says Blase Ur, lead author on the study, formerly a graduate student in CyLab and currently an assistant professor at the University of Chicago's Department of Computer Science. "Our new meter led users to create stronger passwords that were no harder to remember than passwords created without the feedback." The meter works by employing an artificial neural network: a large, complex map of information that resembles the way neurons behave in the brain. The team conducted a study about this neural network approach that received a Best Paper Award at the USENIX Security conference in August 2016. The network "learns" by scanning millions of existing passwords and identifying trends. If the meter detects a characteristic in your password that it knows attackers may guess, it'll tell you. 
"The way attackers guess passwords is by exploiting the patterns that they observe in large datasets of breached passwords," says Ur. "For example, if you change Es to 3s in your password, that's not going to fool an attacker. The meter will explain about how prevalent that substitution is and offer advice on what to do instead." This data-driven feedback is presented in real-time, as a user is typing their password out letter-by-letter. The team has open-sourced their meter on GitHub. "There's a lot of different tweaking that one could imagine doing for a specific application of the meter," says Ur. "We're hoping to do some of that ourselves and also engage other members of the security and privacy community to help contribute to the meter."
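The substitution example Ur gives can be illustrated with a toy checker. This is a rule-based stand-in: the real meter's feedback comes from a trained neural network, and the word list and rules below are assumptions made for illustration:

```python
# Toy, rule-based stand-in for the data-driven password feedback
# described above. The hand-written rules only illustrate the *kind*
# of pattern-aware advice the real neural-network meter gives.
COMMON_SUBS = {"3": "e", "0": "o", "1": "l", "@": "a", "$": "s"}
COMMON_WORDS = {"password", "qwertyuiop", "letmein", "iloveyou"}

def feedback(pw):
    tips = []
    # Undo the letter-for-symbol substitutions attackers try first.
    normalized = "".join(COMMON_SUBS.get(c, c) for c in pw.lower())
    if normalized in COMMON_WORDS:
        tips.append("Common password even after symbol substitutions; "
                    "attackers check those substitutions first.")
    if pw.isalpha() or pw.isdigit():
        tips.append("Mix letters, digits, and symbols.")
    if len(pw) < 12:
        tips.append("Longer passwords are much harder to guess.")
    return tips
```

Running `feedback("p@ssw0rd")` flags both the disguised dictionary word and the short length, which is exactly the shape of advice the article describes: not just "weak," but why.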


News Article | May 8, 2017
Site: www.eurekalert.org

PITTSBURGH--One of the most popular passwords in 2016 was "qwertyuiop," even though most password meters will tell you how weak that is. The problem is no existing meters offer any good advice to make it better--until now. Researchers from Carnegie Mellon University and the University of Chicago have just unveiled a new, state-of-the-art password meter that offers real-time feedback and advice to help people create better passwords. To evaluate its performance, the team conducted an online study in which they asked 4,509 people to use it to create a password. "Instead of just having a meter say, 'Your password is bad,' we thought it would be useful for the meter to say, 'Here's why it's bad and here's how you could do better,'" says CyLab Security and Privacy Institute faculty Nicolas Christin, a professor in the department of Engineering and Public Policy and the Institute for Software Research at Carnegie Mellon, and a co-author of the study. The study will be presented at this week's CHI 2017 conference in Denver, Colorado, where it will also receive a "Best Paper Award." A demo of the meter can be viewed here. "The key result is that providing the data-driven feedback actually makes a huge difference in security compared to just having a password labeled as weak or strong," says Blase Ur, lead author on the study, formerly a graduate student in CyLab and currently an assistant professor at the University of Chicago's Department of Computer Science. "Our new meter led users to create stronger passwords that were no harder to remember than passwords created without the feedback." The meter works by employing an artificial neural network: a large, complex map of information that resembles the way neurons behave in the brain. The team conducted a study about this neural network approach that received a Best Paper Award at the USENIX Security conference in August 2016. The network "learns" by scanning millions of existing passwords and identifying trends. 
If the meter detects a characteristic in your password that it knows attackers may guess, it'll tell you. "The way attackers guess passwords is by exploiting the patterns that they observe in large datasets of breached passwords," says Ur. "For example, if you change Es to 3s in your password, that's not going to fool an attacker. The meter will explain about how prevalent that substitution is and offer advice on what to do instead." This data-driven feedback is presented in real-time, as a user is typing their password out letter-by-letter. The team has open-sourced their meter on GitHub. "There's a lot of different tweaking that one could imagine doing for a specific application of the meter," says Ur. "We're hoping to do some of that ourselves and also engage other members of the security and privacy community to help contribute to the meter." Other authors on the study included current CMU students Jessica Colnago, Henry Dixon, Pardis Emami Naeini, Hana Habib, Noah Johnson, and William Melicher; former CMU students Felicia Alfieri and Maung Aung; and Carnegie Mellon faculty Lujo Bauer and Lorrie Faith Cranor. About Carnegie Mellon University: Carnegie Mellon is a private, internationally ranked university with programs in areas ranging from science, technology and business to public policy, the humanities and the arts. More than 13,000 students in the university's seven schools and colleges benefit from a small faculty-to-student ratio and an education characterized by its focus on creating and implementing solutions for real world problems, interdisciplinary collaboration and innovation. About Carnegie Mellon University CyLab: Carnegie Mellon University CyLab is a University-wide, multi-disciplinary cybersecurity and privacy research institute. With over 50 core faculty, CyLab partners with industry and government to develop and test systems that lead to a world in which people can trust technology. 
CyLab stretches across five colleges encompassing the fields of engineering, computer science, business, public policy, information systems, humanities and social sciences.


News Article | April 17, 2017
Site: motherboard.vice.com

A version of this post originally appeared on Tedium , a twice-weekly newsletter that hunts for the end of the long tail. You know something you can't get through the internet's wires, at least not on its own? Food. We've been working on it for years, but no, we're not at the point where we can deliver nourishment directly via the series of tubes. But food has always been something of a means to an end—a way of driving the internet forward, making it something people would actually like to use. Fact is, if you're trying to get people to try something new, looking at Maslow's Hierarchy of Needs and fulfilling one of the listed needs—the lower down the hierarchy, the better—is a good way to ensure success. (And food is at the bottom.) The Internet of Food, of course, starts with a Coke machine. It always does. Why the "Internet Coke Machine" is actually more innovative than it sounds When I mentioned to my wife that one of the internet's earliest phenomena involved a Coke machine that had its own website, she scoffed—because the idea, at its root, sounds absurd and useless. It's an innovation that sounds absurdly pedestrian when you can convince Alexa to buy you a $170 dollhouse and four pounds of cookies by accident. But as it turns out, it's all about the use case. Carnegie Mellon University, which long managed this shining example of Maslow's Hierarchy in action, came up with the idea because the computer science department had been moved away from the machine, and the thirsty programmers needed a way to confirm that there were beverages in the machine—and, more importantly, that they were actually cold. From CMU's history page for the device: They installed micro-switches in the Coke machine to sense how many bottles were present in each of its six columns of bottles. The switches were hooked up to CMUA, the PDP-10 that was then the main departmental computer. 
A server program was written to keep tabs on the Coke machine's state, including how long each bottle had been in the machine. In other words, this device may have been the first "Internet of Things" device, and it was even more novel once it had been connected to the web in 1993. There were a lot of imitators over the years, but it turns out that the soda companies are doing something pretty similar these days. In 2015, Bizjournals reported that vending machine companies, including Coca-Cola, have started to rely on internet-connected platforms. This is good, notes reporter Efrat Kasznik, because it allows beverage and food distribution companies to refill the machines so they're never empty, as well as tighten supply chains. Suddenly, a staid business becomes a big data business. Take Coca-Cola's Freestyle machine. While it's better known as a fancy way to mix Sprite with Mello Yello Zero at your local Boloco, it's a prime example of this concept in action: Supplying the machines with network connectivity allows Coke to identify each individual machine, track inventory stock levels, conduct real time test marketing, and probably most importantly: track trends and drinking preferences and adjust the selections accordingly. More than 2,000 Freestyle machines are currently deployed in fast food locations throughout the U.S. and the U.K. In other words, a Coke Freestyle machine is an Internet Coke Machine on steroids, and you should treat it as such going forward. Of course, if you closely followed the internet's formative food years, you'll know that Coke machines weren't the only early way food interacted with wires. Early on, food fans saw a lot of potential for the web to redefine delivery. That said, if you looked at Pizza Hut's early attempt at delivering pizzas based on technology—a platform called PizzaNet—it might have looked like a failure. Early on, Pizza Hut generally sold fewer than 10 pies each week through its initial experiment in online ordering.
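The server CMU describes in the history excerpt above (micro-switches feeding a program that tracks each column's state and how long each bottle has chilled) is simple enough to sketch. Everything in this sketch is invented for illustration; the historical code ran on a PDP-10 and this is only a guess at its logic in modern Python:

```python
# Rough sketch of CMU's Coke-machine monitor. The class, the six columns,
# and the three-hour chill time are all illustrative assumptions, not the
# historical implementation.
import time

COLD_AFTER = 3 * 60 * 60  # assume a freshly loaded bottle is cold after ~3 hours


class CokeMachine:
    def __init__(self, columns=6):
        # Per column: when the current front bottle was loaded (None = empty).
        self.loaded_at = [None] * columns

    def restock(self, column):
        """A micro-switch reports a new bottle arriving in this column."""
        self.loaded_at[column] = time.time()

    def vend(self, column):
        """A micro-switch reports the front bottle being bought."""
        self.loaded_at[column] = None

    def status(self):
        """What a remote, thirsty programmer would query: per-column state."""
        now = time.time()
        return [
            "EMPTY" if t is None else ("COLD" if now - t >= COLD_AFTER else "WARM")
            for t in self.loaded_at
        ]


machine = CokeMachine()
machine.restock(0)
print(machine.status())  # column 0 just stocked, so it reports WARM; the rest EMPTY
```

The point of the original hack, and of this sketch, is that the interesting state is not just "is there a bottle" but "how long has it been in there," which is why each column stores a timestamp rather than a count.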
Part of the issue was the newness of the technology; part of it was scale: The experiment was limited to the residents of Santa Cruz, California at the time, in part because the developer of the pizza-delivery technology was SCO. Jonathan Cohen, a former SCO marketing person who was dead-set against the PizzaNet idea but now eats his Stuffed Crust Pizza with crow, wrote a great recollection about the saga last year. But while Pizza Hut had the basic idea of delivering pizza via the internet, it was another company that did much of the early legwork to make the concept widespread. And that startup's inspiration was sort of depressing, considering it was an internet-based startup. Tim Glass, a guy from Seattle, had seen the 1995 film The Net, the Sandra Bullock vehicle in which a bunch of terrible internet-related things happen. One of those internet-related things involved the ordering of a pizza through a computer—something most certainly inspired by the Pizza Hut test a year prior. But when Glass looked around for a real-world example of the idea, he couldn't find one. So he created his own, starting up a company called CyberSlice in 1996. "Millions of people order pizza every day and we're about to change that whole experience," Glass explained in a 1996 news release. "Have you ever flipped through the phone book for your favorite pizzeria only to find it's closed, doesn't deliver or the staff is too hurried to discuss the menu or specials? CyberSlice takes the guesswork out, giving consumers more choice and value than traditional phone ordering. We took a simple idea and built an entertaining and enjoyable Web destination, while staying focused on customer service and satisfaction." This whole state of affairs was more complicated than it sounds. Remember, Google Maps didn't exist then, so the company had to rely on the vanguard at the time, which was MapQuest. Like Pets.com just a couple years later, the company had to build out a lot of its own technology.
On the plus side, because they were first, they were able to patent the idea of delivering food ordered on the internet, which is probably a very valuable patent these days. The company, in a pretty astute bit of marketing, used WebObjects, the NeXT Software-created online platform, and built one of the first sophisticated pieces of software using that tool. That meant it got free promotion from NeXT upon launch. Steve Jobs literally was CyberSlice's first customer. "NeXT is excited to provide the enabling technology to CyberSlice, which combines fun with an innovative business concept," Jobs said at the time. "The success of CyberSlice shows the versatility of WebObjects in creating and deploying consumer web applications that are both sophisticated and original." That stroke of good luck quickly turned out to be a mess, because of a particularly bad idea: According to Newsweek, the company, soon renamed Cybermeals, spent $54 million on advertising—on four major web portals over a four-year period. This was costly and poorly considered, because web advertising wasn't very good at the time, and not every market had the company's service. Eventually, the company had to revamp its approach entirely, which may have made things even worse. In 1998, a venture capital firm replaced the company's entire leadership team in exchange for $10 million in capital, and brought in a former Disney exec, Rich Frank. By early 1999, CyberMeals had been renamed Food.com. The company blew through $20 million in capital in 1998 alone. Soon enough, it had raised even more money—a fresh $80 million round—from some unusually big-name backers: McDonald's, Kraft, TV Guide, and Blockbuster. (The latter, by the way, tried to sell the idea of combining restaurant delivery and movie rentals.)
At the same time, it expanded its mission to be, as a March 2000 press release put it, "the nation's dining network, offering consumers a single destination for anything related to food—and accessed from a variety of devices from personal computers to wireless handhelds and televisions." You could technically say that Food.com has done that. Literally, if you go to the website right now, it is a clearinghouse of all things food. But it only became that because the Food Network, quite literally "the nation's dining network," bought it in 2003. So here's a company that was founded based on a good idea in a bad movie, whose first customer was Steve Jobs, whose CEO used to work for Disney, and counted McDonald's and Blockbuster among its investors. It blew through tens of millions of dollars like nothing. And it was bought out by the Food Network essentially because it had a good domain. These days, of course, food and the internet work together pretty neatly. Case in point: Back in 2014, GrubHub Inc. saw its stock surge by 31 percent during its first day on the market. The company, which merged with Seamless in 2013, effectively nailed down the model in ways that Food.com never could. (In case you were wondering: Neither Grubhub nor Seamless came to life because their respective owners saw a Sandra Bullock movie.) The stock price has had ups and downs, but ultimately the company highlights the fact that the Food.com model was ultimately quite good—it just needed a more efficient company to pull it off. So I'm going to fully admit that I have a bright orange Chrome Industries bag, and I love it, but I don't do anything that could be described as "messaging" with it, unless you consider what I'm doing now with my laptop in that way. The only thing that could make it better, really, is if the messenger bag was a Kozmo.com bag. Chrome Industries got its start in 1995, and just a couple of years later, its bags were converted into the startup's main promotional tool. 
Kozmo.com was perhaps the early web's most inventive delivery service. It could get you pretty much anything you wanted in a relatively short period of time, and often, it showed up in an orange Chrome bag. The company, like most other dot-coms of the time, went belly-up in short order as the stock market went to hell. But Chrome is still with us and still doing well—despite the fact that its bags are built to last for-friggin'-ever. (Which means that I'll be 80 and still carrying around an awesome orange messenger bag. Deal with it.) And because they last for-friggin'-ever, Kozmo bags occasionally show up on sites like eBay for auction. Sample line from an expired auction from 2013: "The bag is super-tough, can hold three New York style pizzas, and is impervious to weather." The Wired Twitter account even referenced the thing in a joke one time. I don't know about you guys, but I suddenly feel compelled to slap a Kozmo logo on my bag just to throw people off. Just don't expect me to deliver you a pint of ice cream.


News Article | April 17, 2017
Site: www.prweb.com

Let’s start with sports. Not just one, or two, but three top teams in different sports. Pittsburghers don't know the meaning of off-season: Steelers. Pens. Buccos. Pittsburgh bleeds black and gold and, even if you're not a sports fan when you move to the Burgh, you'll likely become one pretty quickly. When looking for a top-notch education, Pittsburgh is your destination. Pittsburgh boasts highly rated universities such as Carnegie Mellon University: CMU earned the 23rd spot on U.S. News & World Report's National Universities Rankings list in 2016, and the University of Pittsburgh came in at 66 on the same list. Point Park University, Chatham University, The Art Institute of Pittsburgh, and Robert Morris University are among the many public and private colleges and universities that call Pittsburgh home. The ever-growing job market in Pittsburgh is drawing people of all ages and industries. Pittsburgh secured the sixth spot for the best job markets in the U.S. in 2016, as determined by ZipRecruiter.com, with the strongest employment sectors identified as healthcare, insurance, and hospitality and restaurants. And let’s not overlook the tech field. Google has opened offices in the Bakery Square district of the city because of the evolving tech companies and start-ups in the area. As Pittsburgh’s tech sector continues to grow, the selection of rising startups becomes more prevalent and diverse. Pittsburgh promises affordability for its residents. Maronda Homes knows this and is committed to providing a multitude of housing options, from patio homes to town homes to single-family homes, throughout the suburbs of the city. Maronda Homes currently has 31 communities throughout the North, South, East, and West Hills of Pittsburgh. Maronda provides the new home customer with living options as low as the $160s for an innovative town home design, up to a luxury 6-bedroom single-family home offering over 5,000 sq. ft.
The local home builder is continuously striving to improve the home building process by modernizing floor plans and evolving home designs with affordable luxury reaching every corner. At Maronda Homes, they believe that quality is never a destination - it is a requirement. Maronda homeowners agree: “After months of looking and researching, we found ourselves back at Maronda Homes. Maronda offered us not only the size of house we were looking for but also the quality. The choices and options available to us allowed us to personalize our home to suit our family perfectly while maintaining affordability.” - Katheryn, Maronda Homeowner. It’s time to take a look at Pittsburgh and Maronda Homes - a city and a home builder committed to the success and happiness of their residents.


Successful clinical trials to create drugs and vaccines for next pandemic disease will rely on building capacity, community engagement, and international collaboration before and during outbreak

WASHINGTON - Mobilization of a rapid and robust clinical research program that explores whether investigational therapeutics and vaccines are safe and effective to combat the next infectious disease epidemic will depend on strengthening capacity in low-income countries for response and research, engaging people living in affected communities, and conducting safety trials before an epidemic hits, says a new report from the National Academies of Sciences, Engineering, and Medicine. Using key lessons learned from the Ebola epidemic in West Africa, the report outlines how to improve the speed and effectiveness of clinical trial research while an epidemic is occurring, especially in settings where there is limited health care and research infrastructure. The research and development of therapeutics and vaccines is a long, complex, and expensive process and cannot be compressed into the course of a rapidly progressing outbreak. The development of a drug "from bench to bedside" is estimated, on average, to take at least 10 years and cost $2.6 billion, with less than 12 percent likelihood of eventual licensing. Therefore, making progress on the research and development of products - such as therapeutics and vaccines - before an epidemic breaks is the only way to ensure that promising candidates are ready for trials once an outbreak occurs, said the committee that carried out the study and wrote the report. In addition, clinical trials could be more rapidly planned, approved, and implemented during an outbreak if promising products are studied through Phase 1 or Phase 2 safety trials in advance of an outbreak and if emergency response planning includes clinical research considerations and clinical researchers in the discussions from the beginning.
The 2014-2015 Ebola epidemic was the longest and most deadly Ebola outbreak since the virus was first discovered in 1976, resulting in 28,616 cases and 11,310 deaths in Guinea, Liberia, and Sierra Leone. In August 2014, the World Health Organization declared the epidemic a public health emergency of international concern. Researchers discussed how to conduct clinical trials on potential Ebola therapeutics and vaccines in West Africa, and ultimately, several teams conducted formal clinical trials in the Ebola-affected countries during the outbreak. The clinical trial teams overcame immense logistical obstacles encountered while trying to design and implement trials in West Africa in the midst of a rapidly spreading epidemic of a highly dangerous contagious disease. However, none of the therapeutic trials ended with conclusive results on product efficacy, although limited evidence from the trial for the ZMapp treatment did trend toward a possible benefit. Given the resources, time, and effort put into these trials, they were not as successful as they could have been. The results of the vaccine trials were more fruitful. Two Ebola vaccine candidates have data that suggest they may be safe and produce an immune response, and one is most likely protective, but further data are needed. Planning and conducting clinical research during the Ebola epidemic also required confronting a number of ethical issues, such as whether it was ethical to conduct clinical trials at all in the midst of a public health emergency and whether the research activities drew effort away from providing clinical care to the most people possible. There was also disagreement among researchers over how clinical trials should be designed during the Ebola epidemic, particularly whether trials should use randomization and concurrent control groups. 
Randomized controlled trials are generally the preferred research design, because they allow researchers to directly compare the outcomes of similar groups of people who differ only in the presence or absence of the investigational agent. However, many argued that randomized controlled trials would be unethical during the Ebola epidemic, as this trial design would deprive patients of an agent that could potentially prevent or treat Ebola, given the high mortality rate and lack of known and available treatment options. The committee concluded that randomized controlled trials are both ethical and the fastest and most reliable way to identify the relative benefits and risks of investigational products, and except in rare circumstances, every effort should be made to implement them during epidemics. The issues that influenced choices about trial design during the Ebola epidemic - such as community mistrust, the feasibility of a standard-of-care-only arm, the high and variable mortality rate, limited product availability, and the potential conflicts between research and care - are likely to recur in future epidemics. Nevertheless, the perceived ethical or logistical hurdles that these issues present are not sufficiently compelling to override the benefits of randomized trials. Rather, randomized trials may be the most ethical trial design, because they offer the fastest route to identifying beneficial treatments while minimizing the risks of exposure to potentially harmful investigational agents. To improve the national and international clinical trial response to the next epidemic, the committee focused on three main areas - strengthening capacity, engaging communities, and facilitating international coordination and collaboration - both in the period of time before an outbreak strikes and during the epidemic itself. 
The committee found major capacity challenges that hindered and slowed the research response to the Ebola epidemic, and recommended developing sustainable health systems and research capabilities, improving capacity to collect and share clinical and epidemiological data, facilitating the mechanisms for rapid ethics reviews and legal agreements before an epidemic occurs, and incorporating research systems into emergency preparedness and response systems for epidemics. Affected communities had considerable fear, mistrust, and misunderstanding of national and international response and research staff. Community members feared going to health care facilities for the treatment of Ebola, rumors spread that Ebola was deliberately brought to the region by foreigners, and initial response efforts did not take into account community traditions and beliefs. For example, mandatory cremation policies countered deeply held religious beliefs. Successful clinical research is dependent on a community's understanding of, engagement in, and sense of involvement and respect in the process of planning and conducting research, the committee found. Community engagement should be prioritized during epidemic responses and be a continuous and evolving effort, starting at the onset of the epidemic. Research and response efforts were also greatly affected by the relationships among international stakeholders and their ability to coordinate and collaborate. For example, there were a few Ebola-specific therapeutic candidates with suggestive efficacy available at the beginning of the outbreak that could have been investigated in clinical trials, but the mechanism to prioritize which should be studied first was limited. 
The committee recommended the establishment of an international coalition of stakeholders to work between epidemics that would advise and prioritize pathogens to target for research and development, develop generic clinical trial design templates, and identify teams of clinical research experts who could be deployed to assist with research during an outbreak. The committee also highlighted seven critical steps to launching successful clinical trials when the next epidemic first strikes and before it peaks. The steps are to collect and share patient information and establish standards of care, engage communities and establish mutual trust, integrate research efforts into response and facilitate stakeholder coordination, prioritize vaccines and therapies and select trial designs, negotiate contracts, consult with regulators, and perform independent ethics reviews. The study was sponsored by the U.S. Department of Health and Human Services' Office of the Assistant Secretary for Preparedness and Response, National Institutes of Health, and U.S. Food and Drug Administration. The National Academies of Sciences, Engineering, and Medicine are private, nonprofit institutions that provide independent, objective analysis and advice to the nation to solve complex problems and inform public policy decisions related to science, technology, and medicine. The National Academies operate under an 1863 congressional charter to the National Academy of Sciences, signed by President Lincoln. For more information, visit http://national-academies. . A roster follows. Copies of Integrating Clinical Research Into Epidemic Response: The Ebola Experience are available from the National Academies Press at http://www. or by calling 1-800-624-6242. Reporters may obtain a copy from the Office of News and Public Information (contacts listed above).

Gerald T. Keusch, M.D.* (co-chair), Professor of Medicine and Global Health, Boston University Schools of Medicine and Public Health, Boston
Keith McAdam, M.D. (co-chair), Emeritus Professor of Clinical and Tropical Medicine, London School of Hygiene and Tropical Medicine, London
Abdel Babiker, Ph.D., Professor of Epidemiology and Medical Statistics, Medical Research Council Clinical Trials Unit at University College London, London
Susan S. Ellenberg, Ph.D., Professor of Biostatistics, Perelman School of Medicine, University of Pennsylvania, Philadelphia
Roger J. Lewis, M.D., Ph.D.*, Professor and Chair of the Department of Emergency Medicine, Harbor-UCLA Medical Center, Los Angeles
Alex London, Ph.D., Professor of Philosophy and Director of the Center for Ethics and Policy, Carnegie Mellon University, Pittsburgh
Michelle M. Mello, Ph.D.*, Professor of Law, Stanford University School of Medicine and School of Law, Stanford, Calif.
Olayemi Omotade, M.D., Professor of Pediatrics and Child Health, Institute of Child Health, University College Hospital, University of Ibadan, Ibadan, Nigeria
Fred Wabwire-Mangen, Ph.D., Associate Professor of Epidemiology and Public Health, Makerere University School of Public Health, Kampala, Uganda


News Article | April 24, 2017
Site: www.prweb.com

Recent federal recommendations against offering the inhaled nasal influenza vaccine due to lack of effectiveness could lead to more flu illness in the U.S. if the inhaled vaccine becomes effective again or if not having the choice of the needle-less vaccine substantially reduces immunization rates, according to a new analysis led by University of Pittsburgh School of Medicine scientists. The findings, published online and scheduled for a coming issue of the American Journal of Preventive Medicine, indicate that close surveillance will be needed to ensure that the U.S. Centers for Disease Control and Prevention (CDC) recommendation against the nasal vaccine—called the live attenuated influenza vaccine, or LAIV—continues to do more good than harm. “The CDC is being appropriately cautious and doing the right thing based on available data,” said lead author Kenneth J. Smith, M.D., M.S., professor of medicine and clinical and translational science in Pitt’s School of Medicine. “However, our study finds that it would take only relatively small changes to tip the scales back in favor of offering the LAIV, so close monitoring is very important.” The Pittsburgh Vaccination Research Group (PittVax) is one of a few sites across the U.S. that track flu in patients who received and did not receive the annual flu vaccine. The data they collect is shared with the CDC’s Advisory Committee on Immunization Practices and led to the CDC’s recommendation against LAIV last year after data from the two previous flu seasons showed it to be ineffective at preventing influenza A, which is typically the most common strain. In the past, the LAIV was a common vaccine offered to children 2 to 8 years old. Under current conditions, only offering the needle-delivered flu vaccine results in 20.9 percent of children ages 2 to 8 getting the flu, compared with 23.5 percent if both the needle and nasal vaccine are offered. 
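The arithmetic behind that comparison can be sketched in a few lines. To be clear about what is borrowed and what is invented: the 20.9 and 23.5 percent figures come from the study, while the baseline risk, coverage, and effectiveness inputs below are illustrative stand-ins chosen to roughly reproduce them, not the published model's parameters:

```python
# Toy sketch of the two-policy comparison. Only the ~21% / ~23.5% outputs
# correspond to figures quoted in the article; BASELINE and the coverage
# and effectiveness values are illustrative assumptions.

def attack_rate(baseline, coverage, effectiveness):
    """Fraction infected when `coverage` of the group receives a vaccine
    that prevents `effectiveness` of infections in recipients."""
    return baseline * (1 - coverage * effectiveness)

BASELINE = 0.30  # assumed flu risk for wholly unvaccinated children ages 2-8

# Policy 1: needle-delivered vaccine (IIV) only.
iiv_only = attack_rate(BASELINE, coverage=0.60, effectiveness=0.50)

# Policy 2: LAIV also offered, raising coverage (fewer needle refusals) but
# diluting the group's average effectiveness, since per the CDC data the
# current LAIV effectiveness is near zero.
both_offered = attack_rate(BASELINE, coverage=0.70, effectiveness=0.31)

print(f"IIV only:     {iiv_only:.1%} of children get the flu")
print(f"Both offered: {both_offered:.1%}")
# With these inputs the needle-only policy comes out ahead, roughly
# mirroring the study's 20.9% vs. 23.5%; raise LAIV effectiveness enough,
# or make dropping it costly enough in coverage, and the ranking flips.
```

The published analysis uses a far richer model; this sketch only illustrates why the recommendation hinges on two measurable quantities, LAIV effectiveness and the coverage lost by withdrawing it, which is exactly what the thresholds reported below quantify.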
However, if the LAIV effectiveness improves and can prevent flu in more than 63 percent of the people who get it, then it once again becomes beneficial to offer both forms of vaccination. “Interestingly, there has been no decrease in LAIV effectiveness in other countries, and we’re still unsure why this is,” said Smith. “It is possible that future research will find ways to make LAIV more effective in the U.S. again, in which case the CDC recommendations will need to be reexamined.” The researchers also found that if not having the needle-less vaccine as an option drives down vaccination rates by 18.7 percent or more, then offering both options is the better recommendation. “PittVax will continue collecting, analyzing and reporting on flu cases and flu vaccine effectiveness in the Pittsburgh region, helping guide flu immunization recommendations,” said senior author Richard K. Zimmerman, M.D., M.P.H., professor in Pitt School of Medicine’s Department of Family Medicine and Pitt Graduate School of Public Health’s Department of Behavioral and Community Health Sciences. “This kind of surveillance is critical to charting the best course to save lives from influenza, which kills thousands annually.” Additional authors on this study are Mary Patricia Nowalk, Ph.D., R.D., Angela Wateska, M.P.H., and Jonathan M. Raviotta, M.P.H., all of Pitt; Shawn T. Brown, Ph.D. and Jay V. DePasse, B.S., at the Pittsburgh Supercomputing Center at Carnegie Mellon University and Eunha Shim, Ph.D., of Soongsil University in Seoul, Republic of Korea. This project was funded by National Institute of General Medical Sciences grant R01GM111121. About the University of Pittsburgh Schools of the Health Sciences The University of Pittsburgh Schools of the Health Sciences include the schools of Medicine, Nursing, Dental Medicine, Pharmacy, Health and Rehabilitation Sciences and the Graduate School of Public Health. 
The schools serve as the academic partner to the UPMC (University of Pittsburgh Medical Center). Together, their combined mission is to train tomorrow’s health care specialists and biomedical scientists, engage in groundbreaking research that will advance understanding of the causes and treatments of disease and participate in the delivery of outstanding patient care. Since 1998, Pitt and its affiliated university faculty have ranked among the top 10 educational institutions in grant support from the National Institutes of Health. For additional information about the Schools of the Health Sciences, please visit http://www.health.pitt.edu.


News Article | April 18, 2017
Site: www.eurekalert.org

Adolescents face many challenging decisions. So do consumers. A new paper published in the Proceedings of the National Academy of Sciences shows how collaborations between psychologists and economists lead to better understanding of such decisions than either discipline can achieve on its own. "Psychology and economics are both interested in how people make decisions, but have different theories and methods. In our work with economists at Northwestern, Michigan, the Federal Reserve and elsewhere, we have found ways to complement each other's expertise," said Wändi Bruine de Bruin, professor of behavioral decision making at Leeds University Business School, who received her Ph.D. from Carnegie Mellon University, where she is collaborating professor in the Department of Engineering and Public Policy. In two series of studies, focused on individuals' expectations for major life events, Bruine de Bruin and CMU's Baruch Fischhoff worked with economists to design survey questions that were simple enough for laypeople to answer but precise enough to inform economic models. The first project examined adolescents' expectations for life events that would affect their psychological and economic development, such as finding work, being arrested and having children. Their colleagues in economics, led by Charles Manski, a former CMU faculty member, wanted to ask precise questions on a major national survey but were meeting resistance from survey researchers, who claimed that they were too hard for teens to answer. Bruine de Bruin and Fischhoff supported the economists' concerns with studies showing that questions using seemingly simpler language were actually more difficult for respondents and less useful for researchers, because the simpler wording was more ambiguous. The team then developed questions that teens could understand and that yielded answers economists could use.
The process included asking for numerical probabilities (e.g., 70 percent), rather than verbal quantifiers, such as "very likely." "We found that kids were better at judging their futures than people may have thought, that they could estimate with numerical probabilities just fine and their answers were generally sensible," said Fischhoff, the Howard Heinz University Professor in the Institute for Politics and Strategy and Department of Engineering and Public Policy at CMU. The second project involved consumers' expectations of inflation, which play a central role in predicting financial decisions for the overall economy. Economists at the U.S. Federal Reserve worried that the questions that they had used for decades did not mean the same thing to consumers as they did for economists. Bringing psychological methods to bear on economics problems, the team found that here, too, it was better to ask more precise questions. "When you ask people directly about 'inflation,' it led to less confusion, and more accurate expectations, than when you use vague terms, like 'prices in general,'" said Bruine de Bruin. Moreover, asking about 'prices in general' led people to think of prices for specific goods, bringing more extreme prices to mind. As a result, expectations for 'prices in general' were higher than expectations for 'inflation.' Bruine de Bruin and Fischhoff point to four conditions that made such transdisciplinary research possible: having a shared research goal, which neither discipline could achieve on its own; finding common ground in shared methodology; sharing effort throughout, with common language and sense of ownership; and gaining mutual benefit from both the research process and its products. "Successful collaborations across fields can be done. You need to find willing partners, and create trusted partnerships," Fischhoff said.
"For people interested in decision research, you should look for work that combines both psychology and economics, because neither can provide the complete picture." The research was supported by grants from the U.S. National Science Foundation, the Swedish Foundation for the Humanities and Social Sciences and the European Union Seventh Framework Programme.
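The value of numeric elicitation can be illustrated with a toy calculation: if different respondents attach very different probabilities to the same verbal quantifier, the phrase carries far less information than a direct numeric answer such as "70 percent." A minimal sketch; the interpretation values below are hypothetical, chosen only to illustrate the ambiguity the researchers describe:

```python
# Hypothetical numeric interpretations that different respondents might
# attach to the same verbal quantifier (illustrative values only).
interpretations = {
    "almost certain": [0.90, 0.95, 0.99],
    "very likely":    [0.60, 0.80, 0.95],
    "probable":       [0.45, 0.60, 0.70],
    "unlikely":       [0.05, 0.20, 0.35],
}

for phrase, values in interpretations.items():
    spread = max(values) - min(values)
    print(f"{phrase!r}: interpretations span {spread:.0%}")

# A direct numeric elicitation ("What is the percent chance ...?") has no
# such spread: an answer of 70 percent means the same thing to everyone.
```

The wide spans are the ambiguity at issue: two teens who both say "very likely" may mean probabilities 35 percentage points apart, while two teens who both say "70 percent" mean the same thing.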


DENVER, CO--(Marketwired - April 20, 2017) - Only 13.8% of American Indians have a college degree. The American Indian College Fund is changing that. American Indian students know an education will change their lives and communities by giving them knowledge and confidence to defend their rights and amplify their voices, as demonstrated in the recent Standing Rock protests.

The American Indian College Fund Flame of Hope Gala is being held April 25, 2017 at Gotham Hall, 1356 Broadway, New York City to raise money to increase the number of American Indians with college degrees. The event kicks off with a cocktail reception featuring an art exhibit by students from the prestigious Institute of American Indian Arts from 6:30-7:30 p.m. Dinner and entertainment follow from 7:30-10:00 p.m.

Attendees will enjoy headline entertainment by the Indigo Girls, the American folk rock duo consisting of Amy Ray and Emily Saliers. The Indigo Girls released their critically acclaimed eponymous album in 1989. It remained on the Billboard 200 chart for 35 weeks and earned double platinum status; the duo received a Grammy nomination for "Best New Artist," and the album won "Best Contemporary Folk Recording." The duo became folk icons almost overnight and have since released 14 albums (three platinum and three gold) and received six Grammy nominations. In addition to their musical career, the Indigo Girls support numerous social causes.

Speakers include four American Indian College Fund scholars, all among the 13.8% of American Indians with college degrees. Each has already demonstrated promise in their field. One has developed and shared research at a global forum in China. A second was invited to the White House to discuss computer coding. A third has been identified as a rising star at one of the largest companies in the world. The fourth has completed a Ph.D. in engineering and public policy from Carnegie Mellon University.
The evening will include the opportunity to meet the Native artist who created the original beaded artwork featured in the College Fund's public service announcement campaign. Marcus Amerman, a member of the Choctaw Nation of Oklahoma and an American Indian college graduate, will be present to discuss his beaded portraits, which feature more than 18,000 hand-stitched beads. The campaign, created with Amerman and advertising agency Wieden+Kennedy, includes the College Fund's new tagline "Education is the Answer."

To purchase your ticket or table, please visit http://collegefund.org/events/ or contact Hannah Urano at hurano@collegefund.org or call 303-426-8900. Journalists: To discuss interview opportunities with the Indigo Girls or students, please contact Dina Horwedel at the American Indian College Fund at 303-430-5350.

Founded in 1989, the American Indian College Fund has been the nation's largest charity supporting Native higher education for more than 25 years. The College Fund believes "Education is the Answer" and has provided more than 100,000 scholarships since its inception, an average of 6,000 scholarships per year, to American Indian students. The College Fund also supports a variety of academic and support programs at the nation's 34 accredited tribal colleges and universities, which are located on or near Indian reservations, ensuring students have the tools to graduate and succeed in their careers. The College Fund consistently receives top ratings from independent charity evaluators. For more information about the American Indian College Fund, please visit www.collegefund.org.


News Article | May 2, 2017
Site: www.eurekalert.org

A study of how policies restricting pharmaceutical promotion to physicians affect medication prescribing found that physicians in academic medical centers (AMCs) prescribed fewer of the promoted drugs, and more non-promoted drugs in the same drug classes, following policy changes to restrict marketing activities at those medical centers. The analysis encompassed 16.1 million prescriptions; while the decline observed was modest in percentage terms, proportionally small changes can represent thousands of prescriptions. The study was supported in part by a contract from the National Institute of Mental Health (NIMH), part of the National Institutes of Health. The paper reporting these results appears in the May 2 issue of JAMA, which is devoted to conflict of interest issues.

It is common for pharmaceutical companies to promote medications to physicians during sales visits and events that may involve gifts such as meals and free samples, a practice called "detailing." In recent years, some AMCs in the United States have instituted policies restricting detailing, but little is known about what effect, if any, such policies have had on physicians' prescribing practices.

"There has long been concern that drug marketing to physicians might influence their prescribing, including--and maybe especially--for psychiatric drugs," says Michael Schoenbaum, Ph.D., Senior Advisor for Mental Health Services, Epidemiology, and Economics, Division of Services and Intervention Research at NIMH and a coauthor of the paper. "Many medical schools have adopted policies to limit such marketing, and this study is one of the first to document what effect these policies actually have. Important next steps include assessing the economic impact of these policies and whether they affect patients' clinical outcomes."
Ian Larkin, Ph.D., at the University of California, Los Angeles, and George Loewenstein, Ph.D., at Carnegie Mellon University, Pittsburgh, led a multi-center team of researchers in a study examining the effects of AMC policies limiting pharmaceutical representative detailing on prescribing. The team looked at prescribing by physicians affiliated with 19 academic medical centers in five states. These states--California, Illinois, Massachusetts, New York, and Pennsylvania--have the largest numbers of AMC-affiliated physicians and in 2015 accounted for nearly 35 percent of all U.S. prescriptions. During the period of the study--January 2006 to June 2012--these 19 centers instituted policies restricting detailing. The study compared prescribing by 2,126 physicians affiliated with these centers with that of 24,593 physicians with similar backgrounds and prescribing habits who were selected from a database of physicians in the same states provided by a large pharmacy benefits manager.

The analysis in this study encompassed eight major drug classes: lipid-lowering drugs, gastroesophageal reflux disease drugs, antidiabetic agents, antihypertensive drugs, sleep aids, attention deficit hyperactivity disorder drugs, antidepressant drugs, and antipsychotic drugs. The study authors reported changes in prescribing in terms of changes in the market share of detailed and nondetailed drugs: market share represents the share of prescriptions for a given drug within a drug class. The mean market share of detailed drugs (across all the drug classes) in AMCs prior to changes in policy was 19.3 percent. Over the period of the study, the market share of detailed drugs prescribed by AMC physicians declined by 1.67 percentage points, an 8.7 percent decrease relative to the level prior to policy changes. The market share of prescribed nondetailed drugs increased by 0.84 percentage points, a relative 5.6 percent increase.
The changes were statistically significant for six of the eight drug classes and for all drugs in the aggregate. The decline in prescribing of detailed drugs among AMC physicians was in contrast to a slight decline in prescriptions of detailed drugs among the comparison group of physicians over the same time period.

The magnitude of changes differed across AMCs. The decline in prescriptions of detailed drugs was greatest at centers with the most stringent policies, such as bans on salespeople in patient care areas, requirements for salesperson registration and training, and penalties for salespeople and physicians for violating the policies. In 8 of 11 AMCs with more stringent policies, the changes in prescribing were significant; in only 1 of 8 AMCs with more limited measures were the changes significant. Additional analysis showed that the changes in prescribing were evident whether or not detailed drugs for which a generic version became available during the study were included in the data. Also, because AMCs instituted policy changes at different times during the study period, the authors compared prescribing during equivalent stretches of time (up to three years) immediately before and after each center's policy had changed.

About the National Institute of Mental Health (NIMH): The mission of the NIMH is to transform the understanding and treatment of mental illnesses through basic and clinical research, paving the way for prevention, recovery and cure. For more information, visit the NIMH website.

About the National Institutes of Health (NIH): NIH, the nation's medical research agency, includes 27 Institutes and Centers and is a component of the U.S. Department of Health and Human Services. NIH is the primary federal agency conducting and supporting basic, clinical, and translational medical research, and is investigating the causes, treatments, and cures for both common and rare diseases.
For more information about NIH and its programs, visit the NIH website.
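The relative changes reported above follow directly from the percentage-point changes and the pre-policy baselines, and can be checked with simple arithmetic. A quick sketch using only the figures quoted in the article; the ~15 percent baseline for nondetailed drugs is inferred from the reported numbers, not stated in the article:

```python
# Detailed drugs: pre-policy baseline market share and the decline.
baseline_detailed = 19.3   # percent, mean market share before policy changes
decline_pp = 1.67          # percentage-point decline over the study period

# Relative change = percentage-point change / baseline level.
relative_decrease = decline_pp / baseline_detailed
print(f"relative decrease for detailed drugs: {relative_decrease:.1%}")  # 8.7%

# Nondetailed drugs: the article gives +0.84 percentage points and a
# relative +5.6%, which implies a pre-policy baseline of 0.84 / 0.056.
implied_baseline_nondetailed = 0.84 / 0.056
print(f"implied nondetailed baseline: {implied_baseline_nondetailed:.1f}%")  # 15.0%
```

The two reported figures are consistent: 1.67 points off a 19.3 percent base is an 8.7 percent relative decline, and a 5.6 percent relative rise of 0.84 points implies a nondetailed baseline of about 15 percent.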


News Article | April 27, 2017
Site: phys.org

Finding a safe and efficient power source is a critical step in the development of such ingestible electronic devices, says Giovanni Traverso, a research affiliate at MIT's Koch Institute for Integrative Cancer Research and a gastroenterologist and biomedical engineer at Brigham and Women's Hospital. "If we're proposing to have systems reside in the body for a long time, power becomes crucial," says Traverso, one of the senior authors of the study. "Having the ability to transmit power wirelessly opens up new possibilities as we start to approach this problem."

The new strategy, described in the April 27 issue of the journal Scientific Reports, is based on the wireless transfer of power from an antenna outside the body to another one inside the digestive tract. This method yields enough power to run sensors that could monitor heart rate, temperature, or levels of particular nutrients or gases in the stomach. "Right now we have no way of measuring things like core body temperature or concentration of micronutrients over an extended period of time, and with these devices you could start to do that kind of thing," says Abubakar Abid, a former MIT graduate student who is the paper's first author.

Robert Langer, the David H. Koch Institute Professor at MIT, is also a senior author of the paper. Other authors are Koch Institute technical associates Taylor Bensel and Cody Cleveland, former Koch Institute research technician Lucas Booth, and Draper researchers Brian Smith and Jonathan O'Brien.

The research team has been working for several years on different types of ingestible electronics, including sensors that can monitor vital signs, and drug delivery vehicles that can remain in the digestive tract for weeks or months. To power these devices, the team has been exploring various options, including a galvanic cell that is powered by interactions with the acid of the stomach.
However, one drawback to using this type of battery cell is that the metal electrodes stop working over time. In their latest study, the team wanted to come up with a way to power their devices without using electrodes, allowing them to remain in the GI tract indefinitely.

The researchers first considered the possibility of using near-field transmission, that is, wireless energy transfer between two antennas over very small distances. This approach is now used for some cell phone chargers, but because the antennas have to be very close together, the researchers realized it would not work for transferring power over the distances they needed—about 5 to 10 centimeters. Instead, they decided to explore midfield transmission, which can transfer power across longer distances. Researchers at Stanford University have recently explored using this strategy to power pacemakers, but no one had tried using it for devices in the digestive tract.

Using this approach, the researchers were able to deliver 100 to 200 microwatts of power to their device, which is more than enough to power small electronics, Abid says. A temperature sensor that wirelessly transmits a temperature reading every 10 seconds would require about 30 microwatts, as would a video camera that takes 10 to 20 frames per second.

In a study conducted in pigs, the external antenna was able to transfer power over distances ranging from 2 to 10 centimeters, and the researchers found that the energy transfer caused no tissue damage. "We're able to efficiently send power from the transmitter antennas outside the body to antennas inside the body, and do it in a way that minimizes the radiation being absorbed by the tissue itself," Abid says.

Christopher Bettinger, an associate professor of materials science and biomedical engineering at Carnegie Mellon University, describes the study as a "great advancement" in the rapidly growing field of ingestible electronics.
"This is a classic problem with implantable devices: How do you power them? What they're doing with wireless power is a very nice approach," says Bettinger, who was not involved in the research.

For this study, the researchers used square antennas with 6.8-millimeter sides. The internal antenna has to be small enough that it can be swallowed, but the external antenna can be larger, which offers the possibility of generating larger amounts of energy. The external power source could be used either to continuously power the internal device or to charge it up, Traverso says. "It's really a proof-of-concept in establishing an alternative to batteries for the powering of devices in the GI tract," he says.

"This work, combined with exciting advancements in subthreshold electronics, low-power systems-on-a-chip, and novel packaging miniaturization, can enable many sensing, monitoring, and even stimulation or actuation applications," Smith says.

The researchers are continuing to explore different ways to power devices in the GI tract, and they hope that some of their devices will be ready for human testing within about five years. "We're developing a whole series of other devices that can stay in the stomach for a long time, and looking at different timescales of how long we want to keep them in," Traverso says. "I suspect that depending on the different applications, some methods of powering them may be better suited than others."

More information: Abubakar Abid et al, Wireless Power Transfer to Millimeter-Sized Gastrointestinal Electronics Validated in a Swine Model, Scientific Reports (2017). DOI: 10.1038/srep46745
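The power budget quoted in the article can be sanity-checked with simple arithmetic: delivered power of 100 to 200 microwatts against device loads of roughly 30 microwatts each. A back-of-the-envelope sketch using only the figures quoted above:

```python
# Figures quoted in the article, all in microwatts.
delivered_min, delivered_max = 100, 200  # midfield power delivered in vivo
temp_sensor = 30  # temperature reading transmitted every 10 seconds
camera = 30       # video camera at 10-20 frames per second

# Worst-case headroom: lowest delivered power divided by each load.
for name, load in [("temperature sensor", temp_sensor), ("camera", camera)]:
    margin = delivered_min / load
    print(f"{name}: worst-case power margin {margin:.1f}x")

# Even at the low end of the delivered range, each load runs with more
# than 3x headroom, consistent with the "more than enough" claim above.
```

This also makes clear why battery-free operation is attractive here: the margin comes entirely from the external transmitter, so the ingestible device needs no stored energy at all.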


News Article | May 8, 2017
Site: www.businesswire.com

BOSTON--(BUSINESS WIRE)--GE (NYSE: GE) is developing its Center for Additive Technology Advancement (CATA) in Pittsburgh, Pennsylvania into an externally focused “Customer Experience Center” (CEC) to accelerate the use of additive manufacturing with GE customers across several industries. With this transition, the $39 million Pittsburgh technology center, opened in April 2016 to drive additive manufacturing within GE industrial operations, now joins a global network of CECs under the growing umbrella of GE Additive.

Last month, GE Additive announced the creation of a CEC in Munich, Germany, to allow current and potential customers to experience first-hand the design and production of components using additive manufacturing. Like the Munich site, the Pittsburgh site will operate additive machines from Concept Laser of Germany and Arcam EBM of Sweden – both leading additive providers in which GE has majority ownership. The Pittsburgh center will augment the existing Arcam and Concept Laser operations in the United States, including Arcam’s Orthopedic Center of Excellence in Shelton, Connecticut. The additive machines at the Pittsburgh CEC will be enhanced by GE’s cloud-based Predix operating platform, which enables industrial-scale analytics, and by GE Edge devices, which provide real-time control and monitoring.

While the 50 employees at the Pittsburgh center will continue to support GE’s industrial businesses with additive initiatives, they will expand their focus to support current and potential Concept Laser and Arcam customers in additive design and production. Customers will benefit from hands-on training and instruction at the facility, covering additive design, machine operations and support. “We are thrilled to expand our concept of customer centers in the United States with a facility already at the leading edge of additive technology development,” said Robert Griggs, general manager of the Customer Experience Centers for GE Additive.
Jennifer Cipolla, general manager of the Pittsburgh center, will continue to lead the facility as a new CEC. The Pittsburgh CEC, near the Pittsburgh airport, is convenient to Carnegie Mellon University, the University of Pittsburgh, and Robert Morris University, all in Pittsburgh, and Penn State University at State College, Pennsylvania. These institutions are already engaged in additive engineering and manufacturing processes.

Additive manufacturing involves taking a digital design from computer aided design (CAD) software, and melting and fusing together very fine metal powder layer-by-layer, using a laser or an electron beam as the energy source. Additive components are typically lighter and more durable than traditional forged parts because they require less welding and machining. Since additive parts are essentially “grown” from the ground up, they generate far less scrap material. Freed of traditional manufacturing restrictions, additive manufacturing dramatically expands the design possibilities for engineers.

For many years, GE has been a leading end user and innovator in the additive manufacturing space. GE has invested approximately $1.5 billion in manufacturing and additive technologies at GE’s Global Research Center (GRC), developed additive applications across six GE businesses, created new services applications across the company, and earned 346 patents in powder metals used for the additive process. In 2016, the company established GE Additive to become a leading supplier of additive technology and materials for industries worldwide.

GE Additive, led by GE Vice Chairman David Joyce, is part of GE, the world’s Digital Industrial Company, transforming industry with software-defined machines and solutions that are connected, responsive and predictive. GE is organized around a global exchange of knowledge, the "GE Store," through which each business shares and accesses the same technology, markets, structure and intellect.
Visit GE Additive at www.geadditive.com


Michael Mao, co-founder of YUNQI Partners, said that YUNQI Partners was an angel round investor in IceKredit. He has had a good relationship with IceKredit's founder, Lingyun Gu, since Gu worked as an entrepreneur-in-residence (EIR) at IDG. After founding YUNQI Partners, Mao has focused on innovation in financial technology and related fields, and has made IceKredit one of YUNQI Partners' portfolio companies in financial technology. YUNQI Partners is confident in Dr. Lingyun Gu and his team, is optimistic about the prospects of Gu's approach of using big data to improve the effectiveness and efficiency of risk control at credit institutions, and will continue to support IceKredit.

IceKredit's products include an Individual Credit Evaluation System and an SME Credit Evaluation System. The Individual Credit Evaluation System consists of an anti-fraud engine, a personal credit profile and a function for recovering missing customer contact information. The SME Credit Evaluation System consists of multi-level SME credit assessment and a comprehensive SME profile. IceKredit also provides whole-process, online loan management solutions for banks, P2P platforms, consumer finance companies and micro lending companies.

Lingyun Gu said that after this round of funding, IceKredit will continue to focus on the following three directions: 2. grasping the window period in which traditional financial institutions must meet changing credit risk control requirements, providing enterprise credit assessment for more joint-stock banks and city commercial banks, and helping financial institutions extend their businesses; 3. providing whole-process loan management services for consumer finance companies, small loan companies and other financing companies, from customer acquisition and pre-loan risk control to post-loan risk management. IceKredit already has hundreds of paying customers.
Lingyun Gu believes that Chinese financial enterprises have a misconception about "trust," hoping to control every process from front-end data collection and traffic acquisition to anti-fraud, risk control and back-end funding. In fact, with a professional third-party institution, the whole process becomes more economical and efficient. "Constrained by their own operating mechanisms, traditional financial institutions respond slowly to market demands. It usually takes them several months to go from data acquisition to modeling, during which time the market has undergone new changes. Because of this, the window period for providing risk management model services to traditional financial institutions will always exist," Lingyun Gu explained.

Currently, IceKredit has a team of 103 members and offices in Beijing, Shanghai, Los Angeles, Nanjing, Changzhou and Chengdu. Its founder, Lingyun Gu, who focuses on machine learning and data mining, received his master's and Ph.D. degrees in Computer Science from Carnegie Mellon University. While working in the US, he developed the first four generations of risk control models for ZestFinance and served as co-founder and chief risk control officer of Turbo Financial Group.

YUNQI Partners is an early-stage venture capital fund with more than US$250 million under management, established in July 2014. It focuses on IoT/robotics, big data/cloud computing and fintech. Awards received: 2016 Master List's China Top 100 Venture Capital Investment Organizations, 2016 Investment Organization List's China Annual Top 50 Foreign Capital Investment Organizations, 2016 Investment Organization List's Annual Top 10 Newly-developing VC Investment Organizations, 2016 Zero2IPO Group China Top 100 Venture Capital Investment Organizations.

To view the original version on PR Newswire, visit: http://www.prnewswire.com/news-releases/icekredit-completes-110-million-rmb-series-a-round-following-angel-round-investment-from-yunqi-partners-300451478.html


News Article | April 27, 2017
Site: www.prweb.com

The University of San Francisco (USF) today announced the lineup of speakers and honorary degree recipients at the university’s eight commencement ceremonies, taking place Thursday, May 18 through Saturday, May 20. More than 2,300 graduate and undergraduate students will participate in the ceremonies at St. Ignatius Church on USF’s main campus. Events will also be live-streamed via the university website (http://www.usfca.edu).

Hailing from the front lines of real estate, medicine, academia, politics and the Catholic Church, the commencement speakers and honorary degree recipients include renowned director and playwright Carey Perloff of San Francisco’s American Conservatory Theater and Maureen Orth, an award-winning journalist and education leader, who will receive honorary degrees and address USF’s College of Arts and Sciences, and California Attorney General Xavier Becerra, who will speak to USF School of Law graduates.

All ceremonies are invitation only. Journalists interested in covering the commencement events at USF must register by contacting Jennifer Kriz at (415) 422-2697 or jkriz(at)usfca(dot)edu.

Honorary Degree Recipient and Commencement Speaker: The Most Reverend Robert W. McElroy, Catholic Bishop of San Diego

Named the sixth bishop of San Diego in 2015, Bishop Robert McElroy has served in parishes throughout California, and was appointed auxiliary bishop of San Francisco (2010-2015) by Pope Benedict XVI. In 2008, he served as the Lo Schiavo Chair in Catholic Social Thought at USF. McElroy is now the vice president of the California Catholic Conference and serves at the national conference of bishops. He is the author of two books: “The Search for an American Public Theology” and “Morality and American Foreign Policy.” A native San Franciscan, McElroy received his bachelor’s degree from Harvard College, and his master’s degree from Stanford University, both in American history.
He also holds a licentiate in theology from the Jesuit School of Theology at Berkeley, a doctorate in moral theology from the Gregorian University in Rome, and a doctorate in political science from Stanford.

Friday, May 19, 9 a.m. College of Arts and Sciences, undergraduate students for humanities and sciences

Honorary Degree Recipient and Commencement Speaker: Karl W. Eikenberry, Ambassador and Lieutenant General (Retired), U.S. Army

Karl W. Eikenberry, who served as the U.S. ambassador to Afghanistan from April 2009 to July 2011, is currently the Oksenberg-Rohlen Fellow and director of the U.S. Asia Security Initiative at the Walter H. Shorenstein Asia-Pacific Research Center. He is also a professor and faculty member at Stanford University’s FSI Center on Democracy, Development and the Rule of Law (CDDRL), the Center for International Security and Cooperation (CISAC) and The Europe Center. In addition to his work at Stanford, Eikenberry is a fellow of the American Academy of Arts and Sciences, where he co-directs the academy's multi-year project on civil wars, violence and international responses. He serves on multiple boards, including The Asia Foundation, the International Institute for Strategic Studies, the National Committee on American Foreign Policy, Carnegie Mellon University’s Center for International Relations and Politics, and the Turquoise Mountain Foundation, which aims to regenerate Afghanistan's traditional arts and historic areas. He also is a member of the Council on Foreign Relations, the American Academy of Diplomacy and the Council of American Ambassadors. Eikenberry is a graduate of the United States Military Academy and received master’s degrees from Harvard and Stanford universities.
Friday, May 19, noon College of Arts and Sciences, undergraduate students for arts and social sciences

Carey Perloff, an award-winning director and playwright, is celebrating her 25th and final year as artistic director of A.C.T., San Francisco’s largest theater company. Known for her innovative productions of classics and new works, Perloff has directed more than 50 productions at A.C.T. Perloff’s play Kinship premiered at the Théâtre de Paris in October 2014. Prior to A.C.T., Perloff was the artistic director of Classic Stage Company in New York and served on the faculty of the Tisch School of the Arts at New York University. Her memoir, “Beautiful Chaos: A Life in the Theater,” about her time at A.C.T., was published in 2015 and was excerpted by American Theatre Magazine. A recipient of France’s Chevalier de l’Ordre des Arts et des Lettres and the National Corporate Theatre Fund’s 2007 Artistic Achievement Award, Perloff received a B.A. Phi Beta Kappa in classics and comparative literature from Stanford University and was a Fulbright Fellow at the University of Oxford.

Friday, May 19, 3 p.m. College of Arts and Sciences, graduate students

Maureen Orth is an award-winning journalist, a special correspondent for Vanity Fair, and the founder of the Marina Orth Foundation, a nonprofit that promotes advanced learning in technology, English and leadership for more than 8,000 students in Colombia. As one of the first female writers at Newsweek in the early 1970s, Orth went on to publish profiles in Vanity Fair on heads of state, business leaders and celebrities, as well as acclaimed investigative reports. She has been a contributing editor at Vogue, a network correspondent for NBC News, a senior editor for New York and New West magazines and a columnist for New York Woman. She is also a contributor to The New York Times, The Washington Post and The Los Angeles Times.
For her commitment to the education and success of the youth of Colombia, Orth received the McCall-Pierpaoli Humanitarian Award from Refugees International in 2015. Orth has also published two books, the best-selling “Vulgar Favors,” about the murder of Gianni Versace, and “The Importance of Being Famous: Behind the Scenes of the Celebrity Industrial Complex.” Orth attended San Francisco College for Women/Lone Mountain for two years and completed her bachelor’s degree in political science at the University of California, Berkeley. She earned a master’s degree in journalism and documentary film at the University of California, Los Angeles. Orth’s late husband, Tim Russert, received an honorary degree from USF in 2001.

Friday, May 19, 6 p.m. School of Nursing and Health Professions

Honorary Degree Recipient and Commencement Speaker: Rev. Jon D. Fuller, M.D., S.J., Physician, Center for Infectious Diseases and Associate Professor, Boston University School of Medicine

Founding president of the National Catholic AIDS Network, Rev. Dr. Jon Fuller is the attending physician for the Center for Infectious Diseases in Boston and manages Boston Medical Center’s program for HIV/AIDS care. He also coordinates the Research Thursday AIDS Conference series. As a Jesuit priest, Fuller has focused on how HIV prevention approaches can be analyzed and supported from the context of Catholic moral theology, and serves as a consultant to international Catholic development and relief agencies on HIV-related policies. He teaches at Boston University School of Medicine, Weston Jesuit School of Theology and Harvard Divinity School. Fuller attended medical school at the University of California, San Diego, and completed his residency training in family medicine at the University of California, San Francisco. He served on the University of San Francisco Board of Trustees from 2001 to 2010.

Saturday, May 20, 9 a.m.
School of Law

Prior to being elected as California’s attorney general this year, Xavier Becerra was a member of the United States House of Representatives for California's 34th congressional district, representing downtown Los Angeles in Congress from 1993 to 2017. Becerra also served as a deputy attorney general in the California Department of Justice from 1987 to 1990, and in the California State Assembly from 1990 to 1992. Born in Sacramento, Becerra is the son of working-class immigrants from Jalisco, Mexico. He attended the University of Salamanca in Salamanca, Spain from 1978 to 1979, and earned his B.A. in economics from Stanford University. He was the first in his family to graduate from college. Becerra received his J.D. from Stanford Law School in 1980.

Saturday, May 20, noon School of Management, undergraduate students in business administration

Honorary Degree Recipient and Commencement Speaker: Regina Benjamin, MD, MBA, Former U.S. Surgeon General

Dr. Regina M. Benjamin served as the 18th United States surgeon general, appointed by President Barack Obama in July 2009. As surgeon general, Benjamin oversaw the operational command of 6,700 uniformed public health officers who promote and protect the health of Americans in locations around the world. She is the first chair of the National Prevention Council and a former associate dean for rural health at the University of South Alabama College of Medicine. She is also the past chair of the U.S. Federation of State Medical Boards. In 1995, Benjamin was the first physician under age 40 and the first African-American woman to be elected to the American Medical Association Board of Trustees. Prior to becoming surgeon general, Benjamin served patients at the rural health clinic she founded in Bayou La Batre, Alabama, keeping the clinic in operation despite damage inflicted by Hurricanes Georges (1998) and Katrina (2005) and a devastating fire (2006). Benjamin earned a B.S.
in chemistry from Xavier University of Louisiana, an M.D. degree from the University of Alabama at Birmingham and an M.B.A. from Tulane University. She attended Morehouse School of Medicine and completed her family medicine residency in Macon, Georgia. Saturday, May 20, 3 p.m. School of Management, graduate and professional students, Masagung Graduate School of Management Honorary Degree Recipient and Commencement Speaker: Mark Buell, President, San Francisco Recreation and Park Commission, Class of 1964 Mark Buell is a graduate of USF, a native San Franciscan and a decorated Vietnam veteran. Intrepid in the world of politics and philanthropy, Buell has spent 35 years in public and private real estate development. Buell was San Francisco’s first director of economic development under Joseph Alioto and later served as the first director of the Emeryville Redevelopment Agency from 1977 to 1985. He was a founding member and first president of the California Association for Local Economic Development and has served on the San Francisco Public Utilities Commission under Dianne Feinstein. Buell is active on the boards of many nonprofit organizations including the Golden Gate National Parks Conservancy, the San Francisco Conservation Corps, the Bolinas Museum and the Chez Panisse Foundation. About the University of San Francisco The University of San Francisco is located in the heart of one of the world’s most innovative and diverse cities and is home to a vibrant academic community of students and faculty who achieve excellence in their fields. Its diverse student body enjoys direct access to faculty, small classes, and outstanding opportunities in the city itself. USF is San Francisco’s first university, and its Jesuit Catholic mission helps ignite a student’s passion for social justice and a desire to “Change the World From Here.” For more information, visit usfca.edu


SHANGHAI, May 4, 2017 /PRNewswire/ -- IceKredit is an independent credit evaluation institution for small and micro enterprises based on big data. Recently, its founder and CEO Lingyun Gu announced that the company had closed a 110 million RMB Series A round at the end of 2016, led by China Creation Ventures (formed by the original KPCB China team) and joined by Lingfeng Capital, alongside all existing shareholders. Previously, IceKredit received a 20 million RMB angel investment from YUNQI Partners, FreeS Fund and Will Hunting Capital, as well as a 20 million RMB Series Pre-A round from a listed company and Lujiazui fund. Michael Mao, co-founder of YUNQI Partners, said that YUNQI Partners was an angel-round investor in IceKredit. He has had a good relationship with IceKredit's founder, Lingyun Gu, since Gu worked as an entrepreneur-in-residence (EIR) at IDG. After founding YUNQI Partners, Mao has focused on innovation in financial technology and related fields, and has made IceKredit one of YUNQI Partners' portfolio companies in fintech. YUNQI Partners is confident in Dr. Lingyun Gu and his team, and is optimistic about the prospects of using big data to improve the risk-control effectiveness and efficiency of credit institutions. Accordingly, YUNQI Partners will continue to support IceKredit. IceKredit's products include an Individual Credit Evaluation System and an SME Credit Evaluation System. The Individual Credit Evaluation System consists of an anti-fraud engine, a personal credit portrait and a function for restoring missing customer contact information. The SME Credit Evaluation System consists of multi-level SME credit assessment and an all-around SME portrait. IceKredit also provides whole-process, online loan management solutions for banks, P2P platforms, consumer finance companies and micro-lending companies. 
Lingyun Gu said that after this round of funding, IceKredit will continue to focus on the following three directions: 2. grasping the window period during which traditional financial institutions must meet changing credit risk-control requirements, providing enterprise credit assessment for more joint-stock banks and city commercial banks, and helping financial institutions extend their businesses; 3. providing whole-process loan management services for consumer finance companies, small loan companies and other financing companies, from customer acquisition and pre-loan risk control to post-loan risk management. IceKredit has hundreds of paying customers. Lingyun Gu believes that Chinese financial enterprises have misconceptions about "trust," hoping to control the entire process themselves, from front-end data collection and traffic acquisition to anti-fraud, risk control and back-end funding. In fact, with a professional third-party institution, the whole process is more economical and efficient. "Constrained by their own operating mechanisms, traditional financial institutions respond slowly to market demands. It usually takes them several months to go from data acquisition to modeling, during which time the market has undergone new changes. In this way, the window period for traditional financial institutions to provide risk-management model services will always exist," Lingyun Gu explained. Currently, IceKredit has a team of 103 members and offices in Beijing, Shanghai, Los Angeles, Nanjing, Changzhou and Chengdu. Its founder, Lingyun Gu, who focuses on machine learning and data mining, received his master's and PhD degrees in computer science from Carnegie Mellon University. While working in the US, he developed the first four generations of risk-control models for ZestFinance and served as co-founder and chief risk-control officer of Turbo Financial Group. YUNQI Partners is an early-stage venture capital fund with more than US$250 million in assets under management, established in July 2014. 
Yunqi focuses on IoT/robotics, big data/cloud computing and fintech. Awards received: 2016 Master List's China Top 100 Venture Capital Investment Organizations, 2016 Investment Organization List's China Annual Top 50 Foreign Capital Investment Organizations, 2016 Investment Organization's List Annual Top 10 Newly-developing VC Investment Organizations, 2016 Zero2IPO Group China Top 100 Venture Capital Investment Organizations. To view the original version on PR Newswire, visit: http://www.prnewswire.com/news-releases/icekredit-completes-110-million-rmb-series-a-round-following-angel-round-investment-from-yunqi-partners-300451478.html


News Article | May 2, 2017
Site: www.prnewswire.com

"We are tremendously proud of our Carnegie Mellon University alumni nominees who, through their hard work and success, serve as role models for students everywhere," said CMU President Subra Suresh. "We also are proud to partner with the Tony Awards to recognize other important role models — our nation's teachers — who provide arts education to young people. Through our Excellence in Theatre Education Award, we honor their hard work and dedication."
Denée Benton, "Natasha, Pierre and the Great Comet of 1812," Best Performance by an Actress in a Leading Role in a Musical; 2014 graduate of CMU's School of Drama
Christian Borle, "Falsettos," Best Performance by an Actor in a Leading Role in a Musical; 1995 graduate of CMU's School of Drama
Josh Groban, "Natasha, Pierre and the Great Comet of 1812," Best Performance by an Actor in a Leading Role in a Musical; attended CMU's School of Drama, 1999-2000
The Excellence in Theatre Education Award continues to gain significant attention, generating hundreds of nominations from across the country again this year. This annual honor recognizes theatre educators in the U.S. who demonstrate monumental impact on the lives of students and who embody the highest standards of the profession. A panel of judges comprising representatives of the American Theatre Wing, The Broadway League, CMU and other leaders from the theatre industry recently selected the finalists and a winner, who will be recognized at the 71st Annual Tony Awards on Sunday, June 11. For more information about the nominees and the Excellence in Theatre Education Award, visit http://cmu.li/isAh30bmmTS. To view the original version on PR Newswire, visit: http://www.prnewswire.com/news-releases/carnegie-mellon-drama-alumni-nominated-for-2017-tony-awards-300449682.html


News Article | April 27, 2017
Site: www.businesswire.com

NEW YORK--(BUSINESS WIRE)--M&A advisory firm AdMedia Partners is pleased to announce that it acted as exclusive financial advisor to Deeplocal, an innovation studio focused on product invention, design and engineering, in its acquisition by WPP Digital, the digital investment arm of global communications company WPP. Deeplocal, based in Pittsburgh, employs over 50 people with diverse skill sets including robotics, hardware development, electrical engineering, software development, industrial design, strategy and creative. The studio solves business challenges for clients with strategy-led creative inventions and marketing campaigns and then rapidly designs, engineers and builds working prototypes and products. This work is all done in-house. Clients include Google, Netflix, Airbnb, Inc., Lyft and American Eagle Outfitters. Deeplocal was founded in 2006 as a spin-off of Carnegie Mellon University. The acquisition continues WPP’s strategy of focusing on three key areas that differentiate its offerings to clients: technology, data and content. Deeplocal creates and builds experiences that help brands tell their stories and connect with their audiences in new, unexpected ways. They work in both digital and physical disciplines, though their projects find all kinds of ways to dissolve the boundaries between the two. Everything they do is rooted in culture and faithful to strategic insights about audiences. They want to get people talking about something amazing they’ve experienced, not about an advertising campaign. The team comprises creatives, marketers, strategists, technologists, engineers (mechanical, electrical, robotic and software) and artists. Whether they are making robots, software or socks, they do the vast majority of production in house. This allows Deeplocal to be nimble in their process and give clients excellent visibility into each project, from start to finish. To learn more, visit www.deeplocal.com. 
WPP is made up of leading companies in: advertising, media investment management, data investment management, public relations and public affairs, branding & identity, healthcare communications, direct, digital, promotion and relationship marketing, and specialist communications. WPP is one of the world’s largest communications services groups employing 200,000 people working in over 3,000 offices in 113 countries. To learn more, visit www.wpp.com. Founded in 1990, AdMedia is a leading M&A advisory firm serving the marketing services, advertising, marketing technology, media and information sectors. AdMedia has completed over 250 transactions for clients valued in excess of $12 billion. For more information, visit www.admediapartners.com.


News Article | May 1, 2017
Site: news.europawire.eu

LONDON, 01-May-2017 — /EuropaWire/ — WPP Digital, the digital investment arm of WPP, announces that it has acquired Deeplocal, Inc., (“Deeplocal”) an innovation studio focused on product invention, design, and engineering for clients’ marketing campaigns in the US. Deeplocal’s gross revenues were approximately US$12 million for the period ended December 31, 2016. Clients include Google, Netflix, Airbnb, Inc., Lyft and American Eagle Outfitters. Deeplocal is based in Pittsburgh and was founded in 2006 as a spin-off of Carnegie Mellon University. Deeplocal employs over 50 people with diverse skill sets including robotics, hardware development, electrical engineering, software development, industrial design, strategy, and creative. The studio solves business challenges for clients with strategy-led creative inventions and marketing campaigns and then rapidly designs, engineers, and builds working prototypes and products. This work is all done in-house. The acquisition continues WPP’s strategy of focusing on three key areas that differentiate the Group’s offering to clients: technology, data and content. WPP’s digital assets include companies such as Acceleration (marketing technology consultancy), Cognifide (content management technology), Conexance (data cooperative), Salmon (e-commerce), and Hogarth (digital production technology). WPP also has investments in a number of innovative technology services companies such as Globant and Mutual Mobile, as well as ad technology companies such as AppNexus, comScore (data investment management), Domo, mySupermarket, Percolate and ScrollMotion. The Group has invested in digital content companies like Russell Simmons’ All Def Digital, Fullscreen, Indigenous Media, Imagina (a content rights and media company based in Spain), MRC, Mitú, Refinery29, VICE and Woven Digital. WPP’s roster of wholly owned digital agencies includes AKQA, Blue State Digital, F.biz, Mirum, POSSIBLE, VML and Wunderman. 
In 2015, the Group acquired a majority stake in Essence, the global digital agency and the largest independent buyer of digital media. In October WPP’s wholly-owned operating company Xaxis acquired Triad Retail Media, a leading digital retail media specialist. WPP’s digital revenues were over US$7.5 billion in 2016, representing 39% of the Group’s total revenues of US$19.4 billion. WPP has set a target of 40-45% of revenue to be derived from digital in the next four to five years. In North America, WPP companies (including associates) collectively generate revenues of US$7.3 billion and employ almost 28,000 people.


The 12 GeV CEBAF Upgrade is a $338 million, multi-year project to triple CEBAF's original operational energy for investigating the quark structure of the atom's nucleus. The majority of the upgrade is complete and will be finishing up in 2017. Scientists have been rigorously commissioning the experimental equipment to prepare for a new era of nuclear physics experiments. These activities have already led to the first scientific result, which comes from the Gluonic Excitations Experiment. GlueX conducts studies of the strong force, which glues matter together, through searches for hybrid mesons. According to Curtis Meyer, a professor of physics at Carnegie Mellon University and spokesperson for the GlueX experiment at Jefferson Lab, these hybrid mesons are built of the same stuff as ordinary protons and neutrons, which are quarks bound together by the "glue" of the strong force. But unlike ordinary mesons, the glue in hybrid mesons behaves differently. "The basic idea is that a meson is a quark and antiquark bound together, and our understanding is that the glue holds those together. And that glue manifests itself as a field between the quarks. A hybrid meson is one with that strong gluonic field being excited," Meyer explains. He says that producing these hybrid mesons allows nuclear physicists to study particles in which the strong gluonic field is contributing directly to their properties. The hybrid mesons may ultimately provide a window into how subatomic particles are built by the strong force, as well as "quark confinement" - why no quark has ever been found alone. "We hope to show that this "excited" gluonic field is an important constituent of matter. That's something that has not been observed in anything that we've seen so far. So, in some sense, it's a new type of hadronic matter that has not been observed," he says. In this first result, data were collected over a two-week period following equipment commissioning in the spring of 2016. 
The experiment produced two ordinary mesons called the neutral pion and the eta, and the production mechanisms of these two particles were carefully studied. The experiment takes advantage of the full-energy, 12 GeV electron beam produced by the CEBAF accelerator and delivered into the new Experimental Hall D complex. There, the 12 GeV beam is converted into a first-of-its-kind 9 GeV photon beam. "The photons go through our liquid hydrogen target. Some of them will interact with a proton in that target, something is exchanged between the photon and the proton, and something is kicked out - a meson," Meyer explains. "This publication looked at some of the simplest mesons you could kick out. But it's the same, basic production mechanism that most of our reactions will follow." The result was published as a Rapid Communication in the April issue of Physical Review C. It demonstrated that the linear polarization of the photon beam provides important information by ruling out possible meson production mechanisms. "It's not so much that the particles we created were interesting, but how they were produced: learning what reactions were important in making them," Meyer says. The next step for the collaboration is further analysis of data already collected and preparations for the next experimental run in the fall. "I'm sure that we've produced hybrid mesons already, we just don't have enough data to start looking for them yet," Meyer says. "There are a number of steps that we're going through in terms of understanding the detector and our analysis. We're doing the groundwork now, so that we'll have confidence that we understand things well enough that we can validate results we'll be getting in the future." "This new experimental facility - Hall D - was built by dedicated efforts of the Jefferson Lab staff and the GlueX collaboration," says Eugene Chudakov, Hall D group leader. 
"It is nice to see that all of the equipment, including complex particle detectors, is operating as planned, and the exciting scientific program has successfully begun." The 12 GeV CEBAF Upgrade project is in its last phase of work and is scheduled for completion in September. Other major experimental thrusts for the upgraded CEBAF include research that will enable the first snapshots of the 3D structure of protons and neutrons, detailed explorations of the internal dynamics and quark-gluon structure of nuclei, and tests of fundamental theories of matter. Explore further: Jefferson Lab accomplishes critical milestones toward completion of 12 GeV upgrade More information: H. Al Ghoul et al, "Measurement of the beam asymmetry Σ for π0 and η photoproduction on the proton at Eγ = 9 GeV," Physical Review C (2017). DOI: 10.1103/PhysRevC.95.042201
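For readers unfamiliar with the observable, the beam asymmetry Σ measured in this kind of experiment is a standard polarization observable; the conventional textbook definition (not stated in the article itself) compares the meson yields when the photon beam's linear polarization is perpendicular versus parallel to the meson production plane:

```latex
% Beam asymmetry for photoproduction with a linearly polarized photon beam.
% \sigma_\perp and \sigma_\parallel denote the cross sections with the
% polarization plane perpendicular / parallel to the production plane.
\Sigma = \frac{\sigma_\perp - \sigma_\parallel}{\sigma_\perp + \sigma_\parallel}
```

The measured value of Σ for neutral pion and eta production constrains which particle-exchange mechanisms dominate, which is the sense in which the polarized beam "rules out possible meson production mechanisms."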


News Article | May 5, 2017
Site: www.prnewswire.com

Caroline Dowling, president of CEC at Flex, said, "As our CEC cloud customer portfolio, capabilities and product solutions continue to expand, the team and I are thrilled to have an industry veteran of Kevin's caliber join us to help lead, grow and accelerate the ongoing development of innovative cloud solutions at Flex." Dr. Kevin Kettler is a seasoned industry expert with more than 25 years of experience working across the server, storage and compute market segments, among others. Dr. Kettler's career started at IBM, where he contributed to the early development of next-generation compute products. He later joined Dell, where he served as the corporate CTO, leading the global architecture and technology teams driving next-generation desktop, notebook, server, storage and networking designs. Most recently, Dr. Kettler was a full-time consultant in Qualcomm's new datacenter business unit. He holds several U.S. patents and is a published author on the subject of real-time/multimedia systems. Dr. Kevin Kettler earned his Bachelor of Science degree in electrical engineering from Lehigh University, and his master's and Ph.D. degrees in electrical engineering from Carnegie Mellon University. He has served as an Adjunct Professor of Electrical Engineering at the University of Texas, and has served on engineering advisory councils at the University of Texas and Carnegie Mellon University. Flex is the Sketch-to-Scale™ solutions provider that designs, manufactures and distributes Intelligent Products for a Connected World™. With approximately 200,000 professionals across 30 countries, Flex provides innovative design, engineering, manufacturing, real-time supply chain insight and logistics services to companies of all sizes in various industries and end-markets. For more information, visit flex.com or follow us on Twitter @flexintl. 
Flex – Live Smarter™ To view the original version on PR Newswire, visit:http://www.prnewswire.com/news-releases/dr-kevin-kettler-joins-flex-as-cto-of-cloud-business-unit-300452154.html


"Artificial intelligence and machine learning technologies are becoming core to next-generation service management and automation. There is now an even greater need for robust analytics across multiple data sources to uncover areas for automation and consistently measure and manage the quality of service delivery for both manual and automated workflows," said Stuart Evans, Distinguished Service Professor at Carnegie Mellon University. "I have been impressed with how Numerify's cloud-based solution fulfills this need for some of the largest and most demanding clients in the world." Numerify's product advancements are driven in part by two recent patent approvals from the United States Patent and Trademark Office around big data stores and metadata. Patent 9,619,535 allows for smart refreshes of data stores, in which reports and dashboards track actual customer usage of data stores. The smart data-store refreshes allow Numerify to support even challenging analytical scenarios in a cost-effective way without sacrificing data freshness. The second patent, 9,098,315, is a design for a metadata-driven web services connector that enables the extraction and processing of data from any cloud data source. Numerify also has 15 other patents pending in the areas of low-latency analytics of big data stores, analytics of workflow-based service-oriented processes, and metadata-driven approaches to cloud analytics. "With these technological enhancements, Numerify continues to expand its catalog of intellectual property and successfully augment the value of its analytical applications for innovative Fortune 2000 companies," said Srikant Gokulnatha, Co-Founder and Chief Product Officer of Numerify. "The company plans to continue leveraging innovative technologies around big-data analytics, machine learning, and AI in its mission to deliver end-to-end business analytics solutions that remain unmatched in the industry." 
Connect with Numerify at the upcoming ServiceNow® Knowledge17 conference, from May 7-11 at the Orange County Convention Center in Orlando, Florida. The company will also sponsor the Gartner IT Operations Strategies & Solutions Summit, which occurs during the same week at the Hilton in Orlando. About Numerify Numerify provides IT business analytics applications to leading organizations, including companies ranked in the top 5 across 10 major industries. The company's pre-built analytics solutions integrate data across various IT sources, contact center, and related business systems. Numerify's cloud applications rapidly deliver precise insights that help IT organizations optimize costs, increase innovation speed, and transform their service experience. Headquartered in San Jose, Calif., Numerify is backed by Lightspeed Venture Partners, Sequoia Capital, Tenaya Capital, Silicon Valley Bank, and Four Rivers Group. For more information, visit www.numerify.com or follow @numerify. To view the original version on PR Newswire, visit:http://www.prnewswire.com/news-releases/numerify-expands-core-capabilities-of-its-business-analytics-platform-and-secures-new-patents-300452049.html


News Article | April 28, 2017
Site: motherboard.vice.com

Thursday morning, Wired ran an article called "Meet the Brilliant Young Hackers Who'll Soon Shape the World." The article features seven hackers who are indeed brilliant, and some of them might change the world; all seven of them, though, are young men. To suggest that any one small group of people, one gender, one (or a few) races will "shape the world" isn't just offensive, it's objectively incorrect (Wired has since changed its headline). The story struck a well-worn chord—women are chronically underrepresented in these sorts of lists, on technology panels, at conferences, in technology in general. Soon after we saw the article, a dozen names of women hackers popped into our heads. Our first instinct was to make a list featuring some of their work. And that would've been easy—we quickly came up with dozens of hackers we'd like to feature. We spoke to Loren Maggiore, an 18-year-old who worked at Trail of Bits, one of the most prominent security research groups in the world. We spoke to CyFi, a 16-year-old who found an iOS, Android, and Blackberry zero day before she turned 10 and cofounded r00tz Asylum, the country's biggest hacking conference for kids. We spoke to Kaya Thomas, a 21-year-old who created an iOS directory of roughly 630 children's and young adult books with people of color characters written by people of color. We spoke to several other young hackers too. And we continue to get emails, encrypted Signal and Wickr messages, texts, and calls from people telling us that there's just one more person we should talk to; one more talented person whose work deserves to be featured. But then we spoke to Safia Abdalla, a 20-year-old open source science programmer who raised something that had already been nagging at us: The people who will shape the world will never fit on a list. "There's a culture of creating lists in tech of people who have accomplished things or have a unique trait," Abdalla said. 
"The problem with lists about women in tech specifically is they're objectifying—a name, a photo and a paragraph bio, but they don't show any depth beyond that." Lists of people in technology are predicated largely on who's available for an interview, how old they are, whether they're good on Twitter, their race or sexual orientation, their class schedules, whether they meet some weird definition of "hacker," how tired of talking to people a journalist is, or who had a picture available to use. In Wired's case, it made a wider prediction about who would "change the world" based on who happened to show up to hacking club at one specific university on one night. There's an instinct (like our initial one) to correct for the underrepresentation of women, LGBT people, and people of color in STEM by promoting a couple of them as some sort of anomaly—people who are first and foremost women, LGBT, or minorities, rather than hackers, engineers, or scientists. Carolina Zarate, a hacker on Carnegie Mellon University's Plaid Parliament of Pwning hacking team, told us that in middle school, high school, college, and in hacking competitions her whole life, she'd been repeatedly asked how it felt to be the "girl hacker." "I wish there were more girls, but at the same time, it's kind of like—asking that takes away the fact that I'm here doing hacking stuff," she said. "They always focus on the fact that it's like, 'You're a girl, congrats.'" The natural reaction to an all-male list is an all-female one. The natural instinct for panel organizers or hiring managers is to make token moves toward diversity to avoid being dragged like Wired was Thursday. It is an empirical fact that there are fewer women hackers than men hackers. But finding young women who are doing amazing hacking work or software development is no longer difficult, they are no longer anomalies. 
The gender bias in technology doesn't just happen in lists, it happens in education, in hiring, in panel selection, in journalistic sourcing. For us, this means that we shouldn't highlight the work of a few underrepresented people every now and then, it should be shown as part of our everyday reporting and story selection. To do anything else is unacceptable. We want to hear about your suggestions for underreported people changing the world and how we can do a better job covering them. Email us at editors@motherboard.tv.


News Article | April 17, 2017
Site: www.techrepublic.com

Self-driving cars, drones, robots, gene editing—science fiction obsessions that have triggered many fears—have come to fruition faster than many predicted. While these emerging technologies have the potential to make our lives healthier, safer, and easier, the flip side is more grim: Eugenics, joblessness, privacy loss, and worsening economic inequality. In the book The Driver in the Driverless Car: How Our Technology Choices Will Create the Future, out this week, Vivek Wadhwa, a distinguished fellow at Carnegie Mellon University's College of Engineering and a director of research at Duke University's Pratt School of Engineering, explores the risks and rewards of our new technology, and how our choices will determine if our future errs on the side of Star Trek or Mad Max. The book began as a general look at the future and what could be possible with emerging technologies. But in the last two years, "I started getting more and more worried about the downsides of technology—the industry destruction it's causing, and the risks, dangers, and policy issues," Wadhwa told TechRepublic. "I was shocked at how fast it was happening." As evidenced by the election of US President Donald Trump, "the gap between the haves and the have nots is widening," Wadhwa said. "If we continue along the path we are on, we're going to create the dystopia of Mad Max. It's that dire." SEE: How Google's DeepMind beat the game of Go, which is even more complex than chess Many people are unaware of how rapidly technology is advancing, Wadhwa said. Take AI, which Wadhwa refers to in the book as "both the most important breakthrough in modern computing and the most dangerous technology ever created by man." "We need AI to make intelligent decisions for us, to manage the massive amounts of data being gathered, and to give us better health—all the good," Wadhwa said. 
"The bad is when you look at the latest generations of machine learning, the creators have no clue how these things are making the decisions they are making." Privacy is another concern that many consumers are not paying enough attention to, Wadhwa said, and will soon become a thing of the past. He points to Internet of Things (IoT) devices that are constantly listening and learning about their human users, and even interacting with each other. "It isn't science fiction," Wadhwa said. "It's all happening as we speak." Technology offers the potential to solve the greatest challenges facing humanity to give us a science fiction utopia future, with "unlimited food, energy, and education, so life is not about making money, but about knowledge, enlightenment, sharing, and reaching for the stars," Wadhwa said. "That future is as close as 30 years from now. It's within our reach and lifetimes. But the Mad Max future is coming sooner than I expected." Wadhwa outlines three questions about any emerging technology to determine whether it will lead us to utopia or dystopia: 1. Does it have the potential to benefit everyone equally? When considering this question, Wadhwa points to AI physicians. Currently, the rich have better access to healthcare than the poor. With the rise of digital doctors, healthcare would be more readily available to everyone, as smartphones are. This is opposed to something like gene editing, which only the rich would have access to. "If only the rich have it, it creates dystopia," Wadhwa said. "We need to make sure we share the society we're creating." 2. What are its risks and rewards? This question involves weighing all potential risks and rewards of a new technology, Wadhwa said. For example, consider IoT: Do the rewards of having a refrigerator that can tell what foods you need to buy outweigh the privacy risks? The same should be considered for gene editing, as mentioned above. 3. Does it promote autonomy or dependence? 
Though some argue that many people are now dependent on smartphones, the fact remains that ten years ago, they did not exist, Wadhwa said, and we still have the ability to turn our phones off and go about our lives. He considers this question for self-driving cars: If these vehicles become the norm, humans likely would not be allowed to drive anymore, and would become dependent upon them for transportation. However, they would allow for autonomy as well, in terms of being able to travel anywhere for a low cost, no matter what age or disabilities a person may have. "Everyone gains autonomy from self driving cars, while we become dependent on them," Wadhwa said. How can we avoid the path to dystopia? "By learning. By deciding. By speaking up," Wadhwa said. "Each of us has a say. Your voice is as important as my voice." Our individual choices around technology matter, Wadhwa argues. He points to the recent controversies surrounding Uber, and how users chose to delete the app from their phones. People working in the tech industry must consider the impact of their innovations on the world at large, Wadhwa said. "In the tech industry, we have blinders on," he said. "We have to start taking responsibility for the dystopia we're creating."


News Article | April 24, 2017
Site: news.yahoo.com

Agriculture has come a long way in the past century. We produce more food than ever before — but our current model is unsustainable, and as the world’s population rapidly approaches the 8 billion mark, modern food production methods will need a radical transformation if they’re going to keep up. Luckily, there’s a range of new technologies that might make it possible. In this series, we’ll explore some of the innovative new solutions that farmers, scientists, and entrepreneurs are working on to make sure that nobody goes hungry in our increasingly crowded world.

Ever since Americans’ industrial-age migration from the country to the city, urban areas have tended to be associated with cutting-edge technologies. Well, scratch that correlation — because in the age of artificial intelligence, a new research project by Carnegie Mellon University’s Robotics Institute is setting out to prove that the country can be every bit as technologically advanced as the smart city. Called FarmView (not to be confused with FarmVille, the time-wasting game that has overrun Facebook feeds for much of the last decade), the project employs machine learning, drones, autonomous robots, and virtually every other area of big-budget tech research to help farmers grow more food, better and smarter.

“We’ve been doing research into robotics for agriculture for about 15 years now,” George Kantor, Carnegie Mellon senior systems scientist, told Digital Trends. “It’s taken a number of different forms, and this was an attempt to pull it all together into one cohesive project.” But FarmView is far more than a top-down organizational reshuffle, like making the finance administration team responsible for accounts receivable instead of accounts payable. In fact, it demonstrates a new sense of urgency around this topic, thanks to a statistic that hammered home its importance to the researchers involved. That stat?
According to current predictions, the world population will hit 9.6 billion by 2050. What that means is that if better ways aren’t found to use our limited agricultural resources – including land, water, and energy – a global food crisis may well occur. “That’s a statistic which really forces us to look for solutions,” Kantor continued. “Technology alone isn’t going to solve this potential crisis; it also involves social and political issues. However, it’s something we think we can help with. It’s not just about how much food there is, either. The way we produce food right now is very resource intensive, and the resources that are available are being used up. We have to increase the amount of food we produce, as well as the quality, but do so in a way that doesn’t assume we have unlimited resources.”

As part of the project, the team has developed an autonomous ground robot capable of taking visual surveys of crop fields at different times in the season — courtesy of a camera, a laser scanner to measure plant geometry, and a multispectral camera that looks at nonvisible radiation bands. Using computer vision and machine-learning technology, it can predict the expected fruit yield later in the season. Rather than just passively passing this information on to a farmer, however, it can then actively trigger the robotic pruning of leaves or thinning of fruit in a way that maintains an optimal ecological balance between leaf area and fruit load. CMU researchers also use a combination of drones and stationary sensor networks to take macroscale measurements of plant growth.

While these are definitely smart examples of technology, the really long-lasting impact is going to come from how technologies like leaf-cutting robots and drones can be used to help improve crops. In this capacity, Kantor pointed toward the crop sorghum, a coarse, dry grass grain that originated thousands of years ago in Egypt. Grain sorghum is widely eaten and is considered the fifth-most important cereal crop grown in the world. Because it features so many different varieties (a whopping 42,000!), it also has enormous genetic potential for creating new high-protein varieties that could make it even more important. After all, who’s satisfied with simply being the fifth-most important cereal crop?

That’s where AI comes in. If it’s possible to use machine-learning technology to measure sorghum parameters in such a way that breeders and geneticists can choose the traits most necessary for improved yield, as well as most resistant to disease and drought, it could have a massive positive impact. Just improving the yield by, say, 50 percent would represent a real-world impact that very few computer scientists can ever be credited with.

So does all of this mean that the farm of the future, like the factory of the future, will be largely free of humans — with row after row of gleaming Terminator-style robots carrying out all the work? Not quite. “We’re not doing this to replace people. What we’re doing is to introduce new technologies that can make farmers more efficient at what they do, and allow them to use fewer resources to do it,” Kantor said. “The scenario we envision doesn’t involve using fewer people; it involves using robotics and other technologies to carry out tasks that humans aren’t currently doing.”

At present, many of the technologies are still at the “proof of concept” phase, but Kantor noted that the team has had some interesting discussions with agricultural early adopters. Now the project — which also includes researchers from Texas A&M, Penn State, Colorado State, Washington State, the University of Maryland, the University of Georgia, and South Carolina’s Clemson University — is preparing to hit the big time.
“A lot of people don’t think of this as being the first place to do this kind of research and development, but it’s an area that — and I’m sorry to use this pun, but it’s really unavoidable — is really ripe for progress,” Kantor concluded. “Our push now is to start using these tools to solve problems on a large scale.”
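The yield-prediction idea described in this article, mapping sensed plant measurements to an expected fruit yield, can be illustrated with a deliberately minimal sketch. This is not FarmView's model: it is a toy ordinary-least-squares fit on invented data (an NDVI-like vegetation index predicting fruit counts), standing in for the far richer machine-learning pipeline the article mentions.

```python
# Toy yield predictor: fit y ≈ a*x + b to (vegetation index, fruit count)
# pairs, then predict the yield of an unseen plant. All numbers are
# invented for illustration; FarmView's actual models are not shown here.

def fit_line(xs, ys):
    """Ordinary least squares for a single feature: returns slope, intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Synthetic training data: (NDVI-like index, observed fruit count per plant)
ndvi  = [0.42, 0.55, 0.61, 0.70, 0.78]
fruit = [12,   19,   22,   27,   31]

a, b = fit_line(ndvi, fruit)
estimate = a * 0.65 + b        # predicted fruit count for an unseen plant
print(f"predicted yield at NDVI 0.65: {estimate:.1f} fruit")
```

A real system would use many features (plant geometry from the laser scanner, multiple spectral bands) and a nonlinear model, but the supervised-regression shape of the problem is the same.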


News Article | May 2, 2017
Site: www.eurekalert.org

The National Academy of Sciences announced today the election of 84 new members and 21 foreign associates in recognition of their distinguished and continuing achievements in original research. Those elected today bring the total number of active members to 2,290 and the total number of foreign associates to 475. Foreign associates are nonvoting members of the Academy, with citizenship outside the United States.

Newly elected members and their affiliations at the time of election are:

Bates, Frank S.; Regents Professor, department of chemical engineering and materials science, University of Minnesota, Minneapolis
Beilinson, Alexander; David and Mary Winton Green University Professor, department of mathematics, The University of Chicago, Chicago
Bell, Stephen P.; investigator, Howard Hughes Medical Institute; and professor of biology, department of biology, Massachusetts Institute of Technology, Cambridge
Bhatia, Sangeeta N.; John J. (1929) and Dorothy Wilson Professor, Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge
Buzsáki, György; professor, Neuroscience Institute, departments of physiology and neuroscience, New York University Langone Medical Center, New York City
Carroll, Dana; distinguished professor, department of biochemistry, University of Utah School of Medicine, Salt Lake City
Cohen, Judith G.; Kate Van Nuys Page Professor of Astronomy, department of astronomy, California Institute of Technology, Pasadena
Crabtree, Robert H.; Conkey P. Whitehead Professor of Chemistry, department of chemistry, Yale University, New Haven, Conn.
Cronan, John E.; professor and head of microbiology, professor of biochemistry, and Microbiology Alumni Professor, department of microbiology, University of Illinois, Urbana-Champaign
Cummins, Christopher C.; Henry Dreyfus Professor of Chemistry, Massachusetts Institute of Technology, Cambridge
Darensbourg, Marcetta Y.; distinguished professor of chemistry, department of chemistry, Texas A&M University, College Station
DeVore, Ronald A.; The Walter E. Koss Professor and distinguished professor, department of mathematics, Texas A&M University, College Station
Diamond, Douglas W.; Merton H. Miller Distinguished Service Professor of Finance, The University of Chicago, Chicago
Doe, Chris Q.; investigator, Howard Hughes Medical Institute; and professor of biology, Institute of Molecular Biology, University of Oregon, Eugene
Duflo, Esther; co-founder and co-director of the Abdul Latif Jameel Poverty Action Lab, and Professor of Poverty Alleviation and Development Economics, Massachusetts Institute of Technology, Cambridge
Edwards, Robert Haas; professor of neurology and physiology, University of California, San Francisco
Firestone, Mary K.; professor and associate dean of instruction and student affairs, department of environmental science policy and management, University of California, Berkeley
Fischhoff, Baruch; Howard Heinz University Professor, department of social and decision sciences and department of engineering and public policy, Carnegie Mellon University, Pittsburgh
Ginty, David D.; investigator, Howard Hughes Medical Institute; and Edward R. and Anne G. Lefler Professor of Neurobiology, department of neurobiology, Harvard Medical School, Boston
Glass, Christopher K.; professor of cellular and molecular medicine and professor of medicine, University of California, San Diego
Goldman, Yale E.; professor, department of physiology, Pennsylvania Muscle Institute, University of Pennsylvania Perelman School of Medicine, Philadelphia
González, Gabriela; spokesperson, LIGO Scientific Collaboration; and professor, department of physics and astronomy, Louisiana State University, Baton Rouge
Hagan, John L.; John D. MacArthur Professor of Sociology and Law, department of sociology, Northwestern University, Evanston, Ill.
Hatten, Mary E.; Frederick P. Rose Professor, laboratory of developmental neurobiology, The Rockefeller University, New York City
Hebard, Arthur F.; distinguished professor of physics, department of physics, University of Florida, Gainesville
Jensen, Klavs F.; Warren K. Lewis Professor of Chemical Engineering and professor of materials science and engineering, Massachusetts Institute of Technology, Cambridge
Kahn, Barbara B.; vice chair for research strategy and George R. Minot Professor of Medicine at Harvard Medical School, Beth Israel Deaconess Medical Center, Boston
Kinder, Donald R.; Philip E. Converse Collegiate Professor of Political Science and Psychology and research scientist, department of political science, Center for Political Studies, Institute for Social Research, University of Michigan, Ann Arbor
Lazar, Mitchell A.; Willard and Rhoda Ware Professor in Diabetes and Metabolic Diseases, and director, Institute for Diabetes, Obesity, and Metabolism, University of Pennsylvania Perelman School of Medicine, Philadelphia
Locksley, Richard M.; investigator, Howard Hughes Medical Institute; and professor, department of medicine (infectious diseases), and Marion and Herbert Sandler Distinguished Professorship in Asthma Research, University of California, San Francisco
Lozano, Guillermina; professor and chair, department of genetics, The University of Texas M.D. Anderson Cancer Center, Houston
Mavalvala, Nergis; Curtis and Kathleen Marble Professor of Astrophysics and associate head, department of physics, Massachusetts Institute of Technology, Cambridge
Moore, Jeffrey Scott; Murchison-Mallory Professor of Chemistry, department of chemistry, University of Illinois, Urbana-Champaign
Moore, Melissa J.; chief scientific officer, mRNA Research Platform, Moderna Therapeutics, Cambridge, Mass.; and Eleanor Eustis Farrington Chair of Cancer Research Professor, RNA Therapeutics Institute, University of Massachusetts Medical School, Worcester
Nunnari, Jodi M.; professor, department of molecular and cellular biology, University of California, Davis
O'Farrell, Patrick H.; professor of biochemistry and biophysics, department of biochemistry and biophysics, University of California, San Francisco
Ort, Donald R.; research leader and Robert Emerson Professor, USDA/ARS Global Change and Photosynthesis Research Unit, departments of plant biology and crop sciences, University of Illinois, Urbana-Champaign
Parker, Gary; professor, department of civil and environmental engineering and department of geology, University of Illinois, Urbana-Champaign
Patapoutian, Ardem; investigator, Howard Hughes Medical Institute; and professor, department of molecular and cellular neuroscience, The Scripps Research Institute, La Jolla, Calif.
Pellegrini, Claudio; distinguished professor emeritus, department of physics and astronomy, University of California, Los Angeles
Pikaard, Craig S.; investigator, Howard Hughes Medical Institute and Gordon and Betty Moore Foundation; and distinguished professor of biology and molecular and cellular biochemistry, department of biology, Indiana University, Bloomington
Read, Nicholas; Henry Ford II Professor of Physics and professor of applied physics and mathematics, Yale University, New Haven, Conn.
Roediger, Henry L.; James S. McDonnell Distinguished University Professor of Psychology, department of psychology and brain sciences, Washington University, St. Louis
Rosenzweig, Amy C.; Weinberg Family Distinguished Professor of Life Sciences, and professor, departments of molecular biosciences and of chemistry, Northwestern University, Evanston, Ill.
Seto, Karen C.; professor, Yale School of Forestry and Environmental Studies, New Haven, Conn.
Seyfarth, Robert M.; professor of psychology and member of the graduate groups in anthropology and biology, University of Pennsylvania, Philadelphia
Sibley, L. David; Alan A. and Edith L. Wolff Distinguished Professor in Molecular Microbiology, department of molecular microbiology, Washington University School of Medicine, St. Louis
Spielman, Daniel A.; Henry Ford II Professor of Computer Science and Mathematics, departments of computer science and mathematics, Yale University, New Haven, Conn.
Sudan, Madhu; Gordon McKay Professor of Computer Science, John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, Mass.
Tishkoff, Sarah; David and Lyn Silfen University Professor, departments of genetics and biology, University of Pennsylvania, Philadelphia
Van Essen, David C.; Alumni Professor of Neurobiology, department of anatomy and neurobiology, Washington University School of Medicine, St. Louis
Vidale, John E.; professor, department of earth and space sciences, University of Washington, Seattle
Wennberg, Paul O.; R. Stanton Avery Professor of Atmospheric Chemistry and Environmental Science and Engineering, California Institute of Technology, Pasadena
Wilson, Rachel I.; Martin Family Professor of Basic Research in the Field of Neurobiology, department of neurobiology, Harvard Medical School, Boston
Zachos, James C.; professor, department of earth and planetary sciences, University of California, Santa Cruz

Newly elected foreign associates, their affiliations at the time of election, and their country of citizenship are:

Addadi, Lia; professor and Dorothy and Patrick E. Gorman Chair of Biological Ultrastructure, department of structural science, Weizmann Institute of Science, Rehovot, Israel (Israel/Italy)
Folke, Carl; director and professor, The Beijer Institute of Ecological Economics, Royal Swedish Academy of Sciences, Stockholm, Sweden (Sweden)
Freeman, Kenneth C.; Duffield Professor of Astronomy, Mount Stromlo and Siding Spring Observatories, Research School of Astronomy and Astrophysics, Australian National University, Weston Creek (Australia)
Lee, Sang Yup; distinguished professor, dean, and director, department of chemical and biomolecular engineering, Korea Advanced Institute of Science and Technology, Daejeon, South Korea (South Korea)
Levitzki, Alexander; professor of biochemistry, unit of cellular signaling, department of biological chemistry, The Hebrew University of Jerusalem, Jerusalem (Israel)
Peiris, Joseph Sriyal Malik; Tam Wah-Ching Professorship in Medical Science, School of Public Health, The University of Hong Kong, Pokfulam, Hong Kong, People's Republic of China (Sri Lanka)
Robinson, Carol Vivien; Dr. Lee's Professor of Chemistry, Physical and Theoretical Chemistry Laboratory, University of Oxford, Oxford, England (United Kingdom)
Thesleff, Irma; academician of science, professor, and research director, developmental biology program, Institute of Biotechnology, University of Helsinki, Helsinki (Finland)
Underdal, Arild; professor of political science, department of political science, University of Oslo, Oslo, Norway (Norway)

The National Academy of Sciences is a private, nonprofit institution that was established under a congressional charter signed by President Abraham Lincoln in 1863. It recognizes achievement in science by election to membership, and -- with the National Academy of Engineering and the National Academy of Medicine -- provides science, engineering, and health policy advice to the federal government and other organizations.


News Article | April 18, 2017
Site: www.prweb.com

The Community for Accredited Online Schools, a leading resource provider for higher education information, has ranked the best online colleges and universities in Pennsylvania for 2017. The top 50 four-year schools were named, with Temple University, Pennsylvania State University, Carnegie Mellon University, Drexel University and University of Pittsburgh honored as the top five. Twelve two-year colleges were also recognized, with Harrisburg Area Community College, Community College of Allegheny County, Westmoreland County Community College, Lehigh Carbon Community College and Bucks County Community College taking the top five spots. “These Pennsylvania colleges and universities have proven their value when it comes to providing high-quality online certificate and degree programs,” said Doug Jones, CEO and founder of AccreditedSchoolsOnline.org. “In addition to strong academics, these schools also offer their online students exceptional counseling and support resources that foster success.” To earn a spot on the Community for Accredited Online Schools list, colleges and universities must be accredited, public or private not-for-profit institutions. Several additional data points are taken into consideration when scoring each school, including financial aid offerings, student/teacher ratios, graduation rates, student services and academic resources.
For more details on where each school falls in the rankings and the data and methodology used to determine the lists, visit:

The Best Online Four-Year Schools in Pennsylvania for 2017 include the following:

Alvernia University
Arcadia University
California University of Pennsylvania
Carlow University
Carnegie Mellon University
Cedar Crest College
Chatham University
Clarks Summit University
Delaware Valley University
DeSales University
Drexel University
Duquesne University
Eastern University
Gannon University
Geneva College
Gwynedd Mercy University
Immaculata University
Indiana University of Pennsylvania-Main Campus
Keystone College
King's College
La Roche College
La Salle University
Lancaster Bible College
Lehigh University
Marywood University
Mercyhurst University
Messiah College
Misericordia University
Mount Aloysius College
Neumann University
Pennsylvania State University-Main Campus
Pennsylvania State University-Penn State Harrisburg
Pennsylvania State University-Penn State Shenango
Philadelphia University
Point Park University
Robert Morris University
Rosemont College
Saint Francis University
Saint Joseph's University
Seton Hill University
Temple University
University of Pittsburgh-Pittsburgh Campus
University of Scranton
University of the Sciences
University of Valley Forge
Villanova University
West Chester University of Pennsylvania
Widener University-Main Campus
Wilkes University
Wilson College

Best Online Two-Year Schools in Pennsylvania for 2017 include the following:

Bucks County Community College
Community College of Allegheny County
Community College of Philadelphia
Harcum College
Harrisburg Area Community College - Harrisburg
Lehigh Carbon Community College
Luzerne County Community College
Montgomery County Community College
Northampton County Area Community College
Pennsylvania Highlands Community College
Reading Area Community College
Westmoreland County Community College

###

About Us: AccreditedSchoolsOnline.org was founded in 2011 to provide students and parents with quality data and information about pursuing an affordable, quality education that has been certified by an accrediting agency. Our community resource materials and tools span topics such as college accreditation, financial aid, opportunities available to veterans and people with disabilities, and online learning resources. We feature higher education institutions that have developed online learning programs that include highly trained faculty, new technology and resources, and online support services to help students achieve educational success.


News Article | April 17, 2017
Site: www.techrepublic.com

In one of the top achievements for AI, AlphaGo, Google DeepMind's machine learning platform, defeated Go world champion Lee Sedol in March 2016 at a game more complex than chess. AlphaGo won four of the tournament's five games, after mastering the game roughly 10 years earlier than many experts had predicted. AlphaGo has continued to improve its skills throughout the past year, training by playing against top players from South Korea, China, and Japan. And at the end of May, AlphaGo will explore a new way to learn: by pairing up with top Go players and AI experts at the Future of Go Summit. The five-day conference, held from May 23-27, will bring together AI experts, the China Go Association, AlphaGo, and China's top Go players in Wuzhen, China.

AlphaGo harnesses convolutional neural networks to play the ancient Chinese game. What makes it particularly impressive is that it uses reinforcement learning, instead of being explicitly programmed for the task. When IBM's Deep Blue achieved an AI victory in 1997 by beating world chess champion Garry Kasparov, it relied on handcrafted rules and brute-force search rather than learning. And Go is a complex game that relies heavily on intuition, with potentially 200 options per move, as opposed to about 20 on a chessboard.

While it is interesting to see what AlphaGo is capable of, the summit in May will be a test of how AI and human collaborations can work, and a chance to see how human learning can be enhanced by AI. And the takeaways will likely extend past the gaming world. Manuela Veloso, head of machine learning at Carnegie Mellon University, previously told TechRepublic that she was interested to see "if and how AlphaGo's learning approach may apply to other different 'non-game' problems." AlphaGo has already taken on the task of reducing energy use, and the technology has been applied to medical research projects as well.
Toby Walsh, AI professor at the University of New South Wales, also previously told TechRepublic that the nature of the game itself could change, and that the AI system "played moves that have surprised even Go masters." "Will man and machine together be better than man or machine alone?" Walsh asked. "Each of us can bring unique strengths to the table."
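The branching-factor figures quoted in this article (roughly 200 legal options per Go move versus about 20 in chess) compound exponentially with game length, which is why Go resisted brute-force search. A quick back-of-envelope sketch, using only those approximate figures:

```python
# Rough illustration of why Go's search space dwarfs chess's.
# The branching factors (~200 vs ~20) are the approximate numbers quoted
# in the article; real values vary by position, and this naive count
# ignores transpositions and illegal repetitions.
GO_BRANCH, CHESS_BRANCH = 200, 20

def positions(branch, plies):
    """Naive game-tree size: branch ** plies."""
    return branch ** plies

# After just 10 moves per side (20 plies), the ratio of tree sizes is
# (200/20)^20 = 10^20 -- twenty orders of magnitude.
ratio = positions(GO_BRANCH, 20) // positions(CHESS_BRANCH, 20)
print(f"Go tree is ~10^{len(str(ratio)) - 1} times larger after 20 plies")
```

This gap is why Deep Blue's search-heavy approach was never enough for Go, and why AlphaGo's learned evaluation was the breakthrough.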


News Article | May 8, 2017
Site: www.eurekalert.org

Touch sensing is most common on small, flat surfaces such as smartphone or tablet screens. Researchers at Carnegie Mellon University, however, can turn surfaces of a wide variety of shapes and sizes into touchpads using tools as simple as a can of spray paint. Walls, furniture, steering wheels, toys and even Jell-O can be turned into touch sensors with the technology, dubbed Electrick. The "trick" is to apply electrically conductive coatings or materials to objects or surfaces, or to craft objects using conductive materials. By attaching a series of electrodes to the conductive materials, researchers showed they could use a well-known technique called electric field tomography to sense the position of a finger touch. "For the first time, we've been able to take a can of spray paint and put a touch screen on almost anything," said Chris Harrison, assistant professor in the Human-Computer Interaction Institute (HCII) and head of the Future Interfaces Group. The group will present Electrick at CHI 2017, the Conference on Human Factors in Computing Systems, this week in Denver. Until now, large touch surfaces have been expensive and irregularly shaped, or flexible touch surfaces have been largely available only in research labs. Some methods have relied on computer vision, which can be disrupted if a camera's view of a surface is blocked. The presence of cameras also raises privacy concerns. With Electrick, conductive touch surfaces can be created by applying conductive paints, bulk plastics or carbon-loaded films, such as Desco's Velostat, among other materials. HCII Ph.D. student Yang Zhang said Electrick is both accessible to hobbyists and compatible with common manufacturing methods, such as spray coating, vacuum forming and casting/molding, as well as 3D printing. Like many touchscreens, Electrick relies on the shunting effect -- when a finger touches the touchpad, it shunts a bit of electric current to ground. 
By attaching multiple electrodes to the periphery of an object or conductive coating, Zhang and his colleagues showed they could localize where and when such shunting occurs. They did this by using electric field tomography -- sequentially running small amounts of current through the electrodes in pairs and noting any voltage differences. The tradeoff, in comparison to other touch input devices, is accuracy. Even so, Electrick can detect the location of a finger touch to an accuracy of one centimeter, which is sufficient for using the touch surface as a button, slider, or other control, Zhang said.

Zhang, Harrison, and Gierad Laput, another HCII Ph.D. student, used Electrick to add touch sensing to surfaces as large as a 4-by-8-foot sheet of drywall, as well as objects as varied as a steering wheel, the surface of a guitar, and a Jell-O mold of a brain. Even Play-Doh can be made interactive with Electrick. The technology was used to make an interactive smartphone case -- opening applications such as a camera based on how the user holds the phone -- and a game controller that can change the position and combinations of buttons and sliders based on the game being played or the player's preferences. Zhang said the Electrick surfaces proved durable. Adding a protective coating atop the conductive paints and sheeting is also possible.

The David and Lucile Packard Foundation supported this research. More information, including photos and a video, is available on the project website.

Carnegie Mellon is a private, internationally ranked research university with programs in areas ranging from science, technology and business, to public policy, the humanities and the arts. More than 13,000 students in the university's seven schools and colleges benefit from a small student-to-faculty ratio and an education characterized by its focus on creating and implementing solutions for real problems, interdisciplinary collaboration and innovation.
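The sensing principle the article describes (drive small currents between electrode pairs, watch for voltage deviations caused by a finger shunting current to ground) can be demonstrated in miniature. The sketch below is an illustrative toy, not the Electrick implementation: it models a 1-D resistive strip as a resistor network solved by nodal analysis, treats the finger as a shunt resistance at one node, drives current from each end electrode in turn, and localizes the touch from the combined voltage deviations. All component values are invented for the demo.

```python
# Toy 1-D "electric field tomography" demo on a resistive strip.
# A touch is modeled as a shunt conductance to ground; we localize it by
# driving current from each end electrode and combining the deviations of
# the node voltages from their untouched baselines.

def solve(a, b):
    """Solve a x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def strip_voltages(n, g_seg, inject, sink, i_amps, touch=None, g_touch=0.0):
    """Nodal analysis of an n-node strip; the 'sink' node is grounded."""
    G = [[0.0] * n for _ in range(n)]
    for j in range(n - 1):                      # stamp neighbour conductances
        for a, b in ((j, j + 1), (j + 1, j)):
            G[a][a] += g_seg
            G[a][b] -= g_seg
    if touch is not None:                       # finger shunts current to ground
        G[touch][touch] += g_touch
    rhs = [0.0] * n
    rhs[inject] = i_amps
    G[sink] = [1.0 if c == sink else 0.0 for c in range(n)]   # v[sink] = 0
    rhs[sink] = 0.0
    return solve(G, rhs)

def locate_touch(n=32, touch=21):
    """Drive from each end, compare to baselines, return estimated touch node."""
    base_l = strip_voltages(n, 1.0, 0, n - 1, 1e-3)
    base_r = strip_voltages(n, 1.0, n - 1, 0, 1e-3)
    meas_l = strip_voltages(n, 1.0, 0, n - 1, 1e-3, touch, 0.5)
    meas_r = strip_voltages(n, 1.0, n - 1, 0, 1e-3, touch, 0.5)
    # the product of the two deviation profiles peaks at the touch node
    dev = [(bl - ml) * (br - mr)
           for bl, ml, br, mr in zip(base_l, meas_l, base_r, meas_r)]
    return max(range(n), key=lambda j: dev[j])

print(locate_touch())   # → 21
```

Electrick works on 2-D surfaces with many peripheral electrodes and solves a genuine tomographic reconstruction, but the core idea (a touch perturbs the current paths, and multiple drive pairs triangulate the perturbation) is the same as in this strip model.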


Lerner J.S.,Harvard University | Li Y.,University of California at Riverside | Valdesolo P.,Claremont McKenna College | Kassam K.S.,Carnegie Mellon University
Annual Review of Psychology | Year: 2015

A revolution in the science of emotion has emerged in recent decades, with the potential to create a paradigm shift in decision theories. The research reveals that emotions constitute potent, pervasive, predictable, sometimes harmful and sometimes beneficial drivers of decision making. Across different domains, important regularities appear in the mechanisms through which emotions influence judgments and choices. We organize and analyze what has been learned from the past 35 years of work on emotion and decision making. In so doing, we propose the emotion-imbued choice model, which accounts for inputs from traditional rational choice theory and from newer emotion research, synthesizing scientific models. © 2015 by Annual Reviews. All rights reserved.
