News Article | February 15, 2017
Site: www.prweb.com

Digital Science, a technology company serving the needs of scientific and research communities, today announced Carnegie Mellon University (CMU) as a key customer and development partner. By implementing a suite of products from the Digital Science portfolio, Carnegie Mellon will unveil a solution to capture, analyze and showcase its leading research. Using continuous, automated capture of data from multiple internal and external sources, including publication and associated citation and altmetrics data, grant data, and research data, Carnegie Mellon will be able to provide its faculty, funders and decision-makers with an accurate, timely and holistic picture of the institution’s research.

With the goal of championing new forms of scholarly communication, Carnegie Mellon is creating a number of research platforms that will work together to enable innovation and provide opportunities for interactive research among the university's researchers. As part of this effort, the university is building out an ecosystem of support, processes and tools that underpin the full research lifecycle from ideation to dissemination. Carnegie Mellon plans to roll out a suite of tools from Digital Science to its academic community over the coming months; these tools offer a multitude of benefits.

“The library is at the heart of the work of the institution and must provide a reimagined ‘intellectual commons’ for a campus community,” said Keith Webster, Dean of University Libraries, Carnegie Mellon. “With this partnership, we have the opportunity to position ourselves as a world leader in the development of the scholarly ecosystem. Digital Science is central in allowing us to build the best research information system that exists today, and we look forward to sharing our experience and expertise with the global academic community.”

“Carnegie Mellon is at the forefront of creating a transformative and collaborative research environment that is open to the free exchange of ideas, where research, creativity, innovation, and entrepreneurship flourish,” said Daniel Hook, CEO of Digital Science. “We are very proud indeed to be working with the team at CMU to support their researchers to spend more time on discovery and collaboration. We also look forward to working with them as a development partner to continue to drive this innovation.”

*About Digital Science* Digital Science is a technology company serving the needs of scientific and research communities at key points along the full cycle of research. It invests in and incubates research software companies that simplify the research cycle, making more time for discovery. Its portfolio companies include a host of leading brands including Altmetric, BioRAFT, Figshare, IFI CLAIMS Patent Services, Labguru, Overleaf, Peerwith, ReadCube, Symplectic, ÜberResearch, TetraScience and Transcriptic. It is operated by the global media company Holtzbrinck Publishing Group. Visit http://www.digital-science.com and follow @digitalsci on Twitter.

*About Altmetric* Altmetric was founded in 2011 with a mission to track and analyze online attention to scholarly literature beyond traditional citations. Altmetric tracks what people are saying about research outputs online and works with some of the biggest publishers, funders, and institutions around the world to deliver this data in an accessible and reliable format. Visit http://www.altmetric.com for more information and follow @altmetric on Twitter.
*About ÜberResearch* ÜberResearch, the company behind Dimensions, is a leading provider of software solutions focused on helping funding organizations, non-profits, and governmental institutions make more informed decisions about science funding. The company's cloud-based platform provides better views of an organization's grant data, peer organizations' activities, and the data of the funding community at large. The software's functions span from search and duplicate detection to robust tools for reviewer identification and portfolio analysis. For more information, visit http://www.uberresearch.com and follow @uberresearch on Twitter.

*About Figshare* Figshare is a web-based platform that helps academic institutions manage, disseminate and measure the public attention of all their research outputs. The light-touch and user-friendly approach focuses on four key areas: research data management, reporting and statistics, research data dissemination and administrative control. Figshare works with institutions in North America and internationally to help them meet key funder recommendations and to provide world-leading tools to support an open culture of data sharing and collaboration. For more information, visit http://figshare.com and follow @figshare on Twitter.

*About Symplectic* Symplectic is a leading developer of Research Information Management systems. Founded in 2003, Symplectic’s flagship product, Elements, is used by over 300,000 researchers, repository managers and librarians at over 80 of the world’s top institutions, including the University of Oxford, University of Melbourne, and Duke University. For more information, visit http://www.symplectic.info and follow @symplectic on Twitter.

*About Carnegie Mellon University* Carnegie Mellon is a private, internationally ranked research university with programs in areas ranging from science, technology and business to public policy, the humanities and the arts. More than 13,000 students in the university's seven schools and colleges benefit from a small student-to-faculty ratio and an education characterized by its focus on creating and implementing solutions for real problems, interdisciplinary collaboration and innovation. Carnegie Mellon's main campus in the United States is in Pittsburgh, Pa. It has campuses in California's Silicon Valley and Qatar, and programs in Africa, Asia, Australia, Europe and Mexico.


A team at Carnegie Mellon University (CMU) led by Dr. Jeremy Michalek has investigated the implications of regional and drive cycle variations on the degradation of a plug-in hybrid electric vehicle (PHEV) battery. Modeling a PHEV with an air-cooled battery pack comprising cylindrical LiFePO4/graphite cells, they simulated the effect of thermal management, driving conditions, regional climate, and vehicle system design on battery life.

In their paper, published in the Journal of Power Sources, they reported that in the absence of thermal management, aggressive driving can cut battery life by two-thirds; a blended gas/electric-operation control strategy can quadruple battery life relative to an all-electric control strategy; larger battery packs can extend life by an order of magnitude relative to small packs used for all-electric operation; and batteries last 73–94% longer in mild-weather San Francisco than in hot Phoenix. Air cooling can increase battery life by a factor of 1.5–6, depending on regional climate and driving patterns, they found. End-of-life criteria have a substantial effect on battery life estimates.

Apart from the specific type and design of the battery, the conditions and stress factors during storage and use also affect how quickly the battery will degrade. There are various factors that affect battery life, such as time, charge/discharge rate, temperature, and depth of discharge (DOD)/state of charge (SOC). The degree to which each of these factors affects degradation patterns depends on the chemistry and design. … In this study, we aim to assess the regional and drive cycle implications of degradation of a PHEV battery. For this purpose we construct a comprehensive and modular simulation model to address three main questions: 1) How much improvement in PHEV battery life can be obtained with passive air-cooling? 2) How does this improvement vary across different regions and different driving and usage profiles? 3) What is the sensitivity of the results to the model parameters and assumptions?

The CMU team created usage scenarios for one year of daily driving, charging and rest conditions and recorded the battery usage history. The researchers used this battery usage history to estimate the degradation over consecutive years, assuming that the same usage profile repeats itself every year. For the vehicle, they used specifications similar to a Toyota Prius with a 5 kWh Hymotion ANR26650 LiFePO4/graphite pack comprising cylindrical cells manufactured by A123 Systems. This allowed them to draw on prior work and to use an air-cooled system with well-established parameters. Based on those assumptions, they developed a comprehensive simulation model to estimate battery temperature, current and state-of-charge profiles under the usage scenarios.

To estimate daily travel behavior of the vehicle, the team used GPS sample data from the Atlanta Regional Commission (ARC) Regional Travel Survey with GPS Sub-Sample, available at the Transportation Secure Data Center (TSDC) of the National Renewable Energy Laboratory (NREL). To test the sensitivity of the results to drive cycle, they used two standard EPA fuel economy test cycles: the Urban Dynamometer Driving Schedule (UDDS), which represents city driving conditions, and US06, an aggressive (high-acceleration) driving schedule.
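To make the structure of such a study concrete, here is a minimal, illustrative sketch of a simulation loop of the kind described above: it replays one year of daily usage (drive, charge, rest), tracks a lumped pack temperature and depth of discharge, and accumulates capacity fade until an end-of-life threshold is reached. The fade law, thermal model, coefficients and ambient temperatures below are placeholder assumptions for illustration only; they are not the aging model, cell parameters or climate data used in the CMU paper.

```python
# Illustrative sketch only (not the CMU model): replay one year of daily usage
# and accumulate capacity fade until an end-of-life (EOL) threshold is reached.
# Every coefficient below is a placeholder chosen for demonstration.

import math
from dataclasses import dataclass

@dataclass
class Pack:
    fade: float = 0.0            # fraction of capacity lost so far
    temperature_c: float = 25.0  # lumped pack temperature, degC

EOL_FADE = 0.20                                         # treat 20% capacity loss as end of life
AMBIENT_C = {"phoenix": 33.0, "san_francisco": 17.0}    # rough placeholder ambients, degC

def thermal_step(pack, ambient_c, heat_w, air_cooled, dt_h):
    """Crude lumped thermal model: resistive heating, then exponential
    relaxation toward ambient (faster with forced-air cooling)."""
    tau_h = 1.0 if air_cooled else 3.0
    pack.temperature_c += (heat_w / 500.0) * dt_h        # placeholder heat capacity
    pack.temperature_c += (ambient_c - pack.temperature_c) * (1.0 - math.exp(-dt_h / tau_h))

def fade_step(pack, dod, dt_h):
    """Placeholder degradation law: an Arrhenius-like temperature factor times
    a DOD-weighted cycling term, plus calendar aging."""
    temp_factor = 2.0 ** ((pack.temperature_c - 25.0) / 10.0)   # ~doubles every +10 degC
    pack.fade += (2e-5 * dod * dt_h + 1e-6 * dt_h) * temp_factor

def years_to_eol(ambient_c, aggressive, air_cooled, max_years=30):
    """Repeat one synthetic day (1 h drive, 1 h charge, 22 h rest) for whole years."""
    pack = Pack()
    heat_w = 400.0 if aggressive else 150.0   # aggressive cycles dissipate more heat
    dod = 0.9 if aggressive else 0.6          # and discharge the pack more deeply
    for year in range(1, max_years + 1):
        for _day in range(365):
            thermal_step(pack, ambient_c, heat_w, air_cooled, 1.0)         # drive
            fade_step(pack, dod, 1.0)
            thermal_step(pack, ambient_c, 0.3 * heat_w, air_cooled, 1.0)   # charge
            fade_step(pack, 0.1 * dod, 1.0)
            thermal_step(pack, ambient_c, 0.0, air_cooled, 22.0)           # rest
            fade_step(pack, 0.0, 22.0)
        if pack.fade >= EOL_FADE:
            return year
    return max_years

if __name__ == "__main__":
    for city, t_amb in AMBIENT_C.items():
        for cooled in (False, True):
            life = years_to_eol(t_amb, aggressive=True, air_cooled=cooled)
            print(f"{city:14s} air_cooled={cooled}: ~{life} years to EOL")
```

A full study of the kind described in the paper would replace the synthetic day with recorded GPS drive cycles, the lumped thermal step with a cell-level thermal model, and the placeholder fade law with a calibrated calendar-plus-cycling aging model.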


News Article | February 16, 2017
Site: www.prnewswire.co.uk

In continuation of our letters dated February 8, 2017 and February 13, 2017, please find enclosed the transcripts of the 'KIE's Chasing Growth Conference' held on February 13, 2017. The same will be made available on the Company's website at the following weblink: https://www.infosys.com/investors/news-events/events/Documents/chasing-growth-conference-13feb2017.pdf This is for your information and records.

Our next speaker is Dr. Vishal Sikka. Dr. Vishal Sikka is the current CEO and Managing Director of Infosys, a globally recognized innovation and business leader in the IT services industry. Dr. Sikka was born and raised in India. He studied Computer Science at Syracuse University before receiving his Ph.D. in AI from Stanford in 1996. After a long and successful stint at SAP, he joined Infosys as CEO in 2014 to help drive a massive transformation not only at Infosys but also within the larger IT services industry. In the two and a half years since he took over, his vision and leadership have helped accelerate Infosys's transformation, resulting in growth of their global workforce to almost 2 lakh people. There has been a massive reduction in attrition, a doubling of large-deal total contract value and of the number of large clients, and industry-leading cash flow generation, with revenues of $9.5 billion despite strong headwinds. Infosys's new software and new services have seen tremendous growth as well. Client satisfaction is accordingly at its highest level ever, particularly in the CXO segment, which according to the Infosys annual survey showed a 22-point increase. May I invite Dr. Vishal Sikka to talk about Infosys and the future of IT services.

Thank you so much. That was quite a handful of an introduction. Hi, Mr. Kotak. Good to see all of you. Uday has asked me to organize my talk in two sections. In the first part I want to talk about the performance and financial metrics that most of you care a lot about, and then following that I want to spend some time looking at the bigger picture. I hope to finish the speech itself in another 25 minutes and then leave around half an hour for your questions.

Two and a half years ago, when I started, we laid out a relatively straightforward strategy. The idea was to launch a three-part strategy of renewing the core business, entering into new businesses, and renewing the culture of the company. And the interesting part about this three-part strategy was that it reflects the challenges of our clients, the challenge of businesses that have this dual priority of, on one hand, getting better at everything that they do and, on the other hand, doing new things that they never did before. In this, they have their own timeless culture and value system that they operate on. So, if we look over the last two and a half years, the execution that we have followed on this path is starting to show signs of success. I will quickly walk you through some of these metrics.

The relative revenue performance compared to the industry has distinctly improved. When I joined, our revenue was significantly below the industry; it has now come up towards the top of the industry, or certainly in line with the industry. Year-on-year, if you compare the first three quarters of this financial year to the last one, you see the performance there, and despite the challenging environment we have managed to keep the operating margin steady.
If you look at many of the operational parameters of our business the core business utilization for the last seven quarters has been above 80% which is for the first time in a very long time in our company. The employee cost as a percentage of revenue has come down. You can see the operating cash flow as percentage of net profit is getting close to 100%. The onsite we can still do better but the subcon costs are something that are continually improving. I know a lot of you have questions around the visa policies and things like this, we can talk about that in the Q&A but our endeavor has been to get closer to the 30% number. In terms of the core business, IT services continue to grow through a dedicated focus on renewing our existing services through a combination of automation and innovation as well as mixing our portfolio increasingly towards more high value, value add kind of services. One area of disappointment for us has been the consulting business. We continue to focus on this over the course this year. The negative performance of consulting in Q1 impacted us more than we expected. Products, platforms and other areas of the business also continue to grow and here is a different cut on that. If you look from the time when I started to last quarter, revenue per quarter has grown by more than $400 million per quarter.  While the margin is same from 25.1% to 25.1%, operating profits has gone up from $536 million to $640 million. Attrition has come down significantly from 23.4% at a group level in the quarter when I joined to now a little bit below 15%. There is a separate attrition metric that we have started to internally focus on over the last year which is high performer attrition and that number has come down significantly to single-digits. Revenue per FTE is one of the key metrics of our strategy. While it has decreased compared to when I started, it has in the last couple of quarters started to go back up and that is something we are very happy about. One key part of the strategy is the new software. When we talk about renew and new, there are new services in unprecedented new areas and there are also new software that we did not have before. Excluding Finacle, the software revenue in the quarter when I started was $35 million a quarter and that number has already gone up to $60 million now. The $100 million accounts went up from 12 to 18. $200 million accounts have doubled from 3 to 6 in that time. Organizationally Pravin and I have established a strong team around us. Jayesh is here, our Deputy CFO. Similarly, we have appointed Ravi as our Deputy COO. There are three Presidents responsible for our go to market. Below their level we have established 15 industry heads to create much more focused small encapsulated empowered business units, so that client focus and agility can be achieved and it is the best way for us to achieve scale. Similarly, we have been making a lot of progress and Pravin oversees this on simplifying our internal policies, systems and process so that we can become much more agile. So, when we look at this in the broader context of the strategy, beyond the numbers you can clearly see that on all the three dimensions of strategy things have been taking hold. One program that I am particularly proud of, I am approaching it since two years, next month it is going to be two years since we started it, is the Zero Distance. It has been deeply engrained into our company's culture. More than 95% of the projects of the company have Zero Distance plans. 
The idea of Zero Distance for those of you who do not know about it, is it inspires our project team to bring innovation in every single project no matter what. The number is actually 100% and the projects that do not have a Zero Distance plan are the ones that just started recently. So, if you look at the Zero Distance plans of project that started more than six weeks ago, it is 100%. Every project is basically covered by Zero Distance project and it has become a part of the culture. For the last nine months or so, we have been focusing on elevating the Zero Distance innovation idea where project teams think of bigger innovations. The CWRF is a tribute to Martin Luther King's quote that, "If you cannot fly we must run, if you cannot run we must walk, if you cannot walk we must crawl, but we have to always keep moving forward" and so we have this framework of most of the Zero Distance innovations, even though we are close to 100%, are quite incremental quite small and so our endeavor now is to elevate these towards more and more transformative innovation for our clients coming from the grass root. Zero Bench, which was the other initiative that we launched in July of 2015 that also reached close to 100%. So basically, everyone on the bench has done a Zero Bench project. We have now more than 34,000 jobs on our internal market place. 470 jobs show up every week on this and it has touched basically everybody on the bench. As I mentioned earlier, one of the direct impacts of Zero Bench is giving the youngsters an opportunity to get some experience by working on a Zero Bench project. That experience then makes it easier for them to get into production. So, Zero Bench has had a direct impact on the utilization, especially on the fresher utilization. The utilization is above 80% consistently now, but in particular the fresher utilization and off-shore utilization has gone up directly as a result of Zero Bench. The renewal of our existing core business, in particular the maintenance and run parts of the business but also to some degree the build parts of the business is impacted by automation. I mentioned, in the earnings in January that more than 2,600 people over the course of Q3 were saved but 2,650 FTE worth of effort was eliminated with automation. Over the last 12 months that number has crossed 8,000. The Mana platform plays a very important role in helping us achieve that automation. All 8,500 is not due to Mana but Mana for IT has a huge impact on bringing automation led productivity improvements to our work. And within our core business, we have launched many new services like the Mainframe Modernization, the API Economy, BI Renewal, the work with Salesforce.com, ServiceNow, etc. Roughly 10 or so of these have launched in the last couple of years and they have all been showing growth that is faster than the growth in the company. As a result of all of this, the client satisfaction, a survey done since last 12 years, has demonstrated the highest ever score this time. In particular, as you said in the introduction, the CXO satisfaction has jumped up by 22 points. So, I am quite proud of the work that our team has done in renewing our core business. When you look at new, getting into new dimensions of business is something that should come easily to us. 
When you create something new, you do that, but culturally and organizationally the new is much more difficult to pull off, because it is not enough just to create a separate appendage that is sitting somewhere else and doing something totally independent. You have to find a way to make these harmonious. The idea of 'Renew and New' comes from Arthur Koestler's work on creation, this idea of two habitually distinct but self-consistent frames of reference, where you are doing something unprecedented in the new dimension while you are continuously improving in the existing dimension.

We have seen continued strong momentum in the three acquisitions that we have made (Skava, Panaya and Noah), as well as in the new services that we have launched organically. One of the things that we are very proud of with Skava is that, beyond retail, Skava has now entered other industries, in particular financial services, telecommunications, and utilities. We have opened new frontiers with Skava and our home-grown Mana platform. Mana helps us renew our services, in particular in the maintenance and run areas. The same Mana platform also has huge applications in helping our clients build breakthrough business applications of artificial intelligence. I think having a common platform that does both is extremely important to our strategy. A lot of people in the industry build captive automation capabilities in service of their own business. If we did only that, the platform would not be competitive. You have to ensure that the platform that is powering the productivity improvement of the existing business is a world-class platform. Therefore, it is important to have this platform subjected to the outside world, to the light of the outside. Building breakthrough business applications on Mana is a very, very critical part of the strategy. I am really excited by what our teams are doing, what our clients are doing, all the way from CPG companies using it to do revenue reconciliation and financial consolidation much faster, to companies in the pharmaceutical industry working on much better forecasting, to using Mana for contract management and compliance to contracts, to dynamic fraud analysis for banks, and applications like that. On the new services dimension, our digital services as well as the strategic design consulting services are things that have been going forward quite well.

One key part of our strategy is the cultural dimension, and there are two main parts to that. One is the agility of the company as a whole and the other one is education. Infosys has always had a very strong emphasis on education. Mr. Murthy always talked about this, the idea of learnability, the ability to learn. Especially in the times ahead, learnability is going to be a very critical thing for our future. So, when you look at all the work that we do in the culture dimension, it is dominated by this. ILP, the Infosys Learning Platform, is our own platform to teach our freshers as well as our regular employees what is going on in the world around them in a very immersive way. We drop them into boot camps, into projects and into sub-sets of the projects, so they are learning in a much more accelerated way. We have done a lot of innovation in learning itself, in how people learn and so forth, and brought that into ILP. One of the most interesting programs that we have done is for our top 200 executives: a global leadership program together with Stanford, and we have now done three cohorts of this.
Some of the Infoscions and the senior executive who go through this, have said that this is the best thing they have done in their entire Infosys career. This is a two-and-a-half-week program or distributed over a year which is done together with the Stanford Business School, and the Stanford School of Engineering and Design School. It has a huge impact on changing the perspective of our management team. As well as we have been investing in on-site learning and I think more and more as we focus on-site creating on-site environment for learning is going to be critical. Design thinking training has become an integral part of being at Infosys. We have now more than 130,000 Infoscion who have been trained on design thinking. By the end of this year I expect that we will be able to train everyone in the company on design thinking and similarly training in artificial intelligence, agile methodologies, and in all manner of new technologies is something that we continue to invest in. In a nutshell, two and half years going from $8.2 billion in revenue or so to now we just crossed $10 billion on a LTM basis, continued protection of margins and continued to ensure that we have strong margins. That is the philosophy of consistent profitable growth. Improvement in revenue per employee despite the pressure, the commoditization, improvement in the attrition and the sense of inspiration among the employees is growing into completely new dimensions. We are now achieving new scale in the new dimensions and highest client satisfaction ever. That is basically in a nutshell from a performance point of view where we are. As I look at the journey ahead, as happy as I am looking backwards to the last two and half years, the journey ahead is even more challenging and even more interesting one. Here is a quote from my friend Bill Ruh of GE. He will be here day after tomorrow talking at the NASSCOM Event. We have been doing some amazing work together on bringing design thinking and new kinds of solutions, co-innovation, to their platform. Here is a quote from the Head of HR and the Head of Technology for Visa on how we have helped establish a dramatic new culture as well as doing our traditional services. Visa has been one of our fastest growing accounts over the last two years and it is something that we are particularly proud of. So as I said, as good as it is to look backwards and feel good about this dimensions of improvement that we have managed to achieve, the world around us is growing through a very rapid transformation. I want to switch gears a little bit and share with you some thoughts on what I see happening in the world around us. It is difficult to capture what is going on in a nutshell, but it basically has three dimensions. A very deep rooted sense of end user centricity. When you look at retail or banking or insurance or communications and telco and so forth. In any industry there is a very deep rooted sense in which things are becoming more connected. A continuous connection to the end user and serving end user needs has never been more important. These experiences are now increasingly becoming digital but they are also being powered by AI technology. Underneath this enablement of the new experience is the intelligent infrastructure. It is governed by this exponential laws. 
I am calling it Moore's Laws, I know it is Moore's Law but I am calling it Moore's Laws because there are many of the similar laws not only in the performance improvement or cost performance improvement of processors but in many other dimensions of technology. We see these exponential price performance improvement. As a result of that, computing is becoming far more pervasive and that is all enabling and extreme efficiency of disinter-mediating existing industry. If we look into this in a little bit more detail, here are three projects done by our strategic design consulting team. I mentioned, in the new dimension of our strategy, this one service that we offer where we co-innovate with our clients on things that are the most important to their future. On the right-hand side is a 3D printed fully functioning replica of a low fidelity model of a GE engine that is an aircraft engine. The entire engine is 3D printed and with a Microsoft HoloLens, you can deeply examine the operational behavior of this engine. You can go back and change its design parameters, tune its design parameters and you can do predictive maintenance on it. All of that in real time. It actually has the potential to dramatically accelerate the lifecycle of these engines to dramatically redesign the nature of maintenance and repair on these engines. And also make the entire machine far more connected and far more intelligent, as it goes through its lifecycle. In the middle, you see a project that we have done with a very large agriculture company. This is a digital farm that we have built. Our team actually builds the farm. It is not only a digital farm, but it is a highly affordable digital farm that can be put together with extremely cheap components and it makes the plant itself connected. Supply of nutrient, supply of water, supply of light to the plants can be controlled thousands of times more effectively than in traditional agriculture. Instead of supplying water once a day or twice a day you can supply it two thousand times in a day by milliliters individually to the plant. The CEO of this company have told me that Vishal if you are able to do this, if you are able to connect into the plant itself, we will be able to improve agricultural yield by 30%. What we found in our experience was that some of the plants actually grow 10x faster than in the normal conditions, not only 30%. The entire Green Revolution that happened in our lifetimes, in my lifetime here in India, was at 22% improvement in agricultural yield. So, you can imagine what kind of a revolution this can create. On the left hand side is actually a store that we have built for one of our luxury retail client and this is an entire store that looks like a store from 1850's but it is completely digital. Every aspect of the store, every part of this is digital. It is like a complete room, it is a large computer that is programmable, those mirrors and the coat hangers, and the closets, and everything that you see is all completely digital. The mirror turns on when you put on a coat. It knows what coat it is. You can see yourself in different circumstances and things of this nature. So, all industries are going through this deep transformation because of a very deep rooted understanding of users enabled by a pervasive connectivity and pervasive computing. This is powered by Moore's Laws rapid advance. Moore's Law this year is going to be 52 years old,   two years older than me. It is one of the extraordinary feats of human engineering achievement. 
Many people say that Moore's Law is reaching its end. It is true that the traditional Moore's Law is reaching its end. On the upper right, you see a 7-nanometer wafer that TSMC has started to produce, though not yet at production scale. In early 2018, they will go into production on 7 nanometers. When I spent a summer at Intel working in the Intel AI Lab in 1990, I remember the one-micron process was going live; that was 1,000 nanometers. When I was building HANA at SAP we were at Intel's 22-nanometer process. Now it is down to 7 nanometers. This is an extraordinary thing. Of course, the reason people say that Moore's Law is going to end is that 7 nanometers is already getting pretty close to physical limits. There are two silicon atoms in 1 nanometer, so at 7 nanometers you are basically at the width of 14 or so silicon atoms. So, there is not much more that you can get out of this. Probably by 2022, 2023 we will have 4-nanometer, 5-nanometer processes, but that is as far as it will go.

But then other things are happening. On the left-hand side in the middle you see the neuromorphic board that has already been made by Jenn Hasler at Georgia Tech. These neuromorphic boards are ways of bypassing the traditional CMOS manufacturing process and going into new kinds of chip design. So, in some other new way, Moore's Law is going to continue post-2024. On the bottom middle you see the Nvidia AI box that has 170 teraflops of computing in it, with two 20-core Intel processors and 12 or so of these Tesla GPU chips inside it. This is already shipping, and a lot of the deep learning algorithms run on this kind of platform. And then of course on the bottom right you see the incredible growth that our partner AWS has had. That is a picture of a part of one of Amazon's datacenters. They have dozens of datacenters, each one with between 50,000 and 80,000 servers, and collectively somewhere between 3 million and 5 million servers in AWS. This is an astonishingly large amount of computing capacity, and AWS has seen 40%-45% growth in the last 12 months. So, what we see is that this entire digitization and AI revolution is being powered by computing that is simply exploding. Even though the traditional Moore's Law is nearing an end in the next seven to eight years, new kinds of architectures and new kinds of technologies are going to ensure that we grow significantly further beyond that.

As a result, new AI technologies are becoming possible. We all heard about AlphaGo from Google last year, which beat the world champion Go player. Recently the news came out that some students from CMU have put together a poker-playing robot. Now, why was Go significant last year? Because Deep Blue beat Garry Kasparov in the middle of the '90s, and that was more or less a brute-force search, a computational-capability exercise, because the computer can look ahead in the chess moves much further than the human brain can. Go is a different kind of a game. The computational complexity of Go is much worse than that of chess. With AlphaGo they were able to overcome that by bringing learning into it. A new frontier has been broken. Poker is based on human intuition, on deception, on bluffing and things like this. It has actually beaten four of the top poker players continuously and has earned something like $1.7 million as a poker player online. So, this is quite an interesting development.
On the left-hand side, you see the autonomous car from Toyota, which actually learns the driver's behavior as you drive the car and adjusts itself to that. So, this is a great advancement that has happened in autonomous driving. Udacity has a car that is fully programmable by the students of Udacity who take the autonomous driving car class on Udacity. On the right-hand side is OpenAI, which is a consortium for research in artificial intelligence in the public good. I am very proud that we at Infosys are one of the sponsors of OpenAI. Universe is a package that the OpenAI team has released recently. It now has 45 world-class researchers, and they are really doing some amazing work in artificial intelligence.

As a result of all of this, basically what we see is that it becomes much easier for a newcomer to come in and disrupt an industry. Big parts of the supply chain and of the value chain can be disrupted and can be digitized. The world of atoms, where we have been coming from, has huge numbers of intermediaries. As digitization and computing replace these atoms with bits, you can see that the value chain can be much more connected and much more zero distance. It can be much more efficient as a result of that, both in terms of pricing and in terms of production. So, this is what is going on.

As a result of that, when we look at the journey ahead for us, a lot more needs to be done. I studied artificial intelligence as a graduate student. There are two pieces of research that our team picked out. One is from McKinsey, done a few months ago, going through various kinds of jobs and the potential of automation in those jobs. The other is on the impact of automation in the IT-BPO industry, in our industry, done by HFS Research recently. And you can see that India, according to this study, has the biggest impact on jobs among all of these countries. There are dozens of studies like this. If you look you can find that recently, a couple of weeks ago, the McKinsey Global Institute published another one, and Erik Brynjolfsson and Andrew McAfee have done something as well. There are dozens and dozens of reports on automation. But really all you have to do is walk onto any one of our floors, at any company in the IT-BPO industry, and it becomes starkly clear as you walk around the floor that a huge number of these jobs are going to go away. It may not be two years, it might be four years, five years, seven years, ten years, but there is no doubt that these jobs are going to be replaced with automation.

So, what do we do? What is the nature of the journey that we are on? My sense is that there is only one way forward. Professor Mashelkar used to have this beautiful line, "doing more with less for more", and that is basically the nature of the endeavor. If you look at the complexity of human activity and plot that against time, the natural course of events is that we have to constantly be moving upward. That journey upwards is not only a journey upwards, it is actually accelerating. There is a very straightforward duality that we have to follow. On one hand, we have to eliminate our work through automation and improve our productivity, while on the other hand we have to deploy that improved productivity towards innovation. This is basically the formula.
The reason for automation is so that we can continue to do the work that has already been commoditized more efficiently and more productively, and use that improvement in productivity to become more innovative, to do the new kinds of things for which there is still tremendous opportunity and tremendous value. A self-driving car engineer today is worth millions of dollars. What if we were able to produce a hundred thousand autonomous-driving engineers? This continuous productivity improvement, while actually shifting continuously from things that are commoditizing towards things that are innovative, is the basic point, and it has to be fueled by education. This idea that we study for the first 21 years of our life and then we do not study any more, we have to abandon this idea, because technology will continually change. So we have to continuously re-skill ourselves. Thankfully we at Infosys have a tremendous advantage on this because we have always had a culture of learnability, like I said. So, the fueling of this automation and innovation journey on the basis of education is the key.

The other basis is software. It is the intellectual property. If all we did was move upwards on the basis of education, that alone would also become commoditized over time. Therefore, we have to take advantage of this improving productivity using software, using our software, software that is monetized. Instead of 10 people doing something, if you now get that same thing done with three people, and if you do not have your own software to do the work of the remaining seven, then you are losing value, and somebody else who made that software is capturing that value. Therefore, that software has to be monetizable. But for that software to be monetizable, it has to be world class. It is software that has to withstand the test of the outside world. It cannot just be a captive thing that you do only for yourself. So, having software together with education powering this transformation is something that is critical for our future. This is why the same Mana platform applies to the renewal of our services and to the breakthrough new business solutions. This is the reason why education for our traditional services as well as in the breakthrough new areas is something that is critical. So this, in essence, is the nature of our journey in the future.

John McCarthy, the father of artificial intelligence, once said in a lecture that "articulating a problem is half the solution". That lecture changed the course of my life. Within our lifetimes, we are going to see AI technology get to the point where a well-articulated problem, a well-specified problem, will be solved automatically. The human frontier at that point is going to be problem finding, creativity, the ability to look at a situation and see what is not there, to see what is missing, to see what can be innovated; that is our future. One way or the other it is going to happen. When I look at the 3.7 million people in our industry, the only future I see is a future that is fueled by automation and by getting ahead of this curve of automation. I see a future where automation enables us to be more innovative, to exercise our creativity, to exercise our ability to find problems, not only solve them. If we do that we will be okay; if we do not, we will get disrupted. This is the journey that I see in front of us. So, I do not know how long I went on, Uday. We laid out a clear strategy.
We have seen signs of early success but the world is going through a very rapid change driven by AI and Technology. Much more is still to be done to lead in the next generation of IT and with focused execution we will do it. Thank you. Thank you, Mr. Sikka. Okay, we have the first question already. Very nice presentation. My first question is that, how much of the value in this transformation will be captured by the software versus the services company? I understand that Infosys is trying to adopt software plus services model but when you look at the overall value chain it is the software company that are gaining bigger and bigger share of the overall value in this transformation. Our core business is services. We are a services company. But it is the software just like there is software-defined networking and software-defined datacenter and software-defined in retail and similarly there is software-defined services. So, we are talking about amplifying our services ability through the use of software, through the use of automation and by moving up in the value chain. So it is both, the value of the software itself will be there but it is more in services, where the real power will come from. And you see that in the numbers already. The productivity improvement in the first nine months of this financial year, compared to the first nine months of the last financial year. Last financial year in the first nine months we hired 17,500 people, this financial year's first nine months we hired something like 5,500 people, and yet we improved utilization, we improved revenue and RPE and so on and so forth. So, more with less for more, powered by software and education. The second question that I have is something which is on top of everyone's mind, that how is your relationship with the founders and what does it mean for Infosys? My relationship with the founders, it is wonderful. I meet Mr. Murthy quite frequently. I do not meet the other founders quite as frequently. I ran into Kris the other day in a Lufthansa flight and I have not seen Nandan for more than a year. But it is an amazing relationship. I have a heartfelt warm relationship with Mr. Murthy. I probably meet him four, five, six times a year, something like this. He is an incredible man, we usually talk about quantum physics and things like this and technology. He was telling me the other day about the Paris metro and how he worked on the Paris metro in the 1970s before he started Infosys. It had this whole idea of automation of autonomous driving and things like this. So, all this drama that has been doing on in the media, it is very distracting. It takes our attention, but underneath that there is a very strong fabric that this company is based on and it is a real privilege for me to be its leader. Hi Vishal, great presentation. I was curious if you could talk about, from your personal experiences over the last two and a half years, you have been turning around a really large ship. What have been your deepest personal experiences, what have been most difficult to accomplish, what have been easy to accomplish? And in light of that, if you think I were to ask the same question from the leaders of the smaller mid-tier IT services companies, what do you think they would likely say? The second question I cannot answer. Compared to two and a half years ago, to me, it is disappointing to see the rate of growth and embrace of automation in the industry. I think more should have been done by now, but that is my opinion. 
When it comes to Infosys, it has actually been quite counter intuitive. I would not have imagined that zero distance would get picked up the way it did. One month after I started, I think September or October of 2014, this customer satisfaction survey came out, and it was incredibly depressing. I actually got the details of customers, the client's feedback, it was a pile of paper that thick, and I read it over a period of one month. I read every single one of them. Consistently it was the same story that we used to get high marks on quality, on delivery excellence, on being responsive, being professional. And we would get the lowest scores on strategic relevance, on innovation, on being proactive, and it was a shock to me that that was the case. So, a lot of the ideas for Zero Distance formed in those four, five months, and by March of 2015 I launched it. Within nine months we had managed to create a very grass roots oriented culture around that. Recently, Ravi, our Head of Delivery, was talking to some youngsters and they told him that it is not a new thing for us anymore, we just assume that it is a part of our job to come up with something innovative in whatever project that we are doing. That to me has been the biggest positive. The bringing of design thinking at a very massive scale has been surprisingly easy because of the scale. This comes from Mr. Murthy, the scale that was setup for education employees. The new, it is always difficult to embrace the new. So, I think bringing new in, in particular, letting go of your existing way of doing things and embracing. If a piece of software automates the work that you do, then embracing that piece of software and letting go of some of the manual work. It is something that is visible to you, that is something that is very difficult for people to do, and so that has been a hard thing. Some things that I thought were going to be much easier have turned out to be not so easy, some that I thought would be difficult have turned out to be much easier than I thought. In your Vision 2020, where we are in that journey? Would inorganic be a big component of it, how products will contribute? And given the recent noise in media, how confident you are seeing through that journey? How confident am I of what? See, that $20 billion, 30%, $80,000 revenue per employee, has always been an aspiration for us. What good is an aspiration if it is not aspirational? So, we are marching down that path. When you launch something like training people on design thinking, you cannot say that we will get 100,000 people trained in the next one year or two years, you have to setup a grassroots movement and then let it go and see what happens. When you do it like that, usually it surprises you. These numbers are a consequence of the work that we do, they are not that you figure out some way to get to that number. You do the right thing and then if you do the right thing, these numbers are the footprint that we leave behind. They are the consequences of the work that we do. So, this is how we look at it. Where are we on this journey? We are still early in this journey, we still have a lot of things to deal with, we have to get the consulting endeavor right, and we have to get BPO transformed. If we talk about the Infosys transformation as a whole, BPO transformation is even more difficult because it is even lower in the ranks and the skill levels and things like this. Many of the software assets have to still be transformed, like Finacle and so forth. 
We are still in the early stages of that. But generally, in the renewal of the core services and the adoption of the new services, I feel very good about where we are. So now as we look ahead, a new financial year is starting and so forth, and we are going through this exercise of trying to think about what the journey ahead looks like, taking stock of the last two and a half years and figuring out what the path ahead is going to look like in terms of both the software as well as the services, and the services amplified by software and so on.

How big would the inorganic component be? I have to stop you at one question, because we are running out of time. I am going to come to this side of the hall and then back to you, sir.

Vishal, I understand that you are going through two huge transitions, or transformations as we might call them. One is on the technology side, where you articulated that if you walk through a floor, half the jobs would not be there in five years. On the other side, you are going through a cultural transition or transformation: a founder-run company that moved into your hands and the obvious changes that play out. I am sure this is creating a bunch of insecurity at the organization level through the different ranks. So two questions here: how do you communicate, and what do you communicate through the ranks, to ensure that people are calm and focused on the job? And the second is, what are the two or three key pieces of your strategy? As you are saying, two of our key foundation stones are moving around and we are probably shaking a bit. How do you make sure you anchor the ship better and navigate through this?

I think continuous and intense communication is very important across different parts of the organization and so on. That is something that cannot be overstated. You need to just communicate, communicate and communicate. And in our case it is not easy, because the onsite employees are difficult to communicate to. They are inside their client context and so forth. So it is not so straightforward. The other part of it is that you have to demonstrate by example; talk is cheap, but you have to really demonstrate in work that you are focused, that you are ignoring the noise and the distractions and so forth, and continuing to stay engaged. That said, there are natural latencies that our brains have and there is not much you can do about that. I think that organizational change, organizational transformation, is important. A lot of people would ask why go through this exercise of taking the leadership team through two years of a Stanford program and things like that, instead of just changing the leadership team. But if you change the leadership team, then what happens to the layers below? You create a complete disruption, not to mention that you miss the point of the transformation that is supposed to happen. So, I think that creating a context, creating an atmosphere which elevates everybody, is something that is exceedingly important. Alan Kay says that context is worth 80 IQ points. So, if we build a real context around us that elevates everybody, that can have the biggest transformative effect, but that takes time. The benefit of that approach is that it is a very sustainable, long-term, lasting approach. The drawback is that it takes time. It is easy to plant a company here or there, buy some companies and put them inside, hire some leaders and put them inside, but that is like appendages growing out of control out of you.
That is not sustained, organic transformation in the true sense. We have time exactly for one more brief question and one more brief answer. So try and keep your question as brief as you can. My question is on buyback. Some of the founders have said that they have written letters to the Board on this issue and they have not heard anything. Just your views on how you plan to utilize the cash? The official answer is the Board from time to time will consider capital allocation policies. In the last four years we have reviewed twice, we have improved dividend payout, once from 30% to 40% and then again from 40% to 50%. When we have something to report, we will report it. Jayesh, did I miss something? Did I do a good job? But that was an official answer. The unofficial answer is that you look at the business circumstances over the next four, five years and then you look at what you need the capital for. Then based on that you decide. We will do that as necessary. In our case, it is basically three things, there are strategic growth initiatives, and there is a capital for buildings and infrastructure and so forth to house the employees. Then there is the investments, the acquisitions. I am not interested in buying yesterday's technology to make something or the other look good. We are interested in tomorrow's technology. And tomorrow's technology is usually expensive if it is really good. So, you have to be very, very selective in that. Based on how that mix changes over the next five years and based on that you take a decision on how to utilize the cash. We have done this with 50% so far and as we go through this exercise we will take a look at it. Nobody asked about visas or Donald Trump etc Would you like to Suo Moto answer the question about Visas and Donald Trump? No, there is nothing that has happened so far. Okay. I am afraid that concludes our session. Thank you, Dr. Sikka. Thank you for being a very responsive audience. This is a disclosure announcement from PR Newswire.


News Article | February 15, 2017
Site: www.realclimate.org

The Fourth Santa Fe Conference on Global & Regional Climate Change will be held on Feb 5-10, 2017. It is the fourth in a series organized and chaired by Petr Chylek of Los Alamos National Laboratory (LANL) and takes place intervals of 5 years or thereabouts. It is sponsored this year by LANL’s Center for Earth and Space Science and co-sponsored by the American Meteorological Society. I attended the Third in the series, which was held the week of Oct 31, 2011. I reported on it here in my essay “Climate cynicism at the Santa Fe conference”. In that report, I described my experiences and interactions with other attendees, whose opinions and scientific competence spanned the entire spectrum of possibility. Christopher Monckton represented one extreme end-member, with no scientific credibility, total denial of facts, zero acknowledgment of uncertainty in his position, and complete belief in a global conspiracy to promote a global warming fraud. At the opposite end were respected professional climate scientists at the top of their fields, such as Richard Peltier and Gerald North. Others, such as Fred Singer and Bill Gray, occupied different parts of the multi-dimensional phase space, having credentials but also having embraced denial—each for their own reasons that probably didn’t intersect. For me, the Third Conference represented an opportunity to talk to people who held contrary opinions and who promoted factually incorrect information for reasons I did not understand. My main motivation for attending was to engage in dialogue with the contrarians and deniers, to try to understand them, and to try to get them to understand me. I came away on good terms with some (Bill Gray and I bonded over our common connection to Colorado State University, where I was an undergraduate physics student in the 1970s) but not so much with others. I was ambitious and submitted four abstracts. I and my colleagues were pursuing uncertainty quantification for climate change in collaboration with other DOE labs. I had been collaborating on several approaches to it, including betting markets, expert elicitation, and statistical surrogate models, so I submitted an abstract for each of those methods. I had also been working with Lloyd Keigwin, a senior scientist and oceanographer at Woods Hole Oceanographic Institution and another top-of-his-field researcher. We submitted an abstract together about his paleotemperature reconstruction of Sargasso Sea surface temperature, which is probably the most widely reproduced paleoclimate time series other than the Mann et al. “Hockey Stick” graph. I had updated it with modern SST measurements, and in our abstract we pointed out that it had been misused by contrarians who had removed some of the data, replotted it, and mislabeled it to falsely claim that it was a global temperature record showing a cooling trend. The graph continues to make appearances. On March 23, 2000, ExxonMobil took out an advertisement in the New York Times claiming that global warming was “Unsettled Science”. The ad was illustrated with a doctored version of Lloyd’s graph (the inconvenient modern temperature data showing a warming trend had been removed). This drawing was very similar to one that had been generated by climate denier Art Robinson and his son for a Wall Street Journal editorial a couple months earlier. It wasn’t long before other distorted versions started showing up elsewhere, such as the Albuquerque Journal opinion page. 
The 2000 ExxonMobil version was just entered into the Congressional Record last week by Senator Tim Kaine during the Tillerson confirmation hearings. In 2011, my abstracts on betting, expert elicitation, and statistical models were all accepted, and I presented them. But the abstract that Lloyd and I submitted was unilaterally rejected by Chylek who said, “This Conference is not a suitable forum for [the] type of presentations described in [the] submitted abstract. We would accept a paper that spoke to the science, the measurements, the interpretation, but not simply an attempted refutation of someone else’s assertions (especially when made in unpublished reports and blog site).” The unpublished report he spoke of was the NIPCC/Heartland Institute report, which Fred Singer was there to discuss. After the conference, I spoke to one of the co-chairs about the reasons for the rejection. He said that he hadn’t seen it and did not agree with the reasons for the rejection. He encouraged Lloyd and me to re-submit it again for the 4th conference. So we did. Lloyd sent the following slightly-revised version on January 4. Keigwin (Science 274:1504–1508, 1996) reconstructed the SST record in the northern Sargasso Sea to document natural climate variability in recent millennia. The annual average SST proxy used δ18O in planktonic foraminifera in a radiocarbon-dated 1990 Bermuda Rise box core. Keigwin’s Fig. 4B (K4B) shows a 50-year-averaged time series along with four decades of SST measurements from Station S near Bermuda, demonstrating that at the time of publication, the Sargasso Sea was at its warmest in more than 400 years, and well above the most recent box-core temperature. Taken together, Station S and paleotemperatures suggest there was an acceleration of warming in the 20th century, though this was not an explicit conclusion of the paper. Keigwin concluded that anthropogenic warming may be superposed on a natural warming trend. In a paper circulated with the anti-Kyoto “Oregon Petition,” Robinson et al. (“Environmental Effects of Increased Atmospheric Carbon Dioxide,” 1998) reproduced K4B but (1) omitted Station S data, (2) incorrectly stated that the time series ended in 1975, (3) conflated Sargasso Sea data with global temperature, and (4) falsely claimed that Keigwin showed global temperatures “are still a little below the average for the past 3,000 years.” Slight variations of Robinson et al. (1998) have been repeatedly published with different author rotations. Various mislabeled, improperly-drawn, and distorted versions of K4B have appeared in the Wall Street Journal, in weblogs, and even as an editorial cartoon—all supporting baseless claims that current temperatures are lower than the long term mean, and traceable to Robinson’s misrepresentation with Station S data removed. In 2007, Robinson added a fictitious 2006 temperature that is significantly lower than the measured data. This doctored version of K4B with fabricated data was reprinted in a 2008 Heartland Institute advocacy report, “Nature, Not Human Activity, Rules the Climate.” On Jan. 9, Lloyd and I got a terse rejection from Chylek: “Not accepted. 
The committee finding was that the abstract did not indicate that the presentation would provide additional science that would be appropriate for the conference.”

I had also submitted an abstract with Stephen Lewandowsky and James Risbey called “Bets reveal people’s opinions on climate change and illustrate the statistics of climate change,” and a companion poster entitled “Forty years of expert opinion on global warming: 1977-2017,” in which we proposed to survey the conference attendees:

Forecasts of anthropogenic global warming in the 1970s (e.g., Broecker, 1975; Charney et al., 1979) were taken seriously by policy makers. At that time, climate change was already broadly recognized within the US defense and intelligence establishments as a threat to national and global security, particularly due to climate’s effect on food production. There was uncertainty about the degree of global warming, and media-hyped speculation about global cooling confused the public. Because science-informed policy decisions needed to be made in the face of this uncertainty, the US Department of Defense funded a study in 1977 by National Defense University (NDU) called “Climate Change to the Year 2000,” in which a panel of experts was surveyed. Contrary to the recent mythology of a global cooling scare in the 1970s, the NDU report (published in 1978) concluded that, “Collectively, the respondents tended to anticipate a slight global warming rather than a cooling.” Despite the rapid global warming since 1977, this subject remains politically contentious. We propose to use our poster presentation to survey the attendees of the Fourth Santa Fe Conference on Global and Regional Climate Change and to determine how expert opinion has changed in the last 40 years.

I had attempted a similar project at the 3rd conference with my poster “Comparison of Climate Forecasts: Expert Opinions vs. Prediction Markets,” in which my abstract proposed the following: “As an experiment, we will ask participants to go on the record with estimates of probability that the global temperature anomaly for calendar year 2012 will be equal to or greater than x, where x ranges in increments of 0.05 °C from 0.30 to 1.10 °C (relative to the 1951-1980 base period, and published by NASA GISS).” I included a table for participants to fill in, and even printed extra sheets to tack up on the board with my poster so I could compile the responses and report them later. This idea was a spinoff of work I had presented at an unclassified session of the 2006 International Conference on Intelligence Analysis, based on my research in support of the US intelligence community, where a broad spectrum of opinion must be used to generate an actionable consensus from incomplete or conflicting information. That was certainly the case in Santa Fe, where there were individuals (e.g., Don Easterbrook) who were going on record with predictions of global cooling.

By the last day of the conference, several individuals had filled in the table with their probabilistic predictions, and I decided to leave my poster up until the end of the day, which was how long posters could be displayed according to the conference program. I wanted to plug it during my oral presentation on prediction markets so that I could get more participation. Unfortunately, when I returned to the display room, my poster had been removed. Hotel employees did not know where it was, and the diverse probability estimates were lost. This year I would be more careful, as announced in my abstract.
But the committee would have no part of it. On Jan 10 I got my rejection letter:

Of the hundreds of abstracts I’ve submitted, this is the only conference that’s ever rejected one. As a frequent session convener and program committee chair myself, I am accustomed to providing poster space for abstracts that I might question, misunderstand, or disagree with. It has never occurred to me to look at the publication list of a poster presenter. But if I were to do that, I would be more thorough and look at other information, including their coauthors’ publication lists and CVs as well. In this case, the committee might have discovered more than a few papers by one of them on the subject, such as Risbey and Kandlikar (2002), “Expert Assessment of Uncertainties in Detection and Attribution of Climate Change,” in the Bulletin of the American Meteorological Society, or that Prof. Risbey was a faculty member in Granger Morgan’s Engineering and Public Policy department at CMU for five years, a place awash in expert elicitation on climate. (I sent my abstract to Prof. Morgan, whom I know from my AGU uncertainty quantification days, for his opinion before submitting it to the conference.)

At the very least, I would look at the previous work cited in the abstract. Had the committee done so, they would not have been puzzled by how to transform survey data into probabilistic projections. They would have learned that the 1978 NDU study we cited had already established the methodology we were proposing to use. The NDU “Task I” was “To define and estimate the likelihood of changes in climate during the next 25 years…” using ten survey questions described in Chapter One (Methodology). The first survey question was on average global temperature. So the legitimacy of the method we were planning to use was established 40 years ago.

I concluded after the 3rd Santa Fe conference that cynicism was the only attribute shared by the minority of attendees who were deniers, contrarians, publicity-seekers, enablers, or provocateurs. I now think that cynicism has something in common with greenhouse gases. Cynicism begets cynicism, to the detriment of society. There are natural-born cynics, and if they turn the rest of us into cynics then we become their amplifiers, just as water vapor is an amplifier of carbon dioxide’s greenhouse effect. We become part of a cynical feedback loop that generates distrust in science and the scientific method. I refuse to let that happen. I might have gotten a little steamed by an unfair or inappropriate rejection, but I’ve cooled off and my induced cynicism has condensed now. I am not going to assume that everyone is a cynic just because of a couple of misguided and misinformed decisions. As President Obama said in his farewell address, “If you’re tired of arguing with strangers on the Internet, try talking with one of them in real life.”

So if you are attending the Santa Fe conference, I would like to meet with you. If you are flying into Albuquerque, where I live, drop me a line. Or meet me for a drink or dinner in Santa Fe. I can show you why Lloyd’s research really does provide additional science that is relevant to the conference. I can try to convince you that prediction markets are indeed superior to expert elicitation in their ability to forecast climate change. Maybe I can even talk you into going on record with your own probabilistic global warming forecast!


News Article | November 4, 2016
Site: www.sciencedaily.com

An international research team led by Carnegie Mellon University has found that when the brain "reads" or decodes a sentence in English or Portuguese, its neural activation patterns are the same. Published in NeuroImage, the study is the first to show that different languages have similar neural signatures for describing events and scenes. By using a machine-learning algorithm, the research team was able to learn the relationship between sentence meaning and brain activation patterns in English and then recognize sentence meaning based on activation patterns in Portuguese. The findings can be used to improve machine translation, brain decoding across languages and, potentially, second language instruction.

"This tells us that, for the most part, the language we happen to learn to speak does not change the organization of the brain," said Marcel Just, the D.O. Hebb University Professor of Psychology and pioneer in using brain imaging and machine-learning techniques to identify how the brain deciphers thoughts and concepts. "Semantic information is represented in the same place in the brain and the same pattern of intensities for everyone. Knowing this means that brain-to-brain or brain-to-computer interfaces can probably be the same for speakers of all languages," Just said.

For the study, 15 native Portuguese speakers (eight of whom were bilingual in Portuguese and English) read 60 sentences in Portuguese while in a functional magnetic resonance imaging (fMRI) scanner. A CMU-developed computational model was able to predict which sentences the participants were reading in Portuguese, based only on activation patterns. The computational model uses a set of 42 concept-level semantic features and six markers of the concepts' roles in the sentence, such as agent or action, to identify brain activation patterns in English. The model predicted which sentences were read in Portuguese with 67 percent accuracy.

The resulting brain images showed that the activation patterns for the 60 sentences were in the same brain locations and at similar intensity levels for both English and Portuguese sentences. Additionally, the results revealed that the activation patterns could be grouped into four semantic categories, depending on the sentence's focus: people, places, actions and feelings. The groupings were very similar across languages, reinforcing the finding that the organization of information in the brain is the same regardless of the language in which it is expressed.

"The cross-language prediction model captured the conceptual gist of the described event or state in the sentences, rather than depending on particular language idiosyncrasies. It demonstrated a meta-language prediction capability from neural signals across people, languages and bilingual status," said Ying Yang, a postdoctoral associate in psychology at CMU and first author of the study.

Discovering that the brain decodes sentences the same way in different languages is one of the many brain research breakthroughs to happen at Carnegie Mellon. CMU has created some of the first cognitive tutors, helped to develop the Jeopardy-winning Watson, founded a groundbreaking doctoral program in neural computation, and is the birthplace of artificial intelligence and cognitive psychology. Building on its strengths in biology, computer science, psychology, statistics and engineering, CMU launched BrainHub, an initiative that focuses on how the structure and activity of the brain give rise to complex behaviors.
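The article describes the decoding pipeline only at a high level: semantic feature vectors (42 concept-level features plus six role markers), an encoding model fit on English activation data, and sentence identification from Portuguese activation patterns. The following is a minimal sketch of how such a cross-language decoder can work, using ridge regression and nearest-pattern matching on synthetic stand-in data; the array shapes, feature values, and the shared feature-to-voxel mapping are illustrative assumptions, not the study's actual model or data.

```python
# Minimal sketch of cross-language sentence decoding (not the authors' code).
# Hypothetical setup: S holds a semantic feature vector per sentence; X_en and
# X_pt hold per-sentence fMRI activation patterns in each language. The toy
# data is generated with a shared feature-to-voxel mapping, mimicking the
# claim that both languages use the same neural code.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_sent, n_vox, n_feat = 60, 2000, 48          # 42 concept features + 6 role markers

S = rng.normal(size=(n_sent, n_feat))          # hypothetical semantic features
W_shared = rng.normal(size=(n_feat, n_vox))    # shared feature-to-voxel mapping
X_en = S @ W_shared + 0.5 * rng.normal(size=(n_sent, n_vox))
X_pt = S @ W_shared + 0.5 * rng.normal(size=(n_sent, n_vox))

# Fit an encoding model on English data: predict activation from features.
encoder = Ridge(alpha=1.0).fit(S, X_en)
predicted = encoder.predict(S)                 # predicted pattern per sentence

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Decode Portuguese scans: match each observed pattern to the most similar
# predicted pattern, without ever training on Portuguese data.
correct = sum(
    int(np.argmax([cosine(x, p) for p in predicted]) == i)
    for i, x in enumerate(X_pt)
)
print(f"cross-language decoding accuracy: {correct / n_sent:.2f}")
```

The point the toy example reproduces is that the feature-to-activation mapping is fit in one language and applied unchanged to decode the other; decoding succeeds only to the extent that the two languages share that mapping.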


A team from Carnegie Mellon University (CMU) has characterized the intermediate volatility organic compound (IVOC) emissions from on-road gasoline vehicles (LDGVs) and small off-road gasoline engines (SOREs). Although IVOC emissions correspond to only approximately 4% of NMHC emissions from on-road vehicles over the cold-start unified cycle, they are estimated to produce as much or more secondary organic aerosol (SOA) than single-ring aromatics. SOA is an important component of atmospheric particulate matter. The researchers said their results clearly demonstrate that IVOCs from gasoline engines are an important class of SOA precursors. Their paper, published in the ACS journal Environmental Science & Technology, provides observational constraints on IVOC emission factors and chemical composition to facilitate their inclusion into atmospheric chemistry models.

Emissions from on-road light-duty gasoline vehicles (LDGVs) are a major contributor to secondary organic aerosol (SOA) in urban environments. Single-ring aromatic compounds are traditionally thought to be the dominant class of SOA precursors emitted from LDGVs. However, only a small fraction of SOA formed during photo-oxidation of dilute LDGV exhaust in a smog chamber is explained by single-ring aromatic compounds. Gordon et al. hypothesized that the unexplained SOA production was due to emissions of additional SOA precursors, such as intermediate volatility organic compounds (IVOCs), not commonly included in atmospheric chemistry models. … In this study, IVOC emissions from LDGVs and SOREs were collected during chassis and engine dynamometer testing. Adsorbent samples were comprehensively analyzed to provide quantitative information on the mass, volatility, and chemical composition of IVOCs. The relationships between IVOCs and other pollutants were examined to develop methods for estimating IVOC emissions from existing data. SOA production from measured IVOCs was predicted and compared to that from single-ring aromatic compounds and SOA measured in photo-oxidation experiments with dilute exhaust.

The CMU team measured IVOC emissions from LDGVs and SOREs during chassis and engine dynamometer testing at the California Air Resources Board (CARB) Haagen-Smit Laboratory. The LDGV test fleet comprised 42 vehicles recruited from the in-use California fleet and spanned a wide range of model years (1984−2012), vehicle types, engine technologies and aftertreatment technologies. All LDGVs were tested using commercial summertime California gasoline and the cold-start unified cycle. Five LDGVs were also tested using hot-start cycles to investigate the impacts of driving cycles on IVOC emissions. Three of these vehicles were tested using a hot-start UC; the other two vehicles were tested using both an arterial and a freeway cycle. IVOC emissions were also measured from five SOREs used in lawn and garden equipment, including two 2-stroke and three 4-stroke engines manufactured between 2002 and 2006. The IVOCs were quantified through gas chromatography/mass spectrometry analysis of adsorbent samples collected from a constant volume sampler.
Speciated IVOCs included n-alkanes, branched alkanes, n-alkylcyclohexanes, n-alkylbenzenes, and unsubstituted and substituted PAHs; in total, however, these species contributed, on average, less than 20% of the total (speciated + unspeciated) IVOC emissions. Naphthalene and substituted naphthalenes dominated the emissions of speciated IVOCs. Unspeciated IVOCs (the unresolved complex mixture, UCM) accounted for 83% to 89%, on average, of the total IVOC emissions. These unspeciated IVOCs were quantified as two chemical classes (unspeciated branched alkanes and cyclic compounds).

The team found that IVOC emission factors (mg kg-fuel⁻¹) from on-road vehicles varied widely from vehicle to vehicle, but showed a general trend of lower emissions for newer vehicles that met more stringent emission standards. IVOC emission factors for 2-stroke off-road engines were substantially higher than for 4-stroke off-road engines and on-road vehicles. Despite large variations in the magnitude of emissions, the IVOC volatility distribution and chemical characteristics were consistent across all tests, and IVOC emissions were strongly correlated with nonmethane hydrocarbons (NMHCs), primary organic aerosol and speciated IVOCs.

The researchers had earlier characterized diesel IVOC emissions using the same techniques they used in the newer study of gasoline vehicles. Comparing the results, they found important differences between gasoline and diesel IVOC emissions.

Our results demonstrate that IVOCs are an important class of organic emissions from LDGVs and SOREs, adding to the growing body of research on the importance of IVOC emissions from combustion sources. Although IVOCs only correspond to a small fraction of the total NMHC emissions, they contribute as much or more SOA than traditional precursors such as single-ring aromatics (SRAs) because of their high SOA yields. Therefore, inclusion of IVOCs into SOA models should substantially improve the agreement between predicted and measured SOA in the atmosphere. The consistency in the IVOC volatility distribution and chemical composition across the set of tests and the well-constrained ratios of IVOCs to other pollutants not only simplifies the parametrization of IVOCs for use in SOA models, but also can be used to develop IVOC emission inventories.
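The comparison in the closing paragraph comes down to simple arithmetic: SOA production per kilogram of fuel is the sum over precursor classes of emission factor times SOA yield, so a class with a modest emission factor but a high yield can rival one with a large emission factor and a low yield. The sketch below shows that calculation; every number in it is a placeholder chosen for illustration, not a value from the CMU study.

```python
# Back-of-the-envelope SOA production estimate (illustrative only; the emission
# factors and yields below are placeholders, not values from the CMU study).
# SOA (mg per kg fuel) = sum over precursor classes of emission_factor * yield.

emission_factors = {          # mg of precursor emitted per kg of fuel (hypothetical)
    "single_ring_aromatics": 800.0,
    "speciated_IVOCs": 40.0,
    "unspeciated_IVOCs": 200.0,
}
soa_yields = {                # mass fraction converted to SOA (hypothetical)
    "single_ring_aromatics": 0.10,
    "speciated_IVOCs": 0.30,
    "unspeciated_IVOCs": 0.30,
}

soa = {k: emission_factors[k] * soa_yields[k] for k in emission_factors}
for name, value in soa.items():
    print(f"{name:>24s}: {value:7.1f} mg SOA per kg fuel")
print(f"{'total':>24s}: {sum(soa.values()):7.1f} mg SOA per kg fuel")
```

With these placeholder numbers the IVOC classes together produce roughly as much SOA as the aromatics despite emitting far less mass, which is the qualitative point the article makes about high-yield precursors.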


Gupta M., Indian Institute of Technology Delhi | Peng R., CMU
Proceedings - Annual IEEE Symposium on Foundations of Computer Science, FOCS | Year: 2013

We present the first data structures that maintain near-optimal maximum cardinality and maximum weighted matchings on sparse graphs in sublinear time per update. Our main result is a data structure that maintains a (1+ε)-approximation of maximum matching under edge insertions/deletions in worst-case O(√m·ε⁻²) time per update. This improves the 3/2-approximation given by Neiman and Solomon [20], which runs in similar time. The result is based on two ideas. The first is to re-run a static algorithm after a chosen number of updates to ensure approximation guarantees. The second is to judiciously trim the graph to a smaller equivalent one whenever possible. We also study extensions of our approach to the weighted setting, and combine it with known frameworks to obtain arbitrary approximation ratios. For a constant ε and for graphs with edge weights between 1 and N, we design an algorithm that maintains a (1+ε)-approximate maximum weighted matching in O(√m·log N) time per update. The only previous result for maintaining weighted matchings on dynamic graphs has an approximation ratio of 4.9108, and was shown by Anand et al. [2], [3]. Copyright © 2013 by The Institute of Electrical and Electronics Engineers, Inc.
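The abstract's first idea, re-running a static algorithm after a bounded number of updates, rests on the observation that each edge insertion or deletion changes the size of the optimum matching by at most one, so a near-optimal matching stays near-optimal for roughly ε·|M| updates. The sketch below illustrates that amortization; it uses a simple greedy maximal matching as a stand-in for the static (1+ε)-approximation routine and omits the paper's graph-trimming step, so it conveys the structure of the argument rather than the actual O(√m·ε⁻²) guarantee.

```python
# Sketch of the "lazy rebuild" idea: keep a matching computed by a static
# routine and recompute it only after enough updates have accumulated that the
# approximation could degrade. Greedy maximal matching (a 2-approximation)
# stands in for the paper's static (1+eps) algorithm; illustrative only.

class LazyMatching:
    def __init__(self, eps=0.1):
        self.eps = eps
        self.adj = {}                    # vertex -> set of neighbors
        self.matching = set()            # frozenset({u, v}) edges
        self.updates_since_rebuild = 0

    def _greedy_matching(self):
        matched, matching = set(), set()
        for u, nbrs in self.adj.items():
            if u in matched:
                continue
            for v in nbrs:
                if v not in matched:
                    matching.add(frozenset((u, v)))
                    matched.update((u, v))
                    break
        return matching

    def _maybe_rebuild(self):
        # Each update changes the optimum matching size by at most 1, so the
        # stored matching stays near-optimal for about eps * |M| updates.
        budget = max(1, int(self.eps * len(self.matching)))
        if self.updates_since_rebuild >= budget:
            self.matching = self._greedy_matching()
            self.updates_since_rebuild = 0

    def insert(self, u, v):
        self.adj.setdefault(u, set()).add(v)
        self.adj.setdefault(v, set()).add(u)
        self.updates_since_rebuild += 1
        self._maybe_rebuild()

    def delete(self, u, v):
        self.adj.get(u, set()).discard(v)
        self.adj.get(v, set()).discard(u)
        self.matching.discard(frozenset((u, v)))   # keep the matching valid
        self.updates_since_rebuild += 1
        self._maybe_rebuild()
```

A smaller ε forces more frequent rebuilds in exchange for a tighter approximation, which is the same trade-off that surfaces as the ε⁻² factor in the paper's update time.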


News Article | November 4, 2016
Site: www.theguardian.com

A team of researchers from Pittsburgh’s Carnegie Mellon University have created sets of eyeglasses that can prevent wearers from being identified by facial recognition systems, or even fool the technology into identifying them as completely unrelated individuals. In their paper, Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition, presented at the 2016 Computer and Communications Security conference, the researchers present their system for what they describe as “physically realisable” and “inconspicuous” attacks on facial biometric systems, which are designed to exclusively identify a particular individual.

The attack works by taking advantage of differences in how humans and computers understand faces. By selectively changing pixels in an image, it’s possible to leave the human-comprehensible facial image largely unchanged, while flummoxing a facial recognition system trying to categorise the person in the picture. Where the researchers struck gold was in realising that a large (but not overly large) pair of glasses could act to “change the pixels” even in a real photo. By picking a pair of “geek” frames, with relatively large rims, the researchers were able to obscure about 6.5% of the pixels in any given facial picture. Printing a pattern over those frames then had the effect of manipulating the image. But because computers don’t read faces the same way people do, the patterns printed over the frames look to an untrained eye like a regular, if garish, tortoiseshell pattern. They’re cheap too: the researchers were able to print the pattern for just $0.22 (£0.18) per frame, using a normal photo printer.

The end result is impressive. The glasses were able to fool both the commercial facial recognition software Face++ and a more specific model trained exclusively on five researchers and five celebrities. With just the pair of glasses on their faces, the researchers were able to successfully prevent the software from recognising their faces at all, as well as impersonate each other and celebrities including Milla Jovovich and Carson Daly.

The work is not without its limitations. The researchers warn that “the variations in imaging conditions that we investigate in this work are narrower than can be encountered in practice”. The researchers took photos in a room with no external windows to control lighting, for instance. But they point out that many uses of facial recognition software, including biometric entry to a building, have similarly limited variations. In other cases, that control is lost, of course: “An attacker may not be able to control the lighting or her distance from the camera when [a facial recognition system] is deployed in the street for surveillance purposes,” the researchers say. If you’re hoping to wear the glasses at boozy parties to fool your friends’ auto-tagging … well, as the researchers say, “the notion of inconspicuousness is subjective”. That is: someone is still going to ask why you’re wearing those stupid glasses.

The CMU team aren’t the first to demonstrate unusual hybrids of fashion and anti-surveillance tech. Artist and technologist Adam Harvey first demonstrated his CV Dazzle face-detection camouflage in 2010, which is subtle in its own way: rather than trying to disguise the anti-surveillance system as conventional eyewear, Harvey hides it in plain sight.
Bold makeup and hairstyles serve to baffle facial recognition technology while appearing to a human observer not as a subversive anti-tech kit, but as an outlandish style choice.
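The "selectively changing pixels" idea described above, confined to the region covered by the glasses frames, corresponds to optimising an adversarial perturbation under a spatial mask. The sketch below is a generic masked gradient attack against an arbitrary differentiable face classifier, written in PyTorch; it is not the CMU authors' published method or code, and `model`, `mask`, and `target_class` are placeholders the caller would supply.

```python
# Sketch of a mask-restricted adversarial perturbation: gradient steps that push
# the classifier toward a chosen impersonation target, with changes confined to
# a binary "glasses" mask. Generic illustration, not the paper's attack.
import torch
import torch.nn.functional as F

def masked_attack(model, image, mask, target_class, steps=100, step_size=0.01):
    """image: (1, 3, H, W) tensor in [0, 1]; mask: (1, 1, H, W) binary tensor."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        adv = torch.clamp(image + delta * mask, 0.0, 1.0)
        logits = model(adv)
        # Minimizing cross-entropy to target_class maximizes its probability.
        loss = F.cross_entropy(logits, torch.tensor([target_class]))
        loss.backward()
        with torch.no_grad():
            delta -= step_size * delta.grad.sign()   # signed gradient descent
            delta.clamp_(-1.0, 1.0)
        delta.grad.zero_()
    return torch.clamp(image + delta.detach() * mask, 0.0, 1.0)
```

In the physical setting the printed pattern also has to survive printing and re-photographing, so the published attack adds printability and smoothness constraints that this digital-only sketch omits.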


News Article | January 25, 2017
Site: www.techtimes.com

The world's best professional poker players appear to have found their match: an artificial intelligence developed by researchers from Carnegie Mellon University (CMU). The AI, dubbed Libratus, has already accumulated winnings of nearly $800,000 against human poker professionals at the Brains Vs. Artificial Intelligence competition at Rivers Casino in Pittsburgh. The human players compete to win shares of the $200,000 prize, while Libratus aims to be the first computer program to win in a professional poker tournament.

Many AI researchers consider poker to be among the hardest games for computers to beat humans at. How AIs fare against human players has long been used as a measure of progress in the field of AI research, and any win for machine intelligence signifies a major milestone. Over the years, artificial intelligence has managed to win at games such as chess, with newer AIs even able to teach themselves how to play chess using strategic moves just like human players. Poker, however, is trickier for AI to master since it involves complicated decisions that a player needs to make based on incomplete information, in addition to poker strategies that include bluffs.

An earlier AI dubbed Claudico attempted to challenge human poker pros in 2015, but it was no match for its opponents. It appears, though, that the field of artificial intelligence has made significant developments since then, as CMU's Libratus is doing well in the poker tournament. Libratus still has around 55,000 hands left in the competition, but at more than halfway through the game it already has a profit of $794,392.

The loss of Libratus' predecessor two years ago may help explain why several professional poker players initially underestimated the ability of CMU's AI, but the players eventually noticed how the AI has progressed. "I didn't realize how good it was until today. I felt like I was playing against someone who was cheating, like it could see my cards," said Dong Kim, a high-stakes poker player who specializes in no-limit Texas Hold 'Em. "I'm not accusing it of cheating. It was just that good."

The Carnegie Mellon team that developed Libratus equipped the AI with algorithms to allow it to analyze the rules of poker and set its strategy. Using the powerful supercomputer Bridges, Libratus refines its skill at playing poker by sifting through past games, including those played at the current tournament. Bridges performs calculations in real time during the games, helping the AI come up with end-game strategies.

"The bot gets better and better every day. It's like a tougher version of us," said poker player Jimmy Chou. "The first couple of days, we had high hopes. But every time we find a weakness, it learns from us and the weakness disappears the next day." © 2017 Tech Times, All rights reserved. Do not reproduce without permission.
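The article does not say which algorithms Libratus uses beyond "analyzing the rules and setting its strategy"; poker bots of this kind are publicly described as building on counterfactual regret minimization, whose core update rule is regret matching. The sketch below applies regret matching to a toy rock-paper-scissors game purely to illustrate how a strategy improves from accumulated regret; it is an assumption-laden illustration, not CMU's code or Libratus's actual game model.

```python
# Regret matching via self-play on rock-paper-scissors: the strategy-update
# rule at the heart of counterfactual regret minimization. Illustrative only.
import random

def payoff(a, b):
    """Utility of action index a against action index b (0=rock, 1=paper, 2=scissors)."""
    if a == b:
        return 0.0
    return 1.0 if (a - b) % 3 == 1 else -1.0

def strategy_from(regrets):
    positives = [max(r, 0.0) for r in regrets]
    total = sum(positives)
    return [p / total for p in positives] if total > 0 else [1 / 3] * 3

def train(iterations=200_000):
    regrets = [[0.0] * 3, [0.0] * 3]        # cumulative regret per player
    strategy_sum = [[0.0] * 3, [0.0] * 3]    # running sum for the average strategy
    for _ in range(iterations):
        strats = [strategy_from(r) for r in regrets]
        moves = [random.choices(range(3), weights=s)[0] for s in strats]
        for p in range(2):
            opp = moves[1 - p]
            utils = [payoff(a, opp) for a in range(3)]
            for a in range(3):               # regret = hindsight gain of action a
                regrets[p][a] += utils[a] - utils[moves[p]]
            strategy_sum[p] = [s + x for s, x in zip(strategy_sum[p], strats[p])]
    return [[s / sum(ss) for s in ss] for ss in strategy_sum]

print(train())   # both average strategies approach the (1/3, 1/3, 1/3) equilibrium
```

The loop of accumulating regret and shifting probability toward actions that would have performed better is, at a much smaller scale, the kind of self-improvement the article describes when it says the bot "learns from us" between sessions.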


Ford has invested $1 billion in a joint venture with Argo AI, a Pittsburgh-based company with ties to Carnegie Mellon. The goal is to completely outfit Ford vehicles with self-driving technology. Interestingly, this isn't a case of a large company simply hiring talent but of the creation of an entirely separate company with an independent equity structure. Ford is the "majority stakeholder," but Argo AI will operate with "substantial independence." Employees will receive equity in the company. The investment will be made over five years.

"AI makers know you basically need an automaker to make an auto," said a person familiar with the deal, commenting on Ford's decision to connect with the company.

"Argo AI will develop and deploy the latest advancements in artificial intelligence, machine learning and computer vision to help build safe and efficient self-driving vehicles that enable these transformations and more," wrote CEO Bryan Salesky. "The challenges are significant, but we are a team that believes in tackling hard, meaningful problems to improve the world. Our ambitions can only be realized if we are willing to partner with others and keep an open mind about how to solve problems."

Salesky worked at the Carnegie Mellon University National Robotics Engineering Center and, in 2011, led self-driving hardware at Google. Other team leaders include Dr. Brett Browning and Dr. Peter Rander. Both left CMU for Uber and recently made the switch to Argo. The company is targeting "full autonomy" by 2021. The two are part of the slow exodus of researchers from Uber, two years after that company hired robotics faculty away from CMU.

Ford's billion-dollar investment in a company two months old is, arguably, quite bold. Sources say that the team in place has extensive experience in building autonomous vehicles for Caterpillar and others, and it seems this is the best — and fastest — way for Ford to access self-driving talent.
