News Article | May 2, 2017
Site: www.techradar.com

Looking for the best Xbox One deals? You're in the right place for the cheapest Xbox One prices and bundles from around the web, as we're constantly on the lookout for the best offers. The Xbox One S has generally replaced the original model now, but we still see some cheap discounts, so keep an eye on both the comparison charts and bundle lists below. Don't forget, bundles are usually far better value than buying a console on its own. Looking to buy in the UK? You'll want to take a look at our UK page then.

Before you look at the older Xbox One bundles, you may want to consider the new Xbox One S. The slimmed-down design looks much better than the original chunky box and the power brick has been absorbed. The main draw, though, is 4K support, meaning you'll be able to watch 4K Blu-rays and stream Netflix in 4K on your new 4K TV. The new version's 2TB, 1TB and 500GB models launched recently and have been selling very well.

These prices are for the older Xbox One model. The console is considerably larger and has an external power brick. It won't play Netflix or Blu-rays in 4K, but is just as effective a games console as the new Xbox One S. Nowadays, though, we're finding the Xbox One S deals are even cheaper, so we'd only go for the older model if you spot a seriously cheap deal.

500GB Xbox One S | Battlefield 1 | $269 @ Amazon
This is the cheapest Xbox One S Battlefield 1 bundle we've seen in a while and will probably be back up to around $300 soon. You're only getting a digital copy, but it's still way cheaper than buying a physical copy separately.

500GB Xbox One S Minecraft Favourites bundle | $245 @ Amazon
This is the cheapest way to get hold of a brand new Xbox One S console this week, and you're getting a few Minecraft-related goodies with it too, including both the Xbox One version and early access to the Windows 10 version. There's a batch of content packs to give you a head start too.
View this deal: Xbox One S with Minecraft Favourites $245 @ Amazon

Blue 500GB Xbox One S | Gears of War 4 | $269 @ Amazon
The 500GB hard drive isn't ideal for the keen gamer, but it's certainly manageable. We'd try to make it work in order to get one of these gorgeous Deep Blue Xbox One consoles under the TV. Gears of War 4 is a fine addition to the series from a fresh studio.
View this deal: Blue 500GB Xbox One S with Gears 4 $269 @ Amazon

1TB Xbox One S | Halo Wars 2 and Season Pass | $314.49 @ Amazon
Given the high prices of the 1TB Xbox One S right now, this could be the bundle to go for if you were planning on picking up Halo Wars 2 and the season pass. It's $30 cheaper this week too.
View this deal: 1TB Xbox One S with Halo Wars 2 and Season Pass $314.49 @ Amazon

1TB Xbox One S | Gears of War 4 | $285 @ Amazon
Gears of War 4 should be high on the must-buy list of any new Xbox One S owner, so this is an ideal bundle, especially if some of the other bundled games like FIFA 17 or Battlefield 1 don't take your fancy.
View this deal: 1TB Xbox One S with Gears of War 4 $285 @ Amazon

Xbox Live Gold: Need to top up your Xbox Live Gold membership? Don't pay the default automatic £40 renewal price. Check out our range of Xbox Live Gold deals below to save some money.

Here's our roundup of the best Xbox One controller deals around. We've listed the best prices for all the cheapest official pads and the Elite controller. Still considering a PS4 instead? Then you'll want to take a look at our cheap PS4 deals.


News Article | April 26, 2017
Site: www.nature.com

Nearly 20 years ago, I was fortunate enough to play friendly blitz chess against former world champion Garry Kasparov. It was quite an experience; his competitive spirit and creative genius were palpable. I had recently founded Elixir Studios, which specialized in artificial intelligence (AI) games, and my ambition was to conduct cutting-edge research in the field. AI was on my mind that day: Kasparov had played chess against IBM's supercomputer Deep Blue just a few years before. Now, he sets out the details of that titanic event in his memoir Deep Thinking. The 1997 match was a watershed for AI and an extraordinary technical feat. Strangely, although Kasparov lost, it left me more in awe of the incredible capabilities of the human brain than of the machine. Kasparov was able to compete against a computational leviathan and to complete myriad other tasks that make us all distinctly human. By contrast, Deep Blue was hard-coded with a set of specialized rules distilled from chess grandmasters, and empowered with a brute-force search algorithm. It was programmed to do one thing only; it could not have played even a simpler game such as noughts and crosses without being completely reprogrammed. I felt that this brand of 'intelligence' was missing crucial traits such as generality, adaptability and learning. As he details in Deep Thinking, Kasparov reached a similar conclusion. The book is his first thorough account of the match, and it offers thoughtful meditations on technology. The title references what he believes chess engines cannot do: they can calculate, but not innovate or create. They cannot think in the deepest sense. In drawing out these distinctions, Kasparov provides an impressively researched history of AI and the field's ongoing obsession with chess. For decades, leading computer scientists believed that, given the traditional status of chess as an exemplary demonstration of human intellect, a competent computer chess player would soon also surpass all other human abilities. That proved not to be the case. This has to do partly with differences between human and machine cognition: computers can easily perform calculation tasks that people consider incredibly difficult, but totally fail at commonsense tasks we find intuitive (a phenomenon called Moravec's paradox). It was also due to industry and research dynamics in the 1980s and 1990s: in pursuit of quick results, labs ditched generalizable, learning-based approaches in favour of narrow, hand-coded solutions that exploited machines' computational speed. The focus on brute-force approaches had upsides, Kasparov explains. It may not have delivered on the promise of general-purpose AI, but it did result in very powerful chess engines that soon became popularly available. Today, anyone can practise for free against software stronger than the greatest human chess masters, enabling enthusiasts worldwide to train at top levels. Before Deep Blue, pessimists predicted that the defeat of a world chess champion by a machine would lead to the game's death. In fact, more people play now than ever before, according to World Chess Federation figures. Chess engines have also given rise to exciting variants of play. In 1998, Kasparov introduced 'Advanced Chess', in which human–computer teams merge the calculation abilities of machines with a person's pattern-matching insights. Kasparov's embrace of the technology that defeated him shows how computers can inspire, rather than obviate, human creativity. 
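For readers curious what the brute-force approach described above looks like in practice, here is a minimal Python sketch of game-tree search with alpha-beta pruning. It is only an illustration of the general technique, not Deep Blue's actual algorithm or code: the `game` object and its methods (`legal_moves`, `apply`, `is_terminal`, `evaluate`) are hypothetical stand-ins, and Deep Blue's real evaluation relied on hand-tuned grandmaster knowledge and custom hardware.

```python
# Minimal, illustrative brute-force game-tree search with alpha-beta pruning.
# The `game` interface (legal_moves, apply, is_terminal, evaluate) is hypothetical.

def alphabeta(state, depth, alpha, beta, maximizing, game):
    """Return the best achievable evaluation from `state`, searching `depth` plies."""
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)          # hand-coded heuristic score of the position
    if maximizing:
        best = float("-inf")
        for move in game.legal_moves(state):
            best = max(best, alphabeta(game.apply(state, move), depth - 1,
                                       alpha, beta, False, game))
            alpha = max(alpha, best)
            if alpha >= beta:                # prune: the opponent will avoid this line
                break
        return best
    else:
        best = float("inf")
        for move in game.legal_moves(state):
            best = min(best, alphabeta(game.apply(state, move), depth - 1,
                                       alpha, beta, True, game))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best
```

The key point the review makes is that everything interesting here lives in the hand-coded `evaluate` function and raw search speed; nothing in the procedure learns or generalizes beyond the single game it was written for.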
In Deep Thinking, Kasparov also delves into the renaissance of machine learning, an AI subdomain focusing on general-purpose algorithms that learn from data. He highlights the radical differences between Deep Blue and AlphaGo, a learning algorithm created by my company DeepMind to play the massively complex game of Go. Last year, AlphaGo defeated Lee Sedol, widely hailed as the greatest player of the past decade. Whereas Deep Blue followed instructions carefully honed by a crack team of engineers and chess professionals, AlphaGo played against itself repeatedly, learning from its mistakes and developing novel strategies. Several of its moves against Lee had never been seen in human games — most notably move 37 in game 2, which upended centuries of traditional Go wisdom by playing on the fifth line early in the game. Most excitingly, because its learning algorithms can be generalized, AlphaGo holds promise far beyond the game for which it was created. Kasparov relishes this potential, discussing applications from machine translation to automated medical diagnoses. AI will not replace humans, he argues, but will enlighten and enrich us, much as chess engines did 20 years ago. His position is especially notable coming from someone who would have every reason to be bitter about AI's advances. His account of the Deep Blue match itself is fascinating. Famously, Kasparov stormed out of one game and gave antagonistic press conferences in which he protested against IBM's secrecy around the Deep Blue team and its methods, and insinuated that the company might have cheated. In Deep Thinking, Kasparov offers an engaging insight into his psychological state during the match. To a degree, he walks back on his earlier claims, concluding that although IBM probably did not cheat, it violated the spirit of fair competition by obscuring useful information. He also provides a detailed commentary on several crucial moments; for instance, he dispels the myth that Deep Blue's bizarre move 44 in the first game of the match left him unrecoverably flummoxed. Kasparov includes enough detail to satisfy chess enthusiasts, while providing a thrilling narrative for the casual reader. Deep Thinking delivers a rare balance of analysis and narrative, weaving commentary about technological progress with an inside look at one of the most important chess matches ever played.


News Article | April 17, 2017
Site: phys.org

During human evolution, our cerebral cortex increased in size in response to new environmental challenges. The cerebral cortex is the site of diverse processes, including visual perception and language acquisition. However, no accepted unitary theory of cortical function exists yet. One hypothesis is that there is an evolutionarily conserved neural circuit that implements a simple and flexible computation. This idea is known as the "canonical circuit." As Gary Marcus, Adam Marblestone, and Thomas Dean note in a perspective piece in Science, nearly four decades later there is still no consensus about whether such a canonical circuit exists. The researchers argue that there is little evidence that such a uniform structure can capture the diversity of cortical function even in simple mammals. What would it mean for the cortex to be diverse rather than uniform? Marcus and his colleagues propose that the brain may consist of an array of elementary computational units that act in parallel, much like the components of a microprocessor.

To probe questions of cortical circuitry, collaborative endeavors engaging scientists and engineers have emerged. In parallel with the Human Brain Project in Europe, the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative is developing tools to view the brain at superior spatial and temporal resolution. Researchers aim to directly study the relationship of a neuron's function to its connections. This can be accomplished by mapping neuronal connections in conjunction with acquiring functional activity data. Researchers can then scale up from the study of neurons and synapses to that of neural circuits and networks. However, this bottom-up approach may not be the best way to understand the neural basis of human cognition. Psychologist Frank van der Velde argues that this confuses the nature of understanding with the way to achieve understanding. He provides an analogy to physics. To understand the universe, a bottom-up approach would begin with an understanding of elementary particles, how these particles combine to make atoms, and how atoms combine to make molecules. However, physics did not begin with the study of elementary particles. It began with a study of the solar system, which provided the first laws of motion. In summary, understanding can proceed from top to bottom as well as from bottom to top.

To understand human cognition, neuroscientists should combine top-down and bottom-up approaches. One top-down approach is generating a large-scale simulation of the neural processes that generate intelligence. This project intersects neuroscience and artificial intelligence. For instance, a research team at IBM represented 8 × 10^6 neurons, with 6,400 synapses per neuron, on the IBM Blue Gene processor, and ran 1 s of model time in 10 s of real time. Large-scale simulations provide a tool to investigate the brain that is not limited by the experimental and ethical constraints of in vivo work. Through such virtual models alongside neuroscience studies, researchers may arrive at the neural basis of intelligence. As neuroscience details the anatomy and activity of the brain, artificial intelligence seeks to develop an independent, non-biological path to intelligence. Researchers in both fields can agree that artificial intelligence does not yet equal natural intelligence in the tasks most important for human cognition, such as vision, motor manipulation, and natural language understanding. However, this is not to say that artificial intelligence won't ultimately capture the principles of natural intelligence.
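To give a rough sense of scale for the IBM simulation mentioned above, here is a back-of-the-envelope calculation in Python using only the figures quoted in the article (8 × 10^6 neurons, 6,400 synapses per neuron, and a 10× slowdown relative to real time); the derived totals are simple arithmetic, not results reported by the study itself.

```python
# Back-of-the-envelope scale of the IBM Blue Gene simulation described above.
neurons = 8e6                # 8 x 10^6 simulated neurons (figure quoted above)
synapses_per_neuron = 6400   # figure quoted above
slowdown = 10                # 1 s of model time took 10 s of real time

total_synapses = neurons * synapses_per_neuron
print(f"Total synapses simulated: {total_synapses:.2e}")           # ~5.12e+10
print(f"Real time per hour of model time: {slowdown} hours")
print(f"Real time for one simulated day: {24 * slowdown} hours")   # 240 hours, i.e. 10 days
```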
I met with Ernest Davis, a professor of computer science at New York University, to learn more about artificial intelligence. Advances in artificial intelligence have been incredible, from IBM Watson to Google DeepMind's AlphaGo. Davis believes that AI will continue to advance and will be used for both positive and negative applications. For instance, AI can be used to improve the quality of medicine and to build self-driving cars. "The impact of that will be largely positive though there will be a negative impact on employment. It will also be used for military purposes, and the immediate effects of that are mostly negative," Davis says. Nevertheless, artificial intelligence is far from building programs that attain human-level intelligence. The human brain has an unsurpassed ability to carry out complex, real-time interactions in a dynamic world. To explain how reasoning is implemented in artificial intelligence, Davis works through the different techniques available by example. For instance, the chess-playing program Deep Blue reasoned by thinking through a large number of possible paths into the future and determining which lines worked out. A second example is software verification programs. These algorithms search for bugs in programs through logical reasoning, and provide users with a formal proof that a program doesn't have a certain kind of bug. A third example is taxonomic reasoning, which is a simple form of logical reasoning by category. This type of reasoning has been done in programs called semantic nets, which have been in use for 50 years.

A New Century of the Brain

Paul Allen, the co-founder of Microsoft, has funded two distinct research projects in neuroscience and artificial intelligence. The Washington Post aptly describes it as Allen's $500 million quest to dissect the mind and code a new one from scratch. Elon Musk has funded a rival company called Neuralink to create human-computer hybrids. On building an artificial brain, Davis says: "The replica of the human brain can mean either the ability to do human-level tasks, which I think in principle is possible, or algorithms that are an imitation of the brain. We don't yet know which direction is the best way to pursue AI. There's an awful lot we don't understand about the brain so it may be premature to try the latter." Neuroscience and artificial intelligence both aim to identify architectures of intelligence and cognition. Michael Gazzaniga states that cognitive neuroscience asks: what are the algorithms that drive structural neural elements into the physiological activity that results in perception, cognition, and consciousness? Likewise, artificial intelligence asks: what are the algorithms that can generate human-like intelligence in machines to perform reasoning, planning, and learning? Both fields describe mechanisms of intelligence as "algorithms." To uncover human intelligence and cognition, intersections of neuroscience and artificial intelligence are necessary. Perhaps only then can we understand what makes us human.
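To make the idea of taxonomic reasoning concrete, here is a small Python sketch of a semantic-net-style "is-a" hierarchy. The toy taxonomy and function names are illustrative inventions, not taken from any particular system Davis describes.

```python
# A toy semantic net for taxonomic reasoning: categories form an "is-a" hierarchy,
# and a membership query simply walks up the hierarchy.
IS_A = {
    "canary": "bird",
    "bird": "animal",
    "salmon": "fish",
    "fish": "animal",
}

def is_a(thing, category):
    """True if `thing` belongs to `category` somewhere along its is-a chain."""
    while thing is not None:
        if thing == category:
            return True
        thing = IS_A.get(thing)   # climb one level up the taxonomy
    return False

print(is_a("canary", "animal"))  # True: canary -> bird -> animal
print(is_a("canary", "fish"))    # False
```

The appeal of this style of reasoning, as the article notes, is its simplicity: category membership and inherited properties fall out of following links, which is why semantic nets have remained in use for decades.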


CLEARWATER, FL--(Marketwired - May 8, 2017) - Endurance Exploration Group, Inc. (EXPL) ("Endurance" or the "Company"), a company specializing in shipwreck research, survey and recovery, announces that, operating through its joint venture, Swordfish Partners (the "Joint Venture"), the Company has located and conducted preliminary survey operations on a shipwreck site that is believed to be the remnants of an early-1800s American, wooden-hulled, passenger steamship ("American Passenger Steamship" or "Target") located some forty (40) miles off the coast of North Carolina. Based upon the Company's research files and the research and other data contributed by its joint venture partner, Deep Blue Exploration, LLC dba Marex, the Company, operating through its Joint Venture, conducted side-scan sonar search and survey operations off the coast of North Carolina during the spring of this year. The Company has located what it believes to be the target of its search during this phase of its operations, an American Passenger Steamship, which may contain a valuable cargo of passenger artifacts, specie, and jewelry. The wreck site lies some forty miles off the coast of North Carolina, and the wreckage pattern, debris field, machinery characteristics, and other information lead the Company to believe it is possibly the American Passenger Steamship that was the subject and Target of the Company's Joint Venture search. The Company's Joint Venture expects to conduct additional survey operations in the near term in an effort to positively identify the wreckage as its Target and, if warranted, move the project from a survey operation to a salvage operation beginning in the summer of this year. Endurance CEO Micah J. Eldred commented, "We are very pleased with the results of our initial search and survey for this recently discovered shipwreck. If ultimately confirmed as the subject of our targeted search, the American Passenger Steamship may represent a unique opportunity for our Company and our shareholders. Based upon our archival research, we believe passengers lost significant amounts of specie and valuables on the American Passenger Steamship we have been searching for, and the location of this wreck site, if positively confirmed, may represent a salvage process that should prove to be less technically challenging and less costly than some of the other, deep-water, projects we are working on." Endurance's Joint Venture has petitioned the U.S. Federal Court for the Middle District of Florida for a claim to the shipwreck, an appointment of the Joint Venture as substitute custodian of the shipwreck, and a salvage award or title to the shipwreck and its cargo. If granted, Endurance plans to return to the wreck site to conduct salvage operations beginning in 2017.

About Endurance Exploration Group, Inc.: Endurance Exploration Group, Inc. specializes in historic shipwreck research, subsea search, survey and recovery of lost ships containing valuable cargoes. Over the last 8 years, Endurance has developed a research database of over 1,400 ships that are known to have been lost with valuable cargoes in the world's oceans, and in 2013 began subsea search and survey operations.

About Deep Blue Exploration LLC dba Marex: JV partner Deep Blue Exploration, LLC dba Marex of Memphis, Tennessee has been researching, locating, and salvaging historic and valuable shipwrecks for over 25 years.
Some of their better-known shipwreck projects include the famous galleon Nuestra Senora de las Maravillas, which sank in the Bahamas in 1656, El Cazador (1780), SS North Carolina (1840), SS Merida (1911), and SS Ancona (1917). Marex Chairman, Captain Herbo Humphreys, commented, "We are excited and confident to be working with Endurance, and we feel this American Passenger Steamship project may ultimately prove to be one of the most historic and economically viable projects we have been involved with over the years." In the coming weeks, the Company may be posting sonar and still images as well as video footage of the shipwreck to its website and to its Facebook page.

This press release includes "forward-looking statements" within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended. In addition to statements which explicitly describe such risks and uncertainties, readers are urged to consider statements labeled with the terms "believes," "belief," "expects," "intends," "anticipates," "will," or "plans" to be uncertain and forward-looking. The forward-looking statements contained herein are also subject generally to other risks and uncertainties, including but not limited to the legal and operational risks of offshore, historic shipwreck recovery. Forward-looking statements contained in this press release are made under the Safe Harbor Provision of the Private Securities Litigation Reform Act of 1995. Any such statements are subject to risks and uncertainties that could cause actual results to differ materially from those anticipated. The information contained in this release is as of May 8, 2017. Endurance Exploration Group, Inc. assumes no obligation to update forward-looking statements contained in this press release as the result of new information or future events or developments.


News Article | April 17, 2017
Site: www.techrepublic.com

In one of the top achievements for AI, AlphaGo, Google DeepMind's machine learning platform, defeated Go world champion Lee Sedol in March 2016, in a game more complex than chess. AlphaGo won four of the five games in the tournament, after mastering the game 10 years earlier than many experts predicted. AlphaGo has continued to improve its skills throughout the past year, training by playing against top players from South Korea, China, and Japan. And at the end of May, AlphaGo will explore a new way to learn: by pairing up with top Go players and AI experts at the Future of Go Summit. The five-day conference, held from May 23-27, will bring together AI experts, the China Go Association, AlphaGo, and China's top Go players in Wuzhen, China. AlphaGo harnesses convolutional neural networks to play the ancient Chinese game. What has made it particularly impressive is the fact that it uses reinforcement learning, instead of being programmed specifically for the task. While IBM's Deep Blue achieved an AI victory in 1997 by beating world chess champion Garry Kasparov, it was explicitly programmed with its chess knowledge. And Go is a complex game, with potentially 200 options per move, as opposed to about 20 on a chessboard, and it relies heavily on intuition. While it is interesting to see what AlphaGo is capable of, the summit in May will be a test of how AI and human collaborations can work. It will be a chance to see how human learning can be enhanced by AI. And the takeaways will likely extend past the gaming world. Manuela Veloso, head of machine learning at Carnegie Mellon University, previously told TechRepublic that she was interested to see "if and how AlphaGo's learning approach may apply to other different 'non-game' problems." AlphaGo has already taken on the task of reducing energy use, and the technology has been applied to medical research projects as well. Toby Walsh, AI professor at the University of New South Wales, also previously told TechRepublic that the nature of the game itself could change, and that the AI system "played moves that have surprised even Go masters." "Will man and machine together be better than man or machine alone?" Walsh asked. "Each of us can bring unique strengths to the table."
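A quick calculation shows why the difference in branching factor matters so much for brute-force search. It uses only the approximate per-move option counts quoted above (about 20 for chess, about 200 for Go); the 10-move search depth is an arbitrary choice for illustration.

```python
# Rough game-tree sizes from the per-move option counts quoted above.
def tree_size(branching_factor, depth):
    """Number of leaf positions after `depth` moves with a fixed branching factor."""
    return branching_factor ** depth

for game, branching in [("chess", 20), ("Go", 200)]:
    print(f"{game}: ~{tree_size(branching, 10):.1e} positions after 10 moves")
# chess: ~1.0e+13, Go: ~1.0e+23 -- a ten-billion-fold gap, which is why AlphaGo
# leans on learned evaluation and reinforcement learning rather than exhaustive look-ahead.
```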


News Article | February 16, 2017
Site: www.prnewswire.co.uk

In continuation of our letters dated February 8, 2017 and February 13, 2017, please find enclosed the transcript of the 'KIE's Chasing Growth Conference' held on February 13, 2017. The same will be made available on the Company's website at the following weblink: https://www.infosys.com/investors/news-events/events/Documents/chasing-growth-conference-13feb2017.pdf. This is for your information and records.

Our next speaker is Dr. Vishal Sikka. Dr. Vishal Sikka is the current CEO and Managing Director of Infosys, and a globally recognized innovation and business leader in the IT services industry. Dr. Sikka was born and raised in India. He studied Computer Science at Syracuse University before receiving his Ph.D. in AI from Stanford in 1996. After a long and successful stint at SAP, he joined Infosys as CEO in 2014 to help drive a massive transformation, not only at Infosys but also within the larger IT services industry. In the two and a half years since he took over, his vision and leadership have helped accelerate Infosys' transformation, resulting in growth of its global workforce to almost 2 lakh people, a massive reduction in attrition, a doubling of large-deal total contract value and of the number of large clients, and industry-leading cash flow generation, with revenues of $9.5 billion despite strong headwinds. Infosys' new software and new services have seen tremendous growth as well. Client satisfaction levels are accordingly at their highest levels ever, particularly in the CXO segment, which according to Infosys' annual survey showed a 22-point increase. May I invite Dr. Vishal Sikka to talk about Infosys and the future of IT services.

Thank you so much. That was quite a handful of an introduction. Hi, Mr. Kotak. Good to see all of you. Uday has asked me to organize my talk in two sections. In the first part I want to talk about our performance and the financial metrics that most of you care a lot about, and then following that I want to spend some time looking at the bigger picture. I hope to finish the speech itself in about 25 minutes and then leave around half an hour for your questions. Two and a half years ago, when I started, we laid out a relatively straightforward strategy. The idea was to launch a three-part strategy of renewing the core business, entering into new businesses, and renewing the culture of the company. The interesting part about this three-part strategy was that it also reflects the challenges of our clients: the dual priority of, on the one hand, getting better at everything that they did and, on the other hand, doing new things that they never did before. In this, they have their own timeless culture and value system that they operate on. So, if we look over the last two and a half years, the execution that we have followed on this path is starting to show signs of success. I will quickly walk you through some of these metrics. Our relative revenue performance compared to the industry has distinctly improved. When I joined, our revenue growth was significantly below the industry. It has now come up towards the top of the industry, or certainly in line with the industry. Year-on-year, if you compare the first three quarters of this financial year to the last one, you see the performance there, and despite the challenging environment, we have managed to keep the operating margin steady.
If you look at many of the operational parameters of our business, the core business utilization for the last seven quarters has been above 80%, which is the first time that has happened in a very long time in our company. Employee cost as a percentage of revenue has come down. You can see that operating cash flow as a percentage of net profit is getting close to 100%. On the onsite mix we can still do better, but subcontractor costs are something that is continually improving. I know a lot of you have questions around the visa policies and things like this; we can talk about that in the Q&A, but our endeavor has been to get closer to the 30% number. In terms of the core business, IT services continue to grow through a dedicated focus on renewing our existing services through a combination of automation and innovation, as well as mixing our portfolio increasingly towards more high-value, value-add kinds of services. One area of disappointment for us has been the consulting business. We continue to focus on this over the course of this year. The negative performance of consulting in Q1 impacted us more than we expected. Products, platforms and other areas of the business also continue to grow, and here is a different cut on that. If you look from the time when I started to last quarter, quarterly revenue has grown by more than $400 million. While the margin is the same, at 25.1%, operating profit has gone up from $536 million to $640 million. Attrition has come down significantly, from 23.4% at a group level in the quarter when I joined to a little bit below 15% now. There is a separate attrition metric that we have started to focus on internally over the last year, which is high-performer attrition, and that number has come down significantly, to single digits. Revenue per FTE is one of the key metrics of our strategy. While it has decreased compared to when I started, it has in the last couple of quarters started to go back up, and that is something we are very happy about. One key part of the strategy is the new software. When we talk about renew and new, there are new services in unprecedented new areas and there is also new software that we did not have before. Excluding Finacle, software revenue in the quarter when I started was $35 million, and that number has already gone up to $60 million now. The number of $100 million accounts went up from 12 to 18. $200 million accounts have doubled from 3 to 6 in that time. Organizationally, Pravin and I have established a strong team around us. Jayesh is here, our Deputy CFO. Similarly, we have appointed Ravi as our Deputy COO. There are three Presidents responsible for our go-to-market. Below their level we have established 15 industry heads to create much more focused, small, encapsulated, empowered business units, so that client focus and agility can be achieved; it is the best way for us to achieve scale. Similarly, we have been making a lot of progress, and Pravin oversees this, on simplifying our internal policies, systems and processes so that we can become much more agile. So, when we look at this in the broader context of the strategy, beyond the numbers you can clearly see that on all three dimensions of the strategy things have been taking hold. One program that I am particularly proud of, and next month it will be two years since we started it, is Zero Distance. It has become deeply ingrained in our company's culture. More than 95% of the projects of the company have Zero Distance plans.
The idea of Zero Distance, for those of you who do not know about it, is that it inspires our project teams to bring innovation to every single project, no matter what. The number is actually 100%, and the projects that do not have a Zero Distance plan are the ones that just started recently. So, if you look at the Zero Distance plans of projects that started more than six weeks ago, it is 100%. Every project is basically covered by a Zero Distance plan, and it has become a part of the culture. For the last nine months or so, we have been focusing on elevating the Zero Distance innovation idea, where project teams think of bigger innovations. The CWRF is a tribute to Martin Luther King's line that "If you cannot fly, run; if you cannot run, walk; if you cannot walk, crawl, but we have to always keep moving forward," and so we have this framework because most of the Zero Distance innovations, even though we are close to 100%, are quite incremental, quite small, and our endeavor now is to elevate these towards more and more transformative innovations for our clients, coming from the grassroots. Zero Bench, which was the other initiative that we launched in July of 2015, has also reached close to 100%. So basically, everyone on the bench has done a Zero Bench project. We now have more than 34,000 jobs on our internal marketplace. 470 jobs show up every week on this, and it has touched basically everybody on the bench. As I mentioned earlier, one of the direct impacts of Zero Bench is giving the youngsters an opportunity to get some experience by working on a Zero Bench project. That experience then makes it easier for them to get into production. So, Zero Bench has had a direct impact on utilization, especially on fresher utilization. Utilization is above 80% consistently now, but in particular fresher utilization and offshore utilization have gone up directly as a result of Zero Bench. The renewal of our existing core business, in particular the maintenance and run parts of the business but also to some degree the build parts of the business, is impacted by automation. I mentioned in the January earnings that over the course of Q3, the work of more than 2,600 people, around 2,650 FTEs' worth of effort, was eliminated with automation. Over the last 12 months that number has crossed 8,000. The Mana platform plays a very important role in helping us achieve that automation. Not all 8,500 of those are due to Mana, but Mana for IT has had a huge impact on bringing automation-led productivity improvements to our work. And within our core business, we have launched many new services, like Mainframe Modernization, the API Economy, BI Renewal, the work with Salesforce.com, ServiceNow, etc. Roughly 10 or so of these have launched in the last couple of years, and they have all been showing growth that is faster than the growth of the company. As a result of all of this, client satisfaction, in a survey we have done for the last 12 years, showed the highest-ever score this time. In particular, as you said in the introduction, CXO satisfaction has jumped up by 22 points. So, I am quite proud of the work that our team has done in renewing our core business. When you look at new, getting into new dimensions of business is something that should come easily to us.
When you create something new, you do that, but culturally and organizationally the new is much more difficult to pull off, because it is not enough just to create a separate appendage that is sitting somewhere else and doing something totally independent. You have to find a way to make these harmonious. The idea of 'Renew and New' comes from Arthur Koestler's work on creation, this idea of two habitually distinct but self-consistent frames of reference, where you are doing something unprecedented in the new dimension while you are continuously improving in the existing dimension. We have seen continued strong momentum in the three acquisitions that we have made, Skava, Panaya and Noah, as well as in the new services that we have launched organically. One of the things that we are very proud of with Skava is that, beyond retail, Skava has now entered other industries, in particular financial services, telecommunications, and utilities. We have opened new frontiers with Skava and our home-grown Mana platform. Mana helps us renew our services, in particular in the maintenance and run areas. The same Mana platform also has huge application in helping our clients build breakthrough business applications of artificial intelligence. I think having a common platform that does both is extremely important to our strategy. A lot of people in the industry build captive automation capabilities in service of their own business. When you do only that, the platform will not be competitive. You have to ensure that the platform that is powering the productivity improvement of the existing business is a world-class platform. Therefore, it is important to have this platform subjected to the outside world, to the light of the outside. Building breakthrough business applications on Mana is a very, very critical part of the strategy. I am really excited by what our teams are doing and what our clients are doing, all the way from CPG companies using it for much faster revenue reconciliation and financial consolidation, to companies in the pharmaceutical industry working on much better forecasting, to contract management and compliance with contracts, to dynamic fraud analysis for banks, and applications like that. On the new services dimension, our digital services as well as the strategic design consulting services have been going forward quite well. One key part of our strategy is the cultural dimension, and there are two main parts to that. One is the agility of the company as a whole and the other is education. Infosys has always had a very strong emphasis on education. Mr. Murthy always talked about this: the idea of learnability, the ability to learn. Especially in the times ahead, learnability is going to be a very critical thing for our future. So, when you look at all the work that we do in the culture dimension, it is dominated by this. There is ILP, the Infosys Learning Platform, our own platform to teach our freshers as well as our regular employees, very immersively, what is going on in the world around them. We drop them into boot camps, into projects and into subsets of projects, so they are learning in a much more accelerated way. We have done a lot of innovation in learning itself, in how people learn and so forth, and brought that into ILP. One of the most interesting programs that we have done is for our top 200 executives. We have done a global leadership program together with Stanford, and we have now done three cohorts of this.
Some of the Infoscions and senior executives who have gone through this have said that it is the best thing they have done in their entire Infosys career. This is a two-and-a-half-week program, distributed over a year, which is done together with the Stanford Business School and the Stanford School of Engineering and Design School. It has had a huge impact on changing the perspective of our management team. We have also been investing in onsite learning, and I think that, more and more, as we focus onsite, creating an onsite environment for learning is going to be critical. Design thinking training has become an integral part of being at Infosys. We now have more than 130,000 Infoscions who have been trained on design thinking. By the end of this year I expect that we will be able to train everyone in the company on design thinking, and similarly, training in artificial intelligence, agile methodologies, and all manner of new technologies is something that we continue to invest in. In a nutshell: in two and a half years we have gone from roughly $8.2 billion in revenue to having just crossed $10 billion on an LTM basis, with continued protection of margins, ensuring that we have strong margins. That is the philosophy of consistent profitable growth. There has been improvement in revenue per employee despite the pressure of commoditization, improvement in attrition, a growing sense of inspiration among the employees, growth into completely new dimensions where we are now achieving new scale, and the highest client satisfaction ever. That is, in a nutshell, where we are from a performance point of view. As I look at the journey ahead, as happy as I am looking back at the last two and a half years, the journey ahead is an even more challenging and even more interesting one. Here is a quote from my friend Bill Ruh of GE. He will be here the day after tomorrow, speaking at the NASSCOM event. We have been doing some amazing work together on bringing design thinking and new kinds of solutions, co-innovation, to their platform. Here is a quote from the Head of HR and the Head of Technology for Visa on how we have helped establish a dramatically new culture as well as delivering our traditional services. Visa has been one of our fastest-growing accounts over the last two years and it is something that we are particularly proud of. So, as I said, as good as it is to look backwards and feel good about these dimensions of improvement that we have managed to achieve, the world around us is going through a very rapid transformation. I want to switch gears a little bit and share with you some thoughts on what I see happening in the world around us. It is difficult to capture what is going on in a nutshell, but it basically has three dimensions. First, a very deep-rooted sense of end-user centricity. When you look at retail or banking or insurance or communications and telecom and so forth, in any industry there is a very deep-rooted sense in which things are becoming more connected. A continuous connection to the end user, and serving end-user needs, has never been more important. These experiences are now increasingly becoming digital, but they are also being powered by AI technology. Underneath this enablement of the new experience is the intelligent infrastructure. It is governed by these exponential laws.
I am calling it Moore's Laws. I know it is Moore's Law, but I am calling it Moore's Laws because there are many similar laws, not only in the performance improvement or cost-performance improvement of processors but in many other dimensions of technology. We see these exponential price-performance improvements. As a result, computing is becoming far more pervasive, and that is enabling extreme efficiency and the disintermediation of existing industries. If we look into this in a little more detail, here are three projects done by our strategic design consulting team. I mentioned, in the new dimension of our strategy, this one service that we offer where we co-innovate with our clients on the things that are most important to their future. On the right-hand side is a 3D-printed, fully functioning, low-fidelity replica of a GE aircraft engine. The entire engine is 3D printed, and with a Microsoft HoloLens you can deeply examine the operational behavior of this engine. You can go back and change its design parameters, tune its design parameters, and do predictive maintenance on it, all of that in real time. It actually has the potential to dramatically accelerate the lifecycle of these engines, to dramatically redesign the nature of maintenance and repair on them, and also to make the entire machine far more connected and far more intelligent as it goes through its lifecycle. In the middle, you see a project that we have done with a very large agriculture company. This is a digital farm that we have built; our team actually built the farm. It is not only a digital farm, it is a highly affordable digital farm that can be put together with extremely cheap components, and it makes the plant itself connected. The supply of nutrients, the supply of water, the supply of light to the plants can be controlled thousands of times more precisely than in traditional agriculture. Instead of supplying water once a day or twice a day, you can supply it two thousand times in a day, by milliliters, individually to the plant. The CEO of this company told me, "Vishal, if you are able to do this, if you are able to connect into the plant itself, we will be able to improve agricultural yield by 30%." What we found in our experience was that some of the plants actually grow 10x faster than in normal conditions, not only 30% faster. The entire Green Revolution that happened in our lifetimes, in my lifetime here in India, was a 22% improvement in agricultural yield. So, you can imagine what kind of a revolution this can create. On the left-hand side is a store that we have built for one of our luxury retail clients, and this is an entire store that looks like a store from the 1850s but is completely digital. Every aspect of the store, every part of it, is digital. It is like a complete room that is one large programmable computer; the mirrors and the coat hangers and the closets and everything that you see are all completely digital. The mirror turns on when you put on a coat. It knows what coat it is. You can see yourself in different circumstances and things of this nature. So, all industries are going through this deep transformation because of a very deep-rooted understanding of users, enabled by pervasive connectivity and pervasive computing. This is powered by the rapid advance of Moore's Laws. Moore's Law this year is going to be 52 years old, two years older than me. It is one of the extraordinary feats of human engineering achievement.
Many people say that Moore's Law is reaching its end. It is true that traditional Moore's Law is reaching its end. On the upper right, you see a 7-nanometer wafer that TSMC has started to produce, though not at production scale yet. In early 2018, they will go to production on 7 nanometers. When I spent a summer at Intel working in the Intel AI Lab in 1990, I remember the micron process was going live; that was 1,000 nanometers. When I was building HANA at SAP, we were at Intel's 22-nanometer process. Now it is down to 7 nanometers. This is an extraordinary thing. Of course, the reason people say that Moore's Law is going to end is that 7 nanometers is already getting pretty close to physical limits. There are roughly two silicon atoms in 1 nanometer, so at 7 nanometers you are basically at the width of 14 or so silicon atoms. So, there is not much more that you can get out of this. Probably by 2022 or 2023 we will have a 4-nanometer or 5-nanometer process, but that is as far as it will go. But then other things are happening. On the left-hand side, in the middle, you see the neuromorphic board that has already been made by Jenn Hasler at Georgia Tech. These neuromorphic boards are ways of bypassing the traditional CMOS manufacturing process and going into new kinds of chip design. So, some other new way of continuing Moore's Laws is going to carry on post-2024. In the bottom middle you see the Nvidia AI box that has 170 teraflops of computing in it, two Intel processors of 20 cores and 12 or so of these Tesla GPU chips inside it. This is already shipping, and a lot of the deep-learning algorithms run on this kind of hardware. And then, of course, on the bottom right you see the incredible growth that our partner AWS has had. That is a picture of part of an Amazon datacenter. They have dozens of datacenters, each one between 50,000 and 80,000 servers, and collectively somewhere between 3 million and 5 million servers in AWS. This is an astonishingly large amount of computing capacity, and AWS has seen 40-45% growth in the last 12 months. So, what we see is that this entire digitization and AI revolution is being powered by computing that is simply exploding. Even though traditional Moore's Law is nearing an end in the next seven to eight years, new kinds of architectures and new kinds of technologies are going to ensure that we grow significantly beyond that. As a result, new AI technologies are becoming possible. We all heard about AlphaGo from Google last year, which beat the world champion Go player. Recently the news came out that some students from CMU have put together a poker-playing robot. Now, why was Go significant last year? Because Deep Blue beat Garry Kasparov in the mid-90s, and that was more or less a brute-force search exercise in computational capability, because the computer can look ahead in the chess moves much further than the human brain can. Go is a different kind of a game. The computational complexity of Go is much worse than that of chess. With AlphaGo, they were able to overcome that by bringing learning into it. A new frontier has been broken. Poker is based on human intuition, on deception, on bluffing and things like this. The CMU program has actually beaten four of the top poker players continuously and has earned something like $1.7 million as a poker player online. So, this is quite an interesting development.
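As a quick check on the process-node numbers quoted in the talk, here is a small Python sketch. The atoms-per-nanometre figure and the node sizes are the speaker's own, and the derived shrink factors are plain arithmetic for illustration only.

```python
# Quick arithmetic on the process-node figures quoted in the talk.
atoms_per_nm = 2        # speaker's figure: roughly two silicon atoms per nanometre
node_1990_nm = 1000     # "the micron process" going live at Intel in 1990
node_2018_nm = 7        # 7 nm heading towards production around 2018

print(f"Atoms across a 7 nm feature: ~{node_2018_nm * atoms_per_nm}")        # ~14
linear_shrink = node_1990_nm / node_2018_nm
print(f"Linear shrink since 1990: ~{linear_shrink:.0f}x "
      f"(~{linear_shrink ** 2:,.0f}x more transistors per unit area)")
```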
On the left-hand side, you see the autonomous car from Toyota, which actually learns the driver's behavior as you drive the car and adjusts itself to that. So, this is a great advancement that has happened in autonomous driving. Udacity has a car that is fully programmable by the students who take the autonomous-driving car class on Udacity. On the right-hand side is OpenAI, which is a consortium for research in artificial intelligence for the public good. I am very proud that we at Infosys are one of the sponsors of OpenAI. Universe is a package that the OpenAI team has released recently. OpenAI now has 45 world-class researchers, and they are really doing some amazing work in artificial intelligence. As a result of all of this, basically what we see is that it becomes much easier for a newcomer to come in and disrupt an industry. Big parts of the supply chain and of the value chain can be disrupted and can be digitized. The world of atoms, where we have been coming from, has huge numbers of intermediaries. As digitization and computing replace these atoms with bits, you can see that the value chain can be much more connected and much more zero-distance. It can be much more efficient as a result of that, both in terms of pricing and in terms of production. So, this is what is going on. As a result of that, when we look at the journey ahead for us, a lot more needs to be done. I studied artificial intelligence as a graduate student. There are two pieces of research that our team picked out. One is from McKinsey, done a few months ago, going through various kinds of jobs and the potential for automation in those jobs. The other is on the impact of automation in the IT-BPO industry, our industry, done by HFS Research recently. And you can see that India, according to this study, has the biggest impact on jobs among all of these countries. There are dozens of studies like this. If you research it you will find that a couple of weeks ago McKinsey Global published another one, and Erik Brynjolfsson and Andy McAfee have done something as well. There are dozens and dozens of reports on automation. But really, all you have to do is walk onto any one of our floors, at any company in the IT-BPO industry, and it becomes starkly clear as you walk around the floor that a huge number of these jobs are going to go away. It may not be two years; it might be four years, five years, seven years, ten years, but there is no doubt that these jobs are going to be replaced with automation. So, what do we do? What is the nature of the journey that we are on? My sense is that there is only one way forward. Professor Mashelkar used to have this beautiful line, "doing more with less for more", and that is basically the nature of the endeavor. If you look at the complexity of human activity and plot that against time, the natural course of events is that we have to constantly be moving upward. That journey upwards is not only a journey upwards, it is actually accelerating. There is a very straightforward duality that we have to follow. On one hand, we have to eliminate work through automation and improve our productivity, while on the other hand we have to deploy that improved productivity towards innovation. This is basically the formula.
The reason for automation is so that we can continue to do the work that has already been commoditized more efficiently and more productively, and use that improvement in productivity to become more innovative, to do the new kinds of things for which there is still tremendous opportunity and tremendous value. A self-driving-car engineer today is worth millions of dollars. What if we were able to produce a hundred thousand autonomous-driving engineers? This continuous productivity improvement, while continuously shifting from things that are commoditizing towards things that are innovative, is the basic point, and it has to be fueled by education. This idea that we study for the first 21 years of our life and then we do not study any more, we have to abandon this idea, because technology will continually change. So we have to continuously re-skill ourselves. Thankfully, we at Infosys have a tremendous advantage in this because we have always had a culture of learnability, like I said. So, fueling this automation and innovation journey on the basis of education is key. The other basis is software; it is the intellectual property. If all we did was move upwards on the basis of education, that alone would also become commoditized over time. Therefore, we have to take advantage of this improving productivity using software, using our software, software that is monetized. If, instead of 10 people doing something, you now get that same thing done with three people, and you do not have your own software to do the work of the remaining seven, then you are losing value and somebody else who made that software is capturing that value. Therefore, that software has to be monetizable. But for that software to be monetizable, it has to be world-class software. It has to withstand the test of the outside world. It cannot just be a captive thing that you do only for yourself. So, having software together with education powering this transformation is critical for our future. This is why the same Mana platform applies to the renewal of our services and to the breakthrough new business solutions. This is the reason why education, for our traditional services as well as in the breakthrough new areas, is critical. So this, in essence, is the nature of our journey into the future. John McCarthy, the father of artificial intelligence, once said in a lecture that "articulating a problem is half the solution". That lecture changed the course of my life. Within our lifetimes, we are going to see AI technology get to the point where a well-articulated, well-specified problem will be solved automatically. The human frontier at that point is going to be problem finding: creativity, the ability to look at a situation and see what is not there, to see what is missing, to see what can be innovated. That is our future. One way or the other it is going to happen. When I look at the 3.7 million people in our industry, the only future I see is a future that is fueled by automation and by getting ahead of this curve of automation. I see a future where automation enables us to be more innovative, to exercise our creativity and our ability to find problems, not only solve them. If we do that we will be okay; if we do not, we will get disrupted. This is the journey that I see in front of us. So, I do not know how long I went on, Uday. We laid out a clear strategy.
We have seen signs of early success, but the world is going through a very rapid change driven by AI and technology. Much more is still to be done to lead in the next generation of IT, and with focused execution we will do it. Thank you.

Thank you, Mr. Sikka. Okay, we have the first question already.

Very nice presentation. My first question is: how much of the value in this transformation will be captured by the software companies versus the services companies? I understand that Infosys is trying to adopt a software-plus-services model, but when you look at the overall value chain, it is the software companies that are gaining a bigger and bigger share of the overall value in this transformation.

Our core business is services. We are a services company. But just as there is software-defined networking, the software-defined datacenter and software-defined retail, there are also software-defined services. So, we are talking about amplifying our services ability through the use of software, through the use of automation, and by moving up the value chain. So it is both: the value of the software itself will be there, but it is more in services where the real power will come from. And you see that in the numbers already. Look at the productivity improvement in the first nine months of this financial year compared to the first nine months of the last financial year. In the first nine months of the last financial year we hired 17,500 people; in the first nine months of this financial year we hired something like 5,500 people, and yet we improved utilization, we improved revenue and RPE and so on and so forth. So, more with less for more, powered by software and education.

The second question that I have is something which is on top of everyone's mind: how is your relationship with the founders, and what does it mean for Infosys?

My relationship with the founders is wonderful. I meet Mr. Murthy quite frequently. I do not meet the other founders quite as frequently. I ran into Kris the other day on a Lufthansa flight, and I have not seen Nandan for more than a year. But it is an amazing relationship. I have a heartfelt, warm relationship with Mr. Murthy. I probably meet him four, five, six times a year, something like that. He is an incredible man; we usually talk about quantum physics and things like this, and technology. He was telling me the other day about the Paris metro and how he worked on it in the 1970s before he started Infosys. It had this whole idea of automation, of autonomous driving, and things like this. So, all this drama that has been going on in the media, it is very distracting. It takes our attention, but underneath that there is a very strong fabric that this company is based on, and it is a real privilege for me to be its leader.

Hi Vishal, great presentation. I was curious if you could talk about your personal experiences over the last two and a half years; you have been turning around a really large ship. What have been your deepest personal experiences, what has been most difficult to accomplish, and what has been easy to accomplish? And in light of that, if I were to ask the same question of the leaders of the smaller, mid-tier IT services companies, what do you think they would likely say?

The second question I cannot answer. Compared to two and a half years ago, to me, it is disappointing to see the rate of growth and embrace of automation in the industry. I think more should have been done by now, but that is my opinion.
When it comes to Infosys, it has actually been quite counterintuitive. I would not have imagined that Zero Distance would get picked up the way it did. One month after I started, I think September or October of 2014, the customer satisfaction survey came out, and it was incredibly depressing. I actually got the detailed client feedback, a pile of paper that thick, and I read it over a period of one month. I read every single response. Consistently it was the same story: we would get high marks on quality, on delivery excellence, on being responsive, on being professional. And we would get the lowest scores on strategic relevance, on innovation, on being proactive, and it was a shock to me that that was the case. So a lot of the ideas for Zero Distance formed in those four, five months, and by March of 2015 I launched it. Within nine months we had managed to create a very grassroots-oriented culture around it. Recently Ravi, our Head of Delivery, was talking to some youngsters and they told him that it is not a new thing for us anymore; we just assume that it is part of our job to come up with something innovative in whatever project we are doing. That to me has been the biggest positive. Bringing in design thinking at a very massive scale has been surprisingly easy because of the scale; this comes from Mr. Murthy, the scale that was set up for educating employees. The new, it is always difficult to embrace the new. Bringing the new in means, in particular, letting go of your existing way of doing things. If a piece of software automates the work that you do, then embracing that piece of software and letting go of some of the manual work, work that is visible to you, is something that is very difficult for people to do, and so that has been a hard thing. Some things that I thought were going to be much easier have turned out not to be so easy, and some that I thought would be difficult have turned out to be much easier than I thought. In your Vision 2020, where are we in that journey? Would inorganic growth be a big component of it, and how will products contribute? And given the recent noise in the media, how confident are you of seeing that journey through? How confident am I of what? See, that $20 billion, 30%, $80,000 revenue per employee has always been an aspiration for us. What good is an aspiration if it is not aspirational? So we are marching down that path. When you launch something like training people on design thinking, you cannot say that we will get 100,000 people trained in the next one or two years; you have to set up a grassroots movement and then let it go and see what happens. When you do it like that, it usually surprises you. These numbers are a consequence of the work that we do; they are not something where you figure out some way to get to the number. You do the right thing, and if you do the right thing, these numbers are the footprint that we leave behind. They are the consequences of the work that we do. So this is how we look at it. Where are we on this journey? We are still early in this journey, we still have a lot of things to deal with, we have to get the consulting endeavor right, and we have to get BPO transformed. If we talk about the Infosys transformation as a whole, the BPO transformation is even more difficult because it sits even lower in terms of ranks and skill levels. Many of the software assets still have to be transformed, like Finacle and so forth. 
We are still in the early stages of that. But generally, in the renewal of the core services and the adoption of the new services, I feel very good about where we are. So now, as we look ahead to the future, a new financial year is starting and so forth, and we are going through this exercise of trying to think about what the journey ahead looks like: taking stock of the last two and a half years and figuring out what the path ahead is going to look like in terms of both the software and the services, and the services amplified by software, and so on. How big would the inorganic component be? I have to stop you, one question only, because we are running out of time. I am going to come to this side of the hall and then back to you, sir. Vishal, I understand that you are going through two huge transitions, or transformations as we might call them. One is on the technology side, where you articulated in your walk-through on the floor that half the jobs would not be there in five years. On the other side, you are going through a cultural transition or transformation: a founder-run company that moved into your hands, and the obvious changes that play out. I am sure this is creating a bunch of insecurity at the organization level through the different ranks. So two questions here: how do you communicate, and what do you communicate through the ranks to ensure that people are calm and focused on the job? And the second is, what are the two or three key pieces of your strategy? As you were saying, two of our key foundation stones are moving around and we are probably shaking a bit. How do you make sure you anchor the ship better and navigate through this? I think continuous and intense communication is very important across different parts of the organization and so on. That is something that cannot be overstated. You need to just communicate, communicate and communicate. And in our case it is not easy, because the onsite employees are difficult to communicate with; they are inside their client context and so forth. So it is not so straightforward. The other part of it is that you have to demonstrate by example. Talk is cheap, but you have to really demonstrate in your work that you are focused, that you are ignoring the noise and the distractions and so forth, and that you are continuing to stay engaged. Beyond that, there are natural latencies that our brains have, and there is not much you can do about those. I think that organizational change, organizational transformation, is important. A lot of people would ask why go through this exercise of taking the leadership team through two years of a Stanford program and things like that, instead of just changing the leadership team. But if you change the leadership team, then what happens to the layers below? You create a complete disruption, not to mention that you miss the point of the transformation that is supposed to happen. So I think that creating a context, an atmosphere, which elevates everybody is something that is exceedingly important. Alan Kay says that context is worth 80 IQ points. So if we build a real context around us that elevates everybody, it can have the biggest transformative effect, but that takes time. The benefit of that approach is that it is a very sustainable, long-term, lasting approach. The drawback is that it takes time. It is easy to plant a company here or there, buy some companies and put them inside, hire some leaders and put them inside, but then those are like appendages growing out of you, out of control. 
That is not sustained, organic transformation in the true sense. We have time for exactly one more brief question and one more brief answer, so try to keep your question as brief as you can. My question is on the buyback. Some of the founders have said that they have written letters to the Board on this issue and have not heard anything. Just your views on how you plan to utilize the cash? The official answer is that the Board will consider capital allocation policies from time to time. In the last four years we have reviewed the policy twice and improved the dividend payout, once from 30% to 40% and then again from 40% to 50%. When we have something to report, we will report it. Jayesh, did I miss something? Did I do a good job? But that was the official answer. The unofficial answer is that you look at the business circumstances over the next four or five years, you look at what you need the capital for, and based on that you decide. We will do that as necessary. In our case it is basically three things: there are strategic growth initiatives; there is capital for buildings and infrastructure and so forth to house the employees; and then there are the investments, the acquisitions. I am not interested in buying yesterday's technology to make something or other look good. We are interested in tomorrow's technology, and tomorrow's technology is usually expensive if it is really good. So you have to be very, very selective in that. Based on how that mix changes over the next five years, you take a decision on how to utilize the cash. We have done this with 50% so far, and as we go through this exercise we will take a look at it. Nobody asked about visas or Donald Trump. Would you like to answer the question about visas and Donald Trump suo motu? No, there is nothing that has happened so far. Okay. I am afraid that concludes our session. Thank you, Dr. Sikka. Thank you for being a very responsive audience.


News Article | March 2, 2017
Site: www.sciencenews.org

In the battle of wits between humans and machines, computers have just upped the ante. Two new poker-playing programs can best professionals at heads-up no-limit Texas Hold’em, a two-player version of poker without restrictions on the size of bets. It’s another in a growing list of complex games, including chess, checkers (SN: 7/21/07, p. 36) and Go (SN: 12/24/16, p. 28), in which computers reign supreme. Computer scientists from the University of Alberta in Canada report that their program, known as DeepStack, roundly defeated professional poker players, playing 3,000 hands against each. The program didn’t win every hand — sometimes the luck of the draw was against it. But after the results were tallied, DeepStack beat 10 out of 11 card sharks, the scientists report online March 2 in Science. (DeepStack also beat the 11th competitor, but that victory was not statistically significant.) “This work is very impressive,” says computer scientist Murray Campbell, one of the creators of Deep Blue, the computer that bested chess grandmaster Garry Kasparov in 1997. DeepStack “had a huge margin of victory,” says Campbell, of IBM’s Thomas J. Watson Research Center in Yorktown Heights, N.Y. Likewise, computer scientists led by Tuomas Sandholm of Carnegie Mellon University in Pittsburgh recently trounced four elite heads-up no-limit Texas Hold’em players with a program called Libratus. Each contestant played 30,000 hands against the program during a tournament held in January in Pittsburgh. Libratus was “much tougher than any human I’ve ever played,” says poker pro Jason Les. Previously, Michael Bowling — one of DeepStack’s creators — and colleagues had created a program that could play a two-person version of poker in which the size of bets is limited. That program played the game nearly perfectly: It was statistically unbeatable within a human lifetime (SN: 2/7/2015, p.14). But no-limit poker is vastly more complicated because when any bet size is allowed, there are many more possible actions. Players must decide whether to go all in, play it safe with a small wager or bet something in between. “Heads-up no-limit Texas Hold’em … is, in fact, far more complex than chess,” Campbell says. In the card game, each player is dealt two cards facedown and both players share five cards dealt faceup, with rounds of betting between stages of dealing. Unlike chess or Go, where both players can see all the pieces on the board, in poker, some information is hidden — the two cards in each player’s hand. Such games, known as imperfect-information games, are particularly difficult for computers to master. To hone DeepStack’s technique, the researchers used deep learning — a method of machine learning that formulates an intuition-like sense of when to hold ’em and when to fold ’em. When it’s the program’s turn, it sorts through options for its next few actions and decides what to do. As a result, DeepStack’s nature “looks a lot more like humans’,” says Bowling. Libratus computes a strategy for the game ahead of time and updates itself as it plays to patch flaws in its tactics that its human opponents have revealed. Near the end of a game, Libratus switches to real-time calculation, during which it further refines its methods. Libratus is so computationally demanding that it requires a supercomputer to run. (DeepStack can run on a laptop.) Teaching computers to play games with hidden information, like poker, could eventually lead to real-life applications. 
“The whole area of imperfect-information games is a step towards the messiness of the real world,” says Campbell. Computers that can handle that messiness could assist with business negotiations or auctions, and could help guard against hidden risks, in cybersecurity, for example.
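To make the perfect-information versus imperfect-information distinction concrete, here is a minimal illustrative sketch in Python. It is not taken from either research group's code, and the class and field names are hypothetical; it only shows the structural fact the researchers describe: a poker player's observation omits the opponent's private cards, so many different underlying states look identical to the acting player.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class HoldemState:
    """Full game state, as a dealer or simulator would track it."""
    board: List[str]                          # shared, face-up cards, e.g. ["Ah", "Kd", "7s"]
    hole_cards: Tuple[List[str], List[str]]   # private cards for player 0 and player 1
    pot: int = 0
    bets: List[int] = field(default_factory=list)

def information_set(state: HoldemState, player: int) -> tuple:
    """Everything `player` is allowed to see: public information plus only
    their own hole cards. In chess or Go this would equal the full state;
    in poker it does not, which is what 'imperfect information' means."""
    return (
        tuple(state.board),
        tuple(state.hole_cards[player]),      # own private cards only
        state.pot,
        tuple(state.bets),
    )

# Two different hidden opponent holdings produce the same observation for player 0.
s1 = HoldemState(["Ah", "Kd", "7s"], (["Qc", "Qd"], ["2c", "2d"]), pot=40, bets=[20, 20])
s2 = HoldemState(["Ah", "Kd", "7s"], (["Qc", "Qd"], ["As", "Ad"]), pot=40, bets=[20, 20])
assert information_set(s1, 0) == information_set(s2, 0)
print("Player 0 cannot distinguish these two states.")
```

Because a program must choose its behavior per observable situation rather than per hidden state, strategies in games like this are reasoned about over sets of indistinguishable states, which is a large part of what makes them harder for computers than chess or Go.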


News Article | February 23, 2017
Site: www.techrepublic.com

With huge strides in AI—from advances in the driverless vehicle realm, to mastering games such as poker and Go, to automating customer service interactions—this advanced technology is poised to revolutionize businesses. But the terms AI, machine learning, and deep learning are often used haphazardly and interchangeably, when there are key differences between each type of technology. Here's a guide to the differences between these three tools to help you master machine intelligence. SEE: Inside Amazon's clickworker platform: How half a million people are being paid pennies to train AI (PDF download) (TechRepublic) AI is the broadest way to think about advanced computer intelligence. In 1956 at the Dartmouth Artificial Intelligence Conference, the technology was described as such: "Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." AI can refer to anything from a computer program playing a game of chess, to a voice-recognition system like Amazon's Alexa interpreting and responding to speech. The technology can broadly be categorized into three groups: narrow AI, artificial general intelligence (AGI), and superintelligent AI. IBM's Deep Blue, which beat chess grandmaster Garry Kasparov in 1997, or Google DeepMind's AlphaGo, which in 2016 beat Lee Sedol at Go, are examples of narrow AI—AI that is skilled at one specific task. This is different from artificial general intelligence (AGI), which is AI that is considered human-level, and can perform a range of tasks. Superintelligent AI takes things a step further. As Nick Bostrom describes it, this is "an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills." In other words, it's when the machines have outsmarted us. Machine learning is one subfield of AI. The core principle here is that machines take data and "learn" for themselves. It's currently the most promising tool in the AI kit for businesses. ML systems can quickly apply knowledge and training from large data sets to excel at facial recognition, speech recognition, object recognition, translation, and many other tasks. Unlike hand-coding a software program with specific instructions to complete a task, ML allows a system to learn to recognize patterns on its own and make predictions. While Deep Blue and DeepMind's systems are both types of AI, Deep Blue was rule-based, dependent on programming—so it was not a form of ML. DeepMind's AlphaGo, on the other hand, is: it beat the world champion in Go by training itself on a large data set of expert moves. Is your business interested in integrating machine learning into its strategy? Amazon, Baidu, Google, IBM, Microsoft and others offer machine learning platforms that businesses can use. Deep learning is a subset of ML. It uses some ML techniques to solve real-world problems by tapping into neural networks that simulate human decision-making. Deep learning can be expensive, and requires massive datasets to train itself on. That's because there are a huge number of parameters that need to be understood by a learning algorithm, which can initially produce a lot of false positives. For instance, a deep learning algorithm could be instructed to "learn" what a cat looks like. It would take a very large data set of images for it to understand the very minor details that distinguish a cat from, say, a cheetah or a panther or a fox. 
As mentioned above, in March 2016, a major AI victory was achieved when DeepMind's AlphaGo program beat world champion Lee Sedol in 4 out of 5 games of Go using deep learning. The way the deep learning system worked was by combining "Monte-Carlo tree search with deep neural networks that have been trained by supervised learning, from human expert games, and by reinforcement learning from games of self-play," according to Google. Deep learning also has business applications. It can take a huge amount of data—millions of images, for example—and recognize certain characteristics. Text-based searches, fraud detection, spam detection, handwriting recognition, image search, speech recognition, Street View detection, and translation are all tasks that can be performed through deep learning. At Google, deep learning networks have replaced many "handcrafted rule-based systems," for instance. Deep learning is also highly susceptible to bias. When Google's facial recognition system was initially rolled out, for instance, it tagged many black faces as gorillas. "That's an example of what happens if you have no African American faces in your training set," said Anu Tewary, chief data officer for Mint at Intuit. "If you have no African Americans working on the product. If you have no African Americans testing the product. When your technology encounters African American faces, it's not going to know how to behave." Some also believe that deep learning is overhyped. Sundown AI, for instance, has mastered automated customer interactions using a combination of ML and policy graph algorithms—not deep learning.
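The article's contrast between Deep Blue's hand-coded rules and systems that learn from data can be made concrete with a few lines of conventional machine learning. The sketch below is purely illustrative and assumes the scikit-learn library is available; the toy task and its threshold are invented for the example, not drawn from the article.

```python
# Illustrative only: a hand-coded rule versus a model that learns the same rule from examples.
# Assumes scikit-learn is installed (pip install scikit-learn).
from sklearn.tree import DecisionTreeClassifier

# Toy task: flag "large" transactions. Feature = amount in dollars, label = 1 if flagged.
X = [[20], [35], [50], [80], [120], [200], [350], [500]]
y = [0, 0, 0, 0, 1, 1, 1, 1]

# The Deep Blue style: a programmer writes the rule explicitly.
def rule_based_flag(amount: float) -> int:
    return 1 if amount >= 100 else 0

# The machine learning style: no threshold is ever written down;
# the model infers one from the labeled examples.
model = DecisionTreeClassifier().fit(X, y)

for amount in [60, 90, 110, 400]:
    learned = int(model.predict([[amount]])[0])
    print(f"amount={amount}  rule={rule_based_flag(amount)}  learned={learned}")
```

Deep learning works the same way in principle, but stacks many layers of learned features instead of a single decision rule, which is why it needs the very large training datasets described above.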


News Article | February 15, 2017
Site: www.techrepublic.com

The simulation hypothesis is the idea that reality is a digital simulation. Technological advances will inevitably produce automated artificial superintelligence that will, in turn, create simulations to better understand the universe. This opens the door for the idea that superintelligence already exists and created simulations now occupied by humans. At first blush the notion that reality is pure simulacra seems preposterous, but the hypothesis springs from decades of scientific research and is taken seriously by academics, scientists, and entrepreneurs like Stephen Hawking and Elon Musk. From Plato's allegory of the cave to The Matrix, ideas about simulated reality can be found scattered through history and literature. The modern manifestation of the simulation argument postulates that, as with Moore's Law, computing power becomes exponentially more robust over time. Barring a disaster that resets technological progression, experts speculate that it is inevitable computing capacity will one day be powerful enough to generate realistic simulations. TechRepublic's smart person's guide is a routinely updated "living" precis loaded with up-to-date information about how the simulation hypothesis works, who it affects, and why it's important. SEE: Check out all of TechRepublic's smart person's guides The simulation hypothesis advances the idea that simulations might be the inevitable outcome of technological evolution. Though ideas about simulated reality are far from new and novel, the contemporary hypothesis springs from research conducted by Oxford University professor of philosophy Nick Bostrom. In 2003 Bostrom presented a paper that proposed a trilemma, a decision between three challenging options, related to the potential of future superintelligence to develop simulations. Bostrom argues this likelihood is nonzero: the odds of a simulated reality may be astronomically small, but because the likelihood is not zero we must consider rational possibilities that include a simulated reality. Bostrom does not propose that humans occupy a simulation. Rather, he argues that the massive computational ability developed by posthuman superintelligence would likely be used to create simulations to better understand the nature of reality. In his book Superintelligence, using anthropic reasoning, Bostrom argues that the odds of a population with human-like experiences advancing to superintelligence are "very close to zero," or (with an emphasis on the word or) the odds that a superintelligence would desire to create simulations are also "very close to zero," or the odds that people with human-like experiences actually live in a simulation are "very close to one." He concludes by arguing that if the claim "very close to one" is the correct answer and most people do live in simulations, then the odds are good that we too exist in a simulation. The simulation hypothesis has many critics, namely those in academic communities who question an overreliance on anthropic reasoning, and scientific detractors who point out that simulations need not be conscious to be studied by a future superintelligence. But as artificial intelligence and machine learning emerge as powerful business and cultural trends, many of Bostrom's ideas are going mainstream. SEE: Research: 63% say business will benefit from AI (Tech Pro Research) It's natural to wonder if the simulation hypothesis has real-world applications, or if it's a fun but purely abstract consideration. 
For business and culture, the answer is unambiguous: It doesn't matter if we live in a simulation or not. The accelerating pace of automated technology will have a significant impact on business, politics, and culture in the near future. The simulation hypothesis is coupled inherently with technological evolution and the development of superintelligence. While superintelligence remains speculative, investments in narrow and artificial general intelligence are significant. Using the space race as an analogue, advances in artificial intelligence create technological innovations that build, destroy, and augment industry. IBM is betting big with Watson and anticipates a rapidly emerging $2 trillion market for cognitive products. Cybersecurity experts are investing heavily in AI and automation to fend off malware and hackers. In a 2016 interview with TechRepublic, the United Nations' chief technology diplomat, Atefeh Riazi, anticipated the economic impact of AI to be profound and referred to the technology as "humanity's final innovation." SEE: Artificial Intelligence and IT: The good, the bad and the scary (Tech Pro Research) Though long-term prognostication about the impact of automated technology is ill-advised, in the short term advances in machine learning, automation, and artificial intelligence represent a paradigm shift akin to the development of the internet or the modern mobile phone. In other words, the economy post-automation will be dramatically different. AI will hammer manufacturing industries; logistics and distribution will lean heavily on self-driving cars, ships, drones, and aircraft; and financial services jobs that require pattern recognition will evaporate. Conversely, automation could create demand for inherently interpersonal skills like HR, sales, manual labor, retail, and creative work. "Digital technologies are in many ways complements, not substitutes for, creativity," Erik Brynjolfsson said, in an interview with TechRepublic. "If somebody comes up with a new song, a video, or piece of software there's no better time in history to be a creative person who wants to reach not just hundreds or thousands, but millions and billions of potential customers." SEE: IT leader's guide to the future of artificial intelligence (Tech Pro Research) The golden age of artificial intelligence began in 1956 at the Ivy League research institution Dartmouth College with the now-famous proclamation, "every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it." The conference established AI and computational protocols that defined a generation of research. The conference was preceded and inspired by developments at the University of Manchester in 1951 that produced a program that could play checkers, and another program that could play chess. Though excited researchers anticipated the speedy emergence of human-level machine intelligence, programming intelligence proved to be a steep challenge. By the mid-1970s the field entered the so-called "first AI winter." The era was marked by the development of strong theories limited by insufficient computing power. Spring follows winter, and by the 1980s AI and automation technology grew from the sunshine of faster hardware and the boom of consumer technology markets. By the end of the century parallel processing—the ability to perform multiple computations at one time—emerged. In 1997 IBM's Deep Blue defeated world chess champion Garry Kasparov. 
Last year Google's DeepMind defeated a top human player at Go, and this year AI systems from Carnegie Mellon University and the University of Alberta beat some of the world's best human poker players. Driven and funded by research and academic institutions, governments, and the private sector, these benchmarks indicate a rapidly accelerating automation and machine learning market. Major industries like financial services, healthcare, sports, travel, and transportation are all deeply invested in artificial intelligence. Facebook, Google, and Amazon are using AI innovation for consumer applications, and a number of companies are in a race to build and deploy artificial general intelligence. Some AI forecasters like Ray Kurzweil predict a future with the human brain cheerfully connected to the cloud. Other AI researchers aren't so optimistic. Bostrom and his colleagues in particular warn that creating artificial general intelligence could produce an existential threat. Among the many terrifying dangers of superintelligence—ranging from out-of-control killer robots to economic collapse—the primary threat of AI is the coupling of anthropomorphism with the misalignment of AI goals. That is, humans are likely to imbue intelligent machines with human characteristics like empathy. An intelligent machine, however, might be programmed to prioritize goal accomplishment over human needs. In a terrifying scenario known as instrumental convergence, or the "paper clip maximizer," a narrowly focused superintelligent AI designed to produce paper clips would turn humans into gray goo in pursuit of resources. SEE: Research: Companies lack skills to implement and support AI and machine learning (Tech Pro Research) It may be impossible to test or experience the simulation hypothesis, but it's easy to learn more about the hypothesis. TechRepublic's Hope Reese enumerated the best books on artificial intelligence, including Bostrom's essential tome Superintelligence, Kurzweil's The Singularity Is Near: When Humans Transcend Biology, and Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat. Make sure to read TechRepublic's smart person's guides on machine learning, Google's DeepMind, and IBM's Watson. Tech Pro Research provides a quick glossary on AI and research on how companies are using machine learning and big data. Finally, to have some fun with hands-on simulations, grab a copy of Cities: Skylines, SimCity, Elite: Dangerous, or Planet Coaster on the game platform Steam. These small-scale environments will let you experiment with game AI while you build your own simulated reality.


News Article | March 2, 2017
Site: www.scientificamerican.com

It is no mystery why poker is such a popular pastime: the dynamic card game produces drama in spades as players are locked in a complicated tango of acting and reacting that becomes increasingly tense with each escalating bet. The same elements that make poker so entertaining have also created a complex problem for artificial intelligence (AI). A study published today in Science describes an AI system called DeepStack that recently defeated professional human players in heads-up, no-limit Texas hold’em poker, an achievement that represents a leap forward in the types of problems AI systems can solve. DeepStack, developed by researchers at the University of Alberta, relies on the use of artificial neural networks that researchers trained ahead of time to develop poker intuition. During play, DeepStack uses its poker smarts to break down a complicated game into smaller, more manageable pieces that it can then work through on the fly. Using this strategy allowed it to defeat its human opponents. For decades scientists developing artificial intelligence have used games to test the capabilities of their systems and benchmark their progress. Twenty years ago game-playing AI had a breakthrough when IBM’s chess-playing supercomputer Deep Blue defeated World Chess Champion Garry Kasparov. Last year Google DeepMind’s AlphaGo program shocked the world when it beat top human pros in the game of go. Yet there is a fundamental difference between games such as chess and go and those like poker in the amount of information available to players. “Games of chess and go are ‘perfect information’ games, [where] you get to see everything you need right in front of you to make your decision,” says Murray Campbell, a computer scientist at IBM who was on the Deep Blue team but not involved in the new study. “In poker and other imperfect-information games, there’s hidden information—private information that only one player knows, and that makes the games much, much harder.” Artificial intelligence researchers have been working on poker for a long time—in fact, AI programs from all over the world have squared off against humans in poker tournaments, including the Annual Computer Poker Competition, now in its 10th year. Heads-up, no-limit Texas hold’em presents a particularly daunting AI challenge: As with all imperfect-information games, it requires a system to make decisions without having key information. Yet it is also a two-person version of poker with no limit on bet size, resulting in a massive number of possible game scenarios (roughly 10^160, on par with the 10^170 possible moves in go). Until now poker-playing AIs have attempted to compute how to play in every possible situation before the game begins. For really complex games like heads-up, no-limit, they have relied on a strategy called abstraction in which different scenarios are lumped together and treated the same way. (For example, a system might not differentiate between aces and kings.) Abstraction simplifies the game, but it also leaves holes that opponents can find and exploit. With DeepStack, study author Michael Bowling, a professor of machine learning, games and robotics, and colleagues took a different approach, adapting the AI strategies used for perfect-information games like go to the unique challenges of heads-up, no-limit. 
Before ever playing a real game DeepStack went through an intensive training period involving deep learning (a type of machine learning that uses algorithms to model higher-level concepts) in which it played millions of randomly generated poker scenarios against itself and calculated how beneficial each was. The answers allowed DeepStack’s neural networks (complex networks of computations that can “learn” over time) to develop general poker intuition that it could apply even in situations it had never encountered before. Then, DeepStack, which runs on a gaming laptop, played actual online poker games against 11 human players. (Each player completed 3,000 matches over a four-week period.) DeepStack used its neural network to break up each game into smaller pieces—at a given time, it was only thinking between two and 10 steps ahead. The AI solved each mini game on the fly, working through millions of possible scenarios in about three seconds and using the outcomes to choose the best move. “In some sense this is probably a lot closer to what humans do,” Bowling says. “Humans certainly don’t, before they sit down and play, precompute how they’re going to play in every situation. And at the same time, humans can’t reason through all the ways the poker game would play out all the way to the end.” DeepStack beat all 11 professional players, 10 of them by statistically significant margins. Campbell was impressed by DeepStack’s results. “They're showing what appears to be a quite a general approach [for] dealing with these imperfect-information games,” he says, “and demonstrating them in a pretty spectacular way.” In his view DeepStack is an important step in AI toward tackling messy, real-world problems such as designing security systems or performing negotiations. He adds, however, that even an imperfect-info game like poker is still much simpler than the real world, where conditions are continuously changing and our goals are not always clear. DeepStack is not the only AI system that has enjoyed recent poker success. In January a system called Libratus, developed by a team at Carnegie Mellon University, beat four professional poker players (the results have not been published in a scientific journal). Unlike DeepStack, Libratus does not employ neural networks. Instead, the program, which runs off a supercomputer, relies on a sophisticated abstraction technique early in the game and shifts to an on-the-fly reasoning strategy similar to that used by DeepStack in the game’s later stages. Campbell, who is familiar with both technologies, says it is not clear which is superior, pointing out that whereas Libratus played more elite professionals, DeepStack won by larger margins. Michael Wellman, a computer scientist at the University of Michigan who was also not involved in the work, considers both successes “significant milestone[s] in game computation.” Bowling sees many possible directions for future AI research, some related to poker (such as systems that can compete in six-player tournaments) and others that extend beyond it. “I think the interesting problems start to move into what happens if we’re playing a game where we don’t even know the rules,” he says. “We often have to make decisions where we’re not exactly sure how things actually work,” he adds, which will involve “building agents that can cope with that and learn to play those games, getting better as they interact with the world.”
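DeepStack's actual solver re-solves the game at each decision using counterfactual regret minimization over both players' possible hands, with deep neural networks supplying value estimates where the lookahead stops; that machinery does not fit in a few lines. The sketch below is only a conceptual illustration of the narrower idea described above: search a limited number of steps ahead and, at the cutoff, substitute a trained value estimate for playing the game out to the end. It uses a toy perfect-information game so it can run as written, and every name in it (CountdownGame, depth_limited_value, and the neutral_estimate stand-in for a trained network) is hypothetical.

```python
from typing import Callable, List

class CountdownGame:
    """Toy two-player game used only to exercise the search:
    players alternately take 1 or 2 chips; whoever takes the last chip wins."""
    def __init__(self, chips: int, player: int = 0):
        self.chips, self.player = chips, player
    def legal_actions(self) -> List[int]:
        return [a for a in (1, 2) if a <= self.chips]
    def apply(self, action: int) -> "CountdownGame":
        return CountdownGame(self.chips - action, 1 - self.player)
    def is_terminal(self) -> bool:
        return self.chips == 0
    def terminal_payoff(self) -> float:
        # The previous player took the last chip; payoff is from player 0's point of view.
        return 1.0 if self.player == 1 else -1.0

def depth_limited_value(node: CountdownGame, depth: int,
                        value_estimate: Callable[[CountdownGame], float]) -> float:
    """Look `depth` moves ahead; past the cutoff, trust the learned estimate
    (the stand-in for a neural-network 'intuition') instead of expanding
    the rest of the game."""
    if node.is_terminal():
        return node.terminal_payoff()
    if depth == 0:
        return value_estimate(node)       # estimate replaces deeper search
    values = [depth_limited_value(node.apply(a), depth - 1, value_estimate)
              for a in node.legal_actions()]
    # Player 0 maximizes the payoff, player 1 minimizes it.
    return max(values) if node.player == 0 else min(values)

def choose_action(node: CountdownGame, depth: int,
                  value_estimate: Callable[[CountdownGame], float]) -> int:
    """Pick the action with the best depth-limited value for player 0."""
    return max(node.legal_actions(),
               key=lambda a: depth_limited_value(node.apply(a), depth - 1, value_estimate))

# A crude stand-in for a trained network: "this position looks even".
neutral_estimate = lambda node: 0.0
print(choose_action(CountdownGame(chips=7), depth=4, value_estimate=neutral_estimate))
```

In DeepStack the cutoff estimate is a deep network trained on millions of random poker situations, and the search reasons over probability distributions of hidden hands rather than single states, but the shape of the computation, shallow lookahead plus a learned evaluation at the frontier, is the same.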
