News Article | May 11, 2017
On the seventh move of the crucial deciding game, black made what some now consider to have been a critical error. When black mixed up the moves of the Caro-Kann defence, white took advantage and created a new attack by sacrificing a knight. In just 11 more moves, white had built a position so strong that black had no option but to concede defeat. The loser reacted with a cry of foul play – one of the most strident accusations of cheating ever made in a tournament – which ignited an international conspiracy theory that is still debated 20 years later.

This was no ordinary game of chess. It’s not uncommon for a defeated player to accuse their opponent of cheating – but in this case the loser was the then world chess champion, Garry Kasparov. The victor was even more unusual: the IBM supercomputer Deep Blue. In defeating Kasparov on May 11, 1997, Deep Blue made history as the first computer to beat a world champion in a six-game match under standard time controls. Kasparov had won the first game, lost the second and then drawn the following three. When Deep Blue took the match by winning the final game, Kasparov refused to believe it.

In an echo of the chess automaton hoaxes of the 18th and 19th centuries, Kasparov argued that the computer must actually have been controlled by a real grandmaster. He and his supporters believed that Deep Blue’s playing was too human to be that of a machine. Meanwhile, to many of those in the outside world who were convinced by the computer’s performance, it appeared that artificial intelligence had reached a stage where it could outsmart humanity – at least at a game that had long been considered too complex for a machine. Yet in reality, Deep Blue won precisely because of its rigid, unhumanlike commitment to cold, hard logic in the face of Kasparov’s emotional behaviour.
This wasn’t artificial (or real) intelligence that demonstrated our own creative style of thinking and learning, but the application of simple rules on a grand scale. What the match did do, however, was signal the start of a societal shift that is gaining increasing speed and influence today. The kind of vast data processing that Deep Blue relied on is now found in nearly every corner of our lives, from the financial systems that dominate the economy to online dating apps that try to find us the perfect partner. What started as a student project helped usher in the age of big data.

The basis of Kasparov’s claims went all the way back to a move the computer made in the second game of the match, the first in the competition that Deep Blue won. Kasparov had played to encourage his opponent to take a “poisoned” pawn, a sacrificial piece positioned to entice the machine into making a fateful move. This was a tactic that Kasparov had used against human opponents in the past. What surprised Kasparov was Deep Blue’s subsequent move. Kasparov called it “human-like”. John Nunn, the English chess grandmaster, described it as “stunning” and “exceptional”. The move left Kasparov riled and ultimately threw him off his strategy. He was so perturbed that he eventually walked away, forfeiting the game. Worse still, he never recovered, drawing the next three games and then making the error that led to his demise in the final game.

The move was based on the strategic advantage that a player can gain from creating an open file, a column of squares on the board (as viewed from above) that contains no pieces. This can create an attacking route, typically for rooks or queens, free from pawns blocking the way. During training with the grandmaster Joel Benjamin, the Deep Blue team had learnt there was sometimes a more strategic option than opening a file and then moving a rook to it. Instead, the tactic involved piling pieces onto the file and then choosing when to open it up.
When the programmers learned this, they rewrote Deep Blue’s code to incorporate the moves. During the game, the computer used the position of having a potential open file to put pressure on Kasparov and force him into defending on every move. That psychological advantage eventually wore Kasparov down.

From the moment that Kasparov lost, speculation and conspiracy theories started. The conspiracists claimed that IBM had used human intervention during the match. IBM denied this, stating that, in keeping with the rules, the only human intervention came between games to rectify bugs that had been identified during play. It also rejected the claim that the programming had been adapted to Kasparov’s style of play; instead, the team had relied on the computer’s ability to search through huge numbers of possible moves.

IBM’s refusal of Kasparov’s request for a rematch and the subsequent dismantling of Deep Blue did nothing to quell suspicions. IBM also delayed the release of the computer’s detailed logs, as Kasparov had also requested, until after the decommissioning. But the subsequent detailed analysis of the logs has added new dimensions to the story, including the understanding that Deep Blue made several big mistakes.

There has since been speculation that Deep Blue only triumphed because of a bug in the code during the first game. One of Deep Blue’s designers has said that when a glitch prevented the computer from selecting one of the moves it had analysed, it instead made a random move that Kasparov misinterpreted as a deeper strategy. Kasparov managed to win that game, and the bug was fixed for the second. But the world champion was supposedly so shaken by what he saw as the machine’s superior intelligence that he was unable to recover his composure and played too cautiously from then on. He even missed the chance to come back from the open file tactic when Deep Blue made a “terrible blunder”.
Whichever of these accounts of Kasparov’s reactions to the match is true, all point to the fact that his defeat was at least partly down to the frailties of human nature. He over-thought some of the machine’s moves and became unnecessarily anxious about its abilities, making errors that ultimately led to his defeat. Deep Blue didn’t possess anything like the artificial intelligence techniques that today have helped computers win at far more complex games, such as Go.

But even if Kasparov was more intimidated than he needed to be, there is no denying the stunning achievements of the team that created Deep Blue. Its ability to take on the world’s best human chess player was built on some incredible computing power, which launched the IBM supercomputer programme that has paved the way for some of the leading-edge technology available in the world today. What makes this even more amazing is the fact that the project started not as an extravagant project from one of the largest computer manufacturers but as a student thesis in the 1980s.

When Feng-Hsiung Hsu arrived in the US from Taiwan in 1982, he can’t have imagined that he would become part of an intense rivalry between two teams that spent almost a decade vying to build the world’s best chess computer. Hsu had come to Carnegie Mellon University (CMU) in Pennsylvania to study the design of the integrated circuits that make up microchips, but he also held a longstanding interest in computer chess. He attracted the attention of the developers of Hitech, the computer that in 1988 would become the first to beat a chess grandmaster, and was asked to assist with hardware design. But Hsu soon fell out with the Hitech team after discovering what he saw as an architectural flaw in their proposed design. Together with several other PhD students, he began building his own computer known as ChipTest, drawing on the architecture of Bell Laboratories’ chess machine, Belle.
ChipTest’s custom technology used what’s known as “very large-scale integration” to combine thousands of transistors onto a single chip, allowing the computer to search through 500,000 chess moves each second. Although the Hitech team had a head start, Hsu and his colleagues would soon overtake them with ChipTest’s successor. Deep Thought – named after the computer in Douglas Adams’ The Hitchhiker’s Guide to the Galaxy that was built to find the meaning of life – combined two of Hsu’s custom processors and could analyse 720,000 moves a second. This enabled it to win the 1989 World Computer Chess Championship without losing a single game.

But Deep Thought hit a road block later that year when it came up against (and lost to) the reigning world chess champion, one Garry Kasparov. To beat the best of humanity, Hsu and his team would need to go much further. Now, however, they had the backing of computing giant IBM.

Chess computers work by attaching a numerical value to the position of each piece on the board using a formula known as an “evaluation function”. These values can then be processed and searched to determine the best move to make. Early chess computers, such as Belle and Hitech, used multiple custom chips to run the evaluation functions and then combine the results together. The problem was that communication between the chips was slow and used up a lot of processing power.

What Hsu did with ChipTest was to redesign and repackage the processors into a single chip. This removed a number of processing overheads, such as off-chip communication, and made possible huge increases in computational speed. Whereas Deep Thought could process 720,000 moves a second, Deep Blue used large numbers of processors running the same set of calculations simultaneously to analyse 100,000,000 moves a second. Increasing the number of moves the computer could process was important because chess computers have traditionally used what are known as “brute force” techniques.
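The two ingredients described above, an evaluation function and brute-force look-ahead, can be sketched in a few lines of Python. This is a toy illustration only: the piece values, position format and move-generator interface are invented for the example, and Deep Blue’s real evaluation ran in custom hardware and weighted thousands of positional features.

```python
# Toy sketch of a chess engine's two core ingredients: an "evaluation
# function" that scores a position, and a brute-force look-ahead search.
# The piece values and position format are invented for illustration.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def evaluate(position):
    """Score a position as white material minus black material."""
    score = 0
    for piece, colour in position:
        score += PIECE_VALUES.get(piece, 0) * (1 if colour == "w" else -1)
    return score

def search(position, depth, white_to_move, legal_moves):
    """Brute-force minimax: examine every successor position at every level."""
    successors = legal_moves(position)
    if depth == 0 or not successors:
        return evaluate(position)
    scores = [search(p, depth - 1, not white_to_move, legal_moves)
              for p in successors]
    return max(scores) if white_to_move else min(scores)

# A position where white is a rook up evaluates to +5 for white.
print(evaluate([("K", "w"), ("R", "w"), ("K", "b")]))
```

The `search` function simply recurses over every legal move to a fixed depth, which is why raw speed – the number of moves examined per second – mattered so much to the designers.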
Human players learn from past experience to instantly rule out certain moves. Chess machines, certainly at that time, did not have that capability and instead had to rely on their ability to look ahead at what could happen for every possible move. They used brute force in analysing very large numbers of moves rather than focusing on certain types of move they already knew were most likely to work. Increasing the number of moves a machine could look at in a second gave it the time to look much further into the future at where different moves would take the game.

By February 1996, the IBM team were ready to take on Kasparov again, this time with Deep Blue. Although it became the first machine to beat a world champion in a game under regular time controls, Deep Blue lost the overall match 4-2. Its 100,000,000 moves a second still weren’t enough to beat the human ability to strategise.

To up the move count, the team began upgrading the machine by exploring how they could optimise large numbers of processors working in parallel – with great success. The final machine was a 30-processor supercomputer that, more importantly, controlled 480 custom integrated circuits designed specifically to play chess. This custom design was what enabled the team to so highly optimise the parallel computing power across the chips. The result was a new version of Deep Blue (sometimes referred to as Deeper Blue) capable of searching around 200,000,000 moves per second. This meant it could explore how each possible strategy would play out up to 40 or more moves into the future.

By the time the rematch took place in New York City in May 1997, public curiosity was huge. Reporters and television cameras swarmed around the board and were rewarded with a story when Kasparov stormed off following his defeat and cried foul at a press conference afterwards. But the publicity around the match also helped establish a greater understanding of how far computers had come.
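The parallel approach described above follows a split-and-combine pattern: many workers each examine a share of the candidate moves, and the partial results are merged at the end. A minimal sketch of that shape, using Python threads and a made-up scoring rule purely as stand-ins for Deep Blue’s custom chess chips:

```python
from concurrent.futures import ThreadPoolExecutor

def score_move(move):
    # Made-up scoring rule standing in for a real position evaluation.
    return -abs(move - 37)

def best_in_chunk(moves):
    # Each worker independently searches its own share of the candidates.
    return max(moves, key=score_move)

def parallel_best_move(moves, workers=4):
    # Split the candidate moves across workers, then merge partial results.
    chunks = [moves[i::workers] for i in range(workers) if moves[i::workers]]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return max(pool.map(best_in_chunk, chunks), key=score_move)

print(parallel_best_move(list(range(100))))
```

The same pattern scales whether the workers are four threads, 480 custom chips, or thousands of GPU cores: split the search space, work in parallel, combine.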
What most people still had no idea about was how the technology behind Deep Blue would help spread the influence of computers to almost every aspect of society by transforming the way we use data. Complex computer models are today used to underpin banks’ financial systems, to design better cars and aeroplanes, and to trial new drugs. Systems that mine large datasets (often known as “big data”) to look for significant patterns are involved in planning public services such as transport or healthcare, and enable companies to target advertising to specific groups of people.

These are highly complex problems that require rapid processing of large and complex datasets. Deep Blue gave scientists and engineers significant insight into the massively parallel multi-chip systems that have made this possible. In particular, the project demonstrated the capabilities of a general-purpose computer system that controlled a large number of custom chips designed for a specific application.

The science of molecular dynamics, for example, involves studying the physical movements of molecules and atoms. Custom chip designs have enabled computers to model molecular dynamics to see how new drugs might react in the body, just like looking ahead at different chess moves. Molecular dynamics simulations have helped speed up the development of successful drugs, such as some of those used to treat HIV.

For very broad applications, such as modelling financial systems and data mining, designing custom chips for each individual task would be prohibitively expensive. But the Deep Blue project helped develop the techniques to code and manage highly parallelised systems that split a problem over a large number of processors. Today, many systems for processing large amounts of data rely on graphics processing units (GPUs) instead of custom-designed chips. These were originally designed to produce images on a screen but also handle information using lots of processors in parallel.
So now they are often used in high-performance computers processing large datasets and to run powerful artificial intelligence tools such as Facebook’s digital assistant. There are obvious similarities with Deep Blue’s architecture here: custom chips (built for graphics) controlled by general-purpose processors to drive efficiency in complex calculations.

The world of chess-playing machines, meanwhile, has evolved since the Deep Blue victory. Despite his experience with Deep Blue, Kasparov agreed in 2003 to take on two of the most prominent chess machines, Deep Fritz and Deep Junior. Both times he managed to avoid defeat, although he still made errors that forced him into a draw. However, both machines convincingly beat their human counterparts in the 2004 and 2005 Man vs Machine World Team Championships.

Junior and Fritz marked a change in the approach to developing systems for computer chess. Whereas Deep Blue was a custom-built computer relying on the brute force of its processors to analyse millions of moves, these new chess machines were software programs that used learning techniques to minimise the searches needed. Such programs can beat brute-force techniques while running on nothing more than a desktop PC.

But despite this advance, we still don’t have chess machines that resemble human intelligence in the way they play the game – they don’t need to. And, if anything, the victories of Junior and Fritz further strengthen the idea that human players lose to computers, at least in part, because of their humanity. The humans made errors, became anxious and feared for their reputations. The machines, on the other hand, relentlessly applied logical calculations to the game in their attempts to win. One day we might have computers that truly replicate human thinking, but the story of the last 20 years has been the rise of systems that are superior precisely because they are machines.

This article was originally published on The Conversation. Read the original article.
Mark Robert Anderson does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.
News Article | May 23, 2017
AlphaGo took the first encounter in a three-game series against brash 19-year-old Chinese world number one Ke Jie, who before the game called the programme a "cold machine" and vowed never to play it again after this week.

AlphaGo, which was developed by London-based AI company DeepMind Technologies, stunned the Go community a year ago when it trounced South Korean grandmaster Lee Se-Dol four games to one—the first time a computer programme beat a top player in a full contest. The win over Lee was hailed as a technology landmark, fuelling visions of a brave new world of AI that can not only drive cars and operate "smart homes", but potentially help mankind figure out some of the most complex scientific, technical, and medical problems.

Last year's win set the stage for a much-anticipated contest with Ke in the eastern Chinese city of Wuzhen. Ke, who has been the top-ranked player in the 3,000-year-old game for more than two years and has previously described himself as "pretentious", boldly said last year "Bring it on!" and vowed never to lose to a machine.

Computer programmes have previously beaten humans in cerebral contests, starting with IBM's Deep Blue defeating chess grandmaster Garry Kasparov in 1997. But AlphaGo's success is considered the most significant for AI yet due to the complexity of Go, which has an incomputable number of move options, putting a premium on human-like "intuition", creative thinking and the ability to learn.

Ke has alternated between awe and disdain for AlphaGo. On Monday night, Ke said on China's Twitter-like Weibo platform that "the advancement of AI has far exceeded our imagination", but added he would never play it again after this week. "It will always be a cold machine. Compared to humans, I cannot feel its passion and longing for the game of Go," he said.

Go involves two players alternately laying black and white stones on a grid, seeking to seal off the most territory.
Proponents had considered Go a bastion in which human ingenuity would remain superior, at least for the foreseeable future. AlphaGo uses two sets of "deep neural networks" containing millions of connections similar to neurons in the brain. It is partly self-taught—having played millions of games against itself after initial programming.
News Article | May 23, 2017
Computer beats Chinese champion in ancient board game of go

(AP) — A computer defeated China's top player of the ancient board game go on Tuesday in the latest test of whether artificial intelligence can master one of the last games that machines have yet to dominate.

Ke Jie, a 19-year-old prodigy, is playing a three-game match against AlphaGo in this town west of Shanghai. During the five-day event, the computer also is to face off against other top-ranked Chinese players.

AlphaGo beat Ke by a half-point, "the closest margin possible," according to Demis Hassabis, founder of DeepMind, the Google-owned company that developed AlphaGo. "AlphaGo wins game 1!" said Hassabis on Twitter. "Ke Jie fought bravely and some wonderful moves were played."

Go, which originated in China more than 25 centuries ago, has avoided mastery by machines even as computers surpassed humans in most other games. They conquered chess in 1997 when IBM Corp.'s Deep Blue system defeated champion Garry Kasparov. Go, known as weiqi in China and baduk in Korea, is considered more challenging because the near-infinite number of possible positions requires intuition and flexibility.

Players take turns putting white or black stones on a grid with 361 intersections, trying to surround larger areas of the board while also capturing each other's pieces. Competitors play until both agree there are no more places to put stones or one quits.

Players had expected it to be at least another decade before computers could beat the best humans due to go's complexity and reliance on intuition, but AlphaGo surprised them in 2015 by beating a European champion. Last year, it defeated South Korea's top player, Lee Sedol.

AlphaGo was designed to mimic such intuition in tackling complex tasks. Google officials say they want to apply those technologies to areas such as smartphone assistants and solving real-world problems.
Human players were startled when AlphaGo scored its first major upset in October 2015 by defeating a European champion. AlphaGo defeated Lee, the South Korean, in four out of five games during a weeklong match in March 2016. Lee lost the first three games, then came back to win the fourth, after which he said he had taken advantage of weaknesses, including AlphaGo's poor response to surprises.

Go is hugely popular in Asia, with tens of millions of players in China, Japan and the Koreas. Google said a broadcast of Lee's match with AlphaGo was watched by an estimated 280 million people. Players have said AlphaGo enjoys some advantages because it doesn't get tired or emotionally rattled, two critical aspects of the mentally intense game.
News Article | May 21, 2017
China's 19-year-old Ke Jie is given little chance in the three-game series beginning Tuesday in the eastern Chinese city of Wuzhen after AlphaGo stunned observers last year by trouncing South Korean grandmaster Lee Se-Dol four games to one. Lee's loss in Seoul marked the first time a computer programme had beaten a top player in a full match in the 3,000-year-old Chinese board game, and has been hailed as a landmark event in the development of AI.

AI has previously beaten humans in cerebral contests, starting with IBM's Deep Blue defeating chess grandmaster Garry Kasparov in 1997, but AlphaGo's win last year is considered the most significant win for AI yet. Go is considered perhaps the most complex game ever devised, with an incomputable number of move options that puts a premium on "intuition." Proponents had considered it a bastion in which human thought would remain superior, at least for the foreseeable future.

AlphaGo's triumph fuelled hopes of a brave new world in which AI is applied not only to driverless cars or "smart homes", but to helping mankind figure out some of the most complex scientific, technical, and medical problems. "AlphaGo's successes hint at the possibility for general AI to be applied to a wide range of tasks and areas, to perhaps find solutions to problems that we as human experts may not have considered," Demis Hassabis, founder of London-based DeepMind, which developed AlphaGo, said ahead of this week's matches.

AI's ultimate goal is to create "general" or multi-purpose, rather than "narrow," task-specific intelligence—something resembling human reasoning and the ability to learn. But for some, it conjures sci-fi images of a future in which machines "wake up" and enslave humanity. Physicist Stephen Hawking is a leading voice for caution, warning in 2015 that computers may outsmart humans, "potentially subduing us with weapons we cannot even understand."
Ke is a brash prodigy who went pro at 11 years old, has been world number one for more than two years, and has described himself as a "pretentious prick". After AlphaGo flattened Lee, Ke declared he would never lose to the machine. "Bring it on," he said on China's Twitter-like Weibo. But he has tempered his bravado since then.

Ke was among many top Chinese players who were trounced in online contests in January by a mysterious adversary who reportedly won 60 straight victories. That opponent—cheekily calling itself "The Master"—was later revealed by DeepMind to have been an updated AlphaGo.

"Even that was not AlphaGo's best performance," Gu Li, a past national champion, told Chinese state media last week. "It would be very hard for Ke to play against it, but then again, Ke has also been working extremely hard to change his methods in preparation. I hope he can play well."

Go involves two players alternately laying black and white stones on a grid. The winner is the player who seals off the most territory. AlphaGo uses two sets of "deep neural networks" containing millions of connections similar to neurons in the brain. It is partly self-taught—having played millions of games against itself after initial programming.
News Article | April 26, 2017
Nearly 20 years ago, I was fortunate enough to play friendly blitz chess against former world champion Garry Kasparov. It was quite an experience; his competitive spirit and creative genius were palpable. I had recently founded Elixir Studios, which specialized in artificial intelligence (AI) games, and my ambition was to conduct cutting-edge research in the field. AI was on my mind that day: Kasparov had played chess against IBM's supercomputer Deep Blue just a few years before. Now, he sets out the details of that titanic event in his memoir Deep Thinking. The 1997 match was a watershed for AI and an extraordinary technical feat. Strangely, although Kasparov lost, it left me more in awe of the incredible capabilities of the human brain than of the machine. Kasparov was able to compete against a computational leviathan and to complete myriad other tasks that make us all distinctly human. By contrast, Deep Blue was hard-coded with a set of specialized rules distilled from chess grandmasters, and empowered with a brute-force search algorithm. It was programmed to do one thing only; it could not have played even a simpler game such as noughts and crosses without being completely reprogrammed. I felt that this brand of 'intelligence' was missing crucial traits such as generality, adaptability and learning. As he details in Deep Thinking, Kasparov reached a similar conclusion. The book is his first thorough account of the match, and it offers thoughtful meditations on technology. The title references what he believes chess engines cannot do: they can calculate, but not innovate or create. They cannot think in the deepest sense. In drawing out these distinctions, Kasparov provides an impressively researched history of AI and the field's ongoing obsession with chess. 
For decades, leading computer scientists believed that, given the traditional status of chess as an exemplary demonstration of human intellect, a competent computer chess player would soon also surpass all other human abilities. That proved not to be the case. This has to do partly with differences between human and machine cognition: computers can easily perform calculation tasks that people consider incredibly difficult, but totally fail at commonsense tasks we find intuitive (a phenomenon called Moravec's paradox). It was also due to industry and research dynamics in the 1980s and 1990s: in pursuit of quick results, labs ditched generalizable, learning-based approaches in favour of narrow, hand-coded solutions that exploited machines' computational speed. The focus on brute-force approaches had upsides, Kasparov explains. It may not have delivered on the promise of general-purpose AI, but it did result in very powerful chess engines that soon became popularly available. Today, anyone can practise for free against software stronger than the greatest human chess masters, enabling enthusiasts worldwide to train at top levels. Before Deep Blue, pessimists predicted that the defeat of a world chess champion by a machine would lead to the game's death. In fact, more people play now than ever before, according to World Chess Federation figures. Chess engines have also given rise to exciting variants of play. In 1998, Kasparov introduced 'Advanced Chess', in which human–computer teams merge the calculation abilities of machines with a person's pattern-matching insights. Kasparov's embrace of the technology that defeated him shows how computers can inspire, rather than obviate, human creativity. In Deep Thinking, Kasparov also delves into the renaissance of machine learning, an AI subdomain focusing on general-purpose algorithms that learn from data. 
He highlights the radical differences between Deep Blue and AlphaGo, a learning algorithm created by my company DeepMind to play the massively complex game of Go. Last year, AlphaGo defeated Lee Sedol, widely hailed as the greatest player of the past decade. Whereas Deep Blue followed instructions carefully honed by a crack team of engineers and chess professionals, AlphaGo played against itself repeatedly, learning from its mistakes and developing novel strategies. Several of its moves against Lee had never been seen in human games — most notably move 37 in game 2, which upended centuries of traditional Go wisdom by playing on the fifth line early in the game. Most excitingly, because its learning algorithms can be generalized, AlphaGo holds promise far beyond the game for which it was created. Kasparov relishes this potential, discussing applications from machine translation to automated medical diagnoses. AI will not replace humans, he argues, but will enlighten and enrich us, much as chess engines did 20 years ago. His position is especially notable coming from someone who would have every reason to be bitter about AI's advances.

His account of the Deep Blue match itself is fascinating. Famously, Kasparov stormed out of one game and gave antagonistic press conferences in which he protested against IBM's secrecy around the Deep Blue team and its methods, and insinuated that the company might have cheated. In Deep Thinking, Kasparov offers an engaging insight into his psychological state during the match. To a degree, he walks back his earlier claims, concluding that although IBM probably did not cheat, it violated the spirit of fair competition by obscuring useful information. He also provides a detailed commentary on several crucial moments; for instance, he dispels the myth that Deep Blue's bizarre move 44 in the first game of the match left him unrecoverably flummoxed.
Kasparov includes enough detail to satisfy chess enthusiasts, while providing a thrilling narrative for the casual reader. Deep Thinking delivers a rare balance of analysis and narrative, weaving commentary about technological progress with an inside look at one of the most important chess matches ever played.
News Article | April 17, 2017
During human evolution, our cerebral cortex increased in size in response to new environmental challenges. The cerebral cortex is the site of diverse processes, including visual perception and language acquisition. However, no accepted unitary theory of cortical function exists yet. One hypothesis is that there is an evolutionarily conserved neural circuit that implements a simple and flexible computation. This idea is known as the "canonical circuit." As Gary Marcus, Adam Marblestone, and Thomas Dean note in a perspective piece in Science, there is still, nearly four decades later, no consensus about whether such a canonical circuit exists. The researchers argue that there is little evidence that such a uniform structure can capture the diversity of cortical function even in simple mammals.

What would it mean for the cortex to be diverse rather than uniform? Marcus and his colleagues propose that the brain may consist of an array of elementary computational units that act in parallel, as in a microprocessor.

To probe questions of cortical circuitry, collaborative endeavors engaging scientists and engineers have emerged. In parallel with the Human Brain Project in Europe, the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative is developing tools to view the brain at superior spatial and temporal resolution. Researchers aim to directly study the relationship of a neuron's function to its connections. This can be accomplished by mapping neuronal connections in conjunction with acquiring functional activity data. Researchers can then scale up from the study of neurons and synapses to that of neural circuits and networks.

However, this bottom-up approach may not be the best way to understand the neural basis of human cognition. Psychologist Frank van der Velde argues that this confuses the nature of understanding with the way to achieve understanding. He provides an analogy to physics.
To understand the universe, a bottom-up approach would begin with an understanding of elementary particles, how these particles combine to make atoms, and how atoms combine to make molecules. However, physics did not begin with the study of elementary particles. It began with the study of the solar system, which provided the first laws of motion. In short, understanding can proceed from the top down as well as from the bottom up. To understand human cognition, neuroscientists should combine top-down and bottom-up approaches. One top-down approach is generating a large-scale simulation of the neural processes that generate intelligence, a project at the intersection of neuroscience and artificial intelligence. For instance, a research team at IBM represented 8 × 10⁶ neurons with 6,400 synapses per neuron on the IBM Blue Gene supercomputer, and ran 1 s of model time in 10 s of real time. Large-scale simulations provide a tool to investigate the brain that is not limited by the experimental and ethical constraints of in vivo work. Through such virtual models alongside neuroscience studies, researchers may arrive at the neural basis of intelligence. As neuroscience details the anatomy and activity of the brain, artificial intelligence seeks an independent, non-biological path to intelligence. Researchers in both fields can agree that artificial intelligence is not yet the equal of natural intelligence at the tasks most important for human cognition, such as vision, motor manipulation, and natural language understanding. However, this is not to say that artificial intelligence won't ultimately attain the principles of natural intelligence. I met with Ernest Davis, a professor of computer science at New York University, to learn more about artificial intelligence. Advances in artificial intelligence have been remarkable, from IBM Watson to Google DeepMind's AlphaGo. Davis believes that AI will continue to advance and will be used for both positive and negative applications. 
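For a sense of the scale of the IBM simulation described above, a quick back-of-the-envelope calculation using only the figures quoted in the article (8 × 10⁶ neurons, 6,400 synapses per neuron, a 10-to-1 slowdown) shows how large even this partial model is:

```python
# Scale of the IBM cortical simulation, from the figures quoted in the article.
neurons = 8_000_000
synapses_per_neuron = 6_400

total_synapses = neurons * synapses_per_neuron
slowdown = 10 / 1  # wall-clock seconds needed per simulated second

print(f"total synapses modeled: {total_synapses:.2e}")  # ~5.12e10
print(f"slowdown factor: {slowdown:.0f}x slower than real time")
```

Even this model, with tens of billions of synapses, runs an order of magnitude slower than real time, which illustrates why large-scale brain simulation remains computationally demanding.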
For instance, AI can be used to improve the quality of medicine and to build self-driving cars. "The impact of that will be largely positive, though there will be a negative impact on employment. It will also be used for military purposes, and the immediate effects of that are mostly negative," Davis says. Nevertheless, artificial intelligence is far from producing programs with human-level intelligence. The human brain has an unsurpassed ability to carry out complex, real-time interactions in a dynamic world. To explain how reasoning is implemented in artificial intelligence, Davis describes the available techniques by example. For instance, the chess-playing program Deep Blue reasoned by thinking through a large number of paths into the future and determining which one works out best. A second example is software verification programs. These algorithms search for bugs in programs through logical reasoning, providing users with a formal proof that a program doesn't have a certain kind of bug. A third example is taxonomic reasoning, a simple form of logical reasoning by category. This type of reasoning has been done in programs called semantic nets, which have been in use for 50 years.
A New Century of the Brain
Paul Allen, the co-founder of Microsoft, has funded two distinct research projects in neuroscience and artificial intelligence. The Washington Post aptly describes this as Allen's $500 million quest to dissect the mind and code a new one from scratch. Elon Musk has funded a rival company, Neuralink, to create human-computer hybrids. On building an artificial brain, Davis says: "The replica of the human brain can mean either the ability to do human-level tasks, which I think in principle is possible, or algorithms that are an imitation of the brain. We don't yet know which direction is the best way to pursue AI. There's an awful lot we don't understand about the brain, so it may be premature to try the latter." 
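The look-ahead reasoning Davis attributes to Deep Blue is, in essence, game-tree search. The sketch below is a generic minimax on a hypothetical toy game (take 1 or 2 sticks; whoever takes the last stick wins), not IBM's actual code, which combined deep alpha-beta search with handcrafted evaluation; it only illustrates the core idea of enumerating future paths and choosing the best guaranteed outcome:

```python
def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Best achievable score from `state`, looking up to `depth` plies ahead."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state, maximizing)
    scores = [minimax(apply_move(state, m), depth - 1, not maximizing,
                      moves, apply_move, evaluate) for m in legal]
    return max(scores) if maximizing else min(scores)

# Toy game: sticks on the table; each player takes 1 or 2; taking the last stick wins.
moves = lambda sticks: [m for m in (1, 2) if m <= sticks]
apply_move = lambda sticks, m: sticks - m
# With no sticks left, the player *to move* has lost (the opponent took the last one);
# at a depth cutoff with sticks remaining, score the unknown position as neutral (0).
evaluate = lambda sticks, maximizing: 0 if sticks else (-1 if maximizing else 1)

print(minimax(4, 10, True, moves, apply_move, evaluate))  # 1: first player can force a win
```

Chess programs apply the same principle, but with vastly larger move sets, pruning, and evaluation functions tuned by experts.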
Neuroscience and artificial intelligence both aim to identify architectures of intelligence and cognition. Michael Gazzaniga states that cognitive neuroscience asks: what are the algorithms that drive structural neural elements into the physiological activity that results in perception, cognition, and consciousness? Likewise, artificial intelligence asks: what are the algorithms that can generate human-like intelligence in machines to perform reasoning, planning, and learning? Both fields describe mechanisms of intelligence as "algorithms." To uncover human intelligence and cognition, intersections of neuroscience and artificial intelligence are necessary. Perhaps only then can we understand what makes us human.
News Article | April 17, 2017
In one of the top achievements for AI, AlphaGo, Google DeepMind's machine learning platform, defeated Go world champion Lee Sedol in March 2016, at a game more complex than chess. AlphaGo won four games of the five-game tournament, after mastering the game some 10 years earlier than many experts had predicted. AlphaGo has continued to improve its skills throughout the past year, training by playing against top players from South Korea, China, and Japan. And at the end of May, AlphaGo will explore a new way to learn: by pairing up with top Go players and AI experts at the Future of Go Summit. The five-day conference, held from May 23-27, will bring together AI experts, the China Go Association, AlphaGo, and China's top Go players in Wuzhen, China. AlphaGo harnesses convolutional neural networks to play the ancient Chinese game. What has made it particularly impressive is the fact that it uses reinforcement learning, instead of being explicitly programmed for the task. While IBM's Deep Blue achieved an AI victory in 1997 by beating world chess champion Garry Kasparov, it relied on handcrafted chess knowledge and brute-force search. And Go is a complex game that relies heavily on intuition, with potentially 200 options per move, as opposed to around 20 on a chessboard. While it is interesting to see what AlphaGo is capable of, the summit in May will be a test of how AI and human collaborations can work. It will be a chance to see how human learning can be enhanced by AI. And the takeaways will likely extend past the gaming world. Manuela Veloso, head of machine learning at Carnegie Mellon University, previously told TechRepublic that she was interested to see "if and how AlphaGo's learning approach may apply to other different 'non-game' problems." AlphaGo has already taken on the task of reducing energy use, and the technology has been applied to medical research projects as well. 
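The branching-factor contrast in the article (roughly 200 candidate moves per Go position versus roughly 20 in chess) can be made concrete with a simplified calculation. A constant branching factor is an illustrative assumption, not an exact property of either game, but it shows why exhaustive look-ahead becomes hopeless in Go:

```python
# Count possible move sequences assuming a constant branching factor
# (a simplification: real branching varies by position).
def sequences(branching, depth):
    return branching ** depth

for depth in (2, 4, 6):
    chess, go = sequences(20, depth), sequences(200, depth)
    print(f"depth {depth}: chess ~{chess:.1e}, Go ~{go:.1e}, ratio {go // chess:,}x")
```

The gap grows by a factor of 10 per ply, which is why Deep Blue's brute-force approach could not simply be transplanted to Go, and why AlphaGo's learned evaluation mattered.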
Toby Walsh, AI professor at the University of New South Wales, also previously told TechRepublic that the nature of the game itself could change, and that the AI system "played moves that have surprised even Go masters." "Will man and machine together be better than man or machine alone?" Walsh asked. "Each of us can bring unique strengths to the table."
News Article | May 8, 2017
CLEARWATER, FL--(Marketwired - May 8, 2017) - Endurance Exploration Group, Inc. ( : EXPL) ("Endurance" or the "Company"), a company specializing in shipwreck research, survey and recovery, announces that, operating through its joint venture, Swordfish Partners (the "Joint Venture"), the Company has located and conducted preliminary survey operations on a shipwreck site believed to be the remnants of an early-1800s American wooden-hulled passenger steamship ("American Passenger Steam Ship" or "Target") located some forty (40) miles off the coast of North Carolina. Based upon the Company's research files and the research and other data contributed by its joint venture partner, Deep Blue Exploration, LLC dba Marex, the Company, operating through its Joint Venture, conducted side-scan sonar search and survey operations off the coast of North Carolina during the spring of this year. During this phase of its operations, the Company has located what it believes to be the Target of its search, an American Passenger Steam Ship that may contain a valuable cargo of passenger artifacts, specie, and jewelry. The wreck site lies some forty miles off the coast of North Carolina, and the wreckage pattern, debris field, machinery characteristics, and other information lead the Company to believe it is possibly the American Passenger Steam Ship that was the subject and Target of the Company's Joint Venture search. The Company's Joint Venture expects to conduct additional survey operations in the near term, in an effort to positively identify the wreckage as its Target and, if warranted, move the project from a survey operation to a salvage operation beginning in the summer of this year. Endurance CEO Micah J. Eldred commented, "We are very pleased with the results of our initial search and survey for this recently discovered shipwreck. 
If ultimately confirmed as the subject of our targeted search, the American Passenger Steamship may represent a unique opportunity for our Company and our shareholders. Based upon our archival research, we believe passengers lost significant amounts of specie and valuables on the American Passenger Steamship we have been searching for, and the location of this wreck site, if positively confirmed, may allow a salvage process that should prove less technically challenging and less costly than some of the other, deep-water projects we are working on." Endurance's Joint Venture has petitioned the U.S. Federal Court for the Middle District of Florida for a claim to the shipwreck, an appointment of the Joint Venture as substitute custodian of the shipwreck, and a salvage award or title to the shipwreck and its cargo. If granted, Endurance plans to return to the wreck site to conduct salvage operations beginning in 2017.
About Endurance Exploration Group, Inc.: Endurance Exploration Group, Inc. specializes in historic shipwreck research and the subsea search, survey and recovery of lost ships containing valuable cargoes. Over the last 8 years, Endurance has developed a research database of over 1,400 ships known to be lost with valuable cargoes in the world's oceans, and in 2013 began subsea search and survey operations.
About Deep Blue Exploration LLC dba Marex: JV partner Deep Blue Exploration, LLC dba Marex of Memphis, Tennessee has been researching, locating, and salvaging historic and valuable shipwrecks for over 25 years. Some of their better-known shipwreck projects include the famous galleon Nuestra Senora de las Maravillas, which sank in the Bahamas in 1656, El Cazadore (1780), SS North Carolina (1840), SS Merida (1911), and SS Ancona (1917). 
Marex Chairman, Captain Herbo Humphreys, commented, "We are excited and confident to be working with Endurance, and we feel this American Passenger Steamship project may ultimately prove to be one of the most historic and economically viable projects we have been involved with over the years." In the coming weeks, the Company may be posting sonar and still images as well as video footage of the shipwreck to its website and to its Facebook page at the following links: This press release includes "forward-looking statements" within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended. In addition to statements which explicitly describe such risks and uncertainties, readers are urged to consider statements labeled with the terms "believes," "belief," "expects," "intends," "anticipates," "will," or "plans" to be uncertain and forward-looking. The forward-looking statements contained herein are also subject generally to other risks and uncertainties, including but not limited to legal and operational risks of offshore, historic shipwreck recovery. Forward-looking statements contained in this press release are made under the Safe Harbor Provision of the Private Securities Litigation Reform Act of 1995. Any such statements are subject to risks and uncertainties that could cause actual results to differ materially from those anticipated. The information contained in this release is as of May 8, 2017. Endurance Exploration Group, Inc. assumes no obligation to update forward-looking statements contained in this press release as the result of new information or future events or developments.
News Article | May 2, 2017
Looking for the best Xbox One deals? You're in the right place for the cheapest Xbox One prices and bundles from around the web, as we're constantly on the lookout for the best offers. The Xbox One S has generally replaced the original model now, but we still see some cheap discounts, so keep an eye on both the comparison charts and bundle lists below. Don't forget, bundles are usually way better value than buying a console on its own. Looking to buy in the UK? You'll want to take a look at our UK page then. Before you look at the older Xbox One bundles, you may want to consider the new Xbox One S. The slimmed-down design looks much better than the original chunky box, and the power brick has been absorbed. The main draw, though, is 4K visual support, meaning you'll be able to watch specialized 4K Blu-ray and Netflix content in 4K on your new 4K TV. The new version's 2TB, 1TB and 500GB models launched recently and have been selling very well. These prices are for the older Xbox One model. The console is considerably larger and has an external power brick. It won't play Netflix or Blu-rays in 4K, but it is just as effective a games console as the new Xbox One S. Nowadays, though, we're finding the Xbox One S deals are even cheaper, so we'd only go for the older model if you spot a seriously cheap deal. 500GB Xbox One S | Battlefield 1 | $269 @ Amazon This is the cheapest Xbox One S Battlefield 1 bundle we've seen in a while and will probably be back up to around $300 soon. You're only getting a digital copy, but it's still way cheaper than buying a physical copy separately. 500GB Xbox One S Minecraft Favourites bundle | $245 @ Amazon This is the cheapest way to get hold of a brand new Xbox One S console this week, and you're getting a few Minecraft-related goodies with it too, including both the Xbox One version and early access to the Windows 10 version. There's a batch of content packs to give you a head start too. 
View this deal: Xbox One S with Minecraft Favourites $245 @ Amazon Blue 500GB Xbox One S | Gears of War 4 | $269 @ Amazon The 500GB hard drive isn't ideal for the keen gamer, but it's certainly manageable. We'd try to make it work, too, in order to get one of these gorgeous Deep Blue Xbox One consoles under the TV. Gears of War 4 is a fine addition to the series from a fresh studio. View this deal: Blue 500GB Xbox One S with Gears 4 $269 @ Amazon 1TB Xbox One S | Halo Wars 2 and Season Pass | $314.49 @ Amazon Given the high prices of the 1TB Xbox One S right now, this could be the bundle to go for if you were planning on picking up Halo Wars 2 and the season pass. It's $30 cheaper this week too. View this deal: 1TB Xbox One S with Halo Wars 2 and Season Pass $314.49 @ Amazon 1TB Xbox One S | Gears of War 4 | $285 @ Amazon Gears of War 4 should be high on the must-buy list of any new Xbox One S owner, so this is an ideal bundle, especially if some of the other bundled games like FIFA 17 or Battlefield 1 don't take your fancy. View this deal: 1TB Xbox One S with Gears of War 4 $285 @ Amazon Xbox Live Gold: Need to top up your Xbox Live Gold membership? Don't pay the default automatic £40 renewal price. Check out our range of Xbox Live Gold deals below to save some money. Here's our roundup of the best Xbox One controller deals around. We've listed the best prices for all the cheapest official pads and the Elite controller. Still considering a PS4 instead? Then you'll want to take a look at our cheap . Try our new Google Chrome add-on and never pay more than the cheapest prices ever again! Pricehawk is a Chrome extension that will automatically find you the cheapest deals for the tech and games items you're shopping for online. It'll also let you know if there are voucher codes you can use to save even more money!
News Article | February 16, 2017
In continuation of our letters dated February 8, 2017 and February 13, 2017, please find enclosed the transcripts of the 'KIE's Chasing Growth Conference' held on February 13, 2017. The same will be made available on the Company's website at the following weblink: https://www.infosys.com/investors/news-events/events/Documents/chasing-growth-conference-13feb2017.pdf This is for your information and records.
Our next speaker is Dr. Vishal Sikka. Dr. Vishal Sikka is the current CEO and Managing Director of Infosys, and a globally recognized innovation and business leader in the IT services industry. Dr. Sikka was born and raised in India. He studied Computer Science at Syracuse University before receiving his Ph.D. in AI from Stanford in 1996. After a long and successful stint at SAP, he joined Infosys as CEO in 2014 to help drive a massive transformation not only at Infosys but also within the larger IT services industry. In the two and a half years since he took over, his vision and leadership have helped accelerate Infosys's transformation, resulting in the growth of their global workforce to almost 2 lakh people, a massive reduction in attrition, a doubling of large-deal total contract value and of the number of large clients, and industry-leading cash flow generation, with revenues of $9.5 billion despite strong headwinds. Infosys's new software and new services have seen tremendous growth as well. Client satisfaction levels are accordingly at their highest levels ever, particularly in the CXO segment, which according to Infosys's annual survey showed a 22-point increase. May I invite Dr. Vishal Sikka to talk about Infosys and the future of IT services.
Thank you so much. That was quite an introduction. Hi, Mr. Kotak. Good to see all of you. Uday has asked me to organize my talk in two sections. In the first part I want to talk about our performance and the financial metrics that most of you care a lot about, and then I want to spend some time looking at the bigger picture. 
I hope to finish the speech itself in about 25 minutes and then leave around half an hour for your questions. Two and a half years ago, when I started, we laid out a relatively straightforward strategy. The idea was a three-part strategy: renewing the core business, entering into new businesses, and renewing the culture of the company. The interesting part about this three-part strategy was that it reflects the challenges of our clients, who have this dual priority of, on one hand, getting better at everything they already do, and on the other hand, doing new things that they have never done before. In all of this, they have their own timeless culture and value system that they operate on. So, if we look over the last two and a half years, the execution that we have followed on this path is starting to show signs of success. I will quickly walk you through some of these metrics. Our relative revenue performance compared to the industry has distinctly improved. When I joined, our revenue growth was significantly below the industry's. It has now come up towards the top of the industry, or certainly in line with the industry. Year-on-year, if you compare the first three quarters of this financial year to the last one, you see the performance there, and despite the challenging environment, we have managed to keep the operating margin steady. If you look at many of the operational parameters of our business, core business utilization has been above 80% for the last seven quarters, which is the first time in a very long time for our company. Employee cost as a percentage of revenue has come down. You can see that operating cash flow as a percentage of net profit is getting close to 100%. On the onsite mix we can still do better, but subcontractor costs are something that is continually improving. 
I know a lot of you have questions around visa policies and things like this; we can talk about that in the Q&A, but our endeavor has been to get closer to the 30% number. In terms of the core business, IT services continue to grow through a dedicated focus on renewing our existing services through a combination of automation and innovation, as well as shifting our portfolio increasingly towards higher-value, value-add services. One area of disappointment for us has been the consulting business. We continue to focus on this over the course of this year. The negative performance of consulting in Q1 impacted us more than we expected. Products, platforms and other areas of the business also continue to grow, and here is a different cut on that. If you look from the time when I started to last quarter, revenue has grown by more than $400 million per quarter. While the margin is the same, at 25.1%, operating profit has gone up from $536 million to $640 million. Attrition has come down significantly, from 23.4% at a group level in the quarter when I joined to a little below 15% now. There is a separate attrition metric that we have started to focus on internally over the last year, high-performer attrition, and that number has come down significantly, to single digits. Revenue per FTE is one of the key metrics of our strategy. While it decreased compared to when I started, in the last couple of quarters it has started to go back up, and that is something we are very happy about. One key part of the strategy is the new software. When we talk about renew and new, there are new services in unprecedented new areas, and there is also new software that we did not have before. Excluding Finacle, software revenue in the quarter when I started was $35 million, and that number has already gone up to $60 million now. The number of $100 million accounts went up from 12 to 18, and $200 million accounts have doubled from 3 to 6 in that time. 
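As a rough cross-check of the figures just quoted: assuming the stated 25.1% operating margin held in both quarters, the quarterly revenues implied by the two operating-profit figures can be recovered, and their difference should line up with the "more than $400 million per quarter" growth claim:

```python
# Sanity-check the quoted figures: operating profit / margin = implied revenue.
margin = 0.251
profit_start, profit_now = 536, 640  # operating profit, $M per quarter

revenue_start = profit_start / margin
revenue_now = profit_now / margin
growth = revenue_now - revenue_start

print(f"implied quarterly revenue: ${revenue_start:,.0f}M -> ${revenue_now:,.0f}M")
print(f"implied growth: ${growth:,.0f}M per quarter")  # just over $400M, matching the claim
```

The numbers are internally consistent: a $104M rise in operating profit at a flat 25.1% margin implies roughly $414M of additional quarterly revenue.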
Organizationally, Pravin and I have established a strong team around us. Jayesh is here, our Deputy CFO. Similarly, we have appointed Ravi as our Deputy COO. There are three Presidents responsible for our go-to-market. Below their level we have established 15 industry heads to create much more focused, small, encapsulated, empowered business units, so that client focus and agility can be achieved; it is the best way for us to achieve scale. Similarly, we have been making a lot of progress, which Pravin oversees, on simplifying our internal policies, systems and processes so that we can become much more agile. So, when we look at this in the broader context of the strategy, beyond the numbers you can clearly see that on all three dimensions of the strategy things have been taking hold. One program that I am particularly proud of, one that next month will have been running for two years, is Zero Distance. It has become deeply ingrained in our company's culture. More than 95% of the projects in the company have Zero Distance plans. The idea of Zero Distance, for those of you who do not know about it, is that it inspires our project teams to bring innovation into every single project, no matter what. The number is actually 100%, and the projects that do not have a Zero Distance plan are the ones that just started recently. So, if you look at the Zero Distance plans of projects that started more than six weeks ago, it is 100%. Every project is basically covered by a Zero Distance plan, and it has become a part of the culture. For the last nine months or so, we have been focusing on elevating the Zero Distance innovation idea, where project teams think of bigger innovations. 
The CWRF is a tribute to Martin Luther King's quote that, "If you can't fly, then run; if you can't run, then walk; if you can't walk, then crawl; but whatever you do, you have to keep moving forward," and so we have this framework. Most of the Zero Distance innovations, even though we are close to 100%, are quite incremental and quite small, and so our endeavor now is to elevate these towards more and more transformative innovation for our clients, coming from the grassroots. Zero Bench, the other initiative that we launched in July of 2015, has also reached close to 100%. So basically, everyone on the bench has done a Zero Bench project. We now have more than 34,000 jobs on our internal marketplace. 470 jobs show up every week, and it has touched basically everybody on the bench. As I mentioned earlier, one of the direct impacts of Zero Bench is giving youngsters an opportunity to get some experience by working on a Zero Bench project. That experience then makes it easier for them to get into production. So, Zero Bench has had a direct impact on utilization, especially fresher utilization. Utilization is consistently above 80% now, and in particular fresher utilization and offshore utilization have gone up directly as a result of Zero Bench. The renewal of our existing core business, in particular the maintenance and run parts of the business, but also to some degree the build parts, is impacted by automation. I mentioned in the January earnings that over the course of Q3, effort worth more than 2,650 FTEs was eliminated with automation. Over the last 12 months that number has crossed 8,000. The Mana platform plays a very important role in helping us achieve that automation. Not all of that is due to Mana, but Mana for IT has a huge impact on bringing automation-led productivity improvements to our work. 
And within our core business, we have launched many new services, like Mainframe Modernization, the API Economy, BI Renewal, and the work with Salesforce.com, ServiceNow, etc. Roughly 10 or so of these have launched in the last couple of years, and they have all been showing growth that is faster than the growth of the company. As a result of all of this, client satisfaction, in a survey conducted over the last 12 years, has reached its highest ever score this time. In particular, as was said in the introduction, CXO satisfaction has jumped by 22 points. So, I am quite proud of the work that our team has done in renewing our core business. When you look at new, getting into new dimensions of business is something that should come easily to us. When you create something new, you do that, but culturally and organizationally new is much more difficult to pull off, because it is not enough to create a separate appendage that sits somewhere else and does something totally independent. You have to find a way to make these harmonious. The idea of 'Renew and New' comes from Arthur Koestler's work on creation, this idea of two habitually distinct but self-consistent frames of reference, where you are doing something unprecedented in the new dimension while you are continuously improving in the existing dimension. We have seen continued strong momentum in the three acquisitions that we have made, Skava, Panaya and Noah, as well as in the new services that we have launched organically. One of the things that we are very proud of with Skava is that, beyond retail, Skava has now entered into other industries, in particular financial services, telecommunications, and utilities. We have opened new frontiers with Skava and our home-grown Mana platform. Mana helps us renew our services, in particular in the maintenance and run areas. The same Mana platform also has huge applications in helping our clients build breakthrough business applications of artificial intelligence. 
I think having a common platform that does both is extremely important to our strategy. A lot of people in the industry build captive automation capabilities in service of their own business. If we did that, the platform would not be competitive. You have to ensure that the platform powering the productivity improvement of the existing business is a world-class platform. Therefore, it is important to have this platform subjected to the outside world, to the light of the outside. Building breakthrough business applications on Mana is a very critical part of the strategy. I am really excited by what our teams and our clients are doing, all the way from CPG companies using it for much faster revenue reconciliation and financial consolidation, to companies in the pharmaceutical industry working on much better forecasting, to contract management and compliance with contracts, to dynamic fraud analysis for banks, and applications like that. On the new services dimension, our digital services as well as our strategic design consulting services have been going forward quite well. One key part of our strategy is the cultural dimension, and there are two main parts to that. One is the agility of the company as a whole, and the other is education. Infosys has always had a very strong emphasis on education. Mr. Murthy always talked about this: the idea of learnability, the ability to learn. Especially in the times ahead, learnability is going to be a very critical thing for our future. So, when you look at all the work that we do in the culture dimension, it is dominated by this. ILP, the Infosys Learning Platform, is our own platform to teach our freshers as well as our regular employees, in a very immersive way, what is going on in the world around them. We drop them into boot camps, into projects and into subsets of projects, so they are learning in a much more accelerated way. 
We have done a lot of innovation in learning itself, in how people learn and so forth, and brought that into ILP. One of the most interesting programs that we have done is for our top 200 executives: a global leadership program together with Stanford, of which we have now run three cohorts. Some of the Infoscions and senior executives who have gone through it have said that this is the best thing they have done in their entire Infosys careers. It is a two-and-a-half-week program, distributed over a year, done together with the Stanford Business School and the Stanford School of Engineering and its Design School. It has had a huge impact on changing the perspective of our management team. We have also been investing in onsite learning, and I think that as we focus more and more onsite, creating an onsite environment for learning is going to be critical. Design thinking training has become an integral part of being at Infosys. We now have more than 130,000 Infoscions who have been trained in design thinking. By the end of this year I expect that we will be able to train everyone in the company in design thinking, and similarly, training in artificial intelligence, agile methodologies, and all manner of new technologies is something that we continue to invest in. In a nutshell: in two and a half years we have gone from $8.2 billion or so in revenue to just crossing $10 billion on an LTM basis, with continued protection of margins, ensuring that we have strong margins. That is the philosophy of consistent profitable growth. Improvement in revenue per employee despite the pressure of commoditization, improvement in attrition, a growing sense of inspiration among employees, growth into completely new dimensions, new scale in those new dimensions, and the highest client satisfaction ever. That is basically, in a nutshell, where we are from a performance point of view. 
As I look at the journey ahead, as happy as I am looking back at the last two and a half years, the journey ahead is an even more challenging and even more interesting one. Here is a quote from my friend Bill Ruh of GE. He will be here the day after tomorrow, talking at the NASSCOM event. We have been doing some amazing work together on bringing design thinking, new kinds of solutions, and co-innovation to their platform. Here is a quote from the Head of HR and the Head of Technology for Visa on how we have helped establish a dramatically new culture while also doing our traditional services. Visa has been one of our fastest growing accounts over the last two years, and it is something that we are particularly proud of. So, as I said, as good as it is to look backwards and feel good about these dimensions of improvement that we have managed to achieve, the world around us is going through a very rapid transformation. I want to switch gears a little bit and share with you some thoughts on what I see happening in the world around us. It is difficult to capture what is going on in a nutshell, but it basically has three dimensions. First, a very deep-rooted sense of end-user centricity. Whether you look at retail or banking or insurance or communications and telco and so forth, in any industry there is a very deep-rooted sense in which things are becoming more connected. A continuous connection to the end user, and serving end-user needs, has never been more important. These experiences are now increasingly becoming digital, but they are also being powered by AI technology. Underneath this enablement of the new experience is the intelligent infrastructure. It is governed by these exponential laws. I am calling them Moore's Laws; I know it is Moore's Law, but I am calling it Moore's Laws because there are many similar laws, not only in the performance improvement or cost-performance improvement of processors but in many other dimensions of technology. 
We see these exponential price-performance improvements. As a result, computing is becoming far more pervasive, and that is enabling extreme efficiency and the disintermediation of existing industries. Looking at this in a little more detail, here are three projects done by our strategic design consulting team. I mentioned that in the new dimension of our strategy, one service we offer is co-innovating with our clients on the things that matter most to their future. On the right-hand side is a 3D-printed, fully functioning, low-fidelity replica of a GE aircraft engine. The entire engine is 3D printed, and with a Microsoft HoloLens you can examine its operational behavior in depth. You can go back and change its design parameters, tune them, and do predictive maintenance on it, all in real time. This has the potential to dramatically accelerate the lifecycle of these engines, to dramatically redesign the nature of maintenance and repair on them, and to make the entire machine far more connected and intelligent as it goes through its lifecycle. In the middle, you see a project we have done with a very large agriculture company: a digital farm that our team actually built. It is not only a digital farm, it is a highly affordable one that can be put together with extremely cheap components, and it makes the plant itself connected. The supply of nutrients, water, and light to the plants can be controlled thousands of times more precisely than in traditional agriculture. Instead of supplying water once or twice a day, you can supply it two thousand times a day, by the milliliter, individually to each plant.
The CEO of this company told me, "Vishal, if you are able to do this, if you are able to connect into the plant itself, we will be able to improve agricultural yield by 30%." What we found in our experiment was that some of the plants actually grew 10x faster than under normal conditions, not just 30%. The entire Green Revolution that happened in our lifetimes, in my lifetime here in India, was a 22% improvement in agricultural yield. So you can imagine what kind of revolution this can create. On the left-hand side is a store we have built for one of our luxury retail clients. It looks like a store from the 1850s, but it is completely digital. Every aspect of the store is digital; the whole room is like one large programmable computer. The mirrors, the coat hangers, the closets, everything you see is digital. A mirror turns on when you put on a coat; it knows which coat it is, and you can see yourself in different settings, and so on. So all industries are going through this deep transformation, driven by a very deep-rooted understanding of users and enabled by pervasive connectivity and pervasive computing. This is powered by the rapid advance of Moore's Laws. Moore's Law turns 52 this year, two years older than me. It is one of the extraordinary feats of human engineering achievement. Many people say Moore's Law is reaching its end, and it is true that traditional Moore's Law is reaching its end. On the upper right, you see a 7-nanometer wafer that TSMC has started to produce, though not yet at production scale; in early 2018 they will go into production at 7 nanometers. When I spent a summer working in the AI lab at Intel in 1990, I remember the one-micron process was going live; that was 1,000 nanometers. When I was building HANA at SAP, we were on Intel's 22-nanometer process. Now it is down to 7 nanometers.
This is an extraordinary thing. Of course, the reason people say Moore's Law is going to end is that 7 nanometers is already getting close to physical limits. There are roughly two silicon atoms per nanometer, so at 7 nanometers a feature is only about 14 silicon atoms wide. There is not much more you can squeeze out of that. Probably by 2022 or 2023 we will have 5-nanometer and then 4-nanometer processes, but that is about as far as it will go. But then other things are happening. On the left-hand side, in the middle, you see a neuromorphic board that has already been made by Jennifer Hasler's group at Georgia Tech. Neuromorphic boards are a way of bypassing the traditional CMOS scaling path and moving to new kinds of chip design. So in some other new way, Moore's Law is going to continue past 2024. In the bottom middle you see the NVIDIA AI box with 170 teraflops of computing in it: two 20-core Intel processors and a dozen or so Tesla GPU chips. It is already shipping, and a lot of deep learning algorithms run on this kind of hardware. And on the bottom right you see the incredible growth of our partner AWS; that is a picture of part of an Amazon datacenter. They have dozens of datacenters, each with between 50,000 and 80,000 servers, and collectively somewhere between 3 million and 5 million servers. That is an astonishingly large amount of computing capacity, and AWS has seen 40-45% growth in the last 12 months. So what we see is that this entire digitization and AI revolution is being powered by computing that is simply exploding. Even though traditional Moore's Law is nearing its end over the next seven or eight years, new architectures and new kinds of technologies will ensure that computing keeps growing significantly beyond that.
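The atoms-per-feature arithmetic above can be sketched as a quick back-of-the-envelope calculation. This is only illustrative and uses the speaker's own rough figure of two silicon atoms per nanometer, not a precise crystallographic spacing:

```python
# Rough sanity check of the "14 silicon atoms at 7 nm" claim.
# Assumption (as stated in the talk): about two silicon atoms per
# nanometer of feature width. This is an approximation for intuition,
# not an exact lattice measurement.

ATOMS_PER_NM = 2  # speaker's approximation

def atoms_across(feature_nm: float) -> float:
    """Approximate number of silicon atoms spanning a feature of the given width."""
    return feature_nm * ATOMS_PER_NM

# Process nodes mentioned in the talk, from Intel's 22 nm down to 4 nm.
for node in (22, 7, 5, 4):
    print(f"{node} nm node: ~{atoms_across(node):.0f} atoms wide")
```

At 7 nm this gives roughly 14 atoms, which is why further shrinking quickly runs into the granularity of matter itself.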
We all heard about AlphaGo from Google last year, which beat the world champion Go player. Recently, news came out that researchers from CMU have built a poker-playing bot. Why was Go significant? Because Deep Blue beat Garry Kasparov in the mid-90s, and that was more or less an exercise in brute-force search: the computer can look ahead through chess moves far further than the human brain can. Go is a different kind of game; its computational complexity is much worse than that of chess. With AlphaGo, they were able to overcome that by bringing learning into it, so a new frontier has been broken. Poker is based on human intuition, on deception, on bluffing, and things like that. The CMU bot has beaten four of the top poker players continuously and has earned something like $1.7 million playing poker online. So this is quite an interesting development. On the left-hand side, you see the autonomous car from Toyota, which learns the driver's behavior as you drive and adjusts itself accordingly; that is a great advance in autonomous driving. Udacity has a car that is fully programmable by the students who take its autonomous driving class. On the right-hand side is OpenAI, a consortium for artificial intelligence research in the public good. I am very proud that we at Infosys are one of the sponsors of OpenAI. Universe is a package that the OpenAI team released recently. OpenAI now has 45 world-class researchers doing some amazing work in artificial intelligence. As a result of all of this, it is becoming much easier for a newcomer to come in and disrupt an industry. Big parts of the supply chain and the value chain can be disrupted and digitized. The world of atoms that we have come from has huge numbers of intermediaries.
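The claim that Go's computational complexity is far worse than chess's can be put in rough numbers. The sketch below uses commonly cited approximate branching factors and game lengths; these are estimates for intuition, not exact figures:

```python
import math

# Commonly cited rough estimates: average legal moves per position
# (branching factor) and typical game length in plies. Both numbers
# are approximations, not exact values.
GAMES = {
    "chess": {"branching": 35, "plies": 80},
    "go":    {"branching": 250, "plies": 150},
}

for name, g in GAMES.items():
    # Naive game-tree size is branching ** plies; report its base-10 exponent
    # via logarithms to avoid computing an astronomically large integer.
    exponent = g["plies"] * math.log10(g["branching"])
    print(f"{name}: ~10^{exponent:.0f} positions in the full game tree")
```

This yields roughly 10^124 positions for chess versus 10^360 for Go, which is why brute-force search of the kind Deep Blue used could never work for Go, and why AlphaGo needed learned evaluation instead.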
As digitization and computing replace these atoms with bits, the value chain can become much more connected, much more zero-distance, and as a result much more efficient, both in pricing and in production. So that is what is going on. When we look at the journey ahead for us, a lot more needs to be done. I studied artificial intelligence as a graduate student. There are two pieces of research that our team picked out. One is from McKinsey, done a few months ago, going through various kinds of jobs and the automation potential in each. The other, done recently by HfS Research, looks at the impact of automation on the IT-BPO industry, our industry; according to that study, India faces the biggest job impact among all of these countries. There are dozens of studies like this: a couple of weeks ago the McKinsey Global Institute published another one, and Erik Brynjolfsson and Andy McAfee have written on this as well. There are dozens and dozens of reports on automation. But really, all you have to do is walk onto any floor of any company in the IT-BPO industry, and it becomes starkly clear that a huge number of these jobs are going to go away. It may not be in two years; it might be four, five, seven, or ten, but there is no doubt that these jobs are going to be replaced by automation. So what do we do? What is the nature of the journey we are on? My sense is that there is only one way forward. Professor Mashelkar used to have this beautiful line, "doing more with less for more," and that is basically the nature of the endeavor. If you plot the complexity of human activity against time, the natural course of events is that we have to keep moving upward, and that upward journey is not just continuing, it is accelerating.
There is a very straightforward duality that we have to follow. On one hand, we have to eliminate work through automation and improve our productivity; on the other, we have to deploy that improved productivity towards innovation. This is basically the formula. The reason for automation is so that we can do the work that has already been commoditized more efficiently and more productively, and then use that improvement in productivity to become more innovative, to do the new kinds of things for which there is still tremendous opportunity and tremendous value. A self-driving car engineer today is worth millions of dollars. What if we were able to produce a hundred thousand autonomous-driving engineers? Continuously improving productivity while continuously shifting from things that are commoditizing towards things that are innovative is the basic point, and it has to be fueled by education. This idea that we study for the first 21 years of our life and then never study again, we have to abandon, because technology will continually change, so we have to continuously re-skill ourselves. Thankfully, we at Infosys have a tremendous advantage here, because we have always had a culture of learnability, as I said. So fueling this automation-and-innovation journey with education is the key. The other basis is software, the intellectual property. If all we did was move upward on the basis of education, that alone would also become commoditized over time. Therefore, we have to capture this improving productivity using software, our own software, software that is monetized. If instead of 10 people doing something you now get the same thing done with three, and you do not have your own software doing the work of the remaining seven, then you are losing value, and whoever made that software is capturing it. Therefore, that software has to be monetizable.
But for that software to be monetizable, it has to be world class. It has to withstand the test of the outside world; it cannot just be a captive thing you build only for yourself. So having software, together with education, powering this transformation is critical for our future. This is why the same Mana platform applies both to the renewal of our services and to breakthrough new business solutions, and why education, in our traditional services as well as in the breakthrough new areas, is critical. This, in essence, is the nature of our journey into the future. John McCarthy, the father of artificial intelligence, once said in a lecture that articulating a problem is half the solution. That lecture changed the course of my life. Within our lifetimes, we are going to see AI technology get to the point where a well-articulated, well-specified problem will be solved automatically. The human frontier at that point is going to be problem finding: creativity, the ability to look at a situation and see what is not there, what is missing, what can be innovated. That is our future. One way or another, it is going to happen. When I look at the 3.7 million people in our industry, the only future I see is one fueled by automation and by getting ahead of this automation curve. I see a future where automation enables us to be more innovative, to exercise our creativity and our ability to find problems, not only to solve them. If we do that, we will be okay; if we do not, we will get disrupted. That is the journey I see in front of us. So, I do not know how long I went on, Uday. We laid out a clear strategy, and we have seen signs of early success, but the world is going through very rapid change driven by AI and technology. Much more is still to be done to lead in the next generation of IT, and with focused execution we will do it. Thank you.
Thank you, Mr. Sikka. Okay, we have the first question already.

Very nice presentation. My first question is: how much of the value in this transformation will be captured by software companies versus services companies? I understand that Infosys is trying to adopt a software-plus-services model, but when you look at the overall value chain, it is the software companies that are gaining a bigger and bigger share of the overall value in this transformation.

Our core business is services; we are a services company. But just as there is software-defined networking, the software-defined datacenter, and software-defined retail, there are software-defined services. We are talking about amplifying our services capability through the use of software, through automation, and by moving up the value chain. So it is both: the software itself will carry value, but the real power will come from services. And you can see that in the numbers already, in the productivity improvement in the first nine months of this financial year compared to the first nine months of the last. In the first nine months of last financial year we hired 17,500 people; in the first nine months of this one we hired something like 5,500, and yet we improved utilization, revenue, revenue per employee, and so on. So: more with less for more, powered by software and education.

The second question I have is something on top of everyone's mind: how is your relationship with the founders, and what does it mean for Infosys?

My relationship with the founders is wonderful. I meet Mr. Murthy quite frequently. I do not meet the other founders as often; I ran into Kris the other day on a Lufthansa flight, and I have not seen Nandan for more than a year. But it is an amazing relationship, and I have a heartfelt, warm relationship with Mr. Murthy.
I probably meet him four, five, six times a year, something like that. He is an incredible man; we usually talk about quantum physics, technology, and things like that. He was telling me the other day about the Paris metro, how he worked on it in the 1970s before he started Infosys, and how it had this whole idea of automation, of autonomous operation. So all this drama that has been going on in the media is very distracting; it takes our attention. But underneath it there is a very strong fabric that this company is built on, and it is a real privilege for me to be its leader.

Hi Vishal, great presentation. I was curious if you could talk about your personal experiences over the last two and a half years; you have been turning around a really large ship. What have been your deepest personal experiences? What has been most difficult to accomplish, and what has been easy? And in light of that, if I were to ask the same question of the leaders of smaller, mid-tier IT services companies, what do you think they would say?

The second question I cannot answer. Compared to two and a half years ago, it is disappointing to me to see the rate of growth and embrace of automation in the industry; I think more should have been done by now, but that is my opinion. When it comes to Infosys, it has actually been quite counterintuitive. I would not have imagined that Zero Distance would get picked up the way it did. One month after I started, in September or October of 2014, the customer satisfaction survey came out, and it was incredibly depressing. I got the details of the clients' feedback, a pile of paper that thick, and over a period of a month I read every single one. Consistently it was the same story: we used to get high marks on quality, on delivery excellence, on being responsive and professional.
And we would get the lowest scores on strategic relevance, on innovation, on being proactive, and it was a shock to me that that was the case. A lot of the ideas for Zero Distance formed in those four or five months, and by March of 2015 I launched it. Within nine months we had managed to create a very grassroots-oriented culture around it. Recently Ravi, our Head of Delivery, was talking to some youngsters, and they told him it is not a new thing for them anymore; they simply assume it is part of their job to come up with something innovative in whatever project they are doing. That, to me, has been the biggest positive. Bringing in design thinking at massive scale has been surprisingly easy because of the scale of the educational infrastructure that was set up for employees, which comes from Mr. Murthy. The new, though, is always difficult to embrace: letting go of your existing way of doing things, and, if a piece of software automates the work you do, embracing that software and letting go of some of the manual work. That is very difficult for people to do, and so that has been hard. Some things I thought would be much easier turned out not to be, and some I thought would be difficult turned out to be much easier than I expected.

In your Vision 2020, where are we in that journey? Would inorganic growth be a big component of it? How will products contribute? And given the recent noise in the media, how confident are you of seeing the journey through?

How confident am I of what? See, that $20 billion, 30%, $80,000 revenue per employee has always been an aspiration for us. What good is an aspiration if it is not aspirational? So we are marching down that path.
When you launch something like training people on design thinking, you cannot say we will get 100,000 people trained in the next year or two; you have to set up a grassroots movement, let it go, and see what happens. When you do it like that, it usually surprises you. These numbers are a consequence of the work that we do; you do not figure out some way to get to a number. You do the right thing, and if you do the right thing, these numbers are the footprint you leave behind, the consequence of the work. So that is how we look at it. Where are we on this journey? We are still early. We still have a lot of things to deal with: we have to get the consulting endeavor right, and we have to get BPO transformed. Within the Infosys transformation as a whole, the BPO transformation is even more difficult because of where it sits in terms of ranks and skill levels and so on. Many of the software assets, like Finacle and so forth, still have to be transformed; we are in the early stages of that. But generally, in the renewal of the core services and the adoption of the new services, I feel very good about where we are. Now, as we look ahead, a new financial year is starting, and we are going through the exercise of thinking about what the journey ahead looks like: taking stock of the last two and a half years and figuring out the path ahead in terms of the software, the services, and the services amplified by software.

How big would the inorganic component be?

I have to stop you at one question, because we are running out of time. I am going to come to this side of the hall and then back to you, sir.

Vishal, I understand that you are going through two huge transitions, or transformations as we might call them.
One is on the technology side, where, as you articulated, if you walk the floor, half the jobs will not be there in five years. On the other side, you are going through a cultural transition: a founder-run company that has moved into your hands, with the obvious changes that play out. I am sure this is creating a good deal of insecurity across the organization, through the different ranks. So, two questions. First, how and what do you communicate through the ranks to ensure that people stay calm and focused on the job? And second, what are the two or three key pieces of your strategy? As you said, two of your key foundation stones are moving and the ship is probably shaking a bit; how do you make sure you anchor it and navigate through this?

I think continuous and intense communication across the different parts of the organization is very important; that cannot be overstated. You need to communicate, communicate, communicate. In our case it is not easy, because the onsite employees are difficult to reach; they are inside their client context and so forth. The other part is that you have to demonstrate by example. Talk is cheap; you have to really demonstrate in your work that you are focused, that you are ignoring the noise and the distractions, and that you are continuing to stay engaged. There are natural latencies that our brains have, and there is not much you can do about that. I think organizational change, organizational transformation, is important. A lot of people ask why we go through this exercise of taking the leadership team through two years of a Stanford program and so on, instead of just changing the leadership team. But if you change the leadership team, what happens to the layers below?
You create a complete disruption, not to mention that you miss the point of the transformation that is supposed to happen. So creating a context, an atmosphere that elevates everybody, is exceedingly important. Alan Kay says that context is worth 80 IQ points. If we build a real context around us that elevates everybody, it can have the biggest transformative effect, but that takes time. The benefit of this approach is that it is very sustainable, long-term, and lasting; the drawback is that it takes time. It is easy to plant a company here or there, buy some companies and put them inside, hire some leaders and put them inside, but those are like appendages growing out of you. That is not sustained, organic transformation in the true sense.

We have time for exactly one more brief question and one more brief answer, so please keep your question as brief as you can.

My question is on the buyback. Some of the founders have said that they have written letters to the Board on this issue and have not heard anything. Just your views on how you plan to utilize the cash?

The official answer is that the Board considers capital allocation policies from time to time. In the last four years we have reviewed it twice and improved the dividend payout, once from 30% to 40% and then again from 40% to 50%. When we have something to report, we will report it. Jayesh, did I miss something? Did I do a good job? But that was the official answer. The unofficial answer is that you look at the business circumstances over the next four or five years, you look at what you need the capital for, and based on that you decide. We will do that as necessary. In our case, it is basically three things: strategic growth initiatives; capital for buildings and infrastructure to house our employees; and investments, the acquisitions.
I am not interested in buying yesterday's technology to make something or other look good. We are interested in tomorrow's technology, and tomorrow's technology is usually expensive if it is really good, so you have to be very, very selective. Based on how that mix changes over the next five years, you take a decision on how to utilize the cash. We have done this with 50% so far, and as we go through this exercise we will take another look at it.

Nobody asked about visas or Donald Trump. Would you like to answer, suo motu, the question about visas and Donald Trump?

No, there is nothing that has happened so far.

Okay. I am afraid that concludes our session. Thank you, Dr. Sikka. Thank you for being a very responsive audience.