News Article | May 2, 2017
Looking for the best Xbox One deals? You're in the right place for the cheapest Xbox One prices and bundles from around the web, as we're constantly on the lookout for the best offers. The Xbox One S has generally replaced the original model now, but we still see some cheap discounts, so keep an eye on both the comparison charts and bundle lists below. Don't forget, bundles are usually far better value than buying a console on its own. Looking to buy in the UK? You'll want to take a look at our UK page then.

Before you look at the older Xbox One bundles, you may want to consider the new Xbox One S. The slimmed-down design looks much better than the original chunky box, and the power brick has been absorbed into the console. The main draw, though, is 4K support, meaning you'll be able to watch 4K Blu-ray and Netflix content on your new 4K TV. The new version's 2TB, 1TB and 500GB models launched recently and have been selling very well.

These prices are for the older Xbox One model. The console is considerably larger and has an external power brick. It won't play Netflix or Blu-rays in 4K, but it is just as effective a games console as the new Xbox One S. Nowadays, though, we're finding the Xbox One S deals are often even cheaper, so we'd only go for the older model if you spot a seriously cheap deal.

500GB Xbox One S | Battlefield 1 | $269 @ Amazon
This is the cheapest Xbox One S Battlefield 1 bundle we've seen in a while and will probably be back up to around $300 soon. You're only getting a digital copy of the game, but it's still way cheaper than buying a physical copy separately.

500GB Xbox One S Minecraft Favourites bundle | $245 @ Amazon
This is the cheapest way to get hold of a brand new Xbox One S console this week, and you're getting a few Minecraft-related goodies with it too, including both the Xbox One version and early access to the Windows 10 version. There's a batch of content packs to give you a head start too.
View this deal: Xbox One S with Minecraft Favourites $245 @ Amazon

Blue 500GB Xbox One S | Gears of War 4 | $269 @ Amazon
The 500GB hard drive isn't ideal for the keen gamer, but it's certainly manageable. We'd try to make it work in order to get one of these gorgeous Deep Blue Xbox One consoles under the TV. Gears of War 4 is a fine addition to the series from a fresh studio. View this deal: Blue 500GB Xbox One S with Gears 4 $269 @ Amazon

1TB Xbox One S | Halo Wars 2 and Season Pass | $314.49 @ Amazon
Given the high prices of the 1TB Xbox One S right now, this could be the bundle to go for if you were planning on picking up Halo Wars 2 and the season pass. It's $30 cheaper this week too. View this deal: 1TB Xbox One S with Halo Wars 2 and Season Pass $314.49 @ Amazon

1TB Xbox One S | Gears of War 4 | $285 @ Amazon
Gears of War 4 should be high on the must-buy list of any new Xbox One S owner, so this is an ideal bundle, especially if some of the other bundled games like FIFA 17 or Battlefield 1 don't take your fancy. View this deal: 1TB Xbox One S with Gears of War 4 $285 @ Amazon

Xbox Live Gold: Need to top up your Xbox Live Gold membership? Don't pay the default automatic £40 renewal price. Check out our range of Xbox Live Gold deals below to save some money. Here's our roundup of the best Xbox One controller deals around. We've listed the best prices for all the cheapest official pads and the Elite controller. Still considering a PS4 instead? Then you'll want to take a look at our cheap PS4 deals page.

Try our new Google Chrome add-on and never pay more than the cheapest prices again! Pricehawk is a Chrome extension that will automatically find you the cheapest deals for the tech and games items you're shopping for online. It'll also let you know if there are voucher codes you can use to save even more money!
News Article | May 26, 2017
Just over 20 years ago, a computer beat a human world champion in a chess match for the first time, when IBM's Deep Blue supercomputer beat Garry Kasparov in a narrow victory of 3½ games to 2½. Just under a decade later, machines were deemed to have conquered the game of chess when Deep Fritz, a piece of software running on a desktop PC, beat 2006 world champion Vladimir Kramnik. Now the ability of computers to take on humanity has gone a step further, with Google's AlphaGo program mastering the far more complex board game Go and beating world number one Ke Jie twice in a best-of-three series.

This significant milestone shows just how far computers have come in the past 20 years. Deep Blue's victory at chess showed machines could rapidly process huge amounts of information, paving the way for the big data revolution we see today. But AlphaGo's triumph represents the development of real artificial intelligence: a machine that can recognise patterns and learn the best way to respond to them. What's more, it may signify a new evolution in AI, where computers not only learn how to beat us but can start to teach us as well.

Go is considered one of the world's most complex board games. Like chess, it's a game of strategy, but it also has several key differences that make it much harder for a computer to play. The rules are relatively simple, but the strategies involved are highly complex, and it is much harder to calculate the end position and winner in Go. The game has a larger board (a 19x19 grid rather than an 8x8 one) and an unlimited number of pieces, so there are many more ways the board can be arranged. Whereas chess pieces start in set positions and can each make a limited number of moves each turn, Go starts with a blank board and players can place a piece in any of the 361 free spaces. Each game takes on average twice as many turns as chess, and there are six times as many legal move options per turn.
Each of these features means you can't build a Go program using the same techniques as for chess machines. Those tend to use a "brute force" approach, analysing the potential of large numbers of possible moves to select the best one. Feng-Hsiung Hsu, one of the key contributors to the Deep Blue team, argued in 2007 that applying this strategy to Go would require a million-fold increase in processing speed over Deep Blue, so that a computer could analyse 100 trillion positions per second.

The strategy used by AlphaGo's creators at Google subsidiary DeepMind was instead to create an artificial intelligence program that could learn to identify favourable moves and dismiss useless ones, meaning it wouldn't have to analyse all the possible moves that could be made at each turn. In preparation for its first match against professional Go player Lee Sedol, AlphaGo analysed around 300m moves made by professional Go players. It then used deep learning and reinforcement learning techniques to develop its own ability to identify favourable moves. But this wasn't enough to enable AlphaGo to defeat highly ranked human players, so the software was run on custom microchips specifically designed for machine learning, known as tensor processing units (TPUs), to support very large numbers of computations.

This seems similar to the approach used by the designers of Deep Blue, who also developed custom chips for high-volume computation. The stark difference, however, is that Deep Blue's chips could only be used for playing chess. AlphaGo's chips run Google's general-purpose AI framework, TensorFlow, and are also used to power other Google services, such as Street View and optimisation tasks in the firm's data centres.

The other thing that has changed since Deep Blue's victory is the respect that humans have for their computer opponents. When playing chess computers, it was common for human players to adopt so-called anti-computer tactics.
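To get a feel for why brute force breaks down, a rough back-of-the-envelope comparison of the two game trees helps. The branching factors and game lengths below are commonly cited ballpark estimates, not figures taken from this article:

```python
# Rough game-tree size: (average legal moves per turn) ** (plies per game).
# These are commonly cited ballpark estimates, used for illustration only.
chess_tree = 35 ** 80    # chess: ~35 legal moves per turn, ~80 plies
go_tree = 250 ** 150     # Go: ~250 legal moves per turn, ~150 plies

def order_of_magnitude(n):
    """Return x such that n is roughly 10**x."""
    return len(str(n)) - 1

print(f"chess ~10^{order_of_magnitude(chess_tree)}")  # ~10^123
print(f"go    ~10^{order_of_magnitude(go_tree)}")     # ~10^359
```

Even at the 100 trillion positions per second Hsu estimated, exhaustively searching a tree over 200 orders of magnitude larger than chess's is hopeless, which is why AlphaGo learns which branches to ignore rather than enumerating them.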
This involves making conservative moves to prevent the computer from evaluating positions effectively. In his first match against AlphaGo, however, Ke Jie adopted tactics that had previously been used by his opponent to beat it at its own game. Although this attempt failed, it demonstrates a change in approach for leading human players taking on computers. Instead of trying to stifle the machine, they have begun trying to learn from how it played in the past. In fact, the machine has already influenced the professional game of Go, with grandmasters adopting AlphaGo's strategies during their tournament matches. This machine has taught humanity something new about a game it has been playing for over 2,500 years, liberating us from the experience of millennia.

What, then, might the future hold for the AI behind AlphaGo? The success of Deep Blue triggered rapid developments that have directly influenced the techniques applied in big data processing. The benefit of the technology used to implement AlphaGo is that it can already be applied to other problems that require pattern identification. For example, the same techniques have been applied to the detection of cancer and to creating robots that can learn to do things like open doors, among many other applications.

The underlying framework used in AlphaGo, Google's TensorFlow, has been made freely available for developers and researchers to build new machine-learning programs using standard computer hardware. More excitingly, combining it with the many computers available through the internet cloud creates the promise of machine-learning supercomputing. When this technology matures, the potential will exist for the creation of self-taught machines in wide-ranging roles that can support complex decision-making tasks. Of course, what may be even more profound are the social impacts of having machines that not only teach themselves but teach us in the process.
This article was originally published on The Conversation. Read the original article. Mark Robert Anderson does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.
News Article | May 26, 2017
Google's AlphaGo has taken on top Go champions before, but on Friday the narrow AI program took on five Go champions at once and came out victorious. The match was one of several that took place at the Future of Go Summit in Wuzhen, China, highlighting AlphaGo's continued dominance of the game. The match in question was the fourth main match at the summit, following two one-on-one matches and a pairs match in which human players played against each other with AlphaGo as their teammate. During the pairs matches, Google chairman Eric Schmidt tweeted: "This speaks volumes about where AI is headed - human players are teaming up with AlphaGo to have even more fun with the game!"

Next up was the five-on-one match. Chen Yaoye, Zhou Ruiyang, Mi Yuting, Shi Yue, and Tang Weixing joined forces to play against AlphaGo, resigning after 255 moves.

AlphaGo made major headlines when it defeated world Go champion Lee Sedol in 2016. The system continued to make its way through the Go circuit, winning important matches in Europe and 60 games in a row online. At the Future of Go Summit, AlphaGo took on current world champion Ke Jie in three separate matches, winning the first two, with the final match to be held on Saturday, May 27, 2017. The summit also hosted a forum on the future of AI.

The AI behind AlphaGo originated at DeepMind, a startup that Google acquired back in 2014 for some $400 million. The system uses convolutional neural networks to learn the game by watching previous matches. The match parallels what has been seen in AI circles in the past, with IBM's Deep Blue beating Garry Kasparov at chess and its cognitive computing system Watson triumphing in Jeopardy. However, whereas those systems were largely programmed to win, AlphaGo has taught itself the game of Go by training on a large number of previous matches.
News Article | May 23, 2017
A Google algorithm has narrowly beaten the world's best player in the ancient Chinese board game of Go, reaffirming the arrival of what its developers say is a groundbreaking new form of artificial intelligence. AlphaGo took the first of three scheduled games against the brash 19-year-old Chinese world number one Ke Jie, who after the match anointed the program as the new "Go god".

AlphaGo stunned the Go community and futurists a year ago when it trounced South Korean grandmaster Lee Sedol four games to one. That marked the first time a computer program had beaten a top player in a full contest and was hailed as a landmark for artificial intelligence. This week's battle, in the eastern Chinese city of Wuzhen between Ke and an updated version of AlphaGo, was preceded by intense speculation about whether the world's top player could be beaten by a computer.

After his defeat, a visibly flummoxed Ke – who last year declared he would never lose to an AI opponent – said AlphaGo had become too strong for humans, despite the razor-thin half-point winning margin. "I feel like his game is more and more like the 'Go god'. Really, it is brilliant," he said. Ke vowed never again to subject himself to the "horrible experience".

AlphaGo's feats have fuelled visions of a new era of AI that can not only drive cars and operate smart homes, but potentially help mankind tackle some of the most complex scientific, technical and medical problems. Computer programs have previously beaten humans in cerebral contests, starting with the victory by IBM's Deep Blue over chess grandmaster Garry Kasparov in 1997. But AlphaGo's success is considered the most significant yet for AI due to the complexity of Go, which has an incomputable number of move options and puts a premium on human-like "intuition", instinct and the ability to learn. Go involves two players alternately laying black and white stones on a grid, seeking to seal off the most territory.
Before AlphaGo, mechanical mastery of the game had been perceived to be years away. Its victories have been analysed and praised by students of the game as innovative and even “beautiful”, opening up new ways of approaching strategy. AlphaGo uses two sets of “deep neural networks” containing millions of connections similar to neurons in the brain. It is partly self-taught, having played millions of games against itself following initial programming. After Lee lost to AlphaGo last year, Ke boldly declared: “Bring it on!” But the writing was on the wall in January when Ke and other top Chinese players were flattened by a mysterious competitor in online contests. That opponent was revealed afterwards to be the latest version of AlphaGo, which was being given an online test run by its developer, London-based AI company DeepMind Technologies, which Google acquired in 2014. Ke, who turned professional at 11 and describes himself as pretentious, has vacillated between awe and disdain for AlphaGo. He called it a “cold machine” lacking passion for the game in comments on Monday night on China’s Twitter-like Weibo platform. Ke and AlphaGo will face off again on Thursday and Saturday. For some, rapid advances in AI conjure sci-fi images of a “Terminator” future in which machines “wake up” and enslave humanity. But DeepMind founder Demis Hassabis dismissed such concerns. “This isn’t about man competing with machines, but rather using them as tools to explore and discover new knowledge together,” he said before Tuesday’s match. “Ultimately, it doesn’t matter whether AlphaGo wins or loses … either way, humanity wins.”
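The "deep neural network" idea described above can be sketched in miniature. A policy network scores candidate moves and turns the scores into probabilities, so search effort concentrates on a handful of promising moves rather than all 361 points. The scores below are invented for illustration; a real network would derive them from the board position:

```python
import math

def softmax(scores):
    """Convert raw move scores into a probability distribution."""
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a trained policy network might assign to 5 candidate moves.
scores = [2.0, 0.5, -1.0, 1.5, -3.0]
probs = softmax(scores)

# Search concentrates on the few high-probability moves, instead of
# expanding every legal move as a brute-force engine would.
promising = [i for i, p in enumerate(probs) if p > 0.1]
print(promising)  # → [0, 1, 3]
```

Self-play training then nudges these scores: moves that led to wins get reinforced, moves that led to losses get suppressed, which is the "partly self-taught" aspect the article mentions.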
News Article | May 25, 2017
Google's AlphaGo program defeated 19-year-old prodigy Ke Jie on Tuesday in the first of three games they are due to play this week at a forum on artificial intelligence in this town west of Shanghai. Internet users outside China could watch the games live, but Chinese censors blocked most mainland web users from seeing the Google website carrying the feed. None of China's dozens of video sites carried the live broadcasts, but a recording of Tuesday's game was available Wednesday night on one of the most popular sites, Youku.com.

State media reports on the games have been brief, possibly reflecting Beijing's antipathy toward Google, which closed its China-based search engine in 2010 following a dispute over censorship and computer hacking. The official response to the match, a major event for go and artificial intelligence, reflects the conflict between the ruling Communist Party's technology ambitions and its insistence on controlling what its public can see, hear and read. The possible reason for suppressing coverage while allowing Google to organize the event was unclear. Censorship orders to Chinese media are officially secret, and government officials refuse to confirm whether online material is blocked.

Machines have mastered most other games. They conquered chess in 1997 when IBM Corp.'s Deep Blue system defeated champion Garry Kasparov, but go was considered more challenging. Players take turns putting white or black stones on a rectangular grid with 361 intersections, trying to capture territory and each other's pieces by surrounding them. The near-infinite number of possible positions requires intuition and flexibility that players had thought would take computers at least another decade to master.

This week's games are taking place in a hall where Chinese leaders hold the annual World Internet Conference, an event attended by global internet companies.
Reports by Chinese newspapers and websites on AlphaGo's victory Tuesday were brief, despite Ke's post-game news conference being packed with scores of reporters. None mentioned Google. Government TV in Zhejiang province, where Wuzhen is located, showed a 90-second report about Tuesday's match but few other broadcasters mentioned it, even though go has millions of fans in China and Ke is a celebrity. Google says 60 million people in China watched online as AlphaGo played South Korea's go champion in March 2016. The communist government encourages internet use for business and education but operates elaborate online monitoring and censorship. Censors block access to social media and video-sharing websites such as Facebook and YouTube. Internet companies are required to employ teams of censors to watch social media and remove banned material. China has the world's biggest population of internet users, with some 730 million people online by the end of last year, according to government data. Web surfers can get around online filters using virtual private networks, but Beijing has cracked down on use of those. Beijing's relationship with Google is especially prickly. The company, headquartered in Mountain View, California, opened a China-based search site in 2006 and cooperated with official censorship. That prompted complaints from human rights and other activists and free press groups such as Reporters Without Borders. In 2010, Google announced it no longer wanted to comply after hacking attacks on the company were traced to China. The company shut down its China search engine and visitors were automatically transferred to another Google service in Hong Kong, but that stopped after Chinese authorities objected. Online filters slow access to the Hong Kong site enough to discourage many users.
News Article | May 25, 2017
The artificial intelligence (AI) programme AlphaGo won its second straight match against 19-year-old Chinese world number one Ke Jie, who will try to salvage some pride for humanity in the third and final game on Saturday. This week's series in the eastern Chinese city of Wuzhen has been closely watched by futurists and Go fans curious over whether AlphaGo could beat the world's best, after it made headlines last year by trouncing a South Korean grandmaster.

But Chinese fans struggled to get information on the event after authorities banned live coverage, amid online speculation in China that the ban was linked to Google's tense history with Beijing. Google shut down its www.google.cn website in 2010 in a row over cyberattacks and Chinese censorship, and most of its offerings have remained blocked by the authorities. That made for an awkward situation in the Go series, in which the company's corporate logos have been clearly visible on live stream broadcasts.

This week's match-up received considerable attention in Chinese media in the run-up, but live coverage was abruptly banned shortly before the first match, according to a government directive to media outlets that was widely circulated online. The notice said the contests "may not be broadcast live in any form, without exception, including text commentary, photography, video streams, self-media accounts, etc." A Google spokesman could not immediately be reached for comment.

"I just lost my chance to witness such a historic moment. I feel so terrible," wrote one user on China's Twitter-like Weibo platform. Another user asked: "Has the political environment become so dangerous that covering something that has gained public attention has become impossible?"

AlphaGo was developed by London-based AI company DeepMind Technologies, which Google acquired in 2014.
Its win last year over South Korean grandmaster Lee Se-Dol marked the first time a computer programme had beaten a top player in a full contest and was hailed as a landmark for AI. Ke, who last year said he would never lose to a machine, accepted defeat after game one, calling AlphaGo a "Go god". For some, AI advances conjure sci-fi images of machines enslaving humanity. But for AI proponents, AlphaGo's feats have fuelled visions of AI that can not only perform mundane tasks, but potentially help mankind figure out some of the most complex scientific, technical and medical problems. Computer programmes have previously beaten humans in cerebral contests, starting with the victory by IBM's Deep Blue over chess grandmaster Garry Kasparov in 1997. But AlphaGo's success is considered the most significant yet for AI due to the complexity of Go, which has an incomputable number of move options and puts a premium on human-like "intuition". Go involves two players alternately laying black and white stones on a grid, seeking to seal off the most territory. AlphaGo's "thinking" is powered by millions of connections similar to neurons in the brain. It is partly self-taught, having played millions of games against itself.
News Article | May 23, 2017
During his commencement speech at Dickinson College on May 21, retired four-star Admiral James Stavridis discussed why he's sometimes troubled by the common statement to military members, "thank you for your service," and why often-maligned groups, like the news media, are deserving of the expression of gratitude as well.

"My problem is that by making that catch phrase, 'Thank you for your service,' somehow the province of the military alone, we miss a crucial point," Stavridis, former supreme allied commander of NATO, explained. "Which is simply that there are so many ways to serve this nation, and indeed to serve the world beyond what our military does."

He called out other groups and professions, including journalists, who he said are doing "deeply dangerous work on the front lines of crisis around the world, risking their lives to tell the story." He added that foreign correspondents, particularly, are "brave and steady under fire, like the best of our military, yet they are armed only with a notepad, a recorder and a smart phone." The admiral pointed out how first responders, Peace Corps members, teachers, diplomats, health-care professionals, entrepreneurs, social workers and even politicians serve the country. "Politics has become blood sport in America," Stavridis said. "Any elected representative… is subject to endless scrutiny, bottomless skepticism and often deep personal unpopularity."

Stavridis advised the class of 2017 to make service a priority. "In the gorgeous trajectories of your lives, find time to serve," he said. "Because one day I want to be able to say to you—the class of 2017—'thank you for your service.'"

Stavridis was the longest-serving combatant commander in recent U.S. history. From 2009-2013, he led the NATO alliance in global operations as supreme allied commander with responsibility for Afghanistan, Libya, Syria, the Balkans, cybersecurity and piracy off the coast of Africa.
His memoir of the NATO years, “The Accidental Admiral,” was released in 2014, and his leadership book, “The Leader’s Bookshelf,” appeared in March of 2017. He is married to Laura Hall Stavridis, Dickinson class of 1981, and the father of two daughters. From 2006-2009, Stavridis led the U.S. Southern Command in Miami, with responsibility for all military operations in Latin America. He earlier served as senior military assistant to the secretary of the Navy and the secretary of defense and—immediately after the 9/11 attacks—led “Deep Blue,” the Navy’s premier operational think tank for innovation. Earlier in his military career, Stavridis commanded the top ship in the Atlantic Fleet, winning the Battenberg Cup for operational excellence, as well as a squadron of destroyers and a carrier strike group, all in combat. He is a recipient of the Navy League John Paul Jones Award for Inspirational Leadership and holds more than 50 medals, including 28 from foreign nations. After 37 years of service, Stavridis retired from the Navy in 2013 and became the 12th dean of The Fletcher School of Law and Diplomacy at Tufts University, a position he holds currently. An Annapolis graduate, he holds a Ph.D. in international relations from the Fletcher School, where he won the Gullion Prize as top student. Stavridis also chairs the board of the U.S. Naval Institute, the professional association of the nation’s sea services: Navy, Marine Corps, Coast Guard and Merchant Marine. He is a monthly columnist for TIME magazine and chief international security and diplomacy analyst for NBC news and MSNBC. About Dickinson: Dickinson is a nationally recognized liberal arts college chartered in 1783 in Carlisle, Pa. The highly selective college is home to 2,400 students from across the nation and around the world. Defining characteristics of a Dickinson education include a focus on global education—at home and abroad—and study of the environment and sustainability.
News Article | May 23, 2017
Computer beats Chinese champion in ancient board game of go

(AP) — A computer defeated China's top player of the ancient board game go on Tuesday, earning praise that it might have finally surpassed human abilities in one of the last games machines have yet to dominate. Google's AlphaGo won the first of three planned games this week against Ke Jie, a 19-year-old prodigy, in this town west of Shanghai. The computer will also face other top-ranked Chinese players during the five-day event. AlphaGo beat Ke by a half-point, "the closest margin possible," according to Demis Hassabis, founder of DeepMind, the Google-owned company in London that developed AlphaGo.

AlphaGo has improved markedly since it defeated South Korea's top competitor last year and is a "completely different player," Ke told reporters. "For the first time, AlphaGo was quite human-like," Ke said. "In the past it had some weaknesses. But now I feel its understanding of go and the judgment of the game is beyond our ability."

Go players take turns putting white or black stones on a rectangular grid with 361 intersections, trying to capture territory and each other's pieces by surrounding them. Competitors play until both agree there are no more places to put stones or one quits. The game, which originated in China more than 25 centuries ago, has avoided mastery by computers even as they surpassed humans in most other games. They conquered chess in 1997 when IBM Corp.'s Deep Blue system defeated champion Garry Kasparov. Go, known as weiqi in China and baduk in Korea, is considered more challenging because the near-infinite number of possible positions requires intuition and flexibility. Players had expected it to be at least another decade before computers could beat the best humans due to go's complexity and reliance on intuition, but AlphaGo surprised them in 2015 by beating a European champion. Last year, it defeated South Korea's Lee Sedol.
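The capture rule described above — a stone or connected group is removed when the opponent occupies its last adjacent empty point (its last "liberty") — is simple enough to sketch with a flood fill. The board representation here is an assumption for illustration, not anything AlphaGo uses:

```python
def liberties(board, x, y):
    """Count the empty points adjacent to the group of stones containing (x, y).
    board: dict mapping (x, y) -> 'B' or 'W'; absent keys are empty points.
    A group whose liberty count reaches zero is captured and removed."""
    colour = board[(x, y)]
    group, frontier, libs = {(x, y)}, [(x, y)], set()
    while frontier:
        cx, cy = frontier.pop()
        for nx, ny in ((cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)):
            if not (0 <= nx < 19 and 0 <= ny < 19):
                continue  # off the 19x19 board
            stone = board.get((nx, ny))
            if stone is None:
                libs.add((nx, ny))          # an empty neighbour is a liberty
            elif stone == colour and (nx, ny) not in group:
                group.add((nx, ny))         # same colour: part of the group
                frontier.append((nx, ny))
    return len(libs)

# A lone black stone surrounded on three of its four sides has one liberty left.
board = {(3, 3): 'B', (2, 3): 'W', (4, 3): 'W', (3, 2): 'W'}
print(liberties(board, 3, 3))  # → 1; a white stone at (3, 4) would capture it
```

The rules really are this small; the difficulty the article describes lies entirely in judging which of the 361 points is worth playing.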
AlphaGo was designed to mimic such intuition in tackling complex tasks. Google officials say they want to apply those technologies to areas such as smartphone assistants and solving real-world problems. AlphaGo defeated Lee in four out of five games during a weeklong match in March 2016. Lee lost the first three games, then won the fourth, after which he said he had taken advantage of weaknesses, including AlphaGo's poor response to surprises. Following Lee's surprise victory, "we went back to try and improve the architecture and the system," said Hassabis. "We believe we have fixed that knowledge gap but of course there could be many other new areas that it doesn't know and that we don't know either."

Go is hugely popular in Asia, with tens of millions of players in China, Japan and the Koreas. Google said a broadcast of Lee's 2016 match with AlphaGo was watched by an estimated 280 million people. Players have said AlphaGo enjoys some advantages because it doesn't get tired or emotionally rattled, two critical aspects of the mentally intense game.
News Article | May 11, 2017
On the seventh move of the crucial deciding game, black made what some now consider to have been a critical error. When black mixed up the moves for the Caro-Kann defence, white took advantage and created a new attack by sacrificing a knight. In just 11 more moves, white had built a position so strong that black had no option but to concede defeat.

The loser reacted with a cry of foul play – one of the most strident accusations of cheating ever made in a tournament, which ignited an international conspiracy theory that is still questioned 20 years later. This was no ordinary game of chess. It’s not uncommon for a defeated player to accuse their opponent of cheating – but in this case the loser was the then world chess champion, Garry Kasparov. The victor was even more unusual: the IBM supercomputer, Deep Blue.

In defeating Kasparov on May 11, 1997, Deep Blue made history as the first computer to beat a world champion in a six-game match under standard time controls. Kasparov had won the first game, lost the second and then drawn the following three. When Deep Blue took the match by winning the final game, Kasparov refused to believe it.

In an echo of the chess automaton hoaxes of the 18th and 19th centuries, Kasparov argued that the computer must actually have been controlled by a real grand master. He and his supporters believed that Deep Blue’s playing was too human to be that of a machine. Meanwhile, to many of those in the outside world who were convinced by the computer’s performance, it appeared that artificial intelligence had reached a stage where it could outsmart humanity – at least at a game that had long been considered too complex for a machine. Yet the reality was that Deep Blue’s victory was precisely because of its rigid, unhumanlike commitment to cold, hard logic in the face of Kasparov’s emotional behaviour.
This wasn’t artificial (or real) intelligence that demonstrated our own creative style of thinking and learning, but the application of simple rules on a grand scale. What the match did do, however, was signal the start of a societal shift that is gaining increasing speed and influence today. The kind of vast data processing that Deep Blue relied on is now found in nearly every corner of our lives, from the financial systems that dominate the economy to online dating apps that try to find us the perfect partner. What started as a student project helped usher in the age of big data.

The basis of Kasparov’s claims went all the way back to a move the computer made in the second game of the match, the first in the competition that Deep Blue won. Kasparov had played to encourage his opponent to take a “poisoned” pawn, a sacrificial piece positioned to entice the machine into making a fateful move. This was a tactic that Kasparov had used against human opponents in the past.

What surprised Kasparov was Deep Blue’s subsequent move. Kasparov called it “human-like”. John Nunn, the English chess grandmaster, described it as “stunning” and “exceptional”. The move left Kasparov riled and ultimately threw him off his strategy. He was so perturbed that he eventually walked away, forfeiting the game. Worse still, he never recovered, drawing the next three games and then making the error that led to his demise in the final game.

The move was based on the strategic advantage that a player can gain from creating an open file, a column of squares on the board (as viewed from above) that contains no pieces. This can create an attacking route, typically for rooks or queens, free from pawns blocking the way. During training with the grand master Joel Benjamin, the Deep Blue team had learnt there was sometimes a more strategic option than opening a file and then moving a rook to it. Instead, the tactic involved piling pieces onto the file and then choosing when to open it up.
When the programmers learned this, they rewrote Deep Blue’s code to incorporate the moves. During the game, the computer used the position of having a potential open file to put pressure on Kasparov and force him into defending on every move. That psychological advantage eventually wore Kasparov down.

From the moment that Kasparov lost, speculation and conspiracy theories started. The conspiracists claimed that IBM had used human intervention during the match. IBM denied this, stating that, in keeping with the rules, the only human intervention came between games to rectify bugs that had been identified during play. It also rejected the claim that the programming had been adapted to Kasparov’s style of play; instead, the team had relied on the computer’s ability to search through huge numbers of possible moves.

IBM’s refusal of Kasparov’s request for a rematch and the subsequent dismantling of Deep Blue did nothing to quell suspicions. IBM also delayed the release of the computer’s detailed logs, as Kasparov had requested, until after the decommissioning. But the subsequent detailed analysis of the logs has added new dimensions to the story, including the understanding that Deep Blue made several big mistakes.

There has since been speculation that Deep Blue only triumphed because of a bug in the code during the first game. One of Deep Blue’s designers has said that when a glitch prevented the computer from selecting one of the moves it had analysed, it instead made a random move that Kasparov misinterpreted as a deeper strategy. Kasparov managed to win the game and the bug was fixed for the second round. But the world champion was supposedly so shaken by what he saw as the machine’s superior intelligence that he was unable to recover his composure and played too cautiously from then on. He even missed the chance to come back from the open file tactic when Deep Blue made a “terrible blunder”.
Whichever of these accounts of Kasparov’s reactions to the match are true, they point to the fact that his defeat was at least partly down to the frailties of human nature. He over-thought some of the machine’s moves and became unnecessarily anxious about its abilities, making errors that ultimately led to his defeat. Deep Blue didn’t possess anything like the artificial intelligence techniques that today have helped computers win at far more complex games, such as Go.

But even if Kasparov was more intimidated than he needed to be, there is no denying the stunning achievements of the team that created Deep Blue. Its ability to take on the world’s best human chess player was built on some incredible computing power, which launched the IBM supercomputer programme that has paved the way for some of the leading-edge technology available in the world today. What makes this even more amazing is the fact that the project started not as an exuberant project from one of the largest computer manufacturers but as a student thesis in the 1980s.

When Feng-Hsiung Hsu arrived in the US from Taiwan in 1982, he can’t have imagined that he would become part of an intense rivalry between two teams that spent almost a decade vying to build the world’s best chess computer. Hsu had come to Carnegie Mellon University (CMU) in Pennsylvania to study the design of the integrated circuits that make up microchips, but he also held a longstanding interest in computer chess. He attracted the attention of the developers of Hitech, the computer that in 1988 would become the first to beat a chess grand master, and was asked to assist with hardware design.

But Hsu soon fell out with the Hitech team after discovering what he saw as an architectural flaw in their proposed design. Together with several other PhD students, he began building his own computer, known as ChipTest, drawing on the architecture of Bell Laboratories’ chess machine, Belle.
ChipTest’s custom technology used what’s known as “very large-scale integration” to combine thousands of transistors onto a single chip, allowing the computer to search through 500,000 chess moves each second. Although the Hitech team had a head start, Hsu and his colleagues would soon overtake them with ChipTest’s successor.

Deep Thought – named after the computer in Douglas Adams’ The Hitchhiker’s Guide to the Galaxy built to find the meaning of life – combined two of Hsu’s custom processors and could analyse 720,000 moves a second. This enabled it to win the 1989 World Computer Chess Championship without losing a single game. But Deep Thought hit a road block later that year when it came up against (and lost to) the reigning world chess champion, one Garry Kasparov. To beat the best of humanity, Hsu and his team would need to go much further. Now, however, they had the backing of computing giant IBM.

Chess computers work by attaching a numerical value to the position of each piece on the board using a formula known as an “evaluation function”. These values can then be processed and searched to determine the best move to make. Early chess computers, such as Belle and Hitech, used multiple custom chips to run the evaluation functions and then combine the results together. The problem was that the communication between the chips was slow and used up a lot of processing power.

What Hsu did with ChipTest was to redesign and repackage the processors into a single chip. This removed a number of processing overheads, such as off-chip communication, and made possible huge increases in computational speed. Whereas Deep Thought could process 720,000 moves a second, Deep Blue used large numbers of processors running the same set of calculations simultaneously to analyse 100,000,000 moves a second. Increasing the number of moves the computer could process was important because chess computers have traditionally used what are known as “brute force” techniques.
Human players learn from past experience to instantly rule out certain moves. Chess machines, certainly at that time, did not have that capability and instead had to rely on their ability to look ahead at what could happen for every possible move. They used brute force in analysing very large numbers of moves rather than focusing on certain types of move they already knew were most likely to work. Increasing the number of moves a machine could look at in a second gave it the time to look much further into the future at where different moves would take the game.

By February 1996, the IBM team were ready to take on Kasparov again, this time with Deep Blue. Although it became the first machine to beat a world champion in a game under regular time controls, Deep Blue lost the overall match 4-2. Its 100,000,000 moves a second still weren’t enough to beat the human ability to strategise.

To up the move count, the team began upgrading the machine by exploring how they could optimise large numbers of processors working in parallel – with great success. The final machine was a 30-processor supercomputer that, more importantly, controlled 480 custom integrated circuits designed specifically to play chess. This custom design was what enabled the team to so highly optimise the parallel computing power across the chips. The result was a new version of Deep Blue (sometimes referred to as Deeper Blue) capable of searching around 200,000,000 moves per second. This meant it could explore how each possible strategy would play out up to 40 or more moves into the future.

By the time the rematch took place in New York City in May 1997, public curiosity was huge. Reporters and television cameras swarmed around the board and were rewarded with a story when Kasparov stormed off following his defeat and cried foul at a press conference afterwards. But the publicity around the match also helped establish a greater understanding of how far computers had come.
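The combination described above – an evaluation function plus exhaustive lookahead – can be sketched in a few lines. The snippet below is only an illustration of the general brute-force technique, not Deep Blue's actual code: it pairs a toy evaluation function with a fixed-depth minimax search, and the tiny "number game" it plays (add 1, 2 or 3 to a running total) is invented for the example.

```python
# A minimal sketch of brute-force game search: evaluate every
# position reachable within `depth` moves and back up the scores,
# assuming each side plays the move best for itself.

def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Exhaustively search `depth` plies ahead and return the best score."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)          # leaf: score the position itself
    if maximizing:
        return max(minimax(apply_move(state, m), depth - 1, False,
                           moves, apply_move, evaluate) for m in legal)
    return min(minimax(apply_move(state, m), depth - 1, True,
                       moves, apply_move, evaluate) for m in legal)

# Toy "game" standing in for chess: the state is a number and a
# move adds 1, 2 or 3 to it. The evaluation is just the total, so
# the maximizer wants it large and the minimizer wants it small.
def moves(s):
    return [1, 2, 3] if s < 10 else []

def apply_move(s, m):
    return s + m

def evaluate(s):
    return s

# Pick the opening move by searching two further plies ahead.
best = max(
    (minimax(apply_move(0, m), 2, False, moves, apply_move, evaluate), m)
    for m in moves(0)
)
print(best)  # → (7, 3): play 3 now, expecting a total of 7 after 3 plies
```

A real chess evaluation weighs material, mobility, king safety and so on rather than a single number, but the search skeleton is the same, and the number of positions examined grows exponentially with depth. That is exactly why the raw speed discussed above – eventually 200,000,000 positions per second – mattered so much to Deep Blue's designers.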
What most people still had no idea about was how the technology behind Deep Blue would help spread the influence of computers to almost every aspect of society by transforming the way we use data. Complex computer models are today used to underpin banks’ financial systems, to design better cars and aeroplanes, and to trial new drugs. Systems that mine large datasets (often known as “big data”) to look for significant patterns are involved in planning public services such as transport or healthcare, and enable companies to target advertising to specific groups of people.

These are highly complex problems that require rapid processing of large and complex datasets. Deep Blue gave scientists and engineers significant insight into the massively parallel multi-chip systems that have made this possible. In particular, it showed the capabilities of a general-purpose computer system that controlled a large number of custom chips designed for a specific application.

The science of molecular dynamics, for example, involves studying the physical movements of molecules and atoms. Custom chip designs have enabled computers to model molecular dynamics to look ahead to see how new drugs might react in the body, just like looking ahead at different chess moves. Molecular dynamics simulations have helped speed up the development of successful drugs, such as some of those used to treat HIV.

For very broad applications, such as modelling financial systems and data mining, designing custom chips for an individual task would be prohibitively expensive. But the Deep Blue project helped develop the techniques to code and manage highly parallelised systems that split a problem over a large number of processors. Today, many systems for processing large amounts of data rely on graphics processing units (GPUs) instead of custom-designed chips. These were originally designed to produce images on a screen but also handle information using lots of processors in parallel.
So now they are often used in high-performance computers that process large datasets and run powerful artificial intelligence tools such as Facebook’s digital assistant. There are obvious similarities with Deep Blue’s architecture here: custom chips (built for graphics) controlled by general-purpose processors to drive efficiency in complex calculations.

The world of chess-playing machines, meanwhile, has evolved since the Deep Blue victory. Despite his experience with Deep Blue, Kasparov agreed in 2003 to take on two of the most prominent chess machines, Deep Fritz and Deep Junior. Both times he managed to avoid a defeat, although he still made errors that forced him into a draw. However, both machines convincingly beat their human counterparts in the 2004 and 2005 Man vs Machine World Team Championships.

Junior and Fritz marked a change in the approach to developing systems for computer chess. Whereas Deep Blue was a custom-built computer relying on the brute force of its processors to analyse millions of moves, these new chess machines were software programs that used learning techniques to minimise the searches needed. This approach can beat brute-force techniques using only a desktop PC.

But despite this advance, we still don’t have chess machines that resemble human intelligence in the way they play the game – they don’t need to. And, if anything, the victories of Junior and Fritz further strengthen the idea that human players lose to computers, at least in part, because of their humanity. The humans made errors, became anxious and feared for their reputations. The machines, on the other hand, relentlessly applied logical calculations to the game in their attempts to win.

One day we might have computers that truly replicate human thinking, but the story of the last 20 years has been the rise of systems that are superior precisely because they are machines.

This article was originally published on The Conversation. Read the original article.
Mark Robert Anderson does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.