News Article | May 7, 2017
The future has already arrived in a small factory in Worcester, according to the man hired by Theresa May to put Britain at the forefront of the next industrial revolution. Juergen Maier, the chief executive of Siemens UK, believes new technologies including robotics, artificial intelligence and additive manufacturing, or 3D printing, can deliver greater productivity and create more highly paid jobs. But failing to crack the next revolution will come at a high price: falling living standards. The work being done in Worcester, and places like it, will be crucial if Britain is to be successful outside the EU, Maier says. The starting gun has been fired in this global race and Britain needs to get ahead. “The beauty of it is, if we get this right, it doesn’t just drive productivity, but it also means that you’re driving jobs up the value chain, which means that people are getting better paid, so ultimately you raise living standards,” the 53-year-old says from the factory floor of Materials Solutions, which is 85% owned by Siemens and boasts big-name clients such as Rolls-Royce. “If you take it as an average, our living standards have hardly risen since the recession. The fundamental reasons are we’re not exporting enough, and we’re not driving productivity and output. Unless you’re driving productivity, you can’t raise wages.” Maier, a firm supporter of the remain campaign in the run-up to the EU referendum, has quickly become the go-to expert on the future of British industry. When he met the Guardian, he was preparing to appear on BBC1’s Question Time in Wigan, alongside panellists including the Brexit secretary, David Davis, and the Ukip leader, Paul Nuttall. 
His brief as head of the industrial digitalisation review commissioned by the government is to work out how the UK can better deliver existing technologies, how it can create new industries and, in doing so, whether the UK can generate a net increase in manufacturing jobs despite greater levels of automation. “Our gut feeling is we can, but we still need to prove that,” Maier says. “The absolute nightmare for me would be that we’re applying this technology, we’re displacing jobs as a result of it – which will happen – but what we’re not doing at the same time is creating all the jobs in computer science, in data analytics, in software code writing. The good news is we already have a lot of jobs in this area. These industries will create thousands of jobs, software jobs, engineers.” With much at stake, the review is a major undertaking for Maier, but he does have support from a panel of UK business leaders including Sir Charlie Mayfield, the chairman of the John Lewis Partnership, and Carolyn Fairbairn, the director general of the CBI. The current plan is to report back with initial recommendations to the business secretary, currently Greg Clark, in late summer. Maier’s ambition for the UK is considerable, but so too are the obstacles, not least the uncertainty created by Brexit and strong competition from the likes of Germany and the US. Another key issue will be Britain’s ability to fill these highly skilled roles in sufficient numbers post-Brexit. “It is not going to be as good as it was in the single market and I think we just have to be more honest about that,” Maier says. “I’m not a moaner or a ‘remoaner’, I’m saying we have to get to grips with the realities, which are that there are going to be some barriers to trade. And the sooner we accept that the better. 
“Once we’ve got over the heat over the elections and this slight hysteria that we’ve got at the moment, we have to get into a period of calm.” On Materials Solutions’ factory floor in Worcester, 3D printing machines whirr quietly away, making complex metal parts for industries including automotive, aerospace and motor sports with a speed that would have been unthinkable using traditional manufacturing. Here, welding and forging are replaced with machines that turn 3D CAD models into parts using software, lasers and metal powders. For Siemens, the firm has delivered breakthrough technology, by 3D printing gas turbine blades. In doing so, the time it takes from the design of a new blade to its production has been reduced from two years to just two months. Founded and run by Carl Brancher, Materials Solutions employs 23 people and is the perfect example of how Britain can create new, cutting-edge industries, according to Maier. The Siemens boss believes the prize for the winner in the next industrial revolution is considerable: a thriving manufacturing sector, highly paid, skilled jobs and greater productivity, which would in turn fuel growth and raise living standards for all. Britain has some advantages, Maier says, including a flexible, skilled workforce and existing infrastructure that fosters innovation such as the UK’s Catapult centres, the Alan Turing Institute in London and the Henry Royce Institute in Manchester. Besides, he says, we have no choice but to win the race to industrial digitalisation: there is too much at stake to fail. “We have no option. We have to pull this off and if we don’t pull it off, our living standards will drop further.”
News Article | May 5, 2017
Scientists at the University of Warwick's Tissue Image Analytics (TIA) Laboratory—led by Professor Nasir Rajpoot from the Department of Computer Science—are creating a large, digital repository of a variety of tumour and immune cells found in thousands of human tissue samples, and are developing algorithms to recognize these cells automatically. "We are very excited about working with Intel under the auspices of the strategic relationship between Intel and the Alan Turing Institute," said Professor Rajpoot, who is also an Honorary Scientist at University Hospitals Coventry & Warwickshire NHS Trust (UHCW). "The collaboration will enable us to benefit from world-class computer science expertise at Intel with the aim of optimising our digital pathology image analysis software pipeline and deploying some of the latest cutting-edge technologies developed in our lab for computer-assisted diagnosis and grading of cancer." The digital pathology imaging solution aims to enable pathologists to increase their accuracy and reliability in analysing cancerous tissue specimens over what can be achieved with existing methods. "We have long known that important aspects of cellular pathology can be done faster with computers than by humans," said Professor David Snead, clinical lead for cellular pathology and director of the UHCW Centre of Excellence. "With this collaboration, we finally see a pathway toward bringing this science into practice. The successful adoption of these tools will stimulate better organisation of services, gains in efficiency, and above all, better care for patients, especially those with cancer." The initial work focuses on lung cancer. The University of Warwick and Intel are collaborating to improve a model for computers to recognize cellular distinctions associated with various grades and types of lung cancer by using artificial intelligence frameworks such as TensorFlow running on Intel Xeon processors. 
UHCW is annotating the digital pathology images to help inform the model. The aim is to create a model that will eventually be useful in many types of cancer—creating more objective results, lowering the risk of human errors, and aiding oncologists and patients in their selection of treatments. The TIA lab at Warwick and the Pathology Department at the UHCW have established the UHCW Centre of Excellence for Digital Pathology and begun digitising their histopathology service. This digital pathology imaging solution will be the next step in revolutionising traditional healthcare with computerised systems and could be placed in any pathology department, in any hospital. The project has been launched in collaboration with Intel and the Alan Turing Institute—the latter being the UK's national centre for data science, founded in 2015 in a joint venture between the University of Warwick and other top UK universities. "This project is an excellent example of data science's potential to underpin critical improvements in health and well-being, an area of great importance to the Alan Turing Institute," said Dr. Anthony Lee, the Strategic Programme Director at the Alan Turing Institute for the collaboration between the Institute and Intel. Rick Cnossen, general manager of HIT-Imaging Analytics in Intel's Data Center Group, commented, "This project has massive potential benefit for cellular pathology, and Intel technologies are the foundation for enabling this transformation. "We've seen what has happened over recent years with the digitisation of X-rays (PACS). The opportunity to transform the way pathology images are handled and analysed, building on experience with PACS and combining data with other sources, could be truly ground-breaking. "This collaboration could not only improve service efficiency, but also open up new and exciting analytical techniques for more personalised precision care."
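Before a classifier (for example, one built with TensorFlow, as the article mentions) can analyse a tissue specimen, pipelines of this kind typically tile the enormous whole-slide scan into small fixed-size patches. The sketch below illustrates only that tiling step; the function name, patch size and dimensions are all hypothetical, not taken from the Warwick/Intel system:

```python
def tile_coordinates(width, height, patch=256, stride=256):
    """Yield (x, y) top-left corners of fixed-size patches that
    fully fit inside a width x height slide image."""
    for y in range(0, height - patch + 1, stride):
        for x in range(0, width - patch + 1, stride):
            yield (x, y)

# A toy-sized 1024 x 512 "slide" tiled into non-overlapping
# 256-pixel patches: 4 columns x 2 rows = 8 patches.
coords = list(tile_coordinates(1024, 512))
print(len(coords))            # 8
print(coords[0], coords[-1])  # (0, 0) (768, 256)
```

Real whole-slide images are gigapixel-scale, so the same loop would yield tens of thousands of patches, each of which is then scored by the model and the results stitched back into a slide-level map.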
News Article | November 29, 2016
The internet age made big promises to us: a new period of hope and opportunity, connection and empathy, expression and democracy. Yet the digital medium has aged badly because we allowed it to grow chaotically and carelessly, lowering our guard against the deterioration and pollution of our infosphere. We sought only what we wanted – entertainment, cheaper goods, free news and gossip – and not the deeper understanding, dialogue or education that would have served us better. The appetite for populism is not a new problem. In the ferocious newspaper battles of 1890s New York, the emerging sensational style of journalism in Joseph Pulitzer’s New York World and William Randolph Hearst’s New York Journal was dubbed “yellow journalism” by those concerned with maintaining standards, adherence to accuracy and an informed public debate. We now have the same problem with online misinformation. Humans have always been prejudiced and intolerant of different views. Francis Bacon’s philosophical masterwork Novum Organum, published in 1620, analyses four kinds of idols or false notions that “are now in possession of the human understanding, and have taken deep root therein”. One of them, the “idols of the cave”, refers to our conceptual biases and susceptibility to external influences. “Everyone ... has a cave or den of his own, which refracts and discolours the light of nature, owing either to his own proper and peculiar nature; or to his education and conversation with others; or to the reading of books, and the authority of those whom he esteems and admires; or to the differences of impressions, accordingly as they take place in a mind preoccupied and predisposed or in a mind indifferent and settled; or the like.” It is at least a 400-year-old problem. Likewise, the appetite for shallow gossip, pleasant lies and reassuring falsehoods has always been significant. 
The difference is that the internet allows that appetite to be fed a bottomless supply of semantic junk, transforming Bacon’s caves into echo chambers. In that way, we have always been “post-truth”. These kinds of digital, ethical problems represent a defining challenge of the 21st century. They include breaches of privacy, of security and safety, of ownership and intellectual property rights, of trust, of fundamental human rights, as well as the possibility of exploitation, discrimination, inequality, manipulation, propaganda, populism, racism, violence and hate speech. How should we even begin to weigh the human cost of these problems? Consider the political responsibilities of newspapers’ websites in distorting discussions around the UK’s Brexit decision, or the false news disseminated by the “alt-right”, a loose affiliation of people with far-right views, during the campaign waged by President-elect Donald Trump. So far, the strategy for technology companies has been to deal with the ethical impact of their products retrospectively. Some are finally taking more significant action against online misinformation: Facebook, for example, is currently working on methods for stronger detection and verification of fake news, and on ways to provide warning labels on false content – yet only now that the US presidential election is over. But this is not good enough. The Silicon Valley mantra of “fail often, fail fast” is a poor strategy when it comes to the ethical and cultural impacts of these businesses. It is equivalent to “too little, too late”, and has very high, long-term costs of global significance, in preventable or mitigable harms, wasted resources, missed opportunities, lack of participation, misguided caution and lower resilience. A lack of proactive ethics foresight thwarts decision-making, undermines management practices and damages strategies for digital innovation. In short, it is very expensive. 
Amazon’s same-day delivery service, for example, systematically tends to exclude predominantly black neighbourhoods in the 27 metropolitan areas where it was available, Bloomberg found. It would have been preventable with an ethical impact analysis that could have considered the discriminatory impact of simple, algorithmic decisions. The near instantaneous spread of digital information means that some of the costs of misinformation may be hard to reverse, especially when confidence and trust are undermined. The tech industry can and must do better to ensure the internet meets its potential to support individuals’ wellbeing and social good. We need an ethical infosphere to save the world and ourselves from ourselves, but restoring that infosphere requires a gigantic, ecological effort. We must rebuild trust through credibility, transparency and accountability – and a high degree of patience, coordination and determination. There are some reasons to be cheerful. In April 2016, the British government agreed with the recommendation of the House of Commons’ Science and Technology Committee that the government should establish a Council of Data Ethics. Such an open and independent advisory forum would bring all stakeholders together to participate in the dialogue, decision-making and implementation of solutions to common ethical problems brought about by the information revolution. In September 2016, Amazon, DeepMind, Facebook, IBM, Microsoft and Google (whom I advised on the right to be forgotten) established a new ethical body called the Partnership on Artificial Intelligence to Benefit People and Society. The Royal Society, the British Academy and the Alan Turing Institute, the national institute for data science, are working on regulatory frameworks for managing personal data, and in May 2018, Europe’s new General Data Protection Regulation will come into effect, strengthening the rights of individuals and their personal information. 
All these initiatives show a growing interest in how online platforms can be held more responsible for the content they provide, not unlike newspapers. We need to shape and guide the future of the digital, and stop making it up as we go along. It is time to work on an innovative blueprint for a better kind of infosphere.
Agency: GTR | Branch: EPSRC | Phase: Research Grant | Award Amount: £42.00M | Year: 2015
The work of the Alan Turing Institute will enable knowledge and predictions to be extracted from large-scale and diverse digital data. It will bring together the best people, organisations and technologies in data science for the development of foundational theory, methodologies and algorithms. These will inform scientific and technological discoveries, create new business opportunities, accelerate solutions to global challenges, inform policy-making, and improve the environment, health and infrastructure of the world in an Age of Algorithms.
News Article | February 23, 2017
Researchers say 'benevolent bots', software robots designed to improve articles on Wikipedia, sometimes have online 'fights' over content that can continue for years. Editing bots on Wikipedia undo vandalism, enforce bans, check spelling, create links and import content automatically, whereas other, non-editing bots can mine and identify data or spot copyright infringements. The team analysed how much the bots disrupted Wikipedia, observing how they interacted across 13 different language editions over ten years (from 2001 to 2010). They found that bots interacted with one another, whether or not this was by design, and that it led to unpredictable consequences. The research paper, published in PLOS ONE, concludes that bots are more like humans than you might expect. Bots appear to behave differently in culturally distinct online environments. The paper says the findings are a warning to those using artificial intelligence to build autonomous vehicles or cyber security systems, or to manage social media. It suggests that scientists may have to devote more attention to bots' diverse social life and their different cultures. The research paper by the University of Oxford and the Alan Turing Institute in the UK explains that although the online world has become an ecosystem of bots, our knowledge of how they interact with each other is still rather poor. Although bots are automatons without the capacity for emotion, bot-to-bot interactions are unpredictable and play out in distinctive ways. It finds that German editions of Wikipedia had the fewest conflicts between bots, with each undoing another's edits 24 times, on average, over ten years. This shows relative efficiency, says the research paper, when compared with bots on the Portuguese Wikipedia edition, which undid another bot's edits 185 times, on average, over ten years. 
Bots on English Wikipedia undid another bot's work 105 times, on average, over ten years, three times the rate of human reverts, says the paper. The findings show that even simple autonomous algorithms can produce complex interactions that result in unintended consequences - 'sterile fights' that may continue for years, or reach deadlock in some cases. The paper says while bots constitute a tiny proportion (0.1%) of Wikipedia editors, they stand behind a significant proportion of all edits. Although such conflicts represent a small proportion of the bots' overall editorial activity, these findings are significant in highlighting their unpredictability and complexity. Smaller language editions, such as the Polish Wikipedia, have far more content created by bots than the large language editions, such as English Wikipedia. Lead author Dr Milena Tsvetkova, from the Oxford Internet Institute, said: 'We find that bots behave differently in different cultural environments and their conflicts are also very different to the ones between human editors. This has implications not only for how we design artificial agents but also for how we study them. We need more research into the sociology of bots.' The paper was co-authored by the principal investigator of the EC-Horizon2020-funded project, HUMANE, Professor Taha Yasseri, also from the Oxford Internet Institute. He added: 'The findings show that even the same technology leads to different outcomes depending on the cultural environment. An automated vehicle will drive differently on a German autobahn to how it will through the Tuscan hills of Italy. Similarly, the local online infrastructure that bots inhabit will have some bearing on how they behave and their performance. Bots are designed by humans from different countries so when they encounter one another, this can lead to online clashes. 
We see differences in the technology used in the different Wikipedia language editions, and the different cultures of the communities of Wikipedia editors involved create complicated interactions. This complexity is a fundamental feature that needs to be considered in any conversation related to automation and artificial intelligence.' Professor Luciano Floridi, also an author of the paper, remarked: 'We tend to forget that coordination even among collaborative agents is often achieved only through frameworks of rules that facilitate the wanted outcomes. This infrastructural ethics or infra-ethics needs to be designed as much and as carefully as the agents that inhabit it.' The research finds that the number of reverts is smaller for bots than for humans, but the bots' behaviour is more varied, and conflicts involving bots last longer and are triggered later. For humans, the time between successive reverts tends to cluster around 2 minutes, 24 hours or one year, says the paper. The first bot-to-bot revert happened a month later, on average, but further reverts often continued for years. The paper suggests that humans use automatic tools that report live changes and so can react more quickly, whereas bots systematically crawl over web articles and can be restricted in the number of edits they are allowed to make. The fact that bots' conflicts are so long-lasting also indicates that humans are failing to spot the problems early enough, the paper suggests. For more information, contact the University of Oxford News Office on +44 (0) 1865 280534 or email: firstname.lastname@example.org *The paper, 'Even Good Bots Fight: The Case of Wikipedia', is by Milena Tsvetkova, Ruth Garcia-Gavilanes, Luciano Floridi, and Taha Yasseri. *It will be published in PLOS ONE on Thursday, February 23, 2017 at 2 pm (Eastern Time). Once live, the article can be found at: http://journals. 
*The team identified bots' editorial activity mainly through Wikipedia's own flagging system, which requires human editors to create separate accounts for bots. Bot account names have to show that the author is a bot. The researchers also trawled the Wikipedia API pages (Application Programming Interface), checking the User page for each suspected bot account to link bots to their human owners. They modelled interactions between pairs of bots involved in successive reverts as trajectories that trace who reverts more over time. Then, they used simple machine learning techniques to identify different types of trajectories and analyse how commonly they occur for bots versus humans, and across the different language editions of Wikipedia. *Bots vary from web crawlers for search engines, to chatbots for online customer service, spambots on social media, and content editing in online collaboration communities.
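The detection step the researchers describe can be sketched in a few lines. This is not the authors' code; it is a hypothetical illustration of the general idea, leaning on Wikipedia's flagging convention that a bot's account name must identify it as a bot:

```python
from collections import Counter

def is_bot(username: str) -> bool:
    """Heuristic based on Wikipedia's convention that bot account
    names must show the author is a bot (e.g. 'Xqbot')."""
    return username.lower().endswith("bot")

def bot_bot_reverts(revert_log):
    """Count bot-on-bot reverts per directed pair of accounts.

    revert_log: iterable of (article, reverter, reverted) tuples,
    where `reverter` undid an edit by `reverted`.
    """
    pairs = Counter()
    for article, reverter, reverted in revert_log:
        if is_bot(reverter) and is_bot(reverted):
            pairs[(reverter, reverted)] += 1
    return pairs

# Toy edit history: two bots undoing each other, plus one
# human revert that the filter should ignore.
log = [
    ("Aston Villa", "Xqbot", "Darknessbot"),
    ("Aston Villa", "Darknessbot", "Xqbot"),
    ("Niels Bohr", "Xqbot", "Darknessbot"),
    ("Niels Bohr", "Alice", "Xqbot"),  # human revert, not counted
]
print(bot_bot_reverts(log))
```

In the study itself, sequences of such reverts between a pair of bots were then treated as trajectories over time and classified with machine learning; this sketch covers only the initial filtering and counting.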
News Article | December 8, 2015
Cray is partnering with the Alan Turing Institute, the new U.K. data science research organization in London, to help the U.K. as it expands data science research to benefit science and industry. Earlier this month Fiona Burgess, U.K. senior account manager, and I attended the launch of the institute. At the event, U.K. Minister for Science and Universities Jo Johnson paid tribute to Turing and his work. Institute director Professor Andrew Blake told the audience that the Turing Institute is about much more than just big data — it is about data science, analyzing that data and gaining a new understanding that leads to decisions and actions. Alan Turing was a pioneering British computer scientist. He has become a household name in the U.K. following publicity surrounding his role in breaking the Enigma machine ciphers during the Second World War. This was a closely guarded secret until a few years ago, but has recently become the subject of numerous books and several films. Turing was highly influential in the development of computer science, providing a formalization of the concepts of algorithm and computation with the Turing machine. After the war, he worked at the National Physical Laboratory, where he designed ACE, one of the first stored-program computers. The Alan Turing Institute is a joint venture between the universities of Cambridge, Edinburgh, Oxford, Warwick, University College London, and the U.K. Engineering and Physical Sciences Research Council (EPSRC). The Institute received initial funding in excess of £75 million ($110 million) from the U.K. government, the university partners and other business organizations, including the Lloyd’s Register Foundation. The Turing Institute will, among other topics, research how knowledge and predictions can be extracted from large-scale and diverse digital data. It will bring together people, organizations and technologies in data science for the development of theory, methodologies and algorithms. The U.K. 
government is looking to this new Institute to enable the science community, commerce and industry to realize the value of big data for the U.K. economy. Cray will be working with the Turing Institute and EPSRC to provide data analytics capability to the U.K.’s data sciences community. EPSRC’s ARCHER supercomputer, a Cray XC30 system based at the University of Edinburgh, has been chosen for this work. Much as we worked with NERSC to port Docker to Cray systems, we will be working with ATI to port analytics software to ARCHER and then XC systems generally. ARCHER is currently the largest supercomputer for scientific research in the U.K. — with its recent upgrade ARCHER’s 118,080 cores can access in excess of 300 TB of memory. What sort of problem might need that amount of processing power? Genomics England is collecting around 200 GB of DNA sequence data from each of 100,000 people. Finding patterns in all this information will be a mammoth task! ATI have put together a wide-ranging programme of workshops and data science summits, details of which can be found on their website. Duncan Roweth is a principal engineer in the Cray CTO Office in Bristol, U.K.
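The scale Roweth alludes to is easy to sanity-check from the figures given in the article: 200 GB per genome across 100,000 participants comes to roughly 20 petabytes, of which even ARCHER's 300 TB of memory can hold only a small fraction at once. A back-of-the-envelope calculation:

```python
# Back-of-the-envelope sizing for the Genomics England dataset
# described in the article: 200 GB per person, 100,000 people.
GB_PER_GENOME = 200
PARTICIPANTS = 100_000

total_gb = GB_PER_GENOME * PARTICIPANTS
total_pb = total_gb / 1_000_000  # 1 PB = 1,000,000 GB (decimal units)
print(f"Total dataset: {total_gb:,} GB = {total_pb:g} PB")

# ARCHER's ~300 TB of memory versus the full dataset:
archer_gb = 300 * 1_000
print(f"Fraction fitting in ARCHER memory: {archer_gb / total_gb:.1%}")
```

So only about 1.5% of the dataset fits in ARCHER's memory at any one time, which is why finding patterns across the whole collection is, as the author puts it, a mammoth task.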
News Article | October 28, 2016
Why SKY, ITV and ADT are unlocking the power of data. • Government backs use of data science with launch of £42 million institute opened by Minister for Universities and Science, Jo Johnson (Boris’ brother) and Martha Lane Fox. • Top marketers recognise data is one of the most important things driving the twenty-first-century economy. Big brands are unlocking the power of data science to help inform their advertising strategies and get ahead of the crowd. Using unique social intelligence tools developed by leading scientists, they can analyse millions of tweets by hundreds of thousands of people in real time. The cutting-edge technology used to consume and analyse vast quantities of data was showcased recently at the launch of the Alan Turing Institute (on Wednesday 11 November). The £42 million government-funded institute was opened by Minister for Universities and Science Jo Johnson and Martha Lane Fox, and the event showcased the vision for the newly created Institute. Data visualisations of UK cities were presented at the event to show how the way people come together to form communities differs across the UK. Leeds business Bloom Agency created the visualisations by analysing 23 million tweets from 300,000 individuals over one month. Bloom Agency’s Chief Data Scientist, Peter Laflin, developed the Whisper tool which created the data visualisations. He said: ‘Our work with the Universities of Oxford and Strathclyde using social media data maps out the complex structures that exist in cities, and doing this in real time, is a great example of a new data source being used to tackle previously intractable problems. This work is intended to help city leaders to understand the impact of the decisions they make about how cities should be run and which policies should receive civic support. This is one step on the road to creating a ‘smart city’, where we use data to help us live more sustainably, efficiently and in harmony with others. 
This means problems of everyday life could be solved such as how best to get to work, where to park, even which street lights need replacement bulbs. It’s exactly the same technology we use with our clients, to help them understand how their target audiences behave so we can tailor specific content for them. We used Whisper to help mine the information that informed the making of the recent Thierry Henry Sky advert, which the Mirror called ‘the greatest thing in the history of the universe’. What is clear is that data is one of the most important things that will drive our twenty-first-century economy, our academic understanding and our progress as a society. Data is allowing us a view into problems previously too complex to investigate. And it seems that marketers at the top of their game already recognise that.’ The newly launched Alan Turing Institute brings together top leaders in advanced mathematics and computing science from across the UK, including Bloom’s Peter Laflin. Peter added: ‘The institute will play a pivotal role in connecting together industry, academia and the global economy in such a way that the lack of clarity around “data science” is skilfully and cleverly resolved whilst providing the motivation for world class innovation with data science.’ Bloom Agency is an integrated marketing agency. Bloom is a discovery agency that embraces change and technology and places great insight and strategically led creativity at the heart of everything it does. In fact, all of their work is based on insight and data. Bloom Agency’s unique listening tool Whisper measures True Influence, and the team believes that great insight produces creative work that gets results and which is powerful and effective. Peter Laflin is Bloom’s Chief Data Scientist; he uses data science to inform Bloom’s strategically led creative work. 
His specialist team use data science and a set of uniquely developed strategy and insight tools to help clients to make better business decisions. Bloom Agency has used Professor Peter Grindrod CBE’s academic research to create and evolve Whisper and Professor Grindrod is a Non-Executive Director of the company. Peter Laflin works closely with Professor Grindrod, who is a Board Director at the Alan Turing Institute and is interested in the scope of the work that the Alan Turing Institute will undertake. The Alan Turing Institute is a joint venture between the universities of Cambridge, Edinburgh, Oxford, Warwick, UCL and the Engineering and Physical Sciences Research Council (EPSRC). The Institute will promote the development and use of advanced mathematics, computer science, algorithms and big data for human benefit. The Institute will bring together leaders in advanced mathematics and computing science from the five lead universities and other partners and will conduct first class research and development in an environment that brings together theory and practical application. It will build on the UK’s existing academic strengths and help position the country as a world leader in the analysis and application of big data and algorithm research. The Institute is being funded over five years with £42 million from the UK government. The university partners are contributing £5 million each, totalling £25 million. The Institute is based at the British Library at the heart of London’s Knowledge Quarter. Its work is expected to encompass a broad range of scientific disciplines and be relevant across multiple business sectors.
News Article | December 13, 2016
LONDON, 13-Dec-2016 — /EuropaWire/ — We are delighted to congratulate Zoubin Ghahramani, Faculty Fellow at The Alan Turing Institute and Professor of Information Engineering at the University of Cambridge, on the news that his AI start-up has been acquired by tech giant Uber as their new research arm. According to an article published in The New York Times, Uber intend ‘to apply A.I. in areas like self-driving vehicles, along with solving other technological challenges through machine learning.’ The team from Geometric Intelligence will form the initial core of ‘Uber AI Labs’, a new division of the tech giant dedicated to cutting-edge research in artificial intelligence and machine learning. Two years ago, Zoubin and Gary Marcus, of New York University, founded Geometric Intelligence, an artificial intelligence startup that vowed to surpass the deep learning systems under development at internet giants like Google and Facebook. At The Alan Turing Institute, Zoubin’s research interests include statistical machine learning, Bayesian nonparametrics, scalable inference, deep learning, probabilistic programming, Bayesian optimisation, and automating data science. His Automatic Statistician project aims to automate the exploratory analysis and modelling of data, discovering good models for data and generating a human-interpretable natural language summary of the analysis. He and his group have also worked on automating inference (through probabilistic programming) and on automating the allocation of computational resources. “It is terrific to see that a small UK start-up co-founded by an Alan Turing Institute Fellow will form the new AI research arm at Uber, and a superb example of how cutting-edge research can be applied to major industry challenges. Many congratulations to Zoubin and the entire team from Geometric Intelligence.”
News Article | February 23, 2017
For many it is no more than the first port of call when a niggling question raises its head. Its pages hold answers to mysteries ranging from the fate of male anglerfish to the joys of dorodango and the improbable death of Aeschylus. But beneath the surface of Wikipedia lies a murky world of enduring conflict. A new study by computer scientists has found that the online encyclopedia is a battleground where silent wars have raged for years. Since Wikipedia launched in 2001, its millions of articles have been tended by software robots, or simply “bots”, built to mend errors, add links to other pages, and perform other basic housekeeping tasks. In the early days, the bots were so rare that they worked in isolation. But over time, the number deployed on the encyclopedia exploded, with unexpected consequences. The more the bots came into contact with one another, the more they became locked in combat, undoing each other’s edits and changing the links they had added to other pages. Some conflicts only ended when one or the other bot was taken out of action. “The fights between bots can be far more persistent than the ones we see between people,” said Taha Yasseri, who worked on the study at the Oxford Internet Institute. “Humans usually cool down after a few days, but the bots might continue for years.” The findings emerged from a study of bot-on-bot conflict in the first ten years of Wikipedia’s existence. The researchers, at Oxford and the Alan Turing Institute in London, examined the editing histories of pages in 13 different language editions and recorded when bots undid other bots’ changes. They did not expect to find much. The bots are simple computer programs written to make the encyclopedia better; they are not intended to work against each other. “We had very low expectations to see anything interesting. When you think about them they are very boring,” said Yasseri. 
“The very fact that we saw a lot of conflict among bots was a big surprise to us. They are good bots, they are based on good intentions, and they are based on the same open source technology.” While some conflicts mirrored those found in society, such as disputes over the best names to use for contested territories, others were more intriguing. Describing their research in a paper entitled Even Good Bots Fight in the journal Plos One, the scientists reveal that among the most contested articles were pages on the former president of Pakistan Pervez Musharraf, the Arabic language, Niels Bohr and Arnold Schwarzenegger. One of the most intense battles played out between Xqbot and Darknessbot, which fought over 3,629 different articles between 2009 and 2010. Over that period, Xqbot undid more than 2,000 edits made by Darknessbot, with Darknessbot retaliating by undoing more than 1,700 of Xqbot’s changes. The two clashed over pages on all sorts of topics, from Alexander of Greece and Banqiao district in Taiwan to Aston Villa football club. Another bot, named after Tachikoma, the artificial intelligence in the Japanese science fiction series Ghost in the Shell, had a two-year running battle with Russbot. The two undid more than a thousand of each other’s edits on more than 3,000 articles, ranging from Hillary Clinton’s 2008 presidential campaign to the demography of the UK. The study found striking differences in the bot wars that played out on the various language editions of Wikipedia. The German editions had the fewest bot fights, with bots undoing one another’s edits on average only 24 times in a decade. The story was different on the Portuguese Wikipedia, where bots undid the work of other bots on average 185 times in ten years; the English version saw bots meddling with each other’s changes on average 105 times a decade. The findings show that even simple algorithms let loose on the internet can interact in unpredictable ways. 
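The core measurement behind these figures — scanning a page’s edit history and recording when one bot undoes another’s change — can be sketched in a few lines. This is a hypothetical illustration, not the study’s actual code: the revision fields, the “restores an earlier content hash” revert criterion, and the bot-name heuristic are all assumptions made for the example.

```python
from collections import Counter

def bot_reverts(history):
    """Count bot-on-bot reverts in one page's chronological edit history.

    Each revision is a dict with an `editor` name and a `sha` content
    hash (hypothetical fields). An edit counts as a revert when it
    restores the content hash of an earlier revision, undoing every
    edit made in between.
    """
    reverts = Counter()
    seen = {}  # content hash -> index of the latest revision producing it
    for i, rev in enumerate(history):
        if rev["sha"] in seen:
            # Every edit between the restored revision and this one was undone.
            for undone in history[seen[rev["sha"]] + 1 : i]:
                # Crude heuristic: treat accounts ending in "bot" as bots.
                if rev["editor"].endswith("bot") and undone["editor"].endswith("bot"):
                    reverts[(rev["editor"], undone["editor"])] += 1
        seen[rev["sha"]] = i
    return reverts

history = [
    {"editor": "Xqbot", "sha": "a"},
    {"editor": "Darknessbot", "sha": "b"},
    {"editor": "Xqbot", "sha": "a"},  # restores "a": one revert of Darknessbot
]
print(bot_reverts(history))
```

Summing such counts over every page in a language edition, and dividing by the years observed, would yield per-edition averages like the 24, 185 and 105 reverts per decade quoted above.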
In many cases, the bots came into conflict because they followed slightly different rules to one another. Yasseri believes the work serves as an early warning to companies developing bots and more powerful artificial intelligence (AI) tools. An AI that works well in the lab might behave unpredictably in the wild. “Take self-driving cars. A very simple thing that’s often overlooked is that these will be used in different cultures and environments,” said Yasseri. “An automated car will behave differently on the German autobahn to how it will on the roads in Italy. The regulations are different, the laws are different, and the driving culture is very different,” he said. As more decisions, options and services come to depend on bots working properly together, harmonious cooperation will become increasingly important. As the authors note in their latest study: “We know very little about the life and evolution of our digital minions.” Earlier this month, researchers at Google’s DeepMind set AIs against one another to see if they would cooperate or fight. When the AIs were released on an apple-collecting game, the scientists found that the AIs cooperated while apples were plentiful, but as soon as supplies got short, they turned nasty. It is not the first time that AIs have run into trouble. In 2011, scientists in the US recorded a conversation between two chatbots. They bickered from the start and ended up arguing about God. Hod Lipson, director of the Creative Machines Lab at Columbia University in New York, said the work was a “fascinating example of the complex and unpredictable behaviours that emerge when AI systems interact with each other.” “Often people are concerned about what AI systems will ultimately do to people,” he said. “But what this and similar work suggests is that what AI systems might do to each other might be far more interesting. And this isn’t just limited to software systems – it’s true also for physically embodied AI. 
Imagine ACLU drones watching over police drones, and vice versa. It’s going to be interesting.”