News Article | November 29, 2016
The internet age made big promises to us: a new period of hope and opportunity, connection and empathy, expression and democracy. Yet the digital medium has aged badly because we allowed it to grow chaotically and carelessly, lowering our guard against the deterioration and pollution of our infosphere. We sought only what we wanted – entertainment, cheaper goods, free news and gossip – and not the deeper understanding, dialogue or education that would have served us better.

The appetite for populism is not a new problem. In the ferocious newspaper battles of 1890s New York, the emerging sensational style of journalism in Joseph Pulitzer’s New York World and William Randolph Hearst’s New York Journal was dubbed “yellow journalism” by those concerned with maintaining standards, adherence to accuracy and an informed public debate. We now have the same problem with online misinformation.

Humans have always been prejudiced and intolerant of different views. Francis Bacon’s philosophical masterwork Novum Organum, published in 1620, analyses four kinds of idols or false notions that “are now in possession of the human understanding, and have taken deep root therein”. One of them, the “idols of the cave”, refers to our conceptual biases and susceptibility to external influences. “Everyone ... has a cave or den of his own, which refracts and discolours the light of nature, owing either to his own proper and peculiar nature; or to his education and conversation with others; or to the reading of books, and the authority of those whom he esteems and admires; or to the differences of impressions, accordingly as they take place in a mind preoccupied and predisposed or in a mind indifferent and settled; or the like.” It is at least a 400-year-old problem. Likewise, the appetite for shallow gossip, pleasant lies and reassuring falsehoods has always been significant.
The difference is that the internet allows that appetite to be fed a bottomless supply of semantic junk, transforming Bacon’s caves into echo chambers. In that way, we have always been “post-truth”. These kinds of digital, ethical problems represent a defining challenge of the 21st century. They include breaches of privacy, of security and safety, of ownership and intellectual property rights, of trust, of fundamental human rights, as well as the possibility of exploitation, discrimination, inequality, manipulation, propaganda, populism, racism, violence and hate speech. How should we even begin to weigh the human cost of these problems? Consider the political responsibilities of newspapers’ websites in distorting discussions around the UK’s Brexit decision, or the false news disseminated by the “alt-right”, a loose affiliation of people with far-right views, during the campaign waged by President-elect Donald Trump. So far, the strategy for technology companies has been to deal with the ethical impact of their products retrospectively. Some are finally taking more significant action against online misinformation: Facebook, for example, is currently working on methods for stronger detection and verification of fake news, and on ways to provide warning labels on false content – yet only now that the US presidential election is over. But this is not good enough. The Silicon Valley mantra of “fail often, fail fast” is a poor strategy when it comes to the ethical and cultural impacts of these businesses. It is equivalent to “too little, too late”, and has very high, long-term costs of global significance, in preventable or mitigable harms, wasted resources, missed opportunities, lack of participation, misguided caution and lower resilience. A lack of proactive ethics foresight thwarts decision-making, undermines management practices and damages strategies for digital innovation. In short, it is very expensive. 
Amazon’s same-day delivery service, for example, systematically tended to exclude predominantly black neighbourhoods in the 27 metropolitan areas where it was available, Bloomberg found. This could have been prevented by an ethical impact analysis that considered the discriminatory effects of seemingly simple algorithmic decisions. The near-instantaneous spread of digital information means that some of the costs of misinformation may be hard to reverse, especially when confidence and trust are undermined. The tech industry can and must do better to ensure the internet meets its potential to support individuals’ wellbeing and social good. We need an ethical infosphere to save the world and ourselves from ourselves, but restoring that infosphere requires a gigantic, ecological effort. We must rebuild trust through credibility, transparency and accountability – and a high degree of patience, coordination and determination. There are some reasons to be cheerful. In April 2016, the British government agreed with the recommendation of the House of Commons’ Science and Technology Committee that the government should establish a Council of Data Ethics. Such an open and independent advisory forum would bring all stakeholders together to participate in the dialogue, decision-making and implementation of solutions to common ethical problems brought about by the information revolution. In September 2016, Amazon, DeepMind, Facebook, IBM, Microsoft and Google (whom I advised on the right to be forgotten) established a new ethical body called the Partnership on Artificial Intelligence to Benefit People and Society. The Royal Society, the British Academy and the Alan Turing Institute, the national institute for data science, are working on regulatory frameworks for managing personal data, and in May 2018, Europe’s new General Data Protection Regulation will come into effect, strengthening the rights of individuals and their personal information.
All these initiatives show a growing interest in how online platforms can be held more responsible for the content they provide, not unlike newspapers. We need to shape and guide the future of the digital, and stop making it up as we go along. It is time to work on an innovative blueprint for a better kind of infosphere.
Van Bunderen C.C., VU University Amsterdam |
Van Nieuwpoort I.C., VU University Amsterdam |
Arwert L.I., VU University Amsterdam |
Heymans M.W., VU University Amsterdam |
And 4 more authors.
Journal of Clinical Endocrinology and Metabolism | Year: 2011
Context: Adults with GH deficiency (GHD) have a decreased life expectancy. The effect of GH treatment on mortality remains to be established. Objective: This nationwide cohort study investigates the effect of GH treatment on all-cause and cause-specific mortality and analyzes patient characteristics influencing mortality in GHD adults. Design, Setting, and Patients: Patients in the Dutch National Registry of Growth Hormone Treatment in Adults were retrospectively monitored (1985-2009) and subdivided into treatment (n = 2229), primary (untreated, n = 109), and secondary control (partly treated, n = 356) groups. Main Outcome Measures: Standardized mortality ratios (SMR) were calculated for all-cause, malignancy, and cardiovascular disease (CVD) mortality. Expected mortality was obtained from cause-, sex-, calendar year-, and age-specific death rates from national death and population counts. Results: In the treatment group, 95 patients died compared to 74.6 expected [SMR 1.27 (95% confidence interval, 1.04-1.56)]. Mortality was higher in women than in men. After exclusion of high-risk patients, the SMR for CVD mortality remained increased in women. Mortality due to malignancies was not elevated. In the control groups mortality was not different from the background population. Univariate analyses demonstrated sex, GHD onset, age, and underlying diagnosis as influencing factors. Conclusions: GHD men receiving GH treatment have a mortality rate not different from the background population. In women, after exclusion of high-risk patients, mortality was not different from the background population except for CVD. Mortality due to malignancies was not elevated in adults receiving GH treatment. Besides gender, the heterogeneous etiology also influences mortality in GHD adults receiving GH treatment. Copyright © 2011 by The Endocrine Society.
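The headline result above can be sanity-checked from the numbers in the abstract: an SMR is the ratio of observed to expected deaths, with confidence limits derived from the Poisson distribution of the observed count. A minimal sketch follows; the function name is illustrative, and Byar's approximation is one common choice of interval method (the abstract does not state which method the authors used).

```python
import math

def smr_with_ci(observed, expected, z=1.96):
    """Standardized mortality ratio with an approximate 95% CI.
    Limits for the observed Poisson count use Byar's approximation
    to the exact Poisson confidence limits."""
    smr = observed / expected
    lower = observed * (1 - 1 / (9 * observed)
                        - z / (3 * math.sqrt(observed))) ** 3 / expected
    upper = (observed + 1) * (1 - 1 / (9 * (observed + 1))
                              + z / (3 * math.sqrt(observed + 1))) ** 3 / expected
    return smr, lower, upper

# Figures quoted in the abstract: 95 observed deaths vs. 74.6 expected
smr, lower, upper = smr_with_ci(95, 74.6)
print(f"SMR = {smr:.2f} (95% CI {lower:.2f}-{upper:.2f})")
```

This reproduces the SMR of 1.27 with limits of roughly 1.03-1.56, in line with the reported 1.27 (1.04-1.56); the small difference in the lower limit reflects the choice of interval method.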
Logemann H.N.A., University Utrecht |
Bocker K.B.E., Turing Institute |
Deschamps P.K.H., University Utrecht |
Kemner C., University Utrecht |
Kenemans J.L., University Utrecht
Pharmacology Biochemistry and Behavior | Year: 2013
Understanding the neuropharmacology of inhibition is of importance to fuel optimal treatment for disorders such as Attention Deficit/Hyperactivity Disorder. The aim of the present study was to assess the effect of noradrenergic antagonism by clonidine on behavioral-performance and brain-activity indices of inhibition. A placebo-controlled, double-blind, randomized, crossover design was implemented. Male (N = 21) participants performed in a visual stop signal task while EEG was recorded under clonidine in one session and under placebo in another. We expected that 100 μg clonidine would have a negative effect on EEG indices of inhibition, the Stop N2 and Stop P3. Furthermore, we expected that clonidine would negatively affect the behavioral measure of inhibition, the stop signal reaction time (SSRT). Behavioral analyses were performed on data of 17 participants, EEG analyses on a subset (N = 13). Performance data suggested that clonidine negatively affected attention (response variability, omissions) without affecting inhibition as indexed by SSRT. Electrophysiological data show that clonidine reduced the Stop P3, but not the Stop N2, indicating a partial negative effect on inhibition. Results show that it is unlikely that the Stop P3 reduction was related to the effect of clonidine on lapses of attention and on peripheral cardiovascular functioning. In conclusion, the current dose of clonidine had a negative effect on attention and a partial effect on inhibitory control. This inhibitory effect was restricted to the dorsal region of the prefrontal cortex (presumably the superior frontal gyrus) as opposed to the ventral region of the prefrontal cortex (right inferior frontal gyrus). © 2013 Elsevier Inc. All rights reserved.
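For readers unfamiliar with the stop signal task: SSRT cannot be observed directly, because a successful stop produces no response. A standard estimate is the integration method, which takes the go-trial reaction time at the quantile equal to the probability of responding despite a stop signal, then subtracts the mean stop-signal delay. A minimal sketch with invented numbers (not data from this study):

```python
def ssrt_integration(go_rts, mean_ssd, p_respond):
    """Integration-method estimate of stop-signal reaction time:
    the go-RT at the p(respond | stop signal) quantile of the go-RT
    distribution, minus the mean stop-signal delay (all in ms)."""
    rts = sorted(go_rts)
    idx = min(int(p_respond * len(rts)), len(rts) - 1)
    return rts[idx] - mean_ssd

# Invented example: ten go-trial RTs, 50% failed stops, mean SSD of 250 ms
go_rts = [420, 450, 460, 480, 500, 510, 530, 550, 580, 620]
ssrt = ssrt_integration(go_rts, mean_ssd=250, p_respond=0.5)
print(ssrt)  # 260 ms
```

Real analyses typically interpolate the quantile and estimate per participant; this sketch only shows the arithmetic of the method.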
Agency: GTR | Branch: EPSRC | Program: | Phase: Research Grant | Award Amount: 42.00M | Year: 2015
The work of the Alan Turing Institute will enable knowledge and predictions to be extracted from large-scale and diverse digital data. It will bring together the best people, organisations and technologies in data science for the development of foundational theory, methodologies and algorithms. These will inform scientific and technological discoveries, create new business opportunities, accelerate solutions to global challenges, inform policy-making, and improve the environment, health and infrastructure of the world in an Age of Algorithms.
News Article | December 13, 2016
LONDON, 13-Dec-2016 — /EuropaWire/ — We are delighted to congratulate Zoubin Ghahramani, Faculty Fellow at The Alan Turing Institute and Professor of Information Engineering at the University of Cambridge, on the news that his AI start-up has been acquired by tech giant Uber as their new research arm. According to an article published in The New York Times, Uber intend ‘to apply A.I. in areas like self-driving vehicles, along with solving other technological challenges through machine learning.’ The team from Geometric Intelligence will form the initial core of ‘Uber AI Labs’, a new division of the tech giant dedicated to cutting-edge research in artificial intelligence and machine learning. Two years ago, Zoubin and Gary Marcus, of New York University, founded Geometric Intelligence, an artificial intelligence startup that vowed to surpass the deep learning systems under development at internet giants like Google and Facebook. At The Alan Turing Institute, Zoubin’s research interests include statistical machine learning, Bayesian nonparametrics, scalable inference, deep learning, probabilistic programming, Bayesian optimisation, and automating data science. His Automatic Statistician project aims to automate the exploratory analysis and modelling of data, discovering good models for data and generating a human-interpretable natural language summary of the analysis. He and his group have also worked on automating inference (through probabilistic programming) and on automating the allocation of computational resources. “It is terrific to see that a small UK start-up co-founded by an Alan Turing Institute Fellow will form the new AI research arm at Uber, and a superb example of how cutting-edge research can be applied to major industry challenges. Many congratulations to Zoubin and the entire team from Geometric Intelligence.”
News Article | October 28, 2016
Why SKY, ITV and ADT are unlocking the power of data. • Government backs use of data science with launch of £42 million institute opened by Minister of Universities and Science, Jo Johnson (Boris’ brother) and Martha Lane Fox. • Top marketers recognise data is one of the most important things to drive the twenty-first century economy. Big brands are unlocking the power of data science to help inform their advertising strategies to get ahead of the crowd. Using unique social intelligence tools developed by leading scientists, they can analyse millions of tweets by hundreds of thousands of people in real time. The cutting-edge technology used to consume and analyse vast quantities of data was recognised recently at the launch of the Alan Turing Institute (on Wednesday 11 November). The £42 million government-funded institute was opened by Minister of Universities and Science, Jo Johnson and Martha Lane Fox and showcased the vision for the newly created Institute. Data visualisations of UK cities were presented at the event to show how the way people come together to form communities differs across the UK. Leeds business Bloom Agency created the visualisations by analysing 23 million tweets from 300,000 individuals over one month. Bloom Agency’s Chief Data Scientist, Peter Laflin, developed the Whisper tool which created the data visualisations. He said: ‘Our work with the Universities of Oxford and Strathclyde using social media data maps out the complex structures that exist in cities, and doing this in real time, is a great example of a new data source being used to tackle previously intractable problems. This work is intended to help city leaders to understand the impact of the decisions they make about how cities should be run and which policies should receive civic support. This is one step on the road to creating a ‘smart city’, where we use data to help us live more sustainably, efficiently and in harmony with others.
This means problems of everyday life could be solved such as how to best get to work, where to park, even which street lights need replacement bulbs. It’s exactly the same technology we use with our clients, to help them understand how their target audiences behave so we can tailor specific content for them. We used Whisper to help mine the information that informed the making of the recent Thierry Henry Sky advert which the Mirror called ‘the greatest thing in the history of the universe’. What is clear is that data is one of the most important things that will drive our twenty-first century economy, our academic understanding and our progress as a society. Data is allowing us a view into problems previously too complex to investigate. And it seems that marketers at the top of their game already recognise that.’ The newly launched Alan Turing Institute brings together top leaders in advanced mathematics and computing science from across the UK including Bloom’s Peter Laflin. Peter added: ‘The institute will play a pivotal role in connecting together industry, academia and the global economy in such a way that the lack of clarity around ‘data science’ is skilfully and cleverly resolved whilst providing the motivation for world-class innovation with data science.’ Bloom Agency is an integrated marketing agency. Bloom is a discovery agency that embraces change and technology and places great insight and strategically led creativity at the heart of everything it does. In fact, all of their work is based on insight and data. Bloom Agency’s unique listening tool Whisper measures True Influence, and the team believes that great insight produces creative work that is powerful, effective and gets results. Peter Laflin is Bloom’s Chief Data Scientist; he uses data science to inform Bloom’s strategically led creative work.
His specialist team use data science and a set of uniquely developed strategy and insight tools to help clients to make better business decisions. Bloom Agency has used Professor Peter Grindrod CBE’s academic research to create and evolve Whisper and Professor Grindrod is a Non-Executive Director of the company. Peter Laflin works closely with Professor Grindrod, who is a Board Director at the Alan Turing Institute and is interested in the scope of the work that the Alan Turing Institute will undertake. The Alan Turing Institute is a joint venture between the universities of Cambridge, Edinburgh, Oxford, Warwick, UCL and the Engineering and Physical Sciences Research Council (EPSRC). The Institute will promote the development and use of advanced mathematics, computer science, algorithms and big data for human benefit. The Institute will bring together leaders in advanced mathematics and computing science from the five lead universities and other partners and will conduct first class research and development in an environment that brings together theory and practical application. It will build on the UK’s existing academic strengths and help position the country as a world leader in the analysis and application of big data and algorithm research. The Institute is being funded over five years with £42 million from the UK government. The university partners are contributing £5 million each, totalling £25 million. The Institute is based at the British Library at the heart of London’s Knowledge Quarter. Its work is expected to encompass a broad range of scientific disciplines and be relevant across multiple business sectors.
News Article | December 8, 2015
Cray is partnering with the Alan Turing Institute, the new U.K. data science research organization in London, to help the U.K. as it increases research in data science to benefit research and industry. Earlier this month Fiona Burgess, U.K. senior account manager, and I attended the launch of the institute. At the event, U.K. Minister for Science and Universities Jo Johnson paid tribute to Turing and his work. Institute director Professor Andrew Blake told the audience that the Turing Institute is about much more than just big data — it is about data science, analyzing that data and gaining a new understanding that leads to decisions and actions. Alan Turing was a pioneering British computer scientist. He has become a household name in the U.K. following publicity surrounding his role in breaking the Enigma machine ciphers during the Second World War. This was a closely guarded secret until a few years ago, but has recently become the subject of numerous books and several films. Turing was highly influential in the development of computer science, providing a formalization of the concepts of algorithm and computation with the Turing machine. After the war, he worked at the National Physical Laboratory, where he designed ACE, one of the first stored-program computers. The Alan Turing Institute is a joint venture between the universities of Cambridge, Edinburgh, Oxford, Warwick, University College London, and the U.K. Engineering and Physical Science Research Council (EPSRC). The Institute received initial funding in excess of £75 million ($110 million) from the U.K. government, the university partners and other business organizations, including the Lloyd’s Register Foundation. The Turing Institute will, among other topics, research how knowledge and predictions can be extracted from large-scale and diverse digital data. It will bring together people, organizations and technologies in data science for the development of theory, methodologies and algorithms. The U.K. 
government is looking to this new Institute to enable the science community, commerce and industry to realize the value of big data for the U.K. economy. Cray will be working with the Turing Institute and EPSRC to provide data analytics capability to the U.K.’s data sciences community. EPSRC’s ARCHER supercomputer, a Cray XC30 system based at the University of Edinburgh, has been chosen for this work. Much as we worked with NERSC to port Docker to Cray systems, we will be working with ATI to port analytics software to ARCHER and then XC systems generally. ARCHER is currently the largest supercomputer for scientific research in the U.K. — with its recent upgrade ARCHER’s 118,080 cores can access in excess of 300 TB of memory. What sort of problem might need that amount of processing power? Genomics England is collecting around 200 GB of DNA sequence data from each of 100,000 people. Finding patterns in all this information will be a mammoth task! ATI have put together a wide-ranging programme of workshops and data science summits, details of which can be found on their website. Duncan Roweth is a principal engineer in the Cray CTO Office in Bristol, U.K.
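The scale of the Genomics England task mentioned above is easy to check from the figures quoted (a rough decimal-unit calculation, ignoring compression and replication):

```python
# 200 GB of sequence data per person, for 100,000 people
per_person_gb = 200
people = 100_000
total_gb = per_person_gb * people
total_pb = total_gb / 1_000_000  # 1 PB = 1,000,000 GB in decimal units
print(f"{total_pb:.0f} PB")  # 20 PB
```

Around 20 petabytes of raw sequence data, before any derived analysis products are stored.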
News Article | February 23, 2017
For many it is no more than the first port of call when a niggling question raises its head. Found on its pages are answers to mysteries from the fate of male anglerfish, the joys of dorodango, and the improbable death of Aeschylus. But beneath the surface of Wikipedia lies a murky world of enduring conflict. A new study from computer scientists has found that the online encyclopedia is a battleground where silent wars have raged for years. Since Wikipedia launched in 2001, its millions of articles have been ranged over by software robots, or simply “bots”, that are built to mend errors, add links to other pages, and perform other basic housekeeping tasks. In the early days, the bots were so rare they worked in isolation. But over time, the number deployed on the encyclopedia exploded with unexpected consequences. The more the bots came into contact with one another, the more they became locked in combat, undoing each other’s edits and changing the links they had added to other pages. Some conflicts only ended when one or other bot was taken out of action. “The fights between bots can be far more persistent than the ones we see between people,” said Taha Yasseri, who worked on the study at the Oxford Internet Institute. “Humans usually cool down after a few days, but the bots might continue for years.” The findings emerged from a study that looked at bot-on-bot conflict in the first ten years of Wikipedia’s existence. The researchers at Oxford and the Alan Turing Institute in London examined the editing histories of pages in 13 different language editions and recorded when bots undid other bots’ changes. They did not expect to find much. The bots are simple computer programs that are written to make the encyclopedia better. They are not intended to work against each other. “We had very low expectations to see anything interesting. When you think about them they are very boring,” said Yasseri. 
“The very fact that we saw a lot of conflict among bots was a big surprise to us. They are good bots, they are based on good intentions, and they are based on the same open source technology.” While some conflicts mirrored those found in society, such as the best names to use for contested territories, others were more intriguing. Describing their research in a paper entitled Even Good Bots Fight in the journal Plos One, the scientists reveal that among the most contested articles were pages on former president of Pakistan Pervez Musharraf, the Arabic language, Niels Bohr and Arnold Schwarzenegger. One of the most intense battles played out between Xqbot and Darknessbot which fought over 3,629 different articles between 2009 and 2010. Over the period, Xqbot undid more than 2,000 edits made by Darknessbot, with Darknessbot retaliating by undoing more than 1,700 of Xqbot’s changes. The two clashed over pages on all sorts of topics, from Alexander of Greece and Banqiao district in Taiwan to Aston Villa football club. Another bot named after Tachikoma, the artificial intelligence in the Japanese science fiction series Ghost in the Shell, had a two-year running battle with Russbot. The two undid more than a thousand edits by the other on more than 3,000 articles ranging from Hillary Clinton’s 2008 presidential campaign to the demography of the UK. The study found striking differences in the bot wars that played out on the various language editions of Wikipedia. German editions had the fewest bot fights, with bots undoing one another’s edits on average only 24 times in a decade. But the story was different on the Portuguese Wikipedia, where bots undid the work of other bots on average 185 times in ten years. The English version saw bots meddling with each other’s changes on average 105 times a decade. The findings show that even simple algorithms that are let loose on the internet can interact in unpredictable ways.
In many cases, the bots came into conflict because they followed slightly different rules to one another. Yasseri believes the work serves as an early warning to companies developing bots and more powerful artificial intelligence (AI) tools. An AI that works well in the lab might behave unpredictably in the wild. “Take self-driving cars. A very simple thing that’s often overlooked is that these will be used in different cultures and environments,” said Yasseri. “An automated car will behave differently on the German autobahn to how it will on the roads in Italy. The regulations are different, the laws are different, and the driving culture is very different,” he said. As more decisions, options and services come to depend on bots working properly together, harmonious cooperation will become increasingly important. As the authors note in their latest study: “We know very little about the life and evolution of our digital minions.” Earlier this month, researchers at Google’s DeepMind set AIs against one another to see if they would cooperate or fight. When the AIs were released on an apple-collecting game, the scientists found that the AIs cooperated while apples were plentiful, but as soon as supplies got short, they turned nasty. It is not the first time that AIs have run into trouble. In 2011, scientists in the US recorded a conversation between two chatbots. They bickered from the start and ended up arguing about God. Hod Lipson, director of the Creative Machines Lab at Columbia University in New York, said the work was a “fascinating example of the complex and unpredictable behaviours that emerge when AI systems interact with each other.” “Often people are concerned about what AI systems will ultimately do to people,” he said. “But what this and similar work suggests is that what AI systems might do to each other might be far more interesting. And this isn’t just limited to software systems – it’s true also for physically embodied AI. 
Imagine ACLU drones watching over police drones, and vice versa. It’s going to be interesting.”
News Article | February 23, 2017
Researchers say 'benevolent bots', software robots designed to improve articles on Wikipedia, sometimes have online 'fights' over content that can continue for years. Editing bots on Wikipedia undo vandalism, enforce bans, check spelling, create links and import content automatically, whereas other bots (which are non-editing) can mine data, identify data or identify copyright infringements. The team analysed how much the bots disrupted Wikipedia, observing how they interacted on 13 different language editions over ten years (from 2001 to 2010). They found that bots interacted with one another, whether or not this was by design, and it led to unpredictable consequences. The research paper, published in PLOS ONE, concludes that bots are more like humans than you might expect. Bots appear to behave differently in culturally distinct online environments. The paper says the findings are a warning to those using artificial intelligence for building autonomous vehicles, cyber security systems or for managing social media. It suggests that scientists may have to devote more attention to bots' diverse social life and their different cultures. The research paper by the University of Oxford and the Alan Turing Institute in the UK explains that although the online world has become an ecosystem of bots, our knowledge of how they interact with each other is still rather poor. Although bots are automatons that do not have the capacity for emotions, bot-to-bot interactions are unpredictable and play out in distinctive ways. It finds that German editions of Wikipedia had the fewest conflicts between bots, with each undoing another's edits 24 times, on average, over ten years. This shows relative efficiency, says the research paper, when compared with bots on the Portuguese Wikipedia edition, which undid another bot's edits 185 times, on average, over ten years.
Bots on English Wikipedia undid another bot's work 105 times, on average, over ten years, three times the rate of human reverts, says the paper. The findings show that even simple autonomous algorithms can produce complex interactions that result in unintended consequences - 'sterile fights' that may continue for years, or reach deadlock in some cases. The paper says while bots constitute a tiny proportion (0.1%) of Wikipedia editors, they stand behind a significant proportion of all edits. Although such conflicts represent a small proportion of the bots' overall editorial activity, these findings are significant in highlighting their unpredictability and complexity. Smaller language editions, such as the Polish Wikipedia, have far more content created by bots than the large language editions, such as English Wikipedia. Lead author Dr Milena Tsvetkova, from the Oxford Internet Institute, said: 'We find that bots behave differently in different cultural environments and their conflicts are also very different to the ones between human editors. This has implications not only for how we design artificial agents but also for how we study them. We need more research into the sociology of bots.' The paper was co-authored by the principal investigator of the EC-Horizon2020-funded project, HUMANE, Professor Taha Yasseri, also from the Oxford Internet Institute. He added: 'The findings show that even the same technology leads to different outcomes depending on the cultural environment. An automated vehicle will drive differently on a German autobahn to how it will through the Tuscan hills of Italy. Similarly, the local online infrastructure that bots inhabit will have some bearing on how they behave and their performance. Bots are designed by humans from different countries so when they encounter one another, this can lead to online clashes. 
We see differences in the technology used in the different Wikipedia language editions, and the different cultures of the communities of Wikipedia editors involved create complicated interactions. This complexity is a fundamental feature that needs to be considered in any conversation related to automation and artificial intelligence.' Professor Luciano Floridi, also an author of the paper, remarked: 'We tend to forget that coordination even among collaborative agents is often achieved only through frameworks of rules that facilitate the wanted outcomes. This infrastructural ethics or infra-ethics needs to be designed as much and as carefully as the agents that inhabit it.' The research finds that the number of reverts is smaller for bots than for humans, but the bots' behaviour is more varied, and conflicts involving bots last longer and are triggered later. The distribution of time between successive reverts for humans peaks at 2 minutes, 24 hours and one year, says the paper. The first bot-to-bot revert happened a month later, on average, but further reverts often continued for years. The paper suggests that humans use automatic tools that report live changes and can react more quickly, whereas bots systematically crawl over web articles and can be restricted in the number of edits allowed. The fact that bots' conflicts are longstanding also flags that humans are failing to spot the problems early enough, suggests the paper. For more information, contact the University of Oxford News Office on +44 (0) 1865 280534 or email: firstname.lastname@example.org *The paper, 'Even Good Bots Fight: The Case of Wikipedia', is by Milena Tsvetkova, Ruth Garcia-Gavilanes, Luciano Floridi, and Taha Yasseri. *It will be published in PLOS ONE on Thursday, February 23, 2017 at 2 pm (Eastern Time). Once live, the article can be found at: http://journals.
*The team identified bots' editorial activity mainly through Wikipedia's own flagging system, which requires human editors to create separate accounts for bots. Bot account names have to show that the author is a bot. The researchers also trawled the Wikipedia API pages (Application Programming Interface), checking the User page for each suspected bot account to link bots to their human owners. They modelled interactions between pairs of bots involved in successive reverts as trajectories that trace who reverts more over time. Then, they used simple machine learning techniques to identify different types of trajectories and analyse how commonly they occur for bots versus humans, and across the different language editions of Wikipedia. *Bots vary from web crawlers for search engines, to chatbots for online customer service, spambots on social media, and content editing in online collaboration communities.
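The revert-counting step described above can be illustrated with a toy sketch. The data layout, page titles and counts below are invented for illustration; the real study worked from full Wikipedia edit histories and the site's own bot-flagging system.

```python
from collections import Counter

# Invented toy edit history: each entry records who edited a page and,
# if the edit undid a previous one, whose edit was reverted.
edits = [
    {"page": "Pervez Musharraf", "editor": "Xqbot",       "reverted": "Darknessbot"},
    {"page": "Pervez Musharraf", "editor": "Darknessbot", "reverted": "Xqbot"},
    {"page": "Aston Villa F.C.", "editor": "Xqbot",       "reverted": "Darknessbot"},
    {"page": "Niels Bohr",       "editor": "Russbot",     "reverted": None},
]

# Accounts flagged as bots (on real Wikipedia this comes from the bot flag
# on the account, which the researchers retrieved via the API)
bot_accounts = {"Xqbot", "Darknessbot", "Russbot"}

# Count bot-on-bot reverts per ordered pair (reverter, reverted)
bot_on_bot = Counter(
    (e["editor"], e["reverted"])
    for e in edits
    if e["editor"] in bot_accounts and e["reverted"] in bot_accounts
)
print(bot_on_bot[("Xqbot", "Darknessbot")])  # 2
```

Sequences of such ordered pairs per article, laid out over time, give the revert trajectories the researchers then classified with machine learning.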