Movassagh M.,University of Cambridge |
Choy M.-K.,University of Cambridge |
Cordeddu L.,University of Cambridge |
Haider S.,Computer Laboratory |
And 8 more authors.
Circulation | Year: 2011
Background: The epigenome refers to marks on the genome, including DNA methylation and histone modifications, that regulate the expression of underlying genes. A consistent profile of gene expression changes in end-stage cardiomyopathy led us to hypothesize that distinct global patterns of the epigenome may also exist. Methods and Results: We constructed genome-wide maps of DNA methylation and histone-3 lysine-36 trimethylation (H3K36me3) enrichment for cardiomyopathic and normal human hearts. More than 506 Mb of sequence per library was generated by high-throughput sequencing, allowing us to assign methylation scores to approximately 28 million CG dinucleotides in the human genome. DNA methylation was significantly different in promoter CpG islands, intragenic CpG islands, gene bodies, and H3K36me3-enriched regions of the genome. DNA methylation differences were present in promoters of upregulated genes but not downregulated genes. H3K36me3 enrichment itself was also significantly different in coding regions of the genome. Specifically, abundance of RNA transcripts encoded by the DUX4 locus correlated with differential DNA methylation and H3K36me3 enrichment. In vitro, Dux gene expression was responsive to a specific inhibitor of DNA methyltransferase, and Dux siRNA knockdown led to reduced cell viability. Conclusions: Distinct epigenomic patterns exist in important DNA elements of the cardiac genome in human end-stage cardiomyopathy. The epigenome may control the expression of local or distal genes with critical functions in myocardial stress response. If epigenomic patterns track with disease progression, assays for the epigenome may be useful for assessing prognosis in heart failure. Further studies are needed to determine whether and how the epigenome contributes to the development of cardiomyopathy. © 2011 American Heart Association. All rights reserved.
News Article | November 24, 2016
After designing a Civil War computer game for an eighth-grade project, Sonia Uppal was intrigued by programming. But her school material was just too dry and boring for a kid to endure. "It was awful," she says, and she gave up computer science. At a technology fair the next year, though, a chance encounter with a $35 computer called the Raspberry Pi reignited her interest. The palm-size machine is a proudly naked circuit board brazenly showing all its chips and electronic connectors. "It was magical to me," she says. "I picked one up and started tinkering with it. You could do so much with this little thing." For openers, Uppal, 17, taught herself Python, one of today's hottest programming languages. Next, she raised $600 for a project called Pi à la Code, buying 10 of the little computers while at home in Los Altos Hills, California, and taking them to the hill town of Kasauli in northwestern India. There, she taught programming to 25 kids while still a high school sophomore and junior. Now she's headed to the University of California, Berkeley, eager to study computer science. Her story is exactly why Raspberry Pis may be coming to a school near you. The computers burst onto the scene in 2012, six years after a team of computer scientists at the University of Cambridge Computer Laboratory realized fewer and fewer students had any real understanding of computers. Together, they designed a deliberately bare-bones computer they hoped would encourage a new generation of hobbyist programmers, just like the old Commodore, Tandy and BBC Micro machines did in the 1980s. The result was the Raspberry Pi, a credit card-size circuit board exposing students to computer fundamentals. "These are the USB ports where you plug in a keyboard -- this is input," says Matt Richardson, evangelist for the Raspberry Pi Foundation, the charity that designs the computer and, through its trading operation, oversees sales. "This is where you plug in the monitor -- that's an output. 
That's a computer lesson right there." The little computer is a commercial success, selling in 114 countries and shipping 8 million units over four years. "When we first launched Raspberry Pi, we thought we'd ship 10,000 units a year," says Claire Doyle, head of the Raspberry Pi division for Element14, which makes and distributes the computer. Instead, the explosive growth of the Raspberry Pi took the company by surprise as schools and tinkerers built a movement around it. Because the Raspberry Pi Foundation is a charity, it funnels all profits back into R&D and things like providing free teacher training and resource materials. It also plays a role in some interesting projects, such as working with the Zoological Society of London and the Kenya Wildlife Service, which have put up camera traps to spot illegal rhino and elephant poaching. The computing industry doesn't want us to dirty our hands with chores like formatting hard drives, administering network privileges and updating antivirus software. We don't even have to type if we don't want to; we now tell our gadgets what to do just by speaking to them. Sure, making gadgets easier to use means more people can use them. But somebody has to know what makes these things tick. "The 'it just works' ethos from tech companies is phenomenal. It's changed people's lives for the better," says Raspberry Pi Foundation Chief Executive Philip Colligan. "But it's also robbed us of a sense of agency over the computer," he says. "I don't think we're going to get much innovation from people who don't understand how computing really works." That's where the Raspberry Pi comes in. Because you can plug lots of widgets into a Raspberry Pi -- cameras, thermometers, motors and blinking lights, for example -- it's a great foundation for all kinds of projects. "Are you interested in nature? Here's a bird box project," says Richardson. "Music? Here's how to make a synthesizer. Robots? Here's how to use Raspberry Pi as its brains." 
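The project-friendly, input-and-output nature of the Pi that Richardson describes is also how most Pi tutorials begin in code. Below is a minimal Python sketch of the classic first exercise, blinking an LED. The `FakePin` class is a hypothetical stand-in for a real GPIO library such as `gpiozero` or `RPi.GPIO`, so the example runs on any machine rather than only on a board:

```python
class FakePin:
    """Stand-in for a GPIO pin (hypothetical; real code would talk to hardware)."""
    def __init__(self):
        self.state = False  # LED starts off

    def write(self, value):
        self.state = bool(value)

    def read(self):
        return self.state


def blink(pin, times):
    """Toggle an LED pin the given number of times, returning the states seen."""
    states = []
    for _ in range(times):
        pin.write(not pin.read())  # flip the LED
        states.append(pin.read())
    return states


led = FakePin()
print(blink(led, 4))  # [True, False, True, False]
```

On a real board, the stand-in would be replaced by an actual pin object wired to an LED; the toggling logic stays the same.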
That's why it appeals to Melissa Jurist, who designs school programs for children in her job directing K-12 outreach at the University of Delaware's College of Engineering. Jurist especially likes it when kids tackle Pi projects that are "slightly subversive, something that could yield something gross or funny, or something parents probably don't know how to do." "Allowing kids to catapult, shoot or hurl disgusting stuff at each other using a Pi is also at the top of my list," says Jurist. "Cool kids may have iPhones, but cooler kids know how to manipulate and create with their devices." But the Raspberry Pi can be used for more than child's play. Computer engineering students at the Rochester Institute of Technology often use a Raspberry Pi as the brains for their senior projects. These range from devices that monitor home security and control home electronics to a weather station that posts reports on Twitter. "Having a board with all the capabilities built in lets them focus on the application they're trying to get working," vaulting over low-level electronics work that would otherwise fritter away most of the year, says Roy Melton, a lecturer and engineer who teaches the students. And graduate students in Worcester Polytechnic Institute's Aerospace Engineering program put Pis on drones to help them navigate. "We need a fairly powerful computer that can fly," says Professor Raghvendra Cowlagi. "You can't find something under $50 that's so powerful. If you burn one, you just get another." The Raspberry Pi can do more than just raise the next generation of engineers, although that's plenty. With technology pervading every part of our lives, modern society needs a technically literate population, says Laura Blankenship, chair of computer science at the Baldwin School in Bryn Mawr, Pennsylvania, and a member of the Computer Science Teachers Association. 
Knowing how computers work can help Baldwin's middle-school classes wrestle with social issues -- like the conflicting needs of national security and individual privacy. Informed students also can question the tech giants whose products and services reach deep into their lives, says Blankenship. "Once they build something that's similar to a mobile phone, they can ask, 'Why is there software installed on a phone that I can't take off?'" There's a lot riding on the tiny computer. Who knew it could achieve so much? This story appears in the fall 2016 edition of CNET Magazine.
News Article | April 13, 2016
The researchers behind the study, led by the University of Cambridge, will present their results today (13 April) at the 25th International World Wide Web Conference in Montréal. The Cambridge researchers, working with colleagues from the University of Birmingham, Queen Mary University of London, and University College London, used data from approximately 37,000 users and 42,000 venues in London to build a network of Foursquare places and the parallel Twitter social network of visitors, adding up to more than half a million check-ins over a ten-month period. From this data, they were able to quantify the 'social diversity' of various neighbourhoods and venues by distinguishing between places that bring together strangers versus those that tend to bring together friends, as well as places that attract diverse individuals as opposed to those which attract regulars. When these social diversity metrics were correlated with wellbeing indicators for various London neighbourhoods, the researchers discovered that signs of gentrification, such as rising housing prices and lower crime rates, were the strongest in deprived areas with high social diversity. These areas had an influx of more affluent and diverse visitors, represented by social media users, and pointed to an overall improvement of their rank, according to the UK Index of Multiple Deprivation. The UK Index of Multiple Deprivation (IMD) is a statistical exercise conducted by the Department of Communities and Local Government, which measures the relative prosperity of neighbourhoods across England. The researchers compared IMD data for 2010, the year their social and place network data was gathered, with the IMD data for 2015, the most recent report. "We're looking at the social roles and properties of places," said Desislava Hristova from the University's Computer Laboratory, and the study's lead author. 
"We found that the most socially cohesive and homogenous areas tend to be either very wealthy or very poor, but neighbourhoods with both high social diversity and high deprivation are the ones which are currently undergoing processes of gentrification." This aligns with previous research, which has found that tightly-knit communities are more resistant to changes and resources remain within the community. This suggests that affluent communities remain affluent and poor communities remain poor because they are relatively isolated. Hristova and her co-authors found that of the 32 London boroughs, the borough of Hackney had the highest social diversity, and in 2010, had the second-highest deprivation. By 2015, it had also seen the most improvement on the IMD index, and is now an area undergoing intense gentrification, with house prices rising far above the London average, fast-decreasing crime rate and a highly diverse population. In addition to Hackney, Tower Hamlets, Greenwich, Hammersmith and Lambeth are also boroughs with high social diversity and high deprivation in 2010, and are now undergoing the process of gentrification, with all of the positive and negative effects that come along with it. The ability to predict the gentrification of neighbourhoods could help local governments and policy-makers improve urban development plans and alleviate the negative effects of gentrification while benefitting from economic growth. In order to measure the social diversity of a given place or neighbourhood, the researchers defined four distinct measures: brokerage, serendipity, entropy and homogeneity. Brokerage is the ability of a place to connect people who are otherwise disconnected; serendipity is the extent to which a place can induce chance encounters between its visitors; entropy is the extent to which a place is diverse with respect to visits; and homogeneity is the extent to which the visitors to a place are homogenous in their characteristics. 
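As a rough illustration of two of the four measures just defined, here is a minimal Python sketch. It assumes entropy is computed over the distribution of check-ins across distinct visitors, and homogeneity as the fraction of visitor pairs who are already friends; the paper's exact formulations may differ, so treat both functions as illustrative:

```python
import math
from collections import Counter

def place_entropy(visits):
    """Shannon entropy of a place's check-ins: higher values mean the visits
    are spread over a more diverse set of visitors."""
    counts = Counter(visits)
    total = sum(counts.values())
    return sum(-(c / total) * math.log2(c / total) for c in counts.values())

def homogeneity(pairwise_friend_flags):
    """Fraction of visitor pairs who are already friends (1 = friends, 0 = strangers):
    a crude stand-in for the paper's homogeneity measure."""
    if not pairwise_friend_flags:
        return 0.0
    return sum(pairwise_friend_flags) / len(pairwise_friend_flags)

# A venue visited once each by four different people is maximally diverse...
print(place_entropy(["a", "b", "c", "d"]))  # 2.0
# ...while one regular making every visit gives zero entropy.
print(place_entropy(["a", "a", "a", "a"]))  # 0.0
```

Correlating metrics like these, computed per venue and aggregated per neighbourhood, against deprivation-index changes is the spirit of the analysis described above.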
Within categories of places, the researchers found that some places were more likely places for friends to meet, and some were for more fleeting encounters. For example, in the food category, strangers were more likely to meet at a dumpling restaurant while friends were more likely to meet at a fried chicken restaurant. Similarly, friends were more likely to meet at a B&B, football match or strip club, while strangers were more likely to meet at a motel, art museum or gay bar. "We understand that people who diversify their contacts not only socially but also geographically have high social capital but what about places?" said Hristova. "We all have a general notion of the social diversity of places and the people that visit them, but we've attempted to formalise this - it could even be used as a specialised local search engine." For instance, while there are a number of ways a tourist can find a highly-recommended restaurant in a new city, the social role that a place plays in a city is normally only known by locals through experience. "Whether a place is touristy or quiet, artsy or mainstream could be integrated into mobile system design to help newcomers or tourists feel like locals," said Hristova.
News Article | December 14, 2016
Wiseguyreports.Com adds "Laboratory Information System (LIS) - Market Demand, Growth, Opportunities and Analysis of Top Key Player Forecast to 2021" to its research database. The report studies global sales (consumption) of Laboratory Information System (LIS) products, focusing on the United States, China, Europe and Japan. It profiles the top players in these regions, with sales, price, revenue and market share for each, and splits the global market from 2011 to 2021 (forecast) by region (United States, China, Europe, Japan), by product type (On-premises LIS, Cloud-Based LIS, Type III) and by application (Hospitals, Clinics, Application 3), giving sales, revenue, market share and growth rate for each segment.
Global Laboratory Information System (LIS) Sales Market Report 2016 - table of contents (excerpt):
1 Laboratory Information System (LIS) Overview
1.1 Product Overview and Scope of Laboratory Information System (LIS)
1.2 Classification: 1.2.1 On-premises LIS; 1.2.2 Cloud-Based LIS; 1.2.3 Type III
1.3 Application: 1.3.1 Hospitals; 1.3.2 Clinics; 1.3.3 Application 3
1.4 Market by Regions: United States, China, Europe and Japan - status and prospect (2011-2021)
1.5 Global Market Size (Value and Volume) of LIS (2011-2021): sales and growth rate; revenue and growth rate
7 Global Laboratory Information System (LIS) Manufacturers Analysis: 7.1 CompuGroup Medical; 7.2 McKesson Corporation; 7.3 SCC Soft Computer; 7.4 Cerner Corporation; 7.5 Sunquest Information Systems; 7.6 Agfa HealthCare; 7.7 Siemens Healthineers; 7.8 Sysmex Corporation; 7.9 A&T Corporation; 7.10 Merge Healthcare; 7.11 Orchard Software; 7.12 Epic Systems; 7.13 Medasys; 7.14 Psyche Systems; 7.15 GeniPulse Technologies. For each of the first ten vendors the report covers company basic information, manufacturing base and competitors; product type, application and specification (Type I, Type II); LIS sales, revenue, price and gross margin (2011-2016); and a main business overview.
For more information, please visit https://www.wiseguyreports.com/sample-request/821613-global-laboratory-information-system-lis-sales-market-report-2016
News Article | March 7, 2016
Researchers have designed a web-based platform that uses artificial neural networks to answer standard crossword clues better than existing commercial products specifically designed for the task. The system, which is freely available online, could help machines understand language more effectively. In tests against commercial crossword-solving software, the system, designed by researchers from the UK, US and Canada, was more accurate at answering clues that were single words (e.g. ‘culpability’ — guilt), a short combination of words (e.g. ‘devil devotee’ — Satanist), or a longer sentence or phrase (e.g. ‘French poet and key figure in the development of Symbolism’ — Baudelaire). The system can also be used as a ‘reverse dictionary’ in which the user describes a concept and the system returns possible words to describe that concept. The researchers used the definitions contained in six dictionaries, plus Wikipedia, to ‘train’ the system so that it could understand words, phrases and sentences — using the definitions as a bridge between words and sentences. Their results, published in the journal Transactions of the Association for Computational Linguistics, suggest that a similar approach may lead to improved output from more general language understanding and dialogue systems and information retrieval engines in general. All of the code and data behind the application has been made freely available for future research. “Over the past few years, there’s been a mini-revolution in machine learning,” said Felix Hill of the University of Cambridge’s Computer Laboratory, one of the paper’s authors. “We’re seeing a lot more usage of deep learning, which is especially useful for language perception and speech recognition.” Deep learning refers to an approach in which artificial neural networks with little or no prior ‘knowledge’ are trained to recreate human abilities using massive amounts of data. 
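The 'reverse dictionary' lookup described above can be pictured as nearest-neighbour search in an embedding space: the description is mapped to a vector, and the word whose embedding lies closest is returned. The tiny three-dimensional vectors below are invented purely for illustration; the actual system learns high-dimensional embeddings from hundreds of thousands of dictionary definitions:

```python
import math

# Toy "reverse dictionary": each word has an embedding vector, and a query
# description (already embedded) is matched to the nearest word by cosine
# similarity. All vectors here are made-up stand-ins.
EMBEDDINGS = {
    "guilt":      [0.9, 0.1, 0.0],
    "satanist":   [0.0, 0.8, 0.3],
    "baudelaire": [0.1, 0.2, 0.9],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def reverse_lookup(query_vec):
    """Return the word whose embedding best matches the embedded description."""
    return max(EMBEDDINGS, key=lambda w: cosine(EMBEDDINGS[w], query_vec))

# A query vector close to "guilt" (e.g. an embedded description of 'culpability'):
print(reverse_lookup([0.85, 0.15, 0.05]))  # prints: guilt
```

In the real system the hard part is the learned mapping from a free-text description to a query vector; once that exists, answering a crossword clue reduces to exactly this kind of similarity search.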
For this particular application, the researchers used dictionaries — training the model on hundreds of thousands of definitions of English words, plus Wikipedia. “Dictionaries contain just about enough examples to make deep learning viable, but we noticed that the models get better and better the more examples you give them,” said Hill. “Our experiments show that definitions contain a valuable signal for helping models to interpret and represent the meaning of phrases and sentences.” Working with Anna Korhonen from Cambridge’s Department of Theoretical and Applied Linguistics, and researchers from the Université de Montréal and New York University, Hill used the model as a way of bridging the gap between machines that understand the meanings of individual words and machines that can understand the meanings of phrases and sentences. “Despite recent progress in AI, problems involving language understanding are particularly difficult, and our work suggests many possible applications of deep neural networks to language technology,” said Hill. “One of the biggest challenges in training computers to understand language is recreating the many rich and diverse information sources available to humans when they learn to speak and read.” However, there is still a long way to go. For instance, when Hill’s system receives a query, the machine has no idea about the user’s intention or the wider context of why the question is being asked. Humans, on the other hand, can use their background knowledge and signals like body language to figure out the intent behind the query. Hill describes recent progress in learning-based AI systems in terms of behaviorism and cognitivism: two movements in psychology that affect how one views learning and education. Behaviorism, as the name implies, looks at behavior without looking at what the brain and neurons are doing, while cognitivism looks at the mental processes that underlie behavior. 
Deep learning systems, like the one built by Hill and his colleagues, reflect a cognitivist approach, but for a system to have something approaching human intelligence, it would have to have a little of both. “Our system can’t go too far beyond the dictionary data on which it was trained, but the ways in which it can are interesting, and make it a surprisingly robust question and answer system — and quite good at solving crossword puzzles,” said Hill. While it was not built with the purpose of solving crossword puzzles, the researchers found that it actually performed better than commercially-available products that are specifically engineered for the task. Existing commercial crossword-answering applications function in a similar way to a Google search, with one system able to reference over 1100 dictionaries. While this approach has advantages if you want to look up a definition verbatim, it works less well when you input a question or query that the model has never seen in training. It also makes it incredibly ‘heavy’ in terms of the amount of memory it requires. “Traditional approaches are like lugging many heavy dictionaries around with you, whereas our neural system is incredibly light,” said Hill. According to the researchers, the results show the effectiveness of definition-based training for developing models that understand phrases and sentences. They are currently looking at ways of enhancing their system, specifically by combining it with more behaviorist-style models of language learning and linguistic interaction. Citation: Hill, Felix et al. Learning to Understand Phrases by Embedding the Dictionary. Transactions of the Association for Computational Linguistics, [S.l.], v. 4, p. 17-30, feb. 2016. ISSN 2307-387X.
Gordon M.J.C.,Computer Laboratory |
Kaufmann M.,University of Texas at Austin |
Ray S.,University of Texas at Austin
Journal of Automated Reasoning | Year: 2011
We present a case study illustrating how to exploit the expressive power of higher-order logic to complete a proof whose main lemma is already proved in a first-order theorem prover. Our proof exploits a link between the HOL4 and ACL2 proof systems to show correctness of a cone of influence reduction algorithm, implemented in ACL2, with respect to the classical semantics of linear temporal logic, formalized in HOL4. © 2010 Springer Science+Business Media B.V.
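As a rough sketch of what a cone of influence reduction does (the ACL2 implementation verified in the paper, and the LTL semantics it is checked against, are far more involved), the algorithm keeps only those state variables that can transitively influence the variables mentioned in the property being checked. A hedged Python illustration, with an invented dependency graph:

```python
def cone_of_influence(deps, targets):
    """deps maps each variable to the variables its next-state function reads;
    return every variable that can influence any target variable, i.e. the
    transitive closure of the dependency relation starting from the targets."""
    cone, frontier = set(), set(targets)
    while frontier:
        v = frontier.pop()
        if v in cone:
            continue
        cone.add(v)
        frontier |= set(deps.get(v, ()))  # follow dependencies backwards
    return cone

# Illustrative circuit: 'out' reads 'a' and 'b'; 'unused' reads 'clk' but
# nothing the property mentions depends on it, so it can be sliced away.
deps = {"out": ["a", "b"], "a": ["clk"], "b": ["b"], "unused": ["clk"]}
print(sorted(cone_of_influence(deps, ["out"])))  # ['a', 'b', 'clk', 'out']
```

Model checking the reduced system over just the cone is sound for properties that mention only the target variables, which is the correctness claim the HOL4/ACL2 link is used to establish.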
News Article | November 2, 2016
Budding cyberspies will learn how to hack into drones and crack codes at a new cybersecurity boot camp backed by the government. Matt Hancock, the minister for digital and culture, said students would gain the skills needed to "fight cyber-attacks" and help keep the UK safe. The 10-week course has been "certified" by UK spy agency GCHQ. But some security experts raised questions about the need for the course and the intent behind it. "If I were a company, I would not hire security consultants who had been approved by GCHQ," said Prof Ross Anderson, who leads the security group at Cambridge University's Computer Laboratory. "I would simply not be able to trust them. GCHQ's goal is that no-one should be able to shield themselves from surveillance, ever," he told the BBC. The Cyber Retraining Academy will be operated by cybersecurity training firm Sans Institute. It will be funded as part of the government's £1.9bn National Cybersecurity Strategy. Sans Institute said "leading cybersecurity employers" would be able to track students' performance throughout the course, with a view to recruiting talented individuals. Would-be recruits must pass a series of competency tests to be considered for the boot camp, including a multiple-choice quiz before they can even submit an application. The 50 successful candidates will attend the academy in London in 2017, and will receive two years of training condensed into 10 weeks. Rik Ferguson of cybersecurity firm Trend Micro said the scheme could help people learn the skills to "hit the ground running" in a security-related role, but questioned why the scheme was needed. "Employers often complain about the 'cybersecurity skills gap' - a gap that I would argue doesn't exist," he told the BBC. "The problem is rather that employers are not looking beyond very narrowly specified certifications or degree courses in security-related subjects. 
"If advertising a cyber-retraining programme as 'drone hacking' is going to get individuals with the right character and curiosity applying for this course, then it can only be a good thing. "But obviously it takes more than 10 weeks, however intense, to create a well-rounded security professional."
News Article | January 28, 2016
Specialised computer software components to improve the security, speed and scale of data processing in cloud computing are being developed by a University of Cambridge spin-out company. The company, Unikernel Systems, which was formed by staff and postdoctoral researchers at the University's Computer Laboratory, has recently been acquired by San Francisco-based software company Docker Inc. Unikernels are small, potentially transient computer modules specialised to undertake a single task at the point in time when it is needed. Because of their reduced size, they are far more secure than traditional operating systems, and can be started up and shut down quickly and cheaply, providing flexibility and further security. They are likely to become increasingly used in applications where security and efficiency are vital, such as systems storing personal data and applications for the so-called Internet of Things (IoT) – internet-connected appliances and consumer products. "Unikernels provide the means to run the same application code on radically different environments from the public cloud to IoT devices," said Dr Richard Mortier of the Computer Laboratory, one of the company's advisors. "This allows decisions about where to run things to be revisited in the light of experience - providing greater flexibility and resilience. It also means software on those IoT devices is going to be a lot more reliable." Recent years have seen a huge increase in the amount of data that is collected, stored and processed, a trend that will only continue as increasing numbers of devices are connected to the internet. Most commercial data storage and processing now takes place within huge datacentres run by specialist providers, rather than on individual machines and company servers; the individual elements of this system are obscured to end users within the 'cloud'. One of the technologies that has been instrumental in making this happen is virtual machines. 
Normally, a virtual machine (VM) runs just like a real computer, with its own virtual operating system – just as your desktop computer might run Windows. However, a single real machine can run many VMs concurrently. VMs are general purpose, able to handle a wide range of jobs from different types of user, and capable of being moved across real machines within datacentres in response to overall user demand. The University's Computer Laboratory started research on virtualisation in 1999, and the Xen virtual machine monitor that resulted now provides the basis for much of the present-day cloud. Although VMs have driven the development of the cloud (and greatly reduced energy consumption), their inherent flexibility can come at a cost if their virtual operating systems are the generic Linux or Windows systems. These operating systems are large and complex, they have significant memory footprints, and they take time to start up each time they are required. Security is also an issue, because of their relatively large 'attack surface'. Given that many VMs are actually used to undertake a single function (e.g. acting as a company database), recent research has shifted to minimising complexity and improving security by taking advantage of the narrow functionality. And this is where unikernels come in. Researchers at the Computer Laboratory started restructuring VMs into flexible modular components in 2009, as part of the RCUK-funded MirageOS project. These specialised modules, or unikernels, are in effect the opposite of generic VMs. Each one is designed to undertake a single task; they are small, simple and quick, using just enough code to enable the relevant application or process to run (about 4% of a traditional operating system according to one estimate). 
The small size of unikernels also lends considerable security advantages, as they present a much smaller 'surface' to malicious attack, and also enable companies to separate out different data processing tasks in order to limit the effects of any security breach that does occur. Given that resource use within the cloud is metered and charged, they also provide considerable cost savings to end users. By the end of last year, the unikernel technology arising from MirageOS was sufficiently advanced that the team, led by Dr. Anil Madhavapeddy, decided to found a start-up company. The company, Unikernel Systems, was recently acquired by San Francisco-based Docker Inc. to accelerate the development and broad adoption of the technology, now envisaged as a critical element in the future of the Internet of Things. "This brings together one of the most significant developments in operating systems technology of recent years with one of the most dynamic startups that has already revolutionised the way we use cloud computing. This link-up will truly allow us all to 'rethink cloud infrastructure'," said Balraj Singh, co-founder and CEO of Unikernel Systems. "This acquisition shows that the Computer Laboratory continues to produce innovations that find their way into mainstream developments. It also shows the power of open source development to have impact and to be commercially successful," said Professor Andy Hopper, Head of the University of Cambridge Computer Laboratory.
News Article | November 28, 2016
An artificial intelligence system can predict how a scene will unfold and dream up a vision of the immediate future. Given a still image, the deep learning algorithm generates a mini video showing what could happen next. If it starts with a picture of a train station, it might imagine the train pulling away from the platform, for example. Or an image of a beach could inspire it to animate the motion of lapping waves.

Teaching AI to anticipate the future can help it comprehend the present. To understand what someone is doing when they’re preparing a meal, we might imagine that they will next eat it – an inference that is tricky for an AI to make. Such a system could also let an AI assistant recognise when someone is about to fall, or help a self-driving car foresee an accident.

“Any robot that operates in our world needs to have some basic ability to predict the future,” says Carl Vondrick at the Massachusetts Institute of Technology, part of the team that created the new system. “For example, if you’re about to sit down, you don’t want a robot to pull the chair out from underneath you.” Vondrick and his colleagues will present their work at a neural computing conference in Barcelona, Spain, on 5 December.

To develop their AI, the team trained it on 2 million videos from the image-sharing site Flickr, featuring scenes such as beaches, golf courses, train stations and babies in hospital. These videos were unlabelled, meaning they were not tagged with information to help an AI understand them. After this, the researchers gave the model still images and it produced its own micro-movies of what might happen next.

To teach the AI to make better videos, the team used an approach called adversarial networks. One network generates the videos, and the other judges whether they look real or fake.
The two get locked in competition: the video generator tries to make videos that best fool the other network, while the other network hones its ability to distinguish the generated videos from real ones.

At the moment, the videos are low-resolution and contain 32 frames, lasting just over 1 second. But they are generally sharp and show the right kind of movement for the scene: trains move forward in a straight trajectory while babies crumple their faces. Other attempts to predict video scenes, such as one by researchers at New York University and Facebook, have required multiple input frames and produced just a few future frames that are often blurry.

The videos still seem a bit wonky to a human and the AI has lots left to learn. For instance, it doesn’t realise that a train leaving a station should also eventually leave the frame. This is because it has no prior knowledge about the rules of the world; it lacks what we would call common sense. The 2 million videos – about two years of footage – are all the data it has to go on to understand how the world works. “That’s not that much in comparison to, say, a 10-year-old child, or how much evolution has seen,” says Vondrick.

That said, the work illustrates what can be achieved when computer vision is combined with machine learning, says John Daugman at the University of Cambridge Computer Laboratory. He says that a key aspect is an ability to recognise that there is a causal structure to the things that happen over time. “The laws of physics and the nature of objects mean that not just anything can happen,” he says. “The authors have demonstrated that those constraints can be learned.”

Vondrick is now scaling up the system to make larger, longer videos. He says that while it may never be able to predict exactly what will happen, it could show us alternative futures.
“I think we can develop systems that eventually hallucinate these reasonable, plausible futures of images and videos.” This article appeared in print under the headline “AI predicts the future”
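The generator-versus-judge dynamic described in this article can be illustrated with a toy sketch. Everything below is invented for illustration and is far simpler than the researchers' video networks: the "generator" is a single number `mu`, "real data" are scalars near 5.0 rather than video frames, and the judge is a tiny logistic classifier that, for stability in this toy, is refit from scratch each round.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def real_batch(n):
    # "Real" data: scalars clustered around 5.0, a stand-in for real footage.
    return [5.0 + random.gauss(0, 0.25) for _ in range(n)]

mu = 0.0      # the generator's only parameter: the mean of its fake samples
g_lr = 0.1    # generator learning rate

for _ in range(400):
    reals = real_batch(16)
    fakes = [mu + random.gauss(0, 0.25) for _ in range(16)]

    # The judge: a logistic classifier D(x) = sigmoid(w*x + b),
    # refit from scratch each round (a stabilising simplification).
    w = b = 0.0
    for _ in range(50):
        for x, y in [(r, 1.0) for r in reals] + [(f, 0.0) for f in fakes]:
            p = sigmoid(w * x + b)       # judge's "looks real" score
            w -= 0.01 * (p - y) * x      # logistic-regression gradient step
            b -= 0.01 * (p - y)

    # Generator step: nudge mu so the judge scores fakes as more "real"
    # (the gradient of log D(fake) with respect to mu is (1 - D) * w).
    for f in fakes:
        mu += g_lr * (1.0 - sigmoid(w * f + b)) * w / len(fakes)

# After training, the generator's samples should centre near the real mean, 5.0.
print(round(mu, 2))
```

The competition is visible in the two update rules: the judge's weights move to separate real from fake, while `mu` moves along the judge's own gradient to erase that separation. Once the fakes match the real distribution, the judge can do no better than guessing and the generator stops moving.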
News Article | March 7, 2016
Here’s a crossword puzzle clue for you: a tall, long-necked spotted ruminant of Africa. If you’re someone who knows African wildlife like the back of your hand, it may take you only a moment to come up with: Giraffe.

Researchers from the Univ. of Cambridge, New York Univ., and Université de Montréal have developed a web-based platform that can assist you with your morning crossword puzzle and act as a reverse dictionary. But as fun as solving crossword puzzles might be, the tool may have a deeper role to play when it comes to teaching artificial intelligence (AI) systems human language. The research behind the tool was published in Transactions of the Association for Computational Linguistics.

“To compile a bank of dictionary definitions for training the model, we started with all words in the target embedding space. For each of these words, we extracted dictionary-style definitions from five electronic sources: WordNet, The American Heritage Dictionary, The Collaborative International Dictionary of English, Wiktionary and Webster’s,” the researchers wrote in their study. “To allow models access to more factual knowledge than might be present in a dictionary (for instance, information about specific entities, places or people) we supplemented this training data with information extracted from Simple Wikipedia,” they added. The researchers have released the code for their system online for future research use.

The research represents an early step towards endowing machines with the ability to understand human language. Deep learning – in which scientists feed an artificial neural network massive amounts of data – is integral to this process. “Despite recent progress in AI, problems involving language understanding are particularly difficult, and our work suggests many possible applications of deep neural networks to language technology,” said co-author Felix Hill, of Cambridge’s Computer Laboratory, in a statement.
“One of the biggest challenges in training computers to understand language is recreating the many rich and diverse information sources available to humans when they learn to speak and read.”

Currently, the way computers learn language, according to Hill, is similar to the psychological framework referred to as cognitivism. To understand the larger context of human communication, though, researchers need to combine cognitivism with behaviorism, so that a machine is able to infer meaning. According to the Univ. of Cambridge, the researchers are looking into integrating behaviorist-style models of language learning and linguistic interaction into their system.
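The core idea behind such a reverse dictionary – mapping a definition into the same vector space as individual words, then answering with the nearest word – can be sketched in miniature. Everything below is invented for illustration (the 3-dimensional vectors, the `reverse_lookup` helper); the published system learns high-dimensional embeddings with neural networks trained on the dictionary sources quoted above.

```python
import math

# Toy "embedding space": hand-made 3-d vectors standing in for learned
# word embeddings (hypothetical values, chosen for illustration only).
word_vectors = {
    "giraffe":  [0.9, 0.8, 0.1],
    "elephant": [0.8, 0.2, 0.1],
    "train":    [0.1, 0.1, 0.9],
}

# Vectors for words appearing in definitions (in a real system these
# live in the same learned space as the candidate words above).
definition_vectors = {
    "tall":    [0.7, 0.9, 0.0],
    "necked":  [0.8, 0.9, 0.1],  # stand-in for "long-necked"
    "spotted": [0.9, 0.6, 0.2],
    "africa":  [0.7, 0.3, 0.1],
}

def cosine(u, v):
    # Cosine similarity: dot product over the product of vector lengths.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def reverse_lookup(definition_words):
    # Average the definition-word vectors into a single query vector...
    known = [w for w in definition_words if w in definition_vectors]
    dims = len(next(iter(word_vectors.values())))
    query = [0.0] * dims
    for w in known:
        for i, x in enumerate(definition_vectors[w]):
            query[i] += x / len(known)
    # ...then return the candidate word closest to the query.
    return max(word_vectors, key=lambda w: cosine(word_vectors[w], query))

print(reverse_lookup(["tall", "necked", "spotted", "africa"]))  # -> giraffe
```

Averaging the definition's word vectors is the simplest possible way to compose a phrase into a single vector; richer models instead read the definition word by word with a recurrent network before comparing it against the candidate words.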