Alexandria, Australia

NICTA is Australia's Information and Communications Technology Research Centre of Excellence. The term "Centre of Excellence" is common marketing terminology used by some Australian government organisations in the titles of science research groups. NICTA's role is to pursue potentially economically significant ICT-related research for the Australian economy. Its organisation is structured around groups focused primarily on pure research and on the implementation of those ideas within business groups.


News Article | March 15, 2016

We've seen how social media can be more than just a place to share pictures of your dinner. It can play an important role in cultural movements, political discourse and disease tracking, and now researchers have discovered that it can play a crucial role in natural disaster relief by predicting a disaster's true impact within just a few hours. An international study by researchers at the Universidad Carlos III de Madrid (UC3M), NICTA (National Information Communications Technology Australia) and the University of California, San Diego has found that analysis of social network activity during and in the hours following a natural disaster can quickly reveal the extent of the damage. "Twitter, the social network which we have analyzed, is useful for the management, real-time monitoring and even prediction of the economic impact that disasters like Hurricane Sandy can have," says one of the researchers, Esteban Moro Egido, of UC3M.

Hurricane Sandy was the perfect chance for the researchers to collect data: it was a very large storm that was closely tracked, so they could monitor Twitter before, during and after it hit. Hundreds of millions of geo-located tweets referencing the storm were sent by Twitter users in 50 metropolitan areas, and the researchers were able to track the movement and impact of the storm through Twitter activity as it pummeled the East Coast. The storm caused more damage than any other in U.S. history, with an economic impact of 50 billion dollars.

The researchers compared the Twitter data they collected with official FEMA data on the level of aid grants for different areas. They found a strong correlation between the mean per capita social network activity and the mean per capita economic damage for each area: both the danger and the actual disaster impact were directly observable in real time by monitoring the social network.
The researchers have gone on to verify that the same correlation exists for floods, tornadoes and storms. They believe that social networks could be a critical tool for predicting the damage of natural disasters, giving governments the ability to see where, and how much, relief will be needed much more quickly. The data can also be used to see where people need immediate help, so that first responders can be dispatched to the hardest-hit areas. The researchers say this finding is especially important as we face an increase in natural disasters due to climate change. "We believe that this is going to cause even more natural disasters and, therefore, the use of social networks will allow us to obtain useful supplementary information," Egido said. "We are trying to see if there is a relationship between activity on social networks and climate change which will affect us in the future." If social networks are monitored, more lives could be saved and the right amount of aid could reach the areas that need it much more quickly.
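The core calculation behind that finding, correlating per-capita tweet activity with per-capita damage across areas, can be sketched in a few lines of Python. The numbers below are invented for illustration, not the study's data:

```python
import statistics

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length samples.
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-capita figures for five metropolitan areas; the
# study's actual inputs were geo-located tweets and FEMA aid records.
tweets_per_capita = [0.8, 1.5, 0.3, 2.1, 1.1]
damage_per_capita = [120, 240, 40, 310, 170]
r = pearson(tweets_per_capita, damage_per_capita)
```

A value of r close to 1 corresponds to the strong positive relationship the researchers report.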

The study, published in the latest issue of the journal Science Advances and carried out together with scientists from NICTA (National Information Communications Technology Australia) and the University of California, San Diego, concludes that it is possible to determine the damage caused by a natural disaster in just a few hours by using data from social networks. "Twitter, the social network which we have analyzed, is useful for the management, real-time monitoring and even prediction of the economic impact that disasters like Hurricane Sandy can have," says one of the researchers, Esteban Moro Egido, of UC3M's Grupo Interdisciplinar de Sistemas Complejos (GISC, Complex Systems Interdisciplinary Group). The research was carried out by analyzing Twitter activity before, during and after Hurricane Sandy which, in 2012, caused more damage than any other storm in US history, with an economic impact in the region of 50 billion dollars. Hundreds of millions of geo-located tweets referencing the storm were collected from fifty metropolitan areas in the USA. "Given that citizens were turning to these platforms for communication and information related to the disaster, we established a strong correlation between the route of the hurricane and activity on social networks," explains Esteban Moro. The main conclusion of the study was obtained when the data on social network activity were examined alongside both the levels of aid granted by the Federal Emergency Management Agency (FEMA) and insurance claims: there is a correlation between the mean per capita social network activity and the per capita economic damage caused by these disasters in the areas where such activity occurs. In other words, both real and perceived threats, along with the economic effects of physical disasters, are directly observable through the strength and composition of the flow of messages on Twitter.
Furthermore, the researchers have verified the results obtained from Hurricane Sandy and demonstrated that the same dynamic also occurs in the case of floods, storms and tornadoes, provided there is sufficient activity on social media to extract such data. In this way, communication on Twitter allows the economic impact of a natural disaster on the affected areas to be monitored in real time, providing information in addition to that currently used to assess damage resulting from these disasters. Moreover, the spatial distribution of event-related messages can also help the authorities monitor and evaluate emergencies, in order to improve responses to natural disasters. The authors of the study note that we are facing an increase in the frequency and intensity of natural disasters as a consequence of climate change. "We believe that this is going to cause even more natural disasters and, therefore, the use of social networks will allow us to obtain useful supplementary information," points out Professor Esteban Moro, who is currently working on further research in this area. "We are trying to see if there is a relationship between activity on social networks and climate change which will affect us in the future." More information: Y. Kryvasheyeu, H. Chen, N. Obradovich, E. Moro, P. Van Hentenryck, J. Fowler and M. Cebrian, "Rapid Assessment of Disaster Damage Using Social Media Activity," Science Advances 2, e1500779 (2016). DOI: 10.1126/sciadv.1500779

Kang L.,University of Maryland University College | Li Y.,NICTA | Doermann D.,University of Maryland University College
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition | Year: 2014

In this paper, higher-order correlation clustering (HOCC) is used for text line detection in natural images. We treat text line detection as a graph partitioning problem, where each vertex is represented by a Maximally Stable Extremal Region (MSER). First, weak hypotheses are proposed by coarsely grouping MSERs based on their spatial alignment and appearance consistency. Then, HOCC is used to partition the MSERs into text line candidates, using the hypotheses as soft constraints to enforce long-range interactions. We further propose a regularization method to solve the semidefinite programming problem in the inference. Finally, we use a simple texton-based texture classifier to filter out non-text areas. This framework allows us to naturally handle multiple orientations, languages and fonts. Experiments show that our approach achieves competitive performance compared to the state of the art. © 2014 IEEE.
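The correlation-clustering step can be illustrated with a much simpler pairwise variant. The sketch below uses the classic randomized pivot heuristic rather than the paper's higher-order, SDP-based formulation, and a hypothetical similarity function stands in for learned affinities between MSERs:

```python
import random

def pivot_correlation_clustering(items, weight, rng=random.Random(0)):
    """Randomized pivot heuristic for pairwise correlation clustering:
    positive weight(u, v) means "u and v likely belong to the same
    text line". Pick a random pivot, cluster it with all positively
    weighted items, and repeat on the remainder."""
    items = list(items)
    clusters = []
    while items:
        pivot = items.pop(rng.randrange(len(items)))
        cluster = [pivot] + [v for v in items if weight(pivot, v) > 0]
        items = [v for v in items if weight(pivot, v) <= 0]
        clusters.append(cluster)
    return clusters
```

With, say, 1-D positions and a proximity-threshold similarity, nearby items end up in the same cluster, mimicking how aligned MSERs are grouped into line candidates.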

Tricoire F.,University of Vienna | Tricoire F.,NICTA
Computers and Operations Research | Year: 2012

This paper introduces multi-directional local search, a metaheuristic for multi-objective optimization. We first motivate the method and present an algorithmic framework for it. We then apply it to several known multi-objective problems such as the multi-objective multi-dimensional knapsack problem, the bi-objective set packing problem and the bi-objective orienteering problem. Experimental results show that our method systematically provides solution sets of quality comparable to that of state-of-the-art methods on benchmark instances of these problems, within reasonable CPU effort. We conclude that the proposed algorithmic framework is a viable option when solving multi-objective optimization problems. © 2012 Elsevier Ltd. All rights reserved.
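As a rough illustration of the framework (not the paper's implementation), the following sketch applies multi-directional local search to a toy bi-objective knapsack: each iteration runs one single-objective hill climb per objective from an archive solution, then merges everything back into a non-dominated archive. All instance data is invented:

```python
import random

# Toy bi-objective knapsack: maximize two profit functions under one
# capacity constraint. Solutions are 0/1 tuples over the items.
weights  = [3, 4, 5, 2, 6]
profit1  = [4, 5, 6, 2, 7]
profit2  = [7, 2, 3, 6, 4]
capacity = 10

def feasible(x):
    return sum(w for w, b in zip(weights, x) if b) <= capacity

def objs(x):
    return (sum(p for p, b in zip(profit1, x) if b),
            sum(p for p, b in zip(profit2, x) if b))

def neighbors(x):
    # Flip one item in or out, keeping feasibility.
    for i in range(len(x)):
        y = list(x)
        y[i] = 1 - y[i]
        if feasible(y):
            yield tuple(y)

def dominates(a, b):
    return all(u >= v for u, v in zip(a, b)) and a != b

def nondominated(sols):
    return [s for s in sols
            if not any(dominates(objs(t), objs(s)) for t in sols)]

def mdls(rounds=20, seed=0):
    """Multi-directional local search sketch: one single-objective
    search per objective direction, merged into a Pareto archive."""
    rng = random.Random(seed)
    archive = [tuple([0] * len(weights))]
    for _ in range(rounds):
        new = []
        for k in range(2):                        # one search per objective
            x = rng.choice(archive)
            improved = True
            while improved:                       # first-improvement hill climb
                improved = False
                for y in neighbors(x):
                    if objs(y)[k] > objs(x)[k]:
                        x, improved = y, True
                        break
            new.append(x)
        # Light diversification: also consider a random feasible point.
        x = tuple(rng.randint(0, 1) for _ in weights)
        if feasible(x):
            new.append(x)
        archive = sorted(set(nondominated(archive + new)))
    return archive
```

The returned archive approximates the Pareto front; the real method plugs in problem-specific neighborhoods and stronger local searches.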

Brebner P.,NICTA
ICPE'12 - Proceedings of the 3rd Joint WOSP/SIPEW International Conference on Performance Engineering | Year: 2012

Elasticity, the ability to rapidly scale resources up and down on demand, is an essential feature of public cloud platforms. However, it is difficult to understand the elasticity requirements of a given application and workload, and whether the elasticity provided by a cloud provider will meet those requirements. We introduce the elasticity mechanisms of a typical Infrastructure as a Service (IaaS) cloud platform (inspired by Amazon EC2). We have enhanced our Service Oriented Performance Modeling method and tool to model and predict the elasticity characteristics of three realistic applications and workloads on this cloud platform. We compare the pay-as-you-go instance costs and end-user response time service level agreements for different elasticity scenarios. The model is also able to predict the elasticity requirements (in terms of the maximum instance spin-up time) for the three applications. We conclude with an analysis of the results. Copyright 2012 ACM.
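The interaction between spin-up time and elasticity can be shown with a toy discrete-time simulation. All thresholds and figures below are invented, and this is far simpler than the paper's Service Oriented Performance Modeling method:

```python
def simulate(workload, capacity_per_instance=100, spinup_lag=3):
    """Toy IaaS elasticity sketch: scale out when demand exceeds
    supply, with new instances only becoming active after a spin-up
    lag; scale in when there is a full instance of slack."""
    active, pending = 1, []       # pending holds spin-up countdowns
    cost = 0                      # instance-timesteps paid for
    overloaded = 0                # steps where demand exceeded supply
    for demand in workload:
        pending = [t - 1 for t in pending]
        active += sum(1 for t in pending if t == 0)
        pending = [t for t in pending if t > 0]
        supply = active * capacity_per_instance
        if demand > supply:
            overloaded += 1
            pending.append(spinup_lag)            # request one more instance
        elif demand < supply - capacity_per_instance and active > 1:
            active -= 1                           # scale in
        cost += active + len(pending)             # pay for booting instances too
    return cost, overloaded
```

Running the same spiky workload with a longer spin-up lag produces more overloaded time steps, which is exactly the kind of spin-up-time requirement the paper's model predicts per application.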

Walsh T.,NICTA
Journal of Artificial Intelligence Research | Year: 2011

Voting is a simple mechanism for combining the preferences of multiple agents. Unfortunately, agents may try to manipulate the result by misreporting their preferences. One barrier that might exist to such manipulation is computational complexity. In particular, it has been shown that it is NP-hard to compute how to manipulate a number of different voting rules. However, NP-hardness only bounds the worst-case complexity. Recent theoretical results suggest that manipulation may often be easy in practice. In this paper, we show that empirical studies are useful in improving our understanding of this issue. We consider two settings which represent the two types of complexity results that have been identified in this area: manipulation with unweighted votes by a single agent, and manipulation with weighted votes by a coalition of agents. In the first case, we consider Single Transferable Voting (STV), and in the second case, we consider veto voting. STV is one of the few voting rules used in practice where it is NP-hard to compute how a single agent can manipulate the result when votes are unweighted. It also appears to be one of the harder voting rules to manipulate, since it involves multiple rounds. On the other hand, veto voting is one of the simplest representatives of voting rules where it is NP-hard to compute how a coalition of weighted agents can manipulate the result. In our experiments, we sample a number of distributions of votes, including uniform, correlated and real-world elections. In many of the elections in our experiments, it was easy to compute how to manipulate the result or to prove that manipulation was impossible. Even when we were able to identify a situation in which manipulation was hard to compute (e.g. when votes are highly correlated and the election is "hung"), we found that the computational difficulty of computing manipulations was somewhat precarious (e.g. with such "hung" elections, even a single uncorrelated voter was enough to make manipulation easy to compute). © 2011 AI Access Foundation. All rights reserved.
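The single-agent, unweighted-vote setting can be sketched directly: compute the STV winner, then brute-force over the manipulator's possible ballots. This is a naive stand-in for the paper's algorithms, with an alphabetical tie-breaking rule chosen only for concreteness:

```python
from collections import Counter
from itertools import permutations

def stv_winner(profile):
    """Single-winner STV: repeatedly eliminate the candidate with the
    fewest first-place votes (ties broken alphabetically)."""
    candidates = set(profile[0])
    while len(candidates) > 1:
        firsts = Counter(next(c for c in vote if c in candidates)
                         for vote in profile)
        loser = min(candidates, key=lambda c: (firsts.get(c, 0), c))
        candidates.discard(loser)
    return candidates.pop()

def manipulable_by_one(profile, manipulator, target):
    """Brute force: can voter `manipulator` make `target` win by
    reporting some other ranking? Exponential in the number of
    candidates, which is why empirical study matters for STV."""
    others = [v for i, v in enumerate(profile) if i != manipulator]
    for ranking in permutations(profile[0]):
        if stv_winner(others + [list(ranking)]) == target:
            return list(ranking)
    return None
```

For example, with ballots [a>b>c, b>c>a, c>a>b, a>b>c] the sincere STV winner is c, yet the last voter (who truly ranks a>b>c) can make b win by reporting b first, while no ballot makes a win.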

Elphinstone K.,NICTA | Heiser G.,NICTA
SOSP 2013 - Proceedings of the 24th ACM Symposium on Operating Systems Principles | Year: 2013

The L4 microkernel has undergone 20 years of use and evolution. It has an active user and developer community, and there are commercial versions which are deployed on a large scale and in safety-critical systems. In this paper we examine the lessons learnt in those 20 years about microkernel design and implementation. We revisit the L4 design papers, and examine the evolution of design and implementation from the original L4 to the latest generation of L4 kernels, especially seL4, which has pushed the L4 model furthest and was the first OS kernel to undergo a complete formal verification of its implementation as well as a sound analysis of worst-case execution times. We demonstrate that while much has changed, the fundamental principles of minimality and high IPC performance remain the main drivers of design and implementation decisions. © 2013 ACM.

IJCAI International Joint Conference on Artificial Intelligence | Year: 2013

In social choice settings with strict preferences, random dictatorship rules were characterized by Gibbard [1977] as the only randomized social choice functions that satisfy strategyproofness and ex post efficiency. In the more general domain with indifferences, RSD (random serial dictatorship) rules are the well-known and perhaps only known generalization of random dictatorship. We present a new generalization of random dictatorship for indifferences called Maximal Recursive (MR) rule as an alternative to RSD. We show that MR is polynomial-time computable, weakly strategy-proof with respect to stochastic dominance, and, in some respects, outperforms RSD on efficiency.
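For contrast with the proposed MR rule, RSD itself is straightforward to sketch. This is an illustrative reading of the rule for weak preferences, with a uniform random agent order:

```python
import random

def rsd_winner(weak_prefs, rng=random.Random(0)):
    """Random serial dictatorship sketch for weak preferences.
    Each agent's preference is a list of indifference classes, most
    preferred first, e.g. [['a', 'b'], ['c']] means a ~ b > c.
    Agents are drawn in random order; each restricts the surviving
    alternatives to the best-ranked ones among them."""
    alternatives = {a for cls in weak_prefs[0] for a in cls}
    order = list(range(len(weak_prefs)))
    rng.shuffle(order)
    for agent in order:
        for cls in weak_prefs[agent]:        # scan from most preferred
            best = alternatives & set(cls)
            if best:
                alternatives = best
                break
        if len(alternatives) == 1:
            break
    return alternatives   # survivors; a final uniform draw breaks ties
```

With strict preferences the first agent drawn acts as a dictator, recovering Gibbard's random dictatorship as a special case; with indifferences, later agents refine the survivors.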

Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST | Year: 2012

Wireless vehicular networks operating on the dedicated short-range communications (DSRC) frequency bands are the key enabling technologies for the emerging market of intelligent transport systems (ITS). Wireless access in vehicular environments (WAVE) is significantly different from the Wi-Fi and cellular wireless networking environments. The specifications defined by IEEE 802.11p and IEEE 1609 represent the most mature set of standards for DSRC/WAVE networks. This paper provides an overview of the current state of the art and analyses the potential differences between application requirements and what can be offered by the current WAVE solutions. It is shown that the current solutions may be inadequate for large-scale deployment. The primary challenge is to develop scalable, robust, low-latency and high-throughput technologies for safety applications that will significantly reduce collisions, saving lives and preventing property loss. Further research ideas are proposed to address this challenge.

Domke J.,NICTA
IEEE Transactions on Pattern Analysis and Machine Intelligence | Year: 2013

Likelihood-based learning of graphical models faces challenges of computational complexity and robustness to model misspecification. This paper studies methods that fit parameters directly to maximize a measure of the accuracy of predicted marginals, taking into account both model and inference approximations at training time. Experiments on imaging problems suggest marginalization-based learning performs better than likelihood-based approximations on difficult problems where the model being fit is approximate in nature. © 1979-2012 IEEE.
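The idea of fitting parameters to marginal accuracy rather than to likelihood can be shown on a deliberately tiny model. This is only a caricature of the paper's setting (which uses approximate inference on imaging problems); here a two-variable binary MRF admits exact inference and parameters are chosen by grid search:

```python
import itertools
import math

def marginals(a, t):
    # Exact marginals P(x1 = +1), P(x2 = +1) of the toy two-variable
    # MRF p(x1, x2) ∝ exp(a*x1 + a*x2 + t*x1*x2), with x_i in {-1, +1}.
    states = list(itertools.product([-1, 1], repeat=2))
    scores = [math.exp(a * x1 + a * x2 + t * x1 * x2) for x1, x2 in states]
    z = sum(scores)
    p1 = sum(s for (x1, _), s in zip(states, scores) if x1 == 1) / z
    p2 = sum(s for (_, x2), s in zip(states, scores) if x2 == 1) / z
    return p1, p2

def fit_to_marginals(target, grid):
    # Marginalization-based fitting in miniature: choose parameters by
    # the accuracy of the predicted marginals, not by likelihood.
    def loss(params):
        return sum((u - v) ** 2 for u, v in zip(marginals(*params), target))
    return min(((a, t) for a in grid for t in grid), key=loss)
```

The same objective generalizes to the paper's setting by replacing the exact marginals with the output of an approximate inference routine, so that training accounts for the inference approximation.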
