Wu W., University of Technology, Sydney | Li B., University of Technology, Sydney | Li B., Machine Learning Research Group | Chen L., University of Technology, Sydney | Zhang C., University of Technology, Sydney
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2016

Traditional cross-view information retrieval mainly rests on correlating two sets of features in different views. However, features in different views usually have different physical interpretations, so it may be inappropriate to map multiple views of data onto a shared feature space and compare them directly. In this paper, we propose a simple yet effective Cross-View Feature Hashing (CVFH) algorithm via a “partition and match” approach. The feature space for each view is bi-partitioned multiple times using B hash functions, so that the resulting binary codes for all the views can be represented in a compatible B-bit Hamming space. To ensure that the hashed feature space is effective for supporting generic machine learning and information retrieval functionalities, the hash functions are learned to satisfy two criteria: (1) neighbors in the original feature spaces should also be close in the Hamming space; and (2) the binary codes for multiple views of the same sample should be similar in the shared Hamming space. We apply CVFH to cross-view image retrieval. The experimental results show that CVFH can outperform the Canonical Correlation Analysis (CCA) based cross-view method. © Springer International Publishing Switzerland 2016.
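
The snippet below is a minimal sketch of the “partition and match” idea described above, not the CVFH algorithm itself: CVFH learns its hash functions under the two criteria in the abstract, whereas this sketch substitutes random hyperplane (LSH-style) projections, and the view dimensions, function names, and toy data are assumptions made purely for illustration. It shows how two views with different feature spaces become comparable in a shared B-bit Hamming space.

```python
# Minimal sketch of the "partition and match" idea behind cross-view hashing.
# NOTE: CVFH learns its hash functions under the two criteria in the abstract;
# here random hyperplane (LSH-style) projections are substituted purely to show
# how two different views become comparable in a shared B-bit Hamming space.
import numpy as np

rng = np.random.default_rng(0)
B = 32                       # number of hash bits (bi-partitions per view)
d_text, d_image = 300, 512   # assumed toy feature dimensions for the two views

# One projection matrix per view maps that view into the same B-bit code space.
W_text = rng.normal(size=(d_text, B))
W_image = rng.normal(size=(d_image, B))

def hash_view(X, W):
    """Bi-partition the feature space B times: each projection's sign is one bit."""
    return (X @ W > 0).astype(np.uint8)

def hamming_distance(codes_a, codes_b):
    """Pairwise Hamming distances between two sets of B-bit codes."""
    return (codes_a[:, None, :] != codes_b[None, :, :]).sum(axis=2)

# Toy cross-view retrieval: query with text features, rank image features.
text_queries = rng.normal(size=(5, d_text))
image_db = rng.normal(size=(100, d_image))

query_codes = hash_view(text_queries, W_text)
db_codes = hash_view(image_db, W_image)
ranking = np.argsort(hamming_distance(query_codes, db_codes), axis=1)
print(ranking[:, :10])   # indices of the top-10 retrieved images per query
```

The method in the abstract differs from this sketch exactly in how W_text and W_image are obtained: they are learned so that within-view neighbors, and the codes of the same sample across views, stay close in the shared Hamming space.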


Gould S., Australian National University | Gould S., Machine Learning Research Group | He X., Computer Vision Research Group | He X., Australian National University
Communications of the ACM | Year: 2014

Pixels labeled with a scene's semantics and geometry let computers describe what they see. © 2014 ACM.


Zhang D., Orange S.A. | Sun L., Orange S.A. | Li B., Machine Learning Research Group | Chen C., Chongqing University | And 3 more authors.
IEEE Transactions on Intelligent Transportation Systems | Year: 2015

Taxi service strategies, the crowd intelligence of a large population of taxi drivers, are hidden in their historical time-stamped GPS traces. Mining GPS traces to understand the service strategies of skilled taxi drivers can benefit the drivers themselves, passengers, and city planners in a number of ways. This paper intends to uncover the efficient and inefficient taxi service strategies based on a large-scale historical GPS database of approximately 7600 taxis over one year in a city in China. First, we separate the GPS traces of individual taxi drivers and link them with the revenue generated. Second, we investigate the taxi service strategies from three perspectives, namely, passenger-searching strategies, passenger-delivery strategies, and service-region preference. Finally, we represent the taxi service strategies with a feature matrix and evaluate the correlation between service strategies and revenue, revealing which strategies are efficient and which are not. We predict the revenue of taxi drivers based on their strategies and achieve a prediction residual as low as 2.35 RMB/h (RMB is the currency unit of China; 1 RMB ≈ US$0.17). © 2000-2011 IEEE.
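
As a rough illustration of the final modeling step described above, the sketch below regresses hourly revenue on a strategy feature matrix. The feature names and synthetic data are invented for illustration, and ordinary least squares merely stands in for whatever regressor the authors used; the 2.35 RMB/h residual is the paper's reported result and is not reproduced here.

```python
# Sketch of the final modelling step in the abstract: represent each driver's
# service strategy as a feature vector and regress hourly revenue on it. The
# feature names and synthetic data below are invented for illustration; the
# paper builds its feature matrix from real GPS traces and reports a residual
# of about 2.35 RMB/h, which this toy example does not attempt to reproduce.
import numpy as np

rng = np.random.default_rng(42)
n_drivers = 500

# Hypothetical strategy features: passenger searching, delivery, region choice.
X = np.column_stack([
    rng.uniform(0, 1, n_drivers),   # share of vacant time spent near hot spots
    rng.uniform(0, 1, n_drivers),   # share of deliveries routed via fast roads
    rng.uniform(0, 1, n_drivers),   # preference for high-demand service regions
])
# Synthetic hourly revenue (RMB/h) with noise, just so the fit has a signal.
revenue = (30 + 25 * X[:, 0] + 10 * X[:, 1] + 15 * X[:, 2]
           + rng.normal(0, 3, n_drivers))

# Ordinary least squares with an intercept term.
A = np.column_stack([np.ones(n_drivers), X])
coef, *_ = np.linalg.lstsq(A, revenue, rcond=None)
residual = np.abs(A @ coef - revenue).mean()
print(f"coefficients: {coef.round(2)}, mean absolute residual: {residual:.2f} RMB/h")
```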


Cheng H., University of Alberta | Zhang X., Machine Learning Research Group | Schuurmans D., University of Alberta
Uncertainty in Artificial Intelligence - Proceedings of the 29th Conference, UAI 2013 | Year: 2013

Although many convex relaxations of clustering have been proposed in the past decade, current formulations remain restricted to spherical Gaussian or discriminative models and are susceptible to imbalanced clusters. To address these shortcomings, we propose a new class of convex relaxations that can be flexibly applied to more general forms of Bregman divergence clustering. By basing these new formulations on normalized equivalence relations, we retain additional control over relaxation quality, which allows the clustering quality to be improved. We furthermore develop optimization methods that improve scalability by exploiting recent implicit matrix norm methods. In practice, we find that the new formulations are able to efficiently produce tighter clusterings that improve the accuracy of state-of-the-art methods.
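
For readers unfamiliar with the setting, the sketch below shows only the underlying (non-convex) Bregman hard-clustering objective that such relaxations target, here with the KL divergence as the Bregman divergence. It is not the convex relaxation over normalized equivalence relations proposed in the paper, and the toy Dirichlet data and function names are assumptions for illustration.

```python
# The paper relaxes Bregman divergence clustering into a convex problem over
# normalized equivalence relations. As background only, this sketch runs the
# underlying non-convex Bregman hard clustering (a k-means-style alternation)
# with the KL divergence as the Bregman divergence; it is NOT the convex
# relaxation proposed in the paper, and the Dirichlet toy data is illustrative.
import numpy as np

def kl_matrix(P, Q, eps=1e-12):
    """KL(P[i] || Q[j]) for rows of P against rows of Q (points on the simplex)."""
    P = P + eps
    Q = Q + eps
    return (P * np.log(P)).sum(axis=1, keepdims=True) - P @ np.log(Q).T

rng = np.random.default_rng(1)
n, d, k = 200, 10, 3

# Toy data: points on the probability simplex drawn from k Dirichlet sources.
X = np.vstack([rng.dirichlet(rng.uniform(0.5, 5, d), n // k) for _ in range(k)])

# Initialise centroids from randomly chosen data points.
centroids = X[rng.choice(len(X), k, replace=False)]

for _ in range(20):
    # Assignment step: nearest centroid under the KL (Bregman) divergence.
    labels = kl_matrix(X, centroids).argmin(axis=1)
    # Update step: for any Bregman divergence, the optimal centroid is the mean.
    centroids = np.vstack([
        X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
        for j in range(k)
    ])

print("cluster sizes:", np.bincount(labels, minlength=k))  # may be imbalanced
```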


Aslan O., University of Alberta | Cheng H., University of Alberta | Schuurmans D., University of Alberta | Zhang X., Machine Learning Research Group
Advances in Neural Information Processing Systems | Year: 2013

Latent variable prediction models, such as multi-layer networks, impose auxiliary latent variables between inputs and outputs to allow automatic inference of implicit features useful for prediction. Unfortunately, such models are difficult to train because inference over latent variables must be performed concurrently with parameter optimization, creating a highly non-convex problem. Instead of proposing another local training method, we develop a convex relaxation of hidden-layer conditional models that admits global training. Our approach extends current convex modeling approaches to handle two nested nonlinearities separated by a non-trivial adaptive latent layer. The resulting methods are able to acquire two-layer models that cannot be represented by any single-layer model over the same features, while improving training quality over local heuristics.
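
To make "two nested nonlinearities separated by an adaptive latent layer" concrete, the sketch below trains a one-hidden-layer network by plain gradient descent, i.e. the kind of local, non-convex heuristic the abstract contrasts against. It is not the convex relaxation developed in the paper, and the toy data, layer sizes, and learning rate are arbitrary choices.

```python
# The abstract contrasts global convex training against local heuristics for
# models with an adaptive latent layer between two nonlinearities. This sketch
# shows only the local, non-convex baseline: a one-hidden-layer network trained
# by full-batch gradient descent on toy data. It is NOT the convex relaxation
# developed in the paper; all sizes and hyperparameters are arbitrary.
import numpy as np

rng = np.random.default_rng(7)
n, d, h = 200, 5, 16            # samples, input dimension, latent-layer width

X = rng.normal(size=(n, d))
y = (np.sin(X[:, 0]) + X[:, 1] ** 2 > 1.0).astype(float)   # toy binary target

W1 = rng.normal(scale=0.5, size=(d, h))   # input -> latent layer
W2 = rng.normal(scale=0.5, size=(h, 1))   # latent layer -> output
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(2000):
    # Forward pass: two nested nonlinearities around the adaptive latent layer.
    H = np.tanh(X @ W1)               # latent representation
    p = sigmoid(H @ W2).ravel()       # predicted probability
    # Backward pass (logistic loss); purely local updates -> non-convex training.
    g_out = (p - y)[:, None] / n
    g_hidden = (g_out @ W2.T) * (1.0 - H ** 2)
    W2 -= lr * H.T @ g_out
    W1 -= lr * X.T @ g_hidden

print("training accuracy:", float(((p > 0.5) == y).mean()))
```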
