NEC Labs America, NJ, United States

Jia Y., University of California at Berkeley | Huang C., NEC Labs America | Darrell T., University of California at Berkeley
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition | Year: 2012

In this paper we examine the effect of receptive field designs on classification accuracy in the commonly adopted image classification pipeline. While existing algorithms usually use manually defined spatial regions for pooling, we show that learning more adaptive receptive fields increases performance even with a significantly smaller codebook size at the coding layer. To learn the optimal pooling parameters, we adopt the idea of over-completeness: we start with a large number of receptive field candidates and train a classifier with structured sparsity to use only a sparse subset of all the features. An efficient algorithm based on incremental feature selection and retraining is proposed for fast learning. With this method, we achieve the best published performance on the CIFAR-10 dataset, using a much lower-dimensional feature space than previous methods. © 2012 IEEE.
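A minimal sketch of the core idea: pool coded activations over an over-complete set of candidate receptive fields, then let a sparsity-inducing classifier keep only a small subset. The grid size, codebook size, and the plain L1 penalty (standing in for the paper's structured sparsity with incremental feature selection and retraining) are illustrative assumptions, not the authors' implementation:

```python
# Sketch of over-complete receptive field pooling with sparse selection.
# A plain L1-penalized classifier stands in for the paper's structured
# sparsity with incremental feature selection and retraining.
import numpy as np
from sklearn.linear_model import LogisticRegression

def candidate_regions(grid=4):
    """Enumerate all axis-aligned rectangles over a grid x grid layout of
    coding-layer cells: the over-complete set of receptive field candidates."""
    return [(t, b, l, r)
            for t in range(grid) for b in range(t + 1, grid + 1)
            for l in range(grid) for r in range(l + 1, grid + 1)]

def pool_features(coded, regions):
    """Max-pool each candidate region of the coded activations.
    coded: (n_images, grid, grid, codebook_size) array."""
    feats = [coded[:, t:b, l:r, :].max(axis=(1, 2)) for t, b, l, r in regions]
    return np.concatenate(feats, axis=1)

# Toy stand-in for coded CIFAR-10 images: 200 images, 4x4 grid, 32 codes.
rng = np.random.default_rng(0)
coded = rng.random((200, 4, 4, 32))
labels = rng.integers(0, 2, 200)

X = pool_features(coded, candidate_regions())        # 100 regions x 32 codes
# The L1 penalty drives most pooled features' weights to zero, so the
# classifier effectively selects a sparse subset of receptive fields.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
clf.fit(X, labels)
print("active pooled features:", np.count_nonzero(clf.coef_))
```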


Tajer A., Columbia University | Prasad N., NEC Labs America | Wang X., Columbia University
IEEE Transactions on Signal Processing | Year: 2010

We consider decentralized multiantenna cognitive radio networks where the secondary (cognitive) users are granted simultaneous spectrum access along with the license-holding (primary) users. We treat the problem of distributed beamforming and rate allocation for the secondary users such that the minimum weighted secondary rate is maximized. Such an optimization is subject to 1) a limited weighted sum-power budget for the secondary users and 2) guaranteed protection for the primary users, in the sense that the interference level imposed on each primary receiver does not exceed a specified level. Based on the decoding method deployed by the secondary receivers, we consider three scenarios for solving this problem. In the first scenario, each secondary receiver decodes only its designated transmitter while suppressing the rest as Gaussian interferers (single-user decoding). In the second, each secondary receiver employs the maximum likelihood decoder (MLD) to jointly decode all secondary transmissions. In the third, each secondary receiver uses the unconstrained group decoder (UGD), under which each secondary user is allowed to decode any arbitrary subset of users (containing its designated user) after suppressing or canceling the remaining users. We offer an optimal distributed algorithm for designing the beamformers and allocating rates in the first scenario (i.e., with single-user decoding). We also provide explicit formulations of the optimization problems for the latter two scenarios (with the MLD and the UGD, respectively), which, however, are nonconvex. While we provide a suboptimal centralized algorithm for the MLD case, neither of these two scenarios can be solved efficiently in a decentralized setup. As a remedy, we offer two-stage suboptimal distributed algorithms for the MLD and UGD scenarios. In the first stage, the beamformers and rates are determined in a distributed fashion under the assumption of single-user decoding at each secondary receiver. With these beamformer designs, the MLD often, and the UGD always, supports rates higher than those achieved in the first stage. Based on this observation, the second stage offers optimal distributed low-complexity algorithms to allocate the excess rates to the secondary users such that a notion of fairness is maintained. Analytical and empirical results demonstrate the gains yielded by the proposed rate allocation and beamformer design algorithms. © 2009 IEEE.
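As a reading aid for the first scenario, the sketch below computes the single-user-decoding rates of the secondary links (treating all other secondary signals as Gaussian noise) and checks the interference cap at a primary receiver. Single-antenna receivers and random unit-norm beamformers are simplifying placeholders for the paper's optimized designs:

```python
# Toy rate computation for the single-user-decoding scenario: each
# secondary receiver treats the other secondary signals as Gaussian
# noise, and the total interference at the primary receiver must stay
# below a cap. Channels and beamformers are random placeholders.
import numpy as np

rng = np.random.default_rng(1)
K, M = 3, 4              # secondary links, antennas per transmitter
noise = 1.0
interference_cap = 0.5   # interference budget at the primary receiver

# h[i, j]: channel from secondary transmitter j to secondary receiver i;
# g[j]: channel from secondary transmitter j to the primary receiver.
h = rng.normal(size=(K, K, M)) + 1j * rng.normal(size=(K, K, M))
g = rng.normal(size=(K, M)) + 1j * rng.normal(size=(K, M))

w = rng.normal(size=(K, M)) + 1j * rng.normal(size=(K, M))
w /= np.linalg.norm(w, axis=1, keepdims=True)        # unit-norm beamformers

def sinr(i):
    """SINR at secondary receiver i under single-user decoding."""
    sig = abs(np.vdot(h[i, i], w[i])) ** 2
    interf = sum(abs(np.vdot(h[i, j], w[j])) ** 2 for j in range(K) if j != i)
    return sig / (noise + interf)

rates = [np.log2(1 + sinr(i)) for i in range(K)]
primary_interf = sum(abs(np.vdot(g[j], w[j])) ** 2 for j in range(K))
print("min secondary rate:", min(rates))
print("primary protected :", primary_interf <= interference_cap)
```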


Tajer A., Princeton University | Prasad N., NEC Labs America | Wang X., Columbia University
IEEE Transactions on Signal Processing | Year: 2011

Coordinated information processing by the base stations of multi-cell wireless networks enhances the overall quality of communication in the network. Such coordination, aimed at optimizing any desired network-wide quality of service (QoS), requires the base stations to acquire and share some channel state information (CSI). With perfect knowledge of the channel states, the base stations can adjust their transmissions to achieve network-wide QoS optimality. In practice, however, the CSI can be obtained only imperfectly, and due to the resulting uncertainties the network is not guaranteed to attain a globally optimal QoS. Nevertheless, if the channel estimation perturbations are confined within bounded regions, the QoS measure will also lie within a bounded region, so by exploiting the notion of robustness in the worst-case sense, certain worst-case QoS guarantees can be asserted for the network. We adopt a popular model for noisy channel estimates which assumes that the estimation noise terms lie within known hyper-spheres. We aim to design linear transceivers that optimize a worst-case QoS measure in downlink transmissions. In particular, we focus on maximizing the worst-case weighted sum-rate of the network and the minimum worst-case rate of the network. For such transceiver designs, we offer several centralized (fully cooperative) and distributed (limited cooperation) algorithms that entail different levels of complexity and information exchange among the base stations. © 1991-2012 IEEE.
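The robustness notion can be illustrated on a single link under the hyper-sphere uncertainty model: if the true channel lies within radius ε of the estimate, Cauchy-Schwarz bounds how much the estimation error can reduce the useful signal amplitude, giving a closed-form worst-case rate. The sketch below is a toy illustration of that bound, not the paper's full transceiver optimization:

```python
# Toy worst-case rate under the hyper-sphere CSI error model: the true
# channel is h_est + e with ||e|| <= eps. By Cauchy-Schwarz the error
# can reduce the useful amplitude |h^H w| by at most eps * ||w||, which
# yields a closed-form worst-case rate for one link.
import numpy as np

rng = np.random.default_rng(2)
M, eps, noise = 4, 0.1, 1.0

h_est = rng.normal(size=M) + 1j * rng.normal(size=M)  # channel estimate
w = h_est / np.linalg.norm(h_est)                     # matched beamformer

nominal_amp = abs(np.vdot(h_est, w))
worst_amp = max(nominal_amp - eps * np.linalg.norm(w), 0.0)

print("nominal rate   :", np.log2(1 + nominal_amp ** 2 / noise))
print("worst-case rate:", np.log2(1 + worst_amp ** 2 / noise))
```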


Yang T., NEC Labs America
Advances in Neural Information Processing Systems | Year: 2013

We present and study a distributed optimization algorithm based on stochastic dual coordinate ascent. Stochastic dual coordinate ascent methods enjoy strong theoretical guarantees and often outperform stochastic gradient descent methods in optimizing regularized loss minimization problems, yet little effort has been made to study them in a distributed framework. We make progress along this line by presenting a distributed stochastic dual coordinate ascent algorithm for a star network, together with an analysis of the tradeoff between computation and communication. We verify our analysis by experiments on real data sets. Moreover, we compare the proposed algorithm with distributed stochastic gradient descent methods and distributed alternating direction methods of multipliers for optimizing SVMs in the same distributed framework, and observe competitive performance.
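A minimal sketch of the idea for a hinge-loss SVM: the data are sharded across workers in a star topology, each worker performs closed-form SDCA updates on the dual variables of its own shard, and the center aggregates the workers' primal increments once per round. The shard counts, step counts, and in-process simulation of communication are illustrative assumptions, not the paper's implementation:

```python
# Toy distributed SDCA for an L2-regularized hinge-loss SVM on a star
# network: each worker runs closed-form dual coordinate updates on its
# own shard against a local copy of the primal vector, and the center
# merges the workers' primal increments once per communication round.
import numpy as np

rng = np.random.default_rng(3)
n, d, lam = 400, 20, 0.01
workers, rounds, local_steps = 4, 30, 100

X = rng.normal(size=(n, d))
y = np.sign(X @ rng.normal(size=d) + 0.1 * rng.normal(size=n))

shards = np.array_split(rng.permutation(n), workers)  # one shard per worker
alpha = np.zeros(n)   # dual variables, each owned by exactly one worker
w = np.zeros(d)       # primal vector, equals (1/(lam*n)) * sum_i alpha_i y_i x_i

for _ in range(rounds):
    increments = np.zeros(d)
    for shard in shards:                     # simulated worker
        w_local, delta = w.copy(), np.zeros(d)
        for i in rng.choice(shard, size=local_steps):
            # Closed-form SDCA step for the hinge loss; dual stays in [0, 1].
            residual = 1.0 - y[i] * (X[i] @ w_local)
            step = residual / (X[i] @ X[i] / (lam * n) + 1e-12)
            da = np.clip(alpha[i] + step, 0.0, 1.0) - alpha[i]
            alpha[i] += da
            update = da * y[i] * X[i] / (lam * n)
            w_local += update                # local view advances immediately
            delta += update
        increments += delta                  # "sent" to the center
    w += increments                          # center aggregates and broadcasts

print("training accuracy:", np.mean(np.sign(X @ w) == y))
```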


Long P.M., NEC Labs America | Servedio R.A., Columbia University
ITCS 2013 - Proceedings of the 2013 ACM Conference on Innovations in Theoretical Computer Science | Year: 2013

For S ⊆ {0,1}^n, a Boolean function f: S → {-1,1} is a halfspace over S if there exist w ∈ ℝ^n and θ ∈ ℝ such that f(x) = sign(w · x − θ) for all x ∈ S. We give bounds on the size of the integer weights w_1, ..., w_n ∈ ℤ required to represent halfspaces over Hamming balls centered at 0^n, i.e., halfspaces over S = {0,1}^n_{≤k} := {x ∈ {0,1}^n : x_1 + ⋯ + x_n ≤ k}. Such weight bounds for halfspaces over Hamming balls have immediate consequences for the performance of learning algorithms in the increasingly common scenario of learning from very high-dimensional categorical examples in which only a small number of features are active in each example. We give upper and lower weight bounds both for exact representation (when sign(w · x − θ) must equal f(x) for every x ∈ S) and for ε-approximate representation (when sign(w · x − θ) may disagree with f(x) on up to an ε fraction of the points x ∈ S). Our results show that the extremal bounds for exact representation are qualitatively rather similar whether the domain is all of {0,1}^n or the Hamming ball {0,1}^n_{≤k}, but the extremal bounds for approximate representation are qualitatively very different between these two domains. © 2013 ACM.
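For small n, both representation notions can be checked by brute force: enumerate the Hamming ball and count where sign(w · x − θ) disagrees with f. The sketch below does exactly that (ε = 0 gives exact representation); the helper name and the example halfspace are illustrative:

```python
# Brute-force check that an integer-weight halfspace sign(w.x - theta)
# represents f on the Hamming ball {x in {0,1}^n : x_1 + ... + x_n <= k}.
# Exponential enumeration; a reading aid for small n only.
from itertools import combinations

def represents_on_ball(w, theta, f, n, k, eps=0.0):
    """True if sign(w.x - theta) disagrees with f on at most an eps
    fraction of the ball's points (eps=0 means exact representation)."""
    disagreements = total = 0
    for weight in range(k + 1):
        for ones in combinations(range(n), weight):
            x = [1 if i in ones else 0 for i in range(n)]
            value = 1 if sum(w[i] for i in ones) - theta > 0 else -1
            total += 1
            disagreements += (value != f(x))
    return disagreements <= eps * total

# Example: "at least two of the first three coordinates" on the radius-2
# ball in n=6 variables, written with integer weights and theta = 1.5.
w, theta = [1, 1, 1, 0, 0, 0], 1.5
def f(x):
    return 1 if x[0] + x[1] + x[2] >= 2 else -1

print(represents_on_ball(w, theta, f, n=6, k=2))   # True
```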
