Póvoa e Meadas, Portugal

Martins A.F.T.,Carnegie Mellon University | Martins A.F.T.,Telecommunications Institute of Portugal | Smith N.A.,Carnegie Mellon University | Aguiar P.M.Q.,Instituto de Sistemas e Robótica | Figueiredo M.A.T.,Telecommunications Institute of Portugal
EMNLP 2011 - Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference | Year: 2011

Linear models have enjoyed great success in structured prediction in NLP. While a lot of progress has been made on efficient training with several loss functions, the problem of endowing learners with a mechanism for feature selection is still unsolved. Common approaches employ ad hoc filtering or L1-regularization; both ignore the structure of the feature space, preventing practitioners from encoding structural prior knowledge. We fill this gap by adopting regularizers that promote structured sparsity, along with efficient algorithms to handle them. Experiments on three tasks (chunking, entity recognition, and dependency parsing) show gains in performance, compactness, and model interpretability. © 2011 Association for Computational Linguistics.
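The structured-sparsity regularizers this abstract refers to act on whole groups of features rather than individual weights. A minimal numpy sketch of the group-lasso proximal operator (with made-up group indices and weights, not the paper's own algorithm) shows the key mechanism: an entire feature group is either shrunk or zeroed out together.

```python
import numpy as np

def prox_group_lasso(w, groups, lam):
    """Proximal step for a group-lasso regularizer: each group's
    subvector is shrunk toward zero; groups whose norm falls below
    lam are zeroed out entirely (structured feature selection)."""
    w = w.copy()
    for idx in groups:
        norm = np.linalg.norm(w[idx])
        scale = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
        w[idx] *= scale
    return w

# hypothetical weight vector with two feature groups
w = np.array([3.0, 4.0, 0.1, -0.1])
groups = [[0, 1], [2, 3]]
w_new = prox_group_lasso(w, groups, lam=1.0)
# the first group (norm 5) is shrunk; the second (norm ~0.14) is zeroed
```

Zeroing whole groups is what lets structural prior knowledge (e.g., feature templates) shape which parts of the feature space survive.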


Nascimento J.C.,Instituto de Sistemas e Robótica
Conference proceedings : ... Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Conference | Year: 2010

We propose an improved Gradient Vector Flow (iGVF) for active contour detection. The proposed algorithm overcomes the problems of the GVF that occur in noisy images with cluttered backgrounds. We experimentally show that this modified version of the GVF algorithm performs better in noisy images. The main difference is the use of more robust and informative features (edge segments), which significantly reduce the influence of noise. Experiments with real data from several image modalities are presented to illustrate the performance of the proposed approach.
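For context, the classic GVF that this work improves on diffuses an edge map's gradient into a smooth vector field that guides the active contour. A compact numpy sketch of that standard iteration (periodic boundaries via np.roll; the paper's edge-segment features are not reproduced here) is:

```python
import numpy as np

def gvf(fx, fy, mu=0.2, iters=100):
    """Classic Gradient Vector Flow: iteratively diffuse the edge-map
    gradient (fx, fy) into a smooth field (u, v). Where the edge map
    is strong, the field stays close to the gradient; elsewhere it is
    smoothed by the Laplacian term."""
    u, v = fx.copy(), fy.copy()
    mag2 = fx**2 + fy**2

    def lap(a):  # 5-point Laplacian with periodic boundaries
        return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
                np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

    for _ in range(iters):
        u += mu * lap(u) - mag2 * (u - fx)
        v += mu * lap(v) - mag2 * (v - fy)
    return u, v

# toy edge map: uniform horizontal gradient, no vertical component
fx, fy = np.ones((5, 5)), np.zeros((5, 5))
u, v = gvf(fx, fy)
```

The iGVF's contribution lies in what feeds this diffusion: edge segments rather than a raw, noise-sensitive edge map.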


Carneiro G.,University of Adelaide | Nascimento J.C.,Instituto de Sistemas e Robótica
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition | Year: 2012

The use of statistical pattern recognition models to segment the left ventricle of the heart in ultrasound images has gained substantial attention over the last few years. The main obstacle for the wider exploration of this methodology lies in the need for large annotated training sets, which are used for the estimation of the statistical model parameters. In this paper, we present a new on-line co-training methodology that reduces the need for large training sets for such parameter estimation. Our approach learns the initial parameters of two different models using a small manually annotated training set. Then, given each frame of a test sequence, the methodology not only produces the segmentation of the current frame, but it also uses the results of both classifiers to retrain each other incrementally. This on-line aspect of our approach has the advantages of producing segmentation results and retraining the classifiers on the fly as frames of a test sequence are presented, but it introduces a harder learning setting compared to the usual off-line co-training, where the algorithm has access to the whole set of un-annotated training samples from the beginning. Moreover, we introduce the use of the following new types of classifiers in the co-training framework: deep belief network and multiple model probabilistic data association. We show that our method leads to a fully automatic left ventricle segmentation system that achieves state-of-the-art accuracy on a public database with training sets containing at least twenty annotated images. © 2012 IEEE.
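The on-line co-training loop described above can be sketched generically: per frame, both classifiers predict, the more confident result is emitted, and when the two agree their outputs become fresh training signal for each other. The classifier interface below (predict/update, stubs in place of the paper's deep belief network and probabilistic data association models) is hypothetical, for illustration only.

```python
class StubClassifier:
    """Toy stand-in for one of the two co-trained models;
    the predict/update interface is an assumption for this sketch."""
    def __init__(self, label, confidence):
        self.label, self.confidence, self.n_updates = label, confidence, 0

    def predict(self, frame):
        return self.label, self.confidence

    def update(self, frame, segmentation):
        self.n_updates += 1  # a real model would retrain incrementally here

def online_cotrain(frames, clf_a, clf_b, agree):
    """On-line co-training sketch: segment each frame with both
    classifiers, output the more confident result, and let agreeing
    results retrain the *other* classifier on the fly."""
    segmentations = []
    for frame in frames:
        seg_a, conf_a = clf_a.predict(frame)
        seg_b, conf_b = clf_b.predict(frame)
        segmentations.append(seg_a if conf_a >= conf_b else seg_b)
        if agree(seg_a, seg_b):
            clf_a.update(frame, seg_b)   # each model teaches the other
            clf_b.update(frame, seg_a)
    return segmentations

a = StubClassifier("contour", 0.9)
b = StubClassifier("contour", 0.5)
segs = online_cotrain([1, 2, 3], a, b, agree=lambda s, t: s == t)
```

The single pass over frames is what distinguishes this from off-line co-training, which sees the whole unlabeled pool up front.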


Martins A.F.T.,Carnegie Mellon University | Martins A.F.T.,Telecommunications Institute of Portugal | Smith N.A.,Carnegie Mellon University | Aguiar P.M.Q.,Instituto de Sistemas e Robótica | Figueiredo M.A.T.,Telecommunications Institute of Portugal
EMNLP 2011 - Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference | Year: 2011

Dual decomposition has been recently proposed as a way of combining complementary models, with a boost in predictive power. However, in cases where lightweight decompositions are not readily available (e.g., due to the presence of rich features or logical constraints), the original subgradient algorithm is inefficient. We sidestep that difficulty by adopting an augmented Lagrangian method that accelerates model consensus by regularizing towards the averaged votes. We show how first-order logical constraints can be handled efficiently, even though the corresponding subproblems are no longer combinatorial, and report experiments in dependency parsing, with state-of-the-art results. © 2011 Association for Computational Linguistics.
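The augmented-Lagrangian consensus idea in this abstract can be illustrated with a generic ADMM-style loop: each subproblem is solved with a quadratic penalty pulling it toward the current average, then the consensus variable and scaled duals are updated. The toy quadratic subproblems below are an assumption for the sketch; the paper's subproblems are structured (combinatorial or constrained), not quadratics.

```python
import numpy as np

def admm_consensus(local_solvers, n_vars, rho=1.0, iters=50):
    """Consensus ADMM sketch: each solver returns
    argmin_x f_s(x) + (rho/2) * ||x - v||^2 for its penalized target v;
    the consensus z regularizes all subproblems toward the averaged votes."""
    S = len(local_solvers)
    z = np.zeros(n_vars)                 # consensus variable
    u = np.zeros((S, n_vars))            # scaled dual variables
    for _ in range(iters):
        x = np.stack([solve(z - u[s], rho)
                      for s, solve in enumerate(local_solvers)])
        z = (x + u).mean(axis=0)         # regularize toward the average
        u += x - z                       # dual (disagreement) update
    return z

# toy subproblems f_s(x) = 0.5 * (x - a_s)^2, closed-form penalized solution
solvers = [lambda v, rho, a=a: (a + rho * v) / (1.0 + rho)
           for a in (np.array([1.0]), np.array([3.0]))]
z = admm_consensus(solvers, n_vars=1)    # converges to the average, 2.0
```

The quadratic penalty is what accelerates consensus relative to the plain subgradient method, at the cost of subproblems that are no longer purely combinatorial.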


Nascimento J.C.,Instituto de Sistemas e Robótica | Carneiro G.,University of Adelaide
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition | Year: 2014

In this paper, we propose a new methodology for segmenting non-rigid visual objects, where the search procedure is conducted directly on a sparse low-dimensional manifold, guided by the classification results computed from a deep belief network. Our main contribution is the fact that we do not rely on the typical sub-division of segmentation tasks into rigid detection and non-rigid delineation. Instead, the non-rigid segmentation is performed directly, where points in the sparse low-dimensional manifold can be mapped to an explicit contour representation in image space. Our proposal shows significantly smaller search and training complexities, given that the dimensionality of the manifold is much smaller than the dimensionality of the aforementioned search spaces for rigid detection and non-rigid delineation, and that we no longer require a two-stage segmentation process. We focus on the problem of left ventricle endocardial segmentation from ultrasound images, and lip segmentation from frontal facial images using the extended Cohn-Kanade (CK+) database. Our experiments show that the use of sparse low-dimensional manifolds reduces the search and training complexities of current segmentation approaches without a significant impact on the segmentation accuracy shown by state-of-the-art approaches. © 2014 IEEE.
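The core idea of searching in a low-dimensional space and decoding the winner to a contour can be sketched with a linear decoder standing in for the learned sparse manifold (all names, the 2-D basis, and the scoring function are hypothetical; the paper's classifier is a deep belief network):

```python
import numpy as np

def decode_contour(z, mean_contour, basis):
    """Map a low-dimensional coordinate z to an explicit contour in
    image space; a linear (PCA-like) decoder stands in for the
    learned sparse manifold."""
    return mean_contour + basis @ z

def segment(candidate_coords, score, mean_contour, basis):
    """Search directly in the low-dimensional space: score each
    candidate coordinate with a classifier, then decode only the
    best one to a contour (no separate rigid-detection stage)."""
    best = max(candidate_coords, key=score)
    return decode_contour(best, mean_contour, basis)

mean_contour = np.zeros(4)        # 2 contour points, flattened (x1, y1, x2, y2)
basis = np.eye(4, 2)              # hypothetical 2-D manifold basis
coords = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
contour = segment(coords, score=lambda z: z.sum(),
                  mean_contour=mean_contour, basis=basis)
```

The complexity saving comes from the candidate search living in the manifold's few dimensions rather than in the full detection-plus-delineation search space.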
