Research and Technology Innovation Center

Manaus, Brazil


Couto T.,Federal University of Minas Gerais | Ziviani N.,Federal University of Minas Gerais | Calado P.,University of Lisbon | Cristo M.,Research and Technology Innovation Center | And 3 more authors.
Information Retrieval | Year: 2010

Automatic document classification can be used to organize documents in a digital library, construct on-line directories, improve the precision of web searching, or support the interactions between users and search engines. In this paper we explore how linkage information inherent to different document collections can be used to enhance the effectiveness of classification algorithms. We experimented with three link-based bibliometric measures, co-citation, bibliographic coupling, and Amsler, on three different document collections: a digital library of computer science papers, a web directory, and an on-line encyclopedia. Results show that both hyperlink and citation information can be used to learn reliable and effective kNN-based classifiers. In one of the test collections used, we obtained improvements of up to 69.8% in macro-averaged F1 over the traditional text-based kNN classifier, used as the baseline in our experiments. We also present alternative ways of combining bibliometric-based classifiers with text-based classifiers. Finally, we analyzed the situations in which the bibliometric-based classifiers failed and show that in such cases it is hard to reach consensus regarding the correct classes, even for human judges. © 2009 Springer Science+Business Media, LLC.
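The three bibliometric measures named in the abstract can be sketched as set operations on a document's link neighborhoods. This is an illustrative sketch, not the paper's implementation: the function names and the unweighted, unnormalized counts are assumptions.

```python
def cocitation(citers_of_a, citers_of_b):
    """Co-citation: number of documents that cite both a and b."""
    return len(citers_of_a & citers_of_b)

def bibliographic_coupling(refs_of_a, refs_of_b):
    """Bibliographic coupling: number of references shared by a and b."""
    return len(refs_of_a & refs_of_b)

def amsler(citers_of_a, citers_of_b, refs_of_a, refs_of_b):
    """Amsler: overlap of the combined (citers + references) neighborhoods,
    merging the two measures above."""
    return len((citers_of_a | refs_of_a) & (citers_of_b | refs_of_b))
```

Any of these scores can then serve as the similarity function of a kNN classifier in place of (or combined with) text similarity.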

Berlt K.,Federal University of Amazonas | de Moura E.S.,Federal University of Amazonas | Carvalho A.,Federal University of Amazonas | Cristo M.,Research and Technology Innovation Center | And 2 more authors.
Information Systems | Year: 2010

In this work we propose a model that represents the web as a directed hypergraph (instead of a graph), where links connect pairs of disjoint sets of pages. The web hypergraph is derived from the web graph by dividing the set of pages into non-overlapping blocks and using the links between pages of distinct blocks to create hyperarcs. A hyperarc connects a block of pages to a single page, in order to provide more reliable information for link analysis. We use the hypergraph model to create hypergraph versions of the Pagerank and Indegree algorithms, referred to as HyperPagerank and HyperIndegree, respectively. The hypergraph is derived from the web graph by grouping pages according to two different partition criteria: pages that belong to the same web host or to the same web domain. We compared the original page-based algorithms with the host-based and domain-based versions, considering a combination of page reputation, the textual content of the pages, and the anchor text. Experimental results on three distinct web collections show that HyperPagerank and HyperIndegree may yield better results than the original graph versions of Pagerank and Indegree. We also show that the hypergraph versions of the algorithms were slightly less affected by noise links and spamming. © 2009 Elsevier B.V. All rights reserved.
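The hypergraph construction can be illustrated with the simpler of the two algorithms. The sketch below, with assumed names and data shapes, collapses each page-to-page link into a block-to-page hyperarc, discards intra-block links, and scores a page by the number of distinct source blocks (e.g. hosts) linking to it, in the spirit of HyperIndegree:

```python
from collections import defaultdict

def hyper_indegree(links, block_of):
    """links: iterable of (source_page, target_page) pairs.
    block_of: maps each page to its block (host or domain).
    Returns {page: number of distinct blocks linking to it}.
    Intra-block links are dropped, as in the hypergraph construction."""
    incoming_blocks = defaultdict(set)
    for src, dst in links:
        if block_of[src] != block_of[dst]:
            incoming_blocks[dst].add(block_of[src])
    return {page: len(blocks) for page, blocks in incoming_blocks.items()}
```

Counting source blocks rather than source pages is what makes the measure harder to inflate with many spam pages on a single host.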

Farina A.,University of Parma | Capra A.,University of Parma | Chiesi L.,University of Parma | Scopece L.,Research and Technology Innovation Center
Proceedings of the AES International Conference | Year: 2011

The paper describes the theory and the first operational results of a new multichannel recording system based on a 32-capsule spherical microphone array. Up to 7 virtual microphones can be synthesized in real time, dynamically choosing the directivity pattern (from standard cardioid to 6th-order ultradirective) and the aiming. A graphical user interface allows the virtual microphones to be moved over a 360-degree video image. The system employs a novel mathematical theory for computing the matrix of massive FIR filters, which are convolved in real time and with small latency thanks to a partitioned convolution processor.
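The core signal path can be sketched as follows: each virtual microphone is the sum, over all capsules, of the capsule signal convolved with its FIR filter. This is a minimal illustration only; the filter design from the desired directivity and the low-latency partitioned convolution of the actual system are not shown.

```python
import numpy as np

def synthesize_virtual_mic(capsule_signals, fir_filters):
    """capsule_signals: (M, N) array, one row per capsule.
    fir_filters: (M, L) array, one FIR filter per capsule, precomputed
    for the chosen directivity pattern and aiming (assumed given here).
    Returns the virtual microphone signal, length N + L - 1."""
    M, N = capsule_signals.shape
    out = np.zeros(N + fir_filters.shape[1] - 1)
    for m in range(M):
        out += np.convolve(capsule_signals[m], fir_filters[m])
    return out
```

In the real system this filter-and-sum runs once per virtual microphone (up to 7 in parallel), with the filter matrix recomputed whenever the user re-aims a microphone.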

Cioni S.,European Space Agency | Colavolpe G.,University of Parma | Mignone V.,Research and Technology Innovation Center | Modenini A.,University of Parma | And 4 more authors.
International Journal of Satellite Communications and Networking | Year: 2015

The second-generation specification of digital video broadcasting for satellite (DVB-S2) was developed in 2003 with the aim of improving the existing broadcasting standard DVB-S. The main new features introduced by DVB-S2 included increased baud rates, higher-cardinality constellations (up to 32 points), and more efficient binary codes. The extension of DVB-S2, approved in 2014 under the name DVB-S2X, together with continuous technological evolution, takes further steps in this direction, with constellations of cardinality up to 256 points, improved granularity of modulation and coding schemes, and the possibility of increasing the baud rate. In this scenario, it is important to ascertain the best transceiver structure, starting from the choice of the shaping pulse and the baud rate of the transmitted signals and ending with the most promising receiver architectures, with the aim of maximizing the spectral efficiency. In this paper, we discuss some aspects of this investigation, namely the optimization of the transmission parameters and the description of an efficient receiver. We then assess the performance of the proposed scheme in comparison with a classical DVB-S2 architecture. Some synchronization aspects are also discussed, to account for the impairments introduced by the channel. © 2015 John Wiley & Sons, Ltd.

Ugolini A.,University of Parma | Modenini A.,University of Parma | Colavolpe G.,University of Parma | Picchi G.,University of Parma | And 2 more authors.
2014 7th Advanced Satellite Multimedia Systems Conference and the 13th Signal Processing for Space Communications Workshop, ASMS/SPSC 2014 | Year: 2014

We investigate different techniques to improve the spectral efficiency of systems based on the DVB-S2 standard when the transmitted signal bandwidth cannot be increased because it has already been optimized to the maximum value allowed by the transponder filters. We investigate and compare several techniques involving different sections of the transceiver scheme. The techniques considered include the use of advanced detection algorithms, the adoption of time packing, and the optimization of the constellation and shaping pulses. The LDPC codes recently proposed for the evolution of the DVB-S2 standard are considered, as well as the adoption of iterative detection and decoding. An information-theoretical analysis is followed by the study of practical modulation and coding schemes. © 2014 IEEE.

Baracca P.,University of Padua | Tomasin S.,University of Padua | Vangelista L.,University of Padua | Benvenuto N.,University of Padua | Morello A.,Research and Technology Innovation Center
IEEE Transactions on Communications | Year: 2011

In orthogonal frequency division multiplexing (OFDM) communication systems, mobility results in time variations of the channel, which yield intercarrier interference (ICI), especially when large OFDM blocks are employed in order to achieve a high spectral efficiency. In this letter we focus on systems with very long OFDM blocks, where many of the existing ICI mitigation techniques cannot be applied due to complexity constraints. To mitigate ICI we propose a pre-equalizer, operating on sub-blocks of the received OFDM block, whose aim is to force all sub-blocks to have almost the same equivalent channel. In other words, the pre-equalizer combats only the time variations of the channel. Then, after OFDM demodulation, a classical equalizer compensates for the frequency selectivity of the target channel. The performance of the proposed scheme, together with a suitable channel estimator operating on a per-sub-block basis, is evaluated for a digital video broadcasting scenario, according to the DVB-T2 standard, where the OFDM block size may be as large as 32k, and for its possible extension to hand-held devices in a next-generation DVB-H. © 2011 IEEE.
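The pre-equalization idea can be illustrated in a deliberately simplified form: split the received block into sub-blocks, and scale each one so that its equivalent channel matches a common target, leaving only frequency selectivity for the post-FFT equalizer. The one-tap-per-sub-block channel model and the choice of the first sub-block as target are simplifying assumptions, not the letter's actual design.

```python
import numpy as np

def sub_block_pre_equalize(rx_block, h_sub, n_sub):
    """rx_block: received time-domain OFDM block (length divisible by n_sub).
    h_sub: estimated (here scalar) channel gain for each sub-block.
    Each sub-block is scaled so it appears to have passed through the
    target channel h_sub[0]; time variation across the block is removed."""
    h_target = h_sub[0]
    sub_blocks = np.split(np.asarray(rx_block), n_sub)
    equalized = [s * (h_target / h) for s, h in zip(sub_blocks, h_sub)]
    return np.concatenate(equalized)
```

Because each sub-block needs only one correction factor, the complexity grows with the number of sub-blocks rather than with the full 32k block length, which is what makes the approach viable under the letter's complexity constraints.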

Huang J.,Polytechnic University of Turin | Lo Presti L.,Polytechnic University of Turin | Garello R.,Polytechnic University of Turin | Sacco B.,Research and Technology Innovation Center
2012 4th International Conference on Communications and Electronics, ICCE 2012 | Year: 2012

In this paper, we study a positioning method working in DVB-T single frequency networks (SFNs). These networks have several properties useful for positioning (synchronized emitters at fixed and known locations, high SNR, OFDM signals containing pilot subcarriers). Thus, their signals may be used as signals of opportunity when the user enters a GNSS-hostile environment (e.g. dense urban or indoor areas). One of the challenges to overcome is distinguishing the signals from different emitters. Here we suppose that the user can first compute his position by GNSS during an initialization phase, which is used for solving all the ambiguities concerning the DVB-T emitters. After that, DVB-T signals can be used to aid positioning when the user enters a GNSS-blocked area, up to the limit case where no GNSS satellites are in view and only DVB-T signals are used for positioning. We tested this method by simulation, adopting the free-space loss model for the emitter attenuations and a two-ray model for multipath. The obtained results show good performance if the receiver correctly associates the signals during the user's motion. © 2012 IEEE.
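The free-space loss model mentioned for the emitter attenuations is a standard formula and can be stated directly; the function name is an assumption, but the formula itself is the textbook one.

```python
import math

def free_space_loss_db(d_m, f_hz):
    """Free-space path loss in dB for distance d_m (metres) and
    carrier frequency f_hz (Hz):
        FSPL = 20*log10(4*pi*d*f / c)
    which is the attenuation model the simulation adopts per emitter."""
    c = 299_792_458.0  # speed of light, m/s
    return 20.0 * math.log10(4.0 * math.pi * d_m * f_hz / c)
```

For example, a DVB-T emitter at 600 MHz seen from 1 km away suffers roughly 88 dB of free-space loss; the two-ray multipath model used in the paper is then applied on top of this baseline attenuation.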
