Farina A.,University of Parma | Capra A.,University of Parma | Chiesi L.,University of Parma | Scopece L.,Research and Technology Innovation Center
Proceedings of the AES International Conference | Year: 2011

The paper describes the theory and the first operational results of a new multichannel recording system based on a 32-capsule spherical microphone array. Up to 7 virtual microphones can be synthesized in real time, dynamically choosing the directivity pattern (from standard cardioid to 6th-order ultradirective) and the aiming direction. A graphical user interface allows the virtual microphones to be moved over a 360-degree video image. The system employs a novel mathematical theory for computing the matrix of massive FIR filters, which are convolved in real time and with small latency thanks to a partitioned convolution processor.
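
As an illustration of the convolution stage described above, the following minimal Python sketch applies a precomputed matrix of FIR filters to the capsule signals to synthesize the virtual microphone outputs. All names and array shapes are assumptions for illustration; the filter design and the low-latency partitioned convolution engine of the actual system are not reproduced.

```python
import numpy as np
from scipy.signal import fftconvolve

def synthesize_virtual_mics(capsule_signals, fir_matrix):
    """Apply a matrix of FIR filters to the capsule signals.

    capsule_signals: (n_capsules, n_samples) array, e.g. the 32 capsule recordings.
    fir_matrix: (n_virtual, n_capsules, fir_len) array of FIR filters, one per
        (virtual microphone, capsule) pair, precomputed for the chosen
        directivity pattern and aiming direction.
    """
    n_virtual, n_capsules, fir_len = fir_matrix.shape
    n_samples = capsule_signals.shape[1]
    out = np.zeros((n_virtual, n_samples + fir_len - 1))
    for v in range(n_virtual):          # up to 7 virtual microphones
        for c in range(n_capsules):     # 32 capsules
            # sum the contribution of each capsule, filtered by its FIR
            out[v] += fftconvolve(capsule_signals[c], fir_matrix[v, c])
    return out
```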


Berlt K.,Federal University of Amazonas | de Moura E.S.,Federal University of Amazonas | Carvalho A.,Federal University of Amazonas | Cristo M.,Research and Technology Innovation Center | And 2 more authors.
Information Systems | Year: 2010

In this work we propose a model that represents the web as a directed hypergraph (instead of a graph), where links connect pairs of disjoint sets of pages. The web hypergraph is derived from the web graph by dividing the set of pages into non-overlapping blocks and using the links between pages of distinct blocks to create hyperarcs. A hyperarc connects a block of pages to a single page, in order to provide more reliable information for link analysis. We use the hypergraph model to create the hypergraph versions of the Pagerank and Indegree algorithms, referred to as HyperPagerank and HyperIndegree, respectively. The hypergraph is derived from the web graph by grouping pages according to two different partition criteria: grouping together the pages that belong to the same web host or to the same web domain. We compared the original page-based algorithms with the host-based and domain-based versions, considering a combination of page reputation, the textual content of the pages and the anchor text. Experimental results using three distinct web collections show that the HyperPagerank and HyperIndegree algorithms may yield better results than the original graph versions of the Pagerank and Indegree algorithms. We also show that the hypergraph versions of the algorithms are slightly less affected by noise links and spamming. © 2009 Elsevier B.V. All rights reserved.
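
The following Python sketch illustrates the block-based idea behind HyperIndegree under the host partition: a page's score counts the distinct source hosts linking to it, and links between pages of the same host are ignored. The data layout and the host-extraction step are illustrative assumptions, not the authors' implementation.

```python
from collections import defaultdict
from urllib.parse import urlparse

def hyper_indegree(links):
    """links: iterable of (source_url, target_url) pairs from the web graph.

    Returns {target_url: number of distinct source hosts linking to it},
    i.e. a host-partition HyperIndegree-style score. Links whose source and
    target share the same host stay inside a block and are discarded.
    """
    source_blocks = defaultdict(set)
    for src, dst in links:
        src_block = urlparse(src).netloc   # block = web host (the host partition)
        dst_block = urlparse(dst).netloc
        if src_block != dst_block:         # only inter-block links form hyperarcs
            source_blocks[dst].add(src_block)
    return {page: len(blocks) for page, blocks in source_blocks.items()}
```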


Couto T.,Federal University of Minas Gerais | Ziviani N.,Federal University of Minas Gerais | Calado P.,University of Lisbon | Cristo M.,Research and Technology Innovation Center | And 3 more authors.
Information Retrieval | Year: 2010

Automatic document classification can be used to organize documents in a digital library, construct on-line directories, improve the precision of web searching, or improve the interaction between users and search engines. In this paper we explore how linkage information inherent to different document collections can be used to enhance the effectiveness of classification algorithms. We experimented with three link-based bibliometric measures, co-citation, bibliographic coupling and Amsler, on three different document collections: a digital library of computer science papers, a web directory and an on-line encyclopedia. Results show that both hyperlink and citation information can be used to learn reliable and effective classifiers based on a kNN classifier. In one of the test collections used, we obtained improvements of up to 69.8% in macro-averaged F1 over the traditional text-based kNN classifier, which serves as the baseline in our experiments. We also present alternative ways of combining bibliometric-based classifiers with text-based classifiers. Finally, we analyze the cases in which the bibliometric-based classifiers failed and show that in such cases it is hard to reach consensus on the correct classes, even for human judges. © 2009 Springer Science+Business Media, LLC.
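
The following Python sketch shows how one of the bibliometric measures named above, co-citation, could drive a kNN classifier: the similarity between two documents is the number of documents citing both, and the query is assigned the majority class among its k most co-cited training documents. Data structures and names are illustrative assumptions, not the authors' code.

```python
from collections import Counter

def cocitation(cited_by, d1, d2):
    """Co-citation of d1 and d2: number of documents that cite both.
    cited_by: {doc_id: set of doc_ids citing that document}."""
    return len(cited_by.get(d1, set()) & cited_by.get(d2, set()))

def knn_classify(query, train_docs, labels, cited_by, k=10):
    """Assign the majority class among the k training documents that are
    most strongly co-cited with the query document."""
    neighbours = sorted(train_docs,
                        key=lambda d: cocitation(cited_by, query, d),
                        reverse=True)[:k]
    votes = Counter(labels[d] for d in neighbours)
    return votes.most_common(1)[0][0]
```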


Baracca P.,University of Padua | Tomasin S.,University of Padua | Vangelista L.,University of Padua | Benvenuto N.,University of Padua | Morello A.,Research and Technology Innovation Center
IEEE Transactions on Communications | Year: 2011

In orthogonal frequency division multiplexing (OFDM) communication systems, mobility results in time variations of the channel, which yield intercarrier interference (ICI), especially when large OFDM blocks are employed in order to achieve a high spectral efficiency. In this letter we focus on systems with very long OFDM blocks, where many of the existing ICI mitigation techniques cannot be applied due to complexity constraints. To mitigate ICI we propose a pre-equalizer, operating on sub-blocks of the received OFDM block, whose aim is to force all sub-blocks to have almost the same equivalent channel. In other words, the pre-equalizer combats only the time variations of the channel; after OFDM demodulation, a classical equalizer then compensates the frequency selectivity of the target channel. The performance of the proposed scheme, together with a suitable channel estimator operating on a per-sub-block basis, is evaluated for a digital video broadcasting scenario according to the DVB-T2 standard, where the OFDM block size may be 32k, and for its possible extension to hand-held devices in a next-generation DVB-H system. © 2011 IEEE.
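
The following deliberately simplified Python sketch illustrates the sub-block pre-equalization idea with a single complex gain per sub-block: each sub-block of the received time-domain OFDM block is scaled toward a common target channel before demodulation, leaving the frequency selectivity of that target channel to the conventional equalizer. The actual pre-equalizer and the per-sub-block channel estimator in the letter are more elaborate; all names here are assumptions.

```python
import numpy as np

def pre_equalize(rx_block, subblock_gains, target_gain):
    """Scale each sub-block of the received time-domain OFDM block so that
    its (assumed roughly flat) channel gain matches a common target.

    rx_block: complex samples of one long OFDM block (e.g. 32k mode).
    subblock_gains: per-sub-block channel gain estimates.
    target_gain: the common equivalent channel forced on all sub-blocks.
    """
    n_subblocks = len(subblock_gains)
    sub_len = len(rx_block) // n_subblocks
    out = np.array(rx_block, dtype=complex)
    for i, g in enumerate(subblock_gains):
        sl = slice(i * sub_len, (i + 1) * sub_len)
        out[sl] *= target_gain / g   # push sub-block i toward the target channel
    return out

# np.fft.fft(out) then feeds the classical per-subcarrier equalizer, which only
# has to compensate the frequency selectivity of the target channel.
```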


Huang J.,Polytechnic University of Turin | Lo Presti L.,Polytechnic University of Turin | Garello R.,Polytechnic University of Turin | Sacco B.,Research and Technology Innovation Center
2012 4th International Conference on Communications and Electronics, ICCE 2012 | Year: 2012

In this paper, we study a positioning method working in DVB-T single frequency networks (SFNs). These networks have several properties useful for positioning: synchronized emitters at fixed and known locations, high SNR, and OFDM signals containing pilot subcarriers. Their signals may therefore be used as signals of opportunity when the user enters a GNSS-hostile environment (e.g. dense urban or indoor areas). One of the challenges to overcome is distinguishing the signals coming from different emitters. Here we assume that the user can first compute its position by GNSS during an initialization phase, which is used to resolve all the ambiguities concerning the DVB-T emitters. After that, DVB-T signals can be used to aid positioning when the user enters a GNSS-blocked area, up to the limiting case where no GNSS satellites are in view and only DVB-T signals are used for positioning. We tested this method by simulation, adopting the free-space loss model for the emitter attenuations and a two-ray model for multipath. The obtained results show good performance, provided that the receiver correctly associates the signals with their emitters during the user's motion. © 2012 IEEE.
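
Assuming the emitter ambiguities have already been resolved during the GNSS initialization phase, the positioning step can be posed as a time-difference-of-arrival (TDOA) problem solved by nonlinear least squares, as in the hedged Python sketch below. The data format, helper names and the use of scipy are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0  # speed of light, m/s

def tdoa_residuals(pos, emitters, tdoas):
    """emitters: (n, 2) array of known emitter coordinates in a local frame (m).
    tdoas: list of (i, j, dt) with dt the measured arrival-time difference
        between the signals of emitters i and j, in seconds."""
    res = []
    for i, j, dt in tdoas:
        di = np.linalg.norm(pos - emitters[i])
        dj = np.linalg.norm(pos - emitters[j])
        res.append((di - dj) - C * dt)   # range-difference mismatch
    return res

def locate(emitters, tdoas, initial_guess):
    """initial_guess: e.g. the last GNSS fix obtained during initialization."""
    return least_squares(tdoa_residuals, initial_guess,
                         args=(emitters, tdoas)).x
```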
