Pischella M.,Superieur d'Electronique de Paris | Belfiore J.-C.,Ecole Nationale Superieure des Telecommunications
IEEE Transactions on Vehicular Technology | Year: 2010

This paper addresses resource allocation for weighted sum throughput maximization (WSTM) in multicell orthogonal frequency-division multiple-access (OFDMA) networks. Two suboptimal algorithms with polynomial-time complexity are provided. First, we determine a graph-based subcarrier allocation algorithm allowing joint transmission of two interfering links whenever such a transmission fulfills the pairwise WSTM objective. An analytical expression of the optimal power allocation with two interfering cells in single-carrier transmission is obtained, and the capacity region study serves as the basis to build the interference graph. Second, we present a distributed power-control algorithm suitable for any signal-to-interference-plus-noise ratio (SINR) regime. It rejects users and subcarriers with weighted SINR that is too low before operating in the high SINR regime for the remaining users and subcarriers. Both algorithms and their combination are assessed via dynamic simulations, where the weight of each user is proportional to its queue length. They are shown to significantly decrease resource consumption and efficiently balance users' queue lengths. © 2010 IEEE.
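
The second, distributed power-control step lends itself to a small illustration. The sketch below is a toy, single-transmitter stand-in (hypothetical function and parameter names, not the paper's multicell algorithm): it mimics the idea of rejecting subcarriers whose weighted SINR is too low and then allocating the power budget among the survivors under a high-SINR assumption, where weighted sum log-rate is maximized by weight-proportional power sharing.

```python
import numpy as np

def prune_and_allocate(weights, sinr_per_unit_power, p_total, sinr_min=1.0):
    """Toy reject-then-allocate power-control step (illustrative only).

    weights             : per-subcarrier user weights (e.g. queue lengths)
    sinr_per_unit_power : SINR obtained on each subcarrier per unit of power
    p_total             : total power budget of the transmitter
    sinr_min            : weighted-SINR threshold below which a subcarrier is rejected
    """
    weights = np.asarray(weights, dtype=float)
    gains = np.asarray(sinr_per_unit_power, dtype=float)

    # Step 1: reject subcarriers whose weighted SINR at uniform power is too low.
    p_uniform = p_total / len(gains)
    keep = weights * gains * p_uniform >= sinr_min

    # Step 2: in the high-SINR regime, the weighted sum of log(SINR) over the kept
    # subcarriers is maximized by sharing the budget proportionally to the weights.
    power = np.zeros_like(gains)
    if keep.any():
        power[keep] = p_total * weights[keep] / weights[keep].sum()
    return power

# Example: four subcarriers; the low-gain, low-weight one is rejected and the
# remaining three share the power budget in proportion to their weights.
print(prune_and_allocate(weights=[3.0, 1.0, 2.0, 0.1],
                         sinr_per_unit_power=[0.8, 1.2, 0.5, 0.05],
                         p_total=4.0))
```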


Larbi S.,National Engineering School of Tunis | Jaidane M.,National Engineering School of Tunis | Moreau N.,Ecole Nationale Superieure des Telecommunications
European Signal Processing Conference | Year: 2015

Digital audio watermarking can be viewed as a communication system (cf. figure 1) where the information vn is embedded imperceptibly into the digital audio signal xn through an appropriate spectral shaping filter Hj. The resulting watermark tn should be robust to standard signal manipulations (e.g. compression, A/D-D/A conversion, resampling, ...) on the one hand, and to intentional pirate attacks on the other. In particular, all-pass filter attacks introduce a non-minimum-phase problem in the watermarking detection scheme of figure 1, which strongly degrades the performance of detection techniques based on mean-square criteria. In this paper we propose a solution to the above-mentioned problem, combining Wiener deconvolution and blind equalization based on a non-quadratic criterion. © 2002 EUSIPCO.
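
As a rough illustration of the embed-and-detect chain described above (not the authors' scheme; the filter, power level and threshold here are arbitrary), a pseudo-random watermark can be shaped by a filter, added at low level to the host signal, and recovered by correlating against the shaped carrier; an all-pass attack would distort the phase and break exactly this kind of mean-square detector.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
n = 4096
x = rng.standard_normal(n)                 # host audio signal x_n (toy stand-in)
info_bit = 1                               # one information bit to embed
w = info_bit * rng.choice([-1.0, 1.0], n)  # pseudo-random carrier known to the detector

# Spectral shaping filter H (here an arbitrary low-pass FIR, standing in for a
# perceptually tuned filter).
h = np.array([0.5, 0.3, 0.15, 0.05])
t = 0.05 * lfilter(h, [1.0], w)            # shaped, low-power watermark t_n
y = x + t                                  # watermarked signal

# Mean-square (correlation) detection: project the received signal onto the
# shaped carrier and read the sign of the projection.
ref = lfilter(h, [1.0], w)
score = np.dot(y, ref) / np.dot(ref, ref)
print("detected bit:", 1 if score > 0 else -1)
```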


Ciblat P.,Ecole Nationale Superieure des Telecommunications | Vandendorpe L.,Catholic University of Louvain
European Signal Processing Conference | Year: 2015

We consider transmission over a frequency-selective channel. We focus on the data-aided joint and directed maximum-likelihood estimation of the carrier frequency offset and of the dispersive channel. The directed estimators correspond to the frequency offset estimator that assumes the channel is known, and to the channel estimator that assumes the frequency offset is known. A comparison of directed and non-directed estimators based on asymptotic (large-sample) analysis is carried out and shows that the performance of the joint estimates and of the directed estimates differs little. © 2002 EUSIPCO.
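
One compact way to picture the joint data-aided estimator is a grid search over the frequency offset in which, for each candidate offset, the channel is estimated by least squares on the de-rotated training signal. The sketch below is a toy single-carrier illustration with made-up parameters, not the paper's estimators or its asymptotic analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
N, L = 200, 4                               # training length, number of channel taps
s = rng.choice([-1.0, 1.0], N)              # known (data-aided) training symbols
h_true = rng.standard_normal(L) / np.sqrt(L)
f_true = 0.013                              # normalized carrier frequency offset

# Convolution matrix of the training sequence: column k is the sequence delayed by k.
S = np.column_stack(
    [np.concatenate([np.zeros(k), s[:N - k]]) for k in range(L)]
).astype(complex)
y = np.exp(2j * np.pi * f_true * np.arange(N)) * (S @ h_true)
y += 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

best = None
for f in np.linspace(-0.05, 0.05, 2001):    # coarse grid search over the offset
    z = np.exp(-2j * np.pi * f * np.arange(N)) * y     # de-rotate with candidate offset
    h_hat, *_ = np.linalg.lstsq(S, z, rcond=None)      # LS channel given that offset
    cost = np.linalg.norm(z - S @ h_hat) ** 2
    if best is None or cost < best[0]:
        best = (cost, f, h_hat)

print("estimated offset:", best[1], "true offset:", f_true)
```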


Nicolas J.M.,Ecole Nationale Superieure des Telecommunications
European Signal Processing Conference | Year: 2015

Probability density functions defined on IR+ can be successfully modeled with the help of 'second kind statistics'. This new approach, proposed in [4], is based on the Mellin transform instead of the Fourier transform, so that classical probability density functions defined on IR+ can be identified with the help of 'second kind moments' and 'second kind cumulants', whose analytic expressions are remarkably simple. In this article, we propose to analyse α-stable positive distributions. Indeed, the classical moments of such distributions are generally not defined. Since it is possible to derive their 'second kind moments' and 'second kind cumulants', the parameters of such laws can be estimated with the help of very simple expressions. © 2002 EUSIPCO.
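
Empirically, the low-order second kind moments and cumulants are just the moments and cumulants of the log-transformed samples, which are always finite for positive-valued data. A minimal, generic estimator sketch (not tied to any particular positive α-stable parametrization, and using a log-normal sample as a heavy-tailed stand-in since no α-stable sampler is assumed here):

```python
import numpy as np

def second_kind_cumulants(x, order=3):
    """Empirical 'second kind' (log) cumulants of a positive-valued sample.

    The first three log-cumulants coincide with the mean, variance and third
    central moment of log(x); higher orders, which need the moment-to-cumulant
    recursion, are omitted in this sketch.
    """
    lx = np.log(np.asarray(x, dtype=float))
    k1 = lx.mean()
    k2 = lx.var()
    k3 = ((lx - k1) ** 3).mean()
    return (k1, k2, k3)[:order]

# Classical moments of very heavy-tailed positive laws may be undefined or wildly
# unstable, but the log-cumulants remain finite and easy to estimate.
rng = np.random.default_rng(2)
print(second_kind_cumulants(rng.lognormal(mean=0.0, sigma=2.0, size=100_000)))
```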


Cohen G.,Ecole Nationale Superieure des Telecommunications | Fachini E.,University of Rome La Sapienza | Korner J.,University of Rome La Sapienza
IEEE Transactions on Information Theory | Year: 2016

We begin a systematic study of the problem of the zero-error capacity of noisy binary channels with memory and solve some of the non-trivial cases. © 2015 IEEE.


Ciullo D.,Polytechnic University of Turin | Garcia M.A.,Polytechnic University of Turin | Horvath A.,Budapest University of Technology and Economics | Leonardi E.,Polytechnic University of Turin | And 4 more authors.
IEEE Transactions on Multimedia | Year: 2010

Early P2P-TV systems have already attracted millions of users, and many new commercial solutions are entering this market. Little information is however available about how these systems work, due to their closed and proprietary design. In this paper, we present large-scale experiments to compare three of the most successful P2P-TV systems, namely PPLive, SopCast and TVAnts. Our goal is to assess what level of "network awareness" has been embedded in the applications. We first define a general framework to quantify which network-layer parameters leverage application choices, i.e., which parameters mainly drive the peer selection and data exchange. We then apply the methodology to a large dataset, collected during a number of experiments in which we deployed about 40 peers in several European countries. From analysis of the dataset, we observe that TVAnts and PPLive exhibit a mild preference to exchange data with peers in the same autonomous system a peer belongs to, while this clustering effect is less intense in SopCast. However, no preference with respect to country, subnet or hop count is observed. Therefore, we believe that next-generation P2P live-streaming applications definitely need to improve their level of network awareness, so as to better localize traffic in the network and thus increase their network-friendliness as well. © 2009 IEEE.
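
The kind of AS-level preference metric discussed above can be illustrated with a toy computation: compare the share of bytes a peer exchanges inside its own AS with the share of contacted peers that live in that AS. Everything below is made-up data with a hypothetical IP-to-AS table; a real study would resolve ASes from routing data and aggregate over many peers.

```python
from collections import defaultdict

# Hypothetical IP-to-AS mapping and per-flow byte counts for one monitored peer.
ip_to_as = {"10.0.0.1": 100, "10.0.0.2": 100, "10.0.1.1": 200, "10.0.2.1": 300}
local_as = 100
flows = [("10.0.0.2", 5_000_000), ("10.0.1.1", 1_200_000), ("10.0.2.1", 800_000)]

bytes_per_as = defaultdict(int)
for peer_ip, nbytes in flows:
    bytes_per_as[ip_to_as[peer_ip]] += nbytes

total_bytes = sum(bytes_per_as.values())
intra_as_byte_share = bytes_per_as[local_as] / total_bytes
intra_as_peer_share = sum(ip_to_as[p] == local_as for p, _ in flows) / len(flows)

# A byte share well above the peer share hints at an AS-level preference.
print(f"intra-AS bytes {intra_as_byte_share:.2f} vs intra-AS peers {intra_as_peer_share:.2f}")
```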


Liu X.,University of Western Australia | Cardoso J.-F.,Ecole Nationale Superieure des Telecommunications | Randall R.B.,University of New South Wales
Mechanical Systems and Signal Processing | Year: 2010

This paper addresses the joint approximate diagonalization of simplified fourth-order cumulant matrices for very fast, large-scale blind separation of instantaneous-mixing-model sources. The JADE algorithm is widely accepted but is limited to small-scale separation tasks. The SHIBBS algorithm calculates only a fraction of the fourth-order cumulant set and avoids eigenmatrix decomposition to reduce calculation cost. However, it was seen to be slower than JADE at the time of its first publication and is hence less well known. On the other hand, the SJAD algorithm, which takes the same approach, is shown to be very fast. This paper studies the iteration convergence criterion and proposes a signal-to-noise-ratio-based iteration stopping threshold. The improved SHIBBS/SJAD algorithm is very fast and capable of large-scale separation. Experimental separation comparisons between SHIBBS/SJAD and FastICA are presented. © 2010 Elsevier Ltd. All rights reserved.
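
For orientation, the objects these algorithms jointly diagonalize can be estimated directly from whitened data. The sketch below uses the standard real-valued identity Q(M) = E[(xᵀMx)·xxᵀ] − tr(M)·I − M − Mᵀ, which holds for zero-mean data with identity covariance; it is a generic illustration, not the authors' optimized SHIBBS/SJAD implementation.

```python
import numpy as np

def cumulant_matrix(X, M):
    """Empirical fourth-order cumulant matrix Q(M) for zero-mean, whitened data.

    X : (T, n) array of whitened observations (identity covariance assumed)
    M : (n, n) weighting matrix
    Implements Q(M) = E[(x^T M x) x x^T] - tr(M) I - M - M^T.
    """
    T, n = X.shape
    q = (X * (X @ M)).sum(axis=1)             # x^T M x for every sample
    EM = (X * q[:, None]).T @ X / T           # E[(x^T M x) x x^T]
    return EM - np.trace(M) * np.eye(n) - M - M.T

# JADE-style algorithms estimate a set of such matrices (SHIBBS/SJAD use a reduced
# set) and look for the rotation that makes them all as diagonal as possible,
# stopping the Jacobi-type sweeps once the off-diagonal residual is small enough.
rng = np.random.default_rng(3)
S = rng.laplace(size=(10_000, 3))             # independent non-Gaussian sources
S = (S - S.mean(0)) / S.std(0)                # roughly whitened toy data
print(cumulant_matrix(S, np.eye(3)).round(2)) # nearly diagonal for independent sources
```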


Ciblat P.,Ecole Nationale Superieure des Telecommunications | Forster P.,Paris West University Nanterre La Défense | Larzabal P.,Ecole Normale Superieure de Cachan
European Signal Processing Conference | Year: 2015

We focus on harmonic retrieval in multiplicative and additive noise. At low SNR, the maximum-likelihood-based estimate does not reach the Cramér-Rao bound. Indeed, at low SNR the Cramér-Rao bound is no longer tight and has to be replaced with the so-called Barankin bound, which is tighter but more complicated. In this paper, we derive the Barankin bound when the multiplicative noise is complex-valued and non-circular. We observe that the Barankin bound is much greater than the Cramér-Rao bound, especially when the multiplicative noise is not sufficiently non-circular. © 2004 EUSIPCO.
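
For reference, the multiple-test-point (McAulay-Seidman) form of the Barankin bound underlying such comparisons can be written as below; this is the generic scalar-parameter form, not the paper's specific derivation for non-circular multiplicative noise. The Cramér-Rao bound is recovered as the test points shrink to zero, which is why the two bounds diverge precisely in the low-SNR threshold region.

```latex
% Barankin (McAulay-Seidman) bound for an unbiased estimator of a scalar parameter theta,
% with test points h_1,...,h_K and likelihood ratios L_k(x) = p(x; theta + h_k)/p(x; theta):
\operatorname{var}\{\hat{\theta}\}
  \;\ge\;
  \sup_{h_1,\dots,h_K}\;
  \mathbf{h}^{T}\bigl(\boldsymbol{\Phi}-\mathbf{1}\mathbf{1}^{T}\bigr)^{-1}\mathbf{h},
  \qquad
  [\boldsymbol{\Phi}]_{k\ell} \;=\; \mathbb{E}_{\theta}\!\left[L_{k}(x)\,L_{\ell}(x)\right].
% With a single test point h this reduces to the Chapman-Robbins form
%   var >= h^2 / ( E[L^2] - 1 ),
% and letting h -> 0 gives back the Cramer-Rao bound.
```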


Nicolas J.M.,Ecole Nationale Superieure des Telecommunications | Maruani A.,Ecole Nationale Superieure des Telecommunications
European Signal Processing Conference | Year: 2015

Probability density functions defined on IR+ can be successfully modeled with the help of the Mellin transform: this rather underrated transform is well suited to such functions, so we propose new definitions of 'second kind' characteristic functions based on it. In this way, second kind moments and second kind cumulants can also be defined, so that multiplicative noise, which can be seen as a Mellin convolution of probability density functions, can easily be analysed. The estimation of PDF parameters can be improved with this new approach. Indeed, it is possible to deal with lower-order statistics, and negative moments can be defined in such a way that the variances of the estimators are reduced. The analytical formulation of this variance is proposed and validated by numerical simulations for the Gamma law. With this new approach, the same estimator variance is reached with a smaller set of samples than with traditional methods. © 2000 EUSIPCO.
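
A concrete instance of this kind of parameter estimation: for a Gamma law with shape k and scale theta, the first two log-cumulants are E[ln x] = psi(k) + ln(theta) and Var[ln x] = psi'(k), so the shape follows by inverting the trigamma function and the scale from the first log-cumulant. The sketch below uses this classical relation in the standard shape/scale parametrization (not necessarily the paper's exact parametrization).

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import digamma, polygamma

def gamma_fit_log_cumulants(x):
    """Estimate Gamma(shape k, scale theta) from the first two log-cumulants.

    Uses E[ln x] = digamma(k) + ln(theta) and Var[ln x] = trigamma(k).
    """
    lx = np.log(np.asarray(x, dtype=float))
    k1, k2 = lx.mean(), lx.var()
    # The trigamma function is strictly decreasing on (0, inf): bracket and bisect.
    shape = brentq(lambda k: polygamma(1, k) - k2, 1e-4, 1e4)
    scale = np.exp(k1 - digamma(shape))
    return shape, scale

# Example: recover (shape=3, scale=2) from simulated Gamma samples.
rng = np.random.default_rng(4)
print(gamma_fit_log_cumulants(rng.gamma(shape=3.0, scale=2.0, size=50_000)))
```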


Battail G.,Ecole Nationale Superieure des Telecommunications
IEEE Transactions on Information Theory | Year: 2010

Heredity is relevant to information theory as a communication process. The conservation of genomes over intervals on the geological timescale and the existence of mutations over shorter intervals can be reconciled by assuming that genomes possess intrinsic error-correcting codes. The better conservation of the older parts of genomes leads us to assume that these codes are organized as a nested system set up over geological time, which protects a genomic message all the better the older it is. These hypotheses imply that genomes are redundant, that discrete species exist with a hierarchical taxonomy, that successive generations are needed, and that evolution is contingent and saltationist and trends towards increasing complexity. These consequences match features of the actual living world, but their experimental confirmation requires a collaboration of biologists and information theorists that is still lacking. It is suggested that genomic error-correcting codes could consist of "soft codes" in which the mutual dependence of symbols results from physical-chemical and linguistic constraints, not only mathematical equalities. The constraints incurred by DNA molecules moreover result in a nested structure. Guesses about genomic error-correcting codes are made. © 2006 IEEE.
