CentraleSupelec

France

Ghorbel A.,CentraleSupelec | Kobayashi M.,CentraleSupelec | Yang S.,CentraleSupelec
Proceedings of the 1st Workshop on Content Caching and Delivery in Wireless Networks, CCDWN 2016 | Year: 2016

We consider a cache-enabled K-user erasure broadcast channel in which a server with a library of N files wishes to deliver a requested file to each user k, who is equipped with a cache of finite memory Mk. Assuming that the transmitter has state feedback and that user caches can be filled reliably during off-peak hours by decentralized cache placement, we characterize the achievable rate region as a function of the memory sizes and the erasure probabilities. The proposed delivery scheme, based on the broadcasting scheme proposed by Wang and Gatzianas et al., exploits the receiver side information established during the placement phase. A two-user toy example shows that a cache network with asymmetric memory sizes can achieve better sum-rate performance than one with symmetric memory sizes. © 2015 ACM.
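As a concrete illustration of the decentralized placement phase the abstract mentions, the sketch below has each user independently cache a random fraction p = M/N of every file's bits, so that the side information exploited during delivery (bits one user lacks but another holds) appears with a predictable frequency. This is an illustrative simulation under assumed values of p and the file size, not the paper's scheme.

```python
import random

random.seed(0)
F = 100_000        # bits per file (assumed)
K = 2              # users
p = 0.4            # cache fraction M/N per user (assumed)

# Each user independently caches each bit of each file with probability p
caches = [[random.random() < p for _ in range(F)] for _ in range(K)]

# Fraction of a requested file already held by user 0, and fraction held
# by user 1 only -- side information user 0 still needs to receive
held_by_0 = sum(caches[0]) / F
only_1 = sum((not a) and b for a, b in zip(caches[0], caches[1])) / F

print(f"cached at user 0: {held_by_0:.3f}  (expected {p:.3f})")
print(f"cached only at user 1: {only_1:.3f}  (expected {(1 - p) * p:.3f})")
```

The empirical fractions concentrate around p and (1-p)p as the file size grows, which is what makes the rate region predictable despite the random, uncoordinated placement.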


News Article | May 15, 2017
Site: techcrunch.com

There’s absolutely nothing efficient about sorting through 30,000 resumes by hand. Recruiters spend months evaluating applicants only to have great prospective candidates get lost in the pile. On the stage of TechCrunch’s Startup Battlefield, French startup Riminder made the case for how its deep learning-powered platform could augment recruiters — helping them better surface ideal contenders for job openings.

Riminder generates candidate rankings for open jobs by comparing applicant resumes against resumes from current employees and others in the world with similar job titles. Behind the curtain, Riminder uses a cocktail of computer vision and natural language processing to build profiles of what ideal resumes should look like for specific roles. Once a resume is processed, recruiters can view its strengths and weaknesses. Riminder makes it easy to see this information visually and to identify market trends, like the most popular schools to recruit from or the most common skills applicants for a certain type of job typically have. The goal is to make sure recruiters have the information they need to judge candidates on both their ability to fit into company culture and their mastery of key skills.

When demonstrating Riminder’s value to a potential recruiting client, the team often runs tests on historical data. This data makes it easy to compare the platform’s automated short list with a human-generated list. “When we compared results, recruiters found 3x more candidates they were interested in, they just weren’t using the right keywords,” explained Mouhidine Seiv, founder of Riminder.

Aside from helping recruiters sort through applications, the tool also has the potential to make hiring more fair. Seiv told me that some companies refuse to accept applications from international candidates simply because they don’t feel comfortable evaluating the applications. Because Riminder has seen resumes from around the world, regional variations are easy to accommodate.
Another perk of Riminder is that it can automatically reroute applications to other open positions if it notices a better fit. This increases the likelihood that the right applicant will be considered for the right job. The objective of Riminder isn’t to replace recruiters. Humans still provide regular feedback on automated rankings and are ultimately better suited to be the final gatekeepers, interviewing applicants and issuing acceptance and rejection letters. The company participated in the CentraleSupelec Incubator and is currently running betas with companies like Uber and Blablacar. It’s marketing an enterprise offering charged on a per-user, per-year basis, a teams contract charged on a per-job-title, per-month basis, and a per-resume API offering.
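The core idea of ranking applicants against a profile built from current employees can be sketched with a simple bag-of-words cosine similarity. This is a toy stand-in for Riminder's actual NLP pipeline; the "ideal" profile and applicant texts are hypothetical.

```python
import math
from collections import Counter

def vectorize(text):
    """Bag-of-words term counts (a toy stand-in for a real NLP pipeline)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)       # Counter returns 0 for missing terms
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical profile built from current employees' resumes for a role
ideal = vectorize("python machine learning deployed models production data")

applicants = {
    "alice": "built machine learning models in python for production data",
    "bob": "managed retail store inventory and staff scheduling",
}
ranking = sorted(applicants,
                 key=lambda name: cosine(ideal, vectorize(applicants[name])),
                 reverse=True)
print(ranking)
```

A real system would replace raw token counts with learned embeddings, which is what lets it match candidates a keyword search would miss.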


Benammar M.,CentraleSupelec | Benammar M.,Huawei | Piantanida P.,University Paris - Sud
IEEE Transactions on Information Theory | Year: 2015

This paper investigates the secrecy capacity of the wiretap broadcast channel (WBC) with an external eavesdropper, in which a source wishes to communicate two private messages over a broadcast channel (BC) while keeping them secret from the eavesdropper. We derive a nontrivial outer bound on the secrecy capacity region of this channel which, in the absence of security constraints, reduces to the best known outer bound on the capacity of the standard BC. An inner bound is also derived, which follows the behavior of both the best known inner bound for the BC and that for the wiretap channel. These bounds are shown to be tight for the deterministic BC with a general eavesdropper, the semideterministic BC with a more noisy eavesdropper, and the wiretap BC in which a less-noisiness order holds between the users. Finally, by rewriting our outer bound to encompass the characteristics of parallel channels, we also derive the secrecy capacity region of the product of two inversely less noisy BCs with a more noisy eavesdropper. We illustrate our results by studying the impact of security constraints on the capacity of the WBC with binary erasure and binary symmetric components. © 2015 IEEE.
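To make the binary erasure illustration concrete: for the basic (single-receiver, degraded) wiretap channel with binary erasure components, Wyner's classical formula C_s = max(0, I(X;Y) - I(X;Z)) reduces to the difference of the two channel capacities. The sketch below computes that baseline quantity; it is a textbook special case for intuition, not the two-receiver WBC region the paper characterizes.

```python
def bec_secrecy_capacity(eps_main, eps_eve):
    """Secrecy capacity (bits/channel use) of a degraded wiretap channel
    with binary erasure components: legitimate receiver erasure probability
    eps_main, eavesdropper erasure probability eps_eve. With uniform input,
    C_s = max(0, I(X;Y) - I(X;Z)) = max(0, (1 - eps_main) - (1 - eps_eve))."""
    return max(0.0, (1 - eps_main) - (1 - eps_eve))

# Secrecy is only possible when the eavesdropper's channel is worse
print(bec_secrecy_capacity(0.1, 0.5))   # eavesdropper erases more -> positive rate
print(bec_secrecy_capacity(0.5, 0.1))   # eavesdropper sees more -> zero
```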


Huang J.,City University of Hong Kong | Li Y.-F.,CentraleSupelec | Xie M.,City University of Hong Kong
Information and Software Technology | Year: 2015

Context: Due to the complex nature of the software development process, traditional parametric models and statistical methods often appear inadequate to model the increasingly complicated relationship between project development cost and the project features (or cost drivers). Machine learning (ML) methods, with several reported successful applications, have gained popularity for software cost estimation in recent years. Data preprocessing has been claimed by many researchers to be a fundamental stage of ML methods; however, very few works have focused on the effects of data preprocessing techniques.
Objective: This study aims for an empirical assessment of the effectiveness of data preprocessing techniques on ML methods in the context of software cost estimation.
Method: In this work, we first conduct a literature survey of recent publications using data preprocessing techniques, followed by a systematic empirical study to analyze the strengths and weaknesses of individual data preprocessing techniques as well as their combinations.
Results: Our results indicate that data preprocessing techniques may significantly influence the final prediction, and that they sometimes have negative impacts on the prediction performance of ML methods.
Conclusion: In order to reduce prediction errors and improve efficiency, a careful selection of preprocessing techniques is necessary, according to the characteristics of the machine learning methods and the datasets used for software cost estimation. © 2015 Elsevier B.V.
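A tiny example of why preprocessing can change an ML cost estimate: with nearest-neighbor estimation, an unscaled feature with a large numeric range (lines of code) drowns out a smaller one (team size), so scaling flips which historical project is "nearest". The project data below is hypothetical, purely for illustration.

```python
import math

# Toy project records: (team_size, lines_of_code) -> effort in person-months
# (hypothetical numbers, for illustration only)
train = [((3, 20_000), 12), ((4, 25_000), 15),
         ((10, 22_000), 30), ((12, 24_000), 34)]
query = (11, 20_050)

def knn_predict(data, q, k=1):
    nearest = sorted(data, key=lambda rec: math.dist(rec[0], q))
    return sum(y for _, y in nearest[:k]) / k

# Without scaling, the lines-of-code axis dominates the distance entirely
raw = knn_predict(train, query)

def minmax_scale(data, q):
    """Min-max scale each feature to [0, 1] before computing distances."""
    cols = list(zip(*[x for x, _ in data]))
    lo = [min(c) for c in cols]
    span = [max(c) - min(c) for c in cols]
    f = lambda x: tuple((v - l) / s for v, l, s in zip(x, lo, span))
    return [(f(x), y) for x, y in data], f(q)

strain, squery = minmax_scale(train, query)
scaled = knn_predict(strain, squery)
print(raw, scaled)   # the two pipelines disagree on the estimate
```

The raw pipeline matches the query to a small 3-person project with similar code size, while the scaled one matches it to a similarly sized team, illustrating the paper's point that the preprocessing choice, not just the learner, drives the prediction.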


Bjornson E.,Linköping University | Larsson E.G.,Linköping University | Debbah M.,CentraleSupelec | Debbah M.,Huawei
IEEE Transactions on Wireless Communications | Year: 2016

Massive MIMO is a promising technique for increasing the spectral efficiency (SE) of cellular networks, by deploying antenna arrays with hundreds or thousands of active elements at the base stations and performing coherent transceiver processing. A common rule-of-thumb is that these systems should have an order of magnitude more antennas M than scheduled users K because the users' channels are likely to be near-orthogonal when M/K > 10. However, it has not been proved that this rule-of-thumb actually maximizes the SE. In this paper, we analyze how the optimal number of scheduled users K∗ depends on M and other system parameters. To this end, new SE expressions are derived to enable efficient system-level analysis with power control, arbitrary pilot reuse, and random user locations. The value of K∗ in the large-M regime is derived in closed form, while simulations are used to show what happens at finite M, in different interference scenarios, with different pilot reuse factors, and for different processing schemes. Up to half the coherence block should be dedicated to pilots and the optimal M/K is less than 10 in many cases of practical relevance. Interestingly, K∗ depends strongly on the processing scheme and hence it is unfair to compare different schemes using the same K. © 2002-2012 IEEE.
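The near-orthogonality behind the M/K > 10 rule of thumb is easy to check numerically: the normalized inner product between independent random channel vectors shrinks roughly like 1/sqrt(M). The sketch below uses real-valued Gaussian channels for simplicity (actual channels are complex), and is an illustration of the folklore argument, not of the paper's SE expressions.

```python
import random
import math

random.seed(1)

def avg_corr(M, trials=500):
    """Average |<h1, h2>| / (||h1|| ||h2||) for i.i.d. real Gaussian
    channel vectors of length M; decays roughly like sqrt(2 / (pi * M))."""
    total = 0.0
    for _ in range(trials):
        h1 = [random.gauss(0, 1) for _ in range(M)]
        h2 = [random.gauss(0, 1) for _ in range(M)]
        dot = sum(a * b for a, b in zip(h1, h2))
        total += abs(dot) / (math.hypot(*h1) * math.hypot(*h2))
    return total / trials

for M in (10, 100, 1000):
    print(M, round(avg_corr(M), 3))   # correlation shrinks as M grows
```

The decay is gradual rather than a threshold at M/K = 10, which is consistent with the paper's finding that the actual SE-optimal ratio depends on the processing scheme and system parameters.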


Molinie P.,CentraleSupelec
IEEE Transactions on Plasma Science | Year: 2015

Conduction models in disordered materials are described, with a special focus on the transient behavior appearing over a broad timescale as a consequence of disorder. Multiple-trapping models, hopping models, and random walks coupled with waiting-time distributions are commonly used to describe charge transport in semiconductors. Important concepts have been introduced in this field, such as demarcation energy, percolation, and transport energy. Dispersive transport appears as a consequence of the disorder, together with a memory effect, which may be described using various mathematical tools. The relevance of this research field to the charging behavior of materials used in spacecraft applications, especially polymers, is underlined, and a practical insulator model is suggested. © 2015 IEEE.
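The hallmark of dispersive transport can be reproduced with a minimal continuous-time random walk: when waiting times between jumps are drawn from a heavy-tailed distribution with infinite mean (tail exponent 0 < alpha < 1), the mean squared displacement grows like T**alpha instead of linearly in T. This is a generic textbook CTRW sketch with assumed parameters, not a model of any specific material from the paper.

```python
import random

random.seed(2)
ALPHA = 0.6   # waiting-time tail exponent, psi(t) ~ t**(-1 - ALPHA)

def msd(T, walkers=2000):
    """Mean squared displacement at time T of a continuous-time random
    walk with Pareto-distributed waiting times and unit-step jumps."""
    total = 0.0
    for _ in range(walkers):
        t, x = 0.0, 0
        while True:
            u = 1.0 - random.random()        # u in (0, 1]
            t += u ** (-1.0 / ALPHA)         # Pareto(ALPHA) waiting time >= 1
            if t > T:
                break
            x += random.choice((-1, 1))
        total += x * x
    return total / walkers

# Normal diffusion would give msd(10*T) / msd(T) close to 10; here the
# ratio stays near 10**ALPHA (about 4) -- the memory effect of the traps.
print(msd(100.0), msd(1000.0))
```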


Baili H.,CentraleSupelec | Assaad M.,CentraleSupelec
Mathematical and Computer Modelling of Dynamical Systems | Year: 2015

This paper addresses the problem of joint transmit power allocation and time-slot scheduling in a wireless communication system with time-varying traffic. The system is handled by a single base station transmitting over time-varying channels, as may be the case in practice for a hybrid TDMA-CDMA (Time Division Multiple Access-Code Division Multiple Access) system. The operating time horizon is divided into time slots, with a fixed amount of power available at each slot. The users share each time slot and its available power with the objective of minimizing the expected total queue length. The problem is reformulated, via a heavy-traffic approximation, as the optimal control of a reflected diffusion in the positive orthant. We establish a closed-form solution for the resulting control problem; the main feature that makes this possible is an astute choice of auxiliary weighting matrices in the cost rate. The proposed solution also relies on knowledge of the covariance matrix of the non-standard multi-dimensional Wiener process driving the reflected diffusion, which we compute from the stationary distribution of the multi-dimensional channel process. Further stochastic analysis is provided: the cost variance, and the Fokker–Planck equation for the distribution density of the queue length. © 2015 CentraleSupélec.
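The object at the heart of the heavy-traffic approximation, a diffusion reflected at the boundary of the positive orthant, can be simulated in one dimension with a clipped Euler scheme. This is a generic illustration of the reflected-diffusion queue model with assumed drift and volatility, not the paper's multi-dimensional controlled system.

```python
import random
import math

random.seed(3)

def reflected_path(drift, sigma, dt=0.01, steps=20000):
    """Euler scheme for a one-dimensional diffusion reflected at the
    origin -- the heavy-traffic stand-in for a queue-length process."""
    x, path = 0.0, []
    for _ in range(steps):
        x += drift * dt + sigma * math.sqrt(dt) * random.gauss(0, 1)
        x = max(x, 0.0)   # reflection keeps the queue length nonnegative
        path.append(x)
    return path

path = reflected_path(drift=-0.5, sigma=1.0)
# With negative drift the process is stable; its long-run mean is near
# sigma**2 / (2 * |drift|) = 1 for these parameters.
print(min(path), sum(path) / len(path))
```

The controller in the paper effectively shapes the drift of such a process through power and slot allocation so as to keep the expected queue length small.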


Fayaz A.M.,CentraleSupelec | Hamache D.,CentraleSupelec
IECON 2015 - 41st Annual Conference of the IEEE Industrial Electronics Society | Year: 2015

A variable-structure flatness-based controller achieving both stabilization and energy management in a DC grid is proposed in this paper. The grid connects, through undamped LC filters, a voltage source, a storage device and a reversible load. It is shown that the energy management goals, along with the reversibility of the load, cause the grid to behave as a variable-structure system, and that a fixed-structure controller is not able to achieve the management goals. After identifying a flat output for the system, a variable-structure controller obtained by switching between two "flat controllers" is proposed, with the switching driven by the sign of the load power profile. The asymptotic stability of the grid under the switching controller follows from the asymptotic stability of the flat output feedback controllers together with a "slow" switching assumption. Simulation results illustrate both the shortcomings of a fixed "flat controller" and the effectiveness of the proposed approach. © 2015 IEEE.
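The switching idea can be illustrated on a toy scalar plant: the plant coefficient flips with the sign of the load power profile, and each mode is paired with its own stabilizing feedback gain, selected at run time by that sign. This is a deliberately simplified stand-in with made-up coefficients, not the paper's DC-grid model or its flatness-based design.

```python
import math

# Toy variable-structure plant: the open-loop coefficient depends on the
# sign of the load power p(t); each mode gets a mode-matched gain
# (a stand-in for switching between two "flat controllers").
A = {+1: 1.0, -1: 0.5}    # unstable open-loop coefficients per mode (assumed)
K = {+1: 3.0, -1: 2.5}    # gains chosen so A[m] - K[m] = -2 in both modes

def simulate(x0=1.0, dt=1e-3, T=10.0, switch_period=2.0):
    """Euler simulation of dx/dt = A[mode]*x + u with u = -K[mode]*x,
    where the mode follows the sign of a slow periodic load profile."""
    x, t = x0, 0.0
    while t < T:
        mode = +1 if math.sin(2 * math.pi * t / switch_period) >= 0 else -1
        u = -K[mode] * x                 # controller switched with the mode
        x += (A[mode] * x + u) * dt      # closed loop: dx/dt = (A - K) * x
        t += dt
    return x

print(abs(simulate()))   # decays toward 0 under "slow" switching
```

Because every closed-loop mode is stable and switching is slow relative to the decay rate, the state converges regardless of the load sign sequence, mirroring the stability argument sketched in the abstract.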


Ballarini P.,CentraleSupelec | Duflot M.,University of Lorraine
Theoretical Computer Science | Year: 2015

Stochastic temporal logics have demonstrated their efficiency in the analysis of discrete-state stochastic models. In this paper we consider the application of a recently introduced formalism, namely the Hybrid Automata Stochastic Language (HASL), to the analysis of biological models of genetic circuits. In particular we demonstrate the potential of HASL by focusing on two aspects: first the analysis of a genetic oscillator and then the analysis of gene expression. With respect to oscillations, we formalize a number of HASL based measures which we apply on a realistic model of a three-gene repressilator. With respect to gene expression, we consider a model with delayed stochastic dynamics, a class of systems whose dynamics includes both Markovian and non-Markovian events, and we identify a number of relevant and sophisticated measures. To assess the HASL defined measures we employ the COSMOS tool, a statistical model checker designed for HASL model checking. © 2015 Elsevier B.V.
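To see the kind of oscillatory trace that HASL measures are evaluated against, a deterministic repressilator sketch is enough: three genes in a ring, each repressing its successor through a Hill function. This is a simplified protein-only variant with an assumed Hill coefficient of 4 (chosen to place the fixed point in the oscillatory regime), not the stochastic three-gene model analyzed in the paper.

```python
def repressilator(alpha=50.0, n=4.0, dt=0.01, steps=40000):
    """Protein-only repressilator: dp_i/dt = alpha / (1 + p_{i-1}^n) - p_i
    for three genes in a ring, integrated with a forward Euler step.
    Returns the trace of p0."""
    p = [2.0, 1.0, 0.5]    # asymmetric start, off the unstable fixed point
    trace = []
    for _ in range(steps):
        dp = [alpha / (1.0 + p[i - 1] ** n) - p[i] for i in range(3)]
        p = [pi + dt * di for pi, di in zip(p, dp)]
        trace.append(p[0])
    return trace

trace = repressilator()[20000:]    # discard the transient
mean = sum(trace) / len(trace)
crossings = sum(1 for a, b in zip(trace, trace[1:])
                if (a - mean) * (b - mean) < 0)
print(crossings)   # many mean-crossings => sustained oscillation
```

HASL-style oscillation measures (period, amplitude, number of peaks in a window) are exactly statistics of traces like this one, computed over stochastic rather than deterministic runs.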


Grant
Agency: European Commission | Branch: H2020 | Program: RIA | Phase: ICT-25-2015 | Award Amount: 3.44M | Year: 2015

New computing paradigms are required to feed the next revolution in Information Technology. Machines need to be invented that can learn but also handle vast amounts of data. To achieve this goal while still reducing the energy footprint of Information and Communication Technology, fundamental hardware innovations must be made: a physical implementation natively supporting new computing methods is required. Most of the time, CMOS is used to emulate, e.g., neuronal behavior, and is intrinsically limited in power efficiency and speed. Reservoir computing (RC) is one of the concepts that has proven its efficiency for tasks where traditional approaches fail. It is also one of the rare concepts enabling an efficient hardware realization of cognitive computing in a specific, silicon-based technology. Small RC systems have been demonstrated using optical fibers and bulk components, and in 2014, optical RC networks based on integrated photonic circuits were demonstrated. The PHRESCO project aims to bring photonic reservoir computing to the next level of maturity. A new RC chip will be co-designed, including innovative electronic and photonic components that will enable major breakthroughs in the field. We will (i) scale optical RC systems up to 60 nodes, (ii) build an all-optical chip based on the unique electro-optical properties of new materials, and (iii) implement new learning algorithms to exploit the capabilities of the RC chip. The hardware integration of beyond-state-of-the-art components with novel system and algorithm design will pave the way towards a new era of optical, cognitive systems capable of handling huge amounts of data at ultra-low power consumption.
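The software analogue of such a reservoir is the echo state network: a fixed random recurrent network is driven by the input, and only a linear readout is trained. The sketch below trains the readout by ridge regression on a short-memory task (recalling the input from two steps ago); all sizes and scalings are assumptions for illustration, and this is not the PHRESCO photonic design.

```python
import random
import math

random.seed(4)
N = 30                       # reservoir size (tiny, for illustration)
SCALE = 0.9 / math.sqrt(N)   # keeps the spectral radius roughly below 1

# Fixed random reservoir: these weights are never trained.
W_in = [random.uniform(-0.5, 0.5) for _ in range(N)]
W = [[random.gauss(0, SCALE) for _ in range(N)] for _ in range(N)]

def run_reservoir(inputs):
    """Drive the reservoir and collect its state at every step."""
    x = [0.0] * N
    states = []
    for u in inputs:
        x = [math.tanh(W_in[i] * u + sum(W[i][j] * x[j] for j in range(N)))
             for i in range(N)]
        states.append(x)
    return states

def ridge(S, y, lam=1e-6):
    """Solve (S^T S + lam*I) w = S^T y by Gaussian elimination."""
    n = len(S[0])
    A = [[sum(r[i] * r[j] for r in S) + (lam if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    b = [sum(r[i] * t for r, t in zip(S, y)) for i in range(n)]
    for c in range(n):                       # elimination with partial pivoting
        piv = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        b[c], b[piv] = b[piv], b[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n):
                A[r][k] -= f * A[c][k]
            b[r] -= f * b[c]
    w = [0.0] * n
    for c in reversed(range(n)):             # back substitution
        w[c] = (b[c] - sum(A[c][k] * w[k] for k in range(c + 1, n))) / A[c][c]
    return w

# Task: recall the input from two steps ago (a short-memory benchmark).
inputs = [random.uniform(-1, 1) for _ in range(500)]
target = [0.0, 0.0] + inputs[:-2]
states = run_reservoir(inputs)
w = ridge(states[100:], target[100:])        # drop the washout period
pred = [sum(wi * si for wi, si in zip(w, s)) for s in states[100:]]
err = math.sqrt(sum((p - t) ** 2
                    for p, t in zip(pred, target[100:])) / len(pred))
print(round(err, 3))   # small error => the reservoir retains short memory
```

The photonic version replaces the tanh nodes and random couplings with optical nonlinearities and waveguides, but the training principle, fitting only the readout, is the same, which is what makes RC attractive for hardware.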
