Benammar M.,CentraleSupelec |
Benammar M.,Huawei |
Piantanida P.,University Paris-Sud
IEEE Transactions on Information Theory | Year: 2015
This paper investigates the secrecy capacity of the wiretap broadcast channel (WBC) with an external eavesdropper, where a source wishes to communicate two private messages over a broadcast channel (BC) while keeping them secret from the eavesdropper. We derive a nontrivial outer bound on the secrecy capacity region of this channel which, in the absence of security constraints, reduces to the best known outer bound on the capacity of the standard BC. An inner bound is also derived, which recovers the behavior of both the best known inner bound for the BC and that of the wiretap channel. These bounds are shown to be tight for the deterministic BC with a general eavesdropper, the semideterministic BC with a more noisy eavesdropper, and the wiretap BC whose users exhibit a less-noisy ordering between them. Finally, by rewriting our outer bound to encompass the characteristics of parallel channels, we also derive the secrecy capacity region of the product of two inversely less noisy BCs with a more noisy eavesdropper. We illustrate our results by studying the impact of security constraints on the capacity of the WBC with binary erasure and binary symmetric components. © 2015 IEEE.
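For context, in the degenerate case of a single legitimate receiver this model reduces to the classical wiretap channel, whose secrecy capacity is the well-known Csiszár–Körner expression (here $V - X - (Y,Z)$ forms a Markov chain, with $Y$ the legitimate output and $Z$ the eavesdropper's observation):

$$ C_s \;=\; \max_{P_V P_{X|V}} \big[\, I(V;Y) - I(V;Z) \,\big], $$

which simplifies to $C_s = \max_{P_X} \big[ I(X;Y) - I(X;Z) \big]$ when $Y$ is less noisy than $Z$. The bounds of the paper generalize this tradeoff to the case of two legitimate receivers.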
Huang J.,City University of Hong Kong |
Sun H.,City University of Hong Kong |
IEEE International Conference on Industrial Engineering and Engineering Management | Year: 2016
Software economic analysis supports practitioners in making decisions during the development process. Effective analysis requires weighing higher productivity and quality against lower effort and shorter time-to-market to meet business demands. Previous studies have identified multiple team and project factors that determine software economics; however, their conclusions are seriously inconsistent, and experts are calling for more empirical evidence based on objective data. This study aims at validating the empirical relationships between team/project factors and software economic measures, including productivity, quality, effort, and time-to-market. The data analysis is based on a well-known dataset, ISBSG. Our findings indicate that multiple factors, including team size, language type, and organization type, have a significant impact on software economics. © 2015 IEEE.
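The kind of factor validation described can be sketched with a one-way analysis of variance over team-size bands. The productivity figures below are purely hypothetical illustrations, not drawn from ISBSG:

```python
# Hypothetical productivity samples (function points per hour) grouped by a
# team-size factor -- invented values for illustration only, not ISBSG data.
groups = {
    "1-4": [0.22, 0.25, 0.21, 0.27, 0.24],
    "5-8": [0.18, 0.17, 0.20, 0.16, 0.19],
    "9+":  [0.11, 0.13, 0.10, 0.12, 0.14],
}

all_vals = [v for vals in groups.values() for v in vals]
grand = sum(all_vals) / len(all_vals)
means = {g: sum(v) / len(v) for g, v in groups.items()}

# One-way ANOVA: between-group variability vs. within-group variability.
ss_between = sum(len(v) * (means[g] - grand) ** 2 for g, v in groups.items())
ss_within = sum((x - means[g]) ** 2 for g, v in groups.items() for x in v)
df_b, df_w = len(groups) - 1, len(all_vals) - len(groups)
F = (ss_between / df_b) / (ss_within / df_w)
print(round(F, 1))
```

With 2 and 12 degrees of freedom, an F value far above the ≈3.9 critical value at the 5% level would indicate a significant team-size effect on productivity; the actual study of course works with far larger and messier samples.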
Huang J.,City University of Hong Kong |
Li Y.-F.,CentraleSupelec |
Xie M.,City University of Hong Kong
Information and Software Technology | Year: 2015
Context Due to the complex nature of the software development process, traditional parametric models and statistical methods often appear inadequate to model the increasingly complicated relationship between project development cost and the project features (or cost drivers). Machine learning (ML) methods, with several reported successful applications, have gained popularity for software cost estimation in recent years. Data preprocessing has been claimed by many researchers to be a fundamental stage of ML methods; however, very few works have focused on the effects of data preprocessing techniques. Objective This study aims at an empirical assessment of the effectiveness of data preprocessing techniques on ML methods in the context of software cost estimation. Method In this work, we first conduct a literature survey of recent publications using data preprocessing techniques, followed by a systematic empirical study to analyze the strengths and weaknesses of individual data preprocessing techniques as well as their combinations. Results Our results indicate that data preprocessing techniques may significantly influence the final prediction, and may sometimes even degrade the prediction performance of ML methods. Conclusion In order to reduce prediction errors and improve efficiency, preprocessing techniques must be selected carefully according to the characteristics of both the ML methods and the datasets used for software cost estimation. © 2015 Elsevier B.V.
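As a minimal sketch of why preprocessing matters here (all project numbers below are invented): a nearest-neighbour effort estimator can retrieve a different analogue depending on whether the features were rescaled first, because the large-range feature otherwise dominates the distance:

```python
# Hypothetical project data: (function points, team size) -> effort in
# person-hours.  Values are invented for illustration only.
projects = [
    ((100.0, 3.0), 1000.0),
    ((110.0, 9.0), 2500.0),
    ((300.0, 4.0), 3000.0),
]
query = (200.0, 3.0)   # new project to estimate

def nn_effort(data, q):
    """1-NN analogy-based estimate: effort of the closest past project."""
    return min(data, key=lambda item: sum((a - b) ** 2
                                          for a, b in zip(item[0], q)))[1]

def minmax(data, q):
    """Rescale every feature (and the query) to [0, 1]."""
    cols = list(zip(*(x for x, _ in data)))
    lo, hi = [min(c) for c in cols], [max(c) for c in cols]
    scale = lambda x: tuple((v - l) / (h - l) for v, l, h in zip(x, lo, hi))
    return [(scale(x), y) for x, y in data], scale(q)

raw_pred = nn_effort(projects, query)           # dominated by function points
scaled_data, scaled_q = minmax(projects, query)
scaled_pred = nn_effort(scaled_data, scaled_q)  # team size now counts too
print(raw_pred, scaled_pred)                    # → 2500.0 1000.0
```

Here rescaling flips the retrieved analogue from the 2500-hour project to the 1000-hour one, which is exactly the kind of preprocessing sensitivity the study quantifies empirically.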
Bjornson E.,Linköping University |
Larsson E.G.,Linköping University |
Debbah M.,CentraleSupelec |
IEEE Transactions on Wireless Communications | Year: 2016
Massive MIMO is a promising technique for increasing the spectral efficiency (SE) of cellular networks, by deploying antenna arrays with hundreds or thousands of active elements at the base stations and performing coherent transceiver processing. A common rule-of-thumb is that these systems should have an order of magnitude more antennas M than scheduled users K because the users' channels are likely to be near-orthogonal when M/K > 10. However, it has not been proved that this rule-of-thumb actually maximizes the SE. In this paper, we analyze how the optimal number of scheduled users K∗ depends on M and other system parameters. To this end, new SE expressions are derived to enable efficient system-level analysis with power control, arbitrary pilot reuse, and random user locations. The value of K∗ in the large-M regime is derived in closed form, while simulations are used to show what happens at finite M, in different interference scenarios, with different pilot reuse factors, and for different processing schemes. Up to half the coherence block should be dedicated to pilots and the optimal M/K is less than 10 in many cases of practical relevance. Interestingly, K∗ depends strongly on the processing scheme and hence it is unfair to compare different schemes using the same K. © 2002-2012 IEEE.
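A toy version of the user-number optimization illustrates the pilot-overhead tradeoff. This is a deliberately crude stand-in for the paper's SE expressions: `gamma` is assumed to be a fixed per-user SINR, and each scheduled user is assumed to cost one pilot symbol out of a coherence block of `tau` symbols:

```python
import math

def sum_se(K, tau=200, gamma=10.0):
    """Toy sum spectral efficiency: K users, K of tau symbols spent on
    pilots, fixed per-user SINR gamma (crude simplifying assumptions)."""
    return K * (1 - K / tau) * math.log2(1 + gamma)

tau = 200
K_star = max(range(1, tau), key=sum_se)
print(K_star)  # → 100, i.e., half the coherence block spent on pilots
```

In this crude model the optimum pins K* at exactly τ/2, echoing the abstract's "up to half the coherence block" finding; inter-cell interference, pilot reuse, and finite M in the paper's full expressions push K* below this.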
IEEE Transactions on Plasma Science | Year: 2015
Conduction models in disordered materials are described, with a special focus on the transient behavior appearing over a broad timescale as a consequence of disorder. Multiple-trapping models, hopping models, and random walks coupled with waiting-time distributions are commonly used to describe charge transport in semiconductors. Important concepts have been introduced in this field, such as demarcation energy, percolation, and transport energy. Dispersive transport appears as a consequence of the disorder, together with a memory effect that may be described using various mathematical tools. The relevance of this research field to the charging behavior of materials used in spacecraft applications, especially polymers, is underlined, and a practical model of the insulator is suggested. © 2015 IEEE.
Baili H.,CentraleSupelec |
Mathematical and Computer Modelling of Dynamical Systems | Year: 2015
This paper addresses the problem of joint transmit power allocation and time-slot scheduling in a wireless communication system with time-varying traffic. The system is handled by a single base station transmitting over time-varying channels, as may be the case in practice for a hybrid TDMA-CDMA (Time Division Multiple Access-Code Division Multiple Access) system. The operating time horizon is divided into time slots, and a fixed amount of power is available at each time slot. The users share each time slot, and the power available in it, with the objective of minimizing the expected total queue length. The problem is reformulated, via a heavy-traffic approximation, as the optimal control of a reflected diffusion in the positive orthant. We establish a closed-form solution for the resulting control problem; the main feature that makes this possible is a judicious choice of certain auxiliary weighting matrices in the cost rate. The proposed solution also relies on knowledge of the covariance matrix of the non-standard multi-dimensional Wiener process driving the reflected diffusion; we compute this covariance matrix given the stationary distribution of the multi-dimensional channel process. Further stochastic analysis is provided: the cost variance, and the Fokker–Planck equation for the distribution density of the queue length. © 2015 CentraleSupélec.
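In generic notation (assumed here, not taken from the paper), the heavy-traffic limit object is a controlled diffusion $Q(t)$ reflected at the boundary of the positive orthant:

$$ \mathrm{d}Q(t) \;=\; b\big(Q(t), u(t)\big)\,\mathrm{d}t \;+\; \Sigma\,\mathrm{d}W(t) \;+\; \mathrm{d}L(t), \qquad Q(t) \in \mathbb{R}_+^{n}, $$

where $W$ is the non-standard (correlated) Wiener process whose covariance $\Sigma\Sigma^{\top}$ must be identified from the channel statistics, $L$ is a nondecreasing reflection process that increases only when some queue component hits zero, and the control $u$ (the power and slot allocation) is chosen to minimize the expected cumulative queue length $\mathbb{E}\int_0^T \mathbf{1}^{\top} Q(t)\,\mathrm{d}t$.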
Fayaz A.M.,CentraleSupelec |
IECON 2015 - 41st Annual Conference of the IEEE Industrial Electronics Society | Year: 2015
A variable-structure flatness-based controller achieving both stabilization and energy management in a DC grid is proposed in this paper. The grid connects, through undamped LC filters, a voltage source, a storage device, and a reversible load. It is shown that the energy management goals, together with the reversibility of the load, make the grid behave as a variable-structure system, and that a fixed-structure controller cannot achieve the management goals. After identifying a flat output for the system, a variable-structure controller obtained by switching between two "flat controllers" is proposed. The switching is driven by the sign of the load power profile. The asymptotic stability of the grid under the switching controller follows from the asymptotic stability of the flat-output feedback controllers together with a "slow switching" assumption. Simulation results illustrate both the shortcomings of a fixed "flat controller" and the effectiveness of the proposed approach. © 2015 IEEE.
Ballarini P.,CentraleSupelec |
Duflot M.,University of Lorraine
Theoretical Computer Science | Year: 2015
Stochastic temporal logics have demonstrated their efficiency in the analysis of discrete-state stochastic models. In this paper we consider the application of a recently introduced formalism, namely the Hybrid Automata Stochastic Language (HASL), to the analysis of biological models of genetic circuits. In particular we demonstrate the potential of HASL by focusing on two aspects: first the analysis of a genetic oscillator and then the analysis of gene expression. With respect to oscillations, we formalize a number of HASL based measures which we apply on a realistic model of a three-gene repressilator. With respect to gene expression, we consider a model with delayed stochastic dynamics, a class of systems whose dynamics includes both Markovian and non-Markovian events, and we identify a number of relevant and sophisticated measures. To assess the HASL defined measures we employ the COSMOS tool, a statistical model checker designed for HASL model checking. © 2015 Elsevier B.V.
He K.,IRT B Com |
Bidan C.,CentraleSupelec |
Le Guelvouit G.,IRT B Com
ACM International Conference Proceeding Series | Year: 2016
More and more users prefer to share their photos through the image-sharing platforms of social networks rather than by e-mail or personal webpages. This makes sharing easier, and most platforms allow users to specify who can access each image. This may give a feeling of safety and privacy, but privacy is not guaranteed, since at least the provider of the image-sharing platform can see the contents of any published image. Uploading an encrypted image is therefore a good solution for protecting users' privacy. In this paper, we implement an image encryption algorithm to be used on several widely used image-sharing platforms. To gather sufficient information for our experiments, we first analyze the characteristics of the different platforms. We then upload encrypted images to these platforms, download them, decrypt them, and compare the decrypted images with the corresponding originals. The results show that our encryption algorithm can be used to protect privacy on Flickr, Pinterest, Google+ and Twitter. © 2016 ACM.
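A toy stand-in for this pipeline (the paper's actual encryption algorithm is not reproduced here): XOR a byte "image" with a SHA-256-derived keystream, then check that decryption is exact when the platform stores the upload losslessly but breaks when the platform recompresses the ciphertext, which is why the per-platform behavior must be analyzed first.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Expand a key into n pseudo-random bytes (toy stream cipher)."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

image = bytes(range(0, 250, 10)) * 4   # toy 100-byte grayscale "image"
key = b"shared-secret"
cipher = xor_cipher(image, key)

# Lossless platform: the downloaded ciphertext is bit-exact, decryption works.
assert xor_cipher(cipher, key) == image

# Lossy platform: simulate JPEG-style requantization of the ciphertext.
recompressed = bytes((b // 16) * 16 for b in cipher)
damaged = xor_cipher(recompressed, key)
errors = sum(a != b for a, b in zip(damaged, image))
print(errors)   # most bytes are corrupted after decryption
```

Because XOR diffuses any ciphertext change directly into the plaintext, even mild recompression destroys the decrypted image; a platform-ready scheme has to tolerate (or avoid) such processing, which is what the per-platform experiments probe.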
Agency: Cordis | Branch: H2020 | Program: RIA | Phase: ICT-25-2015 | Award Amount: 3.44M | Year: 2015
New computing paradigms are required to feed the next revolution in Information Technology. Machines need to be invented that can learn, but also handle vast amount of data. In order to achieve this goal and still reduce the energy footprint of Information and Communication Technology, fundamental hardware innovations must be done. A physical implementation natively supporting new computing methods is required. Most of the time, CMOS is used to emulate e.g. neuronal behavior, and is intrinsically limited in power efficiency and speed. Reservoir computing (RC) is one of the concepts that has proven its efficiency to perform tasks where traditional approaches fail. It is also one of the rare concepts of an efficient hardware realization of cognitive computing into a specific, silicon-based technology. Small RC systems have been demonstrated using optical fibers and bulk components. In 2014, optical RC networks based integrated photonic circuits were demonstrated. The PHRESCO project aims to bring photonic reservoir computing to the next level of maturity. A new RC chip will be co-designed, including innovative electronic and photonic component that will enable major breakthrough in the field. We will i) Scale optical RC systems up to 60 nodes ii) build an all-optical chip based on the unique electro-optical properties of new materials iii) Implement new learning algorithms to exploit the capabilities of the RC chip. The hardware integration of beyond state-of-the-art components with novel system and algorithm design will pave the way towards a new era of optical, cognitive systems capable of handling huge amount of data at ultra-low power consumption.
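For readers unfamiliar with the paradigm, a reservoir computer in its simplest software form is an echo state network: a fixed random recurrent layer whose states are read out by a trained linear map. The sketch below is a generic textbook construction, not PHRESCO's photonic design; it trains a readout to recall the input one step in the past:

```python
import math, random

random.seed(0)
N = 20   # reservoir neurons (toy scale; the project targets 60 optical nodes)
w_in = [random.uniform(-0.5, 0.5) for _ in range(N)]
w_res = [[0.1 * random.uniform(-1.0, 1.0) for _ in range(N)] for _ in range(N)]

def step(x, u):
    """One reservoir update: fixed random weights, tanh nonlinearity."""
    return [math.tanh(w_in[i] * u + sum(w_res[i][j] * x[j] for j in range(N)))
            for i in range(N)]

# Drive the reservoir with a sine wave and record its states.
T, washout = 300, 50
inputs = [math.sin(0.3 * t) for t in range(T)]
states, x = [], [0.0] * N
for u in inputs:
    x = step(x, u)
    states.append(x)

# Task: from the state at time t, recall the input at time t - 1.
X = states[washout:]
Y = inputs[washout - 1:-1]

# Only the linear readout is trained: ridge regression via normal equations.
lam = 1e-4
A = [[sum(s[i] * s[j] for s in X) + (lam if i == j else 0.0) for j in range(N)]
     for i in range(N)]
b = [sum(s[i] * y for s, y in zip(X, Y)) for i in range(N)]
for k in range(N):   # Gaussian elimination with partial pivoting
    p = max(range(k, N), key=lambda r: abs(A[r][k]))
    A[k], A[p], b[k], b[p] = A[p], A[k], b[p], b[k]
    for r in range(k + 1, N):
        f = A[r][k] / A[k][k]
        A[r] = [arc - f * akc for arc, akc in zip(A[r], A[k])]
        b[r] -= f * b[k]
w_out = [0.0] * N
for k in range(N - 1, -1, -1):
    w_out[k] = (b[k] - sum(A[k][c] * w_out[c] for c in range(k + 1, N))) / A[k][k]

pred = [sum(wi * si for wi, si in zip(w_out, s)) for s in X]
mse = sum((p - y) ** 2 for p, y in zip(pred, Y)) / len(Y)
print(round(mse, 4))
```

The electronic tanh neurons here play the role that nonlinear optical nodes play on the chip: the recurrent part is never trained, which is what makes hardware implementations attractive, and scaling the node count and improving the readout learning are exactly the project's objectives (i)-(iii).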