Le H.M.,University of Lorraine |
Thi H.A.L.,University of Lorraine |
Dinh T.P.,National Institute for Applied Sciences, Strasbourg |
Huynh V.N.,Quy Nhon University
Neural Computation | Year: 2013
We investigate difference of convex functions (DC) programming and the DC algorithm (DCA) to solve the block clustering problem in the continuous framework, which traditionally requires solving a hard combinatorial optimization problem. DC reformulation techniques and exact penalty in DC programming are developed to build an appropriate equivalent DC program of the block clustering problem. They lead to an elegant and explicit DCA scheme for the resulting DC program. Computational experiments show the robustness and efficiency of the proposed algorithm and its superiority over standard algorithms such as two-mode K-means, two-mode fuzzy clustering, and block classification EM. © 2013 Massachusetts Institute of Technology.
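The abstract describes the DCA scheme only at a high level. As a generic, self-contained illustration of a DCA iteration (not the paper's block-clustering scheme), consider minimizing the toy DC function f(x) = g(x) − h(x) with g(x) = x^4 and h(x) = 2x^2; the function name and the example are hypothetical:

```python
# Generic DCA illustration (a sketch, not the paper's block-clustering scheme):
# minimize f(x) = g(x) - h(x), g(x) = x**4, h(x) = 2*x**2.
# DCA step: y_k = h'(x_k); x_{k+1} = argmin_x g(x) - y_k * x,
# solved in closed form here: 4 x^3 = y_k  =>  x = (y_k / 4)^(1/3).

def dca(x0, iters=50):
    x = x0
    for _ in range(iters):
        y = 4.0 * x  # (sub)gradient of h at x
        # convex subproblem argmin_x x^4 - y*x solved analytically
        x = (y / 4.0) ** (1.0 / 3.0) if y >= 0 else -((-y / 4.0) ** (1.0 / 3.0))
    return x
```

Each iterate solves a convex subproblem obtained by linearizing the concave part, which is the defining feature of DCA.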
Agency: European Commission | Branch: FP7 | Program: CSA-CA | Phase: SST-2007-1.2-01;SST-2007-1.2-02 | Award Amount: 1.25M | Year: 2009
The DIREC_MAT project's objective is to share, at a European scale, knowledge and practices on recycling road and road-related waste, with the aim of ensuring an environmentally friendly and sustainable end-of-life for road infrastructure. Road material recycling processes have previously been studied in national and European research projects and have led to various levels of practical implementation; unfortunately, the national experiences developed across Europe almost never benefit other European countries. This is especially true for the newer Member States. Furthermore, existing knowledge and practices are presently scattered. Reliable practice-oriented data on all types of road materials and waste will be identified and compiled by skilled experts working in both research and construction capacities. Field experience and relevant research issues will be integrated into a Web database to provide the European road community with unrestricted access to updated online data on end products that have been classified, assessed, and illustrated with jobsite practices for dismantling and recycling applications. This database will not only offer stakeholders information facilitating the correct re-use of road and road-related waste products back into roads without generating health impacts, but will also provide technical and scientific information for CEN Technical Committees. Lastly, such a tool will make it possible to better identify outstanding research needs in this area. Best-practice guides on green techniques for recycling road and road-related waste back into roads will be delivered; benchmarking processes will be detailed and shared by all stakeholders in order to achieve a road material recyclability level of nearly 100%. FEHRL (Forum of European Highway Research Laboratories) will contribute by performing decisive clustering tasks and engaging in a comprehensive dissemination plan to promote the required knowledge sharing among end-users.

Agency: European Commission | Branch: FP7 | Program: MC-IAPP | Phase: FP7-PEOPLE-2011-IAPP | Award Amount: 1.69M | Year: 2012
The analysis of emerging technologies and their potential impact on markets, economies, and societies requires reliable and repeatable methods and tools, since the related information plays a critical role in the strategic decisions of private and public organizations. All existing techniques reveal several weaknesses, such as limited accuracy in middle- and long-term forecasting, poor repeatability, and poor adaptability: no universal methods are known, so complementary instruments must be integrated according to the specific goal and data availability. These considerations highlight the need for structured methods and tools capable of supporting strategic decisions in industrial R&D activities, by managing the multi-disciplinary complexity of current systems and by anticipating the future characteristics of products and processes. The final FORMAT project result will be an innovative forecasting methodology, backed by a semantic web IT tool, supporting decision making in manufacturing industries; it will be evaluated in real test cases and extensively described in the FORMAT handbook and through an IT demonstrator serving as a proof of concept.
Agency: European Commission | Branch: FP7 | Program: CP-FP | Phase: GC-SST.2010.7-2. | Award Amount: 4.26M | Year: 2010
ICE focuses on the development of a new air conditioning and heat pump system based on a magnetocaloric heat pump, and on the redesign of the cabin air conditioning and microclimate control to use the cooling and heating power in the most efficient way. The Consortium includes an SME with relevant and unique know-how on magnetocaloric heat pumps, an OEM supported by an acknowledged automotive research center, a tier-one automotive supplier, and two important academic and research institutions. The FEV scenario is moving towards progressive diffusion in urban areas (e.g. small passenger cars and small buses). In this context, a small bus has been selected as the demonstrator vehicle because it represents a challenging application and is commercially available and in use within the consortium (real use data available), giving a real chance of exploiting the project outcomes in the short-to-medium term. The project will also evaluate applications for passenger cars and trucks (parking heating and cooling). The project's major contents are: an efficient automotive electrical compact heat pump (COP > 5 in cooling mode) based on the magnetocaloric effect, using high-efficiency magnetic materials, smart design, and specific micro-channelled heat exchangers; a redesign of the thermal power distribution system, based on a coolant loop, to distribute the thermal power locally in the cabin and to control the temperature of batteries and electronics; a microclimate control system based on thermal comfort, able to limit thermal power generation to only the quantity actually required and to adapt the system to the number of occupants; and sustainable cost, thanks to the resizing of the systems and system integration. The project results will be validated by installing the system on an electric bus and testing it, including road tests. The project also includes substantial dissemination and exploitation activities to promote the application of the ICE approach.
Kamrin K.,Massachusetts Institute of Technology |
Koval G.,National Institute for Applied Sciences, Strasbourg
Physical Review Letters | Year: 2012
Extending recent modeling efforts for emulsions, we propose a nonlocal fluidity relation for flowing granular materials, capturing several known finite-size effects observed in steady flow. We express the local Bagnold-type granular flow law in terms of a fluidity ratio and then extend it with a particular Laplacian term that is scaled by the grain size. The resulting model is calibrated against a sequence of existing discrete element method data sets for two-dimensional annular shear, where it is shown that the model correctly describes the divergence from a local rheology due to the grain size as well as the rate-independence phenomenon commonly observed in slowly flowing zones. The same law is then applied in two additional inhomogeneous flow geometries, and the predicted velocity profiles are compared against corresponding discrete element method simulations utilizing the same grain composition as before, yielding favorable agreement in each case. © 2012 American Physical Society.
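A hedged sketch of the form of the nonlocal fluidity relation described above, in notation assumed for illustration (g the fluidity ratio, g_loc its local Bagnold-type value, ξ a cooperativity length scaling with the grain size d):

```latex
% Sketch (assumed notation, not verbatim from the paper):
% the local flow law g = g_loc is extended by a grain-size-scaled
% Laplacian term, giving the nonlocal fluidity relation
\[
  g = g_{\mathrm{loc}} + \xi^{2}\,\nabla^{2} g,
  \qquad \xi \propto d .
\]
```

When ξ → 0 the relation recovers the local rheology, which is consistent with the abstract's statement that finite-size deviations are controlled by the grain size.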
Peng L.,National Institute for Applied Sciences, Strasbourg |
Helard M.,National Institute for Applied Sciences, Strasbourg |
Haese S.,National Institute for Applied Sciences, Strasbourg
Journal of Lightwave Technology | Year: 2013
A novel bit-loading approach is proposed for discrete multi-tone (DMT) transmission over short-range polymer optical fiber (POF) systems. First, from the exact signal-to-noise ratio table of quadrature amplitude modulation for different target bit error rates (BER), a new linear approximation (LA) expression is introduced to implement bit loading for DMT systems. Then, based on the water-filling concept, the performance bounds and optimal power allocations for the classical and the proposed bit-loading algorithms in Gaussian low-pass channel models are derived. Next, using the measured channel parameters of step-index (SI) POF channels at different transmission distances, the theoretical performance bounds are computed and the practical transmission rates are simulated. Simulation results show that the proposed LA-expression-based bit loading achieves a higher transmission rate than classical modulation-gap-based bit loading. Both algorithms use a sub-optimal Chow algorithm with a constant power allocation and an iterative process. Finally, real DMT transmissions over SI-POFs are implemented in order to verify the proposed method. The LA-expression-based bit loading outperforms the modulation-gap-based bit loading in DMT transmission systems over different transmission distances. Moreover, experimental results show that the longer the fiber length, the higher the performance gain with LA-expression-based bit loading: for a 50 m SI-POF transmission, the transmission rate of the DMT system with LA-expression-based bit loading is improved by 5% with the same experimental setup for a given BER of 1 × 10^-3, and by 10% for a 100 m length. © 2013 IEEE.
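The classical modulation-gap baseline mentioned above can be sketched minimally: each subcarrier is assigned roughly log2(1 + SNR/Γ) bits, where Γ is the SNR gap for the target BER. The function name, the gap value, and the rounding policy are illustrative assumptions, not the paper's LA expression:

```python
import math

def gap_bit_loading(snr_db, gap_db=6.0, max_bits=10):
    """Classical gap-approximation bit loading for DMT (illustrative sketch).

    snr_db: per-subcarrier SNR in dB; gap_db: assumed SNR gap for the
    target BER. Returns the integer bits assigned to each subcarrier.
    """
    gap = 10 ** (gap_db / 10.0)
    bits = []
    for s in snr_db:
        snr = 10 ** (s / 10.0)
        b = math.log2(1.0 + snr / gap)        # Shannon-style rate with gap
        bits.append(min(max_bits, max(0, round(b))))  # clamp to [0, max_bits]
    return bits
```

The paper's contribution replaces the constant-gap term with a linear approximation fitted to the exact QAM SNR table, which this sketch does not attempt to reproduce.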
Le Thi H.A.,University of Lorraine |
Nguyen M.C.,University of Lorraine |
Dinh T.P.,National Institute for Applied Sciences, Strasbourg
Neural Computation | Year: 2014
Automatic discovery of community structures in complex networks is a fundamental task in many disciplines, including physics, biology, and the social sciences. The most widely used criterion for characterizing the existence of a community structure in a network is modularity, a quantitative measure proposed by Newman and Girvan (2004). Community discovery can be formulated as the so-called modularity maximization problem, which consists of finding a partition of the nodes of a network with the highest modularity. In this letter, we propose a fast and scalable algorithm called DCAM, based on DC (difference of convex functions) programming and DCA (DC algorithms), an innovative approach in the nonconvex programming framework, for solving the modularity maximization problem. The special structure of the problem considered here has been well exploited to obtain an inexpensive DCA scheme that requires only a matrix-vector product at each iteration. Starting with a very large number of communities, DCAM furnishes, as output, an optimal partition together with the optimal number of communities; that is, the number of communities is discovered automatically during DCAM's iterations. Numerical experiments are performed on a variety of real-world network data sets with up to 4,194,304 nodes and 30,359,198 edges. The comparative results with eight reference algorithms show that the proposed approach outperforms them not only in quality and rapidity but also in scalability. Moreover, it realizes a very good trade-off between the quality of solutions and the run time. © 2014 Massachusetts Institute of Technology.
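For reference, the Newman–Girvan modularity that DCAM maximizes can be computed directly from the adjacency matrix. The dense-matrix sketch below is an illustration of the definition Q = (1/2m) Σ_ij [A_ij − k_i k_j / 2m] δ(c_i, c_j); the function name and representation are editorial choices, not from the letter:

```python
def modularity(adj, communities):
    """Newman-Girvan modularity Q for an undirected, unweighted graph.

    adj: symmetric 0/1 adjacency matrix (list of lists);
    communities: community label per node.
    """
    n = len(adj)
    degrees = [sum(row) for row in adj]
    two_m = sum(degrees)  # 2m = sum of degrees = twice the edge count
    q = 0.0
    for i in range(n):
        for j in range(n):
            if communities[i] == communities[j]:
                q += adj[i][j] - degrees[i] * degrees[j] / two_m
    return q / two_m
```

For large sparse networks like those in the experiments, one would of course use a sparse representation; the dense double loop is only for clarity.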
Martin N.,Jet Propulsion Laboratory |
Monnier J.,National Institute for Applied Sciences, Strasbourg
European Journal of Mechanics, B/Fluids | Year: 2015
The present work addresses the question of performing inverse rheometry and basal property inference for pseudoplastic gravity-driven free-surface flows at low Reynolds number. The modeling of these flows involves several parameters, such as the rheological ones or the state of the basal boundary (modeling an interface between the base and the fluid). The issues of inverse rheometry are addressed in a general laboratory flow context using surface velocity data. The inverse characterization of the basal boundary is proposed in a geophysical flow context, where the parameters involved in the empirical effective sliding law are particularly difficult to estimate. Using an accurate direct and inverse model based on the adjoint method, combined with an original efficient solver, sensitivity analyses and parameter identification are performed for a wide range of flow regimes, defined by the degree of slip and the non-linearity of the viscous sliding law considered at the bottom. The first result is the numerical assessment of the passive aspect, in terms of surface velocities, of the viscosity singularity inherent to a power-law pseudoplastic (shear-thinning) description. From this result, identification of the two parameters of the constitutive law, namely the power-law exponent and the consistency, is performed. These numerical experiments provide, on the one hand, a very robust identification of the power-law exponent, even for very noisy surface velocity observations, and, on the other hand, reveal a strong equifinality problem in the identification of the consistency. This parameter has a minor influence on the flow in terms of surface velocities. Typically, for temperature-dependent geophysical fluids, an a priori law describing its spatial variability (e.g. based on a vertical temperature profile) is then sufficient. This study then focuses on the basal properties interacting with the fluid rheology.
An accurate joint identification of the scalar-valued triple (n, m, β) (respectively the rheological exponent, the nonlinear friction exponent, and the friction coefficient) is achieved for any degree of slip, allowing the flow regime to be completely inferred. Next, in a geophysical flow context, identifications of a spatially varying friction coefficient are performed for various perturbed bedrock topographies. The (2D-vertical) results demonstrate a severely ill-posed problem, in which a given set of surface velocity data can be reproduced by different topography/friction pairs.
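The constitutive and sliding laws discussed above can be sketched as follows, with notation assumed for illustration (K the consistency, n the power-law exponent, u_b the basal sliding velocity; the conventions are a standard power-law fluid and a Weertman-type friction law, not necessarily those of the paper):

```latex
% Sketch under assumed conventions:
% power-law (shear-thinning for n < 1) effective viscosity,
% and a nonlinear viscous sliding law at the base
\[
  \eta(\dot\gamma) = K\,\dot\gamma^{\,n-1},
  \qquad
  \tau_b = \beta\,\lvert u_b \rvert^{m-1} u_b .
\]
```

In this form, n and K are the two constitutive parameters whose identification is discussed above, and (m, β) parameterize the effective sliding law.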
Newall A.T.,University of New South Wales |
Wood J.G.,University of New South Wales |
Oudin N.,National Institute for Applied Sciences, Strasbourg |
MacIntyre C.R.,University of New South Wales
Emerging Infectious Diseases | Year: 2010
We used a hybrid transmission and economic model to evaluate the relative merits of stockpiling antiviral drugs and vaccine for pandemic influenza mitigation. In the absence of any intervention, our base-case assumptions generated a population clinical attack rate of 31.1%. For at least some parameter values, population prepandemic vaccination strategies were effective at containing an outbreak of pandemic influenza until the arrival of a matched vaccine. Because of the uncertain nature of many parameters, we used a probabilistic approach to determine the most cost-effective strategies. At a willingness to pay of >A$24,000 per life-year saved, more than half the simulations showed that a prepandemic vaccination program combined with antiviral treatment was cost-effective in Australia.
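The probabilistic cost-effectiveness result above can be illustrated with a minimal sketch: given Monte Carlo draws of incremental cost and life-years saved, count the share of draws that fall under a willingness-to-pay threshold. The function and the handling of dominant (cost-saving) draws are illustrative assumptions, not the article's model:

```python
def fraction_cost_effective(sims, wtp):
    """Share of probabilistic simulation draws deemed cost-effective.

    sims: list of (incremental_cost, life_years_saved) draws;
    wtp: willingness-to-pay threshold per life-year saved.
    A draw counts as cost-effective when cost / life_years <= wtp,
    or when it is dominant (saves money without losing life-years).
    """
    count = 0
    for cost, life_years in sims:
        if life_years > 0 and cost / life_years <= wtp:
            count += 1
        elif life_years >= 0 and cost <= 0:
            count += 1  # dominant: cheaper and at least as effective
    return count / len(sims)
```

Sweeping `wtp` over a range of thresholds yields a cost-effectiveness acceptability curve, which is the standard way such probabilistic results are reported.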
Hoai An L.T.,National Institute for Applied Sciences, Strasbourg |
Tao P.D.,National Institute for Applied Sciences, Strasbourg
Journal of Global Optimization | Year: 2015
We address a class of particularly hard-to-solve combinatorial optimization problems, namely multicommodity network optimization when the link cost functions are discontinuous step-increasing. Unlike the usual approaches, which develop relaxations of such problems (in an equivalent form, a large-scale mixed integer linear programming problem) in order to derive lower bounds, our d.c. (difference of convex functions) approach deals with the original continuous version and provides upper bounds. More precisely, we approximate step-increasing functions as closely as desired by differences of polyhedral convex functions and then apply DCA (difference of convex functions algorithm) to the resulting approximate polyhedral d.c. programs. Preliminary computational experiments are presented on a series of test problems with structures similar to those encountered in telecommunication networks. They show that the d.c. approach and DCA provide feasible multicommodity flows x* such that the relative differences between upper bounds (computed by DCA) and simple lower bounds, r := (f(x*) − LB)/f(x*), lie in the range [4.2%, 16.5%] with an average of 11.5%, where f is the cost function of the problem and LB is a lower bound obtained by solving the linearized program (built from the original problem by replacing step-increasing cost functions with simple affine minorizations). To our knowledge, this is the first time that such good upper bounds have been obtained. © 2002 Kluwer Academic Publishers. Printed in the Netherlands.
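As a minimal illustration of the affine-minorization step used for the lower bound LB: a step-increasing link cost can be under-estimated by the steepest line through the origin that stays below every step. The breakpoints/costs representation and the function names are assumptions for illustration, not the paper's formulation:

```python
def step_cost(x, breakpoints, costs):
    """Step-increasing link cost: costs[k] for x in (breakpoints[k-1], breakpoints[k]]."""
    if x <= 0:
        return 0.0
    for b, c in zip(breakpoints, costs):
        if x <= b:
            return c
    return costs[-1]  # saturate beyond the last breakpoint

def affine_minorant_slope(breakpoints, costs):
    """Slope s of the tightest affine minorization l(x) = s*x through the origin.

    For each step, l must stay below the step value up to its breakpoint,
    so s <= costs[k] / breakpoints[k] for every k; take the minimum.
    """
    return min(c / b for b, c in zip(breakpoints, costs))
```

Replacing each step cost by `s*x` makes the problem a linear program whose optimum is the lower bound LB entering the relative gap r above.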