UVSQ

Versailles, France

Delbos F.,French Institute of Petroleum | Dumas L.,UVSQ | Echague E.,UVSQ
14th European Conference on the Mathematics of Oil Recovery 2014, ECMOR 2014 | Year: 2014

In the oil industry, many problems consist in minimizing a computationally expensive function subject to bound constraints. The values of the objective function are in general the output of a complex simulator for which no explicit expression is available. The absence of any information on the function gradient restricts the choice to algorithms that use no first- or second-order derivatives. There exist many different approaches in derivative-free optimization, among which the most popular are space partitioning methods like DIRECT, direct search methods like Nelder-Mead or MADS, but also evolutionary algorithms like genetic algorithms, evolution strategies or other similar methods. The main drawback of all these methods is that they can suffer from a poor convergence rate and a high computational cost, especially in high-dimensional cases. However, they can succeed in finding a global optimum where all other classical methods fail. In this work we consider surrogate-based optimization, which has been widely used for many years. A surrogate model is a framework used to minimize a function by sequentially building and minimizing a simpler model (surrogate) of the original function. In this work we build a new surrogate model using the sparse grid interpolation method. Basically, the sparse grid approach is an error-controlled hierarchical approximation method which neglects the basis functions with the smallest supports. This approach was introduced in 1963 by Smolyak in order to evaluate integrals in high dimensions. It was then applied to PDE approximations, but also to optimization and, more recently, to sensitivity analysis. Compared to the first optimization algorithm based on sparse grids proposed by Klimke et al., a local refinement step is constructed here in order to explore the most promising regions. Moreover, no optimization steps are performed over the objective function, which significantly reduces the number of function evaluations. In this talk we present our GOSGrid optimization algorithm based on sparse grids. We also compare it with other derivative-free global algorithms. The comparison is based on a parameter estimation problem in reservoir engineering.
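As a rough illustration of this class of methods, the sketch below implements a generic surrogate-based loop with local refinement around the incumbent best point. It is a minimal sketch only: GOSGrid builds a Smolyak sparse-grid interpolant, whereas a SciPy RBF interpolant stands in for the surrogate here, and the quadratic objective and refinement rule are illustrative assumptions.

```python
# Minimal sketch of surrogate-based optimization with local refinement.
# Assumption: an RBF interpolant stands in for the paper's sparse-grid model.
import numpy as np
from scipy.interpolate import RBFInterpolator

def expensive_objective(x):                      # stand-in for the simulator
    return np.sum((x - 0.3) ** 2, axis=-1)

rng = np.random.default_rng(0)
dim, lo, hi = 2, 0.0, 1.0
X = rng.uniform(lo, hi, size=(16, dim))          # initial design
y = expensive_objective(X)

for it in range(10):
    surrogate = RBFInterpolator(X, y)            # build the cheap model
    cand = rng.uniform(lo, hi, size=(512, dim))  # minimize the model only,
    x_best = cand[np.argmin(surrogate(cand))]    # never the true objective
    # local refinement: a few true evaluations near the promising point
    local = np.clip(x_best + 0.05 * rng.standard_normal((4, dim)), lo, hi)
    X = np.vstack([X, local])
    y = np.concatenate([y, expensive_objective(local)])

print("best point:", X[np.argmin(y)], "value:", y.min())
```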


Marteau B.,French Institute of Petroleum | Ding D.,French Institute of Petroleum | Dumas L.,UVSQ
14th European Conference on the Mathematics of Oil Recovery 2014, ECMOR 2014 | Year: 2014

History matching is a challenging optimization problem that involves numerous evaluations of a very expensive objective function through the simulation of complex fluid flows inside an oil reservoir. The gradient of such a function is typically not available. Therefore, derivative-free optimization methods such as Powell's NEWUOA are often chosen to try to solve such a problem. One way to reduce the number of evaluations of the objective function is to exploit its specific structure, for example its partial separability. A function F: x → F(x1,..., xp) is said to be partially separable if there exist subfunctions fi such that F = f1 + ... + fn and, for all i, fi depends only on pi of the p variables.
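The saving this structure buys is that a change confined to a few coordinates only forces the re-evaluation of the subfunctions that see them. A minimal sketch, with illustrative subfunctions and index sets:

```python
# Hedged sketch of partial separability; subfunctions are illustrative.
import numpy as np

# F(x) = f1(x0, x1) + f2(x1, x2) + f3(x3): each fi sees few variables.
subfunctions = [
    (lambda v: (v[0] - 1.0) ** 2 + v[0] * v[1], [0, 1]),
    (lambda v: np.sin(v[0]) * v[1],             [1, 2]),
    (lambda v: v[0] ** 4,                       [3]),
]

def evaluate_parts(x, cache=None, changed=None):
    """Return [f1(x), ..., fn(x)]; if only some coordinates changed,
    re-evaluate just the subfunctions that depend on them."""
    if cache is None:
        return [fi(x[idx]) for fi, idx in subfunctions]
    return [fi(x[idx]) if set(idx) & changed else old
            for (fi, idx), old in zip(subfunctions, cache)]

x = np.zeros(4)
parts = evaluate_parts(x)                       # full evaluation: 3 calls
x[3] = 2.0                                      # perturb one coordinate
parts = evaluate_parts(x, parts, changed={3})   # only f3 is recomputed
print("F(x) =", sum(parts))
```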


Mayaud L.,French Institute of Health and Medical Research | Mayaud L.,University of Versailles | Mayaud L.,University of Oxford | Congedo M.,CNRS GIPSA Laboratory | And 6 more authors.
Neurophysiologie Clinique | Year: 2013

Aims of the study: A brain-computer interface aims at restoring communication and control in severely disabled people by identification and classification of EEG features such as event-related potentials (ERPs). The aim of this study is to compare different modalities of EEG recording for the extraction of ERPs. The first comparison evaluates the performance of six disc electrodes against that of the EMOTIV headset, while the second evaluates three different electrode types (disc, needle, and large squared electrode). Material and methods: Ten healthy volunteers gave informed consent and were randomized to try the traditional EEG system (six disc electrodes with gel and skin preparation) or the EMOTIV headset first. Together with the six disc electrodes, a needle electrode and a square electrode of larger surface were simultaneously recording near lead Cz. Each modality was evaluated over three sessions of auditory P300 separated by one hour. Results: No statistically significant effect was found for the electrode type, nor for the interaction between electrode type and session number. There was no statistically significant difference in performance between the EMOTIV and the six traditional EEG disc electrodes, although there was a trend showing worse performance of the EMOTIV headset. However, the modality-session interaction was highly significant (P < 0.001), showing that, while the performance of the six disc electrodes stays constant over sessions, the performance of the EMOTIV headset drops dramatically between 2 and 3 h of use. Finally, the evaluation of comfort by participants revealed increasing discomfort with the EMOTIV headset starting with the second hour of use. Conclusion: Our study does not recommend the use of one modality over another based on performance, but suggests the choice should be made on more practical considerations such as the expected length of use, the availability of skilled labor for system setup and, above all, patient comfort. © 2013 Elsevier Masson SAS.
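For readers unfamiliar with ERP extraction, the sketch below shows the epoch-averaging step on which such P300 comparisons rest: stimulus-locked epochs are baseline-corrected and averaged so the evoked response emerges from background EEG. The channel, sampling rate, window and synthetic data are assumptions, not the study's settings.

```python
# Illustrative ERP averaging on synthetic data; all parameters are assumed.
import numpy as np

fs = 250                                   # sampling rate (Hz), assumed
eeg = np.random.randn(60 * fs)             # one channel (e.g. near Cz), toy
stim_onsets = np.arange(fs, 55 * fs, fs)   # stimulus sample indices, assumed

pre, post = int(0.2 * fs), int(0.8 * fs)   # -200 ms to +800 ms window
epochs = np.stack([eeg[s - pre:s + post] for s in stim_onsets])
epochs -= epochs[:, :pre].mean(axis=1, keepdims=True)  # baseline correction
erp = epochs.mean(axis=0)                  # averaged ERP; P300 peaks ~300 ms
```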


News Article | November 15, 2016
Site: phys.org

Photograph of the MR2 archaeological site at Mehrgarh, occupied from 4500 to 3600 BC, where the amulet was found. Credit: C. Jarrige, Mission archéologique de l'Indus

At 6,000 years old, this copper amulet is the earliest known lost-wax cast object. Now, researchers have finally discovered how it was made, using a novel UV-visible photoluminescence spectral imaging approach. All the parameters of the elaboration process, such as the purity of the copper and the melting and solidification temperatures, are now accurately known. This work has enabled the scientists to solve the mystery of the invention of lost-wax casting, a technique that led to art foundry. Resulting from a collaboration between researchers from the CNRS, the French Ministry of Culture and Communication and the SOLEIL synchrotron, the work was published on 15 November 2016 in the journal Nature Communications. The researchers examined a copper amulet, discovered in the 1980s at a site that was occupied 6,000 years ago and had been a focal point for innovation since Neolithic times: Mehrgarh, in today's Pakistan. The shape of the object shows that it was designed using the earliest known precision casting technique, lost-wax casting (still in use today). The process begins with a model formed in a low melting point material such as beeswax. The model is covered with clay, which is heated to remove the wax and then baked. The mould is filled with molten metal and then broken to release the metal object. This was all that was known about the process used to make the copper amulet until it was subjected to a novel photoluminescence approach, which revealed that it had an unexpected internal structure. Although the amulet today consists mainly of copper oxide (cuprite), it emits a non-uniform response under UV-visible illumination. Between the dendrites formed during initial solidification of the molten metal, the researchers found rods that were undetectable using all other approaches tested. The shape and arrangement of the rods enabled the team to reconstruct the process used to make the amulet with an unprecedented level of detail for such a corroded object. 6,000 years ago, following high-temperature solidification of the copper forming it, the amulet was made up of a pure copper matrix dotted with cuprite rods, resulting from the oxidizing conditions of the melt. Over time, the copper matrix also corroded to cuprite. The contrast observed using photoluminescence results from a difference in crystal defects between the two cuprites present: there are oxygen atoms missing in the cuprite of the rods, a defect that is not present in the cuprite formed by corrosion. This innovative imaging technique, with high resolution and a very wide field of view, made it possible to identify the ore used (extremely pure copper), the quantity of oxygen absorbed by the molten metal, and even the melting and solidification temperatures (around 1072 °C). The discovery illustrates the potential of this new analytical approach, which can be applied to the study of an extremely wide range of complex systems, such as semiconducting materials, composites and, of course, archaeological objects.

Comparison of high spatial dynamics-photoluminescence (PL, top) and optical microscopy (bottom) images. The area imaged corresponds to part of one of the spokes of the amulet. The PL image reveals a eutectic rod-like structure that is undetectable using all other tested techniques. The image at last made it possible to explain the process used to make the amulet. Credit: T. Séverin-Fabiani, M. Thoury, L. Bertrand, B. Mille, IPANEMA, CNRS / MCC / UVSQ, Synchrotron SOLEIL, C2RMF

More information: M. Thoury et al. High spatial dynamics-photoluminescence imaging reveals the metallurgy of the earliest lost-wax cast object, Nature Communications (2016). DOI: 10.1038/ncomms13356


Pradat-Diehl P.,University Pierre and Marie Curie | Joseph P.-A.,University of Bordeaux Segalen | Beuret-Blanquart F.,CRMPR Les Herbiers | Luaute J.,University of Lyon | And 7 more authors.
Annals of Physical and Rehabilitation Medicine | Year: 2012

This document is part of a series of guideline documents designed by the French Physical and Rehabilitation Medicine Society (SOFMER) and the French Federation of PRM (FEDMER). These reference documents each focus on a particular pathology (here, patients with severe TBI). They describe, for each given pathology, the patients' clinical and social needs, the objectives of PRM care, and the human and material resources required along the pathology-dedicated pathway. 'Care pathways in PRM' is therefore a short document designed to enable readers (physician, decision-maker, administrator, lawyer, finance manager) to gain a global understanding of the available therapeutic care structures, organization and economic needs for patients' optimal care and follow-up. After a severe traumatic brain injury, patients may be divided into three categories according to the severity of impairment, early outcomes in the intensive care unit, and functional prognosis. Each category is considered in line with six identical parameters used in the International Classification of Functioning, Disability and Health (World Health Organization), focusing thereafter on personal and environmental factors liable to affect the patients' needs. © 2012.


Glatigny A.,French National Center for Scientific Research | Mathieu L.,UVSQ | Herbert C.J.,French National Center for Scientific Research | Dujardin G.,French National Center for Scientific Research | And 3 more authors.
BMC Systems Biology | Year: 2011

Background: The mitochondrial inner membrane contains five large complexes that are essential for oxidative phosphorylation. Although the structure and the catalytic mechanisms of the respiratory complexes have been progressively established, their biogenesis is far from being fully understood. Very few complex III assembly factors have been identified so far. It is probable that more factors are needed for the assembly of a functional complex, but that the genetic approaches used to date have not been able to identify them. We have developed a systems biology approach to identify new factors controlling complex III biogenesis. Results: We collected all the physical protein-protein interactions (PPI) involving the core subunits, the supernumerary subunits and the assembly factors of complex III and used Cytoscape 2.6.3 and its plugins to construct a network. It was then divided into overlapping and highly interconnected sub-graphs with ClusterONE. One sub-graph contained the core and supernumerary subunits of complex III; it also contained some subunits of complex IV and proteins participating in the assembly of complex IV. This sub-graph was then split with another algorithm into two sub-graphs. Subtracting these two sub-graphs from the previous sub-graph allowed us to identify a protein of unknown function, Usb1p/Ylr132p, that interacts with the complex III subunits Qcr2p and Cor1p. We then used genetic and cell biology approaches to investigate the function of Usb1p. Preliminary results indicated that Usb1p is an essential protein with a dual localization in the nucleus and in the mitochondria, and that the over-expression of this protein can compensate for defects in the biogenesis of the respiratory complexes. Conclusions: Our systems biology approach has highlighted the multiple associations between subunits and assembly factors of complexes III and IV during their biogenesis. In addition, this approach has allowed the identification of a new factor, Usb1p, involved in the biogenesis of respiratory complexes, which could not have been found using classical genetic screens looking for respiratory-deficient mutants. Thus, this systems biology approach appears to be a fruitful new way to study the biogenesis of mitochondrial multi-subunit complexes. © 2011 Glatigny et al; licensee BioMed Central Ltd.
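Stripped of the Cytoscape/ClusterONE tooling, the core graph operation reduces to building a PPI graph, clustering it, and subtracting the explained sub-graphs to expose the unexplained node. A toy sketch with networkx follows; the edges, and all protein names other than Usb1p, Qcr2p and Cor1p, are illustrative placeholders, not the paper's data:

```python
# Toy sub-graph subtraction; networkx stands in for Cytoscape/ClusterONE.
import networkx as nx

ppi = nx.Graph()
ppi.add_edges_from([
    ("Qcr2p", "Cor1p"), ("Qcr2p", "Usb1p"), ("Cor1p", "Usb1p"),
    ("Qcr2p", "Cox4p"), ("Cor1p", "proteinA"), ("Cox4p", "proteinB"),
])

# an overlapping cluster found by the clustering step (illustrative)
cluster = ppi.subgraph(["Qcr2p", "Cor1p", "Usb1p", "Cox4p", "proteinA"])
# suppose a second algorithm splits the cluster into two sub-graphs:
sub1 = {"Qcr2p", "Cor1p", "proteinA"}      # complex III side
sub2 = {"Cox4p"}                           # complex IV side
# subtracting both from the cluster leaves the unexplained protein(s):
leftover = set(cluster.nodes) - sub1 - sub2
print(leftover)                            # -> {'Usb1p'}
```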


Becker A.,Ecole Polytechnique Federale de Lausanne | Ducas L.,CWI | Gama N.,UVSQ | Laarhoven T.,TU Eindhoven
Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms | Year: 2016

To solve the approximate nearest neighbor search problem (NNS) on the sphere, we propose a method using locality-sensitive filters (LSF), with the property that nearby vectors have a higher probability of surviving the same filter than vectors which are far apart. We instantiate the filters using spherical caps of height 1 − α, where a vector survives a filter if it is contained in the corresponding spherical cap, and where ideally each filter has an independent, uniformly random direction. For small α, these filters are very similar to the spherical locality-sensitive hash (LSH) family previously studied by Andoni et al. For larger α bounded away from 0, these filters potentially achieve a superior performance, provided we have access to an efficient oracle for finding relevant filters. Whereas existing LSH schemes are limited by a performance parameter of ρ ≥ 1/(2c² − 1) to solve approximate NNS with approximation factor c, with spherical LSF we potentially achieve smaller asymptotic values of ρ, depending on the density of the data set. For sparse data sets where the dimension is super-logarithmic in the size of the data set, we asymptotically obtain ρ = 1/(2c² − 1), while for a logarithmic dimensionality with density constant κ we obtain asymptotics of ρ ∼ 1/(4κc²). To instantiate the filters and prove the existence of an efficient decoding oracle, we replace the independent filters by filters taken from certain structured random product codes. We show that the additional structure in these concatenation codes allows us to decode efficiently using techniques similar to lattice enumeration, and that we can find the relevant filters with low overhead, while at the same time not significantly changing the collision probabilities of the filters. We finally apply spherical LSF to sieving algorithms for solving the shortest vector problem (SVP) on lattices, and show that this leads to a heuristic time complexity for solving SVP in dimension n of (3/2)^(n/2+o(n)) ≈ 2^(0.29n+o(n)). This asymptotically improves upon the previous best algorithms for solving SVP, which use spherical LSH and cross-polytope LSH and run in time 2^(0.298n+o(n)). Experiments with the GaussSieve validate the claimed speedup and show that this method may be practical as well, as the polynomial overhead is small.
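A minimal sketch of the filtering primitive itself: a unit vector x survives a filter with direction d iff ⟨x, d⟩ ≥ α, i.e. x lies in the spherical cap of height 1 − α around d. Truly independent random directions are used below for simplicity, whereas the paper replaces them with structured random product codes so the surviving filters can be enumerated efficiently; the dimensions and α here are arbitrary.

```python
# Sketch of spherical locality-sensitive filtering with random cap directions.
import numpy as np

rng = np.random.default_rng(1)
n, dim, m, alpha = 1000, 32, 200, 0.3

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

data = normalize(rng.standard_normal((n, dim)))      # points on the sphere
filters = normalize(rng.standard_normal((m, dim)))   # filter directions

# (n, m) boolean table: which points survive which filters
buckets = data @ filters.T >= alpha

query = normalize(rng.standard_normal(dim))
q_surv = filters @ query >= alpha                    # filters the query survives
candidates = np.flatnonzero(buckets[:, q_surv].any(axis=1))
best = candidates[np.argmax(data[candidates] @ query)] if candidates.size else None
print("nearest candidate index:", best)
```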


Memon A.,UVSQ | Fursin G.,French Institute for Research in Computer Science and Automation
Advances in Parallel Computing | Year: 2014

Software and hardware co-design and optimization of HPC systems has become intolerably complex, ad hoc, time-consuming and error-prone due to the enormous number of available design and optimization choices, the complex interactions between all software and hardware components, and the multiple strict requirements placed on performance, power consumption, size, reliability and cost. We present our novel long-term holistic and practical solution to this problem based on the customizable, plugin-based, schema-free, heterogeneous, open-source Collective Mind repository and infrastructure with unified web interfaces and an online advice system. This collaborative framework distributes the analysis and multi-objective off-line and on-line auto-tuning of computer systems among many participants, while utilizing any available smartphone, tablet, laptop, cluster or data center, and continuously observing, classifying and modeling their realistic behavior. Any unexpected behavior is analyzed using shared data mining and predictive modeling plugins, or exposed to the community at cTuning.org for collaborative explanation, top-down complexity reduction, incremental problem decomposition and detection of correlating program, architecture or run-time properties (features). Gradually increasing optimization knowledge helps to continuously improve the optimization heuristics of any compiler, predict optimizations for new programs, or suggest efficient run-time (online) tuning and adaptation strategies depending on end-user requirements. We decided to share all our past research artifacts, including hundreds of codelets, numerical applications, data sets, models, universal experimental analysis and auto-tuning pipelines, a self-tuning machine-learning-based meta compiler, and unified statistical analysis and machine learning plugins, in a public repository to initiate systematic, reproducible and collaborative R&D with a new publication model where experiments and techniques are validated, ranked and improved by the community. © 2014 The authors and IOS Press.
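At its simplest, the off-line auto-tuning step amounts to exploring a space of build configurations and measuring each one. The sketch below does this exhaustively for a handful of GCC flags; the benchmark source file and the flag set are hypothetical placeholders, and this is not Collective Mind's API, which additionally distributes runs, stores results and models them.

```python
# Naive exhaustive compiler-flag auto-tuning; bench.c is a placeholder.
import itertools, subprocess, time

opt_levels = ["-O1", "-O2", "-O3"]
extras = ["-funroll-loops", "-ffast-math", "-march=native"]
best = (float("inf"), None)

for opt in opt_levels:
    for k in range(len(extras) + 1):
        for combo in itertools.combinations(extras, k):
            flags = [opt, *combo]
            # compile the (hypothetical) benchmark with this configuration
            subprocess.run(["gcc", *flags, "-o", "bench", "bench.c"], check=True)
            t0 = time.perf_counter()
            subprocess.run(["./bench"], check=True)   # measure one run
            elapsed = time.perf_counter() - t0
            if elapsed < best[0]:
                best = (elapsed, flags)

print("fastest configuration:", best)
```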


Trad D.,ESSTT | Al-Ani T.,UVSQ | Al-Ani T.,School of Engineering in Information and Communication Science and Technology | Jemni M.,ESSTT
2015 5th International Conference on Information and Communication Technology and Accessibility, ICTA 2015 | Year: 2015

The aim of this paper is to investigate a nonlinear approach to feature extraction from electroencephalogram (EEG) signals in order to classify motor imagery for a brain-computer interface (BCI). This approach consists of combining Empirical Mode Decomposition (EMD) and band power (BP). Considering the non-stationary and nonlinear characteristics of motor imagery EEG, the EMD method is proposed to decompose the EEG signal into a set of stationary time series called Intrinsic Mode Functions (IMFs). These IMFs are analyzed with the band power (BP) to detect the characteristics of the sensorimotor rhythms (mu and beta). Finally, the data are reconstructed using only the specific IMFs, and the band power is then computed on the reconstructed signals. Once the new feature vector is built, the classification of motor imagery is performed using Hidden Markov Models (HMMs). The results obtained show that the EMD method allows the most reliable features to be extracted from the EEG and that the classification rate obtained is higher than with the direct BP approach alone. Such a system appears to be a particularly promising communication channel for people suffering from severe paralysis, for instance persons with myopathic diseases or muscular dystrophy (MD), allowing them to move a joystick in a desired direction corresponding to the specific motor imagery. © 2015 IEEE.
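A compact sketch of that pipeline on a synthetic trial, using PyEMD (one possible EMD implementation, installable as EMD-signal) and Welch band power from SciPy; the sampling rate, band limits, number of retained IMFs and toy signal are illustrative assumptions:

```python
# EMD + band-power feature sketch; parameters and IMF selection are assumed.
import numpy as np
from scipy.signal import welch
from PyEMD import EMD

fs = 250
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # toy trial

imfs = EMD().emd(eeg)                    # decompose into IMFs
reconstructed = imfs[:3].sum(axis=0)     # keep IMFs carrying mu/beta rhythms

def band_power(x, lo, hi):
    f, pxx = welch(x, fs=fs, nperseg=fs)
    mask = (f >= lo) & (f <= hi)
    return np.trapz(pxx[mask], f[mask])

features = [band_power(reconstructed, 8, 12),    # mu band
            band_power(reconstructed, 13, 30)]   # beta band
# `features` would then feed an HMM classifier (e.g. hmmlearn) per the paper.
print(features)
```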


PubMed | French Institute of Health and Medical Research, AP-HP, Centre Hospitalier Intercommunal de Créteil and UVSQ
Journal: Surgery for Obesity and Related Diseases: official journal of the American Society for Bariatric Surgery | Year: 2016

Background: Weight loss failure and proton pump inhibitor (PPI)-resistant gastroesophageal reflux disease (GERD) after sleeve gastrectomy (SG) are frequently encountered. Objectives: The aim of this study was to evaluate the efficacy and risks of SG conversion to Roux-en-Y gastric bypass (RYGB) in the case of weight loss failure or severe GERD. Setting: University hospitals. Methods: Between March 2007 and December 2014, 34 patients with a history of SG underwent RYGB. A retrospective analysis of a prospectively collected database was undertaken. Results: Among the 34 patients, 31 underwent revisional surgery for weight loss failure and 3 for PPI-resistant GERD. Six patients in the weight loss failure group had symptomatic GERD that was effectively treated with PPIs. The average body mass index (BMI) was 53 ± 11 kg/m² before SG. A laparoscopic approach was used in 94% of patients. There was no postoperative mortality. Major adverse events (<90 days) occurred in 4 patients (11.7%). The mean length of stay was 6.7 ± 2.8 days. At the time of revisional surgery, the mean BMI, percentage excess weight loss, and percentage weight loss were 44.7 ± 9.8 kg/m², 33.6 ± 27.1%, and 16 ± 9.7%, respectively, compared with 40.9 ± 8.5 kg/m², 63.1 ± 36.2%, and 23.8 ± 14% at 3 years. The GERD was resolved in all patients, allowing the cessation of PPI medication. Conclusion: Laparoscopic conversion of SG to RYGB is feasible and allows improvement in secondary weight loss and GERD, but at the cost of high morbidity.
