
Mayaud L., French Institute of Health and Medical Research | Mayaud L., University of Versailles | Mayaud L., University of Oxford | Congedo M., CNRS GIPSA Laboratory | And 6 more authors.
Neurophysiologie Clinique | Year: 2013

Aims of the study: A brain-computer interface aims at restoring communication and control in severely disabled people by identification and classification of EEG features such as event-related potentials (ERPs). The aim of this study is to compare different modalities of EEG recording for extraction of ERPs. The first comparison evaluates the performance of six disc electrodes against that of the EMOTIV headset, while the second evaluates three different electrode types (disc, needle, and large squared electrode). Material and methods: Ten healthy volunteers gave informed consent and were randomized to try the traditional EEG system (six disc electrodes with gel and skin preparation) or the EMOTIV headset first. Together with the six disc electrodes, a needle electrode and a square electrode of larger surface were simultaneously recording near lead Cz. Each modality was evaluated over three sessions of auditory P300 separated by one hour. Results: No statistically significant effect was found for electrode type, nor for the interaction between electrode type and session number. There was no statistically significant difference in performance between the EMOTIV and the six traditional EEG disc electrodes, although there was a trend toward worse performance of the EMOTIV headset. However, the modality-session interaction was highly significant (P < 0.001), showing that, while the performance of the six disc electrodes stays constant over sessions, the performance of the EMOTIV headset drops dramatically between 2 and 3 h of use. Finally, the evaluation of comfort by participants revealed increasing discomfort with the EMOTIV headset starting with the second hour of use. Conclusion: Our study does not recommend one modality over another based on performance, but suggests the choice should be made on more practical considerations such as the expected length of use, the availability of skilled labor for system setup and, above all, patient comfort.
© 2013 Elsevier Masson SAS.
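The P300 comparison above rests on standard ERP extraction: epochs time-locked to each stimulus are cut from the continuous recording and averaged, so the event-related deflection emerges from background EEG noise. The following is a minimal illustrative sketch of that averaging step (not the authors' pipeline; the synthetic signal, latency, and amplitude are invented for the demo):

```python
# Hypothetical sketch of ERP extraction by epoch averaging (not the
# authors' pipeline): given one continuous EEG channel and stimulus
# onset samples, cut fixed-length epochs and average them so activity
# time-locked to the stimulus (e.g. a P300) stands out from noise.
import random

def average_erp(signal, onsets, epoch_len):
    """Average fixed-length epochs starting at each stimulus onset."""
    epochs = [signal[t:t + epoch_len] for t in onsets
              if t + epoch_len <= len(signal)]
    n = len(epochs)
    return [sum(samples) / n for samples in zip(*epochs)]

# Synthetic demo: Gaussian noise plus a deflection 30 samples after
# each stimulus onset (simulated event-related peak).
random.seed(0)
total_len, epoch_len = 5000, 100
onsets = list(range(0, total_len - epoch_len, 250))
signal = [random.gauss(0, 1) for _ in range(total_len)]
for t in onsets:
    signal[t + 30] += 5.0

erp = average_erp(signal, onsets, epoch_len)
peak = max(range(epoch_len), key=lambda i: erp[i])
print(peak)  # averaging recovers the peak at sample 30
```

Averaging over 20 epochs shrinks the noise standard deviation by a factor of about sqrt(20), which is why the simulated peak dominates the averaged trace.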

Marteau B., French Institute of Petroleum | Ding D., French Institute of Petroleum | Dumas L., UVSQ
14th European Conference on the Mathematics of Oil Recovery 2014, ECMOR 2014 | Year: 2014

History matching is a challenging optimization problem that involves numerous evaluations of a very expensive objective function through the simulation of complex fluid flows inside an oil reservoir. The gradient of such a function is typically not available, so derivative-free optimization methods such as Powell's NEWUOA are often chosen to try to solve such a problem. One way to reduce the number of evaluations of the objective function is to exploit its specific structure, for example its partial separability. A function F : x → F(x1, ..., xp) is said to be partially separable if there exist subfunctions fi such that F = f1 + ... + fn and, for all i, fi depends only on pi
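The notion of partial separability invoked above can be made concrete with a toy example (this is an illustration of the structure, not the authors' history-matching algorithm): when F is a sum of subfunctions that each touch only a few coordinates, changing one coordinate only requires re-evaluating the subfunctions that depend on it, which is what a derivative-free method can exploit to cut expensive objective evaluations.

```python
# Toy partially separable objective (illustrative, not the paper's
# method): F(x) = f1(x0, x1) + f2(x1, x2) + f3(x2, x3).
# Each subfunction is stored with the set of coordinates it depends on.
SUBFUNCTIONS = [
    (lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2, {0, 1}),
    (lambda x: (x[1] - x[2]) ** 2,            {1, 2}),
    (lambda x: (x[2] + x[3]) ** 2,            {2, 3}),
]

def F(x):
    """Full objective: sum of all subfunctions."""
    return sum(f(x) for f, _ in SUBFUNCTIONS)

def delta_F(x, i, new_value):
    """Change in F when coordinate i is set to new_value, evaluating
    only the subfunctions that actually depend on coordinate i."""
    y = list(x)
    y[i] = new_value
    return sum(f(y) - f(x) for f, deps in SUBFUNCTIONS if i in deps)

x = [0.0, 0.0, 0.0, 0.0]
full = F([3.0, 0.0, 0.0, 0.0]) - F(x)   # full re-evaluation
partial = delta_F(x, 0, 3.0)             # touches only f1
print(full == partial)  # True: the cheap partial update is exact
```

Here updating coordinate 0 evaluates one subfunction instead of three; in a reservoir simulation, where each evaluation is a full flow simulation, that saving is the whole point.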


Becker A., Ecole Polytechnique Federale de Lausanne | Ducas L., CWI | Gama N., UVSQ | Laarhoven T., TU Eindhoven
Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms | Year: 2016

To solve the approximate nearest neighbor search problem (NNS) on the sphere, we propose a method using locality-sensitive filters (LSF), with the property that nearby vectors have a higher probability of surviving the same filter than vectors which are far apart. We instantiate the filters using spherical caps of height 1 − α, where a vector survives a filter if it is contained in the corresponding spherical cap, and where ideally each filter has an independent, uniformly random direction. For small α, these filters are very similar to the spherical locality-sensitive hash (LSH) family previously studied by Andoni et al. For larger α bounded away from 0, these filters potentially achieve a superior performance, provided we have access to an efficient oracle for finding relevant filters. Whereas existing LSH schemes are limited by a performance parameter of ρ ≥ 1/(2c² − 1) to solve approximate NNS with approximation factor c, with spherical LSF we potentially achieve smaller asymptotic values of ρ, depending on the density of the data set. For sparse data sets where the dimension is super-logarithmic in the size of the data set, we asymptotically obtain ρ = 1/(2c² − 1), while for a logarithmic dimensionality with density constant κ we obtain asymptotics of ρ ∼ 1/(4κc²). To instantiate the filters and prove the existence of an efficient decoding oracle, we replace the independent filters by filters taken from certain structured random product codes. We show that the additional structure in these concatenation codes allows us to decode efficiently using techniques similar to lattice enumeration, and we can find the relevant filters with low overhead, while at the same time not significantly changing the collision probabilities of the filters. We finally apply spherical LSF to sieving algorithms for solving the shortest vector problem (SVP) on lattices, and show that this leads to a heuristic time complexity for solving SVP in dimension n of (3/2)^(n/2+o(n)) ≈ 2^(0.29n+o(n)).
This asymptotically improves upon the previous best algorithms for solving SVP, which use spherical LSH and cross-polytope LSH and run in time 2^(0.298n+o(n)). Experiments with the GaussSieve validate the claimed speedup and show that this method may be practical as well, as the polynomial overhead is small.
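The core filter primitive described above is simple to state: a unit vector v survives a filter with direction w when its inner product with w is at least α, i.e. v lies in the spherical cap of height 1 − α around w. The sketch below demonstrates only this survival/collision behavior with independent random filter directions (the dimension, α, and vector counts are arbitrary demo choices; the paper's efficient construction uses structured random product codes instead, which this sketch does not implement):

```python
# Toy spherical locality-sensitive filter (illustration only, not the
# paper's product-code construction): a unit vector v "survives" the
# filter with direction w when <v, w> >= ALPHA, i.e. v lies in the
# spherical cap of height 1 - ALPHA.  Nearby vectors should survive
# the same random filters more often than far-apart vectors.
import math
import random

random.seed(1)
DIM, ALPHA = 25, 0.3

def unit(vec):
    """Normalize a vector to the unit sphere."""
    n = math.sqrt(sum(c * c for c in vec))
    return [c / n for c in vec]

def random_unit():
    """Uniformly random direction via normalized Gaussian coordinates."""
    return unit([random.gauss(0, 1) for _ in range(DIM)])

def survives(v, w):
    return sum(a * b for a, b in zip(v, w)) >= ALPHA

def collision_rate(u, v, n_filters=2000):
    """Fraction of random filters that both u and v survive."""
    filters = [random_unit() for _ in range(n_filters)]
    return sum(survives(u, w) and survives(v, w) for w in filters) / n_filters

base = random_unit()
near = unit([b + 0.1 * random.gauss(0, 1) for b in base])  # small perturbation
far = random_unit()                                        # unrelated vector
rate_near = collision_rate(base, near)
rate_far = collision_rate(base, far)
print(rate_near > rate_far)  # nearby pair collides in more filters
```

This collision gap is exactly what a sieving algorithm exploits: candidate reductions are only attempted between vectors that share a surviving filter, instead of between all pairs.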

Pradat-Diehl P., University Pierre and Marie Curie | Joseph P.-A., University of Bordeaux Segalen | Beuret-Blanquart F., CRMPR les Herbiers | Luaute J., University of Lyon | And 7 more authors.
Annals of Physical and Rehabilitation Medicine | Year: 2012

This document is part of a series of guideline documents designed by the French Physical and Rehabilitation Medicine Society (SOFMER) and the French Federation of PRM (FEDMER). These reference documents focus on a particular pathology (here, patients with severe TBI). For each pathology, they describe patients' clinical and social needs, PRM care objectives, and the human and material resources necessary for the pathology-dedicated pathway. 'Care pathways in PRM' is therefore a short document designed to enable readers (physician, decision-maker, administrator, lawyer, finance manager) to gain a global understanding of the available therapeutic care structures, organization and economic needs for patients' optimal care and follow-up. After a severe traumatic brain injury, patients may be divided into three categories according to the severity of impairment, early outcomes in the intensive care unit, and functional prognosis. Each category is considered in line with six identical parameters used in the International Classification of Functioning, Disability and Health (World Health Organization), focusing thereafter on personal and environmental factors liable to affect the patients' needs. © 2012.

Memon A., UVSQ | Fursin G., French Institute for Research in Computer Science and Automation
Advances in Parallel Computing | Year: 2014

Software and hardware co-design and optimization of HPC systems has become intolerably complex, ad hoc, time-consuming and error-prone due to the enormous number of available design and optimization choices, complex interactions between all software and hardware components, and multiple strict requirements placed on performance, power consumption, size, reliability and cost. We present our novel long-term holistic and practical solution to this problem based on a customizable, plugin-based, schema-free, heterogeneous, open-source Collective Mind repository and infrastructure with unified web interfaces and an online advice system. This collaborative framework distributes analysis and multi-objective off-line and on-line auto-tuning of computer systems among many participants while utilizing any available smart phone, tablet, laptop, cluster or data center, and continuously observing, classifying and modeling their realistic behavior. Any unexpected behavior is analyzed using shared data mining and predictive modeling plugins or exposed to the community at cTuning.org for collaborative explanation, top-down complexity reduction, incremental problem decomposition and detection of correlating program, architecture or run-time properties (features). Gradually increasing optimization knowledge helps to continuously improve optimization heuristics of any compiler, predict optimizations for new programs or suggest efficient run-time (online) tuning and adaptation strategies depending on end-user requirements.
We decided to share all our past research artifacts, including hundreds of codelets, numerical applications, data sets, models, universal experimental analysis and auto-tuning pipelines, a self-tuning machine-learning-based meta-compiler, and unified statistical analysis and machine learning plugins, in a public repository to initiate systematic, reproducible and collaborative R&D with a new publication model where experiments and techniques are validated, ranked and improved by the community. © 2014 The authors and IOS Press.
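The multi-objective auto-tuning loop described above can be reduced to a skeleton: enumerate a design space of optimization choices, measure each variant, and keep the configurations that are Pareto-optimal under competing objectives. The sketch below is a generic illustration of that loop, not the Collective Mind API; the flag names and the synthetic cost model are invented stand-ins for real compilation and timing runs.

```python
# Generic multi-objective autotuning skeleton (illustrative only, not
# the Collective Mind API).  A real autotuner would replace measure()
# with actual compile-and-run measurements.
import itertools

# Hypothetical design space: optimization level and loop unroll factor.
SPACE = {
    "opt_level": ["-O1", "-O2", "-O3"],
    "unroll": [1, 4, 8],
}

def measure(cfg):
    """Synthetic stand-in cost model returning (runtime, code size)."""
    level = int(cfg["opt_level"][-1])
    runtime = 10.0 / level + 0.5 * cfg["unroll"] / level
    size = 100 * level + 20 * cfg["unroll"]
    return runtime, size

def pareto_front(results):
    """Keep configs not dominated in both runtime and size."""
    front = []
    for cfg, (rt, sz) in results:
        dominated = any(rt2 <= rt and sz2 <= sz and (rt2, sz2) != (rt, sz)
                        for _, (rt2, sz2) in results)
        if not dominated:
            front.append(cfg)
    return front

configs = [dict(zip(SPACE, vals))
           for vals in itertools.product(*SPACE.values())]
results = [(cfg, measure(cfg)) for cfg in configs]
front = pareto_front(results)
print(len(front))  # surviving trade-off points between speed and size
```

Under this toy cost model the front keeps one configuration per optimization level (the smallest unroll factor), exposing the runtime-versus-size trade-off that an end user, or an online advice system, would then choose from.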
