Hill N.J., New York State Department of Health | Schölkopf B., Max Planck Institute for Intelligent Systems (Tübingen)
Journal of Neural Engineering | Year: 2012

We report on the development and online testing of an electroencephalogram-based brain-computer interface (BCI) that aims to be usable by completely paralysed users - for whom visual or motor-system-based BCIs may not be suitable, and among whom reports of successful BCI use have so far been very rare. The current approach exploits covert shifts of attention to auditory stimuli in a dichotic-listening stimulus design. To compare the efficacy of event-related potentials (ERPs) and steady-state auditory evoked potentials (SSAEPs), the stimuli were designed such that they elicited both ERPs and SSAEPs simultaneously. Trial-by-trial feedback was provided online, based on subjects' modulation of N1 and P3 ERP components measured during single 5 s stimulation intervals. All 13 healthy subjects were able to use the BCI, with performance in a binary left/right choice task ranging from 75% to 96% correct across subjects (mean 85%). BCI classification was based on the contrast between stimuli in the attended stream and stimuli in the unattended stream, making use of every stimulus, rather than contrasting frequent standard and rare 'oddball' stimuli. SSAEPs were assessed offline: for all subjects, spectral components at the two exactly known modulation frequencies allowed discrimination of pre-stimulus from stimulus intervals, and of left-only stimuli from right-only stimuli when one side of the dichotic stimulus pair was muted. However, attention modulation of SSAEPs was not sufficient for single-trial BCI communication, even when the subject's attention was clearly focused well enough to allow classification of the same trials via ERPs. ERPs clearly provided a superior basis for BCI. The ERP results are a promising step towards the development of a simple-to-use, reliable yes/no communication system for users in the most severely paralysed states, as well as potential attention-monitoring and -training applications outside the context of assistive technology. © 2012 IOP Publishing Ltd.
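The core classification idea, contrasting responses to attended-stream and unattended-stream stimuli via N1 and P3 window features fed to a linear classifier, can be sketched as follows. This is a minimal illustration on random stand-in data, not the authors' pipeline: the sampling rate, channel count, epoch length and window boundaries are all assumptions.

    # Minimal sketch: single-trial left/right classification from ERP window
    # features. All data below are random placeholders, so accuracy ~ chance.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    fs, n_trials, n_channels = 250, 120, 8       # assumed recording parameters
    n_samples = int(0.8 * fs)                    # stimulus-locked epoch length

    # In practice each epoch would be an attended-vs-unattended stream
    # average computed within one 5 s stimulation interval.
    X = rng.standard_normal((n_trials, n_channels, n_samples))
    y = rng.integers(0, 2, n_trials)             # 0 = attend left, 1 = attend right

    n1 = slice(int(0.10 * fs), int(0.20 * fs))   # N1 window, ~100-200 ms
    p3 = slice(int(0.30 * fs), int(0.50 * fs))   # P3 window, ~300-500 ms
    feats = np.concatenate([X[:, :, n1].mean(2), X[:, :, p3].mean(2)], axis=1)

    clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
    print(cross_val_score(clf, feats, y, cv=5).mean())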


Dinuzzo F., Max Planck Institute for Intelligent Systems (Tübingen)
Neurocomputing | Year: 2013

Simultaneously solving multiple related learning tasks is beneficial under a variety of circumstances, but the prior knowledge necessary to properly model task relationships is rarely available in practice. In this paper, we develop a novel kernel-based multi-task learning technique that automatically reveals structural inter-task relationships. Building on the framework of output kernel learning (OKL), we introduce a method that jointly learns multiple functions and a low-rank multi-task kernel by solving a non-convex regularization problem. Optimization is carried out via a block coordinate descent strategy, where each subproblem is solved using suitable conjugate gradient (CG) type iterative methods for linear operator equations. The effectiveness of the proposed approach is demonstrated on pharmacological and collaborative filtering data. © 2013 Elsevier B.V.
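The subproblems mentioned above are linear operator equations, and their CG-based solution can be illustrated in isolation. The sketch below solves K C L + lambda C = Y matrix-free for the coefficient matrix C; the kernels, sizes, and regularization weight are stand-ins, and the outer block coordinate update of the output kernel is omitted.

    # One OKL-style subproblem: solve K C L + lam * C = Y for C with
    # matrix-free conjugate gradient. The operator is symmetric positive
    # definite whenever K and L are PSD and lam > 0, so plain CG applies.
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, cg

    rng = np.random.default_rng(0)
    n, T, lam = 50, 4, 0.1                              # examples, tasks, regularizer
    A = rng.standard_normal((n, n)); K = A @ A.T / n    # PSD input kernel (stand-in)
    B = rng.standard_normal((T, T)); L = B @ B.T / T    # PSD output (task) kernel
    Y = rng.standard_normal((n, T))                     # per-task training targets

    def matvec(c):
        C = c.reshape(n, T)
        return (K @ C @ L + lam * C).ravel()

    op = LinearOperator((n * T, n * T), matvec=matvec)
    c, info = cg(op, Y.ravel(), maxiter=500)
    C = c.reshape(n, T)
    print(info, np.linalg.norm(K @ C @ L + lam * C - Y))  # info 0, residual near zero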


Sra S., Max Planck Institute for Intelligent Systems (Tübingen)
Data Mining and Knowledge Discovery | Year: 2012

Joint sparsity offers powerful structural cues for feature selection, especially for variables that are expected to demonstrate a "grouped" behavior. Such behavior is commonly modeled via group-lasso, multitask lasso, and related methods where feature selection is effected via mixed-norms. Several mixed-norm-based sparse models have received substantial attention, and for some cases efficient algorithms are also available. Surprisingly, several constrained sparse models seem to be lacking scalable algorithms. We address this deficiency by presenting batch and online (stochastic-gradient) optimization methods, both of which rely on efficient projections onto mixed-norm balls. We illustrate our methods by applying them to the multitask lasso. We conclude by mentioning some open problems. © The Author(s) 2012.
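The central primitive is the Euclidean projection onto a mixed-norm ball. Below is a sketch for the l2,1 (group) ball underlying the multitask lasso constraint: project the vector of row norms onto the l1 ball with the standard sort-based routine, then rescale each row. The radius and test matrix are placeholders, and this illustrates the primitive only, not the paper's batch or stochastic schemes.

    # Projection onto {W : sum_i ||row_i(W)||_2 <= radius}: reduce to an
    # l1-ball projection of the row norms, then rescale the rows.
    import numpy as np

    def project_l1_ball(v, radius):
        # Sort-based l1 projection; v is assumed nonnegative (row norms are).
        if v.sum() <= radius:
            return v.copy()
        u = np.sort(v)[::-1]
        css = np.cumsum(u)
        rho = np.nonzero(u * np.arange(1, v.size + 1) > css - radius)[0][-1]
        theta = (css[rho] - radius) / (rho + 1.0)
        return np.maximum(v - theta, 0.0)

    def project_l21_ball(W, radius):
        norms = np.linalg.norm(W, axis=1)
        shrunk = project_l1_ball(norms, radius)
        scale = np.where(norms > 0, shrunk / np.maximum(norms, 1e-12), 0.0)
        return W * scale[:, None]

    W = np.random.default_rng(0).standard_normal((20, 5))  # features x tasks
    P = project_l21_ball(W, radius=3.0)
    print(np.linalg.norm(P, axis=1).sum())                 # <= 3.0

Inside a batch or stochastic gradient loop, each iterate is simply mapped back onto the ball by this projection, which is what makes the constrained formulation scalable.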


Sun D., Harvard University | Roth S., TU Darmstadt | Black M.J., Max Planck Institute for Intelligent Systems (Tübingen)
International Journal of Computer Vision | Year: 2014

The accuracy of optical flow estimation algorithms has been improving steadily as evidenced by results on the Middlebury optical flow benchmark. The typical formulation, however, has changed little since the work of Horn and Schunck. We attempt to uncover what has made recent advances possible through a thorough analysis of how the objective function, the optimization method, and modern implementation practices influence accuracy. We discover that "classical" flow formulations perform surprisingly well when combined with modern optimization and implementation techniques. One key implementation detail is the median filtering of intermediate flow fields during optimization. While this improves the robustness of classical methods it actually leads to higher energy solutions, meaning that these methods are not optimizing the original objective function. To understand the principles behind this phenomenon, we derive a new objective function that formalizes the median filtering heuristic. This objective function includes a non-local smoothness term that robustly integrates flow estimates over large spatial neighborhoods. By modifying this new term to include information about flow and image boundaries we develop a method that can better preserve motion details. To take advantage of the trend towards video in wide-screen format, we further introduce an asymmetric pyramid downsampling scheme that enables the estimation of longer range horizontal motions. The methods are evaluated on the Middlebury, MPI Sintel, and KITTI datasets using the same parameter settings. © 2013 Springer Science+Business Media New York.
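The median-filtering observation is easy to reproduce on a toy "classical" solver. The sketch below runs plain Horn-Schunck iterations and median-filters the intermediate flow fields; it deliberately omits the pyramid, warping, robust penalties, and the paper's non-local term, and the smoothness weight and filter size are arbitrary choices.

    # Toy Horn-Schunck flow with the median-filtering heuristic applied to
    # intermediate flow fields. Illustrative only.
    import numpy as np
    from scipy.ndimage import convolve, median_filter

    def horn_schunck(I1, I2, alpha=1.0, n_iters=300, medfilt_every=10):
        Iy, Ix = np.gradient((I1 + I2) / 2.0)        # spatial derivatives
        It = I2 - I1                                 # temporal derivative
        u, v = np.zeros_like(I1), np.zeros_like(I1)
        avg = np.array([[1, 2, 1], [2, 0, 2], [1, 2, 1]]) / 12.0  # neighbour mean
        for it in range(n_iters):
            u_bar, v_bar = convolve(u, avg), convolve(v, avg)
            num = Ix * u_bar + Iy * v_bar + It
            den = alpha ** 2 + Ix ** 2 + Iy ** 2
            u, v = u_bar - Ix * num / den, v_bar - Iy * num / den
            if (it + 1) % medfilt_every == 0:        # the heuristic under study:
                u = median_filter(u, size=5)         # denoises the flow field but
                v = median_filter(v, size=5)         # raises the original energy
        return u, v

    img = np.zeros((64, 64)); img[24:40, 24:40] = 1.0
    img = convolve(img, np.full((3, 3), 1.0 / 9))    # slight blur aids linearization
    u, v = horn_schunck(img, np.roll(img, 1, axis=1))
    print(u[28:36, 28:36].mean())                    # positive: rightward motion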


Schultz T., Max Planck Institute for Intelligent Systems (Tübingen)
Medical Image Computing and Computer-Assisted Intervention (MICCAI) | Year: 2012

Having to determine an adequate number of fiber directions is a fundamental limitation of multi-compartment models in diffusion MRI. This paper proposes a novel strategy to approach this problem, based on simulating data that closely follows the characteristics of the measured data. This provides the ground truth required to determine the number of directions that optimizes a formal measure of accuracy, while allowing us to transfer the result to real data by support vector regression. The method is shown to result in plausible and reproducible decisions on three repeated scans of the same subject. When combined with the ball-and-stick model, it produces directional estimates comparable to constrained spherical deconvolution, but with significantly smaller variance between re-scans, and at a reduced computational cost.
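The transfer step, regressing the accuracy-optimal number of directions from voxel features with support vector regression, can be sketched as follows. The features and targets below are synthetic stand-ins; in the paper the targets come from simulations matched to the characteristics of the measured data.

    # Sketch of the SVR transfer step only: learn a map from voxel-wise signal
    # features to the optimal fiber-direction count found on simulated data.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    n_voxels, n_features = 2000, 15                       # placeholder dimensions
    X_sim = rng.standard_normal((n_voxels, n_features))   # simulated-voxel features
    n_opt = rng.integers(1, 4, n_voxels).astype(float)    # optimal count (ground truth)

    reg = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
    reg.fit(X_sim, n_opt)

    X_real = rng.standard_normal((10, n_features))        # measured-voxel features
    n_pred = np.clip(np.rint(reg.predict(X_real)), 1, 3)  # discrete direction count
    print(n_pred)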
