Skolkovo Institute of Science and Technology (Skoltech)

Moscow, Russia

Ustinova E.,Skolkovo Institute of Science and Technology Skoltech | Lempitsky V.,Skolkovo Institute of Science and Technology Skoltech
Advances in Neural Information Processing Systems | Year: 2016

We suggest a loss for learning deep embeddings. The new loss does not introduce parameters that need to be tuned and results in very good embeddings across a range of datasets and problems. The loss is computed by estimating two distributions of similarities, for positive (matching) and negative (non-matching) sample pairs, and then computing the probability that a positive pair has a lower similarity score than a negative pair based on the estimated similarity distributions. We show that these operations can be performed in a simple and piecewise-differentiable manner using 1D histograms with soft assignment operations. This makes the proposed loss suitable for learning deep embeddings using stochastic optimization. In the experiments, the new loss performs favourably compared to recently proposed alternatives. © 2016 NIPS Foundation - All Rights Reserved.
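The core computation described above can be illustrated with a short NumPy sketch (the helper names and the triangular soft-assignment kernel are simplifying assumptions, not the authors' exact scheme): estimate soft 1D histograms of positive-pair and negative-pair similarities, then accumulate the probability that a positive pair scores below a negative one.

```python
import numpy as np

def soft_histogram(sims, bins):
    """Soft-assign similarity values to 1D histogram bins via triangular kernels."""
    step = bins[1] - bins[0]
    # each similarity contributes to its two nearest bins with linear falloff
    w = np.clip(1.0 - np.abs(sims[:, None] - bins[None, :]) / step, 0.0, 1.0)
    h = w.sum(axis=0)
    return h / h.sum()  # normalize to a probability distribution

def histogram_loss(pos_sims, neg_sims, n_bins=51):
    """Estimated probability that a positive pair is LESS similar than a negative pair."""
    bins = np.linspace(-1.0, 1.0, n_bins)   # similarity range, e.g. cosine similarity
    h_pos = soft_histogram(pos_sims, bins)
    h_neg = soft_histogram(neg_sims, bins)
    cdf_pos = np.cumsum(h_pos)              # P(positive similarity <= bin)
    # integrate: P(neg lands in bin) * P(pos is below that bin)
    return float(np.sum(h_neg * cdf_pos))
```

Because every step is a sum, clip, or cumulative sum, the loss is piecewise-differentiable in the similarities, which is what makes it usable with stochastic gradient training.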


Berry M.D.,Memorial University of Newfoundland | Gainetdinov R.R.,Saint Petersburg State University | Gainetdinov R.R.,Skolkovo Institute of Science and Technology Skoltech | Hoener M.C.,Hoffmann-La Roche | Shahid M.,Orion Corporation
Pharmacology and Therapeutics | Year: 2017

The discovery in 2001 of a G protein-coupled receptor family, subsequently termed trace amine-associated receptors (TAAR), triggered a resurgence of interest in so-called trace amines. Initial optimism quickly faded, however, as the TAAR family presented a series of challenges preventing the use of standard medicinal chemistry and pharmacology technologies. Consequently, the development of basic tools for probing TAAR and translating findings from model systems to humans has been problematic. Despite these challenges, the last five years have seen considerable advances, in particular with respect to TAAR1, which appears to function as an endogenous rheostat, maintaining central neurotransmission within defined physiological limits, in part through receptor heterodimerization yielding biased signaling outputs. Regulation of the dopaminergic system is particularly well understood, and clinical testing of TAAR1-directed ligands for schizophrenia and psychiatric disorders has begun. In addition, pre-clinical animal models have identified TAAR1 as a novel target for drug addiction and metabolic disorders. Growing evidence also suggests a role for TAARs in regulating immune function. This review critically discusses the current state of TAAR research, highlighting recent developments and focussing on human TAARs, their functions, and clinical implications. Current gaps in knowledge are identified, along with the research reagents and translational tools still required for continued advancement of the field. Through this, a picture emerges of an exciting field on the cusp of significant developments, with the potential to identify new therapeutic leads for some of the major unmet medical needs in the areas of neuropsychiatry and metabolic disorders. © 2017 The Authors.


Junker J.P.,Netherlands Cancer Institute | Noel E.S.,Netherlands Cancer Institute | Guryev V.,University of Groningen | Peterson K.A.,University of Southern California | And 8 more authors.
Cell | Year: 2014

Advancing our understanding of embryonic development is heavily dependent on identification of novel pathways or regulators. Although genome-wide techniques such as RNA sequencing are ideally suited for discovering novel candidate genes, they are unable to yield spatially resolved information in embryos or tissues. Microscopy-based approaches, using in situ hybridization, for example, can provide spatial information about gene expression, but are limited to analyzing one or a few genes at a time. Here, we present a method where we combine traditional histological techniques with low-input RNA sequencing and mathematical image reconstruction to generate a high-resolution genome-wide 3D atlas of gene expression in the zebrafish embryo at three developmental stages. Importantly, our technique enables searching for genes that are expressed in specific spatial patterns without manual image annotation. We envision broad applicability of RNA tomography as an accurate and sensitive approach for spatially resolved transcriptomics in whole embryos and dissected organs. © 2014 Elsevier Ltd. All rights reserved.


Sarkar S.,Whitehead Institute For Biomedical Research | Maetzel D.,Whitehead Institute For Biomedical Research | Korolchuk V.I.,Vitality | Jaenisch R.,Whitehead Institute For Biomedical Research | Jaenisch R.,Skolkovo Institute of Science and Technology Skoltech
Autophagy | Year: 2014

Autophagy is essential for cellular homeostasis, and its dysfunction in human diseases has been implicated in the accumulation of misfolded protein and in cellular toxicity. We have recently shown impairment of autophagic flux in the lipid storage disorder Niemann-Pick type C1 (NPC1) disease, which is associated with abnormal cholesterol sequestration and in which maturation of autophagosomes is impaired due to defective amphisome formation caused by failure of the SNARE machinery. Abrogation of autophagy also causes cholesterol accumulation, suggesting that defective autophagic flux in NPC1 disease may act as a primary causative factor not only by imparting its deleterious effects, but also by increasing cholesterol load. However, cholesterol depletion treatment with HP-β-cyclodextrin impedes autophagy, whereas pharmacologically stimulating autophagy restores its function independent of amphisome formation. Of potential therapeutic relevance is that a low dose of HP-β-cyclodextrin that does not perturb autophagy, coupled with an autophagy inducer, may rescue both the cholesterol and autophagy defects in NPC1 disease. © 2014 Landes Bioscience.


Babenko A.,National Research University Higher School of Economics | Lempitsky V.,Skolkovo Institute of Science and Technology Skoltech
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition | Year: 2015

We propose a new vector encoding scheme (tree quantization) that obtains lossy compact codes for high-dimensional vectors via tree-based dynamic programming. Similarly to several previous schemes such as product quantization, these codes correspond to codeword numbers within multiple codebooks. We propose an integer programming-based optimization that jointly recovers the coding tree structure and the codebooks by minimizing the compression error on a training dataset. In the experiments with diverse visual descriptors (SIFT, neural codes, Fisher vectors), tree quantization is shown to combine fast encoding and state-of-the-art accuracy in terms of the compression error, the retrieval performance, and the image classification error. © 2015 IEEE.
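Tree quantization belongs to the same multi-codebook family as the product quantization baseline the abstract compares against. As a reference point for that family, here is a minimal product-quantization encoder/decoder sketch (an illustrative baseline, not the paper's tree-structured method): the vector is split into M subvectors, and each subvector is quantized independently against its own codebook.

```python
import numpy as np

def pq_encode(x, codebooks):
    """Product quantization: split x into len(codebooks) subvectors and
    quantize each with its own codebook of shape (K, D/M)."""
    subs = np.split(x, len(codebooks))
    return [int(np.argmin(np.linalg.norm(C - s, axis=1)))
            for C, s in zip(codebooks, subs)]

def pq_decode(codes, codebooks):
    """Reconstruct the vector by concatenating the selected codewords."""
    return np.concatenate([C[k] for C, k in zip(codebooks, codes)])
```

Tree quantization generalizes this picture by letting codebooks share dimensions according to a learned coding tree, jointly optimized with the codebooks to reduce the compression error.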


Chai Y.,University of Oxford | Lempitsky V.,Skolkovo Institute of Science and Technology Skoltech | Zisserman A.,University of Oxford
Proceedings of the IEEE International Conference on Computer Vision | Year: 2013

We propose a new method for the task of fine-grained visual categorization. The method builds a model of the base-level category that can be fitted to images, producing high-quality foreground segmentation and mid-level part localizations. The model can be learnt from the typical datasets available for fine-grained categorization, where the only annotation provided is a loose bounding box around the instance (e.g. bird) in each image. Both segmentation and part localizations are then used to encode the image content into a highly-discriminative visual signature. The model is symbiotic in that part discovery/localization is helped by segmentation and, conversely, the segmentation is helped by the detection (e.g. part layout). Our model builds on top of the part-based object category detector of Felzenszwalb et al., and also on the powerful GrabCut segmentation algorithm of Rother et al., and adds a simple spatial saliency coupling between them. In our evaluation, the model improves the categorization accuracy over the state-of-the-art. It also improves over what can be achieved with an analogous system that runs segmentation and part-localization independently. © 2013 IEEE.


Lebedev V.,Yandex | Lempitsky V.,Skolkovo Institute of Science and Technology Skoltech
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition | Year: 2016

We revisit the idea of brain damage, i.e. the pruning of the coefficients of a neural network, and suggest how brain damage can be modified and used to speed up convolutional layers in ConvNets. The approach uses the fact that many efficient implementations reduce generalized convolutions to matrix multiplications. The suggested brain damage process prunes the convolutional kernel tensor in a group-wise fashion. After such pruning, convolutions can be reduced to multiplications of thinned dense matrices, which leads to speedups. We investigate different ways to add group-wise pruning to the learning process, and show that severalfold speedups of convolutional layers can be attained using group-sparsity regularizers. Our approach can adjust the shapes of the receptive fields in the convolutional layers, and even prune excessive feature maps from ConvNets, all in a data-driven way.
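A minimal sketch of the group-sparsity idea (illustrative, not the authors' implementation; the grouping shown here is one plausible choice): a group-lasso penalty over slices of the kernel tensor drives whole groups of weights toward zero during training, after which groups whose norm stays small can be pruned outright, thinning the matrices the convolution reduces to.

```python
import numpy as np

def group_lasso_penalty(W):
    """Group-lasso regularizer: sum of L2 norms over groups of a conv kernel
    tensor W of shape (out_ch, in_ch, kh, kw). Here a group is one
    (in_ch, kh, kw) position shared across all output channels."""
    group_norms = np.sqrt((W ** 2).sum(axis=0))   # shape (in_ch, kh, kw)
    return float(group_norms.sum())

def prune_groups(W, threshold):
    """Zero out entire groups whose L2 norm falls below the threshold."""
    group_norms = np.sqrt((W ** 2).sum(axis=0))
    mask = (group_norms >= threshold).astype(W.dtype)  # 1 = keep, 0 = prune
    return W * mask[None, ...], mask
```

In training, `group_lasso_penalty` would be added to the task loss with a regularization weight; pruning a group removes the same row from every per-output-channel slice of the unrolled weight matrix, which is what yields dense (rather than scattered-sparse) speedups.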


Kononenko D.,Skolkovo Institute of Science and Technology Skoltech | Lempitsky V.,Skolkovo Institute of Science and Technology Skoltech
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition | Year: 2015

We revisit the well-known problem of gaze correction and present a solution based on supervised machine learning. At training time, our system observes pairs of images, where each pair contains the face of the same person with a fixed angular difference in gaze direction. It then learns to synthesize the second image of a pair from the first one. After learning, the system is able to redirect the gaze of a previously unseen person by the same angular difference as in the training set. Unlike many previous solutions to the gaze problem in videoconferencing, ours is purely monocular, i.e. it does not require any hardware apart from a laptop's built-in web camera. Being based on efficient machine learning predictors such as decision forests, the system is fast (runs in real-time on a single core of a modern laptop). In the paper, we demonstrate results on a variety of videoconferencing frames and evaluate the method quantitatively on a hold-out set of registered images. The supplementary video shows example sessions of our system at work. © 2015 IEEE.


Ganin Y.,Skolkovo Institute of Science and Technology Skoltech | Lempitsky V.,Skolkovo Institute of Science and Technology Skoltech
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2015

We propose a new architecture for difficult image processing operations, such as natural edge detection or thin object segmentation. The architecture is based on a simple combination of convolutional neural networks with nearest neighbor search. We focus our attention on situations where the desired image transformation is too hard for a neural network to learn explicitly. We show that in such situations the use of nearest neighbor search on top of the network output improves the results considerably and accounts for the underfitting effect during neural network training. The approach is validated on three challenging benchmarks, where the performance of the proposed architecture matches or exceeds the state-of-the-art. © Springer International Publishing Switzerland 2015.
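The post-processing step can be sketched as follows (a hypothetical setup, not the authors' code): each patch of the network's output is snapped to its nearest neighbor in a dictionary of valid target patches, pulling underfitted predictions back onto the manifold of plausible outputs.

```python
import numpy as np

def nn_postprocess(pred_patches, dictionary):
    """Replace each predicted patch with its nearest neighbor from a
    dictionary of clean target patches.
    pred_patches: (N, P) flattened network-output patches.
    dictionary:   (M, P) flattened valid target patches."""
    # squared Euclidean distances between every prediction and every dictionary patch
    d2 = ((pred_patches[:, None, :] - dictionary[None, :, :]) ** 2).sum(axis=-1)
    nearest = d2.argmin(axis=1)
    return dictionary[nearest]
```

In practice the dictionary would be built from ground-truth patches of the training set, and the brute-force distance computation replaced by an approximate nearest-neighbor index for speed.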


Babenko A.,Moscow Institute of Physics and Technology | Lempitsky V.,Skolkovo Institute of Science and Technology Skoltech
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition | Year: 2014

We introduce a new compression scheme for high-dimensional vectors that approximates the vectors using sums of M codewords coming from M different codebooks. We show that the proposed scheme permits efficient distance and scalar product computations between compressed and uncompressed vectors. We further suggest vector encoding and codebook learning algorithms that can minimize the coding error within the proposed scheme. In the experiments, we demonstrate that the proposed compression can be used instead of or together with product quantization. Compared to product quantization and its optimized versions, the proposed compression approach leads to lower coding approximation errors, higher accuracy of approximate nearest neighbor search in the datasets of visual descriptors, and lower image classification error, whenever the classifiers are learned on or applied to compressed vectors. © 2014 IEEE.
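A minimal sketch of the encoding idea (greedy residual-style assignment as a simple stand-in for the paper's joint encoding optimization; names are illustrative): each vector is approximated by the sum of one codeword taken from each of the M codebooks.

```python
import numpy as np

def encode_additive(x, codebooks):
    """Greedily pick one codeword per codebook so that their SUM approximates x.
    Each codebook is a (K, D) array of full-dimensional codewords."""
    residual = x.copy()
    codes = []
    for C in codebooks:
        dists = np.linalg.norm(residual[None, :] - C, axis=1)
        k = int(np.argmin(dists))         # codeword closest to the current residual
        codes.append(k)
        residual = residual - C[k]
    return codes

def decode_additive(codes, codebooks):
    """Reconstruct the vector as the sum of the selected codewords."""
    return sum(C[k] for C, k in zip(codebooks, codes))
```

Because codewords are full-dimensional and summed (rather than concatenated as in product quantization), scalar products with an uncompressed query decompose into M codeword lookups, which is what enables the efficient distance computations the abstract describes.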
