Skolkovo Institute of Science and Technology Skoltech

Moscow, Russia


Junker J.P.,Netherlands Cancer Institute | Noel E.S.,Netherlands Cancer Institute | Guryev V.,University of Groningen | Peterson K.A.,University of Southern California | And 8 more authors.
Cell | Year: 2014

Advancing our understanding of embryonic development is heavily dependent on identification of novel pathways or regulators. Although genome-wide techniques such as RNA sequencing are ideally suited for discovering novel candidate genes, they are unable to yield spatially resolved information in embryos or tissues. Microscopy-based approaches, using in situ hybridization, for example, can provide spatial information about gene expression, but are limited to analyzing one or a few genes at a time. Here, we present a method where we combine traditional histological techniques with low-input RNA sequencing and mathematical image reconstruction to generate a high-resolution genome-wide 3D atlas of gene expression in the zebrafish embryo at three developmental stages. Importantly, our technique enables searching for genes that are expressed in specific spatial patterns without manual image annotation. We envision broad applicability of RNA tomography as an accurate and sensitive approach for spatially resolved transcriptomics in whole embryos and dissected organs. © 2014 Elsevier Ltd. All rights reserved.


Sarkar S.,Whitehead Institute For Biomedical Research | Maetzel D.,Whitehead Institute For Biomedical Research | Korolchuk V.I.,Vitality | Jaenisch R.,Whitehead Institute For Biomedical Research | Jaenisch R.,Skolkovo Institute of Science and Technology Skoltech
Autophagy | Year: 2014

Autophagy is essential for cellular homeostasis and its dysfunction in human diseases has been implicated in the accumulation of misfolded protein and in cellular toxicity. We have recently shown impairment in autophagic flux in the lipid storage disorder, Niemann-Pick type C1 (NPC1) disease associated with abnormal cholesterol sequestration, where maturation of autophagosomes is impaired due to defective amphisome formation caused by failure in SNARE machinery. Abrogation of autophagy also causes cholesterol accumulation, suggesting that defective autophagic flux in NPC1 disease may act as a primary causative factor not only by imparting its deleterious effects, but also by increasing cholesterol load. However, cholesterol depletion treatment with HP-β-cyclodextrin impedes autophagy, whereas pharmacologically stimulating autophagy restores its function independent of amphisome formation. Of potential therapeutic relevance is that a low dose of HP-β-cyclodextrin that does not perturb autophagy, coupled with an autophagy inducer, may rescue both the cholesterol and autophagy defects in NPC1 disease. © 2014 Landes Bioscience.


Babenko A.,National Research University Higher School of Economics | Lempitsky V.,Skolkovo Institute of Science and Technology Skoltech
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition | Year: 2015

We propose a new vector encoding scheme (tree quantization) that obtains lossy compact codes for high-dimensional vectors via tree-based dynamic programming. Similarly to several previous schemes such as product quantization, these codes correspond to codeword numbers within multiple codebooks. We propose an integer programming-based optimization that jointly recovers the coding tree structure and the codebooks by minimizing the compression error on a training dataset. In the experiments with diverse visual descriptors (SIFT, neural codes, Fisher vectors), tree quantization is shown to combine fast encoding and state-of-the-art accuracy in terms of the compression error, the retrieval performance, and the image classification error. © 2015 IEEE.
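As a concrete illustration of codes that correspond to codeword numbers within multiple codebooks, here is a minimal product-quantization-style sketch (not the paper's tree quantization: the assignment of dimensions to codebooks is fixed here rather than recovered by the integer program, and all names are our own):

```python
import numpy as np

def pq_encode(x, codebooks):
    """Encode vector x as one codeword index per codebook.

    Each codebook covers a disjoint contiguous slice of the dimensions,
    as in product quantization; tree quantization generalizes this by
    jointly learning which dimensions each codebook covers.
    """
    codes, start = [], 0
    for cb in codebooks:                 # cb: (K, d_m) array of codewords
        d = cb.shape[1]
        seg = x[start:start + d]
        # nearest codeword for this dimension slice
        codes.append(int(np.argmin(((cb - seg) ** 2).sum(axis=1))))
        start += d
    return codes

def pq_decode(codes, codebooks):
    """Reconstruct the vector by concatenating the chosen codewords."""
    return np.concatenate([cb[i] for cb, i in zip(codebooks, codes)])

rng = np.random.default_rng(0)
codebooks = [rng.normal(size=(16, 4)) for _ in range(2)]  # M=2 codebooks, K=16
x = rng.normal(size=8)
codes = pq_encode(x, codebooks)          # compact code: 2 small integers
x_hat = pq_decode(codes, codebooks)      # lossy reconstruction of x
```

The compact code stores only the integer indices (here 2 × 4 bits instead of 8 floats); the learning problem the paper addresses is choosing both the codebooks and the dimension grouping to minimize the reconstruction error of such codes.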


Arteta C.,University of Oxford | Lempitsky V.,Skolkovo Institute of Science and Technology Skoltech | Noble J.A.,University of Oxford | Zisserman A.,University of Oxford
Medical Image Analysis | Year: 2016

In many microscopy applications the images may contain both regions of low and high cell densities corresponding to different tissues or colonies at different stages of growth. This poses a challenge to most previously developed automated cell detection and counting methods, which are designed to handle either the low-density scenario (through cell detection) or the high-density scenario (through density estimation or texture analysis). The objective of this work is to detect all the instances of an object of interest in microscopy images. The instances may be partially overlapping and clustered. To this end we introduce a tree-structured discrete graphical model that is used to select and label a set of non-overlapping regions in the image by a global optimization of a classification score. Each region is labeled with the number of instances it contains; for example, regions can be selected that contain two or three object instances, by defining separate classes for tuples of objects in the detection process. We show that this formulation can be learned within the structured output SVM framework and that inference in such a model can be accomplished using dynamic programming on a tree-structured region graph. Furthermore, the learning only requires weak annotations: a dot on each instance. The candidate regions for the selection are obtained as extremal regions of a surface computed from the microscopy image, and we show that the performance of the model can be improved by considering a proxy problem for learning the surface that allows better selection of the extremal regions. Furthermore, we consider a number of variations for the loss function used in structured output learning. The model is applied and evaluated over six quite disparate data sets of images covering fluorescence microscopy, weak-fluorescence molecular images, phase contrast microscopy, and histopathology images, and is shown to exceed the state of the art in performance. © 2015 Elsevier B.V.


Chai Y.,University of Oxford | Lempitsky V.,Skolkovo Institute of Science and Technology Skoltech | Zisserman A.,University of Oxford
Proceedings of the IEEE International Conference on Computer Vision | Year: 2013

We propose a new method for the task of fine-grained visual categorization. The method builds a model of the base-level category that can be fitted to images, producing high-quality foreground segmentation and mid-level part localizations. The model can be learnt from the typical datasets available for fine-grained categorization, where the only annotation provided is a loose bounding box around the instance (e.g. bird) in each image. Both segmentation and part localizations are then used to encode the image content into a highly discriminative visual signature. The model is symbiotic in that part discovery/localization is helped by segmentation and, conversely, the segmentation is helped by the detection (e.g. part layout). Our model builds on top of the part-based object category detector of Felzenszwalb et al., and also on the powerful GrabCut segmentation algorithm of Rother et al., and adds a simple spatial saliency coupling between them. In our evaluation, the model improves the categorization accuracy over the state-of-the-art. It also improves over what can be achieved with an analogous system that runs segmentation and part-localization independently. © 2013 IEEE.


Lebedev V.,Yandex | Lempitsky V.,Skolkovo Institute of Science and Technology Skoltech
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition | Year: 2016

We revisit the idea of brain damage, i.e. the pruning of the coefficients of a neural network, and suggest how brain damage can be modified and used to speed up convolutional layers in ConvNets. The approach uses the fact that many efficient implementations reduce generalized convolutions to matrix multiplications. The suggested brain damage process prunes the convolutional kernel tensor in a group-wise fashion. After such pruning, convolutions can be reduced to multiplications of thinned dense matrices, which leads to a speedup. We investigate different ways to add group-wise pruning to the learning process, and show that severalfold speedups of convolutional layers can be attained using group-sparsity regularizers. Our approach can adjust the shapes of the receptive fields in the convolutional layers, and even prune excessive feature maps from ConvNets, all in a data-driven way.
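The group-sparsity idea can be sketched with a group-lasso penalty over a convolutional kernel tensor. This is a minimal NumPy illustration under one plausible grouping convention (one group per input-channel/spatial position, shared across all output channels), not the paper's implementation; all names are our own:

```python
import numpy as np

def group_sparsity_penalty(kernel, lam=1e-3):
    """Group-lasso penalty over a conv kernel of shape (out_ch, in_ch, kh, kw).

    Each group collects the weights at one (in_ch, kh, kw) position across
    all output channels. Summing the L2 norms of the groups (rather than
    individual |w|) drives entire groups to exactly zero during training,
    so whole rows/columns of the im2col matrix multiplication can be skipped.
    """
    out_ch = kernel.shape[0]
    groups = kernel.reshape(out_ch, -1)          # columns = groups
    norms = np.sqrt((groups ** 2).sum(axis=0))   # one L2 norm per group
    return lam * norms.sum(), norms

def prune_groups(kernel, threshold):
    """Zero out all groups whose norm falls below the threshold."""
    out_ch = kernel.shape[0]
    groups = kernel.reshape(out_ch, -1)          # view into kernel
    norms = np.sqrt((groups ** 2).sum(axis=0))
    groups[:, norms < threshold] = 0.0           # prune weak groups in place
    return kernel, int((norms < threshold).sum())

kernel = np.ones((2, 3, 1, 1))                   # tiny 1x1 conv kernel
penalty, norms = group_sparsity_penalty(kernel, lam=1.0)
pruned, n_dropped = prune_groups(kernel.copy(), threshold=2.0)
```

After pruning, the dense-but-thinned kernel matrix is what delivers the speedup: the matrix multiplication simply has fewer columns.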


Kononenko D.,Skolkovo Institute of Science and Technology Skoltech | Lempitsky V.,Skolkovo Institute of Science and Technology Skoltech
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition | Year: 2015

We revisit the well-known problem of gaze correction and present a solution based on supervised machine learning. At training time, our system observes pairs of images, where each pair contains the face of the same person with a fixed angular difference in gaze direction. It then learns to synthesize the second image of a pair from the first one. After learning, the system is able to redirect the gaze of a previously unseen person by the same angular difference as in the training set. Unlike many previous solutions to the gaze problem in videoconferencing, ours is purely monocular, i.e. it does not require any hardware apart from the built-in web camera of a laptop. Being based on efficient machine learning predictors such as decision forests, the system is fast (it runs in real time on a single core of a modern laptop). In the paper, we demonstrate results on a variety of videoconferencing frames and evaluate the method quantitatively on a hold-out set of registered images. The supplementary video shows example sessions of our system at work. © 2015 IEEE.


Ganin Y.,Skolkovo Institute of Science and Technology Skoltech | Lempitsky V.,Skolkovo Institute of Science and Technology Skoltech
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2015

We propose a new architecture for difficult image processing operations, such as natural edge detection or thin object segmentation. The architecture is based on a simple combination of convolutional neural networks with nearest-neighbor search. We focus our attention on situations where the desired image transformation is too hard for a neural network to learn explicitly. We show that in such situations the use of nearest-neighbor search on top of the network output improves the results considerably and accounts for the underfitting effect during neural network training. The approach is validated on three challenging benchmarks, where the performance of the proposed architecture matches or exceeds the state-of-the-art. © Springer International Publishing Switzerland 2015.
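The role of the nearest-neighbor search on top of the network output can be illustrated with a toy sketch: noisy predicted output patches are snapped to their closest entries in a dictionary of clean ground-truth patches. The function name and the tiny dictionary here are hypothetical, chosen only to show the mechanism:

```python
import numpy as np

def nn_postprocess(pred, dictionary):
    """Replace each predicted patch (flattened row) with its nearest
    neighbor among clean dictionary patches. This corrects outputs that
    an underfit network produces only approximately."""
    out = np.empty_like(pred)
    for i, p in enumerate(pred):
        idx = int(np.argmin(((dictionary - p) ** 2).sum(axis=1)))
        out[i] = dictionary[idx]
    return out

dictionary = np.array([[0.0, 0.0], [1.0, 1.0]])  # two "clean" patches
pred = np.array([[0.1, -0.1], [0.9, 1.2]])       # noisy network outputs
snapped = nn_postprocess(pred, dictionary)
```

In practice the dictionary would hold many ground-truth annotation patches and the search would be accelerated (e.g. with an approximate nearest-neighbor index), but the principle is the same.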


Babenko A.,Moscow Institute of Physics and Technology | Lempitsky V.,Skolkovo Institute of Science and Technology Skoltech
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition | Year: 2014

We introduce a new compression scheme for high-dimensional vectors that approximates the vectors using sums of M codewords coming from M different codebooks. We show that the proposed scheme permits efficient distance and scalar product computations between compressed and uncompressed vectors. We further suggest vector encoding and codebook learning algorithms that can minimize the coding error within the proposed scheme. In the experiments, we demonstrate that the proposed compression can be used instead of or together with product quantization. Compared to product quantization and its optimized versions, the proposed compression approach leads to lower coding approximation errors, higher accuracy of approximate nearest neighbor search in the datasets of visual descriptors, and lower image classification error, whenever the classifiers are learned on or applied to compressed vectors. © 2014 IEEE.
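The sums-of-codewords idea can be sketched with a greedy encoder; the paper uses a more powerful (beam-search-style) encoding and jointly learned codebooks, so this simplified version and its names are our own:

```python
import numpy as np

def aq_encode_greedy(x, codebooks):
    """Pick one codeword per codebook so that their SUM approximates x.

    Unlike product quantization, every codebook lives in the full
    D-dimensional space. Greedy residual fitting is a simplification
    of the paper's encoding procedure.
    """
    residual = x.copy()
    codes = []
    for cb in codebooks:                 # cb: (K, D)
        idx = int(np.argmin(((cb - residual) ** 2).sum(axis=1)))
        codes.append(idx)
        residual = residual - cb[idx]    # fit the remaining error next
    return codes

def aq_decode(codes, codebooks):
    """Reconstruction is the sum of the selected codewords."""
    return sum(cb[i] for cb, i in zip(codebooks, codes))

rng = np.random.default_rng(1)
codebooks = [rng.normal(size=(32, 8)) for _ in range(4)]  # M=4, K=32, D=8
x = rng.normal(size=8)
codes = aq_encode_greedy(x, codebooks)
x_hat = aq_decode(codes, codebooks)
```

The efficient scalar products mentioned in the abstract follow from linearity: for a query q, q·x_hat is the sum of M precomputable table lookups q·cb[codes[m]], one per codebook.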


Ganin Y.,Skolkovo Institute of Science and Technology Skoltech | Lempitsky V.,Skolkovo Institute of Science and Technology Skoltech
32nd International Conference on Machine Learning, ICML 2015 | Year: 2015

Top-performing deep architectures are trained on massive amounts of labeled data. In the absence of labeled data for a certain task, domain adaptation often provides an attractive option given that labeled data of similar nature but from a different domain (e.g. synthetic images) are available. Here, we propose a new approach to domain adaptation in deep architectures that can be trained on a large amount of labeled data from the source domain and a large amount of unlabeled data from the target domain (no labeled target-domain data is necessary). As training progresses, the approach promotes the emergence of "deep" features that are (i) discriminative for the main learning task on the source domain and (ii) invariant with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with a few standard layers and a simple new gradient reversal layer. The resulting augmented architecture can be trained using standard back-propagation. Overall, the approach can be implemented with little effort using any of the deep-learning packages. The method performs very well in a series of image classification experiments, achieving an adaptation effect in the presence of large domain shifts and outperforming the previous state-of-the-art on the Office datasets. Copyright © 2015 by the author(s).
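The gradient reversal layer itself is simple to sketch: it is the identity in the forward pass and flips (and scales) gradients in the backward pass, so the feature extractor is pushed to fool the domain classifier while the domain classifier is trained normally. A minimal framework-free sketch with hypothetical names:

```python
import numpy as np

class GradientReversal:
    """Identity on activations; multiplies incoming gradients by -lam
    during back-propagation. Placed between the shared feature extractor
    and the domain classifier, it makes gradient descent simultaneously
    minimize the domain loss w.r.t. the classifier and maximize it
    w.r.t. the features, encouraging domain-invariant features."""

    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x                          # no-op in the forward pass

    def backward(self, grad_output):
        return -self.lam * grad_output    # flip and scale the gradient

grl = GradientReversal(lam=0.5)
x = np.array([1.0, -2.0, 3.0])
y = grl.forward(x)                        # identical to x
g = grl.backward(np.ones_like(x))         # reversed gradient
```

In an autograd framework the same behaviour would be implemented as a custom operation whose backward pass negates the gradient; everything else trains with standard back-propagation, as the abstract notes.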
