Houston, TX, United States


Griebel T., Functional Bioinformatics Group | Zacher B., Functional Bioinformatics Group | Zacher B., Computational Biology and Regulatory Networks Group | Ribeca P., Algorithm | And 3 more authors.
Nucleic Acids Research | Year: 2012

High-throughput sequencing of cDNA libraries constructed from cellular RNA complements (RNA-Seq) naturally provides a digital quantitative measurement for every expressed RNA molecule. The nature, impact and mutual interference of biases in different experimental setups are, however, still poorly understood - mostly due to the lack of data from intermediate protocol steps. We analysed multiple RNA-Seq experiments involving different sample preparation protocols and sequencing platforms: we broke them down into their common - and currently indispensable - technical components (reverse transcription, fragmentation, adapter ligation, PCR amplification, gel segregation and sequencing), investigating how each of these steps influences the abundance and distribution of the sequenced reads. For each of those steps, we developed universally applicable models, which can be parameterised by empirical attributes of any experimental protocol. Our models are implemented in a computer simulation pipeline called the Flux Simulator, and we show that read distributions generated by different combinations of these models closely reproduce the evidence obtained from the corresponding experimental setups. We further demonstrate that our in silico RNA-Seq provides insights into the hidden precursors that determine the final configuration of reads along gene bodies, revealing enhancing or compensatory effects that explain apparently contradictory observations. Moreover, our simulations identify hitherto unreported sources of systematic bias from RNA hydrolysis, a fragmentation technique currently employed by most RNA-Seq protocols. © 2012 The Author(s).
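
To make the modelling idea concrete, the sketch below simulates two of the protocol steps the abstract names - random hydrolysis followed by size selection - and tallies where the surviving fragments start along a transcript. It is a deliberately minimal toy (uniform breakpoints, fixed cut count), not the Flux Simulator's actual parameterisation; all constants are illustrative assumptions.

```python
import random

def fragment_uniform(length, breaks):
    """Fragment a transcript of `length` nt by cutting at `breaks`
    uniformly chosen positions (a crude model of RNA hydrolysis)."""
    cuts = sorted(random.sample(range(1, length), breaks))
    bounds = [0] + cuts + [length]
    return [(bounds[i], bounds[i + 1]) for i in range(len(bounds) - 1)]

def size_select(fragments, lo=200, hi=300):
    """Gel-segregation step: keep only fragments in the size window."""
    return [(s, e) for s, e in fragments if lo <= e - s <= hi]

# Accumulate fragment start positions over many simulated molecules:
starts = [0] * 1000
for _ in range(10000):
    for s, e in size_select(fragment_uniform(1000, 4)):
        starts[s] += 1
print(starts[:5], starts[495:500], starts[-5:])
```

Even this crude model yields markedly non-uniform read-start coverage along the transcript, the kind of hidden precursor effect the paper's simulations expose.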


Luo W., Guangxi University for Nationalities | Luo W., Southwest Jiaotong University | Fang X., Guangxi University for Nationalities | Cheng M., Guangxi University for Nationalities | Zhao Y., Algorithm
IEEE Transactions on Vehicular Technology | Year: 2013

The capacity of a multiple-input-multiple-output (MIMO) channel with N transmit and N receive antennas for high-speed railways (HSRs) is analyzed based on 3-D modeling of the line of sight (LOS). The MIMO system uses a uniform linear antenna array. Because there are few scatterers in strong LOS environments, a capacity gain can be obtained by adjusting the weights of multiantenna array groups, rather than by increasing the number of antennas or simply changing the parameters of the antenna array, such as separation and geometry. On the other hand, it is hard to obtain the array gain of MIMO beamforming for HSRs because of drastic changes in the receiving angle as the train travels past an E-UTRAN Node B. Without changing the antenna design of Long-Term Evolution systems, this paper proposes a multiple-group multiple-antenna (MGMA) scheme that makes the columns of such a MIMO channel orthogonal by adjusting the weights among the MGMA arrays, so that a stable capacity gain can be obtained. The weight values depend on the practical network topology of the railway wireless communication system. In practice, however, the reasonable range of the group number N is below 6; in selecting N, one important consideration is the tradeoff between practical benefit and implementation cost. © 1967-2012 IEEE.
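
The payoff of making the channel columns orthogonal follows directly from the standard equal-power MIMO capacity formula, C = log2 det(I + (SNR/N) H H^H). The sketch below is not the paper's MGMA weight design; the channel matrices and SNR are illustrative assumptions. It compares a rank-one LOS channel against an orthogonal-column channel of identical Frobenius norm.

```python
import numpy as np

def mimo_capacity(H, snr):
    """Capacity (bit/s/Hz) of a MIMO channel with equal power allocation:
    C = log2 det(I + (snr/N) H H^H)."""
    n = H.shape[1]  # number of transmit antennas
    G = np.eye(H.shape[0]) + (snr / n) * (H @ H.conj().T)
    return float(np.real(np.log2(np.linalg.det(G))))

rng = np.random.default_rng(0)
N, snr = 4, 10.0  # 4x4 system; 10 dB SNR is 10.0 in linear scale

# Rank-one LOS channel (all columns aligned): little multiplexing gain.
a = np.exp(1j * rng.uniform(0, 2 * np.pi, N))
H_los = np.outer(a, np.ones(N))
# Orthogonal-column channel with the same Frobenius norm.
H_orth = np.sqrt(N) * np.linalg.qr(rng.normal(size=(N, N)))[0]

print(mimo_capacity(H_los, snr), mimo_capacity(H_orth, snr))
```

With N = 4 at 10 dB, the orthogonal-column channel gives about 13.8 bit/s/Hz versus roughly 5.4 bit/s/Hz for the aligned LOS channel - the gap the MGMA weighting aims to close.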


Kang S.-J., Algorithm | Kim Y.H., Pohang University of Science and Technology
IEEE/OSA Journal of Display Technology | Year: 2011

This paper presents a new method for dynamic backlight dimming in liquid crystal display (LCD) devices that uses multiple histograms. The proposed multi-histogram-based gray-level error control (MGEC) algorithm considers the pixel distribution of an image using multiple histograms, thereby improving image quality. Additionally, we propose several techniques to reduce the computational cost of the MGEC algorithm. In experiments, the proposed method improved the average peak signal-to-noise ratio (PSNR) by up to 8.144 dB over the benchmark method, while substantially reducing power consumption. Moreover, applying the proposed cost-reduction techniques cut the computation time by up to 95.801% compared to the original algorithm. © 2006 IEEE.
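
To illustrate the underlying trade-off, the sketch below implements a single-histogram variant of dimming, not the MGEC algorithm itself: the backlight is set from an intensity percentile, pixel values are scaled up to compensate, and anything above the backlight level clips - the gray-level error that histogram-aware methods like MGEC try to control. The percentile choice and the test image are illustrative assumptions.

```python
import numpy as np

def dim_and_compensate(img, percentile=95):
    """Single-histogram dimming sketch: set the backlight to the given
    intensity percentile, then scale pixels up to compensate. Pixels
    above the backlight level clip, causing gray-level error."""
    backlight = max(np.percentile(img, percentile), 1.0) / 255.0
    compensated = np.clip(img / backlight, 0, 255)
    displayed = compensated * backlight  # what the panel actually shows
    return displayed, backlight

def psnr(ref, out):
    mse = np.mean((ref.astype(float) - out) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

img = np.random.default_rng(1).integers(0, 256, (480, 640)).astype(float)
shown, level = dim_and_compensate(img)
print(f"backlight at {level:.2f} of full power, PSNR {psnr(img, shown):.2f} dB")
```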


Djuric P.M., State University of New York at Stony Brook | Khan M., Algorithm | Johnston D.E., Quantalysis LLC
IEEE Journal on Selected Topics in Signal Processing | Year: 2012

In this paper, we address univariate stochastic volatility models that allow for correlation of the perturbations in the state and observation equations, i.e., models with leverage. We propose a particle filtering method for estimating the posterior distributions of the log-volatility, where we employ Rao-Blackwellization of the unknown static parameters of the model. We also propose a scheme for choosing the best model from a set of considered models and a test for assessing the validity of the selected model. We demonstrate the performance of the proposed method on simulated and S&P 500 data. © 2011 IEEE.
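
As a point of reference for the filtering step, here is a minimal bootstrap particle filter for the basic stochastic-volatility model without leverage and with the static parameters held fixed - i.e., without the paper's leverage correlation or Rao-Blackwellization. The model form, parameter values and particle count are assumptions for illustration.

```python
import numpy as np

def sv_bootstrap_filter(y, n_particles=1000, mu=-1.0, phi=0.95, sigma=0.2):
    """Bootstrap particle filter for the basic stochastic-volatility model
        x_t = mu + phi*(x_{t-1} - mu) + sigma*v_t,   y_t = exp(x_t/2)*e_t,
    with v_t, e_t ~ N(0,1) independent (no leverage). Returns the filtered
    posterior mean of the log-volatility x_t at each time step."""
    rng = np.random.default_rng(0)
    x = mu + sigma / np.sqrt(1 - phi ** 2) * rng.normal(size=n_particles)
    means = []
    for yt in y:
        # Propagate particles through the state equation (prior proposal).
        x = mu + phi * (x - mu) + sigma * rng.normal(size=n_particles)
        # Weight by the observation density N(y_t; 0, exp(x_t)).
        logw = -0.5 * (x + yt ** 2 * np.exp(-x))
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.dot(w, x))
        # Multinomial resampling.
        x = rng.choice(x, size=n_particles, p=w)
    return np.array(means)

# Simulated data from the same model:
rng = np.random.default_rng(1)
T, mu, phi, sigma = 200, -1.0, 0.95, 0.2
x_true = np.empty(T); x_true[0] = mu
for t in range(1, T):
    x_true[t] = mu + phi * (x_true[t - 1] - mu) + sigma * rng.normal()
y = np.exp(x_true / 2) * rng.normal(size=T)
print(sv_bootstrap_filter(y)[-5:])
```

Rao-Blackwellizing the static parameters, as the paper proposes, would replace the fixed mu, phi and sigma with per-particle statistics that are updated analytically rather than sampled.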


Chimani M., Algorithm | Rahmann S., TU Dortmund | Böcker S., Friedrich Schiller University of Jena
2010 ACM International Conference on Bioinformatics and Computational Biology, ACM-BCB 2010 | Year: 2010

In computational phylogenetics, the problem of constructing a consensus tree or supertree of a given set of rooted input trees can be formalized in different ways. We consider the Minimum Flip Consensus Tree and Minimum Flip Supertree problems, where input trees are transformed into a 0/1/?-matrix, such that each row represents a taxon and each column represents a subtree membership. For the consensus tree problem, all input trees contain the same set of taxa, and no ?-entries occur. For the supertree problem, the input trees may contain different subsets of the taxa, and unrepresented taxa are coded with ?-entries. In both cases, the goal is to find a perfect phylogeny for the input matrix requiring a minimum number of 0/1-flips, i.e., matrix entry corrections. Both optimization problems are NP-hard. We present the first efficient Integer Linear Programming (ILP) formulations for both problems, using three distinct characterizations of a perfect phylogeny. Although these three formulations seem to differ considerably at first glance, we show that they are in fact polytope-wise equivalent. Introducing a novel column generation scheme, it turns out that the simplest, purely combinatorial formulation is the most efficient one in practice. Using our framework, it is possible to find exact solutions for instances with ∼100 taxa. Copyright © 2010 ACM.
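
For a rooted perfect phylogeny on a complete 0/1 matrix, two columns are compatible unless the row pairs (1,1), (1,0) and (0,1) all occur between them; the flip problems ask for the fewest entry changes removing every such conflict. The sketch below checks this condition and finds the minimum flip count by exhaustive search - feasible only for toy instances, in contrast to the paper's ILP formulations.

```python
from itertools import combinations

def columns_conflict(M, i, j):
    """Two columns conflict (violating the perfect-phylogeny condition
    for rooted trees) if the row pairs (1,1), (1,0), (0,1) all occur."""
    pairs = {(row[i], row[j]) for row in M}
    return {(1, 1), (1, 0), (0, 1)} <= pairs

def is_perfect_phylogeny(M):
    cols = range(len(M[0]))
    return not any(columns_conflict(M, i, j) for i, j in combinations(cols, 2))

def min_flips_bruteforce(M):
    """Smallest number of 0/1 flips making M a perfect phylogeny.
    Exponential search; only for tiny demonstration instances."""
    cells = [(r, c) for r in range(len(M)) for c in range(len(M[0]))]
    for k in range(len(cells) + 1):
        for flips in combinations(cells, k):
            N = [row[:] for row in M]
            for r, c in flips:
                N[r][c] ^= 1
            if is_perfect_phylogeny(N):
                return k

M = [[1, 1, 0],
     [1, 0, 1],
     [0, 1, 1]]  # three pairwise conflicting columns
print(is_perfect_phylogeny(M), min_flips_bruteforce(M))
```

On this 3x3 example no single flip resolves all three pairwise conflicts, and the search returns 2.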


Ribeca P., Algorithm | Valiente G., University of Barcelona
Briefings in Bioinformatics | Year: 2011

Next-generation sequencing technologies have opened up an unprecedented opportunity for microbiology by enabling the culture-independent genetic study of complex microbial communities, which until now have remained largely unknown. The analysis of metagenomic data is challenging: potentially, one is faced with a sample containing a mixture of many different bacterial species whose genomes have not necessarily been sequenced beforehand. In the simpler case of 16S ribosomal RNA metagenomic data, for which databases of reference sequences exist, we survey the computational challenges that must be solved in order to characterize and quantify a sample. In particular, we examine two aspects: how the necessary adoption of new tools geared towards high-throughput analysis impacts the quality of the results, and how well various established methods perform at assigning sequence reads to microbial species, with and without taking taxonomic information into account. © The Author 2011. Published by Oxford University Press.
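
The read-assignment task the survey evaluates can be stated very compactly. The sketch below assigns a read to the reference sharing the most k-mers with it and makes no call on a tie; the reference snippets are hypothetical stand-ins for a 16S database (only the leading 27F primer region is genuine), and real pipelines would instead resolve ties using the taxonomy, e.g. with a lowest-common-ancestor rule.

```python
from collections import Counter

def kmers(seq, k=8):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def assign_read(read, refs, k=8):
    """Assign a read to the reference sharing the most k-mers with it.
    Returns None on a tie, mimicking an 'unassigned' call."""
    scores = Counter({name: len(kmers(read, k) & kmers(seq, k))
                      for name, seq in refs.items()})
    best = scores.most_common(2)
    if len(best) > 1 and best[0][1] == best[1][1]:
        return None
    return best[0][0]

# Hypothetical reference snippets standing in for a 16S database:
refs = {
    "Escherichia_coli":  "AGAGTTTGATCCTGGCTCAGATTGAACGCTGGCGG",
    "Bacillus_subtilis": "AGAGTTTGATCCTGGCTCAGGACGAACGCTGGCGG",
}
print(assign_read("CTCAGATTGAACGCTGG", refs))
```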


Masago A., Tohoku University | Tsukada M., Tohoku University | Shimizu M., Algorithm
Physical Review B - Condensed Matter and Materials Physics | Year: 2010

The partitioned-real-space density-functional-based tight-binding (PR-DFTB) method is proposed as a simulation method for calculating the quantum electronic states in Kelvin probe force microscopy (KPFM). The method applies when a tip is placed above a sample surface at a distance where no orbital hybridization occurs and a bias voltage is applied. The PR-DFTB method can perform self-consistent calculations of a system that consists of two subsystems (the tip and the sample). Each subsystem is expressed by a block element of the Fock matrix and is thus characterized by the Fermi level in its block element. Consequently, charge distributions on the two subsystems can be calculated individually. Furthermore, the charge redistributions induced in the subsystems as they approach each other under an applied bias voltage can also be calculated. Using the proposed PR-DFTB method, we can clarify the mechanism underlying observation of the local contact potential difference (LCPD). Unlike in the conventional description of the Kelvin force, the force acting between a biased tip and a sample depends not only on the net charge transferred between the tip and the sample but also on the multipole forces generated by the microscopic charge distribution within the tip and the sample. This is the mechanism responsible for the observation of the "apparent" LCPD. KPFM images generated from the minimum bias voltage in the force-bias curve (i.e., LCPD images) are theoretically simulated using Si or hydrogenated Si cluster tip models for simple models of a Si(111)-c(4×2) surface, a monohydride Si(001) surface with and without a defect, and a Si(111)-(5×5) dimer-adatom-stacking-fault (DAS) surface. © 2010 The American Physical Society.
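
In the conventional description that the paper refines, the electrostatic force is parabolic in the applied bias, F(V) ≈ -1/2 (dC/dz)(V - V_LCPD)^2 + F0, and the LCPD is read off as the bias at the apex of the measured force-bias curve. The sketch below extracts that apex from a simulated curve; the curve parameters and noise level are illustrative assumptions, and the multipole contributions the paper identifies would shift this apparent LCPD.

```python
import numpy as np

def lcpd_from_force_curve(V, F):
    """Fit F = a*V^2 + b*V + c and return the apex bias -b/(2a),
    the conventional estimate of the (apparent) LCPD."""
    a, b, _ = np.polyfit(V, F, 2)
    return -b / (2 * a)

V = np.linspace(-2.0, 2.0, 81)
true_lcpd, dCdz, F0 = 0.35, 1.2, -0.1  # hypothetical values
F = -0.5 * dCdz * (V - true_lcpd) ** 2 + F0
F += np.random.default_rng(0).normal(scale=0.005, size=V.size)  # noise
print(f"recovered LCPD: {lcpd_from_force_curve(V, F):+.3f} V")
```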


Azuma H., Algorithm | Ban M., Ochanomizu University
Physica D: Nonlinear Phenomena | Year: 2014

We study a quasiperiodic structure in the time evolution of the Bloch vector, whose dynamics is governed by the thermal Jaynes-Cummings model (JCM). Preparing the two-level atom in a certain pure state and the cavity field in a mixed state in thermal equilibrium at the initial time, we let the whole system evolve according to the JCM Hamiltonian. During this time evolution, the motion of the Bloch vector appears disordered: because of the thermal photon distribution, both the norm and the direction of the Bloch vector change nearly at random. In this paper, taking a viewpoint different from the usual ones, we investigate the quasiperiodicity of the Bloch vector's trajectories. Introducing the concept of quasiperiodic motion, we can explain the seemingly confused behaviour of the system as an intermediate state between periodic and chaotic motion. More specifically, we discuss the following two facts: (1) if we adjust the time interval Δt properly, figures consisting of dots plotted at that constant time interval acquire scale invariance under replacement of Δt by sΔt, where s (>1) is an arbitrary real but non-transcendental number; (2) we can compute values of the time variable t at which |S_z(t)| (the absolute value of the z-component of the Bloch vector) becomes very small using the Diophantine approximation (a rational approximation of an irrational number). © 2014 Elsevier B.V. All rights reserved.
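
The origin of the quasiperiodicity is easy to exhibit in one standard special case: on resonance, with the atom initially excited and the field thermal, the inversion is a photon-number-weighted sum of Rabi oscillations, S_z(t) = Σ_n p_n cos(2g√(n+1) t) with p_n = n̄^n/(1+n̄)^(n+1), up to normalization of the Bloch vector. Because the frequencies √(n+1) are mutually incommensurate, the sum never exactly repeats. This textbook case is an assumption standing in for the paper's more general initial states.

```python
import numpy as np

def thermal_jcm_inversion(t, nbar=2.0, g=1.0, n_max=200):
    """Atomic inversion S_z(t) for the resonant Jaynes-Cummings model with
    the atom initially excited and the cavity in a thermal state:
        S_z(t) = sum_n p_n * cos(2*g*sqrt(n+1)*t),
        p_n    = nbar**n / (1 + nbar)**(n + 1).
    The incommensurate Rabi frequencies make the motion quasiperiodic."""
    n = np.arange(n_max)
    p = (nbar ** n) / (1.0 + nbar) ** (n + 1)
    return np.sum(p[:, None] * np.cos(2 * g * np.sqrt(n + 1)[:, None] * t),
                  axis=0)

t = np.linspace(0.0, 50.0, 5000)
sz = thermal_jcm_inversion(t)
# Sample times where |S_z(t)| dips closest to zero, as probed in the paper:
idx = np.argsort(np.abs(sz))[:5]
print(np.sort(t[idx]))
```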


News Article | February 15, 2017
Site: www.newscientist.com

AN ALGORITHM that analyses facial expressions and head movements could help doctors diagnose autism-like conditions and attention deficit hyperactivity disorder.

There is no simple test for autism or ADHD, but clinicians usually observe someone’s behaviour as part of the assessment. “These are frequently co-occurring conditions and the visual behaviours that come with them are similar,” says Michel Valstar at the University of Nottingham, UK. His team used machine learning to identify some of these behaviours.

The group captured video of 55 adults as they read and listened to stories and answered questions about them. “People with autism do not always get the social and emotional subtleties,” says Valstar. The participants fell into four groups: people diagnosed with autism-like conditions, ADHD, both or neither.

The system learned to spot differences between how the groups responded. For example, people with both conditions were less likely to raise their eyebrows when they saw surprising information. The team also tracked head movement to gauge how much the volunteers’ attention wandered. Using both measures, the system correctly identified people with ADHD or autism-like conditions 96 per cent of the time (arxiv.org/abs/1612.02374).

Eric Taylor at King’s College London welcomes the potential of this as a diagnostic tool for these conditions. But he says the best approach is still observing children in everyday surroundings.

Algorithms won’t take over from doctors any time soon, says Valstar. “We are creating diagnostic tools that will speed up the diagnosis in an existing practice, but we do not believe we can remove humans. Humans add ethics and moral values to the process.”

This article appeared in print under the headline “Computer spots signs of autism and ADHD”
