Millan D., Polytechnic University of Catalonia |
Millan D., Laboratorio Nacional de Computacao Cientifica |
Arroyo M., Polytechnic University of Catalonia
Computer Methods in Applied Mechanics and Engineering | Year: 2013
Model reduction in computational mechanics is generally addressed with linear dimensionality reduction methods such as Principal Component Analysis (PCA). Hypothesizing that in many applications of interest the essential dynamics evolve on a nonlinear manifold, we explore here reduced-order modeling based on nonlinear dimensionality reduction methods. Such methods are gaining popularity in diverse fields of science and technology, such as machine perception or molecular simulation. We consider finite deformation elastodynamics as a model problem and identify the manifold where the dynamics essentially take place - the slow manifold - by applying nonlinear dimensionality reduction methods to a database of snapshots. Contrary to linear dimensionality reduction, the smooth parametrization of the slow manifold requires special techniques, and we use local maximum-entropy approximants. We then formulate the Lagrangian mechanics in these data-based generalized coordinates and develop variational time integrators. Our proof-of-concept example shows that a few nonlinear collective variables provide accuracy similar to that of tens of PCA modes, suggesting that the proposed method may be very attractive in control or optimization applications. Furthermore, the reduced number of variables brings insight into the mechanics of the system under scrutiny. Our simulations also highlight the need to model, at long times, the net effect of the disregarded degrees of freedom on the reduced dynamics. © 2013 Elsevier B.V.
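A minimal sketch of the gap this abstract points to, with invented data and sizes (the authors' actual machinery, local maximum-entropy parametrizations and variational integrators, is not reproduced here): snapshots are generated on a one-dimensional nonlinear manifold (a circle) embedded in a high-dimensional ambient space, and PCA is applied to the snapshot matrix. PCA needs as many modes as the linear span of the manifold, while a nonlinear parametrization could in principle use a single coordinate.

```python
import numpy as np

# Hypothetical snapshot database: each column is one sampled state of the
# system. The states lie on a 1D nonlinear manifold (a circle) embedded
# linearly into a 50-dimensional ambient space.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0 * np.pi, 200)
circle = np.stack([np.cos(t), np.sin(t)])        # (2, 200): the "slow manifold"
embed = rng.standard_normal((50, 2))
snapshots = embed @ circle                       # (50, 200) snapshot matrix

# Linear reduction (PCA/POD): truncated SVD of the mean-centered snapshots.
centered = snapshots - snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.999)) + 1      # modes for 99.9% of energy
print("PCA modes needed:", k)                    # 2, though the manifold is 1D
```

The circle spans a two-dimensional linear subspace, so PCA needs two modes even though one nonlinear coordinate (the angle) parametrizes the data exactly; on genuinely curved slow manifolds in high dimensions this gap is what makes nonlinear collective variables attractive.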
Leiva J.S., Instituto Balseiro |
Blanco P.J., Laboratorio Nacional de Computacao Cientifica |
Buscaglia G.C., University of Sao Paulo
International Journal for Numerical Methods in Engineering | Year: 2010
In this article we address decomposition strategies especially tailored to perform strong coupling of dimensionally heterogeneous models, under the hypothesis that one wants to solve each submodel separately and implement the interaction between subdomains through boundary conditions alone. The novel methodology takes full advantage of the small number of interface unknowns in this kind of problem. Existing algorithms can be viewed as variants of the 'natural' staggered algorithm in which each domain transfers function values to the other and receives fluxes (or forces) in return. This natural algorithm is known as Dirichlet-to-Neumann in the domain decomposition literature. Essentially, we propose a framework in which this algorithm is equivalent to applying Gauss-Seidel iterations to a suitably defined (linear or nonlinear) system of equations. It then becomes immediate to switch to other iterative solvers, such as GMRES or other Krylov-based methods, which we assess through numerical experiments showing the significant gain that can be achieved. Indeed, the benefit is that an extremely flexible, automatic coupling strategy can be developed, which in addition leads to iterative procedures that are parameter-free and rapidly converging. Further, for linear problems they have the finite termination property. © 2009 John Wiley & Sons, Ltd.
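The central observation, that the staggered exchange is one fixed-point sweep on a small interface system, can be sketched on a toy linear system (the matrix below is invented; in the paper's setting each matrix-vector product would stand for black-box submodel solves):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# Toy stand-in for the small system of interface unknowns.
rng = np.random.default_rng(1)
n = 8
M = rng.standard_normal((n, n))
A = M.T @ M + n * np.eye(n)          # SPD, so Gauss-Seidel converges
b = rng.standard_normal(n)
x_exact = np.linalg.solve(A, b)

def gauss_seidel(A, b, iters=200):
    """The 'natural' staggered (Dirichlet-to-Neumann-like) fixed-point sweep."""
    x = np.zeros_like(b)
    for _ in range(iters):
        for i in range(len(b)):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

x_gs = gauss_seidel(A, b)

# Once the coupling is phrased as a system, switching to a Krylov solver is
# immediate: wrap the interface action as a LinearOperator and call GMRES.
op = LinearOperator((n, n), matvec=lambda v: A @ v)
x_kr, info = gmres(op, b)
```

The `LinearOperator` wrapper is the key design point: GMRES only needs the action of the interface map on a vector, which is exactly what one staggered exchange between submodels provides, so existing solvers can be reused without assembling anything.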
Boettcher S., Emory University |
Falkner S., Emory University |
Portugal R., Laboratorio Nacional de Computacao Cientifica
Physical Review A - Atomic, Molecular, and Optical Physics | Year: 2014
We show how to extract the scaling behavior of quantum walks using the renormalization group (RG). We introduce the method by efficiently reproducing well-known results on the one-dimensional lattice. For a nontrivial model, we apply this method to the dual Sierpinski gasket and obtain its exact, closed system of RG recursions. Numerical iteration suggests that under rescaling of the system length, L′ = 2L, characteristic times rescale as t′ = 2^(d_w) t, with the exact walk exponent d_w = log_2 √5 = 1.1609.... Despite the lack of translational invariance, this value is very close to the ballistic spreading, d_w = 1, found for regular lattices. However, we argue that an extended interpretation of the traditional RG formalism will be needed to obtain scaling exponents analytically. Direct simulations confirm our RG prediction for d_w and furthermore reveal an immensely rich phenomenology for the spreading of the quantum walk on the gasket. Invariably, quantum interference localizes the walk completely, with a site-access probability that decreases as a power law away from the initial site, in contrast to a classical random walk, which would pass all sites with certainty. © 2014 American Physical Society.
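The "well-known" one-dimensional benchmark can be checked directly: a discrete-time coined (Hadamard) quantum walk spreads ballistically, i.e. d_w = 1, so doubling the time roughly doubles the width of the distribution, whereas a classical random walk (d_w = 2) would only gain a factor of √2. The simulation below is a generic textbook walk, not the RG machinery of the paper.

```python
import numpy as np

# Discrete-time Hadamard walk on a 1D lattice, started at the origin.
def hadamard_walk_sigma(steps):
    n = 2 * steps + 1                        # sites -steps..steps (no wraparound)
    psi = np.zeros((n, 2), dtype=complex)    # amplitude per (site, coin state)
    psi[steps, 0] = 1.0
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for _ in range(steps):
        psi = psi @ H.T                      # coin flip at every site
        shifted = np.zeros_like(psi)
        shifted[1:, 0] = psi[:-1, 0]         # coin state 0 steps right
        shifted[:-1, 1] = psi[1:, 1]         # coin state 1 steps left
        psi = shifted
    prob = (np.abs(psi) ** 2).sum(axis=1)
    x = np.arange(-steps, steps + 1)
    return np.sqrt((prob * x**2).sum() - (prob * x).sum() ** 2)

# Ballistic scaling t' = 2^(d_w) t with d_w = 1: width doubles with time.
ratio = hadamard_walk_sigma(200) / hadamard_walk_sigma(100)
```

With d_w = 1 the ratio approaches 2; a diffusive walk would give √2 ≈ 1.41, and the gasket value d_w = log_2 √5 would give 2^(1/d_w) ≈ 1.82 in system length per doubling of time.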
Higashi S., Laboratorio Nacional de Computacao Cientifica
BMC genomics | Year: 2012
An essential step of a metagenomic study is taxonomic classification, that is, the identification of the taxonomic lineage of the organisms in a given sample. The taxonomic classification process involves a series of decisions. Currently, in the context of metagenomics, such decisions are usually based on empirical studies that consider one specific type of classifier. In this study we propose a general framework for analyzing the impact that several decisions can have on the classification problem. Instead of focusing on any specific classifier, we define a generic score function that provides a measure of the difficulty of the classification task. Using this framework, we analyze the impact of the following parameters on the taxonomic classification problem: (i) the length of the n-mers used to encode the metagenomic sequences, (ii) the similarity measure used to compare sequences, and (iii) the type of taxonomic classification, which can be conventional or hierarchical, depending on whether the classification process occurs in a single shot or in several steps along the taxonomic tree. We defined a score function that measures the degree of separability of the taxonomic classes under a given configuration induced by the parameters above. We conducted an extensive computational experiment and found that reasonable values for the parameters of interest are (i) intermediate values of n, the length of the n-mers; (ii) any similarity measure, because all of them resulted in similar scores; and (iii) the hierarchical strategy, which performed better in all cases. As expected, short n-mers generate lower configuration scores because they give rise to frequency vectors that represent distinct sequences in a similar way. On the other hand, large values of n result in sparse frequency vectors that represent genuinely similar metagenomic fragments differently, also leading to low configuration scores.
Regarding the similarity measure, and in contrast to our expectations, varying the measure did not significantly change the configuration scores. Finally, the hierarchical strategy was more effective than the conventional strategy, which suggests that, instead of using a single classifier, one should adopt multiple classifiers organized as a hierarchy.
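The n-mer encoding discussed above can be sketched in a few lines: each sequence becomes a frequency vector over all 4^n nucleotide n-mers, and sequences are then compared with a similarity measure (cosine similarity here; the sequences and the choice of measure are illustrative, and the study's score function is not reproduced).

```python
from itertools import product
import numpy as np

# Encode a DNA sequence as a normalized frequency vector over all 4**n n-mers.
def nmer_vector(seq, n):
    kmers = ["".join(p) for p in product("ACGT", repeat=n)]
    index = {k: i for i, k in enumerate(kmers)}
    v = np.zeros(len(kmers))
    for i in range(len(seq) - n + 1):
        k = seq[i:i + n]
        if k in index:                      # skips n-mers with ambiguous bases
            v[index[k]] += 1
    total = v.sum()
    return v / total if total else v

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

a = nmer_vector("ACGTACGTACGTACGT", 3)      # periodic sequence
b = nmer_vector("ACGTACGTACGTTTTT", 3)      # shares most 3-mers with a
c = nmer_vector("GGGGGGGGGGGGGGGG", 3)      # shares none
```

The trade-off described in the abstract is visible in the vector length: for small n the 4^n-dimensional vectors are dense and distinct sequences look alike, while for large n they are sparse and similar fragments may share no n-mers at all.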
Moura A.M.D.C., Laboratorio Nacional de Computacao Cientifica
Journal of Biomedical Informatics | Year: 2015
Scientific text annotation has become an important task for biomedical scientists. Nowadays, there is an increasing need for the development of intelligent systems to support new scientific findings. Public databases available on the Web provide useful data, but much more useful information is only accessible in scientific texts. Text annotation may help, as it relies on ontologies to maintain annotations based on a uniform vocabulary. However, it is difficult to use an ontology, especially one that covers a large domain. In addition, since scientific texts explore multiple domains, which are covered by distinct ontologies, the task becomes even more difficult. Moreover, there are dozens of ontologies in the biomedical area, and they are usually large in terms of the number of concepts. It is in this context that ontology modularization can be useful. This work presents an approach to annotating scientific documents using modules of different ontologies, built according to a module extraction technique. The main idea is to analyze a set of single-ontology annotations on a text to discover the user's interests. Based on these annotations, a set of modules is extracted from a set of distinct ontologies and made available to the user for complementary annotation. The reduced size and focus of the extracted modules tend to facilitate the annotation task. An experiment was conducted to evaluate this approach, with the participation of a bioinformatics specialist from the Laboratory of Peptides and Proteins of the IOC/Fiocruz, who was interested in discovering new drug targets for combating tropical diseases. © 2015 Elsevier Inc.
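As a rough illustration of the modularization idea (the paper's actual extraction technique is not specified here, and the ontology, concept names, and depth-limited traversal below are all invented), one can extract a module by starting from the concepts a user has already used in annotations and keeping every concept reachable through subclass links up to a fixed depth:

```python
from collections import deque

# Invented toy ontology: concept -> list of subconcepts.
ontology = {
    "Protein": ["Enzyme", "Receptor"],
    "Enzyme": ["Protease"],
    "Receptor": [],
    "Protease": [],
    "Disease": ["TropicalDisease"],
    "TropicalDisease": [],
}

# Depth-limited traversal from the seed concepts found in existing annotations.
def extract_module(graph, seeds, depth=2):
    module = set(seeds)
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        node, d = frontier.popleft()
        if d == depth:
            continue
        for child in graph.get(node, []):
            if child not in module:
                module.add(child)
                frontier.append((child, d + 1))
    return module

module = extract_module(ontology, {"Protein"})
```

The resulting module contains only the protein-related fragment of the ontology, leaving the unrelated disease branch out, which is the size-and-focus benefit the abstract attributes to modularization.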