Malossi A.C.I., École Polytechnique Fédérale de Lausanne | Blanco P.J., Laboratório Nacional de Computação Científica | Blanco P.J., Instituto Nacional de Ciência e Tecnologia em Medicina Assistida por Computação Científica | Deparis S., École Polytechnique Fédérale de Lausanne
Computer Methods in Applied Mechanics and Engineering | Year: 2012

In this work a numerical strategy to address the solution of blood flow in one-dimensional arterial networks through a topology-based decomposition is presented. Such a decomposition results in the local analysis of the blood flow in simple arterial segments. Iterative methods are then used to perform the strong coupling among the segments, which communicate through non-overlapping interfaces. Specifically, two approaches are considered to solve the associated nonlinear interface problem: (i) the Newton method and (ii) the Broyden method. Moreover, since the modeling of blood flow in compliant vessels is tackled using explicit finite element methods, we formulate the coupling problem using a two-level time stepping technique. A local (inner) time step is used to solve the local problems in single arteries, thus meeting local stability conditions, while a global (outer) time step is employed to enforce the continuity of the physical quantities of interest among the one-dimensional segments. Several examples of application are presented. First, a study of the spurious reflections produced at the interfaces as a consequence of the two-level time stepping technique is carried out. Second, the methodologies are applied to physiological scenarios, specifically addressing the solution of the blood flow in a model of the entire arterial network. The effects of non-uniform material properties, of the variation of the radius, and of viscoelasticity are taken into account in the model and in the (local) numerical scheme; they are quantified and discussed in the arterial network simulation. © 2012 Elsevier B.V.
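
As a rough illustration of the two-level coupling idea (not the paper's finite element implementation), the Python sketch below replaces each arterial segment with a lumped toy model: each segment advances with its own inner time step, while a hand-rolled Broyden iteration on the interface unknown enforces flux continuity over the outer step. All names, dynamics, and parameter values here are illustrative assumptions.

```python
import numpy as np

P_IN, P_OUT = 1.0, 0.0        # boundary pressures of the toy network (assumed)
R_L, R_R = 0.5, 2.0           # interface resistances of the two segments (assumed)

def advance_segment(p, p_src, lam, r_lam, dt_outer, n_inner):
    """Advance one lumped 'segment' pressure p over the outer step with
    n_inner explicit inner steps, exchanging flux with the interface
    value lam. Each segment chooses its own stable inner step."""
    dt = dt_outer / n_inner
    for _ in range(n_inner):
        q_src = p_src - p                 # flux from the boundary source
        q_lam = (p - lam) / r_lam         # flux toward the interface
        p = p + dt * (q_src - q_lam)
    return p

def residual(lam):
    """Flux-conservation defect at the interface after one outer step."""
    p_l = advance_segment(1.0, P_IN, lam[0], R_L, dt_outer=0.01, n_inner=4)
    p_r = advance_segment(0.0, P_OUT, lam[0], R_R, dt_outer=0.01, n_inner=7)
    return np.array([(p_l - lam[0]) / R_L - (lam[0] - p_r) / R_R])

# Broyden iteration on the (here scalar) interface unknown lam.
lam = np.array([0.5])
B = np.eye(1)                              # approximate interface Jacobian
r = residual(lam)
for it in range(50):
    if np.linalg.norm(r) < 1e-12:
        break
    s = np.linalg.solve(B, -r)             # quasi-Newton step
    lam = lam + s
    r_new = residual(lam)
    B += np.outer(r_new - r - B @ s, s) / (s @ s)   # Broyden rank-1 update
    r = r_new
print(f"interface value {lam[0]:.6f} after {it} iterations, |r| = {np.linalg.norm(r):.2e}")
```

Because the toy residual is affine in the interface unknown, the rank-1 secant update recovers the exact interface Jacobian after one step, so the iteration converges in a handful of residual evaluations without ever assembling cross-segment derivatives.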


Millan D., Polytechnic University of Catalonia | Millan D., Laboratório Nacional de Computação Científica | Arroyo M., Polytechnic University of Catalonia
Computer Methods in Applied Mechanics and Engineering | Year: 2013

Model reduction in computational mechanics is generally addressed with linear dimensionality reduction methods such as Principal Component Analysis (PCA). Hypothesizing that in many applications of interest the essential dynamics evolve on a nonlinear manifold, we explore here reduced-order modeling based on nonlinear dimensionality reduction methods. Such methods are gaining popularity in diverse fields of science and technology, such as machine perception or molecular simulation. We consider finite-deformation elastodynamics as a model problem and identify the manifold where the dynamics essentially take place - the slow manifold - by applying nonlinear dimensionality reduction methods to a database of snapshots. In contrast to linear dimensionality reduction, smoothly parametrizing the slow manifold requires special techniques, and we use local maximum-entropy approximants. We then formulate the Lagrangian mechanics on these data-based generalized coordinates and develop variational time integrators. Our proof-of-concept example shows that a few nonlinear collective variables provide accuracy similar to that of tens of PCA modes, suggesting that the proposed method may be very attractive in control or optimization applications. Furthermore, the reduced number of variables brings insight into the mechanics of the system under scrutiny. Our simulations also highlight the need to model the net effect of the disregarded degrees of freedom on the reduced dynamics at long times. © 2013 Elsevier B.V.
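
The gap between linear and nonlinear reduction can be pictured with off-the-shelf tools. The sketch below builds a synthetic snapshot database lying on a one-dimensional manifold and contrasts PCA with a nonlinear embedding; scikit-learn's Isomap is used as a stand-in, since the paper's particular embedding and its local maximum-entropy parametrization are more involved. The data and dimensions are made up for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap

# Synthetic snapshot database: states on a 1-D nonlinear manifold (a
# helix) embedded in a 30-dimensional ambient space.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 4 * np.pi, 400))     # latent slow variable
curve = np.stack([np.cos(t), np.sin(t), 0.2 * t], axis=1)
snapshots = curve @ rng.standard_normal((3, 30))

# Linear reduction: PCA needs several modes to capture the curved set
# (typically 3 here, the dimension of the helix's linear span).
var = np.cumsum(PCA().fit(snapshots).explained_variance_ratio_)
print("PCA modes for 99.9% variance:", np.searchsorted(var, 0.999) + 1)

# Nonlinear reduction: a single Isomap coordinate parametrizes the
# manifold, tracking the latent variable almost perfectly.
z = Isomap(n_components=1, n_neighbors=10).fit_transform(snapshots)
corr = np.corrcoef(z[:, 0], t)[0, 1]
print("|corr(Isomap coordinate, latent variable)| =", round(abs(corr), 4))
```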


Leiva J.S., Instituto Balseiro | Blanco P.J., Laboratório Nacional de Computação Científica | Buscaglia G.C., University of São Paulo
International Journal for Numerical Methods in Engineering | Year: 2010

In this article we address decomposition strategies especially tailored to the strong coupling of dimensionally heterogeneous models, under the hypothesis that one wants to solve each submodel separately and implement the interaction between subdomains through boundary conditions alone. The novel methodology takes full advantage of the small number of interface unknowns in this kind of problem. Existing algorithms can be viewed as variants of the 'natural' staggered algorithm in which each domain transfers function values to the other and receives fluxes (or forces) in return. This natural algorithm is known as Dirichlet-to-Neumann in the domain decomposition literature. Essentially, we propose a framework in which this algorithm is equivalent to applying Gauss-Seidel iterations to a suitably defined (linear or nonlinear) system of equations. It is then immediate to switch to other iterative solvers, such as GMRES or other Krylov-based methods, which we assess through numerical experiments showing the significant gain that can be achieved. The benefit is an extremely flexible, automatic coupling strategy that leads to iterative procedures which are parameter-free and rapidly converging; moreover, for linear problems they have the finite termination property. © 2009 John Wiley & Sons, Ltd.
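
The reformulation can be mimicked on a toy problem: a 1-D Poisson equation split into five subdomains, where the only unknowns are the four interface values and the residual is the flux jump produced by independent local Dirichlet solves. In the sketch below (an illustrative setup, with a relaxed fixed-point sweep standing in for the staggered Dirichlet-to-Neumann iteration), the same interface operator is then handed to matrix-free GMRES, which terminates after at most four iterations.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# Toy interface problem: -u'' = 1 on [0,1], u(0) = u(1) = 0, split into
# five subdomains. The unknowns are the 4 interface values g; residual(g)
# is the jump in u' at each interface after purely local Dirichlet solves.
nodes = np.linspace(0.0, 1.0, 6)            # subdomain endpoints

def slope(a, b, ua, ub, x):
    """u'(x) for -u'' = 1 on [a, b] with u(a) = ua, u(b) = ub (closed form)."""
    return -x + (ub - ua + (b * b - a * a) / 2.0) / (b - a)

def residual(g):
    u = np.concatenate(([0.0], g, [0.0]))   # append the outer Dirichlet data
    r = np.empty(len(g))
    for i in range(1, len(u) - 1):
        left = slope(nodes[i - 1], nodes[i], u[i - 1], u[i], nodes[i])
        right = slope(nodes[i], nodes[i + 1], u[i], u[i + 1], nodes[i])
        r[i - 1] = left - right             # flux jump, driven to zero
    return r

n = 4
b = -residual(np.zeros(n))                  # affine part of the residual
A = LinearOperator((n, n), matvec=lambda g: residual(g) + b)

# (i) Relaxed fixed-point sweep (a stand-in for the staggered iteration).
g, omega = np.zeros(n), 0.05
for k in range(2000):
    r = residual(g)
    if np.linalg.norm(r) < 1e-10:
        break
    g = g - omega * r
print("fixed-point iterations:", k + 1)

# (ii) Krylov acceleration: GMRES on the same interface operator has the
# finite termination property, at most n = 4 iterations here.
g_gmres, info = gmres(A, b)
print("GMRES agrees with fixed point:", np.allclose(g_gmres, g, atol=1e-4))
```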


Boettcher S., Emory University | Falkner S., Emory University | Portugal R., Laboratório Nacional de Computação Científica
Physical Review A - Atomic, Molecular, and Optical Physics | Year: 2014

We show how to extract the scaling behavior of quantum walks using the renormalization group (RG). We introduce the method by efficiently reproducing well-known results for the one-dimensional lattice. As a nontrivial model, we apply the method to the dual Sierpinski gasket and obtain its exact, closed system of RG recursions. Numerical iteration suggests that under a rescaling of the system length, L′ = 2L, characteristic times rescale as t′ = 2^(d_w) t, with the exact walk exponent d_w = log_2 √5 = 1.1609... Despite the lack of translational invariance, this value is very close to the ballistic spreading, d_w = 1, found for regular lattices. However, we argue that an extended interpretation of the traditional RG formalism will be needed to obtain scaling exponents analytically. Direct simulations confirm our RG prediction for d_w and furthermore reveal an immensely rich phenomenology for the spreading of the quantum walk on the gasket. Invariably, quantum interference localizes the walk completely, with a site-access probability that decreases as a power law away from the initial site, in contrast to a classical random walk, which would pass all sites with certainty. © 2014 American Physical Society.
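
For the one-dimensional benchmark mentioned above, the ballistic exponent d_w = 1 can be checked directly with a few lines of NumPy. The sketch below simulates a standard coined Hadamard walk (an illustrative stand-in; the paper's RG construction is not reproduced here) and verifies that the standard deviation of the position distribution grows linearly in time.

```python
import numpy as np

# Coined Hadamard walk on the 1-D lattice: psi[x, c], coin c in {0, 1}.
steps = 200
N = 2 * steps + 2                      # large enough that nothing wraps
psi = np.zeros((N, 2), dtype=complex)
psi[N // 2] = [1 / np.sqrt(2), 1j / np.sqrt(2)]   # symmetric initial coin

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard coin operator

for _ in range(steps):
    psi = psi @ H.T                     # apply the coin at every site
    psi[:, 0] = np.roll(psi[:, 0], -1)  # coin-0 component moves left
    psi[:, 1] = np.roll(psi[:, 1], +1)  # coin-1 component moves right

prob = (np.abs(psi) ** 2).sum(axis=1)
x = np.arange(N) - N // 2
sigma = np.sqrt((prob * x**2).sum() - (prob * x).sum() ** 2)
# Ballistic spreading means sigma ~ t^(1/d_w) with d_w = 1, i.e. sigma/t
# approaches a constant (about 0.54 for the Hadamard walk).
print("sigma / t =", sigma / steps)
```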


Higashi S., Laboratório Nacional de Computação Científica
BMC Genomics | Year: 2012

An essential step of a metagenomic study is taxonomic classification, that is, the identification of the taxonomic lineage of the organisms in a given sample. The taxonomic classification process involves a series of decisions. Currently, in the context of metagenomics, such decisions are usually based on empirical studies that consider one specific type of classifier. In this study we propose a general framework for analyzing the impact that several decisions can have on the classification problem. Instead of focusing on any specific classifier, we define a generic score function that provides a measure of the difficulty of the classification task. Using this framework, we analyze the impact of the following parameters on the taxonomic classification problem: (i) the length of the n-mers used to encode the metagenomic sequences, (ii) the similarity measure used to compare sequences, and (iii) the type of taxonomic classification, which can be conventional or hierarchical, depending on whether the classification process occurs in a single shot or in several steps following the taxonomic tree. We defined a score function that measures the degree of separability of the taxonomic classes under a given configuration induced by the parameters above. We conducted an extensive computational experiment and found that reasonable values for the parameters of interest are (i) intermediate values of n, the length of the n-mers; (ii) any similarity measure, since all of them resulted in similar scores; and (iii) the hierarchical strategy, which performed better in all cases. As expected, short n-mers generate low configuration scores because they give rise to frequency vectors that represent distinct sequences in a similar way. On the other hand, large values of n result in sparse frequency vectors that represent similar metagenomic fragments differently, also leading to low configuration scores. Regarding the similarity measure, in contrast to our expectations, varying the measure did not change the configuration scores significantly. Finally, the hierarchical strategy was more effective than the conventional strategy, which suggests that, instead of using a single classifier, one should adopt multiple classifiers organized as a hierarchy.
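
Parameters (i) and (ii) are easy to picture concretely. The sketch below, using made-up fragments rather than the study's data and cosine similarity as one arbitrary choice of measure, encodes DNA fragments as n-mer frequency vectors for a few values of n; it also shows how large n yields the sparse vectors mentioned in the abstract.

```python
import numpy as np
from itertools import product

def nmer_frequencies(seq, n):
    """Frequency vector of one fragment over all 4**n DNA n-mers."""
    index = {''.join(p): i for i, p in enumerate(product('ACGT', repeat=n))}
    v = np.zeros(4 ** n)
    for i in range(len(seq) - n + 1):
        v[index[seq[i:i + n]]] += 1
    return v / v.sum()

def cosine(u, v):
    """One possible similarity measure; others can be swapped in."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

frags = {                                  # made-up fragments
    "a1": "ACGTACGTTGCAACGTAGCTAGCTAACGTTGCA",
    "a2": "ACGTACGATGCAACGTAGCTAGGTAACGTTGCA",   # near-copy of a1
    "b1": "GGGGCCCCGGGGCCCTGGGGCCCCGGGACCCCG",   # different composition
}
# For large n (here 4**4 = 256 bins vs ~30 n-mers per fragment) the
# vectors become sparse, the effect discussed in the abstract.
for n in (2, 3, 4):
    vecs = {k: nmer_frequencies(s, n) for k, s in frags.items()}
    print(f"n={n}: sim(a1,a2)={cosine(vecs['a1'], vecs['a2']):.3f}  "
          f"sim(a1,b1)={cosine(vecs['a1'], vecs['b1']):.3f}")
```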
