Sharples W.,Monash University | Moresi L.,Monash University | Cooper K.,Washington State University | Sunter P.,Victorian Partnership for Advanced Computing
Lecture Notes in Electrical Engineering | Year: 2012

Building software that consistently delivers high performance across a range of computer clusters is a challenging exercise for developers, as clusters come with specialized architectures and differing queuing policies and costs. Given that the optimal code configuration for a particular model on any machine is difficult for developers and end-users alike to predict, we have developed a test that provides instructions for optimal code configuration, is instantly comprehensible, and does not bombard the user with technical details. The test takes the form of a 'personality type' resonant with users' everyday experience of colleagues in the workplace. A given cluster is deemed suitable for development and/or production, and for small/composite models and/or large/complex ones. To help users of our software choose an efficient configuration of the code, we convert the personality assessment result into a series of optimization instructions based on their cluster's personality type. © 2012 Springer-Verlag.
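The mapping the abstract describes, from a cluster's assessed "personality type" to concrete configuration advice, can be sketched as a simple lookup. This is an illustrative sketch only, not the authors' code; the personality axes and the advice strings are assumptions invented for the example.

```python
# Hypothetical personality axes: a cluster suits development or production,
# and small/composite or large/complex models. Advice text is illustrative.
ADVICE = {
    ("development", "small"): "Use short queues; favour quick turnaround over peak throughput.",
    ("development", "large"): "Prototype at reduced resolution before scaling up.",
    ("production", "small"): "Pack many small jobs per node to amortise queue wait.",
    ("production", "large"): "Request full nodes and tune the MPI decomposition to the interconnect.",
}

def optimisation_instructions(role: str, model_scale: str) -> str:
    """Translate a personality assessment result into a configuration instruction."""
    return ADVICE[(role, model_scale)]

print(optimisation_instructions("production", "large"))
```

The point of the table-driven design is the one the abstract makes: the user sees a single readable instruction, not the technical details that produced it.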


O'Callaghan S.,University of Melbourne | De Souza D.P.,University of Melbourne | Isaac A.,University of Melbourne | Wang Q.,NICTA | And 9 more authors.
BMC Bioinformatics | Year: 2012

Background: Gas chromatography-mass spectrometry (GC-MS) is a technique frequently used in targeted and non-targeted measurements of metabolites. Most existing software tools for processing raw instrument GC-MS data tightly integrate the processing methods with a graphical user interface, facilitating interactive data processing. While interactive processing remains critically important in GC-MS applications, high-throughput studies increasingly dictate the need for command-line tools suitable for scripting high-throughput, customized processing pipelines. Results: PyMS is a library of functions for processing instrument GC-MS data, developed in Python. PyMS currently provides a complete set of GC-MS processing functions, including reading of standard data formats (ANDI-MS/NetCDF and JCAMP-DX), noise smoothing, baseline correction, peak detection, peak deconvolution, peak integration, and peak alignment by dynamic programming. A novel common ion single quantitation algorithm allows automated, accurate quantitation of GC-MS electron impact (EI) fragmentation spectra when a large number of experiments are being analyzed. PyMS implements parallel processing for by-row and by-column data processing tasks based on the Message Passing Interface (MPI), allowing processing to scale across multiple CPUs in distributed computing environments. A set of specifically designed experiments was performed in-house and used to comparatively evaluate the performance of PyMS and three widely used software packages for GC-MS data processing (AMDIS, AnalyzerPro, and XCMS). Conclusions: PyMS is a novel software package for processing raw GC-MS data, particularly suitable for scripting customized processing pipelines and for data processing in batch mode. PyMS provides limited graphical capabilities and can be used both for routine data processing and for interactive/exploratory data analysis. In real-life GC-MS data processing scenarios, PyMS performs as well as or better than leading software packages. We demonstrate data processing scenarios that are simple to implement in PyMS yet difficult to achieve with many conventional GC-MS data processing software packages. Automated sample processing and quantitation with PyMS can provide substantial time savings compared to more traditional interactive software systems that tightly integrate data processing with the graphical user interface. © 2012 O'Callaghan et al.; licensee BioMed Central Ltd.
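The processing chain the abstract lists (noise smoothing, baseline correction, peak detection) can be sketched as a pipeline of small functions. This is a minimal, self-contained illustration of that chain on a toy signal; the function names and algorithms here are simplified stand-ins, not the PyMS API.

```python
# Toy GC-MS-style pipeline: smooth -> baseline-correct -> detect peaks.
import statistics

def smooth(signal, window=3):
    """Moving-average noise smoothing."""
    half = window // 2
    return [statistics.mean(signal[max(0, i - half):i + half + 1])
            for i in range(len(signal))]

def baseline_correct(signal):
    """Subtract a constant baseline estimated as the signal minimum."""
    base = min(signal)
    return [x - base for x in signal]

def detect_peaks(signal, threshold):
    """Indices of local maxima whose intensity exceeds the threshold."""
    return [i for i in range(1, len(signal) - 1)
            if signal[i] > threshold and signal[i - 1] < signal[i] >= signal[i + 1]]

raw = [10, 11, 10, 30, 55, 30, 11, 10, 12, 40, 70, 41, 12, 10]
corrected = baseline_correct(smooth(raw))
print(detect_peaks(corrected, threshold=20))  # → [4, 10]
```

Because each stage is a plain function over a list, the same chain scripts naturally in batch mode, which is the command-line workflow the abstract argues for.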


Domazetis G.,Clean Coal Technology Ltd. | Domazetis G.,La Trobe University | James B.D.,La Trobe University | Liesegang J.,La Trobe University | And 4 more authors.
Fuel | Year: 2012

The paper presents experimental data on the catalytic steam gasification of brown coal containing aqua-iron species, together with the chemical mechanism(s) at a molecular level. Experiments yielded weight losses from the catalysed reaction of char with steam of 17 wt% at 800 °C and 40 wt% at 900 °C over 15 min, on a dry ash free (daf) basis. Inorganic and organic oxygen, identified in the char samples using XPS, was derived from reactions with steam. High yields of H2 resulted from catalysed reactions between char and steam. Semi-empirical (SE) quantum molecular modelling of reaction routes for high-temperature pyrolysis and steam gasification, using MOPAC, gave results consistent with the experimental data for weight loss, iron species, and the distribution of inorganic and organic oxygen in char samples after reaction with steam. The catalysis mechanism(s) examined are considered a hybrid of organometallic and heterogeneous chemistry, involving iron hydride species that precede H2 formation: oxygen insertion into [Fe-C-] to form [Fe-O-C-], followed by elimination of CO, creating another [Fe-C] site to continue the catalytic cycle. SE modelling indicates that concerted reactions were more energetically favoured. Initial results from molecular dynamics (MD) show a higher concentration of H2O molecules around the active site [Fe-C]. © 2011 Elsevier Ltd. All rights reserved.
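The catalytic cycle the abstract outlines has water-gas stoichiometry (C + H2O → CO + H2), with the iron site regenerated on each turn. A simple bookkeeping loop makes the turnover explicit; this is an illustrative sketch of the cycle's accounting, not the paper's quantum or MD model.

```python
# One cycle per steam molecule: oxygen inserts into [Fe-C] to give [Fe-O-C],
# the hydrogens leave as H2 (via iron hydride species), CO is eliminated,
# and the [Fe-C] site is regenerated for the next turnover.
def run_cycle(n_steam: int):
    site, co, h2 = "[Fe-C]", 0, 0
    for _ in range(n_steam):
        site = "[Fe-O-C]"   # oxygen insertion from H2O
        h2 += 1             # hydrogen released as H2
        co += 1             # CO elimination
        site = "[Fe-C]"     # active site regenerated
    return site, co, h2

print(run_cycle(5))  # → ('[Fe-C]', 5, 5)
```

The equal CO and H2 counts simply restate the 1:1 stoichiometry of carbon gasified by steam; the catalyst's role, as the abstract argues, is to lower the barrier, not to change the balance.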


Lemiale V.,CSIRO | Lemiale V.,Monash University | Muhlhaus,University of Queensland | Meriaux C.,Monash University | And 3 more authors.
International Journal for Numerical and Analytical Methods in Geomechanics | Year: 2011

In this paper, the suitability of rate-dependent constitutive relationships for modelling the rheology of granular materials is investigated, in particular the formation of shear bands as predicted by this approach. First, a rate-dependent model is examined by means of a linear stability analysis. It turns out that at low to moderate strain rates, the orientation of shear bands varies from the so-called Roscoe and Coulomb solutions towards a unique admissible orientation as the so-called inertial number I increases. This effect is confirmed by numerical simulations of a compression test performed with a particle-in-cell finite element program. To further assess the validity of continuum approaches for simulating dense granular flows, a quasi-static fall of a granular column is studied numerically and the results are compared with available experimental data. Satisfactory agreement is obtained at different aspect ratios and for the two materials investigated in this paper, i.e. sand and glass beads. The results reported here demonstrate the relevance of continuum approaches in the modelling of dense granular flows. © 2010 John Wiley & Sons, Ltd.
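The inertial number I that controls the shear-band orientation in the abstract is conventionally defined in the dense granular flow literature as I = γ̇·d/√(P/ρ), from the shear rate γ̇, grain diameter d, confining pressure P, and grain density ρ. A quick sketch, with illustrative parameter values only (they are not taken from the paper):

```python
import math

def inertial_number(shear_rate, grain_diameter, pressure, grain_density):
    """I = gamma_dot * d / sqrt(P / rho); I << 1 indicates quasi-static flow."""
    return shear_rate * grain_diameter / math.sqrt(pressure / grain_density)

# Slow compression of sand-like grains (illustrative values):
I = inertial_number(shear_rate=1e-2,       # 1/s
                    grain_diameter=5e-4,   # 0.5 mm
                    pressure=1e4,          # 10 kPa
                    grain_density=2600.0)  # kg/m^3
print(f"I = {I:.2e}")
```

For these values I is of order 1e-6, deep in the quasi-static regime; the abstract's observation is that shear-band orientation migrates away from the Roscoe and Coulomb predictions as I grows.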


Purss M.B.J.,Geoscience Australia | Lewis A.,Geoscience Australia | Oliver S.,Geoscience Australia | Ip A.,Geoscience Australia | And 6 more authors.
GeoResJ | Year: 2015

Earth Observation data acquired by the Landsat missions are of immense value to the global community and constitute the world's longest continuous civilian Earth Observation program. However, because of the cost of data storage infrastructure, these data have traditionally been stored in raw form on tape, which introduces a data retrieval and processing overhead that limits how efficiently the data can be used. As a consequence, these data have become 'dark data', with only limited use in a piecemeal and labor-intensive manner. The Unlocking the Landsat Archive project was set up in 2011 to address this issue and to help realize the true value and potential of these data. The key outcome of the project was the migration of the raw Landsat data housed in tape archives at Geoscience Australia to High Performance Data facilities hosted by the National Computational Infrastructure (a supercomputer facility located at the Australian National University). Once this migration was completed, the data were calibrated to produce a living and accessible archive of sensor- and scene-independent data products derived from Landsat-5 and Landsat-7 data for the period 1998-2012. The calibrated data were organized into High Performance Data structures, underpinned by ISO/OGC standards and web services, which have opened up a vast range of opportunities to apply these data efficiently across multiple scientific domains. © 2015.
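The reorganization the abstract describes, from per-scene tape retrieval to calibrated products indexed for direct access, can be sketched as a tiled index keyed by spatial cell and acquisition date. This is an illustrative sketch only, not Geoscience Australia's implementation; the scene identifiers, cell size, and coordinates are invented for the example.

```python
# Index calibrated scenes into a "data cube": spatial cell -> time-ordered scenes.
from collections import defaultdict

def cell_for(lon, lat, cell_size=1.0):
    """Map a coordinate to an integer tile cell (cell_size in degrees)."""
    return (int(lon // cell_size), int(lat // cell_size))

cube = defaultdict(list)

def ingest(scene_id, lon, lat, date):
    """Register a calibrated scene under its spatial cell."""
    cube[cell_for(lon, lat)].append((date, scene_id))

ingest("LS5_1998_001", 149.1, -35.3, "1998-01-12")
ingest("LS7_2005_042", 149.4, -35.9, "2005-06-30")
ingest("LS7_2012_310", 133.0, -23.7, "2012-11-05")

# All observations for one cell, in time order -- a constant-time lookup
# instead of a tape retrieval per scene:
print(sorted(cube[(149, -36)]))
```

The design point is the one the abstract makes: once data are calibrated and spatially indexed, a time series for any location becomes a direct query rather than a batch retrieval job.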
