CAMO Software AS

Oslo, Norway

Sivertsen H.K.,University of California at Davis | Ueland O.,Consumer and Sensory science | Westad F.,CAMO Software AS
Food and Nutrition Research | Year: 2010

Background and objectives: Foods high in protein are known to satiate more fully than foods high in other constituents. One challenge with these types of food is the degree of palatability. This study aimed to develop frankfurter-style sausages that would regulate food intake while also being the preferred food choice of the consumer. Design and measures: Sixteen sausage varieties with a commercial (PE% 20) or a higher (PE% 40) amount of protein, modified with vegetable fat (3% rapeseed oil), and smoked or not, underwent a sensory descriptive analysis; this information was used to choose a subsample of four sausages for a satiety test. Twenty-seven subjects were recruited based on liking and frequency of sausage consumption. The participants ranged in age from 20 to 28 years and in body mass index (BMI) from 19.6 to 30.9. They were served a sausage meal for five consecutive days and then filled out a questionnaire describing their feelings of hunger, satiety, fullness, desire to eat and their prospective consumption on a visual analogue scale (VAS), rated right before and right after the meal, every half hour for 4 h until the next meal was served, and right after the second meal. Results and conclusion: The higher-protein sausages were less juicy, oily, fatty and adhesive, but harder and more granular than those with the lower amount of protein. The high-protein sausages were perceived as more satiating during the first 90 min after the first meal. There was some indication of a satiety effect of added oil compared with meat fat. There were no significant differences in liking among the four sausage varieties. © 2010 Hanne K. Sivertsen et al.


PubMed | Statoil, PROOCEANO Servico Oceanografico, University of Sao Paulo and CAMO Software AS
Journal: Integrated Environmental Assessment and Management | Year: 2016

The present article describes the integration of environmental monitoring and discharge data and their interpretation using multivariate statistics, namely principal component analysis (PCA) and partial least squares (PLS) regression. The monitoring was carried out at the Peregrino oil field off the coast of Brazil. One sensor platform and 3 sediment traps were placed on the seabed. The sensors measured current speed and direction, turbidity, temperature, and conductivity. The sediment trap samples were used to determine suspended particulate matter, which was characterized with respect to a number of chemical parameters (26 alkanes, 16 PAHs, N, C, calcium carbonate, and Ba). Data on discharges of drill cuttings and water-based drilling fluid were provided on a daily basis. The monitoring was carried out during 7 campaigns from June 2010 to October 2012, each lasting 2 to 3 months due to the capacity of the sediment traps. The data from the campaigns were preprocessed, combined, and interpreted using multivariate statistics. No systematic difference could be observed between campaigns or traps, despite the fact that the first campaign was carried out before drilling and 1 of the 3 sediment traps was located in an area not expected to be influenced by the discharges. There was a strong covariation between suspended particulate matter and total N and organic C, suggesting that the majority of the sediment samples had a natural and biogenic origin. Furthermore, the multivariate regression showed no correlation between discharges of drill cuttings and sediment trap or turbidity data when taking current speed and direction into consideration. Because of this lack of correlation with discharges from the drilling location, a more detailed evaluation of chemical indicators providing information about origin was carried out, in addition to numerical modeling of dispersion and deposition. The chemical indicators and the modeling of dispersion and deposition support the conclusions from the multivariate statistics. Integr Environ Assess Manag 2016;X:000-000. © 2016 SETAC.
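
As a loose illustration of the kind of PCA and PLS workflow described in this abstract, the sketch below runs both methods on simulated data rather than the Peregrino measurements; the variable counts, preprocessing and numbers of components are assumptions made only for the example.

```python
# Hypothetical sketch: PCA + PLS regression on simulated "sediment trap" data,
# not the actual monitoring dataset or preprocessing used in the paper.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_samples, n_chem = 60, 20                        # e.g. trap samples x chemical parameters
X = rng.normal(size=(n_samples, n_chem))          # simulated chemical characterization
discharge = rng.normal(size=(n_samples, 1))       # simulated daily discharge totals

# PCA on autoscaled chemistry: inspect the scores for grouping by campaign or trap
Xs = StandardScaler().fit_transform(X)
pca = PCA(n_components=3).fit(Xs)
scores = pca.transform(Xs)
print("explained variance ratio:", pca.explained_variance_ratio_)

# PLS regression of chemistry against discharge; a low R^2 would mirror the
# paper's finding of no correlation with the drilling discharges
pls = PLSRegression(n_components=2).fit(Xs, discharge)
print("R^2 (calibration):", pls.score(Xs, discharge))
```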


Westad F.,CAMO Software AS | Marini F.,University of Rome La Sapienza
Analytica Chimica Acta | Year: 2015

In this tutorial, we focus on validation from both a numerical and a conceptual point of view. The procedure often reported in the literature of (repeatedly) dividing a dataset randomly into a calibration and a test set must be applied with care. It can only be justified when there is no systematic stratification of the objects that will affect the validated estimates or figures of merit such as RMSE or R2. The various levels of validation may typically be repeatability, reproducibility, and instrument and raw-material variation. Examples of how one data set can be validated across this background information illustrate that it will affect the figures of merit as well as the dimensionality of the models. Even more important is the robustness of the models for predicting future samples. Another aspect that is brought to attention is validation in terms of the overall conclusions when observing a specific system. One example is to apply several methods for finding the significant variables and see whether there is a consensus subset that also matches what is reported in the literature or expected from the underlying chemistry. © 2015 Elsevier B.V.
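
The contrast the tutorial draws between naive random splitting and validation across background information such as raw-material variation can be sketched roughly as below; the data, the group labels and the number of PLS components are invented for illustration and are not taken from the paper.

```python
# A minimal sketch (not the authors' code): random cross-validation splits vs.
# splits that leave out whole hypothetical raw-material batches.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import KFold, GroupKFold, cross_val_predict

rng = np.random.default_rng(1)
n, p, n_batches = 80, 50, 8
batch = np.repeat(np.arange(n_batches), n // n_batches)   # hypothetical raw-material batches
X = rng.normal(size=(n, p)) + batch[:, None] * 0.5         # batch-to-batch offset
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=n)

pls = PLSRegression(n_components=5)

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((y_true - y_pred.ravel()) ** 2)))

# Random splits ignore the batch structure and tend to give optimistic errors
y_rand = cross_val_predict(pls, X, y, cv=KFold(n_splits=8, shuffle=True, random_state=0))
# Leaving whole batches out validates against future raw-material variation
y_batch = cross_val_predict(pls, X, y, cv=GroupKFold(n_splits=8), groups=batch)

print("RMSECV, random splits :", rmse(y, y_rand))
print("RMSECV, batch-wise CV :", rmse(y, y_batch))
```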


Westad F.,CAMO Software AS | Gidskehaug L.,CAMO Software AS | Swarbrick B.,CAMO Software AS | Flaten G.R.,CAMO Software AS
Chemometrics and Intelligent Laboratory Systems | Year: 2015

Modeling strategies currently in use for the monitoring of batch processes where multivariate data are available have some limitations, particularly for batches where the true starting or end points are not the same on an absolute time scale, or where the progression varies among batches. In this paper, a method capturing these differences and allowing modeling and monitoring of batches in relative time is proposed. Using scores from principal component analysis (PCA) models as a feature space, the new methodology is better able to handle the challenges usually experienced in batch analysis. The feasibility of the relative-time approach is demonstrated using data from a chemical synthesis and a pharmaceutical drying process. © 2015 Elsevier B.V.
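
A much-simplified sketch of the general idea of tracking batch progression in a PCA score space rather than on the clock is given below. It is not the published algorithm; the simulated batches, the nearest-point matching rule and all settings are assumptions for illustration only.

```python
# Simplified illustration: project batch observations into the score space of a
# PCA model built on reference batches, then express progression as relative
# time by matching each observation to the nearest point on the mean reference
# score trajectory.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n_ref_batches, n_time, n_vars = 5, 100, 10

def simulate_batch(speed=1.0):
    # a smooth trajectory in the process variables; 'speed' mimics batches that
    # progress faster or slower on the absolute time scale
    t = np.linspace(0, 1, int(n_time / speed))
    base = np.column_stack([np.sin(2 * np.pi * t * (k + 1) / 4) for k in range(n_vars)])
    return base + rng.normal(scale=0.05, size=base.shape)

ref = np.vstack([simulate_batch() for _ in range(n_ref_batches)])
scaler = StandardScaler().fit(ref)
pca = PCA(n_components=2).fit(scaler.transform(ref))

# mean reference trajectory in score space, indexed by relative time 0..1
ref_scores = pca.transform(scaler.transform(ref)).reshape(n_ref_batches, n_time, 2)
mean_traj = ref_scores.mean(axis=0)
rel_time_axis = np.linspace(0, 1, n_time)

# a new batch that runs 20% slower in clock time
new_scores = pca.transform(scaler.transform(simulate_batch(speed=0.8)))
# relative time = position of the closest point on the mean score trajectory
dists = ((new_scores[:, None, :] - mean_traj[None, :, :]) ** 2).sum(axis=2)
relative_time = rel_time_axis[dists.argmin(axis=1)]
print(relative_time[:10])
```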


Eide I.,Statoil | Neverdal G.,Statoil | Westad F.,CAMO Software AS
Energy and Fuels | Year: 2010

Positive electrospray ionization mass spectrometry (ESI-MS) and multivariate regression (chemometrics) have been used for the identification and quantification of fatty acid methyl ester (FAME) in jet fuel in concentrations from 3 to 35 ppm. The jet fuel samples were injected directly and undiluted into the ion source. Each analysis takes less than 1 min to perform. Calibration series with rapeseed methyl ester (RME) and soybean methyl ester (SME) alone or in combination were used to create regression models with excellent prediction properties. An independent test set with known amounts of RME and SME was made several weeks later, and the regression model was used to predict the concentration of RME and SME with a root-mean-square error of prediction (RMSEP) of 2.6 and 1.2 ppm, respectively. © 2010 American Chemical Society.
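
A rough sketch of the kind of multivariate calibration described here, using simulated mixture "spectra" rather than the actual ESI-MS responses, is shown below; the response profiles, concentration ranges and number of PLS components are assumptions for illustration only.

```python
# Hypothetical sketch: PLS calibration of two analytes from simulated spectra,
# with RMSEP evaluated on an independent test set (not the paper's data).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
n_mz = 200                               # number of m/z channels
profile_rme = rng.random(n_mz)           # pseudo response profile for "RME"
profile_sme = rng.random(n_mz)           # pseudo response profile for "SME"

def spectra(c_rme, c_sme):
    # linear mixing plus noise, loosely mimicking direct-injection responses
    signal = np.outer(c_rme, profile_rme) + np.outer(c_sme, profile_sme)
    return signal + rng.normal(scale=0.5, size=signal.shape)

# calibration series in the 3-35 ppm range and an independent test set
c_cal = rng.uniform(3, 35, size=(30, 2))
c_test = rng.uniform(3, 35, size=(10, 2))
X_cal, X_test = spectra(*c_cal.T), spectra(*c_test.T)

pls = PLSRegression(n_components=3).fit(X_cal, c_cal)
pred = pls.predict(X_test)
rmsep = np.sqrt(((pred - c_test) ** 2).mean(axis=0))
print("RMSEP per analyte (ppm):", rmsep)
```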


Westad F.,CAMO Software AS | Bevilacqua M.,University of Rome La Sapienza | Marini F.,University of Rome La Sapienza
Data Handling in Science and Technology | Year: 2013

In this chapter, a survey of the theory behind the main chemometric methods used for multivariate calibration is presented. Ordinary least squares, multiple linear regression, principal component regression, partial least squares regression and principal covariate regression are discussed in detail. Tools for model diagnostics and model interpretation are presented, together with strategies for variable selection. © 2013 Elsevier B.V.
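
As a minimal, non-authoritative illustration of two of the calibration methods the chapter surveys, the sketch below builds principal component regression as PCA followed by least squares and compares it with PLS regression on simulated collinear data.

```python
# Minimal sketch (not from the chapter): PCR as a PCA + least-squares pipeline
# versus PLS regression, on simulated data with a few underlying factors.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(4)
n, p = 100, 30
latent = rng.normal(size=(n, 3))                       # few underlying factors
X = latent @ rng.normal(size=(3, p)) + 0.1 * rng.normal(size=(n, p))
y = latent @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=n)

pcr = make_pipeline(PCA(n_components=3), LinearRegression()).fit(X, y)
pls = PLSRegression(n_components=3).fit(X, y)

print("PCR R^2:", pcr.score(X, y))
print("PLS R^2:", pls.score(X, y))
```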

