Kristoufek L., Czech Institute of Information Theory and Automation | Kristoufek L., Charles University
Physica A: Statistical Mechanics and its Applications | Year: 2014

In this short report, we investigate the ability of the DCCA coefficient to measure the correlation level between non-stationary series. Based on a wide Monte Carlo simulation study, we show that the DCCA coefficient estimates the correlation coefficient accurately regardless of the strength of non-stationarity (measured by the fractional differencing parameter d). For comparison, we also report the results for the standard Pearson correlation coefficient. The DCCA coefficient dominates the Pearson coefficient for non-stationary series. © 2014 Elsevier B.V.
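The construction behind the DCCA coefficient, the detrended covariance of two integrated series divided by their detrended variances, can be sketched as follows. This is an illustrative implementation, not the paper's code; the function name, the sliding-window variant, and the window size `s` are our own choices:

```python
import numpy as np

def dcca_coefficient(x, y, s):
    """DCCA coefficient: detrended covariance over detrended variances."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    X, Y = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())  # integrated profiles
    t = np.arange(s)
    f_xy = f_xx = f_yy = 0.0
    for i in range(len(X) - s + 1):          # sliding windows of length s
        rx = X[i:i + s] - np.polyval(np.polyfit(t, X[i:i + s], 1), t)
        ry = Y[i:i + s] - np.polyval(np.polyfit(t, Y[i:i + s], 1), t)
        f_xy += np.mean(rx * ry)             # detrended covariance
        f_xx += np.mean(rx * rx)             # detrended variances
        f_yy += np.mean(ry * ry)
    return f_xy / np.sqrt(f_xx * f_yy)

rng = np.random.default_rng(0)
e = rng.standard_normal((2, 2000))
x = e[0]
y = 0.8 * e[0] + 0.6 * e[1]                  # true correlation 0.8
rho = dcca_coefficient(x, y, s=20)
```

By the Cauchy-Schwarz inequality the returned value always lies in [-1, 1], and for the seeded white-noise pair above it should land near the true correlation 0.8.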

Kristoufek L., Czech Institute of Information Theory and Automation | Kristoufek L., Charles University
Physica A: Statistical Mechanics and its Applications | Year: 2014

In the paper, we introduce a new measure of correlation between possibly non-stationary series. As the measure is based on detrending moving-average cross-correlation analysis (DMCA), we label it the DMCA coefficient ρDMCA(λ) with moving-average window length λ. We show analytically that the coefficient ranges between -1 and 1, as a standard correlation does. In a simulation study, we show that the values of ρDMCA(λ) correspond very well to the true correlation between the analyzed series regardless of the (non-)stationarity level. The dependence of the newly proposed measure on other parameters (correlation level, moving-average window length, and time series length) is discussed as well. © 2014 Elsevier B.V. All rights reserved.
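The idea of ρDMCA(λ) can be sketched in a few lines: detrend the integrated profiles with a centred moving average of length λ and take the ratio of the cross-fluctuation to the two auto-fluctuations. A minimal sketch, with the normalization and window handling chosen by us rather than taken from the paper:

```python
import numpy as np

def dmca_coefficient(x, y, lam):
    """DMCA coefficient with a centred moving average of odd length lam."""
    X = np.cumsum(np.asarray(x, float) - np.mean(x))   # integrated profiles
    Y = np.cumsum(np.asarray(y, float) - np.mean(y))
    kernel = np.ones(lam) / lam
    h = (lam - 1) // 2
    dX = X[h:len(X) - h] - np.convolve(X, kernel, mode="valid")  # detrended
    dY = Y[h:len(Y) - h] - np.convolve(Y, kernel, mode="valid")
    return np.mean(dX * dY) / np.sqrt(np.mean(dX ** 2) * np.mean(dY ** 2))

rng = np.random.default_rng(1)
e = rng.standard_normal((2, 5000))
x = e[0]
y = 0.7 * e[0] + np.sqrt(0.51) * e[1]              # true correlation 0.7
rho = dmca_coefficient(x, y, lam=21)
```

As with the DCCA coefficient, the Cauchy-Schwarz inequality keeps the result in [-1, 1]; for this seeded example it should be close to 0.7.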

Sroubek F., Czech Institute of Information Theory and Automation | Milanfar P., University of California at Santa Cruz
IEEE Transactions on Image Processing | Year: 2012

Blind deconvolution, which comprises simultaneous estimation of the blur and the image, is a strongly ill-posed problem. It is by now well known that if multiple images of the same scene are acquired, this multichannel (MC) blind deconvolution problem is better posed and allows blur estimation directly from the degraded images. We improve the MC idea by adding robustness to noise and stability in the case of large blurs or a vastly overestimated blur size. We formulate blind deconvolution as an ℓ1-regularized optimization problem and seek a solution by alternately optimizing with respect to the image and with respect to the blurs. Each optimization step is converted to a constrained problem by variable splitting and then addressed with an augmented Lagrangian method, which permits a simple and fast implementation in the Fourier domain. The rapid convergence of the proposed method is illustrated on synthetically blurred data. Applicability is also demonstrated on the deconvolution of real photos taken by a digital camera. © 2011 IEEE.
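The computational payoff mentioned in the abstract, that variable splitting reduces each step to a quadratic problem solvable in closed form in the Fourier domain, can be illustrated in isolation. The sketch below is not the authors' algorithm: it performs a single regularized Fourier-domain solve with a known blur and a penalty weight `mu` chosen by us, assuming circular boundary conditions:

```python
import numpy as np

rng = np.random.default_rng(2)
img = rng.random((64, 64))                 # toy "sharp" image
psf = np.zeros((64, 64))
psf[:3, :3] = 1.0 / 9.0                    # 3x3 box blur, circular boundary
H = np.fft.fft2(psf)
G = np.fft.fft2(img) * H                   # noise-free blurred observation

mu = 1e-12                                 # tiny quadratic penalty (no noise)
# Closed-form minimizer of ||h * x - g||^2 + mu * ||x||^2, elementwise in
# the Fourier domain -- this is the kind of solve each splitting step needs:
X = np.conj(H) * G / (np.abs(H) ** 2 + mu)
restored = np.real(np.fft.ifft2(X))
err = np.max(np.abs(restored - img))
```

Because the observation here is noise-free and the blur is known, the recovery is essentially exact; in the blind, noisy setting the same solve appears inside the alternating image/blur iterations with nontrivial regularization.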

Karny M., Czech Institute of Information Theory and Automation
Information Sciences | Year: 2016

A high-order Markov chain is a universal model of stochastic relations between discrete-valued variables. The exact estimation of its transition probabilities suffers from the curse of dimensionality: it requires an excessive amount of informative observations as well as extreme memory for storing the corresponding sufficient statistic. The paper bypasses this problem by considering a rich subset of Markov-chain models, namely, mixtures of low-dimensional Markov chains, possibly with external variables. It uses Bayesian approximate estimation suitable for subsequent decision making under uncertainty. The proposed recursive (sequential, one-pass) estimator updates a product of Dirichlet probability densities (pds) used as an approximate posterior pd, projects the result back to this class of pds, and applies an improved data-dependent stabilised forgetting, which counteracts the dangerous accumulation of approximation errors. © 2015 Elsevier Inc. All rights reserved.
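A minimal sketch of a recursive Dirichlet count update with stabilised forgetting, for a plain first-order chain rather than the paper's mixture model; the constant forgetting factor `lam` and the flat prior are our assumptions, standing in for the paper's data-dependent scheme:

```python
import numpy as np

rng = np.random.default_rng(3)
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])                   # true transition matrix
states = [0]
for _ in range(5000):                        # simulate the chain
    states.append(rng.choice(2, p=P[states[-1]]))

lam = 0.995                                  # forgetting factor (1.0 = none)
nu0 = np.ones((2, 2))                        # flat Dirichlet prior counts
nu = nu0.copy()
for s, s_next in zip(states, states[1:]):
    nu[s] = lam * nu[s] + (1 - lam) * nu0[s] # flatten toward the prior
    nu[s, s_next] += 1.0                     # Bayesian count update
P_hat = nu / nu.sum(axis=1, keepdims=True)   # posterior-mean transitions
```

The flattening step bounds the effective count per row (here roughly 1/(1 - lam)), so old data are gradually discounted and approximation errors cannot accumulate without limit.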

Somol P., Czech Institute of Information Theory and Automation | Novovicova J., Czech Institute of Information Theory and Automation
IEEE Transactions on Pattern Analysis and Machine Intelligence | Year: 2010

Stability (robustness) of feature selection methods is a topic of recent interest and often neglected importance, with a direct impact on the reliability of machine learning systems. We investigate the problem of evaluating the stability of feature selection processes yielding subsets of varying size. We introduce several novel feature selection stability measures and adjust some existing measures in a unifying framework that offers broad insight into the stability problem. We study in detail the properties of the considered measures and demonstrate on various examples what information about the feature selection process can be gained. We also introduce an alternative approach to feature selection evaluation in the form of measures that enable comparing the similarity of two feature selection processes. These measures enable comparing, e.g., the output of two feature selection methods or two runs of one method with different parameters. The information obtained using the considered stability and similarity measures is shown to be usable for assessing feature selection methods (or criteria) as such. © 2006 IEEE.
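The paper's specific measures are not reproduced in this abstract; as a simple illustration of the general idea, the sketch below scores a collection of feature-selection runs by their mean pairwise Jaccard similarity, a common baseline for subsets of varying size (not necessarily one of the measures proposed in the paper):

```python
def pairwise_jaccard_stability(subsets):
    """Mean pairwise Jaccard similarity of the selected feature subsets."""
    subsets = [set(s) for s in subsets]
    sims = [len(a & b) / len(a | b)
            for i, a in enumerate(subsets)
            for b in subsets[i + 1:]]
    return sum(sims) / len(sims)

identical = pairwise_jaccard_stability([{1, 2, 3}] * 4)          # fully stable
disjoint = pairwise_jaccard_stability([{1, 2}, {3, 4}, {5, 6}])  # unstable
overlap = pairwise_jaccard_stability([{1, 2, 3}, {2, 3, 4}])     # in between
```

The same pairwise machinery also compares two different selection processes: feed it one subset from each method instead of repeated runs of one method.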

Bakule L., Czech Institute of Information Theory and Automation
Annual Reviews in Control | Year: 2014

This paper reviews the state of the art in decentralized networked control systems, with an emphasis on the event-triggered approach. Models, or agents, with the dynamics of linear continuous-time time-invariant state-space systems are considered. They serve as the framework for network phenomena within two basic structures: I/O-oriented systems and interaction-oriented systems with disjoint subsystems are distinguished. The focus is on the presentation of recent decentralized control design and co-design methods, which offer effective tools to overcome specific difficulties caused mainly by network imperfections. Such side effects include communication constraints, variable sampling, time-varying transmission delays, packet dropouts, and quantization. Decentralized time-triggered methods are briefly discussed. The review deals mainly with decentralized event-triggered methods. In particular, stabilizing controller-observer event-based controller design as well as decentralized state controller co-design are presented within the I/O-oriented structures of large-scale complex systems. The sampling instants depend in this case only on local information offered by the local feedback loops. Minimum sampling-time conditions are discussed. Special attention is focused on the interaction-oriented system architecture. A model-based approach combined with event-based state-feedback controller design is presented, where the event thresholds are fully decentralized. Finally, several selected open decentralized control problems are briefly offered as recent research challenges. © 2014 Elsevier Ltd. All rights reserved.
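The basic mechanism of event-triggered feedback, transmitting a new state sample only when the local error exceeds a relative threshold, can be shown on a toy scalar plant. All numbers below (plant, gain, threshold `sigma`, step size) are our own illustrative choices, not taken from the review:

```python
# Scalar plant dx/dt = a*x + b*u with state feedback u = -k * x_hat, where
# x_hat is the last transmitted state sample.
a, b, k, sigma, dt = 1.0, 1.0, 3.0, 0.3, 0.001
x, x_hat, events, steps = 1.0, 1.0, 0, 10000
for _ in range(steps):
    x += dt * (a * x - b * k * x_hat)      # Euler step of the closed loop
    if abs(x - x_hat) > sigma * abs(x):    # event-triggering condition
        x_hat = x                          # sample, transmit, reset the error
        events += 1
```

The loop stabilizes the unstable plant while transmitting only a small fraction of the simulated time steps, which is exactly the communication saving that motivates event-triggered design.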

Kroupa T., Czech Institute of Information Theory and Automation
International Journal of Approximate Reasoning | Year: 2012

It will be shown that probabilities of infinite-valued events represented by formulas in Łukasiewicz propositional logic are in one-to-one correspondence with tight probability measures over rational polyhedra in the unit hypercube. This result generalizes recent work on rational measures of polyhedra and provides an elementary geometric approach to reasoning under uncertainty with states in Łukasiewicz logic. © 2011 Elsevier Inc. All rights reserved.

Kroupa T., Czech Institute of Information Theory and Automation
Soft Computing | Year: 2012

We generalise belief functions to many-valued events, which are represented by elements of the Lindenbaum algebra of infinite-valued Łukasiewicz propositional logic. Our approach is based on mass assignments used in the Dempster-Shafer theory of evidence. A generalised belief function is totally monotone and has a Choquet integral representation with respect to a unique belief measure on Boolean events. © 2012 Springer-Verlag.

Karny M., Czech Institute of Information Theory and Automation
Information Sciences | Year: 2014

Bayesian learning provides a firm theoretical basis for the design and exploitation of algorithms in data-stream processing (preprocessing, change detection, hypothesis testing, clustering, etc.). Primarily, it relies on recursive parameter estimation of firmly bounded complexity. As a rule, it has to approximate the exact posterior probability density (pd), which comprises unreduced information about the estimated parameter. In the recursive treatment of the data stream, the latest approximate pd is usually updated using the treated parametric model and the newest data, and then approximated again. The fact that approximation errors may accumulate over time is mostly neglected in estimator design and, at most, checked ex post. The paper inspects estimator design with respect to this error accumulation and concludes that a sort of forgetting (pd flattening) is an indispensable part of reliable approximate recursive estimation. The conclusion results from a Bayesian problem formulation complemented by the minimum Kullback-Leibler divergence principle. The claims of the paper are supported by a straightforward analysis, by elaboration of the proposed estimator for widely applicable parametric models, and by numerical illustration. © 2014 Elsevier Inc. All rights reserved.
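A toy illustration of why forgetting (pd flattening) matters in recursive estimation: a conjugate Beta-Bernoulli estimator tracking a drifting parameter. The model, the drift scenario, and the constant factor `lam = 0.99` are our assumptions, not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(4)
# A drifting Bernoulli stream: success probability 0.2, then 0.8.
data = np.concatenate([rng.random(2000) < 0.2, rng.random(2000) < 0.8])

def beta_posterior_mean(stream, lam):
    """Recursive Beta(a, b) update; lam < 1 flattens the posterior each step."""
    a = b = 1.0                          # flat prior
    for obs in stream:
        a, b = lam * a + obs, lam * b + (1 - obs)
    return a / (a + b)                   # posterior mean

no_forget = beta_posterior_mean(data, lam=1.0)   # weighs all data equally
forget = beta_posterior_mean(data, lam=0.99)     # effective window ~100 obs
```

Without forgetting, the estimate averages both regimes and settles near 0.5; with flattening, the effective sample size stays bounded and the estimate tracks the current value 0.8.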

Kristoufek L., Czech Institute of Information Theory and Automation
Physical Review E - Statistical, Nonlinear, and Soft Matter Physics | Year: 2014

We discuss two alternative spectrum-based estimators of the bivariate Hurst exponent in the power-law cross-correlations setting, the cross-periodogram and local X-Whittle estimators, as generalizations of their univariate counterparts. As the spectrum-based estimators depend on the part of the spectrum taken into consideration during estimation, we also provide a simulation study showing the performance of the estimators under a varying bandwidth parameter as well as varying correlation between the processes and varying process specification. These estimators are less biased than the existing averaged periodogram estimator, which, however, has slightly lower variance. The spectrum-based estimators can serve as a good complement to the popular time-domain estimators. © 2014 American Physical Society.
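A cross-periodogram estimator of this type can be sketched as a log-log regression over the lowest `m` Fourier frequencies, using the scaling I_xy(f) ~ f^(1 - 2*Hxy). The normalization, the bandwidth `m`, and the use of the periodogram magnitude are our illustrative choices, not the paper's exact specification:

```python
import numpy as np

def cross_periodogram_hurst(x, y, m):
    """Bivariate Hurst exponent from the slope of the log cross-periodogram."""
    n = len(x)
    fx = np.fft.rfft(x - np.mean(x))
    fy = np.fft.rfft(y - np.mean(y))
    freqs = np.arange(1, m + 1) / n               # lowest m nonzero frequencies
    I_xy = np.abs(fx[1:m + 1] * np.conj(fy[1:m + 1])) / (2 * np.pi * n)
    slope = np.polyfit(np.log(freqs), np.log(I_xy), 1)[0]
    return (1.0 - slope) / 2.0                    # I_xy(f) ~ f^(1 - 2*Hxy)

rng = np.random.default_rng(5)
e = rng.standard_normal((2, 4096))
x = e[0]
y = 0.7 * e[0] + np.sqrt(0.51) * e[1]             # correlated white noise
h = cross_periodogram_hurst(x, y, m=512)          # expect Hxy near 0.5
```

For correlated white noise the cross-spectrum is flat, so the fitted slope is near zero and the estimate is near 0.5; the choice of the bandwidth `m` drives the bias-variance trade-off the abstract describes.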
