Bonardi C., University of Nottingham | Mondragon E., Center for Computational and Animal Learning Research | Brilot B., Newcastle University | Jennings D.J., Newcastle University
Quarterly Journal of Experimental Psychology | Year: 2015

Two experiments investigated the effect of the temporal distribution form of a stimulus on its ability to produce an overshadowing effect. The overshadowing stimuli were either of the same duration on every trial, or of a variable duration drawn from an exponential distribution with the same mean duration as that of the fixed stimulus. Both experiments provided evidence that a variable-duration stimulus was less effective than a fixed-duration cue at overshadowing conditioning to a target conditioned stimulus (CS); moreover, this effect was independent of whether the overshadowed CS was fixed or variable. The findings presented here are consistent with the idea that the strength of the association between CS and unconditioned stimulus (US) is, in part, determined by the temporal distribution form of the CS. These results are discussed in terms of time-accumulation and trial-based theories of conditioning and timing. © 2014 The Experimental Psychology Society.
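The core manipulation is easy to make concrete: both cue types share the same mean duration, but the variable cue's duration is redrawn on every trial. A minimal Python sketch of the two sampling schemes (the 10 s mean and the trial count are illustrative assumptions, not values from the experiments):

```python
import numpy as np

rng = np.random.default_rng(0)

MEAN_DURATION = 10.0  # hypothetical mean stimulus duration in seconds
N_TRIALS = 1000

# Fixed-duration cue: identical duration on every trial.
fixed = np.full(N_TRIALS, MEAN_DURATION)

# Variable-duration cue: exponential with the same mean, so only the
# distribution form differs between the two conditions.
variable = rng.exponential(scale=MEAN_DURATION, size=N_TRIALS)

print(f"fixed:    mean={fixed.mean():.1f} s, sd={fixed.std():.1f} s")
print(f"variable: mean={variable.mean():.1f} s, sd={variable.std():.1f} s")
```

Matching the means while varying only the form is what lets the experiments attribute differences in overshadowing to the temporal distribution of the cue rather than to overall cue exposure.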


Alonso E., City University London | Mondragon E., Center for Computational and Animal Learning Research
ICAART 2013 - Proceedings of the 5th International Conference on Agents and Artificial Intelligence | Year: 2013

In this position paper we propose to enhance learning algorithms, reinforcement learning in particular, for agents and for multi-agent systems, with the introduction of concepts and mechanisms borrowed from associative learning theory. It is argued that existing algorithms are limited in that they adopt a very restricted view of what "learning" is, partly due to the constraints imposed by the Markov assumption upon which they are built. Interestingly, psychological theories of associative learning account for a wide range of social behaviours, making associative learning an ideal framework to model learning in single-agent scenarios as well as in multi-agent domains.
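As a point of reference, the kind of algorithm at issue is the standard one-step Q-learning update, whose dependence on the current state alone embodies the Markov assumption the authors question. A minimal sketch (parameter values and state/action names are illustrative, not from the paper):

```python
from collections import defaultdict

Q = defaultdict(float)    # Q[(state, action)] -> estimated return
alpha, gamma = 0.1, 0.95  # illustrative learning rate and discount factor

def q_update(s, a, r, s_next, actions):
    """One-step Q-learning: bootstrap from the best action in s_next.
    The update conditions only on the current state s -- this is the
    Markov assumption."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

q_update(s="tone", a="press", r=1.0, s_next="food", actions=["press", "wait"])
```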


Mondragon E., Center for Computational and Animal Learning Research | Alonso E., City University London | Fernandez A., Rey Juan Carlos University | Gray J., City University London
Computer Methods and Programs in Biomedicine | Year: 2013

This paper introduces R&W Simulator version 4, which extends previous work by incorporating context simulation within standard Pavlovian designs. This addition allows the assessment of: (1) context-stimulus competition, by treating contextual cues as ordinary background stimuli present throughout the whole experimental session; (2) summation, by computing compound stimuli with contextual cues as an integrating feature, with and without the addition of specific configural cues; and (3) contingency effects in causal learning. These new functionalities broaden the range of experimental designs that the simulator can replicate, including designs that produce recovery-from-extinction phenomena (e.g., renewal effects). In addition, the new version permits specifying probe trials among standard trials and extracting their values. © 2013 Elsevier Ireland Ltd.
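The context-stimulus competition in point (1) follows directly from the Rescorla-Wagner rule, ΔV_X = α_X β (λ − ΣV), once the context is entered as a cue present on every trial. A minimal Python sketch of that mechanism (the saliences, learning rate, and trial count are illustrative assumptions, not values from the simulator):

```python
def rw_trial(V, present, alphas, beta, lam):
    """One Rescorla-Wagner trial: all present cues share the prediction error."""
    total = sum(V[c] for c in present)    # summed prediction of present cues
    error = lam - total                   # common prediction error
    for c in present:
        V[c] += alphas[c] * beta * error  # delta-V = alpha * beta * (lambda - sum V)

V = {"CS": 0.0, "context": 0.0}
alphas = {"CS": 0.5, "context": 0.2}      # context assumed less salient than the CS
beta, lam = 0.8, 1.0                      # US learning rate and asymptote

for _ in range(50):                       # reinforced trials: CS + context -> US
    rw_trial(V, ["CS", "context"], alphas, beta, lam)

print(V)  # the context acquires some strength, competing with the CS
```

Because the context shares the prediction error on every trial, it accrues associative strength and thereby limits what the nominal CS can gain, which is exactly the competition the simulator exposes.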


Alonso E., University of London | Mondragon E., Center for Computational and Animal Learning Research
Proceedings of the 11th International Conference on Cognitive Modeling, ICCM 2012 | Year: 2012

Classical conditioning is at the heart of most learning processes. It is thus essential that we develop accurate models of conditioning phenomena and data. In this paper we review the different uses of computational models in exploring conditioning, as simulators and as psychological models by proxy.


Alonso E., Northampton Square | Fairbank M., Northampton Square | Mondragon E., Center for Computational and Animal Learning Research
Proceedings of the 11th International Conference on Cognitive Modeling, ICCM 2012 | Year: 2012

It is well known that, in one form or another, the variational Principle of Least Action (PLA) governs Nature. Although traditionally invoked to explain physical phenomena, PLA has also been used to account for biological phenomena and even natural selection. However, its value in studying psychological processes has not been fully explored. In this paper we present a computational model, value-gradient learning, based on Pontryagin's Minimum Principle (a version of PLA used in optimal control theory), that applies to both classical and operant conditioning.
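For context, Pontryagin's Minimum Principle can be stated compactly. For dynamics ẋ = f(x, u) with running cost L(x, u), the optimal control minimizes a Hamiltonian pointwise while a costate λ evolves backwards; its relevance here is that λ equals the gradient of the optimal cost-to-go, the quantity value-gradient learning approximates. A standard statement of the principle, sketched in LaTeX:

```latex
H(x, u, \lambda) = L(x, u) + \lambda^{\top} f(x, u)
\qquad
u^{*}(t) = \operatorname*{arg\,min}_{u} H\bigl(x^{*}(t), u, \lambda(t)\bigr)
\qquad
\dot{\lambda}(t) = -\left.\frac{\partial H}{\partial x}\right|_{x^{*}(t),\, u^{*}(t)}
```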
