Adaptive Cognitive Systems
www.adcogsys.com
Boulder, CO, United States

Pilly P.K.,Adaptive Cognitive Systems | Grossberg S.,Adaptive Cognitive Systems | Seitz A.R.,University of California at Riverside
Vision Research | Year: 2010

Studies of perceptual learning have focused on aspects of learning that are related to early stages of sensory processing. However, conclusions that perceptual learning results in low-level sensory plasticity are controversial, since such learning may also be attributed to plasticity in later stages of sensory processing or in readout from sensory to decision stages, or to changes in high-level central processing. To address this controversy, we developed a novel random dot motion (RDM) stimulus to target motion cells selective to contrast polarity by ensuring the motion direction information arises only from signal dot onsets and not their offsets, and used these stimuli in the paradigm of task-irrelevant perceptual learning (TIPL). In TIPL, learning is achieved in response to a stimulus by subliminally pairing that stimulus with the targets of an unrelated training task. In this manner, we are able to probe learning for an aspect of motion processing thought to be a function of directional V1 simple cells with a learning procedure that dissociates the learned stimulus from the decision processes relevant to the training task. Our results show direction-selective learning for the designated contrast polarity that does not transfer to the opposite contrast polarity. This polarity specificity was replicated in a double training procedure in which subjects were additionally exposed to the opposite polarity. Taken together, these results suggest that TIPL for motion stimuli may occur at the stage of directional V1 simple cells. Finally, a theoretical explanation is provided to understand the data. © 2009 Elsevier Ltd.
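The onset-based RDM manipulation can be illustrated with a minimal coherent-motion generator. This is a generic sketch, not the paper's actual stimulus (which additionally masks dot offsets so that only signal-dot onsets carry direction information); the unit-square wraparound and all parameter names are simplifications of my own:

```python
import numpy as np

def rdm_frames(n_dots=100, n_frames=10, coherence=0.5,
               direction=0.0, step=0.02, seed=0):
    """Minimal random-dot-motion generator (illustrative sketch).

    A fraction `coherence` of dots steps in `direction` each frame;
    the remaining noise dots are redrawn at random positions.
    Positions live in the unit square and wrap around.
    """
    rng = np.random.default_rng(seed)
    pos = rng.random((n_dots, 2))
    n_signal = int(round(coherence * n_dots))
    d = step * np.array([np.cos(direction), np.sin(direction)])
    frames = [pos.copy()]
    for _ in range(n_frames - 1):
        pos[:n_signal] = (pos[:n_signal] + d) % 1.0           # coherent step
        pos[n_signal:] = rng.random((n_dots - n_signal, 2))   # noise dots redrawn
        frames.append(pos.copy())
    return np.stack(frames)
```

At 50% coherence, half the dots carry the motion signal while the rest provide no consistent direction information.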


Dobres J.,Adaptive Cognitive Systems | Seitz A.R.,University of California at Riverside
Journal of Vision | Year: 2010

Classification image analysis is a psychophysical technique in which noise components of stimuli are analyzed to produce an image that reveals critical features of a task. Here we use classification images to gain greater understanding of perceptual learning. To achieve reasonable classification images within a single session, we developed an efficient classification image procedure that employed designer noise and a low-dimensional stimulus space. Subjects were trained across ten sessions to detect the orientation of a grating masked in noise, with an eleventh, test, session conducted using a stimulus orthogonal to the trained stimulus. As with standard perceptual learning studies, subjects showed improvements in performance metrics of accuracy, threshold, and reaction times. The clarity of the classification images and their correlation to an ideal target also improved across training sessions in an orientation-specific manner. Furthermore, image-based analyses revealed aspects of performance that could not be observed with standard performance metrics. Subjects with threshold improvements learned to use pixels across a wider area of the image and, as opposed to subjects without threshold improvements, showed improvements in both the bright and dark parts of the image. We conclude that classification image analysis is an important complement to traditional metrics of perceptual learning. © ARVO.
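The core of classification image analysis is reverse correlation: average the noise fields separately by response and subtract. The sketch below uses a simulated linear observer with a known template (the 8×8 grid, vertical-bar template, and trial count are illustrative choices, not the study's actual design):

```python
import numpy as np

def classification_image(noises, responses):
    """Reverse-correlate noise fields with binary responses.

    The classification image is the mean noise on trials the observer
    reported 'signal' minus the mean on 'no signal' trials; it
    estimates the internal template the observer used.
    """
    noises = np.asarray(noises, float)
    responses = np.asarray(responses, bool)
    return noises[responses].mean(axis=0) - noises[~responses].mean(axis=0)

# Simulated observer with a known linear template recovers it (up to noise):
rng = np.random.default_rng(1)
template = np.zeros((8, 8))
template[:, 3:5] = 1.0                      # vertical-bar template
noises = rng.standard_normal((5000, 8, 8))
responses = (noises * template).sum(axis=(1, 2)) > 0
ci = classification_image(noises, responses)
```

With enough trials, the recovered image correlates strongly with the true template, which is what the clarity and correlation-to-ideal-target measures in the abstract track across training.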


Best B.J.,Adaptive Cognitive Systems
Computational and Mathematical Organization Theory | Year: 2013

We developed an end-to-end process for inducing models of behavior from expert task performance through in-depth case study. A subject matter expert (SME) performed navigational and adversarial tasks in a virtual tank combat simulation, using the dTank and Unreal platforms. Using eye tracking and Cognitive Task Analysis, we identified the key goals pursued by the SME and the attributes the SME relied on, including an egocentric spatial representation and on-the-fly re-representation of terrain in qualitative terms such as "safe" and "risky". We demonstrated methods for automatic extraction of these qualitative higher-order features from combinations of surface features present in the simulation, producing a terrain map that was visually similar to the SME-annotated map. The application of decision-tree and instance-based machine learning methods to the transformed task data supported prediction of SME task selection with greater than 95% accuracy, and SME action selection at a frequency of 10 Hz with greater than 63% accuracy, with real-time constraints placing limits on algorithm selection. A complete processing model is presented for a path driving task, with the induced generative model deviating from the SME-chosen path by less than 2 meters on average. The derived attributes also enabled environment portability, with path driving models induced from dTank performance and deployed in Unreal demonstrating equivalent accuracy to those induced and deployed completely within Unreal. © 2012 The Author(s).
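Instance-based action selection of the kind described can be sketched as k-nearest-neighbor lookup over logged (state, action) pairs. The feature names and the toy training log below are invented for illustration; they are not the paper's actual attribute set:

```python
import numpy as np

def knn_action(train_X, train_y, x, k=3):
    """Predict the expert's action for state x by majority vote over
    the k recorded states closest in feature space (instance-based
    learning sketch)."""
    d = np.linalg.norm(train_X - x, axis=1)
    nearest = np.argsort(d)[:k]
    vals, counts = np.unique(train_y[nearest], return_counts=True)
    return vals[np.argmax(counts)]

# Toy log: hypothetical qualitative features (cover distance, threat
# bearing, terrain risk) paired with the action the expert took.
X = np.array([[0.1, 0.0, 0.2],
              [0.9, 0.8, 0.9],
              [0.2, 0.1, 0.1],
              [0.8, 0.9, 0.8]])
y = np.array(["advance", "take_cover", "advance", "take_cover"])
```

Because prediction is a nearest-neighbor search at each decision point, the 10 Hz real-time constraint mentioned in the abstract translates directly into a budget on the size and dimensionality of the instance store.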


Grossberg S.,Adaptive Cognitive Systems | Markowitz J.,Adaptive Cognitive Systems | Cao Y.,Adaptive Cognitive Systems
Neural Networks | Year: 2011

Visual object recognition is an essential accomplishment of advanced brains. Object recognition needs to be tolerant, or invariant, with respect to changes in object position, size, and view. In monkeys and humans, a key area for recognition is the anterior inferotemporal cortex (ITa). Recent neurophysiological data show that ITa cells with high object selectivity often have low position tolerance. We propose a neural model whose cells learn to simulate this tradeoff, as well as ITa responses to image morphs, while explaining how invariant recognition properties may arise in stages due to processes across multiple cortical areas. These processes include the cortical magnification factor, multiple receptive field sizes, and top-down attentive matching and learning properties that may be tuned by task requirements to attend to either concrete or abstract visual features with different levels of vigilance. The model predicts that data from the tradeoff and image morph tasks emerge from different levels of vigilance in the animals performing them. This result illustrates how different vigilance requirements of a task may change the course of category learning, notably the critical features that are attended and incorporated into learned category prototypes. The model outlines a path for developing an animal model of how defective vigilance control can lead to symptoms of various mental disorders, such as autism and amnesia. © 2011 Elsevier Ltd.
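The role of vigilance in category learning can be illustrated with the standard ART-style match criterion: an input resonates with a stored prototype only if the matched features account for at least a vigilance fraction of the input. This is a generic fuzzy-ART match rule, not the specific model of the paper:

```python
import numpy as np

def art_match(x, w, vigilance):
    """ART-style vigilance test (illustrative): input x resonates with
    category prototype w only if |min(x, w)| / |x| >= vigilance.
    High vigilance forces narrow, concrete categories; low vigilance
    permits broad, abstract ones."""
    x = np.asarray(x, float)
    w = np.asarray(w, float)
    return bool(np.minimum(x, w).sum() / x.sum() >= vigilance)

x = np.array([1.0, 1.0, 0.0, 0.0])   # input with two active features
w = np.array([1.0, 0.0, 0.0, 0.0])   # prototype matching only one of them
```

Here the match value is 0.5, so the same input/prototype pair resonates under low vigilance but triggers a reset, and hence a finer category, under high vigilance, which is the mechanism the model invokes to explain the tradeoff and image-morph data.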


Cao Y.,Adaptive Cognitive Systems | Grossberg S.,Adaptive Cognitive Systems | Markowitz J.,Adaptive Cognitive Systems
Neural Networks | Year: 2011

All primates depend for their survival on being able to rapidly learn about and recognize objects. Objects may be visually detected at multiple positions, sizes, and viewpoints. How does the brain rapidly learn and recognize objects while scanning a scene with eye movements, without causing a combinatorial explosion in the number of cells that are needed? How does the brain avoid the problem of erroneously classifying parts of different objects together at the same or different positions in a visual scene? In monkeys and humans, a key area for such invariant object category learning and recognition is the inferotemporal cortex (IT). A neural model is proposed to explain how spatial and object attention coordinate the ability of IT to learn invariant category representations of objects that are seen at multiple positions, sizes, and viewpoints. The model clarifies how interactions within a hierarchy of processing stages in the visual brain accomplish this. These stages include the retina, lateral geniculate nucleus, and cortical areas V1, V2, V4, and IT in the brain's What cortical stream, as they interact with spatial attention processes within the parietal cortex of the Where cortical stream. The model builds upon the ARTSCAN model, which proposed how view-invariant object representations are generated. The positional ARTSCAN (pARTSCAN) model proposes how the following additional processes in the What cortical processing stream also enable position-invariant object representations to be learned: IT cells with persistent activity, and a combination of normalizing object category competition and a view-to-object learning law which together ensure that unambiguous views have a larger effect on object recognition than ambiguous views. The model explains how such invariant learning can be fooled when monkeys, or other primates, are presented with an object that is swapped with another object during eye movements to foveate the original object. 
The swapping procedure is predicted to prevent the reset of spatial attention, which would otherwise keep the representations of multiple objects from being combined by learning. Li and DiCarlo (2008) have presented neurophysiological data from monkeys showing how unsupervised natural experience in a target swapping experiment can rapidly alter object representations in IT. The model quantitatively simulates the swapping data by showing how the swapping procedure fools the spatial attention mechanism. More generally, the model provides a unifying framework, and testable predictions in both monkeys and humans, for understanding object learning data using neurophysiological methods in monkeys, and spatial attention, episodic learning, and memory retrieval data using functional imaging methods in humans. © 2011 Elsevier Ltd.
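The normalizing object category competition mentioned above can be illustrated with the steady state of a generic shunting on-center off-surround network; the parameters A and B below are placeholders, not the model's fitted values:

```python
import numpy as np

def shunting_normalize(inputs, A=1.0, B=1.0):
    """Steady state of a shunting competitive network (illustrative):
    x_i = B * I_i / (A + sum_j I_j). Total activity is bounded, so
    category cells compete for a fixed resource and relative input
    strength is preserved."""
    I = np.asarray(inputs, float)
    return B * I / (A + I.sum())
```

Normalization of this kind is one way a view-to-object learning law can let strong, unambiguous views dominate the learned object representation without any single view's activity growing without bound.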


Ivey R.,Adaptive Cognitive Systems | Bullock D.,Adaptive Cognitive Systems | Grossberg S.,Adaptive Cognitive Systems
Neural Networks | Year: 2011

In order to create spatial plans in a complex and changing world, organisms need to rapidly adapt to novel configurations of obstacles that impede simple routes to goal acquisition. Some animals can mentally create successful multistep spatial plans in new visuo-spatial layouts that preclude direct, one-segment routes to goal acquisition. Lookahead multistep plans can, moreover, be fully developed before an animal executes any step in the plan. What neural computations suffice to yield preparatory multistep lookahead plans during spatial cognition of an obstructed two-dimensional scene? To address this question, we introduce a novel neuromorphic system for spatial lookahead planning in which a feasible sequence of actions is prepared before movement begins. The proposed system combines neurobiologically plausible mechanisms of recurrent shunting competitive networks, visuo-spatial diffusion, and inhibition-of-return. These processes iteratively prepare a multistep trajectory to the desired goal state in the presence of obstacles. The planned trajectory can be stored using a primacy gradient in a sequential working memory and enacted by a competitive queuing process. The proposed planning system is compared with prior planning models. Simulation results demonstrate system robustness to environmental variations. Notably, the model copes with many configurations of obstacles that lead other visuo-spatial planning models into selecting undesirable or infeasible routes. Our proposal is inspired by mechanisms of spatial attention and planning in primates. Accordingly, our simulation results are compared with neurophysiological and behavioral findings from relevant studies of spatial lookahead behavior. © 2010 Elsevier Ltd.
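A stripped-down version of diffusion-based lookahead planning: activity spreads from the goal through free cells, and the route is read out by greedy ascent on the resulting field. This is a crude stand-in for the paper's shunting-network, diffusion, and inhibition-of-return machinery, with an invented 5×5 test arena:

```python
import numpy as np

def diffuse_plan(grid, start, goal, iters=500):
    """Diffuse activity from the goal through free cells (obstacles
    clamp to 0), then read out a path by repeatedly stepping to the
    neighboring cell with the highest activity."""
    free = (np.asarray(grid) == 0)
    u = np.zeros_like(grid, dtype=float)
    for _ in range(iters):
        up = np.pad(u, 1)
        u = 0.25 * (up[:-2, 1:-1] + up[2:, 1:-1] + up[1:-1, :-2] + up[1:-1, 2:])
        u[~free] = 0.0
        u[goal] = 1.0                      # goal is a constant source
    path, pos = [start], start
    while pos != goal and len(path) < u.size:
        r, c = pos
        nbrs = [(r + dr, c + dc) for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                if 0 <= r + dr < u.shape[0] and 0 <= c + dc < u.shape[1]]
        pos = max(nbrs, key=lambda p: u[p])
        path.append(pos)
    return path

grid = np.zeros((5, 5), dtype=int)
grid[1:4, 2] = 1                           # wall blocking the direct route
path = diffuse_plan(grid, start=(2, 0), goal=(2, 4))
```

Because the diffused field has no local maxima in free space other than the goal, greedy ascent yields a complete multistep route around the wall before any movement is executed, which is the sense in which the plan is prepared in advance.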


Grant
Agency: Department of Defense | Branch: Navy | Program: STTR | Phase: Phase I | Award Amount: 100.00K | Year: 2012

A subtle and often unrecognized assumption lies behind the use of the terms salience and perception when they appear in the context of human information processing. Although it has become commonplace to blend the language of low-level physiological process with the language of high-level cognition, there is a substantive theoretical step to be taken before we can claim that, for instance, the ability to notice a blinking light or to pick out a single voice in a crowded room is somehow continuous with the ability to sense the attitude of an airplane from only instrument readings, or to see a changing battlefield dynamic given the visual clutter of the plan view display of a common operating picture. A significant research opportunity exists to make the relationship between low-level perception and high-level information processing explicit and quantifiable. To do this we propose to reconcile a leading-edge theory of human perception in complex work environments with the process-level mechanisms of a state-of-the-art cognitive modeling architecture. Specifically, we will establish a formal mapping between the constructs of Wickens' SEEV statistical model of perception and select generative structures of ACT-R, an executable, computational theory of cognition.
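The SEEV model scores each area of interest by its Salience, Effort, Expectancy, and Value; attention is predicted to be allocated in proportion to that score. The sketch below ranks areas by an additive SEEV score; the coefficient values, attribute scales, and area names are placeholders, not calibrated model parameters:

```python
def seev_attention(areas, s=1.0, ef=1.0, ex=1.0, v=1.0):
    """Rank areas of interest by an additive SEEV score (illustrative):
    attention weight grows with Salience, Expectancy (event rate), and
    Value, and shrinks with the Effort of moving attention there."""
    def score(a):
        return (s * a["salience"] - ef * a["effort"]
                + ex * a["expectancy"] + v * a["value"])
    return sorted(areas, key=score, reverse=True)

# Hypothetical cockpit areas of interest with made-up attribute values:
aois = [
    {"name": "attitude_indicator",
     "salience": 0.6, "effort": 0.1, "expectancy": 0.9, "value": 0.9},
    {"name": "radio_panel",
     "salience": 0.3, "effort": 0.5, "expectancy": 0.2, "value": 0.4},
]
```

A mapping of the kind the proposal describes would tie each of these four terms to a generative structure in ACT-R (for example, expectancy to learned event frequencies) rather than leaving them as free regression weights.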


Grant
Agency: Department of Defense | Branch: Navy | Program: SBIR | Phase: Phase I | Award Amount: 80.00K | Year: 2013

We aim to develop Artificial Intelligence (AI) software that is generalized across entity-level simulation systems and can be used to generate behaviors in training simulations that are both contextually and tactically realistic. The near-term goal for Phase I is to define and develop a concept for improving simulation training using AI in VBS2 and JSAF, focusing on improving the behavioral realism of the AI while reducing the need for support personnel.


Grant
Agency: Department of Defense | Branch: Navy | Program: SBIR | Phase: Phase II | Award Amount: 741.56K | Year: 2010

A tremendous need exists for intelligent agents that can be created and edited without resorting to intensive knowledge engineering and programming, and which exhibit believable and variable behavior in the training contexts in which they are deployed. This proposal describes a novel method for creating and editing intelligent agents' behavior based on instance-based modeling and statistical learning methods that learn from the example of a person interacting in a virtual environment. These methods, which leverage structured knowledge in a hybrid symbolic-subsymbolic approach, support automatic incorporation of assessment feedback directly from the interface into an agent, allowing a domain expert to interact with an agent in a closed feedback loop through participation in a virtual environment, instead of through lengthy reprogramming by a knowledge engineering expert.
