Novamente LLC

United States


Ikle M.,Adams State College | Goertzel B.,Novamente LLC
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2011

Inspired by a broader perspective viewing intelligent system dynamics in terms of the geometry of "cognitive spaces," we conduct a preliminary investigation of the application of information-geometry based learning to ECAN (Economic Attention Networks), the component of the integrative OpenCog AGI system concerned with attention allocation and credit assignment. We generalize Amari's "natural gradient" algorithm for network learning to encompass ECAN and other recurrent networks, and apply it to small example cases of ECAN, demonstrating a dramatic improvement in the effectiveness of attention allocation compared to prior (Hebbian-learning-like) ECAN methods. Scaling up the method to deal with realistically-sized ECAN networks as used in OpenCog remains for the future, but should be achievable using sparse matrix methods on GPUs. © 2011 Springer-Verlag Berlin Heidelberg.
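The core of Amari's method is to precondition the ordinary gradient by the inverse Fisher information matrix, so the update follows the steepest descent direction in the information-geometric sense. A minimal sketch (illustrative only; the function name and the damping trick are assumptions, not the paper's ECAN implementation):

```python
import numpy as np

def natural_gradient_step(theta, grad, fisher, lr=0.1, damping=1e-3):
    """One natural-gradient update in Amari's sense: precondition the
    ordinary gradient by the inverse Fisher information matrix.

    theta  : parameter vector (e.g. flattened network weights)
    grad   : ordinary gradient of the loss at theta
    fisher : estimated Fisher information matrix at theta
    """
    # Damping keeps the estimated Fisher matrix invertible.
    f = fisher + damping * np.eye(len(theta))
    # Solve F x = grad instead of forming an explicit inverse.
    nat_grad = np.linalg.solve(f, grad)
    return theta - lr * nat_grad
```

When the Fisher matrix is the identity, the update reduces to ordinary gradient descent; the abstract's note about sparse matrix methods on GPUs reflects that forming and solving with the Fisher matrix is the expensive step at realistic scale.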


Goertzel B.,Novamente LLC
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2013

Human cognition is portrayed as involving a highly flexible, self-organizing "cognitive network", closely coupled with a number of more specific intelligent "body-system networks" - e.g. those associated with the perceptual and motor systems, the heart, the digestive system, the liver, and the immune and endocrine systems, all of which have been shown to have their own adaptive intelligence. These specialized intelligent networks provide the general-purpose cognitive network with critical structural and dynamical inductive biasing. It is argued that early-stage AGI systems must involve roughly comparable inductive biasing, though not necessarily achieved in the same way. © 2013 Springer-Verlag.


Goertzel B.,Novamente LLC
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2011

A novel approach to computer vision is outlined, involving the use of imprecise probabilities to connect a deep learning based hierarchical vision system with both local feature detection based preprocessing and symbolic cognition based guidance. The core notion is to cause the deep learning vision system to utilize imprecise rather than single-point probabilities, and use local feature detection and symbolic cognition to affect the confidence associated with particular imprecise probabilities, thus modulating the amount of credence the deep learning system places on various observations and guiding its pattern recognition/formation activity. The potential application to the hybridization of the DeSTIN, SIFT and OpenCog systems is described in moderate detail. The underlying ideas are even more broadly applicable, to any computer vision approach with a significant probabilistic component which satisfies certain broad criteria. © 2011 Springer-Verlag Berlin Heidelberg.
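The mechanism described, external evidence modulating the confidence attached to an imprecise probability, can be sketched by representing each probability as an interval and letting a confidence signal shrink it toward its midpoint. The function name and the linear shrinkage rule are assumptions for illustration, not the paper's formulation:

```python
def adjust_credence(lo, hi, confidence):
    """Shrink an interval probability [lo, hi] according to an external
    confidence signal in [0, 1] (e.g. from symbolic cognition or local
    feature detection). confidence = 1 collapses the interval to its
    midpoint (full credence); confidence = 0 leaves it unchanged.
    """
    assert 0.0 <= lo <= hi <= 1.0 and 0.0 <= confidence <= 1.0
    mid = (lo + hi) / 2.0
    half = (hi - lo) / 2.0 * (1.0 - confidence)
    return (mid - half, mid + half)
```

A wider interval tells the deep learning layer to place less weight on the corresponding observation, which is the modulation mechanism the abstract describes.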


Goertzel B.,Novamente LLC
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2013

Lojban is a constructed language based on predicate logic, with no syntactic ambiguity and carefully controllable semantic ambiguity. It originated in the middle of the last century, and there is now a community of several hundred human speakers. It is argued here that Lojban++, a minor variation on Lojban, would be highly suitable as a language for communication between humans and early-stage AGI systems. Software tools useful for the deployment of Lojban++ in this manner are described, and their development proposed. © 2013 Springer-Verlag.


Goertzel B.,Novamente LLC
Proceedings of the 2013 IEEE Symposium on Computational Intelligence for Human-Like Intelligence, CIHLI 2013 - 2013 IEEE Symposium Series on Computational Intelligence, SSCI 2013 | Year: 2013

A novel "Mind-World Correspondence Principle" is proposed - which, given an environment and goal-set, heavily constrains the structure of any intelligent system capable of efficiently achieving those goals in that environment. This is proposed as a potential step toward a "general theory of general intelligence." An approximate gloss of the proposed principle is: "For a mind to work intelligently toward certain goals in a certain world, there should be a nice mapping from goal-directed sequences of world-states into sequences of mind-states, where 'nice' means that a world-state-sequence W composed of two parts W1 and W2, gets mapped into a mind-state-sequence M composed of two corresponding parts M1 and M2." The principle is formulated using the mathematical language of category theory, but refinement of the principle into a precise theorem is left for later work. Discussion is given regarding the use of the principle to explain common properties of real-world intelligences such as the presence of hierarchical structure. Declarative, procedural and episodic memory systems, as present in human minds and human-like cognitive architectures, are formalized using category theory in a manner consistent with the Mind-World Correspondence Principle. The notion of development of minds is similarly formulated, using the category-theoretic notion of natural transformations. It is suggested that this approach to cognitive analysis and modeling may eventually be useful for deriving and refining practical designs for Artificial General Intelligence. © 2013 IEEE.
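The decomposition property in the gloss has a natural category-theoretic rendering: the "nice mapping" behaves like a functor that respects composition of state sequences. The sketch below is one reading of the abstract, not the paper's formal statement:

```latex
% Let \mathcal{W} be a category whose morphisms are goal-directed
% world-state sequences (composition = concatenation), and \mathcal{M}
% the analogous category of mind-state sequences. A "nice mapping" is
% then a functor
\Phi : \mathcal{W} \to \mathcal{M},
% whose functoriality is exactly the decomposition property in the gloss:
\Phi(W_2 \circ W_1) \;=\; \Phi(W_2) \circ \Phi(W_1) \;=\; M_2 \circ M_1 .
```

On this reading, "development of minds" via natural transformations would compare two such functors for the same world, which matches the abstract's use of that notion.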


Goertzel B.,Novamente LLC
International Journal of Machine Consciousness | Year: 2011

A novel theory of reflective consciousness, will and self is presented, based on modeling each of these entities using self-referential mathematical structures called hypersets. Pattern theory is used to argue that these exotic mathematical structures may meaningfully be considered as parts of the minds of physical systems, even finite computational systems. The hyperset models presented are hypothesized to occur as patterns within the "moving bubble of attention" of the human brain and any roughly human-mind-like AI system. These ideas appear to be compatible with both panpsychist and materialist views of consciousness, and probably other views as well. Their relationship with the CogPrime AI design and its implementation in the OpenCog software framework is elucidated in detail. © 2011 World Scientific Publishing Company.
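Hypersets are non-well-founded sets (e.g. the canonical Omega = {Omega}, a set whose sole member is itself); following Aczel, they are standardly modeled as directed graphs in which membership edges may form cycles. A toy sketch of that graph representation (illustrative only; the class and function names are assumptions, not from the paper):

```python
class HNode:
    """A hyperset modeled Aczel-style as a node in a directed graph:
    edges point to the node's members, and cycles are permitted."""
    def __init__(self):
        self.members = []

def reaches(start, target, seen=None):
    """Is `target` reachable from `start` along membership edges?
    For a well-founded set this can never return True with start == target."""
    seen = set() if seen is None else seen
    for m in start.members:
        if m is target:
            return True
        if id(m) not in seen:
            seen.add(id(m))
            if reaches(m, target, seen):
                return True
    return False

# The self-referential hyperset Omega = {Omega}:
omega = HNode()
omega.members.append(omega)
```

The point of the representation is that finite cyclic graphs, and hence finite computational systems, can encode the self-referential structures the theory uses to model reflective consciousness.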


Goertzel B.,Novamente LLC
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2012

Bridging the gap between symbolic and subsymbolic representations is a - perhaps the - key obstacle along the path from the present state of AI achievement to human-level artificial general intelligence. One approach to bridging this gap is hybridization - for instance, incorporation of a subsymbolic system and a symbolic system into an integrative cognitive architecture. Here we present a detailed design for an implementation of this approach, via integrating a version of the DeSTIN deep learning system into OpenCog, an integrative cognitive architecture including rich symbolic capabilities. This is a "tight" integration, in which the symbolic and subsymbolic aspects exert detailed real-time influence on each other's operations. An earlier technical report has described in detail the revisions to DeSTIN needed to support this integration, which are mainly along the lines of making it more "representationally transparent," so that its internal states are easier for OpenCog to understand. © 2012 Springer-Verlag.


Goertzel B.,Novamente LLC
Artificial General Intelligence - Proceedings of the Third Conference on Artificial General Intelligence, AGI 2010 | Year: 2010

Two new formal definitions of intelligence are presented, the "pragmatic general intelligence" and "efficient pragmatic general intelligence." Largely inspired by Legg and Hutter's formal definition of "universal intelligence," the goal of these definitions is to capture a notion of general intelligence that more closely models that possessed by humans and practical AI systems, which combine an element of universality with a certain degree of specialization to particular environments and goals. Pragmatic general intelligence measures the capability of an agent to achieve goals in environments, relative to prior distributions over goal and environment space. Efficient pragmatic general intelligence measures this same capability, but normalized by the amount of computational resources utilized in the course of the goal-achievement. A methodology is described for estimating these theoretical quantities based on observations of a real biological or artificial system operating in a real environment. Finally, a measure of the "degree of generality" of an intelligent system is presented, allowing a rigorous distinction between "general AI" and "narrow AI."
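The two definitions can be sketched in the Legg-Hutter style, replacing the Solomonoff prior with the prior distributions the abstract mentions. The notation below is assumed for illustration, not taken from the paper:

```latex
% With \nu a prior over environments e, \gamma a conditional prior over
% goals g, and V^{\pi}_{g,e} the expected goal-achievement of agent \pi,
% pragmatic general intelligence is the prior-weighted achievement:
\Pi(\pi) \;=\; \sum_{e}\sum_{g} \nu(e)\,\gamma(g \mid e)\; V^{\pi}_{g,e},
% and the efficient variant normalizes each term by the computational
% resources the agent expends pursuing that goal in that environment:
\Pi_{\mathrm{eff}}(\pi) \;=\; \sum_{e}\sum_{g} \nu(e)\,\gamma(g \mid e)\;
    \frac{V^{\pi}_{g,e}}{\mathrm{cost}(\pi, g, e)}.
```

Setting \(\nu\) to a Solomonoff-style complexity prior and making goals implicit in the environments recovers the flavor of Legg and Hutter's universal intelligence as a special case.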


Goertzel B.,Novamente LLC
Journal of Experimental and Theoretical Artificial Intelligence | Year: 2014

A high-level artificial general intelligence (AGI) architecture called goal-oriented learning meta-architecture (GOLEM) is presented, along with an informal but careful argument that GOLEM may be capable of preserving its initial goals while radically improving its general intelligence. As a meta-architecture, GOLEM can be wrapped around a variety of different base-level AGI systems, and also has a role for a powerful narrow-AI subcomponent as a probability estimator. The motivation underlying these ideas is the desire to create AGI systems fulfilling the multiple criteria of being: massively and self-improvingly intelligent, probably beneficial and almost surely not destructive. © 2014 Taylor & Francis.
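The goal-preservation claim rests on a gatekeeping loop: candidate self-modifications are adopted only when the predictor subcomponent judges them at least as goal-satisfying as the current system. A toy sketch of that control structure (all names hypothetical; the real GOLEM design is far richer than this):

```python
def golem_loop(system, propose, predict_goal_score, steps=100):
    """Toy sketch of GOLEM-style conservative self-improvement.

    system             : current base-level system (any representation)
    propose            : generates a candidate modification of a system
    predict_goal_score : the narrow-AI probability-estimator stand-in,
                         scoring a system against the fixed initial goals
    """
    for _ in range(steps):
        candidate = propose(system)
        # Adopt the modification only if it is predicted to serve the
        # original goals at least as well as the current system does.
        if predict_goal_score(candidate) >= predict_goal_score(system):
            system = candidate
    return system
```

Because the goal-scoring function is held fixed while the base system changes, improvement is monotone with respect to the initial goals, which is the intuition behind the "preserving its initial goals while radically improving" claim.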


Goertzel B.,Novamente LLC
International Journal of Machine Consciousness | Year: 2012

What does it mean for one mind to be a different version of another one, or a natural continuation of another one? Or put differently: when can two minds sensibly be considered versions of one another? This question occurs in relation to mind uploading, where one wants to be able to assess whether an approximate upload constitutes a genuine continuation of the uploaded mind or not. It also occurs in the context of the rapid mental growth that is likely to follow mind uploading, at least in some cases - here the question is, when is growth so rapid or discontinuous as to cause the new state of the mind to no longer be sensibly considered a continuation of the previous one? Provisional answers to these questions are sketched, using mathematical tools drawn from category theory and probability theory. It is argued that if a mind's growth is "approximately smooth", in a certain sense, then there will be "continuity of self" and the mind will have a rough comprehension of its growth and change process as it occurs. The treatment is somewhat abstract, and intended to point a direction for ongoing research rather than as a definitive practical solution. These ideas may have practical value in future, however, for those whose values favor neither strict self-preservation nor unrestricted growth, but rather growth that is constrained to be at least quasi-comprehensible to the minds doing the growing. © 2012 World Scientific Publishing Company.
