Austin, TX, United States

Grant
Agency: Cordis | Branch: FP7 | Program: CP | Phase: ICT-2007.4.2 | Award Amount: 10.68M | Year: 2008

Current Semantic Web reasoning systems do not scale to the requirements of their hottest applications, such as analyzing data from millions of mobile devices, dealing with terabytes of scientific data, and content management in enterprises with thousands of knowledge workers. We will build the Large Knowledge Collider (LarKC, for short, pronounced "lark"), a platform for massive distributed incomplete reasoning that will remove these scalability barriers. This will be achieved by:
- Enriching the current logic-based Semantic Web reasoning methods with methods from information retrieval, machine learning, information theory, databases and probabilistic reasoning,
- Employing cognitively inspired approaches and techniques such as spreading activation, focus of attention, reinforcement, habituation, relevance reasoning, and bounded rationality,
- Building a distributed reasoning platform and realising it both on a high-performance computing cluster and via computing at home.
The consortium is an interdisciplinary team of engineers and researchers in Computing Science, Web Science and Cognitive Science, well qualified to realize this ambitious vision. The Large Knowledge Collider will be an open architecture. Researchers and practitioners from outside the consortium will be encouraged to develop and plug in their own components to drive parts of the system. This will make the Large Knowledge Collider a generic platform, and not just a single reasoning engine.
The success of the Large Knowledge Collider will be demonstrated in three end-user case studies. The first case study is from the telecom sector. It aims at real-time aggregation and analysis of location data obtained from mobile phones carried by the population of a city, in order to regulate city infrastructure functions such as public transport and to provide context-sensitive navigation information. The other two case studies are in the life-sciences domain, related respectively to dr
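The open, pluggable architecture described above is essentially a pipeline of interchangeable reasoning components, where selection stages deliberately trade completeness for scale. As a minimal sketch of that idea (the `Plugin`, `RelevanceSelecter`, and `TransitiveReasoner` names are hypothetical illustrations, not LarKC's actual API), third-party components might be chained like this:

```python
from abc import ABC, abstractmethod

class Plugin(ABC):
    """Hypothetical base class for pluggable pipeline components."""
    @abstractmethod
    def process(self, statements: list[tuple]) -> list[tuple]:
        ...

class RelevanceSelecter(Plugin):
    """Selects a relevant subset of triples: incomplete reasoning that
    trades completeness for scalability by working on a sample."""
    def __init__(self, keyword: str, limit: int = 1000):
        self.keyword, self.limit = keyword, limit

    def process(self, statements):
        hits = [t for t in statements if any(self.keyword in str(x) for x in t)]
        return hits[: self.limit]

class TransitiveReasoner(Plugin):
    """Toy logic stage: one step of transitive closure over subClassOf."""
    def process(self, statements):
        sub = {(s, o) for s, p, o in statements if p == "subClassOf"}
        derived = {(a, "subClassOf", c)
                   for a, b in sub for b2, c in sub if b == b2}
        return list(set(statements) | derived)

def run_pipeline(plugins, statements):
    for plugin in plugins:  # each stage narrows or enriches the data
        statements = plugin.process(statements)
    return statements

triples = [("Tram", "subClassOf", "PublicTransport"),
           ("PublicTransport", "subClassOf", "Infrastructure")]
out = run_pipeline([RelevanceSelecter("Transport"), TransitiveReasoner()], triples)
print(out)  # includes the derived ("Tram", "subClassOf", "Infrastructure")
```

Because every stage shares one narrow interface, outside researchers could swap in their own selecters or reasoners without touching the rest of the platform, which is the point of the "generic platform, not a single reasoning engine" claim.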


Schneider D., Cycorp Inc. | Witbrock M., Cycorp Inc.
WWW 2015 Companion - Proceedings of the 24th International Conference on World Wide Web | Year: 2015

In this paper, we discuss Semantic Construction Grammar (SCG), a system developed over the past several years to facilitate translation between natural language and logical representations. Crucially, SCG is designed to support a variety of different methods of representation, ranging from those that are fairly close to the NL structure (e.g. so-called 'logical forms') to those that are quite different from the NL structure, with higher-order and high-arity relations. Semantic constraints and checks on representations are integral to the process of NL understanding with SCG, and are easily carried out due to SCG's integration with Cyc's Knowledge Base and inference engine [1], [2].
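To make the abstract concrete, here is a toy sketch of the construction-grammar idea it describes: a pattern paired with a logical form, gated by a semantic type check. Everything here (the `CONSTRUCTION` structure, the tiny `TYPE_OF` lookup standing in for a knowledge base) is an invented illustration, not Cycorp's SCG implementation.

```python
import re

# Stand-in for KB-backed type knowledge (hypothetical, not Cyc's KB).
TYPE_OF = {"Austin": "City", "Texas": "State", "happiness": "Feeling"}

CONSTRUCTION = {
    "pattern": re.compile(r"(?P<city>\w+) is the capital of (?P<state>\w+)"),
    "logical_form": "(capitalCityOf {state} {city})",  # higher-arity forms possible
    "checks": {"city": "City", "state": "State"},      # semantic constraints
}

def parse(sentence: str):
    m = CONSTRUCTION["pattern"].search(sentence)
    if not m:
        return None
    bindings = m.groupdict()
    # Semantic check: each binding must have the type the construction demands.
    for role, required in CONSTRUCTION["checks"].items():
        if TYPE_OF.get(bindings[role]) != required:
            return None  # reading rejected, as a KB-backed check would do
    return CONSTRUCTION["logical_form"].format(**bindings)

print(parse("Austin is the capital of Texas"))     # (capitalCityOf Texas Austin)
print(parse("happiness is the capital of Texas"))  # None: fails the type check
```

Note how the logical form's argument order differs from the surface word order, a small instance of the paper's point that target representations need not mirror NL structure.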


Lenat D.B., Cycorp Inc. | Durlach P.J., Advanced Distributed Learning Initiative
International Journal of Artificial Intelligence in Education | Year: 2014

We often understand something only after we've had to teach or explain it to someone else. Learning-by-teaching (LBT) systems exploit this phenomenon by playing the role of tutee. BELLA, our sixth-grade mathematics LBT system, departs from other LBT systems in several ways: (1) It was built not from scratch but by very slightly extending the ontology and knowledge base of an existing large AI system, Cyc. (2) The "teachable agent" - Elle - begins not with a tabula rasa but rather with an understanding of the domain content which is close to the human student's. (3) Most importantly, Elle never actually learns anything directly from the human tutor! Instead, there is a super-agent (Cyc) which already knows the domain content extremely well. BELLA builds up a mental model of the human student by observing them interact with Elle. It uses that model Socratically to decide what Elle's current mental model should be (what concepts and skills Elle should already know, and what sorts of mistakes it should make) so as to best help the user overcome their current confusions. All changes to the Elle model are made by BELLA, not by the user - the only learning going on is BELLA learning more about the user - but from the user's point of view it often appears as though Elle were attending to them and learning from them. Our main hypothesis is that this may prove to be a particularly powerful and effective illusion to maintain. © 2014 International Artificial Intelligence in Education Society.
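The loop the abstract describes, observe the human, update a student model, then stage the tutee's apparent gaps, can be sketched schematically as below. This is an invented illustration (names like `StudentModel` and the update rule are assumptions), not BELLA's actual code.

```python
from dataclasses import dataclass, field

@dataclass
class StudentModel:
    # skill -> estimated probability the human student has mastered it
    mastery: dict = field(default_factory=lambda: {"fractions": 0.5, "decimals": 0.5})

    def observe(self, skill: str, answer_correct: bool, rate: float = 0.2):
        """Nudge the mastery estimate toward the observed evidence."""
        target = 1.0 if answer_correct else 0.0
        self.mastery[skill] += rate * (target - self.mastery[skill])

def choose_tutee_gap(model: StudentModel) -> str:
    """Pick the skill the tutee should 'get wrong' next, so that teaching
    it targets the human student's own weakest point."""
    return min(model.mastery, key=model.mastery.get)

model = StudentModel()
model.observe("fractions", answer_correct=False)
model.observe("decimals", answer_correct=True)
print(choose_tutee_gap(model))  # 'fractions': Elle now errs there on purpose
```

The only state that changes is the model of the human; the tutee's apparent knowledge is derived from it, which is exactly the "illusion" the paper's hypothesis concerns.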


A method, system and computer program product for identifying documents of interest. A profile of a subscriber is created based on information obtained about the subscriber. Subscriber-interest determination rules are used to identify potential topics of interest of the subscriber based on the subscriber's profile as well as on external knowledge sources. Each potential interest of the subscriber may be represented by a pointer that references a concept. Additionally, concepts in the documents published by the publishers are identified. A comparison may be made between the concepts identified in the published documents and the concepts representing the subscriber's potential topics of interest. Documents with matching concepts may then be identified as potentially being of interest to the subscriber. In this manner, documents of interest are more accurately identified for the document seeker.
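The matching step the abstract describes reduces to set intersection over concept identifiers. A hedged sketch of that core idea (the data model below is assumed for illustration; it is not the patented implementation):

```python
# Subscriber interests and document contents both reduced to concept IDs.
subscriber_interests = {"MachineLearning", "Ontology", "KnowledgeBase"}

documents = {
    "doc-1": {"Ontology", "Alignment"},
    "doc-2": {"Cooking", "Travel"},
    "doc-3": {"KnowledgeBase", "MachineLearning"},
}

def documents_of_interest(interests: set[str], docs: dict[str, set[str]]):
    """Return doc ids whose extracted concepts intersect the interests,
    ranked by the number of shared concepts."""
    scored = [(doc_id, len(concepts & interests))
              for doc_id, concepts in docs.items()]
    return [doc_id for doc_id, score in
            sorted(scored, key=lambda x: -x[1]) if score > 0]

print(documents_of_interest(subscriber_interests, documents))  # ['doc-3', 'doc-1']
```

Representing interests as concepts rather than keywords is what lets the external knowledge sources contribute: a rule can add a concept the subscriber never typed.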


Grant
Agency: Department of Defense | Branch: Missile Defense Agency | Program: STTR | Phase: Phase I | Award Amount: 99.96K | Year: 2012

The process of an MDA stakeholder specifying a needed modeling and simulation system has many problems: the specifications may be ambiguous, overly specific, overly general, contradictory, etc. Symbolic knowledge-based representation and reasoning systems have matured to the point where they can ameliorate this. E.g., an extension of Cyc can help a Marines ISR-E requirements officer prepare a JCIDS CDD needs document spec'ing a complex future system. We propose an analogous application of Cyc here. Part of Phase I will involve eliciting the needed decision rules and heuristics in the MDA M&S domain, and, along the way, augmenting the system's ontology and English lexicon. What makes this R&D is that, unlike the JCIDS task, there is less preexisting structure in this domain, so part of Phase I will involve Cycorp eliciting the knowledge from GMU (and, where there are still gaps, the two of them eliciting it from MDA) to create in effect a detailed instruction manual (which, incidentally, might be of use to humans learning to do this task, not just to our Cyc-based automated assistant).
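The decision rules and heuristics to be elicited would flag exactly the specification defects the proposal lists. As a minimal sketch of what such rules might look like (these two heuristics are invented examples, not Cyc's actual rules):

```python
def check_ambiguous(req: str):
    """Flag vague, unquantified terms (an invented rule of thumb)."""
    vague = {"adequate", "appropriate", "sufficient", "user-friendly"}
    return [w for w in vague if w in req.lower()]

def check_untestable(req: str) -> bool:
    """Crude heuristic: a 'shall' requirement with no measurable quantity."""
    return "shall" in req.lower() and not any(c.isdigit() for c in req)

requirements = [
    "The simulation shall provide adequate fidelity.",
    "The simulation shall update tracks at 10 Hz.",
]
for r in requirements:
    problems = []
    if check_ambiguous(r):
        problems.append(f"ambiguous terms: {check_ambiguous(r)}")
    if check_untestable(r):
        problems.append("no measurable criterion")
    print(r, "->", problems or "OK")
```

In a Cyc-style system these checks would be logical rules over a parsed representation of the requirement rather than string tests, but the division of labor is the same: rules detect defects, and the elicited "instruction manual" says how to repair them.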


Grant
Agency: Department of Defense | Branch: Missile Defense Agency | Program: STTR | Phase: Phase II | Award Amount: 1000.00K | Year: 2013

We will develop automatic ontology-aligning software, leveraging the enormous existing Cyc AI system to drive the formulation and ranking (pro/con argumentation) of hypotheses about term-term relationships, especially where the relationship is not a simple 1-to-1 correspondence. Multiple Cyc micro-theories (contexts) will be created to hold incommensurate alternative mappings; the logical consequences within each will be checked against one another for internal inconsistency and scored for the accuracy of their external predictions about data not yet integrated. Secondly, we will develop a suite of semi-automatic software power tools to help MDA users control, review/navigate, and correct the output of that automatic aligner. Each part (the automatic aligner and the alignment manager toolkit) will be built by augmenting Cyc's existing knowledge base, ontology, inference engine, and UIs, using MDA sources and MDA ground test planning tasks to drive the development and testing. GMU will spearhead liaising with MDA experts, and Cycorp will spearhead the Cyc extensions.
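The key structural idea, keeping incompatible alignment hypotheses in separate contexts and scoring each on pro/con evidence, can be sketched as follows. The structures and example evidence strings below are hypothetical illustrations, not Cyc microtheories or MDA data.

```python
from dataclasses import dataclass, field

@dataclass
class AlignmentContext:
    mapping: dict                      # source term -> target term(s)
    pro: list = field(default_factory=list)
    con: list = field(default_factory=list)

    def score(self) -> int:
        return len(self.pro) - len(self.con)

# Two incommensurate hypotheses about the same source term, kept apart
# so that their logical consequences cannot contaminate each other:
ctx_a = AlignmentContext({"Track": "SensorTrack"})
ctx_b = AlignmentContext({"Track": ["SensorTrack", "FusedTrack"]})  # not 1-to-1

ctx_a.pro.append("string similarity high")
ctx_a.con.append("predicts unit mismatch in held-out test data")
ctx_b.pro += ["string similarity high", "covers both data feeds"]

best = max([ctx_a, ctx_b], key=AlignmentContext.score)
print(best.mapping)  # here the non-1-to-1 mapping wins
```

The "external prediction" scoring the proposal describes corresponds to the con entry above: a context whose consequences contradict not-yet-integrated data loses ground to its rivals.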


News Article | August 9, 2014
Site: techcrunch.com

Editor's note: Catherine Havasi is CEO and co-founder of Luminoso, an artificial intelligence-based text analytics company in Cambridge. Luminoso was founded on nearly a decade of research at the MIT Media Lab on how NLP and machine learning could be applied to text analytics. Catherine also directs the Open Mind Common Sense Project, one of the largest common sense knowledge bases in the world, which she co-founded alongside Marvin Minsky and Push Singh in 1999.

Imagine for a moment that you run into a friend on the street after you return from a vacation in Mexico. "How was your vacation?" your friend asks. "It was wonderful. We're so happy with the trip," you reply. "It wasn't too humid, though the water was a bit cold." No surprises there, right? You and your friend both know that you're referring to the weather in terms of "humidity" and the ocean in terms of "cold." Now imagine you try to have that same conversation with a computer. Your response would be met with something akin to: "Does. Not. Compute."

Part of the problem is that when we humans communicate, we rely on a vast background of unspoken assumptions. Everyone knows that "water is wet" and "people want to be happy," and we assume everyone we meet shares this knowledge. It forms the basis of how we interact and allows us to communicate quickly, efficiently, and with deep meaning.

As advanced as today's technology is, its main shortcoming as it becomes a larger part of daily life is that it does not share these assumptions. We find ourselves talking more and more to our devices, to our mobile phones and even our televisions. But when we talk to Siri, we often find that the rules underlying her can't comprehend exactly what we want if we stray far from simple commands. For the vision of truly conversational devices to be fulfilled, we'll need computers to understand us as we talk to each other in a natural environment. For that, we'll need to continue to develop the field of common-sense reasoning; without it, we're never going to be able to have an intelligent conversation with Siri, Google Glass or our Xbox.

Common-sense reasoning is a field of artificial intelligence that aims to help computers understand and interact with people more naturally by finding ways to collect these assumptions and teach them to computers. Common-sense reasoning has been most successful in the field of natural language processing (NLP), though notable work has been done in other areas. This area of machine learning, with its strange name, is starting to quietly infiltrate applications ranging from text understanding to comprehending what's in a photo. Without common sense, it will be difficult to build adaptable and unsupervised NLP systems in an increasingly digital and mobile world. When we talk to each other and talk online, we try to be as interesting as possible and take advantage of new ways to express things. It's important to create computers that can keep pace with us.

There's more to it than one would think. If I asked you whether a giraffe would fit in your office, you could answer quite easily, despite the fact that in all probability you had never pictured a giraffe inhabiting your office, quietly munching on your ficus while your favorite Pandora station plays in the background. This is a perfect example of not just knowing about the world, but knowing how to apply your world knowledge to things you haven't thought about before.
The power of common-sense systems is that they are highly adaptive, adjusting to topics as varied as restaurant reviews, hiking boot surveys, and clinical trials, and doing so with speed and accuracy. This is because we understand new words from the context they are used in. We use common sense to make guesses at word meanings and then refine those guesses, and we've built a system that works similarly. Additionally, when we understand complex or abstract concepts, it's possible we do so by making an analogy to a simple concept, a theory described by George Lakoff in his book "Metaphors We Live By." The simple concepts are common sense.

There are two major schools of thought in common-sense reasoning. One side works with more logic-like or rule-based representations, while the other uses more associative and analogy-based reasoning, or "language-based" common sense; the latter draws conclusions that are fuzzier but closer to the way natural language works. Whether you realize it or not, you interact with both kinds of systems on a daily basis.

You've probably heard of IBM's Watson, which famously won at Jeopardy, but it's a lesser-known fact that Watson's predecessor was a project called Cyc that was developed in 1984 by Doug Lenat. The makers of Cyc, called Cycorp, operate a large repository of logic-based common-sense facts. It's still active today and remains one of the largest logic-based common-sense projects.

In the school of language-based common sense, the Open Mind Common Sense project was started in 1999 by Marvin Minsky, Push Singh, and myself. OMCS and ConceptNet, its better-known offshoot, include an information store in plain text as well as a large knowledge graph. The project became an early success in crowdsourcing, and now ConceptNet contains 17 million facts in many languages.

The last few years have seen great steps forward in particular types of machine learning: vector-based machine learning and deep learning. They have been instrumental in advancing language-based common sense, bringing computers one step closer to processing language the way humans do.

NLP is where common-sense reasoning excels, and the technology is starting to find its way into commercial products. Though there is still a long way to go, common-sense reasoning will continue to evolve rapidly in the coming years, and the technology is stable enough to be in business use today. It holds significant advantages over existing ontology and rule-based systems, or systems based simply on machine learning.

It won't be long before you have a more common-sense conversation with your computer about your trip to Mexico. And when you tell it that the water was a bit cold, your computer could reply: "I'm sorry to hear the ocean was chilly; it tends to be at this time of year. Though I saw the photos from your trip, and it looks like you got to wear that lovely new bathing suit you bought last week."
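As a toy illustration of the language-based, knowledge-graph approach the article describes: assertions like "water is wet" become edges in a graph, and even one-hop traversal yields the kind of "obvious" inference humans never state. The mini-graph and functions below are invented for illustration; they are not ConceptNet's actual API or data.

```python
from collections import defaultdict

edges = [
    ("water", "HasProperty", "wet"),
    ("ocean", "MadeOf", "water"),
    ("person", "Desires", "happiness"),
    ("vacation", "UsedFor", "relaxing"),
]

graph = defaultdict(list)
for start, relation, end in edges:
    graph[start].append((relation, end))

def related(concept: str):
    """Everything the graph asserts about a concept."""
    return graph[concept]

def implies_property(concept: str, prop: str) -> bool:
    """One-hop inference: ocean is made of water, water is wet -> ocean is wet."""
    for rel, end in graph[concept]:
        if (rel, end) == ("HasProperty", prop):
            return True
        if rel == "MadeOf":
            return implies_property(end, prop)
    return False

print(related("vacation"))               # [('UsedFor', 'relaxing')]
print(implies_property("ocean", "wet"))  # True
```

Real systems layer vector-based similarity on top of such a graph so that unseen words inherit plausible properties from their neighbors, which is what makes the approach adaptive to new domains.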


News Article | April 18, 2014
Site: recode.net

At last night’s StartOut gay entrepreneurs demo event, queer tech founders competed for venture capital attention in a warehouse in San Francisco’s South of Market neighborhood. Entrepreneurs from 10 startups pitched to VCs including Dave McClure from 500 Startups and Andy Wheeler from Google Ventures. No one was granted money that night, but organizer Chris Sinton said the exposure to venture capitalists and other founders — nearly 200 showed up to watch — would help get the ball rolling for the companies. StartOut and other minority affinity groups have grown this year, as more tech entrepreneurs, frustrated with the venture capital old boys’ networks, are looking to cultivate their own. Michael Witbrock, who sits on the board of StartOut, watched from the back of the room. The next step in gay activism, he argued, will be through helping the gay community in Silicon Valley become richer and more powerful. “There are things money can do that nothing else can,” said Witbrock, the vice president of research at artificial-intelligence company Cycorp. “This is a means for us as a community to empower ourselves financially. It’s about building people who have the resources to defend the community, who have the resources to buy those who would discriminate.” So advancing gay rights is about money now? “We don’t just need a place at the table,” Witbrock said. “Sometimes you need to buy the table.”


News Article | October 20, 2014
Site: www.techworld.com

In 1966, some Massachusetts Institute of Technology researchers reckoned that they could develop computer vision as a summer project, perhaps even get a few smart undergrads to complete the task. The world has been working on the problem ever since.

Computer vision is where computers recognize objects like people do. That's a tree. He's Carlos. And so on. It's one of a number of tasks we consider essential for generalized artificial intelligence, in which machines can act and reason as humans do. While we've made considerable headway in computer vision, especially in recent years, the fact that it has taken 50 years longer than expected shows why AI (artificial intelligence) is such a difficult and elusive goal.

"How much progress is being made? It's really hard to get a handle on that," said Beau Cronin, a Salesforce.com product manager currently working on some AI-influenced technologies for the company. Cronin spoke Friday at the O'Reilly Strata + Hadoop World conference in New York. The main theme of the conference was big data.

The need for big data analytics has given AI research a shot in the arm. Today the titans of the Internet industry -- Apple, Google, Facebook, Microsoft, IBM -- are putting AI research into the driver's seat, pushing forward the state of the art for seemingly routine tasks such as ad targeting and personalized assistance. But in many ways, we are no closer to achieving an overall general artificial intelligence, in the sense that a computer can behave like a human, Cronin observed. Systems that use AI technologies, such as machine learning, are designed to execute very narrowly defined tasks.

The state of AI has always been hard to assess, Cronin said. AI systems are hard to evaluate: they may excel in one area but fall short in another, similar task. Many projects, even sometimes very well-funded ones, go nowhere. Even basic definitions of AI are still not locked down. When two people talk about AI, one may be referring to a specific machine learning algorithm while the other may be talking about autonomous robots. AI still attracts oddballs, lone wolves working in their basements 10 hours a week hoping to solve the AI problem once and for all.

The overambitious "Summer Vision" MIT project in the '60s pointed out one of the major stumbling blocks for AI research, called Moravec's Paradox. Moravec's Paradox basically asserts that things that are easy for people to do -- object recognition and perception -- are extremely difficult for computers to do, while simple tasks for computers -- proving complex theorems -- are extremely difficult if not impossible for people to do (some present readers excluded, no doubt).

The waves of hype around getting machines to think, and the subsequent disillusionments born of the marginal results, led the field through a number of what have been called AI Winters, in which research funding dries up and progress slows. We probably will not see another AI Winter, if only because too many large companies, notably Google and Facebook, are basing their business models on using intelligent computing to better intuit what their users are looking for, Cronin said. Other companies offer AI-assisted technologies, such as Apple with Siri and IBM with Watson.
In many ways, today's AI systems are a direct lineage of the first AI systems built in the 1960s, such as Eliza -- the psychiatric-advice-dispensing program still used for some Twitterbots today -- and Perceptron, one of the first precursors to deep-learning neural networks. Such early AI systems were "deeply flawed and limited. They were just very basic in their capabilities," Cronin said. Nonetheless, "you can draw a direct line from those early systems to the work we're doing today in AI," he observed. "Watson is what we wished Eliza would be."

After years of very little progress, though, we are increasingly becoming awash in ever-more astounding forms of AI-like assistance for specific tasks. The pace of advance is happening at a rate that has surprised "even people who have been in the field for a long time," Cronin said. Self-driving vehicles, on the precipice of becoming commercially available, were considered almost an unachievable technology as little as 10 years ago.

Perhaps this is due to the change in funding for AI research. Governments with research money to spare have always invested in researchers with grand ambitions. And for many years, small commercial research organizations such as SRI International and Cycorp moved forward the state of the art. These days, AI research has benefactors across most of the major IT and Internet companies, such as Google, Facebook and Microsoft Research. Many smaller startups, flush with venture capital, are also pushing the envelope. "The work is increasingly applied to commercial [projects] rather than academic" ones, Cronin said.

As a result, AI technologies are now operating at larger scales than they ever did in the academic days. "Deep learning on its own, done in academia, doesn't have the [same] impact as when it is brought into Google, scaled and built into a new product." AI methods, such as machine learning, are being integrated into commercial services and products at a speedier pace than ever before. Cronin noted that Watson and Siri are more notable as "big integration projects" than for pioneering new forms of intelligence.

The growing influx of big data has helped the field as well, introducing inferencing and other statistical methods that few would have predicted would play such a powerful role in technology, Cronin said. In the olden days of academic AI research, the amount of data that could be reasoned against was relatively sparse compared to the mountains of stuff we have today. Google has made bank from its massive set of data on its users, which it collected first and figured out how to make money from later. The company didn't initially get hung up on "putting a lot of structure in the model," Cronin said. Google engineers have termed this the "unreasonable effectiveness of data."

In the long haul, however, we will have to put more thought into deeper learning techniques than we have now, Cronin said. Today's methods just aren't going to get us to full artificial intelligence. "We need richer, more predictive models," Cronin said, ones that can "routinely make predictions of what will happen."

One member of the audience, Juan Pablo Velez, a data analyst at New York data science consultancy Polynumeral, agreed with Cronin's assessment of AI. "A lot of new innovation has come around in deep learning that has been rolled out at scale, like Google image search.
But the research is very much tied to the agendas of big companies, and it doesn't necessarily mean we are any closer to generalized machine intelligence," Velez said.

In many ways, we are at the same point in AI research where we've always been: moving forward rapidly in some aspects, while seemingly standing still relative to the big goal, generalized artificial intelligence. As Facebook head of AI research Yann LeCun has said, AI research is like driving fast in the fog, where you can't see the next roadblock you will hit. Until the day we build a machine to look ahead into the fog for us, the future of AI will be uncertain for some time to come.

