Agency: Department of Defense | Branch: Navy | Program: SBIR | Phase: Phase II | Award Amount: 1.21M | Year: 2013
In Phase I, ObjectVideo (OV) prototyped a system that ingests video from a variety of sensors (airborne, ground-based, maritime) and generates metadata representing their contents in the form of modified DoD Discovery Metadata Specification (DDMS) cards, which provide a compact representation of targets, their attributes, trajectories, and scene elements. This information is combined with the context of mission goals and the specific needs of warfighters, specified as queries through interactive interfaces, to identify and retrieve relevant video clips. In Phase II, OV will advance this technology by creating a DDMS adapter SDK that will allow existing ISR platforms to integrate this capability. The SDK will assist sensor-data exploitation by large ISR enterprises, such as DCGS-MC, by providing a flexible and scalable data framework that enables horizontal integration of formats and capabilities, including NATO's STANAG 4676 and NGA's MISB standards. A real-time automated video dispatcher will analyze the DDMS cards as they become available and identify video segments of interest based on operators' and warfighters' needs. Necessary extensions to the DDMS specification will be made to support detection of complex events. The system will be designed using an open architecture in which system components are implemented as services, allowing ease of integration, scalability, and platform-independent operation.
Agency: Department of Defense | Branch: Army | Program: SBIR | Phase: Phase I | Award Amount: 100.00K | Year: 2015
ObjectVideo proposes to design and implement a fully automatic event-based adaptive video compression system that encodes a video at different bitrates by analyzing events in the scene. The modular system will provide the following innovations: 1) Automatic detection of entities-of-interest and events-of-interest from a moving platform sensor by parsing the video; 2) Optimal and dynamic bitrate allocation for each event by solving a bandwidth-constrained utility maximization problem; 3) An interactive event editing and visualization tool; and 4) Evaluation and identification of tradeoffs among processing time, bandwidth, recognition rate, and video quality.
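As a rough illustration of innovation 2), a bandwidth-constrained utility maximization admits a closed-form solution when each event's utility is a weighted logarithm of its bitrate: the optimum is a weight-proportional split of the budget. The function name, weights, and kbps figures below are illustrative assumptions, not details from the proposal:

```python
def allocate_bitrate(weights, total_kbps, floor_kbps=0.0):
    """Maximize sum_i w_i * log(r_i) subject to sum_i r_i = R. After
    reserving a minimum floor per event, the optimum is the
    weight-proportional split r_i = floor + (w_i / sum w) * budget."""
    budget = total_kbps - floor_kbps * len(weights)
    if budget < 0:
        raise ValueError("total bandwidth cannot cover the per-event floor")
    w_sum = float(sum(weights))
    return [floor_kbps + (w / w_sum) * budget for w in weights]

# Three events on a 2000 kbps channel: a vehicle of interest (weight 6),
# a pedestrian (3), and background motion (1).
rates = allocate_bitrate([6, 3, 1], total_kbps=2000, floor_kbps=100)
```

More realistic utilities (e.g., rate-distortion curves per event) would require a numerical solver, but the proportional form captures how importance weights steer bandwidth toward events of interest.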
Agency: Department of Defense | Branch: Air Force | Program: SBIR | Phase: Phase II | Award Amount: 1.05M | Year: 2013
ABSTRACT: This Small Business Innovation Research Phase II project will develop novel nonlinear filtering and joint decision and estimation methods for robust and persistent tracking of multiple targets with nonlinearities in the measurement and dynamic models. The key innovations in this effort include i) powerful measures of nonlinearity and non-Gaussianity, ii) efficient and accurate nonlinear point estimators that outperform the widely-used extended Kalman filter and unscented filter, iii) improved methods for mesh adaptation for efficient approximate density estimation based on a numerical solution of the Fokker-Planck equation, iv) mathematical models defining a continuum of multi-target data association problems in complexity and problem space, and v) ensemble tracking based on a novel joint decision and (point) estimation (JDE) framework. The Phase I effort demonstrated the proof of concept by developing core technologies that form the basis of the proposed algorithms. The Phase II effort will focus on refinement, advancement, and integration of these enabling technologies and will include development of new algorithms and measures to handle the issues identified during Phase I, transitioning of technology and prototyping, integration of technologies into an existing ISR system, detailed quantitative and qualitative evaluation of the system and its components, and demonstration of technologies in operationally representative scenarios. BENEFIT: Automatic interpretation of sensory data has been a persistent topic of research in the areas of information theory, sensor fusion, computer vision, pattern recognition, machine learning, and psychology, but is still in a very primitive state. Robust detection and persistent tracking of targets in the scene is a critical first step toward enabling systems that can interpret activities in the scene and provide timely situational awareness and effective forensic analysis capabilities to analysts.
The technologies proposed here enable persistent tracking of multiple targets with nonlinear dynamic and observation models. The proposed tracking techniques are generic and can be applied to many ISR sensors and applications characterized by complex models and uncertainties in the data, for example, urban surveillance from WAMI sensors, GMTI and AMTI tracking, 2D/3D particle tracking, and dynamic analysis of cellular organisms in the microbiology domain. The proposed technologies can also enable real-time and forensic analysis tasks with many applications in the domains of geospatial intelligence (GEOINT), persistent surveillance, and video analysis. These benefits include: (a) Timely Situational Awareness: Timely availability of reliable intelligence is critical to support military missions and objectives. Knowledge of the trajectories of targets and their interactions with each other and with other scene elements is a critical part of intelligence, and its automated extraction from surveillance data will help expedite the dissemination of relevant intelligence to commanders in the field. (b) Reliable Inference: Existing automated video surveillance and monitoring tools allow users to specify security rules in the form of tripwires and exclusion zones on maps or images. Persistent tracking of targets will enable users to receive alerts with high precision, increasing the reliability of the overall system and user confidence in the output of the intelligent system. (c) Comprehensive Video Analysis: Robust target tracking in videos is a step closer to comprehensive video analysis, i.e., automatic extraction of a description of the elements, activities, and events in the scene.
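For context on the filtering baseline the abstract aims to improve on, the following is a minimal scalar unscented transform, the textbook building block of the unscented filter. It is a generic sketch, not the project's improved estimators; the quadratic measurement model is an illustrative assumption:

```python
import math

def unscented_transform(mean, var, f, kappa=2.0):
    """Propagate a scalar Gaussian (mean, var) through a nonlinear
    function f by pushing deterministically chosen sigma points through
    f and reweighting them; unlike the extended Kalman filter's
    linearization, this captures second-order effects of f."""
    n = 1
    spread = math.sqrt((n + kappa) * var)
    sigmas = [mean, mean + spread, mean - spread]
    weights = [kappa / (n + kappa), 0.5 / (n + kappa), 0.5 / (n + kappa)]
    ys = [f(s) for s in sigmas]
    y_mean = sum(w * y for w, y in zip(weights, ys))
    y_var = sum(w * (y - y_mean) ** 2 for w, y in zip(weights, ys))
    return y_mean, y_var

# Quadratic measurement model: the transform recovers the exact mean
# E[x^2] = mu^2 + var = 1.04, which a first-order linearization misses.
m, v = unscented_transform(1.0, 0.04, lambda x: x * x)
```

The measures of nonlinearity and non-Gaussianity proposed in the effort would, among other things, indicate when such moment-matching approximations break down and a density-based method (e.g., the Fokker-Planck solution) is needed.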
ObjectVideo | Date: 2014-04-14
Relational graphs may be used to extract information. Similarities between the relational graphs and the items they represent may be determined. For example, when applied to video searching, relational graphs may be obtained by searching videos to extract objects, events, and/or relations therebetween. Each relational graph may comprise a plurality of nodes and edges, wherein at least some of the detected objects and events are represented by nodes, and wherein each edge represents a relationship between two nodes. Subgraphs may be extracted from each relational graph, and dimension reduction may be performed on the subgraphs to obtain a reduced variable set, which may then be used to perform searches, such as similarity analyses of videos.
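The pipeline described above (subgraph features, dimension reduction, similarity search) can be sketched as follows. The edge-count features, truncated SVD, and cosine similarity are generic stand-ins chosen for illustration, not the patented method:

```python
import numpy as np

def subgraph_features(edges, vocab):
    """Count labeled edges (the simplest 'subgraphs'); richer features
    would enumerate larger subgraphs of each relational graph."""
    vec = np.zeros(len(vocab))
    for _, _, label in edges:
        vec[vocab.index(label)] += 1.0
    return vec

def reduced_similarity(graphs, vocab, k=2):
    """Stack per-graph feature vectors, reduce them with a truncated
    SVD (the dimension-reduction step), and return pairwise cosine
    similarities in the reduced space."""
    X = np.stack([subgraph_features(g, vocab) for g in graphs])
    U, S, _ = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
    Z = U[:, :k] * S[:k]
    Zn = Z / (np.linalg.norm(Z, axis=1, keepdims=True) + 1e-12)
    return Zn @ Zn.T

# Two graphs of "follow then enter" activity and one unrelated graph.
vocab = ["follows", "enters", "near"]
g1 = [("person1", "person2", "follows"), ("person2", "door", "enters")]
g2 = [("carA", "carB", "follows"), ("carB", "lot", "enters")]
g3 = [("person", "car", "near")]
sims = reduced_similarity([g1, g2, g3], vocab)
```

Videos with matching activity structure (g1 and g2) land close together in the reduced space even though their node identities differ, which is what makes the reduced variable set usable for similarity search.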
ObjectVideo | Date: 2014-09-29
One or more secondary video streams (703) may be generated automatically (702) from an input primary video stream (701). Analytics performed on the video provide information on targets, events, and/or areas of interest, permitting each secondary video stream to concentrate on one or more of those targets, events, or areas of interest.
ObjectVideo | Date: 2014-12-19
Methods and systems for facial recognition are provided. The method includes determining a three-dimensional (3D) model of a face of an individual based on different images of the individual. The method also includes extracting two-dimensional (2D) patches from the 3D model. Further, the method includes generating a plurality of signatures of the face using different combinations of the 2D patches, wherein the plurality of signatures correspond to respective views of the 3D model from different angles.
ObjectVideo | Date: 2015-03-31
Systems, methods, and articles of manufacture for a surveillance system are provided. The surveillance system includes sensors having at least one non-overlapping field of view. The surveillance system is operable to track a target in an environment using the sensors. The surveillance system is also operable to extract information from images of the target provided by the sensors. The surveillance system is further operable to determine probabilistic confidences corresponding to the information extracted from images of the target. The confidences include at least one confidence corresponding to at least one primitive event. Additionally, the surveillance system is operable to determine grounded formulae by instantiating predefined rules using the confidences. Further, the surveillance system is operable to infer a complex event corresponding to the target using the grounded formulae. Moreover, the surveillance system is operable to provide an output describing the complex event.
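One simple reading of "grounded formulae by instantiating predefined rules using the confidences" is a probabilistic AND over primitive-event confidences. The rule format, event names, and product scoring below are illustrative assumptions, not the patent's actual inference engine:

```python
def infer_complex_event(rule, confidences, threshold=0.5):
    """Ground a predefined rule by substituting observed primitive-event
    confidences, then score the complex event as the product of the
    premise confidences (a probabilistic AND)."""
    if any(p not in confidences for p in rule["premises"]):
        return rule["event"], 0.0, False
    score = 1.0
    for premise in rule["premises"]:
        score *= confidences[premise]
    return rule["event"], score, score >= threshold

# Primitive events with confidences from hypothetical per-sensor extraction.
confidences = {"approach_vehicle": 0.9, "open_door": 0.8, "drive_away": 0.7}
rule = {"event": "vehicle_theft",
        "premises": ["approach_vehicle", "open_door", "drive_away"]}
event, score, fired = infer_complex_event(rule, confidences)
```

A production system would likely use a richer formalism (e.g., weighted first-order rules with temporal constraints), but the key idea survives: uncertain primitive detections from non-overlapping sensors are combined by rules rather than hard boolean logic.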
ObjectVideo | Date: 2014-08-27
Systems, methods, and computer applications and media for gathering, categorizing, sorting, managing, reviewing, and organizing large quantities of multimedia items across space and time using crowdsourcing resources are described. Various implementations may enable either public or private (e.g., internal to an organization) crowdsourcing of multimedia item gathering and analysis, including the gathering, analysis, lead-searching, and classification of digital still images and digital videos. Various implementations may allow a user, such as a law enforcement investigator, to consolidate all of the available multimedia items into one system and quickly gather, sort, organize, and display the multimedia items based on location, time, content, or other parameters. Moreover, an investigator may be able to create crowdsourcing tasks as they work with the multimedia items and utilize crowdsourcing resources when they need help.
ObjectVideo | Date: 2014-08-08
A system for detecting behavior of a target may include: a target detection engine, adapted to detect at least one target from one or more objects from a video surveillance system recording a scene; a path builder, adapted to create at least one mature path model from analysis of the behavior of a plurality of targets in the scene, wherein the at least one mature path model includes a model of expected target behavior with respect to the path; and a target behavior analyzer, adapted to analyze and identify target behavior with respect to the at least one mature path model. The system may further include an alert generator, adapted to generate an alert based on the identified behavior.
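A minimal sketch of the analyzer stage: treat the mature path model as a polyline of expected positions and flag tracks that deviate from it. The distance-to-nearest-vertex test and threshold are illustrative simplifications, not the patented path builder or analyzer:

```python
import math

def distance_to_path(point, path):
    """Distance from one target observation to the nearest vertex of a
    mature path model (here just a polyline of expected positions)."""
    return min(math.dist(point, vertex) for vertex in path)

def is_anomalous(track, path, max_dev=2.0):
    """Flag a track whose deviation from the path model exceeds max_dev
    at any observation; a real analyzer would also model speed,
    direction, and entry/exit statistics."""
    return max(distance_to_path(p, path) for p in track) > max_dev

# A learned path along y = 0 and two observed tracks.
path = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
normal_track = [(0.1, 0.2), (1.1, -0.1), (1.9, 0.1)]
stray_track = [(0.2, 0.1), (1.0, 5.0)]
```

In a full system the anomalous result would feed the alert generator rather than being returned directly.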
ObjectVideo | Date: 2014-03-10
A video surveillance system is set up, calibrated, tasked, and operated. The system extracts video primitives and extracts event occurrences from the video primitives using event discriminators. The system can undertake a response, such as an alarm, based on extracted event occurrences.
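A classic event discriminator over video primitives is a tripwire rule: an event occurrence is extracted when a target's motion segment crosses a user-drawn line. The segment-intersection test below is a generic sketch of that idea, not ObjectVideo's implementation:

```python
def _ccw(a, b, c):
    """Signed area test: positive when a -> b -> c turns counter-clockwise."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def crosses_tripwire(prev_pos, curr_pos, wire):
    """True when the target's motion segment (previous to current
    position, taken from consecutive video primitives) strictly crosses
    the tripwire segment."""
    a, b = wire
    return (_ccw(a, b, prev_pos) * _ccw(a, b, curr_pos) < 0 and
            _ccw(prev_pos, curr_pos, a) * _ccw(prev_pos, curr_pos, b) < 0)

# A vertical tripwire on the x = 0 line between y = 0 and y = 10.
wire = ((0.0, 0.0), (0.0, 10.0))
crossed = crosses_tripwire((-1.0, 5.0), (1.0, 5.0), wire)  # steps across
missed = crosses_tripwire((1.0, 1.0), (2.0, 2.0), wire)    # stays on one side
```

Separating primitive extraction (target positions per frame) from event discrimination (rules like this one) is what lets such a system be re-tasked with new rules without re-processing the video.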