Barrington, RI, United States

Patent
Vision Systems, Inc. | Date: 2013-03-22

A new technology is provided for a portable bar code verification reader designed to create a video image of a bar code label in accordance with strict industry bar code verification standards. The bar code verification reader captures a bar code label and sends the resulting video image to a computer. The resulting bar code label quality may be displayed on the portable verification reader via one or more LEDs, an embedded display monitor, or the like.
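
As a rough illustration of the reporting step, the sketch below maps an overall verification grade to an LED color; the ANSI-style letter grades and the color choices are assumptions for illustration, not details from the patent.

```python
# Minimal sketch of reporting a bar code verification result via LEDs,
# assuming an ANSI-style letter grade (A-F); names are hypothetical.

GRADE_TO_LED = {
    "A": "green",   # passes with margin
    "B": "green",
    "C": "yellow",  # marginal quality
    "D": "red",
    "F": "red",     # fails verification
}

def led_for_grade(grade: str) -> str:
    """Return the LED color used to report a label's overall grade."""
    return GRADE_TO_LED.get(grade.upper(), "red")

print(led_for_grade("B"))  # -> green
```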


Grant
Agency: Department of Defense | Branch: Air Force | Program: SBIR | Phase: Phase II | Award Amount: 749.86K | Year: 2015

ABSTRACT: VSI proposes the development of the SIGMA (Speedy Imagery Geo-registration and Motion Analysis) system, which will address three key requirements by the end of Phase II: (i) adapt to different types of sensors, sensor configurations, and sensor operating conditions, (ii) handle large amounts of data in real time or close to real time using limited resources, and (iii) operate under conditions such as lack of metadata in GPS-denied areas and linear, unbounded trajectories. The central tenet of the proposed approach is the formation of a graph whose nodes are groups of images, with nodes connected to each other when there is spatial or temporal connectivity. Furthermore, the development in Phase II will leverage the Phase I developments, specifically (i) the robust online SFM system, which was successfully tested on multiple datasets such as CLIF 2007, MAMI, and synthetic datasets, and (ii) the demonstrated feasibility of the dynamic graph representation to handle large amounts of image data and variable camera geometry constraints. The proposed system is a plugin-based framework that allows each of the components to be replaced by different implementations or algorithms, enabling the system to work flexibly under different operating conditions. BENEFIT: Real-time mapping of aerial imagery for urban planning, construction, and disaster relief, and real-time traffic updates.
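
As an informal illustration of the graph described above, the Python sketch below groups images into nodes and links nodes that are temporal neighbors or have overlapping ground footprints; the class and field names (ImageGroupNode, footprint) are hypothetical, not SIGMA's actual data structures.

```python
# A minimal sketch of a dynamic graph whose nodes are groups of images,
# connected when they overlap spatially or follow each other in time.

from dataclasses import dataclass, field

@dataclass
class ImageGroupNode:
    group_id: int            # consecutive ids encode temporal order
    frame_ids: list          # images in this group (one temporal block)
    footprint: tuple         # (min_x, min_y, max_x, max_y) ground footprint
    neighbors: set = field(default_factory=set)

def spatially_overlap(a: ImageGroupNode, b: ImageGroupNode) -> bool:
    ax0, ay0, ax1, ay1 = a.footprint
    bx0, by0, bx1, by1 = b.footprint
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

def add_node(graph: dict, node: ImageGroupNode) -> None:
    """Link the new node to its temporal predecessor and to any
    spatially overlapping nodes already in the graph."""
    for other in graph.values():
        temporal = other.group_id == node.group_id - 1
        if temporal or spatially_overlap(node, other):
            node.neighbors.add(other.group_id)
            other.neighbors.add(node.group_id)
    graph[node.group_id] = node
```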


Grant
Agency: Department of Defense | Branch: Air Force | Program: SBIR | Phase: Phase II | Award Amount: 1.50M | Year: 2016

ABSTRACT: A system is proposed to access, collate, and process imagery from multiple small satellite constellations. The imagery is fused and registered to a common reference so that exploitation algorithms, such as change detection and event monitoring, report results with accurate geographic coordinates. The system, called SatTel, is implemented as open source software designed to run on the cloud, with web-based analyst tools for defining intelligence queries. These queries arise from tasks such as monitoring weapon test sites, and receive timely responses due to SatTel's ability to aggregate images through an integrated portal that accesses multiple vendor archives. The imagery to satisfy intelligence queries is selected using a rule-based algorithm that matches exploitation algorithm data requirements against available image attributes such as resolution, viewpoint, and atmospheric effects. The image fusion process yields useful intermediate products, including a high-resolution digital elevation model (DEM) and a controlled orthorectified image base map (CIB). These products are exported in standard formats such as GeoTIFF and NITF. The image exploitation algorithms include global change detection, event triggers, and vehicle tracking; results are exported as attributed shapefiles, a format accepted by many current analyst tools such as ArcGIS. BENEFIT: Oil and gas energy; natural resource management (agriculture and forestry); insurance; fleet management; real estate; city planning and engineering.
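
The rule-based selection step might look something like the sketch below, where each exploitation algorithm declares data requirements that are matched against per-image attributes; all field names and thresholds (gsd_m, max_cloud_cover, and so on) are illustrative assumptions, not SatTel's actual schema.

```python
# A hedged sketch of rule-based image selection: an exploitation
# algorithm's requirements are matched against archive image attributes.

def satisfies(image: dict, requirements: dict) -> bool:
    """True if an image's attributes meet an algorithm's requirements."""
    return (image["gsd_m"] <= requirements["max_gsd_m"]                # resolution
            and image["off_nadir_deg"] <= requirements["max_off_nadir_deg"]  # viewpoint
            and image["cloud_cover"] <= requirements["max_cloud_cover"])     # atmosphere

change_detection_reqs = {"max_gsd_m": 1.0,
                         "max_off_nadir_deg": 30.0,
                         "max_cloud_cover": 0.2}

archive = [
    {"id": "img-001", "gsd_m": 0.5, "off_nadir_deg": 12.0, "cloud_cover": 0.05},
    {"id": "img-002", "gsd_m": 3.0, "off_nadir_deg": 40.0, "cloud_cover": 0.60},
]

selected = [im["id"] for im in archive if satisfies(im, change_detection_reqs)]
print(selected)  # -> ['img-001']
```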


Grant
Agency: Department of Defense | Branch: Air Force | Program: SBIR | Phase: Phase I | Award Amount: 149.93K | Year: 2013

ABSTRACT: VSI proposes a flexible sensor estimation framework for airborne calibration processing of image streams over long time periods. The components of the framework will consist of existing software and libraries. The plugin-based framework provides flexibility to switch components easily. The core of the framework is a belief propagation engine that globally optimizes the sensor parameters. The input to the approach is an image stream and its associated navigational data, when available. The proposed framework is an online system wherein the sensor estimation is continuously refined as more spatial overlap is discovered. The proposed system can be dynamically tuned for either speed/efficiency or accuracy, depending on the requirements of the operator. The central tenets of the proposed approach are (i) using the temporal ordering of the images to increase computational efficiency, (ii) using the spatial overlap of temporally disjoint images to correct for any accumulated drift, and (iii) locally optimizing the sensor parameters and using belief propagation to globally optimize them across hundreds of thousands of images. BENEFIT: Sensor estimation capabilities for airborne image acquisition systems for remote sensing applications. On-board processing capabilities for updating traffic reports.
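
A minimal sketch of that online loop is given below, under the assumption that local optimization runs per frame and a global pass is triggered whenever spatial overlap with temporally disjoint imagery (a loop closure) is detected; all function names are placeholders, not the proposed framework's API.

```python
# Sketch of an online sensor-estimation loop: local per-frame refinement,
# with global optimization triggered by detected loop closures.

def process_stream(frames, detect_loop_closure, optimize_local, optimize_global):
    estimates = {}                      # frame_id -> sensor parameters
    for frame in frames:
        # (i) exploit temporal ordering: warm-start from the previous frame
        prior = estimates.get(frame.id - 1)   # assumes consecutive integer ids
        estimates[frame.id] = optimize_local(frame, prior)

        # (ii) spatial overlap with temporally disjoint frames corrects drift
        closure = detect_loop_closure(frame, estimates)
        if closure is not None:
            # (iii) a belief-propagation pass refines all parameters globally
            estimates = optimize_global(estimates, closure)
    return estimates
```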


Grant
Agency: Department of Defense | Branch: Air Force | Program: SBIR | Phase: Phase I | Award Amount: 149.87K | Year: 2013

ABSTRACT: While the state of the art in both single-image reconstruction algorithms and multi-view structure from motion algorithms has advanced considerably in recent years, little work has leveraged the constraints relied upon by both approaches. When an area of interest is imaged from a long distance with little angular diversity in viewpoint, multi-view constraints alone are often insufficient to produce an accurate 3-d reconstruction. It is proposed that the additional constraints provided by surface properties learned from image data will significantly improve reconstruction performance. The proposed Phase I effort focuses on the development of an aerial image-based 3-d reconstruction algorithm that combines the relative strengths of single-image reconstruction and context algorithms with state-of-the-art multi-view stereo. The result is an automatically generated 3-d model that is optimally constrained by all information contained in a set of collected aerial images. The proposed system is general enough to exploit high angular diversity datasets, but exhibits graceful degradation as viewpoint diversity decreases. The decrease in information due to low view angle diversity is compensated by single-image constraints on surface orientation derived by machine learning algorithms. BENEFIT: Benefits of the proposed approach include improved sensor model estimation and high-accuracy 3-D modeling capabilities. Applications include support of downstream processing (tracking, geo-positioning, geo-registration), augmented reality/situational awareness, and simulation/training.
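
One plausible way to realize that graceful degradation is to weight the learned single-image prior more heavily as angular diversity shrinks, as in the sketch below; the exponential weighting and its scale parameter are assumptions for illustration, not the proposed algorithm.

```python
# A hedged sketch of blending a multi-view data term with a learned
# single-image surface-orientation prior, weighted by viewpoint diversity.

import math

def prior_weight(max_view_angle_deg: float, scale_deg: float = 15.0) -> float:
    """More weight on single-image constraints when angular diversity is low."""
    return math.exp(-max_view_angle_deg / scale_deg)

def surface_cost(multi_view_cost: float,
                 orientation_prior_cost: float,
                 max_view_angle_deg: float) -> float:
    w = prior_weight(max_view_angle_deg)
    return (1.0 - w) * multi_view_cost + w * orientation_prior_cost

# Narrow baseline (5 deg): the prior dominates; wide baseline (60 deg): stereo dominates.
print(surface_cost(0.3, 0.8, 5.0))   # prior-heavy blend
print(surface_cost(0.3, 0.8, 60.0))  # stereo-heavy blend
```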


Grant
Agency: Department of Defense | Branch: National Geospatial-Intelligence Agency | Program: SBIR | Phase: Phase II | Award Amount: 499.54K | Year: 2013

With the advancement of aerial imaging sensors, high-quality data equipped with partial sensor calibration models is available. Recent research activity in the computer vision community aims to reconstruct the 3-d structure of observed scenes in fully automated ways, relying on the content of the imagery. However, the research has not matured into robust systems ready for operational settings. In this proposal, a novel architecture is presented that reconstructs the 3-d geometry of the scene in the form of a geo-registered 3-d point cloud, given imagery from multiple sensor platforms. The 3-d cloud is equipped with LE and CE measurements through propagation of errors in the sensor calibration and geometry reconstruction stages. The CVG team proposes to use a volumetric probabilistic 3-d representation (P3DM) and dense image matching to reconstruct the geometry and appearance of the scene starting from a set of images with partial calibration data. The P3DM technology is at Technology Readiness Level (TRL) 4, with critical modules of the system parallelized and implemented on GPU hardware for real-time processing.
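
As a rough sketch of the error-propagation output, the snippet below derives CE90/LE90 for a point from a propagated 3x3 covariance, using the standard normal-error approximations (CE90 about 2.1460 times the mean horizontal sigma when the error is near-circular, LE90 = 1.6449 times the vertical sigma); the covariance values are made up, and this is not the P3DM pipeline itself.

```python
# Minimal sketch: circular error (CE90) and linear error (LE90) from a
# propagated point covariance, under standard normal-error approximations.

import math

def ce90_le90(cov):
    """cov: 3x3 covariance [m^2] in a local East-North-Up frame."""
    sx, sy, sz = (math.sqrt(cov[i][i]) for i in range(3))
    sigma_c = math.sqrt((sx * sx + sy * sy) / 2.0)  # mean horizontal sigma
    ce90 = 2.1460 * sigma_c      # valid when sx and sy are roughly equal
    le90 = 1.6449 * sz
    return ce90, le90

cov = [[0.25, 0.0, 0.0],
       [0.0, 0.36, 0.0],
       [0.0, 0.0, 1.00]]
print(ce90_le90(cov))  # -> (~1.19 m, ~1.64 m)
```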


Grant
Agency: Department of Defense | Branch: Air Force | Program: SBIR | Phase: Phase I | Award Amount: 149.90K | Year: 2012

ABSTRACT: Multi-sensor fusion is a key capability required for effective inferences for timely and accurate strategic and tactical planning. A multi-sensor fusion technology needs to appropriately weigh different image streams according to the uncertainties and errors across inputs and parameters, such as GPS and INS measurements, sensor model parameters, inherent ambiguities of the data (e.g., localization in featureless regions), and errors due to registering each data source in a common frame. Vision Systems, Inc. proposes to fuse aerial imagery via 3-d models using subjective logic to represent and account for the various uncertainties. Additionally, the 3-d models will be registered by computing a 3-d to 3-d transformation. BENEFIT: Remote sensing, surveying, evacuation and emergency planning, visualization, and navigation.
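
Subjective logic represents each source's evidence as an opinion (belief, disbelief, uncertainty); the sketch below applies Jøsang's standard cumulative fusion operator, under which a low-uncertainty source dominates a vague one. The opinion values are illustrative, and this is the generic operator, not VSI's specific fusion scheme.

```python
# Sketch of subjective-logic cumulative fusion (Josang): sources are
# weighted by their uncertainty mass when their opinions are combined.

def cumulative_fuse(a, b):
    """Fuse two opinions (belief, disbelief, uncertainty); u > 0 for both."""
    ba, da, ua = a
    bb, db, ub = b
    k = ua + ub - ua * ub
    return ((ba * ub + bb * ua) / k,
            (da * ub + db * ua) / k,
            (ua * ub) / k)

# A confident sensor (low uncertainty) dominates a vague one.
confident = (0.8, 0.1, 0.1)
vague = (0.3, 0.2, 0.5)
print(cumulative_fuse(confident, vague))  # -> (~0.78, ~0.13, ~0.09)
```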


Patent
Vision Systems, Inc.

Disclosed herein are systems, apparatus, and methods for real-time image capture with variable-density display, high-resolution electronic zoom, and improved depth of field. The systems, apparatus, and methods used to view a target site include: at least one image sensor with an energy receiving surface having X pixels for producing one or more signals that correspond to the target site; at least one display for presenting N fields of view, wherein each field of view is comprised of at least one image produced by the signals; and an electronic zoom mechanism that can switch between N fields of view with little or no loss of resolution, wherein the first field of view is comprised of input from no more than X/N non-adjacent pixels and at least one zoom field of view is comprised of input from no more than X/N adjacent active pixels.
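
As a concrete reading of that claim, the NumPy sketch below takes N = 4 fields of view: the wide view decimates by 2 along each axis (X/4 non-adjacent pixels), while a zoom view crops a contiguous X/4-pixel window, so both views share one display resolution with no interpolation. This is an interpretation of the claim language, not code from the patent.

```python
# Sketch of resolution-preserving electronic zoom: the wide view subsamples
# non-adjacent pixels; the zoom view reads the same number of adjacent pixels.

import numpy as np

def wide_view(sensor: np.ndarray) -> np.ndarray:
    """Full scene at reduced density: every other pixel in each axis."""
    return sensor[::2, ::2]                          # X/4 non-adjacent pixels

def zoom_view(sensor: np.ndarray, top: int, left: int) -> np.ndarray:
    """Full-density crop with the same output size as the wide view."""
    h, w = sensor.shape[0] // 2, sensor.shape[1] // 2
    return sensor[top:top + h, left:left + w]        # X/4 adjacent pixels

sensor = np.arange(16 * 16).reshape(16, 16)          # stand-in for an X-pixel sensor
assert wide_view(sensor).shape == zoom_view(sensor, 4, 4).shape  # both (8, 8)
```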


Patent
Vision Systems, Inc. | Date: 2012-02-10

Driver distraction is reduced by providing information only when necessary to assist the driver, and in a visually pleasing manner. Obstacles such as other vehicles, pedestrians, and road defects are detected based on analysis of image data from a forward-facing camera system. An internal camera images the driver to determine a line of sight. Navigational information, such as a line with an arrow, is displayed on a windshield so that it appears to overlay and follow the road along the line of sight. Brightness of the information may be adjusted to correct for lighting conditions, so that the overlay will appear brighter during daylight hours and dimmer during the night. A full augmented reality is modeled and navigational hints are provided accordingly, so that the navigational information indicates how to avoid obstacles by directing the driver around them. Obstacles also may be visually highlighted.
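
The brightness correction might be as simple as the sketch below, which maps ambient illuminance to an overlay brightness level; the lux thresholds and output levels are invented for illustration and are not taken from the patent.

```python
# Hedged sketch: scale windshield-overlay brightness with ambient light so
# the navigation line stays visible in daylight and dim at night.

def overlay_brightness(ambient_lux: float,
                       min_level: float = 0.15,
                       max_level: float = 1.0,
                       night_lux: float = 10.0,
                       day_lux: float = 10_000.0) -> float:
    """Map ambient illuminance to a brightness in [min_level, max_level]."""
    if ambient_lux <= night_lux:
        return min_level
    if ambient_lux >= day_lux:
        return max_level
    t = (ambient_lux - night_lux) / (day_lux - night_lux)
    return min_level + t * (max_level - min_level)

print(overlay_brightness(5.0))      # night -> 0.15
print(overlay_brightness(50_000))   # bright daylight -> 1.0
```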


Trademark
Vision Systems, Inc. | Date: 2016-05-17

Downloadable electronic data reports, comprised of real estate title plant data, searcher notes, property files and research, for users of title evidence to evaluate particular residential and commercial properties.
