Vision Technologies

visntec.com
Bentonville, AR, United States

VT Systems, also known as Vision Technologies Systems, is an integrated engineering group providing solutions and services in the aerospace, electronics, land systems, and marine sectors. It has 6,300 professional and technical employees. VT Systems is the United States corporate entity of ST Engineering, a global defense contractor (source: Wikipedia).


Patent
Vision Technologies | Date: 2017-03-29

Systems and methods are provided for detecting an object and causing a vehicle to brake based on the detection. In one implementation, an object detecting and braking system for a vehicle includes at least one image capture device configured to acquire a plurality of images of an area including an object in front of the vehicle. The system includes at least one processing device programmed to perform a first image analysis to determine a first estimated time-to-collision of the vehicle with the object, and to perform a second image analysis to determine a second estimated time-to-collision of the vehicle with the object. The processing device is also programmed to calculate a difference between the first estimated time-to-collision and the second estimated time-to-collision, to determine that the difference does not exceed a predetermined threshold, and to cause the vehicle to brake based on the determination that the difference does not exceed the predetermined threshold.
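The braking decision described in this abstract can be sketched as a small cross-check function. The two time-to-collision estimates would come from the two independent image analyses, which are treated here simply as inputs; the function name and threshold value are illustrative, not from the patent:

```python
def should_brake(ttc_first: float, ttc_second: float, threshold: float) -> bool:
    """Cross-check two independently estimated times-to-collision (seconds).

    Braking is triggered only when the two estimates agree to within
    `threshold`, reducing the chance of braking on a spurious detection
    from a single analysis.
    """
    return abs(ttc_first - ttc_second) <= threshold

# Two analyses roughly agree -> brake
assert should_brake(1.9, 2.1, threshold=0.5)
# Estimates disagree -> suppress braking
assert not should_brake(1.0, 3.0, threshold=0.5)
```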


Patent
Vision Technologies | Date: 2017-03-08

A system and method for capturing images from various angles of an object resting on an orientation-indication background, using a camera of a mobile device and visible light. The images are processed to construct a three-dimensional image model, and a representation of the three-dimensional image model is displayed on an electronic display screen, where it can be manipulated or embedded into a video game.


Patent
Vision Technologies | Date: 2017-03-28

Systems and methods are provided for detecting an object and causing a vehicle to brake based on the detection. In one implementation, an object detecting and braking system for a vehicle includes at least one image capture device configured to acquire a plurality of images of an area including an object in front of the vehicle. The system includes at least one processing device programmed to perform a first image analysis to determine a first estimated time-to-collision of the vehicle with the object, and to perform a second image analysis to determine a second estimated time-to-collision of the vehicle with the object. The processing device is also programmed to calculate a difference between the first estimated time-to-collision and the second estimated time-to-collision, to determine that the difference does not exceed a predetermined threshold, and to cause the vehicle to brake based on the determination that the difference does not exceed the predetermined threshold.


Patent
Vision Technologies

Systems and methods use cameras to provide autonomous and/or driver-assist navigation features. In some implementations, techniques for predicting the location of first roadway lane constraints are provided. The system may receive multiple images of a roadway in a vicinity of a vehicle, recognize a first roadway lane constraint, and, when lane prediction conditions are determined to be satisfied, predict a location of a second roadway lane constraint. In some implementations, techniques for detecting and responding to construction zones are provided. The system may receive multiple images of a roadway in a vicinity of a vehicle, recognize indicators of a construction zone in the images, determine that the vehicle is proximate to a construction zone, and output a signal indicating that the vehicle is proximate to a construction zone.
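One way to read the lane-prediction step is as a lateral offset of the recognized constraint by an assumed lane width. The sketch below is a hypothetical helper, not the patented method: it assumes the second boundary is parallel to the first at a nominal highway lane width:

```python
def predict_second_constraint(first_constraint, lane_width=3.7):
    """Predict the second lane boundary by offsetting the first.

    `first_constraint` is a list of (x, y) road-plane points along the
    recognized boundary, x being the lateral coordinate in metres; the
    second boundary is assumed parallel at `lane_width` (3.7 m is a
    common highway default, an assumption here).
    """
    return [(x + lane_width, y) for (x, y) in first_constraint]

left_boundary = [(0.0, 0.0), (0.1, 10.0), (0.3, 20.0)]
right_boundary = predict_second_constraint(left_boundary)
assert right_boundary[0] == (3.7, 0.0)
```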


Patent
Vision Technologies | Date: 2017-04-12

Systems and methods are provided for detecting an object in front of a vehicle. In one implementation, an object detecting system includes an image capture device configured to acquire a plurality of images of an area, a data interface, and a processing device programmed to compare a first image to a second image to determine displacement vectors between pixels, to search for a region of coherent expansion that is a set of pixels in at least one of the first image and the second image, for which there exists a common focus of expansion and a common scale magnitude such that the set of pixels satisfy a relationship between pixel positions, displacement vectors, the common focus of expansion, and the common scale magnitude, and to identify presence of a substantially upright object based on the set of pixels.
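The "relationship between pixel positions, displacement vectors, the common focus of expansion, and the common scale magnitude" can be read as: displacement ≈ scale × (pixel − FOE). A minimal coherence check under that reading (names and tolerance are assumptions, not from the patent):

```python
def is_coherent_expansion(pixels, displacements, foe, scale, tol=0.5):
    """Check whether a set of pixels forms a region of coherent expansion.

    Each pixel's displacement vector should approximately satisfy
    d = scale * (p - foe) for a common focus of expansion `foe` and a
    common scale magnitude `scale`, within tolerance `tol` (pixels).
    """
    for (px, py), (dx, dy) in zip(pixels, displacements):
        ex = scale * (px - foe[0])
        ey = scale * (py - foe[1])
        if abs(dx - ex) > tol or abs(dy - ey) > tol:
            return False
    return True

# Synthetic upright object: all pixels expand radially away from FOE (320, 240)
pts = [(300.0, 200.0), (340.0, 260.0), (320.0, 300.0)]
disp = [(0.1 * (x - 320.0), 0.1 * (y - 240.0)) for (x, y) in pts]
assert is_coherent_expansion(pts, disp, foe=(320.0, 240.0), scale=0.1)
# A wrong FOE hypothesis breaks coherence
assert not is_coherent_expansion(pts, disp, foe=(0.0, 0.0), scale=0.1)
```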


Patent
Vision Technologies | Date: 2017-03-22

A detection system for a vehicle is provided. The detection system may include at least one image capture device configured to acquire a plurality of images of an area forward of the vehicle, the area including a curb separating a road surface from an off-road surface and a data interface. The detection system may also include at least one processing device programmed to receive the plurality of images via the data interface, and determine a plurality of curb edge line candidates in the plurality of images. The at least one processing device may be further programmed to identify at least one edge line candidate as an edge line of the curb.
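The final step, identifying one edge line candidate as the curb edge, could amount to ranking candidates by how much image support they have. A toy sketch, assuming a hypothetical per-candidate `support` score (the patent leaves the ranking criterion open):

```python
def select_curb_edge(candidates, min_support=10):
    """Pick the most likely curb edge among edge-line candidates.

    Each candidate is a dict with a `support` score, e.g. the number of
    image rows containing a matching intensity step (assumed scoring).
    Candidates below `min_support` are discarded; returns None if no
    candidate qualifies.
    """
    viable = [c for c in candidates if c["support"] >= min_support]
    if not viable:
        return None
    return max(viable, key=lambda c: c["support"])

cands = [
    {"name": "shadow edge", "support": 12},
    {"name": "curb edge", "support": 41},
    {"name": "lane paint", "support": 27},
]
assert select_curb_edge(cands)["name"] == "curb edge"
```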


Patent
Vision Technologies | Date: 2017-04-19

Systems and methods use cameras to provide autonomous navigation features. In one implementation, top-down refinement in lane marking navigation is provided. The system may include one or more memories storing instructions and one or more processors configured to execute the instructions to cause the system to receive from one or more cameras one or more images of a roadway in a vicinity of a vehicle, the roadway comprising a lane marking comprising a dashed line, update a model of the lane marking based on odometry of the one or more cameras relative to the roadway, refine the updated model of the lane marking based on an appearance of dashes derived from the received one or more images and a spacing between dashes derived from the received one or more images, and cause one or more navigational responses in the vehicle based on the refinement of the updated model.
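The predict-then-refine cycle described here can be sketched in one dimension: advance the dash model by camera odometry, then snap its phase to the dash starts observed in the images. The fixed blending gain and function signature are assumptions for illustration, not the patented refinement:

```python
def refine_dash_model(model_offset, dash_spacing, ego_advance, observed_dash_starts):
    """Top-down refinement of a dashed lane-marking model (1-D sketch).

    1. Predict: advance the model's longitudinal phase by vehicle
       odometry (`ego_advance`, metres travelled since the last frame).
    2. Refine: blend the predicted phase with the mean phase of observed
       dash start positions, modulo the dash spacing.
    """
    predicted = (model_offset - ego_advance) % dash_spacing
    if not observed_dash_starts:
        return predicted
    phases = [s % dash_spacing for s in observed_dash_starts]
    observed_phase = sum(phases) / len(phases)
    # Simple fixed-gain blend of prediction and observation
    return (0.5 * predicted + 0.5 * observed_phase) % dash_spacing

# Dashes observed every 9 m starting at 3 m; prediction says phase 0
assert abs(refine_dash_model(0.0, 9.0, 0.0, [3.0, 12.0, 21.0]) - 1.5) < 1e-9
```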


Patent
Vision Technologies | Date: 2017-03-06

Driver assistance systems for detecting a structural barrier extending along a road. The driver assistance system may be mountable in a host vehicle. The camera may capture multiple image frames in the forward field of view of the camera. A processor may process motion of images of the barrier in the image frames. The camera may be a single camera. The motion of the images may be responsive to forward motion of the host vehicle and/or the motion of the images may be responsive to lateral motion of the host vehicle.


Patent
Vision Technologies | Date: 2017-04-12

An electronic device including an array of addressable registers storing data. An input register connected to the array stores an input command parameter (e.g., an opcode of a command) and its associated operands in one or more input registers connected to the addressable register array. A single instance of a command accesses at least one register of the array. Based on the input command parameter, the command, for all of the address operands: reads a datum previously stored in at least one register, updates the datum to produce an updated datum, and writes the updated datum into at least one register. The command has multiple address operands referencing the one or more registers and supports two or more of the address operands being identical. The device includes logic circuitry that provides a logical output signal to the processing circuitry indicating which, if any, of the address operands are identical. Based on the logical output, the processing circuitry processes any identical address operands first, prior to writing the updated datum into the at least one register, so that a new instance of the command begins processing on a consecutive clock pulse and the command throughput is one command per clock pulse.
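A behavioural (not cycle-accurate) model may help make the read-modify-write command concrete. The class and opcode names below are invented for illustration; the comment marks where real hardware would apply the identical-operand forwarding the abstract describes:

```python
class RegisterArray:
    """Toy behavioural model of the described read-modify-write command.

    The command reads a datum at a source address, updates it per the
    opcode, and writes it to a destination address. In the patented
    device, when address operands are identical the updated value is
    forwarded internally rather than re-read, so the next command can
    start on the following clock pulse (one command per clock).
    """

    def __init__(self, size):
        self.regs = [0] * size

    def execute(self, opcode, src, dst):
        datum = self.regs[src]          # read
        if opcode == "INC":             # update (illustrative opcodes)
            datum += 1
        elif opcode == "DBL":
            datum *= 2
        # With src == dst, hardware would forward `datum` here instead of
        # waiting for the write-back before the next read.
        self.regs[dst] = datum          # write
        return datum

ra = RegisterArray(4)
ra.regs[1] = 5
assert ra.execute("INC", src=1, dst=1) == 6   # identical address operands
assert ra.execute("DBL", src=1, dst=2) == 12  # distinct operands
```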


Grant
Agency: European Commission | Branch: H2020 | Program: SGA-RIA | Phase: FETFLAGSHIP | Award Amount: 89.00M | Year: 2016

This project is the second in the series of EC-financed parts of the Graphene Flagship. The Graphene Flagship is a 10-year research and innovation endeavour with a total project cost of 1,000,000,000 euros, funded jointly by the European Commission and member states and associated countries. The first part of the Flagship was a 30-month Collaborative Project, Coordination and Support Action (CP-CSA) under the 7th framework programme (2013-2016), while this and the following parts are implemented as Core Projects under the Horizon 2020 framework.

The mission of the Graphene Flagship is to take graphene and related layered materials from a state of raw potential to a point where they can revolutionise multiple industries. This will bring a new dimension to future technology: a faster, thinner, stronger, more flexible, and broadband revolution. Our programme will put Europe firmly at the heart of the process, with a manifold return on the EU investment, both in terms of technological innovation and economic growth. To realise this vision, we have brought together a large European consortium with about 150 partners in 23 countries. The partners represent academia, research institutes and industries, which work closely together in 15 technical work packages and five supporting work packages covering the entire value chain from materials to components and systems.

As time progresses, the centre of gravity of the Flagship moves towards applications, which is reflected in the increasing importance of the higher (system) levels of the value chain. In this first core project the main focus is on components and initial system-level tasks. The first core project is divided into 4 divisions, which in turn comprise 3 to 5 work packages on related topics. A fifth, external division acts as a link to the parts of the Flagship that are funded by the member states and associated countries, or by other funding sources. This creates a collaborative framework for the entire Flagship.
