Bentonville, AR, United States

VT Systems, also known as Vision Technologies Systems, is an integrated engineering group providing solutions and services in the aerospace, electronics, land systems, and marine sectors. It has 6,300 professional and technical employees. VT Systems is the United States corporate entity of ST Engineering, a global defense contractor. (Source: Wikipedia)



Patent
Vision Technologies | Date: 2016-06-09

A multi-core processor configured to improve processing performance in certain computing contexts is provided. The multi-core processor includes multiple processing cores that implement barrel threading to execute multiple instruction threads in parallel while ensuring that the effect of an idle instruction or thread on the performance of the processor is minimized. The multiple cores can also share a common data cache, thereby minimizing the need for expensive and complex mechanisms to mitigate inter-cache coherency issues. Barrel threading can minimize the latency impacts associated with a shared data cache. In some examples, the multi-core processor can also include a serial processor configured to execute single threaded programming code that may not yield satisfactory performance in a processing environment that employs barrel threading.
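
As a rough illustration of the barrel-threading idea in this abstract, the sketch below simulates a single core issuing one instruction per cycle from a different thread in round-robin order, skipping any thread whose previous instruction (for example, a cache miss) has not yet completed. The thread model, latencies, and function names are illustrative assumptions, not details taken from the patent.

    from collections import deque

    def run_barrel_core(threads):
        """Issue one instruction per cycle, round-robin across threads.

        `threads` is a list of deques; each entry is the latency (in cycles)
        of the next instruction's result, e.g. 1 for a simple op or a larger
        value for a cache miss. A thread whose previous result is still
        pending is skipped, so a stalled thread never blocks the others.
        """
        ready_at = [0] * len(threads)   # cycle at which each thread may issue again
        next_tid = 0
        cycle = 0
        while any(threads):             # run until every thread is drained
            for offset in range(len(threads)):
                tid = (next_tid + offset) % len(threads)
                if threads[tid] and ready_at[tid] <= cycle:
                    ready_at[tid] = cycle + threads[tid].popleft()
                    next_tid = (tid + 1) % len(threads)
                    break
            cycle += 1
        return cycle

    if __name__ == "__main__":
        # Four threads, each mixing cheap 1-cycle ops with a 10-cycle "miss".
        workload = [deque([1, 10, 1, 1]) for _ in range(4)]
        print("total cycles:", run_barrel_core(workload))

Because a long-latency instruction stalls only its own thread, the remaining threads keep the core busy, which is the property the abstract relies on to tolerate the latency of a shared data cache.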


Patent
Vision Technologies | Date: 2016-11-17

A system and method of determining whether an object captured in a series of images is a three-dimensional object or an image of an object by comparing image data of the object captured in the images at various angles relative to the object, to image data of one or more reference images of an actual object. The image data derived from the captured images may vary with the different angles at which such images were captured, and such variances may be compared to the reference images, whereas image data of an image of an object will not comparably vary.
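
A hedged sketch of the comparison this abstract describes: descriptors extracted from views of a genuine three-dimensional object should vary with viewing angle roughly as reference views of a real object do, while descriptors from a flat photograph of the object vary much less. The descriptor, the variation measure, and the 0.5 tolerance are assumptions for illustration, not the patent's actual image data or decision rule.

    import numpy as np

    def angular_variation(descriptors):
        """Mean pairwise distance between per-angle image descriptors."""
        d = np.asarray(descriptors, dtype=float)
        diffs = d[:, None, :] - d[None, :, :]
        return float(np.linalg.norm(diffs, axis=-1).mean())

    def looks_three_dimensional(captured, reference, tolerance=0.5):
        """True when the captured views vary with angle at least `tolerance`
        times as much as reference views of a known-real object do."""
        return angular_variation(captured) >= tolerance * angular_variation(reference)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        reference = rng.normal(size=(5, 8))            # 5 reference angles, 8-D descriptor
        real_object = reference + rng.normal(scale=0.1, size=(5, 8))
        flat_photo = np.tile(reference[0], (5, 1))     # barely changes with angle
        print("real object:", looks_three_dimensional(real_object, reference))
        print("flat photo: ", looks_three_dimensional(flat_photo, reference))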


Patent
Vision Technologies | Date: 2016-12-09

An imaging system for a vehicle may include a first image capture device having a first field of view and configured to acquire a first image relative to a scene associated with the vehicle, the first image being acquired as a first series of image scan lines captured using a rolling shutter. The imaging system may also include a second image capture device having a second field of view different from the first field of view and that at least partially overlaps the first field of view, the second image capture device being configured to acquire a second image relative to the scene associated with the vehicle, the second image being acquired as a second series of image scan lines captured using a rolling shutter. As a result of overlap between the first field of view and the second field of view, a first overlap portion of the first image corresponds with a second overlap portion of the second image. The first image capture device has a first scan rate associated with acquisition of the first series of image scan lines that is different from a second scan rate associated with acquisition of the second series of image scan lines, such that the first image capture device acquires the first overlap portion of the first image over a period of time during which the second overlap portion of the second image is acquired.
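
The scan-rate relationship in this abstract can be made concrete with a small calculation, assuming each rolling-shutter camera sweeps its field of view line by line at a constant rate: if both cameras are to spend the same period of time over the shared overlap region, their scan rates must be in proportion to the number of scan lines each devotes to that region. The numbers below are illustrative, not values from the source.

    def required_scan_rate(lines_in_overlap_a, scan_rate_a, lines_in_overlap_b):
        """Scan rate (lines/second) camera B needs so that both cameras spend
        the same amount of time sweeping the shared overlap region."""
        overlap_time_s = lines_in_overlap_a / scan_rate_a
        return lines_in_overlap_b / overlap_time_s

    if __name__ == "__main__":
        # Camera A (wide field of view): 600 of its scan lines cover the overlap.
        # Camera B (narrow field of view): the same region spans 1200 of its lines,
        # so B must scan twice as fast to finish the overlap in the same period.
        rate_b = required_scan_rate(600, scan_rate_a=30_000, lines_in_overlap_b=1200)
        print(f"camera B scan rate: {rate_b:.0f} lines/s")   # -> 60000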


Patent
Vision Technologies | Date: 2016-12-28

A navigation system for a vehicle may include at least one image capture device configured to acquire a plurality of images of an environment of a vehicle and a radar sensor to detect an object in the environment of the vehicle and to provide an output including range information indicative of at least one of a range or range rate between the vehicle and the object. The system may also include at least one processing device programmed to: receive the plurality of images from the at least one image capture device; receive the output from the radar sensor; determine, for each of a plurality of image segments in a first image, from among the plurality of images, and corresponding image segments in a second image, from among the plurality of images, an indicator of optical flow; use range information determined based on the output of the radar sensor together with the indicators of optical flow determined for each of the plurality of image segments in the first image and the corresponding image segments in the second image to calculate for each of a plurality of imaged regions at least one value indicative of a focus of expansion; identify a target object region, including at least a subset of the plurality of imaged regions that share a substantially similar focus of expansion; and cause a system response based on the identified target object region.
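
One step in this abstract, calculating a focus of expansion from optical-flow indicators, can be sketched as a least-squares fit: for a translating camera, each flow vector should lie along the ray from the focus of expansion through its image point. The sketch below shows only that fit; the radar range/range-rate fusion and the grouping of image regions are not reproduced, and all names and noise levels are illustrative assumptions.

    import numpy as np

    def estimate_foe(points, flows):
        """Least-squares focus of expansion: each flow vector should satisfy
        flow x (point - FOE) = 0 (2-D cross product), giving one linear
        equation -vy*fx + vx*fy = vx*py - vy*px per tracked point."""
        p = np.asarray(points, dtype=float)
        v = np.asarray(flows, dtype=float)
        a = np.column_stack([-v[:, 1], v[:, 0]])
        b = v[:, 0] * p[:, 1] - v[:, 1] * p[:, 0]
        foe, *_ = np.linalg.lstsq(a, b, rcond=None)
        return foe

    if __name__ == "__main__":
        true_foe = np.array([320.0, 240.0])
        rng = np.random.default_rng(1)
        pts = rng.uniform([0, 0], [640, 480], size=(50, 2))
        flows = 0.05 * (pts - true_foe) + rng.normal(scale=0.2, size=(50, 2))
        print("estimated FOE:", estimate_foe(pts, flows).round(1))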


Patent
Vision Technologies | Date: 2017-03-29

Systems and methods are provided for detecting an object and causing a vehicle to brake based on the detection. In one implementation, an object detecting and braking system for a vehicle includes at least one image capture device configured to acquire a plurality of images of an area including an object in front of the vehicle. The system includes at least one processing device programmed to perform a first image analysis to determine a first estimated time-to-collision of the vehicle with the object, and to perform a second image analysis to determine a second estimated time-to-collision of the vehicle with the object. The processing device is also programmed to calculate a difference between the first estimated time-to-collision and the second estimated time-to-collision, to determine that the difference does not exceed a predetermined threshold, and to cause the vehicle to brake based on the determination that the difference does not exceed the predetermined threshold.
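
The braking decision described in this abstract reduces to a consistency check between two independently derived time-to-collision estimates, sketched below. The 0.5-second threshold is an illustrative assumption; in a real system the magnitude of the estimates would presumably also be taken into account before braking.

    def should_brake(ttc_first_s: float, ttc_second_s: float,
                     max_disagreement_s: float = 0.5) -> bool:
        """Return True only when the two time-to-collision estimates agree to
        within `max_disagreement_s`, mirroring the threshold check above."""
        return abs(ttc_first_s - ttc_second_s) <= max_disagreement_s

    if __name__ == "__main__":
        print(should_brake(1.8, 2.0))   # True: estimates agree, braking is triggered
        print(should_brake(1.8, 4.0))   # False: estimates disagree, no braking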


Patent
Vision Technologies | Date: 2017-03-06

Driver assistance systems for detecting a structural barrier extending along a road. The driver assistance system may be mountable in a host vehicle and may include a camera. The camera may capture multiple image frames in the forward field of view of the camera. A processor may process motion of images of the barrier in the image frames. The camera may be a single camera. The motion of the images may be responsive to forward motion of the host vehicle and/or the motion of the images may be responsive to lateral motion of the host vehicle.
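
As one hedged illustration of the kind of image-motion cue such a system could exploit (not the patent's claimed method): under forward ego-motion and a pinhole camera model, a feature lying on the road plane has a predictable flow for its image row, while a feature on an elevated structure such as a barrier at the same row lies farther away and moves less. All geometry, parameter names, and thresholds below are assumptions.

    def road_plane_flow(row_px, focal_px, cam_height_m, forward_motion_m):
        """Predicted image flow (pixels) for a road-plane point seen `row_px`
        below the horizon after the vehicle advances `forward_motion_m`."""
        depth_m = focal_px * cam_height_m / row_px          # road-plane depth at this row
        return focal_px * cam_height_m / (depth_m - forward_motion_m) - row_px

    def is_elevated_structure(observed_flow_px, row_px, focal_px=1000.0,
                              cam_height_m=1.2, forward_motion_m=1.0,
                              ratio_threshold=0.7):
        """Flag features that move noticeably less than the road-plane prediction."""
        expected = road_plane_flow(row_px, focal_px, cam_height_m, forward_motion_m)
        return observed_flow_px < ratio_threshold * expected

    if __name__ == "__main__":
        print(is_elevated_structure(observed_flow_px=1.0, row_px=50))   # True: barrier-like
        print(is_elevated_structure(observed_flow_px=2.2, row_px=50))   # False: road-like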


Systems and methods use cameras to provide autonomous and/or driver-assist navigation features. In some implementations, techniques for predicting the location of first roadway lane constraints are provided. The system may receive multiple images of a roadway in a vicinity of a vehicle, recognize a first roadway lane constraint, and, when lane prediction conditions are determined to be satisfied, predict a location of a second roadway lane constraint. In some implementations, techniques for detecting and responding to construction zones are provided. The system may receive multiple images of a roadway in a vicinity of a vehicle, recognize indicators of a construction zone in the images, determine that the vehicle is proximate to a construction zone, and output a signal indicating that the vehicle is proximate to a construction zone.
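
A minimal sketch of the lane-prediction step mentioned above, under the assumption that prediction is triggered when only one lane constraint is reliably detected and the missing one is placed an assumed lane width away on the opposite side of the vehicle. The condition, the signed-offset convention, and the 3.7 m default are illustrative assumptions, not values from the source.

    from typing import Optional

    def predict_second_constraint(first_offset_m: Optional[float],
                                  second_offset_m: Optional[float],
                                  assumed_lane_width_m: float = 3.7) -> Optional[float]:
        """Predict the missing lane constraint's signed lateral offset from the
        vehicle centerline (negative = left, positive = right)."""
        if second_offset_m is not None:
            return second_offset_m                  # already observed directly
        if first_offset_m is None:
            return None                             # prediction conditions not met
        direction = 1.0 if first_offset_m < 0 else -1.0
        return first_offset_m + direction * assumed_lane_width_m

    if __name__ == "__main__":
        print(predict_second_constraint(first_offset_m=-1.6, second_offset_m=None))  # ~2.1
        print(predict_second_constraint(first_offset_m=-1.6, second_offset_m=1.9))   # 1.9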


Patent
Vision Technologies | Date: 2017-04-12

Systems and methods are provided for detecting an object in front of a vehicle. In one implementation, an object detecting system includes an image capture device configured to acquire a plurality of images of an area, a data interface, and a processing device programmed to compare a first image to a second image to determine displacement vectors between pixels, to search for a region of coherent expansion that is a set of pixels in at least one of the first image and the second image, for which there exists a common focus of expansion and a common scale magnitude such that the set of pixels satisfy a relationship between pixel positions, displacement vectors, the common focus of expansion, and the common scale magnitude, and to identify presence of a substantially upright object based on the set of pixels.
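
A hedged sketch of the coherence test this abstract describes: a set of pixels forms a region of coherent expansion when every displacement vector is consistent with one shared focus of expansion (FOE) and one shared scale magnitude, here modeled as d ≈ s * (p - FOE). The residual form and tolerance are assumptions for illustration rather than the claimed relationship verbatim.

    import numpy as np

    def coherent_expansion_inliers(points, displacements, foe, scale, tol_px=0.5):
        """Boolean mask of pixels whose displacement matches scale*(p - FOE)
        to within `tol_px` pixels."""
        p = np.asarray(points, dtype=float)
        d = np.asarray(displacements, dtype=float)
        predicted = scale * (p - np.asarray(foe, dtype=float))
        return np.linalg.norm(d - predicted, axis=1) < tol_px

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        foe, scale = np.array([300.0, 200.0]), 0.04
        pts = rng.uniform([0, 0], [640, 480], size=(200, 2))
        disp = scale * (pts - foe) + rng.normal(scale=0.1, size=(200, 2))
        disp[:50] = rng.normal(scale=3.0, size=(50, 2))    # unrelated background motion
        mask = coherent_expansion_inliers(pts, disp, foe, scale)
        print("coherent pixels:", int(mask.sum()), "of", len(pts))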


Patent
Vision Technologies | Date: 2017-03-28

Systems and methods are provided for detecting an object and causing a vehicle to brake based on the detection. In one implementation, an object detecting and braking system for a vehicle includes at least one image capture device configured to acquire a plurality of images of an area including an object in front of the vehicle. The system includes at least one processing device programmed to perform a first image analysis to determine a first estimated time-to-collision of the vehicle with the object, and to perform a second image analysis to determine a second estimated time-to-collision of the vehicle with the object. The processing device is also programmed to calculate a difference between the first estimated time-to-collision and the second estimated time-to-collision, to determine that the difference does not exceed a predetermined threshold, and to cause the vehicle to brake based on the determination that the difference does not exceed the predetermined threshold.


Grant
Agency: European Commission | Branch: H2020 | Program: SGA-RIA | Phase: FETFLAGSHIP | Award Amount: 89.00M | Year: 2016

This project is the second in the series of EC-financed parts of the Graphene Flagship. The Graphene Flagship is a 10-year research and innovation endeavour with a total project cost of 1,000,000,000 euros, funded jointly by the European Commission and member states and associated countries. The first part of the Flagship was a 30-month Collaborative Project, Coordination and Support Action (CP-CSA) under the 7th Framework Programme (2013-2016), while this and the following parts are implemented as Core Projects under the Horizon 2020 framework. The mission of the Graphene Flagship is to take graphene and related layered materials from a state of raw potential to a point where they can revolutionise multiple industries. This will bring a new dimension to future technology: a faster, thinner, stronger, flexible, and broadband revolution. Our program will put Europe firmly at the heart of the process, with a manifold return on the EU investment, both in terms of technological innovation and economic growth. To realise this vision, we have brought together a large European consortium with about 150 partners in 23 countries. The partners represent academia, research institutes, and industry, and work closely together in 15 technical work packages and five supporting work packages covering the entire value chain from materials to components and systems. As time progresses, the centre of gravity of the Flagship moves towards applications, which is reflected in the increasing importance of the higher (system) levels of the value chain. In this first Core Project the main focus is on components and initial system-level tasks. The first Core Project is divided into 4 divisions, which in turn comprise 3 to 5 work packages on related topics. A fifth, external division acts as a link to the parts of the Flagship that are funded by the member states and associated countries, or by other funding sources. This creates a collaborative framework for the entire Flagship.
