Bentonville, AR, United States

VT Systems, also known as Vision Technologies Systems, is an integrated engineering group providing solutions and services in the aerospace, electronics, land systems, and marine sectors. It has 6,300 professional and technical employees. VT Systems is the United States corporate entity of ST Engineering, a global defense contractor. (Source: Wikipedia)



Patent
Vision Technologies | Date: 2016-08-17

A parking occupancy detection system includes a camera and one or more processors communicatively coupled to the camera. The one or more processors are collectively configured to receive an image of a parking region from the camera, wherein the parking region comprises a parking spot, extract an image of the parking spot from the image of the parking region, obtain an image descriptor value of the image of the parking spot, input the image descriptor value into a dynamic classification model, and classify the image descriptor value as occupied or unoccupied based on training data, wherein the training data comprises examples of occupied image descriptor values and unoccupied image descriptor values.
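The pipeline in the abstract (crop the spot, compute a descriptor, classify against labeled examples) can be sketched as below. This is a hypothetical illustration only: the toy mean-intensity descriptor and nearest-centroid rule stand in for the patent's unspecified image descriptor and dynamic classification model, and all names are invented.

```python
# Hedged sketch of the claimed pipeline: crop a parking spot from a region
# image, reduce it to a descriptor value, classify against training examples.
# Descriptor and classifier choices are illustrative, not the patented ones.

def extract_spot(region, x, y, w, h):
    """Crop the parking-spot sub-image from the region image (2-D list)."""
    return [row[x:x + w] for row in region[y:y + h]]

def descriptor(spot):
    """Toy image descriptor: mean pixel intensity of the spot."""
    pixels = [p for row in spot for p in row]
    return sum(pixels) / len(pixels)

def classify(value, occupied_examples, unoccupied_examples):
    """Nearest-centroid decision over the training descriptor values."""
    occ = sum(occupied_examples) / len(occupied_examples)
    unocc = sum(unoccupied_examples) / len(unoccupied_examples)
    return "occupied" if abs(value - occ) < abs(value - unocc) else "unoccupied"

region = [[40] * 8 for _ in range(8)]        # dark pixels suggest a parked car
spot = extract_spot(region, 2, 2, 4, 4)
label = classify(descriptor(spot),
                 occupied_examples=[35, 50],
                 unoccupied_examples=[180, 200])
print(label)  # → occupied
```

A "dynamic" classification model in the abstract's sense would additionally update the training examples over time; the static centroids here are the simplest stand-in.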


Patent
Vision Technologies | Date: 2016-08-24

A camera mount for mounting a camera inside a windshield of a vehicle. The camera includes a lens mount and a camera housing. The front tip of the lens mount is constrained to be in close proximity to or constrained to contact the inside of the windshield for different rake angles of the windshield.


Patent
Vision Technologies | Date: 2017-03-29

Systems and methods are provided for detecting an object and causing a vehicle to brake based on the detection. In one implementation, an object detecting and braking system for a vehicle includes at least one image capture device configured to acquire a plurality of images of an area including an object in front of the vehicle. The system includes at least one processing device programmed to perform a first image analysis to determine a first estimated time-to-collision of the vehicle with the object, and to perform a second image analysis to determine a second estimated time-to-collision of the vehicle with the object. The processing device is also programmed to calculate a difference between the first estimated time-to-collision and the second estimated time-to-collision, to determine that the difference does not exceed a predetermined threshold, and to cause the vehicle to brake based on the determination that the difference does not exceed the predetermined threshold.
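The braking decision described above reduces to a consistency check between two independent time-to-collision estimates. A minimal sketch, with an assumed threshold value (the patent does not specify one):

```python
# Illustrative sketch (not the patented implementation) of the cross-check
# in the abstract: brake only when two independently computed
# time-to-collision (TTC) estimates agree to within a threshold.

def should_brake(ttc_first, ttc_second, threshold=0.5):
    """Return True when the two TTC estimates (in seconds) differ by no
    more than `threshold`, i.e. the detection is trusted and braking fires."""
    return abs(ttc_first - ttc_second) <= threshold

print(should_brake(1.8, 2.0))  # difference 0.2 s <= 0.5 s → True
print(should_brake(1.8, 3.1))  # difference 1.3 s > 0.5 s  → False
```

Requiring agreement between two analyses is a standard way to suppress false positives from any single image-analysis method before triggering an intervention as severe as braking.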


Grant
Agency: European Commission | Branch: H2020 | Program: SGA-RIA | Phase: FETFLAGSHIP | Award Amount: 89.00M | Year: 2016

This project is the second in the series of EC-financed parts of the Graphene Flagship. The Graphene Flagship is a 10-year research and innovation endeavour with a total project cost of 1,000,000,000 euros, funded jointly by the European Commission and member states and associated countries. The first part of the Flagship was a 30-month Collaborative Project, Coordination and Support Action (CP-CSA) under the 7th framework program (2013-2016), while this and the following parts are implemented as Core Projects under the Horizon 2020 framework.

The mission of the Graphene Flagship is to take graphene and related layered materials from a state of raw potential to a point where they can revolutionise multiple industries. This will bring a new dimension to future technology: a faster, thinner, stronger, flexible, and broadband revolution. Our program will put Europe firmly at the heart of the process, with a manifold return on the EU investment, both in terms of technological innovation and economic growth.

To realise this vision, we have brought together a large European consortium with about 150 partners in 23 countries. The partners represent academia, research institutes and industry, which work closely together in 15 technical work packages and five supporting work packages covering the entire value chain from materials to components and systems. As time progresses, the centre of gravity of the Flagship moves towards applications, which is reflected in the increasing importance of the higher (system) levels of the value chain. In this first Core Project the main focus is on components and initial system-level tasks. The first Core Project is divided into 4 divisions, which in turn comprise 3 to 5 work packages on related topics. A fifth, external division acts as a link to the parts of the Flagship that are funded by the member states and associated countries, or by other funding sources. This creates a collaborative framework for the entire Flagship.


Patent
Vision Technologies | Date: 2016-09-22

Systems and methods may calibrate an indicator of speed of an autonomous vehicle. In one implementation, a system may include at least one processor programmed to: receive from a camera a plurality of images representative of an environment of the vehicle; analyze the plurality of images to identify at least two recognized landmarks; determine, based on known locations of the two recognized landmarks, a value indicative of a distance between the at least two recognized landmarks; determine, based on an output of at least one sensor associated with the autonomous vehicle, a measured distance between the at least two landmarks; and determine a correction factor for the at least one sensor based on a comparison of the value indicative of the distance between the at least two recognized landmarks and the measured distance between the at least two landmarks.
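The calibration arithmetic the abstract describes is a ratio: the known map distance between two recognized landmarks versus the distance the vehicle's own sensor accumulated between them. A hedged sketch, with illustrative numbers:

```python
# Hypothetical sketch of the correction-factor computation: compare the
# known distance between two recognized landmarks with the distance an
# onboard sensor (e.g. an odometer) measured between them. Values invented.

def correction_factor(known_distance_m, measured_distance_m):
    """Scale factor that maps the sensor's measurement onto ground truth."""
    return known_distance_m / measured_distance_m

known = 500.0     # distance between the landmarks from their known locations
measured = 510.0  # distance the sensor accumulated between the same landmarks
factor = correction_factor(known, measured)
corrected_speed = 30.6 * factor   # apply the factor to a raw speed reading
print(round(factor, 4))  # → 0.9804
```

The same factor calibrated on distance applies directly to speed, since speed is distance accumulated per unit time by the same sensor.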


Patent
Vision Technologies | Date: 2016-09-22

A system for autonomously navigating a vehicle along a road segment is disclosed. The system may have at least one processor. The processor may be programmed to receive from an image capture device at least one image representative of an environment of the vehicle. The processor may also be programmed to analyze the at least one image to identify at least one recognized landmark. Further, the processor may be programmed to determine a current location of the vehicle relative to a predetermined road model trajectory associated with the road segment based, at least in part, on a predetermined location of the recognized landmark. In addition, the processor may be programmed to determine an autonomous steering action for the vehicle based on a direction of the predetermined road model trajectory at the determined current location of the vehicle relative to the predetermined road model trajectory.
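The steering step described above, after localization against a recognized landmark, amounts to steering toward the direction of the road model trajectory at the vehicle's current location. A minimal sketch under assumed names (the patent does not give the control law):

```python
# Illustrative sketch: once the vehicle is localized on the predetermined
# road model trajectory, the autonomous steering action can be derived from
# the heading error between the vehicle and the trajectory direction there.
import math

def steering_angle(current_heading_rad, trajectory_heading_rad):
    """Signed heading error, wrapped to (-pi, pi], used as a steering command."""
    err = trajectory_heading_rad - current_heading_rad
    return math.atan2(math.sin(err), math.cos(err))

# Vehicle localized (via a recognized landmark) at a point where the road
# model trajectory heads 10 degrees left of the vehicle's current heading.
cmd = steering_angle(math.radians(0.0), math.radians(-10.0))
print(round(math.degrees(cmd), 1))  # → -10.0
```

The `atan2(sin, cos)` wrap keeps the command well-behaved when headings straddle the ±180° boundary; a real controller would feed this error into a steering-rate or curvature law rather than command it directly.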


Patent
Vision Technologies | Date: 2016-09-22

Systems and methods are provided for autonomously navigating a vehicle along a road segment. In one implementation, a system includes at least one processor programmed to receive from a camera at least one image representative of an environment of the vehicle. The at least one image may be analyzed to identify at least one recognized landmark based upon one or more landmark group characteristics associated with a group of recognized landmarks. The current location of the vehicle relative to a predetermined road model trajectory associated with the road segment may be determined based, at least in part, on a predetermined location of the recognized landmark. An autonomous steering action for the vehicle may be determined based on a direction of the predetermined road model trajectory at the determined current location of the vehicle relative to the predetermined road model trajectory.


Patent
Vision Technologies | Date: 2016-09-22

A system for autonomously navigating a vehicle along a road segment may be based on a predetermined landmark location. The system may include at least one processor programmed to receive from a camera, at least one image representative of an environment of the vehicle, and determine a position of the vehicle along a predetermined road model trajectory associated with the road segment based, at least in part, on information associated with the at least one image. The at least one processor may be further programmed to identify a recognized landmark forward of the vehicle based on the determined position, wherein the recognized landmark is beyond a sight range of the camera, and determine a current distance between the vehicle and the recognized landmark by comparing the determined position of the vehicle with a predetermined position of the recognized landmark. The at least one processor may also be programmed to determine an autonomous navigational response for the vehicle based on the determined current distance.
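Because the landmark in this abstract is known from the map rather than seen by the camera, the distance determination is pure position arithmetic along the road model trajectory. A hedged sketch with invented values:

```python
# Illustrative sketch of the distance check the abstract describes: the
# landmark's position is predetermined (mapped), so even though it is beyond
# the camera's sight range, distance-to-landmark is simple arithmetic on
# positions along the road model trajectory. Numbers are invented.

def distance_to_landmark(vehicle_pos_m, landmark_pos_m):
    """Along-trajectory distance between the vehicle and a mapped landmark."""
    return landmark_pos_m - vehicle_pos_m

vehicle_at = 1200.0   # vehicle position along the road model trajectory (m)
landmark_at = 1750.0  # predetermined landmark position (m), beyond camera range
remaining = distance_to_landmark(vehicle_at, landmark_at)
print(remaining)  # → 550.0
# e.g. begin decelerating for a mapped stop line once remaining < 300 m
```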


Patent
Vision Technologies | Date: 2016-09-22

A method of processing vehicle navigation information for use in autonomous vehicle navigation is provided. The method includes receiving, by a server, navigation information from a plurality of vehicles. The navigation information from the plurality of vehicles is associated with a common road segment. The method also includes storing, by the server, the navigation information associated with the common road segment. The method also includes generating, by the server, at least a portion of an autonomous vehicle road navigation model for the common road segment based on the navigation information from the plurality of vehicles. The method further includes distributing, by the server, the autonomous vehicle road navigation model to one or more autonomous vehicles for use in autonomously navigating the one or more autonomous vehicles along the common road segment.
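The server-side aggregation step can be pictured as averaging many vehicles' reported trajectories over the same road segment into one crowd-sourced model. A hypothetical sketch; the actual model generation in the patent is not specified:

```python
# Hypothetical sketch of the aggregation step: collect per-vehicle
# trajectories for a common road segment and average them point-wise into
# one road navigation model to redistribute. Point-wise mean is a stand-in
# for whatever model generation the patent actually uses.

def build_road_model(trajectories):
    """Point-wise mean of same-length (x, y) trajectories from many vehicles."""
    n = len(trajectories)
    length = len(trajectories[0])
    return [
        (sum(t[i][0] for t in trajectories) / n,
         sum(t[i][1] for t in trajectories) / n)
        for i in range(length)
    ]

# Three vehicles reporting slightly different tracks over one segment.
reports = [
    [(0.0, 0.0), (10.0, 0.2)],
    [(0.0, 0.1), (10.0, 0.4)],
    [(0.0, 0.2), (10.0, 0.0)],
]
model = build_road_model(reports)
print([(round(x, 3), round(y, 3)) for x, y in model])  # → [(0.0, 0.1), (10.0, 0.2)]
```

Averaging across vehicles suppresses individual sensor noise, which is the usual rationale for crowd-sourcing a shared road model from a fleet.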


Patent
Vision Technologies | Date: 2016-09-22

A system for autonomously navigating a vehicle is disclosed. The system may include at least one processor and the at least one processor may be programmed to receive from a rearward facing camera, at least one image representing an area at a rear of the vehicle and analyze the at least one rearward facing image to locate in the image a representation of at least one landmark. The at least one processor may be further programmed to determine at least one indicator of position of the landmark relative to the vehicle and determine a forward trajectory for the vehicle based, at least in part, upon the indicator of position of the landmark relative to the vehicle. Additionally, the at least one processor may be programmed to cause the vehicle to navigate along the determined forward trajectory.
