Santa Clara, CA, United States

Nvidia Corporation is an American multinational technology company based in Santa Clara, California. Nvidia manufactures graphics processing units (GPUs), as well as system-on-a-chip units for the mobile computing market. Nvidia's primary GPU product line, "GeForce", competes directly with AMD's "Radeon" products. Nvidia has also entered the gaming hardware market with its handheld Shield Portable and Shield Tablet, and the tablet market with the Tegra Note 7. In addition to GPU manufacturing, Nvidia provides parallel processing capabilities to researchers and scientists, allowing them to run high-performance applications efficiently; these systems are deployed in supercomputing sites around the world. More recently, Nvidia has moved into the mobile computing market, where it produces Tegra mobile processors for smartphones and tablets, as well as vehicle navigation and entertainment systems. In addition to Advanced Micro Devices, its competitors include Intel and Qualcomm. (Source: Wikipedia)



A method and renderer for a progressive computation of a light transport simulation are provided. The method includes the steps of employing a low discrepancy sequence of samples; and scrambling an index of the low discrepancy sequence independently per region using a hash value based on coordinates of a respective region, wherein for each set of a power-of-two number of the samples, the scrambling is a permutation.
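The per-region index scrambling described above can be sketched as follows. This is a minimal illustration, not Nvidia's implementation: the hash function and block size are assumptions, chosen so that XOR-ing the index with a per-region hash inside each power-of-two block is provably a permutation of that block.

```python
def region_hash(rx, ry):
    # Hypothetical integer hash of region coordinates; any hash with
    # good avalanche behavior would serve the same purpose.
    h = (rx * 73856093) ^ (ry * 19349663)
    h ^= h >> 13
    h = (h * 0x85EBCA6B) & 0xFFFFFFFF
    h ^= h >> 16
    return h

def scrambled_index(i, rx, ry, log2_block=4):
    # XOR-scramble the sample index within each power-of-two block.
    # XOR with a fixed value is a bijection on the integers 0..2^k - 1,
    # so every block of 2^k consecutive samples is still consumed
    # exactly once -- the scrambling is a permutation, as required.
    mask = (1 << log2_block) - 1
    block = i & ~mask
    return block | ((i ^ region_hash(rx, ry)) & mask)
```

Because each region (e.g., a tile of pixels) derives its hash from its own coordinates, neighboring regions consume the shared low-discrepancy sequence in decorrelated orders.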


The present disclosure describes a video encoder, a video encoding system, and a video encoding method. The video encoder comprises a logic control module and an encoding module. The logic control module is configured to receive a control command sent from an external controller for encoding a specified portion of each frame of an image, and to forward the control command to the encoding module; the encoding module is configured to receive the control command from the logic control module and to encode the specified portion of each frame according to the command, cooperating with a plurality of other video encoders to complete the encoding of each frame. With the video encoder, video encoding system, and video encoding method of the present disclosure, each frame can be encoded jointly by a plurality of video encoders, significantly reducing encoding time, diminishing encoding latency, and achieving real-time encoding of high-definition video sources, in particular those with a resolution of 4K or above.
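One way to assign "a specified portion of each frame" to each cooperating encoder is a simple row-slice partition. The sketch below is an assumption for illustration; the patent does not prescribe how frames are partitioned, only that an external controller commands each encoder to handle its portion.

```python
def partition_frame(height, n_encoders):
    # Divide a frame's rows into near-equal horizontal slices, one per
    # encoder. Each (start, end) pair is the portion that the external
    # controller would assign to one encoder via a control command.
    base, extra = divmod(height, n_encoders)
    slices, row = [], 0
    for i in range(n_encoders):
        rows = base + (1 if i < extra else 0)  # spread the remainder
        slices.append((row, row + rows))
        row += rows
    return slices
```

For a 4K frame (2160 rows) and four encoders this yields four 540-row slices, each of which can be encoded in parallel before the bitstreams are stitched together.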


A system, method, and computer program product are provided for implementing anti-aliasing operations using a programmable sample pattern table. The method includes the steps of receiving an instruction that causes one or more values to be stored in one or more corresponding entries of the programmable sample pattern table and performing an anti-aliasing operation based on at least one value stored in the programmable sample pattern table. At least one value is selected from the programmable sample pattern table based, at least in part, on a location of one or more corresponding pixels.
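A minimal sketch of such a table, under assumed details: the table holds 4x sub-pixel sample offsets, and the entry is selected from the pixel's location (here, its parity) so adjacent pixels sample different positions. The offsets and the parity-based selection are illustrative assumptions, not values from the patent.

```python
# Hypothetical programmable sample pattern table for 4x anti-aliasing.
# Each entry lists sub-pixel (dx, dy) sample offsets in [0, 1).
SAMPLE_PATTERN_TABLE = {
    (0, 0): [(0.375, 0.125), (0.875, 0.375), (0.125, 0.625), (0.625, 0.875)],
    (1, 0): [(0.625, 0.125), (0.125, 0.375), (0.875, 0.625), (0.375, 0.875)],
    (0, 1): [(0.125, 0.125), (0.625, 0.375), (0.375, 0.625), (0.875, 0.875)],
    (1, 1): [(0.875, 0.125), (0.375, 0.375), (0.625, 0.625), (0.125, 0.875)],
}

def samples_for_pixel(px, py):
    # Select a table entry from the pixel's location (parity of its
    # coordinates), then translate the sub-pixel offsets to absolute
    # sample positions within that pixel.
    pattern = SAMPLE_PATTERN_TABLE[(px & 1, py & 1)]
    return [(px + dx, py + dy) for dx, dy in pattern]
```

Varying the pattern per pixel location trades regular aliasing structure for less objectionable noise, which is the usual motivation for making the table programmable rather than fixed.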


Patent
Nvidia | Date: 2016-11-14

In embodiments of the invention, an apparatus may include a display comprising a plurality of pixels and a computer system coupled with the display and operable to instruct the display to display images. The apparatus may further include a spatial light modulator (SLM) array located adjacent to the display and comprising a plurality of SLMs, wherein the SLM array is operable to produce a light field by altering light emitted by the display to simulate an object that is in focus to an observer while the display and the SLM array are located within a near-eye range of the observer.


Patent
Nvidia | Date: 2016-12-02

One embodiment of the present invention sets forth a technique for performing nested kernel execution within a parallel processing subsystem. The technique involves enabling a parent thread to launch a nested child grid on the parallel processing subsystem, and enabling the parent thread to perform a thread synchronization barrier on the child grid for proper execution semantics between the parent thread and the child grid. This technique advantageously enables the parallel processing subsystem to perform a richer set of programming constructs, such as conditionally executed and nested operations and externally defined library functions, without the additional complexity of CPU involvement.
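The parent-launches-children-then-barriers pattern can be illustrated with ordinary host-side tasks. This is a conceptual analogue only, assuming a thread pool in place of a GPU grid; the patent's technique is a device-side mechanism (as in CUDA dynamic parallelism), not host threading.

```python
from concurrent.futures import ThreadPoolExecutor, wait

_pool = ThreadPoolExecutor(max_workers=4)

def child_kernel(i, out):
    # One element of the "child grid": each task computes its own result.
    out[i] = i * i

def parent_kernel(n):
    # The parent launches a nested grid of child tasks ...
    out = [None] * n
    futures = [_pool.submit(child_kernel, i, out) for i in range(n)]
    # ... then blocks on a synchronization barrier over the child grid
    # before consuming the children's results, giving the same
    # parent/child execution semantics the abstract describes.
    wait(futures)
    return out
```

The barrier is what makes conditional and nested constructs safe: the parent's code after the barrier can rely on every child having completed.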


Patent
Nvidia | Date: 2016-03-03

An apparatus and method for gesture detection and recognition. The apparatus includes a processing element, a radar sensor, a depth sensor, and an optical sensor. The three sensors are coupled to the processing element and are configured for short-range gesture detection and recognition. The processing element is further configured to detect and recognize a hand gesture based on data acquired with the radar, depth, and optical sensors.


A system, method, and computer program product are provided for adjusting vertex positions. One or more viewport dimensions are received and a snap spacing is determined based on the one or more viewport dimensions. The vertex positions are adjusted to a grid according to the snap spacing. The precision of the vertex adjustment may increase as at least one dimension of the viewport decreases. The precision of the vertex adjustment may decrease as at least one dimension of the viewport increases.
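One concrete way to get the stated precision behavior is a fixed-point bit budget: the bits needed to address whole pixels come out of a fixed total, and whatever remains becomes sub-pixel precision. The bit counts below are illustrative assumptions, not the patent's values.

```python
import math

def snap_spacing(viewport_dim, total_bits=16):
    # Hypothetical fixed-point budget: total_bits must cover the
    # viewport's whole-pixel range; the leftover bits are fractional.
    # A smaller viewport needs fewer integer bits, leaving more
    # fraction bits and therefore a finer snap grid -- precision
    # increases as the viewport dimension decreases, as the abstract
    # states.
    int_bits = max(1, math.ceil(math.log2(viewport_dim)))
    frac_bits = max(0, total_bits - int_bits)
    return 1.0 / (1 << frac_bits)  # grid spacing, in pixels

def snap(coord, spacing):
    # Adjust a vertex coordinate to the nearest grid position.
    return round(coord / spacing) * spacing
```

For example, a 1024-pixel-wide viewport leaves 6 fraction bits (spacing 1/64 pixel), while a 4096-pixel-wide viewport leaves only 4 (spacing 1/16 pixel).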


Patent
Nvidia | Date: 2016-09-02

The present invention facilitates efficient and effective utilization of storage management features. In one embodiment, a system comprises a storage component, a memory controller, and a communication link. The storage component stores information. The memory controller controls the storage component. The communication link communicatively couples the storage component and the memory controller. In one embodiment, the communication link communicates storage system management information between the storage component and the memory controller, and communication of the storage system management information does not interfere with command/address information communication or data information communication. In one exemplary implementation, the communication link comprises: a data bus that communicates data; a command/address bus that communicates commands and addresses, wherein the commands and addresses relate to the storage of the data; and a management communication bus that communicates storage system management information.


Grant
Agency: GTR | Branch: EPSRC | Program: | Phase: Fellowship | Award Amount: 721.30K | Year: 2016

My proposed Fellowship will revolutionise the use of High Performance Computing (HPC) within The University of Sheffield by changing perceptions of how people utilise software and are trained and supported in writing code which scales to increasingly large computer systems. I will provide leadership by demonstrating the effectiveness of specific research software engineer roles, and by growing a team of research software engineers at The University of Sheffield in order to accommodate our expanding programme of research computing. I will achieve this by: 1) developing the FLAME and FLAME GPU software to facilitate and demonstrate the impact of Graphics Processing Unit (GPU) computing on complex systems simulation; 2) vastly extending the remit of GPUComputing@Sheffield to provide advanced training and research consultancy, and to embed specific software engineering skills for high-performance data parallel computing (with GPUs and Xeon Phis) across EPSRC-remit research areas at The University of Sheffield. My first activity will enable long-term support of the extensive use of FLAME and FLAME GPU for EPSRC, industry and EU-funded research projects. The computational science and engineering projects supported will include those as diverse as computational economics, bioinformatics and transport simulation. Additionally, my software will provide a platform for more fundamental computer science research into complexity science, graphics and visualisation, programming languages and compilers, and software engineering. My second activity will champion GPU computing within The University of Sheffield (and beyond, to its collaborators and industrial partners). It will demonstrate how a specific area of research software engineering can be embedded into The University of Sheffield, and act as a model for further improvement in areas such as research software and data storage.
I will change the way people develop and use research software by providing training to students and researchers who can then embed GPU software engineering skills across research domains. I will also aid researchers who work on computationally demanding research by providing software engineering consultancy in areas that can benefit from GPU acceleration, such as mobile GPU computing for robotics, deep neural network simulation for machine learning (including speech, hearing and natural language processing) and real-time signal processing. My Fellowship will vastly expand the scale and quality of research computing at The University of Sheffield, embed skills within students and researchers (with long-term and wide-reaching results) and ensure energy-efficient use of HPC. This will promote the understanding and wider use of GPU computing within research, as well as transitioning researchers to larger regional and national HPC facilities. Ultimately my research software engineer fellowship will facilitate the delivery of excellent science whilst promoting the unique and important role of the Research Software Engineer.


One embodiment of the present invention sets forth a computer-implemented method for migrating a memory page from a first memory to a second memory. The method includes determining a first page size supported by the first memory. The method also includes determining a second page size supported by the second memory. The method further includes determining a use history of the memory page based on an entry in a page state directory associated with the memory page. The method also includes migrating the memory page between the first memory and the second memory based on the first page size, the second page size, and the use history.
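A simple decision rule combining the three inputs the method names (source page size, destination page size, use history) might look like the following. The threshold, the dictionary-shaped page state directory entry, and the coalescing rule are all illustrative assumptions, not the patented method.

```python
MIGRATION_THRESHOLD = 4  # hypothetical access-count cutoff for a "hot" page

def plan_migration(psd_entry, src_page_size, dst_page_size):
    """Return how many source-sized pages to migrate (0 = stay put).

    psd_entry stands in for a page state directory entry holding the
    page's use history as an access count.
    """
    if psd_entry["access_count"] < MIGRATION_THRESHOLD:
        return 0  # cold page: leave it in the first memory
    if dst_page_size >= src_page_size:
        # Destination uses larger pages: migrate the whole aligned
        # group of small pages that would share one destination page.
        return dst_page_size // src_page_size
    # Destination pages are smaller: move just this page; it can be
    # split into several destination-sized pages on arrival.
    return 1
```

Migrating whole aligned groups when moving toward a larger page size is what lets the destination memory map the data with a single large page instead of many small ones.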
