San Rafael, CA, United States

Autodesk, Inc. is an American multinational software corporation that makes software for the architecture, engineering, construction, manufacturing, media, and entertainment industries. Autodesk is headquartered in San Rafael, California, and features a gallery of its customers' work in its San Francisco building. The company has offices worldwide, with U.S. locations in Northern California, Oregon, and in New England in New Hampshire and Massachusetts.

The company was founded in 1982 by John Walker, a coauthor of the first versions of AutoCAD, the company's flagship computer-aided design software. Its AutoCAD and Revit software is primarily used by architects, engineers, and structural designers to design, draft, and model buildings and other structures. Autodesk software has been used in many fields, from the New York Freedom Tower to Tesla electric cars.

Autodesk became best known for AutoCAD but now develops a broad range of software for design, engineering, and entertainment, as well as a line of software for consumers, including Sketchbook, Homestyler, and Pixlr. The company makes educational versions of its software available free to qualified students and faculty through the Autodesk Education Community. Autodesk's digital prototyping software, including Autodesk Inventor and the Autodesk Product Design Suite, is used in the manufacturing industry to visualize, simulate, and analyze real-world performance using a digital model during the design process. The company's Revit line of software for Building Information Modeling is designed to let users explore the planning, construction, and management of a building virtually before it is built.

Autodesk's Media and Entertainment division creates software for visual effects, color grading, and editing, as well as animation, game development, and design visualization. Maya is 3D animation software used in film visual effects and game development. (Wikipedia)


Patent
Autodesk | Date: 2016-10-06

Techniques and systems for sub-pixel grayscale three-dimensional (3D) printing are described. A technique includes mapping a 3D digital model onto a 3D grid of voxels associated with a 3D printer; assigning a first intensity level to first voxels that are fully contained within the model, the first intensity level being sufficient to cure photoactive resin during a curing time; determining, based on geometric information provided by the model, containment degrees for second voxels that are partially contained within the model; assigning second intensity levels to the second voxels based respectively on the containment degrees, the second intensity levels being greater than a third intensity level and lesser than the first intensity level; assigning the third intensity level to third voxels that are outside of the model; and generating one or more graphic files based on the first, second, third voxels, and respectively assigned intensity levels.
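The voxel-classification step above can be sketched as follows. This is a minimal illustration, not the patented implementation: the function name, the dict-based voxel grid, and the 0–255 intensity scale are all assumptions; a real slicer would compute containment fractions from the model geometry.

```python
def assign_voxel_intensities(containment, full_intensity=255, background_intensity=0):
    """Map per-voxel containment fractions (0.0-1.0) to exposure intensities.

    Voxels fully inside the model get the full curing intensity, voxels
    outside get the background intensity, and partially contained boundary
    voxels get an intermediate grayscale level proportional to how much of
    the voxel the model occupies.
    """
    intensities = {}
    for voxel, fraction in containment.items():
        if fraction >= 1.0:        # fully contained: full curing intensity
            intensities[voxel] = full_intensity
        elif fraction <= 0.0:      # outside the model: no exposure
            intensities[voxel] = background_intensity
        else:                      # boundary voxel: sub-pixel grayscale
            span = full_intensity - background_intensity
            intensities[voxel] = background_intensity + round(fraction * span)
    return intensities
```

A half-contained boundary voxel thus receives roughly half the curing intensity, which is what produces the sub-pixel smoothing effect on the printed surface.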


One embodiment of the present invention sets forth a technique for performing tasks associated with a construction project. The technique includes transmitting to a worker, via a mobile computing device worn by the worker, a first instruction related to performing a first task included in a plurality of tasks associated with a construction project, and transmitting to a light-emitting device a command to provide a visual indicator to the worker that facilitates performing the first task, based on an input received from the mobile computing device, determining that the worker has completed the first task of the construction project, selecting, from a database that tracks eligibility of each of the plurality of tasks, a second task included in the plurality of tasks that the worker is eligible to perform, and transmitting to the worker, via the mobile computing device, a second instruction related to performing the second task.
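The task-selection step could be modeled as a simple eligibility check over a prerequisite graph. This is a hypothetical sketch: the in-memory dict stands in for the eligibility-tracking database described above, and all names are invented.

```python
def next_eligible_task(tasks, completed):
    """Pick the next task the worker is eligible to perform.

    `tasks` maps each task id to the set of prerequisite task ids; a task
    is eligible once all of its prerequisites are completed and it has not
    itself been done.  Returns None when no task is eligible.
    """
    for task, prereqs in tasks.items():
        if task not in completed and prereqs <= completed:
            return task
    return None
```

After each completion reported by the mobile device, the next instruction would be drawn from this check and pushed back to the worker.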


In various embodiments of the present invention, a blending engine blends multiple surfaces included in a three-dimensional (3D) model of an object. First, the blending engine trims off portions of the surfaces that are targeted for blending at trimming curves to generate trimmed surfaces. The blending engine then constructs a single parametric blending surface via a unified parametrization for the trimming curves. Notably, to achieve the unified parametrization, the blending engine performs one or more spherical parametrization operations that generate parametrized curves based on the trimming curves and a fundamental sphere. After constructing the parametric blending surface based on the parametrized curves, the blending engine joins the parametric blending surface to the trimmed surfaces to produce a final, smooth intersection between the surfaces. Advantageously, because the blending engine creates a single parametric blending surface, the blending engine can blend arbitrary pipe surfaces and is compatible with computer-aided design modeling subsystems.
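The core of the unified parametrization is projecting every trimming curve onto one shared sphere. A minimal sketch of that projection step, assuming the fundamental sphere is the unit sphere centered at the curves' common centroid (the actual choice of sphere in the described system is not specified here):

```python
import math

def spherical_parametrization(curves):
    """Project trimming-curve points onto a common 'fundamental' sphere.

    Each curve is a list of (x, y, z) points.  Every point is projected
    radially onto the unit sphere centered at the centroid of all curve
    points, so all curves share one spherical parameter domain.
    """
    points = [p for curve in curves for p in curve]
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    parametrized = []
    for curve in curves:
        projected = []
        for (x, y, z) in curve:
            dx, dy, dz = x - cx, y - cy, z - cz
            r = math.sqrt(dx * dx + dy * dy + dz * dz) or 1.0
            projected.append((dx / r, dy / r, dz / r))
        parametrized.append(projected)
    return parametrized
```

Because every trimming curve now lives on the same sphere, a single blending surface can be constructed over them rather than one patch per pairwise intersection.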


A method and system provide the ability to modify a three-dimensional (3D) model in a shape editing system. The 3D model is obtained and faces of the model are selected as features (S). A subset of the model that is fixed is also selected. Shape modification operations to be performed are prescribed. A deformation lattice is constructed by setting up a lattice structure with control points. A parametric space (u,v,w) is defined in terms of vertices of the lattice structure. The Euclidean space (x,y,z) of the 3D model is mapped to the parametric space (u,v,w). The deformation lattice is evaluated by selecting control points; either affine transformations are applied directly to the selected control points, or the deformation lattice is deformed based on a discrete fitting problem. The evaluated deformed model is then output.
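The lattice evaluation step resembles classic free-form deformation: a model point's parametric (u,v,w) coordinate is blended against the lattice control points. A minimal sketch for a single 2x2x2 lattice cell with trilinear weights (the described system may use a larger lattice and higher-order basis functions):

```python
def trilinear_ffd(point, lattice):
    """Evaluate a 2x2x2 deformation lattice at one parametric point.

    `point` is the (u, v, w) coordinate in [0, 1]^3 and lattice[i][j][k]
    is the (x, y, z) control point at cell corner (i, j, k).  The deformed
    Euclidean position is the trilinear blend of the eight corners.
    """
    u, v, w = point
    x = y = z = 0.0
    for i in (0, 1):
        bu = u if i else 1.0 - u
        for j in (0, 1):
            bv = v if j else 1.0 - v
            for k in (0, 1):
                bw = w if k else 1.0 - w
                cx, cy, cz = lattice[i][j][k]
                weight = bu * bv * bw
                x += weight * cx
                y += weight * cy
                z += weight * cz
    return (x, y, z)
```

When the control points sit at their rest positions, the mapping is the identity; moving a control point drags nearby model points with it, which is what an affine transformation applied to selected control points achieves.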


In one embodiment, a device generator automatically generates a circuit, firmware, and assembly instructions for a programmed electronic device based on behaviors that are specified via mappings between triggers and actions. In operation, the device generator generates a circuit based on the mappings. The circuit specifies instances of electronic components and interconnections between the instances. Subsequently, the device generator generates firmware based on code fragments associated with the triggers and actions included in the mappings that specify the high-level behavior. In addition, the device generator generates assembly instructions based on the interconnections between the instances. Advantageously, the device generator provides an automated, intuitive design process for programmed electronic devices that does not rely on the designers possessing any significant technical expertise. By contrast, conventional design processes for programmed electronic devices typically only automate certain steps of the design process, require specialized knowledge, and/or are limited in applicability.
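The firmware-generation step, stitching stored code fragments together according to the trigger-action mappings, could look like this. Purely illustrative: the Arduino-style fragments and all function names are assumptions, not the patented generator.

```python
def generate_firmware(mappings, fragments):
    """Assemble firmware source from trigger->action mappings.

    `mappings` is a list of (trigger, action) name pairs and `fragments`
    maps each name to its code fragment.  The generated loop polls each
    trigger condition and runs the paired action when it fires.
    """
    lines = ["void loop() {"]
    for trigger, action in mappings:
        lines.append(f"  if ({fragments[trigger]}) {{")
        lines.append(f"    {fragments[action]};")
        lines.append("  }")
    lines.append("}")
    return "\n".join(lines)
```

A "button press turns on LED" mapping would thus expand into a complete polling loop without the designer writing any firmware by hand.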


A method, system, and apparatus provide the ability to globally register point cloud scans. A first and a second three-dimensional (3D) point cloud are acquired. The point clouds have a subset of points in common and there is no prior knowledge on an alignment between the point clouds. Particular points that are likely to be identified in the other point cloud are detected. Information about a normal of each of the detected particular points is retrieved. A descriptor (that only describes 3D information) is built on each of the detected particular points. Matching pairs of descriptors are determined. Rigid transformation hypotheses are estimated (based on the matching pairs) and represent a transformation. The hypotheses are accumulated into a fitted space, selected based on density, and validated based on a scoring. One of the hypotheses is then selected as a registration.
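The hypothesis-accumulation idea, in which each matched pair votes for a transformation and the densest cluster of votes wins, can be sketched in a simplified translation-only form. This is an assumption-laden toy: real registration estimates full rigid transforms (rotation plus translation) from the descriptor matches, and the bin size here is arbitrary.

```python
from collections import Counter

def select_registration(matches, bin_size=0.5):
    """Accumulate translation hypotheses from matched pairs, keep the densest.

    Each match is a pair ((x, y, z) in cloud A, (x, y, z) in cloud B); every
    match votes for the translation mapping its A point onto its B point.
    Votes are quantized into bins, and the bin with the most votes (the
    densest region of hypothesis space) wins, making the estimate robust
    to spurious matches.  Rotation is deliberately omitted.
    """
    votes = Counter()
    for (ax, ay, az), (bx, by, bz) in matches:
        translation = (bx - ax, by - ay, bz - az)
        key = tuple(round(c / bin_size) for c in translation)
        votes[key] += 1
    best, _ = votes.most_common(1)[0]
    return tuple(c * bin_size for c in best)
```

Two consistent matches outvote one outlier below, so the wrong pairing never corrupts the selected registration.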


Methods, systems, and apparatus, including a method of multi-material stereolithographic three-dimensional printing comprising: depositing a first material through a first material dispenser of a stereolithographic three-dimensional printer onto an optical exposure window to form a first material layer; curing the first material layer to form a first material structure on a build head of the stereolithographic three-dimensional printer; depositing a second material through the first material dispenser or a second material dispenser onto the optical exposure window to form a second material layer; and curing the second material layer to form a second material structure on the build head.
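The deposit-then-cure cycle described above can be expressed as a simple controller loop. This is only a procedural sketch of the claimed sequence; the dispenser and material identifiers are invented, and a real controller would drive hardware rather than return a log.

```python
def print_multi_material(layers):
    """Run the deposit/cure cycle for a multi-material stereolithographic print.

    `layers` is an ordered list of (dispenser, material) pairs.  Each layer
    is deposited through its dispenser onto the optical exposure window and
    then cured onto the build head.  Returns the ordered operation log.
    """
    log = []
    for dispenser, material in layers:
        log.append(f"deposit {material} via {dispenser} onto window")
        log.append(f"cure {material} layer onto build head")
    return log
```

Alternating the material per layer is what lets a single build head accumulate structures of two different cured resins.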


A system and method are disclosed for manipulating objects within a virtual environment using a software widget. The software widget includes one or more controls for performing surface constrained manipulation operations. A graphical representation of the software widget is superimposed over the object and enables a user to use simple mouse operations to perform the various manipulation operations. The position operation determines an intersection point between the mouse cursor and a surface of a different object and moves the object to the intersection point. The scale operation adjusts the size of the object. The rotate operation adjusts the rotation of the object around a normal vector on the surface of the different object. The twist operation deforms the object along a local z-axis. The orientation operation adjusts the orientation of the object with respect to the normal vector.
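The position operation's core computation is a ray-surface intersection: cast a ray from the mouse cursor and move the object to where it hits the target surface. A minimal sketch, assuming the surface is approximated locally by its tangent plane (the widget itself and the other operations are omitted):

```python
def position_on_surface(ray_origin, ray_dir, plane_point, plane_normal):
    """Find where the mouse ray hits the target surface (position operation).

    The surface is approximated by its tangent plane through `plane_point`
    with normal `plane_normal`.  Returns the intersection point the object
    is moved to, or None when the ray is parallel to the plane.
    """
    denom = sum(d * n for d, n in zip(ray_dir, plane_normal))
    if abs(denom) < 1e-12:
        return None  # ray parallel to the surface: no intersection
    diff = [p - o for p, o in zip(plane_point, ray_origin)]
    t = sum(d * n for d, n in zip(diff, plane_normal)) / denom
    return tuple(o + t * d for o, d in zip(ray_origin, ray_dir))
```

The rotate and orientation operations would then reuse the same `plane_normal` as the axis the object is rotated around or aligned to.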


One embodiment of the present invention sets forth a technique for determining a location of an object that is being manipulated or processed by a robot. The technique includes capturing a digital image of the object while the object is disposed by the robot within an imaging space, wherein the digital image includes a direct view of the object and a reflected view of the object, detecting a visible feature of the object in the direct view and the visible feature of the object in the reflected view, and computing a first location of the visible feature in a first direction based on a position of the visible feature in the direct view. The technique further includes computing a second location of the visible feature in a second direction based on a position of the visible feature in the reflected view and causing the robot to move the object to a processing station based at least in part on the first location and the second location.
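The two-view localization reduces to reading one coordinate from the direct view and the other from the reflected view of the same image. A heavily simplified sketch, assuming each view has a calibrated pixel origin and a uniform units-per-pixel scale (real calibration would involve full camera and mirror geometry):

```python
def locate_feature(direct_px, reflected_px, scale, direct_origin, reflected_origin):
    """Recover two coordinates of a feature from a single camera image.

    The direct view resolves the feature's position along the first axis;
    the mirror's reflected view resolves the second axis.  Pixel offsets
    from each view's calibrated origin are converted to physical distances
    via `scale` (units per pixel).
    """
    x = (direct_px - direct_origin) * scale
    y = (reflected_px - reflected_origin) * scale
    return (x, y)
```

With both coordinates known, the robot can be commanded to carry the object to the processing station without a second camera.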


In one embodiment, a model generator generates a new model for a behavior of a system based on an existing, authoritative model. First, a mapping generator generates a mapping model that maps authoritative values obtained via the authoritative model to measured values that represent the behavior of the system. Subsequently, the model generator creates the new model based on the authoritative model and the mapping model. In this fashion, the mapping model indirectly transforms the authoritative model to the new model based on the measured values. Advantageously, the authoritative model enables the model generator to increase a rate of accuracy improvement experienced while developing the new model compared to a rate of accuracy improvement that would be experienced were the new model to be generated based on conventional modeling techniques. In particular, for a given sampling budget, the model generator improves the accuracy of the new model.
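The mapping model can be illustrated in its simplest form: fit a correction from authoritative values to measured values, then compose it with the authoritative model. A least-squares linear mapping is assumed here purely for illustration; the actual mapping model is not specified.

```python
def build_mapping(authoritative, measured):
    """Fit a linear mapping a*x + b from authoritative to measured values.

    The new model is then the authoritative model composed with this
    mapping, rather than a model learned from scratch, so far fewer
    measurements are needed for a given accuracy.
    """
    n = len(authoritative)
    mx = sum(authoritative) / n
    my = sum(measured) / n
    var = sum((x - mx) ** 2 for x in authoritative)
    cov = sum((x - mx) * (y - my) for x, y in zip(authoritative, measured))
    a = cov / var
    b = my - a * mx
    return lambda x: a * x + b
```

Because only the two mapping coefficients are estimated from data, a small sampling budget suffices to correct the authoritative model toward the measured behavior.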
