Adobe Systems Incorporated is a multinational computer software company headquartered in San Jose, California, United States. Adobe has historically focused on the creation of multimedia and creativity software products, with a more recent foray into rich Internet application software development. It is best known for the Portable Document Format and Adobe Creative Suite, later Adobe Creative Cloud. Adobe was founded in February 1982 by John Warnock and Charles Geschke, who established the company after leaving Xerox PARC in order to develop and sell the PostScript page description language. In 1985, Apple Computer licensed PostScript for use in its LaserWriter printers, which helped spark the desktop publishing revolution. As of 2015, Adobe Systems has about 13,500 employees, about 40% of whom work in San Jose. Adobe also has major development operations in Waltham, Massachusetts; New York City, New York; Orlando, Florida; Minneapolis, Minnesota; Lehi, Utah; Seattle, Washington; and San Francisco and San Luis Obispo, California in the United States. (Wikipedia)
Adobe Systems | Date: 2016-02-29
Systems and methods are disclosed for identifying depth refinement image capture instructions for capturing images that may be used to refine existing depth maps. The depth refinement image capture instructions are determined by evaluating, at each image patch in an existing image corresponding to the existing depth map, a range of possible depth values over a set of configuration settings. Each range of possible depth values corresponds to an existing depth estimate of the existing depth map. This evaluation enables selection of one or more configuration settings in a manner such that there will be additional depth information derivable from one or more additional images captured with the selected configuration settings. When a refined depth map is generated using the one or more additional images, this additional depth information is used to increase the depth precision for at least one depth estimate from the existing depth map.
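The selection step could be sketched roughly as follows. The thin-lens blur model, the spread-based discriminability measure, and every name here are illustrative assumptions, not the patented method: for each patch, a range of depths around the existing estimate is scored under each candidate capture setting, and the setting that best separates those depths is chosen.

```python
import itertools

def candidate_depths(estimate, uncertainty, steps=5):
    """Range of plausible depths around an existing depth estimate."""
    step = 2 * uncertainty / (steps - 1)
    return [estimate - uncertainty + i * step for i in range(steps)]

def blur_radius(depth, focus, aperture):
    """Toy thin-lens defocus model: blur grows with distance from focus."""
    return aperture * abs(depth - focus) / depth

def discriminability(depths, focus, aperture):
    """How well a setting separates the candidate depths (spread of blur)."""
    blurs = [blur_radius(d, focus, aperture) for d in depths]
    return max(blurs) - min(blurs)

def select_capture_settings(patches, settings):
    """Pick the (focus, aperture) pair maximizing total discriminability."""
    def score(setting):
        focus, aperture = setting
        return sum(
            discriminability(candidate_depths(est, unc), focus, aperture)
            for est, unc in patches)
    return max(settings, key=score)

patches = [(2.0, 0.5), (4.0, 1.0)]  # (depth estimate, uncertainty) per patch
settings = list(itertools.product([1.5, 3.0, 5.0], [0.5, 1.0]))  # (focus, aperture)
best = select_capture_settings(patches, settings)
```

An additional image captured with `best` would then, under this toy model, produce the largest defocus differences across each patch's plausible depth range, which is what lets a refined depth map narrow those ranges.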
Adobe Systems | Date: 2016-09-12
One embodiment involves receiving, by a web page authoring tool, presentation information in a markup language corresponding to a static graphical object. In this embodiment, the web page authoring tool receives animation information in a data interchange format corresponding to an adjustment for the static graphical object. In this embodiment, the web page authoring tool receives a runtime engine. In this embodiment, the web page authoring tool stores the presentation information, the animation information, and the runtime engine within a web page. The runtime engine may be configured to cause a web browser displaying the web page to render an animation. The animation can be based at least in part on the presentation information and the animation information.
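A minimal sketch of the storing step, assuming SVG as the markup language and JSON as the data interchange format (the `play` runtime and all names are hypothetical), could bundle all three parts into one page like this:

```python
import json

# Assumed inputs: static SVG presentation, JSON animation data, JS runtime.
presentation = ('<svg id="logo" width="100" height="100">'
                '<circle cx="50" cy="50" r="10"/></svg>')
animation = {"target": "logo", "property": "opacity",
             "from": 0, "to": 1, "duration_ms": 500}
runtime_engine = "function play(a){/* interpolate a.property over a.duration_ms */}"

def build_page(presentation, animation, runtime):
    """Store presentation, animation data, and runtime in a single web page.
    The runtime reads the JSON data block at load time and, in a browser,
    would drive the animation of the static graphical object."""
    return (
        "<!DOCTYPE html><html><body>"
        f"{presentation}"
        f'<script type="application/json" id="anim">{json.dumps(animation)}</script>'
        f"<script>{runtime}play(JSON.parse("
        "document.getElementById('anim').textContent));</script>"
        "</body></html>"
    )

page = build_page(presentation, animation, runtime_engine)
```

The `type="application/json"` data block keeps the animation information inert until the runtime engine parses it, which is one conventional way to co-locate markup, data, and code in a single page.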
Adobe Systems | Date: 2016-08-17
Systems and methods of automatic image sizing are provided. An image is provided in a first frame within a first layout. A request to display the image in a second frame of a second layout is received, where the second frame is different from the first frame. Region data associated with the image is accessed. The region data corresponds to a prior edit to the image and indicates a portion of the image to be displayed in the second frame. The image is provided in the second frame using the region data such that the portion of the image is displayed in the second frame.
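As a minimal sketch, assuming the region data is a simple crop rectangle recorded by the prior edit (the representation and names are illustrative), reusing it for the second frame might look like:

```python
def crop_for_frame(image, region):
    """region = (left, top, width, height) recorded by a prior edit;
    returns the portion of the image to display in the new frame."""
    left, top, w, h = region
    return [row[left:left + w] for row in image[top:top + h]]

# 4x4 "image" of pixel labels; a prior edit marked the 2x2 centre as the
# portion to keep when the image is placed in a smaller frame.
image = [[f"p{r}{c}" for c in range(4)] for r in range(4)]
region = (1, 1, 2, 2)
portion = crop_for_frame(image, region)
# portion == [["p11", "p12"], ["p21", "p22"]]
```

The point of carrying the region data forward is that the second layout does not re-guess the salient area: the earlier human edit decides what survives the resize.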
Adobe Systems | Date: 2016-08-19
In various implementations, a computing device is configured to provide a live preview of salient contours generated on a live digital video feed. In particular, a designer can use a computing device with a camera, such as a smart phone, to view a real-time preview of salient contours generated from edges detected in frames of a live digital video feed prior to capture, thereby eliminating the unpredictability of salient contours generated from a previously captured image. In some implementations, the salient contours are overlaid on a greyscale conversion of the live digital video feed for improved processing and visual contrast. Other implementations modify aspects of edge-detecting or post-processing filters for improved performance on mobile computing devices.
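One frame of such a pipeline could be sketched as below. The luma weights are the standard Rec. 601 coefficients; the single-axis gradient threshold is a deliberately crude stand-in for a real edge detector, and all function names are assumptions:

```python
def to_grey(frame):
    """frame: rows of (r, g, b) tuples -> greyscale conversion (Rec. 601)."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for r, g, b in row]
            for row in frame]

def edge_mask(grey, threshold=50):
    """Mark pixels whose horizontal gradient exceeds the threshold."""
    return [[abs(row[x + 1] - row[x]) > threshold for x in range(len(row) - 1)]
            for row in grey]

def overlay(grey, mask):
    """Overlay detected contours (white) on the greyscale preview."""
    return [[255 if mask[y][x] else grey[y][x]
             for x in range(len(mask[y]))] for y in range(len(mask))]

# One tiny "live" frame: a dark region meeting a bright region.
frame = [[(0, 0, 0), (0, 0, 0), (200, 200, 200)],
         [(0, 0, 0), (0, 0, 0), (200, 200, 200)]]
preview = overlay(to_grey(frame), edge_mask(to_grey(frame)))
```

Running this per frame of the video feed gives the designer the real-time contour preview; the greyscale base both cheapens per-frame processing and keeps the overlaid contours visually distinct.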
Adobe Systems | Date: 2016-09-20
One embodiment involves receiving a fine mesh as input, the fine mesh representing a 3-Dimensional (3D) model and comprising fine mesh polygons. The embodiment further involves identifying, based on the fine mesh, near-planar regions represented by a coarse mesh of coarse mesh polygons, at least one of the near-planar regions corresponding to a plurality of the coarse mesh polygons. The embodiment further involves determining a deformation to deform the coarse mesh based on comparing normals between adjacent coarse mesh polygons. The deformation may involve reducing a first angle between coarse mesh polygons adjacent to one another in a same near-planar region. The deformation may additionally or alternatively involve increasing an angle between coarse mesh polygons adjacent to one another in different near-planar regions. The fine mesh can be deformed using the determined deformation.
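The normal-comparison step could be sketched as follows. The polygon representation, the region labels, and the flatten/sharpen factors are all assumptions for illustration; the real deformation would act on vertex positions rather than on target angles directly:

```python
import math

def angle(n1, n2):
    """Angle between two (assumed unit-length) polygon normals."""
    dot = sum(a * b for a, b in zip(n1, n2))
    return math.acos(max(-1.0, min(1.0, dot)))

def deform_step(polys, adjacency, flatten=0.5, sharpen=1.2):
    """Target dihedral angles for adjacent coarse-mesh polygon pairs:
    reduce the angle inside a near-planar region, increase it across
    region boundaries."""
    targets = {}
    for i, j in adjacency:
        a = angle(polys[i]["normal"], polys[j]["normal"])
        if polys[i]["region"] == polys[j]["region"]:
            targets[(i, j)] = a * flatten    # same region: flatter
        else:
            targets[(i, j)] = a * sharpen    # different regions: crisper crease
    return targets

polys = {
    0: {"normal": (0.0, 0.0, 1.0),   "region": "top"},
    1: {"normal": (0.0, 0.1, 0.995), "region": "top"},   # nearly parallel
    2: {"normal": (1.0, 0.0, 0.0),   "region": "side"},
}
targets = deform_step(polys, adjacency=[(0, 1), (1, 2)])
```

Applying the resulting deformation back to the fine mesh is what planarizes the identified regions while preserving, and even accentuating, the creases between them.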
Adobe Systems | Date: 2016-06-15
Feature interpolation techniques are described. In a training stage, features are extracted from a collection of training images and quantized into visual words. Spatial configurations of the visual words in the training images are determined and stored in a spatial configuration database. In an object detection stage, a portion of the features of an image is extracted from the image and quantized into visual words. Then, the remaining portion of the features of the image is interpolated using the visual words and the spatial configurations of visual words stored in the spatial configuration database.
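Under the assumption that a "spatial configuration" is simply the set of 2D offsets observed between co-occurring visual words (a simplification; the word IDs and data structures here are hypothetical), the two stages could be sketched as:

```python
from collections import defaultdict

def train(images):
    """images: lists of (word_id, (x, y)) pairs. Store, for each ordered
    pair of visual words, the offsets at which they co-occurred."""
    db = defaultdict(list)
    for feats in images:
        for wa, pa in feats:
            for wb, pb in feats:
                if wa != wb:
                    db[(wa, wb)].append((pb[0] - pa[0], pb[1] - pa[1]))
    return db

def interpolate(observed, wanted_word, db):
    """Predict positions of a word that was NOT extracted, from the
    spatial configurations of the words that were."""
    guesses = []
    for wa, (x, y) in observed:
        for dx, dy in db.get((wa, wanted_word), []):
            guesses.append((x + dx, y + dy))
    return guesses

db = train([[("eye", (10, 10)), ("mouth", (10, 30))]])
# Only the "eye" word was extracted; interpolate where "mouth" should be.
print(interpolate([("eye", (50, 50))], "mouth", db))  # → [(50, 70)]
```

The payoff is that detection only pays for extracting a fraction of the features; the rest are inferred from the learned spatial layout.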
Adobe Systems | Date: 2016-09-01
Accelerating object detection techniques are described. In one or more implementations, adaptive sampling techniques are used to extract features from an image. Coarse features are extracted from the image and used to generate an object probability map. Then, dense features are extracted from high-probability object regions of the image identified in the object probability map to enable detection of an object in the image. In one or more implementations, cascade object detection techniques are used to detect an object in an image. In a first stage, exemplars in a first subset of exemplars are applied to features extracted from multiple regions of the image to detect object candidate regions. Then, in one or more validation stages, the object candidate regions are validated by applying exemplars from the first subset of exemplars and one or more additional subsets of exemplars.
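The adaptive-sampling half could be sketched as below, with mean cell intensity standing in for a real coarse-feature object score (the scoring function, cell size, and threshold are all assumptions):

```python
def coarse_probability_map(image, cell=2):
    """Cheap pass: one score per cell, used as an object probability.
    Mean intensity is a placeholder for real coarse features."""
    h, w = len(image), len(image[0])
    return {(y, x): sum(image[y + dy][x + dx]
                        for dy in range(cell) for dx in range(cell))
                    / (cell * cell * 255)
            for y in range(0, h, cell) for x in range(0, w, cell)}

def dense_regions(prob_map, threshold=0.5):
    """Only high-probability cells receive the expensive dense features."""
    return [cell for cell, p in prob_map.items() if p > threshold]

image = [[0, 0, 255, 255],
         [0, 0, 255, 255],
         [0, 0, 0,   0],
         [0, 0, 0,   0]]
regions = dense_regions(coarse_probability_map(image))
```

Here only one of the four cells would get dense feature extraction, which is the source of the speedup; the cascade half applies the same "cheap first, expensive only where promising" logic to exemplar subsets instead of features.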
Adobe Systems | Date: 2016-01-20
Embodiments of the present invention provide systems, methods, and computer storage media directed to hosting a plurality of copies of a digital content. A common component and one or more individual components from one or more copies of the digital content are generated. As such, the common component and the one or more individual components are stored, such that each individual component in conjunction with the common component represents a separate copy of the digital content. In some implementations, a compression ratio may be customized for determining the sizing of the common component and individual component.
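One simple way to picture the common/individual split, assuming a fixed-size block scheme (the block size and factoring rule are illustrative, not the patented scheme), is:

```python
def split_blocks(data, size=4):
    return [data[i:i + size] for i in range(0, len(data), size)]

def factor(copies, size=4):
    """Common component: blocks present in every copy, stored once.
    Individual components: whatever blocks remain of each copy."""
    block_lists = [split_blocks(c, size) for c in copies]
    common = set(block_lists[0]).intersection(*block_lists[1:])
    individuals = [[b for b in blocks if b not in common]
                   for blocks in block_lists]
    return common, individuals

copies = ["AAAABBBBCCCC", "AAAABBBBDDDD"]
common, individuals = factor(copies)
# stored once: {"AAAA", "BBBB"}; per copy: ["CCCC"] and ["DDDD"]
```

Each individual component together with the common component still represents a full copy, but shared material is hosted only once; a tunable block size plays the role of the customizable compression ratio mentioned above, trading common-component coverage against per-copy overhead.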
Adobe Systems | Date: 2016-05-13
In example embodiments, systems and methods for learning and using user preferences for image adjustments are presented. In example embodiments, a new image is received. A correction parameter based on previously stored user adjustments for similar images is determined. A user style that is an adjusted version of the new image is generated by applying the correction parameter. The user style is provided on a user interface. A user adjustment is received. Based on determining that a user sample image is within a predetermined threshold of closeness to the new image, data corresponding to the user sample image is replaced with new adjustment data for the new image in a database of user sample images used to generate the correction parameter. Based on determining that no user sample images are within the predetermined threshold of closeness, new adjustment data is appended to the database used to generate the correction parameter.
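The replace-or-append rule could be sketched as follows, with an L1 distance over assumed image features and a nearest-sample correction standing in for the real similarity measure and correction-parameter model:

```python
def distance(f1, f2):
    """Toy similarity: L1 distance between image feature vectors."""
    return sum(abs(a - b) for a, b in zip(f1, f2))

def predict_correction(features, database):
    """Correction parameter from stored adjustments of similar images
    (nearest stored sample here, for brevity)."""
    _, adj = min(database, key=lambda e: distance(e[0], features))
    return adj

def record_adjustment(features, adjustment, database, threshold=0.2):
    """Replace the entry for a near-duplicate image, else append."""
    for i, (f, _) in enumerate(database):
        if distance(f, features) < threshold:
            database[i] = (features, adjustment)   # within threshold: replace
            return
    database.append((features, adjustment))        # otherwise: new sample

db = [((0.1, 0.9), +0.3)]                 # (image features, exposure tweak)
record_adjustment((0.12, 0.91), -0.1, db)  # close to the stored sample
record_adjustment((0.8, 0.2), +0.5, db)    # a genuinely new kind of image
```

Replacing rather than appending for near-duplicates keeps the database from being dominated by many adjustments of essentially the same image, so the learned style tracks the user's latest preference.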
Adobe Systems | Date: 2016-02-01
Image compensation value computation techniques are described. In one or more implementations, an image key value is calculated, by a computing device, for image data based on values of pixels of the image data. A tuning value is computed by the computing device using the image key value. The tuning value is configured to adjust how the image data is to be measured to compute an image compensation value. The image compensation value is then computed by the computing device such that a statistic computed in accordance with the tuning value approaches a target value. The image compensation value is applied by the computing device to adjust the image data.
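A toy end-to-end sketch of that pipeline follows. The key-to-tuning mapping, the power-mean statistic, and the gain-style compensation value are all assumed formulas chosen only to make the chain of steps concrete:

```python
def image_key(pixels):
    """Image key value from pixel values: 0 = black, 1 = white."""
    return sum(pixels) / (len(pixels) * 255)

def tuning_value(key):
    """Assumed mapping: brighter images get a larger tuning exponent."""
    return 1.0 + key

def statistic(pixels, tuning):
    """Tuning-adjusted measure of the image: a power mean of pixel levels."""
    n = len(pixels)
    return (sum((p / 255) ** tuning for p in pixels) / n) ** (1 / tuning) * 255

def compensation_value(pixels, target=128):
    """Gain that pushes the tuned statistic toward the target value."""
    key = image_key(pixels)
    tuning = tuning_value(key)
    return target / statistic(pixels, tuning)

def apply_gain(pixels, gain):
    return [min(255, p * gain) for p in pixels]

pixels = [40, 60, 80, 100]        # a dark image
gain = compensation_value(pixels)
adjusted = apply_gain(pixels, gain)
```

Because the power mean scales linearly with a uniform gain, applying the computed compensation value makes the tuned statistic hit the target exactly (absent clipping), which is the "statistic approaches a target value" behaviour described above.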