Kodak Research Laboratories

Rochester, NY, United States

Yin Z.,University of Illinois at Urbana - Champaign | Cao L.,University of Illinois at Urbana - Champaign | Han J.,University of Illinois at Urbana - Champaign | Luo J.,Kodak Research Laboratories | Huang T.,University of Illinois at Urbana - Champaign
Proceedings of the 11th SIAM International Conference on Data Mining, SDM 2011 | Year: 2011

Social media, such as the content hosted on popular photo-sharing websites, has attracted increasing attention in recent years. As a type of user-generated data, such social media embeds the wisdom of the crowd. In particular, millions of users upload their photos to Flickr, many associated with temporal and geographical information. In this paper, we investigate how to rank the trajectory patterns mined from uploaded photos with geotags and timestamps. The main objective is to reveal the collective wisdom recorded in the seemingly isolated photos and the individual travel sequences reflected by the geo-tagged photos. Instead of focusing on mining frequent trajectory patterns from geo-tagged social media, we put more effort into ranking the mined trajectory patterns and diversifying the ranking results. We rank the trajectory patterns by leveraging the relationships among users, locations, and trajectories. We then use an exemplar-based algorithm to diversify the results in order to discover the representative trajectory patterns. We have evaluated the proposed framework on 12 different cities using a Flickr dataset and demonstrated its effectiveness. Copyright © SIAM.
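The abstract does not spell out the ranking computation, but one common way to exploit user-trajectory relationships is a HITS-style mutual-reinforcement iteration: a pattern scores highly if important users follow it, and a user is important if they contribute high-scoring patterns. The sketch below is an illustrative assumption of that general idea, not the paper's actual algorithm; the incidence matrix `U` and function name are hypothetical:

```python
import numpy as np

def reinforce_rank(U, n_iter=50):
    """Score users and trajectory patterns by mutual reinforcement.

    U is a binary user-by-pattern incidence matrix: U[i, j] = 1 if
    user i followed trajectory pattern j. Each iteration propagates
    importance from users to patterns and back, then renormalizes.
    """
    users = np.ones(U.shape[0])
    patterns = np.ones(U.shape[1])
    for _ in range(n_iter):
        patterns = U.T @ users            # patterns backed by important users
        patterns /= np.linalg.norm(patterns)
        users = U @ patterns              # users who follow important patterns
        users /= np.linalg.norm(users)
    return users, patterns
```

On a small incidence matrix, a pattern followed by more (and more active) users ends up with a higher score, which matches the intuition of ranking by collective wisdom.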


Wang G.,University of Illinois at Urbana - Champaign | Gallagher A.,Kodak Research Laboratories | Luo J.,Kodak Research Laboratories | Forsyth D.,University of Illinois at Urbana - Champaign
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2010

The people in an image are generally not strangers, but instead often share social relationships such as husband-wife, siblings, grandparent-child, father-child, or mother-child. Further, the social relationship between a pair of people influences the relative position and appearance of the people in the image. This paper explores using familial social relationships as context for recognizing people and for recognizing the social relationships between pairs of people. We introduce a model for representing the interaction between social relationship, facial appearance, and identity. We show that the family relationship a pair of people share influences the relative pairwise features between them. The experiments on a set of personal collections show significant improvement in people recognition is achieved by modeling social relationships, even in a weak label setting that is attractive in practical applications. Furthermore, we show the social relationships are effectively recognized in images from a separate test image collection. © 2010 Springer-Verlag.


Cao L.,Beckman Institute | Del Pozo A.,Beckman Institute | Jin X.,University of Illinois at Urbana - Champaign | Luo J.,Kodak Research Laboratories | And 2 more authors.
Proceedings of the 19th International Conference on World Wide Web, WWW '10 | Year: 2010

With the explosive growth of digital cameras and online media, it has become crucial to design efficient methods that help users browse and search large image collections. The recent VisualRank algorithm [4] uses visual similarity to define the link structure of a graph so that the classic PageRank algorithm can be applied to select the most relevant images. However, measuring visual similarity is difficult when the image collection contains diverse semantics, and the results from VisualRank cannot supply a good, diverse visual summary. This paper proposes to rank the images in a structural fashion, which aims to discover the diverse structure embedded in photo collections and rank the images according to their similarity within local neighborhoods instead of across the entire photo collection. We design a novel algorithm named RankCompete, which generalizes the PageRank algorithm for the task of simultaneous ranking and clustering. The experimental results show that RankCompete outperforms VisualRank and provides an efficient and effective tool for organizing web photos. © 2010 Copyright is held by the author/owner(s).
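As a rough illustration of the VisualRank idea the abstract builds on, the sketch below runs damped PageRank over a pairwise visual-similarity matrix. The column-stochastic normalization and the similarity values in the test are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def visual_rank(S, damping=0.85, n_iter=100):
    """Damped PageRank over a visual-similarity matrix S.

    S is a nonnegative matrix with S[i, j] the visual similarity
    between images i and j. Each column is normalized into transition
    probabilities, then the power method iterates the damped update.
    Images similar to many other images accumulate the most rank.
    """
    n = S.shape[0]
    P = S / S.sum(axis=0, keepdims=True)      # column-stochastic transitions
    r = np.full(n, 1.0 / n)                   # uniform initial rank
    for _ in range(n_iter):
        r = (1 - damping) / n + damping * (P @ r)
    return r / r.sum()
```

In a toy collection where one image is strongly similar to all the others, that image receives the top rank, which is exactly the "most representative image" behavior VisualRank relies on.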


News Article | December 14, 2016
Site: www.businesswire.com

ROCHESTER, N.Y.--(BUSINESS WIRE)--Today the American Institute for Manufacturing Integrated Photonics (AIM Photonics) named Eastman Business Park (EBP) as the new home of its Test, Assembly and Packaging (TAP) manufacturing facility. The decision secures Rochester’s position as a critical hub for photonics and part of a growing and thriving innovation zone. The site was selected in an open process organized by the state. The selected site, known as Building 81, is on Lake Avenue across from the Kodak Research Laboratory. The building is now owned by ON Semiconductor, which will lease excess clean room, lab and office space for the TAP facility. With its world-class capabilities, Kodak’s legendary Eastman Business Park was a logical choice for the TAP facility, and there is plenty of room to grow and welcome new companies to the Photonics innovation zone. The site’s location near Kodak Research Laboratories and over 50 acres of developable industrial land provides significant expansion opportunity. Eastman Business Park was designed for innovation and manufacturing with a wide range of capabilities. Dolores Kruchten, President of Eastman Business Park, said, “This spotlights the technology innovation happening at Eastman Business Park today. ON Semiconductor is an important part of the Eastman Business Park ecosystem and a great partner with Kodak; their facilities are the ideal location for the TAP facility. We look forward to collaborating with AIM Photonics, ON Semiconductor and the Rochester area community to build a new technology ecosystem, based on our innovative past and our vision for the future.” Kodak is a technology company focused on imaging. We provide – directly and through partnerships with other innovative companies – hardware, software, consumables and services to customers in graphic arts, commercial print, publishing, packaging, electronic displays, entertainment and commercial films, and consumer products markets.
With our world-class R&D capabilities, innovative solutions portfolio and highly trusted brand, Kodak is helping customers around the globe to sustainably grow their own businesses and enjoy their lives. For additional information on Kodak, visit us at kodak.com, follow us on Twitter @Kodak, or like us on Facebook at Kodak. Eastman Business Park is a 1,200-acre R&D and manufacturing campus with over 16 million square feet of multi-scale manufacturing, distribution, lab and office space. There are currently over 70 companies onsite employing over 6,600 people, many of them responsible for the development of our nation’s next generation technologies in the areas of Energy Storage, Chemical Manufacturing, Roll-to-Roll Manufacturing and Photonics. Additionally, the Park’s immense manufacturing infrastructure—including the private utilities and onsite water and wastewater management system—is a competitive advantage for its high-use tenants, especially in the Food and Agriculture industry.


Chen H.,Stanford University | Gallagher A.,Kodak Research Laboratories | Gallagher A.,Cornell University | Girod B.,Stanford University
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2012

Describing clothing appearance with semantic attributes is an appealing technique for many important applications. In this paper, we propose a fully automated system that is capable of generating a list of nameable attributes for clothing on the human body in unconstrained images. We extract low-level features in a pose-adaptive manner and combine complementary features for learning attribute classifiers. Mutual dependencies between the attributes are then explored by a Conditional Random Field to further improve the predictions from the independent classifiers. We validate the performance of our system on a challenging clothing attribute dataset and introduce a novel application of dressing-style analysis that utilizes the semantic attributes produced by our system. © 2012 Springer-Verlag.


Mei T.,Microsoft | Hsu W.H.,National Taiwan University | Luo J.,Kodak Research Laboratories
IEEE Multimedia | Year: 2010

This special issue presents a concise reference for state-of-the-art efforts in knowledge discovery from large-scale community-contributed multimedia, and in particular the opportunities and challenges in this nascent arena. The guest editors have selected five articles that represent ways to exploit user-contributed photos and videos for several applications and that identify the theoretical challenges associated with managing such multimedia data. © 2010 IEEE.


Joshi D.,Eastman Kodak Co. | Datta R.,Pennsylvania State University | Fedorovskaya E.,Moscow State University | Wang J.Z.,University of Minnesota | And 2 more authors.
IEEE Signal Processing Magazine | Year: 2011

In this tutorial, we define and discuss key aspects of the problem of computational inference of aesthetics and emotion from images. We begin with a background discussion of philosophy, photography, painting, the visual arts, and psychology. This is followed by an introduction to a set of key computational problems that the research community has been striving to solve and the computational framework required for solving them. We also describe data sets available for performing assessment and outline several real-world applications where research in this domain can be employed. © 2011 IEEE.


Milliman H.W.,Case Western Reserve University | Boris D.,Kodak Research Laboratories | Schiraldi D.A.,Case Western Reserve University
Macromolecules | Year: 2012

Polyhedral oligomeric silsesquioxanes (POSS) have been incorporated into a wide range of polymers over the past two decades in an attempt to enhance their thermal and mechanical properties. The properties of POSS/polymer blends and composites are highly dependent on the uniformity of POSS dispersion and are thus particularly sensitive to the magnitude of interaction between POSS and the added fillers/polymers. Methods to characterize these interactions in terms of solubility parameters have recently been examined in the literature using group contribution calculations. The present work describes a method for measuring three-dimensional Hansen solubility parameters for polymers and POSS, which allows for the direct calculation of interaction potentials. These measured solubility parameters predict POSS/polymer interactions more accurately than group contribution calculations and accurately predict the uniformity of POSS dispersion and the resultant property enhancements. © 2012 American Chemical Society.
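For reference, the three-dimensional Hansen framework mentioned above scores the affinity between two materials by the distance Ra between their dispersion, polar, and hydrogen-bonding components, conventionally weighted as Ra² = 4(δd₁−δd₂)² + (δp₁−δp₂)² + (δh₁−δh₂)²; a smaller Ra suggests stronger interaction and better dispersion. A minimal sketch of that standard relation (not the paper's measurement procedure, and any numeric values used with it are illustrative):

```python
import math

def hansen_distance(a, b):
    """Hansen solubility distance Ra between two materials.

    Each material is given as a tuple (delta_d, delta_p, delta_h) in
    MPa^0.5: the dispersion, polar, and hydrogen-bonding components.
    The conventional factor of 4 weights the dispersion term.
    """
    dd = a[0] - b[0]
    dp = a[1] - b[1]
    dh = a[2] - b[2]
    return math.sqrt(4 * dd ** 2 + dp ** 2 + dh ** 2)
```

In practice one would compare a POSS cage against candidate polymer matrices and expect the pairing with the smaller Ra to disperse more uniformly, which is the prediction the abstract says the measured parameters make accurately.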


Yang Y.,Zhejiang University | Zhuang Y.,Zhejiang University | Tao D.,Intelligent Systems Technology, Inc. | Xu D.,Nanyang Technological University | And 2 more authors.
IEEE Transactions on Circuits and Systems for Video Technology | Year: 2010

In this paper, we propose a new method to recognize gestures of cartoon images with two practical applications, i.e., content-based cartoon image retrieval and interactive cartoon clip synthesis. Upon analyzing the unique properties of four types of features including global color histogram, local color histogram (LCH), edge feature (EF), and motion direction feature (MDF), we propose to employ different features for different purposes and in various phases. We use EF to define a graph and then refine its local structure by LCH. Based on this graph, we adopt a transductive learning algorithm to construct local patches for each cartoon image. A spectral method is then proposed to optimize the local structure of each patch and then align these patches globally. MDF is fused with EF and LCH and a cartoon gesture space is constructed for cartoon image gesture recognition. We apply the proposed method to content-based cartoon image retrieval and interactive cartoon clip synthesis. The experiments demonstrate the effectiveness of our method. © 2006 IEEE.


Ma Y.,University of Pittsburgh | Bhattacharya A.,University of Pittsburgh | Kuksenok O.,University of Pittsburgh | Perchak D.,Kodak Research Laboratories | Balazs A.C.,University of Pittsburgh
Langmuir | Year: 2012

Understanding the transport of multicomponent fluids through porous media is of great importance for a number of technological applications, ranging from ink jet printing and the production of textiles to enhanced oil recovery. The process of capillary filling is relatively well understood for a single-component fluid; much less attention, however, has been devoted to investigating capillary filling processes that involve multiphase fluids, and especially nanoparticle-filled fluids. Here, we examine the behavior of binary fluids containing nanoparticles that are driven by capillary forces to fill well-defined pores or microchannels. To carry out these studies, we use a hybrid computational approach that combines the lattice Boltzmann model for binary fluids with a Brownian dynamics model for the nanoparticles. This hybrid approach allows us to capture the interactions among the fluids, nanoparticles, and pore walls. We show that the nanoparticles can dynamically alter the interfacial tension between the two fluids and the contact angle at the pore walls; this, in turn, strongly affects the dynamics of the capillary filling. We demonstrate that by tailoring the wetting properties of the nanoparticles, one can effectively control the filling velocities. Our findings provide fundamental insights into the dynamics of this complex multicomponent system, as well as potential guidelines for a number of technological processes that involve capillary filling with nanoparticles in porous media. © 2012 American Chemical Society.
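The well-understood single-component baseline the abstract alludes to is the classical Lucas–Washburn law, L(t) = sqrt(γ r cos(θ) t / (2 μ)); nanoparticles enter this picture by dynamically shifting the interfacial tension γ and contact angle θ. A minimal sketch of that textbook relation (not the paper's hybrid lattice Boltzmann/Brownian dynamics model):

```python
import math

def washburn_length(gamma, r, theta_deg, mu, t):
    """Lucas-Washburn filling length for a single-component fluid
    in a cylindrical capillary: L(t) = sqrt(gamma*r*cos(theta)*t / (2*mu)).

    gamma: surface tension (N/m), r: pore radius (m),
    theta_deg: contact angle (degrees), mu: viscosity (Pa*s),
    t: elapsed time (s). Returns the filled length in meters.
    """
    cos_t = math.cos(math.radians(theta_deg))
    return math.sqrt(gamma * r * cos_t * t / (2 * mu))
```

The square-root-in-time scaling and the cos(θ) factor make the abstract's mechanism concrete: anything that lowers the effective contact angle or raises the interfacial tension speeds up filling, which is why nanoparticle wetting properties give a control knob on the filling velocity.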
