Rochester, NY, United States

Rochester Institute of Technology is a private university located in the town of Henrietta in the Rochester, New York metropolitan area. RIT is composed of nine academic colleges, including the National Technical Institute for the Deaf. The Institute is one of only a small number of engineering institutes in the State of New York, along with New York Institute of Technology, SUNY Polytechnic Institute, and Rensselaer Polytechnic Institute. It is most widely known for its fine arts, computing, engineering, and imaging science programs; several fine arts programs routinely rank in the national "Top 10" according to U.S. News & World Report. (Wikipedia)

Kandlikar S.G., Rochester Institute of Technology
Applied Physics Letters | Year: 2013

Evaporation momentum force arises due to the difference in liquid and vapor densities at an evaporating interface. The resulting rapid interface motion increases the microconvection heat transfer around a nucleating bubble in pool boiling. Microstructure features are developed on the basis of this hypothesis to control the bubble trajectory for (i) enhancing the heat transfer coefficient, and (ii) creating separate liquid and vapor pathways that result in an increased critical heat flux (CHF). An eightfold higher heat transfer coefficient (629,000 W/m² °C) and a two-and-a-half times higher CHF (3 MW/m²) over a plain copper surface were achieved with water. © 2013 American Institute of Physics.
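The enhancement factors quoted above imply approximate baseline values for the plain copper surface. A quick back-of-envelope check (not from the paper itself, just the arithmetic the abstract implies):

```python
# Reported enhanced-surface values from the abstract
htc_enhanced = 629_000      # heat transfer coefficient, W/(m^2 °C)
chf_enhanced = 3.0e6        # critical heat flux, W/m^2

# "Eightfold higher" HTC and "two-and-a-half times higher" CHF
# imply these approximate plain-copper baselines:
htc_plain = htc_enhanced / 8      # ~78,600 W/(m^2 °C)
chf_plain = chf_enhanced / 2.5    # 1.2 MW/m^2

print(f"implied plain-surface HTC: {htc_plain:,.0f} W/(m^2 °C)")
print(f"implied plain-surface CHF: {chf_plain / 1e6:.1f} MW/m^2")
```

These baselines are consistent with typical pool-boiling performance of plain copper with water, which is what makes the reported enhancement notable.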

Faber J.A., Rochester Institute of Technology | Rasio F.A., The Interdisciplinary Center
Living Reviews in Relativity | Year: 2012

We review the current status of studies of the coalescence of binary neutron star systems. We begin with a discussion of the formation channels of merging binaries and we discuss the most recent theoretical predictions for merger rates. Next, we turn to the quasi-equilibrium formalisms that are used to study binaries prior to the merger phase and to generate initial data for fully dynamical simulations. The quasi-equilibrium approximation has played a key role in developing our understanding of the physics of binary coalescence and, in particular, of the orbital instability processes that can drive binaries to merger at the end of their lifetimes. We then turn to the numerical techniques used in dynamical simulations, including relativistic formalisms, (magneto-)hydrodynamics, gravitational-wave extraction techniques, and nuclear microphysics treatments. This is followed by a summary of the simulations performed across the field to date, including the most recent results from both fully relativistic and microphysically detailed simulations. Finally, we discuss the likely directions for the field as we transition from the first to the second generation of gravitational-wave interferometers and while supercomputers reach the petascale frontier.

Bajorski P., Rochester Institute of Technology
IEEE Transactions on Geoscience and Remote Sensing | Year: 2011

In an effort to assess the effective linear dimensionality of hyperspectral images, a notion of virtual dimensionality (VD) has been developed, and it is being used in many papers, including those published in the IEEE Transactions on Geoscience and Remote Sensing. The ever-spreading use of VD warrants its thorough investigation. In this paper, we investigate the properties of VD, and we show that VD lacks the desirable property of being invariant to translations and rotations. We show specific examples in which VD gives entirely misleading results. We also propose a new method, called second moment linear dimensionality, which is a natural alternative to VD because it builds on the positive aspects of VD without suffering from its pitfalls. We implement this new methodology to calculate the dimensionalities of an artificial data set and two images, AVIRIS and HyMap. © 2006 IEEE.
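The translation non-invariance claim is easy to illustrate numerically. A minimal sketch, assuming the common HFC-style formulation of VD, which compares eigenvalues of the non-centered second-moment matrix R against those of the covariance matrix K (the synthetic data and band count here are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "pixels": 5 bands, 1000 samples, rank-2 signal plus noise
n_bands, n_pix = 5, 1000
signal = rng.normal(size=(2, n_pix))
mix = rng.normal(size=(n_bands, 2))
X = mix @ signal + 0.1 * rng.normal(size=(n_bands, n_pix))

def eigs(M):
    return np.sort(np.linalg.eigvalsh(M))[::-1]

def second_moment_eigs(X):
    # Non-centered second-moment matrix R, as used in HFC-style VD
    return eigs(X @ X.T / X.shape[1])

def covariance_eigs(X):
    # Centered covariance matrix K
    Xc = X - X.mean(axis=1, keepdims=True)
    return eigs(Xc @ Xc.T / X.shape[1])

# A pure translation of the data cube (same shift in every band)
shift = 10.0 * np.ones((n_bands, 1))

print(np.allclose(covariance_eigs(X), covariance_eigs(X + shift)))     # True
print(np.allclose(second_moment_eigs(X), second_moment_eigs(X + shift)))  # False
```

Because centering removes the shift, K's eigenvalues are exactly translation-invariant, while R's are not; any dimensionality rule that compares the two therefore changes under translation.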

Bajorski P., Rochester Institute of Technology
IEEE Journal on Selected Topics in Signal Processing | Year: 2011

Principal component analysis (PCA) is a popular tool for initial investigation of hyperspectral image data. There are many ways in which the estimated eigenvalues and eigenvectors of the covariance matrix are used. Further steps in the analysis or model building for hyperspectral images are often dependent on those estimated quantities. It is therefore important to know how precisely the eigenvalues and eigenvectors are estimated, and how the precision depends on the sampling scheme, the sample size, and the covariance structure of the data. This issue is especially relevant for applications such as difficult target or anomaly detection, where the precision of further steps in the algorithm may depend on the reliable knowledge of the estimated eigenvalues and eigenvectors. The issue is also relevant in the context of small sample sizes occurring in local types of detectors (such as RX) and in investigations of individual components and small multi-material clusters within a hyperspectral image. The sampling properties of eigenvalues and eigenvectors are known to some extent in statistical literature (mostly in the form of asymptotic results for large sample sizes). Unfortunately, those results usually do not apply in the context of hyperspectral images. In this paper, we investigate the sampling properties of eigenvalues and eigenvectors under three scenarios. The first two scenarios consider the type of sampling traditionally used in statistics, and the third scenario considers the variability due to image noise, which is more appropriate for hyperspectral imaging applications. For all three scenarios, we show the precision associated with the estimated eigenvalues, eigenvectors, and intrinsic dimensionality in six images of various sizes. In a broader context, we show an example of the correct statistical inference (construction of confidence intervals and bias-adjusted estimates) that can be implemented in other imaging applications. © 2011 IEEE.
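The dependence of eigenvalue precision on sample size can be demonstrated with a simple Monte Carlo simulation. This sketch is not the paper's methodology; it is a generic illustration using a known 3-band population covariance (all values here are made up for the demonstration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Known population covariance with leading eigenvalue 4.0
true_cov = np.diag([4.0, 1.0, 0.25])

def top_eig_estimates(n, trials=500):
    """Sample-covariance leading eigenvalue over repeated draws of size n."""
    ests = []
    for _ in range(trials):
        X = rng.multivariate_normal(np.zeros(3), true_cov, size=n)
        ests.append(np.linalg.eigvalsh(np.cov(X.T)).max())
    return np.array(ests)

# Spread (and upward bias) of the estimate shrinks as n grows
for n in (20, 200, 2000):
    est = top_eig_estimates(n)
    print(f"n={n:5d}  mean={est.mean():.3f}  sd={est.std():.3f}")
```

Small samples, as in local RX-style detectors, overestimate the leading eigenvalue and estimate it with large variance, which is the kind of effect that motivates the confidence intervals and bias-adjusted estimates mentioned in the abstract.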

Merritt D., Rochester Institute of Technology
Astrophysical Journal | Year: 2010

Motivated by recent observations that suggest a low density of old stars around the Milky Way supermassive black hole (SMBH), models for the nuclear star cluster are considered that have not yet reached a steady state under the influence of gravitational encounters. A core of initial radius 1-1.5 pc evolves to a size of approximately 0.5 pc after 10 Gyr, roughly the size of the observed core. The absence of a Bahcall-Wolf cusp is naturally explained in these models, without the need for fine-tuning or implausible initial conditions. In the absence of a cusp, the time for a 10 M⊙ black hole (BH) to spiral in to the Galactic center from an initial distance of 5 pc can be much greater than 10 Gyr. Assuming that the stellar BHs had the same phase-space distribution initially as the stars, their density after 5-10 Gyr is predicted to rise very steeply going into the stellar core, but could remain substantially below the densities inferred from steady-state models that include a steep density cusp in the stars. Possible mechanisms for the creation of the parsec-scale initial core include destruction of stars on centrophilic orbits in a pre-existing triaxial nucleus, inhibited star formation near the SMBH, or ejection of stars by a massive binary. The implications of these models are discussed for the rates of gravitational-wave inspiral events, as well as other physical processes that depend on a high density of stars or stellar-mass BHs near Sgr A*. © 2010. The American Astronomical Society. All rights reserved.
