Huang L.,University of Pennsylvania |
Frideger M.,Holy Names University |
Pearce J.L.,University of California at Irvine |
Pearce J.L.,The London School of Economics and Political Science
Journal of Applied Psychology | Year: 2013
We propose and test a new theory explaining glass-ceiling bias against nonnative speakers as driven by perceptions that nonnative speakers have weak political skill. Although nonnative accent is a complex signal, its effects on assessments of the speakers' political skill are something that speakers can actively mitigate; this makes it an important bias to understand. In Study 1, White and Asian nonnative speakers using the same scripted responses as native speakers were found to be significantly less likely to be recommended for a middle-management position, and this bias was fully mediated by assessments of their political skill. The alternative explanations of race, communication skill, and collaborative skill were nonsignificant. In Study 2, entrepreneurial start-up pitches from national high-technology, new-venture funding competitions were shown to experienced executive MBA students. Nonnative speakers were found to have a significantly lower likelihood of receiving new-venture funding, and this was fully mediated by the coders' assessments of their political skill. The entrepreneurs' race, communication skill, and collaborative skill had no effect. We discuss the value of empirically testing various posited reasons for glass-ceiling biases, how the importance and ambiguity of political skill for executive success serve as an ostensibly meritocratic cover for nonnative speaker bias, and other theoretical and practical implications of this work. © 2013 American Psychological Association.
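The full-mediation logic reported above (accent predicts outcome only through perceived political skill, so the direct effect vanishes once the mediator is controlled) can be illustrated with a minimal regression sketch. All data and coefficients below are simulated and hypothetical, not the study's; this is only a sketch of the Baron-and-Kenny-style decomposition, not the authors' actual analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical simulated data: nonnative accent lowers perceived
# political skill, which in turn lowers the promotion recommendation.
nonnative = rng.integers(0, 2, n)                      # 0 = native, 1 = nonnative
skill = 5.0 - 1.2 * nonnative + rng.normal(0, 1, n)    # perceived political skill
recommend = 1.0 + 0.8 * skill + rng.normal(0, 1, n)    # recommendation score

def ols_slopes(predictors, y):
    """Least-squares slopes (intercept fitted but dropped)."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

c_total = ols_slopes([nonnative], recommend)[0]        # total effect (path c)
a = ols_slopes([nonnative], skill)[0]                  # accent -> skill (path a)
b, c_prime = ols_slopes([skill, nonnative], recommend) # skill -> rec (b), direct (c')

# Under full mediation the indirect effect a*b carries essentially the
# whole total effect, and the direct effect c' is near zero.
print(f"total effect c = {c_total:.2f}")
print(f"indirect a*b   = {a * b:.2f}")
print(f"direct c'      = {c_prime:.2f}")
```

The design choice here is deliberately minimal: plain NumPy least squares rather than a statistics package, so the a-, b-, and c-path decomposition stays visible in the code.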
Stryker J.B.,Holy Names University |
Santoro M.D.,Lehigh University
Research Technology Management | Year: 2012
Despite the increasing use of electronically mediated methods for team communications, research continues to underline the importance of face-to-face (F2F) communication for the successful accomplishment of complex, high-tech team tasks. Although a crucial aspect of F2F communication is the physical proximity of team members, studies that have explored the relationship between the design of the physical workplace and F2F communication have produced conflicting findings. This paper reports the results of a field study conducted at two R&D sites of a large U.S. high-technology and life-sciences company; the results suggest that the typical space planning solution of simply moving people from closed offices to open cubicles does not, in and of itself, increase F2F communication. Rather, the level of F2F communication depends on the location of team members' workstations within the overall configuration of the space and the amount of space provided to support collaboration opportunities, including both formal and informal spaces. Based on the results of the study, we offer suggestions for the layout and design of R&D workstations to foster productive F2F encounters.
Stryker J.B.,Holy Names University |
Santoro M.D.,Lehigh University |
Farris G.F.,Rutgers University
IEEE Transactions on Engineering Management | Year: 2012
Despite the increasing use of electronically mediated communication when team members are not collocated, research continues to underline the importance of face-to-face (F2F) communication for the successful accomplishment of complex team tasks. Although a crucial aspect of F2F communication is the physical proximity of participants, studies that have explored the relationship between the design of the physical workplace and F2F communication have produced conflicting findings. The results of this field study conducted at two R&D sites of a large Midwestern U.S. pharmaceutical company suggest that the typical space planning solution of simply moving people from closed offices to open cubicles does not, in and of itself, increase F2F communication. We found that the visibility of the work environment and the amount of collaboration opportunity, defined as formal and informal space available for meetings and collaboration, are related to F2F communication. The implications of our findings for theory, future research, and management practice are discussed. © 1988-2012 IEEE.
Bode C.A.,University of California at Berkeley |
Limm M.P.,Holy Names University |
Power M.E.,University of California at Berkeley |
Finlay J.C.,University of Minnesota
Remote Sensing of Environment | Year: 2014
Solar radiation flux (irradiance) is a fundamental driver of almost all hydrological and biological processes. Ecological models of these processes often require data at the watershed scale. GIS-based solar models that predict insolation at the watershed scale take topographic shading into account but do not account for vegetative shading. Most methods that quantify subcanopy insolation do so only at a single point. Further, subcanopy model calibration requires significant field effort and knowledge of canopy characteristics (species composition, leaf area index, and mean leaf angle for each species), and upscaling to watersheds is a significant source of uncertainty. We propose an approach to modeling insolation that uses airborne LiDAR data to estimate canopy openness as a Light Penetration Index (LPI). We couple LPI with the GRASS GIS r.sun solar model to produce the Subcanopy Solar Radiation (SSR) model. SSR accounts for both topographic shading and vegetative shading at a landscape scale. After calibrating the r.sun model to a weather station at our study site, we compare SSR model predictions to black thermopile pyranometer field measurements and to hemispherical photographs analyzed with Gap Light Analyzer software, a standard method for point estimation of subcanopy radiation. Both the SSR and hemispherical models exhibit a similar linear relationship with the pyranometer data, and the models predict similar total solar radiation flux across the range of canopy openness. This approach allows prediction of light regimes at watershed scales with a resolution that was previously possible only for local point measurements. © 2014.
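The core computation described in the abstract, gridding LiDAR returns into a per-cell Light Penetration Index and scaling an above-canopy insolation surface by it, can be sketched as follows. The point cloud, the 40% penetration rate, the 10 m cell size, and the constant open-sky value standing in for the r.sun output are all hypothetical illustrations, not the paper's data or its exact LPI definition.

```python
import numpy as np

# Hypothetical LiDAR point cloud over a 100 m x 100 m tile: x, y
# coordinates plus a flag that is True when a return reached the
# ground, i.e. the laser pulse penetrated the canopy.
rng = np.random.default_rng(1)
n_pts = 10_000
x = rng.uniform(0, 100, n_pts)
y = rng.uniform(0, 100, n_pts)
ground = rng.random(n_pts) < 0.4          # ~40% of returns penetrate

cell = 10.0                                # grid cell size in metres
nx = ny = int(100 / cell)
ix = np.minimum((x / cell).astype(int), nx - 1)
iy = np.minimum((y / cell).astype(int), ny - 1)

# Count total returns and ground returns per grid cell.
total = np.zeros((ny, nx))
passed = np.zeros((ny, nx))
np.add.at(total, (iy, ix), 1)
np.add.at(passed, (iy, ix), ground.astype(float))

# Light Penetration Index: fraction of returns reaching the ground.
lpi = np.divide(passed, total, out=np.zeros_like(total), where=total > 0)

# Subcanopy insolation = LPI * above-canopy insolation. In the paper
# the above-canopy, topographically shaded surface comes from the
# GRASS GIS r.sun model; a constant open-sky value stands in here.
open_sky = 250.0                           # W/m^2, hypothetical
subcanopy = lpi * open_sky
```

The grid-cell ratio keeps the method purely geometric: no species composition, leaf area index, or leaf-angle field measurements are needed, which is the upscaling advantage the abstract describes.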
Hanekamp J.C.,Roosevelt Academy |
Hanekamp J.C.,University of Massachusetts Amherst |
Kwakman J.,Seafood Importers and Processors Alliance |
Pieterman R.,Erasmus University Rotterdam |
Ricci P.F.,Holy Names University
European Journal of Risk Regulation | Year: 2012
Responding to public fears and the loss of confidence that followed several food safety crises in the 1990s and 2000s, regulatory law has increasingly been shaped by the precautionary principle. To clarify how those developments can have adverse consequences, we discuss two very different cases. First, at the molecular level, we discuss the problems the system encounters by strictly applying the linear no-threshold (LNT) model at low doses, which was adopted in response to fears about the effects of ionizing radiation. Second, at a global scale, we discuss the problems associated with the precautionary regulation on Illegal, Unreported and Unregulated (IUU) fishing that came into effect on January 1, 2010. The technical aspects of food safety testing and their impacts are perhaps unknown to policy makers, but they dominate safety decisions. Both examples show that strict application of the precautionary principle produces deleterious side effects that go against the very policy values the precautionary regulation is meant to protect. We show, in particular, that overly precautionary food safety regulation may harm food security. We conclude that, in the EU and other Western nations, problems of food security are much more relevant to human health and life expectancy than food safety. We recommend that current food safety regulation, based on the precautionary risk-regulation reflex, be normatively re-evaluated with full regard for the values of food security - both within and outside the EU.
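The low-dose disagreement at the heart of the LNT discussion above can be made concrete with a two-line model comparison. Both the slope and the threshold values below are arbitrary illustrative numbers, and neither model is endorsed here; the sketch only shows where the two extrapolations diverge.

```python
def lnt_risk(dose, slope=0.05):
    """Linear no-threshold: every dose, however small, carries excess
    risk proportional to dose. Slope is a hypothetical value."""
    return slope * dose

def threshold_risk(dose, slope=0.05, threshold=10.0):
    """Alternative threshold model: no excess risk below the
    (hypothetical) threshold, linear above it."""
    return max(0.0, slope * (dose - threshold))

# The policy-relevant divergence is entirely at low doses: there the
# LNT model predicts a nonzero excess risk while the threshold model
# predicts none, so strict LNT-based regulation forbids exposures the
# threshold model would treat as harmless.
for dose in (1.0, 5.0, 20.0):
    print(f"dose={dose:5.1f}  LNT={lnt_risk(dose):.3f}  "
          f"threshold={threshold_risk(dose):.3f}")
```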