
Zhang Q., Universities Space Research Association | Zhang Q., NASA | Cheng Y.-B., NASA | Cheng Y.-B., Sigma Space Corporation | And 7 more authors.
Remote Sensing of Environment | Year: 2014

Photosynthesis (PSN) is a pigment-level process in which antenna pigments (predominantly chlorophylls) in chloroplasts absorb photosynthetically active radiation (PAR) for the photochemical process. PAR absorbed by foliar non-photosynthetic components is not used for PSN. The fraction of PAR absorbed (fAPAR) by a canopy/vegetation (i.e., fAPARcanopy) derived from Moderate Resolution Imaging Spectroradiometer (MODIS) images, referred to as MOD15A2 FPAR, has been used to compute the absorbed PAR (APAR) for PSN (APARPSN), which in turn is used to produce the standard MODIS gross primary production (GPP) product, referred to as MOD17A2 GPP. In this study, the fraction of PAR absorbed by chlorophyll throughout the canopy (fAPARchl) was retrieved from MODIS images for three AmeriFlux crop fields in Nebraska. Few studies in the literature compare the performance of MOD15A2 FPAR versus fAPARchl in GPP estimation. In our study, MOD15A2 FPAR and the retrieved fAPARchl were compared with field fAPARcanopy and with the fraction of PAR absorbed by green leaves of the vegetation (fAPARgreen). MOD15A2 FPAR overestimated field fAPARcanopy in spring and fall and underestimated it in midsummer, whereas fAPARchl correctly captured the seasonal phenology. The retrieved fAPARchl agreed well with field fAPARgreen at the early crop growth stage in June, and was lower than field fAPARgreen in late July, August and September. GPP estimates obtained with fAPARchl and with MOD15A2 FPAR were compared to tower flux GPP; GPP simulated with fAPARchl was corroborated by the tower flux GPP. Replacing MOD15A2 FPAR with fAPARchl improved crop GPP estimation and reduced the uncertainty of crop GPP estimates by 1.12–2.37 g C m⁻² d⁻¹. © 2014 Elsevier Inc.
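The relationship underlying both products is a light-use-efficiency formulation of the kind used for MOD17A2: GPP = ε × fAPAR × PAR, so swapping MOD15A2 FPAR for fAPARchl changes only the absorbed-PAR term. The Python sketch below is purely illustrative; the efficiency, PAR, and fAPAR values are hypothetical placeholders rather than numbers from the study.

```python
# Illustrative light-use-efficiency (LUE) calculation of the kind used for
# MOD17A2 GPP: GPP = epsilon * fAPAR * PAR. Only the fAPAR term differs
# between a MOD15A2-style canopy estimate and the chlorophyll-only estimate.
# All numbers are hypothetical placeholders, not data from the study.

def gpp_lue(epsilon_g_per_mj, fapar, par_mj_m2_d):
    """Daily GPP (g C m^-2 d^-1) from a simple light-use-efficiency model."""
    return epsilon_g_per_mj * fapar * par_mj_m2_d

par = 10.0           # incident PAR, MJ m^-2 d^-1 (placeholder)
epsilon = 1.8        # realized light-use efficiency, g C MJ^-1 (placeholder)

fapar_canopy = 0.70  # whole-canopy absorption (MOD15A2 FPAR-style, placeholder)
fapar_chl = 0.55     # absorption by chlorophyll only (fAPARchl, placeholder)

print(f"GPP with fAPARcanopy: {gpp_lue(epsilon, fapar_canopy, par):.2f} g C m-2 d-1")
print(f"GPP with fAPARchl:    {gpp_lue(epsilon, fapar_chl, par):.2f} g C m-2 d-1")
```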


Trademark
Code Climate | Date: 2013-02-14

Computer software to detect defects in security system software and reliability; software for design and development of computer systems and software applications; software for analysis and production of programming code in the field of software development; software development tools; software for visualization of software and design of computer systems; software for design and development of software programs; software for the management and development of computer systems; software for assisting developers in creating program code for use in multiple application programs. Teaching and training in the fields of information technology, design and development of software programs, management and development of computer systems, creating program code for use in multiple application programs, software design, computer software development, computer software integration, verification of computer software, and maintenance of computer software. Consultation services in the field of development and testing of computer software; computer software consultation, namely, providing an online, automated, on-demand service for analyzing software source code; software design, development, integration, installation, verification and updating and maintenance services; computer software risk assessment services; providing quality assurance services in the field of computer software.


News Article | September 18, 2014
Site: venturebeat.com

Developers could ask their bosses to check for issues in their code before they deploy it. But bosses might have better things to do. A robot might not mind, though. Think of Code Climate like that: a development team's robot in the cloud that runs standard tests on code without actually executing it. It can uncover security vulnerabilities, potential bugs, repetition of existing code, and unnecessarily complex programming in Ruby and JavaScript. Support for PHP is in public beta.

Code Climate has done quite well for itself almost right from its 2011 beginning, founder and chief executive Bryan Helmkamp told VentureBeat in an interview. But now Helmkamp wants to make the robot smarter and work with more programming languages, such as Go and Python. "We're not going to be able to reach those other languages and reach that audience nearly as quickly if we continue to do it off revenue growth from the bootstrap strategy," Helmkamp said. And so today Code Climate is announcing its first venture funding, a $2 million seed round.

The money comes a few months after Coverity, a business with software for on-premises automatic code testing, was acquired by Synopsys for $375 million. Coverity and Code Climate operate in a domain known as static analysis. For those who want to run custom code tests before getting the green light for deployment, continuous-integration services in the cloud, like CircleCI and Codeship, are available. Such startups have been taking on more and more funding in recent months. And Code Climate can work right alongside such tools.

Helmkamp thought up Code Climate while he was chief technology officer of energy startup Efficiency 2.0, after noticing that, over time, adding features to applications became harder and took longer. "The kind of question was, 'Is there anything that could be done to give developers a much better chance of having higher-quality outcomes with their projects?'" he said. The technology he and co-founder (and Efficiency 2.0 colleague) Noah Davis assembled not only runs tests automatically but also provides letter grades for specific projects and overall grade-point averages for at-a-glance analysis of code quality.

The service has since become increasingly popular among engineering teams. Code Climate claims more than 1,200 paying customers, including GitHub, New Relic, Kickstarter, LivingSocial, and Zendesk.

NextView Ventures led the funding round after checking with chief technology officers and vice presidents of engineering to get their impression of Code Climate, as NextView co-founder and partner David Beisel wrote in a blog post announcing the investment. That's a strong positive signal. And the service has registered well with other investors Code Climate talked to. Investors would ask their firms' developers what they thought of the service, only to find out that they were already paying for it, Helmkamp said.

In addition to NextView Ventures, Lerer Ventures, Fuel Capital, and Trinity Ventures also participated in the funding. Angels who have backed the company include Heroku co-founder James Lindenbaum and GitHub co-founder Tom Preston-Werner.

Ten people work for New York-based Code Climate, Helmkamp said. A year from now, the headcount should be between 15 and 20. Besides adding more languages, the team intends to create ways to give visibility into current code right when a group of people starts using Code Climate, Helmkamp said.
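The "tests without executing it" that the article describes is static analysis: the tool reasons about the parsed source rather than running it. The toy Python sketch below is not Code Climate's implementation; it only illustrates the general idea, using the standard-library ast module to flag a function with an unusually large number of branch points as potentially over-complex. The threshold and the check itself are made up for illustration.

```python
# Toy illustration of static analysis: inspect source code without running it.
# This is NOT Code Climate's implementation; it only demonstrates flagging
# "unnecessarily complex" code by walking Python's abstract syntax tree.
import ast

SOURCE = """
def triage(x):
    if x < 0:
        return "neg"
    if x == 0:
        return "zero"
    if x < 10:
        return "small"
    if x < 100:
        return "medium"
    return "large"
"""

MAX_BRANCHES = 3  # made-up threshold for illustration

tree = ast.parse(SOURCE)
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        branches = sum(isinstance(n, (ast.If, ast.For, ast.While)) for n in ast.walk(node))
        if branches > MAX_BRANCHES:
            print(f"{node.name}: {branches} branch points exceed the limit of {MAX_BRANCHES}")
```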


Code Climate is pulling a gutsy move today. The startup is open-sourcing key parts of its proprietary software for performing tests on source code to determine its quality. No longer will developers be limited by the set of programming languages and frameworks that Code Climate supports. Now you can call on new engines for CoffeeScript, CSS stylesheets, Go, JavaScript, PHP, or Ruby, or write an engine for any other language based on a new specification, and then call on Code Climate's servers to run checks.

Code Climate today is also coming out with a new open-source command-line interface (CLI) through which developers can run checks locally on their own computers. In other words, you don't have to upload your work to a remote repository like GitHub or Bitbucket to check your code if you don't feel like it. "You can use that [the CLI] entirely for free, so it's a pretty big shift," Code Climate founder and chief executive Bryan Helmkamp told VentureBeat in an interview.

Not only is this big for the startup, but the changes could pose a challenge to other code-testing outfits, like bitHound, Codacy, and Scrutinizer. You might think that releasing valuable technology under an open-source license would be bad for generating revenue, but Helmkamp isn't worried about that. If anything, he said, greater revenue will come in as more people start to rely on the startup's technology. Currently, 50,000 developers use Code Climate to analyze around 700 billion lines of code on any given weekday, Helmkamp wrote in a blog post on today's news. The new tools are available here.
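The engine specification mentioned above is what lets third parties plug new analyzers into the platform. Broadly, an engine inspects the project's source and reports each finding as a small JSON "issue" document on standard output; the sketch below assumes that contract (source mounted under /code, null-byte-terminated JSON issues) and implements a made-up check, so treat it as an approximation of the spec rather than a released engine. Locally, the open-source CLI can then run configured engines against a working copy (typically via a command such as codeclimate analyze) without pushing to GitHub or Bitbucket.

```python
# Hypothetical Code Climate engine sketch. It assumes the published engine
# contract: analyze the project source mounted under /code and print one JSON
# "issue" document per finding to stdout, each terminated by a null byte.
# The check itself (lines longer than 120 characters) and all names are
# illustrative only, not a released engine.
import json
import pathlib
import sys

CODE_DIR = pathlib.Path("/code")
MAX_LINE = 120  # illustrative limit

def make_issue(path, line_no):
    return {
        "type": "issue",
        "check_name": "long-line",
        "description": f"Line exceeds {MAX_LINE} characters",
        "categories": ["Style"],
        "location": {
            "path": str(path.relative_to(CODE_DIR)),
            "lines": {"begin": line_no, "end": line_no},
        },
    }

for path in CODE_DIR.rglob("*.py"):
    text = path.read_text(errors="ignore")
    for line_no, line in enumerate(text.splitlines(), start=1):
        if len(line) > MAX_LINE:
            sys.stdout.write(json.dumps(make_issue(path, line_no)) + "\0")
```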


Zhang Q., Universities Space Research Association | Zhang Q., NASA | Cheng Y.-B., NASA | Cheng Y.-B., Sigma Space Corporation | And 8 more authors.
Agricultural and Forest Meteorology | Year: 2015

Satellite remote sensing estimates of gross primary production (GPP) have routinely been made using spectral vegetation indices (VIs) over the past two decades. The Normalized Difference Vegetation Index (NDVI), the Enhanced Vegetation Index (EVI), the green band Wide Dynamic Range Vegetation Index (WDRVIgreen), and the green band Chlorophyll Index (CIgreen) have been employed to estimate GPP under the assumption that GPP is proportional to the product of VI and photosynthetically active radiation (PAR), where VI is one of the four indices (NDVI, EVI, WDRVIgreen, or CIgreen). However, the empirical regressions between VI × PAR and GPP measured locally at flux towers do not pass through the origin (i.e., they have non-zero intercepts), which makes them somewhat difficult to interpret and apply. This study investigates (1) the scaling factors and offsets (i.e., regression slopes and intercepts) between the fraction of PAR absorbed by chlorophyll of a canopy (fAPARchl) and the VIs, and (2) whether the scaled VIs developed in (1) can eliminate this deficiency and improve the accuracy of GPP estimates. Three AmeriFlux maize and soybean fields were selected for this study, two of which are irrigated and one rainfed. The four VIs and fAPARchl of the fields were computed from MODerate resolution Imaging Spectroradiometer (MODIS) satellite images. The GPP estimation performance of the scaled VIs was compared to results obtained with the original VIs and evaluated with standard statistics: the coefficient of determination (R²), the root mean square error (RMSE), and the coefficient of variation (CV). Overall, the scaled EVI performed best. The performance of the scaled NDVI, EVI and WDRVIgreen improved across sites, crop types and soil/background wetness conditions. The scaled CIgreen did not improve results compared to the original CIgreen. The scaled green band indices (WDRVIgreen, CIgreen) did not perform better than either the scaled EVI or NDVI in estimating crop daily GPP at these agricultural fields. The scaled VIs are more physiologically meaningful than the original un-scaled VIs, but scaling factors and offsets may vary across crop types and surface conditions. © 2014 Elsevier B.V.
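The "scaling" described here is effectively a linear regression of fAPARchl on each VI, after which the scaled index (a·VI + b) stands in for fAPAR in a light-use-efficiency GPP model. The Python sketch below illustrates that workflow with synthetic numbers; the sample values, the ε constant, and the simple least-squares fit are assumptions made for illustration, not values or methods taken from the paper.

```python
# Illustrative "scaling" of a vegetation index against fAPARchl: fit
# fAPARchl ~ a*VI + b, then use the scaled index in place of fAPAR in a
# light-use-efficiency GPP model (GPP = epsilon * fAPAR * PAR).
# All sample values and epsilon are synthetic placeholders, not data
# from the study.
import numpy as np

vi = np.array([0.20, 0.35, 0.55, 0.70, 0.78, 0.60, 0.40])          # e.g., EVI composites
fapar_chl = np.array([0.05, 0.18, 0.42, 0.60, 0.68, 0.47, 0.22])   # retrieved fAPARchl
par = np.array([6.0, 8.0, 10.0, 11.0, 10.5, 9.0, 7.0])             # PAR, MJ m^-2 d^-1

# Least-squares slope and offset: the "scaling factors and offsets".
a, b = np.polyfit(vi, fapar_chl, 1)
scaled_vi = a * vi + b

epsilon = 1.8                              # placeholder light-use efficiency, g C MJ^-1
gpp_scaled = epsilon * scaled_vi * par     # GPP using the scaled VI
gpp_unscaled = epsilon * vi * par          # GPP using the original VI, for comparison

print(f"slope a = {a:.3f}, offset b = {b:.3f}")
```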
