
Yang J., University of Toronto | He Y., UTM | Caspersen J., University of Toronto
International Geoscience and Remote Sensing Symposium (IGARSS) | Year: 2014

A comprehensive forest resource inventory requires detailed species information at the individual-tree level. Conventional ground-based measurement cannot provide this efficiently, but the emergence of high-resolution remote sensing imagery over the past decade has made it possible. Individual tree crown (ITC) delineation is one of the most critical steps in tree species classification from remote sensing images; however, delineating individual tree crowns in deciduous forests remains challenging because of the continuous canopy. In this study, a multi-band watershed segmentation method is proposed to delineate deciduous tree crowns by constructing a spectral angle space. The proposed algorithm is evaluated on a high-resolution multispectral aerial image of a deciduous forested area in Haliburton Forest, Ontario, Canada. Results demonstrate that the proposed multi-band watershed segmentation method outperforms the existing valley-following-based ITC map in terms of both visual interpretation and quantitative evaluation. © 2014 IEEE.
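The spectral angle at the heart of the proposed method measures similarity between a pixel's spectrum and a reference spectrum independently of overall brightness, which is what makes it useful for separating crowns from shadowed gaps. A minimal per-pixel sketch (the band values are hypothetical, and the watershed step itself is omitted):

```python
import math

def spectral_angle(pixel, reference):
    """Angle (radians) between a pixel's spectrum and a reference spectrum.

    Smaller angles mean the pixel is spectrally similar to the reference,
    regardless of overall brightness (illumination)."""
    dot = sum(p * r for p, r in zip(pixel, reference))
    norm_p = math.sqrt(sum(p * p for p in pixel))
    norm_r = math.sqrt(sum(r * r for r in reference))
    # Clamp to guard against floating-point drift outside [-1, 1].
    cos_theta = max(-1.0, min(1.0, dot / (norm_p * norm_r)))
    return math.acos(cos_theta)

# A brighter version of the same material has (near-)zero spectral angle,
# while a spectrally different surface does not.
crown = [0.12, 0.10, 0.45, 0.60]          # hypothetical 4-band pixel
brighter_crown = [2 * b for b in crown]   # same material, more light
shadowed_gap = [0.05, 0.04, 0.08, 0.07]

assert spectral_angle(crown, brighter_crown) < 1e-9
assert spectral_angle(crown, shadowed_gap) > 0.1
```

Computing such an angle for every pixel against one or more reference spectra yields the "spectral angle space" on which a standard watershed segmentation can then be run.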

News Article | June 4, 2015
Site: http://www.techtimes.com/rss/sections/space-aviation.xml

As the number of drones flying through our airspace increases, NASA is developing a plan to keep track of and direct all these hovering devices. The space agency has partnered with telecom company Verizon to develop technology to direct and monitor all of the commercial and civilian drones in the U.S. According to documents obtained by The Guardian, Verizon signed an agreement last year with NASA "to jointly explore whether cell towers ... could support communications and surveillance of unmanned aerial systems (UAS) at low altitudes." The $500,000 project is already under way in Silicon Valley, and this summer NASA will begin testing the system, which uses radar, satellite and cellular signals to track the drones. Verizon is scheduled to finalize the technology by 2019. Currently there is nothing stopping drone pilots from sending their aircraft pretty much anywhere they like, but NASA hopes the new system could enable geofencing of certain sensitive areas, such as the airspace over the White House or the Capitol Building in Washington. Geofencing uses GPS or radio frequency identification (RFID) to define geographical boundaries and set a virtual barrier. The documents revealed that the plan is still at the exploratory stage, as the FAA has yet to clearly define regulations for commercial drones. The current proposal would limit recreational and commercial drones to 55 pounds, speeds of 100 mph and altitudes of 500 feet, flown during daylight hours and within the pilot's line of sight. The aim of the NASA project is to "jointly explore if cell towers and communications could possibly support Unmanned Aerial Systems (UAS) Traffic Management (UTM) for communications and surveillance of UAS at low altitudes". A new system is needed because the current air traffic control system isn't capable of tracking small drones flying at low altitude.
Radar coverage at low altitudes is unreliable, so using the cellphone network and partnering with Verizon is a viable alternative. Verizon is the largest wireless telecom company in the U.S., with an estimated 12,000 to 15,000 cellphone towers across the country. Workshops have also been held with other carriers such as AT&T, though such is the scale of the project that NASA is getting help from some of the biggest tech firms, too. The Guardian also reports that Google and Amazon are currently testing the systems with their commercial drone projects at NASA's Ames Research Center in Silicon Valley. Google is spending $450,000 testing its self-driving cars at Ames and is also sharing data from its Project Wing drone project. Amazon has invested $1.8 million in the testing of its Prime Air drones and is contributing to plans for the monitoring project.
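The geofencing idea mentioned above, for a circular no-fly zone, reduces to a great-circle distance test against the fence centre. A minimal sketch using the standard haversine formula (the coordinates and 2 km radius below are illustrative, not from the documents):

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def inside_geofence(drone_fix, fence_center, fence_radius_m):
    """True if a drone's GPS fix falls inside a circular restricted zone."""
    return haversine_m(*drone_fix, *fence_center) <= fence_radius_m

# Hypothetical restricted zone around the White House (coordinates approximate).
white_house = (38.8977, -77.0365)
assert inside_geofence((38.8980, -77.0360), white_house, 2_000)
assert not inside_geofence((38.9500, -76.9000), white_house, 2_000)
```

A production system would use polygonal boundaries and altitude limits rather than a simple radius, but the containment test is the same idea.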

News Article
Site: http://phys.org/technology-news/

But before drone aviation can become pervasive, a new infrastructure must be developed to define low-altitude avenues of flight, regulate traffic in congested areas and prevent collisions. On this front, the Stanford Intelligent Systems Laboratory (SISL) is part of a broad partnership led by NASA Ames to create an unmanned aerial system traffic management system, or UTM, to manage the expected surge in unmanned flights. "UTM is meant to fulfill a lot of the functions of air traffic control, but it will be in the cloud and largely automated," said SISL Director Mykel Kochenderfer, an assistant professor of aeronautics and astronautics. NASA envisions that the UTM system will be able to support the orchestration of a huge number of drone operations without air traffic control operators monitoring each and every vehicle in the air. A key attribute of this system will involve automated conflict avoidance – software that can alert multiple drones when a collision is possible, and calculate the maneuvers necessary to avoid it. Kochenderfer recently coauthored a new paper with mechanical engineering graduate student Hao Yi Ong in which they detail a conflict-avoidance algorithm that, when implemented within the UTM system, will minimize the threat of low-altitude, unmanned collisions. Ong says the sheer projected number of drones would make it impractical to replicate the human-operated air traffic control system to regulate drone flights. Today the Federal Aviation Administration has 15,000 human controllers to manage roughly 87,000 pilot-driven flights per day. Amazon's drone projections alone could dwarf those numbers. Ong has conservatively estimated that Amazon Prime's roughly 40 million subscribers could generate 130,000 drone deliveries per normal shopping day. And that's before accounting for the dozens of other companies including Google and Matternet that are also developing commercial drone operations. 
"You're not going to hire another 30,000 people just to handle the traffic from drones," Ong said. "It's just not feasible." NASA envisions that the cloud-based, largely autonomous UTM system will roll out in a series of four builds with increasing capabilities. The first build, which was released in August, largely focuses on geo-fencing – GPS-based corridors for drone flights – to maintain safety and efficiency. "That works for farming applications," Ong said. "But once you want to start moving transport drones around urban areas, you can't really do that, because you're not going to block out the airspace over entire residential areas just for when your aircraft is flying through." The Stanford team believes that automated conflict avoidance is the best way to enable a greater density of flights in crowded areas. But automating conflict avoidance to deal with the volume of drone traffic will require new algorithms to predict and avoid potential collisions. Previously, when Kochenderfer worked at MIT's Lincoln Laboratory, he developed a new approach to collision avoidance that is being incorporated into a next-generation system for manned aircraft, called ACAS X, that could serve as the building block for such a protocol. The system employs a process known as dynamic programming to figure out optimal collision avoidance strategies. In ACAS X, the software alerts human operators of potential risk and recommends a maneuver to avoid collision. The result has been a system that significantly enhances safety while decreasing the number of unnecessary alerts. "The FAA was very happy with the outcome and supported further development," Kochenderfer said. Indeed, the ACAS X system is currently being standardized by the FAA and the international safety community. Ong adapted some of the techniques from ACAS X and applied them toward developing an automated conflict avoidance system for unmanned aircraft. 
But the expected density of drone flights created an entirely new level of complexity for the SISL team. "In traditional aviation, conflicts between more than two aircraft are pretty rare," Ong said. But in confined, urban airspaces, conflicts could easily involve three or more drones. For instance, consider several packages being delivered to the same address. Or imagine a blaze that draws multiple drones from the fire department, police and local media. "As the number of aircraft grows, the avoidance problem becomes exponentially more complicated, a challenge that mathematicians call the curse of dimensionality," Ong said. "So we have to come up with better ways than just brute-force searching and iterating through all possible solutions." To beat the curse, Ong's cloud computing architecture separates multi-aircraft conflicts into paired problems. It quickly picks the best action for each pair of drones from a table predicting each drone's flight path. The server then coordinates each of these pairwise solutions and issues a joint collision avoidance order to all of the affected drones. In a matter of milliseconds, a dozen drones delivering Christmas Eve packages will know precisely what maneuvers to take to ensure each enjoys a safe flight path down a crowded cul-de-sac. To test this approach, the researchers ran over 1 million simulations of encounters between two to 10 aircraft. They compared their pairwise solution to other solutions, such as a less-coordinated strategy in which each drone only reacts to its closest threat. Their pairwise solution showed significant safety improvements, faster decision times and decreased alert rates. Ong and Kochenderfer said more work remains to be done, for instance, to account for communication breakdowns, sudden weather anomalies or deliberately disruptive drones. 
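The pairwise decomposition described above can be sketched as follows. This is an illustrative toy, not the ACAS X policy: the "policy" here is a trivial climb/descend rule standing in for a lookup table that would be computed offline by dynamic programming, and the 50 m conflict threshold is invented:

```python
import itertools

def pairwise_action(own, intruder, threshold=50.0):
    """Resolve one pair: if within the conflict threshold, the lower-id
    drone climbs and the other descends (a stand-in for a real policy
    table indexed by relative state)."""
    dx = own["x"] - intruder["x"]
    dy = own["y"] - intruder["y"]
    if (dx * dx + dy * dy) ** 0.5 > threshold:
        return None  # no conflict for this pair
    return "climb" if own["id"] < intruder["id"] else "descend"

def resolve_conflicts(drones, threshold=50.0):
    """Decompose a multi-drone conflict into pairwise problems, then merge
    the per-pair actions into one joint advisory (drone id -> action).
    This sidesteps the exponential joint state space: the work grows only
    with the number of pairs."""
    advisories = {}
    for a, b in itertools.combinations(drones, 2):
        act_a = pairwise_action(a, b, threshold)
        if act_a is not None:
            advisories.setdefault(a["id"], act_a)
            advisories.setdefault(b["id"], pairwise_action(b, a, threshold))
    return advisories

fleet = [{"id": 1, "x": 0, "y": 0},
         {"id": 2, "x": 30, "y": 0},
         {"id": 3, "x": 500, "y": 500}]
joint = resolve_conflicts(fleet)
assert joint == {1: "climb", 2: "descend"}  # drone 3 is far away: no advisory
```

The real system additionally has the server arbitrate when the pairwise recommendations for one drone disagree, which is the coordination step the article refers to.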
But they expect that an evolved version of their architecture will be implemented in one of the final builds of the UTM, which NASA estimates will be completed by 2019. "It's gratifying to work on a problem that people are coming together and knocking heads and figuring out the best solution, even though there actually isn't a single profitable flight yet," said Ong, whose work was recognized in September as the best graduate student paper at the Digital Avionics Systems Conference held in Prague.

News Article | April 20, 2016
Site: http://www.rdmag.com/rss-feeds/all/rss.xml/all

Large aircraft toting passengers from one destination to another follow an air traffic system that keeps them safe as they travel through the national airspace system. According to the Federal Aviation Administration, pilots flying from Los Angeles to Baltimore will talk to around 28 air traffic controllers in 11 different facilities across the United States. Each day, the FAA provides this service to tens of thousands of aircraft. Drones, or unmanned aircraft systems (UAS), aren't like conventional aircraft. Their surge in popularity is recent, with proliferation in both the public and private sectors. Their utilization as delivery systems is on the cusp of becoming a reality, but much work has to be accomplished before that becomes the norm. Working to anticipate these changes are the FAA and NASA. This week, they conducted the largest test of NASA's UAS traffic management (UTM) research platform. The test consisted of 24 drones flying simultaneously at six different FAA UAS test site locations around the country. The test sites were located in Fairbanks, Alaska; Grand Forks, N.D.; Reno, Nev.; Rome, N.Y.; Blacksburg, Va.; Bushwood, Md.; and Corpus Christi, Texas. "UTM is designed to enable safe low-altitude civilian UAS operations by providing pilots information needed to maintain separation from other aircraft by reserving areas for specific routes, with consideration of restricted airspace and adverse weather conditions," said project lead Parimal Kopardekar in a statement following an initial test of the system in the fall. During the fall test, drone pilots submitted their operation plans and positions to the UTM system for approval. The UTM system checked the airspace for conflicts and tracked the drones. To make the drone management system a reality, NASA is researching a variety of technologies in airspace design, dynamic geofencing, congestion management, and terrain avoidance.
According to Popular Science, the agency needed only 16 of the drones to work with the system to count the test as a success. However, all 24 worked. "Research results in the form of airspace integration requirements are expected to be transferred from NASA to the FAA in 2019 for their further testing," according to the agency. In the interim, NASA plans on testing the system in high-density urban areas for tasks including news gathering and package delivery, among other tests scheduled over the next three years.
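The submit-and-check workflow from the fall test can be sketched as a toy pre-flight deconfliction check: a plan reserves a volume of airspace for a time window, and a new plan is approved only if it conflicts with no accepted plan. The field names and 100 m separation value below are invented for illustration:

```python
def plans_conflict(plan_a, plan_b, sep_m=100.0):
    """Two reserved operations conflict if their time windows overlap and
    their (x, y) positions come within the required separation.
    Each plan is a dict with t0/t1 (seconds) and a point (x, y) in metres."""
    time_overlap = plan_a["t0"] < plan_b["t1"] and plan_b["t0"] < plan_a["t1"]
    dx = plan_a["x"] - plan_b["x"]
    dy = plan_a["y"] - plan_b["y"]
    too_close = (dx * dx + dy * dy) ** 0.5 < sep_m
    return time_overlap and too_close

def approve(new_plan, accepted_plans):
    """Approve a submitted plan only if it conflicts with no accepted plan."""
    return all(not plans_conflict(new_plan, p) for p in accepted_plans)

accepted = [{"t0": 0, "t1": 600, "x": 0, "y": 0}]
assert not approve({"t0": 300, "t1": 900, "x": 50, "y": 0}, accepted)  # overlaps in time and space
assert approve({"t0": 700, "t1": 900, "x": 50, "y": 0}, accepted)      # later time window: OK
```

Real UTM operation volumes are 4D (3D geometry plus time) with weather and restricted-airspace constraints layered on top, but the accept/reject logic reduces to overlap tests of this kind.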

News Article
Site: http://www.nature.com/nature/current_issue/

A review and accounting of the history of claims and disputed points in the published literature was developed before construction of the meta-model that guided this analysis (Extended Data Fig. 1 and Supplementary Information). During this review, attention was paid to the theoretical constructs invoked by various authors, since our goal was to provide a framework that had the potential to clarify and resolve disputed points. Attention was also paid to the types of variables measured by different authors, as the relationship between constructs and measurements constitutes one of several sources of ambiguity and confusion31, 32. An in-depth description of the literature synthesized to generate the meta-model is presented in the Supplementary Information. Data collected by the Nutrient Network Cooperative33 were used to design and evaluate a structural equation model based on the meta-model presented. The Nutrient Network is a distributed, coordinated research cooperative. Sites in the Network are dominated primarily by herbaceous vegetation and are intended to represent natural/semi-natural grasslands and related ecosystems worldwide. Individual sites were selected to accommodate at least a 1,000 m2 study design footprint. Most sites sampled vegetation in 2007, although 12 sites were sampled in 2008 or 2009. No statistical methods were used to predetermine sample size. Samples were collected using a completely randomized block design. The standard design has three blocks and ten plots per block at each site, although some sites deviate slightly from this design. A few sites are grazed or burned before sampling, consistent with their traditional management. Further details on site selection and design can be found at http://www.nutnet.org/exp_protocol. In this study, we analysed data from 39 of the 45 sites considered in ref. 2 possessing a complete set of covariates (Extended Data Table 3). While ref. 2 only examined bivariate relations between productivity and richness, our analyses brought in many additional variables (Extended Data Table 1) so that we could address the many hypotheses embodied in the meta-model. Individual plots with greater than 10% woody plant cover were omitted from consideration to maintain comparability in total biomass across plots. This step resulted in the removal of 73 plots, leaving 1,126 plots in the data set analysed. Four plots were omitted owing to incomplete plant data and one for incomplete light data. For two of the sites, live mass was estimated from total mass using available information on the proportion of live to total. One apparent measurement error was detected in the light data and the associated plot was removed from the analysed sample. Random imputation methods34 were used for cases where soil measurements were missing at a site. The decision to use this approach was based on weighing the demerits of deleting nearly complete multivariate data records against introducing a modest amount of random error through the imputation process. Study plots in this investigation measured 5 m × 5 m and were separated by 1 m walkways. A single 1 m × 1 m subplot within each plot was permanently marked and sampled for species richness during the season of peak biomass. Sites with strong seasonal variation in composition were sampled twice during the season to assemble a complete list of species. To obtain an estimate of site-level richness, we used a jack-knife procedure35. (Because there have been some recent advances in the reduction of certain sources of bias in richness estimation36, we checked our original results by computing site-level richness using the new iNEXT R package. The correlation between the two estimates of richness was found to be 0.972.) Productivity and total above-ground biomass were sampled immediately adjacent to the permanent vegetation subplot.
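The jack-knife richness estimate can be sketched with the standard first-order incidence-based formula (the paper does not state which jackknife order was used, so this is illustrative): S_jack = S_obs + Q1·(m − 1)/m, where Q1 is the number of species observed in exactly one plot and m is the number of plots.

```python
def jackknife_richness(plot_species):
    """First-order jackknife estimate of site-level species richness.

    plot_species: list of per-plot species sets. The estimator adds a
    correction for species so rare they appeared in only one plot."""
    m = len(plot_species)
    counts = {}
    for plot in plot_species:
        for sp in plot:
            counts[sp] = counts.get(sp, 0) + 1
    s_obs = len(counts)                                  # observed richness
    q1 = sum(1 for c in counts.values() if c == 1)       # singleton species
    return s_obs + q1 * (m - 1) / m

plots = [{"A", "B"}, {"A", "C"}, {"A", "B", "D"}]
# S_obs = 4; C and D each occur in exactly one plot (Q1 = 2); m = 3.
assert jackknife_richness(plots) == 4 + 2 * 2 / 3
```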
Vegetation was sampled destructively by clipping at ground level all above-ground biomass of individual plants rooted within two 0.1-m2 (10 cm × 100 cm) strips. Harvested plant material was sorted into the current year’s live and recently senescent material, and into previous year’s growth (including litter). For shrubs and sub-shrubs, the current year’s leaves and stems were collected. Plant material was dried at 60 °C to a constant mass and weighed to the nearest 0.01 g. We used the current year’s biomass increment as our estimate of annual above-ground productivity, which commonly serves as a measurable surrogate for total productivity37, 38. All sites used this protocol to estimate productivity (except for the Sevilleta, New Mexico, site which relied on species-specific allometric relationships39). Total above-ground biomass was computed as the sum of the current year’s biomass and that from previous years and included remaining dead material (litter). Photosynthetically active radiation was measured at the time of peak biomass, both above the vegetation and at the ground surface, the ratio representing the proportion of available light reaching the ground. Degree of shading was computed as 1.0 minus the proportion of light reaching the ground. Within each plot, 250 g of soil were collected and air dried for processing and soil archiving. Total soil %C and %N were measured using dry combustion gas chromatography analysis (COSTECH ESC 4010 Element Analyzer) at the University of Nebraska. All other soil analyses were performed at A&L Analytical Laboratory, Memphis, Tennessee, USA; these included the following: extractable soil phosphorus and potassium were quantified using the Mehlich-3 extraction method, and parts per million concentration estimated using inductively coupled plasma-emission spectrometry. Soil pH was quantified with a pH probe (Fisher Scientific) in a slurry made from 10 g dry soil and 25 ml of deionized water. 
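The shading and biomass quantities defined above are simple arithmetic; a sketch (the PAR and mass values are hypothetical):

```python
def degree_of_shading(par_above, par_ground):
    """Shading = 1 - proportion of photosynthetically active radiation
    (PAR) reaching the ground, per the protocol described above."""
    return 1.0 - par_ground / par_above

def total_aboveground_biomass(current_year_g, previous_years_g, litter_g):
    """Total above-ground biomass: current year's growth plus previous
    years' material, including remaining dead material (litter)."""
    return current_year_g + previous_years_g + litter_g

# Hypothetical plot: 1500 above-canopy PAR, 300 at the ground -> 80% shading.
assert abs(degree_of_shading(1500.0, 300.0) - 0.8) < 1e-12
assert total_aboveground_biomass(120.5, 40.0, 15.25) == 175.75
```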
Soil texture, expressed as the percentage sand, percentage silt, and percentage clay, was measured on 100 g dry soil using the Bouyoucos method. Further details on sampling methodology are at http://www.nutnet.org/exp_protocol. Climatic characteristics were obtained for each site from version 1.4 of BioClim, which is part of the WorldClim40 set of global climate layers at 1 km2 spatial resolution. To represent measures of temperature and precipitation with meaningful relationships to plant growth in global grasslands, we selected mean temperature of the wettest quarter of the year (BIO8) and total precipitation of the warmest quarter of the year (BIO18). Climate values were extracted using universal transverse Mercator (UTM) coordinates collected near the centre of each site. Several derived variables were developed to include in the modelling effort. To represent within-site heterogeneity, coefficients of variation were computed for the site-level model based on plot-to-plot variation in plot-level measures. This allowed us to examine the explanatory value of heterogeneity in soil nitrogen, phosphorus, potassium, and pH, as well as heterogeneity in biomass and light interception. Indices of total resource supply and resource imbalance were also calculated using the method of ref. 27 and evaluated for inclusion in our models. Disturbance history information for the sites was converted into four binary (0,1) variables for analyses; information available included pretreatment history of (1) substantial anthropogenic alteration (for example, conversion to pasture), (2) grazing history, by wild or domestic animals, (3) active management (typically haying or mowing), and (4) fire. Current levels of herbivory were estimated by comparing biomass inside and outside exclosure plots located at each site. Certain variables were constructed within the structural equation modelling process using the composite index development methods of ref. 41.
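The within-site heterogeneity measures described above are plot-to-plot coefficients of variation, i.e. sample standard deviation divided by the mean of a plot-level measure across a site's plots. A minimal sketch (values hypothetical):

```python
import statistics

def coefficient_of_variation(values):
    """Within-site heterogeneity of a plot-level measure: sample SD / mean.
    Being dimensionless, it is comparable across soil N, P, K, pH, etc."""
    return statistics.stdev(values) / statistics.mean(values)

soil_n_by_plot = [0.21, 0.25, 0.19, 0.23]  # hypothetical soil %N per plot
cv = coefficient_of_variation(soil_n_by_plot)
assert 0 < cv < 1  # modest within-site heterogeneity
```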
Consideration of the ideas conveyed by the meta-model (Extended Data Fig. 1) and the specific situation being modelled suggested the need to develop index variables for soil fertility and soil suitability. Soil fertility indices were developed using all measured soil properties and were operationally defined as the drivers of productivity, controlling for all other effects on productivity in the model. Two indices were developed, one for site-to-site variations and another for plot-to-plot variations. Similarly, soil suitability indices were developed for the site- and plot-level data using all measured soil properties as potential contributors and operationally defined as the drivers of richness, controlling for all other effects on richness in the model. Modelling with composites in structural equation models involved a two-step process. First, we constructed a fully specified structural equation model (as represented in Fig. 2), while providing a specific set of soil properties to serve as formative indicators for soil fertility and soil suitability. Variables that did not contribute to the total model (on the basis of model fit indices) were eliminated individually for the two composites being formed. The resulting prediction equations were used to compute index scores. Then, the model was reconstructed, substituting the indices in place of the collection of individual soil properties. Documentation of this process is provided in the Supplementary Information computer code (R script). A structural equation model was developed based on the ideas embodied in the meta-model, available data, and the principles and procedures laid out in ref. 42. Indicators for constructs were chosen from the set of variables available and quantities that could be computed from them (Extended Data Table 1).
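The two-step composite procedure boils down to computing a weighted index score from the first-stage coefficients for each retained soil property, then reusing that single score in place of the individual properties. A sketch with made-up weights and property names (the actual analysis used R with lavaan, and the real weights come from the fitted first-stage model):

```python
def composite_index(soil, weights):
    """Composite soil index: a weighted sum of measured soil properties.
    Properties absent from `weights` (eliminated in step one) are ignored."""
    return sum(weights[k] * soil[k] for k in weights)

# Hypothetical first-stage coefficients for a 'soil fertility' composite;
# 'sand' was dropped during the elimination step, so it carries no weight.
fertility_weights = {"pct_N": 2.0, "pct_C": 0.1, "pH": -0.05}
plot_soil = {"pct_N": 0.22, "pct_C": 2.8, "pH": 6.5, "sand": 40.0}

score = composite_index(plot_soil, fertility_weights)
# 2.0*0.22 + 0.1*2.8 + (-0.05)*6.5 = 0.44 + 0.28 - 0.325 = 0.395
assert abs(score - 0.395) < 1e-9
```

In step two, `score` (one column per plot) replaces the block of soil columns when the structural equation model is re-estimated.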
The modelling approach used was semi-exploratory in that while we worked to address the general hypothesis embodied in the meta-model, the precise variables (for example, mean annual precipitation versus mean annual precipitation in the warmest quarter of the year) to use for certain constructs (specifically, resource supplies and regulators) were determined empirically. Compositing techniques were used to estimate construct-level effects41. For comparative purposes, we analysed the bivariate pattern in Fig. 1A using a variety of regression models, including Ricker-type nonlinear models as well as second- and third-order polynomials. A three-parameter Ricker-type model provided the best fit for the data. Data were screened for distributional properties and nonlinear relations. Several variables were log-transformed as a result of these evaluations (Extended Data Table 1). We used the R software platform43 and the lavaan package44 along with the lavaan.survey45 package for our structural equation model analyses. For the plot-scale model, robust χ2 tests, as implemented in the lavaan.survey package, were used to judge variable inclusion and model adequacy because of the nested nature of the plot-level data. Each link in the final model was evaluated for significant contribution to the model. Final model fit to data was very good for both submodels. Model fit indices were supplemented with additional diagnostic evaluations that involve visualizing residual relationships to evaluate conditional independence29. These residual visualizations allowed, among other things, an ability to evaluate linearity assumptions and implement curve-fitting procedures if needed (which was needed only for the composite relationships). Our structural equation model is non-recursive and includes a causal loop. Models of this form are commonplace in structural equation model applications, although they come with some additional assumptions and requirements.
Specifically, there is a requirement for unique predictors for the elements involved in loops, a requirement that was met in this case. Additional analysis details are documented in the R script used for the analysis (Supplementary Information). Multi-level relations were incorporated into the architecture of our model. Several ways to incorporate both site- and plot-level variations in the model were considered and multiple approaches evaluated to ensure results are general. In the model form presented, we chose to follow modern hierarchical modelling principles and allow plot-level observations to depend on site-level parameters, since plots were nested within sites. The result of choosing this approach means site-level explanatory effects can filter down to the plot level while plot-level explanatory variables (for example, pathways from edaphic conditions to plot richness) explain additional plot-to-plot variations in responses that are not predicted from site-level (mean) conditions. Consistent with the capabilities of the structural equation model software used in our analyses (described below), we estimated site- and plot-level submodels using a two-stage approach, first estimating parameters for the site-level component and then using site productivity, biomass, and richness as exogenous predictors in the plot-level component. Comparisons with results from separate site- and plot-level models led to very similar conclusions, although the hierarchical approach used allowed a better integration of processes and greater variance explanation. One of our objectives in this study was to assess the model dimensionality needed to detect the hypothesized signals in the data. To do this, we started with the most complete model (Fig. 2) and eliminated variables from the model (always retaining richness and some measure of biomass production, either productivity or total biomass). 
We then made any modifications needed to ensure adequate model-data fit for these reduced-form models. The consequences of model simplification were judged on the basis of signal retention, in particular a loss of capacity to detect signals associated with the remaining parts of the model. The computer script associated with the analyses in this paper is available as part of the Supplementary Information.
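For reference, the hump-shaped Ricker-type relationship fitted to Fig. 1A can be illustrated with one common three-parameter form, y = a·x^c·e^(−bx); the paper's exact parameterization is not stated, so this form and the parameter values below are assumptions for illustration only:

```python
import math

def ricker(x, a, b, c):
    """One common three-parameter Ricker-type form, y = a * x**c * exp(-b*x).
    For b, c > 0 the curve rises, peaks, and declines: setting the
    derivative to zero gives an interior maximum at x = c / b."""
    return a * x ** c * math.exp(-b * x)

# Hypothetical parameters (NOT fitted values from the paper).
a, b, c = 5.0, 0.01, 1.2
x_peak = c / b  # analytic location of the maximum (here, 120)

# The value at the peak exceeds values on either side of it.
assert ricker(x_peak, a, b, c) > ricker(x_peak * 0.5, a, b, c)
assert ricker(x_peak, a, b, c) > ricker(x_peak * 2.0, a, b, c)
```

This interior maximum is what lets a Ricker-type model capture a unimodal productivity-richness pattern that a monotone model cannot.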
