News Article | May 4, 2017
Lao People’s Democratic Republic (Lao PDR) is highly susceptible to climate change and natural hazards, particularly floods and droughts, which seriously affect the country’s agricultural production. Although its contribution to GDP has gradually declined in recent years, agriculture continues to play a major role in Lao PDR’s economy. Kangphosay village, located in Savannakhet Province, and its surrounding agricultural lands lie along a riverbank in an area prone to flooding. Since 1992, villagers have experienced four major floods, the most recent in 2015, when 106 hectares of agricultural land were flooded, damaging more than one third of cropped land. Four years earlier, in 2011, the village lost all its crops during a major flood that persisted for three months. Almost 400 hectares of farmland are potentially vulnerable to flood damage in any given year, and the water level can remain persistently high for months at a time.

FAO and the European Union partnered with the Ministry of Agriculture and Forestry and provincial authorities to provide disaster risk reduction and management (DRRM) training in agriculture in Kangphosay village, with the aim of increasing farmers’ resilience to disasters and broadening livelihood diversity. Lao PDR is very vulnerable to natural disasters, including extreme weather events, which have been increasing in frequency and intensity. Almost all the country’s farming systems are susceptible to flooding, drought and the late onset of the rainy season. With a high dependency on traditional agricultural systems and a predominance of smallholder farms, the impacts of such natural disasters can be all the more devastating.

Good Practice Operations

FAO’s DRRM training has supported smallholder farmers in Kangphosay village not only in adopting Good Practice Operations (GPOs) designed to prevent flood damage, but also in successfully adapting their approach to fish culture.
Malaythip Viengmany, a smallholder farmer and beneficiary from Kangphosay village whose livelihood relies mainly on fish farming, is now sharing her knowledge with others, and the community as a whole is also diversifying its livestock. “Earlier, when the floods came, there was no way to avoid the loss. One of the prevention options offered by the project, which we have adopted, is to place a high net fence around the fish pond so that during periods of flooding the fish are not swept away. Apart from this, the project also introduced some new techniques for raising fish,” explained Malaythip. “Thanks to the support from the FAO project we are able to raise more fish and minimize the damage from the floods.” Today, Malaythip and her family not only have sufficient fish for their own consumption, but are also able to supplement their income by selling the surplus. As an FAO project village, Kangphosay has ten households participating in the GPO programme, which comprises flood-tolerant rice, fish culture, and organic fertilizer/soil improvement. Farmers have received soil improvement training to produce organic fertilizer and, as a result, have increased their overall agricultural production. In addition to boosting their own livelihoods, they have shared these techniques with other farmers to spread the practice throughout the community. While the GPOs implemented so far offer significant opportunities to reduce vulnerability in target areas, their selective implementation leaves considerable room for expansion.

Resilient livelihood opportunities for women

Kangphosay villagers have pursued a range of activities to broaden their livelihood diversity, from growing multiple types of crops to raising different types of livestock and developing supplemental incomes. Additional activities, both farm- and business-focused, aim to limit the impacts of disasters, in particular through the exploration of resilient livelihood opportunities for women.
Traditionally, women have had a strong agricultural role in Lao PDR communities, and that role should be enhanced. Increased inclusion of women in agricultural decision making and training activities, as demonstrated by the success of the GPO programme, increases the effectiveness of the community’s agricultural system as a whole. District Agricultural Officers periodically check in with communities to provide additional support and resources, which are taken up by citizens and local leaders. In support of this goal, DRRM plans have proven most successful when local stakeholders take ownership of the processes and organize periodic meetings to oversee the implementation of activities.

Mainstreaming DRRM into agricultural planning

The Ministry of Agriculture and Forestry (MAF) has taken important steps to better address and mainstream DRRM into agricultural planning. With the support of FAO, it developed a sector-specific Plan of Action (PoA) for DRRM in agriculture, produced in 2014, to raise awareness, strengthen sectoral capabilities and promote a proactive approach to DRRM. Through this work, the Government has been increasing the resilience of agricultural communities to disasters. The Organization is currently supporting implementation of the Plan by developing guidelines for planners and technical officials, and by field testing and validating good practice options. Assistance is also enabling the Government to provide rapid and coordinated cross-sectoral responses to poultry disease epidemics, including the detection and stamping out of several outbreaks of Highly Pathogenic Avian Influenza (HPAI).
News Article | May 15, 2017
In the Lending Circles model, small groups come together to lend, borrow, and save money collectively. For example, a group of 10 people might agree on a plan to each borrow $1,000. Each participant pays $100 per month for 10 months to fund the loans. MAF reports participants' monthly payments to credit agencies and secures the loans in the event that one member misses a payment; remarkably, MAF has maintained a repayment rate of more than 99 percent. People all over the world use informal social lending practices when bank loans aren't an option. Lending Circles, by reporting payments directly to credit bureaus, transforms these practices into a proven way to establish credit and become integrated into the financial mainstream. "Too often, low-income people are invisible to traditional financial providers," explains José A. Quiñonez, MAF's founder and CEO. "Sixty-four million people in this country don't have credit scores. Another 17 million people don't have access to a bank account. Our goal is to help them overcome those barriers so that they, their families, and their communities can flourish." The majority of Lending Circles participants enroll in the program with the primary goal of establishing or increasing their credit scores. But they use their loans for a range of goals: to pay off expensive debt, establish emergency savings, put a down payment on an apartment, or start their own business. JPMorgan Chase, a longtime financial supporter of both LISC and MAF, is funding this expansion, which includes the addition of two new Lending Circles programs at Financial Opportunity Centers in the Greater Cincinnati area. The experience to date is promising: five Financial Opportunity Centers are now official Lending Circles providers. Collectively, they have provided over $300,000 to nearly 500 participants. Since 2013, JPMorgan Chase has invested close to $3 million in Lending Circles efforts with partners across the country. 
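The cash-flow arithmetic of the model described above can be sketched in a few lines. The participant count and dollar amounts mirror the article's example; the function name and the rotation order are illustrative, not part of MAF's actual program design.

```python
# Sketch of the Lending Circles cash flow from the example above: 10
# participants each pay $100 per month for 10 months, and each month the
# pooled $1,000 goes to one participant in turn. The rotation order and
# names here are illustrative assumptions.

def lending_circle_schedule(n_participants, monthly_payment):
    """Return a list of (month, recipient_index, loan_amount) tuples."""
    loan_amount = n_participants * monthly_payment
    return [(month, recipient, loan_amount)
            for month, recipient in enumerate(range(n_participants), start=1)]

schedule = lending_circle_schedule(10, 100)
# Over the full cycle each member pays in exactly what they receive
# ($1,000), while every on-time payment is reported to the credit bureaus.
total_paid_per_member = 10 * 100
assert all(loan == total_paid_per_member for _, _, loan in schedule)
```

The zero-sum structure is the point: no interest changes hands, but each reported payment builds a credit history.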
"Improving the financial health of households leads to stronger, more resilient communities and economies," explained Colleen Briggs, executive director of community innovation, JPMorgan Chase. "JPMorgan Chase is excited to help expand the proven MAF Lending Circles model to LISC clients throughout the country. By establishing and improving credit, this partnership will help more Americans improve their financial security and achieve their long-term goals." "Poverty should not be a life sentence," said Maurice Jones, LISC president and CEO. "As a country, we need to open new doors for the millions of Americans who haven't been able to share in the opportunities and wealth our economy creates. That means connecting them to the tools they need to earn more, save more, and find a more secure financial footing." Brighton Center, located in Newport, Ky., is one of the newest Lending Circles partners. Last month, Brighton Center brought together eight people to form the organization's first circle. The Lending Circles process begins with the Formation, when participants meet for the first time and decide upon the amount to contribute monthly, the order in which they'll receive their loans, and a name for their group. (Brighton Center's eight-member circle proudly named themselves "Sens8tional".) Brighton Center staff are confident that the Lending Circles program will be a valuable resource for clients who already have a relationship with the organization and are involved in its other programs, ranging from job training to childcare. "The families we work with every day want to build a better future for themselves and their children," said Stephanie Stiene, director of financial services with Brighton Center. "But sometimes they don't know where to begin. By giving them a tool they can use to access affordable loans and build credit, families move toward self-sufficiency and financial independence."
Another recent addition to the network is Santa Maria Community Services, located in Cincinnati's Greater Price Hill neighborhood. The Lending Circles program fits naturally among the organization's existing services, which are geared toward raising incomes, improving education, boosting health, and supporting youth and families. The seven participants in Santa Maria's first group, dubbed "The A-Team", were thrilled to have access to such a valuable, "win-win" financial service. Santa Maria staff members report that many other clients are eager to take advantage of the program and intend to enroll in the coming months. "By connecting neighbors through shared financial goals, we have the chance not only to help individuals improve their quality of life, but also to fuel economic opportunities that benefit our community as a whole," said Santa Maria's CEO H.A. Musser. "Santa Maria is a catalyst and advocate for Greater Price Hill families to attain their educational, financial, and health goals. The lack of credit-building opportunities in our region keeps people from establishing and maintaining financial security. Lending Circles allow families to unlock their full economic potential," he said.

About LISC

LISC equips struggling communities with the capital, program strategy and know-how to become places where people can thrive. It combines corporate, government and philanthropic resources. Since 1980, LISC has invested $17.3 billion to build or rehab 366,000 affordable homes and apartments and develop 61 million square feet of retail, community and educational space. For more, visit www.lisc.org

About MAF

Mission Asset Fund (MAF) builds pathways to prosperity through zero-interest, credit-building loans. Over 7,000 people across the country have used MAF's award-winning Lending Circles programs to increase credit scores, pay down debt, and save for important goals like becoming a homeowner, a student, or a U.S. citizen.
MAF currently manages a national network of over 50 Lending Circles providers in 17 states and Washington, D.C.

About JPMorgan Chase

JPMorgan Chase & Co. is a leading global financial services firm with assets of $2.5 trillion and operations worldwide. The Firm is a leader in investment banking, financial services for consumers and small businesses, commercial banking, financial transaction processing, and asset management. A component of the Dow Jones Industrial Average, JPMorgan Chase & Co. serves millions of consumers in the United States and many of the world's most prominent corporate, institutional and government clients under its J.P. Morgan and Chase brands. Information about JPMorgan Chase & Co. is available at www.JPMorganChase.com. To view the original version on PR Newswire, visit: http://www.prnewswire.com/news-releases/national-nonprofits-expand-partnerships-to-help-low-income-families-increase-financial-opportunity-300457721.html
News Article | May 16, 2017
DUBAI, United Arab Emirates--(BUSINESS WIRE)--Fetchr, a delivery-business and consumer technology app dedicated to disrupting shipping by eliminating the need for a traditional address using its patented mobile technology solutions, today announced that it has raised $41 million in a Series B financing round led by New Enterprise Associates Inc. (NEA) to continue expanding globally and developing its proprietary technology. Fetchr’s innovative and technological edge in an antiquated industry comes at a time when e-commerce is growing in the MENA region and attention has turned to the delivery industry as an enabler of that growth. The interest from regional investors, including Majid Al Futtaim Holding, a leading regional group whose portfolio includes the largest malls, retail and leisure brands, is a testament to the expected changes in the retail environment that fetchr will continue to facilitate. Other investors in the round include Nokia Growth Partners, Raed Ventures, Iliad Partners, BECO Capital, YBA Kanoo, Venture Souq and Swicorp. Fetchr tackles the “no address problem” in emerging markets, typically encountered by traditional companies delivering packages to customers. In a region where more than 80% of users have smartphones, fetchr is tackling delivery challenges by going directly to customers’ phones and capturing the geo-location for package deliveries. With technology at the center of its business model, fetchr’s vision is to address shipping challenges across all emerging markets and make delivery as easy as shopping online. In August 2016, fetchr launched its on-demand delivery service “NOW”, which allows customers to receive door-to-door deliveries within an hour. Fetchr is currently operational in the UAE, Saudi Arabia, Egypt and Bahrain, with plans to expand its footprint in MENA and beyond.
Fetchr is also partnering with governmental organizations such as Oman Post, extending its technology-backed delivery solutions through strategic partnerships. “Fetchr has demonstrated impressive growth since our initial investment in 2015,” said Scott Sandell, Managing General Partner at NEA and fetchr board member. “They’re revolutionizing global e-commerce by enabling delivery access via mobile (in contrast to the traditional requirement of a physical address). We’re thrilled to continue partnering with the team as they further expand their reach among the two billion customers in emerging markets worldwide.” “Fetchr has revolutionized and digitized deliveries, enabling e-commerce and providing a seamless delivery experience across the booming e-commerce industry in the MENA region, and we’re excited to see their continued growth and expansion across the region and beyond,” said Abdulrahman Addas, CEO of MAF Holdings. “More than two billion people live without an address. While these emerging markets represent the key to the growth of e-commerce in the next decade, they are still being catered to with antiquated address-based delivery software. This is both ineffective and incredibly frustrating for customers who think ‘mobile first’ throughout the region,” said Idriss Al Rifai, Founder & CEO at fetchr. “I am excited that fetchr is the first company in the Middle East to employ women couriers in Dubai and an all-women call center team in Saudi Arabia. It is my passion to create gender equality and more jobs that empower women to break out of traditional roles,” concluded Joy Ajlouny, Co-founder and Creative Director. Fetchr is a tech company disrupting the traditional logistics sector in the MENA region. The company was founded in 2012 by Idriss Al Rifai, CEO and Founder, and Joy Ajlouny, Co-Founder and Creative Director.
Fetchr’s sophisticated technology uses the customer’s mobile geo-location as a physical address to deliver packages straight to them, wherever they are. For more information, visit https://www.fetchr.us.

About New Enterprise Associates Inc. (NEA)

New Enterprise Associates, Inc. (NEA) is a global venture capital firm focused on helping entrepreneurs build transformational businesses across multiple stages, sectors and geographies. With approximately $13 billion in committed capital, NEA invests in technology and healthcare companies at all stages of a company’s lifecycle, from seed stage through IPO. The firm’s long track record of successful investing includes more than 195 portfolio company IPOs and more than 320 acquisitions. For additional information, visit www.nea.com

Founded in 1992, Majid Al Futtaim is the leading shopping mall, communities, retail and leisure pioneer across the Middle East, Africa and Asia. Since its inception, it has grown into one of the United Arab Emirates’ most respected and successful businesses spanning 15 international markets, with 20 shopping malls, 12 hotels and three mixed-use communities, and further developments underway in the region.
News Article | February 22, 2017
The report "Exhaust Systems Market by After Treatment Device (DOC, DPF, LNT, SCR & GPF), Fuel Type, Component (Manifold, Downpipe, Catalytic Converter, Muffler, Tailpipe, Sensor), and Region, Aftermarket by Component & After Treatment Device - Global Forecast to 2021", published by MarketsandMarkets, finds that the Exhaust Systems Market is primarily driven by the adoption of newer, more stringent emission regulations from regulatory bodies worldwide. The market is projected to grow at a CAGR of 8.45%, to reach USD 59.02 Billion by 2021. Browse 96 market data Tables and 72 Figures spread through 202 Pages and in-depth TOC on "Exhaust Systems Market". Early buyers will receive 10% customization on this report.

"Catalytic converters and exhaust manifolds hold the largest share of the Exhaust Systems Market by component"

Catalytic converters and exhaust manifolds account for the largest share of the components market. The growth can be attributed to the increased production of passenger cars and commercial vehicles around the world. Exhaust system manufacturers have been investing heavily in R&D for the development of technologically advanced products in order to comply with the new guidelines. As a result, the global market for exhaust system components is projected to accelerate significantly. Increasing vehicle production across the globe will also trigger the growth of the components market.

"Gasoline Particulate Filters (GPF) are the fastest-growing after-treatment device segment of the Exhaust Systems Market"

The introduction of gasoline direct injection (GDI) technology in passenger cars began some time ago. GDI engines emit more particulate matter (PM) than multi-port fuel injection (MPI) engines. In order to meet the emission limits defined by European and U.S.
regulatory bodies, the implementation of GPF devices becomes necessary in order to control the particulate matter (PM) level from vehicles. For instance, the Euro 6 norms have been introduced to limit the particulate matter level from gasoline engines. Hence, the latest emission regulations require lower gasoline emissions, which will further fuel the market for GPF devices. The Asia-Pacific region has emerged as the leader in global vehicle production, with production growth of around 17% over the past five years, and has also become the leader in automotive exhaust systems manufacturing. China has been the world's largest automobile market in recent years. According to the Organisation Internationale des Constructeurs Automobiles (OICA), China accounted for almost 27% of global vehicle production in 2015. This can be owed mainly to major automotive OEMs setting up manufacturing plants in China. In addition, the growing economies of countries like India and Indonesia are reasons that the region is able to maintain its leadership position in the market across the globe. The Exhaust Systems Market is dominated by a few global players such as Faurecia (France), Tenneco Inc. (U.S.), Eberspächer (Germany), Futaba Industrial Co. Ltd. (Japan), Sango Co. Ltd (Japan), Benteler International AG (Austria), Friedrich Boysen GmbH & Co. KG (Germany), Yutaka Giken Co., Ltd. (Japan), Sejong Industrial Co., Ltd. (South Korea), and Bosal (Belgium).
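The compound-growth arithmetic behind the forecast quoted earlier (a CAGR of 8.45% reaching USD 59.02 billion by 2021) can be checked in a few lines. The article does not state the base year or starting value, so the five-year horizon below is an illustrative assumption used only to back out the implied starting market size.

```python
# Reverse a CAGR projection to its implied starting value. The report
# quotes 8.45% CAGR and USD 59.02 billion by 2021; the 5-year horizon
# (i.e. a 2016 base year) is an assumption, not stated in the article.

def implied_start_value(end_value, cagr, years):
    """Divide out compound growth: start = end / (1 + cagr)^years."""
    return end_value / (1 + cagr) ** years

start = implied_start_value(59.02, 0.0845, 5)
# Under the 5-year assumption this implies a starting market size of
# roughly USD 39.3 billion.
```

The same function applied with a different horizon shows how sensitive the implied base value is to the (unstated) forecast window.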
Catalytic Converter Market by Vehicle Type & Type (TWC, SCR, DOC, LNC, & LNT), Material (Platinum, Palladium, Rhodium & Others), & by Region (North America, Europe, Asia-Oceania, & ROW) - Global Trends and Forecast to 2019

Exhaust Sensor Market for Automotive by Sensor Type (Exhaust Temperature & Pressure, O2, NOx, Particulate Matter, Engine Coolant Temperature, & MAP/MAF Sensor), Fuel Type (Gasoline & Diesel), Vehicle Type, & by Region - Industry Trends & Forecast to 2020

MarketsandMarkets is the largest market research firm worldwide in terms of annually published premium market research reports. Serving 1,700 global Fortune enterprises with more than 1,200 premium studies in a year, M&M caters to a multitude of clients across 8 different industrial verticals. We specialize in consulting assignments and business research across high-growth markets, cutting-edge technologies and newer applications. Our 850 full-time analysts and SMEs at MarketsandMarkets are tracking global high-growth markets following the "Growth Engagement Model - GEM". The GEM aims at proactive collaboration with clients to identify new opportunities, identify the most important customers, write "Attack, avoid and defend" strategies, and identify sources of incremental revenue for both the company and its competitors. M&M's flagship competitive intelligence and market research platform, "RT", connects over 200,000 markets and entire value chains for a deeper understanding of unmet insights, along with market sizing and forecasts of niche markets. The newly included chapters on Methodology and Benchmarking, presented with high-quality analytical infographics in our reports, give complete visibility into how the numbers have been arrived at and defend their accuracy. We at MarketsandMarkets are inspired to help our clients grow by providing apt business insight with our huge market intelligence repository.
Visit MarketsandMarkets Blog @ http://mnmblog.org/market-research/automotive-transportation Connect with us on LinkedIn @ http://www.linkedin.com/company/marketsandmarkets
News Article | February 15, 2017
At dairies, the reverse osmosis filtration technique is extensively used to remove water from milk for further processing into products such as cheese or milk powder. However, many resources could be saved if it were possible to move this process to the farms, since the amount of water transported would be reduced. In cooperation with the Danish dairy company Arla, PhD student Ida Sørensen and Associate Professor Lars Wiking from the Department of Food Science at Aarhus University have examined how milk quality is affected when concentration of the milk is carried out on-farm. The researchers at Aarhus University have analyzed experiments with both so-called ultrafiltration, which is supposed to be gentler on the milk, and the reverse osmosis technique, which requires a higher pressure on the milk but also retains the lactose, which may be an advantage in, for example, milk powder. Neither the total bacterial count, the FFA levels, nor the protein breakdown was negatively affected by reverse osmosis; the concentrated milk could very well be used for both cheese and milk powder. Analyses also demonstrate that the quality and durability of milk powder made from concentrated milk are the same as for powder made from ordinary milk. In cheese, however, there is a minor difference in how the enzymes react, and in the experiments, concentrated milk coagulated approximately ten minutes later than regular milk.

Significant interest -- but is it worthwhile?

Concentration of milk on the farm, or during transport from farm to dairy, is carried out in many other countries in the world, e.g. in Texas, USA, where both herds and distances are huge. Different models exist for how on-farm milk concentration may become a reality. The farmer may buy the filtration equipment himself and achieve an additional price for the milk. Or the dairy could buy, maintain and service the filtration installation, or it could be acquired through some kind of leasing agreement.
Herd size and distance to the dairy, in particular, are of major importance when considering resources and profitability, as small installations typically use more power than one large installation, says Ida Sørensen, who has just presented the results of the studies at a major conference in Dublin. "New sustainable milk concentration technology for dairy herds" is a five-year project which ends this year. Project participants include Arla Foods amba/Arla Foods Ingredients PS (Arla), Danmarks Kvægforskningscenter (the Danish Cattle Research Center - DKC) and Aarhus University (AU). In addition, GEA Process Engineering (GEA) is affiliated as an external consultant. The project is financially supported by Mælkeafgiftsfonden (Milk Taxation Foundation - MAF) and the Green Development and Demonstration Programme (GUDP).
News Article | February 15, 2017
The discovery cohort consisted of 147 studies comprising 458,927 adult individuals of the following ancestries: (1) European descent (n = 381,625); (2) African (n = 27,494); (3) South Asian (n = 29,591); (4) East Asian (n = 8,767); (5) Hispanic (n = 10,776) and (6) Saudi Arabian (n = 695). All participating institutions and coordinating centres approved this project, and informed consent was obtained from all subjects. Discovery meta-analysis was carried out in each ancestry group (except the Saudi Arabian) separately as well as in the All group. Validation was undertaken in individuals of European ancestry only (Supplementary Tables 1–3). Conditional analyses were undertaken only in the European descent group (106 studies, n = 381,625). The SNPs we identify are available from the NCBI dbSNP database of short genetic variations (https://www.ncbi.nlm.nih.gov/projects/SNP/). No statistical methods were used to predetermine sample size. The experiments were not randomized and the investigators were not blinded to allocation during experiments and outcome assessment. Height (in centimetres) was corrected for age and the genomic principal components (derived from GWAS data, the variants with a MAF > 1% on ExomeChip (http://genome.sph.umich.edu/wiki/Exome_Chip_Design), or ancestry-informative markers available on the ExomeChip), as well as any additional study-specific covariates (for example, recruiting centre), in a linear regression model. For studies with non-related individuals, residuals were calculated separately by sex, whereas for family-based studies sex was included as a covariate in the model. Additionally, residuals for case/control studies were calculated separately. Finally, residuals were subject to inverse normal transformation. The majority of studies followed a standardized protocol and performed genotype calling using the designated manufacturer’s software, which was then followed by zCall30. 
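The residual-transformation step described above (regressing height on covariates, then applying an inverse normal transformation to the residuals) can be sketched as a rank-to-quantile mapping. The Blom offset (0.375) used below is a common convention; the methods do not state which offset was used, so that choice is an assumption.

```python
# Rank-based inverse normal transformation, as applied to the height
# residuals before association testing. The Blom offset (0.375) is a
# conventional choice assumed here, not specified in the methods.
from statistics import NormalDist

def inverse_normal_transform(values, offset=0.375):
    """Map each value's rank onto a standard normal quantile."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    z = [0.0] * n
    for rank, i in enumerate(order, start=1):
        z[i] = NormalDist().inv_cdf((rank - offset) / (n - 2 * offset + 1))
    return z

residuals = [2.3, -1.1, 0.4, 5.0, -3.2]  # toy residuals, not study data
z = inverse_normal_transform(residuals)
# Ordering is preserved: the largest residual maps to the largest z-score,
# and for odd n the median residual maps exactly to z = 0.
assert z.index(max(z)) == residuals.index(max(residuals))
```

The transformed phenotype is standard-normal by construction, which is what makes the downstream score statistics well calibrated across cohorts.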
For ten studies participating in the Cohorts for Heart and Aging Research in Genomic Epidemiology (CHARGE) Consortium, the raw intensity data for the samples from seven genotyping centres were assembled into a single project for joint calling11. Study-specific quality-control measures of the genotyped variants were implemented before association analysis (Supplementary Tables 1–2). Individual cohorts were analysed separately for each ancestry population, with either RAREMETALWORKER (http://genome.sph.umich.edu/wiki/RAREMETALWORKER) or RVTEST (http://zhanxw.github.io/rvtests/), to associate inverse normal transformed height data with genotype data, taking potential cryptic relatedness (kinship matrix) into account in a linear mixed model. These software packages are designed to perform score-statistics based rare-variant association analysis, can accommodate both unrelated and related individuals, and provide single-variant results and variance-covariance matrices. The covariance matrix captures linkage disequilibrium relationships between markers within 1 Mb, which is used for gene-level meta-analyses and conditional analyses31. Single-variant analyses were performed for both additive and recessive models (for the alternate allele). The individual study data were investigated for potential ancestry population outliers based on the 1000 Genomes Project phase 1 ancestry reference populations. A centralized quality-control procedure implemented in EasyQC32 was applied to individual study association summary statistics to identify outlying studies: (1) assessment of possible problems in height transformation; (2) comparison of allele frequency alignment against 1000 Genomes Project phase 1 reference data to pinpoint any potential strand issues; and (3) examination of quantile–quantile plots per study to identify any problems arising from population stratification, cryptic relatedness and genotype biases.
We excluded variants if they had a call rate <95%, Hardy–Weinberg equilibrium P < 1 × 10−7, or large allele frequency deviations from reference populations (>0.6 for all ancestry analyses and >0.3 for ancestry-specific population analyses). We also excluded from downstream analyses markers not present on the Illumina ExomeChip array 1.0, variants on the Y chromosome or the mitochondrial genome, indels, multiallelic variants, and problematic variants based on the Blat-based sequence alignment analyses. Meta-analyses were carried out in parallel by two different analysts at two sites. We conducted single-variant meta-analyses in a discovery sample of 458,927 individuals of different ancestries using both additive and recessive genetic models (Extended Data Fig. 1 and Supplementary Tables 1–4). Significance for single-variant analyses was defined at an array-wide level (P < 2 × 10−7, Bonferroni correction for 250,000 variants). The combined additive analyses identified 1,455 unique variants that reached array-wide significance (P < 2 × 10−7), including 578 non-synonymous and splice-site variants (Supplementary Tables 5–7). Under the additive model, we observed a high genomic inflation of the test statistics (for example, a λ of 2.7 in European ancestry studies for common markers, Extended Data Fig. 2 and Supplementary Table 8), although validation results (see below) and additional sensitivity analyses (see below) suggested that it is consistent with polygenic inheritance as opposed to population stratification, cryptic relatedness, or technical artefacts (Extended Data Fig. 2). The majority of these 1,455 association signals (1,241; 85.3%) were found in the European ancestry meta-analysis (85.5% of the discovery sample size) (Extended Data Fig. 2). 
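The variant-level exclusion rules stated above can be expressed as a simple per-variant filter. The record layout (dictionary keys) is illustrative; the thresholds follow the values given in the text: call rate < 95%, Hardy–Weinberg equilibrium P < 1 × 10⁻⁷, and allele-frequency deviation from the reference above 0.6 (all-ancestry) or 0.3 (ancestry-specific).

```python
# Per-variant QC filter sketching the exclusion rules from the methods.
# Thresholds are taken from the text; the field names are assumptions.

def passes_variant_qc(variant, ancestry_specific=False):
    """Keep a variant only if it clears all three stated QC thresholds."""
    max_freq_dev = 0.3 if ancestry_specific else 0.6
    return (variant["call_rate"] >= 0.95
            and variant["hwe_p"] >= 1e-7
            and abs(variant["alt_freq"] - variant["ref_freq"]) <= max_freq_dev)

v = {"call_rate": 0.99, "hwe_p": 0.2, "alt_freq": 0.45, "ref_freq": 0.40}
assert passes_variant_qc(v)
# The ancestry-specific threshold is stricter on frequency deviation only;
# this variant's deviation (0.05) clears both.
assert passes_variant_qc(v, ancestry_specific=True)
```

Variants failing any rule, along with Y-chromosome, mitochondrial, indel, and multiallelic markers, would be dropped before meta-analysis.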
Nevertheless, we discovered eight associations within five loci in our all-ancestry analyses that are driven by African studies (including one missense variant in the growth hormone gene GH1 (rs151263636), Extended Data Fig. 3), three height variants found only in African studies, and one rare missense marker associated with height in South Asians only (Supplementary Table 7). We observed a marked genomic inflation of the test statistics even after adequate control for population stratification (linear mixed model), arising mainly from common markers; λ in European ancestry was 1.2 and 2.7 for all and common markers, respectively (Extended Data Fig. 2 and Supplementary Table 8). Such inflation is expected for a highly polygenic trait like height, and is consistent with our very large sample size3, 33. To confirm this, we applied the recently developed linkage disequilibrium score regression method to our height ExomeChip results34, with the caveat that the method was developed (and tested) with >200,000 common markers available. We restricted our analyses to 15,848 common variants (MAF ≥ 5%) from the European-ancestry meta-analysis, and matched them to pre-computed linkage disequilibrium scores for the European reference dataset34. The intercept of the regression of the χ2 statistics from the height meta-analysis on the linkage disequilibrium scores estimates the share of the inflation in the mean χ2 that is due to confounding bias, such as cryptic relatedness or population stratification. The intercept was 1.4 (s.e.m. = 0.07), which is small when compared to the λ of 2.7. Furthermore, we also confirmed that the linkage disequilibrium score regression intercept is biased upward because of the small number of variants on the ExomeChip and the selection criteria for these variants (that is, known GWAS hits). The ratio statistic of (intercept − 1)/(mean χ2 − 1) is 0.067 (s.e.m.
= 0.012), well within the normal range34, suggesting that most of the inflation (~93%) observed in the height association statistics is due to polygenic effects (Extended Data Fig. 2). Furthermore, to exclude the possibility that some of the observed associations between height and rare and low-frequency variants could be due to allele calling problems in the smaller studies, we performed a sensitivity meta-analysis with primarily European ancestry studies totalling >5,000 participants. We found very concordant effect sizes, suggesting that smaller studies do not bias our results (Extended Data Fig. 2). The RAREMETAL R package35 and the GCTA v1.24 (ref. 36) software were used to identify independent height association signals across the European descent meta-analysis results. RAREMETAL performs conditional analyses by using covariance matrices in order to distinguish true signals from those driven by linkage disequilibrium at adjacent known variants. First, we identified the lead variants (P < 2 × 10−7) based on a 1-Mb window centred on the most significantly associated variant and performed linkage disequilibrium pruning (r2 < 0.3) to avoid downstream problems in the conditional analyses due to co-linearity. We then conditioned on the linkage disequilibrium-pruned set of lead variants in RAREMETAL and kept new lead signals at P < 2 × 10−7. The process was repeated until no additional signal emerged below the pre-specified P-value threshold. The use of a 1-Mb window in RAREMETAL can obscure dependence between conditional signals in adjacent intervals in regions of extended linkage disequilibrium. To detect such instances, we performed joint analyses using GCTA with the ARIC and UK ExomeChip reference panels, both of which comprise >10,000 individuals of European descent. 
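The stepwise conditioning procedure described above can be sketched as a greedy forward-selection loop; the association engine (RAREMETAL in the study, working from covariance matrices) is abstracted here as a toy callable, and the variant names and P values are invented:

```python
# Array-wide significance threshold used throughout the section.
P_THRESHOLD = 2e-7

def select_independent_signals(variants, conditional_pvalues, max_rounds=100):
    """Greedy forward selection: repeatedly add the most significant variant
    conditional on those already selected, until none pass the threshold."""
    selected = []
    for _ in range(max_rounds):
        pvals = conditional_pvalues(variants, selected)
        candidates = {v: p for v, p in pvals.items()
                      if v not in selected and p < P_THRESHOLD}
        if not candidates:
            break
        selected.append(min(candidates, key=candidates.get))
    return selected

# Toy engine: variant 'b' is only an LD shadow of 'a' (its P value collapses
# once 'a' is conditioned on), while 'c' is independently associated.
def toy_engine(variants, selected):
    out = {'a': 1e-12, 'b': 1e-9, 'c': 1e-8}
    if 'a' in selected:
        out['b'] = 0.4
    for v in selected:
        out[v] = 1.0
    return out

print(select_independent_signals(['a', 'b', 'c'], toy_engine))
```

In this toy run only 'a' and 'c' survive, mirroring how the real procedure distinguishes true secondary signals from linkage-disequilibrium shadows of adjacent known variants.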
With the exception of a handful of variants in a few genomic regions with extended linkage disequilibrium (for example, the HLA region on chromosome 6), the two pieces of software identified the same independent signals (at P < 2 × 10−7). To discover new height variants, we conditioned the height variants found in our ExomeChip study on the previously published GWAS height variants3 using the first release of the UK Biobank imputed dataset and regression methodology implemented in BOLT-LMM37. Because of the difference between the sample size of our discovery set (n = 458,927) and the UK Biobank (first release, n = 120,084), we applied a threshold of P < 0.05 to declare a height variant as independent in this analysis. We also explored an alternative approach based on approximate conditional analysis36. This latter method (SSimp) relies on summary statistics available from the same cohort, thus we first imputed summary statistics38 for exome variants, using summary statistics from a previous study3. Conversely, we imputed the top variants from this study3 using the summary statistics from the ExomeChip. Subsequently, we calculated effect sizes for each exome variant conditioned on the top variants of this study3 in two ways. First, we conditioned the imputed summary statistics of the exome variant on the summary statistics of the top variants that fell within 5 Mb of the target ExomeChip variant. Second, we conditioned the summary statistics of the ExomeChip variant on the imputed summary statistics of the hits of this study3. We then selected the option that yielded a higher imputation quality. For poorly tagged variants (imputation quality < 0.8), we simply used up-sampled HapMap summary statistics for the approximate conditional analysis. Pairwise SNP-by-SNP correlations were estimated from the UK10K data (TwinsUK39 and ALSPAC40 studies, n = 3,781). 
Several studies, totalling 252,501 independent individuals of European ancestry, became available after the completion of the discovery analyses, and were thus used for validation of our experiment. We validated the single-variant association results in eight studies, totalling 59,804 participants, genotyped on the ExomeChip using RAREMETAL31. We sought additional evidence for association for the top signals in two independent studies in the UK (UK Biobank) and Iceland (deCODE), comprising 120,084 and 72,613 individuals, respectively. We used the same quality control and analytical methodology as described above. Genotyping and study descriptions are provided in Supplementary Tables 1–3. For the combined analysis, we used the inverse-variance-weighted fixed-effects meta-analysis method implemented in METAL41. Significant associations were defined as those with a combined meta-analysis (discovery and validation) P < 2 × 10−7. We considered 81 variants with suggestive association in the discovery analyses (2 × 10−7 < P ≤ 2 × 10−6). Of those 81 variants, 55 reached significance after combining discovery and replication results based on a P < 2 × 10−7 (Supplementary Table 9). Furthermore, recessive modelling confirmed seven new independent markers with a P < 2 × 10−7 (Supplementary Table 10). One of these recessive signals is due to a rare X-linked variant in the AR gene (rs137852591, MAF = 0.21%). Because of its low frequency, we could only test hemizygous men (we did not identify women homozygous for the minor allele), so we cannot distinguish between a true recessive mode of inheritance and a sex-specific effect for this variant. To test the independence of, and integrate, all height markers from the discovery and validation phases, we used conditional analyses and GCTA ‘joint’ modelling36 in the combined discovery and validation set. This resulted in the identification of 606 independent height variants, including 252 non-synonymous or splice-site variants (Supplementary Table 11). 
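The combined analysis uses standard inverse-variance weighting, as implemented in METAL; a self-contained sketch of the fixed-effects calculation with illustrative effect sizes:

```python
import math

def ivw_meta(betas, ses):
    """Inverse-variance-weighted fixed-effects meta-analysis: each study
    estimate is weighted by 1/SE^2, and the pooled SE is the square root of
    the reciprocal of the summed weights."""
    weights = [1.0 / (se * se) for se in ses]
    beta = sum(w * b for w, b in zip(weights, betas)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return beta, se, beta / se

# Two studies with identical effects and SEs: the pooled estimate is
# unchanged but the pooled SE shrinks by sqrt(2).
beta, se, z = ivw_meta([0.1, 0.1], [0.02, 0.02])
print(round(beta, 3), round(se, 4))
```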
Considering only the initial set of lead SNPs with P < 2 × 10−7, we identified 561 independent variants. Of these 561 variants (selected without the validation studies), 560 have a concordant direction of effect between the discovery and validation studies, and 548 variants have a P < 0.05 (466 variants with P < 8.9 × 10−5, Bonferroni correction for 561 tests), suggesting a very low false discovery rate (Supplementary Table 11). For the gene-based analyses, we applied two different sets of criteria to select variants, based on coding variant annotation from five prediction algorithms (PolyPhen2 HumDiv and HumVar, LRT, MutationTaster and SIFT)42. The mask labelled ‘broad’ included variants with a MAF < 0.05 that are nonsense, stop-loss or splice-site variants, as well as missense variants that are annotated as damaging by at least one of the algorithms mentioned above. The mask labelled ‘strict’ included only variants with a MAF < 0.05 that are nonsense, stop-loss or splice-site variants, as well as missense variants annotated as damaging by all five algorithms. For gene-based testing, we used the SKAT43 and VT44 tests. Statistical significance for gene-based tests was set at a Bonferroni-corrected threshold of P < 5 × 10−7 (threshold for 25,000 genes and four tests). The gene-based discovery results were validated (same test and variants, when possible) in the same eight studies genotyped on the ExomeChip (n = 59,804 participants) that were used for the validation of the single-variant results (see above, and Supplementary Tables 1–3). Gene-based conditional analyses were performed in RAREMETAL. We accessed ExomeChip data from the GIANT (BMI, waist:hip ratio), GLGC (total cholesterol, triglycerides, HDL-cholesterol, LDL-cholesterol), IBPC (systolic and diastolic blood pressure), MAGIC (glycaemic traits), REPROGEN (age at menarche and menopause), and DIAGRAM (type 2 diabetes) consortia. 
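The two masks amount to a simple predicate over MAF, consequence class and the number of damaging predictions; a sketch with simplified consequence labels (the field names and category spellings are illustrative, not taken from the study's pipeline):

```python
# Loss-of-function categories included by both masks (simplified labels).
LOF_CLASSES = {'nonsense', 'stop_loss', 'splice_site'}

def in_mask(variant, mask):
    """variant: dict with 'maf', 'consequence' and 'damaging_votes', the
    number of algorithms (of PolyPhen2 HumDiv/HumVar, LRT, MutationTaster,
    SIFT) calling a missense variant damaging. mask: 'broad' or 'strict'."""
    if variant['maf'] >= 0.05:
        return False
    if variant['consequence'] in LOF_CLASSES:
        return True
    if variant['consequence'] == 'missense':
        needed = 5 if mask == 'strict' else 1  # all five vs at least one
        return variant['damaging_votes'] >= needed
    return False

v = {'maf': 0.01, 'consequence': 'missense', 'damaging_votes': 2}
print(in_mask(v, 'broad'), in_mask(v, 'strict'))
```

A missense variant flagged by two of the five algorithms therefore enters the broad mask but not the strict one.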
For coronary artery disease, we accessed 1000 Genomes Project-imputed GWAS data released by CARDIoGRAMplusC4D45. DEPICT (http://www.broadinstitute.org/mpg/depict/) is a computational framework that uses probabilistically defined reconstituted gene sets to perform gene set enrichment and gene prioritization15. For a description of gene set reconstitution, refer to refs 15, 46. In brief, reconstitution was performed by extending pre-defined gene sets (such as Gene Ontology terms, canonical pathways, protein–protein interaction subnetworks and rodent phenotypes) with genes co-regulated with genes in these pre-defined gene sets, using large-scale microarray-based transcriptomics data. In order to adapt the gene set enrichment part of DEPICT for ExomeChip data (https://github.com/RebeccaFine/height-ec-depict), we made two principal changes. First, because DEPICT for GWAS incorporates all genes within a given linkage disequilibrium block around each index SNP, we modified DEPICT to take as input only the gene directly impacted by the coding SNP. Second, we adapted the way DEPICT adjusts for confounders (such as gene length) by generating null ExomeChip association results using Swedish ExomeChip data (Malmö Diet and Cancer (MDC), All New Diabetics in Scania (ANDIS), and Scania Diabetes Registry (SDR) cohorts, n = 11,899) and randomly assigning phenotypes from a normal distribution before conducting association analysis (see Supplementary Information). For the gene set enrichment analysis of the ExomeChip data, we used significant non-synonymous variants statistically independent of known GWAS hits (and that were present in the null ExomeChip data; see Supplementary Information for details). For gene set enrichment analysis of the GWAS data, we used all loci with a non-coding index SNP that did not contain any of the novel ExomeChip genes. 
In visualizing the analysis, we used affinity propagation clustering47 to group the most similar reconstituted gene sets based on their gene memberships (see Supplementary Information). Within a ‘meta-gene set’, the best P value of any member gene set was used as representative for comparison. DEPICT for ExomeChip was written in the Python programming language and the code can be found at https://github.com/RebeccaFine/height-ec-depict. We also applied the PASCAL (http://www2.unil.ch/cbg/index.php?title=Pascal) pathway analysis tool16 to association summary statistics for all coding variants. In brief, the method derives gene-based scores (both SUM and MAX statistics) and subsequently tests for the over-representation of high gene scores in predefined biological pathways. We used standard pathway libraries from KEGG, REACTOME and BIOCARTA, and also added dichotomized (Z score > 3) reconstituted gene sets from DEPICT15. To accurately estimate SNP-by-SNP correlations even for rare variants, we used the UK10K data (TwinsUK39 and ALSPAC40 studies, n = 3,781). To separate the contribution of regulatory variants from that of coding variants, we also applied PASCAL to association summary statistics of only regulatory variants (20 kb upstream, gene body excluded) from a previous study3. In this way, we could classify pathways as driven principally by coding, regulatory or mixed signals. For the generation of STC2 mutants (R44L and M86I), wild-type STC2 cDNA contained in pcDNA3.1/Myc-His(−) (Invitrogen)23 was used as a template. Mutagenesis was carried out using QuikChange (Stratagene), and all constructs were verified by sequence analysis. Recombinant wild-type STC2 and variants were expressed in human embryonic kidney (HEK) 293T cells (293tsA1609neo, ATCC CRL-3216) maintained in high-glucose DMEM supplemented with 10% fetal bovine serum, 2 mM glutamine, non-essential amino acids, and gentamicin. The cells were routinely tested for mycoplasma contamination. 
Cells (6 × 106) were plated onto 10-cm dishes and transfected 18 h later by calcium phosphate co-precipitation using 10 μg plasmid DNA. Medium was collected 48 h after transfection, cleared by centrifugation, and stored at −20 °C until use. Protein concentrations (58–66 nM) were determined by TRIFMA using antibodies described previously23. PAPP-A was expressed stably in HEK293T cells as previously reported48. Expressed levels of PAPP-A (27.5 nM) were determined by a commercial ELISA (AL-101, Ansh Labs). Culture supernatants containing wild-type STC2 or variants were adjusted to 58 nM, mixed with an equal volume of culture supernatant containing PAPP-A, corresponding to a 2.1-fold molar excess of STC2 over PAPP-A, and incubated at 37 °C. Samples were taken at 1, 2, 4, 6, 8, 16, and 24 h and stored at −20 °C. Specific proteolytic cleavage of 125I-labeled IGFBP-4 is described in detail elsewhere49. In brief, the PAPP-A–STC2 complex mixtures were diluted (1:190) to a concentration of 72.5 pM PAPP-A and mixed with pre-incubated 125I-IGFBP-4 (10 nM) and IGF-1 (100 nM) in 50 mM Tris-HCl, 100 mM NaCl, 1 mM CaCl2. Following 1 h incubation at 37 °C, reactions were terminated by the addition of SDS–PAGE sample buffer supplemented with 25 mM EDTA. Substrate and co-migrating cleavage products were separated by 12% non-reducing SDS–PAGE and visualized by autoradiography using a storage phosphor screen (GE Healthcare) and a Typhoon imaging system (GE Healthcare). Band intensities were quantified using ImageQuant TL 8.1 software (GE Healthcare). STC2 and covalent complexes between STC2 and PAPP-A were blotted onto PVDF membranes (Millipore) following separation by 3–8% SDS–PAGE. The membranes were blocked with 2% Tween-20, and equilibrated in 50 mM Tris-HCl, 500 mM NaCl, 0.1% Tween-20; pH 9 (TST). For STC2, the membranes were incubated with goat polyclonal anti-STC2 (R&D systems, AF2830) at 0.5 μg ml−1 in TST supplemented with 2% skimmed milk for 1 h at 20 °C. 
For PAPP-A–STC2 complexes, the membranes were incubated with rabbit polyclonal anti-PAPP-A50 at 0.63 μg ml−1 in TST supplemented with 2% skimmed milk for 16 h at 20 °C. Membranes were washed with TST and subsequently incubated with polyclonal rabbit anti-goat IgG–horseradish peroxidase (DAKO, P0449) or polyclonal swine anti-rabbit IgG–horseradish peroxidase (DAKO, P0217), respectively, diluted 1:2,000 in TST supplemented with 2% skimmed milk for 1 h at 20 °C. Following washing with TST, membranes were developed using enhanced chemiluminescence (ECL Prime, GE Healthcare). Images were captured using an ImageQuant LAS 4000 instrument (GE Healthcare). Summary genetic association results are available on the GIANT website (http://portals.broadinstitute.org/collaboration/giant/index.php/GIANT_consortium).
No statistical methods were used to predetermine sample size. The experiments were not randomized and the investigators were not blinded to allocation during experiments and outcome assessment. At 24 clinical genetics centres within the United Kingdom National Health Service and the Republic of Ireland, 4,293 patients with severe, undiagnosed developmental disorders (DDs) and their parents (4,125 families) were recruited and systematically phenotyped. The study has UK Research Ethics Committee approval (10/H0305/83, granted by the Cambridge South Research Ethics Committee, and GEN/284/12, granted by the Republic of Ireland Research Ethics Committee). Families gave informed consent for participation. Clinical data (growth measurements, family history, developmental milestones, and so on) were collected using a standard restricted-term questionnaire within DECIPHER39, and detailed developmental phenotypes for the individuals were entered using HPO terms40. Saliva samples for the whole family and blood-extracted DNA samples for the probands were collected, processed and quality controlled as previously described15. Genomic DNA (approximately 1 μg) was fragmented to an average size of 150 base pairs (bp) and a DNA library was created using established Illumina paired-end protocols. Adaptor-ligated libraries were amplified and indexed using polymerase chain reaction (PCR). A portion of each library was used to create an equimolar pool comprising eight indexed libraries. Each pool was hybridized to SureSelect RNA baits (Agilent Human All-Exon V3 Plus with custom ELID C0338371 and Agilent Human All-Exon V5 Plus with custom ELID C0338371) and sequence targets were captured and amplified in accordance with the manufacturer’s recommendations. Enriched libraries were analysed by 75-base paired-end sequencing (Illumina HiSeq) following the manufacturer’s instructions. 
Mapping of short-read sequences for each sequencing lanelet was carried out using the Burrows-Wheeler aligner (BWA; version 0.59)41 backtrack algorithm with the GRCh37 1000 Genomes Project phase 2 reference (also known as hs37d5). Sample-level BAM improvement was carried out using the Genome Analysis Toolkit (GATK; version 3.1.1)42 and SAMtools (version 0.1.19)43. This consisted of a realignment of reads around known and discovered indels (insertions and deletions) followed by base quality score recalibration (BQSR), with both steps performed using GATK. Lastly, SAMtools calmd was applied and indexes were created. Known indels for realignment were taken from the Mills Devine and 1000 Genomes Project Gold set and the 1000 Genomes Project phase low-coverage set, both part of the GATK resource bundle (version 2.2). Known variants for BQSR were taken from dbSNP 137, also part of the GATK resource bundle. Finally, single-nucleotide variants (SNVs) and indels were called using the GATK HaplotypeCaller (version 3.2.2); this was run in multisample calling mode using the complete dataset. GATK Variant Quality Score Recalibration (VQSR) was then computed on the whole dataset and applied to the individual-sample variant calling format (VCF) files. DeNovoGear (version 0.54)44 was used to detect SNV, insertion and deletion DNMs from child and parental exome data (BAM files). Variants in the VCF were annotated with minor allele frequency (MAF) data from a variety of different sources. The MAF annotations used included data from four different populations of the 1000 Genomes Project45 (American, Asian, African and European), the UK10K cohort, the NHLBI GO Exome Sequencing Project (ESP), the Non-Finnish European (NFE) subset of the Exome Aggregation Consortium (ExAC) and an internal allele frequency generated using unaffected parents from the cohort. Variants in the VCF were annotated with Ensembl Variant Effect Predictor (VEP)46 based on Ensembl gene build 76. 
The transcript with the most severe consequence was selected and all associated VEP annotations were based on the predicted effect of the variant on that particular transcript; where multiple transcripts shared the same most severe consequence, the canonical or longest was selected. We included an additional consequence for variants at the last base of an exon before an intron, where the final base is a guanine, since these variants appear to be as damaging as a splice-donor variant28. We categorized variants into three classes by VEP consequence: (1) protein-truncating variants (PTV): splice donor, splice acceptor, stop gained, frameshift, initiator codon and conserved exon terminus variant; (2) missense variants: missense, stop lost, inframe deletion, inframe insertion, coding sequence and protein altering variant; (3) silent variants: synonymous. We filtered candidate DNM calls to reduce the false-positive rate while maximizing sensitivity, on the basis of previous results from experimental validation by capillary sequencing of candidate DNMs15. Candidate DNMs were excluded if not called by GATK in the child, or called in either parent, or if they had a maximum MAF greater than 0.01. Candidate DNMs were excluded when the forward and reverse coverage differed between reference and alternative alleles, defined as P < 10−3 using a Fisher’s exact test of coverage from orientation by allele summed across the child and parents. 
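The three-way classification above can be expressed as a lookup over VEP consequence terms; the term spellings below follow common VEP conventions but are used here purely for illustration:

```python
# VEP consequence terms grouped into the three classes defined in the text.
PTV = {'splice_donor_variant', 'splice_acceptor_variant', 'stop_gained',
       'frameshift_variant', 'initiator_codon_variant',
       'conserved_exon_terminus_variant'}
MISSENSE = {'missense_variant', 'stop_lost', 'inframe_deletion',
            'inframe_insertion', 'coding_sequence_variant',
            'protein_altering_variant'}
SILENT = {'synonymous_variant'}

def classify(consequence):
    """Map a single VEP consequence term to PTV / missense / silent."""
    if consequence in PTV:
        return 'PTV'
    if consequence in MISSENSE:
        return 'missense'
    if consequence in SILENT:
        return 'silent'
    return 'other'

print(classify('stop_gained'), classify('synonymous_variant'))
```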
Candidate DNMs were also excluded if they met two of the following three criteria: (1) an excess of parental alternative alleles within the cohort at the position of the DNM, defined as P < 10−3 under a one-sided binomial test given an expected error rate of 0.002 and the cumulative parental depth; (2) an excess of alternative alleles within the cohort in DNMs in a gene, defined as P < 10−3 under a one-sided binomial test given an expected error rate of 0.002 and the cumulative depth; or (3) both parents had one or more reads supporting the alternative allele. If, after filtering, more than one variant was observed in a given gene for a particular trio, only the variant with the highest predicted functional impact was kept (protein truncating > missense > silent). For candidate DNMs of interest, primers were designed to amplify 150–250-bp products centred on the site of interest. Default Primer3 design settings were used with the following adjustments: GC clamp = 1, human mispriming library used. Site-specific primers were tailed with Illumina adaptor sequences. PCR products were generated with JumpStart AccuTaq LA DNA polymerase (Sigma Aldrich), using 40 ng genomic DNA as template. Amplicons were tagged with Illumina PCR primers along with unique barcodes enabling multiplexing of 96 samples. Barcodes were incorporated using Kapa HiFi mastermix (Kapa Biosystems). Samples were pooled and sequenced on one lane of the Illumina MiSeq, using 250 bp paired-end reads. An in-house analysis pipeline extracted the read count per site and classified inheritance status per variant using a maximum likelihood approach (see Supplementary Note). We previously screened 1,133 individuals for variants that contribute to their disorder15, 18. All candidate variants in the 1,133 individuals were reviewed by consultant clinical geneticists for relevance to the individuals’ phenotypes. 
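Criteria (1) and (2) are one-sided binomial tests against the fixed 0.002 error rate; a stdlib-only sketch (the read counts are invented for illustration):

```python
from math import comb

def binom_sf(x, n, p):
    """One-sided binomial P value: probability of observing >= x
    alternative reads in n reads, given per-read error rate p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(x, n + 1))

def excess_alternative_alleles(alt_reads, total_depth,
                               error_rate=0.002, alpha=1e-3):
    """True when alternative reads exceed what sequencing error alone would
    explain, i.e. the filtering criterion P < 10^-3 is met."""
    return binom_sf(alt_reads, total_depth, error_rate) < alpha

# 10 alternative reads in 1,000 cumulative parental reads is far beyond the
# ~2 reads expected from a 0.002 error rate, so the site would be flagged;
# 3 such reads would not be.
print(excess_alternative_alleles(10, 1000),
      excess_alternative_alleles(3, 1000))
```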
Most diagnosable pathogenic variants occurred de novo in dominant genes, but a small proportion also occurred in recessive genes or under other inheritance modes. DNMs within dominant DD-associated genes were very likely to be classified as the pathogenic variant for the individual’s disorder. Owing to the time required to review individuals and their candidate variants, we did not conduct a similar review in the remainder of the 4,293 individuals. Instead, we defined probably pathogenic variants as candidate DNMs found in autosomal and X-linked dominant DD-associated genes, or candidate DNMs found in hemizygous DD-associated genes in males. Of the 4,293 individuals in the cohort, 1,136 had variants either previously classified as pathogenic15, 18, or a probably pathogenic DNM. Gene-specific germline mutation rates for different functional classes were computed15, 23 for the longest transcript in the union of transcripts overlapping the observed DNMs in that gene. We evaluated the gene-specific enrichment of PTV and missense DNMs by computing its statistical significance under a null hypothesis of the expected number of DNMs given the gene-specific mutation rate and the number of considered chromosomes23. We also assessed clustering of missense DNMs within genes15, as expected for DNMs acting through activating or dominant-negative mechanisms. We did this by comparing the observed clustering with simulated dispersions of the same number of DNMs within the gene. The probability of simulating a DNM at a specific codon was weighted by the trinucleotide sequence context15, 23. This allowed us to estimate the probability of the observed degree of clustering given the null model of random mutations. Fisher’s method was used to combine the significance testing of missense + PTV DNM enrichment and missense DNM clustering. 
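The enrichment test reduces to a Poisson tail probability, with the expectation set by the gene-specific mutation rate and the number of transmitting chromosomes; a stdlib sketch in which the per-chromosome mutation rate is invented for illustration:

```python
from math import exp, factorial

def poisson_sf(observed, expected):
    """P(X >= observed) under Poisson(expected): the enrichment P value for
    seeing at least the observed number of DNMs in a gene."""
    cdf = sum(exp(-expected) * expected**k / factorial(k)
              for k in range(observed))
    return 1.0 - cdf

# Hypothetical gene: per-chromosome PTV mutation rate mu, 4,293 trios, so
# the expected count is 2 * N * mu. Six observed PTV DNMs against ~0.086
# expected is far beyond the genome-wide threshold of 7e-7 quoted below.
mu, n_trios = 1e-5, 4293
expected = 2 * n_trios * mu
p = poisson_sf(6, expected)
print(p < 7e-7)
```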
We defined a gene as significantly enriched for DNMs if the PTV-enrichment P value or the combined missense P value was less than 7 × 10−7, which represents a Bonferroni-corrected P value of 0.05 adjusted for 4 × 18,500 tests (two consequence classes × two tests × 18,500 protein-coding genes). Families were given the option to have photographs of the affected individual(s) uploaded within DECIPHER39. Using images of individuals with DNMs in the same gene, we generated de-identified realistic average faces (composite faces). Faces were detected using a discriminately trained, deformable-part-model detector47. The annotation algorithm identified a set of 36 landmarks per detected face48 and was trained on a manually annotated dataset of 3,100 images24. The average face mesh was created by the Delaunay triangulation of the average constellation of facial landmarks for all patients with a shared genetic disorder. The averaging algorithm is sensitive to left–right facial asymmetries across multiple patients. To control for this, we use a template constellation of landmarks based on the average constellations of 2,000 healthy individuals24. For each patient, we align the constellation of landmarks to the template with respect to the points along the middle of the face and compute the Euclidean distances between each landmark and its corresponding pair on the template. The faces are mirrored such that the half of the face with the greater difference is always on the same side. The dataset used for this work may contain multiple photos for one patient. To avoid biasing the average face mesh towards these individuals, we computed an average face for each patient and use these personal averages to compute the final average face. Finally, to avoid any image in the composite dominating owing to variance in illumination between images, we normalized the intensities of pixel values within the face to an average value across all faces in each average. 
The composite faces were assessed visually to confirm successful ablation of any individually identifiable features. Visual assessment of the composite photograph by two experienced clinical geneticists, alongside the individual patient photos, was performed for all 93 genome-wide significant DD-associated genes for which clinical photos were available for more than one patient, both to remove potentially identifiable composite faces and to provide quality control of the automated composite-face generation process. Eighty-one composite faces were excluded, leaving the twelve de-identified composite faces that are shown in Fig. 2 and Extended Data Fig. 3. Each of the twelve composite faces that passed de-identification and quality control was generated from photos of ten or more patients. We previously described a method to assess phenotypic similarity by HPO terms among groups of individuals sharing genetic defects in the same gene28. We examined whether incorporating this statistical test improved our ability to identify dominant genes at genome-wide significance. Per gene, we tested the phenotypic similarity of individuals with DNMs in the gene. We combined the phenotypic-similarity P value with the genotypic P value per gene (the minimum P value from the DDD-only and meta-analysis) using Fisher’s method. We examined the distribution of differences in P value between tests without the phenotypic-similarity P value and tests that incorporated the phenotypic-similarity P value. Many individuals in the DDD cohort (854; 20%) experience seizures. We investigated whether testing within the subset of individuals with seizures improved our ability to find associations for seizure-specific genes. A list of 102 seizure-associated genes was curated from three sources: a gene panel for Ohtahara syndrome, a currently used clinical gene panel for epilepsy and a panel derived from DD-associated genes18. 
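Fisher's method, used here to combine the genotypic and phenotypic-similarity P values, has a closed-form survival function for even degrees of freedom, so it needs no statistics library; a minimal sketch:

```python
from math import log, exp

def fishers_method(pvalues):
    """Combine independent P values via Fisher's method: -2 * sum(ln p)
    follows a chi-square with 2k degrees of freedom. For even d.o.f. the
    survival function is exp(-x/2) * sum_{i<k} (x/2)^i / i!."""
    x = -2.0 * sum(log(p) for p in pvalues)
    k = len(pvalues)  # d.o.f. = 2k, so the series has k terms
    term, total = 1.0, 1.0
    for i in range(1, k):
        term *= (x / 2.0) / i
        total += term
    return exp(-x / 2.0) * total

# Two moderately significant tests reinforce each other: combining
# p = 0.01 with p = 0.01 gives roughly p = 0.001.
print(fishers_method([0.01, 0.01]) < 0.01)
```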
The P values from the seizure subset were compared to P values from the complete cohort. We compared the expected power of exome sequencing versus genome sequencing to identify disease genes. Within the DDD cohort, 55 dominant DD-associated genes achieve genome-wide significance when testing for enrichment of DNMs within genes. We did not incorporate missense DNM clustering owing to the large computational requirements for assessing clustering in many replicates. We assumed a cost of US$1,000 per individual for genome sequencing. We allowed the cost of exome sequencing to vary relative to genome sequencing, from 10% to 100%. We calculated the number of trios that could be sequenced under these scenarios. Estimates of the improved power of genome sequencing to detect DNMs in the coding sequence are around 1.05-fold29, and we increased the number of trios by 1.0–1.2-fold to allow for this. We sampled as many individuals from our cohort as the number of trios and counted which of the 55 DD-associated genes still achieved genome-wide significance for DNM enrichment. We ran 1,000 simulations of each condition and obtained the mean number of genome-wide significant genes for each condition. We tested whether phenotypes were associated with the likelihood of having a probably pathogenic DNM. We analysed all collected phenotypes which could be coded in either a binary or quantitative format. Categorical phenotypes (for example, sex coded as male or female) were tested using a Fisher’s exact test, whereas quantitative phenotypes (for example, duration of gestation coded in weeks) were tested using a logistic regression, using sex as a covariate. We investigated whether having autozygous regions affected the likelihood of having a diagnostic DNM. Autozygous regions were determined from genotypes in every individual, to obtain the total length per individual. 
We fitted a logistic regression for the total length of autozygous regions to whether individuals had a probably pathogenic DNM. To illustrate the relationship between length of autozygosity and the occurrence of a probably pathogenic DNM, we grouped the individuals by length and plotted the proportion of individuals in each group with a DNM against the median length of the group. The effects of parental age on the number of DNMs were assessed using 8,409 high-confidence (posterior probability of DNM > 0.5) unphased coding and noncoding DNMs in 4,293 individuals. A Poisson multiple regression was fitted to the number of DNMs in each individual with both maternal and paternal age at child birth as covariates. The model was fit with the identity link and allowed for overdispersion. This model used exome-based DNMs, and the analysis was scaled to the whole genome by multiplying the coefficients by a factor of 50, based on approximately 2% of the genome being well covered by our data (exons + introns). We identified the threshold for posterior probability of DNM at which the number of observed candidate synonymous DNMs was equal to the number of expected synonymous DNMs. Candidate DNMs with scores below this threshold were excluded. We also examined the probable sensitivity and specificity of this threshold based on validation results for DNMs of a previous publication15 in which comprehensive experimental validation was performed on 1,133 trios that comprise a subset of the families analysed here. The numbers of expected DNMs per gene were calculated per consequence from expected mutation rates per gene and the 2,407 males and 1,886 females in the cohort. We calculated the excess of DNMs for missense and PTVs as the ratio of numbers of observed DNMs versus expected DNMs, as well as the difference of observed DNMs minus expected DNMs. We identified 150 autosomal dominant haploinsufficient genes that affect neurodevelopment within our curated DD gene set. 
Genes affecting neurodevelopment were identified where the affected organs included the brain; or where HPO phenotypes linked to defects in the gene included either an abnormality of brain morphology (HP:0012443) or cognitive impairment (HP:0100543) term. The 150 genes were classified for ease of clinical recognition of the syndrome from gene defects by two consultant clinical geneticists. Genes were rated from 1 (least recognizable) to 5 (most recognizable). Categories 1 and 2 contained 5 and 22 genes, respectively, and so were combined in later analyses. The remaining categories had more than 33 genes per category. The ratio of observed loss-of-function DNMs to expected loss-of-function DNMs was calculated for each recognizability category, along with 95% CIs from a Poisson distribution given observed counts. We estimated the likelihood of obtaining the observed number of PTV DNMs under two models. Our first model assumed no haploinsufficiency, and mutation counts were expected to follow baseline mutation rates. Our second model assumed fully penetrant haploinsufficiency, and scaled the baseline PTV-mutation expectations by the observed PTV enrichment in our known haploinsufficient neurodevelopmental genes, stratified by clinical recognizability into low (containing genes with our ‘low’, ‘mild’ and ‘moderate’ labels) and high categories. We calculated the likelihoods of both models per gene as the Poisson probability of obtaining the observed number of PTVs, given the expected mutation rates. We computed Akaike’s information criterion (AIC) for each model and ranked genes by the difference between model 1 and model 2 (ΔAIC). The observed excess of missense/inframe indel DNMs is composed of a mixture of DNMs with loss-of-function mechanisms and DNMs with altered-function mechanisms. 
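With no free parameters in either model, the AIC comparison reduces to twice the difference in Poisson log-likelihoods; a sketch in which the baseline rate and enrichment factor are illustrative, not the study's fitted values:

```python
from math import factorial, log

def poisson_loglik(observed, expected):
    """Log-likelihood of an observed count under Poisson(expected)."""
    return observed * log(expected) - expected - log(factorial(observed))

def delta_aic(observed, rate_null, rate_hi):
    """AIC(model 1, no haploinsufficiency) - AIC(model 2, fully penetrant
    haploinsufficiency). Neither model has free parameters here, so AIC is
    just -2 * log-likelihood; positive values favour haploinsufficiency."""
    return (-2.0 * poisson_loglik(observed, rate_null)
            - (-2.0 * poisson_loglik(observed, rate_hi)))

# Hypothetical gene: 4 observed PTV DNMs against 0.05 expected at baseline,
# versus 2.0 expected under a 40-fold haploinsufficiency enrichment.
print(delta_aic(4, 0.05, 2.0) > 0)
```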
We found that the excess of PTV DNMs within dominant haploinsufficient DD-associated genes had a greater skew towards genes with high intolerance for loss-of-function variants than the excess of missense DNMs in dominant non-haploinsufficient genes. We binned genes by the probability of being loss-of-function intolerant30 constraint decile and calculated the observed excess of missense DNMs in each bin. We modelled this binned distribution as a two-component mixture with the components representing DNMs with a loss-of-function or altered-function mechanism. We identified the optimal mixing proportion for the loss-of-function and altered-function DNMs from the lowest goodness of fit (from a spline fitted to the sum-of-squares of the differences per decile) to missense/inframe indels in all genes across a range of mixtures. The excess of DNMs with a loss-of-function mechanism was calculated as the excess of DNMs with a VEP loss-of-function consequence, plus the proportion of the excess of missense DNMs at the optimal mixing proportion. We independently estimated the proportions for loss of function and altered function. We counted PTV and missense/inframe indel DNMs within dominant haploinsufficient genes to estimate the proportion of excess DNMs with a loss-of-function mechanism, but which were classified as missense/inframe indel. We estimated the proportion of excess DNMs with a loss-of-function mechanism as the PTV excess plus the PTV excess multiplied by the proportion of loss of function classified as missense. We estimated the birth prevalence of monoallelic DDs by using the germline-mutation model. We calculated the expected cumulative germline-mutation rate of truncating DNMs in 238 haploinsufficient DD-associated genes. We scaled this upwards based on the composition of excess DNMs in the DDD cohort using the ratio of excess DNMs (n = 1,816) to DNMs within dominant haploinsufficient DD-associated genes (n = 412). 
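The mixing-proportion scan described above can be sketched as a grid search: model the per-decile distribution of excess missense DNMs as p × (loss-of-function component) + (1 − p) × (altered-function component) and pick the p with the lowest sum of squared differences. The paper fits a spline to the goodness-of-fit curve; the grid argmin and all decile vectors here are illustrative stand-ins.

```python
# Sketch of the two-component mixture scan over pLI-constraint deciles.
# Shapes and observed data below are synthetic, for illustration only.

def best_mix(lof_shape, altered_shape, observed, steps=1000):
    """Return the mixing proportion p (weight on the LoF component)
    minimising the sum of squared per-decile differences."""
    def sse(p):
        return sum((p * l + (1 - p) * a - o) ** 2
                   for l, a, o in zip(lof_shape, altered_shape, observed))
    return min((i / steps for i in range(steps + 1)), key=sse)

# Per-decile proportions of excess DNMs (decile 1 = least constrained):
# LoF-acting DNMs concentrate in constrained genes; altered-function
# DNMs are spread more evenly.
lof_shape     = [0.01, 0.01, 0.02, 0.03, 0.05, 0.07, 0.10, 0.15, 0.24, 0.32]
altered_shape = [0.08, 0.09, 0.09, 0.10, 0.10, 0.10, 0.11, 0.11, 0.11, 0.11]

# A synthetic observed distribution built as a 40/60 mixture.
observed = [0.4 * l + 0.6 * a for l, a in zip(lof_shape, altered_shape)]

p_lof = best_mix(lof_shape, altered_shape, observed)
```

With real data the observed vector is the binned missense/inframe-indel excess, and the recovered p is the proportion of that excess attributed to a loss-of-function mechanism.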
Around 10% of DDs are caused by de novo copy-number variations49, 50, which are underrepresented in our cohort as a result of previous genetic testing. If included, the excess of DNMs in our cohort would increase by 21%; we therefore scaled the prevalence estimate upwards by this factor. Mothers aged 29.9 and fathers aged 29.5 have children with 77 DNMs per genome on average21. We calculated the mean number of DNMs expected under different combinations of parental ages, given our estimates of the extra DNMs per year from older mothers and fathers. We scaled the prevalence to different combinations of parental ages using the ratio of expected mutations at a given age combination to the number expected at the mean cohort parental ages. To estimate the annual number of live births with DDs caused by DNMs, we obtained country population sizes, birth rates and ages at first birth51, and calculated a global birth rate (18.58 live births per 1,000 individuals) and age at first birth (22.62 years), weighted by population size. We calculated the mean age when giving birth (26.57 years) given a total fertility rate of 2.45 children per mother52 and a mean interpregnancy interval of 29 months53. We calculated the number of live births given our estimate of DD prevalence caused by DNMs at this age (0.00288), the global population size (7.4 billion individuals) and the global birth rate. Source code for filtering candidate DNMs, testing DNM enrichment, DNM clustering and phenotypic similarity can be found here: https://github.com/jeremymcrae/denovoFilter, https://github.com/jeremymcrae/mupit, https://github.com/jeremymcrae/denovonear and https://github.com/jeremymcrae/hpo_similarity. Exome sequencing and phenotype data are accessible via the European Genome-phenome Archive (EGA) under accession number EGAS00001000775 (https://www.ebi.ac.uk/ega/studies/EGAS00001000775). Details of DD-associated genes are available at www.ebi.ac.uk/gene2phenotype.
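The final live-birth arithmetic can be checked directly; every input below is a figure quoted in the text, and the output is the approximate annual number of live births with DDs caused by DNMs at the global mean parental age.

```python
# Back-of-envelope check of the live-birth calculation, using only the
# figures quoted in the text.

prevalence = 0.00288          # DD prevalence from DNMs at mean parental age
population = 7.4e9            # global population
birth_rate = 18.58 / 1000     # live births per individual per year

births_per_year = population * birth_rate          # ~137.5 million
dd_births_per_year = births_per_year * prevalence  # ~400,000 per year
```

This reproduces the headline scale of the estimate: roughly 400,000 affected live births per year worldwide before any adjustment for local parental-age distributions.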
All other data are available from the corresponding author upon reasonable request.
News Article | February 16, 2017
LONDON--(BUSINESS WIRE)--According to the latest market study released by Technavio, the global automotive exhaust gas sensors market is expected to grow at a CAGR of more than 6% during the forecast period. This research report, titled ‘Global Automotive Exhaust Gas Sensors Market 2017-2021’, provides an in-depth analysis of the market in terms of revenue and emerging market trends. This market research report also includes up-to-date analysis and forecasts for various market segments and all geographical regions. China, Japan, India, and South Korea are the key countries from APAC in the automotive exhaust gas sensors market. Increasing industrialization and rapid infrastructure development are attracting more automotive manufacturers to the region, thereby driving the growth of the segment. Technavio’s sample reports are free of charge and contain multiple sections of the report, including the market size and forecast, drivers, challenges, trends, and more. Based on sensor type, the report categorizes automotive exhaust gas sensors into several segments; the top three revenue-generating sensor segments in the global automotive exhaust sensors market are discussed below: “The demand for NOx sensors is driven by the stringent emission standards that are mandated by governments across the world. Also, the shift in the focus of automotive manufacturers toward developing engines that significantly reduce fuel consumption is also driving the demand for NOx sensors,” says Siddharth Jaiswal, one of the lead analysts at Technavio for automotive electronics research. NOx sensors play a major role in the selective catalytic reduction (SCR) system by monitoring NOx emissions. This helps ensure compliance with the stringent emission norms in place to curb greenhouse gas emissions. Additionally, the average selling price (ASP) of these sensors is expected to decrease during the forecast period, which will increase their adoption.
O2 sensors have become a standard feature in almost all vehicles since their launch in 1976. With the introduction of on-board diagnostics, the number of O2 sensors used in a single vehicle has increased, leading to increased adoption. Different kinds of O2 sensors are used to monitor the air-fuel (A-F) mixture and the converter’s operating efficiency. These sensors help in improving fuel economy, maintaining peak engine performance, reducing exhaust emissions, and minimizing the risks associated with catalytic converters in vehicles. The fall in their ASP will drive their adoption during the forecast period. “The electronic control units in vehicles aid in the enhancement of engine operation and ensure effective exhaust gas treatment by precisely regulating the air-fuel ratio. Manifold absolute pressure (MAP) and mass air flow (MAF) sensors are extensively used in the electronic control units to aid the process,” says Siddharth. Both sensors work by sending a signal to the electronic control unit (ECU), which is used to calculate the amount of air aspirated by the engine. The effectiveness of MAP and MAF sensors in keeping a clear check on emissions is driving their increased adoption. The top vendors highlighted by Technavio’s research analysts in this report are: Become a Technavio Insights member and access all three of these reports for a fraction of their original cost. As a Technavio Insights member, you will have immediate access to new reports as they’re published, in addition to all 6,000+ existing reports covering segments like automotive components, wheels and tires, and powertrain. This subscription nets you thousands in savings, while staying connected to Technavio’s constantly transforming research library, helping you make informed business decisions more efficiently. Technavio is a leading global technology research and advisory company. The company develops over 2,000 pieces of research every year, covering more than 500 technologies across 80 countries.
Technavio has about 300 analysts globally who specialize in customized consulting and business research assignments across the latest leading edge technologies. Technavio analysts employ primary as well as secondary research techniques to ascertain the size and vendor landscape in a range of markets. Analysts obtain information using a combination of bottom-up and top-down approaches, besides using in-house market modeling tools and proprietary databases. They corroborate this data with the data obtained from various market participants and stakeholders across the value chain, including vendors, service providers, distributors, resellers, and end-users. If you are interested in more information, please contact our media team at email@example.com.
News Article | February 15, 2017
Canada, New England, and California all have Carbon Credit programs to achieve GHG reduction goals. Several forms of biomass diversion from landfills, farms, and other biomass-dependent GHG sources are already in operation to support significant GHG reductions. Examples of GHG reductions are given, and the carbon impact of the different commercially available biomass-to-GHG-reduction processes is described. The three groups of commercially guaranteed biomass conversion processes are: Direct combustion: power from the combustion of biomass wastes. This industry, with about 100 independent units in operation in the US, is based upon pulp mill technology and includes fluid beds, pulverized fuels, and suspension grate technologies matched to well-proven emissions controls. High-pressure steam is generated and drives turbines to make power. While fluid beds can use fuels with up to 65% water, e.g. sludges, most units use waste woody biomass from a variety of sources, and provide an important regional waste disposal service in the Circular Economy. The GHG reduction is site specific – landfill diversion of biomass is strongly GHG negative, while 100% forestry waste fuel is approximately GHG neutral. Anaerobic digestion (AD): AD is firmly established as the leading method of converting high-moisture-content organic wastes first to methane-rich gas, thence to power, pipeline gas, or CNG/LNG. AD’s lower conversion efficiency and higher specific capital cost are offset by a consistent GHG reduction due to the sources of its biomass feedstocks, and in many cases by wider socioeconomic benefits in disposing of wastes. Thermochemical conversion: this category includes simple pyrolysis, classical and advanced gasification processes, and direct catalytic conversion, with end products ranging from crude distillate oils to synthesis gas for all applications and catalyst-tailored products. These processes generally have a high conversion efficiency, and their GHG reduction impact is largely a function of their biomass sources.
Over 100 US biomass Independent Power Producers, and our many biomass industrial and municipal Combined Heat and Power generators, do not grow and burn trees – they cannot afford to do so at current power prices. They all use some form of waste biomass fuel, often diverting the wastes from landfills. They cannot afford to plant and harvest either trees or other ‘energy crops’, even though such plantations have been tried. The range of waste biomass fuels is wide – forestry wastes, including sawmill wastes, wastes from wood products manufacturing, non-recyclable waste paper, and recycled paper mill wastes and sludges. In addition, biomass plants use wood waste diverted from municipal landfills to avoid methane generation, clean wood from construction and demolition sites, and utility transmission line right-of-way clearance and urban tree removals – a huge volume. Finally, there is process waste from biodiesel and cellulosic fuels and chemicals production, agricultural wastes, and straw and husks from grain crops and grain processing. Some biomass plants, such as municipal Waste-to-Energy plants, combine recycling with power generation. The US has about 80 WtE biomass power plants, operating in compliance with emissions regulations. These third-generation WtE plants, like those in Europe and the newer WtE plants being built in China, exhibit high reliability and extremely low emissions as the result of several decades of continuous process improvement. The carbon footprint of biomass power plants is generally neutral, as determined by the US EPA and US Department of Energy. Each location should calculate its own particular carbon footprint. Some biomass power is strongly carbon-negative, owing to the reduction in landfill methane emissions when biomass is diverted, even with landfill gas recovery, as shown below. Additional reductions in carbon footprint can be achieved by the use of biodiesel and renewable LNG or CNG in trucks and equipment – a growing trend.
Dry wood consists of a mixture of cellulose, hemicellulose, and lignin. The ash-free chemical composition of wood can be represented as approximately C6H10O5 (effectively a carbohydrate, C6(H2O)5), or more simply as CH2O. CH2O is used below for the approximate calculation of the amount of methane (CH4) and carbon dioxide (CO2) that is released from a landfill when woody material undergoes anaerobic decomposition. Anaerobic decomposition is the result of the exclusion of air, the presence of water, and the presence of anaerobic and methane-forming bacteria, similar to the conditions in a swamp where methane, or marsh gas, is generated. One ton of dry, ash-free wood in a landfill produces approximately 0.27 tons of methane and 0.73 tons of carbon dioxide. The typical 25 MW biomass power plant uses from 1.05 to 1.1 dry tons of wood per net MW-hour; using 1.05 tons/MWhr, one MWhr of biomass power from diverted biomass avoids the formation of 0.28 tons of methane in a landfill. As a GHG, methane is approximately 21 times as powerful as CO2, so one MWhr of diverted biomass power avoids the release of approximately 6 tons of CO2 equivalent, plus the CO2 also generated, for a total of 6.7 tons of CO2e per biomass MWhr. However, most landfills practice landfill gas (LFG) recovery. The EPA model uses a default value of 50% LFG recovery when calculating emissions, but in southern California – the origin of LFG recovery technology – LFG recovery is approximately 65%, as advised by SCS Engineers and industry sources. Therefore the net emissions of methane to the atmosphere are approximately: 0.28 tons CH4/MWhr x 35% net emitted = 0.1 ton of methane emission avoided per biomass MWhr, or 2.1 tons of CO2 equivalent. Wood waste deposited in a C&D landfill will generate LFG more slowly than in an MSW landfill, but typically there will be no LFG recovery at a C&D landfill. Once water accumulates and oxygen is depleted, anaerobic decomposition will take place, yielding 6–7 tons CO2e per biomass MWhr.
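The landfill-gas arithmetic above follows directly from the CH2O approximation and the anaerobic stoichiometry 2 CH2O → CH4 + CO2 (equal moles of methane and CO2). The figures used (1.05 dry tons of wood per MWhr, a 21× CO2 equivalence for methane, 65% LFG recovery) are all taken from the text.

```python
# Landfill methane arithmetic for diverted woody biomass, using the
# CH2O approximation and the figures quoted in the text.

M_CH2O, M_CH4 = 30.0, 16.0        # molar masses, g/mol

def methane_per_dry_ton(tons_wood=1.0):
    """Tons of CH4 from anaerobic decomposition of dry, ash-free wood:
    2 CH2O -> CH4 + CO2, so half the moles become methane."""
    moles_ch2o = tons_wood * 1e6 / M_CH2O   # grams -> moles
    return moles_ch2o / 2 * M_CH4 / 1e6     # back to tons

wood_per_mwh = 1.05                          # dry tons per net MWhr
ch4_per_mwh = wood_per_mwh * methane_per_dry_ton()   # ~0.28 tons
co2e_per_mwh = ch4_per_mwh * 21              # ~6 tons CO2e from CH4 alone

# With 65% landfill-gas recovery, only 35% of the methane would escape:
net_ch4_avoided = ch4_per_mwh * 0.35         # ~0.1 ton per MWhr
net_co2e_avoided = net_ch4_avoided * 21      # ~2.1 tons CO2e per MWhr
```

This reproduces both headline figures in the text: 0.28 tons of methane formation avoided per MWhr without recovery, and about 2.1 tons CO2e per MWhr once 65% LFG recovery is credited.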
Waste paper is lignin-free wood, and decomposes in the same way, but more rapidly than woody material. If a given facility uses 50% landfill-diverted biomass and 50% carbon-neutral forestry waste, then a pro rata calculation of the negative carbon impact can be used to calculate the Carbon Credits so created. A typical 25 MW biomass power plant, using 100% landfill-diverted biomass, prevents the emission of about 1.2 MM tons per year of CO2 equivalent. A typical 60 MW, 2,000 ton MSW/day waste-to-energy plant, where 100% of the biomass fraction of the fuel avoids landfill decomposition, prevents the emission of about 2.9 MM tons/yr of CO2 equivalent. Most AD feedstocks would be converted to methane and CO2 if not so processed, therefore the above simplified calculation may be applied, with adjustment for the individual MAF organic analysis. Loss of carbon to digestate affects the overall carbon conversion efficiency of the particular AD process, but does not affect the negative carbon impact due to conversion. Negative carbon impact must be calculated on the MAF content of the feedstock, then converted to wet tons, or to gallons. The additional socio-economic impact of many AD projects, which eliminate the discharge of livestock manures and their consequent damage to rivers, lakes, fisheries and tourist business, should be added to their negative carbon impact. Although the carbon conversion efficiency of most AD processes is significantly lower than that in a combustion boiler, the heat rate of the gas engine gensets used for AD is better than that of a biomass boiler/steam turbine, such that an AD plant has a negative carbon impact of about 5 tons of CO2e per MWhr, compared to the 6.7 tons CO2e for the solid biomass system. Care must be taken to define the alternative decomposition pathway of each part of the AD feedstock in calculating this value.
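The plant-scale annual figures above can be sanity-checked from the per-MWhr value. The 8,000 full-load hours per year below is an assumed capacity factor (~90%), not a figure from the text, so the result is only expected to land in the same range as the quoted "about 1.2 MM tons".

```python
# Rough check of the 25 MW plant figure: annual CO2e avoided at
# 6.7 tons per MWhr (from the text) with an ASSUMED ~8,000 full-load
# hours per year.

plant_mw = 25
full_load_hours = 8000        # assumed capacity factor, not from the text
co2e_per_mwh = 6.7            # tons CO2e avoided per MWhr (no LFG recovery)

annual_mwh = plant_mw * full_load_hours          # 200,000 MWhr/year
annual_co2e = annual_mwh * co2e_per_mwh          # ~1.34 MM tons/year
```

The result, roughly 1.3 MM tons/year, is consistent with the article's "about 1.2 MM tons per year" once a somewhat lower availability or partial LFG recovery is allowed for.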
A typical 5 MW AD facility, using 100% feedstock that would otherwise generate methane emissions, prevents the emission of about 200,000 tons per year of CO2 equivalent. There are about a dozen commercially guaranteeable biomass conversion systems available. Some are equipped to handle MSW, others are limited to less-corrosive forestry wood wastes. In all cases, the negative carbon impact can be calculated from the feedstock analysis and the alternative disposal of that feedstock in the absence of the project. A good example is the Edmonton, Alberta, Enerkem project, which converts 100,000 tons/yr of RDF from MSW into approximately 440 bbl/day of hydrocarbon, using a classic oxygen-blown gasification process followed by gas clean-up and methanol synthesis. Gasification processes have excellent carbon conversion efficiencies; if the resulting syngas is used in combined cycle power generation, a very efficient overall system results. But the carbon impact is based upon the feedstock consumption, not the MW output, so a lower negative carbon impact per MWhr than for a conventional boiler/turbine is the result. The Enerkem Alberta project has a negative carbon impact of about 550,000 tons/year of CO2 equivalent, based upon 100% of its feedstock being diverted from landfill. Andrew Grant has a B.A. and M.A. from Cambridge University and over 35 years’ experience as a manager of biomass conversion projects. He has been involved in providing guarantees of performance, environmental impact and cost studies for coal and biomass conversion technologies, and has performed due diligence studies of technologies and of facilities. Andrew is familiar with a wide range of biomass processing, ranging from wood chips to rice straw to MSW, and is experienced in greenhouse gas reduction and carbon footprinting, in the use of waste biomass, and other emerging technologies.
News Article | August 22, 2016
Deleterious variants are expected to have lower allele frequencies than neutral ones, due to negative selection. This theoretical property has been demonstrated previously in human population sequencing data12, 13 and here (Fig. 1d, e). This allows inference of the degree of selection against specific functional classes of variation. However, mutational recurrence as described earlier indicates that allele frequencies observed in ExAC-scale samples are also skewed by mutation rate, with more mutable sites less likely to be singletons (Fig. 2c and Extended Data Fig. 1d). Mutation rate is in turn non-uniformly distributed across functional classes. For example, variants that result in the loss of a stop codon can never occur at CpG dinucleotides (Extended Data Fig. 1e). We corrected for mutation rates (Supplementary Information section 3.2) by creating a mutability-adjusted proportion singleton (MAPS) metric. This metric reflects, as expected, strong selection against predicted PTVs, as well as missense variants predicted by conservation-based methods to be deleterious (Fig. 2e). The deep ascertainment of rare variation in ExAC also allows us to infer the extent of selection against variant categories on a per-gene basis by examining the proportion of variation that is missing compared to expectations under random mutation. Conceptually similar approaches have been applied to smaller exome data sets11, 14, but have been underpowered, particularly when analysing the depletion of PTVs. We compared the observed number of rare (minor allele frequency (MAF) < 0.1%) variants per gene to an expected number derived from a selection-neutral, sequence-context-based mutational model11. The model performs well in predicting the number of synonymous variants per gene, which should be under minimal selection (r = 0.98; Extended Data Fig. 3b).
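A MAPS-style calculation can be sketched as follows: for a class of variants, compare the observed proportion of singletons with the proportion predicted from each site's mutability, and report the difference. The linear mutability-to-singleton predictor below is a toy stand-in for the paper's calibration on synonymous variants, and the variant list is synthetic.

```python
# Sketch of a mutability-adjusted proportion-singleton (MAPS) style score.
# The predictor and data are illustrative stand-ins, not the ExAC method
# in detail.

def maps(variants, predict_singleton):
    """variants: list of (mutability, is_singleton) pairs.
    Returns observed minus mutability-expected singleton proportion;
    positive values suggest selection against the variant class."""
    obs = sum(s for _, s in variants) / len(variants)
    exp = sum(predict_singleton(m) for m, _ in variants) / len(variants)
    return obs - exp

# Toy calibration: more mutable sites are less often singletons
# (in the paper this curve is fit on synonymous variants).
calibrated = lambda m: 0.6 - 2.0 * m

# A PTV-like class: mostly singletons despite varied mutability.
ptv_like = [(0.05, 1), (0.10, 1), (0.02, 1), (0.08, 0), (0.01, 1)]
maps_score = maps(ptv_like, calibrated)
```

A score near zero would indicate a class behaving like synonymous variation; the positive score here mimics the elevated singleton proportion seen for PTVs.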
We quantified deviation from expectation with a Z score11, which for synonymous variants is centred at zero, but is significantly shifted towards higher values (greater constraint) for both missense and PTV (Wilcoxon P < 10−50 for both; Fig. 3a). The genes on the X chromosome are significantly more constrained than those on the autosomes for missense (P < 10−7) and loss-of-function mutations (P < 10−50), in line with previous work15. The high correlation between the observed and expected number of synonymous variants on the X chromosome (r = 0.97 versus 0.98 for autosomes) indicates that this difference in constraint is not due to a calibration issue. To reduce confounding by coding sequence length for PTVs, we developed an expectation-maximization algorithm (Supplementary Information section 4.4) using the observed and expected PTV counts within each gene to separate genes into three categories: null (observed ≈ expected), recessive (observed ≤ 50% of expected), and haploinsufficient (observed <10% of expected). This metric—the probability of being loss-of-function (LoF) intolerant (pLI)—separates genes of sufficient length into LoF intolerant (pLI ≥ 0.9, n = 3,230) or LoF tolerant (pLI ≤ 0.1, n = 10,374) categories. pLI is less correlated with coding sequence length (r = 0.17 as compared to 0.57 for the PTV Z score), outperforms the PTV Z score as an intolerance metric (Supplementary Table 15), and reveals the expected contrast between gene lists (Fig. 3b). pLI is positively correlated with the number of physical interaction partners of a gene product (P < 10−41). The most constrained pathways (highest median pLI for the genes in the pathway) are core biological processes (spliceosome, ribosome, and proteasome components; Kolmogorov–Smirnov test P < 10−6 for all), whereas olfactory receptors are among the least constrained pathways (Kolmogorov–Smirnov test P < 10−16), as demonstrated in Fig. 3b, and this is consistent with previous work5, 16, 17, 18, 19. 
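The per-gene constraint Z score above can be illustrated as the deviation of observed rare-variant counts from the mutational-model expectation, in units of the Poisson standard deviation; the sign convention (positive = depleted = constrained) follows the text. The counts below are invented for illustration.

```python
import math

# Sketch of a per-gene constraint Z score: depletion of observed rare
# variants relative to the sequence-context mutational expectation.

def constraint_z(observed, expected):
    """Positive Z = fewer variants observed than expected (constraint)."""
    return (expected - observed) / math.sqrt(expected)

# Illustrative genes: a strongly constrained gene (4 PTVs observed,
# 25 expected) versus a gene behaving neutrally (24 vs 25 synonymous).
z_ptv = constraint_z(4, 25)
z_syn = constraint_z(24, 25)
```

In the paper these Z scores are the starting point; for PTVs they are then replaced by the pLI expectation-maximization classification to remove the coding-sequence-length confound.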
Crucially, we note that LoF-intolerant genes include virtually all known severe haploinsufficient human disease genes (Fig. 3b), but that 72% of LoF-intolerant genes have not yet been assigned a human disease phenotype despite clear evidence for extreme selective constraint (Supplementary Table 13). We note that this extreme constraint does not necessarily reflect a lethal disease or status as a disease gene (for example, BRCA1 has a pLI of 0), but probably points to genes in which heterozygous loss of function confers some non-trivial survival or reproductive disadvantage. The most highly constrained missense (top 25% missense Z scores) and PTV (pLI ≥ 0.9) genes show higher expression levels and broader tissue expression than the least constrained genes20 (Fig. 3c). These most highly constrained genes are also depleted for expression quantitative trait loci (eQTLs) (P < 10−9 for missense and PTV; Fig. 3d), yet are enriched within genome-wide significant trait-associated loci (χ2 test, P < 10−14, Fig. 3e). Genes intolerant of PTV variation would be expected to be dosage-sensitive, as in such genes natural selection does not tolerate a 50% deficit in expression due to the loss of a single allele. It is thus unsurprising that these genes are also depleted of common genetic variants that have a large enough effect on expression to be detected as eQTLs with current limited sample sizes. However, smaller changes in the expression of these genes, through weaker eQTLs or functional variants, are more likely to contribute to medically relevant phenotypes. Finally, we investigated how these constraint metrics would stratify mutational classes according to their frequency spectrum, corrected for mutability as in the previous section (Fig. 3f). The effect was most dramatic when considering nonsense variants in the LoF-intolerant set of genes.
For missense variants, the missense Z score offers information orthogonal to Polyphen2 and CADD classifications, which are measures predicting the likely deleteriousness of variants, indicating that gene-level measures of constraint offer additional information to variant-level metrics in assessing potential pathogenicity.