Feller I.C., Smithsonian Environmental Research Center |
Lovelock C.E., University of Queensland |
Berger U., Growth Science |
McKee K.L., U.S. Geological Survey |
And 2 more authors.
Annual Review of Marine Science | Year: 2010
Mangroves are an ecological assemblage of trees and shrubs adapted to grow in intertidal environments along tropical coasts. Despite repeated demonstration of their economic and societal value, more than 50% of the world's mangroves have been destroyed, 35% of them in the past two decades, lost to aquaculture and coastal development, altered hydrology, sea-level rise, and nutrient overenrichment. Variations in the structure and function of mangrove ecosystems have generally been described solely on the basis of a hierarchical classification of the physical characteristics of the intertidal environment, including climate, geomorphology, topography, and hydrology. Here, we use the concept of emergent properties at multiple levels within a hierarchical framework to review how the interplay between the specialized adaptations and extreme trait plasticity that characterize mangroves and intertidal environments gives rise to the biocomplexity that distinguishes mangrove ecosystems. The traits that allow mangroves to tolerate variable salinity, flooding, and nutrient availability influence ecosystem processes and ultimately the services they provide. We conclude that an integrated research strategy using emergent properties in empirical and theoretical studies provides a holistic approach for understanding and managing mangrove ecosystems. © 2010 by Annual Reviews.
Grimm V., Helmholtz Center for Environmental Research |
Berger U., Growth Science |
DeAngelis D.L., University of Miami |
Polhill J.G., Macaulay Institute |
And 2 more authors.
Ecological Modelling | Year: 2010
The 'ODD' (Overview, Design concepts, and Details) protocol was published in 2006 to standardize the published descriptions of individual-based and agent-based models (ABMs). The primary objectives of ODD are to make model descriptions more understandable and complete, thereby making ABMs less subject to criticism for being irreproducible. We have systematically evaluated existing uses of the ODD protocol and identified, as expected, parts of ODD needing improvement and clarification. Accordingly, we revise the definition of ODD to clarify aspects of the original version and thereby facilitate future standardization of ABM descriptions. We discuss frequently raised critiques of ODD but also two emerging, and unanticipated, benefits: ODD improves the rigorous formulation of models and helps make the theoretical foundations of large models more visible. Although the protocol was designed for ABMs, it can help with documenting any large, complex model, alleviating some general objections against such models. © 2010 Elsevier B.V.
Fanget A., Laboratory of Physics of Complex Matter |
Traversi F., Bioengineering Institute |
Khlybov S., Bioengineering Institute |
Granjon P., Bioengineering Institute |
And 5 more authors.
Nano Letters | Year: 2014
A high-throughput fabrication of sub-10 nm nanogap electrodes combined with solid-state nanopores is described. These devices should allow concomitant tunneling and ionic current detection of translocating DNA molecules. We report the optimal fabrication parameters in terms of dose, resist thickness, and gap shape that allow easy reproduction of the fabrication process at wafer scale. Device noise and current-voltage characterizations were performed, and the influence of the nanoelectrodes on the ionic current noise was identified. In some cases, ionic current rectification for connected or biased nanogap electrodes is also observed. In order to increase the extremely low translocation rates, several experimental strategies were tested and modeled using finite element analysis. Our findings are useful for future device designs of nanopore-integrated electrodes for DNA sequencing. © 2013 American Chemical Society.
Julien S.G., McGill University |
Julien S.G., Growth Science |
Dube N., McGill University |
Hardy S., McGill University |
Tremblay M.L., McGill University
Nature Reviews Cancer | Year: 2011
Members of the protein tyrosine phosphatase (PTP) family dephosphorylate target proteins and counter the activities of protein tyrosine kinases that are involved in cellular phosphorylation and signalling. As such, certain PTPs might be tumour suppressors. Indeed, PTPs play an important part in the inhibition or control of growth, but accumulating evidence indicates that some PTPs may exert oncogenic functions. Recent large-scale genetic analyses of various human tumours have highlighted the relevance of PTPs either as putative tumour suppressors or as candidate oncoproteins. Progress in understanding the regulation and function of PTPs has provided insights into which PTPs might be potential therapeutic targets in human cancer. © 2011 Macmillan Publishers Limited. All rights reserved.
Hardy S., McGill University |
Julien S.G., Growth Science |
Tremblay M.L., McGill University
Anti-Cancer Agents in Medicinal Chemistry | Year: 2012
Protein tyrosine phosphatases (PTPs) constitute a large family of enzymes that can exert both positive and negative effects on signaling pathways. They play dominant roles in setting the levels of intracellular phosphorylation downstream of many receptors including receptor tyrosine kinases and G protein-coupled receptors. As observed with kinases, deregulation of PTP activity can also contribute to cancer. This review will examine a broad array of PTP family members that positively affect oncogenesis in human cancer tissues. We will describe the PTP family, their biological significance in oncology, and how recent progress is being made to more effectively target specific PTPs. Finally, we will discuss the therapeutic implications of targeting these oncogenic PTPs in cancer. © 2012 Bentham Science Publishers.
Hirabayashi S., Growth Science |
Hirabayashi S., Imperial College London
DMM Disease Models and Mechanisms | Year: 2016
Accumulating epidemiological evidence indicates a strong clinical association between obesity and an increased risk of cancer. The global pandemic of obesity indicates a public health trend towards a substantial increase in cancer incidence and mortality. However, the mechanisms that link obesity to cancer remain incompletely understood. The fruit fly Drosophila melanogaster has been increasingly used to model an expanding spectrum of human diseases. Fly models provide a genetically simpler system that is ideal for use as a first step towards dissecting disease interactions. Recently, combining fly models of diet-induced obesity with models of cancer has provided a novel model system in which to study the biological mechanisms that underlie the connections between obesity and cancer. In this Review, I summarize recent advances, made using Drosophila, in our understanding of the interplay between diet, obesity, insulin resistance and cancer. I also discuss how the biological mechanisms and therapeutic targets that have been identified in fly studies could be utilized to develop preventative interventions and treatment strategies for obesity-associated cancers. © 2016. Published by The Company of Biologists Ltd.
Hart T.G.B., Growth Science
Agriculture and Human Values | Year: 2011
Technologies and services provided to resource-poor farmers need to be relevant and compatible with the context in which they operate. This paper examines the contribution of extension services to the food security of resource-poor farmers in a rural village in South Africa. It considers these in terms of the local context and the production of African vegetables in household food plots. A mixture of participatory, qualitative and quantitative research tools, including a household survey, is used to argue that local production practices contribute more to food security requirements than the extension services. This is because of the ability of African vegetables to grow relatively well in semi-arid areas where other exotic plants do not, their ability to provide at least two foodstuffs during their life cycle, and the ability of either the fruit or the leaves, or both, to be dried and stored for consumption in the winter months. These crops can make a significant contribution in terms of household food security, but a number of social and agroecological factors are constraining their production and placing their availability under threat. Despite this, the extension services remain focused on certain activities within vegetable garden projects, even when these are not meeting their proposed purpose: food security by means of cash-crop production. The paper concludes that the social and agroecological constraints could be alleviated if the extension services were changed. This could include the use of context-specific and low-cost technologies to ensure that these crops are able to increase their contribution to household food security for resource-poor farmers in semi-arid areas. © 2010 Springer Science+Business Media B.V.
Renuka Devi K., Growth Science |
Srinivasan K., Growth Science
Journal of Crystal Growth | Year: 2013
The nucleation of the α and γ polymorphs in aqueous glycine solution is discussed in the context of self-charge compensation among the molecular clusters. Self-charge compensation is the main process responsible for creating the dimers needed to initiate α nucleation, which is the more probable outcome under most environmental conditions, whereas γ nucleation arises from the assembly of monomers, which requires a sufficient number of monomers in the solution and is less probable under normal circumstances. In the present work, a new induced charge compensation mechanism was adopted by adding selected externally Induced Charge Compensator (ICC) species to the solution, by which dimer formation was completely arrested, paving the way for γ nucleation. The effect of the concentration of the added species on the induction time, nucleation, growth, and morphology of the nucleated polymorphs was studied. The structures of the nucleated polymorphs were confirmed by X-ray diffraction and their functional groups by FTIR analysis. The results reveal that the changeover from the self-charge compensation mechanism to the induced one occurs only at a critical concentration of the charged species. This approach is useful for controlling the nucleation of either the α or the γ polymorph alone in the system. © 2012 Elsevier B.V. All rights reserved.
News Article | June 30, 2014
Editor’s note: Thomas Thurston is a Partner at WR Hambrecht + Co, a San Francisco-based investment bank and venture capital firm. He is also Fund Manager at Ironstone, a San Francisco-based private equity firm that uses algorithms to identify disruptive startups, CEO of Growth Science, a data science firm, and former Chief Investment Officer of Rottura Capital, a long-short equities hedge fund. Formerly, Thomas worked at Intel Capital where he used data science to guide growth investments. A Fellow at the Harvard Business School, Thomas holds a BA, MBA and Juris Doctor.

Nothing gets keyboards clicking like a good controversy. Recently Jill Lepore, a history professor at Harvard, published a fierce article in the New Yorker accusing another Harvard professor, Clayton Christensen, of being a quack. Lepore didn’t use that word, but she may as well have. Christensen is a business school professor renowned for his “Disruption Theory” about why businesses survive or fail. Lepore basically says Disruption Theory is no good because it’s reckless, based on bad evidence and can’t predict the future. An ability to predict the future is, after all, the true test of a model. Christensen fired back in a Bloomberg BusinessWeek interview days later, followed by droves of Internet chatter by onlookers. The real question is, who’s right? Christensen or Lepore? Is this just a case of one reasonable opinion versus another? Actually, no. The unpopular, debate-killing truth is opinion doesn’t matter. Whether or not Disruption Theory can predict the future isn’t a matter of opinion, it’s a matter of fact. Here are the facts. Most people don’t know this, but it turns out Disruption Theory is the foundation of the most accurate, thoroughly vetted, quantitative prediction models of new business survival or failure in the world today. Oops. Allow me to explain.
Nearly a decade ago I was working at Intel when it dawned on me to turn the company’s new business investment history into a formatted dataset. The goal was to look for quantitative patterns to better predict which Intel innovations would succeed or fail. Generally speaking, most businesses fail (around 75 percent) before their 10th birthday, regardless of whether they’re a startup, a venture capital investment or launched by a company like Intel. I wanted to know if data-centric analyses could better pick winners. Strong patterns began to emerge, suggesting the fate of innovations was far more predictable than anyone had thought. The clearer these patterns became, the more I noticed how similar they were to phenomena Christensen had already been writing about for years. At the time, Christensen had recently published the book Seeing What’s Next, claiming Disruption Theory could predict the kinds of outcomes my research focused on. While Christensen’s work had a litany of supporting examples, it struck me (perhaps as it struck Lepore) that the research didn’t have the kinds of data I cared about – quantitative predictive data. Christensen had reason to believe Disruption Theory was predictive, but I wanted to know how predictive – exactly. Was it 10 percent predictive? 21 percent? 55 percent? 98 percent? As a manager in the trenches of Intel, this was the specificity I needed before deciding if Disruption Theory was useful. Those details were the gap between theory and practice. Since only around 25 percent of new businesses survive, to be useful any model would have to be more than 25 percent accurate at picking winners on a consistent basis. It’s important to note how improvement, not perfection, is the standard by which science is valued. For example, a new cancer treatment is valuable if it saves 10 percent more lives, even if it doesn’t cure 100 percent of patients. At any point in time, solutions just have to be better than the alternatives.
Since the patterns I found were more than 25 percent accurate, and those patterns seemed to dovetail with what Christensen had long written about, I decided to test Disruption Theory on its own. Predictive testing is part of a structured discipline called the scientific method. While it can be part of a social science education, it’s most commonly associated with “hard” sciences like physics, chemistry and medicine. It’s why new drugs have clinical trials. A model has to pass through stages including blind tests across random control groups to see if its predictions are not only accurate, but also support statistically significant levels of confidence. Predictive accuracy with 95 percent or more statistical confidence means the model is probably right. Less than 95 percent confidence means the model isn’t reliable enough. So how’d it do? Was Disruption Theory more than 25 percent accurate with at least 95 percent statistical confidence at picking winners? In the first round of tests, the only blind dataset I had at the time was barely big enough to meet minimum sample size requirements (it only had 48 companies). Still, it was enough to at least run some preliminary trials, and it’s worth noting Christensen wasn’t involved – I’d never met the man. Instead, I did my best to reduce his theory to falsifiable yes/no logic using published research. Even so, in the first round these relatively crude rules based on Disruption Theory blindly predicted whether new businesses would survive or fail with 94 percent accuracy and over 99 percent statistical confidence. Holy crap. If business research had “Eureka” bathtub moments, this would be one of them. This early test was described in detail by a former co-author of Christensen’s named Michael Raynor in the book The Innovator’s Manifesto. These results alone satisfy the burden of proof demanded by Lepore’s article. The debate could end right there. My research started getting attention in and out of Intel.
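To give a feel for what a claim like “94 percent accuracy with over 99 percent statistical confidence on 48 companies” amounts to, here is a minimal sketch of the kind of significance check involved. It is not the actual methodology (which isn’t published here); it assumes roughly 45 of 48 correct calls and, as a null baseline, a naive model that always predicts failure, which is right about 75 percent of the time since only about 25 percent of businesses survive.

```python
from math import comb

def binom_tail(k: int, n: int, p: float) -> float:
    """Exact upper-tail probability P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Illustrative numbers from the article: 48 companies, ~94% accuracy.
n = 48
correct = round(0.94 * n)  # ~45 correct predictions

# Null hypothesis: the model is no better than always predicting failure
# (accurate ~75% of the time given a ~25% survival base rate).
p_value = binom_tail(correct, n, 0.75)
print(f"one-tailed p-value: {p_value:.4f}")  # well below 0.01
```

With these assumed figures the exact tail probability lands well under 1 percent, which is consistent with the “over 99 percent statistical confidence” claim, though the real test could have used different baselines or methods.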
So while at Harvard one day I barged into Christensen’s office unannounced (he asked, confused, if I was there for a job interview). I introduced myself and summarized what I’d been working on. Months later I found myself living in Boston, leading joint research between Intel and Harvard to expand and improve these predictive models for new innovations. I was surprised to learn Christensen wasn’t the only guru whose theory hadn’t been tested. To my knowledge – brace yourself – zero business gurus in the fields of strategy or innovation had ever subjected their theories to the level of predictive testing we put Christensen’s work through (except for, partly, a little work by Eric Von Hippel at MIT in 1976 that, by oddball coincidence, made discoveries reminiscent of what Christensen and I found decades later). In business strategy and innovation departments, predictive testing simply isn’t the norm. Digest that for a moment. I applaud Lepore for calling out a popular business theory for lacking proof, but it’s no small irony that she targeted the one theory that’s been tested from hat to socks. Following the Intel-Harvard research I’ve continued to build predictive models as a data scientist, and more recently as a venture capitalist and head of research at an investment firm. In hindsight, the early Intel sampling cited in The Innovator’s Manifesto seems quaint compared with the subsequent work that’s followed. Nearly a decade later, highly refined versions of these Disruption-based models had produced more than 3,400 blind, real-world predictions about business survival or failure. These predictions informed more than $100 billion in organic growth, venture capital, stock trades and acquisition investments. When the models predicted survivors, they were right 66 percent of the time. When they predicted failures, they were right 88 percent of the time. Adding all survival and failure predictions together, the total gross accuracy was 84 percent.
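A quick back-of-envelope check ties those three numbers together. Assuming the 84 percent gross figure is the prediction-count-weighted average of the two per-call accuracies (an assumption; the article doesn’t state the breakdown), one can solve for the implied share of “survive” predictions:

```python
# Figures quoted in the article.
acc_survive = 0.66  # accuracy when the model predicted "survive"
acc_fail = 0.88     # accuracy when the model predicted "fail"
acc_total = 0.84    # reported gross accuracy across all predictions

# Assumption: acc_total is a weighted average,
#   acc_survive * f + acc_fail * (1 - f) = acc_total,
# where f is the fraction of predictions that were "survive".
f = (acc_fail - acc_total) / (acc_fail - acc_survive)
print(f"implied share of 'survive' predictions: {f:.0%}")  # ~18%
```

Under that assumption, roughly one in five predictions was a “survive” call, which squares with the article’s point that most new businesses fail.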
While lower at first glance than the 94 percent accuracy of the first early test at Intel, these figures come from models that now account for robust combinations of industry, geography and temporality in ways the early models didn’t. In each case, the predictions have sustained 99 percent levels of statistical confidence without a flinch. Science is a process, not an event, and last year the models took another leap forward. Still more sophisticated models – all based on Disruption Theory – continue to evolve, now involving more advanced algorithms and technologies. Taken together, the latest methodologies have produced over 20,000 blind predictions (and counting). Not one but multiple Disruption Theory-based models, each drawing from different data and underlying algorithms, continue to deliver 66 percent sustained accuracy with 99 percent statistical confidence. Put into perspective, the models have now made more predictions than all U.S. venture capital deals over the past five years combined, with a predictive accuracy more than 2.5X greater than the venture capital industry as a whole. A lot of people point to examples of when Disruption Theory, or Christensen, was wrong. It was wrong about the iPhone. Tesla. Ralph Lauren. In fact, it’s been wrong over 7,500 times by my count (remember, it has a 33 percent error rate when predicting winners). Keep in mind, however, it’s 66 percent right while everything else is stuck at 25 percent. Improvement, not perfection, is the standard. Disruption isn’t the end-all-be-all of management thinking, but it’s a solid contribution to the field. The theory’s accuracy is also disproportionately higher for big financial wins, as opposed to small wins. I bring this up because some people look at exceptions like the iPhone, Tesla and Ralph Lauren and fret that the models somehow miss blockbusters. This too is a question of fact, not opinion, to which there’s been considerable analysis.
The bigger a win, the greater the odds current Disruption-based models will catch it. I just used examples like the iPhone and Tesla because they’re well known. As if it weren’t enough, Disruption Theory has also proven highly replicable. It’s rules-based, not a fuzzy art form. More than 1,000 corporate managers and students at schools including Harvard and MIT have been tested both before and after specific training in Disruption Theory (over 8,000 observations). When asked to make blind predictions about the survival or failure of real (but disguised) businesses, test subjects with no training averaged 35 percent accuracy, whereas after being trained the average accuracy rose to 65 percent. This demonstrated that anyone following certain Disruption-based rules can achieve similar results — a hallmark of good science. Lepore’s article suggests the word “disruption” is over-hyped to the point of an empty rallying cry. She’s right. My research treats disruption as an extremely narrow, specific term of art, much as Christensen also takes great pains to articulate. Most people throw disruption around loosely, misstating, misunderstanding and misapplying it at the same time. I’d say at least half of the startup pitches I hear claim to be disruptive, but few of them are. Disruption Theory is like quantum mechanics in that, while anyone can read books about it, it takes a relatively high level of rigor and precision to accurately apply. It’s science, not art. As someone who understands disruption at a quantified level, I heard Lepore’s critique the way I’d probably sound if I read just one book on quantum physics, declared myself an expert (which I’m certainly not), and then called it all hogwash. Yet the article goes further. Entrepreneurs are called “ravenous hyenas,” investors are accused of having no conscience, innovation is blamed for the Holocaust, Hiroshima, genocide, global warming and both World Wars. That’s a stretch, to say the least.
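The replicability figures above (35 percent accuracy before training versus 65 percent after, across roughly 8,000 observations) can be sanity-checked with a standard two-proportion z-test. The sketch below assumes, purely for illustration, an even 4,000/4,000 split of observations between the before and after groups, which the article does not specify:

```python
from math import sqrt, erf

def two_proportion_z(p1: float, n1: int, p2: float, n2: int):
    """Pooled two-proportion z-test; returns (z, one-tailed p-value)."""
    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    # Upper tail of the standard normal via the error function.
    p_value = 0.5 * (1 - erf(z / sqrt(2)))
    return z, p_value

# Hypothetical even split of the ~8,000 observations before/after training.
z, p = two_proportion_z(0.35, 4000, 0.65, 4000)
print(f"z = {z:.1f}, one-tailed p-value = {p:.2g}")
```

At these sample sizes a 30-point jump is overwhelmingly significant, so the conclusion is insensitive to the exact split assumed.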
Innovation isn’t monolithic – the word is like “engineering” in that there are many flavors with different impacts on the world. Christensen writes about “sustaining” versus “disruptive” innovation, where sustaining innovation tends to deliver incremental growth, favor powerful incumbents, decrease access for those with fewer means and drive up costs. In contrast, disruptive innovation tends to create transformational growth, opportunity for underdogs, greater access for the less fortunate and lower costs. This is why, for many, disruptive innovation is a worthy goal. By no means does it inherently negate the conscience, loyalty or character of those who pursue it. I can’t help but notice another irony. Christensen has written two books arguing colleges and universities are beginning to face signs of disruption from online education, corporate and on-the-job training, and even YouTube (think Khan Academy). For example, the University of Phoenix is now the largest college in the U.S. by enrollment, having over three times as many students as the second-largest (Pennsylvania State). Christensen says higher education faces a genuine threat – even at incumbent bastions like Harvard where he and Lepore work. However, Christensen also predicts incumbents, when faced with disruption, overwhelmingly dismiss it, downplay its encroachment and resort to justifying their industry domination as a moral imperative. Lepore dismisses Christensen’s arguments about disruption in higher education. As support, rather than challenging the substance of Christensen’s case, Lepore takes a superficial, snarky stab at some of his examples and quickly migrates to another topic. The irony, however, is that by offhandedly dismissing evidence that higher education may be facing serious disruption, Lepore – as part of the incumbency – is doing exactly what Disruption Theory would predict.
This isn’t the first time Christensen’s theory has been challenged, and Lepore is correct to demand more predictive proof from business theories. There’s no shortage of hucksters, and bad business advice isn’t a victimless crime; especially for anyone whose life has been damaged by business collapse. It’s just a shame that when the article says “disruptive innovation can reliably be seen only after the fact,” it doesn’t seem to be aware of the relatively quiet, albeit massive, vetting that’s been done. Lepore could be right about Disruption Theory, but the odds are literally over 500,000 times greater that, as a matter of fact, she’s just plain wrong.