Seattle, WA, United States

News Article | October 23, 2015
Site: tech.co

Knowing when to scale may be the single most important factor in startup success. Janis Machala of Paladin Partners says market timing is the biggest reason, by orders of magnitude; getting it right is more art than science and a function of product-market fit. Ahead of her panel during Seattle Startup Week, she chatted with me about this topic and the panelists who best exemplify it.

Janis Machala is the managing partner of Paladin Partners, a consultancy based in Kirkland, WA, founded in 1995 and focused on startup growth and team building. Janis is moderating a panel on Monday, Oct. 26 at Seattle Startup Week called Making B2B Companies Scale: Women Entrepreneurs Weigh In. The four speakers on the panel each overcame unique issues in building their own companies.

Janis also recommended reading The Four Steps to the Epiphany by Steve Blank. She says she wishes she had written the book, and that every client she has ever served should read it. Here is a PDF of a shorter presentation on the book. Additionally, Janis recommends founders read this article by Mark Leslie to learn more about how, and why, to scale their startup.

Janis says market focus must be as narrow as possible in the beginning; once you have nailed sales in one area, move to the next. At each step of growth, it's vital to focus your time as a founder and CEO on interviewing users, identifying pain points, and adjusting the product to fit. Janis said you must listen more than you talk if you want to succeed. This is where she feels women have an advantage over men. "Women are better listeners than men," she said. Janis acknowledged the broad generalization, but the statement is not without truth, and her solid background and impressive results give her the business chops to make it.

Janis will also be hosting a B2C panel on scaling on Friday, Oct. 30.
Janis said she wanted to have distinct panels for each segment as there are issues inherent to the markets that don’t cross over. Seattle Startup Week runs Oct 26-30 in downtown Seattle and many other locations in the area.

News Article | August 23, 2013
Site: gigaom.com

Plenty of genetic testing and analysis startups want to use personalized medicine to revolutionize healthcare — but there's one thing they may have to do first: help consumers understand what that actually means.

According to a report released this week from research firm GfK, just 27 percent of U.S. consumers said they'd heard of the term "personalized medicine," and just 4 percent could accurately describe it as medical care matched to a person's genetic makeup. Once respondents were told what it meant, the study, which surveyed more than 600 people in the general population over the age of 30, found that 55 percent of those with work-sponsored health plans said they were interested in having a genetic test. Not surprisingly, that figure rose to 80 percent among those who have or have had cancer, and, in general, interest increased among those with more medical conditions.

With the approach of the more affordable, so-called "$1,000 genome," the phrase "personalized medicine" has become increasingly common. Several companies, from genetic testing firms 23andme and Gene by Gene to genomic data processing and analysis startups Bina Technologies and Spiral Genetics, are working on technology to help doctors provide care that's most appropriate for a person's genetic characteristics. That could mean using genetic assessments to determine whether a patient is a slow processor of caffeine or at higher risk for diabetes and other inherited conditions, then recommending the most fitting healthcare regimen, or discovering which drugs are most incompatible with a person's genetic predispositions.
It’s true that patients don’t need to know the term “personalized medicine” to benefit from genetic testing – and consumer-facing companies like 23andme tend to market with plainer language that more generally explains how “DNA tests [are] improving lives.” But the study still suggests that these companies have their work cut out for them when it comes to consumer education.

News Article | July 31, 2013
Site: www.wired.com

They call it precision agriculture, and it’s a hot topic. Across the country, new-age farmers are hacking their operations with robots, sensors, drones, and good-old circuit boards, hoping to increase both the quality and quantity of their fruits, vegetables, and grains. But that’s merely a first step. Thanks to the burgeoning field of “cloud-based genomics,” we will further improve our crops by, well, plugging them into the internet.

Scientists and entrepreneurs have now sequenced the genomes of plants such as the tomato, potato, and oil palm, and using this information, they can better understand the evolution of these fruits and vegetables — and ultimately improve them. This isn’t just about better taste or bigger crops. In some cases, it’s about saving iconic crops, such as the orange, from parasites. Big research institutions and corporations such as Monsanto are already pushing into this field, but a new company in Seattle, Spiral Genetics, wants to bring the benefits of genomics to the little guy. Spiral is developing “cloud-based” genomics algorithms that anyone can use over the net. WIRED caught up with Spiral co-founder and CEO Adina Mangubat at our offices in San Francisco to discuss how the company is tackling the new world of bionic ag.

WIRED: When most people hear the word “genomics,” they think about using the genetic data to, say, personalize your medicine. But you’re tackling agriculture. How come?

Adina Mangubat: The agriculture sequencing ecosystem is larger than in humans. There’s a lot more data. Plants have a ton of genetic variation. Also, many plant reference genomes are pretty poorly constructed. There’s a lot of reasons for that. They’ve got a lot more repetitive regions than humans do. Trying to figure out the linear sequence is really, really hard. The other [reason] is that it hasn’t seen as much attention and funding. The agriculture world is definitely not as sexy as curing children’s cancer.
But the plant world is what’s going to enable us to do really efficient biofuel production to help us be energy independent. We have to fix medical issues, but we also have to be able to feed everyone and to provide the energy that the world needs to continue to function. It also has impact on things that people aren’t so keen on, like genetically modified foods. There’s a lot of moral questions around that.

WIRED: You started off as a consumer genomics company? What happened to make you transition into data analytics and ag?

Mangubat: 23andMe came out. We were two ladies in a garage, and Anne [Wojcicki] was already set up, already had a service. We were like, “Ok, don’t go head-to-head with Google.” Eventually, we met Jeremy [Bruestle]. It took Jeremy looking at the bioinformatics tools and saying, “These are not going to work for large scale,” and me and Becky [Drees] looking at the data production trends for sequencing and saying, “Oh my gosh. This is exploding!” for all of us to realize we had the competency to make tools to serve this market. I don’t think we really set out with the goal of creating a tool that was going to be specifically useful for ag. We just were interested in solving the problem of large insertions or deletions. As we were in the middle of it, we realized that it was far more applicable than just for the human side of things.

WIRED: Why are insertions or deletions an issue?

Mangubat: Most of the tools out there in the wild work really well at detecting small insertions, deletions, and single base pair changes in the genome. But once you get past a certain limit, usually around 10 or 11 base pair inserts or deletions, the algorithms basically break down. It has to do with the way the algorithms are written.
Right now, almost everybody is doing this process called “alignment to reference.” So you have a reference genome, and you take every single [DNA] read, and you’re trying to align it against the reference to see where it goes. The current algorithms can only have so many mismatches between the read and the reference before it goes, “AHHHH! I don’t know where to put it.” The current mechanisms don’t know what to do with that, so it goes into the I-don’t-know bucket. But this method is the only thing right now that’s computationally feasible to use on a large scale.

If you want to do de novo sequencing, it’s far more computationally intensive. De novo is when you don’t use a reference genome. Groups that are forced to do this are trying to sequence a species that has never been sequenced before. It can take 30 days of computation just to generate the graph of one species, which is a really long time. It’s not something you’re going to be able to do all the time. It takes a long time and a lot of money.

WIRED: How are you providing a solution?

Mangubat: We have a new product we’re going to be rolling out shortly that has the ability to detect large insertions and deletions. The thing that’s beautiful about this technique is that you can use reference genomes that aren’t very well constructed and still get really good results, because it’s not heavily biased on the reference. It’s really important on the clinical side for doing diagnostics or recommending treatment for diseases like autism, schizophrenia, or Alzheimer’s that are tied to those types of [genetic] variations. The other place it’s really important is plants. Plants have tons of insertions and deletions. Tools for ag have been pretty limited, and we think this will help a lot, but there’s a lot more that can be done for sure.

WIRED: So in a way these technologies are leveling the playing field?
Mangubat: I would say, “Yeah.” If you’re a large, large company, you can spend a ton on R&D and go reasonably far. If you have to do a bunch of de novo sequencing, yeah, it might take a lot of computational resources, but if you’re a large company, you can sink millions of dollars into computing infrastructure and still get it done. If you’re a smaller guy, you can’t do that. This is the technology that will allow groups that don’t have their own R&D groups to have a tool that will actually work for them. That’s really exciting.

The other thing is that it really opens up the space of ag to be able to support new ways of doing crop development. We’re already seeing this, and I think it’s going to become even more prevalent. The whole GMO [genetically modified organism] thing — it has a bad rep. I can totally understand why people are uncomfortable with it. We don’t really have a good handle on what the outcomes are going to be in the long run, but I think there is still a huge amount of optimization that can be done for plants in a way that anybody would argue is safe. For example, there are a lot of groups that are starting to move toward what is called focused selection, or selective breeding, in a really well-informed kind of way.

WIRED: That’s been done for a long time with cows and plants.

Mangubat: Exactly. So people have been doing selective breeding forever, but if you can do it with a window into what’s happening on the genetic side of things, then you can do it in a much more selective way, with much better information about what’s going on. You can see things like: this strain of corn is resistant to this fungus because of this set of genes. We’d really like that to be bred into this other high-yield variety of corn.
Instead of splicing that information out of the corn with the fungus-resistant genome and stuffing it into the high-yield corn genome, what you could do is just breed them together and sequence to see if it’s there until you get that information transferred over. It’s technically happening in a natural way — like if those two plants happened to be growing in the wild. It’s a thing that can actually occur in nature. The likelihood of it is low, but it could happen. It’s a natural-ish process, unlike splicing, which people are definitely uncomfortable with. If groups can make really, really high-yield, fungus-resistant, pest-resistant crops that aren’t technically GMO, I think people would use them.

WIRED: Could that expand the definition of what GMO is?

Mangubat: It would be really, really hard to argue that, because you’d have to essentially outlaw [the basic genetic engineering first practiced by Gregor] Mendel. I don’t think that anybody is going to make that jump. They’re employing the same techniques as Mendel. They’re just cheating a little bit in that they can see what’s happening in the DNA to make sure that they really got it right.

WIRED: Where might we see the most interesting effects?

Mangubat: All of the things that are happening in bioinformatics right now will eventually feed into synthetic biology, which is really the ability to write DNA sequences from scratch. The more that we know about how the natural world works, the more intelligently we can write DNA code. That’s all more information that can be used for creating really well-designed DNA.

WIRED: What does that get us?

Mangubat: It could be anything from being able to design a plant from scratch to creating new species that have various properties that you want. If you have an oil spill, you can create bacteria that are going to eat that particular type of oil. That could be something that you could code. The implications of synthetic biology are pretty broad.
It really enables almost anything. But that’s really far out, you know. I don’t think anybody has a really great handle on what the future of that is going to hold.

Mangubat: What would I code? I don’t know. I’m a softie. I love cute animals. Lolcats are dear to my heart. Maybe I’d create something really cute, like a cross between a lemur and a chinchilla. That’s the joking answer. On the serious side of things, probably a cure for something, like being able to essentially code up an antibody to wipe out a disease that’s affecting people pretty substantially. But I don’t know that I could do that by myself. I’d get some help. I’d probably lead the company that does that.
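The alignment-to-reference breakdown Mangubat describes can be illustrated with a toy sketch. This is a hypothetical, naive aligner (real tools use indexed, seed-based search rather than a linear scan), assuming a fixed per-read mismatch cap: a read that spans a large insertion or deletion exceeds the cap at every offset and falls into the "I-don't-know bucket."

```python
def align_read(read, reference, max_mismatches=2):
    """Try to place `read` at every offset of `reference`, accepting
    the first position with at most `max_mismatches` substitutions.
    A read spanning a large indel mismatches heavily at every offset,
    so it cannot be placed and None is returned (the 'I-don't-know
    bucket' Mangubat describes)."""
    for pos in range(len(reference) - len(read) + 1):
        window = reference[pos:pos + len(read)]
        mismatches = sum(1 for a, b in zip(read, window) if a != b)
        if mismatches <= max_mismatches:
            return pos
    return None

# A clean read aligns; a read with a 4-base deletion relative to the
# reference ("CCCC" removed) cannot be placed anywhere.
print(align_read("GTAC", "ACGTACGTACGT"))            # → 2
print(align_read("AAAAGGGG", "AAAACCCCGGGGTTTT"))    # → None
```

This also shows why the breakdown point depends on the mismatch budget rather than on biology: substitutions are counted position by position, so an indel shifts every downstream base out of register at once.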

The current document is directed to automated methods and processor-controlled systems for assembling short read symbol sequences into longer assembled symbol sequences that are aligned and compared to a reference symbol sequence in order to determine differences between the longer assembled symbol sequences and the reference sequence. These methods and systems are applied to process electronically stored symbol-sequence data. While the symbol-sequence data may represent genetic-code data, the automated methods and processor-controlled systems may be more generally applied to various other types of symbol-sequence data. In certain implementations, redundancy in read symbol sequences is used to preprocess the read symbol sequences to identify and correct symbol errors. In certain implementations, those corrected read symbol sequences that exactly match subsequences of the reference symbol sequence are identified and removed from subsequent processing steps, to simplify the identification of differences between the longer assembled symbol sequences and the reference sequence.
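Two of the preprocessing steps the abstract names, using read redundancy to flag likely errors, and setting aside reads that exactly match the reference, can be sketched in a simplified form. This is a hypothetical illustration of those ideas, not the patented method, which operates on symbol sequences generally:

```python
from collections import Counter

def solid_kmers(reads, k, min_count=2):
    """Redundancy check: k-mers seen in at least `min_count` reads are
    trusted ('solid'); a k-mer seen only once likely contains a
    sequencing error."""
    counts = Counter()
    for r in reads:
        for i in range(len(r) - k + 1):
            counts[r[i:i + k]] += 1
    return {kmer for kmer, n in counts.items() if n >= min_count}

def partition_reads(reads, reference):
    """Set aside reads that exactly match a substring of the reference;
    only the remaining reads need to be assembled and compared against
    it to find differences."""
    exact, candidates = [], []
    for read in reads:
        (exact if read in reference else candidates).append(read)
    return exact, candidates

solid = solid_kmers(["ACGT", "ACGT", "ACGA"], k=3)   # {"ACG", "CGT"}
exact, candidates = partition_reads(["ACGT", "AGGT"], "TTACGTTT")
```

Removing exact matches up front shrinks the working set, so the expensive assembly and comparison steps run only on reads that can actually carry a difference from the reference.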

The Prefix Burrows-Wheeler Transform (PWBT) is described, which provides data operations on data sets even when the data set has been compressed. Techniques to set up a PWBT, including an offset table and a prefix table, and techniques to apply data operations to data sets transformed by a PWBT are also described. Data operations include k-mer substring search. General applications of techniques using the PWBT, such as plagiarism searches and open-source clearance, are described. Bioinformatics applications of the PWBT, such as genomic analysis and genomic tagging, are also described.
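The k-mer substring search the abstract mentions can be illustrated with the classic (non-prefix) Burrows-Wheeler transform and FM-index-style backward search. This sketch is a standard textbook construction, not the patented PWBT with its offset and prefix tables, and the naive rotation sort is for clarity only:

```python
def bwt(text):
    """Burrows-Wheeler transform via sorted rotations. '$' is a
    sentinel assumed absent from `text` and sorting before all
    other symbols."""
    text += "$"
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(rot[-1] for rot in rotations)

def count_kmer(text, kmer):
    """Count (possibly overlapping) occurrences of `kmer` in `text`
    by backward search over the BWT, narrowing a [lo, hi) range of
    sorted rotations one symbol at a time, right to left."""
    b = bwt(text)
    first_col = sorted(b)
    # C[c] = index of the first occurrence of c in the sorted column.
    C = {}
    for i, c in enumerate(first_col):
        C.setdefault(c, i)
    lo, hi = 0, len(b)
    for c in reversed(kmer):
        if c not in C:
            return 0
        lo = C[c] + b[:lo].count(c)
        hi = C[c] + b[:hi].count(c)
        if lo >= hi:
            return 0
    return hi - lo

print(bwt("banana"))                 # → "annb$aa"
print(count_kmer("banana", "ana"))   # → 2 (overlapping matches)
```

Practical implementations replace the `b[:i].count(c)` scans with precomputed occurrence tables, which is what makes such queries fast on compressed, genome-scale data.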
