Hacker News

I have some experience working at a genomics research company, and I'll broadly +1 Fred's experience with the industry, although in less negative terms. I got out before I got jaded, so my perspective is more "oh, that's a shame" than his. I really like genetics, bioinformatics, hardware, deep science, and all that, but the timing and fit weren't right.

The tools are written by (in my experience) very smart bioinformaticians who aren't taught much computer science in school (you get a smattering, but mostly it's biology, math, chemistry, etc.). Ex:

http://catalog.njit.edu/undergraduate/programs/bioinformatic...

http://www.bme.ucsc.edu/bioinformatics/curriculum#LowerDivis...

http://advanced.jhu.edu/academic/biotechnology/ms-in-bioinfo...

The tools themselves are written by smart non-programmers (a very dangerous combination), so you get all sorts of unusual conventions that make sense only to the author or the organization that wrote them, anti-patterns that would make a career programmer cringe, and designs that look good to no one and are barely usable.

Then, as he said, they get grants to spend millions of dollars on giant clusters of computers to manage the data that is stored and queried in a really inefficient way.

There's really no incentive to make better software because that's not how the industry gets paid. You get a grant to sequence genome "X". After it's done? You publish your results and move on. Sure, you carve out a bit for overhead but most of it goes to new hardware (disk arrays, grid computing, oh my).

I often remarked that if I had enough money, there would be a killing to be made writing genome software with proper visual and user-experience design, combined with a deep computer science background. My perfect team would be a CS person, a geneticist, a UX designer, and a visual designer. They could crank out a really brilliant full-stack product that would blow away anything else out there (from sequencing to assembly to annotation, and then cataloging/subsequent search and comparison).

Except, I realized that most folks using this software are in non-profits, research labs, and universities, so - no, there in fact is not a killing to be made. No one would buy it.



I live in this field, as a computer scientist learning the biology, and trying to make a living with a bootstrapped company.

I wrote a post about why GATK - one of the most popular bioinformatics tools in next-generation sequencing - should not be put into a clinical pipeline:

http://blog.goldenhelix.com/?p=1534

In terms of your ideal software strategy, I can speak to that as well, as I am actually attempting almost exactly what you're suggesting. My team all have master's degrees in CS and stats, with a focus on kick-ass CG visualization and UX.

We released a free genome browser (visualization of NGS data and public annotations) that reflects this:

http://www.goldenhelix.com/GenomeBrowse/

But you're right, selling software in this field is a very weird thing. It's almost B2B, but academics are not businesses, and their alternative is always to throw more post-doc manpower at the problem or slog it out with open-source tools (which many do).

That said, we've been building our business (in Montana) over the last 10 years through the GWAS era selling statistical software and are looking optimistically into the era of sequencing having a huge impact on health care.


> I wrote a post about why GATK - one of the most popular bioinformatics tools in next-generation sequencing - should not be put into a clinical pipeline:

I've seen you link to your blog post a couple of times now, and I still think it's misleading. I do wonder whether your conflict of interest (selling competing software) has led you to come to a pretty unreasonable conclusion. (My conflict of interest is that I have a Broad affiliation, though I'm not a GATK developer.)

In your blog post, you received output from 23andme. The GATK was part of the processing pipeline that they used. What you received from 23andme indicated that you had a loss of function indel in a gene. However, it turns out that upon re-analysis, that was not present in your genome; it was just present in the genome of someone else processed at the same time as you.

Somehow, the conclusion that you draw is that the GATK should not be used in a clinical pipeline. This is hugely problematic:

1) It's not clear that there were any errors made by the GATK. Someone at 23andme said it was a GATK error, but the difference between "user error" and "software error" can be blurred for advantage. It's open source, so can someone demonstrate where this bug was fixed, if it ever existed?

2) Now let's assume that there was truly a bug. Is it not the job of the entity using the software to check it to ensure quality? An appropriate suite of test data would surely have caught this erroneous output. Wouldn't it be just as fair, if not more so, to say that 23andme should not be used for clinical purposes, since they don't do a good job of paying attention to their output?
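To make the "suite of test data" idea concrete: a regression check that runs the pipeline on reference samples with known variants and diffs the calls would flag exactly this kind of leaked variant. A toy sketch - the pipeline function, variant encoding, and use of the NA12878 reference sample here are hypothetical stand-ins, not anyone's actual QA process:

```python
def validate_pipeline(run_pipeline, truth_sets):
    """Regression check: run the pipeline on samples with known variants
    and report any unexpected or missing calls per sample."""
    failures = {}
    for sample, truth in truth_sets.items():
        called = set(run_pipeline(sample))
        unexpected = called - truth   # e.g. another sample's indel leaking in
        missing = truth - called      # real variants the pipeline dropped
        if unexpected or missing:
            failures[sample] = (unexpected, missing)
    return failures

# Toy stand-in for a real pipeline: the sample picks up a leaked indel.
truth = {"NA12878": {("chr1", 100, "A", "G")}}
fake_pipeline = lambda s: [("chr1", 100, "A", "G"), ("chr7", 5, "AT", "A")]
print(validate_pipeline(fake_pipeline, truth))
# → {'NA12878': ({('chr7', 5, 'AT', 'A')}, set())}
```

A check like this, run against every release before deployment, catches the cross-sample contamination class of bug regardless of whether the fault lies in the caller or in how it was invoked.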

Your blog post shows, for sure, a failure at 23andme. Depending on whether the erroneous output was purely due to 23andme or if the GATK had a bug in production code, your post shows an interesting system failure: an alignment of mistakes at 23andme and in the GATK. But I really don't think it remotely supports the argument that the GATK is unsuitable for use in a clinical sequencing pipeline.


On your first point: my post detailed that 23andMe confirmed it was a GATK bug that introduced the bogus variants, and that the bug was fixed in the next minor release of the software. There are comments on the post from members of the 23andMe and GATK teams that go into more detail as well.

On your second point: 23andMe had every incentive to pay attention to their output, and it is fair to say it's their responsibility for letting this slip through. But it's worth noting, in the context of the OP's rant, that 23andMe probably paid much more attention to their tools than most academics, who often treat alignment and variant calling as a black box that they trust works as advertised.

So what I actually argue in the post (and should have stated more clearly in my summary here) was that GATK is incentivised, as an academic research tool, to quickly advance their set of features with the cost of bugs being introduced (and hopefully squashed) along the way.

This "dev" state of a tool is inappropriate for a clinical pipeline, and the GATK team's answer to that is a "stable" branch of GATK that will be supported by their commercial software partner. Good stuff.

Finally, I actually have no conflict of interest here, as Golden Helix does not sell commercial secondary-analysis tools (as CLC Bio does). I wrote this from the perspective of someone who is a 23andMe consumer, as well as someone informed enough to give recommendations on upstream tools to our users (and I might add, I would still recommend and use GATK for research, with the caution to potentially forgo the latest release for a more stable one).

You know, though, the conflict-of-interest dismissal is something I run into more than I would expect. I'm not sure if some commercial software vendor has acted in bad faith in our industry to deserve the cynicism, or if it's inherited by default from the "academic" vs. "industry" ethos.


> So what I actually argue in the post (and should have stated more clearly in my summary here) was that GATK is incentivised, as an academic research tool, to quickly advance their set of features with the cost of bugs being introduced (and hopefully squashed) along the way.

Sure, I agree with that. And I would agree if you said "Using bleeding-edge nightly builds of %s for production-level clinical work is a bad idea," whether %s was the GATK or the Linux kernel. I would be in such complete agreement that I wouldn't even have felt compelled to respond to your posts if that's what you had said originally, rather than "the GATK ... should not be put into a clinical pipeline". The former is accepted practice industry-wide; the latter reads like FUD and cannot be justified by one anecdote.

> You know though, the conflict of interest dismissal is something I run into more than I would expect.

Regarding conflict of interest, my point was to try to understand your potential interests, while also disclosing my own so that you can see where I'm coming from. That's not a dismissal; it's a search for a more complete picture. Interested parties are often the most qualified commenters anyway, but their conclusions merit review.

Hopefully people wouldn't dismiss my views because of my Broad connection, any more than they would dismiss yours if you sold a competing product.


The key is that 23andMe was not using bleeding-edge nightly builds but official "upgrade-recommended" releases.

GATK currently has no concept of a "stable" branch of their repo (Appistry is going to provide quarterly releases in the future, which is great).

The flag I am raising is that a "stable" release is needed before GATK gets integrated into a clinical pipeline. Because the Broad's reputation is so high, it is important to raise this flag; otherwise researchers and even clinical bioinformaticians assume that choosing the latest release of GATK for their black-box variant caller is as safe as an IT manager choosing IBM.


Good call. Much like an Ubuntu LTS, having stable freezes of the GATK (now that it's relatively mature) that only get bug fixes but no new (possibly bug-prone) features is a great idea.


This is an old story. Every domain I've worked in featured a chasm between the domain experts and the software folks. Experts write terrible software that somehow mostly works. Software folks misunderstand the problem and create overwrought monstrosities.

In my experience, this applies to accounting software, sensor data, computer-aided design, print manufacturing, healthcare, etc.

I imagine there are phases of maturity, something akin to CMM/SEI. Eventually there are enough people with a foot on both sides to bridge the gap.

It just takes time.


Hrrm, I was in a genetics research lab myself and got annoyed at the inefficiencies. In particular, it got frustrating to write and use in-house scripts to run pipelines on compute clusters and then not know the state of the execution, where the files are, etc. It's sort of a meta-problem, but I decided to do a startup based on writing good software with a good UI to make the problem better (problem = running, monitoring, and managing pipelines on clusters that have job schedulers like Grid Engine): http://www.palmyrasoftware.com/workflowcommander/
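That "state of the execution" meta-problem is essentially bookkeeping over pipeline steps. A toy sketch of the idea - all names here are hypothetical, and a real tool would poll the scheduler (e.g. qstat on Grid Engine) rather than being told states explicitly:

```python
from enum import Enum

class State(Enum):
    PENDING = "pending"
    RUNNING = "running"
    DONE = "done"
    FAILED = "failed"

class PipelineTracker:
    """Track which state each pipeline step is in, so you can answer
    'what is running right now?' without grepping scheduler logs."""

    def __init__(self, steps):
        # every step starts out pending
        self.states = {step: State.PENDING for step in steps}

    def update(self, step, state):
        self.states[step] = state

    def summary(self):
        # map step name -> state string, e.g. for a status display
        return {step: st.value for step, st in self.states.items()}

pipe = PipelineTracker(["align", "dedup", "call_variants"])
pipe.update("align", State.DONE)
pipe.update("dedup", State.RUNNING)
print(pipe.summary())
# → {'align': 'done', 'dedup': 'running', 'call_variants': 'pending'}
```

The value of a product here is everything around this core: persisting the state, attaching file locations to each step, and surfacing it in a UI instead of a terminal.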

Maybe it's still early going, but I do see how it's going to be really difficult making a living doing this. OTOH, companies like CLC Bio seem to be doing well for themselves...


Why wouldn't anyone buy your product? If it's easy to use and SPEEDS UP RESEARCH TIME, a researcher/PI who is spending thousands on computing clusters will buy your software for their graduate students. Hell, my PI keeps asking me if I need a faster computer so I can run Matlab better/quicker. Really, software that helped me perform research faster/better and compare my results to ground truth or gold standards would be a much more useful tool than a bunch of hardware. You push out papers fast.

So I disagree with you on your very last sentence (agree with the rest)
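For what it's worth, the "compare to gold standard" step above largely reduces to set arithmetic over variant calls. A minimal sketch (real comparisons also normalize variant representation, which this ignores; the variant encoding is a made-up simplification):

```python
def compare_to_gold(called, gold):
    """Compare called variants ((chrom, pos, ref, alt) tuples) against a
    gold-standard set; return (precision, recall)."""
    called, gold = set(called), set(gold)
    tp = len(called & gold)  # true positives: calls present in both sets
    precision = tp / len(called) if called else 0.0
    recall = tp / len(gold) if gold else 0.0
    return precision, recall

gold = {("chr1", 100, "A", "G"), ("chr1", 200, "C", "T"), ("chr2", 50, "G", "A")}
called = {("chr1", 100, "A", "G"), ("chr1", 200, "C", "T"), ("chr3", 10, "T", "C")}
p, r = compare_to_gold(called, gold)
print(p, r)  # both 2/3 here: one false positive, one missed call
```

A tool that automates this bookkeeping (and the plots for the Results section) is exactly the kind of time-saver the parent is describing.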


Ahh the efficiency argument.

The trick is, academics often have excess manpower capacity in the form of grad students and post-docs. Even though personnel is usually one of the highest expenses on any given grant, they often don't look at ways to improve the efficiency of their research man-hours.

That's not a blanket rule, as we have definitely had success with the value proposition of research efficiency, but in general, a lot of things businesses adopt to improve project time (like Theory of Constraints project management, or mindset/skillset/toolset matching of personnel, etc.) are of no interest to academic researchers.


I disagree with you. If there were excess manpower, graduate students wouldn't be stressed out with overwhelming work. Obviously, there is a lot more work to go around and fewer bodies to give it to. Most of the research man-hours go to implementing other people's research methods so you have a "baseline" - a complete waste of time just to have one graph in the Results section of your publication. The height of research inefficiency is replicating someone else's results and hoping (fingers crossed) that you followed their 8-page paper (which took them 10 months to develop) meticulously. Academic researchers only care about results; it is the graduate students that need to be efficient. The efficiency software should be bought by the PIs for their graduate students.


After researching this field (biomedical R&D) a bit, I found that the mindset and workflow are mostly pre-computer. The relevant decision makers in the labs usually don't see a need to change anything because "it works" and "it's always been done this way".


"It's always been done this way" is the ultimate motivation of any startup. We wouldn't have any competing startups if everyone just accepted that; we probably wouldn't have any entrepreneurs, or a better world for that matter. The fitness function of the world would flatline.


I'd be happy for you to be right. At least back when I worked there, it wasn't clear the total addressable market was there. It's not that they couldn't buy it; it's that they didn't see the need. Perhaps that has changed. :)


There are companies out there that offer commercial sequencing software. DNANexus is one.

As for whether there's "a killing to be made", it's kind of unclear so far.



