Hard Tech Startups (blog.ycombinator.com)
304 points by dwaxe on Sept 29, 2016 | 157 comments



> Many hard tech founders come from academia or big company backgrounds, where projects can be expensive and slow. We help these founders shift into a much more iterative mindset so they can move faster.

I think this is key and I hope it takes off beyond YC. I used to work on a DARPA-funded R&D project in the wireless space. DARPA's culture is really cool: throw a bunch of money at PhD program managers, give them a ton of directorial discretion, and let them fund "blue sky" projects they think are really cool. But it's a very academic mindset and the mission creep is huge. DARPA kept sending us down these "oh can we do this?" rabbit holes. At one point our radio grew an expert system that processed XML-formatted regulatory directives.[1] We spent huge amounts of time doing things other than solving the core problem. A decade later consumer deployment is still a ways away. It would be so much further along if DARPA had the focus of something like YC.

It seems like a no-brainer to those in Silicon Valley that you should find a tractable subset of your problem and pound on it until you've figured it out. That's not the way the big research institutions working on "hard tech" usually operate, and it's not really how major research universities teach people to think.

[1] https://en.wikipedia.org/wiki/DARPA_Agent_Markup_Language


What do you mean by mission creep, and how can you avoid it when you have "blue sky" projects? As I understand it, those types of projects, by their very definition, are trying to do something so unprecedented that they don't have a clear roadmap and will always go off on many tangents before circling back to their main goal. Take the moon landing for example: early rocketry pioneered technologies in chemistry and controlled explosives, materials science, mathematics, and computation before the first Mercury mission, to say nothing of systems engineering, a very broad field born almost entirely from the NASA space program. DARPA is the agency that takes on projects that are way outside the capabilities of any other research organization in defense, and it's a lot less focused on commercializing technology than the NIH or even NASA.

The very interesting thing about DARPA is how differently its management is structured from every other government agency. DARPA has a $3 billion budget with a staff of fewer than 300 people who are mostly project managers and administrative staff (compare to NIH - $30 bil/20,000, NSF - $7 bil/1,700, or NASA - $19 bil/17,000). By policy these PMs have a strict term limit of four years, and the only DARPA PM I know of who served a second term did so over two decades after his first. These PMs have scientific advisory boards that don't have any term limits, but the mandatory turnover for management means that they come in with clear goals and deadlines that seem to take career advancement out of the picture (although to be fair, most DARPA PMs are way beyond worrying about their career prospects). No other government agency works this way, and it has paid dividends not only for our defense but for technology and society as a whole.

Unfortunately there isn't a DARPA equivalent of https://spinoff.nasa.org


> What do you mean mission creep and how can you avoid it when you have "blue sky" projects?

Imagine if during the Mercury project, someone threw in: "oh, and while we've got someone in space, we should do a spacewalk too."


Can you give a specific example as it relates to DARPA? A spacewalk would have been mission creep for the Mercury program only in the most trivial way, because the technology was already developed by that point. The problem was the unacceptable risk and lack of reason to do any kind of spacewalk (Mercury and the mission procedures were not even remotely designed for in-flight repair).


So our project (XG) was about developing radio networks that could opportunistically use spectrum without a priori channel allocations. That was hard enough when you took into account challenging environments (terrain obstacles, jammers, etc). But DARPA decided the radios also had to be able to make decisions in conformity with regulatory policies written in this declarative XML language. It became this huge tangential AI project: http://xg.csl.sri.com/technical_reports.php.
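For a rough sense of what "machine-readable policy" means here, a tiny, purely hypothetical Python sketch -- the fields, rules, and function names are made up for illustration and are not the actual XG policy language:

  # Hypothetical declarative spectrum policies (not the real DARPA/XG format).
  POLICIES = [
      {"band_mhz": (225.0, 400.0), "region": "test_range", "max_tx_dbm": 30.0},
      {"band_mhz": (400.0, 470.0), "region": "test_range", "max_tx_dbm": 10.0},
  ]

  def transmission_allowed(freq_mhz, region, tx_power_dbm, policies=POLICIES):
      """True only if some policy explicitly permits this transmission."""
      for p in policies:
          lo, hi = p["band_mhz"]
          if lo <= freq_mhz < hi and p["region"] == region and tx_power_dbm <= p["max_tx_dbm"]:
              return True
      return False

  print(transmission_allowed(300.0, "test_range", 20.0))  # True
  print(transmission_allowed(450.0, "test_range", 20.0))  # False: exceeds the 10 dBm cap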


Honestly, it sounds like the side AI project could be the exact point of a "blue sky" project, but that open-ended "maybe this won't produce an explicit result this decade" mindset ain't for everyone. Neither good nor bad.

The ability to produce AI-tuned opportunistic spectrum allocation could have far-reaching implications in fields outside of spectrum analysis. Many of the biggest problems faced by modern organizations involve dealing with bureaucracy while efficiently allocating resources. Maybe this never led to anything, but then again who knows whether it could lead to a whole future field of efficiently scaling large organizations ... Take a look at the outputs of undirected DARPA funding and the results still bearing fruit decades later: computer animation by people like Ed Catmull (culminating in Pixar), Alan Kay and object-oriented programming, TCP/IP, etc. The bigger question of whether this side project could yield fruit has more to do with the type of team environment on the project and whether the pursuit came from a genuine "I wonder if...". E.g. one of the best predictors of the usefulness of such blue-sky discovery is whether there's a large element of "this seems fun/intriguing!", as appears to have been the driving motivation of Ed Catmull when he invented texture mapping.

And thanks for the links! I'm actually _very_ interested in reading these papers. There's been a lot of intriguing work and research on type-valued types (cf. the Iris programming language). But there's a real difficulty in describing and translating rulesets from various vague human logics into effective computer-parsable rulesets.


> Honestly, sounds like the side-AI project could be the exact point of a "blue sky" project

As a separate follow up project; not as a sub-project that stalls the main project (which is how it will be done since "What else are we going to do with the money? Doing the boring work of ironing out the kinks in the main project?").


> systems engineering, a very broad field born almost entirely from the NASA space program

What's the evidence for that? The Idea Factory, a history of Bell Labs, claims that it was originally "system engineering", back when America's phone network was called the Bell System. Of course, there was a lot of overlap between Bell Labs and NASA. Echo, Telstar, transistors and so on.


DARPA does a great job when they design competitions especially. Look at what they did for self-driving cars (they held all their competitions before Google or any company was putting any effort into it). A couple years later and suddenly it's a burgeoning field and now companies are being acquired. Their competition for humanoid robots likewise resulted in teams building real hardware on a deadline with impressive results.


DARPA can be ahead of the curve. A few years ago, I met a VC that worked in the wireless sector. He seemed surprised to learn DARPA had been working on what I guess you'd now call whitespace technology back in the early 2000s. Industry interest came significantly later than that.

Though that can be a bad thing. You don't want to be in a position where your DARPA project ends but VCs aren't ready to look at you yet.


Strongly concur. In the mid-2000s, I was looking at a couple of collaborative filtering investments (in the days of RSS readers, the early mainstreaming of recommendation engines, etc.) and in doing some background / tech diligence, discovered that the entire thing had been done in the early 1980s at Xerox PARC.

That happened a couple of times during my first stint in VC in the mid-2000s -- ideas pioneered (and often reduced to practice) at PARC or other pure R&D facilities were finally finding mass-market or industry adoption.


This is the hardest thing to explain to people about companies like ours. When I say that machine vision systems are comparable to building hardware, people can't conceptualize that because they have never really worked with novel CV/ML systems.

I think this post has it right too in that finding a small application of the technology is a good approach to sustaining yourself while you get to a bigger goal. This however adds a magnitude of complexity, which means funding requirements aren't that much smaller. For example, you have to build a product while also building the technology, vs. just building a product on existing technologies, which is hard enough itself. It's also a risk because if that one product is not a winner, then it poisons the larger play, even though it might itself be very valuable.

The challenge comes when you have to explain that, no actually this first thing is just the POC for a larger thing we're trying to do - especially when you are building the market for that larger thing.

I think the best path for hard tech companies is to have a lot of money from the start. I know that sounds obvious, but I think it's true specifically for hard tech companies.

Unfortunately that usually means you as a founder are a known entity in the venture world. Whether that means you use your own money from a previous exit (like Musk) or convince someone to fund a lot of it (probably because you had a previous exit and have connections) you need a lot of capital up-front.

So if you are an unknown to the people with cash, with hard tech plans, prepare for a tough time ahead.


If you don't have to design for assembly, you aren't dealing with hardware. The fact that software essentially has zero reproduction and distribution cost is what makes it so much easier.

Put it this way: when you buy a car, the difficult part for the manufacturer wasn't the customer-experienced design of the car, or even the driving concept. The hard part is designing the assembly line, and designing the car in such a way that 100,000 of them can be put together with the assembly of each individual piece taking exactly the same amount of time; otherwise there are assembly-line stalls. Then you also have to design, or at the very minimum optimally position and program, the machinery to put the pieces together, organise supply chains, deal with subpar contractors whose quality changes even if they are delivering theoretically the same product, etc.

Seriously, I have done my time in the CV space, and it is a lot easier than real world hardware, even if CV is still being explored, just because of the distribution and replication of a finished product.


I don't disagree that hardware itself has more dependencies. I don't think that makes it harder, or less likely to deliver, than bleeding-edge computer vision/machine learning systems, though.

I too have worked in hardware, and on those projects we could get fairly low-level line workers on assembly up to speed quickly, source our parts in a repeatable fashion, and build systems to scale without too much brain power. Not easy, but more logistics management than creativity.

With these non-deterministic CV systems (for example point cloud generation) the path to working is much less clear.

I'm not saying one is harder than the other, but the class of unsolved problems in CV/ML doesn't have off the shelf solutions, so they pose different but similarly hard problems as hardware.


I still think ML systems, which I work on, are easier than hardware, which I used to work on. It's things like the cost to dump inventory if you make a mistake (e.g. a founder cheaped out on a ballast circuit on a DNA reader, which caused the light to flicker at power-on, which fucked up the chemistry, which cost the company well over a million dollars), etc. Which is not to imply novel ML systems are easy...

Pair3d looks really cool -- I've been making a list of neat industrial uses of computer vision tech and virtual showrooms isn't something I'd thought of.

A friend actually did something similar the hard way -- he and his fiance were apartment shopping in nyc, but the apartments were empty and they were having a hard time visualizing what they would be like (less spacious, for starters) with furniture. So they cut butcher paper into pieces the size and shape of their major furniture and laid it out in the apartment to see what it would feel like with their couch, bed, etc in the apartment rather than empty.


A lot of the CV problems are essentially 'solved', though.

Feature extraction (AKAZE, BinBoost), matching (GPU brute force hamming), RANSAC (with PROSAC and relatives), bundle adjustment (Schur Decomposition with LM), point clouds (SGM, PatchMatch) and mesh (Poisson, FSSR).
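For the curious, a minimal sketch of the first two stages in Python with OpenCV (AKAZE features, brute-force Hamming matching, then RANSAC via a homography fit). The filenames are placeholders, and this ignores all the tuning mentioned in the next paragraph:

  import cv2
  import numpy as np

  # Hypothetical input frames.
  img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
  img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

  # Feature extraction: AKAZE produces binary descriptors.
  akaze = cv2.AKAZE_create()
  kp1, des1 = akaze.detectAndCompute(img1, None)
  kp2, des2 = akaze.detectAndCompute(img2, None)

  # Matching: brute-force Hamming distance on the binary descriptors.
  matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
  matches = matcher.match(des1, des2)

  # Geometric verification: a RANSAC-estimated homography rejects outliers.
  src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
  dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
  H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
  print(f"{int(mask.sum())}/{len(matches)} inlier matches")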

Each stage has tuning, and real-time requires sacrifices on the hardware we have available, but we know with better computers we can have it. (HSA has me drooling, hurry up with Zen, AMD!)

IMO the harder stuff is in semantic segmentation of point clouds and dynamic scenes, but I have high hopes for the next few years.


> semantic segmentation of point clouds

lol, hey it's hard enough to do with static images. Feature matching point clouds is probably Turing complete :P.

That's the kind of shit we're working on though. We're trying to turn the real world into a platform.


I feel like it should be easier than images, since we have all the 3D information. There are all of the features based on 3D structure which image segmenters can't even begin to use.

It's one of those things that we can do efficiently, so with enough priors of what scenes look like a sufficiently informed ML system should be able to get decent accuracy.


This reminds me of the video game Factorio.

Is there software that solves this problem of factory layout? Why or why not?


There are whole fields more or less dedicated to it! In mathematics, queueing theory; in engineering, pretty much the entirety of Industrial Engineering; and there are probably 10 competing frameworks in management, the most important being Lean, Theory of Constraints, and Six Sigma.
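As a toy illustration of why the line-balancing problem mentioned upthread is so unforgiving, here is a minimal M/M/1 queueing sketch in Python with made-up rates; even a station running at roughly 92% utilization builds a long queue in front of it:

  # One assembly station modeled as an M/M/1 queue (illustrative numbers only).
  lam, mu = 55.0, 60.0            # arrival and service rates, cars per hour
  rho = lam / mu                  # utilization of the station
  Lq  = rho**2 / (1 - rho)        # average number of cars waiting
  Wq  = Lq / lam                  # average wait in queue (hours), via Little's law

  print(f"utilization={rho:.2f}, queue={Lq:.1f} cars, wait={Wq*60:.1f} min")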


I'm not super familiar with your company. Does the approach of building the smallest nucleus of your tech, like Sam described, work for Pair? A glance at Crunchbase suggests you are seed stage, so maybe this is a real pain point right now.

> Very often, the first thing we do is help hard tech founders find a small project within their larger idea that fits the model of quick iteration and requires a relatively small amount of capital. This project is often the smallest subset of their technology that still matters to some user or customer. It may at first look like a detour, but it’s a starting point that lets founders build measurable momentum–for themselves, for recruiting employees, and for attracting investors.

Edit: The realtime 3D object placement in the demo video [1] is incredibly cool. I've done enough computer vision to realize you have some non-trivial technical work, but I'm not sure if I would label it hard tech personally. I think the gray area is all around the real-time constraint.

[1]: https://angel.co/pair-2


> Does the approach of building the smallest nucleus of your tech, like Sam described, work for Pair?

So far it is working - but as you say it is a real pain point.

The biggest problem we have is that venture people focus on our first product only, so they are looking at traditional metrics such as MAU, revenues, exits, etc., which is reasonable if they assume that Pair the commerce platform is all we ever want to do; nothing else we are doing seems real to them. In theory, the commerce platform alone is a big product, but it's maybe 1/1000th of the whole stack we're building.

> but I'm not sure if I would label it hard tech personally. I think the gray area is all around the real-time constraint.

Without going into a long technical explanation, our mobile monocular SLAM implementation, semantic segmentation, structure from motion and relocalization (loop closure) systems are firmly in the Machine Learning/Hard Tech sphere. Our team is ML/CV people and we're focused on building fundamental infrastructure for future computing platforms.

None of that is obvious from our marketing materials - because it would just be confusing for those in our first market/vertical.

As I said before, this approach is doubly hard compared to just building a product, because you have to build it on top of all the other stuff.

Trust me, if I had 10s of millions of dollars, we probably wouldn't be working on this commerce platform (even though we have had good success and it's a great tool.)


> Without going into a long technical explanation, our mobile monocular SLAM implementation, semantic segmentation, structure from motion and relocalization (loop closure) systems are firmly in the Machine Learning/Hard Tech sphere.

I am sure it was hard for you guys but aren't these systems no longer technical unknowns? There are lone PhD students who managed to build them and open sourced their work. [1]

This is no longer hard in the sense of "I'll need a research team and five years -- and we might not figure it out." It seems like it's just academic technology transfer where you need people who can read the latest papers and implement them.

[1] http://vision.in.tum.de/research/vslam/lsdslam (The author is now at oculus research)


Well, first off, no lone PhD student has built a SLAM system. LSD-SLAM itself took years and a whole team. Similar for ORB-SLAM.

Notice also that those systems are built for robots and desktop-class machines to run them, not mobile handsets. Sounds like a small difference if you aren't in it, but it is critically harder, so the approach is actually different. It's not about just "reading the latest papers and implementing them." Somewhat offensive to assume that is the case. I'd challenge you to find a mobile monocular SLAM system that is up to our capabilities, let alone usable. The teams that built them have all been acquired (Apple, Facebook, Intel). The reason you can't just copy-paste implementations is that they are non-deterministic in optimization.

Second, SLAM isn't the only thing we do. In fact it's not even the hardest thing we do. The majority of what we do isn't something I'll go into in depth, but it actually falls into the category of "I'll need a research team and five years -- and we might not figure it out" - though now we're in year three and have made enough progress that it's starting to come out of the realm of "we might not figure it out."


> It's not about just "reading the latest papers and implement them." Somewhat offensive to assume that is the case.

I wouldn't take too much offense to it. From my (much smaller) experience in computer vision and pattern recognition, everything in CV sounds easy until you actually do it AND make it work in the real world. Real data, poor lighting, low contrast, realtime, etc. There are just so, so many factors that make this an extremely challenging field.

When I was in grad school (2012–2013) the textbooks we used said things like "generalized object recognition is an unsolvable problem". In a lot of ways since then "unsolvable" problems have been solved. The field is just changing so rapidly.


If you've never tried to implement academic papers, I think you'll find reimplementations have a very high failure rate. Some of it is probably due to different datasets, some of it is due to not disclosing the entire algorithm or crucial implementation choices, and some of it is probably due to the bits in the code where "magic goes here", but those parts don't get published... Crucial tweaks to optimization algorithms, smart choices of initialization for iterative algorithms, etc. E.g. saying "L-M" doesn't really tell a practitioner precisely what you did; it's more of a family of techniques.
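A small illustration of that last point, using SciPy's stock Levenberg-Marquardt solver on a toy exponential fit. The data and both starting points are made up; the point is that the published name of the algorithm pins down far less than the unpublished choices (here, just the initialization) do:

  import numpy as np
  from scipy.optimize import least_squares

  # Toy data: y = 2.5 * exp(-1.3 * x) + noise (made up for illustration).
  rng = np.random.default_rng(0)
  x = np.linspace(0.0, 4.0, 50)
  y = 2.5 * np.exp(-1.3 * x) + 0.05 * rng.standard_normal(x.size)

  def residuals(p):
      return p[0] * np.exp(p[1] * x) - y

  # Same "Levenberg-Marquardt", different (usually unpublished) initialization.
  good = least_squares(residuals, x0=[1.0, -1.0], method="lm")
  bad  = least_squares(residuals, x0=[20.0, 3.0], method="lm")
  print(good.x)  # should land near (2.5, -1.3)
  print(bad.x)   # may land somewhere else entirely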


You're correct that it wasn't obvious (at least to me), and given your explanation it makes much more sense, especially from knowing more about the implementation.

The video reminded me of Ikea's virtual furniture app [1]. Tech-wise the comparison is superficial, as your tech is so much harder / more sophisticated, but my first thought as a consumer was "sounds like an iteration of that".

I wish I had suggestions. Computer vision research on the edge is hard x starting a startup is hard. That's tough. Maybe someone else will have more insight.

[1]: https://www.wired.com/2013/08/a-new-ikea-app-lets-you-place-...


"Hard tech startups

Some people think YC only funds software startups."

Kinda strange that software is not seen by YC as "hard tech".

I think there's lots of hard tech in software but the way venture capitalists/angel investors tell it, you should only ever build software that can be made in a weekend and then iterated upon.

The outcome of this is the X thousandth dog walking social network for uber drivers staying at AirBNB - i.e. stuff that you can build in a weekend and iterate on. And this is why demo days are full of such lightweight software applications - people are actively discouraged from taking the time to build something large scale and, well, "hard".

Sometimes it makes sense to make a big bet on building some complex software that might take months or years to build, with lots of moving parts, but once complete solves a big problem. That is software that is "hard tech".


There are plenty of hard tech software startups. But I see how unclear this is; fixing now. Thanks.


I agree wholeheartedly. I think that treating software as dichotomous with "hard tech" (instead of orthogonal) is a profound mistake that really proliferates, not just through VCs, not just through the industry, but through our entire culture. So, so many (especially older) hardware guys I know don't see software as a real field, and I think this is a big part of the reason why.

And much of the discussion about technical debt, about "the web toolchain sucks", or "operating systems suck", or X, Y, Z, they all suck... so often, I think that's just a different way of saying "we have this really hard problem, and we've spent decades piling really innovative partial solutions to it on top of other really innovative partial solutions, but nobody has come along with something that really, really works." And it's a tremendous problem.

The response I've seen in the past is, "We love those ideas, they're just more appropriate for, say, YC research." But for hardware, the bar is automatically different: it's understood that there must be some degree of longer-term research and development that goes into a product. And I think that's completely untrue: both require experimentation, both require feedback, both require upfront work to get right. And to be honest, this misunderstanding probably stands at the core of the strongest arguments I've ever heard against the maturity of the software industry.


As someone in the hard tech space I can tell you that it's not completely irrational on their part. When you look at exits and probabilities of exits, the types of software companies you describe make up the bulk.

The biggest (by valuation) technology startups that we know of (Uber, Lyft, Airbnb, Snapchat, Dropbox, Pinterest) are by no means hard tech. Even the public ones aren't hard tech (Facebook, LinkedIn, Twitter). So that's what these investors look to as the types of investments that make sense for them.

The question is, do hard tech companies have similar probabilities of outcomes or bigger?


DropBox was hard-tech when it came out; they reverse engineered the Finder so that your DropBox folder would appear like a normal Mac folder. They may seem like a commodity now because everybody's doing stuff like that (and in many cases, APIs have appeared that make it trivial), but when they first launched, they did stuff most people assumed was impossible. Twitch/Justin.TV was also hard-tech when it came out, though now the hard parts are built into every iPhone.

Other than that, I largely agree. The last software hard-tech company to strike it really big was Google. There've been a number of open-source projects doing what I'd consider hard-tech software, though - Bitcoin, git/Mercurial/Darcs, Bittorrent, TensorFlow, etc.

(My definition of hard tech, as applied to software, is "Software where you need to use scientific-method trial-and-error to build core pieces of the product." If you can read an online tutorial or reference manual and build the product, it's not hard tech. If you need to poke around at things, observe the responses, and build your own model of how things work, it is.)


"DropBox was hard-tech when it came out; they reverse engineered the Finder so that your DropBox folder would appear like a normal Mac folder. "

This, IMHO, is a really bad example.

This is pretty much just a few days of sitting in GDB for the right engineer[1]. Now, maybe it requires people experienced with debugging tools, but it's really not "hard tech". Now, productionizing it so it works on all versions, yeah, a bit trickier. But again, none of this is at the level of basically "understanding how to make custom bacteria that do a thing", etc. If this is the example you mean for "did stuff people assumed was impossible", then I strongly disagree.

""Software where you need to use scientific-method trial-and-error to build core pieces of the product.""

This, IMHO, is way too low a bar. By this definition, the clang compiler we built for windows is "hard-tech". While it requires time and energy and trial and error, that is not hard, in the same way the dropbox stuff is not hard.

It is known that it is possible, and requires the reasonable application of good engineering skill. That engineering skill may often involve the scientific method trial-and-error, but you know you will eventually get there.

The same is true of dropbox, and in particular, your finder example. The only thing unknown is the timeline, and even that you can take a reasonable stab at if you have good enough engineers.

[1] I did it before they did, and i wasn't even the first. Plenty of people have made this happen :)


I like the xkcd definition "I'll need a research team and five years" with the implication made explicit -- "and we might not figure it out."


Yeah, this is pretty much what i'd say qualifies as hard software tech


Unlikely Google would've qualified then, but I would certainly have put Google into the hard software tech category.


If we're talking about Google before Brin's PhD thesis, I think it would have qualified. It was not at all clear back then that using backlink data would yield more useful results than mere textual analysis of page content. One can definitely imagine a scenario where you try to build a search engine based on going down the rabbit hole of natural language processing as the key feature and then end up with something that doesn't work all that great.
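To make "using backlink data" concrete, here is a tiny PageRank-style power iteration in Python over a made-up four-page link graph; this is the textbook form of the idea, not Google's production system:

  import numpy as np

  # Hypothetical link graph: adj[i][j] = 1 if page i links to page j.
  adj = np.array([[0, 1, 1, 0],
                  [0, 0, 1, 0],
                  [1, 0, 0, 1],
                  [0, 0, 1, 0]], dtype=float)

  d = 0.85                                  # damping factor
  n = adj.shape[0]
  M = adj / adj.sum(axis=1, keepdims=True)  # row-stochastic transition matrix

  rank = np.full(n, 1.0 / n)
  for _ in range(100):                      # power iteration
      rank = (1 - d) / n + d * (M.T @ rank)

  print(rank)  # pages with more (and better-ranked) backlinks score higher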


Perhaps I'm just not as conversant in reverse-engineering as some of the people here, but my understanding is that if a key part of your product relies on patching somebody else's software for which you don't have the source code, this is also fraught with potential dead-ends and uncontrollable risks. What if they're using ASLR? What if they change the functions involved in the next version? What if the functions you're trying to patch have side effects that you can't afford to ignore?

That's why I prefer to put the dividing line at "must figure out things by poking at them rather than by reading documentation". The definition of "research" can be pretty vague - is a security team poking at a product conducting research? How about a UX team trying to figure out how their users behave? A search team doing language modeling? All of these would count in my head, and if a startup built their product around one of these results I would consider it "hard tech", but evidently not everyone agrees.


I think Research = Observe-Theorize-Experiment works. Reverse engineering is not research because someone already has the answer.


> Reverse engineering is not research because someone already has the answer.

Someone potentially had the answer at one point. That person/organization may be dead/defunct, or the knowledge may otherwise be lost to time.


Yes, but an answer exists.


...an answer always exists, even in pure scientific research. Nothing is inexplicable.


Why not? The research just happened in grad school [1] before the startup.

[1]: http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.109.4...


"Research team and 5 years" != "Two [1] grad students and 2.5 years".

Dropbox, BTW, was 4-5 engineers and 1.5 years.

[1] Actually more like 4-5, Larry & Sergey had help.


Google's other big innovation was building with commodity machines instead of high-end servers. That definitely puts them in the hard software tech category.


I'm personally sympathetic to "I'll need a research team and five years" missions. I think both university research and xkcd are great! But it's critically different from YC's definition that:

Hard tech = "There is doubt that the technology can be built at all."

Though many pieces of technology face huge doubts, what's key is often a team can get to a working prototype or partial release in way less than 5 years! (E.g. Dropbox.) YC is exactly the type of environment to refocus a 'research style' team exclusively on demonstrable progress.

The problem with the xkcd definition (if attempted in a startup) is very few research teams can continue to fundraise for 5 years without a product or significant prototype.

A partial solution to the "doubted" tech is often good enough to build a great company.


>This is pretty much just a few days of sitting in GDB for the right engineer[1]. Now, maybe it require people experienced with debugging tools, but...

As someone sort-of familiar with gdb (but not extensively so) I have no idea how I'd do that. Can you point me in the right direction?


I don't use a Mac, but assuming Finder detects when items are added to a folder it is currently displaying and updates its view to reflect this:

Use dtrace and create a lot of such events. They're presumably using kqueue or some event mechanism to be notified when the file arrives. Do this with many file types if they look different in Finder. Somewhere in there should also be a read that corresponds to the dirent. You can break on these things.

Attach the debugger and create the events. Step through the code to find when these things are read. Attempt to discern how what is read differs between file types. Do stuff like make files with conspicuous attributes (e.g., file size), because it's easier to correlate from traces. The data is probably a file containing file metadata somewhere.

This is probably mostly looking at the bytes coming off the read. dtrace makes this easy because you can trigger it to set a flag when the kqueue event fires and then just dump bytes and locations from file reads/opens. If it's more integrated into the OS, Finder would have to have its own special syscalls to read stuff off inodes or whatever. You'd be able to see those happening too.

Once you think you know how it works, give it a try. Rinse and repeat.

Now it may be you have surprises here and there and it's kind of annoying, but I'd be surprised if I couldn't do it.
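If it helps, here is a minimal Python sketch of the kqueue-style directory watch described above (macOS/BSD only; the path is a placeholder, and this shows only the notification half, not the tracing of Finder itself):

  import os
  import select

  WATCH_DIR = "/tmp/watched-folder"  # placeholder path

  # Ask the kernel to tell us when the directory is written to, renamed, or emptied.
  fd = os.open(WATCH_DIR, os.O_RDONLY)
  kq = select.kqueue()
  ev = select.kevent(
      fd,
      filter=select.KQ_FILTER_VNODE,
      flags=select.KQ_EV_ADD | select.KQ_EV_CLEAR,
      fflags=select.KQ_NOTE_WRITE | select.KQ_NOTE_RENAME | select.KQ_NOTE_DELETE,
  )

  while True:
      # Block until the directory changes, then re-list it to see what arrived.
      for event in kq.control([ev], 1, None):
          print("change flags:", event.fflags, "contents:", os.listdir(WATCH_DIR))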


> My definition of hard tech, as applied to software, is "Software where you need to use scientific-method trial-and-error to build core pieces of the product."

I think that's a great definition actually and fits with how ML systems are built.


How about things like

* the JVM -- it wasn't clear you could build a VM that was fast enough to compete with C++;

* the C2 compiler inside the JVM -- also not clear you could do this fast enough, or build one that optimized enough to care about;

* Azul Systems / Zing -- could you build a GC that tolerates terabit allocation rates w/o stop-the-world pauses?


Sorry, no, the JVM was not even innovative, let alone "hard tech". There were lots and lots of previous examples of doing the same as (and better than) the JVM; see for example the Smalltalk and Lisp worlds.

IMO, the only thing on your list that comes close is the Azul GC which, in my limited understanding, actually advanced the state of the art.


Microsoft (operating system, SQL database, programming languages / environments) and Oracle (databases, partial credit for programming language (Java)) could be considered hard-tech software companies with outcomes (so far) that are many multiples of those you mentioned.


Those (along with Google and Microsoft, which are arguably hard tech) were founded prior to Web 2.0. Since about 2002 no hard tech software startups have achieved any scale; they are sort of all "moving up the stack". When they do emerge (Android Inc.) they are promptly swallowed. In the mid-2000s there were Yelp (from the software perspective, a simple website with a Lucene backend) and YouTube (a simple website with a Flash player and ffmpeg transcoding commands). It's probably a sign of consolidation of the industry.


Using Sama's definition of a hard-tech product or business as "where there is doubt that the technology can be built at all", a bunch of successes come to mind.

1. Tesla. Market cap ~$29B, with $14B+ in pre-orders.

2. SpaceX. Valued north of $10B.

Elon's companies faced massive doubts about tech feasibility, both when started and at basically every point since.

3. Oculus. Sold for $2B in March 2014, including $1.6B in FB stock that's now worth 2x more.

4. Machine Zone. The messaging platform for Game of War/Mobile Strike is very "hard tech". (Google/AWS still lack equivalents and MZ was kinda forced to take a "time out" to develop it.) In 2014 the WSJ said they were fundraising at close to $3B, and now their cash flow powers global TV ad campaigns.

5. Cruise. "Hard Tech" and a big win for YC.

6. Zoox. They started w/a prototype and have made huge progress.

Many new business lines at public companies faced huge doubts about tech feasibility and are now clear home runs:

7. Nvidia. Every new chip is "hard tech" and they've grown massively in the last decade.

8. iPhone/Apple + Android Inc. Smartphones are really, really, "hard tech". So many doubts on both the tech and product side are now forgotten!

9. Google (mentioned by others). In addition to search/ads/Android, I'd say Maps/Gmail/infrastructure are all examples of "hard tech". Each was viewed as "hard tech" when released. (Contrast Google Maps with Apple's disastrous maps launch years later.)

10. AWS. If you had pitched AWS as a startup in late 2003, you would have faced lots and lots of technology skepticism. A phenomenal success.

11. Amazon Echo. Hard tech, took a massive team to develop. Doing great as a product.

Crossing "the valley of death" in commercializing new tech is really hard. But there are vast success!


I'd say Google was hard tech


Google was working, using Stanford resources, before it was funded. Low investor technical risk. Also, Google's founders had patent coverage of their idea.


Edit: I'm awful at this reading thing. Please see the replies to this comment for a correct definition of "hard tech". My original comment follows.

.

I think "hard tech" in this instance means "technology in the form of hardware" or "technology that is physically manifest" and not "technology that is difficult."


There's a definition from Sam in the footnote:

> [1] I use 'hard tech' to mean a startup is whether there is doubt that the technology can be built at all.

Edit: I'm glad he defined it for this post. In general though, there's definitely ambiguity around the term.


No. From the footnote: " I use ‘hard tech’ to mean a startup where there is doubt that the technology can be built at all."


I think the latter is the intended meaning actually. AI/machine learning seem to be thrown into the "hard tech" bucket in VC jargon.


From the footnote in the original post:

[1] I use ‘hard tech’ to mean a startup where there is doubt that the technology can be built at all.


This conflicts with the way he uses the term "hard tech" to differentiate from software startups, as if the two are mutually exclusive:

> It’s relatively easy for a software startup to take short cycle times and low-costs to an extreme, but hard tech founders are often surprised by how effectively they can do this, too.

What I think he is really referring to as "hard tech" is the move from the consumer electronics and office automation markets to the grittier, less-visible industrial verticals: automotive, industrial, energy & utilities, aerospace & defense, etc. These verticals have generally been dominated by the likes of GE/Siemens/Boeing/ABB/Schneider due to the high R&D costs needed to realize a certified, generally safety-critical solution in a highly bureaucratic environment.

Dropbox is down for a few hours? We'll have a few pissed customers. The ADAS system in a customer's car failed momentarily? They'll be lucky to survive and massive lawsuits/loss of business could ensue.

As an aside he should really define key terms up front to avoid this sort of confusion and to allow us to focus on his main point: YC is trying to move into and disrupt this lucrative but hard-to-broach space.


Exactly. I think that itself actually an interesting "hard tech" problem: how do you move fast, iterate, and be first-to-market without killing anyone?


You do not move fast and iterate if you care at all about the cost of failures. Period.


I did say it's a hard tech problem... :)

(But seriously, at least in theory, there are sort of obvious things you can do to isolate anything failure-critical behind statically or dynamically checked contracts, which then allows rapid iteration outside of the small set of things that are truly failure-critical. But you need those contracts trustworthy enough that you can iterate without worry, flexible enough that you can iterate on economically important sub-components, and applicable to a system or set of systems where this iteration gives you the trump card in a billion dollar(s) industry. As I said, it's "hard tech" -- which almost by definition means some people will think it's impossible.)


I don't think you do. You want to be number 2, and swoop in when the time is right.


The first line of the post explicitly makes a distinction between "software" and "hard tech".

I really challenge the accepted wisdom of "build something tiny and iterate" as being the only way to build software.


Of course there are many approaches to build software [1]. I also read that comment as more of a philosophy for product development than software development.

[1]: https://en.wikipedia.org/wiki/Software_development_process#A...


Games can be really complex 'hard-tech', but compare Elite Dangerous's small+ iterative to Star Citizen's big+all-at-once approaches.

Even with things like nuclear energy and particle colliders I'd guess those who succeeded probably started small and iterated.


The quote you put isn't what is said on the blog post. I assume they edited it after your comment. Thanks! Adding that straightforward adjective removes the ambiguity.

> I use ‘hard tech’ to mean a startup where there is doubt that the technology can be built at all.

> Some people think YC only funds straightforward software startups.


Yes the original post has been edited.


I too found it a bit strange that the list didn't include software, or at least some specific types of software. I can assure you though that YC does invest in this and doesn't exclusively look for software that can be built in a weekend. My startup, Pachyderm [0], is a pretty good example of this that they funded in W15. A ton of other hard tech software startups have come out of YC, including Docker, CoreOS, MemSQL and RethinkDB. In general I think it's one of their more successful verticals, although probably not quite as successful as consumer software.

In general I think you are right that VCs have the bias you're talking about, though. Not all VCs, but many. It's not totally irrational: software that can be built in a weekend has fewer risks associated with it and can have just as much upside. However, there are many very real problems that are worth solving and can only be solved with complicated, hard tech software. Investors with this bias cut themselves off from that opportunity.

[0] https://www.pachyderm.io


My idea for a startup weekend was such a hard-tech/software problem that it lost out to a couple of social weekend apps. While a lot of people got behind my idea and we put together a strong presentation, the lack of a finished app sank us. How do you compete with something launched on mobile in a weekend when you're at PowerPoints and mocks?


I've been an organizer of several hackathons and participated in Startup Weekend. The judging guidelines vary across organizers, theme, and city. In general, Startup Weekend events focus on business model generation and customer validation, with building an actual product following.

And remember, it's a weekend: the "winners" typically don't turn the project into an actual startup.

I think of it like school: you can learn because you want to know things, or you can learn to the test because you want the mark to move on.

Would you rather build something hard or build something more likely to win? From my experience, it's not the same apps that do both. I also don't think one approach is necessarily better than the other, just personal preference, plus also depends on the team dynamic you end up with. For example, I wouldn't try to solve a hard tech problem on a team with mostly non-tech people because their expectations can be unrealistic. But if my team is 3 experienced startup engineers that can iterate, well why the hell not?


The judging criteria for Startup Weekend have 3 parts: Business Model, Customer Validation, Execution: http://www.techstars.com/content/community/startup-weekend-j...

If your mocks were sufficient for gathering user feedback, that may be okay. I've certainly seen groups with no written software place in the top 3, off of designer mocks alone.


Another point on judging criteria is that it varies widely based on who your judges are too. Those categories are pretty broad with plenty of room for interpretation intentionally.

If the judging panel is less technical, you'll have to sell your idea differently. I've seen cool tech hackathon projects fail because of bad presentations, either because they're ineffective or they just didn't practice. It's kind of less than ideal that you're judged for the whole weekend based on how well you can sell in a 2-minute pitch, but then again it's not all that different from the real startup world.


Edit: I guess with the premise being "startup [in a] weekend", I shouldn't have expected my idea to win even with the team I picked up. Perhaps a startup month would work?


It's always better to develop most of the software with real users using it. So although the goal may be huge, start by building a piece that does something in a month and get real users using it, so the rest of the project can proceed with a compass.


"It's always better" - words like this make me think "unchallenged accepted wisdom".

Accepted wisdom remains accepted until eventually people start to realize that it's not entirely correct.


There is considerable experience with doing large software projects in one shot. Many corporate and government software projects are done, or at least used to be done, that way. Results were pretty bad.

It's hard to be positive of causation, but correlation (long time to initial customer contact <=> software doesn't make customers happy) is pretty strong.

If you want to challenge it, you could try to deliver a successful large multi-year software project with no prior customer engagement, and document the process. People can then make informed comparisons against the accepted wisdom.


Sure, one counterpoint, or at least nuance, is when you give too much focus to the wrong users.

Then wrong users can oversteer your product. For example, BigCo wants Feature X and will pay $100k/year for it, you add it but it complicates the experience for others and delays the rest of your roadmap.

Another is knowing when your users are wrong, they request Feature Y, but really what they want is Benefit Z.

Of course there's also nuance around how exactly you define minimum viable product (i.e., the part built before you start exposing it to users). I've seen experienced non-startup business people describe an entire app with every feature that they envision, then claim that's the MVP. It can be a hard model to adjust to.


As someone who has gone through YC, I think a lot of the kind of absolute talk you are pointing out has to do with consistent messaging and providing general advice to an entire group of people who might not have the experience to understand for themselves what the best way to proceed is.

Obviously in reality, things are a lot more nuanced than 'always', but when your goal is to help as many companies as possible at once, where the degree of experience varies, it often helps to simplify the problem. In one-on-one situations, the partners can give you more personalized insights and in general are quite insightful.


Software has a poor image due to a lack of perfectionists in the software industry. Buggy software is happily released. Correct software is very hard to build. I wonder how many correct programs have ever existed.


I don't think endemic poor software quality is about "lack of perfectionists." It's mostly about perfectly rational cost/benefit analysis. A lot of software is valuable for reasons other than code quality (e.g. network effects). In this case it makes complete sense that businesses would rather be first to market than produce something of the highest technical quality.

There are other issues at play, of course. In cases where quality and business profitability are strongly aligned - think avionics - then there is plenty of high quality software.


> It's mostly about perfectly rational cost/benefit analysis.

I disagree.

Look at any exploit database. A large percentage of the exploits on any typical day are the sorts of mistakes that a well-educated developer wouldn't make, or that best practices and modern tooling would prevent.

That's not even a cost/benefit problem, because in many cases there's no additional cost to doing things the right way. It's a culture and education problem[1].

[1] by which I do not mean traditional university education, fwiw.


I'm not so sure we do disagree :)

At the risk of splitting hairs... there are costs associated with finding developers who have this education and/or fostering a company culture where security best practices are important.

If software companies were seriously punished for compromising PII, then I imagine we'd see a change.


Maybe engineers are making a rational cost/benefit analysis too. In that perfectionism doesn't pay more or lead to promotion. Actually it can be harmful to call things out in a corporate environment.


buggy software makes money

perfectionists don't


Buggy (and bloated by extension) software never has my money at least. I never trust, for example, my battery life and my mobile traffic to software houses that are not exactly perfectionist.

Perfectionism should become a skill, not just a characteristic.


This should be a plaque somewhere.


So that it could be pointed to people who start wondering why almost everything in software industry is utter garbage built upon a pile of shit.

Yay, business interests.


One of the consequences of that is tons of cheap disposable software jobs everywhere, a crappy recruitment process, and good engineers literally drowning in the sea of the above.


No kidding. Just ask the Japanese - They were known for going through their software with a fine-tooth comb before releasing, but they started to realize how far behind the Americans they were (and arguably still are).


> Kinda strange that software is not seen by YC as "hard tech".

Could the reason be that pure software patents have a weaker status than hardware patents and/or are more difficult to enforce?


I read it as tech that is hard to build because of barriers to entry and an assumption that there isn't a good way to iterate rapidly.


> Sometimes it makes sense to make a big bet on building some complex software that might take months or years to build, with lots of moving parts, but once complete solves a big problem. That is software that is "hard tech".

If and only if a market for such software still exists and continues to grow when you deliver the product.


I have a hunch it is because many people, myself included, see a lot of software development getting replaced by generalized AI in the next 10 years.


Building these AI systems is itself a form of hard software development.

(I disagree on the premise, BTW. I saw a number of successful machine-learning systems at Google, and the most successful were always the ones that combined machine-learned functions - sometimes many of them - with traditional algorithms. AI is a tool that extends the reach of software into new problem domains, it's not going to replace software itself.)


What makes you think that we'll see generalized AI this soon? Is there some kind of progress beyond machine learning that I missed?


Hard = hardware, physical. Not "difficult"


FTA: "I use ‘hard tech’ to mean a startup where there is doubt that the technology can be built at all."


> Hard tech companies go through the same 3-month batch format as all of the startups we fund.

Is 3 months sufficient time for working on hard tech startup ideas? The original 3-month batch format was designed for software & web startups - YC's initial focus. It seems to me that the cycle time for iterating and testing different approaches to solving problems in hard tech would be much longer - hence require more time?


My personal opinion (not that I have any commercial experience with hardware development) is that it's not.

It can take up to a month to receive a specific part for a prototype, and prototyping takes a lot more time in real space than in software.

I can change a class or a module in some code and 15 seconds later have the new version running. Changing a single part in a hardware system could take a day of work, changing both the software drivers and the actual physical hardware.

You can't just NPM install a new part and see if it works, you have to actually order it and wait for it to arrive in the mail.


The ONLY way I can see this working is if your parts can ALL be 3-D printed in their entirety or milled in under an hour. That then relies on a very very good printer set-up, likely one that you own and have used for at least a year (temp/humidity issues). You can do some electronics and PCB design with a mill and a copper plated proto-board (mill away copper plate and you can make crude PCB 'wires'), but you still need a fully functioning and stocked components chest or you have to wait a week for Digikey. If the zeitgeist is going towards IoT stuff, and I think it is, then you must wait for electronics parts to link the 3-D printed gizmo to the hackers waiting to make it a spam bot. 3 months is unlikely to be enough time to really get anything working without a well stocked lab.


I just don't see it. From my perspective in mechanical design, you are at a minimum of 2.5 months from first concept to getting real parts. Hardware is just slow, and there is not much you can do about it without spending tons of money.


YC companies aren't "done" after 3 months. They keep building and growing for years to come. The 3 months is just how long it takes from first dinner to demo day. Companies of all types are typically able to show some kind of growth, traction, or progress in that time.


Yes, I do know that. However, the point is precisely that even getting from first dinner to demo day in 3 months will be much harder for "hard tech" companies than for your typical web startup. Which is not to say there won't be any progress - but the real question is whether there would be enough time for the multiple iterations typically needed to get the most out of the incubation time at YC.


> Very often, the first thing we do is help hard tech founders find a small project within their larger idea that fits the model of quick iteration and requires a relatively small amount of capital.

> Tesla is my favorite example of how powerful this small project + long-term planning mentality can be. Their vision has always been to bring an affordable electric car to the masses, but they first built the Roadster—the opposite of a mass market car—to generate revenue to get to the Model S.

so designing and building a top of the line sports car is a small project? what?


It's a lot smaller than Tesla's subsequent projects.

Remember, also, that the Roadster was based substantially on the Lotus Elise, and Lotus built the bulk of the vehicle. Look at it as a drivetrain project rather than a car project and it looks at least a bit more manageable (which isn't, by any means, to say trivial).


I agree with your point, but just for the record:

The Lotus Elise + minor delta was the original thinking, but Tesla/Elon has repeatedly pointed out that it was a mistake; the Elise frame was dramatically reworked to the point where it would have been better to have started from scratch.


There is this quote from a Honda engineer who said creating the original 70 mpg Honda Insight was a lot harder than creating their roadster/supercar project.

So yes it is.


Now is probably the best time to get into "hard tech" - aka industrial automation, automotive, aerospace & defense, energy & utilities, etc.

Look to GE for a prime example - a $250B manufacturing and processing behemoth rebranding itself as an agile tech startup, with GE Digital and Predix rolling out across many of its factories. Its 2015 10-K was titled "Digital Industrial" and really describes the company's new focus on "hard tech":

> We are just beginning our transformation as the Digital Industrial Company. The Internet has had a massive impact on consumer productivity and commerce. Its impact on industrial markets is just now being realized. By 2020, 10,000 gas turbines, 68,000 jet engines, more than 100 million lightbulbs and 152 million cars will be connected to the Internet.

> At GE, we have decided to generate and model this data ourselves—both inside the Company and with our customers. This is what we mean by becoming a Digital Industrial. Our Digital Industrial capabilities will expand our growth rate, improve our margins and bring us closer to our customers. There was a time when every sale had a clear endpoint, followed only by routine service and maintenance. Now, sensors on our products send constant streams of data, analyzed and translated into upgrades that drive productivity in industries where even the smallest incremental efficiency can mean very large gains. Capturing it will be a mission in every one of our businesses. Our aspiration is to offer with every GE product a pathway to greater productivity through sensors, software and big-data analytics.

> Why GE? I assure you we didn’t wake up one morning with “software envy.” We have been investing in software and accumulating data for decades. Competing will not require big acquisitions. Rather, the technology required to compete is in our sweet spot. So, why not us?

Similar story if you look outside of the traditional IT market and across the embedded market. Software is being layered on top of "hard tech" to collect data and provide value in a ton of markets that you won't see covered in Tech Crunch. Or maybe you will, given that YC is now pushing for founders with knowledge of these grittier industries.


This is a great comment.

If you really have a true hard tech company with a solid idea, you are probably much better off approaching industry partners than going the VC route.

If it is that good, you might hardly have to sacrifice any equity at all.

That's what I'm trying to do right now.

We are focusing on telecoms, utilities and large electronics corporates as strategic partners.

Early days, but so far seems like a good strategy.


What is "consumer productivity"?


Obviously, it's our ability to consume more rapidly! You know, one-click ordering :-)


> [Tesla] first built the Roadster—the opposite of a mass market car—to generate revenue to get to the Model S. The Model S then generated the revenue to start the Model 3.

Except the Roadster didn't generate enough revenue, Tesla was on the verge of bankruptcy, and Elon Musk swooped in to save the day (and have himself labelled a co-founder as part of the deal).

The Model X was supposed to be a quick reskin of the Model S chassis as an SUV, a stopgap model while the Model 3 was developed. But that turned out to be more complicated than expected, and Musk's insistence on those striking but complicated falcon-wing doors caused development and production problems that delayed it for years.

I'm readying the popcorn for the delays and production issues of the Model 3.


Interesting post, could be interpreted multiple ways:

- growth in pure/easy SW projects is stalling, need to find new areas

- trying to get away from the "bro" prejudice against SW startups

- YC has found a good way to bring rapid, iterative development techniques to hard/HW projects

Hopefully the third one but probably a combination of all of the above.


Maybe I'm just cynical, but my interpretation was more of "YC's over-arching strategy is to benevolently get a small piece of EVERYTHING, so we'd better convince new markets to come work with us."


I am somewhat curious whether Y Combinator is considering partnering with Cyclotron Road up at LBL. It seems to be focused on the same type of companies, hard science, but with an excellent model for supporting this kind of work.

As a scientist who founded a hard science startup (and now works for another), I feel like the hands-down biggest barrier is lack of access to free or low-cost space to work. These projects aren't the kind of thing you can do out of a coffee shop; it's just too dangerous. Plus, they require expensive capital outlays to do the initial work, even if you only need the equipment for one or two tests.

The fact that they partner with a national lab to provide those testing resources saves so much time and energy, which is better spent on finding product fit than on fundraising just to reach the point where you can find out how well your technology performs.

I'd strongly suggest at least reaching out to them to see if they can help advise; they've had amazing results so far.


> YC’s largest exit to date is a self-driving car company

Isn't that just because several highly valued software startups haven't exited yet? Judging by this list:

https://www.cbinsights.com/blog/y-combinator-startup-valuati...


Of course. I never saw the exact Cruise acquisition price, just "north of $1 billion", and you can see 6–7 on that list worth about a billion or more.

In fact, in her How to Build the Future talk [1], Jessica Livingston said that most unicorns have come out of YC.

[1]: https://www.ycombinator.com/future/jessica/


My two cents, from the industry I am in.

Thesis: I can't see any nuclear reactor that is still at the paper stage in 2016 beating the small modular reactors from Westinghouse (Gen III integral PWR) or GE-Hitachi (Gen IV PRISM SFR) in the race to 2030 for the next innovative fleets in the Western world.

Corollary: Elsewhere, China and Russia will probably risk more and do better at raising the bar of progress.

Action: What can be done at the YC level, then, imho? There may be a niche in standardising the design of the most critical components using data science methods: think of an Ikea for the nuclear island. Hard tech, reduced to the simplest problem with a customer, may be for the nuclear sector some critical process (a new material?) or some critical component (a one-size-fits-all vessel?), both affordable at the lab level and amenable to computer-powered engineering design.


It can take decades in-reactor to accumulate the irradiation dose needed to prove out really advanced fuels and materials. We really need an international nuclear innovation center complete with a flexible test reactor (high flux, fast spectrum, multiple independent coolant loops), legit post-irradiation examination facilities, core mockup facilities for mechanical design, flow loops for thermal/hydraulics, etc. This is extremely expensive to build and maintain, so a business model that can make money along the way (medical isotope production, Pu238 for space travel, easy access to many customers, etc.) is needed, though multi-mission stuff can add lots of institutional complexity. The national labs are supposed to play this role, and are doing so to a degree, but the current lack of facilities really saddens my advanced nuclear design soul.

Russia has operating sodium-cooled reactors (BOR-60, BN-350, BN-600) and low-power critical facilities (BFS-1&2). Using those, they can develop and test new nuclear structural materials and fuels that push the envelope. And shipping materials to Russia for testing and bringing them back for investigation is ridiculously hard. Politically streamlining collaboration is essential for nuclear progress.

The US shut down its best test reactors (FFTF near Hanford, WA and EBR-II in Idaho) in the 90s, so it's pretty challenging to iterate. At least we're trying to turn TREAT back on now.

So what can a nuclear startup do? Sam is right. You have to focus on small things and bootstrap yourself up. I was an advisor to Nuclear Innovation Bootcamp a few months ago and the team wanted to build a new giant reactor in Diablo Canyon's containment for hydrogen production. Big picture stuff. I encouraged them to focus on something more specific, like technology for coupling the nuclear island to an industrial unit (hydrogen, desal, ... anything) while being able to smoothly alternate power between it and the turbine (for load following on the grid). It's a lot less glorious to work on something like that, but that's the only way to get started in this field unless you're sitting at the non-existent international nuclear tech center, or on $1B of very patient seed money.
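
To make that coupling idea a bit more concrete, here's a toy sketch of the dispatch logic (all numbers are my own illustrative assumptions, not from any actual design): the reactor holds constant thermal power, and whatever electricity the grid doesn't take that hour is diverted to the hydrogen plant.

    # Toy load-following dispatch: constant reactor output, variable grid demand.
    # All figures (300 MWth, 33% efficiency, 50 kWh/kg H2) are illustrative assumptions.
    REACTOR_MWTH = 300.0      # constant thermal output of a hypothetical SMR
    ETA_ELECTRIC = 0.33       # thermal-to-electric conversion efficiency
    KWH_PER_KG_H2 = 50.0      # electrolyzer energy use per kg of hydrogen

    def dispatch(grid_demand_mwe):
        max_mwe = REACTOR_MWTH * ETA_ELECTRIC
        to_grid = min(grid_demand_mwe, max_mwe)
        surplus_mwe = max_mwe - to_grid
        h2_kg_per_hour = surplus_mwe * 1000.0 / KWH_PER_KG_H2
        return to_grid, h2_kg_per_hour

    for demand in (100, 60, 20):  # MWe the grid wants this hour
        to_grid, h2 = dispatch(demand)
        print(f"demand {demand} MWe -> grid {to_grid:.0f} MWe, H2 {h2:.0f} kg/h")

The hard part, of course, is doing that switch smoothly with real thermal hydraulics and real electrolyzers, not in ten lines of Python.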


Hello, great post, thanks! About the material: it may be an incremental steel alloy, for which data science methods can help a lot. For the learning part, feed in the results (the plots of dpa irradiation vs. damage in terms of displacement, swelling, bubbles, or whatever is relevant) from the 2000-2016 papers from Western and Chinese universities aiming at the same target; for prediction, let deep neural networks or extreme gradient boosting cluster, classify, or rank the nearest alloy that solves your most urgent technology readiness problem. This would help speed up the process, imho.
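
In case it helps make the idea concrete, a minimal sketch of the ranking step, assuming you had already scraped the literature into a table (the file names, column names, and the choice of scikit-learn's gradient boosting are all hypothetical):

    # Minimal sketch: rank candidate alloys by predicted swelling at a target dose.
    # Assumes a hypothetical CSV of literature results; all column names are illustrative.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor

    data = pd.read_csv("irradiation_results.csv")    # aggregated 2000-2016 literature data
    features = ["dose_dpa", "temp_c", "cr_pct", "ni_pct", "ti_pct"]
    model = GradientBoostingRegressor().fit(data[features], data["swelling_pct"])

    candidates = pd.read_csv("candidate_alloys.csv") # compositions you could actually procure
    candidates["dose_dpa"] = 100                     # target end-of-life dose (assumed)
    candidates["temp_c"] = 550                       # target operating temperature (assumed)
    candidates["predicted_swelling_pct"] = model.predict(candidates[features])
    print(candidates.sort_values("predicted_swelling_pct").head(10))

Whether the published plots are consistent enough across labs to train on is its own research problem, of course.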


Out of curiosity, do the "hard software" startups at YC (ones where there is doubt that the software can be built at all) share more DNA/commonalities with the "hard tech" startups or with their fellow software startups?


VC is apparently awash in capital ($11 trillion in bonds now trading at negative interest rates, as Andreessen notes).

The Internet was born out of probably the world's most successful long-term-focused VC: DARPA.

Now I can't tell if this is the VC community going back to a long-term model, investing for decades before expecting a return. $11 trillion would do a lot of good invested for ten years in the globe's best and brightest.

If we don't know whether the tech will work at all (fusion, etc.), then YC is effectively funding academic research. Which is fine, but I assume the right way to build a business may not be the right way to choose the next research project, Gittins index notwithstanding.

Are these businesses measured in some way differently from the next Uber? Different P&L expectations, different pool of funds? If not, then ... ?


My first company, Light Up Africa Inc., a hard tech startup, was part of the Impact Engine in Chicago. We did not have access to the 3D printing, hardware facilities, and labs that would have helped us prototype and iterate effectively. And that is just step 1. We then had to manufacture in parallel while continuing to do research.

Also, I can attest that it is extremely difficult to build a startup with a heavy research focus. However, from the brief write-up presented, I don't see how YC is in a position to help build and scale hard tech startups, especially those with a tremendous need for capital investment to further research and manufacturing.



As much as I like YC and this particular board, their standard deal is $120K for 7% of the company. That amount of money is so ridiculously low for almost any serious hardware project.

Just consider what $120K buys you: barely one mid-level engineer for a year (don't forget the taxes and social security). In that year you would have to create, mass-produce, and sell the product, just to keep paying that salary.
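
A rough back-of-the-envelope (the salary and overhead numbers are my own assumptions, not YC's or anyone else's):

    # Back-of-the-envelope runway on $120K with one hire; all figures are assumptions.
    funding = 120_000
    salary = 130_000              # mid-level engineer, annual, assumed
    overhead = 1.3                # payroll taxes, benefits, etc., rough multiplier
    monthly_burn = salary * overhead / 12
    print(f"Runway with one hire: {funding / monthly_burn:.1f} months")
    # -> roughly 8-9 months, before any hardware, tooling, or rent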

I'm surprised YC doesn't offer something like $500K for 50% of the company.


This is a misunderstanding of fundraising. If you give up 50% of your company, no investor will touch you unless you're already profitable or growing like Uber. So unless you believe you can go from zero to unicorn with $500k, giving up such enormous equity is doom for the company.

That $120k is intended to last 3 months, not 1 year. And afterwards most companies go on to raise 7 digits for much less than 50% of their companies.


The typical trajectory is to take YC's $120k for 7%, gather some evidence that the idea will work, then raise $1-2M for 15%, then get it really working at small scale, then raise $10-20M for 25%, then scale up. The farther along you are, the more money you can raise for a given amount of equity, so it makes sense to raise money in stages.
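
A quick sketch of how that compounds for the founders, using the (hypothetical) percentages above and ignoring option pools, pro-rata, and note conversions:

    # Founder ownership after successive rounds, using the percentages in the comment above.
    # Ignores option pools, pro-rata rights, and note conversions for simplicity.
    founders = 1.0
    for round_name, sold in [("YC", 0.07), ("seed", 0.15), ("Series A", 0.25)]:
        founders *= (1 - sold)    # each new round dilutes existing holders proportionally
        print(f"After {round_name}: founders own {founders:.1%}")
    # -> roughly 59% still in founders' hands after ~$12-22M raised,
    #    versus 50% gone on day one under the $500K-for-50% deal suggested upthread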


Why would this be a better path than using SBIR/STTR funding and the associated ecosystem for proof of concept? Unlike YC it is non-dilutive.

It's something I've long wondered but it's never been answered to my satisfaction.


SBIR/STTR funding is non-dilutive but it is far from free. To apply, you: (1) Spend a month or more writing the grant, doing all the paperwork, getting letters of support, etc. (2) Wait up to a year to see if you got the money; the hit rate is often 15% (on average you need to apply about 6 times to get 1). (3) Get ~$150K, if you're lucky, for Phase I, and this money is only for research. You cannot use it on lawyers or keep it in the bank. You also have to spend the money within 8 months and write a report on your progress. (4) Then apply for Phase II, which can be up to $1M. Even if everything goes right, it will take at least 2 years from when you start applying to when you receive a Phase II award.

It's great money if you have all the time in the world and really like writing grants, or if you have no other choice. But you will pay for it with your time. Also, you will be at a huge disadvantage unless you have a PhD and university contacts. Writing successful grants is a difficult skill that takes a lot of practice, and you will be competing with people who ONLY write grants.
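
For a sense of what that 15% hit rate means in practice (taking the commenter's figure at face value):

    # What a 15% per-application hit rate implies; the 15% figure is from the comment above.
    hit_rate = 0.15
    expected_tries = 1 / hit_rate           # mean of a geometric distribution
    p_zero_after_six = (1 - hit_rate) ** 6  # probability of six straight rejections
    print(f"Expected applications per award: {expected_tries:.1f}")
    print(f"Chance of nothing after 6 tries: {p_zero_after_six:.0%}")
    # -> ~6.7 applications on average, and still a ~38% chance of coming up empty after 6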


I think what YC gives you is primarily ideas on how to get a minimal proof point out without hiring that one engineer.


The value of YC isn't the money; it's the publicity. Just look at the front page of Hacker News: I've seen 8 out of 10 of the top stories relate to YC companies. Demo days give you immediate access to the press.


By "hard tech", YC apparently means "has technical risk", as opposed to a marketing-based business.

There was a time, pre-2000, when VCs did almost entirely "hard tech" startups. VCs were more profitable then. But the weakening of patents by the anti-patent lobby has made this infeasible as a business strategy. You have to buy market share now before someone steals your technology.


Not sure I'd agree that the weakening of patent protection has made it infeasible. To my eye, technical-risk startups are still very defensible; it's just that the early technical advantage needs to grow into a branding or data advantage that is more defensible long-term. And I'd say the reason for this is a combination of more people getting into tech startups and consumers being conditioned to accept less. Flashy couldn't-be-done-before technology makes customers stand up and take notice, but pretty soon they start focusing on the parts that are really useful in their daily lives, and chances are there exist 80/20 solutions that get most of the way there but can be implemented by any one of a number of competitors.

I still think that technology risk tends to be underpriced by VCs today. The pendulum's swung too far, where everyone assumes that the tech is just a commodity and what really matters is how many Facebook ads you buy, and that's left several openings where consumers are not consuming because existing solutions suck too much.


How have tech patents become weaker? Unlike older subject areas, you can still be given decades of arbitrary pricing power for being the first to claim all the obvious solutions to a straightforward problem.


Why are patents weak now? Why were they strong before?


America Invents Act, easy and repeatable post-grant challenges, no more injunctions, very limited punitive damages. Worst case for a patent infringer is having to buy a license.


I prefer Hard Business to Hard Tech. Some businesses may be easy on the tech side, but very hard on the customer acquisition side because they require a complete change in the way people think and/or behave.

In that light, these were easy-tech but hard-business companies that ended up being worth a lot: eBay, Amazon, Uber, AirBNB, Dropbox, and more.

EDIT

10 of the top 15 Unicorns

Uber - Not Hard Tech

Xiaomi - Kind Of Hard Tech

Didi - Not Hard Tech

AirBNB - Not Hard Tech

Palantir - Hard Tech

Snapchat - Not Hard Tech

Wework - Not Hard Tech

FlipKart - Not Hard Tech

SpaceX - Hard Tech

Pinterest - Not Hard Tech

Dropbox - Not Hard Tech


Uber's core business isn't hard tech, but they're investing mountains of money in hard tech.


Or to say it another way, providing v1 of the service wasn't hard tech, just a mobile app with routing and payments. Scaling up and driving the cost down is.

This is a great way to build a company, because you have real data about which scaling problems are actually important, and a real operation to incrementally test solutions in.


Actually, I was thinking of self-driving. But that too.


God help us if Palantir is hard tech


How is Palantir hard tech ?


Say I have a technology that's related to autonomous vehicles. It requires hardware and software. How would joining YC increase the odds of the technology being adopted by auto manufacturers? It's not a consumer product, and it's a technology that aids and improves autonomous operation but does not actually drive the car. It would be nice to hear from anyone from YC or with relevant experience.


The mentors provide excellent feedback on what you should focus on at any given time, and they are great at troubleshooting problems. In addition, they have a great network (the YC alumni, which include some self-driving car companies, as well as investors) and may be able to connect you directly to the companies you want to work with.

One of the hardest things for people who have never started a company before is understanding just how hard it will be. Having people on your side can make all the difference, if only to improve morale and help you maintain your sanity.


Thank you for the reply. Is there something in YC that is not for pure startups? Something to support R&D with a commercial goal that does not necessarily follow the common business path?


I think YC's criterion for admission into the core program still applies: do they believe it can become a (ten) billion dollar company? They recently created the Fellowship program to fund more companies at an earlier stage, but I don't know too much about that.

It's hard to give you more feedback without knowing more. If you don't want to focus on growth and realistically take VC money, YC might not be for you (these are generalizations; I'd argue Wufoo was a YC success story, and they barely took any money at all).


This is great feedback. I will make sure to contact YC and simply ask them. I'm not really interested in VC but the network seems very valuable.


Hmm. I'm only half serious, but if anyone wants to compete with Mylan's EpiPen, ping me.

They've taken the price from $100 to $500+ over the course of a decade. There's got to be a way to (1) make epi-pens for $25-$50 (which Mylan does), (2) sell them for $50-$100, (3) get stupid rich (4 million are sold per year in the US alone), and (4) do good for the world as well.
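
Rough math on those numbers (all of them are from the comment above, so treat this as ballpark only):

    # Ballpark economics using the figures in the comment above (midpoints of the ranges).
    units_per_year = 4_000_000    # US sales volume from the comment
    cost, price = 40, 75          # midpoints of the $25-50 cost and $50-100 price ranges
    gross_margin = (price - cost) * units_per_year
    print(f"Gross margin at those midpoints: ${gross_margin / 1e6:.0f}M per year")
    # -> about $140M/year, before regulatory approval, distribution, and liability costs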


I watched Chad Rigetti's talk (https://www.youtube.com/watch?v=GzMvG8UO6Eg) this morning, which was really exciting.


Alright. I'll apply. I'm barely through the business model stage, but I'll do it. I'll lose some sleep, but if YC is serious about hardware then ok.


If you're starting a company, be prepared to lose a lot of sleep moving forward :]

Shoot me an email [redacted] and I'll help you with your application if you want.


Cool!


"We look for brains, motivation, and a sense of design. Experience is helpful but not critical.

Your idea is important too, but mainly as evidence that you can have good ideas. Most successful startups change their idea substantially."

Could you possibly be more la dee dah?



