After the end of the startup era (techcrunch.com)
418 points by imartin2k on Oct 22, 2017 | 297 comments



I look at it from another end - I feel there is a sort of an app fatigue. Maybe it is just me. I used to be excited about Uber-like things, note-taking apps, Dropbox, coupon sites, review sites. Installed some of them, used them, then kind of stopped. Someone at work would say "you should try this new thing, totally cool" and I'd be inclined to do it, but now I'd probably pass.

Extrapolating scientifically from a sample of 1, it would seem that a small loss of excitement about trying new services and apps across the larger population would be enough to prevent the fabled hockey-stick growth. No hockey sticks -> no unicorns -> less VC money for new startups -> articles about the death of startups -> comments on HN about articles about the death of startups


That might just be because you are getting older and value your time more.

I used to try every new app, every new service. I used to run Linux on my desktop. Doing those things was fun and challenging, but took time.

I have kids now. I value spending time with them over fiddling with the latest app or spending an hour tuning my kernel so I can run my window manager at max resolution. I use a Mac and it just does that for me now, albeit with far less customizability and fun. That's the tradeoff I'm willing to make.

I'm not saying you're wrong, but I'm saying your sample group has an age bias. :)


You don't need to customize your software every month. I haven't customized my configs in over 3 years now. I am running Linux and it is a one-time effort.


No offense, but that's kind of the fun of Linux on the desktop.

Also, if you really haven't updated in three years, that means that you're still running kernel 3.16, which has 81 known vulnerabilities[0], 12 of which are critical. That's the kind of stuff I don't really want to worry about anymore.

[0] https://www.cvedetails.com/vulnerability-list/vendor_id-33/p...


No idea what your definition of customization is, but running a package update command isn't what I consider customization. And updating the kernel/packages doesn't require you to customize them.


I get both sides of this. For 17 years I ran Linux everywhere. FreeBSD as well, but infrequently as a desktop. After two years of Mac, and seeing the breakage and perversion of the Unix foundations Apple inflicts, I gave up on it (entirely) and work with Macs only under protest.

So now? Linux and *BSD for paid work. Windows for the desktop. Cygwin and MobaXterm FTW. Everything old is new again.


Long time (but not very religious about it) Linux user here.

What operating system allows you to "not worry about" OS security updates, and how is it different from Linux?


On a Mac, you get a popup that says there's a new OS update, hit the spacebar with your forehead, get a drink while your computer reboots, and when you get back, it's restarted and put all your application windows back the way you left them, including terminal contents.

In my experience, on Linux such installations are smooth for about a year and a half, and then start failing for cryptic reasons, even if you've done everything possible to pick a mainstream distro, ideal hardware, and keep the machine as vanilla as possible.

(And I say this as someone who's been using Linux since the 1990's.)


I use a rolling release distro (Void Linux); like I mentioned, I haven't had to do any such thing for about 3 years.

I also take care of my parents' computer with Ubuntu, and I have done two major release upgrades and experienced no errors.


> On a Mac, you get a popup that says there's a new OS upgrade, you hit the spacebar with your forehead, get a drink while your computer reboots, and when you get back, it's restarted and put all your application windows back the way you left them, including terminal contents.

The latest update introduced many problems, though

https://www.google.com/search?q=high+sierra+breaks&ie=utf-8&...


And it's major "man bites dog" news. When a major Linux distro does that, it doesn't even merit mentioning on Slashdot anymore.


Installing Linux takes no more time and is in no way more complicated than installing Windows. Even installing Ubuntu on my laptop was a simple affair (including having the wifi work right away, in the installer).

What you've said may have been true several years ago, but it isn't anymore.


Apps have also gotten much worse than in their heyday. They are all built in whatever the latest cross-platform iframe-with-makeup framework is, which always feels clunky, and they all have to start out as free to get their foot in the door, and then make it their life's purpose to annoy you repeatedly until you cave and give them $.99 every month. The hardware space is now much more diverse, so a developer doesn't have the luxury of making pixel-perfect, detailed experiences, because their app has to work on watches and refrigerators as well as phones (and now there's even a phone screen with a forehead, thanks iPhone X).


Using phrases like 'all built with', 'all have to start out as free', and so on, is fairly inaccurate. There are plenty of apps that run native, cost money upfront, and are still viable, profitable businesses.


The "unicorn" in this statement is simple "build something people want" (to pay for)

You need to add value to others no matter what you do, that's the definition of "building"

Value is what is extracted from work.

"Buildings" are the expression of physical labor put into something OF value.

It's literally defined in our lexicon.


It's not that apps are worse, but that good apps are difficult to discover. Now the conditions are such that if you have a larger budget, you are treated as better by the major platforms. That's the real problem.


Apps have never been that exciting to me, especially in the "on a smartphone" incarnation. Partly that's because I'm a h/w and s/w guy, but also because it's clear from a number of angles that only so much real wealth/happiness/etc can be had from just shuffling bits around without reference to the physical world. (And I say that as someone who worked in derivatives finance for the best part of 20 years!)


Mobile devices are very constrained. The OSes are locked down, the screens are small, high CPU apps are out, and input is slow and cumbersome.

After trying a lot of apps I have found that a smart phone has only a few functions:

1. Talking and texting.

2. Taking pictures and video.

3. Maps and directions.

4. Music, movies, ebooks.

5. Brief interactions with services like Uber, Lyft, AirBnB, Los Angeles metro, etc.

6. Casual web browsing. (I want a bigger screen and a keyboard for anything in depth.)

7. edit: Casual gaming.

Looking at how most other people use phones I don't think I am alone.

These devices are limited and I don't think it took us long to exhaust their potential. There just isn't much else a phone can do well (and that locked down vendor fiefdoms will allow).

The PC era on the other hand gave us a long, wide, and deep trench of innovation that is still not exhausted. PCs are more open, more extensible, and have a much wider IO path to the human in the chair. The breadth of what a PC can do is incredible.

(The main problem with PCs is antiquated, insecure, bloated operating systems. Fix that and I think you'd see even more innovation.)

Of course I have been a mobile skeptic since the iPhone. I just saw the next incarnation of the feature phone. Since it was still locked down by carriers and vendors and since its bandwidth to the user is poor I knew it would not deliver lasting innovation.


I suspect there is money to be made selling a "just a phone" that pretends to do only your list above and heavily optimises the usability and reliability of that subset. Any user-visible changes to those apps would be made only very conservatively.

It would be a lie though, since it really would be a Turing-complete phone with lots of programs on it. That was true even of the old Symbian phones, and items 3-6 go well beyond what they did.

Nonetheless, I think there is benefit to be gained by trading flexibility for reliability. One business difficulty/opportunity is how you pick out services like Uber, AirBnB, etc. I suspect the solution is profitable, but user-hostile: lock them into a vendor when they buy their phone.


Exactly. I always cringe when I hear "build mobile first" - that's just stupid. The "flappy bird" era is dead, agreed! But there is so much more potential for full-blown websites (or "webapps") that require a PC/laptop and a larger screen.


Mobile could be more innovative if it were more open and modular. Think of all the hundreds of things that would never have happened on the PC if they were as closed as phones.


> The main problem with PCs is antiquated, insecure, bloated operating systems. Fix that and I think you'd see even more innovation.

In a way, it's a feature, not a bug. The security of mobile devices is created by the very same thing that limits their usefulness - by OS being vendor-controlled, fully sandboxed and locked down, so that users (via programs acting on their behalf) can't break it. I'm not sure if it is even possible to have a secure system that allows you to do anything interesting on it.


You forgot the biggest smartphone market:

7. Games


I guess I don't game much, but yes, that can be added too. Still, there is a huge gap between phone games and PC and console games.


There definitely is a gap but not in the way I think you meant; in case you were not aware, mobile games are now the biggest market segment of video games period (console, PC included)


Considering you missed the #1 market, maybe your research techniques are lacking?


Also, for clarity, the browsing and gaming done on phones isn't casual.


8. Mobile wifi hotspot.

I loathe using my phone for most things. Any opportunity I get it is hotspot + laptop.


I never really got the feature phone for anything more than data and voice either. When you add in the downsides of those mobile features - MRC, $1k for a decent phone, and the moving targets all the app providers put out - it's easy to see through the facade to a bubble market based on people's low tech IQ and attention deficit, both encouraged in/by other popular media.

The only upside is it's easy to track you with these devices by those who care about you the most: big $$ and big brother.


I've almost given up on the Play Store. It's overflowing with all sorts of useless garbage. Want an alarm clock app? There are hundreds, if not thousands. I couldn't even find what I was looking for. Paid apps, ads, in-app purchases. It's an alarm clock for heaven's sake.


https://play.google.com/store/apps/details?id=com.urbandroid...

Paid.

Used it for years. Just works.

Integrates with my Hue lights.

All kinds of settings, including one that refuses to turn off the alarm if I don't use the NFC code in the bathroom (you can use QR codes or plain barcodes as well; just hide them out of reach from your bed).


That's the ONE paid Android app that I have been using for 5 years now. The only other paid app I use is Action Launcher 2 (3 didn't click for me).


Happy Tasker user here. Well worth the little money I paid for it.


If someone is still visiting this thread I'm still looking for a notebook app. Something to replace my paper notebook (because that can neither be backed up nor properly encrypted).

Keep is almost there with a combination of text and drawings. Missing:

- links to other notes

- checkboxes in between text (currently notes in Keep are either all checkboxes or no checkboxes)

- tapping on tags should show all notes with that tag

OneNote is also almost there.

Samsung notes is also almost there.

Maybe what I wish for most is some way to link from my calendar to Keep and back, etc.

However, seeing how I still cannot even link to a mail in desktop Outlook I'm afraid this is only a dream.


Honestly, this is a simple problem with a simple answer. If you look at real life, the Play Store is like a state-owned bazaar where you can find great high-quality products next to useless crap that breaks minutes after you've bought it. Google is the state.

In order to fix it, Google should open up the "Play Store" to other providers. You could then end up with the equivalent of both high-end stores and corner shops, selling products of a quality that matches their brand, rather than a mishmash of everything.

If you think about it, Apple suffers from a similar problem. In their case they allow in only high-quality, impeccable software, and there's no place for corner shops. Still, "the app economy" is state-owned.


What's wrong with Google's offering?


Doesn't everyone on Android use Timely?


To answer your question, no.


I thought everyone used Sleep as Android, but I guess I haven't been following the news.

And that's just me, someone who tries to stay somewhat up to date with the ecosystem and generally understands tech. How is my mother going to discover which alarm clock app is useful, performant, and doesn't mine bitcoin or exfiltrate her photos to the cloud? I have no idea.


I do now, thanks!


TBH I'm fine with Google's Clock app.


There is a problem with apps, but we should look at it from another angle. We all know that in the App Store and Play Store, popular apps become more popular over time, and it is difficult for smaller companies to promote their products even if their solution is better.

The app market is now a market where a couple dozen apps make up most of the revenue. For small developers, there is too little chance of being seen at all.

Actually, the whole article makes sense. Nowadays, we all see that it's difficult for small startups to promote themselves. The privilege on major platforms is given to those who pay more (e.g., to Google). Even if one company gains enough traction and begins to become successful, a larger corporation buys it (in the best case) or does everything it can to keep the small competitor from gaining market share. That's a very big problem which, if not solved, will result in a slower rate of progress.


Dude you're just growing up that's all. You'll find the sandbox is also not as fun as it used to be.


Playing in the dirt is still more fun than tapping out Beets in Dirt Farm Island Saga-TM powered by interstitial ads-after-every-beet or add Nitrogen for only $0.99


I agree with you, I just install the basics these days when I get a new phone.

Objectively speaking there are numbers to prove we are not in the minority.

https://www.recode.net/2016/9/16/12933780/average-app-downlo...

Quick facts from that article:

- 50% of users download zero apps per month

- 13% of users account for more than 50% of the downloads


A.k.a. reality 101: there's no hockey stick growth that lasts forever.


>>I look at it from another end - I feel there is a sort of an app fatigue.

For all practical purposes, the early startup riches are now gone. There is also a degree of low-hanging-fruit value that gets taken early in the life of any industry; first movers have had that advantage. After that you can have a lot of small companies providing value in small niche places, but the big-money value is now taken.

Every once in a while some nice idea will come along and get big, but those will be the exception rather than the rule.


>IoT devices [are] hard to prototype, generally low-margin, expensive to bring to market, and very expensive to scale. Just ask Fitbit. Or Jawbone. Or Juicero. Or HTC.

I feel like this is the public's idea of "IoT," while I've learned after a year of working for an IoT stack provider that the real "internet of things" explosion is found in places nobody looks except the procurement guys - the stadium lights, the refrigerators, the factory doors, etc. Small, single-minded devices sending one or two bytes of data, spread en masse through an industrial area.


> IoT devices [are] hard to prototype

Here in Shenzhen, I would say no. They are cheap as chips. I have multiple friends doing consumer IoT devices (lots of BLE) with single-dimensional backgrounds (often software) who have been able to iterate through multiple prototypes and produce concepts within months. The chief problems for IoT products IMHO are marketing and replicability (low security for initial investment).

The fact that hardware is expensive to tackle in SV just means SV is badly positioned in this sector, not that it's inherently difficult or expensive. BTW, we just had the Hacker Trip to China (by Noisebridge founder Mitch Altman) roll through town here.


I mean, even outside of Shenzhen, if you want Bluetooth, the NRF52832 has fantastic hardware layout/design documentation and a frickin' Cortex-M4F processor for core logic and communicating with the BTLE stack. The ESP32 has wifi and Bluetooth and has dealt with all that FCC crap (but uses a weird MCU core). Etc, etc.

Living next door to a few factories probably reduces your iteration times from weeks to days, but IMHO - and maybe this is the 'H' part, I'm still learning - the time that you as an individual wind up spending on design and figuring out software issues is pretty sizable, even compared to a fortnight for a nice 2-4 layer PCB.

Either way, we are certainly living in a wonderful time for rapid prototyping. Dropping the 'IoT' tag, embedded systems are super, super exciting in all kinds of areas; telecommunications is just one of them.


Kind of off topic, but how much markup is there compared to AliExpress? It seems prototyping using parts from there is pretty inexpensive. I heard about giant multi-story marketplaces in Shenzhen, but I was curious if they are much cheaper or just a lot faster, since everything you need is in one building.


There's more choice online but you have to pay shipping for everything. Quantities available, prices and discoverability are less attractive in the marketplaces but you can potentially save time and there are certain stores where you can get what amounts to nontrivial free in person implementation or product selection consulting. In general we buy online because we source so many things, but if you are starting out in electronics or just need some standard parts for cheap then the markets are pretty cool.


How does a software person get that far with hardware that fast?


It's really not that hard. A bit of electronics comprehension is the only requirement, and there are many helpful people about and hackerspaces for those who are new. For example, this Wednesday my favourite hackerspace https://lab0x0.com/ is offering a free introductory course in English for people totally new to Arduino. There are tonnes of books and resources online, and a friendly 24x7 WeChat group for live help. A shopping trip to buy sensors and bits in town here can be completed in half a day and under $100 (including soldering equipment); after two days of hacking, anyone motivated will be on top of the basics.


That's called "machine to machine communication", or "M2M", by the companies who do that. Much of it is still over pager networks. There are hundreds of thousands of industrial HVAC systems, quietly sending "Inside temp 72, outside temp 87, compressor 1 on/OK/2421 operating hours, compressor 2 off/OK/3423 operating hours, coolant pressure OK, water level in chiller OK, city water pressure OK..." back to a maintenance service.


Exactly. Google "Industrial IoT" or "Industrie 4.0" and it's pretty clear that some major changes are happening with IoT, far away from consumer gadgetry like Nest and an IP-enabled light bulb.


Having worked with some companies in the field, I feel "Industry 4.0" is just about the same level of marketing fad as "Sharing Economy". Something useful will eventually appear there, but my impression is that there's a lot of papering over crappy tech with sales, and people are jockeying for the position of being the platform provider who sucks everyone else into their SaaS offering. In a way, it doesn't look that different from consumer IoT, which is probably why people think calling it IIoT is a good idea (instead of trying to separate themselves from the accumulated bullshit that IoT is).

Not the kind of revolution I fought for.


Yeah, it seems like you could replace 'IoT' with 'physical product.'

Sounds like the author has been living in the land of hand-waved 9-figure growth estimates for a bit too long.


No you cannot. A brick is also a physical product. But it doesn't communicate. There needs to be software and technology involved, and more than one object needs to talk via a network.


That's true, but a brick still requires a sophisticated set of knowledge which we accept to be public domain by this point, after a few thousand years.

You need to know how to make and fire clay, what sorts of additives can help, how to package and ship them en masse...

There are a lot of problems involved in bricks. OSHA once categorized bricks as hazardous materials (don't breathe the dust when grinding them). We've just gotten really good at dealing with those problems because of how useful bricks are and how easy it is to teach people about making them.

Anecdotally, I once took an archaeology course in college, and we had a guest speaker come in who knew how to do flint-knapping[1]. Like, prehistoric toolmaking out of volcanic rocks. I'll bet you'd run into some real hurdles trying to market an obsidian knife today, if you didn't have experience in that area.

[1]: https://en.wikipedia.org/wiki/Knapping


I think the GP's point is that software is not the hard part when you're dealing with physical products, connected or not...


Well, who means what is really hard to discuss at this point.

The article's author wants to point out that the hardware part must be so cheap that you can't make any profit in that area any more. But you still need to get it right, otherwise you can't sell anything.

Komali doesn't disagree with that point, but adds another: that most of IoT is actually not in the consumer area but in industry, where people don't see it but where the scale in terms of unit numbers might be much bigger. Also a good point.

Then leggomylibro points out that you could replace "IoT" with "physical product", which may or may not imply that hardware is the hard part in IoT.

In the literal interpretation I disagree with the equals sign there, because a brick is not "IoT". There shouldn't be much to say about this. IoT _is_ about the communication, right?

Considering your interpretation of his statement - that the hardware is the hard part in IoT - I wonder how you define "hard part". The physically important parts like chips are quite well understood, I believe (not an expert though); we churn these out in the millions each year and they mostly do their job. But the points where these chips actually need to be connected to physical I/O and to software - those are horribly underdeveloped areas. There are few skilled people, few debugging tools, few standards. If that's the area you are talking about, I agree.

However, if we talk business, we also need to talk money. And while you have one or two suicidal hardware architects on such a project, you probably have 50-100 software developers, who will use open-source plugins, libraries, and operating systems, which in turn are each developed by 50-100 developers. So the whole cost of software is orders of magnitude higher than the cost of developing the hardware. And that factor is shifting toward the software side on a daily basis. The hardware will converge more and more on standardized processing units (GPUs, SoCs, touch displays for I/O) and the main activity will happen more and more in software.

So considering that part I have to say nope, software is the harder part (to finance, to develop, to finish) here.


What I mean by physical part being the harder here is that it's much slower and more expensive to iterate on.

You can make a lot of changes to software within a single day, for free (sans programmer time), and deploy them to all your customers. With physical items, you usually need to fab a new version for each significant change to test it, which is a time-consuming and expensive process. This is not something electronics-specific, this applies just as much to a new brick design.

This is a qualitative difference that makes physical product design an expensive process - you need to get all the things right before you start shipping; you can't just patch things after you deploy. But, as the article correctly points out, ultimately the manufacturing of a correctly designed batch of a physical item is cheap. Which means it's hard to make profit, especially if you spent all that money iterating in your lab, and then someone just reverse-engineers your final design and starts pumping out copies.

In case of IoT - common processing and communication chips are cheap and easy to get. Electrical engineering is hard and expensive, product design is hard and expensive, and - compared to that - software is cheap, because all IoT companies are doing is bog standard cloud-based CRUD.

(If they make the software part needlessly complicated for themselves, that's another problem, but it's endemic to this industry anyway.)


I see and agree that, comparing equal-sized hardware and software components, the hardware will be harder to develop and more expensive. I don't think this holds true from a project perspective though, since the ratios are often something like 1 hardware to 10 software or even more.

But your argument still holds in that software iterations are quicker and easier to achieve, which is also why this factor will only increase.


From the article:

"We’re already seeing this. Consider Y Combinator, by all accounts the gold standard of startup accelerators, famously harder to get into than Harvard. Then consider its alumni. Five years ago, in 2012, its three poster children were clearly poised to dominate their markets and become huge companies: AirBnB, Dropbox, and Stripe. And so it came to pass."

"Fast forward to today, and Y Combinator’s three poster children are… unchanged. In the last six years YC have funded more than twice as many startups as they did in their first six — but I challenge you to name any of their post-2011 alumni as well-positioned today as their Big Three were in 2012. The only one that might have qualified, for a time, was Instacart. But Amazon broke into that game with Amazon Fresh, and, especially, their purchase of Whole Foods."

Look at the list of Ycombinator companies from the 2012 batch.[1] Where are they now?

[1] http://yclist.com/


The journalist reveals his or her ignorance by not considering Gusto or Flexport to be well positioned, imo.


Several companies with $1B valuations were missed:

  Reddit
  Zenefits
  Gusto
  Docker
  Flexport
  Coinbase
  Twitch.tv
  Quora
  Door Dash
  Machine Zone
  Mixpanel


Plus the article could do with some basic fact checking.

"The web boom of 1997-2006 brought us Amazon, Facebook, Google, Salesforce, Airbnb"

Wikipedia AirBnB page: "Founded August 2008; 9 years ago"


Also iirc Amazon predates 1997. 1997 is its IPO date.


Yeah, Gusto is a big win IMO. Payroll stuff sucks, especially for early stage businesses. I've used all the services over the past 15 years, my current company is using Gusto, still sucks, but sucks the least, best price, best service. And 1000s of small businesses trust Gusto to process millions of dollars of their most sacred money each month. Underrated.


Zenpayroll was great! Gusto sucks. And the change took place right around the time of the name change.

Now, both Gusto and Zenefits are trying to play the same game and both are basically parasitic companies feeding off lock-in and transaction fees.


Really curious what changed between ZP and Gusto. (we only use their basic payroll product)


Which payroll and benefits providers make their money a different way?


In terms of investment round valuations there's Instacart $3.4bn, Coinbase $1.6bn and Gusto $1bn.


Reminds me of what got me to this site many years ago. I was reading a Lisp programming book that had a footnote that somehow got me to HN.

As a teenager I got enamoured with the idea of the future of tech being driven by small, fast-moving, hacker-friendly companies rather than big corp, as portrayed in PG's writings and this site.

I remember how I increasingly became disillusioned as I gradually realized that many (most?) startups were optimizing for the acquisition by the same big corp (investors gotta cash out, right?) instead of building long-term sustainable businesses.

This seems to have attracted the kind of personality that just wants to play the game, cash out and get rich.


Agreed. Working for a company that optimizes for cashing out loses its luster really quickly. As someone who's been on that hamster wheel for too long, I have to say it'd definitely be more interesting to work for a company that builds for long-term sustainability.


I like the Hollywood model of bespoke teams for projects. A stable salary could be created via long-term "royalties" that each project creates (or may not be required if each project is big enough).

This model requires projects that have long term stability upon creation but also a recognized lifespan due to attention attrition. Like movies.

A lot of bad code nowadays exists because of pressure from "creators" who don't actually create. They are middle managers appeasing their boss, who is appeasing their boss, and so on. There is nothing tying it to reality, so all sense of quality is lost.

The main things that prevent this now, IMO, are the friction of setting up these groups for value capture and the maintenance of the subsequent results. To me, the most exciting part of blockchains is that they can create a non-human central authority that can theoretically embody any set of rules, including group structure and value capture.

New photo-sharing apps can end up capturing value like new superhero movies. Winner-take-all dynamics still exist, but enduring monopolies do not.

The contracting model will also solve another huge issue, which is the shitty interview process (mostly self-imposed).

After spending some time at well-known large companies, what I've realized is that they aren't the best of the best - they are simply places to hide, specialize, or grow. They are like oases that survive due to captured value streams, large value stores, or deep moats. Inside, people can become specialists in service of this "city" but lack the well-rounded grittiness of a desert wanderer.

But unlike a city, which is neutral provided you can generate value/pay rent, companies are like cults in that the entrances are guarded and the HOA rules are super strict and apply to everyone. So while they can generate highly specialized "trebuchets" that wanderers cannot, the citizens can be far from happy. The Hollywood model would greatly benefit the group of people who may currently feel too tied down.


So the premise is that the startup era is ending:

> ...because we’ve all lived through back-to-back massive worldwide hardware revolutions — the growth of the Internet, and the adoption of smartphones — we erroneously think another one is around the corner, and once again, a few kids in a garage can write a little software to take advantage of it. [...] But there is no such revolution en route...

Then it says:

> It is widely accepted that the next wave of important technologies consists of AI, drones, AR/VR, cryptocurrencies, self-driving cars, and the “Internet of Things.” These technologies are, collectively, hugely important and consequential — but they are not remotely as accessible to startup disruption as the web and smartphones were.

But a real counter to the premise is actually presented:

> (However, in fairness, software and services built atop newly emerging hardware are likely an exception to the larger rule here; startups in those niches have far better odds than most others.)

So once again, 'a few kids in a garage can write a little software to take advantage of it.' They start as niches, but we can't say what their potential is without discovery and development.


Well, the argument that 1995 through 2015 was a cycle completely different from whatever the next one will be is not without merit.

I mean, it is easy to forget, but many, many things had to come together at one time in order for this to pop off like it did:

* high speed internet

* widespread consumer demand for high speed internet and services

* multi-core, low-power hardware that gave us smartphones and cheapish "device" PCs like TiVo/Roku/etc.

* widespread cellular and wifi networks

* miniaturization and improvement of many types of sensors

* widespread data collection of all types

* massive investments and growth in consumer GPU devices, which underwrote the ML boom

and I'm probably missing some things, but you get the idea.

All of these things had to come together at the same time to give us the boom that we just went through, and it gave rise to the likes of Google, Facebook, and so on.

This is very unlikely to repeat itself. Those who grew up in the late 90s, early 2000s may not really notice, but the difference between 1995 and 2010 is astronomical.

This is not to say that we're about to crash, or that there won't be another boom in short order, just that it will likely follow a very different pattern than the previous one. The period of 1995-2010/2015 was really a very unique confluence of events, historically speaking, and over what is really a very, very short time frame. Whereas the current boom is built around leveraging the smartphone and widespread internet access, the next will not be, as that space will already be filled out by competitors.


It is likely entirely without merit.

Many things had to come together from 1975 to 1995 to enable the things you're referencing. The list you made is not impressive versus the past; it's normal.

What happened from 1995 to 2015 that was more important than the Internet, transistor, microprocessor, router, DRAM or the GUI? Good luck resolving that debate.

Many things had to come together from 1955 to 1975 to....

I don't know how old you are and how familiar you might be with the prior half century plus in tech, but we could be here all day listing the incredible inventions and leaps forward in tech during each of those 20 year periods of time.

Nothing has changed fundamentally about what's occurring in tech. The process continues as before. Each new generation thinks what has happened during its era is particularly special or unique versus the past. We see the same generational bias in most everything, from music to politics.


Yes, I am old enough to remember, and I think you are reading more into my post than was there.

I am not claiming 1995-2015 was unique in the fundamental factors (we are all riding the exponential curve here), simply that the confluence of advances is unique to that time period, and gives you a unique distribution of companies/organizations/industries/etc. that is very different from other time periods.


That's right, and I'm not at all convinced that the author of the TechCrunch article is correct.

But there is one important difference between the past 20 years and the 20 years before that. The number of people participating in a self-employed or entrepreneurial role has been far greater in the past two decades than before.

We did see some of that during the PC revolution as well, but it was disproportionally smaller in scale.

I haven't done the research to say whether there were historical periods before in which such large swaths of the population were gripped by the idea that they could start their own business based on a new technology.

It's possible that it happened before, but I don't think it was like that between 1975 and 1995. Certainly not towards the end of that period because I would remember.


Of course the technology wasn't all invented from scratch in that period, but it was when a lot of things progressed just enough to generate massive markets which pulled a lot of money into the system.


The recent past does appear very special but I wouldn't discount the near future either. e.g. the tech for AlphaGo was made by a startup.

In any case I took the piece at its title's face value, where startups are nowhere near over. Unicorns shouldn't really factor into that.


> e.g. the tech for AlphaGo was made by a startup.

Making the next AlphaGo is far less accessible than making the next AirBnB. The gold rush where we're basically sticking a web/mobile app on a business and off to the races is ending.


Now we start to stick AI on a business. And like with uber/airbnb where an app became the business, we'll see unicorns where the AI will become the business.


AI is some applied math. But as applied math goes, AI is only a tiny fraction, nearly absurdly narrow, and not very impressive. There's a lot more good applied math to be brought forward to exploit the recent fantastic hardware.


The other half of AI is data. Valuable proprietary data, e.g. credit card transactions or check-in data from Foursquare.


Can you give some examples of applied math sub fields that haven't been exploited to their fullest? I'm genuinely interested.


Statistical hypothesis tests: Commonly, calculations to predict something have two ways to be wrong: (A) predict it will happen when it doesn't, and (B) predict it won't happen when it does. Then, in the context of a statistical hypothesis test, you get to address the probabilities of A and B, and how to adjust the test to get the combination of A and B you like best, or get a better test that will give better combinations. If you have enough data, then the classic Neyman-Pearson result says how to get the best test. The proof is like investing in real estate: first buy the property with the highest ROI, then the next highest, etc., until out of money. That's crude but not really wrong. I have a fancy proof based on the Hahn decomposition from the Radon-Nikodym theorem. Well, statistical hypothesis tests are being seriously neglected.

E.g., some tests are distribution-free. And for other tests, you will want to make good use of multi-dimensional data, e.g., not just, say, blood pressure or blood sugar level but both of those jointly. Well, I'm the inventor of the first, and a large, collection of statistical hypothesis tests that are both distribution-free and multidimensional. That work is published, powerful, valuable, but neglected. I did the work for better zero-day detection of anomalies in high-end server farms and networks. So, I got a real statistical hypothesis test, e.g., I know the false alarm rate, get to adjust it, and get that rate exactly in practice. IMHO, my work totally knocked the socks off the work our group had been doing on that problem with expert systems using data on thresholds. Also, the core math is nothing like what is most popular in AI/ML now and, as far as I know, nothing like anything even in small niches of AI/ML now.
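To give a feel for what "distribution-free with an exact, adjustable false alarm rate" can mean, here is a toy example in the same spirit (just an illustration of the flavor, not the published work): flag a new observation when it beats a high order statistic of the history. Exchangeability alone pins the false alarm rate; no distributional assumptions are needed.

  import random

  def make_detector(history, target_rate):
      # If a new point is i.i.d. with the history (any continuous
      # distribution -- that's the distribution-free part), its rank
      # among all n+1 values is uniform, so P(false alarm) = k/(n+1)
      # exactly, and k lets you adjust that rate.
      n = len(history)
      k = max(1, round(target_rate * (n + 1)))
      threshold = sorted(history)[-k]          # k-th largest value
      return (lambda x: x > threshold), k / (n + 1)

  history = [random.gauss(0, 1) for _ in range(999)]
  detect, exact_rate = make_detector(history, 0.01)
  print(exact_rate, detect(4.2))  # 0.01, and 4.2 is flagged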

Once I was asked to predict revenue. We knew the present revenue, and from our planned capacity knew our maximum, target revenue. So, roughly, I had to interpolate between those two. So, how might that go? Well, assume that the growth is mostly from current happy customers talking to people who are target customers but not customers yet. Let t denote time, in, say, days. At time t, let y(t) be the revenue, in, say, dollars, at time t. Let b be the revenue at full capacity. Let the present be time t = 0 so that the present revenue is y(0). Then the rate of growth should be, first-cut, ballpark, proportional to both the number of customers talking, or y(t), and the number of target customers listening, or (b - y(t)). Of course the rate of growth is the calculus first derivative of y(t), or

d/dt y(t) = y'(t)

Then for some constant of proportionality k, we must have

y'(t) = k y(t) (b - y(t))

Yes, just from freshman calculus, there is a closed form solution. I'm guessing that the solution is a logistic curve. So, the growth starts slowly, climbs quickly as an exponential, and then grows slowly again as it approaches b asymptotically from below. So, get a lazy S curve. So, it's a model of viral growth. Get the whole curve with minimal data, just y(0), b, and the guess for k. The curve looks a lot like growth of several important products, e.g., TV sets. I derived this and used it to save FedEx. For all the interest in viral growth, there should be more interest in that little derivation.
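The guess checks out: separating variables in y'(t) = k y(t) (b - y(t)) gives the closed form y(t) = b / (1 + A e^(-k b t)), where A = (b - y(0))/y(0). A tiny sketch, with made-up numbers for y(0), b, and k:

  import math

  def revenue(t, y0, b, k):
      # Closed-form solution of y'(t) = k * y(t) * (b - y(t)):
      # starts near y0, climbs ~exponentially, flattens toward b.
      A = (b - y0) / y0
      return b / (1 + A * math.exp(-k * b * t))

  # $1M/day today, $50M/day at capacity, guessed k -- a lazy S curve:
  for day in (0, 180, 365, 730):
      print(day, round(revenue(day, 1e6, 50e6, 2e-10)))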

There is the huge field of optimization -- linear, integer linear, network integer linear (gorgeous stuff, especially with the Cunningham strongly feasible ideas), multi-objective linear, quadratic non-linear, non-linear via the Kuhn-Tucker necessary conditions, convex, dynamic, optimal control, etc. optimization. It is a well-developed field with a lot known. I've made good attacks on at least three important problems in optimization, via stochastic optimal control, network integer linear programming, and 0-1 integer linear programming via Lagrangian relaxation, and attempted several more where I ran into too much politics. Sadly the great work in optimization is neglected in practice.

The world is awash in stochastic processes, but they are neglected in practice. E.g., once for the US Navy, I dug into Blackman and Tukey, got smart on power spectral estimation, IIRC important for cases of filtering, explained to the Navy the facts of life, helped their project, and got a sole source development contract for my company.

The crucial core of my startup is some applied math I derived based on some advanced pure/applied math prerequisites.

And there is a huge body of brilliant work with beautifully done theorems and proofs that can be used to get powerful, valuable new results for particular problems.

Computers are now really good at doing what we tell them to do. Well, IMHO, whatever we should tell them to do that isn't just obvious comes nearly entirely from applied math.


Can you point me to some paper I might read? If it's not too much trouble.

I'm a CS grad student and sometimes it's hard to filter out the hype and find promising but underrated ideas among all the noise.


"Paper"? There are lots of pure/applied math journals packed with papers. I touched on the fields of statistics, probability, optimization, and stochastic processes, and each of these fields has their own journals.

Usually a better start than papers in journals is books. A first list of books would be the ones for a good ugrad pure math major. There you get to concentrate on analysis, algebra, and geometry, with some concentration on topology or foundations.

For grad school might want to do well with measure theory, functional analysis, probability based on measure theory, statistics based on that probability, optimization, stochastic processes, numerical analysis, pure/applied algebra (applied algebra -- coding theory), etc.

Then, sure, work with some promising applications and then dig deeper into relevant fields as needed by the applications.

One key to success is good "problem selection". So, with good problem selection, some good background, and maybe some original work, you might do really well on a good problem, publish some papers, do a good startup, make some big bucks, etc. That's what I'm working on -- picked my problem, did some original applied math derivations for the first good (indeed excellent) solution, and have my production code in alpha test: 24,000 programming language statements in 100,000 lines of typing.

It's applied math; hopefully it's valuable; but I wouldn't call it either AI or ML.

In case my view is not obvious, it is that the best help for the future of computing is pure/applied math and not much like current computer science. Computer science could help -- just learn and do more pure/applied math.


Time to sell shovels


After the first year in whatever cycle it is always about shovels.


It's why everyone wants to sell "a platform" today...


> I mean, it is easy to forget, but many, many things had to come together at one time in order for this to pop off like it did:

Yup. You describe a gold mine. Well, there's still a lot of gold in there. The amazing hardware developments you describe are not yet fully exploited.


Notice how there is an incredible alignment between the problems in VR/AR, drones, self-driving cars, and IoT? They all require solving problems in 3D space in the real world, either reacting or simulating. Hmm... might we solve some of these problems with AI?

I am not a software developer, but these problems are very conceptually similar, and it seems we're all waiting on software/compute capabilities to leverage all of these new hardware technologies simultaneously.


Given this combination of technologies, I'd say we're waiting for robot warfare :)


I've been thinking for the past year or two that we are hitting a bit of an inflection point. Previously, it seems the hardware developments moved somewhat linearly PC -> internet -> mobile but now it seems like there is all of a sudden significant overlap. Multiple new disruptive hardware technologies with significant, potential consumer applications are emerging simultaneously.

I might be just drawing arbitrary lines. "Mobile" depended on a number of technologies to happen. Possibly, looking back, this large number of simultaneous technologies will be described by one overarching technology category.


Your problem may be that you're using hindsight to see the PC, internet and mobile eras and using somewhat irrational exuberance to view the current "disruptions." It's a pretty good bet that not all of them will pan out or have the kind of pervasive impact on society that you're predicting and that 20 years from now, again with hindsight, you'll see the 1 landscape-altering technology that defined the current era and fits into your linear model.

For instance, people have been predicting that VR will be the next big thing for years now. I remember a birthday party more than 30 years ago at an arcade with an expensive VR setup that, while far more limited than today's applications, was still an awkward piece of headgear that's more of a novelty than a potentially-ubiquitous change to society. It's still possible that we hit some sort of inflection point where technology improves to the point where AR/VR becomes unobtrusive enough that it can become ubiquitous, but that's by no means a certainty.

Similarly, I think the jury is still out on IoT, drones, 3D printers and cryptocurrencies/blockchains. If I had to place a bet, I'd say that when we look back on this time period, we'll be talking about machine learning and AI defining this era. The rest of the "current hotness" technologies I could easily see not getting that big.


I have to wonder if some of that growth was related to Moore's Law. Namely, the number of transistors grew, and the use of them grew linearly with them, in a sort of 'if you build it, they will come' fashion?

Now, as I understand it, growth has slowed outside of the labs and Moore's Law appears to no longer be valid. So, there will likely still be growth, but the growth will be more rare, difficult, and expensive?

Like your VR example, we've often had great predictions of the future and so very few of them actually pan out as expected.

I don't know, it's just a thought I've pondered.


CMOS scaling is close to dead and that’s been a very special tech and tech enabler. It’s not the only way to continue ramping performance and function. See GPUs for example. But it’s hard to replace that kind of tech.


Because the pitch on wires hasn’t decreased nearly as fast as effective feature size, there’s a strong bottleneck effect that limits single-socket devices in terms of their usable compute power. A good rule of thumb since ~2012, and for at least the next 5 years would be ~12 tera-single-channel ops/sec.


As I recall it, the next wave usually doesn't conform with what "it is widely accepted" it will consist of.


This is a trend that's been occurring over the past 50 years in all business sectors. The conglomeration of everything, from healthcare to food, creates an oligopoly where only the big players have a seat at the table.

Tech was just immune from it because of the immaturity and infancy of the technology itself. Now that it's a mature business, it's being subjected to the same pressures and issues as any established industry.

In some ways it's just tech "growing up".


The evidence is likely in the data housed by Y Combinator, 500 Startups, and others in their market. Chart the statistical mean valuation of each cohort by year since graduation and see if that mean is changing. If Evans is correct, the mean should be declining for each successive cohort.

Alternatively, since the number of funded startups seems to be increasing, you may prefer to look at total valuation over time since graduation. Even if the mean is decreasing, the total number of 'successful' startups may be increasing and no 'end' is signaled.
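A minimal sketch of that test (Python/pandas; the file and column names are hypothetical, since this data isn't public, and the years-since-graduation normalization is omitted for brevity):

  import pandas as pd

  # Hypothetical file: one row per funded startup,
  # columns: cohort_year, valuation_m (latest valuation, $M)
  df = pd.read_csv("yc_valuations.csv")

  # Evans's thesis predicts the mean declines for later cohorts;
  # a rising total despite a falling mean would signal no 'end'.
  by_cohort = df.groupby("cohort_year")["valuation_m"]
  print(pd.DataFrame({"mean": by_cohort.mean(), "total": by_cohort.sum()}))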

I'm not convinced that 'there are no good startup ideas left in this technology era' because the big winners are all black swans. By definition, they defy conventional wisdom.


This reminds me of when Pando went to Demo Day and saw no interesting companies and a bunch of knock offs https://pando.com/2013/03/26/y-combinator-demo-day-2013-stil.... They completely missed Zenefits, Teespring, and a few others.

I’m not convinced that the dominant YC companies looked obviously dominant at their early stages; they only seem dominant in retrospect.

As for why the giant companies at YC are still the dominant winners, it's because the winners just keep growing. We don't have a sense of scale; when Airbnb was a $500m company it was YC's poster child. Now Airbnb is worth many billions of dollars, and of course it still is the poster child, while companies like LendUp are valued at $500m but are not even talked about.

It’s not that because YC and startups are less successful, it’s that some are so incredibly successful you stop paying attention to the successful ones.

If the measure is valuations coming out of YC, Airbnb raised at a $3m valuation. A lot of YC companies raised $3m at a $14m+ valuation, but that’s more an indication of the market than of the likelihood of success of those companies.


I think this is partly due to the saturation of the consumer space; it's easier to understand the benefits of AirBnB vs Zenefits.


I think there is something to the notion that we have, for the time being, exhausted the low-hanging fruit produced by smartphone market penetration. There are still plenty of good and potentially extremely lucrative startup ideas, but the untapped market potential will not come from taking advantage of mobile alone.


> 'there are no good startup ideas left in this technology era'

The low-hanging fruit has been thoroughly picked, 'tis all


Yet another way to analyze the situation is to look at how big successful startups grow before they sell to one of the big ones, and how many of them are sold each year.

Big internet giants can reap the benefits of the startup scene by picking up promising startups early on, before the valuations grow. Just a hint that they might provide a free alternative that is just good enough can drop the valuation of a startup.

When Microsoft was the scary monster of the software business in the '80s, all software startups had to have a Microsoft strategy: what to do when MS shows interest. Show them a demo before the product is ready and they have several ways to shut it down, or to buy it off and kill it.


Thank you. For me, the weak spot of the article is the missing data for this story.


It’s almost as bad as when we solved all of physics in 1917.

I'd write more, but I have to go to the chemist's to buy some Westinghouse relays for my Bell System electrified collating typewriter-telegraph.


Yes! Jeez, thank you. These articles are nothing but clickbait, but more insidious clickbait than the other stuff, because it's cloaked behind a faux point: It's not actually a thoughtful piece of writing, but it's trying to dress up like one.


"640K ought to be enough for anyone" - Bill Gates, ~1985


To my way of thinking the pendulum is swinging away from funded startups to folks bootstrapping side projects.

So, on the "large" end you'll have the googles, facebooks, etc and on the "micro" end you'll see a bloom of people starting small projects like those shown on indihackers.

It's something I've been hoping for since 2011!

http://justinvincent.com/page/1392/entreporn-the-fallacy-tha...


My short term goal right now is to sell/grow enough side projects to go full time with prototyping "micro businesses". I feel like a startup studio of 4-8 people could get some serious work done. It would also just be a great time exploring new ideas and green-field code all day every day. But hey, a young guy trapped in a megacorp-cube can dream, right?


Same here. Send me an email justin@nugget.one let's chat.


This is why I love HN! Talk soon.


I'm not sure comparing the YC batches is convincing evidence. Off the top of my head, Coinbase (YC S12) is the 2017 analogue of a company that is "poised to dominate its industry", already valued at $1B+.


Another poised-to-dominate unicorn is Flexport (YC W14).


I agree with this premise. I knew it was over about 2 years ago, when I noticed that about half of new startups were built to sell to other startups, or to serve the existing startup market in some way. That's a sign of saturation.

Another paradigm shift in media (like recorded sound, recorded video, radio, TV, computing, web, and mobile) is what's needed to produce another startup boom. It all comes down to media - the medium is the message.


Money as a medium, from McLuhan's Understanding Media, as an encouragement to think of cryptocurrency as a media paradigm shift...

“Money talks” because money is a metaphor, a transfer, and a bridge. Like words and language, money is a storehouse of communally achieved work, skill, and experience. Money, however, is also a specialist technology like writing; and as writing intensifies the visual aspect of speech and order, and as the clock visually separates time from space, so money separates work from the other social functions. Even today money is a language for translating the work of the farmer into the work of the barber, doctor, engineer, or plumber. As a vast social metaphor, bridge, or translator, money—like writing—speeds up exchange and tightens the bonds of interdependence in any community. It gives great spatial expansion and control to political organizations, just as writing does, or the calendar.


VR/AR fits that bill. It's just that no one has nailed the form factor and hardware yet so that it can reach mass adoption.


Ideal situation: wearable VR devices that are small in size, efficient in energy consumption, with a processing unit fast enough that there is no lag, and algorithms excellent enough that interacting with the real world is effortless... what is VR's biggest usage for a common Joe?


Or is it a chicken and egg problem? VR interests me, but I don't feel there is a killer app yet to make the plunge worth it.


Disregarding the new applications enabled by the medium, think of it as simply a new display form factor. Once the quality exceeds the best flat screens we can produce, having one unobtrusively attached to your eyes at all times becomes a no-brainer (from a cost-benefit perspective; obviously there are lifestyle-altering implications that each individual will have to embrace or reject on their own terms).


A poster above mentioned that they experienced VR some 30 years ago. I, myself, had a chance to work with it in the early 1990s. Truth be told, it was horrible. It was rudimentary and the graphics were very poor as it was computationally expensive. Exploration stopped and we reexamined the tech a decade later, with similar results.

I've long since postulated that I'd absolutely volunteer to 'jack in' to a neural method to control a computer - complete with my standard joke about being willing to even have a wifi antenna poking out of my skull.

I wonder, then, if we are going about this the wrong way. We are trying for ocular stimulation directly. If we could skip that and move to neurological stimulation directly, I'd expect VR and AR to finally reach the tipping point. There is, after all, a finite amount of miniaturization that's possible.

It is purely a hunch that tells me VR/AR are not destined for wide-scale 'normal people' adoption until it doesn't require external apparatus to utilize.

As it is, we already have people who don't even like wearing simple eyeglasses. However, if it didn't require such, then it may just be something we humans add to our bodies to augment it.

Thoughts?


I think the value of the technology hasn't reached a point where it exceeds the inconvenience of using it; namely the bulky headsets and expensive hardware. I agree that it won't be more than a niche technology in the near term future. I guess it all depends on how many more orders of magnitude we can expect to get out of miniaturization before we hit the wall. If it's possible to get a minimum of 4k per eye resolution and the graphics performance of a 2017 high end gaming GPU into a pair of wraparound sunglasses combined with a smartphone, I see it becoming the dominant display technology. The convenience of being able to summon any number of arbitrarily large displays at will is hard to deny, even if you completely disregard any value proposition involving AR and new forms of human computer interaction.

I'm sitting in bed with my laptop right now, and if I could choose between reading this on a pair of lightweight glasses or my laptop, I'm not sure what the laptop has to offer. The big issue is touch typing; no (macro) gesture-based virtual keyboard is ever going to be usable for professional workloads. I sometimes wonder if some kind of one- or two-handed finger-chording input could be as efficient as a qwerty keyboard. I would be willing to toss my decades of qwerty experience if I could eventually get something as fast without having to carry around a keyboard. I imagine something like this device from Children of Men: https://youtu.be/sJO0n6kvPRU?t=2m4s (1024 "keys" should be plenty, so it's technically possible).
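A quick sanity check on that parenthetical figure: ten fingers, each either down or up, give 2^10 possible chord patterns.

  fingers = 10
  chords = 2 ** fingers   # 1024 distinct down/up patterns
  usable = chords - 1     # minus the empty "all fingers up" pattern
  print(chords, usable)   # 1024 1023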

Regarding direct brain-computer interfaces, I just don't see the technological barriers going away any time soon. You'd either need some type of non-invasive technology that could wirelessly stimulate the optic nerves (aside from light, obviously), which I'm not sure exists even in theory, or such sophisticated nano-machinery that it would be effectively invisible, like a neural lace. I don't see either of these things happening for decades at least (I would love to be proven wrong though!).

I don't know if the form factor that will finally trigger mass adoption will resemble currently available headsets. The breakthrough might be retinal laser projection or light field displays. I just think that if nothing else, the ability to move our current workloads to a portable virtual display is such an obvious improvement I can't imagine it not happening as soon as the technology is good enough. Of course the same can be said for BCI but that doesn't even work in the lab yet.


I would absolutely love AR. It'd be fascinating to look at a bridge or building and see who designed it, when it was built, how it was constructed, who died while building it, the floor plan, utilization rates, etc... Ideally, this would be done while not actually driving.

As for the direct methods, I think we may get there someday. We already enable paralyzed people to interact, albeit on a minimal scale, with a computer using nothing but their mind. There is even a DIY movement that has enabled this, again on a minimal scale, for hackers at home.

No timeline, no estimates, but I think we may get there.

My thinking is that miniaturizing is a limited endeavor. We're very unlikely to ever have things like AR by means of contact lenses. So, we're looking at something people will wear.

It is my own personal view, but I see no great benefit in consuming print media by means of VR. For that, I have a tablet and a few ebook readers around the house.


That seems really far off if it's even possible.


Maybe? We have learned a lot about the brain in recent times. You can, today, hack your way into controlling a mouse with your mind. That's all external and, of course, requires a headset.

I doubt I'll be alive to see it, but it seems that is the most probable method to get mass adoption of VR. Right now, it is really niche. Right now, we are still trying to do it the way we've been doing it for the past three decades. Things are faster and smaller but there are limits to those two traits.

I'd absolutely love AR. I have a modest collection of automobiles and sometimes work on them. I sometimes make things out of wood. I sometimes can't identify an animal species or plant family. Having the ability to augment that would be a wonderful way to enjoy life even more - at least for me.

But, even if we got these down to the size of eyeglasses (which seems really unlikely for the foreseeable future) I'm not sure we will get mass adoption by Jane Q. Public. It's not a cell phone you pick up and put away, but something worn. The form factor is, by itself, a negative.

I dunno? I can't predict the future. I'd still volunteer to test a viable method. With the ubiquity of cellular network connectivity, it'd be fantastic to have the sum of human knowledge at your immediate beck and call and without the need for an external device.


The era of startups began to end in the 1970s:

"Where are all the startups? U.S. entrepreneurship near 40-year low"

http://money.cnn.com/2016/09/08/news/economy/us-startups-nea...

In recent weeks, this issue has been discussed several times on Hacker News. I recall someone recently wrote a comment saying, "We should distinguish between new businesses, like a pizza shop, and real startups, that might become big companies." But why exclude a little pizza shop that might become the next Pizza Hut or Domino's or Little Caesars? During the real startup era, in the mid 20th century, there were hundreds of successful pizza startups that turned into big companies. If we say "We won't count small pizza places because they cannot possibly become big companies that get listed on Wall Street" then we are simply assuming our conclusion. If we exclude all of the categories which were once hot, and which should be hot right now, and only focus on the handful of sectors that still have some life in them, then we can end up believing that the era of startups is still happening right now, while blinding ourselves to reality.

When the economy is healthy, small businesses, with the right leadership, can make the jump to the big time. It is from the frothy, primordial soup of little mom and pop shops that new giants emerge. Two examples off the top of my head: both McDonalds and Barnes & Noble were small family businesses, for decades, before new management took over and found a way to turn them into giants.

Focusing on the tech sector, and acting as if it is the only sector that matters, allows us to ignore the sclerosis that has crept over the USA economy since the end of the post-war boom, back in 1973. We should take a step back and look at the long-term trend. The economy has been increasingly sick for 40 years now.

We should ask ourselves, where does this trend end? How large do the monopolies grow? Will there ever be an era when the USA returns to creating new businesses at a rate that would have been normal for most of the 20th century?


Even ignoring "AI, drones, AR/VR, cryptocurrencies, self-driving cars, and IoT" from the article, there are still two big areas in their infancy:

1. Rich web apps - We know Gmail, Google Docs, Salesforce, etc. have taken over from desktop apps. I'm continuously discovering more, e.g. Figma. Basically anything that was a single-user desktop app can be made into a realtime collaborative networked one.

2. Mobile business apps - Yes, we have mobile versions of business web apps, but these are typically only as useful as responsive web apps that drop critical features, forcing you back to a poor "request desktop site" experience. What is needed is to create apps which make full use of what works on mobile: speech input, gestures, what have you. Just as PCs took over from centralized computers, and the web from OSes, future computing will be more mobile and ubiquitous. Current apps are translations of desktop/web ideas. We have a long way to go toward making great mobile ones. The many significant discoveries and inventions along the way will come from both large and smaller contributors.


>single-user desktop app can be made into a realtime collaborative networked one

Dear God, Why?

I would rather have MORE single-desktop, single-license products which I can buy once, and then forget about having to upgrade every year and buy yet another license for. (B2B)


There definitely is a change in Silicon Valley from before. I don't think it is a lack of things to be created by any means. The difference I see is that before, the thing to do was create a startup. Now everyone wants to go work for Google, Apple, Facebook or another large company. And why not, you can get a huge salary there without the risk and hard work of a startup. With so many more of the smart people going to big companies rather than trying to make something on their own, I definitely expect less innovation here in the US, for now at least.

On the positive side, there are some things a big company can do that a small company can't. This may contribute to why so many people are going to work at big companies, and we may see some good results.


The weirdos and hackers are still doing the same stuff they've always done. What you are seeing is the influx of people who would have gone into law or finance identifying tech as having the biggest payoff for the least amount of effort. The Silicon Valley of before was a small fraction of its current size.


This sounds suspiciously like the mid-1990s, when everyone was bemoaning how the big software companies had basically ended the at-home software startup shop. Then the web struck and all these big companies had to get nimble enough to compete with the scrappy little web shops. This basically happened again in the mid-2000s with smartphone development.

I'm not sure if the machine learning revolution will favor the guy at home with a GTX 1080, but until we go a decade or more without any big hardware/software companies starting up, I won't believe it.

OTOH, I suspect that the US has lost the competitive advantage to China in this regard. The "IoT" revolution probably isn't going to start in the US, since all the little board dev shops are in China, where you can actually purchase all the little parts you need without having to wait 6 weeks for a part or pay 10x in shipping.


Similarly, there is a big growth in the small scrappy startup space around cryptocurrency and associated blockchain technologies.

Not to start a conversation over Ponzi schemes, scams, and vaporware, but it's impossible to ignore the millions of dollars of investment moving around on a monthly basis in this space.


eh, I think what people need to consider in this is the meta-learning pattern.

First wave of disruption - it's all equal. Anyone with an idea, coding chops, and time can give it a shot.

Second wave - leaders of the first wave get caught unaware; some survive, others don't, and new leaders are created.

Third wave - There is no third wave. Survivors of rounds 1 and 2 have learned the lesson, and realize that being wrong footed means death. They spend every erg of energy they can spare to identify trends, buy potential competitors and invest in survival.


Having lived through three "deaths" of Silicon Valley so far, I will reserve judgement until the price of housing goes down to the national median.


NYC-based angel investor Jerry Neumann calls this the "deployment age" and has a very nice, and somewhat prescient, write-up about it:

http://reactionwheel.net/2015/10/the-deployment-age.html


He's just quoting Carlota Perez. But he doesn't get it quite right: we're in the installation phase for some technologies and the deployment phase for others.


Thanks for mentioning Carlota Perez. I found a fairly recent (Aug 2017) article by her and cohorts that discusses these "installation" and "deployment" phases. It's a more optimistic article, so I created a separate post:

Are We on the Verge of a New Golden Age? https://news.ycombinator.com/item?id=15531574


Yeah, he's kind of focusing on only his field, but that's OK. AI and genetic engineering are clearly just getting started, but if it's not in his field then it might make sense for him to not care.


> Big businesses and executives, rather than startups and entrepreneurs, will own the next decade; today’s graduates are much more likely to work for Mark Zuckerberg than follow in his footsteps

This is a tautology and reflective of the mindset that produces articles like this one. There is only one Zuckerberg. Unicorns are by definition rare, and disrupting entrenched business interests is rare because they are entrenched. That's what it means.


Yeah this is another example of how SV exists in a bubble and can't see the broader world. Today's graduates are more likely to work for Zuckerberg than to follow him, as were yesterday's, as were the day before's and so on.

It only seems like everyone is a founder because that's how it was in SV. It only seems like everyone stopped creating startups because that's how it is in SV. But like everything else in the world, trends start at the coasts and work their way inward, to the point where SV tech growth may be stagnating or declining, but Midwest startups are booming. And if the Midwest is firing up, African startups are virtually exploding.

Big businesses have always owned every decade of capitalism, and that's not going to change. Economies of scale are too hard to turn away from. But startups aren't dying; they're just taking on less sexy, harder problems that don't impact SV. So of course SV thinks entrepreneurship is dying.


I see other 'startups' that I think are missed by the SV folks. They aren't tech, but may use tech in their startup. I see quite a few of them.

They are people starting their own business. A friend borrowed some money to start a business where he drives a remote control vehicle down pipes to examine them from the inside. The space didn't have much 'local' competition and he was able to spot that, take advantage of it, and now has a dozen employees, has repaid the money, and is seeking to expand his operations. A similar experience was loaning my sibling some money so he could start a plumbing company - except it is more niche and not residential. They use some tech but they aren't tech companies.

It seems to me that the SV pundits see the tech startups while not seeing the guy who does vehicle power washing on-site, the interior decorator that uses 3D modeling and VR, or the folks who opened a diner where they have automated the food ordering process.


It seems like you have a lot of entrepreneurial energy and inspiration in your personal network. SV pundits don't care about those people creating lifestyle businesses, they just want "megascale" and big fancy exits.


I think the future is even starker: today's graduates will be just as likely to work for Jared Kushner as Zuckerberg, and I'm pretty sure history supports this. That is, who under 40 manages the largest workforce? They'll probably keep doing that.


I am not an expert in this, but SV actually has a great record of taking on super hard problems. It's just that Facebook is not one of them.


I think FB tackles hard problems, it's just that the hard problems they tackle aren't FB-specific so much as decades-old market research, interest-tracking, and adtech in the internet era, not to mention datacenter design and engineering. But none of these achievements require FB to exist as FB in any way more than a test bed.


I think there is a constant shift towards different kinds of startups, as one field saturates and big players emerge. Currently the shift has been from "easy" towards "harder". In the early 2010s there was a shift from viral consumer services and apps towards B2B services and Uber-style marketplaces, which are both arguably harder to make than most consumer apps, and now there is a shift towards hard tech startups, which, again, are harder to build.

But startups were more about hard tech in the '80s and early '90s. It will take a while for new big consumer tech platforms to emerge, but when it happens, there will again be a shift towards viral consumer services.


One killer for startups is the weakening of patents. After eBay vs. MercExchange, easy post-grant re-examination, and Seagate[1], the worst-case penalty for infringing a patent is only paying the royalties that would have been paid if a license had been negotiated. So there's no incentive left for big companies to buy technology; they can just steal it.

This is new. All those things changed in the last 15 years. The result is that VCs have gone for market share, not technology. (Or they've gone for pharma, where patents still work.) Until about 2000, Silicon Valley VCs wanted startups to show that they had a strong intellectual property position. That all changed in the first dot-com boom, when it started being about buying market share.

This hurts innovation. Why work on a hard problem?

[1] http://www.fulcrum.com/punitive_damages_are_now/


Meh, articles like this are neither new nor interesting. Journalists have been writing articles like this for decades. At some point, one of the journalists who writes these articles will be right, but I doubt this one will be. It's the same with how the US dollar will get replaced as the reserve currency. Maybe some time in the future, but definitely not now and not in 10 years.

There will always be cycles, but it's so easy to be a naysayer, just write a medium post. People have been calling for the collapse of Silicon Valley since the dotcom crash, and at some point they'll be right, but they've been wrong so far.

Currently, I don't see any companies that are particularly interesting, but I'm confident there's some young kid out there that will create something great that will capture everyone's attention in the next couple of years, and then a new land-grab will occur all over again. It always has happened, and will always continue to happen.


Why would this be the end of the startup era? That era has been going on for decades; it's just taken different forms. It's maybe the end of the overvalued photo-sharing mobile app company era (which can't come soon enough).

Even "old" startups like Palantir that claim valuations at $20 billion are privately being revalued down significantly [1].

We're simply moving into a different era and the very public, very flash, direct to consumer startup market appears to be well beyond saturated. The money will go back to where it's always gone, quieter B2B type companies that build boring technology that solves less glamorous, but more important, problems.

1 - https://www.bloomberg.com/news/articles/2017-10-17/palantir-...


This isn't to say that you can't still build a successful startup, but it is saying that you probably won't build the next Uber/Airbnb. The difference is a few zeros but you'd still be wealthy either way.


>"Big businesses and executives, rather than startups and entrepreneurs, will own the next decade; today’s graduates are much more likely to work for Mark Zuckerberg than follow in his footsteps."

I disagree with this. I think the bloom is off the rose for these companies (FB, Google, Amazon, et al.) in terms of their image as a "cool place" to work. I think they will increasingly be viewed for what they essentially are - just other big corporations. Big corporations that only have their own best interests at heart.

This nearly reads like a propaganda piece intended to discourage people from doing their own thing.


If the smartphone boom was “2007-2016” it’s premature to claim nothing is next. It’s too soon to say “there is no such revolution en route”, when we had one practically yesterday!


This article is using false assumptions about AI to back up its narrative.

"AI doesn’t just require top-tier talent; that talent is all but useless without mountains of the right kind of data. And who has essentially all of the best data? That’s right: the abovementioned Big Five, plus their Chinese counterparts Tencent, Alibaba, and Baidu."

Making the next quantum leaps in AI is not a question of data advantages.

1) We have just seen the world's best Go-playing AI trained completely from self-play, without access to any labeled data nor hand-engineering, trained on a few machines. This happened at Google, but could have easily been done by a small startup.

https://thenextweb.com/artificial-intelligence/2017/10/20/go...

2) Even the pioneers of deep learning are now strongly pushing back on the methodology that lots of labeled data is required to solve AI problems.

Geoffrey Hinton: "I don't think it's how the brain works. We clearly don't need all the labeled data."

https://www.axios.com/ai-pioneer-advocates-starting-over-248...

In a nutshell, the biggest challenge of pushing AI forward is tackling unsupervised learning, not having "better data".


1) That's false. AlphaGo did not just train by self-play. It trained on millions of pre-played games, and the bot also uses hand-engineered features for their MCTS hybrid (best) model. Refer to their paper for details.

2) Please read the article that you are referencing. What you are implying is neither the thesis of that article nor what Geoffrey Hinton is saying.

What Hinton is saying is that we should throw out deep learning: the current approaches to AI are fundamentally broken and aren't going to result in artificial general intelligence.

Backpropagation is not just used in supervised learning, it is also used in unsupervised learning. I happen to agree with Hinton, in that there is too much hype around the current state and successes of AI, which has mainly been in "narrow AI".

AI is a term that gets thrown around a lot these days. But there is a big difference between technology that automates the tedious tasks of daily life, and artificial general intelligence.

In the world of deep learning, data is king.


https://deepmind.com/blog/alphago-zero-learning-scratch/

>After just three days of self-play training, AlphaGo Zero emphatically defeated the previously published version of AlphaGo - which had itself defeated 18-time world champion Lee Sedol - by 100 games to 0. After 40 days of self training, AlphaGo Zero became even stronger, outperforming the version of AlphaGo known as “Master”, which has defeated the world's best players and world number one Ke Jie.


Sorry, but you are incorrect on both accounts.

1) AlphaGo Zero was indeed trained in the way I mention.

2) As directly quoted from the article, Hinton believes that a better way of learning doesn't require all that labeled data. If such a method is invented, as is required to push AI forward, big corporations would not have a data advantage, which is my original point.


AlphaGo was trained using supervised learning on games from KGS:

https://storage.googleapis.com/deepmind-media/alphago/AlphaG...

AlphaGo scratch: https://www.nature.com/articles/nature24270.epdf

I was specifically talking about AlphaGo, not AlphaGo scratch. Also, if you read the paper about AlphaGo scratch, the key innovation driving the self-learning is the use of MCTS as a policy improver, which couldn't have feasibly been done without AlphaGo and the supervised learning.
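For readers unfamiliar with the idea, here is a toy sketch of that "search as policy improver" training loop: cheap random rollouts stand in for MCTS, a tabular policy stands in for the network, and the trivial game of Nim (take 1 or 2 stones; whoever takes the last stone wins) stands in for Go. This illustrates the pattern only; it is not DeepMind's actual method.

  import random
  from collections import defaultdict

  STONES = 10
  LR = 0.3
  # tabular stand-in for the policy network: per state, [P(take 1), P(take 2)]
  policy = defaultdict(lambda: [0.5, 0.5])

  def legal_moves(stones):
      return [1, 2] if stones >= 2 else [1]

  def rollout_value(stones):
      """One random playout; returns 1 if the player to move here wins."""
      player = 0
      while True:
          stones -= random.choice(legal_moves(stones))
          if stones == 0:
              return 1 if player == 0 else 0
          player ^= 1

  def search_policy(stones, n_sims=30):
      """Crude stand-in for MCTS: score each legal move by rollout win rate."""
      scores = {}
      for m in legal_moves(stones):
          if stones - m == 0:
              scores[m] = 1.0  # taking the last stone wins immediately
          else:
              # our value after moving = 1 - opponent's value in the new state
              scores[m] = sum(1 - rollout_value(stones - m)
                              for _ in range(n_sims)) / n_sims
      total = sum(scores.values()) or 1.0
      return {m: s / total for m, s in scores.items()}

  # self-play: the search plays the games, and the policy is trained toward
  # the search's output distribution (the "policy improvement" step)
  for episode in range(200):
      stones = STONES
      while stones > 0:
          target = search_policy(stones)
          for i, m in enumerate((1, 2)):
              if m in target:
                  policy[stones][i] += LR * (target[m] - policy[stones][i])
          moves = list(target)
          weights = [target[m] + 1e-6 for m in moves]  # guard all-zero weights
          stones -= random.choices(moves, weights=weights)[0]

  # in this game, positions that are multiples of 3 are lost for the mover,
  # so the learned policy should prefer taking (stones % 3) when possible
  print({s: [round(p, 2) for p in policy[s]] for s in range(2, STONES + 1)})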

And I think Hinton is saying that we need fundamental breakthroughs for AI, and I don't think he is in favor of "traditional" modern neural network architectures. Anything that requires SGD, he doesn't like.


You can also leverage Google's APIs and their expertise. These kinds of articles forget about the API economy. Then we have new technologies like "serverless" that radically lower the bar to entry for new businesses to get into the game. Why did I work for the guy who founded Geocities instead of founding something myself? I was up to my neck in student loans, and remember, in 1998 you had to cough up $1 million for a starter Oracle license to go big; today you can scale as you go. The only thing that will end the startup is a lack of creativity and vision; it keeps getting easier and easier to jump in.


I think the conclusion may still be sound despite the reasoning. It may not be a "data advantages" question, but it is still almost surely going to be a question of human capital -- and the only places likely able to afford the talent & operational costs are going to be "Big N" companies. There is a tremendous amount of human expertise involved in the advancements we're seeing in AI.

AlphaGo Zero is the product of incremental improvements made by the same group of computer Go experts that built the original system. _They_ learned from each experiment and incorporated that knowledge in the next version. It was not obvious _a priori_ that the AlphaGo Zero architecture or training method would succeed; if it had been, then the AlphaGo team would not have built the earlier versions. And while it runs on "only" 4 TPUs for inference, those TPUs are each about 30x as powerful as the computers that the original AlphaGo ran on, so it's more like a reduction from ~180 GPUs to 120 equivalent GPUs.


I was mostly disputing the claim about data advantages, which often gets thrown around willy nilly to favor big corporations, because it fundamentally goes against the nature of where AI is heading.

I am not disputing that having more financial resources would help anyone hire talent and build awesome infrastructure, and certainly the latest way of training AlphaGo Zero was aided by earlier experiments that relied on labeled data and extensive computational effort. However, by no means do I think big corps have a lock on these kinds of advancements. There will always be great people who would rather go the startup route, and both algorithmic and hardware advancements are drastically reducing the operational cost of training AI systems. Thus, when it comes to AI, I think very small teams will be able to get very far with the right approach.


No one wants to be called the next Yahoo, so even a whiff of competition means the startup gets acquired. Especially considering FAANG companies have so much cash on their books, not to mention the ease with which credit can be had.


This doesn't feel particularly true in the enterprise space. There still is the constraint on IPO exits due to (unclear what exactly -- that's another topic), but I don't see companies like HPE, IBM, Dell, etc. getting more innovative, nimble, or effective.

It might just be that in the consumer space there are some exceptionally good companies, where that hasn't really happened in enterprise, or it could be a function of the technologies and markets. Amazon (AWS) and Google to some extent compete in enterprise as well, but they're not as dominant relatively.


Ironically, you have a giant media house (AOL), one of whose important revenue streams is selling hopes of the startup world ("Startups, start here". [1]) through their Disrupt conferences, writing something like this.

My personal opinion - as long as there is a market demand and as long as you're a business fulfilling that demand, startup or not, whether the tech is accessible or not, it's all that matters.

[1] https://techcrunch.com/event-info/disrupt-berlin-2017/


Of course market demand and other factors matter, but the point of this article is that large tech companies are now a barrier standing between small startups and success. We all see it: most startups fail, and even when some gain traction, they are bought by the big guys. One of the main causes is the nature of the large platforms themselves. Nowadays, one of the main platforms for promoting your company is Google, and Google advertising favors those who pay more. Thus, companies with more cash have a privilege over smaller companies even if the smaller ones do their job better. And that's only one example with one large corporation. To conclude, the big guys, who were praised by the whole tech community all along, are now a barrier on our road to progress.


Weird that an article on the end of startups doesn't mention the likely next big area for startups: Genetics.


Genetics research is slow. It could be 10, 20, or 50 years away.


Most industries tend to consolidate over time around a few big players. For technology it's particularly true, given the winner-take-all phenomenon. Unless there is a fundamentally new technology / hardware platform, there's little chance small startups can break into existing markets.

However, I am more interested in seeing technology disrupt old heavy industries. Go outside Silicon Valley and there are plenty of those.


I try to be patient with Techcrunch, but seriously, phrases like this make me see red:

> The market capitalization of Bitcoin vastly exceeds that of any Bitcoin-based startup. The same is true for Ethereum.

The market cap of the US dollar exceeds that of any financial institution. What's the fucking point?


TechCrunch will publish almost anything, so long as they think it will drive clicks. If you're a decent writer and want to write about a controversial topic for free, they'll likely have you.


The point is that the success of bitcoin does not mean a huge windfall to bitcoin based startups. Reread the paragraph.


I suppose it depends on how broadly you define startup. Surely some drug dealers, online casinos, etc., are doing well. I believe Silk Road did $1B+ in sales.


Starting businesses is outmoded, the new fashionable high growth thing is starting currencies?


We're offering early access to a presale of our ICO to highly-valued partners; please direct inquiries about how you can be a member of this exclusive group to ponzi@scheme.coin


I really like your publication and would like to subscribe to your newsletter.

My qualifications include a downstream consumer network of people ripe for my Business Savvy and I'd like to let you in on the opportunity to harvest the wealth which your seeds would grow through my hand.

In fact, I wrote a book about this and it's yours for $50 - the title of the book is "how to get people to pay you $50 for your book"

Reserve your exclusive .txt copy today for only three hundred payments of .0015 BTC


Take my money!


I’d like to see your whitepaper!


How do you hit on a network engineer?


Dunno. Show him your whitepapers?!


Send him a SYN packet and see if you get an ACK


Leaving this part aside, the whole article makes sense. Nowadays we all see that it's difficult for small startups to promote themselves. The privilege on major platforms is given to those who pay more (e.g. Google). Even if a company gains enough traction to start becoming successful, a larger corporation buys it (in the best case) or does everything it can to keep it from gaining market share. Thus, the "big guys" are now a barrier on our way to progress.


I remember the beginning of the startup era. When Bill Gates and Steve Jobs started their companies in 1976, there were very few venture capital firms, and none of the stepped procedure of investment you see today. A company had to borrow from more conventional banks and/or grow its own capital from revenues. A big help came when President Carter lowered the capital gains tax in 1977. That stimulated investment firms.


Startups have also seemed to have morphed from ventures started by founders, to vehicles for investors to make long plays for monopoly control of a market, like Uber and Netflix. I am not sure of the laws regarding dumping and artificially low prices, but when startups operate at a loss for years and gobble up the entire market, that looks like what they're doing.


Clickbait.

ML won’t require mass amounts of data forever. One-shot learning will get solved.


> One-shot learning will get solved.

This is nonsense. I would challenge you to show any few-shot learning work that isn't basically some form of transfer learning in disguise.

Few-shot learning is super useful if you're in a position where you don't have much data, but will never compete with huge stacks of data.
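For context, this is roughly what the "transfer learning in disguise" pattern looks like in code: a backbone pretrained on a big dataset is frozen, and only a small new head is fit on the few available examples. A minimal PyTorch sketch, assuming torchvision >= 0.13, network access for the weight download, and random stand-in data:

  import torch
  import torch.nn as nn
  from torchvision import models

  backbone = models.resnet18(weights="IMAGENET1K_V1")   # the big-data prior
  for p in backbone.parameters():
      p.requires_grad = False                 # freeze the learned features
  backbone.fc = nn.Linear(backbone.fc.in_features, 5)   # fresh 5-way head

  opt = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
  loss_fn = nn.CrossEntropyLoss()

  # a "few-shot" set: 5 classes x 5 examples of random stand-in images
  x = torch.randn(25, 3, 224, 224)
  y = torch.arange(5).repeat_interleave(5)

  for step in range(20):                      # only the head's weights move
      opt.zero_grad()
      loss = loss_fn(backbone(x), y)
      loss.backward()
      opt.step()

The few labeled examples only ever adjust the tiny head; everything the model "knows" about images still comes from the millions of pretraining examples, which is the parent's point.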


The nice thing with digital models is they can get data from everywhere and continue to improve slowly over time. It's the lean startup applied to ML: there's nothing wrong with launching with a one-shot ML product to test a hypothesis, and as time goes on keep adding to it. That's the core concept there: start with a working but basic model and continue to extrapolate from that as time goes on.
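As a hedged sketch of that launch-then-improve loop, scikit-learn's partial_fit lets a model shipped on little data keep learning as new examples stream in (assumes scikit-learn >= 1.1; the data here is random stand-in data):

  import numpy as np
  from sklearn.linear_model import SGDClassifier

  rng = np.random.default_rng(0)
  clf = SGDClassifier(loss="log_loss")
  classes = np.array([0, 1])

  # day 1: launch with whatever little data you have
  X0, y0 = rng.normal(size=(50, 4)), rng.integers(0, 2, 50)
  clf.partial_fit(X0, y0, classes=classes)

  # each later batch of (stand-in) production data nudges the same model
  for _ in range(100):
      Xb, yb = rng.normal(size=(32, 4)), rng.integers(0, 2, 32)
      clf.partial_fit(Xb, yb)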

"I would challenge you to show proof of X" in ML is an empty challenge when the technology is in its infancy. You might as well say "I challenge you to stream HD video over the internet" in 1989. Theoretically possible but technically impossible for the time. We proved it possible as tech improved, though.

The problem with cynicism is it costs nothing but still makes you look like an expert.


> That's the core concept there: start with a working but basic model and continue to extrapolate from that as time goes on.

This has literally nothing to do with few-shot learning though. You can always make a crappy model with a small amount of data and then improve it as you get more. And my point is that if you have a competitor with several orders of magnitude more data, their models are almost certainly still going to be better than yours.

Few-shot learning will probably be able to improve the baseline of what you can do on certain tasks, but you're not magically going to learn a sufficiently accurate cancer diagnosis algorithm from 5 radiology images.

The best case for few-shot learning is that you go from "worse than the alternatives" to "good enough for some people to get value", which will probably happen a few times, but is going to be a minor phenomenon.


Of course it's a kind of transfer learning.

>will never compete with huge stacks of data

Children learn with sparse data based on composable models, and are bio machines.


This boils down to a fundamental divide in life view. Anyone who has done any amount of expanding their consciousness will tell you that there is a ghost in the machine. It's always a mistake to assume that all you can see is all there is to see.


I do some AI work and have recently been thinking about this. Can you expand on it, or point me in the right direction regarding this idea?


It's always there for anyone to grab. Try lying on your back with all your gadgets turned off and eyes closed; and observe your brain thinking for a while; which is about as difficult as meditation needs to be. Might take some practice to not identify with the thoughts and get carried away, or to not block yourself by trying too hard. Then ask yourself while lying there: Who am I? Literally :) Who is observing the thoughts? The experience is not of this world, it can't be transferred using words. There are other ways to get a glimpse, mushrooms, ayahuasca etc; but introspection is the way to go for lasting results. I spent 32 years writing software, and I just know that a computer will never be able to do the same thing.


I am a materialist, there is no magic :)


I agree that there is no magic, but evolution has done some pretty impressive initialization.


You are also most probably human, have you even bothered to have a look inside?


Yeah, AlphaGo Zero is the tip of the iceberg.


I call bullshit on this. The problem is centralization, but that hasn't prevented startups. It just caused them to be bought by large companies like Facebook and Google (YouTube, WhatsApp, and Instagram are some examples).

I write about this extensively here: https://qbix.com/blog/index.php/2017/08/centralization-and-o...

Decentralization of trust, power, energy generation, and much more is on the way. This will lead to a lot more startup activity.

https://www.ted.com/talks/clay_shirky_on_institutions_versus...


Well, back to good old small business it is.


That’s right. The small-business SaaS biz, in niches.


Not every startup needs to become the next Amazon or the next Uber. Startups will still be around, and some will still be successful, but yeah, maybe they won't turn into mastodons that easily.


I think this was a very sober presentation of things, however it seems that it underestimates our inability to tell how the future will pan out, especially when we're moving on an exponential function of technological progress and creativity. Sure, the Internet window is closed and so is the mobile apps window but no one saw either of them coming - likewise, I can't tell yet what the next really big thing will be or when (unfortunately) but I wouldn't mourn garage entrepreneurship just yet.


The world does not need more Mark Zuckerbergs, it needs millions of small and mid-size businesses working on the build out of low carbon energy sources. Converting the world electricity supply to run off renewable sources will require tens of trillions of dollars of investment over a period of decades. It's not a small opportunity, but unlike the app economy, it's not a winner take all market. This is mostly a good thing, although we won't see many overnight billionaires from it.


I agree the future is built on small businesses. But to have the most impact on the environment we could all just stop eating so much meat.


Preaching abstinence has never worked. It didn't work for sex and it is not going to work for meat-eating. The right question/attitude is how can we have an abundance of meat while protecting the environment? One possible solution is lab-grown meat and if that doesn't work then we need to invent some other solution that produces abundance with minimal cost to the environment.


Travelling less is much more important. Skip that vacation to Europe or Asia if you really want to have an impact on your footprint. Just one such trip dominates the impact of eating meat.


"... four widely applicable high-impact (i.e. low emissions) actions with the potential to contribute to systemic change and substantially reduce annual personal emissions: having one fewer child (an average for developed countries of 58.6 tonnes CO2-equivalent (tCO2e) emission reductions per year), living car-free (2.4 tCO2e saved per year), avoiding airplane travel (1.6 tCO2e saved per roundtrip transatlantic flight) and eating a plant-based diet (0.8 tCO2e saved per year). These actions have much greater potential to reduce emissions than commonly promoted strategies like comprehensive recycling (four times less effective than a plant-based diet) or changing household lightbulbs (eight times less)."

http://iopscience.iop.org/article/10.1088/1748-9326/aa7541/m...
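Putting the quoted figures side by side (the recycling and lightbulb values are back-derived from the stated 4x and 8x factors):

  # figures quoted above, in tonnes CO2e saved per year
  actions = {
      "one fewer child": 58.6,
      "living car-free": 2.4,
      "one avoided transatlantic round trip": 1.6,
      "plant-based diet": 0.8,
      "comprehensive recycling (derived)": 0.8 / 4,
      "changing lightbulbs (derived)": 0.8 / 8,
  }
  base = actions["plant-based diet"]
  for name, t in sorted(actions.items(), key=lambda kv: -kv[1]):
      print(f"{name:40s} {t:6.2f} tCO2e/yr ({t / base:5.1f}x plant-based diet)")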


BTW, compared to many of my peers I have done all four.

http://www.earth.org.uk/

I'm definitely not strictly veggie: I just wolfed down a nearly free burger when offered! And though I haven't flown for years, I may need to visit China once or more over the next couple of years.


> having one fewer child (an average for developed countries of 58.6 tonnes CO2-equivalent (tCO2e) emission reductions per year)

I.e. one (or one thousandth, given his carbon footprint) less Zuckerberg? I could get behind that. Heck, I supported NPG back when I thought humanity had a chance.


We'll even out at about 10bil. Then this won't be a problem.


Indeed: if a few hundred million westerners made a real effort to curtail frivolous footprint then that 10b may happen with a lower-than-current total footprint.


Convincing a few hundred million people to act in a certain way without force won't happen before we hit 10b. I don't think that's an outlandish prediction to make...


<rant>Yes, but I get annoyed at some fat white western guy claiming that all those poor brown people a long way away are the problem. We (my family in the UK) managed to cut tonnes per year off our footprint while raising two kids in the UK; not that hard. Some would much rather blame others rather than even make any effort.</rant>


And I agree with you to a great degree (South African here...)

The problem is, it's still unlikely to be the solution. If we want the West to change, there are two ways to go.

1. Legislation - This is the force option, i.e. if you break the law you go to jail and we'll come and take your property by force.

2. Market - Make technology that makes alternatives easier/more efficient/sexier etc.

2 is hard, especially if the alternatives are more expensive or about the same. The cost of panels can't really go much lower, so we're stuck. Many people on the eco side don't like nuclear (they're crazy).

Technology is getting better and I think small steps in regulation (i.e. taxing emissions slightly) may work. That's the direction that we're heading in so I'm pretty hopeful.


0. Howzit! I have family in Cape Town etc.

1. There has to be some of this: no market is totally free, and free-riders are always a problem to some degree.

2. Actually this is exactly the flavour of product/service that I am working on. Have the more efficient solution be better and easier, not a hair shirt. The area we are working on could knock 5% off Europe's entire carbon footprint and save most families hundreds of USD per year also.


Remember that rich westerners have a far higher per-capita footprint than the rest of the planet, eg the top 10% has about 50% of the total carbon footprint for example...


> Just one such trip dominates the impact of eating meat.

impact of eating meat for how long? one year? I know that going vegan for a year is better than getting an electric car.


You could do both though.


Taken to its end, your argument leads to suicide. Let's do what's needed, not what's most extreme but useless.


I prefer figuring out how to incorporate the externalities instead of banning/quitting things. Meat at twice the price would probably solve the problem.


> Meat at twice the price would probably solve the problem.

While true, it basically sends the message of "If you have enough money you are free to do what you please to the detriment of the planet."


Although it sounds outrageous when you say it like that, it's pragmatic, and fits us as a species.

I'd like petrol to be 3 or 4 times the cost it is now. That would greatly reduce emissions, but fossil fuels could still be used in situations where there was no alternative (long distance flight, huge earthmoving machines?).

Inevitably, rich jerks would continue to ride their jetskis. So what?

You can't legislate people to have the right attitude. But taxing things does drive bulk behaviour.


Some of those giant earth-movers, like the tracked mining equipment that fills a giant mining-sized dump truck, are already powered by electricity that is generated on-site. They have big giant cables that trail behind them.

I mention this just to share that there is some chance of this improving, even in areas we might not think likely. If it is a long-term mining operation (short-term someday, perhaps) then I can't think of a technical reason that prevents wind and solar being used to generate the on-site electricity.

A recent HN article was about one of the mining dump trucks being powered by battery and using regenerative braking, meaning it was able to charge itself. Those giant diggers already use braking in the cables that power their shovels. Maybe there is something to capture there as well.

In my work, I had the chance to deal with some riggers and some crane operators (smaller stuff) by just exposure while collecting data. During that exposure, I learned something new. Lifting the stuff up is actually pretty easy. It is putting it down that is difficult. Maybe there is some energy to capture in that process?
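There is: for a rough sense of how much energy "putting it down" involves, lowering a mass m through a height h releases m*g*h joules. A back-of-the-envelope sketch with made-up but plausible crane figures:

  m = 20_000   # kg, an illustrative crane load
  g = 9.81     # m/s^2
  h = 30       # metres lowered
  joules = m * g * h
  kwh = joules / 3.6e6
  print(f"{joules / 1e6:.1f} MJ = {kwh:.2f} kWh per lowering (before losses)")
  # ~5.9 MJ, i.e. roughly 1.6 kWh recoverable in the ideal case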


That sounds pretty cool. I guess there will be a lot of mining for lithium in the future.


> If you have enough money you are free to do what you please to the detriment of the planet.

But it wouldn't be a detriment, the extra cost would be derived from the expenses associated with offsetting the negative externalities (by cleaning pollution, planting trees, recycling, water purification, whatever).


Uh, that's the exact business model of Whole Foods, now Amazon.

Basically "only the rich shall eat healthy"

And to be totally honest, I want to tackle this issue through a startup called "Standard Pantry": the idea that everyone should have access to a standard pantry of goods and recipes (in place of a standard minimum income), so they can sustainably and healthily feed themselves.


Replace

> only the rich shall eat healthy

With

> only the rich shall eat fancy

It's easy to eat healthy if you're not rich. Buy non-processed ingredients and cook them yourself.


"Easy" means that you have been educated to a state that makes you aware of such...

That's the problem.

We need to educate on a basic standard pantry that allows healthy to be so easy that it doesn't even cross our minds.


Educate? The only thing between me and a home cooked meal is fast food. Cooking is not inaccessible.

If the only problem is nutrition education then that is not a rich person privilege.

A large majority of the public cannot afford Whole foods but are educated enough to understand that home cooked meals from non-processed ingredients are healthier than their counterparts.

If education is all it took we'd all be rich with six pack abs. (Derek Sivers.)

Habit rules all.

Make unhealthy food illegal? Make marketing unhealthy food illegal?

Hmm...


See, your logic is sound, but the math hasn't been working out recently.

I learned to cook from my depression era grandmother. Home cooking is healthy, but not always cheap.

My grandmother used to tell me "this meal is $1.37 per serving"

As we made things, I learned to cook in Le Creuset pots from a woman who used to travel the world with Martin Yan and Julia Child... (we were a State Department family and had a lot of opportunity)

But my main point remains: the "educate" is part and parcel of the whole greater idea of a home-cooked meal: "family time"

"Fast food is the bane of existence as we have basically brought generations to not value what home cooking means... then we exploit them in the cheap labor force and perpetuate the idea that having a "home cooked meal" means that "the way mom used to cook it" is an actual phrase and it means the disruption of the basis of modern society, the family...

This issue can spiral and spiral, but my point is that rather than basic income, we need basic pantry - the basis of understanding, having access to, and the knowledge to cook a good meal without breaking the home (due to how hard people need to work at jobs for their ability to provide food on the table)


I think we will enjoy a cup of coffee together.


Deal. I'm in Oakland during the day and Walnut Creek during the nights. Let me know what works.


Houston during the day, South Africa in my dreams.

I'm there once a year. I'll pin this.


Well, this is true today at least. What are you surprised at?


What makes you think I am surprised?


My bet is on lab produced meat.


FWIW earlier this year I ate something soy based that felt and tasted like pork.


Yeah, good luck achieving that using pure capitalism, unless you liberate all the farm animals by buying them out. You need regulations to prevent factory farming!


We can start by removing subsidies.

Amount US taxpayers spend yearly to subsidize meat and dairy: $38 billion

To subsidize fruits and vegetables: $17 million

US retail price of a pound of chicken in 1935 (adjusted for inflation): $5.07

In 2011: $1.34

Pounds of chicken eaten annually per American in 1935: 9

In 2011: 56

Revenue collected by US fishing industry per pound of fish caught: $0.59

Portion of this figure funded by taxpayers as subsidies: $0.28

https://meatonomics.com/2013/08/22/meatonomics-index/
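For scale, the first two quoted figures imply roughly a 2,000-to-1 tilt toward meat and dairy:

  meat_dairy_subsidy = 38e9   # USD per year, as quoted above
  fruit_veg_subsidy = 17e6    # USD per year, as quoted above
  print(f"{meat_dairy_subsidy / fruit_veg_subsidy:,.0f}x")   # ~2,235x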


Comparing current day consumption to a single year in the past reeks of bias.


What kind of bias? I copy pasted a cluster of lines from that page, but the sources themselves are well documented, mainly from the gov itself.

Anyway here's another source that shows that meat consumption has dramatically increased since the 1920s, though that wasn't my central point:

http://www.npr.org/sections/thesalt/2012/06/27/155527365/vis...


Example:

I want to show that there is a lot more murder today than before because of guns.

I pick one year (a far edge of the bell curve type of year) where there was the least amount of murders, and compare it to today.

Without showing an average, or even a distribution, bias should be expected. There is very little reason for picking a single year in these types of arguments, and it is done too often.


Didn't conservatives do this with global warming? Except they took the hottest year, 1998, as a starting point to say warming isn't happening.

https://www.climaterealityproject.org/blog/three-ways-climat...


Why would we do that when our bodies specifically evolved to eat both vegetables and meat?

I, for one at least, figure that over the millions of years, evolution tried a version of Homo sapiens that couldn't eat meat, and it was selected against... why would we now attempt to overturn the apparent wisdom of that selection?


I don't think they're saying we should stop eating all meat — just that it should be significantly reduced.

That's pretty much a no-brainer for our health, for the environment and to reduce the suffering of millions of animals at the hands of the industrial livestock industry.

Personally, I used to eat meat multiple times a day, and now I only eat it once or twice a week. It has really improved my life. I think everyone should try it.


Quoting from the collective wisdom of HN here:

  Because it doesn't scale.


Oh crap, you summed up my long-ass comment in 4 words!

"I've been disrupted by a leaner competitor!"


> Why would we do that when are bodies are specifically evolved for eating both vegetables and meat?

Our bodies specifically evolved for reproduction, but that doesn't stop us from using contraception when we have sex. Evolution also designed us to have wisdom teeth, but that doesn't stop people removing them.

Your argument assumes that natural selection has some kind of innate "wisdom", and that's just not true. Natural selection doesn't produce perfection or have any kind of intelligence behind it, it simply produces "good enough" (or, as we're on HN, the minimum viable product).


It's where we are now though. We're adapted to it.


Because the wisdom of that selection isn't so wise, now that there are almost 8 billion of us. Natural selection is dictated by conditions; conditions change. There is nothing permanent about it. And existing population of the target species (humans in this case) is itself one of the conditions.

Also it's probably a mistake to attribute something like wisdom to what is essentially a statistical crap-shoot. The great temptation of religion is of course the idea that there is an Intelligence at work. But just to pick a silly example, does the dodo bird or the passenger pigeon think it was "wise" and "intelligent" for the forces of nature to select for meat-eating humans? Dodo philosophers and theologians in the last days - did they question why God would unleash such a fury on them?

Anyway I can tell you human ones will. Natural selection will most likely start killing off humans as soon as our technology can't keep up with all our baby-birthin'. It will manifest in forms you're already seeing in today's headlines. Voluntarily consuming less is an attempt to deal with it before the wisdom of natural selection deals with it.


This is the classic "appeal to nature" fallacy. On top of that, we don't know much about the diet of ancient humans, including how often they actually got to eat meat. Natural selection isn't wise, it's just an uncaring response to the environment. The environment has changed dramatically.


It's not wise, no, but it puts us where we are. Genetic engineering is the only tool we have to fight it, and we're not so good at that.


Because it’s killing the planet? We also evolved a brain so we could adapt. Hopefully we’ll use it.


Heck, even replacing beef by chicken or farmed fish is a lot less resource intensive.


> we could all just stop eating so much meat.

Sounds promising. Maybe you can build a small or medium sized organization around trying to make that happen.


Impossible Foods is a startup working on this mission. They have a really tasty product too.

http://impossiblefoods.com/about/


We could stop flying, driving and buying electronics from Asia. Are you willing to do it?


I've done most of those, reduced my family's footprint by tonnes CO2e per year, and we are not wearing sackcloth and ashes.

http://m.earth.org.uk/


I prefer we keep doing all those things if we can figure out how to make them harm less.


And why do you think that? A lot of the so-called gig economy employees are "small businesses", or at least that's what Uber et al. say.


What absolutely astounds me is how Facebook literally has failed to offer anything of substance to empower people from their massive user base.

For example, it should be "three clicks and you're in" to found a virtual company and state "this is the problem we are going to solve, who's with me" - and another person from across the globe should be able to say "heck yeah let's do this" and then Facebook would say "that sounds great we will host your site/app/whatever and when you show traction on that problem you will hit level 1 funding of "X" and so on and so on..."

Facebook has the daily attention of over a billion people they claim, yet they literally just steal their energy and provide no building power.

Start a fucking economy and empower people unlucky enough to be born in a far-flung place such as the Congo but smart enough to know how to solve various problems affecting many.

Facebook is not making the world a better place until they do something like this.


I don't know why it astounds you. Facebook is a business. I doubt they spend much time in the board meetings pondering what they are doing for the planet. Instead I'd guess its all about mobile clickthrough rates and ROI on recent acquisitions.


I'm astounded at what a bullshit business it is.


I agree with your sentiment - they have the potential to truly contribute to "making the world a better place", yet seem to be blinded by their narrow focus, or lack of vision..


The current trend is to transition ~all electricity generation to renewables because they cost less. Part of this is because interest rates are so low, but cars also seem headed in that direction (2.5% of new car sales worldwide, with dropping prices and rapidly growing sales).

So, I don't know if we actually need to change much at this point.

EX: Per-person US emissions are down ~18% since 2000. We are on track to hit a 50% drop reasonably quickly.


Why do you think the world needs SMBs rather than larger corporations with more resources? Are there any economic studies or more academic writings on this topic? I'd be curious.


For one it would be a major factor in reducing income inequality. Large companies are designed to funnel profits to relatively few people.

I also strongly believe we would see more innovation and generally a more resilient economy.


This is merely because of the structure that has been dominant for organizing large corporations. It is authoritarian hierarchy, which is essentially the kings of old. We need new ways of organizing our companies, new ways of empowering employees, and better ways of distributing and holding the value of the company in more equal ownership proportions. This is a tough sell, but I think this period of history will go down as a dark age of freedom in the private sector, and the next age will be the empowerment of individuals as owners and decision makers in large organizations.


I agree with you about the authoritarianism in large corporations currently, but what makes you think the future will be more free?

The internet looked like it might help for a while, but it's currently being constricted by the ISPs and corporations like Facebook and Google, while more and more data is being tracked about each person at every moment. Authoritarianism broke down many times because you couldn't track what everyone was doing, and eventually someone managed to make a plan to fight back. Information technology is just making it easier for authoritarians to maintain control.


If you take solar arrays as an example, someone needs to do the build-out. That someone is likely to be a local or regional company that hires local workers. Planning and construction account for a significant fraction of project costs now that materials costs have plummeted, so this is money that ends up recirculating in the local economy. Of course there will be large companies in the mix, but there won't be a network effect where one company ends up owning the whole industry like Facebook, which is a good thing.


That doesn't make much sense. A good comparison would be to look at big Telecom. A handful of large companies both build out the infrastructure and then sell their services to the public. Why would building energy infrastructure be any different?


That's the secret of the robustness of the German economy.


I don’t think it’s the end of the startup era at all. It might be the end of startup hype feeding on itself, with valuations not tied to reality and companies without a business.


I get what he's saying. The internet used to be peaceful, grassy plains. Now it's loud, bustling New York, with bigger fish out to get you. This doesn't mean you can't disrupt everything, but now you have to outrun a new establishment that reinvests all its money into eating you more efficiently. Companies like Facebook already have giant audiences, and copying your product can be done virtually overnight.


> today’s new technologies are complicated, expensive, and favor organizations that have huge amounts of scale and capital already.

That is not a fact of life or act of nature, but to a great degree it is a choice. Decades ago, engineers built many technologies that were conceptually simple, elegant, open, free-as-in-liberty, and hackable - not by accident, but to empower hackers and end-users. Those efforts created the environment, platforms and tools that the startups thrived on. What are today's engineers building for the next generation?

For example, I recently had to learn in-depth how computers boot these days - down in the weeds with ME, TPM, UEFI, SEDs, PXEs, GPT, and more:

First, the complexity of that small step of starting your computer is beyond an individual's comprehension; just one spec was 2,500 pages, many technologies' roles overlap, and it's not clear why there is redundancy or how they integrate with each other. Perhaps someone who specializes in this narrow field as a full-time job actually grasps it all, or maybe I could if I had a year to dedicate to it - but I can't imagine some new teenage hacker mastering it.

Second, much of it is proprietary or at least not free. TPM and ME, for example, are ultimately outside the user's control.

Compare that to the former system of POST - BIOS - MBR - boot loader. I'm not saying change is bad or that we couldn't have improved on the old system, but clearly nobody designing the new stack thought about making it hackable or empowering end-users.
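Even answering the basic question "how did this machine boot?" now means probing firmware interfaces. A minimal sketch, assuming a Linux box with sysfs mounted (these are the standard paths the kernel exposes):

    import os

    EFI_DIR = "/sys/firmware/efi"
    EFIVARS = os.path.join(EFI_DIR, "efivars")

    # The kernel creates /sys/firmware/efi only when it booted via UEFI.
    if os.path.isdir(EFI_DIR):
        print("Booted via UEFI")
        try:
            # Each efivarfs entry is a variable name plus its vendor GUID.
            for name in sorted(os.listdir(EFIVARS))[:5]:
                print("  efivar:", name)
        except OSError as err:
            print("  (efivarfs not mounted:", err, ")")
    else:
        print("Booted via legacy BIOS (or the kernel lacks EFI support)")

Under the old scheme there was nothing to query: the firmware ran POST, loaded the 512-byte MBR to a fixed address, and jumped to it.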

I'll go a little further and say it reflects - I can't support a causal connection - a broader loss of interest in the 'American Dream', equality of opportunity (or at least an abundance of it for all), and empowering the little guy and underdog. The U.S. also cuts back education and other programs that give opportunity to less advantaged citizens, often with the idea that the powerful should not be hindered by taxes or government.

Polls show interest in democracy is slipping, with the US government saying it's not worth promoting abroad (the White House and State Dept both explicitly said it) - democracy is political empowerment of the little guy; everyone gets a vote and is equal. Some political groups even explicitly argue that their group should dominate the others. I'm not starting a political debate; I'm just pointing out that it's a much different outlook.


We don't need more humongous private firms like Google/Facebook/Apple. If they were replaced by other firms, those would only be even bigger and more powerful. Pretty sure most would say no to that.

What we need is a more tech-savvy populace that knows how to put up its own website (a Facebook replacement) and apps to share photos (Instagram) with family and friends.
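To gauge how low the technical bar actually is, here's a minimal sketch of the self-hosted photo-sharing idea using only the Python standard library ("photos" is a hypothetical folder name, and a real setup would still need auth and TLS):

    from http.server import HTTPServer, SimpleHTTPRequestHandler
    import functools

    # Serve a local "photos" directory to family and friends on port 8000.
    handler = functools.partial(SimpleHTTPRequestHandler, directory="photos")
    HTTPServer(("0.0.0.0", 8000), handler).serve_forever()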


The point has been proven: there are some business models that can be jump-started with VC money and generate tons of cash flow. That will never go away.

What’s happening is that the world is learning not to invest hundreds of millions in a “bro app”. VCs learning = less investment in absurd ventures = fewer startups in total.


The cryptocurrency space has produced multiple unicorns - they just mostly don't come from Silicon Valley and don't follow the typical VC funding model.

Bitcoin, Ethereum, Ripple, Bitcoin Cash, Litecoin, Dash, NEM, NEO, Monero, IOTA.


I admit I don't know much about cryptocurrencies, but how are those unicorns? Aren't they all currencies rather than companies?


Current startups are jokes. Not many of them have real tech, let alone a working business model. All they have is a website or app, nothing more.


This article fails to take into account blockchain startups (especially for mobile) and the trend toward decentralization.


The champion of the dying cause is a trope going back (at least) to the Greek poet Pindar. It usually treats the titans of the near past as god-like and sees the future as one of inevitable decline. It ends with Get Off My Lawn.

I've already lost a few bets on the future. I didn't see Uber/Lyft coming, although I thoroughly hated taxis. What do I know?


Interesting article, but if the big five really dominate all resources in SF/SV, maybe other cities will become the next Silicon Valley.


robo machine startup end u


This just sounds like unwarranted negativity to me. There will always be innovation.


I don't think the takeaway is that innovation will slow down. The point of the article is that the large, powerful companies we have now are on top of their game - they understand technology, networks, and the internet - and there's no big new shift coming in which they'll lose to startups.

Innovation will continue and Alphabet and Amazon etc. will remain dominant and powerful, controlling and doing much of the innovation.

In the past, startups had whole new markets to enter where big corporations were not prepared to go. Now, potential new markets aren't wide open to small startups: the big corporations are ready to jump in and dominate. It's not the same.

End of widespread startup success ≠ end of innovation (but it will mean some types of innovation are stifled or don't happen)


I'm pretty heavily involved in the Ethereum ecosystem, and it's a great place for startups.


One ICO a day keeps the doctor away.


I have some questions as to why you would say that - could you reach out to me at my email or add yours to your profile? Thanks.


Why do you say thanks? They may not want to say any more than they’ve already said, or they may be unfriendly to you.


No worries, I'm not an unfriendly person.


> It is widely accepted that the next wave of important technologies consists of AI, drones, AR/VR, cryptocurrencies, self-driving cars, and the "Internet of Things."

While I agree with a lot of this analysis, the fact that the author seems not to understand the difference between cryptocurrency and blockchain doesn't exactly signal credibility in this domain.


I think each one of those platforms is growing rapidly. So what if they haven't hit critical mass yet? Each one has that potential. What are we looking at then, a few years before one or several of them become as big as mobile?



