The End of the Beginning (stratechery.com)
202 points by nikbackm on Jan 7, 2020 | 115 comments



I've been thinking along the same lines, albeit with a more personal take as a software engineer.

Basically, starting around 15 years ago, there was a proliferation of bootcamps teaching fullstack development, because software startups were the new hot thing and they desperately needed generalist engineers capable of spinning up web apps quickly. Rails was the hot thing in those days for the same reason. Hence we saw many new grads, and even people changing careers, move into fullstack development, and bootcamps churned out these workers at an incredible pace (regardless of quality). The job market absorbed them because it was desperate for fullstack engineers.

During that time, the best career move you could make was to join the startup movement as a fullstack engineer and take some equity as compensation. That equity, if you were lucky, could really be life changing.

Fast forward to now: the search space of low-hanging CRUD apps (e.g., Facebook, Twitter, Instagram) has been exhausted, and even the new unicorns (e.g., Uber) don't make that much money, if they make any at all. The companies that got there first have become big; they are the winners in the winner-take-all field that is cloud software. These days they have little use for fullstack engineers, and instead want specialists who do a few things at a deeper level.

Today, even the startup equity math has changed a lot. Even with a good equity package, much of the search space has been exhausted, so joining a startup as a fullstack engineer doesn't pay the way it used to. Instead, a better move is to try to get into one of the big companies, because their pay just dwarfs that of startups and even medium / big size companies.

Just my 2c as someone who is very green (5 yrs) doing software engineering. Happy to hear criticism.


"Fullstack" development is just a tiny slice of software. There's an entire economy to be automated, there are endless inefficient processes and massive opportunities there.


And many of those processes are spreadsheet based, i.e., CRUD!


My understanding is that startups never paid well. Sure, if you got lucky and were employee #7 at Facebook, it paid off great! But even during that time frame, working at startups instead of MS or Google was not a good proposition. And even during the dot-com era and the rise of Microsoft, it paid a lot better to work at IBM than at that class of startup.


True. I prefer working for startups and tiny companies -- not because of the income potential (as you point out, there are better ways of chasing that), but because the really interesting stuff is almost always done by startups and tiny companies.

Small companies make it possible, large companies make it economical.


Not always true. There are some very interesting projects at Google, Apple, etc. that can only be done by companies capable of throwing hundreds of millions or billions of dollars at problems with uncertain payoffs.


That is why I said "almost always done" and not "always done".

Of course, a lot of that is subjective, as it depends on what you find interesting. I don't personally tend to find large-scale projects very interesting because they tend to be worked on by large teams, rendering most individual roles to something much more narrow. But I do know plenty of devs who get very excited about large-scale projects. Vive la différence!


Sure, but is the probability of working on them in an interesting, impactful role greater than at a financially successful startup?


Right, but this is because startups are largely a financial vehicle to transfer value created by employees to investors and founders. If startups actually gave employees a real stake in their business, the equity math would make much more sense. It's a cultural issue, but also, the set of people working at startups for worthless equity is mostly disjoint from the set that can get hired at Google/FB. It's almost a different career.

The value of fullstack engineering is also plummeting because the tooling (e.g., React) has gotten so good that the barrier to entry is very low.


Historically, engineers who could be hired at G/FB worked for startups on better terms than the big tech firms of their day (IBM/Cisco/HP/Oracle, etc.) offered. The market changed during the last recession so that employees no longer receive "real" equity compensation, which flipped the math from stable pay vs. risky equity to better pay vs. risky lower pay.


I’d like to see some analysis on how disjoint startups are from the highest paying companies. I think it’s true.


Software is still eating the world, and there will be plenty to eat for a long time. Cars (the foremost example in the OP) had basically eaten the world by the 1950s (sometimes even in a fairly literal sense).


Interesting to note that software is in the process of eating the automobile industry.


Isn't the perception of a saturated market persistent though? I mean, there were many social media apps when Facebook started. Twitter was created when microblogging had already become a trend.


Anecdotally, it seems like the rate of disruption in this space has been steadily slowing down.


Even the most pointed to example and seemingly saturated space, social media, had three major players emerge last decade: Instagram, Snapchat and TikTok.


Viber 2010, Pinterest 2010, Ask.fm 2010, Twitch 2011, WeChat 2011, Medium.com 2012, Slack 2013, Telegram 2013, Musical.ly 2014, Discord 2015, are all in related/overlapping space and have tens of millions of users.


If you count WeChat and WhatsApp, more like billions.


The low-hanging CRUD apps may have been done, but there are still less low-hanging CRUD apps like Flexport (Rails), and the advance of AI is opening up loads of new opportunities.


If tools keep getting better, the barrier to entry for simple CRUD apps goes down, and you have a new class of journeymen programmers who do simple work not unlike construction.

The specialty jobs still exist. Planning is still another pay grade. But the average labor cost goes down and the volume goes up.

There are so many apps that could be written but the profit potential is too low to be worth it.


The tragedy of people who want to make money and believe it will be 'life changing' is that no matter how many times they are told it won't be, they think 'ha, you only say that because you got yours'.

What useless shit are you going to buy with 'life changing' money exactly that a software developer's salary won't allow?


> What useless shit are you going to buy with 'life changing' money exactly that a software developer's salary won't allow?

I still have the childish dream of wanting to change the world. Specifically, I want to be involved in certain kinds of political activism that will likely piss off a number of people, and I want to have enough money that I can support my family without needing to back off if my income source is threatened.

I’m lucky enough to be almost at that point due to startup equity.


Yeah, I'm personally uninterested in spending money. But I could quite easily find things to do with money in the 10-100M range. The first would be starting companies without being beholden to investors who are monomaniacally focused on short-term cash extraction. Then investing in entrepreneurs who are up to good things in the world. And then supporting people who do useful things that don't pay well.


Unfortunately 10M - 100M can't do much in developed countries, from my experience.

But in developing and emerging economies you can make a huge impact.


Do you feel like you'll graduate to 'political problems are human problems and human problems boil down to idiots having children, idiots raising children and society being unwilling to rid itself of humans who have convincingly shown to be counter-productive to society', coupled with 'for society to function long term, it needs to have a shared goal, vision, not selfish goals of one house, two cars, fence, big tv and plenty of derp entertainment for each family'?

I feel like anything that isn't addressing these two issues is rearranging deck chairs on the Titanic as it heads for the iceberg.


I think that dismissing large swaths of the population as “idiots” is overly simplistic.

There are many groups with power that benefit from keeping people misinformed and unable to rationally determine what is truly in their own self interest, all the way from childhood indoctrination on through to old age.

I’d want to neutralize some of the most egregious uses of such misinformation before simply blaming the people entirely themselves.


Do you feel like you'll graduate to 'Political problems are not emergent individual problems, but an issue of rules. The rules of the political game are rigged to filter out competent people and only allow shallow ideologues to power'? ;)


This is an interesting counterpoint. On one hand, alexashka makes valid observations, but to what degree is the behavior (s)he is observing a result of the lack of high-quality political leadership almost anywhere on the planet? And if that could somehow be changed, might people's behavior suddenly change as well?


I live moderately. I don't wear jewelry or watches. All my clothes are from Uniqlo. I don't have expensive toys: I have a mediocre gaming PC, my old MacBook Pro, an iPhone X, and a small electric guitar setup with a few pedals and a small amp. I have a small 29-gallon aquarium. I live in a cheap area of NYC (Queens, near Flushing). I have a monthly karate membership, and I give regularly to my local church.

But I started my career late. I have three different unrelated degrees (no student debt thankfully, I attended public schools) and have jumped between fields in my career. I also need to be ready to support my older family members, all single or divorced women (aunt, mom, grandma). I will also probably have kids in two years.

So a FAANG salary would be nice.


The inverse of that is being poor, where one unlucky break can lead you to living on the streets.

As with most things in life, the best path is the middle path.


Peace of mind.


Independence...


I'm not exactly sure where I fall on this. Ben is a really smart guy (way smarter than me), but I feel like this could be a classic case of hindsight.

Now, looking back, it makes sense that the next logical step after PCs was the Internet. But from each era looking forward, it's not as easy to see the next "horizon".

So, if each next "horizon" is hard to see, and the paradigm it subsequently unlocks is also difficult to discern, why should we assume that there is no other horizon for us?

I also don't know if I agree that we are at a "logical endpoint of all of these changes". Is computing truly continuous?

However, I think Ben's main point here is about incumbents, and I agree that it seems it is getting harder and harder to disrupt the Big Four. But I don't know if disruption for those 4 is as important as he thinks: Netflix carved out a $150B business that none of the four cared about by leveraging continuous computing to disrupt cable & content companies. I sure wasn't able to call that back in 2002 when I was getting discs in the mail. I think there are still plenty of industries ripe for that disruption.


Until I have something resembling Iron Man's Jarvis with at least a subvocal interface, I think there's still a long way to go for "continuous" computing. I currently still have to pull out a discrete device and remove myself from other interactions to deal with it. If I'm not on that device all the time, then I don't have continuous computing. Maybe continuously available computing is more accurate?


Right -- today you need to remember to charge your phone, you don't take it everywhere (and don't have signal everywhere, especially internationally), and you need to take it out of your pocket to use it, and type into it with your thumbs (though voice "assistants" are here, and some people get use out of them.)

The end-goal is being able to talk to anyone at any time, remember anything you've seen before, and know the answer to any question you can phrase that someone has already answered.

(Now, you might say that parts of it sound less than ideal, but I think we'll get there by gradient descent, though maybe with some legal or counter-cultural hiccups.)


Everything you describe is a very minor tweak to what already exists today.

The bottom line is that everyone already has their phone charged and with them at all times, it's probably out of their pocket most of the time anyway, and they can get the answer to pretty much any question they can phrase that someone has already answered. The voice assistants will continue to improve, but some people actually prefer thumb-typing for various reasons.

And the "improvements" you suggest probably bring even more problems from privacy, security, and mental health issues than any plausible benefits they might provide.


> Everything you describe are very minor tweaks to what already exists today.

Sure, and the iPhone was the same thing: I had a 3G Windows phone years before the iPhone that essentially did all the same things. But the iPhone was still a breakthrough nonetheless.


This captures my sentiment.

It's hard to see how the incumbents could be beaten, precisely because of how effective they are around data and at buying potential competitors (Instagram, YouTube)... but that is also precisely because we don't know what the next market shift is, or whether there will be one.

What happens if AI takes off? What happens if 3D printing magically becomes 100x more efficient and you can print anything you want from home?

We don’t know. It doesn’t seem like the big incumbents could be defeated, but history repeats itself.


> it seems it is getting harder and harder to disrupt the Big Four

Microsoft, IBM, Oracle... What is the other one?

Oh, right, wrong decade.

(My point is, it is not at all obvious whether it is getting harder to disrupt the incumbents.)


The conclusion of the article is that it is getting harder to disrupt the incumbents. I'm saying that regardless of whether it is or isn't, there are still lots of new companies to come that can take advantage of technology to disrupt other, old-guard incumbents.

That, I think, is where the metaphor Ben uses breaks down. The automobile is a single idea (move people around with an ICE). Tech is more like the ICE than the car. So, there might not be much disruption to consumer hardware (Apple) companies, or search (Google) companies, or cloud computing (Amazon, Microsoft) companies. But there will still be lots of disruption to come as tech (just like the ICE) gets applied to new fields.


> Microsoft, IBM, Oracle... What is the other one?

Cisco, of course.


It was actually Oracle, Sun, Cisco, and EMC who were the four "horsemen of the Internet" in the run-up to the dot-com bubble.


Isn't that kind of his conclusion too, though? It matters inasmuch as we're less likely to see new general-purpose public clouds come into play, but he didn't seem to predict there was no more room for change in the industry, just that we're unlikely to see those incumbents toppled from certain foundational positions in the ecosystem.


Much of Ben's writing recently has been on the topic of regulation and anti-trust, specifically in relation to tech companies. If I had to summarize his thesis, I'd say it's something along the lines of: "Previous antitrust regulation prioritized price. Tech allows for better products by nature of aggregation and network effects, and to promote competition, we need a new prioritization in our regulation".

So, I see this article as being part of that thread. The conclusion is that the Big Four are not going to get disrupted, which is bad, and so we need a new framework of antitrust to allow for it. I might be putting words in his mouth, but I don't think it is really that much of a jump if you read his body of work, especially recently.


I read Ben's writing a lot and listen to the podcast, I think you did a pretty good job capturing his points.


Was it really that hard to predict the Internet? SF authors picked up on it almost immediately.


Which authors are you thinking of?

Up until the web existed, I think it was extremely hard to usefully predict the Internet's impact. TCP was invented in 1974, but it wasn't until 1993 that we started seeing things that really pointed to where we were going: https://en.wikipedia.org/wiki/List_of_websites_founded_befor...

Of course, everybody knew computers would be important. But that was true starting in the 1960s. E.g., Stand on Zanzibar has a supercomputer as a central plot element.


I mean, if you even look at popular sci-fi, nobody exactly predicted the internet as it is today. It wasn't until someone coined the "information superhighway" that gears started turning. Even then, the earliest commercial websites were basically just digital brochures and catalogs. It wasn't until SaaS, search and social took off that we grasped what the specific use cases were that were going to be the dominant money makers. And the internet evolved quite a bit as a result.

Some people like me still lament the loss of the 90s internet in some ways, as it felt like a more "wild west" domain and not saturated and stale like it is today.


The concept of an information superhighway dates to at least 1964:

https://en.wikipedia.org/wiki/Information_superhighway#Earli...


It looks like those terms from the 60s and 70s referred to "superhighway" in regards to communication, but didn't prefix it with "information". And whether someone incidentally used the word or not is sort of irrelevant. It started to become popular as a means of visualizing the possibilities of the internet in the late 80s and 90s, and that's when I think the first people started to imagine what this might become in the abstract.


I'm leaning the other way -- that the usages were significant.

The Brotherton reference in particular interests me -- masers and light-masers (as lasers were initially called) were pretty brand-spanking new, and were themselves the original "solution in search of a problem". I've since come to realise that any time you can create either a channel or medium with a very high level of uniformity and the capacity to be modulated in some way, as well as to be either transmitted/received (channel) or written/read (medium), you've got the fundamental prerequisites for an informational system based on either signal transmission (for channels) or storage (for media).

Which Brotherton beat me to the punch by at least 55 years, if I'm doing my maths correctly.

I've made a quick search for the book -- it's not on LibGen (though Internet Archive has a copy for lending, unfortunately the reading experience there is ... poor), and no library within reasonable bounds seems to have a copy. Looks like it might be interesting reading however.

Point being: Brotherton (or a source of his) had the awareness to make that connection, and to see the potential as comparable to the other contemporary revolution in network technology, the ground-transit superhighway. That strikes me as a significant insight.

Whether or not he was aware of simultaneous developments in other areas such as packet switching (also 1964, see: https://www.rand.org/about/history/baran.html) would be very interesting to know.

Not much information on him, but Manfred Brotherton retired from Bell Labs in 1964, and died in 1981:

https://www.nytimes.com/1981/01/25/obituaries/manfred-brothe...


That's a cool article on Baran, it looks like he predicted Amazon in 1968, and they were experimenting with early email type systems in that time, too. I'm sure the bulletin board followed shortly after.

Brotherton wrote a book on Masers and Lasers in 1964, you might find more info in that: https://www.amazon.com/Masers-Lasers-They-Work-What/dp/B0000...

Is that the one you mean?


Yes, that book.

Baran's full set of monographs written for RAND are now freely available online. I'd asked a couple of years ago if they might include one specifically, and they published the whole lot. Asking nicely works, sometimes.

Yes, there's interesting material there.

https://www.rand.org/pubs/authors/b/baran_paul.html


I'd say networking was not incredibly difficult to predict, but the businesses and products it allowed for (and how we use them) were very difficult.


> but I feel like this could be a classic case of hindsight.

Well, it's 2020 after all.

> Now, looking back, it makes sense that the next logical step after PCs was the Internet.

But the internet existed before PCs.

> and I agree that it seems it is getting harder and harder to disrupt the Big Four.

I agree, but then again, people thought AOL was hard to disrupt so you never know. A company can look invincible one day and irrelevant a few years later.

> I think there are still plenty of industries ripe for that disruption.

Yes, but the low-hanging fruit has already been taken. I suspect the next round of disruptions will be more difficult and less profitable.


This has happened in every industry and in every time period. The mistake made in the article is that the author doesn't actually appear to realise how important this effect is (it is always amazing to me that you have all these people writing about the same topics, always from the same angle...no-one thinks to just open a book, and check what happened last time...actually I am aware of one book that has done this, just one). Again: every industry, every time period. It is permanent.

Definitely, you see new industries replacing old ones. Acknowledging the above isn't denying progress. But every industry consolidates down to a few large companies.


Bah. He took three data points, built a continuum out of them, and says that since the third data point is at the end of his continuum, we must be at the end.

But this doesn't fit any of the upcoming trends. The biggest current trend is edge computing where cloud-based services introduce issues around latency, reliability and privacy. These are big money problems - see smart speakers and self-driving cars. The cloud players are aware of this trend - see AWS Outposts that brings the cloud to the needed location and AWS Wavelength where they partnered with Verizon to bring compute closer to people.

But privacy in a world full of data-driven technology is still very much an unsolved problem. And most of the major technology players have public trust issues of one sort or another that present openings for competitors in a world where trust is increasingly important.


I've seen similar analogies to the airline industry, but IMHO this misses the forest for the trees. Tech isn't an industry, like automobiles or airlines. Tech is industry, like machine tools and assembly lines. When industry was first developed in the 1810s it meant specifically the textile industry, which was the easiest existing manufacturing task that could benefit from power tools and specialized workers on an assembly line. It was only a century later that we could begin to dream of things like automobiles and airplanes.

Similarly, I bet that our great-grandchildren will look upon the Internet, e-commerce, and mobile phones the same way we look upon railroads, paddle steamers, and power looms. Great inventions for their time, and drivers of huge fortunes, but also quaint anachronisms that have long since been replaced by better alternatives.

Notice that the article focuses almost entirely on I/O and the physical location of computation. This is a pretty good sign that we're still in the infrastructure phase of the information revolution. When we get to the deployment phase, the focus will be on applications, and our definition of an industry focuses on what you can do with the technology (like fly or drive) rather than how the technology works. In between there's usually an epochal war that remakes the structure of society itself using the new technologies.

FWIW, there was a similar "quiet period" between the First and Second Industrial Revolutions, from 1840-1870s. It was very similar: the primary markets of the original industrial revolution (textiles, railroads, steamboats) matured, and new markets like telegraphs were not big enough to sustain significant economic growth. But economic growth picked up dramatically once a.) the tools of the industrial revolution could be applied to speed up science itself and b.) the social consequences of the industrial revolution remade individual states into much larger nation-states, which created larger markets. That's when we got steel, petroleum, electrification, automobiles, airplanes, radio, and so on.


I don't agree. Comparing histories is not a reliable way to predict the future. I think we'll see the growth of governance-level disruption: a pushback that will encourage home-grown solutions for countries that are not necessarily aligned with US interests. That field is wide open and growing!


Policy driven disruption is the only option I see to break the cycle. Let's see.


I've been reading Zero to One, and one of the ideas the book pitches is that monopoly and innovation are two sides of the same coin. Only monopoly-like companies have time and money to dump into innovative products (Bell, GE, IBM, Google). And people only invest in an idea if they think they can profit from it (look at how crucial a patent system was for the industrial revolution).

Competition is important, but to drive efficiency - weed out bad ideas and bring down costs of already created innovations. But the thing that usually drives monoliths out of business is... new monoliths.

The somewhat contrarian takeaway is that some (keyword) amount of consolidation is good.


That isn't right.

The truth is somewhere in the middle: definitely, you see some large companies invest heavily but (more commonly) you see small firms nibble at the edges of an existing product until it is too late for the larger companies.

Saying that monopoly produces innovation is like saying government produces innovation. It happens but given a long enough period all things happen. The question is about incentives: the incentives to innovate within large companies are terrible, that is why it doesn't happen most of the time.

Also, consolidation has happened in all industries at all times. It is a function of things that repeat: knowledge curves, lindy effects, etc.

Just generally: be wary of Thiel and his ilk. They have a predilection for ahistorical nonsense. The history in this area, broadly business history, is particularly difficult and not well known (the only tech person who I have seen get close is Patrick Collison..and then...not really).


That's not Thiel's argument at all. His argument is that the most innovative companies tend to become monopolies.

However, monopolies are not always due to innovation, nor are monopolies always inefficient. As you mentioned, it's a function of things that repeat, but also of stronger players that gobble up less efficient and/or less innovative firms.

I would read between the lines. Business history is indeed difficult.


Yep, more of the usual basic errors.

First, I replied to a comment; the majority of your points should be directed there. Second, your point about monopolies or why they happen is just uninteresting (the question of "always" is not something that can be answered). Third, your point about the most innovative companies tending to become monopolies is wrong...I am not sure how little you have to know to think this but it is certainly very minimal. The historical evidence is that industries consolidate down to a few large companies, not that they become monopolies. Fourth, again, I repeat what I said about ahistorical nonsense. Neither in theory nor in reality is monopoly a natural consequence of capitalism. Fifth, most monopolies that have existed in reality, by number, are not privately owned and are not innovative. There is a fairly obvious inverse correlation between monopoly and innovation (again though, the issue that is confusing you is thinking that innovation -> monopoly...this isn't a thing).


Thiel's point is that, at the extreme end, if you have hyper-competition, firms will have no money left over to invest in moonshots (self-driving cars, cloud computing, etc.); instead they focus on pure survival.

I don't see how a company that's in a life-or-death struggle could pour hundreds of millions or billions of dollars into R&D, but perhaps I'm missing something.


> What is notable is that the current environment appears to be the logical endpoint of all of these changes: from batch-processing to continuous computing, from a terminal in a different room to a phone in your pocket, from a tape drive to data centers all over the globe. In this view the personal computer/on-premises server era was simply a stepping stone between two ends of a clearly defined range.

Sure, that's what happened.

But what jumps out for me is that, at both ends of that range, users are relying on remote stuff for processing and data storage. Whether it's mainframe terminals or smartphones, you're still using basically a dumb terminal.

In the middle, there were personal computers. As in under our control. That's often not the case now. People's accounts get nuked, and they lose years of work. And there's typically no recourse.

As I see it, the next step is P2P.


The current computing paradigm is all about "data entry": you are your own "sysadmin", slowly enriching others by working for them. Yes, the saved time is valuable for you, but you also create value for "them". We have moved from mainframe to phone with very little design change. The current wave was about convenience. There is a coming wave of people-centric redesign. Interfaces will be vastly different. This article just reflects a lack of imagination.


That's a very bold claim that goes against Ray Kurzweil's hypothesis that tech is accelerating. Maybe (though I think it unlikely) cloud/mobile is the end game for silicon. But what about quantum? What about biological? What about nano? What about AI? There are literally a ton of potential generational changes in the making that could turn everything on its head again.


Why is Ray Kurzweil's hypothesis particularly important to contrast other hypotheses against? What sets it apart in relevance and/or authority?


Because its evidence is pretty straightforward: you can take Wikipedia's list of important inventions and plot their frequency on a chart.
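Roughly something like this, as a minimal sketch in Python (the invention years below are a made-up sample for illustration; in practice you would pull them from the Wikipedia list):

    # Minimal sketch: count how many "significant" inventions fall in each century.
    # The years below are a hypothetical sample, not real data.
    import matplotlib.pyplot as plt

    invention_years = [1450, 1712, 1769, 1804, 1837, 1876, 1885, 1903,
                       1927, 1947, 1958, 1969, 1989, 2007]  # made up

    plt.hist(invention_years, bins=range(1400, 2101, 100))  # one bin per century
    plt.xlabel("Year")
    plt.ylabel("Inventions per century")
    plt.title("Invention frequency over time (illustrative only)")
    plt.show()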

Of course, there are debates around which inventions count as significant. And there is recency bias.

Never underestimate the power in something easy to communicate.


> "Never underestimate the power in something easy to communicate."

Yes, I suppose that's the biggest thing with these "futurologists": their predictions are both tantalizing and easy to digest. I reserve the right to remain skeptical about any of these Silicon Valley religions though.


This is not limited to Silicon Valley. Every predictor since before Nostradamus has used this to their advantage. Those who haven't lose their audience, because predicting the future in concrete terms is really hard. So hard that nobody can do it regularly. Which causes people to stop listening.


Agreed!

What strikes me as particularly interesting about Silicon Valley and some techie circles -- as opposed to Nostradamus -- is that many of these people self-identify as hyperrational, agnostic, atheist, or wary of traditional religions, yet here they are, building their own religions under a more palatable technological guise (I could list ideas like the Singularity, Super AI good or bad, immortality, "we're living in a simulation", "every problem in the world can be fixed with the right app", etc., but if the list of absurdities goes long enough I'm sure to hit some raw nerve, so I'll stop here).

These modern day Nostradamuses also tend to overinflate their own importance in the wider world. Outside of techie circles Kurzweil is a nobody, and the notion that his theories are some bar that other theories must somehow pass is laughable.


Sure, some people are not critical thinkers. I am not a worshipper of Kurzweil and not into transhumanism, but I liked the historical charts he drew showing exponential acceleration over a wide time interval and across tech domains. I found his evidence of historical exponential progress compelling, but what you do with that evidence is up to you. If you have something to say about that point I am happy to hear it, but you mostly seem to be lashing out at 'Silicon Valley' types, which is a mischaracterization of who you are talking to. I respect Kurzweil because he is a real engineer, an entrepreneur, and a helper of those less fortunate (the blind in particular).


You should look him up; futurology is only a slice of Kurzweil's enormous body of innovative work.


Kurzweil is someone who does not understand where his expertise ends and makes bogus claims based on a poor understanding of non-computer science topics (see his claims about AGI + the human brain as an example). His claims are typically too generic to be wrong or he does mental gymnastics to claim his prediction was correct when it wasn't. Just read his analysis of how his predictions for 2009 did - https://www.kurzweilai.net/images/How-My-Predictions-Are-Far...

He claims he's right even when he's obviously wrong. He is not someone whose predictions should be blindly trusted.


I think population growth also factors in. Population is leveling off. In the past century, the global population has quadrupled, so there are four times as many people to invent things on raw numbers alone. But global population will increase no more than 50% in the next century, which means we won't be creating many more inventors than we have now.


This is a good point, but also consider that the proportion of the global population who have the opportunity to become inventors will hopefully grow over the next century. As a result, the absolute number of inventors may grow faster than the overall population.


It's very well known and makes a compelling argument that tech progress has been accelerating since the Stone Age.


"Well known" is irrelevant. "Has been accelerating" is also irrelevant. "Will continue to accelerate to something very close to infinity" is the relevant part. There are plenty of us who do not find Kurzweil's argument on that topic to be at all compelling.


I agree Kurzweil's hypothesis is well known... within techie/Silicon Valley circles, somewhat like the Singularity -- a related concept -- is common in those circles as well. Regardless, there's no particular weight to Kurzweil's hypothesis, tantalizing as it may seem, and it's not reasonable in my opinion to use it as a measuring stick of other hypotheses as if Kurzweil was a proven and acknowledged authority on this topic.

Likewise, if someone said "human aging and death are unavoidable" this wouldn't be bold just because Kurzweil has written a lot about immortality.


I think his hypothesis deserves more credit than that — many of his predictions have come to pass, and many of those that have, at the time they were predicted, seemed somewhat far fetched since they were firmly in the realm of science fiction.


It doesn't matter. Each prediction has to be weighed on its own merit against ever-evolving reality. Even Einstein didn't bat a thousand.


I think this is a crucial point. Based on Ben's premises, his conclusions make sense. But what if you alter the premises, for example, by assuming that compute will happen on another substrate? If you choose a biological substrate, then you can move compute from inside one's pocket, to inside the body. And for many functions, you wouldn't need the cloud. I doubt that the dominant companies in silicon-based tech today have the expertise to make that shift.

A lot of work is being done to make bio-silicon fusion real, with use cases like creating olfactory sensors.

And our increasing control over both brain and genes may be the pathway to more general biological computation.

https://www.ucsf.edu/magazine/control-brains-genes


I think it will, at best, be a semantic argument in retrospect. The companies highlighted are all clearly defined as being bolstered by computing technology. But what about next generation, huge companies that are bolstered by computing and other technologies fused together? For example, if a company manages to create a brain-computer interface that gains global adoption and equivalent valuations to the existing tech giants, but the software layer is a mashup of, by that time, commoditized services from the existing tech giants who fail to enter this industry, does it count?


This myopia really puzzles me.

It seems like the whole analysis is predicated on the idea that technology = software made in Silicon Valley, with unimportant secondary factors. That 3M and ExxonMobil are not "tech" companies because they don't make iPhone apps.

Every company is a tech company, not because we've had computers for a while, but because technology is what we build to get what we want.

These kinds of narrow, myopic, siloed takes miss the forest for the trees.

If you think the epitome of human evolution is going to be people looking at bright rectangles for eternity, you haven't been paying attention to what technologists are doing.


I don't think the claim is about technology in general, but about non-quantum computing.


It's a substitute computing product though - kind of like electric cars for combustion cars.


I don't think that's accurate. The article takes pains to discuss that sometimes castles are simply routed around. Quantum computing would potentially be one of those.


As @oflannabhra said, I think this is a case of hindsight thinking, with little predictive impact. Privacy issues can very quickly change everything. Security issues (story on NPR today about medical devices pretty much all vulnerable and ripe for random killing of people) could as well. Climate change is going to be a large driver for technology in the near future. The tech situation is very, very dynamic right now and it is way too early to say we are going to settle down with the current tech giants.

Also, giants are giants. In manufacturing, there are absolutely vast advantages to economy of scale. In tech, except for network effects, it's very easy for a very broad array of upstart companies to dominate their respective arenas at the 100bn level.

> today’s cloud and mobile companies — Amazon, Microsoft, Apple, and Google — may very well be the GM, Ford, and Chrysler of the 21st century.

Well, except Google is not a cloud or mobile company. They are an advertising company.


while new I/O devices like augmented reality, wearables, or voice are natural extensions of the phone.

I don't agree with this at all. This is like saying "the internet is a natural extension of the operating system, therefore Microsoft Windows will remain all powerful and the sole route to consumers".

Bill Gates in his famous memo realised that this wasn't the case, and Google realized that mobile did to the internet what the internet did to Windows (hence Android).

Wearables are radically different from phones. People want to use them differently, and interact with them in different ways than they do with phones.

To be clear: We are in the very early days of wearables, and Apple is far and away the dominant player (and maybe Garmin). But there is huge disruptive potential here.


Interesting point about Garmin. I wonder if it would ever make sense for Apple to just buy them...


I disagree. A long period of evolution starts when the revolution before it has managed to find a more or less working solution. However, at this point there are at least two big problems that haven't been solved properly yet, that get worse every day, and where a revolution is more probable than evolution: social networks and payments.

I believe at least one more revolution is still possible before we have a long period of evolution. It will be a shift from centralization to de-centralization (one more time), or rather to federation. De-centralized federated systems might be able to get social networking and payments to the level where they finally work well and only need to be gradually improved.


The essay reminds me of The Deployment Age: http://reactionwheel.net/2015/10/the-deployment-age.html


>And, to the extent there are evolutions, it really does seem like the incumbents have insurmountable advantages...

By definition, doesn't it always seem like this?

Jim Barksdale (Netscape) said there are 2 ways to make money - bundling and unbundling. What can be unbundled from the incumbent bundles, in order to be offered in a more fit-for-purpose way, or with a better experience?

How might that answer change if the world's political structure changes? How might that answer change if processing, storage and networking continue their march towards ubiquitous availability?


His graph conveniently stops in the 1980s. Since then, there have been many new US car companies, mostly in the electric or self-driving spaces. Lots of little city cars, and new imports from China, too.


He specifically mentions excluding imports. Outside of Tesla, what are the new American car companies that made any kind of mark?


Most of those are NEVs with a 25MPH max speed to bypass safety regulations. $20K golf carts.


> there may not be a significant paradigm shift on the horizon, nor the associated generational change that goes with it.

That's possible, but I see things that lead to me think that we're not there.

Primarily, there are a number of rather serious problems with the cloud, some of which are inherent to the paradigm and likely can't be resolved -- we'll just have to live with them.

When a paradigm has such problems, the possibility always exists that a new way of doing things can come about that sidesteps those problems.


The dealership model really helps manufacturers keep a tight rein on the market; look at all the trouble Tesla had.

In a similar vein, Apple, Google, and Microsoft control the medium and have grown so powerful that I can't imagine there ever being a new "Google" that comes about through the old grass-roots method.

Someday Apple will be bought though, probably by Facebook.


I think that if you abstract away the specific companies mentioned and stick to the technology, the point about people building on top of already "accepted" paradigms is a good one.

The rest doesn't really seem to have enough evidence for such a bold claim.


Frankly, I'm not sure this piece really said anything other than that the big 4 or 5 are so unbelievably strong that we're all left playing in the leftover spaces, which are usually small.


For the OP, let me think ....

There is

> IBM’s mainframe monopoly was suddenly challenged by minicomputers from companies like DEC, Data General, Wang Laboratories, Apollo Computer, and Prime Computers.

So, to shed some more light on this statement, especially about "mainframe monopoly", let me recount some of my history with IBM mainframes:

(1) Uh, to help work myself and my wife through grad school, I had a part time job in applied math and computing: Our IBM Mainframe TSO (time-sharing option) bill was about $80,000 a year, so we got a Prime, and soon with my other work I was the system administrator. Soon I graduated and was a new B-school prof where the school wanted more in computing. So, I led an effort to get a Prime -- we did. IBM and their super-salesman Buck Rodgers tried hard but lost.

The Prime was easy to run, very useful, and popular but would not have replaced IBM mainframe work running CICS, IMS, DB2, etc. Of course, in a B-school, we wanted to run word processing, D. Knuth's TeX math word whacking, SPSS statistics, some advanced spreadsheet software (with linear programming optimization), etc. and not CICS, IMS, DB2.

(2) Later I was at IBM's Watson lab in an AI group. For our general-purpose use, our lab had six IBM mainframes, IIRC U, V, W, X, Y, Z. As I recall they had one processor core each, with a processor clock likely no faster than 153 MHz.

Okay, in comparison, the processor in my first server in my startup is an AMD FX-8350 with 8 cores and a standard clock speed of 4.0 GHz.

So, let's take a ratio:

(8 * 4.0 * 10^9) / (6 * 153 * 10^6) ≈ 34.9

so that, first cut, just on processor clock ticks, the one AMD processor is 35 times faster than all the general purpose mainframes at IBM's Watson lab when I was there.

But, still, on IBM's "mainframe monopoly", if what you want is really an IBM mainframe, e.g., to run old software, then about the only place to get one is from IBM. So, IBM still has their "mainframe monopoly".

Or to be extreme, an Apple iPhone, no matter how fast it is, does not really threaten the IBM "mainframe monopoly".

Continuing:

> ... like DEC, Data General, Wang Laboratories, Apollo Computer, and Prime Computers. And then, scarcely a decade later, minicomputers were disrupted by personal computers from companies like MITS, Apple, Commodore, and Tandy.

Not really: The DEC, DG, ..., Prime computers were super-mini computers and were not "disrupted" by the PCs of "MITS, Apple, Commodore, and Tandy."

The super-mini computers did lose out but later and to Intel 386, etc. chips with Windows NT or Linux.

> ... Microsoft the most powerful company in the industry for two decades.

Hmm. So now Microsoft is not so "powerful"? Let's see: Google makes it easy to get data on market capitalization:

Apple: $1,308.15 B

Microsoft: $1,202.15 B

Alphabet: $960.96 B

Amazon: $945.42 B

Facebook: $607.59 B

Exxon-Mobil: $297.40 B

Intel: $256.35 B

Cisco: $201.47 B

Oracle: $173.73 B

IBM: $118.84 B

GM: $50.22 B

Microsoft is still a very powerful company.

Uh, I'm no expert on Apple, but it appears that Apple products need a lot of access to servers, and so far those servers tend to run on processors from Intel and AMD with operating system software from Microsoft or Linux -- that is, Apple is just on the client side, not the server side.

It appears, then, that in computing Microsoft is the second most powerful company and is the most powerful on the server side.

Sure, maybe some low power ARM chips with 3 nm line widths and Linux software will dominate the server side, but that is in the future?

And personally, I can't do my work with a handheld device; I need a desktop, and I am using AMD and Microsoft and nothing from Apple. A MacBook might suffice for my work but seems to cost maybe $10,000 to match the power I plugged together in a mid-tower case for less than $2000.

Broadly it appears that the OP is too eager to conclude that the older companies are being disrupted, are shrinking and are fading, are being replaced, etc.

Maybe the main point is just that in the US hamburgers were really popular and then along came pizza. So, pizza is popular, but so are hamburgers!

I also go along with the point of zozbot234 at

https://news.ycombinator.com/item?id=21986141

> Software is still eating the world, and there will be plenty to eat for a long time.


This is wrong on the merits, and I am not sure why it is presented this way.

The difference between a car company and a software company is economy of scale. I.e., economies of scale dominate the physical world but do not exist in the software world, since I can replicate software at zero cost.

In addition, new tools and new processes for software have increased productivity many times over, which means that you need fewer developers for new software.

I predict two shifts in the tech world:

1) Move to the edge. Especially for AI, there is really no need for a central public cloud, due to latency, privacy, and dedicated hardware chips. I.e., most AI traffic is inference traffic, which should be done at the edge.

2) Kubernetes operators for replacing cloud services. The value add of the public cloud is managing complexity.


If you read more of Ben's writing, he talks extensively about how software companies dominate market share through network effects and vertical integration.

You don't hear him talk about economies of scale because marginal costs are negligible for software companies. Besides, network effects and vertical integration are sufficiently powerful to control the market.

> In addition, new tools and new processes for software has increased the productivity times fold, which means that you need fewer developers for new software.

There are other barriers to entry besides the cost of writing software, like product, sales, operations, and most importantly, network.


However, the network effect in tech can be leapfrogged due to the zero marginal cost (as shown in this post). I.e., what network effect do you get from doing ML inference in the cloud?

The case for big tech today is still economies of scale and not network effects (maybe Facebook has those, but they exist only if the interface to Facebook does not change).

The big tech players have economies of scale, due to their ability to use automation and offload the risk of managing complexity (i.e., one AWS engineer can manage thousands of machines with AWS software).

No wonder the software that manages the public cloud is still closed source.

However, with Kubernetes operators, there is a way to move those capabilities into any Kubernetes cluster.


Did you actually read the post?

> The case for big tech today is still the economy of scale and not network effects (maybe facebook have those, but it exists only if the interface to facebook does not change).

This is only true if you believe that the greatest cost of developing software is running hardware. The greatest cost of developing software is developing software. Not only are economies of scale in compute management negligible except at massive scale, the cost of compute has declined dramatically as the companies you've described have made their datacenters available for rent through the cloud. Yet the tech giants persist.

Facebook, Google, Netflix, Amazon all have considerable network effects that you're not considering. For each of these companies, having so many customers provides benefits that accrue without diminishing returns, giving them a firm hold on market share. See https://stratechery.com/2015/aggregation-theory/

Ben is saying that the only way to topple the giants is by working around them and leveraging new computing technologies better than them. He makes the (admittedly speculative) case that this is no longer possible because we can't bring compute any closer to the user than the mobile devices.

> However, with Kubernetes operators, there is a way to move those capabilities into any Kubernetes cluser.

Kubernetes, at the scale of technologies we're discussing, is a minor optimization. Introducing k8s costs more than it helps until far into a company's infra maturity. Even if most companies deployed k8s in a manner that significantly reduced costs, it's not enough to overcome the massive advantages existing tech companies have accrued. Not to mention all of the big tech companies have internal cluster managers of their own.


I don't think that the number of current customers is any indication of network effects or any other kind of moat.

See: Walmart -> Amazon, Nokia -> Apple, MSFT -> Android.

I mean, what more of a network effect could MSFT have had in the '90s? It was dominating both the OS layer AND the app layer (Office). And yet, it does not have ANY share in mobile.

Kubernetes is not a minor optimization if you think about what it is. Yes, it is if you see it as mere container orchestration. But it is the first time that a widely deployed, permissionless, open API platform exists.


>this is no longer possible because we can't bring compute any closer to the user than the mobile devices.

This is based on a very dubious assumption that bringing compute closer is the only path for innovation.

And even that is not true: you could imagine compute being even closer with a direct brain interface (actually, you could consider Google Glass to be an attempt at bringing compute closer).


Your comment would work equally well without the initial inflammatory "Did you actually read the post?" opening line.


Your point contradicts itself, does it not? Movement to the edge necessitates edge hardware and edge personnel to manage the hardware.

Network effects are THE factor in software because the marginal cost tends towards zero with each incremental user in the network. The edge adds cost per node.

Up until the point that users are paid to connect to the network and/or the network is directly linked to the user with the I/O line completely obviated, the economics of hardware and management underlying the network will tend towards economies of scale... which is the point Ben is trying to make.


> 1) Move to the edge. Specially for AI, there is really no need for a central public cloud due to latency, privacy, and dedicated hardware chips. I.e. most of AI traffic is inference traffic which should be done on the edge.

Inferencing is done at the edge, but training must be done centrally.
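In code terms the split looks something like this: train and export centrally, then ship the model to the device and run inference locally. Here's a minimal sketch using ONNX Runtime (the model file name and input shape are made up for illustration):

    # Minimal sketch of on-device inference with a centrally trained model.
    # "edge_model.onnx" and the 1x3x224x224 input shape are hypothetical.
    import numpy as np
    import onnxruntime as ort

    session = ort.InferenceSession("edge_model.onnx")  # model trained and exported elsewhere
    input_name = session.get_inputs()[0].name

    x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # stand-in for a camera/sensor frame
    outputs = session.run(None, {input_name: x})  # runs locally, no network round trip
    print(outputs[0].shape)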


Right now, the only market participant I see doing some inferencing at the edge is Apple with its photo analysis stuff that runs on the phone itself.

Anyone else is busy building little dumb cubes with microphones and speakers that send sound bites into clouds and receive sound bites to play back (heck, even Apple does it this way with Siri). Or other dumb cubes that get plugged into a wall socket and that can switch lights that you plug into them by receiving commands from a cloud (even if the origin of the command is in the same room). Or dumb bulbs that get RGB values from a cloud server which inferred somehow that the owner must have come home recently and which then set the brightness of their RGB LEDs accordingly. Or software that lets you record sounds bites, send them into the cloud and receive transcripts back. Or software that sends all your photos to a cloud library where it is scanned and tagged so you can search for "bikes" or whatever in your photos.

No matter what you look at in all that stuff that makes up what consumers currently consider to be "AI", it does inference (if it even does anything like that at all) on some cloud server. I don't like that development myself, but unfortunately that's how it is.


It really doesn't make a lot of sense to do AI at the edge (in terms of the various edge providers).

But then a lot of edge cases don't make a lot of sense. The best edge use cases are fan-in (aggregation and data reduction), fan-out (replication and amplification - broadcasting, conferencing, video streaming, etc.) and caching (which is just a variant of fan-out).

The rest of the cases are IMHO largely fictional - magical latency improvements talked about in the same context as applications that are grossly un-optimized in every way imaginable, AR/VR, etc. Especially the AR/VR thing.

Beyond that the only thing left is cost arbitrage - selling bandwidth (mostly) cheaper than AWS.

What's the use case for moving inference to the edge? Most of the inference will in fact be at the edge - in the device, which has plenty of capacity - but that's not the case you're describing.


Why would you run AI in the cloud? It is closed, expensive, high latency, etc. You might want to train in the cloud, maybe.

For inference, I see 90% happening on the edge (i.e., outside of the clouds).



