AI Canon (a16z.com)
518 points by nihit-desai on May 25, 2023 | 219 comments



If you click the domain on this submission, you'll see loads of articles from a16z on the topic of generative AI.

Click back a couple years and you'll find this page: https://news.ycombinator.com/from?site=a16z.com&next=2981684... with submissions like "DAOs, a Canon" https://news.ycombinator.com/item?id=29440901


Lest we forget, here is their pièce de résistance, dripping with VC gravitas, about the role of tech in the midst of the covid crisis: "It's Time To Build" [0]

What did they end up investing in? NFTs and shitcoins.

[0] https://a16z.com/2020/04/18/its-time-to-build/


Another twist: Billionaire Marc 'It's Time to Build' Andreessen Is a NIMBY

Link: https://www.vice.com/en/article/k7bbd9/billionaire-marc-its-...


Steph Curry is as well despite his progressive public image (he's also got FTX entanglements)


I think his objections were valid: “taller fencing and landscaping to block sight lines onto our family’s property.”

I would hate to be a celebrity and have people recording me all the time, especially at home.


Fair, but it's inconsistent with his image — and he didn't propose much meaningful accommodation in return.


I haven't read too closely, but no one is building meaningfully affordable housing in Atherton. Each unit in the development would easily sell for millions.

CA/the Bay Area should focus on packing high-density housing closer to transit lines, not putting apartments in areas that are an hour+ walk from transit just to score political points.


To the best of my knowledge it's mandated by the state. Our old city council used to refer to our town as "complete" and slow-rolled housing, an approach which, as I understand it, would just yield state intervention with local decision-makers removed.


Par for the course for the super-rich.


That's because we need to cut taxes to incentivize these geniuses of industry to make actual things!


I'll take JPEG monkeys over perpetual war funding any day of the week.


The fallacy of the excluded middle, also known as a "false dilemma" or "false dichotomy", occurs when two options are presented as the only possibilities when, in fact, other options may exist.


> What did they end up investing in? NFTs and shitcoins.

I believe their cryptocurrency/DeFi/web3 fund is separate from the main fund?


Honestly the links in the article are really good whether you hate a16z or not.


TBF, the role of a VC isn't to be on the cutting edge of science, but rather of business, and generative AI is a very new business, even if it isn't very new science.


> if it isn't very new science

It's pretty new "science," at least within the last 8 years or so.


Let us not forget history. Generative AI (informally defined as "algorithmic generation of stuff" for the sake of argument) has been around for more than 40 years. For example,

https://en.wikipedia.org/wiki/Algorithmic_art?wprov=sfla1

And of course, other forms of generative AI exist.


Sure, if we define "Generative AI" as "algorithmic generation of stuff" then it has been around for a long time.

I disagree that this is what people are really referring to when they use the phrase generative AI and basically none of the techniques being used can really be said to be 40 years old.



I'm pretty sure I remember it being talked about on the TV show 'Beyond 2000' in the '90s


generative AI?


It’s at least 40 years old. I think Gibbs sampling was invented in the 1980s.


Gibbs sampling is not generative AI. Really, generative AI is not at all a well defined term.

It is, unfortunately, not the same thing as a generative model.


What’s generative AI?


A buzzword - generally I would say it refers to using unsupervised training of some neural network model and then generating new data in the domain from the model.

I believe it originally grew out of the phrase "generative model" (which models the joint data probability distribution) but most of the prominent 'generative AI' models (like GPT) are not actually generative models but discriminators.

Regardless, outside of simple things like matrix multiplication, the theory of backpropagation (although not the implementations), language modeling as a concept, etc., almost all of these techniques date from the last decade and a half or so.
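
To make the "generating new data in the domain from the model" part concrete, here is a minimal sampling sketch. It assumes the Hugging Face transformers package and the public gpt2 checkpoint, both just convenient stand-ins for whatever model you'd actually use:

    # Minimal sketch: sampling new text from a small autoregressive language model.
    # Assumes `pip install torch transformers` and the public "gpt2" checkpoint.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tokenizer("Generative AI is", return_tensors="pt")

    # The model gives p(next token | previous tokens); sampling from it repeatedly
    # is what "generating new data in the domain" means in practice.
    with torch.no_grad():
        output_ids = model.generate(
            **inputs,
            max_new_tokens=30,
            do_sample=True,   # sample instead of greedy decoding
            top_p=0.95,
            temperature=0.8,
        )

    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))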


This is a fine list, but it only covers a specific type of generative AI. Any set of resources about AI in general has to at least include the truly canonical Russell & Norvig textbook [1].

Probably also canonical are Goodfellow's Deep Learning [2], Koller & Friedman's PGMs [3], the Krizhevsky ImageNet paper [4], the original GAN [5], and arguably also the AlphaGo paper [6] and the Atari DQN paper [7].

[1] https://aima.cs.berkeley.edu/

[2] https://www.deeplearningbook.org/

[3] https://www.amazon.com/Probabilistic-Graphical-Models-Princi...

[4] https://proceedings.neurips.cc/paper_files/paper/2012/file/c...

[5] https://arxiv.org/abs/1406.2661

[6] https://www.nature.com/articles/nature16961

[7] https://www.nature.com/articles/nature14236


As a recent addition, I've been impressed with Kevin P. Murphy's Probabilistic Machine Learning: An Introduction (2022)[1] and Advanced Topics (2023)[2].

[1] https://probml.github.io/pml-book/book1.html

[2] https://probml.github.io/pml-book/book2.html


This is an excellent list of additions! We will try to include them shortly!


The sad truth is that people nowadays can't even get through a 15-minute podcast without checking their Twitter feed multiple times. So, I'm not sure how many people would read through an 800-page textbook.


I think you'll find that the "screen brain" effect dissipates after about 20 minutes of discomfort. I've noticed this effect with novels and textbooks.

Note that I don't think it's a great idea to just "read through" an 800-page textbook even if you can - you've got to do exercises and check your own knowledge or else you will be spinning your wheels.


> I think you'll find that the "screen brain" effect dissipates after about 20 minutes of discomfort

You mean one should persevere for more than 20 minutes and can then easily focus on the book? If so, that's great news! Note that I suffer from "screen brain", but it's always good to know how the brain works.


Well, there is a trick to reading a lot: don't live for the thrill of finally finishing a book or a paper. Instead enjoy the process of reading and understanding every paragraph.


Not sure why this is downvoted. If it's because my armchair stats are wrong, I'd be very happy to be wrong. Otherwise, I was not saying the textbook is no good. I'm just speculating that many people wouldn't enjoy even an invaluable book.


I am sorry but I am not a believer in a16z anything after their massive crypto token scams and wealth extraction. We all need to move away from all these companies who continue to bloat in private and then have a big pay day as a public company.


a16z's investment thesis completely centers around finding the next bagholder for their investments.

They don't care about building enduring businesses that make them a lot of money because the businesses are actually creating that much value.

They care about creating a hype cycle and dumping before the tide goes out and their investments crash on someone else's balance sheet.

None of them could get on a podcast and say anything remotely rational or intelligent about their crypto investments: it was all pump and dump. In one interview, their head of crypto investments' main value claim was "monetizing the sharing of your home wifi," and after some pushback resorted to deference to authority by saying the founders they invest in went to the right schools and nobody else could understand because "they weren't in the room."

These are smart people who couldn't articulate a clear value. They obviously didn't believe any of it and just saw an opportunity to promote a Dutch tulip mania.

Good fund for con artist "entrepreneurs" though. Same deal for Chamath and his SPACs.

There's increasing hesitation among builders to take money from funds that have strayed this far from long-term value creation, because once they get addicted to short-term pump and dump profits and chasing the latest thing, it's hard to go back to supporting actual builders.


> a16z's investment thesis completely centers around finding the next bagholder for their investments.

Boom, yes, this. I think a big part of the (for lack of a better phrase) butt-hurt the best of HN feels towards a16z is summed up by the Obi-Wan scene where he's screaming at Anakin about how he was supposed to be the chosen one, blah blah, you've hurt my feelings because I truly believed you could have been something you are clearly not, etc.

In reality, a16z are shrewd, smart operators, and it's a valid, scarily effective investment thesis to be able to push waves higher due to your own gravitas. If you had the Buffett/Elon effect (genuine ability to move markets) and could, why wouldn't you trade on it?

The "they were the best of us and now look at them" is a sad, hard reality lesson for anyone feeling it, and utterly irrelevant to a16z.


> a16z are shrewd, smart operators...if you had the Buffett/Elon effect (genuine ability to move markets) and could, why wouldn't you trade on it

Because you want to keep it. a16z's returns have been bottom-tier for a while. It's partly why they switched models from VC to retail asset manager. (The other part was to trade crypto.) That lack of returns translates to a lack of carry, which corrodes one's ability to attract and retain talent. It's a doom loop they've been floundering in for at least half a decade.


I know a London-based VC who calls them the Daily Mail of Venture Capital.

Loud, noisy, full of half truths, sensationalist.


You know that “don’t anthropomorphize the lawn mower” bit about Larry Ellison and Oracle? Same story different org.

https://news.ycombinator.com/item?id=5170246

https://www.youtube.com/watch?v=-zRN7XLCRhc&t=33m


I'll never forget when I was at HBO in the '90s (large Oracle shop at the time) and we were trying to figure out an Oracle product. I wasn't core to the effort; I was just trying to help, since the people smarter than I was were completely stuck.

So I asked for the install cd. I figured I could install the product in question (Oracle Forms? It's been too long and I barely remember) and poke around, and maybe a beginner's eyes would see something the masters missed.

I stuck the cd in my computer and clicked the installer. It asked which install file I wanted to execute. There's more than one? Yep, several. And not all in one folder, clearly labeled. They were in several different places on the cd, and named things like x39usethis21 and yz2nousethis982. It was a completely garbage experience.

And to be clear, this wasn't some hand-crafted one-off. This was a silk-screened, mass-produced, official Oracle product.

So I went to the documentation, also on the cd. It wasn't in text files, or even standard html files -- there was a special documentation reader application, because of course there was -- even though the files were obviously (nearly) standard html. And again, the documentation app wanted to know what file to start with, and the doc files weren't well organized, and didn't have an obvious starting place. So I opened the most likely candidate, which turned out to not be the documentation root. But it did have a menu of links, and there was one that was labeled "index" or "start here" or something.

I clicked the link and got a 404. Double-check: yes, it's pointing to a location on the cd, just not one where there's a file.

There's more to the story, but the above pattern continued throughout. And the experts never got the tool to work, after months of trying, with Oracle consulting on speed-dial.

This confirmed what I've described since the '80s as Canyon's Law of Inverse Usability: the price of a product and the usability of that product tend to correlate inversely.


I also discovered that law. Sometimes I’ve even declared, “This software really sucks. We must be paying $200,000 a year for this.”


Yep. I first realized this in the '80s when I was working in FileMaker Plus (from Nashoba at the time) to import data and print out bulk mail. FMP was WYSIWYG -- easy layout tools, and I could make anything I wanted happen in seconds with a ~$2800 Mac 512k and a ~$7000 laserwriter.

One time we had a job that required a high-speed printer, so we went to a copy shop. They had a ~$300K Xerox printer that was the size of three washing machines side by side. It could print something like 20-30 pages per minute compared to the laserwriter's 2(?).

And the Xerox had something like a 4-inch amber-and-black display, and the guy setting up the job was putting in parameters by hand, almost like writing code to do the layout. He spent a minute doing that, hit the button, the machine spun up, and then BOOM, out came a page. And the layout was wrong.

He spent 20 minutes getting the layout right, through maybe 10 iterations of write code, spit out a copy, see that it was wrong.

And that's when it hit me: with a machine that expensive, you want it working non-stop. Time spent on setup is time wasted, clearly. And yet no one at Xerox was thinking about that, obviously, because that thing couldn't have wasted more of its time on setup if they'd tried.

And on the other hand, the (relatively) cheap Mac+LaserWriter had WYSIWYG and was ready the first time, every time. It was insane the difference between the two.


It's still a thing with Oracle. I worked at a large music retailer running on ATG/Oracle commerce and I don't think I ever saw documentation on how any of it worked.

The folks on the backend had it running and, if they were out, you were SOL. It was 7+ sites running out of a ~10 year old code base with an alarming amount of technical debt. I'm sure it wasn't cheap.


> If you had the Buffett/Elon effect (genuine ability to move markets) and could, why wouldn't you trade on it?

Because trading on it to extract other people’s money as part of fraudulent schemes will destroy your reputation.

The only reason to do it is if you feel your reputation is unearned and worthless and this is the best thing you can do with it.

There are definitely other ways to make a living.


The only reputation that matters is the reputation of making money.

That's the only business investment firms have ever been in. To believe otherwise is naïveté.


Clearly that's not correct.

A reputation for making money unfairly, and at the expense of others is, and has always been, extremely relevant to your ability to convince other people to do business with you.


Yeah. That's why hedge funds and private equity in general went out of business 40 years ago.

Seriously. As someone that grew up in the 80s and watched how Milton Friedman's id was unleashed, and continues to run rampant today, there's little evidence to support your thesis.


I mean in general I agree with you. Just having a ton of money has its own kind of reputation.

But these guys have really traded far beyond their actual raw financial power due to reputation.


If you know you'll receive your cut as an LP, you don't care, as long as you're not the one being defrauded.


> only reputation that matters is the reputation of making money

Within this context, totally agree. It's where a16z has failed: its returns are subpar. The other failures flow from that.


Just curious, are there any sources you could share on that? Or is this the kind of intel you need to be in the really, really in-crowd for in order to know what's really going on?

[Edit]: Can't reply that deep in the thread, but thanks for the insight!


> are there any sources you could share on that

a16z started lagging in 2016 [1]. That led to the crypto fund in 2018 [2] and registering as an investment adviser in 2019 [3]. The '18 crypto fund did well [4], so they quadrupled down on the crack pipe and returned to form [5].

[1] https://www.wsj.com/articles/andreessen-horowitzs-returns-tr...

[2] https://techcrunch.com/2018/06/25/andreessen-horowitz-has-a-...

[3] https://www.thestreet.com/investing/cryptocurrency/andreesen...

[4] https://www.theinformation.com/briefings/a16zs-first-crypto-...

[5] https://www.wsj.com/articles/andreessen-horowitz-went-all-in...


"Here lies Jonathan Koren. He made a lot of money."

No, this Gordon Gekko philosophy is why the 2010s were a cultural wasteland.


Dude. This philosophy has been driving everything since the 1980s, either as a rejection of it or an embrace of it.


> a16z's investment thesis completely centers around finding the next bagholder for their investments.

To be fair, that's the investment thesis of all VC, not just a16z. VC is by definition early-stage investing, the objective being to build up a project enough that it can be either IPO'd or sold to a bigger company, providing the returns for the next round of early-stage investments. a16z is nothing unusual there.


The phrase "the next bagholder" means that it isn't worth it.

If, for example, you buy a house with a fucked up foundation, fixing and selling it isn't finding the next bagholder. Covering it up and selling it ASAP without disclosing it is finding the next bagholder.


No, the objective is to add value by making it a better business. Not to pump it up full of hype so you can palm it off on a greater fool.


> resorted to deference to authority by saying the founders they invest in went to the right schools and nobody else could understand because "they weren't in the room."

Marc Andreessen tried to defend Groupon's use of a non-standard accounting metric by saying all the critics didn't know what they were talking about because he was "in the room" when the decision was made, and the critics were not.

Given how things worked out for Groupon, I think it's fair to say that non-standard accounting metrics are non-standard for a reason.

That was one of many things that made me skeptical of pmarca and a16z.


> In one interview, their head of crypto investments' main value claim was "monetizing the sharing of your home wifi," and after some pushback resorted to deference to authority by saying the founders they invest in went to the right schools and nobody else could understand because "they weren't in the room."

Chris Dixon ?


> None of them could get on a podcast and say anything remotely rational or intelligent

> main value claim was "monetizing the sharing of your home wifi,"

Let's see, we have a big stagnant incumbent monopoly, a last-mile moat that allows them the monopoly, and a proposed strategy for attacking the moat. Most ISPs have a "no-sharing" clause, but it isn't difficult to brainstorm potential workarounds: seeding connections into high-density locations, netflix boxes, even possibly a legislative play in a sympathetic area. Of course, the crux is in the execution of the workaround, but I'd expect this to have some complexity to it and I wouldn't expect a random a16z podcast host to necessarily have details at that resolution. I certainly wouldn't call them an idiot for not having all the details handy.

You seem pretty darn sure that it's a prima-facie idiotic idea, however. Can you back that up and explain your reasoning? Or are you just not remotely rational or intelligent enough to imagine a business that doesn't already exist? (See, I can be an asshole too!)


Crypto has done absolutely nothing to warrant anything beyond the most cursory dismissal.


This is simply objectively inaccurate, and reflects very poorly on the speaker's levels of bias when such false claims are parroted.

It's fine to not like cryptocurrency, but don't lie about it.

Ransomware would not exist without cryptocurrency: it was a major innovation (that subsequently enabled tons of new use cases, many of them criminal, some of them not).


There’s something about this comment that’s tongue in cheek, and something that’s earnest, and I cannot decipher it


Reminds me of this tweet[0]:

>Sick of people calling everything in crypto a Ponzi scheme. Some crypto projects are pump and dump schemes, while others are pyramid schemes. Others are just standard issue fraud. Others are just middlemen skimming of the top. Stop glossing over the diversity in the industry.

[0] https://twitter.com/patdennis/status/1518637225789042688


Darknet markets, political dissident funding, the money laundering via mining trick, etc.. All pretty big innovations even if you don't like them


With that attitude, nothing ever would. I'm glad VC exists. Without people taking long odds, our world would be much poorer.


People don’t owe crypto a ‘good attitude’. The only justifiable attitude toward it is a healthy scepticism until it proves to be anything more than a glorified pump and dump scheme. So far, it hasn’t cleared that bar in over a decade of trying.


Bitcoin is used by people in unstable countries.



It's an ISP. They provide internet service and bill people for it.


Embarrassing to post nonsense like this in 2023.


> Or are you just not remotely rational or intelligent enough to imagine a business that doesn't already exist? (See, I can be an asshole too!)

Weird how you accuse someone of being "not remotely rational" yet you seem to completely reject the idea of reasonability


Yes. I feel there should be some kind of anti-list of people who dumped SPACs and ICOs on the public.

I personally feel it was pretty gross behavior (in many, but not all, cases), and the people who did the egregious ones mostly knew at the time that they were doing a zero sum wealth transfer to themselves.

I personally avoid working with people who were involved in ICOs and SPACs where at the time of issuance a reasonable analysis could've shown that it was grossly overpriced to the public investors it was sold to (because in those cases, I believe that the issuer themselves should've known, and shouldn't have proceeded).


I have the data. What would be a good UI for it?

For example: https://embarc.com/capital/leadership/chamath-palihapitiya


A list of people/sponsors, ranked by total value reduction across all the SPACs they did since some point in time (merger? before then?), and normalized for multiple compression by looking at the ratio of some overall public tech index between that date and now (so that they're not overly penalized by multiple compression from higher interest rates).

Either presented as a list on a web page or a list in an infographic/image.

Overall show how much wealth public investors have lost in aggregate from investing with that person, normalized to general tech stock market declines. How much did they lose relative to the market. Their alpha, or negative alpha, I suppose.
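
In code terms, the normalization I have in mind is roughly the sketch below. It's only illustrative; the field names are assumptions about whatever dataset sits behind the site, and a real version would use each SPAC's own merger date rather than a single window.

    # Rough sketch of the index-adjusted "alpha" described above. A sponsor's SPACs are
    # aggregated, and the return since merger is divided by the return of a broad tech
    # index over the same window; a negative result means they underperformed the index.
    def index_adjusted_alpha(spacs, index_at_merger, index_now):
        """spacs: list of dicts with 'value_at_merger' and 'value_now' (same currency)."""
        index_return = index_now / index_at_merger
        total_then = sum(s["value_at_merger"] for s in spacs)
        total_now = sum(s["value_now"] for s in spacs)
        raw_return = total_now / total_then
        return raw_return / index_return - 1

    # Hypothetical sponsor with two SPACs, against an index that itself fell 30%.
    example = [{"value_at_merger": 1_000, "value_now": 250},
               {"value_at_merger": 2_000, "value_now": 900}]
    print(index_adjusted_alpha(example, index_at_merger=100, index_now=70))  # ~ -0.45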


e: Removed, not sure I really agree with what I wrote on further reflection


That just isn't a reasonable statement. No one forces someone to participate in a ponzi scheme.

The problem with SPACs is that these were companies that did not go the IPO route because they would not pass SEC approval. Companies should go public even if they are unlikely to survive, but what we saw was mostly fraud. They received absurd valuations based on exaggerated growth claims combined with imaginary non-GAAP accounting -- things you can't do in an IPO.

Some of these participants will get in trouble. Enforcement is not immediate. a16z is probably going to end up in a lot of trouble over their cryptocurrency shenanigans. I think these guys pretty much burned their reputation in exchange for things like owning a $177m house in Malibu.

The consequences will be felt by everyone, not just the shitco and shitcoin hucksters.


Fair enough, I didn't realize SPACs were effectively an end-run around SEC approval & GAAP accounting - and I worded it too strongly for what was effectively an uninformed take.


Burning rep for wealth is kinda an SV trope, though, right? It's "fuck you money."


> the people who did the egregious ones mostly knew at the time that they were doing a zero sum wealth transfer to themselves

sorry we can't all be dirt farmers in USA we got to get that bag or we die from medical cost

EDIT: everyone downvoting me knows it is true


Lots of people out there have decent middle class careers working in technical roles outside the FOMO-based-startup sector.


I mean, the `canon` here is not created by experts in AI. It exists to put something on the internet that makes a16z look legit on AI and gets traffic.


I wouldn't go so far. I know the authors quite well, and as someone who has multiple publications in machine learning conferences (and started a PhD in ML), I can say they know their stuff well.


OK, thanks for the reassurance.


The piece speaks for itself. It’s very strong.


OK, thanks for the reassurance.


Their crypto scams look like child's play compared to their $350M investment in Adam Neumann AGAIN. Like whaaat? Were "Liz" Holmes and Martin Shkreli not available? Or did SBF not satiate you enough?


They are ready to pump another stock again.


You should never be a "believer" or trust ANY VC. They are blood suckers who only want to make 10x their money


And are willing to fund 9 failures… nobody else will do that and it’s impossible without gunning for the 10x return.


This is oversimplified at best. Not all VCs are created equal. A broad-brush characterization like this removes any incentive for VCs to behave well (if everyone will assume the worst of you, why bother trying to build a good reputation), and discourages entrepreneurs from vetting their investors (why bother, if they're all equally bad).


They don't care.

The goal for all VCs is to make large returns from startups; otherwise they underperform. VCs know that 90% of these startups fail and will make sure that they themselves are highly unlikely to lose, even if it means dumping on retail at higher prices via selling SAFEs in 'crowdfunding campaigns'.


Do you include other parts of a16z, like their biology investments? Do you believe the bio folks are doing scamming and wealth extraction (beyond the normal VC scamming and wealth extraction)? Or is it guilt by association, where simply being part of a16z when another part of the company did something dumb is enough to write them off?


You don't have to be a believer in a16z to find this list of AI resources useful.


> don't have to be a believer in a16z to find this list of AI resources useful

One wonders why the authors published with them, as well as what down-round portfolio company a16z are using this to shill.


It’s a list of links to websites. Not a journal or book.


I doubt there are any down-round AI portfolio companies right now. It's too hot.


a16z took decades to build a reputation and less than a year to toss it in the fire. It's hard to take their current discourse around anything seriously (they constantly publish think pieces on hype topics like AI or games).


Also not a fan after one of their members went on Twitter saying that all non-STEM education should be eradicated.


The submission has nothing to do with what you're complaining about. It's just a review of key papers, writings, and courses that are most pertinent to understanding the current state of AI.


The publishing of this article was paid for with the intent of furthering a16z’s reputation. I think addressing their reputation is relevant.


I think a16z and other SV companies got rich by being lucky in their investments in a low interest rates environment. Maybe they then bought into their own hype, or they’ve trapped themselves in overpromises to the people they’ve raised money from, and that’s why they’re now pumping crypto. They have no other options.

I’m curious to see what happens in the next couple of years and which big VC firms will remain given the current economy…


Oh sweet summer child.

VCs exist to make money. They do not care what industry it is, unless it makes them a giant return.


that's nothing to be sorry about


And why should we trust their judgement about anything, after they put money into Adam Neumann's new company (after the WeWork debacle)?

CNBC: https://www.cnbc.com/2022/08/15/a16z-to-invest-in-adam-neuma...


The article linked in the OP is mostly a list of other links you can visit to learn about the types of AI that are coming to market, and some background. Those links are not authored by A16z themselves.

I generally would not read anything authored by A16Z partners, those just feel like bad inspirational speeches authored by Thomas Friedman.


A thousand times echoing your sentiment. Everyone on this thread seems dismissive, but the webpage linked carries some links I would want to come back to later.

If the source itself is the problem, then we wouldn't be able to listen to anyone, for one reason or another.


Adam Neumann, with support from a16z, has an excellent track record of returning capital to Americans while leaving Chinese & Saudi investors to hold the bag. That is why they invested.


Came for everyone roasting a16z. Was not disappointed.


Looking at these comments, I can't think of another VC that has burned as much goodwill among technical people as a16z has. Don't get me wrong, it's well deserved, but it's just surprising how universal it seems to be (at least in this thread).


It's because Andreessen is the HN reader's ideal VC, on paper. He is (was?) highly technical, he built a very successful product with Netscape. He also came of age during the first tech bubble and in theory should be able to see through the BS.

Yet nearly every public statement he's made since becoming a full-time VC has been about pumping the worst companies and claiming they're the future. Airbnb, WeWork, a metric f'ton of crypto startups.

It's almost painful watching the inventor of Netscape trying to postulate to Tyler Cowen how Web3 is the new internet:

https://twitter.com/liron/status/1537186589486460928


Not to take anything away from Netscape, I do think there were interesting technical things happening with that browser, but the company never made money.

I would go as far as to say their 'real' product was an avenue to attack Microsoft - not the browser they made.

They got a lot of money by positioning themselves as an anti-Microsoft vector, had a huge IPO as an unprofitable company during the dot com boom, successfully entrapped Microsoft in an anti-trust case, then got acquired by AOL.

Of course I'm not a billionaire and I don't doubt it took a lot of skill to earn that money. I'm just not sure those are the same skills you need to build a profitable business or useful product.


They probably would have made money in the medium term, but MS gave their browser away for free. The empire strikes back.


Crypto. Folks like Andreessen pissed off all sides.

The "crypto true believers" are pissed off because they got extracted like the pyramid scheme players they were. The "crypto is a scam" people are pissed off because it was bloody obvious that it was a pyramid scheme and these people are getting off scot free while the plebians take it in the shorts. And the crypto pyramid players got pissed because the big boys have the money to extract the payout better than they do.


SoftBank. Tiger. There are investors with negative brand value; take their money, but expect it to make future fundraising and business harder.


Don't forget Sequoia Capital now too, after the SBF fiasco [1]. Though maybe their reputation hasn't quite reached the gutter yet.

[1] https://web.archive.org/web/20221109050305/https://www.sequo...


> Don't forget Sequoia Capital now too, after the SBF fiasco

Single fiascos rarely tank brands.

Sequoia still has a braintrust that makes it profitable to associate with. Andreessen is increasingly a refuge for those who have no other choice, are in on the grift or are being grifted.


> Sequoia still has a braintrust that makes it profitable to associate with.

That's why this single fiasco is so damaging: the braintrust has proven to be a farcical joke. The guy was playing video games while on their funding calls and they called it "genius." Zero due diligence, just brown nosing.


> why this single fiasco is so damaging: the braintrust has proven to be a farcical joke. The guy was playing video games while on their funding calls and they called it "genius."

Sequoia balance their SBFs with solid, profitable bets. That means they can keep raising funds, keep paying out carry, and through both of those maintain staying power. I don't love reducing this to economics, but at the end of the day if you can't pay your LPs, you can't pay your GPs, and when you have second-rate GPs you're going to wind up with second-rate founders. Sequoia brings home the bacon. Andreessen lives on management fees.


> Sequoia balance their SBFs with solid, profitable bets.

SBF was such a bad bet with so little due diligence that I don't think this is going to be true going forward and probably hasn't been for a while. They were running on zero interest rates and brand momentum so let's see how much money their next fund returns.


> were running on zero interest rates and brand momentum so let's see how much money their next fund returns

"Let's see" means they're still in the game. Anyone might stumble in the future. The difference between Sequoia and a16z is Sequoia might stumble, a16z already has.


You're just moving the goal posts. Sequoia has already stumbled.


> You're just moving the goal posts. Sequoia has already stumbled.

The goal post is and always has been returns. I’ve been consistent on that [1].

[1] https://news.ycombinator.com/item?id=36074612


Is there any large VC shop left with their reputation intact?


Kleiner, Khosla?


> Khosla?

Martins Beach


Accel


Balderton


I think this is going a bit too far. Not a single other VC is going to look at you negatively for raising from a16z.


> Not a single other VC is going to look at you negatively for raising from a16z

Have seen it directly. Not automatic dismissal. But dismissing the valuation, which puts a founder in down-round territory out of the gate. (Also, questions about naïveté.) We saw similar issues with SoftBank when they were in the late stages of their reputational shredder.

It’s similar to the crowdfunding penalty. Is it money? Sure. Does it impact going-forward fundraising prospects? Of course.


not to mention a glance at Andreessen's Twitter reveals someone who is deeply confused about almost everything, with a confidence that I can't seem to figure out the origin of


e: Removing because I think this is distracting from the main conversation & potentially flamebait.


Wouldn't a more reasonable explanation be that a16z destroyed their brand through years of questionable investments, excessive hype, cringe inducing behavior and being very obnoxious, very publicly. Also not a fan of their push to "American Exceptionalism" aka weapons dealing and mass surveillance.


Sure, except for the fact that I see this same trend of very young accounts making very negative comments for any sort of successful new tech trend.


I don't think anyone here is commenting on the AI trend, they're commenting on a16z.


The point is a16z is wholly unqualified to offer commentary on AI. The authors are qualified. But I’m immediately sceptical given their choice of affiliation.


Right, I am making a comment about the overall tenor of the website, which requires looking at multiple threads.

You're saying "No, they just really hate a16z" and I'm saying huh, it is interesting that all of the threads on new technology, whether it is LLMs or crypto or whatever, tend to have people who "really hate [xyz]".

Honestly, it speaks to the magnitude of the LLM breakthrough that HN is at least mixed on that topic.


Fwiw I'm a huge fan of LLMs and not a fan at all of a16z for the reasons I stated. I think you're reading too much into this thread.


If you actually joined the website 13 days ago though, I'm not sure if you are really in a position to evaluate the shift in culture over years timespan?


Lot of talking down to new users coming from a 3 year old account on a website that's been up for 15+ years.

This site would be better if users like you read and followed their own advice.


I'm not talking down, in fact I think it is likely they have either been lurking for much longer than they've had an account or this is not their first account. That's the case for me, for instance.

There's no higher "status" or anything to be gained on how long you spend on this silly site, I just think it is relevant to the specific conversation.


Ah, so the person with multiple accounts knows that account age is irrelevant, but still chooses to pick on "new" users anyway.

Why is account age relevant to the conversation when you have already demonstrated it's not accurate?


That's the point lol. Hacker news guidelines clearly state... sigh whatever


I'm inclined to agree that the vibe has shifted a bit. HN users have always shown a lot of hostility towards certain groups, like journalists, politicians and financiers. VCs, founders and tech executives were more often seen as part of the in-group. I think the explanation is simple: many HN users are tech employees, and they've had a rude awakening with recent layoffs as to who is the master and who the slave.


My take is the opposite. Hacker News has turned increasingly away from hackers and more towards marketing. Understandably, some of us resent the apparent astroturfing of this site and react accordingly.


It’s so easy to throw out meta-commentary and to psychologize[1] people based on a single prompt that you then don’t even have to defend (“people are reading this as more of a[...]”). Oh, were a16z good or bad, or neutral? Uh, doesn’t matter as long as I get to express my little pet peeve.

[1] Using “ressentiment” is on the level of accusing someone of having father/mother issues (the Freud variant).


That seems unrelated to the issue at hand, which is that people have specific grievances with a16z


I don't deny that, what I am saying is that a broader 'grievance culture' has arisen in HN that previously was much more muted.


Have a look at HN threads on topics like corporate diversity, women in tech, or H1B visas.

One man's 'objective critique' is going to be another man's 'grievance culture'.

In any case, A16z attracts more dislike because their image is that of influencers and hype men. You won't see the same dislike of Kleiner Perkins or others because they keep a low public profile.


I've been here for a few years longer than that and haven't observed that trend. Feels like it's always been that way to me.

Seems to me like it goes all the way back to that Dropbox comment.


I was thinking about the Dropbox comment actually as emblematic of the sort of current I was thinking of, but one that I perceive as growing larger.

It might be bias on my part - I work in deep NLP and maybe I'm just feeling the heat there for the first time.


Speaking as someone who worked for FB for five years, it really sucks when people seem to hate the thing that you're into.


Why should that matter? They are not addressing you personally.

Look at all the negative posts about Google, YouTube and ads. Do you think the adtech engineer visiting HN cares one iota?


They do. (Source: talked to Google adtech engineers.)


Dunno, all I was saying is that it was more difficult for me than a regular visit to HN. It actually made me a _lot_ more sceptical of the average commenter on things I didn't know anything about.


I wouldn't say you're entirely wrong about that trend, but that is clearly not why a16z is facing criticism here.

What is large or successful about their cryptocurrency, web3 or NFT endeavors?


Now that we have good language models, it would be great to see where a quantitative analysis of this perceived increase in ressentiment actually lands. I lean toward agreeing with your comment, but I'm also curious to see whether we're both hallucinating.


a16z have always been sleazy, and their entire crypto/NFT run just proved it further. Their hook towards AI the moment the NFT/Crypto market started crashing shows how little they believe their own bullshit. I don't have a problem with people getting wealthy, I have a problem with a16z because they did it by pushing crypto scams on normal, everyday, people by convincing the world it was the future while knowing full well it was shit all the way down.

Now they're trying to do it again. AI is important, but it's simply not the "all things will be different in six weeks, so invest now or get left behind!" story that they're pushing.


Why do you think that is ?


Build AI or just invest in chip makers?

https://a16z.com/2023/01/19/who-owns-the-generative-ai-platf...

Over the last year, we’ve met with dozens of startup founders and operators in large companies who deal directly with generative AI. We’ve observed that infrastructure vendors are likely the biggest winners in this market so far, capturing the majority of dollars flowing through the stack. Application companies are growing topline revenues very quickly but often struggle with retention, product differentiation, and gross margins. And most model providers, though responsible for the very existence of this market, haven’t yet achieved large commercial scale.

In other words, the companies creating the most value — i.e. training generative AI models and applying them in new apps — haven’t captured most of it


Would love to learn how a16z measures who is 'creating the most value.' My guess? Vibes and a conflation of "customer facing" and "value creation."

And I would disagree - the chipmakers have produced most of the value and are reaping massive rewards right now. Certainly, the new LLM wrappers are not the value creators.


The value creators are all the researchers who have invested untold hours into designing the models and collecting the data.


I think both those researchers and the researchers who did the same but with chip design are "value creators"


Selling shovels continues to be profitable.


Getting whiplash from the 90 degree handbrake turn the crypto grifters have taken into being AI grifters.


Yeah. What's the next grift? Maybe quantum computing?


Climate


I would hold off on the skepticism for the moment.

I know the authors of the blog post quite well. Say what you will about the firm, but one of the authors has been investing in machine learning since 2016, and another has a PhD in CS (including a SIGCOMM Test of Time Award!)

I come from a strong ML background (multiple publications, PhD dropout), I would say that the canon is actually quite good.


> and another has a PhD in CS

Sorry to say but 'big deal'.

> one of the authors have been investing in machine learning since 2016

Ditto.

I have been doing something (in another field) since the mid '90s. I would say most people would consider me an expert. I get referrals for what I do from 'top' people and investors in tech. I also went to what most would consider 'a top college'. I would never want to be positioned as right or expert because of the amount of time I've spent doing something, the college I went to, or who trusts me, but rather because of actual things I have done that point to my expertise (not a halo of some type).


Medium matters. Anyone making public statements should understand as much.


I agree with both you and the commenters roasting a16z, tbh.


I was an early member of the CNCF community (circa 2016), and at the time I thought "wow things are moving quickly." Lots of different tech was being introduced to solve similar problems -- I distinctly remember multiple ways of templating K8S YAML :-).

Now that I'm spending time learning AI, it feels the same -- but the innovation pace feels at least 10x faster than the evolution of the cloud native ecosystem.

At this point, there's a reasonable degree of convergence around the core abstractions you should start with in the cloud-native world, and an article written today on this would probably be fine a year from now. I doubt this is the case in AI.

(Caveat: I've only been learning about the space for about 4 weeks, so maybe it's just me!)


> At this point, there's a reasonable degree of convergence around the core abstractions you should start with in the cloud-native world, and an article written today on this would probably be fine a year from now. I doubt this is the case in AI.

It's a continuous process. It is way, way, way better than it was 8 years ago. Most of the frameworks can export models to each other (modulo some layers); ONNX actually largely kinda works.
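
For anyone who hasn't tried the export path, a minimal sketch of a PyTorch-to-ONNX round trip looks something like this (assuming torch, onnxruntime and numpy are installed; the tiny model and file name are placeholders):

    # Minimal sketch: export a PyTorch model to ONNX, reload it with onnxruntime,
    # and check that the two runtimes roughly agree on the output.
    import numpy as np
    import torch
    import torch.nn as nn
    import onnxruntime as ort

    model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2)).eval()
    dummy = torch.randn(1, 8)

    # Trace the model and write the graph in the ONNX interchange format.
    torch.onnx.export(model, dummy, "tiny_model.onnx",
                      input_names=["input"], output_names=["logits"])

    # Run the exported graph in a different runtime.
    session = ort.InferenceSession("tiny_model.onnx")
    onnx_out = session.run(None, {"input": dummy.numpy()})[0]
    torch_out = model(dummy).detach().numpy()
    print(np.allclose(onnx_out, torch_out, atol=1e-5))  # expect True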


Also 4 weeksish in. I am not a good future seer.

I tried learning ML years ago but got bored. Not even stable diffusion budged me to even look at it again.

But ChatGPT?!! Hell yeah I am motivated now! I will not be ashamed to say I am jumping on the bandwagon!

More seriously I want to at least deeply understand the tech that will change our lives.


> Andrej Karpathy was one of the first to clearly explain (in 2017!) why the new AI wave really matters.

Geoff Hinton had been saying this well before 2017. I remember his talks at Google around 2013.


Schmidhuber did it in the '90s too, probably.



The trick here is that they get to put their own think pieces alongside actually influential work and pretend like the two deserve to share a stage.


I hope Tyler Cowen can ask Marc Andreessen how AI works so that we can all learn something from the master


Who is the master on how AI works?


I think this is referencing an interview where Tyler Cowen interviewed Andreessen and asked him directly about the true value of crypto and use cases and the response was... lackluster (imo).

https://conversationswithtyler.com/episodes/marc-andreessen/


The original comment seems sarcastic now. The answer that stands out here, and is really the only answer from MA, is: "Money". Whoever has the most tokens, and whose tokens increase in perceived value, benefits because they sell their tokens. It's not those little micropayments to podcasters (which are possible in myriad ways without tokens); it's the market forces driving up the price of tokens that were obtained at $0.00 by early investors... So they found a lucrative business of dumping tokens; that really seems to be all that web3 became. I will always cherish knowing about the Axie Infinity situation and the crazy explanations for why that business model was the future!


> Research in artificial intelligence is increasing at an exponential rate.

Probably in the blundering sense of "exponential", meaning a lot. But what are some specific numbers? (such as publications)



So, it's not even monotonically growing.


Eh, it's a little tricky. A lot of research marketed under the "AI" umbrella would be categorized under cs.LG (https://arxiv.org/list/cs.LG/recent), cs.CV (https://arxiv.org/list/cs.CV/recent), cs.CL (https://arxiv.org/list/cs.CL/recent), and to a lesser degree cs.NE (https://arxiv.org/list/cs.NE/recent). Oh, and of course, cs.AI (https://arxiv.org/list/cs.AI/recent). Not every one of those areas has grown monotonically, but the growth in CV and CL especially has been explosive over the last ten years.


No recent data seems to be available.

https://ourworldindata.org/grapher/number-artificial-intelli...

Edit - Arxiv ML publications double every 23 months: https://twitter.com/MarioKrenn6240/status/131462299513926451...
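
For a rough sense of scale, a 23-month doubling time works out to something like 44% growth per year:

    # Back-of-the-envelope: convert a 23-month doubling time into an annual growth factor.
    doubling_months = 23
    annual_factor = 2 ** (12 / doubling_months)
    print(round(annual_factor, 2))  # ~1.44, i.e. about 44% more ML preprints each year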


Nvidia is 25% up on "AI guidance", a16z publishes "AI Canon".

It's settled -> AI is the new crypto


Looking at the authors, was this created by experts in AI? Is it sufficient to truly be a `canon`?


Doesn't seem like it to me and it skews very non-technical, but I can vouch for some of the sources they link - Sasha Rush's Annotated Transformer is great

If I were to give a critique, it seems too skewed towards things business/product people would find interesting that aren't actually all that impactful (Tesla's self-driving, for instance) and also seems skewed to things I see in certain SF twitter bubbles.

Directly linking lesswrong posts also seems a bit... cringe-y for a VC.


Wow, why so much hate against a16z? There's a really funny clip about Marc on the Rogan podcast where he is like "I have to come on Rogan, there's so much clout" or something to that effect. Rogan was immediately like "igghh".


Well, that is a good list. I would guess that I have only previously read the content from about 15% of the links, oh well!

Like everyone else, starting about a year and a half ago I have found it really difficult to stay up to date.

I try to dive deep on a narrow topic for several months and then move on.

I am just wrapping up a dive into GPT+LangChain+LlamaIndex applications. I am now preparing to drop most follows on social media for GPT+LangChain+LlamaIndex and try to find good people and companies to follow for LLM+Knowledge Graphs (something I tried 3 years ago, but the field was too new).

I find that when I want to dive into something new the best starting point is finding the right people who post links to the best new papers, etc.


Pompous to use the word 'canon' to describe what amounts to a bunch of links and thoughts/opinions. It implies the authors of the various articles are the authoritative sources/experts with whom there is no point disagreeing.


Anyone else feel like we've seen peak A16z at this point?


Or is it nadir a16z?


max or min -- good question.


From the people who brought you web3. Look where the crowd is going, run to the front and shout "Follow me!".


A16Z: Friendship ended with Blockchain. Now AI is my best friend.

What's the last investment A16Z was actually ahead of the curve on? I guess it isn't important, since from their position, they don't rely on being ahead of the curve in order to make good investments, they make their investments good through their network and funding abilities.


The early investors still always make bank. A 5x is nice.

>By Q1 this year, venture capital firm Andreessen Horowitz’s (a16z) flagship crypto fund had returned almost five times for early backers, according to documents reviewed by Semafor. The firm sold a portion of its tokens right before crypto’s bear market began in May, meaning that early investors are guaranteed a successful return.

https://protos.com/the-crypto-bets-of-a16z-crumble-early-inv...

I'm reminded of one of my favorite quotes from Margin Call, one of my favorite movies. He's talking about the big Goldman-esque firm they work for, but it applies here too.

"I've been at this company for 10 years, and I've seen things you wouldn't believe. When all is said and done, they do not lose money. They don't mind if everybody else does, but they don't lose."


From the same movie, another quote that applies to VC trendsetting as a business model:

“There are three ways to make a living in this business: be first; be smarter; or cheat. Now, I don't cheat. And although I like to think we have some pretty smart people in this building, it sure is a hell of a lot easier to just be first.”

Not sure about cheating being off-limits given the crypto debacle…


It's a fantastic quote that I think about a lot.


Yes, this is how money is made in finance.

Outside of the major market makers, every financial firm is desperately trying to flee anything with any semblance of public liquidity because they can't beat the market makers & John Q Public without insider trading.

Private tech firms are a great way to flee liquidity.


It's a good resource but I hardly think a16z is the team to host the 'AI Canon'.


I was quite surprised to see Sequoia getting involved in crypto fiascos.

My data sample is very small but I have a pretty good track record of shifting career focus in the last 20 years. In particular, 2000 and 2008 were two HUGE shifts for me, as the writing was on the wall before the crisis hit. The common theme driving the change: too much competition. I jumped out of areas where there was still tremendous growth to be seen but no serious money to be made.

I’m calling a third.


Did you actually shift careers those two times in the past (Ie, change companies/jobs/job focus)? And will you be doing the same when "calling a third" now?


It's just a list of links with no real substance. Don't they have some crypto scams to attend to?


The links are the substance. Do you see no value in compiling a list of resources? They also explain why each article was included which helps quite a bit.


> Research in artificial intelligence is increasing at an exponential rate.

but then most of the list is transformers & Stable Diffusion.

Anyway, oobabooga and automatic1111 are doing more to spread AI than many of those papers.


It's almost offensive how this "AI Canon" leaves out landmark AI results from... what, before the 2020s? (or 2017, I suppose) Honestly reads like something generated by ChatGPT.


Seems like a good list, I enjoyed the comic explainer of stable diffusion and learned a thing. Thank you to the authors and a16z for publishing this :).


perfect timing i was just prescribed vyvanse + adderall


Looking at the content list, all I need now is a brain-AI interface that “uploads“ all of it to my brain's neural net.

</cheeky>


I’d buy a copy of these all bound into a nice book, as a point-in-time industry collectible.


Hope this is an "in progress" article.

Not a single resource or pointer mentioning "ethics"?


Why does everybody (including this a16z dude) underestimate or fail to mention:

1. Quality of input data - for language models that are currently set up to be force-fed any incoming data instead of really trained (see 2.), this is the greatest gain you can get for your money: models can't distinguish between truth and nonsense; they're forced to follow training-data auto-completion regardless of how stupid or sane it is.

2. Evaluation of input data by the model itself - self-evaluating, during training, what is nonsense and what makes sense or is worth learning, based on the knowledge gathered so far, dealing with biases in this area, etc.

Current training methods equate things like first-order logic with any kind of nonsense, leaving only quantity, not quality, as a defense.

But there are many widely repeated things that are plainly wrong. Simplifying this thought: if there weren't, there would be no further progress for humankind. We constantly reexamine assumptions and come up with new theories, leaving solid axioms untouched. Why not teach this approach to LLMs, or hardcode it into them?

Those two aspects seem to be problems with large gains, yet nobody seems to be discussing them.

Align training towards common sense and the model's own sound judgement, not towards unconditional alignment with the input data.

If fine-tuning works, why not start training with first principles - a dictionary, logic, base theories like sets and categories, an encyclopedia of facts (omitting historical facts, which are irrelevant at this stage), etc. - taking snapshots at each stage so others can fork their own training trees. Maybe even stop calling fine-tuning "fine-tuning" and just call them learning stages. Let researchers play with paths on those trees and evaluate them to find something more optimal, find optimal network sizes for each step, allow models to gradually grow in size, etc.

To rephrase it a bit: we're saying that base models trained on large data work well when fine-tuned. Why wouldn't base models trained on first principles, and then trained further on concepts that depend recursively on previously learned first principles, be efficient too? Did anybody try?

As a concrete example - you want an LLM to be good at math? Tokenize digits, teach it to do base-10 math, teach it addition, subtraction, multiplication, division, exponentiation, all the known basic math operations/functions, then grow from that.

You want it to do good code completion? Teach it BNF, parsing, ASTs, interpreting, then code examples with simple output, then more complex code (GitHub stuff).

Training LLMs should start with teaching a tiny model ASCII, numbers, and basic ops on them, then slowly introducing words instead of symbols ("is" instead of "="), then forming basic phrases, then basic sentences and basic language grammar, etc. - everything in the software 2.0 way: just throw in examples that have an expected output and do back-propagation/gradient descent on them.

Training has to have a way of gradually growing the model size in an (ideally) optimal way.
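
As a purely illustrative sketch of what the first "tokenize digits, teach base-10 addition" stage could look like as data generation (the format and staging here are my assumptions, not a known recipe):

    # Sketch: stage-one curriculum data for the "teach base-10 math first" idea.
    # Difficulty grows with digit count; later stages would build on the same model.
    import random

    def make_addition_examples(n_examples, max_digits):
        examples = []
        for _ in range(n_examples):
            a = random.randint(0, 10 ** max_digits - 1)
            b = random.randint(0, 10 ** max_digits - 1)
            examples.append(f"{a}+{b}={a + b}")   # character-level, digits as tokens
        return examples

    stage1 = make_addition_examples(1000, max_digits=1)   # single-digit sums first
    stage2 = make_addition_examples(1000, max_digits=3)   # then carries and longer numbers
    print(stage1[:3], stage2[:3])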


These hucksters have found the next thing to latch on to, I see.


Usually this is written as an awesome-list in a GitHub repo.


Nice, later this afternoon I'll have ChatGPT read these and summarize them for me


How do you think they wrote it? (Reminds me of this: https://marketoonist.com/2023/03/ai-written-ai-read.html)




