Lest we forget, here is their pièce de résistance, dripping with VC gravitas, about the role of tech in the midst of the COVID crisis: "It's Time To Build" [0]
What did they end up investing in? NFTs and shitcoins.
I haven't read too closely, but no one is building meaningfully affordable housing in Atherton. Each unit in the development would easily sell for millions.
CA/the bay area should focus on packing high density housing closer to transit lines, not lifting apartments in areas that are an hour+ walk from transit to score political points.
To the best of my knowledge it's mandated by the state — our old city council used to refer to our town as "complete" and slow-rolled housing, an approach which would just yield state intervention with local decision-makers removed.
The fallacy of the excluded middle, also known as a "false dilemma" or "false dichotomy", is what happens when two options are presented as being the only possibilities when, in fact, there may be other options that exist.
TBF, the role of a VC isn't to be on the cutting edge of science, but rather of business, and generative AI is very new business, even if it isn't very new science.
Let us not forget history. Generative AI (informally defined as "algorithmic generation of stuff" for the sake of argument) has been around for more than 40 years. For example,
Sure, if we define "Generative AI" as "algorithmic generation of stuff" then it has been around for a long time.
I disagree that this is what people are really referring to when they use the phrase "generative AI", and basically none of the techniques being used can really be said to be 40 years old.
A buzzword - generally I would say it refers to using unsupervised training of some neural network model and then generating new data in the domain from the model.
I believe it originally grew out of the phrase "generative model" (which models the joint data probability distribution), but most of the prominent 'generative AI' models (like GPT) are not actually generative models in that sense but discriminative models.
Regardless, outside of simple things like matrix multiplication, the theory of backpropagation (although not the implementations), language modeling as a concept, etc. - almost all of these techniques are within the last decade and a half or so.
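The joint-versus-conditional distinction drawn above can be made concrete with a toy sketch (the words, labels, and counts here are invented purely for illustration): a generative model of p(x, y) can sample new data, while a discriminative model of p(y | x) can only classify what you hand it.

```python
import random
from collections import Counter, defaultdict

# Invented toy dataset of (word, label) pairs.
data = [("cat", "animal"), ("dog", "animal"), ("oak", "plant"),
        ("dog", "animal"), ("fern", "plant"), ("cat", "animal")]

# Generative: estimate the joint p(x, y) by counting, then *sample* new
# (x, y) pairs from it -- this is what lets a generative model "generate".
joint = Counter(data)
total = sum(joint.values())
pairs, weights = zip(*[(pair, n / total) for pair, n in joint.items()])
sample = random.choices(pairs, weights=weights, k=3)  # new draws from the model
print(sample)

# Discriminative: estimate only p(y | x) -- it can answer "what label goes
# with 'cat'?" but carries no distribution over x, so it cannot generate data.
cond = defaultdict(Counter)
for x, y in data:
    cond[x][y] += 1
p_y_given_x = {x: {y: n / sum(c.values()) for y, n in c.items()}
               for x, c in cond.items()}
print(p_y_given_x["cat"])  # {'animal': 1.0}
```

Whether GPT-style language models count as "generative models" in this strict joint-distribution sense is exactly the terminological ambiguity the parent comment is pointing at.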
This is a fine list, but it only covers a specific type of generative AI. Any set of resources about AI in general has to at least include the truly canonical Russell & Norvig textbook [1].
Probably also canonical are Goodfellow's Deep Learning [2], Koller & Friedman's PGMs [3], the Krizhevsky ImageNet paper [4], the original GAN [5], and arguably also the AlphaGo paper [6] and the Atari DQN paper [7].
As a recent addition, I've been impressed with Kevin P. Murphy's Probabilistic Machine Learning: An Introduction (2022)[1] and Advanced Topics (2023)[2].
The sad truth is that people nowadays can't even get through a 15-minute podcast without checking their Twitter feed multiple times. So, I'm not sure how many people would read through an 800-page textbook.
I think you'll find that the "screen brain" effect dissipates after about 20 minutes of discomfort. I've noticed this effect with novels and text books.
Note that I don't think it's a great idea to just "read through" an 800 page text book even if you can - you've got to do exercises and check your own knowledge or else you will be spinning your wheels.
> I think you'll find that the "screen brain" effect dissipates after about 20 minutes of discomfort
You mean one should persevere for more than 20 minutes and then can easily focus on the book? If so, that's great news! Note that I suffer from the "screen brain", but it's always good to know how the brain works.
Well, there is a trick to reading a lot: don't live for the thrill of finally finishing a book or a paper. Instead enjoy the process of reading and understanding every paragraph.
Not sure why this is downvoted. If it's because my armchair stats are wrong, I'd be very happy to be wrong. Otherwise, I was not saying the textbook is no good. I'm just speculating that many people couldn't enjoy even an invaluable book.
I am sorry but I am not a believer in a16z anything after their massive crypto token scams and wealth extraction. We all need to move away from all these companies who continue to bloat in private and then have a big pay day as a public company.
a16z's investment thesis completely centers around finding the next bagholder for their investments.
They don't care about building enduring businesses that make them a lot of money because the businesses are actually driving that much value creation.
They care about creating a hype cycle and dumping before the tide goes out and their investments crash on someone else's balance sheet.
None of them could get on a podcast and say anything remotely rational or intelligent about their crypto investments: it was all pump and dump. In one interview, their head of crypto investments' main value claim was "monetizing the sharing of your home wifi," and after some pushback he resorted to an appeal to authority, saying the founders they invest in went to the right schools and nobody else could understand because "they weren't in the room."
These are smart people who couldn't articulate a clear value. They obviously didn't believe any of it and just saw an opportunity to promote a Dutch tulip mania.
Good fund for con artist "entrepreneurs" though. Same deal for Chamath and his SPACs.
There's increasing hesitation among builders to take money from funds that have strayed this far from long-term value creation, because once they get addicted to short-term pump and dump profits and chasing the latest thing, it's hard to go back to supporting actual builders.
> a16z's investment thesis completely centers around finding the next bagholder for their investments.
Boom, yes, this. I think a big part of the (for lack of a better phrase) butt-hurt the best of HN feels towards a16z is summed up by the Obi-Wan scene where he's screaming at Anakin about how he was supposed to be the chosen one blah blah you've hurt my feelings because I truly believed you could have been something you are clearly not etc.
In reality, a16z are shrewd, smart operators, and it's a valid, scarily effective investment thesis to be able to push waves higher due to your own gravitas. If you had the Buffett/Elon effect (genuine ability to move markets) and could, why wouldn't you trade on it?
The "they were the best of us and now look at them" is a sad, hard reality lesson for anyone feeling it, and utterly irrelevant to a16z.
> a16z are shrewd, smart operators...if you had the Buffett/Elon effect (genuine ability to move markets) and could, why wouldn't you trade on it
Because you want to keep it. a16z's returns have been bottom-tier for a while. It's partly why they switched models from VC to retail asset manager. (The other part was to trade crypto.) That lack of returns translates to a lack of carry, which corrodes one's ability to attract and retain talent. It's a doom loop they've been floundering in for at least half a decade.
I'll never forget when I was at HBO in the '90s (large Oracle shop at the time) and we were trying to figure out an Oracle product. I wasn't core to the effort; I was just trying to help, since the people smarter than I was were completely stuck.
So I asked for the install cd. I figured I could install the product in question (Oracle Forms? It's been too long and I barely remember) and poke around, and maybe a beginner's eyes would see something the masters missed.
I stuck the cd in my computer and clicked the installer. It asked which install file I wanted to execute. There's more than one? Yep, several. And not all in one folder, clearly labeled. They were in several different places on the cd, and named things like x39usethis21 and yz2nousethis982. It was a completely garbage experience.
And to be clear, this wasn't some hand-crafted one-off. This was a silk-screened, mass-produced, official Oracle product.
So I went to the documentation, also on the cd. It wasn't in text files, or even standard html files -- there was a special documentation reader application, because of course there was -- even though the files were obviously (nearly) standard html. And again, the documentation app wanted to know what file to start with, and the doc files weren't well organized, and didn't have an obvious starting place. So I opened the most likely candidate, which turned out to not be the documentation root. But it did have a menu of links, and there was one that was labeled "index" or "start here" or something.
I clicked the link and got a 404. Double-check: yes, it's pointing to a location on the cd, just not one where there's a file.
There's more to the story, but the above pattern continued throughout. And the experts never got the tool to work, after months of trying, with Oracle consulting on speed-dial.
This confirmed what I've described since the '80s as Canyon's Law of Inverse Usability: the price of a product and the usability of that product tend to correlate inversely.
Yep. I first realized this in the '80s when I was working in FileMaker Plus (from Nashoba at the time) to import data and print out bulk mail. FMP was WYSIWYG -- easy layout tools, and I could make anything I wanted happen in seconds with a ~$2800 Mac 512K and a ~$7000 LaserWriter.
One time we had a job that required a high-speed printer, so we went to a copy shop. They had a ~$300K Xerox printer that was the size of three washing machines side by side. It could print something like 20-30 pages per minute compared to the LaserWriter's 2(?).
And the Xerox had something like a 4-inch amber-and-black display, and the guy setting up the job was putting in parameters by hand, almost like writing code to do the layout. He spent a minute doing that, hit the button, the machine spun up, and then BOOM, out came a page. And the layout was wrong.
He spent 20 minutes getting the layout right, through maybe 10 iterations of write code, spit out a copy, see that it was wrong.
And that's when it hit me: with a machine that expensive, you want it working non-stop. Time spent on setup is time wasted, clearly. And yet no one at Xerox was thinking about that, obviously, because that thing couldn't have wasted more of its time on setup if they'd tried.
And on the other hand, the (relatively) cheap Mac+LaserWriter had WYSIWYG and was ready the first time, every time. It was insane the difference between the two.
It's still a thing with Oracle. I worked at a large music retailer running on ATG/Oracle commerce and I don't think I ever saw documentation on how any of it worked.
The folks on the backend had it running and, if they were out, you were SOL. It was 7+ sites running out of a ~10 year old code base with an alarming amount of technical debt. I'm sure it wasn't cheap.
A reputation for making money unfairly, and at the expense of others is, and has always been, extremely relevant to your ability to convince other people to do business with you.
Yeah. That's why hedge funds and private equity in general went out of business 40 years ago.
Seriously. As someone that grew up in the 80s and watched how Milton Friedman's id was unleashed, and continues to run rampant today, there's little evidence to support your thesis.
Just curious, are there any sources you could share on that? Or is this the kind of intel you need to be in the really, really in-crowd for in order to know what's really going on?
[Edit]: Can't reply that deep in the thread, but thanks for the insight!
a16z started lagging in 2016 [1]. That led to the crypto fund in 2018 [2] and registering as an investment adviser in 2019 [3]. The '18 crypto fund did well [4], so they quadrupled down on the crack pipe and returned to form [5].
> a16z's investment thesis completely centers around finding the next bagholder for their investments.
To be fair, that's the investment thesis of all VC, not just a16z. VC is by definition early stage investing, the objective being to build up a project enough that it can be either IPO'd or sold to a bigger company, providing the returns for the next round of early stage investments. a16z is nothing unusual there.
The phrase "the next bagholder" means that it isn't worth it.
If, for example, you buy a house with a fucked up foundation, fixing and selling it isn't finding the next bagholder. Covering it up and selling it ASAP without disclosing it is finding the next bagholder.
> resorted to deference to authority by saying the founders they invest in went to the right schools and nobody else could understand because "they weren't in the room."
Marc Andreessen tried to defend Groupon's use of a non-standard accounting metric by saying all the critics didn't know what they were talking about because he was "in the room" when the decision was made, and the critics were not.
Given how things worked out for Groupon, I think it's fair to say that non-standard accounting metrics are non-standard for a reason.
That was one of many things that made me skeptical of pmarca and a16z.
> In one interview, their head of crypto investments main value claim was "monetizing the sharing of your home wifi," and after some pushback resorted to deference to authority by saying the founders they invest in went to the right schools and nobody else could understand because "they weren't in the room."
> None of them could get on a podcast and say anything remotely rational or intelligent
> main value claim was "monetizing the sharing of your home wifi,"
Let's see, we have a big stagnant incumbent monopoly, a last-mile moat that allows them the monopoly, and a proposed strategy for attacking the moat. Most ISPs have a "no-sharing" clause, but it isn't difficult to brainstorm potential workarounds: seeding connections into high-density locations, netflix boxes, even possibly a legislative play in a sympathetic area. Of course, the crux is in the execution of the workaround, but I'd expect this to have some complexity to it and I wouldn't expect a random a16z podcast host to necessarily have details at that resolution. I certainly wouldn't call them an idiot for not having all the details handy.
You seem pretty darn sure that it's a prima-facie idiotic idea, however. Can you back that up and explain your reasoning? Or are you just not remotely rational or intelligent enough to imagine a business that doesn't already exist? (See, I can be an asshole too!)
This is simply objectively inaccurate, and reflects very poorly on the speaker's levels of bias when such false claims are parroted.
It's fine to not like cryptocurrency, but don't lie about it.
Ransomware would not exist without cryptocurrency: it was a major innovation (that subsequently enabled tons of new use cases, many of them criminal, some of them not).
> Sick of people calling everything in crypto a Ponzi scheme. Some crypto projects are pump and dump schemes, while others are pyramid schemes. Others are just standard-issue fraud. Others are just middlemen skimming off the top. Stop glossing over the diversity in the industry.
People don’t owe crypto a ‘good attitude’. The only justifiable attitude toward it is a healthy scepticism until it proves to be anything more than a glorified pump and dump scheme. So far, it hasn’t cleared that bar in over a decade of trying.
Yes. I feel there should be some kind of anti-list of people who dumped SPACs and ICOs on the public.
I personally feel it was pretty gross behavior (in many, but not all, cases), and the people who did the egregious ones mostly knew at the time that they were doing a zero sum wealth transfer to themselves.
I personally avoid working with people who were involved in ICOs and SPACs where at the time of issuance a reasonable analysis could've shown that it was grossly overpriced to the public investors it was sold to (because in those cases, I believe that the issuer themselves should've known, and shouldn't have proceeded).
A list, ranked by person/sponsor by total value reduction across all SPACs they did since some time point (merger? before then?), and normalized for multiple compression by looking at the ratio of some overall public tech index between that date and now (so that they're not overly penalized by multiple compression from higher interest rates).
Either presented as a list on a web page or a list in an infographic/image.
Overall show how much wealth public investors have lost in aggregate from investing with that person, normalized to general tech stock market declines. How much did they lose relative to the market. Their alpha, or negative alpha, I suppose.
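The normalization described above is simple arithmetic; here is a minimal sketch of it (all prices and index levels are invented, and this is just the mechanics, not a real methodology):

```python
def relative_performance(price_at_merger, price_now, index_at_merger, index_now):
    """Return the SPAC's return minus a tech index's return over the same
    window, so general market declines aren't attributed to the sponsor."""
    spac_return = price_now / price_at_merger - 1
    index_return = index_now / index_at_merger - 1
    return spac_return - index_return  # negative = underperformed the market

# Hypothetical SPAC: $10.00 at merger, $2.50 now; the index fell 30% over
# the same period. The sponsor's "alpha" is the gap between the two.
alpha = relative_performance(10.00, 2.50, 100.0, 70.0)
print(f"{alpha:.2%}")  # -45.00%
```

Summing that figure, weighted by dollars raised, across every SPAC a sponsor did would give the aggregate, market-adjusted wealth loss the comment describes.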
That just isn't a reasonable statement. No one forces someone to participate in a Ponzi scheme.
The problem with SPACs is these were companies that did not go the IPO route because they would not pass SEC approval. Companies should go public even if they are unlikely to survive, but what we saw was mostly fraud. They received absurd valuations based on exaggerated growth claims combined with imaginary non-GAAP accounting -- things you can't do in an IPO.
Some of these participants will get in trouble. Enforcement is not immediate. a16z is probably going to end up in a lot of trouble over their cryptocurrency shenanigans. I think these guys pretty much burned their reputation in exchange for things like owning a $177m house in Malibu.
The consequences will be felt by everyone, not just the shitco and shitcoin hucksters.
Fair enough, I didn't realize SPACs were effectively an end-run around SEC approval & GAAP accounting - and I worded it too strongly for what was effectively an uninformed take.
I wouldn't go so far. I know the authors quite well, and as someone with multiple publications at machine learning conferences (and a started PhD in ML), I can say they know their stuff well.
Their crypto scams look like child's play compared to their $350M investment into Adam Neumann AGAIN. Like whaaat? Were "Liz" Holmes and Martin Shkreli not available? Or did SBF not satiate you enough?
This is oversimplified at best. Not all VCs are created equal. A broad-brush characterization like this removes any incentive for VCs to behave well (if everyone will assume the worst of you, why bother trying to build a good reputation), and discourages entrepreneurs from vetting their investors (why bother, if they're all equally bad).
The goal for all VCs is to make large returns from startups; otherwise they underperform. VCs know that 90% of these startups fail and will make sure that they are highly unlikely to lose. Even if it means dumping on retail at higher prices via selling SAFEs in 'crowdfunding campaigns'.
Do you include other parts of a16z like their biology investments? Do you believe the bio folks are doing scamming and wealth extraction (beyond the normal VC scamming and wealth extraction)? Or is it guilt by association: simply being at a16z when another part of the company did something dumb is enough to write them off?
a16z took decades to build a reputation and less than a year to toss it in the fire. It's hard to take their current discourse around anything seriously (they constantly publish think pieces on hype topics like AI or games).
The submission has nothing to do with what you're complaining about. It's just a review of key papers, writings, and courses that are most pertinent to understanding the current state of AI.
I think a16z and other SV companies got rich by being lucky in their investments in a low interest rates environment. Maybe they then bought into their own hype, or they’ve trapped themselves in overpromises to the people they’ve raised money from, and that’s why they’re now pumping crypto. They have no other options.
I’m curious to see what happens in the next couple of years and which big VC firms will remain given the current economy…
The article linked in the OP is mostly a list of other links you can visit to learn about the types of AI that are coming to market, and some background. Those links are not authored by A16z themselves.
I generally would not read anything authored by A16Z partners, those just feel like bad inspirational speeches authored by Thomas Friedman.
A thousand times echoing your sentiment, everyone on this thread seems dismissive, but the webpage linked carries some links which I would want to come back to later.
If the source itself is a problem, we wouldn't want to listen to anyone for some reason or another.
Adam Neumann, with support from a16z, has an excellent track record returning capital to Americans while leaving Chinese & Saudi investors to hold the bag. That is why they invested.
Looking at these comments, I can't think of another VC that has burned as much goodwill among technical people as a16z has. Don't get me wrong, it's well deserved, but it's just surprising how universal it seems to be (at least in this thread).
It's because Andreessen is the HN reader's ideal VC, on paper. He is (was?) highly technical, he built a very successful product with Netscape. He also came of age during the first tech bubble and in theory should be able to see through the BS.
Yet nearly every public statement he's made since becoming a full-time VC has been about pumping the worst companies and claiming they're the future. Airbnb, WeWork, a metric f'ton of crypto startups.
It's almost painful watching the co-founder of Netscape trying to explain to Tyler Cowen how Web3 is the new internet:
Not to take anything away from Netscape, I do think there were interesting technical things happening with that browser, but the company never made money.
I would go as far as to say their 'real' product was an avenue to attack Microsoft - not the browser they made.
They got a lot of money by positioning themselves as an anti-Microsoft vector, had a huge IPO as an unprofitable company during the dot com boom, successfully entrapped Microsoft in an anti-trust case, then got acquired by AOL.
Of course I'm not a billionaire and I don't doubt it took a lot of skill to earn that money. I'm just not sure those are the same skills you need to build a profitable business or useful product.
Crypto. Folks like Andreessen pissed off all sides.
The "crypto true believers" are pissed off because they got extracted like the pyramid scheme players they were. The "crypto is a scam" people are pissed off because it was bloody obvious that it was a pyramid scheme and these people are getting off scot free while the plebians take it in the shorts. And the crypto pyramid players got pissed because the big boys have the money to extract the payout better than they do.
> Don't forget Sequoia Capital now too, after the SBF fiasco
Single fiascos rarely tank brands.
Sequoia still has a braintrust that makes it profitable to associate with. Andreessen is increasingly a refuge for those who have no other choice, are in on the grift or are being grifted.
> Sequoia still has a braintrust that makes it profitable to associate with.
That's why this single fiasco is so damaging: the braintrust has proven to be a farcical joke. The guy was playing video games while on their funding calls and they called it "genius." Zero due diligence, just brown-nosing.
> why this single fiasco is so damaging: the braintrust has proven to be a farcical joke. The guy was playing video games while on their funding calls and they called it "genius."
Sequoia balance their SBFs with solid, profitable bets. That means they can keep raising funds, keep paying out carry, and through both of those maintain staying power. I don't love reducing this to economics, but at the end of the day if you can't pay your LPs, you can't pay your GPs, and when you have second-rate GPs you're going to wind up with second-rate founders. Sequoia brings home the bacon. Andreessen lives on management fees.
> Sequoia balance their SBFs with solid, profitable bets.
SBF was such a bad bet with so little due diligence that I don't think this is going to be true going forward and probably hasn't been for a while. They were running on zero interest rates and brand momentum so let's see how much money their next fund returns.
> were running on zero interest rates and brand momentum so let's see how much money their next fund returns
"Let's see" means they're still in the game. Anyone might stumble in the future. The difference between Sequoia and a16z is Sequoia might stumble, a16z already has.
> Not a single other VC is going to look at you negatively for raising from a16z
Have seen it directly. Not automatic dismissal. But dismissing the valuation, which puts a founder in down-round territory out of the gate. (Also, questions about naïveté.) We saw similar issues with SoftBank when they were in the late stages of their reputational shredder.
It’s similar to the crowdfunding penalty. Is it money? Sure. Does it impact going-forward fundraising prospects? Of course.
not to mention a glance at Andreessen's Twitter reveals someone who is deeply confused about almost everything, with a confidence whose origin I can't figure out
Wouldn't a more reasonable explanation be that a16z destroyed their brand through years of questionable investments, excessive hype, cringe-inducing behavior and being very obnoxious, very publicly? Also not a fan of their push for "American Exceptionalism", aka weapons dealing and mass surveillance.
The point is a16z is wholly unqualified to offer commentary on AI. The authors are qualified. But I’m immediately sceptical given their choice of affiliation.
Right, I am making a comment about the overall tenor of the website, which requires looking at multiple threads.
You're saying "No, they just really hate a16z" and I'm saying huh, it is interesting that all of the threads on new technology, whether it is LLMs or crypto or whatever, tend to have people who "really hate [xyz]".
Honestly, it speaks to the magnitude of the LLM breakthrough that HN is at least mixed on that topic.
If you actually joined the website 13 days ago though, I'm not sure you're really in a position to evaluate a shift in culture over a span of years?
I'm not talking down, in fact I think it is likely they have either been lurking for much longer than they've had an account or this is not their first account. That's the case for me, for instance.
There's no higher "status" or anything to be gained on how long you spend on this silly site, I just think it is relevant to the specific conversation.
I'm inclined to agree that the vibe has shifted a bit. HN users have always shown a lot of hostility towards certain groups, like journalists, politicians and financiers. VCs, founders and tech executives were more often seen as part of the in-group. I think the explanation is simple: many HN users are tech employees, and they've had a rude awakening with recent layoffs as to who is the master and who the slave.
My take is the opposite. Hacker News has turned increasingly away from Hackers and more towards Marketing. Understandably, some of us resent the apparent astroturfing of this site and react accordingly.
It’s so easy to throw out meta-commentary and to psychologize[1] people based on a single prompt that you then don’t even have to defend (“people are reading this as more of a[...]”). Oh, were a16z good or bad, or neutral? Uh, doesn’t matter as long as I get to express my little pet peeve.
[1] Using “ressentiment” is on the level of accusing someone of having father/mother issues (the Freud variant).
Have a look at HN threads on topics like corporate diversity, women in tech, or H1B visas.
One man's 'objective critique' is going to be another man's 'grievance culture'.
In any case, A16z attracts more dislike because their image is that of influencers and hype men. You won't see the same dislike of Kleiner Perkins or others because they keep a low public profile.
Dunno, all I was saying is that it was more difficult for me than a regular visit to HN. It actually made me a _lot_ more sceptical of the average commenter on things I didn't know anything about.
Now that we have good language models, it would be great to see what a quantitative analysis of this perceived increase in ressentiment would actually find. I lean toward agreeing with your comment, but I'm also curious to see if we're both hallucinating.
a16z have always been sleazy, and their entire crypto/NFT run just proved it further. Their pivot to AI the moment the NFT/crypto market started crashing shows how little they believe their own bullshit. I don't have a problem with people getting wealthy, I have a problem with a16z because they did it by pushing crypto scams on normal, everyday people, by convincing the world it was the future while knowing full well it was shit all the way down.
Now they're trying to do it again. AI is important, but it's simply not the "all things will be different in six weeks, so invest now or get left behind!" story that they're selling.
Over the last year, we’ve met with dozens of startup founders and operators in large companies who deal directly with generative AI. We’ve observed that infrastructure vendors are likely the biggest winners in this market so far, capturing the majority of dollars flowing through the stack. Application companies are growing topline revenues very quickly but often struggle with retention, product differentiation, and gross margins. And most model providers, though responsible for the very existence of this market, haven’t yet achieved large commercial scale.
In other words, the companies creating the most value — i.e. training generative AI models and applying them in new apps — haven't captured most of it.
Would love to learn how a16z measures who is 'creating the most value.' My guess? Vibes and a conflation of "customer facing" and "value creation."
And I would disagree - the chipmakers have produced most of the value and are reaping massive rewards right now. Certainly, the new LLM wrappers are not the value creators.
I know the authors from the blog post quite well. Say what you will about the firm, but one of the authors has been investing in machine learning since 2016, and another has a PhD in CS (and a SIGCOMM Test of Time award!)
I come from a strong ML background (multiple publications, PhD dropout), I would say that the canon is actually quite good.
> one of the authors have been investing in machine learning since 2016
Ditto.
I have been doing something (in another field) since the mid '90s. I would say most people would consider me an expert. I get referrals for what I do from 'top' people and investors in tech. I also went to what most would consider 'a top college'. I would never want to be positioned as being right or expert because of the amount of time I spent doing something, or the college that I went to, or who trusts me, but because of actual things that I have done that point to my expertise (not a halo of some type).
I was an early member of the CNCF community (circa 2016), and at the time I thought "wow things are moving quickly." Lots of different tech was being introduced to solve similar problems -- I distinctly remember multiple ways of templating K8S YAML :-).
Now that I'm spending time learning AI, it feels the same -- but the innovation pace feels at least 10x faster than the evolution of the cloud native ecosystem.
At this point, there's a reasonable degree of convergence around the core abstractions you should start with in the cloud-native world, and an article written today on this would probably be fine a year from now. I doubt this is the case in AI.
(Caveat: I've only been learning about the space for about 4 weeks, so maybe it's just me!)
> At this point, there's a reasonable degree of convergence around the core abstractions you should start with in the cloud-native world, and an article written today on this would probably be fine a year from now. I doubt this is the case in AI.
It's a continuous process, and it is way, way, way better than it was 8 years ago. Most of the frameworks can export models to each other (modulo some layers), and ONNX actually works, for the most part.
I think this is referencing an interview where Tyler Cowen interviewed Andreessen and asked him directly about the true value of crypto and use cases and the response was... lackluster (imo).
The original comment seems sarcastic now. The answer that stands out here, and is really the only answer from MA, is: "Money". Whoever holds the most tokens benefits as their perceived value rises, because they can sell. It's not the little micropayments to podcasters (which are possible in myriad ways without tokens); it's the market forces driving up the price of tokens that early investors obtained at $0.00. So they found a lucrative business of dumping tokens, and that really seems to be all that web3 became. I will always cherish knowing about the Axie Infinity situation and the crazy explanations for why that business model was the future!
Doesn't seem like it to me, and it skews very non-technical, but I can vouch for some of the sources they link - Sasha Rush's Annotated Transformer is great.
If I were to give a critique, it seems too skewed towards things business/product people would find interesting that aren't actually all that impactful (Tesla's self-driving, for instance) and also seems skewed to things I see in certain SF twitter bubbles.
Directly linking lesswrong posts also seems a bit... cringe-y for a VC.
Wow, why so much hate against a16z? There's a really funny clip of Marc on the Rogan podcast where he says something like "I have to come on Rogan, there's so much clout." Rogan was immediately like "igghh".
Well, that is a good list. I would guess that I have only previously read the content from about 15% of the links, oh well!
Like everyone else, starting about a year and a half ago I have found it really difficult to stay up to date.
I try to dive deep on a narrow topic for several months and then move on.
I am just wrapping up a dive into GPT+LangChain+LlamaIndex applications. I am now preparing to drop most follows on social media for GPT+LangChain+LlamaIndex and try to find good people and companies to follow for LLM+Knowledge Graphs (something I tried 3 years ago, but the field was too new).
I find that when I want to dive into something new the best starting point is finding the right people who post links to the best new papers, etc.
Pompous to use the word 'canon' to describe what amounts to a bunch of links and thoughts/opinions. It implies the authors of the various articles are the authoritative sources/experts with whom there is no point disagreeing.
A16Z: Friendship ended with Blockchain. Now AI is my best friend.
What's the last investment A16Z was actually ahead of the curve on? I guess it isn't important, since from their position, they don't rely on being ahead of the curve in order to make good investments, they make their investments good through their network and funding abilities.
The early investors still always make bank. A 5x is nice.
>By Q1 this year, venture capital firm Andreessen Horowitz’s (a16z) flagship crypto fund had returned almost five times for early backers, according to documents reviewed by Semafor. The firm sold a portion of its tokens right before crypto’s bear market began in May, meaning that early investors are guaranteed a successful return.
I'm reminded of one of my favorite quotes from Margin Call, one of my favorite movies. He's talking about the big Goldman-esque firm they work for, but it applies here too.
"I've been at this company for 10 years, and I've seen things you wouldn't believe. When all is said and done, they do not lose money. They don't mind if everybody else does, but they don't lose."
From the same movie, another quote that applies to VC trendsetting as a business model:
“There are three ways to make a living in this business: be first; be smarter; or cheat. Now, I don't cheat. And although I like to think we have some pretty smart people in this building, it sure is a hell of a lot easier to just be first.”
Not sure about cheating being off-limits, given the crypto debacle…
Outside of the major market makers, every financial firm is desperately trying to flee anything with any semblance of public liquidity because they can't beat the market makers & John Q Public without insider trading.
Private tech firms are a great way to flee liquidity.
I was quite surprised to see Sequoia getting involved in crypto fiascos.
My data sample is very small, but I have a pretty good track record of shifting career focus over the last 20 years. In particular, 2000 and 2008 were two HUGE shifts for me, as the writing was on the wall before each crisis hit. The common theme driving the change: too much competition. I jumped out of areas where there was still tremendous growth to be seen but no serious money to be made.
Did you actually shift careers those two times in the past (i.e., change companies/jobs/job focus)? And will you be doing the same now that you're "calling a third"?
The links are the substance. Do you see no value in compiling a list of resources? They also explain why each article was included which helps quite a bit.
It's almost offensive how this "AI Canon" leaves out landmark AI results from... what, before the 2020s? (or 2017, I suppose) Honestly reads like something generated by ChatGPT.
Seems like a good list, I enjoyed the comic explainer of stable diffusion and learned a thing. Thank you to the authors and a16z for publishing this :).
Why does everybody (including this a16z dude) underestimate or not mention:
1. Quality of input data - for language models that are currently set up to be force-fed any incoming data instead of really trained (see 2.), this is the greatest gain you can get for your money. Models can't distinguish between truth and nonsense; they're forced to auto-complete their training data regardless of how stupid or sane it is.
2. Evaluation of input data by the model itself - self-evaluating during training what is nonsense and what makes sense/is worth learning, based on knowledge gathered so far, dealing with biases in this area, etc.
Current training methods equate things like first-order logic with any kind of nonsense - the only defense is quantity, not quality.
But there are many widely repeated things that are plainly wrong. Simplifying this thought: if there weren't, there would be no further progress for humankind. We constantly reexamine assumptions and come up with new theories while leaving solid axioms untouched - why not teach this approach to LLMs, or hardcode it into them?
Those two aspects seem to be problems with large gains, yet nobody seems to be discussing them.
Align training toward common sense and the model's own judgement, not unconditional alignment with the input data.
If fine-tuning works, why not start training with first principles - a dictionary, logic, base theories like sets and categories, an encyclopedia of facts (omitting historical facts, which are irrelevant at this stage), etc. - taking snapshots at each stage so others can fork their own training trees? Maybe even stop calling fine-tuning fine-tuning; they're just learning stages. Let researchers play with paths in those trees and evaluate them to find something more optimal, find optimal network sizes for each step, allow models to gradually grow in size, etc.
To rephrase it a bit - we say that base models trained on large data work well when fine-tuned. Why not check whether base models trained on first principles, then trained recursively on concepts that build on those principles, are efficient? Did anybody try?
As a concrete example - you want an LLM to be good at math? Tokenize digits, teach it base-10 math, teach it addition, subtraction, multiplication, division, exponentiation, and all the basic mathematical operations/functions, then grow from that.
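As a toy illustration of the "tokenize digits" point: split numbers into single-digit tokens so the model sees place value explicitly, rather than getting one opaque token per multi-digit number. (The regex and the `tokenize` function here are hypothetical, not taken from any real tokenizer.)

```python
import re

def tokenize(text):
    # Digits become individual tokens; words and other symbols stay whole.
    return re.findall(r"\d|[A-Za-z]+|\S", text)

print(tokenize("add 127 and 9"))  # ['add', '1', '2', '7', 'and', '9']
```

With this scheme, "127" is three tokens the model can do column arithmetic over, instead of one symbol it has to memorize facts about.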
You want it to do good code completion? Teach it BNF, parsing, ASTs, and interpreting, then code examples with simple output, then more complex code (GitHub stuff).
Training LLMs should start with teaching a tiny model ASCII, numbers, and basic ops on them, then slowly introducing words instead of symbols ("is" instead of "="), then forming basic phrases, then basic sentences, basic language grammar, etc. - everything in the software 2.0 way: just throw in examples that have expected output and do back-propagation/gradient descent on it.
Training also has to have a way of gradually growing the model size in an (ideally) optimal way.
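The staged-training/snapshot idea above could be sketched roughly like this. Everything here is a stand-in: `train_one_stage` is a placeholder for real gradient descent on stage-specific data, the "model" is just a dict, and the stage names are invented to mirror the curriculum described in the comment.

```python
import copy

STAGES = ["ascii_and_digits", "base10_arithmetic", "basic_words", "grammar"]

def train_one_stage(model, stage):
    # Stand-in for gradient descent on examples for this stage.
    model = dict(model)
    model["stages_seen"] = model["stages_seen"] + [stage]
    return model

def run_curriculum(stages, base_model=None):
    """Train stage by stage, snapshotting after each so others can fork."""
    model = base_model or {"stages_seen": []}
    snapshots = {}
    for stage in stages:
        model = train_one_stage(model, stage)
        snapshots[stage] = copy.deepcopy(model)  # forkable checkpoint
    return model, snapshots

model, snaps = run_curriculum(STAGES)
# Fork a new branch of the training tree from an intermediate snapshot:
fork, _ = run_curriculum(["code_bnf", "code_ast"],
                         base_model=snaps["base10_arithmetic"])
print(fork["stages_seen"])
# -> ['ascii_and_digits', 'base10_arithmetic', 'code_bnf', 'code_ast']
```

The point of the snapshot dict is exactly the "training trees" the comment proposes: researchers could branch from any checkpoint and compare paths, rather than always fine-tuning from one monolithic base model.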
Click back a couple years and you'll find this page: https://news.ycombinator.com/from?site=a16z.com&next=2981684... with submissions like "DAOs, a Canon" https://news.ycombinator.com/item?id=29440901