

Congrats! Great game - I did today's, and then immediately did them for the last week - it's really fun, tricky, and gives that nice little dopamine hit when you untangle the answer.

Contra several other people here, I also like that you _can't_ skip ahead - yes, I know that's probably "Venus", which is a good clue for working back up to the clues I _can't_ figure out. It's the journey, not the destination.


This about jibes with my normal experience with the LLMs - a superficially valid answer that falls apart when you start interrogating how and what it actually did.

One interesting thread here is the long shadow of Greek and later Roman statuary and architecture on Western European self image - the marble statues, columns, and architecture of the Roman empire were taken as the origin story for Western culture - "we were an empire built on philosophers and artists, and look at the (gleaming white) purity of their works."

It turns out, of course, that all those gleaming white statues were vibrantly colored back when their creators were around, and the Greeks and Romans were not cultures of conformity or austerity - quite the opposite, but the seeds of the philosophy sank in hard, and here we are.

(Ironically, both stoicism and Christian asceticism were responses to that Roman excess, but they've somehow been merged with the white marble to produce a "purity" aesthetic to be lionized whenever someone gets the mildly uncomfortable notion that their neighbor is not exactly like them.)


> the Greeks and Romans were not cultures of conformity or austerity - quite the opposite, but the seeds of the philosophy sank in hard, and here we are.

I don’t think anyone thinks they were. They are usually assumed to be hedonistic in popular culture.


With Romans, at least, the typical (and incorrect) popular narrative is that they were initially austere - and that period is when their civilization achieved its peak - and then became decadent and ruin followed.

I think you really start to see the fetishization of the Greeks and Romans as an aesthetic in the Neoclassicism movements of the 18th century, and I'm actually not sure how much was known about actual Greek and Roman lifestyles (Roman, in particular - a lot of this is tied up with notions of Empire) at the time.

Kids are taught about them as super-serious, no-fun civilizations. Then I associate it with the fetishisation of military conquest and such.

I would see the gods as hedonistic, but not the Greeks. Honestly, my bias is that they were very boring and sort of artificial.


Maybe not “the Greeks” broadly, but Spartans specifically are equated with austerity to the extent that “spartan” is adopted as an adjective meaning “showing indifference to comfort and luxury”.

Do people associate the Greeks with the Spartans more than the Athenians though?

Heh, not in Southern Europe. They are seen as the spark of Western Civilization, from Law to Arts to Mathematics and Science.

Two thoughts:

The first is that LLMs are bar none the absolute best natural language processing and producing systems we’ve ever made. They are absolutely fantastic at taking unstructured user inputs and producing natural-looking (if slightly stilted) output. The problem is that, for almost anything else we’ve ever needed a computer to do, they’re not nearly as good as the systems we’ve built specifically to do those things. We invented a linguist and mistook it for an engineer.

The second is that there’s a maxim in media studies which is almost universally applicable, which is that the first use of a new medium is to recapitulate the old. The first TV was radio shows, the first websites looked like print (I work in synthetic biology, and we’re in the “recapitulating industrial chemistry” phase). It’s only once people become familiar with the new medium (and, really, when you have “natives” to that medium) that we really become aware of what the new medium can do and start creating new things. It strikes me we’re in that recapitulating phase with the LLMs - I don’t think we actually know what these things are good for, so we’re just putting them everywhere and redoing stuff we already know how to do with them, and the results are pretty lackluster. It’s obvious there’s a “there” there with LLMs (in a way there wasn’t with, say, Web 3.0, or “the metaverse,” or some of the other weird fads recently), but we don’t really know how to actually wield these tools yet, and I can’t imagine the appropriate use of them will be chatbots when we do figure it out.


Transformers still excel at translation, which is what they were originally designed to do. It's just no longer about translating only language. Now it's clear they're good at all sorts of transformations, translating ideas, styles, etc. They represent an incredibly versatile and one-shot programmable interface. Some of the most successful applications of them so far are as some form of interface between intent and action.

And we are still just barely understanding the potential of multimodal transformers. Wait till we get to metamultimodal transformers, where the modalities themselves are assembled on the fly to best meet some goal. It's already fascinating scrolling through latent space [0] in diffusion models, now imagine scrolling through "modality space", with some arbitrary concept or message as a fixed point, being able to explore different novel expressions of the same idea, and sample at different points along the path between imagery and sound and text and whatever other useful modalities we discover. Acid trip as a service.

[0] https://keras.io/examples/generative/random_walks_with_stabl...
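For anyone who wants to poke at the "scrolling" idea, here's a minimal sketch using nothing but NumPy: spherically interpolating between two random latents of the sort a diffusion model denoises from. The latent shape and step count are arbitrary assumptions on my part, and a real pipeline (like the Keras example above) would decode each interpolated latent into an image.

    import numpy as np

    def slerp(v0, v1, t):
        """Spherical interpolation between two flattened latent vectors."""
        u0 = v0 / np.linalg.norm(v0)
        u1 = v1 / np.linalg.norm(v1)
        omega = np.arccos(np.clip(np.dot(u0, u1), -1.0, 1.0))
        if np.isclose(omega, 0.0):
            return (1.0 - t) * v0 + t * v1
        return (np.sin((1.0 - t) * omega) * v0 + np.sin(t * omega) * v1) / np.sin(omega)

    rng = np.random.default_rng(0)
    shape = (4, 64, 64)                     # assumed latent shape; model-dependent
    a = rng.standard_normal(shape).ravel()  # latent "behind" image A
    b = rng.standard_normal(shape).ravel()  # latent "behind" image B

    # Each step is a new latent; decoding every one yields a frame of the walk.
    frames = [slerp(a, b, t).reshape(shape) for t in np.linspace(0.0, 1.0, 8)]

The "modality space" version would be the same trick, just with the interpolation happening over whatever shared representation ties the modalities together.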


Something that has been bugging me is that, applications-wise, the exploitative end of the "exploitation-exploration" trade-off (for lack of a better summary) has gotten way more attention than the other side.

So, besides the complaints about accuracy, hallucinations (you said "acid trip") get dissed much more than they need to be.


Yeah, I think if I had to put money on where the "native lands" of the LLMs are, it's in a much deeper embrace of the large model itself - the emergent relationships, architectures, and semantics that the models generate. The chatbots have stolen the spotlight, but if you look at the use of LLMs for biology, and specifically the protein models, that's one area where they've been truly revolutionary, because they "get" the "language" of proteins in a way we simply did not before. That points at a more general property of the models - "language" is just relationships and meanings, so anywhere you have a large collection of existing "texts" that you can't read or don't really understand, the "analogy machines" are a potential game changer.



Language has attractor dynamics. It guides high-dimensional concepts in one system's thought space/hidden state into a compact, error-correcting exchange format that another system can then expand back into a full-dimensional understanding.

This necessitates some kind of shared context, as language cannot be effectively compressed unless someone can decompress it at the other end by reintroducing the context that was removed from the communication.

For example, if I say, "this salad is good", you already know what "this", "salad", "is" and "good" mean. You also know how the order of the words affects their meaning, since they aren't strictly commutative. Each one of these utterances serves to transform its prior until you're left with sort of a highly-compressed four-word "key", which when processed by you, expands back into this high-dimensional concept of a good salad, and all the associations that come with it. If I had to explain any particular word to you, it might involve quite a long conversation.

As a side note, those words themselves, and their constituent phonemes/graphemes, also represent attractors within language/utterance space, where they each guide some high-entropy output into a lower-entropy, more predictable representation of something. This kind of hints at just why next-token prediction and other basic techniques work so well for creating highly generalized models. Each token is a transformation of the computed transformations of all prior tokens, and some concepts are fixed points, meaning that, assuming a basic shared context, "I want to go fishing" and "Fishing is what I'd like to go do" both result in the same effective meaning despite being different series of graphemes. These fixed points under this transformation basically represent said shared context.
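To make "attractor" and "fixed point" concrete, here's a toy numerical sketch - just an illustrative contraction mapping, nothing to do with actual transformer internals: wildly different starting states get pulled toward the same stable point, the way different paraphrases collapse onto one meaning.

    import numpy as np

    # Toy attractor: repeatedly applying the contraction f(x) = 0.5 * x + c
    # pulls any starting state toward the same fixed point x* = 2 * c.
    c = np.array([1.0, -2.0])
    f = lambda x: 0.5 * x + c

    for start in (np.array([100.0, 100.0]), np.array([-7.0, 3.0])):
        x = start
        for _ in range(50):
            x = f(x)
        print(start, "->", x)  # both converge to [2.0, -4.0]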

So the way I see it: Interfaces often (always?) exhibit attractor dynamics, taking an input or hidden state with a relatively large amount of entropy, and reducing the entropy such that its output is predictable and can be used to synchronize other systems.

Understanding itself is an interface: you can imagine understanding as the effective decontextualization and recontextualization of a concept; in other words, understanding is generalization. Generalization involves attractor dynamics, as that's all decontextualization and recontextualization are: structures which can be used to compress or decompress information by learning its associations, and the structure which guides these associations can be seen as an attractor.

Thus, understanding is an attractor which takes complex, noisy input and produces a stable, predictable hidden state or output. And successful, stable interfaces represent fixed points between two or more complex systems where the language is the transformation step that "aligns" some subset of attractors within each system such that states from system A can be received by system B with minimal loss of information. In order for two systems to be able to communicate effectively, they must have some structural similarities in their attractor landscapes, which represent the same relational context.

Transformers effectively learn how to be a "superattractor" of sorts, highly adept at creating interfaces on the fly by guiding incoming information through a series of transformations that quite literally are designed to maximize predictability/probability of the output. It's like a skeleton key, a missing map, that allows two systems to exchange information without necessarily having developed compatible interfaces, that is, some similar subset of their attractor space. For example, if I know English and my speaking partner only knows Chinese, the transformer represents a shared context that can be used to link our two interfaces.

It goes further too, because transformers are stellar for translating between concepts even if there is only one other party. They can take any two concepts and, depending on how well they were trained, create an ad hoc interface between them which allows information to become "unstuck" and be freely guided from one concept to another, unlocking a deeper understanding/generalization which can then be used elsewhere. We see this happening now with proteins, and as I liked how you put it, "protein language". You're absolutely right that language is just relationships and meanings, but hopefully it's clear now that all understanding boils down to the identification, extraction and transformation of relational structures, which can be thought of as "attractor spaces", guiding unpredictable input into predictable hidden state / output.


Roughly would have loved to translate some of this "understanding" to "action", without further intervening steps of "unlocking [deeper whatever]" (contingent on his intentions), perhaps aka "waiting for the right moment". Lol


I haven't read Understanding Media by Marshall McLuhan, but I think he introduced your second point in that book, in 1964. He claims that the content of each new medium is a previous medium. Video games contain film, film contains theater, theater contains screenplay, screenplay contains literature, literature contains spoken stories, spoken stories contain folklore, and I suppose if one were an anthropologist, they could find more and more links in the chain.

It's probably the same in AI — the world needs AI to be chat (or photos, or movies, or search, or an autopilot, or a service provider ...) before it can grow meaningfully beyond. Once people understand neural networks, we can broadly advance to new forms of mass-application machine learning. I am hopeful that that will be the next big leap. If McLuhan is correct, that next big leap will be something that is operable like machine learning, but essentially different.

Here's Marc Andreessen applying it to AI and search on Lex Fridman's podcast: https://youtu.be/-hxeDjAxvJ8?t=160


Why are we comparing LLMs to media? I think media has much more freedom in a creative sense, its end goal is often very open-ended, especially when it's used for artistic purposes.

When it comes to AI, we're trying to replace existing technology with it. We want it to drive a car, write an email, fix a bug etc. That premise is what gives it economic value, since we have a bunch of cars/emails/bugs that need driving/writing/fixing.

Sure, it's interesting to think about other things it could potentially achieve when we think out of the box and find use cases that fit it more, but the "old things" we need to do won't magically go away. So I think we should be careful about such overgeneralizations, especially when they're covertly used to hype the technology and maintain investments.


Media in this case is the plural of medium — something that both contains information and describes its interface.

I think the idea is a bit different than what you describe. New media contains in itself the essence of old media, but it does not necessarily supersede it. For example, we have theater and film.

This “rule” of media doesn’t help us predict how or whether AI will evolve, so it is difficult to relate it to hyping. It is an exclusionary heuristic for future predictions — it helps us exclude unlikely ones but doesn’t help us come up with any.

I personally am hopeful that AI will evolve into something else that has more essence to it than mere function. But that’s just hope, which is rather less promising than hype.


Oral cultures had theater.


It was a mistake to call LLMs "AI". Now people expect it to be generic.


OpenAI has been pushing the idea that these things are generic—and therefore the path to AGI—from the beginning. Their entire sales pitch to investors is that they have the lead on the tech that is most likely to replace all jobs.

If the whole thing turns out to be a really nifty commodity component in other people's pipelines, the investors won't get a return on any kind of reasonable timetable. So OpenAI keeps pushing the AGI line even as it falls apart.


I mean we don’t know that they’re wrong? Not “all jobs” but many of the white collar jobs we have today?

I work in medical insurance billing and there are hundreds of thousands (minimum) jobs that could be made obsolete on the payer and clinic side by LLMs. The translation between PDF of a payer’s rates and billing rules => standardized 837 or API request to a clearinghouse is…not much. And then on the other side, Claude Code could build you an adjudication engine in a few quarters.
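As a rough sketch of the kind of extraction step I mean - the model name, prompt, and JSON fields below are all made up for illustration, this is nowhere near a real payer spec, and it assumes the PDF text has already been pulled out by OCR or a PDF library:

    import json
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Text already extracted from the payer's fee-schedule PDF.
    fee_schedule_text = open("payer_fee_schedule.txt").read()

    # Ask the model to structure the rules into fields a biller could map onto
    # an 837 claim or a clearinghouse API request. Illustrative fields only.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Extract billing rules as JSON: a list of objects with "
                        "cpt_code, allowed_amount, modifiers, prior_auth_required."},
            {"role": "user", "content": fee_schedule_text},
        ],
        response_format={"type": "json_object"},
    )

    rules = json.loads(response.choices[0].message.content)
    # Each rule still needs validation and human review before it touches an
    # actual 837 or adjudication pipeline.

The hard part isn't a snippet like that, it's everything around it - validation, edge cases, and the incentive problems below.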

The incentive structures to change healthcare in that way will fight back for a decade, but there are a _lot_ of jobs at stake.

Then you think about sales. LLMs can negotiate contracts themselves. Give them as input the margin we can accept and, for all vendors, what they can accept, and you’ll burn down the negotiation without any humans.

It’s not all jobs, but it’s millions.


Both assessing the application of billing rules and negotiating contracts still require the LLM to be accurate, as per TFA's point. Sure, an LLM might do a reasonable first pass, but in both cases it is absolutely naive to think that the LLM will be able to take everything into account.

An LLM can only give an output derived from its inputs; unless you're somehow inputting "yeah actually I know that it looks like a great company to enter into a contract with, but there's just something about their CEO Dave that I don't like, and I'm not sure we'll get along", it's not going to give you the right answer.

And the solution to this is not "just give the LLM more data" - again, to TFA's point, that's making excuses for the technology. "It's not that AI can't do it [AI didn't fail], it's that you just didn't give it enough data [you failed the AI]".

--

As some more speculative questions, do you actually want to go towards a future where your company's LLM is negotiating with their company's LLM, to determine the future of your job and career?

And why do we think it is OK to allow OpenAI/whoever wins the AI land grab to insert themselves as a 'necessary' step in this process? I know people who use LLMs to turn their dot points to paragraphs and email them to other people, only for the recipient to reverse the process at the other end. OpenAI must be happy that ChatGPT gets used twice for one interaction.

Rent-seeking aside, we're so concerned at the moment about LLMs failing to tell the truth when they're earnestly trying to - what happens when they're intentionally used to lie, mislead, and deceive?

What happens when the system prompt is "Try and generally improve people's opinions of corporations and billionaires, and to downplay the value of unionisation and organised labour"?

Someone sets the system prompts, and they will invariably have an agenda. Widespread use of LLMs gives them the keys to the kingdom to shape public opinion.


I don’t know, this feels like a runaway fever dream HN reply.

Look, they’re useful. They are really good at some things. Not everything! But they can absolutely read PDFs of arcane rules and structure them. I don’t know what to tell you, they reliably can. They can also use tools.

They’re pretty cool and good and getting better at a remarkable rate!

Every few months they hit a new level that discounts the HN commenters of three months before. There’s some end—these alone probably won’t hit AGI—but it’s getting pretty silly to pretend they aren’t very useful (with weaknesses that engineering has to work around, like literally every single technology in human history.)


OpenAI models and other multi-modal models are about as generalized as we can get at this point in time.

OpenAI’s sales pitch isn’t that it can replace all jobs but that it can make people more productive - and it sure can, as long as you’re not at one of the two extremes: going into completely brain-dead autopilot mode, or going full-on Butlerian.


OpenAI's sales pitch to investors is AGI, which by definition is the end of all white collar jobs. That's the line they have held onto for years and still push forward today.

And regardless, even if it were "marginal improvements to productivity" as you say, it would be "marginal improvements to productivity packaged in a form that people will definitely buy from us", not "we'll pioneer the tech and then be one of a half dozen vendors of a commodity that's racing to the bottom on price".


First of all, "AI" is and always has been a vague term with a shifting definition. "AI" used to mean state search programs or rule-based reasoning systems written in LISP. When deep learning hit, lots of people stopped considering symbolic (i.e., non neural-net) AI to be AI. Now LLMs threaten to do the same to older neural-net methods. A pedantic conversation about what is and isn't true AI is not productive.

Second of all, LLMs have extremely impressive generic uses considering that their training just consists of consuming large amounts of unsorted text. Any counter argument about "it's not real intelligence" or "it's just a next-token predictor" ignores the fact that LLMs have enabled us to do things with machines that would have seemed impossible just a few years ago. No, they are not perfect, and yes there are lots of rough edges, but the fact that simply "solving text" has gotten us this far is huge and echoes some aspects of the Unix philosophy...

"Write programs to handle text streams, because that is a universal interface."


> A pedantic conversation about what is and isn't true AI is not productive.

It's not at all 'pedantic' and while it's not productive to be having to rail against this stupid term, that is not the fault of the people pushing back at it. It's the fault of the hype merchants who have promoted it.

A key part of thinking independently is to be continually questioning the use of language.

> Any counter argument about "it's not real intelligence" or "it's just a next-token predictor" ignores the fact that LLMs have enabled us to do things with machines that would have seemed impossible just a few years ago.

No, it's entirely possible to appreciate that LLMs are a very powerful and useful technology while also pointing out that they are not 'intelligence' in any meaningful sense of the word and that labeling them 'artificial intelligence' is unhelpful to users and, ultimately, to the industry.


> "AI" used to mean state search programs or rule-based reasoning systems written in LISP. When deep learning hit, lots of people stopped considering symbolic (i.e., non neural-net) AI to be AI. Now LLMs threaten to do the same to older neural-net methods. A pedantic conversation about what is and isn't true AI is not productive.

I think you are misstating the problem here.

All of the things you name are still AI.

None of the things you name are, or have ever been, AI.

The problem is that there is AI, the computer science subfield of artificial intelligence, which includes things like expert systems, NPCs in games, and LLMs, and then there is AI, the "true" artificial intelligence, brought to us exclusively by science fiction, which includes things (or people!) like Commander Data, Skynet, Durandal, and HAL 9000.

The general public doesn't understand this distinction in a deep way—even those who recognize that things like Skynet are fiction get confused when they see an LLM apparently able to carry on a coherent conversation with a human—and too many of us, who came into this with a basic understanding of the distinction and who should know better, have bought the hype (and in some cases outright lies) of companies like OpenAI wholesale.

These facts (among others) have combined to allow the various AI grifters to continue operating without being called out on their bullshit.


They're pretty AI to me. I've been using ChatGPT to explain things to me while learning a foreign language, and a native speaker has been overseeing the comments from it. It hasn't said anything that the native has disagreed with yet.


I reckon you’re proving their point. You’re using a large language model for language-specific tasks. It ought to be good at that, but it doesn’t mean it is generic artificial intelligence.


Generic artificial intelligence is a sufficiently large bag of tricks. That's what natural intelligence is. There's no evidence that it's not just tricks all the way down.

I'm not asking the model to translate from one language to another - I'm asking it to explain to me why a certain word combination means something specific.

It can also solve/explain a lot of things that aren't language. Bag of tricks.


Like the OP said "LLMs are bar none the absolute best natural language processing and producing systems we’ve ever made".

They may not be good at much else.


Yes, but your use case is language. I use LLMs for all kind of stuff from programming, creative work, etc. so I know it's useful even elsewhere. But as the generic term "AI" is being used, people expect it to be good at everything a human can be good at and then whine about how stupid the "AI" is.


I tried the same with another foreign language. Every native speaker has told me the answers are crap.


Could you give an example?


In Bulgarian, the equivalent of *"tocayo"* (a person who has the same name as another) has no exact single-word translation, but it can be expressed in several ways:

1. *"съименник" (suimennik)* – The closest term; it literally means "person with the same name".
2. *"човек със същото име" (chovek sŭs sŭshtoto ime)* – Translates as "person with the same name".
3. *"тезка" (tezka)* – A rarer colloquial form, influenced by Russian.

In everyday use, Bulgarians usually say something like *"Имаме едно и също име"* (Imame edno i sashto ime), which means *"We have the same name"*.

Here the AI says there is no translation; not only is there one, I use it frequently: Адаш https://rechnik.chitanka.info/w/%D0%B0%D0%B4%D0%B0%D1%88


Twitch Streamer/Youtuber Ludwig is currently motorcycling across Japan and has used ChatGPT to help him learn Japanese for the last ~year (he had a tutor before that). ChatGPT told him, and repeatedly reassured him, that a common phrase for a more grateful thank-you is あなたの助けに恩に着る, which a native speaker (I am not one) explained is "some Samurai shit" for being indebted to you. Whenever he uses it in his vlog series, it usually results in a confused reaction or laughs from the recipient.


That's brilliant, thank you for sharing!


I wonder.

People primarily communicate through words, so maybe not.

Of course, pictures, body language, and tone are other communication methods.

So far it looks like these models can convert pictures into words reasonably well, and the reverse is improving quickly.

Tone might be next - there are already models that can detect stress so that’s a good first start.

Body language is probably a bit farther in the future, but it might be as simple as image analysis (that's only a wild guess - I have no idea).


Most grounded and realistic take on the AI hype I've read recently.


> It’s obvious there’s a “there” there with LLMs (in a way there wasn’t with, say, Web 3.0, or “the metaverse,” or some of the other weird fads recently)

There is a "there" with those other fads too. VRChat is a successful "metaverse" and Mastodon is a successful decentralized "web3" social media network. The reason these concepts are failures is because these small grains of success are suddenly expanded in scope to include a bunch of dumb ideas while the expectations are raised to astronomical levels.

That in turn causes investors to throw stupid amounts of money at these concepts, which attracts all the grifters of the tech world. It smothers nascent new tech in the crib, as it is suddenly assigned a valuation it can never realize while the grifters soak up all the investment that could've gone to competent startups.


>Mastodon is a successful decentralized "web3" social media network.

No, that's not what "web3" means. Web3 is all about the blockchain (or you can call it "distributed ledger technology" if you want to distance it from cryptocurrency scams).

There's nothing blockchain-y about Mastodon or the ActivityPub protocol.


web3 means different things to different people, much like how AI-powered means different things to different people.

All that matters is that the moneyed class is able to choose a slightly related definition of it that generates speculative value.


The fundamental idea behind web3 was decentralization. Of course the blockchain was seen as the primary method to drive that decentralization. Mainly so people could drive up the valuation of their crypto currencies.

Cryptocurrencies themselves could also serve as an example of a grain of success for web3.


> We invented a linguist and mistook it for an engineer.

That's not entirely true, either. Because LLMs _can_ write code, sometimes even quite well. The problem isn't that they can't code, the problem is that they aren't reliable.

Something that can code well 80% of the time is as useful as something that can't code at all, because you'd need to review everything it writes to catch that 20%. And any programmer will know that reviewing code is just as hard as writing it in the first place. (Well, that's unless you just blindly trust whatever it writes. I think kids these days call that "vibe coding"....)


If that were the case, I wouldn't be using Cursor to write my code. It's definitely faster to write with Cursor, because it basically always knows what I was going to write myself anyway, so it saves me a ton of time.


>We invented a linguist and mistook it for an engineer.

People are missing the point. LLMs aren’t just fancy word parrots. They actually grasp something about how the world works. Sure, they’re still kind of stupid. Imagine a barely functional intern who somehow knows everything but can’t be trusted to file a document without accidentally launching a rocket.

Where I really disagree with the crowd is the whole “they have zero intelligence” take. Come on. These things are obviously smarter than some humans. I’m not saying they’re Einstein, but they could absolutely wipe the floor with someone who has Down syndrome in nearly every cognitive task. Memory, logic, problem-solving — you name it. And we don’t call people with developmental disorders letdowns, so why are we slapping that label on something that’s objectively outperforming them?

The issue is they got famous too quickly. Everyone wanted them to be Jarvis, but they’re more like a very weird guy on Reddit with a genius streak and a head injury. That doesn’t mean they’re useless. It just means we’re early. They’ve already cleared the low bar of human intelligence in more ways than people want to admit.


Thanks for a thoughtful post.

The fantastically intoxicating valuations of many current stocks are due to breathing the fumes of LLMs as artificial intelligence.

TFA puts it this way:

    "The real reason companies are doing this is because Wall Street wants them to. Investors have been salivating for an Apple “super cycle” — a tech upgrade so enticing that consumers will rush to get their hands on the new model. "
Now to consider your two points...

> The first ... natural language querying.

Natural-language inputs are structured: they are language. But in any case, we must not minimise the significant effort to collect [0] and label trustworthy data for training. Given untrustworthy, absurd, and/or outright ignorant and wrong training data, an LLM would spew nonsense. If we train an LLM on tribalistic fictions, Reddit codswallop, or politicians' partisan ravings, what do you think the result of any rational natural-language query would be? (Rhetorical question.)

In short, building and labelling the corpus of knowledge is the essential technical advancement. We already have been doing natural-language processing with computers for a long time.

> The second ... new media recapitulates the old.

LLMs are a new application. There are some effective uses of the new application. But there are many unsuitable applications, particularly where correctness is critical. (TFA mentions this.) There are illegal uses too.

TFA itself says,

    "What problems is it solving? Well, so far that’s not clear! Are customers demanding it? LOL, no."
I agree that finding the profit models beyond stock hyperbole is the current endeavour. Some attempts are already proven: better Web search (with a trusted corpus), image scoring/categorisation, suggesting/drafting approximate solutions to coding or writing tasks.

How to monetise these and future implementations will determine whether LLMs devour anything serviceable the way Radio ate Theatre, the way TV ate Theatre, Radio and Print Journalism, the way the Internet ate TV, Radio, the Music Industry, and Print Journalism, and the way Social Media ate social discourse.

<edit: Note that the above devourings were mostly related to funding via advertising.>

If LLMs devour and replace the Village Idiot, we will have optimised and scaled the worst of humanity.

= = =

[0] _ major legal concerns to be unresolved

[1] _ https://en.wikipedia.org/wiki/Network_(1976_film) , https://www.npr.org/2020/09/29/917747123/you-literally-cant-...


I actually believe the practical use of transformers, diffusers, etc. is already as impactful as the wide adoption of the internet. Or smartphones or cars. It's already used by hundreds of millions and has become an irreplaceable tool for enhancing work output. And it has just started. Five years from now it will dominate every single part of our lives.


Empirically, a lot of people who go to casinos also go bust.


Statistically everyone who goes to casinos for a long time loses.


All the casino employees win despite going for a long time. Even when they lose they win - my cousin repairs slot machines, and at the end of every shift he gets paid to lose the last of his test budget.


Most of those people aren't gambling with other people's money that they have a fiduciary obligation to protect.


Yes, but you can't then say that they went bust because they went to the Bellagio.


Acknowledging this is a pre-print, but it is _wild_ that we can do this kind of analysis on a species that went extinct nearly a third of a billion years before the first hominids emerged. Every now and again I'm reminded of just how far we've extended our sensorium as a species.


One of my favorite books is "Seeing Like a State"[1], which talks about how the modern world has transitioned from talking about things like distance and space in human terms ("an hour's walk", "six bushels' worth of crops") to a surveyor's terms ("3 miles", "5 acres"). In the modern world, we're very used to thinking about things as mediated by technology (maps, cameras, compasses), and that affects how we think about arts, as well - matching what a camera produces is "photorealistic", but isn't how we actually _see_ things. This is one example, another I've always found fascinating is that the large Chinese landscape paintings play a trick where the perspective shifts depending on where you're looking - the whole work does not share a single perspective point, but any given point on the scroll looks "correct" for that spot and its surroundings.

[1]https://bookshop.org/p/books/seeing-like-a-state-how-certain...


Elixir and Erlang have always garnered a lot of respect and praise - I’m always curious why they’re not more widely used (I’m no exception - despite hearing great things for literal decades, I’ve never actually picked it up to try for a project).


I've thought about this a lot, and I think that part of what hurts Erlang/Elixir adoption is the scale of the OTP. It brings a ton of fantastic tools, like supervision trees, process linking, ETS, application environments & config management, releases, and more. In some ways it's closer to adopting a new OS than a new programming language.

That's what I love about Elixir, but it means that selling it is more like convincing a developer who knows and uses CSV to switch to Postgres. There's a ton of advantages to storing data in a relational DB instead of flat files, but now you have to define a schema up front, deal with table and row locking, figure out that VACUUM thing, etc.

When you're just setting out to learn a new language, trying to understand a new OS on top hurts adoption.


I think most people tend to stick with what they learn first or hop to very similar languages. Schools generally taught Java and then more recently Python and JS, all of which are relatively similar.

Unless someone who knows those three languages is curious or encounters a particular problem that motivates them to explore, they're unlikely to pick up an immutable, functional language.


I think you’re right. I only picked up Elixir about 10 years ago after getting frustrated with Python’s GIL and Java’s cumbersomeness, and feeling that object-oriented programming overcomplicates things and never lived up to its hype.

I have never looked back.

Elixir is an absolute joy to use. It simplifies multi-threaded programming, pattern-matching makes code easier to understand and maintain, and it is magnitudes faster to code in than Java. For me, Elixir’s version of functional programming provides the ease of development that OOP promised and failed to deliver.

In my opinion, Elixir is software engineering’s best kept secret.


In fairness, that’s mostly because current plastic production externalizes the cost of everything about the lifecycle before and after manufacturing and use.


Tariff on virgin, and at least half the same for recycled material.

