Hacker News | LarsDu88's comments

There's an obvious difference between the two in that Blue Origin is the gateway to multibillion dollar prospective markets that currently have virtually no incumbents (other than one very big obvious one). Whereas the WP does not have any prospective future growth trajectory whatsoever because it's competing with the endless turd spigot that is social media.

The WP is his propaganda tool, contributing to maintaining this billionaire-friendly environment. Trump gave the bourgeoisie trillions in tax cuts last year, and Bezos is a major recipient of this present himself. It's hard to quantify, but these captured media outlets together are much more valuable to oligarchs than any of their other ventures, certainly more than their space toys. Hence Ellison would spend $100B of his personal wealth to add CNN to his catalogue, and Musk spent so much on X without seeming to care too much about making it profitable.

Yep.

One of the big lessons of the last decade is that media can have billionaires as their primary market. The Free Press got huge because of infusions of cash from the rich. Media that flatters the opinions of billionaires and projects their propaganda into the world can be enormously valuable even if it isn't making traditional cash. It is a return to a patronage model.

Garry Tan has even said this expressly: that the rich should simply own their own parallel media so they can project their will against the will of the people.


Here's a controversial opinion -- it's actually always been this way.

Hearst used his newspapers to manipulate the American public into war against the Spanish Empire.

Government lies (babies in incubators, yellow cake...) were used to push two Iraq wars on the American public by the media.

The abnormal thing is that we had maybe 10-15 years where the press put up at least a pretense of acting impartial as power shifted from pineapple and arms companies to tech monopolies.


I love how in an article about making python faster, the fastest option is to simply write Rust, lol

That has been a thing forever: many "Python" libraries are actually bindings to C, C++, and Fortran.

The culture of calling them "Python" is one reason why JITs have had such a hard time gaining adoption in Python. The problem isn't the dynamism (see Smalltalk, SELF, Ruby, ...), but rather the culture of rewriting code in C, C++, and Fortran and still calling it Python.
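To make the point concrete, here is a minimal sketch of how a plain-looking Python function can delegate all its work to C. This uses the stdlib `ctypes` module to call `sqrt` from the system math library (`libm`); the lookup via `find_library` assumes a Unix-like system and is just an illustration, not how real libraries like NumPy package their bindings.

```python
# Sketch: a "Python" function whose work actually happens in C.
# Assumes a Unix-like system where libm can be located.
import ctypes
import ctypes.util
import math

libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

def fast_sqrt(x: float) -> float:
    """Looks like Python to the caller, but executes compiled C code."""
    return libm.sqrt(x)

print(fast_sqrt(2.0))  # matches math.sqrt(2.0)
```

Real extension modules (NumPy, SciPy, PyTorch) do this at a much larger scale with the C API rather than `ctypes`, but the principle is the same: the hot loop never runs in the interpreter.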


There's no surprise that Rust is faster to run, but I don't think there are many who would claim that Rust is faster to write.

Maybe with LLM/code assistance this effort is reduced? Since we're mostly talking mathematics here, you have well-defined algorithms that don't need to be "vibed". The codegen, hopefully, is consistent.

I actually started this comment thread, and I just "wrote" two things using Opus 4.6 in Rust:

https://panel-panic.com

https://larsdu.github.io/Dippy6502/

The issue is... I barely know the language. There are vast gaps in my knowledge (e.g. lifetimes, lifetime elision, dynamic dispatch, Box, Ref, Arc, all that).

Nor do I know much of anything about the 6502 microprocessor. I know even less from having the LLM one shot most of the project rather than grinding through it myself.

So in the age of AI, the question of how easy it is to write with a language may be less important than the question of how we should go about reading code.

Quite honestly, I don't really know why I wouldn't use Rust for a wide variety of projects with AI, other than the compile/iteration time for larger projects. The compiler and static typing provide tremendously useful feedback and guardrails for LLM-based programming. The use of structs and traits rather than OOP prevents a lot of long-term architectural disasters by design. Deployment on the web can be done with WASM. And performance is good out of the box.

Writing Rust by hand though? I'm still terrible at it.


Go and Java/C# (if you forgo all the OOP nonsense) aren't much harder to write than Python, and you get far better performance. Not all the way to Rust level, but close enough for most things with far less complexity.

As an AI engineer I kinda wish the community had landed on Go or something in the early days. C# would also be great, although it tends to be pretty verbose.

Python just has too-strong network effects. In the early days it was between Python and Lua (anyone remember torchlua?). Go was very much still gaining traction and in development.

There's also the strong association of Go with Google, C# with Microsoft, and Java with Oracle...


Why Go? That language has almost no expressive power, while in AI you surely at least want to add two vectors/matrices together with the + operator.
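For what it's worth, the operator-overloading point can be sketched in a few lines of Python. The `Vec` class here is a toy, not a real library type; the idea is just that `__add__` lets a user-defined type participate in `+` (as `impl Add` does in Rust), while Go offers no equivalent hook.

```python
# Toy vector type demonstrating operator overloading via __add__.
class Vec:
    def __init__(self, *xs):
        self.xs = list(xs)

    def __add__(self, other):
        # Elementwise addition; this is what makes `a + b` legal below.
        return Vec(*(a + b for a, b in zip(self.xs, other.xs)))

    def __repr__(self):
        return f"Vec{tuple(self.xs)}"

a = Vec(1, 2, 3)
b = Vec(4, 5, 6)
print(a + b)  # Vec(5, 7, 9)
```

In Go the same operation has to be spelled as a method or function call like `a.Add(b)`, which is exactly the loss of expressiveness being complained about here.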

Yeah that bothers me too, but it's damn hard to get away from these days. Most language projects have significant corporate involvement one way or the other.

Go is criminally underrated in my opinion. It ticks so many boxes I'm surprised it hasn't seen more adoption.


It ticks many boxes for me on the surface, but I've read a few articles that critique some of its design choices.

Rust really ticks the "it got all the design choices right" boxes, but fighting the borrow checker and understanding smart pointers, lifetimes, and dispatch can be a serious cognitive handicap for me.


No languages are perfect, they all make tradeoffs. I just like a lot of the ones Go made.

Go and Rust try to solve very different problems. Rust takes on a lot of complexity to provide memory safety without a garbage collector, which is fine, but also unnecessary for a lot of problems.


A recent study reveals that aging doesn't happen linearly or gradually, but actually undergoes some major jumps around age 44 and 60.

I was just thinking of something along the same lines, but for something much sillier. Use self-supervised JEPA to make a color visualizer for audio -- synesthesia. Was wondering if you guys have explored leJEPA and SIGReg?

The stuff I built with Opus 4.6 in the past 2.5 weeks:

Full clone of Panel de Pon/Tetris attack with full P2P rollback online multiplayer: https://panel-panic.com

An emulator of the MOS 6502 CPU with visual display of the voltage going into the DIP package of the physical CPU: https://larsdu.github.io/Dippy6502/

I'm impressed as fuck, but a part of me deep down knows that I know fuck all about the 6502 or its assembly language and architecture, and now I'll probably never be motivated to do this project in a way that would've taught me all the things I wanted to learn.


That game is AWESOME! The fact that was vibe coded is insane.

Honestly that game wasn't one-shotted. I had longtime PdP enthusiasts play it and give feedback.

I've been thinking hard about this paradigm shift while thinking about ideas for things to "vibecode"

I started by trying to think about ways of running a vending machine company autonomously using a finite state machine + agents. It turns out most of "automating" a vending machine company doesn't need LLM agents at all, and simply buying machines with reliable telemetry + a database + automated inventory could get you much further than replacing every or even some components with an LLM. The LLM could replace the person on the phone texting the laborers who refill and service the machines, perhaps autonomously order refills (but hey so can a cronjob).
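The "database + cronjob beats an agent" point can be made concrete with a small sketch. Everything here (the `Slot` type, the `REORDER_AT` threshold) is illustrative, not a real vending-machine API: given reliable telemetry, the reorder decision is a pure function a scheduled job can run, with no LLM in the loop.

```python
# Sketch: deterministic restock logic driven purely by telemetry data.
from dataclasses import dataclass

REORDER_AT = 5  # restock when stock falls to this level (assumed policy)

@dataclass
class Slot:
    sku: str
    stock: int      # current units, as reported by machine telemetry
    capacity: int   # maximum units the slot holds

def restock_orders(slots):
    """Pure function a cron job could run hourly: sku -> units to order."""
    return {s.sku: s.capacity - s.stock for s in slots if s.stock <= REORDER_AT}

machine = [Slot("cola", 2, 20), Slot("chips", 15, 20), Slot("gum", 5, 30)]
print(restock_orders(machine))  # {'cola': 18, 'gum': 25}
```

The LLM's plausible niche is the fuzzy edges around this loop (texting a technician, handling an odd supplier email), not the decision logic itself.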

The troubling thought I had is that AI does not displace the technicians, or the vending machines. It replaces the manager. The human manager is the component that is unnecessary. The entire global economy can eventually reflect this reality, where most of the wealth is technically owned by humans but the majority of financial transactions and decision making is done by machines (at a level not yet seen).

Macroeconomic metrics will go up along with wealth and standard of living, but for actual flesh and blood humans, much of this will be irrelevant.


> The troubling thought I had is that AI does not displace the technicians, or the vending machines. It replaces the manager.

This is really why AI will have a more profound impact on society: it is fundamentally changing the hierarchy of competence we have grown so accustomed to.


Why is it that I've seen the exact opposite? It brutally reinforces that hierarchy. It's no longer the ability to do a task that is valuable; it's the ability to understand what tasks need to be done.

Yes. So only 2% of the population (down from 90%) is needed in farming now to produce for the other 98%.

That is fine because there are other parts of the value chain these 98% people fit into.

With the development of AI I don't see new areas to graduate into.

So you are right: there will be people left. But it is not clear what the masses can upskill themselves to do.


Maybe there are a lot of bad managers (almost certainly there are), but I feel like a lot of the talk about what a manager does doesn't address the true role of a manager. The whole point of a manager is to address uncertainty and look to the future. The manager shouldn't have any "tasks" per se, but in the vending machine example, they're the one that keeps an eye on their suppliers, negotiates, changes suppliers if one fails, and decides how much inventory to store in a warehouse.

But like, I as a manager try to delegate the coordination role, yes. Unlike an IC, loosely speaking, the more "tasks" I'm doing as a manager, the more I consider myself to be failing at the job.


Fun fact, he almost got the worldwide console rights to Tetris back in the 80s, and tried going to Soviet officials to get those rights. To the point he's the antagonist of a recent "Tetris" movie that came out.

This is a fun fact, thank you.

That's such a terrible take.

For a hot minute Meta had a top-3 LLM and open sourced the whole thing, even with LeCun's reservations about the technology.

At the same time Meta spat out huge breakthroughs in:

- 3d model generation

- Self-supervised label-free training (DINO). Remember Alexandr Wang built a multibillion dollar company just around having people in third world countries label data, so this is a huge breakthrough.

- A whole new class of world modeling techniques (JEPAs)

- SAM (Segment anything)


> - Self-supervised label-free training (DINO). Remember Alexandr Wang built a multibillion dollar company just around having people in third world countries label data, so this is a huge breakthrough.

If it was a breakthrough, why did Meta acquire Wang and his company? I'm genuinely curious.


People make stupid acquisitions all of the time.

Wang fits the profile of a possible successor CEO for Meta. Young, hit it big early, hit the AI boom early straight out of college. Obviously not woke (just look at his public statements).

Unfortunately the dude knows very little about AI or ML research. He's just another wealthy grifter.

At this point decision making at Meta is based on Zuckerberg's vibes, and I suspect the emperor has no clothes.


I really hate the world-model terminology, but LeCun's actual low-level gripe with autoregressive LLMs as they stand now is that the loss function needs to reconstruct the entirety of the input. Anything less than pixel-perfect reconstruction on images is penalized. Token-by-token reconstruction is also biased towards that same level of granularity.

The density of information in the spatiotemporal world is very high, and a technique is needed to compress it down effectively. JEPAs are a promising technique in that direction, but if you're not reconstructing text or images, it's a bit harder for humans to immediately grok whether the model is learning something effectively.

I think that very soon we will see JEPA-based language models, but their key domain may very well be robotics, where machines really need to experience and reason about the physical world differently than a purely text-based world.


Isn't the Sora video model a ViT with spatiotemporal inputs (so they've found a way to compress that down), but at the same time LeCun wouldn't consider that a world model?

Video-generation models have to have decoder output heads that reproduce pixel-level frames. The loss function involves producing plausible image frames, which requires a lot of detailed reconstruction.

I assume that when you get out of bed in the morning, you don't start by painting 1,000 1080p pictures of what your breakfast looks like.

LeCun's models predict purely in representation space and output no pixel-level frames. Instead you train a model to generate a lower-dimensional representation of the same thing from different views, penalizing the model if the representations differ when it's looking at the same thing.
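The contrast between the two loss functions can be sketched in plain Python with toy sizes. This is not a real training setup (real models use learned encoders, not random vectors, and JEPA training needs anti-collapse machinery); it only illustrates how many terms each loss touches.

```python
# Sketch: pixel-reconstruction loss vs. representation-space loss.
import random

def mse(pred, target):
    """Mean squared error between two equal-length sequences."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

random.seed(0)

# Generative head: must reconstruct an entire frame (here a tiny 64x64 RGB one).
frame_true = [random.random() for _ in range(64 * 64 * 3)]
frame_pred = [random.random() for _ in range(64 * 64 * 3)]
pixel_loss = mse(frame_pred, frame_true)   # 12,288 terms, every pixel penalized

# JEPA-style head: predict the *representation* of the other view instead.
z_target = [random.random() for _ in range(128)]
z_pred = [random.random() for _ in range(128)]
latent_loss = mse(z_pred, z_target)        # 128 terms, no pixels involved

print(pixel_loss, latent_loss)
```

The point is structural: the generative loss forces the model to spend capacity on pixel-level detail, while the latent loss only cares whether two views of the same thing land near each other in embedding space.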


There have been a few very interesting JEPA publications from LeCun recently, particularly the leJEPA paper, which claims to simplify a lot of training headaches for that class of models.

JEPAs also strike me as being a bit more akin to human intelligence, where for example, most children are very capable of locomotion and making basic drawings, but unable to make pixel level reconstructions of mental images (!!).

One thing I want to point out is that LeCun-type techniques demonstrating label-free training, such as JEAs like DINO and JEPAs, have been converging on the performance of models that require large amounts of labeled data.

Alexandr Wang is a billionaire who made his wealth through a data labeling company, and he basically kicked LeCun out.

Overall this will be good for AI and good for open source.

