Hacker News | robotresearcher's comments

> Probably the most incredible virtue of the Zig compiler is its ability to compile C code. This, combined with the ability to cross-compile code to run on another architecture, different from the machine where it was originally compiled, is already something quite different and unique.

Isn't cross compilation very, very ordinary? Inline C is cool, like C has inline ASM (for the target arch). But cross-compiling? If you built a phone app on your computer, you did that as a matter of course, and there are many other common use cases.


> Isn't cross compilation very, very ordinary?

Working any-to-any cross compilation out of the box still isn't.


Yes, very rare, and there is a strong cartel of companies ensuring it doesn't happen in more mainstream langs, working through multiple avenues to protect their interests!

From helicoptering folks onto steering committees to indoctrinating young CS majors.


Specifically The Eagle in Cambridge. Close to King's College, and a cosy and storied pub it is. The back bar has photos and soot-signatures of air crews from all over the world, a tradition that started during WWII.

Yes, I am also super interested in cutting the size of models.

However, in a few years today’s large models will run locally anyhow.

My home computer had 16KB RAM in 1983. My $20K research workstation had 192MB of RAM in 1995. Now my $2K laptop has 32GB.

There is still such incredible pressure on hardware development that you can be confident that today’s SOTA models will be running at home before too long, even without ML architecture breakthroughs. Hopefully we will get both.

Edit: the 90’s were exciting for compute per dollar improvements. That expensive Sun SPARC workstation I started my PhD with was obsolete three years later, crushed by a much faster $1K Intel Linux beige box. Linux installed from floppies…


> My home computer had 16KB RAM in 1983. My $20K research workstation had 192MB of RAM in 1995. Now my $2K laptop has 32GB.

You’ve picked the wrong end of the curve there. Moore’s law was alive and kicking in the 90s. Every 1-3 years brought an order of magnitude better CPU and memory. Then we hit a wall. Measuring from the 2000s is more accurate.

My desktop had 4GB of RAM in 2005. In 20 years it’s gone up by a factor of 8, but only by a factor of 2 in the past 10 years.

I can kind of uncomfortably run a 24B parameter model on my MacBook Pro. That’s something like 50-200X smaller (depending on quantization) than a 1T parameter model.

We’re a _long_ way from having enough RAM (let alone RAM in the GPU) for this size of model. If the 8x / 20 years holds, we’re talking 40-60 years. If 2X / 10 years holds, we’re talking considerably longer. If the curve continues to flatten, it’s even longer.
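
A rough sanity check of that extrapolation, taking the 50-200X memory gap and the two growth rates above at face value (a back-of-the-envelope sketch, not a forecast):

    import math

    # The comment above puts a 1T-parameter model at roughly 50-200x the memory
    # of what fits on a laptop today, depending on quantization.
    factors_needed = [50, 200]

    # Growth rates quoted above: 8x per 20 years, or 2x per 10 years.
    rates = {"8x / 20 years": 8 ** (1 / 20), "2x / 10 years": 2 ** (1 / 10)}

    for label, per_year in rates.items():
        for f in factors_needed:
            years = math.log(f) / math.log(per_year)
            print(f"{label}: {f}x more RAM takes ~{years:.0f} years")
    # 8x / 20 years -> roughly 38-51 years; 2x / 10 years -> roughly 56-76 years.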

Not to dampen anyone’s enthusiasm, but let’s be realistic about hardware improvements in the 2010s and 2020s. Smaller models will remain interesting for a very long time.


Moore’s Law is about transistor density, not RAM in workstations. But yes, density is not doubling every two years any more.

RAM growth slowed in laptops and workstations because we hit diminishing returns for normal-people applications. If local LLM applications are in demand, RAM will grow again.

RAM doubled in Apple base models last year.


Good grief, NBC runs such shitty junk ads on their front page. What a blight on a once-great brand.

Carnegie Libraries, Nobel Prizes, Rhodes scholarships?

Clay Mathematics Institute, with its 7 Millennium problems

Good one!

Stanford & Carnegie Mellon universities...


IIRC, C++ started out this way, or at least its precursor ‘C with classes’. A compiler came later.

Won’t usually be monospaced type.

The iPad lidar has a range of a handful of meters indoors and is not safety critical.

Higher specs can make all the difference. A model rocket engine vs the Space Shuttle main engine, for an extreme example. Or a pistol round vs an anti-armor tank round. The cost of the former says nothing at all about the cost of the latter.


OK, how about this? Volvo EX90, a consumer SUV on sale now in the UK. Fitted with Lidar.

https://www.volvocars.com/uk/support/car/ex90/article/47d2c9...


They are getting there. But that link has big caveats. Not sure how cool it is to endanger other people’s cameras.

From your linked page:

> Important: Use responsibly

> The lidar and features that can rely on it are supplements to safe driving practices. They do not reduce or replace the need for the driver to stay attentive and focused on driving safely.

> Safe for the eyes

> The lidar is not harmful to the eyes.

> Lidar light waves can damage external cameras

> Do not point a camera directly at the lidar. The lidar, being a laser based system, uses infrared light waves that may cause damage to certain camera devices. This can include smartphones or phones equipped with a camera.


How many cameras don't use IR filters? At one time, at least, they were quite common.

I suppose one example might be fixed security cameras with IR-based night vision capability.


And that Lidar is not assisting drivers, and previously sold US EX90s now need a computer upgrade to get it to work.

Lidar will continue to get cheaper, but it has fundamental features that limit how cheap it can get, features that passive vision doesn't have.

You’re sending your own illumination energy into the environment. This has to be large enough that you can detect the small fraction of it that is reflected back at your sensor, while not being hazardous to anything it hits, notably eyeballs, but also other lidar sensors and cameras around you. To see far down the road, you have to put out quite a lot of energy.
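
To put rough numbers on that, here is a toy version of the energy argument, using a simplified lidar range equation for an extended diffuse target (the reflectivity and receiver aperture values are made-up placeholders, and atmospheric and detector losses are ignored):

    import math

    # Simplified lidar range equation for an extended Lambertian (diffuse) target:
    #   P_rx ~= P_tx * reflectivity * A_receiver / (pi * R^2)
    def received_power(p_tx_watts, range_m, reflectivity=0.1, aperture_m2=1e-3):
        return p_tx_watts * reflectivity * aperture_m2 / (math.pi * range_m ** 2)

    for r in (10, 50, 200):
        print(f"{r:4d} m: {received_power(1.0, r):.2e} W returned per W transmitted")
    # The return falls off as 1/R^2, so going from 10 m to 200 m costs a factor
    # of 400 in signal; seeing far down the road pushes transmit power toward
    # the eye-safety limits mentioned above.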

Also, lidar data is not magic: it has its own issues and techniques to master. Since you need vision as well, you have at least two long range sensor technologies to get your head around. Plus the very real issue of how to handle their apparent disagreements.

The evidence from human drivers is that you don’t absolutely need an active illumination sensor to be as good as a human.

The decision to skip LiDAR is based on managing complexity as well as cost, both of which could reduce risk in getting to market.

That’s the argument. I don’t know who is right. Waymo has fielded taxis, while Tesla is driving more but easier autonomous miles.

The acid test: I don’t use the partial autonomy in my Tesla today.


Does the "sensor fusion" argument that Tesla made against LiDAR make as much sense now that everyone is basically just plugging all the sensor data into a large NN model?

It's still a problem conceptually, but in practice, now that it's end-to-end ML, plug'n'pray, I guess it's an empirical question. Which gives one the willies a bit.
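
For what it's worth, "plugging all the sensor data into a large NN" in the end-to-end setup mostly just means concatenating the modalities somewhere in the network and letting training decide how to weigh them. A toy sketch, with invented feature sizes and a tiny random network standing in for the real model:

    import numpy as np

    rng = np.random.default_rng(0)

    # Pretend per-frame feature vectors from each modality (sizes are invented).
    camera_feat = rng.standard_normal(256)
    lidar_feat = rng.standard_normal(64)

    # "Fusion" is just concatenation; a tiny MLP stands in for the big model.
    x = np.concatenate([camera_feat, lidar_feat])   # (320,)
    W1 = 0.05 * rng.standard_normal((128, x.size))
    W2 = 0.05 * rng.standard_normal((3, 128))       # e.g. steer / throttle / brake
    h = np.maximum(W1 @ x, 0.0)                     # ReLU
    controls = W2 @ h
    print(controls)
    # Nothing resolves camera/lidar disagreements explicitly; the network just
    # learns from data how much weight to give each modality, which is exactly
    # why it ends up being an empirical question.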

It'll always be a challenge to get ground truth training data from the real world, since you can't know for sure what was really out there causing the disagreeing sensor readings. Synthetic data addresses this, but requires good error models for both modalities.

On the latter, an interesting approach that has been explored a little is to SOAK your synthetic sensor training data in noise so that the details you get wrong in your sensor model are washed out by the grunge you impose, and only the deep regularities shine through. Avoids overfitting to the sim. This is Jakobi's 'Radical Envelope of Noise Hypothesis' [1], a lovely idea since it means you might be able to write a cheap and cheerful sim that does better than a 'good' one. Always enjoyed that.

[1] https://www.sussex.ac.uk/informatics/cogslib/reports/csrp/cs...
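
A minimal sketch of that soak-it-in-noise idea, in the spirit of Jakobi's approach rather than his actual experimental setup (the sensor model, noise levels, and dropout rate here are all invented):

    import numpy as np

    rng = np.random.default_rng(0)

    def ideal_range_sensor(true_distances):
        # A deliberately crude simulated sensor: returns ground-truth ranges.
        return np.asarray(true_distances, dtype=float)

    def noisy_sample(true_distances, noise_scale=0.3, dropout_prob=0.1):
        # Soak the cheap sim in aggressive noise so a learner can't overfit to
        # its (wrong) details: per-reading gain and offset errors plus dropouts.
        r = ideal_range_sensor(true_distances)
        gain = 1.0 + noise_scale * rng.standard_normal(r.shape)
        offset = noise_scale * rng.standard_normal(r.shape)
        r = gain * r + offset
        r[rng.random(r.shape) < dropout_prob] = 0.0   # dropped returns
        return r

    # Each training episode sees a differently corrupted version of the sim, so
    # only regularities that survive the noise are there to be learned.
    print(noisy_sample([1.0, 2.5, 4.0]))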


> now that it's end-to-end ML, plug'n'pray, I guess it's an empirical question

Aren't human drivers the same empirical question?

That paper is really interesting, thanks!


Yep. The most popular sim that's well integrated with ROS is Gazebo, a full 3D sim. Very powerful. There's also the much simpler Stage, limited to 2.5D mobile robots.
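
For anyone curious what that integration looks like in practice, here is a minimal ROS 2 (rclpy) node that would drive a Gazebo-simulated robot by publishing velocity commands. It assumes a ROS 2 install and a robot already spawned in the sim; the /cmd_vel topic name is the usual convention but depends on the robot model loaded:

    import rclpy
    from rclpy.node import Node
    from geometry_msgs.msg import Twist

    class SimDriver(Node):
        def __init__(self):
            super().__init__('sim_driver')
            # Conventional velocity-command topic for mobile robots in Gazebo.
            self.pub = self.create_publisher(Twist, 'cmd_vel', 10)
            self.timer = self.create_timer(0.1, self.tick)

        def tick(self):
            msg = Twist()
            msg.linear.x = 0.2   # m/s forward
            msg.angular.z = 0.5  # rad/s turn
            self.pub.publish(msg)

    def main():
        rclpy.init()
        node = SimDriver()
        try:
            rclpy.spin(node)
        finally:
            node.destroy_node()
            rclpy.shutdown()

    if __name__ == '__main__':
        main()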
