
> What is the point of compiling rust to C?

To address platforms that don't support Rust. TFA mentions NonStop, whatever it is.



They are amazing machines designed for fault tolerance (99.999% reliability). The Wikipedia article below has design details for the many generations that were made. HP bought them.

https://en.m.wikipedia.org/wiki/Tandem_Computers

I think it would be useful, for open-source fault tolerance, to copy one of their designs using SiFive's RISC-V cores. They could use a 20-year-old approach to dodge patent issues. Despite its age, the design would probably be competitive with, maybe better than, FOSS clusters on modern hardware in fault tolerance.

One might also combine the architecture with one of the strong-consistency DBs, like FoundationDB or CockroachDB, with modifications to take advantage of its custom hardware. At the local site, the result would be easy scaling of a system whose nodes appear to never fail. The administrator still has to do regular maintenance, though, as the system reports the component failures it works around.


Not only does NonStop not support Rust, but apparently they failed to port gcc to it, even. So compiling Rust straight to C itself is pretty much the only option there.

While I agree on hierarchical headings, I don't think H1...H6 are the right tools. These represent absolute levels; we would need a means to express "level+1". I think an H tag with no digit would better convey the meaning "heading of the right level as defined by the surrounding SECTION tags".

But they don't have to represent absolute levels. They could just as easily have represented relative levels, with h2 in a doubly nested section describing the level that today is expressed as h4. Depending on your use case, this could either be very useful or just mess up your styles a lot.

Well I totally agree with you but we don’t have that. h1 was that, but not anymore.

It’s a shame that we basically can’t rely on default styling to structure a simple document.

There are tons of moments in my life and in my career where I wanted to "just publish" some web page with content while not really caring for aesthetics. But even if you write semantically correct HTML without styling, what you get is not as neutral and coherent as it should be; on the contrary, it’s all over the place and inconsistent.


XHTML2 had some good ideas.


> I'd rather prefer people use easy to remember flags

Like -fhardened?


Sure.

-f is technically machine independent.

-m should be used when a feature is implemented as a machine-dependent option.

So if you are telling me all these security features are developed without requiring per-machine support, then it makes sense.


The interactions between different optimization passes may have surprising consequences.

Endless loops without observable side effects are technically undefined behavior: the compiler can drop them entirely, except for their entry point (the assembly jump tag), which then collides with the next function's assembly jump tag.

All because of UB.

Huge headache. Try debugging that.

And interaction loops in games are sometimes endlessly waiting for input.
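
To make the jump-tag collision above concrete, here's a minimal C++ sketch (the function names and the assumption of an aggressive optimizer such as clang++ -O2 are mine, not from the comment): the forward-progress rules make a side-effect-free infinite loop undefined behavior, so the compiler may erase the loop, and whatever function the linker laid out next can end up executing instead.

  // A minimal sketch of the scenario above. Assumptions (mine, not the
  // commenter's): an optimizer that exploits forward progress, e.g.
  // clang++ -O2, and made-up function names.
  #include <cstdio>

  void spin() {
      // No side effects and no exit: undefined behavior in C++, so the
      // optimizer may delete the loop, leaving an "empty" function that
      // simply falls off its end.
      while (true) { }
  }

  void next_in_layout() {
      // If the linker places this function right after spin(), the
      // fall-through lands here: the colliding jump tag described above.
      std::puts("supposedly unreachable");
  }

  int main() {
      spin();            // never returns in a well-defined program
      next_in_layout();  // kept so the symbol isn't discarded
  }

Whether you actually see the fall-through depends on the compiler version and code layout, which is exactly what makes it such a headache to debug.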


Sorry, I'm not sure I understand your point. Are you complaining because it has a leading -f instead of a different spelling?

I've read the article and the comments with interest. I just have a question: if the Windows ABI is so stable that 20-year-old programs are guaranteed to run, why are there computers with Win95 or NT that nobody dares touch lest some specific piece of software stop working? I see plenty of these in industrial environments, but also in public libraries, corporate databases, etc.


In practice most of those machines are an environment in and of themselves. It’s not that they can’t be upgraded, it’s that they likely couldn’t even be rebuilt if they had a hardware failure. The risk they’re taking is that the system is more likely to break due to being touched than it is to suffer a hardware failure. Which, as most of us can attest, is true until it’s not.

Relatedly, at a previous job we ran an absolutely ancient piece of software that was critical to our dev workflow. The machine had an issue of some sort, so someone imaged the hard drive, booted it as a VM and we resumed business as usual. Last I heard it was still running untouched, and unmaintained.


> if Windows ABI is so stable that 20-year-old programs are guaranteed to run

That's not actually true; there are no guarantees. Microsoft makes a best effort to ensure the majority of applications continue to work. But there are billions of applications; they're not all going to work. Many applications don't even adhere to the Win32 API properly. Microsoft will sometimes, if the app is important enough, ensure even misbehaving applications work properly.


I know in my use case all these ancient machines are necessary for interacting with some ancient hardware, not a case where Wine is particularly useful.


This is usually about drivers, not applications. The Windows driver model didn’t maintain long-term compatibility.


Why touch it? These are usually not directly connected to the internet; some are possibly virtualized. "Updating" to use Wine on Linux is a ton of work on its own; you will run into unforeseeable issues. Nobody wants to pay for that, and nobody wants to be responsible for the problems when the net benefit is zero. But a real update/replacement of all these systems is too expensive, hence the status quo.


Because they just work. Nobody cares if their MRI machine runs Win2000, they care if the machine reveals brain cancer.


They care to a certain degree, and that degree is the size of the carefully-tuned payment that Trend Micro extract for the firewall product that lets the Windows 2000 MRI machine safely co-exist on the network with the hospital DC.


Very well explained, thank you.

> Let Linux be Linux and Windows be Windows. They're both great if you appreciate them for what they are and use them accordingly.

What if you technically prefer the Windows way, but are worried about Microsoft's behavior related to commercial strategy, lock-down, privacy...?

The author envisions a system that's technically as stable as Windows, yet as free as Linux.


Microsoft has always been end-user-hostile. You hack around it :)

Reverse-engineer its undesirable behavior, mitigate it. The real stuff that scares me is hardware-based (secure-enclave computing, for example) and the legal measures it is taking to prevent us from hacking it.

ReactOS exists, as does Wine. Linux is a purely monolithic kernel, unlike NT, which is a hybrid that has the concept of subsystems built into it. Linux would have to gain the concept of subsystems and an NT-interop layer (probably based on Wine); I fail to see the advantage over Wine.

In the end, where is the demand coming from, I ask? Not from Linux devs, in my opinion. I suppose a Wine-focused distro might please folks like you, but Wine itself has lots of bugs and errors even after all these years. I doubt it is even keeping up with all the Windows 11 changes. What the author proposes is, in my opinion, not practical, at least not if you are expecting an experience better than ReactOS or Wine. If it is just a Win32/winapi interop layer, it might be possible, but devs would need to demand it; otherwise who will use it?

Linux users are the most "set in their ways", in my experience; try convincing any Linux dev to stop using gtk/qt and write apps for "this new Windows-like API interface to create graphical apps".

But ultimately, there is no harm in trying other than wasted time and resources. I too would like to see an ecosystem that learns from and imitates Windows in many ways (especially security measures).


FreeBSD?


> I kinda just don't get wireless CarPlay/Android Auto at all. If I'm going to connect my phone to my car wirelessly for that, it's gonna drain the battery. So I'm going to plug it in so it can charge. So... now it's wired, so why do I need wireless?

For short trips. Like the two that many of us do every single working day.


"Short" is perhaps relative. I know many people with hour+ commutes; they'll be wanting to plug in, presumably.

I guess I'm also just a low-key battery-life stresser. If I have the opportunity to plug in outside the home, with a charging cable readily in front of me, I'm gonna do it... just in case.

I dunno, I still don't get it. Wireless anything is always going to be significantly less reliable than wired, and I've heard enough stories of wireless CarPlay/AA flaking out (with dongles and built-in setups) to turn me off on it.

Wireless is incredibly convenient when you don't have a wire and a port nearby, but that will essentially never be the case while you're in the car.


I think the parent poster was not arguing that allowing this combination of accesses is invalid, just that it can't be called read-ONLY if it's not ONLY read.

"Any color the customer wants, as long as it's black"


We should have two distinct, independent LLMs: one generates the code, the other one the tests.


Do you also hire two different kinds of programmers - one that never wrote a test in their life and is not allowed to write anything other than production code, and a second that never ever wrote anything other than tests and is only ever allowed to write tests?

It makes no sense to have "two distinct, independent LLMs" - two general-purpose tools - to do what is the same task. It might make sense to have two separate prompts.


A perfect use case for GANs (generative adversarial networks, consisting of (at least) a generator and a discriminator/judge), isn't it? (iiuc)


Suppose that, starting next March 20th, the government adopts a standard time and school schedule that are perfectly synchronized with local astronomical time, i.e. kids wake up at dawn, go to bed when it's dark, etc. On March 21st the sun will rise a little earlier and set a little later, so the once-perfect synchronization will be a bit off, and more so the following day, and so on until the summer solstice. Then the trend will invert and the skew between the two times will slowly decrease until reaching zero on September 22nd. But the very next day the synchronization will drift off again, this time in the opposite direction, until March 2026, and so on forever.

Just like a broken clock that shows the right time twice per day, the ideal ruling suggested in the article will result in two perfect days every year.

At least the current scheme theoretically allows for 4 perfect days per year! Note that I'm not saying that's what really happens, though. It depends on the school schedule and the dates of the switch. About the latter, I do think the switch should happen as close as possible to the equinox, so towards the end of March, like in Europe.

