
> RadiantOS treats your computer as an extension of your mind. It’s designed to capture your knowledge, habits, and workflows at the system layer. Data is interlinked like a personal wiki, not scattered across folders.

This sounded really interesting... till I read this:

> It’s an AI-native operating system. Artificial neural networks are built in and run locally. The OS understands what applications can do, what they expose, and how they fit together. It can integrate features automatically, without extra code. AI is used to extend your ability, help you understand the system and be your creative aid.

(From https://radiant.computer/system/os/)

That's... kind of a weird thing to have? Other than that, it actually looks nice.

Most of the text on the site seems LLM written as well. Given that the scope of the project involves making their own programming language, OS, and computing hardware, and that they don't seem to have made much tangible progress towards these goals, I don't understand why they decided to spend time making a fancy project site before they have anything to show. It makes me doubt that this will end up going anywhere.

They've written an R' compiler in C, and ported its lexer and parser to be self-hosted, with source code for those included in blog posts.
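For anyone unfamiliar with that bootstrapping pattern: you keep a small stage-0 compiler written in an existing language, then rewrite pieces of the compiler in the new language and build them with stage 0 until the whole thing can compile itself. Very roughly, and purely as an illustration (this is a made-up C fragment, not their actual code):

  #include <ctype.h>
  #include <stdio.h>

  /* Hypothetical stage-0 token kinds; the real R' compiler surely differs. */
  enum kind { T_IDENT, T_NUMBER, T_SYMBOL, T_EOF };

  struct token { enum kind kind; char text[64]; };

  /* Pull the next token out of *src, advancing the cursor. */
  static struct token next_token(const char **src) {
      struct token t = { T_EOF, "" };
      const char *p = *src;
      while (isspace((unsigned char)*p)) p++;          /* skip whitespace    */
      if (*p == '\0') { *src = p; return t; }
      size_t n = 0;
      if (isalpha((unsigned char)*p)) {                /* identifier/keyword */
          t.kind = T_IDENT;
          while (isalnum((unsigned char)*p) && n < 63) t.text[n++] = *p++;
      } else if (isdigit((unsigned char)*p)) {         /* integer literal    */
          t.kind = T_NUMBER;
          while (isdigit((unsigned char)*p) && n < 63) t.text[n++] = *p++;
      } else {                                         /* single-char symbol */
          t.kind = T_SYMBOL;
          t.text[n++] = *p++;
      }
      t.text[n] = '\0';
      *src = p;
      return t;
  }

  int main(void) {
      const char *src = "fn main() { return 42; }";
      for (struct token t = next_token(&src); t.kind != T_EOF; t = next_token(&src))
          printf("%d %s\n", t.kind, t.text);
      /* "Self-hosting" this piece = rewriting next_token() in R' and compiling
         it with the stage-0 C build, then doing the same for the parser. */
      return 0;
  }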

I'm not a fan of all the LLM and image generator usage either, though.


> It's a computer designed to help you learn, create, play, and explore. It's a space to focus, free from distractions. A return to the simple joy of computing: just you and your ideas.

One of the bigger tells is this tendency to triple up on an idea in three separate sentences, each with a slightly different rhythm but with nearly identical meaning.

It’s just not the way I’d imagine a native English-speaking hacker would talk about a project like this, whose audience is other nerds.

I don’t think the author is grifting or vibe coding; I just imagine they’re not much of a writer and figured they could cut a quick corner and work on the project itself. Writing good product copy is actually really difficult, IMO.


MVP proof of market?

>Most of the text on the site seems LLM written as well.

I was thinking the same thing. Out of curiosity I pasted it into one of those detection sites and it said 0% AI written, but the tone of vague transcendence certainly raised my eyebrow.


I actually don't mind it, necessarily. I wonder if the medium-far future of software is a ground-level AI OS that spins up special-purpose applications on the fly in real time.

What clashes for me is that I don't see how that has anything to do with the mission statement about getting away from social media and legacy hardware support. In fact it seems kind of diametrically opposed, suggesting intentionally hand-crafted, opinionated architecture and software principles. Nothing about the statement would have led me to believe that AI is the culmination of the idea.

And again, the statement itself I am fine with! In fact I am against the culture of reflexive backlash to vision statements and new ventures. But I did not take the upshot of this particular statement to be that AI was the culmination of the vision.


Same. I was super excited until I saw the AI stuff you pointed out. I'll have to read more about that. I like the idea of a new OS that isn't just a Linux clone with the same old networking stack, one that takes computing in a different direction. I don't have a lot of need for AI outside of some occasional LLM use. I'd like to hear more from the authors on this.

I also understand that the old BBS way of communicating isn't perfect, but when you look into how web browsers work, they seem like straight-up insanity. Surely we can come up with something different now, something that takes the lessons learned over the past few decades and combines them with more modern hardware. I don't pretend to know what that would look like, but the idea of being able to fully understand the overall software stack (at least conceptually) is pretty tempting.


> Radiance compiler targeting RISC-V ISA. Involves writing an R' compiler in C and then porting it to R'.

R is a language for statistics and data analysis; I can't understand why they'd choose it for low-level systems programming when there are modern alternatives like Go or Rust. Maybe it has to do with the AI integration.

It seems interesting enough to follow, but I'm uncertain about its actual direction.

Edit: Thanks to people in this thread for pointing out that it's not R, but R'. The language they're creating is called Radiance, so it may be that R' is a subset of it.

> Radiance is a small statically-typed systems programming language designed for the Radiant platform, targeting the RISC-V RV64GC architecture. Radiance features a modern syntax and design inspired by Rust, Swift and Zig.


I think R’ is completely separate from R-the-stats-language and more like a cut-down version of their Radiance language. Pretty common way to bootstrap a self-hosted compiler.

Yes, R' is "R prime", unrelated to the statistics language. Honestly didn't think about it that much.

Honestly putting a single single-quote in the name of your programming language seems like trolling

This whole product description, with the word "intentional" right next to "AI", seems like trolling. There are a lot of very trendy words put next to each other, and there are no artifacts.

Other than the usual scepticism around AI, it seems like it goes in the complete opposite direction of what they're trying to do. It sounds like they want to rethink how computers should work right from the beginning, e.g. they don't ship with a web browser, presumably so developers can rethink what a web browser actually is/should be, which is all really cool.

But then to throw in "you know that thing that's built upon decades of pre-existing infrastructure and assumptions about how computing should work? Yeah, that's there too" doesn't seem compatible with the above.


Looks like an experiment to me. Which is fine. Why not play around? An NN-based computer is something people have been contemplating for a while. Though it seems more like a solution looking for a problem to me ¯\_(ツ)_/¯ [0]

  > what personal computing could be when designed from first principles.

Actually this bugs me more. I really dislike how frequently people claim "first principles". First principles are those that cannot be reduced. The phrase is often not used that way, and all that accomplishes is people tricking themselves (or worse, trying to convince others) that these are the simplest components. We use first principles in subjects like math and physics, but honestly 99.9% of what we work with in those domains isn't derived from first principles. Unless you start with the axioms of set theory, or you're in physics trying to derive a ToE (that's what those first principles would be), you're not down at that level.

It bugs me because deriving first principles is an extremely complicated task. It's also a very beneficial exercise, especially when trying to build things from the ground up: constantly asking yourself how this can be broken down even more. First principles are not where you start. Having them should demonstrate that a large amount of work has already been done, and that deep thought has gone into what you're doing.

When clicking on that section I see nothing that looks like first principles. What I really see is a manifesto, much of which is actually difficult to distinguish from current computing.

It's a nice manifesto, but it also seems too vague and naïve. Though the latter is generally a feature of manifestos, not exactly a fault.

[0] To me, AI being built in should look more like physics-informed networks, which use the NN more for optimization. You could definitely abstract that out further, which would be cool and interesting to see. But from the sound of it, it seems more like they're just putting LLMs in.


I got the impression it's just a new OS with a different kind of alternative to the internet. Their intent is to have their own systems programming language, UI, etc. I think the idea is to start from scratch and not just rebuild on top of the current monoliths. All of that sounds intriguing to me if they can pull it off. The AI part sounds like something they also want integrated with the OS (maybe their own LLM?). I think you're overthinking what they meant by "first principles". I think that's more about: hey, if we start with this RISC computer, can we build a new computing environment from the ground up based on some core principles, like that it shouldn't harvest your data and serve you ads, and that we don't need a web browser, etc.

Maybe I'm wrong, but I don't think they're interested in inventing their own universe to create an apple pie from scratch. It sounds like you're alluding to that, which I agree would be an entirely different kind of experiment, haha. To extend the analogy, it sounds like they may be rejecting the modern industrial/factory approach of buying a Sara Lee apple pie and showing that, hey, we can just use an oven and some simple tools.


Most importantly, unless they explicitly use "good" models trained on ethically-sourced data, their reliance on AI is fundamentally at odds with their mission of a "computing movement rooted in human dignity".

People had similar fears about OLE in Windows 95.

That’s kind of where my mind went too. They’re pitching this functionality for use by AI, but if it’s actually something like OLE or the Smalltalk browser, where you can programmatically enumerate APIs, it has a lot of potential for non-AI use cases too, which I generally find lacking in conventional platforms.
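If it does work that way, the mental model is roughly: every app publishes a machine-readable table of what it exposes, and anything on the system can walk it. A toy sketch in C with entirely made-up names (nothing here is from Radiant):

  #include <stdio.h>

  /* Hypothetical: an app publishes a table of callable operations with
     typed signatures, so other programs can discover them at runtime. */
  struct operation {
      const char *name;       /* e.g. "notes.append"         */
      const char *signature;  /* e.g. "(text: string) -> id" */
      const char *doc;
  };

  static const struct operation notes_app_ops[] = {
      { "notes.append", "(text: string) -> id",    "Append a note, return its id." },
      { "notes.search", "(query: string) -> [id]", "Full-text search over notes."  },
  };

  /* Any caller -- a script, a launcher, or a local model -- can enumerate
     what the application exposes without hard-coding knowledge of it. */
  int main(void) {
      for (size_t i = 0; i < sizeof notes_app_ops / sizeof notes_app_ops[0]; i++)
          printf("%-14s %-26s %s\n", notes_app_ops[i].name,
                 notes_app_ops[i].signature, notes_app_ops[i].doc);
      return 0;
  }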

If you can use it precisely without AI, in case you prefer to avoid the AI, then I think what you describe would be reasonable.

There are lots of systems that have tried to do something like the first quote. They're usually referred to as "semantic OSes", since the OS itself manages the capturing of semantic links.
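The usual data model in those systems is just subject-predicate-object links owned by the system layer instead of by each app. A toy sketch (my own illustration, not any particular system's format):

  #include <stdio.h>
  #include <string.h>

  /* Toy semantic link: the OS stores typed edges between things the user
     touches, instead of each app keeping its own private metadata. */
  struct link {
      const char *subject;    /* e.g. "note:42"       */
      const char *predicate;  /* e.g. "references"    */
      const char *object;     /* e.g. "paper:radiant" */
  };

  static const struct link store[] = {
      { "note:42",       "references",     "paper:radiant"  },
      { "paper:radiant", "authored-by",    "person:someone" },
      { "note:42",       "created-during", "project:os"     },
  };

  /* "What does note:42 point at?" -- the kind of query the system layer
     could answer for any app, with no per-app plumbing needed. */
  int main(void) {
      for (size_t i = 0; i < sizeof store / sizeof store[0]; i++)
          if (strcmp(store[i].subject, "note:42") == 0)
              printf("note:42 --%s--> %s\n", store[i].predicate, store[i].object);
      return 0;
  }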

I don't think anyone denies the current utility of AI. A big problem of the current OSes is that AI features are clumsily bolted on without proper context. If the entire system is designed from the ground up for AI and the model runs locally, perhaps many of the current issues will be diminished.


> I don't think anyone denies the current utility of AI. A big problem of the current OSes is that AI features are clumsily bolted on without proper context.

I do. "AI" is not trustworthy enough to be anything but "clumsily bolted on without proper context."


Why isn't AI just another application that can be run on the device? Surely we expose the necessary interfaces through the OS and the application goes from there?

Good luck running your super-necessary local models without nvidia drivers

Which came first, the nvidia drivers, or the super-necessary local models that wrote those drivers?

I think it's fine if all the 'ai' is local.

I haven't read all of the documentation around this project, but I hope it's in the same vein as the Canon Cat and the Apple //gs (and other early computer systems with quick and easy access to some kind of programmable environment). (As an aside, I think Apple tried to keep this going with AppleScript and Automator but didn't quite pull it off.)

I think there is a weird trick though. General-purpose computers are great; they can do anything, and many people bog down their systems with everything as a result. I feel like past projects like Microsoft Bob and the Canon Cat were also in this general thought pattern: strip it back, give people the tools they need and very little else.

I try to follow that pattern on my work MacBook as well. I only install what I need to do my job. Anything I want to try out gets a deadline for removal. I keep my /Applications folder very light and I police my Homebrew installs with similar zeal.


Sounds like it's vibe-coding your entire software stack (data, apps, OS) in real time.


