
Due to the limitations of MacOS, my M1 MacBook Pro is pretty much exclusively a thin client (software dev, gaming) and I see no reason to upgrade it - unless battery degradation warrants it.

It's fantastic as a thin client - though it's a bit annoying carrying around a mini PC with me when I travel.


I mostly use my M1 Air as a browser, sometimes I'll ftp or ssh out. VS Code + ssh remoting to my personal desktop when I actually need to be productive.

Have you tried Linux on it? I’ve been meaning to try putting NixOS on my M1 MBP.

God I miss Windows XP. I feel like, with a few small changes, the Windows XP GUI would be the most solid desktop experience you could possibly have.

Throw in POSIX compliance/bash, first party Linux compatibility (not WSL), window snapping, dark mode, maybe a spotlight-like search and a few enhancements to the file manager and you'd have a pretty much perfect desktop/productivity OS.

Why can't we have nice things?


Feels like you just want Cinnamon with maybe one or two more polished features?

I daily drive Linux and deeply appreciate the fact that pretty much everything (at least in the DE space) is developed for free by people donating their time - so don't take this the wrong way.

...but I've yet to experience the level of DE stability you get from Windows XP/7

That also applies to Windows 11 (low bar, I know) and MacOS.

It is getting much better and that's happening very quickly - but there is always some jank.

For instance, dragging a Chrome tab off the current window to create a new window is still janky. The various file managers on Linux (Dolphin, Files, Thunar) fall short (and MacOS Finder is an actual joke).

Matching glibc versions when distributing software is also a bit tedious.


> ...but I've yet to experience the level of DE stability you get from Windows XP/7

Have you used Cinnamon? I used Cinnamon for five years and the only weird quirk is needing to change the keybind for locking the machine (Power key + L defaults to opening some stupid debugger).


Not recently. I think the last time I used it was a few years ago. I'll take a look though!

  > maybe a spotlight-like search
As I recall, XP had a background indexing process, to facilitate system wide search.

I distinctly recall having to turn it off as I had builds fail more than once because the indexer had a derivative file open that the build was trying to delete.

Locking open files has been a Windows pet peeve of mine for decades.


I literally just want Rust-style macros and proc macros in JavaScript, e.g. using

  const MyComponent = () => jsx!(<div></div>)

rather than a .tsx file.

That or wasm to be usable so I can just write my web apps in Rust


Maybe the (relative) lack of ecosystem has kept you away, but I really recommend checking out both Dioxus and Leptos. Leptos is incredibly similar to React, but with Rust ergonomics, and it's been a pleasure to learn and use. With an LLM by my side that knows React and Rust pretty well, I've found myself not even needing the React libraries that I thought I would, since I can easily build on the fly the features/components I actually need.

I too, eventually gave up on React <> WASM <> Rust but I was able to port all my existing React over into Leptos in a few hours.


Yeah they are great, it's more the poor integration and lack of parallelism that makes it not worthwhile.

Thunking everything through JavaScript and not being able to take advantage of fearless concurrency severely restrict the use-cases. May as well just use TypeScript and React at that point


The Bun authors (and others) would probably do well not to repurpose already-understood terminology. "Macros" are already understood to be code that produces other code. "Comptime" is a nice alternative, but Bun's "macros" aren't macros in that sense.

We had sweet-js macros as a library many years ago but it looks like it went nowhere, especially after an incompatible rewrite that (afaik) remains broken for even basic cases. (Caveat: been a while since I looked at it)



Every once in a while I get a strong urge to hack on sweet.js to add typescript support

That particular example is odd. What are you gaining by having a macro that needs a compile step vs no macro and just configuring your compile step to use a JSX loader for js files?

The general idea is something like prebaking computation into your deployed JS/TS. This is much more general than JSX-related tools, and a lot cheaper to run. In JS applications I often find myself doing various small bits of work on startup, comptime.ts would move all these bits into build-time.
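
A tiny illustration of the idea in plain TypeScript (hypothetical route table, and not comptime.ts's actual syntax): the first form does the work on every startup in every client, the second is what a build-time transform could emit into the bundle instead.

  // Today: parsing work that runs on every startup, in every client
  const parseRoutes = (defs: string[]) =>
    defs.map((path) => ({
      path,
      regex: new RegExp("^" + path.replace(/:\w+/g, "([^/]+)") + "$"),
    }));
  const routes = parseRoutes(["/users/:id", "/posts/:id/comments"]);

  // What a comptime-style transform could bake into the emitted JS instead,
  // so no client ever pays for the parsing:
  const prebakedRoutes = [
    { path: "/users/:id", regex: /^\/users\/([^/]+)$/ },
    { path: "/posts/:id/comments", regex: /^\/posts\/([^/]+)\/comments$/ },
  ];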

Oh, I get the value of comptime! I was specifically responding to the rust-like macros comment

There are quite a lot of valid use cases for being able to transform arbitrary tokens into JavaScript at "compile" time. One that already exists is JSX, which is a macro that is baked into the TypeScript compiler but is restricted/tailored to React-style libraries.

We sort of get around this today using template literals and eval, but it's janky. https://github.com/developit/htm
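
Roughly what that looks like with htm (from memory, so treat the exact calls as approximate): the template is parsed at runtime and cached, rather than compiled ahead of time like JSX.

  import { h, render } from 'preact';
  import htm from 'htm';

  // Bind htm's tagged template to preact's createElement
  const html = htm.bind(h);

  render(html`<div class="box">Hello ${'world'}</div>`, document.body);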

A generic macro system could open the door to a framework like Svelte, Angular, Vue, etc being able to embed their template compilers (with LSP support) without wrapper compilers and IDE extensions.

e.g. imagine syntax like this being possible (not saying it's good)

  export class MyComponent {
    template = Vue.template!(<div>{{ this.foo }}</div>)

    #[Vue.reactive]
    foo = 'Hello World'

    constructor() { setTimeout(() => this.foo = 'Updated', 1000) }
  }

  Vue.init(MyComponent, document.body)

Where the `template!` macro instructs the engine how to translate the tokens into their JavaScript syntax and the `#[reactive]` macro converts the class member into a getter/setter that triggers a re-render calculation.

It would need to be adopted by TC39, of course. The expectation would be that a JavaScript engine could handle the preprocessing at runtime if needed, but transpilers should be able to pre-compute the outputs so they don't have to be evaluated at runtime.


I really really (really) don’t want Rust style macros and proc macros in JavaScript (or TypeScript), ever.

Might be a good idea to advocate for faster progress in wasm so fans of the feature don't try to pollute the language :p

"sandbox, but it's to contain the bored developers, not for security"

Writing a web app at the moment with C++/Emscripten. What makes wasm unusable in Rust?

It's not unusable per-se, however being unable to take advantage of Rust's fearless concurrency and having to glue everything together with JavaScript severely restrict the usefulness.

May as well just use TypeScript and React at that point.

The dream is to be able to specify only a wasm file in an html script tag, have the tab consume under 1mb of memory and maximise the use of client hardware to produce a flawless user experience across all types of hardware.


You want manual memory management for your web apps?

Rust memory management is... profoundly not manual?

Case in point: I use Rust/WASM in all of my web apps to great effect, and memory is never a consideration. In Rust you pretty much never think about freeing memory.

On top of that, when objects are moved across to be owned by JS, FinalizationRegistry is able to clean them up pretty much perfectly, so they're GC-ed as normal.
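
A minimal sketch of that pattern (hand-rolled here - wasm-bindgen generates something along these lines, but this is not its actual code - and wasmFree is an assumed export):

  // Free the wasm-side allocation when the JS wrapper is garbage collected
  declare function wasmFree(ptr: number): void; // assumed wasm export

  const registry = new FinalizationRegistry<number>((ptr) => wasmFree(ptr));

  class Thing {
    constructor(public ptr: number) {
      registry.register(this, ptr, this); // the wrapper itself is the unregister token
    }
    dispose() {
      registry.unregister(this); // eager cleanup is still possible
      wasmFree(this.ptr);
    }
  }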


Wrangling the borrow checker seems pretty manual at times. And I don’t know why you’d bother with a persnickety compile time GC when JS’s GC isn’t a top issue for front end development.

The borrow checker just verifies that you're handling the concept of ownership of memory correctly.

The actual management of memory - allocating, reclaiming, etc. - is all handled automagically for you.


There is no need for the concept of ownership of memory in JavaScript. So you are wasting time on a concept that doesn't matter in languages with a real GC. Dealing with ownership = manual memory management.

This used to not be true - once upon a time, Internet Explorer kept memory separate for DOM nodes and JavaScript objects, so it was very easy to leak memory by keeping reference cycles between the two.

Now, with all the desire for WASM to have DOM access I wonder if we'll end up finding ourselves back in that position again.


You can still have ownership issues and leaks even with a GC, if an object is reachable from a root. e.g. object A is in a cache and it references object B which references objects C D E F G ... which will now never get collected.

If A owns B then that is as expected but if A merely references B then it should hold a WeakRef
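
For the "merely references" case, a minimal sketch of what that looks like (TypeScript, hypothetical cache):

  // The cache alone never keeps a value (or anything it references) alive
  const cache = new Map<string, WeakRef<object>>();

  function put(key: string, value: object): void {
    cache.set(key, new WeakRef(value));
  }

  function get(key: string): object | undefined {
    const value = cache.get(key)?.deref();
    if (value === undefined) cache.delete(key); // entry was collected, drop the stale ref
    return value;
  }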


I use Rust for all the other reasons, real types being a major one of them:

https://hn.algolia.com/?type=comment&query=typescript%20soun...

It's kinda exhausting to use TypeScript and run into situations where the type system is more of a suggestion than a rule. Passing around values [1] that have a type annotation but aren't the type they're annotated as is... in many ways worse than not typing them in the first place.

[1]: not even deserialized ones - ones that only moved within the language!
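
A contrived but representative example (TypeScript, hypothetical types): the cast compiles, and every later use of the annotation is trusted even though the value never matched it.

  interface User { name: string; email: string }

  const raw = { name: "Ada" };
  const user = raw as unknown as User; // compiles fine; there is no email at runtime

  function domain(u: User): string {
    return u.email.split("@")[1]; // TypeError: u.email is undefined
  }

  domain(user);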


You stop noticing the borrow checker after a while and being able to write insanely parallel/performant code is quite rewarding.

Again, not all websites need to be usable on low end hardware/have a 1mb memory footprint - but there are a lot of use cases that would benefit.

Think browser extensions that load on every tab, consume 150mb+ * number of tabs open, and share the main thread with the website.

ServiceWorkers that sit as background processes in your OS even when the browser is closed, that sort of thing.


Does IPv6 change this dynamic at all?

It's conceivable that a VPN provider could change the IPv6 address on their server every hour for the rest of time and still get unique addresses.

If the VPN server only has an IPv6 address and no IPv4 address, can they connect to the target website?


IP addresses are routed in aggregate groups using BGP. The groups are called Autonomous Systems and are handed out to ISPs. Your home ISP has a bunch. The ISP that hosts your virtual server has some too. You can see the one you’re connecting from right now with tools like https://bgp.tools and https://bgp.he.net.

The number of these systems scales in a reasonably tractable way — on the order of the number of ISPs and physical Internet infrastructure around which traffic needs to be routed.

As well as making aggregate routing possible, you can use the ISP's registration details to see what location (or legal jurisdiction) a whole chunk of address space has. Hopping around IP addresses will give you unique ones every five minutes, but they'll all still be inside 2001:123::/32 from AS1234 aka Apathetic Onion's Finest Haberdashery and Internet Connections LLC, Delaware, USA.


> Seen the news recently? People have been discovering that the LLMs are snake oil lately.

Sources? That would cheer me up for sure, haha.

> If you're tired of working for the kind of idiot that sees "AI will replace everyone in your company for cheap" and starts drooling I don't blame you. I'm pretty sick of the industry too.

Yeah you're probably right and I suppose this is more a factor than AI itself.

The company I work for has been stack ranking for about a year and a half. It's exhausting and, combined with the onslaught of AI hype, it's pretty easy to slip into a pattern of thought where I feel I contribute no value.

So AI might just be the proverbial straw.

It also doesn't help that the engineering landscape in Australia is a bit lacklustre when it comes to anything more complicated than web/app development. Would love to work on operating systems or crazy hardware projects like piloting software or robotics.


https://aicodinghorrors.com/ - the database deletion one was how I found out this site exists.

https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o... - somewhat controversial because it measures different tasks. But nobody can measure dev tasks anyway.

https://venturebeat.com/ai/stack-overflow-data-reveals-the-h... - Saw this very recently, haven't dug in to the sources.

https://www.wheresyoured.at/ Talks a LOT about the AI hype bubble in general.

The landscape here in the US sucks too. Layoffs and offshoring are still going strong. Everything new is some stupid AI startup. I kinda thought I was done too until I got laid off and started working on my own projects to stay sharp.

Realized I still love making stuff. Just hate the corporate games.


Two perennial opportunities in Australian IT are fixing up outsourced systems and building workarounds for enterprise shovelware.

Job ads tend to be for Java/C# coding and project managers willing to harangue techies to deliver improbable results on impossible schedules with meagre budgets.


The web is a platform that has so much unrealized potential that is absolutely wasted.

Wasm is the perfect example of this - it has the potential to revolutionize web (and desktop GUI) development but it hasn't progressed beyond niche single threaded use cases in basically 10 years.


It should never have been web assembly. WASM is the fulfillment of the dream that started with Java VM in the 90’s but never got realized. A performant, truly universal virtual machine for write-once, run anywhere deployment. The web part is a distraction IMHO.


Why did Java fail though, and why would wasm succeed when the underlying philosophy is the same?


I'm not sure how it was for others, but for me there was a perception issue with Java applets on the web in the mid-2000s:

Java applets loading on a website started as a gray rectangle, which loaded very slowly, and sometimes failed to initialize with an "uninited" error. Whenever you opened a website with a java applet (like could happen with some math or physics related ones), you'd go "sigh" as your browser's UI thread itself halted for a while

Flash applets loading on a website started as a black rectangle, did not cause the UI thread to halt, loaded fast, and rarely gave an error

(the only reason I mention the gray vs black rectangle is because seeing a gray rectangle on a website made me go "sigh")

JavaScript was not yet optimized but the simple JS things that worked, did work without loading time.

RuneScape (a 3D MMORPG from the early 2000s that still exists) used Java though, and somehow they managed to use it properly, since that one never failed to load and didn't halt the browser's UI either, despite being way more complex than any math/physics Java applet demo. So if Java had forced applets to do whatever RuneScape was doing, they wouldn't have had this perception issue...


The fact that we said “Java” and you went to thinking about “Java applets” is part of the problem. Java was meant to be a universal executable format. It ended up confined to the web (mostly, at least in the popular consciousness).


Well actually that was because the topic of the article was WebAssembly :) I have seen Java used for backends / software as well (and other than the lack of unsigned integers for e.g. crypto/hashing/compression/..., and the lack of operator overloading for e.g. vectors/matrices/bigints/..., it's 'fine' to me)


But that’s the point! “Web” Assembly really has nothing to do with the web, or browsers :) It came out of the web standards groups, that’s all. It is an architecture agnostic executable format.


Yup, "Enterprise Java" really grated.

Until it didn't.


That's because Java used the old piece-of-shit NPAPI (Netscape Plugin Application Programming Interface) from 1995, first released in the NetScape 2.0b3 Plug-in SDK:

https://en.wikipedia.org/wiki/NPAPI

Problems Found with the NetScape Plug-in API. By Don Hopkins, Kaleida Labs:

https://donhopkins.com/home/archive/netscape/Netscape-Plugin...

More about Netscape's fleeting obsession with Java and Javagator in the pre-LiveConnect/XPConnect/NPRuntime/ActiveX/DHTML/XPCOM/XUL days:

https://news.ycombinator.com/item?id=22708076

>I hope NetScape can come up with a plug-in interface that is good enough that they can implement their own navigator components with it (like the mail reader, outliner, progressive jpeg viewer, etc). The only way it's going to go anywhere is if they work closely with developers, and use the plug-in interface for non-trivial things themselves. Microsoft already has a VRML plug-in for their navigator, so presumably they have a plug-in interface, and from what I've seen on their web site, it may not be "good enough", but it's probably going to do a lot more that you can do with NetScape right now, since they're exposing a lot of their navigator's functionality through OLE. They seem to understand that there's a much bigger picture, and that the problems aren't trivial. Java isn't going to magically solve all those problems, folks.

Early Browser Extension Wars of 1996:

https://news.ycombinator.com/item?id=19837817

>Wow, a blast from the past! 1996, what a year that was.

>Sun was freaking out about Microsoft, and announced Java Beans as their vaporware "alternative" to ActiveX. JavaScript had just come onto the scene, then Netscape announced they were going to reimplement Navigator in Java, so they dove into the deep end and came up with IFC, which was designed by NeXTStep programmers. A bunch of the original Java team left Sun and formed Marimba, and developed the Castanet network push distribution system, and the Bongo user interface editor (like HyperCard for Java, calling the Java compiler incrementally to support dynamic script editing).

More about browser extension APIs:

https://news.ycombinator.com/item?id=27405137

>At the time that NSAPI came around, JavaScript wasn't really much of a thing, and DHTML didn't exist, so not many people would have seriously thought of actually writing practical browser extensions in it. JavaScript was first thought of more as a way to wire together plugins, not implement them. You were supposed to use Java for that. To that end, Netscape developed LiveConnect.

>Microsoft eventually came out with "ActiveX Behavior Components" aka "Dynamic HTML (DHTML) Behaviors" aka "HTML Components (HTCs)" that enabled you to implement ActiveX controls with COM interfaces in all their glory and splendor, entirely in Visual Basic Script, JavaScript, or any other language supporting the "IScriptingEngine" plug-in interface, plus some XML. So you could plug in any scripting language engine, then write plug-ins in that language! (Easier said than done, though: it involved tons of OLE/COM plumbing and dynamic data type wrangling. But there were scripting engines for many popular scripting languages, like Python.)

Javagator Down Not Out:

https://www.cnet.com/tech/tech-industry/javagator-down-not-o...

>Though Netscape has ceased development efforts on its Java-based browser, it may pass the baton to independent developers.

Shockwave (the Macromedia Director Player Library) came long before Flash, and it used NPAPI (and ActiveX on IE), but later on, Google developed another better plug-in interface called "Pepper" for Flash.

1995: Netscape releases NPAPI for Netscape Navigator 2.0, Macromedia releases Shockwave Player on NPAPI for playing Director files

1996: Microsoft releases ActiveX, FutureWave releases FutureSplash Animator and NPAPI player for FutureSplash files, Macromedia acquires FutureSplash Animator and renames it Flash 1.0

2009: Google releases PPAPI (Pepper Plugin API) as part of the Native Client project, suddenly Flash runs much more smoothly


I asked a number of times on HN why wasm was good when java applets, exactly the same thing, were bad. There was a vague feeling that java applets were insecure and that this would somehow not be an issue for wasm.

It's not just applets; we also had Flash, which was a huge success until it was suddenly killed.

As far as I can tell, the difference between java applets and Flash is that you, the user, have to install java onto your system to use applets, whereas to use Flash you have to install Flash into your browser. I guess that might explain why one became more popular than the other.


> There was a vague feeling that java applets were insecure and that this would somehow not be an issue for wasm.

Nothing "vague" or "somehow" about that.

Applets were insecure, because A) they were based on the Netscape browser plugin API, which had a huge attack surface, and B) they ran in a normal JVM with a standard API that offers full system access, restricted by a complex sandbox mechanism which again had a huge attack surface.

This IS, in fact, not an issue for wasm, since A) as TFA describes it has by default no access at all to the JavaScript browser API and has to be granted that access explicitly for each function, and B) the JavaScript browser API has extremely restricted access to OS functionality to begin with. There simply is no API at all to access arbitrary files, for example.


There wasn't a vague feeling. Both kept getting exploited. My favorite is Trusted Method Chaining, which is hard to find a reference on now, but showed the whole Java security model was fundamentally flawed. These days that security model has simply been removed: all code in the VM is assumed to run in the privilege level of the VM.

WASM sandboxes the entire VM, a safer model. Java ran trusted and untrusted code in the same VM.

Flash, while using the whole-VM confinement model, simply had too many "boring" exploits, like buffer overflows and so on, and was too much of a risk to keep using. While technically nothing prevented Flash from being safe, it was Adobe's proprietary code, Adobe didn't make it safe, and no one else was allowed to.


I don't think "exactly the same thing" is accurate. And WASM has put more effort into sandboxing, both in the design (very limited interfaces outside the sandbox) and implementations (partially because we've just gotten a lot better at that as an industry).


But now you can do more in the browser than back then with Java applets.

Crypto miners weren’t a thing for Java applets


Only because Java applets died before crypto mining became a thing, and got turned into a "click to enable" thing because of security problems.


That’s my point. It’s Java applets all over again but now with crypto miners


Flash was a security nightmare, with multiple vulnerabilities discovered regularly.

It was eventually killed because Apple decided it wouldn't support it on the iPhone.


Your implication is that the idea that java applets were insecure had nothing to do with their adoption or lack thereof.


And conveniently around the same time javascript was rapidly evolving.


WASM doesn’t feel like an “applet” and can be either seamlessly integrated or take over the space.

Applets felt horrible, maybe if they appeared today it would be different but back then the machines were not powerful enough and the system not integrated enough to make it feel smooth.


They are vastly different. WASM is much lower level and supports a wider range of program types, including low-level embedded stuff.


Off-topic but... whatever happened to Real Player? Installing that was always a nightmare.


Every few months Sun / Oracle would release a new update requiring new incantations in the manifest files. If you didn't constantly release patched versions of your applet, your software stopped working.

Javascript from 20 years ago tends to run just fine in a contemporary browser.


Abundance of the runtime, ease of distribution of programs, a permission model that was bolted on instead of an appropriate sandboxing mechanism (leading to authorisation problems), and performance, at a time when people were not ready to sacrifice a very significant amount for no reason.

Oh, and breaking changes between versions meaning you needed multiple runtimes and still got weird issues in some cases.


Because the underlying philosophy was not the same.

Java bytecode was never originally intended as a generic low-level assembly language. Its semantics are defined largely by the needs of Java the language, which means that, on one hand, it doesn't even have pointers (only object references; so you can't e.g. point into the middle of an array), and, on the other hand, it has opcodes like "call method" that map nicely to what Java does but not necessarily to what other languages do.

wasm, on the other hand, is sufficiently low-level that you can compile C into it.


Oracle bought Sun and now owns Java, remember? All technical discussions about Java -vs- any other languages can now be immediately terminated by mentioning the word "Lawnmower", which overrides all technical issues.

https://news.ycombinator.com/item?id=15886728


High level and opinionated. JVM bytecode is much closer to wasm than Java ever was.


To expand on this, WASM is closer to LLVM bitcode, whereas the Java VM is really designed around what the Java programming language needs.


Java didn't fail; it was displaced by the web to a large extent, but it remains strong, not only as the original JVM but also in the Android clone and .NET. As for why the clones... IP issues. Otherwise Java would dominate, like the web does.


> Why did Java fail though

Um, Java has dominated enterprise computing, where the money is, for 25+ years.

There's no money in browser runtimes. They're built mostly defensively, i.e., to permit ads or to prohibit access to the rest of the machine.

> why would wasm succeed when the underlying philosophy is the same?

wasm is expressly not a source language; people use C or Rust or Swift to write it. It's used when people want to move compute to the browser, to save server resources or move code to data instead of data to server. Thus far, it hasn't been used for much UI, i.e., to replace Javascript.

Java/Oracle spent a lot of money to support other JVM languages, including Kotlin, Scala, Clojure - for similar reasons, but also without trying to replace Javascript, which is a loss leader for Google.


But the blocking of ad, tracking and miner scripts gets more complicated with WASM


What would you propose if you were to rename it?

Generalized Assembly? GASM?


I'd probably go with something like Open Regular Generalized ASseMembly.


How about "Optimized Reduced Generalized Assembly for Simulated Machines"?


The classpath that came with the JVM was much better thought out than the web is. That's the real problem.


"Dream", well until you think about i18n and a11y..


What does a compiler target have to do with accessibility?


What do you mean?


i18n: internationalisation, a11y: accessibility


What I mean is, what does that have to do with a portable executable binary format?


What I mean is that portable execution is only a very small part of 'good' SW: you need also security, i18n, a11y etc.


All of that exists at a way higher level. It's like saying ELF binaries don't provide good i18n or a11y. I don't even know how to interpret that.


Sure, but as a 'dream' portable executable is kind of funny when execution by itself brings you nothing (except heating the CPU), you also need portable IO, security, i18n, a11y..


Your argument goes like Sagan’s

if you want to make an omelet, you also need to create the universe.

Your points being unrelated to wasm.


Curious: what are the cases you want that are currently blocked?

I’ve personally felt like it has been progressing, but I’m hoping you can expand my understanding!


Give me one thing that your theoretical WASM can "revolutionize". Aside from more efficient covert crypto mining on shady sites.


High performance web-based applications is pretty high on my list.

Low memory usage and low CPU demand may not be a requirement for all websites because most are simple, but there are plenty of cases where JavaScript/TypeScript is objectively the wrong language to be using.

Banking apps, social network sites, chat apps, spreadsheets, word processors, image processors, jira, youtube, etc

Something as simple as multithreading is enough to take an experience from "treading water" to "runs flawlessly on an 8 year old mobile device". Accurate data types are also very valuable for finance applications.

Another use case is sharing types between the front and back end.


But... we already have all that.

Multithreading could've been a bit more convenient, but if you want it, you can get it. Don't tell me those revolutionaries that want to revolutionize the web give up so easily.


I use multi-threading extensively in JavaScript (it's part of my day job). I can tell you that you have to _really really_ want it and it is very hard to use it to improve performance.

Serialization/deserialization overhead is non-trivial, while shared primitives like SharedArrayBuffer are mostly useless and disabled without prohibitively restrictive security headers.
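
A minimal sketch of what it takes just to share memory with a worker (the headers are the real requirement; the file layout is hypothetical):

  // The page must be cross-origin isolated before SharedArrayBuffer is even exposed:
  //   Cross-Origin-Opener-Policy: same-origin
  //   Cross-Origin-Embedder-Policy: require-corp

  // main.ts
  const sab = new SharedArrayBuffer(4);
  const worker = new Worker(new URL("./worker.ts", import.meta.url), { type: "module" });
  worker.postMessage(sab); // the buffer is shared, not copied
  // later: Atomics.load(new Int32Array(sab), 0) sees the worker's write

  // worker.ts
  self.onmessage = (e: MessageEvent<SharedArrayBuffer>) => {
    const view = new Int32Array(e.data);
    Atomics.store(view, 0, 42);
    Atomics.notify(view, 0); // wake anyone blocked in Atomics.wait on index 0
  };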

To get anything useful out of JavaScript threading, you need to implement hydration and synchronization mechanisms which are slow and unbelievably complex.

Meanwhile, multithreading is effortless and performant in Rust or Go.

If my backend is in Rust already, I could simply reuse my API types/structs.

I could use language specific serialization formats for REST endpoints (like Rust's bincode) rather than JSON or protobuf - which is faster to parse and compresses better.

If the web opens up more maybe we could gain access to raw TCP/UDP sockets and have access to highly efficient streaming from our servers - enabling use cases like usable game netcode (which is currently impossible). This benefits other realtime application use-cases too, like finance applications, chat applications and streaming services.

Don't forget the memory footprint of web applications. Imagine if the browser figured out there was no JavaScript running and it didn't spawn a JS runtime - imagine a tab consuming 15mb of RAM rather than 250mb.

...

I'm not saying JavaScript is bad and that we don't need it to evolve - I'm just saying that the ROI on a fully featured wasm runtime in the browser is massive and that browser vendors & the standards committee are sleeping on it - which is hugely frustrating.


I use it for a web version of some robotics simulation & visualization software I wrote in C++. It normally runs as an app on Mac or Linux, but compiling to WASM lets me show public interactive demos.

Before WASM, the options were:

- require everyone to install an app to see visualizations

- just show canned videos of visualizations

- write and maintain a parallel Javascript version

Demo at https://throbol.com/sheet/examples/humanoid_walking.tb


> Throbol requires WebGPU. Try with Chrome, Edge, Safari, or Opera.


It's almost there: firefox has shipped it for windows (to begin with): https://mozillagfx.wordpress.com/2025/07/15/shipping-webgpu-...

And safari is coming soon.

So not too long until it'll be usable.


Doesn't work, neither with Safari nor with Chrome, at least not on macOS Monterey. I guess the whole stack is too modern for my 18-core Intel iMac Pro.


Works fine in Chrome for me.


With access to DOM it could run with no (or just very little) js, no ts-to-js transpiler, no web-framework-of-the-month wobbly frontends perpetually reinventing the wheel. One could use a sane language for the frontend. That would be quite the revolution.


You can just interpolate bits of Javascript inside C++ to access the DOM, like so:

  #include <emscripten.h>

  void sayHello()
  {
    // EM_ASM stringifies this JavaScript, so each statement needs its own semicolon
    EM_ASM(
      let foo = document.getElementById('foo');
      foo.innerHTML = 'hello';
    );
  }
See https://emscripten.org/docs/porting/connecting_cpp_and_javas...


You already can do that. You need a tiny shim and you can forget it's JS and use all WASM.


Can you point me to a howto/tutorial?


This is wasm-dom for Rust: https://crates.io/crates/wasm-dom There's also web-sys. Those are Rust-centric, I'm sure there are others, but basically you only need the shim once, and then you write in your language of choice and compile to WASM; you never have to think about JS. Although... I may question the purist attitude. But you can do it.

Think of it like HTMX, the way people say it "avoids working with JS" (HTMX itself is written in JS). Same principle.


People already appreciate the access to low-level native libraries like duckdb, sqlite, imagemagick, ffmpeg… that wasm allows. Or high performance games/canvas-based applications (Figma).

But CRUD developers don’t know/care about those, I guess.



God I hate multi-threading on the web/nodejs. Rather than implementing synchronization primitives like mutexes or rwlocks that capture the contained values and make them "transferable" between JavaScript contexts (v8 isolates), they introduced SharedArrayBuffer, which is almost entirely unusable for anything meaningful.

Synchronizing between threads involves thunking and copying data through RPC layers.

Sucks for me because our production app has grown faster than we are able to rewrite it and uses 70-100gb of RAM (written before my time). To try to get around this, we are investigating exotic solutions like using native code to manually manage pages of shared memory containing custom data structures and as little serialization/deserialization logic as possible - but of course v8 uses UTF-16 for string encoding, which means working with JavaScript values in the native layer is expensive.


100GB of RAM — I'm curious, why is this a web app? Sounds like an internal tool that could have been written in, for example, C#.


It's actually an internal build tool (forked from an open source tool) used to build our public applications and runs in Nodejs. We are rewriting it in Rust but it's a complex tool and that takes a lot of time.

In the mean time, a "build" using this Nodejs based tool takes about an hour and uses 100gb of RAM.

By contrast, preliminary unoptimized results of the Rust tool brings build times down from 1 hour to 1 minute and maxes out at 6gb of ram - but that'll be finished in 2 years.

It literally costs millions per year to run and is a nightmare to maintain.

To be fair, the JavaScript itself is actually surprisingly performant - it's the multi-threading that's the real problem.

It would be a non issue if there were memory sync primitives that could be sent between threads but because of how v8 isolates work, this isn't possible without changes to v8. So we are forced to copy gigantic graphs of data between threads and manually synchronize them with postMessages.

Running a build on the main JS thread (no workers) uses 10gb of ram and takes 5 hours. So shared memory would solve our issue and avoid the need to rewrite it.

Attempts to centralize the data (in a DB or similar) perform worse due to the de/serialization of the data. Other attempts at using a shared in-memory database share the same issue. We also need to modify every shared structure in the codebase to use getters/setters with a copy-on-read approach (so memory blows up anyway).

It's not that JavaScript itself is the wrong language, but that the isolated nature of the runtimes fails in multi-threaded use cases.


Microsoft doesn’t exist in the hive mind of Silicon Valley. That’s the only reason.

“We’d rather spend 3x the effort to bash things with open source rocks than ever use a proprietary hammer!”

“.NET is open source. It also comes with a steam roller.”

“We’ve made up our minds!”

PS: I just had to work on a Node.js app for the first time in my career. After dotnet core it feels like working with children’s wood blocks.


.NET being "open source" doesn't mean it's not proprietary - Microsoft still effectively controls its future.


Similar experience here. Though I moved from C# to Rust and it's become my favorite language of all time.

I'd write everything in it if I could, haha


I find the gaps in the Rust standard library annoying. The batteries-not-included design just doesn't do it for me, especially for commonly used things such as a GUID type and common interfaces.


That's fair. The std lib is the biggest pain point for me also but it's easy enough to work around. That and the poor ergonomics of async/await/futures


Bundler engineer here. ESM is great when it comes to build-time optimisations. When a bundler runs into CJS code it literally deopts the output to accommodate - so from that side it's amazing.

But also, there's a special place in hell for the people that decided to add default exports, "export * from" and top level await.

CommonJS is also very weird, as a "module" instance can be reassigned to a reference of another module:

module.exports = require('./foo')

and there's no way to do this in ESM (for good reason, but also no one warned us). In fact major projects like React use CJS exports and the entire project cannot be optimized by bundlers. So, rather than porting to ESM, they created a compiler LOL
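
To make that concrete, the closest ESM gets to that reassignment (hypothetical module name) still isn't the same thing:

  // Closest ESM approximation of `module.exports = require('./foo')`:
  export * from './foo';            // forwards named exports only
  export { default } from './foo';  // the default has to be forwarded explicitly

  // ...and even then, importers get *this* module's namespace object,
  // not ./foo's, so it still isn't a true reassignment.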


If I may, the evil is not in the top-level await but in the use of a top-level await in a module that exports. That is evil. But a top-level await in a program's main script seems OK to me.


> But also, there's a special place in hell for the people that decided to add […] top level await.

There is also a special place in extra-hell for those who export a function named 'then'.
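
A minimal sketch of why that one hurts (hypothetical file names): dynamic import resolves with the module namespace object, and promise resolution will happily adopt anything with a callable then.

  // thenable.ts
  export function then(resolve: (value: unknown) => void) {
    resolve('gotcha');
  }

  // main.ts
  const mod = await import('./thenable');
  console.log(mod); // logs 'gotcha', not the namespace object - the exported then()
                    // makes the whole module behave like a thenable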


Wait what? What would that module even be for?!


I've achieved something similar by passing markdown into handlebars/ejs first before converting it to html


As a kiwi living in Australia and working for mini FANG, the thought of moving to Southeast Asia and living off savings for a couple of years in the hope that I might wait out this jobs/rental crisis has crossed my mind.


Not necessarily a bad plan per se, but don’t forget to take into account the fact that this jobs/rental crisis might be due to deep systemic+geopolitical issues that are only going to get worse and that we might not be going back to the abundance of the 2010s anytime soon.

Finding yourself in an even worse job market 5 years from now is a very real possibility.


I think it is alleviating slowly. In Melbourne at least, rent seems to have peaked already. The state government has done great work in overruling councils to get more housing built faster. And it seems to be working.


To be honest, it sounds like the perfect solution, and you have a high chance of finding something there that makes you happy.

Housing there is more affordable too, but not for long; people are buying insane houses there coz they cost "nothing".

If this is something that you have already considered and you have enough savings to do so, I would personally start looking at what I could do over there job-wise to avoid using all my savings, and then screw it, I'm off. No matter how small the income, because you will need to start from the bottom. I'm getting tired of IT - too much drama, all the time. If I were living in Asia doing something completely different, but something that pays the bills and that I'm happy with, I wouldn't come back. Australia within the last 11 years went to shit big time, and it is getting worse by the day.

I have considered the same; I think it might be a little too late for me now, who knows.


> Australia within the last 11 years went to shit big time, and it is getting worse by the day.

Most major western economies did the same in the past 11 years: stagnating wages, exploding CoL. Life was good 11 years ago, when salaries were great and housing was still abundant and affordable to buy, but this isn't the case anymore.

If you expect an improvement, moving anywhere there right now would be like jumping from a lake to a pond.


Housing in Australia was not affordable at all 11 years ago! Perhaps 20 years ago yeah… but I mean, compared to almost anywhere else, salaries for the average folk are great even when you compensate for that. Only the top tier is underpaid if you compare with the USA, much like Europe.


It's so funny to me how you Australians have no idea how fortunate you are. You've been born into one of the luckiest places on earth and never stop moaning about it. When I lived there all anyone ever did was bitch and moan while pulling in huge salaries, soaking in some of the best weather on earth and knowing their country's immense mineral wealth and isolated status would always be a security blanket.

Truly some of the most fortunate, least deserving people on the planet. I am thankful I can get my special category visa and when my parents eventually pass away I will set my roots down in Australia. Despite all of the Australians.

I'm interested where in SE Asia you think you can just barge in, buy a property and live. It might pay to actually look into what it takes to do that legally; it might not be as simple as you think.


Or move to SE Asia and work remotely? Some killer coworking spots out there


What is a MiniFANG?


In Aus could be Atlassian


Or it could be Canva, which is Australia-based.

