Hacker News

I work on Bun - happy to answer any questions



Hi! I saw your PR review of a community effort to add Bun to the Techempower benchmark. You had really great, exact feedback about "unnecessary allocation here", "unnecessary allocation there".

It was eye-opening, in terms of how often us JS programmers play fast & loose with "a little filter" here, "a little map" there, and end up with death by a thousand allocations.

Given that context, I'm curious:

1) How much you think Bun's (super-appreciated!) fanatical OCD about optimizing everything "up to & outside of JSC" will translate to the post-boot performance of everyday backend apps, and

2) If you're tempted/could be compelled :-D to create a "Mojo of TypeScript", where you do a collab with Anders Hejlsberg to create some sort of "TypeScript-ish" language that, if a TS programmer plays by a stricter set of rules + relies on the latest borrowing inference magic, we could bring some of the Bun amazing-performance ethos to the current idiomatic FP/JS/TS style that is "lol allocations everywhere". :-)

Or, maybe with Bun bringing the right native/HTTP/etc libraries wrapped around the core runtime, doing "just the business logic" in JS really won't be that bad? Which iirc was the theory/assertion of just-js when its author was benchmark hacking Techempower, and got pretty far with that approach.

Anyway, thanks for Bun! We're not running on it yet, but it's on the todo list. :-)


> 1) How much you think Bun's (super-appreciated!) fanatical OCD about optimizing everything "up to & outside of JSC" will translate to the post-boot performance of everyday backend apps

Bun is extremely fast at data processing tasks. Shuffling data from one place to another (disk, network, APIs, etc). We use SIMD, minimize allocations/copies and pay lots of attention to what system calls are used and how often and where. A naively implemented script in Bun that moves data using Bun’s APIs will often outperform a naively implemented program written in Rust or Go. Bun’s APIs try really hard to make the obvious & default way also the fast way.

That, and Bun’s builtin build tooling are where Bun’s performance shines.

> “Mojo of TypeScript”

I’ve thought a little about this. I think it’s a project that’d take 5+ years and crazy hard to hire the right people to do it. I think it’s mostly unnecessary though. JITs are really good. The API design is usually the reason why things aren’t as fast as they could be.


> Bun’s APIs try really hard to make the obvious & default way also the fast way.

Nice! That makes a lot of sense, and I look forward to trying them.

Fwiw I sometimes worry about the slippery slope to infra that exists on the JS side, i.e. I work a lot in a GraphQL backend, and even if Bun gave us (or really our framework, so fastify/mercurius) a super-optimized way of getting the raw bytes off the wire, Mercurius is still doing GraphQL parsing, validation, routing, response building in JS land.

Granted, I want to keep my application's business logic in TS, but naively it seems tempting to push as much of the "web framework" / "graphql framework" as possible into the native side of things, whereas I think historically Node/etc APIs have stopped at "here's the raw HTTP request off the wire".

> I’ve thought a little about this.

Sweet! That's awesome just to hear that it's crossed your mind. Agreed it would be a moonshot. And, yeah, I'm perfectly happy leaning into JITs and good APIs.

Thanks for the reply!


> I think it’s a project that’d take 5+ years

What are the major difficulties you see? Is this estimate for supporting all existing TS code... or, as the OC said, a new language with only newly written code?

The way I naively think about it is to imagine transpiling TypeScript code to Zig code. How far could that take you?

And if you restricted how much dynamic-y stuff you could do... maybe with a linter. I always get the feeling that 90% of the business logic (sequence, selection, iteration) is the same between languages, whether they are interpreted or compiled, with just some memory management stuff added on top - which can be abstracted anyway.


> A naively implemented script in Bun that moves data using Bun’s APIs will often outperform a naively implemented program written in Rust or Go.

Sounds exciting, do you have some example benchmarks I can run that show this?


> us JS programmers play fast & loose with "a little filter" here, "a little map" there, and end up with death by a thousand allocations.

That's because speed isn't always top priority. Readability is very high on the list.

I would rather have a slightly slower .map or .filter in a chain than a harder to read nested for or while loop.
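As a sketch of the tradeoff being discussed (the `orders` data here is made up for illustration):

```javascript
const orders = [
  { paid: true, amount: 10 },
  { paid: false, amount: 5 },
  { paid: true, amount: 20 },
];

// Chained version: easy to read, but filter() and map() each
// allocate an intermediate array before reduce() runs.
const totalChained = orders
  .filter(o => o.paid)
  .map(o => o.amount)
  .reduce((sum, a) => sum + a, 0);

// Loop version: the same result with zero intermediate arrays.
let totalLoop = 0;
for (const o of orders) {
  if (o.paid) totalLoop += o.amount;
}
```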


This just shows again that readability is entirely subjective, IMHO combinations of .map, .filter, .reduce, etc are often less readable than doing the same thing in a nested loop.


> combinations of .map, .filter, .reduce, etc are often less readable than doing the same thing in a nested loop.

I often find map/reduce/filter easier to read when using named functions (or lazy data structures) for intermediary results - depending on the language/runtime that might imply more allocations - or not.

Eg pseudocode:

  integers.filter(// non-obvious prime sieve).sum()

Vs

  primes = integers.filter(// non-obvious prime sieve)
  primes.sum()

Or lifting the anonymous filter to a primes() filter:

  integers.primes().sum()


This is true. I code in a very functional style in JavaScript, but I can see how that might be harder to read for some.


That's an interesting point. Agree that readability is somewhat subjective, but many programmers find .map, .filter, .reduce, etc. more convenient, and they provide clarity as to intent. Many languages (like Vlang, Java, Python, etc.) arguably also have them to align themselves more closely with the functional programming paradigm.


For simple combinations I agree (maybe 2-long chains with very simple conditions and transforms - but for such things loops are also trivial to read).

But I have seen "functional contraptions of horror" where those functions are both chained and nested which were completely undecipherable by mere humans.

And at least from my personal impression, people who are a fan of this type of functional style are also more likely to create such horrors (which they themselves of course find totally readable and superior to "unreadable" loops) - I suspect that there's often a bit of cargo-culting going on.


A nested for or while loop isn't always less readable FWIW. If you need more than 3-4 filters and/or maps the balance starts shifting back the other way.


yeah, cause fixing the missing index on your DB that adds 3 seconds to an API call is better than optimising loops to save 2ms.

In the front-end we were making about 20 API calls to fetch data we probably don't need yet and the developer is like: the problem has to exist in the way we call them, time to optimise the loops!


Well-factored code often reveals more important optimizations one can make. Like figuring out the nested loop isn't really necessary.


The fact of un-readability isn’t necessarily implied by the statement being replied to. `map` and `filter` are names for common operations on lists, not less performant alternatives to other things. If there’s a more performant alternative to either operation, give it a name and express it in a function. That’s what functions are for, no?


The biggest problem is that map and filter make copies of the structure. That's an allocation, and that means GC work.

You can make a similar function that changes the data in place and will be faster. But that comes with side effects which is not functional style.
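A minimal sketch of that tradeoff (the `mapInPlace` helper is made up for illustration):

```javascript
const xs = [1, 2, 3, 4];

// Functional style: map() returns a brand-new array, leaving xs intact.
const doubled = xs.map(x => x * 2);

// In-place style: mutates the existing array, so there is no new
// allocation, but callers now observe the side effect.
function mapInPlace(arr, f) {
  for (let i = 0; i < arr.length; i++) arr[i] = f(arr[i]);
  return arr;
}
mapInPlace(xs, x => x * 2); // xs is now [2, 4, 6, 8]
```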


The real problem here is with the spec committee.

Lodash is MUCH faster than native because it uses iterators so you aren't actually looping over and over.

We need builtin iterator versions of all these looped functions so it adds an implicit/explicit `.valueOf()` that calls the chain and allocates only one new array.


There are now builtin iterator versions of most of these looped functions [1], should be shipping in stable Chrome next month. The "give me an array at the end" function is spelled toArray.

But it's not going to help all that much with this problem. The iterator protocol in JS involves an allocation for every individual value, and while in some cases that can be optimized out, it's pretty tricky to completely optimize.

It's always going to be difficult to beat a C-style for loop for performance.

[1] https://github.com/tc39/proposal-iterator-helpers/
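For anyone who can't use the helpers yet, generators approximate the same lazy shape in plain JS. This is only a sketch; note that each yield still allocates an iterator-result object, which is exactly the per-value cost the parent comment mentions:

```javascript
// Lazy pipeline: values flow one at a time through filter and map,
// and only the final array materialization allocates a collection.
function* filterLazy(iter, pred) {
  for (const x of iter) if (pred(x)) yield x;
}
function* mapLazy(iter, f) {
  for (const x of iter) yield f(x);
}

const result = [...mapLazy(filterLazy([1, 2, 3, 4, 5], x => x % 2 === 1), x => x * 10)];
// result is [10, 30, 50]
```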


Yeah, and besides, JavaScript runs on the client side, so let the client deal with bad perf. I am happy writing my beautifully designed readable code.


JavaScript also runs on the server side.



Concerning stephen's item (2). The stricter set of rules was laid out by Richard C. Waters in Optimization of Series Expressions: Part I: User's Manual for the Series Macro Package, page 46 (document page 48). See reference Waters(1989a).

The paper's language is a bit different from contemporary (2023) language.

`map()` is called `map-fn`.

`reduce()` a.k.a. `fold` seems to be `collect-fn`, although `collecting-fn` also seems interesting.

Sorting, uniqueness and permutation seem to be covered by `producing`.

Just think of McIlroy's famous pipeline in response to Donald Knuth's trie implementation[mcilroy-source]:

  tr -cs A-Za-z '\n' |
  tr A-Z a-z |
  sort |
  uniq -c |
  sort -rn |
  sed ${1}q
As far as pipeline or stream processing diagrams are concerned, the diagram on page 13 (document page 15) of Waters(1989a) may also be worth a closer look.

What the SERIES compiler does is pipeline the loops. Think of a UNIX shell pipeline. Think of streaming results. Waters calls this pre-order processing. This also seems to be where Rich Hickey got the term "transducer" from. In short it means dropping unnecessary intermediate list or array allocations.
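A rough JavaScript sketch of what that loop pipelining buys; this only illustrates the idea, it is not actual SERIES output:

```javascript
const input = [1, 2, 3, 4, 5, 6];

// Unfused: filter() and map() each allocate an intermediate array.
const unfused = input.filter(x => x % 2 === 0).map(x => x * x);

// Fused, the way a SERIES-style compiler would pipeline it:
// one loop, one result allocation, no intermediates.
const fused = [];
for (const x of input) {
  if (x % 2 === 0) fused.push(x * x);
}
```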

Shameless self-plug: I eliminated unnecessary allocations in my JavaScript code by adding support for SERIES to the PARENSCRIPT Common Lisp to JavaScript compiler. The trick was (1) to define (series-expand ...) on series expressions so that they can be passed into (parenscript:ps ...), and (2) the parenscript compiler was missing (tagbody ... (go ...) ...) support. The latter is surprisingly tricky to implement. See dapperdrake(2023). Apologies for the less than perfect blog post. Got busy actually using this tool. Suddenly stream processing is easy, and maintainable.

When adding a Hylang-style threading macro (-> ...) you get UNIX style pipelines without unnecessary allocations. It looks similar to this:

  (-> (it :let*-symbol series::let)
    (scan-file in-path-name #'read)
    (map-fn T #'some-function it)
    (collect 'list it))
Sadly, the SERIES compiler available on quicklisp right now is a bit arcane to use. It seems like it may have been more user friendly if it had been integrated into the ANSI Common Lisp 1995 standard, so that it has access to compiler internals. The trick seems to be to use macros instead of (series::defun ...) and use (series::let ...) instead of (cl:let ...). Note that the two crucial symbols 'defun and 'let are not exported by SERIES. So using the package is insufficient, and pipelining fails without a decent warning.

Am chewing on the SERIES source code. It is available on sourceforge [series-source]. If anybody is interested in porting it, then please reach out. It seems to be of similar importance to Google's V8 relooper algorithm [relooper-reference]. Waters(1989b), page 27 (document page 29), even demonstrates an implementation for Pascal. So it is possible.

References:

dapperdrake(2023): Faster Loops in JavaScript https://dapperdrake.neocities.org/faster-loops-javascript

Waters(1989a) document page 48, paper page 46 https://dspace.mit.edu/bitstream/handle/1721.1/6035/AIM-1082...

Waters(1989b) document page 29, paper page 27 https://dspace.mit.edu/bitstream/handle/1721.1/6031/AIM-1083...

[relooper-reference] http://troubles.md/why-do-we-need-the-relooper-algorithm-aga...

[series-source] https://series.sourceforge.net/

[mcilroy-source] https://franklinchen.com/blog/2011/12/08/revisiting-knuth-an...


The paper about Series explicitly bemoans the lack of compiler integration, explaining why the hacks are that way: why Series has its own implementations of let and so on.


Apologies for being slightly unclear.

For both getting function composition and avoiding unnecessary intermediate allocations, the naive approach to using the SERIES package is insufficient. And the error messages it returns along the way are unhelpful.

Evaluating (defpackage foo (:use :cl :series)) fails to import (series::defun ...) and (series::let ...) and (series::let* ...). So, when you think you are following the rules of the paper, you are invisibly not following the rules of the paper and get the appropriate warnings about pipelining being impossible. That seems somewhat confusing.

After reading the source code, it turns out the answer is calling (series::install :shadow T :macro T :implicit-map nil). How is (series::install ...) supposed to be discoverable with (describe (find-package :series)) in SBCL if it is an internal symbol of package SERIES? Usability here is somewhat less than discoverable. Listing all exported package symbols of SERIES obviously also fails here.

Furthermore, the source code and naming in "s-code.lisp" suggest that (series::process-top ...) may be useful for expanding series expressions to their pipelined (read optimized/streaming/lazy) implementations. This is desirable for passing the optimized version on to PARENSCRIPT or other compilers. Here is the catch: It fails when the series expression is supposed to return multiple values. One of the points of using SERIES, is that Waters and his fellow researchers already took care of handling multiple return values. (If the lisp implementation is smart enough, this seems to mean that these values are kept in registers during processing.) After some tinkering, there is a solution that also handles multiple return values:

  (defun series-expand (series-expression)
      "(series::process-top ...) has problems with multiple return values."
    (let (series::*renames*
          series::*env*)
      (series::codify
        (series::mergify
          (series::graphify
            series-expression)))))
Will submit pull requests once I am comfortable enough with the source code.

Yes, the SERIES papers Waters(1989a,b) bemoan the lack of deep integration with Common Lisp compilers. And yes, they could have been resolved by making SERIES part of ANSI Common Lisp like LOOP was. They could theoretically also have been resolved by having explicit compiler and environment interfaces in ANSI Common Lisp. That is not the world we seem to live in today. Nevertheless, package SERIES solved all of the hard technical problems. When people know about the documentation failings, then SERIES is a powerful hammer for combining streaming/lazy-evaluation with function composition as well as other compilers like PARENSCRIPT.


I think one issue is that Series has been hacked on since that paper, which has not been updated.

Anyway, I guess series::let is an unexported symbol? I.e. not series:let? There is a good reason for that.

If series:let were exported, then you would get a clash condition by doing (:use :cl :series). The CL package system detects and flags ambiguous situations in which different symbols would become visible under the same name. You would need a shadowing import for all the clashing symbols.

It's probably a bad idea for any package to export symbols that have the same names as CL symbols. Other people say that using :use (other than for the CL package) is a bad idea. Either way, if you have clashing symbols, whether exported or not, you're going to be importing them individually if you also use CL, which is often the case.


Does anyone have a link to the PR?


> PR review of a community effort to add Bun to the Techempower benchmark

Has this been added? That PR got closed right? Whilst valid, it was sad that Bun didn't make it. Would be good if someone from the community or Bun team can give it another go.


> some sort of "TypeScript-ish" language that, if a TS programmer plays by a stricter set of rules

This is basically just rust, no? As a TS developer, I've found picking up rust to be really neat.


No. :-)

That was certainly the promise/hype of Rust ~2-3 years ago: that it was going to become so ergonomic that even "boring line-of-business applications" (i.e. JS backends) could be written in Rust, by everyday programmers, without any slowdown in delivery/velocity due to the language complexity.

But, from what I've seen, that's not played out, and instead the community is still on the look out for the killer "systems-language performance, but scripting-language ergonomics" language.


Try C# instead.


I write a fair bit of C#. It feels like the prequel to TypeScript. Its lack of discriminated unions and type narrowing feels like a step back from TypeScript.


Given that TypeScript is driven in large part by one of the main long-term (until they moved to TS) C# language designers, I feel a lot of the core of the TS type system is what they wanted to do in C# but for legacy reasons just can’t.


DU is available in C# via packages like OneOf and Dunet.

Dunet is really good.

https://github.com/mcintyre321/OneOf

https://github.com/domn1995/dunet

It's on the roadmap and will arrive at some point:

https://github.com/dotnet/csharplang/blob/main/proposals/dis...


Hey Jarred, big fan of your work on Bun.

Until recently I've been using Deno (mostly to avoid using Node and the tooling hell that entails) and it looks like for my use-cases Bun is getting there. I've had a pleasant experience using Bun as the basis of a test harness.

Here's my question (with a tiny bit of lead-in):

What I like about Deno is the integrated LSP (reducing tooling hell) - are there any plans for Bun to feature this too? Bun already internally transpiles TypeScript, which is great, but having the LSP bundled too would make the single-binary integrated experience even more of a boon, I feel.

Looking forward to Bun 1.0!

P.S. I'm starting to stretch my Zig muscles, you looking for Zig developers? ;)


Friendly warning about expected working conditions: https://twitter.com/lukeshiru/status/1563493902560428034


I don't understand the hand-wringing about this. Bun explicitly says, they are a small team working very hard on some hard problems. If that's your idea of a good time, you're free to try and join. If you want a chiller job, there's 1,000 of them out there. It's not like Jarred is some corporate overlord demanding people slave for him while he sips margarita on a beach... he just wants people working on the same frequency as himself.


There's a school of thought on work life balance which amounts to wanting just enough life overhead to support the work. That 'balance' is not for everyone - but crucially it is what some people want.


Companies wanted to say they have "work / life balance"; rather than change their practices, they expanded the definition of work / life balance.

By this definition, is there any company that doesn't have work / life balance? By this new definition you propose, does the term mean anything?

Is 996 a good work life balance because "it is what some people want"?


>That 'balance' is not for everyone - but crucially it is what some people want

I've never seen anyone that could sustain an 80+ hour per week grind and make it out without severe personal issues (whether they are willing to acknowledge it or not). I've seen many, many incredibly talented people burn out and suffer permanent health or career damage to hit their short-term goals. I personally know an otherwise healthy 30 year old swe who had a stress related heart attack. It may be what some people want but you can't grind your way out of being a human.


But are they compensated or are we dealing with disguised wage theft[1]? A lot of times, when it's time to pay all that overtime or when someone finally speaks up about it, suddenly the "fun" stops.

Then there is the not speaking out, resulting in: 1) Burn out and quit. 2) Company dumps or fires them after burning them out. Then does the same to the new ones. Until something obvious or tragic stops them. 3) Quiet destruction of personal lives. Sometimes leading to significant health and/or mental problems, related to stress, and even suicide in some cases.

Balance is necessary, because otherwise it can be like playing with fire. It's all "fun and games", until people get or realized they got burned.

[1]: https://en.wikipedia.org/wiki/Wage_theft


New product teams are a grind, but with the right people, also a lot of fun. It isn't for everyone. When I was hiring for a NPT working on cutting-edge tech, I told everyone I interviewed that the work-life balance was super skewed.

The people who accepted job offers self selected for having a passion for pushing technology forward.

I tried to keep things as sane as I could, but I'd have to go in on weekends and usher people out of the office.

For some people, building cutting edge things is /fun/.


How is this relevant to the comment you're replying to?


He's asking to be hired by the company behind Bun?

How <<isn't>> the previous comment relevant?!?


You can pretty much use tsserver with bun-types and get most if not all the features you get from deno-lsp. I know because we provide both deno-lsp and tsserver for windmill.dev, to provide intellisense over websocket/jsonrpc for our monaco web IDE, and it works great :)


The Segmentation Fault issue #499 is closed, but I still get this error. Is it an issue you're working through step by step, or should it be fixed by now?


From https://oven.sh/

> The plan is to run our own servers on the edge in datacenters around the world. Oven will leverage end-to-end integration of the entire JavaScript stack (down to the hardware) to make new things possible.

So how does oven-sh the company make money? It sounds like you release Bun open source, and then sell access to your edge infrastructure to enterprises?

Are these edge servers basically running a smarter version of NPM with a CDN? Can you say more about what this may eventually do?

Will individuals be able to use the edge servers via some free tier?

Does Bun in its current form use this edge infrastructure already?


Same as Deno, Vercel, and many others; Oven will host your backend JS code for you and run it.


Have you given thought to what Bun 2.0 would look like? What major features would it have? Or is 2.0, if it ever happens, mostly about making Bun work 'at the edge'?


Hey Jarred! Is windows support for 1.0 still on the horizon or has that been pushed? What’s been the best part of this project for you?


The honest answer is: don't know yet, but if it doesn't happen in 1.0, it will be the priority for 1.1.

I'm going to do some experiments in the next few days and see how it goes.

Roughly, the way we're thinking of adding Windows support to Bun is:

1) Get all the Zig code using platform-specific system APIs to use the closest Windows equivalent API. Fortunately, we have lots of code for handling UTF-16 strings (since that's what JS uses in some cases)

2) Get uSockets/uWebSockets (the C/C++ library we use for TCP & HTTP serving) to compile for Windows, or fall back to using libuv if it takes too long to make it work

3) Get the rest of the dependencies to compile on Windows

4) Fix bugs and perf issues

There are a lot of open questions though. None of us are super familiar with I/O on Windows. JavaScriptCore doesn't have WebAssembly enabled on Windows yet.

The biggest thing I'm worried about (other than time) re: Windows is async I/O. In Bun, we _mostly_ use synchronous I/O. Synchronous I/O is simpler and, when using SSDs, is often meaningfully lower overhead than the work necessary to make it async. I've heard that anti-virus software will often block I/O for potentially seconds, which means that using sync I/O at all is a super bad idea for performance on Windows (if this is true). If that is true, then making it fast will be difficult in cases where we need to do lots of filesystem lookups (like module resolution).


On Windows you may consider using higher-level I/O routines. For example, for HTTP requests you can use WinHTTP, which is super fast and scalable. For other I/O you can use the Windows Thread Pool API (https://learn.microsoft.com/en-us/windows/win32/procthread/t...) so that you do not need to manually manage threads or register/unregister I/O handlers/callbacks. gRPC uses that.

Though Windows I/O is internally all async, this actually makes using sync I/O easier, and you do not need to say it is a super bad idea. Windows has IOCP. If the machine has n logical CPUs, you may create a thread pool with 2*n threads, and by default the operating system will not make more than n threads active at the same time. When one of the threads doing blocking I/O enters the I/O wait state, the OS will wake up another thread and let it go. This is why the number of threads in the thread pool needs to be larger than the number of CPUs. This design doesn't lead to an optimal solution, but in practice it works very well. In this setting you still have the flexibility to use async I/O, but it is not a sin to use sync I/O in a blocking manner in a thread pool.

Disclaimer: I work at Microsoft and ship code to Windows, but the above are just my personal opinions.


> JavaScriptCore doesn't have WebAssembly enabled on Windows yet.

I got JavaScriptCore compiling with WebAssembly enabled yesterday, but I don't know how long it'll take to get it to actually work.

The bigger problem for Bun is that JavaScriptCore doesn't have the FTL JIT enabled on Windows [1]. It's going to be much slower than on other platforms without that final tier of JIT; it shows up pretty dramatically on benchmarks.

[1] https://bugs.webkit.org/show_bug.cgi?id=145366


Sync I/O is probably fine on Windows, with the exception of CloseHandle, where Windows Defender or other AV will invoke a file filter in the kernel's file I/O filter stack to scan data recently written to the file. A common approach used in Rust, version control software, and other runtimes is to defer file closing to a different thread to keep other I/O and user-facing threads responsive. All that said, I think IOCP on Win32 is a far superior asynchronous programming model to the equivalent APIs on Linux, which feel far less usable (with more footguns).


I can personally attest to anti-virus issues with IO slowdown.

Some of the apps we use will simply cease to function with any AV scanning in place.


Common misconception - it's not opening or writing files that can take a second, it's closing them.


This definitely also used to be true on macOS. Bun previously would just request the max ulimit for file descriptors and then not close them. Most tools don't realize there are hard and soft limits to file descriptors, and the hard limit is usually much higher.

On Linux, not closing file descriptors makes opening new ones on multiple threads occasionally lock for 30ms or more. Early versions of `bun build` were something like 5x slower on Linux compared to macOS until we narrowed down that the bug was caused by not closing file descriptors.


They likely meant that system-level hooks of antivirus software are invoked on the CloseFile call when you do writes, not on OpenFile.

You may want to take a look at https://youtu.be/qbKGw8MQ0i8?si=HO5b0MuljPQN9Sb2


Hey Jarred, any idea where the bundler + React server components fall in the priority list? Colin's post[1] made me excited about the idea of having a lightweight RSC compiler/bundler built into bun. I'm curious when it'll be considered usable and ready for experimentation.

[1] https://bun.sh/blog/server-components-in-bun


Bun is an executable as far as I understand. Would it be possible to call Bun directly from another language with bindings?

For example Erlang (and Elixir) has Native Implemented Functions[0] (NIF) where you can call native code directly from Erlang. Elixir has the zigler[1] project where you can call Zig code directly from Elixir.

Maybe you can see where I'm going with this, but it would be super cool to have the ability to call Javascript code from within Elixir. Especially when it comes to code that should be called on the server and client. I'm the developer of LiveSvelte[2] where we use Node to do SSR but it's quite slow atm, and would be very cool to use Bun for something like this.

In any case Bun is super impressive, keep it up!

[0] https://www.erlang.org/doc/tutorial/nif.html

[1] https://github.com/E-xyza/zigler

[2] https://github.com/woutdp/live_svelte


Yes it would be possible, though it'd be easier to start in Bun and then call out to your executable from JavaScript via napi or bun:ffi


Any plans for Bun on mobile in future? If so, any idea around how big it might be (sub-megabyte, multi-megabyte)?


What's zig like to work with? Did the lack of async in 0.11 mess you up?


Nice. I should revisit whether Bun nowadays supports OpenTelemetry


yeah, I would also love to find out about that


When will Bun use an open standard like IRC, XMPP, or Matrix as a community chat option instead of being limited to usage of proprietary Discord & subject to their ToS?


When will a large enough fraction of users prefer that?


The crowd that is blocked by sanctions, values privacy/freedom, has certain accessibility needs third-party clients could provide, or doesn’t have powerful enough hardware or a big/fast enough internet plan likely hasn’t been able to participate even if they wanted to. It’s a bad choice, and we see users get banned for weird, non-project-related reasons & they lose access to the community due to the whims of the Discord corporation.


My point is that the vast majority of users want Discord now, and second is Slack. I wish there were good enough Matrix clients and features that users demanded Matrix, but they don't.


Eh, I mean it’s better than Discord & Slack, yes. The Matrix model of mirroring the entire history of all user conversations, including attachments, makes it costly to host the storage side. This in turn makes self-hosting unappealing, which has led to de facto centralization around Matrix.org. I would like to see a greater uptake in XMPP MUCs, as they’re way lighter to self-host… or IRCv3, which is lightweight but has some modern comforts. Chat needs to be treated as non-permanent, but Slack & Discord have made folks over-reliant on it & now we can no longer do a simple web search.



Without in any way suggesting this is a solution rather than a work around, the last time I needed Discord for something I used bitlbee to present it as an ircd and it worked out nicely.


That workaround is obviously not ideal & still requires an account (& possibly a phone number too). And your private messages will still be logged. And using the service still causes a big issue for search, as you can’t find solutions using an engine like you could with a forum.

If a community has the resources, it could self-host a decentralized, federated server where users are in control of their account/data, and the community is in control of moderation, bans, CoC, ToS… & then bridge to other services from that base if they are seen as useful. If a community has fewer resources, some servers are lighter to run, especially if the server isn’t supposed to hold the entire history (these chat rooms shouldn’t be seen as a place for permanent decision-making anyhow).


Curious why you chose Zig and not e.g. Rust.


I suspect that this has something to do with it:

https://zackoverflow.dev/writing/unsafe-rust-vs-zig/

Zig is also very easy to learn in comparison to rust.


I'm quite happy they did.


Why is that?


[flagged]


There's already Deno if someone really wants their JS runtime in Rust. Personally, since everything just runs on a JIT JS engine anyway, I don't really see too much of a difference between Node, Deno, and Bun. It's not like the JS is being AOT compiled via Zig or Rust, which would be very interesting: you could basically treat JS as a compiled language rather than an interpreted one, even if the JIT is already fast.


> it's not like the JS is being AOT compiled via Zig or Rust

That would actually be slower than what you get from a JIT compiler.


I meant the compiler written in a lower level language that then binds to LLVM, as Rust is currently. Why would AOT be slower than JIT?


Why would the implementation language matter for the compiler? A compiler is a traditional input-output algorithm: either it does many fancy optimizations and produces good output, or it doesn't. Sure, the speed of compilation may vary, but that's a different question.

Also, JS being such a dynamic language, it will likely perform better with a speculative JIT compiler, e.g. a shape can be assumed for a frequently used object. This is only possible to a degree with PGO.


In theory it could be: compiling for the specific latency profile and instruction-set of each individual machine. In practice that has never been done.


Are there any plans to enhance the bun compile functionality, e.g. bytecode generation? Similar to pkg in the Node world. We use the latter to ship closed-source binaries to customers.


Do you plan to support sandboxing like deno?

What is your view on the usefulness of sandboxing and why it was skipped for bun?


While not implemented, Bun may eventually have an alternative to Deno's sandboxing: https://github.com/oven-sh/bun/discussions/725

"Rather than runtime permission checks (Deno's model) which could potentially have bugs that lead to permissions being ignored, Bun plans to have binary dead code elimination based on statically analyzing what features/native modules are used by the code."


Why would someone use this over much more maintainable and performant backends in Go or Rust?


Define performant and define maintainable.

If you have a lot of duplicated functionality in a web frontend and backend, it may be a lot more maintainable to do the development once and not have to keep two implementations in sync.

As far as performance goes, if you’re using one of the JS application frameworks with server-side pre-rendering, time to interactivity may very well be faster than anything you can build in Go or Rust.


There is no way that a JS SSR framework is faster than a Go SSR template renderer.


If you're building in plain Go with pure SSR, sure, but OP is talking about building a React frontend that is pre-rendered server-side. TTI very well could be faster with that than with a React front end that talks to a Go backend.


I'm kind of lost here. It's been ages since I wrote anything frontend related so here the question: how does it matter?

I mean unless you have pure front end app - you are still going to talk to some backend. Regardless of how you got your frontend part - generated by the server or a static file served by nginx.


Modern JS frameworks pre-render the components on the server, then attach to that markup on the client to "rehydrate" it and add JS handlers. They also allow mixing purely-server-side components (essentially templates, no JS or hydration) and mostly-client-side components (that still pre-render on the server).

For Rust, Leptos (https://leptos.dev/) would be one choice that can do SSR+hydration (but not intermixed with server-side components, at least AFAIK not yet)


There very well could be, the same way that Java can be faster than C.

Why? Because JS and Java are JIT compiled for the very specific CPU they run on, and HotSpot is basically PGO but running all the time.

Granted, this is if and only if the JS or Java is carefully and properly written, which never happens...


The performance of Go and the maintainability of Rust ..., did you forget a /s there?


While many people have the impression the Rust type system slows them down while prototyping, I think it is a huge time saver when working on existing code, so I'm not sure why you think Rust isn't as maintainable.


Because there's a large productivity gain from being able to use one language for both the frontend and the backend, and server-side JS tooling is far more mature and usable than client-side Go or Rust.


Hiring real backend devs will give you a much better productivity gain than letting front-end JS devs LARP at it.


Because they simply can't convert all of their JS code into either Go or Rust for the foreseeable future? Even supposing they actually find those languages desirable compared to TypeScript.


> Why would someone use this over much more maintainable and performant backends in Go or Rust?

They were already using Node.js, or would have been. And that's a large part of the current ecosystem.


Yeah, it’s a silly question. It’s like going into a Python thread and asking “why didn’t you implement all of this in Go or Rust?” There are a thousand reasons, and it’s really not worth the time to unpack this very reductive question, which will only turn into a language flame war for no reason.


Because I can hire more JS/TS developers than Go or Rust (especially Rust) developers, AND I can pay them less... no $400k for a CRUD app here.


Indeed.

That's why outsourcing firms (both inside and outside the US) love JS backend.

The fragility of a JS codebase shows much later.


Go is not a low-level language, FFS. It is nowhere near Rust/C/C++/Zig’s niche. Just stop it.


I never claimed Go is low level. However, it is very performant for building web backends.



