
bsky-social-cqrjv-tsfgi bsky-social-xufsa-d6dvu bsky-social-cz7w7-ypiji


All used as well :(


bsky-social-gpoqz-2s6fo

bsky-social-k2vcy-bbcwf

bsky-social-4z3ir-rndd7

bsky-social-ug2hp-de3br

EDIT: all have been claimed


Got some more for all y'all!

bsky-social-te3n3-yx3ib

bsky-social-3rwal-hj636

bsky-social-mvjpq-mzusy

bsky-social-jtuoi-ktnlj


As a backend developer, I found it difficult to use SvelteKit to develop a SPA. Specifically, it is hard to tell from the docs how to handle authentication, and the router is not exposed much. I hoped that 2.0 would add something in this area. The Next.js docs have a section on server- and client-based authentication, which is useful.
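
For what it's worth, the closest I got was a client-side guard in a layout load function. A sketch (the /api/session endpoint here is a hypothetical one on your own backend):

    // src/routes/(protected)/+layout.js -- client-side auth guard (sketch)
    import { redirect } from '@sveltejs/kit';

    export async function load({ fetch }) {
      // ask the backend whether the session is valid (hypothetical endpoint)
      const res = await fetch('/api/session');
      if (!res.ok) throw redirect(307, '/login');
      return { user: await res.json() };
    }

It works, but none of it comes from the docs, which is the point.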


More and more frontend frameworks now somewhat mandate a server part. I know pretty much all of them (SvelteKit, Next.js) say you can do a static export, but the docs are more focused on use with a server. IMO it appears that this is being pushed by the cloud vendors sponsoring those projects.

If you want to build a plain old SPA, the new React docs have pretty much no info, and the SvelteKit docs discourage it. I have not looked much into Vue, but it appears Vue does better here.

Qwik, while it has a static export, looks like it loses all the good parts if you don't use a server.

I can certainly see the SEO and other good things that come with SSR, but not everyone needs them.


Yep, not a fan either. A big shift is where "routing" happens: it has traditionally been done within the SPA, but now it's an external concern and part of the build process.

The various hybrid/MPA/SPA innovations are great, but not everyone needs them. It's sad to see the traditional SPA approach discouraged so much.

These meta-frameworks crossing into the server side will raise a lot of architectural questions for teams. We have React SPAs served by ASP.NET backends, with Vite for local development. We'd rather not adopt another dependency like Next or Remix, and we're definitely not going to adopt any Node on the backend.

The friction we're experiencing with these new static exports is that we'd have to configure the backends with the front-end framework's routing system in order to serve the split-out static files, e.g. ASP.NET needs to know how to serve Next.js's folder/file/path structure. That's as opposed to today, where we just serve an index.html file regardless of whether it's Angular, React, Svelte, etc.
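
Concretely, everything our backends do for an SPA today is roughly this (a minimal ASP.NET Core sketch):

    // Program.cs -- serve the SPA build, fall back to index.html (sketch)
    var builder = WebApplication.CreateBuilder(args);
    var app = builder.Build();

    app.UseDefaultFiles();               // serve index.html at "/"
    app.UseStaticFiles();                // serve the built JS/CSS assets
    app.MapFallbackToFile("index.html"); // unknown routes -> index.html; the SPA router takes over

    app.Run();

Swapping that fallback for a framework-specific folder/file mapping is the part we'd rather avoid.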


A static build in SvelteKit is as simple as changing one line in your config:

`import adapter from "@sveltejs/adapter-auto"`

becomes

`import adapter from "@sveltejs/adapter-static"`
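
For context, the whole config stays tiny. A minimal sketch (the fallback option is what enables SPA-style routing for non-prerendered pages):

    // svelte.config.js -- minimal static-export setup (sketch)
    import adapter from '@sveltejs/adapter-static';

    /** @type {import('@sveltejs/kit').Config} */
    const config = {
      kit: {
        // serve this page for any route that wasn't prerendered,
        // letting the client-side router take over
        adapter: adapter({ fallback: 'index.html' })
      }
    };

    export default config;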

Also, SvelteKit was designed as a serverless framework optimized for the edge long before Vercel hired Rich. Doing SSR right on the edge can be a hard problem to solve, so getting it for free "automagically" in SvelteKit is a huge value-add! Static site generation, on the other hand, has long been a solved problem.

That said, I agree that a lot of sites are better off just being static. Nonetheless, being able to change one line in my SvelteKit config to deploy my static site onto Cloudflare Pages, Netlify, or my own Node server, thanks to SvelteKit's adapters, is pretty awesome in my experience!


I did not say SvelteKit can't do it. In fact, having different options (and for free) is a great thing. I was raising the point that the messaging [1] is that it is not recommended most of the time, and it does not say when it is good to have an SPA.

It also mentions the scenario where JavaScript is disabled. I am not too sure these frameworks help there even with SSR. Maybe it means the user can still see a pre-rendered page, but with almost no interaction possible?

Essentially, I was saying there is a push towards server-based frameworks.

[1] https://kit.svelte.dev/docs/single-page-apps


> IMO it appears that this is being pushed by the cloud vendors sponsoring those projects.

Having a server component is just the natural step after a decade of bloated and unnecessary SPAs.

That said, I do think Vercel has probably influenced the designs of Next and SvelteKit so that they fit better with its platform.


> Having a server component is just the natural step after a decade of bloated and unnecessary SPAs.

Not sure I buy that argument. While I am not totally against the idea of server-based frameworks, what is disappointing is that the frameworks appear to be planting the above message.

SPAs provided us a good boundary, and that is blurred by all these server-based frameworks. You now have a server for the UI and a server for the backend. While I agree some SPAs are heavy, I think we have reasonable solutions for most. And these server-based frameworks still need all the work that is done in an SPA; it's just that routing and some aspects of bundling are taken care of.


There's no conspiracy. SPAs should never have become the default way to build web front ends.

Making a good SPA takes a lot of work and almost nobody has time to do it properly. Even Google with all its resources is incapable of making a good SPA for the Google Cloud console.

This article hits the nail on the head:

https://nolanlawson.com/2022/06/27/spas-theory-versus-practi...

"The best SPA is better than the best MPA. The average SPA is worse than the average MPA"

I'm not saying SPAs do not have their use cases. They totally do (Gmail, Twitter, Spotify, etc.), and I use them daily.


> More and more frontend frameworks now somewhat mandate a server part. I know pretty much all of them (SvelteKit, Next.js) say you can do a static export, but the docs are more focused on use with a server.

Sometimes servers just make sense architecturally. You wanna get a bunch of the same stuff to many people? Servers are a good and battle-tested way of doing so, with very well known trade-offs and costs, minus the weird gotchas that you only realize after the fact.


From the blog post: Qwik is a next-gen web application framework (and meta-framework) designed for instant application startup, no matter how big the app is. Qwik uses a modern and innovative approach that “streams” chunks of JavaScript to the end user — and it does it automatically!
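
As far as I understand the mechanics, every `$`-suffixed boundary is compiled into a separately loadable chunk. A rough sketch:

    // counter.jsx -- rough sketch of Qwik's lazy boundaries
    import { component$, useSignal } from '@builder.io/qwik';

    export const Counter = component$(() => {
      const count = useSignal(0);
      // the $-wrapped handler becomes its own chunk, fetched
      // only when the button is actually clicked
      return <button onClick$={() => count.value++}>{count.value}</button>;
    });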


Is there a way to pin a version of a tool with this? For me, the biggest problem with Nix is that version pinning is not easy; in fact, it is complex.


Pinning is first-class with the new flakes mechanism. When you create a flake, a corresponding flake.lock is created, similar to lock files in other languages. There's even a dedicated command now to upgrade the dependencies declared in a flake: `nix flake update`.


Yes, I think I get that. But it is not easy to pin different packages to different versions. For example, if I want Go 1.18 and Node 12, I need to find the nixpkgs commits corresponding to those versions and add them. I mean, it is possible, but not as easy as it should be.
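
What I end up doing looks something like this: a flake with one nixpkgs input per pinned version (the commit hashes are placeholders you still have to hunt down yourself):

    {
      inputs = {
        nixpkgs-go.url = "github:NixOS/nixpkgs/<commit-with-go-1.18>";
        nixpkgs-node.url = "github:NixOS/nixpkgs/<commit-with-nodejs-12>";
      };

      outputs = { self, nixpkgs-go, nixpkgs-node }:
        let
          system = "x86_64-linux";
          pkgsGo = import nixpkgs-go { inherit system; };
          pkgsNode = import nixpkgs-node { inherit system; };
        in {
          # one dev shell mixing packages from the two pinned revisions
          devShells.${system}.default = pkgsGo.mkShell {
            packages = [ pkgsGo.go_1_18 pkgsNode.nodejs-12_x ];
          };
        };
    }

Workable, but hardly ergonomic.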


There's often a bunch of different versions in nixpkgs. For example, postgres currently has 11 through 15 available (https://github.com/NixOS/nixpkgs/blob/e7f345ca81f4f5513c4e73...). Nodejs has 14, 16, 18, and 19 (https://github.com/NixOS/nixpkgs/blob/e7f345ca81f4f5513c4e73...).


Finding a way to install a specific version of a package (not a git commit or git tag, an actual version) is the problem.

https://github.com/NixOS/nixpkgs/issues/93327

Nix before flakes used channels, each with a flat global namespace. Flakes are intended to sit inside the git repo, so that's even less discoverable.

Nix solves practically every technical problem, but has terrible UX and documentation.


Hello! When you add packages with `devbox add`, we automatically pin your devbox.json to a specific nixpkgs commit. This ensures that any developer who uses your devbox.json will get the same packages that you do.

You can read more about how it works here:

https://www.jetpack.io/devbox/docs/guides/pinning_packages/
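
If I'm reading the schema right, a pinned devbox.json ends up looking roughly like this (the commit value below is a placeholder; see the guide above for the authoritative format):

    {
      "packages": ["go", "nodejs"],
      "nixpkgs": {
        "commit": "<nixpkgs commit sha>"
      }
    }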


I guess that only pins tools by nixpkgs revision? Or does it have the capability to pin an exact version of a tool, like say Go 1.18, Postgres 12, Node 16, etc.?


No, because older software versions are often dropped from nixpkgs rather quickly, especially if they are no longer maintained upstream, to reduce the required maintenance work.


Isn't this similar to SolidJS?


I almost thought my internet or DNS had some problem until I opened HN.


I don't know much about CPU design. Can this lead to M1-like performance?


The goal of this kind of design is M1-like energy efficiency, not performance (I don't know whether the Alder Lake P-cores can outperform an M1 P-core). The M1 has a similar architecture with performance and efficiency cores, but so far I haven't heard of it causing problems for software developers. All versions of macOS that support the M1 also have a scheduler that can move threads between E- and P-cores nicely.


My understanding is that Alder Lake P-cores have the highest single-thread performance currently available, but M1 Firestorm cores are close behind, and so is AMD Zen 3. Performance varies by benchmark, and each of them gets some wins depending on the application. See here for SPEC2006 single-thread totals: https://www.anandtech.com/bench/CPU-2020/2797


I thought the secret sauce to M1 was largely that it was a new architecture, without the decades of x86 backwards-compatibility baggage, with an OS and software capable of running natively on it?


Both, sort of. The secret sauce for M1 battery life is a combination of OS thread scheduling and efficiency cores. The secret sauce for M1 performance is having an architecture that doesn't need to be thermally throttled as much as other platforms.

It's not that Apple invented chips that are remarkably faster than Intel's or NVIDIA's; it's that Apple built an integrated experience on this new architecture, with split performance and efficiency cores, and did so while maintaining backwards compatibility with macOS x86 apps in a way that's indistinguishable from M1-native apps. (So they kept the "baggage", as it were, and it still performs fine.)

I expect Microsoft's Surface line will catch up eventually, but the disconnect between Qualcomm/Intel as chip makers and Microsoft as OS vendor will slow down the transition for a good 4-6 years, easy, and we're 2 years into this transition to ultra-energy-efficient architectures with less thermal throttling. (Full disclosure: I own a few Apple shares.)


The secret sauce is most likely TSMC 5nm.


No, that’s cope. There isn’t any secret sauce. It’s just well designed on every level.

You won't match its power/performance by changing one thing and suddenly getting there. Intel's process is quite close to TSMC's already.


It's quite close to the process AMD is on, but not the 5nm that Apple uses.

A lot of the M1's advantage is in density. That's why they were theoretically able to make such an absolutely enormous processor and keep it cool.

Making that theory work in practice came from decades of low-power semiconductor experience. They're extremely good chip designers.


The Alder Lake node is about equivalent to TSMC's N7, maybe 10% better. N5, the node Apple is using for M1, is about 1.8x the density of N7 with 40% lower power usage.

AMD and Intel are on similar nodes at the moment, but Apple has a very notable advantage. It's not "secret sauce" so much as paying many billions for exclusive access.


> that it was a new architecture

This is the biggest part of the "sauce", but having E-cores and extremely close memory surely helps: faster memory access means less time the cores wait for data, and less time spent doing nothing while burning watts.


Is there any reading material that you can share about the M1 scheduler?


I think the article is saying it won't lead to much of anything except headaches.


That's the impression I got as well. That made me discount Agner's arguments, since they apply equally to Apple's M1, and the M1 is great in both raw performance and power efficiency.


Difference: Apple controls software + hardware and can thus immediately utilize hardware efficiencies, like shipping a new architecture with a simultaneous OS release. Intel doesn't control Windows.


This CPU is already faster than the M1 in single-core, and in multi-core depending on the SKU.


Compared to the M1, Intel is still restricted by the existing memory and I/O interfaces and by x86 compatibility, which means they can’t do certain things the M1 did, like the integrated unified memory (for massive memory bandwidth) and the relaxed memory model. And as the siblings explain, the P/E design is more about power efficiency than about performance, although the increased power efficiency does allow for more cores at the high end.


The M1 has the exact same memory architecture as an 11th generation Intel Core series mobile CPU.


Not really: the M1 has a relaxed memory model, while the 11th-gen Intel Core series has a strict one. Generally, a relaxed memory model allows better bandwidth and performance out of the same memory system.

Not to mention there are the M1 Pro, Max, and Ultra if you need more memory bandwidth.


Not in its current form.


From the docs, it looks like you have a dedicated instance of the service for each user; is that correct? Or is it a kind of multi-tenant system where data might be colocated? I am guessing it is a multi-tenant system, but if you spin up an instance per user, how do you do that?


I am working on a side project solving the exact same thing. The project is not open source yet, and I am not working on it very actively. I created it mainly to learn Go and some web technologies and other infra tools that I normally don't use at work. Good to know that there is a market for it; not that I have plans to commercialise it, but if I open source it, people might be interested in contributing.

