
Kanye West?

Maybe the misdirection is a feature, not a bug.


i miss the old kanye

Intel has come back recently with a new series of "Lunar Lake" CPUs for laptops. They are actually very good. For now, Intel has regained the crown for Windows laptops.

Maybe Pat has lit the much-needed fire under them.


Worth noting,

> Future Intel generations of chips, including Panther Lake and Nova Lake, won’t have baked-on memory. “It’s not a good way to run the business, so it really is for us a one-off with Lunar Lake,” said Gelsinger on Intel’s Q3 2024 earnings call, as spotted by VideoCardz.[0]

[0]: https://www.theverge.com/2024/11/1/24285513/intel-ceo-lunar-...


“It’s not a good way to run the business, so it really is for us a one-off with Lunar Lake,”

When you prioritize yourself (the way to run the business) over delivering what customers want, you're finished. Some companies can get that wrong for a long time, but Intel has a competitor giving the customers much more of what they want. I want a great chip and honestly don't know, care, or give a fuck what's best for Intel.


> When you prioritize yourself

Unless “way to run the business” means “delivering what the customer wants.”


Customer being the OEMs.

I thought the OEMs liked the idea of being able to demand high profit margins on RAM upgrades at checkout, which is especially easy to justify when the RAM is on-package with the CPU. That way no one can claim the OEM was the one choosing to be anti-consumer by soldering the RAM to the motherboard, and they can just blame Intel.

OEMs like it when it's them buying the cheap RAM chips and getting the juicy profits from huge mark-ups, not so much when they have to split the pie with Intel. As long as Intel cannot offer integrated RAM at a price equivalent to external RAM chips, their customers (the OEMs) are not interested.

Intel would definitely try to directly profit from stratified pricing rather than letting the OEM keep that extra margin (competition from AMD permitting).

The only ugly (for Intel) detail is that they are fabbed by TSMC.

> Windows laptops

A dying platform and as relevant as VAX/VMS going forward.


You just made me nostalgic for amber screens, line printers, and all-nighters with fellow students.

Snapdragon X Plus/Elite is still faster and has better battery life. Lunar Lake does have a better GPU and of course better compatibility.

X Elite is faster, but not enough to offset the software incompatibility or dealing with the GPU absolutely sucking.

Unfortunately for Intel, X Elite was a bad CPU that has been fixed with Snapdragon 8 Elite's update. The core uses a tiny fraction of the power of X Elite (way less than the N3 node shrink alone would offer). The core also got a bigger frontend and a few other changes which seem to have improved IPC.

Qualcomm said they are leading in performance per area and I believe it is true. Lunar Lake's P-core is over 2x as large (2.2 mm² vs 4.5 mm²) and Zen 5 is nearly 2x as large too at 4.2 mm² (even Zen 5c is massively bigger at 3.1 mm²).

X Elite 2 will either be launching with 8 Elite's core or an even better variant and it'll be launching quite a while before Panther Lake.


LNL is a great paper launch, but I have yet to see a reasonably priced LNL laptop. Nowadays I can find 16GB Airs and X Elite laptops for 700-900 bucks, and once you get into $1,400 territory, just pay a bit more for an M4 MBP, which is a far superior machine.

And also, they compete in the same price bracket as Zen 5, which are more performant with not that much worse battery life.

LNL is too little too late.


An M4 MacBook Pro 14 with 32 GB of RAM and 1 TB of storage is $2,199... a Lunar Lake laptop with the same specs is $1,199. [0]

[0] https://www.bestbuy.com/site/asus-vivobook-s-14-14-oled-lapt...


Those are not nearly comparable specs.

With a build quality planets apart.

My point is it's not "just pay a bit more".

Yeah because it's an ASUS product. They make garbage.

Lunarrow Lake is a big L for Intel because it's all Made by TSMC. A big reason I buy Intel is because they're Made by Intel.

We will see what they come out with from 17th gen onwards, but for now Intel needs to fucking pay back their CHIPS money.


Are they being fabbed by TSMC in the US, or overseas?

TSMC doesn't have any cutting-edge fabs in the US yet.

TSMC Washington is making 160nm silicon [0], and TSMC Arizona is still under construction.

[0] https://www.tsmcwashington.com/en/foundry/technology.html


That page doesn't really say much about what's currently being produced at TSMC Arizona vs the parts still under construction.

There's 4-nm "engineering wafer" production happening at TSMC Arizona already, and apparently the yields are decent:

https://finance.yahoo.com/news/tsmc-arizona-chip-plant-yield...

No idea when/what/how/etc that'll translate to actual production.

---

Doing a bit more poking around the net, it looks like "first half 2025" is when actual production is pencilled in for TSMC Arizona. Hopefully that works out.


No disagreement here; the link I provided was specifically for TSMC Washington.

I'm not saying that TSMC is never going to build anything in the US, but rather that the current Lunar / Arrow Lake chips on the market are not being fabbed in the US because that capacity is simply not online yet.

2025H1 seems much more promising for TSMC Arizona compared to the mess that is Samsung's Taylor, TX plant (also nominally under construction).


Yeah, but can they run any modern OS well? The last N Intel laptops and desktops I've used were incapable of stably running Windows, macOS or Linux. (As in, the Windows and Apple ones couldn't run their preloaded operating systems well, and loading Linux didn't fix it.)

Very strange. Enough bad things can be said about Intel CPUs, but I have never had any doubts about their stability. Except for that one recent generation that could age to death in a couple of months (I didn't have any of these).

AMD is IME more finicky with RAM, chipset / UEFI / built-in peripheral controller quality and so on. Not prohibitively so, but it's more work to get an AMD build to run great.

No trouble with any AMD or Intel Thinkpad T models, Lenovo has taken care of that.


You can use Cloudflare Tunnel (https://www.cloudflare.com/products/tunnel/) to connect a system to your Cloudflare gateway without exposing it to the Internet.


Or Tailscale, which is a pretty cool piece of tech.


Tailscale is wireguard with advertising, a convenient UI, and a STUN/TURN server.


I'm aware they wrap OSS, but they made it very, very easy to adopt and maintain for a large chunk of potential users. This requires significant effort and should not be undervalued, in my opinion.


Exactly, which means setting up a VPS, generating certificates, setting up some type of monitoring to make sure the tunnel is working, etc. I agree that WireGuard is the best option if you have the time and knowledge, but for some dev people who just want to put up a webpage with a few users, Tailscale/Cloudflare is a much easier system to maintain (especially as it handles SSL for you as well - to some degree...).


I suffer from ADHD and the pomodoro timer part interests me. However, I am not sure if a pomodoro timer would help me. And $200 is an expensive gamble.


(This isn't necessarily directed at you; I just thought at the last minute to look for any top-level posts talking about this aspect. Many thanks for the piggyback! :pray:)

I've nothing but the utmost love for Flipper Devices Inc., having seen their approach to PR, and as a 1000% happy owner of a Flipper Zero that sits unused 99.99% of the time in my backpack (and boy, am I _extremely_ happy and relieved for the 0.01% when I need it). I would pull the trigger on this in a heartbeat if I had the play-cash.

For those specifically with productivity challenges, it seems a good opportunity to remind people that there are simple techniques they can try out (even if I find it hard to discipline myself to adhere to them), like a programmer's anecdote about one of John Carmack's methods[0].

I have personally adapted that technique by buying a $20 (hour-long) sand timer and turning it on its side when I need to pause. Unfortunately, my challenge now is overcoming my reluctance to use it more often, dreading the commitment I am signing myself up for as soon as I start the timer, and fearing failure. Alas, it appears that I allow that fear to control me (into shirking responsibility).

[0]: https://news.ycombinator.com/item?id=30801421 (unfortunately, original source no longer has this entry published, and archive.org is still offline, so all I have is this HN post to cite :)

[edit]: https://archive.ph/kcKYY


An old Android phone with no cell service can do the job and costs roughly $0 (plenty of people will happily give one or three away).


Good call on being cautious here. Unfortunately, there isn't anything between this product and the most basic, basic, basic timer for you to test out if the pomodoro method would work for you.


Get a free timer and find out!


> it's LLM sanitized/rewritten

LLM is the new spellchecker. Soon we will wonder why some people don't use it to sanity-check blog posts or any other writing.

And let's be honest, some writing would greatly benefit from a sanity check.


> Deploying this requires running 5 different open source servers (databases, proxies, etc)

That is the nature of the beast for most feature-rich products these days. The alternative is to pay for a cloud service and outsource the maintenance work.

I don't mind a complex installation of such a service, as long as it is properly containerized and I can install and run it with a single docker-compose command.


I have a different take on this. I think local-first is the future. This is where the app runs mostly within the user's browser with little to no help from the server. Apps like Figma, Linear and Superhuman use this model very successfully. And to some degree StackBlitz does as well.

If somewhat complex apps like Figma can run almost entirely within the user's browser, then I think the vast majority of apps out there can. The server side is mostly there to sync data between different instances of the app when the user uses it from different locations.

The tooling for this is in the works, but it is not yet mature, e.g. Electric-SQL. Once these libraries are mature, I think this space will take off.

Serverless is mostly there to make money for the Amazons and Azures of the world and will eventually go the way of CGI.

WASM could succeed as well. But mostly in the user's browser. Microsoft uses it today for C#/Blazor. But it isn't the correct approach, as .NET in the browser will likely never be as fast as JavaScript in the browser.


> Serverless is mostly there to make money for the Amazons and Azures of the world and will eventually go the way of CGI.

CGI empowers users and small sites. No one talks about it because you can't scale to a trillion ad impressions a second on it. Serverless functions add 10 feet to Bezos's yacht every time someone writes one.


I’m not sure I’d call Figma local first. If I’m offline or in a spotty wifi area, I can’t load my designs. And unless it’s recently changed, if you lose wifi and quit the browser after some edits, they won’t be saved.


That's intentional: they need you and your data tied to the server to make money. But there's no reason why it couldn't be local first (except the business model), since the bulk of execution is local.

Incidentally, I think that's why local-first didn't take off yet: it's difficult to monetize and it's almost impossible to monetize to the extent of server-based or server-less. If your application code is completely local, software producers are back to copy-protection schemes. If your data is completely local, you can migrate it to another app easily, which is good for the user but bad for the companies. It would be great to have more smaller companies embracing local-first instead of tech behemoths monopolizing resources, but I don't see an easy transition to that state of things.


>Incidentally, I think that's why local-first didn't take off yet

Local-first is what we had all throughout the '80s to the '10s. It's just that you can make a lot more from people who rent your software rather than buy it.


The sweet, sweet ARR. Investors love it, banks love it, employees should also love it since it makes their paychecks predictable.

It sucks for customers, though.


More and more reliably.

When people have a subscription that cannot be cancelled in any given month, it gives the company more financial security.

Previously, people would buy, e.g., the Creative Suite from Adobe and then work with that version for many, many years to come.


Previously, people would crack CS from Adobe and then work with that version for many, many years to come :)


Previously amateurs would crack Adobe software and then get a letter telling them they needed to pay or be sued when they went professional.

The cracked software was there to onramp teens into users. Adobe has burned this ramp, and now no one under 14 uses it anymore, which is quite the change from when I was 14.


True, but do all those people now pay $100 a month to Adobe? Hardly.


If they need what Adobe offers, yes.


A better example than Figma is Rive, made with Flutter.

It works well local-first and syncs with the cloud as needed. The Flutter space lends itself very well to making local-first apps that also play well in the cloud.


Hehehe, so the future is how we used to run applications from before the era of the web.


Except with runtime safety, no installation process, no pointless scare popups when trying to run an app directly downloaded from the internet, and trivial distribution without random app store publishing rules getting in the way.

In a way - yes - it's almost like it was before the internet, but mostly because other ways to distribute and run applications have become such a hassle, partly for security reasons, but mostly for gatekeeping reasons by the "platform owners".


Apps like these were incredibly common on Windows from the late 90s-early 2010s era. They could do all this (except for the sandboxing thing). You just downloaded a single .exe file, and it ran self-contained, with all its dependencies statically linked, and it would work on practically any system.

On macOS, the user-facing model is still that you download an application, drop it in the Applications folder, and it works.


> They could do all this (except for the sandboxing thing).

The sandbox is very, very important; it is the reason I mostly do not worry about clicking random links or pasting random URLs in a browser.

There are many apps that I would have liked to try if not for the security risk.


The download of a single EXE to keep had a nice side-effect though, that it made it trivial to store (most) apps (or their installers) for future use. Not so sure if in-browser apps can do that (yet?) except maybe by saving an entire virtual machine containing the web browser with the app installed.


> You just downloaded a single .exe file, and it ran self-contained, with all its dependencies statically linked, and it would work on practically any system.

Yeah, but try that today (and even by 2010 that wouldn't work anymore). Windows will show a scare popup with a very hard to find 'run anyway' button, unless your application download is above a certain 'reputation score' or is code-signed with an expensive EV certificate.

> On macOS, the user-facing model is still that you download an application, drop it in the Applications folder, and it works.

Not really, macOS will tell you that it cannot verify that the app doesn't do any harm and helpfully offer to move the application into the trash bin (unless the app is signed and notarized - for which you'll need an Apple developer account, and AFAIK even then there will be a 'mild' warning popup that the app has been downloaded from the internet, asking whether you want to run it anyway). Apple is definitely nudging developers towards the App Store, even on macOS.


Yes, and Windows in that time period had massive issues with security and culture. The culture of downloading and running EXEs from the internet quickly caught up with everyone, and not in a good way.

Also the "big idea" is that those applications aren't portable. Now that primary computers for most people are phones, portable applications are much more important.


Except worse, because everything has to run in a gigantic web browser even if it could be a small native app.


Except better, because it doesn't only work on Windows, and because I don't invite a dozen viruses into my computer.


Every native app has to run in a gigantic special OS when it could be a small web app running in a medium-sized browser.

Many, many ChromeOS (web-based consumer OS) laptops have 4GB of RAM. You do not want to try that with any normal OS.


That's because Windows is loaded with trash. You can easily run desktop Linux with 4 GB of RAM, and people have been doing it for decades.


But the browser is running in that gigantic special OS. It's not like the OS magically disappears.


I've already mentioned ChromeOS as one counter-example.

SerenityOS and the Ladybird browser forked, but until recently they had a lot of overlap.

LG's webOS is used on a range of devices; it's derived from the Palm Pre's webOS released in 2009.

The gigantic special OS is baggage which has already been cut loose numerous times. Yes, you can run some fine light Linux OSes in 4GB, but man, having done the desktop install for GNOME or KDE, they are not small at all, even if their runtime is OK. And most users will then go open a web browser anyway. It's unclear to me why people cling to the legacy native app world, why this other, not-connected mode of computing has such persistent adherence. The web ran a fine mobile OS in 2009; the Palm Pre rocked. It could today.


I for one don't want to use web apps. I want the speed, convenience, and availability of native apps. I want to use applications that work when the internet isn't working. I want to use applications that store my data locally. I want to use unglamorous applications that just work and use a native GUI toolkit instead of torturing a poor, overburdened document display engine into pretending it's a sane place for apps to run.

Not to mention, from the perspective of a developer, the relative simplicity of native apps. Why should I jump through all the hoops of distributed computing to, for example, edit a document in a WYSIWYG editor? This is something I could do comfortably on a Packard Bell in 1992.


The Web is portable; operating systems are not. Windows and Mac, being short-sighted, did this to themselves. Nobody can agree on anything, Microsoft is constantly deprecating UI frameworks, and it's not convenient at all to write local apps.

It's only JUST NOW that we have truly portable UI frameworks. And it's only because of the Web.


The only thing that defines portability is everyone adhering to the same standards.

You say that the web is portable, but really, only Google's vision for the web is relevant, seeing how they have the final say in how the standards are implemented and evolved.

So it's basically another walled garden, only much bigger and not constrained to the CPU architecture and OS kernel.

Chromium IS a platform. And indeed many applications that do work on Chrome don't work on Firefox. So we're pretty much back where we started, but the problem is harder to see because Chrome has such a monopoly over browsers that for most intents and purposes, and for most devs, it's the only platform that exists.

Everyone is good at multiplat when there's only one plat.


Qt has been around for decades. So has GTK. Bindings for whatever language you could possibly want. Runs on whatever OS you want. We've had "truly portable" UI frameworks since the late 90s. This has not been an issue for my entire adult life. 20 years ago, I was using desktop applications that ran on Mac OS X, Windows, and *nix with no modifications. They were written in Python, used GTK, and just worked.

Web apps are popular because 1) people don't like installing things anymore for some reason and 2) it's easier to justify a subscription pricing model.


Even those are not portable, because they don't target the #1 personal computer in use - smartphones.


These are all the views of a fossil. Maybe some truth, historically, but years out of date.

Want an offline app? Possible for a long time; build a local-first app. Don't want to build a client-server system? Fine, build an isolated webapp. There are so many great tools for webdev that get people going fast, that are incomparably quick for throwing something together. It's just bias and ignorance from an old, crusty, complainy world. This is a diseased view, reprehensibly small-minded & aggressively mean, and it's absurd given how much incredible effort has been poured into making HTML and CSS incredibly capable, competent, featureful, fast systems. For shame: "torturing a poor, overburdened document display engine into pretending it's a sane place for apps to run".

The web has a somewhat earned reputation for being overwhelmed by ads, which slow things down, but today it feels like most native mobile apps are 60MB+ and also have burdensome slow ads too.

There haven't really been any attempts to go all in on the web. It has been kind of a second-system half measure, for the most part, since the Pre's webOS gave up on mobile (and Firefox OS never really got a chance). Apps have had their day, and I'm fine with there being offerings for those with a predilection for prehistoric relics, but the web deserves a real full go, deserves a chance too, and the old salty grudges and mean spirits shouldn't obstruct the hopeful & the excited, who have pioneered some really great tech that has become the most popular connected, ubiquitous tech on the planet, but which is also still largely a second system and not the whole of the thing.

The web people are always hopeful & excited, & the native app people are always overbearingly negative nellies, old men yelling at the cloud. Yeah, there are some structural issues of power around the cloud today, but as Molly White's recent XOXO talk says, the web is still the most powerful system that all humanity shares, one we can use to enrich ourselves however we might dream, and I for one feel great excitement and energy; this is the only promise I see right now that shows open potential. (I would be overjoyed to see native apps show new promise, but they feel tired & their adherents disagreeable & backwards-looking.) https://www.youtube.com/watch?v=MTaeVVAvk-c


These are all the views of someone who is hopelessly naive. Maybe some truth, but ignorant of where we came from and how we got here. This is a diseased view, reprehensible, small-minded, and aggressively mean, and it's absurd given how much complexity has been poured into making computers do simple things in the most complex way possible.

My man, I am not a fossil. I came of age with web apps. But I am someone who has seen both sides. I have worked professionally both on desktop applications and as a full-stack web developer, and my informed takeaway is that web apps are insane. Web dev is a nightmarish tower of complexity that is antithetical to good engineering practice, and you should only do it if you are working in a problem space that is well and truly web-native.

I try to live by KISS, and nontrivial web apps are not simple. A couple of things to consider:

1. If it is possible to do the same task with a local application, why should I instead do that task with a web app that does everything in a distributed fashion? Unnecessary distributed computing is insane.

2. If it is possible to do the same task with a local application, and as a single application, not client-server, why should I accept the overhead of running it in a browser? Browsers are massive, complex, and resource hungry. Sure, I'll just run my application inside another complex application inside a complex OS. What's another layer? But actually, raw JS, HTML, and CSS are too slow to work with, so I'll add another layer and do it with React. But actually, React is also too slow to work with, so I'll add another layer and do it with Next.js. That's right, we've got frameworks inside of frameworks now. So that's OS -> GUI library -> browser -> framework -> framework framework -> application.

3. The world desperately needs to reduce its energy consumption to reduce the impact of climate change. If we can make more applications local and turn off a few servers, we should.

I am not an old man yelling at the cloud. I am a software engineer who cares deeply about efficient, reliable software, and I am begging, pleading for people to step back for a second and consider whether a simpler mode of application development is sufficient for their needs.


> Browsers are massive, complex, and resource hungry. Sure, I'll just run my application inside another complex application inside a complex OS. What's another layer? But actually, raw JS, HTML, and CSS are too slow to work with, so I'll add another layer and do it with React.

That's just your opinion, and you're overgeneralizing one framework as the only way.

A 2009 mobile phone did pretty damned awesome with the web. The web is quite fast if you use it well. Sites like GitHub and YouTube use web components & can be extremely fast & featureful.

Folks complain about layers of web tech, but what's available out of the box is incredible. And it's a strength, not a weakness, that there are many, many ways to do webdev, that we have good options & keep refining or making new attempts. The web keeps enduring, having strong fundamentals that allow iteration & exploration. The Extensible Web Manifesto is alive and well, and is the cornerstone supporting many different keystone styles of development. https://github.com/extensibleweb/manifesto

It's just your opinion, again and again, that the web is so bad, all without evidence. It's dirty, shitty hearsay.

Native OSes are massive, complex, and resource-hungry, and better replaced by the universal hypermedia. We should get rid of the extra layers of non-web that don't help, that are complex and bloated.


There is no other industry that is equally driven by fads and buzzwords. They try to hide the simple fact that the whole motivation behind SaaS preaching is greed, and bait users with an innovative "local-first" option.

It is actually kinda funny to read cries about "enshittification" and praise for more web-based bullshittery on the same site, although both are clearly connected and support each other. Good material for studying false consciousness among the dev proletariat.


I also support the development of client side applications, but I don't think they should necessarily be run in a browser or sandbox or be bought through an app store, and it's definitely not a new idea.


> Microsoft uses it today for C#/Blazor. But it isn't the correct approach, as .NET in the browser will likely never be as fast as JavaScript in the browser.

Might be true, but both will be more than fast enough. We develop Blazor WASM. When it comes to performance, .NET is not the issue.


Yep. And when WasmGC is stable & widely adopted, apps built using Blazor will probably end up smaller than their equivalent Rust+WASM counterparts, since .NET apps won't need to ship an allocator.


I thought the problem was the hefty upfront price to pay for loading the runtime.


There's some truth to this, but there's a new way of rendering components on the server and pushing that HTML directly to the browser first. The components render but aren't fully interactive until the WASM comes in. It can make it feel snappy if it doesn't take too long to load the WASM.
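For readers who haven't seen it, here is a minimal sketch of what that looks like in a .NET 8-style Blazor Web App server Program.cs. The `App` root component and the `MyApp.Components` namespace are assumptions taken from the standard project template, not something the comment above specifies:

    // Minimal sketch, assuming the .NET 8 "Blazor Web App" template.
    // App and the MyApp.Components namespace come from that template (assumptions here).
    using MyApp.Components;

    var builder = WebApplication.CreateBuilder(args);

    // Razor components with WebAssembly interactivity; prerendering is on by default,
    // so the server pushes static HTML to the browser first.
    builder.Services.AddRazorComponents()
        .AddInteractiveWebAssemblyComponents();

    var app = builder.Build();

    app.UseStaticFiles();
    app.UseAntiforgery();

    // Components render as HTML immediately and become interactive on the client
    // once the WASM payload has downloaded.
    app.MapRazorComponents<App>()
        .AddInteractiveWebAssemblyRenderMode();

    app.Run();

(There is also an "auto" render mode that starts interactivity on the server and switches to WebAssembly later; the sketch above sticks to the WASM-only case.)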


At the end of the day, all you are doing is syncing state with the server. In the future, you'll have a local state and a server state and the only server component is a sync Wasm binary hehe.

Still, you'll be coding your front-end with Wasm/Rust, so get in on the Rust train :)


Rust frontend dev is not going to become mainstream, no matter what.


Metaphorically, a Rust train does not sound enticing.


CGI is alive and well. It’s still the easiest way to build small applications for browsers.


Nobody talks about it because people who use it just use it and get on with their life. It’s painfully easy to develop and host.

However, it's likely that generations who weren't making websites in the days of Matt's Script Archive don't even know about CGI, and they end up with massive, complex frameworks which go out of style and usability for doing simple tasks.

I've got CGI scripts that are over 20 years old which run on modern servers and browsers just as they did during the dot-com boom.


It truly depends on the application. If you have an LOB database-centered application, that's pretty much impossible to make "local-first".

Figma and others work because they're mostly client-side applications. But I couldn't, for example, do that with a supply chain application. Or a business monitoring application. Or a ticketing system.


I have a different take on this:

It depends on what you're actually building.

For the business applications I build, SSR (without any JS in the stack, just Golang or Rust or Zig) is the future.

It saves resources, which in turn saves money, is way more reliable (again: money), and is less complex (again: money) than syncing state all the time and having frontend state diverge from the actual (backend) state.


I have a different take on this:

Business applications don't care about client-side resource utilisation. That resource has already been allocated and spent, and it's not like their users can decide to walk away because their app takes an extra 250ms to render.

Client-side compute is the real money saver. This means CSR/SPA/PWA/client-side state and things like WASM DuckDB and perspective over anything long-lived or computationally expensive on the backend.


I definitely view the browser as an app delivery system... one of the benefits being you don't have to install and thus largely avoid dependency hell.

Recently I wrote an .e57 file uploader for quato.xyz - choose a local file, parse its binary headers and embedded XML, decide if it has embedded JPG panoramas in it, pull some out to give a preview .. and later convert them and upload to 'the cloud'.

Why do that? If you just want a panorama web tour, you only need 1GB of a typically 50GB dataset .. point clouds are large, JPGs less so!

I was kind of surprised that was doable in browser, tbh.

We save annotations and 3D linework as JSON to a backend DB .. but I am looking for an append-only JSON archive format on cloud storage, which I think would be a simpler solution, especially as we have some people self-hosting .. then the data will all be on their intranet or our big-name cloud provider... they will just download and run the "app" in the browser :]


> Figma can [...] then I think the vast majority of apps out there can

This doesn't follow. If Figma has the best of the best developers, then most businesses might not be able to write apps that are just as complex.

C++ is a good example of a language that requires high programming skills to be usable at all. This is one of the reasons PHP became popular.


I worked on something in this space[1] using a heavily modified Chrome browser years ago, but I think I was too early, and I bet something along these lines (probably simpler) will take off when the time is right.

Unfortunately I got a bit of a burnout from working on it for some years, but I confess I have a more optimized and more to-the-point version of this. Also, having to work on Chrome for this, with all its complexity, is a bit too much.

So even though it is a lot of work, nowadays I think it is better to start from scratch and implement the features slowly.

1 - https://github.com/mumba-org/mumba


> I think local-first is the future. This is where the app runs mostly within the user's browser with little to no help from the server. Apps like Figma, Linear and Superhuman use this model very successfully.

The problem is: Figma and Linear are not local-first in the way local-first proponents explain it. Both of them require a centralized server, run by those companies, for synchronization. This is not what people mean when they talk about "local-first" being the future; they are talking about what Martin Kleppmann defined it as, which is no specialized synchronization software required.


I work on an iOS app like this right now; it predates a lot of these newer prebuilt solutions. There are some really nice aspects of building features this way: when it works well, you can ignore networking code entirely. There are some tradeoffs, though, and a big one has been debugging and monitoring, as well as migrations. There is also some level of end-user education needed, because the apps don't always work the way users expect. The industry the app serves is one in which people are working in the field, doing data entry on a tablet or phone with patchy connections.


The frontend space is moving away from client-side state, not toward it.


The frontend space is always moving in every direction at the same time. This is known as Schrödinger's frontend: depending on when you look at it and what intentions you have, you may think you're looking at the backend.


I think you'll find the real long-term movement is to client-side, not away, and that's because it is both a faster and simpler model if done right.


Some applications are inherently hard to make local-first. Social media and Internet forums come to mind. Heavily collaborative applications maybe too.


I feel like social media is one of the main things folks want to be local-first. Own your own data, be able to browse/post while offline, and then it all syncs to the big caches in the sky on reconnect...


But how do you do that without essentially downloading the whole social network to your local machine? Are other people's comments, quotes, likes, and moderation signals something that should stay on the server, or should they be synced to the client for offline use? In the first case, you can't really use the social network without connecting to a server. The second case is a privacy and resources nightmare (privacy, because you can hold posts and comments from users who have deleted their data or banned you, and you can see who follows whom, etc.; resources, because you need to hold the whole social graph in your local client).


Usually folks looking for this sort of social network are also looking for a more intimate social experience, so we're not necessarily talking about syncing the whole Twitter feed firehose.

I don't think it's unreasonable from a resources perspective to sync the posts/actions of mutual followers, and from a privacy standpoint it's not really any worse than your friend screenshotting a text message from you.


Sure, but they're a tiny fraction of the mainstream users and you can already have that sort of experience with blogging and microblogging. Relevant social networks as the public knows them are hard to develop local-first. Even the humble forum where strangers meet to discuss is really hard to do that way. If it needs centralized moderation, or a relevance system via karma / votes, it's hard.


(unless you want another paradigm of social networking in which you don't have likes, public follows, replies, etc., which probably won't fly because it has a much worse UX compared to established social networks)


> WASM could succeed as well.

I would guess WASM is a big building block of the future of apps you imagine. Figma is a good example.


> In gamedev there is simple rule: don't try to do any of that.

I am not in gamedev, but I frequently have to develop middleware that takes in user-entered data and formats it in a way that will import into a third-party system without errors. And that sometimes means changing the case of strings.

In my experience as a developer, this is a very, very common requirement.

Luckily I am not forced to use a low-level language for any of my work. In C# I can simply do this: "hello world".ToUpper();


If you're putting data into a third-party system, you might want `ToUpperInvariant`, not `ToUpper`. (Just checking that you know the difference, because most people don't!)


The problem is that such third-party requirements are usually wrong.

Two decades ago some developer probably went "Yeah, obviously all names start with capital letters!", not realizing that there are in fact plenty of names which start with a lowercase letter. So they added an input validation test which checks for capitals, which meant everyone feeding that system had to format their data. A whole ecosystem grew around the format of the output of that system, and now you're suddenly rewriting the system and you run into weird and plain wrong capitalization requirements for no technical reason whatsoever.

Alternatively, the same but start with punch cards which predate ASCII and don't distinguish between uppercase and lowercase letters.

> In C# I can simply do this: "hello world".ToUpper()

... which does not work.

Take a look at the German word "straße" (street), for example. Until very recently the "ß" character did not have an uppercase variant, so a ToUpper would convert it to "STRASSE". This is a lossy operation, as the reverse isn't true: the lowercase variant of "KONGRESSSTRASSE" (congress street) is not "kongreßstraße" - it's supposed to be "Kongressstraße".

It can get even worse: the phrase "in Maßen" (in moderate amounts) naively has the uppercase variant "IN MASSEN" - but that means "in huge amounts"! In that case it is probably better to stick to "IN MASZEN".

And then there's Turkish, where the uppercase variant of the letter "i" is of course "İ" rather than "I" - note the dot.

So no, you cannot "simply" use ToUpper() / ToLower(). They might work well enough on basic ASCII for languages like English, but they have a habit of making a mess out of everything else. You're supposed to use CultureInfo.TextInfo.ToUpper() and explicitly specify what locale the text is in so that it can use the right converter. Which is of course essentially impossible in general-purpose text fields.

In practice that means your options are a) giving up on the concept of uppercase/lowercase conversion and just passing it as-is, or b) accepting that you are inevitably going to be silently corrupting your data.
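To make the locale point concrete, here is a minimal C# sketch contrasting invariant and culture-sensitive casing. The Turkish mapping of "i" to "İ" follows Unicode's tr-TR rules; the result for "ß" is deliberately not asserted, since it depends on the runtime's casing tables, which is exactly the trap described above:

    using System;
    using System.Globalization;

    class CaseDemo
    {
        static void Main()
        {
            string word = "istanbul";

            // Invariant casing: culture-independent, suited to identifiers and protocols.
            Console.WriteLine(word.ToUpperInvariant());   // ISTANBUL

            // Culture-sensitive casing: Turkish maps 'i' to the dotted capital 'İ' (U+0130).
            var turkish = CultureInfo.GetCultureInfo("tr-TR");
            Console.WriteLine(word.ToUpper(turkish));     // İSTANBUL

            // German "straße": what 'ß' becomes depends on the casing tables in use,
            // so don't rely on round-tripping through ToUpper/ToLower.
            Console.WriteLine("straße".ToUpper(CultureInfo.GetCultureInfo("de-DE")));
        }
    }

Either way, if the data feeds a third-party system, the culture has to be chosen explicitly (or the invariant culture used) rather than whatever the machine happens to be configured with.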


> So no, you cannot "simply" use ToUpper() / ToLower(). They might work well enough on basic ASCII for languages like English, but they have a habit of making a mess out of everything else. You're supposed to use CultureInfo.TextInfo.ToUpper() and explicitly specify what locale the text is in so that it can use the right converter. Which is of course essentially impossible in general-purpose text fields.

Have you ever read the documentation? https://learn.microsoft.com/en-us/dotnet/fundamentals/runtim...


Yes. Now try applying it to something like this very HN comment section, which is mixing words belonging to different cultures inside a single comment - and in some cases even inside the same word.

Sure, you can now do case conversion for a specific culture, but which one?


It's a lossy operation, but it does work. By this logic, JPEG and MPEG don't work either. But we're watching them videos daily.

Yes, we can simply ToUpper(). We just can't ToUpper().ToLower(), but that's useless because we have the original string if we need it, and it's fine if we don't need it.


The point is that what ToUpper does depends on locale AND Unicode version. Thus, for many applications it only appears to work, until it fails spectacularly in production.


> In C# I can simply do this: "hello world".ToUpper();

Hmm, still relevant: https://www.moserware.com/2008/02/does-your-code-pass-turkey...


> 2008

This is completely irrelevant because culture-sensitive case conversion relies on ICU/NLS.


But at least a programmer should be aware that they need to call it (whatever API is used).


Note that the correct way to do that in C# would be to pass an instance of CultureInfo.


> The pull of oligarchs is more real in Russia than anywhere else in the world.

It may have been true at some point, but Putin put an end to it. He killed oligarchs when they fell out of line. There are some ex-oligarchs living outside of Russia who are no longer super rich after Putin took away their wealth.


And then there are those who try very hard to get on the list. I know at least one famous one.

