Depends on your definition of "best".
Programming for 128K takes massive skill. Having multiple gigabytes allows more people to participate, and lets them prioritize attributes other than low memory use, typically development speed and cost. So you get, say, a better UX with better error handling, for less money.
Which one is "best"? That depends on whether you are rich or poor in skills and money. Either can be "best", depending on what you are optimizing for.
Up to now, launches have been highly optimized for weight. Optimizing for other priorities (like cost) will change the rules of the game.
To program simple things, maybe not. But if you are trying to do something complex then having a severe memory limit adds one more area of complexity to the process.
If you can remove areas of complexity then you are doing three things:
1. allowing mediocre people to accomplish the same goals that used to require skilled programmers, largely by throwing hardware at the problem (this seems to be the source of your complaint)
2. letting skilled programmers accomplish the same task in less time (again by throwing hardware at the problem)
3. making even more complicated things possible for skilled programmers
I certainly see the value in your complaint about things like the Slack app using Electron to make the UI easy to build (at the cost of my hardware's performance), but that is a business optimizing for its own costs rather than for quality. Please assign the blame where it belongs.
Concur. I've done some programming for a platform that gave me way less than 128k to work with, and it's not rocket science. In some ways it's a hell of a lot easier than, say, web development.
It just forces you to be aware of certain things that are tremendously wasteful of memory and cycles, but which don't matter on "modern" hardware.
And by "don't matter" I mean "do matter a whole bunch, but your particular contribution to the problem is likely to be small enough that it won't help much if you spend the time to do it right, so why bother"
Under that assumption, modern software would have a better UX and be more reliable than the software written "back then". And from my experience it's the exact opposite - modern software, like Slack, is not only slower and less functional than old counterparts, like IRC; it's also much less reliable.
Both. "User has quit" doesn't seem to happen in Slack, because Slack doesn't report stuff like that. But the problem goes deeper - Slack often fails at very basic functionality, like failing to deliver messages until much later, or duplicating them. It's helloworldware with a lot of marketing money spent on pretending to be a technically sound solution.
Because it turns out to be a colossally chatty and stateful protocol in an era of stateless mobile devices. Plus it's not HTTP... the rule at the network layer these days is basically "80 and 443 are open; every other port is good-luck-to-you."
OP's assertion that it was more reliable assumes a very static network. Modern protocols (Slack included) are far, far more reliable on a highly unreliable transport layer.
Not true. Or rather: in theory what you're saying would be true, but in Slack's case it simply doesn't hold. Slack handles unreliable transport worse than IRC did - IRC never dropped or duplicated messages, and you could reconnect to an IRC client running under a tmux session from anywhere and it would do the right thing.
> you could reconnect to an IRC client under a tmux session from anywhere
That would actually be a pretty decent solution if it could be automated on a mobile device: run the IRC client in the cloud and then have tmux auto-reestablish every time it gets its connection dropped while the mobile device moves around.
I'd have to see it in action to be confident that a continuously-dropping-and-reestablishing tmux won't drop incoming messages or double-send though.
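For what it's worth, the "auto-reestablish" part is doable today with a plain shell loop. A minimal sketch, assuming a cloud host reachable as "irc-box", a tmux session named "irc", and irssi as the client (all three names are illustrative, not from the thread):

```shell
#!/bin/sh
# One attach attempt: re-attach to the existing tmux session, or create
# it (with an IRC client inside) if it doesn't exist yet.
attach_once() {
    # -t allocates a terminal; ServerAliveInterval makes ssh notice a
    # dead link within ~15s instead of hanging indefinitely.
    ssh -o ServerAliveInterval=15 -t irc-box \
        'tmux attach -t irc || tmux new -s irc irssi'
}

# Re-attach forever. Messages keep accumulating server-side while the
# device roams, so the gaps between attachments lose nothing; the client
# in the cloud never disconnects from the IRC network at all.
reconnect_loop() {
    while true; do
        attach_once
        sleep 5   # brief backoff so a flapping link doesn't busy-loop
    done
}
```

Since tmux only mirrors a terminal that stays connected on the server side, the drop-and-reattach cycle can't itself drop or double-send messages; the worst case is missing scrollback on a small terminal, which the client's logs cover.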