I fondly remember the days of using Borland Delphi, and then Embarcadero RAD Studio, to build fully-functional GUI applications in a matter of hours. As a teenager, no less.
You had great libraries like Jedi/JVCL and various data binding layers that would allow you to assemble highly complex UIs.
Of course, today's technology is so much better in so many regards (code quality, security, proper backend/frontend separation, UX, to name just a few), but I wonder if it'll ever be as easy as it felt back then.
There's a lot that we take for granted nowadays that either did not exist back then or came only as expensive commercial products:
- Ubiquitous transport layer encryption.
- Secure memory-safe programming languages.
- Structured data exchange formats that do not suck.
- RPC that isn't SOAP, or DCOM.
- Distributed consistent databases.
- Proper sandboxing for applications.
- Resolution-independent proportional UI.
- A vibrant open source ecosystem used across the industry.
... and much more. Today's foundations are a lot more solid than they were back then, but I still miss the rapid prototyping. We even have an ok-ish Linux desktop!
Even the most powerful React tree views are much worse than what we had 15 years ago, and are much harder to use than dragging a component onto a form.
> Given how modern OSes still insist on using C-derived languages at their bottom layer, instead of the Pascal and Modula languages from the 90s.
On the other hand, the bottom layer is exposed to very little untrusted input nowadays, and the upper layers mostly use either memory-safe programming languages or C++ with a ton of effective exploit mitigations (we owe much of that to PaX and grsecurity).
The days of Java, ActiveX and Flash browser exploits are long gone. Windows browser exploits have become so rare and complex that they're worth $1M+.
Still not ideal, but a lot better than it ever was.
> How those RPC alternatives keep re-inventing the validation schemas, performance optimization and code generation tooling of SOAP, CORBA and DCOM.
SOAP and DCOM were big balls of mud with myriad compatibility and security issues. CORBA was a designed-by-committee disaster.
It appears that we're finally close to getting this right with Protobufs, gRPC and Cuelang.
> Sandboxing is mostly a mobile OS thing, with devs clamouring against it on their desktops.
Desktop apps have mostly been replaced by web apps, which have excellent sandboxing and fine-grained permissions. The only native desktop apps I run nowadays are highly specialized (like an IDE). The days of having to download and run a zoo of random third party applications are mostly gone. ChromeOS exists.
> Many give up on open source, make use of SaaS, or switch to commercial because people leech their work and they need to pay bills.
This is true for end-user applications because yes, bills need to be paid. Exceptions apply - well-funded open source projects like Krita and Blender have managed not only to keep up with their commercial competition, but to surpass it.
The rise of SaaS had the side effect of VC and corporate money funding a ton of open source libraries and infrastructure tooling, released under permissive licenses.
> Linux desktop is 1% of world market and keeps being plagued with audio, 3D drivers and lack of hardware acceleration for video decoding in browsers.
Last time I checked, it was ~3.5% if you include ChromeOS.
As long as you're careful to buy compatible hardware, audio, 3D, and hardware-accelerated video decoding all work just fine nowadays. I've been using a Linux desktop for more than ten years and I haven't had to mess with any hardware issues for the past four.
With ChromeOS and Crostini, you get a macOS-like experience where everything just works with zero fiddling and compatibility issues.
Mitigations that are Linux only and hardly effective, hence why Android is now going to require hardware pointer validation on Android 11.
Protobufs and gRPC are only a thing if one is in Google land, and even then the tooling is quite primitive versus what those other protocols offered.
Desktop apps are quite alive, especially in enterprise and in domains like life sciences laboratory automation, medical monitoring devices, and plenty of other areas where a browser just doesn't cut it.
Krita and Blender are lucky to have good sponsors, a drop in the ocean of FOSS.
ChromeOS hardly counts as Linux, especially since not all devices are Crostini capable, and yes, the Steam hardware survey puts it at roughly 1%.
I gave up on hunting for Linux hardware ages ago; Apple and Microsoft OSes and related hardware keep me happy when doing graphics programming.
> Mitigations that are Linux only and hardly effective, hence why Android is now going to require hardware pointer validation on Android 11.
What do you mean by Linux only? Modern mitigations are very effective at increasing cost of exploitation and the underlying concepts are platform-independent - Windows, macOS and iOS follow similar approaches with similar results. If anything, Linux (both the kernel, and various userlands) has been trailing behind.
Pointer authentication is the logical next step, just as CPU features like SMEP and SMAP superseded software-only implementations like PaX KERNEXEC/UDEREF. And yes, even hardware pointer authentication can be bypassed if someone tries hard enough.
Without moving to a memory safe language, memory corruption exploits will always be possible (just more and more expensive), and I'd love to use an OS kernel written in Rust. It has been exhaustively proven that even the world's best engineering teams are incapable of consistently writing safe C/C++ code - any line of C code exposed to untrusted input is a liability.
But meanwhile, I'm happy that the industry is finally investing in mitigations that kill some classes of bugs, and raise the bar for others, rather than only hunting for and fixing individual bugs.
> Protobufs and gRPC are only a thing if one is in Google land, and even then the tooling is quite primitive versus what those other protocols offered.
The simplicity, even at the cost of expressivity, is one of the best parts of Protobuf and gRPC. The unbounded complexity and feature set of approaches like SOAP or CORBA are the reason they failed.
gRPC has seen a lot of adoption outside of the Google ecosystem.
We probably both agree that REST and JSON were a mistake.
> Desktop apps are quite alive, especially in enterprise and in domains like life sciences laboratory automation, medical monitoring devices, and plenty of other areas where a browser just doesn't cut it.
Those are highly niche use cases, and the UI for many of them will eventually move to the web platform, too. Even my dentist's office, which used to run on an antique Win32 application, is now using a next-generation web application, built by the same vendor, that supersedes the Windows application. It's hosted on a local server, not the cloud.
Very few applications actually need low level hardware access, and sandboxing won't be a concern for these since they're typically the only thing running on the machine that's worth protecting to begin with.
> ChromeOS hardly counts as Linux, especially since not all devices are Crostini capable, and yes, the Steam hardware survey puts it at roughly 1%.
Why wouldn't it count as Linux? It runs on the Linux kernel and with a slightly exotic, but fully open source userland. You can modify it and run it on your device (at the expense of boot integrity protection). ChromeOS shares a lot of code with other Linux distributions (like ModemManager, CUPS, the new Chrome renderer, a lot of work on upstreaming kernel driver improvements...).
Any recent ChromeOS device supports Crostini. If it doesn't, it's for reasons like "the CPU has no VT-x support", "this is a 32-bit ARM CPU", or "the CPU is a potato".
Steam hardware survey data is not representative.
> I gave up on hunting for Linux hardware ages ago; Apple and Microsoft OSes and related hardware keep me happy when doing graphics programming.
Agreed - graphics programming, especially the high performance kind, is best done on Windows. The tooling is just so much better. But for the kind of work I do - security research and systems engineering - GNU/Linux is perfect and hardware support is great.
> As you see, we have to agree to disagree.
Thank you for this pleasant and interesting discussion, even if we ended up disagreeing :)
lol on the "death of desktop apps" - browsers are just the modern version of VT220 terminals. They solve none of the problems of remote vs. local computing, performance issues around abstraction vs. running closer to the hardware natively, etc.
Everything old is new again.
Yes, you can shim stuff and run some things "offline", but there are many tasks for which I would never want to relinquish total autonomy from the cloud.
It isn't going to happen. Both approaches have their pros and cons - insinuating one is going to supplant the other is folly, despite people continuing to do it for multiple decades now.
> It isn't going to happen. Both approaches have their pros and cons - insinuating one is going to supplant the other is folly, despite people continuing to do it for multiple decades now.
The "web app vs. native" line is already blurry now with Electron apps, and will get blurrier in the future as browser capabilities and performance improves. The browser is becoming an OS on its own - Chromium has already surpassed the Linux kernel in terms of complexity and lines of code.
I hate JavaScript and the surrounding ecosystem, but there's no denying that this appears to be the future of a majority of desktop applications (if it isn't already).
Desktop GUI development definitely peaked in the late 1990s / early 2000s in terms of lightweight but capable tooling and good design tools. Today it's a damn mess, though I see signs of some movement toward improvement driven by the realization that mobile is not going to replace desktop for high-end, developer, and "pro" use cases.
Just to share that it's available and doable--I have been getting back into the old Delphi-style work recently using Lazarus IDE / FreePascal Compiler / Object Pascal language.
The same approach still works well. I've heard that many Delphi apps will require very little tweaking to get working. Personally all my projects are selfish productivity- and solo gaming-related apps that will never see public release, but I like it that way for now. And it's fun.
Anyway, by the end of the day you could be one of those programmers with a 25 year gap in their release notes, to the amazement of onlookers!
I wonder though, what made Delphi so simple in terms of UI development? Is there anything that could not in theory also be applied to other languages, e.g. Java?
One of the features of the GUI components that we used extensively was two-way data binding. It is extremely powerful to be able to connect various input fields to a data model and have the results updated automatically upon changes.
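Not Delphi, but here's a rough Python sketch of the idea, in case it helps illustrate what those data-aware components did for you (names like BoundField and FakeInput are invented for illustration; Delphi's TDataSource and data-aware controls did far more, including database cursors):

    # Minimal sketch of two-way data binding: a model field and a fake input
    # "widget" subscribe to each other, so an edit on either side propagates.
    class BoundField:
        """A model field that notifies subscribers whenever its value changes."""
        def __init__(self, value=None):
            self._value = value
            self._subscribers = []

        def bind(self, callback):
            self._subscribers.append(callback)
            callback(self._value)  # push the current value immediately

        @property
        def value(self):
            return self._value

        @value.setter
        def value(self, new_value):
            if new_value != self._value:
                self._value = new_value
                for cb in self._subscribers:
                    cb(new_value)

    class FakeInput:
        """Stand-in for a UI input widget; edits flow back into the model."""
        def __init__(self, field):
            self.field = field
            field.bind(lambda v: setattr(self, "text", v))  # model -> widget

        def user_types(self, text):
            self.text = text
            self.field.value = text  # widget -> model

    name = BoundField("Alice")
    box = FakeInput(name)
    box.user_types("Bob")
    print(name.value)  # "Bob" - the model followed the widget
    name.value = "Carol"
    print(box.text)    # "Carol" - the widget followed the model

The part the old GUI builders got right was that none of this plumbing was visible: you dropped a data source and a few data-aware controls on a form and wired them up in the object inspector, not in code.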
The only expectations are my own... Not too different from any other project in that way. But I will say that I think it's a good idea to keep a loose & playful grip on any given programming language... They're all good or even great in this way or that, so I don't mind having a lot of favorites.
I had a crush on a girl named Lan back when I was in third grade. This was also the same year (1997-98) when my parents got a new computer: an HP Vectra running Windows 95, which I learned HTML on towards the end of third grade. I remember discovering the acronym LAN.
Tailscale is very much a successor to Hamachi. Hamachi was already "the 90s as a world of mesh networks on top of today’s internet." I've run into some client-side issues with Tailscale on Windows, but when it works, it works great.
BUT, this blog post is the worst kind of SEO. If you're going to blog with lots of keywords to build up Google cred, communicate useful things, not just weird nostalgia. I didn't find the story particularly compelling.
Also, and probably more importantly, the message of 'as long as the perimeter is secure, security within the LAN is just a nuisance' is kind of odd. I don't know if that part of the 90s is the part worth reviving. Endpoints should not be trusted by default. (That isn't necessarily Tailscale's job, but don't market/sell the product as "it's super easy to connect EVERYTHING, security be damned." That is irresponsible to people who want to build connectivity but don't know what they are getting into. If this is an Apple-like 'it just works', don't underplay the dangers.)
(Author here) FWIW, I wrote this blog post a long time ago, when I was trying to understand what the product we were building is and why I like it. I sat on it for months because I assumed it was almost entirely unrelatable. Eventually I realized the internet is full of unrelatable things and so I may as well post it. SEO certainly wasn't on my mind.
As for how it relates to our product, I agree this post doesn't go into the "how" we recreate this programming environment. We certainly don't ever take a "security be damned" attitude, it's most of what I think about.
The goal is to build a secure environment, to achieve the security we had in disconnected environments (LANs), while keeping the world connected. If the environment is secure, you can spend less time wiring security through your code.
We are working on something fancier here, but it's tricky to get right so we haven't released it yet.
To be clear: if you're going to share a google account, you should create a new empty google account to do it, and should assume everyone in your family is an admin.
Please let me know if you found something on our site that isn't clear about that.
Yeah, I should have written something like "source code available for self-hosting".
The important point for me isn't really if this is true Open Source or not, but that I usually want (even paid is OK) access to source code so that I can fix things myself, if I have to.
This is the same license as CockroachDB and a bunch of other projects. It's used in cases where a company wants to avoid their product being put behind someone else's paywall ("SaaSification"). The AGPL isn't quite good enough and has issues.
Sorry you didn't like the article :) We posted another one this past weekend called "How Tailscale works" that this crowd seems to enjoy more, at least based on upvotes: https://news.ycombinator.com/item?id=22644357
And yes, we do think of ourselves as a "modern Hamachi." Hamachi was great! (I'm from Tailscale)
I thought that article last week was much more interesting.
/Tech support side note: I have a Windows laptop, and for the life of me the client will not connect to the service. How do I troubleshoot this thing? It's almost too simple. Are there logs? https://i.imgur.com/QRLkQl1.png
/And is there any way to test the Magic DNS? Will there be better ways to set up relay nodes, and then SEE the design of the network? Can I set up a relay node on Windows, and only for specific IP addresses instead of entire subnets?
/On the topic of "security within the LAN is just a nuisance", I do appreciate this being more of a Unix-philosophy single tool than a kitchen sink. I do think there would be a market for a companion piece of cloud software that acts as a firewall orchestrator between devices, surfaces logs, allows you to block or allow specific connections between specific devices, and visualizes the activity of the mesh. It would be beneficial to be able to super quickly design an "all the field iPads can get to the server, but not each other" type of network, while at the same time giving PCs more LAN-like access, or being able to designate "edge/guests/clients" that can only connect to "servers." The two types of networks shouldn't need to be mutually exclusive.
/On the topic of other markets and use cases, and maybe this isn't what Tailscale ever wants to be, I think there is a huge opportunity for a pure software-client Wireguard SD-WAN a la Velocloud. If I have two ISPs plugged into my network, let me define BOTH gateways as acceptable places for traffic outflow, and build simultaneous tunnels out both. Maybe load balance traffic between them, watch for packet loss, congestion, and jitter, and correct accordingly. That extra redundancy of being able to lose a link and keep the path would be extraordinary.
(I'm from Tailscale) A few people have reported a failure for the Windows service to start, usually related to registry lockdowns. You can try running "start /w tailscale-ipn.exe /server" in an "admin mode command prompt" window and paste the messages in an email to support@tailscale and we can try to decode them. You can also compile tailscaled from https://github.com/tailscale/tailscale with a Windows target and it'll run on Windows, which makes it easier to explore any bugs.
Magic DNS isn't available for testing yet, but coming soon!
Our network diagnostic logs are based on https://apenwarr.ca/log/20190216 and we do intend to surface those eventually to end users or at least network admins. Just need to work out the right API and security model for that.
As for access controls between devices, Tailscale already supports that but our docs are currently too vague. If you're using Tailscale, visit https://login.tailscale.com/admin/acls to explore. Security policies you edit in there are immediately enforced by all nodes right away, so you can give some users access only to central servers, etc.
While we're reminiscing about what made Hamachi great: am I missing something, or is it not possible to connect different accounts together with the 'Solo' tier package of Tailscale?
Also, I couldn't find an 'Exit' button anywhere. Is uninstalling really the only official way to close the application?
We started using SoftEther recently to connect a couple of warehouses over fiber to our existing subnet without having to redo any of our network architecture. I’ve always felt it was the new Hamachi. Tailscale to me feels like the new DirectConnect.
I meant it mostly as a comparison between IPsec and Wireguard - that Tailscale is the next generation. I much doubt Hamachi will transition anytime soon.
DN42, anyone? It's a large private network built on top of VPNs. In addition to connecting hackerspaces, you can have your own AS, act as an LIR, create entities in the RIPE database, and experiment with BGP.
> Since dn42 is very similar to the Internet, it can be used as a hands-on testing ground for new ideas, or simply to learn real networking stuff that you probably can't do on the Internet (BGP multihoming, transit). The biggest advantage when compared to the Internet: if you break something in the network, you won't have any big network operator yelling angrily at you.
I remember those days when availability of cheap PCs enabled average people to write SW to run their businesses. I'm not very nostalgic about those days though. What else I remember: expensive SW on stacks of floppies, lots of incompatible network HW and protocols, crashy early versions of Windows and MacOS. I did love OS/2 when it came out, though, maybe I was the only one...
I think this is the golden age right now, you can still experience coding on small systems like microcontrollers or you can go up to any kind of OS and HW at practically no cost.
PCs sure didn't seem particularly cheap "back then"; at least, not for my "back then". $2300 in 1993 money for a 486/dx33 is over $4100 in today's money. I just bought a brand new, loaded MacBook Air for $1600.
IMHO early cheap PCs were machines like the Commodore 64, Apple IIe, and the like.
You could do business on those machines, with some limitations. We had a word processor on the Commodore 64 that through some black magic even managed to have a spellchecker. The big downside was that our 9 pin dot matrix printer couldn't print descenders so characters like g j and y were pushed up a couple of points.
I'm sure there are people on this board who can tell you about industrial equipment controlled via a dongle hanging off of the cartridge port on a C=64.
One of our early automated tools on the inkjet cartridge assembly line was controlled by a hp calculator that had an expansion slot. I think the app that opened and closed valves and actuators was written in BASIC. The workmanship of those things was at least as good as some modern PLCs.
These were home computers, not PCs. Basically glorified desk calculators, though the multimedia capabilities were impressive for the time and some office work could be done on them. The industrial control jobs you mention would be well within Arduino territory today.
And even those were not "cheap" at all in modern terms; the cheapest would cost around $500 or so in today's money.
I think he's got a point in that there was a simplicity in the past that was useful, but the counterpoint is that we now have way more "coding power". The price of that is a learning curve: you need to know a bunch of things before you can make a modern app.
I can see why someone new to coding might find it a little daunting. Here are some things you need to know a bit about before you can write most things:
- Networking. What's a subnet mask? How does routing work? What's special about 10/8 and 192.168/16? (There's a short sketch of that last question right after this list.)
- Security models. What do the letters AAA mean? How do you use OAuth? You're told not to roll your own, so here you have to learn someone else's lib. And you can't avoid it completely; every OS requires some kind of account system with privileges/restrictions given to different users.
- Models for drawing stuff. How does a web page render? What about an Android, iOS, or Windows form? Annoyingly, I did all these things at some point, and they're not quite the same. The vocabulary varies, for instance.
- How do you use libs in whatever language you're looking at? Again, there's a whole load and they're all similar but different.
- How does git work? You're not gonna avoid getting something from github or similar. So now you need to have an idea of how git works, and the use model represented by github/gitlab/bitbucket etc.
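(The sketch promised in the networking bullet - my own example, not from the thread: 10/8 and 192.168/16, plus 172.16/12, are the RFC 1918 private ranges that are never routed on the public internet, and a lot of "is this on my LAN?" logic boils down to a membership check like this.)

    # Check whether an IPv4 address falls in one of the RFC 1918 private ranges.
    import ipaddress

    PRIVATE_RANGES = [
        ipaddress.ip_network("10.0.0.0/8"),
        ipaddress.ip_network("172.16.0.0/12"),
        ipaddress.ip_network("192.168.0.0/16"),
    ]

    def is_rfc1918(address: str) -> bool:
        """True if the address is in a private range (never publicly routed)."""
        ip = ipaddress.ip_address(address)
        return any(ip in net for net in PRIVATE_RANGES)

    print(is_rfc1918("192.168.1.20"))  # True  - typical home LAN address
    print(is_rfc1918("8.8.8.8"))       # False - publicly routable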
But once you know most of these things, you can get a lot farther than you could on some 1990s small business network. You can shove stuff in the cloud for scaling or resilience, you can keep dependencies clean with various container-type stuff, you can put media on CDN, and so on.
>But once you know most of these things, you can get a lot farther than you could on some 1990s small business network. You can shove stuff in the cloud for scaling or resilience, you can keep dependencies clean with various container-type stuff, you can put media on CDN, and so on.
If those advantages are not important to you, you still have to pay the cost of learning all that complexity. Partly it's the complexity, and partly it's the rate of change. Nothing you write stays working on the web for long. You have to constantly keep jumping through the hoops in this crazy web circus. It's "standards based" (i.e. controlled by Google in some way or other), but backwards compatibility and keeping things working are way more important to me personally.
I was so stoked to see IPX mentioned in the first paragraph...
And disappointed to not see it mentioned in any of the comments.
The native support in many DOS applications and the ease of configuration were a dream. As a kid who set up LAN games, I'd say he nailed it with the header "The childhood magic".
Agreed - IPX & NetWare were way ahead of everyone else until years after MS released Active Directory. I actually started out working with IBM, using Token Ring and installing Madge NIC cards.
These days, with NFV/SDN/etc., the "network" is more a spaghetti of virtual circuits. Try drawing a diagram of east-west pod traffic using VMware NSX/vSANs in a hyperconverged ACI overlay environment and you will run out of ink!
I don't understand the point this piece is trying to make.
> Could a part-time programmer like my father write small-business software today? Could he make it as safe and productive as our LAN was? Maybe. If he was canny, and stuck to old-fashioned desktops of the 90s and physically isolated the machines from the internet. But there is no chance you could get the records onto a modern phone safely (or even legally under HIPPA) with the hours my father gave the project.
Sure.
> We can have the LAN-like experience of the 90’s back again, and we can add the best parts of the 21st century internet. A safe small space of people we trust, where we can program away from the prying eyes of the multi-billion-person internet. Where the outright villainous will be kept at bay by good identity services and good crypto.
> The broader concept of virtualizing networks has existed forever: the Virtual Private Network. New protocols make VPNs better than before, Wireguard is pioneering easy and efficient tunneling between peers. Marry the VPN to identity, and make it work anywhere, and you can have a virtual 90s-style LAN made up of all your 21st century devices. Let the internet be the dumb pipe, let your endpoints determine who they will talk to based on the person at the other end.
OK, but we have VPNs now. The problem with keeping your medical records on a modern computer isn't that your modern computer isn't connected to your other computers. The problem is that it is! It's connected to your other computers just like it's connected to everybody else's computers. Your simulated LAN is not safe in the way the old LAN was until -- as the article already noted -- you disconnect it from the internet so it's an actual LAN.
So Tailscale has diagnosed the problem and recognized the solution. And they propose... to do something different. It's not clear what. How does their proposed solution address what they say is the problem?
-----
Side note:
> (or even legally under HIPPA)
The relevant regulation is actually named HIPAA. I've always pronounced this as "hee-pah", which would make sense given the spelling. But that is not normal usage. People in medical fields universally seem to pronounce it as if it were spelled "hippa". It's interesting to see that circle back the other way.
There's something strange about the flow "I don't know how to pronounce this. I'll pronounce it as if it were a different word." -> "This spelling is inappropriate. I'll spell it like it's pronounced." You end up with a spelling and a pronunciation that are both completely unrelated to the word you're notionally using. Eventually people will start to ask "how does HIPPA stand for Health Insurance Portability and Accountability Act?".
It's like when people ask "what does wifi stand for?" and I have to tell them "high fidelity".
"hee-pah" ... Is English your native language? Or did you perhaps study another language extensively?
I ask because "hee" isn't a likely pronunciation for "hi" for me unless it's Japanese that I'm looking at. Certain other languages, such as Spanish, I think might also get the same pronunciation.
But in English, it's pretty rare. Pronunciation as in "hit" or "high" are a lot more likely.
And once you've decided that it's "hi" as in "hit", the rest of the pronunciation just follows as "hip-uh". And yes, trying to spell from pronunciation comes back as "hippa".
"Hee-pah" wouldn't come out as HIPAA, though. It'd be HEPA or something, but that's actually already used and is pronounced "heppa" as in "head".
In short, there's no real good way to pronounce a lot of acronyms that doesn't end up with some kind of pronunciation or spelling problem at some point. And people generally just go with what sprung to mind first.
> Certain other languages, such as Spanish, I think might also get the same pronunciation.
Most languages, if they use Roman letters, use "i" to indicate the FLEECE vowel. (That is, the vowel used in the English word "fleece".) English is the odd man out for vowel pronunciation, due to the https://en.wikipedia.org/wiki/Great_Vowel_Shift .
Spanish follows the ordinary pronunciation of "i". It doesn't for "h"; the letter "h" in Spanish is silent. So you wouldn't get "hee" from "hi", you'd get "ee". (Compare, say, the Spanish word hija, "daughter", best approximated by the English syllables "ee-hah".)
> Is English your native language? Or did you perhaps study another language extensively?
Yes, English is my native language. However, the word "hipaa" is so obviously impossible as an English word that I consider it normal to use the pronunciation conventions for foreign words.
Yeah, I thought of that bit about Spanish afterwards, but left it anyhow.
I can't speak for the rest of the world, but most Americans don't consider pronunciations in other languages, possibly because they generally don't know a second language anyhow. And since HIPAA (almost wrote HIPPA!) is a US thing, I don't see foreign pronunciation entering into it much.
I wish more people would learn a second language. It can be really enriching. The key, IMO, is to pick a language that you'll enjoy using somehow. For me, it was Japanese, because I enjoyed a lot of Japanese media as a teen, and continue to do so as an adult.
I'm seeing a lot more foreign content on Netflix lately, and Brazilian Portuguese seems like it'd be a good choice for a second language, if you're into TV shows.
I enjoyed the post and like the idea of Tailscale[^1], but agreed. As long as there's some path to the Internet, you can't trust your LAN. I mean, that's kind of the whole point of modern network security: even on a trusted network, you can't trust the endpoints.
Consider a modernized example of the business app in the story: let's say this is an internal-facing web app with no security features implemented. With Tailscale you can implement network security controls that only allow access from specific endpoints (tied to user or service identities), but the point is that you don't trust those endpoints. Of course they could be phished/compromised, but they could also easily use a boundary-crossing attack like CSRF/SSRF to attack your insecure app. So no matter what, you need to implement standard web app security features.
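To make "standard web app security features" concrete, here's a rough sketch (framework-agnostic, stdlib only, names invented for illustration) of the kind of CSRF defense you still need even when the network path itself is restricted:

    # Per-session CSRF token: issued once, embedded in every form, and checked
    # on every state-changing request - even ones arriving over a trusted VPN.
    import hmac
    import secrets

    def issue_csrf_token(session):
        """Generate a random token once per session; render it into each form."""
        token = session.get("csrf_token")
        if token is None:
            token = secrets.token_urlsafe(32)
            session["csrf_token"] = token
        return token

    def check_csrf_token(session, submitted_token):
        """Reject the request unless the submitted token matches the session's."""
        expected = session.get("csrf_token")
        if not expected or not submitted_token:
            return False
        # Constant-time comparison avoids leaking information via timing.
        return hmac.compare_digest(expected, submitted_token)

    # Usage sketch: `session` is just a dict standing in for real session storage.
    session = {}
    form_token = issue_csrf_token(session)          # rendered into the HTML form
    assert check_csrf_token(session, form_token)    # legitimate POST passes
    assert not check_csrf_token(session, "forged")  # cross-site forgery fails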
(I'm from Tailscale) You've nailed the problem statement, but we are trying to find a better solution.
The current "zero trust networking" trend is actually not about distrusting the endpoints; it's about distrusting the network, and securing the endpoints. Tailscale lets you distrust the network and allow only trusted endpoints, which is a step.
You're right that more steps are needed before we can also prevent CSRF/SSRF attacks on internal-private services, but that ought to be done at a higher level, not in every single app. The latter is just too error prone, as we see over and over.
Curious what the higher-level solution to CSRF/SSRF is? I’m struggling to think how it could be prevented except at the browser level (for CSRF). And for SSRF if there’s a legitimate need for a network path between two services but one has an SSRF issue, how can you stop that?
We used the internet here in California in the 90s from our LAN; at some point a firewall was installed to keep out the growing mischief. I'm also unclear on the problem definition and solution, except that web apps are too hard to build.
I wonder how the Anvil product (a potential solution) is doing?
I feel like the entire premise of this article is flawed. "Learning how to store passwords or add OAuth2 to your toy web site is not fun." Maybe it's not fun for him, but I find both of those things fascinating and enjoyed learning them.
And with all of his talk about how it wouldn't be possible for someone to crap out medical records software like his father did: GOOD! Let's be real, that software was almost certainly crap and full of bugs. It's not the type of thing that an amateur should be writing on his lunch break.
It sounds to me like this guy wants to be able to make important software without actually learning to program for real.
> "Learning how to store passwords or add OAuth2 to your toy web site is not fun."
I think it's more that correctly implementing security is hard, and if you have no interest in exposing your application to the greater internet you shouldn't be shouldered with the burden of both gaining and maintaining an understanding of the state-of-the-art in web security.
> Let's be real, that software was almost certainly crap and full of bugs. It's not the type of thing that an amateur should be writing on his lunch break.
Nope; computer systems were simpler, and so they were easier to program correctly. There is tons of high quality hobbyist software from the 80s and 90s.
> It sounds to me like this guy wants to be able to make important software without actually learning to program for real.
The author of that blog is one of the most skilled programmers I have known.
Part of it was probably that older computers had a set form factor. You could recognize a C64 or an Amiga when you walked in the door. These days they all look the same.
On a different note I find myself pondering the similarities between the spread of malware now that everything is online 24/7, and the spread of illnesses in an era of cheap and fast air travel.