IDEs we had 30 years ago (blogsystem5.substack.com)
683 points by titaniumtown 12 months ago | 603 comments



IMO the real loss in IDE tech is the speed that visual basic 6 gave you to make desktop guis.

Web and mobile development (and I have done all three) are significantly slower than what you could do with VB6. It's really strange that we haven't gotten back to it.


VB6 was fantastic. The language was terrible, but the IDE was the best way to make GUIs that ever existed. The Windows 98 era was excellent for UX as well, with every element on the screen having an accelerator key, consistent menu guidelines, consistent visuals, keyboard shortcuts, etc.

It was just brilliant.


VB6 worked because the environment was simple. GUIs are simple when you don't need to do any styling, don't require any special modifications, and most importantly don't need responsiveness. All of that, and you need a special environment and operating system to run the GUI. Web front-ends are a completely different game in that regard.


Yeah. Back in the day, every application was expected to use the same common set of controls - which were well written by the OS vendor, well tested, well documented and well understood by users. Every button on Windows 95 looked and behaved the same way. Every application had application menus that worked the same way, in the same font, and they all responded to the same keyboard shortcuts.

These days every application reinvents its own controls for everything. For example, in a web browser the tab bar and address bars are totally custom controls. Electron apps like vscode take this to the extreme - I don't think vscode or spotify use any native controls in the entire UI.

I blame the web in part. It was never designed as a system to build application UIs, and as such the provided UI primitives are primitive and rubbish. Developers got used to building our own custom elements for everything - which we build fresh for every client and style in alignment with the brand. UIs built on the web are inconsistent - so our users never get a chance to learn a common set of primitives.

And for some reason, OS vendors have totally dropped the ball on this stuff. Windows has several competing "official" UI libraries. Every library has a different look and feel, and all are in various stages of decomposition. Microsoft's own software teams seem as lost as everyone else when it comes to navigating the morass of options - if Windows 11 and MS Teams are anything to go by. macOS isn't quite as bad, but it's still a bit of a mess. Every year I expect the people building Xcode to know how to build and debug software on macOS. And then Xcode crashes for the 3rd time in a week. Xcode achieves the impossible: making the JavaScript ecosystem seem sane and attractive.

I'd love a return to the halcyon days of vb6, where UIs were simple and consistent. Where there was a human interface guidelines document that UI developers were expected to read. I want users and developers alike to know how the platform works and what the rules and conventions are. F it. How hard can that really be to build?


> Every application had application menus that worked the same way, in the same font, and they all responded to the same keyboard shortcuts.

My experience has been that most (almost all) of them _only_ worked with a single font, a single font size, and a specific window size. If you changed any of these, they became unusable.

About consistency: except for those that didn't, not even MS themselves were consistent. http://hallofshame.gp.co.at/shame.html

The 90s were also the time of programs like Kai's Power Tools https://mprove.de/script/99/kai/ and Winamp.

I'm sorry, but "interface consistency" is not something that comes to my mind when I think of 90s and early 2000s Windows programs (nor was Linux any better). IRIX with 4DWM was quite consistent at that time.


In the "hall of shame" you linked, they list applications which misuse radio buttons. Uh oh, call the police. Sure, there were a few inconsistencies. But honestly, up until the ribbon in Office, it was incredibly homogeneous compared to today.

In comparison, modern windows doesn't even hide its inconsistencies. Try right clicking on the desktop in windows 11. You get a dropdown with large item spacing and rounded corners. But then if you click "Show more options", the dropdown is replaced with a different dropdown with subtly different menu items, small spacing and sharp corners.

They aren't even pretending any more.

This isn't Kai's photo goo we're talking about here. This is core windows.

Unless you opted out, basically every application in the 90s and early 2000s was built using the core platform's UI library. There were a few exceptions, but the 90s were a golden age for platform consistency compared to today.

Now it's hard to find any 2 applications on Windows which use the same UI style. Firefox and Explorer? Nope - the maximise and close buttons have a different style. Spotify? Nah, that's some custom webview. Visual Studio? Nope, that's using an old Windows library. WhatsApp desktop? Qt. IntelliJ? Some Java thing. And so it goes. It's an absolute zoo.


You can always make web form based applications without style sheets.

They're relatively consistent, simple, and fast/easy to create.

That's not what people want; they want all the flexibility and features of, say, Gmail or Maps, along with some communication and flexibility. Not to mention running on every OS under the sun, and being able to use accessibility and screen-reader tools on all those platforms.


> You can always make web form based applications without style sheets.

Uh, no you can't. Web forms aren't rich enough to build most desktop applications. You can't make vscode, gmail, slack or spotify using web forms. They're just a bad set of primitives for applications. (In the web's defense, it was never designed as an application platform).

Yet - we had some version of all of those applications in the 90s, on every OS at the time. And (mostly) using the platform's built in UI libraries so the look and feel was consistent and delightful.


Web forms are absolutely enough to roughly match the typical VB6 application. Especially the "easy" path mentioned. Not every app, but definitely most.


Interfaces were WAY more consistent than they are nowadays, even taking into account some applications slightly altering their themes. One application using a listbox instead of a tree (like one of the Microsoft examples on the site you linked) does not make a UI inconsistent (even as the wrong control, the listbox still looks and works exactly like the other listboxes in the system); it only makes it an odd/bad choice. And from a quick browse, most of the issues mentioned there are things like that (including, amusingly, a ribbon-like interface in Windows 3.1 in the tabs section[0] :-P).

IMO the fact that someone cared enough to make a site about what are largely nitpicks like this shows exactly how much these stood out in the otherwise consistent landscape of UIs in the 90s.

Nowadays such a site would include 99% of all applications released. It could even be automated: just track all new .exe files on GitHub, MajorGeeks, Softpedia, etc. and add them automatically to a hall-of-shame list; chances are that even without human supervision the overwhelming majority would be correct :-P.

[0] http://hallofshame.gp.co.at/tabs.html


I used Windows 98SE for eight years. I never saw any issue changing window sizes.

I can't speak to font sizes since IIRC that was an option but I left it default.


For all my apps that had controls pinned to a specific coordinate, I made sure to make the windows fixed-size.

I taught courses on VB from versions 3 to 6 and this was always something we went over.


I think you're forgetting how much more dense and complex even a basic web UI is in controls. Every HN comment in the default story view has something like up to 10 clickable controls - a pair of vote buttons, username, timestamp, a bunch of nav links, a hide control, a reply button. HN, the famously minimalist, oldskool website. The web made new UI, and even the back-in-the-day version of that UI is way more complicated than the back-in-the-day desktop app UI you're thinking of.


Dense?! I've never seen a web UI remotely close to what I'd call dense.

The HN UI would be very easy to recreate in something like Delphi, I'd call it trivial.

We use Delphi at work. Our main application is a regular Windows desktop application. I recently had to add a new module to it; it required three new input windows/screens, each with 100-150 input fields, a few dozen buttons, and several grids.

I made the UI parts in a day. A few more hours the day after and all the UI logic and database loading/saving was done, and I had a functional UI for all three windows/screens.

Many of our customers have HTML-based ERP or CRM systems that we integrate with. I've never seen any of them that are close to dense UIs, and most of the time I see the user having to click through multiple sub-screens just to do a relatively simple task like looking up a couple of values. With a denser UI that could all have been on a single screen and saved the user a ton of time.

But I'd love to be proven wrong. Any examples out there of actually dense, in the good sense, web UIs?


Hm. I don't think it would be that hard to remake HN's UI using an old-school UI component library. The up/down arrows could use the up/down buttons in this screenshot from macOS's old color picker:

https://guidebookgallery.org/pics/gui/interface/dialogs/colo...

... but limited to only allow you to up/downvote between -1 and 1.

Everything else could be done with buttons and a TextField / Label for comments and replying.

The web is a bit weird in that it taught us to build every UI as a giant scrolling document. And HN is no different. A more "classic" UI approach would be something like Thunderbird mail - with 2 panes: a "comments" tree on top and a "message body" down the bottom. That would be harder to read (since you'd need to click on the next message to jump to it). But it might encourage longer, more thoughtful replies.

Thunderbird: https://lwn.net/Articles/91536/

Or you could reimplement HN with classic controls and something like TB 114's UI:

https://www.ghacks.net/wp-content/uploads/2022/08/account_ma...

Probably still worse than what HN is now, though.


A "traditional" UI for HN would surely look a lot like Usenet newsgroup readers. I think Thunderbird might even still support NNTP.

Usenet had a much better model for discussion groups/forums like HN in my view, though crucially for the modern world it is missing some kind of "comment voting"/user-driven moderation. I wonder if there's an HN<->NNTP gateway around somewhere?


Voting could be handled with comments with specific content - wouldn’t even need to have a body, as everything could be in the headers. Totals could be computed at the server, unless we wanted real NNTP-style distributed discussions.


You'd probably just need to create your own X-header extension


> I think you're forgetting how much more dense and complex even a basic web ui is in controls.

I realised I should've added: Have you actually seen a proper dense UI? Look at any professional software. You will die of old age before you can implement just the layout for it using web technologies.

Here's Cubase with TAL-U-No-LX synth emulator in front of it: https://pbs.twimg.com/media/E8M6-QOWEAQMgGl?format=jpg&name=...


Can MIXXX be considered professional software?

The skins are made using XML and QSS (Qt CSS).

It's not web per se, but it is not too far from it either. I would say it's a combination of both, Web and Desktop technologies, into one.

https://i0.wp.com/djtechtools.com/wp-content/uploads/2021/07...


It's not web technology by any stretch of the imagination


What about a dense ui is intrinsically more difficult to implement on the web? Doesn't dense just mean lots of buttons/fields and small margins between them? Your example looks very challenging indeed, but it was probably not trivial to implement either way.


> What about a dense ui is intrinsically more difficult to implement on the web?

You have no control over layout or rendering. In native apps (especially desktop apps) you can always go down or up any level you want. In the browser you're always fighting the quite rigid and high-level layout and rendering engine.

It's somewhat better now with grid and flexbox, but the browser will always have its own ideas on how the children can be laid out, and there are no good ways for children to depend on the parent, or for the parent to depend on the children, etc.

There are very, very few dense layouts on the web. And while late-90s/early-2000s software was often ridiculed, you could create dense control-rich interfaces in minutes. Because they were usually not ad-hoc hacks added to the platform as an afterthought, like everything on the web.


> I think you're forgetting how much more dense and complex even a basic web ui is in controls.

It really isn't. Web is anemic in controls and layouts compared to what's actually possible with proper controls and control over those controls.


Yeah; especially given the number of pixels we have at our disposal. Old UI screenshots look simple but they had to work at tiny resolutions compared to modern user interfaces. I remember when 800x600 was considered luxurious.

Web apps also can't use a lot of native shortcut keys to build keyboard-friendly UIs. It's rude to override the browser's right-click menu. Alt-anything might bring up browser native controls. Ctrl/Cmd+S is owned by the browser. And so on. Some of this stuff you can override, but even if you do, users never experiment because those shortcut keys basically never do what you expect.


I don't see anything special in having tens of actions on a UI item. Toggles, action links and vanilla buttons would take care of it.


> Windows has several competing "official" UI libraries. Every library has a different look and feel, and all are in various stages of decomposition

I can’t fathom why they made the decision to push every style to a different rendering engine. Working on Windows gets you to see every look and feel and the deeper you go, the closer you get to Windows 2000.


They dropped the ball because no major part of their current business model involves creating a better operating system for the sake of attracting new users

They seem to aim at minimising users leaving and maximising the extraction of data obtained per user


Hasn't there been a set of web components that tries to reproduce this?


I think about this every time I'm asked to build a custom dropdown menu. Can we just use a <select>? No- it must be 'branded'!


If you’re in a rare team doing agile halfway right, see if you can start providing separate level-of-effort for a feature with good UI that’s also quick, and the worse-for-the-user “on-brand” one.

Part of what causes this crap is that the costs aren’t visible. If you can get it through stakeholders’ heads that they’re cutting the feature development rate in half (and making UX worse, and paying in maintenance time and a higher defect rate, but those are harder to make them understand) with their twee UI, some can be steered away from it, or at least convinced to tone down the most-harmful parts.


> Part of what causes this crap is that the costs aren’t visible.

A billion times this. And this is when the next version of the design system isn’t incompatible with the previous one.


Uhhh, aren't native GUI toolkits "responsive" by definition? I remember in VB6 you could have elements take up a given % of their container, and I am pretty sure you could even make them appear/disappear programmatically (e.g., depending on container width). Sure, you didn't have to make it work across a huge set of resolutions, but it was quite flexible still.


wxWidgets sizers raise their hands!


I liked Delphi's anchors. They were extremely simple, visually defined, and met all of my layout needs.
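
For anyone who never used them, here's a rough C#/WinForms sketch of the same idea (a minimal illustration, not Delphi code; WinForms borrowed a very similar Anchor concept, so it's an analogue):

    using System;
    using System.Drawing;
    using System.Windows.Forms;

    class AnchorDemo : Form
    {
        AnchorDemo()
        {
            Text = "Anchors";
            ClientSize = new Size(400, 300);

            // Anchored to top + left + right: stretches horizontally as the window grows.
            var search = new TextBox { Location = new Point(8, 8), Width = 384 };
            search.Anchor = AnchorStyles.Top | AnchorStyles.Left | AnchorStyles.Right;

            // Anchored to bottom + right: stays glued to that corner on resize.
            var ok = new Button { Text = "OK", Location = new Point(317, 269) };
            ok.Anchor = AnchorStyles.Bottom | AnchorStyles.Right;

            Controls.Add(search);
            Controls.Add(ok);
        }

        [STAThread]
        static void Main() => Application.Run(new AnchorDemo());
    }

Anchoring a control to opposite edges makes it stretch with the window; anchoring it to one corner keeps it pinned there, which covered most of the resize behavior you typically needed.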


Those were nice because you would start with floating elements that you'd move around with the mouse until you figured out a good layout, then add the anchors at the end and make it all flexible. Delphi was all about having a smooth progression from mockup to prototype to production. Compared to it, current web workflows feel like a big step backwards.


> GUIs are simple when you don't need to do any styling

Styling should be provided by the host, not the app. The app should give, at most, hints to the system - that this button is the default, what the tab key navigation order is, etc.

> Web front-end are a completely different game in that regard

They shouldn’t be. The API is different, because the presentation layer leaks all over the place into the app logic. With runtimes such as VB’s the UI elements see and react to events, while the runtime takes care of pushing those events either to the host software or to the event handlers that are part of the app.
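
To make the "hints" idea concrete, here's a minimal C#/WinForms sketch (my own illustration of the general idea, under the assumption that the default/cancel buttons and tab order are the hints in question): the app only declares them, and the toolkit decides how they are drawn and handled.

    using System;
    using System.Drawing;
    using System.Windows.Forms;

    class HintsDemo : Form
    {
        HintsDemo()
        {
            var name   = new TextBox { Location = new Point(8, 8),  TabIndex = 0 };
            var ok     = new Button  { Text = "OK",     Location = new Point(8, 40),  TabIndex = 1 };
            var cancel = new Button  { Text = "Cancel", Location = new Point(90, 40), TabIndex = 2 };

            // Hints to the host: Enter activates OK, Esc activates Cancel.
            // How a "default" button looks and behaves is up to the toolkit, not the app.
            AcceptButton = ok;
            CancelButton = cancel;

            Controls.AddRange(new Control[] { name, ok, cancel });
        }

        [STAThread]
        static void Main() => Application.Run(new HintsDemo());
    }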


That and the fact that web is built over a language for text, not UI. HTML is a terrible foundation for UI.


That's why I'm still pinning my hopes on Flutter or something similar for actual web applications. But it's been a long time in the making, and I'm not quite convinced it will ever be able to replace the more traditional stuff.


I have to disagree on that.

Most of the controls you have on the average GUI app are present in HTML. The big difference is that HTML describes active documents, not dialog boxes.


>and most importantly you don't need responsiveness

I'm gonna ask a dumb question out of ignorance because I know responsiveness is all the rage, but... what do we gain from it? Would it not be more straightforward to build UIs from the ground up for desktop and mobile targets than make one UI to morph to fit both?


Responsiveness lets you target two platforms with shit UIs for the price of one good one*, which is what businesses want[0], per the good ol' "worse is better".

* - two for one up front, ongoing development and maintenance costs of modern responsive UIs is going to be much greater than doing things right 2 or 3 times, but those costs are beyond the next sprint, and pay the good chunk of salaries in this industry, so...

--

[0] - And, unfortunately, this thinking became the zeitgeist of software development, so OSS projects do the same.


"Desktop" encompasses everything from ultrawide 8k monitors to 768p laptop screens where the user may have scaled down the browser window - and that's on brand new hardware, I know people who are still using 15+ years old laptops!

This alone means you either handle any window size and size ratio or your UI will break for some users.


> This alone means you either handle any window size and size ratio or your UI will break for some users.

You should spec your UI in device-independent units and let the GUI handle things like different pixel densities (or aspect ratios) and font sizes.


There are many different screen resolutions. Being able to adjust the application based on available space makes the application usable to more people.


In theory that sounds nice but in practice i haven't seen a single desktop application that can actually handle resolutions lower than whatever resolution the developer/designer used. I used a 1366x768 monitor for a long while ~3 years ago and everything looked both gigantic and had padding everywhere.

As for the web, responsive design was so great that in almost every site i had to zoom out to make it think i had a monitor with a bigger resolution than i really did.

These days, since i do not maximize the browser (because my monitor is huge - like all modern monitors that do not have awful image quality tend to be), i often have to resize the window because sites tend to think i'm using a mobile phone and, instead of scaling down / hiding less important stuff (which would at least be appropriate for a narrower viewport), they make things ultraginormous (because touch screens), overly padded out (because touch screens) and they hide all options behind a hamburger menu (because mobile screens are physically too small).

I'm certain there are theoretically ways to do it "right" (a friend web developer told me how but i forgot) but absolutely zero sites (that i visit) do that.


And it introduces new failure modes. I often experience responsive web applications hiding UI elements in my Firefox windows. Those windows usually use either the left or right half of a 28" 4k display. Why the heck do they hide the sidebar to save space? It is fucking annoying.


I think Delphi was slightly better. The components in Delphi were more powerful at the time. But the general idea is the same.

C# with WinForms is still usable today and provides a similar experience. Although the design language has fallen out of fashion.


I agree on all points. In Delphi designers and properties felt more logical and consistent.

For me, WinForms always had an element of struggle not present in Delphi


The GUI WinForms editor in Visual Studio 2022 is a direct descendant of the one in VB6 and has all the exact same functions that I remember from VB6.


Let's not forget Delphi and OptiPerl https://www.uptiv.com/free/optiperl/ To date I have not seen any IDE that could reproduce its amazing box-and-line coding feature.


Delphi was even better. Great IDE, great language, compiled executables. Too bad that paradigm didn't survive the web.


>> but the IDE was the best way to make GUIs that ever existed

It was lifted in spirit from NeXTstep's Interface Builder, so it was okay. But still a pale imitation.

P.S. If you are old enough, you will remember when $MSFT tried to steal Quicktime so its video products didn't suck. They got caught because they copied the machine code. Byte by byte. #howSad


I couldn't agree more, although I wouldn't be quite so hard on the language. It did have some strong points (for example, it was awesome for integrating with MS Office). To this day the GUI builder was the best I've seen. The current Mac stuff is more powerful, maybe, but the VB builder was way more intuitive and discoverable.

Back in those days, I (and many others) would use VB just to build the GUIs for C or C++ programs. It was that good.


was the language even that bad?


Well, a language that allows something like "On Error Resume Next" is not that great by any definition of the word...

Having said that, I must also say that I started coding on VB6 and if I had to show some elementary programming to a ~10-yo, I'd give them something like QB64 in a heartbeat. There's something good in grappling with a "bad" language, educationally speaking.


"On Error Resume Next" never died, it just became serverless!


“On Error Resume Next” is how C has always behaved.


What do you mean? I thought runtime errors in C were almost always catastrophic; the real issue is UB (first example that comes to mind is out-of-bound array access; sometimes you segfault sometimes you just get random data) or, in general, stuff that should be an error but isn't

EDIT: Ah I guess you were referring to arbitrary code injection after, say, a stack overflow? But I think that's a runtime issue rather than a language one (hard to draw a line in a systems PL, but still)


I was thinking more along the lines of not checking whether you got a valid fd when you open a file and trying to use it anyway. In BASIC, when you try to open a file that's not there, by default you get a runtime error and the program exits. If you do an “On Error Resume Next” the program will happily proceed with invalid data.


segmentation fault (core dumped)


It was a running joke how bad it was. It was a bit before my time, but even when I saw people use it as a kid and played with it myself, it was pretty obvious how fast it was for making things.


It was terrible. Under that surface friendliness was hiding an ocean of incoherent rules, bugs and straight madness.


tl;dr: Aside from whatever flaws the VB6 language had, people writing so-so code in VB6 contributed to its reputation as being a bad language.

Something that my sibling comments haven't (yet) mentioned:

VB6 was really easy to create a GUI with, had great DB support, and was a pretty easy language to get started with. Given that a lot of business apps boil down to "present a nice, user-friendly interface to the company database" (particularly biz apps for smaller businesses) VB6 was a great fit for business consulting types. And use it they did! There were a lot of business consultant types writing code in VB6. They were smart people but not necessarily the most hard core coders.

I think that's part of why VB6 coders (as a group) got a reputation as being "lesser" programmers - they were derisively called "code monkeys", etc.

Regardless of the language itself, I think that seeing a lot of "minimally viable code" being shipped by people focused on delivering business apps helps to contribute to VB6's reputation.

Side question: I wonder how many people here on HN have a 'career origin story' something to the effect of "Yeah, so I was in college/high school/middle school, and the <name of small organization/mom-and-pop business> wanted to use their computer to streamline things. I was just learning how to code but was able to get VB to do <minimal but useful task> which really helped <org mentioned prior>. Looking back on it that was some really gnarly code I wrote, but it worked, <the org> appreciated it, and it got me hooked on programming"


IIRC, there were no user defined types. You had to write the whole app using primitives.


You can create classes in VB6 from the IDE, where you would add a class module and define code in that module. There was no "class" keyword AFAIK.


There was definitely a `Type` keyword that was similar to a struct.


VB6 was the only “low code / no code” tool that actually delivered on its promise.

Bill Gates demoing it 32 years ago:

https://youtu.be/Fh_UDQnboRw?feature=shared


Microsoft Access was way ahead, in my opinion, especially since it came with a ready database, reports, etc., and with VB6 you could practically do anything you wanted with the operating system. It's interesting that we have thrown it all away for something far inferior.


VB6 could do more than Access. I used both back in the day, and I found Access Basic was limited; once you had some tools in VB6 to deal with databases and tables, it was easier than Access.

However, this is so long ago I have forgotten the details.

Access was probably simpler for simple database entry and query.

However, Access's database was a disaster, constantly getting corrupted, etc. I would note I was in companies that had access to full databases like Sybase and Oracle, and as we had site licenses they were effectively free for each project. We often took a user's Access project and made it maintainable in VB6.


it's apples and oranges.

VB6 was a programming language environment, Access was a client for the Jet DB engine. You could technically connect to more than just Jet DB's but that's primarily what it was for.


One thing that i think would be very useful would be for VB to have inherent database support at the language/core level, with the IDE being able to edit tables, etc. Essentially something similar to what tools like dBase/FoxPro/Visual dBase/etc offered, but also being a generic application development tool like VB.

Visual Basic did have some database support but the actual database functionality/engine relied on external engines instead of being part of the runtime itself and it wasn't really part of the language. Being able to define a data type that is transparently mapped to a table's columns in a database and variables being cursors in a database would help with a bunch of smaller scale applications.

Basically something that mixed VB and Access. Sadly that'd mean one would cannibalize the other's sales, so it was never done.


That demo is partially faked - if you look at the "plan B" button, it doesn't actually make the UI show some newly changed data, it simply hides the entire window, and shows a different pre-made static window.

Even the "plan A" button code doesn't show the only interesting part of the code - how the UI is actually updated in response to the changed underlying data.


Exactly, and the pre-made window doesn't even have the plan A / plan B buttons. For such a simple example it's hard to understand why they didn't just implement it properly.


Delphi was even better. In fact, having used both, I hated vb6.


Same. I learnt VB because VC++ with MFC was too complex for what I needed it for then. Hated VB after seeing Delphi.


Excel is arguably the most successful low-code tool.

Even non programmers can come up with some crazy stuff in Excel that just works, that will just run on any machine in their company.


Just wait until Billy in Accounting gets their hands on embedded Python...

https://support.microsoft.com/en-us/office/get-started-with-...


I spent a long time doing VB6 and Windows Forms; the idea that it was meaningfully better than NeXT or Delphi is just wrong.


Minor nit: he is demoing VB 1


You're totally wrong.

The IDEs I was making in VB6 in the 90s I'm making about twice as quick in Visual Studio 2022 in C# with WinForms.

In fact, quicker because I'm getting GPT to write any gnarly code I need -- like I just asked it if it was possible to add a tooltip to a ListBox and it churned out exactly the code I need, something I would have spent a bunch of time figuring out on my own.
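
For reference, per-item tooltips on a WinForms ListBox look roughly like this (a minimal sketch of one common approach, not necessarily what GPT produced):

    using System;
    using System.Windows.Forms;

    class ListBoxTooltipForm : Form
    {
        readonly ListBox list = new ListBox { Dock = DockStyle.Fill };
        readonly ToolTip tip  = new ToolTip();
        int lastIndex = -1;

        ListBoxTooltipForm()
        {
            list.Items.AddRange(new object[] { "Alpha", "Beta", "Gamma" });
            list.MouseMove += (sender, e) =>
            {
                // Show the text of whichever item is under the mouse.
                int index = list.IndexFromPoint(e.Location);
                if (index == lastIndex) return;
                lastIndex = index;
                tip.SetToolTip(list, index >= 0 ? list.Items[index].ToString() : string.Empty);
            };
            Controls.Add(list);
        }

        [STAThread]
        static void Main() => Application.Run(new ListBoxTooltipForm());
    }

A plain control-level tooltip (one caption for the whole ListBox) is just a single SetToolTip call, or a property in the designer if you drop a ToolTip component on the form.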


In Delphi you didn't need any code to add a tooltip to a component - just add it as a property in the inspector. You can do it in code if you want, but it's easy either way since you see the available properties in the inspector.


I guess that was about tooltips for the listbox items, because just a tooltip for a component would be the same with Windows Forms by just setting a property in the properties panel (granted, it's basically an ancient form of attached property that uses a separate tooltip component, but the UI streamlines this).


That's how it works today in Qt Creator too


It's the same in WinForms, my example was bad as I was trying to do something "unsupported" by the base objects :)


I never used VS/C#, but can you drop a TDBGrid connected to a TDataset and then just resize autoloaded columns to your taste? Discussing tooltips on a listbox in RADs is like discussing colors of pencils in a hi-tech workshop.

>>Web and mobile development (which I have done all 3) are significantly slower

The above will probably take half a day in web development (assuming an experienced developer), so no, they aren't "totally wrong" at all.


Or an even better one: Delphi (if you were into Pascal)


I’ve often heard people mention that Delphi was a superior RAD GUI experience than Visual Basic, but as someone who’s never used it, what is it that made it so great compared to VB or other GUI builder type tools (eg Qt Designer)?


The Pascal language requires things to be declared in a certain order. It's a bit awkward some of the time, but it enables the compiler to work in a single pass. This meant that compiling and running an application was extremely fast by any standards, and this really made it stand out compared to other development tools out there.

VB created applications that had to ship with a shared runtime library. Windows wasn't great at versioning these libraries so developers often shipped their own VB runtime with their executable. The executable was small and the runtime was comparatively huge which had a negative impact on user perception when downloading the installers.

Before moving on to Microsoft in 1996, Anders Hejlsberg was the Chief Engineer at Borland who oversaw the development and release of Delphi 1.0.

For years, VB felt like an application that could make deployable versions of itself. Delphi felt like a programming environment that compiled code into applications.

After Hejlsberg moved to MS, a lot of improvements were made in VB that ultimately made Delphi less attractive, especially during Borland's strategic waffling known as "The Inprise Years": https://en.wikipedia.org/wiki/Borland#Inprise_Corporation_Er...

If you want to get a feel for what it was like then check out the FOSS clone "Lazarus".


> After Hejlsberg moved to MS, a lot of improvements were made in VB that ultimately made Delphi less attractive

Well, not actually. With Anders' move to Microsoft, VB6 (aka, VB "classic") was discontinued. Microsoft supported Visual Basic syntax on the .NET runtime, but the vast majority of VB programmers considered this to be a different language because developing for the .NET Framework (remember this is ~2001) was a huge departure from VB Classic.

Many VB developers petitioned Microsoft to open-source VB6 or continue releasing improvements on it. Microsoft did not and chose to continue with their .NET + C# strategy.


There's a bit of a gap there though between 1996 and Visual Basic classic being discontinued. VB.NET came out in 2002 but VB6 was supported until 2008.

VB5 in 1997 and VB6 in 1998 really closed the gap with Delphi from what I remember.


VB5/6 had native code compilers. Performance-wise, the gap was reduced. But it was still only object-based and not full OOP; VCL was much better in all respects, as were the GUI builders. The component ecosystem was much better, despite having a much smaller user base. I prefer not to use Object Pascal today, but back then, it was superior to using VC++ or VB.


VB6 was my last VB. It came at just the time when getting a computer onto a network to download and install a giant distribution was touch and go. The building where I worked didn't have networking in the labs, the "labs" were whatever space we could find, and the "lab computers" were old cast-offs retired from office use. It meant that supporting VB.NET with a brand spankin new networked computer wasn't a safe assumption.

At the same time, I was playing with Linux at home, and wanted tooling that could run on either platform. I learned Python at home, and then made the switch at work.

One of my last VB6 apps, thousands of lines, has been running in the plant without issue for 15 years. On one occasion I had to bump up the declared sizes of some fixed length arrays.

As for GUIs, I never found anything close to VB, but also decided to just write a thin wrapper around Tkinter and let my layout be generated automatically by default. I haven't missed laying out my GUIs, which were always a hodgepodge anyway.


.NET 2.0 fixed many of those complaints and even brought back Me.


> After Hejlsberg moved to MS, a lot of improvements were made in VB that ultimately made Delphi less attractive, especially during Borland's strategic waffling known as "The Inprise Years": https://en.wikipedia.org/wiki/Borland#Inprise_Corporation_Er...

This is (from Borland's telling) not an accident. https://news.ycombinator.com/item?id=29513242

Hard to not waffle when your major competitor hires away your top talent.


Having worked with both Delphi and Visual Basic, I've found that Delphi had the edge, especially for professional apps. Its use of Object Pascal meant you got compiled, efficient code right out of the box, and it didn't need an extra runtime like VB.

The VCL component library in Delphi is still unmatched in native code – very comprehensive built-in (and commercial) components, and customizable. Plus, Delphi's database connectivity was unparalleled and could be set up in the designer, where you could see data queried and returned live in your grid component!

Delphi also supported advanced language capabilities (when compared to VB), like inline assembly and pointers, which were essential for low-level optimization or system hacking. The robust error handling in Delphi was another plus compared to VB's older 'On Error' style.


Interestingly enough, the VB6 runtime has been part of Windows for quite some time, to the point that my old VB6 projects still run fine, while my experiments with VB.NET in .NET Framework 1 and 2 currently don't, since the runtime isn't preinstalled. (Granted, moving those to modern .NET is probably fairly easy.)


A few things made Delphi great. For one thing, you could develop all sorts of apps in Delphi - from system management apps, to database-heavy apps, to editors, to games, etc.

1. The component architecture was just great - I have yet to see any language/platform with a more sophisticated component structure even now.

2. There were all sorts of free and commercial components that you could download and install into the IDE and have it running in your application in next to no time. ActiveX etc didn't even come close to the level of integration and speed that the Delphi component architecture provided.

3. Components could be loaded into the UI during design time itself and you could see them functioning directly within the IDE when you placed them on your form. Example: You could connect to a database, run a query, put that into a table with all sorts of sorting and navigation functions, and you could see this working even without compiling the application.

4. It was trivial to develop components for Delphi - even though it was so sophisticated. You could develop one in about 15-30 minutes.

5. The compilation speed was just insane. It just blew everything else out of the water and so you could do fast compile+run cycles.


I agree with everything except 4. It was easy to develop trivial components, but anything more complex and you'd have to do some serious work and the IDE would crash on you on mistakes and there was no way to debug your component live. I can't imagine the pain those DevExpress developers went through to make that VCL data grid of theirs.


I wrote a neural net ML app in the 90s using Delphi.

VB was easier to start with, but Delphi was nearly as fast to build in, code was compiled (fast!), Object Pascal supported OOP, and it had UI features like visual form inheritance that I have yet to see implemented anywhere else.


Visual Basic 6 was quite limited; it was a programming language designed for writing applications.

Delphi, with its Object Pascal roots, was (is) a systems programming language that also happens to have nice RAD tooling for GUI applications.

Put another way, it didn't owe anything to C++ in capabilities, and had VB-like tooling.

VB had to wait for version 6 to support AOT compilation and being able to implement its own COM based controls without depending on C++, and even then it only supported a subset of COM features.

Meanwhile the GUI framework being used by C++ Builder is actually written in Delphi.


1. The language was better for big apps (statically typed, more powerful, lots of features like proper exception handling). It supported proper multi-threading and better Win32 interop, which in practice you often needed for advanced effects in both Delphi and VB.

2. The documentation was excellent and always reachable from anywhere in the IDE. It was all local of course so if you wanted to know about something - a control, an API, a part of the UI, whatever - you just pointed at it and pressed f1. Help would appear immediately with low latency. It also came with a big pile of yellow and blue covered books that taught you everything you needed to know.

3. The UI was well optimized, with things like tabbed editors before tabbing became popular. It was made up of floating but dockable windows; sometimes this was annoying but other times it let you keep things on screen.

4. Very good support for database connectivity.

5. The ecosystem was more coherent. VB wasn't usable for many advanced tasks so VB devs relied heavily on components written in C++ (OCX controls). Delphi was more powerful so components were often written in Delphi itself, with all the attendant advantages.

6. "Components" were more powerful than ordinary libraries of the type you get today - you could install a component into the IDE and it'd appear in a components picker at the top. You could then immediately drag and drop them onto a "form" (window) and begin configuring it with an interactive property and event UI that was always present on the left. This worked even for components that were not UI components, for example, you could drag/drop a timer from the component library onto a form, see all the properties and configure them visually, and then double click on the event to be taken to an editor window with the event handler already written out for you.

7. There were lots of little quality of life things, like it came with some stock icons for buttons.

Last time I used Qt Designer was the KDE2 days, it's almost certainly better than Delphi by now.

Versus the way we do things today, some stuff was better and some is worse. Delphi was primarily a wrapper around Win32 so apps written in it weren't portable. Back then it didn't matter of course, Windows had a monopoly. That meant it inherited Windows' limitations - UIs weren't responsive, styling barely existed, typography was limited, and the deployment story was nonexistent. That's why you keep seeing references to how great it was that Delphi made statically linked EXEs; same reason people like Go today. Operating systems suck hard at deployment, one of the things that eventually pushed people towards the web. In the grand tradition of platform devs ignoring deployment entirely, Borland never fixed it even as the web ate their lunch along with Microsoft's.

On the other hand, the components model was pretty nice. Being able to configure libraries using a simple auto-generated GUI made them often much easier to use.


> The ecosystem was more coherent. VB wasn't usable for many advanced tasks so VB devs relied heavily on components written in C++ (OCX controls)

Until VB 6, which introduced AOT compilation based on VC++ backend, and creation of OCX in VB itself.

To this day, it's one of the easiest ways to do COM on Windows, even better than .NET Framework (.NET Core made it harder; yet another thing that Framework does better).


I thought VB6 used p-code embedded into the EXE? I don't think compiling VB6 to machine code would have been easy.


Nope, that is how VB worked until version 6.

In version 6, you could choose the compilation model: classical (p-code) or the new AOT backend.

Compiling BASIC to machine code is how it was born after all, the interpreters came to be in 8 bit machines, due to their hardware constraints.

Besides, Microsoft already had similar experience with a dual compilation model in Quick BASIC.

And not to leave this being my opinion only,

"Microsoft Visual Basic allows you to compile your applications to fast, efficient native code, using the same optimizing back-end compiler technology as Microsoft Visual C++. Native code compilation provides several options for optimizing and debugging that aren't available with p-code. These options are traditionally called "switches," because each option can be turned on or off."

https://learn.microsoft.com/en-us/previous-versions/visualst...


It allowed for both, the default being pcode (IIRC because it was more compact) but it could also do native code. The feature was introduced in VB5.

Note that it still required the VB runtime either way.


I always thought that the primary reason that native apps lost to the web was deployment, something that is so easy to fix and yet OS people are completely blind to it.


There was also PowerBuilder and other "4GL" tools.

As an old fart, GUI toolkits and building tools are one of those categories of absolutely essential software that gets scrapped and rebuilt (not in a good way) for every platform.

Kind of like IDEs for new languages, to the point of the article. First comes the language and basic CLI tools, and then about ten years later you might have a mature IDE with full language-specific syntax coloring, autocomplete, visual debugging, variable watches, etc.

And these 4GL tools were doing this with BASIC variants (not compiled) atop maybe 200-300 MHz processors and 16-64 MB (not GB, MB) of RAM. It blows my mind that modern OSes have slowdown and stuttering when the amount of CPU, considering multicore, more pipelining, branch prediction, etc, is likely 100x stronger than the CPUs back then.

The 4GL languages/tools were all closed source and that did not age well. PowerBuilder folded (probably killed by Visual Basic), and then Visual Basic was killed in one of Microsoft's platform purges.

They also were tightly coupled with databases in a UI --> DB architecture. As server architectures exploded in complexity beyond that, the 4GL tools couldn't adapt, partly because they were leveraging so much power of SQL under the hood, and the various server and RPC/invocations never offered the same power and flexibility.

But I can hear you .. SQL over the wire? Like send SQL to a backend service so it can be intercepted and injected and all that? Yup, probably part of the problem. Thick clients and direct internal LAN communication is an implicit requirement.


I would really expect there to be some FOSS Python or JS insta-app-maker that's as easy as VB was, but for some reason nobody wants to work on such things.


Open source devs often don't like writing GUIs or documentation, which VB-like environments rely on very heavily.

Commercial devs meanwhile will work on those things, but generally want to host the resulting app on their own cloud for a monthly fee. That's lockin which is scary to people and so outside of platforms where there's no choice (e.g. Apple) they prefer to have a less productive dev environment but more vendor options.

30 years ago devs were much less sensitive to lockin concerns. Open source barely existed so the question was merely which vendor would you choose to get locked in to, not whether you'd do it at all. And fewer people had been burned by projects going off the rails or being abandoned. The VB/Delphi era ended when VB6 suffered execution-by-product-manager and Borland renamed itself to Inprise whilst generally losing the plot (wasting resources on Linux, etc).

Open source stuff tends to have much worse usability, but there are no product managers, no corporate strategies, and in the unlikely event that the project decides to radically change direction it can always be forked and collectively maintained for a while. That concern outweighs developer experience.

Also the ecosystem is just way more fragmented these days. In the 90s everyone coded for Windows unless you were doing mainframe stuff, and on Windows there was C++, Delphi and VB. That was pretty much it and they could all interop due to Microsoft's investment in COM. These days you have JS, Python, Ruby, Java, Kotlin, Swift, C#, Rust, Go ... and they barely talk to each other.


Have a look at https://anvil.works - it's a drag-and-drop web app creator using Python for the front end and back end. It's as close to Delphi for the modern age as I've come across.


I used it. The user experience is not remotely close to Delphi, but it has WYSIWYG. The closest Python had to Delphi was Boa Constructor, but it stalled quickly, decades ago.


There is Gambas: https://en.m.wikipedia.org/wiki/Gambas

It's more that few people want to make and distribute small desktop apps anymore.


That's 20 years old. People like native apps, it's just that GUI programming is generally such a pain that many people avoid it.


I don't think it's the GUI programming, at least not for me. It's the cross platformness, and the fact that everything needs some kind of sync feature these days, plus just the sheer size of modern GUI apps.


i am working on such a thing myself at https://github.com/yazz/yazz. Also there are many other people trying to build something similar


Never really got why Microsoft didn't keep going with that kind of thing. Why don't we have .net properly inside excel instead of VBA?

Money? Internal elitism?


Office Developer Tools exist.

They're trying to steer people away from end-user programming towards using more well-defined, tested, features. I find it patronizing, but, having seen some VBA, understand the reasoning.


Sekhurity.

Can't have user-programmable tools, because they could be used to ship (gasp!) malware, or let users (GASP!) work around policies of their employers' IT staff.


VBA can't really be removed due to the need to support legacy files (especially Excel). For more sophisticated stuff they expect you to use Office Developer Tools or similar now.


Yes, Delphi too. So also tools like Powerbuilder for developing database-heavy apps. There is nothing even remotely close to those tools now.


Windows Forms is pretty much the successor to VB6 and I think is still supported. Still nothing like it on Linux though, even though people say Linux is way better than Windows for software development :/


The subset of GUI software is much much smaller than the greater whole.

I do agree though that any GUI programming on Linux is a pain compared to Windows / Mac. However the proliferation of web-based apps, even those running in Electron, shows how valuable a truly cross-platform GUI framework that is easy to use would be.

Google is trying with Flutter and Dart, but last time I used it I felt it was still being iterated on far too quickly; maybe in a bit (maybe even now) it will be more friendly to use.


VB6 was primitive compared to what Delphi gave us. It had live design time data binding, visual form inheritance etc. VB and VC++ were primitive compared to that. VCL vs MFC? No contest. The API was powerful, fast, great to extend by inheritance, best in class layout management. There were about 8000 third party components at web sites like Delphi Super Page. Half of those were free, rest were affordably priced for commercial use.

Much of the 90s experience lives on in FPC/Lazarus, but it did not get much better after that, and few use it. I wish they aligned themselves with a more popular language like Rust or Go for a more cohesive experience, while keeping Object Pascal (built with IDE experience in mind).

I still prefer the GUI building experience in Delphi 6/7 than anything that was produced since then. C#/Winforms is fine, it was designed by the same person - Anders Hejlsberg, but I wanted something native.

It's unbelievable that something modern like Flutter still fails to capture the design convenience of Delphi, from decades ago. Yes, the design markup is great, but I don't even want to look at it most of the time.

The later Swing/SWT editors never came close. Even Qt, which was inspired by it, never provided the component market experience of Delphi, and it was much more bloated in runtimes.

I fell in love with GUI design with Delphi, but have otherwise hated it ever since.


> Much of the 90s experience lives on in FPC/Lazarus, but it did not get much better after that, and few use it. I wish they aligned themselves with a more popular language like Rust or Go for a more cohesive experience

A very core aspect of LCL and how Lazarus works (and VCL/Delphi for that matter) is language features like metaclasses, RTTI, properties, etc which allow the framework to register classes, instantiate them at runtime and inspect the class definitions so that serialization/deserialization and IDE features like the object inspector will work. AFAIK neither Go nor Rust have this.

The only language off the top of my head that has it is C# (which isn't surprising considering both Delphi's dialect of Object Pascal and C# were designed by the same person).

Of course there can be language extensions like C++ Builder did for C++ but they'd need to maintain their own compiler (or fork).

Personally i get the impression that Visual Basic and Delphi's language features were added in tandem with the underlying framework (if not deciding first how the framework and IDE features will look like and then deciding on the language features to support those) whereas modern UI stuff are made with whatever the target language has in place.


> Of course there can be language extensions like C++ Builder did for C++ but they'd need to maintain their own compiler (or fork).

This is exactly what I hope would happen, except this time with Rust or Go. They don't need to integrate fully. I am not hoping for the ability to write Go/Rust for GUI code, just to be able to call into them without the clunky foreign functions.

I leave out Pascal now because besides the GUI part, the value isn't there anymore. The language is archaic otherwise.

> Personally i get the impression that Visual Basic and Delphi's language features were added in tandem with the underlying framework (if not deciding first how the framework and IDE features will look like and then deciding on the language features to support those) whereas modern UI stuff are made with whatever the target language has in place.

I agree. But the features stabilized, more or less. IDE experiences have not gotten better in a long while. It's not much of a moving target.


> language features like metaclasses, RTTI, properties, etc which allow the framework to register classes, instantiate them at runtime and inspect the class definitions so that serialization/deserialization and IDE features like the object inspector will work. AFAIK neither Go nor Rust have this.¶ The only language off the top of my head that has it is C#

Smalltalk and Objective-C?


They're not "static enough" to have automatic serialization/deserialization and object inspector-like functionality. AFAIK in OpenStep/GNUstep/Cocoa objects (derived from NSObject) are expected to implement encodeWithCoder and initWithCoder to save/load to an object to disk and relies on per-class versioning and the subclasses handling the versions manually. The default NSObject implementation does not do anything as it doesn't have enough information to do anything "automatically". For an IDE you could have objects implementing a protocol that enumerates properties (or just creates the GUI for editing an object) but this isn't something that can be done by what the language provides itself.

In Smalltalk you might be able to inspect objects and their values. The VM certainly has enough information for that - after all it needs to so it can read/write the image the objects are in - but i don't know how much of that is exposed (i've only played around with a few Smalltalks and never made any real application with them). Though even if that functionality is available, you'd still need some way to tell -say- a Delphi-like object inspector what properties are available and what values they are meant to be (so, e.g., a "backgroundColor" property is presented in the GUI with a color editor and not a string editor or whatever else :-P).

In contrast, Free Pascal (and Delphi) has enough functionality to implement serialization/deserialization for objects without the objects themselves having to have any custom logic (this is how components are saved - you simply declare a published property and the framework handles serialization and the IDE handles showing it in the object inspector with the appropriate editor based on its type). In addition the on-disk format can be implemented in a way as to allow only storing properties which have different values from the defaults (there is a language keyword to specify what the default is though there are also other means), meaning that any new property can be added. The default object writers work like that but as this information can be exposed to any program, you can do your own (the default formats and functionality mostly support whatever is needed for saving GUI forms but in my own game engine i need more functionality than that to handle binary data and external data assets so i wrote my own serialization system).
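
A very rough C# analogue of that published-property approach (a sketch only, not the FPC/Delphi mechanism itself): reflection plus a [DefaultValue] attribute lets a generic writer enumerate properties and skip the ones still at their declared defaults, which is roughly what Delphi/FPC streaming does with `default` on published properties.

    using System;
    using System.ComponentModel;
    using System.Reflection;

    class FakeButton   // hypothetical component, just for illustration
    {
        [DefaultValue("Button")] public string Caption { get; set; } = "Button";
        [DefaultValue(75)]       public int    Width   { get; set; } = 75;
    }

    static class Streamer
    {
        // Write only properties whose value differs from the declared default.
        public static void Dump(object o)
        {
            foreach (PropertyInfo p in o.GetType().GetProperties())
            {
                var def = p.GetCustomAttribute<DefaultValueAttribute>();
                object value = p.GetValue(o);
                if (def == null || !Equals(def.Value, value))
                    Console.WriteLine($"{p.Name} = {value}");
            }
        }
    }

    class Demo
    {
        static void Main()
        {
            Streamer.Dump(new FakeButton { Width = 120 });  // prints only: Width = 120
        }
    }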


what aspect of the development speed do you feel is faster? i feel like i can write things like

    <select name=title><option>Mr.<option>Ms.</select><input name=name><input type=submit>
faster in html than i could in vb6. maybe i'm wrong about that?

you can try it in your url bar: data:text/html,<select name=title><option>Mr.<option>Ms.</select><input name=name><input type=submit>

of course that doesn't give you database integration, but if you just want crud, you can get crud pretty quick out of django's admin

here's a couple of things i've hacked up recently in dhtml which i think wouldn't have been easier in vb6

http://canonical.org/~kragen/sw/dev3/ifs a 2-d iterated function system editor

http://canonical.org/~kragen/sw/dev3/clock a watchlighting clock


Now try to make a layout with that form example that you can freely position anywhere in 2D space and have it flex properly as the window size changes, beyond the defaults that html gives you, and then build the generic B2B SaaS dashboard like Segment, RevenueCat, Mixpanel, Datadog, Sentry, etc. I bet you could make a VB6 / Pascal equivalent much faster than you would be able to with a mobile or web app, especially if they were updated with a decent graph widget set.

Also, your two examples are drawing-canvas examples; that's a pretty different target from what Delphi / VB6 aim at with their GUI toolkits.


FYI, since 2017-ish doing layouts in HTML is much easier if you use "display: grid". Just be aware that the numbering is based on lines, not boxes. Also be aware that to use percentage-based heights at top level, you have to style the `html` and `body` too.

Additionally, use of `@container` queries (the modern, poorly-documented alternative to `@media` queries) lets you do more advanced layout changes on element resize. This requires adding `container-type: size` style on the parent (I found this very confusing in the docs).


> Just be aware that the numbering is based on lines, not boxes.

With grid-template-areas, you can use ascii art instead of specifying rows/columns manually. Though you would still need numbers if you wanted something other than the default spacing, which would be based on the content.


I'm going back to FRAMESETs :V

Responsive UI like it's 1996!


i'm struggling under the misconception that having things flex properly as the window sizes is the default in html and basically impossible in vb6. but i'm very open to having my misconceptions corrected. is there a public video that demonstrates what it looks like when an expert uses vb6, so i can see what things that are hard in html are easy in vb6?

i have no idea what segment, revenuecat, mixpanel, datadog, sentry, etc., are


The point everyone is making is that you could have modernized the VB6/Lazarus approach to meet modern needs and it would have been much more productive than what we have currently with html+css.

And indeed, many of the complaints people have about VB6 not supporting multiple resolutions, etc, are fixed in Lazarus.


it's possible that the people i was talking to meant that, but i took them to mean something much stronger, that there are things that were easy and fluent in vb6 that are clumsy in dhtml. and i'd like to know what those things were, but text is not a good medium for that

christine lavin wrote a song about your interpretation https://mojim.com/usy144575x1x8.htm in which she says

    The reality of me
    cannot compete
    with the dreams you have of her.
    And the love you've given me
    is not as sweet
    as the feelings that she stirs.
    And so you turn away
    and you say that you're sorry,
    But you must pursue this dream,
    this improbable dream.
    Though things have not been bad,
    you can't say you've had
    Quite as good a time as it first seemed.
some software that could have been written is always better than all software that has actually been written


The conversation often goes like this.

Me: creating UIs in VB6 was so much faster and easier than html+css.

Them: yeah, but they're not reactive to resolution changes.

Me: but they could have been updated rather than throwing them away and going with the complexity of today's solution.


the person i was talking to seemed to be saying the opposite: that vb6 uis adapted better to window size changes than html. i'd like to understand what they meant

css has gotten pretty complex, but html isn't inherently complex


Flash and Actionscript did this for the web but then Apple killed Flash.

Maybe in the era of LLM-assisted webdev tools (like teleporthq.io or MS Sketch2Code etc) the LLM will help sidestep API moats and help bring back such low-code IDEs. Or it could backfire and bring about API moats with non-human-accessible APIs (obfuscated binary only).


Apple did not kill Flash. Flash killed Flash.

Adobe claimed that they could get Flash running on the first iPhone if Apple let them.

When Flash finally did come to mobile via Android in 2010, it required 1GB of RAM and a 1GHz CPU and it still ran badly.

The first iPhone had a 400MHz CPU and 128MB of RAM. It could barely run Safari. Were you around when the only way that Safari could keep up with your scrolling was by showing a checkerboard pattern while waiting for the screen to be redrawn?


Building apps with VB6 was so productive, wish it was that easy to build mac apps these days.


I haven't used it recently on macOS but Lazarus[0] has mac support. I used it some years ago on my iMac.

If you don't mind proprietary languages, there is Xojo[1] which seems to be designed mainly around Mac (it can do Win32 and Linux apps though).

[0] https://www.lazarus-ide.org/

[1] https://www.xojo.com/


I dunno, I worked with Qt5 and Qt6 recently and creating a UI felt pretty fast and smooth to me (Much more so than the last time I did frontend web work anyway)

That said, I've never used VB6 so maybe I'm missing something...


VBA is still used all the time. As long as you have Excel or Outlook you can use it.


If one wants, VB.NET with Windows Forms still does offer a similar experience.


Didn't Microsoft more or less just copy everything from Borland Delphi? They had drag and drop GUI stuff in the early 90s.


No, Delphi came out years after Visual Basic. There was an earlier Turbo Pascal for Windows, but it didn't have visual design features as far as I remember.


Turbo Pascal for Windows did have a GUI designer, but it was a separate application from the code editor. It wasn't as integrated as VB.


Even Visual Basic for DOS had a TUI designer. Wasn't addressed in the article.


I think by overall value it pales in comparison to Delphi. At least this is my experience.


Would this be similar to the .NET Webforms? That could be done in C# or VB.


The equivalent would be WinForms


I doubt it


I loved TurboPascal. I agree with everything the post argues.

I would like to expand a bit on it being before the Internet was a huge thing.

The manuals that came with TurboPascal were nearly excellent. It included most of what you would need to get started. When you didn't quite understand something, you had to spend time to figure it out and how to do it well. This could be time consuming, but afterwards you gained important knowledge.

Then there were other books to get, and there were "coding" magazines, though at the moment I can't remember any TurboPascal-specific ones. And if you were lucky you knew one or two other people who were into coding, and you could share problems, solutions, hints and tips. And warez.

There were also a lot of BBSs out there. Where you could ask questions, help others, etc.

These days, when most people face a problem, they Google it (or maybe now ask ChatGPT), find that someone posted a solution, cut and paste the crappy code, cross their fingers that it works, and off they go.

(or pull down libraries without having any idea what in the world it actually does)

At the same time things have gotten a lot more complex. In my TurboPascal days I knew most of the stack: the programming language, the operating system, assembler, a lot of how the CPU worked.

These days you have to understand JavaScript, the runtime / compiler, etc., before you even get close to the underlying OS, and certainly long before you get down to assembler and the CPU.


I'm not sure whether the issue is stack complexity / depth, but there is definitely something in the culture where it's common for docs and how-tos and Q/A to tell you what to do, the steps/commands/etc, but to do little to help you build any kind of relevant domain model.

This isn't exactly new, there's always been documentation like this, but I think the proportions have changed dramatically since even the early 2000s.

One of my vaguely defined theories is that as valuable as automation is, the culture of automation / process has started to diffuse and act as an influence on how we think, in steps & commands rather than models.

Possible that we've always been steps-results / input-output beings, but I wonder.


I recently created some documentation for a process at work where I explained the 'why' for most of the steps and commands. Not in great detail, but just a bit of detail. I thought that it was good as a step-by-step recipe, while also giving context that could help someone in the event that something didn't go according to plan.

I was asked to remove much of the context that I provided, so as not to confuse the reader, and to make it as direct as possible. This is documentation intended for experienced, technical professionals. I think that the revised documentation is less helpful.


I had the same thing. I took the reader through the journey of "why" they should want to work in this way, but apparently that's confusing.

That's actually the important part, though. If people don't know why they are doing something, they don't do it.


Perhaps you could compromise by moving the “whys” into an appendix to the guide with references from the “how” section.


Perhaps it could be restructured to separate out the howto from the explanation to serve the reader’s intended use at the time as described here: https://diataxis.fr


As a tech writer I love the concepts of Diataxis but don't agree with it being invoked here. Context is critical in all four of its quadrants, and its model doesn't apply uniformly to every aspect of every application.

GP did IMO the right thing by understanding the audience first in order to judge what level of context is appropriate. That should be rule 0 before anything in Diataxis gets involved.


I think another factor is the medium through which the docs are to be consumed. If the intended audience are "experienced, technical professionals" as the ggp says, but those folks are arriving at the docs primarily from search engines, then it's likely they are in "how-to mode" [0].

People in "how-to mode" are almost always time-constrained. They need immediate answers so interspersing the why with the how would slow such readers down (since they have to scan more text than they need to [1]). Their impatience will often cause them to bounce from your web page back to the search engine in search of other sources that can provide them with immediate answers.

Without additional context on the medium of delivery of the docs (in-app vs web page vs PDF), it's hard for us commenters to say with certainty whether the decision to remove the "why" was a good call or not.

0: https://diataxis.fr/how-to-guides/

1: https://www.nngroup.com/articles/information-foraging/


Everyone wants a quick fix these days, in all parts of their life.

How to do X in [an hour, a day, one week of lunches, 30 days]. Heal yourself from X in [an hour, a day, one week of lunches, 30 days]. Eat/Don't eat this and get slim in [one week of lunches, 30 days, 60 days].

A long time ago I studied martial arts (Karate). Our structure went up to the master in Japan. Once you got past the orange belt, the only way to progress was when the master was visiting. The master also decreed that you had to hold a belt for a minimum of 1 year, a brown belt for 2 years, and a black belt for 4 years before you could try to graduate. He said people needed to grow into their skills and mature.

I hadn't thought much about that until, a few years ago, a colleague told me her daughter had just gotten her first black belt. Knowing her child was no more than 12, that seemed odd, but she had studied hard for almost 2 years.

That seemed like some form of fast-food martial arts.

By pure coincidence I met the son of my previous master at a train station. I knew him, but the last time I saw him he was still a kid. He was Japanese as well, but he spoke excellent English - something his father had no interest in. He had taken over from his father and was now the master. We talked a bit about his dad and the clubs. I mentioned the rapid progression my coworker's daughter had made. He leaned back and laughed a little.

"Ah, we call those dirty white belts"

A little like coding bootcamps I think.


Yes, too many cookbooks, "with solutions and examples", and no one writes good reference manuals any more.

I kind of get it. I think about a problem, and I am solution focused. Furthermore, I want to learn what I need to implement the solution, and usually don't pay attention to other features that are not directly related to my problem.

However, by giving me a cookbook, the documentation writers are doing me a disservice in two ways: first, they are greatly limiting the number of solutions I can use. If my problem is not something they already imagined, I need to find another cookbook or read their source code to find out how to solve it.

And second: they are taking away from me the process of learning about the different parts of the system and how to integrate them into a solution by myself.

And the gotchas are endless. I am using FastAPI, and the number of things I have no idea how they work underneath, until something fails, is maddening.

I wish there was some real reference manual of it, not just the autogenerated function signatures and the most minimally useful documentation I have ever seen. So many things make no sense, and I have to second guess the FastAPI designers all the time.


> there is definitely something in the culture where it's common for docs and how-tos and Q/A to tell you what to do, the steps/commands/etc, but to do little to help you build any kind of relevant domain model.

So very true, and this is actually my biggest complaint with MSDN nowadays. It's so difficult to build a mental model of what's actually going on, which makes the documentation hard to use if your needs aren't exactly what it describes.

I don't know when this phenomenon started but it drives me batty.


Manuals in that era were in general pretty good compared to today's open source software standard. Microsoft C++ had about 6-7 dictionary-sized books as the manual.

Love that era. I'd definitely pay more if, say, JetBrains had these kinds of manuals. But it doesn't make sense nowadays to have printed manuals, not only because of the cost, but because nowadays people don't need to read books about language specifications.


I do miss books.

For a while I was a contractor at a hush hush place. We had extremely limited internet access and even stronger rules not to use it. No cellphones, no Wi-Fi, no BYOD.

It was like going back to the good old days with books. We were given the opportunity to order books.

I adapted the best out of our team, but probably only because I am old.

We had one good old fashioned corded phone, which was for internal calls only. It was not hooked up to anything "outside".

We were given a number that our nearest could call in on.

The number was supposedly connected to some semi-anonymous front desk somewhere.

It was to be used (only) in an emergency (they emphasized that).

The routine would be that a member of the security detail would come to notify the relevant person and escort that person to a room with a phone that the front desk could route the call to. Someone might or might not be listening in.

As far as I know nobody tried it.

I don't know if the routine is still the same now. I would guess that younger developers would have a much harder time adjusting now than we did back then.


Ah, a place I'd love to work in.

Anyway, nowadays software development, including tool development, takes the path of quick iteration, so manuals stop being very relevant after a while.

Does IBM still give paper manuals to system programmers? Maybe the mainframe business is old enough to retain certain traditions.


The first Turbo Pascal IDE, with compiler and debugger, ran as a .com file so all the code and data had to fit in a 64KB memory segment.


The article covers some good points, but misses a few extra things that the Turbo Pascal 7.0 IDE included that made it a true powerhouse:

- A full OOP tree to determine parents/traits of descendant objects

- The ability to edit, assemble, and trace through both inline and external assembler code

- A registers window that showed all registers and flags at any stage of runtime

...all while able to run on the first 4.77 MHz 8088 IBM PC, which was long in the tooth by the time TP 7.0 came out. (The OOP tree required a 286 as they only added it to the protected mode IDE.) This made the TP 7.0 IDE a complete development and debugging environment for assembler as well as Pascal.


I never tested it on an XT, but it ran like a dream on my 286. I wouldn't be where I am now without Turbo C++/Turbo Assembler.


> ...all while able to run on the first 4.77 MHz 8088 IBM PC

Eh, more like it walked rather than ran :-P.


I still use it on that hardware for hobby projects, and while I have to wait 60-120 seconds for a compile, it's still more convenient for me than cross-compiling on Windows and then copying the code over.

It's not how well the bear dances, but that the bear dances at all. That said, EMS and a solid-state hard drive do help a little.


TBH for an original IBM PC i'd use at most TP5.5 as it is faster and you do not lose much in terms of functionality (IIRC the biggest loss is the inline assembler).


> there are a few things that VSCode doesn’t give us.

> The first is that a TUI IDE is excellent for work on remote machines—even better than VSCode. You can SSH into any machine with ease and launch the IDE. Combine it with tmux and you get “full” multitasking.

I definitely disagree with this sentiment. At my last job, I had to do most of my work on a remote server (because it had a GPU), and I found VS Code far more pleasant to use than plain old SSH. People recommended using an editor on the server side or freaking around with Win SCP / Cyberduck, but VS Code was just so much better in so many ways.

Because of VS Code's web roots, it can easily run its frontend on your own local computer while running its backend somewhere else. This means that most common actions, like moving the cursor or selecting text, can be done locally, without the need for a round trip to the server. The operations that do have to be executed remotely, like saving a file for example, are properly asynchronous and don't interrupt your workflow. Everything is just far snappier, even if you're working from home, through a VPN, on barely working WiFi and an ADSL line.

As a bonus, you get fully native behavior and keyboard shortcuts that you'd expect from your platform. Things like text selection and copying just work, even some of your addons are carried over.


100% agree. Remote VSCode over SSH is great.

The resource consumption on the client doesn't bother me one bit. Any minimally decent laptop can put up with that load, on battery power, for hours.

I would agree with “whatever it takes to make the server install leaner, more portable, etc” just without sacrificing many features.

If the server side doesn't run on FreeBSD that's really too bad. If Microsoft makes it hard to improve by not making those bits open source, that's very unfortunate.


VS Code remote in some cases is better than local.

The remote can be a Docker container, so when I have to do some experiment, I create a container, which takes 5 minutes to set up. I can then play around and test a dozen packages and configs, and once I am comfortable, commit the last version.

If I want to do some quick testing on a project by a different team, again a local container is set up in 2-10 minutes. Once done, I delete the container and my local system isn't messed up.

Last is the obvious use case of testing anything on reasonably large data or GPUs. Create a cloud server, get the data, run your code and tests, push the data to S3, and done.


vscode's model of server on host is good because of low latency.

It can be a bit heavy in cpu usage depending on plugins though.

I like emacs tramp in theory since it doesn't impose that, but latency suffers.

With correct ssh config it usually works well, but many times I'd prefer lower latency with emacs being on the host.

That's supposedly possible, but I've never gotten it working.


What were you trying to do with tramp? I’ve used it for coding Common Lisp, together with a remote SLIME session - ie slime-connect - and while I have run into at least 1 limitation with paths, I have a decent enough work around for it. I think the setup was just a matter of setting some customizable variables.


I typically use tramp for:

- docker containers

- accessing boxes on same network

Sometimes it's fine, but then, perhaps because of regressions, I get buffers that never seem to recover and have to be cleaned up.


I see. I thought I had some .emacs customized settings I could share, but they're all slime specific. It appears tramp otherwise just works without further configuration - unless I set them in ielm and forgot about them before copying them over to .emacs, but I didn't see anything like that in my ielm history.


I was doing exactly the same 30 years ago with X Windows and XEmacs.


> This means that most common actions, like moving the cursor or selecting text, can be done locally, without the need for a round trip to the server

No, you weren't doing this. You were making a round trip to the server when you moved the cursor or selected text.


> You were making a round trip to the server when you moved the cursor or selected text.

Of course this being X, your machine ran the server and the remotes were the clients…


No, as gummy put it well, all of that was done on the client computer.


The fact that it is easy to confuse the server with the client in X does not change the fact that the X server and XEmacs are running on different computers, so each interaction is a round-trip.


XServer and XEmacs are both running on the client machine.

Also, by the laws of physics, with distributed computing you cannot avoid each keypress and its display on a rendering surface being a two-way street.


By the "client machine" where XServer and XEmacs are both running, do you mean the machine where the human user is entering keypresses and viewing windows? Or do you mean the machine where the files are ultimately getting edited? Clearly, there has to be something running on each of the machines, since otherwise one side would have nothing to connect to on the other side. What is running on the machine opposite the "client machine"?

The idea with VS Code is that neither the keypresses nor the displayed windows are being sent over the network, but are kept within the same machine where the user is entering or viewing them. Only the file data (or debugger status, etc.), which are cached and far less frequently updated, are sent over the network. Are you saying that XEmacs can also function remotely in this way, with neither keypresses nor displayed windows sent over the network?


There’s some confusion in some of the replies here. The point this person is trying to make is that you get the remote machine’s key bindings, not the local’s. That’s an artifact of the experience being a remote desktop.


It's similar in outcome (doing "stuff" remotely), but not the same architecturally.

VScode runs on the computer in front of you, and it _does not_ send key-presses or other user input over the network at all. Instead VScode sends file-changes over the network to the remote, and executes commands on the remote (e.g. SSH's in and runs 'gcc ...').

With X, XEmacs is not running on the computer in front of you; it's running on a computer far away. Every key-press and mouse click must be transmitted from the computer in front of you over the network, received by the remote computer, then a response sent from the remote to the computer you're interacting with, where it'll be displayed.


You still had to do a roundtrip for every single click though, right? I don't think X Windows has any kind of client side scripting system.

That's better than SSH for sure, but still not as good as the web model.


X Windows server runs on the client machine.

The client is the server application.


The point still stands, though. You need a roundtrip, even if it starts from the X server rather than the X client.


You always need some level of round trip between the keyboard and UNIX processes.

The server application isn't guessing keys, regardless of the connection format.

What matters is how the communication is being compressed and local optimizations.


The difference here is that Visual Studio Code fully runs the GUI on the local machine and only file IO or external programs (the compiler, the actual program being developed, ...) run remotely. Thus the UI reacts promptly to all interactions and many of the remote interactions happen asynchronously, so even saving a file will not block further actions.

Whereas any non-trivial X application does its work in the client, so even basic interactions have a notable delay, depending on the connection.


You're assuming someone would be running Emacs on the remote machine talking to a local X server in order to edit files on that remote machine. People would generally not do that; they would use something like TRAMP, where Emacs runs on your local machine but accesses remote files.

TRAMP only requires ssh or telnet (or scp, rsync, any number of other methods) on the remote machine.


It shows you never used slow telnet sessions over modems.

There is no difference between doing this over text or graphics, in terms of the whole setup regarding network communications for data input and output.


Again: the key difference is that in VS Code the UI runs locally, thus all UI interactions are "immediate" and there is no difference between local and remote operation. Yes, IO has latency, but where possible that is hidden by the UI (possible: saving a file happens without blocking the UI; not possible: loading a file requires the file to be loaded... but even then the UI can already prepare the window layout).

This is very different from a system where each keystroke and each menu action has to be transferred first, before the remote side can identify the needed UI update and send that back.


Again: learn UNIX distributed computing architecture.

Not going to waste more of my time explaining this.


Telnet is a much lower-level protocol. Please learn what you are talking about and have a good day.


Pjmlp is right. You need to read on how X was designed for remote work.


Johannes's point was, I believe, that using VSCode remotely works fundamentally different than using apps remotely via X. I don't think he is confused about how X was designed.


Designed badly, in this case.

Arguments from authority aren't appealing. Arguments from logic are. The fact is that X and VSCode's remote protocols are designed very differently, and on high-latency and high-jitter connections (and many low-bandwidth ones), VSCode's protocol is simply better.


VS Code isn't doing this with text or graphics, though. In X terms, it's running both the client and server on your local machine. It simply doesn't put the network boundary in the same place as an X application.

VS Code's "backend" that runs on the remote machine is rather only in charge of more "asynchronous" operations that aren't part of the UI's critical path, like saving files or building the project. It doesn't speak anything as granular as the X protocol.


Classic UNIX program architecture in distributed systems, apparently some knowledge lacking here.

Long gone are the days of using pizza boxes for development, it seems.


The comparison you made wasn't to arbitrary distributed UNIX programs, though. It was to X applications, which don't work this way.


I'm sorry to say I'm as confused as I was before I read these sentences.

Let me try to rephrase: with X Windows, the UI server runs on your local machine, while the UI client runs on the remote machine (e.g. your application's server). Is that correct?


No, the whole UI runs on the client machine, which in X Windows nomenclature is the server.

The client application (on X Windows nomenclature), runs on the remote server and is headless.

Instead of sending streams of bytes to render text, it sends streams of encoded X Windows commands to draw the UI.

Everything else regarding compilers, subprocesses and what have you keeps running on the server, regardless of how the connection is made.

Think of big X Windows terminals or green/amber phosphor terminals accessing the single UNIX server used by the whole university department.


I'm surprised pjmlp is missing the point here. Or maybe I am.

> Instead of sending streams of bytes to render text, it sends streams of encoded X Windows commands to draw the UI.

(Simplified) VSCode sends no bytes to a server while you're editing a file. The entire file exists on the client; you can edit all you want and everything stays on the client. Only when you pick "save" is data sent to the server.

My understanding of X Windows is as you mentioned above: you press a key, that key is sent to the app on another machine, and that other machine sends back rendering commands. Correct? Vs. VSCode: you press a key, nothing is sent remotely.

Note: there's more to VSCode. While it doesn't have to send keystrokes and it is effectively editing the file locally (hence fast), it does send changes asynchronously to the remote machine to run things like the Language Server Protocol and asynchronously sends the results back. But you don't have to wait for that info to continue editing.


No, you are correct. On any sort of low bandwidth or high latency connection, your remote X experience will be terrible.


Thanks for elaborating, it helped a bit and now this section of the Wikipedia article fully clicked for me:

"""The X server is typically the provider of graphics resources and keyboard/mouse events to X clients, meaning that the X server is usually running on the computer in front of a human user, while the X client applications run anywhere on the network and communicate with the user's computer to request the rendering of graphics content and receive events from input devices including keyboards and mice."""


Even in 2023 you can get vim to be more powerful than VS Code. But it's that much more difficult.

As the author states, IDEs haven't necessarily gotten a lot better, but imo advanced features have become a lot more accessible.


What does it mean "more powerful" ? Do you mean in terms of productivity ? It probably depends on your task anyways. In 2023, it's still a pain to have decent debugging in Vim. For pure text editing, I can believe you, but for software development, I highly doubt it.


> Even in 2023 you can get vim to be more powerful than VS Code. But it's that much more difficult.

I absolutely agree, assuming you're using "powerful" in the same sense as saying that a Turing machine is more powerful than a MacBook.


Vim is a text editor, not a code editor. It has always been fundamentally designed this way.


Vim has many features that do not belong to a text editor, like `:make`, `gd` or even QuickFix.


In the spirit of what the person you are replying to wrote, you really weren't doing the same thing 30 years ago, because X Windows doesn't really have the capabilities vscode has for remote work. X Windows approach is very primitive compared to what vscode does.

https://en.wikipedia.org/wiki/X_Window_System_protocols_and_...

https://code.visualstudio.com/docs/remote/remote-overview


Emacs can edit files remote to where Emacs runs with e.g. TRAMP. Emacs can also run remote to where the X server runs. Those two are entirely orthogonal to each other.


> I definitely disagree with this sentiment. At my last job, I had to do most of my work on a remote server (because it had a GPU), and I found VS Code far more pleasant to use than plain old SSH. People recommended using an editor on the server side or freaking around with Win SCP / Cyberduck, but VS Code was just so much better in so many ways.

I'm not familiar with VS Code setup for remote editing. Does it run LSP on remote and give you full hints, errors, etc. locally?

> As a bonus, you get fully native behavior and keyboard shortcuts that you'd expect from your platform. Things like text selection and copying just work, even some of your addons are carried over.

Selecting text with Shift+ArrowKey or something like that is not a "bonus", it is just a bad text editing experience. Keyboard shortcuts are the way they are on Vim/Emacs not because their developers can't figure out how to bind Ctrl+C/Ctrl+V...


> I'm not familiar with VS Code setup for remote editing. Does it run LSP on remote and give you full hints, errors, etc. locally?

Not sure about other languages, but when I use VS Code to develop Rust remotely, it prompts me to install the rust-analyzer extension (which is my preferred LSP server for Rust) to a remote whenever I'm opening a project for the first time. VS Code is able to distinguish between extensions that need to be installed on the same machine as the code (like the LSP server) and extensions that are just making changes to the local UI.

> Selecting text with Shift+ArrowKey or something like that is not a "bonus", it is just a bad text editing experience. Keyboard shortcuts are the way they are on Vim/Emacs not because their developers can't figure out how to bind Ctrl+C/Ctrl+V...

I use an extension for vim keybindings in VS Code. When connecting to a remote host, the vim plugin still works fine, and it doesn't prompt me to install anything on the remote side, since the changes are synced to the remote host at a much higher level than that (i.e. locally mapping "dd" to "delete this line from this file" and sending that to the remote rather than sending the remote the keystrokes "dd" and having the remote determine how to interpret it).


My understanding follows (I don’t use it but I’ve noticed the processes running on other people’s machines). Corrections welcome.

It’s split into a client (web frontend) and server that’s doing all the work. The server can be run anywhere but it’s effectively a bunch of stuff installed in a docker container. When you start an instance for a project, it creates a container for that instance with all the code folders etc bound in. LSPs are running in that container too.

It’s possible to use your own image as a base (you might have custom deps that make installing the requirements for an LSP hard, for example).

The trick they use here is that there's some base container/volume that has most of the server stuff downloaded and ready to go. Whether you start a normal instance or from a custom image, they do it the same way by just mounting this shared volume and installing what they need to bootstrap the server.

It also appears they create a frontend window per server process too. So the master client process starts, you select a project folder, they create a new server container and a new client window connected to it. The frontend client is local while each server can be anywhere (obviously you could run the client with X if you wanted to further muddy that).


I use that all the time in my hobby-tinkering pseudo cloud server on an ODROID SBC. It feels like I'm literally on that specific computer directly. Plugins like Docker work as well.


I've been wanting to try something like that with neovim's remote features, but haven't found the time. Has someone attempted this? If so, how successful was it?

I've always been a big user of powerful laptops because I do like the mobility (allows me to work/browse stuff outside my home office) and I dread the pains of properly synching my files across a laptop and desktop (not only documents/projects, but also configs and whatnot).


`nvim scp://devhost/main.c`


Using the editor on the server from a remote connection is silly. However, VSCode is not unique. On my local Emacs I use ssh via tramp [0] to browse files on the server and then edit locally. HOWEVER, I also have physical access to my server. Emacs then gives me the added benefit of being able to run in a terminal on the physical server without any window manager installed.

[0] https://www.gnu.org/software/tramp/


> Using the editor on the server from a remote connection is silly.

In my experience, this is the best way to do remote work. The alternative is to either not work with remote resources (data, hardware, etc), work locally and sync changes to remote, or work locally with a remote mounted file system (unless you need remote hardware).

For the parent, they needed GPU access, so they had to run remotely for hardware access.

I normally need particular data that is too big to move locally, so I like to work remotely for that reason. I could remotely mount drives via an SSH Fuse mount, however the IO speed for this method can quickly become a problem. For me, it is a much better experience to either use a remote web editor (rstudio server), VSCode remotely (which is a remote web editor over ssh), or vim. With web based remote editors, you still draw the screen locally, but get updates from remote. And more importantly, compiling and building takes place remotely.

I find this method much better than either pure remote access (VNC/RDC/X11) or local-only editing with syncing code and/or data. But it very much depends on your work. When I don’t need to work with remote data, a locally managed Docker devcontainer provides a much better development experience.


In my experience, it's the worst way to do remote work. There are so many better solutions.

If TRAMP is too slow, just mount the remote filesystem locally using FUSE somehow. Use SSH to run processes on the remote system like compile and run the program. No need to run the text editor on the remote system.

You can also do it the other way around: have your remote system load your local data. I developed a small bare metal OS this way. Ran the cross compiler locally, had the output go to some NFS mount which was also available via TFTP. Booted the target system with PXE.

Running a text editor on a remote system is good for one off things and maybe as a last resort, but that's it.


> just mount the remote filesystem locally using FUSE somehow

This is the step that never works consistently for me. There is always some amount of random extra latency that makes this workflow painful. I work with some extremely large data files, so random access to these is the primary issue.

In general, the idea is that it is often better to do compute where the data already is. My experience is that you should also do the programming closer to where the data is as well. This tends to make an iterative development loop tighter.

But this is highly dependent upon what you’re doing.


That's a different thing, though. You don't edit the data in a text editor interactively, do you? I would do any interactive editing with a local editor and then fire off remote processes to operate on the data.

It's funny because my reasons against using a text editor remotely are exactly the same: to make the development loop tighter. I am very upset by latency and always try to remove it where possible. I think this is the kind of thing where we'd need to look over each other's shoulders to understand our respective workflows.


> You don't edit the data in a text editor interactively, do you?

That’s exactly what I’m doing. The code is written on the remote server. VSCode’s remote setup is actually very good at this. Mainly because, it is really a web editor that is hosted remotely and you use a local browser (Electron) to interact with it. The processing loop then happens all remotely.

But really, I’m talking more about data analysis, exploration, or visualization work. This is when I need to have good (random) access to 100’s of GB of data (genomics data, not ML). For these programs, having the full dataset present during development is very important.

If I’m working on more traditional programming projects, I can work locally and then sync, but recently I’ve been using more docker based devcontainers. These are great for setting up projects to run wherever, and even in this case, the Docker containers could be hosted remotely or locally (or more accurately in a VM).


Yeah I used to work with genomics data and never did I think I needed to have part of my text editor running on the high performance cluster.

I think people are just talking about different things and confusing each other. The original comment I replied to was arguing against SSHing in (or vnc or something) and running the text editor there. VSCode isn't doing that. It is running the interactive part locally. It's hard for me to understand why it needs a server part, though. If you want to edit something locally it has to send it across the network. There's no way around it. It seems like six of one and half a dozen of the other.


Vscode remote has almost no visible latency, period.


Because it's running the editor locally...


Is there an efficient way to do "Find in files" from a vim or vscode instance running locally and editing+compiling remote files via ssh? Preferably something that runs instantly for 1 GiB repos?


Haven't tried it on exceptionally large repos, but in VSCode, since the actual find logic runs on the server, it should work just fine. If I remember correctly, even on vscode.dev (in the browser with no server), your browser downloads the search index and then search and navigation are fast. Though it may struggle with very large repos.


I’m not sure what you mean by vscode running locally with editing via ssh. I’m fairly certain that when you do a remote connection in vscode, it literally runs the vscode program remotely and you are just connecting to a tunneled web interface. The only thing running locally is the Electron browser shell. So, remote “find in files” is running remotely, so it should be as efficient as it would be from that side.

That said, you can also open a terminal in vscode and use grep. If you’re running remotely, the terminal is also remote. That’s what I normally do.


VS Code uses ripgrep under the hood (locally and remotely).


Have you actually used vscode remote? If not, you should. If you have, all I can say is that I've personally used all the solutions you are mentioning, and for me vscode remote is the top, bar none, even for very large repos.


I worked at a place that had a half-built distributed system that we still needed to use (many bidders buying ad space from an API-based market). One great thing with tramp is that you can tramp into multiple systems simultaneously. So you are editing, say, files from 5 different systems (tweaking the yaml or whatever) at the same time. You could then start eshells on each of those systems at the same time. It made it really easy to adjust the settings and restart multiple apps really quickly (big screen, 5 files on top, 5 shells on bottom).

I always get a kick out of people saying "you use that! you need to switch to editor X, it has feature Y!" and me thinking yeah, that feature has been in emacs since before you were born. It is getting a bit crufty in its age though. Its main attraction is for people who like LISP. There's a project called lem (IIRC) that is rewriting it in much higher performance Common Lisp.


Absolutely: https://lem-project.github.io/ Works for Common Lisp out of the box (it's a Lisp machine) and for other languages (LSP client).


> Using the editor on the server from a remote connection is silly.

Why?


Constant screen redrawing and input lag.


Which is not only not the case with VS Code, but is explicitly explained at the top of the thread.


> Which is not only not the case with VS Code [...]

Which is also immediately mentioned after the claim that using a remote editor is silly.


Tramp is quite slow though, IMHO, and last I used it, Emacs very much expected file access to be synchronous.


Tramp has like four backends, try sshfs if ssh is too slow


This ability also proves useful when trying to do complex package management in an isolated manner with ROS; I ultimately used a remote VS Code shell running off the robot's OS just to have my IDE recognize the many local and built dependencies that require a full ROS setup.


I wish I could find a decent way to make VSCode work properly on Android.


People still forget Eclipse when it comes to a full-blown yet not bloated IDE. That thing consumes fewer resources than a bare-bones VSCode install while running 5x the tools. It has been able to handle everything from code to git to CI/CD and remote development since 2013.

I've been using it for 20 years and I think it's the unsung hero of the IDE world.

This article doesn't mention it either as a modern GUI IDE.


There’s a reason people don’t talk much about Eclipse these days and it’s because it was a pain to maintain back when it really should have shone.

I really wanted to like Eclipse but gave up on it a decade ago because it required constant management from release to release. I remember one job I had where I didn’t need an IDE all that often and I would spend nearly as much time configuring Eclipse again upon the next time I came to use it, as I was spending time writing code in it.

I’m sure it’s improved leaps and bounds in that time - 10 years is a heck of a long time in any industry, let alone IT. But I do know I wasn’t the only one who got frustrated with it. So myself and others switched to other solutions and never looked back.


I was there, but it has changed. "Four updates a year" was a great decision to make, to be honest.

It just updates now, and I export my installation XML and send it to people when they want the exact same IDE I use.


> I remember one job I had where I didn’t need an IDE all that often and I would spend nearly as much time configuring Eclipse again upon the next time I came to use it, as I was spending time writing code in it.

So basically the same as setting up and configuring a development environment today, except that nowadays it's a lot more centered around the command line and involves a bunch of disparate, half-documented packages/tools from GitHub (and that also inexplicably require 10 to 1000 times more space and clock cycles).


The “I” part of IDE stands for “integrated” whereas what you’re describing is just a development environment without any integration into your text editor.

That all said, I have found VSC to be piss poor for templating out new projects. And some ecosystems like TypeScript really do need a lot of boilerplate before you can ever start on a "hello world" application.


I used to like Eclipse but honestly it was and still is a hog. At the time i used it in the late 2000s it was basically the best IDE for C++, having features that Visual C++ users either did not have or needed to pay extra for plugins to get. I used it at work then, when everyone else used Visual C++.

However at home i had a computer i bought late 2003 (which was a high end PC at the time but still) and the program was so heavy i remember despite running it under a lightweight environment (just X with Window Maker) i had to choose between Firefox and Eclipse because otherwise things would crawl due to the excessive memory use both programs made :-P.

Eventually i switched to other IDEs and forgot about Eclipse. But i did try it recently again and while obviously doesn't feel as heavyweight as it did back then (i'm running it on a 8 core machine with 32GB of RAM so it better be), it still feels sluggish and startup time is still quite slow.

Also TBH i never liked the idea behind workspaces.

These days i don't write C++ much but when i do i use either QtCreator or Kate with the Clangd LSP (which i also use for C).


I think 9 seconds of startup time with 1GB of memory use is pretty acceptable for an IDE of the size of Eclipse (just timed it).

Considering I'm not closing it down for whole day when I'm using it, waiting for ~10 seconds in the morning is not that bad.

In 2003, Eclipse was in its infancy and was an absolute hog, I agree on that front.

Actually you are not expected to have "n" workspaces. Maybe a couple (personal and office) at most. Project relationships and grouping is handled via "referenced projects".

Kate is an awesome code-aware text editor. I generally write small Go programs with that, but if something gonna be a proper project, it's always developed on Eclipse.


There were a couple things going on in 2003.

First, it was quite common for a company to buy a developer the exact same corporate standard computer as everyone else. So lots of computers had limited ram to run things like J2EE, Lotus Notes, and Eclipse at the same time. It was painful.

The startup was always slow because it preloaded everything. This was a deliberate choice to not load things and interrupt the developer. Just don't close it all day and the experience was very good.

A plus compared to the standard of the day was that it used native widgets. So doing something as simple as opening a file explorer to browse through your project was considerably faster than in comparable IDEs at the time.

Personally, I loved the customization which was dialed all the way up. I could have multiple windows with different arrangements of panels within them, all saved. I haven't run across anything as configurable since then.

It also had the big benefit of their plugin system which shined when working with multiple languages in the same project.

It always felt to me like it became trendy to crap on Eclipse because of the slow startup time and it never could shake that.


> Considering I'm not closing it down for whole day when I'm using it, waiting for ~10 seconds in the morning is not that bad.

I tend to close and run the IDEs (and most programs) multiple times per day - a clean desktop kinda lets me clean/reset my thoughts - so long startup times are annoying. Of course i wouldn't avoid a program if it was responsive, fast and did what i wanted after it started up.

> Actually you are not expected to have "n" workspaces. Maybe a couple (personal and office) at most. Project relationships and grouping is handled via "referenced projects".

Yeah i also had a single workspace but i worked in a bunch of other things, including some Java stuff in NetBeans and i want to have everything in one place. I do use and prefer IDEs but every other IDE could just store projects wherever i wanted.


> I think 9 seconds startup time with 1GB of memory use is pretty acceptable.

9 seconds of startup time on a modern GHz computer is completely unnecessary and unacceptable IMO. There may be 9 seconds of work it wants to do at startup, but there's no way it needs to do it in a single thread before letting you start to interact with it. This is an optimization effort, nothing more. Give me a month with their codebase and I could get that down to under a second. (So could most decent software engineers.) It would just need to be something they actually put effort into.


In that 9 seconds, a Java VM starts up, starts up an OSGI compliant platform and loads all the plugins you have installed and enabled in that particular Eclipse installation. When the window appears 9 seconds later, the VM is warmed up, all your plugins + IDEs (yes multiple) are ready to use. No additional memory allocations are done WRT your development plugins. Also remember that these plugins are not in isolation. There’s a dependency graph among them.

In the following seconds, updates are checked and indexes of your open projects are verified and re-run if necessary which takes again <10 seconds on different threads. Your computer may scream momentarily due to increased temperature on all cores if indexes are rebuilt.

If you think that code has not been optimized in the last 20 years, you're mistaken. Many tools from Android Studio to Apache Directory Studio run on that platform.

Nevertheless, I’ll try to profile its startup tomorrow if I can find the time.


For me it's completely wild to think that all the steps you mentioned should take more than half a second on a mid-range 2023 dev machine able to process 40GB/s of data in RAM and read at 7GB/s from SSDs. Normalizing things being this slow is why using computers is such a pain nowadays; this should entirely be treated as a bug.


I think there's a bit of misconception about how I run this software.

First of all, neither is this machine's RAM bandwidth 40GB/sec, nor does it have a 7GB/sec PCIe drive. It's a run-of-the-mill, SATA-backed system with a 7th generation i7.

Second, the JVM is always heavy machinery to start. The startup CPU utilization is around 600%, dipping to 400% and spiking to 800% at the end, showing that some plugin dependency requirements are slowing things down. Also, that's a 20-year-old OSGI platform, which runs a ton of interconnected plugins, not a mere text editor. It's in the same ballpark as MATLAB or scientific modelling software in complexity and sophistication.

Lastly, as an HPC admin and developer, I live by and die by performance. Computers can do some things that are complex for humans (e.g. floating point number crunching) stupidly fast, but some things which are seemingly simple for us (e.g. understanding language) can be equally stupidly slow and resource hungry.

For me, it's wild to think about complaining for something without investigating and understanding it completely.


It may not be about optimization, but about user experience. You may have to be clever and think outside the box. Can you save a snapshot of all that work so that the next instance doesn't have to do it before showing the window? And then assuming it has to do the work (which may not be necessary if it just started up--once a day is probably sufficient), it can redo the work in a separate thread.


Eclipse already does non-critical background tasks on separate threads, and non-critical startup tasks are done in "deferred early start" queue, which is emptied after initial startup.

Normally Eclipse IDE is not something like Vim, which you enter and exit 10 times a day. It just lives there and you work with it. 10 seconds in the morning for a tool that big is very acceptable, esp. after considering that everything is instantaneous after that 10 seconds.


Android Studio is IntelliJ


It was Eclipse when they first started. Tons of IDEs still run on the Eclipse platform, too, especially in the embedded space.


It wasn’t “a hog” it was the hog. I don’t know where OP gets the idea that it was svelte. IntelliJ is considered a pig and a half in most eras but at the time, for most if not quite all projects, Eclipse had a worse memory footprint, for less functionality.

Also the UX was mediocre at best and infuriating at worst. Practically every interaction worth performing in that editor took at least one more click or keystroke than IntelliJ, and I would rank IntelliJ as merely good, but not amazing with input economy.


For about five years, my daily start of the day ritual was starting eclipse, going to a 10 minute standup, and coming back two minutes before it stopped loading. To be fair, it's probably better now, and I stopped doing Java work in 2014.


Anyone who thinks Eclipse is compact is hallucinating.


What I don't understand about Java is why it doesn't just take what it needs. If I commanded Eclipse to open, that's it. Open an editor, maybe 2-3 recent files, and let me move the cursor around. If IntelliJ isn't ready yet, so be it, but don't slow my UX down because it's running a bunch of services I didn't ask for. If I hit the IntelliJ autocomplete then fine, I'll wait if it's not ready, but until then, the editor frames should be just as snappy as Notepad. Java doesn't put the user first!


One of the biggest tricks with Java IDEs was not giving them more memory, but giving them more initial memory.

Tuning startup heap size could cut upward of 40% off of startup and settling time.


Interesting. I hate Eclipse with a passion; I find the ergonomics so horrendous, and back in the day it was a hog. Maybe on today's hardware it's leaner than webkit-based vscode. But the last time I tried to use git with it... it made things 10x harder than the CLI. It was so bad that I developed RSI in 24h (and I'm a daily emacs user).


It’s possible that Eclipse has had a “Firefox moment” where someone carved it down to a lighter core, but I’ve no reason to check.

Seconded on the ergonomics. They were a joke. Longest inputs of any IDE I’ve ever used. If your sequences are longer than vim you need to get your head examined.


eclipse was a child of the java components era, even a trimmed down eclipse would still have tons of baggage

i really despised (to stay polite) everything about eclipse/java culture.. lots of generic layouts and components, nothing i cared about or bringing me dense information about code. way too much chrome and perspectives and what not. it was a cultural dead end, the people who "enjoy" working this way are on a different axis from me.. give me emacs+magit where things are right under your fingers and easy to extend.. and people using this kind of tools (i'm sure vim/neovim crowd likes that too even more) produce more tools of that kind


Sorry, but "not bloated" really doesn't enter my mind when I think of Eclipse. The few times I used it for Java programming, it took forever to start up, and the UI was laggy as hell during regular use. Granted, that was about 10 years ago, but on a (at the time) beefy Windows PC.


But Eclipse was often laggy and slow. So it felt more bloated to users than VS Code, which is snappier even though it is bigger.


It was, for C++, for a couple of years, 12-13 years ago. It's neither laggy nor slow for the last 8-9 years. I've written my Ph.D. thesis on it, on C++, which was a sizeable high performance code.

It never crashed, allowed me to work remotely if required, integrated with Valgrind, allowed me to do all my tests, target many configurations at once, without shutting it down even once.

Currently it has a great indexer (which is actually an indexer + LSP + static analyzer and more), LSP support if you wish, and tons of features.

It gets a stable release every three months, comes with its own optimized JRE if you don't want to install one on your system, etc.

Plus, it has configuration snapshots, reproducible configurability, configuration sync and one click config import for migrating/transforming other installs.

That thing is a sleeper.


While Eclipse today is certainly a quite decent IDE, which I use mostly in the form of STM32CubeIDE[1] now, it was serviceable at best back in 2005-2006 when I used it for some Java classes.

In any case, it's a younger product than the offerings in the article.

[1]: https://www.st.com/en/development-tools/stm32cubeide.html


> In any case, it's a younger product than the offerings in the article.

Yeah, but my gripe was about the closing of the article, which mentioned VSCode. I think the author just doesn't know about it.

Eclipse is my de facto C++/Python IDE and I'd love to develop a decent Go plugin for it, too. Maybe someday.


Not just C++. I used to use it for Java development and had the same experiences as the GP too.

I’m sure it’s really good these days. But I’ve moved on now and my current workflow works for me, so I don’t see the point in changing it until I run into issues again.


Java never got that slow, but it used to tax the system a lot in the earlier days, yes.

I developed Java with Eclipse, but the project I did was not that big back when Eclipse was not in its prime, and Eclipse was in its prime by the time I was experienced enough to be able to "floor it" in terms of features and project complexity.

Now it's just a blip on the memory usage graph when working with big projects, and way, way more efficient than the Electron apps which are supposed to do 20% of what Eclipse can do.


My experience with Eclipse, about 10 to 15 years ago, was the exact opposite. It was incredibly bloated. With some combination of plugins installed, it became unusable. At a previous company, we were using some sort of Scala plugin, and Eclipse couldn't even keep up with my typing! I moved on to IntelliJ around that time.


All of the JetBrains users sitting around comparing notes, trying to figure out what was wrong with our coworkers that they thought eclipse was worth using, let alone defending.

JetBrains has plenty of problems, which they seem to want to address but I fear Fleet won’t fix, and I lament but understand people wanting something lighter these days, but eclipse isn’t even in that conversation.


Additionally, I always felt the whole Eclipse "user experience" was terrible. Setting up a project was a mess. The default layout left a tiny window for code. The default fonts were bad. I could go on.


"It's only free if your time is worthless."


I (author) wouldn’t say I “forgot” about it. I was there when Eclipse became a thing, and my memories are all pretty grim. Difficult to use, slow, very resource hungry… so I never really paid much attention once I finished school. It probably is better now as others are saying, but I don’t know nor care at this point to be honest.


I started Android development with Eclipse. That IDE is a beast. People also forgot about Netbeans.


Netbeans was my absolute favorite IDE for Java development. After its last release, I honestly felt lost.

I’ve gotten back up to speed via IntelliJ but it still doesn’t feel as effortless as it did in Netbeans. And it needed way less care and feeding than Eclipse.

Sorry, there’s a lot of “feels” in this post but for me, Netbeans was the one Java IDE that I didn’t have to fight with.


What do you mean, “last release”? NetBeans 20 was released just this month. I still use it.


Apologies for not clarifying -- the last release of Netbeans prior to the Oracle acquisition of Sun.


Yes, Netbeans was very underrated. I used it for making Java ME apps for Nokia phones, and for learning Java.


It still is. Quite a few features hold up, like the Swing editors, the two-way editing between rendering templates and Java code, and the quality of the profiling tools for such an open source product.


My first Java IDE was Symantec Café (which became Visual Café). I haven't thought about that in 25 years.


I also used NetBeans a bit years ago, though that was mainly because it had a (mostly) WYSIWYG editor, unlike Eclipse. (Technically Eclipse had a plugin for that which was supposedly superior in how it worked - it parsed the code to figure out what the GUI would look like and updated it in place, instead of NetBeans' approach of generating code with commented-out sections you weren't supposed to touch - but in practice it was both slow and clunky.)

For Java specifically I felt NetBeans was faster and simpler, though I bounced between it and Eclipse because I also used Eclipse for other stuff (C++ mainly), so unless I wanted a GUI I used Eclipse. I stopped writing Java some time ago though.

I did try a recent NetBeans build but I found it much less polished than what I remember from before it became "Apache NetBeans".


I have good memories of Eclipse, from back when I was doing Java. I remember at the time it seemed everyone dissed it, much as it feels like everyone disses Jira now and for the last decade, but I liked it.


I think you are mistaken; Eclipse takes up 3 times the RAM VSCode does. I can use VSCode with only 6 GB of RAM even on big projects with native code such as Kotlin, Java, C, Swift, etc. Eclipse will not run on 6 GB of RAM, and neither will JetBrains or Android Studio.


My system monitor says it's using 1.3 GB after warming up, and even forcefully reindexing a big C++ project.

I don't think VSCode will use 400 MB with that amount of code, plus Electron, plus all the LSP stuff you run beneath it.

In that state Eclipse will fit into a 6GB system just fine. I'd love to try that at a VM right now, but I don't have the time, unfortunately :)


If memory serves, fully loaded Eclipse would take about 20-25% more memory than IntelliJ, which was itself rightfully called greedy.

At the time most of us felt it was worth the cost of entry for all of the tools you got, which eclipse had a subset of.


Eclipse... not bloated. I can't say I understand those words in that order.

I used it for quite a while until JetBrains stole my heart, but it was nothing if not bloated, even then.


I still love Eclipse, and you can pry it from my cold, dead hands.

The last couple of years, however, it feels like Eclipse is actively getting worse. And I don't mean that it's lacking features. I mean that every new release seems to break something else.

I tried reporting some bugs, but that required signing some kind of soul-selling agreement with the Eclipse Foundation or some other nonsense.

I then tried fixing those bugs, but there is no up to date documentation on how to build the IDE from the myriad of repositories and modules. So I gave up.


VSCode is really a text editor-in-IDE-clothing. Also, it's an Electron app and those are notoriously resource heavy.

~20 years ago I became an early IntelliJ user. From version 3 maybe? It's hard to recall. I've never looked back.

But I did try Eclipse and... I never got the appeal. For one, the whole "perspectives" thing never gelled with me. I don't want my UI completely changing because now I'm debugging. This is really part of a larger discussion about modal editors (eg vim vs emacs). A lot of people, myself included, do not like modal editors.

But the big issue for Eclipse always was plugins. The term "plugin hell" has been associated with Eclipse for as long as I can recall. Even back in the Subversion days I seem to recall there were 2 major plugins (Subclipse? and another?) that did this and neither was "complete" or fully working.

To me, IntelliJ was just substantially better from day one and I never had to mess around with plugins. I don't like debugging and maintaining my editor, which is a big reason why I never got big into vim or eclipse. I feel like some people enjoy this tinkering and completely underestimate how much time they spend on this.


For me, perspectives are perfect, because they provide me with the perfect set of tools for everything I do at that moment. It's probably a personal choice, so I agree with and respect your PoV.

The plugin conflicts were way more common in the olden days, that's true; however, I used Subclipse during my Master's and, as far as my memory serves, it was not incomplete. It allowed me to do all the wizardry that Subversion and the managed Redmine installation Assembla offered back in the day.

It's much better today, and you can work without changing perspectives if you prefer, so you might give it another shot. No pressure though. :)

Trivia: VSCode's Java LSP is a headless Eclipse instance.


At a minimum, perspectives play very nicely with the plugins system.

Eclipse was created over that extremely interesting idea that you can write a plugin to do some completely random task, and have all of it reconfigured on the perfect way for that task.

But you can't have a rich ecosystem of plugins without organizing them in some way, and nobody ever created a Debian-like system for them as it's a lot of thankless hard work.


I’ve been using vscode for a few years now and while I find its search amazing, it doesn’t do much more for me. Its syntax highlighting is good, but the autocomplete recommendations have been driving me insane recently.

I'm writing a Rails API with a Next.js UI - anyone got any suggestions on alternative paths I should take?


This may not apply to you but I find it so weird how many programmers won't invest even a modest amount into software they'll use 8 hours a day every day. Particularly when we'll so easily spend money to upgrade RAM or buy a new PC.

RubyMine on a cancel anytime personal license is $22.90/month (or $229 for a year). That's nothing. I'd say just try it. If you don't like it, you might only be out $23.

I'm not a Ruby person so can't comment on that really. For Java (and C++) it's a lifesaver. Things like moving a file to a different directory and it'll update all your packages and imports. Same with just renaming a class or even a method.

The deep syntactic understanding Jetbrains IDE have of the code base is one of the big reasons I use them.


JetBrains' solutions. I think it's called RubyMine.


> VSCode is really a text editor-in-IDE-clothing.

This is kind of my problem with it. I'll use VSCode for TypeScript but I avoid it if there are other alternatives. The entire model of VSCode just doesn't jibe with me.


Eclipse used to be my reference for most-horribly-bloated-IDE....

It's bizarre to now see it described this way.


Eclipse is the first thing that comes to my mind when I think of the most bloated and stodgy IDE on the earth.


Ha. I mostly used Eclipse in college. I learned how to compile programs from the Command Prompt (Windows user back then) primarily to avoid Eclipse LOL. It was dog slow and somewhat difficult to navigate


There does seem to be a lot of hate for eclipse. The complaint I always hear is that it is a pain to use. Personally I’ve always liked it, even though I’ve used the other popular IDEs.


Same here.

You will find old rants from me complaining about workspace metadata, but that problem has been sorted for quite some time now.


Agreed. And there's simply nothing that comes close to the power of the workspace when working on multiple projects that share dependencies.


The original idea was to replicate the Smalltalk image approach, but backed by a virtual filesystem instead.

Eclipse is Visual Age for Smalltalk reborn, after all.

It was common to have plugins corrupt its metadata, but somehow it finally became quite stable.


A blast from the past there. I used Eclipse for Java in its infancy while I was at university and thought it was decent enough compared to whatever version of emacs was on whatever version of Solaris my CS department servers ran.

A couple of years later I started an internship at a bank and spent ~3 hours trying to get a project building before someone introduced me to IntelliJ, which I still use every day almost 20 years later!


I'm shocked to hear you describe it as "not bloated". Eclipse took many seconds to start up, responded slowly to typing, and used huge amounts of memory. It was by far the slowest of any application I had used. I used it when I had to, but never got comfortable with it because it was just way too slow.


I should fire it up; I haven't tried it in a while. It was the only thing I could use that seemed to (more or less) accurately index large projects that you, uh, had some issues compiling and just wanted to navigate around and look through. Now I mostly just use rg for big projects, inside of neovim.


Eclipse also had (has?) the very interesting Mylyn plug-in, which narrows the code down to the context you're working within. Think collapsing everything in e.g. the project tree, and also functions within files.

This context is built up based on what part of the code you work on.


Honestly, I feel like the primary reason why IntelliJ "won" over Eclipse and Netbeans was that it was first to market with a decent-looking dark mode. Back when Eclipse and Netbeans were as stark white as Windows Notepad... and caught with their pants down as developers abruptly decided en masse that white backgrounds were over, and every app needed to be dark mode first.

Hell, Eclipse STILL doesn't really have a nice dark mode. The actual editor view looks okay, but the dark mode feels very bolted-on to the surrounding UI.

I think this is the primary reason why VSCode is eating the world today. People will talk about the plugin ecosystem and all these other community inertia advantages. However, VSCode was exploding in popularity BEFORE that plugin ecosystem was in place! If we're really honest with ourselves, we flocked to it because it was even more gorgeous looking than Sublime Text, and without the nag modal to pay someone 70-something dollars.

Appearances MATTER.


JetBrains “won” because of code inspection tools and code completion that was light years ahead of Eclipse and Netbeans. I remember in my Java days I used to be able to do in a keystroke what my Eclipse friends did in a dozen dialogs.


I don't disagree, but my anecdotal experience from working with peers is that the overwhelming majority of IntelliJ users never learn a small fraction of the keyboard shortcuts and advanced tooling.

I really do believe that for most people, IntelliJ is basically a VSCode that: (1) has a better debugger and some more polish around Maven/Gradle integration, and (2) came out 10+ years sooner.

But ~10 years ago, everyone I knew was flocking over because IntelliJ felt less slow and bloated than Eclipse, and its dark mode UI was more attractive in comparison. Then it became the more-or-less official way to develop Android apps (back when Android's U.S. market share was a lot higher), and that was all she wrote.


No, dark mode is a red herring. I used IntelliJ because it had better functionality and wasn't incredibly slow (only somewhat slow).

User experience matters. Most of user experience has nothing to do with dark mode. Dark mode is pure fashion, and should be prioritized appropriately.


I love eclipse, but it's unbearable on macos


How come? I use it regularly. Genuinely asking.


I don’t know. It’s even worse with IntelliJ. IntelliJ crashes regularly. It’s unbearable.

Running an M1 on Sonoma.


Interesting - I run Intellij Ultimate on Macbooks (both Intel and m2) and never have a crash. Infrequently run into bugs when upgrading the ide or 3rd party plugins; that requires some sort of cache invalidation or project reimport (couple times a year), but it's pretty smooth sailing for something I use across many different projects and languages. Java, kotlin, TS, python, groovy, shell scripting, json/xml/yaml/html/tsx are all generally touched 40+ hours on a weekly basis - it just works.

I do agree intellij is memory hungry with multiple projects open and a variety of languages involved, but RAM is cheap enough (and VMs/Docker/K8s hungry enough) that I just don't buy a machine with less than 32GB anyway, so I give intellij up to 6 GB and never give it another thought.

I don't do much android development, but do find Android Studio to feel clunky and slow at times, guessing because of the heavy integration with Android dependencies and emulation, but not really something I know enough about to comment with any sense of authority.


How so? I use it daily, with hundreds of open projects, and it just flies.


I’ll raise you NetBeans to that.


> People still forget Eclipse

thank god


For me, the closest modern successors to the Borland suite are Visual Studio (not VSCode) and the JetBrains IDEs. They feel like the only ones with a holistic, batteries-included design that actually focuses on debuggability.

I actually feel that the terminal-based focus of modern FAANG-style development hindered proper tool development, but I was never able to explain it to anyone who hasn't used Borland C++ or Borland Pascal in the past, except maybe to game developers on Visual Studio.


C++ Builder versus Visual C++ for RAD GUI development.

I never understood why the Redmond folks have such a hard time imagining a VB-like experience for C++ tooling, like Borland managed to achieve.

The two attempts at it (C++ in .NET, and C++/CX) always suffered pushback from internal teams, including sabotage like C++/WinRT (nowadays in maintenance, as they are having fun with Rust/WinRT).

The language-extensions argument, a tired one, doesn't really make sense, as these were Windows-only technologies, all compilers have extensions anyway, and WinDev doesn't have any issue coming up with extensions all the time for dealing with COM.

Or the beauty of OWL/VCL versus the low-level approach of MFC.


DevDiv vs WinDev. The Windows group maintains the C++ compiler, so you get the resource editor for dialog templates and that’s about it. And that actually got worse from Visual Studio .NET onwards; my guess is that it got taken over by the DevDiv people when they unified the IDEs.


Yes pretty much that.

Windows could have been like Android, regarding the extent of managed languages usage and NDK, if DevDiv and WinDev had actually collaborated in Longhorn, but I digress.


Out of the loop here: how is terminal-based development related to FAANG?


I guess it's caused by the "brogrammer" culture of Silicon Valley, where you would get hazed if you dared use a GUI-based tool. Also, being more focused on open-sourcing their tools (because other companies do not open source theirs and are therefore un-cool), which begets a "simpler" and more "engineeristic" approach to UX, one which does not need UI experts and designers.


Lots of companies end up with their own internal tooling. They have their own build systems, packaging systems, release systems, version control, programming languages, configuration languages, everything.

Some even have their own editors.

There is a lot of value in picking a transferable editor and using that. From that point it becomes "what is the best editor that will _always_ be available?" Emacs/Vim fit that.

Then the muscle memory can begin to grow, and there is one less bit of friction in starting a new job.

One of the best pieces of advice I received was "pick an editor and go deep".


> One of the best pieces of advice I received was "pick an editor and go deep".

Agreed, I'd be infinitely less productive if I couldn't use the editor I learned to master in the past 20 years.

A corollary to that would be "pick a company that lets you use your own editor". There's lots of friction from IT departments towards emacs and vim. The package/plugin system is a security nightmare with lots of potential supply chain attacks and more importantly no trusted vendor to blame when something goes wrong.


It became sort of a hackerish trend in the past decade, using a hyper-customized (neo)vim in lieu of an IDE.


Except maybe Apple, all the others are service-oriented companies. They run heterogenous pieces of code on their servers and their ideology is “move fast and break things”. It’s a hipster culture that reinforced the use of 1980s “video terminal” editors and CLI tooling because they were supposedly more flexible for their workflows.


I loved Turbo Pascal, but to me the high point of Borland's tooling was Delphi (1995). I don't want to sound like old man yells at cloud, but every time someone says that building GUIs with Electron is so easy compared to native apps, I just wished they experienced Delphi in its prime.

There are some very short/simple demos on YouTube:

https://www.youtube.com/watch?v=m_3K_0vjUhk


> but every time someone says that building GUIs with Electron is so easy compared to native apps, I just wished they experienced Delphi in its prime.

Every time someone says that, I mention Lazarus. I still get a thrill out of using it (one of my github projects is a C library, and the GUI app is in Lazarus, which calls into the API to do everything).

The problem I find with Lazarus is that it seems to be slowly dying; yes, they still work on it, but feature-wise they are very behind what can be done with HTML+CSS and a handful of js utility functions.

A wealthy benefactor could very quickly get Lazarus to the point of doing all the eye-candy extras that HTML+CSS let you do (animated elements, for example).


Looks pretty similar to C# WinForms that ships with Visual Studio. https://youtu.be/n5WneLo6vOY?si=maped85dMX90KIn1


They can still experience it today with the community edition.


If you can agree to their very strange terms and conditions.

Or, use Lazarus/Free Pascal, which is almost identical, except for the documentation, which needs a massive overhaul, in tooling and content.


Not everyone is religious about refusing such agreements.

Those who aren't can profit from the very latest version.


If you make gears, for example, and sell more than $5000 of gears, you still have to pay for Delphi under that license... it's really weird.


I will happily fill an hour with trash talking Microsoft, but getting the father of Delphi on board is one of the shrewdest things they’ve managed. I wish he’d found a different project to sink his teeth into though.


Twenty nine years ago, Metrowerks Code Warrior was released https://en.wikipedia.org/wiki/Metrowerks

I had the shirt ( https://www.rustyzipper.com/shop.cfm?viewpartnum=282427-M558... ) and wore it for many years... wish I knew where it was (if its still in one of my boxes somewhere).

The IDE was nowhere near as clunky as a text-only DOS screen. https://www.macintoshrepository.org/577-codewarrior-pro-6


The entire System 7 UI was really a thing of beauty.


The alternative was MPW which was awful! Long Live Code Warrior! Its debugging was probably a decade ahead of its time.


I still miss the simplicity and power of the Lightspeed/THINK IDEs for Pascal and C.

For me Metrowerks was a big step back in terms of complexity, speed and affordances.

The thing that I loved about MPW was MPW Shell and Commando, brilliant if you compare it to the state of the UNIX art at the time, probably tcsh, and still to this day feeling just a bit like the future.


At least the programs in the screenshot have actually useful and visible scrollbars. Seriously, scrollbars are super useful and should never be hidden, they both provide information you want to see and actions you want to do, why is everything trying to make them as subtle as possible today, even most Linux UI's which I'd expect are normally made more for usefulness than "design trends"?


GitHub's Android app doesn't even show scroll bars. And no scroll grab or snapback in apps even when there is a scroll bar. Am I the only person who scrolls back to check something and wants to quickly return to where I was in a document? Even if just FF on Android had this I would be happy.

On desktop we can drag scrollbars but I can't imagine what it's like to use modern 4-8px action area scroll bars if you have fine motor control challenges.

I just don't understand how we got to this point. Do people not use the apps they write?


This must be a bug though. If you unfold hidden comments, you jump to the BOTTOM, where you just WERE, rather than the top. So you scroll up, with no scrollbar, frantically, because you don't know how far you have to go. Until you reach the top - and you drag down ONE MORE TIME, because you're scrolling frantically, so the whole thread reloads, and everything is folded again, and you're back where you started.


On Linux this depends on your theme really; all the themes I use have scrollbars - e.g. here is an example with Gtk3 (which IIRC introduced "autohiding scrollbars" to the Linux desktop)[0]. It is "cdetheme-solaris", which I think is from [1]. I might have modified it a bit though. Normally, though, I use Gtk2 apps with a modified "cleanlooks" theme (a screenshot from Lazarus[2] I made a couple of days ago shows it - including the scrollbars :-P).

[0] https://i.imgur.com/CAyu5Ay.png

[1] https://github.com/josvanr/cde-motif-theme

[2] https://i.imgur.com/Yw1tTcD.png


Moreover, make the scrollbars big enough for my thumbs on my touch screen. Or at least make it optional.


Even if VSCode/other IDE had features that blew Neovim out of the water, I don’t think I’d move over. The customizability, modal aspect, and open-source-ness are huge for me. I can create macros on the fly for repetitive tasks, can navigate without ever having to stall my train of thought to touch my mouse, and customize every single key binding and the code that it runs. I can create custom commands to do every single thing I’ve ever conceived of wanting to do. I can upstream bug fixes and see those others have suggested that haven’t been up streamed yet. I will concede that for some, maybe this is too much work to set up “just a text editor”, but I enjoy it, and I spend most of my day editing or viewing text, so to me, it’s worth it.

If there is one thing I’ve learned in my years of software engineering, it’s that everyone prefers different workflows. Let people build their own, however they want, with whatever tools they want, and they will be happier for it.


Almost everything you wrote is available for any modern IDE, and modern IDEs, thankfully, don't assume that your code is text. So they give all the things you mentioned and superior tools to work with code out of the box: anything from refactoring to code analysis to several types of semantic search to...


Correct me if I'm wrong, but you can't define macros in a couple of keystrokes, or contribute to the development of any modern IDE.

Nor were they designed to be hackable or customizable. If you open one of the settings sections in VS Code you are presented with a giant, over-engineered control panel with a myriad of options you can toggle.

Whereas Vim has a blank canvas you splash with a couple dozen lines of code to make it unique and personal. It's almost like you aren't using just vim, but your own hand-made editor built on a great minimal base, that is Vim.


> Corect me if I'm wrong, but you can't define macros in a couple key strokes or contribute to development of any modern IDE.

1. I did say most, not all

2. The main reason for macros I've seen is the lack of useful features in editors like vim. There aren't that many repetitive tasks where you need macros that often. I think I used IDEA's macro recording once in the past 10 years.

> Where Vim has a blank canvas you splash with couple dozen lines of code to make it unique and personal.

I:

- don't need "unique and personal", I need working out of the box

- don't want to "hack on my editor" to get the basic functionality I already usually have :)


> Almost everything you wrote is available for any modern IDE ... So they give all the things you mentioned ...

I guess I've misread your comment somehow.

But anyway, my point is that modern IDEs can't give the same experience as or replace Vim, Neovim, Emacs. And while you don't need that experience, there are plenty of people who do. Nothing wrong with either side =-)


> my point is that modern IDEs can't give the same experience as or replace Vim, Neovim, Emacs

Every time I ask "what experience is that", all I get back is "unique editor" and "you can write macros for repetitive tasks".

No, thank you. I prefer the experience of an IDE that doesn't think that your code is plain text and offers tools that text editors stuck 30-40 years in the past know nothing about.


Any good Vim-emulator extension has macro support. VSCode also has an extension that lets you run the actual neovim server to manage your text buffer.

The settings GUI in VSCode is just an auto-generated layer over raw JSON files. You can even configure it to skip the GUI and open the JSON files directly when you open settings.
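For example, a couple of illustrative entries in settings.json - the second one is the setting that makes VSCode open the raw JSON instead of the settings GUI:

    {
      "editor.minimap.enabled": false,
      "workbench.settings.editor": "json"
    }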


I am easily distracted (as in, ADHD-like), I enjoy very sparse work spaces in general. Tools with lots of icons, windows, and other widgets are very uncomfortable to me. I prefer typing commands, I believe a well written command language and a good search function are more comfortable. I will go to great lengths to avoid some tools if it means avoiding a clickodrome : for example, when coding for STM32 devices, I prefer bare metal GCC + makefile over STMCube.

To each his own though; it's nice to have different tools for different people.


The two most indispensable programs for me are "Midnight Commander" for Linux/OSX and the like, and "Far Manager" on Windows (and mc there too under WSL2).

I'd be lost without Far Manager at work. I stopped using Explorer a long time ago (and use it only to verify some reported workflow from other users, or for some obscure thing not possible in Far Manager).

Other tools: tig (pager), emacs, mg (mini-emacs) - I wish that was available on Windows too without needing msys2/cygwin, and few others.

Back in the DOS days - it was Norton/Volkov Commander, PC Tools, but also Turbo/Borland Pascal 3, 4, 5; the most awesome E3 editor (from this series - https://en.wikipedia.org/wiki/E_(PC_DOS) ) - and many more


30 years ago, THINK C for the Mac was already nearing discontinuation. It was a great compiler plus graphical IDE with debugger for its time. It's hard to find info about it, but this site has some screenshots of the various versions:

https://winworldpc.com/product/think-c/4x#screenshotPanel


This fella has a number of interesting videos of development on System 6 using THINK C!

https://jcs.org/2022/03/05/serial


Yes! I absolutely lived in Think C for many years. You’re right though, it was on the way out by then, supplanted by CodeWarrior and MPW, which were both really good too.


Visual Studio and XCode are the closest experiences to “first-class IDEs”, reminiscent of the Borland stuff from the early 1990s. They offer tight integration with the native toolchains and a set of menus that mostly make sense. Environments like VSCode or Emacs are a generic platform for text editing and file manipulation, a lowest common denominator for a variety of languages, workflows and tastes.


Try Eclipse, or Geany if you want something very small, yet powerful for its size.


How odd to find a mistake in 30-year-old Turbo C++ man pages from a screenshot. printf and vprintf send formatted output to stdout of course, not stdin.


Language Server Protocol has really improved things. I use it with neovim. It makes it straightforward to get IDE features in vim for a variety of languages and allows the code-analysis part to compete on its own. With Go I'm mixing golangci and gopls; in Rust there was a transition from rls to rust-analyzer. And this effort can be shared between vim/vscode/emacs/etc, so once someone makes a good LSP implementation it doesn't need to be ported everywhere.
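As a rough sketch of what that wiring looks like on the neovim side (this assumes the nvim-lspconfig plugin; the keymaps are just examples):

    -- init.lua: hook gopls and rust-analyzer into neovim's built-in LSP client
    local lspconfig = require('lspconfig')
    lspconfig.gopls.setup({})
    lspconfig.rust_analyzer.setup({})

    -- generic keymaps that work the same for every language server
    vim.keymap.set('n', 'gd', vim.lsp.buf.definition)
    vim.keymap.set('n', 'K', vim.lsp.buf.hover)
    vim.keymap.set('n', '<leader>rn', vim.lsp.buf.rename)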


I was expecting the author to mention Smalltalk because I distinctly remember people praising Smalltalk for its IDE but I think the IDE I was thinking of is from the late 90s or early 00s.


I think it's fair to say Smalltalk(s) have had in many aspects the most advanced IDE in existence at every moment since its introduction.

Demos for a couple old versions: https://www.youtube.com/watch?v=NqKyHEJe9_w Demo for Pharo: https://www.youtube.com/watch?v=baxtyeFVn3w


Not only Smalltalk, but all Xerox PARC workstations.

Interlisp-D, Mesa (XDE) and Mesa/Cedar all shared the same ideas as Smalltalk regarding developer tooling.

Same on Genera with Lisp Machines.


An interesting continuation of that evolutionary line is the Open Dylan IDE: https://opendylan.org/history/apple-dylan/screenshots/index.... the deconstruction of the object browser / Miller columns seems interesting even as a general UI concept.


> I think the IDE I was thinking of is from the late 90s or early 00s.

Smalltalk's from the 70s and 80s, and almost certainly had what you're thinking about given it's where both Microsoft and Apple got their foundational ideas (restricted to significantly less powerful hardware), and later unrelated smalltalks retained a lot of now quirky considerations and behaviours. Self inherited a lot of those and is from the late 80s.

So yes, I was also expecting the author to talk about Smalltalk.


If you actually look at Xerox’s hardware, it’s quite a stretch to call Lisa or Macintosh “less powerful.” It’s kind of amazing what Xerox was able to accomplish on such underpowered systems; they did it largely by having multitasking microcode to handle performance-sensitive I/O, and implementing BitBlt there too.


I suspect there are quite a few more niche languages/interfaces that the author didn’t consider when it comes to GUI IDEs. I thought of EiffelStudio immediately, myself, having worked with a group that used it in a past life.


I don't think SmallTalk ever had a TUI. Even the very early versions on the Xerox machines used a GUI, and that GUI persisted even when it was all ported to Solaris.


Oberon was this weird mix where you had a proper GUI on the screen but it would basically only show text. You could run commands by selecting text from inside any arbitrary window. The Plan 9 OS and the Acme editor have kept this workflow.


Digitalk had Smalltalk TUI for DOS named Methods. https://pbs.twimg.com/media/ET8sPbsXQAAcgpb?format=jpg&name=...


The title of the article doesn't mention TUIs (or UIs at all) but I was thinking of a GUI. Specifically it seems I was likely thinking of Pharo (which is '00s not '90s so off by a decade).


The article subtitle is: "A deep dive into the text mode editors we had and how they compare to today's".

The second paragraph says: "This time around, I want to look at the pure text-based IDEs that we had in that era before Windows eclipsed the PC industry."


Vim is the only tool I've been able to use at every place I've ever worked at, from intern to staff engineer in three FAANG companies. I've watched tool teams spend months integrating the latest GUI editor, only for it to get killed by corporate acquisitions and replaced with N+1 that offers an almost identical feature set.

Meanwhile there's always a community of vim and emacs users who build all the internal integrations by themselves. Vim and Emacs aren't editors, they're platforms and communities, and the benefit of using them over VSCode or JB is that you get to be a part of these communities that attract the best talent, give the best troubleshooting advice, and share advanced configurations for everything and anything you could possibly ever want. They are programmable programming environments first and foremost, and that attracts people who are good at programming and like to hack on stuff.

Technologists who choose the proprietary path of least resistance when it comes to their most important tools are, I think, ultimately missing out in a lot of ways, not least of which is the actual editing experience. A craftsman should understand his tools inside and out, and picking something you can't fully disassemble, and that doesn't have the breadth of knowledge of a tried and true open source tool behind it, ultimately becomes just as frustrating as the initial learning curve of these older tools.


Changing IDE isn't that big of a deal.

I would much rather have to spend a few days to relearn a tool every few years and to get the benefit of that tool, than accept a lower quality tool just to avoid a few days work.

If you work in C++, then Visual Studio has been around for 20 years - Visual C++ for 10 years before that. If you use Java, then IntelliJ has been around for 20 years, PyCharm for 15. If you're writing JavaScript, I don't know what to say, because the framework du jour has changed so many times in that time frame that I don't think the tool saves you much.

> Technologists who choose the propriety path of least resistance when it comes to their most important tools I think are ultimately missing out in a lot of ways, least of all is the actual editing experience

Equally, I can say purists or ideologues are so concerned with theoretical changes and breakages, and so afraid of the possibility of something changing, that they miss out on game-changing improvements to tooling.


I think the way people use IDEs is a lot deeper than just reducing them down to "purist" or "ideologist". That sounds a tad bit dismissive for something that is essentially your trade tool. It's akin to saying all keyboards are created equal because they have the same keys. The way you lay the thing out and the perspective that you build it for matters quite a lot. Distilled in a quote, "the whole is greater than the sum of the parts."

I got used to JetBrains' key mappings when I was at my last company, I also adored their debugger. My new company uses VSCode and I started down the venture of remapping all of them to JetBrains keys. I ended up with a lot of collisions and things that no longer made sense because the keys were mapped using a different perspective when laying them out. I'm sure I'm not alone being in a pool of engineers that primarily navigate using their keyboard.

VSCode's debugger is better now, but it still doesn't really stand up to JetBrains'. On the other hand, launching VSCode on a remote box is much easier and their configuration is much more portable with their JSON based settings files. I like using VSCode, but it took me months to get up to speed with how I navigated, and more generally operated with, JetBrains' IDEs.


A person using Vim or Emacs has had best in class integration with the unix environment, modal editing, and remote development. Today, both editors have integration with VCS via fugitive or magit, fuzzy finding, LSPs, tree sitter, and code generation tools using LLMs. These tools have not stagnated, they've continued to evolve and stay best in class in many areas. So the "one tool is better than the other" argument doesn't really sway me. My point still stands that the community and open architecture are more important than any one editing feature.

> Equally, I can say purists or idealogists are so concerned with theoretical changes and breakages, and so afraid of the possibility of something changing that they miss out on game changing improvements to tooling.

Blindly following the crowd is also dangerous. Making choices based on principle is what allows good things like open source communities and solutions not swayed by corporations to exist, even though they might require more up front investment.


The problem with these tools is that despite having worked with computers for 35 years, I don't get them. My brain is not made for them.

I only use out-of-the-box vim when I work on consoles (which is still a fair amount of the time). I can exit (hey!), mark/cut/copy/paste (ok, yank!), save, and find/replace if I must. Everything else is just beyond what my brain wants to handle.

Otherwise it's a lot of Jupyter lab and some VSCode. I can't say I know all about those either.

The last IDE that I knew pretty well was Eclipse, in about 2004. I even wrote plugins for it for my own use. That wasn't too bad for its time; I don't quite get why it went out of fashion.


There are those of us that still use it :) The productivity gains lie elsewhere. And running various maven and git commands from the command line instead of clicking around... something about keeping the skills in better shape.


> My point still stands that the community and open architecture are more important than any one editing feature.

No, it doesn't, because it's essentially a matter of opinion, not an objective fact that can be measured and proven. You prefer to have an open architecture and a community of enthusiasts. I prefer to have most of my editor features available out of the box, and modal editors just confuse me.

At the end of the day, developer productivity is not a function of their editor of choice, so what matters is that each developer is comfortable in the environment they work in, whether that be Vim, Emacs, IntelliJ, or VS Code.


Learning curves are uncomfortable, so by your logic we should all always take the path of least resistance and use the tool that makes things easy up front without considering the long term benefits of using something like Vim or Emacs. I find this to be counterproductive to having a great career as a software engineer.

Rapidly assimilating difficult to understand concepts and technologies is an imperative skill to have in this field. Personally, I find the whole notion of Vim being difficult to learn, or not "ready out of the box" perplexing. Writing some code that's a few hundred lines or less, where it's mostly just importing git repos, is easy. Vim has superb documentation. How hard must regular programming be if it's difficult to just understand how to configure a text editor?


It's not that configuring the editor is hard, it's that it's unnecessary—the only thing you've been able to identify that I'm missing by using IntelliJ is an ideology and a community, neither of which are important to me in a text editor.

If it matters to you, that's fine—use whatever you're comfortable with! I just don't understand why you feel the need to shame others for choosing to focus their energy on something else.


> I would much rather have to spend a few days to relearn a tool every few years and to get the benefit of that tool, than accept a lower quality tool just to avoid a few days work.

I've worked with engineers who studiously avoid configuring any sort of quality of life improvements for their shell, editor, etc. because they claim it makes things easier when they have to use an environment that they can't as easily configure, like sshing into a shared box without a separate user for them to customize. This mindset has always been hard for me to understand; not only does it seem like they're optimizing for the rare case rather than the common one, but it seems like they're actually just lowering the quality of their normal experience in order to make the edge cases feel less bad without actually improving them at all


> I've worked with engineers who studiously avoid configuring any sort of quality of life improvements for their shell, editor, etc.

This is me. It's really easy to explain my mindset here, though. If I get used to a nonstandard tool or tool configuration, then it causes real productivity issues when I'm using a system that lacks that tool or the ability to customize.

This is not a rare edge case for me at all. I constantly work on a half dozen very different platforms. Having each platform work as much like the others as possible is a quality of life improvement, and improves the efficiency and quality of my work.


But isn't that why we have config files that you can easily copy to a new system?

But I guess you have to weigh how much time you'll gain from copying your personal config against the overhead of copying it in the first place. If you often switch to a new system where you would have to copy the config file, but you only really edit 2-3 files on that system, for which a personal config won't have much benefit, then it is understandable. If I need to set up a new server, for example, that doesn't need to be configured heavily - just some installs and small config changes - and I won't have to touch it anymore after that, then why would I spend time putting my personal editor config there? But if I have a personal computer that I use daily, and I edit a lot of code, it benefits me greatly to optimize my editor for my use cases. And whenever I get a new PC, or a PC from work for example, I can just copy my config and benefit from it.


> optimizing for the rare case rather than the common one

This has been a common enough case for me to not get too used to super customized local environments. I'd rather learn the vagaries of commonly available tools than build myself a bespoke environment that I can't port anywhere easily. That's not to say I don't do any QOL changes but I try to be careful about what I end up relying on.


I’ve usually had this attitude in the past and for me it came from my background: I worked IT, help desk, PC repair, etc for years before I got into programming and none of those contexts allow customization. And then for a while I did infra work where I’d be ssh’d into a vanilla VM or container instance to debug a deployment issue. Even though I mostly work in my own bespoke environment now, I still try to stay fairly vanilla with my tools. It’s pretty nice to be able to reset my workstation every year to get a clean slate and know that it’ll only be a couple of hours of customization to get back to what I know.


>I would much rather have to spend a few days to relearn a tool every few years and to get the benefit of that tool, than accept a lower quality tool just to avoid a few days work.

Vim police has issued your red warrants. Justice will be served.

Jokes aside, I'd say yes. I have worked with Eclipse, NetBeans, and JB, and nowadays I'm happy with VS Code. For a polyglot, it's the best out there at a price point of $0, and the tooling is pretty good for me.

I'm doing Python, Go, Typescript and occasional Rust without missing anything.

Being a few keystrokes faster with command kata is not going to save years of labor. The actual effort in software engineering is not in typing text or in text editing. Not at all. The battle is far, far beyond that and bigger than that.

EDIT: Typos


Just to add my $0.02: WebStorm makes writing NodeJS feel a lot more like Java - it's pretty good.


The best part is, you don't need to choose between using IDEs and your favorite text editor! Most modern IDEs with proper plugin support can be configured to provide a superset of Vim's functionality. I personally use IdeaVim on the IntelliJ Platform [1] with AceJump [2] and haven't looked back. You can import many of the settings from your .vimrc and it interoperates fairly well with the IDE features. Although I prefer Vim keybindings and it is technically possible to hack together an IDE-like UX with ctags and LSP, the IDE experience is so much better I wouldn't even consider working on a large Java or Kotlin project from the command line.

[1]: https://plugins.jetbrains.com/plugin/164-ideavim

[2]: https://plugins.jetbrains.com/plugin/7086-acejump
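For what it's worth, a minimal ~/.ideavimrc sketch (assumes the IdeaVim and AceJump plugins are installed; the action name in the last mapping is from memory, so treat it as illustrative):

    " reuse the plain-Vim settings you already have
    source ~/.vimrc
    " let the IDE handle J for joining lines (IdeaVim-specific option)
    set ideajoin
    " illustrative mapping that triggers AceJump through its IDE action
    map <leader>j <Action>(AceAction)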


I think you missed the point of my post. The value of Vim/Emacs isn't the modal editing or key chords. It's the community and architecture, which you lose if you're still using JB with a frankenport of Vim on top. In fact, I think what you're suggesting is the worst of both worlds - a reimplementation of Vim on top of an already resource-hungry and complicated IDE that's supposed to let you do things with a mouse. So you're left guessing whether something from real Vim will behave the same in your Vim, plus you now have two competing environments for ways to do things, and you have to wait for JB to implement features that (neo)vim has already implemented, without supporting the open source communities that did the work in the first place.

You also lose the killer feature of Vim, which is being able to work over an SSH connection on any sort of device, even those that don't have a GUI.


> You also lose the killer feature of Vim, which is being able to work over an SSH connection on any sort of device, even those that don't have a GUI.

In the last decade, I can count on one hand the number of times I have SSH'ed into a machine to do actual editing - and in every situation, nano would have been totally fine. Crippling my workflow so I can handle the most obscure scenarios, ones that we've mostly moved past, is not a good decision.


> In the last decade, I can count on one hand the number of times I have SSH'ed into a machine to do actual editing

And I need to do this multiple times every workday. Generally speaking, this isn't an obscure scenario that we've mostly moved past. It's just not a common scenario in your particular work environment.


> I can count on one hand the number of times I have SSH'ed into a machine to do actual editing

Both companies I've worked at previously employed zero trust networking. That means developer laptops don't have privileges to things like secrets management infrastructure or even feature flag config. You end up making a choice: mock services that require trust, which comes with its own set of dangerous tradeoffs, or build remotely in a trusted environment. Many devs choose the latter.


As I said, I have worked at several FAANG companies where people had to wait for editor integrations because the source repos were so big you couldn't work locally. Having a tool that works everywhere no matter what has been incredibly valuable to my career. I also wouldn't say working for one of these companies that pays very well and handles a large portion of the world's traffic is obscure.


A decade and a half for me, and even then I remember telling my tech lead that this felt like the stone ages.


I assure you, the open source community around modern IDEs is thriving. I see plenty of innovation in plugin marketplaces that is hard to find in even the Emacs/Vim ecosystem. Despite its share of detractors, there is a lot of value in having a business model that prioritizes language support, platform stability and a well-curated plugin marketplace. The IdeaVim integration is thoughtfully designed and I seldom notice much difference coming from Vim. I see where you're coming from with resource consumption, but even Fleet is starting to offer rudimentary Vim support, which I expect will address many of the issues around bloat. [1]

[1]: https://youtrack.jetbrains.com/issue/FL-10664/Vim-mode-plugi...


I'm a happy ide with Vim bindings guy. We do exist.

I think in vim edit patterns when editing text, but I don't particularly care about most of the : commands. I'm happy to use the vscode command palette for that.


> You also lose the killer feature of Vim, which is being able to work over an SSH connection on any sort of device, even those that don't have a GUI.

The gold standard for remote development is Visual Studio Code. All of the UI stuff happens locally, and it transfers files and runs commands remotely. It's way less chatty than going over SSH or an X11 connection.


I heavily disagree. From experience, working over SSH with tmux allows me to work with my editor, run commands, start up various qemu instances, start debuggers etc, and other tools that have their own TUIs. I think remote VSCode makes sense to people who have very narrow needs to edit specific projects rather than live on a remote machine.


The terminal window from VSCode still gives you all of that, with some extra ergonomics from the GUI. No need to remember ctrl b + % to split a Tmux window, scrolling and find just works, no need to install plugins to save sessions.


Can you save your session? I think I have a tmux session running for months now on my vps. Everything is exactly the same when I connect.


> The value of Vim/Emacs isn't the modal editing or key chords. It's the community and architecture

These are empty words that have no meaning. I don't use my IDE for "community" or for "architecture". I use my IDE for writing code, navigating code, finding code, refactoring code, exploring unknown code bases, analyzing code, moving code around, reading code...

How many of those things have the words "community and architecture" in them?

> you have to wait for JB to implement features that (neo)vim have already implemented

You mean the other way around. Nothing NeoVim implements trumps the depth and breadth of features IDEA offers out of the box. NeoVim (and others like vim and emacs) is busy re-creating, with great delay and poorly, a subset of a subset of features of a modern IDE.


When I joined FAANG #1, JetBrains wasn't an option, and all of our work had to be done on remote servers, with code bases big enough that indexing took on the order of hours. Meanwhile there have been internal communities at all of my companies for Vim and Emacs to make them work in these environments, write plugins for various company services, etc. None of the editors or IDEs we are talking about struggle with any of the common tasks you mentioned, what something like Vim allows is extensibility and portability, and communities that support it no matter where you need to use it.

I do believe that for some use cases, like a person who unfortunately only works with Java or Android, JetBrains makes sense and is probably your only option. I believe outside of those environments, JetBrains offers no tangible benefits and plenty of downsides - cost, resource consumption, can't be run in a terminal, not easy to work with remote machines.

By the way, vim already has built-in support for cscope, ctags, autocomplete, terminal windows, and gdb debugging; if you work on a C-like project, it already is an IDE. With one plugin (ALE), which takes one line of config, you get an actual IDE that can auto-detect LSPs and offers refactoring, code actions, etc. - the exact same things you would get in an IDE. But for very large projects, I have found that CLion and clangd both take far too long to index, so having an editor that works without indexing is a huge plus.
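To make the "one line of config" concrete, here is a rough vimrc sketch (assumes the ALE plugin is installed; the mappings are optional examples):

    " the one line: enable ALE's LSP-backed completion (must be set before ALE loads)
    let g:ale_completion_enabled = 1

    " optional convenience mappings onto ALE's LSP commands
    nnoremap gd :ALEGoToDefinition<CR>
    nnoremap gr :ALEFindReferences<CR>
    nnoremap <leader>rn :ALERename<CR>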


> I do believe that for some use cases, like a person who unfortunately only works with Java or Android,

Your bias is showing through :)

If people had less bias and actually looked at what an IDE can and does offer, they wouldn't be dismissive with "oh, if we just add these 12 plugins and an integration, then for this one particular language we may have a full IDE" (in actuality, a subset of a subset of all the features an IDE offers).

> With one plugin (Ale) which takes one line of config, you get an actual IDE that can auto-detect LSPs which offers refactoring, code actions, etc. that is the exact same you would get in an IDE.

Looking at the "huge" list of features that Ale lists consisting of 6 very basic things, I again see that people who use vim have never ever in their life used a proper IDE.

> and so having an editor that works without indexing is a huge plus.

This I can actually agree with :) Indexing is often such a pain


It's very telling who has had to actually support prod systems and who hasn't when it comes to this topic. Very often the only interface we used to have before better CI/CD pipelines and idempotence was a serial tty or ssh connection. There were a lot of sysadmins run off in the 00s (for various reasons such as increasing salaries) and a lot of institutional knowledge about real operations such as this was lost or diluted.

Another reason why I like to encourage people not to customize their vim/emacs too much (at least for the first year or so of learning) is that when it's 0300 and prod is down, you don't want to fight your tools to match your expectations. Another example: HN loves to hate on bash, but I love and live in bash.

The names and titles have changed, but I still see the same dev/ops battles.


> that attract the best talent

I've seen hugely talented folk on vim/emacs/emacs+evil, and on VSCode/JB. I think what the latter tools do is make some of the advantages of being proficient in vim/emacs/regex available with less of a learning curve.

Currently there are some combinations that are simply best-in-class: VSCode+TS, JetBrainsIDEA+Java/Kotlin, VisualStudio+MicrosoftGuiStuff. vim/emacs may come a long way in these areas, but cannot beat the integration level offered in these combinations.

Also, you mention "proprietary", but JetBrainsIDEA and VSC are open source to some extent, which does improve the community imho. But the fact that they are less "open access innovation" projects, and more company owned, is clear to everyone.

Finally: AI will come to software devt, and I wonder if AI tools will ever be available on true open access innovated IDEs.


> I've seen hugely talented folk on vim/emacs/emacs+evil, and on VSCode/JB. I think what the latter tools do is make some of the advantages of being proficient in vim/emacs/regex available with less of a learning curve.

Take Reddit and Hacker News as a fitting analogy: a community with a higher barrier to entry or a more niche focus will be smaller, but the quality is vastly improved. There are still going to be people who sit in both communities, and smart people in both, but it's not controversial to say that an initial learning curve tends to attract people who can pass the learning curve and are motivated to do so. Another great example is the Linux kernel development process.

> Currently there are some combinations that are simply best-in-class: VSCode+TS, JetBrainsIDEA+Java/Kotlin, VisualStudio+MicrosoftGuiStuff. vim/emacs may come a long way in these areas, but cannot beat the integration level offered in these combinations.

Integration in some ways, in other ways a terminal based tool that adheres to the unix philosophy is more integrated with thousands of tools than an IDE where every tool has to be converted into a bespoke series of menu items. Just look at fzf, git, rg, etc. integrations in Vim. They are only lightly wrapped and so the full power of the tool shines through, and it's easy to customize it to your specific needs, or add more tools.

> Finally: AI will come to software devt, and I wonder if AI tools will ever be available on true open access innovated IDEs.

In the same vein, AI tools that act as black boxes but are integrated in the same transparent way as git or rg in Vim at least allow the editor to remain fully transparent to the end user, and leave the complexity in the LSP or bespoke tool. I really see no difference between how AI tools will relate to editing and how LSPs do today.


> Take Reddit and Hacker News as a fitting analogy

In so many ways they are not, but I see why you come to this conclusion. Some overlap in users.

To me open source is "common good" stuff; HN and Reddit are "us playing on someone else's computer+software".

All options have integrations, gits, fzf's, etc. And AI is not just "another black box", it's going to save you a lot of typing very soon. This is good: more time for thinking and crafting; less time for boilerplate-y stuff.


> VSCode+TS, JetBrainsIDEA+Java/Kotlin, VisualStudio+MicrosoftGuiStuff

Do any of these finally have something remotely as good as Magit? Or a good email client?


I found JetBrains' Git better in many ways than my console flow. Tried, but never got into Magit, as I moved on from Emacs.


VisualStudio (not code) has a decent git interface. The "view git changes" window is similar to magit-status.

Although magit is superior for staging custom chunks from a selection. Most other tools seem to think a single line is the atomic unit of code and cannot comprehend that 2 changes on 1 line can be chunked apart.


Er.... "VSCode+TS" ... wat?

ITT: people who have not used tools they're talking about with confidence.

Everything available in VSCode is available in (neo)vim, without a slow buggy UI, modals, misfocused elements, and crashes.

All the LSPs used by vscode are easily available, including Copilot, full intellisense, and full LSP-backed code refactors/formats/etc.


Neovim has been hugely problematic for me as an IDE (lots of plugins). Lots of errors related to OS dependencies I need to manually install and keep up to date.


I use VSCode every day and can't remember the last time it crashed or the UI glitched.


The file explorer constantly glitches -- it never knows where the focus is supposed to be so adding/moving/deleting/etc. files ends up selecting the wrong ones.

Most plugins that add barely any IDE-like functionality grind the whole thing to a halt.

Whether you're in insert or replace mode on autocompletes is random, and changed by plugins.

The list goes on.

VSCode is an extremely poor quality piece of desktop software hacked together with web tech. It's an amazing plugin for a website.


VS Code has been crashing at launch on Wayland for more than eight months:

https://github.com/electron/electron/issues/37531


Just today I helped a coworker patch their /etc/bash.bashrc because VSC's bash integration was broken enough to not load bash-completion. Apparently, VSC would rather hijack bash's entire boot process (via the --init-file flag) and then simulate, obviously poorly, bash's internal loading process, instead of just sourcing a file into bash after it loads.


In my anecdotal experience the best developers are the ones that don't overly focus on their tools at all. One of the most proficient developers I've known was perfectly ok programming on a little square monitor with Visual Studio where the code window was 40% of the screen real estate.

It doesn't have to be that extreme, but it reminds me of hobby craftsmen who focus on having a garage full of the tools of the trade while never finding the time to work on a project with them.


At some point in a developer career one shifts from a tool focus to a work focus.

I used to be picky about my operating system and often would spend time making the tools that I wanted or preferred to use work within the project dev environment, as opposed to just using the tools my employer provided. It usually ends up just being easier, and if everyone is using the same tools then pair programming or collaborating becomes easier, too, as compared to having to deal with the one stubborn dev who insists on using Emacs on a Mac when everyone else is using Visual Studio.


I think the benefits come when you already know the extent of your work and you can build tools to streamline your workflows. Also, having something that you love working with instead of something that frustrates you every day is very nice.


This has been my experience as well. The most productive people are the ones who actually focus on the work instead of wasting time configuring a perfect editor.


Mileage varies; even the most ardent vim user I know gave up and switched to VS Code this year. It's just too much to try to keep up with when projects and technologies change. I've programmed in C++, Go, Python, Java, and Angular just in the last year. I can believe that there's vim plugins to handle all those, but the energy it would take to find auto-complete and navigation and formatting and debugging and any number of other out-of-the-box IDE functionality is more than I'd like to think about. Then there are the associated tools - Kubernetes yamls, swagger, markup, Makefiles. In IDEs they are only one plugin download away.

I love vim, I used it exclusively for years when I was doing C/C++. I still ssh into servers a lot and use it pretty much daily. Still, I'm far too lazy to try to turn it into my full-time development environment.


> I can believe that there's vim plugins to handle all those, but the energy it would take to find auto-complete and navigation and formatting and debugging and any number of other out-of-the-box IDE functionality is more than I'd like to think about.

Well, I'll be the bearer of the good news, then!

NeoVim has a native LSP client which unifies auto-complete/navigation/formatting in one built-in interface, only requiring you to install a per-language LSP server.

As for debugging, there's also DAP (the Debug Adapter Protocol), which NeoVim doesn't have native support for, but there's a plugin for that.
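
A minimal sketch of that setup, assuming the nvim-lspconfig plugin is installed and (say) a clangd binary is on your PATH; swap in whatever server your language needs:

    " hook NeoVim's built-in LSP client up to clangd via nvim-lspconfig
    lua require('lspconfig').clangd.setup({})
    " completion and go-to-definition then go through the built-in client
    autocmd FileType c,cpp setlocal omnifunc=v:lua.vim.lsp.omnifunc
    nnoremap gd <cmd>lua vim.lsp.buf.definition()<CR>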


There’s more to language support than LSP.

I use vscode and IntelliJ these days. In rust, IntelliJ lets me rename functions and variables across my entire project. Or select a few lines of code and extract them into their own function. It’ll even figure out what arguments the function needs and call it correctly.

I’m writing a paper at the moment using vscode and typst (a modern latex replacement). The vscode plugin shows me the resulting rendered pdf live as I type. I can click anywhere I want to edit in the pdf and the editing window will scroll to the corresponding source text.

Maybe there’s ways to do all this stuff in vim but I never found it. I used vim on and off for 20 years and I barely feel any more productive in it than when I was 6 months in. As far as I can tell, IntelliJ is both easier to learn and more powerful.

Making nontrivial software in vim just doesn’t feel productive. LSP is the tip of a big iceberg of features.


The rename across the project scenario is a LSP feature that Neovim supports. I use it frequently. I do miss the ability to trivially extract a function. I used to do that all the time in Visual Studio back in my C# days.


Yeah; and there's so many little features like that - most of which I use rarely, but they're still useful. Aside from rename, there are:

- Change function arguments. (Eg, reorder the arguments of a function and update all callers)

- Add a new function argument. Eg, if I change a call from foo() to foo(some_int), the suggested actions include adding a new parameter to foo with some_int's type.

- Contextually fill in trait, struct, or match statements

- Move a bunch of stuff to a new / different file

- Organize imports

- Run a test. Any function with #[test] and no arguments can be run or debugged instantly with a click of the mouse.

- Up and down buttons for trait implementations. See the definition for a trait method, or jump to one of its implementations.

I have no idea how to do any of this stuff in vim. Maybe it's possible with enough macros and scripts and mucking about. I really do admire vim's tenacity, but seriously. The time spent learning a modern IDE pays dividends in weeks, and our careers are measured in decades.


Emacs also has LSP support built-in with eglot and a good start for treesitter support.


VSCode is a significantly more pleasurable experience working over a 100ms+ network connection than either vim or emacs (this being the reason why I switched away from emacs/tramp myself).


If you haven't already (and I know this doesn't hold up for GUI emacs or vim), consider running them through https://mosh.org/


Mosh is definitely an improvement over ssh, especially for connections with lag spikes (and I use it for terminal sessions). But it's no match for VSCode.


Visual Studio Code is currently where all of the tooling effort is focused. For the best tools with the best integration you should be using it or, depending on language, JetBrains. These are easy to use and developers can be productive in them from the word go -- without extensive customization. Hell, if you open a file in a new programming language, VSCode will suggest plugins to install.

You do NOT want to build integrations by yourself. You want to build whatever product you're building. The fact that vim and emacs users do this is yak-shaving that pulls precious time away from the task at hand and serves as a distraction for editor fetishists. Do not become an editor fetishist. Modern IDEs help you become more productive faster and give you more support for things like debugging. (You are using a debugger to inspect and analyze your code, right?)


The IDE is not more complicated than your customized Vim setup once you get the latter close in functionality (you won't). I use all the keybindings anyway, so it's not like the UI adds anything bad.

I switched from lightweight editors to IDEs many years ago and my productivity went up A BUNCH - even if sometimes it uses gigabytes of ram. so what? Even my old used machines have 8-16gb of memory now.

I would honestly much rather hire people that work in IDEs than people that like to "hack" on their vi/vim/emacs setup. The number of times I've been in a screen share with someone while they're trying to code or debug something with Vim etc, it just feels so slow to watch them work that I get that embarrassed-for-them feeling.


>"Technologists who choose the propriety path of least resistance when it comes to their most important tools I think are ultimately missing out in a lot of ways"

Nope. Not missing Vim. Live and let live


One nice side effect of modern TUI applications is that they can be distributed via Docker Hub as platform-agnostic applications that just require a compatible color terminal.

E.g. here's my C64 emulator running in Docker (it's a real C64 emulator underneath, but only renders the C64 PETSCII buffer via ncurses, e.g. no graphics or audio output):

    docker run --rm -it flohofwoe/c64
...code is here: https://github.com/floooh/docker-c64


Yes, the Borland TUI was pretty good. But nothing came even close to what we had on the Commodore 64 with the X-Ass Dev Kit. You could code in a nice TUI with syntax highlighting. Press a key to assemble in a split second, execute the program, press reset and be back in the TUI exactly where you left off. And this with source file > 64 kByte, creating executables that occupied the complete memory space of the computer.

https://csdb.dk/release/?id=27625

http://www.fairlight.to/docs/text/xass_docs.htm

So productive and easy to use.


Well, I guess we are lacking any LISP or Smalltalk fans to one-up that today :)


This article completely ignores the Macintosh and the greatest IDE ever, which came out in the mid-80s: Coral Common Lisp. It ran on a Mac Plus with 1MB of RAM and it was fucking awesome. It included a drag-and-drop interface builder that let you build a complete app in literally a matter of minutes. Nothing I've seen has come close since.


That sounds like something I would enjoy playing with? But I can’t find much about it online.


The descendant of CCL runs on modern Intel Macs. (It also runs on Linux and Windows but without the IDE.) The modern IDE is quite a bit different from the original. In particular, it no longer has the interface builder. But it's still pretty good. It is now called Clozure Common Lisp (so the acronym is still CCL) and you can find it here:

https://ccl.clozure.com/

If you want to run the original that is a bit of a challenge, but still possible. The original was never ported directly to OS X so you have to run it either on old hardware or an emulator running some version of the original MacOS, or on an older Mac running Rosetta 1. In the latter case you will want to look for something called RMCL. Also be aware that Coral Common Lisp was renamed Macintosh Common Lisp (i.e. MCL) before it became Clozure Common Lisp (CCL again).

This looks like it might be a promising place to start:

https://github.com/binghe/mcl

If you need more help try this mailing list:

https://lists.clozure.com/mailman/listinfo/openmcl-devel

Good luck!


NeXT had a terrific IDE 30 years ago, called Interface Builder. More info: https://arstechnica.com/gadgets/2012/12/the-legacy-of-next-l...


The IDE was Project Builder, Interface Builder was a separate, as the name implies, builder for interfaces that could then be connected to Obj-C code in Project Builder. They continued as separate apps even after Apple bought NeXT and shipped Mac OS X, until they unified the two into Xcode.


Ah, this takes me back.

Turbo Pascal and Turbo Prolog - those were the days. Borland had the greatest products, and the accompanying books were always well written, used beautiful fonts, and, no less important, smelled nice.

If my memory serves me right, Borland even released their Text User Interface (TUI) library for developers to use in their own applications.

Fond memories indeed!


Loved the concluding question. "So the question I want to part with is: have we advanced much in 30 years?"

Every time I see all the bloated software we are putting out, I ask: is it really worth it?

One program that does it all vs many programs that do one thing really well. Oh, maybe someone should write about this concept, it may be interesting to ponder :)


30 years ago, I used Trash'em One on the Amiga. It was a text editor, 68000 assembler, debugger and memory monitor in one. I preferred it over its predecessor Asm One (which had been based on Seka), but I don't remember what feature it was that made me switch.

https://www.pouet.net/prod.php?which=92408

You can run Asm One in the browser here: https://archive.org/details/ASM-One_v1.02_1991_Gram_Data


Lives on in the form of asmtwo (lightweight, supports 1.3) and asmpro (featureful, needs v37).


Thirty years ago is 1993, and I was a junior dev using CodeCenter on SunOS. It was a nice editor/debugger with a C interpreter mode that iirc let you inspect a variable's history and un-execute lines of code.

A few years later I was using SparcWorks on Solaris. When I realised you could pause on a breakpoint and hover the cursor over a variable in the editor to see its value, my brain nearly fell on the floor.

A few years after that I moved to PC development using Visual C++, and then Visual Studio 97 on NT4. Drag and drop UI builders for Windows and Web.

And 30+ years later I still spend a significant part of my working day in VS.


Somehow those text based IDEs of 30 years ago feel special to me, maybe reminding me of a simpler time when there were not too many things to master


I love the aesthetic of Borland Turbo TUIs. I went back and tried to use TurboVision some time last year. It was not good, to say the least. Would be pretty cool if VSCode were themable to the extent that WinAmp was, so we could reclaim that old style.


Is Doom Emacs really 500 MB? How did the author determine that?

I use vanilla Emacs and I believe it was closer to 100 MB when I installed it.

Related: I’ve spent a lot of time trying to find a development setup that runs on low-end hardware (as a reaction to modern bloat, and also as a way to minimize distractions). It has many flaws, but Emacs with Common Lisp is my current sweet spot of “price” (resource requirements) to “performance” (features+speed).


I spent countless hours in Borland Pascal and Borland C as a kid. This brings back nice memories.


We used Turbo Pascal and Turbo C++ for my high school classes, and I have very fond memories of both. This must have been on Windows.

Borland really had a series of these fantastic products, it's a shame they are no more. The only modern company in this space is JetBrains, it seems, so the niche is small.


Turbo Pascal and Turbo C++ are still used to this day in some high schools and even college intro courses. Of course nowadays you need Dosbox to run them on modern computers.



30 years ago people were using Interface Builder already. Admittedly not that many, but the drag and drop interface is still there integrated into Xcode.


They were, and had it not been for Apple's reverse acquisition, it wouldn't be there in Xcode today.

My graduation thesis was porting a particle visualization engine from NeXTSTEP/Objective-C/OpenGL to Windows/Visual C++/OpenGL, as the department was seeing the end of NeXT and wanted to keep the research going.

My supervisor had a NeXT Cube gathering dust in the office corner, waiting to be collected.


I found myself lacking energy and time stepping into my 40s and vowed never to waste time “learning” cool editors such as Vim and Emacs.

VSCode is now my one stop editor of choice on Linux and VS on Windows. I also use Jetbrain editors for work.

I’m done. For people like me, who write SQL and Python for data pipelines, the JetBrains IDEs are no-brainers. We don’t actually get the time or energy to do a lot of side projects, so it doesn’t make sense to learn advanced editors such as Vim and Emacs: 1) these two need a lot of muscle memory just to start using them, and we don’t use them on a daily basis; 2) I’m not smart enough to write code as if I’m writing this reply, so a fluent coding experience without the mouse isn’t useful for me: I have to stop and think hard every few minutes anyway.


Funny because I entirely agree with you, even though I do the exact opposite. Everything I write is in vim because I'm too lazy to learn an IDE which may or may not still be around in 5 years. I use 0 plugins and 6 lines of .vimrc config which I know by heart, so I don't care if I'm using it locally or remotely, I can always get started in 1 minute.

I tried to install plugins but there is always something that fails somehow. Nvim distributions don't install and run out of the box for the most part; I get weird errors regarding lua or something and just give up. As for VSCode, I wrote my first Python project using it a couple of weeks ago (I'm not a developer) and it's alright, but a few things annoy me, like the integrated terminal and some things getting in my way.

At the end of the day, each of us should choose whatever we feel comfortable with. I spent maybe 2 hours in my life learning vim motions and never looked back. I don't even use tmux or anything, just open 1 or 2 terminal windows and alt+tab between them, with the occasional :split or :vsplit command.


There's a strange dance of IDEs coming and going, with their idiosyncrasies and partial plugins... you still have to invest and divest every time a new wave comes. Meanwhile emacs is still mostly emacs. I understand the dread emacs can impose on you, from old keybinding cultures and elisp, but there's something timeless and freeing in emacs.


> There's a strange dance of IDEs coming and going [...]

Intellij IDEA 1.0 was released in 2001 - is still in active development - and as far as I know the keyboard shortcuts are still the same (depending on the configuration one chooses)

The first Microsoft Visual Studio release was in 1997. XCode was first released in 2003.


Fog of the future notwithstanding, most people aren't going to have been using IDEA since 1.0.

If you learned Java between 2001-2012 then the default was Eclipse or netbeans.

So you should not be comparing IDEA from 2001 to today (or any individual IDE), you should be comparing the IDE landscape or ecosystem of 2001 to today, and part of that analysis should be a requirement to weight IDE's based on popularity and the recommendations of established institutions (academia, companies).


So you've changed IDE once in 22 years? That doesn't change the argument in any meaningful way.


I know my school has changed IDE recommendation 7 times in 22 years.

But my point is much, much broader than one person's experience.


> I know my school has changed IDE recommendation 7 times in 22 years.

Just so I understand you correctly - am I using your comment

> That's cool, I didn't know you were most programmers.

correctly here?


Clearly there's something I have failed to communicate: you have an experience that does not match that of most programmers, and I pointed that out.

As I stated in my post above (after someone asked me a direct question): despite answering the question, it was the wrong question and not the point I was making.


I was using Visual J++ in 1998.

We did a bakeoff of Eclipse, NetBeans and IDEA upon its beta in 2001. IDEA won hands down and is still the IDE of choice among the developers who work on our codebase.


My first java IDE was Visual Cafe by Symantec 1999 - and if I remember correctly I started using IDEA around 2002 (and still do - incl. Rider, etc).


That's cool, I didn't know you were most programmers.


I only wanted to mention that certain IDEs still used today are not coming and going but have been around for decades and are still more or less the same (keybinding, etc).

Maybe I just don't understand your comment - even translated it still confuses me tbh. (I'm not a native speaker). Sorry if you feel offended I guess.


Not offended, but not understanding because of translation is fair.

My entire point was that it's unusual for someone, especially someone who is new to IDE's or programming in general, to pick something brand new. As educational institutions will take time to change from the popular thing and most companies will also need time to adjust.

Distilled: my point is that you should not compare IDE release dates to the stability of IDEs vs Editors. -- you must consider the entire ecosystem of each at the time.

Another perhaps good example to conclude this would be something like Python backends. One could (unreasonably) argue that Python has been around since 1991, but backends typically were written in Perl or PHP for a very long time. It wasn't until 2008 or so that Python started making headway for web backends (Ruby around the same time) -- the possibility existed but the popularity wasn't there.

A similar argument could be made for Sublime text (which is uncommon these days) but was extremely common in 2010. Or Atom, which doesn't even exist any longer but took considerable market share from Sublime in its heyday.

It's not fair to say "x has been around for y time therefore it is not changing", the ecosystem does change and it has darlings and detractors.

The only exception to this ecosystem over tool argument I can think of is probably visual studio itself as that was a monoculture and stuck around because of that.


What do you consider the successor to Atom and Sublime today?


I would definitely say VSCode, I wonder if anything comes after it though. :)

I know there are many editors fighting for its market share, like Zed from the original Atom team or Fleet from Jetbrains.


> There's a strange dance of IDEs coming and going, with their idiosyncracies and partial plugins.

The Language Server Protocol [1] is the best thing to happen to text editors. Any editor that speaks it gets IDE features. Now if only they'd adopt the Debug Adapter Protocol [2]...

[1] https://microsoft.github.io/language-server-protocol/

[2] https://microsoft.github.io/debug-adapter-protocol/


One of the nice things today is that, while DAP and LSP are very much designed around VS Code's internal extension APIs, the things that use them are basically duct-taped to tiny VS Code extensions over JSON-RPC. They can and will outlive VS Code as the IDE of choice.

While on the surface that means that language support doesn't have to be designed for a particular editor/IDE, what's less obvious is that LSP (in particular) can be used as a generic IDE plugin API. I've heard of some non-language support extensions (ab)use LSP to get cross-editor support with the same codebase.


Yes I used emacs at school 15 years ago, and I agree that it was great for development ; with OCaml at the time (and also C with gdb integration, and 68k assembly later on) we shared a few tips with other students and the workflow was convenient for dev.

But vim is ubiquitous which is a huge plus when you are like me always connected remotely on a different machine. Once I learned a few shortcuts I never went back (and never dug into the tool itself actually, I can't even run a macro ; I'm still faster than most people I know with an IDE).

The only thing I was impressed with is, I think, PhpStorm, watching a Laravel dev crafting an SQL query. If I ever get serious about developing I would look into this kind of thing (not just for SQL but also framework and module functions), especially if I can get vim motions, and a screen that isn't bloated. VSCode displays like 15 things and I'm only interested in 1 of them 99% of the time, for example.


I never used vim in a large codebase though, do you? I understand the remote editing appeal, and I use vim 90% of the time in the CLI.


Again I'm not a dev but :

- for ansible on reasonably large projects (a dozen of roles) it was never a problem ; you have to understand how the project has been structured and be able to use grep and find though

- when I was playing around with os161 I don't remember it being an issue. Although for this particular case I did use the cscope vim plugin which is helpful to navigate through the codebase (there are equivalents for various languages). Not sure if os161 would qualify as "large codebase" but it's a bunch of files in a bunch of folders.


If you’re dealing with a large C or C++ codebase, Vim’s native cscope support scales way better to large codebases than the newer language server solutions from visual studio code, etc.
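
Roughly, the flow looks like this (a sketch, assuming cscope is installed and your Vim is built with +cscope; some_function/some_symbol are just placeholders):

    " database built beforehand in the shell with: cscope -Rbq
    :set cscopetag            " make Ctrl-] consult cscope as well as ctags
    :cs add cscope.out
    :cs find c some_function  " list callers of some_function
    :cs find s some_symbol    " list every reference to some_symbol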


It is not really the learning curve, but it takes just too much time to set up to match VSCode or VS or Jetbrain IDEs, plus it requires too much muscle memory to use it effectively. It's difficult to stick unless one uses it frequently. I simply can't afford it.

TBH everything on Linux/Unix variants (except macOS) is like that; there is no out-of-the-box solution. There are always too many configurations even to begin with (even VSCode is too configuration-heavy for my taste, but I use it as my Linux VM is light). This is definitely good in its own sense (more powerful), but most of the time I just want something to work and to concentrate on what I really want to learn. I mean, if I really want to learn how an editor works, I'd go ahead and build one myself, but in the meantime I just want to write a toy compiler, so please just let me do it.


You can use Jetbrains IDEs on Linux.


Yeah I'm going to try it out when I purchase a dedicated Linux machine. Usually I run from a 4GB-6GB VM so it's a bit of a stretch.


> you still have to invest and devest everytime a new wave comes

Not really... When a new editor/IDE comes and replaces the rest, it's because it seduces the original userbase of the previous IDE, so usually the transition is smooth (same shortcuts, similar functionalities and ergonomics). Moreover, I find it weird to "invest" time in an IDE, usually you don't really need to, you learn the basics of it and you're good to go for years.


> I use 0 plugin and 6 lines of .vimrc config…

What are the six lines in your .vimrc?


    set tabstop=2
    set softtabstop=2
    set shiftwidth=2
    set expandtab
    syntax on
    set bg=dark

Optionally:

    set autoindent

On WSL:

    set t_u7=

(no value; almost pulled my hair finding this one out)


In case anyone is wondering what the `set t_u7=` is all about:

https://vi.stackexchange.com/questions/27391/why-it-enters-r...


I'm in my late 30s. So maybe not so far away from the age of lack of energy :P

I like JetBrains a lot. Things work seamlessly and easily integrate with the external tools that make up the whole experience. But from 2012, I tried to rely as much on shortcuts as possible, for one simple reason: the mouse.

There is no problem with using the mouse. But every time I have to use it while focusing and coding, I find that small gesture of moving my hand from the keyboard to the mouse a bit flow-breaking.

I have to move to the mouse, do a thing or two, then find my way back to the J key notch.

I like what NeoVim and emacs bring with regards to the reliance on the mouse. They allow for maintaining the same posture most of the time and focus only on typing.

I dislike how brutal the learning curve is to use them to their full potential, and that making them into IDEs takes ages of IDE building rather than project coding.

I like Helix, which takes a lot of inspiration from Vim/NeoVim/Emacs but requires no configuration to get you going right away. The documentation is easy to read; as of now there is no plugin system, but there is built-in integration with a lot of LSP servers for most of the popular languages by default.

Keys and navigation are easy; it even shows a helper popup with which key to use next.

My suggestion is, if you ever want to start a new silly project, and you're feeling free to take it slow for 2 days. Try using Helix on said project.

PS: Helix isn't fully complete by any means, but it really is capable of doing everything you want in many projects without being a hindrance if you can adapt to the lack of some built-in features like git and a file tree. It's annoying, but I am less upset about it and use alternatives.


You can use a VIM plugin for many IDEs, including IntelliJ.


IntelliJ (and the whole Jetbrains suite of IDEs), has one of the best VIM plugins I've seen in an editor. It's hard to say what it does differently to others, but I've rarely encountered a situation where it does something common in my workflow differently to (neo)vim. It's just pleasant to use and gets out of my way, and has a nice method of configuring whether a shortcut should be handled by the IDE or the VIM plugin when they might conflict.


I've found multiple features and motions that do not work the same as in (neo)vim and end up breaking my flow a bit. However, I have to agree it is probably the best vim plugin I've seen anywhere, and is a life saver for me :)


I was looking for this comment. I'm very happy with the combo of IntelliJ features and vim movement commands.

The only inertia vim adds to my workflow is escaping into command mode. I have a 'jk' shortcut combo rather than escape, but if I'm hammering away I often mistime it and need to backspace out my jjkk or whatever.
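
(For reference, it's a one-line mapping; the same line works in a plain .vimrc or, for the IntelliJ plugin, in ~/.ideavimrc:)

    " leave insert mode by typing jk in quick succession
    inoremap jk <Esc>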


VSCodeVim (https://github.com/VSCodeVim/Vim) brings Vim-style input to VS Code.

Good Vim input plugins can make IDEs more pleasant and efficient for users who prefer vim, neovim, vi, elvis, etc.


Shame it falls flat whenever you open big files.


Honest question: do other JetBrains IDEs "feel" similar to the Android Studio one that can be downloaded for free?

I installed it a couple of weeks ago to modify some Android app, and boy it gave me vibes of old Eclipse: sluggish Java feel, with "stuff" happening all around and basic editor elements being slow to render.


They can be slowish to start, but IME are fine performance wise for all the features they provide.


> Honest question: are other jetbrains IDEs "feel" similar to the Android Studio one that can be downloaded for free?

Android Studio is generally one generation behind mainstream Intellij and has its own modifications on top of it. It depends on your target language. With the exception of CLion, all other forks of Intellij work much faster than Android Studio from my experience.


I wouldn't consider myself a vim user, but learned the basic keybindings awhile back (IMO modal editing is the correct way to edit). Knowing those makes it much easier to bounce between IDEs. Sometimes I don't realize if I'm in VSCode or IntelliJ (especially the new UI) until I try to run something.

> I’m not smart enough to write code as if I’m writing this reply so fluent coding experience without mouse isn’t useful for me ——- I have to stop and think hard every few minutes anyway.

I've worked with good programmers who literally hunt and peck. It drove me nuts, but as most will agree, typing is rarely the bottleneck when programming. I've also worked with people who I would consider vim power users, and while they were faster at typing out some tasks than I am, I found they were often typing/moving around the file as their method of thinking. Whereas I might reach for the mouse and scroll around instead. Again, typing speed is rarely the bottleneck.


I'm a heavy Vim user but I agree with your sentiment. I learned Vim during some down time in my first job out of college. It is a great skill to have in my opinion. It helps me complete complicated text editing quickly and easily, especially operations that I otherwise would never have attempted without it, but I never would have had the time or energy to learn it later in my career. I don't think learning Vim or Emacs is a waste of time but I can see how it is definitely not a priority when you have so many other things to do and little time to do them.


I am a life-long vim-er and I only use probably 30% of its features, and that's ok. I learn new things all the time, sometimes adding them to my repertoire, sometimes not. There's so much time that can be wasted if you mess around with configuring tools but either fail to remember to use them or fail to get them set up. I wanted to set up ctags and tried a few times, but fell short of memorizing the forward-back shortcuts and got frustrated at the delay when it goes off scanning my HDD instead of the local code 1-2 directories away. So I just gave up.
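
For what it's worth, the stock ctags flow is only a couple of lines; a sketch, assuming a tags file generated in the project root (e.g. with ctags -R .):

    " search for a tags file upward from the current file's directory,
    " so Vim doesn't go scanning unrelated parts of the disk
    set tags=./tags;,tags
    " Ctrl-] jumps to the definition under the cursor; Ctrl-T pops back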


For me, cool editors are vscode and jetbrains. I've tried to make them my default editors many times but always go back to vim (which I've been using for decades).


I agree that since you are well versed in Vim, it doesn’t make sense to switch unless it's for something vastly better, which I don’t see in any existing product.


I have tried to use VSCode more, but the original emacs keyboard movement is burned into muscle memory. VSCode has a keyboard mapping for emacs, but it does not feel right. At the end of the day, who cares? It's just a tool; whatever works.


Likewise. I'm 46, and started my career in the late 90s using mostly light editors on Windows (Homesite, etc). Along the way, I've tried a few times to really dive into vim, but just couldn't see the advantage over editors like VS Code, Sublime, Atom, or some of the editors popular before then like Eclipse. However, I do feel comfortable enough in it that I can edit files on servers/containers, which is something I feel is useful for everyone.


I started with emacs. I love the buffer / windowing system and the on-the-fly macros. Also being able to do everything with the keyboard.

But the time spent getting multi-mode or good autocomplete set up, when you can simply fire up something like a JetBrains IDE and have most of your ecosystem tools integrated the second you launch it, makes the decision to switch easy.

Also, dev machines tend to have a lot of resources nowadays, so the RAM-hungry IDEs are not a problem.


VSCode has solved so many problems for me. The only time I step out of it is when I run into something that might be a configuration issue. For instance, if I’m programming an Arduino, and it’s not working, it’s worth hopping over to the official Arduino IDE to make sure that the “supported” way doesn’t work as well.

Otherwise, VSCode solves almost all my problems, and virtually all my key-bindings are identical.


wow, i had the same experience. reaching my 40s and decided to drop all the cool kids stuff. i used to have emacs/scheme evangelist phases in my life, now i do all my coding using enterprise languages and tools.

i have never been so productive.


VSCode and JetBrains are better overall, but many of the good editing features of Vim are available via their respective Vim emulation plugins. It is still worth it to learn and use Vim mode for efficiency (in my opinion).


They're really not that difficult; it's not a badge of honor to use them. I used xemacs and liked it ages ago, and I still use raw vim for quick edits or views occasionally. The reality is modern IDEs are simply better for 99.99% of use cases. I might not use IDEA for opening massive log files, and that's about it.


Same here. Jetbrains and VS Code.


I felt the same way about Atom, Pulsar, VSCode and whatever comes next.

These editors are going to be replaced, and emacs and vim will still be kicking.


Brief by UnderWare was by far my favorite in the DOS days. It ran great on the 286 I had at the time.

IIRC, something like it was ported to Linux in the mid 90s and sold commercially. But Linux had vi and Emacs, so I do not know how successful that was.


I remember a version named "dBrief", for those of us programming dBase/Clipper code. It was a great tool back then.


Oddly enough, I never used them at the time. It was the CLI compilers, and makefiles. Possibly because most of the work I was doing was cross compilation for embedded systems, not targeting DOS nor Windows. However it was often using Borland/Turbo C and/or Metaware High C, both as DOS based compilers. Occasionally some assembly, or (DOS based) cross compilers for other CPUs (8051, Z80, etc).

Possibly the experience was different for folks not predominantly doing cross system development.

So the editor was usually BRIEF, which could be set up with per-file-type "compile" rules. Being a multi-file editor, one could "compile" the makefile, hence build the complete system. This would then run the compiler and jump to the first error, with commands available to jump to the next error in sequence.

On the occasions I needed something better for search and replace, I'd use a DOS version of vi.

When working on a local (DOS based) tool, one would occasionally have a TSR version of a help manual available, but generally the printed manuals the products came with were preferable.

As to a retrograde step when switching to unix/linux/bsd based development, not really. The combination of Job Control to switch between suspending the editor and running the compiler, together with virtual terminals where one could have man pages open covered most bases.


Not mentioned in the article is JetBrains CLion. Best C/C++ IDE to come along in decades.

https://www.jetbrains.com/clion/


The Borland IDEs really were the swiss army machine guns of their time.

Turbo Pascal was one of the first languages I learned -- never really used it again, but looking back, it was an excellent beginner language.


Back then BRIEF was also an amazing development environment. Now it was not so much an IDE and more of a programming editor, but it was still amazingly good for software development.


I'm surprised there's no mention of QBasic that shipped with MS-DOS >= 5.0. It shared its editor with EDIT.COM, though it only ran a BASIC interpreter instead of compiling an executable like full-blown QuickBasic.

IIRC it had a rather extensive help lookup system for functions, data types, reference tables, error codes, and whatnot. You could step through your program, set debug points, all without exiting to DOS. It was my first ever exposure to an IDE, I thought it was pretty nice for what it was.


I loved QBasic. That integrated help system also came with extensive sample code for each function, including some full programs. Copying and modifying those was a huge boost to learning how to program.

(The article does mention QBasic, though.)


Those Borland IDEs, a show of hands for Turbo Basic as well, were the main reason why I never liked the UNIX development experience, until a professor showed us XEmacs, at the time much more feature rich than Emacs, and vi was still vi, not vim.

Thankfully with KDevelop, Smalltalk, and when Java started to make IDEs more common on UNIX, I no longer needed XEmacs.

Ironically for all IDE-haters, even James Gosling, inventor of XEmacs, says people are missing out not using IDEs, he surely moved on from Emacs ecosystem.


Nit: James Gosling implemented Unipress Emacs aka. Gosmacs, not XEmacs (with a lot of licensing sturm-und-drang). XEmacs was forked from Gnu Emacs at Lucid, by jwz (Jamie Zawinski)


For the comparison:

Interview with an Emacs enthusiast https://m.youtube.com/watch?v=urcL86UpqZc


Emacs actually supports drop-down menus in text mode, with a different key shortcut: https://www.gnu.org/software/emacs/manual/html_node/emacs/Me... There is also a special incantation for proper mouse support in the terminal, because the default is to keep the default handling as copy/paste: https://www.gnu.org/software/emacs/manual/html_node/emacs/Te...


Screenshots using 80x50 mode would be nice, which I remember using in the early 90s.


Did not see mention of Symbolics Genera, a vastly better IDE than any of the ones pictured in this article. And it was quite mature 30 years ago.


There’s a ton of editors that are all roughly equivalent for writing C++ code. Vim, EMacs, VSCode, 10x, whatever.

But Visual Studio is still hands down the best C++ debugger. And nothing else even comes close. Which is a real travesty.

RemedyBG is making progress. But it needs a NatVis equivalent and it needs to be way more reliable. It fails on major projects with obnoxious regularity.


One thing these IDEs had was an integrated debugger.

It seems Unix/Linux never really got a nice integrated debugger with an IDE, and while gdb is powerful, it is very cumbersome. Hence, it seems there is a lot more printf debugging on Unix, and less use of debuggers than, say, on Windows, where Visual C++ and Borland Turbo C++ both had very easy-to-use debugging integrated into the IDE.


Commercial UNIXes had nice graphical debuggers like dbx on HP-UX, while Solaris and NeXTSTEP had good IDEs.

Linux eventually got DDD.

Unknown to most is that gdb has a TUI, and is highly scriptable in Python.


Agreed, for the most part debugging has gone backwards (aside from Xcode, Visual Studio 20xx) and was always pretty bad on UNIX. Still amazes me that there isn't a really nice, batteries included TUI debugger (e.g Periscope!).


Did a lot of programming without an IDE. Then some years ago, tried an IDE: opened a project, looked at the resulting directory (folder) and saw ~50 files about which I knew nothing -- and I hadn't even started the work yet.

Got rid of the IDE and returned to my favorite tools and the software I was writing.

My favorite tools work fine, and in particular in the directory I'm in I actually know what each file there is, what it is for, what is in it, where it came from, etc.

In my work, I need to do some programming, that is, software development. So, I do it.

Difficulties are nearly all from poor documentation of other software I need to use. The parts of the programming that are really mine are like cooking lunch -- no problems for my part, but if the pepperoni is not good, that's a problem. To me, programming is, define some variables to store the data, have expressions to manipulate the data, If-Then-Else, Do-While, call-return, input, output, and that's about it.


I do wonder what the easiest, most straightforward path to a full TUI IDE would be today. Perhaps starting from the codebase of a lean graphical IDE like Lapce and adding a TUI-powered backend. (Note that text editors do exist with that kind of interface, the real gap is wrt. the IDE features including LSP and DAP interop.)


One could always use the updated Turbo Vision library[1], which originally was used for the Turbo Pascal/Turbo C++ IDEs mentioned in the article...

[1]: https://github.com/magiblot/tvision


Vim and emacs do already have plugins for LSP. Probably DAP too. But like everything CLI, it might take a bit of initial set up to get it right.


A discussion of editors/IDEs from 30 years ago and the author doesn't mention Brief?


Does anyone here feel kind of trapped by their IDE now?

PHPStorm + Laravel Idea + the laravel-ide-helper package provide a great PHP/Laravel development experience that I haven't been able to replicate in VS Code or Sublime. But chewing up as much RAM as it does, it feels sluggish. Or at least, not as snappy as the alternatives. But I just haven't been able to find a middle ground with the lighter alternatives.

Running the IDE as a thin client with Jetbrains Gateway sounds like a decent solution, if your backend server is close enough for latency to feel okay. From a ~4GB PHPStorm usage, PHPStorm-via-Gateway on GitPod was 1.2GB max.


Perhaps consider better hardware, not some cheap Chromebook. 4GB of RAM is not a lot of usage for an advanced IDE.


M1 MBP.

At the lower tier with the 8GB RAM, it's the most affordable device that will outlast power cuts that are frequent where I am - cheaper than getting a backup power solution. Getting more RAM is a ridiculous cash grab by Apple.

It's a rock and a hard place.

>4GB RAM is very little usage for advanced IDE

Java is awful with RAM in general. Decent text editor plugin setups can get close to the Jetbrains suite for what I do, but it's just a few small UX & plugin papercuts that make the difference. And I heavily doubt that those tools I prefer are that heavy compared to other editors.


That is some Stockholm syndrome...

8 GB RAM is very little. It will limit you as a developer, you will never get into containers, virtualization, AI...

If you are in Cape Town, get a Linux laptop or mini PC with an external monitor. It all takes 19 volts, and you can power it from a car battery with a simple voltage regulator. I have an 8-core Ryzen with a 4 TB SSD and 64GB RAM; it was less than 1000 USD.

M1 is nice, but has several limits. If it gets broken, it will be very difficult to service in South Africa...


Interesting mention of Sidekick Plus, there was a complete SDK for it which I don't think ever got released anywhere (I had a copy as I was working for Borland at the time). It allowed multiple documents to be open at the same time.


It let you copy text from the screen and paste it into its editor, and it had a modem terminal app, a contacts app and a calculator. It was a joy to use!


> MS-DOS shipped with a TUI text editor since version 5 (1981)

MS-DOS 5 came out in 1991, not 1981.


In the early 2000s, back in my homeland, high schools were still stuck using Turbo C for programming classes. This was mainly because there weren't many free and lightweight C IDE options available for Windows (I know Turbo C is not free software, technically).

While Dev-C++ emerged as a possible alternative for console programs and programming contests, it wasn't enough for developing native Windows GUI applications without shelling out for Visual Studio.

This limitation ultimately led me and some friends to explore development on alternative platforms like OS X and Linux. Ironically, even to this day, none of us mastered the WIN32 API.


In the late 90s I used Notepad, mostly writing plain HTML. Then in the early 2000s I wrote PHP in Crimson Editor. http://www.crimsoneditor.com/

When switching to mac in 2009, I used Smultron. https://www.peterborgapps.com/smultron/

Followed by sublime text. Then when I started writing typescript in 2016 or so, I switched to vscode.


Fond memories learning both Pascal and C when at high school with 486 Compaq machines and the Turbo compilers.

Not long ago I configured DOSemu with Turbo C to do some bare bones graphics development but I just couldn't get used to it.

Does anyone know of or recommend a setup where coding takes place outside DOSemu but the compiling/execution takes place in DOSemu? (I mean driving the whole build chain from outside DOSemu.)

I've seen in an older HN post that someone setup a retro IDE with VSCode but I'd like something more Vim like instead of this behemoth.


I agree with the article. Turbo Pascal was terrific. There is some kind of psychological thing that has me using neovim in a terminal all the time for many years.

I guess it's convenient for ssh. But I miss the affordances of Borland IDEs. Even last night I was working on a web application and was tempted to add a menu at the top of the page, remembering how useful they were back in Turbo Pascal and such.

I did a Google search and found this https://github.com/skywind3000/vim-quickui


Turbo C/C++ was my first IDE; once you knew all the shortcuts, it was quite fast. Then came DJGPP with RHIDE... and for many years, IMHO, Visual Studio 6 was best for a kid (and Windows user) like me.


I was a TA in an "Intro to Computer Science" class in the noughts, and Borland C was quite popular among students who were on Windows.

I never heard a single student complain about the IDE, even though most of them were used to GUI apps only. It was very intuitive and convenient to use. I wish we had nicer TUI tools today - I often miss them when I work remotely over a thin connection passing through a couple of proxies/gateways/etc which have trouble with X or VNC traffic.


My favorite part of late 80s early 90s C/C++ dev was the shitty auxiliary monitor used as a dedicated debugger screen. I swear I STILL find situations where I want that back.


Separate computer talking to SoftICE over serial port.


I was always kinda partial to the Borland IDEs. I even used the concept on Linux for a while in the form of "RHIDE". It wasn't bad UX, but the software was pretty badly written so in the end I dropped it. I never enjoyed anything as bare bones (or requiring such a learning curve and setup effort) as vim. Right now I use VS Code. Not a huge fan of it, and I really wish it was available as a TUI application, but for now it seems to be the best around.


If you look at his screenshots you might think you were limited to 80x25 but with EGA and VGA you could pick a different screen size and get more vertical lines.


At the expense of vertically squished fonts, which I hated the look of. I would have been more likely to use it if it had a similar increase in the number of columns to keep the font at the correct aspect ratio.


On the Mac side, THINK/Lightspeed Pascal had all of the same unmatched mod cons as Turbo Pascal: lightning-fast compile times, an unmatched symbolic debugger that understood record types and Macintosh handles with a variable watch/execute statement window, auto formatting. It was sad that the Mac world slowly converted to C/C++ as the "flag carrier" programming language.

No TUIs, ever. God forbid.


Borland Turbo C and its debugger helped me implement my 1st scheduling optimization algorithm, get it into production, and got me a promotion. Ported it over to an IBM mainframe with the help of a colleague, which reduced run time to a few minutes compared to a couple of hours on a DOS/486.

It looked so clunky compared to what my colleagues had on their DEC and SGI (6 processors, I think) workstations, but it got the job done.


Agree with all of this - I had exactly the same feeling going from DOS to Linux in the 90s and wondering where the proper TUI apps were.


In 1989, I got an IBM AT, with VGA. I flipped from a Herc card, with 25 lines, leaped past EGA 43-line mode into VGA 50-line mode. Turbo Pascal, and SideKick ( or psychic ) all ran perfectly. Paradise VGA w/ 512k. Sony 15" Trinitron, very sharp. Why don't other people program in this?

Wideprint had a 132 column mode for Lotus 1-2-3, and again, SideKick jumped in perfectly.


Seems a bit quick to dismiss modern TUI editors. Emacs in terminal mode has feature parity (almost?) with the GUI version.

And by the way classic curses like menu bar can be opened in text mode with M-x menu-bar-open which is bound to <f10> by default. You can even use mouse with xterm-mouse-mode.

The one he was looking at was text menubar emulation which is pretty powerful too if you take a minute to appreciate it.


He claims they are unintuitive so he didn't bother exploring them.

Which is completely fair. Emacs in particular is more featureful than VS Code, but hell, it's hard to make use of all of it.


If vertico and context-menu-mode were defaults you might not be able to say that.

Or if just the menu and toolbar weren't immediately disabled by most.


I hope someone from Embarcadero is paying attention to this thread. They have had some great IDEs but their primary attraction was the price point and the ease of use of the products. Please make Delphi affordable again.

Considering that Delphi can be used for Android, IOS and Linux development as well, it would be a great tool - if it weren't for the insane pricing.


Coming up for retirement and I have made a nice business using Delphi; it's been the right tool for the right job and that is developing Windows programs.

I understand why Embarcadero did it, but for the dozen people who actually use Delphi for any OS other than Windows, and any CPU other than x86, they really should not have bothered.


30 years ago I was using vi and the shell on a Wyse 85 terminal.

Nowadays I used vim and the shell in one of many terminal windows on one of many workspaces.

Every now and then I try an IDE but I always find the loss in productivity and the lack of discoverability to be hampering. I'm there to get work done, not to stare at cartoons and fumble around for wherever the mouse or cursor has disappeared to again.


I forget if it were a MS or Borland thing, but I recall some of these DOS IDEs having an "expert mode" feature in the menus where you'd get, by default, only the most useful features, but could then turn on all the menu features by enabling said expert mode. Anyone remember this? Some tools (VS Code..) could sorely do with this today, I feel.


Here’s another very potent one from 40 years ago that was popular and widely used for asm/C dev.

Since the simple MS-DOS editors (which are not IDEs really) are listed, this one is a must have in the list.

https://en.m.wikipedia.org/wiki/Personal_Editor


I loved Multiedit Pro and Walter Bright's Symantec C IDE back then, besides Emacs. I'm still with Emacs; the others are gone.


Nobody (yet) has mentioned Microsoft PWB - Microsoft's Programmer's Workbench for their C compiler, around 1990. It's what all the Microsoft engineers themselves used when writing code for Windows, WinNT, OS/2, etc. It was essentially perfect for its time.


ObjectMaster by ACI was pretty much the best OO IDE at the time. Just about everything on the Mac kicked the shit out of these pathetic and crappy DOS/commander interfaces that are linked in the article.

Unfortunately, there don't seem to be any screenshots of it online anymore due to link rot.


The glaring omission in the article is Visual Basic for DOS. It had a visual TUI designer in DOS. https://www.youtube.com/watch?v=-vDpzoYgNd0


Borland C/C++ 3.1 came in the largest shrink-wrapped box ever, by volume and by weight, due to the inclusion of printed library and API references for DOS and Windows, plus other tools.

Borland-branded products were the pro versions compared to the Turbo-branded ones.


If any dev out there is looking for a challenge/cool thing to do, here it is: a TUI IDE following the practices of Turbo C/Turbo Pascal, with CUA visuals and keybindings from edit.com, and support for multiple languages and LSPs.


Turbo Pascal and the other Borland IDEs were great, but just a lesser version of FoxPro!

With FoxPro, you had their features, but also a RAD form/menu/table/report builder. Like "let's add MS Access to your IDE".

That is the dream.

One of my goals is to build it!


So I've been using vim as my IDE for the last ten years. I've really enjoyed using it, but VS Code is super duper popular. Has anyone like me converted? If so, are you still using Code, or did you go back to vim? And why?


I gave VSC a shot. It's the stupidest thing, but I just can't get over how the side bar (not the file explorer, but the little activity bar that you can open the file explorer/plugin menu/etc from) keeps re-appearing every time I open certain things. I know it's petty, but I can't get over it.

I was mostly able to get things to work the way I wanted in it, but that ended up meaning I basically turned it into my vim setup.

I've moved to JetBrains products at work. Same story there: I basically turned it into vim and hid all the sidebars (but at least it lets me do this). Mostly I'm using it for the debugger; the JetBrains debugger is legit.

Still use (n)vim at home a lot, especially for anything that doesn't have a dedicated JetBrains IDE.

Other than that, the only thing I've really had problems with is that graphical editors all seem to think in terms of files instead of buffers, and there's no equivalent to vim-vinegar. I think that throws me the most.


Moved to VS Code with the Neovim extension (the vim mode extension was slow for me on large files). VS Code is super customisable, and you can remove all the tab bars, side panels, and anything else you don't use fairly easily. It's also far more stable than the Neovim ecosystem; I don't have time to mess with Neovim plugins breaking bi-weekly anymore.


I did. I eventually got tired of fiddling with configs, the not-so-great debugging experience, copy/paste issues, etc.

Now I just use Vim mode in VS Code. Don't get me wrong, you also need to spend time configuring VS Code, but it's so much better.


I remember using Borland Turbo C++. It was the most convenient piece of software to get started with programming. If I had been introduced to anything else, I would have found it much harder to get over the initial learning curve.


Meanwhile, outside the world of PCs and DOS... https://youtu.be/uknEhXyZgsg?si=nx7IEAC9RZDwEeWY&t=3251


30 years ago there was Vim. 30 years from now there will still be Vim.


I fondly remember Borland C++ Builder - it allowed me to write shareware apps for Windows without having to learn the low-level MFC API. I was a hobbyist, and it scared me :)


30 years ago, Watcom was ripping through Borland's assumed monopoly and rapidly becoming the most-used C/C++ IDE, only to have Visual Studio do the same to it 2-3 years later.


15-year-old me could use the Turbo Pascal debugger well because it was so intuitive. Now I'm just not comfortable using debuggers at all, and I keep spamming prints in my code.


I recently started building a little desktop app using C# .NET WinForms, and it was a tiny bit clunky at times, but strongly reminded me of building VB4 apps.


30 years ago - and from about '89 to '94 - my IDE was DataViews. Still, all these years later, I don't know of any IDE that can create such graphical interfaces.


No mention of the Microsoft C and Pascal IDEs. They were somewhat similar to the QBASIC editor, but not as popular as their Borland counterparts.


I'm not aware of anything as convenient and learner-friendly as the Borland IDEs were.


Ahhh... the Borland TUI. But Visual C++ was also really great. With offline docs!


Windows was around... what about GUI-based IDEs? Visual Basic?


SlickEdit was around back then, in TUI form. Fantastic.


I _think_ edit.com still ships with Windows!


Not the 64-bit version, surely. They're thinking about shipping a new TUI editor, though.


Sorry, gotta call bullshit on this post. There were several _good_ IDE GUIs 30 years ago.


Fantastic article, thanks for sharing.


XyWrite, anyone?


So Neovim is the state of the art of vim now?


I'm not a fan of any IDE brand or company. I'm a fan of features. Any respectable IDE must have the following:

- jump to declaration
- autocomplete of fields and methods of an object
- autocomplete or tooltips for function arguments
- code templates
- find all the usages of a variable, type, or function
- rename variables, functions, types, etc. all over the code base (and do it correctly, not blindly like with sed)
- select a piece of code and extract it into a function
- select multiple fields and methods and extract them into a separate class
- highlight pairs of parentheses, brackets, and curly braces, and be able to jump between them
- highlight searches
- show line numbers
- search/replace text in the current file or over multiple files (i.e. grep and sed, but integrated into the UI)
- be able to open a terminal in a pane
- search for a type all over the code base
- search for a global function or variable all over the code base
- display a tree of included files (for languages that allow including other source files, like C++)
- display a tree of subtypes/supertypes of a specific type
- display a list of all overrides of a method and be able to jump to them
- semantic checks (e.g. check that the types of the arguments in a function call match those in the function declaration; of course, only for languages that have a type system)
- display syntax/semantic errors as I type
- a file explorer that is always easily accessible
- automatically select the current file in the file explorer, with the option to turn this behavior on or off
- copy the full path of the current file to the clipboard
- copy the name of the current file to the clipboard
- autoformat the code
- autoindent
- syntax highlighting
- debugger integration
- build system integration
- display an outline tree of the source code in the current file
- have a local history of all the changes to a file
- version control integration
- diff between the current and previous version of the current file
- search files by name in the entire code base
- when many editor tabs/buffers are open, search through them by file name
- spell checking in comments, with the option to turn it on/off
- generate documentation from comments
- fold/unfold code blocks
- replace tabs with spaces
- show special characters
- block editing
- place the cursor in multiple places and perform the same editing changes in all of them at once (like in Sublime)
- jump over paragraphs
- jump to the beginning/end of the file
- open documentation for the function/class/type/etc. under the cursor
- set bookmarks in code
- have a history of searches and be able to redo older searches
- list all TODOs/FIXMEs from all over the code base
- split panes vertically/horizontally
- textual autocomplete (i.e. autocomplete an identifier already present in the current file)
- linter integration
- UML diagram generation
- erase the current line with a single key binding
- move the current line up/down with a single key binding
- display an object instance tree
- and possibly many others that I just can't remember right now...

Now, I don't care whether it's Vim, Emacs, VS Code, Eclipse, or JetBrains offering these features. From experience I've learned that these features make me most productive... coupled with command-line tools it gets even better. So if these features/tools are available in an IDE/tool suite, then I'm a happy programmer and I will use them. I don't have time to be a fan of this editor or that editor... even though I do like to enable vim key bindings once in a while, if they're available.


  30 years ago
  late 1980s / early 1990s
Yeah, no



