
I also write code on my phone when I'm on a bus or the subway. It requires some patience, but after getting used to it the experience is surprisingly pleasant, especially if you're familiar with terminal-based tools. My environment consists of:

  - Galaxy S24 Ultra
  - Termius: I think it is the best terminal emulator and SSH client on Android. The sad thing is that the paid version is a bit too expensive ($10 per month, with no one-time purchase option).
  - tmux: Mobile connections are brittle so it is a must.
  - Vim: Allows me to navigate the code freely without using arrow keys, which is really useful on the touch keyboard.
Not that big of a deal, but the thing I find more pleasant on the phone than on the PC is that I can use my fingerprint to log in to the remote server. The fingerprint is stored in the TPM, so it is safe. It feels magical!

Edit: The biggest pain point for me was the limited width of the smartphone screen. It is a bit hard to skim over the code quickly because most lines are severely cut off. Text wrapping helps with this, but personally I hate text wrapping. Staying in landscape mode is not an option because the code area is completely hidden when the touch keyboard is displayed. That's why foldable phones are great for coding, as they have a wider screen. My previous phone was a Galaxy Fold and it was a wonderful coding machine.


Try pairing tmux with mosh; it's how I've been working for years whenever I'm forced to admin through a brittle straw. Mosh combats lag pretty well and doesn't care if your connection drops intermittently. https://mosh.org/
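
A rough sketch of what that pairing looks like (host and session names are placeholders):

  # attach to an existing tmux session over mosh, or create it if missing
  mosh user@example.com -- tmux new-session -A -s main

  # prediction can be tuned if the local echo bothers you:
  # --predict accepts adaptive (the default), always, never, experimental
  mosh --predict=never user@example.com -- tmux attach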


I tried Mosh but it didn't fit my taste. It tries to "predict" the state of the screen before the keystrokes are acknowledged by the server, but sometimes the prediction is wrong and Mosh reverts the cursor movement and redraws the affected area of the terminal. For example, when I'm using split windows in Vim or tmux, Mosh allows typed characters to overflow beyond the separator, briefly, until being told "no" by the server. Personally I find this behavior very distracting. Enduring the higher latency was more bearable to me.


I can see how that's off-putting, but I've learned to ignore the occasional cosmetic hiccup and just trust that it will sync up correctly. I use it with --predict=experimental (largely undocumented), which seems to be even more aggressive, but it works great for me.


You can try eternal terminal: https://eternalterminal.dev/

I don't remember it doing any sort of prediction but the last time I used it was a while back.


Have you tried any of the various `--predict` options? At least `--predict=never`.


"...admin through a brittle straw." :D That's exactly what it feels like.


mosh has saved me some hair-pulling, especially when on a train journey with spotty 3G at best and you get pinged about an outage.


I wish I could do it. I find even just texting annoying. Also on a Galaxy phone. I wonder if my fingers may just be too fat, although I don't think they are. Actually I hate doing most things through a phone, and e.g. if a food delivery app has a desktop version I will always use that given the chance.


I have been really impressed lately using Samsung DeX on an XReal Air 2. AR glasses have really improved in recent years. It gives you a better screen than many small laptops.

For longer trips (train, airplane), add a mechanical wireless bluetooth keyboard (my choice would be a NuPhy Air 75) to feel like a king. For the occasional browser + SSH on the go, it's better (less space + better keyboard + larger screen experience) than bringing my 13" laptop (+ phone).


Gosh, they look interesting. But the ridiculously customer-unfriendly product naming and a website that doesn't provide clear information on international shipping raise so many red flags for me.


Mosh was suggested in another comment, but I’ve found that et (https://eternalterminal.dev/) suits my needs better.

It does nothing to fix lag, but connection failures are handled without a hitch; the same session resumes as normal on spotty train Wi-Fi and mobile data.
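
If anyone wants to try it, basic usage is deliberately ssh-like (hostname is a placeholder); I just connect and attach tmux inside as usual:

  et user@example.com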


Do you use a special keyboard app too, or just the default one?


Just the default one. I tried some alternative keyboards and they are better in some ways, but in the end the default keyboard was enough. Termius provides input for some special keys (e.g. Ctrl, Alt, Esc, Tab, Home, End), so that's another reason why the default keyboard is enough.


Be sure to check the privacy policy on your default keyboard. I've been burned by that before. The default keyboard on my last Galaxy phone was sending every single keystroke to a third party, and checking their privacy policy showed they used that data for things like market research, guessing at my level of education, building a psychological profile, detecting my interests, etc., and that they in turn shared that data with others.

I switched to AnySoftKeyboard and although the auto-correct/spellcheck is way worse (understandable since they're not collecting every word everyone's typing), the customization and terminal mode are great. I'd occasionally code on my phone in Termux (the largest program written on that device was only around 2000 lines) and it did the job.


Phone keyboards are a big security risk.

Is there a way to completely block them from accessing the network?


Nothing reliable that I know of. To have any hope at all of being able to do that with Android you'd need a rooted device. Without root access, "your" phone isn't something you can reasonably hope to secure, since Google, your phone carrier, and the manufacturer all have privileged access to your device while you don't. Even with a rooted device I'd only use an app that you trust. The default Samsung keyboard that phone came with out of the box was downright adversarial, so at least I got rid of that, but I don't think of cell phones as something I can really secure or trust in a meaningful way.


You can firewall any app/service on Android with RethinkDNS.


Just FYI, this goes for all Android users. I believe iPhone has similar capabilities but I have never tried myself.

Your phone likely accepts a physical keyboard. Mine has a USB-C port, and I can use a travel dongle (a female USB-A to USB-C adapter) to attach one.

I used this a few times to do some very light work when travelling. A good setup is picking up a cheap wireless keyboard/mouse combo and using the female input to connect both. There are many alternatives to this too, e.g. you can also attach a dock to your phone to get all the devices your phone has the hardware to accept, and you'd be surprised what it does accept.


But how do you type?


T9 on a Nokia 3310.


Laptop is superior.. sorry


> if "rm -i" is the default the "rm" level gets disabled, because "rm -i -f" is the same as "rm -f"

You can use "\rm" to invoke the non-aliased version of the command. I made "rm -i" the default using an alias and occasionally use "\rm" to get the decreased safety level you described. I think it is more convenient that way.
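
For reference, a minimal sketch of that setup (the build/ path is just an example):

  # in ~/.bashrc: make the safer behaviour the default
  alias rm='rm -i'

  # bypass the alias when you really want the plain command
  \rm -rf build/
  command rm -rf build/   # same effect: 'command' skips aliases and functions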


> to overload the bit shift operators for stream I/O

MFC extends this idea to network programming, allowing the use of shift operators to send and receive data.


ATL (which often goes hand in hand with MFC) has CComPtrBase, which overloads & so a smart pointer can be handed directly to COM methods as an output parameter, as in while(pEnum->Next(1, &pFilter, &cFetched) == S_OK). Especially fun when debugging DirectShow filter graphs someone made entirely in the UI. There is more of an explanation here: https://devblogs.microsoft.com/oldnewthing/20221010-00/?p=10...


We have

  github.com/[username].gpg
to get their OpenPGP keys, too!
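
For example (octocat is just a placeholder username):

  curl https://github.com/octocat.gpg    # OpenPGP public keys
  curl https://github.com/octocat.keys   # SSH public keys work the same way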


Just go for .secrets to get it all in one request.


Two things:

What about any of that is "secret"?

And this endpoint does not work.


Try .credit-card instead. Pretty sure that one works.


> states.intersection({'VALID'} | {'IN_PROGRESS'})

Does this Python code count? Haha.


No, that’s the explicit conjunction of two explicit sets.

    'VALID' or 'IN_PROGRESS' in states
Does not work as intended, and I've seen it around on SO. Or, more commonly, its variation

    a == b or c
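
To spell out why that's a bug, here's a small sketch (variable contents made up):

  # 'VALID' or 'IN_PROGRESS' in states parses as: 'VALID' or ('IN_PROGRESS' in states)
  # and a non-empty string is always truthy, so the "check" always passes
  states = {'DONE'}
  print('VALID' or 'IN_PROGRESS' in states)   # prints VALID, i.e. truthy

  # versions that actually test what was intended
  print(any(s in states for s in ('VALID', 'IN_PROGRESS')))   # False
  print(bool(states & {'VALID', 'IN_PROGRESS'}))              # False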


The cool thing about this project is that, because it uses systemd's socket activation, it requires no long-running server process at all. There is no waste of resources when Cockpit is not being used. Accessing a page is literally the same as invoking a command-line tool (and quitting it). No more, no less. What a beautiful design.
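
The general pattern looks roughly like this (made-up unit names, not Cockpit's actual units):

  # demo.socket -- systemd listens on the port itself
  [Socket]
  ListenStream=9090

  [Install]
  WantedBy=sockets.target

  # demo.service -- only started when something connects to the socket
  [Service]
  ExecStart=/usr/local/bin/demo-server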


To be fair, we've had this since 4.3BSD (1986) through inetd - which worked slightly differently, but same overall idea. Once popular, it fell out of fashion because... well, there isn't really any reason for it.

A good server process is idle when nothing is happening, and should be using minuscule real memory that should be easy to swap out. If the server in question uses significant memory for your use case, you also don't want it starting on demand and triggering sporadic memory pressure.

It does make it easier to avoid blocking on service start in early boot though, which is a common cause of poor boot performance.


There's good reasons for it though!

One is boot performance. Another is zero cost for a rarely used tool, which may be particularly important on a VPS or a small computer like a Raspberry Pi where you don't want to add costs for something that may only rarely be needed.

I think a nice benefit for an administrative tool is the ability to update it, and reload the updated version. You don't need the tool to have its own "re-exec myself" code that's rarely used, and that could fail at an inconvenient time.

The reason why inetd didn't stick is because it's a pain to use -- it's separated from SysV init, so it needs to be very intentionally set up. Plus there was the inetd/xinetd disagreement.

Tying init, inetd and monit together into a single system that can do all those things IMO made things much nicer.


> Another is zero cost for a rarely used tool.

Zero cost is only true for unused services. For rarely used services, it's a rarely occurring full cost that might come by surprise at a bad time.

> I think a nice benefit for an administrative tool is the ability to update it, and reload the updated version.

This is only a benefit if the systemd socket unit is configured to operate in inetd mode (Accept=yes), where systemd spawns a new process for every accepted connection, which is quite inefficient resource-wise.

"Normal" systemd socket activation just starts the service and hands over the socket. The service runs indefinitely afterwards as if it was a normal service, and needs to be manually restarted or hot-reloaded after upgrade or configuration change.

> The reason why inetd didn't stick is because it's a pain to use -- it's separated from SysV init, so it needs to be very intentionally set up.

Being separated has a lot of benefits - easy nesting, easy reuse in minimal containers, etc. The integrated model works best for monolithic servers.


Around the time I was first learning Linux, I recall reading that there were two ways to run a service:

1. Start the daemon on boot and have it running all the time, like some undereducated neanderthal.

2. Configure your system to run a daemon which monitors a port/socket and starts up only when there is traffic, like a civilized person.

I believe which one of these to use is highly dependent on your resources, usage, and deployment model. For services that are fast and cheap to start but are rarely used, #2 makes more sense. If you have a server or VM which only does one thing (very much the norm these days), then just keeping that service running all the time is easier and better for performance.


Actually I think what killed inetd is, partially, HTTP. At the time, HTTP was connectionless: open socket, send request, read response, close. Out of the box inetd would support that, for sure, but it would be constantly forking new HTTP processes to do it.

FTP and SMTP were stateful, so living under inetd worked OK: one process per overall session rather than per individual message within a session.

Obviously, inetd could have been hammered on to basically consume the pre-forking model then dominant in something like Apache, caching server processes, etc.

But it wasn't. Then databases became the other dominant server process, and they didn't run behind inetd either.

Apache + CGI was the "inetd" of the web age.


I ended up reading more about this, and it looks like sshd in Ubuntu 22.10 and later also uses systemd socket activation. So there should be no sshd process(es) started until someone SSHs in!

https://discourse.ubuntu.com/t/sshd-now-uses-socket-based-ac...


This is messed up, totally messed up:

"On upgrades from Ubuntu 22.04 LTS, users who had configured Port settings or a ListenAddress setting in /etc/ssh/sshd_config will find these settings migrated to /etc/systemd/system/ssh.socket.d/addresses.conf."

It's like Canonical is doing 1960's quality acid.

At least the garbage can be disabled:

"it is still possible to revert to the previous non-socket-activated behavior"

Between having to remove snapd and mark it as not to be installed, and now, in the next Ubuntu, having to switch ssh back to the current behavior, it might be easier to migrate my servers back to Debian, or to look for a solid non-systemd OS.
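
If it helps anyone, the revert is roughly this (from memory; check the linked notes for the exact steps):

  sudo systemctl disable --now ssh.socket
  sudo systemctl enable --now ssh.service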


What exactly is "garbage" about this? It's so tiring how systemd opponents insist on name-calling instead of substantiated criticism.

There is no reason every single application should manage network socket acquisition on its own - I'm not very fond of the times everyone and their mother wrote whacky shell scripts to start and stop their services, either. But somehow those seem to be the "good old times" you guys miss.


I don't think it's a systemd thing. This sounds more like an issue with changing a server's behaviour without asking.


Distribution upgrades have never been an unobtrusive thing. Despite this, everything will continue working exactly as configured before the upgrade, which applies new configuration recommendations by the vendor. What is wrong with that?


What's wrong is the config file moved. If a sysadmin is used to a config file being somewhere they know, and then it disappears that can be extremely frustrating. Especially on a production system.


Which the sysadmin knows, because they reviewed the changelog for the major system upgrade they just did. You wouldn’t install a new major version of a database without any precautions either, right?


> because they should have reviewed the changelog

I.. uhh.. Yeah.


No, seriously, my point is: can you blame anyone else if you don’t?


Certainly for SSH I find this a bad idea. If you need to ssh into a troubled machine, it might very well be that sshd cannot be started.


I don't necessarily think it's an outright bad idea, but it's certainly a departure from how sshd is traditionally run, and without awareness of the change, this kind of "magic" runtime behavior could lead you to not expect sshd to be unavailable in such a scenario, and increase time to resolution during an incident.

If your systems are more pets than cattle, then I think I too would prefer an always-running ssh daemon. If your workflow is only to ssh into machines during bootstrap, however, then having sshd run only during initial bootstrap and then shut itself off does seem like a nice way to free up a small amount of resources without stopping or disabling the daemon post-bootstrap.


If it's so troubled that a process won't start, it's probably time to reach for the IPMI console. Even if ssh is still running, if the system is that broken, is bash going to start, or whatever tools you might need?


I've rescued pretty messed up systems before. In one case, a service went haywire and created tens of millions of <1kb files, eating up all the available inodes in the filesystem. The volume was only ~45% full, but you couldn't make any new files. If that happened here, it's unlikely the ssh process could start correctly since it creates locks, logs, and pid files. Those are mostly in /run, so it might be okay, but it does make me a bit antsy.


TBH - for any non-server class machine on my network, I'm fine with that.

SSH should probably be running 24/7 on any server (to keep those resources allocated for maintenance access), but if it's my workstation with a monitor - then it's a non-issue.


Ew


I should really spend more time learning systemd. The more I look into it, the more cool and useful features I discover.


If you have anything at all to do with OS administration, management, or software packaging, it's worth it.

If I could offer a little advice: the systemd man pages are useful as a reference, but are terrible to learn from. Part of this is because there are parts of systemd that everyone uses and parts that almost nobody uses, and it's hard to guess which are which at first. Also, the man pages are dry and long and quite often fail to describe things in a way that would make any sense whatsoever to someone who isn't already intimately familiar with systemd.

Most of my systemd learning came from random blog articles and of course the excellent Arch wiki.


Also, it's 99% "no different than doing it via the command line", and it comes with a little JS terminal GUI, uses native users + passwords, has some lightweight monitoring history, and lets you browse a bunch of configuration that you would usually have to remember byzantine systemd command lines for... it's awesome for what it is!

I'm happy to run it (aka have it installed) on all my little Raspberry Pis, because sometimes I'm not at a terminal when I want to scope them out, and/or if I'm at "just a web browser", being able to "natively ssh into them" via a web server (and then run `curl ...etc...` from a "real" command prompt) is super helpful!


Just want to clarify: there's still a server process running to serve the Cockpit web app's static HTML/JS assets, right?

Do you essentially mean that systemd socket activation is used basically only if/when the Cockpit web app end-user/client sends a REST/GQL/etc. request for logs, for example?


I thought the cool thing was all the rookies who install this thing in a way that it's publicly accessible. How many stories have I heard about people who accidentally configure phpMyAdmin to be publicly accessible... Now you might not JUST leak your whole customer DB!


Interesting, I always thought socket activation meant defer launching a process until somebody tries to access it through the network, but... does it also finish the web server process (or whatever is used here) as well after the request is serviced?


No, it doesn't automatically close the process. Two options I can think of: the application exiting when it's done with its thing, or RuntimeMaxSec to make it close after a while.

systemd passes the socket on to the application so I don't think it has any reference to it anymore, so it wouldn't be able to know when the socket closes.
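
A sketch of the RuntimeMaxSec route (value made up); the socket unit keeps listening, so the next connection just activates the service again:

  # drop-in for the activated service: force it to stop after a while
  [Service]
  RuntimeMaxSec=10min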


systemd-cgi :^)


Everything old is new again.

The next big thing will be a web server where you don't need to use the command line to deploy your project, just sync your workspace folder and it will automatically execute the file matching the URL.


It was/is inetd[1] actually

[1] https://en.wikipedia.org/wiki/Inetd


Socket activation means that every application must be modified so that it can run both with activation and without. So you need to patch every application for compatibility with systemd. And if tomorrow there is an alternative system daemon, will you have to patch everything again?
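
In practice the patch tends to be small. A rough Python sketch of the dual-mode logic (port number made up):

  import os
  import socket

  SD_LISTEN_FDS_START = 3  # first fd passed by systemd, by convention

  def get_listen_socket(port=9090):
      # started via socket activation: systemd signals it through these env vars
      if os.environ.get("LISTEN_PID") == str(os.getpid()) and \
         int(os.environ.get("LISTEN_FDS", "0")) >= 1:
          return socket.socket(fileno=SD_LISTEN_FDS_START)
      # started directly: fall back to the classic bind/listen path
      s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      s.bind(("", port))
      s.listen()
      return s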


> A server-side JavaScript script might pull data out of a relational database and format it in HTML on the fly. A page might contain JavaScript scripts that run on both the client and the server.

This is exactly what we do with server-side rendering today, huh. So the originally intended uses of JavaScript included servers too.

> Java programs and JavaScript scripts are designed to run on both clients and servers, with JavaScript scripts used to modify the properties and behavior of Java objects

This, not quite so.


Netscape had a line of web servers supporting server-side JavaScript. But who in their right mind would use JavaScript on the server side?


Back in 2000 there was the Helma Object Publisher, a web application framework written in Java, based on Netscape's Rhino AFAIR, enabling server-side JavaScript including a form of ORM which mapped the DB into JS objects. Antville, for a time the biggest Austro-German blogging community, was and I think still is written in it.

https://github.com/antville/helma


Yes, it was called Netscape Application Server. They changed the product line name later to iPlanet Application Server, IIRC.

Modules written in JavaScript and run on the server were called AppLogics (bad name, IMO).


As did Microsoft with ASP (your choice was VBScript or JScript for a scripting language)


JScript still runs in modern Microsoft servers. I still have one site running with ASP.NET/JScript.


They were very slow even compared to the Java of that age.


>> Java programs and JavaScript scripts are designed to run on both clients and servers, with JavaScript scripts used to modify the properties and behavior of Java objects

> This, not quite so.

Well, technically, one can (and often does) use JavaScript to send an asynchronous request to a server that - when programmed using a Java-based framework - will probably modify the properties of one or more Java objects, i.e., some entity, etc.


Java Applets were a thing for a long time though. Interesting but very dead tech.


They were essentially the same idea as WASM -- a portable machine code that runs on web browsers -- but a quarter of a century earlier.

I suspect Java applets fell out of fashion because they could only manipulate a rectangular area of the screen and not the DOM. Did the DOM even exist in 1995?


The problem with Java was the flawed sandbox implementation. Every month there was a sandbox-escape vulnerability. People didn't keep their JVMs up to date. Installing the JVM plugin for the browser - across x86 and amd64, Edge, IE, with multiple JVMs on the machine - was a holy mess.

So in the end it was security nightmare.

Jobs finally killed it by declining to support Java on iPhones. From that point it was doomed.

WASM is part of the browser. Browser vendors figured out how to keep their browsers evergreen. You don't need to install anything, it just works. It works on iPhones.

So, yes, the idea was the same, but the implementation was actually good and supported by everyone.


The vulnerabilities and constant updates were the final straw that killed Java applets, since they convinced browser vendors that the unpopular Java applet plugin was more of a liability than an asset. Java's security story was broken from day 1, but back when Java applets were trying to gain mindshare, security awareness just wasn't there, and almost-weekly nag screens asking you to update your JRE were not a thing yet.

It's also not that users rejected the idea of apps restricted to a rectangle inside the web browser - after all, Flash and even ActiveX (to a limited degree) managed to churn out popular apps running inside a (badly) sandboxed rectangle. Some of the popular use cases (e.g. games) did not die, and are now faithfully served by WASM. Rectangle-constrained applets are obviously not good for every use case, but Java applets failed to succeed even at the places where Flash shined later.

Looking back, I think what killed Java applets was that they were just slow, ugly, hard to interact with and had a bad developer experience. Developing HTML pages was easy - just a change/reload cycle. The same goes for cgi-bin scripts. Java applets required compilation, packaging and then testing, at a time when automated build tools were not there. IDEs with RAD designers that tried to improve the experience did come, but at least most Java IDEs I tried back then were quite cumbersome to use compared to incumbent native RAD tools like Delphi or Visual Basic.

In addition to that, Java applet support was initially spotty during the browser wars, and features such as JAR files and different JDK 1.1 (and later JDK 1.2) APIs were not evenly supported across Netscape Navigator and Internet Explorer[1]: https://www.infoworld.com/article/2076251/applets--still-ess...

And then there's the slowness and ugliness. You really only had AWT available for your UI and graphics, and it sucked. It was ugly, buggy and slow. In theory, it shouldn't have been so, but the technology just wasn't ready and just wasn't mature enough.

I think Flash was so wildly successful because it took an entirely different (and less ambitious) route at the beginning. Flash started as a multimedia player with user-friendly authoring tools. They later added ActionScript, and Flash itself became the RAD tool that could penetrate the web frontend.


Ease of development and speed of loading really makes a difference.

I remember just sitting around watching these Java applets load whole applications, while Flash games were snappy.

The moment you hit some webpage and it showed you a Java applet loading bar, you knew the website was basically useless.

I think this had quite a bit to do with Java and the way it implemented UIs, and also with serving a whole bunch of unnecessary stuff. To be fair, this eventually became an issue with JS as well, but not to the same degree relative to network speed.


They fell out of fashion because you had to install the JRE on client machines and because it ended up with an icon in the Windows task bar that obnoxiously asked again and again to download a 100+ MB update, week after week. The internet was slow, there were still a lot of 56 kbps modems, and many people never upgraded and ended up hating the experience. Flash was better looking and there was a much more compelling reason to download it: games.


No, Applets were just clunky and slow-loading--that's why they weren't popular. Any website loading one would freeze up the whole machine. But there were definitely Java Applet games, and I happen to remember the last one I ever played: Minecraft.

Flash loaded quickly and seamlessly, but otherwise had all the same problems.


Damn, I remember playing creative-mode Minecraft back in 2010 - a surprisingly good experience for a Java applet embedded in the browser. All running on Linux with fairly smooth OpenGL rendering. I also remember using a wrapper/front-end called World of Minecraft that allowed you to run the jar outside of the browser and even had multiplayer support...


Does anyone know why applets loaded so slowly and flash loaded so fast?


Not only that, but they were notoriously vulnerable to attacks.


Nope, at least not as a standard. The original DOM spec is dated 1 October 1998.

https://www.w3.org/TR/1998/REC-DOM-Level-1-19981001/


Notably though, in 1997 Netscape 4 already had document.layers and IE4 document.all, two incompatible precursors to the DOM.

Some info about the LAYER API (which Netscape eventually killed, as the MS/W3C DOM prevailed):

http://web.archive.org/web/19971015223701/http://developer.n...

Also, a DOM compatibility guide can be found here:

http://www.dannyg.com/dl/JSB4RefPoster.pdf


I think in the end they were basically replaced by Flash and Shockwave, which were much easier to use for what most people were using Java applets for. Then, of course, Flash was abandoned because Adobe, being Adobe, seemed to absolutely refuse to fix many of the huge security issues that Flash had. Steve Jobs then led the fight against Flash, and that basically leaves us at today.

Remember that even after JS was released it still took many years until it became fast enough to use for big applications. If I'm not wrong, Google Chrome was made partly with the explicit goal to have super fast JS.

Also, no, the DOM did not exist until 1998.


> Did the DOM even exist in 1995?

While the limited facilities provided in the first generation of JavaScript and JScript for detecting events and manipulating HTML elements eventually came to be known as "DOM Level 0", it wasn't until 1998 that the first DOM specification was published. Source: https://en.wikipedia.org/wiki/Document_Object_Model


It was also because they were slow to load, and especially because, unlike Flash, targeting newer Java plugins was painful and unreliable.


I think it was possible to view the source of applets.


The book "JavaScript: The Definitive Guide" had a server-side component since I think the first edition in 1996.


I never understood these books. Who reads a 1000-page book to learn a programming language? I bought this particular book some 10 years ago hoping to "master" JavaScript. The book was so dull that I could not even get to the 3rd chapter. There are books about much more complex subjects that wrap up in under 400 pages. What's really the market for these doorstop books?


There's a lot of value to having "everything in one place" during the learning process and later for reference, plus the convenient dead-tree format means bookmarks are instantly accessible. They're also written by competent people and edited by professionals. I'm a fan.

Of course when one's chosen tech is too new, and/or moving faster than the book publishing process then one is stuck with Stack Overflow, random blog posts and insufferable YouTube vids. Which all also go out of date within a year.


It's mostly gone now with the web, but having a go-to book used to be (and can still be) a really good reference. In the nineties, ones like the Camel book for Perl were practically required reading in their fields.


> I never understood these books.

How long have you been in the industry?

They don't make a lot of sense now. Reference material is generally kept in electronic form, often online rather than locally, both because it saves resources (paper, space to store it) and because things change and online content is so much easier and cheaper to keep up to date. That, and things change faster these days in most tech fields.

But think back to the late 90s, when many people, even devs, had little more than a slow and expensive (per-minute phone call costs) dial-up connection that blocked other calls while active; having a local resource that you could flick through to find details made a lot more sense. Also, not having convenient, easily portable devices with which to store and read the information made a huge difference. Even if you did have a laptop, it was bulky and the screen was not particularly nice to read from. Heck, desktop screens were headache-inducing compared to modern kit (imagine a goldfish-bowl-like CRT, 14" diagonal - less, because you shrunk the display to avoid the really rounded bit at the edge - 15" if you were lucky, 17" if you were rich, displaying at 640×480 or maybe 800×600 resolution (maybe 1280×1024, again: if rich), in 16 or 256 colour depth so no font smoothing, with a pretty low refresh rate), making reading the information from a book much more pleasant even if you did have a local online copy at your fingertips.

Those books tended to have three parts and you weren't expected to read them all cover to cover:

1. Fundamentals, which you would probably read fully unless you had a fair amount of relevant experience of similar languages/environments in which case you'd skim to pick out the key differences.

2. Specific parts in detail, which you would read some sections of but ignore others until you needed them later. This might include worked examples of using common parts of the standard library and so forth.

3. Reference detail. This might be half the book or more, maybe two thirds in some cases. This you would not read as such, you would use it like a dictionary or mini-encyclopedia. That is the part that makes least sense in the modern world.

If you want a really extreme example: I had a copy of MS's C/C++ compiler, the full documentation for which included a printed reference to the Windows API of the time (Win 3.x era). IIRC that stack of books, piled on the floor, came up to noticeably more than half my height. The vast majority of that was tools and API reference material, though small chunks were intended for more end-to-end reading.

The huge books did seem to hang around beyond their really useful period. I don't think I bought a dead tree like that as late as 2010, so your “about 10 years ago” is probably when they were well past prime. Even in their prime there were some really terrible examples (poor tutorial sections, reference sections that were out of date before they were even published, and full of errors on top of that).


Yeah, a lot of younger folks don't get this. In 1995, the web wasn't like today, only slower. There were no web search engines, only catalog pages like Yahoo. Documentation was often so hard to come by, View Source really was your only option if you wanted to learn newer techniques. That AND it was slow: a 14.4 modem downloaded 1MB every 10 minutes under optimal conditions (no one called while you were connected).

CGI.pm was king.


While the modern Internet has certainly changed how documentation is published, your suggested three-part breakdown seems as relevant today as it did when documentation was mostly delivered in print.

It's also worth pointing out that being able to skim comprehensive, reference-style documentation to get a general overview of a technology without getting bogged down in irrelevant-for-present-purposes detail is a useful skill to learn.

“As the art of reading (after a certain stage in one’s education) is the art of skipping, so the art of being wise is the art of knowing what to overlook.”

— William James[1]

[1] https://archive.org/details/theprinciplesofp00jameuoft/page/...


At the time there wasn't really a good search engine, and Yahoo's hierarchy of interesting sites was just starting. There was no Slashdot and no Google to help you figure out how to do things or fix particular problems. Not much OSS to take as an example. Online docs were dry and limited. Blogging only started in 1994, so there wasn't much out there yet. But as the web took off, books quickly became almost irrelevant.

The framework obstacle course for developers took off. My particular path through it was POJS, YUI-ext, EXTJS, Dojo, jQuery, Backbone, Angular, React - probably about 2 years each give or take.


They were also language references. What we google now was in those books back then. The second half of the '90s was the tipping point between books and web docs. Books were at least 6 months behind web docs, and web technology moved faster than today - it was narrower in scope, but it changed a lot. Think about the times when JavaScript changed a lot in the early 2010s. It was something like that.


I have a dead-tree copy of the JVM Specification. I'm never going to need it, but it's a fascinating read and stands up well as a book.

https://docs.oracle.com/javase/specs/jvms/se17/jvms17.pdf


You could dip into sections of these books and pick and choose what you needed to learn. I don't think I ever read most of the three or four hundred books I bought in the period from 1985 to 2006 in a linear fashion. By 2006 I'd mostly switched to learning new stuff from online sources.


> Who reads a 1000 page book to learn a programming language?

Uh, I did, in 1999. My parents would drive me to Barnes and Noble and I sat there and read the Definitive Guide cover to cover.


Following that time, frameworks like ASP or ColdFusion (?) had server-side JavaScript for code snippets, similar to what the browsers were doing on the client. So I think the "JavaScript on the server" note here is more addressing the ASP use case than the Node.js one.


The funny thing is, I do know about WeeChat (the IRC client) and did use it in the past, but still thought the article was about WeChat (the Chinese messaging service) for a moment. I'm embarrassed!


Oh wow. So some countries had free local calls, which boosted the local BBS culture. That's pretty cool, huh. I had no idea. In my home country there was no such thing, so BBS's tended to be owned by big companies.


I also learned the lesson to use a different history file recently. I added `HISTSIZE=-1` to `~/.bashrc` to make the number of items stored in the history unlimited, but the history kept being truncated. The problem was that Ubuntu's default `~/.bashrc` file set `HISTSIZE` at the beginning of the file, which had the side effect of truncating the history immediately. I had tried various methods, but in the end using a different history file felt the cleanest.
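
A rough sketch of the kind of settings involved (the file name is made up):

  # in ~/.bashrc, after the distro defaults
  HISTFILE=~/.bash_history_eternal   # separate file the defaults never touch
  HISTSIZE=-1                        # unlimited in-memory history (bash >= 4.3)
  HISTFILESIZE=-1                    # never truncate the history file
  shopt -s histappend                # append on exit instead of overwriting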

