Skimming this makes me long for the time when the multiuser nature of Unix was really utilized as intended.
I miss being on a system and typing "who" to see who is on, and starting a chat with them with "talk". Or sending mail to other users just on that system. Using "finger" to read updates in their .plan files.
Many of the "social" aspects of the internet today existed in that multiuser aspect of Unix, in a much more intimate way.
Macs and Linux machines are still multiuser systems, but "users" now mostly just provide separate configuration and permission scoping for different application services. It would be nice to have someone hop onto a terminal on my Mac once in a while and say hello.
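For anyone who never experienced it, here is a rough sketch of what a session like that looked like (usernames, times and .plan contents are made up):

    $ who
    alice    tty01    Oct 12 09:14
    bob      tty03    Oct 12 09:30
    $ finger bob
    Login: bob            Name: Bob
    Plan:
      rewriting the report generator, back Thursday
    $ talk bob                     # opens a split-screen chat if bob answers
    [Waiting for your party to respond]
    $ mail bob                     # local mail, never leaves the machine
    noon at the usual place?
    .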
I just didn't enjoy the social/cultural aspects I encountered on the tildeverse; it was quite immature and dysfunctional, based on my experience ~2 years ago.
I can confirm sdf.org doesn't load using Firefox but shows up OK with Chromium.
On my Debian PC. Probably a Firefox bug? Oh, wait... the second time I tried sdf.org it showed up correctly, so I'm guessing it's a temporary DNS name-resolution failure on the first attempt. Doesn't make sense!
Unix computers at that time were expensive beyond the finances of most individuals to own one at home, and required special power and air conditioning. That's why one machine was shared among many users.
Cloud computing is ushering in the exact opposite. Each user has their own private cloud VMs; it’s much easier (and cheaper) to spin up and tear down VMs with images precisely customized by each user than to constantly run and maintain a single VM supporting a multiuser system. It’s also a far superior user experience; having root on your own machine is way better than having to ask a sysadmin to update a shared machine (and often getting denied in the process).
What’s wrong with root access to a private cloud VM? Since VMs are private to each user, there are no security concerns with root access. If a dumb user playing amateur sysadmin accidentally borks their VM, they just tear it down and spin up a new one.
On the contrary, sysadmins who give root access to cloud VMs are smart, and know that having root access in safe, controlled environments does wonders for developer productivity. Imagine not having root access to your developer laptop! I have friends who’ve worked at such companies, and they all said it was hell.
In the vast majority of cases, you don't need root access to develop. You can install software and build and run code in your home directory. Root access is often just a lazy way for software installers to avoid the extra step or two of modifying their $PATH because some executable is not in /usr/bin.
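For example, a user-level install from source typically looks something like this (paths are illustrative, assuming an autotools-style package):

    $ ./configure --prefix=$HOME/.local      # install under your home directory
    $ make && make install                   # no root needed at any step
    $ export PATH="$HOME/.local/bin:$PATH"   # pick it up; put this in ~/.profile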
That precludes use of package managers, which makes life way harder (have fun building all dependencies from source!) It also makes it way harder to link against shared libraries, since you’ll have to update include/shared object paths for every piece of software you build.
There is absolutely zero risk or downside of granting root access to single user machines. Zero, zilch, nada.
> I miss being on a system and typing "who" to see who is on, and starting a chat with them with "talk". Or sending mail to other users just on that system. Using "finger" to read updates in their .plan files.
You've just described a BBS. Some are still around, accessible via telnet/SSH.
>You've just described a BBS. Some are still around, accessible via telnet/SSH.
right, but that misses the point a bit; being connected to others on the system at the time was basically frictionless -- no need to connect somewhere, you were already on the same machine.
saying "hey bob, connect to X bbs" is a bit different; you're not already working there -- it requires user intervention to experience.
What I find interesting is that Plan 9's "who" did not. It's just

    ls /proc | sed '{grep out usernames}' | sort -u

so it's pretty specific to whatever machine it runs on, as opposed to who's logged into the installation you've run it on.
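A rough Unix analogue of the same idea (it lists the owners of processes on this machine, which is not quite the same thing as login sessions):

    $ ps -e -o user= | sort -u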
On another note, on Tenth Edition Unix, when you logged in from a Blit, something would get pipefiled[1] over your tty and wrap writes from new opens so they went to a window-manager window. This was possible because there were no name spaces for files.
What you really miss is the social aspect of a school or work environment where you could form meaningful connections with others without being stunted by corporate politics or an ever-growing list of sensitivities that gatekeep discussion of anything people actually care about. Otherwise, similar functionality still exists in corporate tools: you can certainly chat with your workgroup or with individual members, and the corporate directory usually has a place to type in what you are working on. Plus, others don't cause your processes to slow to a crawl by running CPU- or I/O-intensive workloads. Technology is not what needs fixing in this picture.
No, I think what they miss is literally what they described. Because I feel the same way with LP MUDs which had many of the same commands and interactions, despite in fact having friends I can speak to and interact with in other locations.
I read an article about students using shared Google docs to chat in class and (to me) it sounded a lot like the experience of communicating with others connected to the same remote host.
I wonder if the Google Docs experience has an equivalent to the joy of discovering new systems and connecting to them and discovering new societies. I fondly remember some ephemeral connections made to the odd insomniac grad student or bored admin through those methods.
Is it? Do you feel free to discuss subjects you most care about or your coworkers most care about without fear of getting cancelled? Not saying politics, just common aspects of human life.
Chapter 8 was the most surprising part of the book for me when I first read it. Up till that point a lot of the information was mostly what seemed like administrator/user territory. Then BAM! it goes into designing a full-fledged interpreter for a language of BASIC-level complexity, and suddenly you are into language design; it's utterly fascinating how, using simple tools like yacc, lex and some simple code (the entire code for the interpreter is published in the Appendix), you can do amazing things as a programmer.
That chapter, named "Program Development", is a masterpiece and a must-read for every software engineer.
I read it before I knew what Compilers/Language Design involved, and it was sort of a revelation how much you can get done without knowing too much theory, but with the intelligent use of appropriate tools, a framework and a guiding hand. Whenever people ask me for resources on Compilers/Language Design I always tell them to first read this chapter a couple of times and grasp it thoroughly, and only then move on to other proper textbooks.
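If you want the flavor without the book in hand, the edit-build-test loop of that chapter looks roughly like this (from memory; the first hoc stage keeps the lexer and main() in the .y file, so yacc's output compiles on its own):

    $ yacc hoc.y                 # grammar -> y.tab.c
    $ cc y.tab.c -o hoc          # compile the generated parser
    $ ./hoc
    2 + 3 * 4
            14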
You have to read this so that when inodes come up in your sysadmin/devops social gathering's scintillating conversation, you don't have to run to the bathroom, google it on your phone, and try to muddle your way through without your face turning red.
(inodes just strike me as one of those weird little things like SQL, except much smaller and easier to learn -- just something critical, always there, and for some reason not understood by a lot of sysadmin/devops types. I had forgotten the details until rereading this book recently. I imagine the details vary widely on modern filesystems, but ext4 (descending from and backwards compatible with ole ext2) probably has the concept, as does BSD's ffs/ufs, and the idea probably gives you some hazy sense of this general area of the world for a lot of filesystems.
I also find the treatment of some old topics kind of illuminating, like the stty command when I do obscure stuff like use serial terminals or try to use a text buffer as a tty--interesting that you can still tell the kernel "hey, do this dirty hack for my terminal when handling characters and things. Thanks.")
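A couple of concrete stty examples (flag syntax differs a bit between GNU and BSD stty, so treat these as illustrative):

    $ stty -a                          # dump the current terminal settings
    $ stty -echo; read pw; stty echo   # turn echo off briefly, e.g. to read a password
    $ stty -F /dev/ttyS0 9600 raw      # GNU/Linux syntax for configuring a serial line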
You should be able to see inode usage with `df -i`, no? I usually use `df -Tih` (print the file system type, list inode usage, in human-readable format).
I've had the other case where `df` says the disk is full with plenty of inodes left, but then `ncdu` says there is plenty of space (supposedly).
Lo and behold, you have some borked process still holding GBs of log files open that you thought you "deleted" -- really just unlinked from a directory.
tl;dr: Don't forget `lsof +L1` to find pesky processes that are holding files open that you unlinked from a directory but have yet to be actually marked as deleted on the filesystem [1]. The space on disk won't be freed until the process closes the files or is terminated. (Technically, the open files are associated with a mount point, not the file system itself, which makes dealing with symbolic links across file systems somewhat confusing.)
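A quick way to see it in action (the file name is made up; any long-running process holding a log open will do):

    $ tail -f /var/log/big.log &   # some process keeps the file open
    $ rm /var/log/big.log          # "delete" it -- really just unlink the name
    $ df -h /var/log               # the space still shows as used
    $ lsof +L1                     # lists open files whose link count is 0
    $ kill %1                      # once the holder exits, the space is freed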
Did you have to learn about lsof +L1 the painful way like I had to learn about df -i (every office phone ringing at once, emails filling up with alerts, etc) or were you wise from the beginning?
I agree. Also every software engineer should have written a compiler, a scheduler, a device driver, a small kernel, an IEEE compliant floating point library and a 3D raytracer in their free time between 30 courses by the time they have completed their degree! /s
Writing a filesystem from scratch is something I would expect from a full 5-year degree entirely dedicated to operating systems, not a generic SWE curriculum.
You don't need a bachelor's degree to write an ext2 filesystem driver, I didn't finish high school and I've written a read-only one that works on both MSB (68000) and LSB (x86) byte order machines. The format is well documented:
Even though other file systems call them something else (e.g. NTFS uses the term FRS) and their actual fields might be in a different order and contain slightly different data, all file systems use some kind of file table that contains fixed-size file records.
I am writing an object store that also can manage unstructured data just like a file system. It too has a table of object records that are all the same size. One of the secrets for managing large numbers of files (today's HDDs are big enough that you can create volumes with 100s of millions of files) is to keep this record as small as possible. The FRS in NTFS is 4096 bytes per file. Try storing that table in RAM when you have 200M files! My records are only 64 bytes in size. 16GB of RAM will easily cache the whole table and still give you memory to spare.
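To put rough numbers on that: 200M files at NTFS's 4,096 bytes per record is about 820 GB of table, while 200M files at 64 bytes per record is about 12.8 GB, which a 16 GB machine can hold with room to spare.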
Well, maybe not that, but it is very surprising that there are software engineers out there who don't know what inodes are. At my uni this was explained in an OS class in year 2 of the bachelor's programme. It's basic knowledge.
Yes, and on the other hand, there are surely also degrees with OS knowledge, but without AI knowledge. And degrees with AI knowledge, but without 3D graphics knowledge. And... well, you get the point.
At least Portuguese universities always have introductory lectures on everything relevant in CS; electives are for deep diving, and you always need to do a bunch of them to get the graduation credits.
Under the Licentiate system, a bachelor's is 5 years, a master's 2 years, and a PhD another 3.
One of the reasons for Bologna was to try to harmonize degrees, because in countries like Portugal someone with a degree would otherwise be downgraded when applying abroad.
So we are pretty much discussing bachelor's degrees.
I would be willing to bet that some OS knowledge comes in very handy for almost anyone who writes software that runs on an OS. I’m sure it’s more important for the 3D graphics programmer to understand the execution environment within the context of DirectX or OpenGL etc. But it is hard to imagine building any native application without some grasp of CPU scheduling, memory paging, device I/O, etc. that you’d get in a basic OS course.
Really strange perspective. Sure it’d be nice if everyone understood those things. But to think they’re absolutely necessary for any application level programming is… odd.
In most of the last two decades of my career, the OS has been abstracted away from developers by their sysadmins. And things have only gotten more extreme with the emergence/resurgence of managed hosts in the form of cloud computing “serverless” services.
Worse, there are a lot of SW engineers who have no idea how a computer works. One common misconception is that code is executed "instantly". Especially in embedded, this is problematic.
At my university the compilers course was optional. I always thought it was weird that such a foundational part of computing was an elective. We did have to take an OS course though!
And lots of people... I was curious so I checked a video that taught JavaScript to newbies. It was full of errors. I would say that this is a common thing, unfortunately.
This is a great book, one of my faves. It is a surprise to see it hosted on archive.org when it is still in print, though. Please support good technical writing!
"Being in print" or "new" a is not well-defined in this context. "Being in print" is only a metaphor, since the process of printing is typically completed when a copy of a book is sold (and makes not sense in the case of publishing-on-demand, where it would be equivalent to "is available"); and "new" can mean "in its original condition" or "recently published" (with some uncertainty what time span "recently" denotes).
A more precise criterion would be whether a book is still on its publisher's backlist. As far as "The UNIX Programming Environment" is concerned, the answer is positive. Its publisher Prentice Hall was acquired by Simon & Schuster and later sold to Pearson.[1] On Pearson's website the book is still available.[2]
"The UNIX system is full duplex: the characters you type on the keyboard are sent to the system, which sends them back to the terminal to be printed on the screen. Normally, this echo process copies the characters directly to the screen, so you can see what you are typing"
> Although most users think of the shell as an interactive command interpreter, it is really a programming language in which each statement runs a command
I read the book in one summer when I was in high school, and this sentence was the biggest light-bulb moment for me. So simple and eloquent. This was the point where I _got_ what programming and writing code really mean.
I'd love this book if it were rewritten for ANSI C and maybe modern tools, such as the ones for NetBSD or OpenBSD at least. AWK and sh still work, of course.
This book has an exceptionally high signal:noise ratio. It's long been a favorite of mine, even with its relatively high price tag for a small paperback.
I had to buy this book back in 1985 when working on my CS degree. Like just a handful of other books, it has survived decades of going through my collection of physical books and weeding out ones that are no longer relevant.
Amazingly I have this on my bookshelf, arm's reach away, from the times before everything was on the internet. Good times. Though whoever stole my ring-bound vi reference guide of that era -- cursor you.
I love this book! There is a lot of abstraction today that most software workers benefit from, but I think there is a lot of value in understanding lower-level mechanisms.