The TTY demystified (2008) (linusakesson.net)
243 points by Ivoah on Feb 5, 2017 | hide | past | favorite | 43 comments



One of the most important commands I learned was "reset", especially when I was working with anything that would put the terminal into some special mode. From the man page:

"When invoked as reset, tset sets cooked and echo modes, turns off cbreak and raw modes, turns on newline translation and resets any unset special characters to their default values before doing the terminal initialization described above. This is useful after a program dies leaving a terminal in an abnormal state."

Except, it is also useful when you exit something like a Serial-Over-LAN session, after watching the BIOS do inappropriate things to your terminal window.


If the SOL client is leaving the terminal in that state, it's almost certainly broken. The only time it's acceptable not to clean up after yourself is if you literally can't, because you've been whacked with a signal you can't handle (SIGKILL); or because you've _tried_ to clean up and for some reason the correct sequence of restoration calls has failed for some unknown operational reason.


Some edge cases:

- cat /dev/urandom (cat can't reasonably tell if you've piped to another command that expects raw binary data or a terminal that expects to be left in a sane state)

- ssh remotehost cat /dev/urandom (ditto)

- Emergency exit to avoid data loss when detecting e.g. memory corruption (lesser evil)

- Hardware bugs/failures erroneously emitting escape codes (memory bit flips, loose RS-232 connections)

You're of course correct that the vastly preferred thing is to clean up when it's reasonable to do so - and it's almost always reasonable to do so.


cat could tell, but it probably wouldn't be a good design. Calling isatty is easy, and I think GNU cat already does so and buffers differently. But how can it tell what raw binary data did to your terminal? First you'd need to program all the different terminal types into cat (probably by using termcap), which isn't the simple "Unix" design to begin with. Then you can't just reset the terminal after running cat. You'd need to see whether the binary data screwed up the terminal. That isn't in termcap and is probably intractable in the general case.
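For illustration, the isatty check mentioned above is cheap; a minimal sketch in Python (the `describe` helper is hypothetical, the check itself is the point):

```python
import os
import sys

def describe(fd):
    # The cheap check a cat-like program can make: is this fd a terminal?
    # Note it says nothing about what raw bytes already did to that terminal.
    return "tty" if os.isatty(fd) else "not a tty (pipe/file)"

if __name__ == "__main__":
    print("stdout:", describe(sys.stdout.fileno()))
```

Run it directly and then with the output piped through `cat` to see the answer change.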

While catting actual random data is contrived, using ssh is not. What if the connection is broken while you're in vi? Vi switches to the alternate screen, and then scrolling stops working. If you don't know what happened, it's not clear why. How can ssh fix this, other than acting like tmux and parsing every terminal command sequence? I'm not convinced there is a simple fix.

This doesn't even get into encoding issues. Some UTF-8 sequences are terminal commands. The UTF-8 encoding for U+00DF (the German eszett) contains a byte sequence which will cause some terminals to wait for a termination byte you won't ever send.

Plan 9 had the right idea: do away with terminal command sequences altogether. But that only works there because you can take your "terminal" and turn it into a GUI window.


One reason might be an escape character conflict.

For ipmitool, one sequence for exiting a SOL session is <return><tilde><period>. Turns out, SSH uses the same sequence, so if you're using ipmitool over SSH (such as via a jump host), you'll disconnect SSH instead of ending the SOL session.

(BTW, you can change the "EscapeChar" ssh config item to something else, or you can send <Return><tilde><n><o><n><e> to disable it for a session.)


You can also send \n~~. and SSH will send a single ~. Also useful for quoting nested SSH sessions.


>One of the most important commands I learned was "reset"

Used to use 'stty sane' to fix such issues, sometimes. Remember 'reset' too.

Also set many of the settings manually by doing this (if lucky enough to get the chance in advance, not always the case at customer sites):

When the terminal was in an okay state, do:

stty -a

where -a means show All settings.

Then read up on them (in man pages and manuals), and then when the terminal got borked (by a crashed curses or other program, or manual human error), manually set some of those settings back to normal values again, often all in one stty command, like:

stty ocrnl inlcr echo erase ^h intr ^c ...

(writing from memory, some may be wrong).

AFAIK many of the settings were poorly documented, or not documented at all - at least in the man pages. Solving many such issues for clients as a system engineer could be frustrating as well as exhilarating ...
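The same save/restore discipline is available programmatically; here's a sketch using Python's termios module on a throwaway pseudo-terminal (so it runs even without a controlling tty). This is roughly what saving `stty -g` output and restoring it later does:

```python
import os
import pty
import termios
import tty

# Open a fresh pty pair so the demo doesn't depend on a controlling tty.
master, slave = pty.openpty()

saved = termios.tcgetattr(slave)            # snapshot, like `stty -g`
tty.setraw(slave)                           # "bork" the terminal: raw mode
assert termios.tcgetattr(slave) != saved    # the settings really changed
termios.tcsetattr(slave, termios.TCSANOW, saved)  # what `reset`/`stty sane` approximate
assert termios.tcgetattr(slave) == saved    # back to the saved sane state

os.close(master)
os.close(slave)
```

A well-behaved curses program does exactly this: tcgetattr on entry, tcsetattr on exit, with the restore in whatever cleanup path it can guarantee.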


This isn't entirely relevant to the article, but I hope some shell/TTY expert might read this, and know the answer :)

Hypothetically, could a TTY, in combination with the help of a shell, be able to separate the stdin, stdout and stderr of every running program? By default the shell could do what we normally see, but then easily (with some kind of GUI control) let me pull a particular program's stdout/stderr out to a separate window, including (if I stored it) all its previous output, and redraw the current terminal removing that program's output.

I've always been tempted to do something like this, and I'm curious if there are any massive reasons it's simply not possible.


Generally speaking, not with current ttys. The main thing you're missing is some way to know what output is coming from what program. Keep in mind, some output is generated by the kernel and isn't from any programs at all, and output may come from more than one process at a time. The kernel is the stage where the 'mixing' happens, so if you wanted real support for this you'd want to modify the kernel in a way that lets the tty device get separate streams for each process.

That said, the best way to try to achieve something similar to this is probably by having your program open a completely new tty device for every process-group you start. Then the GUI program can buffer the outputs from all of those tty devices and do the 'mixing' itself. It is important to use actual tty devices here so that `isatty` returns 1 for all of the children outputting to the tty. If you went for a purely software-based approach by simply using `pipe()`s, the programs would know they're not outputting to a tty and would forgo things specific to ttys, like coloring, and also either output warnings or simply not work at all.
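A sketch of that per-process-group approach, assuming Python and its pty module: the child gets a real slave device, so its `isatty(1)` is true, while the parent (standing in for the GUI) buffers the output from the master side:

```python
import os
import pty
import subprocess
import sys

# One pseudo-terminal per process group; the parent plays the GUI's role.
master, slave = pty.openpty()

proc = subprocess.Popen(
    [sys.executable, "-c", "import os; print(os.isatty(1))"],
    stdin=slave, stdout=slave, stderr=slave,
    start_new_session=True,  # its own session, as a terminal emulator would arrange
)
os.close(slave)              # parent keeps only the master end

output = os.read(master, 1024)  # the "GUI" would buffer this stream separately
proc.wait()
os.close(master)
print(output)  # contains b'True': the child really saw a tty, not a pipe
```

Scaling this up means one such master fd per process group, a buffer per master, and the terminal widget deciding which buffer(s) to render.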

My other ideas are some abuse of a custom shell and `SIGTTOU` or `tee`, but I don't think you could get either of those things to work very well.


To add: separating programs' streams would be completely against the Unix idea of building useful programs by combining smaller specialist programs (you implied that by mentioning process groups).


I'm not suggesting changing that -- just that when all those programs eventually output, the shell/TTY store each program's output in a separate stream, rather than just multiplex them all together in a giant pile.


The TTY is just a user-facing device and as such has a single read-write pair.

    - You can read() from it (this is basically what the user types into the keyboard)
    - You can write() to it (this gets translated into what is displayed on the screen)
The TTY has no business separating programs. It doesn't know or care what its input/output is connected to.

What you are looking for is shell redirection.

If you have a problem with redirection in a concrete shell script, get in touch - I can fix it for you :-)


I know about shell redirection, but sometimes I get annoyed by things like:

* I am running a program, decide to 'ctrl+z, bg' it, but it keeps spewing onto my terminal. Instead, just store the output so I can look at it later

* I want to say "take the output of the previous program and put it into a file", without either running it again, or cutting + pasting it into a file.

Nothing serious, I just feel there is "something better" (although it would be a serious change).


Personally I would agree with you. If you don't want to go a hacky route it would require some redesigning, but there's no real reason graphical terminals have to keep acting like dumb terminals when stuff like what you described is easily achievable.

The best approach would probably be to make parts of the tty framework implementable outside of the kernel. If you could do that, then you could skip the kernel completely and do all the redirection and such directly from the graphical terminal by making it essentially implement its own tty framework. Getting what you want would be very easy at that point, because you'd have direct access to the inputs and outputs of all of the various processes and process-groups. So it would just be a matter of keeping them all in separate buffers and keeping track of what you're displaying in various outputs.


> I am running a program, decide to 'ctrl+z, bg' it, but it keeps spewing onto my terminal. Instead, just store it so I can look at later

This exists. It's called "less".

It's not the program which keeps spewing at your terminal. At the point where you Ctrl+z it, the program has already written the content that keeps being spewed. It's just that the terminal is too slow to display the data in the buffer.

> I want to say "take the output of the previous program and put it into a file", without either running it again, or cutting + pasting it into a file.

I want to do that sometimes, too. And then I just re-run it.

It's not a problem of the implementation, I just didn't run it right the first time.

Shells could be implemented to remember the output of your last command-line. There are some questions though: Where should the output be stored (memory / disk usage), what about very big or infinite streams, what about interactive applications... "less" is one such program; it makes a particular set of decisions with respect to these questions.


I wouldn't really say that. The tty deals with process-groups, not single processes. The idea I gave would put process-groups into separate ttys, not split-up the groups themselves.


Yes - I was agreeing with you but my comment was unclear. I wanted to make explicit what you implied by use of "Process groups".


Let's say you're an xterm, terminal.app, iTerm2.app, etc. You open a pseudo terminal which gives you a pair of file descriptors - a master and a slave. The parent process (xterm, et alia) does all reading/writing through the master file descriptor. The child process sets stdin, stdout, and stderr to the slave descriptor and execs bash.

So when bash starts, stdin, stdout, and stderr are the same file descriptor. And those descriptors (the child pseudo terminal) are inherited by any programs launched from bash. (bash can redirect stdin, stdout, and stderr of course)

Backing up a step, xterm (et alia) could open 2 pseudo terminals and use one for stdin/stdout and the second for stderr. And then stderr could be displayed in a separate window, always blinking red, or whatever.

(You could also use a pipe for stderr. It would be slightly less compatible since it's a pipe and not a pseudo terminal. Somewhere, there's probably a piece of code that cares.)
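That whole flow can be condensed with Python's pty.fork(), which bundles the openpty/fork/stdio plumbing described above (echo stands in for bash here just to keep the sketch self-contained):

```python
import os
import pty

pid, master_fd = pty.fork()
if pid == 0:
    # Child: fds 0, 1, 2 are already the slave side of the new pty;
    # a real terminal emulator would exec the user's shell here.
    os.execvp("echo", ["echo", "hello from the slave side"])
else:
    # Parent (the xterm role): everything the child writes arrives
    # on the master fd, already run through the line discipline.
    data = os.read(master_fd, 1024)
    os.waitpid(pid, 0)
    os.close(master_fd)
    print(data)  # b'hello from the slave side\r\n' (ONLCR maps \n to \r\n)
```

The `\r\n` in the output is the line discipline at work: the child wrote a bare `\n`, and output translation on the slave turned it into `\r\n` before it reached the master.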


Are you trying to do mathematica notebooks for the shell? There's no easy way to do this, you have to run each command into a separate pseudo-tty. You can only do this with a combination of a special terminal emulator and a special shell that only runs inside the special terminal emulator. You can't do this and then just use bash.


Oh, I know it would be a serious undertaking (I'd probably try hacking it into fish, because I've looked into its code-base). I was just wondering if it's hypothetically possible, and why no-one has done it :)


I don't know. The reason I haven't done it is that it's too much work, there is no pleasant way to do GUI programming, and I don't have a clear design for the UI.


You can do it with bash and tmux.


Yes. http://tldp.org/LDP/abs/html/io-redirection.html shows how to do it in bash for any given process.


> Beware, though: What you are about to see is not particularly elegant. In fact, the TTY subsystem — while quite functional from a user's point of view — is a twisty little mess of special cases.

I wonder if it is about time someone tried to build something more elegant. If someone was inventing something like the TTY subsystem from scratch today, with no requirements for backward compatibility, what would they do differently?


IBM's approach (3270) was pretty different. More structured, with control blocks, first class support for things like user editable fields, etc.

http://www.tommysprinkle.com/mvs/P3270/start.htm


I'd want to investigate not relying on in-band signalling for terminal control. So, perhaps have two input streams: stdin and stdctrl?

Also, it'd be tempting to try and add more structure, but I honestly think bare streams of bytes are the right abstraction at this level.


Careful: when you have OOB signalling, how do you keep the control codes in sync with the data, bearing in mind that both streams may be buffered differently?


That's a good point. I think ultimately it would still need to be pipelined as if it were in-band. Perhaps synchronization codes with a timestamp/uuid that corresponds to point-in-time against the data.


> Also, it'd be tempting to try and add more structure, but I honestly think bare streams of bytes are the right abstraction at this level.

Most contemporary terminal streams use escape sequences which are based on ECMA-48 with a bunch of de facto standard extensions, so that isn't really a bare stream of bytes. It is an overly complex encoding and ideally would be replaced with something more elegant. (Of course, strictly speaking kernel space largely doesn't deal with ECMA-48 and that is left to user-space libraries such as terminfo/readline/ncurses/etc, but it is part of the overall architecture.) As well as escape sequences, most contemporary terminal streams use UTF-8 as well, so that's another way in which they are no longer bare byte streams.
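To make that concrete, here's a short hypothetical stream showing both layers at once - ECMA-48 escape sequences and UTF-8 - in a handful of bytes:

```python
# A hand-decoded fragment of a "plain" terminal byte stream.
stream = b"\x1b[1;31mwarm\x1b[0m \xc3\x9f"
# \x1b[1;31m : CSI 1;31 m -- ECMA-48 SGR: bold + red foreground
# \x1b[0m    : CSI 0 m    -- SGR: reset attributes
# \xc3\x9f   : UTF-8 for U+00DF (the eszett mentioned upthread)
text = stream.decode("utf-8")
print(repr(text))  # '\x1b[1;31mwarm\x1b[0m ß'
```

So a terminal emulator is always running at least two decoders over the "bare" stream: a UTF-8 decoder and an ECMA-48 state machine, and they have to agree on where each other's bytes end.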

Do we need line disciplines in the kernel? I think the answer is that historically line disciplines were implemented in Unix before shared libraries were. If shared libraries had been implemented earlier, line disciplines could have been implemented in user space as shared libraries instead of in the kernel.


They'd make it a web service in a locked-in walled garden.


If you're interested in the history surrounding the TTY, I just published a newsletter about it on Friday which you can read (without signing up for anything) here: http://www.rubyletter.com/newsletter/2017/02/03/terminal.htm...


This one is a bit of a classic, for the last 8 years (apart from in 2015) it has been on HN at least once a year:

https://hn.algolia.com/?query=TTY%20demystified&sort=byDate&...


Maybe worth another api entry (maybe 'frequent' or 'classic')?


Should add "(2008)" to this.


Why? The only reason to put a date on a title is when the content itself is of historical interest, or may no longer be accurate.


Thanks for the submission, I've missed this previous times it has been posted.

I'm currently in the process of writing my own shell so if anyone has any other good resources you'd recommend then I'd be hugely grateful :) (the simpler the better too hehehe)


Not sure from your question, whether you are only looking for resources on TTYs, or general resources on writing a Unix shell.

If the latter, this book is quite good:

Search for "advanced unix programming marc rochkind" and also see this specific page (it's one of the hits of that search):

http://basepath.com/aup/

I had read it some years ago. (It used to come bundled with HP-UX servers and workstations, and I was working in an HP joint venture, so got to read it and try out some of the stuff.) Thought it was really good. Gave a lot of insights into appropriate use of Unix system calls. An example used in the book is creating a simple shell (covers right ways to fork, exec, dup, set up pipes with the pipe system call, etc.). (IIRC, there's even another one about creating a simple client-server DBMS!) And IIRC, he showed a good bit about writing code portable between different versions of Unix, in a sane way.


Thank you. I think resources on writing a Unix shell is probably a good place to start :)


When writing shells it's possible to invoke standard libraries on both ends of the circuit to handle these communications/protocol issues without having to rewrite them.


I've got file streams working, that's the easy part. However I'm having issues with console applications that detect and require a PTY. Also handling ncurses is causing me a bit of bother.


I stumbled across this a few months ago while implementing basic TTY resizing and ended up learning way more than I needed to. Super impressive, thorough article. :-)


ASCII was invented in 1963, so the author's statement that Telex was ASCII-based is probably wrong. Is the rest of the article also as inaccurate?


According to Wikipedia, it seems that ASCII was first used by AT&T's TWX teleprinter network, that previously used another encoding, ITA2.

The key phrase is "evolved into": a long chain of changes, from Morse code and Murray code through a plethora of other encodings and transmission methods, eventually standardised into ASCII (on top of a modem) in 1963.



