My operating systems professor hosted the whole institute's web site on his OS/2 desktop. That was fifteen years ago.
He was quite cool: he wrote his own simple message board for the lecture, and when students hacked it and posted fake messages under his name, he took it in a sporting spirit. The students reached an informal understanding with him: it's fine to post under his name as long as it's in a different color from his (his name was red) or it's something like "NotProfName".
Hah, what a cool professor! Running an OS/2 webserver and taking students' mischief in stride. Wish I could go back in time and learn from him as you did.
A friend used to run his webservers on Amigas about two decades ago, back when hooking up an unpatched Windows system to a 100 Mbit connection would get it infected before you could even start updating it. "Of course the webserver there is horribly insecure, as there haven't been new releases in years, but it's so obscure that none of the exploits work."
Did that time end? I'm pretty sure it's still a very bad idea to make a system with common, serious vulnerabilities publicly reachable over IPv4, even briefly.
If you install Windows XP from the release CD, attach it to the public internet, and let it sit, I think it will get taken over fairly quickly, unless your ISP filters out the file-sharing ports. But Windows Vista and later don't make services available by default.
It's also very common for ISPs to drop traffic on the Windows file-sharing ports, because it's almost all either malicious or at least unintentional.
Morning standup at a state-sponsored hacking organization…
Bob: A big round of applause to Fred and Jane for setting up that XZ back door! Boy that got us so much intel!
A round of polite clapping.
Bob: What’s your status, Igor?
Igor: Bah, my target is running web server on MS-DOS. I finally managed to hand craft 16 bit 8086 machine code exploit last night (mind you during Hacker News Hug of Death) and gain remote access to A: drive but it turns out secrets are actually hosted on Amiga 2000 on private LAN which I can ping but I don’t know 68k.
Bob: Fortunately we’re a state sponsored hacking organization so we have considerable resources. R.J., do you think you can help Igor?
R.J.: Sure! Igor, do you know if it has an OCS or ECS chipset? …
Nowadays there are armies of bots that will find an insecure internet-connected server within seconds. Security through obscurity isn't much of a thing anymore.
These bots you are talking about are not intelligent; they do not find "insecure" servers to break into. They simply brute-force credentials and exploit known bugs in popular services.
There is no botnet targeting web services running on DOS, because no one is running web services on DOS.
Finding insecure servers, as human hackers do, requires persistence, time, and a working brain.
Bots, instead, throw shit at a wall and see what sticks. Move your SSH server with credentials root:root to port 1234 and notice how many bots get utterly defeated (for the sake of argument only, since OpenSSH presents a banner that makes it easy to identify wherever it runs).
These tend to try the top _n_ exploits on common ports. In fact, a little obscurity wards off the common attacks: I usually move my WordPress admin access to a different port and URL, and that really does stop scripts from trying exploits all day long. (Of course, I make sure everything else is set up for security, too.)
Yeah. I eliminated a persistent bot attack on a webapp in minutes by simply adding a very easy question on user signup (like "what's 1+1?")
Security through obscurity is an overused concept: it doesn't work against determined humans, but on the greater internet, where your adversaries are bots, it is extremely effective.
It even works on determined humans. It's defeatable but dissuades many humans and slows down the rest. It is a useful layer in security. It just can't be the only layer.
Makes my brain tingle in weird ways, reading different snippets of this article. Where it talks about "A:" being the floppy drive, suddenly I'm an early 90s kid again, learning the basics of my first home computer (an Olivetti 286 with MS-DOS and a green screen). Where it talks about proxying SSL with Caddy, fast-forward to being a 2010s web dev.
And I like the screenshot showing "Bad command or file name". I saw that message plenty of times, although now it seems like a lifetime ago!
> That's because DOS webservers can only keep track of 8- or 16-ish concurrent requests
Is that because of the lack of official 'forking'/threading, or due to the limitations of whatever TCP stack is being used for the listen() / accept() calls? (or something else?)
I wrote a web server for CP/M; the issue there was that the C library could only hold the state of a small number of FILE* at a time, because they were allocated statically.
And even if heap allocated, there might be a static pointer array tracking them.
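A minimal sketch of that pattern, with made-up names (not any real library's internals): all the FILE state lives in one fixed static table, so opening a file fails outright once every slot is taken.

    #include <stddef.h>

    #define MAX_OPEN 8                 /* the whole program shares 8 slots */

    struct file_state {                /* stand-in for the library's FILE */
        int in_use;
        /* ... buffer, position, flags ... */
    };

    static struct file_state table[MAX_OPEN];  /* allocated statically, once */

    /* Hypothetical core of fopen(): find a free slot or give up. */
    struct file_state *slot_open(void)
    {
        int i;
        for (i = 0; i < MAX_OPEN; i++) {
            if (!table[i].in_use) {
                table[i].in_use = 1;
                return &table[i];
            }
        }
        return NULL;                   /* table full: no 9th open file */
    }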
In the DOS days, dynamic memory allocation was to be avoided if possible. Machines did not handle running out of memory gracefully. Often we'd unnecessarily fix the size of things to avoid run-time uncertainty.
A scientific Fortran code I once saw allocated all of the memory it would need at the beginning of the code in an array called 'a'. Then from there it would dole out portions to other parts of the program using common blocks.
I suspect what you’re trying to describe was the use of blank COMMON as a dynamic storage pool. The anonymous COMMON block begins at the address after the last named block.
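In C terms, the same idea looks roughly like this (a sketch, names made up): reserve one big static pool up front and dole out pieces of it, so the program never discovers mid-run that memory ran out.

    #include <stddef.h>

    static char pool[64 * 1024];     /* the program's entire working memory */
    static size_t used = 0;

    /* Hypothetical helper: carve the next `bytes` out of the pool. */
    void *dole_out(size_t bytes)
    {
        void *p;
        if (bytes > sizeof pool - used)
            return NULL;             /* pool exhausted: fail predictably */
        p = pool + used;
        used += bytes;
        return p;
    }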
> But I'm not sure if accepted sockets fall into this category.
For the HTTP server in this article, no. It contains an embedded TCP/IP stack (linked into the HTTP server executable), which expects to talk to a network card driver TSR using the "Packet Driver" API (normally INT 0x60, but the interrupt to use is configurable). Network connections completely bypass the file subsystem in the DOS kernel.
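For the curious, a hedged sketch (Turbo/Borland C style, untested) of what talking to a packet driver looks like: everything goes through that software interrupt, and function 1, driver_info(), is the usual "is anyone there?" probe. PKT_INT here is just the conventional default.

    #include <dos.h>
    #include <stdio.h>

    #define PKT_INT 0x60    /* conventional default; real installs may vary */

    int main(void)
    {
        union REGS r;
        struct SREGS s;

        segread(&s);        /* sane segment registers for int86x() */
        r.h.ah = 1;         /* Packet Driver function 1: driver_info() */
        r.h.al = 0xFF;      /* the spec requires AL=0xFF for this call */
        int86x(PKT_INT, &r, &r, &s);

        if (r.x.cflag) {    /* carry set: no driver (error code in DH) */
            printf("no packet driver at INT %02Xh\n", PKT_INT);
            return 1;
        }
        printf("driver class %d, type %d, version %d\n",
               r.h.ch, r.x.dx, r.x.bx);
        return 0;
    }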
Nowadays, the vast majority of people doing TCP/IP on DOS [0] use Packet Driver, and what I just said is true for anyone using that. However, historically there was a huge array of DOS TCP/IP stacks, all implemented differently. It would be (somewhat surprising) news to me if any of them represented individual network connections as files in the DOS kernel, but not having looked at them all, I can't say so confidently.
[0] which is almost all hobbyist/retrocomputing: I'm sure there are a few embedded systems running DOS surviving, but most of those likely don't do networking, and many of the few that do may be running something other than TCP/IP, so any remaining production TCP/IP use under DOS is likely quite rare
I was surprised to find out there is a TCP/IP stack for DOS. I remember still having to depend on non-MS software as late as Win 3.1 to connect to the internet [1], and I had never heard of "LAN Manager" [2], but apparently it did have (some?) support.
Btw, according to its author, "mTCP is a hobby project that I started in 2005." [3]
You could get all kinds of network stacks for MS-DOS: NetWare, 3Com XNS, a weird proto-Microsoft network whose name I can't recall, Banyan Vines, DECnet, weird stuff for IBM Token Ring networks. And yep, TCP/IP as well. When LAN Manager was released you could run it over a choice of networks, TCP/IP included, because the original LAN Manager ran on Unix as well as OS/2 as the server.
A minor annoyance: I prefer to keep my mouse out of the way to one side when I scroll on a website, but if your mouse is outside the narrowest extremes of the columns on this website, scrolling does nothing. Otherwise an interesting writeup.
Turn off the "overflow: hidden;" on the <body> element and the "overflow-y: scroll;" on the <div class="insides"> element, and the whole page will scroll with your mouse anywhere. Of course this loses the "scroll within the small window" effect and instead everything scrolls (except the toolbar, which can be fixed by turning off "position: fixed;" on the <div class="horizontal"> element).
> Your server will be secure because it's obscure. But it's still very likely to become a target for autistic geniuses.
Ah, delving into the abyss of ancient servers, are we? Well, if there's one thing that tickles the fancy of the 'Atypical Geniuses Club,' it's a relic from the digital crypt. Count us in, presently inspecting the code armed with nothing but floppy disks and a dial-up connection!
> Set up a port-forwarding rule that allows the host machine to access the VM's port 80 under "http://localhost:8080". (You can't forward port 80 directly, as it can only be made use of as root. I will also get back to this later.)
You could give the qemu binary the capability to bind low numbered ports as a non-root user:
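    # assuming the binary is qemu-system-i386 at this path; adjust for your setup
    sudo setcap cap_net_bind_service=+ep /usr/bin/qemu-system-i386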
This is so cool. I love the combination of bringing retro OSs into the present day and mixing them with modern technologies like hosting on AWS, etc. I never got the chance to experience writing code for DOS, but would love to try making a simple networked app in Borland or similar one day as a hobby project.
> I assume that QEMU simply tries to read the VM's video memory at fixed intervals. This feature requires applications to run in plain text mode, of course.
Is that how this works? I assumed qemu's BIOS was hooking INT 21h.
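For reference, "reading the VM's video memory" would mean scanning the text-mode buffer: in 80x25 color text mode the screen lives at B800:0000 as character/attribute byte pairs, so anything with a view of guest RAM can reconstruct the display without hooking any interrupt. A guest-side sketch of the layout, in Turbo C style (untested):

    #include <dos.h>
    #include <stdio.h>

    int main(void)
    {
        /* 80x25 color text mode: B800:0000 holds (char, attribute) pairs */
        unsigned char far *vram = (unsigned char far *) MK_FP(0xB800, 0);
        int row, col;

        for (row = 0; row < 25; row++) {
            for (col = 0; col < 80; col++)
                putchar(vram[(row * 80 + col) * 2]);  /* even bytes = chars */
            putchar('\n');
        }
        return 0;
    }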