This article is bringing back memories. Not just of fiddling with CONFIG.SYS and AUTOEXEC.BAT, but of fiddling with the sound card too, as the article mentions in passing. And yes, I did indeed feel a sense of triumph when I finally got Wing Commander: Privateer to run.
As EMS was a kludge that sucked up high memory for its mappings, I eventually had a fancy CONFIG.SYS with different menu options for different memory setups, depending on which game I wanted to play.
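For anyone who never saw one, here's roughly the shape of those menu-driven setups under MS-DOS 6.x (block names and driver paths here are just illustrative, not a real config from any particular machine):

    [menu]
    menuitem=XMS, Most games (XMS only, no EMS)
    menuitem=EMS, Games that want expanded memory
    menudefault=XMS, 10

    [common]
    DEVICE=C:\DOS\HIMEM.SYS
    DOS=HIGH,UMB
    FILES=30

    [XMS]
    DEVICE=C:\DOS\EMM386.EXE NOEMS

    [EMS]
    DEVICE=C:\DOS\EMM386.EXE RAM

AUTOEXEC.BAT could then branch on the %CONFIG% variable to load whatever TSRs matched the chosen block.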
Couldn't agree more! I was like 10 years old in the '90s (before Windows 95 arrived in 1995), when all the good games ran on DOS. I had become an expert at editing config.sys and autoexec.bat to enable EMS or XMS, or to shave a few KB off the 640 KB limit - some games needed more than 590 KB of base memory free even though they were also using extended memory!
And of course you needed to know your sound card's exact configuration (IRQ number and I/O port) and your video card's capabilities and memory size (I started with a Hercules and moved over to a Trident before getting a 3dfx Voodoo).
Yes, running a game then was much more difficult than double-clicking an icon, but it was worth it. The games of that era were great, and you could acquire skills (persistence, attention to detail, working out solutions) useful for the rest of your life - if you were a kid, at least!
We're close to the same age. I remember that I had certain games with awesome boot disk creation utilities, and I knew which disks would help with which games. Although, those were always just the next step after running memmaker again, hoping that it would find just a little extra (but not really understanding what it was doing).
And I remember being a little disappointed when a game only supported the PC Speaker or Adlib, but shrugging, and deciding to be grateful that I didn't have to worry about getting digital audio working too.
I think...maybe Star Trek: Judgment Rites, and some of the later Wing Commander games. Oh, and a demo of A Final Unity that I never actually got to run (to my memory).
wow these comments are bringing back old memories... the one thing I can't recall is how I even figured out how to get started with all this stuff so long before I had internet access. I didn't really know anyone who knew much about computers to get me on the right path. One thing in particular that eluded me for years is that bookstores back then had tons of books that could have taught me programming and CS. Instead I just spent hours staring at code until it clicked.
Sound cards needed an IRQ number, an I/O port base address, and often a DMA channel.
The IRQs were handled by the PIC controllers, usually two, with the secondary piggybacking on the main controller's IRQ #2. This was how the hardware could signal to the software that it wanted attention. The sound driver needed to know which interrupt to listen for or it would never respond to the sound card's requests. It was set up this way because these were physical pins! If you look at the ISA slot pinout there are physical traces for IRQ 3-7. 16-bit ISA slots added pins for IRQ 10-14. A bunch of IRQs were permanently assigned from the old IBM PC days, which is why turning off COM ports in the BIOS would free up the IRQ for use by other peripherals. I seem to remember having a card at one point that was 8-bit, but had a little extension off the end with a couple of traces for 16-bit slot support only so it could support higher IRQs; the manual told you not to use IRQs > 9 if plugged into an 8-bit slot.
The I/O port base address is directly related to the article: it selected where in the I/O address space the device was mapped. Writes to that address would cause the device to latch onto the bus and start receiving data. Anything not in the appropriate range was ignored, even though all devices technically received the signal electrically. If the driver had the wrong address it would either write to an address nothing listened on (there was no MMU, so the data words just went by on the bus without any takers), or worse: the write went to a different device that might ignore what it considered malformed data, but in many cases would do random things, scribble on memory, or just blast the bus and cause a lockup. I/O base addresses were pre-assigned to peripherals like COM and LPT ports too, but they were not in such high demand compared to IRQs or DMA channels; still, in some BIOSes you could change the COM port's base address, which would make it show up as COM3 instead of COM1, for example.
The DMA channel was exactly what it sounds like: the system had a limited number of direct channels to the memory controller. Often the protocol for a sound driver was to write a "Read audio from address XYZ, length X" to the IO port. The sound card would use the DMA channel to read the audio from that address. Then it would signal an interrupt when it finished so the driver could submit the next batch of samples.
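To make that handshake concrete, here's a rough sketch of the classic Sound Blaster 8-bit playback sequence (my assumptions: base port 220h, DMA channel 1, an IRQ on the first PIC, and Borland-style port I/O from <dos.h>; error handling and the 64 KB DMA boundary rule are left out):

    /* Sketch only: program the 8237 DMA controller with the buffer,
     * tell the DSP "play N bytes via DMA", then handle the IRQ. */
    #include <dos.h>

    #define SB_BASE   0x220
    #define DSP_WRITE (SB_BASE + 0xC)
    #define DSP_ACK   (SB_BASE + 0xE)   /* reading this acks the 8-bit IRQ */

    static void dsp_write(unsigned char v)
    {
        while (inportb(DSP_WRITE) & 0x80)   /* wait until the DSP is ready */
            ;
        outportb(DSP_WRITE, v);
    }

    /* 1. Program the 8237: where the buffer is and how long it is. */
    static void dma_setup(unsigned long phys, unsigned len)
    {
        outportb(0x0A, 0x05);                    /* mask channel 1          */
        outportb(0x0C, 0x00);                    /* clear the flip-flop     */
        outportb(0x0B, 0x49);                    /* single mode, read, ch 1 */
        outportb(0x02, phys & 0xFF);             /* address, low then high  */
        outportb(0x02, (phys >> 8) & 0xFF);
        outportb(0x83, (phys >> 16) & 0xFF);     /* page register for ch 1  */
        outportb(0x03, (len - 1) & 0xFF);        /* count, low then high    */
        outportb(0x03, ((len - 1) >> 8) & 0xFF);
        outportb(0x0A, 0x01);                    /* unmask channel 1        */
    }

    /* 2. The "read audio, length X" command written to the I/O port. */
    static void dsp_start(unsigned len)
    {
        dsp_write(0x40); dsp_write(211);         /* time constant, ~22 kHz  */
        dsp_write(0x14);                         /* 8-bit single-cycle DMA  */
        dsp_write((len - 1) & 0xFF);
        dsp_write(((len - 1) >> 8) & 0xFF);
    }

    /* 3. The card raises its IRQ when the transfer finishes; the handler
     *    acks it and would queue the next block of samples. */
    void interrupt sb_irq_handler(void)
    {
        (void)inportb(DSP_ACK);    /* acknowledge the card  */
        outportb(0x20, 0x20);      /* EOI to the master PIC */
    }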
You usually configured ISA cards by setting jumpers on the card. The available jumper settings were usually much narrower than the hardware supported so conflicts were more common.
PCI still used the physical IRQ system but had a protocol for devices to negotiate available settings with BIOS on startup.
PCI express uses a higher-level protocol abstraction. An interrupt is just a special message on the bus and doesn't have any dedicated pins.
Some later ISA cards also supported "negotiation", iirc. It was marketed as plug and play, but often referred to as plug and pray (because sometimes the result would be less than useful). Also made DOS drivers downright massive!
I remember those 'Plug and Play' ISA cards. The PnP bit never worked; I eventually hard-set the IRQ, I/O port and DMA channel directly to get sound working, and that was that. At least, until the Windows 95 install was messed up enough that it was time to reinstall again.
See? There are people who actually know and fully understood all this stuff and then there are guys like me who just played with the numbers until it worked. :) I applaud you
And sometimes it feels like this is still the way to fix MS products, rote memorization of magical incantations.
Right now I am trying to figure out what makes a Windows 10 tablet I own lose the contents of dropdown menus. All I get is the "drop shadow" frame around them, if that.
The funny thing is that if I fiddle with the screen resolution, it sorts itself out for a while.
Oh man, IRQ and DMA numbers, there's another set of optimizations for your boot disk. Some games would require your sound card to be on a certain IRQ, or only give a limited number of options. Of course, your other devices like your mouse, CD drive, modem or network card need interrupts too, leading to the dreaded IRQ Conflict error message.
DMA channels were even more limited but thankfully fewer devices needed them and programs tended to be less picky about which ones you chose.
We now use the APIC. The first APICs had only 16 interrupts, but with the introduction of the 82093AA in 1996 they got 24 interrupts, separating the ISA and PCI ones completely.
There's a great joke about this in Stephenson's Cryptonomicon
I remember getting a borrowed 19" Sony CRT running at some insane resolution for the time, probably 1280x1024, and hearing that worrying thunk changing modes, then reading this book and reconsidering the wisdom of my custom modelines..
> One night at 3 a.m. Pekka caused this to happen, and
> immediately after the screen went black and made that
> clunking noise, it exploded in his face. The front of the
> picture tube was made of heavy glass (it had to be, to
> withstand the internal vacuum) which fragmented and spread
> into Pekka's face, neck and upper body. The very same
> phosphors that had been glowing beneath the sweeping
> electron beam, moments before, were now physically
> embedded in his flesh.
Might be true, but at least I have the untimely demise of a CRT screen on my conscience.
Got qbasic, some assembler, and a list of interrupt vectors from somewhere. Messing with int 10h, and activating each individual video mode to see what happens. Somewhere halfway down the list I find 'character generator video ram access' or something. I remember thinking it sounded really cool. Anyway, activate!
Today I suppose what happened was that the horizontal and vertical retrace got switched off, and the electron beam concentrated on the center of the screen instead of sweeping everywhere. Or maybe not, you tell me.
I do know what happened next: the screen starts to make actual noise! Something wobbling, starting silent and getting louder, straight out of some third-rate sci-fi movie: wowowowowowoewoeoeWOEWOE. I can only guess it took a few seconds, but I had heard all kinds of exploding-monitor horror stories, so I was scared as hell. A few seconds later I unstiffen and pull the power plug; not the best course of action, but at least the fastest.
Repower, and there is a huge purple blotch on screen. Oh dear. After a few seconds it shrinks away to the center and disappears. For the next month, this keeps happening every time the screen gets powered on.
I think it was two months later when my dad, sick of the display vagueness that had suddenly started, finally replaced it. ('Did you do anything?' - 'Who, me?', putting on the 'I know nothing, I'm from Barcelona' face.) No idea if my parents ever knew what happened (it's about 25 years ago at this point).
A friend of mine says every competent engineer needs to destroy at least one piece of expensive hardware. So I hope I'm competent; at the very least I learned a healthy respect for hardware that day.
I have an old, slightly blurry CRT here that seems to have a broken vertical sync line. Last time I turned it on it exhibited the same effect (a gigantic, intense horizontal line in the middle). I knew that many electrons were going to wear out the phosphors, so I reacted quickly too (it didn't make any noise though) - in my case I found that some mild percussive maintenance restored everything.
Oh man, calculating modelines was some serious voodoo back in the day. Good luck if you didn't have the manual to look up your monitor's timings. Compounding the whole thing was the serious fear that you could actually damage your monitor if you got the settings too far off. Ah, the good old days...
I had per-game profiles that would reboot straight into the game, and then back again into the previous profile on reboot. I've been slowly learning that this kind of thing was a rite of passage back then. But for some of us it was our first exposure to "serious" system scripting.
When I went to college I bought an honest-to-god IBM PC because I didn't know any better. One of the things that IBM did was clone MS-DOS with a product they called PC-DOS. It turns out that PC-DOS was ever so slightly better at managing its memory, and I had a pretty easy time getting pretty much any game to run. I guess spending an extra $500 on the machine has its perks. :/
IBM still bundled it with their Pentium-equipped machines. Mine was a P75 box with 16 MB of memory, and it came with PC-DOS. It included an ANSI-GUI application for figuring out what to stuff into high memory, and its own version of HIMEM.SYS. IIRC you could get 610 KB of conventional memory free even with the CD-ROM, mouse, and SoundBlaster drivers installed.
That ANSI program sounds oddly familiar, but I never owned an honest IBM PC. I started out with a second-hand 486-based clone (complete with MS-DOS and Win3.11 on floppies), after having been an Amiga 500 kid for a number of years.
And the original PC from 1981. I think strictly speaking PC-DOS came first and generic MS-DOS came second. MS developed the OS for IBM and their biggest coup was negotiating the right to still own the IP and sell it to others as well.
Really sorry about the dupe - The lesson, as always, don't use the phone for Hacker News. Didn't notice until it was too late to delete. I wonder why they aren't automatically deleted?
If one ever wondered why the original PCs were viewed as toys by professionals in the field, this article is a good explanation as to why. Imagine you go from, say, an IBM 370 mainframe which has had virtual memory, VMs, no-execute flags, and considerably more than 640K back in the 60s to...this morass just to use the memory you paid for (and pay you did, back then). Granted, PCs didn't cost a couple of million with a maintenance agreement, but that didn't make them any more useful for running your nightly batch jobs.
As I learn about the mainframes of yesterday, it feels like all the hoopla of _nix on x86 these days is basically rediscovering all those things mainframes did back then...
Well, an original PC cost, what, $5K back in the day? A new 3081 370-compatible from the early 80s was multiple millions (can't find hard prices quickly, so guessing). I'll bet the CPU in the mainframe alone cost multiples of an entire XT. Mainframe engineers had a little more breathing room when making their design decisions.
So we can look at it one of two ways: those kids today, all excited about VMs and stuff we had in the 60s. Or, holy crap, I can do stuff on my phone that used to take a multi-million dollar mainframe.
Indeed. What I find most "puzzling" is the lack of acknowledgement of this in the business. It's like it is a whole new, almost magical thing that the devs dreamed up.
I wonder if it has to do with how the microcomputer world had to almost bootstrap itself without any input from the mainframe people, because the latter considered the micros little more than toys.
And this continues into the present, as a large segment of the business is self-taught. And by now even going to university will not expose you to mainframes, as they have largely been abandoned in favor of clusters.
It's at least as much to do with the industry's youth fetish; ideas that were common on mainframes are going to seem new and groundbreaking to you if you never work with anyone old enough to have used a mainframe.
The tech industry has no mechanisms for developing institutional memory, so we are constantly reinventing old wheels instead of developing new ones.
No, 30-somethings had those too, and Mountain Dew and what-not. The difference is that people in their 30s might have already spent their 20s doing "death sprints" and are sick of it.
The people who actually develop hypervisors and the hardware to support virtualisation would read the journal articles and textbooks written by the people who developed hypervisors and such.
People who developed NoSQL databases presumably looked at things like IMS when writing new systems, too.
There's just mostly a disconnect between the people who develop applications and otherwise work on PCs, and those who work on mainframes.
At places where I've worked that had both PCs and mainframes there was default segregation between mainframe developers and maintainers and PC developers and users. Of a mainframe team of 20-30 people there seemed to be about 5 who were PC devs as well.
The 640K thing precipitated the first program I ever wrote. If you can call it that.
I had a bunch of games I wanted to play in the early/mid 1990s, including Wing Commander. Each one seemed to require its own config.sys/autoexec.bat, so I wrote a menu type thing to choose the game and do the appropriate setup.
The experience made me think about what else I could do by joining a few commands together. I wish the internet had been available to me then, because I would have made much faster progress if something like stackoverflow had existed.
Couldn't agree more. I try not to think about it, but if I'd had the resources available to me back then that I have now, I would be orders of magnitude better.
I built my first computer about 17 years ago, but dial-up was still a thing and I was discouraged from spending time on it, so I just did HTML in the 30 minutes per day I was allowed. If I'd found a REPL back then, I probably wouldn't feel so inadequate when discussing concepts with my friends at the big 4! It'll work out fine, but later than planned; the spectre of unemployability will start looming in a decade or so, which certainly motivates me!
It feels kind of embarrassing to admit that despite years of tweaking this stuff to get games to run as a kid, I never actually bothered to find out what the difference between EMS and XMS was. Or even know the basics of how they both worked. "It's just extra memory, right?".
It's bizarre that the computer magazines I read (and obsessively reread) would not have run an article like this.
As I recall: you could only access XMS memory by copying. So you had to _copy_ a memory block from above the 1MB barrier to the lower 1MB addressable space to read, and copy back up again to write.
EMS was pageable. You could have up to four 16 KB pages below 1 MB that were fully accessible, but you had to select which address above 1 MB each of those four pages pointed to.
So in short, XMS was inefficient, because you had to copy, and EMS was more efficient because you simply decided which part of the extended memory you wanted to map to lower memory.
I think EMS was emulated in real mode on the 80386 chip by using its virtual 8086 mode. This was not possible on the 80286, so the 80286 could only use XMS in real mode.
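For the curious, the EMS side of that looked roughly like this through the LIM INT 67h interface (a sketch using Borland-style int86() and MK_FP() from <dos.h>; error checks omitted):

    #include <dos.h>

    int main(void)
    {
        union REGS r;
        unsigned frame_seg, handle;
        char far *page;

        r.h.ah = 0x41;                 /* get the page-frame segment        */
        int86(0x67, &r, &r);
        frame_seg = r.x.bx;

        r.h.ah = 0x43;                 /* allocate four logical 16 KB pages */
        r.x.bx = 4;
        int86(0x67, &r, &r);
        handle = r.x.dx;

        r.h.ah = 0x44;                 /* map logical page 2 of our handle  */
        r.h.al = 0;                    /* ...into physical page 0 of frame  */
        r.x.bx = 2;
        r.x.dx = handle;
        int86(0x67, &r, &r);

        /* That 16 KB is now ordinary addressable memory below 1 MB. */
        page = (char far *)MK_FP(frame_seg, 0);
        page[0] = 42;

        r.h.ah = 0x45;                 /* release the handle when done      */
        r.x.dx = handle;
        int86(0x67, &r, &r);
        return 0;
    }

XMS went through a driver entry point instead (obtained via INT 2Fh, AX=4310h), and its "move extended memory block" call is what did the copying described above.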
I briefly developed using both EMS and XMS, and as I recall, XMS was somehow less painful to get up and running and generally work with, but the performance-conscious part of me obviously favored EMS's paging mechanism.
Fortunately, 32-bit memory came along quickly both in the form of Windows and DOS protected mode.
EMS was originally for the 286. It wasn't emulated by the CPU the way it could be on the 386. You bought an add-in board that contained the extra memory, handled memory requests on the bus for the bank-switched area, and accepted commands to change what the bank-switched areas pointed to.
When the 386 came out, it was common to have more than 1MB on the motherboard, but the motherboard (typically?) didn't natively emulate the EMS bank switching scheme, probably because at about that time, everyone figured that people were going to stop using DOS and switch to OS/2, and be running 32 bit code that could natively address higher than 1MB.
When that didn't happen right away, Quarterdeck came out with QEMM/386, which used the Virtual 8086 features of the 386 to emulate EMS in software. Microsoft then shipped a not quite as good clone of QEMM as EMM386 with DOS 5.0.
On the other hand, I seem to recall EMS in virtual-8086 mode being criticized because that mode caused a lot of very slow traps for the emulation. Not sure if true, but if the emulation was inefficient then I could see XMS being faster even if it meant copying memory around.
> It's bizarre that the computer magazines I read (and obsessively reread) would not have run an article like this.
The "Megamemory Explained" graphic used in this post comes from the Jan. 14, 1986 issue of PC Magazine, part of an article called "Enlarging the Dimensions of Memory." It doesn't cover XMS as the 386 was so new as to be effectively nonexistent at the time, but otherwise seems to be a pretty thorough explanation.
And here's another article from 1993 in PC Magazine which goes into detail, and with a bit more accuracy than the original article for this HN discussion (for example, it explains how XMS works on the 286): https://books.google.com/books?id=gCfzPMoPJWgC&pg=PA302&lpg=...
I attended a talk by Bill Gates several years back, where he stated that when IBM was developing the PC, he tried to convince them to base it on the MC68000, and that their decision to go with the 8088 set personal computing back a decade. As the article points out, though, there were a ton of 68000 machines that ultimately got left in the dust by the PC, so I wonder how it would have really turned out.
The 68k had its own list of dumb architectural mistakes, FWIW. Registers were 32 bits wide but addresses were only 24 bits -- the CPU just ignored the high 8 bits, which led to programmers trying to cheat and stuff things like tag bits into pointers, which then broke on the 68020 when it expanded the address space.
The hardware bus was async (!!!), meaning you had to arrange correct clock synchronization for every one of your peripherals independently. (c.f. "DTACK grounded", the classic 'zine of the 68k hobbyist).
The exception handling was insanely complicated, and even then they got it wrong in the first version: faulting instructions couldn't be restarted, making the use of an MMU impossible even in principle. They fixed this (via dumping even more undocumented complexity onto the stack) in the 68010, which was essentially a "patch release" to the part that was needed by the Unix workstation community.
It's true that starting the PC with a 32-bit flat address space would have saved headaches in the late 80's, and it's likely that the feature set in Win95 would have arrived a few years earlier. But on balance the Mac and Unix worlds had no less grief moving off of the 68k.
It's still happening today. When AMD64 was introduced, it ignored the upper 16 bits in the address. JIT compilers take advantage of that fact and use that space to store information. Now that we're bumping up against the 48-bit limit, special handling will be needed to ensure applications that assume the upper bits of the address are unused don't get allocated memory with the high bits set.
No, AMD64 does not ignore the upper 16 bits. They all need to equal bit 47, otherwise the hardware raises a general protection fault (as opposed to a page fault for accesses to unmapped pages).
The problems mentioned in that article result from software that thinks this limit is set in stone. Such software encodes data in these upper bits, and then later masks it away before accessing the pointer.
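A tiny illustration of the tagging pattern being described (hypothetical code, assuming the OS only ever hands out addresses within the 48-bit canonical range):

    #include <stdint.h>

    /* Stash a 16-bit tag in the "unused" top bits of an x86-64 pointer.
     * This only works while every address fits in 48 bits -- exactly the
     * assumption that breaks once larger virtual address spaces appear. */
    static uint64_t tag_ptr(void *p, uint16_t tag)
    {
        return ((uint64_t)(uintptr_t)p & 0x0000FFFFFFFFFFFFull)
             | ((uint64_t)tag << 48);
    }

    static void *untag_ptr(uint64_t tagged)
    {
        /* Sign-extend bit 47 so the result is canonical again before use. */
        return (void *)(intptr_t)((int64_t)(tagged << 16) >> 16);
    }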
That was a wraparound condition, sort of the opposite of ignored bits.
Memory addressing on the 8086 inherently adds two registers every time: the pointer and a segment register (which points to a 64 KB region of the 1 MB address space with a granularity of 16 bytes). The problem is what to do when the segment plus the address overflows. On the 20-bit original implementation, you just got the rolled-over address. (What you should have gotten was a fault of some kind, of course.) On the 286, the same addition would give a valid address in the second megabyte of memory, which isn't the same memory. So the PC/AT (not the CPU) had a compatibility switch wired in that would force the 21st memory address line low in real mode, for compatibility with apps that actually relied on the early behavior.
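The arithmetic is easy to show; this little sketch just prints what FFFF:0010 turns into with and without that 21st address line:

    #include <stdio.h>

    int main(void)
    {
        unsigned long seg = 0xFFFF, off = 0x0010;
        unsigned long linear  = (seg << 4) + off;     /* 0x100000, needs 21 bits */
        unsigned long wrapped = linear & 0xFFFFFul;   /* a 20-bit bus wraps to 0 */

        printf("FFFF:0010 -> %06lX, but an 8088 sees %05lX\n", linear, wrapped);
        return 0;
    }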
And in typical PC hardware fashion, switching to/from this A20 compatibility mode was handled by the keyboard controller. What in the world? Turns out that the 8042 chip that handled the keyboard had an unused pin which could be "borrowed" to implement the "A20 gate" latch...
That's actually not so weird. The 8042 simply had an unused GPIO. The other option would have been to put a whole peripheral on the bus for this purpose, to the tune of like 3 chips and $40 of retail price.
The point was just to have a compatibility hack for old apps, not to pollute the architecture going forward. The reason that pollution happened was actually that the PC clone makers incompletely and incorrectly copied this feature, so later OSes never knew a priori whether or not it would be enabled by the BIOS, and they all had to have the same 10-20 instruction sequence to turn it off. And that meant that future PCs needed the same 8042-looking device on the bus to handle that request.
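For reference, that short sequence is the well-known keyboard-controller dance. Sketched in C with Borland-style port I/O it looks something like this (real code also needs timeouts, retries, and usually tries other methods first):

    #include <dos.h>

    static void wait_8042_ready(void)
    {
        while (inportb(0x64) & 0x02)    /* status bit 1: input buffer full */
            ;
    }

    void enable_a20(void)
    {
        wait_8042_ready();
        outportb(0x64, 0xD1);    /* command: write the 8042 output port */
        wait_8042_ready();
        outportb(0x60, 0xDF);    /* value with the A20 gate bit set     */
        wait_8042_ready();
    }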
As I recall (although it is only a recollection, so if someone's got better evidence than my unreliable memory I stand corrected), what drove the triumph of the PC was not that it contained an 8088, but the software available for it and the ease with which third parties could make compatible hardware (and indeed, clone the whole thing).
I suspect (but cannot prove) that both of those could still have happened if they'd gone with the MC68000.
I'd argue it was the 'open' hardware platform more than anything. Take a look at the other personal computers of the time, within an order (or two) of magnitude in price up or down, like the Mac, Amiga, Atari, SUN, HP(UX), and SGI. They all had great(er) software and hardware and more advanced OSes and features, but got trumped quickly. On the software side, the PC had business productivity from the start (Lotus 1-2-3, namely), which could be used as an argument too.
To cite "Triumph of the Nerds" again. There were several things that came together to make it possible. IBM's support for it was pretty essential, their bureaucracy being incompatible with a fast changing market leading to them outsource all of the components, and the simplicity of the system making it easy to reverse engineer.
Should have been the Z-8000! I would have liked to have seen Zilog/Motorola be the battle rather than Intel/Motorola. (But maybe that's just my TRS-80 bias showing...)
Well, maybe that was because they had a lot more money to throw at R&D? Not saying either way, but it looks like this was the opportunity that really pushed Intel into the big league. To their credit, they used it wisely and kept making better chips.
It's more that Intel had a CEO who understood what it took to win in the CPU business, while Motorola ended up with a CEO who printed out his email and banned Macs from the company - at a time when Motorola was one of the few licensees making Apple Mac clones and Apple was one of their biggest buyers of CPUs.
Intel's chips continued to advance after those designed by the AIM Alliance (Apple IBM Motorola) hit their ceiling, so I'm not sure it was just R&D budget.
I was impressed by the performance of the PowerPC 970 but much sooner than expected, Intel's (and AMD's) lineup made it irrelevant.
As others have noted, Intel was flush with cash and mostly drove on top of that. Not to belittle Intel though. Motorola would do well later, as evidenced by the 88k and then PowerPC with Apple and IBM... but at that point in time it was already too little, too late.
I seem to recall reading that the compatibility between 68k variants was less than stellar. Not sure if it was Filfre, in relation to the Amiga, or elsewhere.
IIRC, a big part of the reason for choosing the 8088 was that Intel successfully generated some FUD about Motorola's ability to manufacture the 68000 in the quantity needed for the PC.
IBM had a second-source requirement in procurement; Intel was willing to license (or already had licensed) the 8088 to be manufactured by someone else, and Motorola was not.
And in "Triumph of the Nerds" Ballmer said working on OS/2 set them back 10 years too. So, I guess with these two setbacks together, the PC became un-invented.
This article is a bit inaccurate about XMS. XMS works just fine on the 286; see the original spec [1]. This is done purely in real mode on the 286 with the A20 line [2], which allows addressing a 64 KB area just above 1024 KB, in combination with the undocumented LOADALL instruction, which allows addressing even beyond that [3].
The article mentions confusion. That's an understatement. Information was much harder to come by at the time; you had to rely on articles in computer magazines. Also, many bugs existed in memory drivers, not to mention interoperability problems, due to programmers not understanding the issues.
This brings back memories, but not particularly good ones.
The original mistake was made by the IBM engineers who came up with that memory map. They reserved a few bytes at the beginning and then a bunch of bytes at the end. That really makes no sense. They could have avoided all the pain that followed by putting all the reserved space in a contiguous block at the beginning. Maybe they thought it was easier for application programmers to use smaller address numbers? Or maybe it was due to some hardware shortcut, which wouldn't be surprising considering they had already taken the shortcut of using the 8088 instead of the 8086. This kind of story is repeated whenever temporary hacks turn into massive successes, for example JavaScript's limitations that still haunt us today. If you dig down into these histories you almost always find that it's not the engineers' fault but the shortsighted managers'.
The 8088 CPU requires that its reset vector be placed at the top of its 1MB address space [1]. So even if the IBM engineers had wanted to put all the reserved space at the bottom, there still would have been a tiny "reserved" hole right at the 1MB barrier due to the design of the chip.
So with the requirement of the physical CPU hardware for a small reserved space at the top of the (at the time) maximum addressable range resulting in a non-linear memory map anyway, putting other reserved stuff (ROM, I/O memory, etc.) up there was not so much of a bad idea at the time. They clustered the "ROM stuff" up there along with the required reset vector.
Also, remember, hindsight is 20/20. The IBM engineers in Boca Raton, in 1979 to 1981 [2], designing what became the IBM PC, likely had no idea just how big an industry-wide impact their choices were going to have.
I thought it was because it's easier to decode a few upper address lines on the bus so each peripheral is cheaper and easier to build. If you put the hardware addresses in the lower address space you'd have to decode the entire address to see if the access is on your card.
I think the real problem was that people didn't want to let go of DOS. OS/2 1.x was a fine OS, especially from 1.2 onward and it ran great on a 286.
"If you put the hardware addresses in the lower address space you'd have to decode the entire address to see if the access is on your card."
That is incorrect. If your card has a block of 2^n bytes of memory, you need to check the (in this case) 20-n uppermost bits of an address to decide whether it is in 'your' address range, whatever block your card is assigned to. Checking whether that is all zeroes isn't easier than checking whether it is, say, 42.
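As an illustrative sketch (software standing in for a hardware comparator): deciding whether an address falls in a card's 2^n-byte window is a single equality test on those top 20-n bits, and testing against zero is no cheaper than testing against any other value.

    /* Does a 20-bit ISA address fall inside the 2^n-byte window at `base`? */
    int card_selected(unsigned long addr, unsigned long base, unsigned n)
    {
        return (addr >> n) == (base >> n);   /* compare the top 20-n bits */
    }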
And I don't think people were attached to DOS; they were attached to their (expensive!) programs and those required DOS.
OS/2 tried to support those programs, but wasn't perfect, and even if it did, it didn't give those programs more memory.
The real flaw was not where they put it, but that it was immobile, the whole block was "reserved".
If they made a few tweaks that allowed peripherals to slot into different spots using some kind of negotiation over the ISA bus this could have been avoided, but that was also way too sophisticated for what was a quick prototype.
OS/2 may have been fine in the abstract, but set in the context of the time, it left a lot to be desired. Not only did it require a bunch of extra (expensive) memory, it also required the purchase of new software to take advantage of that memory. (Otherwise it was just a hobbled version of DOS via the compat. box.)
Combine that with the $3K SDK cost and the deliberately incompatible PM, there were good reasons it was a hard sell.
One of the big failures of OS/2 was that IBM never wanted it to run on just any PC clone. They prioritized doing something similar, to a lesser degree, to what Apple does with OS X. In other words, there were few drivers for hardware that wasn't in a PS/2 computer. Add to the mix the not-cheap SDK...
Microsoft was responsible for licensing it to other OEMs back in OS/2 1.x. Of course, this was before MS turned the OS/2 2.0 project into an entire fiasco that is now one of my favorite topics.
I remember that I was very effective at tuning config.sys and autoexec.bat to grab the maximum conventional RAM. If my memory doesn't fail me, I think I managed to get ~634-636 KiB on my first PC (an AMD 386@40 with 4 MiB of RAM).
I would usually launch memmaker and then fine-tune config.sys & autoexec.bat. country.sys? doskey.sys? ansi.sys? Remove it! I need moar free conventional RAM!
I designed a game engine for a few games, implemented it on Windows 3.1, and assisted with its DOS implementation. The code running on this engine built equally conveniently for Windows or for DOS due to the design of the engine. The publisher wanted the games to launch and run decently if "mem/c" indicated 500K. This was what was typically available in a 1Meg machine if it had things like Netware installed. We achieved this in part with commercial source-licensed libraries and in part with Borland's VROOM overlay technology.
Microsoft Mouse (software release 9.00) from 1993 was a coveted driver back in its day. If I remember correctly, it only took 2K of conventional memory, with the rest shunted into upper memory. After discovering that on a BBS I rarely needed an alternative config.sys/autoexec.bat.
It should be noted that some level of multitasking was achieved with so-called TSRs (terminate and stay resident programs). My preferred utility of course was Sidekick.
Compiling helloworld.c with one floppy and no hard drive on the original PC required one disk for each of the following:
(1) source code, (2) cpp.exe, (3) cc1.exe, (4) cc2.exe, (5) the lib manager (can't remember the name - something like "marion"?), and (6) the linker -- I think it was 8 floppy swaps total. It blows my mind that my first HD was 5 MB and cost $800.
Also missing was the programming side of things: LocalAlloc vs. GlobalAlloc, low-mem HWNDs, etc. - ugh - don't miss that...
I realized yesterday that for years I've been mentally measuring data by how many Amiga 3.5 inch disks it would take to store. Noticed this when I was patting myself on the back for getting the build output of the tool I'm working on down below ~60 disks.
At the same time the system felt much more controllable and understandable from a user POV. And I sometimes feel that Linux is going through a similar transition, at least on the user space level.
A lot of memories pop into my mind with this article, including unrelated stuff like my first try of Linux, kernel 1.2.8, and the "how do I set up the CD-ROM drive attached to the sound card?" question.
To get back to the subject, I remember having used a "driver" to free up some additional memory, but I can't remember its name or how it worked. Maybe I found it reading Imphobia? The name may start with an R.
Are there technical reasons why Microsoft didn't release a Protected Mode version of MS-DOS in the 80s, or was it just a case of trying to push users and developers to OS/2 and Windows?
It seems like such a no-brainer, especially since MS-DOS provided so few services to applications anyway. You have MS-DOS 8 and MS-DOS 16, and you let the user choose which version to boot into, or you automatically kick over to MS-DOS 16 when a Protected Mode app is launched. It seems a lot saner than forcing users to muck around in config.sys.
That people were forced to deal with the memory limitations of the 8088 PC well into the 90s is just bananas.
It's not quite that simple. DOS is not like what we think of as an OS today; more like some libraries that sit in RAM at known addresses plus a loader barely smart enough to transfer data from disk to RAM and start execution. All the intelligence is in the application program itself. And those programs were still stuck with 20-bit pointers.
It wasn't just a question of rewriting the OS, it was also a question of rewriting all the application software. Which eventually happened.
Very interesting article. I hadn't realized that Microsoft's strategy of developing their market not on the merits of their product but through lock-in via business programs (and also games and such) went back so far.
Also it's great to finally learn what "UMB" stands for in "DOS=HIGH,UMB". Never felt right to add that option without knowing its meaning :-).
Awesome article! Best read in a while. I wish it would continue to the state of things the next years, with DOS4GW, and Windows 95 and 98 still running on top of DOS. The article stops too early :)
Shout out to Windows Real Mode, which had a built-in virtual memory manager that swapped out unused memory to disk. All of that required apps to participate, of course, but it could multi-task in the lower 640K.
Astonishingly, Real Mode still exists even on 64-bit Intel chips, and is still the default mode. Booting a modern OS involves a whole stack of bootstraps to jump from Real Mode to Protected Mode to Long Mode.
Real mode is finally becoming a vestigial feature; some modern EFIs have a tiny assembly stub that immediately enables protected mode; from there EFI, the boot loader, and the OS can all be 32/64 protected mode programs.
That's one of the things I feel FOSS is largely underestimating, the amount of old code running in offices around the world. You can still run programs from the Windows 1.x-3.x era (win16) if you have a 32 bit Windows 10 install. You apparently need the 32 bit version because 64 and 16 bit modes are mutually exclusive (or some such).
It's because Virtual 8086 mode is unavailable in long mode. So either you would have to switch the CPU in and out of long mode (which probably has a performance penalty and complicates the code), or use the more general-purpose virtualization support (which may not be available), or just emulate everything. It's not surprising that Microsoft decided it wasn't worth it.
So much time spent optimizing my system to provide as much "conventional memory" as possible so I could run a performant BBS in 640K, with all the drivers and such. It's hard to imagine now.
Intel could have allowed more memory in the real mode, instead of providing that only in protected mode incompatible with the existing software ecosystem.
If I understand the article correctly, DOS extenders weren't feasible until the 386, because the 286 couldn't switch out of protected mode and back to real mode except by doing a full reset.
We're talking protected mode. The segment register would be an index into a table of segments of memory and the segments would be implemented in your RAM, and all of them would be accessible "at once" modulo the fact that there were only a few segment registers. The DOS extender could hack DOS so that you could in some fashion call DOS through "INT 21h" and it would do something (emulation, call through glue code into real mode etc). The call might require a transition into and out of real mode, which would not be cheap. Sometimes the only way to call DOS was to use "low" memory that was directly accessible from real mode, and you'd need to use special routines instead of malloc/free to manage that memory. You'd still have to call DOS through a hack but the memory arguments of the call would not need to be rearranged within the extender.
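That "keep a buffer in low memory and bounce through real mode" pattern is what DPMI later standardized. As a rough sketch of what it looks like from a 32-bit program (using DJGPP's DPMI wrappers; return-value checks omitted):

    #include <dpmi.h>
    #include <sys/movedata.h>
    #include <string.h>

    void dos_print(const char *msg)
    {
        __dpmi_regs r;
        int selector;
        int seg = __dpmi_allocate_dos_memory((strlen(msg) + 1 + 15) / 16,
                                             &selector);

        /* Copy the string from our flat 32-bit space down below 1 MB,
           because real-mode DOS can only see conventional memory. */
        dosmemput(msg, strlen(msg), (unsigned long)seg * 16);
        dosmemput("$", 1, (unsigned long)seg * 16 + strlen(msg));

        memset(&r, 0, sizeof r);
        r.h.ah = 0x09;                /* DOS: print '$'-terminated string */
        r.x.ds = seg;                 /* real-mode segment of our buffer  */
        r.x.dx = 0;
        __dpmi_int(0x21, &r);         /* host drops to real mode, runs    */
                                      /* INT 21h, and switches back       */
        __dpmi_free_dos_memory(selector);
    }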
I used to work as a technician getting DOS/Windows 3.X and OS/2 2.X systems working. There was a program I used to identify expansion cards and get the ID number off of them. It could detect memory size as well, but that doesn't work in modern systems anymore.
Had a VP with a Wang PC that used Micro Channel; he needed NetWare, AS/400, and the Wal-Mart client going at the same time without cutting into his 640K of DOS memory.
There was a trick to it. First, using the card IDs, I found the option floppies for his expansion cards on CompuServe (as the Internet/WWW was in its infancy and our employer paid for CompuServe to research things), and I rearranged the expansion card memory to leave an unused 64K area that DOS could load drivers into, freeing up the main 640K. At the time NetWare and the AS/400 client used DOS drivers, while OS/2 had OS/2 drivers that could access the memory beyond it. This was before Windows 95 (it was in beta test at the time).
I upgraded the VP to MS-DOS 6.22, used loadhigh and memmaker and other things, and managed to get all three networks working with most of the 640K area free. No other tech had done things like that before.
I studied programming in college, and this gconf tool I wrote helped me out a lot. I wrote it in college and then modified it, renaming it whichnet, so that it could detect Arcnet or Starnet network cards and use the correct NetWare driver for DOS on a boot floppy disk - either by using the expansion card ID to detect the right card, or by reading ROM memory and looking for patterns or a signature. It worked 99% of the time; in the 1% where it failed I never learned why, but it worked after a reboot or a shutdown and power-up.
I wrote this HexStrip, or HStrip, to extract ASCII data out of WordPerfect or other files that got corrupted and could no longer be read. It is also good for scanning EXE and COM files in DOS, without running them, to see if there is a virus that stores ASCII text when it infects. I used it in TradeWars 2002 on the trader.dat to pull out all of his advice for playing the game. https://sourceforge.net/projects/hstrip/
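The core of that is basically a strings-style scan; a minimal sketch of the idea (not the actual HStrip code) looks like:

    #include <stdio.h>
    #include <ctype.h>

    #define MIN_RUN 4   /* only report printable runs at least this long */

    int main(int argc, char **argv)
    {
        FILE *f;
        char buf[256];
        int c, len = 0;

        if (argc < 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }
        if ((f = fopen(argv[1], "rb")) == NULL) { perror(argv[1]); return 1; }

        while ((c = fgetc(f)) != EOF) {
            if (isprint(c) && len < (int)sizeof buf - 1) {
                buf[len++] = (char)c;
            } else {
                if (len >= MIN_RUN) { buf[len] = '\0'; puts(buf); }
                len = 0;
                if (isprint(c)) buf[len++] = (char)c;   /* long run: keep going */
            }
        }
        if (len >= MIN_RUN) { buf[len] = '\0'; puts(buf); }
        fclose(f);
        return 0;
    }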
I had bought QEMM, which helped as well, until the Windows 9x and Windows NT OS models put them out of business and Symantec bought out Quarterdeck and other companies.
Also, I think B000:0000 to B700:0000 was the monochrome video memory, which could also be used to load things into if you never used it. I used to write out DOS memory maps until DOS came with MSD.EXE, which could show used and unused memory. The 8086/8088 PC had 640K of RAM and 384K of reserved address space, some of which could be used by EMS for paging, to load things into and free up the 640K.
Good times.