Let's compile like it's 1992 (2014) (fabiensanglard.net)
301 points by danso on Feb 28, 2017 | 80 comments



Let's play the game "Wolfenstein miscompilation or Dwarf Fortress screenshot?": http://fabiensanglard.net/Compile_Like_Its_1992/images/run_w...


I just see an ASCII interpretation of the TV channels my mom told me to stay away from.

Hey, I think I see a pair of @'s


I spent a while thinking 'Isn't it obvious that it's not Dwarf Fortress?' before realizing that I am way too into that game to be the intended audience for the joke ;)


Def not dorffort


Way too many vermin }, would be detrimental to the fort.


Grab a copy of Borland's IDE [0] while you're at it. Experience what was normal and good all those years ago, and marvel at how little distance we've come.

[0] https://winworldpc.com/product/borland-c-builder/6x


Borland's IDE has a huge nostalgia factor for me, it was the first IDE that made it so I didn't have to think about linking and the rest of the toolchain, I could just hit 'Build'.

But while most of our tooling for software engineering is pretty archaic, I think there is a tooling revolution going on now, especially with new languages: autoformatters, gradual typing, memory ownership checkers, etc. I feel like as an industry we are slowly starting to take tooling seriously.


Wow, that Borland IDE UI certainly brings back memories.

Turbo Pascal 6.0 (for DOS) was my first real exposure to programming (not counting the various BASICs and a short stint learning 6502 assembly) and led to my teenage self writing a bunch of "IGMs" for Legend of the Red Dragon [0], a BBS game. Unfortunately, just as I started teaching myself C, I scored a copy of Visual Basic 3.0 and it was all downhill from there.

It's really cool to see this game I played 20+ years ago and think about how far technology has come. I can't imagine what the next 20 years will bring.

[0]: https://en.wikipedia.org/wiki/Legend_of_the_Red_Dragon


The language/IDE of choice at college was Turbo Pascal. But then they introduced Turbo C and I just had to use it for my graphics class (because C was obviously going to be faster than that Pascal stuff, right?)

Wrong. The Pascal compiler had had years of optimization done to it and their C compiler was a 1.0 product. My programs ran at half the speed of everyone else's.


Visual Basic paid my bills for quite some time. Funny I had to get smacked on the head a couple times (with Dataflex, VB, Python/Zope) to realise what @paulg articulates so well in "Beating the Averages".


No mention of how much time it took to compile? I don't know if my memory is faulty, but I remember Turbo C++ being much, much faster than today's C++ compilers.


That is an understatement. This - http://i.imgur.com/3ApRyuQ.png - is a full build of my 3D engine (C, not C++, but the linked article is also about a C codebase) on my current computer (4770K i7) under Borland C++ 5.0. Partial builds (modify a file and run) are instant, which is basically why I'm using it for a lot of my C code (the code also compiles with other compilers, like OpenWatcom, GCC, Clang, Visual Studio, Digital Mars C and Pelles C, but I mainly test with GCC, OW and VS; the rest I only test occasionally).

The compiler is part of the IDE, not some external process that needs to start from a blank state for each file and read the same files over and over (which is what every other "IDE" does these days; similarly, the debugger is usually just gdb running in the background, and some IDEs do not even bother to perform the builds themselves, instead using cmake or whatever - honestly it is as if people forgot what the "I" stands for). It keeps compiled objects and libraries in memory and even compiles the source directly from the open windows' text buffers instead of having to save the file and load it from disk (it does write the object and executable files to disk, it just doesn't do the unnecessary roundtrip for compiling the source).


> The compiler is part of the IDE, not some external process that needs to start from a blank state for each file, needing to read the same files over and over...

I suspect the difference is negligible in practice. In both cases, the files are likely to be cached in memory after they're read the first time, so you're not really reading them over and over.


Probably, but it can be a convenience to make some small modifications to try out things without saving them.

From a performance standpoint the win comes from not having the compiler start from a blank state for each file, but from keeping the compiled objects in memory and only recompiling the changed files. I suppose a modern reimplementation of that idea (with more memory to spare - after all, the official BC++5 requirements were 16MB of RAM) would be able to take a more fine-grained approach.
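To make the idea concrete, here is a toy sketch in C - hypothetical names, nothing like Borland's actual implementation - of a build driver that keeps object images in memory and recompiles only the units whose source changed:

    /* Toy sketch: an in-memory object cache for an "IDE-resident" compiler. */
    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <sys/stat.h>

    #define MAX_UNITS 64

    struct unit {
        const char *source;    /* source file path                  */
        void       *object;    /* compiled object image kept in RAM */
        time_t      built_at;  /* source mtime when it was compiled */
    };

    static struct unit cache[MAX_UNITS];
    static int n_units;

    /* stand-in for the real compiler front end */
    static void *compile(const char *source) {
        printf("compiling %s\n", source);
        return (void *)source;  /* pretend this is an object image */
    }

    /* return an up-to-date object, recompiling only if the source changed */
    static void *get_object(const char *source) {
        struct stat st;
        if (stat(source, &st) != 0)
            return NULL;

        for (int i = 0; i < n_units; i++) {
            if (strcmp(cache[i].source, source) == 0) {
                if (st.st_mtime <= cache[i].built_at)
                    return cache[i].object;           /* up to date: no work */
                cache[i].object   = compile(source);  /* stale: rebuild this unit only */
                cache[i].built_at = st.st_mtime;
                return cache[i].object;
            }
        }
        cache[n_units].source   = source;             /* first build of this unit */
        cache[n_units].object   = compile(source);
        cache[n_units].built_at = st.st_mtime;
        return cache[n_units++].object;
    }

    int main(void) {
        get_object("wl_main.c");  /* compiled */
        get_object("wl_main.c");  /* cached - nothing to do */
        return 0;
    }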


Well, a more fine-grained approach would be a compiler server with a fine-grained API.

https://gcc.gnu.org/wiki/IncrementalCompiler

https://www.reddit.com/r/cpp/comments/59n8ya/what_happened_t...

I don't know how much effort is poured into these projects. There are some clang based servers like ycmd and rtags, but these are used for linting, refactoring and search (so no incremental compilation).


TBH I was thinking more along the lines of having the text editor associate source code lines with C functions and declarations, so that the IDE can recompile only the bits changed while working on the code (something like a more advanced edit-and-continue).

But yeah, I do not see much effort going into improving these areas. I think people are just used to "patchwork IDEs" and find them good enough.


> Probably, but it can be a convenience to make some small modifications to try out things without saving them.

Yes, I remember it being very pleasant when Borland C++ crashed after running your application, losing your unsaved work.

Edit: that was in the Windows 95/98 days, so it is more likely that the application crashed the whole OS.


It sucked when that happened, which is how I developed a habit of saving my files even when I'm doing nothing, even pressing the save shortcut key several times :-P.

But at the same time it can be a convenience if you don't want to save but instead make a small change to try something out.


You're forgetting about dependency management. The win from an in-RAM cache on incremental builds is real.


> The compiler is part of the IDE, not some external process that needs to start from a blank state for each file

I'm pretty sure Borland used to separate out the IDE executables from the compilers and a couple of other tools too. I don't have a copy to hand to prove this, but I'm sure I used to occasionally invoke Turbo Pascal's compiler from the command line outside of the IDE (due to it being a separate .exe / .com) and I vaguely recall Turbo C++ having a similar design.

I also don't recall build times being that much faster then than they are now. But maybe that's more a symptom of my compiling on budget hardware previously, whereas I can now afford better-specced dev machines (compared to the market average).


>> I'm pretty sure Borland used to separate out the IDE executables from the compilers

Turbo C was like that, but not the first few versions of Turbo Pascal.

The whole goal with Turbo Pascal was to have everything in one small program so you could code/compile/test as fast as possible. It used a one-pass compiler and didn't have a heavy linker. It was fast even on an 8088. Anders Hejlsberg was the original author of Turbo Pascal (yes, the same guy from MFC, J++, C#, TypeScript...)

The original TURBO.COM file was very small. This was great because you could fit the whole thing on one floppy disk including your own code. No swapping floppies. Plus it was only $49.95 USD!

Pascal compiled way faster than C because there was less to do. No #includes to chew through. But even Turbo C was a fast compiler back then - a hundred thousand lines per minute, according to the ads. Imagine how slow I found DJGPP and other compilers when I finally moved to 32-bit programming.


Remember, Wirth designed Pascal as a teaching language. One-pass compilation, no forward declarations, built-in I/O. Turbo Pascal could compile to a .com file, 64KB max, 16-bit pointers. I'm not sure there was a linker; the first executable instruction was the first byte in the file.

The neat trick was debugging. Instead of tagging the object code with source-code line numbers, to break on line N, Turbo Pascal simply recompiled the source up to line N and used the size of the output to match the instruction pointer in the debugged image. Move to the next line? Compile one more line, and stop at the last produced instruction.

But these were tiny programs, written ab initio. No readline, no X, no network, no database. Hardly any filesystem. To do something akin to readdir(3) meant writing a bespoke function to call the DOS interrupt. Putting a menu on the screen required positioning the cursor in the video buffer and putting each character in successive locations, allowing for the attribute byte.

If Turbo Pascal was simple, it was also primitive. Much bigger C programs compile in the blink of an eye today. Complex programs take a long time to build today, yes. They did then, too.


> the same guy from MFC

Maybe you meant Windows Forms? AFAIK MFC was made years before Anders left Borland to join Microsoft.


The command line compiler was a separate compiler for when you wanted to use some external method for building (like a batch file) or your program was too big to compile with the IDE in memory (remember this was real mode with a 640K RAM limit, and often less than that was available).

But you could take TURBO.EXE (IDE+compiler+debugger) and TURBO.TPL (the library), put it on a floppy and work from there. Back when I was a kid, my process to start a new "project" was to take a blank floppy and copy those files (and a couple of units I was sharing) since I didn't have a hard disk. I still have a ton of floppies littered with turbo.exe/tpl pairs.

Turbo C/C++ also needed only a single executable, tc.exe/bc.exe (depending on the version), plus the include and lib directories.

This is the same with the Borland C++ 5.0 I am talking about above, although that one also needs a bunch of DLLs. Since I didn't want to break my installation, I only renamed bcc32.exe and bcc32i.exe to something else, ran the IDE and built my engine. As I expected, it worked. The fact that you can make modifications and have them compiled without saving the file is also an indicator.


You are correct. You could fire off a build from inside the IDE or from the command line itself. I remember it quite well.


That was a separate compiler. It would be as if someone took libtcc and made an IDE use it directly but also bundled the tcc binary for command-line builds.


This is interesting. How long does GCC take? Have you tried TCC?


GCC 6.2.0 under MSYS2 takes 6.302s for a full debug build and 10.499s for a full optimized build. With make -j8 this is down to 1.997s for a full debug build and 6.942s for a full optimized build.

I haven't tried TCC with this. I think I tried it in the past but it was missing some libs.


tcc is blazingly fast, at least the last time I used it. It's honestly fast enough that you could probably use it as a C interpreter.


I do use it as an interpreter on embedded boards! Preprocess/trim the headers you need, add #!/bin/tcc -run at the top of your .c file, chmod +x it, and it'll run just fine!
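Something like this minimal sketch (adjust the shebang path if your tcc lives somewhere else):

    #!/bin/tcc -run
    /* hello.c - plain C, but executable like a script thanks to tcc's -run mode */
    #include <stdio.h>

    int main(void) {
        printf("hello from a C \"script\"\n");
        return 0;
    }

    /* then: chmod +x hello.c && ./hello.c */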

I love tcc; in fact I added a firmware instruction translator to 'JIT' AVR code to simavr a few weeks ago. It takes an AVR binary, translates it to C, and compiles it on the fly with libtcc to run it :-)

https://github.com/buserror/simavr/tree/jit-wip
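For anyone who hasn't used libtcc, the on-the-fly compilation part looks roughly like this minimal sketch (not the simavr code - generated_src stands in for whatever C the translator emits, and note that tcc_relocate's signature has changed slightly across tcc versions):

    /* build with: cc jit_demo.c -ltcc */
    #include <stdio.h>
    #include <libtcc.h>

    static const char *generated_src =
        "int add(int a, int b) { return a + b; }\n";

    int main(void) {
        TCCState *s = tcc_new();
        if (!s) return 1;

        tcc_set_output_type(s, TCC_OUTPUT_MEMORY);       /* JIT into memory */
        if (tcc_compile_string(s, generated_src) < 0) return 1;
        if (tcc_relocate(s, TCC_RELOCATE_AUTO) < 0) return 1;

        int (*add)(int, int) =
            (int (*)(int, int)) tcc_get_symbol(s, "add");  /* grab the entry point */
        if (!add) return 1;

        printf("2 + 3 = %d\n", add(2, 3));
        tcc_delete(s);
        return 0;
    }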


>I love tcc, in fact I added a firmware instruction translator to 'JIT' AVR code to simavr a few weeks ago. Takes a AVR binary, translates it to C, and compiles it on the fly with libtcc to run it :-)

That is unholy, and glorious.


Ahah, thanks for that -- I thought it was pretty clever, but it's hard to explain why to someone :-)

If you look closer, you can see I've actually repurposed the main interpreter core, and used GNU awk (of all things) to extract the 'meat' of each opcode's emulation and convert it to a string; that string is then used by the translator to generate the C for tcc...


Any technical reasons in particular? I'm curious.

How many passes is it doing? I suspect they aren't doing much optimization then? Maybe they patch in differences in the ASTs at the IR level and work from there?


My understanding is TCC does little to no optimization; it's intended to be used for bootstrapping.


In 1992 it probably wouldn't have had templates, nor much of anything else (it wasn't standardized until 1998). Modern C++ is barely the same language anymore.


Borland C++ 3.1 (mentioned in the article) does have templates, although it doesn't have nested templates. It also didn't have namespaces (I think those were added in 5.0, which was still released before the first C++ standard).

Keep in mind that C++98 mainly standardized what compilers were already doing.


The source is also plain C. Id/Carmack didn't switch to C++ until Doom 3.


I don't know about C++, but Turbo Pascal compiled so fast that it felt like an interpreter back in the day.


Pretty close. Back in high school, I was the 'Pascal guy' and one of my friends was the 'C guy.' One day, the friend, after looking through the large list of compiler and linker optimizations available in Borland C++ (and confirming they weren't in my Turbo Pascal) said we should have a Hello World file size competition--see who could get the smallest filesize with our respective languages (compilers really). He spent half a class period digging through the options, reading what would be best, then typed his Hello World program, and announced he was ready. Literally while his code was compiling (these were 386 machines IIRC which took some seconds back in the day), I typed the one-liner and built my code with Turbo Pascal...before his compilation was done.

I had no idea if my executable would be bigger or smaller than his, but I didn't want to put effort into the contest until I knew I was behind. As it turns out, it was about half the size of the C executable and he never asked for a round two. My friend was very frustrated that day.

Fun day.

EDIT: I did some research. They were actually at best 286 machines. The computer lab had those IBM PS/2 all-in-ones with MCGA graphics.


Used both. TP was a lot faster but TC was no slouch.

A few years ago I managed to copy an old DOS diskette onto a DOS VM and compiled and ran a space invaders game I wrote. TC said it compiled, but it just kept returning me to the IDE when it executed! I eventually tweaked the VM speed down by about 100, at which point I saw the very fast invaders move to the bottom in about 2 seconds and kill my defender!


> TP was a lot faster

Not surprising. As far as I know, Pascal doesn't have anything remotely as hostile to efficient compilation as the C preprocessor (oh look, changing that one constant in a header file rendered your entire project out-of-date because the compiler can't prove that it doesn't make arbitrary memory layout and AST changes in every file that indirectly #includes it).
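A contrived illustration in C (hypothetical file names), since the preprocessor just pastes the header's text into every translation unit:

    /* config.h */
    #define TILE_SIZE 64

    /* wall_draw.c */
    #include "config.h"
    int wall_width_px(int tiles) { return tiles * TILE_SIZE; }

    /* map_load.c */
    #include "config.h"
    int map_width_px(int tiles)  { return tiles * TILE_SIZE; }

Change TILE_SIZE and make has to recompile both .c files - plus every other file that pulls in config.h directly or through some chain of headers - because all the build system knows is that a header they textually include was touched.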


Absolutely. Basically any Wirth-designed language is built so that each unit can be compiled in a single forward pass. Those of his languages that have proper module support, and most non-Wirth extensions to languages like Pascal, tend to use very well-constrained formats to cleanly delineate the exported type information and function signatures, precisely to retain that property.

Wirth-style compilers also often don't even build an AST. Wirth's own compilers called functions in the code generator module directly from the parser, which, again, was something he could easily do because the languages were designed for single-pass compilation.
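A toy sketch of that style (in C, not Wirth's actual code): a recursive-descent parser for expressions like 1+2*(3+4) that calls emit() as it parses, producing stack-machine code directly with no AST in between:

    #include <stdio.h>
    #include <ctype.h>

    static const char *p;  /* cursor into the source text */

    static void emit(const char *op, int arg) {
        if (arg >= 0) printf("%s %d\n", op, arg);
        else          printf("%s\n", op);
    }

    static void expr(void);

    static void factor(void) {
        if (*p == '(') { p++; expr(); p++; /* skip ')' */ }
        else {
            int n = 0;
            while (isdigit((unsigned char)*p)) n = n * 10 + (*p++ - '0');
            emit("PUSH", n);                 /* code comes out as we parse */
        }
    }

    static void term(void) {
        factor();
        while (*p == '*') { p++; factor(); emit("MUL", -1); }
    }

    static void expr(void) {
        term();
        while (*p == '+') { p++; term(); emit("ADD", -1); }
    }

    int main(void) {
        p = "1+2*(3+4)";
        expr();                              /* prints stack-machine code */
        return 0;
    }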


FWIW this is the same with Turbo Pascal (and modern incarnations of it, like Delphi and Free Pascal). If you modify a constant in a unit, the compiler will recompile all other units that use that unit. The only way to avoid that is to keep everything in memory and do background updates to the AST, at which point you might as well do background compilations anyway.


Not really; C++ build times are usually measured in hours for anything worth using - Cocos2d-x, for example.

You wouldn't get such compilation windows with Pascal derived languages using their module systems.

The proof being that with VC++ 2017, using the incremental linker and experimental C++ modules, one also gets more human-friendly compilation times.

https://blogs.msdn.microsoft.com/vcblog/2016/10/05/faster-c-...

The Microsoft blog is only about the incremental linker; modules would speed it up even more.


I was referring to the given example. I actually use Free Pascal and Lazarus for my own tools, and a big reason is how fast both the IDE (Lazarus) and the compiler (Free Pascal) are compared to anything C++ (well, anything except the Borland C++ 5 I mentioned above and C++ Builder, but that isn't exactly fair considering the age of those programs :-P).

But even in Turbo/Borland/Free Pascal modifying a unit means that you have to recompile the other units that need that unit.


Ah, ok.

Actually, I find that cascading of build dependencies quite productive.

Sometimes I wonder whether, if Pascal variants had gotten more love from gamedevs (TP was my Unity), they would still always be coming up with tricks to speed up their build times or force reloading of C++ code.


I think Pascal didn't get much love from gamedevs for more or less the same reason it didn't get much love from anyone else - there was only a single company with a popular Pascal compiler (Borland), and it self-imploded by chasing after enterprise markets while ignoring the masses of developers that made it popular in the first place. For the entirety of the 90s, people who wanted to write Pascal on a mainstream platform (i.e. DOS and Windows) had very limited options.

Official SDKs and OS APIs being written in C didn't help either, although that was a minor issue. But it still created friction.

Of course with Free Pascal that isn't the case anymore - FPC is the compiler with the second-highest number of supported platforms and architectures (after GCC) - but the stigma and public perception of the language still prevail (for example, many things that people laud D and Rust for are things that Free Pascal has done for years).


Pretty much my opinion as well.


Pascal, Delphi and Ada are a lot better than C and C++ in that regard, though, by virtue of actually having some sort of sensible module system.


Also TP would usually link in-memory without creating .obj files (and it had real modules/units which made dependency analysis simple).


Yeah, this was another reason why it was so fast - disk I/O was very slow (especially if, like me, you had no hard disk).


Haha - I remember creating many games in Turbo Pascal etc. which used internal loops rather than the PC clock to time things like movement and speed.

I experienced the same thing when PCs came out with the 'Turbo' button. I recall having to turn off Turbo many times to make games semi-playable again on new PCs.
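A sketch of the difference (in C rather than Pascal, with made-up numbers): a delay expressed as a raw loop count scales with CPU speed, while a delay measured against the clock does not - which is exactly what the Turbo button was compensating for:

    #include <stdio.h>
    #include <time.h>

    /* "tuned" on one machine: feels right there, finishes almost
       instantly on anything faster - hence unplayable games */
    static void delay_by_loop(long iterations) {
        for (volatile long i = 0; i < iterations; i++)
            ;  /* just burn cycles */
    }

    /* speed-independent: busy-wait until enough time has elapsed
       according to the clock instead of an instruction count */
    static void delay_by_clock(double seconds) {
        clock_t start = clock();
        while ((double)(clock() - start) / CLOCKS_PER_SEC < seconds)
            ;
    }

    int main(void) {
        delay_by_loop(1000000L);  /* duration depends on the CPU */
        delay_by_clock(0.05);     /* ~50 ms on any machine */
        printf("tick\n");
        return 0;
    }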


Many fast compilers of that period left partially compiled files on disk so that generating the final target could be faster. Also, the analysis they performed for optimisation was much simpler than what we do these days (as processors of the time didn't have much to optimise for).


I have no data to support it but I suspect the level of optimisation in modern compilers adds a lot of overhead.


There was something about the visceral nature of the TC environment, editor and toolset that made you feel very mentally close to the code you were writing. Perhaps it was the bare simplicity. I can't help but feel I've been chasing it ever since in Emacs.


Recently I had to do something I always avoided since it was platform-specific and a huge unknown space to me - code something in C#.

It was one of the most pleasurable experiences with "programming" since those Pascal times, and gave that feeling of being close to the code. I attribute it to fast compiles, no need for context switching (due to superb autocompletion), and everything just working well inside the tool. Documentation was also very good, and the huge API you're exposed to gives a feeling of power (like I felt as a teen with computers, that I could do so much).

(My recent experience was Python, C (embedded too), Golang, and JS in the browser.)


> Documentation was also very good, and the huge API you're exposed to gives a feeling of power (like I felt as a teen with computers, that I could do so much).

That's the reason why C#, and the language it was copied from (Java), are still popular when performance doesn't have to be 100% perfect. Easy to use, very well documented, powerful, and reasonably fast.


Edit and continue is the best thing about C#; it's so useful to be able to set a breakpoint, modify code in debug mode, move back a few lines, and try again.


I tried to chase it in Emacs for 10 years, back in the '90s and 2000s, until I came to the conclusion that Mac and Windows IDEs are still closer to the old Borland experience than anything else.

So nowadays I only use Emacs for Clojure or when I happen to access a UNIX server box.


Related to all the Pascal discussion in this thread, Carmack actually started working in Pascal before C:

https://twitter.com/id_aa_carmack/status/376210391346315264


That brings back memories - I started my programming career in 1992 and Borland was one of the compilers we used back then, Zortech C++ being the other one.


I was super excited to play with this tonight but sadly it appears the Borland C++ URL is giving 404s.

EDIT: Found Turbo C++ 3.0 and TASM and these work just fine!


Any links to the correct downloads?



I installed tc and built wolf3d from source following the instructions. The whole process is surprisingly smooth.


This was an exciting time because you could tune a radio to the clock frequency to detect crashes!


Ah this was very interesting

But sadly Wolf3D is real mode (maybe 16-bit?) and I got accustomed to the "luxuries" of 32-bit development on DOS under DJGPP


Screw that, let's compile like it's 1981, when I started. Actually, no thank you.


Hah. Borland 3.1. Don't remember TP 3.5?


I remember 3.0, 5.5, 6.0, 7.0 for MS-DOS and 1.5 for Windows, those are the ones I used, before switching full time to C++ as my next love in programming languages. :)


> mkdir a

I'm still smiling.


If you have a few hours (or days) free, go through the rest of Fabien's source code reviews and articles. It's fantastic stuff. I really hope he does finish up the book he's writing on the Wolfenstein 3D engine.


Needs a (2014) tag.


Please add (2014) to the title.


Did the way compiling was done in 1992 change since 2014?


Probably not but HN has a habit of putting the year in the title of older content. Makes sense to keep it up, no?


With April Fools' coming up, keep in mind constants may change over time.


Thanks, we've updated the title.


I did this with the original xdoom source when I was a teenager. It all basically works w/ some minor changes. The biggest hurdle is setting up X w/ some goofy color depth.


curl? 1992? ;).



