THINK C was the direct successor to Lightspeed C, and I came from there, and before that from Turbo/TML Pascal on the IIgs.
Whatever people may say, I still miss those "one pass" compilers; they were amazing, and they peaked with CodeWarrior, the best development suite ever, in my nearly 40 years of experience.
Nowadays we see autoshit "configure" stuff and compilers like gcc (somewhat) or clang (oh my frigging GOD!!) trawl their way slowly and painfully through the simplest projects, without even support for plain basic stuff like automatic precompiled headers.
Wow, look, we've NEARLY got Link Time Optimization working (it took decades), in 2020, whoohoo, I'm so delighted. I could compile hundreds of thousands of lines of light C++ or (better) plain C 25 years ago, on a much, MUCH slower machine, with a simple editor that used the compiler's lexer output so you had highlighting, and real indexing that was 'just there', always right, and always blindingly fast.
I'm pretty sure we are way worse off than we were 20 years ago for tooling. I'm sure some people will disagree; those people haven't seen CodeWarrior chew through hundreds and hundreds of files in seconds.
I think what happened is that the developers that figuratively jumped up and down screaming about compile times all shifted towards interpreted languages.
JS has been able to do roughly what C did in the 1990s, in terms of raw performance, for some time now. The folks that need typechecking can pick up TypeScript or Haxe or whatever else is attractive.
It's the bottom of the stack that really suffers - all the folks that want to work on stuff touching I/O and low level resources directly - and that is hardest to justify investing in. Unix and its baggage remain "untouchable", as these things go, and there are only some hints of promising developments otherwise.
I laughed, but really, the V8 JavaScript engine has come leaps and bounds in terms of performance. I recall a benchmark that found JS about 25% slower than C++, but 20% faster than Java. (citation needed)
That really depends on what was benchmarked, so much so that dropping random benchmark results is meaningless.
A benchmark like this [1] (showing C++ < JS < Java) is absolutely useless, as the person writing it has no idea what they are doing, e.g. using Vector in Java.
Looking at most benchmarks, Java beats JS more often than not, but they are often neck and neck.[2]
C++ always beats both, often by a huge margin (when written well).[3]
this deliberately ignores the new features we've gotten in the meantime.
gcc is slow when you turn on all the optimizations. when computers were run by SysOps and every program needed to be recompiled from scratch for each slightly-different machine, it wasn't worth making the compiler ten times slower to make the program five percent faster. today, when browsers are compiled once and then run by millions of users for billions of hours a day, the tradeoff is different.
sure, autoconf is bad, particularly today when it's used completely wrong, but is it really worse than when programs had to be completely manually ported between different Unix versions, in a day when "package manager" had yet to be a twinkle in anybody's eye?
is everything great now? no, of course not. but you're grasping at the completely wrong straws here.
It's still slow at -O0, and it's slow because it doesn't approach the problem of compiling a complete program the right way, like those old compilers did. The fact that everything is a file, the fact that every single header needs to be /searched for/, then /compiled/, millions of times, and optimized later on, without context; that is the problem.
I wrote applications that were shipped on millions of computers in the 90s, and I didn't have to make hard choices for building; the debug versions were quicker to build, sure, but the release versions weren't a chore to make, and I was often building PPC/68k/debug/release in one go.
Also, this is a ridiculous defence of autoshit stuff. It hasn't been needed for well over 20 years; its only purpose is self-propagation, as in "oh, I need autoconf blah because otherwise I can't build everywhere", when there is so much standardisation these days that there are MOSTLY TWO choices for *nix systems.
Not only that, but it fails all the time, on embedded for example; it doesn't 'save' you and give you portability, it just gives you the false assurance that you are doing 'the right thing' by using it, while it's broken in many other ways.
More often than not, you can replace all that garbage with a half-page Makefile. Who the hell needs to check whether the compiler /works/ or strdup /exists/, and all that idiocy? Or add dependencies for stuff when 'pkg-config' exists anyway? Who the hell actually /needs/ 'libtool' when there are about (perhaps) 3 ways of making a shared library on a unix system? And who the hell wants, or has the time, to go and debug some stupid arcane 'm4' file to fix the weird problems that come up?
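For illustration, the kind of half-page Makefile meant here might look something like this; a sketch assuming GNU make on a POSIX-ish system, with a hypothetical project layout and a libpng dependency resolved through pkg-config (recipe lines start with a tab):

    # Hypothetical half-page Makefile standing in for a configure script.
    CC      ?= cc
    CFLAGS  ?= -O2 -Wall
    CFLAGS  += $(shell pkg-config --cflags libpng)
    LDLIBS  += $(shell pkg-config --libs libpng)

    SRCS := $(wildcard src/*.c)
    OBJS := $(SRCS:.c=.o)

    myapp: $(OBJS)
    	$(CC) $(CFLAGS) -o $@ $(OBJS) $(LDLIBS)

    %.o: %.c
    	$(CC) $(CFLAGS) -c -o $@ $<

    clean:
    	rm -f myapp $(OBJS)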
Disclaimer: I build embedded distros for fun and profit, and I deal with that stuff /all the time/. Most of my 'compile time' for distros is not even spent actually 'compiling' stuff, it's spent in the 'configure' stage, and 90% of my time fixing portability problems isn't spent in the code, it's spent in the autoshit stuff that somehow breaks in some new, interesting way.
yes! Even in the 90s you could take care of the common cases with a few different flags. I had C apps that built on various Linux distros, FreeBSD, Digital Unix, SunOS 4.x, and Solaris with hardly any changes other than a couple of ifdefs. Sure, if you compiled your program on a Sequent running DYNIX, circa 1988, you might hit a few snags.
I build embedded software for profit only, and I can say with certainty that 98% of all computers out there are not desktops and do not use desktop operating systems. For those systems, the autotools, modern optimizing compilers, and modern tooling like distributed ccache are a godsend. Of course, back in the 1990s similar software was fast to build, because your cross-assembler didn't have to worry about on-chip caches or superscalar architectures, but ye gods, the code PTSD flashbacks begin.
People aren't willing to pay for compilers any more, hence you get gcc, which isn't built for the convenience of the user, but for standards compliance and performance of its output.
I have mixed feelings on this; it did feel like there was a broader choice of compilers back then, but on the other hand, not being able to afford them in my youth was frustrating.
I was just thinking about how I spent hundreds of dollars for Zortech C++ and Borland C++ back in the 90s and was shocked when IBM included a C/C++ compiler for FREE with OS/2 and it even included a visual UI builder tool.
I refer to the overall development experience too; they aren't any easier to use (Qt Creator and C++ Builder use Clang, which is basically as easy as GCC), nor any faster, and AFAIK the VC++ compiler's messages aren't even as good as those in GCC and Clang.
Note that I refer to the compilers specifically, but even the IDEs you mentioned aren't that great. Visual C++ has gone mostly downhill since VC6, becoming slower and even removing features (e.g. the last time I tried it I couldn't use a bitmap font) or obfuscating them (installing a library "system-wide" is trivial in older versions of VS, but after 2008 or 2010, I think, that feature was removed). C++ Builder's UI/IDE also went downhill after they got rid of the Delphi 7 interface and tried to become Eclipse, and it has also become too unstable and slow. It does have a few niceties (I like that when you save your project it automatically updates the header file includes at the top), but nothing that makes it worth the negatives (though TBH I haven't spent much time with it, because I refuse to rely on anything with DRM, so I might be missing some gem buried under that bloat). Qt Creator is neat, but I could never get used to its interface (also, it is free, so I'm not sure why you included it in a list of paid products).
Also, IMO, all of the above do not hold a candle to the older Borland C++ Builder when it comes to development experience. I do not think there is any C++ development environment that comes close, nor do I think it is possible to make one by stitching together a frankenstein of a product out of otherwise independent bits and pieces that are oblivious to each other (in other words, anyone who thinks that something "based on" Clang or GCC or whatever, stitched together with LLDB, GDB or whatever, with some GUI framework thrown in, usually Qt, would fit the bill is totally missing the point).
I included Qt Creator because for the full experience you need the commercial Qt offerings as well.
Windows now has a C++ package manager (vcpkg), so you can install libraries as you want.
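For what it's worth, installing a library that way is a short session; a sketch where the package name is arbitrary, but both commands are standard vcpkg usage:

    vcpkg install zlib          # fetch and build the library
    vcpkg integrate install     # make installed libraries visible to MSVC projects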
Actually Clang provides the necessary IDE hooks that GCC doesn't.
I also consider the UI designers (UWP C++/CX is not that bad), graphical debuggers, pre-compiled headers database, incremental compiler, incremental linker, PGO integration, and being able to drive them without memorizing a ton of switches, part of the whole experience.
I feel like if Terry Davis could have maintained a bit more of a grip on reality, he would have picked up the combined torch of Wozniak and Jobs.
Rest in peace.
Edit: not that TempleOS isn't beautiful, and I'm glad it exists and believe he achieved what he set out to do. I was just selfishly musing that if he hadn't lost his mind the way he did, maybe his ideas would have been adopted more in the mainstream. I'm also glad TempleOS exists in its current incarnation, so it's conflicting.
This is an amazing post for a bunch of reasons (as soon as I finished reading it a few days ago, I immediately sent it to half a dozen people), but one that should immediately jump out for you is that the picture of the Mac Plus is dithered using Bill Atkinson's dithering algorithm. Atkinson's done a bunch of really awesome things, but most notably, he played probably the biggest role in the genesis of the Macintosh.
Very few posts on the WWW have the same attention to detail and wonderful playfulness as this one. It's an incredibly rare thing, and insanely pleasant to read.
The mystical style mixed with computing subjects reminded me very much of aphyr's excellent "technical interview" series, which I very much recommend if you enjoyed the OP:
And, for anyone who's spent that hour and still wants more, the AMC show Halt and Catch Fire on Netflix is, while not truly historically accurate, very much in the spirit of the early '80s tech scene.
Relatedly, if you're reading the page on a high-DPI screen and wondering why the pictures are so dark and blurry, try opening your browser's JS console and running this:
I used to work on this product (and THINK Pascal, THINK Reference, and other cool stuff). Fun times and a lot of great developers.
When compiling, there was a modal dialog that would show the name of each file being compiled... they went by fast even on early Macs. Michael Kahl sped it up even more by figuring out that QuickDraw's DrawText routine was still slow enough to be impacting compile time. So a custom blitter was made to replace it, and a fast compiler got even faster.
I ran into that with a backup product showing the current file being backed up. I wound up just posting the current file name to a buffer protected by a mutex, and whatever file name happened to be in the buffer at render time was what was drawn.
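A minimal sketch of that pattern, assuming POSIX threads (every name here is illustrative, not from the actual product):

    /* The backup thread posts the current file name into a mutex-protected
     * buffer; the UI reads whatever name happens to be there at render time. */
    #include <pthread.h>
    #include <string.h>

    #define NAME_MAX_LEN 256

    static char current_file[NAME_MAX_LEN];
    static pthread_mutex_t current_file_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Called by the backup thread for every file; cheap, never waits on the UI. */
    void progress_set_file(const char *name)
    {
        pthread_mutex_lock(&current_file_lock);
        strncpy(current_file, name, NAME_MAX_LEN - 1);
        current_file[NAME_MAX_LEN - 1] = '\0';
        pthread_mutex_unlock(&current_file_lock);
    }

    /* Called by the UI at its own refresh rate; copies out a snapshot to draw. */
    void progress_get_file(char *out, size_t out_len)
    {
        pthread_mutex_lock(&current_file_lock);
        strncpy(out, current_file, out_len - 1);
        out[out_len - 1] = '\0';
        pthread_mutex_unlock(&current_file_lock);
    }

The point is that the render loop and the backup loop never block each other for longer than a string copy.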
I don't know if it's the fonts, the icons, the controls, the monochromaticity, the blocky resolution, the tinkerability, or even just stupid nostalgia, but there's just something so darned cute about classic Mac OS.
macOS today is attractive and useful and colorful, but it feels so serious.
I am the newbie who only recently found some interest in machines from the 80s.
I'm a web programmer and I cannot wrap my head around how those machines rendered their graphics. If I draw a 512 x 342 canvas on a webpage and loop pixel by pixel to draw an image from an array, the fans of my powerful computer start to scream. I'm not a graphics programmer, so maybe I'm doing it wrong, but how the hell does an 8 MHz computer with 128KB of RAM do it?
These machines had extremely low overhead for everything you did. If the framebuffer is just a piece of memory, and the screen is monochromatic, then a 512x342 display consists of 175104 one-bit pixels, which is just about 21KiB of memory (which is probably even distinct from main memory). A single pixel update could presumably be done with a single instruction. If we assume maybe 4x looping overhead, then updating the entire screen by individually poking each pixel could be done in about 700k instructions. In practice, there will likely be special hardware and instructions for copying entire blocks of pixels onto the screen in a much faster way ("bit blitting", although I don't know exactly how the Mac did it).
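To make that arithmetic concrete, here's a rough sketch of poking pixels into a 1-bit framebuffer of the Mac's dimensions (the buffer and helper names are illustrative; real drawing on the Mac went through QuickDraw in ROM, not code like this):

    #include <stdint.h>
    #include <string.h>

    #define SCREEN_WIDTH  512
    #define SCREEN_HEIGHT 342
    #define ROW_BYTES     (SCREEN_WIDTH / 8)   /* 64 bytes per scan line */

    /* 512 * 342 / 8 = 21888 bytes -- the ~21 KiB figure above. */
    static uint8_t framebuffer[SCREEN_HEIGHT * ROW_BYTES];

    /* Set one pixel to black: an index computation, a shift, an OR, a store.
     * On the Mac, the high bit of each byte is the leftmost pixel and 1 = black. */
    static inline void set_pixel(int x, int y)
    {
        framebuffer[y * ROW_BYTES + (x >> 3)] |= (uint8_t)(0x80 >> (x & 7));
    }

    /* Clearing the whole screen touches only ~21 KiB of memory. */
    static void clear_screen(void)
    {
        memset(framebuffer, 0x00, sizeof framebuffer);
    }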
Modern machines have huge overheads because of the indirection, virtualisation, and protection mechanisms we desire. On a webpage, you obviously don't have direct access to the framebuffer, but are drawing into some in-memory canvas. Drawing a single pixel in Javascript is going to go through many function calls, with probably orders of magnitude more instructions than you would have on an old Mac. There are still ways of getting (near) direct graphics access that is screaming fast, since people do write games and play videos and such, but there is much more ceremony to it now.
Little special hardware on those old Macs: memory was 16 bits wide, the framebuffer was part of main memory (memory was expensive), no blitting hardware, no hardware cursor (software removed the cursor, rendered to the screen, then copied the data under the cursor and restored it). Screen refresh also refreshed memory, and during horizontal blanking it fetched audio data.
There's no really separate video memory on the original Mac, just the 21KB buffer that's updated by software. When it needs to, the video circuitry grabs the bus, reads the pixels it needs, and outputs them. (There's probably some caching in the controller, but not a lot; a single screen line tops, I'd wager.)
Why do we still need to trawl obscure Google Sites to download a valid classic Mac ROM? Why isn’t Apple just making them available? It’s not like it would cut into their “modern” MacOS sales at this point, and it would make preserving classic Mac software like this so much easier...
I love this. I’ve been using Think C on an old Mac 512k I got a couple years ago, and it’s a completely different world. I’m currently working on writing a gif animation app!
One interesting thing is that the ANSI library is relatively large, so I try not to use it, instead relying on ROM routines or reimplementing bits of stdlib that I need.
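For example, here's the kind of thing that works, sketched from memory against the classic Toolbox (NumToString, MoveTo, and DrawString are real ROM-backed calls; the header names follow the Universal Headers, THINK C's own headers differ slightly, and this sketch is untested):

    #include <Quickdraw.h>
    #include <TextUtils.h>   /* NumToString; older headers put it in Packages.h */

    /* Draw a frame counter without touching printf or the ANSI library. */
    static void show_frame_count(long frames)
    {
        Str255 s;

        NumToString(frames, s);     /* ROM integer-to-string conversion */
        MoveTo(10, 20);             /* QuickDraw pen position */
        DrawString("\pFrames: ");   /* "\p" makes a Pascal-string literal */
        DrawString(s);
    }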
I have an XO-4 (the unobtanium model) and I love using it for distraction-free tinkering. A mac emulator looks especially nice in the display's black-and-white mode when I have the chance to use it in sunlight.
Nothing so exciting I'm afraid; I cruised ebay for a few months and had the good fortune of stumbling upon what appeared to be a decommissioned classroom set from a pilot program in Canada. Nabbed a pair, so I have some spare parts if I need them.
Very nice, except the magnification (control-H-M) goes only one level. If it went one more level (i.e. 4x instead of just double), my 27-inch display might be good for some development. Full screen is just grey everywhere else on mine.
And the alarm sound (for bugs etc.) is a bit harsh as alarm sounds go. Not sure how to change this.
Not being an original Macintosh guy, I'm having a hard time getting Mini vMac to boot directly into System 7. I follow the steps but can't figure out what "Copy the donor system folder into our new boot volume, using the familiar drag-and-drop." means. Any clues?
As recently noted in the build documentation, compiling it yourself is not recommended for most people. (First, the result will be much less efficient than the official binary unless you tweak things for your particular compiler. Second, there is the chance of running into compiler bugs and bugs in Mini vMac that show up only on some compilers - the official binaries are much better tested.)
I haven’t tried it but the bottom portion of the article seems to suggest you need some slight tweaks depending on your host OS, Linux apparently should work fine.
Unfortunately this is something I see with many articles or open source projects. The building instructions are only for Linux (usually apt-get some packages and run configure/make). For Windows? Good luck, at most you'll get some nasty MSYS setup to follow.
One of these is gratis and libre, and the other isn't. If you want to stand on that side with Windows and expect other people to help you out, then what you're asking for is for them to shell out money and a lot of effort for you. Whereas the reverse is not true; in comparison, the picture of a Windows user having to get access to a FOSS toolchain for themselves looks trivial—and with the abundance of ready-made system images that you can spin up without even having to reboot your computer, it is.
I don't expect Windows support for all projects, but I consider having to install the whole Unix toolchain on Windows equivalent to telling a Linux user that Linux is supported, just run it through Wine and it works.
In that case, then boy, your criticism is really out of place. The entire article here is about running a classic Mac OS image so you can run a discontinued compiler.
I use it daily and test and compile my programs on both Windows and Linux.
On the other hand, I also stay away from projects that don't use cross-platform build systems like Ninja or CMake. Shell scripts are sometimes incompatible even between different distributions and shells.
The worst offender (that I know of) by far is GNU Octave.
On Windows they basically ship a snapshot of a Linux dev box (complete with the whole directory tree, compiler suite and everything) with userland binaries compiled for Windows. So much for being "cross-platform"...